Kim's Blog

Thoughts on sound and sensescapes, immersion, virtual worlds, interaction and accessibility. Interdisciplinary writings on a range of issues broadly relating to inclusion and sensory environments. PhD student, web developer, cocktail wizard, board game addict.

Recordings from The Termite Club at Ladyfest Leeds 2007

by kim on February 7, 2014

Dug out some recordings of a noise/experimental night at Ladyfest Leeds in 2007, recorded in Holy Trinity Church.

Acoustic Citizenship: The Night & Day Debacle

by kim on January 18, 2014

Like many others this week, I was amazed to find out that someone moved in next to an iconic Manchester music venue, and then complained about the noise. What were they thinking?

Like many other incidents around noise though, it’s not that simple.

On one hand, we have the owner’s perspective. This is the primary source in the article: in this version of events, someone just moved into a flat above the venue, complained about the noise to the council, and now the venue is due to be shut down. How dare they? Playing music is what this city is famous for.

The resident’s story is somewhat different: yes, they’ve only lived in that flat for a year, but have lived in various flats in the city centre for the last three years, and are used to loud volumes. However, they feel that in this instance, the volume is so loud that their pictures are falling off the wall. They’ve tried to communicate with the owner about a perceived bump in volume, especially the bass, and been met with rudeness and ridicule. In this story, of course they don’t want the club shut, just a sensible volume level. After being rebuked by the owner, they saw their only course of action was to contact the council.

As a result of this, the resident has received death threats, there has been an almost unending barrage of abusive comments on the MEN website and social media, and a petition against them has gathered over 60,000 signatures. So how can there be two such radically different sides to the same incident?

Noise Abatement Legislation

As in most countries in the world, noise abatement legislation is a mess. Objective measurements of loudness are very crude tools that generally do not take into account the content of the sound. For example, an alarm sounding all day at the same volume as a club soundsystem would be unacceptable. Or, for that matter, ‘the tick tick tock of the stately clock’ when you’re trying to get some shuteye. As it happens, alarms specifically have an exemption; but when the source is not such a universally despised sound, there is no real legislative consensus.

There’s an enormous mishmash of legislation around noise. I’m a soundscape researcher — this means I research how people listen to the sounds of the built environment. Think of a soundscape as the aural version of a landscape. Despite this, I have no idea exactly what Night & Day have been served with, or the details of what it says. Part of me is suspicious that there is no scan of the notice, or even a summary of its demands, on the petition site, which frankly makes me doubt the owner’s motives. Maybe it just didn’t occur to them to present primary evidence, but needless to say: legislation is a very clumsy tool.

Aural Citizenship

So why are some people moved to issue threats, and leave a trail of horrible comments on the petition site and MEN articles? Why is the anger directed at the person who complained, and not the council?

As a researcher, my guess would be that people feel a sense of acoustic citizenship — a part of an acoustic community. The soundscape of the Northern Quarter is a familiar, comfortable, escapist leisure environment of which the volume level is an intrinsic part. People have different soundscape preferences — Northern Quarter regulars likely prefer loud environments, while others may prefer quieter ones. Preferences change based on lots of factors including time of day, activity (work or leisure) and mood (people get more annoyed with sounds when they’re also wet or cold). They’re also personal, and people go to great lengths to establish some environments they like.

Returning to the issue at hand, in my research I categorise these nightlife environments as positive, loud environments. They are characterised by the concept of atmosphere. This stems from two main sound sources — people and music. The most direct application of this is the kind of environment experienced at live music events, busy cafés, or parties. The soundscape in these places is loud, but desirably loud — the very factors which would be an annoyance at other times are what the listener directly seeks. One participant in my research put it this way:

“I go to a lot of gigs and they’re very intimate [...] people right next to you, they stink of sweat, and the place stinks of beer, and I’d rather be nowhere else.”

Many people feel noise abatement has a strong whiff of ‘health and safety gone mad’. Quotes like this show the opposite side: people need spaces to let go, to have a good time, and let the sound wash over them. The sounds connote a kind of intimacy, a feeling of catharsis, a sense of place, and are part of the architecture of the Northern Quarter just as much as the graffiti, craft beer or Affleck’s Palace. It is completely understandable that a certain part of town is simply the place where this is done, and that this place is sacred — after the drudgery of work, somewhere to let your hair down.


One of the main factors in my research that triggers noise annoyance is a lack of control. When people feel they have no control over their environment, for any reason, they are much more likely to be annoyed with the soundscape therein. For example, if people have a noisy housemate they like and get on with, and feel they can simply have a conversation about the noise, they are much less likely to find them annoying — even if the actual music volume is the same level.

In this instance, from the resident’s perspective, there has been a clear breakdown in communication with the owner. While ‘pictures falling off the wall’ is extreme, and more in the realm of vibration-related annoyance rather than sound-related, communication breakdown is likely a huge trigger. The sound comes to represent the creator of the sound doing it to spite the person who complained.

This is most common when people feel sounds are inflicted on them, typically from someone’s Walkman headphones bleeding on the train or bus. Instead of people feeling confident asking someone to turn it down, the sound becomes a symbol of just how inconsiderate that one person is. How dare they? Don’t they know there’s a quiet carriage?

Qualities of sounds

At the root of this seems to be a debate over the kinds of sounds that club owners, club visitors and residents feel are necessary to have a good time. The owner feels that turning the volume down will dissuade bands from playing. They also feel upset that, after spending so much on sound insulation, the ‘inconsiderate neighbour’ still doesn’t think it’s enough, or seem to acknowledge the lengths they have gone to.

The noise complaint was also specifically about the bass end of the spectrum. Bass transmits through structures far more effectively than treble: most readers will be used to the experience of only being able to hear the bottom end of a piece of music their noisy neighbours are playing, or the throb of a subwoofer from a passing car soundsystem. This low-quality, second-hand sound can be more intensely annoying than being able to hear the whole thing.

On a personal note, I am an experienced sound engineer, having worked at venues of all shapes and sizes over the course of a decade. I know first-hand that almost all venues have the sound unnecessarily loud. Your ears are amazing, and automatically ‘turn themselves down’ after a while of being exposed to sounds which are too loud. The feeling of everything being weirdly quiet when leaving a club is exactly this — your brain turned down the volume in order to not cause hearing damage.

However, over a period of time, hearing damage does occur. Some estimates say 1 in 6 people in the UK have hearing loss, with this expected to rise. The reasons why are convoluted, but personally I think we have a culture of volume levels being much higher than they need to be — and that most people would simply not notice a significant reduction in soundsystem levels in most places, due to the brain’s automatic volume control system.

One key finding in my research was that most people think the sound is too loud most of the time, especially in bars, so I have some backup here — however, nobody had any interest in stopping the music completely, and picked bars for their music choices nevertheless. Surely there should be a way to reconcile different people’s sound level preferences?

Opening up a soundscape discussion

One of the problems I’ve found when questioning people about sound preferences is that people are not used to talking about sounds, and lack the vocabulary to do so. People don’t really notice sounds until they become annoyances, and often have relationships with soundscapes they’re not aware of. I suspect the overwhelming response to this petition is related to this issue, with signatories becoming aware of something they previously took for granted.

However, it’s clear that we should be able to have an open, public discussion about what we want the places we live in to sound like. In this situation, there has been a clear falling out, but the general question remains — if we wish not to have to resort to calling the council, or using legislative means in general, then how should this be negotiated? How can participants in a multi-cultural, mixed-use city block discuss sound production in a balanced, open, informed way? How can people feel in control of their surroundings, when traditional legislation fails?

These are all open questions, but ones I invite everyone to think about. A petition is also a crude tool: there is no right of response or potential to discuss alternatives or mediation. I think it’s unfair to suggest that people simply put up with other people creating all the sound they want 24/7, but I also think it’s a totally necessary part of any city that prides itself on its music culture that businesses can have a guarantee their livelihood cannot be taken away on what feels like a whim. The law in this country generally says what you cannot do: instead we should be establishing a social contract for what we want our cities to sound like. Only then can we be true participants in a democratic acoustic community.

Website: Embody Move Association

by kim on September 23, 2013

We made Embody Move Association’s last website 5 years ago, so it was great to be able to remake it with all the skills we have learnt since then. Embody Move Association are “practitioner trainers” — that is, they teach people who themselves wish to teach the techniques covered on the courses. They were expanding sizeably at the time of this rework, so we made a website that is much more flexible and allows them to reuse the same content as much as possible.


The main challenges with this site were making the relatively complex system of programs and courses seem simple and easily accessible to both customers and the site administrators. This works as a “prospectus” site if going through the “Our Programs” section, but also can be viewed as an A-Z list, date-based list, etc. As different modules feed into some or all programs, we developed a symbol based navigation to make it clear what everything is part of. While simple on the outside, this was a surprising challenge to fully realise in Drupal while keeping everything easy for the client to use.

It is also fully responsive, working from mobile to desktop using a fluid grid built with the wonderful Susy. We were very happy with the result, although this is the last client website Kim will be making until their PhD is finished! A nice one to finish on for the time being.

  • When: Summer 2013
  • Tech: Drupal 7, Custom subtheme based on Susy, extensive Views and Media module use.
  • Team: Kim Foale (development), Roshana Rubin-Mayhew (design)
  • Site:

Rethinking Sound and the Web: Part 3

by kim on September 23, 2013

This idea, which I’ve blogged about previously, was worked into a bid for a British Library Labs competition. My submission was unsuccessful, but here’s where I’ve got to, in case I pick it up again at a later date. This article is therefore a “how it would be done” proposal.

I’m also aware I’ve not updated this in nearly a year! More to come from my backlog, and a possible move to a new domain.

Question: How can very large collections of audio data be quickly and easily searched and catalogued?


Computers, and especially the web, handle audio very badly. Images and video are the de facto web media formats, while browsing large amounts of audio data is still a chore. This project will develop a new shorthand for audio data, and rethink how we visualise and manage it, in order to bring audio up to speed with where images have been for years.

Audio files should have beautiful, semantically rich thumbnails that convey information about the sound within, communicating meaningful properties of the sounds, in order to allow quick comparison and evaluation of the salient points of a very large number of files. Some of these properties are:

  • Duration
  • Rough frequency content
  • Type (music, audio recording, podcast, etc)
  • Volume/dynamic range

They could be represented abstractly with things like:

  • Shape
  • Colour
  • Opacity
  • Pattern
  • Small icons

An audio metadata format will be created or adapted to allow these thumbnails — a server-side process can generate the metadata, which an HTML5 canvas can then interpret. The metadata can also be used directly to allow sorts and filters on audio data embedded within web pages, file browsers or other GUI elements.
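
To make this concrete, here is a sketch of what one of these metadata records could look like, and how it could map onto the visual properties above. Every field name and threshold is invented for illustration; this is not an existing format.

```javascript
// Hypothetical metadata record for one audio file. All field names
// are invented for illustration, not an existing standard.
const exampleMeta = {
  file: "recording-042.ogg",
  duration: 12.4,           // seconds
  centroid: 180,            // rough spectral centre of gravity, Hz
  type: "field-recording",  // music | speech | field-recording | ...
  rms: -28,                 // average level, dBFS
  dynamicRange: 14          // dB between quiet and loud passages
};

// Map a record onto abstract visual properties: duration -> width on
// a log scale, frequency -> colour, volume -> opacity.
function visualProperties(meta) {
  return {
    width: Math.log10(1 + meta.duration) * 40,
    hue: meta.centroid < 500 ? "red"
       : meta.centroid < 2000 ? "orange"
       : "violet",
    opacity: Math.min(1, Math.max(0.1, (meta.rms + 60) / 60))
  };
}

visualProperties(exampleMeta).hue; // → "red" (180 Hz is bass-heavy)
```

A generator like this would run once server-side per file; the browser only ever sees the small JSON record.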

How can this idea showcase audio content?

Audio data is very hard to effectively parse without directly listening to it, which is very slow for large quantities. Suppose I want to quickly find the short, bass-heavy, low-amplitude recordings on a web page or file browser. Without listening there is no way to do this. However, if amplitude is represented with height, frequency with colour and duration with width, then a thin, red rectangle could indicate the files I’m looking for. I can then compare other files — a more orange rectangle might be a higher pitch, and a large blue square a long, loud, high-pitched file.

Being able to ascertain basic properties in this way will hugely speed up interaction with audio.
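
With metadata in place, that query becomes a simple filter rather than a listening session. A sketch, assuming invented record fields (duration in seconds, a rough spectral centre in Hz, average level in dBFS):

```javascript
// Find short, bass-heavy, quiet recordings in a collection of
// hypothetical metadata records, without listening to anything.
function findShortBassQuiet(records) {
  return records.filter(r =>
    r.duration < 10 &&    // short: under ten seconds
    r.centroid < 300 &&   // bass-heavy: spectral centre below 300 Hz
    r.rms < -30           // quiet: average level below -30 dBFS
  );
}

const library = [
  { file: "a.ogg", duration: 4,  centroid: 120,  rms: -36 },
  { file: "b.ogg", duration: 95, centroid: 150,  rms: -12 },
  { file: "c.ogg", duration: 6,  centroid: 3400, rms: -40 }
];

findShortBassQuiet(library).map(r => r.file); // → ["a.ogg"]
```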

Here are some examples of how I would process British Library content, with some very rough examples.

“Soundscapes” collection

There is already a map of soundscape recordings, and a separate XML database on the British Library website. However, reading the descriptions of each sound or seeing where they are located on the map gives little useful information about the data set as a whole. A useful visualisation here would be to directly superimpose sound glyphs onto the existing map. I’ve done a very rough mockup here, with some examples of how I would go about using the space. Click for a bigger view.

Soundscape thumbnail map

Ideally this should be rendered with a Google Maps/OpenStreetMap style pan and scroll interface, as is currently on the British Library website. This could be made into a complete sub-site, with extra controls for filtering based on volume, frequency content, duration, etc, using the metadata information. Straight away this output gives us rich contextual information about the sound environment, in ways that wouldn’t necessarily be obvious from listening to sounds one at a time.

“Accents” collection

In a similar way, the “accents” collection has a huge number of files but no practical way of discerning or identifying an accent heard in the real world, short of listening to every file. A well designed glyph would show important parts of the audio recording, and might show where vowels or consonants are emphasised more. This would necessitate a specialised application and analysis, likely focusing heavily on the frequency bands for vowels and consonants. Again, superimposed glyphs on the map may show interesting regional or national patterns that would be hard to discover or navigate with a manual listening exercise.

Similar experiments with manual coding show fascinating results, for example Joshua Katz’s “Dialect Survey Maps”. Imagine a map like this with sound recordings layered on top!

Birdsong database

Ascertaining a birdsong in the wild can be very difficult. Some birds are known to imitate others, and the songs can be very similar. This is another example of a specialised application: with birdsong we are only interested in a fairly specific, high pitched section of the frequency range.

An ideal thumbnail here would show the pattern and pitch of each bird, in a much more literal way than the general soundscape recordings. A simple thumbnail list with filters is the best visualisation here — there is no need for map data. After listening to a bird in your garden, you could use the thumbnails to vastly narrow down the number of recordings to compare the birdsong with. With filters for, for instance, geographical region and rarity, a well designed format should point you to relevant files very quickly. Again, to reiterate the project goals — the thumbnails are an aid to focussed listening, not a replacement for listening.

My research

I have about 400 audio recordings from my PhD fieldwork of people’s day-to-day lives, with associated log book entries for what they were and what people thought about them. Without cross-referencing a large spreadsheet, there is no way for me to tell the content of the files without listening to them all. Suppose I want to find audio files that consist of broadband noise, in order to compare responses to them — there is currently no way of doing this without resorting to one of the above solutions. I have metadata for all the sounds the person heard in the recording — if I could select, say, all recordings of traffic and compare the thumbnails to see the range of experience here, it would greatly assist my critical comparison skills.

How would this be implemented?

The basic framework would be:

  1. Research existing audio meta data formats.
  2. Talk to people who access the audio collection to establish user stories about what they need, how, and when.
  3. Work with an artist to develop an intuitive prototype visual shorthand.
  4. See if there is an existing server-side program to generate the metadata — if not, work with a developer to generate one.
  5. Create a proof-of-concept jQuery plugin that implements all this given an input of audio files and their corresponding metadata.
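
Step 4 is the part I know least about, but its core can be sketched. This toy analysis (output field names are my own invention) works on raw samples and uses the zero crossing rate as a crude stand-in for real spectral analysis:

```javascript
// Minimal sketch of the server-side analysis in step 4. Takes raw
// samples (Float32Array, -1..1) and a sample rate; returns duration,
// RMS level, and a rough pitch estimate from the zero crossing rate.
function analyse(samples, sampleRate) {
  let sumSquares = 0, crossings = 0;
  for (let i = 0; i < samples.length; i++) {
    sumSquares += samples[i] * samples[i];
    if (i > 0 && (samples[i] >= 0) !== (samples[i - 1] >= 0)) crossings++;
  }
  return {
    duration: samples.length / sampleRate,                       // seconds
    rms: Math.sqrt(sumSquares / samples.length),                 // 0..1
    roughPitch: (crossings / 2) / (samples.length / sampleRate)  // ~Hz
  };
}

// One second of a 100 Hz sine at 8 kHz, for illustration:
const rate = 8000;
const sine = Float32Array.from({ length: rate },
  (_, i) => Math.sin(2 * Math.PI * 100 * i / rate));
analyse(sine, rate).roughPitch; // ≈ 100
```

A real implementation would use a proper FFT, but even this level of metadata would be enough to drive the sorts and filters described above.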

Potential public engagement — Sounding Shapes

Members of the public will be shown shapes of different sizes and colours, and asked what they think they sound like. They will then be played sounds (on headphones) and asked to draw them. This will be a fun, portable public engagement exercise, online and at other sites as available.

After collecting the data we can publish the results and see what similarities and differences there are between both the ways people draw sounds, and describe the sounds of drawings. This will hopefully be a fun, viral project that can potentially get some traction with the Sound Art community, local news, and blogs on web and interactivity.

Thanks to Amanda for letting me know about the competition, and everyone who gave feedback on my proposal.

Website: Work For Change

by kim on November 18, 2012

Work For Change is a co-op office complex. They had an existing website that wasn’t getting them any Google hits for local keywords (the name of the theatre, the workspace itself), and which sprawled over about 20 pages even though there was very little information.

They wanted something low-budget, clean and functional. Working with them, we established the key user stories and developed a new site around them:

  • “I am a small business or individual looking to find a studio for rent”
  • “I rent an office and wish to report a repair”
  • “I wish to hire the theatre”

These were then simply translated into the three boxes on the front page, each linking to a relevant page for that user group. We condensed all the existing site content under each section, in most cases getting all the information onto a single page. A very simple responsive layout was made, aimed in particular at the large number of smartphone users in the workplace.

Website: Kate’s Cuttings

by kim on November 18, 2012

Kate has published a gardening column for a number of years, and had a large collection of photos from this period. Having a lot of writing and photos gave a lot of scope for making a very immediately visual website.

Kate does one big post per month that gets published in the local newsletter, but wanted the option to do extra, smaller posts during the month. We therefore designed a subtle sort mode that publishes each month in descending order, with the month’s main post first. This means that new visitors always see the latest column first.

The defining feature of this site is the right sidebar. Given the subject matter (gardening), it seemed important to emphasise the passing of the seasons, so we used the first thumbnail from each month’s main entry to create the month-view links on the right. The flow of this is a central design element, and I think a huge improvement on a default “month by month” archive box.

Responsive layouts were then added using Omega breakpoints.

“Agriculture”: Soundscape recordings from 2007

by kim on October 22, 2012

These are some field recordings I made in 2007 as part of my BA(Hons) dissertation. Despite their age I still really like these, especially the storm drain. At the time I was in a band called “Agriculture”, and I still really like the name so I’m sticking with it. I uploaded them all to Bandcamp in high quality – enjoy!

Experiments in JavaScript: arbor.js

by kim on June 22, 2012

I’m the first to admit I’m pretty horrible at elegant coding nowadays. When I was younger I used to have all the code skillz and none of the ideas – now it’s totally the other way around, a direction I’m pretty happy about! In order to try and rectify this I’ve been playing with a few JavaScript libraries recently when it’s come up for various projects.

Screenshot of arbor.js demonstration

Arbor.js is a great little library that lets you make visual thesaurus style springy node diagrams using JSON network descriptions. I worked on a visual intro for a client, and while it wasn’t right for them in the end I had fun playing with the library. A lot of the power of this comes from the renderer.js file, which essentially allows you to write your own functions for outputting the data. In this case, I added a method to allow images to be used. It is, however, fairly badly documented and you’ll have to work out a lot from the examples.

This has in no way been performance tweaked, doesn’t run on IE, and crawls on a tablet. Also you may need to refresh a few times to get the images to load properly. I know, it’s basically perfect, right?

Documentation aside it’s a very elegant, easy to use library and I’d definitely consider it for other projects.

Try it out!

Thanks to the folk at ION.

Sound and the web – part 2

by kim on June 18, 2012

I’ve bounced the ideas in Sounds and the web off a few people now and had some very valuable feedback. The more I talk about it, the more I’m determined to make this a reality. On re-reading my initial article there’s a lot missing – I’ll try and flesh out the idea here, and look to making a prototype sooner rather than later.

Why visualise?

The most fundamental question raised by this is: why visualise? Sounds and images are processed very differently by the brain. Image thumbnails are hugely informative, in a way an audio thumbnail could never be. Secondly, there’s a common problem in any studio environment where people are dealing critically with audio data: an awful tendency for people to try and compare waveforms by eye, to balance the position of mixer controls, or to line up LEDs on outboard gear, rather than using their ears.

Indeed, many of my biggest audio revelations have been working on a mixing desk with no meter bridge. Even doing my HND, the supervisors would periodically get us to turn off all visualisation equipment, and it makes a huge difference. People simply have more confidence in their eyes than their ears, due to the primacy of the visual in our society. So, why make the problem worse?

Types of information

I’d argue that thinking about a visualisation of an audio file isn’t exactly what I’m looking for here, paradoxical as that may seem. Here’s a brief list of common forms and their strengths and weaknesses. If you can think of any others, let me know?

For each type of visualisation — what it tells you, what it’s good for, and its drawbacks:

  • Audio file thumbnail — tells you nothing. Good for picking the right file, if your audio has sensible filenames. Drawback: this is the current default in all major OSs and on the web.
  • Spectrogram — detailed frequency content in relation to time; timbral information. Good for audio file repair and detailed technical analysis. Drawbacks: large display space requirement; needs acoustics training to interpret.
  • Waveform graph — detailed amplitude information in relation to time; volume information. Good for editing multiple audio files and navigating known files. Drawbacks: large space requirement; no frequency information.
  • Graphic score — rich semantic information. Good for representing music, and as a compositional medium. Drawbacks: must be manually drawn; space requirements; only suitable for certain recordings.

A different use case

These formats all have their own distinct use cases, but none of them are really what I’m after. I’m not looking for detailed information here – this isn’t a way to browse a single audio file. The information in a spectrogram or waveform graph is overkill. It’s also not particularly relevant – I’m not looking for a literal graph of the content of an audio file, I’m looking for a semantic representation. I’m also looking to deal with a large number of files with relative ease, quickly finding something appropriate, which I may then analyse further using one of the other formats.

This is key – it doesn’t invalidate any of these other visualisations! What I’m arguing for is something that is above all simple, but contains the most relevant data for comparing one audio file with another. I’m trying to find another level of visualisation – one that’s more zoomed out, abstract, large-scale. With regard to the argument that visualisation is a poor simulacrum for listening, I actually think this will be a superior solution in many cases. Whereas a spectrogram or waveform graph encourages a direct feedback between the height of a peak and its expected volume, or alerts you to the presence of a certain partial, an abstract, semantic representation instead gives you the information you need to decide whether you want to listen to a file, but doesn’t tell you anything more than that.

Finally, there’s no reason why this can’t also be augmented with manual text description – but I stress again, the idea of this system is to be automated, and to allow easy comparison of a large number of audio files. Requiring too much user input will make it far less likely to be adopted.

The problem of execution!

I’m paradoxically looking for something non-technical, while trying to invent a new visual shorthand at the same time. In a roundabout way, I’m also looking at developing an automated system of metadata generation for audio files, from which visualisations can draw themselves. This is quite a big technical stack already! I’m investigating Raphael.js to output drawings, and I think ideally audio metadata would be stored in JSON or some similar lightweight format. The actual audio file processing is a step I don’t have a clue about. However, I still think my basic categories are the key data we need to represent in some way. Here’s a potential scale, which I’d love to get some more input on:

Each parameter, and its visual output:

  • Duration — a logarithmic-scale bar at the bottom of the thumbnail.
  • Main frequency — colour: red for bass, violet for treble.
  • Timbral content — shape: jaggier is more varied, smoother is more sinusoidal.
  • Type — a small glyph: perhaps a treble clef, mouth or tree (a guess).
  • Volume — opacity: quieter sounds are paler, loud ones more solid.

There’s also potential to overlay more than one shape – perhaps treble, mid and bass can be done on separate, overlaid shapes, with size for volume instead?
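
As a sketch, this scale could be implemented as a pure mapping from (hypothetical) metadata fields to drawing attributes, which Raphael.js or a canvas renderer could then consume. The ranges and thresholds here are guesses:

```javascript
// Map invented metadata fields to drawing attributes. Hue runs from
// red (0) for bass towards violet (~280) for treble on a log scale;
// opacity tracks volume; duration becomes a logarithmic bar width.
function thumbnailAttrs(meta) {
  const clamped = Math.min(Math.max(meta.frequency, 20), 20000);
  const logF = Math.log10(clamped);
  return {
    barWidth: Math.log10(1 + meta.duration) * 30,        // px
    hue: Math.round((logF - Math.log10(20)) / 3 * 280),  // 20 Hz..20 kHz -> 0..280
    jaggedness: meta.timbralVariance,                    // 0 smooth .. 1 jagged
    glyph: { music: "♪", speech: "mouth", nature: "tree" }[meta.type] || "?",
    opacity: Math.min(1, meta.volume)                    // quiet pale, loud solid
  };
}

thumbnailAttrs({ duration: 60, frequency: 20, timbralVariance: 0.2,
                 type: "music", volume: 0.5 }).hue; // → 0 (deep bass is pure red)
```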

Responsive visualisations

Taking this to its logical conclusion, sound files should respond to their context. Given 500×500 pixels, they could switch to a spectrogram, for instance. Scaled down to 10×10 px, they should probably revert to a simple measure of type. Mouse-over states could show some kind of animation effect. Contextual selectors could allow foregrounding of different elements. Better metadata could allow for sorts by duration, amount of bass, or richer timbre. The possibilities are varied, and almost entirely unexplored. I’m starting to think about pursuing some funding to explore this properly – so any and all potential collaborators please get in touch! I’d like the next post to be some proposed outputs, so any artists, designers or JavaScript developers would be most welcome.
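
The context switching could be as simple as picking a representation from the available pixel budget. The names and thresholds in this sketch are pure guesses:

```javascript
// Choose a representation for an audio thumbnail based on how many
// pixels it has been given. Thresholds are illustrative guesses.
function representationFor(widthPx, heightPx) {
  const area = widthPx * heightPx;
  if (area <= 400) return "type-glyph";          // ~20x20 or less: just a type icon
  if (area < 40000) return "abstract-thumbnail"; // the semantic shape/colour form
  return "spectrogram";                          // 200x200+: room for real detail
}

representationFor(10, 10);   // → "type-glyph"
representationFor(500, 500); // → "spectrogram"
```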

Many thanks to Amanda, Angela and Martin for their feedback on my initial post.

Research Participants Required

by kim on May 23, 2012

Interested in sensory experience and want to help me with my fieldwork? I need 7 more people to take part in my research!

What do you have to do?

I am a PhD student at the University of Salford, doing a study into how people experience sensory environments. You will be required to keep a short audio diary for two weeks (will take about 5 minutes a day) and then have a follow-up interview of up to 60 minutes. I’ll give you a full briefing and detailed instructions when we first meet (takes about 15 minutes).

On completion you will receive a £20 Amazon voucher. You can also ask me any in-depth questions at the end – I’m intentionally keeping this fairly vague.

You must be a postgraduate student at any Manchester university; apart from that all ages, genders and ethnicities welcome. I can travel to wherever is convenient for you, for all stages of the research.

Get in touch at, or leave a comment below and I’ll get back to you!