Digital Audio – seeing into the future of software development.
As we all know only too well, the relationship between musicians, engineers and producers and their technology is crucial. The progress of those who develop the music often spurs on those who develop the software, and vice versa.
Technology has benefited those involved in making and selling content in a variety of ways. It’s led to a reduction in the cost of recording and editing sound (the often referred to ‘democratization of the production process’). It’s also facilitated the development of new genres and techniques previously only dreamt of by content creators. Lastly, in recent years, the effect digital audio and software has had on the consumer experience has become very clear in terms of distribution, the place and time of listening, and the fragmentation of fans and genres.
Queen Mary University’s Centre for Digital Music is at the cutting edge of software development involving digital music and audio. Set up by Mark Sandler, the centre is focused on creating radical software solutions which will, without doubt, find their way into our lives over the next few years.
When asked what the Centre for Digital Music is about, Mark is pretty clear. It’s ‘Research where technology, music and audio meet!’
Mark is not new to this area of research and development. Indeed his background is a rich history of study, design and progress in the areas of music, audio and in some cases vision.
He studied electronics at Essex University, specialising in Audio Engineering in his final year, followed by a PhD looking into and designing Digital Power Amps (his work on Digital Power Amps has won him a Fellowship from the Audio Engineering Society).
From here he went to King’s College and continued research into a range of areas, especially where vision and sound came together. He was mainly involved in trying to analyze images and sound with the idea of drawing metadata from them, which would allow the computer to build up a parallel picture describing, and thereby categorising, elements of the content.
He has been involved in a number of projects, including a startup which developed software that could scale the level of compression used on an audio stream as the speed of the Internet fluctuated in real time. That never took off (though perhaps now is the time for someone to develop a similar technology, what with the talk of cloud-based technology and the growth of streaming), but it again drew upon Mark’s focus on the ability to analyse audio and from there prioritise the most important ‘bits’, allowing the codec to lose parts which were not integral to the listening experience.
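The core of that idea is easy to sketch. Assuming a fixed ladder of available encodings, a client could re-select the bitrate for each chunk of the stream as the measured connection speed changes; everything below (the bitrate list, the headroom factor) is illustrative, not the actual startup’s product:

```python
# Sketch of adaptive-bitrate selection: pick the highest encoding the
# measured connection can sustain, leaving headroom for fluctuation.
# Bitrates and thresholds here are made up for illustration.

BITRATES_KBPS = [64, 96, 128, 192, 256, 320]  # available encodings

def choose_bitrate(measured_kbps: float, headroom: float = 0.8) -> int:
    """Return the highest bitrate that fits within the usable bandwidth."""
    usable = measured_kbps * headroom  # keep a margin for jitter
    fitting = [b for b in BITRATES_KBPS if b <= usable]
    return fitting[-1] if fitting else BITRATES_KBPS[0]

# As the connection fluctuates, each chunk of the stream re-selects:
for speed in (400, 150, 50):
    print(speed, "kbps link ->", choose_bitrate(speed), "kbps audio")
```

In practice a codec doing this would also, as Mark describes, decide *which* bits to drop inside each frame, not just how many; this sketch shows only the rate-selection step.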
Mark joined Queen Mary University in 2001 as Professor of Signal Processing and in 2003 the Centre for Digital Music was formally established, with £5 million in grant funding. It now operates with a number of IT suites, a bespoke Studio for testing applications and has over 30 staff and research placements.
Mark explained what he felt would be the next big change. “By the late 90s or early 00s I believed we had done everything. I felt that if there was to be real change then the Internet was going to make the difference. I didn’t fully know what I meant at the time, but when it came to searching for music and accessing the long tail (the independent and semi-pro music world), I realized that that would make music different. Looking at parameters based in the content of the music would allow consumers and creators alike to look at audio and music in a new light.”
One example of how this might affect consumers is a simple but powerful query. Imagine using a search engine and asking: ‘I like REM; find me something that sounds like them, for under £5, and/or who are playing within thirty miles of where I live, and send me the tickets’.
Now, this would require human data entry, such as bands informing the net where they were performing and for how much. However, it would also require the type of software the Centre for Digital Music is developing: software that can ‘listen’ to REM and determine what gives them ‘their’ sound. From there, the computer would search the Internet for similar-sounding audio, link that to its creator (via the human-inputted metadata) and then to where they might be performing.
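One way to picture the ‘listening’ step: suppose the software has already reduced each track to a numeric feature vector (timbre, tempo, spectral shape and so on). ‘Find me something that sounds like REM’ then becomes a nearest-neighbour search over those vectors. This is a hand-rolled sketch with made-up numbers, not the centre’s actual method:

```python
import math

def cosine_similarity(a, b):
    """How alike two feature vectors are, ignoring overall magnitude."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# A toy 'library' of analysed tracks; the numbers are invented.
library = {
    "band_a": [0.9, 0.1, 0.4],
    "band_b": [0.2, 0.8, 0.7],
}
query = [0.85, 0.15, 0.5]  # feature vector of the reference track

best = max(library, key=lambda name: cosine_similarity(query, library[name]))
print("Closest match:", best)
```

Real music-similarity systems use far richer features than three numbers, but the shape of the search is the same: compare the query’s features against every candidate and rank.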
Consumers could use the same software to highlight a chorus they like and then ask the software to find similar-sounding choruses in other songs. As a consumer you can see how this could help grow one’s playlist, but as a producer I can also see where this technology could be used. For example, I might have a drum loop I like but, to be honest, overuse! I could ask the software to search an online or offline selection of loops for something similar in rhythm and speed but with different sounds (electronic rather than live, for example).
The nirvana that Mark is targeting via his centre is one where we get “technology to help us filter music to allow us to find human elements in music and not music based on technology.” Mark’s vision of what software can do for us, the producers and consumers, is very exciting.
Sitting down with his staff, I was shown a range of working applications. For example, Andrew Nesbit demonstrated his application that can take a stereo file and turn it into a number of separate audio tracks, with each of the instruments/sounds isolated. It wasn’t 100% perfect, but it’s 90% of the way there! Ideal for remixers everywhere, or for mastering engineers who need to isolate a certain area of sound to remove unwanted frequencies.
I was lucky enough to experience a plug-in (already in Audio Units and VST form) which adds a delay in real time, tracking the tempo as dictated by the performer rather than the sequencer. Perfect for a live setting, using software-based plug-ins with tempo changes as directed by the performer. Its creator, Adam Stark, also told me about a colleague, Matthew Davies, who has worked on Rhythm Morphing software which can take two completely different tracks, analyze them both and then make one match the other in terms of bar, tempo and time signature. It works amazingly well and takes only a couple of mouse clicks!
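The tempo side of matching one track to another comes down to a simple ratio: estimate both tempi, then time-stretch the source by target over source. (The actual Rhythm Morphing software also aligns bars and time signatures; this sketch, with illustrative numbers, shows only the tempo arithmetic.)

```python
# Tempo-matching arithmetic only; bar and time-signature alignment omitted.

def playback_rate(source_bpm: float, target_bpm: float) -> float:
    """Rate to play the source at so it matches the target tempo.
    A rate above 1.0 speeds the source up."""
    return target_bpm / source_bpm

def stretched_duration(duration_s: float, source_bpm: float,
                       target_bpm: float) -> float:
    """New length of the source audio after stretching to the target tempo."""
    return duration_s / playback_rate(source_bpm, target_bpm)

# Matching a 120 BPM loop to a 126 BPM track speeds it up by 5%,
# so a 10-second loop becomes a little over 9.5 seconds long.
rate = playback_rate(120, 126)
new_length = stretched_duration(10.0, 120, 126)
```

A naive rate change would also shift pitch; tools like this use time-stretching algorithms that alter duration while preserving pitch, which is where the real signal-processing work lives.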
One member of staff, Rebecca Stewart, has software which allows you to listen to four (or more) tracks at a time by placing them in a 3D space. They all play at the same time, but you can tilt a device such as an iPhone or Wii remote to move between them and isolate one sound while they all play. One application for this is simply choosing which track to play on your iPod without crudely flicking through them. Yet it could also be used by DJs to beatmatch and mix between tracks.
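Placing tracks in a space comes down to giving each one position-dependent gains. A common building block for this (a hedged sketch of the general technique, not necessarily what Rebecca’s software uses) is equal-power panning, which keeps perceived loudness constant as a source moves across the field:

```python
import math

def equal_power_pan(position: float):
    """Left/right gains for a pan position in [-1.0 (hard left), 1.0 (hard right)].
    Equal-power panning keeps total power (L^2 + R^2) constant at 1."""
    angle = (position + 1.0) * math.pi / 4.0  # map [-1, 1] onto [0, pi/2]
    return math.cos(angle), math.sin(angle)

# A source dead centre sits at ~0.707 in each channel; hard left is (1, 0).
left, right = equal_power_pan(0.0)
```

Full 3D placement (as in a binaural or surround renderer) adds elevation and distance cues on top, but the same principle applies: each track’s position is turned into a set of gains before the channels are summed.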
The future of music and software is very exciting. If we really can start to teach a computer what makes up a piece of music (something we do without thinking when listening ourselves) and to identify instruments, notes, ‘styles’ and so on, then the ability to take that information and use it to create new sounds, music or genres will be open to us in a way people have only dreamed of. We will also find that the line between the consumer and the creator continues to blur, as listening to a piece of music and doing something with it to create anew start to become one act.
Wen Xue – Research Assistant, 4 years.
Sound and Music Analyzer.
Xue’s software looks for pitched events in an audio file. He demonstrated this with a piece of classical piano music: he was able to isolate individual notes and then remove them (ideal for mastering, as it leaves everything else in the same sample time intact). Beyond mastering, he could see the software being used as an information retrieval tool, as it allows a performance to be analyzed in great detail. He is currently working on the software’s ability to separate closely spaced notes (those a semitone or less apart).
György Fazekas – PhD student, 2 years.
Intelligent Audio Editing
György is developing an application that can be used by content creators to embed metadata into an audio file or an accompanying script. You can input simple data such as performers, producers, the studio recorded in and more. This is key to protecting rights. However, the software is also being developed to take a snapshot of a sequencer’s setup, including instruments, devices and plug-ins, along with their settings.
The idea is that it would be able to talk to many types of sequencer (or, better, that the script would become an industry standard adopted by all companies) and that, when loaded into the song, the sequencer would recreate the session using comparable plug-ins and settings (where available). Ideal for recreating mixes in different studios, or for producers saying ‘I want that sound on a particular album’.
Lastly, it has the ability to look at a song and use colour coding to immediately identify its sections, allowing for fast editing and comparisons between verses or choruses when editing audio.
Enrique Perez Gonzalez – PhD student, 3 years (Masters in Music Technology, University of York).
Enrique wanted to develop an application that can help musicians who cannot mix. It’s ideal for live environments, from bars to stadiums! So far he has developed a simple eight-channel mixer with basic tools: panner, EQ and volume. The user tells the software which instruments are most important (by the order in which the tracks are loaded) and, after a short period of analyzing the music (we are talking seconds!), it can adjust panning and volume.
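As an illustration of one step such an automatic mixer might take (not Enrique’s actual algorithm), the sketch below measures each channel’s RMS level and computes the gain that would bring it to a common target; panning, EQ and the importance ordering are left out:

```python
import math

def rms(samples):
    """Root-mean-square level of a block of samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def gain_to_target(samples, target_rms=0.25):
    """Linear gain that moves this channel's RMS to the target level."""
    level = rms(samples)
    return target_rms / level if level > 0 else 1.0

# Two toy channels: the quiet one gets boosted, the loud one pulled down,
# so neither buries the other in the mix.
quiet = [0.05, -0.05, 0.05, -0.05]
loud = [0.5, -0.5, 0.5, -0.5]
gains = [gain_to_target(ch) for ch in (quiet, loud)]
```

A real auto-mixer would work on perceptual loudness rather than raw RMS and would weight channels by importance, but level balancing of this kind is the foundation the other decisions sit on.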
Additionally, he has developed other applications, such as an automatic feedback destroyer, but one that is pre-emptive rather than reactive. Having seen the demo, I’m pretty confident this will be a lifesaver!
B-Keeper is a software program that modifies the tempo of a sequencer so as to stay in time with a drummer. That means if the drummer speeds up or slows down, the sequencer changes tempo with them.
Whether in the studio or playing live as a band, any pre-recorded virtual instruments would adapt to the human interpretation of tempo, rather than what currently happens: the human has to control their natural urges to slow down or speed up by keeping to an artificial click track.
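A toy version of the tempo-following idea described above (not the actual software’s algorithm): infer the tempo from the drummer’s inter-beat intervals and smooth the estimate, so the sequencer drifts with the player instead of lurching on every beat. The timestamps are invented for illustration:

```python
def follow_tempo(beat_times, smoothing=0.5, initial_bpm=120.0):
    """Exponentially smoothed BPM estimate from a list of beat timestamps.
    Higher smoothing means the sequencer reacts more slowly."""
    bpm = initial_bpm
    for prev, cur in zip(beat_times, beat_times[1:]):
        instant_bpm = 60.0 / (cur - prev)  # tempo implied by one interval
        bpm = smoothing * bpm + (1 - smoothing) * instant_bpm
    return bpm

# A drummer starting at 120 BPM (0.5 s beats) and gradually speeding up:
beats = [0.0, 0.5, 1.0, 1.48, 1.94, 2.39]
print(round(follow_tempo(beats), 1), "BPM")
```

The hard part in a real system is detecting the drum hits reliably in the first place and deciding which hits count as beats; the smoothing shown here is what stops one rushed snare from dragging the whole session with it.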
UK band Hook and the Twin (formerly two members of DB-signed Psychid) are currently working with Andrew to incorporate B-Keeper and Ableton Live into their complex two-man live set-up.
Check it out at: myspace.com/hookandthetwin