MEET ’N’ GREET IRIS
A “revolutionary new audio experience that dramatically increases sound quality by introducing the space that is normally missing from recorded audio, unlocking the ‘live’ dimension that’s often lost.”
That’s the pitch for IRIS, a new proprietary sound processing algorithm which claims to transform your music listening by “splitting out and increasing the phase information sent to the brain. The listener’s brain then reassembles this vast increase in information and becomes far more active in the listening process.”
This ‘Active Listening’ is a buzzphrase used throughout the new company’s literature. IRIS will market a pair of headphones (shown above) to showcase the technology, to be launched through crowdfunding site Indiegogo. But the application is intended for much wider use. IRIS can integrate with third-party hardware or applications, and operates on any audio format. It can be built into speakers, TVs and other home audio devices, or can enhance audio systems in sectors from education to PAs for intimate hot spots or supersized arenas. There’s even an
‘IRIS Wellness’ area of the website, featuring Ibiza sound healer Jeremie Quidu, “an expert in the practice of inner peace”.
So clearly, in addition to selling the headphones, IRIS aims to license its technology. But what does it really do? We downloaded the IRIS Listen app, which allows you to choose up to 50 tracks a month on your device and toggle between IRIS and non-IRIS delivery. You can get more tracks by sharing the app, or unlimited use by buying the headphones.
How did it sound? There’s certainly a difference. IRIS seems a notch louder, for a start, which hi-fi demonstrators will tell you is an old trick for making an A-B comparison sound better in one direction. And while the IRIS version did seem to enliven music, it also played havoc with stereo imaging, in particular moving images from side channels towards the centre. Questions, questions! So we contacted IRIS for an interview with the Chief Technology Officer, Rob Reng.
SOUND+IMAGE: Could you describe further the ‘space’, the ‘live dimension’ of which you speak? You say this is removed by MP3 or even CD-quality sampling — so is this a loss in high frequency content or (more common when discussing spatial cues) a smearing of time which causes the loss?
ROB RENG: This is really identifying a fundamental flaw in the majority of recorded sound. In a live environment, instrument
players are separated by space. Waves of sound come from each of these players directly at the listener and, at the same time, bounce off of every reflective surface in the environment and enter their ears from many different angles from every direction. Each of these waves hits the ear at slightly different times depending upon the distance they’ve travelled. These timing differences are the relative phase of the sound waves. Each point in the room has a slightly different set of phase relationships, and therefore a slightly different representation of the sound.
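The timing-to-phase relationship Reng describes can be put in numbers: for a single frequency, the extra distance a reflection travels becomes a delay, and that delay becomes a phase offset of 2π·f·Δt. A minimal sketch of that arithmetic (the speed of sound and the path lengths here are our own illustrative assumptions, not figures from IRIS):

```python
import math

SPEED_OF_SOUND = 343.0  # metres per second, at roughly 20 degrees C

def relative_phase(path_a_m, path_b_m, freq_hz):
    """Phase difference (radians) between two copies of the same
    sine wave arriving via paths of different lengths."""
    delay_s = (path_b_m - path_a_m) / SPEED_OF_SOUND
    # One full period of the wave corresponds to 2*pi radians.
    return (2 * math.pi * freq_hz * delay_s) % (2 * math.pi)

# A 1kHz tone (wavelength ~0.343m): a direct path of 3m and a
# reflection travelling an extra half-wavelength arrive half a
# cycle apart, i.e. roughly pi radians out of phase.
print(relative_phase(3.0, 3.1715, 1000.0))
```

Every listening position in a room has a different set of these offsets, which is the “slightly different representation of the sound” the answer refers to.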
However, when this live performance is recorded, even the best technology only captures the phase information present at the recording devices, often just one or two microphones. Much of what was in the original environment is lost.
In addition to this, most audio compression algorithms such as MP3 process the audio signal through a Discrete Cosine Transform, which effectively disregards an entire phase dimension.
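The contrast Reng is pointing at is that a Fourier transform represents each frequency as a complex number, whose angle carries phase, while a DCT represents each frequency as a single real number (though a complete DCT is still invertible, so the information is redistributed rather than simply deleted). A small pure-Python sketch of that difference, using naive single-bin transforms of our own devising rather than any codec’s actual filterbank:

```python
import cmath
import math

def dft_bin(signal, k):
    """k-th bin of the discrete Fourier transform: a complex number
    whose angle is the phase of that frequency component."""
    N = len(signal)
    return sum(x * cmath.exp(-2j * math.pi * k * n / N)
               for n, x in enumerate(signal))

def dct2_bin(signal, k):
    """k-th coefficient of the DCT-II (the transform family used in
    perceptual codecs): a single real number, with no phase angle."""
    N = len(signal)
    return sum(x * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
               for n, x in enumerate(signal))

# A cosine, and the same cosine shifted by a quarter cycle (a sine):
N = 64
cos_wave = [math.cos(2 * math.pi * 4 * n / N) for n in range(N)]
sin_wave = [math.sin(2 * math.pi * 4 * n / N) for n in range(N)]

# The DFT bin keeps the quarter-cycle shift as a phase angle
# (about 0 for the cosine, about -pi/2 for the sine)...
print(cmath.phase(dft_bin(cos_wave, 4)), cmath.phase(dft_bin(sin_wave, 4)))
# ...while each DCT coefficient is just one real value per frequency.
print(dct2_bin(cos_wave, 4), dct2_bin(sin_wave, 4))
```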
S+I: How do you put it back if this information is lost? Is this not like trying to guess the colours in a black-and-white photograph?
RR: We resynthesize the phase information that was lost during audio compression. We are able to accurately perform this through the transforms that take place in our algorithm.
S+I: If I play a left-right channel test through the app, engaging IRIS shifts both left and right channel announcements from left and right to the centre. There are similar effects on many tracks where panned instruments are shifted in their locations. This is clearly destructive of spatial cues rather than enhancing them! Please discuss.
RR: If you go to a live concert or to a room with musicians, you do not hear instruments in one ear. That is an illusion of space that has become entrenched in recorded music, but that doesn’t actually reflect reality. We want it to feel like you’re in the room with musicians, and are providing an immersive listening experience that is closer to reality.
In addition, our technology is a neural stimulation. In order for your brain to resolve the difference in phase relationships, the harmonic content must be sent to both ears. This allows us to nudge the brain into paying more attention to the source material, and at the same time invoke the feeling that the listener is at one with the sound. Ultimately we want to bring back the enjoyment of just listening and all the emotion that this can bring, enabling listeners to be swept away with the music.
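One simple mechanism that produces exactly the effect observed in our channel test — hard-panned material pulling toward the centre once content is “sent to both ears” — is crossfeed, where each channel is partially mixed into the other. This sketch is purely illustrative of that general effect and makes no claim to be IRIS’s actual algorithm:

```python
def crossfeed(left, right, amount=0.5):
    """Blend each channel into the other. amount=0 leaves the stereo
    image untouched; amount=1 collapses it to mono. (An illustrative
    stand-in, NOT the IRIS algorithm.)"""
    mixed_l = [(l + amount * r) / (1 + amount) for l, r in zip(left, right)]
    mixed_r = [(r + amount * l) / (1 + amount) for l, r in zip(left, right)]
    return mixed_l, mixed_r

# A source hard-panned left: silence in the right channel.
left = [1.0, 0.5, -0.5, -1.0]
right = [0.0, 0.0, 0.0, 0.0]
out_l, out_r = crossfeed(left, right, amount=0.5)
# After crossfeed the right channel is no longer silent, so the
# image moves from hard left toward the centre.
print(out_l, out_r)
```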
Hard to argue with the motivation there, but as with most sound processing, we err on the side of caution. Still, the app’s free, so there’s nothing to lose by giving it a try. More information at https://irislistenwell.com