Sound+Image

MEET ’N’ GREET IRIS

“A revolutionary new audio experience that dramatically increases sound quality by introducing the space that is normally missing from recorded audio, unlocking the ‘live’ dimension that’s often lost.”

That’s the pitch for IRIS, a new proprietary sound processing algorithm which claims to transform your music listening by “splitting out and increasing the phase information sent to the brain. The listener’s brain then reassembles this vast increase in information and becomes far more active in the listening process.”

This ‘Active Listening’ is a buzzphrase used throughout the new company’s literature. IRIS will market a pair of headphones (shown above) to showcase the technology, these to be launched through the crowdfunding site Indiegogo. But the application is intended for much wider use. IRIS can integrate with third-party hardware or applications, and operates on any audio format. It can be built into speakers, TVs and other home audio devices, or can enhance audio systems in sectors from education to PAs for intimate hot spots or supersized arenas. There’s even an ‘IRIS Wellness’ area of the website featuring Ibiza sound healer Jeremie Quidu, “an expert in the practice of inner peace”.

So clearly, in addition to selling the headphones, IRIS aims to license its technology. But what does it really do? We downloaded the IRIS Listen app, which allows you to choose up to 50 tracks a month on your device and toggle between IRIS and non-IRIS delivery. You can get more tracks by sharing the app, or unlimited use by buying the headphones.

How did it sound? There’s certainly a difference. IRIS seems a notch louder, for a start, which hi-fi demonstrators will tell you is an old trick for making an A-B comparison sound better in one direction. And while the IRIS version did seem to enliven music, it also played havoc with stereo imaging, in particular moving images from the side channels towards the centre. Questions, questions! So we contacted IRIS for an interview with its Chief Technology Officer, Rob Reng.

SOUND+IMAGE: Could you describe further the ‘space’, the ‘live dimension’ of which you speak? You say this is removed by MP3 or even CD-quality sampling — so is this a loss of high-frequency content or (more common when discussing spatial cues) a smearing of time which causes the loss?

ROB RENG: This is really identifying a fundamental flaw in the majority of recorded sound. In a live environment, instrument players are separated by space. Waves of sound come from each of these players directly at the listener and, at the same time, bounce off every reflective surface in the environment and enter the listener’s ears from many different angles. Each of these waves hits the ear at a slightly different time depending upon the distance it has travelled. These timing differences are the relative phase of the sound waves. Each point in the room has a slightly different set of phase relationships, and therefore a slightly different representation of the sound.
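The relationship Reng describes between extra path length and relative phase can be sketched numerically. This is our own illustration of the standard acoustics, not IRIS’s code, and it assumes sound travelling at roughly 343 m/s in air:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 degrees C (assumed)

def phase_difference(extra_distance_m, frequency_hz):
    """Phase lag (radians) of a reflection whose path is
    extra_distance_m longer than the direct path."""
    delay_s = extra_distance_m / SPEED_OF_SOUND
    return (2 * math.pi * frequency_hz * delay_s) % (2 * math.pi)

# A reflection path 0.5 m longer than the direct path, at 1 kHz:
print(round(phase_difference(0.5, 1000.0), 3))  # 2.876 radians
```

At higher frequencies or longer reflection paths the lag wraps around the circle, which is why each listening position in a room hears its own unique set of phase relationships.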

However, when this live performance is recorded, even the best technology only captures the phase information present at the recording devices, often just one or two microphones. Much of what was in the original environment is lost.

In addition to this, most audio compression algorithms, such as MP3, process the audio signal through a Discrete Cosine Transform, which effectively disregards an entire phase dimension.
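The mathematical point here can be illustrated in a few lines. This is a simplified sketch of the general transforms, not the MP3 codec pipeline or IRIS’s algorithm: a Fourier transform stores each frequency bin as a complex number carrying both magnitude and phase, while a DCT produces purely real coefficients, so per-bin phase is not represented explicitly.

```python
import cmath
import math

def dft(signal):
    """Naive discrete Fourier transform: complex bins (magnitude AND phase)."""
    n = len(signal)
    return [sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def dct2(signal):
    """Naive DCT-II: purely real coefficients, no explicit phase term."""
    n = len(signal)
    return [sum(signal[t] * math.cos(math.pi * k * (2 * t + 1) / (2 * n))
                for t in range(n)) for k in range(n)]

sig = [math.sin(2 * math.pi * t / 8 + 0.7) for t in range(8)]  # phase-shifted sine
print(all(isinstance(c, complex) for c in dft(sig)))   # True: phase is carried
print(all(isinstance(c, float) for c in dct2(sig)))    # True: real-only bins
```

A DCT-based codec can still reconstruct a waveform, of course; the point is only that phase is implicit in the cosine basis rather than stored per bin as it is in a complex Fourier representation.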

S+I: How do you put it back if this information is lost? Is this not like trying to guess the colours in a black-and-white photograph?

RR: We resynthesize the phase information that was lost during audio compression. We are able to accurately perform this through the transforms that take place in our algorithm.

S+I: If I play a left-right channel test through the app, engaging IRIS shifts both left and right channel announcements from left and right to the centre. There are similar effects on many tracks where panned instruments are shifted in their locations. This is clearly destructive of spatial cues rather than enhancing them! Please discuss?

RR: If you go to a live concert or to a room with musicians, you do not hear instruments in one ear. That is an illusion of space that has become entrenched in recorded music, but that doesn’t actually reflect reality. We want it to feel like you’re in the room with musicians, and are providing an immersive listening experience that is closer to reality.

In addition, our technology is a neural stimulation. In order for your brain to resolve the difference in phase relationships, the harmonic content must be sent to both ears. This allows us to nudge the brain into paying more attention to the source material, and at the same time invoke the feeling that the listener is at one with the sound. Ultimately we want to bring back the enjoyment of just listening and all the emotion that this can bring, enabling listeners to be swept away with the music.

Hard to argue with the motivation there, but as with most sound processing, we err on the side of caution. Still, the app is free, so there’s nothing to lose by giving it a try. More information at https://irislistenwell.com
