MQA is bad for music

Posted by: DUPREE on 13 February 2017

Our Scottish friends are not very hip on MQA, and they have some pretty good reasons for it.

https://www.linn.co.uk/blog/mqa-is-bad-for-music

Posted on: 13 February 2017 by matt podniesinski

Interesting read.  Thanks for the link.

Posted on: 13 February 2017 by Brubacca

Death to any format or scheme designed to make me buy my music again.  Or rent it on a monthly basis.  

Posted on: 13 February 2017 by engjoo
Brubacca posted:

Death to any format or scheme designed to make me buy my music again.  Or rent it on a monthly basis.  

Tell me about it. Here in Asia, Universal Music has been releasing many mastering variants of the same HK/Taiwanese albums from the 80s and 90s.

XRCD, LPCD, K2HD, SACD, Abbey Road Studios... 

Posted on: 13 February 2017 by Bert Schurink
DUPREE posted:

Our Scottish friends are not very hip on MQA, and they have some pretty good reasons for it.

https://www.linn.co.uk/blog/mqa-is-bad-for-music

An interesting read, and a viewpoint I hadn't yet considered in the discussion - and true....

Posted on: 14 February 2017 by Simon-in-Suffolk

Hmm, to my mind, if MQA actually were better and did provide true fidelity rather than a processed facsimile of it with artefacts, I would have no issue with people making money from licensing it... we all have livings to make, new revenue streams clearly drive innovation, and after all MP3 is licensed. There are some aspects of MQA that I really think take things forward, but they are probably too niche in appeal. It's the broader appeal of lossy compressed hi-def where I think it all starts going wrong and fidelity is compromised... but as far as I am aware it is the only lossy hi-def codec on the market at present. The question is which is better: processed, compressed, lossy hi-def with artefacts, or more neutral lossless 44.1/16 PCM? I suspect for many it will be the marketing people rather than people's ears that decide... though I suspect MQA might sound better on cheaper, less capable equipment.
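
For context, a back-of-envelope Python sketch of the raw PCM bitrates involved (stereo, illustrative figures only, ignoring container overhead and FLAC's typical further reduction) shows why a compressed hi-def delivery format is commercially tempting:

# Back-of-envelope raw PCM bitrates (stereo), ignoring container
# overhead; figures are approximate and for illustration only.
for label, rate_hz, bits in [("CD 16/44.1", 44_100, 16),
                             ("hi-def 24/96", 96_000, 24),
                             ("hi-def 24/192", 192_000, 24)]:
    mbit_s = rate_hz * bits * 2 / 1e6
    print(f"{label}: {mbit_s:.2f} Mbit/s raw")
# CD 16/44.1: 1.41 Mbit/s raw
# hi-def 24/96: 4.61 Mbit/s raw
# hi-def 24/192: 9.22 Mbit/s raw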

Simon

Posted on: 14 February 2017 by andarkian

I am sure that many of you subscribe to hi-def offerings such as Tidal or Qobuz and at some time or other have just pressed the shuffle button to randomise your favourite tracks. What stands out for me is the varied quality of the recordings that come out of the mix. It is a serious disappointment, after spending thousands and thousands of pounds on great hi-fi gear, to find that the beloved song from years back that you hoped would be enhanced and clarified by the best equipment was simply lashed together in a recording studio, and all your precious equipment does is unveil the ugliness.

Whether MQA enhances the worst recordings or takes the best to new peaks has, for me, yet to be proven or disproven. At the moment I am waiting for Audirvana to release its Version 3.0, which is supposed to properly disassemble the MQA wrapper. If it takes nothing away from, and adds nothing to, the original recording, then as a means of streaming music it has to be given an open-minded chance.

Posted on: 14 February 2017 by MadScientist

A rather shoddy, unbalanced article IMHO. They clearly see MQA as a threat.

Posted on: 14 February 2017 by meni48

MQA is bad for music? OK, I haven't heard this format. In my opinion DSD is also bad for music - it lacks liveness. I use only HD and regular CD.

Posted on: 14 February 2017 by manicm

A counterpoint, of sorts:

http://www.computeraudiophile....-q-mqa-s-bob-stuart/

Posted on: 14 February 2017 by EJS

The CA MQA article seems as suggestive as they come - graphs that illustrate potential observations rather than facts, and it still pitches MQA as a rival format to hi-res PCM or DSD, whereas it really seems geared to facilitating and incrementally improving streaming. The supposed timing benefits remain vaguely explained, and the lossy compression is underplayed.

Linn's article isn't without bias either, but I do agree with the concern that the format may be adopted mainly for economic rather than (sound-)technical reasons, and that MQA will eventually be used to unlock demos rather than to improve playback of otherwise universal files.

Posted on: 15 February 2017 by Simon-in-Suffolk
andarkian posted:

.... If it takes nothing away from, and adds nothing to, the original recording, then as a means of streaming music it has to be given an open-minded chance.

Well, then it won't be MQA... there is no 'wrapper' in MQA... that word makes MQA sound like an encapsulation, and it isn't. The master file has to be processed and then digitally decimated without conventional low-pass pre-filtering, so necessarily, from what I can see, the aliased frequencies from the decimation are reflected (folded) into the baseband. At that point the sound has been altered, or digital distortion added. When it later comes to oversampling and filtering (unfolding), these aliased frequencies get projected back to their correct part of the spectrum... but the alias remains... and the whole process seems to rely on most people not being sensitive to this added distortion, which may or may not be the case. Either way, it's hardly getting you a closer representation of the original recording... more like a processed, sweetened version of it.
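
To illustrate the folding effect, here is a minimal numpy sketch assuming naive 2:1 decimation with no anti-alias filter - a deliberate simplification, not MQA's actual (proprietary) filtering:

import numpy as np

fs = 96_000                      # original sample rate (Hz)
t = np.arange(fs) / fs           # one second of audio
# one audible tone (10 kHz) plus one ultrasonic tone (40 kHz)
x = np.sin(2 * np.pi * 10_000 * t) + 0.5 * np.sin(2 * np.pi * 40_000 * t)

# naive 2:1 decimation: drop every other sample, with no low-pass first
y = x[::2]                       # now 48 kHz, Nyquist = 24 kHz

spec = np.abs(np.fft.rfft(y)) / len(y)
freqs = np.fft.rfftfreq(len(y), d=2 / fs)
print(freqs[spec > 0.1])         # [ 8000. 10000.]
# the 40 kHz tone has folded about Nyquist into the baseband at
# 48 kHz - 40 kHz = 8 kHz, where it now masquerades as audio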

All this talk of correcting masters, quite honestly, is to my way of thinking marketing mumbo-jumbo. Yes, it may process them to sound subjectively better to some people or on less capable equipment, but I can't see how it can magically repair a master...

Another interesting thought I have had concerns Tidal and its use of FLAC. The way MQA is encoded works against the way FLAC compression works, so MQA masters will probably compress less effectively than many unprocessed PCM files... I suspect MQA's data transport compression will therefore be reduced when transmitted as FLAC.
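
A rough sketch of why that should be so, using zlib as a stand-in for FLAC (FLAC actually uses linear prediction plus Rice coding, but the principle - noise-like low bits resist lossless compression - is the same):

import zlib
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# 24-bit-style samples of a sine, with the bottom 8 bits cleared
sine = (np.sin(np.linspace(0, 200 * np.pi, n)) * 2**22).astype(np.int32)
plain = sine & ~0xFF

# the same samples with pseudo-random data packed into those low 8 bits,
# loosely mimicking MQA's buried fold (illustration only, not MQA's format)
buried = plain | rng.integers(0, 256, n).astype(np.int32)

for name, data in (("plain PCM-like", plain), ("with noisy LSBs", buried)):
    ratio = len(zlib.compress(data.tobytes())) / data.nbytes
    print(f"{name}: compresses to {ratio:.0%} of original size")
# the version with noise-like LSBs compresses markedly worse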

Posted on: 15 February 2017 by andarkian
Simon-in-Suffolk posted:
andarkian posted:

.... If it takes nothing away from, and adds nothing to, the original recording, then as a means of streaming music it has to be given an open-minded chance.

Well, then it won't be MQA... there is no 'wrapper' in MQA... that word makes MQA sound like an encapsulation, and it isn't. The master file has to be processed and then digitally decimated without conventional low-pass pre-filtering, so necessarily, from what I can see, the aliased frequencies from the decimation are reflected (folded) into the baseband. At that point the sound has been altered, or digital distortion added. When it later comes to oversampling and filtering (unfolding), these aliased frequencies get projected back to their correct part of the spectrum... but the alias remains... and the whole process seems to rely on most people not being sensitive to this added distortion, which may or may not be the case. Either way, it's hardly getting you a closer representation of the original recording... more like a processed, sweetened version of it.

All this talk of correcting masters, quite honestly, is to my way of thinking marketing mumbo-jumbo. Yes, it may process them to sound subjectively better to some people or on less capable equipment, but I can't see how it can magically repair a master...

Another interesting thought I have had concerns Tidal and its use of FLAC. The way MQA is encoded works against the way FLAC compression works, so MQA masters will probably compress less effectively than many unprocessed PCM files... I suspect MQA's data transport compression will therefore be reduced when transmitted as FLAC.

Simon,

I thought I would take a closer look at MQA before replying. You are right and wrong about it being a 'wrapper'. It allegedly does not decimate the master file, but simply splits it into three segmented areas in what they call triangulation. In their example, area A runs from 0 to 24 kHz, area B from 24 kHz to 48 kHz, and area C from 48 kHz to 96 kHz. In terms of volume of data, area A is the most densely packed; the triangulation starts above this and then descends. The bin level, measured in dB, runs from 0 to 168. This is in fact a 24-bit 192 kHz recording, i.e. 2 × 96 kHz. By contrast, normal encapsulation would be a rectangle defined as, say, 24-bit 192 kHz, which can contain a lot of non-information - neither musical nor timing data nor anything else - because it is of a fixed area, if you like, rather like the book in the box you get from Amazon. According to Meridian, area A is more or less untouched, as it makes the most use of the available area. Area B is folded beneath the noise level of area A, and C below area B. There is, according to Meridian, no inherent loss of any data, because it all remains resident in, and recoverable from, area A. I believe everything up to area B is fully recoverable without special hardware, but not beyond that. The 192 kHz sample rate is only an example, not a limit.
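
As a toy illustration of that folding idea - an invented bit-packing scheme for the sake of the argument, not Meridian's actual encoder:

import numpy as np

def fold(area_a: np.ndarray, area_b: np.ndarray) -> np.ndarray:
    """Bury an 8-bit rendering of the hi-res band (area B) in the
    low 8 bits of 24-bit baseband samples (area A) - toy version."""
    payload = np.clip(area_b * 127, -128, 127).astype(np.int32) & 0xFF
    return (area_a & ~0xFF) | payload

def unfold(folded: np.ndarray):
    """Recover the buried band; to a legacy player the payload is
    just low-level noise near the bottom of the 24-bit range."""
    area_b = (folded & 0xFF).astype(np.int8) / 127.0
    area_a = folded & ~0xFF
    return area_a, area_b

# round trip: the buried band survives to within its 8-bit precision
a = (np.random.default_rng(1).standard_normal(8) * 2**20).astype(np.int32)
b = np.linspace(-1, 1, 8)
_, b_out = unfold(fold(a, b))
print(np.allclose(b, b_out, atol=1 / 127))   # True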

MQA does not seem to care what format it is receiving as it is simply deconstructing and reconstructing what it contains.

"In coding terms he equates it to an infinite sample rate, but with an 18-bit noise-floor." Apparently the whole MQA coding / decoding system  induces less temporal blur than 10 meters of air, but I have no idea what that means. 

In short, they claim they do not compress. 

Posted on: 15 February 2017 by Simon-in-Suffolk

[@mention:22262699346003119] you might try looking at Dr Lesurf's analysis and deconstruction of MQA, based on its patent. He recently retired from the University of St Andrews school of physics, and he can explain better than I can the decimation of the master, the reconstruction oversampling with its alias errors, and how MQA compresses the hi-def audio and then reconstructs it.

http://www.audiomisc.co.uk/MQA...mi/ThereAndBack.html

'Less temporal blur', I'd say, is just a fancy way of saying that timing information is maintained, as distinct from timbral pitch - i.e. what is achieved with any high-definition low-pass-filtered encoding. It is not specific to MQA, and it is readily experienced with 96 kHz and 192 kHz PCM sample rates.
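
To put rough numbers on that, an illustrative Python sketch: the 5-95% energy span of a windowed-sinc reconstruction filter (cutoff assumed at 0.45 × fs) as a crude proxy for 'temporal blur':

import numpy as np

def blur_us(fs, taps=4095):
    """Duration (in microseconds) containing 90% of the energy of a
    windowed-sinc low-pass filter with cutoff at 0.45 * fs."""
    n = np.arange(taps) - taps // 2
    h = np.sinc(0.9 * n) * np.blackman(taps)   # 2*fc/fs = 0.9
    e = np.cumsum(h ** 2) / np.sum(h ** 2)
    lo, hi = np.searchsorted(e, [0.05, 0.95])
    return (hi - lo) / fs * 1e6

for fs in (44_100, 96_000, 192_000):
    print(f"{fs / 1000:g} kHz PCM: ~{blur_us(fs):.1f} us of filter ringing")
# the blur interval scales inversely with the sample rate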

Posted on: 15 February 2017 by andarkian
Simon-in-Suffolk posted:

[@mention:22262699346003119] you might try looking at Dr Lesurf's analysis and deconstruction of MQA, based on its patent. He recently retired from the University of St Andrews school of physics, and he can explain better than I can the decimation of the master, the reconstruction oversampling with its alias errors, and how MQA compresses the hi-def audio and then reconstructs it.

http://www.audiomisc.co.uk/MQA...mi/ThereAndBack.html

'Less temporal blur', I'd say, is just a fancy way of saying that timing information is maintained, as distinct from timbral pitch - i.e. what is achieved with any high-definition low-pass-filtered encoding. It is not specific to MQA, and it is readily experienced with 96 kHz and 192 kHz PCM sample rates.

Thanks Simon,

I'm afraid this is way beyond anything I can understand, and I will have to await a satisfactory opportunity to listen comprehensively to MQA material. The whole digital recording world seems to be a complete nightmare of ingenuity and compromises, incomprehensible to anyone without a PhD in electronic engineering, which isn't me.

Posted on: 15 February 2017 by Huge
Simon-in-Suffolk posted:

[@mention:22262699346003119] you might try looking at Dr Lesurf's analysis and deconstruction of MQA, based on its patent. ....

Thanks Simon, that's fascinating.

I now understand how the temporal response in MQA can be made to exceed the Nyquist limit, but also how this will induce artefacts and what can be done to balance these effects.

Posted on: 15 February 2017 by mackb3
Simon-in-Suffolk posted:

Hmm, to my mind, if MQA actually were better and did provide true fidelity rather than a processed facsimile of it with artefacts, I would have no issue with people making money from licensing it... we all have livings to make, new revenue streams clearly drive innovation, and after all MP3 is licensed. There are some aspects of MQA that I really think take things forward, but they are probably too niche in appeal. It's the broader appeal of lossy compressed hi-def where I think it all starts going wrong and fidelity is compromised... but as far as I am aware it is the only lossy hi-def codec on the market at present. The question is which is better: processed, compressed, lossy hi-def with artefacts, or more neutral lossless 44.1/16 PCM? I suspect for many it will be the marketing people rather than people's ears that decide... though I suspect MQA might sound better on cheaper, less capable equipment.

Simon

I'm in your camp, Simon. Red Book is technically sufficient for human consumption, and Chord and other FPGA implementations are proof positive that the base standard has a long way to go before we need to wig out over hi-res, DSD and the like. I have files in most of the variants, and as a lot of us know, quality is not solely down to the format. I'm not dissing real progress, but this hobby is rife with latching onto the latest drug.

Posted on: 15 February 2017 by John Willmott
mackb3 posted:
Simon-in-Suffolk posted:

Hmm, to my mind, if MQA actually were better and did provide true fidelity rather than a processed facsimile of it with artefacts, I would have no issue with people making money from licensing it ....

I'm in your camp, Simon. Red Book is technically sufficient for human consumption, and Chord and other FPGA implementations are proof positive that the base standard has a long way to go before we need to wig out over hi-res, DSD and the like. I have files in most of the variants, and as a lot of us know, quality is not solely down to the format. I'm not dissing real progress, but this hobby is rife with latching onto the latest drug.

"Technically Sufficient for human consumption" .. is that the extent of your satisfaction with the quality of the current format ?  

And .. "a lot of us know that quality is just not solely the format" .. please expand and explain ..

Posted on: 16 February 2017 by Huge
mackb3 posted:

I'm in your camp, Simon. Red Book is technically sufficient for human consumption ....

If you only look at the superficial headline figures (16-bit/44.1 kHz), then MP3 is also "technically sufficient for human consumption". The reality is that human perception is more sophisticated than that.

See this... https://phys.org/news/2013-02-...ainty-principle.html as an example.

I also don't think Simon was saying Red Book is "technically sufficient": he's quite aware of the article linked above, and of others showing that human sensory perception extends to acquiring subtle (but often somewhat unreliable) information beyond previously assumed limits that were determined by single-factor testing.

Posted on: 16 February 2017 by andarkian

Ahem, it's me again, Simon and Huge.

I disappeared into the depths of Dr Lesurf's paper, almost all of which flew way over my head - which you may rightly say excludes me from making any meaningful comments about it. However, I will anyway. It appears that Dr Lesurf based his assessment on certain publications surrounding the MQA patent process. By his own admission this documentation is neither complete nor definitive in itself, and it is heavily legalese rather than purely technical.

Now, from my tenuous grasp of his technical arguments, he appears to assume that Meridian are simply slicing and dicing the data in a form akin to MP3 and then applying some fancy filters and upsampling techniques to 'recreate' the original sound - which, if so, would be deceptive, counterproductive and easily self-evident. Dr Lesurf assumes that Meridian are lopping off bits of the signal.

My own understanding is that Meridian have segmented signals of whatever size - 48 kHz, 96 kHz, 192 kHz and so on - into three areas that make up a triangular shape running from peak spectral content down to the absolute sound floor. The content up to at least 48 kHz is defined as area A and is always reproduced, whether or not you have an MQA decoder. Even in area A there will be data elimination, but not decimation, segmentation or anything else. MQA translates the pure signal, whether from MP3, WAV or whatever, into a more efficient word size rather than just a block of standard size.

Area A contains what Meridian define as peak spectral content, equivalent to CD-quality sampling; B contains limited spectral content; and C is mostly an artefact of the original sampling or recording system, though some of it will be included as meaningful in Meridian's triangulation process. In Meridianese, area C is folded beneath the spectral floor of area B, and both of these are folded under the spectral floor of A. The original signal, whatever its origin, can be recreated. The theory, according to Meridian, is that you are packaging only the pure musical signal and not the fresh air around it. All of this demands that the 'master' passes through the MQA triangulation and folding process. Whether it is 'pure' or not is subjective, but theoretically you should get the same garbage out that you put in.

Posted on: 16 February 2017 by King Size
MadScientist posted:

A rather shoddy, unbalanced article IMHO. They clearly see MQA as a threat.

...and baulk at the idea of paying a licence fee to a third party that is partly owned by Meridian.

Linn are hardly what you would call neutral observers here, and the article is one-sided and exaggerated in order to make a point.

Posted on: 16 February 2017 by andarkian

Coincidentally today....

http://www.theabsolutesound.co...on-demand-streaming/

Posted on: 16 February 2017 by GregW

Last year I posted the following [1]:

I'm pretty positive about MQA, but there is at least one sonic trade-off that makes it quite unattractive for some manufacturers. If you have developed a house sound by tuning your DACs and streamers with a series of digital filters, you can't use them with MQA.

MQA uses digital filters/signal processing to match your DAC to the original encoder, thus creating the end-to-end chain that is key to the MQA philosophy.

MQA's solution is for a manufacturer to turn on their own signal processing/filters for non-MQA material. For MQA material you must use theirs, i.e. you get the 'Naim sound' when listening to a CD rip and the 'MQA sound' with MQA recordings.

This is possibly one of the reasons there hasn't been much open A/B testing of the same material in CD, hi-res and MQA.

If, like Linn, you rely heavily on DSP in your products, MQA is going to be harder to implement. For example, could you apply room correction in DSP after decoding the MQA signal and retain the authenticated blue light? Chord have expressed similar concerns: reportedly they balked at including a Lightning interface in the Mojo because they would have had to open up their DAC design to Apple. I can imagine they would be even less keen to open it up to a direct competitor. This may be one of the reasons Bob Stuart has left Meridian to focus on MQA [2].
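
A hypothetical sketch of that constraint - names and structure invented purely for illustration, not any real DAC firmware:

# Hypothetical: a renderer forced to bypass its own voicing for MQA.
class Renderer:
    def __init__(self, house_filter, mqa_filter):
        self.house_filter = house_filter    # the brand's own voicing/DSP
        self.mqa_filter = mqa_filter        # mandated by the MQA decoder

    def render(self, stream):
        if stream.is_mqa:
            # MQA authentication requires its own reconstruction
            # filtering; inserting house DSP (e.g. room correction)
            # here would break the end-to-end chain / blue light
            return self.mqa_filter(stream.decode_mqa())
        return self.house_filter(stream.decode_pcm())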

[1] Source: https://forums.naimaudio.com/to...82#65203403977546782

[2] http://www.thenexttrack.com/♫-episode-38-new-in-audio-at-the-consumer-electronics-show-ces/

Posted on: 16 February 2017 by GregW

During today's announcement from MQA and Universal, there was an interesting titbit from Universal's Michael Nash:

We don’t want to step across announcements that other companies will make, but we think that we could safely guide you in the expectation that there will probably be half a dozen services in the marketplace by the end of the year delivering this format.

Today's news will ratchet up the pressure on hardware manufacturers to implement support for MQA. It's hard to imagine Naim offering Spotify and Tidal but not supporting the 'highest quality' content from both services.

Posted on: 16 February 2017 by mackb3
John Willmott posted:
mackb3 posted:
Simon-in-Suffolk posted:

Hmm, to my mind, if MQA actually were better and did provide true fidelity rather than a processed facsimile of it with artefacts ....

I'm in your camp, Simon. Red Book is technically sufficient for human consumption .... quality is not solely down to the format ....

"Technically Sufficient for human consumption" .. is that the extent of your satisfaction with the quality of the current format ?  

And .. "a lot of us know that quality is just not solely the format" .. please expand and explain ..

John,

Q1. The Red Book standard still has much potential, owing to further advances in DAC technology - similar to moving up the Naim range: same recordings, regardless of format, but one can hear more and the presentation is more real.

Q2. Proper recording, mastering etc. is critical regardless of format, as anyone can hear the difference between good and bad recordings, whether Red Book, hi-res, DSD, vinyl, reel-to-reel and so on. I record my band live and in the studio. Venue, mic technique, recording levels etc. will result in different levels of quality at the same sampling rate. That's all.

Cheers

Posted on: 17 February 2017 by Huge

Point 1: 24/192 has more potential than Red Book. No matter how well optimised the 16/44.1 DAC implementation, an equally well optimised 24/192 DAC implementation will outperform it (and also outperform an equally well optimised 24/96 MQA DAC implementation). (N.B. a very well optimised 16/44.1 DAC implementation can outperform a poorly optimised 24/192 DAC implementation.)

Point 2: Whilst recording and mastering techniques are actually the dominant factors in the final outcome, in the very best recordings the Red Book format can limit the final quality - hi-res just reduces the effect of that limitation.