ND5XS to NDAC - which digital interconnect?
Posted by: Chris G on 13 February 2012
I am delighted to have added the ND5XS to my Supernait and NDAC. 24bit files downloaded from The Classical Shop (Chandos recordings) and from e-classical sound great. I am now looking for a good quality digital interconnect. I hope to audition the Chord Signature in a few weeks and wonder which interconnects others are using for this combination? I have Chord cables throughout my system so I want to consider this option first.
Allen, I think you are on the right path. Yes, Linn, Naim, Weiss, you name it, will follow this or a similar approach, focusing on their own in-house methods to differentiate themselves in the market. Commodity designs are best done in the Far East..
So the musicality is determined by the DSP/DAC/analogue output combination and its colourations due to power supply, RFI, mechanical resonances, grounding etc.
The streamer boards on network players are, IMO, removed from the equation other than to the degree they add switching perturbations via EMI or internal power busses.
Simon
Ok thanks Simon, I just want to nail down, at least in my head, this misconception put forward by a few here (and in other forums) that the only thing that matters is the DAC implementation and everything else does not, so long as bit-perfect data is being delivered. In particular, why the digital out can be improved and can convey the character of a player.
If the player is dinking with the data before output to an external DAC, it is not bit perfect. If it is not bit perfect, the sound will change depending on the processing done in the player.
Take PC/Mac based streamers. Most of the different software players (that people can pay a lot of money for...) promote bit-perfect playback. If, in a given system (i.e. the same streamer hardware, interconnect, DAC and back end, just different software), people can hear differences, then one of two things is going on: 1) The players aren't bit perfect. 2) The people who hear differences are imagining things.
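As a concrete way of testing (1): if you can capture the player's digital output bit-for-bit (e.g. a loopback recording), checking bit-perfectness is just comparing sample data. A rough Python sketch - the file names are hypothetical and it assumes plain PCM WAV at both ends:

# Minimal sketch (hypothetical file names): check whether two WAV files
# carry identical PCM sample data, ignoring header/metadata differences.
import hashlib
import wave

def sample_digest(path):
    """Hash only the PCM frames, not the header."""
    with wave.open(path, "rb") as w:
        return hashlib.sha256(w.readframes(w.getnframes())).hexdigest()

if sample_digest("source.wav") == sample_digest("capture.wav"):
    print("Bit perfect: sample data identical.")
else:
    print("Not bit perfect: the player altered the samples.")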
As an aside, I would expect Naim to be doing bit perfect output whether it comes before or after the DSP. It's only a case of sending out what comes in which is fairly simple as far as software goes.
Isn't it rather obvious that "sending out what comes in" is a lot harder to do in real life?
In fact, I think we need to get beyond this whole concept of "bit perfect"-ness. We talk about it as if it were something that guaranteed the "right" sound. Or, we assume that if a software or hardware manufacturer stamps their product as having "bit perfect" output, then there can be no difference (or no improvement) in what we actually hear. It just doesn't hold up in practice. In fact, this belief is easily shattered by listening to an a/b with a highly resolving system (mine was anyway).
If bit perfect is easy to do, then why for so many years have people needed to bypass Windows kernel processing using ASIO or (with Windows 7) WASAPI? Are the Windows developers really that stupid? And yes, while most Mac users want to go beyond their primitive MIDI processing to get automatic bit depth/sample rate changing, almost everyone reports hearing something different by changing to an Amarra or a Pure Music. And then, beyond that, they report hearing differences simply by turning on options like "memory play". How can any of this matter if they are all bit perfect? To me, mass hallucination is not an explanation.
While I would have to do more research than my tired old brain can handle right now in order to understand exactly how Windows and MacOS process data, I do think we can fairly safely make the claim that it is harder to send out what comes in than we think. Maybe they get the timing wrong. Maybe there is buffer underrun or overflow that is being quietly handled without any error reporting. Maybe there is just a lot more guesswork down deep under the covers than we think...
Nobody is arguing that our auditory system isn't highly subjective and that a lot of what we hear and report comes down to opinion. But when the vast majority of opinions concur, and they point to audible differences amongst the software and hardware players, then I think the topic merits examination of why this is so (as opposed to just endlessly repeating the "bits are bits" mantra).
I do agree that Andy has correctly stated what the two possibilities are. And since I reject that people are simply imagining things, I do think the bit streams are different. I also think that it is not necessarily the case that spitting out exactly what is read in produces the "best" sound. It may be a poor analogy, but I keep thinking about how some people prefer the sound of tube-based systems with very high levels of even-order harmonic distortion. I am also recalling a recent thread where some folks heard a lot more positives than others by turning on the NDX/ND5 XS upsampling option. There are probably lots of other examples as well, but they all suggest that we react differently to hearing more or fewer "errors", and we all judge better/worse sound quality very subjectively.
Oh well, I think I'm pretty much done on this subject. Any time I hear "bits are bits", or a variant like "bits in/bits out", I have a knee-jerk reaction in the opposite direction.
Hook
PS - I almost forgot to agree with Simon that, even if by some miracle the bits are all correct, then RF rejection, power supply quality, mechanical resonances, and grounding will all still play an important role in determining what we hear.
PPS - Personal footnote: if this was 1988, I could probably make a better case for what I am saying than I am right now. For a while, I did UNIX internals development for Sun. One project I had was custom development for a company that was selling to Billboard. They had built a data collection board, and contracted Sun consulting (me) to modify the buffering algorithms in Sun's high-speed serial device driver in order to be able to keep up with what was then considered to be a massive amount of data input. It has been a long time, but I do have vague recollections of how hard it was to actually spit out the same bits as were coming in (without duplicating some and losing others). In fact, at the end of the project, all we had to do was get close enough for the pattern matching code above us to work. The purpose of this board was to tune in and listen to radio stations and make sure they were actually playing the songs they had been contracted by Billboard to play. I've always felt guilty about this project, because it was very successful. You see, the company's previous business model was to fly people -- almost all older retired ladies -- around the country to sit them in hotel rooms, listen to the radio, and fill out scorecards. Instead, this company was now able to place a small Sun server in cities and small towns all across the US. Sorry Granny....
It is on a PC running Windows. You ask whether the Windows developers were actually that stupid... Well, in a way, yes they are. Why? Because the primary use of a PC system is to mix and output sound from a number of sources and get them to the user. Beeps, sound from movies, sound from applications, CDs etc... all need to be mixed together. In order to do this, they imposed a standard, which is 48kHz. EVERYTHING is converted to that, mixed with the appropriate level control and then sent to the output stage, which until fairly recently was good old onboard analog(ue). If you wanted digital out, that was "professional" and involved looking at ASIO as the method of getting data around.
The standardisation in Windows is just that - an imposed standard. Windows wasn't designed as a single source output streamer. At a guess MacOS is the same - the thing is primarily a computer and probably 95% of people who use one don't care if it comes out bit perfect at 44.1kHz or processed at 48kHz.
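To see why that imposed 48kHz standard can't be bit perfect for CD material, here's a toy round trip - my own naive linear-interpolation resampler standing in for the OS mixer, not actual Windows code:

# Toy illustration, NOT Windows code: a 44.1kHz signal pushed through a
# 48kHz mixer stage and back is no longer bit identical, even with
# perfect levels and no clipping.
import numpy as np

fs_in, fs_mix = 44100, 48000
t_in = np.arange(fs_in) / fs_in                 # one second of audio
original = np.round(32767 * np.sin(2 * np.pi * 1000 * t_in)).astype(np.int16)

t_mix = np.arange(fs_mix) / fs_mix
mixed = np.interp(t_mix, t_in, original.astype(np.float64))   # up to 48kHz
restored = np.round(np.interp(t_in, t_mix, mixed)).astype(np.int16)

changed = np.count_nonzero(original != restored)
print(f"{changed} of {fs_in} samples differ after the 48kHz round trip")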
It guarantees the sound most true to the original recording.
No. The implementation of a DAC is partly digital and partly analog(ue). Assume bit perfect sources for the moment.
Feed the same bits from a single source & cable to different DACs and they will sound different. The DACs have different implementations and will impart their own character onto the sound.
Feed the same bits to a single DAC but from different sources with different cables, and the variation in sound coming from the DAC depends on how isolated it is from the "imperfectness" of the source. The only way the source can influence the DAC in a separate system is either through the mains or through the bitstream coming into the DAC (for argument's sake, assume an optical connection so there is no electrical connection other than through the mains).
Mains can be discounted. There are far too many variables for this to be a consistent factor. Everyone's mains will have different levels of crud on it, and the power supplies are supposed to reject it. This leaves the datastream. If the bits are correct (and I fail to see how they could be anything but, because digital systems are both reliable and predictable) then the only thing we have left is jitter. For a DAC that eliminates jitter by design, that means the thing that is affecting sound quality is the conversion from optical to electrical (which happens as soon as the light enters the box) and any slight variations in the timing of this conversion before it gets reclocked by the DSP section.
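To put a rough scale on that last point, here's a quick simulation (assumed numbers, nothing measured from any real DAC) of random timing error at the conversion instant:

# Assumed numbers, nothing measured: sample a 10kHz sine at ideal
# instants and at instants perturbed by 1ns RMS of random jitter,
# then see where the resulting noise floor lands.
import numpy as np

rng = np.random.default_rng(0)
fs, f, n = 44100, 10000.0, 44100
t = np.arange(n) / fs
jittered_t = t + rng.normal(0.0, 1e-9, n)     # 1ns RMS timing error

clean = np.sin(2 * np.pi * f * t)
dirty = np.sin(2 * np.pi * f * jittered_t)

err_rms = np.sqrt(np.mean((clean - dirty) ** 2))
print(f"jitter-induced noise: {20 * np.log10(err_rms):.1f} dBFS")
# Expect roughly 20*log10(2*pi*f*sigma/sqrt(2)) = -87 dBFS here; audible
# or not, it scales directly with both the jitter and the signal frequency.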
It is as good an explanation as we have got. It is a known physical phenomenon - sometimes people's beliefs and expectations really do shape what they see and hear. What I find strange is that it is always rejected because "I heard a difference". Well, yes, you may well have heard a difference, but that doesn't mean one was actually there - the logic falls down.
Nope - see above. It's because everything is standardised to non-CD format by the OS.
Why reject a perfectly reasonable explanation?
Let me pose another thought. Doing a blind listening is very difficult. Sometimes it's not just audio that gives you a clue. Let me pose this as a point - and I'm not saying this is what happens or what happens in your case, but it illustrates the point. You say the NDX was reliably recognisable. The perception that you have will have been the NDX is a specialist bit of kit - people on the forums can hear differences - so you immediately expect to hear differences. Let us suppose that the NDX starts music differently to your current box - it takes slightly longer or the fascia lights up to indicate it is playing. You may not see this directly, but it gives your subconscious enough of a clue that it is in fact the NDX playing so you are able to reliably select it and the sound is better because it's a Naim product and people are raving about it...
I'm prepared to believe there are differences with the nDAC, but no one has - or will, I suspect - provided any hard proof as to why there are differences and, if there are differences, what it is that's causing them. Until that is given, it's a circular argument, with the conclusion that it's a placebo effect being just as valid a result. Rejecting that out of hand is neither fair nor valid, yet it is, I suspect, what goes on, given people have so much vested in it (both from a business standpoint and a personal standpoint where you don't want to look a fool for spending 10x what someone else has for the same result).
Nope. I conducted the a/b test, and Mrs. Hook was the subject. I opened up a couple of large cardboard boxes so she could not see the audio rack (not that she would know by looking what was playing -- she loves music, but is not very interested in audio). She had no idea when the NDX was playing, and when the PC/RME 9632 was playing. Also, I tried two different tests -- short and long. First, I simply used the remote to switch between the nDAC's two inputs while both were playing the same song. Did this ten times, 30 seconds at a time, and asked her to write down A or B as best. Next, I played her an entire 3 minute song, first on one, and then on the other. This became a frustration point for her, she said, because 15-30 seconds into the second playing of the song, she had made her A versus B choice, and wanted to move on.
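For what it's worth, a score on a 10-trial test like that can be sanity-checked against pure guessing with a simple binomial calculation (my own sketch):

# My own sketch: probability of scoring at least `correct` out of
# `trials` on an A/B test by guessing alone (fair coin per trial).
from math import comb

def p_by_chance(correct, trials):
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

for score in (8, 9, 10):
    print(f"{score}/10 correct: p = {p_by_chance(score, 10):.3f}")
# 9/10 or better comes up by luck only about 1% of the time.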
I tried my best Andy. I tried to remain emotionless and scientific and neutral. One important point to remember is that this was before I bought the NDX. I conducted this a/b to get Mrs. Hook's opinion of whether or not it was worth spending our money on sound quality grounds alone. She, as a test subject, was not susceptible to the kinds of tip-offs that a fellow audiophile might have been. And besides, I would have been happy with either result. If you recall, at the time, I was part of the "all sources for the nDAC sound alike" brigade. Check the old threads and you will see I am not BS'ing. These results threatened what I believed at the time. When the NDX arrived, I did immediately think it sounded better, but I was also concerned about the "cool" factors you raise. For me, the a/b tests confirmed that the differences were real. So I bought the NDX, never looked back, and ten months later, my opinion hasn't changed.
But as I have already said a couple of times, in retrospect, the fact that it sounds better has become almost secondary to me. The NDX's ease-of-use, maintainability and reliability are even more important to me.
I will readily concede that the NDX is very expensive to use only as a transport. No argument from me on those grounds. I agree it is not for everyone, but it is perfect for me. I am a business guy. I work about 100 hours per week, and am very well compensated for what I do (but I pay the price in long hours, high pressure and zero job security). I get one shot at relaxing by listening to music during the week -- it's usually after 10pm, and only for about an hour. The fact that I have not missed a single listening session in the last 10 months due to computer maintenance has been extremely important for me. Others can make their own decision, but I hardly feel like a "fool" for buying the NDX. Quite the opposite in fact. Given my situation, it has proven to be one of the smartest audio purchases I have ever made.
Hook
And it's at this point in the conversation I need to leave it, primarily because if I don't, I'd be setting myself up for a set of experiments where I borrowed one and actually captured the bitstream out of the NDX to check it was bit perfect - doing anything other than that would just be argument for argument's sake. I don't have the inclination to do that, and it wouldn't really matter to me because I don't spend much time listening to music these days, so I'm not up for spending £3k on a replacement streamer. Especially when I already have one that does far more than the NDX does and I'm 110% happy with it!! If there are differences and it turns out they are jitter related, well then the nDAC doesn't do as well as it claims, and we'll see a DAC555 at some point that charges a lot for a bit of design tweakery.
I remain convinced though that people are hearing differences that aren't there with software players (unless they don't do what they claim). I also remain convinced that a lot of what is justified really isn't justifiable - especially given the prices that people pay for cables and the like (my original post was that it doesn't matter which cable you use, and I'll stand by that). There's undoubtedly an awful lot of B/S purveyed too. I think that's what alienates me the most, TBH - so many half-baked theories around to justify high-end audio as a discipline that is outside of normal science when it isn't.
I'll check back in every now and again - it was very nice of you to welcome me back earlier in the thread and I might contribute every now and again too. Meanwhile, I'll be spending some time rebuilding my fileserver (just bought a set of faster interfaces for the disks and I want to upgrade the OS) and keeping the home network running smoothly (we have 3 TVs with media PCs underneath them, so all content - including timeshifted TV - is available at any TV or any PC). To be honest, I'm finding a lot of enjoyment in finding out new things doing that, and computer infrastructure (and DSLRs) is where my hobby money goes. I reckon I have enough compute equipment here to run a small business quite nicely...
I will say though, I don't consider myself normal...
Cheers Andy -- nice chatting with you!
ATB.
Hook
You too my man. Enjoy that setup
You can do anything with the data in the streamer. Delay it 3 days or do something weird with it. As long as it is passed out of the digital interface in order and bit perfect, the ONLY measure of how good the sound quality is through a given DAC is how good and precise the D/A conversion is. The digital part is completely predictable (if it weren't, PCs just wouldn't work), but the resultant data is presented to the actual conversion circuit, and then the only things that determine how well that digital value is converted back to analogue are:
1) The accuracy & precision of the clock at the D->A conversion
2) The stability of the reference power supply used to do the conversion
Unless the streamer component influences these factors, it will not have any impact on the sound (assuming the source is the fabled "bit perfect").
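To put some illustrative numbers on those two factors (assumed values, nothing measured from the nDAC):

# Illustrative values only, nothing measured from the nDAC: a 16-bit
# converter with a 2V reference playing a full-scale 20kHz sine.
import math

vref, bits, f = 2.0, 16, 20000.0
lsb = vref / 2 ** bits                       # one code step ~ 30.5uV

# 1) Clock: worst-case error = signal slew rate * timing error.
slew = 2 * math.pi * f * (vref / 2)          # V/s at the zero crossing
for dt in (1e-9, 100e-12):
    print(f"{dt * 1e12:5.0f}ps timing error -> {slew * dt / lsb:4.1f} LSBs of output error")

# 2) Reference: the output scales directly with vref, so 0.1% of ripple
#    is 0.1% of amplitude error - vast next to the 0.0015% (1 LSB)
#    resolution of the converter.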
The problem I and others have is: how does a component that is placed several feet away, connected via a medium (optical) that has no electrical contact, influence these two variables when 1) the nDAC claims to remove all jitter (i.e. instability in clocking) and 2) the power supplies are isolated from each other through the mains circuitry? The only area where the two interact is a bitstream that arrives over optical (in my case). How does that small bit of circuitry affect the sound quality?
That's the crux of the problem. Nothing more. Nothing less.
Just come back and tried to follow the twists and turns...
One thing I picked up on is Hook saying we should get beyond bit-perfectness as it doesn't convey a useful meaning. Absolutely!! It's a horrible marketing generalisation that really causes confusion, as it is often used in a context where transport data and sample data are blurred confusingly into one, with a loose notion of sample integrity and seemingly no concept of time.
In the end the key thing is the accuracy of the sample value, conveyed in a word (not a bit!!), at a precise point in time. If the time were slightly different, then the sample value would need to be different (good DSP maths again).
Therefore, for the accurate reconstruction of an analogue signal, the sample words must be acted upon at precisely the right time. Errors here cause jitter, i.e. frequency noise, which transforms into the analogue domain (DSP maths again..)
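A quick sketch of that DSP maths (assumed figures): if the conversion instants wobble periodically rather than randomly, the timing error shows up as discrete sidebands around the tone - frequency noise, literally:

# Assumed figures: a 10kHz tone whose conversion instants wobble
# sinusoidally at 2kHz (5ns peak). The timing error appears as
# sidebands at 8kHz and 12kHz, around -76dB for these numbers.
import numpy as np

fs, f_tone, f_jit, n = 192000, 10000.0, 2000.0, 1 << 16
t = np.arange(n) / fs
t_err = 5e-9 * np.sin(2 * np.pi * f_jit * t)          # 5ns peak timing error
signal = np.sin(2 * np.pi * f_tone * (t + t_err))

spectrum = 20 * np.log10(np.abs(np.fft.rfft(signal * np.hanning(n))) + 1e-12)
freqs = np.fft.rfftfreq(n, 1 / fs)
for f in (8000, 10000, 12000):
    k = np.argmin(np.abs(freqs - f))
    print(f"{f:5d}Hz: {spectrum[k] - spectrum.max():6.1f} dB relative to the tone")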
I think "bit stream" is a term really used to try and describe the accuracy of SPDIF, and even then it's misleading in terms of sample accuracy. SPDIF uses 32-bit sample data frames per channel, transmitted in a serial data (not sample bit) stream, which needs reconstructing to recover the sample data, whilst the sample clock needs to be reconstructed to correctly interpret the sample values and rebuild the analogue signal from the sample data. Therefore, in terms of sample accuracy, there is far more to it than the SPDIF level transitions.
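To make the frame structure concrete, here's a toy packing of one IEC 60958/SPDIF subframe - the preamble patterns and the biphase-mark line coding are omitted, so this is only the slot layout:

# Toy model of one IEC 60958 (SPDIF) subframe: 32 time slots per
# channel, of which at most 24 carry the audio sample itself.
def spdif_subframe(sample, validity=0, user=0, status=0):
    bits = [0, 0, 0, 0]                                # slots 0-3: preamble (pattern omitted)
    bits += [(sample >> i) & 1 for i in range(24)]     # slots 4-27: sample word, LSB first
    bits += [validity, user, status]                   # slots 28-30: V, U, C flags
    bits.append(sum(bits[4:]) & 1)                     # slot 31: even parity over slots 4-31
    return bits

frame = spdif_subframe(0x123456)
assert len(frame) == 32 and sum(frame[4:]) % 2 == 0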
Bit perfect is important as it implies correct transmission of the data values. I have seen a number of posts in the past that attribute different sounds to digital interconnects dropping low-order bit values, as if it were an analogue process - much as you can lose detail if you add noise in an analogue system. This just does not happen - you cannot attribute analogue behaviours to digital systems like that.
Bit perfectness is just another phrase for data integrity. It is important, as the job of a streamer is to get digital data from source to DAC without changing it. This isn't actually very difficult or expensive to do; it's just that if you use a computer, you have to work around the way the OS wants to see the world. And data integrity is very important if you want to be able to compare like with like, otherwise you're comparing different views of the same data, which is not objective.
What is more complex to do is to get the DAC to convert the samples at exactly the right time and provide an incredibly stable reference power supply for it to do this. If you don't do both, you don't get the right analogue signal out the back.
As I said above, the only way the streamer can influence the sound is if it has an effect on either of those two things (or three if you're worried about data integrity).
Backed up by Naim's quotation from their White Paper, isn't it just as important to have a stable reference power supply at the pre-DAC, DSP processing stage? I can hear power supply noise on the digital out of the NDX when using the internal PS, but it is much improved as you go up the outboard PSU ladder, with the 555PS pretty much eliminating this noise, with a much better sound (clarity primarily) overall.
Ideally no - it shouldn't matter. In the digital domain, digits is digits. You only hear any noise when it gets converted back to analogue. If adding power supplies to the NDX clears up the output of an nDAC connected to it (is that what your setup is and where it improves?), then there is something causing interference to the timing of the data arriving at the conversion point and/or PSU feeding the conversion stage reference.
Doing stuff digitally is very predictable and you don't need ultra quiet power supplies to do it. In fact you never need a reference power supply at all digitally.
BTW, the reference power supply in a DAC circuit is NOT the power supply to the chip to make it work, but the reference that gives the DAC the voltage so that it knows to output 1.23462V when 62312 is input digitally. Any variation on this power supply will raise or lower the output proportionately which introduces distortions.
Actually, that's a 3rd source, as the power supply to the chip will also cause distortions as it will change slightly the conversion point in time if that goes up or down.
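A sketch of that proportionality - the 1.2985V reference here is just reverse-engineered from the example numbers above, not any real DAC's spec:

# Hypothetical reference value, reverse-engineered from the example:
# output = reference * code / 2^bits, so any wobble on the reference
# rail scales the analogue output directly.
def dac_out(code, vref, bits=16):
    return vref * code / 2 ** bits

nominal = dac_out(62312, 1.2985)            # ~1.23462V, as in the example
rippled = dac_out(62312, 1.2985 * 1.001)    # +0.1% on the reference rail
print(f"{(rippled - nominal) * 1e6:.0f}uV of output error from 0.1% reference ripple")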
Assuming data integrity from wherever the data came (CD, HDD, network, wherever) then power supply quality of the source shouldn't matter with a design like the nDAC which claims to remove jitter (because power supply quality can cause jitter to be different in the S/PDIF stream).
Not entirely: in clocked digital circuits, digital clocks are very sensitive things, given the incredible accuracy that they need to work to. Noise on the power supply of a high-speed clock will, with current technology, modulate the clock to some extent. Therefore, if the timing of the digital output is important, then usually great effort is made to regulate the clock power lines, shield it from EMI and manage the earth currents, but even so there is usually a tiny amount of cross modulation.
And generally, in digital clocked circuits, the positioning of the clock is relatively unimportant as the data is allowed to settle before being sampled (usually half way through the clock cycle). There isn't the need to have precisely time-positioned clocks generally in digital systems. There is when you are converting to the analogue domain though.
What you are talking about is managing the digital out of a streamer where you want to minimise jitter as the clock is embedded in the datastream and you believe that improving that will lead to a better D->A conversion later in the chain. Improving the power supply to the streamer doing the streaming is likely to only have a second order effect. If it had more of an effect, Naim would shove a socket on the back and sell you a power supply to improve it. It doesn't with the nDAC - when the external PSU is used, the internal one still powers all of the digital circuits.
There is a fundamental problem here. It's difficult to describe both concisely and accurately in layman's terms, as there is so much background you need to have to start to understand the interactions going on in a system such as this. Secondly, there is an aura of mystique surrounding these elements that is guarded by high-end audio manufacturers and dealers. How else do you think they afford their Jags?
I come back to my fundamental point (which, reading your posts, you agree with), which is that the timing of the data only matters when you convert back to analogue. In order for the source component to affect this conversion, it must affect either the timing of the data through the DAC or the power supplies around the DAC. The only way this can be achieved is through RFI in some way or through jitter down the digital interface. But the nDAC supposedly removes all jitter, and RFI should be non-existent if connected by optical.....
BTW, we're not talking about high-speed clocks here
It's a dirty job, but someone has to do it.....
Meanwhile the snake oil manufacturers and salesmen make a decent living. If I were one of them, I'm sure I'd want it obfuscated too.
Don't we all know the feeling -
I got the slight feeling from some of your posts that you might not have read - http://www.naimaudio.com/sites..._dac_august_2009.pdf but ass-ume you have at some stage.
I think the problem for some people posting on this thread is that Andy S has both read and correctly understood the white paper...
Teehee... I used that exact picture a week or two ago...
I have a feeling you haven't read my "Why is the nDAC so cheap?" thread
Andy, I think we agree then. Yes, the clocks are not high-speed in absolute terms, but the master clock is, relative to the non-oversampled sample data rate; and it's not the frequency per se, it's the phase of the synchronicity.
Alas, the Naim white paper is very basic and only really scratches at Naim's strategy at a very high level, and what is written about the DSP and digital domain is pretty unremarkable and in layman's terms. I guess they, Naim, want to keep their real engineering innovations to themselves, for obvious reasons.....
Simon
Allen, come come, simpleton techie? I am not so sure you are as uninformed as you make out...
To your question.... There is a lot to PCB layout and design to combat EMI and high-frequency earth loops; I guess Naim will be innovating here. There is also the balance between analogue and digital filtering... Lots of tweakability here... Of course there is also mechanical isolation...
I think the transform/filter function - around a step, finite impulse response or infinite impulse response - is an interesting area, and we may see development here, as designing around IIR (which I believe Naim did) is approximate, since such filters can never be realised exactly in reality, and more powerful hardware may come to the rescue..... We may see some higher-bandwidth DAC chips being used, as I believe Naim are quite close to the bandwidth ceiling of the current devices. Finally, my old favourite, RFI coupling - I'd be surprised if Naim don't pursue this one more closely...

I haven't read any of Andy S's posts before, but I must say they come as a rational breath of fresh air amid some of the hocus-pocus on this forum.
Excellent thread - So after a scan through, the nDAC is resistant to source jitter, but it's not totally source agnostic (nor are other RAM-buffer-equipped DACs), so Naim haven't fully cracked that nut (yet).
Still, it's a 2k DAC - given the engineering freedom of a less restrictive budget, it'll be interesting to see what they come up with next.
James
Hello James
Never did do that comparison....
I've just repeated an experiment I did a couple of years ago. Play the same track through the streamer against playing the same track from a USB stick.
Conclusion - the same as last time - can't tell the difference.
Given there's no source interaction at all there, surely I should be able to hear some difference? The system is revealing enough, as I could easily tell the difference between a SNAIC and a cheapo RCA->DIN a couple of years back.
Hello Andy
Although some report differences between the front and rear USB ports, and seem to think the ultimate in playback is a USB stick (I don't have an nDAC so haven't tried it myself)
James.
PS - if you've got a UPnP server, I've got an NDX (well, it's back at the dealer's having the new 24/192 board fitted). You've got my email address, and I've got a (relatively) quiet time at work for the next couple of months
UPnP servers can be arranged. We might have 30 mins of setup trouble, but I think there are 3 running on the various PCs here (I don't use them, I just haven't bothered turning them off)....
This one we will do.
PS. mail sent............