ND5 vs NDX into nDAC
Posted by: Warpeon on 01 February 2012
Has anyone made the comparison?
I already have an nDAC and am thinking of a streamer; should it be NDX or ND5? I could probably use the extra budget for an XP5 XS.
I'd appreciate any thoughts you would share.
Indeed. And it's no different from 25 years ago, the last time the DAC was in fashion. Also, I have two digital interconnect cables which sound significantly different to each other, and both differ from the DC1. Why? Who cares. I go where my ears and taste tell me. The most important thing is that we're content with our choices. Nothing is "wrong" - if it's not broken.
BTW - is this a summary of your theory: a stiffer power supply results in less digital output noise from the streamer, and that digital noise jumps across the nDAC's front-end isolation into the DAC chips and analogue stages?
That is certainly possible, but it's significantly mitigated with an off-board power supply, where on the nDAC the digital and analogue supplies are then physically separated rather than fed from the same transformer. But I think it is equally likely to be about the creation of very low-level phase-distortion sidebands in the DAC master clock, or perhaps a combination of the two.
When power-supply interference mixes with the signal, it masks low-level information and causes intermodulation with the signal. This destroys the integrity of the music signal.
True for analogue but not for digital.
Further, noise present in an SPDIF transceiver will modulate the clock and produce phase distortion in the SPDIF biphase encoding. This distorted stream, passing through the receiver's SPDIF switching logic, causes perturbations on the receiver's power lines, which in turn have the potential to induce phase noise in the receiver's clocks.
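[Editor's note] The mechanism being argued over here - periodic phase modulation of a clock producing discrete sidebands - is easy to demonstrate numerically. Below is a minimal sketch (my illustration, not anything measured on Naim kit): a clock tone is phase-modulated by a small sinusoidal "jitter", and the spectrum shows sidebands at the carrier plus/minus the jitter frequency, at roughly the level predicted by narrowband FM theory. All frequencies and the 0.01 rad deviation are arbitrary illustrative values.

```python
import numpy as np

fs = 1_048_576          # simulation sample rate, Hz (chosen so tones land exactly on FFT bins)
n = 65_536              # FFT length; bin spacing = fs/n = 16 Hz
f_clk = 100_000         # "clock" fundamental, Hz (illustrative)
f_jit = 1_600           # phase-modulation (jitter) frequency, Hz (illustrative)
beta = 0.01             # peak phase deviation, radians: a tiny amount of jitter

t = np.arange(n) / fs
# A clock whose phase wobbles sinusoidally: sin(2*pi*f_clk*t + beta*sin(2*pi*f_jit*t))
jittered = np.sin(2 * np.pi * f_clk * t + beta * np.sin(2 * np.pi * f_jit * t))

spec = np.abs(np.fft.rfft(jittered * np.hanning(n)))
freqs = np.fft.rfftfreq(n, 1 / fs)

carrier = spec[np.argmin(np.abs(freqs - f_clk))]
sideband = spec[np.argmin(np.abs(freqs - (f_clk + f_jit)))]
level_dbc = 20 * np.log10(sideband / carrier)

# Narrowband FM theory predicts sidebands at f_clk +/- f_jit,
# about 20*log10(beta/2) = -46 dBc relative to the carrier.
print(f"sideband level: {level_dbc:.1f} dBc")
```

Even at 0.01 radians of phase wobble, the sidebands sit only about 46 dB below the carrier, which is why low-level clock phase modulation is taken seriously.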
A bit of noise in the SPDIF receiver clock doesn't matter, as long as the data's getting through OK.
and therefore less phase distortion and fewer consequential sideband errors on clocked signals, which can couple to and modulate sensitive DAC and DSP clocks
The master oscillator that determines the timing of the DAC chips is very important - and I suspect it has a low-impedance output driver and careful PCB tracking. The DSP clock is certainly not critical - you could probably feed the DSP clock input with a WAV file of me singing "Oh for the wings of a dove" and it would still work.
Noogle, not at all, you missed the point on point 2). You need to get below the 30,000-foot view to understand this. Spectral noise power density, as you should know in order to understand the concepts here, is preserved when the signal converts between the digital and analogue domains: there is no loss of energy. Time can't convert to heat! It can't vanish into thin air... Good old DSP maths, dear boy.
Point 3), again, ehhhmmmm, no! Not with the Naim architecture: the master clock drives the DSP data rate and oversampling, which drives the I2S and sets the data bandwidth via what looks like a synchronisation gate into the converters... this is where absolute phase via the gate, and bit precision, are critical for performance.
Anyway, I'll agree point 1 is minimal when passing from a digital source into the nDAC, but it's not eliminated even if the nDAC's off-board PS is used: imperfect star-earth RF currents and EMI remain.
Anyway, I'd rather not have circular, uninformed challenges again. Provide some concrete engineering or mathematical theories/principles to discuss; otherwise, please trust what others hear and take the time to read up on the physics. It really is fascinating if you can follow the maths and electronic-engineering principles. I can provide references to some great texts that explain the principles here.
Simon
Noogle, not at all, you missed the point on point 2). You need to get below the 30,000-foot view to understand this. Spectral noise power density, as you should know in order to understand the concepts here, is preserved when the signal converts between the digital and analogue domains: there is no loss of energy. Time can't convert to heat! It can't vanish into thin air... Good old DSP maths, dear boy.
Sorry, I've no idea what you're going on about here - are you saying that because there's noise on the digital input, there's automatically noise on the analogue output?
Point 3), again, ehhhmmmm, no! Not with the Naim architecture: the master clock drives the DSP and oversampling, which drives the I2S and sets the data bandwidth into the converters... this is where absolute phase and bit precision are critical for performance.
Yes, of course everything after the input buffer has to be synchronous to the master clock, as it's a synchronous digital system. The I2S data will be clocked into the DAC chip by the master clock, not by the DSP. I'd say a bit of jitter on the DSP clock or I2S data wouldn't matter - there's a buffer on the input of the DAC chips.
Toodle-pip.
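[Editor's note] Noogle's buffering argument can be sketched as a toy simulation (my illustration, not Naim's actual circuit): data words arrive on a jittery clock, sit briefly in a buffer, and are read out on clean, evenly spaced master-clock edges. The values are untouched, and the output timing no longer depends on the arrival jitter, provided the buffer never runs empty. All timing figures below are arbitrary.

```python
import random

random.seed(42)
PERIOD_NS = 22675                      # one 44.1 kHz frame period, in ns
N = 1000

# Each word nominally arrives at i*PERIOD_NS, with up to +/-200 ns of jitter.
arrivals = [i * PERIOD_NS + random.randint(-200, 200) for i in range(N)]

underruns = 0
consumed = 0
read_t = PERIOD_NS                     # clean output clock starts one period in
for _ in range(N):
    available = sum(1 for a in arrivals if a <= read_t)  # words now in the buffer
    if available > consumed:
        consumed += 1                  # latch one word on this clean edge
    else:
        underruns += 1                 # buffer ran dry (never happens here)
    read_t += PERIOD_NS                # constant period: zero output jitter

print("words output:", consumed, "underruns:", underruns)
# -> words output: 1000 underruns: 0
```

Because the jitter (200 ns) is far smaller than the frame period and the readout starts one period behind, every word is always present when the clean edge arrives: the data is bit-perfect and the output timing is set entirely by the master clock.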
Sorry, I've no idea what you're going on about here - are you saying that because there's noise on the digital input, there's automatically noise on the analogue output?
Yes... Bingo (digital noise in the time domain maps to analogue noise in the frequency domain).
It's a Gaussian distribution curve of noise power that is maintained between domains.
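[Editor's note] Whatever one makes of the audio claim, the "noise power is maintained between domains" part is, in DSP terms, Parseval's theorem: total power computed in the time domain equals total power computed from the spectrum. A quick numerical check with Gaussian noise:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 4096)         # a Gaussian noise record

time_power = np.sum(x ** 2)
# Parseval: sum(|x|^2) == (1/N) * sum(|FFT(x)|^2)
freq_power = np.sum(np.abs(np.fft.fft(x)) ** 2) / len(x)

print(f"time-domain power {time_power:.6f}, frequency-domain power {freq_power:.6f}")
```

The two numbers agree to floating-point precision; the DFT merely re-expresses the same energy on a different basis.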
Also, with I2S, jitter equals phase distortion, and I am sure you know what phase distortion equals in the analogue domain. BTW, in a synchronous setup, a buffer is like a low-pass filter with respect to time. Go figure...
Cheers
S
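[Editor's note] The "time-domain jitter maps to analogue-domain noise" claim can be checked numerically where it genuinely applies: at the conversion clock itself. Below, a test tone is evaluated at jittered sample instants and the resulting error is compared with the standard small-jitter estimate SNR = -20*log10(2*pi*f*sigma_t). The 1 ns RMS figure is purely illustrative, not a measured value for any product.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 44_100.0
f = 10_000.0               # test tone frequency, Hz
sigma_t = 1e-9             # 1 ns RMS sampling-clock jitter (assumed)
n = 200_000

t = np.arange(n) / fs
ideal = np.sin(2 * np.pi * f * t)
# The same tone, but sampled at instants perturbed by Gaussian clock jitter:
jittered = np.sin(2 * np.pi * f * (t + rng.normal(0.0, sigma_t, n)))

snr_db = 10 * np.log10(np.mean(ideal ** 2) / np.mean((jittered - ideal) ** 2))
predicted_db = -20 * np.log10(2 * np.pi * f * sigma_t)
print(f"measured SNR {snr_db:.1f} dB, small-jitter prediction {predicted_db:.1f} dB")
```

At 10 kHz with 1 ns of RMS jitter, both come out near 84 dB: real, but small, and it scales directly with signal frequency and jitter, which is why conversion-clock quality matters more than jitter on upstream logic.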
Yes - but you're making the mistake of thinking that all the digital noise at the input goes out through the analogue signal output. Imagine if some of it went out of the Vcc and Gnd pins of the digital ICs into the power supply...
I2S is in the digital domain, so I don't understand your comment about phase distortion?
I think you need to sit down for a while with a breadboard, a pile of old 7400 series TTL logic chips and a noisy 5V SMPS power supply. True enlightenment will follow.
Dormez bien.
Noogle, morning... I2S is synchronised, so there is no clock reconstruction other than gating. The purpose of gating is aligning the phase of one signal with another.
As for 7400 chips, those were the days... Try upping the clock frequency in a synchronous circuit and with TTL you will get some challenging conditions with switching noise and race conditions. I once had a problem with an array of high-speed accumulator buffers in TTL causing supply noise on top of an already noisy supply line, and I had issues with false triggering of the non-maskable interrupt on the controlling CPU sharing the same power lines. It was a pig to debug. True enlightenment definitely came when I had to redesign and carefully regulate the power supply and signal routing to reduce EMI. Funny though, in my original designs I had not paid attention to this; it was only 1s and 0s in Karnaugh maps and logic simulators. How wrong I was. Nowadays I would use CMOS-variant logic chips: less switching noise, but higher-impedance inputs, and careful buffering required at speed.
Simon
Allen, morning. Noogle has just stated the principle of noise coupling in his post above. We might be getting there at long last... I can't help thinking it would be easier with a whiteboard and some well-targeted labs, though.
Allen, all joking aside, I have never visited the Naim factory and I would welcome the chance...
LOL! You mean hijack it more than it's been hijacked already?
I don't doubt that another DAC is being researched, possibly more than one. It's not a dead cert, and in the meantime there are three levels of DAC from Naim depending on power supply. Or is that four with the new sub-XPS2 one? The jump in performance between XPS2 and 555PS is big, and having gone from 252 to 552, and assuming my memory is reliable, the kinds of changes are similar.
I do wonder if Noogle would benefit from a Naim factory visit?
Would I be strapped into a chair with matchsticks in my eyelids and made to watch back-to-back propaganda videos about the mysterious "third force" in digital audio?
Yeah. I've heard the canteen can be a bit rough.
We're going this year.