Where has the NDX into Hugo thread gone?
Posted by: Simon-in-Suffolk on 19 June 2014
Any ideas?
There were some heated debates, but no more so than other recent exchanges on the forum, and those threads are still there...
I can only think of negative, defensive reasons, which I don't associate with Naim at all.. I hope it wasn't to do with that.. perhaps the thread can go back into the Padded Cell? It was a fairly useful resource for those wanting to use their Naim equipment with a Hugo source..
"The 8FS filtered output – as found in most audio DAC’s - does not look like the original sine wave, but the 2048FS looks perfect
The 8FS output has a number of problems – the big step changes overloads the analogue sections creating more HF distortion"
Source: the 'Hugo Technology' presentation on the Chord website
If I have understood their approach to HF distortion correctly, Naim adds zero-value samples to aid its removal. As the name suggests, a zero-value sample has no value of its own. This, one would imagine, stays close to what was on the CD; it does not add something that was not there.
What values do the samples the Hugo adds have? Presumably non-zero values based on interpolation? If that is the case, then it concerns me, as I feel interpolation is inherently inaccurate for transients. I would rather have the 'no added sugar' version, personally.
To my mind, the CD recording is the reference after the analogue master. I would be very careful about adding to it. Which approach do others feel comfortable with?
Are those mini B usb sockets on the Hugo?
No, they're micro USB.
Hi Mark -
It's cool that you are working through the concepts and theory of signal reconstruction from discretely sampled data sets. I'd advise a bit of caution and balance between the concerns you are raising for yourself from what you are learning and what you enjoy or prefer when you hear various real-world implementations of this scientific art. One famous joke (well, famous in school at least!) is that in theory there's no difference between theory and practice, but in practice there is!
You raise legitimate concerns when you ask how the higher-frequency interpolated data can be known to capture the original signal more faithfully than the primary sampled data. In a previous post, I pointed out that for random data you can't do better at all. For deterministic data (where you know the functional form of the source signal - a pure sine wave for example, but any known function works too) you can be perfect - and since you have the function you don't need to refer to the sampled data when "filling in the gaps". In between these two limiting cases, you can still add "good" data points at a higher frequency, and different algorithms for re-using the primary sample data can be more or less accurate.

The only way to test the success of a given algorithm requires comparison of the reconstructed signal with the original... and the work of "knowing" (trusting, really) that a given implementation is higher or lower fidelity to the original, and under what circumstances (onset of bass notes, preservation of relative amplitudes for different frequencies, whatever), takes a lot more time and effort than just digitizing a single signal and then reconstructing it once on the fly. There really is rather a lot of signal processing out there - and I'm pretty sure you could build yourself an Excel spreadsheet to check out a few basic approaches with a few different interpolation filters... it's kinda fun and hugely instructive.
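(If Excel isn't your thing, here is a tiny Python sketch of the two limiting cases I mean - purely my own toy, nothing to do with anyone's product:)

```python
# Toy comparison (my own sketch, not any manufacturer's algorithm):
# how well does a simple linear interpolation recover the "missing"
# in-between points for a deterministic signal (a sine) versus pure noise?
import numpy as np

rng = np.random.default_rng(0)
n = 200
t = np.arange(n)

for name, x in [("sine", np.sin(2 * np.pi * 0.02 * t)),
                ("noise", rng.standard_normal(n))]:
    coarse = x[::2]                             # pretend we only sampled every other point
    interp = 0.5 * (coarse[:-1] + coarse[1:])   # linear guess for the in-between points
    truth = x[1::2][:len(interp)]               # what those missing samples really were
    err = np.sqrt(np.mean((interp - truth) ** 2))
    print(f"{name:5s}: RMS interpolation error = {err:.3f}")

# For the sine the error is tiny; for the noise it is of the same order as the signal itself.
```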
A couple of posts, including a recent one from you, have said that zero samples can't add information (or words to that effect) - and that you'd prefer your signal raw with no added sugar. Empirically, and almost purely based on your presence in this forum and your choice to buy expensive hifi gear, I'd say you prefer the clever engineering more than the naive straight digital to analog signal! If you wanted to, you could save a bundle and go back to a first generation CD player and re-experience the good old days!
Naim, and most other sophisticated product manufacturers (including Chord, he added to ensure thread relevance), have decided to produce an output signal with a much higher output clocking frequency than the input signal sampling rate. I think I remember seeing that Naim uses forty times higher output clocking than the standard CD Red Book value (but I'm vague on this and too lazy to surf and find it since this interface ditches my typed text if I go to a different browser tab). (((EDIT: Naim does a bit more than 16X, to 768 kHz; the 40 I remembered is the bit depth of their processor, more than 2X the original for CDs... sorry for the error, it's late here!))) They do this not (only!) for mathematical joy and rigour, but because it is a successful way to reconstruct a more faithful, and hence better sounding, output signal. Really, it works. Really, a lot of different mathematical approaches work too, some better than others, some more pleasing to some, others more pleasing to others. But generally, closer to the (unsampled portions of the) original sounds better and is more pleasing.
Most algorithms start by zero filling all the new, higher frequency, "dots" for the output, and keep the original data in their spots. Then any convolution style filter will mathematically add up a weighted sum of a given number of data points and update a zero value to the interpolated value. This is obvious if we just want to double the frequency and use a simple linear interpolation: the "one and a half" point will start out as a zero, and will be updated to be the arithmetic mean of the first ("one") and second ("two") points. No magic, no assumptions other than the guess that the local function that was valid between time one and time two is a straight line.

In the unlikely event that this is the correct function, then our interpolated point is exactly what the missing sample would have been if we went back and re-digitized at double the speed. Yay. In the equally unlikely event that we had random noise and so there is no relationship between the points at times one, one-and-a-half and two, then our interpolated value is almost certainly wrong and possibly wildly so. Boo. In general, though, for complex but non-pathological signals, like music, this interpolated point will be closer to the original than either the first or second point taken alone.

Fancier algorithms use more points at a time and adjust the weights (we used equal weights in the average above) to capture sharper features. Some approaches re-use interpolated data (either single-sided, using the 1.5 point when calculating the 2.5 point, or double sided using both the 1.5 and 2.5 points to calculate the 1.75 and 2.25 points...sort of) and some don't. Few, if any (but I'm not an expert), start off with non-zero filling - largely because that's a much harder assumption to generate in a meaningful way. So it is unlikely that Chord, or Naim, just "invent" interpolated data...they don't have to because they can "invent" the filter shape for the convolution and the scheme for how they will double and re-double the clock rate to build the interpolated waveform.
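If it helps to see that zero-fill-then-convolve step in something more concrete than words, here is a minimal numpy sketch of a 2x rate doubling (my own toy, emphatically not Chord's or Naim's actual filter):

```python
# Minimal 2x upsampling sketch (my own illustration, not any product's filter):
# zero-fill the new slots, then convolve with a short filter kernel.
# For 2x, the triangular kernel [0.5, 1.0, 0.5] reproduces exactly the
# "average of the two neighbours" linear interpolation described above,
# while leaving the original samples untouched in their slots.
import numpy as np

x = np.array([0.0, 1.0, 0.8, -0.3, -1.0, -0.2])     # original samples

upsampled = np.zeros(2 * len(x))
upsampled[::2] = x                                   # originals keep their spots, zeros in between

kernel = np.array([0.5, 1.0, 0.5])                   # linear-interpolation filter
y = np.convolve(upsampled, kernel, mode="same")

print(y)
# Originals come through unchanged; each new in-between point is the mean of its
# neighbours. A "fancier" filter is just a longer kernel with different weights
# (a windowed sinc, for example) applied to exactly the same zero-filled stream.
```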
I hope this helps ground your arithmetical thinking in some quasi-visualization. I find it useful to think through things this way. But I find it fun to listen and marvel at the results of the clever people who have done all this for me and put it in a nice box! Please let me know if I've missed the point or gone down a wrong road here. If not, we can walk through a scheme to decide how to use adaptive filters to get faithful transient recovery with higher output clocks than the original sample rate. As a quick example, though, consider the (2-D) image processing technique that leaves only edges (often shown in black and white) as an extreme way of interpolating for triggers.
Best wishes. Regards alan
Hi Alan
thanks for this concise introductory write-up.
Very easy to follow and a nice read.
Thank you for taking the trouble and time to write this.
Cheers
Aleg
Richard, is the old thread being re-enabled? There was some good review info and comments, and a lot of people, including myself, spent some time compiling it... and I kept no local copy.
If not, can you make the thread available to be sent as an email, please, for reference?
Simon
Simon, it's not a priority - it seems it just collapses into personal insults - and right now there are other more important things taking my time. However, I'll give it a review and prune if I have a moment over the weekend and then decide whether to put it back up. As this thread has gained traction, I might lock the old one...
Richard, thanks for the update, and I understand.
Mark, perhaps the interpolation is causing confusion.. certainly I find the Chord presentation hard to follow, and I understand some of this and have worked with some of this stuff..
In your earlier post I think you are referring to interpolation when oversampling, which, when used for noise shaping, is in my understanding nearly always done with zero samples... certainly I have always done it this way.. and mathematically it's a no-brainer.
However the Chord document also refers to the reconstruction low-pass filter as an interpolation function.
This uses Rob Watts' WTA algorithm, which he states is based on the Whittaker-Shannon interpolation, or sinc, function.
This sinc function is the low-pass digital filter that many DACs use to reduce quantisation errors and remove shaped noise.
However it's the implementation of this reconstruction filter or interpolation filter that has a marked influence on the sound and accuracy of the DAC, in my opinion. And in the Chord blurb this is where we hear about the number of 'taps' used, FPGA processing etc.
Ultimately, if this filter uses an infinitely long stream of sample pulses and an infinitely long filter kernel, then it will resemble the idealised perfect brick-wall filter, in which all frequency elements are maintained without distortion... but in practice there is always some distortion, as our world is finite...
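To put some rough numbers on the 'taps' point, here is a little numpy sketch I knocked up (a plain Hann-windowed sinc of my own, so only loosely in the spirit of WTA and certainly not Rob Watts' actual filter):

```python
# Rough illustration of the "more taps -> closer to a brick wall" point.
# This is my own sketch, not the WTA filter: just a Hann-windowed sinc
# low-pass evaluated at two different kernel lengths.
import numpy as np

def windowed_sinc(num_taps, cutoff=0.25):
    # Hann-windowed sinc FIR; cutoff is a fraction of the sample rate.
    n = np.arange(num_taps) - (num_taps - 1) / 2
    h = 2 * cutoff * np.sinc(2 * cutoff * n) * np.hanning(num_taps)
    return h / h.sum()

for taps in (17, 257):
    h = windowed_sinc(taps)
    H = np.abs(np.fft.rfft(h, 4096))              # frequency response on a fine grid
    leakage = H[int(0.30 * 4096)]                 # level just above the pass band
    print(f"{taps:4d} taps: response above cutoff ~ {20 * np.log10(leakage):6.1f} dB")

# The longer kernel rejects out-of-band energy far more effectively, i.e. it is a
# closer approximation to the ideal (infinitely long) brick-wall reconstruction filter.
```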
But, despite this, I can't see how information can be introduced through interpolation if it has been removed prior to the interpolation, even if we had the idealised Shannon-Whittaker function... I can't see mathematically how this could happen.. unless it is some weird psychoacoustic prediction process..
I agree the Chord presentation makes this as clear as mud and I can't see what they are really trying to say.
At the end of the day.. the Hugo sounds good, so I have tended to ignore some of their nonsensical descriptions.
Simon
Alan
Thanks for the lengthy obfuscation. Unfortunately it does not address the key points. However, I agree with your comments that 'our interpolated value is almost certainly wrong' and 'no magic, no assumptions other than the guess.' This was my point if you read my post (and all previous posts on this matter) again. I am glad we agree on this.
Let me break the two points down further, starting with the first one:
      ----------------------------------------------------
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 microseconds
Transient A begins at 3 microseconds. On the CD you have samples at 0 and 22 microseconds. The human ear (it is claimed) samples [sic] at 4, 8, 12, 16 and 20 microseconds.
Given that the sound shown above by the dashes has not begun at 0 microseconds and only exists in the 22-microsecond sample, you have no method of telling me that it begins at 3 microseconds. You cannot interpolate it, because you have nothing at 0 microseconds.
Given that support for hearing of detail at 4 microseconds and in particular the finer detail of transients is at the very heart of the claims contested here, I put to you a simple and clear proposition. That given the basics above, these claims are flawed. You cannot reconstruct the finer detail as claimed. As you say yourself, 'almost certainly wrong.' 'no magic' and 'the guess.'
ATB, MM
Simon
I feel we agree on this - as we have done before in the previous thread. I am not questioning whether you or anyone else likes the Hugo (and nor have I ever done so). What I have done is consistently question the claims as they have been presented by Chord, in hi-fi mags and by various posters on here. In my view, your characterisation of aspects of these as nonsensical is spot on.
Unless anyone else can come forward with new information - unlikely since it has not happened so far in hundreds of posts and there are only three or four of us contributing thoughts on the theory which are not just cut and pasted - I suggest we have reached agreement and can leave this point here.
I might develop the 'added sugar' point separately when time allows.
Nonsensically Yours, MM
Mark, whoops, I meant to say above:
"This sinc function is the low pass digital filter that many DACs use to remove aliasing errors, and remove shaped noise"
cheers
Thanks Jan.
It is quite simple Marky Mark -
Imagine a signal starts at your point number 11, and increases linearly with time, so that after 11 more time units (sampling point 22) it is at 100 arbitrary units. At sampling point 44 it will have an amplitude of 300 units. With linear interpolation you can reconstruct the straight line to show that it must have started at point 11. I don't see the problem. Obviously not all signals increase linearly, but not all musical signals leap to a maximum value immediately.
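Or, in code (the same toy example, nothing more - hypothetical numbers, not anyone's measurement):

```python
# Toy version of the argument above: two samples of a linearly rising signal,
# taken at t = 22 and t = 44, are enough to place its onset at t = 11
# by running the straight line back to zero amplitude.
t1, a1 = 22.0, 100.0     # first sample: time, amplitude (arbitrary units)
t2, a2 = 44.0, 300.0     # second sample

slope = (a2 - a1) / (t2 - t1)     # 100/11 units per time unit
onset = t1 - a1 / slope           # where the line crosses zero amplitude

print(onset)                      # -> 11.0, i.e. in between the sampling points
```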
Sorry it wasn't helpful for you Mark. Perhaps we are at cross purposes, as you say.
Plotting sinc(x) or sin(x)/x filters (they look like sombrero hats, not straight lines) for yourself might help you see how my comments in an earlier post (possibly in the deleted thread) about using arithmetic to "look ahead" and "look behind" when connecting dots does indeed enable you to simulate, often with very high accuracy, the original signal on a finer scale. I don't know that it is perfect at the infamous 4 us level, but it is not as improbable as you fear for 16X up-clocking and an intelligent filter that incorporates knowledge of plausible rise time rates. As someone else said, you do have pre- and post-information to use; your statement that you have nothing simply isn't correct.
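If it's easier to see it working than to argue about it, here is a small numpy sketch (mine alone, not any manufacturer's algorithm) of exactly that sinc-weighted "look ahead and look behind" sum, reconstructing a 16x finer time grid from coarse samples:

```python
# Whittaker-Shannon style reconstruction (my own sketch): every point on the
# fine output grid is a sinc-weighted sum over many coarse samples, taken from
# both before ("look behind") and after ("look ahead") the point of interest.
import numpy as np

n = np.arange(64)                                   # coarse sample times
x = np.sin(2 * np.pi * 0.07 * n) + 0.3 * np.sin(2 * np.pi * 0.21 * n)

upsample = 16
t_fine = np.arange(64 * upsample) / upsample        # 16x finer time grid

weights = np.sinc(t_fine[:, None] - n[None, :])     # (fine points) x (coarse samples)
x_fine = weights @ x                                # weighted sum = reconstruction

truth = np.sin(2 * np.pi * 0.07 * t_fine) + 0.3 * np.sin(2 * np.pi * 0.21 * t_fine)
interior = slice(16 * upsample, 48 * upsample)      # ignore the truncated edges
print(np.max(np.abs(x_fine[interior] - truth[interior])))

# Away from the edges, the reconstructed fine-grid signal tracks the true waveform
# closely (the printed maximum error is a small fraction of the signal's amplitude),
# even though nothing was ever sampled at that resolution.
```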
If your goal is to debunk marketing lingo, I wish you success.
Regards alan
Dear Simon/all,
Reading all the good comments on this forum, I went and ordered one for a home demo. It is now sitting on my rack and playing for the last hour.
A comment first: I have not been able to find any good coax digital cable that works with the Hugo. That is pretty annoying... so I had to use the supplied optical cable to connect the Dac to my streamer...
My observations so far:
1) To get to the same sound level I am used to with my TeddyDac, I now need to adjust the volume control of my NAC 252 significantly (for example, I need to dial the 252 to 10 or 11 o'clock vs 9 o'clock when I listen to the TeddyDac). Has anyone observed that as well? Or is there something wrong?
2) After 1 hour of listening, I am puzzled, as I do not hear what I was expecting to hear. The sound is "ok", with a good analogue feel to it... but that is it, basically. I was expecting a lot of detail, but actually it is on par with the Teddy (to my ears at least). I was also expecting a bigger soundstage and better drive; it is actually a smaller window (vs my previous CDS3, and definitely vs the TeddyDac... much better space) and it feels much less dynamic than the TeddyDac...
I have high respect for all the people on this forum, and I do not want to insult anyone. I guess something is wrong in my set-up. Has anyone tried with the supplied optical cable? Any similar experiences?
Intriguing - it does sound like something is very wrong - especially because of the listening level required on the 252 and the apparent lack of dynamics - you might want to get your dealer to check it over with you.
1) I use a Coax SPDIF lead. Specifically I use a BNC to Phono Naim DC1 lead between my NDX and Hugo
2) Have you put the Hugo into lineout mode? This sets the level suitable for many preamps. The colour of the digital volume LED should be bluey/purple. Even then the default lineout level is rather high for the standard Naim sensitivity so I trim the digital level down further.
I find the volume out means regular listening on my 282 is between 8 and 9.30 - depending on content. I use a Naim Hiline Phono to DIN to connect Hugo to 282.
I use the optical for films/TV from my Set Top Box - but it does sound very impressive.
Simon
Hi Louis-André,
As Simon suggests, adjust the volume on the Hugo, after starting it using the *volume bypass* mode (which doesn't actually bypass the volume, but sets it to a suitable level to feed a preamp).
Like you, I was not initially impressed with the Hugo, but it crept up on me. How? I was regularly stopped in my tracks by the reality of a drum strike, the naturalness of a violin, an extra level of meaning conveyed in a singer's words, an improved definition in the bass, and so on.
Also, bear in mind that threads like these generate very high expectations. That combined with an initially unimpressive performance led me to wonder what all the fuss was about. All I can say is give it time, don't try to analyze it, but do see if your reactions to music change. In my case, it became very clear that I was reacting to music through the Hugo far more closely to how I react when at a concert, than other DACs that I have lived with.
As for cables, you don't need anything exotic. I did a large part of my listening using the optical cable supplied with the Hugo. I'm currently running it from a MacBookPro (+ Audirvana) using the stock USB to Micro-USB cable provided, and it's very, very good (into a SN2).
Jan
I am turning the volume knob of the Hugo... I think it solved it... but when I turn the knob too high I get a distorted sound through the speakers, so I just turned the knob down a bit and it seems ok... but is it normal that it does that?
Louis
Yes, this is normal. If you turn the Hugo volume up too high it will drive the input on your 252 too hard, which causes distortion. Turn the Hugo down until you lose the distortion. I read somewhere that this is the recommended approach. Hope it helps with improving it.
Graeme
Yes, it's normal.
Hugo can put out 3V at maximum volume.
My 282, and I suspect your 252, has an input sensitivity of 75mV, so Hugo can really push it into distortion.
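(As a rough sense of scale, taking those figures at face value: 3V is forty times 75mV, which is about 20 x log10(40) ≈ 32dB hotter than the preamp input expects, so there is plenty of scope to overload it.)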
I have Hugo's volume at a green level giving me normal volume levels at 8-9 on my 282
Keen to know how it compares with Teddy Dac once you have normalised the volume issue.
Alan
I feel you may still be missing the point. Imagine you are somewhere within a 22 microsecond time period.
Let's say when you "look ahead" to the end of the period you have something, and when you "look behind" at the beginning of the time period you have nothing. Where does the something ahead of you start, precisely?
The answer is you can only say with certainty it starts at the next sampling point on the CD. You cannot manufacture greater precision on the start point to say 4 microseconds of accuracy. Hypothetically the 'something ahead' could actually start anywhere from 1 microsecond to 21 microseconds into the 22 microsecond period.
Your argument does not even hold for the envelope of a wave let alone the start / stop.
As for your comment above that "I don't know that it is perfect at the infamous 4 us [microsecond] level"... well, Alan... that is the whole point, my friend. Glad we agree on this.
ATB, MM
Monty Montgomery's video (link below) clears up a lot of the confusion on signal reconstruction, notably the *staircase* representation of a reconstructed signal. Timing resolution is discussed at the 20 minute mark in the section on Band limitation and timing. The video lasts just under 24 minutes.
Well worth watching.
http://xiph.org/video/vid2.shtml
Jan
Keen to know how it compares with Teddy Dac once you have normalised the volume issue.
Yes indeed, I don't know if anyone has compared these two. Maybe you just prefer the Teddy DAC.
Analogmusic wrote
"Keen to know how it compares with Teddy Dac once you have normalised the volume issue"
Louis-Andre,
I would also be keen to hear your view after you have had a while to bed the Hugo into your system.
I myself was about to purchase a Teddy DAC (having had success with his stuff in the past) before this thread (or its forerunner) made me switch to the Hugo. A bit of a leap of faith with either one because I wasn't able to demo either.
I'm now extremely happy with what I've got, but it would be interesting to hear your view.