NDX and Chord Hugo
Posted by: Foxman50 on 18 April 2014
I have been contemplating adding a DAC to my NDX/XPS2 to see (or should that be hear) what it can bring to the party, and so thought it about time I made inroads into having a few home demos. After looking around at products within my budget I came across the Chord Hugo DAC.
Although it is meant to be a portable headphone unit, it can be used as a full line level fixed DAC.
The dealer lent me a TQ Black digital coax lead, which has twist-grip plugs. This was required as the present batch of Hugos has a case design fault that won't allow any decent cable to fit, soon to be rectified. Thankfully the TQ just manages to hang on to the coax port.
Once it was all connected and I'd gone through the Hugo's minimal setup procedure (what does the red LED mean again?), I left it to warm up for half an hour.
Poured a beer and sat down for an evening's listening.
What was that? Where did that come from? So that's what that instrument is! OMG, as my little'n would say - where is it getting all this detail from?
After spending last night and today with it, all I can say is that it has totally transformed my system from top to bottom. I never considered my NDX to be veiled or shut in - not even sure those are the correct terms. But it's opened up the soundstage and the space around instruments. Everything I've put through it has had my toes, feet and legs tapping away to the music.
Even putting the toe tapping, the resolution and the clarity to one side, its greatest achievement for me has been making albums that I'd previously had trouble listening to enjoyable.
One added bonus is that it has made the XPS redundant. I cannot hear any difference with it in or out of the system.
While I thought a DAC might make a change on the order of the jump from ND5 to NDX, I was not prepared for this. Anyone looking at adding a PSU to their NDX may want to check this unit out first.
For me this has to be the bargain of the year.
Jasonf wrote:
"I think we are all aware of that. I wouldn't get too touchy about it and try to see the lighter side".
Fair comment and accepted. For a horrible moment I thought you were being serious.
I agree - and the back and forth banter is quite entertaining even if a little off-topic.
But don't stray off topic guys!
Agreed, I am guilty of some silly banter sometimes.
Jason.
p.s. But I was serious about reading the title wrong...or has there been a thread titled Hugo v NDX...now I am doubting my own mind
No Jason, it's YOUR MIND....
G
It gets to us all in the end!
Hugo.
Jasonf wrote:
"Agreed, I am guilty of some silly banter sometimes.
p.s. But I was serious about reading the title wrong...or has there been a thread titled Hugo v NDX...now I am doubting my own mind"
Actually Jason, I quite like your banter! And you have got a beard, which is good in my book.
So is this thread the epicentre of the Hugo love, or is the rest of the world on the same wavelength?
Well, David Price at Hi-Fi Choice is with the lads here. Hi-Fi World and Hi-Fi+ think it's nearly as good as DACs 2-3 times its price (which suggests it's not an NDS botherer) and What Hi-Fi think it's better, in most respects, than the V1.
Other forums? Well, PFM folk like it very much, but others are more mixed and Linnies are uninterested (hopefully, forum rules have been respected?).
I'm not, for one moment, doubting the experiences of the Graemes, Simon and the others. Clearly, the Hugo is special and has been very well received by all, but it does have to be quite extraordinary to beat the NDS + 555.
Keith
Keith, do what I did, and listen for yourself.. only you can decide.. For me it was an NDS beater.. But then I was not the NDS's greatest fan.. I blame my ears
But others will clearly have a different view, and this is good and healthy, it matters not..there is not a defined order or hierarchy out there.
Simon
It's difficult for me to try as my dealer doesn't stock Chord (and he curled his lip when I mentioned this thread to him).
...but you've made my point for me Simon. You prefer the NDX to the NDS which is a minority view in these parts. One of the Graemes doesn't like Naim amps (he's a Sugden owner, I think).
So it's opinions being expressed by Hugo owners with which others might not agree.
I already have a source which sounds better than the NDS and cost me much less (LP12)
I know I should try the Hugo, but in fairness, I failed to persuade you to try AQ Vodka Ethernet a few months ago and you probably don't plan to borrow a Sondek any time soon ;-)
Keith
I have Naim amps and an LP12 and I liked the Hugo.
Yep, you do Steve, and I look forward to your Hugo vs. NDS vs. NDAC playoff, but you're quite a small sample on your own, statistically speaking.
Keith
Hi Keith, yes I think you meant, I prefer the NDAC/555PS to the NDS...as good as the NDX DAC out is, it's a country mile from the NDAC or NDS. But anyway I have moved on now.
Intrigued by your dealer's response.....
Simon
Exciting development with the disruptive Hugo. I am extremely happy with my NDS/555DR fed by a US-SSD as far as the music goes. If a Hugo can match or even beat that, it would be an interesting proposition.
However, the real game-changer for me would be to cut the Naim user interface out completely, as my wife simply does not get along with it and does not use the system for that reason. Also, I find the integration with hi-fi streaming services not very good.
The digital out feed is of course important, and some notice a difference between NDX and ND5 when feeding the Hugo. I would be very interested in observations on, e.g., a Mac Mini into Hugo vs NDS/555. The Hugo is not distributed in Denmark, so it's hard to test at home easily. So the question is whether a Mac Mini type interface gives a good enough digital out to bring the Hugo into the league of the NDS/555DR.
Exciting times with lots of innovation,
Lars
Keith wrote:
"Yep, you do Steve, and I look forward to your Hugo vs. NDS vs. NDAC playoff, but you're quite a small sample on your own, statistically speaking."
In that case there's more reason to try it for yourself. If you don't want to, why comment on this thread? Also, any assessment of the Hugo/NDS/NDAC would be irrelevant to you.
Not if I connected a Hugo to the digital out on the NDS. I'm interested to know your views on the Hugo vs. the NDS's DAC. I said it was difficult for me to try, not impossible.
Keith
Point taken about David Price.
A few years ago I heard a Funk Firm FXR tonearm. On demo, it was better than an SME V and as good, if not better than the Aro. DP reviewed it in Hi-Fi World and concluded it was the best he'd heard and a bargain at £1,500. A couple of months later the same magazine did a group test in which the Funk Firm arm was beaten by an Inspire arm which cost £700.
Keith
Agree. It is a shame that Naim does not support an open operating system. If customers were able to run an open system on, say, a UnitiServe, no one would seriously think about buying PCs or Mac Minis (except, maybe, to save some money). But here we are: Naim goes the proprietary way and customers go their own way.
On the other hand, I can very well understand that Naim does not want to (have to) deal with troubles caused by customers running non-Naim applications on an open system on Naim hardware.
But, of course, there are ways to precisely define liabilities while promoting open systems and I think that, in this respect, Naim is doing a poor job.
Lars wrote:
"however the real game-changer for me would be to cut the naim user interface out complete as my wife simply does not get along with it, and does not use the system for that reason. also i find the integration with hifi streaming services not very good".
Strange - I actually like both the Naim (nStream) and Linn (Kinsky) user interfaces. One of the reasons I will be retaining my ND5 for use with my Hugo.
I haven't had a chance to try out an NDX with the Hugo, so cannot comment about any possible differences in sound quality, but I would be very surprised if there was an appreciable difference.
The transmission of information about new resources usually starts with one determined advanced scout (well done scout) and the subsequent amplification of the message as further (presumably at the higher end of susceptibility to suggestion) individuals join and augment its volume.
As bees are very honest there probably always is a resource. The number of members of the hive who visit will depend on the general funkiness of the scout's dance and the amplifying effect of, later, synchronised funkiness.
No one bee can pretend to have picked the best source of food - the process of determining that, or an approximation, falls to the hive mind.
Some bees reject assimilation. Perhaps they're wasps in bee's clothing.
And with that and Wimbledon in mind - perhaps we should remember there are many ways to play the game.
And those bees make a very good beer!
Calling all Hugo users connecting into Naim (or anything else): has anyone had any success with Bluetooth audio from Apple devices? It sounds better from Android devices, but is still miles away from the performance of USB or, even more so, SPDIF from the NDX. I wonder if the Bluetooth audio format is just too compromised and a quality audio system just shows it up for what it is?
Simon
This has been a long and twisty thread! I have had my interest piqued but haven't heard the Hugo yet. I'm jumping in on the math bit here, as people seem to be focussing on quite different things about signals and signal processing. I have no opinion on the sound quality of this device, nor on the highest range of Naim gear ... I have a SuperUniti and a UnitiQute and am very happy with the music in my home! But tempers are flaring and I've come to pour oil on the waters (not on the flames!).
Someone mentioned Nyquist and interpolation and filtering all in one go... The implied claim is that the reconstructed waveform produced from the sampled waveform more accurately represents the original waveform in this DAC implementation for reasons related to these concepts.
This has irked some folk and I think it's because most understand that the Nyquist limit sets the highest Fourier component (ie frequency) in the original you can recover absolutely from your sample (the famous "half"). Most also accept that inaudibly high frequencies are important to the way we hear what we hear - for attack, timing, space, and so on...we don't listen to pure sine waves when we listen to music, so cutting off just above the upper limit of the audible spectrum doesn't simply mean that only dogs and bats are affected. In this discussion, it is unfortunate that Nyquist was invoked, since it's not germane once the sampling is done and reconstruction is the task. On the "graph paper" picture in our heads, this part is related to how well we can recover the X-axis (ie right/left, or, usually, time). This is the kHz aspect, or the sample-rate aspect, whichever you prefer...and some people prefer listening to reproductions from higher sampling rate recordings (eg 96 kHz or 192 kHz rather than the 44.1 kHz of CD) when we start talking about the digital stream we will convert back to an analog signal.
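To make the Nyquist point concrete, here's a tiny numpy sketch (my own toy illustration, nothing to do with the Hugo's internals or the lecture notes): a 30 kHz tone sampled at 44.1 kHz produces exactly the same sample values (give or take a sign) as a 14.1 kHz tone, which is why anything above half the sampling rate has to be filtered out before sampling rather than sorted out afterwards.

```python
import numpy as np

fs = 44100.0                       # CD sampling rate (Hz); Nyquist limit = fs / 2 = 22050 Hz
t = np.arange(0, 0.01, 1 / fs)     # 10 ms worth of sample instants

f_high = 30000.0                   # a tone above the Nyquist limit
alias = fs - f_high                # ...which will masquerade as 14100 Hz once sampled

x_high = np.sin(2 * np.pi * f_high * t)
x_alias = np.sin(2 * np.pi * alias * t)

# At the sample instants the 30 kHz tone is simply the 14.1 kHz tone inverted,
# so the two are indistinguishable after sampling - hence the anti-alias filter
# must remove everything above fs/2 *before* sampling, not after.
print(np.max(np.abs(x_high + x_alias)))   # ~0
```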
Others have introduced different sources of imperfection in the production - sampling - reconstruction - reproduction chain, such as resolution of the (sampled) signal. This is the Y-axis (ie up/down, or, usually, amplitude). When sampling, this is the bit-depth or word size, whichever you prefer. More bits give more resolution, with the increased precision corresponding to a more accurately captured signal. Again, some people prefer listening to higher bit-depth recordings: 24-bit hi-def recordings are often "better" than the 16-bit CD standard.
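Purely as an illustration (the bit depths and the test waveform here are arbitrary choices of mine, not anything from the thread), the rounding error introduced by quantization halves with every extra bit:

```python
import numpy as np

def quantize(x, bits):
    """Round a signal in [-1, 1] to the nearest of 2**bits levels."""
    levels = 2 ** (bits - 1)          # e.g. 32768 steps per polarity for 16-bit
    return np.round(x * levels) / levels

t = np.linspace(0, 1, 1000)
x = 0.5 * np.sin(2 * np.pi * 5 * t)   # arbitrary smooth test waveform

for bits in (8, 16, 24):
    err = x - quantize(x, bits)
    print(bits, "bit  worst-case rounding error:", np.max(np.abs(err)))
# Each extra bit halves the worst-case error (roughly 6 dB more dynamic range per bit).
```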
People have talked about interpolation, and it's happening in all DACs for BOTH the time and amplitude axes. When we see the "step-wise" reconstruction (the zig-zag graph), it's obvious that no Y-axis interpolation was done, and slightly more subtle that there is now an interpolation on the X-axis: but, while we may "jump" up and down at a single value of X, we never need to lift our pencil from the paper. Some think of this as a "sample and hold" approach, or a "set and hold" if you prefer for conversion to analog. That's described in the PDF lecture notes.
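If it helps, the "sample and hold" idea can be written in a couple of lines. This is just my own sketch of the concept, not anyone's actual DAC code: each sample value is repeated until the next one arrives, which is exactly what produces the staircase picture.

```python
import numpy as np

def zero_order_hold(samples, factor):
    """Hold each sample value for 'factor' output points - the staircase reconstruction."""
    return np.repeat(samples, factor)

samples = np.array([0.0, 0.7, 1.0, 0.7, 0.0, -0.7, -1.0])
print(zero_order_hold(samples, 4))
```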
In the simplest implementation of digital to analog conversion, higher sampling rate and greater bit depth produce output waveforms that get progressively closer to the original signal waveform. There is a lot to be gained if you can go back and re-sample. There is also much to be gained, for any given digitized sample waveform, by being less naive, or rather more clever, when choosing your conversion algorithm.
The most important and, in this discussion, overlooked part of the story on analog waveform recovery from a discretely digitized sample, however, is the fact that we need to end up with a continuous signal: no "instantaneous jumps" in either X (time) or Y (amplitude). Many people immediately grasp that a simple "linear interpolation" - ie drawing the diagonal straight line between two adjacent points, rather than the horizontal and vertical lines in the step-wise graph - often looks "closer" to the original analog waveform. It also happens, however, that drawing the curved line between two adjacent points defined by the higher order polynomial that goes through some additional points, both before and after our two points themselves, is often an even better reconstruction. You could solve for the parabola from three points (our two and the next one, for example) or the cubic from four points (our two and one on each side), or whatever you wish. This can also be used for "smoothing" discrete sets of points, but here we're thinking of adding extra information in BOTH the X and Y (time and amplitude) axes.
It turns out, and you can see it in the sample photos in the PDF lecture notes, that you really can do a better job of image (or audio) reconstruction than the simplistic step-wise approach. You don't have absolute knowledge - ie you are not getting information back that you threw away on sampling - but if the data conform to your assumption (in this case that the signal has a smoothly varying behaviour), then the pencil line your arithmetic draws from the dots you sampled really is closer to the original line on the piece of paper that you threw away. If it's random data, you cannot win; if it's not, you can at least improve ...
We are re-using data: we use four sampled points to join the middle two (1,2,3,4 to join 2 and 3, say), and we use three again plus the next one to join the next pair (2,3,4,5 to join 3 and 4, say). The new information is our assumption as built into our arithmetic (equation) model. If you were doing this by hand, drawing a curve freehand through the sampled points as someone suggested, it would be equivalent to looking at earlier and later points to infer the "most likely" way to join the line to the next point smoothly and continuously.
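For anyone who wants numbers rather than pictures, here is a rough sketch using scipy's stock interpolators. The test signal, the coarse sampling rate and the choice of interpolators are my own illustrative choices (nothing like the actual filter in the Hugo or any other DAC); it simply compares the staircase, the straight-line join and a cubic fit against the original smooth waveform.

```python
import numpy as np
from scipy.interpolate import interp1d

t_fine = np.linspace(0, 1, 801)            # the "original" dense time axis
original = np.sin(2 * np.pi * 2 * t_fine)  # a smooth 2 Hz test waveform

fs = 8                                     # coarse sampling rate, just for illustration
t_samp = np.arange(0, 1 + 1e-9, 1 / fs)    # the handful of sampled instants
samples = np.sin(2 * np.pi * 2 * t_samp)

for kind in ("previous", "linear", "cubic"):   # staircase, straight lines, cubic spline
    rebuilt = interp1d(t_samp, samples, kind=kind)(t_fine)
    print(f"{kind:8s} max error vs original: {np.max(np.abs(rebuilt - original)):.3f}")

# For a smoothly varying signal the cubic reconstruction sits far closer to the
# original than the staircase or the straight-line join, as described above.
```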
Someone else mentioned that zero times anything is zero. True for multiplication, but not true for this kind of interpolation, where one way to describe the smoothing is by a mathematical operation called convolution...a trick to speed up the arithmetic for doing our "rolling average" or "filtering" or however you like to think of this interpolation scheme. The trick, then, is that designers can choose how to set the filter shape, or the convolution functional form, and in so doing can control the time (X) and amplitude (Y) characteristics of the continuous analog output waveform -- and different DAC designs / algorithms can emphasize different desirable output characteristics such as "attack" or "smoothness" in the same way that reconstructed images can have "high edge contrast" or "less granularity".
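One common way to realise this "interpolation as convolution" idea is to stuff zeros between the samples and then convolve with a low-pass kernel. The sketch below uses a short windowed sinc, but the kernel length, window and upsampling factor are entirely my own arbitrary choices - the point is simply that the designer picks the kernel, and the kernel sets the character of the output.

```python
import numpy as np

def upsample_by_convolution(samples, factor, taps=64):
    """Insert zeros between samples, then convolve with a windowed-sinc kernel.
    The kernel shape is the designer's choice and shapes the reconstructed output."""
    # Zero-stuff: one real sample followed by (factor - 1) zeros.
    stuffed = np.zeros(len(samples) * factor)
    stuffed[::factor] = samples

    # Windowed-sinc low-pass kernel with cutoff at the original Nyquist frequency.
    n = np.arange(taps) - (taps - 1) / 2
    kernel = np.sinc(n / factor) * np.hamming(taps)
    kernel *= factor / np.sum(kernel)      # keep the overall gain at unity

    return np.convolve(stuffed, kernel, mode="same")

samples = np.sin(2 * np.pi * 0.05 * np.arange(100))     # a slow test tone
smooth = upsample_by_convolution(samples, factor=8)
print(len(samples), "->", len(smooth), "points, peak level", round(np.max(np.abs(smooth)), 3))
```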
I don't know if this helps or hinders the dialog, but it is offered in the spirit of ensuring that we're talking about the same things...not marketing hype, not jargon, not qualitative descriptions of what we hear or what we prefer, but one way to think of what is happening in the world of digital recording and, more importantly for this forum, in the realm of (continuous) analog reproduction from (discrete) digital source files.
Regards, alan
Thank you Alan for your coherent explanation, which substantially advances the discussion. For me, the most fascinating aspect of all this was the realization of just how much *missing* information can be reconstructed by improving the interpolation, a point that the images in the PDF neatly illustrated.
It's a difficult issue to grasp (reconstructing information that was thrown away) but your post has been very helpful.
Jan
Simon wrote:
"Calling all Hugo users connecting into Naim (or anything else): has anyone had any success with Bluetooth audio from Apple devices? It sounds better from Android devices, but is still miles away from the performance of USB or, even more so, SPDIF from the NDX. I wonder if the Bluetooth audio format is just too compromised and a quality audio system just shows it up for what it is?"
Never found the need to use it, Simon, and now you bring this to light I'm no longer tempted to try.
Cheers
G
Jan wrote:
"For me, the most fascinating aspect of all this was the realization of just how much *missing* information can be reconstructed by improving the interpolation... It's a difficult issue to grasp (reconstructing information that was thrown away)."
It can also be difficult to grasp that with 44,100 samples in a second, what happens in the tiny gaps between them is not something you're going to be able to separate out. The end result is a smooth analogue signal. As an experiment, draw 44,100 dots on each of two A4 sheets of paper. On one, join the dots exactly. On the other, feel free to freestyle a bit between the dots - whilst still ensuring you join them properly. Now examine both sheets of paper and describe the differences. That's right, there are none. The dots are too close together for any differences to show.
Many on here have already agreed it is difficult to grasp how interpolation can provide a 4-microsecond level of accuracy in the start and stop of sounds, among other things, when the sampling interval on CD is only about 22 microseconds. If the 4-microsecond hearing sensitivity were even true, how would you know whether a sound started 3, 8, 12, 17 or 21 microseconds after the last sample when the samples are 22 microseconds apart? Interpolation doesn't tell you that, because it cannot. What you get is a distortion. You cannot recover this microscopic original timing detail even should you feel the need to. The theory falls apart on the start/stop of sounds - which we might all agree is an issue with a piece of music.