NDX and Chord Hugo
Posted by: Foxman50 on 18 April 2014
I have been contemplating adding a DAC to my NDX/XPS2 to see (or should that be hear) what it can bring to the party, and so thought it about time I made inroads into having a few home demos. After looking around at products within my budget, I came across the Chord Hugo DAC.
Although it is meant to be a portable headphone unit, it can be used as a full line level fixed DAC.
The dealer lent me a TQ Black digital coax lead, which has twist-grip plugs. This was required as the present batch of Hugos have a case design fault that won't allow any decent cable to fit (soon to be rectified); thankfully the TQ just manages to hang on to the coax port.
Once it was all connected and I had gone through the Hugo's minimal setup procedure (what does the red LED mean again?), I left it to warm up for half an hour.
I poured a beer and sat down for an evening's listening.
What was that? Where did that come from? So that's what that instrument is. OMG, as my little'un would say. Where is it getting all this detail from?
After spending last night and today with it, all I can say is that it has totally transformed my system from top to bottom. I never considered my NDX to be veiled or shut in (not even sure those are the correct terms). All I can say is that it has opened up the soundstage and the space around instruments. Everything I've put through it has had my toes, feet and legs tapping away to the music.
Even putting the toe tapping, the resolution and the clarity to one side, its greatest achievement for me has been making albums that I've had trouble listening to enjoyable.
One added bonus is that it has made the XPS redundant. I cannot hear any difference with it in or out of the system.
While I thought a DAC might make a change of the same degree as the jump from ND5 to NDX, I was not prepared for this. Anyone looking at adding a PSU to their NDX may want to check this unit out first.
For me this has to be the bargain of the year.
I go out of town for a few days and look at what happened. The Hugo changed from a great DAC into a full-fledged conspiracy theory.
....and you missed the best (now edited-out) threads!
G
Jan, I don't quite follow your line... indeed the timing info is of prime importance, and Rob Watts and many others have gone to some length to talk about that.
However, the extended timing we are talking about with spatial locating has to be recorded and encoded if we are to store and replay it using current technology. If we use 2-channel PCM, the left and right channels are stored as alternate channel samples, so we have defined our storage medium. Any info we want to reconstruct from this medium has to have been encoded into it. Current digital technology requires that we sample this information, and that we filter the aliasing (artefact frequencies on decode, and audio above half the sample frequency on encode). Therefore, if we are to reconstruct this spatial information, we need to have used the appropriate sample rate to record it (so for 4 μs we would need at least a 500 kHz sample rate). Yes, in there there may be a lot of ultrasonic noise that arguably gives us no musical info, but our brains can potentially extract elements of it for spatial awareness of the source of the music.
But if we filter this info out when we record and digitally encode, as with Redbook's 44.1 kHz sample rate, we can't put the info back in. Not even through interpolation... the info has been removed and is lost forever.
If this is obvious to you I do apologise; please ignore.
Simon
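To make the "lost forever" point concrete, here is a minimal numerical sketch; this is my own illustration, assuming Python with NumPy and an arbitrary 60 kHz test tone, not a model of any real recording chain. A component above half the sample rate either has to be filtered out before encoding or it folds back onto an unrelated frequency; either way the 44.1 kHz file never contains it.

```python
import numpy as np

fs = 44_100.0            # Redbook sample rate (Hz)
f_ultra = 60_000.0       # hypothetical ultrasonic component (Hz)

# Sample one second of the ultrasonic tone at 44.1 kHz.
n = np.arange(int(fs))
samples = np.sin(2 * np.pi * f_ultra * n / fs)

# Look at what frequency actually ended up in the sampled data.
spectrum = np.abs(np.fft.rfft(samples))
freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs)
print(f"component in the air : {f_ultra:.0f} Hz")
print(f"component in the PCM : {freqs[np.argmax(spectrum)]:.0f} Hz")
# Prints 15900 Hz: the 60 kHz information is simply not in the file,
# and no amount of downstream processing can recover it.
```

Whether the studio filters it out or lets it alias, the result is the same: the Redbook file never stored it, which is the point being made above about interpolation not being able to restore it.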
Those that have the Hugo (Simon and others), please post some photos of it integrated into your systems. I assume switching is a manual affair... please elaborate?
Many thanks...
Mack
System is NDX:Hugo:SN2 so I'm only using the digi BNC out from NDX to RCA digi-in on Hugo (using Naim DC1) and RCA out from Hugo to Din-in on SN2 (using Chord Anthem 2). Volume set on Hugo to 'bypass' line-out.
Switching between inputs would have to be done manually if you intend to use multiple sources.
G
Mack
I'm confused: switching of what?
I have only one source in use with the Hugo.
Yep, switching is manual. I switch between optical, SPDIF (the one I use most) and Bluetooth... Bluetooth sounds the least good, but the kids seem to like it.
Simon
A conspiracy theory only for those who haven't heard it yet, I would say. I haven't noticed anybody returning one yet, or being disappointed after an audition.
Certainly not me!
This whole story isn't fun any more. We're not comparing similar products by SQ and price tag, so we can talk forever, because there will always be people who like one or the other.
We are comparing apples and oranges here: a little box which outperforms boxes three to four times more expensive. It is hard to argue about it; or, better said, it seems pointless to.
I would put a finger to my forehead and take a deep breath before I said anything against it. And that would happen only after I had a chance to hear it with my own ears, not just after reading the specs and reviews.
On the other hand, I am wondering why I'm wasting my time here. I have learned a lot from this forum over more than a decade. I even learned about that little magic box from this forum. Reading what people compared it to, it didn't take me long to go and audition it. I realised there must be something special about it; there is no way these people are crazy to be saying such things.
Once I got it, I tried to share my opinion about it, but for certain people it's more important to speculate than to give it a try.
So I'm out of this thread; I should probably spend more time listening to music instead. It's been better than ever...
Aleg,
I have multiple items plugged into my NDAC, so the same arrangement with the Hugo would mean manual switching on the Hugo itself: no remote or linked switching as with the SN2 and NDAC. Simon and GraemeH cleared that up. I would be feeding it from my Unitiserve. Sorry for the confusion.
On a side note, I'm auditioning the Rega Elicit-R and Saturn-R and am very impressed. An all-Rega system save for the speakers: RP-10/Aria/Saturn-R/Elicit-R via DynAudio Focus 220s. An interesting contrast to my Naim rig. So I'll arrange for a Hugo audition. It's all good.
Thanks...
It could be even worse, as the Hugo is imo not meant as a digital hub like the Naim DAC, but as a single-source DAC with different options for connectivity.
One could of course put one device on the SPDIF coax, one on the Toslink, another on the USB HD input and a final one on the USB SD input, but I don't think that is what it is meant for.
Switching is indeed manual, by pressing a small button on the side that cycles through all the input options.
Cheers
Aleg
The detail can be reconstructed by interpolation. Nyquist theorem.
http://users.encs.concordia.ca.../slides/Sampling.pdf
See section "Reconstruction of a Signal from Its Samples: Interpolation"
Jan, that article is theory, and remember that the Nyquist theorem is, as the name suggests, a theorem. It is a mathematical proof that, in an ideal system, the sampling frequency needs to be twice the highest frequency you wish to reconstruct. This is done purely mathematically; I am sure you can find the maths somewhere, it is an interesting proof. Please read it.
Now, in the article you cite there are many diagrams that show samples fitting exactly onto the reconstructed sine wave; does it not look perfect? But the samples the sine curves are being fitted to are an accurate record of the analogue waveform that was converted; that is an integral part of the theorem. However, in practical systems, the samples are only accurate to the word length being used (16 or 24 bits) and become less accurate (as a percentage) for lower amplitudes. So the interpolation shown in the paper you cite applies in theory, i.e. on paper, but not in practice. So, as Mark says, you cannot accurately interpolate because you have no idea what the accurate answer should be.
Nyquist also does not have to deal with things like capacitance & inductance in circuits either.
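To put a rough number on the word-length point, here is a small sketch, assuming Python with NumPy and arbitrary test levels rather than any particular converter: quantise a 997 Hz tone to 16 bits at full scale and at roughly -60 dBFS, and compare the error as a proportion of the signal.

```python
import numpy as np

def quantise(x, bits=16):
    """Round a signal in [-1, 1) to the nearest level of a 'bits'-wide word."""
    levels = 2 ** (bits - 1)
    return np.round(x * levels) / levels

t = np.linspace(0.0, 1.0, 48_000, endpoint=False)
tone = np.sin(2 * np.pi * 997 * t)

for amplitude in (1.0, 0.001):                    # 0 dBFS and roughly -60 dBFS
    original = amplitude * tone
    error = quantise(original) - original
    relative = np.sqrt(np.mean(error ** 2)) / np.sqrt(np.mean(original ** 2))
    print(f"peak level {amplitude:>6}: quantisation error = {relative:.2e} of the signal (RMS)")
# The absolute error stays roughly the same in both cases, so as a
# percentage of the signal it is about a thousand times worse at -60 dBFS:
# the "less accurate for lower amplitudes" point above.
```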
It's the reconstruction of the timing accuracy that is the issue. The ear-brain system is exquisitely sensitive to this, and that is where the unusually high tap number in the reconstruction filter comes into play, as I understand it.
No it hasn't, certainly not what I am talking about anyway. You were talking about the Nyquist theorem and interpolation, but you forgot to take into account that this theorem is a mathematical construct and is based on the assumption that the samples are accurate. I pointed out that you were wrong to do this because we do NOT have accurate samples. Surely this is obvious?
Now you can try and wrangle out of this with your jargon, but please, if you reply, stick to what we were discussing.
Hopefully Jan and others may take a step forward by accepting this point from you, Simon, as they won't accept it from me.
In case it remains unclear to anyone, let's do this. Take a pencil and draw a line on one page. Next, trace two dots marking the beginning and end of that line onto another page. Now throw away the first page. Next, draw a line joining the two dots on the second page. That is interpolation: you are using a method which estimates points between the end-points. Is it exactly the same to the millimetre as the line on the first page? No, it is not, unless by some random coincidence. You can't even check it, because you don't have the first page any more.
As per the example above, if (for some reason of your choosing) you felt the need to create artificial samples at 4-microsecond detail, you still cannot time them correctly, because you do not have the original analogue signal. Hence the ambition of more accurate timing is dashed on the rocks.
Returning to the claim which was held in question, and further to the general discussion, it seems several of us now agree it is wrong:
"the brain samples sound in real time every 4 micro-seconds, whereas CD refreshes its 'frames' every 22 micro-seconds. It's CD's inability to work as fast as the brain that causes its problems in the time domain, why it doesn't sound natural. And the unique design of the Hugo DAC addresses precisely this failing."
The "unique design of the Hugo DAC" does not address this "failing". We're not even certain that it is a failing as it is based on science which is flaky at best. This does not mean it is not a good DAC, just that the marketing, whether on here or in a mag, does not add up.
Mark, you forgot to mention that these samples are not exact, i.e. they are not exactly on the original analogue curve anyway; they are approximations. Which makes it even harder to interpolate.
But I don't think Jan will ever admit this; he will simply change the basis of what has been said and try to tie you down with more jargon. I am beginning to think it is not worth the effort of replying to him.
Mark
I don't think you fully understand the term interpolation.
Take a piece of A4 graph paper and ask somebody to draw a sine wave freehand. Record the value of y every 5 mm along the x axis; keep the graph paper, do not throw it away.
Plot the recorded values on two pieces of graph paper. Take the first plot and join the dots with a ruler, i.e. straight lines; take the second plot and join the dots freehand, producing a curved wave.
Compare both with the original: which is more accurate, the first or the second?
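For what it is worth, the same exercise can be run numerically. This is a sketch with made-up numbers, assuming Python with NumPy: 25 recorded dots across three cycles of a sine, joined once with straight lines (the ruler) and once with a cubic curve drawn through each dot and its neighbours (the freehand curve), then compared with the original.

```python
import numpy as np

fine = np.linspace(0, 1, 1000)              # the "original" freehand curve
original = np.sin(2 * np.pi * 3 * fine)

coarse = np.linspace(0, 1, 25)              # the recorded dots
samples = np.sin(2 * np.pi * 3 * coarse)

# 1) Ruler: straight lines between the dots.
linear = np.interp(fine, coarse, samples)

# 2) Freehand curve: a cubic fitted through each dot and its neighbours.
def cubic_through_neighbours(x, xs, ys):
    out = np.empty_like(x)
    for j, xv in enumerate(x):
        i = np.clip(np.searchsorted(xs, xv) - 1, 1, len(xs) - 3)
        idx = slice(i - 1, i + 3)            # two dots each side
        coeffs = np.polyfit(xs[idx], ys[idx], 3)
        out[j] = np.polyval(coeffs, xv)
    return out

cubic = cubic_through_neighbours(fine, coarse, samples)

for name, rebuilt in (("straight lines", linear), ("cubic curve", cubic)):
    err = np.max(np.abs(rebuilt - original))
    print(f"{name:>14}: worst error = {err:.4f}")
# The curved reconstruction tracks the original more closely, but only
# because the original really was smooth; neither method recovers anything
# that was never recorded.
```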
Guys, I am not sure what interpolation has to do with what we are talking about. For oversampling, interpolation is horrendous, as loads of false frequencies are created; hence why, when oversampling, zero-value samples are usually inserted instead, as no erroneous frequencies are then created, making the filtering a lot easier to perform. Zero multiplied by anything is always zero.
Simon
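As a concrete illustration of the zero-stuffing point, here is a sketch assuming Python with NumPy and an arbitrary 1 kHz tone; it is not anyone's actual oversampling filter. Inserting zeros between the samples does not invent tones unrelated to the signal: the spectrum just repeats as images of the original, which is exactly what the following low-pass filter is there to remove.

```python
import numpy as np

fs = 44_100
n = np.arange(4_410)                       # exactly 100 cycles of the tone
x = np.sin(2 * np.pi * 1_000 * n / fs)     # a 1 kHz test tone

# 4x oversampling by zero insertion: x0, 0, 0, 0, x1, 0, 0, 0, ...
up = np.zeros(4 * len(x))
up[::4] = x

spectrum = np.abs(np.fft.rfft(up))
freqs = np.fft.rfftfreq(len(up), d=1.0 / (4 * fs))
print(np.round(freqs[spectrum > 0.1 * spectrum.max()]))
# Prints [ 1000. 43100. 45100. 87200.]: the original tone plus its images
# around 44.1 kHz and 88.2 kHz, and nothing else.  A low-pass filter at
# roughly 22 kHz removes the images and leaves the interpolated waveform.
```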
To extend your graph-paper exercise: throw away the piece of graph paper containing the drawing, which you have not seen. Equipped with two points from this drawing, reproduce all the points in between with 100% accuracy using the method of your choosing.
That is right, Simon.
We were talking about interpolation in the context of the prior debate over whether timings accurate to 4 microseconds can be extracted from samples taken every 22 microseconds.
This has been a long and twisty thread! I have had my interest piqued but haven't heard the Hugo yet. I'm jumping in on the math bit here, as people seem to be focussing on quite different things about signals and signal processing. I have no opinion on the sound quality of this device, nor on the highest range of Naim gear ... I have a SuperUniti and a UnitiQute and am very happy with the music in my home! But tempers are flaring and I've come to pour oil on the waters (not on the flames!).
Someone mentioned Nyquist and interpolation and filtering all in one go... The implied claim is that the reconstructed waveform produced from the sampled waveform more accurately represents the original waveform in this dac implementation for reasons related to these concepts.
This has irked some folk and I think it's because most understand that the Nyquist limit sets the highest Fourier component (ie frequency) in the original you can recover absolutely from your sample (the famous "half"). Most also accept that inaudibly high frequencies are important to the way we hear what we hear - for attack, timing, space, and so on...we don't listen to pure sine waves when we listen to music, so cutting off just above the upper limit of the audible spectrum doesn't simply mean that only dogs and bats are affected. In this discussion, it is unfortunate that Nyquist was invoked since it's not germane once the sampling is done and reconstruction is the task. On the "graph paper" picture in our heads, this part is related to how well we can recover the X-axis (ie right/left, or, usually, time). This is the kHz aspect or the bit-rate aspect, whichever you prefer....and some people prefer listening to reproductions from higher sampling rate recordings (eg 48 kHz rather than 44 kHz for CD...remembering that 2 channels at 48 kHz is often written as 96 kHz for convenience when we start talking about the digital stream we will convert back to an analog signal).
Others have introduced different sources of imperfection in the production - sampling - reconstruction - reproduction chain, such as resolution of the (sampled) signal. This is the Y-axis (ie up/down, or, usually, amplitude). When sampling, this is the bit-depth or word size, whichever you prefer. More bits gives more resolution, with the increased precision corresponding to a more accurately captured signal. Again, some people prefer listening to higher bit depth recordings: 24 bit hi-def recordings are often "better" than the 16 bit CD standard.
People have talked about interpolation, and it's happening in all DACs for BOTH the time and amplitude axes. When we see the "step-wise" reconstruction (the zig-zag graph), it's obvious that no Y-axis interpolation was done, and slightly more subtle that there is now an interpolation on the X-axis: but, while we may "jump" up and down at a single value of X, we never need to lift our pencil from the paper. Some think of this as a "sample and hold" approach, or a "set and hold" if you prefer for conversion to analog. That's described in the PDF lecture notes.
In the simplest implementation of digital to analog conversion, higher sampling rate and greater bit depth produce output waveforms that get progressively closer to the original signal waveform. There is a lot to be gained if you can go back and re-sample. There is also much to be gained, for any given digitized sample waveform, by being less naive, or rather more clever, when choosing your conversion algorithm.
The most important and, in this discussion, overlooked part of the story on analog waveform recovery from a discretely digitized sample, however, is the fact that we need to end up with a continuous signal: no "instantaneous jumps" in either X (time) or Y (amplitude). Many people immediately grasp that a simple "linear interpolation" - ie drawing the diagonal straight line between two adjacent points, rather than the horizontal and vertical lines in the step-wise graph - often looks "closer" to the original analog waveform. It also happens, however, that drawing the curved line between two adjacent points defined by the higher order polynomial that goes through some additional points, both before and after our two points themselves, is often an even better reconstruction. You could solve for the parabola from three points (our two and the next one, for example) or the cubic from four points (our two and one on each side), or whatever you wish. This can also be used for "smoothing" discrete sets of points, but here we're thinking of adding extra information in BOTH the X and Y (time and amplitude) axes. It turns out, and you can see it in the sample photos in the PDF lecture notes, that you really can do a better job of image (or audio) reconstruction than the simplistic step-wise approach. You don't have absolute knowledge - ie you are not getting information back that you threw away on sampling - but if the data conform to your assumption (in this case that the signal has a smoothly varying behaviour), then the pencil line your arithmetic draws from the dots you sampled really is closer to the original line on the piece of paper that you threw away. If it's random data, you cannot win; if it's not, you can at least improve ... We are re-using data: we use four sampled points to join the middle two (1,2,3,4 to join 2 and 3, say), and we use three again plus the next one to join the next pair (2,3,4,5 to join 3 and 4, say). The new information is our assumption as built into our arithmetic (equation) model. If you were doing this by hand, drawing a curve freehand through the sampled points as someone suggested, it would be equivalent to looking at earlier and later points to infer the "most likely" way to join the line to the next point smoothly and continuously.
Someone else mentioned that zero times anything is zero. True for multiplication, but not true for this kind of interpolation, where one way to describe the smoothing is by a mathematical operation called convolution...a trick to speed up the arithmetic for doing our "rolling average" or "filtering" or however you like to think of this interpolation scheme. The trick, then, is that designers can choose how to set the filter shape, or the convolution functional form, and in so doing can control the time (X) and amplitude (Y) characteristics of the continuous analog output waveform -- and different DAC designs / algorithms can emphasize different desirable output characteristics such as "attack" or "smoothness" in the same way that reconstructed images can have "high edge contrast" or "less granularity".
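Since the choice of filter kernel is the part most relevant to the DAC-design debate, here is a small sketch assuming Python with NumPy and two deliberately simple illustrative kernels; neither is Chord's or anyone else's actual product filter. The same zero-stuffed sample stream is convolved once with a triangle, which gives plain linear interpolation, and once with a windowed sinc, which approximates ideal low-pass reconstruction; the output waveform differs only because of the kernel shape.

```python
import numpy as np

R = 4                                           # oversampling ratio
t = np.arange(64)
x = np.sin(2 * np.pi * 0.11 * t)                # a band-limited test signal

up = np.zeros(R * len(x))                       # zero-stuffed stream
up[::R] = x

# Kernel 1: a triangle -> plain linear interpolation between samples.
triangle = np.concatenate([np.arange(1, R + 1), np.arange(R - 1, 0, -1)]) / R

# Kernel 2: a windowed sinc -> closer to ideal low-pass reconstruction
# (more taps, so more of the neighbouring samples contribute to each point).
k = np.arange(-8 * R, 8 * R + 1)
windowed_sinc = np.sinc(k / R) * np.hamming(len(k))

ideal = np.sin(2 * np.pi * 0.11 * np.arange(R * len(x)) / R)  # true fine-grid waveform

for name, kernel in (("triangle (linear)", triangle),
                     ("windowed sinc", windowed_sinc)):
    y = np.convolve(up, kernel, mode="same")
    # Ignore the edges, where the longer kernel runs out of data.
    err = np.sqrt(np.mean((y[32:-32] - ideal[32:-32]) ** 2))
    print(f"{name:>17}: RMS error vs original waveform = {err:.4f}")
# Same samples in, different kernels, measurably different reconstructions.
```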
I don't know if this helps or hinders the dialog, but it is offered in the spirit of ensuring that we're talking about the same things...not marketing hype, not jargon, not qualitative descriptions of what we hear or what we prefer, but one way to think of what is happening in the world of digital recording and, more importantly for this forum, in the realm of (continuous) analog reproduction from (discrete) digital source files.
Regards, alan
Too much rambling.
Why is it so hard for some people to just have a listen to the Hugo? Come on, just do it!
What Alan said, and the Hugo sounds great.
It is strange that Hugo owners are now expected to provide peer-reviewed journal citations to justify their personal experiences. What's next? Will we have to submit formal proof that the Hugo actually sounds nice in our systems?
Mark, thanks for the reminder of why we got onto interpolation... and yes, probably a red herring.
I think I agree with the sentiment that is starting to settle, and which Kevin mentioned above...
The Hugo sounds different for its size and cost; it subjectively performs rather well.
This may be for many reasons, but from a design/innovation point of view, these are the things that stand out for me from an engineering perspective:
- use of an extended windowed FIR filter as the reconstruction kernel (the Hugo's designer refers to this as the number of taps, and to the bespoke algorithm as the "WTA"), thereby reducing digital artefacts after filtering and retaining low-level information.
- use of a low-power FPGA for the DSP, thereby reducing power-line noise, EM noise, ground-loop currents and ground modulation.
- filtering done with 64-bit precision.
- the DACs are a custom discrete delta-sigma implementation, one for each channel, with 2048x oversampling and 5th-order noise shaping; the oversampling and noise shaping are done on the FPGA, separate from the DACs (a crude sketch of the noise-shaping idea follows after this post).
- there is extensive on-board power regulation and RF filtering; each channel's DAC has its own precision regulated power supply, for example.
- the analogue output stage and I-to-V converter are discretely and simply built, with a minimum of low-noise components around a Class A stage.
But all of this is only an interesting lab experiment unless it sounds good or makes a difference, and only you can decide...
I think there is undue prominence given to the DSP filtering in the thread discussion above; although I find it interesting, it is only one part of the jigsaw and seems to end up in dead ends when discussed... the Hugo block diagram is quite enlightening, just as the Naim block diagrams have been.
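On the oversampling and noise-shaping bullet, here is the crude sketch promised above, assuming Python with NumPy, a first-order shaper, a pretend 16x rate and a 4-bit quantiser; it is nothing like the Hugo's 5th-order, 2048x design, and only shows the basic idea that once a signal is oversampled the quantisation error can be pushed out of the audio band.

```python
import numpy as np

fs = 44_100 * 16                                  # pretend 16x oversampled rate
n = np.arange(1 << 16)
signal = 0.5 * np.sin(2 * np.pi * 1_000 * n / fs)
step = 1.0 / 8                                    # a coarse 4-bit quantiser step

def quantise_plain(x):
    return np.round(x / step) * step

def quantise_shaped(x):
    """First-order noise shaping: remember each sample's quantisation error
    and subtract it from the next sample before quantising."""
    y = np.empty_like(x)
    err = 0.0
    for i, v in enumerate(x):
        v2 = v - err
        y[i] = np.round(v2 / step) * step
        err = y[i] - v2
    return y

def in_band_error_share(y):
    """Fraction (in dB) of the quantisation error energy below 20 kHz."""
    spec = np.abs(np.fft.rfft(y - signal)) ** 2
    freqs = np.fft.rfftfreq(len(y), d=1.0 / fs)
    return 10 * np.log10(np.sum(spec[freqs < 20_000]) / np.sum(spec))

print(f"plain rounding    : {in_band_error_share(quantise_plain(signal)):6.1f} dB")
print(f"1st-order shaping : {in_band_error_share(quantise_shaped(signal)):6.1f} dB")
# Total error energy is comparable in both cases, but with shaping far less
# of it lands in the audio band; the rest sits at ultrasonic frequencies
# where the analogue output filtering deals with it.
```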
Alan
Many thanks for putting it into words that even I can understand. Very well written indeed.
Graeme
+1 to the last 5 posts.
Alan, do you feel it is possible to perfectly recover timings to a sensitivity of 4 microseconds from a CD whose samples are taken every 22 microseconds?