Audioquest Cinnamon Ethernet Cable
Posted by: Ricto on 03 May 2017
System: NAC-N 272, NAP 250 DR, NACA5 speaker cable, Focal 1028 Be (on marble chopping boards from Tesco serving as plinths), Plusnet router leading to an Apple AirPort Extreme via a cheap Ethernet extension cable.
Today I received my AudioQuest Cinnamon Ethernet cable. The existing one was a bog-standard cable. The Cinnamon cost under £60, so I thought I'd get it as a cheap upgrade (though really I wasn't expecting much).
I have been surprised by the difference in sound quality, especially since the Apple AirPort Extreme is linked by a cheap Ethernet cable (the old saying: "a system is only as good as its weakest link"). To my ears the bass is cleaner, the treble more defined, and the whole presentation more musical and enjoyable.
Just passing my observations on
Ricto
We love expensive Bits!
True Blue posted: Who knows why, but when I replaced my Cat 6 from switch to NDX and from UnitiServe to switch, the differences were there to be noticed. More of everything. I don't know the science, but there are thousands of 0s and 1s; if a couple are missed or mistimed somehow, this surely would have an effect.
That's the "bit" I don't get. Surely the frames exchanged over the wire carry a checksum that allows the receiver to determine that frame corruption has occurred, in which case retransmission would occur. And buffering in the receiver should ensure no timing issues.
On the other hand, I can accept that some electrical characteristic of the cable (the physics of which is beyond my understanding) might feed into, and affect, the analogue stage of the receiving streamer. But then I'd have expected only the cable between the switch and the streamer to have an effect.
For the record, I don't, at this time, stream... content with my CDX2/XPS DR and LP12...
Nevertheless I am open-minded, and when I finally do stream, I will be open to experimentation.
Huge posted: Judge posted: How does an Ethernet cable affect sound quality? I can understand that with analogue signals, but a load of 1s and 0s, that are only either a 1 or a 0, shouldn't matter. What is going on? And if it matters, what effect does the miles and miles of transmission have on precious 24/192 Hi Res files being downloaded?
Oh dear! The old 'bits are bits' argument.
The bits are transmitted as an analogue electrical signal varying between different digital levels, and can therefore vary in timing (an analogue property) and carry RFI by varying in voltage (another analogue property). The digital information is just an interpretation of the analogue signal in the cable.
When the file is downloaded, the analogue signal is re-interpreted at the receive end and the digital information written to memory, the CRC is then compared and the data can be re-transmitted if the values don't match. This isn't realtime, so doesn't affect playback.
"Oh dear!..."? I genuinely didn't understand and there is no particular reason why I should. I'd be pretty confident that many others would have thought the same too. Anyway...
You seem to know, so thanks for explaining that bit.
What are RFI and CRC? If the digital signal is corrupted on its way to the switch, how can a better cable from switch to player improve things, or do they just not get worse?
I'm asking, BTW, because I'm just getting into this aspect of HiFi, partly for SQ reasons and partly for CD storage reasons. So if it is worth investing several £000, I don't want to spoil it for the sake of a half-decent "bit of wire".
CRC is Cyclic Redundancy Check. It's essentially a checksum embedded in a frame by the sender before the frame is sent across the wire. It's calculated from the contents of the frame (before the checksum) and then recalculated by the receiver. If the receiver calculates the same checksum as the sender, there has been no corruption in transit. If not, the receiver drops the frame and higher-level protocols will request retransmission.
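For anyone curious what that looks like in practice, here's a toy Python sketch of the idea (zlib.crc32 happens to use the same CRC-32 polynomial as the Ethernet FCS, though real NICs compute this in hardware):

```python
import zlib

def make_frame(payload: bytes) -> bytes:
    """Sender: append a CRC-32 checksum, as Ethernet's FCS does in hardware."""
    fcs = zlib.crc32(payload).to_bytes(4, "big")
    return payload + fcs

def receive_frame(frame: bytes):
    """Receiver: recompute the CRC; return the payload if intact, else drop."""
    payload, fcs = frame[:-4], frame[-4:]
    if zlib.crc32(payload).to_bytes(4, "big") != fcs:
        return None  # corrupt -> frame dropped; higher layers retransmit
    return payload

frame = make_frame(b"some audio data")
assert receive_frame(frame) == b"some audio data"

corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]  # flip one bit "in transit"
assert receive_frame(corrupted) is None            # detected and dropped
```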
Sorry Judge, we just get that particular argument so often, and I and others have refuted it so often, I simply didn't think that there was anyone who yet hadn't seen one of the many instances of it being played out.
If I had a penny for each time we've had that argument, I'd have ... well a little collection of pennies anyway!
Allan J has explained CRC and how it's used.
RFI is 'Radio Frequency Interference' (not to be confused with Radio Breakthrough*). RFI is a contamination of a wanted signal (or power source) by unwanted signals in the radio frequency range. These unwanted signals can be anything from digital noise from computers, cordless phones, switch mode power supplies (SMPSs) etc. in your home to switching pulses from industrial activity many hundreds of metres away.
* Radio Breakthrough is where you hear the radio signal through something other than a radio; RFI usually produces a 'mush' behind the signal you want, or just generally causes the signal to degrade, such as by losing clarity.
Did I mention it also sounds better
So the point about CRC and, indeed, the error-recovery provisions in the higher-level protocols is that data loss or corruption should not be seen by higher-level applications. Even if a frame is corrupted on the wire and dropped, higher-level protocols will request retransmission... this all happens extremely quickly (depending on the link, we are talking micro- or milliseconds), so buffering should prevent any issues associated with lost frames. On a sound link, good frames will be received far more often than not. If none of this were so, all those millions of financial transactions sent across data networks every day would be unreliable. You could end up with millions in your paycheck... or nothing at all. For standard Ethernet, even cable length, as long as it's not greater than 100 m, should not be an issue.
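To put rough numbers on "buffering should prevent any issues" (my figures, purely illustrative):

```python
# Illustrative numbers only: how a playback buffer absorbs a retransmission.
sample_rate = 192_000          # samples/s for 24/192 material
buffer_seconds = 2.0           # a modest streamer buffer (assumed size)
retransmit_ms = 5.0            # generous LAN round-trip for a resent frame

buffered_samples = sample_rate * buffer_seconds
consumed_during_retransmit = sample_rate * retransmit_ms / 1000

print(f"buffer holds {buffered_samples:,.0f} samples")
print(f"a retransmit costs {consumed_during_retransmit:,.0f} samples "
      f"({100 * consumed_during_retransmit / buffered_samples:.2f}% of the buffer)")
# -> a 5 ms retransmit consumes ~0.25% of a 2 s buffer: inaudible once refilled.
```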
So any audible effect (which, again, I do not dispute) from the cable would, it seems to me, have to be some form of electrical interference feeding from the cable into the analogue components of the streamer.
Mike-B posted:I have found that only the switch to streamer (player) & NAS to switch ethernet branches make a difference. The link between switch & the wireless hub makes no difference that I can detect.
This is good to know. I am moving into a new apt in a few weeks and only have two AQ Cinnamon ethernet cables (.75m and 1.5m) and will most likely need a switch to connect to the Airport Extreme or my ISP's router.
Are you all mad! This is Ethernet cable over a short distance carrying low-bandwidth data. If somehow it did make a difference, it's due to very bad hardware design.
Huge posted: Oh dear! The old 'bits are bits' argument.
Hi Huge,
You're absolutely right: all signals are really analog, in that they can take an infinite number of possible voltages at an infinite number of points in time. We choose a threshold (depending primarily on the semiconductor process being used) and call anything above that a 1 and anything below it a 0, more or less. We also choose a clock speed, and that determines where in time we read a sample, so we don't sample transients.
Errors in voltage can lead to 1s being interpreted as 0s and vice versa. Errors in timing (i.e. jitter) can lead to sampling the bit before or after the one intended. Both of these can be quantified by the bit-error rate of a particular setup, and every setup has a non-zero bit-error rate. So if the digital cable fed the input to the DAC directly, digital cables could certainly be audibly different.
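A toy numpy model of that thresholding, if it helps: decode noisy "analog" levels against a mid-point threshold and count bit errors (the noise figures are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 100_000)               # the intended 1s and 0s

# Transmit as analog levels (0 V / 1 V) plus additive noise on the wire.
noise_rms = 0.1                                   # assumed noise amplitude
received = bits + rng.normal(0, noise_rms, bits.size)

decoded = (received > 0.5).astype(int)            # receiver threshold at mid-level
print("bit errors:", np.count_nonzero(decoded != bits), "of", bits.size)  # 0 here

# Crank the noise up and errors appear: the bit-error rate is never exactly
# zero in principle, but for a sane cable it is vanishingly small.
received_bad = bits + rng.normal(0, 0.5, bits.size)
print("errors at high noise:",
      np.count_nonzero((received_bad > 0.5).astype(int) != bits))
```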
But that's the thing: the cables don't feed the DACs. The incoming analog signal carrying our digital information is first conditioned (to restore the amplitude and remove timing noise), then buffered, then re-clocked into the DAC ASIC (e.g. a Burr-Brown PCM1791A, Wolfson, Sabre, Chord's FPGA, or whatever), perhaps with some upsampling and filtering on the way. The contents of the stream going into the DAC can be compared for streams arriving over different digital cables, and if the conditioning is any good they should be identical irrespective of cable (within reason).
A way to test this is to take the conditioned output in the buffer for one cable, and then subtract the conditioned output in the buffer for a different cable. Should be all zeros. My understanding is that many folks have done this, and it is all zeros for any reasonable length and quality of digital interconnect. I haven't done this, but this argument can't be subjective as it's measurable.
Long story short, you may certainly hear a difference between different digital cables, but that doesn't mean it's there. I believe one can prove it isn't using the above test. So no, bits aren't bits, and digital signals are absolutely analog. Just try running a mile-long interconnect out of your CD player and watch nothing come out the other end. But for reasonable lengths of wire of reasonable conductivity, say a 1-2 m $10 cord, there is a demonstrably non-existent difference between the data being fed into the buffer of your DAC and the data that would be fed by the most exquisite cable in the world.
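That null test is easy to express in code. A minimal sketch, assuming you can get the post-conditioning buffer contents for each cable as integer sample arrays (load_capture is a hypothetical stand-in for whatever your capture rig provides):

```python
import numpy as np

def null_test(capture_a: np.ndarray, capture_b: np.ndarray) -> bool:
    """Subtract the buffered streams captured with two different cables.

    If the conditioning/re-clocking did its job, the difference is all
    zeros: the DAC was fed identical data regardless of cable.
    Assumes integer PCM samples of equal length.
    """
    if capture_a.shape != capture_b.shape:
        return False  # alignment problem, not a cable difference
    residual = capture_a.astype(np.int64) - capture_b.astype(np.int64)
    return not np.any(residual)

# Hypothetical usage -- load_capture() is not a real API, just a placeholder:
# cheap = load_capture("cat5e_generic.npy")
# fancy = load_capture("audiophile.npy")
# print("identical data:", null_test(cheap, fancy))
```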
Best,
---Pedro
dvshannow posted: Are you all mad!
Stark raving. Here's a good article, not recommending the manufacturer (trying to follow forum rules) just the information in this article on their site:
http://www.bluejeanscable.com/...-your-cat6-a-dog.htm
I've got their Cat 6, and for reasonable lengths, it is demonstrably perfect. That frees up lots of money for audiophile rocks.
Huge posted: Judge posted: How does an Ethernet cable affect sound quality? I can understand that with analogue signals, but a load of 1s and 0s, that are only either a 1 or a 0, shouldn't matter. What is going on? And if it matters, what effect does the miles and miles of transmission have on precious 24/192 Hi Res files being downloaded?
Oh dear! The old 'bits are bits' argument.
The bits are transmitted as an analogue electrical signal varying between different digital levels, and can therefore vary in timing (an analogue property) and carry RFI by varying in voltage (another analogue property). The digital information is just an interpretation of the analogue signal in the cable.
When the file is downloaded, the analogue signal is re-interpreted at the receive end and the digital information written to memory, the CRC is then compared and the data can be re-transmitted if the values don't match. This isn't realtime, so doesn't affect playback.
Oh dear! The old 'bits are not bits' argument.
garyi posted: Oh dear! The old 'bits are not bits' argument.
Hi Pedro, you missed one rather important step... You don't send the contents of the buffer to the amplifier; what goes to the input of the amplifier is the output of the analogue stage that follows the DAC subsystem.
The variations in the timing of the digital signal result in changes in the rate of filling the buffer, and hence smaller changes in the rate of emptying the buffer, and thus frequency modulation of the analogue output of the DAC subsystem. This same variation of timing also causes a variable load on the digital processing system, and that in turn causes variation in the load on the power supply, which then breaks through to the output of the DAC and to the analogue amplifiers.
Variation in the analogue levels that represent the digital bits can also vary the load on the power supply, and although attenuated by the PSRR of the DAC subsystem and analogue stages, this can also make its way through to the analogue output.
The bits are bits argument (i.e. the interpretation that bits are only bits and nothing else) only applies when looking exclusively at the digital domain; it doesn't apply to hybrid (i.e. mixed digital and analogue) systems, where the effects of the analogue signal carrying the digital data can have an effect on the analogue parts of the circuit.
Huge posted: The variations in the timing of the digital signal result in ... a variable load on the digital processing system, and that in turn causes variation in the load on the power supply, which then breaks through to the output of the DAC and to the analogue amplifiers.
My goodness, I love this forum!
Thank you Huge, you're absolutely right, and I stand corrected. The input to the DAC will be the same regardless of cable, but the output may be different, from the DAC or the analog circuit that follows it, because of noise in the supply introduced by the jitter compensation on the initial conditioning circuitry. Right you are.
Still, I wonder. Naim goes out of its way to have separate regulators for the digital, analog, and power amplifier (if one is included) components, all drawn from a single mains cable (whether for an on-board PSU or an off-board upgrade). That will provide some, though not perfect, isolation of the analog rail from variations in digital current draw. This should be measurable. You're right that it won't be zero, but I wonder how big it'll be. However big it is, it will be attenuated by the PSRR of the analog circuitry following the DSP.
One should be able to measure the load modulation of the digital supply, the isolation of that linear regulator from the analog circuit's linear regulator, and the PSRR of the analog output stage after the DSP. If you tack all three together, is the jitter effect still in the audible range at the speaker? I don't doubt folks hear differences, but I have too many colleagues in Hearing Sciences not to know that perception is tremendously subject to expectation.
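"Tack all three together" is just adding attenuations in dB. A back-of-envelope sketch, with every figure an assumption rather than a measurement:

```python
# Back-of-envelope only: all figures below are assumptions, not measurements.
digital_ripple_mV = 10.0     # supply modulation from digital load (assumed)
reg_isolation_dB = 60.0      # digital->analog regulator isolation (assumed)
analog_psrr_dB = 70.0        # PSRR of the post-DAC analog stage (assumed)

total_attenuation_dB = reg_isolation_dB + analog_psrr_dB  # attenuations add in dB
residual_mV = digital_ripple_mV / (10 ** (total_attenuation_dB / 20))

print(f"total attenuation: {total_attenuation_dB} dB")
print(f"residual on the output: {residual_mV * 1e6:.2f} nV")
# 10 mV of supply ripple attenuated by 130 dB leaves ~3 nV -- far below any
# output noise floor, IF these assumed figures are anywhere near reality.
```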
As an aside, that turns out to be an evolutionary mechanism that allows us to hear sound below the noise floor of our listening environment (see Beth Strickland at Purdue). In a nutshell, the cochlea dynamically adjusts the length of hearing cells to modify the gain at various frequencies in anticipation of a tone at an expected frequency. It's why we hear music better when we're familiar with it: our brain knows what note is coming next, and the cochlear filter preemptively adapts to the expected tone in anticipation of its arrival.
Anyway, back to the topic at hand: you've convinced me there's a mechanism by which an identical digital stream in the buffer could yield a different signal at the output of the DSP. I would love to understand the amplitudes involved better, though, before I'd be sure the effect was audible to me. Regardless, thanks for your explanation.
---Pedro
Huge posted:The variations in the timing of the digital signal result in changes ... in the rate of emptying the buffer
I think this part is probably not correct, so long as the buffer is re-clocked on its way to the DAC. I believe Naim, and several other top-shelf DACs, do re-clock the data, resulting in a consistent rate of emptying.
perizoqui posted: Huge posted: The variations in the timing of the digital signal result in changes ... in the rate of emptying the buffer
I think this part is probably not correct, so long as the buffer is re-clocked on its way to the DAC. I believe Naim, and several other top-shelf DACs, do re-clock the data, resulting in a consistent rate of emptying.
Apologies, I omitted to say that this particular part only applies to realtime (i.e. S/PDIF) signals...
S/PDIF is a realtime protocol, so an adaptive replay clock is needed. Take, for instance, a situation where the S/PDIF signal is coming in clocked at a rate that is 0.9999 of the rate of the pre-DAC buffer clock. The system will start receiving the signal and will fill the buffer half full; it will then start playback. After that, the buffer will empty slightly faster than it fills; eventually it will run out of samples, and after that it will glitch every 10,000 samples, as it will have a corrupted frame (the last sample in the frame won't arrive in time). A similar problem occurs if the S/PDIF clock is slightly faster, but in this case the problem is buffer overrun, when the tail catches up with the head and corrupts the frame.
In practice, this is likely to be a lesser effect than the effect on the amplitude, but it is present nonetheless.
The Naim white paper on the Naim DAC (here) describes their technical solution using a replay master clock with stepped frequencies. Note that in most modern DACs the more common PLL solution uses a data buffer and then uses the PLL-extracted clock to modulate a more stable clock, rather than using direct clock extraction - this slows down and reduces the variance, but still doesn't eliminate it.
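Huge's 0.9999 example is easy to put numbers on; a toy sketch (buffer size is assumed):

```python
# Toy model of the 0.9999x source-clock example above.
buffer_size = 8_192              # samples the pre-DAC buffer holds (assumed)
head_start = buffer_size // 2    # playback starts with the buffer half full
rate_ratio = 0.9999              # fill rate relative to drain rate

shortfall_per_sample = 1.0 - rate_ratio       # fall 0.0001 samples behind per sample played
samples_until_underrun = head_start / shortfall_per_sample

print(f"underrun after {samples_until_underrun:,.0f} samples "
      f"({samples_until_underrun / 44_100:.0f} s at 44.1 kHz)")
# 4,096 / 0.0001 = ~41 million samples (~15 minutes at 44.1 kHz); after that
# the buffer glitches every 1/0.0001 = 10,000 samples, as Huge describes.
```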
Judge posted: How does an Ethernet cable affect sound quality? I can understand that with analogue signals, but a load of 1s and 0s, that are only either a 1 or a 0, shouldn't matter. What is going on? And if it matters, what effect does the miles and miles of transmission have on precious 24/192 Hi Res files being downloaded?
It doesn't. Ethernet is a data standard, not an audio standard. Ethernet is not realtime. Parts of the PHY get powered down by modern OSes between transmits to save energy.
I've captured output from a Cary DMS-500 into a $1600 RME with a generic 315-foot CAT5e ($0.30/foot) and a 3-foot Nordost Heimdall 2 CAT8 ($233/foot), and no one could tell the difference.
I also did this with a 1-meter AudioQuest Vodka and a 3-meter WireWorld Starlight CAT8.
For the most recent testing with the Nordost, I even tossed the 315-foot cable underneath a running microwave while I captured the tracks. There is simply expectation bias going on. I've offered to come out to someone's setup with a box of cable, make up a lead on the spot, and bring along my trusty $90 layer-3 managed switch.
Huge posted:Sorry Judge, we just get that particular argument so often, and I and others have refuted it so often, I simply didn't think that there was anyone who yet hadn't seen one of the many instances of it being played out.
If I had a penny for each time we've had that argument, I'd have ... well a little collection of pennies anyway!
Allan J has explained CRC and how it's used.
RFI is 'Radio Frequency Interference' (not to be confused with Radio Breakthrough*). RFI is a contamination of a wanted signal (or power source) by unwanted signals in the radio frequency range. These unwanted signals can be anything from digital noise from computers, cordless phones, switch mode power supplies (SMPSs) etc. in your home to switching pulses from industrial activity many hundreds of metres away.
* Radio Breakthrough is where you hear the radio signal through something other than a radio; RFI usually produces a 'mush' behind the signal you want, or just generally causes the signal to degrade, such as by losing clarity.
Siemon's write-up "The Antenna Myth" specifically takes your supposition about RFI to the woodshed and buries it:
'The good news is that the balance performance of the cable itself is sufficient up to 30 MHz to ensure minimum susceptibility to disturbance from these noise sources regardless of the presence of an overall screen/shield.'
Translated: Unshielded CAT5e is pretty robust at noise rejection.
'it is a fact that screens and shields offer substantially improved noise immunity compared to unshielded constructions above 30 MHz... even when improperly grounded.'
Moral of the story: get CAT6 because by default it has a shield; it's not going to be any costlier than UTP CAT5e.
Huge posted:The variations in the timing of the digital signal result in changes in the rate of filling the buffer and hence smaller changes in the rate of emptying the buffer, and thus frequency modulation of the analogue output of the DAC subsystem. This same variation of timing also causes a variable load on the digital processing system, and that in turn causes variation on the load on the power supply, which then breaks through to the output of the DAC and to the analogue amplifiers..
You may not understand what buffers, in the context of computers, do. They are clock-domain boundaries. Buffers allow an application or device driver to read data out of said buffer with no care for the signaling or packet rate on the other side filling it (see the sketch after the quoted excerpt below).
From "http://www.sunburst-design.com/papers/CummingsSNUG2008Boston_CDC.pdf"
'5.8.1 Multi-bit CDC signal passing using asynchronous FIFOS
Passing multiple bits, whether data bits or control bits, can be done through an asynchronous FIFO. An asynchronous FIFO is a shared memory or register buffer where data is inserted from the write clock domain and data is removed from the read clock domain. Since both sender and receiver operate within their own respective clock domains, using a dual-port buffer, such as a FIFO, is a safe way to pass multi-bit values between clock domains. A standard asynchronous FIFO device allows multiple data or control words to be inserted as long as the FIFO is not full, and the receiver can then extract multiple data or control words when convenient as long as the FIFO is not empty.'
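In software terms the same decoupling looks something like this: a toy Python analogy (not real driver code) with a jittery "network" writer and a steady "DAC" reader on either side of a bounded queue:

```python
import queue
import random
import threading
import time

fifo = queue.Queue(maxsize=1024)   # the clock-domain boundary

def network_side():
    """Writer: arrives with jittery, bursty timing, like packets on the wire."""
    for sample in range(10_000):
        fifo.put(sample)                          # blocks if the FIFO is full
        if random.random() < 0.01:
            time.sleep(random.uniform(0, 0.005))  # irregular arrival bursts

def dac_side():
    """Reader: drains at its own steady rate, indifferent to arrival jitter."""
    for _ in range(10_000):
        sample = fifo.get()                       # blocks if the FIFO is empty
        # ...hand `sample` to the converter on the local, stable clock...

t = threading.Thread(target=network_side)
t.start()
dac_side()
t.join()
print("all samples delivered in order; arrival jitter absorbed by the FIFO")
```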
Variation in the analogue levels that represent the digital bits can also vary the load on the power supply, and although attenuated by the PSRR of the DAC subsystem and analogue stages, this can also make its way through to the analogue output.
So you are saying a $233/foot, or $112/foot, or $27/foot cable offers better power-supply stability than a $1/foot certified CAT6 cable from the likes of Blue Jeans Cable?
The bits are bits argument (i.e. the interpretation that bits are only bits and nothing else) only applies when looking exclusively at the digital domain; it doesn't apply to hybrid (i.e. mixed digital and analogue) systems, where the effects of the analogue signal carrying the digital data can have an effect on the analogue parts of the circuit.
1. You have to show the altered analog output as a function of said mystical cable. In my testing I was actually able to capture the 24/192 ADC output so cleanly that people couldn't tell the captured sample from the original track, with a 315-foot cable. Think about what that means.
2. If you have equipment that is that susceptible to cabling that is in spec and meets the standards, then you have poorly designed equipment.
I have an AudioQuest Vodka cable. I've got it hooked up between the Internet cable modem and the wireless router.
The HiFi system is connected to a switch, then by a cable to the wall; that run emerges in the laundry room into another switch, which is then connected to the wireless router above.
Interestingly, the Vodka made the biggest difference in that position - three switches removed from my NAS/UnitiServe etc.
It's not about bits at all - the cable has some noise rejecting properties - and it rejects the noise coming from the Comcast Internet Cable Modem.
Putting an iFi on all ethernet switches helps immensely too.
Interestingly enough, in the same config, putting Cinnamon between the UnitiServe, NAS and the switch made things quieter but more dead-sounding. I've got Blue Jeans Cat6a cables everywhere else.
I'm also using Tripp Lite Isobar Ultra power strips everywhere a switch is to be powered - again, this makes a difference to the final sound.
I suspect there is some noise introduced by the wall warts that the expensive cables help filter out - you can use these expensive 'audiophile' cables, or go to the source of the problem.
The Tripp Lites are $40 - so fairly cheap in the context of an NDS/552/300 system - but money very well spent.
The key is to keep them away from the Naim system though - they are used in other places to ensure there is no noise in the Ethernet cables.
Jinjuku posted: Moral of the story: get CAT6 because by default it has a shield; it's not going to be any costlier than UTP CAT5e.
Incorrect - Cat6 does not 'normally' have a shield; Cat6A might, but not always, have a shield.
The category is a rating that means the cable reaches a specified level of crosstalk and other parameters at a particular bandwidth: Cat6 = 250 MHz, Cat6A = 500 MHz.
And of course Cat5e might equally have a shield as well; having a shield has very little, if anything, to do with the cable category type. The shield is defined by the construction type: in its simplest form, STP denotes shielded twisted pair and UTP denotes unshielded twisted pair.
All the bulk Cat5e I use to make up cables has a foil shield with a drain wire, and all the connectors I use have a shield grasp/clamp.
CAT6A is shielded. I get mine from cablemonkey.co.uk, really well constructed industry grade cables.
Incorrect. Cat 6A, which really is optimal for 10 Gb/s link speeds (which most, if not all, consumer equipment cannot support), can be obtained in UTP or U/FTP versions. That is, it can be provided in unshielded or per-twisted-pair shielded versions. The unshielded variants tend to be of larger diameter, as they use non-metallic spacers between the twisted pairs to reduce crosstalk between the pairs.
BTW, over a 20 m run of Cat5e, across several years of quite high data transfer amounting to a few terabytes of data, I have not had a single corrupt frame... the counters all remain at 0. For up to 1 Gb/s, unless you are in a high-RFI environment (certainly not one safe to live in, in my opinion), Cat 5e, shielded or unshielded, is perfect.
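If you want to check the "counters all at 0" claim on your own kit: on Linux the kernel exposes per-interface error counters in /proc/net/dev. A small sketch (the interface name is whatever yours happens to be):

```python
# Linux-only sketch: read the kernel's per-interface RX/TX error counters.
# /proc/net/dev column layout: interface, then RX bytes, packets, errs,
# drop, ... followed by the TX equivalents.

def link_error_counters(interface: str = "eth0") -> dict:
    with open("/proc/net/dev") as f:
        for line in f:
            name, _, stats = line.partition(":")
            if name.strip() == interface:
                fields = stats.split()
                return {
                    "rx_packets": int(fields[1]),
                    "rx_errors": int(fields[2]),
                    "tx_packets": int(fields[9]),
                    "tx_errors": int(fields[10]),
                }
    raise ValueError(f"no such interface: {interface}")

print(link_error_counters("eth0"))
# Terabytes moved with rx_errors/tx_errors still at 0 means the cheap cable
# is delivering every frame intact -- exactly the observation above.
```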