We say source first, but what counts as a source?

Posted by: SongStream on 05 April 2015

It used to be clear: for the first twenty years of my hifi ownership, the source of the data was a silver disc, and the source of the noise was my CD player. Simple, wasn't it? Now I mainly stream from Qobuz via a PC, so the source of the data for the music I listen to these days is... well, I don't know where it is. I suspect it could be a data centre in France, but tracing it could be a challenge akin to finding the source of the Nile. With the data's whereabouts unknown and that accepted, what counts as the source in hifi terms? The PC, the software running on the PC (definitely important!), the ADSL router, the exchange, a data centre or NAS drive; where does it end?


This question entered my head after listening for a while the other day, and thinking afterwards how wonderfully the DAC-V1 and SN2 combination was working. In my mind I had thought of the DAC as the source, as everything beyond it was non-hifi, but then started wondering: is it really? As I read about companies producing audiophile NAS drives with audio-grade SSDs in them, and people worrying about power supplies for network switches, audio-grade ethernet cables and Auralics, I start to wonder if I am missing something.


I've been of the mind that, with most DACs being electrically isolated from the incoming digital signal, and with clever buffering and re-clocking (which I believe would intervene regardless of digital source), the audio stream coming from the USB bus of a PC should be the same, and as good, as that from any other streamer into the same DAC.


Thinking about this topic has, once again, left me confused, though that's not difficult. I would be really interested to hear views on the importance of the various elements in the digital domain when streaming from a NAS or online, prior to reaching something like a Hugo or DAC-V1, etc.

 

Thanks



Posted on: 05 April 2015 by bicela

Great post, feeling_zen, thank you. The concepts are clear, and have been reported piecemeal by others from time to time, but this is a very good compact summary.

Posted on: 06 April 2015 by Simon-in-Suffolk

I also consider, and have often posted on this forum, that the 'bits are bits' argument is a fallacy where digital audio signal reconstruction is concerned. My summary:

  1. PCM sample data uses sample words, not a bit stream.
  2. Many interfaces, such as USB and SPDIF, 'serialise' the sample data by transporting the words in frames of bits. These frames of bits are clocked at a specific rate so the data doesn't overflow or run out (a toy sketch of this serialisation follows the list).
  3. In years gone by, the DAC clock was reconstructed from the above transport clock. In these now largely obsolete scenarios, transport clock instability (jitter) could directly impact the DAC clock.
  4. Most implementations that I am aware of now have strategies for completely separating the transport clock from the DAC clock. Therefore transport jitter is typically entirely removed.
  5. When the serialised data frames are received, they are processed. Each transition is detected, but the timing of the rise and fall of the waveform is modulated by the transport clock. This can produce crosstalk into related circuitry, which is why one can often hear the effects of transport jitter even in circuits where it is completely removed from the DAC clock. It is also an argument for using a highly stable clock to serialise the sample word data in the first place.
  6. Additionally, RF electrical noise can cause intermodulation and other distortions in receiving clock and analogue circuitry. This can affect USB and SPDIF. Galvanic isolators are good at decoupling ground-loop noise, but other types of RF noise can happily pass through a galvanic isolator.
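
To make points 1 and 2 concrete, here is a toy sketch in Python (purely illustrative, nothing to do with any real interface silicon) of 16-bit sample words being serialised into a frame of bits. The bit values are unambiguous; the transport clock only governs when each one appears on the wire:

    # Toy illustration: PCM sample words vs. a serialised frame of bits.
    samples = [0, 12345, -12345, 32767]    # 16-bit signed sample words

    def to_bits(word, width=16):
        # Serialise one sample word into its bits, MSB first,
        # using the two's-complement representation.
        u = word & (2**width - 1)
        return [(u >> i) & 1 for i in reversed(range(width))]

    frame = []
    for s in samples:
        frame.extend(to_bits(s))           # words become a framed bit stream

    print(len(frame), "bits for", len(samples), "sample words")
    # A real transport (USB, SPDIF) clocks frames like this out at a fixed
    # rate; the bit values are exact, but the edge timing follows that clock.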

Therefore, to me, bits are definitely not bits... The real world of electronics makes no such clean distinctions... Life is far more interesting than simplified conceptual tutorials.

PS I am also a senior design engineer within a large blue-chip IT services company, just in case that might be relevant in any way.

 

Simon

Posted on: 06 April 2015 by SongStream

Very interesting stuff, guys, and many thanks for the contributions. Feeling_zen's piece above certainly provides a balanced and informed view with regard to digital cables, and indeed software, and I don't think there is much I could add to that.

 

I guess there must be something to the theories; we're sustaining a whole industry producing high-end digital audio cables. All the people manufacturing them can't be rip-off merchants, and all the people buying them can't be fools...... probably.

 

Posted on: 06 April 2015 by SongStream
Originally Posted by Simon-in-Suffolk:

I also consider, and have often posted on this forum, that the 'bits are bits' argument is a fallacy where digital audio signal reconstruction is concerned. My summary:

  1. PCM sample data uses sample words, not a bit stream..............

PS I am also a senior design engineer within a large blue-chip IT services company, just in case that might be relevant in any way.

 

Simon

Interesting, I will have to look that one up, Simon.  "Sample words" is not a term I am familiar with; or does this term relate directly to the transporting of bits in a frame?  I was aware of that, having read (the interesting bits of) the USB documentation, a 600-page PDF as I remember, but it did make quite clear the differences between USB data transfer as used for audio and other time-critical transfers, versus that used for a printer, for example.  Ever since, I've been waiting for someone to make the argument that if USB cables made any difference and data could go missing, letters and words would be missing when they print a document to their USB printer.  Annoyingly, people here seem to be too clever for that one. 

 

PS I can drive a tractor if that's any help.

Posted on: 06 April 2015 by andarkian
Originally Posted by feeling_zen:

It's interesting that this thread suddenly became the ol' "bits are bits" chestnut again. But it also shows some common misconceptions and a lot of jumping to conclusions based on incomplete understanding. As a senior IT specialist for a major blue-chip IT company, I cannot deny that bits are bits, because I know this to be true, and empirically provable. There is no such thing as a better quality bit.


But I also know there is more engineering behind how these bits are conveyed, and different issues arise at each stage. So while bits may be bits, that is only the first chapter in a longer, more complex story.

 

Optical cabling

Longer runs of optical cable (generally longer than 8-10m) will be susceptible to attenuation of the red spectrum used in the communication. This can result in stray bits, but is unlikely in the length of runs used for home audio. Shorter runs of optical medium with minor occlusions can also cause this. It is not a major issue for IP over optical, since the packets are verified and resent as required; audio, on the other hand, must accept a linear, non-negotiated stream. Still, the main issue here for sub-24/96 streams isn't the bits (which should be perfect in most cases) but jitter and microphonic vibrations. Most Naim units reclock the input to reduce jitter to near zero (near, because this has to happen after the Toslink receiving unit, which operates within defined frequencies). Better cables reduce vibrations in a similar way to how Naim's Airplug works: the optical medium floats inside a much larger outer sheath and, if possible, the terminating ends of the cable are decoupled from the central optical medium. Vibrations do not affect bits, but they do affect analogue circuits in the DAC, the most delicate being the first-stage analogue output, before pre-amplification brings the signal up to line level.

 

Coaxial cabling

Similar to optical: on short enough runs it is capable of bit-perfect transfer, and at higher data rates than the Toslink audio specification. Red-spectrum attenuation obviously does not apply, and neither do any artefacts or additional non-digital noise caused by the conversion of light pulses to electrical signals. It is generally preferred as a superior carrier to Toslink (open to debate; I don't want to get into that here) but is subject to RF interference. RF interference is generally not enough to change a bitstream, so again, it should be bit perfect. But it does introduce the same microphonic vibration issue, and can now carry RF interference picked up outside the unit to inside the unit, where it can be picked up by any susceptible analogue circuits. Generally, microphonics are mitigated by virtue of a heavier cable to damp any effect, and suitable screening handles the RF.

 

Network cabling

Since ethernet transmissions are controlled, validated, resent, and sequenced, there should never be any impact on the payload. Missing data packets, or packets with failed checksums, are re-requested by the receiving end as part of the protocol. So from the cheapest to the most expensive cable, the payload will not change, and packet sniffing proves this. But network cables introduce the same problems of RF interference and microphonic vibrations as any other non-optical cable, which can affect analogue circuits. Therefore moderate care in choosing a sturdy, well-screened network cable will yield some improvements; there is no need to go crazy though, as some expensive LAN cables are pure snake oil.

 

What affects data integrity for networks more than anything is network configuration and the quality of switches and routers. Poor configuration can lead to delayed packets or frequent retransmissions; ditto for the hardware. Older or cheaper components may only support half duplex (send or receive, but not simultaneous send+receive), and mismatches in port configurations between any two components (generally, domestic network components allow no user configuration) can lead to packet collisions. The good news is that the data either arrives complete (bit perfect) or not at all. The bad news is that it can arrive late or out of sequence, generating a lot of overhead for the receiving unit.
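
The verify-and-resend behaviour is easy to sketch (a toy model in Python; real ethernet/TCP is far more involved, and CRC32 here is purely illustrative):

    import random
    import zlib

    def transmit(payload):
        # Simulate a link that occasionally flips one bit in transit.
        data = bytearray(payload)
        if random.random() < 0.3:
            data[random.randrange(len(data))] ^= 0x01
        return bytes(data)

    def receive(payload, attempts=20):
        # Re-request until the checksum matches: the payload arrives
        # bit perfect, or (if the link is hopeless) not at all.
        want = zlib.crc32(payload)
        for attempt in range(1, attempts + 1):
            data = transmit(payload)
            if zlib.crc32(data) == want:
                return data, attempt
        raise IOError("link too unreliable")

    data, tries = receive(b"16-bit PCM sample words")
    print("payload intact after", tries, "attempt(s)")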

 

UPnP

As has been discussed elsewhere on the forum, fed with identical audio files, these can yield different results. Whether they do or do not transcode obviously totally changes the stream (since the formats transferred to the streamer could be totally different). Even with direct WAV data, some change the stream to apply preset recording-level normalisation, or add (or artificially remove) dynamic range compression by applying transformations directly to the digital stream. If you want to ensure your bit is really a bit over UPnP, you are going to need to trust that the server faithfully transcodes unadulterated data as it unpacks it from the file, or passes the file as-is (but that offloads file unpacking to the streamer, which has its own impact).
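
As a toy example of how two servers fed the same file can put different bits on the wire (both servers here are hypothetical, and the "normalisation" is just a made-up gain):

    # Two hypothetical UPnP servers fed the same decoded 16-bit samples.
    samples = [1000, -2000, 30000, -30000]

    def server_a(s):
        return list(s)                     # bit-perfect pass-through

    def server_b(s, gain=0.9):
        # "Helpful" level normalisation applied in the digital domain.
        return [max(-32768, min(32767, round(x * gain))) for x in s]

    print(server_a(samples) == server_b(samples))   # False: same source file,
                                                    # different bits on the wire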

 

Digital out from PC

This is often not bit perfect, which is exactly why the DAC-V1 comes with a utility to verify this. Cheaper soundcards may downsample streams, and many applications at least apply digital transformations to adjust the recording level so you can adjust volume from the PC. Some users on the forum have also stated that they use room-correction software before sending output to the DAC, so again, the stream is not going to exactly match the input file's bitstream.
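
A toy calculation (illustrative numbers, not any particular driver's mixer) shows why a software volume control alone breaks bit-perfection:

    def volume(samples, level):
        # A digital volume control: scale and truncate 16-bit sample words.
        return [int(s * level) for s in samples]

    original   = [32767, 101, -7, 4444]
    attenuated = volume(original, 0.8)        # what the DAC now receives
    restored   = volume(attenuated, 1 / 0.8)  # even "undoing" the maths later

    print(attenuated)   # not the words the file contained
    print(restored)     # rounding means the original bits are gone for good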

 

For these reasons, the concept of "source" does not fundamentally change: anything between your source audio data and the preamp is the "source", of which the DAC is only one part.

 

IMNSHO.

As a senior IT engineer, are you saying that since the inception of the modem, the modulation/demodulation process has been inherently flawed, depending on the data transfer mechanism? Whatever the 'sound card' or DAC does to that data, it has to be bit perfect or no music. How it interprets the data after reception is of course a significant factor, and the analogue signal passing through subsequent hardware and cabling may be subject to colouration; no argument at all. However, to return to my bête noire, the Devialet Phantom: they claim to shorten the analogue pathway and compare the digital and analogue signals to eliminate colouration or interference. At least that's my interpretation. Whether it works to satisfactory audiophile standards still requires proper third-party assessment, but hopefully it addresses a number of issues raised in the above thread. By the way, the app Naim are using is behaving like a remote 2.4 kb teletype circa 1975 today, for some very strange reason. One letter per second.

Posted on: 06 April 2015 by feeling_zen
Originally Posted by andarkian:
As a senior IT engineer, are you saying that since the inception of the modem, the modulation/demodulation process has been inherently flawed, depending on the data transfer mechanism?
 
I hadn't said that, no. Though it is true that SPDIF, being a one-way carrier protocol, is flawed in that there is no way for retransmissions to be requested on error. The receiving end has to either correct an error by extrapolation, or momentarily drop the processing until it starts receiving expected datagrams again. At least with ethernet, packets are verifiable and retransmittable.
 
Whatever the 'sound card' or DAC does to that data, it has to be bit perfect or no music.
 
Not true. It expects the data to be "complete", which is not the same as "perfect". It has no concept of perfect data anyway. If the DAC expects a 16-bit word for a single sample, and in that transmission something (software or whatever) changed a single 0 to a 1, it is going to result in a different interpretation by the DAC, not "no music". Incomplete data is possible too. Scratched CDs cause the transport to attempt C1/C2 error correction to figure out (best guess) what the unreadable block contained, so that it can pass a "complete" 16-bit sample to the DAC. Not necessarily perfect.
 
How it interprets the data after reception is of course a significant factor, and the analogue signal passing through subsequent hardware and cabling may be subject to colouration; no argument at all. However, to return to my bête noire, the Devialet Phantom: they claim to shorten the analogue pathway and compare the digital and analogue signals to eliminate colouration or interference. At least that's my interpretation. Whether it works to satisfactory audiophile standards still requires proper third-party assessment, but hopefully it addresses a number of issues raised in the above thread. By the way, the app Naim are using is behaving like a remote 2.4 kb teletype circa 1975 today, for some very strange reason. One letter per second.

My comments in blue
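
To illustrate the point in blue with a toy 16-bit sample word (the numbers are purely illustrative):

    def flip_bit(word, n):
        # Flip bit n of a 16-bit two's-complement sample word.
        u = (word & 0xFFFF) ^ (1 << n)
        return u - 0x10000 if u & 0x8000 else u

    sample = 12345
    print(flip_bit(sample, 0))    # 12344: a one-LSB error, inaudible
    print(flip_bit(sample, 14))   # 28729: a loud click, but still "music"
    print(flip_bit(sample, 15))   # -20423: the sign flips; still not silence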

Posted on: 06 April 2015 by andarkian
Originally Posted by feeling_zen:
Not true. It expects the data to be "complete", which is not the same as "perfect"..............

Blimey, this is fun! The SPDIF output tends to be the end product, and not subject to repackaging, similar to what is spat out of the DAC. If the received signal at the DAC is corrupt in any way it will be resent. If you are saying that between its arrival and presentation to the DAC there is a chance of corruption, then I guess I have to agree, but it would potentially be very noticeable. Personally, I believe that processing inside the DAC is where the music is 'made'. I do believe that is my only real argument.

Posted on: 06 April 2015 by feeling_zen
Originally Posted by andarkian:

... If the received signal at the DAC is corrupt in any way it will be resent.

Why do you think it is resent? The DAC cannot request data to be resent. SPDIF over optical or coaxial is unidirectional. If the data is corrupt, the DAC cannot get a resend.

For UPnP etc. over a network, packets that fail to arrive, or packets that fail checksums, are resent, but that is handled by the NIC, not the DAC.

Posted on: 06 April 2015 by SongStream

Agree with feeling_zen here; with both the SPDIF and USB interfaces, I am positive there is no option to resend data via the interface.  I believe, in both cases, the data can still be validated and errors detected, but the DAC is stuck with it: no retry.  Therefore, as with CD players, I imagine one of the functions in life for the SHARC DSP in a DAC-V1 is, as feeling_zen has already suggested, to figure out a best guess based on some error-correction algorithm and smooth out the error before feeding the DAC chip. 

Posted on: 06 April 2015 by andarkian

Seriously guys, we are talking about data that is network-oriented, not turntables or CDs. The vagaries of both those inputs are well known and accepted. When we were talking about SPDIF interconnects, I truly thought you meant the link between my Muso and the TV, and not from CD to DAC. If the data that arrives at the front of the DAC from the network is corrupt in any way, it will be resent.

Posted on: 06 April 2015 by andarkian
Originally Posted by feeling_zen:
Originally Posted by andarkian:
Originally Posted by feeling_zen:
Originally Posted by andarkian:
As a senior IT engineer, are you saying that since the inception of the modem that the modulation / demodulation process is inherently flawed depending on the data transfer mchanism?
 
I hadn't said that no. Though it is true that SPDIF, being a one-way carrier protocol is a flawed in that there is no way for retransmissions to be requested on error. The receiving end has to either correct an error by extrapolation or momentarily drop the processing until it starts receiving expected datagrams again. At least with ethernet, packets are verifiable and retransmittable.
 
Whatever the 'sound card' or DAC does to that data it has to be bit perfect or no music.
 
Not true. It expects the data to be "complete" which is no the same as "perfect". It has no concept of perfect data anyway. If the DAC expects a 16bit word for a single sample and in that transmission something (software or whatever) changed a single 0 to a 1 it is going to result in a different interpretation by the DAC, not "no music". Incomplete data is possible too. Scratched CDs cause the transport to attempt C1/C2 error correction to figure out (best guess) what the unreadable block contained so that it can pass a "complete" 16 bit sample to the DAC. Not necessarily perfect.
 
How it interprets the data after reception is of course a significant factor. And the analogue signal passing through subsequent hardware and cabling may be subject to colouration - no argument at all. However, to return to my bête noire, the Devialet Phantom, they claim to shorten the analogue pathway and compare the digital and analogue signals to eliminate colouration or interference. At least that's my interpretation. Whether it works to satisfactory audiophile standards still requires third party proper assessment, but hopefully it addresses a number of issues raised in the above thread. By the way, this app Naim are using is behaving like a remote 2.4 kb teletype circa 1975 today for some very strange reason. One letter per second.

My comments in blue

... If the received signal at the DAC is corrupt in any way it will be resent.

Why do you think it is resent? The DAC cannot request data to be resent. SPDIF over optical or coaxial is unidirectional. If the data is corrupt, the DAC cannot get a resend.

For UPnP etc. over a network, packets that fail to arrive, or packets that fail checksums, are resent, but that is handled by the NIC, not the DAC.

Agree about the NIC, and if there were a six-foot connection between the network connector and the DAC, surrounded by all sorts of open electrical interference, I am certain there might be an issue. However, even then, the DAC has to have legitimate words to be able to generate music. 

Posted on: 06 April 2015 by SongStream
Originally Posted by andarkian:

Seriously guys, we are talking about data that is network-oriented, not turntables or CDs. The vagaries of both those inputs are well known and accepted. When we were talking about SPDIF interconnects, I truly thought you meant the link between my Muso and the TV, and not from CD to DAC. If the data that arrives at the front of the DAC from the network is corrupt in any way, it will be resent.

I can't speak for anyone else, but I am specifically talking about a USB interface to an external DAC from a PC, and SPDIF from the same, or from a streamer without, or bypassing, its own DAC.  In these scenarios it is not possible for the host (streamer) to resend data to the client (DAC; and by DAC I mean the whole box on the end of the cable, not specifically the chip inside).

Posted on: 06 April 2015 by andarkian
Originally Posted by SongStream:
Originally Posted by andarkian:

Seriously guys, we are talking about data that is network-oriented, not turntables or CDs. The vagaries of both those inputs are well known and accepted. When we were talking about SPDIF interconnects, I truly thought you meant the link between my Muso and the TV, and not from CD to DAC. If the data that arrives at the front of the DAC from the network is corrupt in any way, it will be resent.

I can't speak for anyone else, but I am specifically talking about a USB interface to an external DAC from a PC, and SPDIF from the same, or from a streamer without, or bypassing, its own DAC.  In these scenarios it is not possible for the host (streamer) to resend data to the client (DAC; and by DAC I mean the whole box on the end of the cable, not specifically the chip inside).

My reply is as to feeling_zen above. The cable interconnect can suffer corruption, but you will notice it. I decided to feed my Muso from the iPad I am using at the moment, connected to Spotify; while I was reading newspapers on Safari, Spotify kept stopping, and I have had to use my MacBook. The point is that the slightest interference wrecks digitised music. It's not the rumble and clicks of a record, and even a CD will only 'correct' so far, and it will be noticeable. 

Posted on: 06 April 2015 by feeling_zen
Originally Posted by andarkian:

Agree about the NIC, and if there were a six-foot connection between the network connector and the DAC, surrounded by all sorts of open electrical interference, I am certain there might be an issue. However, even then, the DAC has to have legitimate words to be able to generate music. 

Yes we are now getting somewhere!

 

This is the point: "perfect" data and "complete" data are different things. Over a network, packets that fail checksum validation, or never arrive, are resent, but this mechanism has no knowledge of the meaning of the payload (which is arbitrary). At the next level up (two levels up actually, but I'm trying to keep this simple), the software on the streamer can analyse the content and perform validation. But this confirms only:

1. The data is intended for me and not another application
2. The content is in the correct format and understandable (what you call "legitimate")

Assuming the software in the streamer understands the data received and it appears "complete", it can go to the next stage (whether that is further processing, or being assembled and fed into a SHARC processor, or whatever). Simplifying things a bit, assume your PC is making a WAV available to the streamer and there is zero problem on the network. The streamer can only know whether the data is indeed WAV and is readable. It cannot know whether the WAV data it received is the same as the file it requested. But it is bit perfect in the sense that nothing is lost over the network, or if it is, it causes the audio to drop out entirely.
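
A sketch of that two-level distinction (the file bytes below are stand-ins; the point is what each check can and cannot see):

    import hashlib
    import zlib

    source = b"RIFF....WAVEdata" + bytes(64)   # stand-in for the file on the NAS
    served = source[:-1] + b"\x01"             # a server that "improved" one byte

    # Level 1: transport integrity. The checksum travels with the sent data,
    # so it can only confirm that what was sent is what arrived.
    sent_crc = zlib.crc32(served)
    print("transport says complete:", zlib.crc32(served) == sent_crc)   # True

    # Level 2: end-to-end identity. Only a hash of the original file
    # could reveal that the stream was altered upstream.
    print("payload is perfect:",
          hashlib.sha256(served).digest() == hashlib.sha256(source).digest())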

 

In a scenario where the UPnP server always passes data as-is, or transcodes a lossless file faithfully to WAV, then what arrives at the DAC (the hi-fi unit, not the chip) should be bit perfect. The problem is, some don't transcode faithfully. Half of the threads discussing UPnP servers deal with sound differences as much as with non-audio usability quirks. If someone is really eager, the degree to which a WAV payload delivered by a specific server differs from the source file is measurable, albeit time-consuming and painfully dull.

 

To be clear, I am not disputing that it is possible to deliver bit-perfect audio over UPnP with the right software. Just that the original thread topic of source definition needs to include everything between the audio file and the preamp, and not just the DAC.

Posted on: 06 April 2015 by SongStream

Below is just the headline copied from the USB 2.0 spec; there's much more detail in the document, much more, but this basic summary should certainly eliminate any doubt with regard to re-sending data via USB at least.

 

5.6 Isochronous Transfers
In non-USB environments, isochronous transfers have the general implication of constant-rate, error tolerant transfers. In the USB environment, requesting an isochronous transfer type provides the requester with the following:
• Guaranteed access to USB bandwidth with bounded latency
• Guaranteed constant data rate through the pipe as long as data is provided to the pipe
• In the case of a delivery failure due to error, no retrying of the attempt to deliver the data
While the USB isochronous transfer type is designed to support isochronous sources and destinations, it is not required that software using this transfer type actually be isochronous in order to use the transfer type. Section 5.12 presents more detail on special considerations for handling isochronous data on the USB.
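
Put as a toy simulation (just the semantics of the two transfer types, not real USB stack code):

    import random

    def deliver(data):
        # One frame delivery that sometimes fails its CRC check.
        return None if random.random() < 0.2 else data

    def bulk(data):
        # Bulk (printer-style): the host retries until the data gets through.
        got = deliver(data)
        while got is None:
            got = deliver(data)
        return got

    def isochronous(data):
        # Isochronous (audio-style): bandwidth is guaranteed, retries are not.
        return deliver(data)               # a failed frame is simply gone

    print(bulk(b"page of text"))           # always complete, however late
    print(isochronous(b"1 ms of audio"))   # sometimes None: that frame is lost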

Posted on: 06 April 2015 by andarkian
Originally Posted by SongStream:

Below is just the headline copied from the USB 2.0 spec; there's much more detail in the document, much more, but this basic summary should certainly eliminate any doubt with regard to re-sending data via USB at least.

 

5.6 Isochronous Transfers
In non-USB environments, isochronous transfers have the general implication of constant-rate, error tolerant transfers. In the USB environment, requesting an isochronous transfer type provides the requester with the following:
• Guaranteed access to USB bandwidth with bounded latency
• Guaranteed constant data rate through the pipe as long as data is provided to the pipe
• In the case of a delivery failure due to error, no retrying of the attempt to deliver the data
While the USB isochronous transfer type is designed to support isochronous sources and destinations, it is not required that software using this transfer type actually be isochronous in order to use the transfer type. Section 5.12 presents more detail on special considerations for handling isochronous data on the USB.

Yep, Garbage Out, Garbage In. I guess that is why I am listening to Black Sabbath translated into Gregorian chants via interference on the connection between the Muso receptor and the DAC.

Posted on: 06 April 2015 by SongStream
Originally Posted by andarkian:
Yep, Garbage Out, Garbage In. I guess that is why I am listening to Black Sabbath translated into Gregorian chants via interference on the connection between the Muso receptor and the DAC.

:-D Amusing. Is it an improvement, though?

Posted on: 06 April 2015 by Jota
Originally Posted by karlosTT:

There was a very interesting recent piece by Michael Lavorgna in Audiostream magazine on the topic of "bits are bits" vs cables.

 

His argument boils down to saying that unless something is truly broken, bits will always reach their destination intact and unchanged, but (as someone mentioned above) the cable may carry with it, or even introduce, noise, and this affects the audio as we hear it.  Better cables have better screening and noise rejection, so we may detect a subjective improvement in their sound.  This would apply to all connections, including LAN cables, thus explaining why some folks hear benefits from exotica such as Audioquest Vodka.

 

This makes sense to me, though for my part I'm very happy with Grey Goose.....  ;-)

 

 

 

I'd like to see the data and evidence of this, as a matter of interest, because if the difference in noise levels is so great that it clearly affects the audio, it should easily be measurable.

Posted on: 06 April 2015 by andarkian

 

 

Originally Posted by SongStream:
Originally Posted by andarkian:
Yep, Garbage Out, Garbage In. I guess that is why I am listening to Black Sabbath translated into Gregorian chants via interference on the connection between the Muso receptor and the DAC.

:-D Amusing. Is it an improvement, though?

Well, to quote Bowie circa 1972, it beats the Crazy Cosmic Jive I am supposed to be hearing in the spaces between tunes. Incidentally, it is actually very, very quiet, at least as far as the Muso is concerned.

Posted on: 06 April 2015 by Jota
Originally Posted by feeling_zen:

In a scenario where the UPnP server always passes data as-is, or transcodes a lossless file faithfully to WAV, then what arrives at the DAC (the hi-fi unit, not the chip) should be bit perfect. The problem is, some don't transcode faithfully. Half of the threads discussing UPnP servers deal with sound differences as much as with non-audio usability quirks. If someone is really eager, the degree to which a WAV payload delivered by a specific server differs from the source file is measurable, albeit time-consuming and painfully dull.

 

To be clear, I am not disputing that it is possible to deliver bit-perfect audio over UPnP with the right software. Just that the original thread topic of source definition needs to include everything between the audio file and the preamp, and not just the DAC.

 

Which UPnP servers do not transcode "faithfully"?  When you say faithfully, do you mean accurately?

Who discovered this, how is it possible and where can I read about it? 

 

tia.

Posted on: 06 April 2015 by feeling_zen
There are probably lots of threads on that already. Many members have documented differences they have heard, and there seems to be a "rough" consensus on a few good ones.

Using my own ears, I have not been thrilled with the sound from the built-in Windows UPnP server, iTunes, the PS3 code variant, or XBMC. Asset and MediaTomb sound better, as well as identical to each other.

To do an empirical test, it is possible to set up a packet sniffer between the streamer and the switch, capture and reconstruct the packets of a WAV, and compare the non-header payload with the source file. With a bit of effort you can write a program to process each byte as 8 separate bits and calculate the percentage of deviations over the whole payload. I have pondered this a few times, but it would take a day and I am not sure what the point is. I can hear the difference, and I only need to convince myself.
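
For what it's worth, the comparison step itself is only a few lines; the capture and reassembly is the tedious part. A sketch, assuming you already have the reconstructed payload and the source WAV's data chunk as bytes:

    def bit_deviation(captured, source):
        # Percentage of bits that differ between two equal-length payloads.
        assert len(captured) == len(source)
        diff = sum(bin(a ^ b).count("1") for a, b in zip(captured, source))
        return 100.0 * diff / (8 * len(source))

    # e.g. after reassembling the stream from a packet capture:
    print(bit_deviation(b"\x00\x10\xff", b"\x00\x11\xff"))   # one bit in 24, ~4.17%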
Posted on: 06 April 2015 by fatcat
Originally Posted by Jota:

I'd like to see the data and evidence of this, as a matter of interest, because if the difference in noise levels is so great that it clearly affects the audio, it should easily be measurable.

It is easily measurable.

 

The greater the noise, the greater the jitter.
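
As a toy model of that relationship (assuming noise at the receiver's decision threshold displaces each clock edge in proportion to the slew rate; the figures are made up):

    import random
    import statistics

    SLEW = 1.0e9   # receiver edge slew rate in volts per second (made up)

    def edge_timing_error(noise_v):
        # Voltage noise at the decision threshold shifts the moment
        # the edge is detected by roughly noise / slew-rate seconds.
        return noise_v / SLEW

    for noise_rms in (0.001, 0.01, 0.1):   # RMS noise in volts
        errs = [edge_timing_error(random.gauss(0, noise_rms))
                for _ in range(10000)]
        print(noise_rms, "V rms ->",
              round(statistics.stdev(errs) * 1e12, 1), "ps of jitter")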

Posted on: 06 April 2015 by Simon-in-Suffolk

Songstream... Regarding sample words: a nibble is 4 bits, a byte is 8 bits, and a word is 16 bits (and can refer to higher bit depths too)... The sample bit depths we use are 16-bit or 24-bit, and therefore are words. That's all I was referring to.
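
For anyone who wants to see sample words concretely, Python shows the sizes nicely (a 16-bit word packs into two bytes; there is no native 24-bit type, so those words travel as three bytes):

    import struct

    word16 = -12345
    print(struct.pack("<h", word16).hex())    # a 16-bit sample word: 2 bytes

    word24 = -1234567
    print((word24 & 0xFFFFFF).to_bytes(3, "little").hex())   # a 24-bit word: 3 bytes

    print(hex(0xAB >> 4), hex(0xAB & 0x0F))   # one byte splits into two nibbles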

Simon

 

Posted on: 06 April 2015 by andarkian
Originally Posted by Simon-in-Suffolk:

Songstream... Regarding sample words: a nibble is 4 bits, a byte is 8 bits, and a word is 16 bits (and can refer to higher bit depths too)... The sample bit depths we use are 16-bit or 24-bit, and therefore are words. That's all I was referring to.

Simon

 

Blimey, a nibble! I haven't heard that since my Assembler days way back in the '70s. I even managed to corrupt a UK-wide invoicing program by compiling it in a virtual environment without enough virtual space. You could do that; it just dropped off half the code, which didn't help the logic at all. My 1am call-out to fix the mess was not appreciated, particularly by senior management.

Posted on: 06 April 2015 by SongStream
Originally Posted by Simon-in-Suffolk:

Songstream... Regarding sample words: a nibble is 4 bits, a byte is 8 bits, and a word is 16 bits (and can refer to higher bit depths too)... The sample bit depths we use are 16-bit or 24-bit, and therefore are words. That's all I was referring to.

Simon

 

Ah, with you.  Thanks.