Definition of bit perfect

Posted by: Jude2012 on 18 February 2014

I use a DAC V1, and this DAC makes use of bit perfect tests.

So, I have been wondering what this actually means. Many other forums, including Audiophilleo's, have, for me, a vague or incomplete description.

I would like to understand this area and how it relates to other digital characteristics, such as jitter.

Jude
Posted on: 19 February 2014 by mutterback

Here's my quasi-technical understanding of these issues:

I take Bit Perfect to mean that all the bits in the music file on your drive are making it to your DAC without any alteration. In other words, the bits are not changed.

 

This comes up a lot when PCs, Macs, or any other renderer/player processes the audio before it is output to your DAC, and/or the bits get mangled while being sent out from your PC. Without subverting the operating system's sound management processes, in most cases the output will be changed, and therefore will not be bit perfect. That's where all the software and settings (BitPerfect, Amarra, JRiver, the Audio MIDI settings on a Mac, WASAPI on a PC, etc.) come in.

 

You can have bit perfect output and tons of jitter. Jitter has to do with the timing, and I believe order, of the bits reaching your DAC, and then the impact that has on the actual audio signal to your speakers. Most computer standards (TCP/IP, Ethernet, USB) are intentionally designed not to be sensitive to the timing, or even order, of incoming bits. This is fundamental for computer networking, but horrible for audio, which is absolutely time-sensitive. There are three aspects to this: the sources of jitter inside your PC or renderer, how the connection between the PC and DAC manages jitter, and finally how the DAC handles the jitter coming in. A DAC that reduces jitter and the asynchronous USB standard enable the bits to be retimed properly before they are converted to audio by your DAC.
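To make the independence of the two ideas concrete, here is a toy sketch in Python (purely illustrative; the data structures are invented for the example). The payload can be bit perfect even when the word timing is irregular, and vice versa.

```python
# Toy model of a delivered stream: each received word is paired with
# the time it arrived. "Bit perfect" looks only at the payload;
# jitter looks only at the timing.

def is_bit_perfect(source_words, received):
    """True if the received payload matches the source exactly."""
    return source_words == [word for word, _ in received]

def peak_jitter(received, sample_rate):
    """Largest deviation (in seconds) of any arrival time from an
    ideal uniform grid anchored at the first word."""
    period = 1.0 / sample_rate
    t0 = received[0][1]
    return max(abs(t - (t0 + i * period))
               for i, (_, t) in enumerate(received))
```

A stream whose words all match the source passes the bit perfect check even if its peak jitter is large; the two measures are entirely separate.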

Posted on: 19 February 2014 by Dozey

It just means there is no difference in the binary data between the source file and the destination file.
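That definition is easy to state in code. A minimal sketch in Python, assuming you can capture the destination stream to a file (the function and file names here are made up for illustration):

```python
import hashlib

def is_bit_perfect(source_path, captured_path):
    """True if the two files contain exactly the same bytes.

    Hashing is sufficient here: any changed, dropped, or reordered
    byte produces a different digest.
    """
    def digest(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.digest()
    return digest(source_path) == digest(captured_path)
```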

Posted on: 19 February 2014 by Jude2012

@mutterback, great explanation.

 

The thing that puzzles me is: what happens when there is an error that causes a non-bit-perfect stream? In the case of Naim DACs and streamers, would this mean you get a buffer over- or under-run (i.e. a system crash), or will it show up as an artefact of some sort in the sound/music?

 

Also, it must mean that bit perfect is an absolute measure rather than a standard (with, say, a range or tolerance), is that right?

 

Jude 

Posted on: 19 February 2014 by Simon-in-Suffolk

I think Dozey is spot on. It may seem obvious, but some computer audio systems modify the sample data through mixing or some sort of gain process. 'Bit perfect' effectively means this manipulation of the source sample data is avoided.

What is also worth considering is that modifying the samples isn't necessarily always bad, but you want to know it's happening, and to be able to revert to the actual source values if you wish. So-called bit perfect has no bearing on temporal data such as jitter, which is why, IMO, it's more a marketing term than a useful term in digital signal integrity with respect to SQ.

 

Simon

Posted on: 19 February 2014 by Jude2012

Thanks all.  

 

Not sure whether Naim streamers or the NDAC have a bit perfect check, but in the grander scheme of streaming and digital audio, it seems to me like just one factor that influences the sound/music reproduction, not the be-all and end-all (maybe this is obvious, but it's good to get it into context).

 

Jude

 

Posted on: 19 February 2014 by Simon-in-Suffolk

Jude, no, the NDAC has no so-called bit perfect check, since typically its input is from SPDIF. It's in the area of USB isochronous transfers from computers where, subjectively, there has been more of a tendency for computer digital subsystems to modify audio data. So in such circumstances having a reference sample check is of prime value.

The NDAC does, however, have a temporal check: if the sample transport rate is synchronised to one of the high-precision NDAC transport sample clocks, an LED is lit. If such synchronisation is not possible, a PLL-type synch is adopted to decode the sample data, but such a synthesised clock has higher digital temporal noise (jitter), in which case the lock LED is extinguished.
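As a purely hypothetical sketch of that lock decision (the NDAC's actual clock rates and tolerances are not published; the rate list and the 50 ppm figure below are arbitrary stand-ins):

```python
# Hypothetical lock logic: "light the LED" only if the measured
# incoming sample rate sits within a tight tolerance of one of a
# set of fixed local transport clocks; otherwise the device would
# fall back to a PLL-synthesised clock.
FIXED_RATES_HZ = (44100, 48000, 88200, 96000, 176400, 192000)
TOLERANCE_PPM = 50  # invented figure, for illustration only

def clock_locked(measured_rate_hz):
    """True if the measured rate is within tolerance of a fixed clock."""
    return any(abs(measured_rate_hz - r) / r * 1e6 <= TOLERANCE_PPM
               for r in FIXED_RATES_HZ)
```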

Simon

 

Posted on: 19 February 2014 by Sloop John B

Simon,

 

What (if anything) does the synch LED being lit tell us about the incoming signal?

SJB

Posted on: 19 February 2014 by Simon-in-Suffolk

Hi SJB, if the synch light is on, it tells us that the sample transport frequency of the sender matches almost precisely one of the fixed NDAC transport frequencies. For one of the standard sample rates, this tells us the sender's clock is accurate.

Simon

Posted on: 19 February 2014 by Sloop John B

Excuse my ignorance, but does this then mean (if the clock is accurate) that there will be no jitter? If not, what does an inaccurate clock do to the sound?

 

 Thanks

 

SJB

Posted on: 19 February 2014 by engjoo

My understanding.

 

Bit perfect to me means we have the correct music data at the source (e.g. a file) and it reaches the destination faithfully. This does not mean zero jitter.

 

But what is defined as the destination seems to be subject to different interpretations. Is it the streamer's network interface board (or the USB port in the case of a DAC V1), or the DAC itself?

 

Posted on: 19 February 2014 by Jude2012

Thanks for clarifying the situation with the NDAC, Simon. It does raise a couple of queries:

a) Do SPDIF, bit perfect checks, and TCP/IP all rely on comparing samples of the incoming data stream, or do some of them check the stream continuously?

b) Is there any value added by USB-to-SPDIF converters, depending on the answer to a)?

The rabbit hole gets deeper ... :-)

Posted on: 19 February 2014 by Simon-in-Suffolk

Hi Jude, sorry, I didn't understand your question. Why do you think SPDIF and UPnP (TCP/IP) compare samples? They don't themselves, as they are simply transport methods for conveying data. Both use checksumming at different points to check for corruption of both the headers and the payloads, and UPnP, using TCP, additionally has a windowing sequence method to allow the re-sending of lost packets.

SJB, the clock frequency is effectively the mean frequency over a specific time. However, the variance of the mean clock frequency, or more typically the phase variance of the clock, is the jitter. Therefore a locked clock here can, and typically would, still exhibit jitter to varying degrees.
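The distinction between an accurate mean frequency and low jitter can be sketched numerically. This is a simple time-interval-error-style calculation over a list of measured clock-edge timestamps (illustrative only, not how any particular DAC measures it):

```python
import statistics

def mean_rate_and_rms_jitter(edge_times):
    """Return the mean clock frequency over the capture, plus the
    RMS deviation of each edge from the ideal uniform grid implied
    by that mean frequency (a crude jitter figure)."""
    n = len(edge_times)
    mean_period = (edge_times[-1] - edge_times[0]) / (n - 1)
    t0 = edge_times[0]
    # Time-interval error of each edge against the ideal grid.
    tie = [t - (t0 + i * mean_period) for i, t in enumerate(edge_times)]
    return 1.0 / mean_period, statistics.pstdev(tie)
```

A clock can report a mean rate of almost exactly 44100 Hz, and so would "lock", while the RMS figure is still far from zero.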

The effects of reconstructing the sound with transport or sample jitter are quite involved. But in short, inaccurate frequency components are created, which can smear and rob detail within the reconstructed analogue signal.

BTW, oversampling at sample reconstruction can be an effective way of reducing the effects of sample jitter, by effectively transforming the jitter artefacts up in frequency with respect to the signal, where they can be more easily filtered away. Most good texts on DSP can show you this working mathematically.

Simon

Posted on: 19 February 2014 by Jude2012
Originally Posted by Simon-in-Suffolk:

Hi Jude, sorry, I didn't understand your question. Why do you think SPDIF and UPnP (TCP/IP) compare samples? They don't themselves, as they are simply transport methods for conveying data. Both use checksumming at different points to check for corruption of both the headers and the payloads, and UPnP, using TCP, additionally has a windowing sequence method to allow the re-sending of lost packets.

Thanks, as usual, Simon. I was wondering about samples as you mentioned them in your earlier posts, which I now interpret as transport or clock jitter. Also, am I correct in understanding that the bit perfect check in async USB is a way of error checking, similar to UPnP and SPDIF?

Posted on: 20 February 2014 by DavidDever

The BitPerfect testing requires a fixed-length source data file* to be played from an (asynchronous) USB host, the contents of which are sample-by-sample (word-by-word) compared (in the amplitude domain) to its prototype at the DAC input. 

 

This same process could easily be duplicated synchronously via SPDIF or asynchronously via Ethernet, in theory, given the appropriate software** at the receiving end.

 

Perfect BitPerfect accuracy at the highest sample rate supported by the system (per clock family, both 44100 and 48000 multiples) should indicate that divided frequencies are also accurate (e.g., a test passing at 352.8 kHz should clear 176.4 / 88.2 / 44.1 kHz as well, but may not be indicative of performance at 384 kHz, etc.). The tests cannot measure jitter or word-clock variation, unless these are wild enough to cause data comparison errors.

 

* - possibly parametrically generated internal to the DAC to save space?

** - yes, one could do this in an FPGA as well.
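The word-by-word (amplitude-domain) comparison described in the post above could be sketched like this in Python (a generic illustration, not the actual BitPerfect implementation):

```python
def compare_words(prototype, captured):
    """Compare a captured stream against its prototype word by word.

    Returns (passed, first_mismatch) where first_mismatch is the
    index of the first differing word, or None when the streams
    match exactly. A length mismatch fails at the shorter length.
    """
    if len(prototype) != len(captured):
        return False, min(len(prototype), len(captured))
    for i, (p, c) in enumerate(zip(prototype, captured)):
        if p != c:
            return False, i
    return True, None
```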

Posted on: 20 February 2014 by Simon-in-Suffolk

Jude,   "Also, am I correct in understanding that bitperfect checks in async USB is a way of error checking in a similar way to upnp and SPDIF?".

Not really. Although the checksum is a method that can indeed detect corruption of a data frame (and with SPDIF that can mean the frame is dropped), the checksum is established for the SPDIF protocol, by the SPDIF protocol. Therefore if the sample data has been modified before it's framed up to be sent over SPDIF, then SPDIF has no way of knowing this.

The bit perfect test, as far as I am aware, does a known comparison of a specific set of samples. The idea is that these samples have travelled from the known source through the originating computer's digital audio subsystem and on through whatever transport method (USB, SPDIF, TCP) before reaching the DAC. As I say, it's the computer's digital audio subsystem where modification of the digital sample data can take place if care is not taken.
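The point about checksums can be shown with a toy transport frame in Python (the CRC here merely stands in for a transport's own integrity check; none of this is the real SPDIF protocol):

```python
import zlib

def frame(payload):
    """Toy transport frame: payload plus a CRC computed at framing time."""
    return payload, zlib.crc32(payload)

def frame_intact(framed):
    """True if the payload still matches the CRC computed at framing."""
    payload, crc = framed
    return zlib.crc32(payload) == crc

# A gain stage halves every sample value *before* framing.
source = bytes([100, 120, 140, 160])
modified = bytes(b // 2 for b in source)

framed = frame(modified)
assert frame_intact(framed)   # the transport checksum still passes
assert framed[0] != source    # yet the payload is no longer bit perfect
```

Only an end-to-end comparison against the known source samples, which is what the bit perfect test does, catches the upstream modification; the transport-level check is blind to it.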

Simon

 

Posted on: 20 February 2014 by Jude2012

David, Simon, Mutterback, and Dozey,

 

Thanks very much for the responses and explanations. 

 

It is much clearer to me how the two types of jitter, bit perfect tests, and the other error checking built into transport protocols all have a part to play, but each of these on its own is only part of the solution for reliable, high-quality digital music replay.

 

I can also now see the justification for having a low-jitter server, irrespective of the protocol used to transfer the data (of course, I have left out the issues of RFI, EMI, and vibrations, as well as whether the whole system and one's ears are sensitive enough to hear the effects); i.e. the DAC can only work with what it gets.

 

Cheers

 

Jude