Surely it's only digits?
Posted by: David O'Higgins on 11 September 2016
Well maybe, but yesterday I imported my backed up music library from a USB 3.0 external drive into a Melco N1A. The library consists of about 2,100 CDs ripped via my Userve, together with 400+ 24 bit downloads. The import took the best part of an uneventful 24 hours. Leaving aside a lot of problems which I hope to cure by using Minimserver on the Melco instead of Twonky, I am astounded at the improvement in SQ. I had been led to expect something much more modest, but this is serious gain territory, for what (in our mad world!) is a very modest outlay of less than €2,000. The biggest gain seems to come from the direct Ethernet connection of the Melco to the NDS.
So now, 36,000+ new tracks! Where to start? How to go to bed?...
David
This sounds a bit like Bake Off. Twelve frilly pancakes. Not one. Twelve.
Halloween Man posted:the challenge is ten attempts and get all ten right. quick and simple. the odds of getting all ten right by luck is minute unless you can actually hear a difference.
I note that you justify the ten on a statistical basis, whilst at the same time refusing to accept that other statistical tests could be equally valid! Interesting.
there is a certain beauty in simplicity. 'statistical significance across a number of people' is a little more ambiguous.
Here we go again into the binary unknown! When I started programming about a hundred years ago on IBM 370s I had to use Assembler and PL/1. One high and one low level language. The high level language was 'converted' into pseudo-Assembler before being compiled into machine code. I got into trouble with some know-it-all for using a 'Call' instead of a Goto because it created more pseudo-Assembler, to which I just said IBM would build a bigger and faster machine, which they did.
Anyway, the code is the code whether C, COBOL, Java, Assembler or machine level. If it did not translate as intended our world would literally collapse. If you rip a CD to a NAS it will faithfully copy every scratch and flaw as well as plain poor recording and that is what your DAC is going to receive. From there on in anything can and does happen to your music dependent on the DAC's conversion to analogue, the passage of that signal to your amplifier and its progress all the way to your preferred speakers. Plenty good and bad in that pot pourri of electronics, but the raw data will have been the same, just the same as a spreadsheet receives raw data that someone manipulates to make the figures look better, bluer, pinker or graphical.
Halloween Man posted:there is a certain beauty in simplicity. 'statistical significance across a number of people' is a little more ambiguous.
Not really. If it's statistically significant, it simply means that the difference is greater than what chance (randomness) would produce.
Whether the difference that is heard is significant to the listener is another kettle of halibut.
Jan,
True, I'm guilty of using short form parlance and not specifying the required level of statistical significance.
And your point that detection of the difference could be statistically significant, even if the degree of the difference means that the difference in the experience is of little or no (perceptual) significance to the listener, is also well taken.
Jan-Erik Nordoen posted:Halloween Man posted:there is a certain beauty in simplicity. 'statistical significance across a number of people' is a little more ambiguous.
Not really. If it's statistically significant, it simply means that the difference is greater than what chance (randomness) would produce.
Whether the difference that is heard is significant to the listener is another kettle of halibut.
read what i said again. 'statistically significant across a number of people' is MORE ambiguous than 10 out of 10 (which is btw statistically significant and unambiguous).
(pedantic mode on)
'Ambiguous' means unclear or inexact, or open to more than one interpretation.
'Statistically significant' means unlikely to be due to chance alone. So it is unambiguous.
(pedantic mode off)
And I agree completely on the meaning of 10 out of 10.
okay. i guessed 4 out of 5 times right. one person may think this is statistically significant, another may not. therefore it is open to more than one interpretation and therefore ambiguous. it's not clear how many times people should complete the listening test and what percentage of correct answers could be considered statistically significant. ambiguous.
if i guessed 10 out of 10 times right then MORE people would be in agreement that this is statistically significant, and it is therefore less ambiguous. that's all I'm saying.
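The binomial arithmetic behind this exchange is easy to check. A minimal Python sketch (stdlib only) computes the probability of guessing at least k of n trials right by pure chance:

```python
import math

def p_at_least(k: int, n: int, p: float = 0.5) -> float:
    """Probability of at least k correct out of n trials by pure guessing."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# 10 out of 10 by luck: 1/1024, about 0.1% -- hard to argue with
p10 = p_at_least(10, 10)

# 4 or more out of 5 by luck: 6/32 = 18.75% -- genuinely debatable
p4of5 = p_at_least(4, 5)
```

This bears out the point: nearly one listener in five would score 4/5 by coin-flipping, whereas 10/10 happens by luck about once in a thousand attempts.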
This will go around the track many times. The main point is that whatever source you have gives enjoyment. I have 4 sources: vinyl / R2R / NDX / Hugo streaming / CD transport. I enjoy all of them, but analogue sources are still more magical than the digital. The biggest con is high-res files.
Huge posted:Simon, even at the machine code level one is rarely troubled by the physical implementation, even the numeric instruction set codes (i.e. below the assembler level) are still an abstraction of the internal operation of the CPU.
It's only a very small proportion of programmers (a few percent) who have any practical experience interfacing between the digital and analogue domains.
Indeed, and many of those in my professional experience who work at machine code level these days tend to work at system level. Even triggering an interrupt on a system can have unintended physical consequences... but yes compared to most software developers it's a small proportion I'm sure.
From time to time we still hear remarks like "Surely it's only digits?" or "It's just 0's and 1's", but actually it's not.
However, finding the language to convey the truth often proves rather difficult, so I thought I would have a go, without using any jargon, being the non-technical person I am.
I see the 0's and 1's as a representation, not an exact equivalent to the reality. We don't "see" the reality (voltages in this case) directly, so we use a representation. This means we can prove that the representation at the source is the same as the representation at the destination, but still cannot claim the same equality for the underlying reality.
Does this help?
Ears posted:
Does this help?
It's a perfect theme for a term paper in a first-year course in "Philosophy in Digital Electronics."
Otherwise, not so much.
Yes, I was wondering if I spotted the fingerprints of philosophy.
I think the problem may lie in the success or failure of the predicate that the digital representation is the entirety of the underlying reality. Specifically, the principle of the 'bits are bits' argument lies in the abstraction of integer mathematics, where there is a provable (or, to be precise, at least entirely self-consistent) system available, with a finite set of correct solutions.
N.B. the formalised language of predicate logic is far from my speciality, so there may well be errors of expression here!
Hi Huge, I would say the 'bits are bits' argument is predicated on abstraction, as you say, and that the abstraction's meaning can be consistently maintained across systems. However, when we are using bits to represent discrete samples or data streams, the bits represent a discrete stream of sample data or values. When we have discrete streams, time becomes a part of the meaning of the representation of the values. So we have the abstracted value determined by the real variable of time ... hence the results we see. In that sense it is not purely an abstract representation.
i agree your brain starts going through somersaults thinking about it
Yes Simon, that is where the viewpoint of a software engineer working at the system or application level diverges from the viewpoint of someone working at the interface hardware design level. I'm slightly unusual in that I've done both!
Huge and Simon, could we see time as the abstraction for the reality of clocks and the minuscule variations of their time-keeping? In this way both the digits and time would be abstractions.
Huge posted:.. I'm slightly unusual in that I've done both!
That makes two of us
Ears posted:Huge and Simon, could we see time as the abstract for the reality of clocks and the miniscule variations of their time-keeping? In this way both the digits and time would be abstractions.
Hi Ears, I think, if I follow you correctly - perhaps yes. Artificial clocks will have a degree of variation or error between them, therefore a time-encoded rendition will have time-based distortion such as jitter - but if the distortion is below the noise floor or precision of the encoded rendition of the abstracted signal then it will be irrelevant. However, to my earlier points, system intermodulation means that any connected sub-systems that 'process' the time-encoded rendition will produce their own artefacts, independent of the original digital abstraction and its original timed rendition, and in a closed system these will inter-modulate with each other. I hope you can follow ... it gets hard to describe my thoughts without a whiteboard ...
But I think the question your point ultimately raises is: is a distorted signal an abstraction of the original? I think one has to say yes ... but it's a different sort of abstraction from how a signal can be accurately represented, which is information entropy ...
Perhaps Huge might have a different perspective.
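Simon's "below the noise floor" point can be put into rough numbers. A minimal Python sketch (assuming an illustrative 1 ns RMS of clock jitter, a figure chosen for the example rather than taken from any real DAC) samples a 1 kHz sine at jittered instants and measures the resulting error:

```python
import math
import random

fs = 44100.0        # sample rate, Hz
f = 1000.0          # test tone, Hz
jitter_rms = 1e-9   # assumed 1 ns RMS clock jitter (illustrative value)
random.seed(0)      # deterministic for repeatability

n = 1000
sq_err = 0.0
for k in range(n):
    t = k / fs                               # ideal sample instant
    tj = t + random.gauss(0.0, jitter_rms)   # jittered sample instant
    sq_err += (math.sin(2 * math.pi * f * t) - math.sin(2 * math.pi * f * tj)) ** 2
rms_err = math.sqrt(sq_err / n)

# For a full-scale sine the error is roughly 2*pi*f*jitter_rms, a few
# millionths of full scale -- below one 16-bit step (2/65536, ~3.1e-5)
```

Under these assumed numbers the timing error is smaller than the quantisation step, illustrating why jitter at that level would be swallowed by the encoding's own precision; larger jitter, or jitter correlated with the signal, is a different matter.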
Hello Simon, many thanks for your reply which I am finding difficult to process with my lack of relevant background. However, I shall try to get to grips with the subject and will store your answer for future reference. My aim was to find simple language to resolve issues which some find difficult; hope I can have another go in due course.
Well, well, well ... this thread seems to have diverged a lot from the OP.
All this talk about abstractions, etc. is irrelevant - the bits are not representations of data streams or anything else; the collection of bits making up a single sample is the reality of the original measurement.
Let me explain this by explaining the entire audio chain (at least from my perspective) ...
A transducer (microphone) converts sound wave levels (air pressure changes, since a sound wave is a longitudinal rather than a transverse wave) to an electrical voltage or current;
an analogue to digital converter (ADC) uses a clock to sample these values (be they voltage or current) in the time domain;
each value at the sample time is encoded (not represented) by its value in binary.
- The number of bits used in this binary encoding is the bit depth or resolution and defines how finely measurements can be resolved (e.g. 8 bits can only have 256 different levels), which is why higher bit depths of 16 and 24 are preferable.
- the frequency of the samples defined by the ADC clock is the sampling frequency and determines the highest frequency that can be reconstituted; the CD sample rate of 44.1kHz can resolve a maximum original sound frequency of 22.05kHz (half the sample rate, the Nyquist limit), with roughly 20kHz usable in practice once filtering is allowed for.
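The two numbers above fall straight out of the arithmetic. A quick Python sketch, just to make the bit-depth and Nyquist figures concrete:

```python
# Levels available at each bit depth (each extra bit doubles the count,
# adding roughly 6.02 dB of dynamic range)
for bits in (8, 16, 24):
    levels = 2 ** bits
    print(f"{bits}-bit: {levels} levels, ~{6.02 * bits:.0f} dB dynamic range")

# Nyquist: the highest reconstructible frequency is half the sample rate
sample_rate = 44100
nyquist = sample_rate / 2   # 22050.0 Hz for CD audio
```

So 16-bit gives 65,536 levels against 8-bit's 256, and 44.1kHz sampling tops out at 22.05kHz in theory, comfortably above the usual 20kHz limit of hearing.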
ok so we have done some magic here and what comes out of the ADC is a stream of samples, each sample being a fixed number of bits encoding the measured value.
In reconstituting the original analogue electrical wave signal in a DAC (in general rather than HiFi terms) there is no need for the individual samples to arrive in the time domain; all that is required is that the DAC knows how many bits are used to encode each sample and what the sampling frequency is. Look at the WAV format specification and you will find all the required information is transmitted in the FMT chunk, with the samples in the DATA chunk.
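The FMT/DATA layout can be checked directly. A small Python sketch (stdlib only; it builds a throwaway in-memory WAV purely for illustration) writes a 16-bit stereo file and unpacks the fmt chunk fields by hand:

```python
import io
import struct
import wave

# Build a tiny 16-bit stereo WAV in memory using the stdlib wave module
buf = io.BytesIO()
with wave.open(buf, "wb") as w:
    w.setnchannels(2)
    w.setsampwidth(2)       # 2 bytes = 16-bit samples
    w.setframerate(44100)
    w.writeframes(b"\x00\x00\x00\x00" * 4)  # 4 frames of stereo silence
data = buf.getvalue()

# RIFF container: "RIFF" <size> "WAVE", then the "fmt " chunk at offset 12
assert data[:4] == b"RIFF" and data[8:12] == b"WAVE"
assert data[12:16] == b"fmt "
(fmt_size, audio_fmt, channels, rate,
 byte_rate, block_align, bits) = struct.unpack("<IHHIIHH", data[16:36])
# audio_fmt == 1 (PCM), channels == 2, rate == 44100, bits == 16
```

Everything a DAC needs to reconstitute timing - sample rate, bit depth, channel count - really is carried in that one small header, exactly as described.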
... so given, for example, a WAV file this can be stored and transmitted over data networks in a well-understood and error-free manner - bits are bits whether the file is an audio WAV file, a JPG, Excel spreadsheet or an .exe executable.
... which leads to the magic of what happens at the audio replay end. I do not know how the HiFi DAC works in terms of what it needs at its input - the abstraction here is that it only needs the WAV file, since everything else can be reconstituted from it - but I infer from technical specs and posts here that what the HiFi DAC actually needs is the stream of samples, so the interpretation of the WAV file into individual samples sent in the correct time frames is undertaken by the computer or streamer connected to it.
So from the WAV file on the source medium to the computer/streamer feeding the DAC I would maintain that indeed bits are bits, irrespective of device, cabling, routers, etc. The WAV file received will be identical to that stored on the source (or else everything in the world that is computer based will break).
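The "received file is identical" claim is verifiable rather than a matter of faith: comparing cryptographic hashes of source and copy proves bit-identity. A minimal Python sketch (the "transfer" is simulated here, since a real network copy isn't needed to show the principle):

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Digest of a byte stream; identical bytes give identical digests."""
    return hashlib.sha256(data).hexdigest()

# Simulated source file and its copy after "network transfer"
source = bytes(range(256)) * 100
received = bytes(source)

# A bit-perfect copy hashes identically ...
assert sha256_hex(source) == sha256_hex(received)

# ... while flipping even a single bit changes the digest
corrupted = bytearray(received)
corrupted[0] ^= 0x01
assert sha256_hex(bytes(corrupted)) != sha256_hex(source)
```

This is exactly why "everything computer-based would break" otherwise: the same mechanism that keeps a spreadsheet or an .exe intact keeps a WAV intact.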
The connection from computer/streamer to DAC is where the timing issues and signal noise can and do influence the SQ.
Sorry for this long post, but let's not get into philosophy or signal processing or information theory when and where it is not required.
... all we need to do is use a WAV file to reconstitute our best approximation of the original sound wave after all ....
Allan
PS - I have always considered that the greatest magic in sound replay is in fact that a bit of paper (or Kevlar or whatever) can be made to move in and out and what we hear is all the harmonic subtlety of a musical instrument - a whole bloody orchestra from a little cone of paper - go on, you're kidding me
Allan, yes bits are bits, but the problems are caused by all the other stuff that's carried along with them.
Huge - yes I get that ...
... but only in the connection between the computer/streamer and the DAC; although I concur that the carrier signal degradation could be cumulative, surely any decent computer/streamer will be resending the data stream on a "new" connection path anyway.
Allan
Allan, there should be no degradation and next to no corruption of the bits themselves when in a data stream ... it's the effect of the timing of the bits, and of the processing of the data 'bits' on sensitive elements in a closed system such as a DAC clock or analogue-stage ground planes, that can be audible.
Simon-in-Suffolk posted:Allan, there should be no degradation and next to no corruption of the bits themselves when in a data stream ... it's the effect of the timing of the bits, and of the processing of the data 'bits' on sensitive elements in a closed system such as a DAC clock or analogue-stage ground planes, that can be audible.
From day one of digital transfer when for many years the best we could hope for was a 2.4 Kbps line, the important thing was the integrity of the data. We used modems to modulate and demodulate the information and with packet switching we could slice, dice and send the chunks of data over disparate routes to be collated and reassembled in the correct order to the recipient device.
As I recall it, the data bits themselves are defined by voltage or the lack of it to represent either a '1' or a '0'. Clearly, this works pretty successfully for all aspects of data processing, from representing sound, films, scientific calculations and real-life simulation in all sorts of forms, to controlling all sorts of life-dependent equipment. The accurate calculations per second needed for all of the above can far exceed anything needed to transfer or reproduce sound accurately.
Perhaps a 'blind' test that captures, isolates and transfers the source data to the DAC and back to the source for comparison purposes would help resolve the argument, but I doubt it very much. As I have argued before, from reception at the DAC to the output from the speakers there are all sorts of possibilities for colouration or signal interference, and with that I have no dispute.
Anyway, I had a brief discussion with the very helpful assistant in the HiFi store in Oxford who, with the benefit of testing all the kit in the world, was insistent that different streamers definitely produced different sounds, with Naim being near the top of his heap. I have given up the argument and, on aesthetics, AV requirements, my wife's hatred of boxes and cables, as well as sound quality, I am shortening the signal length and buying a Devialet 250 Pro.