What does it mean for a CD rip to be bit perfect?
Posted by: nbpf on 24 January 2017
The notion of "bit perfectness" pops up over and over again in this forum, for instance, when discussing features of CD ripping software. I do not know precisely what it means for a rip to be bit perfect and I have the impression that other contributors might be in a similar situation. Thus, the aim of this thread is to achieve some shared understanding of what it means for files obtained by ripping CDs to be bit perfect. To this end, it seem to me purposeful trying to answer the following questions:
1) Is it possible to decide whether a file obtained by ripping a CD is bit perfect or not?
2) If the answer to 1) is positive, do we have reliable tests of "bit perfectness"? Can we share such tests?
3) Do bit perfect rips need to be identical?
4) If the answer to 3) is negative, which relations have to hold between two different bit perfect rips A and B?
What's your take on bit-perfectness? Do you think that the answer to 1) is positive or negative? What other questions could help us better understand the notion of bit-perfectness?
Huge posted:...
The details required for further explanation go into information theory and statistics ...
Nope. The issue that we have been discussing here is that of sufficiency vs. necessity. Two files that are equal must have the same length: we say that equality is sufficient for "same lengthness" or, in short, that "equal => same length". It is obviously not the case that two files that have the same length must necessarily be equal. We say that equality is not necessary for "same lengthness". That's it. In this thread, we have understood that AccurateRip tests are sufficient but not necessary for bit perfectness. No statistics or information theory is needed here, just plain logic.
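A minimal sketch of the same point in Python (the file names are placeholders): byte-for-byte equality implies equal length, but equal length implies nothing about equality, just as a passed checksum test implies bit perfectness while the converse does not hold.

```python
from pathlib import Path

def same_length(a: str, b: str) -> bool:
    # Necessary condition: files that are equal must have the same size.
    return Path(a).stat().st_size == Path(b).stat().st_size

def identical(a: str, b: str) -> bool:
    # Byte-for-byte equality.
    return Path(a).read_bytes() == Path(b).read_bytes()

# identical("rip_a.wav", "rip_b.wav") implies same_length("rip_a.wav", "rip_b.wav"),
# but same_length(...) does not imply identical(...).
```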
Hungryhalibut posted:No Jon, it's not just you. The previous one was closed, and its bastard offspring has now emerged. Who really cares?
HH, I frankly do not understand what your problem is. We have achieved some understanding of what it means for CD rips to be bit perfect. It is probably not a definitive understanding, but it is certainly a step forward compared with the understanding achieved in the seven-page-long "unitserve rips vs core rips" thread. We also had some fun as a bonus. What's the problem? If you do not care about what it means for CD rips to be bit perfect, that's fine. Just ignore this thread! Best, nbpf
jon honeyball posted:Am I the only person who wants to scream "enough!!!!!!" over this bloody topic?????
Agreed !!!!!!
Oops, sorry, I thought this was the Trump thread.
Simon-in-Suffolk posted:And yes, unless there are media errors or damage, the PCM encoded on a RedBook CD will be identical to the master... CD-ROM would (not) be a very effective or reliable medium otherwise
Thanks Simon. Interesting then as a physical media user I unwittingly listen to a bit perfect copy of the master file each time I play my CD.
nbpf posted:jon honeyball posted:Am I the only person who wants to scream "enough!!!!!!" over this bloody topic?????
Hungryhalibut posted:No Jon, it's not just you. The previous one was closed, and its bastard offspring has now emerged. Who really cares? Well, some clearly do, but it just goes on and on and on, interminably. If it was in Graham's big red chair, it could be flipped.
Huge posted:Jon, HH, that's why I stopped posting until you pointed out the excessive angst. This post is to agree with you.
...
jon, HH, Huge, following a thread one does not understand or care about is not compulsory! Just ignore it. You do not need it and it does not need you either. Best, nbpf
Following a thread I do not understand?
Smile
joerand posted:Thanks Simon. Interesting then as a physical media user I unwittingly listen to a bit perfect copy of the master file each time I play my CD.
That's not true IMO, hence my post. CD-ROM, whilst it shares the same physical format as CD Audio, is a very different way of storing the data: it contains enough information in the error correction data for you to *know* that you have the bit perfect data you were intended to get. Hence schemes like AccurateRip exist only for CD Audio, as that format stands alone in *not* having enough error correction data on the disc to allow you to know you have extracted the bit perfect data. Hence the fact that there is debate to be had on CD rips.
Once ripped, with streaming you're always going to get exactly what you ripped thanks to more robust data checking all along the chain.
You might also argue that downloaded files should be bit perfect as they have never seen an audio CD which is where the doubt creeps in. So perhaps if you want guaranteed bit perfectness you should only use CD quality or higher download sources.
"Once ripped, with streaming you're always going to get exactly what you ripped thanks to more robust data checking all along the chain"
Hmmmmmmmmmm
joerand posted:Simon-in-Suffolk posted:And yes, unless there are media errors or damage, the PCM encoded on a RedBook CD will be identical to the master... CD-ROM would (not) be a very effective or reliable medium otherwise
Thanks Simon. Interesting then as a physical media user I unwittingly listen to a bit perfect copy of the master file each time I play my CD.
Hmmm ... are you sure? In order to listen to a bit perfect copy of the master file underlying a CD you have to read that CD. Reading a CD in real time for replay is more error prone than ripping that CD: ripping can afford multiple readings and comparisons between readings but reading in real time has to rely on fast and frugal error correction algorithms.
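To make that difference concrete, here is a rough sketch in Python of the kind of re-read-and-compare strategy a ripper can afford but a real-time player cannot. The read_track_pcm() helper is hypothetical and merely stands in for the actual drive access:

```python
from collections import Counter

def read_track_pcm(drive, track_no, attempt):
    """Hypothetical helper: one raw read of a track's PCM data from the drive."""
    raise NotImplementedError

def rip_track(drive, track_no, attempts=3):
    # A ripper can read the same track several times and compare the results;
    # a real-time player gets a single pass and must mask errors on the fly.
    reads = [read_track_pcm(drive, track_no, i) for i in range(attempts)]
    data, votes = Counter(reads).most_common(1)[0]
    if votes == attempts:
        return data, "all reads agree"
    if votes > 1:
        return data, "majority of reads agree"
    return data, "reads disagree: suspect rip"
```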
But I sympathize with your feelings: having in my hands a CD which is a bit perfect copy of the master makes me feel much better than having a bunch of poorly tagged, hopefully bit perfect files that I then have to take care of! Plus, I can read the CD's booklet without having to fiddle around with my tablet computer or mobile phone!
Long live our CDs and our CD players!
joerand posted:Simon-in-Suffolk posted:And yes, unless there are media errors or damage, the PCM encoded on a RedBook CD will be identical to the master... CD-ROM would (not) be a very effective or reliable medium otherwise
Thanks Simon. Interesting then as a physical media user I unwittingly listen to a bit perfect copy of the master file each time I play my CD.
Absolutely, this is the case for a CDP user (assuming no unrecoverable error or damage) and for those using UPnP streaming as a CD transport (i.e. playing sequential CD ripped files) ... all quite straightforward really.
Just worthwhile considering, of course, that there usually is not just one master... and a particular CD release might have its own master... which is of course what some of us collect... but that is another topic....
jon honeyball posted:"Once ripped, with streaming you're always going to get exactly what you ripped thanks to more robust data checking all along the chain"
Hmmmmmmmmmm
You are right, and this is also the reason why ripping is so crucial! You do not want to invest a lot of time and effort and end up with rips that you do not trust. Thus, it is important that we can trust or, even better, check that our rips are bit perfect. This is why understanding what it means for CD rips to be bit perfect is a relevant question for those who are ripping CDs. It is also an interesting question in its own right, I believe.
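One practical, if informal, check along these lines is to rip the same CD on two different drives or with two different rippers and compare per-track checksums. A minimal sketch in Python (the paths and names are placeholders):

```python
import hashlib
from pathlib import Path

def md5_of(path):
    # Whole-file checksum; fine as a rough way of comparing two rips.
    return hashlib.md5(Path(path).read_bytes()).hexdigest()

def rips_agree(rip_a, rip_b):
    # rip_a, rip_b: paths to the same track ripped on two different drives.
    return md5_of(rip_a) == md5_of(rip_b)

# Agreement across independent drives is good (though still not conclusive)
# evidence of a correct rip; a mismatch may just be an offset/BOBO difference.
```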
Ian_S posted:joerand posted:Thanks Simon. Interesting then as a physical media user I unwittingly listen to a bit perfect copy of the master file each time I play my CD.
That's not true IMO, hence my post. CD-ROM, whilst it shares the same physical format as CD Audio, is a very different way of storing the data: it contains enough information in the error correction data for you to *know* that you have the bit perfect data you were intended to get. Hence schemes like AccurateRip exist only for CD Audio, as that format stands alone in *not* having enough error correction data on the disc to allow you to know you have extracted the bit perfect data. Hence the fact that there is debate to be had on CD rips.
Once ripped, with streaming you're always going to get exactly what you ripped thanks to more robust data checking all along the chain.
You might also argue that downloaded files should be bit perfect as they have never seen an audio CD which is where the doubt creeps in. So perhaps if you want guaranteed bit perfectness you should only use CD quality or higher download sources.
A very interesting insight Ian, thanks! It seems to me plausible that the CD Audio format was conceived for real time playback and certainly not for recovering the original master files. In fact, I can very well imagine that it was conceived to make it reasonably difficult to recover the original master files! Thus, it is not very surprising that ripping a CD is not as trivial as copying a file. I am not a fan of ripping CDs and my music collection mainly consists of files that I have bought from trusted sources. Still, for those who plan to embark on a ripping adventure it is perhaps interesting to understand what it means for CD rips to be bit perfect.
I have positive feelings for the immediacy of direct replay from a CDP as opposed to the sonics of alternately stored and streamed replay. I've had numerous opportunities through the years to hear music streamed on high-end Naim systems of friends and dealers. Velvety smooth no doubt. Errors corrected, artifacts eliminated. Still, something "fluffy" about streaming. I just wonder if streaming gets us closer to or farther from the original performance. I suspect dedicated streamers will say closer to, but my ears prefer the grit and inherent errors/artifacts of the shorter chain. I'll take immediacy over perfection. Maybe that's why I still prefer vinyl replay above all.
Firstly, I'm with NBPF - if you don't like a thread then don't read it. I don't like peanuts, so I avoid them; I don't berate the people who do like them. Quite honestly, this entire forum is of zero interest to all but a tiny minority, so complaining of technobabble is a bit like going to a Star Wars convention and complaining that some people are dressed as Luke Skywalker. It pretty much comes with the territory.
In reply to Joerand, of course, your ears are the most important thing, and I totally see your point about vinyl. But I don't see how a CD is a shorter chain. If you have a bit perfect copy of that CD then it is the same - that's the benefit of a digital medium. But with streaming you don't have the whole laser malarkey which is an obvious point of weakness in the process. So I would argue that streaming is a shorter chain. I would also argue that it doesn't really matter, and we should all listen to whatever makes us happy :-).
jon honeyball posted:"Once ripped, with streaming you're always going to get exactly what you ripped thanks to more robust data checking all along the chain"
Hmmmmmmmmmm
Curious on why the hmmmmmmmmm ?
joerand posted:I have positive feelings for the immediacy of direct replay from a CDP as opposed to the sonics of alternately stored and streamed replay. I've had numerous opportunities through the years to hear music streamed on high-end Naim systems of friends and dealers. Velvety smooth no doubt. Errors corrected, artifacts eliminated. Still, something "fluffy" about streaming. I just wonder if streaming gets us closer to or farther from the original performance. I suspect dedicated streamers will say closer to, but my ears prefer the grit and inherent errors/artifacts of the shorter chain. I'll take immediacy over perfection. Maybe that's why I still prefer vinyl replay above all.
Joerand, whether it's streaming or direct CDP play - it makes no difference - the PCM is the PCM - yes it might be chopped up into tracks at very slightly different points when put into discrete files - but the sample data other than this is identical - and, assuming no damage or unrecoverable errors, it's sample word by sample word identical.
Now let's look at the unrecoverable errors - these do occur from time to time on CD players when reading the CD - even undamaged ones - as there is usually only one attempt to read the media - whereas a ripper often has retry strategies when building a rip - but such an error will sound like a brief 'tick' or possibly a skip - the latter is often objectionable and typically only occurs with damaged discs or badly aligned CD readers - but one 'tick' every 10 albums or so is acceptable in my book for CDP replay.
So in summary I think it's best to consider local lossless streaming the same as a CD transport - or perhaps more accurately a CD jukebox transport (remember those from the 90s?)
S
nbpf posted:A very interesting insight Ian, thanks! It seems to me plausible that the CD Audio format was conceived for real time playback and certainly not for recovering the original master files. In fact, I can very well imagine that it was conceived to make it reasonably difficult to recover the original master files! Thus, it is not very surprising that ripping a CD is not as trivial as copying a file. I am not a fan of ripping CDs and my music collection mainly consists of files that I have bought from trusted sources. Still, for those who plan to embark on a ripping adventure it is perhaps interesting to understand what it means for CD rips to be bit perfect.
I think the first point is correct: at the time of conception a CD contained a lot of data, and the available microprocessing power for CDPs didn't stretch to more robust data checking. It was only much later, when CD writers became available in PCs, that the whole issue of copying came about, and then the music industry went into panic. I don't believe they ever thought about how you would rip a CD when the format was conceived.
I'm not sure I would view streaming of rips the same as a CD transport, as the former is built on protocols designed to deliver the data completely intact, with the ability to resend/retry if there are issues, and built-in buffering as a result. I really don't agree personally with the concept that (because it's HiFi and we love to improve signal paths) data transmitted over computer networks somehow has high (or even any) error rates. A UPnP server will deliver the exact same bits to any streaming client. It's what that streaming client does with them, and the effect of other outside influences, that delivers differences in the resulting analogue signal.
If computer networks were as error prone as some in the HiFi industry would have you believe then no-one would use them.
nbpf posted:Huge posted:...
The details required for further explanation go into information theory and statistics ...
Nope. The issue that we have been discussing here is that of sufficiency vs. necessity. Two files that are equal must have the same length: we say that equality is sufficient for "same lengthness" or, in short, that "equal => same length". It is obviously not the case that two files that have the same length must necessarily be equal. We say that equality is not necessary for "same lengthness". That's it. In this thread, we have understood that AccurateRip tests are sufficient but not necessary for bit perfectness. No statistics or information theory is needed here, just plain logic.
OK you've rubbished my statement (and taken part of it out of context), but only provided a glib analogy in return. So perhaps you could provide a definitive explanation with "no statistics or information theory", "just plain logic"?
Please first of all define 'bit perfect', then back your statement above with a full explanation (based solely on plain logic) as to all the ways in which the following can or cannot be bit perfect, and what criteria can be used to determine this:
Rips of an entire CD
Rips of individual tracks in isolation
Rips with C1 Errors
Rips with variable C2 errors
Rips with repeatable C2 errors
I think we can all understand that when a drive audibly mis-tracks a CD the result won't be bit perfect!
P.S. you'll probably need to include logical definitions of C1 and C2 errors and how they are detected.
For those who do need an answer, this will probably fully answer the question, and you may well be the best person to do that for us (with the possible exception of SiS).
Huge posted:nbpf posted:Huge posted:...
The details required for further explanation go into information theory and statistics ...
Nope. The issue that we have been discussing here is that of sufficiency vs. necessity. Two files that are equal must have the same length: we say that equality is sufficient for "same lengthness" or, in short, that "equal => same length". It is obviously not the case that two files that have the same length must necessarily be equal. We say that equality is not necessary for "same lengthness". That's it. In this thread, we have understood that AccurateRip tests are sufficient but not necessary for bit perfectness. No statistics or information theory is needed here, just plain logic.
OK you've rubbished my statement (and taken part of it out of context), but only provided a glib analogy in return. So perhaps you could provide a definitive explanation with "no statistics or information theory", "just plain logic"?
Please first of all define 'bit perfect', then back your statement above with a full explanation (based solely on plain logic) as to all the ways in which the following can or cannot be bit perfect, and what criteria can be used to determine this:
Rips of an entire CD
Rips of individual tracks in isolation
Rips with C1 Errors
Rips with variable C2 errors
Rips with repeatable C2 errors
I think we can all understand that when a drive audibly mis-tracks a CD the result won't be bit perfect!
P.S. you'll probably need to include logical definitions of C1 and C2 errors and how they are detected.
For those who do need an answer, this will probably fully answer the question, and you may well be the best person to do that for us (with the possible exception of SiS).
I am not sure that I grasp which questions you would like me to answer. In my original post, I have tried to wrap up what seems to me the shared understanding that we have so far achieved on what it means for a CD rip to be bit perfect.
It goes without saying that the wrap-up reflects my current understanding of our common understanding! It will have to be revised as we learn new facts about bit perfectness or as I learn new facts about our common understanding of bit perfectness.
But, according to the wrap-up, a rip would be bit perfect if it contains the same bit sequence as some reference file, leaving aside headers, leading and trailing zeroes and perhaps other features that we do not need to care about in detail. We can call these features the BOBOs of the rip. We can call the reference file (with its BOBOs) the "original".
From this notion of bit perfectness it follows that a test based on checksum comparisons between a rip (with its BOBOs) and checksum values of the "original" can only be sufficient for bit perfectness but not necessary.
This logical consequence does not depend on the details of how such a test is actually performed. The checksum values of the "original" could be stored in an AccurateRip database or on our own computer. What matters is that, insofar as the BOBOs can have some bearing on the outcome of a test, that test can only be sufficient for bit perfectness (in the sense explained above) but not necessary.
Thus, we can have rips that are bit perfect (in the sense explained above) but that do not pass the checksum test. This would happen if the BOBOs of the rips differ from the BOBOs of the original.
A further logical consequence of the notion of bit perfectness posited above is that two different files can both be bit perfect.
We have actually seen a concrete example in the case of the US and Core rips originally reported by Jan-Erik: the two rips were different and, as confirmed by Naim, both bit perfect. This is perfectly fine according to the notion of bit perfectness put forward above.
The fact that Naim has confirmed that two obviously different files were both bit perfect shows that the notion of bit perfectness outlined above is at least consistent with our empirical observations. Of course, one can think of other consistent notions of bit perfectness.
If you have any suggestion for another notion of bit perfectness that is consistent with our empirical observations, please post it.
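To make the wrap-up concrete, here is a rough sketch in Python of a comparison that ignores the BOBOs -- in this sketch just the leading and trailing zero samples of 16-bit PCM WAV files -- before checking for bit-identical audio content. The file names and the choice of WAV are placeholders:

```python
import wave
import array

def audio_payload(path):
    """PCM samples of a 16-bit WAV file with leading and trailing
    zero samples (part of the BOBOs) stripped off."""
    with wave.open(path, "rb") as w:
        samples = array.array("h", w.readframes(w.getnframes()))
    lo, hi = 0, len(samples)
    while lo < hi and samples[lo] == 0:
        lo += 1
    while hi > lo and samples[hi - 1] == 0:
        hi -= 1
    return samples[lo:hi]

def bit_perfect_modulo_bobos(rip_a, rip_b):
    # True if the two rips carry identical audio content, even though the
    # files themselves (and hence their plain checksums) may differ.
    return audio_payload(rip_a) == audio_payload(rip_b)
```

Two rips could then fail a plain checksum comparison and still satisfy bit_perfect_modulo_bobos, which is exactly the sufficient-but-not-necessary situation described above.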
nbpf posted:The fact that Naim has confirmed that two obviously different files were both bit perfect shows that the notion of bit perfectness outlined above is at least consistent with our empirical observations. Of course, one can think of other consistent notions of bit perfectness.
I don't believe that has been said at all - rather that if you are using the extracted files as intended - i.e. sequentially ripped by the same ripper from the same CD - no data is lost when they are aggregated together sequentially. However each extracted file itself may not be identical to a similarly ripped file from another ripper, because of tiny differences in boundary detection for a given CD-ROM drive/ripper at the start and end of the file - and therefore these files are NOT bit perfect with each other.
There are methods out there, such as AccurateRip, that define a specific agreed offset to allow consistent boundaries between tracks on different CD-ROM drives, thereby ensuring there is bit perfect consistency across rips of the same published CD from different platforms. Naim does not use this method - so boundary variances will occur between the different CD-ROM drive types that are used.
Ian_S posted:jon honeyball posted:"Once ripped, with streaming you're always going to get exactly what you ripped thanks to more robust data checking all along the chain"
Hmmmmmmmmmm
Curious on why the hmmmmmmmmm ?
Where is this robust data checking of which you speak?
I can go bit edit a WAV file -- nothing will tell me that I have done so. There is no checksumming here.
There is no checksum checking in the DAC at point of playback.
There is no checksumming on backup/restore.
So where is this "robust data checking along the chain"?
Ian_S posted:nbpf posted:A very interesting insight Ian, thanks! It seems to me plausible that the CD Audio format was conceived for real time playback and certainly not for recovering the original master files. In fact, I can very well imagine that it was conceived to make it reasonably difficult to recover the original master files! Thus, it is not very surprising that ripping a CD is not as trivial as copying a file. I am not a fan of ripping CDs and my music collection mainly consists of files that I have bought from trusted sources. Still, for those who plan to embark on a ripping adventure it is perhaps interesting to understand what it means for CD rips to be bit perfect.
I think the first point is correct, at the time of conception a CD contained a lot of data and available microprocessing power for CDP's didn't stretch to more robust data checking. It was only much later when CD-writers became available in PC's that the whole issue of copying came about, and then the music industry went into panic. I don't believe they ever thought about how you would rip a CD when the format was conceived.
I'm not sure I would view streaming of rips the same as a CD transport, as the former is built on protocols designed to deliver the data completely intact, with the ability to resend/retry if there are issues, and built in buffering as a result. I really don't agree personally with the concept that somehow (because it's HiFi and we love to improve signal paths) data transmitted over computer networks somehow has high (or even any) error rates. a uPnP server will deliver the exact same bits to any streaming client. It's what that streaming client does with them and the effect of other outside influences that delivers differences in the resulting analog signal.
If computer networks were as error prone as some in the HiFi industry would have you believe then no-one would use them.
I fully agree with your observations.
I can only add a remark or anecdote on data transfer across a network. This is not relevant to the problem of understanding bit perfectness but perhaps interesting to those who regularly use rsync or rsync-based tools for securing large datasets like music collections. The observation is that, while rsync checks the integrity of the data sent over a network (via MD5 checksum, I believe), the responsibility of actually writing a correct copy of those data to disk is left to the target OS.
This weakness (or feature) was at the origin of a frustrating experience I ran into last year: a file on the music server would replay very badly (stuttering) through my system. Plain rsync would show that the file and its master copy were identical. A copy of the master copy on a USB stick would replay fine from the USB port of my Naim DAC. These premises made me think (and worry!) that there was something wrong with my HDD, file server or USB to SPDIF bridge. Alas, this was not the case, but it took me (more than) a while to find out! Finally, rsync with the "-c" (checksum) option revealed that the file on the server was not identical to the master file.
The lesson I have learned: if you use rsync (or rsync-based tools) to manage your data transfers, use it every now and then with the "-c" option. It takes much longer, of course, but it truly compares the checksums of the files on the source and target systems. Of course, one has to be careful not to replace bit perfect originals with corrupted copies!
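For those who prefer to keep an independent record, here is a minimal sketch in Python that writes an MD5 manifest for a music folder and verifies it later. The manifest name and the .wav filter are just placeholders; running it against both the source and the target catches exactly the kind of silent corruption described above:

```python
import hashlib
from pathlib import Path

def file_md5(path):
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def write_manifest(music_dir, manifest="checksums.md5"):
    # Record a checksum for every WAV file under music_dir.
    with open(manifest, "w") as out:
        for p in sorted(Path(music_dir).rglob("*.wav")):
            out.write(f"{file_md5(p)}  {p}\n")

def verify_manifest(manifest="checksums.md5"):
    # Re-read every file and report anything whose checksum has changed.
    for line in open(manifest):
        digest, path = line.rstrip("\n").split("  ", 1)
        print("OK" if file_md5(path) == digest else "CHANGED", path)
```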
jon honeyball posted:Where is this robust data checking of which you speak?
I can go bit edit a WAV file -- nothing will tell me that I have done so. There is no checksumming here.
There is no checksum checking in the DAC at point of playback.
There is no checksumming on backup/restore.
So where is this "robust data checking along the chain"?
Well, no-one can stop you editing files or taking a brillo pad to your CDs...
My point was that, unlike CD playback, which is designed to fail gracefully and, depending on the quality of the CD, may experience plenty of C1 errors and still be considered OK, when sending a file from a server to a client much more checksumming takes place, and the default behaviour before data gets to the DAC via a network is that packets identified as corrupt get resent and, if they're consistently knackered, things stall. This prompts you to go check stuff.
Potentially there *is* checksumming in the DAC, depending on the file format you use. For WAV, which is completely uncompressed, there is not. For FLAC there effectively is, as the data will fail to decompress correctly if it suffered the unlikely fate of being corrupted during transmission in a way not detected by the transmission protocols. A zip file, for example, will fail to decompress if corrupted, so compression does have side benefits.
The same applies to backups. If you back up as-is, then you'll have standard hard disk CRC checking... but then again it depends on what you're backing up to as to how protected that backup is. A single hard disk, not so much. RAID, a bit more...
All these things start to add up and decrease the chances of getting regular corruption. Of course you might argue they also add more processing, which may have other impacts, but unless your name is Google or Amazon, then in your average home network you're not shifting enough data to hit that 1 in 10 million chance of corruption that has evaded hardware, OS, transmission and possibly decompression CRC checking together and made it through regularly to your DAC...
You're much more likely to get the kind of error - a broken cable, switch, router or failed disk - that causes a more obvious fault that you can fix. IMO... of course....
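On the FLAC point: the reference flac command-line tool has a test mode that decodes each file and reports errors, so (assuming flac is installed and your library is in FLAC) you can sweep a folder as a rough integrity check. A sketch, not a full audit:

```python
import subprocess
from pathlib import Path

def test_flac_files(music_dir):
    # 'flac -t' decodes each file without writing output and reports
    # decoding errors; '-s' keeps it quiet. A non-zero exit code means trouble.
    for p in sorted(Path(music_dir).rglob("*.flac")):
        result = subprocess.run(["flac", "-t", "-s", str(p)], capture_output=True)
        print("OK" if result.returncode == 0 else "FAILED", p)
```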
joerand posted:I have positive feelings for the immediacy of direct replay from a CDP as opposed to the sonics of alternately stored and streamed replay. I've had numerous opportunities through the years to hear music streamed on high-end Naim systems of friends and dealers. Velvety smooth no doubt. Errors corrected, artifacts eliminated. Still, something "fluffy" about streaming. I just wonder if streaming gets us closer to or farther from the original performance. I suspect dedicated streamers will say closer to, but my ears prefer the grit and inherent errors/artifacts of the shorter chain. I'll take immediacy over perfection. Maybe that's why I still prefer vinyl replay above all.
Not sure I understand that. In simplistic terms the different processes are as follows:
CD replay takes reflections of light from a spinning disc and converts them into electrical pulses, which are put into decoding circuitry that separates the digital music stream from the other information, using the embedded parity bits to check the stream for errors and, in the event of finding errors, using an algorithm to modify the stream to mask the error and minimise its audibility, with a temporary memory at some stage in the process acting as a 'buffer'; the resultant stream is then passed to the DAC to convert to analogue. The error correction algorithm is necessary because reading by reflection of light is prone to error.
With streaming from a store in, or attached directly to, the same machine where the rendering takes place, a file stored magnetically or electrically is converted into electrical pulses, which are held in memory and converted by software into a digital music stream, which is fed to the DAC.
Unless I have omitted anything significant, the streaming process is more direct than from the CD, and should sound no worse, with the potential to sound better. (If the streaming store is remote from the rendering software, the file would be sent across a network as well, which can make a difference, considerably more so if it is streamed across the internet, and very definitely the latter cannot be considered to have 'immediacy')
you say you've heard others' streaming systems, but have you compared directly, as in ripped CDs vs the same CDs on the same system including DAC?
(Vinyl has far more compromises and limitations, while most modern recordings will have been recorded digitally before converting to analogue to make the LP, but that is a different story entirely!)
Simon-in-Suffolk posted:nbpf posted:The fact that Naim has confirmed that two obviously different files were both bit perfect shows that the notion of bit perfectness outlined above is at least consistent with our empirical observations. Of course, one can think of other consistent notions of bit perfectness.
I don't believe that has been said at all - rather that if you are using the extracted files as intended - i.e. sequentially ripped by the same ripper from the same CD - no data is lost when they are aggregated together sequentially. However each extracted file itself may not be identical to a similarly ripped file from another ripper, because of tiny differences in boundary detection for a given CD-ROM drive/ripper at the start and end of the file - and therefore these files are NOT bit perfect with each other.
...
In the first contribution on page 7 of the closed "unitserve rips vs core rips" Steve Harris concludes, among others:
Your rips are good. The audio is correct and all you are seeing is logical offset differences where Core is slightly different to Serve.
In this statement, "your" refers to the Core and US rips. In Jan's tests, these rips were found to be different. In the "unitserve rips vs core rips", we had conjectured that such differences would be confined to BOBO differences: differences in certain amounts of leading and/or trailing zeroes and perhaps in other details that do not encode musical contents. But not differences in musical contents.
You had verified that this conjecture was true for the specific case of Jan's samples. The non-BOBO sections of the samples were identical and could be easily realigned by adding or removing a few leading or trailing zeroes. Thus, the conjecture was consistent with the empirical evidence that we had in our hands. The question was, of course, whether the conjecture would hold true for all US and Core rips.
To the best of my understanding, Harris' post assures us that all US and Core rips (and not just those that we inspected) will only differ -- between themselves and with respect to an "original" -- in BOBO details! This is a very strong statement. It tells us that we can trust Core results (and US results) no matter what we are ripping: the differences between the rips (and, more importantly, the differences between the rips and the "original") will be, if any, BOBO differences!
Now, being identical modulo BOBO details to an "original" is what I have put forward in my wrap-up as a tentative notion of bit perfectness. From this, it follows that Harris is telling us that both the US and the Core rips are bit perfect (in the sense of the wrap-up, not of AccurateRip tests, of course) in spite of being different.
This seems to me a nice understanding of the notion of bit perfectness. It is consistent with Naim's explanations and with our own observations.
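As a concrete illustration of that kind of verification, here is a rough sketch in Python that looks for a small shift between two rips and checks that the overlapping samples are identical once aligned. The inputs are assumed to be sequences of PCM samples, for instance obtained with the wave module as in the earlier sketch:

```python
def alignment_shift(a, b, max_shift=64):
    """a, b: PCM samples from two rips of the same track.
    Return the shift (in samples) at which the overlapping parts are
    identical, or None if no such shift exists within +/- max_shift."""
    for shift in range(-max_shift, max_shift + 1):
        x, y = (a[shift:], b) if shift >= 0 else (a, b[-shift:])
        n = min(len(x), len(y))
        if n and x[:n] == y[:n]:
            return shift
    return None

# A result of, say, +6 would mean rip 'a' simply starts six samples earlier
# than rip 'b': a BOBO difference, not a difference in musical content.
```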
Indeed - so my point is that ripped files may not always be bit perfect with each other, but they will be a bit perfect subset of the CD (assuming no errors) - and this is what Naim are saying too, I believe.
The point of the original thread was that the Unitserve and the Core rips for a given CD were not bit perfect with each other, and this fits with the above statement.
S