Lossless? Really?

Posted by: madgerald on 29 April 2014

Not sure if this is the right place to ask this Q but pretty sure someone will be able to help...

 

Following the principle that the original is best (I've been brainwashed by vinylheads), and that if you mess with something you make it worse, it follows that if you are going to listen to digital music then CD must be the best format (unless you can get your hands on the original uncompressed file).

 

A good friend of mine disagrees (yes, he is in IT) and says that a ripped "lossless" file will be as good as the original CD since it's all just 1s and 0s anyway.  The only way to settle the argument would be to do a blind test, streaming a ripped "lossless" CD against the original played on my CDX2 through the same DAC, amp and speakers, to see if we can hear the difference.  Trouble is, I don't have a separate DAC and am not about to buy one just to prove him wrong.

 

Has anyone conducted such a test and if so what were the results?  Feel free to point me at a previous post if this has been discussed before. 

 

Thanks if you can prove me righteous  

 

Bill 

Posted on: 06 May 2014 by Marky Mark
Originally Posted by Simon-in-Suffolk:

Big Bill - OK, let's agree to differ on some of those points - anyway, I hope I am right or my job as a professional voice/network design engineer will disappear on Tuesday and I would have wasted the last 20 years... but at least my customers have paid me well for it 

 

 

PS if you are really interested I can point you to texts on data transmission/encoding and entropy, which is of course a founding principle in ICT. OK, I haven't gone into the deep theory and mathematics since being an undergraduate, but it was one of my favourite areas of study, and I certainly use the principles now. You are questioning the words I am using to try and describe the basics in a non-technical way - and it is clearly beyond my capability to explain it to you via this forum - perhaps if we had a whiteboard - I usually find it helps.

 

Of course Morse code was quite inefficient in information rate, but it was a digital, even quantised, mapping of the alphabet, numbers and certain special codes, and indeed in use you would send sentences or messages and seek positive confirmation... or resend.

 

Data comms that use TCP employ dropping algorithms to regulate data flow, so data is sometimes deliberately lost in transmission and needs to be resent; WRED is an example of this in class-of-service managed networks. Of course, lower layers discard link frames if checksums fail due to data transmission errors - and the transport/session mechanisms above will manage the resend, or the data is lost.

UDP is connectionless, often "fire and forget" - so if a datagram is corrupted it is lost for ever - which is why class-of-service managed networks give special consideration to UDP when carrying realtime information such as encoded voice datagrams. As we know, networks that manage class of service across a layer 3 boundary will sometimes need to deliberately drop traffic.

 

Interestingly, UDP datagrams can arrive out of order (UDP itself carries no sequence numbers; protocols layered on top of it, such as RTP, add them). From a data transmission point of view each datagram is received 100% intact - however, if it is used in realtime, its position and relationship to other datagrams is important with respect to time, so an application might choose to discard successfully received data because its information has become invalid.
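As a rough sketch of the "fire and forget" point (illustrative only - the port number and payload here are made up, and real voice/audio streams use a proper protocol such as RTP on top of UDP):

import socket, struct

PORT = 50000  # arbitrary port for this illustration
# Run receive_datagrams() in one process and send_datagrams() in another.

def send_datagrams(host="127.0.0.1"):
    # Fire and forget: no connection, no acknowledgement, no resend.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for seq in range(5):
        payload = struct.pack("!I", seq) + b"voice sample block"
        sock.sendto(payload, (host, PORT))  # if it is dropped en route, it is gone for ever

def receive_datagrams():
    # A realtime receiver discards data that arrives late or out of order:
    # the datagram itself was received 100% intact, but its information is stale.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", PORT))
    sock.settimeout(2.0)
    last_seq = -1
    try:
        while True:
            data, _ = sock.recvfrom(2048)
            (seq,) = struct.unpack("!I", data[:4])
            if seq <= last_seq:
                continue  # successfully received, deliberately discarded
            last_seq = seq
            # ...hand data[4:] to the decoder here...
    except socket.timeout:
        pass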

 

Therefore one needs to look at the information as a whole and not just rely on the data transmission - and this is my point: effectively, transmitting digital information is not simply a case of sending digital data.

 

In the same way, the data transmission can be unreliable and data lost, yet the information recovered using transport methods - and of course this was the basis of the development of the OSI 7-layer model that data/information comms has largely been based on over the last 40 years.

 

PPS - being an engineer I like to design optimally efficient systems, and wasteful, inefficient data consumption for no benefit niggles me - you could argue it is sloppy, lazy design, and it is a lot easier to be inefficient in data transmission than to be efficient.

 

PPPS Graeme - no salutes necessary please 

Here is one example. I think Graeme was saluting Bill btw.

Posted on: 06 May 2014 by Simon-in-Suffolk

Big Bill - what and who is being copied and pasted - I have read the thread - and it was clear to me where references or links were being made by posters - why should you find that annoying?  

 

With regard to one of your queries - on a comment I made to one of your earlier posts - I did go on to say what was not quite right in my opinion, and that was that digital transmission is not always 100% reliable - but you chose not to refer to that explanation. Perhaps it was not clear or relevant to you, but I attempted to address it, and again with a follow-up post which Mark has posted above - so it's not accurate or fair to say I ignored it.

Anyway as Alan suggests can we move on?

Posted on: 06 May 2014 by Marky Mark
Originally Posted by Simon-in-Suffolk:

Marky - sorry to hear that - it's not meaningless to me, but then I have worked with this stuff for most of my career. Sometimes I perhaps assume it is easier than it really is for others who have not studied or worked in this fascinating area to pick it up - and yes, sometimes my phrasing could be better.

 

However, I do get feedback from some, on and off this forum, who do find it useful, and that motivates me to participate - so if you can't make head nor tail of it and it irritates you, could I respectfully ask you to ignore it - but I will never knowingly insult anyone.

 

 

Simon, it may be wrong to assume you know more than everyone else reading this forum.

Posted on: 06 May 2014 by Big Bill
Originally Posted by Simon-in-Suffolk:

Big Bill - what and who is being copied and pasted - I have read the thread - and it was clear to me where references or links were being made by posters - why should you find that annoying?  

 

With regard to one of your queries - on a comment I made to one of your earlier posts - I did go on to say what was not quite right in my opinion, and that was that digital transmission is not always 100% reliable - but you chose not to refer to that explanation. Perhaps it was not clear or relevant to you, but I attempted to address it, and again with a follow-up post which Mark has posted above - so it's not accurate or fair to say I ignored it.

Anyway as Alan suggests can we move on?

I did refer to that statement.  But you just moved the goal posts again.

 

When I referred to digital transmission I meant with the ECCs and CRCs and the whole shebang built in, the full kit and caboodle.  Now are you saying that at this top level of abstraction, if you like to call it that, data transmission is still not 100%?

 

BTW, just look at the standard of grammar and spelling in your last post compared to the one Mark quoted just above it.  And why did we need a discussion about UDP?

Posted on: 06 May 2014 by Simon-in-Suffolk
Originally Posted by Marky Mark:
it may be wrong to assume you know more than everyone else reading this forum.

Absolutely agree on that one.

 

Bill - "When I referred to digital transmission I meant with the ECCs and CRCs and the whole shebang built in, the full kit and caboodle.  Now are you saying that at this top level abstraction" - No I was not saying that. If you add methods or protocols to the manage the data transmission one is better  able to recover or detect errors - in which case case data can be potentially transferred 100% reliably (or pretty near to it). - but digital data transmission is not in itself automatically 100% reliable unless such measure are applied - which was the point I was trying to make.

 

I mentioned UDP as it crops up with UPnP discovery on our home streaming setups - and this is an example where, if the data is corrupted or lost, it is lost full stop - there is (usually) no transport mechanism in the stack to recover it - the UPnP control point or network player could then 'disappear' from the UPnP network, for example, albeit temporarily. 
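For anyone curious what that discovery traffic actually looks like, here is a rough sketch of an SSDP search - a single multicast UDP datagram; if the search or a reply goes missing there is nothing to resend it, so the device simply isn't seen until the next attempt (the search target shown is just one example):

import socket

# SSDP M-SEARCH sent to the standard UPnP multicast address and port.
MSEARCH = (
    "M-SEARCH * HTTP/1.1\r\n"
    "HOST: 239.255.255.250:1900\r\n"
    'MAN: "ssdp:discover"\r\n'
    "MX: 2\r\n"
    "ST: urn:schemas-upnp-org:device:MediaRenderer:1\r\n"
    "\r\n"
).encode()

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(3.0)
sock.sendto(MSEARCH, ("239.255.255.250", 1900))  # fire and forget

try:
    while True:
        reply, addr = sock.recvfrom(2048)  # devices reply with unicast UDP datagrams
        print(addr[0], reply.split(b"\r\n")[0].decode())
except socket.timeout:
    pass  # any lost searches or replies are simply gone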

Posted on: 06 May 2014 by Big Bill
Originally Posted by Simon-in-Suffolk:
Originally Posted by Marky Mark:
it may be wrong to assume you know more than everyone else reading this forum.

Absolutely agree on that one.

 

Bill - "When I referred to digital transmission I meant with the ECCs and CRCs and the whole shebang built in, the full kit and caboodle.  Now are you saying that at this top level abstraction" - No I was not saying that. If you add methods or protocols to the manage the data transmission one is better  able to recover or detect errors - in which case case data can be potentially transferred 100% reliably (or pretty near to it). - but digital data transmission is not in itself automatically 100% reliable unless such measure are applied - which was the point I was trying to make.

But I keep saying, over and over again, that I KNEW THAT and thought it was obvious in the context.  The discussion was not about information theory but about network comms as seen by us users.  How many times do I have to say this?

 

UDP??? ps please don't cut and paste a chunk from Wiki about UDP, I know what it is.  But why was it relevant in this thread?

Posted on: 06 May 2014 by Simon-in-Suffolk

Bill - I don't need to, nor would I want to, cut and paste from Wikipedia; if I want to use Wikipedia on this forum I provide the URL, which I believe is the proper way of using it. 

 

You clearly know what UDP is - but is it wise to assume others know what you know? After all, it's a discussion using illustrations and examples for everyone on this forum who is interested (a number which might be dwindling by now), not just yourself.

Posted on: 06 May 2014 by hungryhalibut

I have no idea what UDP is, and probably don't need to know. What I do know is that last year, when I was setting up my streaming system, Simon was a total lifesaver, giving me really helpful advice when many others would have given up, and that he continues to be supportive of us numpties. So, Bill, please stop being confrontational and join the Forum spirit of mutual advice and support. It's easy to be negative, and harder to be positive, and at the end of the day it's only networking, not world peace. Let's all hold hands and feel the Ethernet love.

Posted on: 06 May 2014 by Simon-in-Suffolk
Originally Posted by Wat:
To me Sandy Denny is the greatest singer-songwriter of the modern era. I wish I had seen Sandy in concert; sadly, I never did. As far as I'm aware I have a copy of every recording she made, in one format or another. Not only did she help Fairport create three of the greatest albums ever put together, but she released a series of amazing solo albums. If there is a better album than "Sandy" then I haven't heard it; if there is a better song than "Who Knows Where the Time Goes?" then I haven't heard it. Sandy died so young and in very sad circumstances, and the world lost the greatest musical star of the 20th century, though Sandy didn't want to be a star. Her musical legacy is unsurpassed and, for me at least, I doubt it ever will be.

Absolutely - "The North Star Grassman" on Sandy Denny's BBC Sessions 1971-1973 has got to be one of the most spine-tingling and emotive tracks of all time - and the technical quality of the recording is not even that great...
Simon

Posted on: 06 May 2014 by Jan-Erik Nordoen
Originally Posted by Hungryhalibut:

I have no idea what UDP is, and probably don't need to know. What I do know is that last year, when I was setting up my streaming system, Simon was a total lifesaver, giving me really helpful advice when many others would have given up, and that he continues to be supportive of us numpties. So, Bill, please stop being confrontational and join the Forum spirit of mutual advice and support. It's easy to be negative, and harder to be positive, and at the end of the day it's only networking, not world peace. Let's all hold hands and feel the Ethernet love.

+ 1 to the sage advice of Lord Emsworth.

Posted on: 06 May 2014 by Big Bill
Originally Posted by Hungryhalibut:

I have no idea what UDP is, and probably don't need to know. What I do know is that last year, when I was setting up my streaming system, Simon was a total lifesaver, giving me really helpful advice when many others would have given up, and that he continues to be supportive of us numpties. So, Bill, please stop being confrontational and join the Forum spirit of mutual advice and support. It's easy to be negative, and harder to be positive, and at the end of the day it's only networking, not world peace. Let's all hold hands and feel the Ethernet love.

That was my point, Hungryhalibut: it is of no relevance to this thread.  UPnP is the protocol our streamers use, and UPnP doesn't use or rely on UDP.  UDP is used to send out messages where you don't get back an OK response.  For example, if you were talking to someone across a noisy room you might break the sentence down into chunks and ask your mate to OK each one as he gets it.  UDP is a low-impact mechanism where this OKing is not used: you just shout out to your mate and hope he got it, and if he didn't, then tough.  You could imagine it being used in those 'ticker-tape' boards you see in banks and doctors' surgeries.

Posted on: 06 May 2014 by Big Bill

Wat, she was special.

Posted on: 06 May 2014 by madgerald
Originally Posted by Jan-Erik Nordoen:
Originally Posted by Hungryhalibut:

I have no idea what UDP is, and probably don't need to know. What I do know is that last year, when I was setting up my streaming system, Simon was a total lifesaver, giving me really helpful advice when many others would have given up, and that he continues to be supportive of us numpties. So, Bill, please stop being confrontational and join the Forum spirit of mutual advice and support. It's easy to be negative, and harder to be positive, and at the end of the day it's only networking, not world peace. Let's all hold hands and feel the Ethernet love.

+ 1 to the sage advice of Lord Emsworth.

As I asked the original question just a quick note of thanks for all the info - I have learned a huge amount from all of the people who have taken time to post!

 

Cheers

 

Bill

Posted on: 06 May 2014 by Bananahead
Originally Posted by Wat:
 

To me Sandy Denny is the greatest singer songwriter of the modern era.

Modern era?

 

I wonder which era we live in currently.

 

 

I do love these forums, with everyone being nostalgic about the distant past.

Posted on: 06 May 2014 by Conrad Winchester

It's amazing, and I'm sorry I'm going to use technical words for brevity.

 

These posts always deteriorate into people like Simon, who do not understand the TCP/IP stack and imagine all sorts of issues with imperfect digital information transfer, arguing with people who really do understand it, like Big Bill. It's a shame that people deny science and 50 years of research in this area so readily.

 

It comes down to this - No matter what happens, the digital information in a signal is either transmitted with 100% success or it is not considered to be transmitted at all. The file format and the cables that you use are completely transparent to the information in the signal (see the morse code analogy above). The only thing that matters before decoding is the information in the signal. Whether that is lossless compressed or uncompressed makes no difference to the PCM/DSD bits that come out the end of the process ready to be decoded by the DAC. It's that simple, no snake oil needed.
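If anyone wants to verify this for themselves, here is a quick sketch (the filenames are placeholders, and it assumes the Python soundfile library, which decodes both WAV and FLAC):

import hashlib
import soundfile as sf  # pip install soundfile

def pcm_fingerprint(path: str) -> str:
    # Decode the file to raw PCM samples and hash them; tags and metadata are ignored.
    samples, rate = sf.read(path, dtype="int16", always_2d=True)  # 16-bit CD-resolution rips assumed
    return f"{rate} Hz, md5 {hashlib.md5(samples.tobytes()).hexdigest()}"

# Two rips of the same CD track: one uncompressed, one losslessly compressed.
print(pcm_fingerprint("track01.wav"))
print(pcm_fingerprint("track01.flac"))
# Identical fingerprints mean bit-identical PCM arriving at the DAC.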

 

For those who want more information, this is a good read

 

http://en.wikipedia.org/wiki/Internet_protocol_suite

Posted on: 07 May 2014 by Aleg

Oh dear, a keeper of truth and a blind one as well.

Posted on: 07 May 2014 by Simon-in-Suffolk
Originally Posted by madgerald:

As I asked the original question just a quick note of thanks for all the info - I have learned a huge amount from all of the people who have taken time to post!

 

Cheers

 

Bill

Hi Bill - you are welcome. You certainly kicked off some debate and interesting views from some - hopefully you have read enough to satisfy your original question.

 

Simon

Posted on: 07 May 2014 by james n

Shame the thread has gone somewhat off track - I have to admit I do enjoy Simon's contributions, although a lot of the more interesting subjects discussed on here could be more easily debated over a couple of pints... 

 

But back to the original question: I still prefer uncompressed, in this case AIFF. Every time I've changed my computer audio solution - whether Mac 'n' Dac, Linn and Naim streamers and now the Devialet - I re-check my preference, as I could gain quite a bit of space back on my hard drive. I take a few of my favourite tracks, convert them to ALAC and listen to them over the next week or two, comparing them to the AIFF versions, and I always find myself going back to the AIFF versions. I'm quite happy that, when decompressed, the ALAC file is the same as the AIFF file, and so assume there is some mechanism in the playback chain which favours the AIFF version. As Naim themselves point out in their white papers, processor loading when decompressing increases the power supply noise floor in their streamers, hence the preference for uncompressed WAV. 

Posted on: 07 May 2014 by Big Bill
Originally Posted by Conrad Winchester:

It's amazing, and I'm sorry I'm going to use technical words for brevity.

 

These posts always deteriorate into people like Simon, who do not understand the TCP/IP stack and imagine all sorts of issues with imperfect digital information transfer, arguing with people who really do understand it, like Big Bill. It's a shame that people deny science and 50 years of research in this area so readily.

 

It comes down to this - No matter what happens, the digital information in a signal is either transmitted with 100% success or it is not considered to be transmitted at all. The file format and the cables that you use are completely transparent to the information in the signal (see the morse code analogy above). The only thing that matters before decoding is the information in the signal. Whether that is lossless compressed or uncompressed makes no difference to the PCM/DSD bits that come out the end of the process ready to be decoded by the DAC. It's that simple, no snake oil needed.

 

For those who want more information, this is a good read

 

http://en.wikipedia.org/wiki/Internet_protocol_suite

Thanks Conrad, needless to say I agree with everything you say.

 

Bananahead, I think Wat, like me, may also be a big classical fan to say that - I am also in love with Maria Callas and Renata Tebaldi!

 

Wat, I will try and explain, so bear with me please.  What you say about TCP/IP is right, I suppose; it ain't perfect.  But there was an interesting point in that article referenced by Conrad, where it states that it wasn't so much a grand plan as something that evolved - I had never looked at it like that before.  Furthermore, it was this evolution of the standards that made it fit into so many niches.

 

On to UPnP and servers in general.  One of the things that has developed most in the time I have been involved in IT is application servers.  Before PC networks we had one massive computer (for the times) and a load of dumb terminals (maybe slightly intelligent, with some screen rendering).  But such systems, although they sound really archaic, had one great advantage over PC networks: the newly developed database platforms, like SQL, ran on a single machine. Doesn't sound like much, does it?  When PC networks started to replace monolithic IBM mainframes (90% of the market), people wanted to run applications that required a database, and this is where the problems started.  People started developing applications using 4GLs or newly developed database systems like dBase, DataMaster, Access etc., and these were all OK-ish on a single machine but were rubbish on networks.  This was because they were all trying to open, at the same time, the database files stored on a file server, and regular as clockwork those files got corrupted.

The answer came in the form of the database server, and I believe the first successful implementation was developed by Amin Gupta, who was on the original IBM team that developed SQL.  This server sat on a machine (I think a Novell NetWare file server) and it was the only machine that opened the database files, which resided on that same machine.  Corruption of data: gone in an instant.  Now, the software we developed did not open database files but sent messages to the database server, for example "send all the invoice records for company A", and back came what came to be known as a result set.  BTW, that message would have been sent using the SQL language after your application had connected to the database server - just like your Naim streamer connects to a UPnP server (see, I am getting there).  This totally revolutionised the IT world, and today virtually all data-centric applications use this approach.  The database server products of today are incredibly sophisticated - MS SQL Server and Oracle, or even the free MySQL server.  They can do things like span databases across different machines and have redundancy built in - oh, I could go on and on, and often do.
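If it helps, here is a tiny sketch of that "send a message, get a result set back" idea using Python's standard DB-API. The table, column and data are invented, and sqlite3 (an embedded engine) is only standing in for a real client/server product like SQL Server, Oracle or MySQL, where only the server process ever touches the database files:

import sqlite3  # embedded stand-in; a real database server would sit behind a network connection

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Set up some hypothetical invoice data so the example runs on its own.
cur.execute("CREATE TABLE invoices (company TEXT, amount REAL)")
cur.executemany("INSERT INTO invoices VALUES (?, ?)",
                [("Company A", 120.00), ("Company B", 80.00), ("Company A", 45.50)])

# The client never opens the database files - it just sends SQL text...
cur.execute("SELECT * FROM invoices WHERE company = ?", ("Company A",))

# ...and back comes the result set.
result_set = cur.fetchall()
print(len(result_set), "invoice records for Company A")
conn.close()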

 

Now, this success I suppose sparked a whole load of other server applications: web servers, application servers - where you use code resident not on the local PC but on some remote server, e.g. ASP pages, JSP pages etc. - and a whole host of other things.

 

Servers like those described above give freedom of use: they allow the use of different client platforms - an Apache server can be used by PCs, Apples, Androids etc., for example - and they make implementation on new devices relatively simple.

 

UPnP is an example of this.  You can run UPnP servers on a NAS and you can have a 'control point' (is it only me who thinks that is a silly name?) on a PC, Apple etc.  The actual music files you play on your streamer are never opened as files on your streamer.  Take my setup as an example: my NAS is password protected but my UPnP server (Minim) has no extra password protection.  To access my music files as actual files you would have to know the password, but to stream them you don't.  BTW, if that doesn't make a lot of sense then please ask and I will try again.
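To illustrate the point about the streamer never opening the files itself: once the control point has told it what to play, the renderer just pulls the audio over plain HTTP from the UPnP server. A rough sketch - the address and path below are entirely made up, just roughly the shape of URL a server like Minim hands out:

import urllib.request

TRACK_URL = "http://192.168.1.10:9790/minimserver/music/track01.flac"  # hypothetical example

# Read the stream chunk by chunk; a real renderer would buffer and decode
# rather than save to disk, but the transport is the same plain HTTP GET.
with urllib.request.urlopen(TRACK_URL) as response, open("track01.flac", "wb") as out:
    while chunk := response.read(64 * 1024):
        out.write(chunk)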

 

The other benefit we saw from using database servers instead of file sharing was speed.  I remember a client who had developed an Access database which worked fine on a single machine, was noticeably slower on two machines and stopped working on three.  So UPnP servers give you the possibility of having devices all round your house to play music, and they make delivery timely and implementation very easy.

Posted on: 07 May 2014 by scillyisles

Thank you, Big Bill, for posting some sense on this thread. I have also found that Simon posts a lot of technobabble. Whilst he is no doubt trying to be helpful, sometimes in his posts, particularly about TCP and other information technology subjects, it is as if he has put all the words in a bag, shaken the bag up and then picked some out and strung them together.

The resultant sentences look impressive to the uninformed, but to those who work in this field they are, as you say, technobabble.

 

Posted on: 07 May 2014 by andarkian

 

 

Ah, the beauty of the mainframe! I was in our relatively new John Lewis store looking for some lights and couldn't help noticing that the search was being carried out through an IBM CICS system, probably of 1970s vintage. If you want to see a real rational model then think of the human brain: evolution decided that it would probably be insane to distribute brain processing all over the body so that the arm could compete with the leg etc. The whole Gartner-driven 90s philosophy of pure client/server was, and still is, bunkum. I really do like the idea that all my music is on, and accessible from, central storage. I do not want these assets splattered all over the place unnecessarily, and obviously the internet has made direct access and execution so much more sensible and easy. Sorry, I digress.

 

Posted on: 07 May 2014 by Big Bill

Andarkian said:

Ah, the beauty of the mainframe! I was in our relatively new John Lewis store looking for some lights and couldn't help noticing that the search was being carried out through an IBM CICS system, probably of 1970s vintage. If you want to see a real rational model then think of the human brain: evolution decided that it would probably be insane to distribute brain processing all over the body so that the arm could compete with the leg etc. The whole Gartner-driven 90s philosophy of pure client/server was, and still is, bunkum. I really do like the idea that all my music is on, and accessible from, central storage. I do not want these assets splattered all over the place unnecessarily, and obviously the internet has made direct access and execution so much more sensible and easy. Sorry, I digress.
 
You saw this in a John Lewis store recently?  If the answer is yes then all I can say is: wow!
 
Actually there is a bit of decentralisation of our brain, certainly in our eyesight and hearing.  But all the central processing is done between our ears I guess.
 
Pure client/server for databases has become much less popular these days.  It involves the distribution of code and all the management that entails.  But for a period it was the ONLY way to write code that really would work across a network.  Nowadays the 'thin client' is much preferred, but there are a couple of difficulties with it:
(a) It ain't always that 'thin' - go into Task Manager and see how much memory your web browser is using; I use Firefox and it is currently using over half a gigabyte.  Can you imagine saying that to a CICS programmer in the 1970s?  He would have said either "you must be mad" or "what is a gigabyte?".
(b) Because it uses a browser, the user interface is not as rich as you can get using, say, PowerBuilder or C# .NET in client/server form.  Yes, I know there are various environments you can use that make the browser look like it is not a browser and even allow you to change the right-mouse-button functionality, which was always my favourite grouse.
 
Remember too that the database solution you choose doesn't know if you are using Client/Server or Thin Client and you can even mix the two types of front ends.  The last few projects I was involved in before I retired did exactly this.  A data entry and management team used a Client/Server solution and the guys in the branches scattered over our fair nation used thin client.
 
PS I must add that I am far too young to remember what CICS (pronounced 'kicks') is, and anyone who says different.....
Posted on: 07 May 2014 by Jude2012
Originally Posted by Marky Mark:
Originally Posted by Jude2012:
Originally Posted by Marky Mark:

The usual obfuscation going on here. I think it is worthwhile looking at the basic facts rather than dusty academic papers. At the risk of generalisation and repetition, I might describe the 'challenge' as:

1) FLAC file (a track of 5 mins length) = 40MB
2) Throughput on home network = 10MB per second
3) Network cards / hubs throughput = 10MB per second

Given the above, I might venture the following high-level solutions:

1) transfer entire file to DAC memory in 4 seconds then play in entirety
2) smooth stream over 5 mins at 0.13 MB per second (approx 1/80th of operating throughput) adding a bit on for buffering.
3) some hybrid of the above

 

Whichever I do, decoding is a very, very, very small job: an in-memory operation that could be completed in milliseconds, whether all at once or in drip-feed fashion, for a complete FLAC file, with negligible processing. Sure, there may be a processing side-effect - if you're using a ZX81.
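Putting rough numbers on it (a back-of-envelope sketch using the illustrative figures quoted above):

FILE_MB = 40         # 5-minute FLAC track
LINK_MB_PER_S = 10   # home network / NIC throughput
TRACK_SECONDS = 5 * 60

bulk_transfer_s = FILE_MB / LINK_MB_PER_S       # 4.0 s to pull the whole file into memory
stream_rate = FILE_MB / TRACK_SECONDS           # ~0.13 MB/s for a smooth 5-minute stream
share_of_link = stream_rate / LINK_MB_PER_S     # a tiny fraction of the available throughput

print(bulk_transfer_s, round(stream_rate, 2), f"{share_of_link:.1%}")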

 

As mentioned on here a long time ago, a Raspberry Pi could be adapted for streaming duties. Certainly it could hold the track above in memory. As could many smartphones and even some wrist-watches.

Perhaps you are right, perhaps you are wrong.  There is no one solution for any one set-up or any one set of ears.  As for obfuscation of facts.......

There is no perhaps, the above simplistic measures of capacity and throughput are a perfectly sound example. You might work out what the specific parameters are for your specific situation if you so desired but I doubt it makes much odds. This is about focusing on real issues rather than making a song-and-dance about non-issues.

 

Regarding the right solution for anyone's ears, that isn't easy to compare notes on without everyone having the device(s) in question available to listen to and all being in the same shared space. That is not going to happen on a forum.


Marky, thanks for your input.  I'll make appropriate use of it.

 

Agree that the right solution is not easy to compare on the forum and that everyone is free to use the forum and other sources to increase their understanding.

 

Jude

 

 

 

Posted on: 07 May 2014 by andarkian
Customer Information Control System! The multi-million-pound IBM 370 that I originally worked on had 256KB of memory. 


 

Posted on: 07 May 2014 by Aleg
Originally Posted by Big Bill:

..

PS I must add that I am far too young to remember what CICS (pronounced 'kicks') is, and anyone who says different.....

 

CICS is still alive and kicking today.

 

And it is still a more reliable platform than all those Mickey Mouse x86-based 'machines'.