Tidal Firmware Update?
Posted by: nigelb on 21 August 2016
Does anyone know what the current status of the (delayed?) Tidal firmware update is?
Has it gone back to beta testing? Can the beta testers reveal anything yet?
As a lover of Tidal (and the new artists it has opened my eyes to), I would very much appreciate a further uplift in SQ that the anticipated update promises to provide.
Tidal has been working like a Swiss clock this week, which is not always the case.
Nice work Naim on the new gear - no doubt good for the business and for new customers.
However, I'm not convinced that the blurb about 'much better internal buffer and memory' resulting in 'far fewer dropouts' bodes well for existing Tidal subscribers with constant dropouts on existing kit. I think it's time to ask to be a beta tester, in the hope the firmware might fix the dropouts, or else give up on Tidal altogether.
Tidal and a good R2R are a good option for listening to new music. I find that I get fewer dropouts in the morning, and more from mid afternoon onwards.
I have not suffered too much in recent months with drop outs but this afternoon Tidal was unlistenable. I realise that this is probably due to an overloaded or stressed out Tidal server somewhere in the world and not necessarily the sort of thing that the long-awaited firmware update can sort.
However, now the new Uniti range has been launched, any chance of an update on what is happening (if anything) with the firmware update? Trevor? I listen a lot to Tidal so this matters to me, especially if the update reduces dropouts and enhances SQ.
Simon-in-Suffolk posted:Hi, most of the Tidal issues have been down to duplex latency, that is, the round-trip time between the streamer, the Tidal media server and the streamer again. This can vary with load on the Tidal (or any) server, as well as load on the network route. The current firmwares are quite sensitive to this; however, the firmwares in development for the streamers are more tolerant of latency here, and dropouts will be significantly improved.
This issue is not directly related to network bandwidth other than in extreme scenarios of congestion. RTD (round-trip delay) is a normal TCP/IP network characteristic, and in short the clients need to be able to tolerate it over the typical operating environment with the supported applications. Typically you can mitigate RTD by having larger TCP window/segmentation memory, by having larger amounts of application buffer memory, and/or by improving the speed and efficiency of the client's TCP/IP state machine. I believe Naim are focusing on the latter. The two former options tend to increase application delay and latency.
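Simon's point about window memory, buffering and RTD can be made concrete with a back-of-envelope sketch. This is purely my own illustration with made-up numbers (nothing to do with Naim's actual firmware): TCP throughput is bounded by window size over round-trip delay, and application buffer size determines how long playback can survive a stall.

```python
# Back-of-envelope sketch of why round-trip delay (RTD) matters to a
# streaming client. Illustrative numbers only -- not Naim's firmware logic.

def max_throughput_bps(window_bytes: int, rtd_seconds: float) -> float:
    """Upper bound on TCP throughput for a given window and round-trip delay."""
    return window_bytes * 8 / rtd_seconds

def buffer_seconds(buffer_bytes: int, stream_bitrate_bps: float) -> float:
    """How long playback can continue from buffer alone during a stall."""
    return buffer_bytes * 8 / stream_bitrate_bps

# A 64 KiB TCP window over a 100 ms round trip caps throughput at ~5.2 Mbit/s,
# regardless of how fast the broadband line is:
print(max_throughput_bps(64 * 1024, 0.100))  # 5242880.0

# A hypothetical 4 MiB application buffer holds roughly 24 s of 1411 kbit/s
# CD-quality audio -- enough to ride out a short RTD spike:
print(buffer_seconds(4 * 1024 * 1024, 1_411_000))
```

This also shows why Simon says bigger buffers trade against latency: a buffer big enough to absorb long stalls also delays the start of playback while it fills.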
Simon
I have heard for quite a while now that new firmware is coming, but it is taking too long. I hope the mentioned improvements will be rolled out soon. While I understand that a good testing process is needed, this is not what we should expect in the world of software updates.
Personally I would prefer to have poor Tidal sound quality with no dropouts than super sound quality for 5 seconds followed by a 20-second dropout, which is what I get at present.
Dozey posted:Personally I would prefer to have poor Tidal sound quality with no dropouts than super sound quality for 5 seconds followed by a 20-second dropout, which is what I get at present.
Surely it is not too much to ask in this day and age to have both the best sound quality available and no (or very few) drop-outs. Actually I am reasonably satisfied with the SQ of full fat Tidal. It is just that there was more than a hint (from Naim and the Beta team) that the firmware update came with enhanced SQ.
Although I am happy to wait until Naim get the sound they want from the FW update, some communication would be nice. Not a running commentary, but some update. The silence is deafening.
OK - I added a Google Chromecast at the weekend - rubbish sounding using the analogue output, but quite good using the mini-TOSLINK to TOSLINK digital output into the NDS digital input. Plenty good enough for Sleaford Mods!
Well, maybe not rubbish sounding - but not as good as the NDS!
The quality of Tidal is very good. There seems to be an occasional dropout at busy times at the end of the day, but so what?
I take it you are not using an NDS then AUDIO1946?
audio1946 posted:The quality of Tidal is very good. There seems to be an occasional dropout at busy times at the end of the day, but so what?
Because there is the prospect of even better SQ and fewer dropouts with the long-awaited firmware update FREE! Can't turn your nose up at that!
I don't know if anyone's heard of this new project Tidal is working on regarding the MQA file format. MQA is something Bob Stuart of Meridian has invented, and Tidal, as well as a large portion of the industry, is considering embracing it. Is Naim going to embrace the format as well? If anyone's interested to learn more, there's a very informative video on YouTube from Rocky Mountain Audio Fest 2015 called Streaming Audio: Preserving the past, protecting the future.
vinylrocks posted:I don't know if anyone's heard of this new project Tidal is working on regarding the MQA file format. MQA is something Bob Stuart of Meridian has invented, and Tidal, as well as a large portion of the industry, is considering embracing it. Is Naim going to embrace the format as well? If anyone's interested to learn more, there's a very informative video on YouTube from Rocky Mountain Audio Fest 2015 called Streaming Audio: Preserving the past, protecting the future.
Great question and one I would certainly like answered. But I am afraid we will not get a definitive answer from Naim as it has been stated many times on other threads that Naim do not comment on current or future developments prior to a formal launch.
Ooh, I hope I am proved wrong here.
I'm not sure why Naim would be willing to pay a license fee for a lossy format that seems to have almost no industry support.
Bananahead posted:I'm not sure why Naim would be willing to pay a license fee for a lossy format that seems to have almost no industry support.
As I understand it, it isn't lossy at all! You can have a look at their website to see the current status of industry support, which includes Pioneer, Onkyo and NAD, as well as Warner Music, 2L etc. Also, please have a look at the video I mentioned before you comment.
Bananahead posted:I'm not sure why Naim would be willing to pay a license fee for a lossy format that seems to have almost no industry support.
That's my general thinking too. Not that I am bothered either way.
MQA has been in gestation for longer than I can remember. Maybe one day. Or not.
vinylrocks posted:Bananahead posted:I'm not sure why Naim would be willing to pay a license fee for a lossy format that seems to have almost no industry support.
As I understand it, it isn't lossy at all! You can have a look at their website to see the current status of industry support, which includes Pioneer, Onkyo and NAD, as well as Warner Music, 2L etc. Also, please have a look at the video I mentioned before you comment.
It is lossy at least in a mathematical sense in that there is no reverse algorithm to restore the compressed data exactly to the original.
It's like taking the sum "2+2+2+2+2+2". Lossily it can be stored as 10. Losslessly it can be stored as 5x2 (if you define the lossless algorithm as storing repeated addition as multiplication, so you know how to restore it).
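The arithmetic analogy can also be sketched as a toy run-length codec. This is purely illustrative of the lossy/lossless distinction (it is not MQA's actual algorithm): the run-length encoding is exactly invertible, so nothing is lost, whereas storing only a sum is "lossy" in the mathematical sense because many different inputs collapse to the same value.

```python
# Toy illustration of lossy vs lossless storage -- not MQA's algorithm.

def rle_encode(data):
    """Lossless run-length encoding: stores (value, count) pairs, exactly invertible."""
    out = []
    for x in data:
        if out and out[-1][0] == x:
            out[-1] = (x, out[-1][1] + 1)  # extend the current run
        else:
            out.append((x, 1))             # start a new run
    return out

def rle_decode(pairs):
    """Exact inverse of rle_encode: the original data is fully restored."""
    return [x for x, n in pairs for _ in range(n)]

original = [2, 2, 2, 2, 2, 2]
encoded = rle_encode(original)            # [(2, 6)] -- compact, but invertible
assert rle_decode(encoded) == original    # round-trips exactly: lossless

lossy = sum(original)                     # 12 -- the individual terms are gone
# There is no unique way back from 12 to the original list: that is lossy.
```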
That's very interesting! I'm not very familiar with the subject myself, but all the research I've done shows otherwise.
Take a look in this very detailed presentation https://youtu.be/T5o6XHVK2HA
If you don't want to watch the whole thing, just watch the part from 14:23-17:50, where he explains the lossless decompression of the files.
Eloise posted:vinylrocks posted:Bananahead posted:I'm not sure why Naim would be willing to pay a license fee for a lossy format that seems to have almost no industry support.
As I understand it, it isn't lossy at all! You can have a look at their website to see the current status of industry support, which includes Pioneer, Onkyo and NAD, as well as Warner Music, 2L etc. Also, please have a look at the video I mentioned before you comment.
It is lossy at least in a mathematical sense in that there is no reverse algorithm to restore the compressed data exactly to the original.
It's like taking the sum "2+2+2+2+2+2". Lossily it can be stored as 10. Losslessly it can be stored as 5x2 (if you define the lossless algorithm as storing repeated addition as multiplication, so you know how to restore it).
I count myself as being somewhat mathematically challenged, but surely you mean 2+2+2+2+2
vinylrocks posted:That's very interesting! I'm not very familiar with the subject myself, but all the research I've done shows otherwise.
Take a look in this very detailed presentation https://youtu.be/T5o6XHVK2HA
If you don't want to watch the whole thing, just watch the part from 14:23-17:50, where he explains the lossless decompression of the files.
Hi, yes MQA is lossy; here is a wiki https://en.m.wikipedia.org/wik...uality_Authenticated with some references.
However, just because it's lossy, don't discount it... the interesting thing with MQA is the calibrated construction and reconstruction filtering at both ends of the ADC and DAC chain. If you study DSP and digital encoding you will know that, theoretically, both should be matched for optimum conversion and reconstruction... this is an Achilles' heel of current, ubiquitous digital audio processing in the audio industry, and I suspect it limits conventional hi-def. Unfortunately this beneficial area of the MQA description seems to be full of flowery, slightly obfuscating language (IMO).
However, my concern is how it handles high-speed timing info as opposed to frequencies. Research shows (as documented by various papers at the AES) that for many listeners, true hi-def comes down to inter-sound timing information rather than extended frequency information in the analogue domain. True hi-def is described by some as being indistinguishable from reality, as opposed to meeting technical parameters.
Also looking at the MQA implementation discussions, I can't see how MQA could work effectively with the current Naim streamer architectures.
Simon
Simon, I get that there is a lossy part of the compression process, but it only affects information over 48 kHz, which is almost always nothing but noise. We have to make some distinctions, though, between audio sampling and frequency response. Normal PCM digital files may be 44.1k but carry information that goes up to 21-22 kHz. MQA promises lossless 0-48 kHz frequency response and lossy 48-192 in one file that could replace the Red Book format and could be transmitted at near today's CD-quality streaming bit rates. On top of that, this technology carries a digital fingerprint inside the file, so when the file is corrupted or parts are missing it can let you know you are playing an imperfect file.
I doubt there is a microphone used in recording studios today that captures more than a 40 kHz frequency response. So do we need 24/192? The answer is yes, but for recording purposes, where multichannel recordings with various effects take up space without being compressed.
Anyway, maybe I'm wrong but I was intrigued with the thought...
vinylrocks posted:Simon, I get that there is a lossy part of the compression process, but it only affects information over 48 kHz, which is almost always nothing but noise. We have to make some distinctions, though, between audio sampling and frequency response. Normal PCM digital files may be 44.1k but carry information that goes up to 21-22 kHz. MQA promises lossless 0-48 kHz frequency response and lossy 48-192 in one file that could replace the Red Book format and could be transmitted at near today's CD-quality streaming bit rates. On top of that, this technology carries a digital fingerprint inside the file, so when the file is corrupted or parts are missing it can let you know you are playing an imperfect file.
I doubt there is a microphone used in recording studios today that captures more than a 40 kHz frequency response. So do we need 24/192? The answer is yes, but for recording purposes, where multichannel recordings with various effects take up space without being compressed.
Anyway, maybe I'm wrong but I was intrigued with the thought...
Just a minor correction: since the frequency bandwidth is, as a rule of thumb, half the sample rate (hence 44.1 kHz leads to 22.05 kHz), the upper frequency we get with 192 kHz is 96 kHz. So as I understand it, it is the 48-96 kHz frequency band that is lossily compressed.
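For anyone following along, the Nyquist arithmetic being used in this thread is just "half the sample rate" (a throwaway sketch, nothing more):

```python
# The Nyquist limit: the highest representable frequency is half the sample rate.

def nyquist_khz(sample_rate_hz: int) -> float:
    """Highest representable frequency (in kHz) for a given sample rate."""
    return sample_rate_hz / 2 / 1000

print(nyquist_khz(44_100))   # 22.05 (CD / Red Book)
print(nyquist_khz(96_000))   # 48.0
print(nyquist_khz(192_000))  # 96.0 -- hence the 48-96 kHz band discussed above
```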
Yes, and don't be constrained into thinking the Nyquist sample frequency only relates to acoustic frequency pitch. The key consideration with human hearing is also inter-sound timing; as we get older this is only marginally affected, whereas pitch detection deteriorates more distinctly. Of course, Nyquist sampling theory doesn't differentiate between these two, which is why we can see a preference for higher sample rates with a significantly lower-pass reconstruction filter. It is this timing info I am uncertain about: how is it constrained by MQA? Perhaps it's limited to the 'base band lossless' sample-rate element of the MQA encoding.