How Fast Is Broadband in 2018?
Posted by: Mike-B on 23 July 2018
Comparison site 'Cable' has published the Mean Download Speed ranking for the planet in 2018. I've shown the top 10 and selected a few of the next 40 for those interested; the full list runs to 200 countries. Sorry if I've missed your patch. The full report is here: www.cable.co.uk/broadband/rese...d-speed-league-2018/
2 Sweden 46.00
3 Denmark 43.99
4 Norway 40.12
5 Romania 38.60
6 Belgium 36.71
7 Netherlands 35.95
8 Luxembourg 35.14
9 Hungary 34.01
10 Jersey 30.90
16 Spain 27.19
20 USA 25.86
25 Germany 24.00
32 Poland 19.73
35 UK 18.57
43 Italy 15.10
47 Russia 13.51
50 Serbia 13.00
Pev posted:Yep - Truespeed are guaranteeing 200 Mbps both download and upload with no contention - roll on October when we are due to be connected.
Of course the proof of the pudding...
Truespeed's blurb suggests they use broadband PON technology, so there is sharing of aggregation and distribution links and there will often be a degree of 'contention', although the term really has little meaning in these contexts. I find their blurb potentially a little misleading, which doesn't inspire total confidence. To have non-shared access you need a commercial direct point-to-point fibre to a PoP, but you will be paying significant commercial rates for that service, which typically these days offers full duplex at one of the common Ethernet rates of 100 Mbps, 1 Gbps, 2.5 Gbps, 5 Gbps or 10 Gbps.
Simon-in-Suffolk posted:Atom/Iota/Kan Stands posted:.. Well how about Naim building a dedicated box to store 30 mins or 1 hour of music, to totally remove the scourge of drop outs!?!
I think they call it a NAS, or the Uniti Core.....
I think he means to pull down eg a couple of albums from Tidal or a complete playlist or something and play back from the store rather than over the internet connection.
It would have to be transient (so expire if the playback is paused more than x minutes or something) otherwise it probably wouldn't conform to Tidal etc T&Cs.
best
David
Ie a really large buffer. Just a gigabyte of RAM could hold considerably more than an album of 16/44, and memory is so cheap that it should not be a real issue, especially given the cost of Naim streamers. So I wonder if there is some other reason for not doing so, other than simply not recognising that even the enlarged buffer in the most recent devices is insufficient to iron out the vagaries of internet connections?
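For scale, a quick back-of-envelope check of that claim (a sketch only; this assumes raw 16-bit/44.1 kHz stereo PCM, and FLAC storage would be roughly half the figure):

```python
# Back-of-envelope: bytes needed for raw (uncompressed) 16-bit/44.1 kHz
# stereo PCM. FLAC would compress this to roughly half.
BYTES_PER_SECOND = 44_100 * 2 * 2      # sample rate * 2 channels * 2 bytes/sample

def audio_bytes(minutes: float) -> int:
    """Raw PCM size in bytes for the given duration."""
    return int(minutes * 60 * BYTES_PER_SECOND)

hour = audio_bytes(60)                 # a generous hour-long album
print(f"1 hour of 16/44 PCM: {hour / 1e6:.0f} MB")     # 635 MB
print(f"Fits in 1 GB of RAM: {hour < 1_000_000_000}")  # True
```

So a gigabyte of RAM would indeed comfortably hold a full uncompressed album, and around two albums as FLAC.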
It has occurred to me before that being able to download Tidal music to local storage would be a useful feature, especially for those with legacy streamers and/or flaky internet connections. You can do this with Tidal on portable devices, and you get to use the downloads for as long as your sub is active, so maybe it could be implemented to local USB storage on streamers.
As far as buffers go, I find the BubbleUPnP Server works quite well... it’s not a huge buffer, but large enough that it allows more reliable accesses with the legacy streamers with minimal to no dropouts on Tidal and Qobuz for those with low access speed or large ratio asymmetric internet accesses.
Innocent Bystander posted:Ie a really large buffer. Just a gigabyte of RAM could hold considerably more than an album of 16/44, and memory is so cheap that it should not be a real issue, especially given the cost of Naim streamers. So I wonder if there is some other reason for not doing so, other than simply not recognising that even the enlarged buffer in the most recent devices is insufficient to iron out the vagaries of internet connections?
Perhaps increasing buffer size adversely affects sound quality. I know one of the tweaks to improve the sound quality of the SBT is to reduce the buffer size.
Also, if everybody in the UK downloaded 1 GB of data they didn't use, wouldn't that slow the interweb down? Not to mention the wasted energy.
On the legacy streamers, using a proxy with its own TCP buffers typically improves SQ for internet streaming, because the TCP engine in the streamer works less hard: with the proxy the streamer tends to drop to a basic window-segment buffer semaphoring approach, as opposed to more dynamic segment flow control with a peer across the internet... quite noticeable in my experience. If you look at a Wireshark trace you will see what I mean.
Simon
God I really wish I knew what all that meant!
fatcat posted:Innocent Bystander posted:Ie a really large buffer. Just a gigabyte of RAM could hold considerably more than an album of 16/44, and memory is so cheap that it should not be a real issue, especially given the cost of Naim streamers. So I wonder if there is some other reason for not doing so, other than simply not recognising that even the enlarged buffer in the most recent devices is insufficient to iron out the vagaries of internet connections?
Perhaps increasing buffer size adversely affects sound quality. I know one of the tweaks to improve the sound quality of the SBT is to reduce the buffer size.
Also, if everybody in the UK downloaded 1 GB of data they didn't use, wouldn't that slow the interweb down? Not to mention the wasted energy.
Presumably the buffer size v performance aspect depends on the amount of memory and the processing capacity of the computer (that part of a streamer being a computer). I'm sure it could be designed to have a large buffer without negative effect.
As for the effect on the internet, it isn't data you don't use, but simply data downloaded in advance of using it, so there is no increase in data transferred.
Finkfan posted:God I really wish I knew what all that meant!
Don't worry. What it means is that if you have a legacy streamer (NDX, NDS etc) your internet stream will usually sound better if you go via a proxy or staging server. BubbleUPnP Server is an example of this, which has been described elsewhere on the forum. My post described what I believe are the most likely reasons why, based on analysis and investigation I have undertaken.
Innocent Bystander posted:fatcat posted:Innocent Bystander posted:Ie a really large buffer. Just a gigabyte of RAM could hold considerably more than an album of 16/44, and memory is so cheap that it should not be a real issue, especially given the cost of Naim streamers. So I wonder if there is some other reason for not doing so, other than simply not recognising that even the enlarged buffer in the most recent devices is insufficient to iron out the vagaries of internet connections?
Perhaps increasing buffer size adversely affects sound quality. I know one of the tweaks to improve the sound quality of the SBT is to reduce the buffer size.
Also, if everybody in the UK downloaded 1 GB of data they didn't use, wouldn't that slow the interweb down? Not to mention the wasted energy.
Presumably the buffer size v performance aspect depends on the amount of memory and the processing capacity of the computer (that part of a streamer being a computer). I'm sure it could be designed to have a large buffer without negative effect.
As for the effect on the internet, it isn't data you don't use, but simply data downloaded in advance of using it, so there is no increase in data transferred.
Computer is not really a term I would use, unless you are using it to describe a state machine, like digital light dimmers, electronic oven timers, CD players, remote controls etc. The memory is in the network interface device: it is the memory, or buffers, dynamically used by the network transport protocol state machine to validate and collate data and then pass it to the user, which can be an application, computer, appliance, streamer etc. In the case of the streamers, larger transport memory means greater network latency can be handled without dropouts. It also means SQ is affected differently, and really the streamer memory is less relevant here; instead, a staging server that collates and rapidly sends the data when the streamer buffers are empty, and then stops when they are full, seems to suit SQ the best. For example, using a proxy for Tidal usually sounds better than streaming Tidal directly.
fatcat posted:Innocent Bystander posted:Ie a really large buffer. Just a gigabyte of RAM could hold considerably more than an album of 16/44, and memory is so cheap that it should not be a real issue, especially given the cost of Naim streamers. So I wonder if there is some other reason for not doing so, other than simply not recognising that even the enlarged buffer in the most recent devices is insufficient to iron out the vagaries of internet connections?
Perhaps increasing buffer size adversely affects sound quality. I know one of the tweaks to improve the sound quality of the SBT is to reduce the buffer size.
Also, if everybody in the UK downloaded 1 GB of data they didn't use, wouldn't that slow the interweb down? Not to mention the wasted energy.
I have no idea how big a buffer 'needs' to be to eliminate issues such as Tidal dropouts and compensate for high latency. The new streamers have 50Mb, which is a lot more than the old platform. The extra processing power to run the new streamers, load data more quickly over Ethernet and WiFi, etc. might have the potential to generate more noise, and mitigating this must have been a major goal for the designers. Let's hope that the Bubble 'workaround' is no longer required on the NDX2!
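A rough sense check of that figure, assuming (this is a guess, not something Naim have confirmed) that '50Mb' means 50 megabytes of decoded 16/44 stereo PCM:

```python
# How long would a given buffer last at 16/44 stereo PCM rate?
# (Assumes the buffer holds decoded PCM; the real internal format is unknown.)
PCM_RATE = 44_100 * 2 * 2              # bytes per second for 16/44 stereo

def seconds_buffered(buffer_bytes: int) -> float:
    """Playback seconds a buffer of the given size could cover."""
    return buffer_bytes / PCM_RATE

print(f"{seconds_buffered(50_000_000):.0f} seconds")   # ~283 s, nearly 5 minutes
```

On that (unconfirmed) reading, the new platform could ride out several minutes of a stalled connection, against only a few seconds on the old one.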
Chris - I think you may be getting your buffers slightly mixed up. The buffers you are referring to are, I believe, application spool buffers - which is where the sample is stored in memory as quickly as possible and then spooled out - see below.
The buffers I am referring to are network TCP window buffers. From memory (I can't find the email that confirms it) I believe the Naim streamers use 64 kbyte TCP window buffers per session.
Now throughput is not limited by internet access speed alone (most consumers are probably not aware of this) but also by latency and the TCP window size of the network device.
Throughput (using connection-oriented transport) is governed by the TCP window buffer divided by the round-trip delay; equivalently, the window needed is the throughput multiplied by the RTD. Therefore a 1 Mbps (i.e. FLAC) stream with a 300 ms RTD to the streaming service host requires
1,000,000/8 * 0.3 = 37.5 kbytes of TCP window size.
Now if the RTD latency temporarily increases to 600 ms, then the maximum throughput a TCP window buffer of 64 kbytes can provide is about 853 kbps. The TCP window will therefore exhaust quickly, the network transfer can't keep the application spooling buffer topped up, and so the spool will empty - and in the legacy streamers this happened quickly - and then there is a dropout or pause in the music... and this is regardless of the broadband access sync speed. (Remember sync speed DOES NOT equal throughput.) This is what is usually behind the Tidal dropouts, for example.
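The bandwidth-delay arithmetic above can be sketched as follows; the numbers reproduce the two worked examples:

```python
# TCP throughput vs. window size and round-trip delay (RTD):
#   throughput = window / RTD      window needed = throughput * RTD

def window_needed(bits_per_s: float, rtd_s: float) -> float:
    """Bytes of TCP window needed to sustain a rate at a given RTD."""
    return bits_per_s / 8 * rtd_s

def max_throughput(window_bytes: float, rtd_s: float) -> float:
    """Maximum bits/s a fixed window can deliver at a given RTD."""
    return window_bytes * 8 / rtd_s

print(window_needed(1_000_000, 0.3))       # 37500.0 bytes = 37.5 kB
print(max_throughput(64_000, 0.6) / 1000)  # ~853 kbps: under a 1 Mbps FLAC stream
```

The second line is the failure case: once the RTD doubles, a fixed 64 kbyte window can no longer carry a 1 Mbps stream, however fast the access line is.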
There are at least three obvious ways to mitigate this, assuming one can't remedy the latency:
- Increase the TCP window buffer sizes (Windows PCs typically do this), but this is not possible on at least the legacy streamers.
- Increase the application spool buffers - this works well with inconsistent latency and a simple application data transfer model, and is what Naim uses in the new streamers to help insulate them from network throughput variability, albeit it will still fail for persistently high-latency transfers.
- Create a simple network accelerator by using a proxy server with larger TCP window sizes or greater max TCP memory - this is what I use with BubbleUPnP Server, and it will work with legacy and new streamers alike.
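To illustrate the third option, here is a toy TCP relay in the same spirit (purely a sketch: BubbleUPnP Server is a full UPnP proxy doing far more than this, and the remote host and port below are placeholders):

```python
# Toy "network accelerator": a local relay that terminates the streamer's
# LAN connection and fetches from the remote host on a socket with an
# enlarged receive buffer, so high internet RTD is absorbed here rather
# than by the streamer's small fixed TCP window.
import socket
import threading

REMOTE = ("streaming.example.com", 80)   # hypothetical remote host
LARGE_BUF = 4 * 1024 * 1024              # ask for a 4 MB buffer on the internet leg

def relay(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes from src to dst until src closes."""
    while True:
        data = src.recv(65536)
        if not data:
            break
        dst.sendall(data)

def serve(listen_port: int = 8080) -> None:
    srv = socket.socket()
    srv.bind(("0.0.0.0", listen_port))
    srv.listen(5)
    while True:
        client, _ = srv.accept()             # streamer connects over the LAN
        upstream = socket.socket()
        # Request a big kernel receive window for the high-latency leg
        # (the OS may cap this below the requested size).
        upstream.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, LARGE_BUF)
        upstream.connect(REMOTE)
        threading.Thread(target=relay, args=(client, upstream), daemon=True).start()
        threading.Thread(target=relay, args=(upstream, client), daemon=True).start()
```

The streamer then only ever sees the tiny LAN RTD, so its own 64 kbyte window is ample; the relay's larger buffers do the heavy lifting against internet latency.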
Thanks Simon, I think that just about makes sense, might have to read it again this evening when I’m not so busy! I guess I was assuming that the increased application buffer Naim now use was intended to be big enough to deal with the issue of latency in at least the majority of cases. I hadn’t really considered the role of TCP buffers in this - these things are never simple!
The size of the buffer isn't actually the important factor; it's how empty the buffer is allowed to get before more data is squirted into it.
The people with in-depth knowledge of the SBT believe the buffer settings can be optimised depending on the type of output used (analogue, S/PDIF or USB).
If this is true for other streamers with numerous types of outputs, it means the streamers are optimised for one output and not the others, or not optimised for any.
fatcat posted:The size of the buffer isn't actually the important factor; it's how empty the buffer is allowed to get before more data is squirted into it.
The people with in-depth knowledge of the SBT believe the buffer settings can be optimised depending on the type of output used (analogue, S/PDIF or USB).
If this is true for other streamers with numerous types of outputs, it means the streamers are optimised for one output and not the others, or not optimised for any.
Well the Naim streamers load the buffer full as each individual track starts, and keep topping it up until the whole track is loaded in. Do those clever Squeezebox people have a better idea?
Fatcat, no, not really; the size of the network buffer is important so as to establish a network throughput for a given RTD between hosts... standard network engineering principles. The application spool memory above the network stack can store a reservoir of application data that can survive, in some circumstances, when the underlying network stalls, whilst waiting for the network transport protocol to re-establish the transfer. Obviously the average network transfer throughput must exceed (because of overheads) the rate at which the application spool buffer is depleted. The bigger the application spool, the greater the averaging of data throughput over time.
Where the network is reliable and low latency, say on an Ethernet home network with a relatively performant streamer server, then the application spool memory as well as the network buffer can be small, as the underlying network data throughput is more consistent. But as I pointed out to Chris, one shouldn't confuse the different buffers. The network window buffer and the application data spooler buffer are different, address different things and are managed differently, albeit they both work in their own ways to provide an accurate application sample playout. The network buffer has to match its parameters with its network peer. The application spooler is independent of the network and is entirely controlled by the application.
If changing spooler buffer size is affecting SQ (other than complete interruption to playback) then it's likely the noise from reading and writing to memory is causing crosstalk with the clock circuitry and possibly the analogue circuitry. This will be because of limitations in the given streamer design. Naim, for example, go to some lengths to reduce this crosstalk in their designs, but it is not completely eliminated. It may be that with the SBT these compromises and crosstalk limitations are more evident.
Simon-in-Suffolk posted:But as I pointed out to Chris, one shouldn't confuse the different buffers. The network window buffer and the application data spooler buffer are different, address different things and are managed differently.
Simon
In terms of the above, what is the ALSA buffer?
ChrisSU posted:fatcat posted:The size of the buffer isn't actually the important factor; it's how empty the buffer is allowed to get before more data is squirted into it.
The people with in-depth knowledge of the SBT believe the buffer settings can be optimised depending on the type of output used (analogue, S/PDIF or USB).
If this is true for other streamers with numerous types of outputs, it means the streamers are optimised for one output and not the others, or not optimised for any.
Well the Naim streamers load the buffer full as each individual track starts, and keep topping it up until the whole track is loaded in. Do those clever Squeezebox people have a better idea?
The clever Squeezebox people will obviously be topping up the buffer. But they optimise/tweak sound quality by altering the size of the buffer and how often it is topped up. I've seen claims that the optimal settings for different outputs are not the same.
fatcat posted:Simon-in-Suffolk posted:But as I pointed out to Chris, one shouldn't confuse the different buffers. The network window buffer and the application data spooler buffer are different, address different things and are managed differently. Simon
In terms of the above, what is the ALSA buffer?
I am not totally familiar with ALSA, other than that it's an open-source, typically Linux, sound driver framework. So it is used for loading sound data into an interface from memory under software control, such as playing an audio file into a sound card or an S/PDIF or USB interface under CPU control. I kind of doubt Naim use ALSA, but if they did I imagine it would sit between the application buffer sample spool memory and the Analog Devices DSP front-end interface, and its job would be to keep the AD input port full.
ChrisSU posted:fatcat posted:Innocent Bystander posted:Ie a really large buffer. Just a gigabyte of RAM could hold considerably more than an album of 16/44, and memory is so cheap that it should not be a real issue, especially given the cost of Naim streamers. So I wonder if there is some other reason for not doing so, other than simply not recognising that even the enlarged buffer in the most recent devices is insufficient to iron out the vagaries of internet connections?
Perhaps increasing buffer size adversely affects sound quality. I know one of the tweaks to improve the sound quality of the SBT is to reduce the buffer size.
Also, if everybody in the UK downloaded 1 GB of data they didn't use, wouldn't that slow the interweb down? Not to mention the wasted energy.
I have no idea how big a buffer 'needs' to be to eliminate issues such as Tidal dropouts and compensate for high latency. The new streamers have 50Mb, which is a lot more than the old platform. The extra processing power to run the new streamers, load data more quickly over Ethernet and WiFi, etc. might have the potential to generate more noise, and mitigating this must have been a major goal for the designers. Let's hope that the Bubble 'workaround' is no longer required on the NDX2!
I understand that Tidal (and Qobuz) dropouts are more likely to occur when client applications attempt to throttle the data stream instead of downloading to a (very) large buffer, which is what the Tidal and Qobuz apps apparently do.
Failures when throttling the data stream appear to happen more often with certain Tidal and Qobuz servers and for long tracks. In these cases, connections are typically aborted by the server.
This is at least what the main developer of MPD has recognized and documented in response to https://github.com/MusicPlayerDaemon/MPD/issues/241 in version 0.21 of MPD, see https://github.com/MusicPlayer...MPD/blob/master/NEWS.
I assume that Naim streamers do not use MPD, but as long as they are throttling the data stream (e.g., to avoid overflowing relatively small local buffers) they are likely to encounter the same issues. These do not appear to be directly related to latency.
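The download-ahead approach described in the MPD issue can be sketched like this (a sketch only; MPD itself implements this in C++ with its own buffering machinery, and the URL in the usage line is a placeholder):

```python
# Download-ahead: fetch the whole remote file at full speed into RAM,
# then let playback read from the in-memory buffer. The server only ever
# sees a fast, eager client, so it has no reason to time out and reset
# the connection mid-track.
import io
import urllib.request

def download_to_buffer(url: str, chunk: int = 1 << 16) -> io.BytesIO:
    """Read the remote file as quickly as possible into an in-memory buffer."""
    buf = io.BytesIO()
    with urllib.request.urlopen(url) as resp:
        while True:
            data = resp.read(chunk)
            if not data:
                break
            buf.write(data)       # never pause between reads
    buf.seek(0)
    return buf                    # playback now reads from RAM, not the socket

# Hypothetical usage:
# track = download_to_buffer("https://streaming.example.com/track.flac")
```

The contrast with throttling is the key point: a throttled client deliberately pauses between reads to match playback rate, and it is during those pauses that an impatient server may drop the connection.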
Simon-in-Suffolk posted:fatcat posted:Simon-in-Suffolk posted:But as I pointed out to Chris, one shouldn't confuse the different buffers. The network window buffer and the application data spooler buffer are different, address different things and are managed differently. Simon
In terms of the above, what is the ALSA buffer?
I am not totally familiar with ALSA, other than that it's an open-source, typically Linux, sound driver framework. So it is used for loading sound data into an interface from memory under software control, such as playing an audio file into a sound card or an S/PDIF or USB interface under CPU control. I kind of doubt Naim use ALSA, but if they did I imagine it would sit between the application buffer sample spool memory and the Analog Devices DSP front-end interface, and its job would be to keep the AD input port full.
ALSA is an API for sound card drivers. In the configuration of a player or renderer, "alsa" is just the type of an output plugin. Other output plugin types could be for instance "oss", "jack", "null", "fifo", etc. Thus, for instance, a configuration file for MPD could contain lines like
audio_output {
    type   "alsa"
    name   "Allo DigiOne"
    device "hw:sndallodigione"
}
to set an Allo DigiOne as a possible output of MPD. The responsibility for buffering an incoming stream lies with the player or renderer, in this case MPD. ALSA comprises kernel drivers and can be used by commercial applications, so it is possible that Naim actually rely on ALSA for their new streaming platform. But they do not have to, of course.
nbpf posted:I understand that Tidal (and Qobuz) dropouts are more likely to occur when client applications attempt to throttle the data stream instead of downloading to a (very) large buffer, which is what the Tidal and Qobuz apps apparently do.
Failures when throttling the data stream appear to happen more often with certain Tidal and Qobuz servers and for long tracks. In these cases, connections are typically aborted by the server.
Hi nbpf, there is no 'throttling' or shaping of the data stream with Naim devices. Tidal dropout typically occurs when the RTD latency exceeds, usually temporarily, what the available TCP flow buffer (window) memory in the network stack can accommodate, such that overall transfer throughput drops. See my posts above. With TCP, throughput is a function of latency and the TCP flow memory of the two communicating hosts. The Naim TCP flow memory is limited, but is fine for home networks with tiny latencies; across the internet, however, especially with any access queuing and route congestion, the latencies can rise and exceed the resources available in the streamer to maintain a given throughput.
I have trawled over many, many logs and traces with Naim looking at this. For the most part, in the legacy firmware the TCP and network stack has been optimised to make it as fast as possible, so it has minimal impact on overall RTD latency within the relatively limited resources available, and this has improved things. There are some other flow behaviours added by Naim to help as well. This may be the same in the newer streamers - I haven't done the engineering examination on those - but the newer streamers use a larger application spool to mitigate the effects of irregular network flow to some effect, so they should be less sensitive to it.
S
Simon-in-Suffolk posted:nbpf posted:I understand that Tidal (and Qobuz) dropouts are more likely to occur when client applications attempt to throttle the data stream instead of downloading to a (very) large buffer, which is what the Tidal and Qobuz apps apparently do.
Failures when throttling the data stream appear to happen more often with certain Tidal and Qobuz servers and for long tracks. In these cases, connections are typically aborted by the server.
Hi nbpf, there is no 'throttling' or shaping of the data stream with Naim devices. Tidal dropout typically occurs when the RTD latency exceeds, usually temporarily, what the available TCP flow buffer (window) memory in the network stack can accommodate, such that overall transfer throughput drops. See my posts above. With TCP, throughput is a function of latency and the TCP flow memory of the two communicating hosts. The Naim TCP flow memory is limited, but is fine for home networks with tiny latencies; across the internet, however, especially with any access queuing and route congestion, the latencies can rise and exceed the resources available in the streamer to maintain a given throughput.
I have trawled over many, many logs and traces with Naim looking at this. For the most part, in the legacy firmware the TCP and network stack has been optimised to make it as fast as possible, so it has minimal impact on overall RTD latency within the relatively limited resources available, and this has improved things. There are some other flow behaviours added by Naim to help as well. This may be the same in the newer streamers - I haven't done the engineering examination on those - but the newer streamers use a larger application spool to mitigate the effects of irregular network flow to some effect, so they should be less sensitive to it.
S
It is quite possible that the Naim dropouts and the MPD dropouts are completely unrelated, as your analysis seems to suggest. Still, I think it is interesting to note that with a commit from June 14 2018, MPD has adopted a new method of dealing with Tidal and Qobuz streams:
For remote files (not streams), this downloads as quickly as possible
to a large buffer instead of throttling the stream during playback.
Throttling can make the server impatient and it may then disconnect.
This is what Qobuz and Tidal do, and this commit attempts to solve
this by not letting the Qobuz/Tidal server wait (closes #241).
https://user-images.githubusercontent.com/2726946/41459843-0f857b8c-708b-11e8-95a2-e1c6d5f36d27.png
We will see if this fixes the problem once 0.21 is released. The screenshot above shows a log with a typical event (the second RST) that results in a curl failure and MPD skipping to the next track.
I think it is also interesting to note that these failures occur even when a BubbleUPnP proxy is active. But I have never noticed any problem while streaming via the Qobuz app on the same LAN.
Hopefully the new Naim streaming platform is immune to these problems, but I have to say that I am starting to be mildly annoyed by Qobuz's sloppiness and I will probably not renew my subscription in the upcoming season.
Our internet is so slow that I resorted to using a 4G mobile signal from EE. However, after recently experiencing loads of dropouts as well as losing my mobile phone signal, I did a search on their website to check for known issues in my area.
After several emails telling me "it's taking longer than expected to resolve the issue", I received the following:
"We need to carry out a survey of the treeline, as we suspect that it may be blocking our microwave transmitters from sending signal between sites. We will next update you on 7 August."
So in the meantime, back to BT and also a poor mobile phone signal....