Upsampling to 96k?
Posted by: mige0 on 28 August 2011
Any experience with SQ improvements or not (WAV / FLAC)?
Which software to recommend (Linux/ Windows)?
Michael
When I set up a laptop using CMP I had the option of upsampling, but only globally - everything or nothing.
My experience was that there were certain albums where I felt that it helped, but generally I felt that on rock & pop, which is my main musical diet, it somehow robbed the music of energy and immediacy ...so I lived without.
IF I could have triggered it using some form of metadata on an album-by-album basis I would have used it occasionally.
M
Just did a quick test myself.
At 96k I get better flow and more intense color/intimacy - given I use WAV 96k or FLAC 96k (at compression level zero).
I could follow more complex parts with greater ease (with harmonics better related), and to my taste there is less edginess/metallic coloration involved compared to 44.1k in general.
So what is left is the search for the "best" software to up-sample my HDD
Michael
Can someone please explain what up-sampling can do?
If the source is recorded @ 44.1, how can doubling (or more than doubling) the sample rate of an already recorded file improve it? There is nothing that can be added, surely?
Or am I missing something fundamental?
The Nyquist sampling rule shows that you must sample at greater than double the highest frequency being sampled if you are going to be able to turn it back into an analogue signal.
When you sample a sound at a rate of x kHz, there will be reversed and upright images of the sound either side of x, 2x, 3x etc.
These alias images of the original sampled sound need to be filtered out to stop interference and distortion.
Very effective filters over a relatively narrow range of frequencies are hard to make in the real world. The filter needs to stop all the aliases but not affect the frequencies that have been sampled. Filters have a slope, i.e. the rate at which they attenuate frequencies. For the sampled sound to have as high a bandwidth as possible, the filter needs to be as steep as possible, so as not to attenuate frequencies associated with the sampled sound, i.e. dulling the sound.
Steep filters have problems and cause distortions, and so not all frequencies near the cut-off point (typically 0.4 of the sample frequency in the real world, or theoretically 0.5 of the sample frequency in the maths books) are treated at the same rate. We hear frequencies as opposed to shapes of sound, and so we hear this uneven frequency response as unnatural.
We can, however, process the original sample by increasing its sample rate. So to quadruple its sample rate we would add three blank (zero) samples after each sample. This is called up-sampling.
This has the effect of pushing the Nyquist frequency a lot higher relative to the original audio, i.e. the aliasing images of the original sample will be further away and higher in frequency.
The net effect is that we can have a filter that is less extreme and has fewer side effects on the reconstruction of the original sample. Or, possibly in this example, the filter slope is the same but is more benign because it is doing its work way higher than the sampled sound's frequencies.
This will sound more natural to us, and so is better hifi.
That is why we up sample or oversample.
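The zero-stuffing step described above can be sketched in a few lines of Python - a toy illustration with made-up rates and a short test signal, not how any particular player or DAC implements it:

```python
import math

def zero_stuff(samples, factor):
    """Insert factor-1 zeros after each sample, raising the nominal rate."""
    out = []
    for s in samples:
        out.append(s)
        out.extend([0.0] * (factor - 1))
    return out

# A 1 kHz sine sampled at 8 kHz (8 samples per cycle), quadrupled to 32 kHz.
fs, f = 8000, 1000
x = [math.sin(2 * math.pi * f * n / fs) for n in range(16)]
y = zero_stuff(x, 4)
```

Every 4th output sample is an original; the reconstruction filter then smooths the zeros in between, and because the alias images now sit much higher in frequency, that filter can be far gentler.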
I hope that helps.....
Wow thanks for that Simon.
Yeah I know - because I use the digital out exclusively, it's set to 96k anyway.
Upsampling files, though, is quite a different animal - also you do not run into clocking (jitter) issues, as is always the case with SRC done at hardware level.
Michael
Dithering was set to triangular - just in case - but didn't check it out sound wise.
By the way - in my setup the use of a cheap standard S/PDIF transformer in the digital line was what definitely helped make the bird fly.
Seems that there's too much ground-loop interference otherwise, even with the NAIM chassis lifted (no other connections except mains and LAN) and the S/PDIF IN isolated.
Well, that's possibly nothing you NAIM guys will benefit from, as you would need an input that can handle S/PDIF at PRO level - don't know - just to have it mentioned for the tweakers among you.
This cute little detail came as a surprise to me as well.
Michael
Also up sampling and jitter are entirely different processes in reconstruction..... UNLESS you are referring to reducing jitter in the *original* digital sampling errors and up sampling will reduce it as you reconstruct the waveform, again thanks to the maths, but it is contentious whether this jitter is audible.
Simon
quote:Michael I think you'll find the chassis grounding is purely for the analogue outputs and has no bearing on anything else such as digital input
Well yes - in common audio theory.
quote:I am aware that the ndx and NDAC have galvanic isolators to remove earth loops on electrical digital inputs.
Most certainly they will have - don't know, and didn't want to measure - but it's not only the galvanic isolation that matters - it's about pushing the CMRR as high as ever possible.
quote:Also up sampling and jitter are entirely different processes in reconstruction
They (up-sampling and jitter) have nothing in common, I agree - but there is complicated interaction when looking at the system as a whole.
Simply put, S/PDIF and its PRO twin AES/EBU are inferior constructs, as they lack an assignable clocking path (an upstream clock would be needed here).
quote:UNLESS you are referring to reducing jitter in the *original* digital sampling errors and up sampling will reduce it as you reconstruct the waveform, again thanks to the maths,
Not sure if I understand - there is no such thing as "reducing jitter in the *original* digital sampling".
Once jitter is encoded in the digital stream, it has become an inseparable part of the signal and can't be removed.
quote:but it is contentious whether this jitter is audible.
Michael
The "up-sampling to 96k" is a setting for this output of the UnitiCute only
Michael
Noise = 2 * pi * A * F * Tj
where:
Noise = RMS noise induced by jitter
A = Sine wave amplitude
F = Sine wave frequency
Tj = Time of jitter (RMS seconds)
Therefore an original signal, when sampled, has inherent jitter noise.
Remember, encoding and decoding have to be seen as a pair of normalised transformational processes.
Therefore at reconstruction with oversampling, in a normalised function, each time we increase the sample frequency we in effect need to reduce A above for the original signal to compensate (a bit like averaging down). Therefore the RMS noise caused by the original jitter is reduced each time we upsample.
Also remember that in the real world F is a summation of sinusoids, not the single one used here for simplification.
Finally, this works because noise (jitter here) has a Gaussian distribution; our wanted signal does not.
DSP is an amazing thing, and this property is used in thousands of critical systems, as well as audio systems benefiting from it.
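Plugging illustrative numbers into that formula gives a feel for the scale - the full-scale 10 kHz sine and the 250 ps RMS jitter figure below are assumptions chosen for the example, not measurements of any real gear:

```python
import math

def jitter_noise(a, f_hz, tj_s):
    """RMS noise induced by clock jitter on a sine: Noise = 2*pi*A*F*Tj."""
    return 2 * math.pi * a * f_hz * tj_s

# Full-scale (A = 1) 10 kHz sine with 250 ps RMS clock jitter:
noise = jitter_noise(1.0, 10_000, 250e-12)
snr_db = 20 * math.log10(1.0 / noise)   # roughly 96 dB, i.e. 16-bit territory
```

So at these assumed figures the jitter noise floor sits around the resolution of 16-bit audio; higher signal frequencies or worse clocks push it up proportionally.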
quote:Dithering was set to triangular - just in case - but didn't check it out sound wise.
Surely if you add dither when upsampling you will add distortion? Isn't dither only used when down-sampling or mixing to mask quantisation noise? dBpoweramp only recommends adding dither when reducing bit-depth.
Down-sampling and up-sampling are different - it's just calculation errors that *may* introduce patterns that *may* be less audible with dither applied, as far as I know - but as said, I wouldn't say it's necessary at all, at least as long as I haven't dug any deeper and checked it out by ear myself - and I didn't yet.
Also, up-sampling in hardware is a different animal from up-sampling in software. It's always best IMO to use the native format and to avoid any hardware SRC.
You know - there are lots of people that do not "hear" jitter - though I think *I* do - meaning, it differs what anybody is sensitive to.
Michael
quote:Michael, ok let me help. Some basic DSP. Jitter is nothing more than noise in the time domain that, when transformed to the continuous (analogue) domain, appears as a noise of frequencies.
...
So far so good
quote:
Therefore an original signal when sampled it has inherent jitter Noise.
Not so clear - with no jitter present at sampling, there is also no encoded jitter in the digital signal.
quote:
..
Therefore at reconstruction with oversampling, in a normalised function, each time we increase the sample frequency we in effect need to reduce A above for the original signal to compensate (a bit like averaging down). Therefore the RMS noise caused by the original jitter is reduced each time we upsample.
Also remember that in the real world F is a summation of sinusoids, not the single one used here for simplification.
Finally, this works because noise (jitter here) has a Gaussian distribution; our wanted signal does not.
might be so - dunno - but what is your "helping" conclusion from this?
quote:
DSP is an amazing thing and this property is used in thousands of critical systems as well us benefitting from it audio systems.
Well - I certainly agree about DSP being an amazing thing, opening the door to a lot of things not previously available to us.
Michael
First point: all signals, when sampled, will have jitter, even at extremely low levels. It might not be relevant though.
Second point: the power of the jitter reduces more than the signal when oversampling is used, so the jitter noise is reduced relative to the wanted signal.
We seem to be coalescing here :-)
Simon
There is no calculation error in upsampling to an integer multiple. It is perfect. Adding dither could only add distortion.
quote:There is no calculation error in upsampling to an integer multiple. It is perfect. Adding dither could only add distortion
44.1k -> 96k ???
Besides that - up-sampling means interpolation anyway
Michael
16 bits to 24 bits = times 256 = easy peasy = completely accurate.
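The bit-depth half of that really is exact - widening a signed 16-bit sample to 24 bits is just a multiply by 256, i.e. a left shift of 8 (a minimal sketch):

```python
def widen_16_to_24(s16):
    """Pad a signed 16-bit sample out to 24 bits; exact, nothing is invented."""
    return s16 << 8   # same as s16 * 256

# The 16-bit minimum -32768 lands exactly on the 24-bit minimum -8388608,
# and zero stays zero - no rounding, no interpolation, no loss.
lo, hi = widen_16_to_24(-32768), widen_16_to_24(32767)
```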
I am not an expert in DSP [digital signal processing] but my reading suggests the following ..
if you are going to upsample then pick a multiple of the original, e.g. 44.1 to 88.2 rather than
96 ... both Resolution Audio and Wadia recommend this
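A quick sketch of the arithmetic behind that advice, using Python's fractions module: 88.2k is an exact 2x multiple of 44.1k, while 96k reduces to the awkward ratio 320/147, which forces a fractional (interpolate-by-L, decimate-by-M) resampler:

```python
from fractions import Fraction

# Target rate over source rate, reduced to lowest terms.
r_882 = Fraction(88200, 44100)   # 2       -> simple integer upsample
r_96  = Fraction(96000, 44100)   # 320/147 -> fractional resampling needed
```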
james
Is it because the resulting DAC conversion becomes more difficult = expensive ?
Can my Unitserve upsample for output to my nDAC ?
Cheers, Paul
@likesmusic
I possibly should have been more precise, so I'll add now:
"Besides that - up-sampling means interpolation anyway"
... which kind of interpolation is used is up to the coding guy and the limits of the computational power available - but simple linear interpolation, as you seem to refer to, most certainly isn't the "best" way to do it.
This is easiest to understand at the top end, at Nyquist (half the sampling frequency), where a sine is represented by only two samples per cycle.
If these are up-sampled and linear interpolation is applied, the resulting waveform is forced towards a triangle wave - clearly not what we desire IMO.
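A toy sketch of that worst case (assumed numbers; real resamplers use windowed-sinc or polyphase filters rather than this): a sine right at Nyquist, sampled at its peaks, then 4x up-sampled by naive linear interpolation:

```python
def linear_upsample(samples, factor):
    """Naive linear interpolation between successive samples."""
    out = []
    for a, b in zip(samples, samples[1:]):
        for k in range(factor):
            out.append(a + (b - a) * k / factor)
    out.append(samples[-1])
    return out

x = [1.0, -1.0, 1.0, -1.0]       # sine at fs/2, two samples per cycle
y = linear_upsample(x, 4)
# y ramps linearly between +1 and -1: a triangle wave, whose odd harmonics
# were never in the original sine - which is why better interpolators exist.
```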
But possibly you wanted to make aware of something different?
Michael
Simon
Many thanks - I understand now.
PS - I vaguely remember Fourier series and Laplace transforms from my engineering degree 3 decades ago, but would struggle to spot one now if I tripped over one.
Cheers ! Paul