-24db Precut Sound Quality on Rockboxxed iPod?

(4/10)

Llorean:
You're talking gibberish. The above signal path is *not* just 16-bit audio being transformed over and over. It includes the gain adjustment, which removes accuracy; that's exactly why you wrote 28/32. That's 4 bits of lost accuracy. When you scale that down to the 16-bit scale, it's 2 bits of lost accuracy at that scale, is it not? But that assumes 4 bits is the right number of lost bits at 32 bits, etc.

You're trying to imagine a scenario where audio can be transformed losslessly. If you decrease the gain in the digital domain, this is done by multiplying each sample by a constant. Since you're decreasing, the result is a changed sample and a loss of accuracy. You can't magically make the number smaller and still keep all the bits.

For example, if you have a sample that is 1111 1111 1111 1111 and you multiply it by a constant less than one, how would you keep all the data in that sample? You lose some, simply by definition.
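The argument above can be sketched in a few lines of Python (illustrative only, not Rockbox's actual DSP code): a digital gain cut is a multiply followed by rounding back to an integer sample value, which merges neighbouring samples.

```python
# Illustrative sketch: a digital gain cut on 16-bit integer samples.
# A -6 dB cut is roughly multiplication by 0.5; the result must be
# rounded back to an integer, so the least significant bit is lost.

def apply_gain(sample: int, gain: float) -> int:
    """Scale a sample and round to the nearest integer value."""
    return round(sample * gain)

# Two samples that differ only in the LSB...
a = 0b1111111111111100   # 65532
b = 0b1111111111111101   # 65533

# ...collapse to the same output value after the cut:
print(apply_gain(a, 0.5), apply_gain(b, 0.5))   # 32766 32766
```

Once two distinct inputs map to one output, no later processing can tell them apart again, which is the loss being described.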

Andhyka:
Yes, actually you are right. It loses a lot of accuracy in the process when scaled to 16-bit, which is what Rockbox does right now.

So this is the Original Rockbox:

16 -> 32 -> 28/32 -> 14/16 -> DAC -> 14bit analog signal

But so far I've noticed the iPod Rockbox only loses 2 dB of accuracy (instead of 4 dB, which I thought it would be).

Well, as I mentioned above, I'm happy with a -12 dB pre-cut, listening to iPod Rockbox at -15 dB volume.

Until I found out that the Wolfson DAC supports 96 kHz/24-bit. Rockbox could be updated to do this:

16 -> 32 -> 28/32 -> 21/24 -> DAC -> 16bit analog signal
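A rough way to see why the wider output path matters (a sketch with assumed arithmetic, not actual Rockbox DSP code): a -24 dB pre-cut is close to dividing by 16, i.e. shifting out 4 bits. In a 16-bit output path those 4 bits were real data; in a 24-bit path they land in the zero padding added when widening, so the original 16 bits survive.

```python
# Illustrative sketch: apply a ~-24 dB pre-cut (divide by 16, i.e. >> 4)
# after widening a 16-bit sample to the output word width.

def precut_24db(sample_16: int, out_bits: int) -> int:
    widened = sample_16 << (out_bits - 16)   # pad with zero LSBs
    return widened >> 4                      # ~ -24 dB cut

# Count how many distinct output values survive across all 16-bit inputs:
distinct_16 = len({precut_24db(s, 16) for s in range(65536)})
distinct_24 = len({precut_24db(s, 24) for s in range(65536)})
print(distinct_16, distinct_24)   # 4096 65536
```

In the 16-bit path only 4096 distinct levels remain (the 12-bit result of losing 4 bits), while the 24-bit path keeps all 65536, matching the 14/16 vs 21/24 figures in the two signal paths.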

Edit: I finally realized that the pre-cut in the software EQ actually cuts away gain/SNR/fidelity. I still hope the pre-cut algorithm can be changed to a downscaling algorithm.

Llorean:
Again I ask you: How do you decrease a signal without losing anything? The statement makes no sense. The least you can possibly lose is the least significant bit. It's simple math.

Andhyka:
Umm, I don't know how the algorithm works, but the normalize feature in Adobe Audition seems to do exactly that. It lets recordings with compressed fidelity (prevalent in the early 1990s) be stretched to their maximum potential. I don't have it, as it's very expensive, but you get the idea from here:

http://blogs.magnatune.com/buckman/beier1.gif
http://blogs.magnatune.com/buckman/beier2.gif

Edit: Hmm, I guess this must be resource-intensive, and not implementable on the iPod.

Llorean:
"Normalizing" doesn't do any such thing. Normalizing usually means increasing or decreasing the volume to a set point, typically in an attempt to have all tracks peak at 0 dB. It doesn't gain you any additional fidelity; all it does is shift/stretch where your existing range lies. Are you perhaps referring to something other than a "Normalize" feature? I assure you that you cannot magically recover data that no longer exists. The very best you can do is attempt to approximate that data in some manner.

Yes, it maximizes resolution, but it doesn't add any additional data. For example, as you said, when you go from 16-bit to 32-bit you don't gain *anything*, because you still have the same sample.

If you take something that only uses 12 bits and you normalize it to full-scale 16-bit, you still have the same information (and because you're scaling by a non-integer multiple, you no longer have a lossless copy of the original).
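This point is easy to check numerically (a sketch with assumed numbers, not Audition's actual algorithm): normalizing a quiet recording that only spans 12 bits up to full-scale 16-bit stretches the range, but the count of distinct levels stays the same.

```python
# Illustrative sketch: "normalize" a 12-bit-deep signal to 16-bit full scale.

def normalize(samples, full_scale=65535):
    """Scale samples so the peak lands on full_scale (peak normalization)."""
    peak = max(samples)
    return [round(s * full_scale / peak) for s in samples]

quiet = list(range(4096))    # every level a 12-bit signal can hold
loud = normalize(quiet)

print(max(loud))             # 65535 -> now peaks at 16-bit full scale
print(len(set(loud)))        # 4096  -> still only 12 bits' worth of levels
```

The output now spans the full 16-bit range, but the gaps between usable levels grew by the same factor: no fidelity was created, only the placement of the existing range changed.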
