Codec Efficiency Comparison Test (iPod)
bk:
--- Quote from: Llorean on October 08, 2006, 05:12:30 PM ---There is ONE small problem about all this: It's hard/worthless to compare between say, Coldfire and ARM still.
Because of differences in processor architecture, unboosted and boosted speed, and operating system overhead, boost ratio really wouldn't mean that much when comparing the two.
--- End quote ---
Not true. If a codec is running at 90% boost on one arch and 60% boost on another, then clearly the optimization effort would be best directed towards the first architecture. "Operating system overhead" is meaningless since Rockbox is running on both.
Llorean:
You clearly don't understand the situation.
On one architecture, it's boosting from 30 to 75. On another it's boosting from, I think, 45 to 124. So, the 45 one may boost 0% of the time simply because the codec runs full speed at 45, while the other boosts some of the time because the codec doesn't run full speed at 30.
It is an unequal comparison of how optimized the codecs are, simply because the processors are in separate conditions.
Operating System Overhead IS DIFFERENT between different hardware. Because Rockbox is compiled into ARM assembly vs M68K assembly, operations take different amounts of time to complete. This means that basic code like user input handling can be more or less efficient on one architecture than another. Then when you get into the code, because one target has different inputs than the other, this introduces further differences in operating overhead. Take into account that each screen requires a different amount of time to update, and it will always be updating as long as you yield to the UI thread, and you get even *more* differences in operating overhead because the OS on one piece of hardware has to spend more time drawing than on the other.
There are VAST differences in how Rockbox itself performs on different hardwares, completely independent of codec performance.
For examples of this, try scrolling in long lists on an H300 vs an H100, or on an iPod Nano vs an iPod Photo vs an iPod Video.
Using a transcoder at full boost that does not yield, you accomplish a few things:
1) You prevent any other code from executing, ensuring that you're ONLY timing the codec itself.
2) You are running the codec using the full power of the processor without ANY questionable overhead, I believe, which then means that you have MP3@128 on ARM7 @ 75MHz vs MP3@128 on M68K @ 124MHz. Then you have only three real variables: the codec itself, the maximum processor speed, and the differences the architecture makes (a 75MHz ARM7TDMI is not the same actual speed as a 75MHz M68K ColdFire).
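As a rough sketch of what that measurement looks like (a hypothetical helper, not Rockbox's actual test plugin): time a complete, non-yielding decode at a fixed clock and express the result as a percentage of realtime.

```python
import time

def percent_realtime(decode_fn, audio_seconds):
    """Time a complete decode that never yields, then express decode
    speed as a percentage of realtime (200% = decodes twice as fast
    as playback needs)."""
    start = time.perf_counter()
    decode_fn()  # runs to completion; nothing else gets scheduled
    elapsed = time.perf_counter() - start
    return 100.0 * audio_seconds / elapsed
```

Run at the processor's full speed, this number depends only on the codec and the hardware, not on whatever the UI happens to be doing at the time.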
As far as I'm aware, there's a ridiculous number of things that can cause variance in the results.
If it's running at 90% boost on a processor that only goes up to 60MHz and runs at 25MHz idle, vs 60% boost on one that runs at 50MHz idle and 200MHz boosted, it's pretty clear that it's running less efficiently on the latter, and this case could reasonably come up. The only proper solution is to figure out how efficiently it runs relative to the architecture itself, which means you need to establish a performance value that is independent of actual processor speed and of any hardware you can possibly remove from the equation.
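The kind of clock-independent value described here can be sketched with a back-of-the-envelope model (not Rockbox code; the 25/60MHz and 50/200MHz figures are the hypothetical ones from the post above):

```python
def average_mhz(idle_mhz, boost_mhz, boost_ratio):
    """Rough average clock speed the codec consumes during playback,
    given how often the CPU had to boost to keep up."""
    return idle_mhz * (1.0 - boost_ratio) + boost_mhz * boost_ratio

def load_fraction(idle_mhz, boost_mhz, boost_ratio):
    """Fraction of the processor's maximum speed the codec needs.
    Still ignores ISA differences and OS overhead -- the caveats
    raised in this thread."""
    return average_mhz(idle_mhz, boost_mhz, boost_ratio) / boost_mhz

# The example above: 90% boost on a 25/60MHz chip vs
# 60% boost on a 50/200MHz chip.
print(average_mhz(25, 60, 0.90))     # 56.5 MHz consumed
print(average_mhz(50, 200, 0.60))    # 140.0 MHz consumed
print(load_fraction(25, 60, 0.90))   # ~0.94 of the slow chip's max
print(load_fraction(50, 200, 0.60))  # 0.70 of the fast chip's max
```

The second codec consumes far more absolute MHz even though it boosts less often, which is the "running less efficiently on the latter" case above. Of course, since a ColdFire cycle and an ARM cycle do not perform equal work, even this comparison only holds within one architecture.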
bk:
--- Quote from: Llorean on October 08, 2006, 06:34:52 PM ---You clearly don't understand the situation.
--- End quote ---
Do me the favor of not presuming what I do or do not know.
--- Quote ---On one architecture, it's boosting from 30 to 75. On another it's boosting from, I think, 45 to 124. So, the 45 one may boost 0% of the time simply because the codec runs full speed at 45, while the other boosts some of the time because the codec doesn't run full speed at 30.
--- End quote ---
a) On different architectures, cycles are not 100% equivalent; b) if the goal is to increase the performance of various codecs, then boost ratio is an adequate metric to use, regardless of hardware differences.
--- Quote ---It is an unequal comparison of how optimized the codecs are, simply because the processors are in separate conditions.
--- End quote ---
Somewhat true but irrelevant. If codec A has 10% boost on ARM and 75% boost on ColdFire then optimization effort on ARM is not as worthwhile as effort spent on ColdFire for that codec.
--- Quote ---Operating System Overhead IS DIFFERENT between different hardware. Because Rockbox is compiled into ARM assembly vs M68K assembly, operations take different amounts of time to complete. This means that basic code like user input handling can be more or less efficient on one architecture than another. Then when you get into the code, because one target has different inputs than the other, this introduces further differences in operating overhead. Take into account that each screen requires a different amount of time to update, and it will always be updating as long as you yield to the UI thread, and you get even *more* differences in operating overhead because the OS on one piece of hardware has to spend more time drawing than on the other.
--- End quote ---
Also do me the favor of not explaining the process of compilation as if I were a child. Rockbox is the operating environment on all architectures: if there are threading inefficiencies (for example) on one target affecting codec performance, then they would be exposed through these tests (indirectly) and could be addressed. This is the entire point of performance benchmarking.
Llorean:
In response to your "cycles are not 100% equivalent": you may have noticed that I mentioned that very point in many places.
You seem to think that all that matters is a codec's boost ratio. Which is completely pointless, since the processors run at different speeds, the ARM core we currently have being half the speed of the ColdFire.
Your theory is that even if a codec is running *faster* on one architecture than on another, if the processor is slower and thus must boost more, concentration should go there. Which is silly to an extent, because the codec is actually *more* efficient in that situation, and optimization efforts could very well be wasted time or diminishing returns. It's better to know an absolute value of efficiency. You can still concentrate more on the slower processors if the speed/efficiency ratio means that processor will be boosting more, but it also gives you a benchmark upon which to base overall speed for that architecture. That is particularly useful in considering future targets: for example, if you're considering a slower ARM-based target, you can have a better idea of what may or may not be feasible on it.
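One way to use such an absolute figure for the future-target question is a simple headroom check (hypothetical helper; the 20% margin is an arbitrary assumption, and the estimate only applies within a single architecture, since cycles are not comparable across ARM and ColdFire):

```python
def feasible_on(target_max_mhz, measured_avg_mhz, headroom=1.2):
    """Would a codec measured at measured_avg_mhz (on the same ISA)
    fit on a target topping out at target_max_mhz, leaving ~20%
    headroom for UI and disk activity?"""
    return measured_avg_mhz * headroom <= target_max_mhz

# E.g. a codec measured at ~50MHz on an existing ARM target:
print(feasible_on(75, 50))  # True  -- fits with headroom
print(feasible_on(55, 50))  # False -- would boost constantly
```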
And the reason I suggested you don't understand what's going on is that you said ""Operating system overhead" is meaningless since Rockbox is running on both," which is what I was responding to. That is clearly an untrue statement, as this thread is about CODEC EFFICIENCY, which means the tests should relate to the codecs themselves, not Rockbox performance on a given system. Performance does differ, and hardware (particularly screens) interferes GREATLY and cannot necessarily be improved. This is not a fact that can simply be washed away with the word "irrelevant."
Otherwise we can simply say "all codec optimization efforts should happen on the 5G iPod (or perhaps 3G iPod)", because, simply put, it has the single worst playback performance of any current system other than the 3G iPod with its broken cache.
Edit: As a side note, please don't talk back to me about explaining compilation. The VAST majority of users here are not very aware of such topics, and I have no way of knowing whether you are or aren't. It is not speaking down to you as if you were a child, as VERY few children understand that concept. It's speaking to you as if you were "average", which is often not considered speaking down to someone at all, simply speaking to someone who may not be aware of the details of a technical concept. If you cannot discuss this idea without getting offended, I suggest you step away from it for a few hours and come back when you have cooled down, as in no way is ANYTHING I say here meant to belittle you; I am just stating my perception of the topic. As far as I'm concerned, I said nothing untrue, up to and including the fact that your statements demonstrated a lack of knowledge about the causes of differing performance across hardware and the relevancy of operating system overhead to the topic of "Codec Efficiency Comparison".
saratoga:
--- Quote from: bk on October 08, 2006, 05:40:07 PM ---
--- Quote from: Llorean on October 08, 2006, 05:12:30 PM ---There is ONE small problem about all this: It's hard/worthless to compare between say, Coldfire and ARM still.
Because of differences in processor architecture, unboosted and boosted speed, and operating system overhead, boost ratio really wouldn't mean that much when comparing the two.
--- End quote ---
Not true. If a codec is running at 90% boost on one arch and 60% boost on another, then clearly the optimization effort would be best directed towards the first architecture. "Operating system overhead" is meaningless since Rockbox is running on both.
--- End quote ---
I think what Llorean meant was that boost doesn't tell you which is faster, not that the numbers themselves were useless.
--- Quote ---On one architecture, it's boosting from 30 to 75. On another it's boosting from, I think, 45 to 124. So, the 45 one may boost 0% of the time simply because the codec runs full speed at 45, while the other boosts some of the time because the codec doesn't run full speed at 30.
It is an unequal comparison of how optimized the codecs are, simply because the processors are in separate conditions.
--- End quote ---
I think bk just means that comparing the relative boost ratios tells you which platform needs optimization most. I'm not really sure why we would care about that, but it's a valid point. I agree with you that it doesn't really mean anything, though.
--- Quote ---Somewhat true but irrelevant. If codec A has 10% boost on ARM and 75% boost on ColdFire then optimization effort on ARM is not as worthwhile as effort spent on ColdFire for that codec.
--- End quote ---
I don't see how you can conclude this. Given the differences in ISA, power consumption, and battery capacity, it's entirely possible that they're equally worthwhile. For instance, ColdFire could be highly optimized but poorly suited for the task, while ARM could be poorly optimized but well suited.
At any rate, since the developers working on each platform are different people, it's not very relevant. Knowing that X needs optimization more than Y doesn't help if there's a fixed group of people who work on X and a separate group that only works on Y.