Rockbox General > Rockbox General Discussion

discourse: why do corporations allow "jailbreaking" and "flashing" at all?


Confuseling:

--- Quote from: cowonoid on August 15, 2010, 12:00:13 AM ---So you think it's not even that they don't care; their intention is just... not yet decided. An internal, democratic delaying of things...

I think if I were Steve Jobs, I would be very happy if my device had not yet been hacked. Either reason would be fine with me: no one has had the idea because my OS is so good, and/or the devices have good security (which *I* could sell him for $1 million... Hello Steve, read this!!!!  ;))


--- End quote ---

I'm not even really saying delay, so much as a permanent balance of irreconcilable interests. I suspect that some engineer says "Our device is wide open due to exploit X! If we go to market like this, it'll be hacked within months!", some executive says "Well, that'll cost an awful lot to plug, and we're behind schedule already", some marketing strategist says "Well, our last player's figures suggest that 1% of our users actually bought the device specifically because they could install aftermarket firmware", and another engineer points out "And we only ended up fixing bugs Y and Z because we borrowed the solution from them..."

Someone or other then decides, after much head scratching, whether to budget for increased security, decrease it, or leave it as it is. Historically, no doubt, there was often an a priori assumption that hackers were evil brutes who ruined your carefully designed product out of sheer spite - and therefore that the only consideration was how much it would cost for the level of security you felt you needed. Nowadays, I suspect most companies are clever enough to see the whole thing more subtly - as contributing advantages and disadvantages to a complex overall product strategy.

As to the last bit - well, I'm inclined to trust the judgement of the programmers. If Apple could lock their systems up tight for a reasonable price, they certainly have the temperament to do so. They don't seem to have managed it yet...

torne:

--- Quote from: cowonoid on August 15, 2010, 12:00:13 AM ---Come on, I could even train my grandmother to verify digital signatures on e-mails! Just the same, I could implement it on my DAP, so that it only accepts *my* firmware. And apps are nothing but pieces of additional firmware, which can be signed too. So the update feature honestly can't be the hard part, can it? And as far as I know, there is no MP3 that can contain malicious code and cause a buffer overflow! The same goes for the classic web browser, which only translates HTML code directly into lines and text. And if every app process is additionally embedded in a nice rights management... (I hope this wasn't a good idea I could have sold!)

--- End quote ---
Loads of hackable devices sign their firmware and their application binaries. People hacked them anyway. Implementing these things *correctly* is very hard (see the 24kpwn exploit for the iPhone 3GS), and implementing them so that they cover all possible attack vectors is basically impossible. See the original free60 exploit on the 360: while executables are signed, game data isn't unless the game developer chose to sign it, and "GPU shaders" were considered game data even though they have the ability to write to RAM. Even if you think you've done everything, certain people are willing to devote heroic efforts to cracking a sufficiently interesting device. See the PS3 hypervisor glitch exploit, which relies on interfering with the power supply to the processor at *exactly* the right moment so that it miscalculates whether a signature is valid. It only succeeds one time in many thousands, but the hacking device can just try over and over until it wins.

You say an MP3 can't contain malicious code, but it clearly can; many of the exploits on modern devices are buffer overflows in things like JPEG decoders or font renderers. Also, a web browser is a huge attack surface, even if it doesn't support any kind of scripting: web browsers use loads of libraries to display different image/sound/etc formats, any one of which might have a bug in it.

Bagder:

--- Quote from: cowonoid on August 15, 2010, 12:00:13 AM ---Come on, I could even train my grandmother to verify digital signatures on e-mails! Just the same, I could implement it on my DAP, so that it only accepts *my* firmware.

--- End quote ---

Many have done so. They have been hacked anyway, using many different approaches. Here are three:

1 - the digital signatures could be removed/worked around (Sansa v1 Rhapsody style)

2 - the digital signature had a known mathematical flaw not taken into account by the manufacturer (Sansa e200 v1 style)

3 - by triggering a buffer overflow in the original firmware that then exposed the correct digital keys


--- Quote ---And as far as I know, there is no MP3 that can contain malicious code and cause a buffer overflow!

--- End quote ---

That's... just... wrong. There are MANY such vectors. Every single music file has custom data embedded that can potentially overflow a buffer.

I think you need to do your homework a lot better.

cowonoid:
Thanks, saratoga, torne, and Bagder for the technical explanations. I will soon read about the links you sent and the exploits you mentioned to understand it better.

As I said, I am not a computer scientist. I just find it very weird that manufacturers can't produce a flawless system even though they are in full control of the whole creation process. From what you're telling me, producing a completely secure system is like producing secure safes: with a big enough saw (and enough courage, motivation, fun) you can get through even the hardest wall.

Nevertheless, I still don't understand how an x-bit asymmetrically-signed file can be forged without the private key. They don't even have the public key! Coupled with a 24h "try again" interval after an unsuccessful upload, they couldn't find it by brute force. Additionally, one could equip each system with a different private/public key pair, assigned to the serial number of the device, with each firmware update automatically signed on download from the vendor's page.

Regarding buffer overflows and arbitrary code execution:
squelching all security bugs would probably be the wrong (and impossible) approach. What I was thinking of: the whole system has no write permission to the place the programs run from. There's an independent microcontroller that is responsible for signature checking and flash-writing tasks. It has no attack surface because its only input is the firmware!

Or one could even use a watchdog-like principle, encapsulated in a separate µC, which requests an "act of faith" - let's say - every 10 minutes. The main system must then send "a signed checksum of all the stuff going on on the device" to the watchdog. If this isn't kosher to the watchdog, it simply shuts down the power line. It's like "corporation staff visiting your device every 10 minutes for maintenance".

All these ideas amount to introducing a "supervising level" into the device; if you're not allowed to send Apple security in person to check all the devices, you just put it *into* the device.
I find this idea exciting! Basically it would be like the in-person visit, except that you put the will/intention into a chip. Thus it becomes legal; it's inside the generally accepted system boundary of a DAP. Apple staff knocking on your door is not.

soap:
1 - On public key cryptography:  That's all well and good, but processors don't execute ciphertext, they execute plaintext.  The firmware may be encrypted when stored, but it is decrypted when run.

Either the firmware is stored in plaintext on the DAP, or the DAP itself must have the key (in order to decrypt the firmware before execution).

This is the classic struggle: you are selling the end user both the lock and the key; you're just trying your best to hide the key from them.  It MUST be there, though.
 
2 - On a watchdog.  If the watchdog controls behavior by controlling the power lines... bypass it and feed power from somewhere else. 

This is no different from most software copy-protection schemes: a tacked-on "watchdog" routine which disables the software if the dongle / authentication server / CD-check isn't found.  All these sound great!  How do you fake a cryptographic challenge-and-response dongle?  How do you fake a cryptographically sound authentication server?  How do you fake a proprietary optical disc?

You don't!

You snip out the watchdog.

