New Discovery May Make Encryption ‘Exponentially Easier’ To Break

“Brute force” may not seem like a term well suited to mathematics or the quiet pursuits in cryptography that helped drive information security from the Second World War on through to the modern surveillance state. Yet, it’s a term that may come back to prominence in the coming years, as researchers inch closer to finding the cracks in modern encryption algorithms. While programs like PRISM and XKeyScore may have access to your emails and social media, they’ve gained that access through legal requests, not hacking — encryption is still theoretically secure from mathematical attacks both sophisticated and brute.
In this context, “brute force” refers to the simple act of checking all possible combinations in order. You want to know a four-digit PIN, and so begin by guessing 0000, then 0001, then 0002, and so on. For a computer, such an operation is trivial, and will return the correct answer almost instantly. Cryptography is, really, the process of making such analyses as difficult as possible. With a purely brute force approach, it’s thought that a modern computer could chew on today’s PGP-derived encryption schemes for literally billions of years before happening upon a correct solution. This is what we call “secure.” Yet new research (pdf) from an MIT team including professor Muriel Médard (pictured above) suggests that our trust in communications security may be based on false assumptions about the very nature of information.
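To make the brute-force idea concrete, here is a minimal Python sketch of that exhaustive search; the check_pin function and the PIN value are hypothetical stand-ins for whatever system an attacker would actually be querying.

```python
# A minimal sketch of the brute-force idea: try every four-digit PIN in
# order until one matches. check_pin and SECRET_PIN are hypothetical
# stand-ins for the system being attacked.
SECRET_PIN = "4071"  # hypothetical

def check_pin(guess):
    # In a real attack this would query the target system.
    return guess == SECRET_PIN

def brute_force_pin():
    # Enumerate 0000 through 9999 -- trivial work for a modern computer.
    for n in range(10_000):
        guess = f"{n:04d}"
        if check_pin(guess):
            return guess
    return None

print(brute_force_pin())  # -> "4071"
```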
Claude E. Shannon, the creator of information theory.
Code-making and code-breaking are based largely on the idea of entropy: the statistical probability that any particular string of characters will occur in a sequence. Current models assume that information conforms to Shannon entropy, which calculates average probabilities over large amounts of data. That may return correct results for messages overall, but code-breaking is all about finding cracks and pressing on them until the whole shatters. In cryptography, only a small stretch of data needs to deviate from the assumptions of Shannon entropy to render the whole insecure. The failure of this assumption, known as the uniformity assumption, could have profound implications for data security: even slight deviations from perfect uniformity in a source file can provide very real inroads for attackers.
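To illustrate the general point (this is a toy comparison with a made-up distribution, not the paper’s analysis), the sketch below contrasts Shannon entropy, an average-case measure, with min-entropy, which tracks an attacker’s best single guess. A source that looks respectable on average can still hand an attacker a coin-flip chance on the first try.

```python
import math

# Toy comparison (not the paper's analysis): Shannon entropy averages over
# the whole distribution, while min-entropy reflects an attacker's best
# single guess. The skewed distribution is made up: one of 256 symbols
# occurs half the time, the rest share the remaining probability.
probs = [0.5] + [0.5 / 255] * 255

shannon_entropy = -sum(p * math.log2(p) for p in probs)  # average-case measure
min_entropy = -math.log2(max(probs))                     # worst-case (guessing) measure

print(f"Shannon entropy: {shannon_entropy:.2f} bits")  # ~5.0 bits, looks healthy
print(f"Min-entropy:     {min_entropy:.2f} bits")      # 1.0 bit: first guess wins half the time
```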
Code-breaking has always been about building on small insights. We can leverage what we know about language, for instance, to identify letters in a very simple cipher. If “e” is the most common letter in the English language, then perhaps the most common character in the code represents that letter. Even if all you ever directly find is an “e,” you can start looking for the most common words containing that letter, like “the,” building on assumptions and quickly breaking complex ciphers from a single initial insight. After a certain point, you’ve whittled down the possibilities for a message so far that brute-force analysis becomes not just possible but simple. After years without meaningful progress, Ancient Egyptian hieroglyphics were deciphered with astonishing speed once the symbols spelling Cleopatra’s name had been identified.
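As a hedged illustration of that kind of single-insight attack, the sketch below breaks a simple Caesar (shift) cipher by assuming only that the most common ciphertext letter stands for “e”; the ciphertext is a made-up sample, and real substitution ciphers require considerably more work than this.

```python
from collections import Counter

# Frequency analysis on a Caesar (shift) cipher. The ciphertext is a
# made-up sample; the only "insight" used is that the most common
# ciphertext letter probably stands for 'e'.
ciphertext = "phhw ph qhdu wkh odnh dw vhyhq"  # "meet me near the lake at seven", shifted by 3

# Find the most frequent letter and derive the shift from that single guess.
most_common = Counter(c for c in ciphertext if c.isalpha()).most_common(1)[0][0]
shift = (ord(most_common) - ord("e")) % 26

plaintext = "".join(
    chr((ord(c) - ord("a") - shift) % 26 + ord("a")) if c.isalpha() else c
    for c in ciphertext
)
print(plaintext)  # -> "meet me near the lake at seven"
```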
These MIT researchers have shown that one of the most unchallenged assumptions in all of cryptography, one that gets at ideas about the very nature of information theory, has left cryptography vulnerable to stepwise attacks of this kind. In their estimation, it makes breaking current algorithms “exponentially easier than we thought,” though they are still “exponentially hard.” In other words, it takes a totally impossible task and makes it merely almost impossible, turning a billion-year effort into a multi-million-year one.
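A back-of-the-envelope calculation shows why an exponential improvement can still leave an attack hopeless; the attacker speed and key sizes below are assumptions chosen for illustration, not figures from the paper.

```python
# Back-of-the-envelope illustration (hypothetical numbers, not from the
# paper): shaving even 30 bits off a 128-bit brute-force search leaves a
# workload that is still far beyond reach.
GUESSES_PER_SECOND = 1e12   # assumed attacker speed
SECONDS_PER_YEAR = 3.15e7

def years_to_search(key_bits):
    """Years to exhaust a key space of the given size at the assumed speed."""
    return (2 ** key_bits) / GUESSES_PER_SECOND / SECONDS_PER_YEAR

print(f"128-bit search: {years_to_search(128):.1e} years")  # ~1.1e19 years
print(f" 98-bit search: {years_to_search(98):.1e} years")   # ~1.0e10 years: easier, still hopeless
```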
Still, it’s these sorts of efforts that can pile up to render our ciphers insecure and, perhaps more importantly, that can force mathematicians and engineers to reexamine their most unchallenged assumptions. As one researcher puts it, cryptography is all about securing your work against threats that you cannot test. In such a field, working from strong basic assumptions is a necessity. The researchers are confident that, having identified this potential weakness, developers will be able to shore it up quickly enough.
