
# Security through obscurity example

I need to demonstrate that security through obscurity fails twice in the following scenario:

- User A gets MessageX = SecretTransformation(KEY, SecretValue1)
- User B gets MessageY = SecretTransformation(KEY, SecretValue2)
- SecretTransformation() is not a standard cryptographic function

Security through obscurity stands in contrast to the principle of open design and Shannon's maxim, and we know that this security will fail as soon as an attacker manages to retrieve one of the "secrets".

What I think can additionally be demonstrated is this: if an attacker manages to retrieve MessageX and MessageY without knowing any secret, he can perform some cryptanalytic attack and either retrieve the KEY or work out how SecretTransformation operates. This should hold, and grow stronger, with the number of messages the attacker is able to collect. I need to be pointed to an example like the one-time pad, where if you XOR more than one message with the same key, the ciphertexts can be XORed together to retrieve the secret. Here is an image of the suggested schema. I hope I've explained myself.

---

"Security through obscurity" is a bit of a loaded term that means a lot less than it sounds like it does. In a certain sense, nearly all security is gained through obscurity. For example, your password is only secure to the extent that it isn't publicly known. The key to real security, though, is to make your secrets easy to protect.

Certainly you could make your encryption password public and the algorithm secret, and you might enjoy a certain amount of security for some time. However, algorithms are notoriously easy to reverse-engineer. They can be obfuscated by compiler tricks and clever tactics, but a decent hacker with a moderate amount of caffeine can usually tackle any such challenge in a single day; two if he gets distracted. In contrast, good encryption algorithms are built not only to protect the payload but also the key, even in extremely adverse conditions such as known-plaintext or chosen-plaintext attacks, and they often take specific measures to protect against side-channel attacks as well. All of these measures are intended to protect a specific class of secret, and they have a proven track record of doing so.

Now, certainly there is no harm in keeping your algorithm secret. It may not afford any additional security, but it's not going to harm your security either, and there's no sense disclosing more information than you have to. But there's a huge difference between keeping the algorithm secret because you can and keeping it secret because your security depends on it. Of all the secrets you could keep, this one is among the easiest for an attacker to derive. So it would be wise to plan your security accordingly.

For a demonstration of how a cryptographically insecure system was broken in large part because the attackers were able to collect a large number of encrypted messages and exploit non-random bits of those messages to work out how the system operated, look at the case of the Enigma (naval Enigma in particular) in World War 2. Germany used a machine called the Enigma to encrypt messages to submarines and to troops in the field, using what was essentially an exquisitely complicated substitution cipher.
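The one-time-pad failure mentioned in the question can be sketched concretely: when the same key stream encrypts two messages, XORing the two ciphertexts cancels the key entirely and leaves the XOR of the plaintexts, so guessing (or knowing) one plaintext immediately reveals the other. A minimal sketch in Python; the key and messages are made up for illustration:

```python
# Two plaintexts encrypted with the SAME "one-time" pad -- a misuse of OTP.
key = bytes([0x3A, 0x91, 0x5C, 0xD4, 0x07, 0x66, 0xB2, 0xE8, 0x19, 0x4F, 0x23])
m1 = b"ATTACK DAWN"
m2 = b"RETREAT NOW"

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

c1 = xor_bytes(m1, key)
c2 = xor_bytes(m2, key)

# The attacker sees only c1 and c2, never the key.
# c1 XOR c2 == m1 XOR m2: the key cancels out completely.
leak = xor_bytes(c1, c2)
assert leak == xor_bytes(m1, m2)

# Knowing (or correctly guessing) one plaintext now yields the other outright.
recovered = xor_bytes(leak, m1)
assert recovered == m2
```

The security failed not because XOR is weak but because the key was reused: the "obscure" part (the key stream) was cancelled out by simply combining two intercepted messages.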

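The claim that the attack "grows stronger with the number of messages" can also be illustrated. Below, a toy stand-in for SecretTransformation (a single-byte XOR, chosen purely for illustration; the question's real function is unspecified) is broken by an attacker who scores all 256 candidate keys by how text-like the decryptions look; every additional intercepted ciphertext sharpens that score:

```python
import string

def secret_transformation(key: int, secret: bytes) -> bytes:
    # Toy stand-in for the question's SecretTransformation():
    # every byte XORed with a single secret key byte.
    return bytes(b ^ key for b in secret)

KEY = 0x5F  # known only to users A and B
messages = [b"meet at the usual place", b"send the report tonight",
            b"the password is swordfish", b"abort the second plan"]
ciphertexts = [secret_transformation(KEY, m) for m in messages]

# Attacker's view: only the ciphertexts. Try all 256 keys and score each
# by how many decrypted bytes fall in the expected alphabet; the more
# ciphertexts collected, the more decisively the true key wins.
alphabet = set(string.ascii_lowercase.encode()) | {ord(" ")}

def score(key: int) -> int:
    return sum(b ^ key in alphabet for c in ciphertexts for b in c)

best = max(range(256), key=score)
assert best == KEY  # key recovered without ever seeing it
```

This is exactly the "fails twice" point: the scheme breaks if either secret leaks, and it also breaks with no leak at all once enough ciphertext accumulates, because the non-random structure of the plaintexts shows through the transformation.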