
Security-Through-Obscurity: there's less to this than meets the eye  

Hal Berghel

It was 1883.  According to Wikipedia, the year began with a major fire in a Milwaukee hotel that killed 73.  The first vaudeville theater opened in Boston.  The U.S. Civil Service was initiated.  Roselle, New Jersey got electricity from Tom Edison.  The Ladies' Home Journal and Life magazine were first published.  The Brooklyn Bridge opened to traffic.  Pecos, TX held the world's first rodeo.  Krakatoa erupted and killed 36,380 people.  The University of Texas opened its doors.  Irish terrorists set off a bomb in the London Underground.  Black Bart conducted his last stagecoach robbery.  The North American railroads instituted time zones to standardize train schedules.  Yale introduced its third secret society, Wolf's Head.  Khalil Gibran, Franz Kafka, Lon Chaney, Sr., Benito Mussolini, and John Maynard Keynes were born.  Karl Marx, Edouard Manet, Richard Wagner, and Tom Thumb died.

What does this have to do with Security-through-Obscurity and the modern CIO, you wonder?  Well, 1883 was also the year that a little-known cryptographer by the name of Auguste Kerckhoffs proposed that security-through-obscurity was a really stupid idea.  1883, mind you.  Before radio.  Before television.  Before computers.  It was only seven years after the first telephone transmission!  And yet, Auguste figured out that one of the dominant models of modern IT security stunk big time.  This is the 125th anniversary of Auguste Kerckhoffs' pronouncement that the default security model still in use by 21st century software and hardware developers is really lame.  Let's put a fresh coat of paint on Auggy's idea and see how well it still shines.


When it comes to digital security systems, secrecy is indeed the mother of dysfunction.  The same holds true for software.  If you want software to be reliable, secure and efficient, open source is the only way to go.  End of story.

If, on the other hand, you want to make money from software, the open source model is sub-optimal.  To date no one has discovered how to derive economic rewards from intellectual property by giving it away in perpetuity.  The charges may be direct or indirect (e.g., through advertising, distribution media, documentation, utilities, collateral merchandising), or subsidized by some business, foundation, government or university.  But one way or another the development and production costs have to be borne by someone.  It's a fact of life that people resist paying for things they can get for nothing.  Trying to make money from open source software is an uphill fight against capitalism's DNA.

However, it behooves us to distinguish between an activity that generates revenue and one that produces artifacts of enduring value.  These two activities are not necessarily congruent.  In fact they actually work against each other.  If the goal is to create the finest possible artifact, the more eyes that oversee the development the better.  If the goal is to generate profits, the fewer eyes the better.  The reason for this tension lies in the old adage that a group can keep a secret as long as all but one are dead.

The same principles apply to security.  If you want the most secure system, leave it open to inspection and examination.  If there's a bug, the quickest way to find it is to subject it to close scrutiny by bright, well-trained professionals.  The downside is that there's no way to protect the IP when potentially anyone can have access.  As with open source software, our confidence in a security system is directly proportional to the number of qualified people who have vetted it.

So there's the duality: quality and proprietary are adjectives that define opposite ends of the spectrum.  The challenge for most developers is to find some middle ground that they can live with.


During three days in June 1942, in the middle of the Pacific Ocean, events transpired that may have changed the outcome of World War II.  The Japanese Imperial Fleet sought to trap the anemic U.S. carrier fleet at Midway Island.  The plan was foiled, however, because the U.S. had broken the Japanese JN-25 code and was reading Japanese naval communications.  The U.S. carrier fleet sprang a trap of its own and sank four of the Japanese fleet's carriers.  This turned the tide of the war in the Pacific.

The error that the Japanese made took place much earlier than June 1942.  In fact, the error dates back to 1883, when Kerckhoffs articulated his now-famous principle:  a cryptosystem should remain secure even if everything about it, except the key, is known to the adversary.  The importance of this principle was apparently lost on the Japanese Imperial Navy (and on the WW II German military, for that matter).

In our terms, Kerckhoffs was espousing open source crypto 75 years before the open source software movement got started.  To quote Auggie's seminal paper: "dans le second, il faut un système remplissant certaines conditions exceptionnelles, conditions que je résumerai sous les six chefs suivants," which I assume either says something about some guy named Dan or six chefs in search of their resumes.  (What it actually says is that a practical cipher must satisfy certain exceptional conditions, which he summarizes under six headings.)  In any case, Auggie's point is that you should always assume that your adversary understands how your security systems operate.  Were Kerckhoffs a CIO in today's world (he would be 173 years old, so he'd have some pretty hefty stock options at this point), he would be overcome with bewilderment that the modern enterprise relies on proprietary, aka "black box," IT security solutions.  I'm confident he would say, "Didn't the Battle of Midway tell you anything?"
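Kerckhoffs' principle can be illustrated in a few lines of modern code.  In this sketch (a hypothetical example of mine, not anything from Kerckhoffs' paper), the algorithm is completely public -- it's standard HMAC-SHA256 from Python's library -- yet an adversary who knows everything except the key still can't forge a valid authentication tag:

```python
import hashlib
import hmac
import secrets

def tag(key: bytes, message: bytes) -> bytes:
    """Compute an authentication tag. The algorithm (HMAC-SHA256) is public."""
    return hmac.new(key, message, hashlib.sha256).digest()

# The defender's only secret is the key -- Kerckhoffs' principle in practice.
key = secrets.token_bytes(32)
message = b"fleet orders: rendezvous off Midway"
valid_tag = tag(key, message)

# An adversary who knows the algorithm but must guess the key fails to forge.
guessed_key = secrets.token_bytes(32)
forged_tag = tag(guessed_key, message)

assert hmac.compare_digest(valid_tag, tag(key, message))   # defender verifies
assert not hmac.compare_digest(valid_tag, forged_tag)      # forgery rejected
```

The design carries its whole security burden in the 32-byte key; publishing the source code costs the defender nothing.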

But the fact of the matter is that we're still in JN-25-style denial despite the overwhelming evidence to the contrary.  Malware like Nimda, Code Red, the Chernobyl virus, Slammer, Lovesan, etc. were successful because they took advantage of weaknesses in systems that the cloistered designers overlooked.  The reason that Windows products have historically been more vulnerable to malware is to some degree a consequence of the proprietary nature of their software.  With all these lessons learned, why is it that enterprise security software is so hopelessly opaque?
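The failure mode is easy to demonstrate.  The toy "proprietary" cipher below -- a hypothetical repeating-key XOR scheme of my own invention, standing in for the sort of home-grown crypto that turns up in closed-source products -- collapses the moment an attacker obtains a single known plaintext, no matter how carefully the design was hidden:

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """A toy 'secret' cipher: repeating-key XOR. Encryption equals decryption."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

secret_key = b"obscure!"
plaintext = b"JN-25 superseded; new codebook effective 1 June"
ciphertext = xor_cipher(plaintext, secret_key)

# Known-plaintext attack: XORing ciphertext with the known message
# recovers the key stream, and hence the repeating key itself.
keystream = bytes(c ^ p for c, p in zip(ciphertext, plaintext))
recovered_key = keystream[:len(secret_key)]
assert recovered_key == secret_key

# With the key in hand, every other message under it falls too.
assert xor_cipher(ciphertext, recovered_key) == plaintext
```

The design's secrecy was its only defense; once a single message pair leaks, the whole system is transparent -- exactly the position the Imperial Navy found itself in.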


The answer to the preceding question lies with executives, tight IT budgets, poor quality control, vendors who misrepresent products, and technology inversion within the organization.  That shouldn't be hard to fix should it?  We should have the enterprise feeling 100% by lunchtime.

If I had to conjecture, I would say that the de facto standard security model in the enterprise is security-through-obscurity (STO).  For the reasons given in the paragraph above, STO seems to be the crutch we all too often use instead of security models that might actually work.  That's not to say that STO is designed into the organization.  Rather, it came in through the back door, through neglect.

Here's the way I see it.  Organizations start out with the best of intentions.  However, IT security is not a revenue stream or profit center.  So the Christmas list of security technology and support that comes out of the IT shop never makes it past the CFO intact ("You want *how* much to secure our servers?  We could remodel 100 guest rooms for that!").  The wish list turns out to be a non-starter with the leadership team, so IT revises the budget downward.  Here's the point to remember: either the original proposal was unnecessarily inflated (which rarely happens, in my experience), or some critical piece of the enterprise security system will have to be dropped (which typically happens, in my experience).

The first thing likely to get cut is the tight integration of hardware and software.  The reason is simple: there is a strong incentive for IT to start a security proposal with best practices, and best-practice conformance is always the most expensive model.  Best practices are built on principles of fault tolerance, redundancy, graceful failure, parallelism, fail-safe recovery, robust filtering, logging and classification, etc., and these are not trivial to deploy and maintain.  They're also best thought of as absolutes.  It's just not possible to define a little bit of redundancy and a smidge of fault tolerance.  Either the system is fault tolerant or it isn't!  What results after the first round or two of budget reductions is a hodge-podge of semi- or poorly-integrated solutions that are unrecognizable as industry best practices.  What's more, by the time the integration element is withdrawn from the mix, no one has a clear idea any longer how everything works.  Security-through-obscurity rears its ugly head in our enterprise.

If you don't believe it, remember Kerckhoffs' principle and the Battle of Midway.  If your enterprise security system conforms to best practices, it should withstand attack even if the adversaries know how it's structured.  Would you be comfortable posting your network topology and security appliance list on your corporate website?  Of course not.  Modern enterprise security derives some of its strength from the fact that no one - including the IT staff - really understands completely how the pieces fit together.  And this is precisely the weakness that gets exploited.

So let's just unilaterally declare security-through-obscurity the default model of IT security.  We can tell the Sarbanes-Oxley auditors that our security system has been thoughtfully tailored to match the ambiguities in the legislation.