Name: Anonymous 2017-08-19 17:04
http://www.his-2017.co.uk/session/murphy-vs-satan-why-programming-secure-systems-is-still-hard
In 1995, Ross Anderson and Roger Needham coined the phrase "Programming Satan's Computer" to describe the problems of developing software for secure systems. Their point is illustrated by whatever is the latest high-profile bug, "celebrity glitch", or downright embarrassment in some piece of critical software that is supposed to be trustworthy. It might seem that industry is unable to produce software with even the most basic levels of integrity (e.g. "it doesn't crash"), let alone subtle application-specific security properties. Is the situation really that bad? Can we do better, based on what we know from over twenty years of building safety-related systems? This talk will reflect on my experience of deploying safety-critical software process and technology in building secure systems, but will also touch on the behaviours and economic pressures that seem to be holding back progress.
http://gallium.inria.fr/~xleroy/publi/language-security-etaps03.pdf
In more colorful language, computer security has been described as “programming Satan’s computer” [6]: the implementor must assume that every weakness that can be exploited will be.
...
An alternate, language-based approach executes all code within the same memory space, without hardware protections, but relies on strong typing to restrict direct access to sensitive resources. These resources are directly represented by pointers, but strong typing prevents these pointers from being forged, e.g. by guessing their addresses. Thus, the typing discipline of the language can be used to enforce security invariants on the resources.
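To make that last point concrete, here is a minimal sketch (not from the paper) of the language-based approach in Rust. Module privacy stands in for the strong typing discipline: outside the `vault` module, a `Handle` can only be received from the authority, never forged from raw data such as a guessed address. All names (`vault`, `Handle`, `grant`, `read`) are illustrative assumptions.

```rust
mod vault {
    // The sensitive resource, reachable only via an unforgeable handle.
    pub struct Secret(String);

    // The capability. Its field is private, so code outside this module
    // cannot construct a Handle value at all -- the type system rejects
    // any attempt to forge one.
    pub struct Handle {
        _private: (),
    }

    // The sole point of issue: the authority grants capabilities.
    pub fn grant() -> Handle {
        Handle { _private: () }
    }

    // Access to the resource requires presenting a valid handle.
    pub fn read(_h: &Handle) -> Secret {
        Secret("classified".to_string())
    }

    impl Secret {
        pub fn reveal(&self) -> &str {
            &self.0
        }
    }
}

fn main() {
    let h = vault::grant(); // authorized code is handed a capability
    let s = vault::read(&h);
    println!("{}", s.reveal());

    // This would not compile -- the handle cannot be forged:
    // let fake = vault::Handle { _private: () }; // error: field `_private` is private
}
```

The security invariant ("only holders of a granted handle may read the secret") is enforced entirely by the type checker, with no hardware memory protection involved, which is exactly the trade the quoted passage describes.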