Over on the Chromium blog there is a post that discusses one of the vulnerabilities that allows remote code execution in the Chrome browser.
This vulnerability was submitted through the Pwnium competition.
What struck me as interesting was the amount of work required to execute code remotely. This particular attack chains six different bugs across various components of the browser, with only eight bytes of arbitrary data to play with. Go read the post for more information.
I bring this up because it’s a good example of why it can be difficult to find security vulnerabilities: not all bugs are shallow.
If you want secure software you have to start secure. The development process, the development environment, the architecture, the testing, the developers: they all have to be secure.
The development process needs to be secure. Work with a Security Development Lifecycle.
The development environment needs to be secure. Make sure the development machines are always patched, including build machines and QA machines. They need to be locked down. This means making sure developers aren’t running as admin. To all the developers that just groaned: suck it up. You don’t need admin privileges. If you do, you’re doing something wrong.
The architecture of the software needs to be secure. Use models that are deemed secure. Reuse trusted designs, and for the love of all things holy don’t roll your own security systems.
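To make the "don't roll your own" point concrete, here's a minimal sketch of what reusing a trusted design looks like in practice: password storage built on a vetted, standard key-derivation function (PBKDF2-HMAC-SHA256 from Python's standard library) rather than a homemade hashing scheme. The function names and the iteration count are illustrative choices, not a prescription.

```python
import hashlib
import hmac
import os

# Illustrative parameter; real deployments should tune this upward.
ITERATIONS = 200_000

def hash_password(password, salt=None):
    """Derive a password digest with a vetted KDF instead of a homemade scheme."""
    salt = salt or os.urandom(16)  # fresh random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, digest):
    """Recompute the digest and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)
```

Note the constant-time comparison at the end: even the "compare two hashes" step has a trusted design (`hmac.compare_digest`) precisely because the naive `==` leaks timing information.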
The tests need to be broad and deep. Code coverage is a good thing. Not only do you need functional and UI tests, you also need proper vulnerability testing. Fuzz testing is your friend. Test the attack surface. Run code analysis at its highest sensitivity. Oh, and be sure to test the binaries themselves. Compilers can do funny things.
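If fuzz testing sounds exotic, it doesn't have to be: at its simplest it means throwing random input at anything that parses untrusted data and watching for crashes. The sketch below fuzzes a deliberately buggy toy parser (`parse_record` is a made-up example, not from the post); the bug is exactly the kind of thing random input finds and hand-written tests miss.

```python
import random

def parse_record(data):
    """Toy parser for a length-prefixed record. Deliberately buggy."""
    if len(data) < 2:
        raise ValueError("too short")
    length = data[0]
    # BUG: trusts the length byte without checking it against len(data),
    # so reading the "checksum" byte can run past the end of the input.
    checksum = data[1 + length]
    return {"payload": data[1:1 + length], "checksum": checksum}

def fuzz(iterations=10_000, seed=0):
    """Feed random byte strings to the parser; count unexpected crashes."""
    rng = random.Random(seed)  # fixed seed so runs are reproducible
    crashes = 0
    for _ in range(iterations):
        blob = bytes(rng.randrange(256) for _ in range(rng.randrange(64)))
        try:
            parse_record(blob)
        except ValueError:
            pass  # expected rejection of malformed input
        except Exception:
            crashes += 1  # anything else is a bug worth investigating
    return crashes
```

Real fuzzers (coverage-guided tools and the like) are far smarter than this, but even this naive loop flushes out the out-of-bounds read above within a few thousand iterations.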
Finally, the developers need to be secure. That’s kind of a funny phrase. Developers need to know how to develop securely. Training is key. Get the developers in a room playing Elevation of Privilege. It may not seem like you’re doing work, but you are creating a threat model of the application.
I've written about this before, and I will likely continue writing about it until it's hammered into everyone.
Part of the problem, I think, is the security community itself. There are hundreds of security conferences every year, yet most of them focus on why you’re insecure. The majority of sessions talk about different types of vulnerabilities and new attack vectors. Does anyone else see a problem with that? If all you are taught is how to spot vulnerabilities, you are missing one very important part: how to fix the bloody vulnerability.
The community needs to rethink how we teach people security. Being able to spot vulnerabilities is a necessary skill, but it doesn’t do us much good if we can’t fix the vulnerability.