Over on the Canadian Solution Developer's blog I have a series on the basics of writing secure applications. It's a bit of an introduction to all the things we should know in order to write software that doesn't contain too many vulnerabilities. This is part one of the series, unedited for all to enjoy.
Every few years a software security advocacy group releases a top-10 list of the security flaws developers introduce into their software – something I affectionately refer to as the stupid-things-we-do-when-building-applications list. The group is OWASP (the Open Web Application Security Project) and the list is the OWASP Top 10 Project (I have no affiliation with either). In this article we'll dig into some of the ways we can combat the ever-growing list of security flaws in our applications.
Security is a trade off. We need to balance the requirements of the application with the time and budget constraints of the project.
Too often, though, nobody has the forethought to treat security as a feature – or, more importantly, as a central design requirement for the application regardless of what the time or budget constraints may be (do I sound bitter?).
This of course leads to a funny problem. What happens when your application gets attacked?
There is no easy way to say it: the developers get blamed. Or, if it's a seriously heinous breach, the boss gets arrested because they were accountable for it. Either way, it doesn't end well for the organization.
Microsoft had this problem for years – although with a twist, because security has always been important to the company. Windows NT 3.5 was Microsoft's first OS successfully evaluated under the TCSEC regime at C2 – government speak for "it met a rigorous set of requirements designed to evaluate the security of a product in high-security environments". However, this didn't really jibe with what the news was saying, since so many vulnerabilities were plaguing Windows 2000 and XP. The evaluation was proof that security mattered, but reality looked different because of the bugs.
This led to a major change in Microsoft's ways. In January 2002 Bill Gates sent out a company-wide memorandum stating the need for a new way of doing things.
> The events of last year - from September's terrorist attacks to a number of malicious and highly publicized computer viruses - reminded every one of us how important it is to ensure the integrity and security of our critical infrastructure, whether it's the airlines or computer systems.
The creation of the Trustworthy Computing Initiative was the result. In short order, Microsoft did a complete 180 on how it developed software.
Part of the problem with writing secure code is that you can't just look for the bugs at the end of a development cycle, fix them, and move on. It just doesn't work. Microsoft introduced the Security Development Lifecycle (SDL) to combat this problem: it adds security-focused processes throughout the development lifecycle to help developers write secure code.
Conceptually it's pretty simple: defense in depth.
To develop secure applications, we need to build security requirements into the development model from the very beginning – starting with developer training, continuing all the way up to application release, and extending to how we respond to vulnerabilities after launch.
It's important to include security at the beginning of the development process; otherwise we run into the problem Windows had.
A good chunk of the codebase for Windows Vista was scrapped because too many bugs were found over the course of testing. Once the Windows team started introducing the SDL into their development process, the number of security bugs found dropped considerably. This is part of the reason it took six years to release Windows Vista. Funny enough, it's also partly the reason Windows 7 only took two years to release – fewer security bugs!
Now, Microsoft had a vested interest in writing secure code, so the SDL was an all-or-nothing kind of thing for them. Companies that haven't made that decision may have considerably more trouble implementing the SDL, simply because it costs money to do so. Luckily, we don't have to implement the entire process all at once.
As we move through this series, we'll touch on some of the key aspects of the SDL and how we can fit it into the development lifecycle.
Perhaps the most important aspect of the SDL is building a solid foundation of knowledge about common security vulnerabilities. This is where the top 10 list from OWASP comes in handy:
- Injection
- Cross-Site Scripting (XSS)
- Broken Authentication and Session Management
- Insecure Direct Object References
- Cross-Site Request Forgery (CSRF)
- Security Misconfiguration
- Insecure Cryptographic Storage
- Failure to Restrict URL Access
- Insufficient Transport Layer Protection
- Unvalidated Redirects and Forwards
In the next article in this series we'll take a closer look at a few of these vulnerabilities and at some of the libraries available to help combat our attackers. Throughout the series we'll also show how different steps in the SDL process can help find and mitigate these vulnerabilities.
In Part III of this series we'll take a look at some of the tools Microsoft has created to aid the process of secure design and analysis.
In Part IV of this series, we'll dig into some of the architectural considerations of developing secure applications.
Finally, to conclude this series we'll take a look at how we can use Team Foundation Server to help us manage incident responses for future vulnerabilities.