
// Security

Six Bugs and Eight Bytes

by Steve Syfuhs / May 22, 2012 01:07 PM

Over on the Chromium blog there is a post that discusses one of the vulnerabilities that allows remote code execution in the Chrome browser.

This vulnerability was submitted through the Pwnium competition.

What struck me as interesting was the amount of work required to execute code remotely. This particular attack chains six different bugs across various components of the browser, with only eight bytes of arbitrary data to play with. Go read the post for more information.

I bring this up because it’s a good example of why it can be difficult to find security vulnerabilities: not all bugs are shallow.

If you want secure software you have to start secure. The development process, the development environment, the architecture, the testing, the developers, they all have to be secure.

The development process needs to be secure. Work with a Security Development Lifecycle.

The development environment needs to be secure. Make sure the development machines are always patched, including build machines and QA machines. They need to be locked down. This means making sure developers aren’t running as admin. To all the developers that just groaned: suck it up. You don’t need admin privileges. If you do, you’re doing something wrong.

The architecture of the software needs to be secure. Use models that are deemed secure. Reuse trusted designs, and for the love of all things holy don’t roll your own security systems.

The tests need to be broad and deep. Code coverage is a good thing. Not only do you need functional and UI tests, you also need proper vulnerability testing. Fuzz testing is your friend. Test the attack surface. Run code analysis at its highest sensitivity. Oh, and be sure to test the binaries themselves. Compilers can do funny things.
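To make the fuzzing point concrete, here is a toy sketch of the idea, assuming a hypothetical ParseDocument method standing in for whatever code you are testing. A real fuzzer is far smarter about generating inputs, but the shape is the same: throw garbage at the code and treat anything other than a clean rejection as a bug.

var rng = new Random();

// Throw thousands of random byte buffers at the parser and watch
// for anything other than a clean rejection of bad input.
for (int i = 0; i < 10000; i++)
{
    var buffer = new byte[rng.Next(1, 4096)];
    rng.NextBytes(buffer);

    try
    {
        ParseDocument(buffer); // hypothetical code under test
    }
    catch (FormatException)
    {
        // Rejecting malformed input cleanly is correct behavior.
    }
    catch (Exception ex)
    {
        // Anything else is a bug worth investigating.
        Console.WriteLine("Fuzz case {0} failed: {1}", i, ex);
    }
}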

Finally, the developers need to be secure. That’s kind of a funny phrase. Developers need to know how to develop securely. Training is key. Get the developers in a room playing Elevation of Privilege. It may not seem like you’re doing work, but you are creating a threat model of the application.

I've written about this before and I will likely continue writing about this until it's hammered into everyone.

Part of the problem I think is the community of security. There are hundreds of security conferences every year, except most of them focus on why you’re insecure. The majority of sessions talk about different types of vulnerabilities and new attack vectors. Does anyone else see a problem with that? If all you are taught is how to spot vulnerabilities, you are missing one very important part: how to fix the bloody vulnerability.

The community needs to rethink how we teach people security. Being able to spot vulnerabilities is a necessary skill, but it doesn’t do us much good if we can’t fix the vulnerability.

Dana Epp on the Landscape of Risk

by Steve Syfuhs / May 07, 2012 11:26 AM

I don’t get to say it enough, but I really love my job. I get to work on a pretty awesome product and I get to work with a bunch of really smart people.

One of them is my boss Dana Epp. Without sounding like a kiss-ass, I have to say I really like working for Dana. There’s so much I learn from the guy, and his presentations rock. I wanted to point out a presentation he did last week at the Kaseya Connect 2012 conference in Las Vegas on the landscape of risk. Go watch it. He starts at about 37 minutes in.

And remember it the next time you connect to the WiFi at a conference. :)

Study of Commercially Deployed Single Sign On

by Steve Syfuhs / April 25, 2012 05:40 PM

Microsoft Research published a paper sometime last month analyzing Single Sign On services hosted by various commercial entities.

Go read it: Signing Me onto Your Accounts through Facebook and Google: a Traffic-Guided Security Study of Commercially Deployed Single-Sign-On Web Services.

The paper had been sitting on my desk for a couple weeks (literally) before I had a chance to read through it. It actually made its rounds through the company before I got to it.

In any case, I thought it would be good to post a link for people to read because it outlines some very important implications of using a Single Sign On service.

Abstract:

With the boom of software-as-a-service and social networking, web-based single sign-on (SSO) schemes are being deployed by more and more commercial websites to safeguard many web resources. Despite prior research in formal verification, little has been done to analyze the security quality of SSO schemes that are commercially deployed in the real world. Such an analysis faces unique technical challenges, including lack of access to well-documented protocols and code, and the complexity brought in by the rich browser elements (script, Flash, etc.). In this paper, we report the first “field study” on popular web SSO systems. In every studied case, we focused on the actual web traffic going through the browser, and used an algorithm to recover important semantic information and identify potential exploit opportunities. Such opportunities guided us to the discoveries of real flaws. In this study, we discovered 8 serious logic flaws in high-profile ID providers and relying party websites, such as OpenID (including Google ID and PayPal Access), Facebook, JanRain, Freelancer, FarmVille, Sears.com, etc. Every flaw allows an attacker to sign in as the victim user. We reported our findings to affected companies, and received their acknowledgements in various ways. All the reported flaws, except those discovered very recently, have been fixed. This study shows that the overall security quality of SSO deployments seems worrisome. We hope that the SSO community conducts a study similar to ours, but in a larger scale, to better understand to what extent SSO is insecurely deployed and how to respond to the situation.

The gist of the paper is that when it comes to verification and validation of the security of SSO protocols, we tend to do formal tests of the protocols themselves, but we don’t ever really test the implementations of the protocols. Observation showed that most developers didn’t fully understand the security implications of the most important part in an SSO conversation – the token exchange:

Our success indicates that the developers of today’s web SSO systems often fail to fully understand the security implications during token exchange, particularly, how to ensure that the token is well protected and correctly verified, and what the adversary is capable of doing in the process.

Think about it. The token received from the IdP is the identity. The relying party trusts the validity of the identity by verifying the token somehow. If verification isn’t done properly an attacker can inject information into the token and elevate their privileges or impersonate another user. This is a fundamental problem:

For example, we found that the RPs of Google ID SSO often assume that message fields they require Google to sign would always be signed, which turns out to be a serious misunderstanding (Section 4.1).

Not all of the data in a token needs to be signed. In fact, if the IdP isn’t the authoritative source of the particular piece of data it may not want to sign that data. If the IdP can’t or won’t sign the data, do you really want to trust it?

What’s the rule that’s always hammered into us when writing code? Do not trust user input. Even if it’s supposed to have come from another machine:

[…] when our browser (i.e., Bob’s browser) relayed BRM1 [part 1 of the message exchange], it changed openid.ext1.required (Figure 8) to (firstname,lastname). As a result, BRM3 [part 3 of message exchange] sent by the IdP did not contain the email element (i.e., openid.ext1.value.email). When this message was relayed by the browser, we appended to it alice@a.com as the email element. We found that Smartsheet accepted us as Alice and granted us the full control of her account.

If you receive a message that contains something you need to use, not only do you have to validate that it’s in the right format, but you have to validate that it hasn’t been modified or tampered with before it hits your code.
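One way to make that tamper check concrete: if the token is HMAC-signed, recompute the signature over the exact payload you received and compare before touching any of its fields. This is just a sketch under the assumption of a shared-key HMAC scheme; the names here are made up, and real SSO protocols define their own signature formats:

public static bool IsSignatureValid(string payload, byte[] signature, byte[] sharedKey)
{
    using (var hmac = new HMACSHA256(sharedKey))
    {
        byte[] computed = hmac.ComputeHash(Encoding.UTF8.GetBytes(payload));

        // Compare in constant time so the comparison itself
        // doesn't leak how close a forgery was.
        if (computed.Length != signature.Length)
            return false;

        int diff = 0;
        for (int i = 0; i < computed.Length; i++)
            diff |= computed[i] ^ signature[i];

        return diff == 0;
    }
}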

This is something I’ve talked about before, but in a more generalized nature. Validate-validate-validate!

As an aside, an interesting observation made in the research is that all of this was done through black-box testing. The researchers didn’t have access to any source code. So if the researchers could find problems this way, the attackers could find problems the same way:

Our study shows that not only do logic flaws pervasively exist in web SSO deployments, but they are practically discoverable by the adversary through analysis of the SSO steps disclosed from the browser, even though source code of these systems is unavailable.

This tends to be the case with validation problems. Throw a bunch of corrupted data at something and see if it sticks.

They also realized that their biggest challenge wasn't trying to understand the protocol, but trying to understand the data being used within the protocol.

For every case that we studied, we spent more time on understanding how each SSO system work than on reasoning at the pure logic level.

The fundamental design of a Single Sign On service doesn’t really change between protocols.  The protocols may use varying terms to describe the different players in the system, but there are really only three that are important: the IdP, the RP, and the client. They interact with each other in fundamentally similar ways across most SSO protocols. It’s no surprise that understanding the data was harder than understanding the logic.

They didn’t go into much detail about why they spent more time studying data, but earlier they talked about how different vendors used different variations on the protocols.

[…] the way that today’s web SSO systems are constructed is largely through integrating web APIs, SDKs and sample code offered by the IdPs. During this process, a protocol serves merely as a loose guideline, which individual RPs often bend for the convenience of integrating SSO into their systems. Some IdPs do not even bother to come up with a rigorous protocol for their service.

In my experience, when you change a security protocol for the sake of convenience, the cost is usually the security of the protocol itself. It usually doesn’t end well.

Security Development Conference May 15-16 in Washington, DC

by Steve Syfuhs / February 20, 2012 12:50 PM

Registration for the Security Development Conference just went live not too long ago and I just registered. Early bird pricing is $300.

Check it out.


3 TRACKS & 24 SESSIONS

The session tracks for this industry event will target three important roles for any security organization: security engineers, business decision makers, and lifecycle process managers. The sessions in these tracks will include experts representing over 30 organizations from a variety of industries. Visit the event website to see our current list of speakers, spread the word and register early to join us at the Security Development Conference 2012 on May 15 & 16!

WHY YOU SHOULD ATTEND

  • Accelerate Adoption – Hear from leaders across a variety of organizations and learn from their experiences on how to accelerate SDL adoption in your own organization
  • Gain Efficiencies – Learn effective ways to align SDL practices across engineering, business, and management
  • Networking – Interact with peers, vendors and sponsors who provide SDL services, training, and tools
  • Affordable Training – This is an affordable training opportunity that can benefit your entire security team
  • Continuing Education – Earn 8 CPE Continuing Education (CE) credits for your CISSP credentials

REGISTRATION FEES

Early Bird (February 20 - March 15) $300
Discount (March 16 - April 13) $400
Standard (April 14 - May 11) $500
Onsite Rate (May 12-May 16) $700

Talking ADFS on RunAs Radio

by Steve Syfuhs / December 01, 2011 07:02 PM

During the Toronto stop of the TechDays tour in Canada Richard Campbell was in town talking to a bunch of really smart people about the latest and greatest technologies they've been working on.

And then me for some reason.

We got to talk about ADFS and associates:

Richard talks to Steve Syfuhs at TechDays Toronto about IT Pros providing security services for developers using Active Directory Federated Services. IT and development talking to each other willingly? Perish the thought! But in truth, Steve makes it clear that ADFS provides a great wrapper for developers to access active directory or any other service that has security claims that an application might require. Azure depends on it, even Office 365 can take advantage of ADFS. Steve discusses how IT can work with developers to make the jobs of both groups easier.

You can listen to it here: http://www.runasradio.com/default.aspx?showNum=240

I need to work on using fewer vague analogies.

Change of Scenery

by Steve Syfuhs / November 29, 2011 12:56 PM

Every once in a while you need to make a life-altering decision.

Last night I sent an email to the ObjectSharp team telling them I had resigned (I had spoken to the bosses in person prior).

Boy, talk about blunt, eh?

Every once in a while you are offered a once in a lifetime opportunity to do something pretty amazing. I’ve had three of these opportunities. The first was Woodbine Entertainment where I got my start in the Toronto development world. The second was ObjectSharp where I have been able to learn so much from some of the brightest minds in the industry. The third was two weeks ago in Vancouver.

Two weeks ago I was offered a position to lead development of a product for an ISV in BC.

Me? A leader? Wait. Huh?

Well okay, not quite. Cue the screeching cut-away-from-sappy noise.

So what's the deal? I'm not saying, yet. At this point I'm sure a few people could guess though. :)

Suffice to say I'll be moving to BC at the end of the year. I'll be at ObjectSharp until December 16th and then will be moving right around the new year.

I'm not sure I can really describe how excited I am about this new position. I'll be working on an awesome product with an awesome group of people.

Of course, it's a little sad leaving ObjectSharp. They have such a great team, with some of the smartest people in the industry.

So it should be an interesting experience.

Input Validation: The Good, The Bad, and the What the Hell are you Doing?

by Steve Syfuhs / November 28, 2011 11:00 AM

Good morning class!

Pop quiz: How many of you do proper input validation in your ASP.NET site, WebForms, MVC, or otherwise?

Some Background

There is an axiom in computer science: never trust user input because it's guaranteed to contain invalid data at some point.

In security we have a similar axiom: never trust user input because it's guaranteed to contain invalid data at some point, and your code is bound to contain a security vulnerability somewhere, somehow. Granted, it doesn't flow as well as the former, but the point still stands.

The solution to this problem is conceptually simple: validate, validate, validate. Every single piece of input that is received from a user should be validated.

Of course when anyone says something is a simple concept it's bound to be stupidly complex to get the implementation right. Unfortunately proper validation is not immune to this problem. Why?

The Problem

Our applications are driven by user data. Without data our applications would be pretty useless. This data is usually pretty domain-specific too so everything we receive should have particular structures, and there's a pretty good chance that a few of these structures are so specific to the organization that there is no well-defined standard. By that I mean it becomes pretty difficult to validate certain data structures if they are custom designed and potentially highly-complex.

So we have this problem. First, if we don't validate that the stuff we are given is clean, our application starts behaving oddly and that limits the usefulness of the application. Second, if we don't validate that the stuff we are given is clean, and there is a bug in the code, we have a potential vulnerability that could wreak havoc for the users.

The Solution

The solution as stated above is to validate all the input, both from a business perspective and from a security perspective.

In this post we are going to look at the best way to validate the security of incoming data within ASP.NET. This requires looking into how ASP.NET processes input from the user.

When ASP.NET receives something from the user it can come from four different vectors:

  • Within the Query String (?foo=bar)
  • Within the Form (via a POST)
  • Within a cookie
  • Within the server variables (a collection generated from HTTP headers and internal server configuration)

These vectors drive ASP.NET, and you can potentially compromise an application by maliciously modifying any of them.
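For reference, all four of these surface through the HttpRequest object. The key names below ("foo", "comment", "prefs") are made-up examples:

HttpRequest request = HttpContext.Current.Request;

string fromQuery = request.QueryString["foo"];                    // ?foo=bar
string fromForm = request.Form["comment"];                        // POSTed fields
HttpCookie prefs = request.Cookies["prefs"];                      // cookies
string userAgent = request.ServerVariables["HTTP_USER_AGENT"];    // headers, etc.

// Every one of these values is attacker-controlled and needs validation.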

Pop quiz: How many of you check whether custom cookies exist before trying to use them? Almost everyone, good. Now, how many of you validate that the data within the cookies is, well, valid before using them?

What about checking your HTTP headers?

The Bypass

Luckily ASP.NET has some out-of-the-box behaviors that protect the application from malicious input. Unfortunately ASP.NET isn't very forgiving when it comes to validation. It doesn't distinguish between quasi-good input and bad input, so anything containing an angle bracket causes a YSoD (the ASP.NET yellow screen of death).

The de facto fix for this is to do one of two things:

  • Disable validation in the page declaration within WebForms, or stick a [ValidateInput(false)] attribute on an MVC controller
  • Set <pages validateRequest="false"> in web.config

What this will do is tell ASP.NET to basically skip validating the four vectors and let anything in. It was assumed that you would do validation on your own.
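For reference, here is roughly what each of those looks like. In WebForms it's a page directive:

<%@ Page Language="C#" ValidateRequest="false" %>

In MVC it's an attribute on a controller or action (Save here is a made-up example):

[ValidateInput(false)]
public ActionResult Save(string content) { /* ... */ }

And the site-wide switch goes in web.config:

<pages validateRequest="false" />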

Raise your hand if you think this is a bad idea. Okay, keep your hands up if you've never done this for a production application. At this point almost everyone should have put their hands down. I did.

The reason we do this is because as I said before, ASP.NET isn't very forgiving when it comes to validation. It's all or nothing.

What's worse, as ASP.NET got older it started becoming pickier about what it let in so you had more reasons for disabling validation. In .NET 4 validation occurs at a much earlier point. It's a major breaking change:

The request validation feature in ASP.NET provides a certain level of default protection against cross-site scripting (XSS) attacks. In previous versions of ASP.NET, request validation was enabled by default. However, it applied only to ASP.NET pages (.aspx files and their class files) and only when those pages were executing.

In ASP.NET 4, by default, request validation is enabled for all requests, because it is enabled before the BeginRequest phase of an HTTP request. As a result, request validation applies to requests for all ASP.NET resources, not just .aspx page requests. This includes requests such as Web service calls and custom HTTP handlers. Request validation is also active when custom HTTP modules are reading the contents of an HTTP request.

Since backwards compatibility is so important, a configuration attribute was also added to tell ASP.NET to revert to the 2.0 validation mode, meaning that validation occurs later in the request lifecycle, as it did in ASP.NET 2.0:

<httpRuntime requestValidationMode="2.0" />

If you do a search online for request validation almost everyone comes back with this solution. In fact, it became a well known solution with the Windows Identity Foundation in ASP.NET 4.0 because when you do a federated sign on, WIF receives the token as a chunk of XML. The validator doesn't approve because of the angle brackets. If you set the validation mode to 2.0, the validator checks after the request passes through all HttpModules, which is how WIF consumes that token via the WSFederationAuthenticationModule.

The Proper Solution

So we have the problem. We also have built in functionality that solves our problem, but the way it does it kind of sucks (it's not a bad solution, but it's also not extensible). We want a way that doesn't suck.

In earlier versions of ASP.NET the best solution was to disable validation and check every vector for potentially malicious input within an HttpModule. The benefit here is that you have control over what is malicious and what is not. You would write something along these lines:

public class ValidatorHttpModule : IHttpModule
{
    public void Dispose() { }

    public void Init(HttpApplication context)
    {
        context.BeginRequest += new EventHandler(context_BeginRequest);
    }

    void context_BeginRequest(object sender, EventArgs e)
    {
        HttpApplication context = (HttpApplication)sender;

        // Each of these collections enumerates as keys, so look up the value
        // for each key and hand both to the validation logic. The Check*
        // methods return true when a value looks malicious.
        foreach (string key in context.Request.QueryString)
        {
            if (CheckQueryString(key, context.Request.QueryString[key]))
            {
                throw new SecurityException("Bad validation");
            }
        }

        foreach (string key in context.Request.Form)
        {
            if (CheckForm(key, context.Request.Form[key]))
            {
                throw new SecurityException("Bad validation");
            }
        }

        foreach (string name in context.Request.Cookies)
        {
            if (CheckCookie(name, context.Request.Cookies[name].Value))
            {
                throw new SecurityException("Bad validation");
            }
        }

        foreach (string key in context.Request.ServerVariables)
        {
            if (CheckServerVariable(key, context.Request.ServerVariables[key]))
            {
                throw new SecurityException("Bad validation");
            }
        }
    }

    // <snip />
}

The downside to this approach though is that you are stuck with pretty clunky validation logic. It executes on every single request, which may not always be necessary. You are also at the mercy of whenever your HttpModule happens to be initialized: it won't necessarily execute first, so it won't necessarily protect all parts of your application. Protection that doesn't cover everything exposed to a particular attack isn't very useful.  <Cynicism>Half-assed protection is only good when you have half an ass.</Cynicism>

What we want is something that executes before everything else. In our HttpModule we are validating on BeginRequest, but we want to validate before BeginRequest.

The way we do this is with a custom RequestValidator. On a side note, this post may qualify as having the longest introduction ever. In any case, this custom RequestValidator is set within the httpRuntime tag within the web.config:

<httpRuntime requestValidationType="Syfuhs.Web.Security.CustomRequestValidator" />

We create a custom request validator by creating a class with a base class of System.Web.Util.RequestValidator. Then we override the IsValidRequestString method.

This method allows us to find out where the input is coming from, e.g. from a Form or from a cookie etc. This validator is called on each value within the four collections above, but only when a value exists. It saves us the trouble of going over everything in each request. Within an HttpModule we could certainly build out the same functionality by checking contents of each collection, but this saves us the hassle of writing the boilerplate code. It also provides us a way of describing the problem in detail because we can pass an index location of where the problem exists. So if we find a problem at character 173 we can pass that value back to the caller and ASP.NET will throw an exception describing that index. This is how we get such a detailed exception from WIF:

A Potentially Dangerous Request.Form Value Was Detected from the Client (wresult="<t:RequestSecurityTo...")

Our validator class ends up looking like:

public class MyCustomRequestValidator : RequestValidator
{
    protected override bool IsValidRequestString(HttpContext context, string value, RequestValidationSource requestValidationSource, string collectionKey, out int validationFailureIndex)
    {
        validationFailureIndex = 0;

        switch (requestValidationSource)
        {
            case RequestValidationSource.Cookies:
                return ValidateCookie(collectionKey, value, out validationFailureIndex);

            case RequestValidationSource.Form:
                return ValidateFormValue(collectionKey, value, out validationFailureIndex);

            // <snip />
        }

        // Fall back to the built-in validation for anything we don't handle.
        return base.IsValidRequestString(context, value, requestValidationSource, collectionKey, out validationFailureIndex);
    }

    // <snip />
}

Each application has different validation requirements so I've just mocked up how you would create a custom validator.
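To fill in one of the snipped pieces, ValidateCookie might be as simple as this. The rule itself (no markup characters in cookies) is purely illustrative; the point is returning the failure index so ASP.NET can produce a detailed exception:

private static bool ValidateCookie(string key, string value, out int validationFailureIndex)
{
    validationFailureIndex = 0;

    if (string.IsNullOrEmpty(value))
        return true;

    for (int i = 0; i < value.Length; i++)
    {
        // Illustrative rule only: reject markup characters in cookies.
        if (value[i] == '<' || value[i] == '>')
        {
            validationFailureIndex = i; // tells ASP.NET exactly where it went bad
            return false;
        }
    }

    return true;
}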

If you use this design you can easily validate all inputs across the application, and you don't have to turn off validation.

So once again, pop quiz: How many of you do proper input validation?

The Importance of Elevating Privilege

by Steve Syfuhs / August 28, 2011 04:00 PM

The biggest detractor to Single Sign On is the same thing that makes it so appealing – you only need to prove your identity once. This scares the hell out of some people because if you can compromise a user's session in one application it's possible to affect other applications. Congratulations: checking your Facebook profile just caused your online store to delete all its orders. Let's break that attack down a little.

  • You just signed into Facebook and checked your [insert something to check here] from some friend. That contained a link to something malicious.
  • You click the link, and it opens a page that contains an iframe (see the sketch after this list). The iframe points to a URL for your administration portal of the online store with a couple parameters in the query string telling the store to delete all the incoming orders.
  • At this point you don't have a session with the administration portal and in a pre-SSO world it would redirect you to a login page. This would stop most attacks because either a) the iframe is too small to show the page, or b) (hopefully) the user is smart enough to realize that a link from a friend on Facebook shouldn't redirect you to your online store's administration portal. In a post-SSO world, the portal would redirect you to the STS of choice and that STS already has you signed in (imagine what else could happen in this situation if you were using Facebook as your identity provider).
  • So you've signed into the STS already, and it doesn't prompt for credentials. It redirects you to the administration page you were originally redirected away from, but this time with a session. The page is pulled up, the query string parameters are parsed, and the orders are deleted.
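The malicious page in the second step needs nothing more sophisticated than a hidden iframe along these lines (the URL and parameters are made up):

<iframe src="https://store.example.com/admin/orders.aspx?action=deleteall" width="1" height="1" style="visibility: hidden;"></iframe>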

There are certainly ways to stop this, since parts of the attack are a bit trivial. For instance you could pop up an Ok/Cancel dialog asking "are you sure you want to delete these?", but for the sake of discussion let's think of this at a high level.

The biggest problem with this scenario is that deleting orders doesn't require anything more than being signed in. By default you had the highest privileges available.

This problem is similar to the problem many users of Windows XP had. They were, by default, running with administrative privileges. This led to a bunch of problems because any application running could do whatever it pleased on the system. Malware was rampant, and worse, users were just doing all around stupid things because they didn't know what they were doing but they had the permissions necessary to do it.

The solution to that problem is to give users non-administrative privileges by default, and when something requires higher privileges, to re-authenticate and temporarily run with the higher privileges. The key here is that you are running with higher privileges only temporarily. However, security lost the argument and Microsoft caved while developing Windows Vista, creating User Account Control (UAC). By default a user is an administrator, but their user token is a stripped down administrator token, so they only have non-administrative privileges. In order to take full advantage of the administrator token, a user has to elevate and request the full token temporarily. This is a stop-gap solution though, because it's theoretically possible to circumvent UAC since the administrative token still exists. It also doesn't require you to re-authenticate – you just have to approve the elevation.

As more and more things are moving to the web it's important that we don't lose control over privileges. It's still very important that you don't have administrative privileges by default because, frankly, you probably don't need them all the time.

Some web applications already require elevation. For instance, consider online banking sites. When I sign in I have a default set of privileges. I can view my accounts and transfer money between my accounts. Anything else requires that I re-authenticate myself by entering a private PIN. So for instance I cannot transfer money to an account that doesn't belong to me without proving that it really is me making the transfer.

There are a couple ways you can design a web application that requires privilege elevation. Let's take a look at how to do it with Claims Based Authentication and WIF.

First off, let's look at the protocol. Out of the box WIF supports the WS-Federation protocol. The passive version of the protocol supports a query parameter of wauth. This parameter defines how authentication should happen. The values for it are mostly specific to each STS; however, there are a few well-defined values that the SAML protocol specifies. These values are passed to the STS to tell it to authenticate using a particular method. Here are some of the most often used:

  • Password – urn:oasis:names:tc:SAML:1.0:am:password
  • Kerberos – urn:ietf:rfc:1510
  • TLS – urn:ietf:rfc:2246
  • PKI/X509 – urn:oasis:names:tc:SAML:1.0:am:X509-PKI
  • Default – urn:oasis:names:tc:SAML:1.0:am:unspecified

When you pass one of these values to the STS during the sign-in request, the STS should then request that particular type of credential. The wauth parameter also supports arbitrary values, so we can create a value that tells the STS we want to re-authenticate because of an elevation request.

All you have to do is redirect to the STS with the wauth parameter:

https://yoursts/authenticate?wa=wsignin1.0&wtrealm=uri:myrp&wauth=urn:super:secure:elevation:method
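In code that's just URL construction plus a redirect. A rough sketch, reusing the made-up elevation URI from above:

string url = "https://yoursts/authenticate"
    + "?wa=wsignin1.0"
    + "&wtrealm=" + HttpUtility.UrlEncode("uri:myrp")
    + "&wauth=" + HttpUtility.UrlEncode("urn:super:secure:elevation:method");

HttpContext.Current.Response.Redirect(url);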

Once the user has re-authenticated you need to tell the relying party somehow. This is where the Authentication Method claim comes in handy:

http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationmethod

Just add the claim to the output identity:

protected override IClaimsIdentity GetOutputClaimsIdentity(IClaimsPrincipal principal, RequestSecurityToken request, Scope scope)
{
    // Within the custom STS, stamp the outgoing identity with the
    // authentication method that was actually used.
    IClaimsIdentity ident = principal.Identity as IClaimsIdentity;
    ident.Claims.Add(new Claim(ClaimTypes.AuthenticationMethod, "urn:super:secure:elevation:method"));

    // finish filling claims...

    return ident;
}

At that point the relying party can then check to see whether the method satisfies the request. You could write an extension method like:

public static bool IsElevated(this IClaimsPrincipal principal)
{
    return principal.Identity.AuthenticationType == "urn:super:secure:elevation:method";
}

And then have a bit of code to check:

var p = Thread.CurrentPrincipal as IClaimsPrincipal;
if (p != null && p.IsElevated())
{
    DoSomethingRequiringElevation();
}

This satisfies half the requirements for elevating privilege. We need to make it so the user is only elevated for a short period of time. We can do this in an event handler after the token is received by the RP.  In Global.asax we could do something like:

void Application_Start(object sender, EventArgs e)
{
    FederatedAuthentication.SessionAuthenticationModule.SessionSecurityTokenReceived += new EventHandler<SessionSecurityTokenReceivedEventArgs>(SessionAuthenticationModule_SessionSecurityTokenReceived);
}

void SessionAuthenticationModule_SessionSecurityTokenReceived(object sender, SessionSecurityTokenReceivedEventArgs e)
{
    if (e.SessionToken.ClaimsPrincipal.IsElevated())
    {
        // Reissue the session token with a shortened, 15 minute lifetime.
        SessionSecurityToken token = new SessionSecurityToken(e.SessionToken.ClaimsPrincipal, e.SessionToken.Context, e.SessionToken.ValidFrom, e.SessionToken.ValidFrom.AddMinutes(15));
        e.SessionToken = token;
    }
}

This will check to see if the incoming token has been elevated, and if it has, set the lifetime of the token to 15 minutes.

There are other places where this could occur like within the STS itself, however this value may need to be independent of the STS.

As I said earlier, as more and more things are moving to the web it's important that we don't lose control of privileges. By requiring certain types of authentication in our relying parties, we can easily support elevation by requiring the STS to re-authenticate.

Dear Recipient

by Steve Syfuhs / July 18, 2011 04:00 PM

Got this email just now.  I was amused (mainly because it got past the spam filters)…

Dear Email user,

This message is from Administration center Maintenance Policy verified that your mailbox exceeds (10.9GB) its limit, you may be unable to receive new email or send mails, To re- set your SPACE on our database prior to maintain your IN BOX, You must Reply to this email by Confirming your account details below.

Surname:

First Name:

User-Name:

Password:

Date Of Birth:

Failure to do this will immediately render your Web-email address deactivated from our database.

Thanks

Admin Help Desk© 2011

And my Reply…

Dear Admin Help Desk© 2011,

Thank you for the warning that my SPACE on your database prior to maintain my IN BOX exceeds its limit. Please find the following information handy for future reference.

Surname: Jobs

First Name: Steve

User-Name: Jobbers1

Password: CenterOfTheUniverse

Date Of Birth: December 25th, 0001

If for whatever reason you cannot reset my SPACE please email me back at tips@fbi.gov and I will be sure to help you.

Part 5: Incident Response Management with Team Foundation Server

by Steve Syfuhs / July 01, 2011 02:00 PM

Over on the Canadian Solution Developer's blog I have a series on the basics of writing secure applications. It's a bit of an introduction to all the things we should know in order to write software that doesn't contain too many vulnerabilities. This is part five of the series, unedited for all to enjoy.

There are only a few certainties in life: death, taxes, me getting this post in late, and one of your applications getting attacked.  Throughout the lifetime of an application it will undergo a barrage of attacks – especially if it's public facing.  If you followed the SDL, tested properly, coded securely, and managed well, you will have gotten most of the bugs out.

Most.

There will always be bugs in production code, and there will very likely always be a security bug in production code.  Further, if there is a security bug in production code, an attacker will probably find it.  Perhaps the best metric for security is along the lines of mean-time-to-failure.  Or rather, mean-time-to-breach.  All safes for storing valuables are rated in how long they can withstand certain types of attacks – not whether they can, but how long they can.  There is no one-single thing we can do to prevent an attack, and we cannot prevent all attacks.  It's just not in the cards.  So, it stands to reason then that we should prepare for something bad happening.  The final stage of the SDL requires that an Incident Response Plan is created.  This is the procedure to follow in the event of a vulnerability being found.

In security parlance, there are protocols and procedures.  The majority of the SDL is all protocol.  A protocol is the usual way to do things.  It's the list of steps you follow to accomplish a task that is associated with a normal working condition, e.g. fuzzing a file parser during development.  You follow a set of steps to fuzz something, and you really don't deviate from those steps.  A procedure is when something is different.  A procedure is reactive.  How you respond to a security breach is a procedure.  It's a set of steps, but it's not a normal condition.

An Incident Response Plan (IRP - the procedure) serves a few functions:

  • It has the list of people to contact in the event of the emergency
  • It is the actual list of steps to follow when bad things happen
  • It includes references to other procedures for code written by other teams

This may be one of the more painful parts of the SDL, because it's mostly process over anything else.  Luckily there is a wonderful product by Microsoft that helps: Team Foundation Server.  For those of you who just cringed, bear with me.

Microsoft released the MSF-Agile plus Security Development Lifecycle Process Template for VS 2010 (it also takes second place in the longest product name contest) to make the entire SDL process easier for developers.  There is the SDL Process Template for 2008 as well.

It's useful for each stage of the SDL, but we want to take a look at how it can help with managing the IRP.  First though, let's define the IRP.

Emergency Contacts (Incident Response Team)

The contacts usually need to be available 24 hours a day, seven days a week.  These people have a range of functions depending on the severity of the breach:

  • Developer – Someone to comprehend and/or triage the problem
  • Tester – Someone to test and verify any changes
  • Manager – Someone to approve changes that need to be made
  • Marketing/PR – Someone to make a public announcement (if necessary)

Each plan is different for each application and for each organization, so there may be ancillary people involved as well (perhaps an end user to verify data).  Each person isn't necessarily required at each stage of the response, but they still need to be available in the event that something changes.

The Incident Response Plan

Over the years I've written a few Incident Response Plans (never mind that I was asked to do it after an attack most times – you WILL go out and create one after reading this, right?).  Each plan was unique in its own way, but there were commonalities as well.

Each plan should provide the steps to answer a few questions about the vulnerability:

  • How was the vulnerability disclosed?  Did someone attack, or did someone let you know about it?
  • Was the vulnerability found in something you host, or an application that your customers host?
  • Is it an ongoing attack?
  • What was breached?
  • How do you notify your customers about the vulnerability?
  • When do you notify them about the vulnerability?

And each plan should provide the steps to answer a few questions about the fix:

  • If it's an ongoing attack, how do you stop it?
  • How do you test the fix?
  • How do you deploy the fix?
  • How do you notify the public about the fix?

Some of these questions may not be answerable immediately – you may need to wait until a postmortem to answer them.

As an example, here is the high-level IRP:

  1. The Attack – It's already happened
  2. Evaluate the state of the systems or products to determine the extent of the vulnerability
    • What was breached?
    • What is the vulnerability?
  3. Define the first step to mitigate the threat
    • How do you stop the threat?
    • Design the bug fix
  4. Isolate the vulnerabilities if possible
    • Disconnect targeted machine from network
    • Complete forensic backup of system
    • Turn off the targeted machine if hosted
  5. Initiate the mitigation plan
    • Develop the bug fix
    • Test the bug fix
  6. Alert the necessary people
    • Get Marketing/PR to inform clients of breach (don't forget to tell them about the fix too!)
    • If necessary, inform the proper legal/governmental bodies
  7. Deploy any fixes
    • Rebuild any affected systems
    • Deploy patch(es)
    • Reconnect to network
  8. Follow up with legal/governmental bodies if prosecution of attacker is necessary
    • Analyze forensic backups of systems
  9. Do a postmortem of the attack/vulnerability
    • What went wrong?
    • Why did it go wrong?
    • What went right?
    • Why did it go right?
    • How can this class of attack be mitigated in the future?
    • Are there any other products/systems that would be affected by the same class?

Some of the procedures can be done in parallel, hence the need for people to be on call.

Team Foundation Server

So now that we have a basic plan created, we should make it easy to implement.  The SDL Process Template (mentioned above) creates a set of task lists and bug types within TFS projects that are used to define things like security bugs, SDL-specific tasks, exit criteria, etc.

While these can (and should) be used throughout the lifetime of the project, they can also be used to map out the procedures in the IRP.  In fact, a new project creates an entry in Open SDL Tasks to create an Incident Response Team.

A bug works well to manage incident responses.

Once a bug is created we can link a new task with the bug.

And then we can assign a user to the task.

Each bug and task are now visible in the Security Exit Criteria query.

Once all the items in the Exit Criteria have been met, you can release the patch.

Conclusion

Security is a funny thing. A lot of times you don't think about it until it's too late. Other times you follow the SDL completely, and you still get attacked.

In the last four posts we looked at writing secure software from a pretty high level.  We touched on common vulnerabilities and their mitigations, tools you can use to test for vulnerabilities, some thoughts to apply to architecting the application securely, and finally we looked at how to respond to problems after release.  By no means will these posts automatically make you write secure code, but hopefully they have given you guidance to start understanding what goes into writing secure code.  It's a lot of work, and sometimes it's hard work.

Finally, there is an idea I like to put into the first section of every Incident Response Plan I've written, and I think it applies to writing software securely in general:

Something bad just happened.  This is not the time to panic, nor the time to place blame.  Your goal is to make sure the affected system or application is secured and in working order, and your customers are protected.

Something bad may not have happened yet, and it may not in the future, but it's important to plan accordingly because your goal should be to protect the application, the system, and most importantly, the customer.
