
MVP Summit 2012

by Steve Syfuhs / February 27, 2012 01:00 PM

I'll be in Redmond this week for the 2012 MVP Summit. It's my first summit so I'm pretty fricken excited! Hopefully there will be some cool announcements coming, but I have to respect the NDA.

To be continued.

Security Development Conference May 15-16 in Washington, DC

by Steve Syfuhs / February 20, 2012 12:50 PM

Registration for the Security Development Conference went live not too long ago, and I just registered. Early bird pricing is $300.

Check it out.


3 TRACKS & 24 SESSIONS

The session tracks for this industry event will target three important roles for any security organization: security engineers, business decision makers, and lifecycle process managers. The sessions in these tracks will include experts representing over 30 organizations from a variety of industries. Visit the event website to see our current list of speakers, spread the word and register early to join us at the Security Development Conference 2012 on May 15 & 16!

WHY YOU SHOULD ATTEND

  • Accelerate Adoption – Hear from leaders across a variety of organizations and learn from their experiences on how to accelerate SDL adoption in your own organization
  • Gain Efficiencies – Learn effective ways to align SDL practices across engineering, business, and management
  • Networking – Interact with peers, vendors and sponsors who provide SDL services, training, and tools
  • Affordable Training – This is an affordable training opportunity that can benefit your entire security team
  • Continuing Education – Earn 8 Continuing Professional Education (CPE) credits toward your CISSP certification

REGISTRATION FEES

Early Bird (February 20 - March 15) $300
Discount (March 16 - April 13) $400
Standard (April 14 - May 11) $500
Onsite Rate (May 12-May 16) $700

Windows Azure Access Control Service Announcements

by Steve Syfuhs / December 21, 2011 11:20 AM

This seems to have made the rounds yesterday and today. The Windows Azure ACS team decided to extend the promotional period to December 20th 2012. In other words, free for the next year. Sweet! I can’t say it was on my Christmas wish list, but it certainly is a great little gift.

Also, ACS v1 is being deprecated the same day. Curiously, that was on my wish list. Winking smile

ACS 1.0 will be officially taken offline on December 20, 2012. This 12 month period along with the ACS 1.0 Migration Tool allows customers with ACS 1.0 namespaces to proactively migrate to ACS 2.0.

Frankly, I don’t think many people used v1, or if they did they migrated to v2 quickly after it was released.

For those that haven’t migrated yet, Microsoft released a set of guidelines for migrating. You really should get on that.

Guide to Claims-Based Identity Second Edition

by Steve Syfuhs / December 13, 2011 10:28 AM

It looks like the Guide to Claims-Based Identity and Access Control has been released as a second edition!

Take a look at the list of authors:

If you want a list of experts on security then look no further. These guys are some of the best in the industry and are my go-to for resources on Claims.

Talking ADFS on RunAs Radio

by Steve Syfuhs / December 01, 2011 07:02 PM

During the Toronto stop of the TechDays tour in Canada Richard Campbell was in town talking to a bunch of really smart people about the latest and greatest technologies they've been working on.

And then me for some reason.

We got to talk about ADFS and associates:

Richard talks to Steve Syfuhs at TechDays Toronto about IT Pros providing security services for developers using Active Directory Federated Services. IT and development talking to each other willingly? Perish the thought! But in truth, Steve makes it clear that ADFS provides a great wrapper for developers to access active directory or any other service that has security claims that an application might require. Azure depends on it, even Office 365 can take advantage of ADFS. Steve discusses how IT can work with developers to make the jobs of both groups easier.

You can listen to it here: http://www.runasradio.com/default.aspx?showNum=240

I need to work on using fewer vague analogies.

Change of Scenery

by Steve Syfuhs / November 29, 2011 12:56 PM

Every once in a while you need to make a life-altering decision.

Last night I sent an email to the ObjectSharp team telling them I had resigned (I had spoken to the bosses in person prior).

Boy, talk about blunt, eh?

Every once in a while you are offered a once in a lifetime opportunity to do something pretty amazing. I’ve had three of these opportunities. The first was Woodbine Entertainment where I got my start in the Toronto development world. The second was ObjectSharp where I have been able to learn so much from some of the brightest minds in the industry. The third was two weeks ago in Vancouver.

Two weeks ago I was offered a position to lead development of a product for an ISV in BC.

Me? A leader? Wait. Huh?

Well okay, not quite. Cue the screeching cut-away-from-sappy noise.

So what's the deal? I'm not saying, yet. At this point I'm sure a few people could guess though. Smile

Suffice to say I'll be moving to BC at the end of the year. I'll be at ObjectSharp until December 16th and then will be moving right around the new year.

I'm not sure I can really describe how excited I am about this new position. I'll be working on an awesome product with an awesome group of people.

Of course, it's a little sad leaving ObjectSharp. They have such a great team, with some of the smartest people in the industry.

So it should be an interesting experience.

Input Validation: The Good, The Bad, and the What the Hell are you Doing?

by Steve Syfuhs / November 28, 2011 11:00 AM

Good morning class!

Pop quiz: How many of you do proper input validation in your ASP.NET site, WebForms, MVC, or otherwise?

Some Background

There is an axiom in computer science: never trust user input because it's guaranteed to contain invalid data at some point.

In security we have a similar axiom: never trust user input because it's guaranteed to contain invalid data at some point, and your code is bound to contain a security vulnerability somewhere, somehow. Granted, it doesn't flow as well as the former, but the point still stands.

The solution to this problem is conceptually simple: validate, validate, validate. Every single piece of input that is received from a user should be validated.

Of course when anyone says something is a simple concept it's bound to be stupidly complex to get the implementation right. Unfortunately proper validation is not immune to this problem. Why?

The Problem

Our applications are driven by user data. Without data our applications would be pretty useless. This data is usually pretty domain-specific too, so everything we receive should have a particular structure, and there's a pretty good chance that a few of these structures are so specific to the organization that there is no well-defined standard. By that I mean it becomes pretty difficult to validate certain data structures when they are custom designed and potentially highly complex.

So we have this problem. First, if we don't validate that the stuff we are given is clean, our application starts behaving oddly and that limits the usefulness of the application. Second, if we don't validate that the stuff we are given is clean, and there is a bug in the code, we have a potential vulnerability that could wreak havoc for the users.

The Solution

The solution, as stated above, is to validate all the input, both from a business perspective and from a security perspective.

In this post we are going to look at the best way to validate the security of incoming data within ASP.NET. This requires looking into how ASP.NET processes input from the user.

When ASP.NET receives something from the user it can come from four different vectors:

  • Within the Query String (?foo=bar)
  • Within the Form (via a POST)
  • Within a cookie
  • Within the server variables (a collection generated from HTTP headers and internal server configuration)

These vectors drive ASP.NET, and you can potentially compromise an application by maliciously modifying any of them.

Pop quiz: How many of you check whether custom cookies exist before trying to use them? Almost everyone, good. Now, how many of you validate that the data within the cookies is, well, valid before using them?

What about checking your HTTP headers?
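
To make that concrete, here's a minimal sketch of what that kind of defensive check looks like. The cookie and header names are made up for illustration; the point is that an existence check alone isn't enough, the values themselves need to be validated.

using System;
using System.Text.RegularExpressions;
using System.Web;

public static class InputChecks
{
    // Hypothetical cookie that is supposed to contain a small integer (a page size).
    public static int GetPageSize(HttpRequest request)
    {
        HttpCookie cookie = request.Cookies["pageSize"]; // existence check

        int pageSize;
        if (cookie == null || !int.TryParse(cookie.Value, out pageSize))
            return 25; // don't trust the cookie; fall back to a sane default

        return Math.Min(Math.Max(pageSize, 1), 100); // clamp to the expected range
    }

    // Hypothetical custom header validated against a whitelist pattern.
    public static bool HasValidCorrelationId(HttpRequest request)
    {
        string value = request.Headers["X-Correlation-Id"];
        return value != null && Regex.IsMatch(value, @"^[A-Za-z0-9\-]{1,64}$");
    }
}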

The Bypass

Luckily ASP.NET has some out-of-the-box behaviors that protect the application from malicious input. Unfortunately ASP.NET isn't very forgiving when it comes to validation. It doesn't distinguish between quasi-good input and bad input, so anything containing an angle bracket causes a YSoD (Yellow Screen of Death).

The de facto fix for this is to do one of two things:

  • Disable validation in the page declaration within WebForms, or stick a [ValidateInput(false)] attribute on an MVC controller
  • Set <pages validateRequest="false"> in web.config

What this will do is tell ASP.NET to basically skip validating the four vectors and let anything in. It was assumed that you would do validation on your own.
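
For reference, the MVC flavour of that first bullet looks something like this (the controller and action names here are made up; the WebForms equivalent is ValidateRequest="false" in the @ Page directive):

using System.Web.Mvc;

public class CommentsController : Controller
{
    // Request validation is skipped for this action only, so raw markup reaches the code.
    // From here on, encoding and sanitizing the input is entirely your responsibility.
    [HttpPost]
    [ValidateInput(false)]
    public ActionResult Save(string body)
    {
        // ... validate/encode 'body' yourself before storing or rendering it
        return RedirectToAction("Index");
    }
}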

Raise your hand if you think this is a bad idea. Okay, keep your hands up if you've never done this for a production application. At this point almost everyone should have put their hands down. I did.

The reason we do this is because as I said before, ASP.NET isn't very forgiving when it comes to validation. It's all or nothing.

What's worse, as ASP.NET got older it started becoming pickier about what it let in so you had more reasons for disabling validation. In .NET 4 validation occurs at a much earlier point. It's a major breaking change:

The request validation feature in ASP.NET provides a certain level of default protection against cross-site scripting (XSS) attacks. In previous versions of ASP.NET, request validation was enabled by default. However, it applied only to ASP.NET pages (.aspx files and their class files) and only when those pages were executing.

In ASP.NET 4, by default, request validation is enabled for all requests, because it is enabled before the BeginRequest phase of an HTTP request. As a result, request validation applies to requests for all ASP.NET resources, not just .aspx page requests. This includes requests such as Web service calls and custom HTTP handlers. Request validation is also active when custom HTTP modules are reading the contents of an HTTP request.

Since backwards compatibility is so important, a configuration attribute was also added that tells ASP.NET to revert to the 2.0 validation behavior, meaning validation occurs later in the request lifecycle, as it did in ASP.NET 2.0:

<httpRuntime requestValidationMode="2.0" />

If you do a search online for request validation almost everyone comes back with this solution. In fact, it became a well-known solution with the Windows Identity Foundation in ASP.NET 4.0, because when you do a federated sign-on, WIF receives the token as a chunk of XML. The validator doesn't approve because of the angle brackets. If you set the validation mode to 2.0, the validator checks after the request passes through all HttpModules, which is how WIF consumes that token via the WSFederationAuthenticationModule.

The Proper Solution

So we have the problem. We also have built in functionality that solves our problem, but the way it does it kind of sucks (it's not a bad solution, but it's also not extensible). We want a way that doesn't suck.

In earlier versions of ASP.NET the best solution was to disable validation and, within an HttpModule, check every vector for potentially malicious input. The benefit here is that you have control over what is malicious and what is not. You would write something along these lines:

public class ValidatorHttpModule : IHttpModule
{
    public void Dispose() { }

    public void Init(HttpApplication context)
    {
        context.BeginRequest += new EventHandler(context_BeginRequest);
    }

    void context_BeginRequest(object sender, EventArgs e)
    {
        HttpApplication context = (HttpApplication)sender;

        foreach (var q in context.Request.QueryString)
        {
            if (CheckQueryString(q))
            {
                throw new SecurityException("Bad validation");
            }
        }

        foreach (var f in context.Request.Form)
        {
            if (CheckForm(f))
            {
                throw new SecurityException("Bad validation");
            }
        }

        foreach (var c in context.Request.Cookies)
        {
            if (CheckCookie(c))
            {
                throw new SecurityException("Bad validation");
            }
        }

        foreach (var s in context.Request.ServerVariables)
        {
            if (CheckServerVariable(s))
            {
                throw new SecurityException("Bad validation");
            }
        }
    }

    // <snip />
}

The downside to this approach is that you are stuck with pretty clunky validation logic. It executes on every single request, which may not always be necessary. You are also forced to run the code in whatever order your HttpModule happens to be initialized. It won't necessarily execute first, so it won't necessarily protect all parts of your application. Protection that doesn't cover the whole application against a particular attack isn't very useful.  <Cynicism>Half-assed protection is only good when you have half an ass.</Cynicism>

What we want is something that executes before everything else. In our HttpModule we are validating on BeginRequest, but we want to validate before BeginRequest.

The way we do this is with a custom RequestValidator. On a side note, this post may qualify as having the longest introduction ever. In any case, this custom RequestValidator is set within the httpRuntime tag within the web.config:

<httpRuntime requestValidationType="Syfuhs.Web.Security.CustomRequestValidator" />

We create a custom request validator by creating a class with a base class of System.Web.Util.RequestValidator. Then we override the IsValidRequestString method.

This method tells us where the input is coming from, e.g. from a form or from a cookie. The validator is called for each value within the four collections above, but only when a value exists, which saves us the trouble of going over everything in each request. Within an HttpModule we could certainly build out the same functionality by checking the contents of each collection, but this saves us the hassle of writing the boilerplate code. It also gives us a way of describing the problem in detail, because we can pass back the index of where the problem exists. So if we find a problem at character 173 we can pass that value back to the caller and ASP.NET will throw an exception describing that index. This is how we get such a detailed exception from WIF:

A Potentially Dangerous Request.Form Value Was Detected from the Client (wresult="<t:RequestSecurityTo...")

Our validator class ends up looking like:

public class MyCustomRequestValidator : RequestValidator
{
    protected override bool IsValidRequestString(HttpContext context, string value, RequestValidationSource requestValidationSource, string collectionKey, out int validationFailureIndex)
    {
        validationFailureIndex = 0;

        switch (requestValidationSource)
        {
            case RequestValidationSource.Cookies:
                return ValidateCookie(collectionKey, value, out validationFailureIndex);

            case RequestValidationSource.Form:
                return ValidateFormValue(collectionKey, value, out validationFailureIndex);

            // <snip />
        }

        return base.IsValidRequestString(context, value, requestValidationSource, collectionKey, out validationFailureIndex);
    }

    // <snip />
}

Each application has different validation requirements so I've just mocked up how you would create a custom validator.

If you use this design you can easily validate all inputs across the application, and you don't have to turn off validation.

So once again, pop quiz: How many of you do proper input validation?

Strongly Typed Claims

by Steve Syfuhs / November 12, 2011 04:03 PM

Sometimes it's a pain in the neck working with Claims. A lot of the time you need to look for a particular claim, and that usually means looping through the claims collection and parsing the value to a particular type.
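
For the sake of illustration, that dance with the WIF (Microsoft.IdentityModel) API looks roughly like this:

using System.Linq;
using System.Threading;
using Microsoft.IdentityModel.Claims;

public static class ClaimsDance
{
    public static string GetEmailAddress()
    {
        IClaimsIdentity identity = Thread.CurrentPrincipal.Identity as IClaimsIdentity;

        if (identity == null)
            return null;

        // Loop through the collection looking for the claim type we care about...
        Claim emailClaim = identity.Claims
            .FirstOrDefault(c => c.ClaimType == ClaimTypes.Email);

        // ...and hand back its raw string value; parsing to anything else is up to the caller.
        return emailClaim == null ? null : emailClaim.Value;
    }
}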

This little dance is the trade-off for having such a simple interface to a potentially arbitrary collection of claims. Most of the time this works, but every once in a while you need to create a basic user object that contains some strongly typed properties. You could build up a basic object like:

public class User
{
    public string UserName { get; set; }

    public string EmailAddress { get; set; }

    public string Department { get; set; }

    public List<string> Roles { get; set; }
}

This would require you to intercept the IClaimsIdentity object and search through the claims collection setting each property manually whenever you wanted to get access to the data. This can get tiresome and is error prone.

I think I've come up with a relatively complete solution to this problem. Basically it works by creating a custom IClaimsIdentity class that sets a User property through reflection. You can then access the user through Thread.CurrentPrincipal.Identity like this:

TypedClaimsIdentity ident = Thread.CurrentPrincipal.Identity as TypedClaimsIdentity;
string email = ident.User.EmailAddress.Value;
var userRoles = ident.User.Roles;

Once you've defined the particular types and their associated claims, the particular values will be set through reflection. So to declare your user properties, create a class like this:

public class MyTypedClaimsUser : TypedClaims
{
    public MyTypedClaimsUser()
    {
        this.Name = new TypedClaim<string>();
        this.EmailAddress = new TypedClaim<string>();
        this.Roles = new List<TypedClaim<string>>();
        this.Expiration = new TypedClaim<DateTime>();
        this.AuthenticationMethod = new TypedClaim<string>();
    }

    [TypedClaim(ClaimTypes.Name, false)]
    public TypedClaim<string> Name { get; private set; }

    [TypedClaim(ClaimTypes.Email, false)]
    public TypedClaim<string> EmailAddress { get; private set; }

    [TypedClaim(ClaimTypes.Role, true)]
    public List<TypedClaim<string>> Roles { get; private set; }

    [TypedClaim(ClaimTypes.Expiration, true)]
    public TypedClaim<DateTime> Expiration { get; private set; }

    [TypedClaim(ClaimTypes.AuthenticationMethod, false)]
    public TypedClaim<string> AuthenticationMethod { get; private set; }

    [TypedClaim(ClaimTypes.GroupSid, false)]
    public TypedClaim<string> GroupSid { get; private set; }
}

Each property must be defined a certain way: it must be decorated with a particular attribute, TypedClaimAttribute. This attribute helps the reflection code associate the property with the expected claim. That way the Name property will always be mapped to the ClaimTypes.Name claim type, which is the http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name claim. It also tells the code when a claim type is likely to have multiple values, like the Role claim.
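
The attribute itself isn't shown in this post (it's in the download below), but given how it's used above it would look something like this sketch; the parameter name isMultiValued is my guess at what the second argument means:

using System;

// A sketch of the attribute, inferred from its usage above; the real definition
// ships with the downloadable code at the end of the post.
[AttributeUsage(AttributeTargets.Property, AllowMultiple = false)]
public sealed class TypedClaimAttribute : Attribute
{
    public TypedClaimAttribute(string claimType, bool isMultiValued)
    {
        ClaimType = claimType;
        IsMultiValued = isMultiValued;
    }

    // The claim type URI this property maps to, e.g. ClaimTypes.Name.
    public string ClaimType { get; private set; }

    // True for claim types that can legitimately appear more than once, like Role.
    public bool IsMultiValued { get; private set; }
}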

Each property is also of a particular type: TypedClaim<T>. In theory I could have just used simple types like strings but by going this route you can get access to claim metadata like Name.ClaimType or Name.Issuer. TypedClaim<T> is inherited from Claim.

So how does this all work? Well first you need to be able to add the User object into the Identity object. This is done by creating a custom IClaimsIdentity class:

[Serializable]
public class TypedClaimsIdentity : IClaimsIdentity
{
    public TypedClaimsIdentity(IClaimsIdentity identity)
    {
        user = new MyTypedClaimsUser();

        if (identity.Claims != null)
            this.claims = identity.Claims;
        else
            claims = new ClaimCollection(identity);

        this.Actor = identity.Actor;
        this.AuthenticationType = identity.AuthenticationType;

        Update();
    }

    private void Update()
    {
        user.Update(this.claims);
    }

    private MyTypedClaimsUser user;

    public MyTypedClaimsUser User
    {
        get
        {
            Update();
            return user;
        }
    }

    private ClaimCollection claims;

    public ClaimCollection Claims
    {
        get
        {
            Update();
            return claims;
        }
    }

    public IClaimsIdentity Actor { get; set; }

    public SecurityToken BootstrapToken { get; set; }

    public IClaimsIdentity Copy()
    {
        ClaimsIdentity claimsIdentity = new ClaimsIdentity(this.AuthenticationType);

        if (this.Claims != null)
        {
            claimsIdentity.Claims.AddRange(claims.CopyWithSubject(claimsIdentity));
        }

        claimsIdentity.Label = this.Label;
        claimsIdentity.NameClaimType = this.NameClaimType;
        claimsIdentity.RoleClaimType = this.RoleClaimType;
        claimsIdentity.BootstrapToken = this.BootstrapToken;

        return claimsIdentity;
    }

    public string Label { get; set; }

    public string NameClaimType { get; set; }

    public string RoleClaimType { get; set; }

    public string AuthenticationType { get; private set; }

    public bool IsAuthenticated { get { return claims.Count > 0; } }

    public string Name { get { return User.Name.Value; } }
}

There isn't anything spectacularly interesting about this class. The important part is the constructor. It only accepts an IClaimsIdentity object because it's designed as a way to wrap around an already created identity. It then updates the User object through Update().

The User object is updated through reflection. The Update() method calls User.Update(…), which is defined within the base class of MyTypedClaimsUser. This calls into a helper class that looks through the User object and finds any properties that have the TypedClaimAttribute.

EDIT: When it comes to reflection, there is always a better way to do something. My original code was mostly a PoC and didn't make use of existing .NET-isms. I've edited this bit to include the code changes.

The helper class was originally a bit clunky because all it did was look through the properties, if/else through their types, and parse them:

if (type == typeof(string))
{
    return new TypedClaim<string>(selectedClaims.First()) { Value = selectedClaims.First().Value };
}

This really isn't the smartest way to do it because .NET already contains some pretty strong conversion functions; specifically Convert.ChangeType(value, type).

Going this route requires generating the proper TypedClaim<T> though. Many thanks to Anna Lear because she pointed out the MakeGenericType(…) method, which allows you to take a type and convert it to a generic type with the specified type parameters. That way I could dynamically pass a type into a generic without hardcoding anything. This allows the TypedClaim<T> to be set at runtime without having to code for each particular parameter. So you end up with basic logic along the lines of:

Type constructed = typeof(TypedClaim<>).MakeGenericType(new Type[] { genericParamType });

object val = Convert.ChangeType(claim.Value, genericParamType);

return Activator.CreateInstance(constructed, claim.ClaimType, val);

The Activator.CreateInstance method constructs an instance of the particular type, which is eventually passed into PropertyInfo.SetValue(…).
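
Putting those pieces together, the property-mapping pass described above might look roughly like this. It's a sketch, not the code from the download: it assumes the TypedClaimAttribute members sketched earlier and skips multi-valued properties like Roles for brevity.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;
using Microsoft.IdentityModel.Claims;

internal static class TypedClaimMapper
{
    // Walks the user object's properties and fills in any TypedClaim<T> values it can.
    public static void MapClaimsToProperties(object user, IEnumerable<Claim> claims)
    {
        foreach (PropertyInfo property in user.GetType().GetProperties())
        {
            var attribute = (TypedClaimAttribute)Attribute.GetCustomAttribute(
                property, typeof(TypedClaimAttribute));

            // Only properties decorated with the attribute participate.
            if (attribute == null)
                continue;

            // Multi-valued properties (e.g. List<TypedClaim<string>> Roles) are skipped in this sketch.
            if (!property.PropertyType.IsGenericType ||
                property.PropertyType.GetGenericTypeDefinition() != typeof(TypedClaim<>))
                continue;

            Claim claim = claims.FirstOrDefault(c => c.ClaimType == attribute.ClaimType);

            if (claim == null)
                continue;

            // Build TypedClaim<string>, TypedClaim<DateTime>, etc. at runtime.
            Type genericParamType = property.PropertyType.GetGenericArguments()[0];
            Type constructed = typeof(TypedClaim<>).MakeGenericType(new Type[] { genericParamType });

            // Convert the raw claim string into the declared type, then build the wrapper.
            object val = Convert.ChangeType(claim.Value, genericParamType);
            object typedClaim = Activator.CreateInstance(constructed, claim.ClaimType, val);

            // The properties have private setters, so grab the non-public set accessor.
            property.GetSetMethod(true).Invoke(user, new[] { typedClaim });
        }
    }
}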

Finally, it's time to integrate this into your web application. The best location is probably going to be through a custom ClaimsAuthenticationManager. It works like this:

public class TypedClaimsAuthenticationManager : ClaimsAuthenticationManager
{
    public override IClaimsPrincipal Authenticate(string resourceName, IClaimsPrincipal incomingPrincipal)
    {
        if (!incomingPrincipal.Identity.IsAuthenticated)
            return base.Authenticate(resourceName, incomingPrincipal);

        for (int i = 0; i < incomingPrincipal.Identities.Count; i++)
            incomingPrincipal.Identities[i] = new TypedClaimsIdentity(incomingPrincipal.Identities[i]);

        return base.Authenticate(resourceName, incomingPrincipal);
    }
}

Then to tell WIF about this new CAM you need to make a change to the web.config. Within the <microsoft.identityModel><service> section, add this:

<claimsAuthenticationManager type="Syfuhs.IdentityModel.TypedClaimsAuthenticationManager, Syfuhs.IdentityModel" />

By dynamically setting the values of the user object, you can create a fairly robust identity model for your application.

You can download the updated code here: typedclaimsv2.zip (6.21 kb)

You can download the original code here: typedclaims.zip (5.61 kb)

Tamper-Evident Configuration Files in ASP.NET

by Steve Syfuhs / September 28, 2011 04:00 PM

A couple weeks ago someone sent a message to one of our internal mailing lists. His message was pretty straightforward: how do you prevent modifications of a configuration file for an application [while the user has administrative rights on the machine]?

There were a couple responses including mine, which was to cryptographically sign the configuration file with an asymmetric key. For a primer on digital signing, take a look here. Asymmetric signing is one possible way of signing a file. By signing it this way the configuration file could be signed by an administrator before deploying the application, and all the application needed to validate the signature was the public key associated with the private key used to sign the file. This separated the private key from the application, preventing the configuration from being re-signed maliciously. It’s similar in theory to how code-signing works.

In the event that validation of the configuration file failed, the application would not load, or would gracefully fail and exit the next time the file was checked (or the application had an exclusive lock on the configuration file so it couldn’t be edited while running).

We are also saved the problem of figuring out the signature format because there is a well-respected XML signature schema: http://www.w3.org/2000/09/xmldsig#. WCF uses this format to sign messages. For a good code walkthrough see Barry Dorrans’ Beginning ASP.NET Security. More on the code later, though.

Technically, this won’t prevent changes to the file, but it will prevent the application from accepting those changes. It’s kind of like those tamper-evident tags manufacturers stick on the enclosures of their equipment. They don’t prevent someone from opening the thing, but they will get caught if someone checks it. You’ll notice I didn’t call them “tamper-resistant” tags.

Given this problem, I went one step further and asked myself: how would I do this with a web application? A well-informed ASP.NET developer might suggest using aspnet_regiis to encrypt the configuration file. Encrypting the configuration does protect against certain things, like being able to read configuration data. However, there are a couple problems with this.

  • If I’m an administrator on that server I can easily decrypt the file by calling aspnet_regiis
  • If I’ve found a way to exploit the site, I can potentially overwrite the contents of the file and make the application behave differently
  • The encryption/decryption keys need to be shared in web farms

Consider our goal. We want to prevent a user with administrative privileges from modifying the configuration. Encryption does not help us in this case. Signing the configuration will help, though (as an aside, for more protection you can encrypt the file and then sign it, but that’s out of scope here), because the web application will stop working if a change is made that invalidates the signature.

Of course, there’s one little problem. You can’t stick the signature in the configuration file, because ASP.NET will complain about the foreign XML tag. The original application in question was assumed to have a custom XML file for its configuration, but in reality it doesn’t, so this problem applies there too.

There are three possible solutions to this:

  • Create a custom ConfigurationSection class for the signature
  • Create a custom configuration file and handler, and intercept all calls to web.config
  • Stick the signature of the configuration file into a different file

The first option isn’t a bad idea, but I really didn’t want to muck about with the configuration classes. The second option is, well, pretty much a bad idea in almost all cases, mainly because I’m not entirely sure you can even intercept all calls to the configuration classes.

I went with option three.

The other file has two important parts: the signature of the web.config file, and a signature for itself. This second signature prevents someone from modifying the signature for the web.config file. Our code becomes a bit more complicated because now we need to validate both signatures.

This raises the question: where is the validation handled? It needs to happen early enough in the request lifecycle, so I decided to stick it into an HTTP module, for the sake of modularity.

Hold it, you say. If the code is in an HTTP module, then it needs to be added to the web.config. If you are adding it to the web.config, and protecting the web.config with this module, then removing said module from the web.config will prevent the validation from occurring.

Yep.

There are two ways around this:

  • Add the validation call into Global.asax
  • Hard code the addition of the HTTP Module

It’s very rare that I take the easy approach, so I’ve decided to hard code the addition of the HTTP Module, because sticking the code into a module is cleaner.

In older versions of ASP.NET you had to make some pretty ugly hacks to get the module in, because it needs to happen very early in the startup of the web application. ASP.NET 4.0 added an assembly attribute that lets you run code very early in the application’s startup:

[assembly: PreApplicationStartMethod(typeof(Syfuhs.Security.Web.Startup), "Go")]

Within the Startup class there is a public static method called Go(). This method calls Register() on an instance of my HttpModule. This module inherits from an abstract class called DynamicallyLoadedHttpModule, which in turn implements IHttpModule. This class looks like:

public abstract class DynamicallyLoadedHttpModule : IHttpModule
{
    public void Register()
    {
        DynamicHttpApplication.RegisterModule(delegate(HttpApplication app) { return this; });
    }

    public abstract void Init(HttpApplication context);

    public abstract void Dispose();
}
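
The Startup class referenced by the assembly attribute isn’t shown in the post; a minimal sketch consistent with the description above would be:

namespace Syfuhs.Security.Web
{
    public static class Startup
    {
        // Called by ASP.NET 4.0 via PreApplicationStartMethod, very early in application startup.
        public static void Go()
        {
            // Register the module (defined further down) so DynamicHttpApplication
            // picks it up during Init().
            new SignedConfigurationHttpModule().Register();
        }
    }
}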

The DynamicHttpApplication class inherits from HttpApplication and allows you to load HTTP modules in code. This code was not written by me. It was originally written by Nikhil Kothari:

using HttpModuleFactory = System.Func<System.Web.HttpApplication, System.Web.IHttpModule>;

public abstract class DynamicHttpApplication : HttpApplication
{
    private static readonly Collection<HttpModuleFactory> Factories = new Collection<HttpModuleFactory>();
    private static object _sync = new object();
    private static bool IsInitialized = false;

    private List<IHttpModule> modules;

    public override void Init()
    {
        base.Init();

        if (Factories.Count == 0)
            return;

        List<IHttpModule> dynamicModules = new List<IHttpModule>();

        lock (_sync)
        {
            if (Factories.Count == 0)
                return;

            foreach (HttpModuleFactory factory in Factories)
            {
                IHttpModule m = factory(this);

                if (m != null)
                {
                    m.Init(this);
                    dynamicModules.Add(m);
                }
            }
        }

        if (dynamicModules.Count != 0)
            modules = dynamicModules;

        IsInitialized = true;
    }

    public static void RegisterModule(HttpModuleFactory factory)
    {
        if (IsInitialized)
            throw new InvalidOperationException(Exceptions.CannotRegisterModuleLate);

        if (factory == null)
            throw new ArgumentNullException("factory");

        Factories.Add(factory);
    }

    public override void Dispose()
    {
        if (modules != null)
            modules.ForEach(m => m.Dispose());

        modules = null;
            
        base.Dispose();

        GC.SuppressFinalize(this);
    }
}

Finally, to get this all wired up we modify the Global.asax to inherit from DynamicHttpApplication:

public class Global : DynamicHttpApplication { ... }

Like I said, you could just add the validation code into Global (but where’s the fun in that?)…

So, now that we’ve made it possible to add the HTTP module, let’s actually look at the module:

public sealed class SignedConfigurationHttpModule : DynamicallyLoadedHttpModule
{
    public override void Init(HttpApplication context)
    {
        if (context == null)
            throw new ArgumentNullException("context");

        context.BeginRequest += new EventHandler(context_BeginRequest);
        context.Error += new EventHandler(context_Error);
    }

    private void context_BeginRequest(object sender, EventArgs e)
    {
        HttpApplication app = (HttpApplication)sender;

        SignatureValidator validator = new SignatureValidator(app.Request.PhysicalApplicationPath);

        validator.ValidateConfigurationSignatures(CertificateLocator.LocateSigningCertificate());
    }

    private void context_Error(object sender, EventArgs e)
    {
        HttpApplication app = (HttpApplication)sender;

        foreach (var exception in app.Context.AllErrors)
        {
            if (exception is XmlSignatureValidationFailedException)
            {
                // Maybe do something
                // Or don't...
                break;
            }
        }
    }

    public override void Dispose() { }
}

Nothing special here. Just hooking into the context.BeginRequest event so validation occurs on each request. There would be some performance impact as a result.

The core validation is contained within the SignatureValidator class, and there is a public method that we call to validate the signature file, ValidateConfigurationSignatures(…). This method accepts an X509Certificate2 to compare the signature against.

The signature schema we are using actually encodes the signer’s public key into the signature element; however, we want to go one step further and make sure it’s signed by a particular certificate. This prevents someone from modifying the configuration file and re-signing it with a different private key. Validation of the signature is not enough; we need to make sure it’s signed by someone we trust.

The validator first validates the schema of the signature file. Is the XML well formed? Does the signature file conform to a schema we defined (the schema lives in a Constants class)? Following that, it validates the signature of the signature file itself. Has that file been tampered with? Then it validates the signature of the web.config file. Has the web.config file been tampered with?

Before it can do all of this though, it needs to check to see if the signature file exists. The variable passed into the constructor is the physical path of the web application. The validator knows that the signature file should be in the App_Data folder within the root. This file needs to be here because the folder by default will not let you access anything in it, and we don’t want anyone downloading the file. The path is also hardcoded specifically so changes to the configuration cannot bypass the signature file validation.

Here is the validator:

internal sealed class SignatureValidator
{
    public SignatureValidator(string physicalApplicationPath)
    {
        this.physicalApplicationPath = physicalApplicationPath;
        this.signatureFilePath = Path.Combine(this.physicalApplicationPath, "App_Data\\Signature.xml");
    }

    private string physicalApplicationPath;
    private string signatureFilePath;

    public void ValidateConfigurationSignatures(X509Certificate2 cert)
    {
        Permissions.DemandFilePermission(FileIOPermissionAccess.Read, this.signatureFilePath);

        if (cert == null)
            throw new ArgumentNullException("cert");

        if (cert.HasPrivateKey)
            throw new SecurityException(Exceptions.ValidationCertificateHasPrivateKey);

        if (!File.Exists(signatureFilePath))
            throw new SecurityException(Exceptions.CouldNotLoadSignatureFile);

        XmlDocument doc = new XmlDocument() { PreserveWhitespace = true };
        doc.Load(signatureFilePath);

        ValidateXmlSchema(doc);

        CheckForUnsignedConfig(doc);

        if (!X509CertificateCompare.Compare(cert, ValidateSignature(doc)))
            throw new XmlSignatureValidationFailedException(Exceptions.SignatureFileNotSignedByExpectedCertificate);

        List<XmlSignature> signatures = ParseSignatures(doc);

        ValidateSignatures(signatures, cert);
    }

    private void CheckForUnsignedConfig(XmlDocument doc)
    {
        List<string> signedFiles = new List<string>();

        foreach (XmlElement file in doc.GetElementsByTagName("File"))
        {
            string fileName = Path.Combine(this.physicalApplicationPath, file["FileName"].InnerText);

            signedFiles.Add(fileName.ToUpperInvariant());
        }

        CheckConfigFiles(signedFiles);
    }

    private void CheckConfigFiles(List<string> signedFiles)
    {
        foreach (string file in Directory.EnumerateFiles(this.physicalApplicationPath, "*.config", SearchOption.AllDirectories))
        {
            string path = Path.Combine(this.physicalApplicationPath, file);

            if (!signedFiles.Contains(path.ToUpperInvariant()))
                throw new XmlSignatureValidationFailedException(string.Format(CultureInfo.CurrentCulture, Exceptions.ConfigurationFileWithoutSignature, path));
        }
    }

    private void ValidateXmlSchema(XmlDocument doc)
    {
        using (StringReader fileReader = new StringReader(Constants.SignatureFileSchema))
        using (StringReader signatureReader = new StringReader(Constants.SignatureSchema))
        {
            XmlSchema fileSchema = XmlSchema.Read(fileReader, null);
            XmlSchema signatureSchema = XmlSchema.Read(signatureReader, null);

            doc.Schemas.Add(fileSchema);
            doc.Schemas.Add(signatureSchema);

            doc.Validate(Schemas_ValidationEventHandler);
        }
    }

    void Schemas_ValidationEventHandler(object sender, ValidationEventArgs e)
    {
        throw new XmlSignatureValidationFailedException(Exceptions.InvalidSchema, e.Exception);
    }

    public static X509Certificate2 ValidateSignature(XmlDocument xml)
    {
        if (xml == null)
            throw new ArgumentNullException("xml");

        XmlElement signature = ExtractSignature(xml.DocumentElement);

        return ValidateSignature(xml, signature);
    }

    public static X509Certificate2 ValidateSignature(XmlDocument doc, XmlElement signature)
    {
        if (doc == null)
            throw new ArgumentNullException("doc");

        if (signature == null)
            throw new ArgumentNullException("signature");

        X509Certificate2 signingCert = null;

        SignedXml signed = new SignedXml(doc);
        signed.LoadXml(signature);

        foreach (KeyInfoClause clause in signed.KeyInfo)
        {
            KeyInfoX509Data key = clause as KeyInfoX509Data;

            if (key == null || key.Certificates.Count != 1)
                continue;

            signingCert = (X509Certificate2)key.Certificates[0];
        }

        if (signingCert == null)
            throw new CryptographicException(Exceptions.SigningKeyNotFound);

        if (!signed.CheckSignature())
            throw new CryptographicException(Exceptions.SignatureValidationFailed);

        return signingCert;
    }

    private static void ValidateSignatures(List<XmlSignature> signatures, X509Certificate2 cert)
    {
        foreach (XmlSignature signature in signatures)
        {
            X509Certificate2 signingCert = ValidateSignature(signature.Document, signature.Signature);

            if (!X509CertificateCompare.Compare(cert, signingCert))
                throw new XmlSignatureValidationFailedException(
                    string.Format(CultureInfo.CurrentCulture, 
                    Exceptions.SignatureForFileNotSignedByExpectedCertificate, signature.FileName));
        }
    }

    private List<XmlSignature> ParseSignatures(XmlDocument doc)
    {
        List<XmlSignature> signatures = new List<XmlSignature>();

        foreach (XmlElement file in doc.GetElementsByTagName("File"))
        {
            string fileName = Path.Combine(this.physicalApplicationPath, file["FileName"].InnerText);

            Permissions.DemandFilePermission(FileIOPermissionAccess.Read, fileName);

            if (!File.Exists(fileName))
                throw new FileNotFoundException(string.Format(CultureInfo.CurrentCulture, Exceptions.FileNotFound, fileName));

            XmlDocument fileDoc = new XmlDocument() { PreserveWhitespace = true };
            fileDoc.Load(fileName);

            XmlElement sig = file["FileSignature"] as XmlElement;

            signatures.Add(new XmlSignature()
            {
                FileName = fileName,
                Document = fileDoc,
                Signature = ExtractSignature(sig)
            });
        }

        return signatures;
    }

    private static XmlElement ExtractSignature(XmlElement xml)
    {
        XmlNodeList xmlSignatureNode = xml.GetElementsByTagName("Signature");

        if (xmlSignatureNode.Count <= 0)
            throw new CryptographicException(Exceptions.SignatureNotFound);

        return xmlSignatureNode[xmlSignatureNode.Count - 1] as XmlElement;
    }
}

You’ll notice there is a bit of functionality I didn’t mention. Checking that the web.config file hasn’t been modified isn’t enough. We also need to check if any *other* configuration file has been modified. It’s no good if you leave the root configuration file alone, but modify the <authorization> tag within the administration folder to allow anonymous access, right?

So there is code that looks through the site for any files with the “config” extension, and if a file isn’t listed in the signature file, it throws an exception.

There is also a check done at the very beginning of the validation. If you pass an X509Certificate2 with a private key it will throw an exception. This is absolutely by design. You sign the file with the private key. You validate with the public key. If the private key is present during validation that means you are not separating the keys, and all of this has been a huge waste of time because the private key is not protected. Oops.

Finally, it’s important to know how to sign the files. I’m not a fan of generating XML properly, partially because I’m lazy and partially because it’s a pain to do, so mind the StringBuilder:

public sealed class XmlSigner
{
    public XmlSigner(string appPath)
    {
        this.physicalApplicationPath = appPath;
    }

    string physicalApplicationPath;

    public XmlDocument SignFiles(string[] paths, X509Certificate2 cert)
    {
        if (paths == null || paths.Length == 0)
            throw new ArgumentNullException("paths");

        if (cert == null || !cert.HasPrivateKey)
            throw new ArgumentNullException("cert");

        XmlDocument doc = new XmlDocument() { PreserveWhitespace = true };
        StringBuilder sb = new StringBuilder();

        sb.Append("<Configuration>");
        sb.Append("<Files>");

        foreach (string p in paths)
        {
            sb.Append("<File>");

            sb.AppendFormat("<FileName>{0}</FileName>", p.Replace(this.physicalApplicationPath, ""));
            sb.AppendFormat("<FileSignature><Signature xmlns=\"http://www.w3.org/2000/09/xmldsig#\">{0}</Signature></FileSignature>", 
            SignFile(p, cert).InnerXml);

            sb.Append("</File>");
        }

        sb.Append("</Files>");
        sb.Append("</Configuration>");

        doc.LoadXml(sb.ToString());

        doc.DocumentElement.AppendChild(doc.ImportNode(SignXmlDocument(doc, cert), true));

        return doc;
    }

    public static XmlElement SignFile(string path, X509Certificate2 cert)
    {
        if (string.IsNullOrWhiteSpace(path))
            throw new ArgumentNullException("path");

        if (cert == null || !cert.HasPrivateKey)
            throw new ArgumentException(Exceptions.CertificateDoesNotContainPrivateKey);

        Permissions.DemandFilePermission(FileIOPermissionAccess.Read, path);

        XmlDocument doc = new XmlDocument();
        doc.PreserveWhitespace = true;
        doc.Load(path);

        return SignXmlDocument(doc, cert);
    }

    public static XmlElement SignXmlDocument(XmlDocument doc, X509Certificate2 cert)
    {
        if (doc == null)
            throw new ArgumentNullException("doc");

        if (cert == null || !cert.HasPrivateKey)
            throw new ArgumentException(Exceptions.CertificateDoesNotContainPrivateKey);

        SignedXml signed = new SignedXml(doc) { SigningKey = cert.PrivateKey };

        Reference reference = new Reference() { Uri = "" };

        XmlDsigC14NTransform transform = new XmlDsigC14NTransform();
        reference.AddTransform(transform);

        XmlDsigEnvelopedSignatureTransform envelope = new XmlDsigEnvelopedSignatureTransform();
        reference.AddTransform(envelope);
        signed.AddReference(reference);

        KeyInfo keyInfo = new KeyInfo();
        keyInfo.AddClause(new KeyInfoX509Data(cert));
        signed.KeyInfo = keyInfo;

        signed.ComputeSignature();

        XmlElement xmlSignature = signed.GetXml();

        return xmlSignature;
    }
}

To write this to a file you can call it like this:

XmlWriter writer = XmlWriter.Create(@"C:\Dev\Projects\Syfuhs.Security.Web\Syfuhs.Security.Web.WebTest\App_Data\Signature.xml");
XmlSigner signer = new XmlSigner(Request.PhysicalApplicationPath);

XmlDocument xml = signer.SignFiles(new string[] { 
@"C:\Dev\Projects\Syfuhs.Security.Web\Syfuhs.Security.Web.WebTest\Web.config",
@"C:\Dev\Projects\Syfuhs.Security.Web\Syfuhs.Security.Web.WebTest\Web.debug.config",
@"C:\Dev\Projects\Syfuhs.Security.Web\Syfuhs.Security.Web.WebTest\Web.release.config",
@"C:\Dev\Projects\Syfuhs.Security.Web\Syfuhs.Security.Web.WebTest\Account\Web.config",
@"C:\Dev\Projects\Syfuhs.Security.Web\Syfuhs.Security.Web.WebTest\test.config"
}, 
new X509Certificate2(@"C:\Dev\Projects\Syfuhs.Security.Web\Syfuhs.Security.Web.WebTest\cert.pfx", "1"));

xml.WriteTo(writer);
writer.Flush();

Now within this code, you have to pass in a X509Certificate2 with a private key, otherwise you can’t sign the files.

These processes should occur on different machines. The private key should never be on the server hosting the site. The basic steps for deployment would go something like:

1. Compile web application.
2. Configure site and configuration files on staging server.
3. Run application that signs the configuration and generates the signature file.
4. Drop the signature.xml file into the App_Data folder.
5. Deploy configured and signed application to production.

There is one final note (I think I’ve made that note a few times by now…) and that is the CertificateLocator class. At the moment it just returns an X509Certificate2 from a particular path on my file system. This isn’t necessarily the best approach because it may be possible to overwrite that file. You should store that certificate in a safe place and make a secure call to get it. For instance, a web service call might make sense. If you have a Hardware Security Module (HSM) to store secret bits in, even better.
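
For illustration, a CertificateLocator that pulls the validation certificate (public key only) from the local machine store might look like this; the store name and thumbprint are placeholders, not values from the original code:

using System.Security;
using System.Security.Cryptography.X509Certificates;

public static class CertificateLocator
{
    // Placeholder thumbprint of the certificate used to validate the signatures.
    private const string Thumbprint = "0000000000000000000000000000000000000000";

    public static X509Certificate2 LocateSigningCertificate()
    {
        X509Store store = new X509Store(StoreName.TrustedPeople, StoreLocation.LocalMachine);

        try
        {
            store.Open(OpenFlags.ReadOnly);

            // Look the certificate up by thumbprint; only the public portion should be installed here.
            X509Certificate2Collection matches = store.Certificates.Find(
                X509FindType.FindByThumbprint, Thumbprint, false);

            if (matches.Count == 0)
                throw new SecurityException("Validation certificate not found.");

            return matches[0];
        }
        finally
        {
            store.Close();
        }
    }
}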

Concluding Bits

What have we accomplished by signing our configuration files? We add a degree of trust that our application hasn’t been compromised. In the event that the configuration has been modified, the application stops working. This could be from malicious intent, or careless administrators. This is a great way to prevent one-off changes to configuration files in web farms. It is also a great way to prevent customers from mucking up the configuration file you’ve deployed with your application.

This solution was designed in a way that mitigates quite a few attacks. An attacker cannot modify configuration files. An attacker cannot modify the signature file. An attacker cannot view the signature file. An attacker cannot remove the signature file. An attacker cannot remove the HTTP module that validates the signature without changing the underlying code. An attacker cannot change the underlying code because it’s been compiled before being deployed.

Is it necessary to use on every deployment? No, probably not.

Does it go a little overboard with regard to complexity? Yeah, a little.

Does it protect against a real problem? Absolutely.

Unfortunately it also requires full trust.

Overall it’s a fairly robust solution and shows how you can mitigate certain types of risks seen in the real world.

And of course, it works with both WebForms and MVC.

You can download the full source: https://syfuhs.blob.core.windows.net/files/c6afeabc-36fc-4d41-9aa7-64cc9385280d-configurationsignaturevalidation.zip

Protecting Against BEAST and its Man-in-the-Middle Attack on SSL/TLS 1.0

by Steve Syfuhs / September 22, 2011 04:00 PM

Last week some researchers announced they figured out how to exploit a vulnerability in the SSL 3.0/TLS 1.0 protocol.

This particular vulnerability has been known for a few years, and our friends on the OpenSSL project had written a paper discussing it in 2004. This has been a theoretical problem for a while, but nobody was really able to weaponize the vulnerability.

In theory these researchers have.

The vulnerability is only found in the encryption algorithms that support CBC because of the nature of block ciphers (as opposed to stream ciphers). For more information read the OpenSSL paper above. The attack uses a method called plaintext recovery. Basically, if I know a particular value of plaintext, I can mangle the encrypted data in such a way that makes it possible to guess the rest of the plaintext. The reason this stayed theoretical was that there wasn’t an easy way to guess the plaintext. The researchers have potentially found an easier way to guess.

Now, this vulnerability is only found under certain conditions:

  • You must be running SSL 3.0/TLS 1.0 (TLS 1.1 and 1.2 are not affected)
  • You must be using a block cipher for encryption

Without making this sound like FUD, it should be noted that this hasn’t necessarily become a problem. The security industry hasn’t really made a decision on whether this is a serious problem yet. It’s kind of a wait and see scenario.

Now, why is this a problem? Most web servers are still stuck on TLS v1. And by most, I really mean most. The reason for this is that most web browsers only support TLS v1.

You can thank Mozilla Network Security Services (NSS). This is the crypto plugin that Firefox and Chrome use, and NSS doesn’t support TLS 1.1 or 1.2.

Chrome’s latest developer release now has a fix for this vulnerability, by switching to a stream cipher instead of block cipher.

Curiously Internet Explorer does support TLS 1.1 and 1.2 (and has since IE7). IIS 7 also supports it.

However, IIS only supports TLS v1 by default. You need to enable TLS 1.1 and 1.2.

It’s pretty straightforward, just add the following registry keys:

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2]

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client]
"Enabled"=dword:00000001
"DisabledByDefault"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server]
"Enabled"=dword:00000001
"DisabledByDefault"=dword:00000000

While you are at it, it might not be a bad idea to disable SSL v2 as well:

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 2.0\Server]
"DisabledByDefault"=dword:00000001

EDIT: I guess I should clarify that you should also disable TLS 1.0, otherwise the browser can still request that TLS v1 be used instead.

Set DisabledByDefault=1 under TLS 1.0\Server.
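
If you’d rather push these settings out in code than merge .reg files, a rough equivalent (run elevated, as a 64-bit process) looks like this:

using Microsoft.Win32;

class EnableTls12
{
    static void Main()
    {
        const string basePath = @"SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols";

        // Enable TLS 1.2 for both the client and server sides of Schannel,
        // mirroring the registry entries shown above.
        foreach (string side in new[] { "Client", "Server" })
        {
            using (RegistryKey key = Registry.LocalMachine.CreateSubKey(basePath + @"\TLS 1.2\" + side))
            {
                key.SetValue("Enabled", 1, RegistryValueKind.DWord);
                key.SetValue("DisabledByDefault", 0, RegistryValueKind.DWord);
            }
        }
    }
}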

Next, reboot the server.

If you want to automate the procedure for multiple servers, Derek Seaman has a great little PowerShell script that sets everything up for you. He also shows you how you can test to make sure it actually worked, which is nice.
