

Input Validation: The Good, The Bad, and the What the Hell are you Doing?

by Steve Syfuhs / November 28, 2011 11:00 AM

Good morning class!

Pop quiz: How many of you do proper input validation in your ASP.NET site, WebForms, MVC, or otherwise?

Some Background

There is an axiom in computer science: never trust user input because it's guaranteed to contain invalid data at some point.

In security we have a similar axiom: never trust user input because it's guaranteed to contain invalid data at some point, and your code is bound to contain a security vulnerability somewhere, somehow. Granted, it doesn't flow as well as the former, but the point still stands.

The solution to this problem is conceptually simple: validate, validate, validate. Every single piece of input that is received from a user should be validated.

Of course when anyone says something is a simple concept it's bound to be stupidly complex to get the implementation right. Unfortunately proper validation is not immune to this problem. Why?

The Problem

Our applications are driven by user data; without it they would be pretty useless. This data is usually domain-specific, so everything we receive should have a particular structure, and there's a good chance a few of those structures are so specific to the organization that no well-defined standard exists for them. By that I mean it becomes pretty difficult to validate certain data structures when they are custom-designed and potentially highly complex.

So we have this problem. First, if we don't validate that the stuff we are given is clean, our application starts behaving oddly and that limits the usefulness of the application. Second, if we don't validate that the stuff we are given is clean, and there is a bug in the code, we have a potential vulnerability that could wreak havoc for the users.

The Solution

The solution as stated above is to validate all the input, both from a business perspective and from a security perspective.

In this post we are going to look at the best way to validate the security of incoming data within ASP.NET. This requires looking into how ASP.NET processes input from the user.

When ASP.NET receives something from the user it can come from four different vectors:

  • Within the Query String (?foo=bar)
  • Within the Form (via a POST)
  • Within a cookie
  • Within the server variables (a collection generated from HTTP headers and internal server configuration)

These vectors drive ASP.NET, and you can potentially compromise an application by maliciously modifying any of them.

Pop quiz: How many of you check whether custom cookies exist before trying to use them? Almost everyone, good. Now, how many of you validate that the data within the cookies is, well, valid before using them?
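To make that distinction concrete, here is a minimal sketch of the difference between checking that a cookie exists and validating its contents. The cookie name and the parsing rule are hypothetical, purely for illustration:

```csharp
// Hypothetical example: the "userId" cookie and the rule that it must be
// a positive integer are illustrative, not from any particular application.
HttpCookie cookie = Request.Cookies["userId"];

if (cookie != null) // existence check -- almost everyone does this
{
    int userId;

    // Content validation -- far fewer people do this. Never assume the
    // cookie still holds what your code wrote into it; the client
    // controls it entirely.
    if (int.TryParse(cookie.Value, out userId) && userId > 0)
    {
        // Safe to use userId here
    }
}
```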

What about checking your HTTP headers?

The Bypass

Luckily ASP.NET has some out-of-the-box behaviors that protect the application from malicious input. Unfortunately, ASP.NET isn't very forgiving when it comes to validation: it doesn't distinguish between quasi-good input and bad input, so anything containing an angle bracket causes a Yellow Screen of Death (YSoD).

The de facto fix is to do one of two things:

  • Disable validation in the page declaration within WebForms, or stick a [ValidateInput(false)] attribute on an MVC controller
  • Set <pages validateRequest="false"> in web.config
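For reference, the first option looks like this in each flavor (shown to illustrate the practice, not to recommend it):

```
<%-- WebForms: per-page, in the page directive --%>
<%@ Page Language="C#" ValidateRequest="false" %>
```

```csharp
// MVC: per-controller or per-action
[ValidateInput(false)]
public class AccountController : Controller { /* ... */ }
```

Note that in ASP.NET 4.0 the page-level directive only takes effect once requestValidationMode has been reverted to 2.0, which comes up again shortly.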

Either of these tells ASP.NET to skip validating the four vectors entirely and let anything in. The assumption was that you would then do the validation on your own.

Raise your hand if you think this is a bad idea. Okay, keep your hands up if you've never done this for a production application. At this point almost everyone should have put their hands down. I did.

The reason we do this is because as I said before, ASP.NET isn't very forgiving when it comes to validation. It's all or nothing.

What's worse, as ASP.NET got older it became pickier about what it let in, so there were more reasons to disable validation. In .NET 4, validation occurs at a much earlier point in the request lifecycle. It's a major breaking change:

The request validation feature in ASP.NET provides a certain level of default protection against cross-site scripting (XSS) attacks. In previous versions of ASP.NET, request validation was enabled by default. However, it applied only to ASP.NET pages (.aspx files and their class files) and only when those pages were executing.

In ASP.NET 4, by default, request validation is enabled for all requests, because it is enabled before the BeginRequest phase of an HTTP request. As a result, request validation applies to requests for all ASP.NET resources, not just .aspx page requests. This includes requests such as Web service calls and custom HTTP handlers. Request validation is also active when custom HTTP modules are reading the contents of an HTTP request.

Since backwards compatibility is so important, a configuration attribute was added to tell ASP.NET to revert to the 2.0 validation behavior, meaning validation occurs later in the request lifecycle, as it did in ASP.NET 2.0:

<httpRuntime requestValidationMode="2.0" />

If you do a search online for request validation, almost every result comes back with this solution. In fact, it became well known with Windows Identity Foundation on ASP.NET 4.0: when you do a federated sign-on, WIF receives the token as a chunk of XML, and the validator rejects it because of the angle brackets. If you set the validation mode to 2.0, validation runs after the request has passed through all HttpModules, which is how WIF consumes the token via the WSFederationAuthenticationModule.

The Proper Solution

So we have the problem. We also have built in functionality that solves our problem, but the way it does it kind of sucks (it's not a bad solution, but it's also not extensible). We want a way that doesn't suck.

In earlier versions of ASP.NET the best solution was to disable validation and within a HttpModule check every vector for potentially malicious input. The benefit here is that you have control over what is malicious and what is not. You would write something along these lines:

public class ValidatorHttpModule : IHttpModule
{
    public void Dispose() { }

    public void Init(HttpApplication context)
    {
        context.BeginRequest += new EventHandler(context_BeginRequest);
    }

    void context_BeginRequest(object sender, EventArgs e)
    {
        HttpApplication context = (HttpApplication)sender;

        // Note: enumerating these collections yields the keys; the Check*
        // helpers would look up and inspect the corresponding values.
        foreach (var q in context.Request.QueryString)
        {
            if (CheckQueryString(q))
            {
                throw new SecurityException("Bad validation");
            }
        }

        foreach (var f in context.Request.Form)
        {
            if (CheckForm(f))
            {
                throw new SecurityException("Bad validation");
            }
        }

        foreach (var c in context.Request.Cookies)
        {
            if (CheckCookie(c))
            {
                throw new SecurityException("Bad validation");
            }
        }

        foreach (var s in context.Request.ServerVariables)
        {
            if (CheckServerVariable(s))
            {
                throw new SecurityException("Bad validation");
            }
        }
    }

    // <snip />
}
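The Check* helpers are elided above. A minimal sketch of what CheckQueryString might look like follows; the blacklist rule is purely illustrative (and blacklisting in general is weaker than whitelisting expected formats):

```csharp
// Illustrative sketch only: a real implementation would apply rules
// specific to your domain, ideally whitelisting expected formats
// rather than blacklisting known-bad characters.
private static bool CheckQueryString(string key)
{
    string value = HttpContext.Current.Request.QueryString[key];

    if (string.IsNullOrEmpty(value))
        return false; // nothing to validate

    // Returns true when the value looks malicious, matching the
    // "if (Check...) throw" pattern in the module above.
    return value.IndexOf('<') >= 0 || value.IndexOf('>') >= 0;
}
```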

The downside to this approach is that you are stuck with pretty clunky validation logic that executes on every single request, whether it needs to or not. You are also at the mercy of whenever your HttpModule happens to be initialized: it won't necessarily execute first, so it won't necessarily protect every part of your application. Protection that doesn't cover everything against a particular attack isn't very useful.  <Cynicism>Half-assed protection is only good when you have half an ass.</Cynicism>

What we want is something that executes before everything else. In our HttpModule we are validating on BeginRequest, but we want to validate before BeginRequest.

The way we do this is with a custom RequestValidator. (On a side note, this post may qualify as having the longest introduction ever.) In any case, the custom RequestValidator is registered via the httpRuntime tag in the web.config:

<httpRuntime requestValidationType="Syfuhs.Web.Security.CustomRequestValidator" />

We create a custom request validator by deriving a class from System.Web.Util.RequestValidator and overriding the IsValidRequestString method.

This method tells us where the input came from, e.g. from the Form or from a cookie. The validator is called on each value within the four collections above, but only when a value exists, which saves us the trouble of walking every collection on every request. We could certainly build the same functionality in an HttpModule by checking the contents of each collection, but this saves us writing the boilerplate. It also gives us a way of describing the problem in detail, because we can pass back the index of where the problem occurred: if we find a problem at character 173, we return that index and ASP.NET throws an exception describing it. This is how we get such a detailed exception from WIF:

A potentially dangerous Request.Form value was detected from the client (wresult="<t:RequestSecurityTo...")

Our validator class ends up looking like:

public class MyCustomRequestValidator : RequestValidator
{
    protected override bool IsValidRequestString(HttpContext context, string value, RequestValidationSource requestValidationSource, string collectionKey, out int validationFailureIndex)
    {
        validationFailureIndex = 0;

        switch (requestValidationSource)
        {
            case RequestValidationSource.Cookies:
                return ValidateCookie(collectionKey, value, out validationFailureIndex);

            case RequestValidationSource.Form:
                return ValidateFormValue(collectionKey, value, out validationFailureIndex);

            // <snip />
        }

        return base.IsValidRequestString(context, value, requestValidationSource, collectionKey, out validationFailureIndex);
    }

    // <snip />
}
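The Validate* helpers are also elided. A sketch of what ValidateCookie might look like, showing how validationFailureIndex reports the position of the offending character (the angle-bracket rule is illustrative only):

```csharp
// Illustrative only: rejects angle brackets in cookie values and reports
// where the first one was found, so ASP.NET can include the index in
// its exception message.
private static bool ValidateCookie(string key, string value, out int validationFailureIndex)
{
    int index = value.IndexOfAny(new[] { '<', '>' });

    if (index >= 0)
    {
        // Invalid: report the offending character's position.
        validationFailureIndex = index;
        return false;
    }

    validationFailureIndex = 0;
    return true; // true means the value passed validation
}
```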

Each application has different validation requirements so I've just mocked up how you would create a custom validator.

If you use this design you can easily validate all inputs across the application, and you don't have to turn off validation.

So once again, pop quiz: How many of you do proper input validation?

Tamper-Evident Configuration Files in ASP.NET

by Steve Syfuhs / September 28, 2011 04:00 PM

A couple weeks ago someone sent a message to one of our internal mailing lists. His message was pretty straightforward: how do you prevent modifications of a configuration file for an application [while the user has administrative rights on the machine]?

There were a couple responses including mine, which was to cryptographically sign the configuration file with an asymmetric key. For a primer on digital signing, take a look here. Asymmetric signing is one possible way of signing a file. By signing it this way the configuration file could be signed by an administrator before deploying the application, and all the application needed to validate the signature was the public key associated with the private key used to sign the file. This separated the private key from the application, preventing the configuration from being re-signed maliciously. It’s similar in theory to how code-signing works.

In the event that validation of the configuration file failed, the application would not load, or would gracefully fail and exit the next time the file was checked (or the application had an exclusive lock on the configuration file so it couldn’t be edited while running).

We are also saved the problem of inventing a signature format, because there is a well-respected XML signature schema: http://www.w3.org/2000/09/xmldsig#. WCF uses this format to sign messages. For a good code walkthrough, see Barry Dorrans' Beginning ASP.NET Security. More on the code later, though.

Technically, this won’t prevent changes to the file, but it will prevent the application from accepting those changes. It’s kind of like those tamper-evident tags manufacturers stick on the enclosures of their equipment: they don’t prevent someone from opening the thing, but they will get caught if someone checks. You’ll notice I didn’t call them “tamper-resistant” tags.

Given this problem, I went one step further and asked myself: how would I do this with a web application? A well-informed ASP.NET developer might suggest using aspnet_regiis to encrypt the configuration file. Encrypting the configuration does protect against certain things, like reading the configuration data. However, there are a couple problems with it.

  • If I’m an administrator on that server I can easily decrypt the file by calling aspnet_regiis
  • If I’ve found a way to exploit the site, I can potentially overwrite the contents of the file and make the application behave differently
  • The encryption/decryption keys need to be shared in web farms

Consider our goal: we want to prevent a user with administrative privileges from modifying the configuration. Encryption does not help us here, but signing the configuration does, because the web application will stop working if a change invalidates the signature. (As an aside, for more protection you can encrypt the file and then sign it, but that’s out of scope here.)

Of course, there’s one little problem. You can’t stick the signature in the configuration file, because ASP.NET will complain about the foreign XML tag. The original application in question was assumed to have a custom XML file for its configuration, but in reality it doesn’t, so this problem applies there too.

There are three possible solutions to this:

  • Create a custom ConfigurationSection class for the signature
  • Create a custom configuration file and handler, and intercept all calls to web.config
  • Stick the signature of the configuration file into a different file

The first option isn’t a bad idea, but I really didn’t want to muck about with the configuration classes. The second option is, well, pretty much a bad idea in almost all cases, mainly because I’m not entirely sure you can even intercept all calls to the configuration classes.

I went with option three.

The other file has two important parts: the signature of the web.config file, and a signature for itself. This second signature prevents someone from modifying the signature for the web.config file. Our code becomes a bit more complicated because now we need to validate both signatures.

This raises the question: where is the validation handled? It needs to happen early enough in the request lifecycle, so I decided to put it in an HTTP module, for the sake of modularity.

Hold it, you say. If the code is in an HTTP module, then it needs to be added to the web.config. And if you are adding it to the web.config, and protecting the web.config with this module, then removing said module from the web.config will prevent the validation from occurring.

Yep.

There are two ways around this:

  • Add the validation call into Global.asax
  • Hard code the addition of the HTTP Module

It’s very rare that I take the easy approach, so I’ve decided to hard code the addition of the HTTP module, because keeping the code in a module is cleaner.

In older versions of ASP.NET you had to resort to some pretty ugly hacks to load such a module, because it needs to happen very early in the startup of the web application. ASP.NET 4.0 added an assembly attribute that lets you run code almost immediately at startup:

[assembly: PreApplicationStartMethod(typeof(Syfuhs.Security.Web.Startup), "Go")]
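The Startup class itself isn't shown in the original post; a minimal sketch consistent with the description that follows might look like this (the module it instantiates is the SignedConfigurationHttpModule defined later in the post):

```csharp
// Sketch: wires up the dynamically loaded module at application start.
// This is my reconstruction, not the original implementation.
public static class Startup
{
    public static void Go()
    {
        // Register() queues the module with DynamicHttpApplication so it
        // is initialized with the application.
        new SignedConfigurationHttpModule().Register();
    }
}
```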

Within the Startup class there is a public static method called Go(). This method calls Register() on an instance of my HTTP module. The module inherits from an abstract class called DynamicallyLoadedHttpModule, which implements IHttpModule. This class looks like:

public abstract class DynamicallyLoadedHttpModule : IHttpModule
{
    public void Register()
    {
        DynamicHttpApplication.RegisterModule(delegate(HttpApplication app) { return this; });
    }

    public abstract void Init(HttpApplication context);

    public abstract void Dispose();
}

The DynamicHttpApplication class inherits from HttpApplication and allows you to load HTTP modules in code. This code was not written by me; it was originally written by Nikhil Kothari:

using HttpModuleFactory = System.Func<System.Web.HttpApplication, System.Web.IHttpModule>;

public abstract class DynamicHttpApplication : HttpApplication
{
    private static readonly Collection<HttpModuleFactory> Factories = new Collection<HttpModuleFactory>();
    private static object _sync = new object();
    private static bool IsInitialized = false;

    private List<IHttpModule> modules;

    public override void Init()
    {
        base.Init();

        if (Factories.Count == 0)
            return;

        List<IHttpModule> dynamicModules = new List<IHttpModule>();

        lock (_sync)
        {
            if (Factories.Count == 0)
                return;

            foreach (HttpModuleFactory factory in Factories)
            {
                IHttpModule m = factory(this);

                if (m != null)
                {
                    m.Init(this);
                    dynamicModules.Add(m);
                }
            }
        }

        if (dynamicModules.Count != 0)
            modules = dynamicModules;

        IsInitialized = true;
    }

    public static void RegisterModule(HttpModuleFactory factory)
    {
        if (IsInitialized)
            throw new InvalidOperationException(Exceptions.CannotRegisterModuleLate);

        if (factory == null)
            throw new ArgumentNullException("factory");

        Factories.Add(factory);
    }

    public override void Dispose()
    {
        if (modules != null)
            modules.ForEach(m => m.Dispose());

        modules = null;
            
        base.Dispose();

        GC.SuppressFinalize(this);
    }
}

Finally, to get this all wired up we modify the Global.asax to inherit from DynamicHttpApplication:

public class Global : DynamicHttpApplication { ... }

Like I said, you could just add the validation code into Global (but where’s the fun in that?)…

So, now that we’ve made it possible to add the HTTP Module, lets actually look at the module:

public sealed class SignedConfigurationHttpModule : DynamicallyLoadedHttpModule
{
    public override void Init(HttpApplication context)
    {
        if (context == null)
            throw new ArgumentNullException("context");

        context.BeginRequest += new EventHandler(context_BeginRequest);
        context.Error += new EventHandler(context_Error);
    }

    private void context_BeginRequest(object sender, EventArgs e)
    {
        HttpApplication app = (HttpApplication)sender;

        SignatureValidator validator = new SignatureValidator(app.Request.PhysicalApplicationPath);

        validator.ValidateConfigurationSignatures(CertificateLocator.LocateSigningCertificate());
    }

    private void context_Error(object sender, EventArgs e)
    {
        HttpApplication app = (HttpApplication)sender;

        foreach (var exception in app.Context.AllErrors)
        {
            if (exception is XmlSignatureValidationFailedException)
            {
                // Maybe do something
                // Or don't...
                break;
            }
        }
    }

    public override void Dispose() { }
}

Nothing special here; we just hook the context.BeginRequest event so that validation occurs on each request. There is some performance impact as a result.

The core validation is contained within the SignatureValidator class, and there is a public method that we call to validate the signature file, ValidateConfigurationSignatures(…). This method accepts an X509Certificate2 to compare the signature against.

The signature schema we are using actually encodes the signer's public key into the signature element; however, we want to go one step further and make sure the file is signed by a particular certificate. This prevents someone from modifying the configuration file and re-signing it with a different private key. Validating the signature is not enough; we need to make sure it's signed by someone we trust.

The validator first validates the schema of the signature file: is the XML well formed? Does the signature file conform to the schema we defined (the schema lives in a Constants class)? After that, it validates the signature of the signature file itself: has it been tampered with? Then it validates the signature of the web.config file: has the web.config been tampered with?

Before it can do any of this, though, it needs to check that the signature file exists. The value passed into the constructor is the physical path of the web application, and the validator expects the signature file to be in the App_Data folder under the root. The file needs to live there because ASP.NET will not serve anything from that folder by default, and we don't want anyone downloading it. The path is also hardcoded deliberately, so that configuration changes cannot bypass the signature file validation.

Here is the validator:

internal sealed class SignatureValidator
{
    public SignatureValidator(string physicalApplicationPath)
    {
        this.physicalApplicationPath = physicalApplicationPath;
        this.signatureFilePath = Path.Combine(this.physicalApplicationPath, "App_Data\\Signature.xml");
    }

    private string physicalApplicationPath;
    private string signatureFilePath;

    public void ValidateConfigurationSignatures(X509Certificate2 cert)
    {
        Permissions.DemandFilePermission(FileIOPermissionAccess.Read, this.signatureFilePath);

        if (cert == null)
            throw new ArgumentNullException("cert");

        if (cert.HasPrivateKey)
            throw new SecurityException(Exceptions.ValidationCertificateHasPrivateKey);

        if (!File.Exists(signatureFilePath))
            throw new SecurityException(Exceptions.CouldNotLoadSignatureFile);

        XmlDocument doc = new XmlDocument() { PreserveWhitespace = true };
        doc.Load(signatureFilePath);

        ValidateXmlSchema(doc);

        CheckForUnsignedConfig(doc);

        if (!X509CertificateCompare.Compare(cert, ValidateSignature(doc)))
            throw new XmlSignatureValidationFailedException(Exceptions.SignatureFileNotSignedByExpectedCertificate);

        List<XmlSignature> signatures = ParseSignatures(doc);

        ValidateSignatures(signatures, cert);
    }

    private void CheckForUnsignedConfig(XmlDocument doc)
    {
        List<string> signedFiles = new List<string>();

        foreach (XmlElement file in doc.GetElementsByTagName("File"))
        {
            string fileName = Path.Combine(this.physicalApplicationPath, file["FileName"].InnerText);

            signedFiles.Add(fileName.ToUpperInvariant());
        }

        CheckConfigFiles(signedFiles);
    }

    private void CheckConfigFiles(List<string> signedFiles)
    {
        foreach (string file in Directory.EnumerateFiles(this.physicalApplicationPath, "*.config", SearchOption.AllDirectories))
        {
            string path = Path.Combine(this.physicalApplicationPath, file);

            if (!signedFiles.Contains(path.ToUpperInvariant()))
                throw new XmlSignatureValidationFailedException(string.Format(CultureInfo.CurrentCulture, Exceptions.ConfigurationFileWithoutSignature, path));
        }
    }

    private void ValidateXmlSchema(XmlDocument doc)
    {
        using (StringReader fileReader = new StringReader(Constants.SignatureFileSchema))
        using (StringReader signatureReader = new StringReader(Constants.SignatureSchema))
        {
            XmlSchema fileSchema = XmlSchema.Read(fileReader, null);
            XmlSchema signatureSchema = XmlSchema.Read(signatureReader, null);

            doc.Schemas.Add(fileSchema);
            doc.Schemas.Add(signatureSchema);

            doc.Validate(Schemas_ValidationEventHandler);
        }
    }

    void Schemas_ValidationEventHandler(object sender, ValidationEventArgs e)
    {
        throw new XmlSignatureValidationFailedException(Exceptions.InvalidSchema, e.Exception);
    }

    public static X509Certificate2 ValidateSignature(XmlDocument xml)
    {
        if (xml == null)
            throw new ArgumentNullException("xml");

        XmlElement signature = ExtractSignature(xml.DocumentElement);

        return ValidateSignature(xml, signature);
    }

    public static X509Certificate2 ValidateSignature(XmlDocument doc, XmlElement signature)
    {
        if (doc == null)
            throw new ArgumentNullException("doc");

        if (signature == null)
            throw new ArgumentNullException("signature");

        X509Certificate2 signingCert = null;

        SignedXml signed = new SignedXml(doc);
        signed.LoadXml(signature);

        foreach (KeyInfoClause clause in signed.KeyInfo)
        {
            KeyInfoX509Data key = clause as KeyInfoX509Data;

            if (key == null || key.Certificates.Count != 1)
                continue;

            signingCert = (X509Certificate2)key.Certificates[0];
        }

        if (signingCert == null)
            throw new CryptographicException(Exceptions.SigningKeyNotFound);

        if (!signed.CheckSignature())
            throw new CryptographicException(Exceptions.SignatureValidationFailed);

        return signingCert;
    }

    private static void ValidateSignatures(List<XmlSignature> signatures, X509Certificate2 cert)
    {
        foreach (XmlSignature signature in signatures)
        {
            X509Certificate2 signingCert = ValidateSignature(signature.Document, signature.Signature);

            if (!X509CertificateCompare.Compare(cert, signingCert))
                throw new XmlSignatureValidationFailedException(
                    string.Format(CultureInfo.CurrentCulture, 
                    Exceptions.SignatureForFileNotSignedByExpectedCertificate, signature.FileName));
        }
    }

    private List<XmlSignature> ParseSignatures(XmlDocument doc)
    {
        List<XmlSignature> signatures = new List<XmlSignature>();

        foreach (XmlElement file in doc.GetElementsByTagName("File"))
        {
            string fileName = Path.Combine(this.physicalApplicationPath, file["FileName"].InnerText);

            Permissions.DemandFilePermission(FileIOPermissionAccess.Read, fileName);

            if (!File.Exists(fileName))
                throw new FileNotFoundException(string.Format(CultureInfo.CurrentCulture, Exceptions.FileNotFound, fileName));

            XmlDocument fileDoc = new XmlDocument() { PreserveWhitespace = true };
            fileDoc.Load(fileName);

            XmlElement sig = file["FileSignature"] as XmlElement;

            signatures.Add(new XmlSignature()
            {
                FileName = fileName,
                Document = fileDoc,
                Signature = ExtractSignature(sig)
            });
        }

        return signatures;
    }

    private static XmlElement ExtractSignature(XmlElement xml)
    {
        XmlNodeList xmlSignatureNode = xml.GetElementsByTagName("Signature");

        if (xmlSignatureNode.Count <= 0)
            throw new CryptographicException(Exceptions.SignatureNotFound);

        return xmlSignatureNode[xmlSignatureNode.Count - 1] as XmlElement;
    }
}
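The X509CertificateCompare class used above is custom and isn't shown in the post. A reasonable sketch is a thumbprint comparison; this is my guess at its behavior, not the original implementation:

```csharp
// Sketch: two certificates are considered the same if their thumbprints
// (hashes of the DER-encoded certificate) match. The real
// X509CertificateCompare class may do more, e.g. compare raw data.
internal static class X509CertificateCompare
{
    public static bool Compare(X509Certificate2 a, X509Certificate2 b)
    {
        if (a == null || b == null)
            return false;

        return string.Equals(a.Thumbprint, b.Thumbprint,
            StringComparison.OrdinalIgnoreCase);
    }
}
```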

You’ll notice there is a bit of functionality I didn’t mention. Checking that the web.config file hasn’t been modified isn’t enough; we also need to check whether any *other* configuration file has been modified. It’s no good if you leave the root configuration file alone but modify the <authorization> tag within the administration folder to allow anonymous access, right?

So there is code that looks through the site for any file with the “config” extension, and if a file isn’t listed in the signature file, it throws an exception.

There is also a check done at the very beginning of the validation. If you pass an X509Certificate2 with a private key it will throw an exception. This is absolutely by design. You sign the file with the private key. You validate with the public key. If the private key is present during validation that means you are not separating the keys, and all of this has been a huge waste of time because the private key is not protected. Oops.

Finally, it’s important to know how to sign the files. I’m not a fan of generating XML properly, partially because I’m lazy and partially because it’s a pain to do, so mind the StringBuilder:

public sealed class XmlSigner
{
    public XmlSigner(string appPath)
    {
        this.physicalApplicationPath = appPath;
    }

    string physicalApplicationPath;

    public XmlDocument SignFiles(string[] paths, X509Certificate2 cert)
    {
        if (paths == null || paths.Length == 0)
            throw new ArgumentNullException("paths");

        if (cert == null || !cert.HasPrivateKey)
            throw new ArgumentNullException("cert");

        XmlDocument doc = new XmlDocument() { PreserveWhitespace = true };
        StringBuilder sb = new StringBuilder();

        sb.Append("<Configuration>");
        sb.Append("<Files>");

        foreach (string p in paths)
        {
            sb.Append("<File>");

            sb.AppendFormat("<FileName>{0}</FileName>", p.Replace(this.physicalApplicationPath, ""));
            sb.AppendFormat("<FileSignature><Signature xmlns=\"http://www.w3.org/2000/09/xmldsig#\">{0}</Signature></FileSignature>", 
            SignFile(p, cert).InnerXml);

            sb.Append("</File>");
        }

        sb.Append("</Files>");
        sb.Append("</Configuration>");

        doc.LoadXml(sb.ToString());

        doc.DocumentElement.AppendChild(doc.ImportNode(SignXmlDocument(doc, cert), true));

        return doc;
    }

    public static XmlElement SignFile(string path, X509Certificate2 cert)
    {
        if (string.IsNullOrWhiteSpace(path))
            throw new ArgumentNullException("path");

        if (cert == null || !cert.HasPrivateKey)
            throw new ArgumentException(Exceptions.CertificateDoesNotContainPrivateKey);

        Permissions.DemandFilePermission(FileIOPermissionAccess.Read, path);

        XmlDocument doc = new XmlDocument();
        doc.PreserveWhitespace = true;
        doc.Load(path);

        return SignXmlDocument(doc, cert);
    }

    public static XmlElement SignXmlDocument(XmlDocument doc, X509Certificate2 cert)
    {
        if (doc == null)
            throw new ArgumentNullException("doc");

        if (cert == null || !cert.HasPrivateKey)
            throw new ArgumentException(Exceptions.CertificateDoesNotContainPrivateKey);

        SignedXml signed = new SignedXml(doc) { SigningKey = cert.PrivateKey };

        Reference reference = new Reference() { Uri = "" };

        XmlDsigC14NTransform transform = new XmlDsigC14NTransform();
        reference.AddTransform(transform);

        XmlDsigEnvelopedSignatureTransform envelope = new XmlDsigEnvelopedSignatureTransform();
        reference.AddTransform(envelope);
        signed.AddReference(reference);

        KeyInfo keyInfo = new KeyInfo();
        keyInfo.AddClause(new KeyInfoX509Data(cert));
        signed.KeyInfo = keyInfo;

        signed.ComputeSignature();

        XmlElement xmlSignature = signed.GetXml();

        return xmlSignature;
    }
}

To write this to a file you can call it like this:

XmlWriter writer = XmlWriter.Create(@"C:\Dev\Projects\Syfuhs.Security.Web\Syfuhs.Security.Web.WebTest\App_Data\Signature.xml");
XmlSigner signer = new XmlSigner(Request.PhysicalApplicationPath);

XmlDocument xml = signer.SignFiles(new string[] { 
@"C:\Dev\Projects\Syfuhs.Security.Web\Syfuhs.Security.Web.WebTest\Web.config",
@"C:\Dev\Projects\Syfuhs.Security.Web\Syfuhs.Security.Web.WebTest\Web.debug.config",
@"C:\Dev\Projects\Syfuhs.Security.Web\Syfuhs.Security.Web.WebTest\Web.release.config",
@"C:\Dev\Projects\Syfuhs.Security.Web\Syfuhs.Security.Web.WebTest\Account\Web.config",
@"C:\Dev\Projects\Syfuhs.Security.Web\Syfuhs.Security.Web.WebTest\test.config"
}, 
new X509Certificate2(@"C:\Dev\Projects\Syfuhs.Security.Web\Syfuhs.Security.Web.WebTest\cert.pfx", "1"));

xml.WriteTo(writer);
writer.Flush();
writer.Close();

Now within this code, you have to pass in an X509Certificate2 with a private key, otherwise you can’t sign the files.

These processes should occur on different machines. The private key should never be on the server hosting the site. The basic steps for deployment would go something like:

1. Compile web application.
2. Configure site and configuration files on staging server.
3. Run application that signs the configuration and generates the signature file.
4. Drop the signature.xml file into the App_Data folder.
5. Deploy configured and signed application to production.
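Step 3 could be as simple as a small console application that wraps the XmlSigner class from this article. This is a hypothetical sketch (the tool name, argument layout, and file list are placeholders — adapt them to your build process):

```csharp
using System;
using System.IO;
using System.Security.Cryptography.X509Certificates;
using System.Xml;

// Hypothetical signing tool, run on the staging server (step 3).
// Usage: SignConfigs.exe <appRoot> <pfxPath> <pfxPassword>
class SignConfigs
{
    static void Main(string[] args)
    {
        string appRoot = args[0];

        XmlSigner signer = new XmlSigner(appRoot);

        // Sign whichever configuration files your application uses.
        XmlDocument signatures = signer.SignFiles(
            new string[] { Path.Combine(appRoot, "Web.config") },
            new X509Certificate2(args[1], args[2]));

        // Step 4: drop the signature file into App_Data.
        signatures.Save(Path.Combine(appRoot, @"App_Data\Signature.xml"));
    }
}
```

Because this runs on the staging machine, the .pfx (and its private key) never has to leave your build environment.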

There is one final note (I think I’ve made that note a few times by now…) and that is the CertificateLocator class. At the moment it just returns an X509Certificate2 from a particular path on my file system. This isn’t necessarily the best approach because it may be possible to overwrite that file. You should store the certificate in a safe place and make a secure call to get it. For instance, a web service call might make sense. If you have a Hardware Security Module (HSM) to store secret bits in, even better.
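As one possible improvement, CertificateLocator could pull the certificate from the Windows certificate store by thumbprint instead of reading a file off disk. A rough sketch, assuming the certificate has been installed into the Local Machine store (the thumbprint constant is a placeholder):

```csharp
using System;
using System.Security.Cryptography.X509Certificates;

public static class CertificateLocator
{
    // Placeholder thumbprint; in practice this would come from
    // secured configuration, not a hard-coded constant.
    private const string Thumbprint = "0000000000000000000000000000000000000000";

    public static X509Certificate2 GetSigningCertificate()
    {
        X509Store store = new X509Store(StoreName.My, StoreLocation.LocalMachine);

        try
        {
            store.Open(OpenFlags.ReadOnly);

            X509Certificate2Collection found = store.Certificates.Find(
                X509FindType.FindByThumbprint, Thumbprint, false);

            if (found.Count == 0)
                throw new InvalidOperationException("Signing certificate not found.");

            return found[0];
        }
        finally
        {
            store.Close();
        }
    }
}
```

Note that the web server only needs the public key to verify signatures, so the certificate installed there shouldn’t include the private key at all.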

Concluding Bits

What have we accomplished by signing our configuration files? We add a degree of trust that our application hasn’t been compromised. In the event that the configuration has been modified, the application stops working. This could be from malicious intent, or careless administrators. This is a great way to prevent one-off changes to configuration files in web farms. It is also a great way to prevent customers from mucking up the configuration file you’ve deployed with your application.

This solution was designed to mitigate quite a few attacks. An attacker cannot modify configuration files. An attacker cannot modify, view, or remove the signature file. An attacker cannot remove the HTTP Module that validates the signature without changing the underlying code, and cannot change the underlying code because it was compiled before being deployed.
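On the validation side, the check the HTTP Module performs boils down to SignedXml.CheckSignature. Conceptually it looks something like this simplified sketch (SignatureVerifier and VerifyDocument are hypothetical names, and it only inspects the first signature it finds):

```csharp
using System.Security.Cryptography.X509Certificates;
using System.Security.Cryptography.Xml;
using System.Xml;

public static class SignatureVerifier
{
    // Returns true if the document's embedded signature validates
    // against the expected certificate.
    public static bool VerifyDocument(XmlDocument doc, X509Certificate2 cert)
    {
        XmlNodeList nodes = doc.GetElementsByTagName(
            "Signature", "http://www.w3.org/2000/09/xmldsig#");

        if (nodes.Count == 0)
            return false; // No signature present; fail closed.

        SignedXml signed = new SignedXml(doc);
        signed.LoadXml((XmlElement)nodes[0]);

        // Verify against the known certificate rather than trusting
        // whatever key is embedded in the KeyInfo element.
        return signed.CheckSignature(cert, true);
    }
}
```

Verifying against a certificate you already trust, instead of the one carried in KeyInfo, is what stops an attacker from simply re-signing the modified files with their own key.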

Is it necessary to use on every deployment? No, probably not.

Does it go a little overboard with regard to complexity? Yeah, a little.

Does it protect against a real problem? Absolutely.

Unfortunately it also requires full trust.

Overall it’s a fairly robust solution and shows how you can mitigate certain types of risks seen in the real world.

And of course, it works with both WebForms and MVC.

You can download the full source: https://syfuhs.blob.core.windows.net/files/c6afeabc-36fc-4d41-9aa7-64cc9385280d-configurationsignaturevalidation.zip

Find my Windows Phone 7

by Steve Syfuhs / January 06, 2011 04:00 PM

For the last month and a half I’ve been playing around with my new Windows Phone 7.  Needless to say, I really like it.  There are a few things that are still a little rough – side-loading applications is a good example – but overall I’m really impressed with this platform.  It may be version 7 technically, but realistically it’s a v1 product.  I say that in a good way though – Microsoft reinvented the product.

Part of this reinvention is a cloud-oriented platform.  Today’s Dilbert cartoon was a perfect tongue-in-cheek explanation of the evolution of computing, and the mobile market is no exception.  Actually, when you think about it, mobile phones and the cloud go together like peanut butter and chocolate.  If you have to ask, they go together really well.  Also, if you have to ask, are you living under a rock?

This whole cloud/phone comingling is central to Windows Phone 7, and you can realize the potential immediately.

When you start syncing your phone via the Zune software, you will eventually get to the sync page for the phone.  The first thing I noticed was the link “do more with windows live”.

What does that do?

Well, once you have set up your phone with your Live ID, a new application is added to your Windows Live home.  This app is for all devices, and when you click on the above link in Zune, it will take you to the section for the particular phone you are syncing.

The first thing that caught my attention was the “Find my Phone” feature.  It brings up a list of actions for when you have lost your phone.

Each action is progressively bolder than the previous – and each action is very straightforward.

Map it

If the device is on, use the Location services on the phone to find it and display on a Bing Map.

Ring it

If you have a basic idea of where the phone is and the phone is on, ringing it will make the phone ring with a distinct tone even if you have it set to silent or vibrate.  Use this wisely.

Lock it

Now it gets a little more complicated.  When you lock the phone you are given an option to provide a message on the lock screen:

If someone comes across your phone, you can set a message telling them what they can do with it.  Word of advice though: if you leave a phone number, don’t leave your mobile number.

Erase it

Finally we have the last option.  The nuclear option if you will.  Once you set the phone to be erased, the next time the phone is turned on and tries to connect to the Live Network, the phone will be wiped and set to factory defaults.

A side effect of wiping your phone is that the next time you set it up and sync with the same Live ID, most settings will remain intact.  You will have to add your email and Facebook accounts, and set all the device settings, but once you sync with Zune, all of your apps will be reinstalled.  Now that is a useful little feature.

Finally

Overall I’m really happy with how the phone turned out.  It’s a strong platform and it’s growing quickly.  The Find my Phone feature is a relatively small thing, but it showcases the potential of a phone/cloud mash-up and adds so much value for consumers when they lose their phone.

In a previous post I talked about the security of the Windows Phone 7.  This post was all about how consumers can quickly mitigate any risks from losing their phone.  For more information on using this phone in the enterprise, check out the Windows Phone 7 Guides for IT Professionals.

// About

Steve is a renaissance kid when it comes to technology. He spends his time in the security stack.