

The Case of the Failed Restore

by Steve Syfuhs / November 13, 2012 03:51 PM

As applications become more and more complex, their backup and restore processes tend to become more complex as well.

A lot of times a backup can be broken down into a few simple steps:

  1. Get data from various sources
    • Database
    • Web.config
    • DPAPI
    • Certificate Stores
    • File system
    • etc
  2. Persist data to disk in specific format
  3. Validate data in specific format isn’t corrupt

A lot of times this is a manual process, but in the best-case scenario it's all automated by some tool. In my particular case there was a tool that did all of this for me. Woohoo! Of course, there was a catch. The format was custom, so a backup of the database didn't just call SQL Backup; it essentially did a SELECT * FROM {all tables} and serialized that data to disk.
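
To make that concrete, here's a rough sketch of what that kind of table-by-table backup might look like. This is my own illustration of the approach, not the actual tool's code; the table names, connection string, and the XML output format are placeholders.

// Hypothetical sketch of a table-by-table backup: SELECT * from each table,
// hold the rows in a DataSet, and serialize them to disk. The real tool used
// its own custom format; XML here is just a stand-in.
using System.Data;
using System.Data.SqlClient;
using System.IO;

public static class NaiveBackup
{
    public static void BackupTables(string connectionString, string[] tables, string path)
    {
        var backupSet = new DataSet("Backup");

        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();

            foreach (var table in tables)
            {
                // Essentially SELECT * FROM {table}
                var adapter = new SqlDataAdapter("SELECT * FROM [" + table + "]", connection);
                var dataTable = new DataTable(table);
                adapter.Fill(dataTable);
                backupSet.Tables.Add(dataTable);
            }
        }

        // Persist the data to disk in a specific format (step 2), schema included
        // so it can be validated and deserialized later (step 3).
        using (var writer = new StreamWriter(path))
        {
            backupSet.WriteXml(writer, XmlWriteMode.WriteSchema);
        }
    }
}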

The process wasn’t particularly fancy, but it was designed so that the tool had full control over the data before it was ever restored. There’s nothing particularly wrong with such a design, as it solved various problems that creep in when doing restores. The biggest problem it solved was handling breaking changes to the application’s schema during an upgrade, since an upgrade consisted of:

  1. Backup
  2. Uninstall old version
  3. Install new version
  4. Restore backed up data

Since the restore tool knew about the breaking changes to the schema, it was able to do something about them before the data ever went into the database. Better to mangle the data in C# than mangle the data in SQL. My inner DBA twitches a little whenever I say that.
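
As a made-up example of what that mangling might look like (the column names and schema changes here are hypothetical, since the real ones aren't described in the post):

// Hypothetical example of mangling backed-up data to fit a new schema before
// it gets anywhere near the database. The table and column names are made up.
using System.Data;

public static class SchemaMangler
{
    public static void MangleForNewSchema(DataSet backupSet)
    {
        var users = backupSet.Tables["Users"];

        // Suppose the new version renamed the "Login" column to "UserName"
        if (users.Columns.Contains("Login") && !users.Columns.Contains("UserName"))
        {
            users.Columns["Login"].ColumnName = "UserName";
        }

        // Suppose the new version added a non-nullable column; backfill a default
        if (!users.Columns.Contains("IsEnabled"))
        {
            var isEnabled = users.Columns.Add("IsEnabled", typeof(bool));
            foreach (DataRow row in users.Rows)
            {
                row[isEnabled] = true;
            }
        }
    }
}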

Restoring data is conceptually a simple process:

  1. Deserialize data from specific format on disk
  2. Mangle as necessary to fit new schema
  3. Foreach (record in data) INSERT record

In theory the goal of the restore tool should be to put the application in the exact same state it was in when it was originally backed up. In most cases this means having the database be exactly the same row for row, column for column. SQL Restore does a wonderful job of this. It doesn’t really do much processing of the backup data -- it simply overwrites the database file. You can’t get any more exact than that.

But alas, this tool didn’t use SQL Backup or SQL Restore, and there was a problem: the tool was failing to restore the database.

Putting on my debugger hat I stared at the various moving parts to see what could have caused it to fail.

The file wasn’t corrupt and the data was well formed. Hmm.

Log files! Aha, let’s check the log files. There was an error! ‘There was a violation of primary key constraint (column)…’ Hmm.

Glancing over the Disk Usage by Top Tables report in SQL Server Management Studio suggested that all or most of the data was getting inserted into the database, based on what I knew of the data before it was backed up. Hmm.

The error was pretty straightforward: a record was being inserted with a primary key value that already existed in that table. Checking the backup file showed that no primary keys were actually duplicated. Hmm.

Thinking back to how the tool actually did a restore, I went through the basic steps in my head of where a duplicate primary key could be created. Deserialization succeeded, since it made it to the data mangling bit. The log files showed that the mangling succeeded, because it dumped all the values and there were no duplicates. Inserting the data mostly succeeded, but the transaction failed. Hmm.

How did the insert process work? First it truncated all data in all tables, since it was going to replace all the data. Then it disabled all key constraints so it could do a bulk insert table by table. Then it enabled identity insert so the identity values were exactly the same as before the backup. It then looped through all the data and inserted all the records. It then disabled identity insert and enabled the key constraints. Finally it committed the transaction.
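
Roughly, that insert phase would look something like the sketch below. Again, this is a reconstruction from the description above rather than the tool's actual code, and it assumes every table has an identity column and placeholder-friendly column names.

// Sketch of the insert phase described above: clear each table, disable
// constraints, turn on identity insert, loop through the backed-up rows,
// then put everything back and commit. Not the tool's actual code.
using System;
using System.Data;
using System.Data.SqlClient;
using System.Linq;

public static class NaiveRestore
{
    public static void Restore(string connectionString, DataSet backupSet)
    {
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();

            using (var transaction = connection.BeginTransaction())
            {
                foreach (DataTable table in backupSet.Tables)
                {
                    string name = "[" + table.TableName + "]";

                    // Clear out the existing data (the tool truncated; DELETE keeps the sketch simple)
                    Execute(connection, transaction, "DELETE FROM " + name);
                    Execute(connection, transaction, "ALTER TABLE " + name + " NOCHECK CONSTRAINT ALL");
                    Execute(connection, transaction, "SET IDENTITY_INSERT " + name + " ON");

                    string columns = string.Join(", ", table.Columns.Cast<DataColumn>().Select(c => "[" + c.ColumnName + "]"));
                    string values = string.Join(", ", table.Columns.Cast<DataColumn>().Select(c => "@" + c.ColumnName));
                    string insert = "INSERT INTO " + name + " (" + columns + ") VALUES (" + values + ")";

                    // Loop through every backed-up row, keeping the original identity values
                    foreach (DataRow row in table.Rows)
                    {
                        using (var command = new SqlCommand(insert, connection, transaction))
                        {
                            foreach (DataColumn column in table.Columns)
                            {
                                command.Parameters.AddWithValue("@" + column.ColumnName, row[column]);
                            }

                            command.ExecuteNonQuery();
                        }
                    }

                    Execute(connection, transaction, "SET IDENTITY_INSERT " + name + " OFF");
                    Execute(connection, transaction, "ALTER TABLE " + name + " CHECK CONSTRAINT ALL");
                }

                transaction.Commit();
            }
        }
    }

    private static void Execute(SqlConnection connection, SqlTransaction transaction, string sql)
    {
        using (var command = new SqlCommand(sql, connection, transaction))
        {
            command.ExecuteNonQuery();
        }
    }
}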

It failed before it could enable the constraints, so it failed on the actual insert. Actually, we already knew this because of the log file, but it’s always helpful to see the full picture. Except things weren’t making any sense. The data being inserted was valid. Or was it? The table that had the primary key violation was the audit table. The last record was from two minutes ago, but the one before it was from three months ago. The last record ID was 12345, and the one before it was 12344. Checking the data in the backup file showed that there were at least twice as many records, so it failed halfway through the restore of that table.

The last audit record was: User [MyTestAccount] successfully logged in.

Ah dammit. That particular account was used by an application on my phone, and it checks in every few minutes.

While the restore was happening the application in question was still accessible, so the application on my phone did exactly what it was supposed to do.

Moral of the story: when doing a restore, make sure nobody can inadvertently modify data before you finish said restore.

SQL Server does this by making it impossible to write to the database while it’s being restored. If you don’t have the luxury of using SQL Restore, be sure to make it impossible to write to the database, either by making the application inaccessible or by coding your application so it can run in a read-only fashion.
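
For an ASP.NET application, one possible (hypothetical) way to do that is a maintenance flag checked at the start of every request, rejecting anything that isn't a read while the restore runs:

// Hypothetical sketch: an IHttpModule that refuses writes while a restore is
// in progress. The "MaintenanceMode" appSetting and the "anything that isn't
// GET or HEAD is a write" policy are assumptions for illustration.
using System;
using System.Configuration;
using System.Web;

public class ReadOnlyModeModule : IHttpModule
{
    public void Init(HttpApplication application)
    {
        application.BeginRequest += OnBeginRequest;
    }

    private static void OnBeginRequest(object sender, EventArgs e)
    {
        var application = (HttpApplication)sender;

        bool readOnly = string.Equals(
            ConfigurationManager.AppSettings["MaintenanceMode"], "true",
            StringComparison.OrdinalIgnoreCase);

        string method = application.Request.HttpMethod;

        // Let reads through; refuse everything else until the restore finishes.
        if (readOnly && method != "GET" && method != "HEAD")
        {
            application.Response.StatusCode = 503; // Service Unavailable
            application.Response.End();
        }
    }

    public void Dispose() { }
}

Register the module in web.config and flip the flag before kicking off the restore, and nothing (including a chatty phone app) can sneak writes in mid-restore.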

Token Request Validation in ASP.NET

by Steve Syfuhs / October 28, 2010 04:00 PM

Earlier this week during my TechDays presentation on Windows Identity Foundation, there was a part of the demo that I said would fail miserably after the user was authenticated and the token was POST’ed back to the relying party.  Out of the box, ASP.NET does request validation.  If a user has submitted content through request parameters, it goes through a validation step, and by default that step breaks on anything funky, such as angle brackets.  This helps deter things like cross-site scripting attacks.  However, we were passing XML, so we needed to turn off this validation.  There are two approaches to doing this.

The first approach, which is what I did in the demo, was to set the validation mode to “2.0”.  All this did was tell ASP.NET to use a less strict validation scheme.  To do that you need to add a line to the web.config file:

<system.web>
<httpRuntime requestValidationMode="2.0" />
</system.web>

This is not the best way to do things, though.  It creates a new vector for attack, as you’ve just allowed an endpoint to accept data that would otherwise have been rejected.  What is preferred is to create a custom request validator.  You can find a great example in the Fabrikam Shipping demo.

It’s pretty straightforward to create a validator.  First you create a class that inherits System.Web.Util.RequestValidator, and then you override the method IsValidRequestString(…).  At that point you can do anything you want to validate, but the demo code tries to build a SignInResponseMessage object from the wresult parameter.  If it creates the object successfully the request is valid.  Otherwise it passes the request to the base implementation of IsValidRequestString(…).

The code to handle this validation is pretty straightforward:

    using System;
    using System.Web;
    using System.Web.Util;
    using Microsoft.IdentityModel.Protocols.WSFederation;

    public class WSFederationRequestValidator : RequestValidator
    {
        protected override bool IsValidRequestString(HttpContext context,
            string value, RequestValidationSource requestValidationSource, 
            string collectionKey, out int validationFailureIndex)
        {
            validationFailureIndex = 0;

            if (requestValidationSource == RequestValidationSource.Form
                && collectionKey.Equals(WSFederationConstants.Parameters.Result, 
                   StringComparison.Ordinal))
            {
                SignInResponseMessage message =
                     WSFederationMessage.CreateFromFormPost(context.Request) 
                     as SignInResponseMessage;

                if (message != null)
                {
                    return true;
                }
            }

            return base.IsValidRequestString(context, value, requestValidationSource,
                   collectionKey, out validationFailureIndex);
        }
    }

Once you’ve created your request validator, you need to update the web.config file to tell .NET to use the validator.  You can do that by adding the following XML:

<system.web>
<httpRuntime requestValidationType="Microsoft.Samples.DPE.FabrikamShipping.Web.Security.WSFederationRequestValidator" />
</system.web>

You can find the validation code in FabrikamShipping.Web\Security\WSFederationRequestValidator.cs within the FabrikamShipping solution.

Using the ASP.NET Roles Provider with Windows Identity Foundation

by Steve Syfuhs / August 30, 2010 04:00 PM

Using the Windows Identity Foundation to handle user authentication and identity management can require you to drastically rethink how you will build your application.  There are a few fundamental differences between how authentication and roles will be handled when you switch to a Claims model. 

As an example, if you used an STS to provide Claims to your application, you wouldn’t (couldn’t, really) use the FormsAuthentication class.

Another thing to keep in mind is how you would handle Roles.  WIF sort of handles roles if you were to use <location> tags in web.config files like:

  <location path="test.aspx">
    <system.web>
      <authorization>
        <deny users="*" />
        <allow roles="admin" />
      </authorization>
    </system.web>
  </location>

WIF would handle this in an earlier part of the page lifecycle, and only allow authenticated users with a returned Role claim of admin.  This works well for some cases, but not all.

In larger applications we may want custom Roles, and the ability to map these roles to the Roles provided by the STS.

This is by no means the place to tell you which architectural design to use when, but a lot of times we want something in the middle of these two extremes…

Sometimes we just want to use the Roles class to check for role membership based on the Role claims.  From what I can find there is no RoleProvider implementation for WIF, so I wrote a very simple provider.  It is by all rights a hack.  The reason I say this is that there are quite a few methods that just can’t be implemented.  For instance, getting roles for other users is impossible, as is adding a user to a role, creating a role, deleting a role, etc.  This is all impossible because we can’t send anything back to the STS telling it what to do with the roles.

We are also limited in the scope of the roles.  I can only get the roles of the currently logged-in user, nothing beyond.  So, with all the usual warnings (it works on my machine, don’t blame me if it steals your soul, etc.)…

using System;
using System.Linq;
using System.Threading;
using System.Web.Security;
using Microsoft.IdentityModel.Claims;

public class ClaimsRoleProvider : RoleProvider
{
    IClaimsIdentity claimsIdentity;
    ClaimCollection userClaims;

    private void initClaims()
    {
        claimsIdentity = ((IClaimsPrincipal)(Thread.CurrentPrincipal)).Identities[0];
        userClaims = claimsIdentity.Claims;
    }

    public override string ApplicationName
    {
        get
        {
            initClaims();
            return claimsIdentity.GetType().ToString();
        }
        set
        {
            throw new NotImplementedException();
        }
    }

    public override bool RoleExists(string roleName)
    {
        initClaims();

        // Only count role claims; matching on value alone would treat any
        // claim that happens to have this value as a role.
        return userClaims.Any(r => r.ClaimType == ClaimTypes.Role && r.Value == roleName);
    }

    public override bool IsUserInRole(string username, string roleName)
    {
        // Only the current user's claims are available, so the username
        // parameter is ignored.
        initClaims();

        return userClaims.Any(r => r.ClaimType == ClaimTypes.Role && r.Value == roleName);
    }

    public override string[] GetRolesForUser(string username)
    {
        // Again, the username parameter is ignored; we can only see the
        // claims of the currently logged-in user.
        initClaims();

        return userClaims.Where(r => r.ClaimType == ClaimTypes.Role).Select(r => r.Value).ToArray();
    }

    public override string[] GetAllRoles()
    {
        initClaims();

        return userClaims.Where(r => r.ClaimType == ClaimTypes.Role).Select(r => r.Value).ToArray();
    }

    #region Not implementable

    public override string[] GetUsersInRole(string roleName)
    {
        throw new NotImplementedException();
    }

    public override void RemoveUsersFromRoles(string[] usernames, string[] roleNames)
    {
        throw new NotImplementedException();
    }

    public override void CreateRole(string roleName)
    {
        throw new NotImplementedException();
    }

    public override bool DeleteRole(string roleName, bool throwOnPopulatedRole)
    {
        throw new NotImplementedException();
    }

    public override string[] FindUsersInRole(string roleName, string usernameToMatch)
    {
        throw new NotImplementedException();
    }

    public override void AddUsersToRoles(string[] usernames, string[] roleNames)
    {
        throw new NotImplementedException();
    }

    #endregion
}

The next step is to modify the web.config to use this provider.  I put this in a separate assembly so it could be re-used.

    <roleManager enabled="true" defaultProvider="claimsRoleProvider">
      <providers>
        <clear />
        <add name="claimsRoleProvider" type="ClaimsRoleProvider, MyAssem.Providers, Version=1.0.0.0, Culture=neutral, PublicKeyToken=4a27739ef3347280" />
      </providers>
    </roleManager>

One final thing to be aware of… The Roles.IsUserInRole(string roleName) overload uses IPrincipal.Identity.Name as the username instead of taking a username parameter, which could result in this ArgumentNullException:

Value cannot be null.
Parameter name: username

Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
Exception Details: System.ArgumentNullException: Value cannot be null.
Parameter name: username
Source Error:

Line 17:         var claims = from c in claimsIdentity.Claims select c;
Line 18: 
Line 19:         bool inRole = Roles.IsUserInRole("VPN");
Line 20:         
Line 21:         foreach (var r in claims)

Since the IClaimsIdentity is generated based on the claims it receives, its Name property is set to whatever claim value is associated with the http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name claim type. If that claim isn't present, Name will be null.

It took way too long for me to figure that one out. :)
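
If you hit this, one (hypothetical) way around it is to dig a username out of the claims yourself and call the overload that takes an explicit username, so the provider never sees a null:

// Hypothetical workaround: pull a usable username out of the claims (falling
// back to the UPN claim here, which is an assumption about what the STS sends)
// and use the IsUserInRole overload that takes an explicit username. The
// ClaimsRoleProvider above ignores the username anyway, so any non-null value
// avoids the ArgumentNullException.
using System.Linq;
using System.Threading;
using System.Web.Security;
using Microsoft.IdentityModel.Claims;

public static class SafeRoleCheck
{
    public static bool IsInRole(string roleName)
    {
        var identity = ((IClaimsPrincipal)Thread.CurrentPrincipal).Identities[0];

        string username = identity.Name
            ?? identity.Claims
                   .Where(c => c.ClaimType == ClaimTypes.Upn)
                   .Select(c => c.Value)
                   .FirstOrDefault();

        return username != null && Roles.IsUserInRole(username, roleName);
    }
}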
