
// Development

Windows Azure Pack Authentication Part 3 – Using a Third Party IdP

by Steve Syfuhs / February 07, 2014 06:22 PM

In the previous installments of this series we looked at how Windows Azure Pack authenticates users and how it’s configured out of the box for federation. This time around we’re going to look at how you can configure federation with a third party IdP.

Microsoft designed Windows Azure Pack the right way. It supports federation with industry protocols out of the box. You can’t say that for many services, and you certainly can’t say that those services support it natively for all versions – more often than not you have to pay extra for it.

Windows Azure Pack supports federation, and actually uses it to authenticate users by default. This little fact makes it easy to federate to a 3rd party IdP.

If we search around we can find lots of resources on federating to ADFS, as that's Microsoft's federation product, and there are a number of good walkthroughs (some in German) on how you can get it working. If you want to use ADFS go read one or all of those articles, as everything we talk about today will be about using a non-Microsoft federation service.

Before we begin though I’d like to point out that Microsoft does have some resources on using 3rd party IdPs, but unfortunately the information is a bit thin in some places.

Prerequisites

Federation is a complex beast and we should be clear about what is required to get it working. In no particular order you need the following:

  • An STS that supports the WS-Federation (passive) protocol
  • An STS that supports WS-Federation-wrapped JSON Web Tokens (JWT)
  • Optional: an STS that supports WS-Trust + JWT

If you plan to use the public APIs with federated accounts then you will need an STS that supports WS-Trust + JWT.

If you don’t have an STS that can support these requirements then you should really consider taking a look at ADFS, or if you’re looking for customization, Thinktecture Identity Server. Both are top-notch IdPs (edit: insert pitch about the IdP my company builds and sells as well [edit-edit: our next version natively supports JWT] ;) -- sorry, this concludes the not-so-regularly-scheduled product placement).

Another option is to roll your own IdP. Don’t do this. No seriously, don’t. It’s a complicated mess. You’re way better off using the Thinktecture server and extending it to fit your needs.

Supposing that you already have an IdP and want to support JWT, here’s how we can do it. In this context the IdP is the overarching identity providing system and the STS is simply the service issuing tokens.

Skip this next section if you just want to see how to configure Windows Azure Pack. That’s the main part that’s lacking in the MSDN documentation.

JWT via IdentityModel

First off, you need to be using .NET 4.5, and you need to be using the 4.5 IdentityModel stack. You can’t use the original 3.5 bits.

At this point I’m going to assume you’ve got a working IdP already. There are lots of articles out there explaining how to build one. We’re just going to mod the STS.

Before making any code changes though you need to add the JWT token handler, which is easily installed via NuGet (I ♥ NuGet):

PM> Install-Package System.IdentityModel.Tokens.Jwt

This will need to be added to the project that exposes your STS configuration class.

Next, we need to inject the token handler into the STS pipeline. This can easily be done by adding an entry to the web.config system.identityModel section:
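The entry looks something like this (a sketch; the handler's type name and assembly casing vary a bit between versions of the NuGet package, so match whatever you actually installed):

<system.identityModel>
    <identityConfiguration>
        <securityTokenHandlers>
            <!-- registers the JWT handler alongside the default SAML handlers; assembly name may differ by package version -->
            <add type="System.IdentityModel.Tokens.JWTSecurityTokenHandler, System.IdentityModel.Tokens.JWT" />
        </securityTokenHandlers>
    </identityConfiguration>
</system.identityModel>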

Or if you want to hardcode it you can add it to your SecurityTokenServiceConfiguration class.
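In code it might look something like this (the configuration class name and issuer URI are placeholders for whatever your STS already uses):

public class MySecurityTokenServiceConfiguration : SecurityTokenServiceConfiguration
{
    public MySecurityTokenServiceConfiguration()
        : base("https://my-sts/") // issuer name placeholder
    {
        // Register the JWT handler alongside the defaults so the STS can mint JWTs.
        SecurityTokenHandlers.Add(new JWTSecurityTokenHandler());
    }
}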

There are of course other (potentially better) ways you can add it in, but this serves our purpose for the sake of a sample.

By adding the JWT token handler into the STS pipeline we can begin issuing JWTs to any relying parties that request one. This poses a problem though because passive requests don’t have a requested token type tacked on. Active (WS-Trust) requests do, but not passive. So we need to specify that a JWT should be minted instead of a SAML token. This can be done in the GetScope method of the STS class.
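A minimal sketch of that override might look like the following; the scope construction is a stand-in for whatever your STS already does, and the token type URN is the value I've seen GetTokenTypeIdentifiers() return, so verify it against your version of the handler:

protected override Scope GetScope(ClaimsPrincipal principal, RequestSecurityToken request)
{
    // Passive requests don't specify a token type, so default to a JWT.
    // WIF picks the token handler to use based on this value.
    if (string.IsNullOrEmpty(request.TokenType))
    {
        request.TokenType = "urn:ietf:params:oauth:token-type:jwt";
    }

    // Placeholder scope setup; keep whatever AppliesTo validation, signing,
    // and ReplyTo logic your STS already has.
    var scope = new Scope(request.AppliesTo.Uri.OriginalString, SecurityTokenServiceConfiguration.SigningCredentials);
    scope.ReplyToAddress = scope.AppliesToAddress;
    scope.TokenEncryptionRequired = false; // assuming a signed-only (unencrypted) JWT

    return scope;
}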

All we really needed to do was specify the TokenType as WIF will use that to determine which token handler should be used to mint the token. We know this is the value to use because it’s exposed by the GetTokenTypeIdentifiers() method in the JWTSecurityTokenHandler class.

Did I mention the JWT library is open source?

So now at this point if we made a request for a token to the STS we could receive a WS-Federation-wrapped JWT.

If the idea of using a JWT instead of a SAML token appeals to you, you can configure your app to use the JWT token handler similar to Dominick’s sample.

If you were submitting a WS-Trust RST to the STS you could use client code along the lines of:
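Something along these lines, for example (the endpoint, credentials, and realm are placeholders, and the binding assumes a username endpoint over transport security):

// Namespaces: System.ServiceModel, System.ServiceModel.Security, System.IdentityModel.Protocols.WSTrust
var binding = new WS2007HttpBinding(SecurityMode.TransportWithMessageCredential);
binding.Security.Message.ClientCredentialType = MessageCredentialType.UserName;
binding.Security.Message.EstablishSecurityContext = false;

var factory = new WSTrustChannelFactory(binding, new EndpointAddress("https://my-sts/wstrust/13/username"));
factory.TrustVersion = TrustVersion.WSTrust13;
factory.Credentials.UserName.UserName = "someuser";
factory.Credentials.UserName.Password = "somepassword";

var rst = new RequestSecurityToken
{
    RequestType = RequestTypes.Issue,
    KeyType = KeyTypes.Bearer,
    TokenType = "urn:ietf:params:oauth:token-type:jwt",                // ask for a JWT instead of SAML
    AppliesTo = new EndpointReference("http://azureservices/AdminSite") // the relying party realm (placeholder)
};

var channel = factory.CreateChannel();
var token = channel.Issue(rst); // comes back wrapping the issued JWT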

When the GetScope method is called the request.TokenType should be set to whatever you passed in at the client. For more information on service calls you can take a look at the whitepaper Claims-Based Identity in Windows Azure Pack (docx). A future installment of this series might have more information about using services.

Lastly, we need to sign the JWT. The only caveat to using the JWT token handler is that the minimum RSA key size is 2048 bits. If you’re using a key smaller than that then please upgrade it. We’re going to overlook the fact that the MSDN article shows how to bypass minimum key sizes. Seriously. Don’t do it. I don’t want to have to explain why (putting paranoia aside for a moment, 1024 is being deprecated by Windows and related services in the near future anyway).

Issuing Tokens to Windows Azure Pack

So now we’re at a point where we can mint a JWT token. The question we need to ask now is what claims should this token contain? Looking at Part 1 we see that the Admin Portal requires UPN and Group claims. The tenant portal only requires the UPN claim.

Lucky for us the JWT token handler is smart. It knows to transform certain known XML-token-friendly-claim-types to JWT friendly claim types. In our case we can use http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn in our ClaimsIdentity to map to the UPN claim, and http://schemas.xmlsoap.org/claims/Group to map to our Group claim.
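So the identity we build up in the STS could be as simple as this (the values are placeholders, and for the Admin Portal the group value has to match whatever group you've designated as administrators):

var identity = new ClaimsIdentity("Federation");

// Maps to the UPN claim both portals require.
identity.AddClaim(new Claim(ClaimTypes.Upn, "someuser@yourdomain.local"));

// Only needed for the Admin Portal.
identity.AddClaim(new Claim("http://schemas.xmlsoap.org/claims/Group", "WAP Administrators"));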

Then we need to determine where to send the token, and who to address it to. Both the tenant and admin sites have Federation Metadata documents that specify this information for us. If you’ve got an IdP that can parse the metadata then all you need to do is point it to https://yourtenantsite/FederationMetadata/2007-06/FederationMetadata.xml for the tenant configuration or https://youradminsite/FederationMetadata/2007-06/FederationMetadata.xml for the admin configuration.

Of course, this information will also map up to the configuration elements we looked at in Part 2. That’ll tell us the Audience URI and the Reply To for both sites.

Finally we have everything we need to mint the token, address it, and send it on its way.

Configuring Windows Azure Pack to Trust your Token

The token’s been sent, and once it hits either the tenant or admin site it’ll promptly be ignored and you’ll get an ugly error message saying “nope, not gonna happen, bub.”

We therefore need to configure Windows Azure Pack to trust our token. Looking at MSDN we see somewhat useful information telling us what we need to modify, but frankly, it’s missing a bunch of information so we’re going to ignore it.

First things first: if your IdP publishes a Federation Metadata document then you can just configure everything via PowerShell:
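For example, something along these lines (the parameter names may vary between releases, so check Get-Help Set-MgmtSvcRelyingPartySettings; the metadata URL and connection string are placeholders):

Set-MgmtSvcRelyingPartySettings -Target Admin `
    -MetadataEndpoint 'https://my-idp.example.com/FederationMetadata/2007-06/FederationMetadata.xml' `
    -ConnectionString 'Data Source=SQLSERVER;Integrated Security=True'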

You can replace the target “Admin” with “Tenant” if you want to configure the Tenant Portal. The only caveat with doing it this way is that the metadata document needs to be accessible from the server. I’ve submitted a feature request that they support local file paths as well; hopefully they listen! Since the parameter takes the full URL you can put the metadata document somewhere public if it’s not normally accessible. You only need the metadata accessible while applying this configuration.

If the cmdlet completed successfully then you should be able to log in from your own IdP. That’s all there is to it for you. I would recommend seriously considering going this route instead of configuring things manually.

Otherwise, let’s carry on.

Since we can’t import our federation metadata (since we probably don’t have any), we need to configure things manually. To do that we need to modify settings in the database.

Looking back to Part 2 we see all the configuration elements that enable our federated trust to the default IdPs. We’ll need to update a few settings across the Microsoft.MgmtSvc.Store and Microsoft.MgmtSvc.PortalConfigStore databases.

The MSDN documentation says to modify the settings in the PortalConfigStore database. It’s wrong. Or rather, it’s incomplete, as that’s only part of the process.

The PortalConfigStore database contains the settings used by the Tenant and Admin Portals to validate and request tokens. We need to modify these settings to use our custom IdP. To do so locate the Authentication.IdentityProvider setting in the [Config].[Settings] table.  The namespace we need to choose is dependent on which site we want to configure. In our case we select the Admin namespace. As we saw last time it looks something like:
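The stored value is a small JSON blob; structurally it's along these lines (the property names may differ slightly between versions, so go by what's actually sitting in your database, and the values shown here are just placeholders):

{
    "Realm": "https://my-sts/",
    "Endpoint": "https://my-sts/wsfederation/issue",
    "Certificates": [ "MIIC...Base64-encoded signing certificate..." ]
}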

We need to substitute our STS information here. The Realm is whatever your STS issuer is, and the Endpoint is wherever your WS-Federation endpoint is located. The Certificate should be a Base64-encoded representation of your signing certificate (remember, just the public key).

In my experience I’ve had to do an IISRESET on the portals to get the settings refreshed. I might just be impatient though.

Once those values are replaced you can try logging in. You should be redirected to your IdP and if you issue the token properly it’ll hit the portal and you should be logged in. Unfortunately this’ll actually fail with a non-useful error message.


Who can guess why? So far I’ve stated that the MSDN documentation is missing information. What have we missed? Hopefully if you’ve read the first two parts of this series you’re yelling at the screen telling me to get on with it already because you’ve caught on to what I’m saying.

We haven’t configured the API services to trust our STS! Oops.

With that being said, we now have proof that Windows Azure Pack flows the token to the services from the Portal and, more importantly, the services validate the token. Cool!

Anyway, now to configure the APIs. Warning: complicated.

In the Microsoft.MgmtSvc.Store database locate the Settings table and then locate the Authentication.IdentityProvider.Secondary element in the AdminAPI namespace. We need to update it with the exact same values as we put into the configuration element in the other database.

If you’re only wanting to configure the Tenant Portal you’d want to modify the Authentication.IdentityProvider.Primary configuration element. Be careful with the Primary/Secondary elements as they can get confusing.

If you’re configuring the Admin Portal you’ll need to update the Authentication.IdentityProvider.Secondary configuration element in the TenantAPI namespace to use the configuration you specified for the Admin Portal as well. As I said previously, I think this is because the Admin Portal calls into the Tenant API. The Admin Portal will use an admin-trusted token – therefore the TenantAPI needs to trust the admin’s STS.

Now that you’ve completed configuration you can do an IISRESET and try logging in. If you configured everything properly you should now be able to log in from your own IdP.

Troubleshooting

For those rock star Ops people who understand identity this guide was likely pretty easy to follow, understand, and implement. For everyone else though, this was probably a pain in the neck. Here are some troubleshooting tips.

Review the Event Logs
It’s surprising how many people forget that a lot of applications will write errors to the Windows Event Log. Windows Azure Pack has quite a number of logs that you can review for more information. If you’re trying to track down an issue in the portals look in the MgmtSvc-*Site where * is Tenant or Admin. Errors will get logged there. If you’re stuck mucking about the APIs look in the MgmtSvc-*API where * is Tenant, Admin, or TenantPublic.
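If you prefer PowerShell over the Event Viewer, something like this will find and read those logs (the exact log names vary, hence the wildcard):

# List the Windows Azure Pack event logs on this box
Get-WinEvent -ListLog *MgmtSvc* | Select-Object LogName, RecordCount

# Then pull the most recent entries from whichever log you care about
Get-WinEvent -LogName '<a log name from the list above>' -MaxEvents 50 |
    Format-List TimeCreated, LevelDisplayName, Message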

Enable Development Mode
You can enable developer mode in sites by modifying a value in the web.config. Unprotect the web.config by calling:
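Something like the following; the namespace depends on which site's web.config you're after (check Get-Help Unprotect-MgmtSvcConfiguration for the exact parameters on your build):

Unprotect-MgmtSvcConfiguration -Namespace "TenantSite"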

And then locate the appSetting named Microsoft.Azure.Portal.Configuration.PortalConfiguration.DevelopmentMode and set the value to true. Be sure to undo and re-protect the configuration when you’re done. You should then get a neat error tracing window show up in the portals, and more diagnostic information will be logged to the event logs. Probably not wise to do this in a production environment.
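For reference, the setting itself looks like this once the config is readable (and running Protect-MgmtSvcConfiguration with the same namespace re-encrypts the file when you're done):

<appSettings>
    <add key="Microsoft.Azure.Portal.Configuration.PortalConfiguration.DevelopmentMode" value="true" />
</appSettings>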

Use the PowerShell CmdLets
There are quite a number of PowerShell cmdlets available for you to learn about the configuration of Windows Azure Pack. If you open the Windows Azure Pack Administration PowerShell console you can see that there are two modules that get loaded that are full of cmdlets:

PS C:\Windows\system32> get-command -Module MgmtSvcConfig

CommandType     Name                                               ModuleName
-----------     ----                                               ----------
Cmdlet          Add-MgmtSvcAdminUser                               MgmtSvcConfig
Cmdlet          Add-MgmtSvcDatabaseUser                            MgmtSvcConfig
Cmdlet          Add-MgmtSvcResourceProviderConfiguration           MgmtSvcConfig
Cmdlet          Get-MgmtSvcAdminUser                               MgmtSvcConfig
Cmdlet          Get-MgmtSvcDatabaseSetting                         MgmtSvcConfig
Cmdlet          Get-MgmtSvcDefaultDatabaseName                     MgmtSvcConfig
Cmdlet          Get-MgmtSvcEndpoint                                MgmtSvcConfig
Cmdlet          Get-MgmtSvcFeature                                 MgmtSvcConfig
Cmdlet          Get-MgmtSvcFqdn                                    MgmtSvcConfig
Cmdlet          Get-MgmtSvcNamespace                               MgmtSvcConfig
Cmdlet          Get-MgmtSvcNotificationSubscriber                  MgmtSvcConfig
Cmdlet          Get-MgmtSvcResourceProviderConfiguration           MgmtSvcConfig
Cmdlet          Get-MgmtSvcSchema                                  MgmtSvcConfig
Cmdlet          Get-MgmtSvcSetting                                 MgmtSvcConfig
Cmdlet          Initialize-MgmtSvcFeature                          MgmtSvcConfig
Cmdlet          Initialize-MgmtSvcProduct                          MgmtSvcConfig
Cmdlet          Install-MgmtSvcDatabase                            MgmtSvcConfig
Cmdlet          New-MgmtSvcMachineKey                              MgmtSvcConfig
Cmdlet          New-MgmtSvcPassword                                MgmtSvcConfig
Cmdlet          New-MgmtSvcResourceProviderConfiguration           MgmtSvcConfig
Cmdlet          New-MgmtSvcSelfSignedCertificate                   MgmtSvcConfig
Cmdlet          Protect-MgmtSvcConfiguration                       MgmtSvcConfig
Cmdlet          Remove-MgmtSvcAdminUser                            MgmtSvcConfig
Cmdlet          Remove-MgmtSvcDatabaseUser                         MgmtSvcConfig
Cmdlet          Remove-MgmtSvcNotificationSubscriber               MgmtSvcConfig
Cmdlet          Remove-MgmtSvcResourceProviderConfiguration        MgmtSvcConfig
Cmdlet          Reset-MgmtSvcPassphrase                            MgmtSvcConfig
Cmdlet          Set-MgmtSvcCeip                                    MgmtSvcConfig
Cmdlet          Set-MgmtSvcDatabaseSetting                         MgmtSvcConfig
Cmdlet          Set-MgmtSvcDatabaseUser                            MgmtSvcConfig
Cmdlet          Set-MgmtSvcFqdn                                    MgmtSvcConfig
Cmdlet          Set-MgmtSvcIdentityProviderSettings                MgmtSvcConfig
Cmdlet          Set-MgmtSvcNotificationSubscriber                  MgmtSvcConfig
Cmdlet          Set-MgmtSvcPassphrase                              MgmtSvcConfig
Cmdlet          Set-MgmtSvcRelyingPartySettings                    MgmtSvcConfig
Cmdlet          Set-MgmtSvcSetting                                 MgmtSvcConfig
Cmdlet          Test-MgmtSvcDatabase                               MgmtSvcConfig
Cmdlet          Test-MgmtSvcPassphrase                             MgmtSvcConfig
Cmdlet          Test-MgmtSvcProtectedConfiguration                 MgmtSvcConfig
Cmdlet          Uninstall-MgmtSvcDatabase                          MgmtSvcConfig
Cmdlet          Unprotect-MgmtSvcConfiguration                     MgmtSvcConfig
Cmdlet          Update-MgmtSvcV1Data                               MgmtSvcConfig

As well as the MgmtSvcAdmin module, which is geared more toward day-to-day administration.

Read the Windows Azure Pack Claims Whitepaper
See here: Claims-Based Identity in Windows Azure Pack (docx).

Visit the Forums
When in doubt take a look at the forums and ask a question if you’re stuck.

Email Me
Lastly, you can contact me (steve@syfuhs.net) with any questions. I may not have answers but I might be able to find someone who can help.

Conclusion

In the first two parts of this series we looked at how authentication works, how it’s configured, and now in this installment we looked at how we can configure a third party IdP to log in to Windows Azure Pack. If you’re trying to configure Windows Azure Pack to use a custom IdP I imagine this part is the most complicated to figure out and hopefully it was documented well enough. I personally spent a fair amount of time fiddling with settings and most of the information I’ve gathered for this series has been the result of lots of trial and error. With any luck this series has proven useful to you and you have more luck with the configuration than I originally did.

Next time we’ll take a look at how we can consume the public APIs using a third party IdP for authentication.

In the future we might take a look at how we can authenticate requests to a service called from a Windows Azure Pack add-on, and how we can call into Windows Azure Pack APIs from an add-on.

Real-time User Notification and Session Management with SignalR - Part 2

by Steve Syfuhs / March 24, 2013 02:31 PM

In Part 1 I introduced a basic usage of SignalR and talked about the goals we were trying to accomplish with the library.

In the next few posts I’m going to show how we can build a real-time user notification and session management system for a web application.

In this post I’ll show how we can implement a solution that accomplishes our goals.

Before diving back into SignalR it’s important to have a quick rundown of concepts for session management. If we think about how sessions work for a user in most applications it’s usually conceptually simple. A session is a mechanism to track user rights between the user logging in and logging out.  A session is usually tracked through a cookie attached to each request made to the server. A user has a session (or multiple sessions if they are logged in from another machine/browser) and each session is tied to a request or connection. Each time the user requests a page a new connection is opened to the server. As long as the session is active each connection is authorized to do whatever it needs to do (as defined by whatever authorization policies are in place).

image

When you kill a session each subsequent connection for that session is denied. The session is dead, no more access. Simple. A session is usually killed when a user explicitly logs out and destroys the session cookie or the browser is closed. This doesn’t normally kill any other sessions tied to the user though. The connections made from another browser are still authorized.

From a security perspective we may want to notify the user that another session is already active or was just created. We can then allow the user to destroy the other session if they want.

SignalR works really well in this scenario because it solves a nasty problem of timing. Normally when the server wants to tell the client something it has to wait for the client to make a request to the server and then the client has to act on the server’s message. A request to the server is usually only done when a user explicitly clicks something, or there’s a timer polling every 30 seconds or so. If we want to notify the user instantly of another session we can’t necessarily wait for the client to call. SignalR solves this problem because it can call the client directly from the server.

Now, allowing a user to control other sessions requires tracking sessions and connections. If we follow the diagram above we have a pretty simple relationship between users and sessions, and between sessions and connections. We could store this information in a database or other persistent storage, and in fact would want to for non-trivial applications, but for the sake of this post we’ll just store the data in memory.

Most session handlers these days (e.g. the SessionAuthenticationModule in WIF) create a cookie that contains everything the web application should know about the user. As long as that identity in the cookie is valid the user can do whatever the session handler allows. This is a mostly stateless process and aligns with various tenets of REST. Each request to the server contains the identity of the user, and the server doesn’t have to track anything. It’s simple and powerful.

However, in non-trivial applications this doesn’t always cut it. Security sometimes requires state. In this case we require state in the sense that the server needs to track all active sessions tied to a user. For this we’ll use the WIF SessionAuthenticationModule (SAM) and a custom SessionSecurityTokenHandler.

Before we can validate a session though, we need to track when a session is created. If the application is configured for federation you can create a custom ClaimsAuthenticationManager and call the session creation code, or if you are creating a session token manually you can call this code on login.

void CreateSession()
{
    string sess = CreateSessionKey();

    var principal = new ClaimsPrincipal(new[] { new ClaimsIdentity(new[] { new Claim(ClaimTypes.Name, "myusername"), new Claim(ClaimTypes.Sid, sess) }, AuthenticationTypes.Password) });

    var token = FederatedAuthentication.SessionAuthenticationModule.CreateSessionSecurityToken(principal, "mycontext", DateTime.UtcNow, DateTime.UtcNow.AddDays(1), false);

    FederatedAuthentication.SessionAuthenticationModule.WriteSessionTokenToCookie(token);

    NotificationHub.RegisterSession(sess, principal.Identity.Name);
}

private string CreateSessionKey()
{
    var rng = System.Security.Cryptography.RNGCryptoServiceProvider.Create();

    var bytes = new byte[32];

    rng.GetNonZeroBytes(bytes);

    return Convert.ToBase64String(bytes);
}

We’ll get back to the NotificationHub.RegisterSession method in a bit.
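For the federated case mentioned above, a custom ClaimsAuthenticationManager could do the equivalent work. Here's a rough sketch (the class name and wiring are hypothetical, and it borrows the CreateSessionKey helper from the snippet above); you'd register it in the microsoft.identityModel/service section so it runs after the incoming token is validated:

public class SessionTrackingAuthenticationManager : ClaimsAuthenticationManager
{
    public override IClaimsPrincipal Authenticate(string resourceName, IClaimsPrincipal incomingPrincipal)
    {
        if (incomingPrincipal.Identity.IsAuthenticated)
        {
            // Tag the federated identity with a session id and register it,
            // mirroring what CreateSession() does for the manual login path.
            string sess = CreateSessionKey(); // assumes the helper above is reachable from here

            incomingPrincipal.Identities[0].Claims.Add(new Claim(ClaimTypes.Sid, sess));

            NotificationHub.RegisterSession(sess, incomingPrincipal.Identity.Name);
        }

        return incomingPrincipal;
    }
}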

After the session is created, on subsequent requests the SessionSecurityTokenHandler validates whether a user’s session is still valid and authorized. The SAM calls the token handler when it receives a session cookie and generates an identity for the current request.

From here we can determine whether the user’s session was forced to logout. If we override the ValidateSession method we can check against the NotificationHub. Keep in mind this is an example – it’s not a good design decision to track session data in your notification hub. I’m also using ClaimTypes.Sid, which isn’t the best claim type to use either.

protected override void ValidateSession(SessionSecurityToken securityToken)
{
    base.ValidateSession(securityToken);

    var ident = securityToken.ClaimsPrincipal.Identity as IClaimsIdentity;

    if (ident == null)
        throw new SecurityTokenException();

    var sessionClaim = ident.Claims.Where(c => c.ClaimType == ClaimTypes.Sid).FirstOrDefault();

    if(sessionClaim == null)
        throw new SecurityTokenExpiredException();

    if (!NotificationHub.IsSessionValid(sessionClaim.Value))
    {
        throw new SecurityTokenExpiredException();
    }
}

Every time a client makes a request to the server the user’s session is validated against the internal list of valid sessions. If the session is unknown or invalid an exception is thrown which kills the request.

To configure the use of this SecurityTokenHandler you can add it to the web.config in the microsoft.identityModel/service section. Yes, this is still WIF 3.5/.NET 4.0.  There is no requirement for .NET 4.5 here.

<securityTokenHandlers>
    <remove type="Microsoft.IdentityModel.Tokens.SessionSecurityTokenHandler, Microsoft.IdentityModel" />
    <add type="Syfuhs.Demo.CustomSessionSecurityTokenHandler, MyDemo" />
</securityTokenHandlers>

Now that we can track sessions on the server side we need to track connections. To start tracking connections we need to start at our Hub. If we go back to our NotificationHub we can override a few methods, specifically OnConnected and OnDisconnected. Every time a page has loaded the SignalR hubs client library, OnConnected is called and every time the page is unloaded OnDisconnected is called. Between these two methods we can tie all active connections to a session. Before we do that though we need to make sure that all requests to our Hub are only from logged in users.

To ensure only active sessions talk to our hub we need to decorate our hub with the [Authorize] attribute.

[Authorize(RequireOutgoing = true)]
public class NotificationHub : Hub
{
    // snip
}

Then we override the OnConnected method. Within this method we can access what’s called the ConnectionId, and associate it to our session. The ConnectionId is unique for each page loaded and connected to the server.

For this demo we’ll store the tracking information in a couple dictionaries.

private static readonly Dictionary<string, string> UserSessions = new Dictionary<string, string>();

private static readonly Dictionary<string, List<string>> sessionConnections = new Dictionary<string, List<string>>();

public override Task OnConnected()
{
    var user = Context.User.Identity as IClaimsIdentity;

    if (user == null)
        throw new SecurityException();

    var sessionClaim = user.Claims.Where(c => c.ClaimType == ClaimTypes.Sid).FirstOrDefault();

    if (sessionClaim == null)
        throw new SecurityException();

    sessionConnections[sessionClaim.Value].Add(Context.ConnectionId);

    return base.OnConnected();
}

On disconnect we want to remove the connection associated with the session.

public override Task OnDisconnected()
{
    var user = Context.User.Identity as IClaimsIdentity;

    if (user == null)
        throw new SecurityException();

    var sessionClaim = user.Claims.Where(c => c.ClaimType == ClaimTypes.Sid).FirstOrDefault();

    if (sessionClaim == null)
        throw new SecurityException();

    sessionConnections[sessionClaim.Value].Remove(Context.ConnectionId);

    return base.OnDisconnected();
}

Now at this point we can map all active connections to their various sessions. When we create a new session from a user logging in we want to notify all active connections that the new session was created. This notification will allow us to kill the new session if necessary. Here’s where we implement that NotificationHub.RegisterSession method.

internal static void RegisterSession(string sessionId, string user)
{
    UserSessions[sessionId] = user;
    sessionConnections[sessionId] = new List<string>();

    var message = "You logged in to another session";

    var context = GlobalHost.ConnectionManager.GetHubContext<NotificationHub>();

    var userCurrentSessions = UserSessions.Where(u => u.Value == user);

    foreach (var s in userCurrentSessions)
    {
        var connectionsTiedToSession = sessionConnections.Where(c => c.Key == s.Key).SelectMany(c => c.Value);

        foreach (var connectionId in connectionsTiedToSession)
            context.Clients.Client(connectionId).sessionRegistered(message, sessionId);
    }
}

This method will create a new session entry for us and look up all other sessions for the user. It will then loop through all connections for the sessions and notify the user that a new session was created.

So far so good, right? This takes care of almost all of the server side code. But next we’ll jump to the client side JavaScript and implement that notification.

When the server calls the client to notify the user about a new session we want to write the message out to screen and give the user the option of killing the session.

HTML:

<div class="notification"></div>

JavaScript:

var notifier = $.connection.notificationHub;

notifier.client.sessionRegistered = function (message, session) {
    $('.notification').text(message);

    $('.notification').append('<a class="killSession" href="#">End Session</a>');
    $('.notification').append('<a class="DismissNotification" href="#">Dismiss</a>');
    $('.killSession').click(function () {
        notifier.server.killSession(session);
        $('.notification').hide(500);
    });

    $('.DismissNotification').click(function () {
        $('.notification').hide(500);
    });
};

On session registration the notification div text is set to the message and a link is created to allow the user to kill the session. The click event calls the NotificationHub.KillSession method.

Back in the hub we implement the KillSession method to remove the session from the list of active sessions.

public void KillSession(string session)
{
    var connections = sessionConnections[session].ToList();

    sessionConnections.Remove(session);
    UserSessions.Remove(session);

    foreach (var c in connections)
    {
        Clients.Client(c).sessionEnded();
    }

}

Once the session is dead a call is made back to the clients associated with that session to notify the page that the session has ended. Back in the JavaScript we can hook into the sessionEnded function and reload the page.

notifier.client.sessionEnded = function () {
    location.reload();
}

Reloading the page will cause the browser to make a request to the server and the server will call our custom SessionSecurityTokenHandler where the ValidateSession method will throw an exception. Once this exception is thrown the request is stopped and all subsequent requests within the same session will have the same fate. The dead session should redirect to your login page.

To test this out all we have to do is load up our application and log in. Then if we create a new session by opening a new browser and logging in, e.g. switching from IE to Chrome, or within IE opening a new session via File > New Session, our first browser should notify you. If you click the End Session link you should automatically be logged out of the other session and redirected to your login page.

Pretty cool, huh?

Real-time User Notification and Session Management with SignalR - Part 1

by Steve Syfuhs / March 07, 2013 10:21 PM

As more and more applications and services are becoming always on and accessible from a wide range of devices it’s important that we are able to securely manage sessions for users across all of these systems.

Imagine that you have a web application that a user tends to stay logged into all day. Over time the application produces notifications for the user and those notifications should be shown fairly immediately. In this post I’m going to talk about a very important notification – when the user’s account has logged into another device while still logged into their existing session. If the user is logged into the application on their desktop at work it might be bad that they also just logged into their account from a computer on the other side of the country. Essentially, what we want is a way to notify the user that their account just logged in from another device. Why didn’t I just lead with that?

In the next few posts I’m going to show how we can build a real-time user notification and session management system for a web application.

To accomplish this task I’m going to use the SignalR library:

ASP.NET SignalR is a new library for ASP.NET developers that simplifies the process of adding real-time web functionality to your applications. Real-time web functionality is the ability to have server-side code push content to connected clients instantly as it becomes available.

Conceptually it’s exactly what we want to use – it allows us to notify a client (the user’s first browser session) from the server that another client (another browser or device) has logged in with the same account.

SignalR is based on a Remote Procedure Call (RPC) design pattern allowing messages to flow from the server to a client. The long and the short of it is that whenever a page is loaded in the browser a chunk of JavaScript is executed that calls back to the server and opens a connection either via websockets when supported or falls back to other methods like long polling or funky (but powerful) iframe business.

To understand how this works it’s necessary to get SignalR up and running. First, create a new web project of your choosing  in Visual Studio and open the Nuget Package Manager. Search online for the package “Microsoft.AspNet.SignalR” and install it. For the sake of simplicity this will install the entire SignalR library. Down the road you may decide to trim the installed components down to only the requisite pieces.

Locate the global.asax file in your project and open it. In the Application_Start method add this bit of code:

RouteTable.Routes.MapHubs();

This will register a hub (something we’ll create in a minute) to the “~/signalr/hubs” route. Next open your MasterPage or View and add the following script references somewhere after a reference to jQuery:

<script type="text/javascript" src="scripts/jquery.signalR-1.0.1.js"></script>
<script type="text/javascript" src="signalr/hubs"></script>

You’ll notice the second script reference is the same as our route that was added earlier. This script is dynamically generated and provides us a proxy for communicating with the hub on the server side.

At this point we haven’t done much. All we’ve done is set up our web application to use SignalR. It doesn’t do anything yet. In order for communication to occur we need something called a Hub.

A hub is the thing that offers us that RPC mechanism. We call into it to send messages. It then sends the messages to the given recipients based on the connections opened by the client-side JavaScript. To create a hub all we need to do is create a new class and inherit from Microsoft.AspNet.SignalR.Hub. I’ve created one called NotificationHub.

public class NotificationHub : Hub
{
    // Nothing to see here yet
}

A hub is conceptually a connector between your browser and your server. When a message is sent from your browser it is received by a hub and the hub sends it off to a given recipient. A hub receives messages through methods defined by you.

Before digging into specifics a quick demo is in order. In our NotificationHub class let’s create a new method:

public void Hello(string message)
{
     Debug.WriteLine(message);

}

For now that’s all we have to write server-side for the sake of this demo. It will receive a message and it will write it to the debug stream. Next, go back to your page to write some HTML and JavaScript.

First create a <div> and give it an Id of connected:

<div id="connected"></div>

Then add some JavaScript:

$.connection.hub.start().done(function () {
    $('#connected').text('I\'m connected with Id: ' + $.connection.hub.id);
});

What this will do is open a proxy connection to the hub(s) and once it’s completed the connection dance, the proxy calls a function and sets the text to the Id of the proxy connection. This Id value is a unique identifier created every time the client connects back to the server.

Now that we have an open connection to our hub we can call our Hello method. To do this we need to get the proxy to our notification hub, which is done through the $.connection object.

var notifier = $.connection.notificationHub;

For each hub we create and map to a route, the connection object has a pointer to its equivalent JavaScript proxy. Mind the camel-casing though. Once we have our proxy we can call our method through the server property. This property maps functions to methods in the hub. So to call our Hello method in the hub we call this JavaScript:

notifier.server.hello('World!');

Let’s make that clickable.

<a id="sayHi" href="#">Say Hello!</a>

$('#sayHi').click(function() { notifier.server.hello('World!'); });

If you click that you should now see “World!” in your Debug window.

That’s all fine and dandy for sending messages to the server, but AJAX already does that. Boring! Let’s go back to our hub and update that Hello method. Add the following line of code:

public void Hello(string message)
{
    Clients.All.helloEveryone(message);
}

What this will do is broadcast our message to All connected clients. It will call a function on the client named helloEveryone. For more information on who can receive messages take a look at the Hubs documentation. However, for our clients to receive that message we need to hook in a function for our proxy to call when it receives the broadcast. Back in the HTML and JavaScript add this:

<div id="msg"></div>

notifier.client.helloEveryone = function(message) {
    $('#msg').text(message);
}

We’ve hooked a function into the client object so that when the proxy receives the message to call the function, it will call our implementation. It’s really easy to build out a collection of calls to communicate both directions with this library. All calls that should be sent to the server should call notifier.server.{yourHubMethod} and all calls from the hub to the clients should be mapped to notifier.client.{eventListener}.

If you open a few browsers and click that link, all browsers should simultaneously receive the message and show “World!”. That’s pretty cool.

At this point we have nearly enough information to build out our session management and notification system. In the next post I’ll talk about how we can send messages directly to a specific user, as well as how to send messages from outside the scope of a hub.

The Case of the Failed Restore

by Steve Syfuhs / November 13, 2012 03:51 PM

As applications get more and more complex the backup and restore processes also tend to become more complex.

A lot of times backup can be broken down into simple processes:

  1. Get data from various sources
    • Database
    • Web.config
    • DPAPI
    • Certificate Stores
    • File system
    • etc
  2. Persist data to disk in specific format
  3. Validate data in specific format isn’t corrupt

A lot of times this can be a manual process, but in best case scenarios it’s all automated by some tool. In my particular case there was a tool that did all of this for me. Woohoo! Of course, there was a catch. The format was custom, so a backup of the database didn’t just call SQL backup; it essentially did a SELECT * FROM {all tables} and serialized that data to disk.

The process wasn’t particularly fancy, but it was designed so that the tool had full control over the data before it was ever restored. There’s nothing particularly wrong with such a design as it solved various problems that creep in when doing restores. The biggest problem it solved was the ability to handle breaking changes to the application’s schema during an upgrade as upgrades consisted of

  1. Backup
  2. Uninstall old version
  3. Install new version
  4. Restore backed up data

Since the restore tool knew about the breaking changes to the schema it was able to do something about it before the data ever went into the database. Better to mangle the data in C# than mangle the data in SQL. My inner DBA twitches a little whenever I say that.

Restoring data is conceptually a simple process:

  1. Deserialize data from specific format on disk
  2. Mangle as necessary to fit new schema
  3. Foreach (record in data) INSERT record

In theory the goal of the restore tool should be to make the application be in the exact same state as it was when it was originally backed up. In most cases this means having the database be exactly the same row for row, column for column. SQL Restore does a wonderful job of this. It doesn’t really do much processing of the backup data -- it simply overwrites the database file. You can’t get any more exact than that.

But alas, this tool didn’t use SQL Backup or SQL Restore and there was a problem – the tool was failing on restoring the database.

Putting on my debugger hat I stared at the various moving parts to see what could have caused it to fail.

The file wasn’t corrupt and the data was well formed. Hmm.

Log files! Aha, lets check the log files. There was an error! ‘There was a violation of primary key constraint (column)…’ Hmm.

Glancing over the Disk Usage by Top Tables report in SQL Management Studio suggested that all or most of the data was getting inserted into the database, based on what I knew of the data before it was backed up. Hmm.

The error was pretty straightforward – a record was trying to be inserted into a table that had a primary key value that already existed in that table. Checking the backup file showed that no primary keys were actually duplicated. Hmm.

Thinking back to how the tool actually did a restore I went through the basic steps in my head of where a duplicate primary key could be created. Serialization succeeded as it was able to make it to the data mangling bit. The log files showed that the mangling succeeded because it dumped all the values and there were no duplicates. Inserting the data mostly succeeded, but the transaction failed. Hmm.

How did the insert process work? First it truncated all data in all tables, since it was going to replace all the data. Then it disabled all key constraints so it could do a bulk insert table by table. Then it enabled identity insert so the identity values were exactly the same as before the backup. It then looped through all the data and inserted all the records. It then disabled identity insert and enabled the key constraints. Finally it committed the transaction.

It failed before it could enable the constraints so it failed on the actual insert. Actually, we already knew this because of the log file, but it’s always helpful to see the full picture. Except things weren’t making any sense. The data being inserted was valid. Or was it? The table that had the primary key violation was the audit table. The last record was two minutes ago, but the one before it was from three months ago. The last record ID was 12345, and the one before it was 12344. Checking the data in the backup file showed that there were at least twice as many records, so it failed halfway through the restore of that table.

The last audit record was: User [MyTestAccount] successfully logged in.

Ah dammit. That particular account was used by an application on my phone, and it checks in every few minutes.

While the restore was happening the application in question was still accessible, so the application on my phone did exactly what it was supposed to do.

Moral of the story: when doing a restore, make sure nobody can inadvertently modify data before you finish said restore.

SQL Server does this by making it impossible to write to the database while it’s being restored. If you don’t have the luxury of using SQL Restore, be sure to make it impossible to write to the database by either making the application inaccessible or coding your application to be able to run in a read-only fashion.

Talking ADFS on RunAs Radio

by Steve Syfuhs / December 01, 2011 07:02 PM

During the Toronto stop of the TechDays tour in Canada Richard Campbell was in town talking to a bunch of really smart people about the latest and greatest technologies they've been working on.

And then me for some reason.

We got to talk about ADFS and associates:

Richard talks to Steve Syfuhs at TechDays Toronto about IT Pros providing security services for developers using Active Directory Federated Services. IT and development talking to each other willingly? Perish the thought! But in truth, Steve makes it clear that ADFS provides a great wrapper for developers to access active directory or any other service that has security claims that an application might require. Azure depends on it, even Office 365 can take advantage of ADFS. Steve discusses how IT can work with developers to make the jobs of both groups easier.

You can listen to it here: http://www.runasradio.com/default.aspx?showNum=240

I need to work on using fewer vague analogies.

Input Validation: The Good, The Bad, and the What the Hell are you Doing?

by Steve Syfuhs / November 28, 2011 11:00 AM

Good morning class!

Pop quiz: How many of you do proper input validation in your ASP.NET site, WebForms, MVC, or otherwise?

Some Background

There is an axiom in computer science: never trust user input because it's guaranteed to contain invalid data at some point.

In security we have a similar axiom: never trust user input because it's guaranteed to contain invalid data at some point, and your code is bound to contain a security vulnerability somewhere, somehow. Granted, it doesn't flow as well as the former, but the point still stands.

The solution to this problem is conceptually simple: validate, validate, validate. Every single piece of input that is received from a user should be validated.

Of course when anyone says something is a simple concept it's bound to be stupidly complex to get the implementation right. Unfortunately proper validation is not immune to this problem. Why?

The Problem

Our applications are driven by user data. Without data our applications would be pretty useless. This data is usually pretty domain-specific too so everything we receive should have particular structures, and there's a pretty good chance that a few of these structures are so specific to the organization that there is no well-defined standard. By that I mean it becomes pretty difficult to validate certain data structures if they are custom designed and potentially highly-complex.

So we have this problem. First, if we don't validate that the stuff we are given is clean, our application starts behaving oddly and that limits the usefulness of the application. Second, if we don't validate that the stuff we are given is clean, and there is a bug in the code, we have a potential vulnerability that could wreak havoc for the users.

The Solution

The solution as stated above is to validate all the input, both from a business perspective and from a security perspective.

In this post we are going to look at the best way to validate the security of incoming data within ASP.NET. This requires looking into how ASP.NET processes input from the user.

When ASP.NET receives something from the user it can come from four different vectors:

  • Within the Query String (?foo=bar)
  • Within the Form (via a POST)
  • Within a cookie
  • Within the server variables (a collection generated from HTTP headers and internal server configuration)

These vectors drive ASP.NET, and you can potentially compromise an application by maliciously modifying any of them.

Pop quiz: How many of you check whether custom cookies exist before trying to use them? Almost everyone, good. Now, how many of you validate that the data within the cookies is, well, valid before using them?

What about checking your HTTP headers?

The Bypass

Luckily ASP.NET has some out-of-the-box behaviors that protect the application from malicious input. Unfortunately ASP.NET isn't very forgiving when it comes to validation. It doesn't distinguish between quasi-good input and bad input, so anything containing an angle bracket causes a YSoD (Yellow Screen of Death).

The de facto fix to this is to do one of two things:

  • Disable validation in the page declaration within WebForms, or stick a [ValidateInput(false)] attribute on an MVC controller
  • Set <pages validateRequest="false"> in web.config

What this will do is tell ASP.NET to basically skip validating the four vectors and let anything in. It was assumed that you would do validation on your own.

Raise your hand if you think this is a bad idea. Okay, keep your hands up if you've never done this for a production application. At this point almost everyone should have put their hands down. I did.

The reason we do this is because as I said before, ASP.NET isn't very forgiving when it comes to validation. It's all or nothing.

What's worse, as ASP.NET got older it started becoming pickier about what it let in so you had more reasons for disabling validation. In .NET 4 validation occurs at a much earlier point. It's a major breaking change:

The request validation feature in ASP.NET provides a certain level of default protection against cross-site scripting (XSS) attacks. In previous versions of ASP.NET, request validation was enabled by default. However, it applied only to ASP.NET pages (.aspx files and their class files) and only when those pages were executing.

In ASP.NET 4, by default, request validation is enabled for all requests, because it is enabled before the BeginRequest phase of an HTTP request. As a result, request validation applies to requests for all ASP.NET resources, not just .aspx page requests. This includes requests such as Web service calls and custom HTTP handlers. Request validation is also active when custom HTTP modules are reading the contents of an HTTP request.

Since backwards compatibility is so important, a configuration attribute was also added to tell ASP.NET to revert to the 2.0 validation mode meaning that it occurs later in the request lifecycle like in ASP.NET 2.0:

<httpRuntime requestValidationMode="2.0" />

If you do a search online for request validation almost everyone comes back with this solution. In fact, it became a well known solution with the Windows Identity Foundation in ASP.NET 4.0 because when you do a federated sign on, WIF receives the token as a chunk of XML. The validator doesn't approve because of the angle brackets. If you set the validation mode to 2.0, the validator checks after the request passes through all HttpModules, which is how WIF consumes that token via the WSFederationAuthenticationModule.

The Proper Solution

So we have the problem. We also have built in functionality that solves our problem, but the way it does it kind of sucks (it's not a bad solution, but it's also not extensible). We want a way that doesn't suck.

In earlier versions of ASP.NET the best solution was to disable validation and within a HttpModule check every vector for potentially malicious input. The benefit here is that you have control over what is malicious and what is not. You would write something along these lines:

public class ValidatorHttpModule : IHttpModule
{
    public void Dispose() { }

    public void Init(HttpApplication context)
    {
        context.BeginRequest += new EventHandler(context_BeginRequest);
    }

    void context_BeginRequest(object sender, EventArgs e)
    {
        HttpApplication context = (HttpApplication)sender;

        foreach (var q in context.Request.QueryString)
        {
            if (CheckQueryString(q))
            {
                throw new SecurityException("Bad validation");
            }
        }

        foreach (var f in context.Request.Form)
        {
            if (CheckForm(f))
            {
                throw new SecurityException("Bad validation");
            }
        }

        foreach (var c in context.Request.Cookies)
        {
            if (CheckCookie(c))
            {
                throw new SecurityException("Bad validation");
            }
        }

        foreach (var s in context.Request.ServerVariables)
        {
            if (CheckServerVariable(s))
            {
                throw new SecurityException("Bad validation");
            }
        }
    }

    // <snip />
}

The downside to this approach though is that you are stuck with pretty clunky validation logic. It executes on every single request, which may not always be necessary. You are also forced to execute the code in order of whenever your HttpModule is initialized. It won't necessarily execute first, so it won't necessarily protect all parts of your application. Protection that doesn't cover every part of the application from a particular attack isn't very useful.  <Cynicism>Half-assed protection is only good when you have half an ass.</Cynicism>

What we want is something that executes before everything else. In our HttpModule we are validating on BeginRequest, but we want to validate before BeginRequest.

The way we do this is with a custom RequestValidator. On a side note, this post may qualify as having the longest introduction ever. In any case, this custom RequestValidator is set within the httpRuntime tag within the web.config:

<httpRuntime requestValidationType="Syfuhs.Web.Security.CustomRequestValidator" />

We create a custom request validator by creating a class with a base class of System.Web.Util.RequestValidator. Then we override the IsValidRequestString method.

This method allows us to find out where the input is coming from, e.g. from a Form or from a cookie etc. This validator is called on each value within the four collections above, but only when a value exists. It saves us the trouble of going over everything in each request. Within an HttpModule we could certainly build out the same functionality by checking contents of each collection, but this saves us the hassle of writing the boilerplate code. It also provides us a way of describing the problem in detail because we can pass an index location of where the problem exists. So if we find a problem at character 173 we can pass that value back to the caller and ASP.NET will throw an exception describing that index. This is how we get such a detailed exception from WIF:

A Potentially Dangerous Request.Form Value Was Detected from the Client (wresult="<t:RequestSecurityTo...")

Our validator class ends up looking like:

public class MyCustomRequestValidator : RequestValidator
{
    protected override bool IsValidRequestString(HttpContext context, string value, RequestValidationSource requestValidationSource, string collectionKey, out int validationFailureIndex)
    {
        validationFailureIndex = 0;

        switch (requestValidationSource)
        {
            case RequestValidationSource.Cookies:
                return ValidateCookie(collectionKey, value, out validationFailureIndex);

            case RequestValidationSource.Form:
                return ValidateFormValue(collectionKey, value, out validationFailureIndex);

            // <snip />
        }

        return base.IsValidRequestString(context, value, requestValidationSource, collectionKey, out validationFailureIndex);
    }

    // <snip />
}

Each application has different validation requirements so I've just mocked up how you would create a custom validator.

If you use this design you can easily validate all inputs across the application, and you don't have to turn off validation.

So once again, pop quiz: How many of you do proper input validation?

Authentication in an Active Claims Model

by Steve Syfuhs / December 17, 2010 04:00 PM

When working with Claims Based Authentication a lot of things are similar between the two different models, Active and Passive.  However, there are a few cases where things differ… a lot.  The biggest of course being how a Request for Security Token (RST) is authenticated.  In a passive model the user is given a web page where they can essentially have full rein over how credentials are handled.  Once the credentials have been received and authenticated by the web server, the server generates an identity and passes it off to SecurityTokenService.Issue(…) and does its thing by gathering claims, packaging them up into a token, and POST’ing the token back to the Relying Party.

Basically we are handling authentication any other way an ASP.NET application would, by using the Membership provider and funnelling all anonymous users to the login page, and then redirecting back to the STS.  To hand off to the STS, we can just call:

FederatedPassiveSecurityTokenServiceOperations.ProcessRequest(
    HttpContext.Current.Request,
    HttpContext.Current.User,
    MyTokenServiceConfiguration.Current.CreateSecurityTokenService(),
    HttpContext.Current.Response);

However, it’s a little different with the active model.

Web services manage identity via tokens, but they differ from the passive model because everything is passed via tokens, including credentials.  The client collects the credentials and packages them into a SecurityToken object, which is serialized and passed to the STS.  The STS deserializes the token and passes it off to a SecurityTokenHandler.  The token handler validates the credentials, generates an identity, and pushes it up the call stack to the STS.
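To make that a bit more concrete, here is a minimal sketch of the client side, assuming a hypothetical WCF proxy called CalculatorClient generated from the service metadata and a binding configured for message security with UserName credentials; WCF takes care of serializing the credentials into a UserNameSecurityToken on the wire:

// Hypothetical proxy and endpoint name (generated via Add Service Reference)
var client = new CalculatorClient("WSHttpBinding_ICalculator");

// These get packaged into a UserNameSecurityToken in the message's security header;
// the receiving side hands that token to a SecurityTokenHandler for validation
client.ClientCredentials.UserName.UserName = "steve";
client.ClientCredentials.UserName.Password = "p@ssw0rd";

var sum = client.Add(1, 2);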

Much like with ASP.NET, there is a built-in handler that validates username/password combinations against the Membership provider, but you are limited to the basic functionality of the provider.  90% of the time this is probably just fine.  Other times you may need to create your own SecurityTokenHandler.  It’s actually not that hard to do.

First you need to know what sort of token is being passed across the wire.  The big three are:

  • UserNameSecurityToken – Has a username and password pair
  • WindowsSecurityToken – Used for Windows authentication using NTLM or Kerberos
  • X509SecurityToken – Uses x509 certificate for authentication

Each is pretty self explanatory.

Some others out of the box are:

[Image: the other out-of-the-box SecurityToken types, as listed in Reflector]

Reflector is an awesome tool.  Just sayin’.

Now that we know what type of token we are expecting we can build the token handler.  For the sake of simplicity let’s create one for the UserNameSecurityToken.

To do that we create a new class derived from Microsoft.IdentityModel.Tokens.UserNameSecurityTokenHandler.  We could start at SecurityTokenHandler, but it’s an abstract class and requires a lot of work to get going.  Suffice it to say it’s mostly boilerplate code.

We now need to override a method and property: ValidateToken(SecurityToken token) and TokenType.

TokenType is used later on to tell what kind of token the handler can actually validate.  More on that in a minute.

Overriding ValidateToken is fairly trivial*.  This is where we actually handle the authentication.  However, it returns a ClaimsIdentityCollection instead of a bool, so if the credentials are invalid we need to throw an exception; I would recommend a SecurityTokenValidationException.  Once authentication is done we get the identity for the credentials and bundle it up into a ClaimsIdentityCollection.  We can do that by creating an IClaimsIdentity and passing it into the constructor of a ClaimsIdentityCollection.

public override ClaimsIdentityCollection ValidateToken(SecurityToken token)
{
    UserNameSecurityToken userToken = token as UserNameSecurityToken;

    if (userToken == null)
        throw new ArgumentNullException("token");

    string username = userToken.UserName;
    string pass = userToken.Password;

    if (!Membership.ValidateUser(username, pass))
        throw new SecurityTokenValidationException("Username or password is wrong.");

    IClaimsIdentity ident = new ClaimsIdentity();
    ident.Claims.Add(new Claim(WSIdentityConstants.ClaimTypes.Name, username));

    return new ClaimsIdentityCollection(new IClaimsIdentity[] { ident });
}

Next we need to set the TokenType:

public override Type TokenType
{
    get
    {
        return typeof(UserNameSecurityToken);
    }
}

This property is used as a way to tell its caller that it can validate/authenticate any tokens of the type it returns.  The web service that acts as the STS loads a collection of SecurityTokenHandlers as part of its initialization, and when it receives a token it iterates through the collection looking for one that can handle it.

To add the handler to the collection you add it via configuration, or if you are crazy and doing a lot of low-level work you can add it to the SecurityTokenServiceConfiguration in the HostFactory for the service:

securityTokenServiceConfiguration.SecurityTokenHandlers.Add(new MyAwesomeUserNameSecurityTokenHandler());

To add it via configuration you first need to remove any other handlers that can validate the same type of token:

<microsoft.identityModel>
  <service>
    <securityTokenHandlers>
      <remove type="Microsoft.IdentityModel.Tokens.WindowsUserNameSecurityTokenHandler,
              Microsoft.IdentityModel, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
      <remove type="Microsoft.IdentityModel.Tokens.MembershipUserNameSecurityTokenHandler,
              Microsoft.IdentityModel, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
      <add type="Syfuhs.IdentityModel.Tokens.MyAwesomeUserNameSecurityTokenHandler, Syfuhs.IdentityModel" />
    </securityTokenHandlers>
  </service>
</microsoft.identityModel>

That’s pretty much all there is to it.  Here is the class for the sake of completeness:

using System;
using System.IdentityModel.Tokens;
using System.Web.Security;
using Microsoft.IdentityModel.Claims;
using Microsoft.IdentityModel.Protocols.WSIdentity;
using Microsoft.IdentityModel.Tokens;

namespace Syfuhs.IdentityModel.Tokens
{
    public class MyAwesomeUserNameSecurityTokenHandler : UserNameSecurityTokenHandler
    {
        // Tell the STS pipeline this handler is able to validate the tokens it receives
        public override bool CanValidateToken { get { return true; } }

        // The token type this handler knows how to validate
        public override Type TokenType
        {
            get { return typeof(UserNameSecurityToken); }
        }

        public override ClaimsIdentityCollection ValidateToken(SecurityToken token)
        {
            UserNameSecurityToken userToken = token as UserNameSecurityToken;

            if (userToken == null)
                throw new ArgumentNullException("token");

            string username = userToken.UserName;
            string pass = userToken.Password;

            // Authenticate against the ASP.NET Membership provider
            if (!Membership.ValidateUser(username, pass))
                throw new SecurityTokenValidationException("Username or password is wrong.");

            // Bundle the authenticated user up into a claims identity
            IClaimsIdentity ident = new ClaimsIdentity();
            ident.Claims.Add(new Claim(WSIdentityConstants.ClaimTypes.Name, username));

            return new ClaimsIdentityCollection(new IClaimsIdentity[] { ident });
        }
    }
}

* Trivial in the development sense, not trivial in the security sense.

AzureFest–Final Countdown: 2 Days to go

by Steve Syfuhs / December 08, 2010 04:00 PM

[The soundtrack for this post can be found at Youtube]

Cory Fowler is the Canadian MVP for Windows Azure, an ObjectSharp Consultant, and a good friend of mine.  He will be presenting on Windows Azure at, you guessed it, AzureFest!  We have two half day events on December 11th 2010 (two days from now – see what I did there?) at Microsoft’s office in Mississauga and it’s chock full of everything you need to know about getting started with Windows Azure.  You can register by clicking here.

What You'll Learn

  • How to setup your Azure Account
  • How to take a traditional on-premises ASP.NET application and deploy it to Azure
  • Publishing Applications to Azure Developer Portal
  • Setting up the Azure SDK and Azure Tools for Visual Studio on your laptop
  • Using the development App Fabric

We Provide

  • The tools you will need on your machine to prepare yourself for Azure
  • Hands on instruction and expert assistance
  • Power and network access
  • Snacks and refreshments
  • For every Azure activation – funding for your User Group
  • Post event technical resources so you can take your skills to the next level

You Provide

  • Your own laptop
  • Your own credit card (for Azure activations this is required, even if you only set up a trial period, but this event is free!)
  • Your experience in building ASP.NET Applications and Services

Seats are still available.  Register!

P.S. Did I mention this event is free?

Preventing Frame Exploits in a Passive Claims Model

by Steve Syfuhs / November 30, 2010 04:00 PM

At a presentation a few weeks ago someone asked me about capturing session details during authentication at an STS by way of frames and JavaScript.  To paraphrase the question: “What prevents a malicious developer from sticking an RP within an iframe, causing a redirect to an STS, getting some user to log in, and then capturing the details through JavaScript from the parent page?”  There are a couple of ways this problem can be solved.  It’s a defense-in-depth problem where, on their own, each piece won’t close every attack vector, but when used together you end up with a pretty solid solution.

  • First, a lot of newer browsers will actually prevent cross-frame JavaScript calls when SSL is involved.  Depending on the browser, the JavaScript will throw the equivalent of an Access Denied exception.  This is not the case with all browser versions though; older browsers may not do this.
  • Second, some browsers will not allow you to host an SSL page in a frame if the parent page is not using SSL.  The easy fix for the malicious developer is to simply use SSL for the parent site, but that could be problematic as the CAs theoretically verify the sites requesting certificates.
  • Third, you could write some JavaScript for the STS to bust out of the frame.  It would look something like this:

if (top != self)
{
    try
    {
        // If we're framed, replace the top-level window with the STS page
        top.location.replace(self.location.href);
    }
    catch (e)
    {
        // Cross-origin access to top.location may throw; nothing more we can do here
    }
}

The problem with this is that it wouldn’t work if the browser has JavaScript disabled.

  • Fourth, there is a new HTTP header that Microsoft introduced with IE 8 that tells the browser that if the requested page is hosted in a frame it should simply stop processing the request.  Safari and Chrome support it natively, and Firefox supports it with the NoScript add-on.  The header is called X-Frame-Options and it can have two values: “DENY”, which prevents the page from being framed anywhere, and “SAMEORIGIN”, which allows the page to be rendered only when the parent page comes from the same origin, e.g. the parent is somesite.com/parent and the framed page is somesite.com/framed.

There are a couple of ways to add this header to your page.  First you can add it via ASP.NET:

Context.Response.AddHeader("x-frame-options", "DENY");

Or you could add it to all pages via IIS.  To do this open the IIS Manager and select the site in question.  Then select the Feature “HTTP Response Headers”:

[Image: IIS Manager with the HTTP Response Headers feature selected]

Select Add… and then set the name to x-frame-options and the value to DENY:

[Image: the Add Custom HTTP Response Header dialog with the name set to x-frame-options and the value set to DENY]
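If you would rather keep it in source control than click through the UI, the same setting can be expressed in web.config; this is a sketch of the standard IIS 7+ custom headers section, nothing WIF-specific:

<system.webServer>
  <httpProtocol>
    <customHeaders>
      <!-- Stop browsers from rendering these pages inside a frame -->
      <add name="X-Frame-Options" value="DENY" />
    </customHeaders>
  </httpProtocol>
</system.webServer>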

By keeping in mind these options you can do a lot to prevent any exploits that use frames.

The Basics of Building a Security Token Service

by Steve Syfuhs / October 29, 2010 04:00 PM

Last week at TechDays in Toronto I ran into a fellow I worked with while I was at Woodbine.  He works with a consulting firm Woodbine uses, and he caught my session on Windows Identity Foundation.  His thoughts were, essentially paraphrased, that the principle of Claims Authentication is sound and a good idea, but that implementing it requires a major investment.  Yes.  Absolutely.  You will essentially be adding a new tier to the application.  Hmm.  I’m not sure if I can get away with that analogy.  It will certainly feel like you are adding a new tier anyway.

What strikes me as the main investment is the Security Token Service.  When you break it down, there are a lot of moving parts in an STS.  In a previous post I asked what it would take to create something similar to ADFS 2.  I said it would be fairly straightforward, and broke down the parts as well as what would be required of them.  I listed:

  • Token Services
  • A Windows Authentication end-point
  • An Attribute store-property-to-claim mapper (maps any LDAP properties to any claim types)
  • An application management tool (MMC snap-in and PowerShell cmdlets)
  • Proxy Services (Allows requests to pass NAT’ed zones)

These aren’t all that hard to develop.  With the exception of the proxy services and token service itself, there’s a good chance we have created something similar to each one if user authentication is part of an application.  We have the authentication endpoint: a login form to do SQL Authentication, or the Windows Authentication Provider for ASP.NET.  We have the attribute store and something like a claims mapper: Active Directory, SQL databases, etc.  We even have an application management tool: anything you used to manage users in the first place.  This certainly doesn’t get us all the way there, but they are good starting points.

Going back to my first point, the STS is probably the biggest investment.  However, it’s kind of trivial to create an STS using WIF.  I say that with a big warning though: an STS is a security system.  Securing such a system is NOT trivial.  Writing your own STS probably isn’t the best way to approach this.  You would probably be better off using an STS like ADFS.  With that being said, it’s good to know what goes into building an STS, and if you really do have the proper resources to develop one, as well as do proper security testing (you probably wouldn’t be reading this article on how to do it in that case…), go for it.

For the sake of simplicity I’ll be going through the Fabrikam Shipping demo code since they did a great job of creating a simple STS.  The fun bits are in the Fabrikam.IPSts project under the Identity folder.  The files we want to look at are CustomSecurityTokenService.cs, CustomSecurityTokenServiceConfiguration.cs, and the default.aspx code file.  I’m not sure I like the term “configuration”, as the way this is built strikes me as factory-ish.

[Image: the Fabrikam.IPSts project showing the Identity folder and the files mentioned above]

The process is pretty simple.  A request is made to default.aspx which passes the request to FederatedPassiveSecurityTokenServiceOperations.ProcessRequest() as well as a newly instantiated CustomSecurityTokenService object by calling CustomSecurityTokenServiceConfiguration.Current.CreateSecurityTokenService().
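For reference, the code-behind of default.aspx boils down to something like this (a sketch of the handoff rather than a verbatim copy of the sample):

protected void Page_Load(object sender, EventArgs e)
{
    // Let WIF drive the WS-Federation passive protocol, using our STS to issue the token
    FederatedPassiveSecurityTokenServiceOperations.ProcessRequest(
        Request,
        User,
        CustomSecurityTokenServiceConfiguration.Current.CreateSecurityTokenService(),
        Response);
}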

The configuration class contains configuration data for the STS (hence the name), like the signing certificate, but it also instantiates an instance of the STS using that configuration.  The code for it is simple:

namespace Microsoft.Samples.DPE.Fabrikam.IPSts
{
    using Microsoft.IdentityModel.Configuration;
    using Microsoft.IdentityModel.SecurityTokenService;

    internal class CustomSecurityTokenServiceConfiguration : SecurityTokenServiceConfiguration
    {
        private static CustomSecurityTokenServiceConfiguration current;

        private CustomSecurityTokenServiceConfiguration()
        {
            this.SecurityTokenService = typeof(CustomSecurityTokenService);
            this.SigningCredentials = new X509SigningCredentials(this.ServiceCertificate);
            this.TokenIssuerName = "https://ipsts.fabrikam.com/";
        }

        public static CustomSecurityTokenServiceConfiguration Current
        {
            get
            {
                if (current == null)
                {
                    current = new CustomSecurityTokenServiceConfiguration();
                }

                return current;
            }
        }
    }
}

It has a base type of SecurityTokenServiceConfiguration and all it does is set the custom type for the new STS, the certificate used for signing, and the issuer name.  It then lets the base class handle the rest.  Then there is the STS itself.  It’s dead simple.  The custom class has a base type of SecurityTokenService and overrides a couple of methods.  The important one is GetOutputClaimsIdentity():

protected override IClaimsIdentity GetOutputClaimsIdentity(
    IClaimsPrincipal principal, RequestSecurityToken request, Scope scope)
{
    var inputIdentity = (IClaimsIdentity)principal.Identity;

    // Look up the authenticated user and their details
    Claim name = inputIdentity.Claims.Single(claim => claim.ClaimType == ClaimTypes.Name);
    Claim email = new Claim(ClaimTypes.Email, Membership.Provider.GetUser(name.Value, false).Email);
    string[] roles = Roles.Provider.GetRolesForUser(name.Value);

    // Package everything up as claims on the issued identity
    var issuedIdentity = new ClaimsIdentity();
    issuedIdentity.Claims.Add(name);
    issuedIdentity.Claims.Add(email);

    foreach (var role in roles)
    {
        var roleClaim = new Claim(ClaimTypes.Role, role);
        issuedIdentity.Claims.Add(roleClaim);
    }

    return issuedIdentity;
}

It gets the authenticated user, grabs all the roles from the Roles provider, generates a bunch of claims, and then returns the identity.  Pretty simple.
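The other override worth a mention is GetScope(), which tells WIF which relying party the token is for, what certificate to encrypt it with, and where to POST it back to.  The Fabrikam sample has its own version; as a rough sketch (not the sample’s exact code, and LookupRelyingPartyCertificate is a stand-in for however you resolve the RP’s certificate) it looks something like this:

protected override Scope GetScope(IClaimsPrincipal principal, RequestSecurityToken request)
{
    // Sign the token with the STS certificate configured earlier
    var scope = new Scope(request.AppliesTo.Uri.OriginalString, SecurityTokenServiceConfiguration.SigningCredentials);

    // Encrypt the token for the relying party (stand-in lookup method)
    scope.EncryptingCredentials = new X509EncryptingCredentials(LookupRelyingPartyCertificate(request.AppliesTo.Uri));

    // Where the issued token gets POSTed back to
    scope.ReplyToAddress = scope.AppliesToAddress;

    return scope;
}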

At this point you’ve just moved the authentication and Roles stuff away from the application.  Nothing has really changed data-wise.  If you only cared about roles, name, and email you are done.  If you needed something more you could easily add in the logic to grab the values you needed. 

By no means is this production ready, but it is a good basis for how the STS creates claims.

// About

Steve is a renaissance kid when it comes to technology. He spends his time in the security stack.