

Real-time User Notification and Session Management with SignalR - Part 2

by Steve Syfuhs / March 24, 2013 02:31 PM

In Part 1 I introduced a basic usage of SignalR and talked about the goals we were trying to accomplish with the library.

In the next few posts I’m going to show how we can build a real-time user notification and session management system for a web application.

In this post I’ll show how we can implement a solution that accomplishes our goals.

Before diving back into SignalR it’s important to have a quick rundown of concepts for session management. If we think about how sessions work for a user in most applications it’s usually conceptually simple. A session is a mechanism to track user rights between the user logging in and logging out.  A session is usually tracked through a cookie attached to each request made to the server. A user has a session (or multiple sessions if they are logged in from another machine/browser) and each session is tied to a request or connection. Each time the user requests a page a new connection is opened to the server. As long as the session is active each connection is authorized to do whatever it needs to do (as defined by whatever authorization policies are in place).

[Diagram: a user has one or more sessions, and each session is tied to one or more connections]

When you kill a session each subsequent connection for that session is denied. The session is dead, no more access. Simple. A session is usually killed when a user explicitly logs out and destroys the session cookie or the browser is closed. This doesn’t normally kill any other sessions tied to the user though. The connections made from another browser are still authorized.

From a security perspective we may want to notify the user that another session is already active or was just created. We can then allow the user to destroy the other session if they want.

SignalR works really well in this scenario because it solves a nasty problem of timing. Normally when the server wants to tell the client something it has to wait for the client to make a request to the server and then the client has to act on the server’s message. A request to the server is usually only done when a user explicitly clicks something, or there’s a timer polling every 30 seconds or so. If we want to notify the user instantly of another session we can’t necessarily wait for the client to call. SignalR solves this problem because it can call the client directly from the server.
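To make that concrete, here's a minimal sketch of a server-initiated call in SignalR 1.x (the hub, class, and client method names are illustrative; we'll build the real hub below):

public class NotifyHub : Hub { }

public class Notifier
{
    public void SayHello()
    {
        // Get the hub's context and invoke a function registered on every
        // connected client, without waiting for a client request.
        var context = GlobalHost.ConnectionManager.GetHubContext<NotifyHub>();
        context.Clients.All.notify("Hello from the server");
    }
}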

Now, allowing a user to control other sessions requires tracking sessions and connections. If we follow the diagram above we have a pretty simple relationship between users and sessions, and between sessions and connections. We could store this information in a database or other persistent storage, and in fact would want to for non-trivial applications, but for the sake of this post we’ll just store the data in memory.

Most session handlers these days (e.g. the SessionAuthenticationModule in WIF) create a cookie that contains everything the web application should know about the user. As long as the identity in the cookie is valid the user can do whatever the session handler allows. This is a mostly stateless process and aligns with various tenets of REST. Each request to the server contains the identity of the user, and the server doesn't have to track anything. It's simple and powerful.

However, in non-trivial applications this doesn’t always cut it. Security sometimes requires state. In this case we require state in the sense that the server needs to track all active sessions tied to a user. For this we’ll use the WIF SessionAuthenticationModule (SAM) and a custom SessionSecurityTokenHandler.

Before we can validate a session though, we need to track when a session is created. If the application is configured for federation you can create a custom ClaimsAuthenticationManager and call the session creation code, or if you are creating a session token manually you can call this code on login.

void CreateSession()
{
    // Generate a random key to identify this session.
    string sess = CreateSessionKey();

    // Stash the session key in the identity as a claim so it travels with the session cookie.
    var principal = new ClaimsPrincipal(new[]
    {
        new ClaimsIdentity(new[]
        {
            new Claim(ClaimTypes.Name, "myusername"),
            new Claim(ClaimTypes.Sid, sess)
        }, AuthenticationTypes.Password)
    });

    // Create a session token valid for one day and write it to the session cookie.
    var token = FederatedAuthentication.SessionAuthenticationModule.CreateSessionSecurityToken(
        principal, "mycontext", DateTime.UtcNow, DateTime.UtcNow.AddDays(1), false);

    FederatedAuthentication.SessionAuthenticationModule.WriteSessionTokenToCookie(token);

    // Register the new session so the user's other active sessions can be notified.
    NotificationHub.RegisterSession(sess, principal.Identity.Name);
}

private string CreateSessionKey()
{
    // 32 bytes from a cryptographically secure RNG, base64-encoded.
    using (var rng = new System.Security.Cryptography.RNGCryptoServiceProvider())
    {
        var bytes = new byte[32];
        rng.GetNonZeroBytes(bytes);

        return Convert.ToBase64String(bytes);
    }
}

We’ll get back to the NotificationHub.RegisterSession method in a bit.
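If your application is federated, the custom ClaimsAuthenticationManager mentioned above is a natural place to call CreateSession. A minimal sketch (the class name is illustrative, and a real implementation would build the session principal from incomingPrincipal rather than the hard-coded values in CreateSession):

public class SessionTrackingAuthenticationManager : ClaimsAuthenticationManager
{
    public override IClaimsPrincipal Authenticate(string resourceName, IClaimsPrincipal incomingPrincipal)
    {
        // Only track sessions for users who actually authenticated.
        if (incomingPrincipal.Identity.IsAuthenticated)
            CreateSession();

        return base.Authenticate(resourceName, incomingPrincipal);
    }
}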

After the session is created, on subsequent requests the SessionSecurityTokenHandler validates whether a user’s session is still valid and authorized. The SAM calls the token handler when it receives a session cookie and generates an identity for the current request.

From here we can determine whether the user’s session was forced to logout. If we override the ValidateSession method we can check against the NotificationHub. Keep in mind this is an example – it’s not a good design decision to track session data in your notification hub. I’m also using ClaimTypes.Sid, which isn’t the best claim type to use either.

protected override void ValidateSession(SessionSecurityToken securityToken)
{
    // Run the standard lifetime validation first.
    base.ValidateSession(securityToken);

    var ident = securityToken.ClaimsPrincipal.Identity as IClaimsIdentity;

    if (ident == null)
        throw new SecurityTokenException();

    // Find the session key claim we attached at login.
    var sessionClaim = ident.Claims.FirstOrDefault(c => c.ClaimType == ClaimTypes.Sid);

    if (sessionClaim == null)
        throw new SecurityTokenExpiredException();

    // Reject the request if the session was killed or is unknown.
    if (!NotificationHub.IsSessionValid(sessionClaim.Value))
    {
        throw new SecurityTokenExpiredException();
    }
}

Every time a client makes a request to the server the user’s session is validated against the internal list of valid sessions. If the session is unknown or invalid an exception is thrown which kills the request.

To configure the use of this SecurityTokenHandler you can add it to the web.config in the microsoft.identityModel/service section. Yes, this is still WIF 3.5/.NET 4.0.  There is no requirement for .NET 4.5 here.

<securityTokenHandlers>
    <remove type="Microsoft.IdentityModel.Tokens.SessionSecurityTokenHandler, Microsoft.IdentityModel" />
    <add type="Syfuhs.Demo.CustomSessionSecurityTokenHandler, MyDemo" />
</securityTokenHandlers>

Now that we can track sessions on the server side, we need to track connections, and for that we start at our Hub. If we go back to our NotificationHub we can override a few methods, specifically OnConnected and OnDisconnected. OnConnected is called every time a page that has loaded the SignalR hubs client library connects to the server, and OnDisconnected is called when that page unloads. Between these two methods we can tie all active connections to a session. Before we do that though, we need to make sure that only logged-in users can talk to our Hub.

To ensure only active sessions talk to our hub we need to decorate our hub with the [Authorize] attribute.

[Authorize(RequireOutgoing = true)]
public class NotificationHub : Hub
{
    // snip
}

Then we override the OnConnected method. Within this method we can access what’s called the ConnectionId, and associate it to our session. The ConnectionId is unique for each page loaded and connected to the server.

For this demo we'll store the tracking information in a couple of static dictionaries. (As noted above, a real implementation would want thread-safe, persistent storage; plain dictionaries are only fine for a single-server demo.)

private static readonly Dictionary<string, string> UserSessions = new Dictionary<string, string>();

private static readonly Dictionary<string, List<string>> sessionConnections = new Dictionary<string, List<string>>();

public override Task OnConnected()
{
    var user = Context.User.Identity as IClaimsIdentity;

    if (user == null)
        throw new SecurityException();

    var sessionClaim = user.Claims.FirstOrDefault(c => c.ClaimType == ClaimTypes.Sid);

    if (sessionClaim == null)
        throw new SecurityException();

    // Guard against a session we aren't tracking yet (e.g. after an app restart).
    if (!sessionConnections.ContainsKey(sessionClaim.Value))
        sessionConnections[sessionClaim.Value] = new List<string>();

    // Tie this connection to the user's session.
    sessionConnections[sessionClaim.Value].Add(Context.ConnectionId);

    return base.OnConnected();
}

On disconnect we want to remove the connection associated with the session.

public override Task OnDisconnected()
{
    var user = Context.User.Identity as IClaimsIdentity;

    if (user == null)
        throw new SecurityException();

    var sessionClaim = user.Claims.FirstOrDefault(c => c.ClaimType == ClaimTypes.Sid);

    if (sessionClaim == null)
        throw new SecurityException();

    // Remove this connection from the session's list, if we're still tracking it.
    if (sessionConnections.ContainsKey(sessionClaim.Value))
        sessionConnections[sessionClaim.Value].Remove(Context.ConnectionId);

    return base.OnDisconnected();
}

Now at this point we can map all active connections to their various sessions. When we create a new session from a user logging in we want to notify all active connections that the new session was created. This notification will allow us to kill the new session if necessary. Here’s where we implement that NotificationHub.RegisterSession method.

internal static void RegisterSession(string sessionId, string user)
{
    UserSessions[sessionId] = user;
    sessionConnections[sessionId] = new List<string>();

    var message = "You logged in to another session";

    // Get a hub context so we can call clients from outside the hub pipeline.
    var context = GlobalHost.ConnectionManager.GetHubContext<NotificationHub>();

    // Find every session belonging to this user. The brand new session is included,
    // but its connection list is still empty so it won't be notified.
    var userCurrentSessions = UserSessions.Where(u => u.Value == user);

    foreach (var s in userCurrentSessions)
    {
        var connectionsTiedToSession = sessionConnections.Where(c => c.Key == s.Key).SelectMany(c => c.Value);

        foreach (var connectionId in connectionsTiedToSession)
            context.Clients.Client(connectionId).sessionRegistered(message, sessionId);
    }
}

This method will create a new session entry for us and look up all other sessions for the user. It will then loop through all connections for the sessions and notify the user that a new session was created.

So far so good, right? This takes care of almost all of the server-side code. Next we'll jump to the client-side JavaScript and implement that notification.

When the server calls the client to notify the user about a new session we want to write the message out to screen and give the user the option of killing the session.

HTML:

<div class="notification"></div>

JavaScript:

var notifier = $.connection.notificationHub;

notifier.client.sessionRegistered = function (message, session) {
    $('.notification').text(message);

    $('.notification').append('<a class="killSession" href="#">End Session</a>');
    $('.notification').append('<a class="dismissNotification" href="#">Dismiss</a>');

    $('.killSession').click(function () {
        notifier.server.killSession(session);
        $('.notification').hide(500);
    });

    $('.dismissNotification').click(function () {
        $('.notification').hide(500);
    });
};

// Client-side handlers must be registered before the connection starts.
$.connection.hub.start();

On session registration the notification div text is set to the message and a link is created to allow the user to kill the session. The click event calls the NotificationHub.KillSession method.

Back in the hub we implement the KillSession method to remove the session from the list of active sessions.

public void KillSession(string session)
{
    // Ignore unknown or already-killed sessions.
    if (!sessionConnections.ContainsKey(session))
        return;

    var connections = sessionConnections[session].ToList();

    sessionConnections.Remove(session);
    UserSessions.Remove(session);

    // Tell every page attached to the dead session that it's over.
    foreach (var c in connections)
    {
        Clients.Client(c).sessionEnded();
    }
}

Once the session is dead a call is made back to the clients associated with that session to notify the page that the session has ended. Back in the JavaScript we can hook into the sessionEnded function and reload the page.

notifier.client.sessionEnded = function () {
    location.reload();
};

Reloading the page will cause the browser to make a request to the server and the server will call our custom SessionSecurityTokenHandler where the ValidateSession method will throw an exception. Once this exception is thrown the request is stopped and all subsequent requests within the same session will have the same fate. The dead session should redirect to your login page.
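How that exception becomes a redirect depends on your error handling; one hedged option (the page name is illustrative, and note ASP.NET may wrap the original exception in an HttpUnhandledException) is a handler in Global.asax:

protected void Application_Error(object sender, EventArgs e)
{
    // Unwrap the exception ASP.NET hands us and check for a dead session.
    Exception ex = Server.GetLastError();
    if (ex is HttpUnhandledException && ex.InnerException != null)
        ex = ex.InnerException;

    if (ex is SecurityTokenExpiredException || ex is SecurityTokenException)
    {
        Server.ClearError();
        Response.Redirect("~/Login.aspx");
    }
}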

To test this out, all we have to do is load up the application and log in. Then we create a new session by opening a new browser and logging in, e.g. switching from IE to Chrome, or within IE opening a new session via File > New Session. The first browser should notify us. If we click the End Session link, the other browser's session is killed, and on its next request it gets redirected to the login page.

Pretty cool, huh?

Guide to Claims-Based Identity Second Edition

by Steve Syfuhs / December 13, 2011 10:28 AM

It looks like the Guide to Claims-Based Identity and Access Control has been released as a second edition!

Take a look at the list of authors: if you want a list of experts on security, look no further. These guys are some of the best in the industry and are my go-to resources on Claims.

Strongly Typed Claims

by Steve Syfuhs / November 12, 2011 04:03 PM

Sometimes it's a pain in the neck working with Claims. A lot of the time you need to look for a particular claim, and that usually means looping through the claims collection and parsing the value into a particular type.

This little dance is the trade-off for having such a simple interface to a potentially arbitrary collection of claims. Most of the time this works, but every once in a while you need to create a basic user object that contains some strongly typed properties. You could build up a basic object like:

public class User
{
    public string UserName { get; set; }

    public string EmailAddress { get; set; }

    public string Department { get; set; }

    public List<string> Roles { get; set; }
}

This would require you to intercept the IClaimsIdentity object and search through the claims collection, setting each property manually, whenever you wanted access to the data. This can get tiresome and is error prone.
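To make the pain concrete, the manual approach looks something like this (a sketch; which claim maps to which property is up to you):

var identity = Thread.CurrentPrincipal.Identity as IClaimsIdentity;

var user = new User
{
    UserName = identity.Claims
        .Where(c => c.ClaimType == ClaimTypes.Name)
        .Select(c => c.Value)
        .FirstOrDefault(),
    EmailAddress = identity.Claims
        .Where(c => c.ClaimType == ClaimTypes.Email)
        .Select(c => c.Value)
        .FirstOrDefault(),
    Roles = identity.Claims
        .Where(c => c.ClaimType == ClaimTypes.Role)
        .Select(c => c.Value)
        .ToList()
};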

I think I've come up with a relatively complete solution to this problem. Basically it works by creating a custom IClaimsIdentity class that sets a User property through reflection. You can then access the user through Thread.CurrentPrincipal.Identity like this:

TypedClaimsIdentity ident = Thread.CurrentPrincipal.Identity as TypedClaimsIdentity;
string email = ident.User.EmailAddress.Value;
var userRoles = ident.User.Roles;

Once you've defined the particular types and their associated claims, the particular values will be set through reflection. So to declare your user properties, create a class like this:

public class MyTypedClaimsUser : TypedClaims
{
    public MyTypedClaimsUser()
    {
        this.Name = new TypedClaim<string>();
        this.EmailAddress = new TypedClaim<string>();
        this.Roles = new List<TypedClaim<string>>();
        this.Expiration = new TypedClaim<DateTime>();
        this.AuthenticationMethod = new TypedClaim<string>();
        this.GroupSid = new TypedClaim<string>();
    }

    [TypedClaim(ClaimTypes.Name, false)]
    public TypedClaim<string> Name { get; private set; }

    [TypedClaim(ClaimTypes.Email, false)]
    public TypedClaim<string> EmailAddress { get; private set; }

    // Roles can appear multiple times, hence the list and the 'true' flag.
    [TypedClaim(ClaimTypes.Role, true)]
    public List<TypedClaim<string>> Roles { get; private set; }

    [TypedClaim(ClaimTypes.Expiration, false)]
    public TypedClaim<DateTime> Expiration { get; private set; }

    [TypedClaim(ClaimTypes.AuthenticationMethod, false)]
    public TypedClaim<string> AuthenticationMethod { get; private set; }

    [TypedClaim(ClaimTypes.GroupSid, false)]
    public TypedClaim<string> GroupSid { get; private set; }
}

Each property must be defined a certain way: it has to be decorated with the TypedClaimAttribute. This attribute helps the reflection code associate the property with the expected claim, so the Name property will always map to the ClaimTypes.Name claim type, which is the http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name claim. The attribute also warns the code when a claim type is likely to have multiple values, like the Role claim.
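The post doesn't show TypedClaimAttribute itself; a minimal sketch consistent with the usage above (the property and parameter names are assumptions) could be:

[AttributeUsage(AttributeTargets.Property)]
public class TypedClaimAttribute : Attribute
{
    public string ClaimType { get; private set; }

    // True when the claim type can appear multiple times, like roles.
    public bool ExpectMultipleValues { get; private set; }

    public TypedClaimAttribute(string claimType, bool expectMultipleValues)
    {
        ClaimType = claimType;
        ExpectMultipleValues = expectMultipleValues;
    }
}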

Each property is also of a particular type: TypedClaim<T>. In theory I could have just used simple types like strings, but by going this route you get access to claim metadata like Name.ClaimType or Name.Issuer. TypedClaim<T> inherits from Claim.
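TypedClaim<T> ships in the download rather than in the post; a hedged sketch of its shape, inferred from how it's constructed later, might look like:

public class TypedClaim<T> : Claim
{
    // Placeholder instance until Update() populates it; the real class may differ.
    public TypedClaim() : base(string.Empty, string.Empty) { }

    // Matches the Activator.CreateInstance(constructed, claim.ClaimType, val) call shown below.
    public TypedClaim(string claimType, T value)
        : base(claimType, value.ToString())
    {
        Value = value;
    }

    // Hides Claim.Value (a string) with a strongly typed version.
    public new T Value { get; set; }
}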

So how does this all work? Well first you need to be able to add the User object into the Identity object. This is done by creating a custom IClaimsIdentity class:

[Serializable]
public class TypedClaimsIdentity : IClaimsIdentity
{
    public TypedClaimsIdentity(IClaimsIdentity identity)
    {
        user = new MyTypedClaimsUser();

        if (identity.Claims != null)
            this.claims = identity.Claims;
        else
            claims = new ClaimCollection(identity);

        this.Actor = identity.Actor;
        this.AuthenticationType = identity.AuthenticationType;

        Update();
    }

    private void Update()
    {
        user.Update(this.claims);
    }

    private MyTypedClaimsUser user;

    public MyTypedClaimsUser User
    {
        get
        {
            Update();
            return user;
        }
    }

    private ClaimCollection claims;

    public ClaimCollection Claims
    {
        get
        {
            Update();
            return claims;
        }
    }

    public IClaimsIdentity Actor { get; set; }

    public SecurityToken BootstrapToken { get; set; }

    public IClaimsIdentity Copy()
    {
        ClaimsIdentity claimsIdentity = new ClaimsIdentity(this.AuthenticationType);

        if (this.Claims != null)
        {
            claimsIdentity.Claims.AddRange(claims.CopyWithSubject(claimsIdentity));
        }

        claimsIdentity.Label = this.Label;
        claimsIdentity.NameClaimType = this.NameClaimType;
        claimsIdentity.RoleClaimType = this.RoleClaimType;
        claimsIdentity.BootstrapToken = this.BootstrapToken;

        return claimsIdentity;
    }

    public string Label { get; set; }

    public string NameClaimType { get; set; }

    public string RoleClaimType { get; set; }

    public string AuthenticationType { get; private set; }

    public bool IsAuthenticated { get { return claims.Count > 0; } }

    public string Name { get { return User.Name.Value; } }
}

There isn't anything spectacularly interesting about this class. The important part is the constructor. It only accepts an IClaimsIdentity object because it's designed as a way to wrap around an already created identity. It then updates the User object through Update().

The User object is updated through reflection. The Update() method calls User.Update(…), which is defined within the base class of MyTypedClaimsUser. This calls into a helper class that looks through the User object and finds any properties that are decorated with the TypedClaimAttribute.

EDIT: When it comes to reflection, there is always a better way to do something. My original code was mostly a PoC and didn't make use of existing .NET-isms. I've edited this bit to include the code changes.

The helper class was originally a bit clunky: all it did was look through the properties, if/else through their types, and parse the values:

if (type == typeof(string))
{
    return new TypedClaim<string>(selectedClaims.First()) { Value = selectedClaims.First().Value };
}

This really isn't the smartest way to do it because .NET already contains some pretty strong conversion functions; specifically Convert.ChangeType(value, type).

Going this route requires generating the proper TypedClaim<T>, though. Many thanks to Anna Lear for pointing out the MakeGenericType(…) method, which lets you take a type and convert it to a generic type with the specified type parameters. That way I could dynamically pass a type into a generic without hardcoding anything. This allows the TypedClaim<T> to be created at runtime without having to code for each particular type parameter. You end up with basic logic along the lines of:

// Build the closed generic type, e.g. TypedClaim<string> or TypedClaim<DateTime>.
Type constructed = typeof(TypedClaim<>).MakeGenericType(new Type[] { genericParamType });

// Convert the claim's string value to the target type.
object val = Convert.ChangeType(claim.Value, genericParamType);

// Instantiate the TypedClaim<T> with the claim type and converted value.
return Activator.CreateInstance(constructed, claim.ClaimType, val);

The Activator.CreateInstance method constructs an instance of the particular type, which eventually gets passed into PropertyInfo.SetValue(…).
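Putting those pieces together, the scan-and-set loop might look roughly like this (a sketch under assumptions; multi-valued list properties like Roles are omitted for brevity, and the real helper in the download differs):

private static void ApplyClaims(object user, IEnumerable<Claim> claims)
{
    foreach (PropertyInfo prop in user.GetType().GetProperties())
    {
        // Only touch properties decorated with TypedClaimAttribute.
        var attr = prop.GetCustomAttributes(typeof(TypedClaimAttribute), false)
                       .Cast<TypedClaimAttribute>()
                       .FirstOrDefault();

        if (attr == null)
            continue;

        var claim = claims.FirstOrDefault(c => c.ClaimType == attr.ClaimType);

        if (claim == null)
            continue;

        // For TypedClaim<T>, pull T out of the property type and build the instance.
        Type genericParamType = prop.PropertyType.GetGenericArguments()[0];
        Type constructed = typeof(TypedClaim<>).MakeGenericType(genericParamType);
        object val = Convert.ChangeType(claim.Value, genericParamType);

        prop.SetValue(user, Activator.CreateInstance(constructed, claim.ClaimType, val), null);
    }
}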

Finally, it's time to integrate this into your web application. The best location is probably going to be through a custom ClaimsAuthenticationManager. It works like this:

public class TypedClaimsAuthenticationManager : ClaimsAuthenticationManager
{
    public override IClaimsPrincipal Authenticate(string resourceName, IClaimsPrincipal incomingPrincipal)
    {
        if (!incomingPrincipal.Identity.IsAuthenticated)
            return base.Authenticate(resourceName, incomingPrincipal);

        for (int i = 0; i < incomingPrincipal.Identities.Count; i++)
            incomingPrincipal.Identities[i] = new TypedClaimsIdentity(incomingPrincipal.Identities[i]);

        return base.Authenticate(resourceName, incomingPrincipal);
    }
}

Then to tell WIF about this new ClaimsAuthenticationManager you need to make a change to the web.config. Within the microsoft.identityModel/service section, add this:

<claimsAuthenticationManager type="Syfuhs.IdentityModel.TypedClaimsAuthenticationManager, Syfuhs.IdentityModel" />

By dynamically setting the values of the user object, you can create a fairly robust identity model for your application.

You can download the updated code here: typedclaimsv2.zip (6.21 kb)

You can download the original code here: typedclaims.zip (5.61 kb)

The Importance of Elevating Privilege

by Steve Syfuhs / August 28, 2011 04:00 PM

The biggest detractor to Single Sign On is the same thing that makes it so appealing: you only need to prove your identity once. This scares the hell out of some people, because if you can compromise a user's session in one application it's possible to affect other applications. Congratulations: checking your Facebook profile just caused your online store to delete all its orders. Let's break that attack down a little.

  • You just signed into Facebook and checked your [insert something to check here] from some friend. That contained a link to something malicious.
  • You click the link, and it opens a page that contains an iframe. The iframe points to a URL for your administration portal of the online store with a couple parameters in the query string telling the store to delete all the incoming orders.
  • At this point you don't have a session with the administration portal and in a pre-SSO world it would redirect you to a login page. This would stop most attacks because either a) the iframe is too small to show the page, or b) (hopefully) the user is smart enough to realize that a link from a friend on Facebook shouldn't redirect you to your online store's administration portal. In a post-SSO world, the portal would redirect you to the STS of choice and that STS already has you signed in (imagine what else could happen in this situation if you were using Facebook as your identity provider).
  • So you've signed into the STS already, and it doesn't prompt for credentials. It redirects you to the administration page you were originally redirected away from, but this time with a session. The page is pulled up, the query string parameters are parsed, and the orders are deleted.

There are certainly ways to stop this, and parts of the attack are admittedly a bit trivial. For instance you could pop up an OK/Cancel dialog asking "are you sure you want to delete these?", but for the sake of discussion let's think about this at a high level.

The biggest problem with this scenario is that deleting orders doesn't require anything more than being signed in. By default you had the highest privileges available.

This problem is similar to the one many users of Windows XP had. They were, by default, running with administrative privileges. This led to a bunch of problems because any running application could do whatever it pleased on the system. Malware was rampant, and worse, users were doing all-around stupid things because they didn't know what they were doing but had the permissions necessary to do it.

The solution to that problem is to give users non-administrative privileges by default, and when something requires higher privileges, make them re-authenticate and temporarily run with the higher privileges. The key word is temporarily. However, security lost that argument while Microsoft was developing Windows Vista, and the compromise was User Account Control (UAC). By default a user is an administrator, but their user token is a stripped-down administrator token with only non-administrative privileges. In order to take full advantage of the administrator token, a user has to elevate and request the full token temporarily. This is a stop-gap solution though, because the administrative token still exists, so it's theoretically possible to circumvent UAC. It also doesn't require you to re-authenticate; you just have to approve the elevation.

As more and more things are moving to the web it's important that we don't lose control over privileges. It's still very important that you don't have administrative privileges by default because, frankly, you probably don't need them all the time.

Some web applications already require elevation. For instance, consider online banking sites. When I sign in I have a default set of privileges: I can view my accounts and transfer money between my accounts. Anything else requires that I re-authenticate myself by entering a private PIN. So, for instance, I cannot transfer money to an account that doesn't belong to me without proving that it really is me making the transfer.

There are a couple of ways you can design a web application that requires privilege elevation. Let's take a look at how to do it with Claims-Based Authentication and WIF.

First off, let's look at the protocol. Out of the box WIF supports the WS-Federation protocol. The passive version of the protocol supports a query parameter called wauth, which defines how authentication should happen. The values for it are mostly specific to each STS; however, there are a few well-defined values that the SAML protocol specifies. These values are passed to the STS to tell it to authenticate using a particular method. Here are the ones most often used:

Authentication Type/Credential    wauth Value
Password                          urn:oasis:names:tc:SAML:1.0:am:password
Kerberos                          urn:ietf:rfc:1510
TLS                               urn:ietf:rfc:2246
PKI/X509                          urn:oasis:names:tc:SAML:1.0:am:X509-PKI
Default                           urn:oasis:names:tc:SAML:1.0:am:unspecified

When you pass one of these values to the STS during the sign-in request, the STS should request that particular type of credential. The wauth parameter also supports arbitrary values, so you can use whatever you like. Therefore we can create a value that tells the STS we want to re-authenticate because of an elevation request.

All you have to do is redirect to the STS with the wauth parameter:

https://yoursts/authenticate?wa=wsignin1.0&wtrealm=uri:myrp&wauth=urn:super:secure:elevation:method
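If you'd rather not build that query string by hand, WIF's SignInRequestMessage will do it for you; a small sketch using the same values as above:

// AuthenticationType maps to the wauth query string parameter.
var signIn = new SignInRequestMessage(new Uri("https://yoursts/authenticate"), "uri:myrp")
{
    AuthenticationType = "urn:super:secure:elevation:method"
};

Response.Redirect(signIn.WriteQueryString());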

Once the user has re-authenticated you need to tell the relying party somehow. This is where the Authentication Method claim comes in handy:

http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationmethod

Just add the claim to the output identity:

protected override IClaimsIdentity GetOutputClaimsIdentity(IClaimsPrincipal principal, RequestSecurityToken request, Scope scope)
{
    IClaimsIdentity ident = principal.Identity as IClaimsIdentity;

    // Record how the user authenticated so the RP can check for elevation.
    ident.Claims.Add(new Claim(ClaimTypes.AuthenticationMethod, "urn:super:secure:elevation:method"));

    // finish filling claims...

    return ident;
}

At that point the relying party can then check to see whether the method satisfies the request. You could write an extension method like:

public static bool IsElevated(this IClaimsPrincipal principal)
{
    return principal.Identity.AuthenticationType == "urn:super:secure:elevation:method";
}

And then have a bit of code to check:

var p = Thread.CurrentPrincipal as IClaimsPrincipal;
if (p != null && p.IsElevated())
{
    DoSomethingRequiringElevation();
}

This satisfies half the requirements for elevating privilege. We need to make it so the user is only elevated for a short period of time. We can do this in an event handler after the token is received by the RP.  In Global.asax we could do something like:

void Application_Start(object sender, EventArgs e)
{
    FederatedAuthentication.SessionAuthenticationModule.SessionSecurityTokenReceived
        += new EventHandler<SessionSecurityTokenReceivedEventArgs>(SessionAuthenticationModule_SessionSecurityTokenReceived);
}

void SessionAuthenticationModule_SessionSecurityTokenReceived(object sender, SessionSecurityTokenReceivedEventArgs e)
{
    if (e.SessionToken.ClaimsPrincipal.IsElevated())
    {
        // Reissue the session token with a 15 minute lifetime.
        SessionSecurityToken token = new SessionSecurityToken(
            e.SessionToken.ClaimsPrincipal,
            e.SessionToken.Context,
            e.SessionToken.ValidFrom,
            e.SessionToken.ValidFrom.AddMinutes(15));

        e.SessionToken = token;
        e.ReissueCookie = true; // write the shortened token back to the cookie
    }
}

This will check to see if the incoming token has been elevated, and if it has, set the lifetime of the token to 15 minutes.

There are other places where this could occur like within the STS itself, however this value may need to be independent of the STS.

As I said earlier, as more and more things are moving to the web it's important that we don't lose control of privileges. By requiring certain types of authentication in our relying parties, we can easily support elevation by requiring the STS to re-authenticate.

Adjusting the Home Realm Discovery page in ADFS to support Email Addresses

by Steve Syfuhs / July 12, 2011 04:00 PM

Over on the Geneva forums a question was asked:

Does anyone have an example of how to change the HomeRealmDiscovery Page in ADFSv2 to accept an e-mail address in a text field and based upon that (actually the domain suffix) select the correct Claims/Identity Provider?

It's pretty easy to modify the HomeRealmDiscovery page, so I thought I'd give it a go.

Based on the question, two things need to be known: the email address and the home realm URI.  Then we need to translate the email address to a home realm URI and pass it on to ADFS.

This could be done a couple of ways: by keeping a list of email addresses and their related home realms, or a list of email domains and their related home realms. For the sake of this being an example, let's do both.

I've created a simple SQL database with three tables:

[Diagram: the three-table schema, with EmailAddress and Domain tables each pointing to a home realm URI]

Each entry in the EmailAddress and Domain tables has a pointer to the home realm URI (you can find the schema in the zip file below).

Then I created a new ADFS web project and added a new entity model to it:

[Screenshot: the entity model generated from the database]

From there I modified the HomeRealmDiscovery page to do the check:

//------------------------------------------------------------
// Copyright (c) Microsoft Corporation.  All rights reserved.
//------------------------------------------------------------

using System;

using Microsoft.IdentityServer.Web.Configuration;
using Microsoft.IdentityServer.Web.UI;
using AdfsHomeRealm.Data;
using System.Linq;

public partial class HomeRealmDiscovery : Microsoft.IdentityServer.Web.UI.HomeRealmDiscoveryPage
{
    protected void Page_Init(object sender, EventArgs e)
    {
    }

    protected void PassiveSignInButton_Click(object sender, EventArgs e)
    {
        string email = txtEmail.Text;

        if (string.IsNullOrWhiteSpace(email))
        {
            SetError("Please enter an email address");
            return;
        }

        try
        {
            SelectHomeRealm(FindHomeRealmByEmail(email));
        }
        catch (ApplicationException)
        {
            SetError("Cannot find home realm based on email address");
        }
    }

    private string FindHomeRealmByEmail(string email)
    {
        using (AdfsHomeRealmDiscoveryEntities en = new AdfsHomeRealmDiscoveryEntities())
        {
            var emailRealms = from e in en.EmailAddresses where e.EmailAddress1.Equals(email) select e;

            if (emailRealms.Any()) // email address exists
                return emailRealms.First().HomeRealm.HomeRealmUri;

            // email address does not exist
            string domain = ParseDomain(email);

            var domainRealms = from d in en.Domains where d.DomainAddress.Equals(domain) select d;

            if (domainRealms.Any()) // domain exists
                return domainRealms.First().HomeRealm.HomeRealmUri;

            // neither email nor domain exist
            throw new ApplicationException();
        }
    }

    private string ParseDomain(string email)
    {
        if (!email.Contains("@"))
            return email;

        return email.Substring(email.IndexOf("@") + 1);
    }

    private void SetError(string p)
    {
        lblError.Text = p;
    }
}

If you compare this to the original code, there were some changes. I removed the code that loaded the original home realm drop-down list, and the code that chose the home realm based on the drop-down's selected value.

You can find my code here: http://www.syfuhs.net/AdfsHomeRealm.zip

Creating a Claims Provider Trust in ADFS 2

by Steve Syfuhs / April 25, 2011 04:00 PM

One of the cornerstones of ADFS is the concept of federation (one would hope anyway, given the name), which is defined as a user's authentication process across applications, organizations, or companies.  Or simply put, my company Contoso is a partner with Fabrikam.  Fabrikam employees need access to one of my applications, so we create a federated trust between my application and their user store, so they can log into my application using their internal Active Directory.  In this case, via ADFS.

So let's break this down into manageable bits.

First we have our application.  This application is a relying party to my ADFS instance.  By now hopefully this is relatively routine.

Next we have the trust between our ADFS and our partner company's STS. If the company had ADFS installed, we could just create a trust between the two, but let's go one step further and give anyone with a Live ID access to this application. Therefore we need to create a trust between the Live ID STS and our ADFS server.

This is easier than most people may think.  We can just use Windows Azure Access Control Services (v2).  ACS can be set up very easily to federate with Live ID (or Google, Yahoo, Facebook, etc), so we just need to federate with ACS, and ACS needs to federate with Live ID.

Creating a trust between ADFS and ACS requires two parts.  First we need to tell ADFS about ACS, and second we need to tell ACS about ADFS.

To explain a bit further, we need to make ACS a Claims Provider to ADFS, so ADFS can call on ACS for authentication. Then we need to make ADFS a relying party to ACS, so ADFS can consume the token from ACS. Or rather, so ACS doesn't freak out when it sees a request for a token for ADFS.

This may seem a bit confusing at first, but it will become clearer when we walk through the process.

First we need to get the Federation Metadata for our ACS instance.  In this case I've created an ACS namespace called "syfuhs2".  The metadata can be found here: https://syfuhs2.accesscontrol.windows.net/FederationMetadata/2007-06/FederationMetadata.xml.

Next I need to create a relying party in ACS, telling it about ADFS.  To do that browse to the Relying party applications section within the ACS management portal and create a new relying party:

[Screenshot: adding a new relying party application in the ACS management portal]

Because ADFS natively supports trusts, I can just pass in the metadata for ADFS to ACS, and it will pull out the requisite pieces:

[Screenshot: configuring the relying party by importing the ADFS federation metadata]

Once that is saved you can create a rule for the transform under the Rule Groups section:

[Screenshot: the Rule Groups section in the ACS management portal]

For this I'm just going to generate a default set of rules.

[Screenshot: generating a default set of rules]

This should take care of the ACS side of things.  Next we move into ADFS.

Within ADFS we want to browse to the Claims Provider Trusts section:

[Screenshot: the Claims Provider Trusts section in the ADFS management console]

And then we right-click > Add Claims Provider Trust

This should open a Wizard:

[Screenshot: the Add Claims Provider Trust wizard]

Follow through the wizard and fill in the metadata field:

[Screenshot: entering the ACS federation metadata URL in the wizard]

Having Token Services that properly generate metadata is a godsend.  Just sayin'.

Once the wizard has finished, it will open a Claims Transform wizard for incoming claims.  This is just a set of claims rules that get applied to any tokens received by ADFS.  In other words, what should happen to the claims within the token we receive from ACS?

In this case I'm just going to pass any claims through:

[Screenshot: a pass-through claim rule]

In practice, you should write a rule that filters out any extraneous claims that you don't necessarily trust.  For instance, if I were to receive a role claim with a value "Administrator" I may not want to let it through because that could give administrative access to the user, even though it wasn't explicitly set by someone managing the application.
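For example, a rule in the ADFS claim rule language that passes through only the name claim might look like this (illustrative; incoming claims not issued by any rule are dropped):

@RuleName = "Pass through name only"
c:[Type == "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name"]
 => issue(claim = c);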

Once all is said and done, you can browse to the RP, get redirected for authentication, and be presented with this screen:

[Screenshot: the home realm selection screen]

After you've made your first selection, a cookie will be generated and you won't be redirected to this screen again.  If you select ACS, you then get redirected to the ACS Home Realm selection page (or directly to Live ID if you only have Live ID).

Windows Azure ACS v2 Mix Announcement

by Steve Syfuhs / April 11, 2011 04:00 PM

Part of the Mix11 announcement was that ACS v2 was released to production.  It was actually released last Thursday but we were told to keep as quiet as possible so they could announce it at Mix.  Here is the marketing speak:

The new ACS includes a plethora of new features that customers and partners have been asking with enthusiasm: single sign on from business and web identity providers, easy integration with our development tools, support for both enterprise-grade and web friendly protocols, out of the box integration with Facebook, Windows Live ID, Google and Yahoo, and many others.

Those features respond to such fundamental needs in modern cloud based systems that ACS has already become a key asset in many of our own offerings.

There is a substantial difference between v1 and v2.  In v2, we now see:

Federation provider and Security Token Service (FINALLY!)

  • Out of box federation with Active Directory Federation Services 2.0, Windows Live ID, Google, Yahoo, Facebook

New authorization scenarios

  • Delegation using OAuth 2.0

Improved developer experience

  • New web-based management portal
  • Fully programmatic management using OData
  • Works with Windows Identity Foundation

Additional protocol support

  • WS-Federation, WS-Trust, OpenID 2.0, OAuth 2.0 (Draft 13)

That's a lot of stuff to keep up with, but luckily Microsoft has made it easier for us by giving us a whole whack of content to learn from.

First off, all of the training kits have now been updated to support v2.

Second, there are a bunch of new Channel9 videos just released.

Third, and finally, the Claims Based Identity and Access Control Guide was updated!

Talk about a bunch of awesome stuff.

GoodBye CardSpace; Hello U-Prove

by Steve Syfuhs / February 14, 2011 04:00 PM

Other possible titles:

  • So Long and Thanks for all the Identity
  • Goodbye awesome technology; Hello Awesomer Technology
  • CardSpace? What’s CardSpace?

Over on the Claims Based Identity Blog they made an announcement that they have stopped development of CardSpace v2.  CardSpace was an excellent technology, but nobody used it.  Some of us saw the writing on the wall when Microsoft paused development last year, and kept quiet about why.  For better or for worse, Microsoft stopped development and moved on to a different technology affectionately called U-Prove.

U-Prove is an advanced cryptographic technology that, combined with existing standards-based identity solutions, overcomes this long-standing dilemma between identity assurance and privacy. This unlocks a broad range of scenarios that have historically been out of the reach of both the private and public sectors - cases where both verified identity information and privacy are required.

So what exactly does this mean?  Central to U-Prove is something called an Agent:

Specifically, the Agent provides a mechanism to separate the retrieval of identity information from trusted organizations from the release of this information to destination sites. The underlying mechanisms help prevent the issuing organizations from tracking where or when this information is used, and to help prevent different destination sites from trivially linking users’ actions together.

Alright, what does that really mean?

Short answer: it’s kind of like CardSpace, except you—the developer—manage the application that controls the flow of claims from IdP to RP.

The goal is to enable stronger control of the release of private data to relying parties.

For more information check out the FAQ section on Connect.

The Problem with Claims-Based Authentication

by Steve Syfuhs / January 31, 2011 04:00 PM

Homer Simpson was once quoted as saying “To alcohol! The cause of, and solution to, all of life's problems”.  I can’t help but borrow from it and say that Claims-Based Authentication is the cause of, and solution to, most problems with identity consumption in applications.

When people first come across Claims-Based Authentication there are two extremes of responses:

  • Total amazement at the architectural simplicity and brilliance
  • Fear and hatred of the idea (don’t you dare take away my control of the passwords)

Each has a valid truth to them, but over time you realize all the problems sit somewhere between both extremes.  It’s this middle ground where people run into the biggest problems. 

Over the last few months quite a few people have been talking about the pains of OpenID/OAuth, which, when you get right down to it, is CBA. There are some differences, such as terminology and implementation, but both follow the Trusted Third Party Authentication model, and that's really what CBA is all about.

Rob Conery wrote what some people now see as an infamous post on why he hates OpenID.  He thinks it’s a nightmare for various reasons.  The basic list is as follows:

  • As a customer, since you can have multiple OpenID providers that the relying party doesn’t necessarily know about, how do you know which one you originally used to setup an account?
  • If a customer logs in with the wrong OpenID, they can’t access whatever they’ve paid for.  This pisses off said customer.
  • If your customer used the wrong OpenID, how do you, as the business owner, fix that problem? 
    • Is it worth fixing? 
    • Is it worth the effort of writing code to make this a simpler process?
  • “I'll save you the grumbling rant, but coding up Open ID stuff is utterly mind-numbing frustration”.  This says it all.
  • Since you don't want to write the code, you get someone else to do it. You find a SaaS provider. The provider WILL go down. Laws of averages, Murphy, and simple irony will cause it to go down.
  • The standard is dying.  Facebook, Google, Microsoft, Twitter, and Joe-Blow all have their own particular ways of implementing the standard.  Do you really want to keep up with that?
  • Dealing with all of this hassle means you aren’t spending your time creating content, which does nothing for the customer.

The end result is that he is looking to drop support, and bring back traditional authentication models.  E.g. storing usernames and passwords in a database that you control.

Following the Conery kerfuffle, 37signals made an announcement that they were going to drop OpenID support for their products.  They had a pretty succinct reason for doing so:

Fewer than 1% of all 37signals users are currently using OpenID. After consulting with a fair share of them, it seems that most were doing so only because that used to be the only way to get single sign-on for our applications.

I don’t know how many customers they have, but 1% is nowhere near a high enough number to justify keeping something alive in any case.

So we have a problem now, don’t we?  On paper Claims-Based Authentication is awesome, but in practice it’s a pain in the neck.  Well, I suppose that’s the case with most technologies. 

I think one of the problems with implementations of new technologies is the lack of guidance. Trusted Third Party authentication isn't really all that new. Kerberos does it, and Kerberos has been around for decades. OpenID, OAuth, and WS-Trust/WS-Federation, on the other hand, haven't been around all that long. Given that, I have a bit of guidance that I've learned from the history of Kerberos.

First: Don’t trust random providers.

The biggest problem with OpenID is what's known as the NASCAR problem. This is another way of referring to Rob's first problem: how do you know which provider to use? Most people recognize logos, so show them a bunch of logos and hopefully they will pick the one they used before. Hoping your customer chooses the right one every time is like hoping you can hit a bullseye from 1000 yards, blindfolded. It could happen. It won't. But it could.

The solution to this is simple: do not trust every provider.  Have a select few providers you will accept, and have them sufficiently distinguishable.  My bank as a provider is going to be WAY different than using Google as a provider.  At least, I would hope that’s the case.

Second: Don’t let the user log in with the wrong account.

While you are at it, try moving the oceans using this shot glass. Seriously though, if you follow the first step, this one is a by-product. Think about it. Would a customer be more likely to log into their ISP billing system with their Google account, or their bank's account? That may be a bad example in practice because I would never use my bank as a provider, but it's a great example of being sufficiently distinguishable. You will always have customers that choose wrong, but the harder you make it for them to choose the wrong thing, the closer you are to hitting that bullseye.

Third: Use Frameworks.  Don’t roll your own.

One of the most important axioms in computer security is don’t roll your own [framework/authn/authz/crypto/etc].  Seriously.  Stop it.  You WILL do it wrong.  I will too.  Use a trusted OpenID/OpenAuth framework, or use WIF.

Fourth: Choose a standard that won't change on you at the whim of a vendor.

WS-Trust/WS-Federation and SAML are great examples of standards that don't change willy-nilly. OpenID/OAuth are not.

Fifth: Adopt a provider that already has a large user base, and then keep it simple.

This is an extension of the first rule. Pick a provider that already has a massive number of users. Live ID is a great example. Google Accounts is another. Twitter or Facebook work too. If you are going to choose which providers to accept, make sure you pick the ones people actually use. This may seem obvious, but just remember it when you are presented with Joe's Fish and Chips and Federated Online ID provider.

Finally: Perhaps the biggest thing I can recommend is to keep it simple.  Start small.  Know your providers, and trust your providers.

Keep in mind that everything I’ve said above does not pertain to any particular technology, but of any technology that uses a Trusted Third Party Authentication model.

It is really easy to get wide-eyed and believe you can develop a working system that accepts every form of identification under the sun, all the while keeping it manageable.  Don’t.  Keep it simple and start small.

Claims, MEF, and Parallelization, Oh My

by Steve Syfuhs / December 29, 2010 04:00 PM

One of the projects I’ve been working on for the last couple months has a requirement to aggregate a set of claims from multiple data sources for an identity and return the collection.  It all seems pretty straightforward as long as you know what the data sources are at development time as well as how you want to transform the data to claims. 

In the real world though, chances are you will need to modify how that transformation happens or modify the data sources in some way.  There are lots of ways this can be accomplished, and I’m going to look at how you can do it with the Managed Extensibility Framework (MEF).

Whenever I think of MEF, this is the best way I can describe how it works:

[Diagram: MEF as the magic in the middle]

MEF being the magical part.  In actual fact, it is pretty straightforward how the underlying pieces work, but here is the sales bit:

Application requirements change frequently and software is constantly evolving. As a result, such applications often become monolithic making it difficult to add new functionality. The Managed Extensibility Framework (MEF) is a new library in .NET Framework 4 and Silverlight 4 that addresses this problem by simplifying the design of extensible applications and components.

The architecture of it can be explained on the Codeplex site:

[Diagram: MEF architecture overview from the CodePlex site]

The composition container is designed to discover ComposableParts that have Export attributes, and to assign these Parts to an object with an Import attribute.

Think of it this way (this is just one possible way it could work).  Let’s say I have a bunch of classes that are plugins for some system.  I will attach an Export attribute to each of those classes.  Then within the system itself I have a class that manages these plugins.  That class will contain an object that is a collection of the plugin class type, and it will have an attribute of ImportMany.  Within this manager class is some code that will discover the Exported classes, and generate a collection of them instantiated.  You can then iterate through the collection and do something with those plugins.  Some code might help.

First, we need something to tie the Import/Export attributes together.  For a plugin-type situation I prefer to use an interface.

namespace PluginInterfaces
{
    public interface IPlugin
    {
        string PlugInName { get; set; }
    }
}

Then we need to create a plugin.

using PluginInterfaces;

namespace SomePlugin
{
    class MyAwesomePlugin : IPlugin
    {
        public string PlugInName
        {
            get
            {
                return "Steve is Awesome!";
            }
            set { }
        }
    }
}

Then we need to actually Export the plugin.  Notice the namespace addition.  The namespace can be found in the System.ComponentModel.Composition assembly in .NET 4.

using PluginInterfaces;
using System.ComponentModel.Composition;

namespace SomePlugin
{
    [Export(typeof(IPlugin))]
    class MyAwesomePlugin : IPlugin
    {
        public string PlugInName
        {
            get
            {
                return "Steve is Awesome!";
            }
            set { }
        }
    }
}

The [Export(typeof(IPlugin))] is a way of tying the Export to the Import.

Importing the plugins requires a little bit more code. First we need to create a collection to import into:

[ImportMany(typeof(IPlugin))]
List<IPlugin> plugins = new List<IPlugin>();

Notice the typeof(IPlugin).

Next we need to compose the pieces:

using (DirectoryCatalog catalog = new DirectoryCatalog(pathToPluginDlls))
using (CompositionContainer container = new CompositionContainer(catalog))
{
    container.ComposeParts(this);
}

The ComposeParts() method looks at the passed-in object, finds anything with the Import or ImportMany attributes, then looks into the DirectoryCatalog for any classes with the Export attribute, and tries to tie everything together based on the typeof(IPlugin).

At this point we should now have a collection of plugins that we could iterate through and do whatever we want with each plugin.
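To make that concrete, here's a minimal self-contained version of the import side (the class name and plugin path are illustrative):

using System;
using System.Collections.Generic;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;
using PluginInterfaces;

public class PluginManager
{
    // MEF fills this collection with every discovered IPlugin export.
    [ImportMany(typeof(IPlugin))]
    private List<IPlugin> plugins = new List<IPlugin>();

    public void LoadAndRun(string pathToPluginDlls)
    {
        using (var catalog = new DirectoryCatalog(pathToPluginDlls))
        using (var container = new CompositionContainer(catalog))
        {
            container.ComposeParts(this);
        }

        foreach (var plugin in plugins)
            Console.WriteLine(plugin.PlugInName);
    }
}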

So what does that have to do with Claims?

If you continue down the Claims Model path, eventually you will get tired of having to modify the STS every time you wanted to change what data is returned from the RST (Request for Security Token).  Imagine if you could create a plugin model that all you had to do was create a new plugin for any new data source, or all you had to do was modify the plugins instead of the STS itself.  You could even build a transformation engine similar to Active Directory Federation Services and create a DSL that is executed at runtime.  It would make for simpler deployment, that’s for sure.
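A hedged sketch of what that plugin contract could look like (the interface and aggregation loop are illustrative, not from the actual project):

// Each data source becomes a plugin that knows how to produce claims for an identity.
public interface IClaimsSource
{
    IEnumerable<Claim> GetClaims(IClaimsIdentity identity);
}

// Inside the STS, with an [ImportMany]-composed collection of sources:
foreach (var source in claimsSources)
    foreach (var claim in source.GetClaims(identity))
        identity.Claims.Add(claim);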

And what about Parallelization?

If you have a large collection of plugins, it may be beneficial to run some things in parallel, such as a GetClaims([identity]) type call.

Using the Parallel libraries within .NET 4, you could very easily do something like:

// Run GetClaims for each plugin, potentially in parallel. Anything the
// plugins touch concurrently must be thread-safe.
Parallel.ForEach<IPlugin>(plugins, (plugin) =>
{
    plugin.GetClaims(identity);
});

The basic idea of this method is to take a collection and perform an action on each item in the collection, potentially in parallel. The ForEach method is declared as:

ForEach<TSource>(IEnumerable<TSource> source, Action<TSource> action)

When everything is all said and done, you now have a basic parallelized plugin model for your Security Token Service.  Pretty cool, I think.
