

Windows Azure Pack Authentication Part 3 – Using a Third Party IdP

by Steve Syfuhs / February 07, 2014 06:22 PM

In the previous installments of this series we looked at how Windows Azure Pack authenticates users and how it’s configured out of the box for federation. This time around we’re going to look at how you can configure federation with a third party IdP.

Microsoft designed Windows Azure Pack the right way. It supports federation with industry protocols out of the box. You can’t say that for many services, and you certainly can’t say that those services support it natively for all versions – more often than not you have to pay extra for it.

Windows Azure Pack supports federation, and actually uses it to authenticate users by default. This little fact makes it easy to federate to a 3rd party IdP.

If we search around we can find lots of resources on federating to ADFS, as that's Microsoft's federation product, and there are a number of good walkthroughs (some in German) on how to get it working. If you want to use ADFS, go read one or all of those articles; everything we talk about today is about using a non-Microsoft federation service.

Before we begin though I’d like to point out that Microsoft does have some resources on using 3rd party IdPs, but unfortunately the information is a bit thin in some places.

Prerequisites

Federation is a complex beast and we should be clear about what is required to get it working. In no particular order you need the following:

  • STS that supports the WS-Federation (passive) protocol
  • STS that supports WS-Federation wrapped JSON Web Tokens (JWT)
  • Optional: STS that supports WS-Trust + JWT

If you plan to use the public APIs with federated accounts then you will need an STS that supports WS-Trust + JWT.

If you don’t have an STS that can support these requirements then you should really consider taking a look at ADFS, or if you’re looking for customization, Thinktecture Identity Server. Both are top notch IdPs (edit: insert pitch about the IdP my company builds and sells as well [edit-edit: our next version natively supports JWT] 😉 -- sorry, this concludes the not-so-regularly-scheduled product placement).

Another option is to roll your own IdP. Don’t do this. No seriously, don’t. It’s a complicated mess. You’re way better off using the Thinktecture server and extending it to fit your needs.

Supposing that you already have an IdP and want to support JWT, here’s how we can do it. In this context the IdP is the overarching identity-providing system and the STS is simply the service issuing tokens.

Skip this next section if you just want to see how to configure Windows Azure Pack. That’s the main part that’s lacking in the MSDN documentation.

JWT via IdentityModel

First off, you need to be using .NET 4.5, and you need to be using the 4.5 IdentityModel stack. You can’t use the original 3.5 bits.

At this point I’m going to assume you’ve got a working IdP already. There are lots of articles out there explaining how to build one. We’re just going to mod the STS.

Before making any code changes though you need to add the JWT token handler, which is easily installed via NuGet (I ❤ NuGet):

PM> Install-Package System.IdentityModel.Tokens.Jwt

This will need to be added to the project that exposes your STS configuration class.

Next, we need to inject the token handler into the STS pipeline. This can easily be done by adding an entry to the web.config system.identityModel section:
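Something like the following should do it (the type and assembly names below are from the System.IdentityModel.Tokens.Jwt package I used; they may differ depending on the package version you pull in):

<system.identityModel>
  <identityConfiguration>
    <securityTokenHandlers>
      <!-- Registers the JWT handler alongside the default SAML handlers -->
      <add type="System.IdentityModel.Tokens.JWTSecurityTokenHandler, System.IdentityModel.Tokens.JWT" />
    </securityTokenHandlers>
  </identityConfiguration>
</system.identityModel>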

Or if you want to hardcode it you can add it to your SecurityTokenServiceConfiguration class.
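If you go that route, a rough sketch looks like this (the class name and issuer identifier are placeholders for your own STS configuration):

using System.IdentityModel.Configuration;
using System.IdentityModel.Tokens;

public class MySecurityTokenServiceConfiguration : SecurityTokenServiceConfiguration
{
    public MySecurityTokenServiceConfiguration()
        : base("https://my-sts/issuer") // hypothetical issuer identifier
    {
        // Register the JWT handler alongside the default SAML handlers
        SecurityTokenHandlers.AddOrReplace(new JWTSecurityTokenHandler());
    }
}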

There are of course other (potentially better) ways you can add it in, but this serves our purpose for the sake of a sample.

By adding the JWT token handler into the STS pipeline we can begin issuing JWTs to any relying parties that request one. This poses a problem though because passive requests don’t have a requested token type tacked on. Active (WS-Trust) requests do, but not passive. So we need to specify that a JWT should be minted instead of a SAML token. This can be done in the GetScope method of the STS class.
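A stripped-down version looks roughly like this (a sketch only; the scope setup is reduced to the bare minimum, and the URN is whatever your version of the JWT handler reports from GetTokenTypeIdentifiers()):

protected override Scope GetScope(ClaimsPrincipal principal, RequestSecurityToken request)
{
    var scope = new Scope(request.AppliesTo.Uri.OriginalString, SecurityTokenServiceConfiguration.SigningCredentials)
    {
        TokenEncryptionRequired = false,
        ReplyToAddress = request.ReplyTo ?? request.AppliesTo.Uri.OriginalString
    };

    // Ask WIF to mint a JWT instead of the default SAML token.
    // Passive requests won't have a TokenType, so set one ourselves.
    if (string.IsNullOrWhiteSpace(request.TokenType))
    {
        request.TokenType = "urn:ietf:params:oauth:token-type:jwt";
    }

    return scope;
}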

All we really needed to do was specify the TokenType as WIF will use that to determine which token handler should be used to mint the token. We know this is the value to use because it’s exposed by the GetTokenTypeIdentifiers() method in the JWTSecurityTokenHandler class.

Did I mention the JWT library is open source?

So now, at this point, if we made a request for a token to the STS we could receive a WS-Federation wrapped JWT.

If the idea of using a JWT instead of a SAML token appeals to you, you can configure your app to use the JWT token handler similar to Dominick’s sample.

If you were submitting a WS-Trust RST to the STS you could use client code along the lines of:
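Roughly something like this (the endpoint address, credentials, and applies-to realm are all placeholders; the types live in System.ServiceModel, System.ServiceModel.Security, and System.IdentityModel.Protocols.WSTrust):

var binding = new WS2007HttpBinding(SecurityMode.TransportWithMessageCredential);
binding.Security.Message.ClientCredentialType = MessageCredentialType.UserName;
binding.Security.Message.EstablishSecurityContext = false;

var factory = new WSTrustChannelFactory(binding,
    new EndpointAddress("https://your-sts/wstrust/13/usernamemixed")) // hypothetical endpoint
{
    TrustVersion = TrustVersion.WSTrust13
};
factory.Credentials.UserName.UserName = "someuser";
factory.Credentials.UserName.Password = "somepassword";

var rst = new RequestSecurityToken
{
    RequestType = RequestTypes.Issue,
    KeyType = KeyTypes.Bearer,
    TokenType = "urn:ietf:params:oauth:token-type:jwt", // ask for a JWT explicitly
    AppliesTo = new EndpointReference("http://azureservices/TenantSite")
};

var channel = factory.CreateChannel();
var token = channel.Issue(rst);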

When the GetScope method is called the request.TokenType should be set to whatever you passed in at the client. For more information on service calls you can take a look at the whitepaper Claims-Based Identity in Windows Azure Pack (docx). A future installment of this series might have more information about using services.

Lastly, we need to sign the JWT. The only caveat to using the JWT token handler is that the minimum RSA key size is 2048 bits. If you’re using a key smaller than that then please upgrade it. We’re going to overlook the fact that the MSDN article shows how to bypass minimum key sizes. Seriously. Don’t do it. I don’t want to have to explain why (putting paranoia aside for a moment, 1024 is being deprecated by Windows and related services in the near future anyway).

Issuing Tokens to Windows Azure Pack

So now we’re at a point where we can mint a JWT token. The question we need to ask now is what claims should this token contain? Looking at Part 1 we see that the Admin Portal requires UPN and Group claims. The tenant portal only requires the UPN claim.

Lucky for us the JWT token handler is smart. It knows to transform certain known XML-token-friendly-claim-types to JWT friendly claim types. In our case we can use http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn in our ClaimsIdentity to map to the UPN claim, and http://schemas.xmlsoap.org/claims/Group to map to our Group claim.
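So, for example, an outgoing identity built along these lines (values made up, obviously) ends up with the claim types Windows Azure Pack expects:

var identity = new ClaimsIdentity(new[]
{
    new Claim("http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn", "steve@syfuhs.net"),
    new Claim("http://schemas.xmlsoap.org/claims/Group", "MgmtSvc Operators")
}, "Federation");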

Then we need to determine where to send the token, and who to address it to. Both the tenant and admin sites have Federation Metadata documents that specify this information for us. If you’ve got an IdP that can parse the metadata then all you need to do is point it to https://yourtenantsite/FederationMetadata/2007-06/FederationMetadata.xml for the tenant configuration or https://youradminsite/FederationMetadata/2007-06/FederationMetadata.xml for the admin configuration.

Of course, this information will also map up to the configuration elements we looked at in Part 2. That’ll tell us the Audience URI and the Reply To for both sites.

Finally we have everything we need to mint the token, address it, and send it on its way.

Configuring Windows Azure Pack to Trust your Token

The token’s been sent, and once it hits either the tenant or admin site it’ll promptly be ignored, and you’ll get an ugly error message saying “nope, not gonna happen, bub.”

We therefore need to configure Windows Azure Pack to trust our token. Looking at MSDN we see some somewhat useful information telling us what we need to modify, but frankly, it’s missing a bunch of information so we’re going to ignore it.

First things first: if your IdP publishes a Federation Metadata document then you can just configure everything via PowerShell:
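The command is along these lines (the metadata URL and connection string are placeholders; double-check the parameter names against Get-Help Set-MgmtSvcRelyingPartySettings in your installation):

$connectionString = 'Data Source=sqlserver;Initial Catalog=Microsoft.MgmtSvc.PortalConfigStore;User Id=...;Password=...'

Set-MgmtSvcRelyingPartySettings -Target Admin `
    -MetadataEndpoint 'https://yourIdP/FederationMetadata/2007-06/FederationMetadata.xml' `
    -ConnectionString $connectionString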

You can replace the target “Admin” with “Tenant” if you want to configure the Tenant Portal. The only caveat with doing it this way is that the metadata document needs to be accessible from the server. I’ve submitted a feature request that they also support local file paths; hopefully they listen! Since the parameter takes the full URL you can put the metadata document somewhere public if it’s not normally accessible. You will only need the metadata accessible while applying this configuration.

If the cmdlet completed successfully then you should be able to log in from your own IdP. That’s all there is to it for you. I would recommend seriously considering going this route instead of configuring things manually.

Otherwise, let’s carry on.

Since we can’t import our federation metadata (since we probably don’t have any), we need to configure things manually. To do that we need to modify settings in the database.

Looking back to Part 2 we see all the configuration elements that enable our federated trust to the default IdPs. We’ll need to update a few settings across the Microsoft.MgmtSvc.Store and Microsoft.MgmtSvc.PortalConfigStore databases.

The MSDN documentation says to modify the settings in the PortalConfigStore database. It’s wrong, or rather incomplete, as that’s only part of the process.

The PortalConfigStore database contains the settings used by the Tenant and Admin Portals to validate and request tokens. We need to modify these settings to use our custom IdP. To do so, locate the Authentication.IdentityProvider setting in the [Config].[Settings] table. The namespace we need to choose depends on which site we want to configure. In our case we select the Admin namespace. As we saw last time it looks something like:
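(I haven’t reproduced my exact value here; reconstructed from the Part 2 example, the Admin entry points at the built-in WindowsAuthSite STS and looks roughly like this.)

{
   "Realm":"http://azureservices/WindowsAuthSite",
   "Endpoint":"https://adminauth-cloud.syfuhs.net/wsfederation/issue",
   "Certificates":[
      "MIIC2...ADLt0="
   ]
}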

We need to substitute our STS information here. The Realm is whatever your STS issuer is, and the Endpoint is where ever your WS-Federation endpoint is located. The Certificate should be a base 64 encoded representation of your signing certificate (remember, just the public key).

In my experience I’ve had to do an IISRESET on the portals to get the settings refreshed. I might just be impatient though.

Once those values are replaced you can try logging in. You should be redirected to your IdP and if you issue the token properly it’ll hit the portal and you should be logged in. Unfortunately this’ll actually fail with a non-useful error message.

[Screenshot: the unhelpful error page shown after the failed login]

Who can guess why? So far I’ve stated that the MSDN documentation is missing information. What have we missed? Hopefully if you’ve read the first two parts of this series you’re yelling at the screen telling me to get on with it already because you’ve caught on to what I’m saying.

We haven’t configured the API services to trust our STS! Oops.

With that being said, we now have proof that Windows Azure Pack flows the token to the services from the Portal and, more importantly, the services validate the token. Cool!

Anyway, now to configure the APIs. Warning: complicated.

In the Microsoft.MgmtSvc.Store database locate the Settings table and then locate the Authentication.IdentityProvider.Secondary element in the AdminAPI namespace. We need to update it with the exact same values as we put into the configuration element in the other database.

If you’re only wanting to configure the Tenant Portal you’d want to modify the Authentication.IdentityProvider.Primary configuration element. Be careful with the Primary/Secondary elements as they can get confusing.

If you’re configuring the Admin Portal you’ll need to update the Authentication.IdentityProvider.Secondary configuration element in the TenantAPI namespace to use the configuration you specified for the Admin Portal as well. As I said previously, I think this is because the Admin Portal calls into the Tenant API. The Admin Portal will use an admin-trusted token – therefore the TenantAPI needs to trust the admin’s STS.
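If you’d rather script these changes than edit rows by hand, each one is just an UPDATE against the Settings table. A sketch, assuming the same Namespace/Name/Value column layout we saw in the PortalConfigStore database, with a placeholder JSON value standing in for your own STS details:

UPDATE [Config].[Settings]
SET [Value] = '{"Realm":"urn:my-custom-sts","Endpoint":"https://sts.contoso.com/wsfed","Certificates":["MIIC2...="]}'
WHERE [Namespace] = 'AdminAPI'
  AND [Name] = 'Authentication.IdentityProvider.Secondary';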

Now that you’ve completed configuration you can do an IISRESET and try logging in. If you configured everything properly you should now be able to log in from your own IdP.

Troubleshooting

For those rock star Ops people who understand identity this guide was likely pretty easy to follow, understand, and implement. For everyone else though, this was probably a pain in the neck. Here are some troubleshooting tips.

Review the Event Logs
It’s surprising how many people forget that a lot of applications will write errors to the Windows Event Log. Windows Azure Pack has quite a number of logs that you can review for more information. If you’re trying to track down an issue in the portals look in the MgmtSvc-*Site where * is Tenant or Admin. Errors will get logged there. If you’re stuck mucking about the APIs look in the MgmtSvc-*API where * is Tenant, Admin, or TenantPublic.

Enable Development Mode
You can enable developer mode in sites by modifying a value in the web.config. Unprotect the web.config by calling:
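For the Tenant Portal that’s something like the following (swap the namespace for AdminSite if you’re poking at the Admin Portal):

PS C:\Windows\system32> Unprotect-MgmtSvcConfiguration -Namespace TenantSite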

And then locate the appSetting named Microsoft.Azure.Portal.Configuration.PortalConfiguration.DevelopmentMode and set the value to true (the entry is shown below). Be sure to undo and re-protect the configuration when you’re done. You should then get a neat error tracing window showing up in the portals, and more diagnostic information will be logged to the event logs. Probably not wise to do this in a production environment.
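Once unprotected it’s just a plain appSetting entry:

<add key="Microsoft.Azure.Portal.Configuration.PortalConfiguration.DevelopmentMode" value="true" />

When you’re finished, Protect-MgmtSvcConfiguration with the same namespace re-protects the file.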

Use the PowerShell CmdLets
There are quite a number of PowerShell cmdlets available for you to learn about the configuration of Windows Azure Pack. If you open the Windows Azure Pack Administration PowerShell console you can see that there are two modules that get loaded that are full of cmdlets:

PS C:\Windows\system32> get-command -Module MgmtSvcConfig

CommandType     Name                                               ModuleName
-----------     ----                                               ----------
Cmdlet          Add-MgmtSvcAdminUser                               MgmtSvcConfig
Cmdlet          Add-MgmtSvcDatabaseUser                            MgmtSvcConfig
Cmdlet          Add-MgmtSvcResourceProviderConfiguration           MgmtSvcConfig
Cmdlet          Get-MgmtSvcAdminUser                               MgmtSvcConfig
Cmdlet          Get-MgmtSvcDatabaseSetting                         MgmtSvcConfig
Cmdlet          Get-MgmtSvcDefaultDatabaseName                     MgmtSvcConfig
Cmdlet          Get-MgmtSvcEndpoint                                MgmtSvcConfig
Cmdlet          Get-MgmtSvcFeature                                 MgmtSvcConfig
Cmdlet          Get-MgmtSvcFqdn                                    MgmtSvcConfig
Cmdlet          Get-MgmtSvcNamespace                               MgmtSvcConfig
Cmdlet          Get-MgmtSvcNotificationSubscriber                  MgmtSvcConfig
Cmdlet          Get-MgmtSvcResourceProviderConfiguration           MgmtSvcConfig
Cmdlet          Get-MgmtSvcSchema                                  MgmtSvcConfig
Cmdlet          Get-MgmtSvcSetting                                 MgmtSvcConfig
Cmdlet          Initialize-MgmtSvcFeature                          MgmtSvcConfig
Cmdlet          Initialize-MgmtSvcProduct                          MgmtSvcConfig
Cmdlet          Install-MgmtSvcDatabase                            MgmtSvcConfig
Cmdlet          New-MgmtSvcMachineKey                              MgmtSvcConfig
Cmdlet          New-MgmtSvcPassword                                MgmtSvcConfig
Cmdlet          New-MgmtSvcResourceProviderConfiguration           MgmtSvcConfig
Cmdlet          New-MgmtSvcSelfSignedCertificate                   MgmtSvcConfig
Cmdlet          Protect-MgmtSvcConfiguration                       MgmtSvcConfig
Cmdlet          Remove-MgmtSvcAdminUser                            MgmtSvcConfig
Cmdlet          Remove-MgmtSvcDatabaseUser                         MgmtSvcConfig
Cmdlet          Remove-MgmtSvcNotificationSubscriber               MgmtSvcConfig
Cmdlet          Remove-MgmtSvcResourceProviderConfiguration        MgmtSvcConfig
Cmdlet          Reset-MgmtSvcPassphrase                            MgmtSvcConfig
Cmdlet          Set-MgmtSvcCeip                                    MgmtSvcConfig
Cmdlet          Set-MgmtSvcDatabaseSetting                         MgmtSvcConfig
Cmdlet          Set-MgmtSvcDatabaseUser                            MgmtSvcConfig
Cmdlet          Set-MgmtSvcFqdn                                    MgmtSvcConfig
Cmdlet          Set-MgmtSvcIdentityProviderSettings                MgmtSvcConfig
Cmdlet          Set-MgmtSvcNotificationSubscriber                  MgmtSvcConfig
Cmdlet          Set-MgmtSvcPassphrase                              MgmtSvcConfig
Cmdlet          Set-MgmtSvcRelyingPartySettings                    MgmtSvcConfig
Cmdlet          Set-MgmtSvcSetting                                 MgmtSvcConfig
Cmdlet          Test-MgmtSvcDatabase                               MgmtSvcConfig
Cmdlet          Test-MgmtSvcPassphrase                             MgmtSvcConfig
Cmdlet          Test-MgmtSvcProtectedConfiguration                 MgmtSvcConfig
Cmdlet          Uninstall-MgmtSvcDatabase                          MgmtSvcConfig
Cmdlet          Unprotect-MgmtSvcConfiguration                     MgmtSvcConfig
Cmdlet          Update-MgmtSvcV1Data                               MgmtSvcConfig

As well as the MgmtSvcAdmin module, which is more for daily administration.

Read the Windows Azure Pack Claims Whitepaper
See here: Claims-Based Identity in Windows Azure Pack (docx).

Visit the Forums
When in doubt take a look at the forums and ask a question if you’re stuck.

Email Me
Lastly, you can contact me (steve@syfuhs.net) with any questions. I may not have answers but I might be able to find someone who can help.

Conclusion

In the first two parts of this series we looked at how authentication works and how it’s configured, and in this installment we looked at how we can configure a third party IdP to log in to Windows Azure Pack. If you’re trying to configure Windows Azure Pack to use a custom IdP, I imagine this is the most complicated part to figure out, and hopefully it’s documented well enough here. I personally spent a fair amount of time fiddling with settings, and most of the information I’ve gathered for this series has been the result of lots of trial and error. With any luck this series has proven useful to you, and you have more luck with the configuration than I originally did.

Next time we’ll take a look at how we can consume the public APIs using a third party IdP for authentication.

In the future we might take a look at how we can authenticate requests to a service called from a Windows Azure Pack add-on, and how we can call into Windows Azure Pack APIs from an add-on.

Windows Azure Pack Authentication Part 2

by Steve Syfuhs / January 30, 2014 07:50 PM

Last time we took a look at how Windows Azure Pack authenticates users in the Admin Portal. In this post we are going to look at how authentication works in the Tenant Portal.

Authentication in the Tenant Portal works exactly the same way authentication in the Admin Portal works.

Detailed and informative explanation, right?

Actually, with any luck you’ve read, and more importantly were able to decipher, my explanations in the last post, because this time we’re going to go a bit deeper into how authentication is configured. If that’s the case then you know everything you need to continue on here. There are a couple minor differences between the Admin sites and Tenant sites, such as the tenant STS storing users in a standalone SQL database instead of Active Directory, and a set of public service endpoints that also federate with the Tenant STS. For the time being we can ignore the public API, but we may revisit it in the future.

[Diagram: Tenant Portal, Tenant Auth Site, and API authentication architecture]

One of the things this diagram doesn’t show is how the various services store configuration information. This is somewhat important because the Portals and APIs need to keep track of where the STS is, what is used to sign tokens, who is allowed to receive tokens, etc.

Since Windows Azure Pack is designed to be distributed in nature, it’s a fair bet most of the configuration is stored in databases. Let’s check the PowerShell cmdlets (horizontal spacing truncated a bit to fit):

PS C:\Windows\system32> Get-MgmtSvcDefaultDatabaseName

DefaultDatabaseName                        Description
-------------------                        -----------
Microsoft.MgmtSvc.Config                   Configuration store database
Microsoft.MgmtSvc.PortalConfigStore        Admin and Tenant sites database
Microsoft.MgmtSvc.Store                    Rest API layer database
Microsoft.MgmtSvc.MySQL                    MySQL resource provider database
Microsoft.MgmtSvc.SQLServer                SQLServer resource provider database
Microsoft.MgmtSvc.Usage                    Usage service database
Microsoft.MgmtSvc.WebAppGallery            WebApp Gallery resource provider database

Well that’s handy. It even describes what each database does. Looking at the databases on the server we see each one:

[Screenshot: the Windows Azure Pack databases listed on the SQL Server]

Looking at the descriptions we can immediately ignore anything that is described as a “resource provider database” because resource providers in Windows Azure Pack are the services exposed by the Portals and APIs. That leaves us with the Microsoft.MgmtSvc.Config, Microsoft.MgmtSvc.PortalConfigStore, Microsoft.MgmtSvc.Store, and Microsoft.MgmtSvc.Usage databases.

The usage database looks like the odd one out so if we peek into the tables we see configuration information and data for usage of the resource providers. Scratch that.

We’re then left with a Config database, a PortalConfigStore database, and a Store database. How’s that for useful naming conventions? Given the descriptions we could infer we likely only want to look into the PortalConfigStore database for the Tenant and Admin Portal configuration, and the Store database for the API configuration. To confirm that we could peek into the Config database and see what’s there. If we look in the Settings table we see a bunch of encrypted key value pairs. Nothing jumps out as being related to federation information like endpoints, claims, or signing certificates, but we do see pointers to database credentials.

If we quickly take a look at some of the web.config files in the various Windows Azure Pack sites we can see that some of them only have connection strings to the Config database. Actually, if we look at any of the web.config files we’ll see they are protected, so we need to unprotect them:

PS C:\Windows\system32> Unprotect-MgmtSvcConfiguration -Namespace TenantAPI

Please remember to protect-* them when you’re done snooping!

If we compare the connection strings and information in the Config.Settings table, it’s reasonable to hypothesize that the Config database stores pointers to the other configuration databases, and the sites only need a configured connection string to a single database. This only seems to apply to some sites though; the Portal sites have connection strings pointing only to the PortalConfigStore database. That actually makes sense from a security perspective.

Since the Portal sites are public-ish facing, they are more likely to be attacked, and therefore really shouldn’t have direct connections to databases storing sensitive information – hence the Web APIs. Looking at the architectural documentation on TechNet we can see it’s recommended that API services not be public facing (with the exception of the Public APIs) as well, so that supports my assertion.

Moving on, we now have the PortalConfigStore and the Store databases left. The descriptions tell us everything we need to know about them. We end up with a service relationship along the lines of:

[Diagram: which services talk to the PortalConfigStore, Store, and Config databases]

Okay, now that we have a rough idea of how configuration data is stored we can peek into the databases and see what’s what.

Portal Sites Authentication

Starting with the PortalConfigStore database we see a collection of tables.

[Screenshot: the tables in the PortalConfigStore database]

The two tables that pop out are the Settings table and the aspnet_Users table. We know the Auth Site for the tenants stores users in a database, and lookie here we have a collection of users.

Next up is the Settings table. It’s a namespace-key-value-pair mapped table. Since this database stores information for multiple sites, it makes sense to separate configuration data into multiple realms – the namespaces.

There are 4 namespaces we care about:

  • AdminSite
  • AuthSite
  • TenantSite
  • WindowsAuthSite

Looking at the TenantSite configuration we see a few entries with JSON values:

  • Authentication.IdentityProvider
  • Authentication.RelyingParty

Aha! Here’s where we store the necessary bits to do the federation dance. The Authentication.RelyingParty entry stores the information describing the TenantSite. So when it goes to the IdP with a request it can use these values. In my case I’ve got the following:

{
   "EncryptionCertificate":null,
   "Realm":"http://azureservices/TenantSite",
   "ReplyTo":"https://manage-cloud.syfuhs.net/"
}

Really, just the bare minimum to describe the RP. The Realm, which is the unique identifier of the site, the Reply To URL which is where the token should be returned to, and the Encryption Certificate in case the returned token is encrypted – which it isn’t by default. With this information we can make a request to the IdP, but of course, we don’t know anything about the IdP yet so we need to look up that configuration information.

Looking at the Authentication.IdentityProvider entry we see everything else we need to complete a WS-Federation passive request for token. This is my configuration:

{
   "Realm":"http://azureservices/AuthSite",
   "Endpoint":"https://auth-cloud.syfuhs.net/wsfederation/issue",
   "Certificates":[
      "MIIC2...ADLt0="
   ]
}

To complete the request we actually only need the Endpoint as that describes where the request should be sent, but we also now have the information to validate the token response. The Realm describes who minted the token, and the Certificates element is a collection of certificates that could have been used to sign the token. If the token was signed by one of these certificates, we know it’s a valid token.

We do have to go one step further when validating this though, as we need to make sure the token is intended to be used by the Tenant Portal. This is done by comparing the audience URI in the token (see the last post) to the Realm in the Authentication.RelyingParty configuration value. If everything matches up we’re good to go.

We can see the configuration in the AdminSite namespace is similar too.

Next up we want to look at the AuthSite namespace configuration. There are similar entries to the TenantSite, but they serve slightly different purposes.

The Authentication.IdentityProvider entry matches the entry for the TenantSite. I’m not entirely sure of its purpose, but I suspect it might be a reference value for when changes are made and the original configuration is needed. Just a guess on that though.

Moving on we have the Authentication.RelyingParty.Primary entry. This value describes who can request a token, which in our case is the TenantSite. My entry looks like this:

{
   "EncryptionCertificate":null,
   "Realm":"http://azureservices/TenantSite",
   "ReplyTo":"https://manage-cloud.syfuhs.net/"
}

It’s pretty similar to the configuration in the TenantSite entry. The Realm is the identifier of the site that can request a token, and the Reply To URL is where the token should be returned once it’s minted.

Compare that to the values in the WindowsAuthSite namespace and things look pretty similar too.

So with all that information we’ve figured out how the Portal sites and the Auth sites are configured. Of course, we haven’t looked at the APIs yet.

API Authentication

If you recall from the last post, the API calls are authenticated by attaching a JWT to the request header. The JWT has to be validated by the APIs the same way the Portals have to validate the JWTs received from the STS. If we look at the diagram above though, the API sites don’t have access to the PortalConfigStore database; they have access to the Store database. Therefore it’s reasonable to assume the Store database has a copy of the federation configuration data as well.

Looking at the Settings table we can confirm that assumption. It’s got the same schema as the Settings table in the PortalConfigStore database, though in this case there are different namespaces. There are two namespaces that are of interest here:

  • AdminAPI
  • TenantAPI

This aligns with the service diagram above. We should only have settings for the namespaces of services that actually touch the database.

If we look at the TenantAPI elements we have four entries:

  • Authentication.IdentityProvider.Primary
  • Authentication.IdentityProvider.Secondary
  • Authentication.RelyingParty.Primary
  • Authentication.RelyingParty.Secondary

The Authentication.IdentityProvider.Primary entry matches up with the TenantSite Authentication.IdentityProvider entry in the PortalConfigStore database. That makes sense since it needs to trust the token the same as the Tenant Portal site. The Secondary element is curious though. It’s configured as a relying party to the Admin STS. I suspect that is there because the Tenant APIs can be called from the Admin Portal.

Comparing these values to the AdminAPI namespace we see that there are only configuration entries for the Admin STS. Seems reasonable since the Tenant Portal probably shouldn’t be calling into admin APIs. Haven’t got a clue why the AdminAPI relying party is configured as Secondary though 🙂. Artifact of the design I guess. Another interesting artifact of this configuration is that the ReplyTo values in the RelyingParty entries show the default value from when I first installed the services. We see something like:

{
   "EncryptionCertificate":null,
   "Realm":"http://azureservices/TenantSite",
   "ReplyTo":"https://syfuhs-cloud:30081/"
}

And

{
   "EncryptionCertificate":null,
   "Realm":"http://azureservices/AdminSite",
   "ReplyTo":"https://syfuhs-cloud:30091/"
}

I reconfigured the endpoints to be publicly accessible, so these values are now incorrect.

APIs can’t really use Reply To the same way passive requests can, so it makes sense that they don’t get updated – they don’t have to be. The values don’t have to be present either, but again, artifacts I guess.

Conclusion

In the previous post we looked at how authentication works conceptually, and in this post we looked at how authentication is configured in detail. Next time we’ll take a look at how we can reconfigure Windows Azure Pack to work with our own IdPs.

No spoilers this time. 😉

Windows Azure Pack Authentication Part 1

by Steve Syfuhs / January 29, 2014 10:17 PM

Recently Microsoft released their on-premise Private Cloud offering called Windows Azure Pack for Windows Server.

Windows Azure Pack for Windows Server is a collection of Windows Azure technologies, available to Microsoft customers at no additional cost for installation into your data center. It runs on top of Windows Server 2012 R2 and System Center 2012 R2 and, through the use of the Windows Azure technologies, enables you to offer a rich, self-service, multi-tenant cloud, consistent with the public Windows Azure experience.

Cool!

There are a fair number of articles out there that have nice write ups on how it works, what it looks like, how to manage it, etc., but I’m not going to bore you with the introductions. Besides, Marc over at hyper-v.nu has already written up a fantastic collection of blog posts and I couldn’t do nearly as good a job introducing it.

Today I want to look at how Windows Azure Pack does authentication.

Architecture

Before we jump head first into authentication we should take a look at how Windows Azure Pack works at an architectural level. It’s important to understand all the pieces that depend on authentication. If you take a look at the TechNet articles you can see there are a number of moving parts.

The primary components of Windows Azure Pack are broken down into services. Depending on how you want it all to scale you can install the services on one server or multiple servers, or multiple redundant servers. There are 7+1 primary services, and 5+ secondary services involved.

The primary services are:

To help simplify some future samples I’ve included the base URLs of the services above. Anything public-ish facing has its own subdomain, and the related backend APIs are on the same domain but a different port (the ports coincide with the default installation). Also, these are public endpoints – be kind please!

The Secondary services are for resource providers which are things like Web Sites, VM Cloud, Service Bus, etc. While the secondary services are absolutely important to a private cloud deployment and perhaps “secondary” is an inappropriate adjective, they aren’t necessarily in scope when talking about authentication. Not at this point at least. Maybe in a future post. Let me know if that’s something you would like to read about.

Admin Portal

The Admin Portal is a UI surface that allows administrators to manage resource providers like Web Sites and VM Clouds. It calls into the Admin API to do all the heavy lifting. The Admin API is a collection of Web API interfaces.

The Admin Portal and the Admin API are Relying Parties of the Admin Authentication Site. The Admin Authentication Site is a STS that authenticates users using Windows Auth.

[Diagram: the Admin Portal and Admin API as relying parties of the Admin Authentication Site]

During initial authentication the Admin Portal will redirect to the STS and request a WS-Federation-wrapped JWT (JSON Web Token – pronounced “jot”). Once the Admin Portal receives the token it validates the token and begins issuing requests to the Admin API attaching that unwrapped JWT in an Authorization header.

This is how a login would flow:

  1. Request admin-cloud.syfuhs.net
  2. No auth – redirect to adminauth-cloud.syfuhs.net
  3. Do Windows Auth and mint a token
  4. Return the JWT to the Admin Portal
  5. Attach the JWT to the session

It’s just a WS-Fed passive flow. Nothing particularly fancy here besides using a JWT instead of a SAML token. WS-Federation is a token-agnostic protocol so you can use any kind of token format so long as both the IdP and RP understand it. A JWT looks something like this:

Header: {
    "x5t": "3LFgh5SzFeO4sgYfGJ5idbHxmEo",
    "alg": "RS256",
    "typ": "JWT"
},
Claims: {
    "upn": "SYFUHS-CLOUD\\Steve",
    "primarysid": "S-1-5-21-3349883041-1762849023-1404173506-500",
    "aud": "
http://azureservices/AdminSite",
    "primarygroupsid": "S-1-5-21-3349883041-1762849023-1404173506-513",
    "iss": "
http://azureservices/WindowsAuthSite",
    "exp": 1391086240,
    "group": [
        "SYFUHS-CLOUD\\None",
        "Everyone",
        "NT AUTHORITY\\Local account and member of Administrators group",
        "SYFUHS-CLOUD\\MgmtSvc Operators",
        "BUILTIN\\Administrators",
        "BUILTIN\\Users",
        "NT AUTHORITY\\NETWORK",
        "NT AUTHORITY\\Authenticated Users",
        "NT AUTHORITY\\This Organization",
        "NT AUTHORITY\\Local account",
        "NT AUTHORITY\\NTLM Authentication"
    ],
    "nbf": 1391057440
}, Signature: “…”

Actually, that’s a bit off because it’s not represented as { Header: {…}, Claims: {…} }, but that’s the logical representation.

If we look at the token there are some important bits. The UPN claim is the user identifier; the AUD claim is the audience receiving the token; the ISS claim is the issuer of the token. This is pretty much all the Admin Portal needs for proper authentication. Since this is an administrators portal it should probably do some authorization checks too though. The Admin Portal uses the UPN and/or group membership claims to decide whether a user is authorized.

If we quickly take a look at the configuration databases, namely the Microsoft.MgmtSvc.Store database, we can see a table called [mp].[AuthorizedAdminUsers]. This table lists the principals that are currently authorized to log into the Admin Portal. Admittedly, we probably don’t want to go mucking around the database though so we can use PowerShell to take a look.

PS C:\Windows\system32> Get-MgmtSvcAdminUser -Server localhost\sqlexpress
SYFUHS-CLOUD\MgmtSvc Operators
SYFUHS-CLOUD\Steve

My local user account and the MgmtSvc Operators group match the claims in my token, so I can log in. Presumably it’s built so I just need a UPN or group claim matched up to let me in, but I must confess I haven’t gotten around to testing that yet. Surely there’s documentation on TechNet about it… 😉

As an aside, it looks like PowerShell is the only way to modify the admin user list currently, so you can use Windows groups to easily manage authorization.

So anyway, now we have this token attached to the user session as part of the FedAuth cookie. I’m guessing they’ve set the BootstrapContext (formerly BootstrapToken) to be the JWT because this token will always have to be present behind the scenes while the user’s session is still valid. However, in order for the Admin Portal to do anything it needs to call into the Admin API. Here’s the cool part: the JWT that is part of the session is simply attached to the request as an Authorization header (snipped for clarity).

GET https://admin-cloud.syfuhs.net:30004/subscriptions?skip=0&take=1 HTTP/1.1
Authorization: Bearer eyJ0eXAiOi...2Pfl_q3oVA
x-ms-principal-id: SYFUHS-CLOUD%5cSteve
Accept-Language: en-US
Host: admin-cloud.syfuhs.net:30004
Connection: Keep-Alive

The response is just a chunk of JSON:

HTTP/1.1 200 OK
Cache-Control: no-cache
Pragma: no-cache
Content-Type: application/json; charset=utf-8
Expires: -1
Server: Microsoft-IIS/8.5
X-AspNet-Version: 4.0.30319
x-ms-request-id: 3a96aabb91e7403b968a8aa9b569ad5f.2014-01-30T04:42:02.8142310Z
X-Powered-By: ASP.NET
Date: Thu, 30 Jan 2014 04:42:04 GMT
Content-Length: 1124

{
   "items":[
      {
         "SubscriptionID":"386dd878-64b8-41a7-a02a-644f646e2df8",
         "SubscriptionName":"Sample Plan",
         "AccountAdminLiveEmailId":"steve@syfuhs.net",
         "ServiceAdminLiveEmailId":null,
         "CoAdminNames":[

         ],
         "AddOnReferences":[

         ],
         "AddOns":[

         ],
         "State":1,
         "QuotaSyncState":0,
         "ActivationSyncState":0,
         "PlanId":"Samplhr1gggmr",
         "Services":[
            {
               "Type":"sqlservers",
               "State":"registered",
               "QuotaSyncState":0,
               "ActivationSyncState":0,
               "BaseQuotaSettings":[…snip…]
            },
            {
               "Type":"mysqlservers",
               "State":"registered",
               "QuotaSyncState":0,
               "ActivationSyncState":0,
               "BaseQuotaSettings":[…snip…]
            }
         ],
         "LastErrorMessage":null,
         "Features":null,
         "OfferFriendlyName":"Sample Plan",
         "OfferCategory":null,
         "Created":"2014-01-30T03:21:43.533"
      }
   ],
   "filteredTotalCount":1,
   "totalCount":1
}

At this point (or rather, before it returns the response *cough*) the Admin API needs to authenticate and authorize the incoming request. The service is somewhat RESTful so it looks to the headers to check for an Authorization header (because the HTTP spec suggests that’s how it be done, REST blah blah blah, et al 😄).

The authorization header states that it has a Bearer token, which basically means the caller has already proven they are who they say they are based on the fact that they hold a token from a trusted STS (hence “bearer”). In other words they don’t have to do anything else. That token is the key to the kingdom.

Yet another aside: bearer tokens are sometimes considered insecure because anyone who has a copy of one could impersonate the user. With that being said, it’s better to use a token than, say, send the user’s password with each request.

Now at this point I can’t say for certain how things work internally as I don’t have any access to the source (I *could* use Reflector but that’s cheating), but since it’s Web API I could guess that they’re using a DelegatingHandler or something similar to do the verification. Vittorio actually has a great sample on how this could be done via what he’s calling Poor Man’s Delegation/ActAs. It’s the same principle – receive token on web login, validate token so user can actually log in, keep token, want to call authenticated web service, still have token, stick token in header, web service authorizes token, done. Not too shabby.
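Just to make the idea concrete, a delegating handler for that flow might be shaped roughly like this. This is purely a guess at the shape of it, not Windows Azure Pack’s actual code, and ValidateJwt is a hypothetical helper wrapping whatever JWT validation your token handler provides:

using System.Net.Http;
using System.Security.Claims;
using System.Threading;
using System.Threading.Tasks;
using System.Web;

public class JwtAuthenticationHandler : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        var auth = request.Headers.Authorization;

        if (auth != null && auth.Scheme == "Bearer" && !string.IsNullOrWhiteSpace(auth.Parameter))
        {
            // Validate the raw JWT and attach the resulting principal to the request.
            ClaimsPrincipal principal = ValidateJwt(auth.Parameter);

            Thread.CurrentPrincipal = principal;
            if (HttpContext.Current != null)
            {
                HttpContext.Current.User = principal;
            }
        }

        return base.SendAsync(request, cancellationToken);
    }

    private ClaimsPrincipal ValidateJwt(string rawToken)
    {
        // Hypothetical: check signature, issuer, audience, and lifetime using
        // your JWT token handler, then return the resulting ClaimsPrincipal.
        // <snip />
        throw new System.NotImplementedException();
    }
}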

Conclusion

So at this point we’ve seen how the Admin Portal authenticates users and how it securely calls its backend web services. Next time we’ll look at how the Tenant Portal does it (SPOILERS: it does it the same way!). Ahem -- more specifically, we’ll look at how the Tenant Portal is configured so it can actually trust the token it receives. From there we can take a look at how other IdPs can be configured for log in, and if we’re really wanting to be daring we could build our own custom STS for log in (it’s a wee-bit more complicated than you might think).

Introduction to Windows Azure Active Directory Federation Part 1

by Steve Syfuhs / November 30, 2012 12:19 AM

Earlier this week Microsoft released some interesting numbers regarding Windows Azure Active Directory (WAAD) authentication.

Since the inception of the authentication service on the Windows Azure platform in 2010, we have now processed 200 BILLION authentications for 50 MILLION active user accounts. In an average week we receive 4.7 BILLION authentication requests for users in over 420 THOUSAND different domains.

[…] To put it into perspective, in the 2 minutes it takes to brew yourself a single cup of coffee, Windows Azure Active Directory (AD) has already processed just over 1 MILLION authentications from many different devices and users around the world.  Not only are we processing a huge number of authentications but we’re doing it really fast!  We respond to 9,000 requests per second and in the U.S. the average authentication takes less than 0.7 seconds.

Whoa.

Now, some people may be wondering what this is all about. Where are all these requests coming from? What domains? Who? Huh? What? It’s actually pretty straightforward: 99.99999999% of all these requests are coming from Office 365 and Dynamics CRM.

Windows Azure Active Directory started as the authentication service for Office 365. The service is built on the Microsoft Federation Gateway, which is the foundation for Windows Live/Microsoft accounts. As the platform matured Microsoft opened the system to allow more applications to authenticate against the service. It has since transitioned into its proper name, Windows Azure Active Directory.

The system at its core is simply a multitenant directory of users. Each tenant is tied to at least one unique domain. Each tenant can then allow applications to federate. This is basically how Office 365 works. When you create a new Office 365 account, the provisioning system creates a new tenant in WAAD and ties it to a subdomain of onmicrosoft.com, so you would for instance get contoso.onmicrosoft.com. Once the tenant is created the provisioning system then goes off to the various services you’ve selected like Exchange, SharePoint, CRM, etc., and starts telling them to create their various things necessary for service. These services now know about your WAAD tenant.

This is all well and good, but you’re now using contoso.onmicrosoft.com, and you would rather use a different domain like contoso.com for email and usernames. Adding a domain to Office 365 requires telling both WAAD and the various services that a new domain is available to use in the tenant. Now WAAD has two domains associated with it.

Now we can create users with our custom domain contoso.com, but there’s like a thousand users and you have Active Directory locally. It would be much better if we could just log into Office 365 using our own Active Directory credentials, and it would be so much nicer on the administrator if he didn’t have to create a thousand users. This calls for federation between WAAD and AD through Active Directory Federation Services (too. many. AD-based. names!).

Things get a little more complicated here. Before looking at federation between WAAD and AD we should take a look at how authentication normally works in Office 365.

First a user will try to access an application like SharePoint. SharePoint doesn’t see a session for the user so it redirects the user to login.microsoftonline.com, which is the public face of Windows Azure Active Directory. The user enters their credentials managed through your WAAD tenant, and is then redirected back to SharePoint with a token. SharePoint consumes the token and creates a session for the user. This is a standard process called passive federation. The federation is between SharePoint and WAAD. SharePoint and the various other services trust login.microsoftonline.com (and only login.microsoftonline.com) to issue tokens, so when a user has a token issued by login.microsoftonline.com it’s understood that the user has been authenticated and is now trusted. Clear as mud, right?

Allowing authentication via your on premise Active Directory complicates things a little. This involves creating a trust between Windows Azure Active Directory and your Active Directory through a service called Active Directory Federation Services. A trust is basically a contract that states WAAD will understand and allow tokens received from ADFS. With this trust in place, any authentication requests to WAAD through login.microsoftonline.com will be passed to your ADFS server. Once your ADFS server authenticates you, a token is generated and sent back to login.microsoftonline.com. This token is then consumed, and a new token is generated by login.microsoftonline.com and issued to whichever service asked for you to log in. Remember what I said above: Office 365 services only trust tokens issued by login.microsoftonline.com. Everything flows through WAAD.

That was a pretty high-level discussion of how things work, but unfortunately it’s missing a few key pieces like DirSync. In my next post I’ll dive much deeper into the inner workings of all these bits and pieces explaining how Windows Azure Active Directory federates with your on premise Active Directory.

Guide to Claims-Based Identity Second Edition

by Steve Syfuhs / December 13, 2011 10:28 AM

It looks like the Guide to Claims-Based Identity and Access Control was released as a second edition!

Take a look at the list of authors:

If you want a list of experts on security then look no further. These guys are some of the best in the industry and are my go-to for resources on Claims.

Input Validation: The Good, The Bad, and the What the Hell are you Doing?

by Steve Syfuhs / November 28, 2011 11:00 AM

Good morning class!

Pop quiz: How many of you do proper input validation in your ASP.NET site, WebForms, MVC, or otherwise?

Some Background

There is an axiom in computer science: never trust user input because it's guaranteed to contain invalid data at some point.

In security we have a similar axiom: never trust user input because it's guaranteed to contain invalid data at some point, and your code is bound to contain a security vulnerability somewhere, somehow. Granted, it doesn't flow as well as the former, but the point still stands.

The solution to this problem is conceptually simple: validate, validate, validate. Every single piece of input that is received from a user should be validated.

Of course when anyone says something is a simple concept it's bound to be stupidly complex to get the implementation right. Unfortunately proper validation is not immune to this problem. Why?

The Problem

Our applications are driven by user data. Without data our applications would be pretty useless. This data is usually pretty domain-specific too so everything we receive should have particular structures, and there's a pretty good chance that a few of these structures are so specific to the organization that there is no well-defined standard. By that I mean it becomes pretty difficult to validate certain data structures if they are custom designed and potentially highly-complex.

So we have this problem. First, if we don't validate that the stuff we are given is clean, our application starts behaving oddly and that limits the usefulness of the application. Second, if we don't validate that the stuff we are given is clean, and there is a bug in the code, we have a potential vulnerability that could wreak havoc for the users.

The Solution

The solution as stated above is to validate all the input, both from a business perspective and from a security perspective. We want it to go something like this:

In this post we are going to look at the best way to validate the security of incoming data within ASP.NET. This requires looking into how ASP.NET processes input from the user.

When ASP.NET receives something from the user it can come from four different vectors:

  • Within the Query String (?foo=bar)
  • Within the Form (via a POST)
  • Within a cookie
  • Within the server variables (a collection generated from HTTP headers and internal server configuration)

These vectors drive ASP.NET, and you can potentially compromise an application by maliciously modifying any of them.

Pop quiz: How many of you check whether custom cookies exist before trying to use them? Almost everyone, good. Now, how many of you validate that the data within the cookies is, well, valid before using them?

What about checking your HTTP headers?

The Bypass

Luckily ASP.NET has some out-of-the-box behaviors that protect the application from malicious input. Unfortunately ASP.NET isn't very forgiving when it comes to validation. It doesn't distinguish between quasi-good input and bad input, so anything containing an angle bracket causes a YSoD (Yellow Screen of Death).

The de facto fix for this is to do one of two things:

  • Disable validation in the page declaration within WebForms, or stick a [ValidateInput(false)] attribute on an MVC controller
  • Set <pages validateRequest="false"> in web.config

What this will do is tell ASP.NET to basically skip validating the four vectors and let anything in. It was assumed that you would do validation on your own.

Raise your hand if you think this is a bad idea. Okay, keep your hands up if you've never done this for a production application. At this point almost everyone should have put their hands down. I did.

The reason we do this is because as I said before, ASP.NET isn't very forgiving when it comes to validation. It's all or nothing.

What's worse, as ASP.NET got older it started becoming pickier about what it let in so you had more reasons for disabling validation. In .NET 4 validation occurs at a much earlier point. It's a major breaking change:

The request validation feature in ASP.NET provides a certain level of default protection against cross-site scripting (XSS) attacks. In previous versions of ASP.NET, request validation was enabled by default. However, it applied only to ASP.NET pages (.aspx files and their class files) and only when those pages were executing.

In ASP.NET 4, by default, request validation is enabled for all requests, because it is enabled before the BeginRequest phase of an HTTP request. As a result, request validation applies to requests for all ASP.NET resources, not just .aspx page requests. This includes requests such as Web service calls and custom HTTP handlers. Request validation is also active when custom HTTP modules are reading the contents of an HTTP request.

Since backwards compatibility is so important, a configuration attribute was also added to tell ASP.NET to revert to the 2.0 validation mode, meaning that validation occurs later in the request lifecycle, as it did in ASP.NET 2.0:

<httpRuntime requestValidationMode="2.0" />

If you do a search online for request validation almost everyone comes back with this solution. In fact, it became a well known solution with the Windows Identity Foundation in ASP.NET 4.0 because when you do a federated sign on, WIF receives the token as a chunk of XML. The validator doesn't approve because of the angle brackets. If you set the validation mode to 2.0, the validator checks after the request passes through all HttpModules, which is how WIF consumes that token via the WSFederationAuthenticationModule.

The Proper Solution

So we have the problem. We also have built in functionality that solves our problem, but the way it does it kind of sucks (it's not a bad solution, but it's also not extensible). We want a way that doesn't suck.

In earlier versions of ASP.NET the best solution was to disable validation and within a HttpModule check every vector for potentially malicious input. The benefit here is that you have control over what is malicious and what is not. You would write something along these lines:

using System;
using System.Security;
using System.Web;

public class ValidatorHttpModule : IHttpModule
{
    public void Dispose() { }

    public void Init(HttpApplication context)
    {
        context.BeginRequest += new EventHandler(context_BeginRequest);
    }

    void context_BeginRequest(object sender, EventArgs e)
    {
        HttpApplication context = (HttpApplication)sender;

        foreach (var q in context.Request.QueryString)
        {
            if (CheckQueryString(q))
            {
                throw new SecurityException("Bad validation");
            }
        }

        foreach (var f in context.Request.Form)
        {
            if (CheckForm(f))
            {
                throw new SecurityException("Bad validation");
            }
        }

        foreach (var c in context.Request.Cookies)
        {
            if (CheckCookie(c))
            {
                throw new SecurityException("Bad validation");
            }
        }

        foreach (var s in context.Request.ServerVariables)
        {
            if (CheckServerVariable(s))
            {
                throw new SecurityException("Bad validation");
            }
        }
    }

    // <snip />
}

The downside to this approach though is that you are stuck with pretty clunky validation logic. It executes on every single request, which may not always be necessary. You are also forced to execute the code in whatever order your HttpModule happens to be initialized. It won't necessarily execute first, so it won't necessarily protect all parts of your application. Protection that doesn't cover everything from a particular attack isn't very useful. <Cynicism>Half-assed protection is only good when you have half an ass.</Cynicism>

What we want is something that executes before everything else. In our HttpModule we are validating on BeginRequest, but we want to validate before BeginRequest.

The way we do this is with a custom RequestValidator. On a side note, this post may qualify as having the longest introduction ever. In any case, this custom RequestValidator is set within the httpRuntime tag within the web.config:

<httpRuntime requestValidationType="Syfuhs.Web.Security.CustomRequestValidator" />

We create a custom request validator by creating a class with a base class of System.Web.Util.RequestValidator. Then we override the IsValidRequestString method.

This method allows us to find out where the input is coming from, e.g. from a Form or from a cookie etc. This validator is called on each value within the four collections above, but only when a value exists. It saves us the trouble of going over everything in each request. Within an HttpModule we could certainly build out the same functionality by checking contents of each collection, but this saves us the hassle of writing the boilerplate code. It also provides us a way of describing the problem in detail because we can pass an index location of where the problem exists. So if we find a problem at character 173 we can pass that value back to the caller and ASP.NET will throw an exception describing that index. This is how we get such a detailed exception from WIF:

A Potentially Dangerous Request.Form Value Was Detected from the Client (wresult="<t:RequestSecurityTo...")

Our validator class ends up looking like:

using System.Web;
using System.Web.Util;

public class MyCustomRequestValidator : RequestValidator
{
    protected override bool IsValidRequestString(HttpContext context, string value, RequestValidationSource requestValidationSource, string collectionKey, out int validationFailureIndex)
    {
        validationFailureIndex = 0;

        switch (requestValidationSource)
        {
            case RequestValidationSource.Cookies:
                return ValidateCookie(collectionKey, value, out validationFailureIndex);

            case RequestValidationSource.Form:
                return ValidateFormValue(collectionKey, value, out validationFailureIndex);

            // <snip />
        }

        return base.IsValidRequestString(context, value, requestValidationSource, collectionKey, out validationFailureIndex);
    }

    // <snip />
}

Each application has different validation requirements so I've just mocked up how you would create a custom validator.
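To make that concrete, here is a minimal sketch of what the snipped ValidateFormValue helper might look like inside MyCustomRequestValidator. The blacklist and the wresult exemption are assumptions for illustration only; your rules will depend on what your application actually accepts:

// Hypothetical fragments we never expect in legitimate form input.
private static readonly string[] suspiciousFragments = { "<script", "javascript:", "onerror=" };

private bool ValidateFormValue(string key, string value, out int validationFailureIndex)
{
    validationFailureIndex = 0;

    // Let the WS-Federation token response through; it is legitimately XML.
    if (string.Equals(key, "wresult", StringComparison.OrdinalIgnoreCase))
        return true;

    foreach (string fragment in suspiciousFragments)
    {
        int index = value.IndexOf(fragment, StringComparison.OrdinalIgnoreCase);
        if (index >= 0)
        {
            // Tell ASP.NET exactly which character triggered the failure.
            validationFailureIndex = index;
            return false;
        }
    }

    return true;
}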

If you use this design you can easily validate all inputs across the application, and you don't have to turn off validation.

So once again, pop quiz: How many of you do proper input validation?

Strongly Typed Claims

by Steve Syfuhs / November 12, 2011 04:03 PM

Sometimes it's a pain in the neck working with Claims. A lot of times you need to look for a particular claim, and that usually means looping through the claims collection and parsing the value to a particular type.

This little dance is the trade-off for having such a simple interface to a potentially arbitrary collection of claims. Most of the time this works, but every once in a while you need to create a basic user object that contains some strongly typed properties. You could build up a basic object like:

public class User
{
    public string UserName { get; set; }

    public string EmailAddress { get; set; }

    public string Department { get; set; }

    public List<string> Roles { get; set; }
}

This would require you to intercept the IClaimsIdentity object and search through the claims collection, setting each property manually whenever you wanted access to the data. This can get tiresome and is error-prone.
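For comparison, here is a minimal sketch of that manual approach, assuming WIF's Microsoft.IdentityModel.Claims types and the User class above; every new property means another branch to maintain:

private static User BuildUser(IClaimsIdentity identity)
{
    var user = new User { Roles = new List<string>() };

    // Walk the claims collection and copy values by hand.
    foreach (Claim claim in identity.Claims)
    {
        if (claim.ClaimType == ClaimTypes.Name)
            user.UserName = claim.Value;
        else if (claim.ClaimType == ClaimTypes.Email)
            user.EmailAddress = claim.Value;
        else if (claim.ClaimType == ClaimTypes.Role)
            user.Roles.Add(claim.Value);
    }

    return user;
}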

I think I've come up with a relatively complete solution to this problem. Basically it works by creating a custom IClaimsIdentity class that sets a User property through reflection. You can then access the user through Thread.CurrentPrincipal.Identity like this:

TypedClaimsIdentity ident = Thread.CurrentPrincipal.Identity as TypedClaimsIdentity;
string email = ident.User.EmailAddress.Value;
var userRoles = ident.User.Roles;

Once you've defined the particular types and their associated claims, the particular values will be set through reflection. So to declare your user properties, create a class like this:

public class MyTypedClaimsUser : TypedClaims
{
    public MyTypedClaimsUser()
    {
        this.Name = new TypedClaim<string>();
        this.EmailAddress = new TypedClaim<string>();
        this.Roles = new List<TypedClaim<string>>();
        this.Expiration = new TypedClaim<DateTime>();
        this.AuthenticationMethod = new TypedClaim<string>();
    }

    [TypedClaim(ClaimTypes.Name, false)]
    public TypedClaim<string> Name { get; private set; }

    [TypedClaim(ClaimTypes.Email, false)]
    public TypedClaim<string> EmailAddress { get; private set; }

    [TypedClaim(ClaimTypes.Role, true)]
    public List<TypedClaim<string>> Roles { get; private set; }

    [TypedClaim(ClaimTypes.Expiration, true)]
    public TypedClaim<DateTime> Expiration { get; private set; }

    [TypedClaim(ClaimTypes.AuthenticationMethod, false)]
    public TypedClaim<string> AuthenticationMethod { get; private set; }

    [TypedClaim(ClaimTypes.GroupSid, false)]
    public TypedClaim<string> GroupSid { get; private set; }
}

Each property must be defined a certain way. In particular, each property must be decorated with the TypedClaimAttribute. This attribute helps the reflection code associate the property with the expected claim, so the Name property will always be mapped to the ClaimTypes.Name claim type, which is the http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name claim. It also warns the code when a claim is likely to have multiple values, like the Role claim.

Each property is also of a particular type: TypedClaim<T>. In theory I could have just used simple types like strings, but by going this route you get access to claim metadata like Name.ClaimType or Name.Issuer. TypedClaim<T> inherits from Claim.
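The post doesn't show TypedClaim<T> itself, so here is a hedged sketch of roughly what it might look like. The real class ships in the download below and also has a parameterless constructor and one that wraps an existing Claim:

// Illustrative only: inherits from Claim so metadata like ClaimType and
// Issuer come along for free, while Value is strongly typed.
public class TypedClaim<T> : Claim
{
    public TypedClaim(string claimType, T value)
        : base(claimType, value == null ? string.Empty : value.ToString())
    {
        Value = value;
    }

    // Hides the string Claim.Value with the strongly typed equivalent.
    public new T Value { get; set; }
}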

So how does this all work? Well first you need to be able to add the User object into the Identity object. This is done by creating a custom IClaimsIdentity class:

[Serializable]
public class TypedClaimsIdentity : IClaimsIdentity
{
    public TypedClaimsIdentity(IClaimsIdentity identity)
    {
        user = new MyTypedClaimsUser();

        if (identity.Claims != null)
            this.claims = identity.Claims;
        else
            claims = new ClaimCollection(identity);

        this.Actor = identity.Actor;
        this.AuthenticationType = identity.AuthenticationType;

        Update();
    }

    private void Update()
    {
        user.Update(this.claims);
    }

    private MyTypedClaimsUser user;

    public MyTypedClaimsUser User
    {
        get
        {
            Update();
            return user;
        }
    }

    private ClaimCollection claims;

    public ClaimCollection Claims
    {
        get
        {
            Update();
            return claims;
        }
    }

    public IClaimsIdentity Actor { get; set; }

    public SecurityToken BootstrapToken { get; set; }

    public IClaimsIdentity Copy()
    {
        ClaimsIdentity claimsIdentity = new ClaimsIdentity(this.AuthenticationType);

        if (this.Claims != null)
        {
            claimsIdentity.Claims.AddRange(claims.CopyWithSubject(claimsIdentity));
        }

        claimsIdentity.Label = this.Label;
        claimsIdentity.NameClaimType = this.NameClaimType;
        claimsIdentity.RoleClaimType = this.RoleClaimType;
        claimsIdentity.BootstrapToken = this.BootstrapToken;

        return claimsIdentity;
    }

    public string Label { get; set; }

    public string NameClaimType { get; set; }

    public string RoleClaimType { get; set; }

    public string AuthenticationType { get; private set; }

    public bool IsAuthenticated { get { return claims.Count > 0; } }

    public string Name { get { return User.Name.Value; } }
}

There isn't anything spectacularly interesting about this class. The important part is the constructor. It only accepts an IClaimsIdentity object because it's designed as a way to wrap around an already created identity. It then updates the User object through Update().

The User object is updated through reflection. The Update() method calls User.Update(…), which is defined within the base class of MyTypedClaimsUser. This will call into a helper class that looks through the User object and finds any properties decorated with the TypedClaimAttribute.

EDIT: When it comes to reflection, there is always a better way to do something. My original code was mostly a PoC and didn't make use of existing .NET-isms. I've edited this bit to include the code changes.

The helper class was originally a bit clunky because all it did was look through the properties, if/else through their types, and parse each value:

if (type == typeof(string))
{
    return new TypedClaim<string>(selectedClaims.First()) { Value = selectedClaims.First().Value };
}

This really isn't the smartest way to do it because .NET already contains some pretty strong conversion functions; specifically Convert.ChangeType(value, type).

Going this route requires generating the proper TypedClaim<T> though. Many thanks to Anna Lear for pointing out the MakeGenericType(…) method, which lets you take a generic type definition and construct a closed generic type with the specified type parameters. That way I could dynamically pass a type into a generic without hardcoding anything. This allows the TypedClaim<T> to be created at runtime without having to code for each particular type parameter. So you end up with basic logic along the lines of:

Type constructed = typeof(TypedClaim<>).MakeGenericType(new Type[] { genericParamType });

object val = Convert.ChangeType(claim.Value, genericParamType);

return Activator.CreateInstance(constructed, claim.ClaimType, val);

The Activator.CreateInstance method will construct an instance of the particular type, which will eventually be passed into PropertyInfo.SetValue(…).
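Pulling those pieces together, a sketch of the helper that fills in a single property might look like this; the method name and exact shape are illustrative rather than the code from the download:

// Illustrative helper: given a property declared as TypedClaim<T> and
// decorated with TypedClaimAttribute, plus the matching claim, build a
// TypedClaim<T> and assign it.
private static void SetTypedProperty(object user, PropertyInfo property, Claim claim)
{
    // The property is declared as TypedClaim<T>; pull out T.
    Type genericParamType = property.PropertyType.GetGenericArguments()[0];

    // Close the open generic TypedClaim<> over T at runtime.
    Type constructed = typeof(TypedClaim<>).MakeGenericType(new Type[] { genericParamType });

    // Convert the claim's string value into T (e.g. DateTime for Expiration).
    object val = Convert.ChangeType(claim.Value, genericParamType);

    // Construct TypedClaim<T>(claimType, value) and set it on the User object.
    object typedClaim = Activator.CreateInstance(constructed, claim.ClaimType, val);
    property.SetValue(user, typedClaim, null);
}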

Finally, it's time to integrate this into your web application. The best location is probably going to be through a custom ClaimsAuthenticationManager. It works like this:

public class TypedClaimsAuthenticationManager : ClaimsAuthenticationManager
{
    public override IClaimsPrincipal Authenticate(string resourceName, IClaimsPrincipal incomingPrincipal)
    {
        if (!incomingPrincipal.Identity.IsAuthenticated)
            return base.Authenticate(resourceName, incomingPrincipal);

        for (int i = 0; i < incomingPrincipal.Identities.Count; i++)
            incomingPrincipal.Identities[i] = new TypedClaimsIdentity(incomingPrincipal.Identities[i]);

        return base.Authenticate(resourceName, incomingPrincipal);
    }
}

Then to tell WIF about this new CAM you need to make a change to the web.config. Within the microsoft.identityModel/service section, add this:

<claimsAuthenticationManager type="Syfuhs.IdentityModel.TypedClaimsAuthenticationManager, Syfuhs.IdentityModel" />

By dynamically setting the values of the user object, you can create a fairly robust identity model for your application.

You can download the updated code here: typedclaimsv2.zip (6.21 kb)

You can download the original code here: typedclaims.zip (5.61 kb)

The Importance of Elevating Privilege

by Steve Syfuhs / August 28, 2011 04:00 PM

The biggest detractor to Single Sign On is the same thing that makes it so appealing – you only need to prove your identity once. This scares the hell out of some people because if you can compromise a user's session in one application it's possible to affect other applications. Congratulations: checking your Facebook profile just caused your online store to delete all its orders. Let's break that attack down a little.

  • You just signed into Facebook and checked your [insert something to check here] from some friend. That contained a link to something malicious.
  • You click the link, and it opens a page that contains an iframe. The iframe points to a URL for your administration portal of the online store with a couple parameters in the query string telling the store to delete all the incoming orders.
  • At this point you don't have a session with the administration portal and in a pre-SSO world it would redirect you to a login page. This would stop most attacks because either a) the iframe is too small to show the page, or b) (hopefully) the user is smart enough to realize that a link from a friend on Facebook shouldn't redirect you to your online store's administration portal. In a post-SSO world, the portal would redirect you to the STS of choice and that STS already has you signed in (imagine what else could happen in this situation if you were using Facebook as your identity provider).
  • So you've signed into the STS already, and it doesn't prompt for credentials. It redirects you to the administration page you were originally redirected away from, but this time with a session. The page is pulled up, the query string parameters are parsed, and the orders are deleted.

There are certainly ways to stop this, as parts of this attack are a bit trivial. For instance you could pop up an OK/Cancel dialog asking "are you sure you want to delete these?", but for the sake of discussion let's think about this at a high level.

The biggest problem with this scenario is that deleting orders doesn't require anything more than being signed in. By default you have the highest privileges available.

This problem is similar to the problem many users of Windows XP had. They were, by default, running with administrative privileges. This led to a bunch of problems because any application running could do whatever it pleased on the system. Malware was rampant, and worse, users were just doing all-around stupid things because they didn't know what they were doing but they had the permissions necessary to do it.

The solution to that problem is to give users non-administrative privileges by default, and when something requires higher privileges, make them re-authenticate and temporarily run with those higher privileges. The key here is that you run with higher privileges only temporarily. However, security lost the argument and Microsoft caved while developing Windows Vista, creating User Account Control (UAC). By default a user is an administrator, but they don't run with administrative privileges. Their user token is a stripped-down administrator token, so they only have non-administrative privileges. In order to take full advantage of the administrator token, a user has to elevate and request the full token temporarily. This is a stop-gap solution though, because it's theoretically possible to circumvent UAC since the administrative token still exists. It also doesn't require you to re-authenticate – you just have to approve the elevation.

As more and more things are moving to the web it's important that we don't lose control over privileges. It's still very important that you don't have administrative privileges by default because, frankly, you probably don't need them all the time.

Some web applications already require elevation. For instance, consider online banking sites. When I sign in I have a default set of privileges. I can view my accounts and transfer money between my accounts. Anything else requires that I re-authenticate myself by entering a private PIN. So, for instance, I cannot transfer money to an account that doesn't belong to me without proving that it really is me making the transfer.

There are a couple of ways you can design a web application that requires privilege elevation. Let's take a look at how to do it with Claims Based Authentication and WIF.

First off, let's look at the protocol. Out of the box WIF supports the WS-Federation protocol. The passive version of the protocol supports a query parameter of wauth. This parameter defines how authentication should happen. The values for it are mostly specific to each STS; however, there are a few well-defined values that the SAML protocol specifies. These values are passed to the STS to tell it to authenticate using a particular method. Here are some of the most often used:

Authentication Type/Credential    wauth Value
Password                          urn:oasis:names:tc:SAML:1.0:am:password
Kerberos                          urn:ietf:rfc:1510
TLS                               urn:ietf:rfc:2246
PKI/X509                          urn:oasis:names:tc:SAML:1.0:am:X509-PKI
Default                           urn:oasis:names:tc:SAML:1.0:am:unspecified

When you pass one of these values to the STS during the sign-in request, the STS should then request that particular type of credential. The wauth parameter supports arbitrary values, so you can use whatever you like. Therefore we can create a value that tells the STS we want to re-authenticate because of an elevation request.

All you have to do is redirect to the STS with the wauth parameter:

https://yoursts/authenticate?wa=wsignin1.0&wtrealm=uri:myrp&wauth=urn:super:secure:elevation:method
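In code, that redirect from the relying party might look something like this minimal sketch; the STS address, realm, and elevation URN are the placeholder values from above:

// Hypothetical helper in the RP: send the user to the STS asking for the
// custom elevation authentication method via wauth.
private void RequestElevation(HttpContext context)
{
    string redirectUrl = string.Format(
        "https://yoursts/authenticate?wa=wsignin1.0&wtrealm={0}&wauth={1}",
        HttpUtility.UrlEncode("uri:myrp"),
        HttpUtility.UrlEncode("urn:super:secure:elevation:method"));

    context.Response.Redirect(redirectUrl);
}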

Once the user has re-authenticated you need to tell the relying party somehow. This is where the Authentication Method claim comes in handy:

http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationmethod

Just add the claim to the output identity:

protected override IClaimsIdentity GetOutputClaimsIdentity(IClaimsPrincipal principal, RequestSecurityToken request, Scope scope)
{
    IClaimsIdentity ident = principal.Identity as IClaimsIdentity;
    ident.Claims.Add(new Claim(ClaimTypes.AuthenticationMethod, "urn:super:secure:elevation:method"));
    // finish filling claims...
    return ident;
}

At that point the relying party can then check to see whether the method satisfies the request. You could write an extension method like:

public static bool IsElevated(this IClaimsPrincipal principal)
{
    return principal.Identity.AuthenticationType == "urn:super:secure:elevation:method";
}

And then have a bit of code to check:

var p = Thread.CurrentPrincipal as IClaimsPrincipal;
if (p != null && p.IsElevated())
{
    DoSomethingRequiringElevation();
}

This satisfies half the requirements for elevating privilege. We need to make it so the user is only elevated for a short period of time. We can do this in an event handler after the token is received by the RP.  In Global.asax we could do something like:

void Application_Start(object sender, EventArgs e)
{
    FederatedAuthentication.SessionAuthenticationModule.SessionSecurityTokenReceived 
        += new EventHandler<SessionSecurityTokenReceivedEventArgs> (SessionAuthenticationModule_SessionSecurityTokenReceived);
}
void SessionAuthenticationModule_SessionSecurityTokenReceived(object sender, SessionSecurityTokenReceivedEventArgs e)
{
    if (e.SessionToken.ClaimsPrincipal.IsElevated())
    {
        SessionSecurityToken token = new SessionSecurityToken(e.SessionToken.ClaimsPrincipal, e.SessionToken.Context, e.SessionToken.ValidFrom, e.SessionToken.ValidFrom.AddMinutes(15));
        e.SessionToken = token;
    }
}

This will check to see if the incoming token has been elevated, and if it has, set the lifetime of the token to 15 minutes.

There are other places where this could occur, like within the STS itself; however, this value may need to be independent of the STS.

As I said earlier, as more and more things are moving to the web it's important that we don't lose control of privileges. By requiring certain types of authentication in our relying parties, we can easily support elevation by requiring the STS to re-authenticate.

Adjusting the Home Realm Discovery page in ADFS to support Email Addresses

by Steve Syfuhs / July 12, 2011 04:00 PM

Over on the Geneva forums a question was asked:

Does anyone have an example of how to change the HomeRealmDiscovery Page in ADFSv2 to accept an e-mail address in a text field and based upon that (actually the domain suffix) select the correct Claims/Identity Provider?

It's pretty easy to modify the HomeRealmDiscovery page, so I thought I'd give it a go.

Based on the question, two things need to be known: the email address and the home realm URI.  Then we need to translate the email address to a home realm URI and pass it on to ADFS.

This could be done a couple of ways: either by keeping a list of email addresses and their related home realms, or a list of email domains and their related home realms.  For the sake of this being an example, let's do both.

I've created a simple SQL database with three tables:

[Image: database diagram showing the EmailAddress, Domain, and HomeRealm tables]

Each entry in the EmailAddress and Domain tables has a pointer to the home realm URI (you can find the schema in the zip file below).

Then I created a new ADFS web project and added a new entity model to it:

[Image: Entity Framework model generated from the home realm discovery database]

From there I modified the HomeRealmDiscovery page to do the check:

//------------------------------------------------------------
// Copyright (c) Microsoft Corporation.  All rights reserved.
//------------------------------------------------------------

using System;

using Microsoft.IdentityServer.Web.Configuration;
using Microsoft.IdentityServer.Web.UI;
using AdfsHomeRealm.Data;
using System.Linq;

public partial class HomeRealmDiscovery : Microsoft.IdentityServer.Web.UI.HomeRealmDiscoveryPage
{
    protected void Page_Init(object sender, EventArgs e)
    {
    }

    protected void PassiveSignInButton_Click(object sender, EventArgs e)
    {
        string email = txtEmail.Text;

        if (string.IsNullOrWhiteSpace(email))
        {
            SetError("Please enter an email address");
            return;
        }

        try
        {
            SelectHomeRealm(FindHomeRealmByEmail(email));
        }
        catch (ApplicationException)
        {
            SetError("Cannot find home realm based on email address");
        }
    }

    private string FindHomeRealmByEmail(string email)
    {
        using (AdfsHomeRealmDiscoveryEntities en = new AdfsHomeRealmDiscoveryEntities())
        {
            var emailRealms = from e in en.EmailAddresses where e.EmailAddress1.Equals(email) select e;

            if (emailRealms.Any()) // email address exists
                return emailRealms.First().HomeRealm.HomeRealmUri;

            // email address does not exist
            string domain = ParseDomain(email);

            var domainRealms = from d in en.Domains where d.DomainAddress.Equals(domain) select d;

            if (domainRealms.Any()) // domain exists
                return domainRealms.First().HomeRealm.HomeRealmUri;

            // neither email nor domain exist
            throw new ApplicationException();
        }
    }

    private string ParseDomain(string email)
    {
        if (!email.Contains("@"))
            return email;

        return email.Substring(email.IndexOf("@") + 1);
    }

    private void SetError(string p)
    {
        lblError.Text = p;
    }
}

 

If you compare it to the original code, there were some changes.  I removed the code that loaded the original home realm drop-down list, and removed the code that chose the home realm based on the drop-down list's selected value.

You can find my code here: http://www.syfuhs.net/AdfsHomeRealm.zip

SAML Protocol Extension CTP for Windows Identity Foundation

by Steve Syfuhs / May 15, 2011 04:00 PM

Earlier this morning the Geneva (WIF/ADFS) Product Team announced a CTP for supporting the SAML protocol within WIF.  WIF has supported SAML tokens since its inception; however, it hasn't supported the SAML protocol until now.  According to the team:

This WIF extension allows .NET developers to easily create claims-based SP-Lite compliant Service Provider applications that use SAML 2.0 conformant identity providers such as AD FS 2.0.

This is the first I've seen of this CTP, so I decided to jump into the Quick Start solution to get a feel for what's going on.  Here is the solution hierarchy:

[Image: Quick Start solution hierarchy showing the sample identity provider, service provider, and utilities projects]

There isn't much to it.  We have the sample identity provider that generates a token for us, a relying party application (service provider), and a utilities project to help with some sample-related duties.

In most cases, we really only need to worry about the Service Provider, as the IdP probably already exists.  I think creating an IdP using this framework is for a different post.

If we consider that WIF mostly works via configuration changes to the web.config, it stands to reason that the SAML extensions will too.  Let's take a look at the web.config file.

There are three new things in the web.config that are different from a default-configured WIF application.

First we see a new configSection declaration:

<section name="microsoft.identityModel.saml" type="Microsoft.IdentityModel.Web.Configuration.MicrosoftIdentityModelSamlSection, Microsoft.IdentityModel.Protocols"/>

This creates a new configuration section called microsoft.identityModel.saml.

Interestingly, this doesn't actually contain much.  Just pointers to metadata:

<microsoft.identityModel.saml metadata="bin\App_Data\serviceprovider.xml">
    <identityProviders>
        <metadata file="bin\App_Data\identityprovider.xml"/>
    </identityProviders>
</microsoft.identityModel.saml>

Now this is a step away from WIF-ness.  These metadata documents are consumed by the extension.  They contain certificates and endpoint references:

<SingleSignOnService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect" Location="http://localhost:6010/IdentityProvider/saml/redirect/sso"/>

I can see some extensibility options here.

Finally, an HTTP Module is added to handle the token response:

<add name="Saml2AuthenticationModule" type="Microsoft.IdentityModel.Web.Saml2AuthenticationModule"/>

This module works similarly to the WSFederationAuthenticationModule used by WIF out of the box.

It then uses the SessionAuthenticationModule to handle session creation and management, which is the same module used by WIF.

As you start digging through the rest of the project, there isn't actually anything too surprising to see.  The default.aspx page just grabs a claim from the IClaimsIdentity object and adds a control used by the sample to display SAML data.  There is a sign-out button though, which calls the following line of code:

Saml2AuthenticationModule.Current.SignOut( "~/Login.aspx" );

In the Login.aspx page there is a sign in button that calls a similar line of code:

Saml2AuthenticationModule.Current.SignIn( "~/Default.aspx" );

All in all, this SAML protocol extension seems to make federating with a SAML IdP fairly simple and straightforward.
