
Windows Azure Pack Authentication Part 3 – Using a Third Party IdP

by Steve Syfuhs / February 07, 2014 06:22 PM

In the previous installments of this series we looked at how Windows Azure Pack authenticates users and how it’s configured out of the box for federation. This time around we’re going to look at how you can configure federation with a third party IdP.

Microsoft designed Windows Azure Pack the right way. It supports federation with industry protocols out of the box. You can’t say that for many services, and you certainly can’t say that those services support it natively for all versions – more often than not you have to pay extra for it.

Windows Azure Pack supports federation, and actually uses it to authenticate users by default. This little fact makes it easy to federate to a 3rd party IdP.

If we search around we can find lots of resources on federating to ADFS, as that's Microsoft's federation product, and there are a number of good walkthroughs (some in German) on how you can get it working. If you want to use ADFS go read one or all of those articles, as everything we talk about today will be about using a non-Microsoft federation service.

Before we begin though I’d like to point out that Microsoft does have some resources on using 3rd party IdPs, but unfortunately the information is a bit thin in some places.

Prerequisites

Federation is a complex beast and we should be clear about what is required to get it working. In no particular order you need the following:

  • STS that supports the WS-Federation (passive) protocol
  • STS that supports WS-Federation wrapped JSON Web Tokens (JWT)
  • Optional: STS that supports WS-Trust + JWT

If you plan to use the public APIs with federated accounts then you will need a STS that supports WS-Trust + JWT.

If you don't have a STS that can support these requirements then you should really consider taking a look at ADFS, or if you're looking for customization, Thinktecture Identity Server. Both are top notch IdPs (edit: insert pitch about the IdP my company builds and sells as well [edit-edit: our next version natively supports JWT] -- sorry, this concludes the not-so-regularly-scheduled product placement).

Another option is to roll your own IdP. Don’t do this. No seriously, don’t. It’s a complicated mess. You’re way better off using the Thinktecture server and extending it to fit your needs.

Supposing though that you already have an IdP and want to support JWT, here's how we can do it. In this context the IdP is the overarching identity providing system and the STS is simply the service issuing tokens.

Skip this next section if you just want to see how to configure Windows Azure Pack. That’s the main part that’s lacking in the MSDN documentation.

JWT via IdentityModel

First off, you need to be using .NET 4.5, and you need to be using the 4.5 IdentityModel stack. You can't use the original 3.5 bits.

At this point I’m going to assume you’ve got a working IdP already. There are lots of articles out there explaining how to build one. We’re just going to mod the STS.

Before making any code changes though you need to add the JWT token handler, which is easily installed via NuGet (I ♥ NuGet):

PM> Install-Package System.IdentityModel.Tokens.Jwt

This will need to be added to the project that exposes your STS configuration class.

Next, we need to inject the token handler into the STS pipeline. This can easily be done by adding an entry to the web.config system.identityModel section, along these lines (the assembly-qualified type name will vary with the version of the JWT package you installed):
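<system.identityModel>
  <identityConfiguration>
    <securityTokenHandlers>
      <add type="System.IdentityModel.Tokens.JWTSecurityTokenHandler, System.IdentityModel.Tokens.JWT" />
    </securityTokenHandlers>
  </identityConfiguration>
</system.identityModel>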

Or if you want to hardcode it you can add it to your SecurityTokenServiceConfiguration class.
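A minimal sketch (the derived configuration class here is illustrative; the AddOrReplace call is the important part):

public class MySecurityTokenServiceConfiguration : SecurityTokenServiceConfiguration
{
    public MySecurityTokenServiceConfiguration()
        : base("https://sts.example.com")  // issuer name is a placeholder
    {
        SecurityTokenService = typeof(MySecurityTokenService);

        // Swap in the JWT handler so the STS can mint JWTs.
        SecurityTokenHandlers.AddOrReplace(new JWTSecurityTokenHandler());
    }
}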

There are of course other (potentially better) ways you can add it in, but this serves our purpose for the sake of a sample.

By adding the JWT token handler into the STS pipeline we can begin issuing JWTs to any relying parties that request one. This poses a problem though because passive requests don’t have a requested token type tacked on. Active (WS-Trust) requests do, but not passive. So we need to specify that a JWT should be minted instead of a SAML token. This can be done in the GetScope method of the STS class.
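Something along these lines should work, assuming your STS derives from SecurityTokenService (the token type URI is one of the values returned by GetTokenTypeIdentifiers() on the JWT handler):

protected override Scope GetScope(ClaimsPrincipal principal, RequestSecurityToken request)
{
    // Validate request.AppliesTo as you normally would before handing back a scope.
    var scope = new Scope(
        request.AppliesTo.Uri.OriginalString,
        SecurityTokenServiceConfiguration.SigningCredentials);

    scope.TokenEncryptionRequired = false;
    scope.ReplyToAddress = scope.AppliesToAddress;

    // The important bit: tell WIF to mint a JWT instead of the default SAML token.
    scope.TokenType = "urn:ietf:params:oauth:token-type:jwt";

    return scope;
}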

All we really needed to do was specify the TokenType as WIF will use that to determine which token handler should be used to mint the token. We know this is the value to use because it’s exposed by the GetTokenTypeIdentifiers() method in the JWTSecurityTokenHandler class.

Did I mention the JWT library is open source?

So at this point, if we made a request for a token to the STS, we could receive a WS-Federation wrapped JWT.

If the idea of using a JWT instead of a SAML token appeals to you, you can configure your app to use the JWT token handler similar to Dominick’s sample.

If you were submitting a WS-Trust RST to the STS you could use client code along these lines (a sketch using the .NET 4.5 WSTrustChannelFactory; the endpoint, credentials, and AppliesTo realm are placeholders for your environment):
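// Requires references to System.ServiceModel and the 4.5 System.IdentityModel bits.
var binding = new WS2007HttpBinding(SecurityMode.TransportWithMessageCredential);
binding.Security.Message.ClientCredentialType = MessageCredentialType.UserName;
binding.Security.Message.EstablishSecurityContext = false;

var factory = new WSTrustChannelFactory(
    binding,
    new EndpointAddress("https://sts.example.com/wstrust/issue"));
factory.TrustVersion = TrustVersion.WSTrust13;
factory.Credentials.UserName.UserName = "user@example.com";
factory.Credentials.UserName.Password = "password";

var rst = new RequestSecurityToken
{
    RequestType = RequestTypes.Issue,
    KeyType = KeyTypes.Bearer,
    TokenType = "urn:ietf:params:oauth:token-type:jwt",   // ask for a JWT explicitly
    AppliesTo = new EndpointReference("http://azureservices/TenantSite")
};

var channel = factory.CreateChannel();
var token = channel.Issue(rst);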

When the GetScope method is called the request.TokenType should be set to whatever you passed in at the client. For more information on service calls you can take a look at the whitepaper Claims-Based Identity in Windows Azure Pack (docx). A future installment of this series might have more information about using services.

Lastly, we need to sign the JWT. The only caveat to using the JWT token handler is that the minimum RSA key size is 2048 bits. If you’re using a key smaller than that then please upgrade it. We’re going to overlook the fact that the MSDN article shows how to bypass minimum key sizes. Seriously. Don’t do it. I don’t want to have to explain why (putting paranoia aside for a moment, 1024 is being deprecated by Windows and related services in the near future anyway).

Issuing Tokens to Windows Azure Pack

So now we’re at a point where we can mint a JWT token. The question we need to ask now is what claims should this token contain? Looking at Part 1 we see that the Admin Portal requires UPN and Group claims. The tenant portal only requires the UPN claim.

Lucky for us the JWT token handler is smart. It knows to transform certain known XML-token-friendly-claim-types to JWT friendly claim types. In our case we can use http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn in our ClaimsIdentity to map to the UPN claim, and http://schemas.xmlsoap.org/claims/Group to map to our Group claim.
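For example, a minimal ClaimsIdentity for the Tenant Portal might look like this (the UPN and group values are placeholders):

var identity = new ClaimsIdentity(new[]
{
    // Transformed by the JWT handler into the JWT-friendly UPN claim.
    new Claim("http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn", "steve@yourdomain.com"),

    // Group claim; only the Admin Portal actually requires this one.
    new Claim("http://schemas.xmlsoap.org/claims/Group", "WapAdmins")
}, "Federation");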

Then we need to determine where to send the token, and who to address it to. Both the tenant and admin sites have Federation Metadata documents that specify this information for us. If you’ve got an IdP that can parse the metadata then all you need to do is point it to https://yourtenantsite/FederationMetadata/2007-06/FederationMetadata.xml for the tenant configuration or https://youradminsite/FederationMetadata/2007-06/FederationMetadata.xml for the admin configuration.

Of course, this information will also map up to the configuration elements we looked at in Part 2. That’ll tell us the Audience URI and the Reply To for both sites.

Finally we have everything we need to mint the token, address it, and send it on its way.

Configuring Windows Azure Pack to Trust your Token

The token's been sent, and once it hits either the tenant or admin site it'll promptly be ignored and you'll get an ugly error message saying "nope, not gonna happen, bub."

We therefore need to configure Windows Azure Pack to trust our token. Looking at MSDN we see some somewhat useful information telling us what we need to modify, but frankly, it's missing a bunch of information so we're going to ignore it.

First things first: if your IdP publishes a Federation Metadata document then you can just configure everything via PowerShell using Set-MgmtSvcRelyingPartySettings. Something like the following should do it (the metadata URL and connection string are placeholders; check Get-Help for the full parameter list):
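PS C:\Windows\system32> Set-MgmtSvcRelyingPartySettings -Target Admin `
    -MetadataEndpoint 'https://sts.example.com/FederationMetadata/2007-06/FederationMetadata.xml' `
    -ConnectionString 'Data Source=localhost;Initial Catalog=Microsoft.MgmtSvc.PortalConfigStore;Integrated Security=True'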

You can replace the target "Admin" with "Tenant" if you want to configure the Tenant Portal. The only caveat with doing it this way is that the metadata document needs to be accessible from the server. I've submitted a feature request that they support local file paths too; hopefully they listen! Since the parameter takes the full URL you can put the metadata document somewhere public if it's not normally accessible. You'll only need the metadata accessible while applying this configuration.

If the cmdlet completed successfully then you should be able to log in from your own IdP. That's all there is to it. I'd seriously recommend going this route instead of configuring things manually.

Otherwise, let's carry on.

Since we can’t import our federation metadata (since we probably don’t have any), we need to configure things manually. To do that we need to modify settings in the database.

Looking back to Part 2 we see all the configuration elements that enable our federated trust to the default IdPs. We’ll need to update a few settings across the Microsoft.MgmtSvc.Store and Microsoft.MgmtSvc.PortalConfigStore databases.

The MSDN documentation says to modify the settings in the PortalConfigStore database. It's wrong – or rather, incomplete; that's only part of the process.

The PortalConfigStore database contains the settings used by the Tenant and Admin Portals to validate and request tokens. We need to modify these settings to use our custom IdP. To do so locate the Authentication.IdentityProvider setting in the [Config].[Settings] table. The namespace we need to choose depends on which site we want to configure. In our case we select the Admin namespace. As we saw last time it looks something like this (reconstructed here with placeholder values; by default the Admin Portal trusts the Windows Auth Site):
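{
   "Realm":"http://azureservices/WindowsAuthSite",
   "Endpoint":"https://yourwindowsauthsite/wsfederation/issue",
   "Certificates":[
      "MIIC2...ADLt0="
   ]
}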

We need to substitute our STS information here. The Realm is whatever your STS issuer is, and the Endpoint is wherever your WS-Federation endpoint is located. The Certificates element should contain a base 64 encoded representation of your signing certificate (remember, just the public key).

In my experience I’ve had to do an IISRESET on the portals to get the settings refreshed. I might just be impatient though.

Once those values are replaced you can try logging in. You should be redirected to your IdP and if you issue the token properly it’ll hit the portal and you should be logged in. Unfortunately this’ll actually fail with a non-useful error message.

[screenshot of the error page]

Who can guess why? So far I’ve stated that the MSDN documentation is missing information. What have we missed? Hopefully if you’ve read the first two parts of this series you’re yelling at the screen telling me to get on with it already because you’ve caught on to what I’m saying.

We haven’t configured the API services to trust our STS! Oops.

With that being said, we now have proof that Windows Azure Pack flows the token to the services from the Portal and, more importantly, the services validate the token. Cool!

Anyway, now to configure the APIs. Warning: complicated.

In the Microsoft.MgmtSvc.Store database locate the Settings table and then locate the Authentication.IdentityProvider.Secondary element in the AdminAPI namespace. We need to update it with the exact same values as we put into the configuration element in the other database.
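If you'd rather not poke the table directly, the MgmtSvcConfig module's Get-MgmtSvcDatabaseSetting and Set-MgmtSvcDatabaseSetting cmdlets read and write these values. A sketch (parameter names assumed from the cmdlet help, so verify with Get-Help first):

PS C:\Windows\system32> Set-MgmtSvcDatabaseSetting -Namespace AdminAPI `
    -Name Authentication.IdentityProvider.Secondary `
    -Value '{"Realm":"https://sts.example.com/","Endpoint":"https://sts.example.com/wsfederation/issue","Certificates":["MIIC2...ADLt0="]}' `
    -ConnectionString 'Data Source=localhost;Initial Catalog=Microsoft.MgmtSvc.Store;Integrated Security=True'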

If you’re only wanting to configure the Tenant Portal you’d want to modify the Authentication.IdentityProvider.Primary configuration element. Be careful with the Primary/Secondary elements as they can get confusing.

If you’re configuring the Admin Portal you’ll need to update the Authentication.IdentityProvider.Secondary configuration element in the TenantAPI namespace to use the configuration you specified for the Admin Portal as well. As I said previously, I think this is because the Admin Portal calls into the Tenant API. The Admin Portal will use an admin-trusted token – therefore the TenantAPI needs to trust the admin’s STS.

Now that you’ve completed configuration you can do an IISRESET and try logging in. If you configured everything properly you should now be able to log in from your own IdP.

Troubleshooting

For those rock star Ops people who understand identity this guide was likely pretty easy to follow, understand, and implement. For everyone else though, this was probably a pain in the neck. Here are some troubleshooting tips.

Review the Event Logs
It’s surprising how many people forget that a lot of applications will write errors to the Windows Event Log. Windows Azure Pack has quite a number of logs that you can review for more information. If you’re trying to track down an issue in the portals look in the MgmtSvc-*Site where * is Tenant or Admin. Errors will get logged there. If you’re stuck mucking about the APIs look in the MgmtSvc-*API where * is Tenant, Admin, or TenantPublic.
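For example, assuming the tenant site's log is registered under that name, something like this pulls the most recent entries:

PS C:\Windows\system32> Get-WinEvent -LogName MgmtSvc-TenantSite -MaxEvents 20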

Enable Development Mode
You can enable developer mode in sites by modifying a value in the web.config. Unprotect the web.config by calling:
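PS C:\Windows\system32> Unprotect-MgmtSvcConfiguration -Namespace TenantSite

(TenantSite here; substitute AdminSite if you're working on the Admin Portal.)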

And then locate the appSetting named Microsoft.Azure.Portal.Configuration.PortalConfiguration.DevelopmentMode and set the value to true. Be sure to undo and re-protect the configuration when you're done. You should then get a neat error tracing window showing up in the portals, and more diagnostic information will be logged to the event logs. Probably not wise to do this in a production environment.
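For reference, once unprotected, the entry is just a standard appSetting:

<add key="Microsoft.Azure.Portal.Configuration.PortalConfiguration.DevelopmentMode" value="true" />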

Use the PowerShell CmdLets
There are quite a number of PowerShell cmdlets available for you to learn about the configuration of Windows Azure Pack. If you open the Windows Azure Pack Administration PowerShell console you can see that there are two modules that get loaded that are full of cmdlets:

PS C:\Windows\system32> get-command -Module MgmtSvcConfig

CommandType     Name                                               ModuleName
-----------     ----                                               ----------
Cmdlet          Add-MgmtSvcAdminUser                               MgmtSvcConfig
Cmdlet          Add-MgmtSvcDatabaseUser                            MgmtSvcConfig
Cmdlet          Add-MgmtSvcResourceProviderConfiguration           MgmtSvcConfig
Cmdlet          Get-MgmtSvcAdminUser                               MgmtSvcConfig
Cmdlet          Get-MgmtSvcDatabaseSetting                         MgmtSvcConfig
Cmdlet          Get-MgmtSvcDefaultDatabaseName                     MgmtSvcConfig
Cmdlet          Get-MgmtSvcEndpoint                                MgmtSvcConfig
Cmdlet          Get-MgmtSvcFeature                                 MgmtSvcConfig
Cmdlet          Get-MgmtSvcFqdn                                    MgmtSvcConfig
Cmdlet          Get-MgmtSvcNamespace                               MgmtSvcConfig
Cmdlet          Get-MgmtSvcNotificationSubscriber                  MgmtSvcConfig
Cmdlet          Get-MgmtSvcResourceProviderConfiguration           MgmtSvcConfig
Cmdlet          Get-MgmtSvcSchema                                  MgmtSvcConfig
Cmdlet          Get-MgmtSvcSetting                                 MgmtSvcConfig
Cmdlet          Initialize-MgmtSvcFeature                          MgmtSvcConfig
Cmdlet          Initialize-MgmtSvcProduct                          MgmtSvcConfig
Cmdlet          Install-MgmtSvcDatabase                            MgmtSvcConfig
Cmdlet          New-MgmtSvcMachineKey                              MgmtSvcConfig
Cmdlet          New-MgmtSvcPassword                                MgmtSvcConfig
Cmdlet          New-MgmtSvcResourceProviderConfiguration           MgmtSvcConfig
Cmdlet          New-MgmtSvcSelfSignedCertificate                   MgmtSvcConfig
Cmdlet          Protect-MgmtSvcConfiguration                       MgmtSvcConfig
Cmdlet          Remove-MgmtSvcAdminUser                            MgmtSvcConfig
Cmdlet          Remove-MgmtSvcDatabaseUser                         MgmtSvcConfig
Cmdlet          Remove-MgmtSvcNotificationSubscriber               MgmtSvcConfig
Cmdlet          Remove-MgmtSvcResourceProviderConfiguration        MgmtSvcConfig
Cmdlet          Reset-MgmtSvcPassphrase                            MgmtSvcConfig
Cmdlet          Set-MgmtSvcCeip                                    MgmtSvcConfig
Cmdlet          Set-MgmtSvcDatabaseSetting                         MgmtSvcConfig
Cmdlet          Set-MgmtSvcDatabaseUser                            MgmtSvcConfig
Cmdlet          Set-MgmtSvcFqdn                                    MgmtSvcConfig
Cmdlet          Set-MgmtSvcIdentityProviderSettings                MgmtSvcConfig
Cmdlet          Set-MgmtSvcNotificationSubscriber                  MgmtSvcConfig
Cmdlet          Set-MgmtSvcPassphrase                              MgmtSvcConfig
Cmdlet          Set-MgmtSvcRelyingPartySettings                    MgmtSvcConfig
Cmdlet          Set-MgmtSvcSetting                                 MgmtSvcConfig
Cmdlet          Test-MgmtSvcDatabase                               MgmtSvcConfig
Cmdlet          Test-MgmtSvcPassphrase                             MgmtSvcConfig
Cmdlet          Test-MgmtSvcProtectedConfiguration                 MgmtSvcConfig
Cmdlet          Uninstall-MgmtSvcDatabase                          MgmtSvcConfig
Cmdlet          Unprotect-MgmtSvcConfiguration                     MgmtSvcConfig
Cmdlet          Update-MgmtSvcV1Data                               MgmtSvcConfig

As well as the MgmtSvcAdmin module, which is more so for daily administration.

Read the Windows Azure Pack Claims Whitepaper
See here: Claims-Based Identity in Windows Azure Pack (docx).

Visit the Forums
When in doubt take a look at the forums and ask a question if you’re stuck.

Email Me
Lastly, you can contact me (steve@syfuhs.net) with any questions. I may not have answers but I might be able to find someone who can help.

Conclusion

In the first two parts of this series we looked at how authentication works, how it’s configured, and now in this installment we looked at how we can configure a third party IdP to log in to Windows Azure Pack. If you’re trying to configure Windows Azure Pack to use a custom IdP I imagine this part is the most complicated to figure out and hopefully it was documented well enough. I personally spent a fair amount of time fiddling with settings and most of the information I’ve gathered for this series has been the result of lots of trial and error. With any luck this series has proven useful to you and you have more luck with the configuration than I originally did.

Next time we’ll take a look at how we can consume the public APIs using a third party IdP for authentication.

In the future we might take a look at how we can authenticate requests to a service called from a Windows Azure Pack add-on, and how we can call into Windows Azure Pack APIs from an add-on.

Windows Azure Pack Authentication Part 2

by Steve Syfuhs / January 30, 2014 07:50 PM

Last time we took a look at how Windows Azure Pack authenticates users in the Admin Portal. In this post we are going to look at how authentication works in the Tenant Portal.

Authentication in the Tenant Portal works exactly the same way authentication in the Admin Portal works.

Detailed and informative explanation, right?

Actually, with any luck you've read, and more importantly were able to decipher, my explanations in the last post. If that's the case then you know everything you need to know to continue on here, because this time we're going to go a bit deeper into how authentication is configured. There are a couple minor differences between the Admin sites and Tenant sites, such as the Tenant STS storing users in a standalone SQL database instead of Active Directory, and a set of public service endpoints that also federate with the Tenant STS. For the time being we can ignore the public API, but we may revisit it in the future.

[diagram: Tenant Portal authentication flow]

One of the things this diagram doesn’t show is how the various services store configuration information. This is somewhat important because the Portals and APIs need to keep track of where the STS is, what is used to sign tokens, who is allowed to receive tokens, etc.

Since Windows Azure Pack is designed to be distributed in nature, it’s a fair bet most of the configuration is stored in databases. Let’s check the PowerShell cmdlets (horizontal spacing truncated a bit to fit):

PS C:\Windows\system32> Get-MgmtSvcDefaultDatabaseName

DefaultDatabaseName                        Description
-------------------                        -----------
Microsoft.MgmtSvc.Config                   Configuration store database
Microsoft.MgmtSvc.PortalConfigStore        Admin and Tenant sites database
Microsoft.MgmtSvc.Store                    Rest API layer database
Microsoft.MgmtSvc.MySQL                    MySQL resource provider database
Microsoft.MgmtSvc.SQLServer                SQLServer resource provider database
Microsoft.MgmtSvc.Usage                    Usage service database
Microsoft.MgmtSvc.WebAppGallery            WebApp Gallery resource provider database

Well that’s handy. It even describes what each database does. Looking at the databases on the server we see each one:

[screenshot: the databases on the server]

Looking at the descriptions we can immediately ignore anything that is described as a “resource provider database” because resource providers in Windows Azure Pack are the services exposed by the Portals and APIs.  That leaves us the Microsoft.MgmtSvc.Config, Microsoft.MgmtSvc.PortalConfigStore, Microsoft.MgmtSvc.Store, and Microsoft.MgmtSvc.Usage databases.

The usage database looks like the odd one out so if we peek into the tables we see configuration information and data for usage of the resource providers. Scratch that.

We're then left with a Config database, a PortalConfigStore database, and a Store database. How's that for useful naming conventions? Given the descriptions we could infer we likely only want to look into the PortalConfigStore database for the Tenant and Admin Portal configuration, and the Store database for the API configuration. To confirm that we could peek into the Config database and see what's there. If we look in the Settings table we see a bunch of encrypted key value pairs. Nothing jumps out as being related to federation information like endpoints, claims, or signing certificates, but we do see pointers to database credentials.

If we quickly take a look at some of the web.config files in the various Windows Azure Pack sites we can see that some of them only have connection strings to the Config database. Actually, if we look at any of the web.config files we’ll see they are protected, so we need to unprotect them:

PS C:\Windows\system32> Unprotect-MgmtSvcConfiguration -Namespace TenantAPI

Please remember to protect-* them when you’re done snooping!

If we compare the connection strings with the information in the Config.Settings table, it's reasonable to hypothesize that the Config database stores pointers to the other configuration databases, and the sites only need a configured connection string to a single database. This only seems to apply to some sites though. The Portal sites actually have connection strings pointing only to the PortalConfigStore database. That actually makes sense from a security perspective.

Since the Portal sites are public-ish facing, they are more likely to be attacked, and therefore really shouldn't have direct connections to databases storing sensitive information – hence the Web APIs. Looking at the architectural documentation on TechNet we can see it's recommended that API services not be public facing either (with the exception of the Public APIs), which supports my assertion.

Moving on, we now have the PortalConfigStore and the Store databases left. The descriptions tell us everything we need to know about them. We end up with a service relationship along the lines of:

[diagram: which services talk to which databases]

Okay, now that we have a rough idea of how configuration data is stored we can peek into the databases and see what’s what.

Portal Sites Authentication

Starting with the PortalConfigStore database we see a collection of tables.

[screenshot: tables in the PortalConfigStore database]

The two tables that pop out are the Settings table and the aspnet_Users table. We know the Auth Site for the tenants stores users in a database, and lookie here we have a collection of users.

Next up is the Settings table. It’s a namespace-key-value-pair mapped table. Since this database stores information for multiple sites, it makes sense to separate configuration data into multiple realms – the namespaces.

There are 4 namespaces we care about:

  • AdminSite
  • AuthSite
  • TenantSite
  • WindowsAuthSite

Looking at the TenantSite configuration we see a few entries with JSON values:

  • Authentication.IdentityProvider
  • Authentication.RelyingParty

Aha! Here’s where we store the necessary bits to do the federation dance. The Authentication.RelyingParty entry stores the information describing the TenantSite. So when it goes to the IdP with a request it can use these values. In my case I’ve got the following:

{
   "EncryptionCertificate":null,
   "Realm":"http://azureservices/TenantSite",
   "ReplyTo":"https://manage-cloud.syfuhs.net/"
}

Really, just the bare minimum to describe the RP. The Realm, which is the unique identifier of the site, the Reply To URL which is where the token should be returned to, and the Encryption Certificate in case the returned token is encrypted – which it isn’t by default. With this information we can make a request to the IdP, but of course, we don’t know anything about the IdP yet so we need to look up that configuration information.

Looking at the Authentication.IdentityProvider entry we see everything else we need to complete a WS-Federation passive request for token. This is my configuration:

{
   "Realm":"http://azureservices/AuthSite",
   "Endpoint":"https://auth-cloud.syfuhs.net/wsfederation/issue",
   "Certificates":[
      "MIIC2...ADLt0="
   ]
}

To complete the request we actually only need the Endpoint as that describes where the request should be sent, but we also now have the information to validate the token response. The Realm describes who minted the token, and the Certificates element is a collection of certificates that could have been used to sign the token. If the token was signed by one of these certificates, we know it’s a valid token.

We do have to go one step further when validating this though, as we need to make sure the token is intended to be used by the Tenant Portal. This is done by comparing the audience URI in the token (see the last post) to the Realm in the Authentication.RelyingParty configuration value. If everything matches up we’re good to go.

We can see the configuration in the AdminSite namespace is similar too.

Next up we want to look at the AuthSite namespace configuration. There are similar entries to the TenantSite, but they serve slightly different purposes.

The Authentication.IdentityProvider entry matches the entry for the TenantSite. I’m not entirely sure of its purpose, but I suspect it might be a reference value for when changes are made and the original configuration is needed. Just a guess on that though.

Moving on we have the Authentication.RelyingParty.Primary entry. This value describes who can request a token, which in our case is the TenantSite. My entry looks like this:

{
   "EncryptionCertificate":null,
   "Realm":"http://azureservices/TenantSite",
   "ReplyTo":"https://manage-cloud.syfuhs.net/"
}

It's pretty similar to the configuration in the TenantSite entry. The Realm is the identifier of which site can request a token, and the Reply To URL is where the token should be returned once it's minted.

Compare that to the values in the WindowsAuthSite namespace and things look pretty similar too.

So with all that information we’ve figured out how the Portal sites and the Auth sites are configured. Of course, we haven’t looked at the APIs yet.

API Authentication

If you recall from the last post the API calls are authenticated by attaching a JWT to the request header. The JWT has to be validated by the APIs the same way the Portals have to validate the JWTs received from the STS. If we look at the diagram above though, the API sites don't have access to the PortalConfigStore database; they have access to the Store database. Therefore it's reasonable to assume the Store database has a copy of the federation configuration data as well.

Looking at the Settings table we can confirm that assumption. It’s got the same schema as the Settings table in the PortalConfigStore database, though in this case there are different namespaces. There are two namespaces that are of interest here:

  • AdminAPI
  • TenantAPI

This aligns with the service diagram above. We should only have settings for the namespaces of services that actually touch the database.

If we look at the TenantAPI elements we have four entries:

  • Authentication.IdentityProvider.Primary
  • Authentication.IdentityProvider.Secondary
  • Authentication.RelyingParty.Primary
  • Authentication.RelyingParty.Secondary

The Authentication.IdentityProvider.Primary entry matches up with the TenantSite Authentication.IdentityProvider entry in the PortalConfigStore database. That makes sense since it needs to trust the same token as the Tenant Portal site. The Secondary element is curious though. It's configured as a relying party to the Admin STS. I suspect that is there because the Tenant APIs can be called from the Admin Portal.

Comparing these values to the AdminAPI namespace we see that there are only configuration entries for the Admin STS. Seems reasonable since the Tenant Portal probably shouldn't be calling into admin APIs. Haven't got a clue why the AdminAPI relying party is configured as Secondary though. Artifact of the design I guess. Another interesting artifact of this configuration is that the ReplyTo values in the RelyingParty entries show the default value from when I first installed the services. We see something like:

{
   "EncryptionCertificate":null,
   "Realm":"http://azureservices/TenantSite",
   "ReplyTo":"https://syfuhs-cloud:30081/"
}

And

{
   "EncryptionCertificate":null,
   "Realm":"http://azureservices/AdminSite",
   "ReplyTo":"https://syfuhs-cloud:30091/"
}

I reconfigured the endpoints to be publicly accessible so these values are now incorrect.

APIs can't really use Reply To the same way passive requests can, so it makes sense that they don't get updated – they don't have to be. The values don't have to be present either, but again, artifacts I guess.

Conclusion

In the previous post we looked at how authentication works conceptually, and in this post we looked at how authentication is configured in detail. Next time we’ll take a look at how we can reconfigure Windows Azure Pack to work with our own IdPs.

No spoilers this time.

Whitepaper: Active Directory from on-premises to the cloud

by Steve Syfuhs / January 14, 2013 04:44 PM

A new whitepaper was released last Friday (Jan 11/2013) that discusses all the various options for dealing with identity in cloud, on-premise, and hybrid environments: Active Directory from on-premises to the cloud.

It takes a look at how Windows Azure Active Directory is making a play for cloud identity, as well as how it works with hybrid/on-premise scenarios.

Here’s the overview:

Identity management, provisioning, role management, and authentication are key services both on-premises and through the (hybrid) cloud. With the Bring Your Own Apps (BYOA) for the cloud and Software as a Service (SaaS) applications, the desire to better collaborate a la Facebook with the “social” enterprise, the need to support and integrate with social networks, which lead to a Bring Your Own Identity (BYOI) trend, identity becomes a service where identity “bridges” in the cloud talk to on-premises directories or the directories themselves move and/or are located in the cloud.

Active Directory (AD) is a Microsoft brand for identity related capabilities. In the on-premises world, Windows Server AD provides a set of identity capabilities and services and is hugely popular (88% of Fortune 1000 and 95% of enterprises use AD). Windows Azure AD is AD reimagined for the cloud, designed to solve for you the new identity and access challenges that come with the shift to a cloud-centric, multi-tenant world.

Windows Azure AD can be truly seen as an Identity Management as a Service (IDMaaS) cloud multi-tenant service. This goes far beyond taking AD and simply running it within a virtual machine (VM) in Windows Azure.

This document is intended for IT professionals, system architects, and developers who are interested in understanding the various options for managing and using identities in their (hybrid) cloud environment based on the AD foundation and how to leverage the related capabilities. AD, AD in Windows Azure and Windows Azure AD are indeed useful for slightly different scenarios. This document is part of a series of documents on the identity and security features of Windows Azure AD/Office 365 (see the links below for the other available documents in the series).

This is a pretty good paper. It documents a lot of the internal details of how all the various services play together in a central location, as well as sheds light on some publicly-known, if not publicly documented, methods of user provisioning.

Below is a list of great resources referenced in the document.

Windows Azure Active Directory Federation In Depth (Part 2)

by Steve Syfuhs / December 07, 2012 10:02 PM

In my last post I talked a little bit about the provisioning and federation processes for Office 365 and Windows Azure Active Directory (WAAD). This time around I want to talk a little bit about how the various pieces fit together when federating an on premise Active Directory environment with WAAD and Office 365. You can find lots of articles online that talk about how to configure everything, but I wanted to dig a little deeper and show you why everything is configured the way it is.

Out of the box a Windows Azure Active Directory tenant manages users for you. You can create all your users online without ever having to configure anything on premise. This works fairly well for small businesses and organizations that want to stop managing identities on premise altogether. However, for more advanced scenarios organizations will want to synchronize their on-premise Active Directory with WAAD. Getting this working revolves around two things: the users, and the domain.

First off, let's take a quick look at the domain. I'm using the Microsoft Online Services Module for PowerShell to query for this information. I'm going to use my domain as an example: syfuhs.net.

PS C:\Users\Steve\Desktop> Get-MsolDomain -DomainName syfuhs.net | fl *

Authentication : Managed
Capabilities   : Email, OfficeCommunicationsOnline
IsDefault      : True
IsInitial      : False
Name           : syfuhs.net
RootDomain     :
Status         : Verified

The important thing to look at is the Authentication attribute. It shows Managed because I haven’t configured federation for this domain.

If we then take a look at a user we see some basic directory information that we entered when the user was created. I’ve removed a bit of the empty fields but left an important one, the ImmutableId field.

PS C:\Users\Steve\Desktop> Get-MsolUser -UserPrincipalName steve@syfuhs.net | fl *

DisplayName                 : Steve Syfuhs
FirstName                   : Steve
ImmutableId                 :
LastName                    : Syfuhs
OverallProvisioningStatus   : Success
UserPrincipalName           : steve@syfuhs.net
ValidationStatus            : Healthy

The Immutable ID is a unique attribute that distinguishes a user in both on-premise Active Directory and Windows Azure Active Directory. Since I haven’t configured federation this value is blank.
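Federating the domain is a single cmdlet, run from a machine that can reach the ADFS farm (additional switches like -SupportMultipleDomain omitted here):

PS C:\Users\Steve\Desktop> Convert-MsolDomainToFederated -DomainName syfuhs.net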

Skip ahead a few pages after running the Convert-MsolDomainToFederated cmdlet and my domain is magically federated with my local Active Directory. If I re-run the first command we’ll see the Authentication attribute set to Federated. However, running the second command doesn’t return an Immutable ID and if I tried logging in through ADFS I get an error. What gives?

If we look at the token that is passed from ADFS to WAAD after sign in we see that there is actually a claim for an Immutable ID. This ID is what is used to determine the identity of the user, and if Office 365 has no idea who has that value it can’t trust that identity.

This particular problem is solved through directory synchronization using the DirSync service. DirSync is configured to get all users from Active Directory and add them to Windows Azure Active Directory. It synchronizes most attributes configured for a user including the objectGUID attribute. This attribute is synchronized to the ImmutableID attribute in WAAD. It’s the anchor that binds an on-premise user with a cloud user.
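You can compute the value DirSync will produce for a given user yourself, since the Immutable ID is just the base 64 encoding of the objectGUID. A quick sketch using the Active Directory PowerShell module:

PS C:\> $user = Get-ADUser -Identity steve
PS C:\> [Convert]::ToBase64String($user.ObjectGUID.ToByteArray())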

Two questions tend to arise from this process:

  1. Why not just use the UPN for synchronization?
  2. Why do you need to synchronize in the first place?

Both questions are fairly simple to answer, but the answers depend on one another. You cannot synchronize against a UPN because a user’s UPN can easily change. You need a value that will never change across the lifetime of a user account (hence the name “immutable”). You need the value to stay constant because synchronization will happen often. You need to synchronize any time a value changes in the on-premise Active Directory. Examples of changes include address changes or name changes. Changing your name can often result in changing your UPN.

It’s preferred to keep these attributes up to date in both systems because then applications can trust that they are getting the right values when requested from either system. This still begs the question though, why do you need to synchronize in the first place? Some people may ask this because it’s theoretically possible to provision new users as they first sign into an application. If the user doesn’t exist when they log in just create them. Simple.

The problem of course is that certain systems require knowledge of the user before the user ever logs in. A perfect example is Exchange. Imagine if a user is on vacation while the transition to Office 365 occurs. If the user doesn’t log in until they get back, that means they wouldn’t have received any email while they were away. Admittedly, not receiving a few weeks of email might be the preferred scenario for some, but I digress.

So we have to configure DirSync. Skip ahead a few more pages and DirSync has executed and synchronized all my users. If we take a look back at my account we now see a value for the Immutable ID:

PS C:\Users\Steve\Desktop> Get-MsolUser -UserPrincipalName steve@syfuhs.net | fl *

DisplayName                 : Steve Syfuhs
FirstName                   : Steve
ImmutableId                 : lHh/rEL830q6/mStDnD4uw==
UserPrincipalName           : steve@syfuhs.net
ValidationStatus            : Healthy

At this point I should now be able to log in.

If I navigate to https://portal.microsoftonline.com I'm redirected to https://login.microsoftonline.com and prompted for credentials. However, as soon as I type in my username it prompts telling me I have to go elsewhere to sign in.

[screenshot: the sign in page redirect prompt]

The sign in screen is smart enough to parse the domain name from my user and look up the Authentication type tied to that domain. If the domain is configured as Federated the sign in page is told to redirect to ADFS. If we return back to that first PowerShell command we'll see the authentication is set to Federated. This was set by the Convert-MsolDomainToFederated command. Two things happened when it was called.

First, ADFS was configured to allow sending tokens to Windows Azure Active Directory. Second, WAAD was configured to receive tokens from ADFS.

We can take a look at exactly what was configured in WAAD by running more PowerShell.

PS C:\Windows\system32> Get-MsolDomainFederationSettings -DomainName syfuhs.net

ActiveLogOnUri         : << adfs server and username mixed endpoint >>
FederationBrandName    : syfuhs.net
IssuerUri              : urn:syfuhs:net
LogOffUri              : << adfs signout url >>
MetadataExchangeUri    : << adfs server mex endpoint >>
NextSigningCertificate :
PassiveLogOnUri        : https://login.syfuhs/adfs/ls/
SigningCertificate     : MIICzDCCAbSgA.....sh37NMr5gpFGrUnnbFjuk9ATXF1WZ

I’ve stripped out a few things to make it a little more readable. The key is that PassiveLogOnUri field. That is the URL passed back to the sign in page and is what is used to compose a WS-Federation signin request.

If I click the link I'm redirected to ADFS and if the computer I'm using is a member of the same domain as ADFS I shouldn't be prompted for credentials. After Windows Authentication does its thing ADFS determines that WAAD sent us because the wtrealm URL parameter is set to urn:federation:MicrosoftOnline which is WAAD's Audience URI.

When Convert-MsolDomainToFederated was called, ADFS was instructed to create a Relying Party Trust for WAAD. That trust had a set of claims issuance rules that query Active Directory for various things like a user’s objectGUID and UPN. These values are formatted, bundled into a SAML token, and signed with the ADFS signing key. The token is then POST’ed back to WAAD.

The SigningCertificate field we saw in the Get-MsolDomainFederationSettings output is the public key of the ADFS signing certificate. It was configured when Convert-MsolDomainToFederated was called. It is used to verify that the token received from ADFS is valid. If the token is in fact valid the domain is located based on the Issuer URI and UPN, and the user is located in the domain. If a user is found then WAAD will create a new token for the user and issue it to whichever service initially requested login, which in our case is https://portal.microsoftonline.com.

From this point on any time I browse to an Office 365 service like Exchange, I’m redirected back to https://login.microsoftonline.com, and if my session is still valid from earlier, a new token is issued for Exchange. Same with SharePoint and Dynamics, Windows Intune, and any other application I’ve configured through Windows Azure Active Directory – even the Windows Azure management portal.

Federation with Office 365 through Windows Azure Active Directory is a very powerful feature and will be a very important aspect of cloud identity in the near future. While federation may seem like a complex black box, if we start digging into the configuration involved we start to learn a lot about all the various moving parts, and hopefully realize it's not too complex.

Self-Serving Single Sign On

by Steve Syfuhs / September 17, 2012 01:34 PM

When I wrote Enough with the Pain of Passwords someone told me it was completely self-serving. Actually, it was.

My day job is building a commercial Single Sign On product so I'm terribly biased toward people using it. I quite like my job, and I really like my product so I'm more than happy to get people to buy our stuff. This doesn't actually change how I feel about passwords though.

I hate passwords. In current form they are an archaic mechanism for authentication and that mechanism is more often than not flawed.

Archaic is, I think, an appropriate word because we are fairly limited in how we can use passwords. This is because passwords are shared secrets. Both you and the application need to know the password – the secret. This makes it difficult to properly secure the secret because frankly, more than one entity knows the secret. However, shared secrets are a useful method for authenticating a principal because they are so simple to use, but they can be a challenge to use securely. Shared secrets tend to be static. They don’t change often because there often isn’t an easy way to change them.

Shared secrets create barriers between systems because each and every system in play needs to know the secret. Either you let everyone know the secret, or nobody new knows the secret. The latter increases security (in theory) but reduces usability, while the former decreases security, and increases usability (in theory). When it comes to integrating such systems we don't need yet another thing getting in the way. Sure we could argue that it's more secure that way, but it's not. It can't be. If the business really needs these things to talk they will make them talk in ways that were never originally in the plans. That means security boundaries will likely be circumvented in ways the original developers never imagined. This can either be in a rich desktop client, or through a web browser.

This of course isn't just true in a business – it applies to the internet as a whole. Users tend to consume data from one service and do something with it in another. If it's a pain to move between those two services then the user is annoyed and that means they may just decide it's not worth it – or worse they go the easy route and use the same password between the services so they don't have to think. The less we make someone think about things other than what they are working on the better, because they will do as much as possible to think as little as possible. As a user, I will do as much as possible to be able to think as little as possible when logging into things.

I might be mixing ideas here. In some cases the systems could be artificial like a server and applications, or it could be a person consuming content from one of the applications and trying to share it in another application. It could either occur through back channel APIs or through a user copying and pasting from browser tab to browser tab.

Imagine that you have an application, and to use the application you need an identity – in other words, its not anonymous. To get an identity you need to prove you are who you say you are. This is authentication. Normally we use passwords, and this means we have to manage passwords. We need a process to change them. To issue them. To reset them. To authenticate them. To store them. We tell our users to use something complex and secure. We tell them not to reuse it. We tell them not to write it down. All the while this is going to slow down the user because they aren’t going to remember the friggen thing anyway. Rinse, wash, and repeat for every application someone uses.

This does not scale. Either a user stops using new applications (or stops using older applications) or more likely the user just starts repeating their passwords. Let me repeat that, since very few developers seem to get this: passwords do not scale.

This is why we want federation. It’s not a new concept. Get someone else to do it. We have to be careful though. It’s not one size fits all – one single provider can’t be the sole source for identity information. Nor can we support every possible provider under the sun. That’s why last time I used the term persona. Business persona, personal persona, play persona, student persona, etc.

Imagine your target audience. Are you wanting to deal with businesses? Are you trying to do something useful for people's personal lives? If you build in federation, they will use it. Maybe not immediately, but eventually there will be a critical mass of passwords and people will start transitioning to federated identities.

Does this solve our problem though? Is it secure? As all good answers are cop outs, it depends. The short answer is, it can be. It depends on how much we want to trust the identity provider, as well as how much we can trust the identity provider.

In other words, how much do we trust that the data we receive from the identity provider is accurate, and how much do we trust that it really is the person we want logging into the identity provider? For the first half it's pretty simple: I trust the government to provide the SIN/SSN of the user accurately, but I don't really trust Twitter to provide an accurate home address. Part of this is common sense. The other half revolves around how the identity provider authenticates its users.

We know passwords are weak so if the identity provider only allows passwords, maybe we don’t trust it to secure our intellectual property.  I trust Twitter to authenticate users for my new super-cool social media thingamajig, but I certainly wouldn’t trust it for my new business financial management solution. If the identity provider is using two-factor authentication then perhaps we can trust it a bit more. Does the identity provider allow for elevated contexts? Can we authenticate with passwords for generic access, but then request a stronger authentication method for certain things?

Now that’s an interesting thought. If you have an identity provider that can do this for you, you now have a new feature. Suppose your application is used in team scenarios. Multiple people accessing stuff might require management and administration. Anyone can read or create, but to modify or delete you need to elevate. Oops, heading off on a tangent.

Anyway, back on message: your authentication sucks and if I have to create another bloody password to use your new application, I’m not going to use your new application.

That might be a bit too harsh. Passwords can’t scale because I can’t remember anymore new passwords. If you require me to use a new password, I’m not going to remember it, so I can’t use your application.

This is a self-serving plea from me to developers: I hate passwords because I can’t remember them. Please make my life easier.

Enough with the Pain of Passwords

by Steve Syfuhs / August 21, 2012 02:44 PM

Passwords suck. This is a well known fact. Of course, we have conditioned ourselves to using passwords in painful ways so we accept this fact and move on with our lives. Except, I can’t take it anymore.

Every week it seems there’s another breach, and every week there’s yet another security guru spouting the need for better-stronger-more-secure passwords. If you create a “strong” password that meets xyz requirements then it will take 42.3 bajillion years to crack regardless of what hash algorithm the application is using. The problem with this though is that “strong” passwords are ridiculously hard to remember and passwords that are easy to remember are easy to crack.

So we promote the use of password managers – tools that we use to store our super-strong-secure passwords in a secure way and we get our passwords from said manager every time we need to use a password.

Fuck. That.

I mean, yes they work for storing passwords securely, but the underlying problem is that we use passwords for authentication for nearly everything. This isn't necessarily a bad thing in theory, but in practice passwords don't scale for humans. As we become more and more connected to more and more applications we realize we have more and more passwords that we need to remember on an almost daily basis, and they are all very different, variations on a theme, or the same password reused over and over again. Using a password manager for this is complicated and slow, and slow or complicated security processes fail. Often.

No wonder we hear about passwords all the time.

Then we hear about the blame. We blame users for not using secure passwords when things go south.

“Oh that’s why you got hacked… your passwords aren’t secure.”

Of course we blame the users! Don’t believe me? Check the news. We point out that 58% of all passwords breached are under some arbitrary length requirement that normal users don’t understand let alone care about. Sure we do it as a service to others, but we do it in a condescending way as if people should know better.

Or we blame the application developers.

“You didn't hash your passwords with a key-derivation function with 100 iterations so attackers can brute force the passwords if they have an offline copy of your database.”

Really? What the hell does that mean? Especially to a developer that doesn’t understand basic abstraction let alone cryptography? Oh, I’m supposed to use at least 1000 iterations?

Passwords are a mess.

As the old joke goes

“Doctor, it hurts when I do this.”
"Then don't do that!"

We need to stop using passwords. There will always be uses for passwords, but as a de facto authentication scheme? No. Stop it. Just stop it.

How do we do that then?

The standard answer is to get our identity from somewhere else. This is Federation or Single Sign On. Instead of doing the work yourself, get someone else to do it.

Consider the problem: for the bulk of the applications we use we have one persona, or a set of personas for various groups of applications. Business applications get my business persona; personal applications get my personal persona; educational applications get my student persona; etc. If we use the same basic persona (or identity) for a set of applications why on earth should I have to create a different identity for each application? Why should each application have their own way of authenticating me?

The resistance we see though is the application developer not wanting to give up control of that password. If they control the password, they control the user's fate in the application. Actually, it's a little understandable.

This is a call to action to all those developers: stop it!

Let go of the passwords. Just let go. Loosen your grip. Easy there. I know it’s tough, but it’s time to let go.

In fact, you don't want to manage those passwords. Do you really want to get blamed when all of your users' passwords are stolen because you didn't know any better? This means less risk for you.

Risk bad. Less risk good.

But wait! Don’t go hooking up with every identity provider you meet. Only hook up with those you like. If you’re building a business application, go federate with the customer’s company; if you’re building a slick new social media site federate with Twitter or Facebook or LiveID or Google.

Keep it simple. You like Twitter as a provider. Joe’s Fish Taco and Identity notsomuch. Having too many providers will confuse your users because they will forget which provider they used.

If you don’t have any user passwords to manage anymore, that means you don’t have any user passwords to manage! Less work. Less hassle. Less annoyed customers because they can’t remember their password.

Nice, right?

As users we need to beg the developers of the applications we use to talk with other identity providers, otherwise they won’t see the need. We have to convince them passwords suck.

Is that too much to ask?

Study of Commercially Deployed Single Sign On

by Steve Syfuhs / April 25, 2012 05:40 PM

Microsoft Research published a paper sometime last month analyzing Single Sign On services hosted by various commercial entities.

Go Read it: Signing Me onto Your Accounts through Facebook and Google: a Traffic-Guided Security Study of Commercially Deployed Single-Sign-On Web Services.

The paper had been sitting on my desk for a couple weeks (literally) before I had a chance to read through it. It actually made its rounds through the company before I got to it.

In any case, I thought it would be good to post a link for people to read because it outlines some very important implications of using a Single Sign On service.

Abstract:

With the boom of software-as-a-service and social networking, web-based single sign-on (SSO) schemes are being deployed by more and more commercial websites to safeguard many web resources. Despite prior research in formal verification, little has been done to analyze the security quality of SSO schemes that are commercially deployed in the real world. Such an analysis faces unique technical challenges, including lack of access to well-documented protocols and code, and the complexity brought in by the rich browser elements (script, Flash, etc.). In this paper, we report the first “field study” on popular web SSO systems. In every studied case, we focused on the actual web traffic going through the browser, and used an algorithm to recover important semantic information and identify potential exploit opportunities. Such opportunities guided us to the discoveries of real flaws. In this study, we discovered 8 serious logic flaws in high-profile ID providers and relying party websites, such as OpenID (including Google ID and PayPal Access), Facebook, JanRain, Freelancer, FarmVille, Sears.com, etc. Every flaw allows an attacker to sign in as the victim user. We reported our findings to affected companies, and received their acknowledgements in various ways. All the reported flaws, except those discovered very recently, have been fixed. This study shows that the overall security quality of SSO deployments seems worrisome. We hope that the SSO community conducts a study similar to ours, but in a larger scale, to better understand to what extent SSO is insecurely deployed and how to respond to the situation.

The gist of the paper is that when it comes to verification and validation of the security of SSO protocols, we tend to do formal tests of the protocols themselves, but we don’t ever really test the implementations of the protocols. Observation showed that most developers didn’t fully understand the security implications of the most important part in an SSO conversation – the token exchange:

Our success indicates that the developers of today’s web SSO systems often fail to fully understand the security implications during token exchange, particularly, how to ensure that the token is well protected and correctly verified, and what the adversary is capable of doing in the process.

Think about it. The token received from the IdP is the identity. The relying party trusts the validity of the identity by verifying the token somehow. If verification isn’t done properly an attacker can inject information into the token and elevate their privileges or impersonate another user. This is a fundamental problem:

For example, we found that the RPs of Google ID SSO often assume that message fields they require Google to sign would always be signed, which turns out to be a serious misunderstanding (Section 4.1).

Not all of the data in a token needs to be signed. In fact, if the IdP isn’t the authoritative source of a particular piece of data it may not want to sign that data. If the IdP can’t or won’t sign the data, do you really want to trust it?

What’s the rule that’s always hammered into us when writing code? Do not trust user input. Even if it’s supposed to have come from another machine:

[…] when our browser (i.e., Bob’s browser) relayed BRM1 [part 1 of the message exchange], it changed openid.ext1.required (Figure 8) to (firstname,lastname). As a result, BRM3 [part 3 of message exchange] sent by the IdP did not contain the email element (i.e., openid.ext1.value.email). When this message was relayed by the browser, we appended to it alice@a.com as the email element. We found that Smartsheet accepted us as Alice and granted us the full control of her account.

If you receive a message that contains something you need to use, not only do you have to validate that it’s in the right format, but you also have to validate that it hasn’t been modified or tampered with before it hits your code.

This is something I’ve talked about before, but in a more generalized nature. Validate-validate-validate!
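To put some code behind that, here’s a minimal sketch of what correct verification might look like for a JWT, assuming the .NET 4.5 JWT token handler (the issuer, audience, and key names below are hypothetical placeholders). The point is that nothing inside the token is trusted until the signature, issuer, audience, and lifetime have all been checked:

using System.IdentityModel.Tokens;
using System.Security.Claims;

public static ClaimsPrincipal ValidateIncomingToken(string rawToken, SecurityKey signingKey)
{
    var handler = new JwtSecurityTokenHandler();

    var parameters = new TokenValidationParameters
    {
        ValidIssuer = "https://idp.example.com",  // hypothetical: only accept tokens from this IdP
        ValidAudience = "urn:my:relying:party",   // hypothetical: only accept tokens minted for us
        IssuerSigningKey = signingKey             // key used to verify the signature
    };

    // ValidateToken throws if the signature, issuer, audience, or lifetime
    // checks fail; only then do we get a principal we can act on.
    SecurityToken validatedToken;
    return handler.ValidateToken(rawToken, parameters, out validatedToken);
}

Skipping any one of those checks is exactly the kind of hole the researchers kept finding.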

As an aside, an interesting observation made in the research is that all of this was done through black-box testing. The researchers didn’t have access to any source code. So if the researchers could find problems this way, the attackers could find problems the same way:

Our study shows that not only do logic flaws pervasively exist in web SSO deployments, but they are practically discoverable by the adversary through analysis of the SSO steps disclosed from the browser, even though source code of these systems is unavailable.

This tends to be the case with validation problems. Throw a bunch of corrupted data at something and see if it sticks.

They also realized that their biggest challenge wasn't trying to understand the protocol, but trying to understand the data being used within the protocol.

For every case that we studied, we spent more time on understanding how each SSO system work than on reasoning at the pure logic level.

The fundamental design of a Single Sign On service doesn’t really change between protocols.  The protocols may use varying terms to describe the different players in the system, but there are really only three that are important: the IdP, the RP, and the client. They interact with each other in fundamentally similar ways across most SSO protocols. It’s no surprise that understanding the data was harder than understanding the logic.

They didn’t go into much detail about why they spent more time studying data, but earlier they talked about how different vendors used different variations on the protocols.

[…] the way that today’s web SSO systems are constructed is largely through integrating web APIs, SDKs and sample code offered by the IdPs. During this process, a protocol serves merely as a loose guideline, which individual RPs often bend for the convenience of integrating SSO into their systems. Some IdPs do not even bother to come up with a rigorous protocol for their service.

In my experience, the price of bending a security protocol for the sake of convenience is usually the security of that protocol. It rarely ends well.

Windows Live and Windows 8

by Steve Syfuhs / September 12, 2011 04:00 PM

So. I guess I wasn't the only one with this idea: http://www.syfuhs.net/post/2011/02/28/making-the-internet-single-sign-on-capable.aspx

Sweet. Smile

Announced earlier today at the Build conference, Microsoft is creating a tighter integration between Windows 8 and Windows Live.  More details to come when I download the bits later tonight.

The Importance of Elevating Privilege

by Steve Syfuhs / August 28, 2011 04:00 PM

The biggest detractor to Single Sign On is the same thing that makes it so appealing – you only need to prove your identity once. This scares the hell out of some people because if you can compromise a user's session in one application it's possible to affect other applications. Congratulations: checking your Facebook profile just caused your online store to delete all its orders. Let's break that attack down a little.

  • You just signed into Facebook and checked your [insert something to check here] from some friend. That contained a link to something malicious.
  • You click the link, and it opens a page that contains an iframe. The iframe points to a URL for your administration portal of the online store with a couple parameters in the query string telling the store to delete all the incoming orders.
  • At this point you don't have a session with the administration portal and in a pre-SSO world it would redirect you to a login page. This would stop most attacks because either a) the iframe is too small to show the page, or b) (hopefully) the user is smart enough to realize that a link from a friend on Facebook shouldn't redirect you to your online store's administration portal. In a post-SSO world, the portal would redirect you to the STS of choice and that STS already has you signed in (imagine what else could happen in this situation if you were using Facebook as your identity provider).
  • So you've signed into the STS already, and it doesn't prompt for credentials. It redirects you to the administration page you were originally redirected away from, but this time with a session. The page is pulled up, the query string parameters are parsed, and the orders are deleted.

There are certainly ways to stop this, and parts of the attack are admittedly a bit trivial. For instance you could pop up an Ok/Cancel dialog asking "are you sure you want to delete these?", but for the sake of discussion let's think of this at a high level.

The biggest problem with this scenario is that deleting orders doesn't require anything more than being signed in. By default you had the highest privileges available.

This problem is similar to the problem many users of Windows XP had. They were, by default, running with administrative privileges. This led to a bunch of problems because any application running could do whatever it pleased on the system. Malware was rampant, and worse, users were just doing all around stupid things because they didn't know what they were doing but they had the permissions necessary to do it.

The solution to that problem was to give users non-administrative privileges by default, and when something requires higher privileges, make them re-authenticate and run with those privileges temporarily. The key word is temporarily. However, security lost that argument while Microsoft was developing Windows Vista, and the compromise was User Account Control (UAC). By default a user is an administrator, but they don't run with administrative privileges; their user token is a stripped-down administrator token. To take full advantage of the administrator token, the user has to elevate and request the full token temporarily. This is a stop-gap solution though, because the administrative token still exists, so it's theoretically possible to circumvent UAC. It also doesn't require you to re-authenticate – you just have to approve the elevation.

As more and more things are moving to the web it's important that we don't lose control over privileges. It's still very important that you don't have administrative privileges by default because, frankly, you probably don't need them all the time.

Some web applications already require elevation. For instance, consider online banking sites. When I sign in I have a default set of privileges. I can view my accounts and transfer money between my accounts. Anything else requires that I re-authenticate by entering a private PIN. So, for instance, I cannot transfer money to an account that doesn't belong to me without proving that it really is me making the transfer.

There are a couple ways you can design a web application that requires privilege elevation. Let's take a look at how to do it with Claims Based Authentication and WIF.

First off, let's look at the protocol. Out of the box WIF supports the WS-Federation protocol. The passive version of the protocol supports a query parameter called wauth. This parameter defines how authentication should happen. The values for it are mostly specific to each STS; however, there are a few well-defined values that the SAML protocol specifies. These values are passed to the STS to tell it to authenticate using a particular method. Here are some of the most often used:

Authentication Type/Credential    wauth Value
Password                          urn:oasis:names:tc:SAML:1.0:am:password
Kerberos                          urn:ietf:rfc:1510
TLS                               urn:ietf:rfc:2246
PKI/X509                          urn:oasis:names:tc:SAML:1.0:am:X509-PKI
Default                           urn:oasis:names:tc:SAML:1.0:am:unspecified

When you pass one of these values to the STS during the sign-in request, the STS should request that particular type of credential. The wauth parameter supports arbitrary values, so we can create a value that tells the STS we want to re-authenticate because of an elevation request.

All you have to do is redirect to the STS with the wauth parameter:

https://yoursts/authenticate?wa=wsignin1.0&wtrealm=uri:myrp&wauth=urn:super:secure:elevation:method
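If you're using WIF on the relying party you don't have to build that query string by hand. Here's a rough sketch using WIF's WS-Federation message types under the .NET 4.5 namespaces (in WIF 3.5 the same types live under Microsoft.IdentityModel.Protocols.WSFederation); the STS address, realm, and elevation URN are the same made-up values as above:

using System;
using System.IdentityModel.Services;
using System.Web;

public static class Elevation
{
    public const string Method = "urn:super:secure:elevation:method";

    // Redirect the browser to the STS, demanding our custom elevation method.
    public static void RequestElevation(HttpResponse response)
    {
        var signIn = new SignInRequestMessage(new Uri("https://yoursts/authenticate"), "uri:myrp");

        // AuthenticationType is serialized as the wauth query parameter.
        signIn.AuthenticationType = Method;

        response.Redirect(signIn.WriteQueryString());
    }
}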

Once the user has re-authenticated you need to tell the relying party somehow. This is where the Authentication Method claim comes in handy:

http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationmethod

Just add the claim to the output identity:

protected override IClaimsIdentity GetOutputClaimsIdentity(IClaimsPrincipal principal, RequestSecurityToken request, Scope scope)
{
    IClaimsIdentity ident = principal.Identity as IClaimsIdentity;

    // Stamp the token with the method used to authenticate so the
    // relying party can tell an elevated session from a normal one.
    ident.Claims.Add(new Claim(ClaimTypes.AuthenticationMethod, "urn:super:secure:elevation:method"));

    // finish filling claims...
    return ident;
}

At that point the relying party can then check to see whether the method satisfies the request. You could write an extension method like:

// Check the authentication method the STS stamped into the token.
public static bool IsElevated(this IClaimsPrincipal principal)
{
    return principal.Identity.AuthenticationType == "urn:super:secure:elevation:method";
}

And then have a bit of code to check:

var p = Thread.CurrentPrincipal as IClaimsPrincipal;
if (p != null && p.IsElevated())
{
    DoSomethingRequiringElevation();
}

This satisfies half the requirements for elevating privilege. We need to make it so the user is only elevated for a short period of time. We can do this in an event handler after the token is received by the RP.  In Global.asax we could do something like:

void Application_Start(object sender, EventArgs e)
{
    // Hook the session module so we can inspect every incoming session token.
    FederatedAuthentication.SessionAuthenticationModule.SessionSecurityTokenReceived
        += new EventHandler<SessionSecurityTokenReceivedEventArgs>(SessionAuthenticationModule_SessionSecurityTokenReceived);
}

void SessionAuthenticationModule_SessionSecurityTokenReceived(object sender, SessionSecurityTokenReceivedEventArgs e)
{
    if (e.SessionToken.ClaimsPrincipal.IsElevated())
    {
        // Reissue the session token with a shortened 15 minute lifetime
        // so the elevated session expires quickly.
        SessionSecurityToken token = new SessionSecurityToken(e.SessionToken.ClaimsPrincipal, e.SessionToken.Context, e.SessionToken.ValidFrom, e.SessionToken.ValidFrom.AddMinutes(15));
        e.SessionToken = token;
    }
}

This will check to see if the incoming token has been elevated, and if it has, set the lifetime of the token to 15 minutes. (Depending on your WIF version, you may also need to set e.ReissueCookie = true so the shortened lifetime gets written back to the session cookie.)

There are other places where this could happen, like within the STS itself; however, the lifetime may need to be independent of the STS.

As I said earlier, as more and more things are moving to the web it's important that we don't lose control of privileges. By requiring certain types of authentication in our relying parties, we can easily support elevation by requiring the STS to re-authenticate.

Making the Internet Single Sign On Capable

by Steve Syfuhs / February 28, 2011 04:00 PM

Every couple of weeks I start up Autoruns to see what new stuff has added itself to Windows startup and whatnot (screw you Adobe – you as a software company make me want to swear endlessly). Anyway, a few months ago, around the time the latest version of Windows Live Messenger and its suite RTM’ed, I poked around to see if anything new was added. Turns out there was:

[Screenshot: Autoruns showing the new entries]

A new credential provider was added!

[Screenshot: the new credential provider]

Interesting.

Not only that, it turns out a couple Winsock providers were added too:

[Screenshot: the new Winsock providers]

I started poking around the DLLs and noticed that they don’t do much. Apparently you can use smart cards for WLID authentication. I suspect that’s what the credential provider and associated Winsock provider are for, as well as part of WLID’s sign-on helper so credentials can be managed via the Credential Manager:

[Screenshot: Credential Manager]

Ah well, nothing too exciting here.

Skip a few months and something occurred to me.  Microsoft was able to solve part of the Claims puzzle.  How do you bridge the gap between desktop application identities and web application identities?  They did part of what CardSpace was unable to do because CardSpace as a whole didn’t really solve a problem people were facing.  The problem Windows Live ran into was how do you share credentials between desktop and web applications without constantly asking for the credentials?  I.e. how do you do Single Sign On…

This got me thinking.

What if I wanted to step this up a smidge? Instead of logging into Windows Live Messenger with my credentials, why not log into Windows itself with my Windows Live credentials?

Yes, Windows.  I want to change this:

[Image: the Windows 7 login screen]

Question: What would this solve?

Answer: At present, nothing ground-breakingly new. For the sake of argument, let’s look at how this would be done, and I’ll (hopefully) get to my point.

First off, we need to know how to modify the Windows logon screen. In versions of Windows before Vista you had to do a lot of heavy lifting to make any changes to the screen. You had to write your own GINA, which involved essentially creating your own UI. Talk about painful.

With the introduction of Vista, Microsoft changed the game when it came to custom credentials.  Their reasoning was simple: they didn’t want you to muck up the basic look and feel.  You had to follow their guidelines.

As a result we are left with something along the lines of these controls to play with:

[Image: the credential provider UI controls]

The logon screen is now controlled by Credential Providers instead of the GINA. There are two providers built into Windows by default: one for password (Kerberos or NTLM) authentication, and one for Smart Card authentication.

The architecture looks like:

[Diagram: credential provider architecture]

When the Secure Attention Sequence (CTRL + ALT + DEL / SAS) is invoked, Winlogon switches to a separate desktop and instantiates a new instance of LogonUI.exe. LogonUI enumerates all the credential provider DLLs registered in the registry and displays their controls on the desktop.

When I enter in my credentials they are serialized and supposed to be passed to the LSA.

Once the LSA has these credentials it can then do the authentication.

I say “supposed” to be passed to the LSA because there are two frames of thought here.  The first frame is to handle authentication within the Credential Provider itself.  This can cause problems later on down the road.  I’ll explain why in the second frame.

The second frame of thought is when you need to use custom credentials, need to do some funky authentication, and then save the associated identity token somewhere. This becomes important when other applications need your identity.

You can accomplish this via what’s called an Authentication Package.

[Diagram: authentication package architecture]

When a custom authentication package is created, it has to be designed in such a way that applications cannot access stored credentials directly.  The applications must go through the pre-canned MSV1_0 package to receive a token.

Earlier I asked about using Windows Live for authentication. To make that work we would need to develop two things: a Credential Provider and a custom Authentication Package.

The logon process would work something like this:

  • Select Live ID Credential Provider
  • Type in Live ID and Password and submit
  • Credential Provider passes serialized credential structure to Winlogon
  • Winlogon passes credentials to LSA
  • LSA passes credential to Custom Authentication Package
  • Package connects to Live ID STS and requests a token with given credentials
  • Token is returned
  • Authentication Package validates the token and saves it to a local cache
  • Package returns authentication result back up call stack to Winlogon
  • Winlogon initializes user’s profile and desktop

I asked before: What would this solve?

This isn’t really a ground-breaking idea.  I’ve just described a domain environment similar to what half a million companies have already done with Active Directory, except the credential store is Live ID.

On its own we’ve just simplified the authentication process for every home user out there. No more disparate accounts across multiple machines. Passwords are in sync, and identity information is always up to date.

What if Live ID set up a new service that let you create access groups for things like home and friends, and you could create file shares as appropriate? Then you could extend Windows 7 Homegroup sharing based on those access groups.

Wait, they already have something like that with Skydrive (sans Homegroup stuff anyway).

Maybe they want to use a different token service.

Imagine if the user could select a “Federated User” credential provider that gave them a drop-down box listing a few Security Token Services. Azure ACS can hook you up.

Imagine if one of these STSs was something everyone already used *cough* Facebook *cough*.

Imagine if that same STS was one that a lot of sites on the internet already relied on *cough* Facebook *cough*.

Imagine if the associated protocol used by the STS and websites were modified slightly to add a custom set of headers sent to the browser.  Maybe it looked like this:

Relying-Party-Accepting-Token-Type: urn:sometokentype:www.somests.com
Relying-Party-Token-Reply-Url: https://login.myawesomesite.com/auth
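On the relying party’s side, advertising those headers would be trivial. A sketch along these lines, say an ASP.NET module (everything here is imaginary, including the header names, since none of this exists):

using System.Web;

public class SsoDiscoveryModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        // Advertise the (imaginary) token type we accept and where
        // the browser should POST the user's token.
        app.PreSendRequestHeaders += (sender, e) =>
        {
            var response = ((HttpApplication)sender).Response;
            response.AppendHeader("Relying-Party-Accepting-Token-Type", "urn:sometokentype:www.somests.com");
            response.AppendHeader("Relying-Party-Token-Reply-Url", "https://login.myawesomesite.com/auth");
        };
    }

    public void Dispose() { }
}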

Finally, imagine if your browser was smart enough to intercept those headers, look up the user’s token, check that it matched the “Relying-Party-Accepting-Token-Type” header, and then POST the token to the given reply URL.

Hmm.  We’ve just made the internet SSO capable.

Now to just move everyone’s cheese to get this done.

Patent Pending. Winking smile
