

Creating Authority-Signed and Self-Signed Certificates in .NET

by Steve Syfuhs / February 09, 2014 03:19 PM

Whenever I get some free time I like to tackle certain projects that have piqued my interest. Oftentimes I don't get to complete these projects, or they take months to complete. In this case I've spent the last few months trying to get these samples to work. Hopefully you'll find them useful.

In the world of security, and more specifically in .NET, there aren't a whole lot of options for creating certificates for development. Sure, you could use makecert.exe, or if you're truly masochistic you could spin up a CA, but both are a pain to use and aren't necessarily useful when you need to consistently create signed certificates. Another option is a library like BouncyCastle, but that can be a bit complicated, and given the portable nature of the library, it doesn't use Windows APIs to do the work.

So I offer some sample code. This code should not be used in production. Please. Seriously. It's not that good. It's great for testing, but it's in no shape whatsoever for production systems. That's why CAs are built.

In any case I've put the code up on GitHub. There is no license, so use it as you see fit so long as it doesn't come back to bite me in the ass.

This gist shows how you can create self-signed certificates and how you can then sign those certificates with a CA's private key. The calling code is in the KeyGenSigning project, and the actual meat of the signing is done in the CertLib project. The key generation and signing bits are mostly P/Invoke'd APIs so they execute fairly fast.

Currently the code relies on CSPs to do the work. In theory it could work with CNG (NCrypt) keys too, but I haven't tried it yet.
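As an aside, this sample predates it, but on modern .NET (Core 2.0 and later) the built-in CertificateRequest API can do both jobs without any P/Invoke. A minimal sketch of the same idea (self-sign a CA, then sign a second certificate with it; still strictly for testing):

using System;
using System.Security.Cryptography;
using System.Security.Cryptography.X509Certificates;

class CertSample
{
    static void Main()
    {
        using (RSA caKey = RSA.Create(2048))
        using (RSA leafKey = RSA.Create(2048))
        {
            // Self-signed CA certificate.
            var caReq = new CertificateRequest("CN=Test CA", caKey,
                HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);
            caReq.CertificateExtensions.Add(
                new X509BasicConstraintsExtension(true, false, 0, true)); // CA:TRUE

            using (X509Certificate2 caCert = caReq.CreateSelfSigned(
                DateTimeOffset.UtcNow, DateTimeOffset.UtcNow.AddYears(5)))
            {
                // Certificate signed by the CA's private key.
                var leafReq = new CertificateRequest("CN=Test Client", leafKey,
                    HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);

                byte[] serial = new byte[8];
                using (var rng = RandomNumberGenerator.Create())
                    rng.GetBytes(serial);

                using (X509Certificate2 leafCert = leafReq.Create(caCert,
                    DateTimeOffset.UtcNow, DateTimeOffset.UtcNow.AddYears(1), serial))
                {
                    Console.WriteLine(leafCert.Issuer); // CN=Test CA
                }
            }
        }
    }
}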

In any case, enjoy. Hopefully you'll find it useful.

Windows Azure Pack Authentication Part 3 – Using a Third Party IdP

by Steve Syfuhs / February 07, 2014 06:22 PM

In the previous installments of this series we looked at how Windows Azure Pack authenticates users and how it’s configured out of the box for federation. This time around we’re going to look at how you can configure federation with a third party IdP.

Microsoft designed Windows Azure Pack the right way. It supports federation with industry protocols out of the box. You can’t say that for many services, and you certainly can’t say that those services support it natively for all versions – more often than not you have to pay extra for it.

Windows Azure Pack supports federation, and actually uses it to authenticate users by default. This little fact makes it easy to federate to a 3rd party IdP.

If we search around we can find lots of resources on federating to ADFS, as that's Microsoft's federation product, and there are a number of good walkthroughs (some in German) on how you can get it working. If you want to use ADFS go read one or all of those articles, as everything we talk about today will be about using a non-Microsoft federation service.

Before we begin though I’d like to point out that Microsoft does have some resources on using 3rd party IdPs, but unfortunately the information is a bit thin in some places.

Prerequisites

Federation is a complex beast and we should be clear about what is required to get it working. In no particular order you need the following:

  • STS that supports the WS-Federation (passive) protocol
  • STS that supports WS-Federation wrapped JSON Web Tokens (JWT)
  • Optional: STS that supports WS-Trust + JWT

If you plan to use the public APIs with federated accounts then you will need an STS that supports WS-Trust + JWT.

If you don't have an STS that can support these requirements then you should really consider taking a look at ADFS, or if you're looking for customization, Thinktecture Identity Server. Both are top notch IdPs (edit: insert pitch about the IdP my company builds and sells as well [edit-edit: our next version natively supports JWT] -- sorry, this concludes the not-so-regularly-scheduled product placement).

Another option is to roll your own IdP. Don’t do this. No seriously, don’t. It’s a complicated mess. You’re way better off using the Thinktecture server and extending it to fit your needs.

Supposing though that you already have an IdP and want to support JWT, here's how we can do it. In this context the IdP is the overarching identity-providing system and the STS is simply the service issuing tokens.

Skip this next section if you just want to see how to configure Windows Azure Pack. That’s the main part that’s lacking in the MSDN documentation.

JWT via IdentityModel

First off, you need to be using .NET 4.5, and you need to be using the 4.5 IdentityModel stack. You can't use the original 3.5 bits.

At this point I’m going to assume you’ve got a working IdP already. There are lots of articles out there explaining how to build one. We’re just going to mod the STS.

Before making any code changes though you need to add the JWT token handler, which is easily installed via NuGet (I ♥ NuGet):

PM> Install-Package System.IdentityModel.Tokens.Jwt

This will need to be added to the project that exposes your STS configuration class.

Next, we need to inject the token handler into the STS pipeline. This can easily be done by adding an entry to the web.config system.identityModel section:
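<system.identityModel>
  <identityConfiguration>
    <securityTokenHandlers>
      <add type="System.IdentityModel.Tokens.JWTSecurityTokenHandler, System.IdentityModel.Tokens.Jwt" />
    </securityTokenHandlers>
  </identityConfiguration>
</system.identityModel>

(Hedged: the class was named JWTSecurityTokenHandler in the 1.x package; later versions renamed it JwtSecurityTokenHandler, so match the type name to the version you pulled in.)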

Or if you want to hardcode it you can add it to your SecurityTokenServiceConfiguration class.
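A sketch of that approach (the issuer name here is hypothetical):

using System.IdentityModel.Configuration;
using System.IdentityModel.Tokens;

public class MyTokenServiceConfiguration : SecurityTokenServiceConfiguration
{
    public MyTokenServiceConfiguration()
        : base("http://azureservices/AuthSite") // hypothetical issuer name
    {
        // Swap in the JWT handler so it's used when a JWT token type is requested.
        SecurityTokenHandlers.AddOrReplace(new JWTSecurityTokenHandler());
    }
}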

There are of course other (potentially better) ways you can add it in, but this serves our purpose for the sake of a sample.

By adding the JWT token handler into the STS pipeline we can begin issuing JWTs to any relying parties that request one. This poses a problem though because passive requests don’t have a requested token type tacked on. Active (WS-Trust) requests do, but not passive. So we need to specify that a JWT should be minted instead of a SAML token. This can be done in the GetScope method of the STS class.
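A rough sketch of what that looks like (the relying-party lookup and the rest of the Scope setup are elided):

protected override Scope GetScope(ClaimsPrincipal principal, RequestSecurityToken request)
{
    // ... the usual relying-party checks and lookups go here ...
    var scope = new Scope(request.AppliesTo.Uri.OriginalString,
        SecurityTokenServiceConfiguration.SigningCredentials);
    scope.ReplyToAddress = scope.AppliesToAddress;
    scope.TokenEncryptionRequired = false;

    // The important bit: tell WIF to mint a JWT instead of the default SAML token.
    // This URI is one of the values JWTSecurityTokenHandler.GetTokenTypeIdentifiers() returns.
    request.TokenType = "urn:ietf:params:oauth:token-type:jwt";

    return scope;
}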

All we really needed to do was specify the TokenType as WIF will use that to determine which token handler should be used to mint the token. We know this is the value to use because it’s exposed by the GetTokenTypeIdentifiers() method in the JWTSecurityTokenHandler class.

Did I mention the JWT library is open source?

So now at this point if we made a request for a token to the STS we could receive a WS-Federation wrapped JWT.

If the idea of using a JWT instead of a SAML token appeals to you, you can configure your app to use the JWT token handler similar to Dominick’s sample.

If you were submitting a WS-Trust RST to the STS you could use client code along the lines of:
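(A sketch; the endpoint, binding, and credentials here are all made up.)

using System.IdentityModel.Protocols.WSTrust;
using System.IdentityModel.Tokens;
using System.ServiceModel;
using System.ServiceModel.Security;

var binding = new WS2007HttpBinding(SecurityMode.TransportWithMessageCredential);
binding.Security.Message.ClientCredentialType = MessageCredentialType.UserName;
binding.Security.Message.EstablishSecurityContext = false;

var factory = new WSTrustChannelFactory(binding,
    new EndpointAddress("https://auth-cloud.syfuhs.net/wstrust/issue")); // hypothetical endpoint
factory.TrustVersion = TrustVersion.WSTrust13;
factory.Credentials.UserName.UserName = "someuser";
factory.Credentials.UserName.Password = "somepassword";

var rst = new RequestSecurityToken
{
    RequestType = RequestTypes.Issue,
    KeyType = KeyTypes.Bearer,
    TokenType = "urn:ietf:params:oauth:token-type:jwt", // ask for a JWT
    AppliesTo = new EndpointReference("http://azureservices/TenantSite")
};

IWSTrustChannelContract channel = factory.CreateChannel();
SecurityToken token = channel.Issue(rst);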

When the GetScope method is called the request.TokenType should be set to whatever you passed in at the client. For more information on service calls you can take a look at the whitepaper Claims-Based Identity in Windows Azure Pack (docx). A future installment of this series might have more information about using services.

Lastly, we need to sign the JWT. The only caveat to using the JWT token handler is that the minimum RSA key size is 2048 bits. If you’re using a key smaller than that then please upgrade it. We’re going to overlook the fact that the MSDN article shows how to bypass minimum key sizes. Seriously. Don’t do it. I don’t want to have to explain why (putting paranoia aside for a moment, 1024 is being deprecated by Windows and related services in the near future anyway).

Issuing Tokens to Windows Azure Pack

So now we’re at a point where we can mint a JWT token. The question we need to ask now is what claims should this token contain? Looking at Part 1 we see that the Admin Portal requires UPN and Group claims. The tenant portal only requires the UPN claim.

Lucky for us the JWT token handler is smart. It knows to transform certain known XML-token-friendly-claim-types to JWT friendly claim types. In our case we can use http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn in our ClaimsIdentity to map to the UPN claim, and http://schemas.xmlsoap.org/claims/Group to map to our Group claim.
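So on the STS side the identity we issue might look something like this (the values are made up):

using System.Security.Claims;

var identity = new ClaimsIdentity(new[]
{
    // Both claim types get mapped to JWT-friendly claim types by the handler.
    new Claim("http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn", "user@tenant.local"),
    new Claim("http://schemas.xmlsoap.org/claims/Group", "TenantAdmins")
}, "Federation");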

Then we need to determine where to send the token, and who to address it to. Both the tenant and admin sites have Federation Metadata documents that specify this information for us. If you’ve got an IdP that can parse the metadata then all you need to do is point it to https://yourtenantsite/FederationMetadata/2007-06/FederationMetadata.xml for the tenant configuration or https://youradminsite/FederationMetadata/2007-06/FederationMetadata.xml for the admin configuration.

Of course, this information will also map up to the configuration elements we looked at in Part 2. That’ll tell us the Audience URI and the Reply To for both sites.

Finally we have everything we need to mint the token, address it, and send it on its way.

Configuring Windows Azure Pack to Trust your Token

The token's been sent, and once it hits either the tenant or admin site it'll promptly be ignored and you'll get an ugly error message saying "nope, not gonna happen, bub."

We therefore need to configure Windows Azure Pack to trust our token. Looking at MSDN we see some somewhat useful information telling us what we need to modify, but frankly, it's missing a bunch of information so we're going to ignore it.

First things first: if your IdP publishes a Federation Metadata document then you can just configure everything via PowerShell:
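Something along these lines (hedged: check Get-Help Set-MgmtSvcRelyingPartySettings for the exact parameter set on your build; $connectionString is a placeholder for your management database connection):

Set-MgmtSvcRelyingPartySettings -Target Admin `
    -MetadataEndpoint 'https://yourIdP/FederationMetadata/2007-06/FederationMetadata.xml' `
    -ConnectionString $connectionString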

You can replace the target "Admin" with "Tenant" if you want to configure the Tenant Portal. The only caveat with doing it this way is that the metadata document needs to be accessible from the server. I've submitted a feature request that they support local file paths too; hopefully they listen! Since the parameter takes the full URL you can put the metadata document somewhere public if it's not normally accessible. You only need the metadata accessible while applying this configuration.

If the cmdlet completed successfully then you should be able to log in from your own IdP. That's all there is to it. I would seriously recommend going this route instead of configuring things manually.

Otherwise, let's carry on.

Since we can’t import our federation metadata (since we probably don’t have any), we need to configure things manually. To do that we need to modify settings in the database.

Looking back to Part 2 we see all the configuration elements that enable our federated trust to the default IdPs. We’ll need to update a few settings across the Microsoft.MgmtSvc.Store and Microsoft.MgmtSvc.PortalConfigStore databases.

The MSDN documentation says to modify the settings in the PortalConfigStore database. It's wrong. Or rather, it's incomplete: that's only part of the process.

The PortalConfigStore database contains the settings used by the Tenant and Admin Portals to validate and request tokens. We need to modify these settings to use our custom IdP. To do so, locate the Authentication.IdentityProvider setting in the [Config].[Settings] table. The namespace we need to choose depends on which site we want to configure. In our case we select the Admin namespace. As we saw last time it looks something like:
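{
   "Realm":"http://azureservices/AuthSite",
   "Endpoint":"https://auth-cloud.syfuhs.net/wsfederation/issue",
   "Certificates":[
      "MIIC2...ADLt0="
   ]
}

(That's the value from my tenant-side configuration in Part 2; the Admin namespace entry has the same shape.)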

We need to substitute our STS information here. The Realm is whatever your STS issuer is, and the Endpoint is wherever your WS-Federation endpoint is located. The Certificate should be a base64-encoded representation of your signing certificate (remember, just the public key).
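If you want to see what the manual update boils down to, it's roughly this (the column names follow the key-value layout we saw in Part 2, but treat this as a hypothetical sketch and back up the database before touching anything):

UPDATE [Config].[Settings]
SET [Value] = '{"Realm":"https://my-sts/", "Endpoint":"https://my-sts/wsfederation/issue", "Certificates":["MIIC2...ADLt0="]}'
WHERE [Namespace] = 'AdminSite'
  AND [Name] = 'Authentication.IdentityProvider'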

In my experience I’ve had to do an IISRESET on the portals to get the settings refreshed. I might just be impatient though.

Once those values are replaced you can try logging in. You should be redirected to your IdP and if you issue the token properly it’ll hit the portal and you should be logged in. Unfortunately this’ll actually fail with a non-useful error message.


Who can guess why? So far I’ve stated that the MSDN documentation is missing information. What have we missed? Hopefully if you’ve read the first two parts of this series you’re yelling at the screen telling me to get on with it already because you’ve caught on to what I’m saying.

We haven’t configured the API services to trust our STS! Oops.

With that being said, we now have proof that Windows Azure Pack flows the token to the services from the Portal and, more importantly, the services validate the token. Cool!

Anyway, now to configure the APIs. Warning: complicated.

In the Microsoft.MgmtSvc.Store database locate the Settings table and then locate the Authentication.IdentityProvider.Secondary element in the AdminAPI namespace. We need to update it with the exact same values as we put into the configuration element in the other database.

If you’re only wanting to configure the Tenant Portal you’d want to modify the Authentication.IdentityProvider.Primary configuration element. Be careful with the Primary/Secondary elements as they can get confusing.

If you’re configuring the Admin Portal you’ll need to update the Authentication.IdentityProvider.Secondary configuration element in the TenantAPI namespace to use the configuration you specified for the Admin Portal as well. As I said previously, I think this is because the Admin Portal calls into the Tenant API. The Admin Portal will use an admin-trusted token – therefore the TenantAPI needs to trust the admin’s STS.

Now that you’ve completed configuration you can do an IISRESET and try logging in. If you configured everything properly you should now be able to log in from your own IdP.

Troubleshooting

For those rock star Ops people who understand identity this guide was likely pretty easy to follow, understand, and implement. For everyone else though, this was probably a pain in the neck. Here are some troubleshooting tips.

Review the Event Logs
It’s surprising how many people forget that a lot of applications will write errors to the Windows Event Log. Windows Azure Pack has quite a number of logs that you can review for more information. If you’re trying to track down an issue in the portals look in the MgmtSvc-*Site where * is Tenant or Admin. Errors will get logged there. If you’re stuck mucking about the APIs look in the MgmtSvc-*API where * is Tenant, Admin, or TenantPublic.

Enable Development Mode
You can enable developer mode in sites by modifying a value in the web.config. Unprotect the web.config by calling:
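PS C:\Windows\system32> Unprotect-MgmtSvcConfiguration -Namespace TenantSite

(Use whichever namespace matches the site you're modifying, e.g. AdminSite.)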

And then locate the appSetting named Microsoft.Azure.Portal.Configuration.PortalConfiguration.DevelopmentMode and set the value to true. Be sure to undo and re-protect the configuration when you're done. You should then get a neat error tracing window showing up in the portals, and more diagnostic information will be logged to the event logs. Probably not wise to do this in a production environment.
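The entry in question looks something like this once flipped:

<appSettings>
  <add key="Microsoft.Azure.Portal.Configuration.PortalConfiguration.DevelopmentMode" value="true" />
</appSettings>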

Use the PowerShell Cmdlets
There are quite a number of PowerShell cmdlets available for you to learn about the configuration of Windows Azure Pack. If you open the Windows Azure Pack Administration PowerShell console you can see that there are two modules that get loaded that are full of cmdlets:

PS C:\Windows\system32> get-command -Module MgmtSvcConfig

CommandType     Name                                               ModuleName
-----------     ----                                               ----------
Cmdlet          Add-MgmtSvcAdminUser                               MgmtSvcConfig
Cmdlet          Add-MgmtSvcDatabaseUser                            MgmtSvcConfig
Cmdlet          Add-MgmtSvcResourceProviderConfiguration           MgmtSvcConfig
Cmdlet          Get-MgmtSvcAdminUser                               MgmtSvcConfig
Cmdlet          Get-MgmtSvcDatabaseSetting                         MgmtSvcConfig
Cmdlet          Get-MgmtSvcDefaultDatabaseName                     MgmtSvcConfig
Cmdlet          Get-MgmtSvcEndpoint                                MgmtSvcConfig
Cmdlet          Get-MgmtSvcFeature                                 MgmtSvcConfig
Cmdlet          Get-MgmtSvcFqdn                                    MgmtSvcConfig
Cmdlet          Get-MgmtSvcNamespace                               MgmtSvcConfig
Cmdlet          Get-MgmtSvcNotificationSubscriber                  MgmtSvcConfig
Cmdlet          Get-MgmtSvcResourceProviderConfiguration           MgmtSvcConfig
Cmdlet          Get-MgmtSvcSchema                                  MgmtSvcConfig
Cmdlet          Get-MgmtSvcSetting                                 MgmtSvcConfig
Cmdlet          Initialize-MgmtSvcFeature                          MgmtSvcConfig
Cmdlet          Initialize-MgmtSvcProduct                          MgmtSvcConfig
Cmdlet          Install-MgmtSvcDatabase                            MgmtSvcConfig
Cmdlet          New-MgmtSvcMachineKey                              MgmtSvcConfig
Cmdlet          New-MgmtSvcPassword                                MgmtSvcConfig
Cmdlet          New-MgmtSvcResourceProviderConfiguration           MgmtSvcConfig
Cmdlet          New-MgmtSvcSelfSignedCertificate                   MgmtSvcConfig
Cmdlet          Protect-MgmtSvcConfiguration                       MgmtSvcConfig
Cmdlet          Remove-MgmtSvcAdminUser                            MgmtSvcConfig
Cmdlet          Remove-MgmtSvcDatabaseUser                         MgmtSvcConfig
Cmdlet          Remove-MgmtSvcNotificationSubscriber               MgmtSvcConfig
Cmdlet          Remove-MgmtSvcResourceProviderConfiguration        MgmtSvcConfig
Cmdlet          Reset-MgmtSvcPassphrase                            MgmtSvcConfig
Cmdlet          Set-MgmtSvcCeip                                    MgmtSvcConfig
Cmdlet          Set-MgmtSvcDatabaseSetting                         MgmtSvcConfig
Cmdlet          Set-MgmtSvcDatabaseUser                            MgmtSvcConfig
Cmdlet          Set-MgmtSvcFqdn                                    MgmtSvcConfig
Cmdlet          Set-MgmtSvcIdentityProviderSettings                MgmtSvcConfig
Cmdlet          Set-MgmtSvcNotificationSubscriber                  MgmtSvcConfig
Cmdlet          Set-MgmtSvcPassphrase                              MgmtSvcConfig
Cmdlet          Set-MgmtSvcRelyingPartySettings                    MgmtSvcConfig
Cmdlet          Set-MgmtSvcSetting                                 MgmtSvcConfig
Cmdlet          Test-MgmtSvcDatabase                               MgmtSvcConfig
Cmdlet          Test-MgmtSvcPassphrase                             MgmtSvcConfig
Cmdlet          Test-MgmtSvcProtectedConfiguration                 MgmtSvcConfig
Cmdlet          Uninstall-MgmtSvcDatabase                          MgmtSvcConfig
Cmdlet          Unprotect-MgmtSvcConfiguration                     MgmtSvcConfig
Cmdlet          Update-MgmtSvcV1Data                               MgmtSvcConfig

As well as the MgmtSvcAdmin module, which is geared more toward daily administration.

Read the Windows Azure Pack Claims Whitepaper
See here: Claims-Based Identity in Windows Azure Pack (docx).

Visit the Forums
When in doubt take a look at the forums and ask a question if you’re stuck.

Email Me
Lastly, you can contact me (steve@syfuhs.net) with any questions. I may not have answers but I might be able to find someone who can help.

Conclusion

In the first two parts of this series we looked at how authentication works, how it’s configured, and now in this installment we looked at how we can configure a third party IdP to log in to Windows Azure Pack. If you’re trying to configure Windows Azure Pack to use a custom IdP I imagine this part is the most complicated to figure out and hopefully it was documented well enough. I personally spent a fair amount of time fiddling with settings and most of the information I’ve gathered for this series has been the result of lots of trial and error. With any luck this series has proven useful to you and you have more luck with the configuration than I originally did.

Next time we’ll take a look at how we can consume the public APIs using a third party IdP for authentication.

In the future we might take a look at how we can authenticate requests to a service called from a Windows Azure Pack add-on, and how we can call into Windows Azure Pack APIs from an add-on.

Windows Azure Pack Authentication Part 2

by Steve Syfuhs / January 30, 2014 07:50 PM

Last time we took a look at how Windows Azure Pack authenticates users in the Admin Portal. In this post we are going to look at how authentication works in the Tenant Portal.

Authentication in the Tenant Portal works exactly the same way authentication in the Admin Portal works.

Detailed and informative explanation, right?

Actually, with any luck you've read, and more importantly were able to decipher, my explanations in the last post, because this time we're going to go a bit deeper into how authentication is configured. If that's the case then you know everything you need to know to continue on here. There are a couple minor differences between the Admin sites and Tenant sites, such as the tenant STS storing users in a standalone SQL database instead of Active Directory, and a set of public service endpoints that also federate with the Tenant STS. For the time being we can ignore the public API, but we may revisit it in the future.

[Diagram: the Tenant Portal, Tenant Auth Site, and API services, with the token flow between them]

One of the things this diagram doesn’t show is how the various services store configuration information. This is somewhat important because the Portals and APIs need to keep track of where the STS is, what is used to sign tokens, who is allowed to receive tokens, etc.

Since Windows Azure Pack is designed to be distributed in nature, it’s a fair bet most of the configuration is stored in databases. Let’s check the PowerShell cmdlets (horizontal spacing truncated a bit to fit):

PS C:\Windows\system32> Get-MgmtSvcDefaultDatabaseName

DefaultDatabaseName                        Description
-------------------                        -----------
Microsoft.MgmtSvc.Config                   Configuration store database
Microsoft.MgmtSvc.PortalConfigStore        Admin and Tenant sites database
Microsoft.MgmtSvc.Store                    Rest API layer database
Microsoft.MgmtSvc.MySQL                    MySQL resource provider database
Microsoft.MgmtSvc.SQLServer                SQLServer resource provider database
Microsoft.MgmtSvc.Usage                    Usage service database
Microsoft.MgmtSvc.WebAppGallery            WebApp Gallery resource provider database

Well that’s handy. It even describes what each database does. Looking at the databases on the server we see each one:

[Screenshot: the Windows Azure Pack databases listed on the server]

Looking at the descriptions we can immediately ignore anything that is described as a "resource provider database" because resource providers in Windows Azure Pack are the services exposed by the Portals and APIs. That leaves us with the Microsoft.MgmtSvc.Config, Microsoft.MgmtSvc.PortalConfigStore, Microsoft.MgmtSvc.Store, and Microsoft.MgmtSvc.Usage databases.

The usage database looks like the odd one out so if we peek into the tables we see configuration information and data for usage of the resource providers. Scratch that.

We're then left with a Config database, a PortalConfigStore database, and a Store database. How's that for useful naming conventions? Given the descriptions we can infer we likely only want to look in the PortalConfigStore database for the Tenant and Admin Portal configuration, and the Store database for the API configuration. To confirm that we can peek into the Config database and see what's there. If we look in the Settings table we see a bunch of encrypted key-value pairs. Nothing jumps out as being related to federation information like endpoints, claims, or signing certificates, but we do see pointers to database credentials.

If we quickly take a look at some of the web.config files in the various Windows Azure Pack sites we can see that some of them only have connection strings to the Config database. Actually, if we look at any of the web.config files we’ll see they are protected, so we need to unprotect them:

PS C:\Windows\system32> Unprotect-MgmtSvcConfiguration -Namespace TenantAPI

Please remember to protect-* them when you’re done snooping!

If we compare the connection strings and the information in the Config.Settings table, it's reasonable to hypothesize that the Config database stores pointers to the other configuration databases, so the sites only need a configured connection string to a single database. This seems to only apply to some sites though. The Portal sites actually have connection strings pointing only to the PortalConfigStore database. That actually makes sense from a security perspective though.

Since the Portal sites are public-ish facing, they are more likely to be attacked, and therefore really shouldn't have direct connections to databases storing sensitive information – hence the Web APIs. Looking at the architectural documentation on TechNet we can see it's recommended that API services not be public facing (with the exception of the Public APIs) as well, so that supports my assertion.

Moving on, we now have the PortalConfigStore and the Store databases left. The descriptions tell us everything we need to know about them. We end up with a service relationship along the lines of:

[Diagram: the Portal sites backed by the PortalConfigStore database, and the API services backed by the Store database]

Okay, now that we have a rough idea of how configuration data is stored we can peek into the databases and see what’s what.

Portal Sites Authentication

Starting with the PortalConfigStore database we see a collection of tables.

[Screenshot: the tables in the PortalConfigStore database]

The two tables that pop out are the Settings table and the aspnet_Users table. We know the Auth Site for the tenants stores users in a database, and lookie here we have a collection of users.

Next up is the Settings table. It’s a namespace-key-value-pair mapped table. Since this database stores information for multiple sites, it makes sense to separate configuration data into multiple realms – the namespaces.

There are 4 namespaces we care about:

  • AdminSite
  • AuthSite
  • TenantSite
  • WindowsAuthSite

Looking at the TenantSite configuration we see a few entries with JSON values:

  • Authentication.IdentityProvider
  • Authentication.RelyingParty

Aha! Here’s where we store the necessary bits to do the federation dance. The Authentication.RelyingParty entry stores the information describing the TenantSite. So when it goes to the IdP with a request it can use these values. In my case I’ve got the following:

{
   "EncryptionCertificate":null,
   "Realm":"http://azureservices/TenantSite",
   "ReplyTo":"https://manage-cloud.syfuhs.net/"
}

Really, just the bare minimum to describe the RP. The Realm, which is the unique identifier of the site, the Reply To URL which is where the token should be returned to, and the Encryption Certificate in case the returned token is encrypted – which it isn’t by default. With this information we can make a request to the IdP, but of course, we don’t know anything about the IdP yet so we need to look up that configuration information.

Looking at the Authentication.IdentityProvider entry we see everything else we need to complete a WS-Federation passive request for token. This is my configuration:

{
   "Realm":"http://azureservices/AuthSite",
   "Endpoint":"https://auth-cloud.syfuhs.net/wsfederation/issue",
   "Certificates":[
      "MIIC2...ADLt0="
   ]
}

To complete the request we actually only need the Endpoint as that describes where the request should be sent, but we also now have the information to validate the token response. The Realm describes who minted the token, and the Certificates element is a collection of certificates that could have been used to sign the token. If the token was signed by one of these certificates, we know it’s a valid token.

We do have to go one step further when validating this though, as we need to make sure the token is intended to be used by the Tenant Portal. This is done by comparing the audience URI in the token (see the last post) to the Realm in the Authentication.RelyingParty configuration value. If everything matches up we’re good to go.

We can see the configuration in the AdminSite namespace is similar too.

Next up we want to look at the AuthSite namespace configuration. There are similar entries to the TenantSite, but they serve slightly different purposes.

The Authentication.IdentityProvider entry matches the entry for the TenantSite. I’m not entirely sure of its purpose, but I suspect it might be a reference value for when changes are made and the original configuration is needed. Just a guess on that though.

Moving on we have the Authentication.RelyingParty.Primary entry. This value describes who can request a token, which in our case is the TenantSite. My entry looks like this:

{
   "EncryptionCertificate":null,
   "Realm":"http://azureservices/TenantSite",
   "ReplyTo":"https://manage-cloud.syfuhs.net/"
}

It's pretty similar to the configuration in the TenantSite entry. The Realm is the identifier of which site can request a token, and the Reply To URL is where the token should be returned once it's minted.

Compare that to the values in the WindowsAuthSite namespace and things look pretty similar too.

So with all that information we’ve figured out how the Portal sites and the Auth sites are configured. Of course, we haven’t looked at the APIs yet.

API Authentication

If you recall from the last post, the API calls are authenticated by attaching a JWT to the request header. The JWT has to be validated by the APIs the same way the Portals have to validate the JWTs received from the STS. If we look at the diagram above though, the API sites don't have access to the PortalConfigStore database; they have access to the Store database. Therefore it's reasonable to assume the Store database has a copy of the federation configuration data as well.

Looking at the Settings table we can confirm that assumption. It’s got the same schema as the Settings table in the PortalConfigStore database, though in this case there are different namespaces. There are two namespaces that are of interest here:

  • AdminAPI
  • TenantAPI

This aligns with the service diagram above. We should only have settings for the namespaces of services that actually touch the database.

If we look at the TenantAPI elements we have four entries:

  • Authentication.IdentityProvider.Primary
  • Authentication.IdentityProvider.Secondary
  • Authentication.RelyingParty.Primary
  • Authentication.RelyingParty.Secondary

The Authentication.IdentityProvider.Primary entry matches up with the TenantSite Authentication.IdentityProvider entry in the PortalConfigStore database. That makes sense since it needs to trust the token the same as the Tenant Portal site. The Secondary element is curious though. It's configured as a relying party to the Admin STS. I suspect that is there because the Tenant APIs can be called from the Admin Portal.

Comparing these values to the AdminAPI namespace we see that there are only configuration entries for the Admin STS. Seems reasonable, since the Tenant Portal probably shouldn't be calling into admin APIs. I haven't got a clue why the AdminAPI relying party is configured as Secondary though. An artifact of the design, I guess. Another interesting artifact of this configuration is that the ReplyTo values in the RelyingParty entries show the default values from when I first installed the services. We see something like:

{
   "EncryptionCertificate":null,
   "Realm":"http://azureservices/TenantSite",
   "ReplyTo":"https://syfuhs-cloud:30081/"
}

And

{
   "EncryptionCertificate":null,
   "Realm":"http://azureservices/AdminSite",
   "ReplyTo":"https://syfuhs-cloud:30091/"
}

I reconfigured the endpoints to be publicly accessible, so these values are now incorrect.

APIs can't really use Reply To the same way passive requests can, so it makes sense that they don't get updated – they don't have to be. The values don't have to be present either, but again, artifacts I guess.

Conclusion

In the previous post we looked at how authentication works conceptually, and in this post we looked at how authentication is configured in detail. Next time we’ll take a look at how we can reconfigure Windows Azure Pack to work with our own IdPs.

No spoilers this time.

Tamper-Evident Configuration Files in ASP.NET

by Steve Syfuhs / September 28, 2011 04:00 PM

A couple weeks ago someone sent a message to one of our internal mailing lists. His message was pretty straightforward: how do you prevent modifications of a configuration file for an application [while the user has administrative rights on the machine]?

There were a couple responses including mine, which was to cryptographically sign the configuration file with an asymmetric key. For a primer on digital signing, take a look here. Asymmetric signing is one possible way of signing a file. By signing it this way the configuration file could be signed by an administrator before deploying the application, and all the application needed to validate the signature was the public key associated with the private key used to sign the file. This separated the private key from the application, preventing the configuration from being re-signed maliciously. It’s similar in theory to how code-signing works.

In the event that validation of the configuration file failed, the application would not load, or would gracefully fail and exit the next time the file was checked (or the application had an exclusive lock on the configuration file so it couldn’t be edited while running).

We are also saved the problem of figuring out the signature format because there is a well-respected XML signature schema: http://www.w3.org/2000/09/xmldsig#. WCF uses this format to sign messages. For a good code walkthrough see Barry Dorrans' Beginning ASP.NET Security. More on the code later though.

Technically, this won’t prevent changes to the file, but it will prevent the application from accepting those changes. It’s kind of like those tamper-evident tags manufacturers stick on the enclosures of their equipment. It doesn’t prevent someone from opening the thing, but they will get caught if someone checks it. You’ll notice I didn’t call them “tamper-resistance” tags.

Given this problem, I went one step further and asked myself: how would I do this with a web application? A well-informed ASP.NET developer might suggest using aspnet_regiis to encrypt the configuration file. Encrypting the configuration does protect against certain things, like being able to read configuration data. However, there are a couple problems with this.

  • If I’m an administrator on that server I can easily decrypt the file by calling aspnet_regiis
  • If I’ve found a way to exploit the site, I can potentially overwrite the contents of the file and make the application behave differently
  • The encryption/decryption keys need to be shared in web farms

Consider our goal. We want to prevent a user with administrative privileges from modifying the configuration. Encryption does not help us in this case.  Signing the configuration will help though (As an aside, for more protection you encrypt the file then sign it, but that’s out of the scope of this) because the web application will stop working if a change is made that invalidates the signature.

Of course, there's one little problem. You can't stick the signature in the configuration file, because ASP.NET will complain about the foreign XML tag. The original application in question was assumed to have a custom XML file for its configuration, but in reality it doesn't, so this problem applies there too.

There are three possible solutions to this:

  • Create a custom ConfigurationSection class for the signature
  • Create a custom configuration file and handler, and intercept all calls to web.config
  • Stick the signature of the configuration file into a different file

The first option isn’t a bad idea, but I really didn’t want to muck about with the configuration classes. The second option is, well, pretty much a bad idea in almost all cases, mainly because I’m not entirely sure you can even intercept all calls to the configuration classes.

I went with option three.

The other file has two important parts: the signature of the web.config file, and a signature for itself. This second signature prevents someone from modifying the signature for the web.config file. Our code becomes a bit more complicated because now we need to validate both signatures.

This makes us ask the question: where is the validation handled? It needs to happen early enough in the request lifecycle, so I decided to stick it into an HTTP Module, for the sake of modularity.

Hold it, you say. If the code is in an HTTP Module, then it needs to be added to the web.config. If you are adding it to the web.config, and protecting the web.config with this module, then removing said module from the web.config will prevent the validation from occurring.

Yep.

There are two ways around this:

  • Add the validation call into Global.asax
  • Hard code the addition of the HTTP Module

It’s very rare that I take the easy approach, so I’ve decided to hard code the addition of the HTTP Module, because sticking the code into a module is cleaner.

In older versions of ASP.NET you had to make some pretty ugly hacks to get the module in because it needs to happen very early in startup of the web application. With ASP.NET 4.0, an assembly attribute was added that allowed you to call code almost immediately after startup:

[assembly: PreApplicationStartMethod(typeof(Syfuhs.Security.Web.Startup), "Go")]

Within the Startup class there is a public static method called Go(). This method calls Register() on an instance of my HTTP module. The Startup class itself is trivial; a sketch based on that description:
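
public static class Startup
{
    public static void Go()
    {
        // Register the module before the application pipeline starts up.
        new SignedConfigurationHttpModule().Register();
    }
}

The module inherits from an abstract class called DynamicallyLoadedHttpModule, which implements IHttpModule. That class looks like: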

public abstract class DynamicallyLoadedHttpModule : IHttpModule
{
    public void Register()
    {
        DynamicHttpApplication.RegisterModule(delegate(HttpApplication app) { return this; });
    }

    public abstract void Init(HttpApplication context);

    public abstract void Dispose();
}

The DynamicHttpApplication class inherits from HttpApplication and allows you to load HTTP modules in code. This code was not written by me. It was originally written by Nikhil Kothari:

using HttpModuleFactory = System.Func<System.Web.HttpApplication, System.Web.IHttpModule>;

public abstract class DynamicHttpApplication : HttpApplication
{
    private static readonly Collection<HttpModuleFactory> Factories = new Collection<HttpModuleFactory>();
    private static object _sync = new object();
    private static bool IsInitialized = false;

    private List<IHttpModule> modules;

    public override void Init()
    {
        base.Init();

        if (Factories.Count == 0)
            return;

        List<IHttpModule> dynamicModules = new List<IHttpModule>();

        lock (_sync)
        {
            if (Factories.Count == 0)
                return;

            foreach (HttpModuleFactory factory in Factories)
            {
                IHttpModule m = factory(this);

                if (m != null)
                {
                    m.Init(this);
                    dynamicModules.Add(m);
                }
            }
        }

        if (dynamicModules.Count != 0)
            modules = dynamicModules;

        IsInitialized = true;
    }

    public static void RegisterModule(HttpModuleFactory factory)
    {
        if (IsInitialized)
            throw new InvalidOperationException(Exceptions.CannotRegisterModuleLate);

        if (factory == null)
            throw new ArgumentNullException("factory");

        Factories.Add(factory);
    }

    public override void Dispose()
    {
        if (modules != null)
            modules.ForEach(m => m.Dispose());

        modules = null;
            
        base.Dispose();

        GC.SuppressFinalize(this);
    }
}

Finally, to get this all wired up we modify the Global.asax to inherit from DynamicHttpApplication:

public class Global : DynamicHttpApplication { ... }

Like I said, you could just add the validation code into Global (but where’s the fun in that?)…

So, now that we've made it possible to add the HTTP Module, let's actually look at the module:

public sealed class SignedConfigurationHttpModule : DynamicallyLoadedHttpModule
{
    public override void Init(HttpApplication context)
    {
        if (context == null)
            throw new ArgumentNullException("context");

        context.BeginRequest += new EventHandler(context_BeginRequest);
        context.Error += new EventHandler(context_Error);
    }

    private void context_BeginRequest(object sender, EventArgs e)
    {
        HttpApplication app = (HttpApplication)sender;

        SignatureValidator validator = new SignatureValidator(app.Request.PhysicalApplicationPath);

        validator.ValidateConfigurationSignatures(CertificateLocator.LocateSigningCertificate());
    }

    private void context_Error(object sender, EventArgs e)
    {
        HttpApplication app = (HttpApplication)sender;

        foreach (var exception in app.Context.AllErrors)
        {
            if (exception is XmlSignatureValidationFailedException)
            {
                // Maybe do something
                // Or don't...
                break;
            }
        }
    }

    public override void Dispose() { }
}

Nothing special here. Just hooking into the context.BeginRequest event so validation occurs on each request. There would be some performance impact as a result.

The core validation is contained within the SignatureValidator class, and there is a public method that we call to validate the signature file, ValidateConfigurationSignatures(…). This method accepts an X509Certificate2 to compare the signature against.

The specification for the schema we are using for the signature will actually encode the public key of the signing key pair into the signature element; however, we want to go one step further and make sure it's signed by a particular certificate. This will prevent someone from modifying the configuration file and re-signing it with a different private key. Validation of the signature is not enough; we need to make sure it's signed by someone we trust.

The validator first validates the schema of the signature file. Is the XML well formed? Does the signature file conform to a schema we defined (the schema is defined in a Constants class)? Following that it validates the signature of the file itself. Has the file been tampered with? Following that it validates the signature of the web.config file. Has the web.config file been tampered with?

Before it can do all of this though, it needs to check to see if the signature file exists. The variable passed into the constructor is the physical path of the web application. The validator knows that the signature file should be in the App_Data folder within the root. This file needs to be here because the folder by default will not let you access anything in it, and we don’t want anyone downloading the file. The path is also hardcoded specifically so changes to the configuration cannot bypass the signature file validation.

Here is the validator:

internal sealed class SignatureValidator
{
    public SignatureValidator(string physicalApplicationPath)
    {
        this.physicalApplicationPath = physicalApplicationPath;
        this.signatureFilePath = Path.Combine(this.physicalApplicationPath, "App_Data\\Signature.xml");
    }

    private string physicalApplicationPath;
    private string signatureFilePath;

    public void ValidateConfigurationSignatures(X509Certificate2 cert)
    {
        Permissions.DemandFilePermission(FileIOPermissionAccess.Read, this.signatureFilePath);

        if (cert == null)
            throw new ArgumentNullException("cert");

        if (cert.HasPrivateKey)
            throw new SecurityException(Exceptions.ValidationCertificateHasPrivateKey);

        if (!File.Exists(signatureFilePath))
            throw new SecurityException(Exceptions.CouldNotLoadSignatureFile);

        XmlDocument doc = new XmlDocument() { PreserveWhitespace = true };
        doc.Load(signatureFilePath);

        ValidateXmlSchema(doc);

        CheckForUnsignedConfig(doc);

        if (!X509CertificateCompare.Compare(cert, ValidateSignature(doc)))
            throw new XmlSignatureValidationFailedException(Exceptions.SignatureFileNotSignedByExpectedCertificate);

        List<XmlSignature> signatures = ParseSignatures(doc);

        ValidateSignatures(signatures, cert);
    }

    private void CheckForUnsignedConfig(XmlDocument doc)
    {
        List<string> signedFiles = new List<string>();

        foreach (XmlElement file in doc.GetElementsByTagName("File"))
        {
            string fileName = Path.Combine(this.physicalApplicationPath, file["FileName"].InnerText);

            signedFiles.Add(fileName.ToUpperInvariant());
        }

        CheckConfigFiles(signedFiles);
    }

    private void CheckConfigFiles(List<string> signedFiles)
    {
        foreach (string file in Directory.EnumerateFiles(this.physicalApplicationPath, "*.config", SearchOption.AllDirectories))
        {
            string path = Path.Combine(this.physicalApplicationPath, file);

            if (!signedFiles.Contains(path.ToUpperInvariant()))
                throw new XmlSignatureValidationFailedException(string.Format(CultureInfo.CurrentCulture, Exceptions.ConfigurationFileWithoutSignature, path));
        }
    }

    private void ValidateXmlSchema(XmlDocument doc)
    {
        using (StringReader fileReader = new StringReader(Constants.SignatureFileSchema))
        using (StringReader signatureReader = new StringReader(Constants.SignatureSchema))
        {
            XmlSchema fileSchema = XmlSchema.Read(fileReader, null);
            XmlSchema signatureSchema = XmlSchema.Read(signatureReader, null);

            doc.Schemas.Add(fileSchema);
            doc.Schemas.Add(signatureSchema);

            doc.Validate(Schemas_ValidationEventHandler);
        }
    }

    void Schemas_ValidationEventHandler(object sender, ValidationEventArgs e)
    {
        throw new XmlSignatureValidationFailedException(Exceptions.InvalidSchema, e.Exception);
    }

    public static X509Certificate2 ValidateSignature(XmlDocument xml)
    {
        if (xml == null)
            throw new ArgumentNullException("xml");

        XmlElement signature = ExtractSignature(xml.DocumentElement);

        return ValidateSignature(xml, signature);
    }

    public static X509Certificate2 ValidateSignature(XmlDocument doc, XmlElement signature)
    {
        if (doc == null)
            throw new ArgumentNullException("doc");

        if (signature == null)
            throw new ArgumentNullException("signature");

        X509Certificate2 signingCert = null;

        SignedXml signed = new SignedXml(doc);
        signed.LoadXml(signature);

        foreach (KeyInfoClause clause in signed.KeyInfo)
        {
            KeyInfoX509Data key = clause as KeyInfoX509Data;

            if (key == null || key.Certificates.Count != 1)
                continue;

            signingCert = (X509Certificate2)key.Certificates[0];
        }

        if (signingCert == null)
            throw new CryptographicException(Exceptions.SigningKeyNotFound);

        if (!signed.CheckSignature())
            throw new CryptographicException(Exceptions.SignatureValidationFailed);

        return signingCert;
    }

    private static void ValidateSignatures(List<XmlSignature> signatures, X509Certificate2 cert)
    {
        foreach (XmlSignature signature in signatures)
        {
            X509Certificate2 signingCert = ValidateSignature(signature.Document, signature.Signature);

            if (!X509CertificateCompare.Compare(cert, signingCert))
                throw new XmlSignatureValidationFailedException(
                    string.Format(CultureInfo.CurrentCulture, 
                    Exceptions.SignatureForFileNotSignedByExpectedCertificate, signature.FileName));
        }
    }

    private List<XmlSignature> ParseSignatures(XmlDocument doc)
    {
        List<XmlSignature> signatures = new List<XmlSignature>();

        foreach (XmlElement file in doc.GetElementsByTagName("File"))
        {
            string fileName = Path.Combine(this.physicalApplicationPath, file["FileName"].InnerText);

            Permissions.DemandFilePermission(FileIOPermissionAccess.Read, fileName);

            if (!File.Exists(fileName))
                throw new FileNotFoundException(string.Format(CultureInfo.CurrentCulture, Exceptions.FileNotFound, fileName));

            XmlDocument fileDoc = new XmlDocument() { PreserveWhitespace = true };
            fileDoc.Load(fileName);

            XmlElement sig = file["FileSignature"] as XmlElement;

            signatures.Add(new XmlSignature()
            {
                FileName = fileName,
                Document = fileDoc,
                Signature = ExtractSignature(sig)
            });
        }

        return signatures;
    }

    private static XmlElement ExtractSignature(XmlElement xml)
    {
        XmlNodeList xmlSignatureNode = xml.GetElementsByTagName("Signature");

        if (xmlSignatureNode.Count <= 0)
            throw new CryptographicException(Exceptions.SignatureNotFound);

        return xmlSignatureNode[xmlSignatureNode.Count - 1] as XmlElement;
    }
}

You’ll notice there is a bit of functionality I didn’t mention. Checking that the web.config file hasn’t been modified isn’t enough. We also need to check if any *other* configuration file has been modified. It’s no good if you leave the root configuration file alone, but modify the <authorization> tag within the administration folder to allow anonymous access, right?

So there is code that looks through the site for any files with the "config" extension, and if a file isn't in the signature file, it throws an exception.

There is also a check done at the very beginning of the validation. If you pass an X509Certificate2 with a private key it will throw an exception. This is absolutely by design. You sign the file with the private key. You validate with the public key. If the private key is present during validation that means you are not separating the keys, and all of this has been a huge waste of time because the private key is not protected. Oops.

Finally, it’s important to know how to sign the files. I’m not a fan of generating XML properly, partially because I’m lazy and partially because it’s a pain to do, so mind the StringBuilder:

public sealed class XmlSigner
{
    public XmlSigner(string appPath)
    {
        this.physicalApplicationPath = appPath;
    }

    string physicalApplicationPath;

    public XmlDocument SignFiles(string[] paths, X509Certificate2 cert)
    {
        if (paths == null || paths.Length == 0)
            throw new ArgumentNullException("paths");

        if (cert == null || !cert.HasPrivateKey)
            throw new ArgumentNullException("cert");

        XmlDocument doc = new XmlDocument() { PreserveWhitespace = true };
        StringBuilder sb = new StringBuilder();

        sb.Append("<Configuration>");
        sb.Append("<Files>");

        foreach (string p in paths)
        {
            sb.Append("<File>");

            sb.AppendFormat("<FileName>{0}</FileName>", p.Replace(this.physicalApplicationPath, ""));
            sb.AppendFormat("<FileSignature><Signature xmlns=\"http://www.w3.org/2000/09/xmldsig#\">{0}</Signature></FileSignature>", 
            SignFile(p, cert).InnerXml);

            sb.Append("</File>");
        }

        sb.Append("</Files>");
        sb.Append("</Configuration>");

        doc.LoadXml(sb.ToString());

        doc.DocumentElement.AppendChild(doc.ImportNode(SignXmlDocument(doc, cert), true));

        return doc;
    }

    public static XmlElement SignFile(string path, X509Certificate2 cert)
    {
        if (string.IsNullOrWhiteSpace(path))
            throw new ArgumentNullException("path");

        if (cert == null || !cert.HasPrivateKey)
            throw new ArgumentException(Exceptions.CertificateDoesNotContainPrivateKey);

        Permissions.DemandFilePermission(FileIOPermissionAccess.Read, path);

        XmlDocument doc = new XmlDocument();
        doc.PreserveWhitespace = true;
        doc.Load(path);

        return SignXmlDocument(doc, cert);
    }

    public static XmlElement SignXmlDocument(XmlDocument doc, X509Certificate2 cert)
    {
        if (doc == null)
            throw new ArgumentNullException("doc");

        if (cert == null || !cert.HasPrivateKey)
            throw new ArgumentException(Exceptions.CertificateDoesNotContainPrivateKey);

        SignedXml signed = new SignedXml(doc) { SigningKey = cert.PrivateKey };

        Reference reference = new Reference() { Uri = "" };

        XmlDsigC14NTransform transform = new XmlDsigC14NTransform();
        reference.AddTransform(transform);

        XmlDsigEnvelopedSignatureTransform envelope = new XmlDsigEnvelopedSignatureTransform();
        reference.AddTransform(envelope);
        signed.AddReference(reference);

        KeyInfo keyInfo = new KeyInfo();
        keyInfo.AddClause(new KeyInfoX509Data(cert));
        signed.KeyInfo = keyInfo;

        signed.ComputeSignature();

        XmlElement xmlSignature = signed.GetXml();

        return xmlSignature;
    }
}

To write this to a file you can call it like this:

XmlWriter writer = XmlWriter.Create(@"C:\Dev\Projects\Syfuhs.Security.Web\Syfuhs.Security.Web.WebTest\App_Data\Signature.xml");
XmlSigner signer = new XmlSigner(Request.PhysicalApplicationPath);

XmlDocument xml = signer.SignFiles(new string[] { 
@"C:\Dev\Projects\Syfuhs.Security.Web\Syfuhs.Security.Web.WebTest\Web.config",
@"C:\Dev\Projects\Syfuhs.Security.Web\Syfuhs.Security.Web.WebTest\Web.debug.config",
@"C:\Dev\Projects\Syfuhs.Security.Web\Syfuhs.Security.Web.WebTest\Web.release.config",
@"C:\Dev\Projects\Syfuhs.Security.Web\Syfuhs.Security.Web.WebTest\Account\Web.config",
@"C:\Dev\Projects\Syfuhs.Security.Web\Syfuhs.Security.Web.WebTest\test.config"
}, 
new X509Certificate2(@"C:\Dev\Projects\Syfuhs.Security.Web\Syfuhs.Security.Web.WebTest\cert.pfx", "1"));

xml.WriteTo(writer);
writer.Flush();

Now within this code, you have to pass in an X509Certificate2 with a private key, otherwise you can't sign the files.

These processes should occur on different machines. The private key should never be on the server hosting the site. The basic steps for deployment would go something like:

1. Compile web application.
2. Configure site and configuration files on staging server.
3. Run application that signs the configuration and generates the signature file.
4. Drop the signature.xml file into the App_Data folder.
5. Deploy configured and signed application to production.

There is one final note (I think I've made that note a few times by now…) and that is the CertificateLocator class. At the moment it just returns an X509Certificate2 from a particular path on my file system. This isn't necessarily the best approach because it may be possible to overwrite that file. You should store that certificate in a safe place, and make a secure call to get it. For instance a web service call might make sense. If you have a Hardware Security Module (HSM) to store secret bits in, even better.

Concluding Bits

What have we accomplished by signing our configuration files? We add a degree of trust that our application hasn’t been compromised. In the event that the configuration has been modified, the application stops working. This could be from malicious intent, or careless administrators. This is a great way to prevent one-off changes to configuration files in web farms. It is also a great way to prevent customers from mucking up the configuration file you’ve deployed with your application.

This solution was designed in a way that mitigates quite a few attacks. An attacker cannot modify configuration files. An attacker cannot modify the signature file. An attacker cannot view the signature file. An attacker cannot remove the signature file. An attacker cannot remove the HTTP Module that validates the signature without changing the underlying code. An attacker cannot change the underlying code because it's been compiled before being deployed.

Is it necessary to use on every deployment? No, probably not.

Does it go a little overboard with regard to complexity? Yeah, a little.

Does it protect against a real problem? Absolutely.

Unfortunately it also requires full trust.

Overall it’s a fairly robust solution and shows how you can mitigate certain types of risks seen in the real world.

And of course, it works with both WebForms and MVC.

You can download the full source: https://syfuhs.blob.core.windows.net/files/c6afeabc-36fc-4d41-9aa7-64cc9385280d-configurationsignaturevalidation.zip

Making the X509Store more Friendly

by Steve Syfuhs / May 12, 2011 04:00 PM

When you need to grab a certificate out of a Windows Certificate Store, you can use a class called X509Store.  It's very simple to use:

X509Store store = new X509Store(StoreName.My, StoreLocation.CurrentUser);

store.Open(OpenFlags.ReadOnly);

X509Certificate2Collection myCerts = store.Certificates.Find(X509FindType.FindByThumbprint, "...", false);

store.Close();

However, I don't like this open/close mechanism.  It reminds me too much of Dispose(), except I can't use a using statement.  There are lots of arguments around whether a using statement is a good way of doing things, and I'm in the camp of yes, it is.  Used properly, it makes code a lot more logical by creating an explicit scope for an object.  I want to do something like this:

using (X509Store store = new X509Store(StoreName.My, StoreLocation.CurrentUser, OpenFlags.ReadOnly))
{
    X509Certificate2Collection myCerts = store.Certificates.Find(X509FindType.FindByThumbprint, "...", false);
}

The simple solution would be to subclass this, implement IDisposable, and override some of the internals.  The problem though is that someone on the .NET team thought it would be wise to seal the class.  Crap.  Okay, let's create a new class:

public class X509Store2 : IDisposable
{
    private X509Store store;

    public X509Store2(IntPtr storeHandle, OpenFlags flags)
    {
        store = new X509Store(storeHandle);
        store.Open(flags);
    }

    public X509Store2(StoreLocation storeLocation, OpenFlags flags)
    {
        store = new X509Store(storeLocation);
        store.Open(flags);
    }

    public X509Store2(StoreName storeName, OpenFlags flags)
    {
        store = new X509Store(storeName);
        store.Open(flags);
    }

    public X509Store2(string storeName, OpenFlags flags)
    {
        store = new X509Store(storeName);
        store.Open(flags);
    }

    public X509Store2(StoreName storeName, StoreLocation storeLocation, OpenFlags flags)
    {
        store = new X509Store(storeName, storeLocation);
        store.Open(flags);
    }

    public X509Store2(string storeName, StoreLocation storeLocation, OpenFlags flags)
    {
        store = new X509Store(storeName, storeLocation);
        store.Open(flags);
    }

    public X509Certificate2Collection Certificates { get { return store.Certificates; } }

    public StoreLocation Location { get { return store.Location; } }

    public string Name { get { return store.Name; } }

    public IntPtr StoreHandle { get { return store.StoreHandle; } }

    public void Add(X509Certificate2 certificate)
    {
        store.Add(certificate);
    }

    public void AddRange(X509Certificate2Collection certificates)
    {
        store.AddRange(certificates);
    }

    private void Close()
    {
        store.Close();
    }

    private void Open(OpenFlags flags)
    {
        store.Open(flags);
    }

    public void Remove(X509Certificate2 certificate)
    {
        store.Remove(certificate);
    }
    public void RemoveRange(X509Certificate2Collection certificates)
    {
        store.RemoveRange(certificates);
    }

    public void Dispose()
    {
        this.Close();
    }
}

At this point I've copied all the public members of the X509Store class and called their counterparts in the wrapped store.  I've also made Open() and Close() private so they can't be called externally.  In theory I could just remove them, but I didn't.
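With the wrapper in place, the using statement I wanted at the top of the post now works:

using (X509Store2 store = new X509Store2(StoreName.My, StoreLocation.CurrentUser, OpenFlags.ReadOnly))
{
    // The constructor opened the store; Dispose() closes it when the block exits.
    X509Certificate2Collection myCerts = store.Certificates.Find(X509FindType.FindByThumbprint, "...", false);
}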

Enjoy!

Claims Transformation and Custom Attribute Stores in Active Directory Federation Services 2

by Steve Syfuhs / September 14, 2010 04:00 PM

Active Directory Federation Services 2 has an amazing amount of power when it comes to claims transformation.  To understand how it works, let’s take a look at a set of claims rules and the flow of data from ADFS to the Relying Party:

[Image: flow of claims from ADFS through the transform rules to the Relying Party]

We can have multiple rules to transform claims, and each one executes in sequence based on its Order:

[Image: list of claim rules, each with its Order of execution]

In the case above, Transform Rule 2 transformed the claims that Rule 1 requested from the attribute store, which in this case was Active Directory.  This becomes extremely useful because there are times when some of the data you need to pull out of Active Directory isn’t in a usable format.  There are a couple of options to fix this:

  • Make the receiving application deal with it
  • Modify it before sending it off
  • Ignore it

Let’s take a look at the second option (imagine an entire blog post on just ignoring it…).  ADFS allows us to transform claims before they are sent off in the token by way of the Claims Rule Language.  It follows the pattern: "If a set of conditions is true, issue one or more claims."  As such, it’s a big Boolean system.  Syntactically, it’s pretty straightforward.

To issue a claim by implicitly passing true:

=> issue(Type = "http://MyAwesomeUri/claims/AwesomeRole", Value = "Awesome Employee");

Since there aren’t any conditions to satisfy, that rule will always issue the claim with the given value.

To check a condition:

c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/role", Value == "SomeRole"]
    => issue(Type = "http://MyAwesomeUri/claims/AwesomeRole", Value = "AwesomeRole");

Breaking down the query, we are checking that a claim created in a previous step has a specific type (in this case role) and that its value is SomeRole.  Based on that, we append a new claim to the outgoing list with a new type and a new value.

That’s pretty useful in its own right, but ADFS can actually go even further by allowing you to pull data out of custom data stores by way of Custom Attribute Stores.  There are four options to choose from when getting data:

  1. Active Directory (default)
  2. LDAP (Any directory that you can query via LDAP)
  3. SQL Server (awesome)
  4. Custom built store via custom .NET assembly

Let’s get some data from a SQL database.  First we need to create the attribute store.  Go to Trust Relationships/Attribute Stores in the ADFS MMC Console (or you could also use PowerShell):

[Image: Trust Relationships/Attribute Stores node in the ADFS MMC console]

Then add an attribute store:

[Image: Add an Attribute Store dialog]

All you need is a connection string to the database in question:

[Image: attribute store properties showing the database connection string]

The next step is to create the query to pull data from the database.  It’s surprisingly straightforward.  This is a bit of a contrived example, but let’s grab the certificate name and the certificate hash from a database table where the certificate name is equal to the value of the http://MyCertUri/UserCertName claim type:

c:[Type == "http://MyCertUri/UserCertName"]
   => issue(store = "MyAttributeStore",
         types = ("http://test/CertName", "http://test/CertHash"),
         query = "SELECT CertificateName, CertificateHash FROM UserCertificates WHERE CertificateName='{0}'", param = c.Value);

For each column you request in the SQL query, you need a corresponding claim type.  Also, unlike most SQL queries, parameters use a String.Format-style placeholder ({0}) instead of the @MyVariable syntax.

In a nutshell, this is how you deal with claims transformation.  For a more in-depth article on how to do this, check out TechNet: http://technet.microsoft.com/en-us/library/dd807118(WS.10).aspx.

Certificates and ADFS 2.0

by Steve Syfuhs / August 14, 2010 04:00 PM

One of the problems with pushing all this data back and forth between Token Services, clients, and Relying Parties is that some of this information really needs to be encrypted.  If someone could eavesdrop on your communications and intercept your token, they could easily impersonate you.  We don’t want that.  As such, we use certificates in ADFS for EVERYTHING.

The problem with doing things this way is that certificates are a pain in the neck.  With ADFS we need at least three certificates for each server:

  • Service Communication certificate: This certificate is used for SSL communications for web services and connections between proxies.  This is the certificate used by IIS for the Federation Service site.
  • Token Signing certificate: This certificate is used to sign all tokens passed to the client.
  • Token decryption certificate: This certificate is used for decrypting incoming tokens.  This would be the private key for the Service Communication certificate.

Managing these certificates isn’t easy.  Sometimes we can get away with just using our domain CA; other times we need 3rd party CAs to sign them.  Microsoft has provided lots of guidance on this, but it’s not the easiest to find.  You can access it on TechNet (Certificate Requirements for Federation Servers): http://technet.microsoft.com/en-ca/library/dd807040(WS.10).aspx

Hopefully that helps.

Working with Certificates in Code

by Steve Syfuhs / August 04, 2010 04:00 PM

Just a quick little collection of useful code snippets for dealing with certificates.  Some of these don’t really need to be in their own methods, but breaking them out helps with clarity.

Namespaces for Everything

using System.Security.Cryptography.X509Certificates;
using System.Security;

Save Certificate to Store

// Nothing fancy here.  Just a helper method to parse strings.
private StoreName parseStoreName(string name)
{
    return (StoreName)Enum.Parse(typeof(StoreName), name);
}
	
// Same here
private StoreLocation parseStoreLocation(string location)
{
    return (StoreLocation)Enum.Parse(typeof(StoreLocation), location);
}
	
private void saveCertToStore(X509Certificate2 x509Certificate2, StoreName storeName, StoreLocation storeLocation)
{
    X509Store store = new X509Store(storeName, storeLocation);

    store.Open(OpenFlags.ReadWrite);
    store.Add(x509Certificate2);

    store.Close();
}
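As a quick sketch of how these helpers tie together (the path and password below are placeholders, not from the original post):

// Hypothetical usage: store name and location arrive as strings, e.g. from config.
X509Certificate2 cert = new X509Certificate2(@"C:\path\to\cert.pfx", "password");
saveCertToStore(cert, parseStoreName("My"), parseStoreLocation("CurrentUser"));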

Create Certificate from byte[] array

private X509Certificate2 CreateCertificateFromByteArray(byte[] certFile)
{
    // will throw an exception if the bytes contain a password-protected private key
    return new X509Certificate2(certFile);
}

The comment notes that this constructor will throw if the byte array contains a certificate with a password-protected private key (a PFX, for example). If you don't pass the password as a parameter, it throws a System.Security.Cryptography.CryptographicException.
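If you do have the password, the overload that accepts it handles that case. A minimal sketch, assuming the password argument is whatever protects the PFX:

private X509Certificate2 CreateCertificateFromByteArray(byte[] certFile, string password)
{
    // the password unlocks the private key bundled in the PFX bytes
    return new X509Certificate2(certFile, password);
}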

Get Certificate from Store by Thumbprint

private bool FindCertInStore(
    string thumbprint, 
    StoreName storeName, 
    StoreLocation storeLocation, 
    out X509Certificate2 theCert)
{
    theCert = null;
    X509Store store = new X509Store(storeName, storeLocation);

    try
    {
        store.Open(OpenFlags.ReadOnly); // we only need read access to find a certificate

        string thumbprintFixed = thumbprint.Replace(" ", "").ToUpperInvariant();

        foreach (var cert in store.Certificates)
        {
            if (cert.Thumbprint.ToUpperInvariant().Equals(thumbprintFixed))
            {
                theCert = cert;

                return true;
            }
        }

        return false;
    }
    finally
    {
        store.Close();
    }
}
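Calling it is straightforward (the thumbprint here is a placeholder):

X509Certificate2 cert;

if (FindCertInStore("...", StoreName.My, StoreLocation.CurrentUser, out cert))
{
    // cert now holds the matching certificate
}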

Have fun!
