
Windows Azure Pack Authentication Part 3 – Using a Third Party IdP

by Steve Syfuhs / February 07, 2014 06:22 PM

In the previous installments of this series we looked at how Windows Azure Pack authenticates users and how it’s configured out of the box for federation. This time around we’re going to look at how you can configure federation with a third party IdP.

Microsoft designed Windows Azure Pack the right way. It supports federation with industry protocols out of the box. You can’t say that for many services, and you certainly can’t say that those services support it natively for all versions – more often than not you have to pay extra for it.

Windows Azure Pack supports federation, and actually uses it to authenticate users by default. This little fact makes it easy to federate to a 3rd party IdP.

If you search around you will find lots of resources on federating to ADFS, as that’s Microsoft’s federation product, including a number of good walkthroughs (some in German) on how to get it working. If you want to use ADFS, go read one or all of those articles, because everything we talk about today is about using a non-Microsoft federation service.

Before we begin though I’d like to point out that Microsoft does have some resources on using 3rd party IdPs, but unfortunately the information is a bit thin in some places.

Prerequisites

Federation is a complex beast and we should be clear about what is required to get it working. In no particular order you need the following:

  • STS that supports the WS-Federation (passive) protocol
  • STS that supports WS-Federation wrapped JSON Web Tokens (JWT)
  • Optional: STS that supports WS-Trust + JWT

If you plan to use the public APIs with federated accounts then you will need an STS that supports WS-Trust + JWT.

If you don’t have an STS that can support these requirements then you should really consider taking a look at ADFS, or if you’re looking for customization, Thinktecture Identity Server. Both are top notch IdPs (edit: insert pitch about the IdP my company builds and sells as well [edit-edit: our next version natively supports JWT] ;) -- sorry, this concludes the not-so-regularly-scheduled product placement).

Another option is to roll your own IdP. Don’t do this. No seriously, don’t. It’s a complicated mess. You’re way better off using the Thinktecture server and extending it to fit your needs.

Supposing, though, that you already have an IdP and want to support JWT, here’s how we can do it. In this context the IdP is the overarching identity-providing system and the STS is simply the service issuing tokens.

Skip this next section if you just want to see how to configure Windows Azure Pack. That’s the main part that’s lacking in the MSDN documentation.

JWT via IdentityModel

First off, you need to be using .NET 4.5, and you need to be using the 4.5 IdentityModel stack. You can’t use the original 3.5 bits.

At this point I’m going to assume you’ve got a working IdP already. There are lots of articles out there explaining how to build one. We’re just going to mod the STS.

Before making any code changes though you need to add the JWT token handler, which is easily installed via NuGet (I ♥ NuGet):

PM> Install-Package System.IdentityModel.Tokens.Jwt

This will need to be added to the project that exposes your STS configuration class.

Next, we need to inject the token handler into the STS pipeline. This can easily be done by adding an entry to the web.config system.identityModel section:
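Something along these lines should do it (a sketch only; I’m assuming the .NET 4.5 system.identityModel section and the handler type name shipped by the NuGet package, so double-check both against your project):

<system.identityModel>
  <identityConfiguration>
    <securityTokenHandlers>
      <!-- Register the JWT handler alongside the default SAML handlers. -->
      <add type="System.IdentityModel.Tokens.JwtSecurityTokenHandler, System.IdentityModel.Tokens.Jwt" />
    </securityTokenHandlers>
  </identityConfiguration>
</system.identityModel>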

Or if you want to hardcode it you can add it to your SecurityTokenServiceConfiguration class.
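For example, a minimal sketch (the class and issuer names here are made up, and the handler class may be spelled JwtSecurityTokenHandler or JWTSecurityTokenHandler depending on the package version you’re on):

// Requires System.IdentityModel.Configuration and the JWT NuGet package.
public class CustomSecurityTokenServiceConfiguration : SecurityTokenServiceConfiguration
{
    public CustomSecurityTokenServiceConfiguration()
        : base("http://my-sts/issuer")
    {
        // Register the JWT handler alongside the default SAML handlers.
        SecurityTokenHandlers.Add(new JwtSecurityTokenHandler());
    }
}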

There are of course other (potentially better) ways you can add it in, but this serves our purpose for the sake of a sample.

By adding the JWT token handler into the STS pipeline we can begin issuing JWTs to any relying parties that request one. This poses a problem though because passive requests don’t have a requested token type tacked on. Active (WS-Trust) requests do, but not passive. So we need to specify that a JWT should be minted instead of a SAML token. This can be done in the GetScope method of the STS class.
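Roughly speaking, the override inside your SecurityTokenService subclass might look like the sketch below. The token type URI is the identifier I’ve seen the JWT handler register; verify it against GetTokenTypeIdentifiers() in your version of the library, and treat the scope wiring as placeholder code.

protected override Scope GetScope(ClaimsPrincipal principal, RequestSecurityToken request)
{
    // Passive (WS-Federation) requests don't carry a token type, so set it explicitly.
    if (string.IsNullOrEmpty(request.TokenType))
    {
        request.TokenType = "urn:ietf:params:oauth:token-type:jwt";
    }

    Scope scope = new Scope(request.AppliesTo.Uri.AbsoluteUri,
                            SecurityTokenServiceConfiguration.SigningCredentials);
    scope.TokenEncryptionRequired = false;
    scope.ReplyToAddress = scope.AppliesToAddress;

    return scope;
}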

All we really needed to do was specify the TokenType as WIF will use that to determine which token handler should be used to mint the token. We know this is the value to use because it’s exposed by the GetTokenTypeIdentifiers() method in the JWTSecurityTokenHandler class.

Did I mention the JWT library is open source?

So now at this point if we made a request for a token to the STS we could receive a WS-Federation wrapped JWT.

If the idea of using a JWT instead of a SAML token appeals to you, you can configure your app to use the JWT token handler similar to Dominick’s sample.

If you were submitting a WS-Trust RST to the STS you could use client code along the lines of:
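Something like this sketch, for instance (endpoint, credentials, and realm are placeholders, and I’m assuming a username-credential WS-Trust 1.3 endpoint on the STS):

// Requires System.ServiceModel, System.ServiceModel.Security, and System.IdentityModel.Protocols.WSTrust.
var binding = new WS2007HttpBinding(SecurityMode.TransportWithMessageCredential);
binding.Security.Message.ClientCredentialType = MessageCredentialType.UserName;
binding.Security.Message.EstablishSecurityContext = false;

var factory = new WSTrustChannelFactory(binding, "https://your-sts/wstrust/13/username");
factory.TrustVersion = TrustVersion.WSTrust13;
factory.Credentials.UserName.UserName = "user@your-domain.com";
factory.Credentials.UserName.Password = "password";

var rst = new RequestSecurityToken
{
    RequestType = RequestTypes.Issue,
    KeyType = KeyTypes.Bearer,
    TokenType = "urn:ietf:params:oauth:token-type:jwt",   // ask for a JWT explicitly
    AppliesTo = new EndpointReference("http://azureservices/TenantSite")
};

var channel = factory.CreateChannel();
var token = channel.Issue(rst);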

When the GetScope method is called the request.TokenType should be set to whatever you passed in at the client. For more information on service calls you can take a look at the whitepaper Claims-Based Identity in Windows Azure Pack (docx). A future installment of this series might have more information about using services.

Lastly, we need to sign the JWT. The only caveat to using the JWT token handler is that the minimum RSA key size is 2048 bits. If you’re using a smaller key, please upgrade it. We’re going to overlook the fact that the MSDN article shows how to bypass minimum key sizes. Seriously, don’t do it. I don’t want to have to explain why (putting paranoia aside for a moment, 1024-bit keys are being deprecated by Windows and related services in the near future anyway).

Issuing Tokens to Windows Azure Pack

So now we’re at a point where we can mint a JWT token. The question we need to ask now is what claims should this token contain? Looking at Part 1 we see that the Admin Portal requires UPN and Group claims. The tenant portal only requires the UPN claim.

Lucky for us the JWT token handler is smart. It knows to transform certain known XML-token-friendly-claim-types to JWT friendly claim types. In our case we can use http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn in our ClaimsIdentity to map to the UPN claim, and http://schemas.xmlsoap.org/claims/Group to map to our Group claim.
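As a rough sketch, the identity handed to the token handler might be built like this (the user and group values are just placeholders):

// Requires System.Security.Claims (.NET 4.5).
var identity = new ClaimsIdentity(new[]
{
    new Claim("http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn", "admin@contoso.com"),
    new Claim("http://schemas.xmlsoap.org/claims/Group", "WAP Administrators")
}, "Federation");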

Then we need to determine where to send the token, and who to address it to. Both the tenant and admin sites have Federation Metadata documents that specify this information for us. If you’ve got an IdP that can parse the metadata then all you need to do is point it to https://yourtenantsite/FederationMetadata/2007-06/FederationMetadata.xml for the tenant configuration or https://youradminsite/FederationMetadata/2007-06/FederationMetadata.xml for the admin configuration.

Of course, this information will also map up to the configuration elements we looked at in Part 2. That’ll tell us the Audience URI and the Reply To for both sites.

Finally we have everything we need to mint the token, address it, and send it on its way.

Configuring Windows Azure Pack to Trust your Token

The token’s been sent, and once it hits either the tenant or admin site it’ll promptly be ignored and you’ll get an ugly error message saying “nope, not gonna happen, bub.”

We therefore need to configure Windows Azure Pack to trust our token. Looking at MSDN we see some somewhat useful information telling us what we need to modify, but frankly, it’s missing a bunch of information so we’re going to ignore it.

First things first: if your IdP publishes a Federation Metadata document then you can just configure everything via PowerShell:
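The cmdlet takes the target portal and the metadata URL, roughly like this (the connection string and URLs are placeholders, and the exact parameter set may differ slightly between builds):

Set-MgmtSvcRelyingPartySettings -Target Admin `
    -MetadataEndpoint 'https://your-idp/FederationMetadata/2007-06/FederationMetadata.xml' `
    -ConnectionString 'Data Source=your-sql-server;Initial Catalog=Microsoft.MgmtSvc.PortalConfigStore;User ID=sa;Password=...'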

You can replace the target “Admin” with “Tenant” if you want to configure the Tenant Portal. The only caveat with doing it this way is that the metadata document needs to be accessible from the server. I’ve submitted a feature request that they support local file paths as well; hopefully they listen! Since the parameter takes the full URL you can put the metadata document somewhere public if it’s not normally accessible. You only need the metadata accessible while applying this configuration.

If the cmdlet completed successfully then you should be able to log in from your own IdP. That’s all there is to it for you. I would recommend seriously considering going this route instead of configuring things manually.

Otherwise, let’s carry on.

Since we can’t import our federation metadata (since we probably don’t have any), we need to configure things manually. To do that we need to modify settings in the database.

Looking back to Part 2 we see all the configuration elements that enable our federated trust to the default IdPs. We’ll need to update a few settings across the Microsoft.MgmtSvc.Store and Microsoft.MgmtSvc.PortalConfigStore databases.

The MSDN documentation says to modify the settings in the PortalConfigStore database. It’s wrong, or at least incomplete, as that’s only part of the process.

The PortalConfigStore database contains the settings used by the Tenant and Admin Portals to validate and request tokens. We need to modify these settings to use our custom IdP. To do so locate the Authentication.IdentityProvider setting in the [Config].[Settings] table.  The namespace we need to choose is dependent on which site we want to configure. In our case we select the Admin namespace. As we saw last time it looks something like:
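It has the same shape as the Authentication.IdentityProvider entry we pull apart in Part 2, something along these lines (the values below are placeholders standing in for the default Windows Auth Site entries):

{
   "Realm":"http://azureservices/WindowsAuthSite",
   "Endpoint":"https://your-windows-auth-site/wsfederation/issue",
   "Certificates":[
      "MIIC2...ADLt0="
   ]
}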

We need to substitute our STS information here. The Realm is whatever your STS issuer is, and the Endpoint is wherever your WS-Federation endpoint is located. The Certificate should be a Base64-encoded representation of your signing certificate (remember, just the public key).
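If you need to grab that Base64 blob, a quick way is to pull the public key out of the certificate store in PowerShell (the thumbprint is a placeholder):

$cert = Get-Item Cert:\LocalMachine\My\THUMBPRINTGOESHERE
[Convert]::ToBase64String($cert.RawData)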

In my experience I’ve had to do an IISRESET on the portals to get the settings refreshed. I might just be impatient though.

Once those values are replaced you can try logging in. You should be redirected to your IdP and if you issue the token properly it’ll hit the portal and you should be logged in. Unfortunately this’ll actually fail with a non-useful error message.

Who can guess why? So far I’ve stated that the MSDN documentation is missing information. What have we missed? Hopefully if you’ve read the first two parts of this series you’re yelling at the screen telling me to get on with it already because you’ve caught on to what I’m saying.

We haven’t configured the API services to trust our STS! Oops.

With that being said, we now have proof that Windows Azure Pack flows the token to the services from the Portal and, more importantly, the services validate the token. Cool!

Anyway, now to configure the APIs. Warning: complicated.

In the Microsoft.MgmtSvc.Store database locate the Settings table and then locate the Authentication.IdentityProvider.Secondary element in the AdminAPI namespace. We need to update it with the exact same values as we put in to the configuration element in the other database.
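In raw T-SQL that’s something like the following (a sketch only; I’m assuming the same [Config].[Settings] schema as the PortalConfigStore database, and the JSON value is obviously placeholder data):

UPDATE [Config].[Settings]
SET [Value] = '{"Realm":"http://your-sts/issuer","Endpoint":"https://your-sts/wsfederation/issue","Certificates":["MIIC2...ADLt0="]}'
WHERE [Namespace] = 'AdminAPI'
  AND [Name] = 'Authentication.IdentityProvider.Secondary';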

If you’re only wanting to configure the Tenant Portal you’d want to modify the Authentication.IdentityProvider.Primary configuration element. Be careful with the Primary/Secondary elements as they can get confusing.

If you’re configuring the Admin Portal you’ll need to update the Authentication.IdentityProvider.Secondary configuration element in the TenantAPI namespace to use the configuration you specified for the Admin Portal as well. As I said previously, I think this is because the Admin Portal calls into the Tenant API. The Admin Portal will use an admin-trusted token – therefore the TenantAPI needs to trust the admin’s STS.

Now that you’ve completed configuration you can do an IISRESET and try logging in. If you configured everything properly you should now be able to log in from your own IdP.

Troubleshooting

For those rock star Ops people who understand identity this guide was likely pretty easy to follow, understand, and implement. For everyone else though, this was probably a pain in the neck. Here are some troubleshooting tips.

Review the Event Logs
It’s surprising how many people forget that a lot of applications will write errors to the Windows Event Log. Windows Azure Pack has quite a number of logs that you can review for more information. If you’re trying to track down an issue in the portals, look in the MgmtSvc-*Site logs, where * is Tenant or Admin. Errors will get logged there. If you’re stuck mucking about in the APIs, look in the MgmtSvc-*API logs, where * is Tenant, Admin, or TenantPublic.

Enable Development Mode
You can enable developer mode in sites by modifying a value in the web.config. Unprotect the web.config by calling:
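For example, for the Tenant Portal (swap in the appropriate namespace for the site you’re poking at):

Unprotect-MgmtSvcConfiguration -Namespace TenantSite
# ...and when you're done editing:
Protect-MgmtSvcConfiguration -Namespace TenantSite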

And then locate the appSetting named Microsoft.Azure.Portal.Configuration.PortalConfiguration.DevelopmentMode and set the value to true. Be sure to undo the change and re-protect the configuration when you’re done. You should then get a neat error tracing window showing up in the portals, and more diagnostic information will be logged to the event logs. It’s probably not wise to do this in a production environment.

Use the PowerShell CmdLets
There are quite a number of PowerShell cmdlets available for you to learn about the configuration of Windows Azure Pack. If you open the Windows Azure Pack Administration PowerShell console you can see that there are two modules that get loaded that are full of cmdlets:

PS C:\Windows\system32> get-command -Module MgmtSvcConfig

CommandType     Name                                               ModuleName
-----------     ----                                               ----------
Cmdlet          Add-MgmtSvcAdminUser                               MgmtSvcConfig
Cmdlet          Add-MgmtSvcDatabaseUser                            MgmtSvcConfig
Cmdlet          Add-MgmtSvcResourceProviderConfiguration           MgmtSvcConfig
Cmdlet          Get-MgmtSvcAdminUser                               MgmtSvcConfig
Cmdlet          Get-MgmtSvcDatabaseSetting                         MgmtSvcConfig
Cmdlet          Get-MgmtSvcDefaultDatabaseName                     MgmtSvcConfig
Cmdlet          Get-MgmtSvcEndpoint                                MgmtSvcConfig
Cmdlet          Get-MgmtSvcFeature                                 MgmtSvcConfig
Cmdlet          Get-MgmtSvcFqdn                                    MgmtSvcConfig
Cmdlet          Get-MgmtSvcNamespace                               MgmtSvcConfig
Cmdlet          Get-MgmtSvcNotificationSubscriber                  MgmtSvcConfig
Cmdlet          Get-MgmtSvcResourceProviderConfiguration           MgmtSvcConfig
Cmdlet          Get-MgmtSvcSchema                                  MgmtSvcConfig
Cmdlet          Get-MgmtSvcSetting                                 MgmtSvcConfig
Cmdlet          Initialize-MgmtSvcFeature                          MgmtSvcConfig
Cmdlet          Initialize-MgmtSvcProduct                          MgmtSvcConfig
Cmdlet          Install-MgmtSvcDatabase                            MgmtSvcConfig
Cmdlet          New-MgmtSvcMachineKey                              MgmtSvcConfig
Cmdlet          New-MgmtSvcPassword                                MgmtSvcConfig
Cmdlet          New-MgmtSvcResourceProviderConfiguration           MgmtSvcConfig
Cmdlet          New-MgmtSvcSelfSignedCertificate                   MgmtSvcConfig
Cmdlet          Protect-MgmtSvcConfiguration                       MgmtSvcConfig
Cmdlet          Remove-MgmtSvcAdminUser                            MgmtSvcConfig
Cmdlet          Remove-MgmtSvcDatabaseUser                         MgmtSvcConfig
Cmdlet          Remove-MgmtSvcNotificationSubscriber               MgmtSvcConfig
Cmdlet          Remove-MgmtSvcResourceProviderConfiguration        MgmtSvcConfig
Cmdlet          Reset-MgmtSvcPassphrase                            MgmtSvcConfig
Cmdlet          Set-MgmtSvcCeip                                    MgmtSvcConfig
Cmdlet          Set-MgmtSvcDatabaseSetting                         MgmtSvcConfig
Cmdlet          Set-MgmtSvcDatabaseUser                            MgmtSvcConfig
Cmdlet          Set-MgmtSvcFqdn                                    MgmtSvcConfig
Cmdlet          Set-MgmtSvcIdentityProviderSettings                MgmtSvcConfig
Cmdlet          Set-MgmtSvcNotificationSubscriber                  MgmtSvcConfig
Cmdlet          Set-MgmtSvcPassphrase                              MgmtSvcConfig
Cmdlet          Set-MgmtSvcRelyingPartySettings                    MgmtSvcConfig
Cmdlet          Set-MgmtSvcSetting                                 MgmtSvcConfig
Cmdlet          Test-MgmtSvcDatabase                               MgmtSvcConfig
Cmdlet          Test-MgmtSvcPassphrase                             MgmtSvcConfig
Cmdlet          Test-MgmtSvcProtectedConfiguration                 MgmtSvcConfig
Cmdlet          Uninstall-MgmtSvcDatabase                          MgmtSvcConfig
Cmdlet          Unprotect-MgmtSvcConfiguration                     MgmtSvcConfig
Cmdlet          Update-MgmtSvcV1Data                               MgmtSvcConfig

As well as the MgmtSvcAdmin module, which is more geared toward day-to-day administration.

Read the Windows Azure Pack Claims Whitepaper
See here: Claims-Based Identity in Windows Azure Pack (docx).

Visit the Forums
When in doubt take a look at the forums and ask a question if you’re stuck.

Email Me
Lastly, you can contact me (steve@syfuhs.net) with any questions. I may not have answers but I might be able to find someone who can help.

Conclusion

In the first two parts of this series we looked at how authentication works and how it’s configured, and in this installment we looked at how to configure a third party IdP to log in to Windows Azure Pack. If you’re trying to configure Windows Azure Pack to use a custom IdP, I imagine this is the most complicated part to figure out, and hopefully it’s documented well enough here. I personally spent a fair amount of time fiddling with settings, and most of the information I’ve gathered for this series has been the result of lots of trial and error. With any luck this series has proven useful to you and you have more luck with the configuration than I originally did.

Next time we’ll take a look at how we can consume the public APIs using a third party IdP for authentication.

In the future we might take a look at how we can authenticate requests to a service called from a Windows Azure Pack add-on, and how we can call into Windows Azure Pack APIs from an add-on.

Windows Azure Pack Authentication Part 2

by Steve Syfuhs / January 30, 2014 07:50 PM

Last time we took a look at how Windows Azure Pack authenticates users in the Admin Portal. In this post we are going to look at how authentication works in the Tenant Portal.

Authentication in the Tenant Portal works exactly the same way authentication in the Admin Portal works.

Detailed and informative explanation, right?

Actually, with any luck you’ve read, and more importantly were able to decipher, my explanations in the last post, because this time we’re going to go a bit deeper into how authentication is configured. If that’s the case then you know everything you need to know to continue on here. There are a couple of minor differences between the Admin sites and the Tenant sites, such as the tenant STS storing users in a standalone SQL database instead of Active Directory, and a set of public service endpoints that also federate with the Tenant STS. For the time being we can ignore the public API, but we may revisit it in the future.

[diagram]

One of the things this diagram doesn’t show is how the various services store configuration information. This is somewhat important because the Portals and APIs need to keep track of where the STS is, what is used to sign tokens, who is allowed to receive tokens, etc.

Since Windows Azure Pack is designed to be distributed in nature, it’s a fair bet most of the configuration is stored in databases. Let’s check the PowerShell cmdlets (horizontal spacing truncated a bit to fit):

PS C:\Windows\system32> Get-MgmtSvcDefaultDatabaseName

DefaultDatabaseName                        Description
-------------------                        -----------
Microsoft.MgmtSvc.Config                   Configuration store database
Microsoft.MgmtSvc.PortalConfigStore        Admin and Tenant sites database
Microsoft.MgmtSvc.Store                    Rest API layer database
Microsoft.MgmtSvc.MySQL                    MySQL resource provider database
Microsoft.MgmtSvc.SQLServer                SQLServer resource provider database
Microsoft.MgmtSvc.Usage                    Usage service database
Microsoft.MgmtSvc.WebAppGallery            WebApp Gallery resource provider database

Well that’s handy. It even describes what each database does. Looking at the databases on the server we see each one:

[screenshot]

Looking at the descriptions we can immediately ignore anything that is described as a “resource provider database” because resource providers in Windows Azure Pack are the services exposed by the Portals and APIs.  That leaves us the Microsoft.MgmtSvc.Config, Microsoft.MgmtSvc.PortalConfigStore, Microsoft.MgmtSvc.Store, and Microsoft.MgmtSvc.Usage databases.

The usage database looks like the odd one out so if we peek into the tables we see configuration information and data for usage of the resource providers. Scratch that.

We’re then left with a Config database, a PortalConfigStore database, and a Store database. How’s that for useful naming conventions? Given the descriptions we can infer we likely only want to look into the PortalConfigStore database for the Tenant and Admin Portal configuration, and the Store database for the API configuration. To confirm that we can peek into the Config database and see what’s there. If we look in the Settings table we see a bunch of encrypted key-value pairs. Nothing jumps out as being related to federation information like endpoints, claims, or signing certificates, but we do see pointers to database credentials.

If we quickly take a look at some of the web.config files in the various Windows Azure Pack sites we can see that some of them only have connection strings to the Config database. Actually, if we look at any of the web.config files we’ll see they are protected, so we need to unprotect them:

PS C:\Windows\system32> Unprotect-MgmtSvcConfiguration -Namespace TenantAPI

Please remember to protect-* them when you’re done snooping!

If we compare the connection strings and the information in the Config.Settings table, it’s reasonable to hypothesize that the Config database stores pointers to the other configuration databases, and the sites only need a configured connection string to a single database. This seems to only apply to some sites though. The Portal sites actually have connection strings pointing only to the PortalConfigStore database. That actually makes sense from a security perspective though.

Since the Portal sites are public-ish facing, they are more likely to be attacked, and therefore really shouldn’t have direct connections to databases storing sensitive information – hence the Web APIs. Looking at the architectural documentation on TechNet we can see it’s recommended that API services not be public facing (with the exception of the Public APIs) as well, so that supports my assertion.

Moving on, we now have the PortalConfigStore and the Store databases left. The descriptions tell us everything we need to know about them. We end up with a service relationship along the lines of:

[diagram]

Okay, now that we have a rough idea of how configuration data is stored we can peek into the databases and see what’s what.

Portal Sites Authentication

Starting with the PortalConfigStore database we see a collection of tables.

[screenshot]

The two tables that pop out are the Settings table and the aspnet_Users table. We know the Auth Site for the tenants stores users in a database, and lookie here we have a collection of users.

Next up is the Settings table. It’s a namespace-key-value-pair mapped table. Since this database stores information for multiple sites, it makes sense to separate configuration data into multiple realms – the namespaces.

There are 4 namespaces we care about:

  • AdminSite
  • AuthSite
  • TenantSite
  • WindowsAuthSite

Looking at the TenantSite configuration we see a few entries with JSON values:

  • Authentication.IdentityProvider
  • Authentication.RelyingParty

Aha! Here’s where we store the necessary bits to do the federation dance. The Authentication.RelyingParty entry stores the information describing the TenantSite. So when it goes to the IdP with a request it can use these values. In my case I’ve got the following:

{
   "EncryptionCertificate":null,
   "Realm":"http://azureservices/TenantSite",
   "ReplyTo":"https://manage-cloud.syfuhs.net/"
}

Really, just the bare minimum to describe the RP. The Realm, which is the unique identifier of the site, the Reply To URL which is where the token should be returned to, and the Encryption Certificate in case the returned token is encrypted – which it isn’t by default. With this information we can make a request to the IdP, but of course, we don’t know anything about the IdP yet so we need to look up that configuration information.

Looking at the Authentication.IdentityProvider entry we see everything else we need to complete a WS-Federation passive request for token. This is my configuration:

{
   "Realm":"http://azureservices/AuthSite",
   "Endpoint":"https://auth-cloud.syfuhs.net/wsfederation/issue",
   "Certificates":[
      "MIIC2...ADLt0="
   ]
}

To complete the request we actually only need the Endpoint as that describes where the request should be sent, but we also now have the information to validate the token response. The Realm describes who minted the token, and the Certificates element is a collection of certificates that could have been used to sign the token. If the token was signed by one of these certificates, we know it’s a valid token.

We do have to go one step further when validating this though, as we need to make sure the token is intended to be used by the Tenant Portal. This is done by comparing the audience URI in the token (see the last post) to the Realm in the Authentication.RelyingParty configuration value. If everything matches up we’re good to go.

We can see the configuration in the AdminSite namespace is similar too.

Next up we want to look at the AuthSite namespace configuration. There are similar entries to the TenantSite, but they serve slightly different purposes.

The Authentication.IdentityProvider entry matches the entry for the TenantSite. I’m not entirely sure of its purpose, but I suspect it might be a reference value for when changes are made and the original configuration is needed. Just a guess on that though.

Moving on we have the Authentication.RelyingParty.Primary entry. This value describes who can request a token, which in our case is the TenantSite. My entry looks like this:

{
   "EncryptionCertificate":null,
   "Realm":"http://azureservices/TenantSite",
   "ReplyTo":"https://manage-cloud.syfuhs.net/"
}

It’s pretty similar to the configuration in the TenantSite entry. The Realm is the identifier of which site can request a token, and the Reply To URL is where the token should be returned once it’s minted.

Compare that to the values in the WindowsAuthSite namespace and things look pretty similar too.

So with all that information we’ve figured out how the Portal sites and the Auth sites are configured. Of course, we haven’t looked at the APIs yet.

API Authentication

If you recall from the last post, the API calls are authenticated by attaching a JWT to the request header. The JWT has to be validated by the APIs the same way the Portals have to validate the JWTs received from the STS. If we look at the diagram above though, the API sites don’t have access to the PortalConfigStore database; they have access to the Store database. Therefore it’s reasonable to assume the Store database has a copy of the federation configuration data as well.

Looking at the Settings table we can confirm that assumption. It’s got the same schema as the Settings table in the PortalConfigStore database, though in this case there are different namespaces. There are two namespaces that are of interest here:

  • AdminAPI
  • TenantAPI

This aligns with the service diagram above. We should only have settings for the namespaces of services that actually touch the database.

If we look at the TenantAPI elements we have four entries:

  • Authentication.IdentityProvider.Primary
  • Authentication.IdentityProvider.Secondary
  • Authentication.RelyingParty.Primary
  • Authentication.RelyingParty.Secondary

The Authentication.IdentityProvider.Primary entry matches up with the TenantSite Authentication.IdentityProvider entry in the PortalConfigStore database. That makes sense since it needs to trust the token the same way the Tenant Portal site does. The Secondary element is curious though. It’s configured as a relying party to the Admin STS. I suspect that is there because the Tenant APIs can be called from the Admin Portal.

Comparing these values to the AdminAPI namespace we see that there are only configuration entries for the Admin STS. Seems reasonable since the Tenant Portal probably shouldn’t be calling into admin APIs. Haven’t got a clue why the AdminAPI relying party is configured as Secondary though. :) Artifact of the design I guess. Another interesting artifact of this configuration is that the ReplyTo values in the RelyingParty entries show the default values from when I first installed the services. We see something like:

{
   "EncryptionCertificate":null,
   "Realm":"http://azureservices/TenantSite",
   "ReplyTo":"https://syfuhs-cloud:30081/"
}

And

{
   "EncryptionCertificate":null,
   "Realm":"http://azureservices/AdminSite",
   "ReplyTo":"https://syfuhs-cloud:30091/"
}

I reconfigured the endpoints to be publicly accessible, so these values are now incorrect.

APIs can’t really use the Reply To the way passive requests can, so it makes sense that they don’t get updated – they don’t have to be. The values don’t have to be present either, but again, artifacts I guess.

Conclusion

In the previous post we looked at how authentication works conceptually, and in this post we looked at how authentication is configured in detail. Next time we’ll take a look at how we can reconfigure Windows Azure Pack to work with our own IdPs.

No spoilers this time. ;)

Converting Bootstrap Tokens to SAML Tokens

by Steve Syfuhs / September 09, 2010 04:00 PM

There comes a point where using an eavesdropping application to catch packets as they fly between Secure Token Services and Relying Parties becomes tiresome.  For me it came when I decided to give up on creating a man-in-the-middle for the SSL sessions between ADFS and applications.  Mainly because ADFS doesn’t like that.  At all.

Needless to say I wanted to see the tokens.  Luckily, Windows Identity Foundation has the solution by way of the bootstrap token.  To understand what it is, consider how this whole process works.  Once you’ve authenticated, the STS will POST a chunk of XML (the SAML token) back to the RP.  WIF will interpret it as necessary and do its magic, generating a new principal with the payload.  However, in some instances you need to keep this token intact.  This would be the case if you were creating a web service and needed to forward the token.  What WIF does is generate a bootstrap token from the SAML token, in the event you need to forward it off somewhere.

Before taking a look at it, let's add in some useful using statements:

using System;
using System.IdentityModel.Tokens;
using System.Text;
using System.Threading;
using System.Xml;
using Microsoft.IdentityModel.Claims;
using Microsoft.IdentityModel.Tokens;
using Microsoft.IdentityModel.Tokens.Saml11;

The bootstrap token is attached to the IClaimsPrincipal’s identity:

SecurityToken bootstrapToken = ((IClaimsPrincipal)Thread.CurrentPrincipal).Identities[0].BootstrapToken;

However if you do this out of the box, BootstrapToken will be null.  By default, WIF will not save the token.  We need to explicitly enable this in the web.config file.  Add this line under <microsoft.IdentityModel><service><securityTokenHandlers>:

<securityTokenHandlerConfiguration saveBootstrapTokens="true" />

Once you’ve done that, WIF will load the token.

The properties are fairly straightforward, but you can’t just get a blob from it:

[screenshot: bootstrap token properties]

Luckily we have some code to convert from the bootstrap token to a chunk of XML:

SecurityToken bootstrapToken = ((IClaimsPrincipal)Thread.CurrentPrincipal).Identities[0].BootstrapToken;

StringBuilder sb = new StringBuilder();

using (var writer = XmlWriter.Create(sb))
{
     new Saml11SecurityTokenHandler(new SamlSecurityTokenRequirement()).WriteToken(writer, bootstrapToken);
}

string theXml = sb.ToString();

We get a proper XML document:

[screenshot: the SAML token XML]

That’s all there is to it.

Converting Claims to Windows Tokens and User Impersonation

by Steve Syfuhs / September 09, 2010 04:00 PM

In a domain environment it is really useful to switch user contexts in a web application.  This could be if you are needing to log in with credentials that have elevated permissions (or vice-versa) or just needing to log in as another user.

It’s pretty easy to do this with Windows Identity Foundation and claims authentication.  When the WIF framework is installed, a service is also installed (off by default) that can translate claims to Windows tokens.  This is called (not surprisingly) the Claims to Windows Token Service, or c2WTS.

Following the deploy-with-least-amount-of-attack-surface methodology, this service does not work out of the box.  You need to turn it on and configure which users are allowed to impersonate via token translation.  Now, this doesn’t mean which users can switch; it means which users running the process are allowed to switch, e.g. the account running the IIS application pool: local service, network service, local system, etc. (preferably a named service account rather than one of the built-in system accounts).

To allow users to do this go to C:\Program Files\Windows Identity Foundation\v3.5\c2wtshost.exe.config and add in the service users to <allowedCallers>:

<windowsTokenService> 
  <!-- 
      By default no callers are allowed to use the Windows Identity Foundation Claims To NT Token Service. 
      Add the identities you wish to allow below. 
    --> 
  <allowedCallers> 
    <clear/> 
    <!-- <add value="NT AUTHORITY\Network Service" /> --> 
    <!-- <add value="NT AUTHORITY\Local Service" /> –> 
    <!-- <add value="nt authority\system" /> –> 
    <!-- <add value="NT AUTHORITY\Authenticated Users" /> --> 
  </allowedCallers> 
</windowsTokenService> 

You should notice that by default, no callers are allowed.  Once you’ve done that you can start up the service.  It is called Claims to Windows Token Service in the Services MMC snap-in.

That takes care of the administrative side of things.  Let’s write some code.  But first, some usings:

using System;
using System.Linq;
using System.Security.Principal;
using System.Threading;
using Microsoft.IdentityModel.Claims;
using Microsoft.IdentityModel.WindowsTokenService;

The next step is to actually generate the token.  From an architectural perspective, we want to use the UPN claims type as that’s what the service wants to see.  To get the claim, we do some simple LINQ:

IClaimsIdentity identity = (IClaimsIdentity)Thread.CurrentPrincipal.Identity;

// FirstOrDefault instead of First so a missing UPN claim doesn't throw before we can report it.
Claim upnClaim = identity.Claims.Where(c => c.ClaimType == ClaimTypes.Upn).FirstOrDefault();

if (upnClaim == null || String.IsNullOrEmpty(upnClaim.Value))
{
    throw new Exception("No UPN claim found");
}

string upn = upnClaim.Value;

Following that we do the impersonation:

WindowsIdentity windowsIdentity = S4UClient.UpnLogon(upn);

using (WindowsImpersonationContext ctxt = windowsIdentity.Impersonate())
{
    DoSomethingAsNewUser();

    ctxt.Undo(); // redundant with using { } statement
}

To release the token we call the Undo() method, but if you are within a using { } statement the Undo() method is called when the object is disposed.

One thing to keep in mind though.  If you do not have permission to impersonate a user a System.ServiceModel.Security.SecurityAccessDeniedException will be thrown.

That’s all there is to it.

Implementation Details

In my opinion, these types of calls really shouldn’t be made all that often.  Realistically you need to take a look at how impersonation fits into the application and then go from there.

Azure Blob Uploads

by Steve Syfuhs / August 02, 2010 04:00 PM

Earlier today I was talking with Cory Fowler about an issue he was having with an Azure blob upload.  Actually, he offered to help with one of my problems first before he asked me for my thoughts – he’s a real community guy.  Alas I wasn’t able to help him with his problem, but it got me thinking about how to handle basic Blob uploads. 

On the CommunityFTW project I had worked on a few months back I used Azure as the back end for media storage.  The basis was simple: upload media stuffs to a container of my choice.  The end result was this class:

    public sealed class BlobUploadManager
    {
        private static CloudBlobClient blobStorage;

        private static bool s_createdContainer = false;
        private static object s_blobLock = new Object();
        private string theContainer = "";

        public BlobUploadManager(string containerName)
        {
            if (string.IsNullOrEmpty(containerName))
                throw new ArgumentNullException("containerName");

            CreateOnceContainer(containerName);
        }

        public CloudBlobClient BlobClient { get; set; }

        public string CreateUploadContainer()
        {
            BlobContainerPermissions perm = new BlobContainerPermissions();
            var blobContainer = blobStorage.GetContainerReference(theContainer);
            perm.PublicAccess = BlobContainerPublicAccessType.Container;
            blobContainer.SetPermissions(perm);

            var sas = blobContainer.GetSharedAccessSignature(new SharedAccessPolicy()
            {
                Permissions = SharedAccessPermissions.Write,
                SharedAccessExpiryTime = DateTime.UtcNow + TimeSpan.FromMinutes(60)
            });

            return new UriBuilder(blobContainer.Uri) { Query = sas.TrimStart('?') }.Uri.AbsoluteUri;
        }

        private void CreateOnceContainer(string containerName)
        {
            this.theContainer = containerName;

            if (s_createdContainer)
                return;

            lock (s_blobLock)
            {
                var storageAccount = new CloudStorageAccount(
                                         new StorageCredentialsAccountAndKey(
                                             SettingsController.GetSettingValue("BlobAccountName"),
                                             SettingsController.GetSettingValue("BlobKey")),
                                         false);

                blobStorage = storageAccount.CreateCloudBlobClient();
                CloudBlobContainer container = blobStorage.GetContainerReference(containerName);
                container.CreateIfNotExist();

                container.SetPermissions(
                    new BlobContainerPermissions()
                    {
                        PublicAccess = BlobContainerPublicAccessType.Container
                    });

                s_createdContainer = true;
            }
        }

        public string UploadBlob(Stream blobStream, string blobName)
        {
            if (blobStream == null)
                throw new ArgumentNullException("blobStream");

            if (string.IsNullOrEmpty(blobName))
                throw new ArgumentNullException("blobName");

            blobStorage.GetContainerReference(this.theContainer)
		       .GetBlobReference(blobName.ToLowerInvariant())
		       .UploadFromStream(blobStream);

            return blobName.ToLowerInvariant();
        }
    }
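For what it’s worth, usage looks roughly like this (the container name, file path, and settings are placeholders):

// (Needs System.IO for File.)
var uploader = new BlobUploadManager("media");

// Push a file straight into the container.
using (var stream = File.OpenRead(@"C:\temp\photo.jpg"))
{
    string blobName = uploader.UploadBlob(stream, "photo.jpg");
}

// Or hand out a time-limited, writable container URL (via the SAS) to a client.
string uploadUrl = uploader.CreateUploadContainer();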

With any luck this might help someone trying to jump into Azure.

Making an ASP.NET Website Claims Aware with the Windows Identity Foundation

by Steve Syfuhs / August 02, 2010 04:00 PM

Straight from Microsoft this is what the Windows Identity Foundation is:

Windows Identity Foundation helps .NET developers build claims-aware applications that externalize user authentication from the application, improving developer productivity, enhancing application security, and enabling interoperability. Developers can enjoy greater productivity, using a single simplified identity model based on claims. They can create more secure applications with a single user access model, reducing custom implementations and enabling end users to securely access applications via on-premises software as well as cloud services. Finally, they can enjoy greater flexibility in application development through built-in interoperability that allows users, applications, systems and other resources to communicate via claims.

In other words it is a method for centralizing user Identity information, very much like how the Windows Live and OpenID systems work.  The system is reasonably simple.  I have a Membership data store that contains user information.  I want (n) number of websites to use that membership store, EXCEPT I don’t want each application to have direct access to membership data such as passwords.  The way around it is through claims.

In order for this to work you need a central web application called a Secure Token Service (STS).  This application will do authentication and provide a set of available claims.  It will say “hey! I am able to give you the person’s email address, their username and the roles they belong to.”  Each of those pieces of information is a claim.  This message exists in the application’s Federation Metadata.

So far you are probably saying “yeah, so what?”

What I haven’t mentioned is that every application (called a Relying Party) that uses this central application has one thing in common: each application doesn’t have to handle authentication – at all.  Each application passes off the authentication request to the central application and the central application does the hard work.  When you type in your username and password, you are typing it into the central application, not one of the many other applications.  Once the central application authenticates your credentials it POST’s the claims back to the other application.  A diagram might help:

[diagram]

Image borrowed from the Identity Training kit (http://www.microsoft.com/downloads/details.aspx?familyid=C3E315FA-94E2-4028-99CB-904369F177C0&displaylang=en)

The key takeaway is that only one single application does authentication.  Everything else just redirects to it.  So let’s actually see what it takes to authenticate against an STS (central application).  In future posts I will go into detail about how to create an STS as well as how to use Active Directory Federation Services, which is an STS that authenticates directly against (you guessed it) Active Directory.

First step is to install the Framework and SDK.

WIF RTW: http://www.microsoft.com/downloads/details.aspx?FamilyID=eb9c345f-e830-40b8-a5fe-ae7a864c4d76&displaylang=en

WIF SDK: http://www.microsoft.com/downloads/details.aspx?displaylang=en&FamilyID=c148b2df-c7af-46bb-9162-2c9422208504

The SDK will install sample projects and add two Visual Studio menu items under the Tools menu.  Both menu items do essentially the same thing, the difference being that “Add STS Reference” pre-populates the wizard with the current web application’s data.

Once the SDK is installed start up Visual Studio as Administrator.  Create a new web application.  Next go to the Properties section and go into the Web section.  Change the Server Settings to use IIS.  You need to use IIS.  To install IIS on Windows 7 check out this post.

So far we haven’t done anything crazy.  We’ve just set a new application to use IIS for development.  Next we have some fun.  Let’s add the STS Reference.

To add the STS Reference go to Tools > Add Sts Reference… and fill out the initial screen:

[screenshot]


Click next and it will prompt you about using an HTTPS connection.  For the sake of this we don’t need HTTPS so just continue.  The next screen asks us about where we get the STS Federation Metadata from.  In this case I already have an STS so I just paste in the URI:

[screenshot]

Once it downloads the metadata it will ask if we want the Token that the STS sends back to be encrypted.  My recommendation is that we do, but for the sake of this we won’t.

As an aside: In order for the STS to encrypt the token it will use a public key to which our application (the Relying Party) will have the private key.  When we select a certificate it will stick that public key in the Relying Party’s own Federation Metadata file.  Anyway… When we click next we are given a list of available Claims the STS can give us:

[screenshot]

There is nothing to edit here; it’s just informative.  Next we get a summary of what we just did:

[screenshot]

We can optionally schedule a Windows task to download changes.

We’ve now just added a crap-load of information to the *.config file.  Actually, we really didn’t.  We just told ASP.NET to use the Microsoft.IdentityModel.Web.WSFederationAuthenticationModule to handle authentication requests and Microsoft.IdentityModel.Web.SessionAuthenticationModule to handle session management.  Everything else is just boiler-plate configuration.  So let’s test this thing:

  1. Hit F5 – Compile compile compile compile compile… loads up http://localhost/WebApplication1
  2. Page automatically redirects to https://login.myweg.com/login.aspx?ReturnUrl=%2fusers%2fissue.aspx%3fwa%3dwsignin1.0%26wtrealm%3dhttp%253a%252f%252flocalhost%252fWebApplication1%26wctx%3drm%253d0%2526id%253dpassive%2526ru%253d%25252fWebApplication1%25252f%26wct%3d2010-08-03T23%253a03%253a40Z&wa=wsignin1.0&wtrealm=http%3a%2f%2flocalhost%2fWebApplication1&wctx=rm%3d0%26id%3dpassive%26ru%3d%252fWebApplication1%252f&wct=2010-08-03T23%3a03%3a40Z (notice the variables we’ve passed?)
  3. Type in our username and password…
  4. Redirect to http://localhost/WebApplication1
  5. Yellow Screen of Death

Wait.  What?  If you are running IIS 7.5 and .NET 4.0, ASP.NET will probably blow up.  This is because the data that was POST’ed back to us from the STS had funny characters in the values like angle brackets and stuff.  ASP.NET does not like this.  Rightfully so, Cross Site Scripting attacks suck.  To resolve this you have two choices:

  1. Add <httpRuntime requestValidationMode="2.0" /> to your web.config
  2. Use a proper RequestValidator that can handle responses from Token Services

For the sake of testing add <httpRuntime requestValidationMode="2.0" /> to the web.config and retry the test.  You should be redirected to http://localhost/WebApplication1 and no errors should occur.

Seems like a pointless exercise until you add a chunk of code to the default.aspx page. Add a GridView and then add this code:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Threading;
using System.IdentityModel;
using System.IdentityModel.Claims;
using Microsoft.IdentityModel.Claims;

namespace WebApplication1
{
    public partial class _Default : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            IClaimsIdentity claimsIdentity = ((IClaimsPrincipal)(Thread.CurrentPrincipal)).Identities[0];

            GridView1.DataSource = claimsIdentity.Claims;
            GridView1.DataBind();
        }
    }
}

Rerun the test and you should get back some values.  I hope some light bulbs just turned on for some people :)

Getting the Data to the Phone

by Steve Syfuhs / July 31, 2010 04:00 PM

A few posts back I started talking about what it would take to create a new application for the new Windows Phone 7.  I’m not a fan of learning from trivial applications that don’t touch on the same technologies that I would be using in the real world, so I thought I would build a real application that someone can use.

Since this application uses a well known dataset I kind of get lucky because I already have my database schema, and it’s reasonably well designed.  My first step is to get it to the Phone, so I will use WCF Data Services and an Entity Model.  I created the model and just imported the necessary tables.  I called this model RaceInfoModel.edmx, and the entity container name is RaceInfoEntities.  This is ridiculously simple to do.

The following step is to expose the model to the outside world through an XML format in a Data Service.  I created a WCF Data Service and made a few config changes:

using System.Data.Services;
using System.Data.Services.Common;
using System;

namespace RaceInfoDataService
{
    public class RaceInfo : DataService<RaceInfoEntities>
    {
        public static void InitializeService(DataServiceConfiguration config)
        {
            if (config == null)
                throw new ArgumentNullException("config");

            config.UseVerboseErrors = true;
            config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
            //config.SetEntitySetPageSize("*", 25);
            config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
        }
    }
}

This too is reasonably simple.  Since it’s a web service, I can hit it from a web browser and I get a list of available datasets:

[screenshot]

This isn’t a complete list of available items, just a subset.

At this point I can package everything up and stick it on a web server.  It could technically be ready for production if you were satisfied with not having any Access Control’s on reading the data.  In this case, lets say for arguments sake that I was able to convince the powers that be that everyone should be able to access it.  There isn’t anything confidential in the data, and we provide the data in other services anyway, so all is well.  Actually, that’s kind of how I would prefer it anyway.  Give me Data or Give me Death!

Now we create the Phone project.  You need to install the latest build of the dev tools, and you can get that here http://developer.windowsphone.com/windows-phone-7/.  Install it.  Then create the project.  You should see:

[screenshot]

The next step is to make the Phone application actually able to use the data.  Here it gets tricky.  Or really, here it gets stupid.  (It had better be fixed by RTM or else *shakes fist*)

For some reason, the Visual Studio 2010 Phone 7 project type doesn’t allow you to automatically import services.  You have to generate the service class manually.  It’s not that big a deal since my service won’t be changing all that much, but nevertheless it’s still a pain to regenerate it manually every time a change comes down the pipeline.  To generate the necessary class run this at a command prompt:

cd C:\Windows\Microsoft.NET\Framework\v4.0.30319
DataSvcutil.exe
     /uri:http://localhost:60141/RaceInfo.svc/
     /DataServiceCollection
     /Version:2.0
     /out:"PATH.TO.PROJECT\RaceInfoService.cs"

(Formatted to fit my site layout)

Include that file in the project and compile.

UPDATE: My bad, I had already installed the reference, so this won’t compile for most people.  The Windows Phone 7 runtime doesn’t have the System.Data namespace available that we need.  Therefore we need to install them…  They are still in development, so here is the CTP build http://www.microsoft.com/downloads/details.aspx?displaylang=en&FamilyID=b251b247-70ca-4887-bab6-dccdec192f8d.

You should now have a compile-able project with service references that looks something like:

[screenshot]

We have just connected our phone application to our database!  All told, it took me 10 minutes to do this.  Next up we start playing with the data.

My First CodePlex Project!

by Steve Syfuhs / February 03, 2010 04:00 PM

A few minutes ago I just finalized my first CodePlex project.  While working on the ever-mysterious Infrastructure 2010 project, I needed to integrate the Live Meeting API into an application we are using.  So I decided to stick it into its own assembly for reuse.

I also figured that since it’s a relatively simple project, and because for the life of me I couldn’t find a similar wrapper, I would open source it.  Maybe there is someone out there who can benefit from it.

The code is ugly, but it works.  I suspect I will continue development, and clean it up a little.  With that being said:

  • It needs documentation (obviously).
  • All the StringBuilder stuff should really be converted to XML objects
  • It needs cleaner exception handling
  • It needs API versioning support
  • It needs to implement more API functions

Otherwise it works like a charm.  Check it out!

Six Simple Development Rules (for Writing Secure Code)

by Steve Syfuhs / December 15, 2009 04:00 PM

I wish I could say that I came up with this list, but alas I did not.  I came across it this morning on the Assessment, Consulting & Engineering Team blog from Microsoft.  They are a core part of the Microsoft internal IT Security Group, and are around to provide resources for internal and external software developers.  These 6 rules are key to developing secure applications, and they should be followed at all times.

Personally, I try to follow the rules closely, and am working hard at creating an SDL for our department.  Aside from Rule 1, you could consider each step a sort of checklist for when you sign off, or preferably design, the application for production.

--

Rule #1: Implement a Secure Development Lifecycle in your organization.

This includes the following activities:

  • Train your developers, and testers in secure development and secure testing respectively
  • Establish a team of security experts to be the ‘go to’ group when people want advice on security
  • Implement Threat Modeling in your development process. If you do nothing else, do this!
  • Implement Automatic and Manual Code Reviews for your in-house written applications
  • Ensure you have ‘Right to Inspect’ clauses in your contracts with vendors and third parties that are producing software for you
  • Have your testers include basic security testing in their standard testing practices
  • Do deployment reviews and hardening exercises for your systems
  • Have an emergency response process in place and keep it updated

If you want some good information on doing this, email me and check out this link:
http://www.microsoft.com/sdl

Rule #2: Implement a centralized input validation system (CIVS) in your organization.

These CIVS systems are designed to perform common input validation on commonly accepted input values. Let’s face it: as much as we’d all like to believe that we are the only ones doing things like registering users or recording data from visitors, it’s actually all the same thing.

When you receive data it will very likely be an integer, decimal, phone number, date, URI, email address, post code, or string. The values and formats of the first 7 of those are very predictable. Strings are a bit harder to deal with, but they can all be validated against known good values. Always remember to check for the three F’s: Form, Fit and Function.

  • Form: Is the data the right type of data that you expect? If you are expecting a quantity, is the data an integer? Always cast data to a strong type as soon as possible to help determine this.
  • Fit: Is the data the right length/size? Will the data fit in the buffer you allocated (including any trailing nulls if applicable)? If you are expecting an Int32 or a Short, make sure you didn’t get an Int64 value. Did you get a positive integer for a quantity rather than a negative integer?
  • Function: Can the data you received be used for the purpose it was intended? If you receive a date, is the date value in the right range? If you received an integer to be used as an index, is it in the right range? If you received an int as a value for an Enum, does it match a legitimate Enum value?

In the vast majority of cases, string data being sent to an application will be 0-9, a-z, A-Z. In some cases, such as names or currencies, you may want to allow –, $, % and ‘. You will almost never need , <> {} or [] unless you have a special use case such as http://www.regexlib.com, in which case see Rule #3.
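To make that concrete, here is a tiny hand-rolled sketch of those three checks for a quantity field (the accepted range is obviously application-specific):

public static int ParseQuantity(string raw)
{
    int quantity;

    // Form: is it even an integer?
    if (!int.TryParse(raw, out quantity))
        throw new ArgumentException("Quantity must be a whole number.", "raw");

    // Fit and Function: positive, and within the range the application actually accepts.
    if (quantity < 1 || quantity > 10000)
        throw new ArgumentOutOfRangeException("raw", "Quantity is out of range.");

    return quantity;
}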

You want to build this as a centralized library so that all of the applications in your organization can use it. This means if you have to fix your phone number validator, everyone gets the fix. By the same token, you have to inspect and scrutinize the crap out of these CIVS to ensure that they are not prone to errors and vulnerabilities because everyone will be relying on it. But, applying heavy scrutiny to a centralized library is far better than having to apply that same scrutiny to every single input value of every single application.  You can be fairly confident that as long as they are using the CIVS, that they are doing the right thing.

Fortunately, implementing a CIVS is easy if you start with the Enterprise Library Validation Application Block, which is a free download from Microsoft that you can use in all of your applications.
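To make the three F’s concrete, here is a minimal sketch of what one small corner of a CIVS might look like in C#. The class and method names are hypothetical (this is not the Enterprise Library API), and the postal code pattern is just an example; the point is that the Form, Fit, and Function checks live in one shared library:

using System;
using System.Text.RegularExpressions;

public static class InputValidator
{
    // Form + Fit: cast to a strong type as early as possible and range-check it.
    public static bool TryParseQuantity(string input, out int quantity)
    {
        // int.TryParse rejects anything that is not a well-formed Int32.
        if (!int.TryParse(input, out quantity))
            return false;

        // Function: a quantity should be positive and within a sane range.
        return quantity > 0 && quantity <= 10000;
    }

    // Strings get validated against a known-good whitelist.
    // Example pattern only (Canadian-style postal code).
    private static readonly Regex PostalCode =
        new Regex(@"^[A-Za-z]\d[A-Za-z] ?\d[A-Za-z]\d$", RegexOptions.Compiled);

    public static bool IsValidPostalCode(string input)
    {
        return !string.IsNullOrEmpty(input) && PostalCode.IsMatch(input.Trim());
    }
}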

Rule #3: Implement input/output encoding for all externally supplied values.

Due to the prevalence of cross-site scripting vulnerabilities, you need to encode any values that came from an outside source and that you may display back to the browser (even embedded browsers in thick client applications). The encoding essentially takes potentially dangerous characters like < or > and converts them into their HTML or URL equivalents.

For example, if you were to HTML encode <script>alert(‘XSS Bug’)</script> it would look like: &lt;script&gt;alert('XSS Bug')&lt;/script&gt;.  A lot of this functionality is built into .NET. For example, the code to do the above looks like:

Server.HtmlEncode("<script>alert('XSS Bug')</script>");

However, it is important to know that Server.HtmlEncode only encodes about four of the nasty characters you might encounter. It’s better to use a more ‘industrial strength’ library like the Anti-Cross Site Scripting (AntiXSS) library, another free download from Microsoft. This library does a lot more encoding, and it will do HTML and URL encoding based on a white list. The above encoding would look like this with AntiXSS:

using Microsoft.Security.Application;
AntiXss.HtmlEncode("<script>alert('XSS Bug')</script>");
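AntiXSS also includes encoders for other output contexts. The exact method names have shifted a bit between versions of the library, so treat this as a sketch rather than a definitive reference; userInput below is assumed to hold the untrusted value:

using Microsoft.Security.Application;

// HTML body context
string html = AntiXss.HtmlEncode(userInput);

// HTML attribute context, e.g. <input value="...">
string attr = AntiXss.HtmlAttributeEncode(userInput);

// Query string / URL context
string url = AntiXss.UrlEncode(userInput);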

You can also run a neat test system that a friend of mine developed to test your application for XSS vulnerabilities in its outputs. It is aptly named XSS Attack Tool.

Rule #4: Abandon Dynamic SQL

There is no reason you should be using dynamic SQL in your applications anymore. If your database does not support parameterized stored procedures in one form or another, get a new database.

Dynamic SQL is when developers try to build a SQL query in code and then submit it to the database to be executed as a string, rather than calling a stored procedure and feeding it the values. It usually looks something like this:

(for you VB fans)

dim sql
sql = "Select ArticleTitle, ArticleBody FROM Articles WHERE ArticleID = "
sql = sql & request.querystring("ArticleID")
set results = objConn.execute(sql)

In fact, this article from 2001 is chock full of what NOT to do, including dynamic SQL in a stored procedure.

Here is an example of a stored procedure that is vulnerable to SQL Injection:

Create Procedure GenericTableSelect @TableName VarChar(100)
AS
Declare @SQL VarChar(1000)
SELECT @SQL = 'SELECT * FROM '
SELECT @SQL = @SQL + @TableName
Exec (@SQL)
GO

See this article for a look at using Parameterized Stored Procedures.
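If your data access layer is .NET, the safe pattern looks roughly like this. This is a minimal ADO.NET sketch; the GetArticleById stored procedure, connectionString, and articleId are made-up names for illustration:

using System.Data;
using System.Data.SqlClient;

// Parameterized stored procedure call: the user-supplied value is passed
// as a typed parameter, never concatenated into the SQL text.
using (SqlConnection conn = new SqlConnection(connectionString))
using (SqlCommand cmd = new SqlCommand("GetArticleById", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.Add("@ArticleID", SqlDbType.Int).Value = articleId;

    conn.Open();
    using (SqlDataReader reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            // ... read ArticleTitle / ArticleBody ...
        }
    }
}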

Rule #5: Properly architect your applications for scalability and failover

Applications can be brought down by a simple crash. Or a not so simple one. Architecting your applications so that they can scale easily, vertically or horizontally, and so that they are fault tolerant will give you a lot of breathing room.

Keep in mind that fault tolerance is not just a fancy way of saying that the application restarts when it crashes. It means that you have a proper exception handling hierarchy built into the application.  It also means that the application needs to be able to handle situations that result in server failover. This is usually where session management comes in.

The most fault-tolerant session management solution is to store session state in SQL Server.  This also helps avoid the server affinity issues some applications have.
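In ASP.NET that is mostly a configuration change. A minimal sketch of the web.config entry is below; the connection string is a placeholder for your own state database, which is typically provisioned with aspnet_regsql.exe:

<!-- web.config: move session state out of process into SQL Server -->
<system.web>
  <sessionState mode="SQLServer"
                sqlConnectionString="Data Source=SQLSERVER;Integrated Security=SSPI;"
                cookieless="false"
                timeout="20" />
</system.web>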

You will also want a good load balancer up front. This will help distribute load evenly so that, with luck, you won’t run into the failover scenario very often.

And by all means do NOT do what they did on the site in the beginning of this article. Set up your routers and switches to properly shunt bad traffic or DOS traffic. Then let your applications handle the input filtering.

Rule #6: Always check the configuration of your production servers

Configuration mistakes are all too common. When you consider that proper server hardening and a standard out-of-the-box deployment are generally a good secure default, it’s clear there are a lot of people out there changing things that shouldn’t be changed. You may remember when Bing went down for about 45 minutes. That was due to configuration issues.

To help address this, we have released the Web Application Configuration Auditor (WACA). This is a free download that you can use on your servers to see if they are configured according to best practice. You can download it at this link.

You should establish a standard operating environment (SOE) for your web servers that is hardened and properly configured. Any variation from that SOE should be scrutinized and go through a very thorough change control process. Test changes first before turning them loose on the production environment…please.

With all that said, following these rules will put you well on your way to stopping the majority of attacks you are likely to encounter against your web applications. Most of the attacks that occur are SQL injection, XSS, and improper configuration issues, and the above rules will knock out most of them. In fact, input validation is your best friend: regardless of inspecting firewalls and the like, the application is the only link in the chain that can make an intelligent, informed decision about whether the incoming data is actually legitimate. So put your effort where it will do you the most good.

Windows LiveID Almost OpenID

by Steve Syfuhs / January 12, 2009 04:00 PM

The Windows Live team announced a few months ago that their Live ID service would become a new provider for the OpenID system.  The Live team was quoted:

Beginning today, Windows Live™ ID is publicly committing to support the OpenID digital identity framework with the announcement of the public availability of a Community Technology Preview (CTP) of the Windows Live ID OpenID Provider.

You will soon be able to use your Windows Live ID account to sign in to any OpenID Web site.

I saw the potential in OpenID a while ago, long before I heard about Microsoft’s intentions.  The only problem was that I didn’t really find a good way to implement such a system on my website.  Not only that, I didn’t really have a purpose for doing such a thing.  The only reason anyone would need to log into the site would be to administer it.  And seeing as I’m the only person who could log in, there was never a need.

Then a brilliant idea hit me: let users create accounts to make comment posting easier.  Originally, a user would leave a comment, and I would log in to verify it, at which point the comment would actually show up.  Sometimes I wouldn’t log in for a couple of days, which meant no comments.  So now, if a user wants to post a comment, all they have to do is log in with their OpenID, and the comment will appear.

Implementing OpenID

I used the ExtremeSwank OpenID Consumer for ASP.NET 2.0.  The beauty of this framework is that all I have to do is drop a control on a webform and OpenID functionality is there.  The control handles all the communications, and when the authenticating site returns its data, you access that data through the control’s properties.  To handle the authentication on my end, I tied the values returned from the control into my already-in-place Forms Authentication mechanism:

if (!(OpenIDControl1.UserObject == null))
{
    if (Membership.GetUser(OpenIDControl1.UserObject.Identity) == null)
    {
        string email = OpenIDControl1.UserObject.GetValue(SimpleRegistrationFields.Email);

        string username = "";
        if (HttpContext.Current.User.Identity != null)
        {
            username = HttpContext.Current.User.Identity.Name;
        }
        else
        {
            username = OpenIDControl1.UserObject.Identity;
        }

        MembershipCreateStatus membershipStatus;
        MembershipUser user = Membership.CreateUser(
            username,
            RandomString(12, false),
            email,
            "This is an OpenID Account. You should log in with your OpenID",
            RandomString(12, false),
            true,
            out membershipStatus);

        if (membershipStatus != MembershipCreateStatus.Success)
        {
            lblError.Text = "Cannot create account for OpenID Account: " + membershipStatus.ToString();
        }
    }
}

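The snippet above stops at creating the membership account; the actual sign-in is the usual Forms Authentication call, shown here as a sketch assuming the standard ASP.NET API:

// Issue the Forms Authentication cookie for the new or existing account.
FormsAuthentication.SetAuthCookie(username, false);
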
That’s all there is to it.
