
Windows Azure Pack Authentication Part 3.5 – Using ADFS

by Steve Syfuhs / February 26, 2014 06:00 PM

Since we looked at using a custom IdP for Windows Azure Pack last time, I figured it would be good to explicitly list some resources for those looking to use ADFS instead, as that's a fairly common scenario people are exploring.

Building Clouds

  1. Federated Identities to Windows Azure Pack through AD FS – Part 1 of 3: http://blogs.technet.com/b/privatecloud/archive/2013/12/17/federated-identities-to-windows-azure-pack-through-ad-fs-part-1-of-3.aspx
  2. Federated Identities to Windows Azure Pack through AD FS – Part 2 of 3: http://blogs.technet.com/b/privatecloud/archive/2013/12/17/federated-identities-to-windows-azure-pack-through-ad-fs-part-2-of-3.aspx
  3. Federated Identities to Windows Azure Pack through AD FS – Part 3 of 3: http://blogs.technet.com/b/privatecloud/archive/2013/12/18/federated-identities-to-windows-azure-pack-through-ad-fs-part-3-of-3.aspx

TechNet

Configure Active Directory Federation Services for Windows Azure Pack: http://technet.microsoft.com/en-us/library/dn296436.aspx

Windows Azure Pack (#WAPack) and Related Blogs, Videos and TechNet Articles: http://social.technet.microsoft.com/wiki/contents/articles/20689.windows-azure-pack-wapack-and-related-blogs-videos-and-technet-articles.aspx

Hyper-V.nu

  1. Windows Azure Pack with ADFS and Windows Azure Multi-Factor Authentication – Part 1: http://www.hyper-v.nu/archives/mvaneijk/2014/01/windows-azure-pack-with-adfs-and-windows-azure-multi-factor-authentication-part-1/
  2. Windows Azure Pack with ADFS and Windows Azure Multi-Factor Authentication – Part 2: http://www.hyper-v.nu/archives/mvaneijk/2014/02/windows-azure-pack-with-adfs-and-windows-azure-multi-factor-authentication-part-2/

As more people start diving into production deployments of Windows Azure Pack I suspect we’ll see more articles around using ADFS too. For an up to date curated list of content for Windows Azure Pack take a look at the TechNet Wiki above.

Creating Authority-Signed and Self-Signed Certificates in .NET

by Steve Syfuhs / February 09, 2014 03:19 PM

Whenever I get some free time I like to tackle certain projects that have piqued my interest. Often times I don’t get to complete these projects, or they take months to complete. In this case I’ve spent the last few months trying to get these samples to work. Hopefully you’ll find them useful.

In the world of security, and more specifically in .NET, there aren't a whole lot of options for creating certificates for development. Sure, you could use makecert.exe, or if you're truly masochistic you could spin up a CA, but both are a pain to use and aren't necessarily useful when you need to consistently create signed certificates for whatever reason. Other options include using a library like BouncyCastle, but that can be a bit complicated, and given the portable nature of the library, it doesn't use Windows APIs to do the work.

So I offer some sample code. This code should not be used in production. Please. Seriously. It's not that good. It's great for testing, but it's in no shape whatsoever for production systems. That's why CAs are built.

In any case I've put the code up on GitHub. There is no license, so use it as you see fit so long as it doesn't come back to bite me in the ass. ;)

This gist shows how you can create self-signed certificates and how you can then sign the certificates of those keys with a CA’s private key. The calling code is in the KeyGenSigning project, and the actual meat of the signing is done in the CertLib project. The key generation and signing bits are mostly P/Invoke’d APIs so they execute fairly fast.
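To give a flavour of what's involved, here's a minimal self-signed certificate sketch along the same lines. To be clear, this is an illustrative reconstruction, not the code from the gist: it P/Invokes CertCreateSelfSignCertificate and lets CryptoAPI generate a fresh key with default algorithms and validity dates.

using System;
using System.Runtime.InteropServices;
using System.Security.Cryptography.X509Certificates;

static class SelfSigned
{
    [StructLayout(LayoutKind.Sequential)]
    struct CryptoApiBlob
    {
        public int cbData;
        public IntPtr pbData;
    }

    [DllImport("crypt32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    static extern bool CertStrToName(int dwCertEncodingType, string pszX500, int dwStrType,
        IntPtr pvReserved, byte[] pbEncoded, ref int pcbEncoded, IntPtr ppszError);

    [DllImport("crypt32.dll", SetLastError = true)]
    static extern IntPtr CertCreateSelfSignCertificate(IntPtr hCryptProvOrNCryptKey,
        ref CryptoApiBlob pSubjectIssuerBlob, int dwFlags, IntPtr pKeyProvInfo,
        IntPtr pSignatureAlgorithm, IntPtr pStartTime, IntPtr pEndTime, IntPtr pExtensions);

    const int X509_ASN_ENCODING = 0x1;
    const int CERT_X500_NAME_STR = 3;

    public static X509Certificate2 Create(string subject) // e.g. "CN=localhost"
    {
        // Encode the X.500 subject string into a CERT_NAME_BLOB (two-call sizing pattern).
        int cb = 0;
        if (!CertStrToName(X509_ASN_ENCODING, subject, CERT_X500_NAME_STR, IntPtr.Zero, null, ref cb, IntPtr.Zero))
            throw new InvalidOperationException("CertStrToName sizing failed");

        var encoded = new byte[cb];
        if (!CertStrToName(X509_ASN_ENCODING, subject, CERT_X500_NAME_STR, IntPtr.Zero, encoded, ref cb, IntPtr.Zero))
            throw new InvalidOperationException("CertStrToName failed");

        var pinned = GCHandle.Alloc(encoded, GCHandleType.Pinned);
        try
        {
            var blob = new CryptoApiBlob { cbData = cb, pbData = pinned.AddrOfPinnedObject() };

            // Passing null for the key and the remaining parameters tells CryptoAPI
            // to generate a new key and use sensible defaults for everything else.
            IntPtr context = CertCreateSelfSignCertificate(IntPtr.Zero, ref blob, 0,
                IntPtr.Zero, IntPtr.Zero, IntPtr.Zero, IntPtr.Zero, IntPtr.Zero);

            if (context == IntPtr.Zero)
                throw new InvalidOperationException("CertCreateSelfSignCertificate failed: " + Marshal.GetLastWin32Error());

            return new X509Certificate2(context);
        }
        finally
        {
            pinned.Free();
        }
    }
}

Signing somebody else's certificate with a CA's private key is considerably more work again (building the certificate info and calling CryptSignAndEncodeCertificate and friends), which is why the CertLib project is the interesting part of the gist.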

Currently the code relies on CSPs to do the work. In theory it could work with CNG (NCrypt) keys too, but I haven't tried it yet.

In any case, enjoy. Hopefully you'll find it useful.

Windows Azure Pack Authentication Part 3 – Using a Third Party IdP

by Steve Syfuhs / February 07, 2014 06:22 PM

In the previous installments of this series we looked at how Windows Azure Pack authenticates users and how it’s configured out of the box for federation. This time around we’re going to look at how you can configure federation with a third party IdP.

Microsoft designed Windows Azure Pack the right way: it supports federation with industry protocols out of the box. You can't say that for many services, and you certainly can't say those services support it natively for all versions – more often than not you have to pay extra for it.

Windows Azure Pack supports federation, and actually uses it to authenticate users by default. This little fact makes it easy to federate to a 3rd party IdP.

If we search around we'll find lots of resources on federating to ADFS, as that's Microsoft's federation product, and there are a number of good (German content) walkthroughs on how you can get it working. If you want to use ADFS go read one or all of those articles, as everything we talk about today will be about using a non-Microsoft federation service.

Before we begin though I’d like to point out that Microsoft does have some resources on using 3rd party IdPs, but unfortunately the information is a bit thin in some places.

Prerequisites

Federation is a complex beast and we should be clear about what is required to get it working. In no particular order you need the following:

  • STS that supports the WS-Federation (passive) protocol
  • STS that supports WS-Federation wrapped JSON Web Tokens (JWT)
  • Optional: STS that supports WS-Trust + JWT

If you plan to use the public APIs with federated accounts then you will need an STS that supports WS-Trust + JWT.

If you don't have an STS that can support these requirements then you should really consider taking a look at ADFS, or if you're looking for customization, Thinktecture Identity Server. Both are top notch IdPs (edit: insert pitch about the IdP my company builds and sells as well [edit-edit: our next version natively supports JWT] ;) -- sorry, this concludes the not-so-regularly-scheduled product placement).

Another option is to roll your own IdP. Don’t do this. No seriously, don’t. It’s a complicated mess. You’re way better off using the Thinktecture server and extending it to fit your needs.

Suppose, though, that you already have an IdP and want to support JWT. Here's how we can do it. In this context the IdP is the overarching identity-providing system and the STS is simply the service issuing tokens.

Skip this next section if you just want to see how to configure Windows Azure Pack. That’s the main part that’s lacking in the MSDN documentation.

JWT via IdentityModel

First off, you need to be using .NET 4.5, and you need to be using the 4.5 IdentityModel stack. You can't use the original 3.5 bits.

At this point I’m going to assume you’ve got a working IdP already. There are lots of articles out there explaining how to build one. We’re just going to mod the STS.

Before making any code changes though you need to add the JWT token handler, which is easily installed via NuGet (I ♥ NuGet):

PM> Install-Package System.IdentityModel.Tokens.Jwt

This will need to be added to the project that exposes your STS configuration class.

Next, we need to inject the token handler into the STS pipeline. This can easily be done by adding an entry to the web.config system.identityModel section:
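The entry looks something like the following. A caveat: the exact type and assembly names depend on which version of the NuGet package you have installed, so treat these as an assumption to verify; older previews of the package used Microsoft.IdentityModel.Tokens.JWT.JWTSecurityTokenHandler instead.

<system.identityModel>
  <identityConfiguration>
    <securityTokenHandlers>
      <!-- Assumed type/assembly names; check the package you actually installed -->
      <add type="System.IdentityModel.Tokens.JwtSecurityTokenHandler, System.IdentityModel.Tokens.Jwt" />
    </securityTokenHandlers>
  </identityConfiguration>
</system.identityModel>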

Or if you want to hardcode it you can add it to your SecurityTokenServiceConfiguration class.
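For the hardcoded route, a minimal sketch (the class and issuer names here are mine, not anything Windows Azure Pack requires):

public class MyStsConfiguration : SecurityTokenServiceConfiguration
{
    public MyStsConfiguration()
        : base("http://myidp/sts")
    {
        // Adds the JWT handler, replacing any handler already registered
        // for the same token type identifiers.
        SecurityTokenHandlers.AddOrReplace(new JWTSecurityTokenHandler());
    }
}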

There are of course other (potentially better) ways you can add it in, but this serves our purpose for the sake of a sample.

By adding the JWT token handler into the STS pipeline we can begin issuing JWTs to any relying parties that request one. This poses a problem though because passive requests don’t have a requested token type tacked on. Active (WS-Trust) requests do, but not passive. So we need to specify that a JWT should be minted instead of a SAML token. This can be done in the GetScope method of the STS class.
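A bare-bones sketch of the idea follows. A real STS will do much more in GetScope (audience checks, per-RP signing and encryption configuration, and so on); this only shows the token type defaulting:

protected override Scope GetScope(ClaimsPrincipal principal, RequestSecurityToken request)
{
    // Passive requests don't specify a token type, so default to JWT.
    if (string.IsNullOrWhiteSpace(request.TokenType))
        request.TokenType = "urn:ietf:params:oauth:token-type:jwt";

    var scope = new Scope(request.AppliesTo.Uri.OriginalString,
        SecurityTokenServiceConfiguration.SigningCredentials)
    {
        ReplyToAddress = request.ReplyTo,
        TokenEncryptionRequired = false
    };

    return scope;
}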

All we really need to do is specify the TokenType, as WIF will use that to determine which token handler should be used to mint the token. We know this is the value to use because it's exposed by the GetTokenTypeIdentifiers() method in the JWTSecurityTokenHandler class.

Did I mention the JWT library is open source?

So now at this point, if we made a request for a token to the STS, we could receive a WS-Federation wrapped JWT.

If the idea of using a JWT instead of a SAML token appeals to you, you can configure your app to use the JWT token handler similar to Dominick’s sample.

If you were submitting a WS-Trust RST to the STS you could use client code along the lines of:
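This reconstruction uses the standard .NET 4.5 WSTrustChannelFactory types with placeholder endpoints and credentials; swap in whatever binding and credential type your STS actually exposes:

// using System.ServiceModel; using System.ServiceModel.Security;
// using System.IdentityModel.Protocols.WSTrust;
var binding = new WS2007HttpBinding(SecurityMode.TransportWithMessageCredential);
binding.Security.Message.ClientCredentialType = MessageCredentialType.UserName;
binding.Security.Message.EstablishSecurityContext = false;

var factory = new WSTrustChannelFactory(binding,
    new EndpointAddress("https://myidp.example.com/wstrust/issue"));
factory.TrustVersion = TrustVersion.WSTrust13;
factory.Credentials.UserName.UserName = "username";
factory.Credentials.UserName.Password = "password";

var rst = new RequestSecurityToken
{
    RequestType = RequestTypes.Issue,
    KeyType = KeyTypes.Bearer,
    TokenType = "urn:ietf:params:oauth:token-type:jwt", // ask for a JWT explicitly
    AppliesTo = new EndpointReference("http://azureservices/TenantSite")
};

var channel = (WSTrustChannel)factory.CreateChannel();
RequestSecurityTokenResponse rstr;
var token = channel.Issue(rst, out rstr);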

When the GetScope method is called the request.TokenType should be set to whatever you passed in at the client. For more information on service calls you can take a look at the whitepaper Claims-Based Identity in Windows Azure Pack (docx). A future installment of this series might have more information about using services.

Lastly, we need to sign the JWT. The only caveat to using the JWT token handler is that the minimum RSA key size is 2048 bits. If you're using a key smaller than that, please upgrade it. We're going to overlook the fact that the MSDN article shows how to bypass minimum key sizes. Seriously: don't do it. I don't want to have to explain why (putting paranoia aside for a moment, 1024-bit keys are being deprecated by Windows and related services in the near future anyway).

Issuing Tokens to Windows Azure Pack

So now we’re at a point where we can mint a JWT token. The question we need to ask now is what claims should this token contain? Looking at Part 1 we see that the Admin Portal requires UPN and Group claims. The tenant portal only requires the UPN claim.

Lucky for us the JWT token handler is smart. It knows to transform certain known XML-token-friendly-claim-types to JWT friendly claim types. In our case we can use http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn in our ClaimsIdentity to map to the UPN claim, and http://schemas.xmlsoap.org/claims/Group to map to our Group claim.
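For example, building the identity to hand to the token handler (the values are obviously placeholders):

var identity = new ClaimsIdentity(new[]
{
    new Claim("http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn", "steve@syfuhs.net"),
    new Claim("http://schemas.xmlsoap.org/claims/Group", "MgmtSvc Operators")
}, "Federation");

The handler then emits these as the short-form upn and group claims in the JWT.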

Then we need to determine where to send the token, and who to address it to. Both the tenant and admin sites have Federation Metadata documents that specify this information for us. If you’ve got an IdP that can parse the metadata then all you need to do is point it to https://yourtenantsite/FederationMetadata/2007-06/FederationMetadata.xml for the tenant configuration or https://youradminsite/FederationMetadata/2007-06/FederationMetadata.xml for the admin configuration.

Of course, this information will also map up to the configuration elements we looked at in Part 2. That’ll tell us the Audience URI and the Reply To for both sites.

Finally we have everything we need to mint the token, address it, and send it on its way.

Configuring Windows Azure Pack to Trust your Token

The token's been sent, and once it hits either the tenant or admin site it'll promptly be ignored and you'll get an ugly error message saying “nope, not gonna happen, bub.”

We therefore need to configure Windows Azure Pack to trust our token. Looking at MSDN we see some somewhat useful information telling us what we need to modify, but frankly, it's missing a bunch of information so we're going to ignore it.

First things first: if your IdP publishes a Federation Metadata document then you can just configure everything via PowerShell:
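Something along these lines – the cmdlet comes straight from the MgmtSvcConfig module we'll see below, but double-check the parameter names against Get-Help before trusting my memory of them:

PS C:\Windows\system32> $connString = 'Server=sqlserver;Initial Catalog=Microsoft.MgmtSvc.PortalConfigStore;User Id=...;Password=...'
PS C:\Windows\system32> Set-MgmtSvcRelyingPartySettings -Target Admin -MetadataEndpoint 'https://yourIdP/FederationMetadata/2007-06/FederationMetadata.xml' -ConnectionString $connString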

You can replace the target “Admin” with “Tenant” if you want to configure the Tenant Portal. The only caveat with doing it this way is that the metadata document needs to be accessible from the server. I've submitted a feature request that they support local file paths too; hopefully they listen! Since the parameter takes the full URL you can put the metadata document somewhere public if it's not normally accessible. You only need the metadata accessible while applying this configuration.

If the cmdlet completed successfully then you should be able to log in from your own IdP. That's all there is to it. I would seriously recommend going this route instead of configuring things manually.

Otherwise, let's carry on.

Since we can’t import our federation metadata (since we probably don’t have any), we need to configure things manually. To do that we need to modify settings in the database.

Looking back to Part 2 we see all the configuration elements that enable our federated trust to the default IdPs. We’ll need to update a few settings across the Microsoft.MgmtSvc.Store and Microsoft.MgmtSvc.PortalConfigStore databases.

The MSDN documentation says to modify the settings in the PortalConfigStore database. It's wrong – or rather, it's incomplete, as that's only part of the process.

The PortalConfigStore database contains the settings used by the Tenant and Admin Portals to validate and request tokens. We need to modify these settings to use our custom IdP. To do so, locate the Authentication.IdentityProvider setting in the [Config].[Settings] table. The namespace we need to choose depends on which site we want to configure; in our case we select the Admin namespace. As we saw last time, it looks something like this (this is the TenantSite flavour; the Admin namespace entry has the same shape, pointing at the Windows Auth STS instead):
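{
   "Realm":"http://azureservices/AuthSite",
   "Endpoint":"https://auth-cloud.syfuhs.net/wsfederation/issue",
   "Certificates":[
      "MIIC2...ADLt0="
   ]
}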

We need to substitute our STS information here. The Realm is whatever your STS issuer is, and the Endpoint is where ever your WS-Federation endpoint is located. The Certificate should be a base 64 encoded representation of your signing certificate (remember, just the public key).

In my experience I’ve had to do an IISRESET on the portals to get the settings refreshed. I might just be impatient though.

Once those values are replaced you can try logging in. You should be redirected to your IdP and if you issue the token properly it’ll hit the portal and you should be logged in. Unfortunately this’ll actually fail with a non-useful error message.

[screenshot: the non-useful error message]

Who can guess why? So far I’ve stated that the MSDN documentation is missing information. What have we missed? Hopefully if you’ve read the first two parts of this series you’re yelling at the screen telling me to get on with it already because you’ve caught on to what I’m saying.

We haven’t configured the API services to trust our STS! Oops.

With that being said, we now have proof that Windows Azure Pack flows the token to the services from the Portal and, more importantly, the services validate the token. Cool!

Anyway, now to configure the APIs. Warning: complicated.

In the Microsoft.MgmtSvc.Store database locate the Settings table, and then locate the Authentication.IdentityProvider.Secondary element in the AdminAPI namespace. We need to update it with the exact same values we put into the configuration element in the other database.
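As a purely hypothetical sketch – I'm assuming the Settings table here follows the same namespace/key/value layout we saw in the PortalConfigStore database, and the schema name may well differ (the Store database uses [mp] for other tables), so verify against your install before running anything like this:

UPDATE [Config].[Settings] -- adjust the schema/table names to match your install
SET [Value] = '{"Realm":"http://yourIdP/sts","Endpoint":"https://yourIdP/wsfederation/issue","Certificates":["MIIC2...ADLt0="]}'
WHERE [Namespace] = 'AdminAPI'
  AND [Name] = 'Authentication.IdentityProvider.Secondary'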

If you’re only wanting to configure the Tenant Portal you’d want to modify the Authentication.IdentityProvider.Primary configuration element. Be careful with the Primary/Secondary elements as they can get confusing.

If you’re configuring the Admin Portal you’ll need to update the Authentication.IdentityProvider.Secondary configuration element in the TenantAPI namespace to use the configuration you specified for the Admin Portal as well. As I said previously, I think this is because the Admin Portal calls into the Tenant API. The Admin Portal will use an admin-trusted token – therefore the TenantAPI needs to trust the admin’s STS.

Now that you’ve completed configuration you can do an IISRESET and try logging in. If you configured everything properly you should now be able to log in from your own IdP.

Troubleshooting

For those rock star Ops people who understand identity this guide was likely pretty easy to follow, understand, and implement. For everyone else though, this was probably a pain in the neck. Here are some troubleshooting tips.

Review the Event Logs
It's surprising how many people forget that a lot of applications will write errors to the Windows Event Log. Windows Azure Pack has quite a number of logs that you can review for more information. If you're trying to track down an issue in the portals, look in the MgmtSvc-*Site logs, where * is Tenant or Admin. Errors will get logged there. If you're stuck mucking about in the APIs, look in the MgmtSvc-*API logs, where * is Tenant, Admin, or TenantPublic.

Enable Development Mode
You can enable developer mode in sites by modifying a value in the web.config. Unprotect the web.config by calling:
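For example, for the Tenant Portal site (this is the same cmdlet we used while snooping in Part 2; substitute the namespace for whichever site you're working on):

PS C:\Windows\system32> Unprotect-MgmtSvcConfiguration -Namespace TenantSite

Re-protect it afterwards with Protect-MgmtSvcConfiguration -Namespace TenantSite.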

And then locate the appSetting named Microsoft.Azure.Portal.Configuration.PortalConfiguration.DevelopmentMode and set the value to true. Be sure to undo this and re-protect the configuration when you're done. You should then get a neat error-tracing window in the portals, and more diagnostic information will be logged to the event logs. Probably not wise to do this in a production environment.

Use the PowerShell CmdLets
There are quite a number of PowerShell cmdlets available for you to learn about the configuration of Windows Azure Pack. If you open the Windows Azure Pack Administration PowerShell console you can see that there are two modules that get loaded that are full of cmdlets:

PS C:\Windows\system32> get-command -Module MgmtSvcConfig

CommandType     Name                                               ModuleName
-----------     ----                                               ----------
Cmdlet          Add-MgmtSvcAdminUser                               MgmtSvcConfig
Cmdlet          Add-MgmtSvcDatabaseUser                            MgmtSvcConfig
Cmdlet          Add-MgmtSvcResourceProviderConfiguration           MgmtSvcConfig
Cmdlet          Get-MgmtSvcAdminUser                               MgmtSvcConfig
Cmdlet          Get-MgmtSvcDatabaseSetting                         MgmtSvcConfig
Cmdlet          Get-MgmtSvcDefaultDatabaseName                     MgmtSvcConfig
Cmdlet          Get-MgmtSvcEndpoint                                MgmtSvcConfig
Cmdlet          Get-MgmtSvcFeature                                 MgmtSvcConfig
Cmdlet          Get-MgmtSvcFqdn                                    MgmtSvcConfig
Cmdlet          Get-MgmtSvcNamespace                               MgmtSvcConfig
Cmdlet          Get-MgmtSvcNotificationSubscriber                  MgmtSvcConfig
Cmdlet          Get-MgmtSvcResourceProviderConfiguration           MgmtSvcConfig
Cmdlet          Get-MgmtSvcSchema                                  MgmtSvcConfig
Cmdlet          Get-MgmtSvcSetting                                 MgmtSvcConfig
Cmdlet          Initialize-MgmtSvcFeature                          MgmtSvcConfig
Cmdlet          Initialize-MgmtSvcProduct                          MgmtSvcConfig
Cmdlet          Install-MgmtSvcDatabase                            MgmtSvcConfig
Cmdlet          New-MgmtSvcMachineKey                              MgmtSvcConfig
Cmdlet          New-MgmtSvcPassword                                MgmtSvcConfig
Cmdlet          New-MgmtSvcResourceProviderConfiguration           MgmtSvcConfig
Cmdlet          New-MgmtSvcSelfSignedCertificate                   MgmtSvcConfig
Cmdlet          Protect-MgmtSvcConfiguration                       MgmtSvcConfig
Cmdlet          Remove-MgmtSvcAdminUser                            MgmtSvcConfig
Cmdlet          Remove-MgmtSvcDatabaseUser                         MgmtSvcConfig
Cmdlet          Remove-MgmtSvcNotificationSubscriber               MgmtSvcConfig
Cmdlet          Remove-MgmtSvcResourceProviderConfiguration        MgmtSvcConfig
Cmdlet          Reset-MgmtSvcPassphrase                            MgmtSvcConfig
Cmdlet          Set-MgmtSvcCeip                                    MgmtSvcConfig
Cmdlet          Set-MgmtSvcDatabaseSetting                         MgmtSvcConfig
Cmdlet          Set-MgmtSvcDatabaseUser                            MgmtSvcConfig
Cmdlet          Set-MgmtSvcFqdn                                    MgmtSvcConfig
Cmdlet          Set-MgmtSvcIdentityProviderSettings                MgmtSvcConfig
Cmdlet          Set-MgmtSvcNotificationSubscriber                  MgmtSvcConfig
Cmdlet          Set-MgmtSvcPassphrase                              MgmtSvcConfig
Cmdlet          Set-MgmtSvcRelyingPartySettings                    MgmtSvcConfig
Cmdlet          Set-MgmtSvcSetting                                 MgmtSvcConfig
Cmdlet          Test-MgmtSvcDatabase                               MgmtSvcConfig
Cmdlet          Test-MgmtSvcPassphrase                             MgmtSvcConfig
Cmdlet          Test-MgmtSvcProtectedConfiguration                 MgmtSvcConfig
Cmdlet          Uninstall-MgmtSvcDatabase                          MgmtSvcConfig
Cmdlet          Unprotect-MgmtSvcConfiguration                     MgmtSvcConfig
Cmdlet          Update-MgmtSvcV1Data                               MgmtSvcConfig

There's also the MgmtSvcAdmin module, which is more for day-to-day administration.

Read the Windows Azure Pack Claims Whitepaper
See here: Claims-Based Identity in Windows Azure Pack (docx).

Visit the Forums
When in doubt take a look at the forums and ask a question if you’re stuck.

Email Me
Lastly, you can contact me (steve@syfuhs.net) with any questions. I may not have answers but I might be able to find someone who can help.

Conclusion

In the first two parts of this series we looked at how authentication works and how it's configured, and in this installment we looked at how we can configure a third party IdP to log in to Windows Azure Pack. If you're trying to configure Windows Azure Pack to use a custom IdP, I imagine this part is the most complicated to figure out, and hopefully it's documented well enough here. I personally spent a fair amount of time fiddling with settings, and most of the information I've gathered for this series has been the result of lots of trial and error. With any luck this series has proven useful to you and you have more luck with the configuration than I originally did.

Next time we’ll take a look at how we can consume the public APIs using a third party IdP for authentication.

In the future we might take a look at how we can authenticate requests to a service called from a Windows Azure Pack add-on, and how we can call into Windows Azure Pack APIs from an add-on.

Windows Azure Pack Authentication Part 2

by Steve Syfuhs / January 30, 2014 07:50 PM

Last time we took a look at how Windows Azure Pack authenticates users in the Admin Portal. In this post we are going to look at how authentication works in the Tenant Portal.

Authentication in the Tenant Portal works exactly the same way authentication in the Admin Portal works.

Detailed and informative explanation, right?

Actually, with any luck you've read, and more importantly were able to decipher, my explanations in the last post, because this time we're going to go a bit deeper into how authentication is configured. If that's the case then you know everything you need to know to continue on here. There are a couple minor differences between the Admin sites and Tenant sites, such as the tenant STS storing users in a standalone SQL database instead of Active Directory, and there being a set of public service endpoints that also federate with the Tenant STS. For the time being we can ignore the public API, but we may revisit it in the future.

[diagram: tenant authentication architecture]

One of the things this diagram doesn’t show is how the various services store configuration information. This is somewhat important because the Portals and APIs need to keep track of where the STS is, what is used to sign tokens, who is allowed to receive tokens, etc.

Since Windows Azure Pack is designed to be distributed in nature, it’s a fair bet most of the configuration is stored in databases. Let’s check the PowerShell cmdlets (horizontal spacing truncated a bit to fit):

PS C:\Windows\system32> Get-MgmtSvcDefaultDatabaseName

DefaultDatabaseName                        Description
-------------------                        -----------
Microsoft.MgmtSvc.Config                   Configuration store database
Microsoft.MgmtSvc.PortalConfigStore        Admin and Tenant sites database
Microsoft.MgmtSvc.Store                    Rest API layer database
Microsoft.MgmtSvc.MySQL                    MySQL resource provider database
Microsoft.MgmtSvc.SQLServer                SQLServer resource provider database
Microsoft.MgmtSvc.Usage                    Usage service database
Microsoft.MgmtSvc.WebAppGallery            WebApp Gallery resource provider database

Well that’s handy. It even describes what each database does. Looking at the databases on the server we see each one:

[screenshot: the Windows Azure Pack databases]

Looking at the descriptions we can immediately ignore anything described as a “resource provider database” because resource providers in Windows Azure Pack are the services exposed by the Portals and APIs. That leaves us with the Microsoft.MgmtSvc.Config, Microsoft.MgmtSvc.PortalConfigStore, Microsoft.MgmtSvc.Store, and Microsoft.MgmtSvc.Usage databases.

The usage database looks like the odd one out so if we peek into the tables we see configuration information and data for usage of the resource providers. Scratch that.

We're then left with a Config database, a PortalConfigStore database, and a Store database. How's that for useful naming conventions? Given the descriptions we could infer we likely only want to look into the PortalConfigStore database for the Tenant and Admin Portal configuration, and the Store database for the API configuration. To confirm that we can peek into the Config database and see what's there. If we look in the Settings table we see a bunch of encrypted key-value pairs. Nothing jumps out as being related to federation information like endpoints, claims, or signing certificates, but we do see pointers to database credentials.

If we quickly take a look at some of the web.config files in the various Windows Azure Pack sites we can see that some of them only have connection strings to the Config database. Actually, if we look at any of the web.config files we’ll see they are protected, so we need to unprotect them:

PS C:\Windows\system32> Unprotect-MgmtSvcConfiguration -Namespace TenantAPI

Please remember to protect-* them when you’re done snooping!

If we compare the connection strings and information in the Config.Settings table, it's reasonable to hypothesize that the Config database stores pointers to the other configuration databases, and that the sites only need a configured connection string to a single database. This only seems to apply to some sites though: the Portal sites actually have connection strings pointing only to the PortalConfigStore database. That actually makes sense from a security perspective.

Since the Portal sites are public-ish facing, they are more likely to be attacked, and therefore really shouldn't have direct connections to databases storing sensitive information – hence the Web APIs. Looking at the architectural documentation on TechNet we can see it's recommended that API services not be public facing (with the exception of the Public APIs) as well, so that supports my assertion.

Moving on, we now have the PortalConfigStore and the Store databases left. The descriptions tell us everything we need to know about them. We end up with a service relationship along the lines of:

[diagram: which sites talk to which configuration databases]

Okay, now that we have a rough idea of how configuration data is stored we can peek into the databases and see what’s what.

Portal Sites Authentication

Starting with the PortalConfigStore database we see a collection of tables.

[screenshot: tables in the PortalConfigStore database]

The two tables that pop out are the Settings table and the aspnet_Users table. We know the Auth Site for the tenants stores users in a database, and lookie here we have a collection of users.

Next up is the Settings table. It’s a namespace-key-value-pair mapped table. Since this database stores information for multiple sites, it makes sense to separate configuration data into multiple realms – the namespaces.

There are 4 namespaces we care about:

  • AdminSite
  • AuthSite
  • TenantSite
  • WindowsAuthSite

Looking at the TenantSite configuration we see a few entries with JSON values:

  • Authentication.IdentityProvider
  • Authentication.RelyingParty

Aha! Here’s where we store the necessary bits to do the federation dance. The Authentication.RelyingParty entry stores the information describing the TenantSite. So when it goes to the IdP with a request it can use these values. In my case I’ve got the following:

{
   "EncryptionCertificate":null,
   "Realm":"http://azureservices/TenantSite",
   "ReplyTo":"https://manage-cloud.syfuhs.net/"
}

Really, just the bare minimum to describe the RP: the Realm, which is the unique identifier of the site; the Reply To URL, which is where the token should be returned to; and the Encryption Certificate, in case the returned token is encrypted – which it isn't by default. With this information we can make a request to the IdP, but of course, we don't know anything about the IdP yet so we need to look up that configuration information.

Looking at the Authentication.IdentityProvider entry we see everything else we need to complete a WS-Federation passive request for token. This is my configuration:

{
   "Realm":"http://azureservices/AuthSite",
   "Endpoint":"https://auth-cloud.syfuhs.net/wsfederation/issue",
   "Certificates":[
      "MIIC2...ADLt0="
   ]
}

To complete the request we actually only need the Endpoint as that describes where the request should be sent, but we also now have the information to validate the token response. The Realm describes who minted the token, and the Certificates element is a collection of certificates that could have been used to sign the token. If the token was signed by one of these certificates, we know it’s a valid token.

We do have to go one step further when validating this though, as we need to make sure the token is intended to be used by the Tenant Portal. This is done by comparing the audience URI in the token (see the last post) to the Realm in the Authentication.RelyingParty configuration value. If everything matches up we’re good to go.

We can see the configuration in the AdminSite namespace is similar too.

Next up we want to look at the AuthSite namespace configuration. There are similar entries to the TenantSite, but they serve slightly different purposes.

The Authentication.IdentityProvider entry matches the entry for the TenantSite. I’m not entirely sure of its purpose, but I suspect it might be a reference value for when changes are made and the original configuration is needed. Just a guess on that though.

Moving on we have the Authentication.RelyingParty.Primary entry. This value describes who can request a token, which in our case is the TenantSite. My entry looks like this:

{
   "EncryptionCertificate":null,
   "Realm":"http://azureservices/TenantSite",
   "ReplyTo":"https://manage-cloud.syfuhs.net/"
}

It's pretty similar to the configuration in the TenantSite entry. The Realm is the identifier of the site that can request a token, and the Reply To URL is where the token should be returned once it's minted.

Compare that to the values in the WindowsAuthSite namespace and things look pretty similar too.

So with all that information we’ve figured out how the Portal sites and the Auth sites are configured. Of course, we haven’t looked at the APIs yet.

API Authentication

If you recall from the last post, API calls are authenticated by attaching a JWT to the request header. The JWT has to be validated by the APIs the same way the Portals have to validate the JWTs received from the STS. If we look at the diagram above though, the API sites don't have access to the PortalConfigStore database; they have access to the Store database. Therefore it's reasonable to assume the Store database has a copy of the federation configuration data as well.

Looking at the Settings table we can confirm that assumption. It’s got the same schema as the Settings table in the PortalConfigStore database, though in this case there are different namespaces. There are two namespaces that are of interest here:

  • AdminAPI
  • TenantAPI

This aligns with the service diagram above. We should only have settings for the namespaces of services that actually touch the database.

If we look at the TenantAPI elements we have four entries:

  • Authentication.IdentityProvider.Primary
  • Authentication.IdentityProvider.Secondary
  • Authentication.RelyingParty.Primary
  • Authentication.RelyingParty.Secondary

The Authentication.IdentityProvider.Primary entry matches up with the TenantSite Authentication.IdentityProvider entry in the PortalConfigStore database. That makes sense since it needs to trust the token the same as the Tenant Portal site does. The Secondary element is curious though. It's configured as a relying party of the Admin STS. I suspect that's there because the Tenant APIs can be called from the Admin Portal.

Comparing these values to the AdminAPI namespace we see that there are only configuration entries for the Admin STS. Seems reasonable, since the Tenant Portal probably shouldn't be calling into admin APIs. Haven't got a clue why the AdminAPI relying party is configured as Secondary though :). An artifact of the design, I guess. Another interesting artifact of this configuration is that the ReplyTo values in the RelyingParty entries show the default values from when I first installed the services. We see something like:

{
   "EncryptionCertificate":null,
   "Realm":"http://azureservices/TenantSite",
   "ReplyTo":"https://syfuhs-cloud:30081/"
}

And

{
   "EncryptionCertificate":null,
   "Realm":"http://azureservices/AdminSite",
   "ReplyTo":"https://syfuhs-cloud:30091/"
}

I reconfigured the endpoints to be publicly accessible so these values are now incorrect.

APIs can't really use Reply To the same way passive requests can, so it makes sense that they don't get updated – they don't have to be. The values don't have to be present either but, again, artifacts I guess.

Conclusion

In the previous post we looked at how authentication works conceptually, and in this post we looked at how authentication is configured in detail. Next time we’ll take a look at how we can reconfigure Windows Azure Pack to work with our own IdPs.

No spoilers this time. ;)

Windows Azure Pack Authentication Part 1

by Steve Syfuhs / January 29, 2014 10:17 PM

Recently Microsoft released their on-premises Private Cloud offering called Windows Azure Pack for Windows Server.

Windows Azure Pack for Windows Server is a collection of Windows Azure technologies, available to Microsoft customers at no additional cost for installation into your data center. It runs on top of Windows Server 2012 R2 and System Center 2012 R2 and, through the use of the Windows Azure technologies, enables you to offer a rich, self-service, multi-tenant cloud, consistent with the public Windows Azure experience.

Cool!

There are a fair number of articles out there that have nice write ups on how it works, what it looks like, how to manage it, etc., but I’m not going to bore you with the introductions. Besides, Marc over at hyper-v.nu has already written up a fantastic collection of blog posts and I couldn’t do nearly as good a job introducing it.

Today I want to look at how Windows Azure Pack does authentication.

Architecture

Before we jump head first into authentication we should take a look at how Windows Azure Pack works at an architectural level. It's important to understand all the pieces that depend on authentication. If you take a look at the TechNet articles you can see there are a number of moving parts.

The primary components of Windows Azure Pack are broken down into services. Depending on how you want it all to scale you can install the services on one server or multiple servers, or multiple redundant servers. There are 7+1 primary services, and 5+ secondary services involved.

The primary services are:
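Based on the endpoints used throughout this series, they break down roughly like this (the API endpoints aren't meant to be public, so no URLs for those):

  • Admin Portal – https://admin-cloud.syfuhs.net
  • Admin API – https://admin-cloud.syfuhs.net:30004
  • Windows Auth Site (the admin STS) – https://adminauth-cloud.syfuhs.net
  • Tenant Portal – https://manage-cloud.syfuhs.net
  • Auth Site (the tenant STS) – https://auth-cloud.syfuhs.net
  • Tenant API – internal only
  • Tenant Public API – the public service endpoints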

To help simplify some future samples I’ve included the base URLs of the services above. Anything public-ish facing has its own subdomain, and the related backend APIs are on the same domain but a different port (the ports coincide with the default installation). Also, these are public endpoints – be kind please!

The Secondary services are for resource providers which are things like Web Sites, VM Cloud, Service Bus, etc. While the secondary services are absolutely important to a private cloud deployment and perhaps “secondary” is an inappropriate adjective, they aren’t necessarily in scope when talking about authentication. Not at this point at least. Maybe in a future post. Let me know if that’s something you would like to read about.

Admin Portal

The Admin Portal is a UI surface that allows administrators to manage resource providers like Web Sites and VM Clouds. It calls into the Admin API to do all the heavy lifting. The Admin API is a collection of Web API interfaces.

The Admin Portal and the Admin API are Relying Parties of the Admin Authentication Site. The Admin Authentication Site is a STS that authenticates users using Windows Auth.

[diagram: Admin Portal, Admin API, and Admin Authentication Site trust relationships]

During initial authentication the Admin Portal will redirect to the STS and request a WS-Federation-wrapped JWT (JSON Web Token – pronounced “jot”). Once the Admin Portal receives the token it validates the token and begins issuing requests to the Admin API attaching that unwrapped JWT in an Authorization header.

This is how a login would flow:

  1. Request admin-cloud.syfuhs.net
  2. No auth – redirect to adminauth-cloud.syfuhs.net
  3. Do Windows Auth and mint a token
  4. Return the JWT to the Admin Portal
  5. Attach the JWT to the session

It’s just a WS-Fed passive flow. Nothing particularly fancy here besides using a JWT instead of a SAML token. WS-Federation is a token-agnostic protocol so you can use any kind of token format so long as both the IdP and RP understand it. A JWT looks something like this:

Header: {
    "x5t": "3LFgh5SzFeO4sgYfGJ5idbHxmEo",
    "alg": "RS256",
    "typ": "JWT"
},
Claims: {
    "upn": "SYFUHS-CLOUD\\Steve",
    "primarysid": "S-1-5-21-3349883041-1762849023-1404173506-500",
    "aud": "
http://azureservices/AdminSite",
    "primarygroupsid": "S-1-5-21-3349883041-1762849023-1404173506-513",
    "iss": "
http://azureservices/WindowsAuthSite",
    "exp": 1391086240,
    "group": [
        "SYFUHS-CLOUD\\None",
        "Everyone",
        "NT AUTHORITY\\Local account and member of Administrators group",
        "SYFUHS-CLOUD\\MgmtSvc Operators",
        "BUILTIN\\Administrators",
        "BUILTIN\\Users",
        "NT AUTHORITY\\NETWORK",
        "NT AUTHORITY\\Authenticated Users",
        "NT AUTHORITY\\This Organization",
        "NT AUTHORITY\\Local account",
        "NT AUTHORITY\\NTLM Authentication"
    ],
    "nbf": 1391057440
}, Signature: “…”

Actually, that's a bit off because it's not represented as { Header: {…}, Claims: {…} }, but that's the logical representation.

If we look at the token there are some important bits. The UPN claim is the user identifier; the AUD claim is the audience receiving the token; the ISS claim is the issuer of the token. This is pretty much all the Admin Portal needs for proper authentication. Since this is an administrators portal it should probably do some authorization checks too though. The Admin Portal uses the UPN and/or group membership claims to decide whether a user is authorized.

If we quickly take a look at the configuration databases, namely the Microsoft.MgmtSvc.Store database, we can see a table called [mp].[AuthorizedAdminUsers]. This table lists the principals that are currently authorized to log into the Admin Portal. Admittedly, we probably don’t want to go mucking around the database though so we can use PowerShell to take a look.

PS C:\Windows\system32> Get-MgmtSvcAdminUser -Server localhost\sqlexpress
SYFUHS-CLOUD\MgmtSvc Operators
SYFUHS-CLOUD\Steve

My local user account and the MgmtSvc Operators group match the claims in my token, so I can log in. Presumably it's built so I just need a UPN or group claim matched up to let me in, but I must confess I haven't gotten around to testing that yet. Surely there's documentation on TechNet about it… ;)

As an aside, it looks like PowerShell is the only way to modify the admin user list currently, so you can use Windows groups to easily manage authorization.

So anyway, now we have this token attached to the user session as part of the FedAuth cookie. I'm guessing they've set the BootstrapContext to be the JWT, because this token has to always be present behind the scenes while the user's session is still valid. However, in order for the Admin Portal to do anything it needs to call into the Admin API. Here's the cool part: the JWT that is part of the session is simply attached to the request as an Authorization header (snipped for clarity).

GET https://admin-cloud.syfuhs.net:30004/subscriptions?skip=0&take=1 HTTP/1.1
Authorization: Bearer eyJ0eXAiOi...2Pfl_q3oVA
x-ms-principal-id: SYFUHS-CLOUD%5cSteve
Accept-Language: en-US
Host: admin-cloud.syfuhs.net:30004
Connection: Keep-Alive

The response is just a chunk of JSON:

HTTP/1.1 200 OK
Cache-Control: no-cache
Pragma: no-cache
Content-Type: application/json; charset=utf-8
Expires: -1
Server: Microsoft-IIS/8.5
X-AspNet-Version: 4.0.30319
x-ms-request-id: 3a96aabb91e7403b968a8aa9b569ad5f.2014-01-30T04:42:02.8142310Z
X-Powered-By: ASP.NET
Date: Thu, 30 Jan 2014 04:42:04 GMT
Content-Length: 1124

{
   "items":[
      {
         "SubscriptionID":"386dd878-64b8-41a7-a02a-644f646e2df8",
         "SubscriptionName":"Sample Plan",
         "AccountAdminLiveEmailId":"steve@syfuhs.net",
         "ServiceAdminLiveEmailId":null,
         "CoAdminNames":[

         ],
         "AddOnReferences":[

         ],
         "AddOns":[

         ],
         "State":1,
         "QuotaSyncState":0,
         "ActivationSyncState":0,
         "PlanId":"Samplhr1gggmr",
         "Services":[
            {
               "Type":"sqlservers",
               "State":"registered",
               "QuotaSyncState":0,
               "ActivationSyncState":0,
               "BaseQuotaSettings":[…snip…]
            },
            {
               "Type":"mysqlservers",
               "State":"registered",
               "QuotaSyncState":0,
               "ActivationSyncState":0,
               "BaseQuotaSettings":[…snip…]
            }
         ],
         "LastErrorMessage":null,
         "Features":null,
         "OfferFriendlyName":"Sample Plan",
         "OfferCategory":null,
         "Created":"2014-01-30T03:21:43.533"
      }
   ],
   "filteredTotalCount":1,
   "totalCount":1
}

At this point (or rather, before it returns the response *cough*) the Admin API needs to authenticate and authorize the incoming request. The service is somewhat RESTful so it looks to the headers to check for an Authorization header (because the HTTP spec suggests that's how it be done, REST blah blah blah, et al :D).

The authorization header states that it has a Bearer token, which basically means the caller has already proven they are who they say they are based on the fact that they hold a token from a trusted STS (hence “bearer”). In other words they don’t have to do anything else. That token is the key to the kingdom.

Yet another aside: bearer tokens are sometimes considered insecure because anyone who has a copy of one could impersonate the user. With that being said, it's better to use a token than, say, send the user's password with each request.

Now at this point I can't say for certain how things work internally as I don't have any access to the source (I *could* use Reflector but that's cheating), but since it's Web API I'd guess they're using a DelegatingHandler or something similar to do the verification. Vittorio actually has a great sample of how this could be done via what he's calling Poor Man's Delegation/ActAs. It's the same principle – receive a token on web login, validate the token so the user can actually log in, keep the token, want to call an authenticated web service, still have the token, stick the token in the header, web service authorizes the token, done. Not too shabby.
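To make that concrete, here's a hand-wavy sketch of what such a handler could look like. To be clear, this is my guess, not Microsoft's code; the TokenValidationParameters property names vary between versions of the JWT package, and LoadStsCertificateSomehow is a stand-in for however you'd load the STS signing certificate:

public class JwtBearerValidationHandler : DelegatingHandler
{
    // Hypothetical helper: load the STS signing certificate from config/store.
    private readonly X509Certificate2 stsSigningCert = LoadStsCertificateSomehow();

    protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        var auth = request.Headers.Authorization;

        // No bearer token, no service.
        if (auth == null || auth.Scheme != "Bearer")
            return Task.FromResult(request.CreateResponse(HttpStatusCode.Unauthorized));

        try
        {
            var handler = new JWTSecurityTokenHandler();

            // Who the token may be for, who must have issued it, and which
            // certificate must have signed it.
            var parameters = new TokenValidationParameters
            {
                AllowedAudience = "http://azureservices/AdminSite",
                ValidIssuer = "http://azureservices/WindowsAuthSite",
                SigningToken = new X509SecurityToken(stsSigningCert)
            };

            // Throws if the token doesn't check out.
            Thread.CurrentPrincipal = handler.ValidateToken(auth.Parameter, parameters);
        }
        catch (SecurityTokenValidationException)
        {
            return Task.FromResult(request.CreateResponse(HttpStatusCode.Unauthorized));
        }

        return base.SendAsync(request, cancellationToken);
    }
}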

Conclusion

So at this point we’ve seen how the Admin Portal authenticates users and how it securely calls its backend web services. Next time we’ll look at how the Tenant Portal does it (SPOILERS: it does it the same way!). Ahem -- more specifically, we’ll look at how the Tenant Portal is configured so it can actually trust the token it receives. From there we can take a look at how other IdPs can be configured for log in, and if we’re really wanting to be daring we could build our own custom STS for log in (it’s a wee-bit more complicated than you might think).

What Makes a Device a Business Device?

by Steve Syfuhs / March 26, 2013 05:40 PM

Last night I had the opportunity to meet up with some local west coast MVPs and as all good meet ups go some great conversations ensued. We talked about lots of things but towards the end of the night we got on the topic of personal devices and business devices.

The question was posed: is an iPhone/Windows Phone/iPad/Surface/etc a business device?

There was a resounding “no!” from a few people. Of course it [a given device] isn’t a business device… it was made for consumers.

It’s a valid argument based on the logic that there is a distinction between business-level and consumer-level devices. Business devices tend to have features that align with business requirements and consumer devices have features that align with consumer requirements. They tend to be branded as one or the other. Sure. Yep. Absolutely. Bang on.

Except people use their consumer devices at work, or they use their consumer devices at home to access resources at work.

Any time you use a device of any type to access business resources it is now a business device. Period.

This has security implications. The minute you use your own personal device to access business resources, for better or for worse, it is now under the purview of your organization's security. It has to be. Security is there to protect the organization. If you access business resources from your device, security must be in place to protect the device, to protect the resources, to protect the business. This makes it a business device.

I'm not saying it's a good or a bad idea to use consumer-branded devices for business. I'm simply saying that if you use a device for work-related things, regardless of its intended usage, it is now a business device.

Real-time User Notification and Session Management with SignalR - Part 2

by Steve Syfuhs / March 24, 2013 02:31 PM

In Part 1 I introduced a basic usage of SignalR and talked about the goals we were trying to accomplish with the library.

In the next few posts I’m going to show how we can build a real-time user notification and session management system for a web application.

In this post I’ll show how we can implement a solution that accomplishes our goals.

Before diving back into SignalR it’s important to have a quick rundown of concepts for session management. If we think about how sessions work for a user in most applications it’s usually conceptually simple. A session is a mechanism to track user rights between the user logging in and logging out.  A session is usually tracked through a cookie attached to each request made to the server. A user has a session (or multiple sessions if they are logged in from another machine/browser) and each session is tied to a request or connection. Each time the user requests a page a new connection is opened to the server. As long as the session is active each connection is authorized to do whatever it needs to do (as defined by whatever authorization policies are in place).

[diagram: users, sessions, and connections]

When you kill a session each subsequent connection for that session is denied. The session is dead, no more access. Simple. A session is usually killed when a user explicitly logs out and destroys the session cookie or the browser is closed. This doesn’t normally kill any other sessions tied to the user though. The connections made from another browser are still authorized.

From a security perspective we may want to notify the user that another session is already active or was just created. We can then allow the user to destroy the other session if they want.

SignalR works really well in this scenario because it solves a nasty problem of timing. Normally when the server wants to tell the client something it has to wait for the client to make a request to the server and then the client has to act on the server’s message. A request to the server is usually only done when a user explicitly clicks something, or there’s a timer polling every 30 seconds or so. If we want to notify the user instantly of another session we can’t necessarily wait for the client to call. SignalR solves this problem because it can call the client directly from the server.

Now, allowing a user to control other sessions requires tracking sessions and connections. If we follow the diagram above we have a pretty simple relationship between users and sessions, and between sessions and connections. We could store this information in a database or other persistent storage, and in fact would want to for non-trivial applications, but for the sake of this post we’ll just store the data in memory.

Most session handlers these days (e.g. the SessionAuthenticationModule in WIF) create a cookie that contains everything the web application should know about the user. As long as the identity in the cookie is valid the user can do whatever the session handler allows. This is a mostly stateless process and aligns with various tenets of REST. Each request to the server contains the identity of the user, and the server doesn't have to track anything. It's simple and powerful.

However, in non-trivial applications this doesn’t always cut it. Security sometimes requires state. In this case we require state in the sense that the server needs to track all active sessions tied to a user. For this we’ll use the WIF SessionAuthenticationModule (SAM) and a custom SessionSecurityTokenHandler.

Before we can validate a session though, we need to track when a session is created. If the application is configured for federation you can create a custom ClaimsAuthenticationManager and call the session creation code, or if you are creating a session token manually you can call this code on login.

void CreateSession()
{
    // Generate a random session identifier and stash it in the principal as a claim.
    string sess = CreateSessionKey();

    var principal = new ClaimsPrincipal(new[] { new ClaimsIdentity(new[] { new Claim(ClaimTypes.Name, "myusername"), new Claim(ClaimTypes.Sid, sess) }, AuthenticationTypes.Password) });

    // Create the session token (valid for one day) and write it out as the session cookie.
    var token = FederatedAuthentication.SessionAuthenticationModule.CreateSessionSecurityToken(principal, "mycontext", DateTime.UtcNow, DateTime.UtcNow.AddDays(1), false);

    FederatedAuthentication.SessionAuthenticationModule.WriteSessionTokenToCookie(token);

    // Track the new session so other sessions for this user can be notified.
    NotificationHub.RegisterSession(sess, principal.Identity.Name);
}

private string CreateSessionKey()
{
    var rng = System.Security.Cryptography.RNGCryptoServiceProvider.Create();

    var bytes = new byte[32];

    rng.GetNonZeroBytes(bytes);

    return Convert.ToBase64String(bytes);
}

We’ll get back to the NotificationHub.RegisterSession method in a bit.

After the session is created, on subsequent requests the SessionSecurityTokenHandler validates whether a user’s session is still valid and authorized. The SAM calls the token handler when it receives a session cookie and generates an identity for the current request.

From here we can determine whether the user’s session was forced to logout. If we override the ValidateSession method we can check against the NotificationHub. Keep in mind this is an example – it’s not a good design decision to track session data in your notification hub. I’m also using ClaimTypes.Sid, which isn’t the best claim type to use either.

protected override void ValidateSession(SessionSecurityToken securityToken)
{
    base.ValidateSession(securityToken);

    var ident = securityToken.ClaimsPrincipal.Identity as IClaimsIdentity;

    if (ident == null)
        throw new SecurityTokenException();

    var sessionClaim = ident.Claims.Where(c => c.ClaimType == ClaimTypes.Sid).FirstOrDefault();

    if(sessionClaim == null)
        throw new SecurityTokenExpiredException();

    if (!NotificationHub.IsSessionValid(sessionClaim.Value))
    {
        throw new SecurityTokenExpiredException();
    }
}

Every time a client makes a request to the server the user’s session is validated against the internal list of valid sessions. If the session is unknown or invalid an exception is thrown which kills the request.
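The IsSessionValid method isn't shown elsewhere in this post; against the dictionaries we'll add to the hub below, it's nothing more than a lookup:

internal static bool IsSessionValid(string sessionId)
{
    // A session is valid as long as RegisterSession added it
    // and KillSession hasn't removed it.
    return UserSessions.ContainsKey(sessionId);
}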

To configure the use of this SecurityTokenHandler you can add it to the web.config in the microsoft.identityModel/service section. Yes, this is still WIF 3.5/.NET 4.0.  There is no requirement for .NET 4.5 here.

<securityTokenHandlers>
    <remove type="Microsoft.IdentityModel.Tokens.SessionSecurityTokenHandler, Microsoft.IdentityModel" />
    <add type="Syfuhs.Demo.CustomSessionSecurityTokenHandler, MyDemo" />
</securityTokenHandlers>

Now that we can track sessions on the server side we need to track connections. To start tracking connections we need to start at our Hub. If we go back to our NotificationHub we can override a few methods, specifically OnConnected and OnDisconnected. Every time a page has loaded the SignalR hubs client library, OnConnected is called and every time the page is unloaded OnDisconnected is called. Between these two methods we can tie all active connections to a session. Before we do that though we need to make sure that all requests to our Hub are only from logged in users.

To ensure only active sessions talk to our hub we need to decorate our hub with the [Authorize] attribute.

[Authorize(RequireOutgoing = true)]
public class NotificationHub : Hub
{
    // snip
}

Then we override the OnConnected method. Within this method we can access what’s called the ConnectionId, and associate it to our session. The ConnectionId is unique for each page loaded and connected to the server.

For this demo we’ll store the tracking information in a couple dictionaries.

private static readonly Dictionary<string, string> UserSessions = new Dictionary<string, string>();

private static readonly Dictionary<string, List<string>> sessionConnections = new Dictionary<string, List<string>>();

public override Task OnConnected()
{
    var user = Context.User.Identity as IClaimsIdentity;

    if (user == null)
        throw new SecurityException();

    var sessionClaim = user.Claims.Where(c => c.ClaimType == ClaimTypes.Sid).FirstOrDefault();

    if (sessionClaim == null)
        throw new SecurityException();

    sessionConnections[sessionClaim.Value].Add(Context.ConnectionId);

    return base.OnConnected();
}

On disconnect we want to remove the connection associated with the session.

public override Task OnDisconnected()
{
    var user = Context.User.Identity as IClaimsIdentity;

    if (user == null)
        throw new SecurityException();

    var sessionClaim = user.Claims.Where(c => c.ClaimType == ClaimTypes.Sid).FirstOrDefault();

    if (sessionClaim == null)
        throw new SecurityException();

    sessionConnections[sessionClaim.Value].Remove(Context.ConnectionId);

    return base.OnDisconnected();
}

Now at this point we can map all active connections to their various sessions. When we create a new session from a user logging in we want to notify all active connections that the new session was created. This notification will allow us to kill the new session if necessary. Here’s where we implement that NotificationHub.RegisterSession method.

internal static void RegisterSession(string sessionId, string user)
{
    UserSessions[sessionId] = user;
    sessionConnections[sessionId] = new List<string>();

    var message = "You logged in to another session";

    var context = GlobalHost.ConnectionManager.GetHubContext<NotificationHub>();

    var userCurrentSessions = UserSessions.Where(u => u.Value == user);

    foreach (var s in userCurrentSessions)
    {
        var connectionsTiedToSession = sessionConnections.Where(c => c.Key == s.Key).SelectMany(c => c.Value);

        foreach (var connectionId in connectionsTiedToSession)
            context.Clients.Client(connectionId).sessionRegistered(message, sessionId);
    }
}

This method will create a new session entry for us and look up all other sessions for the user. It will then loop through all connections for the sessions and notify the user that a new session was created.

So far so good, right? This takes care of almost all of the server-side code. Next we'll jump to the client-side JavaScript and implement that notification.

When the server calls the client to notify the user about a new session we want to write the message out to the screen and give the user the option of killing the session.

HTML:

<div class="notification"></div>

JavaScript:

var notifier = $.connection.notificationHub;

notifier.client.sessionRegistered = function (message, session) {
    var notification = $('.notification');

    notification.text(message);

    notification.append('<a class="killSession" href="#">End Session</a>');
    notification.append('<a class="dismissNotification" href="#">Dismiss</a>');

    $('.killSession').click(function () {
        notifier.server.killSession(session);
        notification.hide(500);
    });

    $('.dismissNotification').click(function () {
        notification.hide(500);
    });
};

On session registration the notification div text is set to the message and a link is created to allow the user to kill the session. The click event calls the NotificationHub.KillSession method.

Back in the hub we implement the KillSession method to remove the session from the list of active sessions.

public void KillSession(string session)
{
    // The session may already have been killed from another connection,
    // so don't assume the key still exists.
    List<string> connections;
    if (!sessionConnections.TryGetValue(session, out connections))
        return;

    var connectionIds = connections.ToList();

    sessionConnections.Remove(session);
    UserSessions.Remove(session);

    foreach (var c in connectionIds)
    {
        Clients.Client(c).sessionEnded();
    }
}
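One caveat worth calling out: as written, any authenticated user can kill any session if they know (or guess) its id. A minimal ownership check at the top of KillSession might look like the sketch below. It assumes the user string stored by RegisterSession is the caller’s Identity.Name, which is an assumption on my part rather than something the series has shown.

// Hypothetical guard: only let users kill their own sessions. Assumes
// RegisterSession stored Identity.Name as the session's user.
string owner;
if (!UserSessions.TryGetValue(session, out owner) || owner != Context.User.Identity.Name)
    throw new SecurityException();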

Once the session is dead a call is made back to the clients associated with that session to notify the page that the session has ended. Back in the JavaScript we can hook into the sessionEnded function and reload the page.

notifier.client.sessionEnded = function () {
    location.reload();
};

Reloading the page will cause the browser to make a request to the server and the server will call our custom SessionSecurityTokenHandler where the ValidateSession method will throw an exception. Once this exception is thrown the request is stopped and all subsequent requests within the same session will have the same fate. The dead session should redirect to your login page.

To test this out all we have to do is load up our application and log in. Then if we create a new session by opening a new browser and logging in (e.g. switching from IE to Chrome, or within IE opening a new session via File > New Session), our first browser should show the notification. If we click the End Session link the new session should automatically be logged out and redirected to the login page.

Pretty cool, huh?

A Word from Microsoft Canada

by Steve Syfuhs / March 19, 2013 11:59 PM

Earlier this week I was asked if I could help out the Canadian DPE Team at Microsoft by letting my readers know about some important information coming down the pipe. For those not in Canada you may not be interested, but still take a look because there are a lot of great resources listed.

--

The team at Microsoft Canada is focused on ensuring that they help set you up for success by providing the information and tools you need in order to get the most out of Microsoft based solutions at home and at work.

Twice a year, Microsoft sends out the Global Relationship Study (GRS for short). It’s a survey that Microsoft uses to collect your feedback and help inform their planning. If you receive emails from Microsoft, subscribe to their newsletters, or have attended any of their events, you may receive the survey.

The important details:

  • Timing – March 4th to April 12th 2013
  • Sent From – “Microsoft Feedback”
  • Email Alias – “feedback@e-mail.microsoft.com”
  • Subject Line – “Help Microsoft Focus on Customers and Partners”

Many of you already read the Microsoft Canada IT Pro team’s blogs, connect with them on LinkedIn and have attended their events in the last year or so. You may already know that you’re their top priority, and they want to hear from you.

Pierre, Anthony and Mitch use the GRS results to shape what they do, how they do it and if it’s resonating with you. Tell them what you need to be the “go-to” guy (or gal). Tell them what you need to grow your career. They want you to be completely satisfied with Microsoft Canada.

This year, Pierre, Anthony and Mitch have delivered 30 IT Camps and counting across the country, giving you the opportunity to get hands-on and learn how to get the most value for your organization. They have a few more events planned this year so keep an eye on their Plancast feed for events near you.

Based on your feedback the topics they’re planning to cover will include:

  • Windows 8
  • Windows Server 2012
  • System Center 2012
  • Private Cloud
  • BYOD – Management and Security

That’s not all. They’ve heard you loud and clear, so in addition to hands-on events they’re also delivering more technical content online via the IT Pro Connection Blog. Windows 8 continues to be a big area of focus for them. They covered a lot of great content at launch and they’ve complemented that with plenty of new content since.

In addition to this, there are some valuable online resources you can use like Microsoft Virtual Academy, Microsoft’s no-cost online training portal, or software evaluations (free trials) on TechNet that allow you to build your own labs to try out what you’ve learned.

Regardless of how you engage with the team at Microsoft Canada, you’d probably agree that they hear you. They’d also encourage you to continue to provide that great feedback. They thrive on it, they relish it, they wallow in it and most importantly of all, they action it. So please keep connecting with them and keep it coming! Pierre, Anthony and Mitch are listening.

Resources, Tools and Training

  • Tim Horton’s Gift Card Contest – We’re giving away 350 Tim Horton’s gift cards; all you have to do to qualify is download a free qualifying software evaluation (trial). Download all three for more chances to win, but hurry, the contest closes soon.*
  • Windows 8 Resource Guide – Download a printable, one-page guide to the top resources that will help you explore, plan for, deploy, manage, and support Windows 8 as part of your IT infrastructure.
  • Windows Server 2012 Evaluation – Get hands on with Windows Server 2012 and explore the scale and performance possibilities for your server virtualization.
  • Microsoft Support – Get help with products, specific errors, virus detection and removal and more.
  • Microsoft Licensing – Visit the Volume Licensing Portal today to ask questions about volume licensing, get a quote, activate a product or find the right program for your organization.

*No purchase necessary. Contest open to residents of Canada, excluding Quebec. Contest closes April 11, 2013 at 11:59:59 p.m. ET. Three-Hundred-and-Fifty (350) prizes are available to be won: (i) $10 CDN Tim Horton’s gift card. Skill-testing question required. Odds of winning depend on the number of eligible entries. For full rules, including entry, eligibility requirements and complete prize description, review the full Terms and Conditions.

Real-time User Notification and Session Management with SignalR - Part 1

by Steve Syfuhs / March 07, 2013 10:21 PM

As more and more applications and services are becoming always on and accessible from a wide range of devices it’s important that we are able to securely manage sessions for users across all of these systems.

Imagine that you have a web application that a user tends to stay logged into all day. Over time the application produces notifications for the user and those notifications should be shown fairly immediately. In this post I’m going to talk about a very important notification – when the user’s account has logged into another device while still logged into their existing session. If the user is logged into the application on their desktop at work it might be bad that they also just logged into their account from a computer on the other side of the country. Essentially, what we want is a way to notify the user that their account just logged in from another device. Why didn’t I just lead with that?

In the next few posts I’m going to show how we can build a real-time user notification and session management system for a web application.

To accomplish this task I’m going to use the SignalR library:

ASP.NET SignalR is a new library for ASP.NET developers that simplifies the process of adding real-time web functionality to your applications. Real-time web functionality is the ability to have server-side code push content to connected clients instantly as it becomes available.

Conceptually it’s exactly what we want to use – it allows us to notify a client (the user’s first browser session) from the server that another client (another browser or device) has logged in with the same account.

SignalR is based on a Remote Procedure Call (RPC) design pattern allowing messages to flow from the server to a client. The long and the short of it is that whenever a page is loaded in the browser, a chunk of JavaScript is executed that calls back to the server and opens a connection, via WebSockets when supported, falling back to other methods like long polling or funky (but powerful) iframe business.

To understand how this works it’s necessary to get SignalR up and running. First, create a new web project of your choosing in Visual Studio and open the NuGet Package Manager. Search online for the package “Microsoft.AspNet.SignalR” and install it. For the sake of simplicity this will install the entire SignalR library. Down the road you may decide to trim the installed components down to only the requisite pieces.
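If you prefer the Package Manager Console, the equivalent command is:

Install-Package Microsoft.AspNet.SignalR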

Locate the global.asax file in your project and open it. In the Application_Start method add this bit of code:

RouteTable.Routes.MapHubs();
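For context, the whole file ends up looking something like this minimal sketch (the class name and handler signature are the Visual Studio defaults; the extra using lines cover wherever your SignalR version defines the MapHubs extension):

using System;
using System.Web.Routing;
using Microsoft.AspNet.SignalR;

public class Global : System.Web.HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        // Maps the hubs to the default "~/signalr/hubs" route
        // (SignalR 1.x; later versions moved this to OWIN startup).
        RouteTable.Routes.MapHubs();
    }
}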

This will register a hub (something we’ll create in a minute) to the “~/signalr/hubs” route. Next open your MasterPage or View and add the following script references somewhere after a reference to jQuery:

<script type="text/javascript" src="scripts/jquery.signalR-1.0.1.js"></script>
<script type="text/javascript" src="signalr/hubs"></script>

You’ll notice the second script reference is the same as our route that was added earlier. This script is dynamically generated and provides us a proxy for communicating with the hub on the server side.

At this point we haven’t done much. All we’ve done is set up our web application to use SignalR. It doesn’t do anything yet. In order for communication to occur we need something called a Hub.

A hub is the thing that offers us that RPC mechanism. We call into it to send messages. It then sends the messages to the given recipients based on the connections opened by the client-side JavaScript. To create a hub all we need to do is create a new class and inherit from Microsoft.AspNet.SignalR.Hub. I’ve created one called NotificationHub.

public class NotificationHub : Hub
{
    // Nothing to see here yet
}

A hub is conceptually a connector between your browser and your server. When a message is sent from your browser it is received by a hub and the hub sends it off to a given recipient. A hub receives messages through methods defined by you.

Before digging into specifics a quick demo is in order. In our NotificationHub class let’s create a new method:

public void Hello(string message)
{
    // Write the received message to the debug output (System.Diagnostics).
    Debug.WriteLine(message);
}

For now that’s all we have to write server-side for the sake of this demo. It will receive a message and it will write it to the debug stream. Next, go back to your page to write some HTML and JavaScript.

First create a <div> and give it an Id of connected:

<div id="connected"></div>

Then add some JavaScript:

$.connection.hub.start().done(function () {
    $('#connected').text("I'm connected with Id: " + $.connection.hub.id);
});

What this will do is open a proxy connection to the hub(s) and once it’s completed the connection dance, the proxy calls a function and sets the text to the Id of the proxy connection. This Id value is a unique identifier created every time the client connects back to the server.

Now that we have an open connection to our hub we can call our Hello method. To do this we need to get the proxy to our notification hub, which is done through the $.connection object.

var notifier = $.connection.notificationHub;

For each hub we create and map to a route, the connection object has a pointer to its equivalent JavaScript proxy. Mind the camel-casing though. Once we have our proxy we can call our method through the server property. This property maps functions to methods in the hub. So to call our Hello method in the hub we call this JavaScript:

notifier.server.hello('World!');

Let’s make that clickable.

<a id="sayHi" href="#">Say Hello!</a>

$('#sayHi').click(function() { notifier.server.hello('World!'); });

If you click that you should now see “World!” in your Debug window.

That’s all fine and dandy for sending messages to the server, but AJAX already does that. Boring! Let’s go back to our hub and replace the body of that Hello method with the following line of code:

public void Hello(string message)
{
    Clients.All.helloEveryone(message);
}

What this will do is broadcast our message to All connected clients. It will call a function on the client named helloEveryone. For more information on who can receive messages take a look at the Hubs documentation. However, for our clients to receive that message we need to hook in a function for our proxy to call when it receives the broadcast. Back in the HTML and JavaScript add this:

<div id="msg"></div>

notifier.client.helloEveryone = function(message) {
    $('#msg').text(message);
};

We’ve hooked a function into the client object so that when the proxy receives the broadcast it calls our implementation. It’s really easy to build out a collection of calls to communicate in both directions with this library. All calls that should be sent to the server go through notifier.server.{yourHubMethod} and all calls from the hub to the clients should be mapped to notifier.client.{eventListener}.

If you open a few browsers and click that link, all browsers should simultaneously receive the message and show “World!”. That’s pretty cool.
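As an aside, Clients.All is only one of the targets available inside a hub. The Hubs documentation linked above covers the full list, but here’s a quick sketch (the HelloVariants method name is mine, not part of the sample):

// Variations on the broadcast: same helloEveryone client-side handler,
// different audiences (SignalR 1.x names).
public void HelloVariants(string message)
{
    Clients.All.helloEveryone(message);                          // every connected client
    Clients.Caller.helloEveryone(message);                       // only the calling connection
    Clients.Others.helloEveryone(message);                       // everyone except the caller
    Clients.Client(Context.ConnectionId).helloEveryone(message); // one specific connection
}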

At this point we have nearly enough information to build out our session management and notification system. In the next post I’ll talk about how we can send messages directly to a specific user, as well as how to send messages from outside the scope of a hub.

Whitepaper: Active Directory from on-premises to the cloud

by Steve Syfuhs / January 14, 2013 04:44 PM

A new whitepaper was released last Friday (Jan 11/2013) that discusses all the various options for dealing with identity in cloud, on-premises, and hybrid environments: Active Directory from on-premises to the cloud.

It takes a look at how Windows Azure Active Directory is making a play for cloud identity, as well as how it works with hybrid/on-premises scenarios.

Here’s the overview:

Identity management, provisioning, role management, and authentication are key services both on-premises and through the (hybrid) cloud. With the Bring Your Own Apps (BYOA) for the cloud and Software as a Service (SaaS) applications, the desire to better collaborate a la Facebook with the “social” enterprise, the need to support and integrate with social networks, which lead to a Bring Your Own Identity (BYOI) trend, identity becomes a service where identity “bridges” in the cloud talk to on-premises directories or the directories themselves move and/or are located in the cloud.

Active Directory (AD) is a Microsoft brand for identity related capabilities. In the on-premises world, Windows Server AD provides a set of identity capabilities and services and is hugely popular (88% of Fortune 1000 and 95% of enterprises use AD). Windows Azure AD is AD reimagined for the cloud, designed to solve for you the new identity and access challenges that come with the shift to a cloud-centric, multi-tenant world.

Windows Azure AD can be truly seen as an Identity Management as a Service (IDMaaS) cloud multi-tenant service. This goes far beyond taking AD and simply running it within a virtual machine (VM) in Windows Azure.

This document is intended for IT professionals, system architects, and developers who are interested in understanding the various options for managing and using identities in their (hybrid) cloud environment based on the AD foundation and how to leverage the related capabilities. AD, AD in Windows Azure and Windows Azure AD are indeed useful for slightly different scenarios. This document is part of a series of documents on the identity and security features of Windows Azure AD/Office 365 (see the links below for the other available documents in the series).

This is a pretty good paper. It documents a lot of the internal details of how all the various services play together in a central location, as well as sheds light on some publicly known, if not publicly documented, methods of user provisioning.

