
// SQL

Adjusting the Home Realm Discovery page in ADFS to support Email Addresses

by Steve Syfuhs / July 12, 2011 04:00 PM

Over on the Geneva forums a question was asked:

Does anyone have an example of how to change the HomeRealmDiscovery Page in ADFSv2 to accept an e-mail address in a text field and based upon that (actually the domain suffix) select the correct Claims/Identity Provider?

It's pretty easy to modify the HomeRealmDiscovery page, so I thought I'd give it a go.

Based on the question, two things need to be known: the email address and the home realm URI.  Then we need to translate the email address to a home realm URI and pass it on to ADFS.

This could be done a couple of ways: by keeping a list of email addresses and their related home realms, or by keeping a list of email domains and their related home realms.  For the sake of this being an example, let's do both.

I've created a simple SQL database with three tables:

[Image: database schema with the EmailAddress, Domain, and HomeRealm tables]

Each entry in the EmailAddress and Domain tables has a pointer to the home realm URI (you can find the schema in the zip file below).

Then I created a new ADFS web project and added a new entity model to it:

[Image: the new entity model added to the ADFS web project]

From there I modified the HomeRealmDiscovery page to do the check:

//------------------------------------------------------------
// Copyright (c) Microsoft Corporation.  All rights reserved.
//------------------------------------------------------------

using System;

using Microsoft.IdentityServer.Web.Configuration;
using Microsoft.IdentityServer.Web.UI;
using AdfsHomeRealm.Data;
using System.Linq;

public partial class HomeRealmDiscovery : Microsoft.IdentityServer.Web.UI.HomeRealmDiscoveryPage
{
    protected void Page_Init(object sender, EventArgs e)
    {
    }

    protected void PassiveSignInButton_Click(object sender, EventArgs e)
    {
        // txtEmail is a TextBox control added to the HomeRealmDiscovery.aspx markup
        string email = txtEmail.Text;

        if (string.IsNullOrWhiteSpace(email))
        {
            SetError("Please enter an email address");
            return;
        }

        try
        {
            SelectHomeRealm(FindHomeRealmByEmail(email));
        }
        catch (ApplicationException)
        {
            SetError("Cannot find home realm based on email address");
        }
    }

    private string FindHomeRealmByEmail(string email)
    {
        using (AdfsHomeRealmDiscoveryEntities en = new AdfsHomeRealmDiscoveryEntities())
        {
            var emailRealms = from e in en.EmailAddresses where e.EmailAddress1.Equals(email) select e;

            if (emailRealms.Any()) // email address exists
                return emailRealms.First().HomeRealm.HomeRealmUri;

            // email address does not exist
            string domain = ParseDomain(email);

            var domainRealms = from d in en.Domains where d.DomainAddress.Equals(domain) select d;

            if (domainRealms.Any()) // domain exists
                return domainRealms.First().HomeRealm.HomeRealmUri;

            // neither email nor domain exist
            throw new ApplicationException();
        }
    }

    private string ParseDomain(string email)
    {
        if (!email.Contains("@"))
            return email;

        return email.Substring(email.IndexOf("@") + 1);
    }

    private void SetError(string message)
    {
        // lblError is a Label control added to the HomeRealmDiscovery.aspx markup
        lblError.Text = message;
    }
}

 

If you compare this to the original code, there were a few changes.  I removed the code that loaded the original home realm drop-down list, as well as the code that chose the home realm based on the drop-down list's selected value.

You can find my code here: http://www.syfuhs.net/AdfsHomeRealm.zip

Claims Transformation and Custom Attribute Stores in Active Directory Federation Services 2

by Steve Syfuhs / September 14, 2010 04:00 PM

Active Directory Federation Services 2 has an amazing amount of power when it comes to claims transformation.  To understand how it works, let's take a look at a set of claims rules and the flow of data from ADFS to the Relying Party:

[Image: claims rules and the flow of data from ADFS to the Relying Party]

We can have multiple rules to transform claims, and each one is applied in sequence according to its Order:

[Image: transform rules applied in order]

In the case above, Transform Rule 2 transformed the claims that Rule 1 requested from the attribute store, which in this case was Active Directory.  This becomes extremely useful because there are times when some of the data you need to pull out of Active Directory isn't in a usable format.  There are a couple of options to fix this:

  • Make the receiving application deal with it
  • Modify it before sending it off
  • Ignore it

Let's take a look at the second option (imagine an entire blog post on just ignoring it…).  ADFS allows us to transform claims before they are sent off in the token by way of the Claims Rule Language.  It follows the pattern: "If a set of conditions is true, issue one or more claims."  As such, it's a big Boolean system.  Syntactically, it's pretty straightforward.

To issue a claim by implicitly passing true:

=> issue(Type = "http://MyAwesomeUri/claims/AwesomeRole", Value = "Awesome Employee");

Because there are no conditions to check, this rule always fires and the claim is always issued.

To check a condition:

c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/role", Value == "SomeRole"]
    => issue(Type = "http://MyAwesomeUri/claims/AwesomeRole", Value = "AwesomeRole");

Breaking down the query: we are checking that a claim created in a previous step has a specific type (in this case, role) and that the claim's value is SomeRole.  Based on that, we append a new claim to the outgoing list with a new type and a new value.

That's pretty useful in its own right, but ADFS can actually go even further by allowing you to pull data out of custom data stores by way of Custom Attribute Stores.  There are four options to choose from when getting data:

  1. Active Directory (default)
  2. LDAP (Any directory that you can query via LDAP)
  3. SQL Server (awesome)
  4. Custom built store via custom .NET assembly

Let’s get some data from a SQL database.  First we need to create the attribute store.  Go to Trust Relationships/Attribute Stores in the ADFS MMC Console (or you could also use PowerShell):

[Image: Trust Relationships > Attribute Stores in the ADFS MMC console]

Then add an attribute store:

[Image: adding an attribute store]

All you need is a connection string to the database in question:

[Image: attribute store configuration with the database connection string]

The next step is to create the query to pull data from the database.  It's surprisingly straightforward.  This is a bit of a contrived example, but let's grab the certificate name and the certificate hash from a database table where the certificate name is equal to the value of the http://MyCertUri/UserCertName claim type:

c:[Type == "http://MyCertUri/UserCertName"]
   => issue(store = "MyAttributeStore",
         types = ("http://test/CertName", "http://test/CertHash"),
         query = "SELECT CertificateName, CertificateHash FROM UserCertificates WHERE CertificateName='{0}'", param = c.Value);

For each column you request in the SQL query, you need a corresponding claim type.  Also, unlike most SQL queries, parameters are passed using a String.Format-style placeholder ({0}) instead of the @MyVariable syntax.

In a nutshell, this is how you deal with claims transformation.  For a more in-depth article on how to do this, check out TechNet: http://technet.microsoft.com/en-us/library/dd807118(WS.10).aspx.

SQL Server 2008 R2 Launch Event - Application Lifecycle Management

by Steve Syfuhs / May 30, 2010 04:00 PM

Unfortunately I will be unable to attend the ALM presentation later this afternoon, but luckily I was able to catch it in Montreal last week.

When I think of ALM, I think of the development lifecycle of an application – whether it be agile or waterfall or whatever floats your boat – that encompasses all parts of the process.  We've had tools over the years that help us manage each section or iteration of the process, but there were some obvious pieces missing.  What about the SQL?  Databases are essential to pretty much all applications that get developed nowadays, yet for a long time we didn't have much in the way of tools to help streamline and manage the process of developing the database pieces.

Enter ALM for SQL Server.  DBAs are now given all the tools and resources developers have had for a while.  It's now easier to manage packaging and deployment of databases, there's better source control of SQL scripts, and there's something really cool: database schema versioning.

I have a story: sometime over the last couple of years, a developer wrote a small application that monitored changes to database schemas through triggers and then synced the changes to SVN.  This was pretty cool.  It allowed us to watch what changed when things went south.  The problem was that it wasn't necessarily reliable, it relied on some internal pieces being added to the database manually, and it made finding changes through SVN tricky.

With ALM, versioning of databases happens before deployment.  Changes are stored in TFS, and it's possible to roll back certain changes fairly easily.  Certain changes. :)

That’s pretty cool.

SQL Server 2008 R2 Launch - PowerPivot

by Steve Syfuhs / May 30, 2010 04:00 PM

We just finished the SQL Server 2008 R2 Launch Keynote.  That’s quite a mouthful.  One of the problems I saw with this release was that not a lot of people knew what went into it.  R2 products are strange in that people just sort of assume they are nothing more than Service Pack releases.  Well, this isn't actually the case.

There were some really cool things shown in the keynote this morning.  PowerPivot being my all-time favorite.  Excel on Analysis Services steroids would be an apt description.  Analyzing huge sets of data within Excel was sort of tricky because you ran into a million row limit.  This was only half the problem though.  Performance was painful if you had that many rows in Excel -- sluggish at best.  PowerPivot helps change this dramatically.  The demo just shown pulled down 100,000,000 rows of data into Excel, filtered down to 19, and then filtered back up to the initial set -- all within a second.  That's pretty cool.

Just don't hit print.

Database Shrinkage

by Steve Syfuhs / March 12, 2010 04:00 PM

Don’t do it!

[Image: shrinkDatabase]

Putting the I Back into Infrastructure

by Steve Syfuhs / February 07, 2010 04:00 PM

Tonight at IT Pro Toronto we did a pre-launch of the Infrastructure 2010 project.  Have you ever been in a position where you just don't have a clear grasp of a concept or design?  It's not fun.  As a result, CIPS Toronto, IT Pro Toronto, and TorontoSQL banded together to create a massive event to make things a little clearer: to give you a better understanding of how corporate networks work, to explain why some decisions are made, and, perhaps, why in retrospect some of them are bad decisions.

Infrastructure 2010 is about teaching you everything there is to know about a state-of-the-art, best-practices-compliant corporate intranet.  We will build, from the ground up, an entire infrastructure, and we will teach you how to build one yourself.

Sessions are minimum 300 level, and content-rich.  Therefore:

[Image: i2010Proud]

Well, maybe.  (P.S. if you work for Microsoft, pretend you didn’t see that picture)

Six Simple Development Rules (for Writing Secure Code)

by Steve Syfuhs / December 15, 2009 04:00 PM

I wish I could say that I came up with this list, but alas, I did not.  I came across it this morning on the Assessment, Consulting & Engineering Team blog from Microsoft.  They are a core part of the Microsoft internal IT security group, and they are around to provide resources for internal and external software developers.  These six rules are key to developing secure applications, and they should be followed at all times.

Personally, I try to follow the rules closely, and I am working hard at creating an SDL for our department.  Aside from Rule 1, you could consider each rule a sort of checklist for when you sign off on, or preferably design, the application for production.

--

Rule #1: Implement a Secure Development Lifecycle in your organization.

This includes the following activities:

  • Train your developers and testers in secure development and secure testing, respectively
  • Establish a team of security experts to be the ‘go to’ group when people want advice on security
  • Implement Threat Modeling in your development process. If you do nothing else, do this!
  • Implement Automatic and Manual Code Reviews for your in-house written applications
  • Ensure you have ‘Right to Inspect’ clauses in your contracts with vendors and third parties that are producing software for you
  • Have your testers include basic security testing in their standard testing practices
  • Do deployment reviews and hardening exercises for your systems
  • Have an emergency response process in place and keep it updated

If you want some good information on doing this, email me and check out this link:
http://www.microsoft.com/sdl

Rule #2: Implement a centralized input validation system (CIVS) in your organization.

These CIVS systems are designed to perform common input validation on commonly accepted input values. Let's face it: as much as we'd all like to believe that we are the only ones doing things like registering users or recording data from visitors, it's actually all the same thing.

When you receive data, it will very likely be an integer, decimal, phone number, date, URI, email address, post code, or string. The values and formats of the first seven of those are very predictable. Strings are a bit harder to deal with, but they can all be validated against known good values. Always remember to check for the three F's: Form, Fit, and Function.

  • Form: Is the data the right type of data that you expect? If you are expecting a quantity, is the data an integer? Always cast data to a strong type as soon as possible to help determine this.
  • Fit: Is the data the right length/size? Will the data fit in the buffer you allocated (including any trailing nulls, if applicable)? If you are expecting an Int32 or a Short, make sure you didn't get an Int64 value. Did you get a positive integer for a quantity rather than a negative integer?
  • Function: Can the data you received be used for the purpose it was intended? If you receive a date, is the date value in the right range? If you received an integer to be used as an index, is it in the right range? If you received an int as a value for an Enum, does it match a legitimate Enum value?

In the vast majority of cases, string data being sent to an application will be 0-9, a-z, A-Z. In some cases, such as names or currencies, you may want to allow -, $, % and '. You will almost never need <>, {}, or [] unless you have a special use case such as http://www.regexlib.com, in which case see Rule #3.

You want to build this as a centralized library so that all of the applications in your organization can use it. This means if you have to fix your phone number validator, everyone gets the fix. By the same token, you have to inspect and scrutinize the crap out of these CIVS to ensure that they are not prone to errors and vulnerabilities, because everyone will be relying on them. But applying heavy scrutiny to a centralized library is far better than having to apply that same scrutiny to every single input value of every single application.  You can be fairly confident that as long as applications are using the CIVS, they are doing the right thing.

Fortunately, implementing a CIVS is easy if you start with the Enterprise Library Validation Application Block, which is a free download from Microsoft that you can use in all of your applications.
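
To make that a bit more concrete, here is a minimal sketch of what a tiny hand-rolled validator in a shared library might look like. The class and method names are made up for illustration; in practice the Validation Application Block gives you a much richer, attribute-driven version of the same idea.

using System;
using System.Globalization;
using System.Text.RegularExpressions;

// Hypothetical centralized validation library shared by every application in the organization.
public static class InputValidator
{
    // Form: cast the raw value to a strong type as early as possible.
    public static bool TryParseQuantity(string input, out int quantity)
    {
        // Fit/Function: a quantity must be a non-negative Int32 with no signs or separators.
        return int.TryParse(input, NumberStyles.None, CultureInfo.InvariantCulture, out quantity);
    }

    // Whitelist validation for a Canadian postal code (e.g. "K1A 0B1").
    private static readonly Regex PostalCode =
        new Regex(@"^[A-Za-z]\d[A-Za-z] ?\d[A-Za-z]\d$", RegexOptions.Compiled);

    public static bool IsValidPostalCode(string input)
    {
        return !string.IsNullOrEmpty(input) && PostalCode.IsMatch(input.Trim());
    }

    // Function: a date used for scheduling should fall within a sensible range.
    public static bool TryParseBookingDate(string input, out DateTime date)
    {
        return DateTime.TryParse(input, CultureInfo.InvariantCulture, DateTimeStyles.None, out date)
            && date >= DateTime.Today
            && date <= DateTime.Today.AddYears(5);
    }
}

The point is less about these particular checks and more about having exactly one place to fix them when they need fixing.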

Rule #3: Implement input/output encoding for all externally supplied values.

Due to the prevalence of cross-site scripting vulnerabilities, you need to encode any values that came from an outside source and that you may display back to the browser (even embedded browsers in thick-client applications). The encoding essentially takes potentially dangerous characters like < or > and converts them into their HTML, HTTP, or URL equivalents.

For example, if you were to HTML encode <script>alert('XSS Bug')</script>, it would look like &lt;script&gt;alert('XSS Bug')&lt;/script&gt;.  A lot of this functionality is built into the .NET Framework. For example, the code to do the above looks like:

Server.HtmlEncode("<script>alert('XSS Bug')</script>");

However, it is important to know that Server.HtmlEncode only encodes about four of the nasty characters you might encounter. It's better to use a more 'industrial strength' library like the Anti-Cross Site Scripting (AntiXSS) library, another free download from Microsoft. This library does a lot more encoding and will do HTTP and URI encoding based on a white list. The above encoding would look like this in AntiXSS:

using Microsoft.Security.Application;
AntiXss.HtmlEncode("<script>alert('XSS Bug')</script>");

You can also run a neat test system that a friend of mine developed to test your application for XSS vulnerabilities in its outputs. It is aptly named XSS Attack Tool.

Rule #4: Abandon Dynamic SQL

There is no reason you should be using dynamic SQL in your applications anymore. If your database does not support parameterized stored procedures in one form or another, get a new database.

Dynamic SQL is when developers build a SQL query in code and then submit it to the database to be executed as a string, rather than calling a stored procedure and feeding it the values. It usually looks something like this:

(for you VB fans)

dim sql
sql = "Select ArticleTitle, ArticleBody FROM Articles WHERE ArticleID = "
sql = sql & request.querystring("ArticleID")
set results = objConn.execute(sql)

In fact, this article from 2001 is chock full of what NOT to do, including dynamic SQL in a stored procedure.

Here is an example of a stored procedure that is vulnerable to SQL Injection:

Create Procedure GenericTableSelect @TableName VarChar(100)
AS
Declare @SQL VarChar(1000)
SELECT @SQL = 'SELECT * FROM '
SELECT @SQL = @SQL + @TableName
Exec (@SQL)
GO

See this article for a look at using Parameterized Stored Procedures.
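
For contrast, here is a minimal sketch of the same kind of article lookup done safely from C# with a parameterized command. The connection string name is made up for illustration, and the table and columns are borrowed from the VB example above; the same pattern applies when calling a stored procedure by setting CommandType to StoredProcedure.

using System.Configuration;
using System.Data;
using System.Data.SqlClient;

public static class ArticleRepository
{
    public static DataTable GetArticle(int articleId)
    {
        // "ArticlesDb" is a hypothetical connection string name defined in web.config.
        string connectionString =
            ConfigurationManager.ConnectionStrings["ArticlesDb"].ConnectionString;

        using (SqlConnection connection = new SqlConnection(connectionString))
        using (SqlCommand command = new SqlCommand(
            "SELECT ArticleTitle, ArticleBody FROM Articles WHERE ArticleID = @ArticleID",
            connection))
        {
            // The value travels as a typed parameter; it is never concatenated into the SQL text.
            command.Parameters.Add("@ArticleID", SqlDbType.Int).Value = articleId;

            DataTable results = new DataTable();
            using (SqlDataAdapter adapter = new SqlDataAdapter(command))
            {
                adapter.Fill(results); // Fill opens and closes the connection for us.
            }

            return results;
        }
    }
}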

Rule #5: Properly architect your applications for scalability and failover

Applications can be brought down by a simple crash. Or a not so simple one. Architecting your applications so that they can scale easily, vertically or horizontally, and so that they are fault tolerant will give you a lot of breathing room.

Keep in mind that fault tolerance is not just a way of saying that applications restart when they crash. It means that you have a proper exception handling hierarchy built into the application.  It also means that the application needs to be able to handle situations that result in server failover. This is usually where session management comes in.
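
As one small piece of that hierarchy, an ASP.NET application should at least have a last-chance handler in Global.asax so unhandled exceptions get logged and the user sees a friendly error page instead of a stack trace. A minimal sketch follows; the logging call and the error page name are placeholders.

// Global.asax.cs
using System;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_Error(object sender, EventArgs e)
    {
        // Last-chance handler: anything not caught further down the stack ends up here.
        Exception ex = Server.GetLastError();

        // Log it somewhere durable (event log, database, etc.); placeholder only.
        // Logger.Error(ex);

        // Clear the error and show a friendly page instead of the yellow screen of death.
        Server.ClearError();
        Response.Redirect("~/Error.aspx");
    }
}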

The best fault tolerant session management solution is to store session state in SQL Server.  This also helps avoid the server affinity issues some applications have.

You will also want a good load balancer up front. This will help distribute load evenly so that, hopefully, you won't run into the failover scenario often.

And by all means do NOT do what they did on the site in the beginning of this article. Set up your routers and switches to properly shunt bad traffic or DOS traffic. Then let your applications handle the input filtering.

Rule #6: Always check the configuration of your production servers

Configuration mistakes are all too common. When you consider that proper server hardening and a standard out-of-the-box deployment are probably a good, secure default, there are a lot of people out there changing stuff that shouldn't be changed. You may remember when Bing went down for about 45 minutes. That was due to configuration issues.

To help address this, we have released the Web Application Configuration Auditor (WACA). This is a free download that you can use on your servers to see if they are configured according to best practice. You can download it at this link.

You should establish a standard SOE for your web servers that is hardened and properly configured. Any variations to that SOE should be scrutinized and go through a very thorough change control process. Test them first before turning them loose on the production environment…please.

So, with all that being said, you will be well on your way to stopping the majority of attacks you are likely to encounter on your web applications. Most of the attacks that occur are SQL injection, XSS, and improper configuration issues. The above rules will knock out most of them. In fact, input validation is your best friend. Regardless of inspecting firewalls and the like, the application is the only link in the chain that can make an intelligent and informed decision on whether the incoming data is actually legit. So put your effort where it will do you the most good.

October 15th Evening SQL Server DBA Event: Disaster Recovery - Edwin Sarmiento, MVP for SQL Server

by Steve Syfuhs / October 13, 2009 04:00 PM

OttawaSQL.net

October 15th Evening SQL Server DBA Event: Disaster Recovery – Edwin Sarmiento, MVP for SQL Server

Speaker: Edwin M. Sarmiento, MVP for SQL Server

Date: Thursday, October 15th, 2009

Time: 6:00 PM to 8:30 PM

Venue: Microsoft Ottawa Office

RSVP: http://www.clicktoattend.com/?id=142063

Session 1 (6:00 PM to 7:10 PM):  Understanding and communicating business-orientated disaster recovery  concepts and objectives

So you have a database maintenance plan that does a backup of your databases and you’re pretty sure that it works fine. But is that really enough? Are you sure that you will be able to meet your service level agreements if and when disaster strikes? This session will explain the need for understanding and communicating business-orientated disaster recovery concepts and objectives to the business stakeholders. This will include defining your RPO and RTO and how it affects your disaster recovery plan.

Session 2 (7:20 to 8:30 PM):  Disaster Recovery for the Paranoid DBA

In the first session, much will have been said about disaster recovery in general. In this session, we will look at bringing the concepts down to SQL Server. This session will focus on dealing with a recovery situation for a SQL Server 2005/2008 database, an instance, or an entire server. Topics covered will be backup schemes, partial backups and piecemeal restores, page-level recovery, and a thorough understanding of how to troubleshoot a "Suspect" database.

Edwin M. Sarmiento

Speaker Bio:

Edwin M. Sarmiento (MVP for SQL Server) works as a Senior SQL Server DBA/Systems Engineer for The Pythian Group in Ottawa, Canada. He is very passionate about technology but has interests in music, professional and organizational development, leadership, and management matters when not working with databases. He lives up to his primary mission statement: "To help people grow and develop their full potential as God has planned for them."

Refreshments:

Pizza and pop will be provided.

Note: No one will be admitted by building security after 5:55 PM, and the event will start promptly at 6:00 PM.

OttawaSQL.net is a community group of Ottawa area developers and IT professionals.  We share an interest in Microsoft’s data technologies especially:  SQL Server, SharePoint, PerformancePoint, Workflow Foundations, LINQ, ADO.NET and Entity Framework.

ASP.NET Application Deployment Best Practices - Part 1

by Steve Syfuhs / September 16, 2009 04:00 PM

Over the last few months I have been collecting best practices for deploying ASP.NET applications to production.  The intent was to create a document that described the steps needed to deploy consistent, reliable, secure applications that are easily maintainable for administrators.  The result was an 11-page document.  I would like to take a couple of excerpts from it and essentially list what I believe to be the key requirements for production applications.

The key is consistency.

  • Generate new encryption keys

The benefit of doing this is that internal hashing and encryption schemes use different keys between applications. If an application is compromised, any private keys that are recovered will have no effect on other applications. This is most important in applications that use Forms Authentication, such as the members section. This Key Generator app uses the built-in .NET key generation code in RNGCryptoServiceProvider.
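
For illustration, here is a minimal sketch of what that key generation looks like: fill a buffer from RNGCryptoServiceProvider and format it as hex so it can be pasted into a machineKey element. The key lengths shown assume an HMACSHA1 validationKey and an AES decryptionKey; adjust them for your configuration.

using System;
using System.Security.Cryptography;
using System.Text;

public static class KeyGenerator
{
    // Fills a buffer with cryptographically random bytes and returns it as a hex string.
    public static string CreateKey(int lengthInBytes)
    {
        byte[] buffer = new byte[lengthInBytes];
        RNGCryptoServiceProvider rng = new RNGCryptoServiceProvider();
        rng.GetBytes(buffer);

        StringBuilder hex = new StringBuilder(lengthInBytes * 2);
        foreach (byte b in buffer)
            hex.AppendFormat("{0:X2}", b);

        return hex.ToString();
    }

    public static void Main()
    {
        // 64 bytes for an HMACSHA1 validationKey, 32 bytes for an AES decryptionKey.
        Console.WriteLine("validationKey: " + CreateKey(64));
        Console.WriteLine("decryptionKey: " + CreateKey(32));
    }
}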

  • Version and give Assemblies Strong Names

Use AssemblyInfo.cs file:

[assembly: AssemblyTitle("NameSpace.Based.AssemblyTitle")]
[assembly: AssemblyDescription("This is My Awesome Assembly…")]
[assembly: AssemblyConfiguration("")]
[assembly: AssemblyCompany("My Awesome Company")]
[assembly: AssemblyProduct("ApplicationName")]
[assembly: AssemblyCopyright("Copyright © 2009")]
[assembly: AssemblyTrademark("TM Application Name")]
[assembly: AssemblyCulture("en-CA")]

Strong names and versioning are the backbone of .NET assemblies. They help distinguish between different versions of assemblies and provide copyright attributes for code we have written internally. This is especially helpful if we decide to sell any of our applications.

  • Deploy Shared Assemblies to the GAC
    • Assemblies such as common controls
    • gacutil.exe -i "g:\dev\published\myApp\bin\myAssembly.dll"

If any assemblies are created that get used across multiple applications, they should be deployed to the GAC (Global Assembly Cache). Examples of this could be data access layers or common controls such as the Telerik controls. The benefit of doing this is that we will not have multiple copies of the same DLL in different applications. A requirement of doing this is that the assembly must be signed and have a strong name.

  • Pre-Compile Site: [In Visual Studio] Build > Publish Web Site

Any application that is in production should be running in a compiled state. What this means is that any application should not have any code-behind files or App_Code class files on the servers. This will limit damage if our servers are compromised, as the attacker will not be able to modify the source.

  • Encrypt SQL Connections and Connection Strings

Encrypt SQL Connection Strings

Aspnet_regiis.exe -pe connectionStrings -site myWebSite -app /myWebApp

Encrypt SQL Connections

Add ‘Encrypt=True’ to all connection strings before encrypting

SQL connection strings contain sensitive data such as username/password combinations for access to database servers. These connection strings are stored in web.config files, which are stored in plain text on the server. If malicious users access these files, they will have credentials to access the servers. Encrypting the strings prevents the config section from being read in plain text.

However, encrypting the connection string is only half of the issue. SQL transactions are transmitted across the network in plain text, so sensitive data could be acquired if a network sniffer were running on a compromised web server. SQL connections should also be encrypted using SSL certificates.
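
One nice property of protected configuration is that the application code doesn't change at all; reading the connection string through ConfigurationManager decrypts the section transparently. A minimal sketch (the connection string name here is made up for illustration):

using System.Configuration;
using System.Data.SqlClient;

public static class Database
{
    public static SqlConnection OpenConnection()
    {
        // ASP.NET decrypts the protected <connectionStrings> section automatically,
        // so this code is identical whether or not the section is encrypted on disk.
        string connectionString =
            ConfigurationManager.ConnectionStrings["MyAppDb"].ConnectionString;

        // The string itself should include Encrypt=True so SQL traffic is protected on the wire.
        SqlConnection connection = new SqlConnection(connectionString);
        connection.Open();
        return connection;
    }
}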

  • Use key file generated by Strong Name Tool:

C:\Program Files\Microsoft SDKs\Windows\v7.0A\bin\sn.exe

sn.exe -k g:\dev\path\to\app\myAppKey.snk

Signing an assembly provides validation that the code is ours. It will also allow for GAC deployment by giving the assembly a signature. The key file should be unique to each application, and should be kept in a secure location.

  • Set retail=”true” in machine.config

<configuration>
  <system.web>
    <deployment retail="true"/>
  </system.web>
</configuration>

In a production environment, applications should not show exception errors or trace messages. Setting the retail property to true is a simple way to turn off debugging and tracing and to force the application to use friendly error pages.

In part 2, I continue with more best practices for deployment to a production environment.

Single Sign-On

by Steve Syfuhs / June 13, 2009 04:00 PM

Is it just me, or is Microsoft the only vendor out there that gives you SSO in all their products, free?  Novell requires you buy their add-on product.  Oracle has nothing relevant.  Never gonna happen on any Linux distro out of the box.  Too many variables.

The integration alone is reason enough to use Microsoft products.  Is it just me, or do people choose to go anti-Microsoft out of spite?

Just a thought.

// About

Steve is a renaissance kid when it comes to technology. He spends his time in the security stack.