

The Case of the Failed Restore

by Steve Syfuhs / November 13, 2012 03:51 PM

As applications grow more complex, their backup and restore processes tend to grow more complex along with them.

Backup can often be broken down into a few simple steps:

  1. Get data from various sources
    • Database
    • Web.config
    • DPAPI
    • Certificate Stores
    • File system
    • etc
  2. Persist the data to disk in a specific format
  3. Validate that the persisted data isn’t corrupt

A lot of times this is a manual process, but in the best-case scenario it’s all automated by some tool. In my particular case there was a tool that did all of this for me. Woohoo! Of course, there was a catch. The format was custom, so a backup of the database didn’t just call SQL Backup; it essentially did a SELECT * FROM {all tables} and serialized that data to disk.

The process wasn’t particularly fancy, but it was designed so that the tool had full control over the data before it was ever restored. There’s nothing particularly wrong with such a design, as it solved various problems that creep in when doing restores. The biggest problem it solved was handling breaking changes to the application’s schema during an upgrade, as upgrades consisted of:

  1. Backup
  2. Uninstall old version
  3. Install new version
  4. Restore backed up data

Since the restore tool knew about the breaking changes to the schema it was able to do something about it before the data ever went into the database. Better to mangle the data in C# than mangle the data in SQL. My inner DBA twitches a little whenever I say that.

Restoring data is conceptually a simple process:

  1. Deserialize data from specific format on disk
  2. Mangle as necessary to fit new schema
  3. Foreach (record in data) INSERT record
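In code, those three steps amount to something like the following sketch (BackupFile, SchemaMapper, and database are illustrative names, not the actual tool’s API):

```csharp
// Simplified sketch of the restore pipeline; names are illustrative.
IEnumerable<Record> records = BackupFile.Deserialize(@"C:\backup.dat"); // 1. deserialize

foreach (Record record in records)
{
    Record mangled = SchemaMapper.MapToCurrentSchema(record); // 2. mangle to fit the new schema
    database.Insert(mangled);                                 // 3. INSERT record
}
```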

In theory the goal of the restore tool should be to make the application be in the exact same state as it was when it was originally backed up. In most cases this means having the database be exactly the same row for row, column for column. SQL Restore does a wonderful job of this. It doesn’t really do much processing of the backup data -- it simply overwrites the database file. You can’t get any more exact than that.

But alas, this tool didn’t use SQL Backup or SQL Restore and there was a problem – the tool was failing on restoring the database.

Putting on my debugger hat I stared at the various moving parts to see what could have caused it to fail.

The file wasn’t corrupt and the data was well formed. Hmm.

Log files! Aha, let’s check the log files. There was an error! ‘There was a violation of primary key constraint (column)…’ Hmm.

Glancing over the Disk Usage by Top Tables report in SQL Management Studio suggested that all or most of the data was getting inserted into the database, based on what I knew of the data before it was backed up. Hmm.

The error was pretty straightforward – a record was being inserted with a primary key value that already existed in that table. Checking the backup file showed that no primary keys were actually duplicated. Hmm.

Thinking back to how the tool actually did a restore, I went through the basic steps in my head of where a duplicate primary key could be created. Deserialization succeeded, since the process made it to the data-mangling bit. The log files showed that the mangling succeeded, because it dumped all the values and there were no duplicates. Inserting the data mostly succeeded, but the transaction failed. Hmm.

How did the insert process work? First it truncated all data in all tables, since it was going to replace all the data. Then it disabled all key constraints so it could do a bulk insert table by table. Then it enabled identity insert so the identity values were exactly the same as before the backup. It then looped through all the data and inserted all the records. It then disabled identity insert and enabled the key constraints. Finally it committed the transaction.
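In T-SQL terms, that sequence looked roughly like this for each table (a sketch of the approach, not the tool’s actual code; dbo.AuditLog and its columns are illustrative):

```sql
BEGIN TRANSACTION;

-- 1. Wipe the existing data (DELETE rather than TRUNCATE here, since
--    tables referenced by foreign keys can't be truncated)
DELETE FROM dbo.AuditLog;

-- 2. Disable key constraints so tables can load in any order
ALTER TABLE dbo.AuditLog NOCHECK CONSTRAINT ALL;

-- 3. Keep the original identity values from the backup
SET IDENTITY_INSERT dbo.AuditLog ON;

INSERT INTO dbo.AuditLog (AuditID, Message)
VALUES (12344, 'User [MyTestAccount] successfully logged in.');
-- ...one INSERT per backed-up record...

-- 4. Re-enable everything and commit
SET IDENTITY_INSERT dbo.AuditLog OFF;
ALTER TABLE dbo.AuditLog WITH CHECK CHECK CONSTRAINT ALL;

COMMIT TRANSACTION;
```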

It failed before it could enable the constraints, so it failed on the actual insert. Actually, we already knew this because of the log file, but it’s always helpful to see the full picture. Except things weren’t making any sense. The data being inserted was valid. Or was it? The table with the primary key violation was the audit table. The last record was from two minutes ago, but the one before it was from three months ago. The last record ID was 12345, and the one before it was 12344. Checking the data in the backup file showed that there were at least twice as many records, so it failed halfway through the restore of that table.

The last audit record was: User [MyTestAccount] successfully logged in.

Ah dammit. That particular account was used by an application on my phone, and it checks in every few minutes.

While the restore was happening the application in question was still accessible, so the application on my phone did exactly what it was supposed to do.

Moral of the story: when doing a restore, make sure nobody can inadvertently modify data before you finish said restore.

SQL Server does this by making it impossible to write to the database while it’s being restored. If you don’t have the luxury of using SQL Restore, be sure to make it impossible to write to the database, either by making the application inaccessible or by building a read-only mode into your application.
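With SQL Server, for example, you can lock everyone else out for the duration of a manual restore with a couple of statements (MyAppDb is a hypothetical database name):

```sql
-- Kick out all other connections and take exclusive access
ALTER DATABASE MyAppDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

-- ...perform the restore here...

-- Open the database back up once the restore is verified
ALTER DATABASE MyAppDb SET MULTI_USER;
```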

AntiXss vs HttpUtility - So What?

by Steve Syfuhs / May 25, 2010 04:00 PM

Earlier today, Cory Fowler suggested I write up a post discussing the differences between the AntiXss library and the methods found in HttpUtility, and how it helps defend from cross site scripting (xss).  As I was thinking about what to write, it occurred to me that I really had no idea how it did what it did, and why it differed from HttpUtility.  <side-track>I’m kinda wondering how many other people out there run into the same thing?  We are told to use some technology because it does xyz better than abc, but when it comes right down to it, we aren’t quite sure of the internals.  Just a thought for later I suppose. </side-track>

A Quick Refresher

To quickly summarize what xss is: if you have a textbox on your website that someone can enter text into, and then on another page you display that same text, a user could maliciously add <script> tags to do anything they want with JavaScript.  This usually results in redirecting to another website that shows advertisements or tries to install malware.

The way to stop this is to not trust any input, and encode any character that could be part of a tag to an HtmlEncode’d entity.

HttpUtility does this though, right?

The HttpUtility class definitely does do this.  However, it is relatively limited in how it encodes possibly malicious text.  It works by encoding specific characters, like the angle brackets < and >, to &lt; and &gt;.  This can get tricky because a fixed black-list of characters could theoretically be bypassed (somehow – speculative).

Enter AntiXss

The AntiXss library works in essentially the opposite manner.  It has a white-list of allowed characters and encodes everything else.  The allowed characters are the usual a-z, 0-9, etc.
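Side by side, the two approaches look like this (a small sketch; both calls return encoded strings, but AntiXss encodes far more characters):

```csharp
using System.Web;
using Microsoft.Security.Application;

string input = "<script>alert('xss')</script>";

// HttpUtility: black-list style; encodes a handful of known-bad characters
string encodedByHttpUtility = HttpUtility.HtmlEncode(input);

// AntiXss: white-list style; encodes everything outside a-z, A-Z, 0-9, etc.
string encodedByAntiXss = AntiXss.HtmlEncode(input);
```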

Further Reading

I’m not really doing you, dear reader, any favours by reiterating what dozens of people have said before me (and probably said better), so here are a couple of links that contain loads of information on actually using the AntiXss library and protecting your website from cross site scripting:

ViewStateUserKey, ValidateAntiForgeryToken, and the Security Development Lifecycle

by Steve Syfuhs / April 08, 2010 04:00 PM

Last week Microsoft published the 5th revision to the SDL.  You can get it here: http://www.microsoft.com/security/sdl/default.aspx.

Of note, there are additions for .NET -- specifically ASP.NET and the MVC Framework.  Two key things I noticed initially were the addition of System.Web.UI.Page.ViewStateUserKey, and ValidateAntiForgeryToken Attribute in MVC.

Both have existed for a while, but they are now added to requirements for final testing.

ViewStateUserKey is a page-specific identifier for a user.  Sort of a viewstate session.  It’s used to prevent forging of form data from other pages, or in fancy terms, it prevents Cross-site Request Forgery attacks.

Imagine a web form that has a couple fields on it – sensitive fields, say money transfer fields: account to, amount, transaction date, etc.  You need to log in, fill in the details, and click submit.  That submit POST’s the data back to the server, and the server processes it.  The only validation that goes on is checking that the viewstate hasn’t been tampered with.

Okay, so now consider that you are still logged in to that site, and someone sends you a link to a funny picture of a cat.  Yay, kittehs!  Anyway, on that page is a simple set of hidden form tags with malicious data in it.  Something like the attacker’s account number, and an obscene number for the cash transfer.  On page load, JavaScript POST’s that form data to the transfer page, and since you are already logged in, the server accepts it.  Sneaky.

The reason this worked is that the viewstate was never modified.  It could be the same viewstate across multiple sessions.  Therefore, the way you fix this is to add a session identifier to the viewstate through the ViewStateUserKey.  Be forewarned: you need to do this in Page_Init, otherwise it’ll throw an exception.  The easiest way to accomplish this is:

protected void Page_Init(object sender, EventArgs e) 
{ 
	// Tie the viewstate to the current session so cross-session viewstate is rejected
	ViewStateUserKey = Session.SessionID; 
}

Oddly simple.  I wonder why this isn’t the default in newer versions of ASP.NET?

Next up is the ValidateAntiForgeryToken attribute.

In MVC, you add this attribute to all POST action methods.  It requires that every POST’ed form include a token associated with the request.  Each token is session specific, so if it’s an old or other-session token, the POST will fail.  Given that, you need to add the token to the page, which you do with the Html.AntiForgeryToken() helper inside the form.
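In practice that’s one attribute on the action; a sketch of what it looks like (TransferModel and the action name are illustrative):

```csharp
// Controller: reject any POST that arrives without a valid anti-forgery token
[HttpPost]
[ValidateAntiForgeryToken]
public ActionResult Transfer(TransferModel model)
{
    // ...process the transfer...
    return RedirectToAction("Index");
}
```

In the view, the matching token is emitted inside the form with the Html.AntiForgeryToken() helper, so every rendered form carries a fresh session-bound token.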

It prevents the same type of attack as the ViewStateUserKey, albeit in a much simpler fashion.

Putting the I Back into Infrastructure

by Steve Syfuhs / February 07, 2010 04:00 PM

Tonight at IT Pro Toronto we did a pre-launch of the Infrastructure 2010 project.  Have you ever been in a position where you just don’t have a clear grasp of a concept or design?  It’s not fun.  As a result, CIPS Toronto, IT Pro Toronto, and TorontoSQL banded together to create a massive event to help make things a little clearer: to give you a better understanding of how corporate networks work, and perhaps to explain why some decisions are made, and why, in retrospect, some are bad decisions.

Infrastructure 2010 is about teaching you everything there is to know about a state-of-the-art, best-practices-compliant corporate intranet.  We will build, and teach you how to build, an entire infrastructure from the ground up.

Sessions are minimum 300 level, and content-rich.  Therefore:

[Image: i2010Proud]

Well, maybe.  (P.S. if you work for Microsoft, pretend you didn’t see that picture)

Six Simple Development Rules (for Writing Secure Code)

by Steve Syfuhs / December 15, 2009 04:00 PM

I wish I could say that I came up with this list, but alas I did not.  I came across it this morning on the Assessment, Consulting & Engineering Team blog from Microsoft.  They are a core part of Microsoft’s internal IT security group, and are around to provide resources for internal and external software developers.  These six rules are key to developing secure applications, and they should be followed at all times.

Personally, I try to follow the rules closely, and am working hard at creating an SDL for our department.  Aside from Rule 1, you could consider each rule a sort of checklist item for when you sign off on, or preferably design, an application for production.

--

Rule #1: Implement a Secure Development Lifecycle in your organization.

This includes the following activities:

  • Train your developers, and testers in secure development and secure testing respectively
  • Establish a team of security experts to be the ‘go to’ group when people want advice on security
  • Implement Threat Modeling in your development process. If you do nothing else, do this!
  • Implement Automatic and Manual Code Reviews for your in-house written applications
  • Ensure you have ‘Right to Inspect’ clauses in your contracts with vendors and third parties that are producing software for you
  • Have your testers include basic security testing in their standard testing practices
  • Do deployment reviews and hardening exercises for your systems
  • Have an emergency response process in place and keep it updated

If you want some good information on doing this, email me and check out this link:
http://www.microsoft.com/sdl

Rule #2: Implement a centralized input validation system (CIVS) in your organization.

These CIVS systems are designed to perform common input validation on commonly accepted input values. Let’s face it: as much as we’d all like to believe that we are the only ones doing things like registering users or recording data from visitors, it’s actually all the same thing.

When you receive data it will very likely be an integer, decimal, phone number, date, URI, email address, post code, or string. The values and formats of the first seven of those are very predictable. Strings are a bit harder to deal with, but they can still be validated against known good values. Always remember to check for the three F’s: Form, Fit, and Function.

  • Form: Is the data the right type of data that you expect? If you are expecting a quantity, is the data an integer? Always cast data to a strong type as soon as possible to help determine this.
  • Fit: Is the data the right length/size? Will the data fit in the buffer you allocated (including any trailing nulls, if applicable)? If you are expecting an Int32 or a Short, make sure you didn’t get an Int64 value. Did you get a positive integer for a quantity rather than a negative integer?
  • Function: Can the data you received be used for the purpose it was intended? If you receive a date, is the date value in the right range? If you received an integer to be used as an index, is it in the right range? If you received an int as a value for an Enum, does it match a legitimate Enum value?
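A quick sketch of the three F’s applied to a quantity field (illustrative code; the range limits and stockOnHand value are assumptions for the example):

```csharp
int quantity;

// Form: is it the right type at all? Cast to a strong type as early as possible.
if (!Int32.TryParse(Request.QueryString["quantity"], out quantity))
    throw new ArgumentException("Quantity must be an integer.");

// Fit: is it the right size for what we allocated?
if (quantity < 1 || quantity > 1000)
    throw new ArgumentOutOfRangeException("quantity");

// Function: can it be used for its intended purpose?
if (quantity > stockOnHand)
    throw new InvalidOperationException("Not enough stock.");
```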

In a vast majority of cases, string data being sent to an application will be 0-9, a-z, A-Z. In some cases such as names or currencies you may want to allow –, $, % and ‘. You will almost never need characters like <> {} or [] unless you have a special use case, such as http://www.regexlib.com, in which case see Rule #3.

You want to build this as a centralized library so that all of the applications in your organization can use it. This means if you have to fix your phone number validator, everyone gets the fix. By the same token, you have to inspect and scrutinize the crap out of your CIVS to ensure it is not prone to errors and vulnerabilities, because everyone will be relying on it. But applying heavy scrutiny to a centralized library is far better than applying that same scrutiny to every single input value of every single application.  You can be fairly confident that as long as applications are using the CIVS, they are doing the right thing.

Fortunately, implementing a CIVS is easy if you start with the Enterprise Library Validation Application Block, a free download from Microsoft that you can use in all of your applications.

Rule #3: Implement input/output encoding for all externally supplied values.

Due to the prevalence of cross site scripting vulnerabilities, you need to encode any values that came from an outside source that you may display back to the browser (even in embedded browsers in thick-client applications). The encoding takes potentially dangerous characters like < or > and converts them into their HTML, HTTP, or URL equivalents.

For example, if you were to HTML encode <script>alert(‘XSS Bug’)</script> it would look like: &lt;script&gt;alert('XSS Bug')&lt;/script&gt;.  A lot of this functionality is built into .NET. For example, the code to do the above looks like:

Server.HtmlEncode("<script>alert('XSS Bug')</script>");

However, it is important to know that Server.HtmlEncode only encodes about four of the nasty characters you might encounter. It’s better to use a more ‘industrial strength’ library like the Anti-Cross Site Scripting (AntiXSS) library, another free download from Microsoft. This library does a lot more encoding, and will do HTTP and URI encoding based on a white-list. The above encoding would look like this with AntiXSS:

using Microsoft.Security.Application;
AntiXss.HtmlEncode("<script>alert('XSS Bug')</script>");

You can also run a neat test system that a friend of mine developed to test your application for XSS vulnerabilities in its outputs. It is aptly named XSS Attack Tool.

Rule #4: Abandon Dynamic SQL

There is no reason you should be using dynamic SQL in your applications anymore. If your database does not support parameterized stored procedures in one form or another, get a new database.

Dynamic SQL is when developers build a SQL query in code, then submit it to the database to be executed as a string, rather than calling a stored procedure and feeding it the values. It usually looks something like this:

(for you VB fans)

' DO NOT do this: user input is concatenated straight into the query
dim sql
sql = "Select ArticleTitle, ArticleBody FROM Articles WHERE ArticleID = "
sql = sql & request.querystring("ArticleID")
set results = objConn.execute(sql)

In fact, this article from 2001 is chock full of what NOT to do. Including dynamic SQL in a stored procedure.

Here is an example of a stored procedure that is vulnerable to SQL Injection:

Create Procedure GenericTableSelect @TableName VarChar(100)
AS
Declare @SQL VarChar(1000)
SELECT @SQL = 'SELECT * FROM '
SELECT @SQL = @SQL + @TableName
Exec (@SQL)
GO

See this article for a look at using Parameterized Stored Procedures.
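The parameterized equivalent of the earlier snippet, in ADO.NET, passes the article ID as typed data instead of string concatenation (a sketch; connectionString is assumed to be defined elsewhere):

```csharp
using System.Data.SqlClient;

using (SqlConnection conn = new SqlConnection(connectionString))
using (SqlCommand cmd = new SqlCommand(
    "SELECT ArticleTitle, ArticleBody FROM Articles WHERE ArticleID = @ArticleID",
    conn))
{
    // The value travels as a typed parameter, never spliced into the SQL text
    cmd.Parameters.AddWithValue("@ArticleID",
        int.Parse(Request.QueryString["ArticleID"]));

    conn.Open();
    using (SqlDataReader reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            // ...render the article...
        }
    }
}
```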

Rule #5: Properly architect your applications for scalability and failover

Applications can be brought down by a simple crash. Or a not so simple one. Architecting your applications so that they can scale easily, vertically or horizontally, and so that they are fault tolerant will give you a lot of breathing room.

Keep in mind that fault tolerance is not just a fancy way of saying that applications restart when they crash. It means that you have a proper exception handling hierarchy built into the application.  It also means that the application needs to be able to handle situations that result in server failover. This is usually where session management comes in.

The best fault tolerant session management solution is to store session state in SQL Server.  This also helps avoid the server affinity issues some applications have.
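Moving session state into SQL Server is a web.config change (after running the aspnet_regsql.exe tool to create the session state database; the server name below is an assumption for the example):

```xml
<system.web>
  <sessionState mode="SQLServer"
                sqlConnectionString="Data Source=DbServer;Integrated Security=SSPI"
                cookieless="false"
                timeout="20" />
</system.web>
```

With this in place, any server in the farm can pick up a user's session after a failover, since the state no longer lives in one server's memory.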

You will also want a good load balancer up front. This will help distribute load evenly so that, hopefully, you won’t run into the failover scenario often.

And by all means do NOT do what they did on the site in the beginning of this article. Set up your routers and switches to properly shunt bad traffic or DOS traffic. Then let your applications handle the input filtering.

Rule #6: Always check the configuration of your production servers

Configuration mistakes are all too common. When you consider that proper server hardening and a standard out-of-the-box deployment give you a reasonably secure default, there are clearly a lot of people out there changing settings that shouldn’t be changed. You may remember when Bing went down for about 45 minutes. That was due to configuration issues.

To help address this, we have released the Web Application Configuration Auditor (WACA). This is a free download that you can use on your servers to see if they are configured according to best practice. You can download it at this link.

You should establish a standard, hardened, properly configured SOE (standard operating environment) for your web servers. Any variation from that SOE should be scrutinised and go through a very thorough change control process. Test changes first before turning them loose on the production environment…please.

So, with all that being said, you will be well on your way to stopping the majority of attacks you are likely to encounter on your web applications. Most of the attacks that occur are SQL injection, XSS, and improper configuration issues, and the above rules will knock out most of them. In fact, input validation is your best friend: regardless of inspecting firewalls and the like, the application is the only link in the chain that can make an intelligent and informed decision on whether the incoming data is actually legitimate. So put your effort where it will do you the most good.

Security, Security, Security is about Policy, Policy, Policy

by Steve Syfuhs / November 19, 2009 04:00 PM

The other day I had the opportunity to take part in an interesting meeting with Microsoft. The discussion was security, and the meeting members were 20 or so IT Pro’s, developers, and managers from various Fortune 500 companies in the GTA. It was not a sales call.

Throughout the day, Microsofties Rob Labbe and Mohammad Akif went into significant detail about the current threat landscape facing all technology vendors and departments. There was one point that was paramount. Security is not all about technology.

Security is about the policies implemented at the human level. Blinky-lighted devices look cool, but in the end they will not likely add value to protecting your network. Herein lies the problem: not too many people realize this -- hence the purpose of the meeting.

Towards the end of the meeting, as we were all letting the presentations sink in, I asked a relatively simple question:

What resources are out there for new/young people entering the security field?

The response was pretty much exactly what I was (unfortunately) expecting: nada.

Security, it seems, is mostly a self-taught topic. Yes, there are some programs at schools out there, but they tend to be academic – naturally. By this I mean that there is no fluidity in the discussion. It’s as if you are studying a snapshot of the IT landscape taken 18 months ago. Most security experts will tell you the landscape changes daily, if not multiple times a day. We therefore need to keep up on the changes in security, and as any teacher will tell you, that’s impossible in an academic setting.

Keeping up to date with security is a manual process. You follow blogs, you subscribe to newsgroups and mailing lists, your company gets hacked by a new form of attack, etc., and in the end you have a reasonable idea of what was out there yesterday. And you know what? That’s just the attack vectors! You need to follow a whole new set of blogs and mailing lists to understand how to mitigate those attacks. That sucks.

Another issue is the ramp up to being able to follow daily updates. Security is tough when starting out. It involves so many different processes at so many different levels of the application interactions that eyes glaze over at the thought of learning the ins and outs of security.

So here we have two core problems with security:

  1. Security changes daily – it’s hard to keep up
  2. It’s scary when you are new at this

Let’s start by addressing the second issue. Security is a scary topic, but let’s break it down into its core components.

  1. Security is about keeping data away from those who shouldn’t see it
  2. Security is about keeping data available for those who need to see it

At its core, security is simple. It starts getting tricky when you jump into the semantics of how to implement the core. So let’s address this too.

A properly working system will do what you intended it to do at a systematic level: calculate numbers, view customer information, launch a missile, etc. This is a fundamental tenet of application development. Security is about understanding the unintended consequences of what a user can do with that system.

These consequences include things like:

  • SQL Injection
  • Cross Site Scripting attacks
  • Cross Site Forgery attacks
  • Buffer overflow attacks
  • Breaking encryption schemes
  • Session hijacking
  • etc.

Once you understand that these types of attacks can exist, everything is just semantics from this point on. These semantics are along the line of figuring out best practices for system designs, and that’s really just a matter of studying.

Security is about understanding that anything is possible. Once you understand attacks can happen, you learn how they can happen. Then you learn how to prevent them from happening. To use a phrase I really hate using, security is about thinking outside the box.

Most developers do the least amount of work possible to build an application. I am terribly guilty of this. In doing so however, there is a very high likelihood that I didn’t consider what else can be done with the same code. Making this consideration is (again, lame phrase) thinking outside the box.

It is in following this consideration that I can develop a secure system.

So… policies?

At the end of the day however, I am a lazy developer.  I will still do as little work as possible to get the system working, and frankly, this is not conducive to creating a secure system.

The only way to really make this work is to implement security policies that force certain considerations to be made.  Each system is different, and each organization is different.  There is no single policy that will cover the scope of all systems for all organizations, but a policy is simple. 

A policy is a rule that must be followed, and in this case, we are talking about a development rule.  This can include requiring certain types of tests while developing, or following a specific development model like the Security Development Lifecycle.  It is with these policies that we can govern the creation of secure systems.

Policies create an organization-level standard.  Standards are the backbone of security.

These standards fall under the category of semantics, mentioned earlier.  Given that, I propose an idea for learning security.

  • Understand the core ideology of security – mentioned above
  • Understand that policies drive security
  • Jump head first into the semantics starting with security models

The downside is that you will never understand everything there is to know about security.  No one will.

Perhaps it’s not that flawed of an idea.

Creating a new Forest and Domain on Server Core

by Steve Syfuhs / October 14, 2009 04:00 PM

Over the weekend, my good friend Mitch Garvis decided it was necessary to rebuild his home network.  Now, most home networks don’t have a $25,000 server at the core.  This one did.  Given that, we decided to do it right.  The network architecture called for virtualization, so we decided to use Hyper-V.  The network called for management, so we decided to install SCCM and Ops Manager.  The network called for simplicity, so we used Active Directory.

However, we decided to up the ante and install all of this on Server Core.  Now, the tricky part is that we needed to install Active Directory.  The reason this became tricky is that there is no documented procedure out there for installing a new forest on Core.  There are lots of very smart people on the internet who have described how to install new domains into existing forests, but not new forests.  So we got to work.

After running dcpromo a few times we realized we couldn’t create the forest by throwing command-line switches at it.  It occurred to one of us that we should try creating an unattend.txt install file.  After a few tries we figured out the proper structure of the file, and after 10 minutes of watching the CLI spit out random sentences, we had a new domain.

The structure of the file is fairly simple, but you need the correct variable data.  We used the following unattend.txt file to create the new domain:

[DCInstall]
InstallDNS=yes
NewDomain=forest
NewDomainDNSName=swmi.ca
DomainNetBiosName=SWMI
SiteName=Default-First-Site-Name
ReplicaOrNewDomain=domain
ForestLevel=3
DomainLevel=3
DatabasePath="%systemroot%\ntds"
LogPath="%systemroot%\ntds"
RebootOnCompletion=yes
SYSVOLPath="%systemroot%\sysvol"
SafeModeAdminPassword=Pa$$w0rd

Once the file was created we put it in the root of C: on the Server Core machine and typed the following command:

dcpromo /unattend:c:\unattend.txt

Surprisingly, it worked.  After checking with Microsoft, it turns out this is a supported option, and it’s not a hack in any way.  It’s just undocumented.

Until now.

Reference: Mitch Garvis, SWMI, http://garvis.ca/blogs/mitch/archive/2009/10/12/creating-a-new-domain-forest-on-server-core.aspx

ASP.NET Application Deployment Best Practices - Part 2

by Steve Syfuhs / September 16, 2009 04:00 PM

In my previous post I started a list of best practices that should be followed for deploying applications to production systems.  This is continuation of that post.

  • Create new Virtual Application in IIS

Right-click [website app will live in] > Create Application

Creating a new application provides each ASP.NET application its own sandbox environment. The benefit to this is that site resources do not get shared between applications. It is a requirement for all new web applications written in ASP.NET.

  • Create a new application pool for Virtual App
    • Right click on Application Pools and select Add Application Pool
    • Define name: “apAppName” - ‘ap’ followed by the Application Name
    • Set Framework version to 2.0
    • Set the Managed Pipeline mode: Most applications should use the default setting

An application pool is a distinct process running on the web server. It segregates processes and system resources in an attempt to prevent errant web applications from allocating all system resources. It also prevents any nasty application crashes from taking the entire website down. It is also necessary for creating distinct security contexts for applications. Setting this up is essential for high availability.

  • Set the memory limit for application pool

There is a finite amount of resources available on the web servers. We do not want any one application to allocate them all. Setting a reasonable maximum per application lets the core website run comfortably and allows many applications to run at any given time. If it is a small, lightweight application, the limit can be set lower.
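On IIS 7 this can be scripted with appcmd, for example by setting a private-memory recycling threshold on the pool (a sketch; the pool name follows the convention above and the 512000 KB value is an illustrative limit):

```shell
%windir%\system32\inetsrv\appcmd.exe set apppool "apAppName" /recycling.periodicRestart.privateMemory:512000
```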

  • Create and appropriately use an app_Offline.htm file

Friendlier than an ASP.NET exception screen (aka the Yellow Screen of Death)

If this file exists, it automatically stops all traffic into a web application. Aptly named, it is best used when server updates might take the application down for an extended period of time. It should be styled to conform to the application’s look. Best practice is to keep the file in the root directory of the application renamed to app_Online.htm, so it can easily be found and renamed if an emergency update were to occur.

  • Don’t use the Default Website instance
    • This should be disabled by default
    • Either create a new website instance or create a Virtual Application under existing website instance

Numerous vulnerabilities in the wild assume that the default website instance is used, which creates reasonably predictable attack vectors given that the default properties exist. Disabling this instance and creating new instances mitigates a number of attacks immediately.

  • Create two Build Profiles
    • One for development/testing
    • One for production

Using two build profiles is very handy for managing configuration settings such as connection strings and application keys. It lessens the manageability issues associated with developing web applications remotely. This is not a necessity, though it does make development easier.

  • Don’t use the wwwroot folder to host web apps

Define a root folder for all web applications other than wwwroot

As with the previous comment, there are vulnerabilities that use the default wwwroot folder as an attack vector. A simple mitigation to this is to move the root folders for websites to another location, preferably on a different disk than the Operating System.

These two lists sum up what I believe to be a substantial set of best practices for application deployments. The intent was not to create a list of development best practices, or to prescribe a development model, but strictly to aid in deployment. It should be left to you or your department to define development models.

ASP.NET Application Deployment Best Practices - Part 1

by Steve Syfuhs / September 16, 2009 04:00 PM

Over the last few months I have been collecting best practices for deploying ASP.NET applications to production.  The intent was to create a document that described the necessary steps needed to deploy consistent, reliable, secure applications that are easily maintainable for administrators.  The result was an 11 page document.  I would like to take a couple excerpts from it and essentially list what I believe to be key requirements for production applications.

The key is consistency.

  • Generate new encryption keys

The benefit of doing this is that internal hashing and encryption schemes use different keys between applications. If an application is compromised, any private keys that are recovered will have no effect on other applications. This is most important in applications that use Forms Authentication, such as the member's section. The Key Generator app uses the built-in .NET key generation code in RNGCryptoServiceProvider.
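A minimal sketch of such a generator, assuming the keys are destined for the web.config machineKey element (the key lengths shown are common choices for SHA1 validation and AES decryption, not requirements):

```csharp
using System;
using System.Security.Cryptography;

class KeyGenerator
{
    // Fill a buffer with cryptographically strong random bytes and hex-encode it
    static string NewKey(int byteCount)
    {
        byte[] buffer = new byte[byteCount];
        using (var rng = new RNGCryptoServiceProvider())
        {
            rng.GetBytes(buffer);
        }
        return BitConverter.ToString(buffer).Replace("-", "");
    }

    static void Main()
    {
        Console.WriteLine("validationKey=\"{0}\"", NewKey(64)); // 64 bytes, for SHA1 validation
        Console.WriteLine("decryptionKey=\"{0}\"", NewKey(32)); // 32 bytes, for AES decryption
    }
}
```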

  • Version and give Assemblies Strong Names

Use the AssemblyInfo.cs file:

[assembly: AssemblyTitle("NameSpace.Based.AssemblyTitle")]
[assembly: AssemblyDescription("This is My Awesome Assembly…")]
[assembly: AssemblyConfiguration("")]
[assembly: AssemblyCompany("My Awesome Company")]
[assembly: AssemblyProduct("ApplicationName")]
[assembly: AssemblyCopyright("Copyright © 2009")]
[assembly: AssemblyTrademark("TM Application Name")]
[assembly: AssemblyCulture("en-CA")]
[assembly: AssemblyVersion("1.0.0.0")]
[assembly: AssemblyFileVersion("1.0.0.0")]

Strong names and versioning are the backbone of .NET assemblies. They help distinguish between different versions of an assembly, and provide copyright attributes for code we have written internally. This is especially helpful if we decide to sell any of our applications.

  • Deploy Shared Assemblies to the GAC
    • Assemblies such as common controls
    • gacutil.exe -i "g:\dev\published\myApp\bin\myAssembly.dll"

If any assemblies are created that get used across multiple applications they should be deployed to the GAC (Global Assembly Cache). Examples of this could be Data Access Layers, or common controls such as the Telerik controls. The benefit of doing this is that we will not have multiple copies of the same DLL in different applications. A requirement of doing this is that the assembly must be signed with a strong name.

  • Pre-Compile Site: [In Visual Studio] Build > Publish Web Site

Any application that is in production should be running in a compiled state. What this means is that any application should not have any code-behind files or App_Code class files on the servers. This will limit damage if our servers are compromised, as the attacker will not be able to modify the source.
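The same result can be produced outside Visual Studio with the aspnet_compiler tool (the paths below are illustrative):

```
REM Precompile the site so no source files are deployed to the server
%windir%\Microsoft.NET\Framework\v2.0.50727\aspnet_compiler.exe -v /myWebApp -p "g:\dev\myApp" "g:\dev\published\myApp"
```

This is handy in automated build scripts, where a Visual Studio publish step is not available.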

  • Encrypt SQL Connections and Connection Strings

Encrypt SQL Connection Strings

Aspnet_regiis.exe -pe connectionStrings -site myWebSite -app /myWebApp

Encrypt SQL Connections

Add ‘Encrypt=True’ to all connection strings before encrypting

SQL connection strings contain sensitive data such as username/password combinations for access to database servers. These strings are stored in web.config files, which sit in plain text on the server. If malicious users access these files they will have credentials to the servers. Encrypting the connectionStrings section prevents them from reading it.

However, encrypting the connection string is only half of the issue. SQL traffic is transmitted across the network in plain text by default, so sensitive data could be acquired by a network sniffer running on a compromised web server. SQL connections should also be encrypted using SSL certificates.
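Before the section itself is encrypted with aspnet_regiis, a connection string with transport encryption enabled might look like this (the server and database names are illustrative):

```xml
<connectionStrings>
  <add name="MainDb"
       connectionString="Data Source=dbserver;Initial Catalog=MyAppDb;Integrated Security=SSPI;Encrypt=True"
       providerName="System.Data.SqlClient" />
</connectionStrings>
```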

  • Use key file generated by Strong Name Tool:

C:\Program Files\Microsoft SDKs\Windows\v7.0A\bin\sn.exe

sn.exe -k "g:\dev\path\to\app\myAppKey.snk"

Signing an assembly provides validation that the code is ours. It will also allow for GAC deployment by giving the assembly a signature. The key file should be unique to each application, and should be kept in a secure location.

  • Set retail=”true” in machine.config

<configuration>
  <system.web>
    <deployment retail="true"/>
  </system.web>
</configuration>

In a production environment applications should not show exception details or trace messages. Setting the retail attribute to true is a simple way to turn off debugging and tracing, and to force the application to use friendly error pages.

In part 2 I continue my post on more best practices for deployment to a production environment.

Roles and Responsibilities for Managing an Enterprise Web Site

by Steve Syfuhs / September 16, 2009 04:00 PM

The intent of this post is to create a summary definition of the roles required to adequately manage an enterprise website. It is designed to be used in tandem with a RACI (Responsible, Accountable, Consulted, and Informed) document to provide a unified management model for the web infrastructure developed.

Roles are neither inclusive nor exclusive: any one person can fill more than one role, and more than one person can fill the same role, as long as each role is adequately covered.

In a future post I will discuss the creation of a RACI document.

Roles

  • Database Administrator

Database administrators are charged with controlling website data resources. They use repeatable practices to ensure data availability, integrity, and security, recover corrupted data, eliminate data redundancy, and leverage tools to improve database performance and efficiency.

  • Application Administrator

Application Administrators are charged with installing, supporting, and maintaining applications, and planning for and responding to service outages and other problems including, but not limited to, troubleshooting end-user issues at the application level.

  • Server/Operating System Administrator

Server Administrators are charged with installing, supporting, and maintaining servers and other systems, as well as planning for and responding to server outages and other problems including, but not limited to, troubleshooting Application Administration issues at the Operating System level.

  • User Account/Permissions Administrator

Account Administrators are charged with managing user accounts as well as permissions for users within the system. This includes, but is not limited to, locking and unlocking user accounts, as well as resetting passwords.

  • Hardware Administrator

Hardware Administrators are charged with managing server hardware and resources. This includes, but is not limited to, deployment of servers as well as troubleshooting issues such as faulty hardware.

  • Network Administrator

Network Administrators are charged with managing physical network resources such as routers and switches and logical network resources such as firewall rules and IP settings. This includes, but is not limited to, managing routing rules as well as troubleshooting connectivity issues.

These roles were created in an attempt to define job responsibilities at an executive level.  A RACI document is then suggested as the next step to define what each role entails at the management level.
