

Missing Drive Space? Check IntelliTrace Files

by Steve Syfuhs / January 13, 2011 04:00 PM

My laptop has a relatively old SSD, so it only has about 128 GB of space.  This works out nicely because I like to keep projects and extraneous files on an external drive.  However, when you’ve got Visual Studio 2005-2010 installed, 2 instances of SQL Server installed, and god knows what else installed, space gets a little tight with 128 GB.  As a result I tend to keep an eye on space.  It came as a surprise to find out I lost 20 GB over the course of a week or two without downloading or installing anything substantial.

To find out where my space went, I turned to a simple little tool called Disk Space Finder by IntelliConcepts.  There are probably a million applications like this, but this is the one I always seem to remember.  It scans your hard drive, checking file sizes, and breaks down where the space is going.

I was able to dig into the ProgramData folder, and then further into the data folder for Visual Studio IntelliTrace:

[Screenshot: Disk Space Finder's usage breakdown of the IntelliTrace data folder]

If you leave IntelliTrace enabled for all debugging, you could potentially end up with a couple hundred *.itrace files like I did (not actually pictured).  It looks like an .itrace file is created every time the debugger attaches to a process, so effectively a file is created every time you hit F5.  Doubly so if you are debugging multiple launchable projects at once.

You can find the folder containing these files at C:\ProgramData\Microsoft Visual Studio\10.0\TraceDebugging.

The quick fix is to just delete the files and/or stop using IntelliTrace.  I recommend just deleting the files, because I think IntelliTrace is an amazing, if a little undercooked, tool.  It's a v1 product.  Considering what it's capable of, this is a minor blemish.
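If you just want the space back right away, a one-liner from a command prompt will do it.  This is a sketch under the assumption that Visual Studio is closed (so none of the files are locked); you may also need an elevated prompt to delete from C:\ProgramData:

del /q "C:\ProgramData\Microsoft Visual Studio\10.0\TraceDebugging\*.itrace"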

The long term fix is to install Visual Studio 2010 SP1, as there is apparently a fix for this issue.  The downside of course is that SP1 is still in beta.  Hence long term.

Vulnerabilities in Twitter’s OAuth Implementation

by Steve Syfuhs / September 02, 2010 04:00 PM

Earlier this week Twitter disabled Basic Authentication for clients and switched over to their new OAuth implementation.  It turns out, though, that OAuth is fairly weak in a few areas, as it hasn't really matured as a standard.  While this isn't the end of the world, it does leave each implementer to their own devices to cover the weak points.

This is just a quick overview of one of the WTFs that is Twitter's OAuth implementation; Ars Technica has a great article covering it in detail.

One key point that Twitter seems to have missed entirely is how they handle client verification, i.e. proving that the client in question really is who they say they are.  For instance, I use Sobees quite a bit, and have been playing around with MetroTwit lately too.  Twitter wants each instance of Sobees to prove that it is Sobees.  The client application does this by obtaining a public/private key pair and passing it to the authentication mechanism.

This seems odd.  How does the application store the private key?  Most implementations will probably stick it in a config file, while others might encrypt it.  Suffice it to say, all applications need this private key.  It is very easy to extract text from binary structures, let alone config files, so what happens if I get another client's private key?
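To see why the key has to be recoverable by the client, it helps to look at how OAuth 1.0a signing works: every request is signed with HMAC-SHA1, keyed by the consumer secret.  Here is a minimal C# sketch of just that step; the secrets and signature base string are made-up illustrative values, not Twitter's real handshake:

using System;
using System.Security.Cryptography;
using System.Text;

class OAuthSigningSketch
{
    static void Main()
    {
        // Illustrative values; a real client builds the base string from the
        // HTTP method, URL, and OAuth parameters of the request.
        string consumerSecret = "kd94hf93k423kf44";  // shipped in every copy of the client
        string tokenSecret = "pfkkdhi9sl3r4s00";
        string baseString = "GET&https%3A%2F%2Fapi.twitter.com%2F1%2Fstatuses...";

        // The signing key is "consumerSecret&tokenSecret", so anyone who
        // extracts the consumer secret can sign requests as that application.
        byte[] key = Encoding.ASCII.GetBytes(consumerSecret + "&" + tokenSecret);
        using (HMACSHA1 hmac = new HMACSHA1(key))
        {
            byte[] hash = hmac.ComputeHash(Encoding.ASCII.GetBytes(baseString));
            Console.WriteLine(Convert.ToBase64String(hash));  // becomes oauth_signature
        }
    }
}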

Since this private key is used for identification, I could very easily stick that key into my application and pretend that I am that application.  This wouldn't really compromise user PII, but it could easily cause harm.  Twitter's goal here is to reduce spam: if they see too much spam coming from a certain private key, they will revoke the key, preventing the application from signing users in.

Who sees the problem here?  What happens if my competitor steals my key and starts spamming people?  My key gets revoked, and I need to replace it.  If it's a client application, that means updating it, testing it, deploying it, and hoping the mass downtime across every instance doesn't lose you too many customers.  Worse yet for those who have written iPhone apps, because that could mean weeks of delays while Apple twiddles its thumbs.

I suspect that they won't revoke any keys once they come to their senses.  Or more likely, they will revoke a key for something like TweetDeck and hear the outcry from its large user base.  After those users can sign back in again, of course.

Bizarre Error Message from Explorer

by Steve Syfuhs / August 02, 2010 04:00 PM

Interesting error found in explorer.exe.  I tried hitting [Windows] + [E] and got this message:

[Screenshot: the error dialog shown by explorer.exe]

Kinda bizarre.  I blame solar flares.

ADFS 2.0 Windows Service Not Starting on Server 2008

by Steve Syfuhs / July 22, 2010 04:00 PM

I've been working on getting a testable ADFS environment set up for evaluation and development.  Basically, because of laziness (and timeliness), I'm using Windows Virtual PC to host Server 2008 guests for testing.  I didn't have time to set up a fully working x64 environment, so I couldn't go to R2.

One of the issues I've been running into is that the Windows service won't start properly.  Or rather, at all.  It's running into a timing issue when running as Network Service: it times out while waiting for a network connection.  More Googling with Bing returned the fix for me from here.

In the file [C:\Program Files\Active Directory Federation Services 2.0\Microsoft.IdentityServer.Servicehost.exe.config], add this entry inside the existing <configuration> element (it stops the runtime from generating publisher evidence, an Authenticode certificate check that can block on the network while the service starts):

<runtime>
    <generatePublisherEvidence enabled="false"/> 
</runtime>

Other places have noted that this isn’t a problem on R2.  I haven’t tested this yet, so I don’t know if it’s true.

AntiXss vs HttpUtility - So What?

by Steve Syfuhs / May 25, 2010 04:00 PM

Earlier today, Cory Fowler suggested I write up a post discussing the differences between the AntiXss library and the methods found in HttpUtility, and how it helps defend against cross site scripting (XSS).  As I was thinking about what to write, it occurred to me that I really had no idea how it did what it did, or why it differed from HttpUtility.  <side-track>I'm kinda wondering how many other people out there run into the same thing?  We are told to use some technology because it does xyz better than abc, but when it comes right down to it, we aren't quite sure of the internals.  Just a thought for later I suppose.</side-track>

A Quick Refresher

To quickly summarize what XSS is: if you have a textbox on your website that someone can enter text into, and then display that same text on another page, the user could maliciously add <script> tags to do anything they wanted with JavaScript.  This usually results in redirecting to another website that shows advertisements or tries to install malware.

The way to stop this is to not trust any input, and to encode any character that could be part of a tag into an HTML-encoded entity.

HttpUtility does this though, right?

The HttpUtility class definitely does do this.  However, it is relatively limited in how it encodes potentially malicious text.  It works by encoding a specific set of characters, like the angle brackets < and >, to &lt; and &gt;.  This can get tricky, because you could theoretically find a way around a fixed list of known-bad characters (somehow; speculative).

Enter AntiXss

The AntiXss library works in essentially the opposite manner.  It has a white-list of allowed characters, the usual a-z, 0-9, etc., and encodes everything else.
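To make the difference concrete, here is a quick comparison sketch (it assumes you have referenced System.Web and the AntiXSS library):

using System;
using System.Web;
using Microsoft.Security.Application;

class EncoderComparison
{
    static void Main()
    {
        string input = "<script>alert('xss')</script>";

        // HttpUtility encodes a short, fixed list of known-bad characters
        // (angle brackets, ampersands, double quotes).
        Console.WriteLine(HttpUtility.HtmlEncode(input));

        // AntiXss white-lists known-good characters (a-z, A-Z, 0-9, etc.)
        // and encodes everything not on the list.
        Console.WriteLine(AntiXss.HtmlEncode(input));
    }
}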

Further Reading

I'm not really doing you, dear reader, any favors by reiterating what dozens of people have said before me (and probably said it better), so here are a couple of links that contain loads of information on actually using the AntiXss library and protecting your website from cross site scripting:

Visual Studio Step Up Promotion...The Headache

by Steve Syfuhs / April 23, 2010 04:00 PM

A few months ago some friends of mine at Microsoft told me about a step-up promotion going on for the release of Visual Studio 2010.  If you purchased a license for Visual Studio 2008 through Volume Licensing, it would translate into the next version up in the 2010 lineup.  Seems fairly straightforward, but here is the actual process:

[Diagram: the step-up promotion process]

So we upgraded our licenses to benefit from the step-up.  Problem was, we couldn't access any of the applications we were licensed to use (after RTM, obviously).  After a week or so of back and forth with Microsoft, we finally got it squared away.  A lot of manual cajoling in the MSDN sales system, I suspect, took place.  It turns out a lot of people were running into this issue.

Someone told me this issue got escalated to Steve B (not our specific issue, but the step-up issue in general).  I'm curious where things actually went wrong.  I suspect the workflow that was in place at the business level wasn't in place at the technical level, so everything ended up becoming a manual process.  However, that is purely speculative.  Talk to Steve if you have questions.

In the end, everything worked out.  I got Visual Studio 2010 installed (which fricken rocks, btw), and my productivity will go up immensely once we get TFS deployed.  After, of course, it takes the necessary dip while I'm downloading and playing with the new MSDN subscription.

For those who are interested, the promotion is still valid until the end of April; contact your account rep.

Bad User Interfaces are Insecure

by Steve Syfuhs / March 04, 2010 04:00 PM

The Best of Intentions

So you've built this application.  It's a brilliant application.  Its design is spectacular, the architecture is flawless, the coding is clean and coherent, and you even followed the SDL best practices and created a secure application.

There is one minor problem though.  The interface is terrible.  It’s not intuitive, and settings are poorly described in the options window.  A lot of people wouldn’t necessarily see this as a security issue, but more of an interaction bug -- blame the UX people and get on with your day.

Consider this (highly hyperbolic) options window though:

[Screenshot: a verbose, confusingly worded security option]

How intuitive is it?  Notsomuch, eh?  You have to really think about what it's asking.  Worst of all, there is so much extraneous information that is supposed to help you decide.

At first glance I'm going to check it.  I see "security" and "enable" in the text and naturally assume it's asking me if I want it to run securely (let's say for the sake of argument that it speaks the truth), because god knows I'm not going to read it all the way through the first time.

By the second round through I’ve already assumed I know what it’s asking, read it fully, get confused, and struggle with what it has to say.

A normal end user will not even get to this point.  They’ll check it, and click save without thinking, because of just that – they don’t want to have to think about it.

Now, consider this:

[Screenshot: the same option, reworded to be clear and concise]

Isn’t this more intuitive?  Isn’t it easier to look at?  But wait, does it do the same thing?  Absolutely.  It asks the user if they want to run a secure application.

The Path to Security Hell

When I first considered what I wanted to say on this topic, I asked myself "how can this really be classified as a security bug?"  After all, it's the user's fault for checking it, right?

Well, no.  It’s our fault.  We developed it securely, we told them they needed it to be run securely, and we gave them the option to turn off security (again, hyperbole, but you get the point).  It’s okay to let them choose if they want to run an insecure application, but if we confuse them, if we make it difficult to understand what the heck is going on, they aren’t actually doing what they want and we therefore failed at making the application they wanted secure, secure.

It is our problem.

So what?

Most developers I know will at the very least make an attempt to write a secure application.  They check for buffer overflows, SQL injection, cross site scripting, blah blah blah.  Unfortunately some, myself included, tend to forget that end users don't necessarily know about security, nor care about it.  So we do what most developers do.  We tell them what we know: "There has been a fatal exception at 0x123FF567!!one! The index was outside the bounds of the array.  We need to destroy the application threads and process."

That probably sounds familiar, because it reads like most of the error messages we display to our end users.  Frankly, they don't care about any of it.  They are just pissed that the work they were doing was lost.

The funny thing is, we really don't notice this.  When I was building the first settings window above, I kept reading the text and thinking to myself that it made perfect sense.  The reason is simple: what I wrote is my logic.  I wrote the logic, I wrote the text, so I inherently understand what I wrote.  We do this all the time.  I do this all the time, and then I get a phone call from some user saying "wtf does this mean?", aaaaaaand then I change it to something a little more friendly.  By the 4th or so iteration of this I usually get it right (or maybe they just get tired of calling?).

So what does this say about us? Well, I’m not sure. I think it’s saying we need to work on our user interface skills, and as an extension of that, we need to work on our soft skills – our interpersonal skills. Maybe. Just a thought.

Six Simple Development Rules (for Writing Secure Code)

by Steve Syfuhs / December 15, 2009 04:00 PM

I wish I could say that I came up with this list, but alas I did not.  I came across it this morning on the Assessment, Consulting & Engineering Team blog from Microsoft.  They are a core part of Microsoft's internal IT security group, and are around to provide resources for internal and external software developers.  These six rules are key to developing secure applications, and they should be followed at all times.

Personally, I try to follow the rules closely, and am working hard at creating an SDL for our department.  Aside from Rule #1, you could treat each rule as a sort of checklist for when you sign off on, or preferably design, an application for production.

--

Rule #1: Implement a Secure Development Lifecycle in your organization.

This includes the following activities:

  • Train your developers and testers in secure development and secure testing, respectively
  • Establish a team of security experts to be the ‘go to’ group when people want advice on security
  • Implement Threat Modeling in your development process. If you do nothing else, do this!
  • Implement Automatic and Manual Code Reviews for your in-house written applications
  • Ensure you have ‘Right to Inspect’ clauses in your contracts with vendors and third parties that are producing software for you
  • Have your testers include basic security testing in their standard testing practices
  • Do deployment reviews and hardening exercises for your systems
  • Have an emergency response process in place and keep it updated

If you want some good information on doing this, email me and check out this link:
http://www.microsoft.com/sdl

Rule #2: Implement a centralized input validation system (CIVS) in your organization.

These CIVS systems are designed to perform common input validation on commonly accepted input values. Let's face it: as much as we'd all like to believe we are the only ones doing things like registering users or recording data from visitors, it's actually all the same thing.

When you receive data it will very likely be an integer, decimal, phone number, date, URI, email address, post code, or string. The values and formats of the first seven of those are very predictable. Strings are a bit harder to deal with, but they can still be validated against known good values. Always remember to check for the three F's: Form, Fit, and Function.

  • Form: Is the data the type you expect? If you are expecting a quantity, is the data an integer? Always cast data to a strong type as soon as possible to help determine this.
  • Fit: Is the data the right length/size? Will the data fit in the buffer you allocated (including any trailing nulls, if applicable)? If you are expecting an Int32 or a Short, make sure you didn't get an Int64 value. Did you get a positive integer for a quantity rather than a negative one?
  • Function: Can the data you received be used for the purpose it was intended? If you received a date, is the value in the right range? If you received an integer to be used as an index, is it in range? If you received an int as a value for an Enum, does it match a legitimate Enum value?

In the vast majority of cases, string data being sent to an application will be 0-9, a-z, A-Z. In some cases, such as names or currencies, you may want to allow -, $, % and '. You will almost never need <>, {}, or [] unless you have a special use case such as http://www.regexlib.com, in which case see Rule #3.

You want to build this as a centralized library so that all of the applications in your organization can use it. This means if you have to fix your phone number validator, everyone gets the fix. By the same token, you have to inspect and scrutinize the crap out of a CIVS to ensure it is not prone to errors and vulnerabilities, because everyone will be relying on it. But applying heavy scrutiny to one centralized library is far better than applying that same scrutiny to every single input value of every single application.  You can be fairly confident that as long as applications are using the CIVS, they are doing the right thing.

Fortunately, implementing a CIVS is easy if you start with the Enterprise Library Validation Application Block, which is a free download from Microsoft that you can use in all of your applications.
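If you end up rolling your own instead, the shape is roughly the following. This is a minimal sketch; the class and method names are mine for illustration, not the Validation Application Block's API:

using System;
using System.Text.RegularExpressions;

// A tiny centralized validator that every application in the organization shares.
public static class InputValidator
{
    // Form: cast to a strong type as soon as possible.
    // Fit: reject values outside the expected range (no negative quantities).
    public static bool TryParseQuantity(string input, out int quantity)
    {
        return int.TryParse(input, out quantity) && quantity >= 0;
    }

    // Function: validate against a known-good pattern (an illustrative
    // North American phone number format).
    private static readonly Regex PhonePattern =
        new Regex(@"^\d{3}-\d{3}-\d{4}$", RegexOptions.Compiled);

    public static bool IsValidPhoneNumber(string input)
    {
        return input != null && PhonePattern.IsMatch(input);
    }
}

Fix the phone pattern here, and every application that consumes the library gets the fix for free.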

Rule #3: Implement input/output encoding for all externally supplied values.

Due to the prevalence of cross site scripting vulnerabilities, you need to encode any values that came from an outside source and that you may display back to the browser (even embedded browsers in thick client applications). The encoding essentially takes potentially dangerous characters like < or > and converts them into their HTML, HTTP, or URL equivalents.

For example, if you were to HTML encode <script>alert('XSS Bug')</script>, it would look like &lt;script&gt;alert('XSS Bug')&lt;/script&gt;.  A lot of this functionality is built into .NET. For example, the code to do the above looks like:

Server.HtmlEncode("<script>alert('XSS Bug')</script>");

However, it is important to know that Server.HtmlEncode only encodes about four of the nasty characters you might encounter. It's better to use a more 'industrial strength' library like the Anti Cross Site Scripting library, another free download from Microsoft. This library does a lot more encoding, and will do HTTP and URI encoding based on a white list. The above encoding would look like this with AntiXSS:

using Microsoft.Security.Application;
AntiXss.HtmlEncode("<script>alert('XSS Bug')</script>");

You can also run a neat test system that a friend of mine developed to test your application for XSS vulnerabilities in its outputs. It is aptly named XSS Attack Tool.

Rule #4: Abandon Dynamic SQL

There is no reason you should be using dynamic SQL in your applications anymore. If your database does not support parameterized stored procedures in one form or another, get a new database.

Dynamic SQL is when developers build a SQL query in code and submit it to the database to be executed as a string, rather than calling a stored procedure and feeding it the values. It usually looks something like this:

(for you VB fans)

dim sql
sql = "Select ArticleTitle, ArticleBody FROM Articles WHERE ArticleID = "
' The query string value is concatenated directly into the SQL text -- this is the injection point.
sql = sql & request.querystring("ArticleID")
set results = objConn.execute(sql)

In fact, this article from 2001 is chock full of what NOT to do, including dynamic SQL in a stored procedure.

Here is an example of a stored procedure that is vulnerable to SQL Injection:

Create Procedure GenericTableSelect @TableName VarChar(100)
AS
Declare @SQL VarChar(1000)
SELECT @SQL = 'SELECT * FROM '
SELECT @SQL = @SQL + @TableName
Exec (@SQL)
GO

See this article for a look at using Parameterized Stored Procedures.
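For comparison, here is the vulnerable VB snippet from above redone as a parameterized query in C#. A sketch only; the connection string and schema are carried over from the example, and the method name is mine:

using System;
using System.Data.SqlClient;

class ArticleReader
{
    static void PrintArticle(string connectionString, string rawArticleId)
    {
        // Form: cast the incoming value to a strong type first (see Rule #2).
        int articleId = int.Parse(rawArticleId);

        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand(
            "SELECT ArticleTitle, ArticleBody FROM Articles WHERE ArticleID = @ArticleID",
            conn))
        {
            // The value travels as data, never as SQL text, so input like
            // "1; DROP TABLE Articles" cannot change the shape of the query.
            cmd.Parameters.AddWithValue("@ArticleID", articleId);

            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine("{0}: {1}",
                        reader["ArticleTitle"], reader["ArticleBody"]);
            }
        }
    }
}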

Rule #5: Properly architect your applications for scalability and failover

Applications can be brought down by a simple crash. Or a not so simple one. Architecting your applications so that they can scale easily, vertically or horizontally, and so that they are fault tolerant will give you a lot of breathing room.

Keep in mind that fault tolerance is not just a way of saying the application restarts when it crashes. It means that you have a proper exception handling hierarchy built into the application.  It also means the application needs to be able to handle situations that result in server failover. This is usually where session management comes in.

The best fault tolerant session management solution is to store session state in SQL Server.  This also helps avoid the server affinity issues some applications have.
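In ASP.NET this is a configuration switch in web.config. A sketch, with an illustrative connection string:

<configuration>
  <system.web>
    <!-- Session state lives in SQL Server instead of in-process memory, so any
         server in the farm can pick up a user's session after a failover. -->
    <sessionState mode="SQLServer"
                  sqlConnectionString="Data Source=SessionDb;Integrated Security=SSPI;"
                  timeout="20" />
  </system.web>
</configuration>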

You will also want a good load balancer up front. This will help distribute load evenly so that, hopefully, you won't run into the failover scenario often.

And by all means do NOT do what they did on the site at the beginning of this article. Set up your routers and switches to properly shunt bad traffic or DoS traffic, then let your applications handle the input filtering.

Rule #6: Always check the configuration of your production servers

Configuration mistakes are all too common. Proper server hardening on top of a standard out-of-the-box deployment is probably a good secure default, yet there are a lot of people out there changing stuff that shouldn't be changed. You may remember when Bing went down for about 45 minutes. That was due to configuration issues.

To help address this, we have released the Web Application Configuration Auditor (WACA). This is a free download that you can use on your servers to see if they are configured according to best practices. You can download it at this link.

You should establish a standard SOE (standard operating environment) for your web servers that is hardened and properly configured. Any variation from that SOE should be scrutinized and go through a very thorough change control process. Test changes first before turning them loose on the production environment…please.

So, with all that said, you will be well on your way to stopping the majority of attacks you are likely to encounter on your web applications. Most of the attacks that occur are SQL injection, XSS, and improper configuration issues, and the above rules will knock out most of them. In fact, input validation is your best friend: regardless of inspecting firewalls and the like, the application is the only link in the chain that can make an intelligent and informed decision about whether the incoming data is actually legit. So put your effort where it will do you the most good.

Deleting Temporary Internet Files from the Command Line

by Steve Syfuhs / November 22, 2009 04:00 PM

A quickie, but a goodie.  Sometimes you just need a quick way to delete temp files from IE.  In most cases for me it's when I'm writing a webapp, so I've stuck this in the build properties:

REM 8 = Temporary Internet Files
RunDll32.exe InetCpl.cpl,ClearMyTracksByProcess 8
REM 2 = Cookies
RunDll32.exe InetCpl.cpl,ClearMyTracksByProcess 2
REM 1 = Browsing history
RunDll32.exe InetCpl.cpl,ClearMyTracksByProcess 1
REM 16 = Form data
RunDll32.exe InetCpl.cpl,ClearMyTracksByProcess 16
REM 32 = Saved passwords
RunDll32.exe InetCpl.cpl,ClearMyTracksByProcess 32
REM 255 = All of the above
RunDll32.exe InetCpl.cpl,ClearMyTracksByProcess 255
REM 4351 = All of the above, plus files and settings stored by add-ons
RunDll32.exe InetCpl.cpl,ClearMyTracksByProcess 4351

It doesn't require elevated permissions, and has been tested on Vista and Windows 7.  Each command deletes a different type of data (temp files, stored form info, cookies, etc.), as noted in the comments above.  Enjoy.

Security, Security, Security is about Policy, Policy, Policy

by Steve Syfuhs / November 19, 2009 04:00 PM

The other day I had the opportunity to take part in an interesting meeting with Microsoft. The discussion was security, and the attendees were 20 or so IT pros, developers, and managers from various Fortune 500 companies in the GTA. It was not a sales call.

Throughout the day, Microsofties Rob Labbe and Mohammad Akif went into significant detail about the current threat landscape facing all technology vendors and departments. There was one point that was paramount. Security is not all about technology.

Security is about the policies implemented at the human level. Blinky-lighted devices look cool, but in the end they will not likely add much value to protecting your network. Herein lies the problem: not too many people realize this -- hence the purpose of the meeting.

Towards the end of the meeting, as we were all letting the presentations sink in, I asked a relatively simple question:

What resources are out there for new/young people entering the security field?

The response was pretty much exactly what I was (unfortunately) expecting: nada.

Security, it seems, is a mostly self-taught topic. Yes, there are some programs at schools out there, but they tend to be academic -- naturally. By this I mean there is no fluidity in the discussion; it's as if you are studying a snapshot of the IT landscape taken 18 months ago. Most security experts will tell you the landscape changes daily, if not multiple times a day. We therefore need to keep up with the changes in security, and, as any teacher will tell you, that's impossible in an academic setting.

Keeping up to date with security is a manual process. You follow blogs, you subscribe to newsgroups and mailing lists, your company gets hacked by a new form of attack, etc., and in the end you have a reasonable idea of what was out there yesterday. And you know what? That's just the attack vectors! You need to follow a whole new set of blogs and mailing lists to understand how to mitigate those attacks. That sucks.

Another issue is the ramp-up required before you can even follow the daily updates. Security is tough when you're starting out. It involves so many different processes, at so many different levels of application interaction, that eyes glaze over at the thought of learning its ins and outs.

So here we have two core problems with security:

  1. Security changes daily – it’s hard to keep up
  2. It’s scary when you are new at this

Let's start by addressing the second issue. Security is a scary topic, but let's break it down into its core components.

  1. Security is about keeping data away from those who shouldn’t see it
  2. Security is about keeping data available for those who need to see it

At its core, security is simple. It starts getting tricky when you jump into the semantics of how to implement the core. So let’s address this too.

A properly working system will do what you intended it to do at a systematic level: calculate numbers, view customer information, launch a missile, etc. This is a fundamental tenet of application development. Security is about understanding the unintended consequences of what a user can do with that system.

These consequences include things like:

  • SQL Injection
  • Cross Site Scripting attacks
  • Cross Site Request Forgery attacks
  • Buffer overflow attacks
  • Breaking encryption schemes
  • Session hijacking
  • etc.

Once you understand that these types of attacks exist, everything else is just semantics. Those semantics are along the lines of figuring out best practices for system design, and that's really just a matter of studying.

Security is about understanding that anything is possible. Once you understand attacks can happen, you learn how they can happen. Then you learn how to prevent them from happening. To use a phrase I really hate using, security is about thinking outside the box.

Most developers do the least amount of work possible to build an application. I am terribly guilty of this. In doing so, however, there is a very high likelihood that I didn't consider what else could be done with the same code. Making this consideration is (again, lame phrase) thinking outside the box.

It is in following this consideration that I can develop a secure system.

So… policies?

At the end of the day, however, I am a lazy developer.  I will still do as little work as possible to get the system working, and frankly, this is not conducive to creating a secure system.

The only way to really make this work is to implement security policies that force certain considerations to be made.  Each system is different, and each organization is different.  There is no single policy that will cover the scope of all systems for all organizations, but a policy itself is simple.

A policy is a rule that must be followed, and in this case, we are talking about a development rule.  This can include requiring certain types of tests while developing, or following a specific development model like the Security Development Lifecycle.  It is with these policies that we can govern the creation of secure systems.

Policies create an organization-level standard.  Standards are the backbone of security.

These standards fall under the category of semantics mentioned earlier.  Given that, I propose an idea for learning security:

  • Understand the core ideology of security – mentioned above
  • Understand that policies drive security
  • Jump head first into the semantics starting with security models

The downside is that you will never understand everything there is to know about security.  No one will.

Perhaps it's not that flawed an idea.
