Unit Testing – Green is good!

Developers have a reputation for not enjoying testing, but that’s an outdated myth.  Any modern developer will tell you there are plenty of tools and processes we can employ to take the pain out of testing – and even make it fun.

Seriously, fun?  Well yes.

When writing code, how do you know you’ve finished?  Does it work?  Does it meet the requirement (use case)?  Well, you can run the code and manually do some tests, but you need to document this, and any time you make changes, re-run those same tests and document the outcomes again.  To make this fun, repeatable and provable, we need to automate it.  We need to write code to test our code.  We need unit tests.

Unit tests are small pieces of code that we write to test our code – our business rules and logic.  Ideally we configure our build tools and environment to run these tests automatically each time we build.  We get instant feedback and gain a level of confidence that our code is still working and we’ve not broken anything.

Now there’s an argument that we should write all our unit tests before we write any application code.  I’ll admit I find this hard to do – how can I write unit tests that compile and run when I’ve not got any application code yet?  So I take a pragmatic approach: write some application code, write some unit tests based on the use case, maybe realize I’ve got a test case that I’ve not yet coded for, write some more application code, then check the unit tests and run them.
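Here’s what one of those tests might look like – just a minimal sketch using Visual Studio’s MSTest framework, with a hypothetical DiscountCalculator class standing in for real application code:

using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical application code under test
public enum CustomerType { Retail, Trade }

public class DiscountCalculator
{
   // Stand-in business rule: trade customers get a 10% discount
   public decimal Apply(decimal price, CustomerType type)
   {
      return type == CustomerType.Trade ? price * 0.9m : price;
   }
}

[TestClass]
public class DiscountCalculatorTests
{
   [TestMethod]
   public void TradeCustomer_Gets_Ten_Percent_Discount()
   {
      // Arrange – create the object under test
      var calculator = new DiscountCalculator();

      // Act – run the business logic we want to verify
      decimal price = calculator.Apply(100m, CustomerType.Trade);

      // Assert – check the result against the use-case requirement
      Assert.AreEqual(90m, price);
   }
}

Run it from Test Explorer (or as part of the build) and watch it go green.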

Unit tests passed!

I typically structure my applications in layers (like a cake), with an intelligent business model layer (Csla.Net), and it’s that model layer that contains the business rules and logic that definitely needs unit testing.

To get consistent unit test results, your application framework needs to support mocking.  If your code gets data from a database or web service, you really want to mock that data.  There are techniques for doing that too – the repository pattern and dependency injection, for example.
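As a minimal sketch (with a hypothetical ICustomerRepository – no particular mocking library assumed): the business code depends on an interface, and the tests substitute a fake implementation:

// The business code depends on an abstraction, not the database
public interface ICustomerRepository
{
   string GetCustomerName(int id);
}

// A hand-rolled fake used only by the unit tests
public class FakeCustomerRepository : ICustomerRepository
{
   public string GetCustomerName(int id) { return "Test Customer"; }
}

// The class under test receives the repository via its constructor
public class CustomerGreeter
{
   private readonly ICustomerRepository _repository;

   public CustomerGreeter(ICustomerRepository repository)
   {
      _repository = repository;
   }

   public string Greet(int customerId)
   {
      return "Hello " + _repository.GetCustomerName(customerId) + "!";
   }
}

In production we inject the real repository that talks to the database; in a unit test, new CustomerGreeter(new FakeCustomerRepository()) returns the same answer every time.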

Once you get everything set up and into the routine of writing unit tests, the benefits are huge.

  • Build each time and prove the code passes its unit tests.
  • A higher level of confidence in the code base.
  • Build and deploy more frequently – Agile.
  • Reduced manual testing – saves money long-term.
  • Know you are doing it right – quality improves.

Write unit tests and go green!

Windows Snipping Tool

When writing technical design documents, I’ll often have diagrams or prototype code snippets that I want to include in the document.  More often than not, I reach for a screen capture tool.  In the past, you’d search the internet for a free tool, or pay a few dollars for a shareware application.

“It’s right in front of you”.

It’s easy to forget what applications come with your operating system.  I run Windows 7 at work but Windows 8.1 at home.  Windows has a nice little tool called the “Snipping Tool” and it lets you grab screenshots with ease.

To find it, just start typing “snipping tool” on the Windows 8 home screen or in the Windows 7 “Search programs and files” box.  You can pin it to your taskbar for quick access later.

ASP.Net Identity Security

ASP.Net Identity 2.0 replaces the previous ASP.Net membership systems and ships with templates in Visual Studio 2013.  It includes support for OAuth and OWIN, which basically means we can use external authentication providers like Microsoft, Facebook and Google, and that the authentication service can be independent of IIS and System.Web.  So you have the potential to run ASP.Net websites on non-Microsoft platforms, but more importantly it’s leaner and faster, and you can build an OWIN-based middleware component that enables the same security mechanisms to be used in WebForms, MVC, Web API and SignalR applications.

“The new membership system is based on OWIN rather than the ASP.NET Forms Authentication module. This means that you can use the same authentication mechanism whether you’re using Web Forms or MVC in IIS, or you’re self-hosting Web API or SignalR.  The new membership database is managed by Entity Framework Code First, and all of the tables are represented by entity classes that you can modify. This means that you can easily customize the database schema and profile-related web UI to fit your own needs, and you can easily deploy your updates using Code First Migrations.”  – http://www.asp.net/visual-studio/overview/2013/creating-web-projects-in-visual-studio#auth

So it’s pretty quick and easy to get membership set up and working.  And it’s reasonably easy to customize.

There are, however, a few problems with the default ASP.Net Identity implementation – if these problems are relevant to you, of course.  They may not be, and you may consider the current implementation secure enough and appropriate.

Password Hashing

The password hashing algorithm uses a salt (a randomly generated value) for each and every password.  This is good.  The algorithm then combines the salt with the password and hashes it a number of times – 5000 by default (hard to find in the Microsoft documentation).  Now there’s an overhead when hashing a password, and you need to strike a balance: the system must run quickly enough for users, but it must also be hard for a hacker to mount a brute-force attack.  To slow a hacker down, you increase the number of times you hash the password – called the work factor.

There are recommendations – see https://www.owasp.org/index.php/Password_Storage_Cheat_Sheet – where a time of around 1 second to hash a password is suggested as a good compromise.  The issue is that as hardware gets faster, the time taken to hash a password reduces, so a hacker armed with top-of-the-range hardware can crack your passwords more quickly.

Apple reportedly (as of April 2014) hashes iTunes passwords 10,000 times, while the OWASP recommendation was apparently 64,000 iterations back in 2012.  Aim for around 1 second to hash a password on the hardware you are running on.

Unfortunately it’s not easy to change the work factor in ASP.Net Identity.  The other problem is that even if we could increase it, over time we’d want to increase it further as hardware improves.  But once it’s set, we can’t change it, because existing users would no longer be able to authenticate.  To get around this, we would need to remember the work factor in use when each user created their password and store it safely somewhere on their user profile.
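To make the idea concrete, here’s a hedged sketch (not the ASP.Net Identity implementation) of hashing with a configurable work factor and storing that factor alongside the salt and hash, so it can be recovered at verification time:

using System;
using System.Security.Cryptography;

public static class WorkFactorHasher
{
   // Hash with PBKDF2 and embed the work factor in the stored value
   public static string HashPassword(string password, int workFactor)
   {
      var salt = new byte[16];
      using (var rng = new RNGCryptoServiceProvider())
      {
         rng.GetBytes(salt);
      }
      using (var pbkdf2 = new Rfc2898DeriveBytes(password, salt, workFactor))
      {
         var hash = pbkdf2.GetBytes(32);
         // Format: workFactor.salt.hash – the factor travels with the password
         return workFactor + "." +
                Convert.ToBase64String(salt) + "." +
                Convert.ToBase64String(hash);
      }
   }

   // Verification reads the work factor back out of the stored value,
   // so existing users still authenticate after the factor is increased
   public static bool VerifyPassword(string stored, string password)
   {
      var parts = stored.Split('.');
      int workFactor = int.Parse(parts[0]);
      var salt = Convert.FromBase64String(parts[1]);
      using (var pbkdf2 = new Rfc2898DeriveBytes(password, salt, workFactor))
      {
         var hash = pbkdf2.GetBytes(32);
         // (a production version would use a constant-time comparison)
         return Convert.ToBase64String(hash) == parts[2];
      }
   }
}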

Two-Factor Brute Force Attacks

If we implement two-factor authentication (where a user registers or changes a password and is sent a code to their mobile phone, which they then have to enter), then we need to defend this too.  A user will typically have a window in which to verify the code (maybe 5 minutes, maybe 1 hour), and during that window this is a point of attack that needs protecting.  ASP.Net Identity checks that the code is still within its time limit but fails to track failed attempts.  This means a hacker can repeatedly try to guess the code within the timed period – perhaps making several hundred thousand attempts in 5 minutes.  What we really want is to reject the verification code after, say, 5 incorrect attempts.
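What that might look like – purely a hypothetical sketch, not the ASP.Net Identity code – is a failed-attempt counter checked before the code comparison:

using System;

// Hypothetical pending two-factor verification state for a user
public class PendingTwoFactorCode
{
   public string Code { get; set; }
   public DateTime ExpiresUtc { get; set; }
   public int FailedAttempts { get; set; }
}

public static class TwoFactorVerifier
{
   private const int MaxFailedAttempts = 5;

   public static bool Verify(PendingTwoFactorCode pending, string suppliedCode)
   {
      // Reject expired codes and codes that have been guessed at too often
      if (DateTime.UtcNow > pending.ExpiresUtc) return false;
      if (pending.FailedAttempts >= MaxFailedAttempts) return false;

      if (pending.Code == suppliedCode)
      {
         pending.FailedAttempts = 0;
         return true;
      }

      // Track the failure – this is the piece ASP.Net Identity is missing
      pending.FailedAttempts++;
      return false;
   }
}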

What do we need to do?

Well, if we think our site is unlikely to be attacked (don’t assume that, by the way) and we are not implementing two-factor authentication, then ASP.Net Identity works nicely enough for our needs.  If we need improved security today and in the future, or we have ISO security standards to meet, we need a better solution.

IdentityReboot

Microsoft tends to give you some tools and frameworks that get you so far…but often not far enough.  Brock Allen is well known in the community and has implemented several identity and membership frameworks to solve these problems.  IdentityReboot is a quick way to take ASP.Net Identity and extend it to solve them – see http://brockallen.com/2014/02/11/introducing-identityreboot/#comment-23487 for the reasoning and the steps to tweak the existing code.

IdentityReboot provides an AdaptivePasswordHasher that calculates the work factor (based on the OWASP recommendations) from the current year.  Alternatively, you can configure it in code (or from a config file) – see http://brockallen.com/2014/02/09/how-membershipreboot-stores-passwords-properly/ for more details.  It also stores the work factor used at the time inside the user’s hashed password, which means that if we change the factor, or the year ticks over, we can still authenticate existing users.  When they next change their passwords, the new, increased work factor is used, improving their security.  This solves our password hashing concerns.
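Hooking it in is as simple as replacing the hasher on the user manager – a sketch based on my reading of Brock’s posts (the ApplicationUser and ApplicationDbContext names are the Visual Studio 2013 template defaults, and I’m assuming the BrockAllen.IdentityReboot namespace):

using BrockAllen.IdentityReboot;
using Microsoft.AspNet.Identity;
using Microsoft.AspNet.Identity.EntityFramework;

public static UserManager<ApplicationUser> Create()
{
   var manager = new UserManager<ApplicationUser>(
       new UserStore<ApplicationUser>(new ApplicationDbContext()));

   // Swap the default hasher for IdentityReboot's adaptive one
   manager.PasswordHasher = new AdaptivePasswordHasher();
   return manager;
}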

IdentityRebootUserManager

This extension solves the two-factor brute force attack by imposing a limit on the number of failed attempts at the two-factor code verification.  Trivial to implement – just two lines of code to change, as sketched below.
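In the Visual Studio 2013 template, that change is essentially swapping the base class of the generated user manager – again a hedged sketch, with the template’s ApplicationUser name assumed:

using BrockAllen.IdentityReboot;
using Microsoft.AspNet.Identity;

// Before: public class ApplicationUserManager : UserManager<ApplicationUser>
public class ApplicationUserManager : IdentityRebootUserManager<ApplicationUser>
{
   public ApplicationUserManager(IUserStore<ApplicationUser> store)
       : base(store)
   {
   }
}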

Summary

Maybe over time ASP.Net Identity will improve and these issues will be addressed.  There does appear to be an IPasswordHasher interface exposed on the UserManager, so maybe there is some scope to write our own (but I really don’t want to).

Hooking in the IdentityReboot open source extensions is trivial, and we get a better security model.  There are plenty of other security attack points – this just addresses the two immediate issues I have.  Of course, you still need a long password with a mix of letters, numbers, special characters and mixed case.  Do the basics right.

IdentityServer3 from Brock Allen takes the security framework to another level.  Highly recommended, so check that out too.

Richard.

Simple Business Layer Models

Some business layer models are simple, too simple.

You’ve seen it a million times: code separation, with a data access project doing the CRUD (create, read, update, delete) stuff, and a model layer that defines a bunch of properties with getters and setters and not much else.  You then plaster a user interface on top.  For example, with ASP.Net MVC you find that the controllers do the donkey work of firing up the data access repository to call the data access methods, populate the model and return it to the view.

What’s wrong with this?  Well a couple of things.

One: you have a reasonable amount of code in your controllers – essentially in the UI project.  We won’t get much reuse here if we want a different UI.  We would need to strip that code out into a class library project so we have a chance of reusing it across different UI projects.

Two: what exactly is this model layer actually doing for us?  Say we have a Customer class with the usual get and set properties.  You see this model every day, I bet, and it’s just a plain old CLR object (POCO).  In an MVC application, I bet the properties are decorated with attributes – attributes for simple validation rules like limiting maximum string lengths, setting number ranges and marking required fields.  This is useful for simple validation in MVC, as it can take those attributes and generate client-side JavaScript validation.
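Something like this hypothetical Customer – a handful of properties and some data annotation attributes:

using System.ComponentModel.DataAnnotations;

// Hypothetical POCO model with simple validation attributes
public class Customer
{
   public int Id { get; set; }

   [Required]
   [StringLength(50)]   // MVC can turn these into client-side JavaScript validation
   public string Name { get; set; }

   [Range(18, 120)]
   public int Age { get; set; }

   [EmailAddress]
   public string Email { get; set; }
}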

But that’s pretty much as advanced as it gets.  OK, maybe you can do a tiny bit more validation using a third party validation package like FoolProof.  This lets you define dependencies between properties and do some other clever validations that MVC doesn’t do out of the box.

But most systems end up having quite complex business rules, business rules that depend on the state of the object, the state of other related objects and often data tucked away in the data-source (database, web service, third party service).

How do we express those in the model?  We really want to, because if we can, we’ve increased the re-usability of our code.  Not only that, but we can express ALL our business rules in one place.

“If you build it…”

You need something that provides a good business rules engine, something that tracks the state of your models, and something that works with all the various UI technologies in the Microsoft stack and on the web.  That means working with the different data binding techniques that WPF, WinForms, MVC, Windows Phone, etc. utilize, today and in the future.  If you want a re-usable business model layer, that’s a lot of plumbing code to write and maintain.  Oh, and maybe we need it to work with iOS and Android too.

“Stand on the shoulders of giants”.

Now I’m all for writing framework code that makes my life easier.  Developers tend to have a box of tricks and common code that travels with them from project to project and client to client.  In my younger days, writing a framework was an enjoyable challenge, and the open source movement was not what it is today.  Today, though, it’s all about being productive, and open source delivers a treasure chest of tools and support.

Say hello to my little friend.

Csla.Net is a framework that gives us an intelligent business model layer – one that lets us describe all our rules in the model, even the complex ones.  So what does a typical business model look like?

using System;
using System.ComponentModel;
using Csla;

[Serializable]
public class VehicleEdit : BusinessBase<VehicleEdit>
{
   public VehicleEdit()
   {
     /* Normally this constructor would be private, forcing the use of
        factory methods to create instances, but MVC model binding
        needs it to be public. */
   }

   public static readonly PropertyInfo<byte[]> TimeStampProperty = 
        RegisterProperty<byte[]>(c => c.TimeStamp);
   [Browsable(false)]
   [EditorBrowsable(EditorBrowsableState.Never)]
   public byte[] TimeStamp
   {
      get { return GetProperty(TimeStampProperty); }
      set { SetProperty(TimeStampProperty, value); }
   }

   public static readonly PropertyInfo<int> IdProperty = 
        RegisterProperty<int>(c => c.Id);
   public int Id
   {
      get { return GetProperty(IdProperty); }
      set { SetProperty(IdProperty, value); }
   }

   public static readonly PropertyInfo<string> NameProperty = 
        RegisterProperty<string>(c => c.Name);
   public string Name
   {
      get { return GetProperty(NameProperty); }
      set { SetProperty(NameProperty, value); }
   }

   //...continued

Here we have a VehicleEdit class.  I have called it VehicleEdit rather than Vehicle because it represents a vehicle model that is editable.  It inherits from the Csla class BusinessBase, which gives it properties out of the box like IsNew, IsDirty, IsValid and IsSavable, methods like BeginEdit, ApplyEdit and Save, plus events.

If I wanted a read-only vehicle model, I would inherit from the ReadOnlyBase class.  Csla.Net encourages you to write domain-specific models for the use case.  So while you could use a VehicleEdit model in a read-only UI page, it’s better and more efficient to use a ReadOnlyBase-derived model.  Long term, this simplifies the UI code.
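A read-only counterpart might look like this – a sketch following the same pattern (and usings) as VehicleEdit, with VehicleInfo as my illustrative name:

[Serializable]
public class VehicleInfo : ReadOnlyBase<VehicleInfo>
{
   public static readonly PropertyInfo<int> IdProperty =
        RegisterProperty<int>(c => c.Id);
   public int Id
   {
      get { return GetProperty(IdProperty); }
      private set { LoadProperty(IdProperty, value); }  // load-only, never edited
   }

   public static readonly PropertyInfo<string> NameProperty =
        RegisterProperty<string>(c => c.Name);
   public string Name
   {
      get { return GetProperty(NameProperty); }
      private set { LoadProperty(NameProperty, value); }
   }
}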

The properties in these models look a little different, but what they do is create backing fields that are then tracked.  The GetProperty and SetProperty methods hook into the Csla framework and do a number of things: they flag the model as dirty when a property changes value, raise data binding events for your UI, fire off business rule validation, check authorization rules (is the current user allowed to view or set this property or model?), and update the IsValid and IsSavable properties.  All very useful.
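To see that from the UI side – a small hedged sketch:

var vehicle = new VehicleEdit();
vehicle.Name = "Transit Van";   // SetProperty: marks the object dirty, raises
                                // PropertyChanged, runs validation and authorization
bool canSave = vehicle.IsValid && vehicle.IsDirty;  // drives the UI's Save button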

This is only the start – merely the first course of a seven-course dinner!

How do we write validation rules?  How do we write authorization rules and enforce them?  Can we control access down to property level?  How do we call the data access code?  How do we work with these Csla-based business objects in the UI code?  How do we deploy in a physical 1-tier, 2-tier or 3-tier model without any code changes?  How do we unit test and mock data?

For now, just know that we can elegantly do all of this when using Csla.Net.
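As a small taste, here’s a sketch of attaching validation rules using the common rules that ship with Csla.Net – an AddBusinessRules override that would sit inside VehicleEdit:

protected override void AddBusinessRules()
{
   base.AddBusinessRules();
   // Name is required and limited to 50 characters
   BusinessRules.AddRule(new Csla.Rules.CommonRules.Required(NameProperty));
   BusinessRules.AddRule(new Csla.Rules.CommonRules.MaxLength(NameProperty, 50));
}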

I’ll blog more about this soon.

Richard.

Hello Cake!

I’m not going to start this blog with a “Hello World” example; rather, just say that this is where I’ll blog about code, cake and other things I like.

Who am I?  I write code.  I have been a developer, team leader, manager.  I have project managed, designed systems, written specifications, interviewed and hired and fired.  I have written code, created bugs, fixed many more.  And I still love it.

So this blog is just about me and the things I do, mostly so I can remember what I did last week and find those bits of code I once found useful.

Everyone should blog.  For the coders out there, it serves as a place to go for those code snippets that you once used and need again, a place to remember how good or bad your code was.

It is also a place to share.

Put your code out there.

Richard.