Saturday, January 30, 2010

Evolution of test coverage

One of the projects I've worked on over the last couple of years has been benefiting from ever-increasing test coverage. The data layer was originally written using DataSets, but a while back was moved to a domain model with NHibernate as the ORM. The domain model adheres quite closely to the database ERM (one class per table). A unit testing layer has been built alongside the domain model, and it has been growing and evolving as new business requirements have been added to the project.

The tests have adhered more closely to a "test-last" or "test-middle" model than a "test-first" model, since the initial tests were built immediately after migration of the domain model:
  • First, the domain model was created, with entity classes mapped closely to the underlying database tables.
  • Second, the original business logic was migrated across, which in effect redistributed it from the Transaction Script pattern to the Domain Model pattern.
  • Finally, an initial set of unit tests was built against the classes of the domain model.
Since then, unit tests have been built at two levels of scope, corresponding roughly to two different points of entry on a sequence diagram. In the domain model, the sequence diagram would begin with a higher-level entity, and then drill down into (and come back up from) lower-level objects in the domain. Therefore:
  • If a unit test is addressing an individual entity method farther to the right in the sequence diagram, it will be small and focused exclusively on verification of the behaviour of that method.
  • If a unit test is addressing a method at the far left of the sequence diagram (a point-of-entry), the test will tend to be much larger, begin with a large stub, and then verify multiple points within an object graph at the end of the test. These larger tests tend to be grouped, with each version of the test checking a different scenario from a matrix diagram.
In the current project, the combination of the above tests has evolved to 300+ tests, and this has provided an invaluable safety net to more easily make changes to the domain model in response to ongoing business requirements.
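To make the two scopes concrete, here is a small invented example (Order/OrderLine and the discount rule are illustrative only, not from the actual project): a narrow entity-level test on a method far to the right of the sequence diagram, and a wider point-of-entry test that builds a small object graph up front and verifies multiple points at the end.

```csharp
using System.Collections.Generic;
using NUnit.Framework;

// Illustrative domain classes, invented to show the two test scopes.
public class OrderLine
{
    public int Quantity { get; set; }
    public decimal UnitPrice { get; set; }
    public bool DiscountApplied { get; set; }
    public decimal ExtendedPrice
    {
        get { return Quantity * UnitPrice * (DiscountApplied ? 0.9m : 1m); }
    }
}

public class Order
{
    private readonly List<OrderLine> _lines = new List<OrderLine>();
    public List<OrderLine> Lines { get { return _lines; } }
    public void AddLine(OrderLine line) { _lines.Add(line); }

    // Point-of-entry method: drills down into the lower-level objects.
    public void ApplyVolumeDiscount()
    {
        foreach (var line in _lines)
            if (line.Quantity >= 100) line.DiscountApplied = true;
    }
}

[TestFixture]
public class TwoScopesExampleTests
{
    // Narrow scope: one entity method, verified in isolation.
    [Test]
    public void OrderLine_Calculates_Extended_Price()
    {
        var line = new OrderLine { Quantity = 3, UnitPrice = 10m };
        Assert.AreEqual(30m, line.ExtendedPrice);
    }

    // Wide scope: a point-of-entry method; builds the object graph
    // up front and verifies multiple points at the end.
    [Test]
    public void Order_ApplyVolumeDiscount_Flags_Only_Large_Lines()
    {
        var order = new Order();
        order.AddLine(new OrderLine { Quantity = 100, UnitPrice = 5m });
        order.AddLine(new OrderLine { Quantity = 2, UnitPrice = 20m });

        order.ApplyVolumeDiscount();

        Assert.IsTrue(order.Lines[0].DiscountApplied);
        Assert.IsFalse(order.Lines[1].DiscountApplied);
        Assert.AreEqual(450m, order.Lines[0].ExtendedPrice);
    }
}
```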

So, does this imply that QA no longer has any work to do? No, but the issues that are found by QA now tend to concentrate elsewhere, either in:
1) Newly discovered business logic that differs from the understanding currently reflected in the domain tests (so domain coverage sits closer to 95% than 100%).
2) Layers ABOVE the domain model layer that are more difficult to test, including:
     a) Repository (database query) layer: missing query information
     b) Service layer: incorrect coordination of calls to repository and domain tiers
     c) Remote facade/web service layer: missing elements or nulls when mapping to/from web service DTOs or DataSets

Integration tests do exist for testing a complete process, including database interactions and the service layer, but these tend to be tied to specific data and require some amount of setup, often involving cooperation from QA.

On a couple of previous projects, I have had some success with greater automated test coverage across all layers, but this required use of a database sandbox: rather than working with a copy of production data (which tends to be large, constantly evolving, and therefore poor for testing multiple integration scenarios), build the entire database from a script, which can then be dropped and recreated before each run of the automated tests, populating the sandbox with only the subset of data required for the integration tests.
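A rough sketch of what that sandbox setup can look like in NUnit. The DatabaseSandbox class, the connection string, and the script paths are all hypothetical, invented here for illustration:

```csharp
// Hypothetical sketch: rebuild the sandbox database to a known state
// before the integration test run. DatabaseSandbox and the script
// names are invented, not from an actual project.
[SetUpFixture]
public class DatabaseSandboxSetup
{
    [SetUp]
    public void RebuildSandbox()
    {
        var sandbox = new DatabaseSandbox(
            "Server=(local);Database=AppSandbox;Integrated Security=true");
        sandbox.DropAndRecreate();                              // empty database, known state
        sandbox.RunScript(@"Scripts\CreateSchema.sql");         // full schema from source control
        sandbox.RunScript(@"Scripts\SeedIntegrationData.sql");  // only the subset the tests need
    }
}
```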

Conclusion

The ultimate goal for test coverage is to be as complete as possible, covering every layer, not just the domain model, but all levels above it, up to and including the client.

Wednesday, January 27, 2010

The Veterinary Admin: Linking TDD Kata to creating user stories

Today is my 12th day of doing the TDD kata experiment. Mostly it has been with the Calculator kata, but I've started experimenting with creating new katas. First was a Model-View-Presenter kata with mocks, but I am now taking this one step further.

What the TDD kata makes obvious is the practise of writing tests based on a user story.

Historically I am more accustomed to working from a detailed requirements spec, which over the last few years has tended to look like this:

1) Read the spec
2) Create a domain model and NHibernate mappings, to database tables which typically were created previously (legacy code or a previous phase).
3) Figure out how the spec translates into distribution across the domain model objects.
4) Write unit tests for various domain model methods as you create them

Not fully test first, more like test middle. And the domain model tends to be mapped fairly closely to the database tables, so the domain entities tend to grow.

But that's a discussion for a whole other blog post. My purpose here is to let the TDD kata approach (getting direction from a user story) be my opportunity to practice creating tests directly from user stories.

I've started reading User Stories Applied by Mike Cohn, and I've decided to try some experiments with TDD kata by creating a list of user stories (eg. for a veterinarian administrator), choosing one of the user stories to write out, and then building a TDD kata based on that story.

Here is today's example, for the veterinarian administrator:

5 User Stories
#1 - Vet admin can enter pet details (register the pet)
#2 - Vet admin can log a single pet visit
#3 - Vet admin can track and add medications for a pet
#4 - Vet admin can create an invoice for the visit
#5 - Vet admin can create a purchase order for specialized pet foods

[Edit - adding more user story details]
#1 Vet admin can enter pet details (register the pet)
a) Can enter pet with name, breed, age, temperament, and brief health history.
b) Can enter owner name and address (if new).
c) Can associate the owner to the pet (add to owner's list of pets)

#2 Vet admin can log a single pet visit
a) Vet admin can create a pet visit record.
b) Vet admin can enter the comments provided by the vet about the visit
c) Vet admin can add a list of new prescriptions to the pet's prescription list, and link them to this visit.
d) Vet admin can issue a receipt for payment.
[end Edit.]

If I take the 3rd user story, I then write it out in detail:

#3 Vet admin can track pet medications
a) Vet admin can search for medications by name
b) Vet can assign found medication to Fluffy the dog's record
c) If Fluffy has allergy to the newly assigned medication, a flag will be raised
d) If new medication has contraindications with Fluffy's existing meds, a flag will be raised.
e) If flag is raised, vet admin must get Vet override before adding
f) Vet admin can enter prescription date, dosage instructions
g) Vet admin can print prescription.
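Steps (c) through (e) imply some domain logic. Here is a minimal sketch of how an entity might answer "does this need a vet override?" (all class and property names are invented for illustration):

```csharp
using System.Collections.Generic;
using System.Linq;

// Invented names throughout; a sketch of the flag rules in steps (c)-(e).
public class Medication
{
    private readonly List<string> _contraindicatedWith = new List<string>();

    public string Name { get; set; }
    public List<string> ContraindicatedWith { get { return _contraindicatedWith; } }
}

public class Pet
{
    private readonly List<string> _allergies = new List<string>();
    private readonly List<Medication> _medications = new List<Medication>();

    public List<string> Allergies { get { return _allergies; } }
    public List<Medication> Medications { get { return _medications; } }

    // True when a flag is raised: the vet admin must get a vet override
    // before adding the medication (steps c, d, e).
    public bool RequiresVetOverride(Medication newMedication)
    {
        bool allergyFlag = _allergies.Contains(newMedication.Name);
        bool contraindicationFlag = _medications.Any(
            existing => newMedication.ContraindicatedWith.Contains(existing.Name));
        return allergyFlag || contraindicationFlag;
    }
}
```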

Now, obviously this user story is too big for a 30 minute TDD kata, but it's a place for me to get started. My hope was that I could just address the domain in my kata, but the user story already includes a medication lookup, which is probably a repository query, so I decided to build the tests at the Model-View-Presenter level with mocked repository and view.

In 30 minutes, I managed to complete only the first step:


[TestFixture]
public class MedicationTrackerPresenterTests
{
    private MockRepository _mockRepository;
    private IMedicationTrackerView _medicationTrackerView;
    private IMedicationRepository _medicationRepository;

    [SetUp]
    public void SetUp()
    {
        _mockRepository = new MockRepository();
        _medicationTrackerView = _mockRepository.StrictMock<IMedicationTrackerView>();
        _medicationRepository = _mockRepository.StrictMock<IMedicationRepository>();
    }

    [TearDown]
    public void TearDown()
    {
        _mockRepository.ReplayAll();
        _mockRepository.VerifyAll();
    }

    [Test]
    public void VetAdminCanSearchForMedicationsByName()
    {
        _medicationTrackerView.SearchEvents += null;
        var searchMedicationsEventRaiser = LastCall.IgnoreArguments().GetEventRaiser();
        const string searchInput = "Tylenol";
        var medications = new List<Medication>();
        Expect.Call(_medicationTrackerView.SearchInput).Return(searchInput).IgnoreArguments();
        Expect.Call(_medicationRepository.FindMedicationsByName(searchInput)).Return(medications);
        _medicationTrackerView.MedicationsSearchResult = medications;

        _mockRepository.ReplayAll();

        var medicationTrackerPresenter = new MedicationTrackerPresenter(_medicationRepository, _medicationTrackerView);
        searchMedicationsEventRaiser.Raise(_medicationTrackerView, EventArgs.Empty);
    }
}
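For completeness, a presenter that satisfies this test could look like the following. This is a sketch only; it wires up just the search behaviour exercised above, and assumes IMedicationTrackerView and IMedicationRepository expose the members the test refers to:

```csharp
// Sketch of a presenter satisfying the test above; only the
// search wiring is shown.
public class MedicationTrackerPresenter
{
    private readonly IMedicationRepository _medicationRepository;
    private readonly IMedicationTrackerView _medicationTrackerView;

    public MedicationTrackerPresenter(IMedicationRepository medicationRepository,
                                      IMedicationTrackerView medicationTrackerView)
    {
        _medicationRepository = medicationRepository;
        _medicationTrackerView = medicationTrackerView;
        _medicationTrackerView.SearchEvents += OnSearch;
    }

    private void OnSearch(object sender, EventArgs e)
    {
        var medications = _medicationRepository.FindMedicationsByName(
            _medicationTrackerView.SearchInput);
        _medicationTrackerView.MedicationsSearchResult = medications;
    }
}
```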


Tomorrow, I will go back to Calculator kata (every 2nd day at least). But this process of going from application concept, to a list of user stories, to fleshing out 1 user story, to building the tests for that story as a TDD kata, feels like a strong practise that I want to reinforce.

Sunday, January 24, 2010

Upgraded version of #goos C# sample code ch.14 posted with "WinFormLicker"

I have just posted an update of the #goos (Growing Object-Oriented Software, Guided by Tests) C# sample code for chapter 14. This version now adheres more closely to the Java sample code by providing classes in a "WinFormLicker" namespace that launch a WinForm instance in a separate thread and observe the actions applied against the controls, similar to the behaviour applied to the Swing JFrame window in the Java sample code.

The code is posted here:

http://github.com/dgadd/GOOS_sample_csharp


Saturday, January 23, 2010

Creating a C# Window Inspector to parallel WindowLicker in the Java #goos sample code (updated)

This morning, I started into chapter 15 of Growing Object-Oriented Software, Guided by Tests. At this point, reliance on WindowLicker in the Java code to inspect the changes happening in the GUI layer is increasing. I had been avoiding this by simply using a mock IAuctionSniperView interface, and validating that the interface's Status string property had been set.

However, this creates a few problems:
1) The C# code isn't fully parallel to the Java sample code
2) In the book, the end-to-end/acceptance tests operate at a single level of scope, calling either ApplicationRunner or FakeAuctionServer to validate each step of the test. By using a mocked interface, I had to place the mock expected actions (replay and verify) at the top level (rather than inside ApplicationRunner).
3) And, of course, it's not truly an end-to-end test, as it stops at the view interface.

I decided to experiment with writing some tests in a new project to see how little code would be necessary to create a simple WinForm inspector that could start by simply observing the status Label being set.

After a few false starts (and needing to review my knowledge of the ParameterizedThreadStart class), I managed to get this working and displaying a label.

The tests:


[TestFixture]
public class WinFormInspectorTests
{
    private WinFormInspector _winFormInspector;

    [SetUp]
    public void Setup()
    {
        _winFormInspector = new WinFormInspector(new Main());
    }

    [Test]
    public void Inspector_Can_Instantiate_WinForm()
    {
        Assert.IsNotNull(_winFormInspector.Main);
    }

    [Test]
    public void Inspector_Can_Launch_Application()
    {
        _winFormInspector.LaunchApplication();
        _winFormInspector.SleepApplication(1000);
        _winFormInspector.QuitApplication();
    }

    [Test]
    public void Inspector_Can_Observe_Status_Label()
    {
        const string status = "Lost";

        _winFormInspector.LaunchApplication();
        _winFormInspector.Main.SniperStatus = status;
        _winFormInspector.ShowsSniperStatus(status);
        _winFormInspector.SleepApplication(1000);
        _winFormInspector.QuitApplication();
    }
}


The WinFormInspector class:


public class WinFormInspector
{
    private readonly Main _main;
    private Thread _thread;

    public WinFormInspector(Main main)
    {
        _main = main;
    }

    public Main Main
    {
        get { return _main; }
    }

    public void ShowsSniperStatus(string expectedStatus)
    {
        if (!_main.SniperStatus.Equals(expectedStatus))
        {
            throw new Exception("Expected status does not match SniperStatus label.");
        }
    }

    public void LaunchApplication()
    {
        _thread = new Thread(new ParameterizedThreadStart(Launch));
        _thread.Start(this.Main);
    }

    public void SleepApplication(int sleepMilliseconds)
    {
        Thread.Sleep(sleepMilliseconds);
    }

    public void QuitApplication()
    {
        this.Main.Close();
        Application.Exit();
    }

    private static void Launch(object input)
    {
        var form = (Form)input;
        Application.Run(form);
    }
}


...and the WinForm class, "Main":


public class Main : Form
{
    private readonly Label _lblStatus;

    public Main()
    {
        _lblStatus = new Label();
        this.Controls.Add(_lblStatus);
    }

    public string SniperStatus
    {
        get { return _lblStatus.Text; }
        set { _lblStatus.Text = value; }
    }
}


My next step is to move this over into the AuctionSniper C# sample code project. One of the things I'm debating is whether to keep the mocked view tests as well.

Friday, January 22, 2010

Eclipse / Visual Studio keyboard shortcuts for TDD Calculator kata

Tonight I tried out the TDD Calculator kata in Eclipse.

As part of the process, I searched for equivalent keyboard shortcuts in Eclipse, and came up with the following quick comparison:

Eclipse            | Visual Studio with Resharper             | Task
-------------------|------------------------------------------|-------------------------------------
Ctrl-F6            | Ctrl-Tab                                 | Jump between Classes
Ctrl-F7            | Ctrl-Tab-LeftArrow                       | Jump between Views
Alt-Shift-Q,P      | Ctrl-Alt-L                               | Jump to Package / Solution Explorer
Ctrl-Shift-W       | Alt-W,L                                  | Close All Editor Windows
Ctrl-Shift-F8      | F5                                       | Go to Debug (Switch Perspectives)
Alt-Shift-X, T     | Ctrl-R-A (VS) / Option-R-U-N (R#)        | Run All Tests
Alt-Shift-D, T     | Ctrl-R,Ctrl-T (VS) / Option-R-U-D (R#)   | Run Contextual Test in Debug Mode
F2                 | Alt-Enter or Alt-Shift-F10               | Show refactoring suggestions
Alt-Shift-M        | Ctrl-R-M                                 | Extract Method
Alt-Shift-V        | Ctrl-R-O                                 | Move Class to another Namespace
Ctrl-7 (toggle)    | Ctrl-K-C                                 | Comment a block of code
Ctrl-7 (toggle)    | Ctrl-K-U                                 | Uncomment a block of code

Thursday, January 21, 2010

TDD Calculator kata: thoughts on day 6

I've been doing the TDD Calculator kata (as per Roy Osherove: http://osherove.com/tdd-kata-1/) for the last 6 days. Today was the first day where I did the complete kata. I got the first section down to 22 minutes, but by the time I tackled the final issue (recognizing and processing multiple custom delimiters) the frustration had kicked in and I had metaphorically rolled up my sleeves: staring at output in debug mode, watching side-effects break 4 of the previous tests, and feeling stress levels go up as the clock ticked. I finally resolved all issues, had all tests passing, and did final refactoring in just under 45 minutes.

Temporary stress levels aside, this has been a very productive experience. I've seen a number of patterns emerging with each successive repetition of the practise:

1) Faster and faster interaction with Resharper
I have had friends recommending Resharper to me for a couple of years now, but it was actually working with Eclipse & Java again to build the sample code in "Growing Object-Oriented Software, Guided by Tests" that reminded me about all the tools that Eclipse provides to assist with code generation as you build test-first. When I returned to Visual Studio to build the equivalent code in C#, it quickly became apparent that Resharper was the Eclipse-ification of Visual Studio. With the beginning of the TDD kata practise, the usage of Resharper has become even more prominent, with reliance on it for class and interface generation, constant reference to its recommendations for improving the code, and quick in-browser test runs with NUnit.

2) Getting serious about Visual Studio (and Resharper) keyboard shortcuts
I have been quite happy to mouse along in Visual Studio, but watching some of the kata samples out there has brought home the usefulness (first for the kata, but already quickly apparent in my daily work) of using keyboard shortcuts to keep up with the train of thought. The 10 most useful that I have started using regularly are:
* Ctrl-Alt-L to jump to Solution Explorer (and down and left arrows to collapse projects)
* Ctrl-Tab to move between tests and code ("Active Files") and to other Visual Studio windows
* Shift-F10 instead of right-click (yes, I had to google that one)
* Option-R-U-N to run all tests in the Resharper window
* Alt-Enter to look at contextual Resharper recommendations (typically to invert if conditionals or switch declarative types to var)
* Alt-Shift-F10 to look at contextual Visual Studio recommendations (typically to either reference a using statement for a class, or to cascade a renaming across the code)
* F9 to set a breakpoint, and Ctrl-Shift-F9 to clear all breakpoints
* Ctrl-K-C and Ctrl-K-U to comment/uncomment code
* Ctrl-R-M to extract a method, and
* Ctrl-R-O to move a class to a different namespace

3) Learning to use the simplest solution possible
It felt gimmicky at first, but two tests, one passing "" and expecting 0, the other passing "3" and expecting 3, are most appropriately solved with:

return inputString.Length > 0 ? 3 : 0;

Of course that isn't "real", but it forces me to not overbuild. I notice that when I hit the more challenging issues to resolve (eg. the final requirement in the kata) that it's very tempting to start building the Sistine Chapel, but of course then the question becomes how do I test a coding monstrosity?

I'll come back to this one at the end, with its implications for the coding of larger projects.
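For reference, the two tests driving that one-liner might look like this (assuming the kata's Calculator class with an Add(string) method, per Osherove's description):

```csharp
using NUnit.Framework;

// The "fake it" step of the kata: the simplest implementation that
// passes both tests below. Later kata steps force real parsing.
public class Calculator
{
    public int Add(string inputString)
    {
        return inputString.Length > 0 ? 3 : 0;
    }
}

[TestFixture]
public class CalculatorTests
{
    [Test]
    public void Add_EmptyString_Returns_Zero()
    {
        Assert.AreEqual(0, new Calculator().Add(""));
    }

    [Test]
    public void Add_Single_Number_Returns_That_Number()
    {
        Assert.AreEqual(3, new Calculator().Add("3"));
    }
}
```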

4) Correspondence of test names and one-at-a-time issues to resolve
The Calculator kata states a series of issues to resolve. Each test is named with the resolution of that issue, for example:


[Test]
public void Calculator_Allows_Multiple_MultiChar_Custom_Delimiter()


It's very, very clear, and is an excellent parallel to the original stated issues.

Overall Conclusions
I've written lots of unit tests over the last 3 years, and they have been extremely useful and provided significant code coverage (on my current project, I just passed the 300th unit test), but they, and the classes they address, tend to be large. I am using a domain model with NHibernate, and I always practise moving business logic down (when it slips into the client or a service layer) into the domain object where it belongs, but then it tends to stop there. The domain model is correct, the logic is inside the entity, it prevents redundancy, all good things, but it makes for large entities and large tests, as opposed to more incremental tests shaping the design. In re-reading Refactoring [Fowler] this past November, I saw that smaller classes, with increasing delegation (eg. a domain entity calling out to a strategy class), make for greater granularity, and with it, smaller tests. Starting from the tests first, with the goal of keeping the tests small and simple, helps to keep the classes small and delegating naturally.
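As a sketch of what that delegation can look like (Invoice and the late-fee rule here are invented for illustration): the entity stays small, and the strategy class gets its own small, focused tests.

```csharp
// Invented example of an entity delegating to a strategy class.
public interface ILateFeeStrategy
{
    decimal CalculateLateFee(decimal balance, int daysOverdue);
}

public class PercentagePerMonthLateFee : ILateFeeStrategy
{
    private readonly decimal _monthlyRate;

    public PercentagePerMonthLateFee(decimal monthlyRate)
    {
        _monthlyRate = monthlyRate;
    }

    public decimal CalculateLateFee(decimal balance, int daysOverdue)
    {
        if (daysOverdue <= 0) return 0m;
        int months = (daysOverdue + 29) / 30;   // round up to whole months
        return balance * _monthlyRate * months;
    }
}

public class Invoice
{
    private readonly ILateFeeStrategy _lateFeeStrategy;

    public Invoice(ILateFeeStrategy lateFeeStrategy)
    {
        _lateFeeStrategy = lateFeeStrategy;
    }

    public decimal Balance { get; set; }
    public int DaysOverdue { get; set; }

    // The entity delegates; the calculation can be tested in isolation.
    public decimal LateFee
    {
        get { return _lateFeeStrategy.CalculateLateFee(Balance, DaysOverdue); }
    }
}
```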

TDD/mocks kata for MVP (Model-View-Presenter)

Here is a simple TDD Kata for Model-View-Presenter and Rhino Mocks.

Model-View-Presenter is an implementation approach rather than a framework. Therefore it can be used to create testable sub-presentation layers for any GUI platform (WinForm .NET, ASP.NET, Java Swing, SharePoint web controls, you name it) because the view is abstracted to an interface. All of the (testable) interaction logic now occurs in a newly created layer called the presentation layer. The presentation layer is completely agnostic about the view implementation; in fact, it can have MULTIPLE view implementations.

Finally, any time that an SUT (a "system-under-test") is created which interacts with other code through interfaces, those interfaces can (and usually should) be tested with unit tests that isolate the SUT. This is done by either faking the interface implementations (with minimal "fake" implementation classes) or mocking the implementations with mocking tools such as jMock (for Java), or NMock, Moq, or RhinoMocks for .NET.
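To make the fake-vs-mock distinction concrete, a minimal hand-rolled fake of the repository used in this kata could be as small as the following (ICustomerRepository and Customer match the kata code further down; they are repeated here so the sketch stands alone):

```csharp
using System.Collections.Generic;

// Model and interface as used in the kata below.
public class Customer
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

public interface ICustomerRepository
{
    List<Customer> GetCustomers();
}

// The fake: a real implementation with canned data, as opposed to a
// RhinoMocks mock with recorded expectations.
public class FakeCustomerRepository : ICustomerRepository
{
    private readonly List<Customer> _customers;

    public FakeCustomerRepository(List<Customer> customers)
    {
        _customers = customers;
    }

    public List<Customer> GetCustomers()
    {
        return _customers;
    }
}
```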

In this TDD kata example, RhinoMocks is used as the mocking framework.

Model-View-Presenter TDD Kata
PreRequisites:
Create solution.
Reference TDD framework and mocking framework.
Create namespaces for Model, View, Presenter, and UnitTests.

NOTE: For each step which follows, code samples showing possible implementations appear below.

1) Create a presenter class which instantiates two mocked interfaces: repository and view. Use the naming prefix "Customer" on the presenter, the repository, and the view.
2) In the View, create an event: Initialize, and a string property: PageTitle.
3) Verify that when the Initialize event is raised:
   * the view's PageTitle property is set to "Welcome".
4) Create a Customer in the Model with properties FirstName and LastName.
5) In the View, create an event: GetCustomers, and a List<Customer> property: Customers.
6) Verify that when the GetCustomers event is raised:
   * the repository method GetCustomers() is called and returns a list of Customers
   * the view's Customers property is set to the Customers list
7) Create a SortCustomersEventHandler delegate with SortCustomersEventArgs that passes SortExpression and IsAscending.
8) In the View, create an event: SortCustomers (using the SortCustomersEventHandler delegate)
9) Verify that when the SortCustomers event is raised:
   * SortExpression and IsAscending are passed via SortCustomersEventArgs
   * [possibly that the sort has occurred]
   * the view's Customers property is set to the Customers list
Here is what one possible set of tests looks like using RhinoMocks:

[TestFixture]
public class CustomerPresenterTests
{
    private readonly MockRepository _mockRepository = new MockRepository();
    private CustomerPresenter _customerPresenter;
    private ICustomerRepository _mockCustomerRepository;
    private ICustomerView _mockCustomerView;

    [SetUp]
    public void Setup()
    {
        _mockCustomerView = _mockRepository.StrictMock<ICustomerView>();
        _mockCustomerRepository = _mockRepository.StrictMock<ICustomerRepository>();
    }

    [TearDown]
    public void TearDown()
    {
        _mockRepository.ReplayAll();

        _mockRepository.VerifyAll();
    }

    [Test]
    public void CustomerPresenter_Can_Be_Instantiated()
    {
        _mockCustomerView.Initialize += null;
        LastCall.IgnoreArguments();
        _mockCustomerView.GetCustomers += null;
        LastCall.IgnoreArguments();
        _mockCustomerView.SortCustomers += null;
        LastCall.IgnoreArguments();

        _mockRepository.ReplayAll();

        _customerPresenter = new CustomerPresenter(_mockCustomerRepository, _mockCustomerView);
    }

    [Test]
    public void CustomerPresenter_Sets_ViewTitle_When_Initialize_Event_Raised()
    {
        _mockCustomerView.Initialize += null;
        var initializeEventRaised = LastCall.IgnoreArguments().GetEventRaiser();
        _mockCustomerView.GetCustomers += null;
        LastCall.IgnoreArguments();
        _mockCustomerView.SortCustomers += null;
        LastCall.IgnoreArguments();
        _mockCustomerView.PageTitle = "Welcome";

        _mockRepository.ReplayAll();

        _customerPresenter = new CustomerPresenter(_mockCustomerRepository, _mockCustomerView);
        initializeEventRaised.Raise(_mockCustomerView, EventArgs.Empty);
    }

    [Test]
    public void CustomerPresenter_GetsCustomers_When_GetCustomers_Event_Raised()
    {
        var customers = new List<Customer>();

        _mockCustomerView.Initialize += null;
        LastCall.IgnoreArguments();
        _mockCustomerView.GetCustomers += null;
        var getCustomersEventRaised = LastCall.IgnoreArguments().GetEventRaiser();
        _mockCustomerView.SortCustomers += null;
        LastCall.IgnoreArguments();
        Expect.Call(_mockCustomerRepository.GetCustomers()).Return(customers);
        _mockCustomerView.Customers = customers;

        _mockRepository.ReplayAll();

        _customerPresenter = new CustomerPresenter(_mockCustomerRepository, _mockCustomerView);
        getCustomersEventRaised.Raise(_mockCustomerView, EventArgs.Empty);
    }

    [Test]
    public void CustomerPresenter_SortsCustomers_When_SortCustomers_Event_Raised()
    {
        var customers = new List<Customer>();

        _mockCustomerView.Initialize += null;
        LastCall.IgnoreArguments();
        _mockCustomerView.GetCustomers += null;
        LastCall.IgnoreArguments();
        _mockCustomerView.SortCustomers += null;
        var getSortCustomerEventRaiser = LastCall.IgnoreArguments().GetEventRaiser();
        Expect.Call(_mockCustomerRepository.GetCustomers()).Return(customers);
        _mockCustomerView.Customers = customers;

        _mockRepository.ReplayAll();

        _customerPresenter = new CustomerPresenter(_mockCustomerRepository, _mockCustomerView);
        var sce = new SortCustomersEventArgs("", true);
        getSortCustomerEventRaiser.Raise(_mockCustomerView, sce);
    }
}



And here is what one possible CustomerPresenter implementation looks like:

public class CustomerPresenter
{
    private readonly ICustomerRepository _customerRepository;
    private readonly ICustomerView _customerView;

    public CustomerPresenter(ICustomerRepository customerRepository, ICustomerView customerView)
    {
        _customerRepository = customerRepository;
        _customerView = customerView;

        _customerView.Initialize += CustomerViewInitialize;
        _customerView.GetCustomers += CustomerViewGetCustomers;
        _customerView.SortCustomers += CustomerViewSortCustomers;
    }

    void CustomerViewSortCustomers(object sender, SortCustomersEventArgs sce)
    {
        List<Customer> customers = _customerRepository.GetCustomers();

        customers.Sort(delegate(Customer first, Customer second)
        {
            int result;

            switch (sce.SortExpression)
            {
                case "FirstName":
                    result = first.FirstName.CompareTo(second.FirstName);
                    break;
                case "LastName":
                    result = first.LastName.CompareTo(second.LastName);
                    break;
                default:
                    result = 0;
                    break;
            }

            return (sce.IsAscending) ? result : -result;
        });

        _customerView.Customers = customers;
    }

    private void CustomerViewGetCustomers(object sender, EventArgs e)
    {
        List<Customer> customers = _customerRepository.GetCustomers();
        _customerView.Customers = customers;
    }

    private void CustomerViewInitialize(object sender, EventArgs e)
    {
        _customerView.PageTitle = "Welcome";
    }
}
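For reference, the view and repository contracts implied by the presenter above might be declared like this (a sketch consistent with the code shown; Customer is the model class from step 4, and exact signatures may vary):

```csharp
using System;
using System.Collections.Generic;

// Contracts implied by CustomerPresenter; Customer comes from step 4
// of the kata (FirstName, LastName).
public delegate void SortCustomersEventHandler(object sender, SortCustomersEventArgs e);

public interface ICustomerView
{
    event EventHandler Initialize;
    event EventHandler GetCustomers;
    event SortCustomersEventHandler SortCustomers;

    string PageTitle { set; }
    List<Customer> Customers { set; }
}

public interface ICustomerRepository
{
    List<Customer> GetCustomers();
}

public class SortCustomersEventArgs : EventArgs
{
    private readonly string _sortExpression;
    private readonly bool _isAscending;

    public SortCustomersEventArgs(string sortExpression, bool isAscending)
    {
        _sortExpression = sortExpression;
        _isAscending = isAscending;
    }

    public string SortExpression { get { return _sortExpression; } }
    public bool IsAscending { get { return _isAscending; } }
}
```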



I've tried a couple of implementations of ICustomerView: WinForm, and as a Sharepoint web part. Interesting to see it work.

But, from the kata point of view, the only part that matters is repeating the creation of the tests, and experimenting with/refining the approach.

Tuesday, January 19, 2010

GOOS ch. 14 C# sample code github URL

http://github.com/dgadd/GOOS_sample_csharp

GOOS C# ch. 14 sample code posted to github (with README.txt)

I've been working through sample code for the book Growing Object-Oriented Software, Guided by Tests by Steve Freeman and Nat Pryce. The example provided is in Java; I've been porting it to C# with Rhino Mocks. Details here.

Below is the README.txt file I posted with the GOOS C# ch. 14 sample code to github:

README.txt for GOOS sample code in Visual Studio 2010
================================================

David Gadd
http://www.twitter.com/gaddzeit
Email: gaddzeit@yahoo.ca

This version of the sample code from #goos was written using:
* Visual Studio 2010 Beta 2.0
* Visual Studio's unit-testing framework
(I experienced NUnit compatibility issues in VS 2010 Beta 2.0;
to use this code in Visual Studio 2008 with NUnit just retag the test class/method attributes.)
* Resharper Beta 5 (helpful but not required to run the code)
* Rhino Mocks version 3.6.0.0 (assume this reference will be broken; you will need to re-reference to a local copy)

This version is complete as of the end of Chapter 14, with all acceptance and unit tests passing.

While I used OpenServer for the Java version, for this version I created a fake XMPP server (all method calls are similar/identical).

Instead of using WindowLicker for a full end-to-end acceptance test, I am simply mocking the IPickerMainView interface
and verifying in RhinoMocks that the SniperStatus string property is being set. To achieve this, the level of scope in
the end-to-end acceptance test is not identical to the Java code; above the method calls to ApplicationRunner.cs
and FakeAuctionServer.cs I am setting RhinoMocks expectations on _mockPickerMainView.

Other than that, I am using the more commonly-used conventions in C# than in Java of:
* prefixing interface names with capital I
* underscore prefixing instance variables

If you have any questions feel free to tweet or email me.

David Gadd

Sunday, January 17, 2010

Re-awakening the blog

This blog has been asleep for almost 3 years. It's time to wake it back up again (add some new posts). There's only so much I can fit in 140 characters on Twitter.