The tests have adhered more closely to a "test-last" or "test-middle" model than to a "test-first" model, since the initial tests were built immediately after the domain model was migrated:
- First, the domain model was created, with entity classes mapped closely to the underlying database tables
- Second, the original business logic was migrated across, which in effect redistributed it from the Transaction Script pattern to the Domain Model pattern (sketched below).
- Finally, an initial set of unit tests was built against the classes of the domain model.
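To make that redistribution concrete, here is a minimal sketch (in Java, with a hypothetical Order entity and discount rule rather than anything from the actual codebase) of the same rule expressed first as a transaction script and then as behaviour on the domain entity:

```java
import java.math.BigDecimal;

// Before: Transaction Script -- the procedure owns the rule, and the entity is little
// more than a bag of fields.
class OrderProcessingScript {
    void applyDiscount(Order order) {
        if (order.getTotal().compareTo(new BigDecimal("1000")) > 0) {
            order.setTotal(order.getTotal().multiply(new BigDecimal("0.95")));
        }
    }
}

// After: Domain Model -- the same rule now lives on the entity itself,
// which is what the initial unit tests were written against.
class Order {
    private BigDecimal total = BigDecimal.ZERO;

    BigDecimal getTotal() { return total; }
    void setTotal(BigDecimal total) { this.total = total; }

    void applyVolumeDiscount() {
        if (total.compareTo(new BigDecimal("1000")) > 0) {
            total = total.multiply(new BigDecimal("0.95"));
        }
    }
}
```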
- If a unit test addresses an individual entity method farther to the right in the sequence diagram, it will be small and focused exclusively on verifying that method's behaviour.
- If a unit test addresses a method at the far left of the sequence diagram (a point of entry), the test tends to be much larger: it begins with a substantial stubbed object graph and ends by verifying multiple points within that graph. These larger tests tend to be grouped, with each variation checking a different scenario from a matrix diagram. Both shapes are sketched below.
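As a rough illustration of the two shapes, using JUnit 5 and the hypothetical Order entity from the earlier sketch (the point-of-entry methods addLineItem, submit, isSubmitted and getLineItems are equally hypothetical, not the project's actual API):

```java
import static org.junit.jupiter.api.Assertions.*;
import java.math.BigDecimal;
import org.junit.jupiter.api.Test;

class OrderTests {

    // Far right of the sequence diagram: one entity method, one focused assertion.
    @Test
    void volumeDiscountAppliesAboveThreshold() {
        Order order = new Order();
        order.setTotal(new BigDecimal("2000"));

        order.applyVolumeDiscount();

        assertEquals(0, new BigDecimal("1900").compareTo(order.getTotal()));
    }

    // Far left of the sequence diagram (point of entry): a larger stubbed object graph
    // is built up front, then several points in the graph are verified at the end.
    // Sibling tests of this one would each exercise a different scenario from the matrix diagram.
    @Test
    void submittingAnOrderRollsUpLineItemsAndAppliesTheDiscount() {
        Order order = new Order();
        order.addLineItem(new LineItem("WIDGET", 10, new BigDecimal("250")));
        order.addLineItem(new LineItem("GADGET", 2, new BigDecimal("50")));

        order.submit();   // hypothetical point-of-entry method

        assertTrue(order.isSubmitted());
        assertEquals(2, order.getLineItems().size());
        // 2,600 total less the 5% volume discount
        assertEquals(0, new BigDecimal("2470").compareTo(order.getTotal()));
    }
}
```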
So, does this imply that QA no longer has any work to do? No, but the issues that are found by QA now tend to concentrate elsewhere, either in:
1) Newly discovered business logic that differs from the understanding currently reflected in the domain tests (resulting in 95% domain coverage rather than 100%).
2) Layers ABOVE the domain model layer that are more difficult to test, including:
a) Repository (database query) layer: missing query information
b) Service layer: incorrect coordination of calls to repository and domain tiers
c) Remote facade/web service layer: missing elements or nulls when mapping to/from web service DTOs or DataSets (a sketch of such a mapping check follows this list)
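For defects of type c), one kind of check that can be pulled back into the automated suite is a mapping test asserting that every required field survives the entity-to-DTO translation. This is only a rough sketch: OrderDto and OrderDtoMapper are hypothetical stand-ins for the real web service contract, and the entity is the hypothetical Order from above.

```java
import static org.junit.jupiter.api.Assertions.*;
import java.math.BigDecimal;
import org.junit.jupiter.api.Test;

class OrderDtoMappingTests {

    // Targets the "missing elements or nulls" class of defect: every field the
    // web service contract requires must survive the entity-to-DTO mapping.
    @Test
    void mappingEntityToDtoPopulatesAllRequiredFields() {
        Order order = new Order();                      // hypothetical entity
        order.setCustomerName("ACME");
        order.setTotal(new BigDecimal("150"));

        OrderDto dto = OrderDtoMapper.toDto(order);     // hypothetical mapper

        assertNotNull(dto.getCustomerName(), "customer name was dropped in mapping");
        assertEquals(0, new BigDecimal("150").compareTo(dto.getTotal()));
    }
}
```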
Integration tests do exist for testing a complete process, including database interactions and the service layer, but these tend to be tied to specific data and require some amount of setup, often involving cooperation from QA.
On a couple of previous projects, I have had some success with broader automated test coverage across all layers, but this required a database sandbox: rather than working with a copy of production data (which tends to be large, constantly evolving, and therefore poorly suited to exercising multiple integration scenarios), the entire database is built from a script, so it can be dropped and recreated before each run of the automated tests and populated with only the subset of data the integration tests require.
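A minimal sketch of that sandbox rebuild, using JUnit 5 and plain JDBC; the connection string, credentials, and script paths are placeholders, and in practice this would live in a shared base class or test fixture rather than being repeated per test class:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import org.junit.jupiter.api.BeforeAll;

class IntegrationTestBase {

    // Placeholder connection string for a dedicated sandbox database,
    // never a copy of production.
    static final String SANDBOX_URL = "jdbc:postgresql://localhost/app_sandbox";

    @BeforeAll
    static void rebuildSandbox() throws Exception {
        try (Connection conn = DriverManager.getConnection(SANDBOX_URL, "test", "test");
             Statement stmt = conn.createStatement()) {
            // Drop and recreate the entire schema from a script...
            stmt.execute(Files.readString(Path.of("sql/create_schema.sql")));
            // ...then load only the small, known data set the integration tests need.
            stmt.execute(Files.readString(Path.of("sql/seed_integration_data.sql")));
        }
    }
}
```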
Conclusion
The ultimate goal for test coverage is to be as complete as possible, covering not just the domain model but every layer above it, up to and including the client.