This 1-hour-20-minute screencast guides you through a TDD kata for designing with the Model-View-Presenter pattern in iOS, using the JetBrains AppCode IDE for Objective-C and the OCMock static library.
The premise of this kata is that the ViewControllers in a typical iOS application are tightly coupled to presentation-layer concerns. Rather than attempting to write tests directly against the view controllers, you write unit tests to drive out a Presenter class, injected with multiple protocols, that coordinates activities between those protocols. In this classic "bank account transfer" example, the presenter delegates method calls to a remote account repository protocol, a local account repository protocol, and a view protocol. The use of OCMock to mock these protocols enables us to design and understand the interactions, and to "generate-by-usage" each element of the MVP pattern. Later in the kata an IoC container class, "ServiceLocator", is designed by unit test to standardize presenter instantiation in a single location.
This kata uses the JetBrains AppCode IDE to generate all tests and code, with a heavy emphasis on effective use of keyboard shortcuts to generate classes, protocols, methods and import statements quickly and naturally as part of the design process. Please note that in this screencast, the key mappings have been set to the standard keymap used by IntelliJ IDEA (Java). This keymap is often set as the default in other JetBrains IDEs. (JetBrains provides IDEs for multiple programming languages.)
To begin the kata screencast, click here.
Monday, December 03, 2012
Monday, February 13, 2012
TDD with Objective-C and Calculator Kata (using JetBrains' AppCode)
I've just created a 1-hour tutorial/screencast that demonstrates TDD in Objective-C (iOS 5) via Roy Osherove's Calculator Kata. The screencast primarily uses JetBrains' new AppCode IDE for Objective-C, but it also flips occasionally into Xcode 4.2 to set up a storyboard with a simple UIViewController that connects to the TDD-created Calculator class.
The screencast demonstrates a variety of layouts and keyboard shortcuts for AppCode (and, to a lesser extent, Xcode), as well as covering a number of language features of Objective-C.
Please have a look, and if you have any questions, send me a comment via my Twitter account.
Screencast: Learning Objective-C via TDD and Calculator Kata
Sunday, February 12, 2012
DDD Kata, Part 4 (Service Layer with Mocks)
Pre-Requisite: DDD Kata Part 3
Kata Review
In part 2 of the kata, you built a simple service test to demonstrate the passing of the Item from the Inventory aggregate root to the Invoice aggregate root. In part 3 of the kata, you created the IUnitOfWork interface to manage atomic transactions with commits and rollbacks.
Now we need to design the real service.
In this test we will "inject" repository interfaces into the service class constructor to do the work of persisting the state changes to our domain entities. The UnitOfWork we created in part 3 of the kata will assist us in this effort.
Completed kata example on github: DDD Kata Part 4 sample code (github)
NOTE If you haven't already downloaded Rhino Mocks, download it and add the DLLs to a 3rd Party Libs directory for reference.
1. Open the previous solution you created in kata 3.
2. Add a reference to RhinoMocks.DLL to the library "Kata.Services.Tests.Unit".
3. Use RhinoMocks to mock the following interfaces (use Resharper to generate the new ones).
NOTE Rhino Mocks offers three mocking levels: stub, dynamic, and strict. The third choice, strict, strictly enforces the test specifications. Start with strict mocks for now.
- IInvoiceRepository
- IInventoryRepository
- IUnitOfWorkFactory
- IUnitOfWork
private MockRepository _mockRepository;
private IInvoiceRepository _invoiceRepository;
private IInventoryRepository _inventoryRepository;
private IUnitOfWorkFactory _unitOfWorkFactory;
private IUnitOfWork _unitOfWork;
[SetUp]
public void SetUp()
{
_mockRepository = new MockRepository();
_invoiceRepository = _mockRepository.StrictMock<IInvoiceRepository>();
_inventoryRepository = _mockRepository.StrictMock<IInventoryRepository>();
_unitOfWorkFactory = _mockRepository.StrictMock<IUnitOfWorkFactory>();
_unitOfWork = _mockRepository.StrictMock<IUnitOfWork>();
}
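Before this fixture compiles, the four mocked interfaces need to exist. Here is one minimal sketch; the signatures are inferred from the expectations used later in this kata (LoadInventoryByProduct, Save, Create, Commit) and your own generate-by-usage results may differ. The empty Inventory and Invoice classes are placeholders only; the real ones come from DDD Kata Part 2.

```csharp
using System;

// Placeholder entities; the real classes come from DDD Kata Part 2.
public class Inventory { }
public class Invoice { }

// Sketch only: signatures inferred from the expectations later in this kata.
public interface IUnitOfWork : IDisposable
{
    void Commit();
}

public interface IUnitOfWorkFactory
{
    IUnitOfWork Create();
}

public interface IInventoryRepository
{
    Inventory LoadInventoryByProduct(IUnitOfWork unitOfWork, string productCode);
}

public interface IInvoiceRepository
{
    void Save(IUnitOfWork unitOfWork, Invoice invoice);
}
```

Passing the IUnitOfWork into each repository call keeps the transaction context explicit, which is what the expectations below verify.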
4. Create a test.
5. Enter code comments, and basic Rhino Mocks method calls, to generate a test skeleton:
[Test]
public void CreateSimpleInvoiceMethod_ProductCodeAndSerialNumberInputs_GeneratesSimpleInvoice()
{
// declare constants
// expectations
_mockRepository.ReplayAll();
// call to new service method
_mockRepository.VerifyAll();
}
Now populate the test skeleton as follows:
6. In declare constants, create constants for productCode and serialNumber
7. In declare constants, build an Inventory instance using method StockItemBy()
8. In declare constants, create an Invoice
9. Rhino Mocks matches expectations based on object equality, so make sure you have assigned Ids to all your objects.
// declare constants
const string productCode = "ABCD1234";
const string serialNumber = "BB2135315";
var inventory = new Inventory { Id = 1234 };
inventory.StockItemBy(productCode, serialNumber);
var invoice = new Invoice() { Id = 1234 };
10. In expectations, expect that IUnitOfWorkFactory creates IUnitOfWork.
NOTE If the Create() method exists on the implementation class (UnitOfWorkFactory) but not on the interface, use Resharper to generate it on the interface.
11. Expect that InventoryRepository.LoadInventoryByProduct(IUnitOfWork uow, string productCode) returns the Inventory instance.
12. Add comment: // Call to Inventory.PullItemBy(productCode)
13. Add comment: // Call to Invoice.BillItem(item)
NOTE These must be comments only, as they aren't calls on mock objects. Making the actual calls here would remove the item from inventory outside the method under test and cause a null error below.
14. Expect that InvoiceRepository.Save(IUnitOfWork uow, Invoice invoice) saves the invoice.
15. Expect that IUnitOfWork.Commit() is called
NOTE If the Commit() method exists on the implementation class (UnitOfWork) but not on the interface, use Resharper to generate it on the interface.
16. Expect that IUnitOfWork.Dispose() is called.
// expectations
Expect.Call(_unitOfWorkFactory.Create()).Return(_unitOfWork);
Expect.Call(_inventoryRepository.LoadInventoryByProduct(_unitOfWork, productCode)).Return(inventory);
// Call to Inventory.PullItemBy(productCode)
// Call to Invoice.BillItem(item)
_invoiceRepository.Save(_unitOfWork, invoice);
_unitOfWork.Commit();
_unitOfWork.Dispose();
17. Between ReplayAll and VerifyAll, create InvoicingService instance with all mocked interfaces.
_mockRepository.ReplayAll();
var sut = new InvoicingService(_unitOfWorkFactory, _inventoryRepository, _invoiceRepository);
Invoice actualInvoice = sut.CreateSimpleInvoice(productCode, serialNumber);
_mockRepository.VerifyAll();
18. Now use Resharper to generate the method under test: InvoicingService.CreateSimpleInvoice(productCode,serialNumber).
19. Run the test and watch it fail.
20. Use the expectations set in the test to assemble the method.
21. Remember to wrap the calls in an IUnitOfWork using statement, and to commit at the end of the block.
public Invoice CreateSimpleInvoice(string productCode, string serialNumber)
{
using(IUnitOfWork unitOfWork = _unitOfWorkFactory.Create())
{
Inventory inventory = _inventoryRepository.LoadInventoryByProduct(unitOfWork, productCode);
Item item = inventory.PullItemBy(productCode);
var invoice = new Invoice();
invoice.BillItem(item);
_invoiceRepository.Save(unitOfWork, invoice);
unitOfWork.Commit();
return invoice;
}
}
22. The test will fail because the Invoice's Id cannot be known in advance, so MODIFY the expectation for InvoiceRepository.Save() by adding:
LastCall.IgnoreArguments();
23. All tests should now pass.
This completes DDD kata part 4.
The next kata will introduce Fluent NHibernate as an implementation framework against the domain entities and repository interfaces that you have created so far.
DDD Kata, Part 3 (build atomic transaction manager i.e. UnitOfWork)
Pre-requisite: DDD Kata Part 2
Kata Focus
1) Work occurs in the Repository layer, which will be used to persist to and from a data store. The data store will be encapsulated behind interfaces.
2) A pre-requisite activity is to build a wrapper interface to encapsulate transaction commit/rollback, with commit and rollback occurring on Dispose().
Completed kata example on github: DDD Kata Part 3 sample code (github)
The Kata
Time goal: under 30 minutes
Repository layer
1. Create new class libraries:
- Kata.Repository.Tests.Unit
- Kata.Repository
We will start by creating the interface to wrap transaction commits and rollbacks. For an initial, simple name, we'll use AtomicTransactionManager. In a few minutes we will refactor that to use the name of the corresponding design pattern.
Repository: AtomicTransactionManager
1. In the new Repository unit test library, create class AtomicTransactionManagerTests.cs
2. Verify that AtomicTransactionManager is instance of IAtomicTransactionManager.
3. Verify that the constructor of AtomicTransactionManager sets TransactionState property to “Is Begun”.
NOTE You should be using Resharper's "generate-by-usage" (Alt-Enter) to generate these properties and methods of the sut. However, make sure that the declared type of the sut is the interface; otherwise your generate-by-usage will create class-only properties and methods. It should be creating the properties and methods on the interface.
4. Verify that the Commit() method sets the TransactionState property to “CommitRequested”.
The actual name for this design pattern is Unit of Work. You can read about it here: Martin Fowler, PEAA: Unit of Work.
5. Refactor the class and interface you have created so far to UnitOfWork and IUnitOfWork.
6. Verify that IUnitOfWork is instance of IDisposable
7. Verify that the Dispose() method sets TransactionState property to “RolledBack”.
8. Verify that calling first the Commit() method, then the Dispose() method, sets TransactionState Property to “Committed”.
UnitOfWorkFactory
Use a factory class to encapsulate the generation of the IUnitOfWork.
9. Verify that UnitOfWorkFactory is instance of IUnitOfWorkFactory.
10. Verify that IUnitOfWorkFactory.Create() returns an IUnitOfWork instance.
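One possible shape for the finished classes, assembled from the verifications above. This is a sketch only: the string states come from the steps, but tracking them in a single property is just one way to satisfy the tests, and your generated code may differ.

```csharp
using System;

// Sketch of one implementation satisfying the verifications above.
public interface IUnitOfWork : IDisposable
{
    string TransactionState { get; }
    void Commit();
}

public interface IUnitOfWorkFactory
{
    IUnitOfWork Create();
}

public class UnitOfWork : IUnitOfWork
{
    public string TransactionState { get; private set; }

    public UnitOfWork()
    {
        TransactionState = "Is Begun";          // step 3
    }

    public void Commit()
    {
        TransactionState = "CommitRequested";   // step 4
    }

    public void Dispose()
    {
        // Commit wins only if it was requested before disposal (steps 7 and 8).
        TransactionState = TransactionState == "CommitRequested"
            ? "Committed"
            : "RolledBack";
    }
}

public class UnitOfWorkFactory : IUnitOfWorkFactory
{
    public IUnitOfWork Create()
    {
        return new UnitOfWork();                // steps 9 and 10
    }
}
```

Rolling back by default on Dispose() is the safety net: if the using block exits via an exception before Commit() is called, the transaction is rolled back.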
Part 3 of the kata is complete.
In the next kata, we will build the service layer with repository interfaces and mock objects.
Continue with DDD Kata Part 4
Tuesday, November 22, 2011
DDD Kata, part 2 (Add second aggregate root to domain. Service method stage 1)
Pre-requisite: DDD Kata part 1
Completed kata example on github: DDD Kata Part 2 sample code (github)
Kata Focus
1) A second aggregate root (Inventory).
2) Business methods in each aggregate root to transfer a Product's Item from Inventory to Invoice
3) Service method, stage 1: Non-persistent, no mocks; only to verify service method created, and that it passes Item across the aggregate roots.
NOTE The next kata (part 3) will introduce design of service orchestration through mocks, repository interfaces, and IUnitOfWork with IUnitOfWorkFactory.
The Kata
Time goal: under 30 minutes
Domain: Inventory
Note Steps 1 through 5 work through the same concepts (object identity, read-only collections, sets) as were explored in DDD kata part 1. You may wish to build to the end of step 5 only once, and then use this as a jumping off point for the newer material in kata 2.
1. M: New test classes for Inventory, Product and Item--start by verifying that they are instances of DomainEntityBase.
2. M: Verify that Inventory.Products is read-only collection (instance of IEnumerable<Product>).
3. M: Verify that Inventory.AddProduct() increments Inventory.Products collection property.
4. M: Verify that Product.Items is read-only collection (instance of IEnumerable<Item>).
5. M: Verify that Product.AddItem() increments Product.Items collection property.
New concepts begin here.
Domain: DomainEntityBase
Changes to DomainEntityBase are necessary for proper collection add/remove behaviour on transient objects (with Id = 0).
1. M: Verify that TransientId is of type System.Guid.
2. M: Verify that TransientId has a non-empty value (i.e. not = Guid.Empty).
3. B: Verify that two instances of DomainEntityBase with 0 Id, but matching TransientId values are equal.
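The three steps above can be satisfied with a sketch like the following; the equality and hash-code details are assumptions made for illustration, and your Resharper-generated code may differ.

```csharp
using System;

public class DomainEntityBase
{
    public int Id { get; set; }

    // Assigned once per instance; used for equality while the entity
    // is still transient (Id == 0). Never Guid.Empty (step 2).
    public Guid TransientId { get; private set; }

    public DomainEntityBase()
    {
        TransientId = Guid.NewGuid();
    }

    public override bool Equals(object obj)
    {
        var other = obj as DomainEntityBase;
        if (other == null) return false;
        if (Id == 0 && other.Id == 0)
            return TransientId == other.TransientId;   // step 3
        return Id == other.Id;
    }

    public override int GetHashCode()
    {
        return Id == 0 ? TransientId.GetHashCode() : Id.GetHashCode();
    }
}
```

Because the TransientId never changes, a transient entity keeps a stable hash code while it sits in a set, which is what makes the collection add/remove behaviour reliable before the ORM assigns an Id.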
Domain: Inventory
4. M: Verify that Inventory.GetNewOrExistingProductBy(string productCode) returns product with matching code.
Hint The methods in tests 7 and 8 will both call to the method created in test 6.
5. M: Verify that Inventory.StockItemBy(string productCode, int serialNumber):
- increments count of Inventory.Products.First().Items
- sets Product.ProductCode and Item.SerialNumber
6. M: Verify that Inventory.PullItemBy(string productCode):
- returns Item from Product.Items
- decrements Product.Items (i.e. the Item has been removed)
7. M: Refactor LineItem. Change its Product property to reference Item instead. Fix and update any broken tests.
8. B: Verify that Invoice.BillItem(Item item) increments Invoice.LineItems, and that the LineItem references the billed Item.
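Taken together, the steps above describe a hand-off of the Item between the two aggregate roots. The sketch below uses the kata's method names (GetNewOrExistingProductBy, StockItemBy, PullItemBy, BillItem), but the class bodies are assumptions made only to illustrate the flow; the real kata classes derive from DomainEntityBase and guard their collections.

```csharp
using System.Collections.Generic;
using System.Linq;

// Sketch only: minimal bodies to illustrate the Item hand-off.
public class Item { public int SerialNumber { get; set; } }

public class Product
{
    private readonly List<Item> _items = new List<Item>();
    public string ProductCode { get; set; }
    public IEnumerable<Item> Items { get { return _items; } }
    public void AddItem(Item item) { _items.Add(item); }
    public void RemoveItem(Item item) { _items.Remove(item); }
}

public class Inventory
{
    private readonly List<Product> _products = new List<Product>();
    public IEnumerable<Product> Products { get { return _products; } }

    public Product GetNewOrExistingProductBy(string productCode)
    {
        // Keeps product codes unique within the aggregate (test 4).
        var product = _products.FirstOrDefault(p => p.ProductCode == productCode);
        if (product == null)
        {
            product = new Product { ProductCode = productCode };
            _products.Add(product);
        }
        return product;
    }

    public void StockItemBy(string productCode, int serialNumber)
    {
        // Test 5: creates or reuses the Product, then adds the Item.
        GetNewOrExistingProductBy(productCode)
            .AddItem(new Item { SerialNumber = serialNumber });
    }

    public Item PullItemBy(string productCode)
    {
        // Removes the Item from the Product and hands it to the caller.
        var product = GetNewOrExistingProductBy(productCode);
        var item = product.Items.First();
        product.RemoveItem(item);
        return item;
    }
}

public class LineItem { public Item Item { get; set; } }

public class Invoice
{
    private readonly List<LineItem> _lineItems = new List<LineItem>();
    public IEnumerable<LineItem> LineItems { get { return _lineItems; } }

    public void BillItem(Item item)
    {
        // Test 8: the new LineItem references the billed Item.
        _lineItems.Add(new LineItem { Item = item });
    }
}
```

Routing both StockItemBy and PullItemBy through GetNewOrExistingProductBy is the design choice the hint above points at: one method owns the lookup-or-create logic for Products.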
Service layer, Stage 1
Non-persistent, just verifying the football pass of Item from one aggregate root to another.
1. M: Create new class libraries:
- Kata.Services.Tests.Unit
- Kata.Services
Part 2 of the kata is complete.
Continue with DDD Kata Part 3
Sunday, October 30, 2011
DDD Kata, part 1 (simple domain: Invoice and LineItem)
Kata Focus
1) Object identity and equality by Id
2) Id maintained in base class (entity object)
3) Equality on properties (value object)
4) A single aggregate root
5) Associations controlled from aggregate root (read-only, unique sets)
6) Business logic verified from the aggregate root
The kata will focus 80% on ORM mechanics (such as ORM issues of identity and equality) and 20% business requirements; tests are therefore delineated as M (for Mechanics) or B (for Business requirement).
The Kata
Time goal: under 30 minutes
A. DomainEntityBase
1. M: Verify that two instances of DomainEntityBase are equal when they have the same ID value
2. M: Verify that two instances are NOT equal when they have different ID values
3. M: Verify that two instances are NOT equal when they have 0 ID values.
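A sketch of a DomainEntityBase that passes the three tests above. Treating entities with a 0 Id as equal only by reference is one design choice among several (part 2 of the kata revisits transient equality with a TransientId); the full test examples appear at the end of this post.

```csharp
public class DomainEntityBase
{
    public int Id { get; set; }

    public override bool Equals(object obj)
    {
        var other = obj as DomainEntityBase;
        if (other == null) return false;
        // Transient entities (Id == 0) are equal only to themselves (test 3).
        if (Id == 0 || other.Id == 0) return ReferenceEquals(this, other);
        return Id == other.Id;  // tests 1 and 2
    }

    public override int GetHashCode()
    {
        return Id;
    }
}
```

This mirrors the ORM convention that identity comes from the database-assigned Id, so two unsaved entities must never collapse into one inside a set.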
B. Invoice, Money, and LineItems association
1. M: Verify that Invoice is an instance of DomainEntityBase.
2. M: Verify that LineItem is an instance of DomainEntityBase.
3. M: Verify that Money's constructor accepts Amount (decimal) and Currency (string) parameters whose values match equivalent properties.
4. M: Verify that Money.Amount and Money.Currency properties are read-only
5. M: Verify that two Moneys are equal when they have the same Amount and Currency.
6. M: Verify that Invoice has a read-only collection of LineItems.
7. M: Verify that adding a LineItem to Invoice increases its count of LineItems from 0 to 1
8. M: Verify that adding the SAME LineItem (by identifier) does not increment the set of LineItems.
9. M: Verify that the bi-directional reference (LineItem.Invoice) equals the owning Invoice.
10. B: Given an existing LineItem, when I try to add a LineItem without a ProductCode,
then I am informed that I must provide a ProductCode.
Bonus (outside the 30 minute kata window)
1. B: Given an existing LineItem with Price (type Money) of Currency CDN, when I try to add a LineItem with a USD Price, then I am informed that all LineItems must share the same Currency.
2. B: Given a set of LineItems, when I check the SubTotal for the Invoice, then the SubTotal matches Sum of the Quantity of LineItems times the Price.
3. B: Given an existing LineItem, when I try to add another LineItem with the same ProductCode,
then the original LineItem for that ProductCode has its Quantity incremented by the quantity of the added item.
4. B: Given an Invoice with LineItems, when the Currency of LineItems is USD and the SubTotal > 100M, then the Discount amount equals 5% of the SubTotal.
Continue with DDD Kata part 2
*****
Test Examples
DomainEntityBase, test 1:
[Test]
public void TwoInstance_SameIdInput_AreEqual()
{
const int id = 1325123;
var sut1 = new DomainEntityBase { Id = id };
var sut2 = new DomainEntityBase { Id = id };
Assert.AreEqual(sut1, sut2);
}
DomainEntityBase, test 2:
[Test]
public void TwoInstance_DifferentIdInput_AreNotEqual()
{
var sut1 = new DomainEntityBase { Id = 123512 };
var sut2 = new DomainEntityBase { Id = 64236 };
Assert.AreNotEqual(sut1, sut2);
}
DomainEntityBase, test 3:
[Test]
public void TwoInstance_ZeroIdInput_AreNotEqual()
{
var sut1 = new DomainEntityBase { Id = 0 };
var sut2 = new DomainEntityBase { Id = 0 };
Assert.AreNotEqual(sut1, sut2);
}
Invoice_Money_LineItems, test 1:
[Test]
public void Constructor_NoInputs_IsInstanceOfDomainEntityBase()
{
var sut = new Invoice();
Assert.IsInstanceOf(typeof(DomainEntityBase), sut);
}
Invoice_Money_LineItems, test 2:
[Test]
public void Constructor_NoInputs_IsInstanceOfDomainEntityBase()
{
var sut = new LineItem();
Assert.IsInstanceOf(typeof(DomainEntityBase), sut);
}
Invoice_Money_LineItems, test 3:
[Test]
public void Constructor_AmountAndCurrencyInputs_MatchGetterProperties()
{
const decimal amount = 3.25M;
const string currency = "CDN";
var sut = new Money(amount, currency);
Assert.AreEqual(amount, sut.Amount);
Assert.AreEqual(currency, sut.Currency);
}
Invoice_Money_LineItems, test 4:
[Test]
public void Constructor_AmountAndCurrencyInputs_AreReadOnly()
{
const decimal amount = 3.25M;
const string currency = "CDN";
var sut = new Money(amount, currency);
Assert.IsFalse(sut.GetType().GetProperty("Amount").CanWrite);
Assert.IsFalse(sut.GetType().GetProperty("Currency").CanWrite);
}
Invoice_Money_LineItems, test 5:
[Test]
public void TwoInstances_SameCurrencyAndAmountInputs_AreEqual()
{
const decimal amount = 3.25M;
const string currency = "CDN";
var sut1 = new Money(amount, currency);
var sut2 = new Money(amount, currency);
Assert.AreEqual(sut1, sut2); // use struct for default "all-class-members" equality
}
Invoice_Money_LineItems, test 6:
[Test]
public void LineItemsProperty_Getter_IsReadOnlyCollection()
{
var sut = new Invoice();
Assert.IsInstanceOf(typeof(IEnumerable<LineItem>), sut.LineItems);
}
Invoice_Money_LineItems, test 7:
[Test]
public void AddLineItemsMethod_LineItemInput_IncrementLineItemsCollection()
{
var sut = new Invoice();
Assert.AreEqual(0, sut.LineItems.Count());
sut.AddLineItem(new LineItem { ProductCode = "aaa"});
Assert.AreEqual(1, sut.LineItems.Count());
}
Invoice_Money_LineItems, test 8:
[Test]
public void AddLineItemsMethod_SameLineItemTwiceInput_DoesNotIncrementLineItemsCollection()
{
var sut = new Invoice();
var lineItem = new LineItem { Id = 3522, ProductCode = "aaa" };
sut.AddLineItem(lineItem);
Assert.AreEqual(1, sut.LineItems.Count());
sut.AddLineItem(lineItem);
Assert.AreEqual(1, sut.LineItems.Count());
}
Invoice_Money_LineItems, test 9:
[Test]
public void AddLineItemsMethod_LineItemInput_InvoicePropertyMatchesParent()
{
var sut = new Invoice();
var lineItem = new LineItem { Id = 3522, ProductCode = "aaa"};
sut.AddLineItem(lineItem);
Assert.AreEqual(lineItem.Invoice, sut);
}
Invoice_Money_LineItems, test 10:
// Given an existing LineItem, when I try to add a LineItem without a ProductCode,
// then I am informed that I must provide a ProductCode.
[Test]
[ExpectedException(typeof(InvalidLineItemException), ExpectedMessage = "You must provide a ProductCode")]
public void AddLineItemsMethod_LineItemWithoutProductCode_ThrowsException()
{
var sut = new Invoice();
var lineItem = new LineItem { Id = 3522 };
sut.AddLineItem(lineItem);
}
Sunday, July 17, 2011
Branch-Per-Feature using the Total Integration Total Isolation Principle
I posted this originally as a comment on a Google Plus thread here, which has a more complete discussion of the ideas (and disagreements) regarding Branch-Per-Feature and Continuous Integration.
The purpose of this entry is to focus on how a principle of Total Integration and Total Isolation shapes the approach taken to branch-per feature.
*****
In this approach, all of the following are givens:
1) releases occur on a regular basis, and development is oriented to the release schedule
2) releases coincide with merge to master
3) the release is tagged, and an empty commit immediately following the release commit is also tagged (as the start of the new cycle)
4) the existing integration ("dev") branch and the qa branch from last cycle are now re-pointed to the new start-of-cycle tag
Key point here: master is ONLY updated once per release, at the point of release--but the integration/dev and qa branches originate from (and are therefore identical to) master
The process, which adheres to a "Total Integration Total Isolation" principle, now begins:
1) Each developer chooses a ticket and creates a branch with that number (eg. In JIRA, tickets ABC-141, ABC-142, ABC-143, ABC-144)
2) This branch will be short-lived (it won't live past the release of the ticket), although the actual commits will be preserved.
3) The developer commits frequently to their branch (eg. ABC-143).
4) However, as per some of the heated discussion in this thread, the developer also merges every few hours with the integration/dev branch, checks for conflicts and compile failures, and runs all tests either locally or via the CI server's integration/dev branch build.
5) Merge conflicts are resolved (and cached for future reuse if the DVCS permits), but outright failures require the dev to go back to their feature branch and make the fix there before re-attempting the merge to the integration/dev branch.
6) Note that the feature branches never merge FROM the integration branch, because this would violate the isolation side of the Total Integration Total Isolation principle.
7) What about major refactorings or new architecture/scaffolding that one or more features need to share? In that case, a new ticket (eg. dev task ABC-145) is created to hold that shared work, and a branch is created (again, off of the start-of-cycle tag) to hold that major refactoring or scaffolding. The features requiring this branch change their start point/dependency from the start-of-cycle tag to this refactoring/scaffolding branch, and they will retain this dependency until the end of the release.
8) Going forward, each new work is commited only to the feature branch, and each feature branch is regularly merged to integration/dev. This results in the Total Integration Total Isolation goal
9) Since every branch now originates from start-of-cycle (or from a shared refactoring/scaffolding branch that originated from start-of-cycle), QA can now safely pick and choose which features to merge onto the qa branch, and ultimately, which to release and merge to master.
10) Any features that were not released can be discarded (if rejected entirely) or rebased onto the next start-of-cycle tag (if to be resumed).
The purpose of this entry is to focus on how a principle of Total Integration and Total Isolation shapes the approach taken to branch-per-feature.
*****
In this approach, all of the following are givens:
1) releases occur on a regular basis, and development is oriented to the release schedule
2) releases coincide with merge to master
3) the release is tagged, and an empty commit immediately following the release commit is also tagged (as the start of the new cycle)
4) the existing integration ("dev") branch and the qa branch from last cycle are now re-pointed to the new start-of-cycle tag
Key point here: master is ONLY updated once per release, at the point of release--but the integration/dev and qa branches originate from (and are therefore identical to) master
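These givens can be sketched concretely in git. In this sketch, the repository setup lines and the names project-dev, project-qa, release-sprint5, and start-of-cycle-sprint6 are hypothetical stand-ins, not commands taken from the post:

```shell
# Hypothetical end-of-release bookkeeping; branch/tag names are stand-ins.
set -e
cd "$(mktemp -d)" && git init -q
git config user.email dev@example.com && git config user.name dev
git commit -q --allow-empty -m "prior release history"
git branch project-dev && git branch project-qa

# 2/3) at release, master is tagged...
git tag release-sprint5
# ...and an empty commit immediately after it is tagged as the cycle start
git commit -q --allow-empty -m "start of cycle"
git tag start-of-cycle-sprint6
# 4) re-point the integration and qa branches at the start-of-cycle tag
git checkout -q project-dev && git reset -q --hard start-of-cycle-sprint6
git checkout -q project-qa && git reset -q --hard start-of-cycle-sprint6
```

Because the integration and qa branches are reset to a tag sitting one empty commit past the release, they are content-identical to master, which is exactly the key point above.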
The process now begins, adhering to the "Total Integration Total Isolation" principle:
1) Each developer chooses a ticket and creates a branch with that number (eg. In JIRA, tickets ABC-141, ABC-142, ABC-143, ABC-144)
2) This branch will be short-lived (it won't live past the release of the ticket), although the actual commits will be preserved.
3) The developer commits frequently to their branch (eg. ABC-143).
4) However, as per some of the heated discussion in this thread, the developer also merges every few hours with the integration/dev branch, checks for conflicts and compile failures, and runs all tests either locally or via the CI server's integration/dev branch build.
5) Merge conflicts are resolved (and cached for future reuse if the DVCS permits), but outright failures require the dev to go back to their feature branch and make the fix there before re-attempting the merge to the integration/dev branch.
6) Note that the feature branches never merge FROM the integration branch, because this would violate the isolation side of the Total Integration Total Isolation principle.
7) What about major refactorings or new architecture/scaffolding that one or more features need to share? In that case, a new ticket (eg. Dev task ABC-145) is created to hold that shared work and a branch is created (again, off of the start-of-cycle tag) to hold that major refactoring or scaffolding. The features requiring this branch change their start point/dependency from the start-of-cycle tag to this refactoring/scaffolding branch and they will retain this dependency until the end of the release.
8) Going forward, all new work is committed only to the feature branches, and each feature branch is regularly merged to integration/dev. This achieves the Total Integration Total Isolation goal.
9) Since every branch now originates from start-of-cycle (or from a shared refactoring/scaffolding branch that originated from start-of-cycle), QA can now safely pick and choose which features to merge onto the qa branch, and ultimately, which to release and merge to master.
10) Any features that were not released can be discarded (if rejected entirely) or rebased onto the next start-of-cycle tag (if to be resumed).
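Steps 1-8 for a single developer can be sketched as follows; the repository setup lines are scaffolding for the sketch, and the ticket branch reuses the hypothetical ABC-143 from above:

```shell
# Hypothetical single-developer walkthrough of the feature-branch steps.
set -e
cd "$(mktemp -d)" && git init -q
git config user.email dev@example.com && git config user.name dev
git commit -q --allow-empty -m "release" && git tag start-of-cycle
git branch project-dev

# 1) create the ticket branch off the start-of-cycle tag
git checkout -q -b ABC-143 start-of-cycle
# 3) commit frequently to the feature branch
echo "change" > fix.txt && git add fix.txt && git commit -q -m "ABC-143 fix"
# 4/8) merge the feature branch TO integration every few hours;
# 6) never merge FROM integration back into the feature branch
git checkout -q project-dev
git merge -q --no-ff -m "merge ABC-143" ABC-143
```

The merge direction is the whole point: integration/dev accumulates every feature, while ABC-143 itself stays isolated on the start-of-cycle commit plus its own work.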
Friday, July 01, 2011
Branch-Per-Feature: Successful Transitions/Cleanup Between Sprints
Introduction
Branch-per-feature is the discipline of beginning every feature branch for a given sprint off exactly the same commit (typically, the first commit of the sprint). The strict enforcement of isolation between features quickly reveals the bad habits of dependencies we build between multiple features, and forces us to ask the right questions of how to keep our features independent.
Some benefits of branch-per-feature:
- dev: proper isolation of code changes; breaking bad habits of code dependencies between branches
- dev: embracing granularity of code changes
- dev/QA: (almost) painless merging
- QA: ability to assemble a release made up only of branches that are ready
Successful Transitions (Cleanup) Between Sprints
The focus of this blog post is more narrow: to define the steps involved in successful transitions between sprints/releases when using branch-per-feature to manage the release.
We've been working out the logistics for this recently at work, under the guidance of @martinaatmaa and @adymitruk. What this looks like:
Over the course of a given sprint, each feature is branched off a common commit from the start of the sprint. As each feature reaches a point of stability and completion, it is merged back into an integration branch (named something like projectname-dev). When QA is ready to test, all features to be tested are folded into a qa branch (named something like projectname-qa). Upon release, the projectname-qa branch is merged into master and tagged as the release for that sprint. Now that the code has been released, the feature branches for that sprint are no longer required. They are cleaned up (deleted), although the underlying commits are kept.
Examples and screenshots illustrating this process are shown below.
To begin the next sprint, a new empty commit is created and tagged something like start-sprint2. Each feature is created off that starting commit.
Scaffolding
Early in the sprint, if it is determined that a scaffolding commit is necessary to hold some common architecture to be used by all features, a new scaffolding-only feature is created for that purpose. The scaffolding feature branch becomes the new starting point, with all features branching off that prerequisite feature.
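A minimal sketch of the scaffolding pattern in git (the tag start-sprint2 matches the naming above; the branch names scaffolding and feature-a, and the setup lines, are made up for illustration):

```shell
# Hypothetical scaffolding setup; branch names are illustrative only.
set -e
cd "$(mktemp -d)" && git init -q
git config user.email dev@example.com && git config user.name dev
git commit -q --allow-empty -m "start of sprint" && git tag start-sprint2

# the scaffolding-only feature holds the shared architecture
git checkout -q -b scaffolding start-sprint2
echo "shared code" > shared.txt && git add shared.txt
git commit -q -m "scaffolding: shared architecture"
# each dependent feature branches off the scaffolding branch, not the tag
git checkout -q -b feature-a scaffolding
```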
Feature Branch Naming
The tools we are using to achieve this are git and JIRA (with the Greenhopper plugin). For feature branch naming, we use the JIRA ticket names. For example, a project whose tickets are named PAYGATE-265, PAYGATE-286 would have corresponding git feature branch names (paygate-265, paygate-286).
Note: If a scaffolding feature is required in the sprint, the scaffolding branch will have a normal branch name (paygate-262), and dependent feature branches can indicate the dependency in their name (paygate-265-d-262, paygate-286-d-262).
Practise Scenario
To practise my skills in branch-per-feature at home, I've been using Ubuntu, rails, git, and a local install of JIRA/Greenhopper. The project used for this practise is the book Ruby on Rails 3 Tutorial. I've broken out the chapter contents into individual features as JIRA tickets. The JIRA project is broken up into very small sprints (one weekend's worth of work per sprint, probably 4-6 hours at most).
The rest of this blog entry demonstrates branch-per-feature using this simple project, as we fold the features of sprint 1 into an integration branch, then a qa branch, and finally a release branch. We then clean up the old branches and begin work on sprint 2 features, with each one branching off the starting commit of sprint 2.
Here is a screenshot of sprint 1 in JIRA. Each ticket in this sprint has been implemented as a feature branch in git:
To complete the sprint (and its related release), I perform 3 steps:
- merge all the tickets into the integration branch (rails-dev) and test the code
- upon success, merge all passing features into the QA branch (rails-qa) and test the code
- upon success, merge rails-qa into master and tag it as the release.
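The three steps above can be sketched end-to-end for a single ticket. The setup lines are scaffolding for the sketch; the branch and tag names follow the post's conventions, but the specific ticket jira-lrnrails-5 is only an example:

```shell
# Hypothetical end-to-end release for one ticket.
set -e
cd "$(mktemp -d)" && git init -q
git config user.email dev@example.com && git config user.name dev
main=$(git symbolic-ref --short HEAD)   # "master" in the post
git commit -q --allow-empty -m "start sprint 1" && git tag start-lrnrails-sprint1
git branch rails-dev && git branch rails-qa
git checkout -q -b jira-lrnrails-5 start-lrnrails-sprint1
echo "feature" > f.txt && git add f.txt && git commit -q -m "LRNRAILS-5 feature"

# 1) merge each ticket branch into the integration branch and test
git checkout -q rails-dev && git merge -q --no-ff -m "integrate LRNRAILS-5" jira-lrnrails-5
# 2) on success, merge the passing features into the QA branch and test
git checkout -q rails-qa && git merge -q --no-ff -m "qa LRNRAILS-5" jira-lrnrails-5
# 3) on success, merge the QA branch into master and tag the release
git checkout -q "$main" && git merge -q --no-ff -m "release sprint 1" rails-qa
git tag release-lrnrails-sprint1
```

Note that features reach the qa branch directly (not via rails-dev), which is what lets QA assemble a release from only the features that passed.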
With the sprint completed, it's time to go back to JIRA (with Greenhopper plugin) to verify that everything in sprint 1 is closed, and that remaining story points are at 0, followed by setting up and prioritizing sprint 2:
Now that sprint 2 is prepared, it's time for the dev team to begin coding. I switch to the JIRA/Greenhopper task board view, and drag my first ticket LRNRAILS-15 to In Progress:
To begin work on the dev tickets, we need to first set up the new sprint 2 in git. I begin by creating an empty commit off the release/master branch. To do this, I checkout the tag release-lrnrails-sprint1 (or master branch), then create the new empty commit:
git commit --allow-empty
I then tag it with a name such as start-lrnrails-sprint2.
git tag start-lrnrails-sprint2
Since this is the commit that all features will originate from, I will also move my integration (rails-dev) and QA (rails-qa) branches to this commit (by checking out each branch and then using git reset --hard start-lrnrails-sprint2 to point them at the starting commit).
Finally, we need to CLEAN UP (remove) all of the sprint 1 feature branches as they are no longer needed. At work we relied on @adymitruk's bash scripts to construct the delete commands that clear out both the local and remote branches. I tried this with my project at home, and got it to work successfully.
For example, given that I want to delete branch jira-lrnrails-5, my local and remote commands would be:
git branch -D jira-lrnrails-5
git push origin :jira-lrnrails-5
To achieve this for all feature branches which were merged into the release, I checkout the release-lrnrails-sprint1 tag (or master branch), and then run the following commands, first as a preview (using echo to verify the commands):
git branch --merged | grep -i lrnrails | xargs -I {} echo git branch -D {}
and then the actual execution of the commands:
git branch --merged | grep -i lrnrails | xargs -I {} git branch -D {}
Then, the same for the remote branches:
git branch -r --merged | grep -i lrnrails | cut -d '/' -f 2 | xargs -I {} echo git push origin :{}
git branch -r --merged | grep -i lrnrails | cut -d '/' -f 2 | xargs -I {} git push origin :{}
Now, with all of the feature branches deleted for sprint 1, the view is much cleaner in gitk:
Now I am ready to begin work on sprint 2. I checkout the starting point commit for this sprint:
git checkout start-lrnrails-sprint2
and then create my feature branch off that starting commit:
git checkout -b jira-lrnrails-15
I then proceed to write the code for this feature. At a certain point, I will do one or more commits for this feature branch:
git add -A .
git commit -m "LRNRAILS-15 Adding variables to the views"
With the feature branch committed, I am ready to test my integration branch for the first time. I check out the integration branch (rails-dev) and then do a git merge --no-ff against the new feature branch:
git merge --no-ff jira-lrnrails-15
This gives me my first integration branch merge of sprint 2:
From here, the second (and subsequent) sprints can move forward using branch-per-feature to properly isolate code changes, and to enable QA to assemble release packages based on a specifically chosen subset of verified features.