Monday, June 8, 2015

Stefan Birkner's system-rules library

Just a super quick post about an interesting library I just discovered :)

It's rarely the case that you will have to test things such as a String being printed to the console or a System property being set from the program... but on occasion it happens.

Last week I discovered a library called system-rules, from Stefan Birkner. http://stefanbirkner.github.io/system-rules/
It is really interesting and easy to use, and it can help you easily write tests that involve java.lang.System.

Let's quickly look at an example.
Imagine that for some reason you want to test that the console prints some message... I don't know about you, but the only way I know to do that is to redirect the stream that goes to the console to something you can control and read from (e.g. a file, a log...). This kind of test has lots of boilerplate. It would look something like this:

 @Test  
   public void consoleOutputOldSchoolTest() throws FileNotFoundException {  
     //Create a text file where the output will be redirected, so you can make an assertion later  
     File testingFile = new File("testingFile.txt");  
     //Create a testing stream that writes to the file  
     PrintStream testingStream = new PrintStream(testingFile);  
     //Keep a reference to the original console output stream  
     PrintStream consoleStream = System.out;  
     //Redirect standard out to the testing stream  
     System.setOut(testingStream);  
     //Write something to the "console"  
     System.out.print("test");  
     //Rewire back to the original console output  
     System.setOut(consoleStream);  
     //Close the testing stream so the file can be read and deleted safely  
     testingStream.close();  
     //Just an informative message  
     System.out.println("all back to normal");  
     //Read the output that we are testing  
     Scanner scanner = new Scanner(testingFile);  
     String output = scanner.nextLine();  
     scanner.close();  
     //Do your assertion  
     assertThat(output, is("test"));  
     //Delete the test file  
     testingFile.delete();  
   }  

With Stefan's library you can test the messages that go to the terminal in the blink of an eye:

   @Rule  
   public final SystemOutRule systemOutRule = new SystemOutRule().enableLog();  
   @Test  
   public void systemRulesLibraryConsoleOutputTest() {  
     System.out.print("test");  
     assertThat(systemOutRule.getLog(), is("test"));  
   }  
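
The library has rules for other java.lang.System concerns too. For example, RestoreSystemProperties puts system properties back to their original values once each test finishes; a minimal sketch, from my reading of the docs:

   @Rule  
   public final TestRule restoreSystemProperties = new RestoreSystemProperties();  
   @Test  
   public void systemRulesLibrarySystemPropertyTest() {  
     //The property change only lives for the duration of the test  
     System.setProperty("some.property", "test value");  
     assertThat(System.getProperty("some.property"), is("test value"));  
   }  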

Ok, back to work now ;)

Sunday, April 12, 2015

Install and configure Java in Ubuntu the easy way

Whenever I change laptop or format my hard drive, I end up making a mess with the soft links to configure java...

This post is just a reminder to myself, for the future. I found a clean way I am comfortable with for setting up my system to use Java, via the update-alternatives tool that Ubuntu has. This is how I do it:

1- Download the JDK from Oracle and copy it to your user directory.
I like having it under ~/Java/jdk1.8.0_40

2- I often like installing Maven too, so I do the same: I just create another directory for it and put the downloaded Maven in there, e.g. ~/maven/apache-maven-3.3.1

3- Then I set the necessary environment variables for my user in the ~/.bashrc file.

export JAVA_HOME=/home/djordje/Java/jdk1.8.0_40
export M2_HOME=/home/djordje/maven/apache-maven-3.3.1
export M2=$M2_HOME/bin
export PATH=$PATH:$JAVA_HOME/bin:$M2  


4- I use update-alternatives to register the Java interpreter and compiler with the system.

sudo update-alternatives --install "/usr/bin/java" "java" "/home/djordje/Java/jdk1.8.0_40/bin/java" 1

sudo update-alternatives --install "/usr/bin/javac" "javac" "/home/djordje/Java/jdk1.8.0_40/bin/javac" 1

5- Again I use update-alternatives, but this time to configure which version of the interpreter and compiler is used by default.

sudo update-alternatives --config java
sudo update-alternatives --config javac

When running each command, it shows all the installed versions of Java so you can pick one.
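
If you just want to see what is registered without the interactive prompt, update-alternatives also has a --list option:

update-alternatives --list java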

6- At the end I just run java -version to check that the system is using the version of Java I want.

I like this approach; it is easy to do and you can quickly switch to a different installed version just by using the --config option.

Wednesday, March 18, 2015

Contract testing

Let's imagine the following scenario...
We are working in a distributed system with lots of applications.
The developers understand the importance of avoiding coupling among components, so they decide to create RESTful applications that communicate via XML and JSON,
instead of building applications that are binary-dependent on other applications.

During the development of a feature, the development team made a change to the API, and without realising it they broke one of the consumer apps.
Unfortunately, this bug was really expensive: the company only managed to discover it in its replica, pre-production environment through a long-running
end-to-end functional test, and after determining that what was broken was actually an XML marshaller, there was no quick fix and they had to roll back.

In the root cause analysis meeting, developers from each of the teams that own the failing apps realised that the API change was the reason for the bug
and that no additional work had been done on one of the unmarshallers.
The developers were told to fix the bug and also to come up with a solution that would stop this from happening again.

After fixing the bug, the developers took some time to think about how they could catch this kind of bug before the pre-production environment where the expensive
integration tests run. One of them said, "What we need is consumer contract testing!"...

Consumer contract testing allows consumers and providers of an API to know whether their latest changes to their marshallers or unmarshallers could potentially be
harmful to the other party, without the need to perform an integration test. This is how it works:


1- The provider of the API publishes an example of the API somewhere the consumer can access it (e.g. publishing it in a repo, sending it via email...).
2- The consumer takes the API example and writes a test that tolerantly accesses the values of interest.
   This in-document path (e.g. XPath, JSONPath...) used to retrieve the values from the API example is known as the contract.
3- The consumer publishes the contract in a place where the provider has access to it (e.g. publishing it in a repo, sending it via email...).
4- The provider takes the contract and uses it in a test against the generated output of the application. If that test fails, the provider knows they could potentially break the consumer if they were to release the current version under test (a negotiation can take place).

Let's now have a look at a practical example of each of the steps above.

1- The developers that own the provider app take from their passing acceptance test the output that the application sends back to the consumer, and they save
it into a file called "apiexample.xml", which looks like this:

 <output>  
      <content>  
           <partA>A</partA>  
           <partB>B</partB>  
      </content>  
 </output>  

They send this file over email to the team that owns the consumer application.

2- The developers that own the consumer app will take the example and write queries against it to determine the contract they need. A unit test against the example will do.

 @Test  
    public void apiExampleGeneratesValidatesToContract() throws Exception {  
     XPath xPath = XPathFactory.newInstance().newXPath();  
     String value = xPath.evaluate("/output/content/partB", getSource(readExample("apiexample.xml")));  
     assertThat(value,is(notNullValue()));  
    }  
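
The getSource and readExample methods above are not from any library; they are just small test helpers. A minimal sketch of one possible implementation (using org.xml.sax.InputSource, java.io.StringReader, java.nio.file.Files/Paths and java.nio.charset.StandardCharsets):

  //Hypothetical helpers, assuming the example file sits in the test's working directory  
  private InputSource getSource(String xml) {  
    return new InputSource(new StringReader(xml));  
  }  
  private String readExample(String fileName) throws IOException {  
    return new String(Files.readAllBytes(Paths.get(fileName)), StandardCharsets.UTF_8);  
  }  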

3- Now the developers know that the contract to access what they are interested in is:
 "/output/content/partB"
They can save it in a file called "contract.txt" and send it over email to the other team, so that the provider can make sure they are always outputting according to the contract. Note that these tolerant
paths allow the provider to change any part of the API they want, as long as the contract is respected.
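
For instance, a later version of the API like the one below would still satisfy the contract, since "/output/content/partB" still resolves:

 <output>  
      <content>  
           <partA>A</partA>  
           <partB>B</partB>  
           <partC>C</partC>  
      </content>  
 </output>  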

4- The provider reads the "contract.txt" file and writes a test where the contract is applied to the application's output.

 @Test  
    public void apiExampleGeneratesValidatesToContract() throws Exception {  
     XPath xPath = XPathFactory.newInstance().newXPath();  
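     //Note that "apiexample.xml" here is the provider's own generated output, saved from the passing acceptance test in step 1  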
     String value = xPath.evaluate("/output/content/partB", getSource(readExample("apiexample.xml")));  
     assertThat(value,is(notNullValue()));  
    }  

Now when either of the teams runs their build, they will know if they are breaching the contract, and they will stop the bug from going further than the development environment.

You can find the complete source code of this example here.

Wednesday, March 11, 2015

Yet Another Blog Article About Acceptance Testing


Acceptance tests are tests conducted to determine whether the requirements of a specification are met.
In modern software development, we call this specification the acceptance criteria.

“Whenever possible” it is desirable to acceptance test the system end to end.
By end to end, I mean talking to the system from the outside, through its interfaces.

Note that at the beginning of the previous paragraph, I said “Whenever possible”.
The reason for this is that it would be risky and also costly to integration test our code (against other code we don't control/own). Sometimes applications within a system don't even belong to our company, or they are too costly and slow to run. Because of this, the number of full-stack system/functional tests should be kept very small, almost none.

In acceptance testing we often start from an assumption about those external systems we cannot control. The parts out of our control are faked, and the acceptance criteria are aimed at the parts we do control.

When writing an acceptance test, there is a commonly used format for defining the acceptance criteria, well known as the “given, when, then” format:

- given: The setup/preconditions of the scenario that we will test. It contains what we expect from those remote systems (either internal or external) on which we depend.
- when: The specific call to the exposed interface we are testing.
- then: The validation of the results.

Today's acceptance tests are written with the help of live specification frameworks, such as: JBehave, Fit, FitNesse, Concordion, Yatspec...
Using these tools makes it easier both to understand complex scenarios and to maintain the criteria.


Understanding Yatspec

Next I will talk about writing acceptance tests with a popular live specification framework called Yatspec. I will explain some of its features and describe the way it presents the test report. I will also use an example to show how we could stub out systems we don't control and use them in our acceptance test.

About Yatspec
-it's a live specification framework for Java (https://code.google.com/p/yatspec/)
-produces readable HTML
-supports table/parameterized tests
-allows writing in given-when-then style

 
The scenario
The application we will be testing will receive a GET request from a client, then send subsequent GET requests to two remote systems (A and B), process the responses and POST the result to a third system (C), just before returning it to the client.



The criteria
-Given System A will reply 1 2 3
-And System B will reply 4 5 6
-When the client asks for the known odd numbers
-Then the application responds 1 3 5
-Then 'System C' receives 1 3 5


Creating HTML reports
Before going in depth into our example, I want to spend some time discussing what Yatspec reports look like and the basics needed to create them (if you want to go directly to the scenario implementation, just skip this section).

When a Yatspec specification is run, it generates an HTML report. Advanced options allow you to publish it remotely, but by default it will be written to a temporary file in the file system.
The terminal will tell you where it is, like this:
Yatspec output:
/tmp/acceptancetests/KnownOddNumbersTest.html
We can navigate to it from the browser's URL bar:
file:///tmp/acceptancetests/KnownOddNumbersTest.html

Let's have a look at how it is structured:


(a) This is the title of the report. If Yatspec finds the suffix 'Test' on the class name, it will remove it and just present the rest as the title.

 @RunWith(SpecRunner.class)  
 public class KnownOddNumbersTest extends TestState {  
      //Your tests  
 ...  
 }  


(b) In the contents section you will see a summary of all the test names (there can be multiple tests) in the same specification.



(c) This is the test name. We don't need to add any additional annotations; all we need is to write our test names in “camel case”. If the test throws any exception, it will not be shown in the report.


 @Test  
 public void shouldReceiveResultWhenARequestIsSentToTheApplication() throws Exception {  
       //Test body...  
 }  


(d) At the beginning of each test, the criteria will be presented. Yatspec uses the contents of the method body to generate it. The methods given(), and(), when() and then() are inherited from TestState.java (later I will explain how to use them).

 
 @Test  
   public void shouldReceiveResultWhenARequestIsSentToTheApplication() throws Exception {  
     given(systemARepliesWithNumbers("1,2,3"));  
     and(systemBRepliesWithNumbers("4,5,6"));  
     when(aRequestIsSentToTheApplication());  
     then(theApplicationReturnedValue(), is("1,3,5"));  
     then(systemCReceivedValue(),is("1,3,5"));  
   }  

(e) This is where the test result will be shown. Yatspec will colour this part green if the test passes, red if the test fails, or orange if the test is not run.

(f) Interesting givens are the preconditions for the test to run. These preconditions are stored in the class TestState.java in an object called interestingGivens. We would commonly populate it by passing a GivensBuilder object to the method given(). The method and() can also be used to add more information to our interesting givens.
 
 @Test  
   public void shouldReceiveResultWhenARequestIsSentToTheApplication() throws Exception {  
     given(systemARepliesWithNumbers("1,2,3"));  
     and(systemBRepliesWithNumbers("4,5,6"));  
     //...  
   }  
   private GivensBuilder systemARepliesWithNumbers(String numbers) {  
     return givens -> {  
       givens.add("system A returns", numbers);  
       return givens;  
     };  
   }  
   private GivensBuilder systemBRepliesWithNumbers(String numbers) {  
     return givens -> {  
       givens.add("system B returns", numbers);  
       return givens;  
     };  
   }  

(g) These are the captured inputs and outputs. Their purpose is to record values that go in or out of any component in the workflow. TestState.java contains an object called capturedInputAndOutputs which we can add to or query from. Commonly, we would indirectly add a value to the capturedInputAndOutputs to track the response of our application so it can be verified later, via a parameter of type ActionUnderTest.java passed to the when() clause method.

 @Test  
   public void shouldReceiveResultWhenARequestIsSentToTheApplication() throws Exception {  
     //...  
     when(aRequestIsSentToTheApplication());  
     //...  
   }  
 private ActionUnderTest aRequestIsSentToTheApplication() {  
     return (givens, captures) -> {  
 //The second parameter of this lambda is capturedInputAndOutputs  
       captures.add("application response", newClient()  
           .target("http://localhost:9999/")  
           .request().get().readEntity(String.class));  
       return captures;  
     };  
   }  


(h) These are the final verifications. They are created by the then() method. You can tell that an output was generated by the then() method because it is not highlighted in yellow.
A StateExtractor.java is responsible for the values in this section. The state extractor takes from the captures the values that were recorded previously, so a matcher can verify whether they are correct.


 @Test  
   public void shouldReceiveResultWhenARequestIsSentToTheApplication() throws Exception {  
     //...  
     then(theApplicationReturnedValue(), is("1,3,5"));  
   }  
 private StateExtractor<String> theApplicationReturnedValue() {  
     return captures -> captures.getType("application response", String.class);  
   }  

The scenario implementation
Now that we understand the criteria and have some basic understanding of Yatspec reports, let's write an acceptance test for the criteria described before.

In our scenario, Systems A, B and C are out of our control (let's imagine they are owned by other companies). We need to first query A and B and then send the processed result to C before replying to the client.
This means that our interesting givens will be the values returned from A and B, and our captured inputs and outputs will contain the input into C.

 
So let's have a look at how Systems A and B return the values previously saved in the interesting givens to the application and also how System C captures the input.

For this example, I created a class called FakeSystemTemplate.java which contains the boilerplate code necessary to create an embedded server. Systems A, B and C will each inherit from it and provide a specific handler implementation.

 public abstract class FakeSystemTemplate {  
   private final HttpServer server;  
   protected InterestingGivens givens;  
   protected CapturedInputAndOutputs captures;  
   public FakeSystemTemplate(int port, String context,InterestingGivens givens, CapturedInputAndOutputs captures) throws IOException {  
     this.givens = givens;  
     this.captures = captures;  
     InetSocketAddress socketAddress = new InetSocketAddress(port);  
     server = HttpServer.create(socketAddress,0);  
     server.createContext(context, customHandler());  
     server.start();  
   }  
   public abstract HttpHandler customHandler();  
   public void stopServer() {  
     server.stop(0);  
   }  
 }  


Later, when we create the acceptance test, we will see how we pass the interesting givens and the captured inputs and outputs to the Systems.
Systems A and B will return the values stored in the interesting givens using a unique key (later we will see how these keys are set in the givens).


 public class SystemA extends FakeSystemTemplate {  
   public SystemA(int port, String context, InterestingGivens interestingGivens, CapturedInputAndOutputs capturedInputAndOutputs) throws IOException {  
     super(port, context, interestingGivens, capturedInputAndOutputs);  
   }  
   @Override  
   public HttpHandler customHandler() {  
     return httpExchange -> {  
       String response = givens.getType("system A returns", String.class);  
       httpExchange.sendResponseHeaders(200, response.length());  
       OutputStream outputStream = httpExchange.getResponseBody();  
       outputStream.write(response.getBytes());  
       outputStream.close();  
       httpExchange.close();  
       captures.add("output from system A", response);  
     };  
   }  
 } 
 
 public class SystemB extends FakeSystemTemplate {  
   public SystemB(int port, String context, InterestingGivens interestingGivens, CapturedInputAndOutputs capturedInputAndOutputs) throws IOException {  
     super(port, context, interestingGivens, capturedInputAndOutputs);  
   }  
   @Override  
   public HttpHandler customHandler() {  
     return httpExchange -> {  
       String response = givens.getType("system B returns", String.class);  
       httpExchange.sendResponseHeaders(200, response.length());  
       OutputStream outputStream = httpExchange.getResponseBody();  
       outputStream.write(response.getBytes());  
       outputStream.close();  
       httpExchange.close();  
       captures.add("output from system B", response);  
     };  
   }  
 }  


For System C we will capture the arriving input.

 public class SystemC extends FakeSystemTemplate {  
   public SystemC(int port, String context, InterestingGivens interestingGivens, CapturedInputAndOutputs capturedInputAndOutputs) throws IOException {  
     super(port, context, interestingGivens, capturedInputAndOutputs);  
   }  
   @Override  
   public HttpHandler customHandler() {  
     return httpExchange -> {  
       Scanner scanner = new Scanner(httpExchange.getRequestBody());  
       String receivedMessage = "";  
       while(scanner.hasNext()) {  
         receivedMessage += scanner.next();  
       }  
       scanner.close();  
       httpExchange.sendResponseHeaders(200, 0);  
       httpExchange.close();  
       captures.add("system C received value", receivedMessage);  
     };  
   }  
 }  


Now that our remote systems are ready, let's write our test.


 @RunWith(SpecRunner.class)  
 public class KnownOddNumbersTest extends TestState {  
   private SystemA systemA;  
   private SystemB systemB;  
   private SystemC systemC;  
   private Application application;  
   @Before  
   public void setUp() throws Exception {  
     systemA = new SystemA(9996, "/", interestingGivens, capturedInputAndOutputs);  
     systemB = new SystemB(9997, "/", interestingGivens, capturedInputAndOutputs);  
     systemC = new SystemC(9998, "/", interestingGivens, capturedInputAndOutputs);  
     application = new Application(9999, "/");  
   }  
   @After  
   public void tearDown() throws Exception {  
     systemA.stopServer();  
     systemB.stopServer();  
     systemC.stopServer();  
     application.stopApplication();  
   }  
   @Test  
   public void shouldReceiveResultWhenARequestIsSentToTheApplication() throws Exception {  
     given(systemARepliesWithNumbers("1,2,3"));  
     and(systemBRepliesWithNumbers("4,5,6"));  
     when(aRequestIsSentToTheApplication());  
     then(theApplicationReturnedValue(), is("1,3,5"));  
     then(systemCReceivedValue(),is("1,3,5"));  
   }  
 }  


By extending TestState.java we get access to the interestingGivens and capturedInputAndOutputs objects. We will pass them to the remote systems; this way Systems A and B will be aware of what we expect them to return, and C will be able to capture its input.

The methods used inside given(), and(), when() and then() are just static fixture methods. I think it's good to avoid making long classes, so the test class just contains the test; everything else is extracted into reusable fixture methods. Let's have a look at them.


 public class GivensFixture {  
   public static GivensBuilder systemARepliesWithNumbers(String numbers) {  
     return givens -> {  
       givens.add("system A returns", numbers);  
       return givens;  
     };  
   }  
   public static GivensBuilder systemBRepliesWithNumbers(String numbers) {  
     return givens -> {  
       givens.add("system B returns", numbers);  
       return givens;  
     };  
   }  
 }  
 
  public class WhenFixture {  
   public static ActionUnderTest aRequestIsSentToTheApplication() {  
     return (givens, captures) -> {  
       captures.add("application response", newClient().target("http://localhost:9999/").request().get().readEntity(String.class));  
       return captures;  
     };  
   }  
 }
 
 public class ThenFixture {  
   public static StateExtractor<String> theApplicationReturnedValue() {  
     return captures -> captures.getType("application response", String.class);  
   }  
   public static StateExtractor<String> systemCReceivedValue() {  
     return captures -> captures.getType("system C received value", String.class);  
   }  
 }  


Once we run the application, the acceptance test will go red. The next thing to do, if we were practicing ATDD, would be to go into the production code and write unit tests to guide the creation of the code required to make the acceptance test go green. Remember the ATDD cycle.
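
The Application class itself is not shown in this post. A hypothetical walking skeleton like the one below (reusing the same embedded HttpServer approach as the fakes) would be just enough for the test to compile, run and go red:

  //Hypothetical walking skeleton, just enough for the acceptance test to run and fail  
  public class Application {  
    private final HttpServer server;  
    public Application(int port, String context) throws IOException {  
      server = HttpServer.create(new InetSocketAddress(port), 0);  
      server.createContext(context, httpExchange -> {  
        //No real behaviour yet: reply with an empty body, so the assertion on "1,3,5" fails  
        httpExchange.sendResponseHeaders(200, -1);  
        httpExchange.close();  
      });  
      server.start();  
    }  
    public void stopApplication() {  
      server.stop(0);  
    }  
  }  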

 
The TDD of the final solution is out of the scope of this blog post, but you can find all the completed code at this git repo:



Wednesday, February 4, 2015

Exposing the data layer of your app using REST

The more we separate the concerns of our system, the more maintainable it becomes.

It is very common to find applications written in such a way that the data access mechanisms (SQL files, JDBC client code, ORM mappings...) are located right next to (coupled with/interdependent on) the service/business logic. This often makes finding a bug, making a change, etc., harder.

Calculating a result and storing it are different things. So why not separate those two responsibilities into different applications?

One would be responsible for making sure the results are calculated and the other would just provide data management support.
In my opinion the result of doing this is a system that is more understandable, maintainable and upgrade friendly.

In many companies, the data is often managed by database engineering teams which have their own schedules, goals and even different managers from the development teams. In this type of organization, delays, misunderstandings, conflicts of interest and work de-synchronization are very common. So to make the most of a decoupled system, we not only need a good software approach, but also a process and team structure that are compatible with it (but this may be a topic for another post). This type of decoupling will not just make maintenance easier for the developers; it will probably also encourage discussion about the process and the team structure.

In my example I decided to expose two persistence services via one URL, persisting simultaneously in two types of database (a SQL and a NoSQL DB).
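
Both adapters implement a CreateService interface, which is not shown in the snippets below. A minimal sketch of what it might look like (note that services.nosqlcrud and services.sqlcrud each have their own version, and the SQL one also declares create(Person)):

  //Hypothetical sketch of the service interface the adapters implement  
  public interface CreateService {  
    void create(Address address);  
  }  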

This is the implementation of the NoSQL adapter:


 public class NoSqlAddressInsertAdapter implements CreateService {  
   private final MongoClient mongoClient;  
   @Inject  
   public NoSqlAddressInsertAdapter(MongoClient mongoClient) {  
     this.mongoClient = mongoClient;  
   }  
   @Override  
   public void create(Address address) {  
     DBCollection collection = mongoClient.getDB("radadata").getCollection("address");  
     collection.insert(toNoSqlAddress(address));  
   }  
   private AddressNoSql toNoSqlAddress(Address address) {  
     AddressNoSql addressNoSql = new AddressNoSql();  
     addressNoSql.append("firstline", address.getFirstLine());  
     addressNoSql.append("secondline", address.getSecondLine());  
     addressNoSql.append("postcode", address.getPostcode());  
     addressNoSql.append("persons", address.getPersons().stream().map(toNoSqlPersons()).collect(toList()));  
     return addressNoSql;  
   }  
   private Function<Person, PersonNoSql> toNoSqlPersons() {  
     return person -> {  
       PersonNoSql personNoSql = new PersonNoSql();  
       personNoSql.append("firstname", person.getFirstName());  
       personNoSql.append("secondname", person.getSecondName());  
       return personNoSql;  
     };  
   }  
 }  

This is the implementation of the SQL adapter:


 public class SqlAddressInsertAdapter implements CreateService {  
   @Inject  
   public SqlAddressInsertAdapter() {  
   }  
   private static SessionFactory getSessionFactory() {  
     return HibernateUtil.getSessionFactory();  
   }  
   private Session session;  
   @Override  
   public void create(Address address) {  
     session = SqlAddressInsertAdapter.getSessionFactory().getCurrentSession();  
     session.beginTransaction();  
     Set<ORMPerson> ormPersons = address.getPersons().stream().map(toOrmPersons()).collect(toSet());  
     ORMAddress ormAddress = new ORMAddress();  
     ormAddress.setFirstLine(address.getFirstLine());  
     ormAddress.setSecondLine(address.getSecondLine());  
     ormAddress.setPostcode(address.getPostcode());  
     ormAddress.setOrmPersons(ormPersons);  
     session.save(ormAddress);  
     session.getTransaction().commit();  
   }  
   @Override  
   public void create(Person person) {  
     //  
   }  
   private Function<Person, ORMPerson> toOrmPersons() {  
     return person -> new ORMPerson(person.getFirstName(),person.getSecondName());  
   }  
 }  

Note that both adapters use their specific domain objects; one uses an ORM (those ORM classes are Hibernate entities) and the other doesn't.

This is a sample REST endpoint that allows access to both services simultaneously:


 @Service  
 @Path("insertperson")  
 public class InsertAddressResource {  
   private final services.nosqlcrud.CreateService noSqlcreateService;  
   private final services.sqlcrud.CreateService sqlCreateService;  
   @Inject  
   public InsertAddressResource(services.nosqlcrud.CreateService noSqlcreateService,  
                  services.sqlcrud.CreateService sqlCreateService) {  
     this.noSqlcreateService = noSqlcreateService;  
     this.sqlCreateService = sqlCreateService;  
   }  
   @POST  
   @Consumes({"application/json"})  
   public void insert(Address address) {  
     noSqlcreateService.create(address);  
     sqlCreateService.create(address);  
   }  
   /*  
     A Sample Json to POST:  
     URL: http://localhost:9998/insertperson  
     Content Type: application/json  
     {  
      "firstline": "street bla bla",  
      "secondline": "town of bla bla",  
      "postcode": "ble ble ble",  
      "persons": [  
         {"firstname":"Armin","secondname":"Josef"},  
         {"firstname":"Johan","secondname":"Uhgler"}  
       ]  
     }  
   */  
 }  

These snippets of code are just part of a demo app I wrote a few days ago to show how to expose the data layer via REST.
The rest of the project can be found at: https://github.com/SFRJ/Rest-Approach-to-Data-Persistence-R.A.D.A-

Wednesday, January 7, 2015

Retrospectives – “Lets talk about it”(Part 2)

In the previous post, I briefly explained what retrospectives are and why they are important, and I also explained what often happens before them and how the facilitator prepares.

The following posts will be more focused on retrospective formats/styles that could help the self-organized team in different scenarios.


The first format/style I would like to explain is what I call "The Diplomatic Open Retro".
This retrospective style is best suited for a team that is not very familiar with the concept of retrospectives and mostly needs to improve its internal self-organizational process (e.g. internal communication, workload management, development practices, internal optimizations, etc...).

How it works
At the beginning every attendee receives some post-it notes and is asked to write down all the topics they would like to discuss. Ten minutes should be enough, but depending on many factors, gathering topics is sometimes more difficult. To help people get inspired, the facilitator can play some relaxing music, write some of the hot topics from the previous analysis on a board, or even encourage people to talk to each other (as long as it helps discover topics).

This period is a critical part of the retrospective and it should take as long as needed; nobody should feel rushed, and only when everyone is happy with the topics collected will the retrospective carry on. It is also important to mention that the team members can write the post-its in whatever way they want; there is no predefined format, and even a simple sentence will do. If a team member doesn't know what to write, that is perfectly fine (he/she doesn't have to).



The next step is to go one round around the table, in which each of the members will briefly, in a couple of sentences, explain each of the cards they wrote. There is no rebuttal; this is just a pure diplomatic exercise in which the members try to convince the others to vote for their topics to be discussed. The person talking will stand up and, as he/she briefly explains each topic, will stick it onto the voting board. During this period it often happens that people mention the same topic, so this is also a great moment to group the repeated topics together so the voting can be more accurate.


Once the topics are on the board, it is time for voting. Each of the members will be asked to place 3 marks on the topics they consider most important to discuss.
It is important to understand that the time for the retrospective is limited and not all the topics will be discussed, so the team needs a mechanism for selecting the topics that are considered most important. Topics without votes will be discarded (they will appear in future retrospectives if they are important).


The voted topics will be discussed in order (most voted first). The facilitator will make sure to take notes of possible actions and key points as the conversation goes. Each topic will be time-boxed to 10 to 15 minutes; after that time the facilitator will ask everybody to start proposing and deciding on actions and owners for those actions. Actions need to be decided before moving on to the next topic. It is very common in retrospectives that there is a lot of debate but few actions; this retrospective style attempts to gather actions as topics are closed. It is OK for the team to decide that an action is not needed, but this is rare, and if it occurs, everyone has to agree that no action is to be taken. Here is an example of what gathered actions look like:

ACTIONS
Low Team Capacity (4 votes)
  • Team unsure if they should talk to HR, Management or another dev team (Owner: No action to be taken until we find out)

Collaboration between teams (3 votes)
  • Devs to assist testers before moving on to the next dev task (Owner: All devs)
  • Set up the machine of the new joiner (Owner: Team Leader)
  • Review the handover checklist before going on holidays (Owner: All devs)

Failing builds (3 votes)
  • Determine why the build has been red for more than a month (Owner: Senior dev)

Cakes all over the office (2 votes)
  • Stop eating unhealthy cakes and organize a team dinner to celebrate Xmas (Owner: Team Leader)

Tech debt catch up (2 votes)
  • Not enough time to discuss in this retro; add as a hot topic for the next retro (Owner: Facilitator)



Sometimes the team is unable to decide on an action because their dependency/blocker is outside of the team. In that case, they will need to identify the individuals who need to be influenced. But that is a topic I will cover in another post.

Friday, December 26, 2014

Retrospectives – “Discovering our selves”(Part 1)

A retrospective is a well-known practice in many Agile development teams. Its goal is to help the team reflect on the previous working weeks (commonly 2 or 3 weeks), with the aim of identifying ways of improving the way they work. Retrospectives are also very important for these agile self-organized teams because, since they don't receive direct commands from managers (see my previous post), it is extremely important to have mechanisms that increase awareness and prevent burnout.

What makes a retrospective a little bit different from other meetings is that it often follows an organized protocol for interaction. The retrospective's protocol is defined and applied by one or more persons external to the team, known as the facilitators. The role of the facilitator is to, in an impartial way, help the team express their concerns and discover actions that can help them address those concerns.

Each facilitator has his or her own techniques for facilitating retrospectives. Different techniques are useful in different circumstances. That is why one of the first things the facilitator will do in order to prepare a good retrospective is to have a brief chat with some representatives from the team, to get some idea/highlights of what has been going on lately: current work, most notable blockers, absences, who will attend the retrospective, important events...

This first mini reconnaissance mission is not a silver bullet, but it often helps the facilitator get a grasp of what type of retrospective format could be used. Sometimes retrospectives will have a high level of technicalities, other times there will be lots of complaints about blockers, other times there will be communication issues, process issues, etc...

Without going into a specific retrospective format yet (not in Part 1), I would like to just name a list of healthy tips that are useful to hang somewhere in the room for all to see and/or even say out loud (the facilitator can ask for volunteers to read them out) at the retrospective, just before commencing:

·         Don’t blame. We all have been working to the best of our abilities.
·         Don’t monopolize the conversation, be conscious when you should let others participate.
·         Don’t interrupt people when they are speaking.
·         Don’t be afraid of expressing what you think no matter how bad it is.
·         Don’t feel intimidated by anyone because of their position.
·         Do criticize and welcome criticism (blame is not the same as criticism).
·         Do remember that change is always possible.
·         Do remember that your company will be what you want it to be.

Dialogue is a skill that is not easy to master. The goal of these tips (note that I didn’t say rules) is just to encourage a healthier debate. It will often be the case that people feel: shy, impatient, inferior, superior, lazy, pessimistic, etc...

To help break some of those psychological barriers, another duty of the facilitator is to make sure that the environment where the retrospective is held is comfortable enough. The environment can significantly impact the results of a retrospective, but of course it is up to the creativity of each facilitator how to achieve that. In any case, here are some more tips:

·         A bit of quiet ambient music at the beginning, or even during the whole retrospective, can help stimulate people and also reduce the uncomfortable sensation some people claim to have when the group is in silence.
·         Soft drinks and water can help avoid dry mouths when speaking.
·         Coffee and tea can help give people a boost if the retrospective has to be held in the last hours of the day.
·         Alcohol is often discouraged, especially if the retrospective is expected to last long. Some facilitators don’t have anything against it in moderation.
·         Sweet and salty snacks are often found in retrospectives, especially chocolate (apparently there is scientific research suggesting it can increase people's happiness).
·         Fruit is a healthy option that many people appreciate in retrospectives.
·         Appropriate jokes and even chit-chat are common at the beginning of retrospectives; it is perfectly fine if the facilitator briefly engages in them, or even initiates them while the retrospective is not yet started or is about to start, as a way of breaking the ice.

The facilitator should have, at the beginning of the retrospective, a list of the members of the team expected to attend and their roles. The reason for this is that on many occasions there are other people, external to the team, who were also invited to the retrospective; to make sure that everybody knows who is in the room, it may be nice to have them briefly introduce themselves to the team if they haven’t done so yet.

Once the retrospective has started, and regardless of the format the facilitator decides to use, often there will be a round of what is known as a “Temperature Read”. It is not mandatory, but it is very common in almost every retrospective. The goal of the temperature read can vary, and it also has a specific format depending on what we want to get from the team. It may go from a simple icebreaker to a puzzle game where everybody is engaged.
Since this is a topic in itself, I will not go deep into it in this series of blog posts, but next I will briefly describe one of those exercises.

For example, it might be of interest to the facilitator to discover how often the team needs to do a retrospective. The facilitator will ask everybody to write a number from 1 to 5 on a post-it note, where a smaller number means they see no need to have a retrospective right now, and a greater number means they are really eager to have one right now. After the retrospective, the facilitator will count the votes and, depending on the predominant result, an action suggesting a change to the frequency of the team's retrospectives can be proposed:

·         1 or 2 can appear if the team has retrospectives too often. Sometimes it becomes like a routine for the team and the quality of the retrospective's results is not that good.
·         3 or 4 often indicates that the frequency of the team's retrospectives is probably appropriate: nice productive retrospectives with good usage of the time, etc...
·         5 may be a sign that the team needs retrospectives more often. It is common that in retrospectives where the predominant temperature was 5, many topics remain undiscussed due to lack of time.

Of course, the previous bullet points were just an example; those patterns don't necessarily apply and can even be interpreted differently by different people. If the team wants to research that topic, they can, and they can try to discover when it is best for them to have a retrospective.


With this I conclude Part 1 of this blog post series on retrospective facilitation.
Stay tuned: in the coming posts I will discuss in depth some of the most powerful retrospective formats (each of them for a different purpose), some of them used in many companies, from small start-ups to huge mega-corporations. Remember that the retrospective is a very helpful tool for the self-organized team.

Share with your friends