
Bad Tests, Good Tests

Tomek Kaczanowski
Copyright © 2013 kaczanowscy.pl Tomasz Kaczanowski

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written
permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book
is sold without warranty, either express or implied. Neither the author, nor the publisher, and its dealers and distributors will be held liable for any damages
caused or alleged to be caused directly or indirectly by this book.

The author has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals.
However, the author cannot guarantee the accuracy of this information.

Visit us on the web: http://practicalunittesting.com

Published by kaczanowscy.pl Tomasz Kaczanowski

Cover design by Agata Wajer-Gadecka, http://poleznaku.pl

ISBN: 978-83-938471-0-5

First printing, December 2013

Version pdf_a4_20131229_2323
Dedication
To all who develop open-source projects.
May the Force be with you! :)

Table of Contents
Acknowledgments ................................................................................................................................ iii
Foreword .............................................................................................................................................. iv
Preface .................................................................................................................................................. vi
About the Author ............................................................................................................................... viii
1. Why Bother? ..................................................................................................................................... 1
2. Breaking all the Rules ...................................................................................................................... 2
2.1. No Assertions ......................................................................................................................... 2
2.2. Autogeneration ....................................................................................................................... 2
2.3. It is a Full Time Job .............................................................................................................. 3
2.4. Conclusions ............................................................................................................................ 5
3. Strength ............................................................................................................................................. 6
3.1. Use Smart Values .................................................................................................................. 6
3.2. Self-Test ................................................................................................................................. 8
3.3. Happy Path ........................................................................................................................... 10
3.4. Expecting Exceptions Anywhere ......................................................................................... 14
3.5. Evolve or… ............................................................................................... 16
3.6. Assertions should be Merciless ........................................................................................... 18
3.7. Is Mockito Working Fine? .................................................................................................. 18
3.8. Expected Exceptions and Verification ................................................................................. 19
3.9. Mockito any() vs. isA() ....................................................................................................... 21
3.10. Be Generic! ........................................................................................................................ 22
3.11. Write the Right Tests ........................................................................................................ 25
4. Maintainability ................................................................................................................................ 27
4.1. Mock'em All! ...................................................................................................... 27
4.2. Control the Environment ..................................................................................................... 31
4.3. Time Means Trouble (Always!) .......................................................................................... 34
4.4. Waste of Time ..................................................................................................................... 36
4.5. Overspecified Tests .............................................................................................................. 37
4.6. SRP for Tests ....................................................................................................................... 48
5. Readability ...................................................................................................................................... 54
5.1. Why formatting helps .......................................................................................................... 54
5.2. Ceremony ............................................................................................................................. 55
5.3. Creation of Objects .............................................................................................................. 56
5.4. Naming is King ................................................................................................................... 61
5.5. Mocks Are Good ................................................................................................................. 69
5.6. Assertions ............................................................................................................................. 70
6. Things to Remember ...................................................................................................................... 79

Acknowledgments
This book would never have happened, had it not been for the help and encouragement of a number
of people.

Bartosz Ocytko, Martin Skurla and Bartek Zdanowski shared some examples of bad tests with me
(which doesn't mean they wrote them in the first place!). My colleagues from the various teams I've
worked with also contributed in this way; however, their contributions may not always have been made
knowingly ;)

Jakub Nabrdalik and Tomasz Borek delivered three complete sections that are now included within the
book.

Krzysztof Koziol and Marcin Michalak read early versions of this book and provided some valuable
feedback.

My special thanks go to Petri Kainulainen, who helped me a great deal by carrying out several exhaustive
reviews of early draft versions of the book.
Petri not only found many weak spots and suggested improvements, but also complimented me on the
things he liked and encouraged me to go on (which is something I really needed!). Thank you!

Peter Kofler - yes, the Code Cop himself! - agreed to write a foreword and, well, I couldn't imagine
anything closer to the spirit of this book. Thank you!

Carl Humphries has done a splendid job by translating this book from my version of English into a
standard one. You would not enjoy reading this book at all, were it not for his hard work!

Even though many people have contributed to this book and have carried out reviews, all
bugs still belong to me!

Foreword
I still remember when I came across the first bad tests. Early in my career I had joined a new team and
I discovered JUnit tests hiding throughout the main source tree. Although they were JUnit tests, they
were neither unit tests, nor could they be run standalone. The team's architect had created a special test
runner that depended on the whole application being initialized inside the application server container.
After the architect had left, nobody ran these tests any more. The real problem was that the application
made heavy use of singletons internally, preventing any isolated execution of the business logic. The
bad design of the system prohibited us from writing proper unit tests. It was no surprise that there were
so few of them. Back then I was already a fanatic about code quality - which also meant a fanatic about
unit tests - and I immediately attacked the problem. It took me several years to push back the singletons
and enable unit testing, so that the whole team would be able to add unit tests as they went along.

In the years since then some things have changed in our industry. Eventually the singleton pattern came
to be considered an anti-pattern, and the rise of Test Driven Development put a new focus on unit tests.
But the problems with tests remain the same. I still encounter bad tests regularly. As we have more
automated tests than we used to have - which is without doubt a great thing - the problems of bad tests
add up and have a higher impact as well. These tests slow down ongoing development and increase the
maintenance cost without adding much benefit. As today's code bases grow in size and complexity each
year, we need to add and maintain more and more tests. Some projects are facing testing-hell already,
where bad tests actually grind whole development teams to a halt. To avoid this, every software developer
needs to understand what defines a good test, to recognize a bad one, and to know ways to transform
bad tests into proper ones. And this is exactly where Tomek and his book come in.

I met Tomek at GeeCON, a Polish Java conference. He delivered a talk about bad and good tests, where
he shared some of the advice given in this book. He had been aware of bad unit-testing habits for years,
like me, but unlike me he had taken the time to collect and categorize them. I was excited. There was
finally someone who cared for the quality of automated tests and even talked about it. I just had to meet
this guy. Fortunately GeeCON is a community conference where you can still meet with speakers if you
want to, and so we met. After chatting a bit about tests and our industry in general I knew that we would
be friends. Tomek is genuinely concerned about code quality in general and bad tests in particular, and
I wish there were more people as dedicated to quality in our industry.

Tomek is a developer and he talks code. This book shows roughly as much code as plain text. For each
category of bad tests, Tomek shows the original code and then an improved version of it. This is a
good mixture; usually a code example gives more insight than long explanations of coding rules
or design guidelines. After all, a picture is worth a thousand words. Of course, the underlying principles
that a bad test might violate are manifold, and explaining them all in detail would exceed the scope of
this book. This is a practical book showing real-world code. Some of the code examples are really ugly,
some contain intricate puzzles. The number of bad tests Tomek has collected over the years is really
depressing, but Tomek's humorous comments and the sheer amount of WTFs make this book a hilarious
read - hilarious and scary at the same time.

This is a small book, but it is full of great advice on how to avoid bad tests. Every developer should read
it. The fact that it is short really helps. Go ahead, read it; you really need to know all about good and
bad tests. Please do not be the person that adds another testing hack, like the architect at the beginning
of my career. After all, I am sure you do not want your tests to appear in the next edition of this book
as a bad example, do you?

Peter Kofler, Code Cop


Austria, 2013

Preface
So you want to write great tests? Ah, so do I! :) I have some experience in writing tests, and I would
like to share it with you.

Stay focused, stay sharp, distill what you read through your own experiences, and choose what seems
right and valuable for you.

The Idea
The idea behind this book is to present test code snippets and discuss ways of making them better.

All examples discussed in this book come from real code of real applications. I haven't made them up.
They are real. From the thousands of tests I have seen, I have selected those which illustrate some typical
errors or imperfections which I frequently encounter while doing code reviews (or pondering over my
own code!).

Some of the code examples were obfuscated in order to "protect the innocent". ;) Many were
significantly truncated, so only their "essence" was left and all the clutter was removed. This makes
the examples easier to understand, but at the same time it makes some of the discussed issues look not
so important. However, they are important in real-life scenarios, where you have to deal with numerous,
much more complex tests (or with the same issues repeated across numerous tests).

SUT and Assertions


From time to time I use the term SUT, which denotes the System Under Test, that is, the thing being tested.
The SUT can vary in size. For unit tests it is usually a class, for integration tests it might be a class, a layer
or a module, and for end-to-end tests it is the whole system.

Throughout the book I often use AssertJ assertions[1] (e.g. assertThat()) instead of those provided by
JUnit or TestNG (e.g. assertEquals()). After having written thousands of tests (of each kind), I've
learned that they make my tests more readable by allowing me to better express my intentions.

Icons
The following icons appear within the text:

A helpful tip.

An additional note.

Warning, danger ahead!

[1] http://joel-costigliola.github.io/assertj/


All three icons were designed by Webdesigner Depot http://www.webdesignerdepot.com/.

Fonts
For listings I have decided to use the Oxygen Mono font, created by Vernon Adams (http://code.newtypography.co.uk/).

What I Expect From You


To really benefit from reading this book you should:

have some experience in writing tests, especially unit tests,

know a testing framework; preferably TestNG or JUnit, but any other will also be fine,

be interested in improving your test-writing skills,

understand the value of clean test code.

You could gain some of the required knowledge by reading my other books (visit http://practicalunittesting.com).

Tips, Rules, Guidelines etc.


This book contains many pieces of advice, for example:

As a rule, never use System.currentTimeMillis(); instead, introduce an additional interface
that you can later stub in your tests.
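
For instance - just a sketch, with the TimeProvider name made up here purely for illustration - such an interface could be as small as this:

public interface TimeProvider {
    long getCurrentTimeMillis();
}

// Production code asks an injected TimeProvider for the current time instead of
// calling System.currentTimeMillis() directly, so a test can pass a stub (or a
// Mockito mock) that returns whatever fixed instant the scenario requires.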

Such advice is true in 99% of cases, and that is why I share it with you. However, you will encounter
cases when you should break the rules and be proud of it! So keep them in mind, stick to them, but
do not follow them blindly.

Why Is This Book Free?


This book is free because I decided so. :) This is possible thanks to the many of you who have bought
my previous books from the "Practical Unit Testing" series. Thank you very much!

By giving this book away for free I would also like to pay my debts to the community. I have gained a lot
from many bloggers, open-source contributors and forum participants. Thanks to your engagement and
your willingness to share your knowledge and experience, I'm a better developer today, and I can benefit
from many open-source tools. Thank you very much!

Last but not least, it is real fun to give gifts. And I'm enjoying it very much! :)

Book Site
Please visit http://practicalunittesting.com for news and information on downloadable extras, etc.

About the Author
Tomek Kaczanowski is a technical team leader from Krakow, Poland. He has a strong interest in code
quality, testing and automation - preferably all three together. Combining technical with soft skills,
he also ventures into the realms of mentoring, teaching, lecturing and article writing, not to mention
preaching sermons to the unconverted in the hope of redeeming them (or at least their code)! He hates
doing things manually, and is allergic to empty src/test/java directories.

Tomek believes that by working with legacy code, and improving it, he can make the world a better
place. To his disappointment, the world does not seem to care all that much about his efforts.

Apart from all this weirdness, he is a pretty normal person: a husband, father of two, and cat owner.

Chapter 1. Why Bother?
This book is not meant to explain all the reasons we might have for writing tests. Even so, there are a
few things we do need to know. In short, there are three main reasons for writing them[1]:

If we have a complete set of tests, and they all pass, then we know that our application works.

Introducing changes is much easier if we have tests. They will inform us if we break some
functionality.

Tests are probably the only documentation we have which stays up-to-date over the whole life of
our software.

You should keep these reasons for writing tests in mind when considering the examples of bad tests
presented in this book. Ask questions like "Does it test anything?", "Is it maintainable?", "Is it easy to
understand?" etc.

The worrying thing about writing tests is that there are numerous accounts of people introducing tests
into their development process and ending up with even more junk code to support than they had
to begin with! Why? I guess, mainly because they lacked the knowledge and understanding needed to
write good tests: tests, that is, that will be an asset, not a burden.

Whatever you do, do it properly. If it is worth doing, it is worth doing well. Keep these
maxims in mind when writing tests!

The more code you've written, the more code you've got to maintain. And this is just as
true of tests!

[1] Please forgive the simplifications; I just want to keep things brief!

Chapter 2. Breaking all the Rules
Yes, I'm breaking all the rules, breakin' all the rules

Ozzy Osbourne, "Breaking All The Rules"

We will start with some really useless tests. Some of them were written a long time (10 years or so) ago,
in an era when many of us probably hadn't bothered to write any tests at all. So with all due respect to
the (unknown) authors, today we can say that these tests were really bad.

2.1. No Assertions
These few lines of code are copied from a very old and horrible (truly horrible!) test. Among other things,
it had no assertions. Nothing. Null. Zero. Instead, it printed a lot of information to the System.out, so
that developers could verify whether the tested functionality was working or not.
IResult result = format.execute();
System.out.println(result.size());
Iterator iter = result.iterator();
while (iter.hasNext()) {
    IResult r = (IResult) iter.next();
    System.out.println(r.getMessage());
}

This is wrong, of course. Tests should never require us to perform any kind of manual verification.
No need to browse logs, no need to query a database by hand. Nothing like this. It should all happen
automatically. A good test just fails when the functionality being tested is malfunctioning.

A better version is presented below:


IResult result = format.execute();
assertThat(result.size()).isEqualTo(3);
Iterator iter = result.iterator();
while (iter.hasNext()) {
    IResult r = (IResult) iter.next();
    assertThat(r.getMessage()).contains("error");
}

Obviously I have no idea what the expected values would have been, so I have made them up. :)

This version does not require the user to browse any log files. The testing framework will verify the
outcome and notify the user in case of failure.

There is only one (evil!) reason to write tests without assertions: to inflate the code coverage
metrics without doing anything useful!

2.2. Autogeneration
Unit tests are simple, right? All they do is set up some object, call its methods, and verify results, right?
So, why don't we make computers write them? Hip-hip hooray, we will save plenty of time, and have
100% code coverage! Let's do it!

Well, it simply does not work, you know. If you think about why it is, really, that you are writing tests,
you will quickly realize that this is a bad idea. It does not help you discover bugs, it does not help
you come up with a better design, it promotes test-last coding and goes against the "test behaviour, not
methods" rule (see Section 4.6.2). Also, such tests constitute a rather weak form of documentation. All
of which means you just really shouldn't do it! There you are!

Below is an attempt to autogenerate some test code. As you can see, it has successfully generated tests
for getters/setters, which is, to put it bluntly, a waste of time.
public void testSetGetTimestamp() throws Exception {
    // JUnitDoclet begin method setTimestamp getTimestamp
    java.util.Calendar[] tests = {new GregorianCalendar(), null};

    for (int i = 0; i < tests.length; i++) {
        adapter.setTimestamp(tests[i]);
        assertEquals(tests[i], adapter.getTimestamp());
    }
    // JUnitDoclet end method setTimestamp getTimestamp
}

public void testSetGetParam() throws Exception {
    // JUnitDoclet begin method setParam getParam
    String[] tests = {"a", "aaa", "---", "23121313", "", null};

    for (int i = 0; i < tests.length; i++) {
        adapter.setParam(tests[i]);
        assertEquals(tests[i], adapter.getParam());
    }
    // JUnitDoclet end method setParam getParam
}

What kind of bugs would you ever expect to discover by means of such tests?

So now, repeat after me! "I will neither autogenerate the test code, nor its skeleton. I will neither
autogenerate the test code, nor its skeleton. I will neither autogenerate the test code, nor its skeleton.
I will neither autogenerate the test code, nor its skeleton." (A pretty useful mantra if you can't get to
sleep - should help after 1000 repetitions!)

Do NOT autogenerate either the test code or its skeleton.

2.3. It is a Full Time Job


I want a life time job bein' close to you
Makin' all your dreams come true
But if your heart don't throb about a full time job
A part time job will do

Eddy Arnold, "A Full Time Job"

Let's face it, writing tests only makes sense if you (and your team) take care of them every day. In this
section I present two examples of what happens when that is not so.


First comes a short snippet of the SystemAdminSmokeTest class. This is a smoke test[1], mind you, which
means it is an important test that aims to give you some quick feedback about the health of the system. At
the time when I joined the project, the test looked as follows. Please note that all the lines are commented
out!

class SystemAdminSmokeTest extends GroovyTestCase {

    void testSmoke() {
        // do not remove below code
        // def ds = new org.h2.jdbcx.JdbcDataSource(
        //     URL: 'jdbc:h2:mem:test;DB_CLOSE_DELAY=-1;MODE=Oracle',
        //     user: 'sa', password: '')
        //
        // def jpaProperties = new Properties()
        // jpaProperties.setProperty(
        //     'hibernate.cache.use_second_level_cache', 'false')
        // jpaProperties.setProperty(
        //     'hibernate.cache.use_query_cache', 'false')
        //
        // def emf = new LocalContainerEntityManagerFactoryBean(
        //     dataSource: ds, persistenceUnitName: 'my-domain',
        //     jpaVendorAdapter: new HibernateJpaVendorAdapter(
        //         database: Database.H2, showSql: true,
        //         generateDdl: true), jpaProperties: jpaProperties)

        ...
    }
}

Well, that certainly is not going to inform anyone about the state of the system… I guess there was a
time when this test used to work. But then some changes came along, and it was commented out (only
"for a very short time", I bet) and was never brought to life again. A pity, yes! And a rather discouraging
sign to anyone joining the team[2].

Delete tests if they are no longer needed. There is no need to keep them.

And what about ignored tests?

Sometimes you really have no choice but to disable a test for a short period of time. This happens
when performing redesigns, which is something that often requires us to update numerous tests.
Disabling tests is always a perilous situation, because there's no guarantee that you'll ever go
back to them and bring them back to life again.

The way I deal with this problem is by adding a comment to the ignored test. The comment
includes the number of the related ticket (from the issue tracking system). When I have finished
working on this ticket I always search the codebase, looking for places marked with its number.
And even if I forget to do so, my colleague will do this when he or she performs a code review.
One of us will be sure to find that disabled place and finish the job.
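
For example - a sketch only, with the ticket number made up - a disabled JUnit test marked in this way could look like this:

// PROJ-1234: re-enable once the pricing module redesign is finished
@Ignore("PROJ-1234 - pricing module redesign in progress")
@Test
public void shouldApplyDiscountForPremiumClients() {
    // ...
}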

[1] See https://en.wikipedia.org/wiki/Smoke_testing#Software_development
[2] Which was misleading, BTW, because the project and the team were doing pretty well!


Another example is even more depressing. In one of the modules of some project, in the src/test/java
directory, I found this test:

@Test
public class ExampleTest {

    public void testExample() {
        assertTrue(true);
    }
}

I think you've already guessed the end of this story. Yes, that was the only test there…

2.4. Conclusions
There are a few things we can learn from the attempts to write tests presented so far:

"Do not live with broken windows"3; fix things immediately!

Writing and maintaining tests is an effort required of the whole team (and this effort must also be
supported, or at least not hindered, by all of the stakeholders).

Good intentions are not enough. To fulfill the goal of achieving well-tested software, you also need
knowledge, time and determination.

Verification must be automatic!

Write tests, do not generate them!

Writing tests is a full-time job, and there is no one else who will do it for you.

If you don't actually write tests, you can't expect to be good at this!

Armed with this basic knowledge, we can now move on to some more interesting examples.

[3] See http://c2.com/cgi/wiki?FixBrokenWindows

Chapter 3. Strength
The first reason people start writing tests is to "verify that something works". At first sight this seems
trivial: you think about a test case, write it down in the form of a JUnit/TestNG test, and voila! However,
this is not always so.

In this chapter we shall be concentrating on the "strength" of our tests. Do they really test anything
interesting? As we shall see, it is possible for them to fail miserably in this respect. We shall come
across tests that definitely test something, but maybe not exactly what should be tested. We shall even
encounter tests that don't test anything valuable at all!

3.1. Use Smart Values


Most of them mean absolutely nothing,
Oh most of them mean absolutely nothing,
But some of them mean everything
Oh numbers

Jason Michael Carroll, "Numbers"

When writing tests we should try to minimize the risk of having tests pass when the scenario they test
is not really fulfilled. Let us now examine two tests that are at risk of passing accidentally.

3.1.1. 20, 50, 50?


Let us consider the following factory method:
public PriceCalculator create() {
    BigDecimal minMargin = settings.getMinMargin();
    BigDecimal maxMargin = settings.getMaxMargin();
    BigDecimal premiumShare = settings.getPremiumShare();
    return new PriceCalculator(minMargin, maxMargin, premiumShare);
}

settings is a collaborator, and as such should be stubbed in the test.

This test does exactly this:


public class PriceCalculatorFactoryTest {

    SettingsService settings = mock(SettingsService.class);

    @Test
    public void shouldCreatePriceCalculator() {
        //given
        given(settings.getMinMargin()).willReturn(new BigDecimal(20));
        given(settings.getMaxMargin()).willReturn(new BigDecimal(50));
        given(settings.getPremiumShare()).willReturn(new BigDecimal(50));

        //when
        PriceCalculator calculator
            = new PriceCalculatorFactory(settings).create();

        //then
        assertThat(calculator)
            .isEqualTo(new PriceCalculator(new BigDecimal(20),
                new BigDecimal(50), new BigDecimal(50)));
    }
}

Even if the values used in the test (minMargin = 20, maxMargin = 50, premiumShare = 50) make business
sense, they should not be used here. Why? Because there is always a chance that the tested method is
flawed and the values are improperly assigned (this can happen if someone created it using the copy&paste
approach). Imagine the maxMargin parameter being assigned to premiumShare, and vice versa. The test
would still pass, even in the event of such an implementation error, because both parameters have been
assigned the same value of 50.

To minimize the risk, use different values for each parameter. The following code shows this (there is
also some refactoring being done, so the values are extracted as static final fields):
public class PriceCalculatorFactoryTest {

    SettingsService settings = mock(SettingsService.class);

    private static final BigDecimal MIN_MARGIN = new BigDecimal(20);
    private static final BigDecimal MAX_MARGIN = new BigDecimal(30);
    private static final BigDecimal PREMIUM_SHARE = new BigDecimal(40);

    @Test
    public void shouldCreatePriceCalculator() {
        //given
        given(settings.getMinMargin()).willReturn(MIN_MARGIN);
        given(settings.getMaxMargin()).willReturn(MAX_MARGIN);
        given(settings.getPremiumShare()).willReturn(PREMIUM_SHARE);

        //when
        PriceCalculator calculator
            = new PriceCalculatorFactory(settings).create();

        //then
        assertThat(calculator).isEqualTo(
            new PriceCalculator(MIN_MARGIN, MAX_MARGIN, PREMIUM_SHARE));
    }
}

Thanks to this change, the test now fails if the values in the constructor of the PriceCalculator class
have been misplaced.

Avoid using the same numbers/strings for different variables/properties in your test methods!
Often powers of 2 (1, 2, 4, 8, …) are quite convenient, as they do not sum up to each other
easily.
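
For example, the constants from the previous listing could just as well be powers of two (the exact numbers below are chosen purely for illustration):

private static final BigDecimal MIN_MARGIN = new BigDecimal(1);
private static final BigDecimal MAX_MARGIN = new BigDecimal(2);
private static final BigDecimal PREMIUM_SHARE = new BigDecimal(4);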

3.1.2. Zeros are Evil


When testing functionality, we should select the test cases (i.e. the input values) with care. This example
illustrates the need for this.
public class PaymentServiceTest {

    PaymentAdapter paymentAdapter = mock(PaymentAdapter.class);

    PaymentService paymentService = new PaymentService(paymentAdapter);

    @Test
    public void shouldReturnRevenueForClient() {
        //given
        Client client = new Client();
        given(paymentAdapter
            .getRevenue(client, PaymentService.REPORT_COUNT))
            .willReturn(0d);

        //when
        double actual = paymentService.getRevenue(client);

        //then
        assertThat(actual).isEqualTo(0d);
    }
}

What is wrong with this test? Well, nothing. Almost nothing…

But, a picky person (like me!) would notice that the value we are using for the verification is 0.
Hmm… there is nothing wrong with zeros, but… Okay, the point is as follows: some values - especially
null and 0 - are often returned from methods autogenerated by IDEs. This means that using null or 0
to verify whether or not some function or other is working correctly is, frankly, not the best idea.

So, probably this test would be stronger if we replaced zero with any other value, like this:
@Test
public void shouldReturnRevenueForClient() {
    //given
    Client client = new Client();
    given(paymentAdapter
        .getRevenue(client, PaymentService.REPORT_COUNT))
        .willReturn(1.23);

    //when
    double actual = paymentService.getRevenue(client);

    //then
    assertThat(actual).isEqualTo(1.23);
}

This does not mean you should not test your method against null and 0! On the contrary,
these two values are often problematic, and it is usually worth checking how your code
behaves with them. The thing is NOT to use them as the only test cases for happy path testing!

3.2. Self-Test
Within a single application, a need arose to implement a number of payment methods. The list of these,
along with information about the countries where each payment method would be accepted, was kept
by the PaymentMethod enum.

The test below verifies whether the getMethodsForCountry() method of the PaymentMethod class
returns valid methods for Poland. First, it creates the list of expected payment methods, then it calls the
method in question, and finally it compares the expected list with the result of the tested method
invocation.
@Test
public void shouldGetMethodsForPoland() {
    //given
    List<PaymentMethod> all = Lists.newArrayList(PaymentMethod.values());
    List<PaymentMethod> methodsAvailableInPoland = Lists.newArrayList();
    for (PaymentMethod method : all) {
        if (method.isEligibleForCountry("PL")) {
            methodsAvailableInPoland.add(method);
        }
    }

    //when
    List<PaymentMethod> methodsForCountry = PaymentMethod
        .getMethodsForCountry("PL", all);

    //then
    assertThat(methodsForCountry).isEqualTo(methodsAvailableInPoland);
}

The test seems right… but there is something disturbing about its "given" part. It uses two methods
of the PaymentMethod enum (values() and isEligibleForCountry()) in order to prepare the list
of expected methods. Let us have a look now at the implementation of the getMethodsForCountry()
method of the PaymentMethod class, to see if our fears are justified.
public enum PaymentMethod {

    public static List<PaymentMethod> getMethodsForCountry(
            Country country, List<PaymentMethod> availableMethods) {
        List<PaymentMethod> methodsForCountry = Lists.newArrayList();
        for (PaymentMethod method : availableMethods) {
            if (method.isEligibleForCountry(country.getCode())) {
                methodsForCountry.add(method);
            }
        }
        return methodsForCountry;
    }
}

Well, yes, it seems that we are using the same code to create both sides of the assertion - actual and
expected. This is plainly wrong! This way we can't really test whether this method works properly or
not. We only know that it is idempotent. What we should do, instead, is prepare the list of expected
methods by hand like this:
@Test
public void shouldGetMethodsForPoland() {
    //given
    List<PaymentMethod> all = Lists.newArrayList(PaymentMethod.values());
    List<PaymentMethod> methodsAvailableInPoland = Arrays.asList(
        new PaymentMethod[] {
            PaymentMethod.MASTERCARD,
            PaymentMethod.VISA,
            ...
            // and all other methods available in Poland
        });

    //when
    List<PaymentMethod> methodsForCountry = PaymentMethod
        .getMethodsForCountry("PL", all);

    //then
    assertThat(methodsForCountry).isEqualTo(methodsAvailableInPoland);
}

Ah, better now! :) This test no longer employs the logic of the PaymentMethod enum. It simply provides
a list of expected results.

3.3. Happy Path


Happy path testing is a well-defined test case using known input, which executes without
exception and produces an expected output.
Wikipedia

The issue of writing "happy path" tests arises so pervasively that it probably counts as the most popular
antipattern of all.

But what is "happy path" testing, anyway? For our purposes, it will suffice to say that "happy path" tests
cover the simplest, even obvious, scenarios. It is like testing that our calculator can add 2 and 2, and that
if you add user John Doe to the database, then you'll be able to find it there.

Let me stress one thing, though. Such tests are not evil! There is nothing wrong with having them. On
the contrary, they are essential. Why? Because the scenarios they cover are the most important ones for
your application. If your application does not handle such simple cases, then it is useless. So the problem
is not with writing such tests, but with the fact that they are often the only tests there are. This, of course,
leads to problems, as soon as the real data starts coming in: suddenly, it turns out that your calculator
can't handle negative values, and that your database isn't prepared to handle a surname longer than 20
characters or so. And your clients will notice this real soon!

3.3.1. FizzBuzz
Write a program that prints the integers from 1 to 100. But for multiples of three, print
"Fizz" instead of the number, and for multiples of five, print "Buzz". For numbers which
are multiples of both three and five, print "FizzBuzz".
FizzBuzz, RosettaCode

Let us have a look at this implementation of the test class which claims to verify whether the FizzBuzz
class behaves according to the specification presented above[1].

public class FizzBuzzTest {

    @Test
    public void testMultipleOfThreeAndFivePrintsFizzBuzz() {
        assertEquals("FizzBuzz", FizzBuzz.getResult(15));
    }

    @Test
    public void testMultipleOfThreeOnlyPrintsFizz() {
        assertEquals("Fizz", FizzBuzz.getResult(93));
    }

    @Test
    public void testMultipleOfFiveOnlyPrintsBuzz() {
        assertEquals("Buzz", FizzBuzz.getResult(10));
    }

    @Test
    public void testInputOfEightPrintsTheNumber() {
        assertEquals("8", FizzBuzz.getResult(8));
    }
}

[1] The test implementation is copied from a CodeReview discussion. See: http://codereview.stackexchange.com/questions/9749

Looks decent, doesn't it? Each requirement is covered by testing it with one piece of test data.

However, I would still call it a "happy path" test. The number of test cases is not enough to deduce
whether the FizzBuzz class gives correct answers for all integers from 1 to 100. It might, but it might
not. I could even come up with a very simple implementation that passes this test but does not really
fulfill the FizzBuzz requirements.

I am not saying we should test every integer from the given range (even though that would not be hard,
as there are only 100 of them), but I would definitely write some more test cases. The nice thing is that I
do not have to write more test methods. All I have to do is use more data. The next listing demonstrates
how to do this using JUnit.

Using JUnit? Use the JUnitParams[2] library to write parameterized tests.

@RunWith(JUnitParamsRunner.class)
public class FizzBuzzJUnitTest {

    @Test
    @Parameters(value = {"15", "30", "75"})
    public void testMultipleOfThreeAndFivePrintsFizzBuzz(
            int multipleOf3And5) {
        assertEquals("FizzBuzz", FizzBuzz.getResult(multipleOf3And5));
    }

    @Test
    @Parameters(value = {"9", "36", "81"})
    public void testMultipleOfThreeOnlyPrintsFizz(int multipleOf3) {
        assertEquals("Fizz", FizzBuzz.getResult(multipleOf3));
    }

    @Test
    @Parameters(value = {"10", "55", "100"})
    public void testMultipleOfFiveOnlyPrintsBuzz(int multipleOf5) {
        assertEquals("Buzz", FizzBuzz.getResult(multipleOf5));
    }

    @Test
    @Parameters(value = {"2", "16", "23", "47", "52", "56", "67", "68", "98"})
    public void testInputOfEightPrintsTheNumber(int expectedNumber) {
        assertEquals("" + expectedNumber,
            FizzBuzz.getResult(expectedNumber));
    }
}

[2] https://code.google.com/p/junitparams/

This test is definitely stronger. It tests more than one case for each requirement.

Play BizzBuzz with your friends. See http://en.wikipedia.org/wiki/Bizz_buzz for game
instructions. :)

3.3.2. Not Enough Testing


Once upon a time, I was engaged in developing a web application. It presented some data to the user in
the form of paged lists. My task was to develop a component which would provide all the data required
in order to create "paging controls" - like this, for example:
<< < 1 2 3 ... 99 > >>

Basically, the component (based on the information about the current page, the number of elements per
page and the total number of elements) had to provide such information as:

which pages to print,

what the offsets for each page were (so the links could be generated),

which pages should not be linked (i.e. the current one).

The algorithms were rather simple, but there was some danger of off-by-one errors[3], so I decided to
write unit tests. There were many of them. The ones shown below are related to the "next" button (the
> sign which takes you one page forward).

@Test
public class PagerTest {

    private static final int PER_PAGE = 10;

    public void shouldGiveOffsetZeroWhenOnZeroPage() {
        Pager pager = new Pager(PER_PAGE);

        assertThat(pager.getOffset()).isEqualTo(0);
    }

    public void shouldIncreaseOffsetWhenGoingToPageOne() {
        Pager pager = new Pager(PER_PAGE);

        pager.goToNextPage();

        assertThat(pager.getOffset()).isEqualTo(PER_PAGE);
    }

[3] See http://en.wikipedia.org/wiki/Off-by-one_error.

Both tests verify whether the right offset is given. This information is used later by database queries
to decide which rows should be retrieved from the database. So, given that there should be 10 elements
printed on each page (PER_PAGE = 10), we should have offset 0 on page 0, and offset 10 on
page 1. Simple, isn't it?

So anyway, I implemented this feature, and all the tests were green, which in turn meant I was convinced
everything was fine. Then, after some time, when clicking through the web pages of this application, I
discovered (to my horror and dismay!) that the "paging" was not working properly! Starting from page
1, every page presented exactly the same set of items! This was ridiculous! I knew it worked - I had
my tests, after all!

The problem with a situation like this is that you really cannot find a bug so long as that
voice inside your head keeps on shouting such drivel as "it works! it has to work! it works!"
(against all the obvious evidence that this is not, in fact, the case). Put a stop to it, and get
on with finding the bug!

So, after an initial period of confusion I took a look at the tests once again. I asked myself: What are
they testing? Well, they are testing whether you can go from page 0 to page 1. And, in my opinion (at
least at the time of writing the tests), the fact that you can go from 0 to 1 meant that you should also
be able to move from 1 to 2, and then from 2 to 3, and so on. And this seemed so obvious to me that I
hadn't bothered with writing any more tests, because if it worked for "from-0-to-1", then it would also
surely work for "from-1-to-2", right? Well, in theory, maybe so, but I was proven wrong on this one! If
you look at these tests with a clear mind, you will notice what they do not test. They do not test going
from page 1 to page 2, and going from page 2 to page 3, and so on. Uh… and the problems with the
production code showed me clearly that they should!

Now take a look at the production code:


public void goToNextPage() {
    this.offset = +perPage;
}

It passes all my tests with flying colors, but it still has a bug! BTW, can you spot it?

To better expose this bug, you could write a test like this:
public void shouldIncreaseOffsetTwiceWhenGoingToPageTwo() {
    Pager pager = new Pager(PER_PAGE);

    pager.goToNextPage();
    pager.goToNextPage();

    assertThat(pager.getOffset()).isEqualTo(2 * PER_PAGE);
}

Now, what can we learn from this example? Well, I believe the main lesson here is that you should not be
satisfied with the simplest possible test case. One test is not enough to verify whether some functionality
is working properly. Take at least two cases.

The cases should differ as much as possible, so the probability of finding some bugs is really
increased. Adding test cases only to have more test cases makes no sense. In fact, it smells
like a violation of the DRY principle and should be avoided!


Zero, One and Many

A good rule when thinking about test cases is to consider three cases: for zero, for one and for
many. This idea is hard to define, but quite simple to understand with the help of a few examples:

Testing an SQL query, we would like to test whether it behaves correctly when:

a. there is no object in the database that meets the query criteria,

b. there is exactly one object in the database that meets the query criteria,

c. there are many objects in the database that meet the query criteria.

In the case of our pager, we have already tested for zero and for one. What was missing was
the many test, which in this case would verify the ability of the pager to move from page 5 to
page 6 (or any other page number greater than 1).
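
A sketch of such a missing "many" test (assuming the same Pager API as in the earlier listings) might look like this:

public void shouldIncreaseOffsetWhenGoingManyPagesForward() {
    Pager pager = new Pager(PER_PAGE);

    // go forward six times - any number of pages greater than 1 would do
    for (int i = 0; i < 6; i++) {
        pager.goToNextPage();
    }

    assertThat(pager.getOffset()).isEqualTo(6 * PER_PAGE);
}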

3.4. Expecting Exceptions Anywhere


The test I would now like to discuss comes straight from an online tutorial. It is an awful test with more
than one issue. We will concentrate on one part of it: namely, how the exceptions are tested.

I hope your Mum taught you that you certainly should not trust everything you read! So
choose the tutorials you learn from with care!

The story is that the author of this code created their own implementation of the List interface. The
two test methods presented below are the only tests which verify whether the MyList class works as
expected.
@Test(expected=IndexOutOfBoundsException.class)
public void testMyList() {
    MyList<Integer> list = new MyList<Integer>();
    list.add(1);
    list.add(2);
    list.add(3);
    list.add(3);
    list.add(4);
    assertTrue(4 == list.get(4));
    assertTrue(2 == list.get(1));
    assertTrue(3 == list.get(2));

    list.get(6);
}

@Test(expected=IndexOutOfBoundsException.class)
public void testNegative() {
    MyList<Integer> list = new MyList<Integer>();
    list.add(1);
    list.add(2);
    list.add(3);
    list.add(3);
    list.add(4);
    list.get(-1);
}


As you will have noticed, both test methods are decorated with
@Test(expected=IndexOutOfBoundsException.class). This means both will pass only if this
exception is thrown. The point is that it does not matter when this exception is thrown, as long as any
of the statements executed within each test method throws it. So, such a flawed implementation of the
MyList class as presented below passes both test methods with flying colors.

public class MyList<T> {

    public MyList() {
        throw new IndexOutOfBoundsException();
    }

    public void add(T i) {
    }

    public T get(T i) {
        return null;
    }
}

Well, this is embarrassing… Apparently, we need to do something about the exception verification.
There are two main techniques we could use. The first is to split the test into several smaller tests (obeying
the SRP principle - see Section 4.6). The second is to better localize the exception.

3.4.1. Split
The first approach would result in several small test methods, each of them occupied
with a different functionality of the MyList class, like shouldKeepValuesInOrder() or
shouldAcceptDuplicateValues(), and so on. Now, once we have this verified, we would also go on
to create a test method specifically for verifying what happens when we try to retrieve some element
that is not there. Like this:
@Test(expected=IndexOutOfBoundsException.class)
public void shouldThrowExceptionWhenTryingToGetElementOutsideTheList() {
    MyList<Integer> list = new MyList<Integer>();
    list.add(0);
    list.add(1);
    list.add(2);
    list.get(3);
}

This does not differ much from the original test, but, thanks to our having other test methods
(shouldKeepValuesInOrder(), shouldAcceptDuplicateValues(), etc.), we are in a much safer
position. We have already verified the constructor, and that the basic add/get functionality works, so
we can assume with greater confidence that the IndexOutOfBoundsException will be caused by the
last line of this test method.
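
Just to give an idea, one of those smaller companion tests might look more or less like this (a sketch - the exact MyList API is assumed to mirror java.util.List):

@Test
public void shouldKeepValuesInOrder() {
    MyList<Integer> list = new MyList<Integer>();
    list.add(7);
    list.add(3);
    list.add(5);

    assertEquals(Integer.valueOf(7), list.get(0));
    assertEquals(Integer.valueOf(3), list.get(1));
    assertEquals(Integer.valueOf(5), list.get(2));
}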

3.4.2. Localize
The other approach says we should localize the exception. To accomplish this we need to get rid of the
expected attribute of the @Test annotation, which is satisfied when an exception is thrown at any point
of the test method.


The first thing we could do is use the try-catch statement, and fail the test if we do not enter the catch
clause. However, there are some nicer options.
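
Just to illustrate the plain try-catch variant mentioned above (a sketch only):

@Test
public void shouldThrowExceptionWhenTryingToGetElementOutsideTheList() {
    MyList<Integer> list = new MyList<Integer>();
    list.add(0);
    list.add(1);
    list.add(2);

    try {
        list.get(3);
        fail("expected an IndexOutOfBoundsException to be thrown");
    } catch (IndexOutOfBoundsException expected) {
        // expected - nothing to do here, the test passes
    }
}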

The solution presented below makes good use of:

the catch-exception library[4], which offers the catchException() and the caughtException()
methods, which are exactly what we need,

the nice DSL provided by AssertJ to examine the exception within the catch clause.

@Test
public void shouldThrowExceptionWhenTryingToGetElementOutsideTheList() {
    MyList<Integer> list = new MyList<Integer>();
    list.add(0);
    list.add(1);
    list.add(2);

    catchException(list).get(3);

    assertThat(caughtException())
        .isExactlyInstanceOf(IndexOutOfBoundsException.class);
}

[4] See http://code.google.com/p/catch-exception/

Now there is no doubt about where the exception comes from! We can pinpoint it with great accuracy,
and the result of the test will leave no doubt about the source of the exception.

Catch-exception is quite handy. Consider using it everywhere, instead of the expected annotation
attribute or a try-catch statement.

3.5. Evolve or…
Software evolution […] the process of developing software initially, then repeatedly
updating it for various reasons.
Wikipedia

Below, you can see a test that caught my attention during code review. It tests the Transaction class,
which holds information about the user involved in the transaction, together with the amount of the
transaction. Both of the test methods shown verify "negative" cases: the system is supposed to treat
transactions with an amount equal to or less than zero as invalid.

public class TransactionTest {

    @Test
    public void shouldRecognizeTransactionsWithZeroValueAsInvalid() {
        //given
        Transaction tx = new Transaction(BigDecimal.ZERO,
            new InternalUser());

        //when
        boolean actual = tx.validate();

        //then
        assertThat(actual).isFalse();
    }

    @Test
    public void shouldRecognizeTransactionWithNegativeValueAsInvalid() {
        //given
        Transaction tx = new Transaction(BigDecimal.ONE.negate(),
            new InternalUser());

        //when
        boolean actual = tx.validate();

        //then
        assertThat(actual).isFalse();
    }
}

This test was valid and reasonable some time ago, when the validate() method looked like this:

public boolean validate() {
    return amount.compareTo(BigDecimal.ZERO) > 0;
}

However, when I came to review the test the same method had the following form:

public boolean validate() {
    if (!user.isExternal()) {
        return false;
    }
    return amount.compareTo(BigDecimal.ZERO) > 0;
}

What this means is that both tests will pass, regardless of whether or not the method properly verifies the
amount of the transaction. The first check - user.isExternal() - is what really makes both tests pass.

What happened here? Well, the code had evolved, but the tests had not. Both test methods should
have been modified by replacing the instance of the InternalUser class with one belonging to the
ExternalUser class. In addition, a further test should have been written to verify the behaviour of the
validate() method with a valid amount and a non-external user.

When your code evolves, take care that your tests also evolve. The fact that they still pass is
not enough to justify saying that everything is fine!

And BTW, if you think about it some more… Shouldn't the test evolve before the code, rather than after
it? This may not always be applicable (especially in cases of huge changes) but whenever possible,
go for it!

This is one of the arguments for code coverage. Someone who had been following the
coverage results on a regular basis would have been in a position to notice the decline in
coverage for this class or package.


3.6. Assertions should be Merciless


Ending your test with assertXYZ is not enough. The point is to have assertion(s) which really verify the
scenario being tested - something which does not always occur, as this example demonstrates:
@Test
public void shouldRemoveEmailsByState() {
    //given
    Email pending = createAndSaveEmail("pending", "content pending",
        "abc@def.com", Email.PENDING);
    Email failed = createAndSaveEmail("failed", "content failed",
        "abc@def.com", Email.FAILED);
    Email sent = createAndSaveEmail("sent", "content sent",
        "abc@def.com", Email.SENT);

    //when
    emailDAO.removeByState(Email.FAILED);

    //then
    assertThat(emailDAO.findAll()).doesNotContain(failed);
}

This test first creates three email entities and stores them in the database. Then it executes the
removeByState() method of the emailDAO SUT. Next it verifies whether… Well, yes, what does it
really verify?

The assertion makes sure that the failed email is not in the database anymore. However, it does not
prove that the removeByState() works as expected. In particular, if the method is flawed and removes
all emails, then the test still passes!

My suggestion would be to protect ourselves against such issues. For example, like this:
assertThat(emailDAO.findAll())
    .isNotEmpty()
    .doesNotContain(failed);

or even better, like this:


assertThat(emailDAO.findAll())
    .contains(pending, sent)
    .doesNotContain(failed);

This way we verify exactly what has been removed from the database.

The contains() and doesNotContain() methods are part of the AssertJ API (which you should
really consider using if you use JUnit or TestNG!).

3.7. Is Mockito Working Fine?


This section was written by Tomasz Borek (https://lafkblogs.wordpress.com/)


So we have a form. And sooner or later this form will be filled in with some data. And for some reason
we would like to control whether the data it holds could be updated or not. The test you see below is
supposed to verify exactly this.
@Test
public void testFormUpdate() {
    // given
    Form f = Mockito.mock(Form.class);
    Mockito.when(f.isUpdateAllowed()).thenReturn(true);

    // when
    boolean result = f.isUpdateAllowed();

    // then
    assertTrue(result);
}

Hm… what's wrong with this test, which has been written quite clearly? Well, it doesn't test the Form
class, but the Mockito framework! If Mockito.mock() and Mockito.when() work as they should, then
this test will be green. Even if we completely break the logic inside the isUpdateAllowed() method of
the Form class, the test will still be green.

What should we do, then? Obviously, we should make the result of this test depend on the behaviour
of the tested class. In short, we should not mock the tested method, but use the real one. For example,
like this:
@Test
public void testFormUpdate() {
    // given
    Form f = new Form();
    f.setUpdateAllowed(true);

    // when - then
    assertTrue(f.isUpdateAllowed());
}

For more complex cases, we might be forced to actually mock something: for example, if some external
component (validator, or security engine) were to be required by the isUpdateAllowed() method to
provide an answer. But we would still mock this external component, and not the tested class!
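
A sketch of how this could look (the Validator collaborator and the Form constructor that takes it are made up here for illustration):

@Test
public void testFormUpdateWithExternalValidator() {
    // given - mock only the collaborator, never the class under test
    Validator validator = Mockito.mock(Validator.class);
    Mockito.when(validator.isUpdateAllowed(Mockito.any(Form.class)))
        .thenReturn(true);
    Form f = new Form(validator);    // the real Form is what we test

    // when - then
    assertTrue(f.isUpdateAllowed());
}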

3.8. Expected Exceptions and Verification


Test frameworks and tools really are so simple, aren't they? But watch out! Sometimes they can act
differently from how you expect them to.

Consider the following method from production code:


public void registerDomain(Domain domain) {
    try {
        dnsService.addDomainIfMissing(domain.getAddress());
    } catch (RuntimeException ex) {
        domainService.saveDomain(domain, domain.isRegisteredInDns(),
            domain.getDnsFailures() + 1);
        throw ex;
    }

    domainService.saveDomain(domain, domain.isRegisteredInDns(),
        domain.getDnsFailures());
}

A developer intended to verify whether a saveDomain() method of the DomainService class would
be called with the appropriate parameters (i.e. with the increased number of DNS failures), even in the
event of an exception. Let us take a look at the test code:
@Test(expected = RuntimeException.class)
public void shouldSaveFailureInformationWhenExceptionOccurWhenAddingDomain() {
    //given
    doThrow(new RuntimeException()).when(dnsService)
        .addDomainIfMissing(DOMAIN_ADDRESS);

    //when
    domainRegistrator.registerDomain(domain);

    //then
    verify(domainService)
        .saveDomain(domain, false, DNS_FAILURES + 1);
}

As you can see, there are two verifications. Firstly, the test verifies whether an exception is rethrown.
This is done with the expected attribute of the @Test annotation. Secondly, the last line verifies that
the expected method of the domainService collaborator has been called.

We should rather strive for SRP, but sometimes it makes sense to test more than one thing
in one test (e.g. when the setup cost is really high, as it sometimes is with integration and
end-to-end tests).

That is exactly what we wanted to have, right? Well, not quite… because it doesn't work!

The problem is that the second verification never happens. Why? Because this line:
domainRegistrator.registerDomain(domain);

throws an exception which ends the execution of the test!

The fix is straightforward. We need to catch the exception within the test, so that the last line is also
executed. This can be done with a standard try/catch statement, but we can do better than this using
the catch-exception library:
@Test
public void shouldSaveFailureInformationWhenExceptionOccurWhenAddingDomain() {
    //given
    doThrow(new RuntimeException()).when(dnsService)
        .addDomainIfMissing(DOMAIN_ADDRESS);

    //when
    catchException(domainRegistrator).registerDomain(domain);

    //then
    verify(domainService)
        .saveDomain(domain, false, DNS_FAILURES + 1);
    assertThat(caughtException()).isInstanceOf(RuntimeException.class);
}


As you can see, we've got rid of the expected attribute from the @Test annotation. Instead, we catch
the exception using the catchException() method, and then verify it using the caughtException()
method.
After this change, the test is no longer interrupted when the exception is thrown, and all verifications
are performed.
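For comparison, the plain try/catch variant mentioned above could look more or less like this (a sketch reusing the same mocks and constants as the test above):

@Test
public void shouldSaveFailureInformationWhenExceptionOccurWhenAddingDomain() {
    //given
    doThrow(new RuntimeException()).when(dnsService)
            .addDomainIfMissing(DOMAIN_ADDRESS);

    //when
    RuntimeException caught = null;
    try {
        domainRegistrator.registerDomain(domain);
    } catch (RuntimeException ex) {
        caught = ex;
    }

    //then - this part is reached even though an exception was thrown
    verify(domainService).saveDomain(domain, false, DNS_FAILURES + 1);
    assertThat(caught).isInstanceOf(RuntimeException.class);
}

It works, but the catch-exception version above expresses the same intention with less ceremony.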

Testing frameworks aren't very complex; however, there are always some gotchas. Make
sure to read the documentation.

There is one more thing left to point out. I bet the original test was written after the code. If the
developer had followed the TDD rules, he/she would first have run the test, seen it fail and only then
have implemented the feature. Working code-first, you risk creating useless tests.

Always see the failing test first.

3.9. Mockito any() vs. isA()


This example is courtesy of Bartek Zdanowski (http://touk.pl/blog/author/bzd/). Thank you!

Let us consider the following production code:


public class AddOrganizationAction implements Action { ... }
public class AddPersonToOrganizationAction implements Action { ... }

public interface DispatchAsync {
void execute(Action action, AsyncCallback callback);
}

In many cases we might be tempted to verify the execution of the execute() method using the any()
matcher, like this:
verify(async).execute(any(AddOrganizationAction.class),
any(AsyncCallback.class));

This looks right. However, there is one issue with this code. Let us try to change the first argument from
AddOrganizationAction.class to AddPersonToOrganizationAction.class:

verify(async).execute(any(AddPersonToOrganizationAction.class),
any(AsyncCallback.class));

To our surprise, the test still passes! Why is that?

The mystery is revealed once you look into the JavaDocs of Mockito's Matchers class. It says the
following:

This method don't do any type checks; it is only there to avoid casting in your code.


Mockito JavaDocs

So, every time you say you expect any(SomeClass.class), hoping that it will verify whether an object
of SomeClass was passed as an argument, you are going wrong!

Read the documentation of the frameworks you use. And help the authors make them better,
if something is unclear!

In our case, in order to really verify the execution of the execute() method we should use another
matcher: isA(). This one, unlike the any() matcher, performs type checking.

verify(async).execute(isA(AddOrganizationAction.class),
any(AsyncCallback.class));

Search for any() usages in your Mockito-powered tests, and think about changing them to
isA().

3.10. Be Generic!
Once upon a time there was a servlet which took two parameters - packet and type. If both were present in
a request, then the servlet asked its collaborator - packetDataProcessor - to process a specific packet.
If one or both of the parameters were missing, then no processing was supposed to occur. Simple, isnt
it? Let us have a look at the tests now.

3.10.1. Expecting Something Won't Happen


Let us start with a "happy path" test, which verifies the behaviour of the servlet when both parameters
are present (and valid).

@Test
public void shouldProcessPacket() throws IOException, ServletException {
//given
given(request.getParameter(PacketApi.PACKET_PARAMETER))
.willReturn(PACKET);
given(request.getParameter(PacketApi.TYPE_PARAMETER))
.willReturn(TYPE);

//when
servlet.doGet(request, response);

//then
verify(packetDataProcessor).process(PACKET, TYPE);
}

This one looks good to me. I would probably have given it a slightly more descriptive name. However,
apart from this minor issue, it is perfect.

But let us look at the next test.


@Test
public void shouldNotProcessIfPacketParameterIsMissing()
throws IOException, ServletException {
//given
given(request.getParameter(PacketApi.TYPE_PARAMETER))
.willReturn(TYPE);

//when
servlet.doGet(request, response);

//then
verify(packetDataProcessor, never()).process(PACKET, TYPE);
}

So this is a "negative" case. One of the required parameters is missing5, and there should be no
processing. Seems ok, but there is one thing bothering me. Why does the verification assume that the
packetDataProcessor could be asked to process PACKET? This value does not appear anywhere in the
test, so why does the verification use it? Aha! It does not appear in the test, but is surely still in the head
of the developer who wrote the previous test (see the previous listing).

Anyway, this is too specific. I would suggest that the test take the following form: "If one of the required
parameters is missing, then packetDataProcessor should process nothing". And after translation into
Java/Mockito/JUnit language that will look thus:

@Test
public void shouldProcessNothingIfPacketParameterIsMissing()
throws IOException, ServletException {
//given
given(request.getParameter(PacketApi.TYPE_PARAMETER))
.willReturn(TYPE);

//when
servlet.doGet(request, response);

//then
verifyZeroInteractions(packetDataProcessor);
}

The verifyZeroInteractions() method of Mockito is meant exactly for such purposes. We want to
know that there were no calls to packetDataProcessor, because a validation mechanism should reject
any incomplete request beforehand.

5 This required parameter PACKET_PARAMETER is not mentioned in the tests, and Mockito will by default return null when asked for it. Which is something you probably know. :)


Explicit or Implicit?

As you probably know, when you ask a test double for some value you will get "null, an
empty collection or appropriate primitive/primitive wrapper value"6 unless you have explicitly
instructed Mockito as to what value should be returned. This basically means that this snippet
of code:
HttpServletRequest request = mock(HttpServletRequest.class);
given(request.getParameter(PacketApi.PACKET_PARAMETER))
.willReturn(null);
given(request.getParameter(PacketApi.TYPE_PARAMETER))
.willReturn(TYPE);

null returned explicitly

is equivalent to this:
HttpServletRequest request = mock(HttpServletRequest.class);
given(request.getParameter(PacketApi.TYPE_PARAMETER))
.willReturn(TYPE);

Now, which version should you use in your tests? This is a matter of taste, and personally I like
my tests to tell the whole story. Which means, I would rather add these (actually redundant)
lines of code, in order to inform the reader about all aspects of the test case.

3.10.2. Expecting a Specific Exception


Another example from the same test class. This time it is about the servlet's behaviour after a processing
error.
@Test
public void shouldReturnStatus500IfThereWasAnErrorDuringProcessing()
throws IOException, ServletException {
//given
given(request.getParameter(PacketApi.PACKET_PARAMETER))
.willReturn(PACKET);
given(request.getParameter(PacketApi.TYPE_PARAMETER))
.willReturn(TYPE);
doThrow(NullPointerException.class)
.when(packetDataProcessor).process(PACKET,TYPE);

//when
servlet.doGet(request, response);

//then
verify(response).setStatus(HttpServletResponse.SC_INTERNAL_SERVER_ERROR);
}

This test verifies the behaviour of the servlet in the event of a NullPointerException being thrown
when processing. Hmm… Why, from the plethora of exceptions, was NPE selected? It is hard to say. I
took a look at the implementation of packetDataProcessor and found out that it throws some parsing
exceptions when trying to process garbage data. Having a test which concentrates solely on NPE will
not tell us what happens when other exceptions are thrown. Which is bad.
6 See Mockito's documentation.


As with the previous example, this test is too specific. Instead of just verifying whether status code 500 is
set in the event of a NullPointerException, it should really verify whether this status code is returned
in the event of any exception during processing (as shown below).
@Test
public void shouldReturnStatus500IfThereWasAnErrorDuringProcessing()
throws IOException, ServletException {
//given
given(request.getParameter(PacketApi.PACKET_PARAMETER))
.willReturn(PACKET);
given(request.getParameter(PacketApi.TYPE_PARAMETER))
.willReturn(TYPE);
doThrow(Exception.class)
.when(packetDataProcessor).process(PACKET,TYPE);

//when
servlet.doGet(request, response);

//then
verify(response).setStatus(HttpServletResponse.SC_INTERNAL_SERVER_ERROR);
}

Do not be too specific. It will make your test less valuable, by examining only a subset of
the test cases.

3.11. Write the Right Tests


In contrast to all other sections, I have no code to present here. But I do have a story to tell. So…

I was once working on a really good project. The code quality was above average, the architecture
was right, the people were smart. Everything seemed to be in place. However, as the complexity of
our software grew, we started to notice various issues. The charts presented on the client dashboard
went bonkers when users changed time zone. It happened that during some big "refactorings" or code
merging we mysteriously "lost" some functionality. Users reported issues with exporting functionality -
functionality which hung when too many objects were involved. After registration, notifications were
not sent appropriately. And so on and so on.

All of this was really irritating. We had unit tests, for God's sake! In fact, we had plenty of them. Our
code was more OO than ever - every small class was testable and we really had unit tests covering them
all! Still, we had so many issues with our project!

The moral of the story is that depending on the type of application you write, some types of test (unit,
integration, end-to-end, performance, etc.) are probably going to be more important than the rest. If you
put too much effort into unit testing, but not enough into end-to-end tests, you might encounter the sort
of issues we did. On the other hand, if you do things the other way around, you'll face different issues
- related to some untested corner cases and to the maintenance of many (heavy and slow) end-to-end
tests. Kudos to those who are able to come up with the right proportion of unit, integration and end-to-end
tests!

Throughout this book we pay a great deal of attention to writing our tests in the right way. We ponder
over such things as the naming of variables, lack of implementation details, clarity of test code, and so
on. Each of these seems tiny, but when you neglect several of them your tests become a nightmare to
maintain. I do hope you really understand this!


And this is all true and very important. However, it is not enough. Even the best, most beautiful tests
are not enough, if (when summed up) they do not cover the whole functionality of the system.

Write tests which make you confident that your system works.

Commercial Break :)

Hi, how's it going? Do you like the book so far? Feel like you've learned something useful?
I hope so!

You might also like to read my other books devoted to testing. Please visit http://
practicalunittesting.com to find out about them.

Chapter 4. Maintainability
[…] unless the tests are designed so they're easy to modify, updating them after a system
change can take a lot of time.
Gojko Adzic, Specification by Example

Tests help us to maintain our software: if we introduce changes, they will inform us about whatever it
is that is working differently from before. And this is great!

However, there is also another side to this story: the issue of just how very "fragile" our tests are. If
we introduce even just a single change into our production code, then how many of them are we going
to need to update?

In this section we shall examine a number of tests, considering them with a view to their chances of
surviving such changes in production code. In contrast to the tests from the previous chapter, these
usually verify something quite well. Now we shall set about trying to improve them, so they don't break
so easily.

4.1. Mock'em All!


To a man with a hammer everything looks like a nail.
Abraham Maslow

This is something I observe frequently among developers who have only recently discovered the
pleasures of mocking. :) They tend to overuse all kinds of test double, even when it is not really the best
option. Let's discuss some examples.

4.1.1. Mocking Container


The code below aims to verify whether some data has been added to a modelAndView object (this is
ModelAndView from the Spring MVC framework). Yes, you heard it right: it verifies whether something
has been added.
@Test
public void shouldAddTimeZoneToModelAndView() {
//given
Context context = mock(Context.class);
ModelAndView modelAndView = mock(ModelAndView.class);
given(context.getTimezone()).willReturn("timezone X");

//when
new UserDataInterceptor(context)
.postHandle(null, null, null, modelAndView);

//then
verify(modelAndView).addObject("timezone", "timezone X");
}

The problem with this test is that what we really care about is whether modelAndView contains certain
data, not whether some methods of the modelAndView have been called! It does not seem to be such a big
deal. If it has been added then it is probably there, so what is the problem? Right? This test is probably
doing its job in verifying that the modelAndView object ends up with the timezone set. However as a
rule, we should always try to verify the outcome itself, not how it has been achieved. Why? Because
this way we leave open the possibility of changing the implementation details of the tested class without
breaking the test.

I will give you an example (even if it is not true in relation to the ModelAndView API). Should we care
whether the timezone property was set like this,
ModelAndView mav = new ModelAndView();
mav.addObject("timezone", "timezone X");

or like this?
ModelAndView mav = new ModelAndView("timezone", "timezone X");

Does it really matter how the timezone value got inside the modelAndView object? I think not. And the
test also shouldn't bother with such nuances. Still, the version of the test we have just discussed does
exactly this: it verifies whether the SUT does things in a certain way.

Below, you can see an improved version of this test. It treats the SUT more like a black box, and only
cares about its output.
@Test
public void shouldAddTimeZoneToModelAndView() {
//given
Context context = mock(Context.class);
ModelAndView modelAndView = new ModelAndView();
given(context.getTimezone()).willReturn("timezone X");

//when
new UserDataInterceptor(context)
.postHandle(null, null, null, modelAndView);

//then
assertThat(modelAndView).contains("timezone", "timezone X");
}

There is no contains() method like this, but this is the assertion we would like to have.
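One way to turn this wish into a real assertion is to look inside the model map, for example like this (a sketch, assuming AssertJ's map assertions and Spring MVC's ModelAndView.getModel() method):

assertThat(modelAndView.getModel()).containsEntry("timezone", "timezone X");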

As you can see, in the second version we are using a real object, not a test double. And this is fine,
because ModelAndView is not a service, but a container for data (a Map which contains values passed
from controllers to views), which should not be mocked. Howgh!

Excited that you can mock everything, huh? Slow down and make sure that you really need
to verify interactions. Chances are you don't.

4.1.2. Mocking PrintWriter


Let us look at another example of mocks overuse. This one comes from the Spring MVC application.
When asked for data, the SUT - a controller - was expected to:

fetch some data from its collaborator - an object of the ReportService type,
write this data to the CSV file and return it to the user.

public class ReportController {

@Autowired
private final ReportService reportService;

public void generateReport(HttpServletRequest request,
HttpServletResponse response) throws IOException {
Filter filter = parseRequest(request);
List<ReportData> reportData = reportService.getReportData(filter);
PrintWriter writer = response.getWriter();
writeHeaders(writer);
for (ReportData data : reportData) {
writer.append(String.valueOf(data.getMin()));
writer.append(",");
writer.append(String.valueOf(data.getMax()));
writer.append(",");
writer.append(String.valueOf(data.getAvg()));
writer.append("\n");
}
}

private void writeHeaders(PrintWriter writer) {
writer.append("min,max,avg\n");
}

private Filter parseRequest(HttpServletRequest request) { ... }
}

The interesting part is the controller using a PrintWriter object to create a response for users. To test
the result (the content of the response) we need to somehow intercept the data written to the stream.
This is an original attempt:
public class ControllerTest {

HttpServletRequest req = mock(HttpServletRequest.class);
HttpServletResponse resp = mock(HttpServletResponse.class);
ReportService reportService = mock(ReportService.class);
PrintWriter writer = mock(PrintWriter.class);

ReportController ctrl = new ReportController(reportService);

@Test
public void shouldWriteReportData() throws IOException {
// given
ReportData dataPl = new ReportData(1, 2, 1.5);
ReportData dataFr = new ReportData(3, 4, 0.12345);

given(resp.getWriter()).willReturn(writer);
given(reportService.getReportData(any(Filter.class)))
.willReturn(Arrays.asList(dataPl, dataFr));

// when
ctrl.generateReport(req, resp);

// then
InOrder inOrder = Mockito.inOrder(writer);
inOrder.verify(writer).append("min,max,avg\n");

inOrder.verify(writer).append(String.valueOf(1));
inOrder.verify(writer).append(",");
inOrder.verify(writer).append(String.valueOf(2));
inOrder.verify(writer).append(",");
inOrder.verify(writer).append(String.valueOf(1.5));
inOrder.verify(writer).append("\n");
inOrder.verify(writer).append(String.valueOf(3));
inOrder.verify(writer).append(",");
inOrder.verify(writer).append(String.valueOf(4));
inOrder.verify(writer).append(",");
inOrder.verify(writer).append(String.valueOf(0.12345));
inOrder.verify(writer).append("\n");
}
}

Hmm, the "then" part looks bad. What it does is verify every single call to the writer object. This is
not only ugly and unreadable, but also extremely fragile. Any change in the way the data is written will
make this test fail. That is not good.

It seems we can't verify whether the correct data has been written into the CSV file just by mocking an
object of the PrintWriter class. Let's try another approach, then.

public class ControllerWithStringWriterTest {

HttpServletRequest req = mock(HttpServletRequest.class);
HttpServletResponse resp = mock(HttpServletResponse.class);
ReportService reportService = mock(ReportService.class);

ReportController ctrl = new ReportController(reportService);

@Test
public void shouldWriteReportData() throws IOException {
// given
ReportData dataPl = new ReportData(1, 2, 1.5);
ReportData dataFr = new ReportData(3, 4, 0.12345);

Writer stringWriter = new StringWriter();
PrintWriter writer = new PrintWriter(stringWriter);
given(resp.getWriter()).willReturn(writer);
given(reportService.getReportData(any(Filter.class)))
.willReturn(Arrays.asList(dataPl, dataFr));

// when
ctrl.generateReport(req, resp);

// then
assertThat(stringWriter.toString())
.isEqualTo("min,max,avg\n1,2,1.5\n3,4,0.12345\n");
}
}

Better now, isn't it? The main change, which has enabled us to verify the expected result much more
simply, is that instead of mocking PrintWriter we used a real one. In fact, we are using two real writers,
which, combined together, give us what we need: the ability to verify the outcome of the controller's
work in a simple and readable way. The main advantage of this approach, as against our previous attempt,
is that this time we are verifying the outcome, not the way it was achieved. Thus, we are free to modify
the internals of the generateReport() method.


Instead of using Mockito's mocks for HttpServletRequest and HttpServletResponse,
we could equally have used the MockHttpServletRequest and MockHttpServletResponse
classes from the Spring project.
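For example, a variant of the above test using Spring's test doubles could look roughly like this (a sketch; ctrl and the reportService mock are the same as in the test class above, and MockHttpServletResponse simply records whatever the controller writes to its writer):

@Test
public void shouldWriteReportData() throws IOException {
    // given
    MockHttpServletRequest req = new MockHttpServletRequest();
    MockHttpServletResponse resp = new MockHttpServletResponse();
    given(reportService.getReportData(any(Filter.class)))
            .willReturn(Arrays.asList(new ReportData(1, 2, 1.5),
                                      new ReportData(3, 4, 0.12345)));

    // when
    ctrl.generateReport(req, resp);

    // then
    assertThat(resp.getContentAsString())
            .isEqualTo("min,max,avg\n1,2,1.5\n3,4,0.12345\n");
}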

4.2. Control the Environment


You should know exactly what environment your tests are being run in. If you don't, you'll end up
spending time wondering why your test is behaving oddly. Moreover, it can be a long while before you
realize that it was the assumptions that were not valid. So every time you use some file system, database
or external services, you'd better really make sure you know what their state is before executing your
tests.

Let us take a look at two examples of what happens when you ignore this advice.

4.2.1. Global State and Test Dependencies


With unit tests, one rule of thumb is to keep them independent from each other. And this is right. For
integration and end-to-end tests, it sometimes makes sense to have some dependencies between tests
(e.g. so they run in a certain order). And it isn't really so bad if we consciously introduce a dependency
(and explicitly declare it - for example, using the TestNG dependsOnMethods feature). A worse scenario
is when we are not aware of the dependency ourselves. This can bring serious problems down on us.

Keep your tests independent from each other. Or at least make the dependency explicit!

Let's have a look at the following example. The test presented below verifies whether the log4j3
configuration code is working okay. If a system property named logConfig is available, it should
load the file specified by this property. If the property has not been set, it should load some default
configuration.

LoggingPropertyConfigurator configurator
= mock(LoggingPropertyConfigurator.class);
BaseServletContextListener baseServletContextListener
= new BaseServletContextListener(configurator);

@Test
public void shouldLoadDefaultProperties() {
baseServletContextListener.contextInitialized(null);
verify(configurator).configure(any(Properties.class));
}

@Test(expected = LoggingInitialisationException.class)
public void shouldThrowLoggingException() {
System.setProperty("logConfig", "nonExistingFile");
baseServletContextListener.contextInitialized(null);
}

3 http://logging.apache.org/log4j/

This test stayed green for months. Then, suddenly (after some completely unrelated commit), it turned
red. Why?! For no apparent reason… The log4j configuration was not something we had often changed.
In fact, we hadn't touched it for a long time…

After some digging, we found out that the test failed because the order of execution of tests
had changed. Everything was fine as long as the shouldLoadDefaultProperties() method was
executed before the shouldThrowLoggingException() method. But once this order was changed,
things started to go wrong. In particular, the logConfig system property was set when the
shouldLoadDefaultProperties() test was executed, which altered the behaviour of the SUT and
made the test fail.

And why was the order of execution changed? Well, it doesn't really matter. In general, test frameworks
do not guarantee the order of execution (unless you explicitly ask for it), so you should not rely on this.

Be extra cautious when modifying the global state (system properties, file system, database
etc.) in your tests. This can influence other tests. Take care to make sure that tests execute
in a well-defined environment!

Now, how to fix it? Basically, there are two solutions.

First, you can impose a strict order of execution for these two test methods (TestNG will allow you to
do this, JUnit won't).

@Test
public void shouldLoadDefaultProperties() { ... }

@Test(expectedExceptions = LoggingInitialisationException.class,
dependsOnMethods = "shouldLoadDefaultProperties")
public void shouldThrowLoggingException() { ... }

This will work as long as there are no more tests whose outcome also depends on the value of the
logConfig system property. If such tests are added, you must remember to also specify their relation
(their dependency) to these existing tests. That, in turn, could mean that you can end up with a network
of tests that depend on each other, which will not be easy to maintain.

Another option (and a recommended one) is to clean the logConfig system property variable before the
shouldLoadDefaultProperties() method is executed. If there are more tests like this, then maybe
putting the cleaning code into some setUp() method would be a good idea. For example:

@BeforeMethod
public void cleanSystemProperties() {
// System.setProperty() does not accept null values, so the property is removed instead
System.clearProperty("logConfig");
}

// the rest of the code remains unchanged

Clean the environment before tests, not afterwards. That way your tests will be guaranteed
to run in a clean environment.


4.2.2. Database Assumptions


Take a look at this test. This is an integration test with userService and dao talking to some real
database.
@Test
public void shouldAddUser() {
User user = new User();
userService.save(user);
assertEquals(dao.getNbOfUsers(), 1);
}

What bothers me here are the following:

1. it does not really verify whether the user was added,

2. it makes some assumptions regarding the state of the database before being executed.

Regarding the first point, this is probably best accomplished by verifying that the user in the database is
equal to the user object. This may be fairly straightforward, provided that userService or dao provides
the appropriate method(s) for fetching the user, or it may require a lot of additional work (e.g. writing
custom JDBC queries directly to the database and parsing results in your test code - yuck!).
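A sketch of the straightforward case might look like this (the findById() and getId() methods are assumptions made for this illustration, and User is assumed to implement equals() sensibly):

@Test
public void shouldAddUser() {
    // given
    User user = new User();

    // when
    userService.save(user);

    // then - fetch the user back and compare it with what we saved
    User savedUser = dao.findById(user.getId()); // hypothetical finder method
    assertEquals(user, savedUser);
}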

Make sure the test does exactly what its (method) name promises.

As for the second issue, the solution is quite simple: replace absolute values with relative ones:
@Test
public void shouldAddUser() {
int nb = dao.getNbOfUsers();
User user = new User();
userService.save(user);
assertEquals(dao.getNbOfUsers(), nb + 1);
}

Now the content of the test method reflects its name (shouldAddUser()). Good.

It so happened that making just such an assumption about the database state (i.e. the assumption that
there are no users at all at the beginning of the test) caused some additional work for my team some time
ago. At some point it turned out that we needed to add a fake user to the system (the reason being that we
needed to handle some extra cases, so it was easier to filter them out from reports, etc.). Moreover, this
user had to always be available. So we created a database patch which added this fake user to the system.
It was applied along with all other patches, so that the database was in its initial state before integration tests
were run. And then we observed (with amazement) that a lot of tests were starting to fail. Why? Because
they assumed that the database had no users at all!

Do not make assumptions about the database content. Use relative rather than absolute values.


There are many ways to insert data into the database for the purpose of testing. Raw JDBC
calls, DBUnit, using Hibernate and the service layer (as presented above), and probably many
more. It does not really matter here, because with all of these techniques you could run up
against the same issues - the ones discussed in this chapter.

4.3. Time Means Trouble (Always!)


Time is an illusion.
Albert Einstein

As a rule, never use System.currentTimeMillis() or new Date() in your production code.
Add an additional layer of abstraction - e.g. a TimeProvider interface. This will let you test
your time-dependent methods with ease.
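A minimal sketch of such an abstraction might look like this (the exact shape is an assumption; the test shown later in this section stubs a getTime() method returning a String timestamp):

// hypothetical abstraction - production code asks it what "now" is
public interface TimeProvider {
    String getTime();
}

// the implementation used outside of tests
public class SystemTimeProvider implements TimeProvider {
    @Override
    public String getTime() {
        // seconds since the epoch, as a String (matching the getUrl() example below)
        return Long.toString(System.currentTimeMillis() / 1000);
    }
}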

Let us see what happens when you do not follow this advice.

An obvious example would look like this:


time = System.currentTimeMillis();

if (time.isAfter(5, PM)) {
... do some afternoon activity
}
else {
... do something else
}

Illegal Java code, but it illustrates the issue well. :)

Well, this one is obvious - you can't test all paths until you control the value of the time variable.
And this is because you create it using System.currentTimeMillis(), which is hard to mock using
civilized techniques.

But let us consider now an example that is both real and slightly more subtle. Take a look at this fragment
of production code (a part of Util class):
public String getUrl(User user, String timestamp) {
String name = user.getFullName();

String url = baseUrl
+ "name="+URLEncoder.encode(name, "UTF-8")
+ "&timestamp="+timestamp;
return url;
}

public String getUrl(User user) {
Date date = new Date();
Long time = (date.getTime() / 1000); //convert ms to seconds
String timestamp = time.toString();
return getUrl(user, timestamp);
}

Let us say that a programmer wanted to test the second getUrl() method (the one which takes a single
user parameter). Testing such a method should be straightforward - it returns String, which we could
analyze and verify as regards its correctness. Alas, because new Date() was used, the programmer
responsible for this code was not able to verify the resulting URL! Instead, he came up with the following
test:
@Test
public void shouldUseTimestampMethod() {
//given
Util util = new Util();
Util spyUtil = spy(util);

//when
spyUtil.getUrl(user);

//then
verify(spyUtil).getUrl(eq(user), anyString());
}

Can you believe what you are seeing here? This is really, really bad, and for a number of reasons. The
test:

tests implementation instead of behaviour (see Section 4.6.2),

uses partial mocking (a Mockito.spy() method on a real object), which is rarely required (a red light
should start flashing in your head when you see it4),

verifies interactions instead of returned values.

And all of this, just because the production code does not handle the time properly!

Struggling to write a test for a piece of production code? Probably the production code is
ugly! Maybe you should improve it first?

Once we have redesigned the production code by replacing the call to new Date() with a call to some
collaborator, our test begins to look quite nice:
@Test
public void shouldAddTimestampToGeneratedUrl() {
//given
Util util = new ....
TimeProvider timeProvider = mock(TimeProvider.class);
when(timeProvider.getTime()).thenReturn("12345");
util.set(timeProvider);

//when
String url = util.getUrl(user);

//then
assertThat(url).contains("timestamp=12345");
}
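For completeness, the redesigned production method could be as simple as this (a sketch; it assumes the timeProvider field set via the set(...) call used in the test above):

public String getUrl(User user) {
    // the collaborator, not new Date(), decides what "now" means
    return getUrl(user, timeProvider.getTime());
}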

Please note that the shouldAddTimestampToGeneratedUrl() method does not use mocks, but only a
stub!
4 Please refer to the Mockito documentation for an explanation.


If the method tested returns some values, then use these to verify whether the method is
working properly. Only use mocks/spies if this is really required!

And one more thing: this would have never happened if the developer had coded test-first!

Is a Bigger Hammer a Better Hammer?

In this chapter I have mentioned that most mocking tools will not let you mock static method
calls. This is not because such things are impossible. The reason is, that their authors believe
this is not a good thing to do. By allowing us to mock only what is easily mockable (e.g. injected
via the Dependency Injection mechanism), tools like Mockito persuade us to follow the rules of
good design. And by refusing to work with some code structures they are sending us a message
to the effect that "there is something wrong with your design".

Of course, some tools do exist that can mock anything at all that is possible. However, for the
reason mentioned in the previous paragraph, we should rather avoid them.

P.S. I could imagine cases where the use of such more powerful tools would be legitimate.
However, I can't recall any such case when working with relatively new code!

4.4. Waste of Time


Test everything that can possibly break!
Extreme Programming Gurus

There are some things which are not worth unit-testing. Really, there are. Getters, setters and delegators
are the best examples. They are tested by integration and end-to-end tests anyway, and the chances of
having a bug there are minimal.

Take, for example, this test (which, like all other tests in this book, is real - however obfuscated).

@Test
public void shouldReturnImportantValue() {
//given
given(settings.getImportantValue()).willReturn(IMPORTANT_VALUE);

//when
BigDecimal importantValue = settingsFacade.getImportantValue();

//then
assertThat(importantValue).isEqualByComparingTo(IMPORTANT_VALUE);
}

The test concerns two classes: the SettingsFacade class and its collaborator, an object of the Settings
class, to which it delegates calls. The method to be tested looks like this:

public BigDecimal getImportantValue() {
return settings.getImportantValue();
}

5 A delegator method is one which does not do anything on its own, but delegates the call to some collaborator.


Hm. This is a delegator method5. Exactly what kind of error are we expecting to catch by writing
this kind of test?

I often hear it being said that the real reason for writing such tests is because of the future evolution of
code: in the future, the thinking goes, this or that simple getter or delegator method will be updated, and
then we'll find out whether its current functionality has been broken. I do not share this view. First of
all, it goes against the YAGNI principle (see http://en.wikipedia.org/wiki/You_ain%27t_gonna_need_it).
I can't give precise numbers, but I'd say that the vast majority
(99%?) of such simple methods will not evolve. Which means that 99 times out of 100 you've wasted
your time writing the tests. Secondly, any mature developer understands that when introducing some
logic into an existing method she/he will need to review the tests and update them. Thirdly, there is a
chance that even if your unit tests won't notice the change, your higher level tests will.
Fourthly, our hope that a future change of such a method will be caught by our test is, well, only a
hope… The thing is, you just can't predict the future evolution of the code. What if the code evolves
like this?
public BigDecimal getImportantValue() {
return (settings.getImportantValue() != null)
? settings.getImportantValue() : DEFAULT_VALUE;
}

Our old test will still pass, won't it? The point is, it can only prevent some changes to the code.

Write unit tests for everything that can break. Do not waste your time testing what is too
simple to break.

99% of your code is NOT that simple! No excuses - you should unit-test it!

Commercial Break :)

Hi again! Are you still eager to learn more? Yes? That's great!

Please visit http://practicalunittesting.com to learn more about writing high-quality tests.

4.5. Overspecified Tests


This section contains examples which illustrate another common issue with our tests: I often find that
they specify too many things that aren't genuinely related to the scenario being tested. This reduces the
readability of tests, and also affects their maintainability, as it increases the number of things that will
necessitate updating the test code.

4.5.1. Mocking is not about Repeating Everything


The code below - copied from the Mockito mailing list - is definitely breaking one of the rules of
mocking: "mock only types you own". But it also manages to be so detailed that it describes every single
bit of the production code (I say that without actually having seen the production code, but I bet this is
the case!). This is bad. The test code is so tightly coupled to implementation that you couldn't introduce
any changes at all there without breaking it.

And what does it really test? Well, everything and nothing, all at once. It verifies all the interactions
between the SUT and its collaborators, but actually goes even further: it stubs the SUT, and then verifies
whether it has been stubbed! This really is useless, unless your goal is to test Mockito's stubbing abilities.
(Do not do that; thousands of people have already proved that Mockito works fine!).
@Mock private DataSource dataSource;

@Mock private Connection connection;

@Mock private Statement statement;

@Mock private ResultSet resultSet;

@Test
public void test() throws Exception {
MockitoAnnotations.initMocks(this);
systemUnderTest = new OracleDAOImpl();
systemUnderTest.setDBConnectionManager(connectionManager);
Set<NACustomerDTO> set = new HashSet<NACustomerDTO>();
when(connectionManager.getDataSource()).thenReturn(dataSource);
when(dataSource.getConnection()).thenReturn(connection);
when(connection.createStatement()).thenReturn(statement);
when(statement.executeQuery(anyString())).thenReturn(resultSet);
when(resultSet.next()).thenReturn(false);
when(resultSet.getLong(1)).thenReturn(1L);
when(resultSet.getString(2)).thenReturn("7178");

doNothing().when(resultSet).close();

stub(systemUnderTest.getNACustomers()).toReturn(set);
final Set<NACustomerDTO> result = systemUnderTest.getNACustomers();

verify(connectionManager).getDataSource();
verify(dataSource).getConnection();
verify(connection).createStatement();
verify(statement).executeQuery(anyString());
verify(resultSet).next();
verify(resultSet).getLong(1);
verify(resultSet).getString(2);

assertNotNull(result);

verify(connectionManager).getDataSource().getConnection();
}

The SUT is stubbed.


This line verifies whether the stubbing has been successful.

Writing a test for existing code is NOT about repeating it line by line!

Okay, so it would be unfair to leave this test like this. I think you deserve at least a few hints about
how to test your DAO layer.


The rule of thumb is that your DAOs are something you should rather not unit test. Why? Because there
is usually no logic there (DAO should be a thin wrapper over database stuff). Instead, have integration
tests which set up a real database. If your DAO is not vendor-specific, it can be any database that is
convenient for testing - e.g. H2 (http://www.h2database.com). Then verify that the entities inserted and fetched by your DAOs are
what you expect them to be. Concentrate on the complicated stuff (providing your DAO layer has any).
Do not write unit tests for basic CRUD (Create-Read-Update-Delete) operations performed by the ORM
framework. (If there is something wrong here - which is most unlikely - you will find out when you
come to run end-to-end tests).
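A rough sketch of such an integration test might look like the following (every name below - the DAO, its methods, and the helper preparing the in-memory database - is an assumption made purely for illustration):

public class UserDaoIntegrationTest {

    private UserDao userDao;

    @Before
    public void setUpDatabase() {
        // hypothetical helper that creates the schema in an in-memory H2 instance
        DataSource dataSource = TestDatabase.createInMemoryDataSource();
        userDao = new UserDao(dataSource);
    }

    @Test
    public void shouldPersistAndLoadUser() {
        // given
        User user = new User("john");

        // when
        userDao.save(user);

        // then - the entity fetched from the database equals the one we inserted
        assertEquals(user, userDao.findByName("john"));
    }
}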

Go and search on http://stackoverflow.com for "dao testing" - you'll find many good tips
there.

4.5.2. The Root of All Evil


This section was written by Jakub Nabrdalik (http://blog.solidcraft.eu/).

Overspecified tests are often the root of all evil, because we tend to make several mistakes with them
with horrible, long-running consequences. For nearly two years, before I learned my lesson, I was
overspecifying tests of a framework to the point where no refactoring could ever be performed. How is
that possible? Let's examine an example, this time using Groovy (http://groovy.codehaus.org) and Spock
(http://code.google.com/p/spock/).

Spock

If you are new to Spock, n*mock.whatever() means that the whatever() method of the mock
object should be called exactly n times. No more, no less. The underscore _ means "everything"
or "anything". And the >> sign instructs the test framework to return the right-side argument
when the method is called. Everything else should be quite easy to read (which is what Spock
excels at).

Let's go with the test.


def "should create outlet insert command with valid params with new account"() {
given:
def defaultParams = OutletFactory.validOutletParams
defaultParams.remove('mobileMoneyAccountNumber')
defaultParams.remove('accountType')
defaultParams.put('merchant.id', merchant.id)
controller.params.putAll(defaultParams)

when:
controller.save()

then:
1 * securityServiceMock.getCurrentlyLoggedUser() >> user
1 * commandNotificationServiceMock.notifyAccepters(_)

0 * _._
Outlet.count() == 0
OutletInsertCommand.count() == 1
def savedCommand = OutletInsertCommand.get(1)
savedCommand.mobileMoneyAccountNumber == '1000000000000'
savedCommand.accountType == CyclosAccountType.NOT_AGENT
controller.flash.message != null
response.redirectedUrl == '/outlet/list'
}

Have you noticed the overgrown "then" part? One of the programmers, during a code review, concluded
that this test is so overspecified as to be more like a case of reverse engineering of the SUT. We are
repeating practically every line (definitely every block) of the code in the class we are testing, but with
a different syntax. If the whole system worth several man-months or man-years is going to be tested
like this, you'll never be able to change a line of the code without breaking some tests in the process.
And since all the collaborations in those tests are closely verified, you are probably going to break a few
hundred tests, just with a simple refactoring. I see it happening over and over again.

Let's take a closer look and find out what, exactly, is going wrong with the specification. Here's the
first line in the "then" part:
1 * securityServiceMock.getCurrentlyLoggedUser() >> user

This line verifies whether some security service was asked for a logged user, and returns the user. And
it was asked EXACTLY one time. No more, no less.

But this is not what we want to verify. This is probably required by the controller (the class under test),
but it only means it should be in the "given" part. And it should not verify that it's called "exactly once".
It's a stub, for God's sake! The user is either logged in or not. There is no sense in making him "logged
in, but you can ask only once".

It's important to draw a sharp line between prerequisites and outcomes.

Then there is the second line.


1 * commandNotificationServiceMock.notifyAccepters(_)

This verifies that some notification service is called exactly once. And it may be okay - the business
logic may require that - but then, why hasn't it been stated clearly in the name of the test? Ah, I know,
the name would be too long. Well, that's also a suggestion. You need to make another test: something
like "should notify about newly created outlet insert command".

So here we have another problem of overspecified tests: they usually conceal several business
requirements under a single name. And it's not easy to dig out those requirements later. We shall see
several more examples in the following lines.

Here is the third line.


0 * _._

Practically every mocking framework has some kind of "verify nothing else happened, apart from what
Ive mentioned", and Spock is no different. This line, in case you havent already worked it out for
yourself yet, is "You shall not make any other interactions with any mocks, or stubs, or anything, Amen!".
My suggestion is: never, ever use this kind of assertion. It makes the test vulnerable to everything. Any
change in the code, no matter what it does, will break the test if it touches any collaborators. And tests
should only break when the business logic or the contract is broken.

Mockito's verifyNoMoreInteractions() method should also be used with caution. As
stated in Mockito's documentation: "Use it only when it's relevant. Abusing it leads to
overspecified, less maintainable tests". What that means, basically, is that you should only
use it when it is really important that no more interactions with this particular mock have
occurred.

And then, another line.


Outlet.count() == 0

This verifies that we don't have any outlets in the database. Do you know why? You don't, because the
name of the test doesn't tell you. I do, because I know the business logic of this domain. The test doesn't
clearly tell you what the business logic is. It's a typical case of burying a clear business requirement
under a wrong name, where this means we lose a wonderful chance to represent business requirements
as a set of tests (or specifications).

Then there is the part that is actually relevant to the name of the test.
OutletInsertCommand.count() == 1
def savedCommand = OutletInsertCommand.get(1)
savedCommand.mobileMoneyAccountNumber == '1000000000000'
savedCommand.accountType == CyclosAccountType.NOT_AGENT

We expect the object we've created to be in the database, and then we verify whether its account is "new".
And we suppose that "new" here means a specific account number and type. Yet it screams out to be
extracted into another method with a readable name.

And then this:


controller.flash.message != null
response.redirectedUrl == '/outlet/list'

After that, we check that some flash message has been set. And a redirection. And why are we testing this?
Not because the name of the test says so, that's for sure. The truth is, we shouldn't care what the
flash.message says, or what the redirectUrl is. But what if it's stated clearly in the requirements,
you might ask?

Is that really so? If the requirements go so far as to specify where the user is redirected to after having
performed an action, we should verify that with a functional (web) test. This test won't tell us whether
the user is presented with a nice page with a list of outlets. The url /outlet/list could very well end
up with HTTP 404.

And I really doubt that the requirements tell us to "not show a flash message" after everything's done.

Those two lines verify nothing: they only add another cup of concrete to our test, which will make it
even harder to refactor later.

And here is our conclusion: Do not write overspecified tests! Verify only the right thing, and verify
only one right thing at a time! Have your tests represent only one business requirement, and have them
fail only for one business-related reason!


Never, ever verify the algorithm of a method step by step! Verify the outcomes of the algorithm! You
should be free to change the method, as long as the outcome - the real thing you expect - is not changed.

Imagine a sorting problem. Would you verify its internal algorithm, or would you rather verify whether
the collection is sorted? Why would we verify the internal algorithm? It should be free to change. Your
tests should not stand in the way.

What should the above code look like? We have already noticed that there are three potential business
rules worth testing here.

1. When we create an outlet, we should have an OutletInsertCommand in our database, but we
shouldn't have an Outlet yet (the change has to be accepted by a supervisor). Those two lines check
that:
Outlet.count() == 0
OutletInsertCommand.count() == 1

2. The supervisor (called an acceptor here) should be notified. And this is verified as well:
1 * commandNotificationServiceMock.notifyAccepters(_)

3. The command in question should have a "new account", and this state is represented like this:
def savedCommand = OutletInsertCommand.get(1)
savedCommand.mobileMoneyAccountNumber == '1000000000000'
savedCommand.accountType == CyclosAccountType.NOT_AGENT

So what should we do? Is it enough to delete everything else from the test and just leave the lines
mentioned above? Not really.

If we were to start with a test before all else, we would have those three requirements represented as
three different tests, because we would then start with just the name of the test. And that's a much better
solution, because every time we break a business expectation, we will then know exactly which rule
we've broken. In other words, it's better for a test to have just a single business-related reason for failing.

So let's write those tests properly!


def setup() {
userIsLoggedIn()
}

private void userIsLoggedIn() {
securityServiceMock.getCurrentlyLoggedUser() >> user
}

In Spock, the setup() method is automatically called before each test method.

We get the unimportant prerequisite out of the way of the test. No need to bring noise to our test methods.

Then, knowing we have three business requirements, we build the common part of the setup. We give
it a verbose name, and we are going to call it from each test, as corresponding exactly to that part of the
testing scenario - just as if we had written it in Behaviour Driven Development style.
private void setValidOutletInsertCommandParameters(def controller) {
def validParams = OutletFactory.validOutletParams
validParams.remove('mobileMoneyAccountNumber')
validParams.remove('accountType')
validParams.put('merchant.id', merchant.id)
controller.params.putAll(validParams)
}

Then, finally, we create a test for each business requirement:


def "created outlet insert command should have new account"() {
given:
setValidOutletInsertCommandParameters(controller)

when:
controller.save()

then:
outletInsertCommandHasNewAccount()
}

private boolean outletInsertCommandHasNewAccount() {
def savedCommand = OutletInsertCommand.get(1)
savedCommand.mobileMoneyAccountNumber == '1000000000000' &&
savedCommand.accountType == CyclosAccountType.NOT_AGENT
}

def "should not create outlet, when creating outlet insert command"() {
given:
setValidOutletInsertCommandParameters(controller)

when:
controller.save()

then:
Outlet.count() == 0
OutletInsertCommand.count() == 1
}

def "should notify acceptors when creating outlet insert command"() {


given:
setValidOutletInsertCommandParameters(controller)

when:
controller.save()

then:
1 * commandNotificationServiceMock.notifyAccepters(_)
}

Now the test breaks only when it has to, we get the correct feedback, and we are free to do whatever
else we want.

4.5.3. Copy & Paste


Below is a sample test, which I wrote myself. It verifies whether some Maven-related utility class can
recognize snapshot artifacts.
@DataProvider
public Object[][] snapshotArtifacts() {
return new Object[][]{
{"a", "b", "2.2-SNAPSHOT", Artifact.JAR },
{"c", "d", "2.2.4.6-SNAPSHOT", Artifact.JAR},
{"e", "f", "2-SNAPSHOT", Artifact.JAR}
};
}

@Test(dataProvider = "snapshotArtifacts")
public void shouldRecognizeSnapshots(
String groupId, String artifactId,
String version, Type type) {
Artifact artifact
= new Artifact(groupId, artifactId, version, type);
assertThat(artifact.isSnapshot()).isTrue();
}

All is good and well, but if you look at the core of what the test tries to verify, you will soon notice
that only one parameter out of the four provided by the snapshotArtifacts() data provider is really
relevant. Who cares about groupId, artifactId and type, when the only thing that matters is the
version?

After the refactor the same test took the following form:
@DataProvider
public Object[][] snapshotVersions() {
return new Object[][]{
{"2.2-SNAPSHOT"},
{"2.2.4.6-SNAPSHOT"},
{"2-SNAPSHOT"}
};
}

@Test(dataProvider = "snapshotVersions")
public void shouldRecognizeSnapshots(String version) {
Artifact artifact
= new Artifact(VALID_GROUP, VALID_ARTIFACT_ID,
version, VALID_TYPE);
assertThat(artifact.isSnapshot()).isTrue();
}

As you can see, it uses a lot of static values (e.g. VALID_GROUP), whose role is to announce that "I'm a
correct value and I'm really not important for this test scenario".

As I've confessed, this (bad) test was mine. After I discovered its weakness I asked myself
how it could have come about that I wrote such code. The answer was simple. It all started
with the "copy&paste" of a data provider which had been used in another test. For this other
test four parameters had been required, but they were not appropriate for this one. Once
again: when copying code, make sure it really fits!

Another option would be to use a test data builder pattern (see Section 5.3), like this:
Artifact artifact = new ArtifactBuilder()
.validArtifact()
.withVersion(version)
.build();

However, this time I have to say that I would prefer the solution with CONSTANTS. I think it is more
readable.


More Copy and Paste Bashing

We all do copy some parts of the test code to avoid writing the same lines all over again in
other test methods. I wouldn't blame anyone for doing this. The problem only starts when we
do it mindlessly.

With tests, this often results in:

overgrown set-up methods (which create many more objects than are required),

too many assertions (which leads to overspecified tests)

both of which add to the complexity, making tests less readable and more fragile.

4.5.4. Asserting Too Much


Take a look at this test. Only the closing lines of it are given, but believe me, it was a really complex
end-to-end test! It involved the SUT (the whole system) receiving some requests and generating answers
in the form of CSV files.

@Test
public void invalidTxShouldBeCanceled() {
... some complex test here

// then
String fileContent =
FileUtils.getContentOfFile("response.csv");
assertTrue(fileContent.contains(
"CANCEL,123,123cancel,billing_id_123_cancel,SUCCESS,"));
}

Look at the method name - invalidTxShouldBeCanceled() (hint: Tx stands for transaction) - and at
the assertion. Hm, do you think they match? Do you feel that the assertion really verifies whether the
invalid transaction was cancelled? Well, it probably does, but it seems to test much more than this. In
fact, it verifies whether the received file conforms to some data pattern or not.

Another variant, which I would recommend, relies on custom assertions (see Section 5.6).

@Test
public void invalidTxShouldBeCanceled() {
... some complex test here

// then
String fileContent =
FileUtils.getContentOfFile("response.csv");
TxDTOAssert.assertThat(fileContent)
.hasTransaction("123cancel").withResultCode(SUCCESS);
}

I like this version much more. It hides the implementation details. Inside the custom assertion code there
is some logic which handles the file content (and extracts the status from a single line of this CSV file),
but it is hidden there, and does not clutter up the test itself. It also reads much more nicely: assert that the
response file contains an entry about the transaction 123cancel, stating that it has finished successfully
(meaning that the cancellation of this transaction succeeded).

Now, when the CSV format changes, all you need to do is update the code encapsulated within the
TxDTOAssert class. Were it to be otherwise, you would have to introduce some change into the (many)
tests verifying the content of the CSV file.
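For the curious, such a custom assertion could be sketched on top of AssertJ roughly as follows (the CSV handling and the ResultCode type are assumptions; only the general shape matters here):

public class TxDTOAssert extends AbstractAssert<TxDTOAssert, String> {

    private String transactionId;

    private TxDTOAssert(String fileContent) {
        super(fileContent, TxDTOAssert.class);
    }

    public static TxDTOAssert assertThat(String fileContent) {
        return new TxDTOAssert(fileContent);
    }

    public TxDTOAssert hasTransaction(String txId) {
        isNotNull();
        if (!actual.contains(txId)) {
            failWithMessage("Expected the file to contain transaction <%s>", txId);
        }
        this.transactionId = txId;
        return this;
    }

    public TxDTOAssert withResultCode(ResultCode expectedCode) {
        // look up the CSV line of the previously matched transaction
        for (String line : actual.split("\n")) {
            if (line.contains(transactionId)) {
                if (!line.contains(expectedCode.name())) {
                    failWithMessage("Expected transaction <%s> to finish with <%s>, but the line was <%s>",
                            transactionId, expectedCode, line);
                }
                return this;
            }
        }
        failWithMessage("No line found for transaction <%s>", transactionId);
        return this;
    }
}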

assertThat()

A slight problem with the solution presented above is that we need to prefix the call to the
assertThat() method with an assertion class name (TxDTOAssert), so that the method does
not clash with the assertThat() methods of the main AssertJ class, Assertions (which is
probably already included in the static imports section of our test class). As Petri Kainulainen
has suggested, we could improve the readability by using a different method name, e.g.
assertThatFile(). In such a case the code snippet would take the following form:

@Test
public void invalidTxShouldBeCanceled() {
... some complex test here

// then
String fileContent =
FileUtils.getContentOfFile("response.csv");
assertThatFile(fileContent)
.hasTransaction("123cancel").withResultCode(SUCCESS);
}

4.5.5. The Test That Knows Too Much


Why are tests brittle? Because we put too many details into them! See the following example of a
WebDriver-powered test (this is a shortened test lifted from a discussion on some mailing list).
public void createSurvey() throws InterruptedException {
//CREATE SURVEY
WebElement allproject
= driver.findElement(By.xpath("//*[@id='projectnav']/ul/li[2]/a"));
allproject.click();

WebElement myfolder
= driver.findElement(By.linkText("John Doe"));
myfolder.click();

WebElement myProject
= driver.findElement(By.linkText("My project"));
myProject.click();

WebElement createsurveylink = driver.findElement(By.xpath(
"//*[@id='bcontrol']/body/form[1]/table[2]/tbody/tr/td[2]/a[1]/img"));
createsurveylink.click();

WebElement surveyname = driver.findElement(By.xpath(
"//*[@id='bcontrol']/body/form/table[4]/tbody/tr[2]/td[3]/input"));
surveyname.sendKeys("Test Survey created on " + new Date());

WebElement surveynameconfirm = driver.findElement(By.xpath(
"//*[@id='bcontrol']/body/form/table[1]/tbody/tr[2]/td[3]/a[2]/img"));
surveynameconfirm.click();
}

The main issue about this code is that it describes the webpage in considerable detail. Any change to this
website will cause this test to fail. And the website will change, that's for sure! The XPath expressions
used in the test will not last long!

I would be tempted to suspect that there are many more tests written in the same manner. This
means that information about the details of some webpages will be scattered across multiple
tests. That, in turn, means that a change to the website will cause many tests to fail.

What can we do about this? Firstly, we could improve the website (e.g. adding some IDs to important
elements) so that we can easily point to them without using XPath expressions. This would be better,
but still not good enough.

What we should really do is free our test code of any detailed information about the website. End-to-end
tests should be preoccupied with business scenarios, not implementation details!

In the world of Selenium tests, the pattern we are supposed to use is called Page Objects10. If we do
this, we end up with a test like this:
public void shouldCreateSurvey() {
Date date = new Date();

ProjectsPage projectsPage = mainDashboard.goToProjectsPage();

projectsPage.openProject("My project");

SurveyEditionPage surveyEditionPage
= projectsPage.createSurvey("Test Survey created on " + date);

// there were no assertions in the original tests
// (which was rather weird...),
// but I guess something like this would make sense
String surveyName = surveyEditionPage.getEditedSurveyName();

assertThat(surveyName).isEqualTo("Test Survey created on " + date);
}

What can we see here? Several things:

most of the work is delegated to various xyzPage objects, which handle things like XPath access to
various elements,

you can read the test and understand what it is all about, even if you have no clue about XPath & Co.,

this test will survive refactorings, as long as the general flow of actions remains unchanged.

All this does not mean that the nasty low-level details have miraculously vanished. No, as we've already mentioned, they've been moved to xyzPage classes. The thing is, we have probably got a lot of tests (verifying different business scenarios) that perform some operations related to the survey edition. All these tests can now be rendered free of any low-level detail, and use the functionality provided by the xyzPage objects.

10. See http://code.google.com/p/selenium/wiki/PageObjects for more information about Page Objects.
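
To make the idea more tangible, here is a rough sketch of what one of these page objects might look like. The locators below are purely illustrative (in a real page object they would point at the actual IDs or links of the application), and SurveyEditionPage is assumed to be another page object of the same kind:

public class ProjectsPage {

    private final WebDriver driver;

    public ProjectsPage(WebDriver driver) {
        this.driver = driver;
    }

    public ProjectsPage openProject(String projectName) {
        // the low-level locator lives here, not in the test
        driver.findElement(By.linkText(projectName)).click();
        return this;
    }

    public SurveyEditionPage createSurvey(String surveyName) {
        driver.findElement(By.id("create-survey")).click();
        driver.findElement(By.id("survey-name")).sendKeys(surveyName);
        driver.findElement(By.id("survey-name-confirm")).click();
        return new SurveyEditionPage(driver);
    }
}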

It is a good idea to separate the business scenario from low-level details.

4.6. SRP for Tests


I'm single
(Right now)
That's how I wanna be

Natasha Bedingfield "Single"

We are all familiar with the Single Responsibility Principle11, which basically says that every class
should take care of just one thing. I think it is very valuable to think about tests in terms of the SRP.
Where tests are concerned, each test method should conform to the following rule: "Each test method
should verify just one scenario".

Why? Because:

such test methods are pretty simple to understand,

if they fail, you know exactly which functionality of your software is not working (this is especially
important when introducing changes to existing software).

Writing tests that fulfill the SRP principle is very simple for unit tests, but not always
reasonable for other kinds of test. Sometimes you will want to verify several things with one
test method, on account of the high setup cost involved.

Still, I have observed the SRP rule being breached frequently. In this section we shall take a closer look
at some examples.

4.6.1. Valid and Not Valid


Let us have a look at an example. It comes from a simple utility class, responsible for making sure that
phone prefixes entered by users are valid.
@DataProvider
public Object[][] data() {
return new Object[][] { {"48", true}, {"+48", true},
{"++48", true}, {"+48503", true}, {"+4", false},
{"++4", false}, {"", false},
{null, false}, {" ", false}, };
}

@Test(dataProvider = "data")
public void testQueryVerification(String query, boolean expected) {
assertEquals(expected, FieldVerifier.isValidQuery(query));
}

11. SRP, see http://en.wikipedia.org/wiki/Single_responsibility_principle

At first sight, it may seem like the test is really focused on just one thing (namely, verifying query
validity), but if you look more closely you will observe some alarming signals:

the method name - testQueryVerification - is quite generic, and it would be hard to change its
name so that it used a "should" prefix (see Section 5.4.4),

the name of the data provider12 - data - smells really bad,

it has some kind of logic: assertions depend on the value of the expected boolean flag passed,

a "generic" assertion - assertEquals - is used to verify the outcome.

To discover whether your test fulfills the SRP, ask the following question: "If it fails, will I be able to discover what functionality of my software is broken just by reading the name of the failed test method?".

Let us see how we could improve this test. Here is another variant which is equivalent to the previous
one, but the verification is divided into two parts:
@DataProvider
public Object[][] validQueries() {
return new Object[][] { {"48"}, {"48123"},
{"+48"}, {"++48"}, {"+48503"}};
}

@Test(dataProvider = "validQueries")
public void shouldRecognizeValidQueries(String validQuery) {
assertTrue(FieldVerifier.isValidQuery(validQuery));
}

@DataProvider
public Object[][] invalidQueries() {
return new Object[][] {
{"+4"}, {"++4"},
{""}, {null}, {" "} };
}

@Test(dataProvider = "invalidQueries")
public void shouldRejectInvalidQueries(String invalidQuery) {
assertFalse(FieldVerifier.isValidQuery(invalidQuery));
}

This version is certainly longer, but it is also more readable, easier to understand (no boolean flag), and
keeps to the SRP rule. We have achieved a clean separation of positive and negative test cases.

What I really like here are the names: each of them is highly descriptive. We have a data provider
which provides validQueries, and a test method which takes validQuery as a parameter. And their
counterparts - an invalidQueries data provider and a method which takes an invalidQuery parameter.
Also, both test method names describe very precisely the scenario they test.
12. A data provider is a method which provides data for test cases. See TestNG's @DataProvider annotation or the JUnitParams project.

Watch the method names. Are they intention-revealing?

No logic in tests! Even the simplest will be evil!

4.6.2. Test Behaviour Not Methods!


Now for another example of a test doing too much. Let us look first at the method being tested. This
belongs to the UserRegisterController class.
public ModelAndView registerUser(UserData userData, BindingResult result,
HttpServletRequest request) {
if (result.hasErrors()) {
return showRegisterForm(request, false);
}

User savedUser = userService.saveNewUser(userData);


mailSender.sendRegistrationInfo(savedUser);
return new ModelAndView("redirect:/signin");
}

Let us assume that all tests presented below have the following variables properly initialized.
// mocks
UserData userData = mock(UserData.class);
UserService userService = mock(UserService.class);
BindingResult bindingResult = mock(BindingResult.class);
MailSender mailSender = mock(MailSender.class);
User user = mock(User.class);
HttpServletRequest request = mock(HttpServletRequest.class);

// sut
UserRegisterController userRegisterController = ... // object created;

Now let us have a look at the original test.

I have been obliged to simplify the original test to make the essence of what we are discussing
here more clear. Unfortunately, this has also made the effect of the changes we are going to
introduce less visible. :(

@Test
public void shouldReturnRedirectViewAndSendEmail() {
//given
given(bindingResult.hasErrors()).willReturn(false);
given(userService.saveNewUser(eq(userData)))
.willReturn(user);

//when
ModelAndView userRegisterResult = userRegisterController
.registerUser(userData, bindingResult, request);

//then
assertThat(userRegisterResult.getViewName())
.isEqualTo("redirect:/signin");
verify(mailSender).sendRegistrationInfo(user);
}

As you can see, this test verifies exactly what the tested method does. Which means it verifies two
expectations:

that the sendRegistrationInfo() method of the mailSender collaborator will be invoked with
specific parameter (user object),

that the user (the user who filled in the registration form, not the user object!) will be redirected to
a specific page.

It also indirectly verifies whether the user object is stored by the userService collaborator.

This is not bad. In fact, this test does a decent job. My advice, however, would be to split it into two
parts, like this:
@Test
public void shouldRedirectToSigninPageWhenNoErrors() {
//given
given(bindingResult.hasErrors()).willReturn(false);

//when
ModelAndView userRegisterResult = userRegisterController
.registerUser(userData, bindingResult, request);

//then
assertThat(userRegisterResult.getViewName())
.isEqualTo("redirect:/signin");
}

This first test verifies solely whether the redirect is fine.


@Test
public void shouldNotifyAboutNewUserRegistration() {
//given
given(bindingResult.hasErrors()).willReturn(false);
given(userService.saveNewUser(eq(userData)))
.willReturn(user);

//when
userRegisterController.registerUser(userData, bindingResult, request);

//then
verify(mailSender).sendRegistrationInfo(user);
}

The second test verifies whether the sendRegistrationInfo() method of the mailSender object was
invoked.

The main difference between the first version presented and the one we have now is that the first one tried to test a method, whereas the second one tests a responsibility of the class. The current
version consists of two test methods, and each of them is focused on one responsibility that the tested
class has. The class is supposed to redirect the user to a specific page after successful registration. And
there is a test which verifies this. Some actors should be notified about the fact of successful registration
and there is a test which verifies whether this has occurred.


I'm sure the responsibility of the class does not end there (for example, I guess it also prints errors when the registration form lacks some data). Good. We can always add more tests to cover the rest of its responsibilities. Responsibilities, not methods!

As with the production code, we should try to keep our tests small and focused. We should also abstract from the implementation, so that refactorings do not break the tests. The changes we have just introduced to the initial test have allowed us to achieve both of these things. The tests are so focused that some modifications of the registerUser() method implementation might only fail a selected test. Then it will be very simple to see what it is that is no longer working. That's exactly how I like it!

Forget about methods. Test responsibilities of the class.

A simple rule to follow: "One method, multiple behaviours? Multiple tests!".

4.6.3. A Counter Example


And now, a counter example. Yes, after I've tried hard to convince you that it is good to split tests into smaller ones (each testing a unique test case), now I'm going to demonstrate an example of the opposite approach.

Take a look at the test method below:


@Test
public void shouldRecognizeDistrict() {
//given
District district = mock(District.class);
District anotherDistrict = mock(District.class);

//when
City city = new City(district, NUMBER_OF_PEOPLE);

//then
assertThat(city.isLocatedIn(district)).isTrue();
assertThat(city.isLocatedIn(anotherDistrict)).isFalse();
}

It verifies two things, namely:

whether City is capable of recognizing the same district it was created with,

whether City is capable of recognizing a different district from the one it was created with.

Should we split this test into two, then? My answer is: no.

In contrast to the previous example, there are not many cases to test here, so the test method is very
concise and simple to understand. In other words, the size of this method (taking into account the number
of test cases it verifies, not only the number of lines it has) has not exceeded my private safety limit.
And because of this fact, I do not feel like having two testing methods instead.


Let us have a look at a two-method version, for the sake of comparison:


@Test
public void shouldRecognizeItsDistrict() {
//given
District district = mock(District.class);

//when
City city = new City(district, NUMBER_OF_PEOPLE);

//then
assertThat(city.isLocatedIn(district)).isTrue();
}

@Test
public void shouldRecognizeDifferentDistrict() {
//given
District district = mock(District.class);
District anotherDistrict = mock(District.class);

//when
City city = new City(district, NUMBER_OF_PEOPLE);

//then
assertThat(city.isLocatedIn(anotherDistrict)).isFalse();
}

For me, this really is overkill. But that is only my subjective opinion. Make your own decision about
it, and choose the approach which suits you best.

Rules are there to be broken. ;)

Mocking?

A curious reader might ask why the author of this test decided to use mock(District.class)
instead of new District(). A District looks like a very simple class (a POJO, perhaps),
which does not deserve to be mocked, right?

Let me explain. First of all, this is not a real mock but a dummy object. Mocks are created for
behaviour verification, and in this test we do not verify any calls to objects of the District class.
Unfortunately, we tend to use the term mocks to describe any kind of test doubles (mocks, test
spies, stubs, dummy objects). The mocking frameworks do not help us to distinguish between
them, either, as they usually offer just one method for creating any kind of test double (e.g. in
the case of the Mockito framework, the static mock() method).

And what is the advantage of using mock(District.class) over new District()? The main
plus is that a test which uses a dummy object will not care about the construction details of
the District class. This might be quite a gain: imagine a District constructor taking many
parameters (which in turn would also require other parameters to be constructed). That would
definitely both diminish the readability of the test code and make it prone to changes consequent
upon any alteration to the constructor of the required classes.

Chapter 5. Readability
Code is read much more often than it is written.
Common knowledge

We repeatedly read tests: we treat them as documentation for learning about how certain classes behave,
as we must understand what they do and how they do it in order to be in a position to change them. Last,
but not least, we are forced to read them when they fail, so we can decide what has really happened.

Given that this is so, it seems like a good idea to write our tests in a way that will make reading them
a pleasurable experience. In this section we shall focus precisely on this readability aspect. We will
discover that sometimes a minor improvement in this area can bring serious benefits.

5.1. Why formatting helps


This section was written by Tomasz Borek (https://lafkblogs.wordpress.com/)

Below is an example from production code dealing with a user security question. The test is a showcase for why formatting helps with reading code - and, as we all know, code is much more frequently read than written.
@Test
public void will_getChangSecurityQuestRgtAndDetails_if_AdvUserhasRuleId25(){
User user = createUser(userId);
user.setAdvanced(true);
PasswordRuleDto passwordRuleDto = new PasswordRuleDto();
passwordRuleDto.setPasswordRuleId(rulId25);
List<PasswordRuleDto> passwordRules = new ArrayList<PasswordRuleDto>();
passwordRules.add(passwordRuleDto);
given(currentUser.getUser()).willReturn(user);
given(userDAO.readByPrimaryKey(userId)).willReturn(user);
given(passwordBean.getPasswordRules()).willReturn(passwordRules);
UserSecurityQuestionDto dto = userChangeSecurityQuestionBean
.getChangSecurityQuestionRgtAndDetails();
assertNotNull(dto.getEmail());
assertNotNull(dto.getFirstName());
assertNotNull(dto.getLastName());
assertEquals(dto.isChangeSecurityQuestion(), true);
}

Now let's see what happened when I added three non-blank lines:
@Test
public void will_getChangSecurityQuestRgtAndDetails_if_AdvUserhasRuleId25(){
// given
User user = createUser(userId);
user.setAdvanced(true);
PasswordRuleDto passwordRuleDto = new PasswordRuleDto();

passwordRuleDto.setPasswordRuleId(rulId25);
List<PasswordRuleDto> passwordRules = new ArrayList<PasswordRuleDto>();
passwordRules.add(passwordRuleDto);

given(currentUser.getUser()).willReturn(user);
given(userDAO.readByPrimaryKey(userId)).willReturn(user);
given(passwordBean.getPasswordRules()).willReturn(passwordRules);

// when
UserSecurityQuestionDto dto = userChangeSecurityQuestionBean
.getChangSecurityQuestionRgtAndDetails();

// then
assertNotNull(dto.getEmail());
assertNotNull(dto.getFirstName());
assertNotNull(dto.getLastName());
assertEquals(dto.isChangeSecurityQuestion(), true);
}

Still not a stellar test, mind you. But much more readable. Breaking it up into blocks makes refactoring possible. You can clearly see the distinction between the test setup and the actual test call with the assertions. It's easier to gain some insight into what's going on, and (hopefully) you can then more easily see what the next steps here should be.

5.2. Ceremony

BDD is great for telling stories. The "given/when/then" rhythm is a great thing, and many tests benefit from this pattern. However, I sometimes find it hard to justify the presence of the whole BDD ceremony, at least in certain very simple tests, such as the one shown below.
@Test
public void shouldBuildEmailSender() {
// given
String senderName = "Chuck Norris";
String senderEmail = "chuck@norris.com";

// when
String emailSender = EmailUtils.buildEmailSender(senderName, senderEmail);

// then
assertThat(emailSender).isEqualTo("Chuck Norris <chuck@norris.com>");
}

When I look at the content of the "given", "when" and "then" sections, I feel that it is as if they had only
been added to honor the tradition of BDD. This is like adding design patterns to your production code,
not because they are required, but because someone told you that you should do so. With all respect to
BDD ideas, why not stick with something much simpler? For example, something like this:
@Test
public void shouldBuildEmailSender() {
String emailSender = EmailUtils
.buildEmailSender("Chuck Norris", "chuck@norris.com");
assertThat(emailSender).isEqualTo("Chuck Norris <chuck@norris.com>");
}

Or even like this:


@Test
public void shouldBuildEmailSender() {
assertThat(EmailUtils.buildEmailSender("Chuck Norris","chuck@norris.com"))
.isEqualTo("Chuck Norris <chuck@norris.com>");
}

To me, this test likewise looks like a case of ceremony-overkill:


@Test
public void shouldBeAdministrator() {
//given
User user = new Administrator();

//when
boolean administrator = user.isAdministrator();
boolean guest = user.isGuest();
boolean moderator = user.isModerator();

//then
assertThat(administrator).isTrue();
assertThat(guest).isFalse();
assertThat(moderator).isFalse();
}

I would suggest a shorter version, but with exactly the same testing power.
@Test
public void shouldBeAdministrator() {
User user = new Administrator();

assertThat(user.isAdministrator()).isTrue();
assertThat(user.isGuest()).isFalse();
assertThat(user.isModerator()).isFalse();
}

KISS1. Keep It Simple Stupid! :)

5.3. Creation of Objects


To create something you must be something.
Johann Wolfgang von Goethe

In general, tests tend to repeat the same pattern, which is usually known as arrange/act/assert. First we
create objects (SUT and DOCs), then we act on them, and finally we verify whether or not the expected
result was returned. The act phase is usually very short, and often consists of only a single method call.
Usually there is not much to fix there. But it is different with the arrange and assert parts. They tend
to grow and contain a lot of code, giving us room to make improvements.

Let us now take a closer look at some examples of imperfect object creation (the arrange part) within the test code.

The User class was one of the main entities of this system, and it had almost 20 fields. This entity was
often used within the integration tests. We created objects of this class, we then persisted them to the
database, and after doing this we executed the methods of some services in order to exercise various
parts of the system interacting with the objects of this User class.
1. See https://en.wikipedia.org/wiki/KISS_principle

Some excerpts from different integration and end-to-end tests are shown below.
@Before
public void initialize() {
User user = new User("email@example.com", "Example", "Example", "qwerty",
"Europe/Warsaw", UserState.NOT_VERIFIED, new Address());
...
}

@Test
public void shouldCommitTransaction() {
User user = new User("firstName", "lastName", "password",
"email@example.com", "qwerty", UserState.ACTIVE, new Address());
user.setRegistrationDate(oneDayAgo.toDate());
user.setAccessCode("qwerty");
...
}

@Test
public void shouldGetUserByCompanyData() {
User user = new User("email", "FirstName", "LastName", "Password",
"Europe/Warsaw", UserState.ACTIVE, address);
user.setRegistrationDate(new Date());
user.setCompany(company);
user.setAccessCode("Access Code");
...
}

There are some problems with this approach. Firstly, many tests now know about the details of the creation of User objects. This means they will have to be changed every time we add a new parameter to constructors of the User class.

Secondly, it is very hard to say which of the parameters passed to the constructor of the User class are really important for each test case. In some cases you can guess that some of them are completely irrelevant, but you never know for sure.

Thirdly, as you can see, different constructors are being used in different tests. The thing is, these constructors were created so it would be easier to write tests! In other words, we created constructors which set selected properties of the User class (and left unused properties untouched - i.e. with null values). Hm, this sounds evil. In effect, we added a means for creating crippled objects of the User class to our production code.

Fourthly, in addition to using the constructors, some setter methods have also been used. This is not something evil, but it does lengthen the code responsible for creating User objects.

What can we do about all this, then? We could have created some private methods. Yes, this would certainly have been better than the constructor-hell we have landed up in. But in my experience, such methods tend to multiply and repeat among many test classes. Also, such private methods are usually not flexible enough to express the intent of a specific test. So the next step was to parameterize these private methods, yet in the end we ended up with private methods and the setters that follow them. Hence, we decided not to use these. Instead, we invested in Test Data Builders2.

The concept is very simple. A builder class is responsible for the creation of objects of the given class.
Period.

After introducing such builders, the code took the following form:
2. http://nat.truemesh.com/archives/000714.html

@Before
public void initialize() {
User notVerifiedUser = UserBuilder.createUser(UserState.NOT_VERIFIED)
.create();
...
}

@Test
public void shouldCommitTransaction() {
User user = UserBuilder.createActiveUser()
.create();
...
}

@Test
public void shouldGetUserByCompanyData() {
User user = UserBuilder.createActiveUser()
.withCompany(company)
.create();
...
}
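
The UserBuilder itself is not shown in the original listings; a minimal sketch of what it might look like, reusing the constructor and setters from the earlier tests (the default field values below are illustrative only), could be:

public class UserBuilder {

    private String email = "email@example.com";
    private String firstName = "Example";
    private String lastName = "Example";
    private String password = "qwerty";
    private String timezone = "Europe/Warsaw";
    private UserState state = UserState.ACTIVE;
    private Address address = new Address();
    private Company company;

    public static UserBuilder createUser(UserState state) {
        UserBuilder builder = new UserBuilder();
        builder.state = state;
        return builder;
    }

    public static UserBuilder createActiveUser() {
        return createUser(UserState.ACTIVE);
    }

    public UserBuilder withCompany(Company company) {
        this.company = company;
        return this;
    }

    public User create() {
        // reuses the same constructor and setters the original tests used
        User user = new User(email, firstName, lastName, password,
                timezone, state, address);
        user.setRegistrationDate(new Date());
        if (company != null) {
            user.setCompany(company);
        }
        return user;
    }
}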

Now what, exactly, have we gained? Let me list the benefits of this approach:

your domain objects can be immutable, because no setters are required, only a parameterless
constructor,

your domain objects are freed from construction logic,

only one class (the builder) knows how to build objects of a given class, so every change is localized
to this one builder class,

the DSL is up to you, so you can create methods which describe precisely the properties of the created
object.

Convinced? Unconvinced? The problem is that the advantages of Test Data Builders are hard to present via just a few lines of code. Yet once you start using them for your important domain objects, you'll quickly discover for yourself the improved readability they bring to your test code (and to your production code as well!).

Your IDE probably offers some support for creating Test Data Builders. Try it!

Writing Test Data Builders


A test data builder is a class which is responsible for creating objects of some type for use in tests of one sort or another. Its role is to:

make tests unaware of any construction details of the objects of their use,

avoid production code being polluted by additional constructors and setter methods used only by
tests,

standardize the way objects are created within test code,


provide convenient methods, which will make object creation simple and readable.

Test data builders are very simple classes. They provide a lot of setter-like methods, which return this
(the instance of the builder). This allows for chaining of the subsequent calls. Each builder also has its
"creation" method (called build() or create(), or whatever happens to suit you), which creates an
object of the given type, and sets its properties. For example, a builder of the MockServer type could
appear as follows:
public class MockServerBuilder {
private Map<String, String> responseMap;
private ResponseType responseType;
private String serverUrl;
private boolean ssl;

public MockServerBuilder withResponse(Map<String, String> responseMap) {
this.responseMap = responseMap;
return this;
}

public MockServerBuilder withResponseType(ResponseType responseType) {
this.responseType = responseType;
return this;
}

public MockServerBuilder withUrl(String serverUrl) {
this.serverUrl = serverUrl;
return this;
}

public MockServerBuilder withoutSsl() {
this.ssl = false;
return this;
}

... probably some more methods here

public MockServer create() {
return new MockServer(responseMap, responseType, serverUrl, ssl);
}
}

Some additional notes:

builders are quite cheap to create - your IDE will help you with this,

you are the master of the DSL, which means you tailor the "setter" methods to your liking.

5.3.1. It is so Easy to Make a Mistake


To finish, let us look at yet another example which shows how the use of multi-parameter constructors
can bring confusion to your test code. Below, the beginnings of two test methods from the same test
class are presented:
@Test
public void shouldCreateUserAndFindHimById() {
//given
User user = new User("elvis2", "elvis@is.alive");

...
}

@Test
public void shouldUpdateUser() {
// given
User user = new User("elvis@com.pl", "elvis3");
...
}

Hm... What's going on here? What is the first parameter - email or name? Because you cannot see into the production code, I'll tell you. It is email. And the second parameter is (surprise, surprise!) a password! Yes, a password. Wasn't that clear to you? No? Nor to me. :) That is what happens when you use silly values for parameters and multi-parameter constructors instead of builders.

I was really surprised to see a mistake in the use of the two-parameter constructor! I thought such mistakes only happened when you had more than 5 parameters or so...

A better version would create user objects like this:


User user = new User(USER_EMAIL, USER_PASSWORD);

or like this (to demonstrate that any users would do):


User user = new UserBuilder().standardUser()
.create();

or like this (if only email is important):


User user = new UserBuilder().standardUser()
.but().withEmail("elvis@presley.com")
.create();

or like this (if both parameters have some significance for the test case):
User user = new UserBuilder().standardUser()
.but().withEmail("elvis@presley.com")
.and().withPassword("sivle")
.create();

A lot of options to choose from. Just select the most appropriate one, given the context and, of course,
your own personal preferences.

Don't be satisfied with the Test Data Builder implementations generated by IDEs! Shape them to your liking, so you can write really readable test code.

5.3.2. Use Simpler Types


Let us consider another example of object creation that could be improved. Consider the following test
fragment:
List<ProductCategory> categories = Arrays.asList(
product("car", new BigDecimal("100"), new BigDecimal("10000")),
product("bike", new BigDecimal("20"), new BigDecimal("5000")),
product("plane", new BigDecimal("1000"), new BigDecimal("500000"))

60
Chapter 5. Readability

);

This code creates several objects of the ProductCategory class using the private product() method. The problem with this test code is that it reflects the complexity of the business domain, which requires us to use BigDecimal objects (I would suspect that these values serve to represent prices). However, if you look at the values used, you'll notice that integers would have sufficed. This means we could actually use integers, provided that the product() method allows us to. Since it is we who decide what parameters the product() method should accept, there is no problem with this. After the change, the test code could look like this:
List<ProductCategory> categories = Arrays.asList(
product("car", 100, 10000),
product("bike", 20, 5000),
product("plane", 1000, 500000)
);
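
For completeness, this is roughly how the product() helper might absorb the conversion. The ProductCategory constructor used here is an assumption (its real signature is not part of the original listing):

private ProductCategory product(String name, int unitPrice, int totalValue) {
    // the BigDecimal "noise" is hidden inside the helper
    return new ProductCategory(name,
            new BigDecimal(unitPrice), new BigDecimal(totalValue));
}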

Not a big deal, but if you have many tests which use complex objects (like BigDecimal) to create other
objects (which is not so unusual for complex integration tests), then the difference might well prove
significant.

Even though this example was using private methods for object creation, the advice just given also applies to Test Data Builders. We should likewise equip them with methods which allow us to use simple objects. For example, I have learned that adding methods which take a String representation of dates (e.g. "2013-05-23") instead of Date objects makes the test code far more readable.

KISS: Keep It Simple Stupid! :)

5.4. Naming is King


To give the right and proper word to each and every thing!

Cyprian Kamil Norwid Vade-mecum: In lieu of introduction

This is an interesting issue, because when it comes to giving appropriate names to methods and variables
in production code, we are usually very successful. However, when it comes to tests, we jettison these
good habits and often perform poorly. I suspect this is because in tests it is quite normal to have a few
objects of the same type, resulting in simple naming schemas like productA, productB, productC etc.
Sometimes this is good enough, but sometimes it could be much improved.

5.4.1. A Rose by any Other Name Not Really!


The first example presents a simple case of a test related to a permission-checking module. The next
listing shows a small fragment of this test - a data provider which feeds test methods with data.
@DataProvider
public static Object[][] userPermissions() {
return new Object[][]{
{"user_1", READ},
{"user_2", READ},
{"user_2", WRITE},

61
Chapter 5. Readability

{"user_3", READ},
{"user_3", WRITE},
{"user_3", DELETE}
};
}

As you can see, it connects three users (user_1, user_2, user_3) with some permissions (READ, WRITE, DELETE). Fine, but why can user_1 only READ, while user_3 can also WRITE and DELETE? This is not at all obvious, and if some test fails you will learn something like: "user_3 was expected to have DELETE permission". Then you'll start scratching your head, wondering why, actually, user_3 should have such permission. Of course, the answer is nowhere to be found. It was obvious a few months ago, when this test was created... but now? Who the heck knows who user_3 is?

And the cure? See below.


@DataProvider
public static Object[][] userPermissions() {
return new Object[][]{
{"guest", READ},
{"logged", READ},
{"logged", WRITE},
{"admin", READ},
{"admin", WRITE},
{"admin", DELETE}
};
}

A guest can only read, logged users can also write, and admin has all permissions. Now the error message
- e.g. "admin was expected to have DELETE permission" - makes sense.

5.4.2. Object1, Object2, Object3


Now let us consider another example. This is a (tiny) part of a complex integration test which creates
a bunch of objects (mainly of the User class). It uses a lot of helper methods which are responsible for
the creation of objects, and also for saving them into the database.
@Test
public void importantTest() {
User user1 = daoTestHelper.addUserAndAssociateWithProduct(
product, 30l, ProductUserState.ACTIVE);
daoTestHelper.userProduct30DayStatistics(user1, "PL", 1);
daoTestHelper.userProduct30DayStatistics(user1, "US", 2);
User user2 = daoTestHelper.addUserAndAssociateWithProduct(
product, 30l, ProductUserState.ACTIVE);
daoTestHelper.userProduct30DayStatistics(user2, "PL", 4);
daoTestHelper.userProduct30DayStatistics(user2, "US", 8);

// ... and so on till user5 or so


}

What I hate about this test are the names of user objects suffixed with numbers (user1, user2, etc.).
They furnish no information whatsoever about the state (role, properties) of the objects. Here comes
an updated version:
@Test
public void importantTest() {
User userWith1PlEntry = daoTestHelper.addUserAndAssociateWithProduct(
product, 30l, ProductUserState.ACTIVE);
daoTestHelper.userProduct30DayStatistics(userWith1PlEntry, "PL", 1);
daoTestHelper.userProduct30DayStatistics(userWith1PlEntry, "US", 2);
User userWith4PlEntries = daoTestHelper.addUserAndAssociateWithProduct(
product, 30l, ProductUserState.ACTIVE);
daoTestHelper.userProduct30DayStatistics(userWith4PlEntries, "PL", 4);
daoTestHelper.userProduct30DayStatistics(userWith4PlEntries, "US", 8);

// ... and so on
}

Believe me, these names make perfect sense for this particular scenario, and for the problem domain. They allow one to come up with some very readable custom assertions (see Section 5.6), such as this:
assertThat(result).hasResultsForUser(userWith4PlEntries, 4);

Surprisingly, this simple refactoring of names has helped me uncover an issue that lay hidden within the test code. I must confess that I didn't see it when the original names of the variables (user1, user2, etc.) were being used. Look at the longer snippet of the original code! It repeats the same pattern of first creating a user and then creating some statistics for it. However, there is one place where there is a break in this pattern. Can you spot it?
@Test
public void importantTest() {
User user1 = daoTestHelper.addUserAndAssociateWithProduct(
product, 30l, ProductUserState.ACTIVE);
daoTestHelper.userProduct30DayStatistics(user1, "PL", 1);
daoTestHelper.userProduct30DayStatistics(user1, "US", 2);
User user2 = daoTestHelper.addUserAndAssociateWithProduct(
product, 30l, ProductUserState.ACTIVE);
daoTestHelper.userProduct30DayStatistics(user2, "PL", 4);
daoTestHelper.userProduct30DayStatistics(user2, "US", 8);
User user3 = daoTestHelper.addUserAndAssociateWithProduct(
product, 30l, ProductUserState.DELETED);
daoTestHelper.userProduct30DayStatistics(user3, "PL", 16);
daoTestHelper.userProduct30DayStatistics(user3, "CZ", 32);
User user4 = daoTestHelper.addUserAndAssociateWithProduct(
product, 30l, ProductUserState.ACTIVE);
daoTestHelper.userProduct30DayStatistics(user3, "US", 64);
User user5 = daoTestHelper.addUserAndAssociateWithProduct(
product, 30l, ProductUserState.DELETED);

// ... and so on
}

Well, maybe you spotted it (congratulations!), but when I was working with the code, I didn't! Only when I started renaming variables did I notice that in one place user3 was used where user4 should have been (take a look at the next to last line). It hasn't influenced the outcome of the test (because this line is responsible for adding data which should not be included in the final result). But it has done another evil thing: the test has not really tested the intended scenario!

When names of variables differ only very slightly they will be easily confused.

Conclusion. It is the same as with classes. At first you begin with useless names like ClientDAOImpl, but then you learn that you can have much more intention-revealing names: HibernateClientDAO, InMemoryClientDAO, HighPerformanceClientDAO, or whatever describes this particular implementation. There is always something to say about your class. I believe the same applies to the variables used in tests. I often start with client1 and client2, but then it occurs to me that these should really be client and vipClient (while in another test method I have client and regularClient). Try it - it will make your test code so much more readable!

Do not suffix/prefix names of test variables with letters or numbers! Find intention-revealing
names for them!

Whenever you create some object for testing purposes, there is some reason behind this. You are creating
it because it is different from other objects. It has different properties. Do not be shy about that! Say it
aloud! Say it by giving the object a uniquely recognizable name!

5.4.3. What do True and False Stand For?


What is truth?
Pilate John 18:38

If we expect our tests to act as documentation, then we need to put some effort into their readability. Consider the following example:
MockServer server = new MockServer(responseMap, true,
new URL(SERVER_ROOT).getPort(), false);

This single line of code is responsible for the creation of a server object of the MockServer class. It
is used later in test code.

Nice, but what on earth do true and false mean here? What kind of server is it, actually, that is being
created?

I don't expect you to have the JavaDocs for MockServer (which sounds like a utility class created especially for testing purposes), so you'll have to browse the source code to find out. This need not be a major problem (but it may be - depending on the complexity of the MockServer class). However, it is certainly a nuisance, and it means that this test does not fulfil the role of documentation very well. In order to understand the test case we will have to browse some other documents (i.e. source code). This is not a tragedy, but neither is it something to be overjoyed about.

So what can we do about this? Well, there are at least three options.

Constants
First, we could create some static variables with intention-revealing names, like this:
private static final boolean RESPONSE_IS_A_FILE = true;
private static final boolean NO_SSL = false;

MockServer server = new MockServer(responseMap, RESPONSE_IS_A_FILE,
new URL(SERVER_ROOT).getPort(), NO_SSL);

Now there is no doubt about what kind of server is being created. It does not use SSL and responds with a file. Better, isn't it? And we haven't had to work very hard to achieve this effect.


Private Methods
Another option is to replace the new keyword with some more expressive statement. For example, we
could create a private method within the test code.
private MockServer noSslFileServer() throws MalformedURLException {
return new MockServer(responseMap, true,
new URL(SERVER_ROOT).getPort(), false);
}

Then we could create the server like this:


MockServer server = noSslFileServer();

This is more readable than the original version. However, this solution is not really scalable3. The number of such private methods will grow along with the complexity of the MockServer: the more parameters and options there are, the more methods will be needed.

Test Data Builders


The third approach requires some additional work. Yeah, you guessed it already - we could use the Test
Data Builders idea, and express very clearly what kind of server is required. For example, like this:
MockServer server = new MockServerBuilder()
.withResponse(responseMap)
.withResponseType(FILE)
.withUrl(SERVER_ROOT)
.withoutSsl()
.create();

Is this approach better than the previous ones? Not necessarily, as it definitely requires more work.
However, the fact that we control the API allows us to shape it according to our liking. Maybe, in the
domain that this MockServer is used in, this will be the perfect solution (one that in this case implicitly
sets the "no-SSL" option, and shortens any type and URL setting):
MockServer server = new MockServerBuilder()
.createFileServer(SERVER_ROOT)
.withResponse(responseMap)
.create();

It seems fair to say that the more complex, and the more often created, the class in question is, the more
we will stand to benefit from making use of test data builders.

5.4.4. Should is Better than Test


Once upon a time, when His Majesty King JUnit the Third ruled over the land of Java testing, we were
all forced to prefix our test methods with the "test" prefix. Somehow, this custom survived the death of
the King4. Which is really troublesome, as we shall shortly see.

Take a look at the following example (which looks like Groovy, judging by the lack of semicolons):
3. See Section 5.6.2 for a similar discussion regarding assertions.
4. Some people accuse his younger brother (JUnit the Fourth) of murdering him. Others blame the barbaric tribes of TestNG. Meanwhile, Java 5 acolytes mention the lack of annotations as the reason for his doom. The rest simply do not care at all, but all are happy that he has gone.

@Test
public void testOperation() {
configureRequest("/validate")
rc = new RequestContext(parser, request)
assert rc.getConnector() == null
assert rc.getOperation().equals("validate")
}

Now what is the purpose of this test? The name says it should "test operation", which is rather vague.
What kind of operation? What behaviour is expected? And when this test fails, will it then be clear what
the functionality in the system is that is not working?

The thing is, the documentational value of this test is zero, or even less. It brings confusion rather than
clarifying anything.

As this example shows, the first problem with test methods prefixed with test is that they tend to test something. And when they fail, all you know is that something in your system is not working as expected. And then the investigation begins... And it can take hours...

Take a look at the next example, which uncovers another downside to the test naming pattern. As the
name of this test method says, it "tests query".
@Test
public void testQuery(){
when(q.getResultList()).thenReturn(null);
assertNull(dao.findByQuery(Transaction.class, q, false));
assertNull(dao.findByQuery(Operator.class, q, false));
assertNull(dao.findByQuery(null, null, false));

List result = new LinkedList();


when(q.getResultList()).thenReturn(result);
assertEquals(dao.findByQuery(Transaction.class, q, false), result);
assertEquals(dao.findByQuery(Operator.class, q, false), result);
assertEquals(dao.findByQuery(null, null, false), null);

when(q.getSingleResult()).thenReturn(null);
assertEquals(dao.findByQuery(Transaction.class, q, true).size(), 0);
assertEquals(dao.findByQuery(Operator.class, q, true).size(), 0);
assertEquals(dao.findByQuery(null, null, true), null);

when(q.getSingleResult()).thenReturn(t);
assertSame(dao.findByQuery(Transaction.class, q, true).get(0), t);
when(q.getSingleResult()).thenReturn(o);
assertSame(dao.findByQuery(Operator.class, q, true).get(0), o);
when(q.getSingleResult()).thenReturn(null);
assertSame(dao.findByQuery(null, null, true), null);
}

This test:

definitely breaches the SRP (see Section 4.6),

uses some magical switches (who tells me what true and false stand for? See Section 5.4.3),

uses cryptic variable names (q, o and t).

Nevertheless, all in all it is a valuable test. Believe me, I have analyzed every bit of it, and it all pretty
much makes sense. However, I still hate the way it looks. It is too long and it verifies too many scenarios.


And here we come to the second problem that frequently occurs with test-prefixed test methods. Once you have written something like testQuery(), it feels natural to put there every test related to query. That is how such monstrous methods are born. A bad thing: really bad...

I'm convinced that this would never have happened, if only the developer had started with a should prefix. When you type should, it makes you think about some specific scenarios. And you end up with shouldReturnNullWhenDaoReturnsNull(), or with shouldReturnSingleValueReturnedByDao(), and so on. However, you certainly won't end up with shouldTestQuery(). Unless, that is, you are really trying to annoy me. ;)

The test would be much better split up into a few test methods, like this:
@Test
public void shouldReturnNullListWhenDaoReturnsNull() {
....
}

@Test
public void shouldReturnEmptyListWhenDaoReturnsIt() {
....
}

@Test
public void shouldReturnNullSingleResultWhenDaoReturnsNull() {
....
}

@Test
public void shouldReturnSingleResultReturnedByDao() {
....
}
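
To make this concrete, the last of these methods could be carved straight out of the original testQuery() body, using the same q, dao and t objects - something along these lines:

@Test
public void shouldReturnSingleResultReturnedByDao() {
    // given
    when(q.getSingleResult()).thenReturn(t);

    // when
    List result = dao.findByQuery(Transaction.class, q, true);

    // then
    assertSame(result.get(0), t);
}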

Start with should. Think about the scenario. Do not use the test prefix.

5.4.5. When a Test Name Lies


I lie to myself all the time. But I never believe me.
S.E. Hinton The Outsiders

I often find test methods with misleading names. Take this one, for example:
public void shouldInsertNewValues() {
//given
//when
reportRepository.updateReport(ReportColumn.DATE,
ReportColumn.PLACE, reportMap(BigDecimal.TEN));
reportRepository.updateReport(ReportColumn.DATE,
ReportColumn.PLACE, reportMap(new BigDecimal("5")));

//then
assertThat(reportRepository
.getCount(ReportColumn.DATE, ReportColumn.PLACE))
.isEqualTo(1);
}

Does it really verify whether "new values are inserted"? I wouldn't say so. Frankly, I'm not 100% sure what this test is all about, but it rather seems to me that it verifies whether the new value overrides an older one. If so, it should rather be named shouldOverrideOldReportWithNewValues.

No big deal, eh? Well, maybe not a big deal, but why should your test lie to you? If one of the purposes
of tests is to serve as documentation, then it is a sick joke to have tests lying to you.

As long as this test is green, everything is good. But when the test suddenly fails after you've made some very sizeable changes in this area of the system5, you'll be in real trouble. Is the scenario covered by this test still valid? Should it be removed? Should it be changed? If so, how? Good luck with solving all of this!

Make sure names of test methods are properly descriptive of the scenarios they cover. Be
especially careful when updating tests. Are they still valid after the changes you have just
introduced?

5.4.6. Implementation Details are not so Important


Indeed, the implementation details are not so important that they should be promoted to the test method
name. Have a look at this example:
@Test
public void shouldReturnFalseIfTransactionIsPending() {
//given
transaction.setState(PayoutTransactionState.PENDING);

//when
boolean paid = transaction.isPaid();

//then
assertThat(paid).isFalse();
}

This time the name of the test method does not lie, but it does specify unnecessary details. It should
rather state the business scenario being tested. For example:
@Test
public void pendingTransactionShouldNotBeConsideredAsPaid() {

...
}

Now, if you change the API by making isPaid() return void (and throw an exception in case of failure), then the test method name will remain intact: the actual business scenario has not changed.

When thinking about the test method, leave the implementation details alone. What matters
are the requirements.

5. Remember, the only constant thing in software development is change. :)

5.5. Mocks Are Good


In the following test four objects are created. One of them is a mock of the TrafficTrendProvider
class, which is used by the SUT (an object of the TrafficService class). The remaining two are real
objects created using the new keyword. When we look more closely we notice that the report object is
being used solely in order to create the other real object: trafficTrend.
@Test
public void shouldGetTrafficTrend() {
//given
TrafficTrendProvider trafficTrendProvider
= mock(TrafficTrendProvider.class);
Report report = new Report(null, "", 1, 2, 3,
BigDecimal.ONE, BigDecimal.ONE, 1);
TrafficTrend trafficTrend = new TrafficTrend(report, report,
new Date(), new Date(), new Date(), new Date());
given(trafficTrendProvider.getTrafficTrend()).willReturn(trafficTrend);
TrafficService service = new TrafficService(trafficTrendProvider);

//when
TrafficTrend result = service.getTrafficTrend();

//then
assertThat(result).isEqualTo(trafficTrend);
}

The major issue with this test is that it creates real objects which are irrelevant to the tested scenario! This is bad, because the test becomes very fragile this way. Any change to the constructors of the Report or TrafficTrend classes, and this test will need to be updated.

Another downside to this test is that it distracts the reader from the testing scenario by providing too many details which are not important at all. For example, what is the empty String parameter ("") of the Report constructor, and does it matter that there are four identical dates (new Date()) being passed to the second constructor?

Instead, I suggest cutting this kind of blah-blah right back to the bare minimum, and using mocks in
place of real objects.
@Test
public void shouldGetTrafficTrend() {
//given
TrafficTrendProvider trafficTrendProvider
= mock(TrafficTrendProvider.class);
TrafficTrend trafficTrend = mock(TrafficTrend.class);
given(trafficTrendProvider.getTrafficTrend()).willReturn(trafficTrend);
TrafficService service = new TrafficService(trafficTrendProvider);

//when
TrafficTrend result = service.getTrafficTrend();

//then
assertThat(result).isEqualTo(trafficTrend);
}

By providing only what is important for verifying the scenario being tested, this test avoids distracting
the reader. It uses mocks instead of real objects, which makes it less fragile than its previous version.


Creating objects only to create other objects, so you can create other objects? Do not do that!
(Unless the creation of objects is what you want to test).

Really need to create so many objects? Uh-oh, ok. Then at least do not pollute your test code
with the details of the creation (because they are usually irrelevant to the tested scenario).
See Section 5.3 for some hints.

Obviously, such an alteration improves not only the readability of the test, but also its maintainability. The improved version will survive any changes in the constructors of the Report or TrafficTrend classes.

5.6. Assertions
One must verify or expel his doubts, and convert them into the certainty of Yes or No.
Thomas Carlyle

The assertions part is where the real testing happens. This is where you verify whether things have
worked out as expected. Apart from being crucial for making the test valid and valuable, assertions also
have a great impact on the documentation value of your tests.

Let's take a look at a few examples of not-so-perfect assertions. But before we do so, allow me to introduce the idea of custom assertions.

Custom Assertions

If you use JUnit or TestNG, then you must already be familiar with assertion methods like assertEquals(), assertTrue(), assertSame(), and others. It is possible to verify everything using such simple assertions - possible, that is, but not always convenient. While for many simple tests these basic assertions are sufficient, they fall short when it comes to more complicated ones.

If you look around, you'll also find some "helper" projects which offer specialized assertions, so that you can, for example, verify collection-related stuff more easily. These are useful, but still, they know nothing about your business objects, and can't help you with verifying these.

The next step is to use tools like AssertJ6, or Hamcrest7, which offer unlimited extensibility. This means you can extend their original set of assertions with your own custom assertions. It will require some effort from your side, but as you'll see, sometimes the results are worth it.

5.6.1. Say No to Complex Assertions


In unit tests, assertions are usually not a problem. Your tests are focused and test just one thing, so there
is usually only one assertion, and everything is clear. However, it is much worse with integration and
end-to-end tests, in which the assertion part can sometimes be huge.
6. http://joel-costigliola.github.io/assertj/
7. http://hamcrest.org/

Let's take a look at an example of such a situation. This test verifies whether a certain artifact (WAR file) has been copied to some remote server.
@Test
public void shouldPreDeployApplication() throws Exception {
// given
Artifact artifact = mock(Artifact.class);
when(artifact.getFileName()).thenReturn("war-artifact-2.0.war");
ServerConfiguration config = new ServerConfiguration(
ADDRESS, USER, KEY_FILE, TOMCAT_PATH, TEMP_PATH);
Tomcat tomcat = new Tomcat(HTTP_TOMCAT_URL, config);
String destDir = new File(".").getCanonicalPath()
+ SLASH + "target" + SLASH;
new File(destDir).mkdirs();

// when
tomcat.preDeploy(artifact, new FakeWar(WAR_FILE_LENGTH));

//then
JSch jsch = new JSch();
jsch.addIdentity(KEY_FILE);
Session session = jsch.getSession(USER, ADDRESS, 22);
session.setConfig("StrictHostKeyChecking", "no");
session.connect();
Channel channel = session.openChannel("sftp");
session.setServerAliveInterval(92000);
channel.connect();
ChannelSftp sftpChannel = (ChannelSftp) channel;

sftpChannel.get(TEMP_PATH + SLASH + artifact.getFileName(), destDir);
sftpChannel.exit();
session.disconnect();

File downloadedFile = new File(destDir, artifact.getFileName());

assertThat(downloadedFile).exists().hasSize(WAR_FILE_LENGTH);
}

As you can see, the then part is pretty enormous, and rather unpleasant. It contains a lot of implementation
details, which clutter the view. It is not so simple to read the test and understand the scenario it is
verifying.

An interesting observation is that the real assertion happens in the last line of the test method. The rest
of the "then" part is only preparing for this final line.

How to make it better? Many of us would extract the assertion part as a private method. Okay, this would be better, but as we'll see in a minute (see Section 5.6.2), it can lead to problems as well, so let's pursue a different approach.
public void shouldPreDeployApplication() {
//given
Artifact artifact = mock(Artifact.class);
when(artifact.getFileName()).thenReturn(ARTIFACT_FILE_NAME);
ServerConfiguration config = new ServerConfiguration(
ADDRESS, USER, KEY_FILE, TOMCAT_PATH, TEMP_PATH);
Tomcat tomcat = new Tomcat(HTTP_TOMCAT_URL, config);

// when
tomcat.preDeploy(artifact, new FakeWar(WAR_FILE_LENGTH));

// then
SSHServerAssert.assertThat(ARTIFACT_FILE_NAME)
.existsOnServer(config).hasSize(WAR_FILE_LENGTH);
}

Better, isn't it? Now the test code speaks in terms of the requested functionality and not in terms of implementation. Good.

My advice is that when you see the assertion part of your test code growing, you should stop and write
the assertion as you would like it to be. Then replace all the current asserting code with this one-liner,
and try to implement your custom assertion. It is easier, and actually less work than you might think
(and this is the case no matter whether you use Hamcrest or AssertJ).
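
For the record, such a custom assertion does not have to be anything fancy. A skeleton of SSHServerAssert, assuming it is built on AssertJ's AbstractAssert (the method bodies are only outlined here, since they would simply contain the JSch code from the original "then" part):

public class SSHServerAssert extends AbstractAssert<SSHServerAssert, String> {

    protected SSHServerAssert(String fileName) {
        super(fileName, SSHServerAssert.class);
    }

    public static SSHServerAssert assertThat(String fileName) {
        return new SSHServerAssert(fileName);
    }

    public SSHServerAssert existsOnServer(ServerConfiguration config) {
        // connect over SFTP (the JSch code from the original test moves here),
        // download the file named "actual" to a temporary directory and
        // fail with a readable message if it cannot be found
        return this;
    }

    public SSHServerAssert hasSize(long expectedSize) {
        // compare the size of the downloaded file with expectedSize
        return this;
    }
}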

Just say it. Write the assertions as you would like them to be. Then implement them.

Let us consider another example of overly complicated assertions.

As you can see below, an indexing variable i is used to navigate through the resulting list. The assertions
are rather cryptic. At least for me, it is not obvious that, for example, the third element of this list should
have twelve views (whatever a view means).
@Test
public void shouldGetDataFromReport() {
//given
... some complex set-up here

//when
List<ReportData> result
= someDao.getReport(filter, WITH_EXCLUDED, 100, 0);

//then
int i = 0;
assertThat(result.size()).isEqualTo(5);
assertThat(result.get(i).getReportId()).isEqualTo(filter.getId());
assertThat(result.get(i++).getViews()).isEqualTo(16);
assertThat(result.get(i++).getViews()).isEqualTo(12);
assertThat(result.get(i++).getViews()).isEqualTo(3);
assertThat(result.get(i++).getViews()).isEqualTo(0);
assertThat(result.get(i++).getViews()).isEqualTo(0);
}

As usual, when it comes to making assertions more readable we can use private methods (not really
recommended - see Section 5.6.2) or write our own custom assertions. Since, in this case, the test
is verifying some very important part of the system, I have decided that it would be worthwhile to
implement a custom assertion. The result is shown in the listing below.
@Test
public void shouldGetDataFromReport() {
//given
... some complex set-up here

//when
List<ReportData> result
= someDao.getReport(filter, WITH_EXCLUDED, 100, 0);

//then
assertThat(result)
.hasStatisticsForDays(5)
.allStatisticsAreForReport(filter.getId())
.isSortedByViews()
.hasResultsForDay(monday, 3)
.hasResultsForDay(tuesday, 12)
.hasResultsForDay(wednesday, 16)
.hasResultsForDay(thursday, 0)
.hasResultsForDay(friday, 0);
}

In my humble opinion it is now much clearer why we should expect 12 at some point. Wouldn't you agree?

Let your tests (and assertions) speak with the domain language, not the terms used in the
implementation.

5.6.2. Avoid Assertions using Private Methods


So, you've got a test scenario. It is a complex integration test, which covers some important functionality. At the end of it you write some assertions to make sure that the data in the database is in the expected state. It could look like this:
@Test
public void testChargeInRetryingState() throws Exception {
// given
TxDTO request = createTxDTO(RequestType.CHARGE);
AndroidTx androidTx = ...
Processor processor = ...
... much more complex set-up code here

// when
final TxDTO txDTO = processor.processRequest(request);

// then
assertEquals(txDTO.getResultCode(), ResultCode.SUCCESS);
final List<AndroidTxStep> steps = new ArrayList<AndroidTxStep>(
androidTx.getTxSteps());
final AndroidTxStep lastStep = steps.get(steps.size() - 1);
assertEquals(lastStep.getTxState(),
AndroidTxState.CHARGE_PENDING);
assertEquals(lastStep.getMessage(), ClientMessage.SUCCESS);
... some more assertions here
}

So far, so good. The test is complex, but so is the problem domain, so you decide you have no choice
but to bear with it.

Now you write some other tests. You soon discover that the assertions are very similar to those you
have already written (they only differ with respect to expected states - for example, some other test
method expects AndroidTxState.SUBMITTED instead of AndroidTxState.CHARGE_PENDING). "Reuse
is good", you think, and you do the most obvious thing: you create a private method (which takes
expected states as a parameter) and call it from both tests. As you write more test code, the same method
is being called from more test methods.
This seems right. There is no code duplication among test methods, because all of the assertions are
done by this separated private method. An example of such a test would be this:
@Test
public void testChargeInRetryingState() throws Exception {
// given
...

// when
...

// then
assertState(ResultCode.SUCCESS, androidTx,
AndroidTxState.CHARGE_PENDING, ClientMessage.SUCCESS);
}

And everything is good and works well, at least for a while. You write more tests, still reusing this
one assertion method, but soon you encounter problems. The assertions in the method are not exactly
what you need in some particular test cases. They are very close to what you want, but not exactly right.
"Reuse is good", you say, and make your private method more generic, so it can handle more test cases.
The resulting assertion method is presented below:
private void assertState(TxDTO txDTO, AndroidTx androidTx,
AndroidTxState expectedAndroidState,
AndroidTxState expectedPreviousAndroidState,
ExtendedState expectedState,
String expectedClientStatus,
ResultCode expectedRequestResultCode) {

final List<AndroidTxStep> steps = new ArrayList<AndroidTxStep>(
androidTx.getTxSteps());
final boolean checkPreviousStep = expectedAndroidState != null;
assertTrue(steps.size() >= (checkPreviousStep ? 3 : 2));

if (checkPreviousStep) {
AndroidTxStep lastStep = steps.get(steps.size() - 2);
assertEquals(lastStep.getTxState(),
expectedPreviousAndroidState);
}

final AndroidTxStep lastStep = steps.get(steps.size() - 1);
assertEquals(lastStep.getTxState(), expectedAndroidState);
assertEquals(lastStep.getMessage(), expectedClientStatus);

assertEquals(txDTO.getResultCode(), expectedRequestResultCode);
assertEquals(androidTx.getState(), expectedAndroidState);
assertEquals(androidTx.getExtendedState(), expectedState);

if (expectedClientStatus == null) {
verifyZeroInteractions(client);
}
}

Oh my! This has gone too far! Now we have a monster method, which verifies every possible scenario! It
takes seven parameters and has some logic. This will be impossible to maintain. Now you can't just read
your tests and understand straightaway what scenario is actually being tested (or, at least, not without
undertaking some serious investigations).
Moreover, you can't understand this even if you know the business domain pretty well!

So what is the right way to go about this? Well, I would suggest writing a custom assertion. This allows
us to write such tests in the following way:
@Test
public void testChargeInRetryingState() throws Exception {
// given
TxDTO request = createTxDTO(RequestType.CHARGE);
AndroidTx androidTx = ...
Processor processor = ...
... much more complex set-up code here

// when
final TxDTO txDTO = processor.processRequest(request);

// then
assertEquals(txDTO.getResultCode(), ResultCode.SUCCESS);
assertThat(androidTx)
.hasState(AndroidTxState.CHARGED)
.hasMessage(ClientMessage.SUCCESS)
.hasPreviousState(AndroidTxState.CHARGE_PENDING)
.hasExtendedState(null);
}

What is different from the previous test? First of all, one can simply read the assertion part and
understand it.

The test is still not really simple, but this is only because the business domain of the tested system is
itself complex! I think this version makes it much easier to understand what the expected outcome of
the test is. There is no logic involved (no if statements in the assertions part), and this leaves us in no
doubt as to what is being asserted.

Whenever your assertions start to grow beyond your safety limit (2 lines? 3 lines? 5 lines? I
do not know: after all, it's your safety limit, not mine), introduce custom assertions instead.
In fact, it seems reasonable to come up with custom assertions for the main objects from
your domain (because you will probably have many tests which involve them - both at unit
and integration levels).

And how to come up with a good custom assertion method? My trick is simply to write it in such a
way that I am happy reading it. So, without actually having any custom assertion implemented, I write
a line like this in my test code:
assertThat(client).isVip().and().hasDiscount(0.2);

And then I implement it. Of course, it sometimes happens that I need to deviate from the original plan,
and that the resulting fluent interface differs in its details, but usually I'm able to come quite close to
what I had imagined.
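
To show that there is no magic involved, here is one possible shape of that assertion, written by hand,
without Hamcrest or AssertJ. The Client class and its isVip() and getDiscount() methods are of course
hypothetical, only a guess at the domain, and the and() method is pure syntactic sugar:

public class ClientAssert {

    private final Client actual;

    private ClientAssert(Client actual) {
        this.actual = actual;
    }

    public static ClientAssert assertThat(Client actual) {
        return new ClientAssert(actual);
    }

    public ClientAssert isVip() {
        if (!actual.isVip()) {
            throw new AssertionError("Expected client " + actual + " to be a VIP");
        }
        return this;
    }

    // pure syntactic sugar, so that the assertion reads like a sentence
    public ClientAssert and() {
        return this;
    }

    public ClientAssert hasDiscount(double expectedDiscount) {
        if (actual.getDiscount() != expectedDiscount) {
            throw new AssertionError("Expected discount " + expectedDiscount
                    + " but was " + actual.getDiscount());
        }
        return this;
    }
}

Comparing doubles with != is acceptable here only because the discount is a value that is set, not
computed; in real code you might prefer a comparison with a tolerance.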

Let us take another look at private methods used for assertions.

In another test class that I have seen, many of the test methods looked like this one:
@Test
public void testCompile_32Bit_FakeSourceFile() {
CompilerSupport _32BitCompilerSupport
= CompilerSupportFactory.getDefault32BitCompilerSupport();
testCompile_FakeSourceFile(_32BitCompilerSupport);
}

It is not easy to figure out what scenario is being tested by this method. In fact it is almost impossible,
unless one looks into the testCompile_FakeSourceFile() method. So, let us have a look at it.

private void testCompile_FakeSourceFile(CompilerSupport compilerSupport) {
String[] compiledFiles = compilerSupport
.compile(new File[] { new File("fake") });
assertThat(compiledFiles, is(emptyArray()));
}

Hmm... clear? Not to me. I still cannot tell you what the story behind
testCompile_32Bit_FakeSourceFile is. Is that because I don't know anything about the domain
of the problem? To some extent the answer is yes. However, I suspect that even if I were very well
acquainted with it, I would still have to figure out the purpose of the method. It seems to me that some
improvement on this is possible.

The trouble with this test is that testCompile_FakeSourceFile is doing two things at once: it is
executing the tested functionality of the SUT, and also asserting on the result. What I would suggest is
getting rid of this helper private method by inlining it.

@Test
public void testCompile_32Bit_FakeSourceFile() {
CompilerSupport _32BitCompilerSupport
= CompilerSupportFactory.getDefault32BitCompilerSupport();
String[] compiledFiles = _32BitCompilerSupport.compile(SINGLE_FAKE_FILE);
assertThat(compiledFiles, is(emptyArray()));
}

Then I would consider renaming the test method and writing a custom assertion to better express the
intention behind the test.

@Test
public void compiler32BitShouldNotBotherToCompileFakeSourceFile() {
CompilerSupport _32BitCompilerSupport
= CompilerSupportFactory.getDefault32BitCompilerSupport();
String[] compiledFiles = _32BitCompilerSupport.compile(SINGLE_FAKE_FILE);
assertThat(compiledFiles).nothingWasCompiled();
}

Is it better? In my opinion, yes. Now I can read the test method alone and understand it.

The Cost of Custom Assertions

There is a question which could be asked by an inquisitive reader. Custom assertions contain
some code, even some logic. How can I be sure that there are no mistakes there?

Okay, I admit it: I do not test my custom assertions. This stems from a number of considerations.

First of all, the logic of custom assertions is very limited (if there is any at all). Usually, it boils
down to fetching some properties of an object and comparing them with the expected values. When
writing custom assertions for your domain objects, you will rarely encounter the need for a for
loop there, or anything more complex. It often happens that the majority of the code is devoted to
the creation of custom error messages.

Things look different, though, for custom assertions that are more related to integration tests,
like the one discussed in Section 4.5.4. In such cases you might well consider writing tests for
your assertions.

Secondly, custom assertions are often written after you have already written assertions in your
tests. This is pretty natural. At some point in the writing of your test, you realize that the assertion
part of it is getting too big and/or too complicated. Then you create your custom assertion class,
and copy parts of your assertions there. This usually happens when you have verified that your
assertions are working (by running tests).
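
Coming back to the integration-related case: if you ever do decide to test a custom assertion, the test
can be as small as checking that the assertion fails (and passes) when it should. A sketch, reusing the
hypothetical ClientAssert from earlier in this section, with JUnit 4's expected attribute (in TestNG,
expectedExceptions plays the same role); aRegularClient() and aVipClient() are hypothetical
test-data builders.

@Test(expected = AssertionError.class)
public void shouldFailForNonVipClient() {
    ClientAssert.assertThat(aRegularClient()).isVip();
}

@Test
public void shouldPassForVipClient() {
    ClientAssert.assertThat(aVipClient()).isVip();
}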

5.6.3. Expected Objects


Each test should tell a story about what happens and what is expected. The story extends over the whole
test, and every part of the test figures in the telling of it.

I find there are some tests which, when it comes to assertions, fail to tell the story. Have a look at the
following test, which illustrates this issue.
@Test
public void shouldReturnModelWithCorrectValuesCalculated() {
//given
...

//when
DataModel result = ...;

//then
DataModel expectedResult = new DataModel<>(3.275, 1, 100);
// value should be (1+5+50+75=131)/(10+10+10+10=40) = 3.275
assertThat(result).isEqualTo(expectedResult);
}

I have omitted the "given" and "when" parts, because the interesting things happen at the very end of
this test.

The name of the test says that the SUT shouldReturnModelWithCorrectValuesCalculated(). The
comment next to the assertion tells me exactly what was important to the creator of the
test. There is some average value calculated, and it is essential that it be equal to 3.275. Given that this
is so, instead of comparing full objects, let us verify just this one value:

// then
assertThat(result.getAverageValue()).isEqualTo(3.275);

After this change, the test tells its story much better. The assertion part corresponds to the name of the
test by specifying the exact value that was expected to be calculated.

By changing the assertion in the way we did, we also resolved another (as yet unmentioned) weakness
of the previous version of this test. When the test fails, and the compared objects do not override the
toString() method, then all you get is a cryptic error message. Like this one:

org.junit.ComparisonFailure:
Expected :my.company.DataModel@e6037658
Actual :my.company.DataModel@4af66537

Another drawback to such a test is that it can fail if the objects compared differ with regard to some
unimportant details.

Now you have a few options:

run the test again in debug mode to see what the problem is,

add a reasonable toString() method to the DataModel class,

write custom assertions.

When verifying specific properties instead of whole objects, you are sure to receive a more precise error
message, saying exactly which of them differs from what was expected.
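
For completeness, the second option from the list above (a reasonable toString()) costs very little and
pays off every time an object comparison does sneak into a test. A sketch only: the field names below
are merely a guess at what the constructor arguments (3.275, 1, 100) might mean.

public class DataModel {

    // hypothetical fields, guessed from the constructor call new DataModel<>(3.275, 1, 100)
    private final double averageValue;
    private final int minValue;
    private final int maxValue;

    public DataModel(double averageValue, int minValue, int maxValue) {
        this.averageValue = averageValue;
        this.minValue = minValue;
        this.maxValue = maxValue;
    }

    // so that a failing comparison prints "DataModel[averageValue=3.275, minValue=1, maxValue=100]"
    // instead of "my.company.DataModel@4af66537"
    @Override
    public String toString() {
        return "DataModel[averageValue=" + averageValue
                + ", minValue=" + minValue
                + ", maxValue=" + maxValue + "]";
    }
}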

Comparing objects shifts the scenario of the test into the technical sphere. The story behind
the test is gone: all we know, here, is that at the end, two objects should match. I would
suggest putting in a few assertions (or, even better, writing some custom ones), rather than
comparing whole objects.

Chapter 6. Things to Remember
So, by now you'll have read through all the examples of "bad tests" and probably learned a thing or two
along the way. I would encourage you to scan the book once again, in search of tips and warnings (the
ones clearly visible with icons on the left). I put these in because I believe they are worth remembering.

Below, you'll find some more pieces of advice, ones which were probably not made fully explicit in
the book: even so, they are still very, very important.

Doing things the right way takes more time than doing them any old way. But this extra effort usually
pays off in the long term.

Hard to write a test? Maybe the production code is of low quality? Or maybe you should consider
writing tests before the production code?

Writing code is team work. Don't forget to discuss these (new) ways of writing tests with your
colleagues.

It pays to (at least) scan the documentation of the tools you use. There's a good chance you'll find
some hidden gems there.

Be pragmatic, and let experience be your guide. It doesn't matter if someone - even the author of a
book - advises you to employ some technique or other. If it doesn't work for you, simply don't do it!

The last thing I would like to say is that I hope you enjoy writing the very best tests possible! :)


Practical Unit Testing


If you feel like you'd like to know more about writing unit tests, mocking, assertions and all those magic
spells that'll make your code bulletproof, take a look at my previous books:

"Practical Unit Testing with TestNG and Mockito" (2012)

"Practical Unit Testing with JUnit and Mockito" (2013)

Reading these books will enable you to learn to write high-quality unit tests using the finest
technologies of the Java world!

Please visit the books' website http://practicalunittesting.com for more information.


Help!
The idea of this book is to offer some examples of best practice when writing tests, using real code
only. Unfortunately, it is not so easy to find interesting cases, cases that would be worth mentioning
in the book. My colleagues at work have not furnished me with enough examples. :) I have tried to find
interesting cases by browsing open-source projects, but deciding whether a test is good or bad without
knowing the context has turned out to be a far from trivial task. Another obstacle has been the fact that
I've already gathered quite a collection of examples of various sorts, which means I've already got all
the "typical" cases covered, anyway. Right now I'm hunting for rare beasts. And those are, of course,
very hard to find. :(

And this is why I'm asking you for help! Maybe you have encountered some test code which you think
might interest me? A test which took 3 minutes to run before you optimized it? A pseudo-test which
was aimed at bumping up the coverage metrics? A monster, 300 lines long, which tested everything (but
never really worked)? An over-mocked atrocity which tested the validity of the mocking framework
instead of testing the production code? A test which your team had to update every day because it was
so fragile? Oh, I would just love to learn about them!

So please, pretty please: if you happen to have a piece of test code which you think might interest me,
just send me an e-mail at kaczanowski.tomek@gmail.com. Thank you!
