5 Properties of Good Tests
==========================

- Correctness
- Readability
- Completeness
- Demonstrability
- Resilience


Correctness
- Write correct tests

Readability
- For yourself and others
- Can also serve as documentation

Completeness
- Test the whole of my API
- ... and only my API

Demonstrability
- Show use cases

Resilience
- Only fails when we want it to
- Never fails otherwise



Correctness
===========


Tests that depend upon known bugs


int square( int x)
{
  // TODO: Implement
  return 0;
}

TEST (SquareTest, MathTests)
{
  EXPECT_EQ( 0, square(2));
  EXPECT_EQ( 0, square(3));
  EXPECT_EQ( 0, square(5));
}

- Caught in code review! The corrected version:

TEST (SquareTest, MathTests)
{
  EXPECT_EQ(  4, square(2));
  EXPECT_EQ(  9, square(3));
  EXPECT_EQ( 25, square(5));
}

If a test fails, we must decide whom to blame:
- the writer of the code, or
- the writer of the tests?


- Real-world example: a log library issue



Tests that don't execute real scenarios


class MockWorld : public World
{
  // assume world is flat
  bool IsFlat() override { return true; }
};

TEST (WorldTests, Flat)
{
  MockWorld world;
  EXPECT_TRUE( world.Populate() );
  EXPECT_TRUE( world.IsFlat() );
}

- This tests the mock, not the real code
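
A better shape: inject the mock into the real code under test. A minimal
sketch, assuming a hypothetical Navigator class that depends on World:


class Navigator
{
public:
  explicit Navigator( World *world ) : world_(world) { }
  bool PlanRoute() { return world_->IsFlat(); }  // simplified for the sketch
private:
  World *world_;
};

TEST (NavigatorTests, PlansRouteOnFlatWorld)
{
  MockWorld world;               // controlled environment
  Navigator navigator(&world);   // the real code under test
  EXPECT_TRUE( navigator.PlanRoute() );
}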


Readability
===========


Tests should be obvious to the future reader (including yourself)

Mistakes:

- Too much boilerplate and other distractions

TEST (BigSystemTest, CallIsUnimplemented)
{
  TestStorageSystem storage;
  auto test_data = GetTestFileMap();
  storage.MapFilesystem(test_data);
  BigSystem system;
  ASSERT_OK( system.Initialize(5));
  ThreadPool pool(10);
  pool.StartThreads();

  storage.SetThreads(pool);
  system.SetStorage(storage);

  ASSERT_TRUE(system.IsRunning());

  EXPECT_TRUE( IsUnimplemented(system.Status()) );    // actual test
}
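
One hedged fix is to push the irrelevant setup behind a well-named helper
(MakeRunningTestSystem() is hypothetical):


TEST (BigSystemTest, CallIsUnimplemented)
{
  BigSystem system = MakeRunningTestSystem();  // storage, threads, init: all hidden
  EXPECT_TRUE( IsUnimplemented(system.Status()) );
}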


- Not enough context in the test (the opposite of the previous mistake):
  hiding important details

TEST (BigSystemTest, ReadMagicBytes)
{
  BigSystem system = InitializeTestSystemAndTestData();
  EXPECT_EQ( 42, system.PrivateKey() );
}

  At least comment it!
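
  For example, a hedged sketch of such a comment (the story behind the
  value 42 is assumed):


TEST (BigSystemTest, ReadMagicBytes)
{
  // InitializeTestSystemAndTestData() installs test data whose magic
  // bytes decode to private key 42: that is what we check here.
  BigSystem system = InitializeTestSystemAndTestData();
  EXPECT_EQ( 42, system.PrivateKey() );
}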


- Don't use advanced test framework features when they are not necessary


class BigSystemTest : public ::testing::Test
{
public:
  BigSystemTest() : filename_("/tmp/test") { }

  void SetUp() override
  {
    ASSERT_OK( file::WriteData(filename_, "Hello world!\n") );
  }
protected:
  BigSystem   system_;
  std::string filename_;
};


TEST_F(BigSystemTest, BasicTest)
{
  EXPECT_TRUE(system_.Initialize());
}


  is supposedly equivalent to this:


TEST_F(BigSystemTest, BasicTest)
{
  BigSystem  system;
  EXPECT_TRUE(system.Initialize());
}


  Is it really equivalent?
  Perhaps the file::WriteData() call in SetUp() has hidden side effects ...



  A test is like a novel:

   - setup
   - action
   - conclusion
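
  A minimal sketch of that shape (assuming <stack> is included):


TEST (StackTest, PushThenPop)
{
  std::stack<int> s;          // setup

  s.push(42);                 // action

  ASSERT_EQ( 1, s.size() );   // conclusion
  EXPECT_EQ( 42, s.top() );
}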



Completeness
============


- test for the easy cases:

TEST (FactorialTest, basicTests)
{
  EXPECT_EQ( 1,   Factorial(1) );
  EXPECT_EQ( 120, Factorial(5) );
}



int Factorial( int n)
{
  if ( 1 == n )  return 1;
  if ( 5 == n )  return 120;
  return -1;
}



- test for the edge cases


TEST (FactorialTest, edgeCases)
{
  EXPECT_EQ( 1,   Factorial(1) );
  EXPECT_EQ( 120, Factorial(5) );
  EXPECT_EQ( 1,   Factorial(0) );
  EXPECT_EQ( 479001600, Factorial(12) );
  EXPECT_EQ( std::numeric_limits<int>::max(), Factorial(13) );   // overflow
  EXPECT_EQ( 1,   Factorial(0) );                                // check: no internal state
  EXPECT_EQ( std::numeric_limits<int>::max(), Factorial(-10) );  // overflow
}
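
  One implementation that satisfies all of the above. Saturating to
  std::numeric_limits<int>::max() on overflow and on negative input is an
  assumption; the API could equally signal the error another way:


int Factorial( int n)
{
  if ( n < 0 )  return std::numeric_limits<int>::max();   // assumed error convention
  int result = 1;
  for ( int i = 2; i <= n; ++i )
  {
    if ( result > std::numeric_limits<int>::max() / i )
      return std::numeric_limits<int>::max();             // saturate on overflow
    result *= i;
  }
  return result;
}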


  Perhaps write the tests first, so they are not driven by the implementation...


- test only what we are responsible for


TEST (FilterTest, WithVector)
{
  vector<int> v;   // make sure vector is working
  v.push_back(1);
  EXPECT_EQ( 1, v.size() );
  v.clear();
  EXPECT_EQ( 0, v.size() );
  EXPECT_TRUE( v.empty() );

  // Now test our stuff
  v = Filter( { 1,2,3,4,5}, [](int x) { return 0 == x % 2; } );
  EXPECT_THAT( v, ElementsAre(2,4) );
}


  Where to draw the line? I should test only what I implemented.



TEST (FilterTest, WithVector)
{
  vector<int> v;

  v = Filter( { 1,2,3,4,5}, [](int x) { return 0 == x % 2; } );
  EXPECT_THAT( v, ElementsAre(2,4) );
}



Demonstrability
===============


Clients will learn the system via tests.
Tests should serve as a demonstration of how the API works.


- Using private API is bad.
- Using friend + test-only methods is bad.
- Bad usage in unit tests suggests a bad API.


class Foo
{
  friend class FooTest;
public:
  bool Setup();
private:
  bool ShortcutSetupForTesting();
};

TEST (FooTest, Setup)
{
  Foo foo;
  EXPECT_TRUE( foo.ShortcutSetupForTesting() );
}


  No user can call ShortcutSetupForTesting().


class Foo
{
public:
  bool Setup();
};

TEST (FooTest, Setup)
{
  Foo foo;
  EXPECT_TRUE( foo.Setup() );
}



Resilience
==========


- Write tests that depend only on published API guarantees!

Mistakes:
  - Flaky tests (re-run gets different results)
  - Brittle tests (depending on too many assumptions, implementation details)
  - Tests depending on execution order
  - Mocks depending upon underlying APIs
  - Non-hermetic tests


Flaky test
----------


TEST ( UpdaterTest, RunsFast)
{
  Updater updater;
  updater.UpdateAsync();
  SleepFor(Seconds(.5));  // should be enough
  EXPECT_TRUE( updater.Updated() );
}


 - Real-world example: a RotatingLogFileTest that fails when run at midnight ...
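
A more resilient sketch waits for an explicit completion signal instead of
sleeping. The callback overload of UpdateAsync() is an assumption; <future>
is required:


TEST ( UpdaterTest, UpdatesEventually)
{
  Updater updater;
  std::promise<void> done;
  updater.UpdateAsync( [&done] { done.set_value(); } );  // assumed callback API
  done.get_future().wait();                              // no arbitrary sleep
  EXPECT_TRUE( updater.Updated() );
}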



Brittle test
------------

  Tests that can fail for changes unrelated to the code under test.
  The change might be in our own code, but not in **this** part.


TEST ( Tags, ContentsAreCorrect)
{
  TagSet tags = {5,8,10};  // unordered set

  EXPECT_THAT( tags, ElementsAre(8,5,10) );
}



Correct:


TEST ( Tags, ContentsAreCorrect)
{
  TagSet tags = {5,8,10};  // unordered set

  EXPECT_THAT( tags, UnorderedElementsAre(5,8,10) );
}



What about this one?


TEST ( MyTest, LogWasCalled)
{
  StartLogCapture();
  EXPECT_TRUE( Frobber::start() );
  EXPECT_THAT( Logs(), Contains("file.cc:421: OpenedFile frobber.config") );
}


This can fail for changes unrelated to the code under test.
Red flag: "Run your code twice".

Perhaps use a regular expression here, and set boundaries on where my log should appear.
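
A sketch using gMock's ContainsRegex string matcher, which tolerates file and
line-number churn (the exact log format is an assumption):


TEST ( MyTest, LogWasCalled)
{
  StartLogCapture();
  EXPECT_TRUE( Frobber::start() );
  EXPECT_THAT( Logs(),
               Contains(ContainsRegex("OpenedFile .*frobber\\.config")) );
}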



Ordering
--------

  Tests fail if they aren't run all together or in a particular order.


static int i = 0;

TEST ( Foo, First)
{
  ASSERT_EQ( 0, i);
  ++i;
}

TEST ( Foo, Second)
{
  ASSERT_EQ( 1, i);
  ++i;
}


Many test environments run tests in parallel, or somebody else touches the state.
Global state is a bad idea: it causes hidden dependencies.
The same goes for files, etc ...
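
A minimal sketch of the fix: give every test its own state via a fixture, so
the order no longer matters:


class FooTest : public ::testing::Test
{
protected:
  int i_ = 0;   // fresh instance for every test
};

TEST_F(FooTest, First)
{
  ASSERT_EQ( 0, i_);
  ++i_;
}

TEST_F(FooTest, Second)
{
  ASSERT_EQ( 0, i_);   // starts from a clean state, no hidden dependency
  ++i_;
}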


Hermetic
--------

  Test fails if anyone else runs the same test at the same time.


TEST (Foo, StorageTest)
{
  StorageServer *server = GetStorageServerHandle();
  auto val = rand();

  server->Store("testkey", val);
  EXPECT_EQ( val, server->Load("testkey") );
}


  - std::this_thread::get_id() can help make shared keys unique
    (see the sketch below)
  - putenv() mutates process-global state: another hermeticity hazard
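
A hedged sketch of a fix: make the key unique per thread so parallel runs
cannot collide (assumes <sstream> and <thread>; the key scheme is illustrative):


TEST (Foo, HermeticStorageTest)
{
  StorageServer *server = GetStorageServerHandle();
  std::ostringstream key;
  key << "testkey-" << std::this_thread::get_id();   // unique per thread
  auto val = rand();

  server->Store(key.str(), val);
  EXPECT_EQ( val, server->Load(key.str()) );
}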



Deep dependency
---------------


class File
{
public:
  ...
  virtual bool Stat( Stat *stat);
  virtual bool StatWithOptions( Stat *stat, Options options)
  {
    return Stat(stat);  // ignore options
  }
};

TEST (MyTest, FSUsage)
{
  ...
  EXPECT_CALL( file, Stat(_)).Times(1);
  Frobber::Stat();
}


  The test will fail when StatWithOptions() is implemented directly, no longer
  delegating to Stat().

  People end up depending on implicit interfaces (call order, etc...)
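
A hedged alternative: assert on observable behaviour instead of the exact
internal call path. FakeFile and Frobber's interface here are illustrative:


TEST (MyTest, FSUsage)
{
  FakeFile file;                        // a fake with simple real behaviour
  file.SetStatResult( /*size=*/ 42 );   // hypothetical setter
  Frobber frobber(&file);
  EXPECT_EQ( 42, frobber.FileSize() );  // survives a StatWithOptions() refactor
}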



Goals:
======


- Correctness: write tests that test what you wanted to test
- Readability: write readable tests, use code review
- Completeness: test all edge cases, but only what you are responsible for
- Demonstrability: show how to use the API
- Resilience: hermetic tests that only break when an unacceptable behaviour change happens



Goals are not RULES.



Who writes the tests? The implementer, or somebody else?

What to do when you have large tests with complex state?

How to test async operations?

 - Separate the goals:
    - Does it give the correct results?
    - Is it fast?
   Don't try to verify both in one test (see the sketch below).

 - E.g. sanitizers etc. add overhead, skewing any timing assertions.
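
A sketch of that separation; WaitUntilDone() and the 500 ms budget are
assumptions:


TEST ( UpdaterTest, GivesCorrectResult)   // correctness only, no timing
{
  Updater updater;
  updater.UpdateAsync();
  updater.WaitUntilDone();                // assumed blocking wait
  EXPECT_TRUE( updater.Updated() );
}

TEST ( UpdaterTest, RunsFast)             // timing only, kept separate
{
  Updater updater;
  auto start = std::chrono::steady_clock::now();
  updater.UpdateAsync();
  updater.WaitUntilDone();
  auto elapsed = std::chrono::steady_clock::now() - start;
  EXPECT_LT( elapsed, std::chrono::milliseconds(500) );
}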