One of Go's core principles is to keep things simple and, ideally, provide only one way of doing something. Python is another language famous for the same mantra.

Writing unit tests in Go is easy, but the built-in tooling is extremely light on features. At least for me, it didn't take long before I got fed up with writing boilerplate code and went looking for a better testing package.

This article is specifically about writing data-driven, or parameterised, tests: tests that run the same testing logic over a table of inputs and expected outputs.

The Traditional Way

The Go way is simple and used throughout the standard library; here is an example from the fmt package:

var flagtests = []struct {
	in  string
	out string
}{
	{"%a", "[%a]"},
	{"%-a", "[%-a]"},
	{"%+a", "[%+a]"},
	{"%#a", "[%#a]"},
	{"% a", "[% a]"},
	{"%0a", "[%0a]"},
	{"%1.2a", "[%1.2a]"},
	{"%-1.2a", "[%-1.2a]"},
	{"%+1.2a", "[%+1.2a]"},
	{"%-+1.2a", "[%+-1.2a]"},
	{"%-+1.2abc", "[%+-1.2a]bc"},
	{"%-1.2abc", "[%-1.2a]bc"},
}

func TestFlagParser(t *testing.T) {
	var flagprinter flagPrinter
	for _, tt := range flagtests {
		s := Sprintf(tt.in, &flagprinter)
		if s != tt.out {
			t.Errorf("Sprintf(%q, &flagprinter) => %q, want %q", tt.in, s, tt.out)
		}
	}
}

Pros:

  • Requires no extra libraries.

Cons:

  • CPU-bound tests run serially.
  • Individual table entries cannot be filtered from the command line.

Named Tests in Go 1.7+

Go 1.7 introduced a feature often called "named tests", although what it really gives you is nested subtests, which work great for self-naming (and filtering) the individual cases of a table. Here is an example:

func TestFoo(t *testing.T) {
	// <setup code>
	t.Run("A=1", func(t *testing.T) { ... })
	t.Run("A=2", func(t *testing.T) { ... })
	t.Run("B=1", func(t *testing.T) { ... })
	// <tear-down code>
}

Tweaking the previous example to use subtests:

func TestFlagParser(t *testing.T) {
	for _, tt := range flagtests {
		t.Run(tt.in, func(t *testing.T) {
			var flagprinter flagPrinter
			s := Sprintf(tt.in, &flagprinter)
			if s != tt.out {
				t.Errorf("Sprintf(%q, &flagprinter) => %q, want %q",
					tt.in, s, tt.out)
			}
		})
	}
}
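
One thing worth spelling out is that subtests only run concurrently when each one opts in with t.Parallel(). Below is a rough sketch (the TestFlagParserParallel name is my own) of the same loop with that opt-in, plus the go test invocation you would use to filter a single entry; note the tt := tt copy needed to capture the loop variable in Go versions before 1.22:

// A sketch of the same table run as parallel subtests. Each subtest calls
// t.Parallel() so CPU-bound cases can run concurrently.
func TestFlagParserParallel(t *testing.T) {
	for _, tt := range flagtests {
		tt := tt // capture the loop variable (needed before Go 1.22)
		t.Run(tt.in, func(t *testing.T) {
			t.Parallel()

			var flagprinter flagPrinter
			s := Sprintf(tt.in, &flagprinter)
			if s != tt.out {
				t.Errorf("Sprintf(%q, &flagprinter) => %q, want %q",
					tt.in, s, tt.out)
			}
		})
	}
}

// Filtering a single named subtest from the command line:
//   go test -run 'TestFlagParserParallel/%0a'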

Pros:

  • Requires no extra libraries.
  • CPU-bound tests can run in parallel (each subtest must opt in with t.Parallel()).
  • Filtering of individual tests.

Cons:

  • Requires Go 1.7 or higher.
  • It can be more difficult, or not always possible, to coordinate tests that run concurrently.

Ginkgo Table-Driven Tests

Ginkgo and Gomega are my personal favourite testing frameworks. The more I use them together the more impressed I become.

Here is the same test written in Ginkgo (some of the entries were removed for brevity):

import (
	. "github.com/onsi/ginkgo"
	. "github.com/onsi/ginkgo/extensions/table"
	. "github.com/onsi/gomega"
)

var _ = Describe("Sprintf", func() {
	DescribeTable("FlagParser",
		func(in string, out string) {
			var flagprinter flagPrinter
			s := Sprintf(in, &flagprinter)
			Expect(s).To(Equal(out), "Expected "+in)
		},
		Entry("%a", "%a", "[%a]"),
		Entry("%-a", "%-a", "[%-a]"),
		Entry("%+a", "%+a", "[%+a]"),
	)
})
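
Ginkgo specs still run under go test; they just need a small bootstrap in the package. Here is a minimal sketch (the package, function, and suite names are my own choices):

package fmt_test

import (
	"testing"

	. "github.com/onsi/ginkgo"
	. "github.com/onsi/gomega"
)

// TestSprintf hands control to Ginkgo so the DescribeTable above is picked up
// by the standard go test tooling.
func TestSprintf(t *testing.T) {
	RegisterFailHandler(Fail)
	RunSpecs(t, "Sprintf Suite")
}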

Pros:

  • Really rich feature set built specifically for data-driven tests.
  • CPU-bound tests can run in parallel (e.g. via ginkgo -p).
  • Filtering of individual tests.
  • Works alongside native tests and other frameworks.

Cons:

  • Requires an external dependency.

Matrix-style Tests

This is related to the rest of this article but takes a slightly different direction. By matrix-style tests I mean data-driven tests that have to cover every combination of every dimension of data. For example, if I wanted to test all possible inputs to a logical OR operation (true and false for each input), that would be 2 x 2 = 4 tests.

Once you get to three or more dimensions it becomes difficult to verify that you really have all 8, 16, etc. possible combinations. I hit this exact issue and wanted the data points validated as part of the test itself, so that if the test was ever refactored, or new values were added, it would still guarantee that every input combination was covered.

// verifyMatrixTestsR recursively expands the provided dimensions into every
// possible combination of values and checks that a matching test row exists.
func verifyMatrixTestsR(t *testing.T, tests [][]interface{},
	values []interface{}, dimensions [][]interface{}) {
	if len(dimensions) >= 1 {
		for _, dimension := range dimensions[0] {
			verifyMatrixTestsR(t, tests, append(values, dimension),
				dimensions[1:])
		}

		return
	}

	found := false
	for _, test := range tests {
		match := true

		for i, value := range values {
			if value != test[i] {
				match = false
				break
			}
		}

		if match {
			found = true
			break
		}
	}

	if !found {
		t.Errorf("Missing test for %v", values)
	}
}

// verifyMatrixTests checks that tests contains a row for every combination of
// the provided dimension values.
func verifyMatrixTests(t *testing.T, tests [][]interface{},
	dimensions ...[]interface{}) {
	verifyMatrixTestsR(t, tests, []interface{}{}, dimensions)
}

func TestOr(t *testing.T) {
	// Each test is written as:
	//
	//   { dimension 1, dimension 2, ...dimension N, ...extra values }
	//
	tests := [][]interface{}{
		{true, false, true}, // e.g. true OR false = true
		{true, true, true},
		{false, false, false},
		{false, true, true},
	}

	// verifyMatrixTests validates that we have tests for every combination
	// of the provided dimensions. A call takes the form of:
	//
	//   verifyMatrixTests(t, tests, dimension 1 values, dimension 2 values, ...)
	//
	verifyMatrixTests(t, tests,
		[]interface{}{true, false}, []interface{}{true, false})

	// Now we can handle the test data however we want.
	// Unfortunately the interface{} values have to be cast. I'm sure there
	// is a neater way to handle this.
	for _, test := range tests {
		if (test[0].(bool) || test[1].(bool)) != test[2].(bool) {
			t.Errorf("%v", test)
		}
	}
}
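
As a rough illustration of how this scales, a hypothetical three-input OR would need 2 x 2 x 2 = 8 rows in tests, and the validation call simply grows by one dimension:

// Hypothetical three-input OR: verifyMatrixTests now receives three dimension
// slices, so it will demand all 2 x 2 x 2 = 8 combinations in the tests table.
verifyMatrixTests(t, tests,
	[]interface{}{true, false},
	[]interface{}{true, false},
	[]interface{}{true, false})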

Thanks to Christoph Berger for suggesting gotests, another great tool, which generates tests from existing source code rather than requiring you to derive the data sets manually.

Do you know of any other great testing packages? Let me know!