What are unit tests
How to write unit tests
How to test your Go program with the go tool
Unit tests
Test case
Test function
Assertion
Test-Driven Development (TDD)
Code coverage
Here is a function :
// compute and return the total price of a hotel booking
// all amounts in input must be multiplied by 100. Currency is Dollar
// the amount returned must be divided by 100. (ex: 10132 => 101.32 $)
func totalPrice(nights, rate, cityTax uint) uint {
    return nights*rate + cityTax
}
This function computes the price of a booking. It seems right, no? How can we be sure that the amount returned is correct? We can run it with some data as arguments and check its result :
package main

import "fmt"

func main() {
    price := totalPrice(3, 10000, 132)
    fmt.Println(price)
}
The program outputs 30132, which is equivalent to 301.32 $ (we divide the result by 100 to get an amount in dollars).
Is it correct? Let’s compute it by hand. The total price of a room is the number of nights multiplied by the sum of the nightly rate and the city tax : 3\times(100+1.32)=3\times101.32=303.96. Have you spotted the bug in the function?
The bug is in this statement :

return nights*rate + cityTax

It should be replaced by this one :

return nights * (rate + cityTax)
This way, the function returns the right answer: 3\times(10000+132)=30396 cents, i.e., 303.96 dollars. What if our program checked the result directly?
// unit-test/intro/main.go
package main

import "fmt"

//...

func main() {
    price := totalPrice(3, 10000, 132)
    if price == 30396 {
        fmt.Println("function works")
    } else {
        fmt.Println("function is buggy")
    }
}
The program itself will check if the function implementation is correct. No surprise: it outputs "function works"! This program is a unit test!
If we take the definition from the IEEE (Institute of Electrical and Electronics Engineers), unit testing consists of testing individual units of a system. In other words, we check that individual parts of the system work; the system as a whole is not tested.
Unit tests are created and run by the developer of the code. With this tool, we can check that our methods and functions run as expected. Unit tests focus exclusively on checking that those small programming units work.
Some developers will argue that unit tests are useless. They often say that when they develop their code, they are constantly testing that the system works. For instance, a web developer who has to create a website will often have two screens: one with the source code and one with the running program. When they want to implement something new, they start with the code and then check that it works.
This process is purely manual and depends on the developer’s experience with the system. A newly hired developer might not detect errors and breaking changes. What if we could run those tests automatically? Imagine that you could run them each time you build your Go program! Or even better, each time you make a change to a package!
\n\nA single unit test is called a test case. A group of test cases is called a test set (or test suite).
\nTo better understand what a test case is, let’s take an example. Imagine that you have developed a function to capitalize a string. We will build a test case to check it.
\nOur test case will be composed of :
\nA test input. For instance : “coffee”
An expected output : In our example, it will be the string “COFFEE”
The actual output of our function under test
A way to assert that the actual value returned by our function is the one that is expected. We could use Go’s string comparison features to check that the two strings are equal, or use a Go package to do it. This part of the unit test is called the assertion (a sketch follows this list).
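To make this concrete, here is a minimal sketch of such a test case. It assumes a hypothetical package caser with a function Capitalize that upper-cases its input (the names are illustrative, not from the chapter’s code) :

// capitalize_test.go (hypothetical example)
package caser

import (
    "strings"
    "testing"
)

// Capitalize is the hypothetical function under test.
func Capitalize(s string) string {
    return strings.ToUpper(s)
}

func TestCapitalize(t *testing.T) {
    input := "coffee"           // the test input
    expected := "COFFEE"        // the expected output
    actual := Capitalize(input) // the actual output of the function under test
    // the assertion: compare the actual value with the expected one
    if actual != expected {
        t.Errorf("Capitalize(%q) = %q, expected %q", input, actual, expected)
    }
}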
In this section, I will go through some reasons to write unit tests, extracted from an IEEE survey about unit testing.
Unit tests check that functions and methods work as expected. Without unit tests, developers test their functionality during the development phase. Those tests are not reproducible. After the development of the feature, those manual tests are no longer run.
If developers write their tests into the project sources, they can run those tests later. They protect the project against nasty regressions (when the development of a new feature breaks something in existing code).
The presence of unit tests can be a customer requirement. It seems to be pretty rare, but some specifications include test coverage requirements.
A better focus on API design is also generally observed when developers write unit tests. You have to call the function you are developing yourself; as a consequence, you spot possible improvements. This focus is even stronger if you use the TDD method.
Unit tests also serve as code documentation. Users that want to know how to call a specific function can take a look at the unit test to get their answer immediately.
Several languages put tests into a specific directory, often called tests. In Go, unit tests live next to the code they test; tests are part of the package under test.
Here is an abbreviated list of files from the directory src/strings (the exact set of files depends on your Go version) :
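$ ls /usr/local/go/src/strings
builder.go
builder_test.go
compare.go
compare_test.go
reader.go
reader_test.go
replace.go
replace_test.go
search.go
search_test.go
strings.go
strings_test.go
...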
\nYou can see that there is a naming pattern: there is a file named xxx_test.go for each file named xxx.go.
When you build your program, files named xxx_test.go are ignored by the compiler.
\n\nLet’s write our very first unit test together. We will test the package foo :
// unit-test/basic/foo/foo.go
package foo

import "fmt"

func Foo() string {
    return fmt.Sprintf("Foo")
}
Let’s create a file foo_test.go in the same folder as foo.go :
// unit-test/basic/foo/foo_test.go
package foo

import "testing"

func TestFoo(t *testing.T) {

}
You can see that this source file is part of the foo package. We have imported the package testing from the standard library (which we will use later).
\nA single function is defined: TestFoo. This function takes as input a pointer to testing.T (*testing.T).
A test function should be named following this convention :
The first part of the test function name is the word Test. It is fixed; it’s always "Test".
The second part is often the name of the function you want to test. It must start with a capital letter.
Here is an example for the function Foo. The function takes no arguments and always returns the string "Foo". If we want to unit test it, we will assert (verify) that the return value of the function is "Foo" :
// unit-test/basic/foo/foo_test.go
package foo

import "testing"

func TestFoo(t *testing.T) {
    expected := "Foo"
    actual := Foo()
    if expected != actual {
        t.Errorf("Expected %s do not match actual %s", expected, actual)
    }
}
We first define a variable expected that holds our expected result. Then we define the variable actual that will hold the actual return value of the function Foo from the package foo.
Please remember those two terms : actual and expected. They are classic variable names in the context of testing.
\nThe expected variable is the result as expected by the user.
The actual variable holds the execution result of the unit of code we want to test.
Then the test continues with an assertion. We test the equality between the actual value and the expected one. If they are not equal, we make the test fail by calling the t.Errorf method (defined on the struct type T from the package testing) :
t.Errorf("Expected %s do not match actual %s", expected, actual)
There is no method defined on the type T to signal test success. When a test function returns without calling a failure method, the test is interpreted as a success.
\n\nTo signal a failure, you can use the following methods :
Error : logs and marks the test function as failed. Execution continues.
Errorf : logs (with the specified format) and marks the test function as failed. Execution continues.
Fail : marks the test function as failed. Execution continues.
FailNow : marks the test as failed and stops the execution of the current test function (if you have other assertions after it, they will not be evaluated).
You also have the methods Fatal and Fatalf, which log and then call FailNow internally, as illustrated in the sketch below.
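As an illustration, here is a short sketch (not code from the book’s example) showing how Errorf and Fatalf behave differently inside the same test function of the foo package :

// a sketch: Errorf lets the test continue, Fatalf stops it
package foo

import "testing"

func TestFailureMethods(t *testing.T) {
    if Foo() != "Foo" {
        // Errorf marks the test as failed but the function keeps running,
        // so the checks below are still executed.
        t.Errorf("unexpected return value: %s", Foo())
    }
    if len(Foo()) == 0 {
        // Fatalf marks the test as failed and stops this test function
        // immediately: nothing below this line would run.
        t.Fatalf("Foo returned an empty string")
    }
    // this assertion is only reached if Fatalf was not called
    if Foo() == "Bar" {
        t.Error("Foo should never return Bar")
    }
}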
Sometimes you need to store files that support your unit tests: for instance, a sample configuration file, or a model CSV file (for an application that generates files)...
Store those files in the testdata folder.
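For instance, a test could load a fixture from that folder (a minimal sketch; the file name config.json is just an example, not a file from the book’s repository) :

package foo

import (
    "io/ioutil"
    "testing"
)

// a sketch of a test that loads a fixture stored in the testdata folder
func TestLoadConfig(t *testing.T) {
    // "testdata/config.json" is a hypothetical fixture file
    data, err := ioutil.ReadFile("testdata/config.json")
    if err != nil {
        t.Fatalf("impossible to read fixture: %v", err)
    }
    if len(data) == 0 {
        t.Error("fixture file is empty")
    }
}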
The Go standard library gives you all the necessary tools to build your unit tests without external libraries. Despite this fact, it is common to see projects that use external “assertion libraries”. An assertion library exposes many functions and methods to build assertions. One very popular module is github.com/stretchr/testify.
\nTo add it to your project, type the following command in your terminal :
\n$ go get github.com/stretchr/testify
As an example, this is the previous unit test written with the help of the package assert from the module github.com/stretchr/testify :
// unit-test/assert-lib/foo/foo_test.go
package foo

import (
    "testing"

    "github.com/stretchr/testify/assert"
)

func TestFoo(t *testing.T) {
    assert.Equal(t, "Foo", Foo(), "they should be equal")
}
Other libraries exist; a quick search on GitHub can give you some additional references: https://github.com/search?l=Go&q=assertion+library&type=Repositories
\n\nTo run your unit tests, you have to use the command-line interface. Open a terminal and cd to your project directory :
\n$ cd go/src/gitlab.com/loir402/foo
\nThen run the following command :
\n$ go test
\nThe following output result is displayed in the terminal :
PASS
ok gitlab.com/loir402/foo 0.005s
This command will run all the unit tests of the package located in the current directory. For instance, to run the unit tests of the standard library package path, change into its directory first :
$ cd /usr/local/go/src/path
$ go test
\n\nYou can run all the unit tests of your current project by launching the command :
\n$ go test ./...
What is the output of a failed unit test? Here is an example. We have modified our unit test to make it fail: instead of the string "Foo", we are now expecting "Bar". Consequently, the test fails.
$ go test
--- FAIL: TestFoo (0.00s)
    foo_test.go:9: Expected Bar do not match actual Foo
FAIL
exit status 1
FAIL gitlab.com/loir402/foo 0.005s
You can note that the test result is more verbose in the case of a failure. It indicates which test case fails by printing the test case name (TestFoo). It also gives you the line of the test that fails (foo_test.go:9). Then you can see that the system prints the error message that we told it to print in the case of a failure.
The program exits with a status code of 1, which allows continuous integration tools to detect the failure automatically.
An exit code different from 0 signals an error.
An exit code of 0 signals NO error.
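For instance, you can print the exit code of the last command in your shell (a quick sketch reusing the failing test from the previous example) :

$ go test gitlab.com/loir402/foo
FAIL gitlab.com/loir402/foo 0.005s
$ echo $?
1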
When you run the go test command, the go tool will also run go vet automatically on the tested packages (the package sources and the test files).
The go vet command is part of the Go toolchain. It performs static analysis of your source code to detect suspicious constructs and potential errors.
This command has a whole list of checks; when you run go test, only a small subset is launched :
a check that detects bad usages of the package sync/atomic
a check that verifies the usage of boolean conditions
a check that verifies that the build tags you can specify when running go test are correctly formed
a check that ensures you never compare a function with nil
Running a set of go vet checks automatically before launching unit tests is a brilliant idea: it can make you discover mistakes before they cause harm to your program!
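For instance, the nil function comparison below is the kind of mistake caught by those checks when you run go test (a sketch, not code from the book’s examples) :

package foo

import "fmt"

func helper() string {
    return "help"
}

func checkHelper() {
    // go vet reports this comparison: a declared function is never nil,
    // so this condition is always false.
    if helper == nil {
        fmt.Println("helper is not defined")
    }
}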
To compile the tests without running them, you can type the following command :
\n$ go test -c
\nThis will create a test binary called “packageName.test”.
\n\nIn the previous example, we tested our function against one expected result. In a real situation, you might want to test your function with several test cases.
\nOne approach could be to build a test function like that :
// unit-test/table-test/price/price_test.go
package price

import "testing"

func Test_totalPrice1(t *testing.T) {
    // test case 1
    expected := uint(0)
    actual := totalPrice(0, 150, 12)
    if expected != actual {
        t.Errorf("Expected %d does not match actual %d", expected, actual)
    }

    // test case 2
    expected = uint(112)
    actual = totalPrice(1, 100, 12)
    if expected != actual {
        t.Errorf("Expected %d does not match actual %d", expected, actual)
    }

    // test case 3
    expected = uint(224)
    actual = totalPrice(2, 100, 12)
    if expected != actual {
        t.Errorf("Expected %d does not match actual %d", expected, actual)
    }
}
We have 3 test cases; each test case follows the previous one. This is a good approach; it works as expected. However, the table test approach can be more convenient :
// unit-test/table-test/price/price_test.go
package price

import "testing"

func Test_totalPrice(t *testing.T) {
    type parameters struct {
        nights  uint
        rate    uint
        cityTax uint
    }
    type testCase struct {
        name string
        args parameters
        want uint
    }
    tests := []testCase{
        {
            name: "test 0 nights",
            args: parameters{nights: 0, rate: 150, cityTax: 12},
            want: 0,
        },
        {
            name: "test 1 nights",
            args: parameters{nights: 1, rate: 100, cityTax: 12},
            want: 112,
        },
        {
            name: "test 2 nights",
            args: parameters{nights: 2, rate: 100, cityTax: 12},
            want: 224,
        },
    }
    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            if got := totalPrice(tt.args.nights, tt.args.rate, tt.args.cityTax); got != tt.want {
                t.Errorf("totalPrice() = %v, want %v", got, tt.want)
            }
        })
    }
}
We create a struct type named parameters. Each field of that struct corresponds to a parameter of the function under test.
Then we create a struct type named testCase with three fields:
name : a human-readable name for the test case
args : the parameters to give to the function under test
want : the expected value returned by the function
A slice named tests containing elements of type testCase is created. This is where we manually define each test case.
Then, with a for loop, we iterate over the elements of the slice tests.
At each iteration, we call the method t.Run with two parameters :
the test name tt.name
an anonymous function that contains the test to run (its signature is similar to a standard test function)
Inside that function, we compare what we got (the actual value) to what we expect.
Here is the run output of this test (successful) :
=== RUN Test_totalPrice
=== RUN Test_totalPrice/test_0_nights
=== RUN Test_totalPrice/test_1_nights
=== RUN Test_totalPrice/test_2_nights
--- PASS: Test_totalPrice (0.00s)
    --- PASS: Test_totalPrice/test_0_nights (0.00s)
    --- PASS: Test_totalPrice/test_1_nights (0.00s)
    --- PASS: Test_totalPrice/test_2_nights (0.00s)
PASS
The output shows that three subtests were run. It also gives the result of each subtest along with its name.
go test is a command that we can run in two different modes :
The first mode, local directory mode, is triggered when you run the command :
\n$ go test
Here nothing is added, just go test. In this mode, Go will build the package located in the current directory. Not all the unit tests of the project are executed, only the ones defined in the current package. Some IDEs run this command each time you save a source file; that’s a pretty good idea because each time you modify a package file, you can check that its unit tests still pass.
In the second mode, package list mode, you can ask Go to unit test some specific packages or all the packages of the project.
For instance, if your module defines a package named pkgName, you can run the following command :
go test modulePath/pkgName
This command will work from any project directory. It will run the tests of the package pkgName from the module modulePath.
When you are in package list mode, Go caches the results of successful test runs. This mechanism has been developed to avoid testing packages multiple times.
\nTo test this behavior, launch the tests on the package strings :
\n$ go test strings
\nIt will output the following :
\nok strings 4.256s
You can see here that the unit tests took 4.256s to run, which is quite long.
\nTry to launch it again :
$ go test strings
ok strings (cached)
You can see here that the result is instantaneous, and instead of the duration, (cached) is displayed. It means that Go has retrieved the cached version of the test result.
\n\nNote that when you modify a test file or a source file of the package, the test result that has been cached is invalidated, and the test will be effectively run.
\nTo disable caching, you can use the following command flag :
\n$ go test strings -count=1
If you are using environment variables in your source files, Go will cache the test result as long as those environment variables do not change.
Let’s take an example: imagine that you are using the environment variable MYENV inside your test code :
func TestFoo(t *testing.T) {
    env := os.Getenv("MYENV")
    fmt.Println(env)
    //..
}
The first time you execute the test with the environment variable set to "BAR", the test will run :
$ export MYENV=BAR && go test gitlab.com/loir402/foo
ok gitlab.com/loir402/foo 0.005s
\nAt the second run of the same command, Go will retrieve the test result directly from cache :
\nok gitlab.com/loir402/foo (cached)
But if you change the value of the environment variable MYENV, then the test will be executed :
$ export MYENV=CORGE && go test gitlab.com/loir402/foo
ok gitlab.com/loir402/foo 0.005s
The same mechanism is in place when your test opens a file. If you run your test for the first time, Go will cache the result. But if the file has changed, the result is no longer cached, and the test is executed again :
func TestFoo(t *testing.T) {
    d, err := ioutil.ReadFile("testdata/lol.txt")
    if err != nil {
        t.Errorf("impossible to open file")
    }
    fmt.Print(string(d))
    //..
}
Here we open the file testdata/lol.txt. If we run the test for the first time, it is executed and its result is cached.
If we modify the content of testdata/lol.txt and rerun the test, it will be executed because the file’s content has changed, so the test conditions are not the same.
\n\nIn a big project, the number of unit tests can become very large. Running the unit tests can become time-consuming for the team.
To allow your tests to be run in parallel by the go command line, you need to add a call to the Parallel method from the package testing at the beginning of each test.
Let’s take an example:
func TestCorge1(t *testing.T) {
    time.Sleep(300 * time.Millisecond)
}

func TestCorge2(t *testing.T) {
    time.Sleep(300 * time.Millisecond)
}

func TestCorge3(t *testing.T) {
    time.Sleep(300 * time.Millisecond)
}
Here we have 3 unit tests that test nothing; each one just waits for 300 milliseconds. We did not add any assertions on purpose, to keep the source code easy to read.
\nLet’s run those tests
\n$ go test
\nThe test result is the following :
PASS
ok gitlab.com/loir402/corge 0.913s
The tests take 0.913 seconds to run, which is roughly 3\times300 ms.
\nLet’s make them run in parallel :
func TestCorge1(t *testing.T) {
    t.Parallel()
    time.Sleep(300 * time.Millisecond)
}

func TestCorge2(t *testing.T) {
    t.Parallel()
    time.Sleep(300 * time.Millisecond)
}

func TestCorge3(t *testing.T) {
    t.Parallel()
    time.Sleep(300 * time.Millisecond)
}
Here we just added t.Parallel() at the beginning of each test. This simple method call will increase the running speed of our tests :
$ go test
PASS
ok gitlab.com/loir402/corge 0.308s
We have divided the running time by 3! This time saving is precious for the development team, so use this feature when you build your unit tests!
\n\nYou can build a test that will accept command line arguments. Those arguments can be passed to the test executable by using a flag. Let’s take an example of a test that requires command-line arguments.
func TestArgs(t *testing.T) {
    arg1 := os.Args[1]
    if arg1 != "baz" {
        t.Errorf("Expected baz do not match actual %s", arg1)
    }
}
Here we retrieve the second command-line argument. Note that os.Args is a slice of strings ([]string), and the first index (0) is occupied by internal values of the go test command line (the location of the cached test binary).
To pass arguments when the tests run, we can use the flag -args :
\n$ go test gitlab.com/loir402/foo -args bar
\nThe execution result is the following :
--- FAIL: TestArgs (0.00s)
    foo_test.go:24: Expected baz do not match actual bar
FAIL
FAIL gitlab.com/loir402/foo 0.005s
You can add as many arguments as you want with this method. Note that -args is not part of the cacheable flags.
You can pass all the existing build flags to the go test command line. In addition to that, specific testing flags are available.
\nWe intentionally do not cover benchmark-specific flags. We will explain them in a dedicated chapter.
The -cover flag displays the coverage analysis. It’s, in my opinion, the most important flag to know. The coverage data gives you a statistic: the percentage of statements of your code that are covered by a unit test :
$ go test -cover
PASS
coverage: 100.0% of statements
ok gitlab.com/loir402/foo 0.005s
The -covermode flag allows you to choose the method used to compute the coverage percentage (the default is “set”; the other available values are “count” and “atomic”). For more information about the computation methods, see the dedicated section below.
With the -coverpkg flag, you can specify that coverage data will be computed only for a subset of your project’s packages.
The -coverprofile flag writes the coverage data to a file.
With the -failfast flag, when the first test breaks, the remaining tests are not run. This is useful when you want to debug your code and fix problems one by one (as they happen).
The -parallel flag defines the maximum number of tests that can run in parallel. By default, it is set to the value of GOMAXPROCS.
The -timeout flag controls the test timeout, which is set to 10 minutes by default. Consequently, test binaries that run for more than 10 minutes will panic. If your test suite needs more than 10 minutes to execute, you can override that setting with a specific duration (the value of this flag is a string that will be parsed as a time.Duration).
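For instance, to give a long test suite 30 minutes to complete, you could run :

$ go test -timeout 30m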
The -v flag enables verbose mode, which displays the names of the test functions as they are run.
\nHere is an example output of the verbose mode :
=== RUN TestCorge1
=== PAUSE TestCorge1
=== RUN TestCorge2
=== PAUSE TestCorge2
=== RUN TestCorge3
=== PAUSE TestCorge3
=== CONT TestCorge1
=== CONT TestCorge3
=== CONT TestCorge2
--- PASS: TestCorge2 (0.31s)
--- PASS: TestCorge3 (0.31s)
--- PASS: TestCorge1 (0.31s)
PASS
ok gitlab.com/loir402/corge 0.311s
The log is pretty long for three tests because we run them in parallel. Tests that do not run in parallel do not log the PAUSE and CONT steps :
=== RUN TestCorge1
--- PASS: TestCorge1 (0.31s)
=== RUN TestCorge2
--- PASS: TestCorge2 (0.31s)
=== RUN TestCorge3
--- PASS: TestCorge3 (0.31s)
PASS
ok gitlab.com/loir402/corge 0.921s
When you run your tests, go will automatically run go vet for a set of common errors (see the section about go test and go vet above). If you want to deactivate it completely (which I do not recommend), you can set the -vet flag to off. You can also add complementary checks.
The go test command line also defines specific flags to identify performance issues in your code. Those flags are covered in the dedicated chapter “Profiling”.
\n\nIs a project sufficiently tested?
What defines a good test level?
How to measure the testing level of a project?
Code coverage answers those questions. Code coverage is a measure of how much of a project’s code is tested. It is often given as a percentage.
\nThe measure’s definition is not unique, and different code coverage definitions exist.
\nThe go tool can compute this code coverage statistic for you. Three modes (or computation methods) are available.
\nTo output the test coverage of your code, run the following command :
$ go test -cover
PASS
coverage: 66.7% of statements
ok go_book/testCoverage 0.005s
\nYou see that a new line has been added to the test summary. It gives the percentage of code coverage.
We will go through the different methods used to compute this figure in the next sections.
The set mode is the default mode. In the literature, this mode is called “statement coverage” because it counts the percentage of statements executed by tests.
Perfect test coverage is 100%, meaning that all the code statements have been tested.
\nLet’s take an example with the following code :
package testCoverage

func BazBaz(number int) int {
    if number < 10 {
        return number
    } else {
        return number
    }
}
This package defines one single function. Inside this function, a conditional statement discriminates between two cases: input numbers under ten and numbers of ten or above.
\nLet’s write a test :
func TestBazBaz(t *testing.T) {
    expected := 3
    actual := BazBaz(3)
    if actual != expected {
        t.Errorf("actual %d, expected %d", actual, expected)
    }
}
In this unit test, we execute BazBaz with the number 3 as input.
Let’s run the test :
$ go test -cover
PASS
coverage: 66.7% of statements
ok go_book/testCoverage 0.005s
\nWe only covered 66.7% of statements.
To help us understand the computation, Go can generate a coverprofile, which is a file detailing which statements are covered.
\nTo generate this file, you have to use two commands in your terminal :
\n$ go test -coverprofile profile
\nThis first command will generate the profile file :
mode: set
unit-test/coverage/testCoverage.go:3.29,4.17 1 1
unit-test/coverage/testCoverage.go:4.17,6.3 1 1
unit-test/coverage/testCoverage.go:6.8,8.3 1 0
This file details the blocks of code of your application. Each line represents a “block”. At the end of each line, you can see two figures: the number of statements in the block and a count indicating whether the block is covered (1) or not (0) (see figure 2).
This file is not easily readable. From it, you can generate a nice HTML report like the one in figure 1. To do this, type the following command :
\n$ go tool cover -html=profile
It will create an HTML page, store it in a temporary location (not in your project directory), and open it in a browser.
We have a total of three statements, and two are covered: the if statement and the first return. It means that two out of three statements (66.7%) are covered.
\nWe can increase that percentage to 100% by integrating a test of the remaining statement (the else part of our condition) :
func TestBazBaz2(t *testing.T) {
    expected := 25
    actual := BazBaz(25)
    if actual != expected {
        t.Errorf("actual %d, expected %d", actual, expected)
    }
}
\nThis will lead to a coverage of 100%. All the statements of our code are covered.
The count mode is similar to the set mode. With this mode, you can detect whether some parts of the code are covered by more tests than others.
\nFor instance, the function :
func BazBaz(number int) int {
    if number < 10 {
        return number
    } else {
        return number
    }
}
\nis tested by two test cases :
\nOne that will test an input less than 10
One that will test an input greater than 10.
All statements are covered, but the first one (the if conditional statement) is executed twice: during the execution of the second test, the condition number < 10 is evaluated again. The conditional statement is “more” tested than the other ones.
The HTML report generated from a count-mode coverprofile does not look the same, as you can see in figure 3.
\nThe greener the statement is, the more it’s tested.
The coverprofile file has the same layout as before, but the last figure represents the number of times the block was executed :
mode: count
unit-test/coverage/testCoverage.go:3.29,4.17 1 2
unit-test/coverage/testCoverage.go:4.17,6.3 1 1
unit-test/coverage/testCoverage.go:6.8,8.3 1 1
On the second line of this profile, you can see that the first code block (which starts at 3.29 and ends at 4.17) has 1 statement, executed two times.
The last cover mode is atomic. It is useful when you build concurrent programs. Internally, the system will use atomic counters (instead of simple counters). With those concurrency-safe counters, the coverprofile will be more precise.
To demonstrate it, I have modified the BazBaz function to make it even sillier by adding useless goroutines :
// unit-test/coverage/testCoverage.go
package testCoverage

import (
    "fmt"
    "sync"
)

func BazBaz(number int) int {
    var waitGroup sync.WaitGroup
    for i := 0; i < 100; i++ {
        waitGroup.Add(1)
        go concurrentTask(number, &waitGroup)
    }
    waitGroup.Wait()
    return number
}

func concurrentTask(number int, waitGroup *sync.WaitGroup) {
    useless := number + 2
    fmt.Println(useless)
    waitGroup.Done()
}
We launch 100 useless concurrent tasks that just make an assignment: set useless to number + 2 (and print it). We use a wait group to ensure that all our concurrent tasks finish before the function returns. We do not modify the unit tests.
\nLet’s get the coverprofile in count mode :
$ go test -coverprofile profileCount -covermode count
$ cat profileCount
mode: count
go_book/testCoverage/testCoverage.go:8.29,10.27 2 2
go_book/testCoverage/testCoverage.go:14.2,15.15 2 2
go_book/testCoverage/testCoverage.go:10.27,13.3 2 200
go_book/testCoverage/testCoverage.go:18.60,22.2 3 197
\nAnd in atomic mode :
$ go test -coverprofile profileAtomic -covermode atomic
$ cat profileAtomic
mode: atomic
go_book/testCoverage/testCoverage.go:8.29,10.27 2 2
go_book/testCoverage/testCoverage.go:14.2,15.15 2 2
go_book/testCoverage/testCoverage.go:10.27,13.3 2 200
go_book/testCoverage/testCoverage.go:18.60,22.2 3 200
If we use the count mode, the result is not accurate: for the last block of code (from 18.60 to 22.2), the count mode reports that the statements were executed 197 times. The atomic mode reports 200 executions, which is the correct value.
\nNote that this cover mode will add overhead to the coverprofile creation.
\n\nTest-Driven Development (or TDD) is a development method where you design the tests before actually writing the software.
Historically, this method emerged with the development of the XP (Extreme Programming) methodology in the late nineties. It has spread widely in the community, and authors like Robert C. Martin have contributed to its adoption.
Let’s jump right away to an example in Go. Our objective is to build a function that counts the number of vowels in a string. We first begin by creating a test case (which will fail because we have not yet created the function) :
// unit-test/tdd/tdd_test.go
package tdd

import "testing"

func TestVowelCount(t *testing.T) {
    expected := uint(5)
    actual := VowelCount("I love you")
    if actual != expected {
        t.Errorf("actual %d, expected %d", actual, expected)
    }
}
Here we are calling the function VowelCount with the sentence "I love you". In this sentence, we have five vowels; our expected result is the integer 5. As usual, we compare the actual number and the expected one.
Let’s run our test to see what happens :
$ go test
# go_book/tdd [go_book/tdd.test]
./tdd_test.go:7:12: undefined: VowelCount
FAIL go_book/tdd [build failed]
\nWe cannot compile; the test fails.
\nNow we can implement our function. We start by creating a map of vowels from the alphabet.
// unit-test/tdd/tdd.go
package tdd

var vowels = map[string]bool{
    "a": true,
    "e": true,
    "i": true,
    "o": true,
    "u": true}
Then we create the function. It will iterate over each letter of the sentence and check whether the letter is in the map of vowels :
// unit-test/tdd/tdd.go
package tdd

//...

func VowelCount(sentence string) uint {
    var count uint
    for _, char := range sentence {
        if vowels[string(char)] {
            count++
        }
    }
    return count
}
The variable char is a rune (a Unicode code point). That’s why we have to convert it to a string before the map lookup. Let’s run the test again to see if our implementation works :
$ go test
--- FAIL: TestVowelCount (0.00s)
    tdd_test.go:9: actual 4, expected 5
FAIL
exit status 1
FAIL go_book/tdd 0.005s
It does not seem to work. What could be wrong with our code? One letter seems to be skipped. There is something we missed: our test string comes with a capitalized I, but we only compare letters against lowercase vowels. That’s a bug: we also want to count capitalized vowels.
We have two options :

Add the uppercase letters to our map.
Convert each letter to lowercase and then compare it with the existing map.
The second solution seems to be less efficient than the first :

In the second option, we have to spend precious time converting each letter,
whereas in the first option, we are just performing a map lookup, which is really fast (O(1)); see the sketch after this list.
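For comparison, here is a sketch of what the second option could look like (this is not the implementation chosen in this chapter) :

// a sketch of option 2 (not part of the chapter's code)
package tdd

import "strings"

//...

func VowelCountLower(sentence string) uint {
    var count uint
    for _, char := range sentence {
        // convert each character to lowercase before the map lookup
        if vowels[strings.ToLower(string(char))] {
            count++
        }
    }
    return count
}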
Let’s add the capitalized vowels to our map :
var vowels = map[string]bool{
    //...
    "A": true,
    "E": true,
    "I": true,
    "O": true,
    "U": true}
\nThen we run our test again, and it works !
$ go test
PASS
ok go_book/tdd 0.005s
\n\nThe developer is forced to create a function that can be tested.
When we wrote our test, we chose our function’s signature. This is an act of design focused on the use of our functionality. To a certain extent, we are designing our API with a user-centric approach: using our API before actually implementing it forces us to keep things simple and usable.
With this method, every single function of our source code is tested.
You might argue that this way of developing is not very natural. This feeling is normal. Many developers (especially young ones) that I have met are reluctant to write tests and prefer to develop the functionality in a minimal amount of time.
To convince you of this approach, I will focus on facts and on real studies that have been conducted about TDD (those results have been extracted from a very good article by David Janzen) :
Using TDD generated a 50% reduction in defect density for a company of 9 developers, with a minimal impact on productivity.
For another company, the reduction was 40%, with no impact at all on the productivity of the team (9 developers).
The use of unit tests leads to a better information flow in the company.
Another study, conducted in an undergraduate computer science class, demonstrated a 45% reduction in defects per thousand lines of code.
I hope you are convinced.
In a Go program, where are test files stored?
You develop a function named ShippingCost. What is the conventional signature of its test function?
In a test, you need to load data from a separate file; where can you store it?
What is an assertion? Give an example.
In a Go program, where are test files stored?
Test files are stored next to the source files they test, in the same package, in files named xxx_test.go.
You develop a function named ShippingCost. What is the conventional signature of its test function?
func TestShippingCost(t *testing.T)
\nNote the string \"Test\"
at the beginning of the function, this is mandatory
The characters after “Test” are free, but they must begin with a capitalized letter or an underscore (_)
In a test, you need to load data from a separate file; where can you store it?
You can create a directory named “testdata” in the directory containing the source files of the package. You can put in this folder the files loaded by your test cases.
What is an assertion? Give an example.
In the context of unit tests, an assertion is a boolean expression (i.e., an expression that evaluates to true or false).
A “traditional” example is :
actual == expected
Writing unit tests is a good practice :
It gives relative protection against regressions.
Unit tests may detect bugs before they appear in production.
Unit tests improve the design of your package’s API.
Unit tests live inside packages.
Tests are written in test files.
xxx_test.go is the conventional name of a test file.
Test files can contain multiple test functions
A test function can contain several test cases
The name of a test function should begin with Test; the next character is either an underscore or a capital letter.
The name of the function under test is generally contained in the test function name.
A test function has the following signature :
\nfunc TestShippingCost(t *testing.T)
Table tests are a convenient way to test several test cases in a single function.
Test-Driven Development is a method that can increase the quality of your code and reduce defects.
The unit test is written before the actual implementation.
The unit test fails first; the aim is then to make it pass.