# Repeat until no more failures
$ prove -j 15 --state=failed,save ./t[0-9]*.sh
+You can give DEFAULT_TEST_TARGET=prove on the make command line (or define it
+in config.mak) to cause "make test" to run tests under prove.
+GIT_PROVE_OPTS can be used to pass additional options, e.g.
+
+ $ make DEFAULT_TEST_TARGET=prove GIT_PROVE_OPTS='--timer --jobs 16' test
+
You can also run each test individually from command line, like this:
$ sh ./t3010-ls-files-killed-modified.sh
--debug::
This may help the person who is developing a new test.
It causes the command defined with test_debug to run.
+ The "trash" directory (used to store all temporary data
+ during testing) is not deleted even if there are no
+ failed tests, so that you can inspect its contents after
+ the test has finished.
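+
+ For example, reusing the script from above:
+
+	$ sh ./t3010-ls-files-killed-modified.sh --debug
+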
--immediate::
This causes the test to immediately exit upon the first
failed test.
--valgrind::
Execute all Git binaries with valgrind. Since it makes sense to
not see any output, this option implies --verbose. For
convenience, it also implies --tee.
+ Note that valgrind is run with the option --leak-check=no,
+ as the git process is short-lived and some errors are not
+ interesting. In order to run a single command under the same
+ conditions manually, you should set GIT_VALGRIND to point to
+ the 't/valgrind/' directory and use the commands under
+ 't/valgrind/bin/'.
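+
+ For example, a sketch of running a single command by hand from
+ the 't/' directory:
+
+	$ GIT_VALGRIND=$(pwd)/valgrind
+	$ export GIT_VALGRIND
+	$ valgrind/bin/git ls-files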
+
--tee::
In addition to printing the test output to the terminal,
write it to files named 't/test-results/$TEST_NAME.out'.
If you create files under the t/ directory (i.e. here) that are not
top-level test scripts, never name them to match the above
pattern. The Makefile here considers all such files as
-top-level test script and tries to run all of them. A care is
+top-level test scripts and tries to run all of them. Care is
especially needed if you are creating a common test library
file, similar to test-lib.sh, because such a library file may
not be suitable for standalone execution.
test ...
That way all of the commands in your tests will either succeed or
fail the test. If
- you must ignore the return value of something (e.g., the return
- after unsetting a variable that was already unset is unportable) it's
- best to indicate so explicitly with a semicolon:
+ you must ignore the return value of something, consider using a
+ helper function (e.g. use sane_unset instead of unset, in order
+ to avoid the unportable return value of unsetting a variable that
+ was already unset), or prepending the command with test_might_fail
+ or test_must_fail, as in the sketch below.
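+
+ For example, a sketch of the same commands using those helpers
+ (the config key frotz.nitfol is illustrative):
+
+	sane_unset HLAGH &&
+	test_might_fail git config --unset frotz.nitfol &&
+	git merge hla &&
+	git push gh &&
+	test ...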
- unset HLAGH;
- git merge hla &&
- git push gh &&
- test ...
+ - Check the test coverage for your tests. See the "Test coverage"
+ section below.
+
+ Don't blindly follow test coverage metrics; if a new function you added
+ doesn't have any coverage, then you're probably doing something wrong,
+ but having 100% coverage doesn't necessarily mean that you tested
+ everything.
+
+ Tests that are likely to smoke out future regressions are better
+ than tests that just inflate the coverage metrics.
+
+ - When a test checks for an absolute path that a git command generated,
+ construct the expected value using $(pwd) rather than $PWD,
+ $TEST_DIRECTORY, or $TRASH_DIRECTORY. It makes a difference on
+ Windows, where the shell (MSYS bash) mangles absolute path names.
+ For details, see the commit message of 4114156ae9.
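+
+ For example, a minimal sketch of such a check, run from the top
+ level of the trash repository:
+
+	echo "$(pwd)" >expect &&
+	git rev-parse --show-toplevel >actual &&
+	test_cmp expect actual
+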
Don't:
Skipping tests
--------------
-If you need to skip all the remaining tests you should set skip_all
-and immediately call test_done. The string you give to skip_all will
-be used as an explanation for why the test was skipped. for instance:
+If you need to skip tests you should do so by using the three-arg form
+of the test_* functions (see the "Test harness library" section
+below), e.g.:
+
+ test_expect_success PERL 'I need Perl' "
+ '$PERL_PATH' -e 'hlagh() if unf_unf()'
+ "
+
+The advantage of skipping tests like this is that platforms that don't
+have the PERL prerequisite and other optional dependencies get an indication of how
+many tests they're missing.
+
+If the test code is too hairy for that (i.e. does a lot of setup work
+outside test assertions) you can also skip all remaining tests by
+setting skip_all and immediately calling test_done:
	if ! test_have_prereq PERL
	then
		skip_all='skipping perl interface tests, perl not available'
		test_done
	fi
+The string you give to skip_all will be used as an explanation for why
+the test was skipped.
+
End with test_done
------------------
- test_expect_success [<prereq>] <message> <script>
- Usually takes two strings as parameter, and evaluates the
+ Usually takes two strings as parameters, and evaluates the
<script>. If it yields success, the test is considered
successful. <message> should state what it is testing.
'tree=$(git-write-tree)'
If you supply three parameters the first will be taken to be a
- prerequisite, see the test_set_prereq and test_have_prereq
+ prerequisite; see the test_set_prereq and test_have_prereq
documentation below:
test_expect_success TTY 'git --paginate rev-list uses a pager' \
Like test_expect_success, this function can optionally use a
three-argument invocation with a prerequisite as the first argument.
- - test_expect_code [<prereq>] <code> <message> <script>
-
- Analogous to test_expect_success, but pass the test if it exits
- with a given exit <code>
-
- test_expect_code 1 'Merge with d/f conflicts' 'git merge "merge msg" B master'
-
- test_debug <script>
This takes a single argument, <script>, and evaluates it only
when the test script is started with the --debug command line
argument.
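
For example, to dump state only when the script is run with
--debug:

	test_debug 'cat .git/config'
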
- test_tick
Make commit and tag names consistent by setting the author and
- committer times to defined stated. Subsequent calls will
+ committer times to a defined state. Subsequent calls will
advance the times by a fixed amount.
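
For example, a sketch that makes the commit date deterministic
(--allow-empty merely avoids needing file setup):

	test_expect_success 'commit with a deterministic date' '
		test_tick &&
		git commit --allow-empty -m one
	'
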
- test_commit <message> [<filename> [<contents>]]
 - test_merge <message> <rev>

   Merges the given rev using the given message. Like test_commit,
   creates a tag and calls test_tick before committing.
- - test_set_prereq SOME_PREREQ
+ - test_set_prereq <prereq>
Set a test prerequisite to be used later with test_have_prereq. The
test-lib will set some prerequisites for you, see the
test_have_prereq directly, or the three argument invocation of
test_expect_success and test_expect_failure.
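
For example (FROTZ is a hypothetical prerequisite name):

	test_set_prereq FROTZ
	test_expect_success FROTZ 'frotz works' '
		test ...
	'
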
- - test_have_prereq SOME PREREQ
+ - test_have_prereq <prereq>
Check if we have a prerequisite previously set with
test_set_prereq. The most common use of this directly is to skip
'Perl API' \
"$PERL_PATH" "$TEST_DIRECTORY"/t9700/test.pl
+ - test_expect_code <exit-code> <command>
+
+ Run a command and ensure that it exits with the given exit code.
+ For example:
+
+ test_expect_success 'Merge with d/f conflicts' '
+ test_expect_code 1 git merge "merge msg" B master
+ '
+
- test_must_fail <git-command>
Run a git command and ensure it fails in a controlled way. Use
this instead of "! <git-command>".
 - test_cmp <expected> <actual>

   Check whether the content of the <actual> file matches the
   <expected> file. This behaves like "cmp" but produces more
   helpful output when the test is run with the "-v" option.
+ - test_line_count (= | -lt | -ge | ...) <length> <file>
+
+ Check whether a file has the expected number of lines.
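+
+ For example, a sketch that expects exactly three tracked files:
+
+	git ls-files >actual &&
+	test_line_count = 3 actual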
+
+ - test_path_is_file <path> [<diagnosis>]
+ test_path_is_dir <path> [<diagnosis>]
+ test_path_is_missing <path> [<diagnosis>]
+
+ Check if the named path is a file, if the named path is a
+ directory, or if the named path does not exist, respectively,
+ and fail otherwise, showing the <diagnosis> text.
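+
+ For example (the path names are illustrative):
+
+	test_path_is_file output.log "expected the command to write a log" &&
+	test_path_is_missing stale.lock "stale lock should have been removed"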
+
- test_when_finished <script>
Prepend <script> to a list of commands to run to clean up
at the end of the current test.
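
For example (core.frotz is an illustrative configuration key):

	test_expect_success 'test with temporary config' '
		git config core.frotz true &&
		test_when_finished "git config --unset core.frotz" &&
		test ...
	'
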
updating when such a change to the internals happens, so do _not_
do it; leave the low-level validation to t0000-basic.sh.
+Test coverage
+-------------
+
+You can use the coverage tests to find code paths that are not being
+used or properly exercised yet.
+
+To do that, run the coverage target at the top-level (not in the t/
+directory):
+
+ make coverage
+
+That'll compile Git with GCC's coverage arguments, and generate a test
+report with gcov after the tests finish. Running the coverage tests
+can take a while, since running the tests in parallel is incompatible
+with GCC's coverage mode.
+
+After the tests have run you can generate a list of untested
+functions:
+
+ make coverage-untested-functions
+
+You can also generate a detailed per-file HTML report using the
+Devel::Cover module. To install it do:
+
+ # On Debian or Ubuntu:
+ sudo aptitude install libdevel-cover-perl
+
+ # From the CPAN with cpanminus
+ curl -L http://cpanmin.us | perl - --sudo --self-upgrade
+ cpanm --sudo Devel::Cover
+
+Then, at the top-level:
+
+ make cover_db_html
+
+That'll generate a detailed coverage report in the "cover_db_html"
+directory, which you can then copy to a webserver, or inspect locally
+in a browser.
+
Smoke testing
-------------
SMOKE_USERNAME=<username> SMOKE_PASSWORD=<password> make smoke_report
+You can also add an additional comment to attach to the report, and/or
+a comma-separated list of tags:
+
+ SMOKE_USERNAME=<username> SMOKE_PASSWORD=<password> \
+ SMOKE_COMMENT=<comment> SMOKE_TAGS=<tags> \
+ make smoke_report
+
Once the report is uploaded, it'll be made available at
http://smoke.git.nix.is; here's an overview of the Recent Smoke
Reports for Git: