# Repeat until no more failures
$ prove -j 15 --state=failed,save ./t[0-9]*.sh
You can give DEFAULT_TEST_TARGET=prove on the make command line (or
define it in config.mak) to cause "make test" to run tests under
prove.  GIT_PROVE_OPTS can be used to pass additional options, e.g.

    $ make DEFAULT_TEST_TARGET=prove GIT_PROVE_OPTS='--timer --jobs 16' test

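If you want these settings to be used every time, a minimal sketch of
the corresponding config.mak lines might look like this (the values
shown are just examples):

    DEFAULT_TEST_TARGET = prove
    GIT_PROVE_OPTS = --timer --jobs 16
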
You can also run each test individually from the command line, like this:
$ sh ./t3010-ls-files-killed-modified.sh
test ...
That way all of the commands in your tests will succeed or fail. If
you must ignore the return value of something, consider using a
helper function (e.g. use sane_unset instead of unset, in order to
avoid the unportable return value of unsetting a variable that was
already unset), or prepend the command with test_might_fail or
test_must_fail (a brief sketch of this appears after this list).

 - Check the test coverage for your tests. See the "Test coverage"
   section below.

   Don't blindly follow test coverage metrics; they are a good way to
   spot if you've missed something. If a new function you added
   doesn't have any coverage, you're probably doing something wrong,
   but having 100% coverage doesn't necessarily mean that you tested
   everything.

   Tests that are likely to smoke out future regressions are better
   than tests that just inflate the coverage metrics.

 - When a test checks for an absolute path that a git command generated,
   construct the expected value using $(pwd) rather than $PWD,
   $TEST_DIRECTORY, or $TRASH_DIRECTORY. It makes a difference on
   Windows, where the shell (MSYS bash) mangles absolute path names.
   For details, see the commit message of 4114156ae9; a brief sketch
   also follows this list.
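
To illustrate the last point, here is a rough sketch that builds the
expected absolute path with $(pwd); "git rev-parse --show-toplevel" is
used here only as an example of a command that prints an absolute
path:

        test_expect_success 'toplevel is reported as an absolute path' '
            echo "$(pwd)" >expect &&
            git rev-parse --show-toplevel >actual &&
            test_cmp expect actual
        '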
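
And to illustrate ignoring a return value without breaking the
&&-chain, a minimal sketch (the configuration key and environment
variable below are made up purely for illustration):

        test_expect_success 'stale state is cleared before the real test' '
            test_might_fail git config --unset example.setting &&
            sane_unset EXAMPLE_VARIABLE &&
            test_must_fail git config example.setting
        '
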
Don't:
Skipping tests
--------------
If you need to skip tests you should do so by using the three-arg form
of the test_* functions (see the "Test harness library" section
below), e.g.:

        test_expect_success PERL 'I need Perl' "
            '$PERL_PATH' -e 'hlagh() if unf_unf()'
        "

The advantage of skipping tests like this is that platforms that don't
have PERL and other optional dependencies get an indication of how
many tests they're missing.

If the test code is too hairy for that (i.e. does a lot of setup work
outside test assertions) you can also skip all remaining tests by
setting skip_all and immediately calling test_done:

        if ! test_have_prereq PERL
        then
            skip_all='skipping perl interface tests, perl not available'
            test_done
        fi

The string you give to skip_all will be used as an explanation for why
the test was skipped.

End with test_done
------------------
Like test_expect_success this function can optionally use a three
argument invocation with a prerequisite as the first argument.
- test_debug <script>
This takes a single argument, <script>, and evaluates it only
'Perl API' \
"$PERL_PATH" "$TEST_DIRECTORY"/t9700/test.pl

 - test_expect_code <exit-code> <command>

   Run a command and ensure that it exits with the given exit code.
   For example:

        test_expect_success 'Merge with d/f conflicts' '
            test_expect_code 1 git merge "merge msg" B master
        '

- test_must_fail <git-command>
Run a git command and ensure it fails in a controlled way. Use
<expected> file. This behaves like "cmp" but produces more
helpful output when the test is run with "-v" option.
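
   For example, a comparison against an expected output might look
   like this sketch (the file contents and names beyond the usual
   expect/actual convention are made up):

        test_expect_success 'listing produces the expected output' '
            echo content >tracked-file &&
            git add tracked-file &&
            echo tracked-file >expect &&
            git ls-files >actual &&
            test_cmp expect actual
        '
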
 - test_line_count (= | -lt | -ge | ...) <length> <file>

   Check whether a file has the number of lines it is expected to
   have.

 - test_path_is_file <file> [<diagnosis>]
   test_path_is_dir <dir> [<diagnosis>]
   test_path_is_missing <path> [<diagnosis>]

   Check whether a file/directory exists or doesn't. <diagnosis> will
   be displayed if the test fails.
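
   For example, combining these checks with test_line_count (a sketch;
   the path and file names are made up, just to illustrate the calls):

        test_expect_success 'created the expected layout' '
            mkdir dir &&
            echo one >dir/file &&
            echo two >>dir/file &&
            test_path_is_dir dir &&
            test_path_is_file dir/file &&
            test_path_is_missing dir/other &&
            test_line_count = 2 dir/file
        '
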
- test_when_finished <script>
Prepend <script> to a list of commands to run to clean up
updating when such a change to the internal happens, so do _not_
do it and leave the low level of validation to t0000-basic.sh.
Test coverage
-------------

You can use the coverage tests to find code paths that are not being
used or properly exercised yet.

To do that, run the coverage target at the top-level (not in the t/
directory):

    make coverage

That'll compile Git with GCC's coverage arguments, and generate a test
report with gcov after the tests finish. Running the coverage tests
can take a while, since running the tests in parallel is incompatible
with GCC's coverage mode.

After the tests have run you can generate a list of untested
functions:

    make coverage-untested-functions

You can also generate a detailed per-file HTML report using the
Devel::Cover module. To install it do:

    # On Debian or Ubuntu:
    sudo aptitude install libdevel-cover-perl

    # From the CPAN with cpanminus
    curl -L http://cpanmin.us | perl - --sudo --self-upgrade
    cpanm --sudo Devel::Cover

Then, at the top-level:

    make cover_db_html

That'll generate a detailed cover report in the "cover_db_html"
directory, which you can then copy to a webserver, or inspect locally
in a browser.

Smoke testing
-------------