Core GIT Tests
==============

This directory holds many test scripts for core GIT tools.  The
first part of this short document describes how to run the tests
and read their output.

When fixing the tools or adding enhancements, you are strongly
encouraged to add tests in this directory to cover what you are
trying to fix or enhance.  The later part of this short document
describes how your test scripts should be organized.


Running Tests
-------------

The easiest way to run tests is to say "make".  This runs all
the tests.

    *** t0000-basic.sh ***
    ok 1 - .git/objects should be empty after git init in an empty repo.
    ok 2 - .git/objects should have 3 subdirectories.
    ok 3 - success is reported like this
    ...
    ok 43 - very long name in the index handled sanely
    # fixed 1 known breakage(s)
    # still have 1 known breakage(s)
    # passed all remaining 42 test(s)
    1..43
    *** t0001-init.sh ***
    ok 1 - plain
    ok 2 - plain with GIT_WORK_TREE
    ok 3 - plain bare

Since the tests all output TAP (see http://testanything.org) they can
be run with any TAP harness. Here's an example of parallel testing
powered by a recent version of prove(1):

    $ prove --timer --jobs 15 ./t[0-9]*.sh
    [19:17:33] ./t0005-signals.sh ................................... ok       36 ms
    [19:17:33] ./t0022-crlf-rename.sh ............................... ok       69 ms
    [19:17:33] ./t0024-crlf-archive.sh .............................. ok      154 ms
    [19:17:33] ./t0004-unwritable.sh ................................ ok      289 ms
    [19:17:33] ./t0002-gitfile.sh ................................... ok      480 ms
    ===(     102;0  25/?  6/?  5/?  16/?  1/?  4/?  2/?  1/?  3/?  1... )===

prove and other harnesses come with a lot of useful options. The
--state option in particular is very useful:

    # Repeat until no more failures
    $ prove -j 15 --state=failed,save ./t[0-9]*.sh

You can give DEFAULT_TEST_TARGET=prove on the make command (or define it
in config.mak) to cause "make test" to run tests under prove.
GIT_PROVE_OPTS can be used to pass additional options, e.g.

    $ make DEFAULT_TEST_TARGET=prove GIT_PROVE_OPTS='--timer --jobs 16' test

You can also run each test individually from the command line, like this:

    $ sh ./t3010-ls-files-killed-modified.sh
    ok 1 - git update-index --add to add various paths.
    ok 2 - git ls-files -k to show killed files.
    ok 3 - validate git ls-files -k output.
    ok 4 - git ls-files -m to show modified files.
    ok 5 - validate git ls-files -m output.
    # passed all 5 test(s)
    1..5

You can pass --verbose (or -v), --debug (or -d), and --immediate
(or -i) command line arguments to the test script, or set GIT_TEST_OPTS
appropriately before running "make".
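
For example, either of the following should have the same effect (the
test script name is just an example):

    $ sh ./t3010-ls-files-killed-modified.sh --verbose --immediate
    $ GIT_TEST_OPTS='--verbose --immediate' make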

--verbose::
        This makes the test more verbose.  Specifically, the
        commands being run and their output, if any, are also
        shown.

--verbose-only=<pattern>::
        Like --verbose, but the effect is limited to tests with
        numbers matching <pattern>.  The number matched against is
        simply the running count of the test within the file.

--debug::
        This may help the person who is developing a new test.
        It causes the command defined with test_debug to run.
        The "trash" directory (used to store all temporary data
        during testing) is not deleted even if there are no
        failed tests so that you can inspect its contents after
        the test has finished.

--immediate::
        This causes the test to immediately exit upon the first
        failed test. Cleanup commands requested with
        test_when_finished are not executed if the test failed,
        in order to keep the state for inspection by the tester
        to diagnose the bug.

--long-tests::
        This causes additional long-running tests to be run (where
        available), for more exhaustive testing.

--valgrind=<tool>::
        Execute all Git binaries under valgrind tool <tool> and exit
        with status 126 on errors (just like regular tests, this will
        only stop the test script when running under -i).

        Since it makes no sense to run the tests with --valgrind and
        not see any output, this option implies --verbose.  For
        convenience, it also implies --tee.

        <tool> defaults to 'memcheck', just like valgrind itself.
        Other particularly useful choices include 'helgrind' and
        'drd', but you may use any tool recognized by your valgrind
        installation.

        As a special case, <tool> can be 'memcheck-fast', which uses
        memcheck but disables --track-origins.  Use this if you are
        running tests in bulk, to see if there are _any_ memory
        issues.

        Note that memcheck is run with the option --leak-check=no,
        as the git process is short-lived and some errors are not
        interesting. In order to run a single command under the same
        conditions manually, you should set GIT_VALGRIND to point to
        the 't/valgrind/' directory and use the commands under
        't/valgrind/bin/'.
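
        For example, from within the t/ directory, a manual run might
        look roughly like this (a sketch, not an exact recipe):

            $ GIT_VALGRIND=$(pwd)/valgrind \
              ./valgrind/bin/git rev-list HEAD >/dev/null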

--valgrind-only=<pattern>::
        Like --valgrind, but the effect is limited to tests with
        numbers matching <pattern>.  The number matched against is
        simply the running count of the test within the file.

--tee::
        In addition to printing the test output to the terminal,
        write it to files named 't/test-results/$TEST_NAME.out'.
        As the names depend on the tests' file names, it is safe to
        run the tests with this option in parallel.

--with-dashes::
        By default tests are run without dashed forms of
        commands (like git-commit) in the PATH (it only uses
        wrappers from ../bin-wrappers).  Use this option to include
        the build directory (..) in the PATH, which contains all
        the dashed forms of commands.  This option is currently
        implied by other options like --valgrind and
        GIT_TEST_INSTALLED.

--root=<directory>::
        Create "trash" directories used to store all temporary data during
        testing under <directory>, instead of the t/ directory.
        Using this option with a RAM-based filesystem (such as tmpfs)
        can massively speed up the test suite.
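
        For example, on a Linux box where /dev/shm is a tmpfs (the path
        is only an illustration):

            $ sh ./t0001-init.sh --root=/dev/shm/git-trash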

You can also set the GIT_TEST_INSTALLED environment variable to
the bindir of an existing git installation to test that installation.
You still need to have built this git sandbox, from which various
test-* support programs, templates, and perl libraries are used.
If your installed git is incomplete, it will silently test parts of
your built version instead.

When using GIT_TEST_INSTALLED, you can also set GIT_TEST_EXEC_PATH to
override the location of the dashed-form subcommands (what
GIT_EXEC_PATH would be used for during normal operation).
GIT_TEST_EXEC_PATH defaults to `$GIT_TEST_INSTALLED/git --exec-path`.
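
For example, to run the suite against a system-wide installation (the
path is just an example bindir):

    $ GIT_TEST_INSTALLED=/usr/bin make test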


Skipping Tests
--------------

In some environments, certain tests have no way of succeeding
due to platform limitations, such as the lack of an 'unzip' program,
or a filesystem that does not allow arbitrary sequences of non-NUL
bytes as pathnames.

You should be able to say something like

    $ GIT_SKIP_TESTS=t9200.8 sh ./t9200-git-cvsexport-commit.sh

and even:

    $ GIT_SKIP_TESTS='t[0-4]??? t91?? t9200.8' make

to omit such tests.  The value of the environment variable is a
space-separated list of patterns that tells which tests to skip;
a pattern can either match the "t[0-9]{4}" part to skip the whole
test script, or be t[0-9]{4} followed by ".$number" to skip one
particular test in it.

Note that some tests in the existing test suite rely on previous
test items, so you cannot arbitrarily disable one and expect the
remaining tests to check what they were originally intended to check.


Naming Tests
------------

The test files are named as:

        tNNNN-commandname-details.sh

where N is a decimal digit.

First digit tells the family:

        0 - the absolute basics and global stuff
        1 - the basic commands concerning database
        2 - the basic commands concerning the working tree
        3 - the other basic commands (e.g. ls-files)
        4 - the diff commands
        5 - the pull and exporting commands
        6 - the revision tree commands (even e.g. merge-base)
        7 - the porcelainish commands concerning the working tree
        8 - the porcelainish commands concerning forensics
        9 - the git tools

Second digit tells the particular command we are testing.

Third digit (optionally) tells the particular switch or group of switches
we are testing.

If you create files under the t/ directory (i.e. here) that are not
top-level test scripts, never name them to match the above
pattern.  The Makefile here considers all such files to be
top-level test scripts and tries to run all of them.  Care is
especially needed if you are creating a common test library
file, similar to test-lib.sh, because such a library file may
not be suitable for standalone execution.


Writing Tests
-------------

The test script is written as a shell script.  It should start
with the standard "#!/bin/sh" with copyright notices, and an
assignment to variable 'test_description', like this:

        #!/bin/sh
        #
        # Copyright (c) 2005 Junio C Hamano
        #

        test_description='xxx test (option --frotz)

        This test registers the following structure in the cache
        and tries to run git-ls-files with option --frotz.'


Source 'test-lib.sh'
--------------------

After assigning test_description, the test script should source
test-lib.sh like this:

        . ./test-lib.sh

This test harness library does the following things:

 - If the script is invoked with command line argument --help
   (or -h), it shows the test_description and exits.

 - Creates an empty test directory with an empty .git/objects database
   and chdir(2) into it.  This directory is 't/trash
   directory.$test_name_without_dotsh', with t/ subject to change by
   the --root option documented above.

 - Defines standard test helper functions for your scripts to
   use.  These functions are designed to make all scripts behave
   consistently when command line arguments --verbose (or -v),
   --debug (or -d), and --immediate (or -i) are given.

Do's, don'ts & things to keep in mind
-------------------------------------

Here are a few examples of things you probably should and shouldn't do
when writing tests.

Do:

 - Put all code inside test_expect_success and other assertions.

   Even code that isn't a test per se, but merely some setup code,
   should be inside a test assertion; see the sketch below.
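
   A common way to do this is a dedicated 'setup' assertion at the top
   of the script (the file name is only illustrative):

        test_expect_success 'setup' '
                echo content >file &&
                git add file &&
                git commit -m initial
        '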

 - Chain your test assertions

   Write test code like this:

        git merge foo &&
        git push bar &&
        test ...

   Instead of:

        git merge foo
        git push bar
        test ...

   That way the test fails if any of the commands in the chain fails,
   not just the last one. If you must ignore the return value of
   something, consider using a helper function (e.g. use sane_unset
   instead of unset, in order to avoid the unportable return value of
   unsetting a variable that was already unset), or prepending the
   command with test_might_fail or test_must_fail, as sketched below.
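
   For instance, to ignore the outcome of commands that may legitimately
   fail, something like this (the configuration key is only an example):

        test_might_fail git config --unset core.frotz &&
        sane_unset FROTZ &&
        test ...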

 - Check the test coverage for your tests. See the "Test coverage"
   section below.

   Don't blindly follow test coverage metrics; if a new function you added
   doesn't have any coverage, then you're probably doing something wrong,
   but having 100% coverage doesn't necessarily mean that you tested
   everything.

   Tests that are likely to smoke out future regressions are better
   than tests that just inflate the coverage metrics.

 - When a test checks for an absolute path that a git command generated,
   construct the expected value using $(pwd) rather than $PWD,
   $TEST_DIRECTORY, or $TRASH_DIRECTORY (see the sketch below). It makes
   a difference on Windows, where the shell (MSYS bash) mangles absolute
   path names.  For details, see the commit message of 4114156ae9.
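
   For example, something along these lines (the git command is only
   illustrative):

        echo "$(pwd)/sub/file" >expect &&
        git frotz --show-path sub/file >actual &&
        test_cmp expect actual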

Don't:

 - exit() within a <script> part.

   The harness will catch this as a programming error of the test.
   Use test_done instead if you need to stop the tests early (see
   "Skipping tests" below).

 - use '! git cmd' when you want to make sure the git command exits
   with failure in a controlled way by calling "die()".  Instead,
   use 'test_must_fail git cmd'.  This will signal a failure if git
   dies in an unexpected way (e.g. segfault).

   On the other hand, don't use test_must_fail for running regular
   platform commands; just use '! cmd'.

 - use perl without spelling it as "$PERL_PATH". This is to help our
   friends on Windows where the platform Perl often adds CR before
   the end of line, and they bundle Git with a version of Perl that
   does not do so, whose path is specified with $PERL_PATH.

 - use sh without spelling it as "$SHELL_PATH", when the script can
   be misinterpreted by broken platform shell (e.g. Solaris).

 - chdir around in tests.  It is not sufficient to chdir to
   somewhere and then chdir back to the original location later in
   the test, as any intermediate step can fail and abort the test,
   causing the next test to start in an unexpected directory.  Do so
   inside a subshell if necessary, as in the sketch below.
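
   For example (the directory name and command are only illustrative):

        test_expect_success 'work in a subdirectory' '
                mkdir sub &&
                (
                        cd sub &&
                        git frotz
                )
        '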

 - Break the TAP output

   The raw output from your test may be interpreted by a TAP harness. TAP
   harnesses will ignore everything they don't know about, but don't step
   on their toes in these areas:

   - Don't print lines like "$x..$y" where $x and $y are integers.

   - Don't print lines that begin with "ok" or "not ok".

   TAP harnesses expect a line that begins with either "ok" or "not
   ok" to signal that a test passed or failed (and our harness already
   produces such lines), so your script shouldn't emit such lines to
   its output.

   You can glean some further possible issues from the TAP grammar
   (see http://search.cpan.org/perldoc?TAP::Parser::Grammar#TAP_Grammar)
   but the best indication is to just run the tests with prove(1);
   it'll complain if anything is amiss.

Keep in mind:

 - Inside the <script> part, the standard output and standard error
   streams are discarded, and the test harness only reports "ok" or
   "not ok" to the end user running the tests. Under --verbose, they
   are shown to aid in debugging the tests.


Skipping tests
--------------

If you need to skip tests you should do so by using the three-arg form
of the test_* functions (see the "Test harness library" section
below), e.g.:

    test_expect_success PERL 'I need Perl' '
        "$PERL_PATH" -e "hlagh() if unf_unf()"
    '

The advantage of skipping tests like this is that platforms that don't
have the PERL and other optional dependencies get an indication of how
many tests they're missing.

If the test code is too hairy for that (i.e. does a lot of setup work
outside test assertions), you can also skip all remaining tests by
setting skip_all to a reason and immediately calling test_done:

        if ! test_have_prereq PERL
        then
            skip_all='skipping perl interface tests, perl not available'
            test_done
        fi

The string you give to skip_all will be used as an explanation for why
the test was skipped.

End with test_done
------------------

Your script will be a sequence of tests, using helper functions
from the test harness library.  At the end of the script, call
'test_done'.


Test harness library
--------------------

There are a handful of helper functions defined in the test harness
library for your script to use.

 - test_expect_success [<prereq>] <message> <script>

   Usually takes two strings as parameters, and evaluates the
   <script>.  If it yields success, the test is considered
   successful.  <message> should state what it is testing.

   Example:

        test_expect_success \
            'git-write-tree should be able to write an empty tree.' \
            'tree=$(git-write-tree)'

   If you supply three parameters the first will be taken to be a
   prerequisite; see the test_set_prereq and test_have_prereq
   documentation below:

        test_expect_success TTY 'git --paginate rev-list uses a pager' \
            ' ... '

   You can also supply a comma-separated list of prerequisites, in the
   rare case where your test depends on more than one:

        test_expect_success PERL,PYTHON 'yo dawg' \
            ' test $(perl -E 'print eval "1 +" . qx[python -c "print 2"]') == "4" '

 - test_expect_failure [<prereq>] <message> <script>

   This is NOT the opposite of test_expect_success, but is used
   to mark a test that demonstrates a known breakage.  Unlike
   the usual test_expect_success tests, which say "ok" on
   success and "FAIL" on failure, this will say "FIXED" on
   success and "still broken" on failure.  Failures from these
   tests won't cause -i (immediate) to stop.

   Like test_expect_success this function can optionally use a three
   argument invocation with a prerequisite as the first argument.
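
   For example (the command is just a placeholder for a known breakage):

        test_expect_failure 'frotz handles empty input' '
                git frotz </dev/null
        '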

 - test_debug <script>

   This takes a single argument, <script>, and evaluates it only
   when the test script is started with the --debug command line
   argument.  This is primarily meant for use during the
   development of a new test script.
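
   For example, to dump some repository state while debugging (the
   command is only an illustration):

        test_debug 'git log --oneline --all'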

 - test_done

   Your test script must have test_done at the end.  Its purpose
   is to summarize successes and failures in the test script and
   exit with an appropriate error code.

 - test_tick

   Make commit and tag names consistent by setting the author and
   committer times to a defined state.  Subsequent calls will
   advance the times by a fixed amount.

 - test_commit <message> [<filename> [<contents>]]

   Creates a commit with the given message, committing the given
   file with the given contents (default for both is to reuse the
   message string), and adds a tag (again reusing the message
   string as name).  Calls test_tick to make the SHA-1s
   reproducible.
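
   For example, a small history could be set up like this (the names
   are only illustrative):

        test_expect_success 'setup history' '
                test_commit first file.t one &&
                test_commit second file.t two
        '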

 - test_merge <message> <commit-or-tag>

   Merges the given rev using the given message.  Like test_commit,
   creates a tag and calls test_tick before committing.

 - test_set_prereq <prereq>

   Set a test prerequisite to be used later with test_have_prereq. The
   test-lib will set some prerequisites for you; see the
   "Prerequisites" section below for a full list of these.

   Others you can set yourself and use later with either
   test_have_prereq directly, or the three argument invocation of
   test_expect_success and test_expect_failure.
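
   For example, you could probe for an optional tool once and record
   the result as a prerequisite (the tool name is made up):

        if frotz --version >/dev/null 2>&1
        then
                test_set_prereq FROTZ
        fi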

 - test_have_prereq <prereq>

   Check if we have a prerequisite previously set with
   test_set_prereq. The most common use of this directly is to skip
   all the tests if we don't have some essential prerequisite:

        if ! test_have_prereq PERL
        then
            skip_all='skipping perl interface tests, perl not available'
            test_done
        fi

 - test_external [<prereq>] <message> <external> <script>

   Execute a <script> with an <external> interpreter (like perl). This
   was added for tests like t9700-perl-git.sh which do most of their
   work in an external test script.

        test_external \
            'GitwebCache::*FileCache*' \
            "$PERL_PATH" "$TEST_DIRECTORY"/t9503/test_cache_interface.pl

   If the test is outputting its own TAP you should set the
   test_external_has_tap variable somewhere before calling the first
   test_external* function. See t9700-perl-git.sh for an example.

        # The external test will output its own plan
        test_external_has_tap=1

 - test_external_without_stderr [<prereq>] <message> <external> <script>

   Like test_external but fail if there's any output on stderr,
   instead of checking the exit code.

        test_external_without_stderr \
            'Perl API' \
            "$PERL_PATH" "$TEST_DIRECTORY"/t9700/test.pl

 - test_expect_code <exit-code> <command>

   Run a command and ensure that it exits with the given exit code.
   For example:

        test_expect_success 'Merge with d/f conflicts' '
                test_expect_code 1 git merge "merge msg" B master
        '

 - test_must_fail <git-command>

   Run a git command and ensure it fails in a controlled way.  Use
   this instead of "! <git-command>".  When git-command dies due to a
   segfault, test_must_fail diagnoses it as an error; "! <git-command>"
   treats it as just another expected failure, which would let such a
   bug go unnoticed.
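
   For example (a sketch; it assumes a branch 'side' that is not fully
   merged):

        test_expect_success 'branch -d refuses to delete unmerged branch' '
                test_must_fail git branch -d side
        '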

 - test_might_fail <git-command>

   Similar to test_must_fail, but tolerates success, too.  Use this
   instead of "<git-command> || :" to catch failures due to segv.

 - test_cmp <expected> <actual>

   Check whether the content of the <actual> file matches the
   <expected> file.  This behaves like "cmp" but produces more
   helpful output when the test is run with the "-v" option.
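
   For example (the git invocation is only illustrative):

        echo frotz >expect &&
        git frotz >actual &&
        test_cmp expect actual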

 - test_line_count (= | -lt | -ge | ...) <length> <file>

   Check whether a file has the number of lines it is expected to have.
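
   For example (assuming exactly three paths are expected to be listed):

        git ls-files >actual &&
        test_line_count = 3 actual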

 - test_path_is_file <path> [<diagnosis>]
   test_path_is_dir <path> [<diagnosis>]
   test_path_is_missing <path> [<diagnosis>]

   Check if the named path is a file, if the named path is a
   directory, or if the named path does not exist, respectively,
   and fail otherwise, showing the <diagnosis> text.
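
   For example (the path and message are only illustrative):

        test_path_is_missing .git/MERGE_HEAD "merge state not cleaned up"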

 - test_when_finished <script>

   Prepend <script> to a list of commands to run to clean up
   at the end of the current test.  If some clean-up command
   fails, the test will not pass.

   Example:

        test_expect_success 'branch pointing to non-commit' '
                git rev-parse HEAD^{tree} >.git/refs/heads/invalid &&
                test_when_finished "git update-ref -d refs/heads/invalid" &&
                ...
        '

 - test_pause

        This command is useful for writing and debugging tests and must be
        removed before submitting. It halts the execution of the test and
        spawns a shell in the trash directory. Exit the shell to continue
        the test. Example:

        test_expect_success 'test' '
                git do-something >actual &&
                test_pause &&
                test_cmp expected actual
        '

 - test_ln_s_add <path1> <path2>

   This function helps systems whose filesystem does not support symbolic
   links. Use it to add a symbolic link entry to the index when it is not
   important that the file system entry is a symbolic link, i.e., instead
   of the sequence

        ln -s foo bar &&
        git add bar

   use:

        test_ln_s_add foo bar

   Sometimes it is possible to split a test into a part that does not need
   the symbolic link in the file system and a part that does; then only
   the latter part need be protected by a SYMLINKS prerequisite (see below).

Prerequisites
-------------

These are the prerequisites that the test library predefines; you can
check for them with test_have_prereq.

See the prereq argument to the test_* functions in the "Test harness
library" section above and the "test_have_prereq" function for how to
use these, and "test_set_prereq" for how to define your own.

 - PERL & PYTHON

   Git wasn't compiled with NO_PERL=YesPlease or
   NO_PYTHON=YesPlease. Wrap any tests that need Perl or Python in
   these.

 - POSIXPERM

   The filesystem supports POSIX style permission bits.

 - BSLASHPSPEC

   Backslashes in pathspec are not directory separators. This is not
   set on Windows. See 6fd1106a for details.

 - EXECKEEPSPID

   The process retains the same pid across exec(2). See fb9a2bea for
   details.

 - PIPE

   The filesystem we're on supports creation of FIFOs (named pipes)
   via mkfifo(1).

 - SYMLINKS

   The filesystem we're on supports symbolic links. E.g. a FAT
   filesystem doesn't support these. See 704a3143 for details.

 - SANITY

   Test is not run by root user, and an attempt to write to an
   unwritable file is expected to fail correctly.

 - LIBPCRE

   Git was compiled with USE_LIBPCRE=YesPlease. Wrap any tests
   that use git-grep --perl-regexp or git-grep -P in these.

 - CASE_INSENSITIVE_FS

   Test is run on a case insensitive file system.

 - UTF8_NFD_TO_NFC

   Test is run on a filesystem which converts decomposed utf-8 (nfd)
   to precomposed utf-8 (nfc).

Tips for Writing Tests
----------------------

As with any programming project, existing programs are the best
source of information.  However, do _not_ emulate
t0000-basic.sh when writing your tests.  That test is special in
that it tries to validate the very core of GIT.  For example, it
knows that there will be 256 subdirectories under .git/objects/,
and it knows that the object ID of an empty tree is a certain
40-byte string.  This is deliberate: one thing this very basic core
test tries to achieve is to serve as a basis for people who are
changing the GIT internals drastically.  For these people, after
making certain changes, not seeing failures from the basic test
_is_ a failure.  Any drastic change to the core GIT that changes
even these otherwise supposedly stable object IDs should therefore
be accompanied by an update to t0000-basic.sh.

However, other tests that simply rely on basic parts of the core
GIT working properly should not have that level of intimate
knowledge of the core GIT internals.  If all the test scripts
hardcoded the object IDs like t0000-basic.sh does, that would defeat
the purpose of t0000-basic.sh, which is to isolate that level of
validation in one place.  Your test would also end up needing an
update whenever such a change to the internals happens, so do _not_
do it; leave the low level of validation to t0000-basic.sh.

Test coverage
-------------

You can use the coverage tests to find code paths that are not being
used or properly exercised yet.

To do that, run the coverage target at the top-level (not in the t/
directory):

    make coverage

That'll compile Git with GCC's coverage arguments, and generate a test
report with gcov after the tests finish. Running the coverage tests
can take a while, since running the tests in parallel is incompatible
with GCC's coverage mode.

After the tests have run you can generate a list of untested
functions:

    make coverage-untested-functions

You can also generate a detailed per-file HTML report using the
Devel::Cover module. To install it do:

   # On Debian or Ubuntu:
   sudo aptitude install libdevel-cover-perl

   # From the CPAN with cpanminus
   curl -L http://cpanmin.us | perl - --sudo --self-upgrade
   cpanm --sudo Devel::Cover

Then, at the top-level:

    make cover_db_html

That'll generate a detailed cover report in the "cover_db_html"
directory, which you can then copy to a webserver, or inspect locally
in a browser.