doc-diff: add --clean mode to remove temporary working gunk
As part of its operation, doc-diff creates a bunch of temporary
working files and holds onto them in order to speed up subsequent
invocations. These files are never deleted. Moreover, it creates a
temporary working tree (via git-worktree) which likewise never gets
removed.
Without knowing the implementation details of the tool, a user may not
know how to clean up manually afterward. Worse, the user may find it
surprising and alarming to discover a working tree which s/he did not
create explicitly.
To address these issues, add a --clean mode which removes the
temporary working tree and deletes all generated files.
Signed-off-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
doc-diff: fix non-portable 'man' invocation
doc-diff invokes 'man' with the -l option to force "local" mode,
however, neither MacOS nor FreeBSD recognize this option. On those
platforms, if the argument to 'man' contains a slash, it is
automatically interpreted as a file specification, so a "local"-like
mode is not needed. And, it turns out, 'man' which does support -l
falls back to enabling -l automatically if it can't otherwise find a
manual entry corresponding to the argument. Since doc-diff always
passes an absolute path of the nroff source file to 'man', the -l
option kicks in anyhow, despite not being specified explicitly.
Therefore, make the invocation portable to the various platforms by
simply dropping -l.
Signed-off-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
doc/git-branch: remove obsolete "-l" references
The previous commit switched "-l" to meaning "--list", but a
few vestiges of its prior meaning as "--create-reflog"
remained:
- the synopsis mentioned "-l" when creating a new branch;
we can drop this entirely, as it has been the default
for years
- the --list command mentions the unfortunate "-l"
confusion, but we've now fixed that
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
t5303: use printf to generate delta bases
The exact byte count of the delta base file is important.
The test-delta helper will feed it to patch_delta(), which
will barf if it doesn't match the size byte given in the
delta. Using "echo" may end up with unexpected line endings
on some platforms (e.g., "\r\n" instead of just "\n").
This actually wouldn't cause the test to fail (since we
already expect test-delta to complain about these bogus
deltas), but would mean that we're not exercising the code
we think we are.
Let's use printf instead (which we already trust to give us
byte-perfect output when we generate the deltas).
While we're here, let's tighten the 5-byte result size used
in the "truncated copy parameters" test. This just needs to
have enough room to attempt to parse the bogus copy command,
meaning 2 is sufficient. Using 5 was arbitrary and just
copied from the base size; since those no longer match, it's
simply confusing. Let's use a more meaningful number.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
commit: don't use generation numbers if not needed
In 3afc679b "commit: use generations in paint_down_to_common()",
the queue in paint_down_to_common() was changed to use a priority
order based on generation number before commit date. This served
two purposes:
1. When generation numbers are present, the walk guarantees
correct topological relationships, regardless of clock skew in
commit dates.
2. It enables short-circuiting the walk when the min_generation
parameter is added in d7c1ec3e "commit: add short-circuit to
paint_down_to_common()". This short-circuit helps commands
like 'git branch --contains' from needing to walk to a merge
base when we know the result is false.
The commit message for 3afc679b includes the following sentence:
This change does not affect the number of commits that are
walked during the execution of paint_down_to_common(), only
the order that those commits are inspected.
This statement is incorrect. Because it changes the order in which
the commits are inspected, it changes the order they are added to
the queue, and hence can change the number of loops before the
queue_has_nonstale() method returns true.
This change makes a concrete difference depending on the topology
of the commit graph. For instance, computing the merge-base between
consecutive versions of the Linux kernel has no effect for versions
after v4.9, but 'git merge-base v4.8 v4.9' presents a performance
regression:
v2.18.0: 0.122s
v2.19.0-rc1: 0.547s
HEAD: 0.127s
To determine that this was simply an ordering issue, I inserted
a counter within the while loop of paint_down_to_common() and
found that the loop runs 167,468 times in v2.18.0 and 635,579
times in v2.19.0-rc1.
The topology of this case can be described in a simplified way
here:
       v4.9
       |  \
       |   \
      v4.8  \
       | \   \
       |  \   |
      ...  A  B
       |  /  /
       | /  /
       |/__/
       C
Here, the "..." means "a very long line of commits". By generation
number, A and B have generation one more than C. However, A and B
have commit date higher than most of the commits reachable from
v4.8. When the walk reaches v4.8, we realize that it has PARENT1
and PARENT2 flags, so everything it can reach is marked as STALE,
including A. B has only the PARENT1 flag, so is not STALE.
When paint_down_to_common() is run using
compare_commits_by_commit_date, A and B are removed from the queue
early and C is inserted into the queue. At this point, C and the
rest of the queue entries are marked as STALE. The loop then
terminates.
When paint_down_to_common() is run using
compare_commits_by_gen_then_commit_date, B is removed from the
queue only after the many commits reachable from v4.8 are explored.
This causes the loop to run longer. The reason for this regression
is simple: the queue order is intended to not explore a commit
until everything that _could_ reach that commit is explored. From
the information gathered by the original ordering, we have no
guarantee that there is not a commit D reachable from v4.8 that
can also reach B. We gained absolute correctness in exchange for
a performance regression.
The performance regression is probably the worse of the two problems,
since these incorrect results in paint_down_to_common() are rare. The
topologies required to trigger the performance regression are less
rare, but still require multiple merge commits where the parents differ
greatly in generation number. In our example above, the commit A
is as important as the commit B to demonstrate the problem, since
otherwise the commit C will sit in the queue as non-stale just as
long in both orders.
The solution provided uses the min_generation parameter to decide
if we should use generation numbers in our ordering. When
min_generation is equal to zero, it means that the caller has no
known cutoff for the walk, so we should rely on our commit-date
heuristic as before; this is the case with merge_bases_many().
When min_generation is non-zero, then the caller knows a valuable
cutoff for the short-circuit mechanism; this is the case with
remove_redundant() and in_merge_bases_many().
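Illustratively, the change boils down to a couple of lines around the
queue setup in paint_down_to_common(), using the comparison functions
named above (a sketch, not the verbatim patch):
    /* inside paint_down_to_common(), before the walk begins */
    struct prio_queue queue = { compare_commits_by_gen_then_commit_date };

    if (!min_generation)
        /* caller has no cutoff; keep the old commit-date heuristic */
        queue.compare = compare_commits_by_commit_date;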
Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
patch-delta: handle truncated copy parameters
When we see a delta command instructing us to copy bytes
from the base, we have to read the offset and size from the
delta stream. We do this without checking whether we're at
the end of the stream, meaning we may read past the end of
the buffer.
In practice this isn't exploitable in any interesting way
because:
1. Deltas are always in packfiles, so we have at least a
20-byte trailer that we'll end up reading.
2. The worst case is that we try to perform a nonsense
copy from the base object into the result, based on
whatever was in the pack stream next. In most cases
this will simply fail due to our bounds-checks against
the base or the result.
But even if you carefully constructed a pack stream for
which it succeeds, it wouldn't perform any delta
operation that you couldn't have simply included in a
non-broken form.
But obviously it's poor form to read past the end of the
buffer we've been given. Unfortunately there's no easy way
to do a single length check, since the number of bytes we
need depends on the number of bits set in the initial
command byte. So we'll just check each byte as we parse. We
can hide the complexity in a macro; it's ugly, but not as
ugly as writing out each individual conditional.
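For illustration, such a macro could look roughly like this (the macro
name and the error label are assumptions, not the exact code in the
patch):
    /*
     * Parse one byte of a copy offset/size, but only if the
     * corresponding bit is set in the command byte, and refuse to
     * read past the end of the delta buffer.
     */
    #define PARSE_CP_BYTE(var, shift, flag) do {        \
        if (cmd & (flag)) {                             \
            if (data >= top)                            \
                goto bad_length;                        \
            (var) |= (unsigned) *data++ << (shift);     \
        }                                               \
    } while (0)
The offset and size bytes are then parsed with calls along the lines of
PARSE_CP_BYTE(cp_off, 0, 0x01), PARSE_CP_BYTE(cp_off, 8, 0x02), and so
on.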
Signed-off-by: Jeff King <peff@peff.net>
Reviewed-by: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
patch-delta: consistently report corruption
When applying a delta, if we see an opcode that cannot be
fulfilled (e.g., asking to write more bytes than the
destination has left), we break out of our parsing loop but
don't signal an explicit error. We rely on the sanity check
after the loop to see if we have leftover delta bytes or
didn't fill our result buffer.
This can silently ignore corruption when the delta buffer
ends with a bogus command and the destination buffer is
already full. Instead, let's jump into the error handler
directly when we see this case.
Note that the tests also cover the "bad opcode" case, which
already handles this correctly.
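Concretely, the pattern of the fix looks like this (a sketch using the
variable names from patch-delta.c, not the verbatim diff):
    if (unsigned_add_overflows(cp_off, cp_size) ||
        cp_off + cp_size > src_size ||
        cp_size > size)
        goto bad;    /* was: break, which silently fell through
                        to the post-loop sanity check */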
Signed-off-by: Jann Horn <jannh@google.com>
Signed-off-by: Jeff King <peff@peff.net>
Reviewed-by: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
patch-delta: fix oob read
If `cmd` is in the range [0x01,0x7f] and `cmd > top-data`, the
`memcpy(out, data, cmd)` can copy out-of-bounds data from after `delta_buf`
into `dst_buf`.
This is not an exploitable bug because triggering the bug increments the
`data` pointer beyond `top`, causing the `data != top` sanity check after
the loop to trigger and discard the destination buffer - which means that
the result of the out-of-bounds read is never used for anything.
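A sketch of the affected branch with the missing bounds check added
(abbreviated; variable names as in patch-delta.c):
    } else if (cmd) {
        /* literal insert: copy `cmd` bytes from the delta stream */
        if (cmd > size)
            break;
        if (cmd > top - data)   /* new: never read past the end of delta_buf */
            break;
        memcpy(out, data, cmd);
        out += cmd;
        data += cmd;
        size -= cmd;
    }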
Signed-off-by: Jann Horn <jannh@google.com>
Signed-off-by: Jeff King <peff@peff.net>
Reviewed-by: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
t5303: test some corrupt deltas
We don't have any tests that specifically check boundary
cases in patch_delta(). It obviously gets exercised by tests
which read from packfiles, but it's hard to create packfiles
with bogus deltas.
So let's cover some obvious boundary cases:
1. commands that overflow the result buffer
a. literal content from the delta
b. copies from a base
2. commands where the source isn't large enough
a. literal content from a truncated delta
b. copies that need more bytes than the base has
3. copy commands whose parameters are truncated
And indeed, we have problems with both 2a and 3. I've marked
these both as expect_failure, though note that because they
involve reading past the end of a buffer, they will
typically only be caught when run under valgrind or ASan.
There's one more test here, too, which just applies a basic
delta. Since all of the other tests expect failure and we
don't otherwise use "test-tool delta" in the test suite,
this gives a sanity check that the tool works at all.
These are based on an earlier patch by Jann Horn
<jannh@google.com>.
Signed-off-by: Jeff King <peff@peff.net>
Reviewed-by: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
test-delta: read input into a heap buffer
We currently read the input to test-delta by mmap()-ing it.
However, memory-checking tools like valgrind and ASan are
less able to detect reads/writes past the end of an mmap'd
buffer, because the OS is likely to give us extra bytes to
pad out the final page size. So instead, let's read into a
heap buffer.
As a bonus, this also makes it possible to write tests with
empty bases, as mmap() will complain about a zero-length
map.
This is based on a patch by Jann Horn <jannh@google.com>
which actually aligned the data at the end of a page, and
followed it with another page marked with mprotect(). That
would detect problems even without a tool like ASan, but it
was significantly more complex and may have introduced
portability problems. By comparison, this approach pushes
the complexity onto existing memory-checking tools.
Note that this could be done even more simply by using
strbuf_read_file(), but that would defeat the purpose:
strbufs generally overallocate (and at the very least
include a trailing NUL which we do not care about), which
would defeat most memory checkers.
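For illustration, the plain-POSIX shape of the approach is something
like the following (a simplified sketch, not the code in test-delta.c;
a real version would loop over short reads and handle the empty-file
case more carefully):
    #include <fcntl.h>
    #include <stdlib.h>
    #include <sys/stat.h>
    #include <unistd.h>

    static void *read_whole_file(const char *path, size_t *size_out)
    {
        struct stat st;
        void *buf;
        ssize_t n;
        int fd = open(path, O_RDONLY);

        if (fd < 0)
            return NULL;
        if (fstat(fd, &st) < 0) {
            close(fd);
            return NULL;
        }
        /*
         * Allocate exactly the file size so that tools like ASan can
         * flag any access past the end of the buffer.
         */
        buf = malloc(st.st_size);
        n = buf ? read(fd, buf, st.st_size) : -1;
        close(fd);
        if (n != st.st_size) {
            free(buf);
            return NULL;
        }
        *size_out = st.st_size;
        return buf;
    }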
Signed-off-by: Jeff King <peff@peff.net>
Reviewed-by: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
doc-diff: always use oids inside worktree
The doc-diff script immediately resolves its two endpoints
to actual object ids, so that we can reuse cached results
even if they appear under a different name. But we still use
the original name the user fed us when running "git
checkout" in our temporary worktree. This can lead to
confusing results:
- the namespace inside the worktree is different than the
one outside. In particular, "./doc-diff origin HEAD"
will resolve HEAD inside the worktree, whose detached
HEAD will be pointing at origin! As a result, such a
diff would always be empty.
- worse, we will store this result under the oid we got by
resolving HEAD in the main worktree, thus polluting our
cache
- we didn't pass --detach, which meant that using a branch
name would cause us to actually check out that branch,
making it unavailable to other worktrees.
We can solve this by feeding the already-resolved object id
to git-checkout. That naturally forces a detached HEAD, but
just to make clear our expectation, let's explicitly pass
--detach.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
worktree: delete .git/worktrees if empty after 'remove'
For cleanliness, "git worktree prune" deletes the .git/worktrees
directory if it is empty after pruning is complete.
For consistency, make "git worktree remove <path>" likewise delete
.git/worktrees if it is empty after the removal.
Signed-off-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
worktree: teach 'remove' to override lock when --force given twice
For consistency with "add -f -f" and "move -f -f" which override
the lock on a worktree, allow "remove -f -f" to do so, as well, as a
convenience.
Signed-off-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
worktree: teach 'move' to override lock when --force given twice
For consistency with "add -f -f", which allows a missing but locked
worktree path to be re-used, allow "move -f -f" to override a lock,
as well, as a convenience.
Signed-off-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
worktree: teach 'add' to respect --force for registered but missing path
For safety, "git worktree add <path>" will refuse to add a new
worktree at <path> if <path> is already associated with a worktree
entry, even if <path> is missing (for instance, has been deleted or
resides on non-mounted removable media or network share). The typical
way to re-create a worktree at <path> in such a situation is either to
prune all "broken" entries ("git worktree prune") or to selectively
remove the worktree entry manually ("git worktree remove <path>").
However, neither of these approaches ("prune" nor "remove") is
especially convenient, and they may be unsuitable for scripting when a
tool merely wants to re-use a worktree if it exists or create it from
scratch if it doesn't (much as a tool might use "mkdir -p" to re-use
or create a directory).
Therefore, teach 'add' to respect --force as a convenient way to
re-use a path already associated with a worktree entry if the path is
non-existent. For a locked worktree, require --force to be specified
twice.
Signed-off-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
worktree: disallow adding same path multiple times
A given path should only ever be associated with a single registered
worktree. This invariant is enforced by refusing to create a new
worktree at a given path if that path already exists. For example:
$ git worktree add -q --detach foo
$ git worktree add -q --detach foo
fatal: 'foo' already exists
However, the check can be fooled, and the invariant broken, if the
path is missing. Continuing the example:
$ rm -fr foo
$ git worktree add -q --detach foo
$ git worktree list
... eadebfe [master]
.../foo eadebfe (detached HEAD)
.../foo eadebfe (detached HEAD)
This "corruption" leads to the unfortunate situation in which the
worktree can not be removed:
$ git worktree remove foo
fatal: validation failed, cannot remove working tree: '.../foo'
does not point back to '.git/worktrees/foo'
Nor can the bogus entry be pruned:
$ git worktree prune -v
$ git worktree list
... eadebfe [master]
.../foo eadebfe (detached HEAD)
.../foo eadebfe (detached HEAD)
without first deleting the worktree directory manually:
$ rm -fr foo
$ git worktree prune -v
Removing .../foo: gitdir file points to non-existent location
Removing .../foo1: gitdir file points to non-existent location
$ git worktree list
... eadebfe [master]
or by manually deleting the worktree entry in .git/worktrees.
To address this problem, upgrade "git worktree add" validation to
allow worktree creation only if the given path is not already
associated with an existing worktree (even if the path itself is
non-existent), thus preventing such bogus worktree entries from being
created in the first place.
Reported-by: Jeff King <peff@peff.net>
Signed-off-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
worktree: prepare for more checks of whether path can become worktree
Certain conditions must be met for a path to be a valid candidate as the
location of a new worktree; for instance, the path must not exist or
must be an empty directory. Although the number of conditions is small,
new conditions will soon be added so factor out the existing checks into
a separate function to avoid further bloating add_worktree().
Signed-off-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
worktree: generalize delete_git_dir() to reduce code duplication
prune_worktrees() and delete_git_dir() both remove worktree
administrative entries from .git/worktrees, and their implementations
are nearly identical. The only difference is that prune_worktrees() is
also capable of removing a bogus non-worktree-related file from
.git/worktrees.
Simplify by extending delete_git_dir() to handle the little bit of
extra functionality needed by prune_worktrees(), and drop the
effectively duplicate code from the latter.
Signed-off-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
worktree: move delete_git_dir() earlier in file for upcoming new callers
This is a pure code movement to avoid having to forward-declare the
function when new callers are subsequently added.
Signed-off-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
worktree: don't die() in library function find_worktree()
Callers don't expect library function find_worktree() to die(); they
expect it to return the named worktree if found, or NULL if not.
Although find_worktree() itself never invokes die(), it calls
real_pathdup() with 'die_on_error' incorrectly set to 'true', thus will
die() indirectly if the user-provided path is not to real_pathdup()'s
liking. This can be observed, for instance, with any git-worktree
command which searches for an existing worktree:
$ git worktree unlock foo
fatal: 'foo' is not a working tree
$ git worktree unlock foo/bar
fatal: Invalid path '.../foo': No such file or directory
The first error message is the expected one from "git worktree unlock"
not finding the specified worktree; the second is from find_worktree()
invoking real_pathdup() incorrectly and die()ing prematurely.
Aside from the inconsistent error message between the two cases, this
bug hasn't otherwise been a serious problem since existing callers all
die() anyhow when the worktree can't be found. However, that may not be
true of callers added in the future, so fix find_worktree() to avoid
die()ing.
Signed-off-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
am: avoid directory rename detection when calling recursive merge machinery
Let's say you have the following three trees, where Base is from one commit
behind either master or branch:
Base : bar_v1, foo/{file1, file2, file3}
branch: bar_v2, foo/{file1, file2}, goo/file3
master: bar_v3, foo/{file1, file2, file3}
Using git-am (or am-based rebase) to apply the changes from branch onto
master results in the following tree:
Result: bar_merged, goo/{file1, file2, file3}
This is not what users want; they did not rename foo/ -> goo/, they only
renamed one file within that directory. The reason this happens is am
constructs fake trees (via build_fake_ancestor()) of the following form:
Base_bfa : bar_v1, foo/file3
branch_bfa: bar_v2, goo/file3
Combining these two trees with master's tree:
master: bar_v3, foo/{file1, file2, file3},
You can see that merge_recursive_generic() would see branch_bfa as renaming
foo/ -> goo/, and master as just adding both foo/file1 and foo/file2. As
such, it ends up with goo/{file1, file2, file3}
The core problem is that am does not have access to the original trees; it
can only construct trees using the blobs involved in the patch. As such,
it is not safe to perform directory rename detection within am -3.
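With the knob added by the following patch ("merge-recursive: add
ability to turn off directory rename detection"), am's three-way
fallback can then do something along these lines before invoking the
merge machinery (a sketch; the exact field name in struct merge_options
is an assumption here):
    struct merge_options o;

    init_merge_options(&o);
    /* assumed name for the new merge-recursive knob */
    o.detect_directory_renames = 0;
    /* ... then call merge_recursive_generic() with the fake trees ... */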
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
merge-recursive: add ability to turn off directory rename detection
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
t3401: add another directory rename testcase for rebase and am
Similar to commit 16346883ab ("t3401: add directory rename testcases for
rebase and am", 2018-06-27), add another testcase for directory rename
detection. This new testcase differs in that it showcases a situation
where no directory rename was performed, but which some backends
incorrectly detect.
As with the other testcase, run this in conjunction with each of the
types of rebases:
git-rebase--interactive
git-rebase--am
git-rebase--merge
and also use the same testcase for
git am --3way
Reported-by: Nikolay Kasyanov <corrmage@gmail.com>
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
mailinfo: support format=flowed
Add best-effort support for patches sent using format=flowed (RFC 3676).
Remove leading spaces ("unstuff"), remove soft line breaks (indicated
by space + newline), but leave the signature separator (dash dash space
newline) alone.
Warn in git am when encountering a format=flowed patch, because any
trailing spaces would most probably be lost, as the sending MUA is
encouraged to remove them when preparing the email.
Provide a test patch formatted by Mozilla Thunderbird 60 using its
default configuration. It reuses the contents of the file mailinfo.c
before and after this patch.
Signed-off-by: Rene Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Documentation/Makefile: make manpage-base-url.xsl generation quieter
The exact sed command to generate manpage-base-url.xsl appears in
the output, unlike the rules for other files that by default only
show summary.
Make the output for this rule similar to all the other rules by
printing a short status message instead of the whole command.
Signed-off-by: Tim Schumacher <timschumi@gmx.de>
Reviewed-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
show_dirstat: simplify same-content check
We use two nested conditionals to store a content_changed
variable, but only bother to look at the result once,
directly after we set it. We can drop the variable entirely
and just use a single "if".
This needless complexity is the result of 2ff3a80334 (Teach
--dirstat not to completely ignore rearranged lines within a
file, 2011-04-11). Before that, we held onto the
content_changed variable much longer.
While we're touching the condition, we can swap out oidcmp()
for !oideq(). Our coccinelle patches didn't previously find
this case because of the intermediate variable, but now it's
a simple boolean in a conditional.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
read-cache: use oideq() in ce_compare functions
These functions return the full oidcmp() value, but the
callers really only care whether it is non-zero. We can use
the more strict !oideq(), which a compiler may be able to
optimize further.
This does change the meaning of the return value subtly, but
it's unlikely that anybody would try to use them for
ordering. They're static-local in this file, and they
already return other error values that would confuse an
ordering (e.g., open() failure gives -1).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
convert hashmap comparison functions to oideq()
The comparison functions used for hashmaps don't care about
strict ordering; they only want to compare entries for
equality. Let's use the oideq() function instead, which can
potentially be better optimized. Note that unlike the
previous patches mass-converting calls like "!oidcmp()",
this patch could actually provide an improvement even with
the current implementation. Those comparison functions are
passed around as function pointers, so at compile-time the
compiler cannot realize that the caller (which is in another
file completely) will treat the return value as a boolean.
Note that this does change the return values in quite a
subtle way (it's still an int, but now the sign bit is
irrelevant for ordering). Because of their funny
hashmap-specific signature, it's unlikely that any of these
static functions would be reused for more generic ordering.
But to be double-sure, let's stop using "cmp" in their
names.
Calling them "eq" doesn't quite work either, because the
hashmap convention is actually _inverted_. "0" means "same",
and non-zero means "different". So I've called them "neq" by
convention here.
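As an illustration of the resulting shape (using a made-up entry type
and the era's hashmap_cmp_fn-style signature, not a specific function
from the patch):
    struct example_entry {
        struct hashmap_entry ent;
        struct object_id oid;
    };

    /*
     * hashmap convention: 0 means "same", non-zero means "different",
     * hence "neq" rather than "eq".
     */
    static int example_entry_neq(const void *unused_cmp_data,
                                 const void *entry, const void *entry_or_key,
                                 const void *unused_keydata)
    {
        const struct example_entry *a = entry;
        const struct example_entry *b = entry_or_key;

        return !oideq(&a->oid, &b->oid);
    }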
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
convert "hashcmp() != 0" to "!hasheq()"
This rounds out the previous three patches, covering the
inequality logic for the "hash" variant of the functions.
As with the previous three, the accompanying code changes
are the mechanical result of applying the coccinelle patch;
see those patches for more discussion.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
convert "oidcmp() != 0" to "!oideq()"
This is the flip side of the previous two patches: checking
for a non-zero oidcmp() can be more strictly expressed as
inequality. Like those patches, we write "!= 0" in the
coccinelle transformation, which covers by isomorphism the
more common:
if (oidcmp(E1, E2))
As with the previous two patches, this patch can be achieved
almost entirely by running "make coccicheck"; the only
differences are manual line-wrap fixes to match the original
code.
There is one thing to note for anybody replicating this,
though: coccinelle 1.0.4 seems to miss the case in
builtin/tag.c, even though it's basically the same as all
the others. Running with 1.0.7 does catch this, so
presumably it's just a coccinelle bug that was fixed in the
interim.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
convert "hashcmp() == 0" to hasheq()
This is the partner patch to the previous one, but covering
the "hash" variants instead of "oid". Note that our
coccinelle rule is slightly more complex to avoid triggering
the call in hasheq().
I didn't bother to add a new rule to convert:
- hasheq(E1->hash, E2->hash)
+ oideq(E1, E2)
Since these are new functions, there won't be any such
existing callers. And since most of the code is already
using oideq, we're not likely to introduce new ones.
We might still see "!hashcmp(E1->hash, E2->hash)" from topics
in flight. But because our new rule comes after the existing
ones, that should first get converted to "!oidcmp(E1, E2)"
and then to "oideq(E1, E2)".
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
convert "oidcmp() == 0" to oideq()
Using the more restrictive oideq() should, in the long run,
give the compiler more opportunities to optimize these
callsites. For now, this conversion should be a complete
noop with respect to the generated code.
The result is also perhaps a little more readable, as it
avoids the "zero is equal" idiom. Since it's so prevalent in
C, I think seasoned programmers tend not to even notice it
anymore, but it can sometimes make for awkward double
negations (e.g., we can drop a few !!oidcmp() instances
here).
This patch was generated almost entirely by the included
coccinelle patch. This mechanical conversion should be
completely safe, because we check explicitly for cases where
oidcmp() is compared to 0, which is what oideq() is doing
under the hood. Note that we don't have to catch "!oidcmp()"
separately; coccinelle's standard isomorphisms make sure the
two are treated equivalently.
I say "almost" because I did hand-edit the coccinelle output
to fix up a few style violations (it mostly keeps the
original formatting, but sometimes unwraps long lines).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
introduce hasheq() and oideq()
The main comparison functions we provide for comparing
object ids are hashcmp() and oidcmp(). These are more
flexible than a strict equality check, since they also
express ordering. That makes them useful for sorting and
binary searching. However, it also makes them potentially
slower than a strict equality check. Consider this C code,
which is traditionally what our hashcmp has looked like:
#include <string.h>
int hashcmp(const unsigned char *a, const unsigned char *b)
{
return memcmp(a, b, 20);
}
Compiling with "gcc -O2 -S -fverbose-asm", the generated
assembly shows that we actually call memcmp(). But if we
change this to a strict equality check:
return !memcmp(a, b, 20);
we get a faster inline version:
movq (%rdi), %rax # MEM[(void *)a_4(D)], MEM[(void *)a_4(D)]
movq 8(%rdi), %rdx # MEM[(void *)a_4(D)], tmp101
xorq (%rsi), %rax # MEM[(void *)b_5(D)], tmp94
xorq 8(%rsi), %rdx # MEM[(void *)b_5(D)], tmp93
orq %rax, %rdx # tmp94, tmp93
jne .L2 #,
movl 16(%rsi), %eax # MEM[(void *)b_5(D)], tmp104
cmpl %eax, 16(%rdi) # tmp104, MEM[(void *)a_4(D)]
je .L5 #,
Obviously our hashcmp() doesn't include the "!". But because
it's an inline function, optimizing compilers are able to
see "!hashcmp(a,b)" in calling code and take advantage of
this case. So there has been no value thus far in
introducing a more restricted interface for doing strict
equality checks.
But as Git learns about more values for the_hash_algo, our
hashcmp() will grow more complicated and may even delegate
at runtime to functions optimized specifically for that hash
size. That breaks the inline connection we have, and the
compiler will have to assume that the caller really cares
about the sign and magnitude of the memcmp() result, even
though the vast majority don't.
We can solve that by introducing a hasheq() function (and
matching oideq() wrapper), which callers can use to make it
clear that they only care about equality. For now, the
implementation will literally be "!hashcmp()", but it frees
us up later to introduce code optimized specifically for the
equality check.
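For now the wrappers are essentially trivial; roughly:
    static inline int hasheq(const unsigned char *sha1, const unsigned char *sha2)
    {
        return !hashcmp(sha1, sha2);
    }

    static inline int oideq(const struct object_id *oid1, const struct object_id *oid2)
    {
        return hasheq(oid1->hash, oid2->hash);
    }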
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
coccinelle: use <...> for function exclusion
Sometimes we want to suppress a coccinelle transformation
inside a particular function. For example, in finding
conversions of hashcmp() to oidcmp(), we should not convert
the call in oidcmp() itself, since that would cause infinite
recursion. We write that like this:
@@
identifier f != oidcmp;
expression E1, E2;
@@
f(...) {...
- hashcmp(E1->hash, E2->hash)
+ oidcmp(E1, E2)
...}
to match the interior of any function _except_ oidcmp().
Unfortunately, this doesn't catch all cases (e.g., the one
in sequencer.c that this patch fixes). The problem, as
explained by one of the Coccinelle developers in [1], is:
For transformation, A ... B requires that B occur on every
execution path starting with A, unless that execution path
ends up in error handling code. (eg, if (...) { ...
return; }). Here your A is the start of the function. So
you need a call to hashcmp on every path through the
function, which fails when you add ifs.
[...]
Another issue with A ... B is that by default A and B
should not appear in the matched region. So your original
rule matches only the case where every execution path
contains exactly one call to hashcmp, not more than one.
One way to solve this is to put the pattern inside an
angle-bracket pattern like "<... P ...>", which allows zero
or more matches of P. That works (and is what this patch
does), but it has one drawback: it matches more than we care
about, and Coccinelle uses extra CPU. Here are timings for
"make coccicheck" before and after this patch:
[before]
real 1m27.122s
user 7m34.451s
sys 0m37.330s
[after]
real 2m18.040s
user 10m58.310s
sys 0m41.549s
That's not ideal, but it's more important for this to be
correct than to be fast. And coccicheck is already fairly
slow (and people don't run it for every single patch). So
it's an acceptable tradeoff.
There _is_ a better way to do it, which is to record the
position at which we find hashcmp(), and then check it
against the forbidden function list. Like:
@@
position p : script:python() { p[0].current_element != "oidcmp" };
expression E1,E2;
@@
- hashcmp@p(E1->hash, E2->hash)
+ oidcmp(E1, E2)
This is only a little slower than the current code, and does
the right thing in all cases. Unfortunately, not all builds
of Coccinelle include python support (including the ones in
Debian). Requiring it may mean that fewer people can easily
run the tool, which is worse than it simply being a little
slower.
We may want to revisit this decision in the future if:
- builds with python become more common
- we find more uses for python support that tip the
cost-benefit analysis
But for now this patch sticks with the angle-bracket
solution, and converts all existing cocci patches. This
fixes only one missed case in the current code, though it
makes a much better difference for some new rules I'm adding
(converting "!hashcmp()" to "hasheq()" misses over half the
possible conversions using the old form).
[1] https://public-inbox.org/git/alpine.DEB.2.21.1808240652370.2344@hadrien/
Helped-by: Julia Lawall <julia.lawall@lip6.fr>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
.gitattributes: add conflict-marker-size for relevant files
Some files in git.git contain lines that look like conflict markers,
either in examples or tests, or in the case of Documentation/gitk.txt
because of the asciidoc heading.
Having conflict markers the same length as the actual content can be
confusing for humans, and is impossible to handle for tools like 'git
rerere'. Work around that by setting the 'conflict-marker-size'
attribute for those files to 32, which makes the conflict markers
unambiguous.
Helped-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Thomas Gummerer <t.gummerer@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
chainlint: match "quoted" here-doc tags
A here-doc tag can be quoted ('EOF'/"EOF") or escaped (\EOF) to suppress
interpolation within the body. chainlint recognizes single-quoted and
escaped tags, but does not know about double-quoted tags. For
completeness, teach it to recognize double-quoted tags, as well.
Signed-off-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
commit-graph: define GIT_TEST_COMMIT_GRAPH
The commit-graph feature is tested in isolation by
t5318-commit-graph.sh and t6600-test-reach.sh, but there are many
more interesting scenarios involving commit walks. Many of these
scenarios are covered by the existing test suite, but we need to
maintain coverage when the optional commit-graph structure is not
present.
To allow running the full test suite with the commit-graph present,
add a new test environment variable, GIT_TEST_COMMIT_GRAPH. Similar
to GIT_TEST_SPLIT_INDEX, this variable makes every Git command try
to load the commit-graph when parsing commits, and writes the
commit-graph file after every 'git commit' command.
There are a few tests that rely on commits not existing in
pack-files to trigger important events, so manually set
GIT_TEST_COMMIT_GRAPH to false for the necessary commands.
There is one test in t6024-recursive-merge.sh that relies on the
merge-base algorithm picking one of two ambiguous merge-bases, and
the commit-graph feature changes which merge-base is picked.
Helped-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
tests: fix non-portable iconv invocation
The iconv that comes with a FreeBSD 11.2-RELEASE-p2 box I have access
to doesn't support the SHIFT-JIS encoding. Guard a test added in
e92d62253 ("convert: add round trip check based on
'core.checkRoundtripEncoding'", 2018-04-15) first released with Git
v2.18.0 with a prerequisite that checks for its availability.
The iconv command is in POSIX, and we have numerous tests
unconditionally relying on its ability to convert ASCII, UTF-8 and
UTF-16, but unconditionally relying on the presence of more obscure
encodings isn't portable.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Reviewed-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
tests: fix non-portable "${var:-"str"}" construct
On both AIX 7200-00-01-1543 and FreeBSD 11.2-RELEASE-p2 the
"${var:-"str"}" syntax means something different than what it does
under the bash or dash shells.
Both will consider the start of the new unescaped quotes to be a new
argument to test_expect_success, resulting in the following error:
error: bug in the test script: 'git diff-tree initial # magic
is (not' does not look like a prereq
Fix this by removing the redundant quotes. There's no need for them,
and the resulting code works under all the aforementioned shells. This
fixes a regression in c2f1d3989 ("t4013: test new output from diff
--abbrev --raw", 2017-12-03) first released with Git v2.16.0.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
rerere: add note about files with existing conflict markers
When a file contains lines that look like conflict markers, 'git
rerere' may not be able to record a conflict resolution.
Emphasize that in the man page, and mention a possible workaround for
the issue.
Suggested-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Thomas Gummerer <t.gummerer@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
rerere: mention caveat about unmatched conflict markers
4af3220 ("rerere: teach rerere to handle nested conflicts",
2018-08-05) introduced slightly better behaviour if the user commits
conflict markers and then gets another conflict in 'git rerere'.
However this is just a heuristic to punt on such conflicts better, and
doesn't deal with any unmatched conflict markers. Make that clearer
in the documentation.
Suggested-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Thomas Gummerer <t.gummerer@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
commit-reach: correct accidental #include of C file
Without this change, the build breaks with clang:
libgit/ref-filter.pic.o: multiple definition of 'filter_refs'
libgit/commit-reach.pic.o: previous definition here
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Git 2.19-rc1
Signed-off-by: Junio C Hamano <gitster@pobox.com>
l10n: ru.po: update Russian translation
Signed-off-by: Dimitriy Ryazantcev <dimitriy.ryazantcev@gmail.com>
Getting ready for -rc1
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Merge branch 'ja/i18n-message-fixes'
Messages fix.
* ja/i18n-message-fixes:
i18n: fix mistakes in translated strings
Merge branch 'ds/commit-graph-fsck'
Finishing touches to doc.
* ds/commit-graph-fsck:
config: fix commit-graph related config docs
Merge branch 'js/range-diff'
Finishing touches to help string.
* js/range-diff:
range-diff: update stale summary of --no-dual-color
Merge branch 'sg/test-rebase-editor-fix'
Test fix.
* sg/test-rebase-editor-fix:
t/lib-rebase.sh: support explicit 'pick' commands in 'fake_editor.sh'
Merge branch 'jk/hashcmp-optim-for-2.19'
Partially revert the support for multiple hash functions to regain
hash comparison performance; we'd think of a way to do this better
in the next cycle.
* jk/hashcmp-optim-for-2.19:
hashcmp: assert constant hash size
Merge branch 'ab/test-must-be-empty-for-master'
Test fixes.
* ab/test-must-be-empty-for-master:
t6018-rev-list-glob: fix 'empty stdin' test
Merge branch 'sg/t3420-autostash-fix'
Test fixes.
* sg/t3420-autostash-fix:
t3420-rebase-autostash: don't try to grep non-existing files
Merge branch 'sg/t3903-missing-fix'
Test fixes.
* sg/t3903-missing-fix:
t3903-stash: don't try to grep non-existing file
Merge branch 'sg/t7501-thinkofix'
Test fixes.
* sg/t7501-thinkofix:
t7501-commit: drop silly command substitution
Merge branch 'sg/t0020-conversion-fix'
Test fixes.
* sg/t0020-conversion-fix:
t0020-crlf: check the right file
Merge branch 'sg/t4051-fix'
Test fixes.
* sg/t4051-fix:
t4051-diff-function-context: read the right file
Merge branch 'js/larger-timestamps'
Portability fix.
* js/larger-timestamps:
commit: use timestamp_t for author_date_slab
Merge branch 'jk/use-compat-util-in-test-tool'
Dev tool update.
* jk/use-compat-util-in-test-tool:
test-tool.h: include git-compat-util.h
Merge branch 'sg/test-must-be-empty'
Test fixes.
* sg/test-must-be-empty:
tests: use 'test_must_be_empty' instead of 'test_cmp <empty> <out>'
tests: use 'test_must_be_empty' instead of 'test_cmp /dev/null <out>'
tests: use 'test_must_be_empty' instead of 'test ! -s'
tests: use 'test_must_be_empty' instead of '! test -s'
Merge branch 'rs/opt-updates'
"git cmd -h" updates.
* rs/opt-updates:
parseopt: group literal string alternatives in argument help
remote: improve argument help for add --mirror
checkout-index: improve argument help for --stage
Merge branch 'nd/complete-config-vars'
"git help --config" (which is used in command line completion)
missed the configuration variables not described in the main
config.txt file but are described in another file that is included
by it, which has been corrected.
* nd/complete-config-vars:
generate-cmdlist.sh: collect config from all config.txt files
Merge branch 'ab/unconditional-free-and-null'
Code clean-up.
* ab/unconditional-free-and-null:
refactor various if (x) FREE_AND_NULL(x) to just FREE_AND_NULL(x)
Merge branch 'ep/worktree-quiet-option'
"git worktree" command learned "--quiet" option to make it less
verbose.
* ep/worktree-quiet-option:
worktree: add --quiet option
Merge branch 'sm/branch-sort-config'
"git branch --list" learned to take the default sort order from the
'branch.sort' configuration variable, just like "git tag --list"
pays attention to 'tag.sort'.
* sm/branch-sort-config:
branch: support configuring --sort via .gitconfig
Merge branch 'nd/config-core-checkstat-doc'
The meaning of the possible values the "core.checkStat"
configuration variable can take were not adequately documented,
which has been fixed.
* nd/config-core-checkstat-doc:
config.txt: clarify core.checkStat
tests: fix and add lint for non-portable grep --file
The --file option to grep isn't in POSIX[1], but -f is[1]. Let's check
for that in the future, and fix the portability regression in
f237c8b6fe ("commit-graph: implement git-commit-graph write",
2018-04-02) that broke e.g. AIX.
1. http://pubs.opengroup.org/onlinepubs/009695399/utilities/grep.html
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
tests: fix version-specific portability issue in Perl JSON
The test guarded by PERLJSON added in 75459410ed ("json_writer: new
routines to create JSON data", 2018-07-13) assumed that a JSON boolean
value like "true" or "false" would be represented as "1" or "0" in
Perl.
This behavior can't be relied upon, e.g. with JSON.pm 2.50 and
JSON::PP. A JSON::PP::Boolean object will be represented as "true"
or "false". To work around this let's check if we have any refs left
after we check for hashes and arrays, assume those are JSON objects,
and coerce them to a known boolean value.
The behavior of this test still looks odd to me. Why implement our own
ad-hoc encoder just for some one-off test, as opposed to say Perl's
own Data::Dumper with Sortkeys et al? But with this change it works,
so let's leave it be.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
tests: use shorter labels in chainlint.sed for AIX sed
Improve the portability of chainlint by using shorter labels. On
AIX sed will complain about:
sed: 0602-417 The label :hereslurp is greater than eight
characters
This, in combination with the previous fix to this file makes
GIT_TEST_CHAIN_LINT=1 (which is the default) working again on AIX
without issues, and the "gmake check-chainlint" test also passes.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Acked-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
range-diff: update stale summary of --no-dual-color
275267937b (range-diff: make dual-color the default mode, 2018-08-13)
replaced --dual-color with --no-dual-color but left the option's
summary untouched. Rewrite the summary to describe --no-dual-color
rather than dual-color.
Helped-by: Jonathan Nieder <jrnieder@gmail.com>
Helped-by: Johannes Schindelin <Johannes.Schindelin@gmx.de>
Signed-off-by: Kyle Meyer <kyle@kyleam.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Document update for nd/unpack-trees-with-cache-tree
Fix an incorrect comment in the new code added in b4da37380b
(unpack-trees: optimize walking same trees with cache-tree -
2018-08-18) and document the new test variable that is enabled by
default in test-lib.sh, added in 4592e6080f (cache-tree: verify valid
cache-tree in the test suite - 2018-08-18).
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
tests: fix comment syntax in chainlint.sed for AIX sed
Change a comment in chainlint.sed to appease AIX sed, which would
previously print this error:
sed: # stash for later printing is not a recognized function
1. https://public-inbox.org/git/CAPig+cTTbU5HFMKgNyrxTp3+kcK46-Fn=4ZH6zDt1oQChAc3KA@mail.gmail.com/
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Acked-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
tests: fix and add lint for non-portable seq
The seq command is not in POSIX, and doesn't exist on
e.g. OpenBSD. We've had the test_seq wrapper since d17cf5f3a3 ("tests:
Introduce test_seq", 2012-08-04), but use of it keeps coming back,
e.g. in the recently added "fetch negotiator" tests being added here.
So let's also add a check to "make test-lint". The regex is aiming to
capture the likes of $(seq ..) and "seq" as a stand-alone command,
without capturing some existing cases where we e.g. have files called
"seq", as \bseq\b would do.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
tests: fix and add lint for non-portable head -c N
The "head -c BYTES" option is non-portable (not in POSIX[1]). Change
such invocations to use the test_copy_bytes wrapper added in
48860819e8 ("t9300: factor out portable "head -c" replacement",
2016-06-30).
This fixes a test added in 9d2e330b17 ("ewah_read_mmap: bounds-check
mmap reads", 2018-06-14), which has been breaking
t5310-pack-bitmaps.sh on OpenBSD since 2.18.0. The OpenBSD ports
already have a similar workaround after their upgrade to 2.18.0[2].
I have not tested this on IRIX, but according to 4de0bbd898 ("t9300:
use perl "head -c" clone in place of "dd bs=1 count=16000" kluge",
2010-12-13) this invocation would have broken things there too.
Also, change a valgrind-specific codepath in test-lib.sh to use this
wrapper. Given where valgrind runs I don't think this would ever
become a portability issue in practice, but it's easier to just use
the wrapper than introduce some exception for the "make test-lint"
check being added here.
1. http://pubs.opengroup.org/onlinepubs/9699919799/utilities/head.html
2. https://github.com/openbsd/ports/commit/08d5d82eaefe5cf2f125ecc0c6a57df9cf91350c#diff-f7d3c4fabeed1691620d608f1534f5e5
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
i18n: fix mistakes in translated strings
Fix typos and convert a question that does not expect a reply into
a simple advice message.
Signed-off-by: Jean-Noël Avila <jn.avila@free.fr>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
config: fix commit-graph related config docs
The core.commitGraph config setting was accidentally removed from
the config documentation. In that same patch, the config setting
that writes a commit-graph during garbage collection was incorrectly
written to the doc as "gc.commitGraph" instead of "gc.writeCommitGraph".
Reported-by: Szeder Gábor <szeder.dev@gmail.com>
Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
append_signoff: use size_t for string offsets
The append_signoff() function takes an "int" to specify the
number of bytes to ignore. Most callers just pass 0, and the
remainder use ignore_non_trailer() to skip over cruft.
That function also returns an int, and uses them internally.
On systems where size_t is larger than an int (i.e., most
64-bit systems), dealing with a ridiculously large commit
message could end up overflowing an int, producing
surprising results (e.g., returning a negative offset, which
would cause us to look outside the original string).
Let's consistently use size_t for these offsets through this
whole stack. As a bonus, this makes the meaning of
"ignore_footer" as an offset (and not a boolean) more clear.
But while we're here, let's also document the interface.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
sequencer: ignore "---" divider when parsing trailers
When the sequencer code appends a signoff or cherry-pick
origin, it uses the default trailer-parsing options, which
treat "---" as the end of the commit message. As a result,
it may be fooled by a commit message that contains that
string and fail to find the existing trailer block. Even
more confusing, the actual append code does not know about
"---", and always appends to the end of the string. This can
lead to bizarre results. E.g., appending a signoff to a
commit message like this:
subject
body
---
these dashes confuse the parser!
Signed-off-by: A
results in output with a final block like:
Signed-off-by: A
Signed-off-by: A
The parser thinks the final line of the message is "body",
and ignores everything else, claiming there are no trailers.
So we output an extra newline separator (wrong) and add a
duplicate signoff (also wrong).
Since we know we are feeding a pure commit message, we can
simply tell the parser to ignore the "---" divider.
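In other words, the sequencer's trailer parsing ends up looking roughly
like this (using the option and function introduced elsewhere in this
series; abbreviated):
    struct trailer_info info;
    struct process_trailer_options opts = PROCESS_TRAILER_OPTIONS_INIT;

    opts.no_divider = 1;    /* msgbuf is a pure commit message */
    trailer_info_get(&info, msgbuf->buf, &opts);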
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
pretty, ref-filter: format %(trailers) with no_divider option
In both of these cases we know that we are feeding the
trailer-parsing code a pure commit message. We should tell
it so, which avoids false positives for a commit message
that contains a "---" line.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
interpret-trailers: allow suppressing "---" divider
Even with the newly-tightened "---" parser, it's still
possible for a commit message to trigger a false positive if
it contains something like "--- foo". If the caller knows
that it has only a single commit message, it can now tell us
with the "--no-divider" option, eliminating any false
positives.
If we were designing this from scratch, I'd probably make
this the default. But we've advertised the "---" behavior in
the documentation since interpret-trailers has existed.
Since it's meant to be scripted, breaking that would be a
bad idea.
Note that the logic is in the underlying trailer.c code,
which is used elsewhere. The default there will keep the
current behavior, but many callers will benefit from setting
this new option. That's left for future patches.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
interpret-trailers: tighten check for "---" patch boundary
The interpret-trailers command accepts not only raw commit
messages, but it also can manipulate trailers in
format-patch output. That means it must find the "---"
boundary separating the commit message from the patch.
However, it does so by looking for any line starting with
"---", regardless of whether there is further content.
This is overly lax compared to the parsing done in
mailinfo.c's patchbreak(), and may cause false positives
(e.g., t/perf output tables use dashes; if you cut and
paste them into your commit message, it fools the parser).
We could try to reuse patchbreak() here, but it actually has
several heuristics that are not of interest to us (e.g.,
matching "diff -" without a three-dash separator or even a
CVS "Index:" line). We're not interested in taking in
whatever random cruft people may send, but rather handling
git-formatted patches.
Note that the existing documentation was written in a loose
way, so technically we are changing the behavior from what
it said. But this should implement the original intent in a
more accurate way.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
trailer: pass process_trailer_opts to trailer_info_get()
Most of the trailer code has an "opts" struct which is
filled in by the caller. We don't pass it down to
trailer_info_get(), which does the initial parsing, because
there hasn't yet been a need to do so.
Let's start passing it down in preparation for adding new
options. Note that there's a single caller which doesn't
otherwise have such an options struct. Since it's just one
caller (that we'd have to modify anyway), let's not bother
with any special treatment like accepting a NULL options
struct, and just have it allocate one with the defaults.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
trailer: use size_t for iterating trailer list
We store the length of the trailers list in a size_t. So on
a 64-bit system with a 32-bit int, in the unlikely case that
we manage to actually allocate a list with 2^31 entries,
we'd loop forever trying to iterate over it (our "int" would
wrap to negative before exceeding info->trailer_nr).
This probably doesn't matter in practice. Each entry is at
least a pointer plus a non-empty string, so even without
malloc overhead or the memory to hold the original string
we're parsing from, you'd need to allocate tens of
gigabytes. But it's easy enough to do it right.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
trailer: use size_t for string offsets
Many of the string-parsing functions inside trailer.c return
integer offsets into the string (e.g., to point to the end
of the trailer block). Several of these use an "int" to
return or store the offsets. On a system where "size_t" is
much larger than "int" (e.g., most 64-bit ones), it's easy
to feed a gigantic commit message that results in a negative
offset. This can result in us reading memory before the
string (if the int is used as an index) or far after (if
it's implicitly cast to a size_t by passing to a strbuf
function).
Let's fix this by using size_t for all string offsets. Note
that several of the functions need ssize_t, since they use
"-1" as a sentinel value. The interactions here can be
pretty subtle. E.g., end_of_title in find_trailer_start()
does not itself need to be signed, but it is compared to the
result of last_line(), which is. That promotes the latter to
unsigned, and the ">=" does not behave as you might expect.
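A tiny standalone example of that promotion pitfall (not code from
trailer.c):
    #include <stdio.h>
    #include <sys/types.h>

    int main(void)
    {
        size_t pos = 5;
        ssize_t sentinel = -1;  /* e.g. "no title line found" */

        /*
         * The usual arithmetic conversions turn -1 into SIZE_MAX here,
         * so the comparison goes the "wrong" way.
         */
        if (pos >= sentinel)
            puts("pos >= sentinel");
        else
            puts("pos < sentinel");  /* this is what actually prints */
        return 0;
    }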
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
t/lib-rebase.sh: support explicit 'pick' commands in 'fake-editor.sh'
The verbose output of the test 'reword without issues functions as
intended' in 't3423-rebase-reword.sh', added in a9279c6785 (sequencer:
do not squash 'reword' commits when we hit conflicts, 2018-06-19),
contains the following error output:
sed: -e expression #1, char 2: extra characters after command
This error comes from within the 'fake-editor.sh' script created by
'lib-rebase.sh's set_fake_editor() function, and the root cause is the
FAKE_LINES="pick 1 reword 2" variable in the test in question, in
particular the "pick" word. 'fake-editor.sh' assumes 'pick' to be the
default rebase command and doesn't support an explicit 'pick' command
in FAKE_LINES. As a result, 'pick' will be used instead of a line
number when assembling the following 'sed' script:
sed -n picks/^pick/pick/p
which triggers the aforementioned error.
Luckily, this didn't affect the test's correctness: the erroring 'sed'
command doesn't write anything to the todo script, and processing the
rest of FAKE_LINES generates the desired todo script, as if that
'pick' command were not there at all.
The minimal fix would be to remove the 'pick' word from FAKE_LINES,
but that would leave us susceptible to similar issues in the future.
Instead, teach the fake-editor script to recognize an explicit 'pick'
command, which is still a fairly trivial change.
In the future we might want to consider reinforcing this fake editor
script with an &&-chain and stricter parsing of the FAKE_LINES
variable (e.g. to error out when encountering unknown rebase commands
or commands and line numbers in the wrong order).
Signed-off-by: SZEDER Gábor <szeder.dev@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
hashcmp: assert constant hash size
Prior to 509f6f62a4 (cache: update object ID functions for
the_hash_algo, 2018-07-16), hashcmp() called memcmp() with a
constant size of 20 bytes. Some compilers were able to turn
that into a few quad-word comparisons, which is faster than
actually calling memcmp().
In 509f6f62a4, we started using the_hash_algo->rawsz
instead. Even though this will always be 20, the compiler
doesn't know that while inlining hashcmp() and ends up just
generating a call to memcmp().
Eventually we'll have to deal with multiple hash sizes, but
for the upcoming v2.19, we can restore some of the original
performance by asserting on the size. That gives the
compiler enough information to know that the memcmp will
always be called with a length of 20, and it performs the
same optimization.
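A standalone sketch of the trick, with a local stand-in for
the_hash_algo->rawsz (the real change is to git's own inline
hashcmp()):

#include <assert.h>
#include <string.h>

#define SHA1_RAWSZ 20

static size_t hash_rawsz = SHA1_RAWSZ; /* stand-in for the_hash_algo->rawsz */

static inline int hashcmp(const unsigned char *a, const unsigned char *b)
{
        /*
         * The assertion lets the compiler assume the length below is a
         * constant 20, so it can expand the memcmp into a few word
         * comparisons instead of emitting a library call.
         */
        assert(hash_rawsz == SHA1_RAWSZ);
        return memcmp(a, b, hash_rawsz);
}

int main(void)
{
        unsigned char x[SHA1_RAWSZ] = { 0 };
        unsigned char y[SHA1_RAWSZ] = { 0 };

        return hashcmp(x, y);
}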
Here are numbers for p0001.2 run against linux.git on a few
versions. This is using -O2 with gcc 8.2.0.
Test      v2.18.0             v2.19.0-rc0                HEAD
------------------------------------------------------------------------------
0001.2:   34.24(33.81+0.43)   34.83(34.42+0.40) +1.7%   33.90(33.47+0.42) -1.0%
You can see that v2.19 is a little slower than v2.18. This
commit ended up slightly faster than v2.18, but there's a
fair bit of run-to-run noise (the generated code in the two
cases is basically the same). This patch does seem to be
consistently 1-2% faster than v2.19.
I tried changing hashcpy(), which was also touched by
509f6f62a4, in the same way, but couldn't measure any
speedup. Which makes sense, at least for this workload. A
traversal of the whole commit graph requires looking up
every entry of every tree via lookup_object(). That's many
multiples of the number of objects in the repository (most
of the lookups just return "yes, we already saw that
object").
[jn: verified using "make object.s" that the memcmp call goes away.]
Reported-by: Derrick Stolee <stolee@gmail.com>
Signed-off-by: Jeff King <peff@peff.net>
Reviewed-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
rev-list: make empty --stdin not an error
When we originally did the series that contains 7ba826290a
(revision: add rev_input_given flag, 2017-08-02) the intent
was that "git rev-list --stdin </dev/null" would similarly
become a successful noop. However, an attempt at the time to
do that did not work[1]. The problem is that rev_input_given
serves two roles:
- it tells rev-list.c that it should not error out
- it tells revision.c that it should not have the "default"
ref kick in (e.g., "HEAD" in "git log")
We want to trigger the former, but not the latter. This is
technically possible with a single flag, if we set the flag
only after revision.c's revs->def check. But this introduces
a rather subtle ordering dependency.
Instead, let's keep two flags: one to denote when we got
actual input (which triggers both roles) and one for when we
read stdin (which triggers only the first).
This does mean a caller interested in the first role has to
check both flags, but there's only one such caller. And any
future callers might want to make the distinction anyway
(e.g., if they care less about erroring out, and more about
whether revision.c soaked up our stdin).
In fact, we already keep such a flag internally in
revision.c for this purpose, so this is really just exposing
that to the caller (and the old function-local flag can go
away in favor of our new one).
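A trimmed-down illustration of the two roles (hypothetical struct and
helper names, just to show why that caller must look at both flags):

#include <stdio.h>

struct rev_flags {
        unsigned rev_input_given : 1;  /* real revision input was seen */
        unsigned read_from_stdin : 1;  /* --stdin was consumed, maybe empty */
};

/* rev-list's "should we error out?" question needs both flags ... */
static int got_any_input(const struct rev_flags *f)
{
        return f->rev_input_given || f->read_from_stdin;
}

/* ... while "should the default ref kick in?" needs only the first. */
static int want_default_ref(const struct rev_flags *f)
{
        return !f->rev_input_given;
}

int main(void)
{
        struct rev_flags empty_stdin = { 0, 1 };

        printf("error out: %s, default ref: %s\n",
               got_any_input(&empty_stdin) ? "no" : "yes",
               want_default_ref(&empty_stdin) ? "yes" : "no");
        return 0;
}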
[1] https://public-inbox.org/git/20170802223416.gwiezhbuxbdmbjzx@sigill.intra.peff.net/
Helped-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
t3420-rebase-autostash: don't try to grep non-existing files
Several tests in 't3420-rebase-autostash.sh' start various rebase
processes that are expected to fail because of merge conflicts. These
tests then run '! grep' to ensure that the autostash feature did its
job, and the dirty contents of a file is gone. However, due to the
test repo's history and the choice of upstream branch that file
shouldn't exist in the conflicted state at all. Consequently, this
'grep' doesn't fail as expected, because it can't find the dirty
content, but it fails because it can't open the file.
Tighten this check by using 'test_path_is_missing' instead, thereby
avoiding unexpected errors from 'grep' as well.
Signed-off-by: SZEDER Gábor <szeder.dev@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
t3903-stash: don't try to grep non-existing file
The test 'store updates stash ref and reflog' in 't3903-stash.sh'
creates a stash from a new file, runs 'git reset --hard' to throw away
any modifications to the work tree, and then runs '! grep' to ensure
that the staged contents are gone. Since the file didn't exist
before, it shouldn't exist after 'git reset' either. Consequently,
this 'grep' doesn't fail for the expected reason (not finding the
staged content), but because it can't open the file at all.
Tighten this check by using 'test_path_is_missing' instead, thereby
avoiding an unexpected error from 'grep' as well.
Signed-off-by: SZEDER Gábor <szeder.dev@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Merge branch 'nd/pack-deltify-regression-fix'
In a recent update in the 2.18 era, "git pack-objects" started
producing larger than necessary packfiles by missing
opportunities to use large deltas.
* nd/pack-deltify-regression-fix:
pack-objects: fix performance issues on packing large deltas
t6018-rev-list-glob: fix 'empty stdin' test
Prior to d3c6751b18 (tests: make use of the test_must_be_empty
function, 2018-07-27), in the test 'rev-list should succeed with empty
output on empty stdin' in 't6018-rev-list-glob' the empty 'expect'
file served a dual purpose: besides specifying the expected output, as
usual, it also served as empty input for 'git rev-list --stdin'.
Then d3c6751b18 came along, and, as part of the conversion to
'test_must_be_empty', removed this empty 'expect' file, not realizing
its secondary purpose. Redirecting stdin from the now non-existing
file failed the test, but since this test expects failure in the first
place, this issue went unnoticed.
Redirect 'git rev-list's stdin explicitly from /dev/null to provide
empty input. (Strictly speaking we don't need this redirection,
because the test script's stdin is already redirected from /dev/null
anyway, but I think it's better to be explicit about it.)
Signed-off-by: SZEDER Gábor <szeder.dev@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
t4051-diff-function-context: read the right file
The test ' context does not include preceding empty lines' in the
block of tests 'change with long common tail and no context' in
't4051-diff-function-context.sh' tries to read the file
'long_common_tail.diff.diff', but that file doesn't exist as its name
contains one more '.diff' suffix than necessary.
Despite this error, the test still succeeded without checking what it
was supposed to, because this erroneous read is done on the line:
test "$(first_context_line <long_common_tail.diff.diff)" != " "
which means that:
- the command substitution hides the error, so it won't fail the
test, and
- the result of the command substitution is the empty string, which
is, of course, not equal to a single space character, so the
condition is fulfilled, and the test succeeds.
As a minimal fix, fix the name of the file to be read.
In the future we might want to reorganize this test script (1) to use
'test_cmp' instead of 'test's and command substitutions to catch
failing commands and to provide helpful error messages, and (2) to
specify what the expected result actually _is_ instead of what it
isn't.
Signed-off-by: SZEDER Gábor <szeder.dev@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
t0020-crlf: check the right file
In the test 'checkout with autocrlf=input' in 't0020-crlf.sh', one of
the 'has_cr' checks looks at the non-existing file 'two' instead of
'dir/two'. The test still succeeds, without actually checking what it
was supposed to, because this check is expected to fail anyway.
As a minimal fix, fix the name of the file to be checked.
Signed-off-by: SZEDER Gábor <szeder.dev@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
t7501-commit: drop silly command substitution
The test '--dry-run with conflicts fixed from a merge' in
't7501-commit.sh', added in 8dc874b2ee (wt-status.c: set commitable
bit if there is a meaningful merge., 2016-02-15), runs the following
unnecessary and downright bogus command substitution:
! $(git merge --no-commit commit-1) &&
I.e., after 'git merge ...' is executed and fails as expected, the test
attempts to execute its output:
Merging:
80f2ea2 commit 2
virtual commit-1
found 1 common ancestor:
e60d113 Initial commit
Auto-merging test-file
CONFLICT (content): Merge conflict in test-file
Automatic merge failed; fix conflicts and then commit the result.
as a command, which most likely fails, because there is no such
command as "Merging:". Then '!' negates the failed exit status, the
test continues, and eventually succeeds.
Remove this command substitution and use 'test_must_fail' to ensure
that 'git merge' fails.
Signed-off-by: SZEDER Gábor <szeder.dev@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
commit: use timestamp_t for author_date_slab
The author_date_slab is used to store the author date of a commit
when walking with the --author-date-order flag in rev-list or log. This
was added as an 'unsigned long' in
81c6b38b "log: --author-date-order"
Since 'unsigned long' is ambiguous in its bit-ness across platforms
(64-bit in Linux, 32-bit in Windows, for example), most references
to the author dates in commit.c were converted to timestamp_t in
dddbad72 "timestamp_t: a new data type for timestamps"
However, the slab definition was missed, leading to a mismatch in
the data types in Windows. This would not reveal itself as a bug
unless someone authors a commit after February 2106, but commits
can store anything as their author date.
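The fix boils down to declaring the slab's element type as
timestamp_t. The effect of the narrower type can be seen in this
standalone demonstration of what a 32-bit 'unsigned long' does to a
post-2106 timestamp:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
        /* A second count just past the 32-bit range: 2106-02-07 UTC. */
        uint64_t author_date = 4294967296ULL;

        uint32_t narrow = (uint32_t)author_date; /* a 32-bit long keeps 0 */
        int64_t wide = (int64_t)author_date;     /* timestamp_t keeps it */

        printf("narrow: %u, wide: %lld\n", narrow, (long long)wide);
        return 0;
}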
Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
SubmittingPatches: mention doc-diff
We already advise people to make sure their documentation
formats correctly. Let's point them at the doc-diff script,
which can help with that.
Let's also put a brief note in the script about its purpose,
since that otherwise can only be found in the original
commit message. Along with the existing -h/usage text,
that's hopefully enough for developers to make use of it.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
pack-objects: reuse on-disk deltas for thin "have" objects
When we serve a fetch, we pass the "wants" and "haves" from
the fetch negotiation to pack-objects. That tells us not
only which objects we need to send, but we also use the
boundary commits as "preferred bases": their trees and blobs
are candidates for delta bases, both for reusing on-disk
deltas and for finding new ones.
However, this misses some opportunities. Modulo some special
cases like shallow or partial clones, we know that every
object reachable from the "haves" could be a preferred base.
We don't use all of them for two reasons:
1. It's expensive to traverse the whole history and
enumerate all of the objects the other side has.
2. The delta search is expensive, so we want to keep the
number of candidate bases sane. The boundary commits
are the most likely to work.
When we have reachability bitmaps, though, reason 1 no
longer applies. We can efficiently compute the set of
reachable objects on the other side (and in fact already did
so as part of the bitmap set-difference to get the list of
interesting objects). And using this set conveniently
covers the shallow and partial cases, since we have to
disable the use of bitmaps for those anyway.
The second reason argues against using these bases in the
search for new deltas. But there's one case where we can use
this information for free: when we have an existing on-disk
delta that we're considering reusing, we can do so if we
know the other side has the base object. This in fact saves
time during the delta search, because it's one less delta we
have to compute.
And that's exactly what this patch does: when we're
considering whether to reuse an on-disk delta, if bitmaps
tell us the other side has the object (and we're making a
thin-pack), then we reuse it.
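In outline, the reuse check gains one extra case; the stand-in names
below are illustrative, with the real logic living in check_object()
in builtin/pack-objects.c:

#include <stdio.h>

struct object_id { unsigned char hash[20]; };

/* Illustrative stand-ins for the queries pack-objects already makes. */
static int base_in_packing_list(const struct object_id *base) { return 0; }
static int client_has_per_bitmap(const struct object_id *base) { return 1; }

/*
 * An on-disk delta can be reused if its base is an object we are
 * sending anyway, or, when building a thin pack, an object the
 * bitmap set-difference proved the client already has.
 */
static int can_reuse_ondisk_delta(const struct object_id *base, int thin)
{
        if (base_in_packing_list(base))
                return 1;
        if (thin && client_has_per_bitmap(base))
                return 1;
        return 0;
}

int main(void)
{
        struct object_id base = { { 0 } };

        printf("reuse: %d\n", can_reuse_ondisk_delta(&base, 1));
        return 0;
}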
Here are the results on p5311 using linux.git, which
simulates a client fetching after `N` days since their last
fetch:
Test                         origin              HEAD
---------------------------------------------------------------------------
5311.3: server (1 days)      0.27(0.27+0.04)     0.12(0.09+0.03) -55.6%
5311.4: size (1 days)        0.9M                237.0K -73.7%
5311.5: client (1 days)      0.04(0.05+0.00)     0.10(0.10+0.00) +150.0%
5311.7: server (2 days)      0.34(0.42+0.04)     0.13(0.10+0.03) -61.8%
5311.8: size (2 days)        1.5M                347.7K -76.5%
5311.9: client (2 days)      0.07(0.08+0.00)     0.16(0.15+0.01) +128.6%
5311.11: server (4 days)     0.56(0.77+0.08)     0.13(0.10+0.02) -76.8%
5311.12: size (4 days)       2.8M                566.6K -79.8%
5311.13: client (4 days)     0.13(0.15+0.00)     0.34(0.31+0.02) +161.5%
5311.15: server (8 days)     0.97(1.39+0.11)     0.30(0.25+0.05) -69.1%
5311.16: size (8 days)       4.3M                1.0M -76.0%
5311.17: client (8 days)     0.20(0.22+0.01)     0.53(0.52+0.01) +165.0%
5311.19: server (16 days)    1.52(2.51+0.12)     0.30(0.26+0.03) -80.3%
5311.20: size (16 days)      8.0M                2.0M -74.5%
5311.21: client (16 days)    0.40(0.47+0.03)     1.01(0.98+0.04) +152.5%
5311.23: server (32 days)    2.40(4.44+0.20)     0.31(0.26+0.04) -87.1%
5311.24: size (32 days)      14.1M               4.1M -70.9%
5311.25: client (32 days)    0.70(0.90+0.03)     1.81(1.75+0.06) +158.6%
5311.27: server (64 days)    11.76(26.57+0.29)   0.55(0.50+0.08) -95.3%
5311.28: size (64 days)      89.4M               47.4M -47.0%
5311.29: client (64 days)    5.71(9.31+0.27)     15.20(15.20+0.32) +166.2%
5311.31: server (128 days)   16.15(36.87+0.40)   0.91(0.82+0.14) -94.4%
5311.32: size (128 days)     134.8M              100.4M -25.5%
5311.33: client (128 days)   9.42(16.86+0.49)    25.34(25.80+0.46) +169.0%
In all cases we save CPU time on the server (sometimes
significant) and the resulting pack is smaller. We do spend
more CPU time on the client side, because it has to
reconstruct more deltas. But that's the right tradeoff to
make, since clients tend to outnumber servers. It just means
the thin pack mechanism is doing its job.
From the user's perspective, the end-to-end time of the
operation will generally be faster. E.g., in the 128-day
case, we saved 15s on the server at a cost of 16s on the
client. Since the resulting pack is 34MB smaller, this is a
net win if the network speed is less than 270Mbit/s. And
that's actually the worst case. The 64-day case saves just
over 11s at a cost of just under 11s. So it's a slight win
at any network speed, and the 40MB saved is pure bonus. That
trend continues for the smaller fetches.
The implementation itself is mostly straightforward, with
the new logic going into check_object(). But there are two
tricky bits.
The first is that check_object() needs access to the
relevant information (the thin flag and bitmap result). We
can do this by pushing these into program-lifetime globals.
The second is that the rest of the code assumes that any
reused delta will point to another "struct object_entry" as
its base. But of course the case we are interested in here
is the one where we don't have such an entry!
I looked at a number of options that didn't quite work:
- we could use a flag to signal a reused delta, but it's
not a single bit. We have to actually store the oid of
the base, which is normally done by pointing to the
existing object_entry. And we'd have to modify all the
code which looks at deltas.
- we could add the reused bases to the end of the existing
object_entry array. While this does create some extra
work as later stages consider the extra entries, it's
actually not too bad (we're not sending them, so they
don't cost much in the delta search, and at most we'd
have 2*N of them).
But there's a more subtle problem. Adding to the existing
array means we might need to grow it with realloc, which
could move the earlier entries around. While many of the
references to other entries are done by integer index,
some (including ones on the stack) use pointers, which
would become invalidated.
This isn't insurmountable, but it would require quite a
bit of refactoring (and it's hard to know that you've got
it all, since it may work _most_ of the time and then
fail subtly based on memory allocation patterns).
- we could allocate a new one-off entry for the base. In
fact, this is what an earlier version of this patch did.
However, since the refactoring brought in by ad635e82d6
(Merge branch 'nd/pack-objects-pack-struct', 2018-05-23),
the delta_idx code requires that both entries be in the
main packing list.
So taking all of those options into account, what I ended up
with is a separate list of "external bases" that are not
part of the main packing list. Each delta entry that points
to an external base has a single-bit flag to do so; we have a
little breathing room in the bitfield section of
object_entry.
This lets us limit the change primarily to the oe_delta()
and oe_set_delta_ext() functions. And as a bonus, most of
the rest of the code does not consider these dummy entries
at all, saving both runtime CPU and code complexity.
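A heavily trimmed, hypothetical illustration of that bookkeeping (the
real field and function names in pack-objects differ in detail):

#include <stdint.h>
#include <stdio.h>

struct entry {
        uint32_t delta_idx;    /* 1-based index of the delta base, 0 = none */
        unsigned ext_base : 1; /* base lives in ext_bases, not objects */
};

struct packing_data {
        struct entry *objects;   /* the main packing list */
        struct entry *ext_bases; /* dummy entries for bases we don't send */
};

/* The moral equivalent of oe_delta(): the flag picks the array. */
static struct entry *delta_base_of(struct packing_data *pd, struct entry *e)
{
        if (!e->delta_idx)
                return NULL;
        return (e->ext_base ? pd->ext_bases : pd->objects) + e->delta_idx - 1;
}

int main(void)
{
        struct entry bases[1] = { { 0, 0 } };
        struct entry objs[1] = { { 1, 1 } };  /* delta against bases[0] */
        struct packing_data pd = { objs, bases };

        printf("external base found: %d\n",
               delta_base_of(&pd, &objs[0]) == &bases[0]);
        return 0;
}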
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
pack-bitmap: save "have" bitmap from walk
When we do a bitmap walk, we save the result, which
represents (WANTs & ~HAVEs); i.e., every object we care
about visiting in our walk. However, we throw away the
haves bitmap, which can sometimes be useful, too. Save it
and provide an access function so code which has performed a
walk can query it.
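The accessor ends up with roughly this shape; the declaration below is
for illustration only, and the exact name and signature may differ:

struct bitmap_index;  /* opaque handle from the bitmap walk */

int bitmap_has_sha1_in_uninteresting(struct bitmap_index *bitmap_git,
                                     const unsigned char *sha1);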
A few notes on the accessor interface:
- the bitmap code calls these "haves" because it grew out
of the want/have negotiation for fetches. But really,
these are simply the objects that would be flagged
UNINTERESTING in a regular traversal. Let's use that
more universal nomenclature for the external module
interface. We may want to change the internal naming
inside the bitmap code, but that's outside the scope of
this patch.
- it still uses a bare "sha1" rather than "oid". That's
true of all of the bitmap code. And in this particular
instance, our caller in pack-objects is dealing with the
bare sha1 that comes from a packed REF_DELTA (we're
pointing directly to the mmap'd pack on disk). That's
something we'll have to deal with as we transition to a
new hash, but we can wait and see how the caller ends up
being fixed and adjust this interface accordingly.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
test-tool.h: include git-compat-util.h
The test-tool programs include "test-tool.h" as their first
include, which breaks our CodingGuidelines rule of "the first
include must be git-compat-util.h or an equivalent".
Rather than change them all, let's instead make test-tool.h
one of those equivalents, just like we do for builtin.h
(which many of the actual git builtins include first).
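A sketch of the resulting header arrangement (the entry-point
declaration is illustrative):

/* t/helper/test-tool.h */
#ifndef TEST_TOOL_H
#define TEST_TOOL_H

#include "git-compat-util.h"  /* makes test-tool.h safe as first include */

int cmd__example(int argc, const char **argv);

#endif /* TEST_TOOL_H */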
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
tests: use 'test_must_be_empty' instead of 'test_cmp <empty> <out>'
Using 'test_must_be_empty' is shorter and more idiomatic than
>empty &&
test_cmp empty out
as it saves the creation of an empty file. Furthermore, sometimes the
expected empty file doesn't have a name as descriptive as 'empty',
and its creation is far away from the place where it's finally used
for comparison (e.g. in 't7600-merge.sh', where two expected empty
files are created in the 'setup' test, but are used only about 500
lines later).
These cases were found by instrumenting 'test_cmp' to error out the
test script when it's used to compare empty files, and then converted
manually.
Note that even after this patch there still remain a lot of cases
where we use 'test_cmp' to check empty files:
- Sometimes the expected output is not hard-coded in the test, but
'test_cmp' is used to ensure that two similar git commands produce
the same output, and that output happens to be empty, e.g. the
test 'submodule update --merge - ignores --merge for new
submodules' in 't7406-submodule-update.sh'.
- Repetitive common tasks, including preparing the expected results
and running 'test_cmp', are often extracted into a helper
function, and some of this helper's callsites expect no output.
- For the same reason as above, the whole 'test_expect_success'
block is within a helper function, e.g. in 't3070-wildmatch.sh'.
- Or 'test_cmp' is invoked in a loop, e.g. the test 'cvs update
(-p)' in 't9400-git-cvsserver-server.sh'.
Signed-off-by: SZEDER Gábor <szeder.dev@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
tests: use 'test_must_be_empty' instead of 'test_cmp /dev/null <out>'
Using 'test_must_be_empty' is more idiomatic than 'test_cmp /dev/null
out', and its message on error is perhaps a bit more to the point.
This patch was basically created by running:
sed -i -e 's%test_cmp /dev/null%test_must_be_empty%' t[0-9]*.sh
with the exception of the change in 'should not fail in an empty repo'
in 't7401-submodule-summary.sh', where it was 'test_cmp output
/dev/null'.
Signed-off-by: SZEDER Gábor <szeder.dev@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>