A v2 pack index file can specify an offset within a packfile
of up to 2^64-1 bytes. On a system with a signed 64-bit
off_t, we can represent only up to 2^63-1. This means that a
corrupted .idx file can end up with a negative offset in the
pack code. Our bounds-checking use_pack function looks for
too-large offsets, but not for ones that have wrapped around
to negative. Let's do so, which fixes an out-of-bounds
access demonstrated in t5313.
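As a rough sketch, the added check amounts to a single
guard next to the existing too-large check (the exact
message and surrounding code here are illustrative, not the
literal patch):

    if (offset < 0)
        die("offset before start of packfile (corrupt .idx?)");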
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If a pack .idx file has a corrupted offset for an object, we
may try to access an offset in the .idx or .pack file that
is larger than the file's size. For the .pack case, we have
use_pack() to protect us, which realizes the access is out
of bounds. But if the corrupted value asks us to look in the
.idx file's secondary 64-bit offset table, we blindly add it
to the mmap'd index data and access arbitrary memory.
We can fix this with a simple bounds-check compared to the
size we found when we opened the .idx file.
Note that there's similar code in index-pack that is
triggered only during "index-pack --verify". To support
both, we pull the bounds-check into a separate function,
which dies when it sees a corrupted file.
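A sketch of what such a shared helper could look like,
assuming the mmap'd index base and size are available on the
packed_git struct (the names here are illustrative):

    static void check_pack_index_ptr(const struct packed_git *p,
                                     const void *ptr)
    {
        off_t offset = (const unsigned char *)ptr -
                       (const unsigned char *)p->index_data;
        if (offset < 0 || offset >= (off_t)p->index_size)
            die("offset outside of .idx for %s (corrupt index?)",
                p->pack_name);
    }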
It would be nice if we could return an error, so that the
pack code could try to find a good copy of the object
elsewhere. Currently nth_packed_object_offset doesn't have
any way to return an error, but it could probably use "0" as
a sentinel value (since no object can start there). This is
the minimal fix, and we can improve the resilience later on
top.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
t5313: test bounds-checks of corrupted/malicious pack/idx files
Our on-disk .pack and .idx files may reference other data by
offset. We should make sure that we are not fooled by
corrupt data into accessing memory outside of our mmap'd
boundaries.
This patch adds a series of tests for offsets found in .pack
and .idx files. For the most part we get this right, but
there are two tests of .idx files marked as failures: we do
not bounds-check offsets in the v2 index's extended offset
table, nor do we handle .idx offsets that overflow a signed
off_t.
With these tests, we should have good coverage of all
offsets found in these files. Note that this doesn't cover
.bitmap files, which may have similar bugs.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
merge-file: enforce MAX_XDIFF_SIZE on incoming files
The previous commit enforces MAX_XDIFF_SIZE at the
interfaces to xdiff: xdi_diff (which calls xdl_diff) and
ll_xdl_merge (which calls xdl_merge).
But we have another direct call to xdl_merge in
merge-file.c. If it were written today, this probably would
just use the ll_merge machinery. But it predates that code,
and uses slightly different options to xdl_merge (e.g.,
ZEALOUS_ALNUM).
We could try to abstract out an xdi_merge to match the
existing xdi_diff, but even that is difficult. Rather than
simply reporting an error, we want to treat large files as
binary, and that distinction would have to happen outside of
xdi_merge.
The simplest fix is to just replicate the MAX_XDIFF_SIZE
check in merge-file.c.
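A sketch of the replicated check, assuming the three
mmfile_t inputs have already been read (the exact message is
illustrative):

    for (i = 0; i < 3; i++) {
        if (mmfs[i].size > MAX_XDIFF_SIZE)
            return error("file %s is too large for a textual merge",
                         names[i]);
    }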
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The xdiff code is not prepared to handle extremely large
files. It uses "int" in many places, which can overflow if
we have a very large number of lines or even bytes in our
input files. This can cause us to produce incorrect diffs,
with no indication that the output is wrong. Or worse, we
may even underallocate a buffer whose size is the result of
an overflowing addition.
We're much better off to tell the user that we cannot diff
or merge such a large file. This patch covers both cases,
but in slightly different ways:
1. For merging, we notice the large file and cleanly fall
back to a binary merge (which is effectively "we cannot
merge this").
2. For diffing, we make the binary/text distinction much
earlier, and in many different places. For this case,
we'll use the xdi_diff as our choke point, and reject
any diff there before it hits the xdiff code.
This means in most cases we'll die() immediately after.
That's not ideal, but in practice we shouldn't
generally hit this code path unless the user is trying
to do something tricky. We already consider files
larger than core.bigfilethreshold to be binary, so this
code would only kick in when that is circumvented
(either by bumping that value, or by using a
.gitattributes entry to mark a file as diffable).
In other words, we can avoid being "nice" here, because
there is already nice code that tries to do the right
thing. We are adding the suspenders to the nice code's
belt, so notice when it has been worked around (both to
protect the user from malicious inputs, and because it
is better to die() than generate bogus output).
The maximum size was chosen after experimenting with feeding
large files to the xdiff code. It's just under a gigabyte,
which leaves room for two obvious cases:
- a diff3 merge conflict result on files of maximum size X
could be 3*X plus the size of the markers, which would
still be only about 3G, which fits in an unsigned 32-bit int.
- some of the diff code allocates arrays of one int per
record. Even if each file consists only of blank lines,
a file smaller than 1G will have fewer than 1G records,
and therefore the int array will fit in 4G.
Since the limit is arbitrary anyway, I chose to go under a
gigabyte, to leave a safety margin (e.g., we would not want
to overflow by allocating "(records + 1) * sizeof(int)" or
similar).
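For concreteness, a limit along these lines satisfies both
constraints (treat the exact constant as illustrative):

    /* 1023 megabytes: just shy of 1GB, leaving a safety margin */
    #define MAX_XDIFF_SIZE (1024UL * 1024 * 1023)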
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When we call into xdiff to perform a diff, we generally lose
the return code completely. We typically do so by ignoring
the return of our xdi_diff wrapper, but sometimes we even
propagate that return value up and then ignore it later. This can
lead to us silently producing incorrect diffs (e.g., "git
log" might produce no output at all, not even a diff header,
for a content-level diff).
In practice this does not happen very often, because the
typical reason for xdiff to report failure is that it
malloc() failed (it uses straight malloc, and not our
xmalloc wrapper). But it could also happen when xdiff
triggers one of our callbacks, which returns an error (e.g.,
outf() in builtin/rerere.c tries to report a write failure
in this way). And the next patch plans to add more
failure modes.
Let's notice an error return from xdiff and react
appropriately. In most of the diff.c code, we can simply
die(), which matches the surrounding code (e.g., that is
what we do if we fail to load a file for diffing in the
first place). This is not that elegant, but we are probably
better off dying to let the user know there was a problem,
rather than simply generating bogus output.
We could also just die() directly in xdi_diff, but the
callers typically have a bit more context, and can provide a
better message (and if we do later decide to pass errors up,
we're one step closer to doing so).
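So a typical call site in diff.c ends up looking roughly
like this (a sketch; the callback and message vary per
caller):

    if (xdi_diff_outf(&mf1, &mf2, fn_out_consume, &ecbdata, &xpp, &xecfg))
        die("unable to generate diff for %s", one->path);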
There is one interesting case, which is in diff_grep(). Here
if we cannot generate the diff, there is nothing to match,
and we silently return "no hits". This is actually what the
existing code does already, but we make it a little more
explicit.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
By default, libcurl will follow circular http redirects
forever. Let's put a cap on this so that somebody who can
trigger an automated fetch of an arbitrary repository (e.g.,
for CI) cannot convince git to loop infinitely.
The value chosen is 20, which is the same default that
Firefox uses.
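The change itself is essentially a one-liner in our curl
handle setup (sketch):

    curl_easy_setopt(result, CURLOPT_MAXREDIRS, 20);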
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Previously, libcurl would follow redirection to any protocol
it was compiled with support for. This is desirable to allow
redirection from HTTP to HTTPS. However, it would even
successfully allow redirection from HTTP to SFTP, a protocol
that git does not otherwise support at all. Furthermore
git's new protocol-whitelisting could be bypassed by
following a redirect within the remote helper, as it was
only enforced at transport selection time.
This patch limits redirects within libcurl to HTTP, HTTPS,
FTP and FTPS. If there is a protocol-whitelist present, this
list is limited to those also allowed by the whitelist. As
redirection happens from within libcurl, it is impossible
for an HTTP request to be redirected to a protocol
implemented within another remote helper.
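Without a whitelist in effect, the restriction boils down to
something like this when configuring the handle (a sketch;
the whitelisted case computes the bitmask from the allowed
protocols instead):

    curl_easy_setopt(result, CURLOPT_REDIR_PROTOCOLS,
                     CURLPROTO_HTTP | CURLPROTO_HTTPS |
                     CURLPROTO_FTP | CURLPROTO_FTPS);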
When the curl version git was compiled with is too old to
support restrictions on protocol redirection, we warn the
user if GIT_ALLOW_PROTOCOL restrictions were requested. This
is a little inaccurate, as even without that variable in the
environment, we would still restrict SFTP, etc, and we do
not warn in that case. But anything else means we would
literally warn every time git accesses an http remote.
This commit includes a test, but it is not as robust as we
would hope. It redirects an http request to ftp, and checks
that curl complained about the protocol, which means that we
are relying on curl's specific error message to know what
happened. Ideally we would redirect to a working ftp server
and confirm that we can clone without protocol restrictions,
and not with them. But we do not have a portable way of
providing an ftp server, nor any other protocol that curl
supports (https is the closest, but we would have to deal
with certificates).
[jk: added test and version warning]
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
submodule: allow only certain protocols for submodule fetches
Some protocols (like git-remote-ext) can execute arbitrary
code found in the URL. The URLs that submodules use may come
from arbitrary sources (e.g., .gitmodules files in a remote
repository). Let's restrict submodules to fetching from a
known-good subset of protocols.
Note that we apply this restriction to all submodule
commands, whether the URL comes from .gitmodules or not.
This is more restrictive than we need to be; for example, in
the tests we run:
git submodule add ext::...
which should be trusted, as the URL comes directly from the
command line provided by the user. But doing it this way is
simpler, and makes it much less likely that we would miss a
case. And since such protocols should be an exception
(especially because nobody who clones from them will be able
to update the submodules!), it's not likely to inconvenience
anyone in practice.
Reported-by: Blake Burkhart <bburky@bburky.com>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
transport: add a protocol-whitelist environment variable
If we are cloning an untrusted remote repository into a
sandbox, we may also want to fetch remote submodules in
order to get the complete view as intended by the other
side. However, that opens us up to attacks where a malicious
user gets us to clone something they would not otherwise
have access to (this is not necessarily a problem by itself,
but we may then act on the cloned contents in a way that
exposes them to the attacker).
Ideally such a setup would sandbox git entirely away from
high-value items, but this is not always practical or easy
to set up (e.g., OS network controls may block multiple
protocols, and we would want to enable some but not others).
We can help this case by providing a way to restrict
particular protocols. We use a whitelist in the environment.
This is more annoying to set up than a blacklist, but it
defaults to safety if the set of protocols git supports
grows. If no whitelist is specified, we continue to default
to allowing all protocols (this is an "unsafe" default, but
since the minority of users will want this sandboxing
effect, it is the only sensible one).
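For example, a sandboxing caller might allow only a couple
of protocols (the variable takes a colon-separated list; the
URL here is a placeholder):

    GIT_ALLOW_PROTOCOL=http:https git clone --recursive <url>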
A note on the tests: ideally these would all be in a single
test file, but the git-daemon and httpd test infrastructure
is an all-or-nothing proposition rather than a test-by-test
prerequisite. By putting them all together, we would be
unable to test the file-local code on machines without
apache.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When we show "branch@{0}", we format into a fixed-size
buffer using sprintf. This can overflow if you have long
branch names. We can fix it by using a temporary strbuf.
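The shape of the fix, using our strbuf API (a sketch; the
surrounding reflog code is elided):

    struct strbuf buf = STRBUF_INIT;
    strbuf_addf(&buf, "%s@{0}", branch);
    /* ... use buf.buf where the fixed-size buffer used to be ... */
    strbuf_release(&buf);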
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
read_info_alternates: handle paths larger than PATH_MAX
This function assumes that the relative_base path passed
into it is no larger than PATH_MAX, and writes into a
fixed-size buffer. However, this path may not have actually
come from the filesystem; for example, add_submodule_odb
generates a path using a strbuf and passes it in. This is
hard to trigger in practice, though, because the long
submodule directory would have to exist on disk before we
would try to open its info/alternates file.
We can easily avoid the bug, though, by simply creating the
filename on the heap.
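For example, something along these lines (a sketch; xstrfmt
allocates the formatted string on the heap):

    char *path = xstrfmt("%s/info/alternates", relative_base);
    fd = open(path, O_RDONLY);
    free(path);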
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When we are loading a notes tree into our internal hash
table, we also collect any files that are clearly non-notes.
We format the name of the file into a PATH_MAX buffer, but
unlike true notes (which cannot be larger than a fanned-out
sha1 hash), these tree entries can be arbitrarily long,
overflowing our buffer.
We can fix this by switching to a strbuf. It doesn't even
cost us an extra allocation, as we can simply hand ownership
of the buffer over to the non-note struct.
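Roughly (a sketch; add_non_note's actual signature may
differ):

    struct strbuf non_note_path = STRBUF_INIT;
    strbuf_addstr(&non_note_path, prefix);
    strbuf_addstr(&non_note_path, entry.path);
    /* the strbuf's heap buffer becomes the non-note's path */
    add_non_note(t, strbuf_detach(&non_note_path, NULL),
                 entry.mode, entry.sha1);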
This is of moderate security interest, as you might fetch
notes trees from an untrusted remote. However, we do not do
so by default, so you would have to manually fetch into the
notes namespace.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
verify_absent: allow filenames longer than PATH_MAX
When unpack-trees wants to know whether a path will
overwrite anything in the working tree, we use lstat() to
see if there is anything there. But if we are going to write
"foo/bar", we can't just lstat("foo/bar"); we need to look
for leading prefixes (e.g., "foo"). So we use the lstat cache
to find the length of the leading prefix, and copy the
filename up to that length into a temporary buffer (since
the original name is const, we cannot just stick a NUL in
it).
The copy we make goes into a PATH_MAX-sized buffer, which
will overflow if the prefix is longer than PATH_MAX. How
this happens is a little tricky, since in theory PATH_MAX is
the biggest path we will have read from the filesystem. But
this can happen if:
- the compiled-in PATH_MAX does not accurately reflect
what the filesystem is capable of
- the leading prefix is not _quite_ what is on disk; it
contains the next element from the name we are checking.
So if we want to write "aaa/bbb/ccc/ddd" and "aaa/bbb"
exists, the prefix of interest is "aaa/bbb/ccc". If
"aaa/bbb" approaches PATH_MAX, then "ccc" can overflow
it.
So this can be triggered, but it's hard to do. In
particular, you cannot just "git clone" a bogus repo. The
verify_absent checks happen before unpack-trees writes
anything to the filesystem, so there are never any leading
prefixes during the initial checkout, and the bug doesn't
trigger. And by definition, these files are larger than
PATH_MAX, so writing them will fail, and clone will
complain (though it may write a partial path, which will
cause a subsequent "git checkout" to hit the bug).
We can fix it by creating the temporary path on the heap.
The extra malloc overhead is not important, as we are
already making at least one stat() call (and probably more
for the prefix discovery).
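A sketch of the heap-allocated copy (xmemdupz copies the
bytes and adds the NUL for us; the names here are
illustrative):

    char *path = xmemdupz(name, prefix_len);
    int ret = lstat(path, &st);
    free(path);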
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Various fixes around "git am" that applies a patch to a history
that is not there yet.
* pt/am-abort-fix:
am --abort: keep unrelated commits on unborn branch
am --abort: support aborting to unborn branch
am --abort: revert changes introduced by failed 3way merge
am --skip: support skipping while on unborn branch
am -3: support 3way merge on unborn branch
am --skip: revert changes introduced by failed 3way merge
Merge branch 'mh/reporting-broken-refs-from-for-each-ref' into maint
"git for-each-ref" reported "missing object" for 0{40} when it
encounters a broken ref. The lack of object whose name is 0{40} is
not the problem; the ref being broken is.
* mh/reporting-broken-refs-from-for-each-ref:
read_loose_refs(): treat NULL_SHA1 loose references as broken
read_loose_refs(): simplify function logic
for-each-ref: report broken references correctly
t6301: new tests of for-each-ref error handling
Merge branch 'jc/fsck-retire-require-eoh' into maint
A fix to a minor regression to "git fsck" in the v2.2 era that started
complaining about a body-less tag object when it lacks an empty
separator line after its header to separate it from a non-existent body.
* jc/fsck-retire-require-eoh:
fsck: it is OK for a tag and a commit to lack the body
Merge branch 'et/http-proxyauth' into maint
We used to ask libCURL to use the most secure authentication method
available when talking to an HTTP proxy only when we were told to
talk to one via configuration variables. We now ask libCURL to
always use the most secure authentication method, because the user
can tell libCURL to use an HTTP proxy via an environment variable
without using configuration variables.
* et/http-proxyauth:
http: always use any proxy auth method available
Merge branch 'jc/unexport-git-pager-in-use-in-pager' into maint
When you say "!<ENTER>" while running say "git log", you'd confuse
yourself in the resulting shell, that may look as if you took
control back to the original shell you spawned "git log" from but
that isn't what is happening. To that new shell, we leaked
GIT_PAGER_IN_USE environment variable that was meant as a local
communication between the original "Git" and subprocesses that was
spawned by it after we launched the pager, which caused many
"interesting" things to happen, e.g. "git diff | cat" still paints
its output in color by default.
Stop leaking that environment variable to the pager's half of the
fork; we only need it on the "Git" side when we spawn the pager.
* jc/unexport-git-pager-in-use-in-pager:
pager: do not leak "GIT_PAGER_IN_USE" to the pager
Merge branch 'rh/test-color-avoid-terminfo-in-original-home' into maint
An ancient test framework enhancement to allow color was not
entirely correct; this makes it work even when tput needs to read
from the ~/.terminfo under the user's real HOME directory.
* rh/test-color-avoid-terminfo-in-original-home:
test-lib.sh: fix color support when tput needs ~/.terminfo
Revert "test-lib.sh: do tests for color support after changing HOME"
Merge branch 'me/fetch-into-shallow-safety' into maint
"git fetch --depth=<depth>" and "git clone --depth=<depth>" issued
a shallow transfer request even to an upload-pack that does not
support the capability.
* me/fetch-into-shallow-safety:
fetch-pack: check for shallow if depth given
When we want to write out a loose object file, we have
always first made sure we don't already have the object
somewhere. Since 33d4221 (write_sha1_file: freshen existing
objects, 2014-10-15), we also update the timestamp on the
file, so that a simultaneous prune knows somebody is
likely to reference it soon.
If our utime() call fails, we treat this the same as not
having the object in the first place; the safe thing to do
is write out another copy. However, the loose-object check
accidentally inverts the utime() check; it returns failure
_only_ when the utime() call actually succeeded. Thus it was
failing to protect us there, and in the normal case where
utime() succeeds, it caused us to pointlessly write out and
link the object.
This passed our freshening tests, because writing out the
new object is certainly _one_ way of updating its utime. So
the normal case was inefficient, but not wrong.
While we're here, let's also add a comment in front of the
check_and_freshen functions, making a note of their return
convention (since it is not our usual "0 for success, -1 for
error").
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
rebase: return non-zero error code if format-patch fails
Since e481af06 (rebase: Handle cases where format-patch fails) we
notice if format-patch fails and return immediately from
git-rebase--am. We save the return value with ret=$?, but then we
return $?, which is usually zero in this case.
Fix this by returning $ret instead.
Cc: Andrew Wong <andrew.kw.w@gmail.com>
Signed-off-by: Clemens Buchacher <clemens.buchacher@intel.com>
Helped-by: Jorge Nunes <jorge.nunes@intel.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Documentation/branch: document -M and -D in terms of --force
Now that we have proper documentation for --force's interaction with -d
and -m, we can avoid duplication and consider -M and -D as convenience
aliases for -m --force and -d --force.
Signed-off-by: Matthieu Moy <Matthieu.Moy@imag.fr>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
pager: do not leak "GIT_PAGER_IN_USE" to the pager
Since 2e6c012e (setup_pager: set GIT_PAGER_IN_USE, 2011-08-17), we
export GIT_PAGER_IN_USE so that a process that becomes the upstream
of the spawned pager can still tell that we have spawned the pager
and decide to do colored output even when its output no longer goes
to a terminal (i.e. isatty(1)).
But we forgot to clear it from the environment of the spawned pager.
This is not a problem in a sane world, but if you have a few
thousand Git users in your organization, somebody is bound to do
strange things, e.g. typing "!<ENTER>" instead of 'q' to get control
back from $LESS. GIT_PAGER_IN_USE is still set in that subshell
spawned by "less", and all sorts of interesting things start
happening, e.g. "git diff | cat" starts coloring its output.
We can clear the environment variable in the half of the fork that
runs the pager to avoid the confusion.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Acked-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
rev-list: disable --use-bitmap-index when pruning commits
The reachability bitmaps do not have enough information to
tell us which commits might have changed path "foo", so the
current code produces wrong answers for:
git rev-list --use-bitmap-index --count HEAD -- foo
(it silently ignores the "foo" limiter). Instead, we should
fall back to doing a normal traversal (it is OK to fall
back rather than complain, because --use-bitmap-index is a
pure optimization, and might not kick in for other reasons,
such as there being no bitmaps in the repository).
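The fix amounts to refusing the optimization whenever a
pathspec limiter is in effect, roughly:

    /* a pathspec sets revs.prune; bitmaps cannot answer that query */
    if (revs.prune)
        use_bitmap_index = 0;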
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
config.c: fix writing config files on Windows network shares
Renaming to an existing file doesn't work on Windows network shares if the
target file is open.
munmap() the old config file before calling commit_lock_file().
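Concretely, something like this in the config-writing path
(a sketch; the variable names loosely follow config.c):

    munmap(contents, contents_sz);
    contents = NULL;

    if (commit_lock_file(lock) < 0)
        return error("could not commit config file %s", config_filename);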
Signed-off-by: Karsten Blees <blees@dcon.de>
Acked-by: Jeff King <peff@peff.net>
Acked-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
rebase -i: do not leave a CHERRY_PICK_HEAD file behind
When skipping commits whose changes were already applied via `git rebase
--continue`, we need to clean up the CHERRY_PICK_HEAD file explicitly.
The same is not true for `git rebase --skip` because that will execute
`git reset --hard` as part of the "skip" handling in git-rebase.sh, even
before git-rebase--interactive.sh is called.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When rev-list's --cherry option does not detect that a patch has already
been applied upstream, an interactive rebase would offer to reapply it and
consequently stop at that patch with a failure, mentioning that the diff
is empty.
Traditionally, a `git rebase --continue` simply skips the commit in such a
situation.
However, as pointed out by Gábor Szeder, this leaves a CHERRY_PICK_HEAD
behind, making the Git prompt believe that a cherry pick is still going
on. This commit adds a test case demonstrating this bug.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We set CURLOPT_PROXYAUTH to use the most secure authentication
method available only when the user has set configuration variables
to specify a proxy. However, libcurl also supports specifying a
proxy through environment variables. In that case libcurl defaults
to only using the Basic proxy authentication method, because we do
not use CURLOPT_PROXYAUTH.
Set CURLOPT_PROXYAUTH to always use the most secure authentication
method available, even when there is no git configuration telling us
to use a proxy. This allows the user to use environment variables to
configure a proxy that requires an authentication method different
from Basic.
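The fix is essentially to set the option unconditionally
when initializing a curl handle:

    curl_easy_setopt(result, CURLOPT_PROXYAUTH, CURLAUTH_ANY);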
Signed-off-by: Enrique A. Tobis <etobis@twosigma.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
fsck: it is OK for a tag and a commit to lack the body
When fsck validates a commit or a tag, it scans each line in the
header of the object using helper functions such as "starts_with()",
etc. that work on a NUL-terminated buffer, but before a1e920a0
(index-pack: terminate object buffers with NUL, 2014-12-08), the
validation functions were fed the object data in a piece of memory
that is not necessarily terminated with a NUL.
We added a helper function require_end_of_header() to be called at
the beginning of these validation functions to insist that the
object data contains an empty line before its end. The theory is
that the validating functions will notice and stop when they hit an
empty line as a normal end of header (or a required header line that
is missing) without scanning past the end of a potentially
non-NUL-terminated buffer.
But the theory forgot that in the older days, Git itself happily
created objects with only the header lines without a body. This
caused Git 2.2 and later to issue an unnecessary warning in some
existing repositories.
With a1e920a0, we do not need to require an empty line (or the body)
in these objects to safely parse and validate them. Drop the
offending "must have an empty line" check from this helper function,
while keeping the other check to make sure that there is no NUL in
the header part of the object, and adjust the name of the helper to
match what it does accordingly.
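The surviving check is essentially a scan for NUL within the
header (a sketch; the real function also reports which
object is broken):

    static int verify_headers(const char *buffer, unsigned long size)
    {
        unsigned long i;

        for (i = 0; i < size; i++)
            if (buffer[i] == '\0')
                return error("NUL byte in object header");
        return 0;
    }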
Noticed-by: Wolfgang Denk <wd@denx.de>
Helped-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Merge branch 'jk/make-fix-dependencies' into maint
Build clean-up.
* jk/make-fix-dependencies:
Makefile: silence perl/PM.stamp recipe
Makefile: avoid timestamp updates to GIT-BUILD-OPTIONS
Makefile: drop dependency between git-instaweb and gitweb
Merge branch 'pt/pull-tags-error-diag' into maint
There was dead code that used to handle "git pull --tags" and
show a special-cased error message, which was made irrelevant when
the semantics of the option changed back in the Git 1.9 days.
* pt/pull-tags-error-diag:
pull: remove --tags error in no merge candidates case
Merge branch 'jk/diagnose-config-mmap-failure' into maint
The configuration reader/writer uses the mmap(2) interface to access
the files; when it found a directory, it barfed with "Out of memory?".
* jk/diagnose-config-mmap-failure:
xmmap(): drop "Out of memory?"
config.c: rewrite ENODEV into EISDIR when mmap fails
config.c: avoid xmmap error messages
config.c: fix mmap leak when writing config
read-cache.c: drop PROT_WRITE from mmap of index
Merge branch 'jk/squelch-missing-link-warning-for-unreachable' into maint
Recent "git prune" traverses young unreachable objects to safekeep
old objects in the reachability chain from them, which sometimes
caused error messages that are unnecessarily alarming.
* jk/squelch-missing-link-warning-for-unreachable:
suppress errors on missing UNINTERESTING links
silence broken link warnings with revs->ignore_missing_links
add quieter versions of parse_{tree,commit}
Merge branch 'mm/rebase-i-post-rewrite-exec' into maint
"git rebase -i" fired post-rewrite hook when it shouldn't (namely,
when it was told to stop sequencing with 'exec' insn).
* mm/rebase-i-post-rewrite-exec:
t5407: use <<- to align the expected output
rebase -i: fix post-rewrite hook with failed exec command
rebase -i: demonstrate incorrect behavior of post-rewrite
The improved ARRAY_SIZE macro uses BARF_UNLESS_AN_ARRAY, which expands
to a valid check for recent gcc versions and to 0 for older gcc
versions but is not defined on non-gcc builds.
Non-gcc builds need this macro to expand to 0 as well. The current outer
test (defined(__GNUC__) && (__GNUC__ >= 3)) is a strictly weaker
condition than the inner test (GIT_GNUC_PREREQ(3, 1)) so we can omit the
outer test and cause the BARF_UNLESS_AN_ARRAY macro to be defined
correctly on non-gcc builds as well as gcc builds with older versions.
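The result is a single version check, roughly (a sketch of
the idea):

    #if GIT_GNUC_PREREQ(3, 1)
    /* &arr[0] degrades to a pointer: a different type from an array */
    # define BARF_UNLESS_AN_ARRAY(arr) \
            BUILD_ASSERT_OR_ZERO(!__builtin_types_compatible_p(__typeof__(arr), \
                                  __typeof__(&(arr)[0])))
    #else
    # define BARF_UNLESS_AN_ARRAY(arr) 0
    #endif
    #define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]) + BARF_UNLESS_AN_ARRAY(x))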
Signed-off-by: Charles Bailey <cbailey32@bloomberg.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
test-lib.sh: fix color support when tput needs ~/.terminfo
If tput needs ~/.terminfo for the current $TERM, then tput will
succeed before HOME is changed to $TRASH_DIRECTORY (causing color to
be set to 't') but fail afterward.
One possible way to fix this is to treat HOME like TERM: back up the
original value and temporarily restore it before say_color() runs
tput.
Instead, pre-compute and save the color control sequences before
changing either TERM or HOME. Use the saved control sequences in
say_color() rather than call tput each time. This avoids the need to
back up and restore the TERM and HOME variables, and it avoids the
overhead of a subshell and two invocations of tput per call to
say_color().
Signed-off-by: Richard Hansen <rhansen@bbn.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
docs: clarify that --encoding can produce invalid sequences
In the common case that the commit encoding matches the
output encoding, we do not touch the buffer at all, which
makes things much more efficient. But it might be unclear to
a consumer that we will pass through bogus sequences.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When a repository is first fetched as a shallow clone, either by
git-clone or by fetching into an empty repo, the server's capabilities
are not currently consulted. The client will send shallow requests even
if the server does not understand them, and the resulting error may be
unhelpful to the user. This change pre-emptively checks the server's
capabilities so we can exit with a helpful error if necessary.
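The check amounts to consulting the advertised capability
list before sending any shallow lines, e.g.:

    if (args->depth > 0 && !server_supports("shallow"))
        die("Server does not support shallow clients");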
Signed-off-by: Mike Edgar <adgar@google.com>
Reviewed-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>