Reported-by: Ramsay Jones <ramsay@ramsayjones.plus.com> Signed-off-by: Derrick Stolee <dstolee@microsoft.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
RelNotes 2.18: minor fix to entry about dynamically loading completions
It was not "newer versions of bash" but newer versions of
bash-completion that made commit 085e2ee0e6 (completion: load
completion file for external subcommand, 2018-04-29) both necessary
and possible.
Update the corresponding RelNotes entry accordingly.
Signed-off-by: SZEDER Gábor <szeder.dev@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
The code path that triggered that "BUG" really does not want to run
without an explicit commit message. In the case where we want to amend a
commit message, we have an *implicit* commit message, though: the one of
the commit to amend. Therefore, this code path should not even be
entered.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
rebase --root: demonstrate a bug while amending root commit messages
When splitting a repository and running `git rebase -i --root` to
reword the initial commit, Git dies with
BUG: sequencer.c:795: root commit without message.
Signed-off-by: Todd Zullinger <tmz@pobox.com> Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
The return value of ewah_read_mmap() is now an ssize_t,
since we could (in theory) process up to 32GB of data. This
would never happen in practice, but a corrupt or malicious
.bitmap or index file could convince us to do so.
Let's make sure that we don't stuff the value into an int,
which would cause us to incorrectly move our pointer
forward. We'd always move too little, since negative values
are used for reporting errors. So the worst case is just
that we end up reporting a corrupt file, not an
out-of-bounds read.
Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
The on-disk ewah format tells us how big the ewah data is,
and we blindly read that much from the buffer without
considering whether the mmap'd data is long enough, which
can lead to out-of-bounds reads.
Let's make sure we have data available before reading it,
both for the ewah header/footer as well as for the bit data
itself. In particular:
  - keep our ptr/len pair in sync as we move through the
    buffer, and check it before each read

  - check the size for integer overflow (this should be
    impossible on 64-bit, as the size is given as a 32-bit
    count of 8-byte words, but is possible on a 32-bit
    system)

  - return the number of bytes read as an ssize_t instead of
    an int, again to prevent integer overflow

  - compute the return value using a pointer difference;
    this should yield the same result as the existing code,
    but makes it more obvious that we got our computations
    right
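To illustrate the bounds-checking pattern, here is a standalone sketch only; the function name, header layout, and host-endian read are made up and simplified, not git's actual ewah code:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>

    /*
     * Bounds-checked read of a hypothetical "count of 8-byte words,
     * then that many words" blob from an mmap'd buffer.  Returns the
     * number of bytes consumed, or -1 on truncated or overflowing input.
     */
    static ssize_t read_blob(const unsigned char *map, size_t len)
    {
        const unsigned char *ptr = map;
        uint32_t word_count;
        size_t data_len;

        /* keep ptr/len in sync and check before each read */
        if (len < sizeof(word_count))
            return -1;
        memcpy(&word_count, ptr, sizeof(word_count));
        ptr += sizeof(word_count);
        len -= sizeof(word_count);

        /* guard the multiplication; it can overflow a 32-bit size_t */
        data_len = (size_t)word_count * sizeof(uint64_t);
        if (data_len / sizeof(uint64_t) != word_count)
            return -1;
        if (len < data_len)
            return -1;
        ptr += data_len;

        /* compute the result as a pointer difference, returned as ssize_t */
        return ptr - map;
    }

    int main(void)
    {
        unsigned char buf[4 + 2 * 8] = { 0 };
        uint32_t two = 2;

        memcpy(buf, &two, sizeof(two));                 /* header: 2 words */
        printf("%zd\n", read_blob(buf, sizeof(buf)));   /* 20 */
        printf("%zd\n", read_blob(buf, 10));            /* -1: truncated */
        return 0;
    }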
The included test is far from comprehensive, as it just
picks a static point at which to truncate the generated
bitmap. But in practice this will hit in the middle of an
ewah and make sure we're at least exercising this code.
Reported-by: Luat Nguyen <root@l4w.io> Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
Support for the --set-upstream option was removed in 52668846ea
(builtin/branch: stop supporting the "--set-upstream" option,
2017-08-17). The change did not completely remove the option
due to an issue noted in that commit's log message.
So, a test was added to ensure that a command which uses the
'--set-upstream' option fails instead of silently acting as an alias
for the '--set-upstream-to' option due to option parsing features.
To avoid confusion, clarify that the option is disabled intentionally
in the corresponding test description.
The test is expected to be around as long as we intentionally fail
on seeing the '--set-upstream' option, which in turn we expect to
do for a period of time, after which we can be sure that existing
users of '--set-upstream' are aware that the option is no
longer supported.
Signed-off-by: Kaartic Sivaraam <kaartic.sivaraam@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
git-credential-netrc: use in-tree Git.pm for tests
The netrc test.pl script calls git-credential-netrc which imports the
Git module. Pass GITPERLLIB to git-credential-netrc via PERL5LIB to
ensure the in-tree Git module is used for testing.
Signed-off-by: Luis Marsano <luis.marsano@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
The Makefile tweak NO_ICONV is meant to allow Git to be built without
iconv in case iconv is not installed or is otherwise dysfunctional.
However, NO_ICONV's disabling of iconv is incomplete and can incorrectly
allow "-liconv" to slip into the linker flags when NEEDS_LIBICONV is
defined, which breaks the build when iconv is not installed.
On some platforms, iconv lives directly in libc, whereas, on others it
resides in libiconv. For the latter case, NEEDS_LIBICONV instructs the
Makefile to add "-liconv" to the linker flags. config.mak.uname
automatically defines NEEDS_LIBICONV for platforms which require it.
The adding of "-liconv" is done unconditionally, despite NO_ICONV.
Work around this problem by making NO_ICONV take precedence over
NEEDS_LIBICONV.
Reported-by: Mahmoud Al-Qudsi <mqudsi@neosmart.net> Signed-off-by: Eric Sunshine <sunshine@sunshineco.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
Some of our tests try to make sure Git behaves sensibly in a
read-only directory, by dropping 'w' permission bit before doing a
test and then restoring it after it is done. The latter is needed
for the test framework to clean after itself without leaving a
leftover directory that cannot be removed.
Ancient parts of tests however arrange the above with
    chmod a-w . &&
    ... do the test ...
    status=$?
    chmod 775 .
    (exit $status)
which obviously would not work if the test somehow dies before it
has the chance to do "chmod 775". Rewrite them by following a more
robust pattern recently written tests use, which is
    test_when_finished "chmod 775 ." &&
    chmod a-w . &&
    ... do the test ...
submodule: unset core.worktree if no working tree is present
When a submodule's work tree is removed, we should unset its core.worktree
setting as the worktree is no longer present. This is not just in line
with the conceptual view of submodules, but it fixes an inconvenience
for looking at submodules that are not checked out:
    git clone --recurse-submodules git://github.com/git/git && cd git &&
    git checkout --recurse-submodules v2.13.0
    git -C .git/modules/sha1collisiondetection log
    fatal: cannot chdir to '../../../sha1collisiondetection': \
        No such file or directory
With this patch applied, the final call to git log works instead of dying
in its setup, as the checkout will unset the core.worktree setting such
that the following log will be run in a bare repository.
This patch covers all commands that are in the unpack machinery, i.e.
checkout, read-tree, reset. A follow up patch will address
"git submodule deinit", which will also make use of the new function
submodule_unset_core_worktree(), which is why we expose it in this patch.
Signed-off-by: Stefan Beller <sbeller@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
submodule: fix NULL correctness in renamed broken submodules
When fetching with recursion into submodules, the fetch logic inspects
the superproject to determine which submodules actually need to be
fetched. This is
tricky for submodules that were renamed in the fetched range of commits.
This was implemented in c68f8375760 (implement fetching of moved
submodules, 2017-10-16), and this patch fixes a mistake in the logic
there.
When the warning is printed, the `name` might be NULL as
default_name_or_path can return NULL, so fix the warning to use the path
as obtained from the diff machinery, as that is not NULL.
While at it, make sure we only attempt to load the submodule if a git
directory of the submodule is found as default_name_or_path will return
NULL in case the git directory cannot be found. Note that passing NULL
to submodule_from_name is just a semantic error, as submodule_from_name
accepts NULL as a value, but then the return value is not the submodule
that was asked for, but some arbitrary other submodule. (Cf. 'config_from'
in submodule-config.c: "If any parameter except the cache is a NULL
pointer just return the first submodule. Can be used to check whether
there are any submodules parsed.")
Reported-by: Duy Nguyen <pclouds@gmail.com> Helped-by: Duy Nguyen <pclouds@gmail.com> Helped-by: Heiko Voigt <hvoigt@hvoigt.net> Signed-off-by: Stefan Beller <sbeller@google.com> Acked-by: Heiko Voigt <hvoigt@hvoigt.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
"index-pack --strict" has been taught to make sure that it runs the
final object integrity checks after making the freshly indexed
packfile available to itself.
* jk/index-pack-maint:
  index-pack: correct install_packed_git() args
  index-pack: handle --strict checks of non-repo packs
  prepare_commit_graft: treat non-repository as a noop
fetch-pack: test explicitly that --all can fetch tag references pointing to non-commits
Fetch-pack --all became broken with respect to unusual tags in
5f0fc64513 (fetch-pack: eliminate spurious error messages, 2012-09-09),
and was fixed only recently in e9502c0a7f (fetch-pack: don't try to
fetch peel values with --all, 2018-06-11). However the test added in
e9502c0a7f does not explicitly cover all funky cases.
In order to be sure fetching funky tags will never break, let's
explicitly test all relevant cases with 4 tag objects pointing to 1) a
blob, 2) a tree, 3) a commit, and 4) another tag object. The referenced
tag objects themselves are referenced from under regular refs/tags/*
namespace. Before e9502c0a7f `fetch-pack --all` was failing e.g. this way:
    .../git/t/trash directory.t5500-fetch-pack/fetchall$ git fetch-pack --all ..
    fatal: A git upload-pack: not our ref 038f48ad...
    fatal: The remote end hung up unexpectedly
Helped-by: Junio C Hamano <gitster@pobox.com> Signed-off-by: Kirill Smelkov <kirr@nexedi.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
The buffer being passed to zlib includes a NUL terminator that git
needs to keep in place. unpack_compressed_entry() attempts to detect
the case that the source buffer hasn't been fully consumed by
checking to see if the destination buffer has been over consumed.
This has become a problem now that more recent zlib patches poison the
unconsumed portions of the buffer, which overwrites the NUL byte even
though zlib correctly returns the length and status.
Let's place the NUL at the end of the buffer after inflate returns
to ensure that it doesn't result in problems for git even if it has
been overwritten by zlib.
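A minimal standalone illustration of the idea, using zlib's one-shot compress()/uncompress() helpers instead of git's streaming inflate code (the payload and buffer names are made up):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <zlib.h>

    int main(void)
    {
        const char *msg = "loose object payload";
        uLong srclen = strlen(msg);
        uLong zlen = compressBound(srclen);
        unsigned char *zbuf = malloc(zlen);
        uLong dstlen = srclen;
        unsigned char *dst = malloc(dstlen + 1);    /* one extra byte for the NUL */

        if (!zbuf || !dst ||
            compress(zbuf, &zlen, (const unsigned char *)msg, srclen) != Z_OK ||
            uncompress(dst, &dstlen, zbuf, zlen) != Z_OK)
            return 1;

        /*
         * Do not rely on a NUL written before inflation surviving: zlib
         * may scribble over the unconsumed tail of the destination
         * buffer.  Terminate the buffer after zlib is done with it.
         */
        dst[dstlen] = '\0';
        printf("%s\n", dst);

        free(zbuf);
        free(dst);
        return 0;
    }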
Signed-off-by: Jeremy Linton <lintonrjeremy@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
RelNotes 2.18: clarify where directory rename detection applies
Mention that this feature works with some commands (merge and cherry-pick,
implying that it also works with commands that build on these like rebase
-m and rebase -i). Explicitly mentioning two commands hopefully implies
that it may not always work with other commands (am, and rebase without
flags that imply either -m or -i).
Also, since the directory rename detection from this cycle was
specifically added in merge-recursive and not diffcore-rename, remove the
'in "diff" family" phrase from the note. (Folks have requested in the
past that `git diff` detect directory renames and somehow simplify its
output, so it may be helpful to avoid implying that diff has any new
capability here.)
Signed-off-by: Elijah Newren <newren@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
The "autodie" module was added in Perl 5.10.1, but our INSTALL
document says "version 5.8 or later is needed".
As discussed in <87efhfvxzu.fsf@evledraar.gmail.com> this script is in
contrib/, so we might not want to apply that policy; however, in this
case "autodie" was recently added as a "gratuitous safeguard" in
786ef50a23 ("git-credential-netrc: accept gpg option", 2018-05-12)
(see <CAHqJXRE8OKSKcck1APHAHccLZhox+tZi8nNu2RA74RErX8s3Pg@mail.gmail.com>).
Looking at it more carefully, the addition of "autodie" inadvertently
introduced a logic error, since having it is equivalent to this patch:
    @@ -245,10 +244,10 @@ sub load_netrc {
         if ($gpgmode) {
             my @cmd = ($options{'gpg'}, qw(--decrypt), $file);
             log_verbose("Using GPG to open $file: [@cmd]");
    -        open $io, "-|", @cmd;
    +        open $io, "-|", @cmd or die "@cmd: $!";
         } else {
             log_verbose("Opening $file...");
    -        open $io, '<', $file;
    +        open $io, '<', $file or die "$file: $!";
         }
         # nothing to do if the open failed (we log the error later)
As shown in the context, the intent of that code is not to die but to
log the error later.
Per my reading of the file this was the only thing autodie was doing
in this file (there was no other code it altered). So let's remove it,
both to fix the logic error and to get rid of the dependency.
git-p4 originally would fetch changes in one query. On large repos this
could fail because of the limits that Perforce imposes on the number of
items returned and the number of queries in the database.
To fix this, git-p4 learned to query changes in blocks of 512 changes.
However, this can be very slow - if you have a few million changes,
with each chunk taking about a second, it can take an hour or so.
Although it's possible to tune this value manually with the
"--changes-block-size" option, it's far from obvious to ordinary users
that this is what needs doing.
This change alters the block size dynamically by looking for the
specific error messages returned from the Perforce server, and reducing
the block size if the error is seen, either to the limit reported by the
server, or to half the current block size.
That means we can start out with a very large block size, and then let
it automatically drop down to a value that works without error, while
still failing correctly if some other error occurs.
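A rough sketch of that back-off loop (written in C purely for illustration; git-p4 itself is Python, and the names below are invented):

    #include <stdio.h>

    /* Pretend server: fails when asked for more than 'limit' changes. */
    static int query_changes(int block_size, int limit, int *reported_limit)
    {
        if (block_size > limit) {
            *reported_limit = limit;    /* some errors report the limit */
            return -1;
        }
        return block_size;              /* success: got this many changes */
    }

    int main(void)
    {
        int block_size = 1 << 20;       /* start out very large */
        int server_limit = 512;         /* unknown to the client */

        for (;;) {
            int reported = 0;
            int got = query_changes(block_size, server_limit, &reported);
            if (got >= 0) {
                printf("fetched a block of %d changes\n", got);
                break;
            }
            /* shrink to the server-reported limit, else halve and retry */
            block_size = reported ? reported : block_size / 2;
            if (!block_size)
                return 1;               /* some other error: give up */
        }
        return 0;
    }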
Signed-off-by: Luke Diamand <luke@diamand.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
git-p4: raise exceptions from p4CmdList based on error from p4 server
This change lays some groundwork for better handling of rowcount errors
from the server, where it fails to send us results because we requested
too many.
It adds an option to p4CmdList() to return errors as a Python exception.
The exceptions are derived from P4Exception (something went wrong),
P4ServerException (the server sent us an error code) and
P4RequestSizeException (we requested too many rows/results from the
server database).
This makes the code that handles the errors a bit easier.
The default behavior is unchanged; the new code is enabled with a flag.
Signed-off-by: Luke Diamand <luke@diamand.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
Currently when p4 fails to run, git-p4 just crashes with an obscure
error message.
For example, if the P4 ticket has expired, you get:
Error: Cannot locate perforce checkout of <path> in client view
This change checks whether git-p4 can talk to the Perforce server when
the first P4 operation is attempted, and tries to print a meaningful
error message if it fails.
Signed-off-by: Luke Diamand <luke@diamand.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
git-p4: add option to disable syncing of p4/master with p4
Add an option to the git-p4 submit command to disable syncing
with Perforce.
This is useful for the case where a git-p4 mirror has been set up
on a server somewhere, running from (e.g.) cron, and developers
then clone from this. Having the local cloned copy also sync
from Perforce just isn't useful.
Signed-off-by: Luke Diamand <luke@diamand.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
git-p4: disable-rebase: allow setting this via configuration
This just lets you set the --disable-rebase option with the
git configuration option git-p4.disableRebase. If you're
using this option, you probably want to set it all the time
for a given repo.
Signed-off-by: Luke Diamand <luke@diamand.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
In daily work with multiple local git branches, the usual way to
submit only a specific commit was to cherry-pick the commit onto
master and then run git-p4 submit. It can be very annoying to switch
between local branches and master, only to submit one commit. The
proposed new way is to directly select the commit you want to
submit.
Add option --commit to command 'git-p4 submit' in order to submit
only specified commit(s) in p4.
In daily work developing software with long compilation times, one
may not want to rebase their local git tree, in order to avoid a long
recompilation.
Add option --disable-rebase to command 'git-p4 submit' in order to
disable rebase after submission.
Thanks-to: Cedric Borgese <cedric.borgese@gmail.com> Reviewed-by: Luke Diamand <luke@diamand.org> Signed-off-by: Romain Merland <merlorom@yahoo.fr> Signed-off-by: Junio C Hamano <gitster@pobox.com>
builtin/send-pack didn't call git_default_config, and because of this
git push --signed didn't respect the username and email in gitconfig in
the HTTP transport.
Signed-off-by: Masaya Suzuki <masayasuzuki@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
merge-recursive: add pointer about unduly complex looking code
handle_change_delete() has a block of code displaying one of four nearly
identical messages. Each contains about half a dozen variable
interpolations, which use nearly identical variables as well. Someone
trying to parse this may be slowed down trying to parse the differences
and why they are here; help them out by adding a comment explaining the
differences.
Further, point out that this code structure isn't collapsed into something
more concise and readable for the programmer, because we want to keep full
messages intact in order to make translators' jobs much easier.
Signed-off-by: Elijah Newren <newren@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
merge-recursive: rename conflict_rename_*() family of functions
These functions were added because processing of these conflicts needed
to be deferred until process_entry() in order to get D/F conflicts and
such right. The number of these has grown over time, and now includes
some whose names are misleading:
  * conflict_rename_normal() is for handling normal file renames; a
    typical rename may need content merging, but we expect conflicts
    from that to be more the exception than the rule.

  * conflict_rename_via_dir() will not be a conflict; it was just an
    add that turned into a move due to directory rename detection.
    (If there was a file in the way of the move, that would have been
    detected and reported earlier.)

  * conflict_rename_rename_2to1 and conflict_rename_add (the latter
    of which doesn't exist yet but has been submitted before and I
    intend to resend) technically might not be conflicts if the
    colliding paths happen to match exactly.
Rename this family of functions to handle_rename_*().
Also rename handle_renames() to detect_and_process_renames() both to make
it clearer what it does, and to differentiate it as a pre-processing step
from all the handle_rename_*() functions which are called from
process_entry().
Acked-by: Johannes Schindelin <Johannes.Schindelin@gmx.de> Signed-off-by: Elijah Newren <newren@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
merge-recursive: clarify the rename_dir/RENAME_DIR meaning
We had an enum of rename types which included RENAME_DIR; this name felt
misleading since it was not about an entire directory but was a status for
each individual file add that occurred within a renamed directory.
Since this type is for signifying that the files in question were being
renamed due to directory rename detection, rename this enum value to
RENAME_VIA_DIR.
Make a similar change to the conflict_rename_dir() function, and add a
comment to the top of that function explaining its purpose (it may not be
quite as obvious as for the other conflict_rename_*() functions).
Acked-by: Johannes Schindelin <Johannes.Schindelin@gmx.de> Signed-off-by: Elijah Newren <newren@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
Various refactorings throughout the code have left lots of alignment
issues that were driving me crazy; fix them.
Acked-by: Johannes Schindelin <Johannes.Schindelin@gmx.de> Signed-off-by: Elijah Newren <newren@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
completion: correct zsh detection when run from git-completion.zsh
v2.18.0-rc0~90^2 (completion: reduce overhead of clearing cached
--options, 2018-04-18) worked around a bug in bash's "set" builtin on
MacOS by using compgen instead. It was careful to avoid breaking zsh
by guarding this workaround with
    if [[ -n ${ZSH_VERSION-} ]]
Alas, this interacts poorly with git-completion.zsh's bash emulation:
    ZSH_VERSION='' . "$script"
Correct it by instead using a new GIT_SOURCING_ZSH_COMPLETION shell
variable to detect whether git-completion.bash is being sourced from
git-completion.zsh. This way, the zsh variant is used both when run
from zsh directly and when run via git-completion.zsh.
Reproduction recipe:
1. cd git/contrib/completion && cp git-completion.zsh _git
2. Put the following in a new ~/.zshrc file:
With this patch:

    Triggers nice git-completion.bash based tab completion

Without:

    contrib/completion/git-completion.bash:354: read-only variable: QISUFFIX
    zsh:12: command not found: ___main
    zsh:15: _default: function definition file not found
    _dispatch:70: bad math expression: operand expected at `/usr/bin/g...'
    Segmentation fault
Reported-by: Rick van Hattem <wolph@wol.ph> Reported-by: Dave Borowitz <dborowitz@google.com> Signed-off-by: SZEDER Gábor <szeder.dev@gmail.com> Signed-off-by: Jonathan Nieder <jrnieder@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
The function does not start taking the repository object as a
parameter until the v2.18 track. Make the topic mergeable to the v2.17
maintenance track by dropping it.
fetch-pack: don't try to fetch peel values with --all
When "fetch-pack --all" sees a tag-to-blob on the remote, it
tries to fetch both the tag itself ("refs/tags/foo") and the
peeled value that the remote advertises ("refs/tags/foo^{}").
Asking for the object pointed to by the latter can cause
upload-pack to complain with "not our ref", since it does
not mark the peeled objects with the OUR_REF (unless they
were at the tip of some other ref).
Arguably upload-pack _should_ be marking those peeled
objects. But it never has in the past, since clients would
generally just ask for the tag and expect to get the peeled
value along with it. And that's how "git fetch" works, as
well as older versions of "fetch-pack --all".
The problem was introduced by 5f0fc64513 (fetch-pack:
eliminate spurious error messages, 2012-09-09). Before then,
the matching logic was something like:
    if (refname is ill-formed)
        do nothing
    else if (doing --all)
        always consider it matched
    else
        look through list of sought refs for a match
That commit wanted to flip the order of the second two arms
of that conditional. But we ended up with:
    if (refname is ill-formed)
        do nothing
    else
        look through list of sought refs for a match
    if (--all and no match so far)
        always consider it matched
it. That means that an ill-formed ref will trigger the --all
conditional block, even though we should just be ignoring
it. We can fix that by having a single "else" with all of
the well-formed logic, that checks the sought refs and
"--all" in the correct order.
Reported-by: Kirill Smelkov <kirr@nexedi.com> Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
Commit 159e7b080b (fsck: detect gitmodules files,
2018-05-02) taught fsck to look at the content of
.gitmodules files. If the object turns out not to be a blob
at all, we just complain and punt on checking the content.
And since this was such an obvious and trivial code path, I
didn't even bother to add a test.
Except it _does_ do one non-trivial thing, which is call the
report() function, which wants us to pass a pointer to a
"struct object". Which we don't have (we have only a "struct
object_id"). So we erroneously pass a NULL object to
report(), which gets dereferenced and causes a segfault.
It seems like we could refactor report() to just take the
object_id itself. But we pass the object pointer along to
a callback function, and indeed this ends up in
builtin/fsck.c's objreport() which does want to look at
other parts of the object (like the type).
So instead, let's just use lookup_unknown_object() to get
the real "struct object", and pass that.
Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
t7415: don't bother creating commit for symlink test
Early versions of the fsck .gitmodules detection code
actually required a tree to be at the root of a commit for
it to be checked for .gitmodules. What we ended up with in 159e7b080b (fsck: detect gitmodules files, 2018-05-02),
though, finds a .gitmodules file in _any_ tree (see that
commit for more discussion).
As a result, there's no need to create a commit in our
tests. Let's drop it in the name of simplicity. And since
that was the only thing referencing $tree, we can pull our
tree creation out of a command substitution.
Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
The commands that make use of the --git-completion-helper feature can
now produce a lot of --no-xxx options that a command can take. This in
many cases could nearly double the number of completable options, using
more screen estate and also making it harder to search for the wanted option.
This patch attempts to mitigate that by collapsing extra --no-
options, the ones that are added by --git-completion-helper and not in
original struct option arrays. The "--no-..." option will be displayed
in this case to hint about more options, e.g.
Corner case: to make sure that people will never accidentally complete
the fake option "--no-..." there must be one real --no- in the first
complete listing even if it's not from the original struct option.
PS. This could be made simpler with ";&" to fall through from the
"--no-*" block and share the code, but ";&" is not available on bash-3
(i.e. Mac)
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
A regression introduced in 8462ff43 ("convert_to_git():
safe_crlf/checksafe becomes int conv_flags", 2018-01-13) back in Git
2.17 cycle caused autocrlf rewrites to produce a warning message
despite setting safecrlf=false.
Signed-off-by: Anthony Sottile <asottile@umich.edu> Acked-By: Torsten Bögershausen <tboegi@web.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
tests: make forging GPG signed commits and tags more robust
A couple of test scripts create forged GPG signed commits or tags to
check that such forgery can't fool various git commands' signature
verification. All but one of those test scripts are prone to
occasional failures because the forgery creates a bogus GPG signature,
and git commands error out with an unexpected error message, e.g.
"Commit deadbeef does not have a GPG signature" instead of "... has a
bad GPG signature".
't5573-pull-verify-signatures.sh', 't7510-signed-commit.sh' and
't7612-merge-verify-signatures.sh' create forged signed commits like
this:
    git commit -S -m "bad on side" &&
    git cat-file commit side-bad >raw &&
    sed -e "s/bad/forged bad/" raw >forged &&
    git hash-object -w -t commit forged >forged.commit
On rare occasions the given pattern occurs not only in the commit
message but in the GPG signature as well, and after it's replaced in
the signature the resulting signature becomes invalid, GPG will report
CRC error and that it couldn't find any signature, which will then
ultimately cause the test failure.
Since in all three cases the pattern to be replaced during the forgery
is the first word of the commit message's subject line, and since the
GPG signature in the commit object is indented by a space, let's just
anchor those patterns to the beginning of the line to prevent this
issue.
The test script 't7030-verify-tag.sh' creates a forged signed tag
object in a similar way by replacing the pattern "seventh", but the
GPG signature in tag objects is not indented by a space, so the above
solution is not applicable in this case. However, in the tag object
in question the pattern "seventh" occurs not only in the tag message
but in the 'tag' header as well. To create a forged tag object it's
sufficient to replace only one of the two occurrences, so modify the
sed script to limit the pattern to the 'tag' header (i.e. a line
beginning with "tag ", which, because of the space character, can
never occur in the base64-encoded GPG signature).
Note that the forgery in 't7004-tag.sh' is not affected by this issue:
while 't7004' does create a forged signed tag in much the same way,
it replaces "signed-tag" in the tag object, which, because of the '-'
character, can never occur in the base64-encoded GPG signature.
Signed-off-by: SZEDER Gábor <szeder.dev@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
The two tests 'detect fudged signature' and 'detect fudged signature
with NUL' in 't7510-signed-commit.sh' check that 'git verify-commit'
errors out when encountering a forged commit, but they do so by
running
    ! git verify-commit ...
Use 'test_must_fail' instead, because that would catch potential
unexpected errors like a segfault as well.
Signed-off-by: SZEDER Gábor <szeder.dev@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
refspec: initialize `refspec_item` in `valid_fetch_refspec()`
We allocate a `struct refspec_item` on the stack without initializing
it. In particular, its `dst` and `src` members will contain some random
data from the stack. When we later call `refspec_item_clear()`, it will
call `free()` on those pointers. So if the call to `parse_refspec()` did
not assign to them, we will be freeing some random "pointers". This is
undefined behavior.
To the best of my understanding, this cannot currently be triggered by
user-provided data. And for what it's worth, the test-suite does not
trigger this with SANITIZE=address. It can be provoked by calling
`valid_fetch_refspec(":*")`.
Zero the struct, as is done in other users of `struct refspec_item` by
using the refspec_item_init() initialization function.
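The hazard is the classic pattern of an uninitialized automatic struct whose pointer members later get passed to free(); a tiny standalone illustration (the struct and function names are stand-ins, not git's actual refspec code):

    #include <stdlib.h>
    #include <string.h>

    struct item {
        char *src;
        char *dst;
    };

    static void item_clear(struct item *it)
    {
        free(it->src);  /* free(NULL) is fine; free(garbage) is not */
        free(it->dst);
        it->src = it->dst = NULL;
    }

    int main(void)
    {
        struct item it;

        /*
         * Without this, src/dst hold indeterminate stack bytes, and the
         * free() calls in item_clear() are undefined behavior whenever
         * parsing bails out before assigning them.
         */
        memset(&it, 0, sizeof(it));

        /* ... parsing may or may not fill in it.src / it.dst ... */

        item_clear(&it);
        return 0;
    }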
Signed-off-by: Martin Ågren <martin.agren@gmail.com> Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
Re-add the non-fatal version of refspec_item_init_or_die() renamed
away in an earlier change to get a more minimal diff. This should be
used by callers that have their own error handling.
This new function could be marked "static" since nothing outside of
refspec.c uses it, but expecting future use of it, let's make it
available to other users.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
add -p: fix counting empty context lines in edited patches
recount_edited_hunk() introduced in commit 2b8ea7f3c7 ("add -p:
calculate offset delta for edited patches", 2018-03-05) required all
context lines to start with a space; empty lines were not counted. This
was intended to avoid any recounting problems if the user had
introduced empty lines at the end when editing the patch. However, this
introduced a regression into 'git add -p', as it seems it is common for
editors to strip the trailing whitespace from empty context lines when
patches are edited, thereby introducing empty lines that should be
counted. 'git apply' knows how to deal with such empty lines, and POSIX
states that whether or not there is a space on an empty context line
is implementation-defined [1].
Fix the regression by counting lines that consist solely of a newline
as well as lines starting with a space as context lines and add a test
to prevent future regressions.
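A toy illustration of the counting rule (not the actual recount_edited_hunk() code; the hunk content below is made up):

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* hypothetical edited hunk body; the blank line lost its leading space */
        const char *hunk[] = { " context", "-old", "+new", "", " more context" };
        int old_lines = 0, new_lines = 0;

        for (size_t i = 0; i < sizeof(hunk) / sizeof(*hunk); i++) {
            const char *line = hunk[i];
            /* an empty line (or a bare newline) still counts as context */
            int is_context = line[0] == ' ' || line[0] == '\0' ||
                             !strcmp(line, "\n");

            if (is_context || line[0] == '-')
                old_lines++;
            if (is_context || line[0] == '+')
                new_lines++;
        }
        printf("@@ -1,%d +1,%d @@\n", old_lines, new_lines);
        return 0;
    }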
Reported-by: Mahmoud Al-Qudsi <mqudsi@neosmart.net> Reported-by: Oliver Joseph Ash <oliverjash@gmail.com> Reported-by: Jeff Felchner <jfelchner1@gmail.com> Signed-off-by: Phillip Wood <phillip.wood@dunelm.org.uk> Signed-off-by: Junio C Hamano <gitster@pobox.com>
"git pull -recurse-submodules --rebase", when the submodule
repository's history did not have anything common between ours and
the upstream's, failed to execute. We need to fetch from them to
continue even in such a case.
* jt/submodule-pull-recurse-rebase:
  submodule: do not pass null OID to setup_revisions
As there are plans to implement other ref storage systems,
let's use a way to remove remote refs that does not depend
on refs being files.
This makes it clear to readers that this test does not
depend on which ref backend is used.
Suggested-by: Michael Haggerty <mhagger@alum.mit.edu> Helped-by: Jeff King <peff@peff.net> Signed-off-by: Christian Couder <chriscool@tuxfamily.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
upload-pack: reject shallow requests that would return nothing
Shallow clones with --shallow-since or --shallow-exclude work by
running rev-list to get all reachable commits, then draw a boundary
between reachable and unreachable and send "shallow" requests based on
that.
The code does miss one corner case: if rev-list returns nothing, we'll
have no border and we'll send no shallow requests back to the client
(i.e. no history cuts). This essentially means a full clone (or a full
branch if the client requests just one branch). One example is when the
oldest commit is older than what is specified by --shallow-since.
To avoid this, if rev-list returns nothing, we abort the clone/fetch.
The user could adjust their request (e.g. --shallow-since further back
in the past) and retry.
Another possible option for this case is to fall back to a default
depth (like depth 1). But I don't like too much magic that way because
we may return something unexpected to the user. If they request
"history since 2008" and we return a single depth at 2000, that might
break stuff for them. It is better to tell them that something is
wrong and let them take the best course of action.
Note that we need to die() in get_shallow_commits_by_rev_list()
instead of just checking for empty result from its caller
deepen_by_rev_list() and handling the error there. The reason is,
empty result could be a valid case: if you have commits in year 2013
and you request --shallow-since=year.2000 then you should get a full
clone (i.e. empty result).
Reported-by: Andreas Krey <a.krey@gmx.de> Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>