git-credential-netrc: use in-tree Git.pm for tests
The netrc test.pl script calls git-credential-netrc which imports the
Git module. Pass GITPERLLIB to git-credential-netrc via PERL5LIB to
ensure the in-tree Git module is used for testing.
Signed-off-by: Luis Marsano <luis.marsano@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
The Makefile tweak NO_ICONV is meant to allow Git to be built without
iconv in case iconv is not installed or is otherwise dysfunctional.
However, NO_ICONV's disabling of iconv is incomplete and can incorrectly
allow "-liconv" to slip into the linker flags when NEEDS_LIBICONV is
defined, which breaks the build when iconv is not installed.
On some platforms, iconv lives directly in libc, whereas on others it
resides in libiconv. For the latter case, NEEDS_LIBICONV instructs the
Makefile to add "-liconv" to the linker flags. config.mak.uname
automatically defines NEEDS_LIBICONV for platforms which require it.
The adding of "-liconv" is done unconditionally, despite NO_ICONV.
Work around this problem by making NO_ICONV take precedence over
NEEDS_LIBICONV.
Reported-by: Mahmoud Al-Qudsi <mqudsi@neosmart.net> Signed-off-by: Eric Sunshine <sunshine@sunshineco.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
Some of our tests try to make sure Git behaves sensibly in a
read-only directory, by dropping 'w' permission bit before doing a
test and then restoring it after it is done. The latter is needed
for the test framework to clean after itself without leaving a
leftover directory that cannot be removed.
Ancient parts of tests, however, arrange the above with
chmod a-w . &&
... do the test ...
status=$?
chmod 775 .
(exit $status)
which obviously would not work if the test somehow dies before it
has the chance to do "chmod 775". Rewrite them by following a more
robust pattern recently written tests use, which is
test_when_finished "chmod 775 ." &&
chmod a-w . &&
... do the test ...
log: prevent error if line range ends past end of file
If the -L option is used to specify a line range in git log, and the end
of the range is past the end of the file, git will fail with a fatal
error. This commit prevents such behaviour - instead we perform the log
for existing lines within the specified range.
This commit also fixes a corner case where -L ,-n:file would be treated
as a log over the whole file. Now we treat this as -L 1,-n:file and
log the first line of the file instead.
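To illustrate (the file name and line numbers here are hypothetical,
not taken from the patch): with a 20-line file COPYING,

    git log -L 15,50:COPYING   # used to die; now shows the log for lines 15-20
    git log -L ,-3:COPYING     # now treated as -L 1,-3:COPYING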
Signed-off-by: Isabella Stephens <istephens@atlassian.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
blame: prevent error if range ends past end of file
If the -L option is used to specify a line range in git blame, and the
end of the range is past the end of the file, git will fail with a fatal
error. This commit prevents such behavior - instead we display the blame
for existing lines within the specified range. Tests are amended
accordingly.
This commit also fixes two corner cases. Blaming -L n,-(n+1) now blames
the first n lines of a file rather than from n to the end of the file.
Blaming -L ,-n is now treated as -L 1,-n and blames the first line of
the file, rather than blaming the whole file.
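To illustrate with a hypothetical 20-line file COPYING (names and
numbers are illustrative only):

    git blame -L 15,50 COPYING   # used to die; now blames lines 15-20
    git blame -L 5,-6 COPYING    # now blames the first 5 lines
    git blame -L ,-6 COPYING     # now treated as -L 1,-6, blaming line 1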
Signed-off-by: Isabella Stephens <istephens@atlassian.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
Introduce the new files fetch-negotiator.{h,c}, which contains an API
behind which the details of negotiation are abstracted. Currently, only
one algorithm is available: the existing one.
This patch is written to be easily reviewed: static functions are
moved verbatim from fetch-pack.c to negotiator/default.c, and it can be
seen that the lines replaced by negotiator->X() calls are present in the
X() functions respectively.
Signed-off-by: Jonathan Tan <jonathantanmy@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
fetch-pack: move common check and marking together
When receiving 'ACK <object-id> continue' for a common commit, check
whether the commit was already known to be common and, if not, mark it
as such up front. This should make future refactoring of how the
information about common commits is stored more straightforward.
No visible change intended.
Signed-off-by: Jonathan Tan <jonathantanmy@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
Reduce the number of global variables by making the priority queue and
the count of non-common commits in it local, passing them as a struct to
various functions where necessary.
This also helps in the case that fetch_pack() is invoked twice in the
same process (when tag following is required when using a transport that
does not support tag following), in that different priority queues will
now be used in each invocation, instead of reusing the possibly
non-empty one.
Signed-off-by: Jonathan Tan <jonathantanmy@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
In negotiation using protocol v2, fetch-pack sometimes does not make
full use of the information obtained in the ref advertisement:
specifically, that if the server advertises a commit that the client
also has, the client never needs to inform the server that it has the
commit's parents, since it can just tell the server that it has the
advertised commit and it knows that the server can and will infer the
rest.
This is because, in do_fetch_pack_v2(), rev_list_insert_ref_oid() is
invoked before mark_complete_and_common_ref(). This means that if we
have a commit that is both our ref and their ref, it would be enqueued
by rev_list_insert_ref_oid() as SEEN, and since it is thus already SEEN,
mark_complete_and_common_ref() would not enqueue it.
If mark_complete_and_common_ref() were invoked first, as it is in
do_fetch_pack() for protocol v0, then mark_complete_and_common_ref()
would enqueue it with COMMON_REF | SEEN. The addition of COMMON_REF
ensures that its parents are not sent as "have" lines.
Change the order in do_fetch_pack_v2() to be consistent with
do_fetch_pack(), and to avoid sending unnecessary "have" lines.
Signed-off-by: Jonathan Tan <jonathantanmy@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
When "ACK %s ready" is received, find_common() clears rev_list in an
attempt to stop further "have" lines from being sent [1]. It is much
more readable to explicitly break from the loop instead.
So explicitly break from the loop, and make the clearing of the rev_list
happen unconditionally.
[1] The rationale is further described in the originating commit f2cba9299b ("fetch-pack: Finish negotation if remote replies "ACK %s
ready"", 2011-03-14).
Signed-off-by: Jonathan Tan <jonathantanmy@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
submodule: unset core.worktree if no working tree is present
When a submodule's work tree is removed, we should unset its core.worktree
setting as the worktree is no longer present. This is not just in line
with the conceptual view of submodules, but it fixes an inconvenience
for looking at submodules that are not checked out:
git clone --recurse-submodules git://github.com/git/git && cd git &&
git checkout --recurse-submodules v2.13.0
git -C .git/modules/sha1collisiondetection log
fatal: cannot chdir to '../../../sha1collisiondetection': \
No such file or directory
With this patch applied, the final call to git log works instead of dying
in its setup, as the checkout will unset the core.worktree setting such
that the following log will be run in a bare repository.
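In effect, for the example above the checkout now does the equivalent
of

    git -C .git/modules/sha1collisiondetection config --unset core.worktree

so that the repository under .git/modules/ is treated as bare from then
on.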
This patch covers all commands that are in the unpack machinery, i.e.
checkout, read-tree, reset. A follow up patch will address
"git submodule deinit", which will also make use of the new function
submodule_unset_core_worktree(), which is why we expose it in this patch.
Signed-off-by: Stefan Beller <sbeller@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
submodule: fix NULL correctness in renamed broken submodules
When fetching with submodule recursion, the fetch logic inspects the
superproject to determine which submodules actually need to be fetched. This is
tricky for submodules that were renamed in the fetched range of commits.
This was implemented in c68f8375760 (implement fetching of moved
submodules, 2017-10-16), and this patch fixes a mistake in the logic
there.
When the warning is printed, the `name` might be NULL as
default_name_or_path can return NULL, so fix the warning to use the path
as obtained from the diff machinery, as that is not NULL.
While at it, make sure we only attempt to load the submodule if a git
directory of the submodule is found, as default_name_or_path will return
NULL in case the git directory cannot be found. Note that passing NULL
to submodule_from_name is just a semantic error, as submodule_from_name
accepts NULL as a value, but then the return value is not the submodule
that was asked for, but some arbitrary other submodule. (Cf. 'config_from'
in submodule-config.c: "If any parameter except the cache is a NULL
pointer just return the first submodule. Can be used to check whether
there are any submodules parsed.")
Reported-by: Duy Nguyen <pclouds@gmail.com> Helped-by: Duy Nguyen <pclouds@gmail.com> Helped-by: Heiko Voigt <hvoigt@hvoigt.net> Signed-off-by: Stefan Beller <sbeller@google.com> Acked-by: Heiko Voigt <hvoigt@hvoigt.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
If tag following is required when using a transport that does not
support tag following, fetch_pack() will be invoked twice in the same
process, necessitating a clearing of the object flags used by
fetch_pack() sometime during the second invocation. This is currently
done in find_common(), which means that the invocation of
mark_complete_and_common_ref() in do_fetch_pack() is useless.
(This cannot be reproduced with Git alone, because all transports that
come with Git support tag following.)
Therefore, move this clearing from find_common() to its
parent function do_fetch_pack(), right before it calls
mark_complete_and_common_ref().
This has been occurring since the commit that introduced the clearing of
marks, 420e9af498 ("Fix tag following", 2008-03-19).
The corresponding code for protocol v2 in do_fetch_pack_v2() does not
have this problem, as the clearing of flags is done before any marking
(whether by rev_list_insert_ref_oid() or
mark_complete_and_common_ref()).
Signed-off-by: Jonathan Tan <jonathantanmy@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
The function everything_local(), despite its name, also (1) marks
commits as COMPLETE and COMMON_REF and (2) invokes filter_refs() as
important side effects. Extract (1) into its own function
(mark_complete_and_common_ref()) and remove
(2).
The restoring of save_commit_buffer, which was introduced in a1c6d7c1a7
("fetch-pack: restore save_commit_buffer after use", 2017-12-08), is a
concern of the parse_object() call in mark_complete_and_common_ref(), so
it has been moved from the end of everything_local() to the end of
mark_complete_and_common_ref().
Signed-off-by: Jonathan Tan <jonathantanmy@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
"index-pack --strict" has been taught to make sure that it runs the
final object integrity checks after making the freshly indexed
packfile available to itself.
* jk/index-pack-maint:
index-pack: correct install_packed_git() args
index-pack: handle --strict checks of non-repo packs
prepare_commit_graft: treat non-repository as a noop
fetch-pack: test explicitly that --all can fetch tag references pointing to non-commits
Fetch-pack --all became broken with respect to unusual tags in 5f0fc64513 (fetch-pack: eliminate spurious error messages, 2012-09-09),
and was fixed only recently in e9502c0a7f (fetch-pack: don't try to fetch
peel values with --all, 2018-06-11). However the test added in e9502c0a7f does not explicitly cover all funky cases.
In order to be sure fetching funky tags will never break, let's
explicitly test all relevant cases with 4 tag objects pointing to 1) a
blob, 2) a tree, 3) a commit, and 4) another tag object. These tag
objects themselves are referenced from under the regular refs/tags/*
namespace. Before e9502c0a7f `fetch-pack --all` was failing e.g. this way:
.../git/t/trash directory.t5500-fetch-pack/fetchall$ git fetch-pack --all ..
fatal: A git upload-pack: not our ref 038f48ad...
fatal: The remote end hung up unexpectedly
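A rough sketch of how such tags can be set up in a test (tag names and
contents here are illustrative; the actual test may differ):

    blob=$(echo funky-content | git hash-object -w --stdin) &&
    git tag -a -m "tag -> blob"   tag-to-blob   $blob &&
    git tag -a -m "tag -> tree"   tag-to-tree   HEAD^{tree} &&
    git tag -a -m "tag -> commit" tag-to-commit HEAD &&
    git tag -a -m "tag -> tag"    tag-to-tag    tag-to-commit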
Helped-by: Junio C Hamano <gitster@pobox.com> Signed-off-by: Kirill Smelkov <kirr@nexedi.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
The buffer being passed to zlib includes a NUL terminator that git
needs to keep in place. unpack_compressed_entry() attempts to detect
the case that the source buffer hasn't been fully consumed by
checking to see if the destination buffer has been over consumed.
This causes a problem now that more recent zlib patches poison the
unconsumed portions of the buffer, which overwrites the NUL byte even
though zlib correctly returns the length and status.
Let's place the NUL at the end of the buffer after inflate returns, to
ensure that it doesn't cause problems for git even if it has been
overwritten by zlib.
Signed-off-by: Jeremy Linton <lintonrjeremy@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
RelNotes 2.18: clarify where directory rename detection applies
Mention that this feature works with some commands (merge and cherry-pick,
implying that it also works with commands that build on these like rebase
-m and rebase -i). Explicitly mentioning two commands hopefully implies
that it may not always work with other commands (am, and rebase without
flags that imply either -m or -i).
Also, since the directory rename detection from this cycle was
specifically added in merge-recursive and not diffcore-rename, remove the
'in "diff" family' phrase from the note. (Folks have requested in the
past that `git diff` detect directory renames and somehow simplify its
output, so it may be helpful to avoid implying that diff has any new
capability here.)
Signed-off-by: Elijah Newren <newren@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
The "autodie" module was added in Perl 5.10.1, but our INSTALL
document says "version 5.8 or later is needed".
As discussed in <87efhfvxzu.fsf@evledraar.gmail.com> this script is in
contrib/, so we might not want to apply that policy, however in this
case "autodie" was recently added as a "gratuitous safeguard" in 786ef50a23 ("git-credential-netrc: accept gpg option",
2018-05-12) (see
<CAHqJXRE8OKSKcck1APHAHccLZhox+tZi8nNu2RA74RErX8s3Pg@mail.gmail.com>).
Looking at it more carefully, the addition of "autodie" inadvertently
introduced a logic error, since having it is equivalent to this patch:
@@ -245,10 +244,10 @@ sub load_netrc {
 	if ($gpgmode) {
 		my @cmd = ($options{'gpg'}, qw(--decrypt), $file);
 		log_verbose("Using GPG to open $file: [@cmd]");
-		open $io, "-|", @cmd;
+		open $io, "-|", @cmd or die "@cmd: $!";
 	} else {
 		log_verbose("Opening $file...");
-		open $io, '<', $file;
+		open $io, '<', $file or die "$file: $!";
 	}
 	# nothing to do if the open failed (we log the error later)
As shown in the context, the intent of that code is not to die but to
log the error later.
Per my reading of the file this was the only thing autodie was doing
in this file (there was no other code it altered). So let's remove it,
both to fix the logic error and to get rid of the dependency.
git-p4 originally would fetch changes in one query. On large repos this
could fail because of the limits that Perforce imposes on the number of
items returned and the number of queries in the database.
To fix this, git-p4 learned to query changes in blocks of 512 changes.
However, this can be very slow - if you have a few million changes,
with each chunk taking about a second, it can be an hour or so.
Although it's possible to tune this value manually with the
"--changes-block-size" option, it's far from obvious to ordinary users
that this is what needs doing.
This change alters the block size dynamically by looking for the
specific error messages returned from the Perforce server, and reducing
the block size if the error is seen, either to the limit reported by the
server, or to half the current block size.
That means we can start out with a very large block size, and then let
it automatically drop down to a value that works without error, while
still failing correctly if some other error occurs.
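For reference, the manual knob mentioned above is still available, e.g.
(the value here is arbitrary):

    git p4 sync --changes-block-size 1000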
Signed-off-by: Luke Diamand <luke@diamand.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
git-p4: raise exceptions from p4CmdList based on error from p4 server
This change lays some groundwork for better handling of rowcount errors
from the server, where it fails to send us results because we requested
too many.
It adds an option to p4CmdList() to return errors as a Python exception.
The exceptions are derived from P4Exception (something went wrong),
P4ServerException (the server sent us an error code) and
P4RequestSizeException (we requested too many rows/results from the
server database).
This makes the code that handles the errors a bit easier to follow.
The default behavior is unchanged; the new code is enabled with a flag.
Signed-off-by: Luke Diamand <luke@diamand.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
Currently when p4 fails to run, git-p4 just crashes with an obscure
error message.
For example, if the P4 ticket has expired, you get:
Error: Cannot locate perforce checkout of <path> in client view
This change checks whether git-p4 can talk to the Perforce server when
the first P4 operation is attempted, and tries to print a meaningful
error message if it fails.
Signed-off-by: Luke Diamand <luke@diamand.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
git-p4: add option to disable syncing of p4/master with p4
Add an option to the git-p4 submit command to disable syncing
with Perforce.
This is useful for the case where a git-p4 mirror has been setup
on a server somewhere, running from (e.g.) cron, and developers
then clone from this. Having the local cloned copy also sync
from Perforce just isn't useful.
Signed-off-by: Luke Diamand <luke@diamand.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
git-p4: disable-rebase: allow setting this via configuration
This just lets you set the --disable-rebase option with the
git configuration option git-p4.disableRebase. If you're
using this option, you probably want to set it all the time
for a given repo.
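For example:

    git config git-p4.disableRebase true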
Signed-off-by: Luke Diamand <luke@diamand.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
In daily work with multiple local git branches, the usual way to
submit only a specified commit was to cherry-pick the commit onto
master and then run git-p4 submit. It can be very annoying to switch
between local branches and master, only to submit one commit. The
proposed new way is to select directly the commit you want to
submit.
Add option --commit to command 'git-p4 submit' in order to submit
only specified commit(s) in p4.
In daily work developing software with long build times, one may
not want to rebase their local git tree, in order to avoid a long
recompilation.
Add option --disable-rebase to command 'git-p4 submit' in order to
disable rebase after submission.
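Usage, as described above (<sha1> is a placeholder):

    git p4 submit --commit <sha1>    # submit only the given commit(s)
    git p4 submit --disable-rebase   # skip the rebase after submitting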
Thanks-to: Cedric Borgese <cedric.borgese@gmail.com> Reviewed-by: Luke Diamand <luke@diamand.org> Signed-off-by: Romain Merland <merlorom@yahoo.fr> Signed-off-by: Junio C Hamano <gitster@pobox.com>
builtin/send-pack didn't call git_default_config, and because of this
git push --signed didn't respect the username and email configured in
gitconfig when used over the HTTP transport.
Signed-off-by: Masaya Suzuki <masayasuzuki@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
merge-recursive: add pointer about unduly complex looking code
handle_change_delete() has a block of code displaying one of four nearly
identical messages. Each contains about half a dozen variable
interpolations, which use nearly identical variables as well. Someone
trying to read this may be slowed down working out the differences and
why they are there; help them out by adding a comment explaining the
differences.
Further, point out that this code structure isn't collapsed into something
more concise and readable for the programmer, because we want to keep full
messages intact in order to make translators' jobs much easier.
Signed-off-by: Elijah Newren <newren@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
merge-recursive: rename conflict_rename_*() family of functions
These functions were added because processing of these conflicts needed
to be deferred until process_entry() in order to get D/F conflicts and
such right. The number of these has grown over time, and now includes
some whose names are misleading:
* conflict_rename_normal() is for handling normal file renames; a
typical rename may need content merging, but we expect conflicts
from that to be more the exception than the rule.
* conflict_rename_via_dir() will not be a conflict; it was just an
add that turned into a move due to directory rename detection.
(If there was a file in the way of the move, that would have been
detected and reported earlier.)
* conflict_rename_rename_2to1 and conflict_rename_add (the latter
of which doesn't exist yet but has been submitted before and I
intend to resend) technically might not be conflicts if the
colliding paths happen to match exactly.
Rename this family of functions to handle_rename_*().
Also rename handle_renames() to detect_and_process_renames() both to make
it clearer what it does, and to differentiate it as a pre-processing step
from all the handle_rename_*() functions which are called from
process_entry().
Acked-by: Johannes Schindelin <Johannes.Schindelin@gmx.de> Signed-off-by: Elijah Newren <newren@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
merge-recursive: clarify the rename_dir/RENAME_DIR meaning
We had an enum of rename types which included RENAME_DIR; this name felt
misleading since it was not about an entire directory but was a status for
each individual file add that occurred within a renamed directory.
Since this type is for signifying that the files in question were being
renamed due to directory rename detection, rename this enum value to
RENAME_VIA_DIR.
Make a similar change to the conflict_rename_dir() function, and add a
comment to the top of that function explaining its purpose (it may not be
quite as obvious as for the other conflict_rename_*() functions).
Acked-by: Johannes Schindelin <Johannes.Schindelin@gmx.de> Signed-off-by: Elijah Newren <newren@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
Various refactorings throughout the code have left lots of alignment
issues that were driving me crazy; fix them.
Acked-by: Johannes Schindelin <Johannes.Schindelin@gmx.de> Signed-off-by: Elijah Newren <newren@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
completion: correct zsh detection when run from git-completion.zsh
v2.18.0-rc0~90^2 (completion: reduce overhead of clearing cached
--options, 2018-04-18) worked around a bug in bash's "set" builtin on
MacOS by using compgen instead. It was careful to avoid breaking zsh
by guarding this workaround with
if [[ -n ${ZSH_VERSION-} ]]
Alas, this interacts poorly with git-completion.zsh's bash emulation:
ZSH_VERSION='' . "$script"
Correct it by instead using a new GIT_SOURCING_ZSH_COMPLETION shell
variable to detect whether git-completion.bash is being sourced from
git-completion.zsh. This way, the zsh variant is used both when run
from zsh directly and when run via git-completion.zsh.
Reproduction recipe:
1. cd git/contrib/completion && cp git-completion.zsh _git
2. Put the following in a new ~/.zshrc file:
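The ~/.zshrc contents are not reproduced here; presumably they are the
usual completion setup, with fpath adjusted to wherever the copied _git
lives, roughly:

    fpath=(~/git/contrib/completion $fpath)
    autoload -U compinit
    compinit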
With this patch:
Triggers nice git-completion.bash based tab completion
Without:
contrib/completion/git-completion.bash:354: read-only variable: QISUFFIX
zsh:12: command not found: ___main
zsh:15: _default: function definition file not found
_dispatch:70: bad math expression: operand expected at `/usr/bin/g...'
Segmentation fault
Reported-by: Rick van Hattem <wolph@wol.ph> Reported-by: Dave Borowitz <dborowitz@google.com> Signed-off-by: SZEDER Gábor <szeder.dev@gmail.com> Signed-off-by: Jonathan Nieder <jrnieder@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
The function does not start taking the repository object as a
parameter until the v2.18 track. Make the topic mergeable to the v2.17
maintenance track by dropping it.
fetch-pack: don't try to fetch peel values with --all
When "fetch-pack --all" sees a tag-to-blob on the remote, it
tries to fetch both the tag itself ("refs/tags/foo") and the
peeled value that the remote advertises ("refs/tags/foo^{}").
Asking for the object pointed to by the latter can cause
upload-pack to complain with "not our ref", since it does
not mark the peeled objects with the OUR_REF (unless they
were at the tip of some other ref).
Arguably upload-pack _should_ be marking those peeled
objects. But it never has in the past, since clients would
generally just ask for the tag and expect to get the peeled
value along with it. And that's how "git fetch" works, as
well as older versions of "fetch-pack --all".
The problem was introduced by 5f0fc64513 (fetch-pack:
eliminate spurious error messages, 2012-09-09). Before then,
the matching logic was something like:
if (refname is ill-formed)
do nothing
else if (doing --all)
always consider it matched
else
look through list of sought refs for a match
That commit wanted to flip the order of the second two arms
of that conditional. But we ended up with:
if (refname is ill-formed)
do nothing
else
look through list of sought refs for a match
if (--all and no match so far)
always consider it matched
That means that an ill-formed ref will trigger the --all
conditional block, even though we should just be ignoring
it. We can fix that by having a single "else" with all of
the well-formed logic, that checks the sought refs and
"--all" in the correct order.
Reported-by: Kirill Smelkov <kirr@nexedi.com> Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
Commit 159e7b080b (fsck: detect gitmodules files,
2018-05-02) taught fsck to look at the content of
.gitmodules files. If the object turns out not to be a blob
at all, we just complain and punt on checking the content.
And since this was such an obvious and trivial code path, I
didn't even bother to add a test.
Except it _does_ do one non-trivial thing, which is call the
report() function, which wants us to pass a pointer to a
"struct object". Which we don't have (we have only a "struct
object_id"). So we erroneously pass a NULL object to
report(), which gets dereferenced and causes a segfault.
It seems like we could refactor report() to just take the
object_id itself. But we pass the object pointer along to
a callback function, and indeed this ends up in
builtin/fsck.c's objreport() which does want to look at
other parts of the object (like the type).
So instead, let's just use lookup_unknown_object() to get
the real "struct object", and pass that.
Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
t7415: don't bother creating commit for symlink test
Early versions of the fsck .gitmodules detection code
actually required a tree to be at the root of a commit for
it to be checked for .gitmodules. What we ended up with in 159e7b080b (fsck: detect gitmodules files, 2018-05-02),
though, finds a .gitmodules file in _any_ tree (see that
commit for more discussion).
As a result, there's no need to create a commit in our
tests. Let's drop it in the name of simplicity. And since
that was the only thing referencing $tree, we can pull our
tree creation out of a command substitution.
Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
The commands that make use of the --git-completion-helper feature could
now produce a lot of --no-xxx options that a command can take. This in
many cases could nearly double the number of completable options, taking
up more screen estate and also making it harder to search for the wanted
option.
This patch attempts to mitigate that by collapsing extra --no-
options, the ones that are added by --git-completion-helper and not in
the original struct option arrays. The "--no-..." option will be displayed
in this case to hint about more options, e.g.
Corner case: to make sure that people will never accidentally complete
the fake option "--no-..." there must be one real --no- in the first
complete listing even if it's not from the original struct option.
PS. This could be made simpler with ";&" to fall through from the
"--no-*" block and share the code, but ";&" is not available on bash-3
(i.e. Mac).
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
A regression introduced in 8462ff43 ("convert_to_git():
safe_crlf/checksafe becomes int conv_flags", 2018-01-13) back in the
Git 2.17 cycle caused autocrlf rewrites to produce a warning message
despite setting safecrlf=false.
Signed-off-by: Anthony Sottile <asottile@umich.edu> Acked-By: Torsten Bögershausen <tboegi@web.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
tests: make forging GPG signed commits and tags more robust
A couple of test scripts create forged GPG signed commits or tags to
check that such forgery can't fool various git commands' signature
verification. All but one of those test scripts are prone to
occasional failures because the forgery creates a bogus GPG signature,
and git commands error out with an unexpected error message, e.g.
"Commit deadbeef does not have a GPG signature" instead of "... has a
bad GPG signature".
't5573-pull-verify-signatures.sh', 't7510-signed-commit.sh' and
't7612-merge-verify-signatures.sh' create forged signed commits like
this:
git commit -S -m "bad on side" &&
git cat-file commit side-bad >raw &&
sed -e "s/bad/forged bad/" raw >forged &&
git hash-object -w -t commit forged >forged.commit
On rare occasions the given pattern occurs not only in the commit
message but in the GPG signature as well, and after it's replaced in
the signature the resulting signature becomes invalid: GPG will report
a CRC error and that it couldn't find any signature, which will then
ultimately cause the test failure.
Since in all three cases the pattern to be replaced during the forgery
is the first word of the commit message's subject line, and since the
GPG signature in the commit object is indented by a space, let's just
anchor those patterns to the beginning of the line to prevent this
issue.
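That is, the forgery from the snippet quoted above becomes:

    sed -e "s/^bad/forged bad/" raw >forged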
The test script 't7030-verify-tag.sh' creates a forged signed tag
object in a similar way by replacing the pattern "seventh", but the
GPG signature in tag objects is not indented by a space, so the above
solution is not applicable in this case. However, in the tag object
in question the pattern "seventh" occurs not only in the tag message
but in the 'tag' header as well. To create a forged tag object it's
sufficient to replace only one of the two occurrences, so modify the
sed script to limit the pattern to the 'tag' header (i.e. a line
beginning with "tag ", which, because of the space character, can
never occur in the base64-encoded GPG signature).
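Roughly (file names and replacement text here are illustrative):

    sed -e "s/^tag seventh/tag forged-seventh/" raw >forged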
Note that the forgery in 't7004-tag.sh' is not affected by this issue:
while 't7004' does create a forged signed tag kind of the same way,
it replaces "signed-tag" in the tag object, which, because of the '-'
character, can never occur in the base64-encoded GPG signature.
Signed-off-by: SZEDER Gábor <szeder.dev@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
The two tests 'detect fudged signature' and 'detect fudged signature
with NUL' in 't7510-signed-commit.sh' check that 'git verify-commit'
errors out when encountering a forged commit, but they do so by
running
! git verify-commit ...
Use 'test_must_fail' instead, because that would catch potential
unexpected errors like a segfault as well.
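That is, roughly (assuming the forged commit's hash was saved to
forged.commit as in the snippet quoted earlier):

    test_must_fail git verify-commit $(cat forged.commit)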
Signed-off-by: SZEDER Gábor <szeder.dev@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
refspec: initialize `refspec_item` in `valid_fetch_refspec()`
We allocate a `struct refspec_item` on the stack without initializing
it. In particular, its `dst` and `src` members will contain some random
data from the stack. When we later call `refspec_item_clear()`, it will
call `free()` on those pointers. So if the call to `parse_refspec()` did
not assign to them, we will be freeing some random "pointers". This is
undefined behavior.
To the best of my understanding, this cannot currently be triggered by
user-provided data. And for what it's worth, the test-suite does not
trigger this with SANITIZE=address. It can be provoked by calling
`valid_fetch_refspec(":*")`.
Zero the struct, as is done in other users of `struct refspec_item` by
using the refspec_item_init() initialization function.
Signed-off-by: Martin Ågren <martin.agren@gmail.com> Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
Re-add the non-fatal version of refspec_item_init_or_die() renamed
away in an earlier change to get a more minimal diff. This should be
used by callers that have their own error handling.
This new function could be marked "static" since nothing outside of
refspec.c uses it, but expecting future use of it, let's make it
available to other users.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
add -p: fix counting empty context lines in edited patches
recount_edited_hunk() introduced in commit 2b8ea7f3c7 ("add -p:
calculate offset delta for edited patches", 2018-03-05) required all
context lines to start with a space; empty lines were not counted. This
was intended to avoid any recounting problems if the user had
introduced empty lines at the end when editing the patch. However this
introduced a regression into 'git add -p' as it seems it is common for
editors to strip the trailing whitespace from empty context lines when
patches are edited thereby introducing empty lines that should be
counted. 'git apply' knows how to deal with such empty lines and POSIX
states that whether or not there is a space on an empty context line
is implementation defined [1].
Fix the regression by counting lines that consist solely of a newline
as well as lines starting with a space as context lines and add a test
to prevent future regressions.
Reported-by: Mahmoud Al-Qudsi <mqudsi@neosmart.net> Reported-by: Oliver Joseph Ash <oliverjash@gmail.com> Reported-by: Jeff Felchner <jfelchner1@gmail.com> Signed-off-by: Phillip Wood <phillip.wood@dunelm.org.uk> Signed-off-by: Junio C Hamano <gitster@pobox.com>
Introduce a checkout.defaultRemote setting which can be used to
designate a remote to prefer (via checkout.defaultRemote=origin) when
running e.g. "git checkout master" to mean origin/master, even though
there are other remotes that have the "master" branch.
I want this because it's very handy to use this workflow to check out a
repository and create a topic branch, then get back to a "master" as
retrieved from upstream:
Will output (without the advice output added earlier in this series):
error: pathspec 'master' did not match any file(s) known to git.
The new checkout.defaultRemote config allows me to say that whenever
that ambiguity comes up I'd like to prefer "origin", and it'll still
work as though the only remote I had was "origin".
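For example:

    git config checkout.defaultRemote origin
    git checkout master   # DWIMs to origin/master despite other remotes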
Also adjust the advice.checkoutAmbiguousRemoteBranchName message to
mention this new config setting to the user, the full output on my
git.git is now (the last paragraph is new):
$ ./git --exec-path=$PWD checkout master
error: pathspec 'master' did not match any file(s) known to git.
hint: 'master' matched more than one remote tracking branch.
hint: We found 26 remotes with a reference that matched. So we fell back
hint: on trying to resolve the argument as a path, but failed there too!
hint:
hint: If you meant to check out a remote tracking branch on, e.g. 'origin',
hint: you can do so by fully qualifying the name with the --track option:
hint:
hint: git checkout --track origin/<name>
hint:
hint: If you'd like to always have checkouts of an ambiguous <name> prefer
hint: one remote, e.g. the 'origin' remote, consider setting
hint: checkout.defaultRemote=origin in your config.
I considered splitting this into checkout.defaultRemote and
worktree.defaultRemote, but it's probably less confusing to break our
own rules that anything shared between commands should live in core.*
than have two config settings, and I couldn't come up with a short
name under core.* that made sense (core.defaultRemoteForCheckout?).
See also 70c9ac2f19 ("DWIM "git checkout frotz" to "git checkout -b
frotz origin/frotz"", 2009-10-18) which introduced this DWIM feature
to begin with, and 4e85333197 ("worktree: make add <path> <branch>
dwim", 2017-11-26) which added it to git-worktree.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
checkout: add advice for ambiguous "checkout <branch>"
As the "checkout" documentation describes:
If <branch> is not found but there does exist a tracking branch in
exactly one remote (call it <remote>) with a matching name, treat
as equivalent to [...] <remote>/<branch>.
This is a really useful feature. The problem is that when you add
another remote (e.g. a fork), git won't find a unique branch name
anymore, and will instead print this unhelpful message:
$ git checkout master
error: pathspec 'master' did not match any file(s) known to git
Now it will, on my git.git checkout, print:
$ ./git --exec-path=$PWD checkout master
error: pathspec 'master' did not match any file(s) known to git.
hint: 'master' matched more than one remote tracking branch.
hint: We found 26 remotes with a reference that matched. So we fell back
hint: on trying to resolve the argument as a path, but failed there too!
hint:
hint: If you meant to check out a remote tracking branch on, e.g. 'origin',
hint: you can do so by fully qualifying the name with the --track option:
hint:
hint: git checkout --track origin/<name>
Note that the "error: pathspec[...]" message is still printed. This is
because whatever else checkout may have tried earlier, its final
fallback is to try to resolve the argument as a path. E.g. in this
case:
$ ./git --exec-path=$PWD checkout master pu
error: pathspec 'master' did not match any file(s) known to git.
error: pathspec 'pu' did not match any file(s) known to git.
There we don't print the "hint:" implicitly due to earlier logic
around the DWIM fallback. That fallback is only used if it looks like
we have one argument that might be a branch.
I can't think of an intrinsic reason for why we couldn't in some
future change skip printing the "error: pathspec[...]" error. However,
to do so we'd need to pass something down to checkout_paths() to make
it suppress printing an error on its own, and for us to be confident
that we're not silencing cases where those errors are meaningful.
I don't think that's worth it since determining whether that's the
case could easily change due to future changes in the checkout logic.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
Pass the previously added "num_matches" struct value up to the callers
of unique_tracking_name(). This will allow callers to optionally print
better error messages in a later change.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
Add an *_INIT macro for the tracking_name_data similar to what exists
elsewhere in the codebase, e.g. OID_ARRAY_INIT in sha1-array.h. This
will make it more idiomatic in later changes to add more fields to the
struct & its initialization macro.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
checkout tests: index should be clean after dwim checkout
Assert that whenever there's a DWIM checkout that the index should be
clean afterwards, in addition to the correct branch being checked-out.
The way the DWIM checkout code in checkout.[ch] works is by looping
over all remotes, and for each remote trying to find if a given
reference name only exists on that remote, or if it exists anywhere
else.
This is done by starting out with a `unique = 1` tracking variable in
a struct shared by the entire loop, which will get set to `0` if the
reference in question is not unique.
Thus if we find a match we know the dst_oid member of
tracking_name_data must be correct, since it's associated with the
only reference on the only remote that could have matched our query.
But if there was ever a mismatch there for some reason we might end up
with the correct branch checked out, but at the wrong oid, which would
show up as the difference between the two staged in the
index (checkout branch A, stage changes from the state of branch B).
So let's amend the tests (mostly added in 399e4a1c56 ("t2024: Add
tests verifying current DWIM behavior of 'git checkout <branch>'",
2013-04-21)) to always assert that "status" is clean after we run
"checkout"; that's being done with "-uno" because there are going to be
some untracked files related to the test itself which we don't care
about.
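Concretely, the added assertion amounts to something like this (a
sketch; the test may use its own helper for this):

    git status -uno --porcelain >actual &&
    test_must_be_empty actual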
In all these tests (DWIM or otherwise) we start with a clean index, so
these tests are asserting that that's still the case after the
"checkout", failed or otherwise.
Then if we ever run into this sort of regression, either in the
existing code or with a new feature, we'll know.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
"git pull -recurse-submodules --rebase", when the submodule
repository's history did not have anything common between ours and
the upstream's, failed to execute. We need to fetch from them to
continue even in such a case.
* jt/submodule-pull-recurse-rebase:
submodule: do not pass null OID to setup_revisions
As there are plans to implement other ref storage systems,
let's use a way to remove remote refs that does not depend
on refs being files.
This makes it clear to readers that this test does not
depend on which ref backend is used.
Suggested-by: Michael Haggerty <mhagger@alum.mit.edu> Helped-by: Jeff King <peff@peff.net> Signed-off-by: Christian Couder <chriscool@tuxfamily.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>