There is an optimization used in "git diff $treeA $treeB" to borrow
an already checked-out copy in the working tree when it is known to
be the same as the blob being compared, expecting that open/mmap of
such a file is faster than reading it from the object store, which
involves inflating and applying deltas. This, however, kicked in even
when the checked-out copy needed to go through the convert-to-git
conversion (including the clean filter), which defeated the whole
point of the optimization. The optimization has been disabled when
the conversion is necessary.
* jk/diff-do-not-reuse-wtf-needs-cleaning:
diff: do not reuse worktree files that need "clean" conversion
* rs/submodule-config-code-cleanup:
submodule-config: fix test binary crashing when no arguments given
submodule-config: combine early return code into one goto
submodule-config: passing name reference for .gitmodule blobs
submodule-config: use explicit empty string instead of strbuf in config_from()
"git push" and "git clone" learned to give better progress meters
to the end user who is waiting on the terminal.
* jk/push-progress:
receive-pack: send keepalives during quiet periods
receive-pack: turn on connectivity progress
receive-pack: relay connectivity errors to sideband
receive-pack: turn on index-pack resolving progress
index-pack: add flag for showing delta-resolution progress
clone: use a real progress meter for connectivity check
check_connected: add progress flag
check_connected: relay errors to alternate descriptor
check_everything_connected: use a struct with named options
check_everything_connected: convert to argv_array
rev-list: add optional progress reporting
check_everything_connected: always pass --quiet to rev-list
"git fetch" exchanges batched have/ack messages between the sender
and the receiver, initially doubling the window size every time and
then falling back to growing it linearly. The "smart http"
transport, being a half-duplex protocol, outgrows the preset limit
too quickly and becomes inefficient when interacting with a large
repository. The internal mechanism learned to grow the window size
more aggressively when working with the "smart http" transport.
* jt/fetch-large-handshake-window-on-http:
fetch-pack: grow stateless RPC windows exponentially
Users of the parse_options_concat() API function need to allocate
extra slots in advance and fill them with OPT_END() when they want
to decide the set of supported options dynamically, which makes the
code error-prone and hard to read. This has been corrected by tweaking
the API to allocate and return a new copy of the "struct option" array.
* jk/parse-options-concat:
parse_options: allocate a new array when concatenating
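To illustrate the shape of the new calling convention, here is a minimal,
self-contained sketch of concatenation that allocates and returns a fresh
array; the simplified "struct option" and the end-marker handling are
stand-ins, not Git's actual definitions.
    #include <stdlib.h>
    #include <string.h>

    /* Simplified stand-in for Git's "struct option"; the real one has
     * many more fields, but the allocation pattern is the same. */
    struct option {
        const char *long_name;  /* NULL marks the end of an array */
    };

    static size_t count_options(const struct option *opts)
    {
        size_t n = 0;
        while (opts[n].long_name)
            n++;
        return n;
    }

    /* Allocate and return a new array containing a, then b, then a
     * terminating end marker, so callers no longer have to reserve
     * spare OPT_END() slots up front. The caller owns the result. */
    static struct option *options_concat(const struct option *a,
                                         const struct option *b)
    {
        size_t na = count_options(a), nb = count_options(b);
        struct option *ret = calloc(na + nb + 1, sizeof(*ret));

        if (!ret)
            return NULL;
        memcpy(ret, a, na * sizeof(*ret));
        memcpy(ret + na, b, nb * sizeof(*ret));
        /* ret[na + nb] stays zeroed and acts as the end marker */
        return ret;
    }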
Dumb http transport on the client side has been optimized.
* ew/http-walker:
list: avoid incompatibility with *BSD sys/queue.h
http-walker: reduce O(n) ops with doubly-linked list
http: avoid disconnecting on 404s for loose objects
http-walker: remove unused parameter from fetch_object
The build procedure for "git persistent-https" helper (in contrib/)
has been updated so that it can be built with more recent versions
of Go.
* pm/build-persistent-https-with-recent-go:
contrib/persistent-https: use Git version for build label
contrib/persistent-https: update ldflags syntax for Go 1.7+
"git merge" in Git v2.9 was taught to forbid merging an unrelated
lines of history by default, but that is exactly the kind of thing
the "--rejoin" mode of "git subtree" (in contrib/) wants to do.
"git subtree" has been taught to use the "--allow-unrelated-histories"
option to override the default.
submodule-config: passing name reference for .gitmodule blobs
Commit 959b5455 (submodule: implement a config API for lookup of
.gitmodules values, 2015-08-18) implemented the initial version of the
submodule config cache. During development of that initial version we
extracted the function gitmodule_sha1_from_commit(). During that process
we missed that the strbuf rev was still used in config_from() but was
now left empty. Let's fix this by also returning this string.
This means that now when reading .gitmodules from revisions, the error
messages also contain a reference to the blob they are from.
Signed-off-by: Heiko Voigt <hvoigt@hvoigt.net>
Reviewed-by: Stefan Beller <sbeller@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
"git grep -i" has been taught to fold case in non-ascii locales
correctly.
* nd/icase:
grep.c: reuse "icase" variable
diffcore-pickaxe: support case insensitive match on non-ascii
diffcore-pickaxe: Add regcomp_or_die()
grep/pcre: support utf-8
gettext: add is_utf8_locale()
grep/pcre: prepare locale-dependent tables for icase matching
grep: rewrite an if/else condition to avoid duplicate expression
grep/icase: avoid kwsset when -F is specified
grep/icase: avoid kwsset on literal non-ascii strings
test-regex: expose full regcomp() to the command line
test-regex: isolate the bug test code
grep: break down an "if" stmt in preparation for next changes
Merge branch 'sb/submodule-parallel-fetch' into maint
Fix recently introduced codepaths that are involved in parallel
submodule operations, which gave up on reading too early, and
could have wasted CPU while attempting to write under a corner
case condition.
* sb/submodule-parallel-fetch:
hoist out handle_nonblock function for xread and xwrite
xwrite: poll on non-blocking FDs
xread: retry after poll on EAGAIN/EWOULDBLOCK
The test framework learned a new helper, test_match_signal, to
check whether an exit code indicates death by an expected signal.
* jk/test-match-signal:
t/lib-git-daemon: use test_match_signal
test_must_fail: use test_match_signal
t0005: use test_match_signal as appropriate
tests: factor portable signal check out of t0005
Merge branch 'js/am-call-theirs-theirs-in-fallback-3way' into maint
One part of "git am" had an oddball helper function that called
stuff from outside "his" as opposed to calling what we have "ours",
which was not gender-neutral and also inconsistent with the rest of
the system where outside stuff is usually called "theirs" in
contrast to "ours".
Merge branch 'js/color-on-windows-comment' into maint
For a long time, we carried an in-code comment that said our
colored output would work only when we use fprintf/fputs on
Windows, which has not been the case for the past few years.
* js/color-on-windows-comment:
color.h: remove obsolete comment about limitations on Windows
More mark-up updates to typeset strings that are expected to be
typed literally by the end user in a fixed-width font.
* mm/doc-tt:
doc: typeset HEAD and variants as literal
CodingGuidelines: formatting HEAD in documentation
doc: typeset long options with argument as literal
doc: typeset '--' as literal
doc: typeset long command-line options as literal
doc: typeset short command-line options as literal
Documentation/git-mv.txt: fix whitespace indentation
Merge branch 'js/sign-empty-commit-fix' into maint
"git commit --amend --allow-empty-message -S" for a commit without
any message body could have misidentified where the header of the
commit object ends.
* js/sign-empty-commit-fix:
commit -S: avoid invalid pointer with empty message
Git does not know what the contents in the index should be for a
path added with "git add -N" yet, so "git grep --cached" should not
show hits (or show lack of hits, with -L) in such a path, but that
logic does not apply to "git grep", i.e. searching in the working
tree files. That logic was applied there by mistake, which has
been corrected.
* nd/ita-cleanup:
grep: fix grepping for "intent to add" files
t7810-grep.sh: fix a whitespace inconsistency
t7810-grep.sh: fix duplicated test name
Merge branch 'js/find-commit-subject-ignore-leading-blanks' into maint
A helper function that takes the contents of a commit object and
finds its subject line did not ignore leading blank lines, as is
commonly done by other codepaths. Make it ignore leading blank
lines to match.
* js/find-commit-subject-ignore-leading-blanks:
reset --hard: skip blank lines when reporting the commit subject
sequencer: use skip_blank_lines() to find the commit subject
commit -C: skip blank lines at the beginning of the message
commit.c: make find_commit_subject() more robust
pretty: make the skip_blank_lines() function public
Recent FreeBSD stopped making perl available at /usr/bin/perl;
switch the default built-in path to /usr/local/bin/perl on not too
ancient FreeBSD releases.
* ew/find-perl-on-freebsd-in-local:
config.mak.uname: correct perl path on FreeBSD
Recent update to "git daemon" tries to enable the socket-level
KEEPALIVE, but when it is spawned via inetd, the standard input
file descriptor may not necessarily be connected to a socket.
Suppress an ENOTSOCK error from setsockopt().
* ew/daemon-socket-keepalive:
Windows: add missing definition of ENOTSOCK
daemon: ignore ENOTSOCK from setsockopt
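A minimal sketch of the tolerant setsockopt() call described above,
assuming a plain stderr report in place of the daemon's own logging
helper:
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>

    /* Enable SO_KEEPALIVE, but stay quiet if the descriptor is not a
     * socket (e.g. the daemon runs in inetd mode but stdin is a pipe). */
    static void set_keep_alive(int sockfd)
    {
        int ka = 1;

        if (setsockopt(sockfd, SOL_SOCKET, SO_KEEPALIVE,
                       &ka, sizeof(ka)) < 0 &&
            errno != ENOTSOCK)
            fprintf(stderr, "unable to set SO_KEEPALIVE: %s\n",
                    strerror(errno));
    }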
"git pack-objects" and "git index-pack" mostly operate with off_t
when talking about the offset of objects in a packfile, but there
were a handful of places that used "unsigned long" to hold that
value, leading to an unintended truncation.
* nd/pack-ofs-4gb-limit:
fsck: use streaming interface for large blobs in pack
pack-objects: do not truncate result in-pack object size on 32-bit systems
index-pack: correct "offset" type in unpack_entry_data()
index-pack: report correct bad object offsets even if they are large
index-pack: correct "len" type in unpack_data()
sha1_file.c: use type off_t* for object_info->disk_sizep
pack-objects: pass length to check_pack_crc() without truncation
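A small illustration (not Git code) of the failure mode on a 32-bit
build, where "unsigned long" is 32 bits wide while off_t, with
large-file support, is 64:
    #include <stdio.h>
    #include <sys/types.h>

    int main(void)
    {
        off_t in_pack_offset = (off_t)5 * 1024 * 1024 * 1024; /* 5 GiB */
        unsigned long truncated = (unsigned long)in_pack_offset;

        /* On an ILP32 platform this prints 1 GiB (5 GiB modulo 2^32),
         * i.e. the offset silently points at the wrong place. */
        printf("off_t: %lld, unsigned long: %lu\n",
               (long long)in_pack_offset, truncated);
        return 0;
    }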
"git worktree prune" protected worktrees that are marked as
"locked" by creating a file in a known location. "git worktree"
command learned a dedicated command pair to create and remove such
a file, so that users do not have to do this with an editor.
"git notes merge" had a code to see if a path exists (and fails if
it does) and then open the path for writing (when it doesn't).
Replace it with open with O_EXCL.
* rs/notes-merge-no-toctou:
notes-merge: use O_EXCL to avoid overwriting existing files
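A minimal sketch of the pattern the fix switches to: let open() itself
reject an existing path atomically, instead of a separate existence
check followed by an open (the helper name here is illustrative).
    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>

    /* Create the file only if it does not exist yet; O_EXCL makes the
     * check-and-create a single atomic step, closing the TOCTOU window. */
    static int create_exclusively(const char *path)
    {
        int fd = open(path, O_WRONLY | O_CREAT | O_EXCL, 0666);

        if (fd < 0) {
            if (errno == EEXIST)
                fprintf(stderr, "'%s' already exists\n", path);
            else
                fprintf(stderr, "unable to create '%s': %s\n",
                        path, strerror(errno));
        }
        return fd;
    }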
A few tests that specifically target "git rebase -i" have been
added.
* js/rebase-i-tests:
rebase -i: we allow extra spaces after fixup!/squash!
rebase -i: demonstrate a bug with --autosquash
t3404: add a test for the --gpg-sign option
"git merge" in v2.9 prevents merging unrelated histories.
"git subtree split --rejoin" creates unrelated histories when
creating a split repo from a raw sub-directory that did not
originate from an invocation of "git subtree add".
Restore the original behavior by passing --allow-unrelated-histories
when merging subtrees. This ensures that the synthetic history
created by "git subtree split" can be merged.
Add a test to ensure that this feature works as advertised.
Reported-by: Brett Cundal <brett.cundal@iugome.com>
Helped-by: Johannes Schindelin <Johannes.Schindelin@gmx.de>
Signed-off-by: David Aguilar <davvid@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The existing autoconf-generated test for the need to link with the
pthread library did not check all the functions from the pthread
library; recent FreeBSD has some of the functions in libc but not
others, and we mistakenly thought linking with libc alone is enough
when it is not.
* ew/autoconf-pthread:
configure.ac: stronger test for pthread linkage
"git blame file" allowed the lineage of lines in the uncommitted,
unadded contents of "file" to be inspected, but it refused when
"file" did not appear in the current commit. When "file" was
created by renaming an existing file (but the change has not been
committed), this restriction was unnecessarily tight.
* mh/blame-worktree:
t/t8003-blame-corner-cases.sh: Use here documents
blame: allow to blame paths freshly added to the index
When "git fsck" reports a broken link (e.g. a tree object contains
a blob that does not exist), both the containing object and the object
that is referred to were reported with their 40-hex object names.
The command learned the "--name-objects" option to show the path to
the containing object from existing refs (e.g. "HEAD~24^2:file.txt").
* js/fsck-name-object:
fsck: optionally show more helpful info for broken links
fsck: give the error function a chance to see the fsck_options
fsck_walk(): optionally name objects on the go
fsck: refactor how to describe objects
"git add -N dir/file && git write-tree" produced an incorrect tree
when there are other paths in the same directory that sort after
"file".
* nd/cache-tree-ita:
cache-tree: do not generate empty trees as a result of all i-t-a subentries
cache-tree.c: fix i-t-a entry skipping directory updates sometimes
test-lib.sh: introduce and use $EMPTY_BLOB
test-lib.sh: introduce and use $EMPTY_TREE
* nd/test-helpers:
t/test-lib.sh: fix running tests with --valgrind
Makefile: use VCSSVN_LIB to refer to svn library
Makefile: drop extra dependencies for test helpers
Makefile assumed that -lrt is always available on platforms that
want to use clock_gettime() and CLOCK_MONOTONIC, which is not the
case for recent Mac OS X. The necessary symbols are often found in
libc on many modern systems and having -lrt on the command line, as
long as the library exists, had no effect, but when the platform
removes librt.a that is a different matter--having -lrt will break
the linkage.
This change could be seen as a regression for those who do need to
specify -lrt, as they now must specifically ask for NEEDS_LIBRT when
building. Hopefully they are in the minority these days.
* rw/make-needs-librt:
config.mak.uname: define NEEDS_LIBRT under Linux, for now
Makefile: add NEEDS_LIBRT to optionally link with librt
The API to iterate over all the refs (i.e. for_each_ref(), etc.)
has been revamped.
* mh/ref-iterators:
for_each_reflog(): reimplement using iterators
dir_iterator: new API for iterating over a directory tree
for_each_reflog(): don't abort for bad references
do_for_each_ref(): reimplement using reference iteration
refs: introduce an iterator interface
ref_resolves_to_object(): new function
entry_resolves_to_object(): rename function from ref_resolves_to_object()
get_ref_cache(): only create an instance if there is a submodule
remote rm: handle symbolic refs correctly
delete_refs(): add a flags argument
refs: use name "prefix" consistently
do_for_each_ref(): move docstring to the header file
refs: remove unnecessary "extern" keywords
Error handling in the codepaths that update refs has been
improved.
* mh/update-ref-errors:
lock_ref_for_update(): avoid a symref resolution
lock_ref_for_update(): make error handling more uniform
t1404: add more tests of update-ref error handling
t1404: document function test_update_rejected
t1404: remove "prefix" argument to test_update_rejected
t1404: rename file to t1404-update-ref-errors.sh
When c5c31d33 (grep: move pattern-type bits support to top-level
grep.[ch], 2012-10-03) introduced the grep_commit_pattern_type()
helper function, the intention was to allow the users of the grep API
to fiddle only with .pattern_type_option (which can be set to "fixed",
"basic", "extended", and "pcre"), and then immediately before compiling
the pattern strings for use, call grep_commit_pattern_type() to have
it prepare various bits in the grep_opt structure (like .fixed,
.regflags, etc.).
However, the grep_set_pattern_type_option() helper function that the
grep API uses internally was left as an external function by mistake.
This function shouldn't have been callable by the users of the API.
Later, when the grep API was used in the revision traversal machinery,
the caller mistakenly started calling that function around 34a4ae55
(log --grep: use the same helper to set -E/-F options as "git grep",
2012-10-03), instead of setting the .pattern_type_option field and
letting grep_commit_pattern_type() take care of the details.
This caused a bug that made a configured grep.patternType take
precedence over the command-line options (e.g. --basic-regexp,
--fixed-strings) in the "git log" family of commands.
submodule-helper: fix indexing in clone retry error reporting path
'git submodule--helper update-clone' has logic to retry failed clones
a second time. For this purpose, there is a list of submodules to clone,
and a second list that is filled with the submodules to retry. Within
these lists, the submodules are identified by an index as if the two
lists were appended to each other.
This works nicely except when the second clone attempt fails as well. To
report an error, the identifying index must be adjusted by an offset so
that it can be used as an index into the second list. However, the
calculation uses the logical total length of the lists so that the result
always points one past the end of the second list.
Pick the correct index.
Signed-off-by: Johannes Sixt <j6t@kdbg.org>
Acked-by: Stefan Beller <sbeller@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
git-submodule: forward exit code of git-submodule--helper more faithfully
git-submodule--helper is invoked as the upstream of a pipe in several
places. Usually, the failure of a program in this position is not
detected by the shell. For this reason, when git-submodule--helper
fails, the code inserts a token into the output stream; the token is
detected downstream, where the shell script then exits with code 1.
There happens to be a bug in git-submodule--helper that leads to a
segmentation fault. The test suite triggers the crash in several places,
all of which are protected by 'test_must_fail'. But due to the unspecific
exit code 1, the crash remains undiagnosed.
Extend the failure protocol such that git-submodule--helper's exit code
is passed downstream (only in the case of failure). This enables the
downstream to use it as its own exit code, and 'test_must_fail' to
identify the segmentation fault as an unexpected failure.
The bug itself is fixed in the next commit.
Signed-off-by: Johannes Sixt <j6t@kdbg.org>
Acked-by: Stefan Beller <sbeller@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In the transport protocol we use NAK to signal the non-existence of a
common base, so fix the documentation. This helps readers of the document,
as they don't have to wonder about the difference between NAK and NACK.
As NACK is used in git archive and upload-archive, this is easy to get
wrong.
Signed-off-by: Stefan Beller <sbeller@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If you have whitespace errors in lines you've introduced, it
can be convenient to be able to jump directly to them for
fixing. You can't quite use "git jump diff" for this,
because though it passes arbitrary options to "git diff", it
expects to see an actual unified diff in the output.
Whereas "git diff --check" actually produces lines that look
like compiler quickfix lines already, meaning we just need
to run it and feed the output directly to the editor.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
contrib/git-jump: fix greedy regex when matching hunks
The hunk-header regex looks for "\+\d+" to find the
post-image line numbers, but it skips the pre-image line
numbers with a simple ".*". That means we may greedily eat
the post-image numbers and match a "\+\d" further on, in the
funcname text.
diff: do not reuse worktree files that need "clean" conversion
When accessing a blob for a diff, we may try to reuse file
contents in the working tree, under the theory that it is
faster to mmap those file contents than it would be to
extract the content from the object database.
When we have to filter those contents, though, that
assumption does not hold. Even for our internal conversions
like CRLF, we have to allocate and fill a new buffer anyway.
But much worse, for external clean filters we have to exec
an arbitrary script, and we have no idea how expensive it
may be to run.
So let's skip this optimization when conversion into git's
"clean" form is required. This applies whenever the
"want_file" flag is false. When it's true, the caller
actually wants the smudged worktree contents, which the
reused file by definition already has (in fact, this is a
key optimization going the other direction, since reusing
the worktree file there lets us skip smudge filters).
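The guard can be pictured roughly like this; would_need_clean_conversion()
is a hypothetical stand-in for Git's convert-to-git check, so treat this
as a sketch of the decision rather than the actual diff.c code.
    #include <string.h>

    /* Hypothetical stand-in for the convert-to-git check: here we only
     * pretend ".txt" paths would go through CRLF conversion, purely for
     * illustration. */
    static int would_need_clean_conversion(const char *path)
    {
        const char *dot = strrchr(path, '.');
        return dot && !strcmp(dot, ".txt");
    }

    /* Decide whether a worktree file may stand in for the blob. */
    static int ok_to_reuse_worktree_file(const char *path, int want_file)
    {
        /*
         * The caller wants the smudged worktree contents: the file on
         * disk is by definition already in the desired form.
         */
        if (want_file)
            return 1;
        /*
         * The caller wants the "clean" (in-repo) form: if any
         * convert-to-git conversion (CRLF, ident, a clean filter)
         * would apply, reading from the object store is the better path.
         */
        if (would_need_clean_conversion(path))
            return 0;
        return 1;
    }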
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Windows: add missing definition of ENOTSOCK
The previous commit introduced the first use of ENOTSOCK. This macro is
not available on Windows. Define it as WSAENOTSOCK because that is the
corresponding error value reported by the Windows versions of socket
functions.
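The compatibility shim for such a missing constant typically takes this
shape (a sketch; the Winsock headers must already be in scope):
    #ifndef ENOTSOCK
    #define ENOTSOCK WSAENOTSOCK  /* Winsock's "not a socket" error code */
    #endif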
Signed-off-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
contrib/persistent-https: use Git version for build label
The previous method simply used the UNIX timestamp of when the binary was
built as its build label.
$ make && ./git-remote-persistent-http -print_label
1469061546
This patch aims to align the label for this binary with the Git version
contained in the GIT-VERSION-FILE. This gives a better sense of the version
of the binary as it can be mapped to a particular revision or release of
Git itself. For example:
$ make && ./git-remote-persistent-http -print_label
2.9.1.275.g75676c8
Discussion of this patch is available in a related mailing list
thread about this package, "contrib/persistent-https: update ldflags
syntax for Go 1.7+". The gmane.org link is:
http://article.gmane.org/gmane.comp.version-control.git/299653/
Signed-off-by: Parker Moore <parkrmoore@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
contrib/persistent-https: update ldflags syntax for Go 1.7+
Running `make all` in `contrib/persistent-https` results in a
failure on Go 1.7 and above.
Specifically, the error is:
go build -o git-remote-persistent-https \
-ldflags "-X main._BUILD_EMBED_LABEL 1468613136"
# _/Users/parkr/github/git/contrib/persistent-https
/usr/local/Cellar/go/1.7rc1/libexec/pkg/tool/darwin_amd64/link: -X flag requires argument of the form importpath.name=value
make: *** [git-remote-persistent-https] Error 2
This `name=value` syntax for the -X flag was introduced in Go v1.5
(released Aug 19, 2015).
strbuf: avoid calling strbuf_grow() twice in strbuf_addbuf()
Implement strbuf_addbuf() as a normal function in order to avoid calling
strbuf_grow() twice, with the second call inside strbuf_add() being a
no-op. This is slightly faster and also reduces the text size a bit.
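The shape of the change is essentially the following (a sketch assuming
Git's strbuf API from strbuf.h): size the destination once, copy, and
fix up the length, rather than strbuf_grow() followed by strbuf_add().
    #include <string.h>
    #include "strbuf.h"

    void strbuf_addbuf(struct strbuf *sb, const struct strbuf *sb2)
    {
        strbuf_grow(sb, sb2->len);              /* one reallocation, at most */
        memcpy(sb->buf + sb->len, sb2->buf, sb2->len);
        strbuf_setlen(sb, sb->len + sb2->len);  /* keeps NUL termination */
    }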
Signed-off-by: Rene Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When the GPG prereq is not set, we do not run test 34. That
test changes the directory of the test script as a side
effect (something we usually frown on, but which matches the
style of the rest of this script). When test 35 (the
url-scrubbing test) runs, it expects to be in the directory
from test 34. If it's not, the test fails; we are in a
different sub-repo, our test-commit is built on a different
history, and the push becomes a non-fast-forward.
We can fix this by unconditionally moving to the directory
we expect (again, against our usual style but matching how
the rest of the script operates).
As an additional protection, let's also switch from "make a
new commit and push to master" to just "push to a new
branch". We don't care about the branch name; we just want
_some_ ref update to trigger the status output. Pushing to a
new branch is less likely to run into problems with
force-updates, changing the checked-out branch, etc.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
receive-pack: send keepalives during quiet periods
After a client has sent us the complete pack, we may spend
some time processing the data and running hooks. If the
client asked us to be quiet, receive-pack won't send any
progress data during the index-pack or connectivity-check
steps. And hooks may or may not produce their own progress
output. In these cases, the network connection is totally
silent from both ends.
Git itself doesn't care about this (it will wait forever),
but other parts of the system (e.g., firewalls,
load-balancers, etc) might hang up the connection. So we'd
like to send some sort of keepalive to let the network and
the client side know that we're still alive and processing.
We can use the same trick we did in 05e9515 (upload-pack:
send keepalive packets during pack computation, 2013-09-08).
Namely, we will send an empty sideband data packet every `N`
seconds that we do not relay any stderr data over the
sideband channel. As with 05e9515, this means that we won't
bother sending keepalives when there's actual progress data,
but will kick in when it has been disabled (or if there is a
lull in the progress data).
The concept is simple, but the details are subtle enough
that they need discussing here.
Before the client sends us the pack, we don't want to do any
keepalives. We'll have sent our ref advertisement, and we're
waiting for them to send us the pack (and tell us that they
support sidebands at all).
While we're receiving the pack from the client (or waiting
for it to start), there's no need for keepalives; it's up to
them to keep the connection active by sending data.
Moreover, it would be wrong for us to do so. When we are the
server in the smart-http protocol, we must treat our
connection as half-duplex. So any keepalives we send while
receiving the pack would potentially be buffered by the
webserver. Not only does this make them useless (since they
would not be delivered in a timely manner), but it could
actually cause a deadlock if we fill up the buffer with
keepalives. (It wouldn't be wrong to send keepalives in this
phase for a full-duplex connection like ssh; it's simply
pointless, as it is the client's responsibility to speak).
As soon as we've gotten all of the pack data, then the
client is waiting for us to speak, and we should start
keepalives immediately. From here until the end of the
connection, we send one any time we are not otherwise
sending data.
But there's a catch. Receive-pack doesn't know the moment
we've gotten all the data. It passes the descriptor to
index-pack, who reads all of the data, and then starts
resolving the deltas. We have to communicate that back.
To make this work, we instruct the sideband muxer to enable
keepalives in three phases:
1. In the beginning, not at all.
2. While reading from index-pack, wait for a signal
indicating end-of-input, and then start them.
3. Afterwards, always.
The signal from index-pack in phase 2 has to come over the
stderr channel which the muxer is reading. We can't use an
extra pipe because the portable run-command interface only
gives us stderr and stdout.
Stdout is already used to pass the .keep filename back to
receive-pack. We could also send a signal there, but then we
would find out about it in the main thread. And the
keepalive needs to be done by the async muxer thread (since
it's the one writing sideband data back to the client). And
we can't reliably signal the async thread from the main
thread, because the async code sometimes uses threads and
sometimes uses forked processes.
Therefore the signal must come over the stderr channel,
where it may be interspersed with other random
human-readable messages from index-pack. This patch makes
the signal a single NUL byte. This is easy to parse, should
not appear in any normal stderr output, and we don't have to
worry about any timing issues (like seeing half the signal
bytes in one read(), and half in a subsequent one).
This is a bit ugly, but it's simple to code and should work
reliably.
Another option would be to stop using an async thread for
muxing entirely, and just poll() both stderr and stdout of
index-pack from the main thread. This would work for
index-pack (because we aren't doing anything useful in the
main thread while it runs anyway). But it would make the
connectivity check and the hook muxers much more
complicated, as they need to simultaneously feed the
sub-programs while reading their stderr.
The index-pack phase is the only one that needs this
signaling, so it could simply behave differently than the
other two. That would mean having two separate
implementations of copy_to_sideband (and the keepalive
code), though. And it still doesn't get rid of the
signaling; it just means we can write a nicer message like
"END_OF_INPUT" or something on stdout, since we don't have
to worry about separating it from the stderr cruft.
One final note: this signaling trick is only done with
index-pack, not with unpack-objects. There's no point in
doing it for the latter, because by definition it only kicks
in for a small number of objects, where keepalives are not
as useful (and this conveniently lets us avoid duplicating
the implementation).
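The keepalive side of the muxer can be sketched as a poll() loop with a
timeout; send_sideband_packet() here is a hypothetical stand-in for
writing a band-#2 packet back to the client, and the end-of-input/NUL
handling is omitted for brevity.
    #include <errno.h>
    #include <poll.h>
    #include <stddef.h>
    #include <unistd.h>

    /* Hypothetical: writes one sideband packet on the given band; an
     * empty payload (n == 0) serves as the keepalive. */
    extern void send_sideband_packet(int band, const char *buf, size_t n);

    static void copy_stderr_to_sideband(int err_fd, int keepalive_in_sec)
    {
        char buf[4096];

        for (;;) {
            struct pollfd pfd = { .fd = err_fd, .events = POLLIN };
            int ret = poll(&pfd, 1, keepalive_in_sec * 1000);

            if (ret < 0) {
                if (errno == EINTR)
                    continue;           /* interrupted; try again */
                break;
            }
            if (!ret) {
                /* nothing from the child for a while: keepalive */
                send_sideband_packet(2, "", 0);
                continue;
            }
            ssize_t n = read(err_fd, buf, sizeof(buf));
            if (n <= 0)
                break;                  /* EOF or read error */
            send_sideband_packet(2, buf, n);
        }
    }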
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
receive-pack: turn on connectivity progress
When we receive a large push, the server side of the
connection may spend a lot of time (30s or more for a full
push of linux.git) walking the object graph without
producing any output. Let's give the user some indication
that we're actually working.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
receive-pack: relay connectivity errors to sideband
If the connectivity check encounters a problem when
receiving a push, the error output goes to receive-pack's
stderr, whose destination depends on the protocol used
(ssh tends to send it to the user, though without a "remote"
prefix; http will generally eat it in the server's error
log).
The information should consistently go back to the user, as
there is a reasonable chance their client is buggy and
generating a bad pack.
We can do so by muxing it over the sideband as we do with
other sub-process stderr.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
receive-pack: turn on index-pack resolving progress
When we receive a large push, the server side may have to
spend a lot of CPU processing the incoming packfile.
During the "receiving" phase, we are typically network
bound, and the client is writing its own progress to the
user. But during the delta resolution phase, we may spend
minutes (e.g., for a full push of linux.git) without
making any indication to the user that the connection has
not hung.
Let's ask index-pack to produce progress output for this
phase (unless the client asked us to be quiet, of course).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
index-pack: add flag for showing delta-resolution progress
The index-pack command has two progress meters: one for
"receiving objects", and one for "resolving deltas". You get
neither by default, or both with "-v".
But for a push through receive-pack, we would want only the
"resolving deltas" phase, _not_ the "receiving objects"
progress. There are two reasons for this.
One is simply that existing clients are already printing
"writing objects" progress at the same time. Arguably
"receiving" from the far end is more useful, because it
tells you what has actually gotten there, as opposed to what
might be stuck in a buffer somewhere between the client and
server. But that would require a protocol extension to tell
clients not to print their progress. Possible, but
complexity for little gain.
The second reason is much more important. In a full-duplex
connection like git-over-ssh, we can print progress while
the pack is incoming, and it will immediately get to the
client. But for a half-duplex connection like git-over-http,
we should not say anything until we have received the full
request. Anything we write is subject to being stuck in a
buffer by the webserver. Worse, we can end up in a deadlock
if that buffer fills up.
So our best bet is to avoid writing anything that isn't a
small fixed size until we've received the full pack.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
clone: use a real progress meter for connectivity check
Because the initial connectivity check for a cloned
repository can be slow, 0781aa4 (clone: let the user know
when check_everything_connected is run, 2013-05-03) added a
"fake" progress meter; we simply say "Checking connectivity"
when it starts, and "done" at the end, with nothing between.
Since check_connected() now knows how to do a real progress
meter, we can drop our fake one and use that one instead.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
check_connected: add progress flag
Connectivity checks have to traverse the entire object graph
in the worst case (e.g., a full clone or a full push). For
large repositories like linux.git, this can take 30-60
seconds, during which time git may produce little or no
output.
Let's add the option of showing progress, which is taken
care of by rev-list.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
check_connected: relay errors to alternate descriptor
Unless the "quiet" flag is given, check_connected sends any
errors to the stderr of the caller (because the child
rev-list inherits that descriptor). However, server-side
callers may want to send these over a sideband channel
instead. Let's make that possible.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
check_everything_connected: use a struct with named options
The number of variants of check_everything_connected has
grown over the years, so that the "real" function takes
several possibly-zero, possibly-NULL arguments. We hid the
complexity behind some wrapper functions, but this doesn't
scale well when we want to add new options.
If we add more wrapper variants to handle the new options,
then we can get a combinatorial explosion when those options
might be used together (right now nobody wants to use both
"shallow" and "transport" together, so we get by with just a
few wrappers).
If instead we add new parameters to each function, each of
which can have a default value, then callers who want the
defaults end up with confusing invocations full of zeroes and NULLs,
where it is unclear which parameter is which (and every caller needs
updating when we add new options).
Instead, let's add a struct to hold all of the optional
parameters. This is a little more verbose for the callers
(who have to declare the struct and fill it in), but it
makes their code much easier to follow, because every option
is named as it is set (and unused options do not have to be
mentioned at all).
Note that we could also stick the iteration function and its
callback data into the option struct, too. But since those
are required for each call, by avoiding doing so, we can let
very simple callers just pass "NULL" for the options and not
worry about the struct at all.
While we're touching each site, let's also rename the
function to check_connected(). The existing name was quite
long, and not all of the wrappers even used the full name.
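The pattern described above looks roughly like this; the struct and
field names are illustrative rather than Git's exact
check_connected_options, and the object-iterator callback that the real
function also takes is left out.
    struct connectivity_options {
        int quiet;      /* send rev-list errors to /dev/null */
        int progress;   /* ask rev-list for a progress meter */
        int err_fd;     /* relay errors here instead; 0 = inherit stderr */
        /* new optional knobs can be added without touching old callers */
    };
    #define CONNECTIVITY_OPTIONS_INIT { 0 }

    int check_connected_sketch(struct connectivity_options *opt)
    {
        struct connectivity_options defaults = CONNECTIVITY_OPTIONS_INIT;

        if (!opt)
            opt = &defaults;    /* simple callers just pass NULL */

        /* ... spawn "rev-list --objects --stdin --not --all", wiring up
         * --quiet/--progress and the error descriptor as requested ... */
        return 0;
    }
A caller that wants a progress meter declares the struct, sets
.progress to 1, and passes its address; everyone else passes NULL.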
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
rev-list: add optional progress reporting
It's easy to ask rev-list to do a traversal that may take
many seconds (e.g., by calling "--objects --all"). In theory
you can monitor its progress by the output you get to
stdout, but this isn't always easy.
Some operations, like "--count", don't make any output until
the end.
And some callers, like check_everything_connected(), are
using it just for the error-checking of the traversal, and
throw away stdout entirely.
This patch adds a "--progress" option which can be used to
give some eye-candy for a user waiting for a long traversal.
This is just a rev-list option and not a regular traversal
option, because it needs cooperation from the callbacks in
builtin/rev-list.c to do the actual count.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
check_everything_connected: always pass --quiet to rev-list
The check_everything_connected function takes a "quiet"
parameter which does two things if non-zero:
1. redirect rev-list's stderr to /dev/null to avoid
showing errors to the user
2. pass "--quiet" to rev-list
Item (1) is obviously useful. But item (2) is
surprisingly not. For rev-list, "--quiet" does not have
anything to do with chattiness on stderr; it tells rev-list
not to bother writing the list of traversed objects to
stdout, for efficiency. And since we always redirect
rev-list's stdout to /dev/null in this function, there is no
point in asking it to ever write anything to stdout.
The efficiency gains are modest; a best-of-five run of "git
rev-list --objects --all" on linux.git dropped from 32.013s
to 30.502s when adding "--quiet". That's only about 5%, but
given how easy it is, it's worth doing.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
fetch-pack: grow stateless RPC windows exponentially
When updating large repositories, the LARGE_FLUSH limit (that is, the
limit at which the window growth strategy switches from exponential to
linear) is reached quite quickly. Use a conservative exponential growth
strategy when that limit is reached instead (and increase LARGE_FLUSH so
that there is no regression in window size).
This optimization is only applied during stateless RPCs to avoid the
issue raised and fixed in commit 44d8dc54 (Fix potential local
deadlock during fetch-pack, 2011-03-29).
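The growth strategy can be sketched like this (constants and the exact
post-limit growth factor are illustrative; the real values live in
fetch-pack.c):
    #define PIPESAFE_FLUSH  32
    #define LARGE_FLUSH     16384

    static int next_flush(int stateless_rpc, int count)
    {
        if (stateless_rpc) {
            /* half-duplex smart-http: keep growing, just more slowly
             * once the window is already large */
            if (count < LARGE_FLUSH)
                count <<= 1;
            else
                count = count * 11 / 10;
        } else {
            /* full-duplex transports keep the old behaviour:
             * double, then grow linearly past the pipe-safe limit */
            if (count < PIPESAFE_FLUSH)
                count <<= 1;
            else
                count += PIPESAFE_FLUSH;
        }
        return count;
    }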
Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Reviewed-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>