git-svn.perl: keep processing all commits in parents_exclude
This fixes a bug where git finds the incorrect merge parent. Consider a
repository with trunk, branch1 of trunk, and branch2 of branch1.
Without this change, git interprets a merge of branch2 into trunk as a
merge of branch1 into trunk.
Signed-off-by: Steven Walter <stevenrwalter@gmail.com>
Reviewed-by: Sam Vilain <sam@vilain.net>
Signed-off-by: Eric Wong <normalperson@yhbt.net>
git-svn.perl: consider all ranges for a given merge, instead of only tip-by-tip
Consider the case where you have trunk, branch1 of trunk, and branch2 of
branch1. trunk is merged back into branch2, and then branch2 is
reintegrated into trunk. The merge of branch2 into trunk will have
svn:mergeinfo property references to both branch1 and branch2. When
git-svn fetches the commit that merges branch2 (check_cherry_pick),
it is necessary to eliminate the merged contents of branch1 as well as
branch2, or else the merge will be incorrectly ignored as a cherry-pick.
Signed-off-by: Steven Walter <stevenrwalter@gmail.com>
Reviewed-by: Sam Vilain <sam@vilain.net>
Signed-off-by: Eric Wong <normalperson@yhbt.net>
When upload-pack advertises refs, we attempt to peel tags
and advertise the peeled version. We currently hand-roll the
tag dereferencing, and use as many optimizations as we can
to avoid loading non-tag objects into memory.
Not only has peel_ref recently learned these optimizations,
but it also contains an even more important one: it
has access to the "peeled" data from the pack-refs file.
That means we can not only avoid loading annotated tags
entirely, but also avoid doing any kind of object lookup at
all.
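For reference, the "peeled" data in question is what "git pack-refs"
records as a "^" line right after each annotated tag entry; the tag
name and object ids below are only placeholders:

$ git pack-refs --all
$ cat .git/packed-refs
# pack-refs with: peeled
<tag-object-id> refs/tags/v1.0
^<peeled-commit-id>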
This cut the CPU time to advertise refs by 50% in the
linux-2.6 repo, as measured by:
echo 0000 | git-upload-pack . >/dev/null
best-of-five, warm cache, objects and refs fully packed:
[before] [after]
real 0m0.026s real 0m0.013s
user 0m0.024s user 0m0.008s
sys 0m0.000s sys 0m0.000s
Those numbers are irrelevantly small compared to an actual
fetch. Here's a larger repo (400K refs, of which 12K are
unique, and of which only 107 are unique annotated tags):
[before] [after]
real 0m0.704s real 0m0.596s
user 0m0.600s user 0m0.496s
sys 0m0.096s sys 0m0.092s
This shows only a 15% speedup (mostly because it has fewer
actual tags to parse), but a larger absolute value (100ms,
which isn't a lot compared to a real fetch, but this
advertisement happens on every fetch, even if the client is
just finding out they are completely up to date).
In truly pathological cases, where you have a large number
of unique annotated tags, it can make an even bigger
difference. Here are the numbers for a linux-2.6 repository
that has had every seventh commit tagged (so about 50K
tags):
[before] [after]
real 0m0.443s real 0m0.097s
user 0m0.416s user 0m0.080s
sys 0m0.024s sys 0m0.012s
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The point of peel_ref is to dereference tags; if the base
object is not a tag, then we can return early without even
loading the object into memory.
This patch accomplishes that by checking sha1_object_info
for the type. For a packed object, we can get away with just
looking in the pack index. For a loose object, we only need
to inflate the first couple of header bytes.
This is a bit of a gamble; if we do find a tag object, then
we will end up loading the content anyway, and the extra
lookup will have been wasteful. However, if it is not a tag
object, then we save loading the object entirely. Depending
on the ratio of non-tags to tags in the input, this can be a
minor win or minor loss.
However, it does give us one potential major win: if a ref
points to a large blob (e.g., via an unannotated tag), then
we can avoid looking at it entirely.
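The same kind of cheap type query is available from the command line
via "git cat-file -t", which also answers without unpacking the whole
object; both ref names below are made up:

$ git cat-file -t v1.0          # annotated tag: still has to be read to peel it
tag
$ git cat-file -t big-tarball   # unannotated tag of a large blob: nothing to peel
blob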
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The idea of the peel_ref function is to dereference tag
objects recursively until we hit a non-tag, and return the
sha1. Conceptually, it should return 0 if it is successful
(and fill in the sha1), or -1 if there was nothing to peel.
However, the current behavior is much more confusing. For a
regular loose ref, the behavior is as described above. But
there is an optimization to reuse the peeled-ref value for a
ref that came from a packed-refs file. If we have such a
ref, we return its peeled value, even if that peeled value
is null (indicating that we know the ref definitely does
_not_ peel).
It might seem like such information is useful to the caller,
who would then know not to bother loading and trying to peel
the object. Except that they should not bother loading and
trying to peel the object _anyway_, because that fallback is
already handled by peel_ref. In other words, the whole point
of calling this function is that it handles those details
internally, and you either get a sha1, or you know that it
is not peel-able.
This patch catches the null sha1 case internally and
converts it into a -1 return value (i.e., there is nothing
to peel). This simplifies callers, which do not need to
bother checking themselves.
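For comparison, the peeled view that callers ultimately want is the
one "git show-ref --dereference" prints: only a ref that actually
peels gets an extra "^{}" line (ref names and object ids below are
schematic):

$ git show-ref --dereference master v1.0
<commit-id> refs/heads/master
<tag-object-id> refs/tags/v1.0
<peeled-commit-id> refs/tags/v1.0^{}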
Two callers are worth noting:
- in pack-objects, a comment indicates that there is a
difference between non-peelable tags and unannotated
tags. But that is not the case (before or after this
patch). Whether you get a null sha1 has to do with
internal details of how peel_ref operated.
- in show-ref, if peel_ref returns a failure, the caller
tries to decide whether to try peeling manually based on
whether the REF_ISPACKED flag is set. But this doesn't
make any sense. If the flag is set, that does not
necessarily mean the ref came from a packed-refs file
with the "peeled" extension. But it doesn't matter,
because even if it didn't, there's no point in trying to
peel it ourselves, as peel_ref would already have done
so. In other words, the fallback peeling is guaranteed
to fail.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When we are asked to peel a ref to a sha1, we internally call
deref_tag, which will recursively parse each tagged object
until we reach a non-tag. This has the benefit that we will
verify our ability to load and parse the pointed-to object.
However, there is a performance downside: we may not need to
load that object at all (e.g., if we are simply listing
peeled refs), or it may be a large object
that should follow a streaming code path (e.g., an annotated
tag of a large blob).
It makes more sense for peel_ref to choose the fast thing
rather than performing the extra check, for two reasons:
1. We will already sometimes short-circuit the tag parsing
in favor of a peeled entry from a packed-refs file. So
we are already favoring speed in some cases, and it is
not wise for a caller to rely on peel_ref to detect
corruption.
2. We already silently ignore much larger corruptions,
like a ref that points to a non-existent object, or a
tag object that exists but is corrupted.
3. peel_ref is not the right place to check for such a
database corruption. It is returning only the sha1
anyway, not the actual object. Any callers which use
that sha1 to load an object will soon discover the
corruption anyway, so we are really just pushing back
the discovery to later in the program.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
paint_down_to_common(): parse commit before relying on its timestamp
When refactoring the merge-base computation to reduce the pairwise
O(n*(n-1)) traversals to parallel O(n) traversals, the code forgot
that the timestamp-based heuristics need each commit to have been
parsed. This caused an empty "git pull" to spend cycles, traversing
the history all the way down to 0 (because an unparsed commit object
has 0 timestamp, and any other commit object with positive timestamp
will be processed for its parents, all getting parsed), only to come
up with a merge message to be used.
Teach the commands from the "log" family the "--grep-reflog" option
to limit output to commits with a reflog entry that matches the
given string, when the "--walk-reflogs" option is in effect.
* nd/grep-reflog:
revision: make --grep search in notes too if shown
log --grep-reflog: reject the option without -g
revision: add --grep-reflog to filter commits by reflog messages
grep: prepare for new header field filter
t1450: the order the objects are checked is undefined
When a tag T points at an object X whose type is different from
what the tag records, fsck should report it as an
error.
However, depending on the order X and T are checked individually,
the actual error message can be different. If X is checked first,
fsck remembers X's type and then when it checks T, it notices that T
records X as a wrong type (i.e. the complaint is about a broken tag
T). If T is checked first, on the other hand, fsck remembers that we
need to verify X is of the type the tag records, and when it later
checks X, it notices that X is of a wrong type (i.e. the complaint
is about a broken object X).
The important thing is that fsck notices such an error and diagnoses
the issue on object X, but the test was expecting that we happen to
check the objects in the order that makes us detect the issue with
tag T, not with object X. Remove this unwarranted assumption.
Merge branch 'rr/maint-submodule-unknown-cmd' into maint
"git submodule frotz" was not diagnosed as "frotz" being an unknown
subcommand to "git submodule"; the user instead got a complaint that
"git submodule status" was run with an unknown path "frotz".
* rr/maint-submodule-unknown-cmd:
submodule: if $command was not matched, don't parse other args
Merge branch 'sp/maint-http-enable-gzip' into maint
"git fetch" over http advertised that it supports "deflate", which
is much less common, and did not advertise the more common "gzip"
in its Accept-Encoding header.
* sp/maint-http-enable-gzip:
Enable info/refs gzip decompression in HTTP client
Merge branch 'sp/maint-http-info-refs-no-retry' into maint
"git fetch" over http had an old workaround for an unlikely server
misconfiguration; it turns out that this hurts debuggability of the
configuration in general, and has been reverted.
* sp/maint-http-info-refs-no-retry:
Revert "retry request without query when info/refs?query fails"
Documentation: mention `push.default` in git-push.txt
It is already listed in the "git config" documentation, but people
interested in pushing would look at the "git push" documentation first.
Noticed-by: David Glasser
Signed-off-by: Ramkumar Ramachandra <artagnon@gmail.com>
Acked-by: Matthieu Moy <Matthieu.Moy@grenoble-inp.fr>
Fixed-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The 'a', 'i' and 'c' commands take a literal text to be added
following a backslash, but then in the source we cannot indent
the literal text, which makes it ugly.
We need to also remember to double the backslash inside double
quotes.
Avoid these issues altogether by having an extra line in a template
file and generating test vectors by deleting or replacing that line,
instead of using the 'a' command.
The actual external command to run for mergetool backend can be
specified with difftool/mergetool.$name.cmd configuration
variables, but this mechanism was ignored for the backends we
natively support.
* da/mergetool-custom:
mergetool--lib: Allow custom commands to override built-ins
Send errors from "unpack-objects" and "index-pack" back to the "git
push" over the git and smart-http protocols, just as is done
for a push over the ssh protocol.
* jk/receive-pack-unpack-error-to-pusher:
receive-pack: drop "n/a" on unpacker errors
receive-pack: send pack-processing stderr over sideband
receive-pack: redirect unpack-objects stdout to /dev/null
Remove the hard coded length limit on variable names in config files
Previously while reading the variable names in config files, there
was a 256 character limit with at most 128 of those characters being
used by the section header portion of the variable name. This
limitation was only enforced while reading the config files. It was
possible to write a config file that was not subsequently readable.
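A quick way to see the old limitation, using an arbitrarily long
(but otherwise valid) variable name:

$ git config section.$(printf 'x%.0s' $(seq 1 300)) true
$ git config --list

With the old fixed-size buffer the second command could refuse to
parse the very entry the first one had just written; with the
strbuf-based parser both simply work.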
Instead of enforcing this limitation for both reading and writing,
remove it entirely by changing the var member of the config_file
struct to a strbuf instead of a fixed length buffer. Update all of
the parsing functions in config.c to use the strbuf instead of the
static buffer.
The parsing functions that returned the base length of the variable
name now return simply 0 for success and -1 for failure. The base
length information is obtained through the strbuf's len member.
We now send the buf member of the strbuf to external callback
functions to preserve the external api. None of the external
callers rely on the old size limitation for sizing their own buffers
so removing the limit should have no externally visible effect.
Signed-off-by: Ben Walton <bdwalton@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
diff: diff.context configuration gives default to -U
Introduce a configuration variable diff.context that tells
Porcelain commands to use a non-default number of context
lines instead of 3 (the default). With this variable, users
do not have to keep repeating "git log -U8" from the command
line; instead, it becomes sufficient to say "git config
diff.context 8" just once.
Signed-off-by: Jeff Muizelaar <jmuizelaar@mozilla.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
mailinfo: don't require "text" mime type for attachments
Currently "git am" does insane things if the mbox it is given contains
attachments with a MIME type that aren't "text/*".
In particular, it will still decode them, and pass them "one line at a
time" to the mail body filter, but because it has determined that they
aren't text (without actually looking at the contents, just at the mime
type) the "line" will be the encoding line (eg 'base64') rather than a
line of *content*.
Which then will cause the text filtering to fail, because we won't
correctly notice when the attachment text switches from the commit message
to the actual patch. This results in a patch failure, even though the
patch may be a perfectly well-formed attachment; it's just that the
message type may be (for example) "application/octet-stream" instead
of "text/plain".
Just remove all the bogus games with the message_type. The only difference
that code creates is how the data is passed to the filter function
(chunked per pre-decode line or per post-decode line), and that difference
is *wrong*, since chunking things per pre-decode line can never be a
sensible operation, and cannot possibly matter for binary data anyway.
This code goes all the way back to March of 2007, in commit 87ab79923463
("builtin-mailinfo.c infrastrcture changes"), and apparently Don used to
pass random mbox contents to git. However, the pre-decode vs post-decode
logic really shouldn't matter even for that case, and more importantly, "I
fed git am crap" is not a valid reason to break *real* patch attachments.
If somebody really cares, and determines that some attachment is binary
data (by looking at the data, not the MIME-type), the whole attachment
should be dismissed, rather than fed in random-sized chunks to
"handle_filter()".
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Don Zickus <dzickus@redhat.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Merge branch 'maint' of git://github.com/git-l10n/git-po into maint
Update German and Simplified Chinese translations.
* 'maint' of git://github.com/git-l10n/git-po:
l10n: de.po: correct translation of a 'rebase' message
l10n: Improve many translation for zh_CN
l10n: Unify the translation for '(un)expected'
Merge branch 'jc/maint-log-grep-all-match-1' into maint
* jc/maint-log-grep-all-match-1:
grep.c: make two symbols really file-scope static this time
t7810-grep: test --all-match with multiple --grep and --author options
t7810-grep: test interaction of multiple --grep and --author options
t7810-grep: test multiple --author with --all-match
t7810-grep: test multiple --grep with and without --all-match
t7810-grep: bring log --grep tests in common form
grep.c: mark private file-scope symbols as static
log: document use of multiple commit limiting options
log --grep/--author: honor --all-match honored for multiple --grep patterns
grep: show --debug output only once
grep: teach --debug option to dump the parse tree
Teach "git rebase -i" an option to edit the insn sheet.
* aw/rebase-i-edit-todo:
rebase -i: suggest using --edit-todo to fix an unknown instruction
rebase -i: Add tests for "--edit-todo"
rebase -i: Teach "--edit-todo" action
rebase -i: Refactor help messages for todo file
rebase usage: subcommands can not be combined with -i
revision: make --grep search in notes too if shown
Notes are shown after the commit body. From the user's perspective
they look pretty much like part of the commit body, and users may
assume --grep would search that part too.
Make it so.
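For example, when notes are being displayed, the pattern (an
arbitrary one here) now also matches text found in the notes:

$ git log --notes --grep=performance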
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
revision: add --grep-reflog to filter commits by reflog messages
Similar to --author/--committer, which filter commits by the author
and committer header fields, --grep-reflog adds a fake "reflog" header
to each commit and a grep filter to search on that line.
All rules for --author/--committer apply, except that no timestamp
stripping is done.
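For example, to show only the commits whose reflog entries were
recorded by checkout (the pattern is illustrative; the option has to
be combined with -g/--walk-reflogs):

$ git log --walk-reflogs --grep-reflog='checkout: moving from'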
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
grep supports only the author and committer headers, which both
receive the same special treatment (timestamp stripping) that later
header fields may or may not need. Check the field type and only call
strip_timestamp() when the field is either author or committer.
GREP_HEADER_FIELD_MAX is put in the grep_header_field enum to be
calculated automatically, correctly, as long as it's at the end of the
enum.
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We handle completion items with spaces just fine,
since we pass the lists around with newline delimiters.
However, we do not handle filenames with shell
metacharacters, as "compgen -W" performs expansion on the
list we give it.
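The expansion is easy to demonstrate; the command substitution below
stands in for whatever metacharacters a filename might contain:

$ compgen -W '$(echo unexpected-expansion)'
unexpected-expansion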
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* maint:
l10n: de.po: correct translation of a 'rebase' message
l10n: Improve many translation for zh_CN
l10n: Unify the translation for '(un)expected'
submodule: if $command was not matched, don't parse other args
"git submodule" command DWIMs the command line and assumes a
unspecified action word for 'status' action. This is a UI mistake
that leads to a confusing behaviour. A mistyped command name is
instead treated as a request for 'status' of the submodule with that
name, e.g.
$ git submodule show
error: pathspec 'show' did not match any file(s) known to git.
Did you forget to 'git add'?
Stop DWIMming an unknown or mistyped subcommand name as a pathspec
given to the unspelled "status" subcommand. "git submodule" without
any argument is still interpreted as "git submodule status", though
the value of that is questionable.
Adjust t7400 to match, and stop advertising that the default
subcommand is 'status', which does not help much in practice other
than promoting laziness and confusion.
Signed-off-by: Ramkumar Ramachandra <artagnon@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Only the first test t0000 in the test suite made sure we have built
Git to be tested; move the check to test-lib so that it applies to
all tests equally.
* rr/test-make-sure-we-have-git:
t/test-lib: make sure Git has already been built
* js/poll-emu:
make poll() work on platforms that can't recv() on a non-socket
poll() exits too early with EFAULT if 1st arg is NULL
fix some win32 specific dependencies in poll.c
make poll available for other platforms lacking it
Run our test scripts with MALLOC_CHECK_ and MALLOC_PERTURB_, the
built-in memory access checking facility GNU libc has (see the
example after this list).
* ep/malloc-check-perturb:
MALLOC_CHECK: various clean-ups
Add MALLOC_CHECK_ and MALLOC_PERTURB_ libc env to the test suite for detecting heap corruption
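These are ordinary GNU libc environment variables, so a single test
script (or any program) can also be run under them by hand; the
values below are arbitrary:

$ cd t && MALLOC_CHECK_=3 MALLOC_PERTURB_=165 ./t0000-basic.sh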
* po/maint-docs:
Doc branch: show -vv option and alternative
Doc clean: add See Also link
Doc add: link gitignore
Doc: separate gitignore pattern sources
Doc: shallow clone deepens _to_ new depth
The codepath for handling "--tee" ends up relaunching the test
script under a shell, and that one has to be a Bourne shell. But we
incorrectly used $SHELL, which could be a non-Bourne shell (e.g. zsh
or csh); we have the Makefile variable $SHELL_PATH for exactly that,
so use it instead.
Signed-off-by: Ramkumar Ramachandra <artagnon@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
mergetool--lib: Allow custom commands to override built-ins
Allow users to override the default commands provided by the
mergetools/* scriptlets.
Users occasionally run into problems where they expect to be
able to override the built-in tool names. The documentation
does not explicitly mention that built-ins cannot be overridden,
so it's easy to assume that it should work.
Lift this restriction so that built-in tools are handled the
same way as user-configured tools. Add tests to guarantee this
behavior.
A nice benefit of this change is that it protects users from
having future versions of git trump their custom configuration
with a new built-in tool.
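With the restriction lifted, a built-in tool name such as meld can be
redefined just like a user-defined one; the command line below is
only an illustration:

$ git config mergetool.meld.cmd 'meld "$LOCAL" "$MERGED" "$REMOTE"'
$ git config mergetool.meld.trustExitCode false
$ git mergetool --tool=meld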
That patch does fix expansion of weird variables in some
simple tests, but it also seems to break other things, like
expansion of refs by "git checkout".
While we're sorting out the correct solution, we are much
better off with the original bug (people with metacharacters in
their completions occasionally see an error message) than
the current bug (ref completion does not work at all).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Merge branch 'jc/maint-blame-no-such-path' into maint
Even during a conflicted merge, "git blame $path" always meant to
blame uncommitted changes to the "working tree" version; make it
more useful by showing cleanly merged parts as coming from the other
branch that is being merged.
This incidentally fixes an unrelated problem on a case insensitive
filesystem, where "git blame MAKEFILE" run in a history that has
"Makefile" but not "MAKEFILE" did not say "No such file MAKEFILE in
HEAD" but pretended as if "MAKEFILE" was a newly added file.
* jc/maint-blame-no-such-path:
blame: allow "blame file" in the middle of a conflicted merge
blame $path: avoid getting fooled by case insensitive filesystems
"git fetch --all", when passed "--no-tags", did not honor the
"--no-tags" option while fetching from individual remotes (the same
issue existed with "--tags", but the combination "--all --tags" makes
much less sense than "--all --no-tags").
* dj/fetch-all-tags:
fetch --all: pass --tags/--no-tags through to each remote
submodule: use argv_array instead of hand-building arrays
fetch: use argv_array instead of hand-building arrays
argv-array: fix bogus cast when freeing array
argv-array: add pop function
File modification times in ZIP files are encoded in DOS format: local
time with a granularity of two seconds. Add an extra field to all
archive entries to also record the mtime in Unix fashion, as UTC with
a granularity of one second.
This has the desirable side-effect of convincing Info-ZIP unzip 6.00
to respect general purpose flag 11, which is used to indicate that a
file name is encoded in UTF-8. Any extra field would do, actually,
but the extended timestamp is a reasonably small one (22 bytes per
entry). Archives created by Info-ZIP zip 3.0 contain it, too (but
with ctime and atime as well).
Signed-off-by: Rene Scharfe <rene.scharfe@lsrfire.ath.cx>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
commit: pay attention to submodule.$name.ignore in .gitmodules
"git status" does not list a submodule with uncommitted working tree
files as modified when "submodule.$name.ignore" is set to "dirty" in
in-tree ".gitmodules" file. Both status and commit honor the setting
in $GIT_DIR/config, but "commit" does not pick it up from .gitmodules,
which is inconsistent.
Teach "git commit" to pay attention to the setting in .gitmodules as
well.
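For example, with a submodule named "lib" (the name is illustrative),
the setting can live in the tracked .gitmodules file:

$ git config -f .gitmodules submodule.lib.ignore dirty

Both "git status" and "git commit" now leave lib's dirty work tree
out of the list of modifications.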
Signed-off-by: Orgad Shaneh <orgads@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
git blame: document that it always follows origin across whole-file renames
Make it clear to people who (rightly or wrongly) think that the
"--follow" option should follow origin across while-file renames
that we already do so. That would explain the output that they see
when they do give the "--follow" option to the command.
We may or may not want to do a "--no-follow" patch as a follow-up,
but that is a separate topic.
Finishing touch to update the string-list documentation, to make sure
the earlier rewrite of the ref-list match logic, which depends on its
sort order, will not get broken.
* mh/fetch-filter-refs:
string_list API: document what "sorted" means
Usually there is no need for users to specify whether an
http remote is smart or dumb; the protocol is designed so
that a single initial request is made, and the client can
determine the server's capability from the response.
However, some misconfigured dumb-only servers may not like
the initial request by a smart client, as it contains a
query string. Until recently, commit 703e6e7 worked around
this by making a second request. However, that commit was
recently reverted due to its side effect of masking the
initial request's error code.
Since git has had that workaround for several years, we
don't know exactly how many such misconfigured servers are
out there. The reversion of 703e6e7 assumes they are rare
enough not to worry about. Still, that reversion leaves
somebody who does run into such a server with no escape
hatch at all. Let's give them an environment variable they
can tweak to perform the "dumb" request.
This is intentionally not a documented interface. It's
overly simple and is really there for debugging in case
somebody does complain about git not working with their
server. A real user-facing interface would entail a
per-remote or per-URL config variable.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The output from git push currently looks like this:
$ git push dest HEAD
fatal: [some message from index-pack]
error: unpack failed: index-pack abnormal exit
To dest
! [remote rejected] HEAD -> master (n/a (unpacker error))
That n/a is meant to be "the per-ref status is not
available" but the nested parentheses just make it look
ugly. Let's turn the final line into just:
! [remote rejected] HEAD -> master (unpacker error)
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
receive-pack: send pack-processing stderr over sideband
Receive-pack invokes either unpack-objects or index-pack to
handle the incoming pack. However, we do not redirect the
stderr of the sub-processes at all, so it is never seen by
the client. From the initial thread adding sideband support,
which is here:
it is clear that some messages are specifically kept off the
sideband (with the assumption that they are of interest only
to an administrator, not the client). The stderr of the
subprocesses is mentioned in the thread, but it's unclear if
they are included in that group, or were simply forgotten.
However, there are a few good reasons to show them to the
client:
1. In many cases, they are directly about the incoming
packfile (e.g., fsck warnings with --strict, corruption
in the packfile, etc). Without these messages, the
client just gets "unpacker error" with no extra useful
diagnosis.
2. No matter what the cause, we are probably better off
showing the errors to the client. If the client and the
server admin are not the same entity, it is probably
much easier for the client to cut-and-paste the errors
they see than for the admin to try to dig them out of a
log and correlate them with a particular session.
3. Users of the ssh transport typically already see these
stderr messages, as the remote's stderr is copied
literally by ssh. This brings other transports (http,
and push-over-git if you are crazy enough to enable it)
more in line with ssh. As a bonus for ssh users,
because the messages are now fed through the sideband
and printed by the local git, they will have "remote:"
prepended and be properly interleaved with any local
output to stderr.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
receive-pack: redirect unpack-objects stdout to /dev/null
The unpack-objects command should not generally produce any
output on stdout. However, if it's given extra input after
the packfile, it will spew the remainder to stdout. When
called by receive-pack, this means we will break protocol,
since our stdout is connected to the remote send-pack.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>