Introduce <branch>@{push} short-hand to denote the remote-tracking
branch that tracks the branch at the remote the <branch> would be
pushed to.
* jk/at-push-sha1:
for-each-ref: accept "%(push)" format
for-each-ref: use skip_prefix instead of starts_with
sha1_name: implement @{push} shorthand
sha1_name: refactor interpret_upstream_mark
sha1_name: refactor upstream_mark
remote.c: add branch_get_push
remote.c: return upstream name from stat_tracking_info
remote.c: untangle error logic in branch_get_upstream
remote.c: report specific errors from branch_get_upstream
remote.c: introduce branch_get_upstream helper
remote.c: hoist read_config into remote_get_1
remote.c: provide per-branch pushremote name
remote.c: hoist branch.*.remote lookup out of remote_get_1
remote.c: drop "remote" pointer from "struct branch"
remote.c: refactor setup of branch->merge list
remote.c: drop default_remote_name variable
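As a quick sketch of when @{push} differs from @{upstream} in a triangular
setup (not part of the patches above; the remote name "myfork" and branch
name "topic" are made up for illustration), the push destination comes from
the existing remote.pushDefault or branch.<name>.pushremote configuration:

    git config remote.pushDefault myfork        # push everything to "myfork" by default
    git config branch.topic.pushremote myfork   # or override the push destination per branch
    git log @{push}..                           # commits on "topic" not yet at the push destination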
"git rebase -i" fired post-rewrite hook when it shouldn't (namely,
when it was told to stop sequencing with 'exec' insn).
* mm/rebase-i-post-rewrite-exec:
t5407: use <<- to align the expected output
rebase -i: fix post-rewrite hook with failed exec command
rebase -i: demonstrate incorrect behavior of post-rewrite
"git upload-pack" that serves "git fetch" can be told to serve
commits that are not at the tip of any ref, as long as they are
reachable from a ref, with uploadpack.allowReachableSHA1InWant
configuration variable.
* fm/fetch-raw-sha1:
upload-pack: optionally allow fetching reachable sha1
upload-pack: prepare to extend allow-tip-sha1-in-want
config.txt: clarify allowTipSHA1InWant with camelCase
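A minimal sketch of using the new knob (the object name is a placeholder):
the serving repository opts in, and a client may then ask for a commit that
is reachable but not at the tip of any advertised ref:

    # on the serving repository
    git config uploadpack.allowReachableSHA1InWant true
    # on the client, fetch a commit by object name rather than by ref
    git fetch origin <sha1-of-reachable-commit>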
Group the list of commands shown by "git help" by workflow element
to help early learners.
* sg/help-group:
help: respect new common command grouping
command-list.txt: drop the "common" tag
generate-cmdlist: parse common group commands
command-list.txt: add the common groups block
command-list: prepare machinery for upcoming "common groups" section
"git cat-file --batch(-check)" learned the "--follow-symlinks"
option that follows an in-tree symbolic link when asked about an
object via extended SHA-1 syntax, e.g. HEAD:RelNotes that points at
Documentation/RelNotes/2.5.0.txt. With the new option, the command
behaves as if HEAD:Documentation/RelNotes/2.5.0.txt was given as
input instead.
* dt/cat-file-follow-symlinks:
cat-file: add --follow-symlinks to --batch
sha1_name: get_sha1_with_context learns to follow symlinks
tree-walk: learn get_tree_entry_follow_symlinks
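A small usage sketch, reusing the example from the note above; --batch-check
reads object names from its standard input:

    echo HEAD:RelNotes | git cat-file --batch-check --follow-symlinks
    # answered as if HEAD:Documentation/RelNotes/2.5.0.txt had been asked for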
The clean/smudge interface did not work well when filtering empty
contents (it failed and then passed the empty input through). It
can be argued that a filter that produces anything but empty output
for an empty input is nonsense, but if the user wants to do strange
things, then why not?
* jh/filter-empty-contents:
sha1_file: pass empty buffer to index empty file
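For reference, a clean/smudge filter is wired up roughly as below (the
"upcase" driver name and the tr commands are only an illustration); with
this fix, adding an empty file runs the clean filter on genuinely empty
input instead of failing and falling back:

    echo '*.txt filter=upcase' >.gitattributes
    git config filter.upcase.clean  'tr a-z A-Z'
    git config filter.upcase.smudge 'tr A-Z a-z'
    : >empty.txt
    git add empty.txt    # clean filter now sees (and may transform) the empty contents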
Communication between the HTTP server and the http_backend process
can lead to a deadlock when relaying a large ref negotiation
request. Diagnose the situation better, and mitigate it by reading
such a request first into core (up to a reasonable limit).
* jk/http-backend-deadlock:
http-backend: spool ref negotiation requests to buffer
t5551: factor out tag creation
http-backend: fix die recursion with custom handler
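The "reasonable limit" is configurable on the serving side; a sketch,
assuming the knob introduced by this topic is http.maxRequestBuffer (the
commit message further down only says the 10MB default is configurable):

    git config http.maxRequestBuffer 100M    # on the server repository behind http-backend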
When t7063 starts, it runs "update-index --untracked-cache"
to see if we support the untracked cache. Its output goes
straight to stderr, even if the test is not run with "-v".
Let's wrap it in a prereq that will hide the output by
default, but show it with "-v".
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
"git rev-list --objects $old --not --all" to see if everything that
is reachable from $old is already connected to the existing refs
was very inefficient.
* jk/still-interesting:
limit_list: avoid quadratic behavior from still_interesting
"hash-object --literally" introduced in v2.2 was not prepared to
take a really long object type name.
* jc/hash-object:
write_sha1_file(): do not use a separate sha1[] array
t1007: add hash-object --literally tests
hash-object --literally: fix buffer overrun with extra-long object type
git-hash-object.txt: document --literally option
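For illustration, --literally is what lets an arbitrarily long,
non-standard type name reach this code path in the first place (the type
name below is made up):

    echo content | git hash-object --literally -t some-very-long-experimental-type-name --stdin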
Merge branch 'jk/filter-branch-use-of-sed-on-incomplete-line' into maint
"filter-branch" corrupted commit log message that ends with an
incomplete line on platforms with some "sed" implementations that
munge such a line. Work it around by avoiding to use "sed".
* jk/filter-branch-use-of-sed-on-incomplete-line:
filter-branch: avoid passing commit message through sed
Merge branch 'jk/stash-require-clean-index' into maint
"git stash pop/apply" forgot to make sure that not just the working
tree is clean but also the index is clean. The latter is important
as a stash application can conflict and the index will be used for
conflict resolution.
* jk/stash-require-clean-index:
stash: require a clean index to apply
t3903: avoid applying onto dirty index
t3903: stop hard-coding commit sha1s
Merge branch 'jk/git-no-more-argv0-path-munging' into maint
We have prepended $GIT_EXEC_PATH and the path "git" is installed in
(typically "/usr/bin") to $PATH when invoking subprograms and hooks
for almost eternity, but the original use case the latter tried to
support was semi-bogus (i.e. install git to /opt/foo/git and run it
without having /opt/foo on $PATH), and more importantly it has
become less and less relevant as Git grew more mainstream (i.e. the
users would _want_ to have it on their $PATH). Stop prepending the
path in which "git" is installed to users' $PATH, as that would
interfere with the command search order people depend on (e.g. they may
not like versions of programs that are unrelated to Git in /usr/bin
and want to override them by having different ones in /usr/local/bin
and have the latter directory earlier in their $PATH).
* jk/git-no-more-argv0-path-munging:
stop putting argv[0] dirname at front of PATH
Teach the index to optionally remember already seen untracked files
to speed up "git status" in a working tree with tons of cruft.
* nd/untracked-cache: (24 commits)
git-status.txt: advertisement for untracked cache
untracked cache: guard and disable on system changes
mingw32: add uname()
t7063: tests for untracked cache
update-index: test the system before enabling untracked cache
update-index: manually enable or disable untracked cache
status: enable untracked cache
untracked-cache: temporarily disable with $GIT_DISABLE_UNTRACKED_CACHE
untracked cache: mark index dirty if untracked cache is updated
untracked cache: print stats with $GIT_TRACE_UNTRACKED_STATS
untracked cache: avoid racy timestamps
read-cache.c: split racy stat test to a separate function
untracked cache: invalidate at index addition or removal
untracked cache: load from UNTR index extension
untracked cache: save to an index extension
ewah: add convenient wrapper ewah_serialize_strbuf()
untracked cache: don't open non-existent .gitignore
untracked cache: mark what dirs should be recursed/saved
untracked cache: record/validate dir mtime and reuse cached output
untracked cache: make a wrapper around {open,read,close}dir()
...
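Pieced together from the commit titles above, trying the feature out looks
roughly like this (GIT_TRACE_UNTRACKED_STATS is the variable named in the
list):

    git update-index --untracked-cache      # test the system, then enable the cache
    git status                              # subsequent runs reuse the cached untracked listing
    GIT_TRACE_UNTRACKED_STATS=1 git status  # print cache statistics
    git update-index --no-untracked-cache   # disable it again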
The code to read the pack-bitmap wanted to allocate a few hundred
pointers to a structure, but by mistake allocated (and leaked)
enough memory to hold that many actual structures. Correct the
allocation size and also keep the array on the stack, as it is
small enough.
Merge branch 'jk/http-backend-deadlock-2.3' into jk/http-backend-deadlock
* jk/http-backend-deadlock-2.3:
http-backend: spool ref negotiation requests to buffer
t5551: factor out tag creation
http-backend: fix die recursion with custom handler
Merge branch 'jk/http-backend-deadlock-2.2' into jk/http-backend-deadlock-2.3
* jk/http-backend-deadlock-2.2:
http-backend: spool ref negotiation requests to buffer
t5551: factor out tag creation
http-backend: fix die recursion with custom handler
http-backend: spool ref negotiation requests to buffer
When http-backend spawns "upload-pack" to do ref
negotiation, it streams the http request body to
upload-pack, who then streams the http response back to the
client as it reads. In theory, git can go full-duplex; the
client can consume our response while it is still sending
the request. In practice, however, HTTP is a half-duplex
protocol. Even if our client is ready to read and write
simultaneously, we may have other HTTP infrastructure in the
way, including the webserver that spawns our CGI, or any
intermediate proxies.
In at least one documented case[1], this leads to deadlock
when trying a fetch over http. What happens is basically:
1. Apache proxies the request to the CGI, http-backend.
2. http-backend gzip-inflates the data and sends
the result to upload-pack.
3. upload-pack acts on the data and generates output over
the pipe back to Apache. Apache isn't reading because
it's busy writing (step 1).
This works fine most of the time, because the upload-pack
output ends up in a system pipe buffer, and Apache reads
it as soon as it finishes writing. But if both the request
and the response exceed the system pipe buffer size, then we
deadlock (Apache blocks writing to http-backend,
http-backend blocks writing to upload-pack, and upload-pack
blocks writing to Apache).
We need to break the deadlock by spooling either the input
or the output. In this case, it's ideal to spool the input,
because Apache does not start reading either stdout _or_
stderr until we have consumed all of the input. So until we
do so, we cannot even get an error message out to the
client.
The solution is fairly straight-forward: we read the request
body into an in-memory buffer in http-backend, freeing up
Apache, and then feed the data ourselves to upload-pack. But
there are a few important things to note:
1. We limit the in-memory buffer to prevent an obvious
denial-of-service attack. This is a new hard limit on
requests, but it's unlikely to come into play. The
default value is 10MB, which covers even the ridiculous
100,000-ref negotiation in the included test (that
actually caps out just over 5MB). But it's configurable
on the off chance that you don't mind spending some
extra memory to make even ridiculous requests work.
2. We must take care only to buffer when we have to. For
pushes, the incoming packfile may be of arbitrary
size, and we should connect the input directly to
receive-pack. There's no deadlock problem here, though,
because we do not produce any output until the whole
packfile has been read.
For upload-pack's initial ref advertisement, we
similarly do not need to buffer. Even though we may
generate a lot of output, there is no request body at
all (i.e., it is a GET, not a POST).
Test-adapted-from: Dennis Kaarsemaker <dennis@kaarsemaker.net>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
With uploadpack.allowReachableSHA1InWant configuration option set on the
server side, "git fetch" can make a request with a "want" line that names
an object that has not been advertised (likely to have been obtained out
of band or from a submodule pointer). Only objects reachable from the
branch tips, i.e. the union of advertised branches and branches hidden by
transfer.hideRefs, will be processed. Note that there is an associated
cost of having to walk back the history to check the reachability.
This feature can be used to obtain the content of a certain commit whose
sha1 is known, without cloning the whole repository, especially when a
shallow fetch is used. Useful cases include repositories containing large
files in their history, fetching only the data needed for a submodule
checkout, sharing a sha1 without saying which exact branch it belongs to,
and, in Gerrit, thinking in terms of commits instead of change numbers.
(The Gerrit case has already been solved through allowTipSHA1InWant, as
every Gerrit change has a ref.)
Signed-off-by: Fredrik Medley <fredrik.medley@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
upload-pack: prepare to extend allow-tip-sha1-in-want
To allow future extensions, e.g. allowing non-tip sha1, replace the
boolean allow_tip_sha1_in_want variable with the flag-style
allow_request_with_bare_object_name variable.
Signed-off-by: Fredrik Medley <fredrik.medley@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
There was a commented-out (instead of being marked to expect
failure) test that documented a breakage that was fixed since the
test was written; turn it into a proper test.
* sb/t1020-cleanup:
subdirectory tests: code cleanup, uncomment test
The controlling tty-based heuristics to squelch progress output did
not consider that the process may not be talking to a tty at all
(e.g. sending the progress to sideband #2). This is a finishing
touch to a topic that is already in 'master'.
* lm/squelch-bg-progress:
progress: treat "no terminal" as being in the foreground
Filter scripts were run with SIGPIPE disabled on the Git side,
expecting that they may not read what Git feeds them to filter.
We however treated a filter that does not read its input fully
before exiting as an error.
This changes semantics, but arguably in a good way. If a filter
can produce its output without consuming its input using whatever
magic, we now let it do so, instead of diagnosing it as a
programming error.
* jc/ignore-epipe-in-filter:
filter_buffer_or_fd(): ignore EPIPE
copy.c: make copy_fd() report its status silently
There was dead code that used to handle "git pull --tags" and show
a special-cased error message, which was made irrelevant when the
semantics of the option changed back in the Git 1.9 days.
* pt/pull-tags-error-diag:
pull: remove --tags error in no merge candidates case
The ref API did not handle cases where 'refs/heads/xyzzy/frotz' is
removed at the same time as 'refs/heads/xyzzy' is added (or vice
versa) very well.
* mh/ref-directory-file:
reflog_expire(): integrate lock_ref_sha1_basic() errors into ours
ref_transaction_commit(): delete extra "the" from error message
ref_transaction_commit(): provide better error messages
rename_ref(): integrate lock_ref_sha1_basic() errors into ours
lock_ref_sha1_basic(): improve diagnostics for ref D/F conflicts
lock_ref_sha1_basic(): report errors via a "struct strbuf *err"
verify_refname_available(): report errors via a "struct strbuf *err"
verify_refname_available(): rename function
refs: check for D/F conflicts among refs created in a transaction
ref_transaction_commit(): use a string_list for detecting duplicates
is_refname_available(): use dirname in first loop
struct nonmatching_ref_data: store a refname instead of a ref_entry
report_refname_conflict(): inline function
entry_matches(): inline function
is_refname_available(): convert local variable "dirname" to strbuf
is_refname_available(): avoid shadowing "dir" variable
is_refname_available(): revamp the comments
t1404: new tests of ref D/F conflicts within transactions
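The directory/file conflict in question can be exercised as a single
transaction with "git update-ref --stdin", something along these lines
(assuming refs/heads/xyzzy/frotz already exists; all updates read from
stdin are committed together):

    git update-ref --stdin <<\EOF
    delete refs/heads/xyzzy/frotz
    create refs/heads/xyzzy HEAD
    EOF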
The multi-ref transaction support we merged a few releases ago
unnecessarily kept many file descriptors open, risking failure due
to resource exhaustion. This is for the 2.4.x track.
* mh/write-refs-sooner-2.4:
ref_transaction_commit(): fix atomicity and avoid fd exhaustion
ref_transaction_commit(): remove the local flags variable
ref_transaction_commit(): inline call to write_ref_sha1()
rename_ref(): inline calls to write_ref_sha1() from this function
commit_ref_update(): new function, extracted from write_ref_sha1()
write_ref_to_lockfile(): new function, extracted from write_ref_sha1()
t7004: rename ULIMIT test prerequisite to ULIMIT_STACK_SIZE
update-ref: test handling large transactions properly
The "log --decorate" enhancement in Git 2.4 that shows the commit
at the tip of the current branch e.g. "HEAD -> master", did not
work with --decorate=full.
* mg/log-decorate-HEAD:
log: do not shorten decoration names too early
log: decorate HEAD with branch name under --decorate=full, too
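With the fix, the full-format decoration keeps the "HEAD -> " annotation as
well; a sketch (the abbreviated hash and refs are placeholders):

    git log -1 --oneline --decorate=full
    # 1234abc (HEAD -> refs/heads/master, refs/remotes/origin/master) subject line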
Various documentation mark-up fixes to make the output more
consistent in general and also make AsciiDoctor (an alternative
formatter) happier.
* jk/asciidoc-markup-fix:
doc: convert AsciiDoc {?foo} to ifdef::foo[]
doc: put example URLs and emails inside literal backticks
doc: drop backslash quoting of some curly braces
doc: convert \--option to --option
doc/add: reformat `--edit` option
doc: fix length of underlined section-title
doc: fix hanging "+"-continuation
doc: fix unquoted use of "{type}"
doc: fix misrendering due to `single quote'
Just as we have "%(upstream)" to report the "@{upstream}"
for each ref, this patch adds "%(push)" to match "@{push}".
It supports the same tracking format modifiers as upstream
(because you may want to know, for example, which branches
have commits to push).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
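A small usage sketch of the new format atom, together with one of the
tracking modifiers mentioned above:

    git for-each-ref --format='%(refname:short) %(push) %(push:track)' refs/heads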
In a triangular workflow, each branch may have two distinct
points of interest: the @{upstream} that you normally pull
from, and the destination that you normally push to. There
isn't a shorthand for the latter, but it's useful to have.
For instance, you may want to know which commits you haven't
pushed yet:
git log @{push}..
Or as a more complicated example, imagine that you normally
pull changes from origin/master (which you set as your
@{upstream}), and push changes to your own personal fork
(e.g., as myfork/topic). You may push to your fork from
multiple machines, requiring you to integrate the changes
from the push destination, rather than upstream. With this
patch, you can just do:
git rebase @{push}
rather than typing out the full name.
The heavy lifting is all done by branch_get_push; here we
just wire it up to the "@{push}" syntax.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Now that most of the logic for our local get_upstream_branch
has been pushed into the generic branch_get_upstream, we can
fold the remainder into interpret_upstream_mark.
Furthermore, what remains is generic to any branch-related
"@{foo}" we might add in the future, and there's enough
boilerplate that we'd like to reuse it. Let's parameterize
the two operations (parsing the mark and computing its
value) so that we can reuse this for "@{push}" in the near
future.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In a triangular workflow, the place you pull from and the
place you push to may be different. As we have
branch_get_upstream for the former, this patch adds
branch_get_push for the latter (and as the former implements
@{upstream}, so will this implement @{push} in a future
patch).
Note that the memory-handling for the return value bears
some explanation. Some code paths require allocating a new
string, and some let us return an existing string. We should
provide a consistent interface to the caller, so it knows
whether to free the result or not.
We could do so by xstrdup-ing any existing strings, and
having the caller always free. But that makes us
inconsistent with branch_get_upstream, so we would prefer to
simply take ownership of the resulting string. We do so by
storing it inside the "struct branch", just as we do with
the upstream refname (in that case we compute it when the
branch is created, but there's no reason not to just fill
it in lazily in this case).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
remote.c: return upstream name from stat_tracking_info
After calling stat_tracking_info, callers often want to
print the name of the upstream branch (in addition to the
tracking count). To do this, they have to access
branch->merge->dst[0] themselves. This is not wrong, as the
return value from stat_tracking_info tells us whether we
have an upstream branch or not. But it is a bit leaky, as we
make an assumption about how it calculated the upstream
name.
Instead, let's add an out-parameter that lets the caller
know the upstream name we found.
As a bonus, we can get rid of the unusual tri-state return
from the function. We no longer need to use it to
differentiate between "no tracking config" and "tracking ref
does not exist" (since you can check the upstream_name for
that), so we can just use the usual 0/-1 convention for
success/error.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
remote.c: untangle error logic in branch_get_upstream
The error-diagnosis logic in branch_get_upstream was copied
straight from sha1_name.c in the previous commit. However,
because we check all error cases upfront and then later
diagnose them, the logic is a bit tangled. In particular:
- if branch->merge[0] is NULL, we may end up dereferencing
it for an error message (in practice, it should never be
NULL, so this is probably not a triggerable bug).
- We may enter the code path because branch->merge[0]->dst
is NULL, but we then start our error diagnosis by
checking whether our local branch exists. But that is
only relevant to diagnosing missing merge config, not a
missing tracking ref; our diagnosis may hide the real
problem.
Instead, let's just use a sequence of "if" blocks to check
for each error type, diagnose it, and return NULL.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Commit f86a374 (pack-bitmap.c: fix a memleak, 2015-03-30)
noticed that we leak the "result" bitmap. But we should use
"bitmap_free" rather than straight "free", as the former
remembers to free the bitmap array pointed to by the struct.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Fix remaining instances where "pack-file" is used instead of
"packfile". Some places still use "pack-file"; this is the case
when we explicitly refer to a file with a ".pack" extension as
opposed to a data source providing a pack data stream.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
rebase -i: fix post-rewrite hook with failed exec command
Usually, when 'git rebase' stops before completing the rebase, it is to
give the user an opportunity to edit a commit (e.g. with the 'edit'
command). In such cases, 'git rebase' leaves the sha1 of the commit being
rewritten in "$state_dir"/stopped-sha, and subsequent 'git rebase
--continue' will call the post-rewrite hook with this sha1 as <old-sha1>
argument to the post-rewrite hook.
The case of 'git rebase' stopping because of a failed 'exec' command is
different: it gives the opportunity to the user to examine or fix the
failure, but does not stop saying "here's a commit to edit, use
--continue when you're done". So, there's no reason to call the
post-rewrite hook for 'exec' commands. If the user did rewrite the
commit, it would be with 'git commit --amend' which already called the
post-rewrite hook.
Fix the behavior to leave no stopped-sha file in case of failed exec
command, and teach 'git rebase --continue' to skip record_in_rewritten if
no stopped-sha file is found.
Signed-off-by: Matthieu Moy <Matthieu.Moy@imag.fr>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
rebase -i: demonstrate incorrect behavior of post-rewrite
The 'exec' command is sending the current commit to stopped-sha, which is
supposed to contain the original commit (before rebase). As a result, if
an 'exec' command fails, the next 'git rebase --continue' will send the
current commit as <old-sha1> to the post-rewrite hook.
'git help' shows common commands in alphabetical order:
The most commonly used git commands are:
add Add file contents to the index
bisect Find by binary search the change that introduced a bug
branch List, create, or delete branches
checkout Checkout a branch or paths to the working tree
clone Clone a repository into a new directory
commit Record changes to the repository
[...]
without any indication of how commands relate to high-level
concepts or each other. Revise the output to explain their relationship
with the typical Git workflow:
These are common Git commands used in various situations:
start a working area (see also: git help tutorial)
clone Clone a repository into a new directory
init Create an empty Git repository or reinitialize [...]
work on the current change (see also: git help everyday)
add Add file contents to the index
reset Reset current HEAD to the specified state
examine the history and state (see also: git help revisions)
log Show commit logs
status Show the working tree status
[...]
Helped-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: Ramsay Jones <ramsay@ramsay1.demon.co.uk>
Signed-off-by: Sébastien Guimmara <sebastien.guimmara@gmail.com>
Reviewed-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
command-list.sh, retired in the previous patch, was the only
consumer of the "common" tag, so drop this now-unnecessary
attribute.
before:
git-add mainporcelain common worktree
after:
git-add mainporcelain worktree
Helped-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: Sébastien Guimmara <sebastien.guimmara@gmail.com>
Reviewed-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Parse the group block to create the array of group descriptions:
static char *common_cmd_groups[] = {
N_("starting a working area"),
N_("working on the current change"),
N_("working with others"),
N_("examining the history and state"),
N_("growing, marking and tweaking your history"),
};
then map each element of common_cmds[] to a group via its index:
static struct cmdname_help common_cmds[] = {
{"add", N_("Add file contents to the index"), 1},
{"branch", N_("List, create, or delete branches"), 4},
{"checkout", N_("Checkout a branch or paths to the ..."), 4},
{"clone", N_("Clone a repository into a new directory"), 0},
{"commit", N_("Record changes to the repository"), 4},
...
};
so that 'git help' can print those commands grouped by theme.
Only commands tagged with an attribute from the group block are emitted to
common_cmds[].
[commit message by Sébastien Guimmara <sebastien.guimmara@gmail.com>]
Signed-off-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: Sébastien Guimmara <sebastien.guimmara@gmail.com>
Reviewed-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The ultimate goal is for "git help" to display common commands in
groups rather than alphabetically. As a first step, define the
groups in a new block, and then assign a group to each
common command.
Add a block at the beginning of command-list.txt:
init start a working area (see also: git help tutorial)
worktree work on the current change (see also:[...]
info examine the history and state (see also: git [...]
history grow, mark and tweak your history
remote collaborate (see also: git help workflows)
storing information about the groups of common commands, then map
each common command to a group:
git-add mainporcelain common worktree
Helped-by: Eric Sunshine <sunshine@sunshineco.com>
Helped-by: Junio C Hamano <gitster@pobox.com>
Helped-by: Emma Jane Hogbin Westby <emma.westby@gmail.com>
Signed-off-by: Sébastien Guimmara <sebastien.guimmara@gmail.com>
Reviewed-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
command-list: prepare machinery for upcoming "common groups" section
The ultimate goal is for "git help" to classify common commands by
group. Toward this end, a subsequent patch will add a new "common
groups" section to command-list.txt preceding the actual command list.
As preparation, teach existing command-list.txt parsing machinery, which
doesn't care about grouping, to skip over this upcoming "common groups"
section.
Signed-off-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: Sébastien Guimmara <sebastien.guimmara@gmail.com>
Reviewed-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
remote.c: report specific errors from branch_get_upstream
When the previous commit introduced the branch_get_upstream
helper, there was one call-site that could not be converted:
the one in sha1_name.c, which gives detailed error messages
for each possible failure.
Let's teach the helper to optionally report these specific
errors. This lets us convert another callsite, and means we
can use the helper in other locations that want to give the
same error messages.
The logic and error messages come straight from sha1_name.c,
with the exception that we start each error with a lowercase
letter, as is our usual style (note that a few tests need to be
updated as a result).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
All of the information needed to find the @{upstream} of a
branch is included in the branch struct, but callers have to
navigate a series of possible-NULL values to get there.
Let's wrap that logic up in an easy-to-read helper.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Before the previous commit, we had to make sure that
read_config() was called before entering remote_get_1,
because we needed to pass pushremote_name by value. But now
that we pass a function, we can let remote_get_1 handle
loading the config itself, turning our wrappers into true
one-liners.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When remote.c loads its config, it records the
branch.*.pushremote for the current branch along with the
global remote.pushDefault value, and then binds them into a
single value: the default push for the current branch. We
then pass this value (which may be NULL) to remote_get_1
when looking up a remote for push.
This has a few downsides:
1. It's confusing. The early-binding of the "current
value" led to bugs like the one fixed by 98b406f
(remote: handle pushremote config in any order,
2014-02-24). And the fact that pushremotes fall back to
ordinary remotes is not explicit at all; it happens
because remote_get_1 cannot tell the difference between
"we are not asking for the push remote" and "there is
no push remote configured".
2. It throws away intermediate data. After read_config()
finishes, we have no idea what the value of
remote.pushDefault was, because the string has been
overwritten by the current branch's
branch.*.pushremote.
3. It doesn't record other data. We don't note the
branch.*.pushremote value for anything but the current
branch.
Let's make this more like the fetch-remote config. We'll
record the pushremote for each branch, and then explicitly
compute the correct remote for the current branch at the
time of reading.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
remote.c: hoist branch.*.remote lookup out of remote_get_1
We'll want to use this logic as a fallback when looking up
the pushremote, so let's pull it out into its own function.
We don't technically need to make this available outside of
remote.c, but doing so will provide a consistent API with
pushremote_for_branch, which we will add later.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
remote.c: drop "remote" pointer from "struct branch"
When we create each branch struct, we fill in the
"remote_name" field from the config, and then fill in the
actual "remote" field (with a "struct remote") based on that
name. However, it turns out that nobody really cares about
the latter field. The only two sites that access it at all
are:
1. git-merge, which uses it to notice when the branch does
not have a remote defined. But we can easily replace this
with looking at remote_name instead.
2. remote.c itself, when setting up the @{upstream} merge
config. But we don't need to save the "remote" in the
"struct branch" for that; we can just look it up for
the duration of the operation.
So there is no need to have both fields; they are redundant
with each other (the struct remote contains the name, or you
can look up the struct from the name). It would be nice to
simplify this, especially as we are going to add matching
pushremote config in a future patch (and it would be nice to
keep them consistent).
So which one do we keep and which one do we get rid of?
If we had a lot of callers accessing the struct, it would be
more efficient to keep it (since you have to do a lookup to
go from the name to the struct, but not vice versa). But we
don't have a lot of callers; we have exactly one, so
efficiency doesn't matter. We can decide this based on
simplicity and readability.
And the meaning of the struct value is somewhat unclear. Is
it always the remote matching remote_name? If remote_name is
NULL (i.e., no per-branch config), does the struct fall back
to the "origin" remote, or is it also NULL? These questions
will get even more tricky with pushremotes, whose fallback
behavior is more complicated. So let's just store the name,
which pretty clearly represents the branch.*.remote config.
Any lookup or fallback behavior can then be implemented in
helper functions.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When we call branch_get() to lookup or create a "struct
branch", we make sure the "merge" field is filled in so that
callers can access it. But the conditions under which we do
so are a little confusing, and can lead to two funny
situations:
1. If there's no branch.*.remote config, we cannot provide
branch->merge (because it is really just an application
of branch.*.merge to our remote's refspecs). But
branch->merge_nr may be non-zero, leading callers to
believe they can access branch->merge (e.g., in
branch_merge_matches and elsewhere).
It doesn't look like this can cause a segfault in
practice, as most code paths dealing with merge config
will bail early if there is no remote defined. But it's
a bit of a dangerous construct.
We can fix this by setting merge_nr to "0" explicitly
when we realize that we have no merge config. Note that
merge_nr also counts the "merge_name" fields (which we
_do_ have; that's how merge_nr got incremented), so we
will "lose" access to them, in the sense that we forget
how many we had. But no callers actually care; we use
merge_name only while iteratively reading the config,
and then convert it to the final "merge" form the first
time somebody calls branch_get().
2. We set up the "merge" field every time branch_get is
called, even if it has already been done. This leaks
memory.
It's not a big deal in practice, since most code paths
will access only one branch, or perhaps each branch
only one time. But if you want to be pathological, you
can leak arbitrary memory with:
yes @{upstream} | head -1000 | git rev-list --stdin
We can fix this by skipping setup when branch->merge is
already non-NULL.
In addition to those two fixes, this patch pushes the "do we
need to setup merge?" logic down into set_merge, where it is
a bit easier to follow.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If you run "git stash --help", you get the help for stash
(this magic is done by the git wrapper itself). But if you
run "git stash drop --help", you get an error. We
cannot show help specific to "stash drop", of course, but we
can at least give the user the normal stash manpage.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The option parser for git-stash stuffs unknown flags into
the $FLAGS variable, where they can be accessed by the
individual commands. However, most commands do not even look
at these extra flags, leading to unexpected results like
this:
We should notice the extra flags and bail. Rather than
annotate each command to reject a non-empty $FLAGS variable,
we can notice that "stash show" is the only command that
actually _wants_ arbitrary flags. So we switch the default
mode to reject unknown flags, and let stash_show() opt into
the feature.
Reported-by: Vincent Legoll <vincent.legoll@gmail.com>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
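After this change the behavior is roughly as follows (a sketch; the bogus
option name is made up):

    git stash show -p stash@{0}      # "show" still accepts arbitrary flags (here, diff options)
    git stash drop --bogus-option    # other subcommands now reject unknown flags instead of ignoring them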
This wires the in-repo-symlink following code through to the cat-file
builtin. In the event of an out-of-repo link, cat-file will print
the link in a new format.
Signed-off-by: David Turner <dturner@twopensource.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
sha1_name: get_sha1_with_context learns to follow symlinks
Wire up get_sha1_with_context to call get_tree_entry_follow_symlinks
when GET_SHA1_FOLLOW_SYMLINKS is passed in flags. G_S_FOLLOW_SYMLINKS
is incompatible with G_S_ONLY_TO_DIE because the diagnosis
that ONLY_TO_DIE triggers does not at present consider symlinks, and
it would be a significant amount of additional code to allow it to
do so.
Signed-off-by: David Turner <dturner@twopensource.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Add a new function, get_tree_entry_follow_symlinks, to tree-walk.[ch].
The function is not yet used. It will be used to implement git
cat-file --batch --follow-symlinks.
The function locates an object by path, following symlinks in the
repository. If the symlinks lead outside the repository, the function
reports this to the caller.
Signed-off-by: David Turner <dturner@twopensource.com>
Signed-off-by: Ramsay Jones <ramsay@ramsay1.demon.co.uk>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Helped-by: Philip Oakley <philipoakley@iee.org>
Helped-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Helped-by: Sebastian Schuberth <sschuberth@gmail.com>
Helped-by: SZEDER Gábor <szeder@ira.uka.de>
Signed-off-by: David Aguilar <davvid@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
mergetool--lib: set IFS for difftool and mergetool
git-sh-setup sets IFS but it is not used by git-difftool--helper.
Set IFS in git-mergetool--lib so that the mergetool scriptlets,
difftool, and mergetool do not need to do so.
Signed-off-by: David Aguilar <davvid@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>