Code clean-up for completion script (in contrib/).
* sg/completion-config:
completion: simplify query for config variables
completion: add a helper function to get config variables
--- /dev/null
+Git v2.4.1 Release Notes
+========================
+
+Fixes since v2.4
+----------------
+
+ * The usual "git diff" when seeing a file turning into a directory
+ showed a patchset to remove the file and create all files in the
+ directory, but "git diff --no-index" simply refused to work. Also,
+ when asked to compare a file and a directory, imitate POSIX "diff"
+ and compare the file with the file with the same name in the
+ directory, instead of refusing to run.
+
+ * The default $HOME/.gitconfig file created when "git config --global"
+ edits it had incorrectly spelled user.name and user.email
+ entries in it.
+
+ * "git commit --date=now" or anything that relies on approxidate lost
+ the daylight-saving-time offset.
+
+ * "git cat-file bl $blob" failed to barf even though there is no
+ object type that is "bl".
+
+ * Teach the codepaths that read .gitignore and .gitattributes files
+ that these files encoded in UTF-8 may have UTF-8 BOM marker at the
+ beginning; this makes it in line with what we do for configuration
+ files already.
+
+ * Access to objects in repositories that borrow from another one on a
+ slow NFS server unnecessarily got more expensive due to recent code
+ becoming more cautious in a naive way not to lose objects to pruning.
+
+ * We avoid setting core.worktree when the repository location is the
+ ".git" directory directly at the top level of the working tree, but
+ the code misdetected the case in which the working tree is at the
+ root level of the filesystem (which arguably is a silly thing to
+ do, but still valid).
+
+Also contains typofixes, documentation updates and trivial code
+clean-ups.
* Tweak the sample "store" backend of the credential helper to honor
XDG configuration file locations when specified.
+ * A heuristic to help the "git <cmd> <revs> <pathspec>" command line
+ convention to catch mistyped paths is to make sure all the non-rev
+ parameters in the later part of the command line are names of the
+ files in the working tree, but that means "git grep $str -- \*.c"
+ must always be disambiguated with "--", because nobody sane will
+ create a file whose name literally is asterisk-dot-see. Loosen the
+ heuristic to declare that with a wildcard string the user likely
+ meant to give us a pathspec.
+
+ * "git merge FETCH_HEAD" learned that the previous "git fetch" could
+ be to create an Octopus merge, i.e. recording multiple branches
+ that are not marked as "not-for-merge"; this allows us to lose an
+ old style invocation "git merge <msg> HEAD $commits..." in the
+ implementation of "git pull" script; the old style syntax can now
+ be deprecated.
+
+ * Help us find broken test scripts that split the body part of the
+ test by mistaken use of the wrong kind of quotes.
+ (merge d93d5d5 jc/test-prereq-validate later to maint).
+
+ * Developer support to automatically detect broken &&-chain in the
+ test scripts is now turned on by default.
+ (merge 92b269f jk/test-chain-lint later to maint).
+
Performance, Internal Implementation, Development Support etc.
to read packed-refs file revealed that the former is unacceptably
inefficient.
+ * The refs API uses ref_lock struct which had its own "int fd", even
+ though the same file descriptor was in the lock struct it contains.
+ Clean-up the code to lose this redundant field.
+
+ * Add the "--allow-unknown-type" option to "cat-file" to allow
+ inspecting loose objects of an experimental or a broken type.
+
* Many long-running operations show progress eye-candy, even when
they are later backgrounded. Hide the eye-candy when the process
is sent to the background instead.
directory, instead of refusing to run.
(merge 0615173 jc/diff-no-index-d-f later to maint).
+ * "git rebase -i" moved the "current" command from "todo" to "done" a
+ bit too prematurely, losing a step when a "pick" did not even start.
+ (merge 8cbc57c ph/rebase-i-redo later to maint).
+
+ * The connection initiation code for "ssh" transport tried to absorb
+ differences between the stock "ssh" and Putty-supplied "plink" and
+ its derivatives, but the logic to tell that we are using "plink"
+ variants was too loose and falsely triggered when "plink" appeared
+ anywhere in the path (e.g. "/home/me/bin/uplink/ssh").
+ (merge baaf233 bc/connect-plink later to maint).
+
+ * "git stash pop/apply" forgot to make sure that not just the working
+ tree is clean but also the index is clean. The latter is important
+ as a stash application can conflict and the index will be used for
+ conflict resolution.
+ (merge ed178ef jk/stash-require-clean-index later to maint).
+
+ * We have prepended $GIT_EXEC_PATH and the path "git" is installed in
+ (typically "/usr/bin") to $PATH when invoking subprograms and hooks
+ for almost eternity, but the original use case the latter tried to
+ support was semi-bogus (i.e. install git to /opt/foo/git and run it
+ without having /opt/foo on $PATH), and more importantly it has
+ become less and less relevant as Git grew more mainstream (i.e. the
+ users would _want_ to have it on their $PATH). Stop prepending the
+ path in which "git" is installed to users' $PATH, as that would
+ interfere with the command search order people depend on (e.g. they may
+ not like versions of programs that are unrelated to Git in /usr/bin
+ and want to override them by having different ones in /usr/local/bin
+ and have the latter directory earlier in their $PATH).
+ (merge a0b4507 jk/git-no-more-argv0-path-munging later to maint).
+
+ * core.excludesfile (defaulting to $XDG_HOME/git/ignore) is supposed
+ to be overridden by repository-specific .git/info/exclude file, but
+ the order was swapped from the beginning. This belatedly fixes it.
+ (merge 099d2d8 jc/gitignore-precedence later to maint).
+
+ * After "git add -N", the path appeared in output of "git diff HEAD"
+ and "git diff --cached HEAD", leading "git status" to classify it
+ as "Changes to be committed". Such a path, however, is not yet to
+ be scheduled to be committed. "git diff" showed the change to the
+ path as modification, not as a "new file", in the header of its
+ output.
+
+ Treat such paths as "yet to be added to the index but Git already
+ know about them"; "git diff HEAD" and "git diff --cached HEAD"
+ should not talk about them, and "git diff" should show them as new
+ files yet to be added to the index.
+ (merge d95d728 nd/diff-i-t-a later to maint).
+
* Code cleanups and documentation updates.
(merge 0269f96 mm/usage-log-l-can-take-regex later to maint).
(merge 64f2589 nd/t1509-chroot-test later to maint).
(merge 846e5df pt/xdg-config-path later to maint).
(merge 1154aa4 jc/plug-fmt-merge-msg-leak later to maint).
(merge 319b678 jk/sha1-file-reduce-useless-warnings later to maint).
+ (merge 9a35c14 fg/document-commit-message-stripping later to maint).
If set, store cookies received during requests to the file specified by
http.cookieFile. Has no effect if http.cookieFile is unset.
+http.sslCipherList::
+ A list of SSL ciphers to use when negotiating an SSL connection.
+ The available ciphers depend on whether libcurl was built against
+ NSS or OpenSSL and the particular configuration of the crypto
+ library in use. Internally this sets the 'CURLOPT_SSL_CIPHER_LIST'
+ option; see the libcurl documentation for more details on the format
+ of this list.
++
+Can be overridden by the 'GIT_SSL_CIPHER_LIST' environment variable.
+To force git to use libcurl's default cipher list and ignore any
+explicit http.sslCipherList option, set 'GIT_SSL_CIPHER_LIST' to the
+empty string.
+
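As an illustration only (not part of this patch; the cipher string and remote
below are placeholders), a user could pin a cipher list globally and fall back
to libcurl's default list for a single command:

------------
# example value only; valid cipher strings depend on the local crypto library
git config --global http.sslCipherList 'DEFAULT:!EXPORT:!LOW'

# an empty GIT_SSL_CIPHER_LIST makes this one fetch use libcurl's default list
GIT_SSL_CIPHER_LIST= git fetch origin
------------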
http.sslVerify::
Whether to verify the SSL certificate when fetching or pushing
over HTTPS. Can be overridden by the 'GIT_SSL_NO_VERIFY' environment
remote.<name>.receivepack::
The default program to execute on the remote side when pushing. See
- option \--receive-pack of linkgit:git-push[1].
+ option --receive-pack of linkgit:git-push[1].
remote.<name>.uploadpack::
The default program to execute on the remote side when fetching. See
- option \--upload-pack of linkgit:git-fetch-pack[1].
+ option --upload-pack of linkgit:git-fetch-pack[1].
remote.<name>.tagOpt::
- Setting this value to \--no-tags disables automatic tag following when
- fetching from remote <name>. Setting it to \--tags will fetch every
+ Setting this value to --no-tags disables automatic tag following when
+ fetching from remote <name>. Setting it to --tags will fetch every
tag from remote <name>, even if they are not reachable from remote
branch heads. Passing these flags directly to linkgit:git-fetch[1] can
- override this setting. See options \--tags and \--no-tags of
+ override this setting. See options --tags and --no-tags of
linkgit:git-fetch[1].
remote.<name>.vcs::
Any diff-generating command can take the `-c` or `--cc` option to
produce a 'combined diff' when showing a merge. This is the default
format when showing merges with linkgit:git-diff[1] or
-linkgit:git-show[1]. Note also that you can give the `-m' option to any
+linkgit:git-show[1]. Note also that you can give the `-m` option to any
of these commands to force generation of diffs with individual parents
of a merge.
-u::
--patch::
Generate patch (see section on generating patches).
- {git-diff? This is the default.}
+ifdef::git-diff[]
+ This is the default.
+endif::git-diff[]
endif::git-format-patch[]
-s::
ifndef::git-format-patch[]
--raw::
Generate the raw format.
- {git-diff-core? This is the default.}
+ifdef::git-diff-core[]
+ This is the default.
+endif::git-diff-core[]
endif::git-format-patch[]
ifndef::git-format-patch[]
initial command menu and directly jumps to the `patch` subcommand.
See ``Interactive mode'' for details.
--e, \--edit::
+-e::
+--edit::
Open the diff vs. the index in an editor and let the user
edit it. After the editor was closed, adjust the hunk headers
and apply the patch to the index.
SYNOPSIS
--------
[verse]
-'git cat-file' (-t | -s | -e | -p | <type> | --textconv ) <object>
+'git cat-file' (-t [--allow-unknown-type]| -s [--allow-unknown-type]| -e | -p | <type> | --textconv ) <object>
'git cat-file' (--batch | --batch-check) < <list-of-objects>
DESCRIPTION
not be combined with any other options or arguments. See the
section `BATCH OUTPUT` below for details.
+--allow-unknown-type::
+ Allow -s or -t to query broken/corrupt objects of unknown type.
+
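A hypothetical session (the object name and the output are made up) showing how
the new option lets -t and -s report on an object whose recorded type is not
one of the known ones:

------------
$ git cat-file -t --allow-unknown-type <corrupt-object>
bogus
$ git cat-file -s --allow-unknown-type <corrupt-object>
6
------------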
OUTPUT
------
If '-t' is specified, one of the <type>.
+
--
strip::
- Strip leading and trailing empty lines, trailing whitespace, and
- #commentary and collapse consecutive empty lines.
+ Strip leading and trailing empty lines, trailing whitespace,
+ commentary and collapse consecutive empty lines.
whitespace::
Same as `strip` except #commentary is not removed.
verbatim::
--verbose::
Show unified diff between the HEAD commit and what
would be committed at the bottom of the commit message
- template. Note that this diff output doesn't have its
- lines prefixed with '#'.
+ template to help the user describe the commit by reminding
+ what changes the commit has.
+ Note that this diff output doesn't have its
+ lines prefixed with '#'. This diff will not be a part
+ of the commit message.
+
If specified twice, show in addition the unified diff between
what would be committed and the worktree files, i.e. the unstaged
have been completed, or to save the marks table across
incremental runs. As <file> is only opened and truncated
at completion, the same path can also be safely given to
- \--import-marks.
+ --import-marks.
The file will not be written if no new object has been
marked/exported.
--import-marks=<file>::
Before processing any input, load the marks specified in
<file>. The input file must exist, must be readable, and
- must use the same format as produced by \--export-marks.
+ must use the same format as produced by --export-marks.
+
Any commits that have already been marked will not be exported again.
-If the backend uses a similar \--import-marks file, this allows for
+If the backend uses a similar --import-marks file, this allows for
incremental bidirectional exporting of the repository by keeping the
marks the same across runs.
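A minimal sketch of such an incremental run, assuming a hypothetical importer
for the other system named `their-fast-import` and a marks file kept between
runs:

------------
git fast-export --import-marks=git.marks --export-marks=git.marks master \
	| their-fast-import
------------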
--quiet::
Disable all non-fatal output, making fast-import silent when it
is successful. This option disables the output shown by
- \--stats.
+ --stats.
--stats::
Display some basic statistics about the objects fast-import has
created, the packfiles they were stored into, and the
memory used by fast-import during this run. Showing this output
- is currently the default, but can be disabled with \--quiet.
+ is currently the default, but can be disabled with --quiet.
Options for Frontends
~~~~~~~~~~~~~~~~~~~~~
have been completed, or to save the marks table across
incremental runs. As <file> is only opened and truncated
at checkpoint (or completion) the same path can also be
- safely given to \--import-marks.
+ safely given to --import-marks.
--import-marks=<file>::
Before processing any input, load the marks specified in
<file>. The input file must exist, must be readable, and
- must use the same format as produced by \--export-marks.
+ must use the same format as produced by --export-marks.
Multiple options may be supplied to import more than one
set of marks. If a mark is defined to different values,
the last file wins.
prints a warning message. fast-import will always attempt to update all
branch refs, and does not stop on the first failure.
-Branch updates can be forced with \--force, but it's recommended that
-this only be used on an otherwise quiet repository. Using \--force
+Branch updates can be forced with --force, but it's recommended that
+this only be used on an otherwise quiet repository. Using --force
is not necessary for an initial import into an empty repository.
~~~~~~~~~~~~
The following date formats are supported. A frontend should select
the format it will use for this import by passing the format name
-in the \--date-format=<fmt> command-line option.
+in the --date-format=<fmt> command-line option.
`raw`::
This is the Git native format and is `<time> SP <offutc>`.
- It is also fast-import's default format, if \--date-format was
+ It is also fast-import's default format, if --date-format was
not specified.
+
The time of the event is specified by `<time>` as the number of
of bytes, except `LT`, `GT` and `LF`. `<name>` is typically UTF-8 encoded.
The time of the change is specified by `<when>` using the date format
-that was selected by the \--date-format=<fmt> command-line option.
+that was selected by the --date-format=<fmt> command-line option.
See ``Date Formats'' above for the set of supported formats, and
their syntax.
See `filemodify` above for a detailed description of `<path>`.
`filecopy`
-^^^^^^^^^^^^
+^^^^^^^^^^
Recursively copies an existing file or subdirectory to a different
location within the branch. The existing file or directory must
exist. If the destination exists it will be completely replaced
....
Note that fast-import automatically switches packfiles when the current
-packfile reaches \--max-pack-size, or 4 GiB, whichever limit is
+packfile reaches --max-pack-size, or 4 GiB, whichever limit is
smaller. During an automatic packfile switch fast-import does not update
the branch refs, tags or marks.
Use One Mark Per Commit
~~~~~~~~~~~~~~~~~~~~~~~
When doing a repository conversion, use a unique mark per commit
-(`mark :<n>`) and supply the \--export-marks option on the command
+(`mark :<n>`) and supply the --export-marks option on the command
line. fast-import will dump a file which lists every mark and the Git
object SHA-1 that corresponds to it. If the frontend can tie
the marks back to the source repository, it is easy to verify the
However repacking the repository is necessary to improve data
locality and access performance. It can also take hours on extremely
-large projects (especially if -f and a large \--window parameter is
+large projects (especially if -f and a large --window parameter is
used). Since repacking is safe to run alongside readers and writers,
run the repack in the background and let it finish when it finishes.
There is no reason to wait to explore your new Git project!
~~~~~~~~~~~~~~~~~~~~~~~~~
If you are repacking very old imported data (e.g. older than the
last year), consider expending some extra CPU time and supplying
-\--window=50 (or higher) when you run 'git repack'.
+--window=50 (or higher) when you run 'git repack'.
This will take longer, but will also produce a smaller packfile.
You only need to expend the effort once, and everyone using your
project will benefit from the smaller repository.
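For example, such a one-time heavier repack could look like this (the flag
values are only a suggestion):

------------
git repack -a -d -f --window=50 --depth=100
------------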
fast-import automatically moves active branches to inactive status based on
a simple least-recently-used algorithm. The LRU chain is updated on
each `commit` command. The maximum number of active branches can be
-increased or decreased on the command line with \--active-branches=.
+increased or decreased on the command line with --active-branches=.
per active tree
~~~~~~~~~~~~~~~
the things up in .bash_profile).
--exec=<git-upload-pack>::
- Same as \--upload-pack=<git-upload-pack>.
+ Same as --upload-pack=<git-upload-pack>.
--depth=<n>::
Limit fetching to ancestor-chains not longer than n.
EXAMPLES
--------
-All of the following examples map 'http://$hostname/git/foo/bar.git'
-to '/var/www/git/foo/bar.git'.
+All of the following examples map `http://$hostname/git/foo/bar.git`
+to `/var/www/git/foo/bar.git`.
Apache 2.x::
Ensure mod_cgi, mod_alias, and mod_env are enabled, set
If no commit is given from the command line, merge the remote-tracking
branches that the current branch is configured to use as its upstream.
See also the configuration section of this manual page.
++
+When `FETCH_HEAD` (and no other commit) is specified, the branches
+recorded in the `.git/FETCH_HEAD` file by the previous invocation
+of `git fetch` for merging are merged to the current branch.
PRE-MERGE CHECKS
--shallow::
Optimize a pack that will be provided to a client with a shallow
- repository. This option, combined with \--thin, can result in a
+ repository. This option, combined with --thin, can result in a
smaller pack at the cost of speed.
--delta-base-offset::
--[no-]verify::
Toggle the pre-push hook (see linkgit:githooks[5]). The
- default is \--verify, giving the hook a chance to prevent the
- push. With \--no-verify, the hook is bypassed completely.
+ default is --verify, giving the hook a chance to prevent the
+ push. With --no-verify, the hook is bypassed completely.
include::urls-remotes.txt[]
If the upstream branch already contains a change you have made (e.g.,
because you mailed a patch which was applied upstream), then that commit
will be skipped. For example, running `git rebase master` on the
-following history (in which A' and A introduce the same set of changes,
+following history (in which `A'` and `A` introduce the same set of changes,
but have different committer information):
------------
SYNOPSIS
--------
[verse]
-'git rev-list' [ \--max-count=<number> ]
- [ \--skip=<number> ]
- [ \--max-age=<timestamp> ]
- [ \--min-age=<timestamp> ]
- [ \--sparse ]
- [ \--merges ]
- [ \--no-merges ]
- [ \--min-parents=<number> ]
- [ \--no-min-parents ]
- [ \--max-parents=<number> ]
- [ \--no-max-parents ]
- [ \--first-parent ]
- [ \--remove-empty ]
- [ \--full-history ]
- [ \--not ]
- [ \--all ]
- [ \--branches[=<pattern>] ]
- [ \--tags[=<pattern>] ]
- [ \--remotes[=<pattern>] ]
- [ \--glob=<glob-pattern> ]
- [ \--ignore-missing ]
- [ \--stdin ]
- [ \--quiet ]
- [ \--topo-order ]
- [ \--parents ]
- [ \--timestamp ]
- [ \--left-right ]
- [ \--left-only ]
- [ \--right-only ]
- [ \--cherry-mark ]
- [ \--cherry-pick ]
- [ \--encoding=<encoding> ]
- [ \--(author|committer|grep)=<pattern> ]
- [ \--regexp-ignore-case | -i ]
- [ \--extended-regexp | -E ]
- [ \--fixed-strings | -F ]
- [ \--date=(local|relative|default|iso|iso-strict|rfc|short) ]
- [ [ \--objects | \--objects-edge | \--objects-edge-aggressive ]
- [ \--unpacked ] ]
- [ \--pretty | \--header ]
- [ \--bisect ]
- [ \--bisect-vars ]
- [ \--bisect-all ]
- [ \--merge ]
- [ \--reverse ]
- [ \--walk-reflogs ]
- [ \--no-walk ] [ \--do-walk ]
- [ \--use-bitmap-index ]
+'git rev-list' [ --max-count=<number> ]
+ [ --skip=<number> ]
+ [ --max-age=<timestamp> ]
+ [ --min-age=<timestamp> ]
+ [ --sparse ]
+ [ --merges ]
+ [ --no-merges ]
+ [ --min-parents=<number> ]
+ [ --no-min-parents ]
+ [ --max-parents=<number> ]
+ [ --no-max-parents ]
+ [ --first-parent ]
+ [ --remove-empty ]
+ [ --full-history ]
+ [ --not ]
+ [ --all ]
+ [ --branches[=<pattern>] ]
+ [ --tags[=<pattern>] ]
+ [ --remotes[=<pattern>] ]
+ [ --glob=<glob-pattern> ]
+ [ --ignore-missing ]
+ [ --stdin ]
+ [ --quiet ]
+ [ --topo-order ]
+ [ --parents ]
+ [ --timestamp ]
+ [ --left-right ]
+ [ --left-only ]
+ [ --right-only ]
+ [ --cherry-mark ]
+ [ --cherry-pick ]
+ [ --encoding=<encoding> ]
+ [ --(author|committer|grep)=<pattern> ]
+ [ --regexp-ignore-case | -i ]
+ [ --extended-regexp | -E ]
+ [ --fixed-strings | -F ]
+ [ --date=(local|relative|default|iso|iso-strict|rfc|short) ]
+ [ [ --objects | --objects-edge | --objects-edge-aggressive ]
+ [ --unpacked ] ]
+ [ --pretty | --header ]
+ [ --bisect ]
+ [ --bisect-vars ]
+ [ --bisect-all ]
+ [ --merge ]
+ [ --reverse ]
+ [ --walk-reflogs ]
+ [ --no-walk ] [ --do-walk ]
+ [ --use-bitmap-index ]
<commit>... [ \-- <paths>... ]
DESCRIPTION
+
If you want to make sure that the output actually names an object in
your object database and/or can be used as a specific type of object
-you require, you can add "\^{type}" peeling operator to the parameter.
+you require, you can add the `^{type}` peeling operator to the parameter.
For example, `git rev-parse "$VAR^{commit}"` will make sure `$VAR`
names an existing object that is a commit-ish (i.e. a commit, or an
annotated tag that points at a commit). To make sure that `$VAR`
form as close to the original input as possible.
--symbolic-full-name::
- This is similar to \--symbolic, but it omits input that
+ This is similar to --symbolic, but it omits input that
are not refs (i.e. branch or tag names; or more
explicitly disambiguating "heads/master" form, when you
want to name the "master" branch when there is an
a directory on the default $PATH.
--exec=<git-receive-pack>::
- Same as \--receive-pack=<git-receive-pack>.
+ Same as --receive-pack=<git-receive-pack>.
--all::
Instead of explicitly specifying which refs to update,
For tags, it shows the tag message and the referenced objects.
For trees, it shows the names (equivalent to 'git ls-tree'
-with \--name-only).
+with --name-only).
For plain blobs, it shows the plain contents.
Given the following noisy input with '$' indicating the end of a line:
---------
+---------
|A brief introduction $
| $
|$
Use 'git stripspace' with no arguments to obtain:
---------
+---------
|A brief introduction$
|$
|A new paragraph$
Use 'git stripspace --strip-comments' to obtain:
---------
+---------
|A brief introduction$
|$
|A new paragraph$
--username=<user>;;
For transports that SVN handles authentication for (http,
https, and plain svn), specify the username. For other
- transports (e.g. svn+ssh://), you must include the username in
- the URL, e.g. svn+ssh://foo@svn.bar.com/project
+ transports (e.g. `svn+ssh://`), you must include the username in
+ the URL, e.g. `svn+ssh://foo@svn.bar.com/project`
--prefix=<prefix>;;
This allows one to specify a prefix which is prepended
to the names of remotes if trunk/branches/tags are
Ask the user to confirm that a patch set should actually be sent to SVN.
For each patch, one may answer "yes" (accept this patch), "no" (discard this
patch), "all" (accept all patches), or "quit".
- +
- 'git svn dcommit' returns immediately if answer is "no" or "quit", without
- committing anything to SVN.
++
+'git svn dcommit' returns immediately if answer is "no" or "quit", without
+committing anything to SVN.
'branch'::
Create a branch in the SVN repository.
CONFIGURATION
-------------
By default, 'git tag' in sign-with-default mode (-s) will use your
-committer identity (of the form "Your Name <\your@email.address>") to
+committer identity (of the form `Your Name <your@email.address>`) to
find a key. If you want to use a different default key, you can specify
it in the repository configuration as follows:
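A minimal sketch of that configuration (the key id is a placeholder):

------------
[user]
	signingKey = <gpg-key-id>
------------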
branch of the `git.git` repository.
Documentation for older releases are available here:
-* link:v2.4.0/git.html[documentation for release 2.4]
+* link:v2.4.1/git.html[documentation for release 2.4.1]
* release notes for
+ link:RelNotes/2.4.1.txt[2.4.1],
link:RelNotes/2.4.0.txt[2.4].
* link:v2.3.8/git.html[documentation for release 2.3.8]
@@ -1 +1,2 @@
Hello World
+It's a new day for git
-----
+------------
i.e. the diff of the change we caused by adding another line to `hello`.
files:
- 'git diff-index' compares contents of a "tree" object and the
- working directory (when '\--cached' flag is not used) or a
- "tree" object and the index file (when '\--cached' flag is
+ working directory (when '--cached' flag is not used) or a
+ "tree" object and the index file (when '--cached' flag is
used);
- 'git diff-files' compares contents of the index file and the
When the "-C" option is used, the original contents of modified files,
and deleted files (and also unmodified files, if the
-"\--find-copies-harder" option is used) are considered as candidates
+"--find-copies-harder" option is used) are considered as candidates
of the source files in rename/copy operation. If the input were like
these filepairs, that talk about a modified file fileY and a newly
created file file0:
of <n> correspond to the number of -v flags passed on the
command line.
-'option progress' \{'true'|'false'\}::
+'option progress' {'true'|'false'}::
Enables (or disables) progress messages displayed by the
transport helper during a command.
'option depth' <depth>::
Deepens the history of a shallow repository.
-'option followtags' \{'true'|'false'\}::
+'option followtags' {'true'|'false'}::
If enabled the helper should automatically fetch annotated
tag objects if the object the tag points at was transferred
during the fetch command. If the tag is not fetched by
ask for the tag specifically. Some helpers may be able to
use this option to avoid a second network connection.
-'option dry-run' \{'true'|'false'\}:
+'option dry-run' {'true'|'false'}::
If true, pretend the operation completed successfully,
but don't actually change any repository data. For most
helpers this only applies to the 'push', if supported.
must not rely on this option being set before
connect request occurs.
-'option check-connectivity' \{'true'|'false'\}::
+'option check-connectivity' {'true'|'false'}::
Request the helper to check connectivity of a clone.
-'option force' \{'true'|'false'\}::
+'option force' {'true'|'false'}::
Request the helper to perform a force update. Defaults to
'false'.
-'option cloning \{'true'|'false'\}::
+'option cloning' {'true'|'false'}::
Notify the helper this is a clone request (i.e. the current
repository is guaranteed empty).
-'option update-shallow \{'true'|'false'\}::
+'option update-shallow' {'true'|'false'}::
Allow to extend .git/shallow if the new refs require it.
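For illustration, a typical option exchange on the helper's standard
input/output could look like this (the values are examples); the helper answers
each request with 'ok', 'unsupported', or 'error <message>':

------------
option progress true
ok
option depth 5
unsupported
------------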
SEE ALSO
@echo PYTHON_PATH=\''$(subst ','\'',$(PYTHON_PATH_SQ))'\' >>$@
@echo TAR=\''$(subst ','\'',$(subst ','\'',$(TAR)))'\' >>$@
@echo NO_CURL=\''$(subst ','\'',$(subst ','\'',$(NO_CURL)))'\' >>$@
+ @echo NO_EXPAT=\''$(subst ','\'',$(subst ','\'',$(NO_EXPAT)))'\' >>$@
@echo USE_LIBPCRE=\''$(subst ','\'',$(subst ','\'',$(USE_LIBPCRE)))'\' >>$@
@echo NO_PERL=\''$(subst ','\'',$(subst ','\'',$(NO_PERL)))'\' >>$@
@echo NO_PYTHON=\''$(subst ','\'',$(subst ','\'',$(NO_PYTHON)))'\' >>$@
switch (fix_unmerged_status(p, data)) {
default:
die(_("unexpected diff status %c"), p->status);
+ case DIFF_STATUS_ADDED:
case DIFF_STATUS_MODIFIED:
case DIFF_STATUS_TYPE_CHANGED:
if (add_file_to_index(&the_index, path, data->flags)) {
sha1, &flags);
if (!target) {
error(remote_branch
- ? _("remote branch '%s' not found.")
+ ? _("remote-tracking branch '%s' not found.")
: _("branch '%s' not found."), bname.buf);
ret = 1;
continue;
if (delete_ref(name, sha1, REF_NODEREF)) {
error(remote_branch
- ? _("Error deleting remote branch '%s'")
+ ? _("Error deleting remote-tracking branch '%s'")
: _("Error deleting branch '%s'"),
bname.buf);
ret = 1;
}
if (!quiet) {
printf(remote_branch
- ? _("Deleted remote branch %s (was %s).\n")
+ ? _("Deleted remote-tracking branch %s (was %s).\n")
: _("Deleted branch %s (was %s).\n"),
bname.buf,
(flags & REF_ISBROKEN) ? "broken"
if (!strcmp(cmd, "verify")) {
close(bundle_fd);
+ if (argc != 1) {
+ usage(builtin_bundle_usage);
+ return 1;
+ }
if (verify_bundle(&header, 1))
return 1;
fprintf(stderr, _("%s is okay\n"), bundle_file);
return !!list_bundle_refs(&header, argc, argv);
}
if (!strcmp(cmd, "create")) {
+ if (argc < 2) {
+ usage(builtin_bundle_usage);
+ return 1;
+ }
if (!startup_info->have_repository)
die(_("Need a repository to create a bundle."));
return !!create_bundle(&header, bundle_file, argc, argv);
#include "userdiff.h"
#include "streaming.h"
-static int cat_one_file(int opt, const char *exp_type, const char *obj_name)
+static int cat_one_file(int opt, const char *exp_type, const char *obj_name,
+ int unknown_type)
{
unsigned char sha1[20];
enum object_type type;
char *buf;
unsigned long size;
struct object_context obj_context;
+ struct object_info oi = {NULL};
+ struct strbuf sb = STRBUF_INIT;
+ unsigned flags = LOOKUP_REPLACE_OBJECT;
+
+ if (unknown_type)
+ flags |= LOOKUP_UNKNOWN_OBJECT;
if (get_sha1_with_context(obj_name, 0, sha1, &obj_context))
die("Not a valid object name %s", obj_name);
buf = NULL;
switch (opt) {
case 't':
- type = sha1_object_info(sha1, NULL);
- if (type > 0) {
- printf("%s\n", typename(type));
+ oi.typename = &sb;
+ if (sha1_object_info_extended(sha1, &oi, flags) < 0)
+ die("git cat-file: could not get object info");
+ if (sb.len) {
+ printf("%s\n", sb.buf);
+ strbuf_release(&sb);
return 0;
}
break;
case 's':
- type = sha1_object_info(sha1, &size);
- if (type > 0) {
- printf("%lu\n", size);
- return 0;
- }
- break;
+ oi.sizep = &size;
+ if (sha1_object_info_extended(sha1, &oi, flags) < 0)
+ die("git cat-file: could not get object info");
+ printf("%lu\n", size);
+ return 0;
case 'e':
return !has_sha1_file(sha1);
}
static const char * const cat_file_usage[] = {
- N_("git cat-file (-t | -s | -e | -p | <type> | --textconv) <object>"),
+ N_("git cat-file (-t [--allow-unknown-type]|-s [--allow-unknown-type]|-e|-p|<type>|--textconv) <object>"),
N_("git cat-file (--batch | --batch-check) < <list-of-objects>"),
NULL
};
int opt = 0;
const char *exp_type = NULL, *obj_name = NULL;
struct batch_options batch = {0};
+ int unknown_type = 0;
const struct option options[] = {
OPT_GROUP(N_("<type> can be one of: blob, tree, commit, tag")),
- OPT_SET_INT('t', NULL, &opt, N_("show object type"), 't'),
- OPT_SET_INT('s', NULL, &opt, N_("show object size"), 's'),
- OPT_SET_INT('e', NULL, &opt,
+ OPT_CMDMODE('t', NULL, &opt, N_("show object type"), 't'),
+ OPT_CMDMODE('s', NULL, &opt, N_("show object size"), 's'),
+ OPT_CMDMODE('e', NULL, &opt,
N_("exit with zero when there's no error"), 'e'),
- OPT_SET_INT('p', NULL, &opt, N_("pretty-print object's content"), 'p'),
- OPT_SET_INT(0, "textconv", &opt,
+ OPT_CMDMODE('p', NULL, &opt, N_("pretty-print object's content"), 'p'),
+ OPT_CMDMODE(0, "textconv", &opt,
N_("for blob objects, run textconv on object's content"), 'c'),
+ OPT_BOOL( 0, "allow-unknown-type", &unknown_type,
+ N_("allow -s and -t to work with broken/corrupt objects")),
{ OPTION_CALLBACK, 0, "batch", &batch, "format",
N_("show info and content of objects fed from the standard input"),
PARSE_OPT_OPTARG, batch_option_callback },
git_config(git_cat_file_config, NULL);
- if (argc != 3 && argc != 2)
- usage_with_options(cat_file_usage, options);
-
argc = parse_options(argc, argv, prefix, options, cat_file_usage, 0);
if (opt) {
if (batch.enabled)
return batch_objects(&batch);
- return cat_one_file(opt, exp_type, obj_name);
+ if (unknown_type && opt != 't' && opt != 's')
+ die("git cat-file --allow-unknown-type: use with -s or -t");
+ return cat_one_file(opt, exp_type, obj_name, unknown_type);
}
}
if (len) {
struct strbuf truname = STRBUF_INIT;
- strbuf_addstr(&truname, "refs/heads/");
- strbuf_addstr(&truname, remote);
+ strbuf_addf(&truname, "refs/heads/%s", remote);
strbuf_setlen(&truname, truname.len - len);
if (ref_exists(truname.buf)) {
strbuf_addf(msg,
strbuf_release(&truname);
goto cleanup;
}
- }
-
- if (!strcmp(remote, "FETCH_HEAD") &&
- !access(git_path("FETCH_HEAD"), R_OK)) {
- const char *filename;
- FILE *fp;
- struct strbuf line = STRBUF_INIT;
- char *ptr;
-
- filename = git_path("FETCH_HEAD");
- fp = fopen(filename, "r");
- if (!fp)
- die_errno(_("could not open '%s' for reading"),
- filename);
- strbuf_getline(&line, fp, '\n');
- fclose(fp);
- ptr = strstr(line.buf, "\tnot-for-merge\t");
- if (ptr)
- strbuf_remove(&line, ptr-line.buf+1, 13);
- strbuf_addbuf(msg, &line);
- strbuf_release(&line);
- goto cleanup;
+ strbuf_release(&truname);
}
if (remote_head->util) {
st_stdin.st_mode == st_stdout.st_mode);
}
-static struct commit_list *collect_parents(struct commit *head_commit,
- int *head_subsumed,
- int argc, const char **argv)
+static struct commit_list *reduce_parents(struct commit *head_commit,
+ int *head_subsumed,
+ struct commit_list *remoteheads)
{
- int i;
- struct commit_list *remoteheads = NULL, *parents, *next;
- struct commit_list **remotes = &remoteheads;
+ struct commit_list *parents, *next, **remotes = &remoteheads;
- if (head_commit)
- remotes = &commit_list_insert(head_commit, remotes)->next;
- for (i = 0; i < argc; i++) {
- struct commit *commit = get_merge_parent(argv[i]);
- if (!commit)
- help_unknown_ref(argv[i], "merge",
- "not something we can merge");
- remotes = &commit_list_insert(commit, remotes)->next;
- }
- *remotes = NULL;
+ /*
+ * Is the current HEAD reachable from another commit being
+ * merged? If so we do not want to record it as a parent of
+ * the resulting merge, unless --no-ff is given. We will flip
+ * this variable to 0 when we find HEAD among the independent
+ * tips being merged.
+ */
+ *head_subsumed = 1;
+ /* Find what parents to record by checking independent ones. */
parents = reduce_heads(remoteheads);
- *head_subsumed = 1; /* we will flip this to 0 when we find it */
for (remoteheads = NULL, remotes = &remoteheads;
parents;
parents = next) {
*head_subsumed = 0;
else
remotes = &commit_list_insert(commit, remotes)->next;
+ free(parents);
+ }
+ return remoteheads;
+}
+
+static void prepare_merge_message(struct strbuf *merge_names, struct strbuf *merge_msg)
+{
+ struct fmt_merge_msg_opts opts;
+
+ memset(&opts, 0, sizeof(opts));
+ opts.add_title = !have_message;
+ opts.shortlog_len = shortlog_len;
+ opts.credit_people = (0 < option_edit);
+
+ fmt_merge_msg(merge_names, merge_msg, &opts);
+ if (merge_msg->len)
+ strbuf_setlen(merge_msg, merge_msg->len - 1);
+}
+
+static void handle_fetch_head(struct commit_list **remotes, struct strbuf *merge_names)
+{
+ const char *filename;
+ int fd, pos, npos;
+ struct strbuf fetch_head_file = STRBUF_INIT;
+
+ if (!merge_names)
+ merge_names = &fetch_head_file;
+
+ filename = git_path("FETCH_HEAD");
+ fd = open(filename, O_RDONLY);
+ if (fd < 0)
+ die_errno(_("could not open '%s' for reading"), filename);
+
+ if (strbuf_read(merge_names, fd, 0) < 0)
+ die_errno(_("could not read '%s'"), filename);
+ if (close(fd) < 0)
+ die_errno(_("could not close '%s'"), filename);
+
+ for (pos = 0; pos < merge_names->len; pos = npos) {
+ unsigned char sha1[20];
+ char *ptr;
+ struct commit *commit;
+
+ ptr = strchr(merge_names->buf + pos, '\n');
+ if (ptr)
+ npos = ptr - merge_names->buf + 1;
+ else
+ npos = merge_names->len;
+
+ if (npos - pos < 40 + 2 ||
+ get_sha1_hex(merge_names->buf + pos, sha1))
+ commit = NULL; /* bad */
+ else if (memcmp(merge_names->buf + pos + 40, "\t\t", 2))
+ continue; /* not-for-merge */
+ else {
+ char saved = merge_names->buf[pos + 40];
+ merge_names->buf[pos + 40] = '\0';
+ commit = get_merge_parent(merge_names->buf + pos);
+ merge_names->buf[pos + 40] = saved;
+ }
+ if (!commit) {
+ if (ptr)
+ *ptr = '\0';
+ die("not something we can merge in %s: %s",
+ filename, merge_names->buf + pos);
+ }
+ remotes = &commit_list_insert(commit, remotes)->next;
+ }
+
+ if (merge_names == &fetch_head_file)
+ strbuf_release(&fetch_head_file);
+}
+
+static struct commit_list *collect_parents(struct commit *head_commit,
+ int *head_subsumed,
+ int argc, const char **argv,
+ struct strbuf *merge_msg)
+{
+ int i;
+ struct commit_list *remoteheads = NULL;
+ struct commit_list **remotes = &remoteheads;
+ struct strbuf merge_names = STRBUF_INIT, *autogen = NULL;
+
+ if (merge_msg && (!have_message || shortlog_len))
+ autogen = &merge_names;
+
+ if (head_commit)
+ remotes = &commit_list_insert(head_commit, remotes)->next;
+
+ if (argc == 1 && !strcmp(argv[0], "FETCH_HEAD")) {
+ handle_fetch_head(remotes, autogen);
+ remoteheads = reduce_parents(head_commit, head_subsumed, remoteheads);
+ } else {
+ for (i = 0; i < argc; i++) {
+ struct commit *commit = get_merge_parent(argv[i]);
+ if (!commit)
+ help_unknown_ref(argv[i], "merge",
+ "not something we can merge");
+ remotes = &commit_list_insert(commit, remotes)->next;
+ }
+ remoteheads = reduce_parents(head_commit, head_subsumed, remoteheads);
+ if (autogen) {
+ struct commit_list *p;
+ for (p = remoteheads; p; p = p->next)
+ merge_name(merge_remote_util(p->item)->name, autogen);
+ }
}
+
+ if (autogen) {
+ prepare_merge_message(autogen, merge_msg);
+ strbuf_release(autogen);
+ }
+
return remoteheads;
}
option_commit = 0;
}
- if (!abort_current_merge) {
- if (!argc) {
- if (default_to_upstream)
- argc = setup_with_upstream(&argv);
- else
- die(_("No commit specified and merge.defaultToUpstream not set."));
- } else if (argc == 1 && !strcmp(argv[0], "-"))
- argv[0] = "@{-1}";
+ if (!argc) {
+ if (default_to_upstream)
+ argc = setup_with_upstream(&argv);
+ else
+ die(_("No commit specified and merge.defaultToUpstream not set."));
+ } else if (argc == 1 && !strcmp(argv[0], "-")) {
+ argv[0] = "@{-1}";
}
+
if (!argc)
usage_with_options(builtin_merge_usage,
builtin_merge_options);
- /*
- * This could be traditional "merge <msg> HEAD <commit>..." and
- * the way we can tell it is to see if the second token is HEAD,
- * but some people might have misused the interface and used a
- * commit-ish that is the same as HEAD there instead.
- * Traditional format never would have "-m" so it is an
- * additional safety measure to check for it.
- */
-
- if (!have_message && head_commit &&
- is_old_style_invocation(argc, argv, head_commit->object.sha1)) {
- strbuf_addstr(&merge_msg, argv[0]);
- head_arg = argv[1];
- argv += 2;
- argc -= 2;
- remoteheads = collect_parents(head_commit, &head_subsumed, argc, argv);
- } else if (!head_commit) {
+ if (!head_commit) {
struct commit *remote_head;
/*
* If the merged head is a valid one there is no reason
* to forbid "git merge" into a branch yet to be born.
* We do the same for "git pull".
*/
- if (argc != 1)
- die(_("Can merge only exactly one commit into "
- "empty head"));
if (squash)
die(_("Squash commit into empty head not supported yet"));
if (fast_forward == FF_NO)
die(_("Non-fast-forward commit does not make sense into "
"an empty head"));
- remoteheads = collect_parents(head_commit, &head_subsumed, argc, argv);
+ remoteheads = collect_parents(head_commit, &head_subsumed,
+ argc, argv, NULL);
remote_head = remoteheads->item;
if (!remote_head)
die(_("%s - not something we can merge"), argv[0]);
+ if (remoteheads->next)
+ die(_("Can merge only exactly one commit into empty head"));
read_empty(remote_head->object.sha1, 0);
update_ref("initial pull", "HEAD", remote_head->object.sha1,
NULL, 0, UPDATE_REFS_DIE_ON_ERR);
goto done;
- } else {
- struct strbuf merge_names = STRBUF_INIT;
+ }
+ /*
+ * This could be traditional "merge <msg> HEAD <commit>..." and
+ * the way we can tell it is to see if the second token is HEAD,
+ * but some people might have misused the interface and used a
+ * commit-ish that is the same as HEAD there instead.
+ * Traditional format never would have "-m" so it is an
+ * additional safety measure to check for it.
+ */
+ if (!have_message &&
+ is_old_style_invocation(argc, argv, head_commit->object.sha1)) {
+ warning("old-style 'git merge <msg> HEAD <commit>' is deprecated.");
+ strbuf_addstr(&merge_msg, argv[0]);
+ head_arg = argv[1];
+ argv += 2;
+ argc -= 2;
+ remoteheads = collect_parents(head_commit, &head_subsumed,
+ argc, argv, NULL);
+ } else {
/* We are invoked directly as the first-class UI. */
head_arg = "HEAD";
* the standard merge summary message to be appended
* to the given message.
*/
- remoteheads = collect_parents(head_commit, &head_subsumed, argc, argv);
- for (p = remoteheads; p; p = p->next)
- merge_name(merge_remote_util(p->item)->name, &merge_names);
-
- if (!have_message || shortlog_len) {
- struct fmt_merge_msg_opts opts;
- memset(&opts, 0, sizeof(opts));
- opts.add_title = !have_message;
- opts.shortlog_len = shortlog_len;
- opts.credit_people = (0 < option_edit);
-
- fmt_merge_msg(&merge_names, &merge_msg, &opts);
- if (merge_msg.len)
- strbuf_setlen(&merge_msg, merge_msg.len - 1);
- }
+ remoteheads = collect_parents(head_commit, &head_subsumed,
+ argc, argv, &merge_msg);
}
if (!head_commit || !argc)
/* object replacement */
#define LOOKUP_REPLACE_OBJECT 1
+#define LOOKUP_UNKNOWN_OBJECT 2
extern void *read_sha1_file_extended(const unsigned char *sha1, enum object_type *type, unsigned long *size, unsigned flag);
static inline void *read_sha1_file(const unsigned char *sha1, enum object_type *type, unsigned long *size)
{
unsigned long *sizep;
unsigned long *disk_sizep;
unsigned char *delta_base_sha1;
+ struct strbuf *typename;
/* Response */
enum {
# List of known git commands.
-# command name category [deprecated] [common]
+# command name category [deprecated] [common]
git-add mainporcelain common
git-am mainporcelain
git-annotate ancillaryinterrogators
git-diff-index plumbinginterrogators
git-diff-tree plumbinginterrogators
git-difftool ancillaryinterrogators
-git-fast-export ancillarymanipulators
-git-fast-import ancillarymanipulators
+git-fast-export ancillarymanipulators
+git-fast-import ancillarymanipulators
git-fetch mainporcelain common
git-fetch-pack synchingrepositories
git-filter-branch ancillarymanipulators
git-fmt-merge-msg purehelpers
git-for-each-ref plumbinginterrogators
git-format-patch mainporcelain
-git-fsck ancillaryinterrogators
+git-fsck ancillaryinterrogators
git-gc mainporcelain
git-get-tar-commit-id ancillaryinterrogators
git-grep mainporcelain common
git-gui mainporcelain
git-hash-object plumbingmanipulators
-git-help ancillaryinterrogators
+git-help ancillaryinterrogators
git-http-backend synchingrepositories
git-http-fetch synchelpers
git-http-push synchelpers
conn->in = conn->out = -1;
if (protocol == PROTO_SSH) {
const char *ssh;
- int putty;
+ int putty, tortoiseplink = 0;
char *ssh_host = hostandport;
const char *port = NULL;
get_host_and_port(&ssh_host, &port);
free(path);
free(conn);
return NULL;
+ }
+
+ ssh = getenv("GIT_SSH_COMMAND");
+ if (ssh) {
+ conn->use_shell = 1;
+ putty = 0;
} else {
- ssh = getenv("GIT_SSH_COMMAND");
- if (ssh) {
- conn->use_shell = 1;
- putty = 0;
- } else {
- ssh = getenv("GIT_SSH");
- if (!ssh)
- ssh = "ssh";
- putty = !!strcasestr(ssh, "plink");
- }
-
- argv_array_push(&conn->args, ssh);
- if (putty && !strcasestr(ssh, "tortoiseplink"))
- argv_array_push(&conn->args, "-batch");
- if (port) {
- /* P is for PuTTY, p is for OpenSSH */
- argv_array_push(&conn->args, putty ? "-P" : "-p");
- argv_array_push(&conn->args, port);
- }
- argv_array_push(&conn->args, ssh_host);
+ const char *base;
+ char *ssh_dup;
+
+ ssh = getenv("GIT_SSH");
+ if (!ssh)
+ ssh = "ssh";
+
+ ssh_dup = xstrdup(ssh);
+ base = basename(ssh_dup);
+
+ tortoiseplink = !strcasecmp(base, "tortoiseplink") ||
+ !strcasecmp(base, "tortoiseplink.exe");
+ putty = !strcasecmp(base, "plink") ||
+ !strcasecmp(base, "plink.exe") || tortoiseplink;
+
+ free(ssh_dup);
+ }
+
+ argv_array_push(&conn->args, ssh);
+ if (tortoiseplink)
+ argv_array_push(&conn->args, "-batch");
+ if (port) {
+ /* P is for PuTTY, p is for OpenSSH */
+ argv_array_push(&conn->args, putty ? "-P" : "-p");
+ argv_array_push(&conn->args, port);
}
+ argv_array_push(&conn->args, ssh_host);
} else {
/* remove repo-local variables from the environment */
conn->env = local_repo_env;
checkout-index) : plumbing;;
commit-tree) : plumbing;;
count-objects) : infrequent;;
- credential-cache) : credentials helper;;
- credential-store) : credentials helper;;
+ credential) : credentials;;
+ credential-*) : credentials helper;;
cvsexportcommit) : export;;
cvsimport) : import;;
cvsserver) : daemon;;
http.noEPSV
http.postBuffer
http.proxy
+ http.sslCipherList
http.sslCAInfo
http.sslCAPath
http.sslCert
ignore-joins ignore prior --rejoin commits
onto= try connecting new tree to an existing one
rejoin merge the new branch back into HEAD
- options for 'add', 'merge', 'pull' and 'push'
+ options for 'add', 'merge', and 'pull'
squash merge subtree changes as a single commit
"
eval "$(echo "$OPTS_SPEC" | git rev-parse --parseopt -- "$@" || echo exit $?)"
debug()
{
if [ -n "$debug" ]; then
- echo "$@" >&2
+ printf "%s\n" "$*" >&2
fi
}
say()
{
if [ -z "$quiet" ]; then
- echo "$@" >&2
+ printf "%s\n" "$*" >&2
+ fi
+}
+
+progress()
+{
+ if [ -z "$quiet" ]; then
+ printf "%s\r" "$*" >&2
fi
}
eval "$grl" |
while read rev parents; do
revcount=$(($revcount + 1))
- say -n "$revcount/$revmax ($createcount)\r"
+ progress "$revcount/$revmax ($createcount)"
debug "Processing commit: $rev"
exists=$(cache_get $rev)
if [ -n "$exists" ]; then
OPTIONS FOR add, merge, push, pull
----------------------------------
--squash::
- This option is only valid for add, merge, push and pull
+ This option is only valid for add, merge, and pull
commands.
+
Instead of merging the entire history from the subtree project, produce
ce->sha1, !is_null_sha1(ce->sha1),
ce->name, 0);
continue;
+ } else if (ce->ce_flags & CE_INTENT_TO_ADD) {
+ diff_addremove(&revs->diffopt, '+', ce->ce_mode,
+ EMPTY_BLOB_SHA1_BIN, 0,
+ ce->name, 0);
+ continue;
}
changed = match_stat_with_submodule(&revs->diffopt, ce, &st,
struct rev_info *revs = o->unpack_data;
int match_missing, cached;
+ /* i-t-a entries do not actually exist in the index */
+ if (idx && (idx->ce_flags & CE_INTENT_TO_ADD)) {
+ idx = NULL;
+ if (!tree)
+ return; /* nothing to diff.. */
+ }
+
/* if the entry is not checked out, don't examine work tree */
cached = o->index_only ||
(idx && ((idx->ce_flags & CE_VALID) || ce_skip_worktree(idx)));
const char *path;
dir->exclude_per_dir = ".gitignore";
- path = git_path("info/exclude");
+
+ /* core.excludesfile defaulting to $XDG_HOME/git/ignore */
if (!excludes_file)
excludes_file = xdg_config_home("ignore");
- if (!access_or_warn(path, R_OK, 0))
- add_excludes_from_file(dir, path);
if (excludes_file && !access_or_warn(excludes_file, R_OK, 0))
add_excludes_from_file(dir, excludes_file);
+
+ /* per repository user preference */
+ path = git_path("info/exclude");
+ if (!access_or_warn(path, R_OK, 0))
+ add_excludes_from_file(dir, path);
}
int remove_path(const char *name)
struct strbuf new_path = STRBUF_INIT;
add_path(&new_path, git_exec_path());
- add_path(&new_path, argv0_path);
if (old_path)
strbuf_addstr(&new_path, old_path);
die "Fast-forward update failed: $?\n" if $?;
}
else {
- system(qw(git merge cvsimport HEAD), "$remote/$opt_o");
+ system(qw(git merge -m cvsimport), "$remote/$opt_o");
die "Could not merge $opt_o into the current branch.\n" if $?;
}
} else {
fi
fi
-merge_name=$(git fmt-merge-msg $log_arg <"$GIT_DIR/FETCH_HEAD") || exit
case "$rebase" in
true)
eval="git-rebase $diffstat $strategy_args $merge_args $rebase_args $verbosity"
eval="git-merge $diffstat $no_commit $verify_signatures $edit $squash $no_ff $ff_only"
eval="$eval $log_arg $strategy_args $merge_args $verbosity $progress"
eval="$eval $gpg_sign_args"
- eval="$eval \"\$merge_name\" HEAD $merge_head"
+ eval="$eval FETCH_HEAD"
;;
esac
eval "exec $eval"
fi
}
+# Put the last action marked done at the beginning of the todo list
+# again. If there has not been an action marked done yet, leave the list of
+# items on the todo list unchanged.
+reschedule_last_action () {
+ tail -n 1 "$done" | cat - "$todo" >"$todo".new
+ sed -e \$d <"$done" >"$done".new
+ mv -f "$todo".new "$todo"
+ mv -f "$done".new "$done"
+}
+
append_todo_help () {
git stripspace --comment-lines >>"$todo" <<\EOF
output eval git cherry-pick \
${gpg_sign_opt:+$(git rev-parse --sq-quote "$gpg_sign_opt")} \
"$strategy_args" $empty_args $ff "$@"
+
+ # If cherry-pick dies it leaves the to-be-picked commit unrecorded. Reschedule
+ # previous task so this commit is not lost.
+ ret=$?
+ case "$ret" in [01]) ;; *) reschedule_last_action ;; esac
+ return $ret
}
pick_one_preserving_merges () {
assert_stash_like "$@"
git update-index -q --refresh || die "$(gettext "unable to refresh index")"
+ git diff-index --cached --quiet --ignore-submodules HEAD -- ||
+ die "$(gettext "Cannot apply stash: Your index contains uncommitted changes.")"
# current index state
c_tree=$(git write-tree) ||
static int curl_ssl_verify = -1;
static int curl_ssl_try;
static const char *ssl_cert;
+static const char *ssl_cipherlist;
#if LIBCURL_VERSION_NUM >= 0x070903
static const char *ssl_key;
#endif
curl_ssl_verify = git_config_bool(var, value);
return 0;
}
+ if (!strcmp("http.sslcipherlist", var))
+ return git_config_string(&ssl_cipherlist, var, value);
if (!strcmp("http.sslcert", var))
return git_config_string(&ssl_cert, var, value);
#if LIBCURL_VERSION_NUM >= 0x070903
if (http_proactive_auth)
init_curl_http_auth(result);
+ if (getenv("GIT_SSL_CIPHER_LIST"))
+ ssl_cipherlist = getenv("GIT_SSL_CIPHER_LIST");
+
+ if (ssl_cipherlist != NULL && *ssl_cipherlist)
+ curl_easy_setopt(result, CURLOPT_SSL_CIPHER_LIST,
+ ssl_cipherlist);
+
if (ssl_cert != NULL)
curl_easy_setopt(result, CURLOPT_SSLCERT, ssl_cert);
if (has_cert_password())
#include "line-log.h"
static struct decoration name_decoration = { "object names" };
+static int decoration_loaded;
+static int decoration_flags;
static char decoration_colors[][COLOR_MAXLEN] = {
GIT_COLOR_RESET,
struct object *obj;
enum decoration_type type = DECORATION_NONE;
+ assert(cb_data == NULL);
+
if (starts_with(refname, "refs/replace/")) {
unsigned char original_sha1[20];
if (!check_replace_refs)
else if (!strcmp(refname, "HEAD"))
type = DECORATION_REF_HEAD;
- if (!cb_data || *(int *)cb_data == DECORATE_SHORT_REFS)
- refname = prettify_refname(refname);
add_name_decoration(type, refname, obj);
while (obj->type == OBJ_TAG) {
obj = ((struct tag *)obj)->tagged;
void load_ref_decorations(int flags)
{
- static int loaded;
- if (!loaded) {
- loaded = 1;
- for_each_ref(add_ref_decoration, &flags);
- head_ref(add_ref_decoration, &flags);
+ if (!decoration_loaded) {
+ decoration_loaded = 1;
+ decoration_flags = flags;
+ for_each_ref(add_ref_decoration, NULL);
+ head_ref(add_ref_decoration, NULL);
for_each_commit_graft(add_graft_decoration, NULL);
}
}
branch_name = resolve_ref_unsafe("HEAD", 0, unused, &rru_flags);
if (!(rru_flags & REF_ISSYMREF))
return NULL;
- if (!skip_prefix(branch_name, "refs/heads/", &branch_name))
+
+ if (!starts_with(branch_name, "refs/"))
return NULL;
/* OK, do we have that ref in the list? */
return NULL;
}
+static void show_name(struct strbuf *sb, const struct name_decoration *decoration)
+{
+ if (decoration_flags == DECORATE_SHORT_REFS)
+ strbuf_addstr(sb, prettify_refname(decoration->name));
+ else
+ strbuf_addstr(sb, decoration->name);
+}
+
/*
* The caller makes sure there is no funny color before calling.
* format_decorations_extended makes sure the same after return.
if (decoration->type == DECORATION_REF_TAG)
strbuf_addstr(sb, "tag: ");
- strbuf_addstr(sb, decoration->name);
+ show_name(sb, decoration);
if (current_and_HEAD &&
decoration->type == DECORATION_REF_HEAD) {
strbuf_addstr(sb, " -> ");
strbuf_addstr(sb, color_reset);
strbuf_addstr(sb, decorate_get_color(use_color, current_and_HEAD->type));
- strbuf_addstr(sb, current_and_HEAD->name);
+ show_name(sb, current_and_HEAD);
}
strbuf_addstr(sb, color_reset);
char *orig_ref_name;
struct lock_file *lk;
unsigned char old_sha1[20];
- int lock_fd;
};
/*
*/
#define REF_HAVE_OLD 0x10
+/*
+ * Used as a flag in ref_update::flags when the lockfile needs to be
+ * committed.
+ */
+#define REF_NEEDS_COMMIT 0x20
+
/*
* Try to read one refname component from the front of refname.
* Return the length of the component found, or -1 if the component is
* presence of an empty subdirectory does not block the creation of a
* similarly-named reference. (The fact that reference names with the
* same leading components can conflict *with each other* is a
- * separate issue that is regulated by is_refname_available().)
+ * separate issue that is regulated by verify_refname_available().)
*
* Please note that the name field contains the fully-qualified
* reference (or subdirectory) name. Space could be saved by only
}
}
-static int entry_matches(struct ref_entry *entry, const struct string_list *list)
-{
- return list && string_list_has_string(list, entry->name);
-}
-
struct nonmatching_ref_data {
const struct string_list *skip;
- struct ref_entry *found;
+ const char *conflicting_refname;
};
static int nonmatching_ref_fn(struct ref_entry *entry, void *vdata)
{
struct nonmatching_ref_data *data = vdata;
- if (entry_matches(entry, data->skip))
+ if (data->skip && string_list_has_string(data->skip, entry->name))
return 0;
- data->found = entry;
+ data->conflicting_refname = entry->name;
return 1;
}
-static void report_refname_conflict(struct ref_entry *entry,
- const char *refname)
-{
- error("'%s' exists; cannot create '%s'", entry->name, refname);
-}
-
/*
- * Return true iff a reference named refname could be created without
- * conflicting with the name of an existing reference in dir. If
- * skip is non-NULL, ignore potential conflicts with refs in skip
- * (e.g., because they are scheduled for deletion in the same
- * operation).
+ * Return 0 if a reference named refname could be created without
+ * conflicting with the name of an existing reference in dir.
+ * Otherwise, return a negative value and write an explanation to err.
+ * If extras is non-NULL, it is a list of additional refnames with
+ * which refname is not allowed to conflict. If skip is non-NULL,
+ * ignore potential conflicts with refs in skip (e.g., because they
+ * are scheduled for deletion in the same operation). Behavior is
+ * undefined if the same name is listed in both extras and skip.
*
* Two reference names conflict if one of them exactly matches the
- * leading components of the other; e.g., "foo/bar" conflicts with
- * both "foo" and with "foo/bar/baz" but not with "foo/bar" or
- * "foo/barbados".
+ * leading components of the other; e.g., "refs/foo/bar" conflicts
+ * with both "refs/foo" and with "refs/foo/bar/baz" but not with
+ * "refs/foo/bar" or "refs/foo/barbados".
*
- * skip must be sorted.
+ * extras and skip must be sorted.
*/
-static int is_refname_available(const char *refname,
- const struct string_list *skip,
- struct ref_dir *dir)
+static int verify_refname_available(const char *refname,
+ const struct string_list *extras,
+ const struct string_list *skip,
+ struct ref_dir *dir,
+ struct strbuf *err)
{
const char *slash;
- size_t len;
int pos;
- char *dirname;
+ struct strbuf dirname = STRBUF_INIT;
+ int ret = -1;
+
+ /*
+ * For the sake of comments in this function, suppose that
+ * refname is "refs/foo/bar".
+ */
+
+ assert(err);
+ strbuf_grow(&dirname, strlen(refname) + 1);
for (slash = strchr(refname, '/'); slash; slash = strchr(slash + 1, '/')) {
+ /* Expand dirname to the new prefix, not including the trailing slash: */
+ strbuf_add(&dirname, refname + dirname.len, slash - refname - dirname.len);
+
/*
- * We are still at a leading dir of the refname; we are
- * looking for a conflict with a leaf entry.
- *
- * If we find one, we still must make sure it is
- * not in "skip".
+ * We are still at a leading dir of the refname (e.g.,
+ * "refs/foo"; if there is a reference with that name,
+ * it is a conflict, *unless* it is in skip.
*/
- pos = search_ref_dir(dir, refname, slash - refname);
- if (pos >= 0) {
- struct ref_entry *entry = dir->entries[pos];
- if (entry_matches(entry, skip))
- return 1;
- report_refname_conflict(entry, refname);
- return 0;
+ if (dir) {
+ pos = search_ref_dir(dir, dirname.buf, dirname.len);
+ if (pos >= 0 &&
+ (!skip || !string_list_has_string(skip, dirname.buf))) {
+ /*
+ * We found a reference whose name is
+ * a proper prefix of refname; e.g.,
+ * "refs/foo", and is not in skip.
+ */
+ strbuf_addf(err, "'%s' exists; cannot create '%s'",
+ dirname.buf, refname);
+ goto cleanup;
+ }
}
+ if (extras && string_list_has_string(extras, dirname.buf) &&
+ (!skip || !string_list_has_string(skip, dirname.buf))) {
+ strbuf_addf(err, "cannot process '%s' and '%s' at the same time",
+ refname, dirname.buf);
+ goto cleanup;
+ }
/*
* Otherwise, we can try to continue our search with
- * the next component; if we come up empty, we know
- * there is nothing under this whole prefix.
+ * the next component. So try to look up the
+ * directory, e.g., "refs/foo/". If we come up empty,
+ * we know there is nothing under this whole prefix,
+ * but even in that case we still have to continue the
+ * search for conflicts with extras.
*/
- pos = search_ref_dir(dir, refname, slash + 1 - refname);
- if (pos < 0)
- return 1;
-
- dir = get_ref_dir(dir->entries[pos]);
+ strbuf_addch(&dirname, '/');
+ if (dir) {
+ pos = search_ref_dir(dir, dirname.buf, dirname.len);
+ if (pos < 0) {
+ /*
+ * There was no directory "refs/foo/",
+ * so there is nothing under this
+ * whole prefix. So there is no need
+ * to continue looking for conflicting
+ * references. But we need to continue
+ * looking for conflicting extras.
+ */
+ dir = NULL;
+ } else {
+ dir = get_ref_dir(dir->entries[pos]);
+ }
+ }
}
/*
- * We are at the leaf of our refname; we want to
- * make sure there are no directories which match it.
+ * We are at the leaf of our refname (e.g., "refs/foo/bar").
+ * There is no point in searching for a reference with that
+ * name, because a refname isn't considered to conflict with
+ * itself. But we still need to check for references whose
+ * names are in the "refs/foo/bar/" namespace, because they
+ * *do* conflict.
*/
- len = strlen(refname);
- dirname = xmallocz(len + 1);
- sprintf(dirname, "%s/", refname);
- pos = search_ref_dir(dir, dirname, len + 1);
- free(dirname);
+ strbuf_addstr(&dirname, refname + dirname.len);
+ strbuf_addch(&dirname, '/');
+
+ if (dir) {
+ pos = search_ref_dir(dir, dirname.buf, dirname.len);
- if (pos >= 0) {
+ if (pos >= 0) {
+ /*
+ * We found a directory named "$refname/"
+ * (e.g., "refs/foo/bar/"). It is a problem
+ * iff it contains any ref that is not in
+ * "skip".
+ */
+ struct nonmatching_ref_data data;
+
+ data.skip = skip;
+ data.conflicting_refname = NULL;
+ dir = get_ref_dir(dir->entries[pos]);
+ sort_ref_dir(dir);
+ if (do_for_each_entry_in_dir(dir, 0, nonmatching_ref_fn, &data)) {
+ strbuf_addf(err, "'%s' exists; cannot create '%s'",
+ data.conflicting_refname, refname);
+ goto cleanup;
+ }
+ }
+ }
+
+ if (extras) {
/*
- * We found a directory named "refname". It is a
- * problem iff it contains any ref that is not
- * in "skip".
+ * Check for entries in extras that start with
+ * "$refname/". We do that by looking for the place
+ * where "$refname/" would be inserted in extras. If
+ * there is an entry at that position that starts with
+ * "$refname/" and is not in skip, then we have a
+ * conflict.
*/
- struct ref_entry *entry = dir->entries[pos];
- struct ref_dir *dir = get_ref_dir(entry);
- struct nonmatching_ref_data data;
+ for (pos = string_list_find_insert_index(extras, dirname.buf, 0);
+ pos < extras->nr; pos++) {
+ const char *extra_refname = extras->items[pos].string;
- data.skip = skip;
- sort_ref_dir(dir);
- if (!do_for_each_entry_in_dir(dir, 0, nonmatching_ref_fn, &data))
- return 1;
+ if (!starts_with(extra_refname, dirname.buf))
+ break;
- report_refname_conflict(data.found, refname);
- return 0;
+ if (!skip || !string_list_has_string(skip, extra_refname)) {
+ strbuf_addf(err, "cannot process '%s' and '%s' at the same time",
+ refname, extra_refname);
+ goto cleanup;
+ }
+ }
}
- /*
- * There is no point in searching for another leaf
- * node which matches it; such an entry would be the
- * ref we are looking for, not a conflict.
- */
- return 1;
+ /* No conflicts were found */
+ ret = 0;
+
+cleanup:
+ strbuf_release(&dirname);
+ return ret;
}
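The conflict rule documented above is what verify_refname_available() now reports through "err" instead of printing directly. A minimal shell sketch of the user-visible effect, assuming a fresh repository (the repository and ref names are made up; the new t1404 tests further below exercise the same cases):

    git init df-demo && cd df-demo
    git commit --allow-empty -m initial
    git update-ref refs/heads/foo HEAD
    # "refs/heads/foo" exists, so a name nested under it is rejected:
    ! git update-ref refs/heads/foo/bar HEAD
    # and a single transaction may not create two conflicting names:
    printf "%s\n" "create refs/heads/x HEAD" "create refs/heads/x/y HEAD" >in
    ! git update-ref --stdin <in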
struct packed_ref_cache {
*/
static struct ref_lock *lock_ref_sha1_basic(const char *refname,
const unsigned char *old_sha1,
+ const struct string_list *extras,
const struct string_list *skip,
- unsigned int flags, int *type_p)
+ unsigned int flags, int *type_p,
+ struct strbuf *err)
{
const char *ref_file;
const char *orig_refname = refname;
int resolve_flags = 0;
int attempts_remaining = 3;
+ assert(err);
+
lock = xcalloc(1, sizeof(struct ref_lock));
- lock->lock_fd = -1;
if (mustexist)
resolve_flags |= RESOLVE_REF_READING;
ref_file = git_path("%s", orig_refname);
if (remove_empty_directories(ref_file)) {
last_errno = errno;
- error("there are still refs under '%s'", orig_refname);
+
+ if (!verify_refname_available(orig_refname, extras, skip,
+ get_loose_refs(&ref_cache), err))
+ strbuf_addf(err, "there are still refs under '%s'",
+ orig_refname);
+
goto error_return;
}
refname = resolve_ref_unsafe(orig_refname, resolve_flags,
*type_p = type;
if (!refname) {
last_errno = errno;
- error("unable to resolve reference %s: %s",
- orig_refname, strerror(errno));
+ if (last_errno != ENOTDIR ||
+ !verify_refname_available(orig_refname, extras, skip,
+ get_loose_refs(&ref_cache), err))
+ strbuf_addf(err, "unable to resolve reference %s: %s",
+ orig_refname, strerror(last_errno));
+
goto error_return;
}
/*
* our refname.
*/
if (is_null_sha1(lock->old_sha1) &&
- !is_refname_available(refname, skip, get_packed_refs(&ref_cache))) {
+ verify_refname_available(refname, extras, skip,
+ get_packed_refs(&ref_cache), err)) {
last_errno = ENOTDIR;
goto error_return;
}
/* fall through */
default:
last_errno = errno;
- error("unable to create directory for %s", ref_file);
+ strbuf_addf(err, "unable to create directory for %s", ref_file);
goto error_return;
}
- lock->lock_fd = hold_lock_file_for_update(lock->lk, ref_file, lflags);
- if (lock->lock_fd < 0) {
+ if (hold_lock_file_for_update(lock->lk, ref_file, lflags) < 0) {
last_errno = errno;
if (errno == ENOENT && --attempts_remaining > 0)
/*
*/
goto retry;
else {
- struct strbuf err = STRBUF_INIT;
- unable_to_lock_message(ref_file, errno, &err);
- error("%s", err.buf);
- strbuf_release(&err);
+ unable_to_lock_message(ref_file, errno, err);
goto error_return;
}
}
static int rename_ref_available(const char *oldname, const char *newname)
{
struct string_list skip = STRING_LIST_INIT_NODUP;
+ struct strbuf err = STRBUF_INIT;
int ret;
string_list_insert(&skip, oldname);
- ret = is_refname_available(newname, &skip, get_packed_refs(&ref_cache))
- && is_refname_available(newname, &skip, get_loose_refs(&ref_cache));
+ ret = !verify_refname_available(newname, NULL, &skip,
+ get_packed_refs(&ref_cache), &err)
+ && !verify_refname_available(newname, NULL, &skip,
+ get_loose_refs(&ref_cache), &err);
+ if (!ret)
+ error("%s", err.buf);
+
string_list_clear(&skip, 0);
+ strbuf_release(&err);
return ret;
}
-static int write_ref_sha1(struct ref_lock *lock, const unsigned char *sha1,
- const char *logmsg);
+static int write_ref_to_lockfile(struct ref_lock *lock, const unsigned char *sha1);
+static int commit_ref_update(struct ref_lock *lock,
+ const unsigned char *sha1, const char *logmsg);
int rename_ref(const char *oldrefname, const char *newrefname, const char *logmsg)
{
struct stat loginfo;
int log = !lstat(git_path("logs/%s", oldrefname), &loginfo);
const char *symref = NULL;
+ struct strbuf err = STRBUF_INIT;
if (log && S_ISLNK(loginfo.st_mode))
return error("reflog for %s is a symlink", oldrefname);
logmoved = log;
- lock = lock_ref_sha1_basic(newrefname, NULL, NULL, 0, NULL);
+ lock = lock_ref_sha1_basic(newrefname, NULL, NULL, NULL, 0, NULL, &err);
if (!lock) {
- error("unable to lock %s for update", newrefname);
+ error("unable to rename '%s' to '%s': %s", oldrefname, newrefname, err.buf);
+ strbuf_release(&err);
goto rollback;
}
hashcpy(lock->old_sha1, orig_sha1);
- if (write_ref_sha1(lock, orig_sha1, logmsg)) {
+
+ if (write_ref_to_lockfile(lock, orig_sha1) ||
+ commit_ref_update(lock, orig_sha1, logmsg)) {
error("unable to write current sha1 into %s", newrefname);
goto rollback;
}
return 0;
rollback:
- lock = lock_ref_sha1_basic(oldrefname, NULL, NULL, 0, NULL);
+ lock = lock_ref_sha1_basic(oldrefname, NULL, NULL, NULL, 0, NULL, &err);
if (!lock) {
- error("unable to lock %s for rollback", oldrefname);
+ error("unable to lock %s for rollback: %s", oldrefname, err.buf);
+ strbuf_release(&err);
goto rollbacklog;
}
flag = log_all_ref_updates;
log_all_ref_updates = 0;
- if (write_ref_sha1(lock, orig_sha1, NULL))
+ if (write_ref_to_lockfile(lock, orig_sha1) ||
+ commit_ref_update(lock, orig_sha1, NULL))
error("unable to write current sha1 into %s", oldrefname);
log_all_ref_updates = flag;
{
if (close_lock_file(lock->lk))
return -1;
- lock->lock_fd = -1;
return 0;
}
{
if (commit_lock_file(lock->lk))
return -1;
- lock->lock_fd = -1;
return 0;
}
}
/*
- * Write sha1 into the ref specified by the lock. Make sure that errno
- * is sane on error.
+ * Write sha1 into the open lockfile, then close the lockfile. On
+ * errors, rollback the lockfile and set errno to reflect the problem.
*/
-static int write_ref_sha1(struct ref_lock *lock,
- const unsigned char *sha1, const char *logmsg)
+static int write_ref_to_lockfile(struct ref_lock *lock,
+ const unsigned char *sha1)
{
static char term = '\n';
struct object *o;
errno = EINVAL;
return -1;
}
- if (write_in_full(lock->lock_fd, sha1_to_hex(sha1), 40) != 40 ||
- write_in_full(lock->lock_fd, &term, 1) != 1 ||
+ if (write_in_full(lock->lk->fd, sha1_to_hex(sha1), 40) != 40 ||
+ write_in_full(lock->lk->fd, &term, 1) != 1 ||
close_ref(lock) < 0) {
int save_errno = errno;
error("Couldn't write %s", lock->lk->filename.buf);
errno = save_errno;
return -1;
}
+ return 0;
+}
+
+/*
+ * Commit a change to a loose reference that has already been written
+ * to the loose reference lockfile. Also update the reflogs if
+ * necessary, using the specified logmsg (which can be NULL).
+ */
+static int commit_ref_update(struct ref_lock *lock,
+ const unsigned char *sha1, const char *logmsg)
+{
clear_loose_ref_cache(&ref_cache);
if (log_ref_write(lock->ref_name, lock->old_sha1, sha1, logmsg) < 0 ||
(strcmp(lock->ref_name, lock->orig_ref_name) &&
return 0;
}
-static int ref_update_compare(const void *r1, const void *r2)
-{
- const struct ref_update * const *u1 = r1;
- const struct ref_update * const *u2 = r2;
- return strcmp((*u1)->refname, (*u2)->refname);
-}
-
-static int ref_update_reject_duplicates(struct ref_update **updates, int n,
+static int ref_update_reject_duplicates(struct string_list *refnames,
struct strbuf *err)
{
- int i;
+ int i, n = refnames->nr;
assert(err);
for (i = 1; i < n; i++)
- if (!strcmp(updates[i - 1]->refname, updates[i]->refname)) {
+ if (!strcmp(refnames->items[i - 1].string, refnames->items[i].string)) {
strbuf_addf(err,
"Multiple updates for ref '%s' not allowed.",
- updates[i]->refname);
+ refnames->items[i].string);
return 1;
}
return 0;
struct ref_update **updates = transaction->updates;
struct string_list refs_to_delete = STRING_LIST_INIT_NODUP;
struct string_list_item *ref_to_delete;
+ struct string_list affected_refnames = STRING_LIST_INIT_NODUP;
assert(err);
return 0;
}
- /* Copy, sort, and reject duplicate refs */
- qsort(updates, n, sizeof(*updates), ref_update_compare);
- if (ref_update_reject_duplicates(updates, n, err)) {
+ /* Fail if a refname appears more than once in the transaction: */
+ for (i = 0; i < n; i++)
+ string_list_append(&affected_refnames, updates[i]->refname);
+ string_list_sort(&affected_refnames);
+ if (ref_update_reject_duplicates(&affected_refnames, err)) {
ret = TRANSACTION_GENERIC_ERROR;
goto cleanup;
}
- /* Acquire all locks while verifying old values */
+ /*
+ * Acquire all locks, verify old values if provided, check
+ * that new values are valid, and write new values to the
+ * lockfiles, ready to be activated. Only keep one lockfile
+ * open at a time to avoid running out of file descriptors.
+ */
for (i = 0; i < n; i++) {
struct ref_update *update = updates[i];
- unsigned int flags = update->flags;
- if ((flags & REF_HAVE_NEW) && is_null_sha1(update->new_sha1))
- flags |= REF_DELETING;
+ if ((update->flags & REF_HAVE_NEW) &&
+ is_null_sha1(update->new_sha1))
+ update->flags |= REF_DELETING;
update->lock = lock_ref_sha1_basic(
update->refname,
((update->flags & REF_HAVE_OLD) ?
update->old_sha1 : NULL),
- NULL,
- flags,
- &update->type);
+ &affected_refnames, NULL,
+ update->flags,
+ &update->type,
+ err);
if (!update->lock) {
+ char *reason;
+
ret = (errno == ENOTDIR)
? TRANSACTION_NAME_CONFLICT
: TRANSACTION_GENERIC_ERROR;
- strbuf_addf(err, "Cannot lock the ref '%s'.",
- update->refname);
+ reason = strbuf_detach(err, NULL);
+ strbuf_addf(err, "Cannot lock ref '%s': %s",
+ update->refname, reason);
+ free(reason);
goto cleanup;
}
- }
-
- /* Perform updates first so live commits remain referenced */
- for (i = 0; i < n; i++) {
- struct ref_update *update = updates[i];
- int flags = update->flags;
-
- if ((flags & REF_HAVE_NEW) && !is_null_sha1(update->new_sha1)) {
+ if ((update->flags & REF_HAVE_NEW) &&
+ !(update->flags & REF_DELETING)) {
int overwriting_symref = ((update->type & REF_ISSYMREF) &&
(update->flags & REF_NODEREF));
- if (!overwriting_symref
- && !hashcmp(update->lock->old_sha1, update->new_sha1)) {
+ if (!overwriting_symref &&
+ !hashcmp(update->lock->old_sha1, update->new_sha1)) {
/*
* The reference already has the desired
* value, so we don't need to write it.
*/
- unlock_ref(update->lock);
+ } else if (write_ref_to_lockfile(update->lock,
+ update->new_sha1)) {
+ /*
+ * The lock was freed upon failure of
+ * write_ref_to_lockfile():
+ */
update->lock = NULL;
- } else if (write_ref_sha1(update->lock, update->new_sha1,
- update->msg)) {
- update->lock = NULL; /* freed by write_ref_sha1 */
strbuf_addf(err, "Cannot update the ref '%s'.",
update->refname);
ret = TRANSACTION_GENERIC_ERROR;
goto cleanup;
} else {
- /* freed by write_ref_sha1(): */
+ update->flags |= REF_NEEDS_COMMIT;
+ }
+ }
+ if (!(update->flags & REF_NEEDS_COMMIT)) {
+ /*
+ * We didn't have to write anything to the lockfile.
+ * Close it to free up the file descriptor:
+ */
+ if (close_ref(update->lock)) {
+ strbuf_addf(err, "Couldn't close %s.lock",
+ update->refname);
+ goto cleanup;
+ }
+ }
+ }
+
+ /* Perform updates first so live commits remain referenced */
+ for (i = 0; i < n; i++) {
+ struct ref_update *update = updates[i];
+
+ if (update->flags & REF_NEEDS_COMMIT) {
+ if (commit_ref_update(update->lock,
+ update->new_sha1, update->msg)) {
+ /* freed by commit_ref_update(): */
+ update->lock = NULL;
+ strbuf_addf(err, "Cannot update the ref '%s'.",
+ update->refname);
+ ret = TRANSACTION_GENERIC_ERROR;
+ goto cleanup;
+ } else {
+ /* freed by commit_ref_update(): */
update->lock = NULL;
}
}
/* Perform deletes now that updates are safely completed */
for (i = 0; i < n; i++) {
struct ref_update *update = updates[i];
- int flags = update->flags;
- if ((flags & REF_HAVE_NEW) && is_null_sha1(update->new_sha1)) {
+ if (update->flags & REF_DELETING) {
if (delete_ref_loose(update->lock, update->type, err)) {
ret = TRANSACTION_GENERIC_ERROR;
goto cleanup;
}
- if (!(flags & REF_ISPRUNING))
+ if (!(update->flags & REF_ISPRUNING))
string_list_append(&refs_to_delete,
update->lock->ref_name);
}
if (updates[i]->lock)
unlock_ref(updates[i]->lock);
string_list_clear(&refs_to_delete, 0);
+ string_list_clear(&affected_refnames, 0);
return ret;
}
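Because all locks are taken and written in the first pass (with at most one lockfile kept open for writing at a time) and only committed in the second, a transaction touching more refs than the process's file-descriptor limit can still succeed. A hedged command-line sketch of such a transaction (the ref names and count are arbitrary):

    for i in $(seq 1000)
    do
            echo "create refs/heads/topic-$i HEAD"
    done >input
    git update-ref --stdin <input    # one transaction over all 1000 refs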
char *log_file;
int status = 0;
int type;
+ struct strbuf err = STRBUF_INIT;
memset(&cb, 0, sizeof(cb));
cb.flags = flags;
* reference itself, plus we might need to update the
* reference if --updateref was specified:
*/
- lock = lock_ref_sha1_basic(refname, sha1, NULL, 0, &type);
- if (!lock)
- return error("cannot lock ref '%s'", refname);
+ lock = lock_ref_sha1_basic(refname, sha1, NULL, NULL, 0, &type, &err);
+ if (!lock) {
+ error("cannot lock ref '%s': %s", refname, err.buf);
+ strbuf_release(&err);
+ return -1;
+ }
if (!reflog_exists(refname)) {
unlock_ref(lock);
return 0;
status |= error("couldn't write %s: %s", log_file,
strerror(errno));
} else if (update &&
- (write_in_full(lock->lock_fd,
+ (write_in_full(lock->lk->fd,
sha1_to_hex(cb.last_kept_sha1), 40) != 40 ||
- write_str_in_full(lock->lock_fd, "\n") != 1 ||
+ write_str_in_full(lock->lk->fd, "\n") != 1 ||
close_ref(lock) < 0)) {
status |= error("couldn't write %s",
lock->lk->filename.buf);
if (arg[2] == '\0') /* ":/" is root dir, always exists */
return 1;
name = arg + 2;
- } else if (prefix)
+ } else if (!no_wildcard(arg))
+ return 1;
+ else if (prefix)
name = prefix_filename(prefix, strlen(prefix), arg);
else
name = arg;
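This hunk loosens the heuristic that decides whether a non-revision argument is a pathspec: anything containing wildcard characters is now taken as a pathspec even when no file of that literal name exists in the working tree, so the explicit "--" separator can often be dropped. A hedged usage sketch (the searched identifier is made up):

    # formerly this required: git grep some_symbol -- "*.c"
    git grep some_symbol "*.c"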
return git_inflate(stream, 0);
}
+static int unpack_sha1_header_to_strbuf(git_zstream *stream, unsigned char *map,
+ unsigned long mapsize, void *buffer,
+ unsigned long bufsiz, struct strbuf *header)
+{
+ int status;
+
+ status = unpack_sha1_header(stream, map, mapsize, buffer, bufsiz);
+
+ /*
+ * Check if the entire header is unpacked in the first iteration.
+ */
+ if (memchr(buffer, '\0', stream->next_out - (unsigned char *)buffer))
+ return 0;
+
+ /*
+ * buffer[0..bufsiz] was not large enough. Copy the partial
+ * result out to header, and then append the result of further
+ * reading the stream.
+ */
+ strbuf_add(header, buffer, stream->next_out - (unsigned char *)buffer);
+ stream->next_out = buffer;
+ stream->avail_out = bufsiz;
+
+ do {
+ status = git_inflate(stream, 0);
+ strbuf_add(header, buffer, stream->next_out - (unsigned char *)buffer);
+ if (memchr(buffer, '\0', stream->next_out - (unsigned char *)buffer))
+ return 0;
+ stream->next_out = buffer;
+ stream->avail_out = bufsiz;
+ } while (status != Z_STREAM_END);
+ return -1;
+}
+
static void *unpack_sha1_rest(git_zstream *stream, void *buffer, unsigned long size, const unsigned char *sha1)
{
int bytes = strlen(buffer) + 1;
* too permissive for what we want to check. So do an anal
* object header parse by hand.
*/
-int parse_sha1_header(const char *hdr, unsigned long *sizep)
+static int parse_sha1_header_extended(const char *hdr, struct object_info *oi,
+ unsigned int flags)
{
- char type[10];
- int i;
+ const char *type_buf = hdr;
unsigned long size;
+ int type, type_len = 0;
/*
- * The type can be at most ten bytes (including the
- * terminating '\0' that we add), and is followed by
+ * The type can be of any size but is followed by
* a space.
*/
- i = 0;
for (;;) {
char c = *hdr++;
if (c == ' ')
break;
- type[i++] = c;
- if (i >= sizeof(type))
- return -1;
+ type_len++;
}
- type[i] = 0;
+
+ type = type_from_string_gently(type_buf, type_len, 1);
+ if (oi->typename)
+ strbuf_add(oi->typename, type_buf, type_len);
+ /*
+ * Set type to 0 if it's an unknown object and
+ * we're obtaining the type using the '--allow-unknown-type'
+ * option.
+ */
+ if ((flags & LOOKUP_UNKNOWN_OBJECT) && (type < 0))
+ type = 0;
+ else if (type < 0)
+ die("invalid object type");
+ if (oi->typep)
+ *oi->typep = type;
/*
* The length must follow immediately, and be in canonical
size = size * 10 + c;
}
}
- *sizep = size;
+
+ if (oi->sizep)
+ *oi->sizep = size;
/*
* The length must be followed by a zero byte
*/
- return *hdr ? -1 : type_from_string(type);
+ return *hdr ? -1 : type;
+}
+
+int parse_sha1_header(const char *hdr, unsigned long *sizep)
+{
+ struct object_info oi;
+
+ oi.sizep = sizep;
+ oi.typename = NULL;
+ oi.typep = NULL;
+ return parse_sha1_header_extended(hdr, &oi, LOOKUP_REPLACE_OBJECT);
}
static void *unpack_sha1_file(void *map, unsigned long mapsize, enum object_type *type, unsigned long *size, const unsigned char *sha1)
}
static int sha1_loose_object_info(const unsigned char *sha1,
- struct object_info *oi)
+ struct object_info *oi,
+ int flags)
{
- int status;
- unsigned long mapsize, size;
+ int status = 0;
+ unsigned long mapsize;
void *map;
git_zstream stream;
char hdr[32];
+ struct strbuf hdrbuf = STRBUF_INIT;
if (oi->delta_base_sha1)
hashclr(oi->delta_base_sha1);
* return value implicitly indicates whether the
* object even exists.
*/
- if (!oi->typep && !oi->sizep) {
+ if (!oi->typep && !oi->typename && !oi->sizep) {
struct stat st;
if (stat_sha1_file(sha1, &st) < 0)
return -1;
return -1;
if (oi->disk_sizep)
*oi->disk_sizep = mapsize;
- if (unpack_sha1_header(&stream, map, mapsize, hdr, sizeof(hdr)) < 0)
+ if ((flags & LOOKUP_UNKNOWN_OBJECT)) {
+ if (unpack_sha1_header_to_strbuf(&stream, map, mapsize, hdr, sizeof(hdr), &hdrbuf) < 0)
+ status = error("unable to unpack %s header with --allow-unknown-type",
+ sha1_to_hex(sha1));
+ } else if (unpack_sha1_header(&stream, map, mapsize, hdr, sizeof(hdr)) < 0)
status = error("unable to unpack %s header",
sha1_to_hex(sha1));
- else if ((status = parse_sha1_header(hdr, &size)) < 0)
+ if (status < 0)
+ ; /* Do nothing */
+ else if (hdrbuf.len) {
+ if ((status = parse_sha1_header_extended(hdrbuf.buf, oi, flags)) < 0)
+ status = error("unable to parse %s header with --allow-unknown-type",
+ sha1_to_hex(sha1));
+ } else if ((status = parse_sha1_header_extended(hdr, oi, flags)) < 0)
status = error("unable to parse %s header", sha1_to_hex(sha1));
- else if (oi->sizep)
- *oi->sizep = size;
git_inflate_end(&stream);
munmap(map, mapsize);
- if (oi->typep)
+ if (status && oi->typep)
*oi->typep = status;
+ strbuf_release(&hdrbuf);
return 0;
}
struct cached_object *co;
struct pack_entry e;
int rtype;
+ enum object_type real_type;
const unsigned char *real = lookup_replace_object_extended(sha1, flags);
co = find_cached_object(real);
*(oi->disk_sizep) = 0;
if (oi->delta_base_sha1)
hashclr(oi->delta_base_sha1);
+ if (oi->typename)
+ strbuf_addstr(oi->typename, typename(co->type));
oi->whence = OI_CACHED;
return 0;
}
if (!find_pack_entry(real, &e)) {
/* Most likely it's a loose object. */
- if (!sha1_loose_object_info(real, oi)) {
+ if (!sha1_loose_object_info(real, oi, flags)) {
oi->whence = OI_LOOSE;
return 0;
}
return -1;
}
+ /*
+ * packed_object_info() does not follow the delta chain to
+ * find out the real type, unless it is given oi->typep.
+ */
+ if (oi->typename && !oi->typep)
+ oi->typep = &real_type;
+
rtype = packed_object_info(e.p, e.offset, oi);
if (rtype < 0) {
mark_bad_packed_object(e.p, real);
+ if (oi->typep == &real_type)
+ oi->typep = NULL;
return sha1_object_info_extended(real, oi, 0);
} else if (in_delta_base_cache(e.p, e.offset)) {
oi->whence = OI_DBCACHED;
oi->u.packed.is_delta = (rtype == OBJ_REF_DELTA ||
rtype == OBJ_OFS_DELTA);
}
+ if (oi->typename)
+ strbuf_addstr(oi->typename, typename(*oi->typep));
+ if (oi->typep == &real_type)
+ oi->typep = NULL;
return 0;
}
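Together, these sha1_file.c changes let a loose object whose header carries an unknown type name be inspected when the caller passes LOOKUP_UNKNOWN_OBJECT. A hedged sketch of the user-facing side, mirroring the tests below (the object content and type name are arbitrary):

    oid=$(echo frotz | git hash-object -t bogus --literally -w --stdin)
    git cat-file -t --allow-unknown-type "$oid"    # reports "bogus"
    git cat-file -s --allow-unknown-type "$oid"    # reports the object size
    git cat-file -t "$oid"                         # still fails without the option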
# Copyright (c) 2008 Clemens Buchacher <drizzd@aon.at>
#
+if test -n "$NO_CURL"
+then
+ skip_all='skipping test, git built without http support'
+ test_done
+fi
+
+if test -n "$NO_EXPAT" && test -n "$LIB_HTTPD_DAV"
+then
+ skip_all='skipping test, git built without expat support'
+ test_done
+fi
+
test_tristate GIT_TEST_HTTPD
if test "$GIT_TEST_HTTPD" = false
then
test_cmp err.expect err
'
+test_expect_success 'info/exclude trumps core.excludesfile' '
+ echo >>global-excludes usually-ignored &&
+ echo >>.git/info/exclude "!usually-ignored" &&
+ >usually-ignored &&
+ echo "?? usually-ignored" >expect &&
+
+ git status --porcelain usually-ignored >actual &&
+ test_cmp expect actual
+'
+
test_done
test_cmp expect actual
'
+ test_expect_success "Type of $type is correct using --allow-unknown-type" '
+ echo $type >expect &&
+ git cat-file -t --allow-unknown-type $sha1 >actual &&
+ test_cmp expect actual
+ '
+
+ test_expect_success "Size of $type is correct using --allow-unknown-type" '
+ echo $size >expect &&
+ git cat-file -s --allow-unknown-type $sha1 >actual &&
+ test_cmp expect actual
+ '
+
test -z "$content" ||
test_expect_success "Content of $type is correct" '
maybe_remove_timestamp "$content" $no_ts >expect &&
}
'
+bogus_type="bogus"
+bogus_content="bogus"
+bogus_size=$(strlen "$bogus_content")
+bogus_sha1=$(echo_without_newline "$bogus_content" | git hash-object -t $bogus_type --literally -w --stdin)
+
+test_expect_success "Type of broken object is correct" '
+ echo $bogus_type >expect &&
+ git cat-file -t --allow-unknown-type $bogus_sha1 >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success "Size of broken object is correct" '
+ echo $bogus_size >expect &&
+ git cat-file -s --allow-unknown-type $bogus_sha1 >actual &&
+ test_cmp expect actual
+'
+bogus_type="abcdefghijklmnopqrstuvwxyz1234679"
+bogus_content="bogus"
+bogus_size=$(strlen "$bogus_content")
+bogus_sha1=$(echo_without_newline "$bogus_content" | git hash-object -t $bogus_type --literally -w --stdin)
+
+test_expect_success "Type of broken object is correct when type is large" '
+ echo $bogus_type >expect &&
+ git cat-file -t --allow-unknown-type $bogus_sha1 >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success "Size of large broken object is correct when type is large" '
+ echo $bogus_size >expect &&
+ git cat-file -s --allow-unknown-type $bogus_sha1 >actual &&
+ test_cmp expect actual
+'
+
test_done
test_expect_success 'stdin update ref fails with wrong old value' '
echo "update $c $m $m~1" >stdin &&
test_must_fail git update-ref --stdin <stdin 2>err &&
- grep "fatal: Cannot lock the ref '"'"'$c'"'"'" err &&
+ grep "fatal: Cannot lock ref '"'"'$c'"'"'" err &&
test_must_fail git rev-parse --verify -q $c
'
test_expect_success 'stdin delete ref fails with wrong old value' '
echo "delete $a $m~1" >stdin &&
test_must_fail git update-ref --stdin <stdin 2>err &&
- grep "fatal: Cannot lock the ref '"'"'$a'"'"'" err &&
+ grep "fatal: Cannot lock ref '"'"'$a'"'"'" err &&
git rev-parse $m >expect &&
git rev-parse $a >actual &&
test_cmp expect actual
update $c ''
EOF
test_must_fail git update-ref --stdin <stdin 2>err &&
- grep "fatal: Cannot lock the ref '"'"'$c'"'"'" err &&
+ grep "fatal: Cannot lock ref '"'"'$c'"'"'" err &&
git rev-parse $m >expect &&
git rev-parse $a >actual &&
test_cmp expect actual &&
test_expect_success 'stdin -z update ref fails with wrong old value' '
printf $F "update $c" "$m" "$m~1" >stdin &&
test_must_fail git update-ref -z --stdin <stdin 2>err &&
- grep "fatal: Cannot lock the ref '"'"'$c'"'"'" err &&
+ grep "fatal: Cannot lock ref '"'"'$c'"'"'" err &&
test_must_fail git rev-parse --verify -q $c
'
git rev-parse "$c" >expect &&
printf $F "create $c" "$m~1" >stdin &&
test_must_fail git update-ref -z --stdin <stdin 2>err &&
- grep "fatal: Cannot lock the ref '"'"'$c'"'"'" err &&
+ grep "fatal: Cannot lock ref '"'"'$c'"'"'" err &&
git rev-parse "$c" >actual &&
test_cmp expect actual
'
test_expect_success 'stdin -z delete ref fails with wrong old value' '
printf $F "delete $a" "$m~1" >stdin &&
test_must_fail git update-ref -z --stdin <stdin 2>err &&
- grep "fatal: Cannot lock the ref '"'"'$a'"'"'" err &&
+ grep "fatal: Cannot lock ref '"'"'$a'"'"'" err &&
git rev-parse $m >expect &&
git rev-parse $a >actual &&
test_cmp expect actual
git update-ref $c $m &&
printf $F "update $a" "$m" "$m" "update $b" "$m" "$m" "update $c" "$m" "$Z" >stdin &&
test_must_fail git update-ref -z --stdin <stdin 2>err &&
- grep "fatal: Cannot lock the ref '"'"'$c'"'"'" err &&
+ grep "fatal: Cannot lock ref '"'"'$c'"'"'" err &&
git rev-parse $m >expect &&
git rev-parse $a >actual &&
test_cmp expect actual &&
test_must_fail git rev-parse --verify -q $c
'
+run_with_limited_open_files () {
+ (ulimit -n 32 && "$@")
+}
+
+test_lazy_prereq ULIMIT_FILE_DESCRIPTORS 'run_with_limited_open_files true'
+
+test_expect_success ULIMIT_FILE_DESCRIPTORS 'large transaction creating branches does not burst open file limit' '
+(
+ for i in $(test_seq 33)
+ do
+ echo "create refs/heads/$i HEAD"
+ done >large_input &&
+ run_with_limited_open_files git update-ref --stdin <large_input &&
+ git rev-parse --verify -q refs/heads/33
+)
+'
+
+test_expect_success ULIMIT_FILE_DESCRIPTORS 'large transaction deleting branches does not burst open file limit' '
+(
+ for i in $(test_seq 33)
+ do
+ echo "delete refs/heads/$i HEAD"
+ done >large_input &&
+ run_with_limited_open_files git update-ref --stdin <large_input &&
+ test_must_fail git rev-parse --verify -q refs/heads/33
+)
+'
+
test_done
--- /dev/null
+#!/bin/sh
+
+test_description='Test git update-ref with D/F conflicts'
+. ./test-lib.sh
+
+test_update_rejected () {
+ prefix="$1" &&
+ before="$2" &&
+ pack="$3" &&
+ create="$4" &&
+ error="$5" &&
+ printf "create $prefix/%s $C\n" $before |
+ git update-ref --stdin &&
+ git for-each-ref $prefix >unchanged &&
+ if $pack
+ then
+ git pack-refs --all
+ fi &&
+ printf "create $prefix/%s $C\n" $create >input &&
+ test_must_fail git update-ref --stdin <input 2>output.err &&
+ grep -F "$error" output.err &&
+ git for-each-ref $prefix >actual &&
+ test_cmp unchanged actual
+}
+
+Q="'"
+
+test_expect_success 'setup' '
+
+ git commit --allow-empty -m Initial &&
+ C=$(git rev-parse HEAD)
+
+'
+
+test_expect_success 'existing loose ref is a simple prefix of new' '
+
+ prefix=refs/1l &&
+ test_update_rejected $prefix "a c e" false "b c/x d" \
+ "$Q$prefix/c$Q exists; cannot create $Q$prefix/c/x$Q"
+
+'
+
+test_expect_success 'existing packed ref is a simple prefix of new' '
+
+ prefix=refs/1p &&
+ test_update_rejected $prefix "a c e" true "b c/x d" \
+ "$Q$prefix/c$Q exists; cannot create $Q$prefix/c/x$Q"
+
+'
+
+test_expect_success 'existing loose ref is a deeper prefix of new' '
+
+ prefix=refs/2l &&
+ test_update_rejected $prefix "a c e" false "b c/x/y d" \
+ "$Q$prefix/c$Q exists; cannot create $Q$prefix/c/x/y$Q"
+
+'
+
+test_expect_success 'existing packed ref is a deeper prefix of new' '
+
+ prefix=refs/2p &&
+ test_update_rejected $prefix "a c e" true "b c/x/y d" \
+ "$Q$prefix/c$Q exists; cannot create $Q$prefix/c/x/y$Q"
+
+'
+
+test_expect_success 'new ref is a simple prefix of existing loose' '
+
+ prefix=refs/3l &&
+ test_update_rejected $prefix "a c/x e" false "b c d" \
+ "$Q$prefix/c/x$Q exists; cannot create $Q$prefix/c$Q"
+
+'
+
+test_expect_success 'new ref is a simple prefix of existing packed' '
+
+ prefix=refs/3p &&
+ test_update_rejected $prefix "a c/x e" true "b c d" \
+ "$Q$prefix/c/x$Q exists; cannot create $Q$prefix/c$Q"
+
+'
+
+test_expect_success 'new ref is a deeper prefix of existing loose' '
+
+ prefix=refs/4l &&
+ test_update_rejected $prefix "a c/x/y e" false "b c d" \
+ "$Q$prefix/c/x/y$Q exists; cannot create $Q$prefix/c$Q"
+
+'
+
+test_expect_success 'new ref is a deeper prefix of existing packed' '
+
+ prefix=refs/4p &&
+ test_update_rejected $prefix "a c/x/y e" true "b c d" \
+ "$Q$prefix/c/x/y$Q exists; cannot create $Q$prefix/c$Q"
+
+'
+
+test_expect_success 'one new ref is a simple prefix of another' '
+
+ prefix=refs/5 &&
+ test_update_rejected $prefix "a e" false "b c c/x d" \
+ "cannot process $Q$prefix/c$Q and $Q$prefix/c/x$Q at the same time"
+
+'
+
+test_done
. ./test-lib.sh
test_expect_success 'intent to add' '
+ test_commit 1 &&
+ git rm 1.t &&
+ echo hello >1.t &&
echo hello >file &&
echo hello >elif &&
git add -N file &&
- git add elif
+ git add elif &&
+ git add -N 1.t
+'
+
+test_expect_success 'git status' '
+ git status --porcelain | grep -v actual >actual &&
+ cat >expect <<-\EOF &&
+ DA 1.t
+ A elif
+ A file
+ EOF
+ test_cmp expect actual
'
test_expect_success 'check result of "add -N"' '
git add -N nitfol &&
git commit -m second &&
test $(git ls-tree HEAD -- nitfol | wc -l) = 0 &&
- test $(git diff --name-only HEAD -- nitfol | wc -l) = 1
+ test $(git diff --name-only HEAD -- nitfol | wc -l) = 0 &&
+ test $(git diff --name-only -- nitfol | wc -l) = 1
'
test_expect_success 'can commit with an unrelated i-t-a entry in index' '
: >dir/bar &&
git add -N dir/bar &&
git diff --cached --name-only >actual &&
- echo dir/bar >expect &&
+ >expect &&
test_cmp expect actual &&
git write-tree >/dev/null &&
git diff --cached --name-only >actual &&
- echo dir/bar >expect &&
+ >expect &&
test_cmp expect actual
'
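The adjusted expectations reflect the new placement of intent-to-add entries: they now appear as new files in the worktree diff rather than in the diff against the index or HEAD. A hedged sketch (the file name is made up):

    echo hello >newfile
    git add -N newfile
    git diff --cached --name-only    # no longer lists "newfile"
    git diff --name-only             # "newfile" shows up here instead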
--- /dev/null
+#!/bin/sh
+
+test_description='"git merge" top-level frontend'
+
+. ./test-lib.sh
+
+t3033_reset () {
+ git checkout -B master two &&
+ git branch -f left three &&
+ git branch -f right four
+}
+
+test_expect_success setup '
+ test_commit one &&
+ git branch left &&
+ git branch right &&
+ test_commit two &&
+ git checkout left &&
+ test_commit three &&
+ git checkout right &&
+ test_commit four &&
+ git checkout master
+'
+
+# Local branches
+
+test_expect_success 'merge an octopus into void' '
+ t3033_reset &&
+ git checkout --orphan test &&
+ git rm -fr . &&
+ test_must_fail git merge left right &&
+ test_must_fail git rev-parse --verify HEAD &&
+ git diff --quiet &&
+ test_must_fail git rev-parse HEAD
+'
+
+test_expect_success 'merge an octopus, fast-forward (ff)' '
+ t3033_reset &&
+ git reset --hard one &&
+ git merge left right &&
+ # one is ancestor of three (left) and four (right)
+ test_must_fail git rev-parse --verify HEAD^3 &&
+ git rev-parse HEAD^1 HEAD^2 | sort >actual &&
+ git rev-parse three four | sort >expect &&
+ test_cmp expect actual
+'
+
+test_expect_success 'merge octopus, non-fast-forward (ff)' '
+ t3033_reset &&
+ git reset --hard one &&
+ git merge --no-ff left right &&
+ # one is ancestor of three (left) and four (right)
+ test_must_fail git rev-parse --verify HEAD^4 &&
+ git rev-parse HEAD^1 HEAD^2 HEAD^3 | sort >actual &&
+ git rev-parse one three four | sort >expect &&
+ test_cmp expect actual
+'
+
+test_expect_success 'merge octopus, fast-forward (does not ff)' '
+ t3033_reset &&
+ git merge left right &&
+ # two (master) is not an ancestor of three (left) and four (right)
+ test_must_fail git rev-parse --verify HEAD^4 &&
+ git rev-parse HEAD^1 HEAD^2 HEAD^3 | sort >actual &&
+ git rev-parse two three four | sort >expect &&
+ test_cmp expect actual
+'
+
+test_expect_success 'merge octopus, non-fast-forward' '
+ t3033_reset &&
+ git merge --no-ff left right &&
+ test_must_fail git rev-parse --verify HEAD^4 &&
+ git rev-parse HEAD^1 HEAD^2 HEAD^3 | sort >actual &&
+ git rev-parse two three four | sort >expect &&
+ test_cmp expect actual
+'
+
+# The same set with FETCH_HEAD
+
+test_expect_success 'merge FETCH_HEAD octopus into void' '
+ t3033_reset &&
+ git checkout --orphan test &&
+ git rm -fr . &&
+ git fetch . left right &&
+ test_must_fail git merge FETCH_HEAD &&
+ test_must_fail git rev-parse --verify HEAD &&
+ git diff --quiet &&
+ test_must_fail git rev-parse HEAD
+'
+
+test_expect_success 'merge FETCH_HEAD octopus fast-forward (ff)' '
+ t3033_reset &&
+ git reset --hard one &&
+ git fetch . left right &&
+ git merge FETCH_HEAD &&
+ # one is ancestor of three (left) and four (right)
+ test_must_fail git rev-parse --verify HEAD^3 &&
+ git rev-parse HEAD^1 HEAD^2 | sort >actual &&
+ git rev-parse three four | sort >expect &&
+ test_cmp expect actual
+'
+
+test_expect_success 'merge FETCH_HEAD octopus non-fast-forward (ff)' '
+ t3033_reset &&
+ git reset --hard one &&
+ git fetch . left right &&
+ git merge --no-ff FETCH_HEAD &&
+ # one is ancestor of three (left) and four (right)
+ test_must_fail git rev-parse --verify HEAD^4 &&
+ git rev-parse HEAD^1 HEAD^2 HEAD^3 | sort >actual &&
+ git rev-parse one three four | sort >expect &&
+ test_cmp expect actual
+'
+
+test_expect_success 'merge FETCH_HEAD octopus fast-forward (does not ff)' '
+ t3033_reset &&
+ git fetch . left right &&
+ git merge FETCH_HEAD &&
+ # two (master) is not an ancestor of three (left) and four (right)
+ test_must_fail git rev-parse --verify HEAD^4 &&
+ git rev-parse HEAD^1 HEAD^2 HEAD^3 | sort >actual &&
+ git rev-parse two three four | sort >expect &&
+ test_cmp expect actual
+'
+
+test_expect_success 'merge FETCH_HEAD octopus non-fast-forward' '
+ t3033_reset &&
+ git fetch . left right &&
+ git merge --no-ff FETCH_HEAD &&
+ test_must_fail git rev-parse --verify HEAD^4 &&
+ git rev-parse HEAD^1 HEAD^2 HEAD^3 | sort >actual &&
+ git rev-parse two three four | sort >expect &&
+ test_cmp expect actual
+'
+
+test_done
'
test_expect_success 'reference merge' '
- git merge -s recursive "reference merge" HEAD master
+ git merge -s recursive -m "reference merge" master
'
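The conversions in these tests drop the historical "git merge <msg> HEAD <commit>..." spelling in favor of -m. A hedged equivalent (the branch name is made up):

    # deprecated:  git merge "merge topic" HEAD topic
    # preferred:
    git merge -m "merge topic" topic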
PRE_REBASE=$(git rev-parse test-rebase)
grep "^# Rebase ..* onto ..* ([0-9]" actual
'
+test_expect_success 'rebase -i commits that overwrite untracked files (pick)' '
+ git checkout --force branch2 &&
+ git clean -f &&
+ set_fake_editor &&
+ FAKE_LINES="edit 1 2" git rebase -i A &&
+ test_cmp_rev HEAD F &&
+ test_path_is_missing file6 &&
+ >file6 &&
+ test_must_fail git rebase --continue &&
+ test_cmp_rev HEAD F &&
+ rm file6 &&
+ git rebase --continue &&
+ test_cmp_rev HEAD I
+'
+
+test_expect_success 'rebase -i commits that overwrite untracked files (squash)' '
+ git checkout --force branch2 &&
+ git clean -f &&
+ git tag original-branch2 &&
+ set_fake_editor &&
+ FAKE_LINES="edit 1 squash 2" git rebase -i A &&
+ test_cmp_rev HEAD F &&
+ test_path_is_missing file6 &&
+ >file6 &&
+ test_must_fail git rebase --continue &&
+ test_cmp_rev HEAD F &&
+ rm file6 &&
+ git rebase --continue &&
+ test $(git cat-file commit HEAD | sed -ne \$p) = I &&
+ git reset --hard original-branch2
+'
+
+test_expect_success 'rebase -i commits that overwrite untracked files (no ff)' '
+ git checkout --force branch2 &&
+ git clean -f &&
+ set_fake_editor &&
+ FAKE_LINES="edit 1 2" git rebase -i --no-ff A &&
+ test $(git cat-file commit HEAD | sed -ne \$p) = F &&
+ test_path_is_missing file6 &&
+ >file6 &&
+ test_must_fail git rebase --continue &&
+ test $(git cat-file commit HEAD | sed -ne \$p) = F &&
+ rm file6 &&
+ git rebase --continue &&
+ test $(git cat-file commit HEAD | sed -ne \$p) = I
+'
+
test_done
test_expect_success 'stash some dirty working directory' '
echo 1 > file &&
git add file &&
+ echo unrelated >other-file &&
+ git add other-file &&
test_tick &&
git commit -m initial &&
echo 2 > file &&
test_cmp expect file
'
+test_expect_success 'apply requires a clean index' '
+ test_when_finished "git reset --hard" &&
+ echo changed >other-file &&
+ git add other-file &&
+ test_must_fail git stash apply
+'
+
test_expect_success 'apply does not need clean working directory' '
echo 4 >other-file &&
- git add other-file &&
- echo 5 >other-file &&
git stash apply &&
echo 3 >expect &&
test_cmp expect file
'
test_expect_success 'stash list implies --first-parent -m' '
- cat >expect <<-\EOF &&
- stash@{0}: WIP on master: b27a2bc subdir
+ cat >expect <<-EOF &&
+ stash@{0}
diff --git a/file b/file
index 257cc56..d26b33d 100644
-foo
+working
EOF
- git stash list -p >actual &&
+ git stash list --format=%gd -p >actual &&
test_cmp expect actual
'
test_expect_success 'stash list --cc shows combined diff' '
cat >expect <<-\EOF &&
- stash@{0}: WIP on master: b27a2bc subdir
+ stash@{0}
diff --cc file
index 257cc56,9015a7a..d26b33d
-index
++working
EOF
- git stash list -p --cc >actual &&
+ git stash list --format=%gd -p --cc >actual &&
test_cmp expect actual
'
test_expect_success SYMLINKS 'symlinks do not respect userdiff config by path' '
cat >expect <<-\EOF &&
diff --git a/file.bin b/file.bin
- index e69de29..d95f3ad 100644
- Binary files a/file.bin and b/file.bin differ
+ new file mode 100644
+ index 0000000..d95f3ad
+ Binary files /dev/null and b/file.bin differ
diff --git a/link.bin b/link.bin
- index e69de29..dce41ec 120000
- --- a/link.bin
+ new file mode 120000
+ index 0000000..dce41ec
+ --- /dev/null
+++ b/link.bin
@@ -0,0 +1 @@
+file.bin
Rearranged lines in dir/sub
-commit 59d314ad6f356dd08601a4cd5e530381da3e3c64 (HEAD, refs/heads/master)
+commit 59d314ad6f356dd08601a4cd5e530381da3e3c64 (HEAD -> refs/heads/master)
Merge: 9a6d494 c7a2ab9
Author: A U Thor <author@example.com>
Date: Mon Jun 26 00:04:00 2006 +0000
mv "$2.x" "$2"
}
-D=`pwd`
-
test_expect_success setup '
-
echo file >file &&
git add file &&
git commit -a -m original
-
'
test_expect_success 'pulling into void' '
- mkdir cloned &&
- cd cloned &&
- git init &&
- git pull ..
-'
-
-cd "$D"
-
-test_expect_success 'checking the results' '
+ git init cloned &&
+ (
+ cd cloned &&
+ git pull ..
+ ) &&
test -f file &&
test -f cloned/file &&
test_cmp file cloned/file
'
test_expect_success 'pulling into void using master:master' '
- mkdir cloned-uho &&
+ git init cloned-uho &&
(
cd cloned-uho &&
- git init &&
git pull .. master:master
) &&
test -f file &&
)
'
-
test_expect_success 'pulling into void does not remove new staged files' '
git init cloned-staged-new &&
(
)
'
+test_expect_success 'pulling into void must not create an octopus' '
+ git init cloned-octopus &&
+ (
+ cd cloned-octopus &&
+ test_must_fail git pull .. master master &&
+ ! test -f file
+ )
+'
+
test_expect_success 'test . as a remote' '
git branch copy master &&
test_description='fetch/clone from a shallow clone over http'
. ./test-lib.sh
-
-if test -n "$NO_CURL"; then
- skip_all='skipping test, git built without http support'
- test_done
-fi
-
. "$TEST_DIRECTORY"/lib-httpd.sh
start_httpd
test_description='test smart pushing over http via http-backend'
. ./test-lib.sh
-if test -n "$NO_CURL"; then
- skip_all='skipping test, git built without http support'
- test_done
-fi
-
ROOT_PATH="$PWD"
. "$TEST_DIRECTORY"/lib-gpg.sh
. "$TEST_DIRECTORY"/lib-httpd.sh
test_description='push from/to a shallow clone over http'
. ./test-lib.sh
-
-if test -n "$NO_CURL"; then
- say 'skipping test, git built without http support'
- test_done
-fi
-
. "$TEST_DIRECTORY"/lib-httpd.sh
start_httpd
test_description='test dumb fetching over http via static file'
. ./test-lib.sh
-
-if test -n "$NO_CURL"; then
- skip_all='skipping test, git built without http support'
- test_done
-fi
-
. "$TEST_DIRECTORY"/lib-httpd.sh
start_httpd
test_description='test smart fetching over http via http-backend'
. ./test-lib.sh
-
-if test -n "$NO_CURL"; then
- skip_all='skipping test, git built without http support'
- test_done
-fi
-
. "$TEST_DIRECTORY"/lib-httpd.sh
start_httpd
test_description='test git-http-backend'
. ./test-lib.sh
-
-if test -n "$NO_CURL"; then
- skip_all='skipping test, git built without http support'
- test_done
-fi
-
. "$TEST_DIRECTORY"/lib-httpd.sh
start_httpd
'
}
+copy_ssh_wrapper_as () {
+ cp "$TRASH_DIRECTORY/ssh-wrapper" "$1" &&
+ GIT_SSH="$1" &&
+ export GIT_SSH
+}
+
expect_ssh () {
test_when_finished '
(cd "$TRASH_DIRECTORY" && rm -f ssh-expect && >ssh-output)
test_expect_success 'bracketed hostnames are still ssh' '
git clone "[myhost:123]:src" ssh-bracket-clone &&
- expect_ssh myhost '-p 123' src
+ expect_ssh "-p 123" myhost src
+'
+
+test_expect_success 'uplink is not treated as putty' '
+ copy_ssh_wrapper_as "$TRASH_DIRECTORY/uplink" &&
+ git clone "[myhost:123]:src" ssh-bracket-clone-uplink &&
+ expect_ssh "-p 123" myhost src
+'
+
+test_expect_success 'plink is treated specially (as putty)' '
+ copy_ssh_wrapper_as "$TRASH_DIRECTORY/plink" &&
+ git clone "[myhost:123]:src" ssh-bracket-clone-plink-0 &&
+ expect_ssh "-P 123" myhost src
'
+test_expect_success 'plink.exe is treated specially (as putty)' '
+ copy_ssh_wrapper_as "$TRASH_DIRECTORY/plink.exe" &&
+ git clone "[myhost:123]:src" ssh-bracket-clone-plink-1 &&
+ expect_ssh "-P 123" myhost src
+'
+
+test_expect_success 'tortoiseplink is like putty, with extra arguments' '
+ copy_ssh_wrapper_as "$TRASH_DIRECTORY/tortoiseplink" &&
+ git clone "[myhost:123]:src" ssh-bracket-clone-plink-2 &&
+ expect_ssh "-batch -P 123" myhost src
+'
+
+# Reset the GIT_SSH environment variable for clone tests.
+setup_ssh_wrapper
+
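The new tests pin down how the name of the SSH command selects the option style: plink and plink.exe get PuTTY's "-P <port>" instead of OpenSSH's "-p <port>", and tortoiseplink additionally gets "-batch". A hedged usage sketch (the plink path is hypothetical):

    GIT_SSH=/usr/bin/plink.exe git clone "[myhost:123]:src" dst
    # roughly runs: /usr/bin/plink.exe -P 123 myhost "git-upload-pack 'src'"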
counter=0
# $1 url
# $2 none|host
'
test_expect_success 'Merge with d/f conflicts' '
- test_expect_code 1 git merge "merge msg" B master
+ test_expect_code 1 git merge -m "merge msg" master
'
test_expect_success 'F/D conflict' '
" > file &&
git commit -m "C3" file &&
git branch C3 &&
-git merge "pre E3 merge" B A &&
+git merge -m "pre E3 merge" A &&
echo "1
2
3 changed in E3, branch B. New file size
" > file &&
git commit -m "E3" file &&
git checkout A &&
-git merge "pre D8 merge" A C3 &&
+git merge -m "pre D8 merge" C3 &&
echo "1
2
3 changed in C3, branch B
9" > file &&
git commit -m D8 file'
-test_expect_success 'Criss-cross merge' 'git merge "final merge" A B'
+test_expect_success 'Criss-cross merge' 'git merge -m "final merge" B'
cat > file-expect <<EOF
1
(ulimit -s 128 && "$@")
}
-test_lazy_prereq ULIMIT 'run_with_limited_stack true'
+test_lazy_prereq ULIMIT_STACK_SIZE 'run_with_limited_stack true'
# we require ulimit, this excludes Windows
-test_expect_success ULIMIT '--contains works in a deep repo' '
+test_expect_success ULIMIT_STACK_SIZE '--contains works in a deep repo' '
>expect &&
i=1 &&
while test $i -lt 8000
EOF
test_expect_success !AUTOIDENT 'do not fire editor when committer is bogus' '
- >.git/result
+ >.git/result &&
>expect &&
echo >>negative &&
'
test_expect_success 'merge early [cvswork3] b3 with b1' '
- ( cd gitwork3 && git merge "message" HEAD b1 ) &&
+ ( cd gitwork3 && git merge -m "message" b1 ) &&
git fetch gitwork3 b3:b3 &&
git tag v3merged b3 &&
git push --tags gitcvs.git b3:b3
return 1
}
+test_verify_prereq () {
+ test -z "$test_prereq" ||
+ expr >/dev/null "$test_prereq" : '[A-Z0-9_,!]*$' ||
+ error "bug in the test script: '$test_prereq' does not look like a prereq"
+}
+
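test_verify_prereq() rejects anything that does not look like a comma-separated list of upper-case prerequisite tokens. A hedged illustration of the kind of mistake it now catches (the test is made up):

    # "symlinks" fails the [A-Z0-9_,!]* check, so the suite aborts with
    # "bug in the test script: 'symlinks' does not look like a prereq"
    test_expect_success symlinks 'create a symbolic link' '
            ln -s target link
    '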
test_expect_failure () {
test_start_
test "$#" = 3 && { test_prereq=$1; shift; } || test_prereq=
test "$#" = 2 ||
error "bug in the test script: not 2 or 3 parameters to test-expect-failure"
+ test_verify_prereq
export test_prereq
if ! test_skip "$@"
then
test "$#" = 3 && { test_prereq=$1; shift; } || test_prereq=
test "$#" = 2 ||
error "bug in the test script: not 2 or 3 parameters to test-expect-success"
+ test_verify_prereq
export test_prereq
if ! test_skip "$@"
then
error >&5 "bug in the test script: not 3 or 4 parameters to test_external"
descr="$1"
shift
+ test_verify_prereq
export test_prereq
if ! test_skip "$descr" "$@"
then
test_cleanup=:
expecting_failure=$2
- if test "${GIT_TEST_CHAIN_LINT:-0}" != 0; then
+ if test "${GIT_TEST_CHAIN_LINT:-1}" != 0; then
# 117 is magic because it is unlikely to match the exit
# code of other programs
test_eval_ "(exit 117) && $1"