* whitespace=!indent,trail,space
*.[ch] whitespace=indent,trail,space diff=cpp
-*.sh whitespace=indent,trail,space
+*.sh whitespace=indent,trail,space eol=lf
+*.perl eol=lf
+*.pm eol=lf
+/Documentation/git-*.txt eol=lf
+/command-list.txt eol=lf
+/GIT-VERSION-GEN eol=lf
+/mergetools/* eol=lf
Note however that a comment that explains a translatable string to
translators uses a convention of starting with a magic token
- "TRANSLATORS: " immediately after the opening delimiter, even when
- it spans multiple lines. We do not add an asterisk at the beginning
- of each line, either. E.g.
+ "TRANSLATORS: ", e.g.
- /* TRANSLATORS: here is a comment that explains the string
- to be translated, that follows immediately after it */
+ /*
+ * TRANSLATORS: here is a comment that explains the string to
+ * be translated, that follows immediately after it.
+ */
_("Here is a translatable string explained by the above.");
- Double negation is often harder to understand than no negation
--- /dev/null
+Git v2.13.1 Release Notes
+=========================
+
+Fixes since v2.13
+-----------------
+
+ * The Web interface to gmane news archive is long gone, even though
+ the articles are still accessible via NNTP. Replace the links with
+ ones to public-inbox.org. Because its message identification is
+ based on the actual message-id, it is likely that it will be easier
+ to migrate away from it if/when necessary.
+
+ * Update tests to pass under GETTEXT_POISON (a mechanism to ensure
+ that output strings that should not be translated are not
+ translated by mistake), and tell TravisCI to run them.
+
+ * Setting "log.decorate=false" in the configuration file did not take
+ effect in v2.13, which has been corrected.
+
+ * An earlier update to test 7400 needed to be skipped on CYGWIN.
+
+ * Git sometimes gives advice in the form of rhetorical questions that
+ do not require an answer, which can confuse new users and non-native
+ speakers. These messages have been rephrased.
+
+ * "git read-tree -m" (no tree-ish) gave a nonsense suggestion "use
+ --empty if you want to clear the index". With "-m", such a request
+ will still fail anyway, as you'd need to name at least one tree-ish
+ to be merged.
+
+ * The codepath in "git am" that is used when running "git rebase"
+ leaked memory held for the log message of the commits being rebased.
+
+ * "pack-objects" can stream a slice of an existing packfile out when
+ the pack bitmap can tell that the reachable objects are all needed
+ in the output, without inspecting individual objects. This
+ strategy however would not work well when "--local" and other
+ options are in use, and need to be disabled.
+
+ * Clarify documentation for include.path and includeIf.<condition>.path
+ configuration variables.
+
+ * Tag objects that are not reachable from any ref and that point at
+ missing objects were mishandled by "git gc" and friends (they
+ should silently be ignored instead).
+
+ * A few http:// links that are redirected to https:// in the
+ documentation have been updated to https:// links.
+
+ * Make sure our tests would pass when the sources are checked out
+ with "platform native" line ending convention by default on
+ Windows. Some "text" files our tests use, and the test scripts
+ themselves that are meant to be run with /bin/sh, ought to be
+ checked out with eol=LF even on Windows.
+
+ * Fix memory leaks pointed out by Coverity (and people).
+
+ * The receive-pack program now makes sure that the push certificate
+ records the same set of push options used for pushing.
+
+ * "git cherry-pick" and other uses of the sequencer machinery
+ mishandled a trailer block whose last line is an incomplete line.
+ This has been fixed so that an additional sign-off etc. are added
+ after completing the existing incomplete line.
+
+ * The shell completion script (in contrib/) learned "git stash" has
+ a new "push" subcommand.
+
+Also contains various documentation updates and code clean-ups.
necessary to go beyond the 4GB limit.
(merge 867e40ff3a rs/large-zip later to maint).
+ * "git reset" learned "--recurse-submodules" option.
+
+ * "git diff --submodule=diff" now recurses into nested submodules.
+ (merge 5a5221427c jk/diff-submodule-diff-inline later to maint).
+
+ * "git repack" learned to accept the --threads=<n> option and pass it
+ to pack-objects.
+
+ * "git send-email" learned to run sendemail-validate hook to inspect
+ and reject a message before sending it out.
+ (merge 6489660b4b jt/send-email-validate-hook later to maint).
+
+ * There is no good reason why "git fetch $there $sha1" should fail
+ when the $sha1 names an object at the tip of an advertised ref,
+ even when the other side hasn't enabled allowTipSHA1InWant.
+
+ * The recently introduced "[includeIf "gitdir:$dir"] path=..."
+ mechanism has further been taught to take symlinks into account.
+ The directory "$dir" specified in "gitdir:$dir" may be a symlink to
+ a real location, not something that $(getcwd) may return. In such
+ a case, a realpath of "$dir" is compared with the real path of the
+ current repository to determine if the contents from the named path
+ should be included.
+
Performance, Internal Implementation, Development Support etc.
* We can trigger Windows auto-build tester (credits: Dscho &
Microsoft) from our existing Travis CI tester now.
+ * Conversion from uchar[20] to struct object_id continues.
+
+ * Simplify parse_pathspec() codepath and stop it from looking at the
+ default in-core index.
+ (merge 08de9151a8 bw/pathspec-sans-the-index later to maint).
+
+ * Add perf-test for wildmatch.
+ (merge 62ca75a6b9 ab/perf-wildmatch later to maint).
+
+ * Code from "conversion using external process" codepath has been
+ extracted to a separate sub-process.[ch] module.
+ (merge 4f2a2e9f0e bp/sub-process-convert-filter later to maint).
+
+ * When "git checkout", "git merge", etc. manipulates the in-core
+ index, various pieces of information in the index extensions are
+ discarded from the original state, as it is usually not the case
+ that they are kept up-to-date and in-sync with the operation on the
+ main index. The untracked cache extension is copied across these
+ operations now, which would speed up "git status" (as long as the
+ cache is properly invalidated).
+
+ * The internal implementation of "git grep" has seen some clean-up.
+ (merge 8df4c2953f ab/grep-preparatory-cleanup later to maint).
+
+ * Update the C style recommendation for notes for translators, as
+ recent versions of gettext tools can work with our style of
+ multi-line comments.
+ (merge 66f5f6dca9 ab/c-translators-comment-style later to maint).
+
Also contains various documentation updates and code clean-ups.
not translated by mistake), and TravisCI is told to run them.
(merge b8e188f6f5 ab/fix-poison-tests later to maint).
+ * "git checkout --recurse-submodules" did not quite work with a
+ submodule that itself has submodules.
+ (merge 218c883783 sb/checkout-recurse-submodules later to maint).
+
+ * Plug some leaks and update the internal API used to implement the
+ split index feature to make it easier to avoid such a leak in the
+ future.
+ (merge de950c5773 nd/split-index-unshare later to maint).
+
+ * "pack-objects" can stream a slice of an existing packfile out when
+ the pack bitmap can tell that the reachable objects are all needed
+ in the output, without inspecting individual objects. This
+ strategy however would not work well when "--local" and other
+ options are in use, and need to be disabled.
+ (merge da5a1f8100 jk/disable-pack-reuse-when-broken later to maint).
+
+ * Fix memory leaks pointed out by Coverity (and people).
+ (merge 443a12f37b js/plug-leaks later to maint).
+
+ * "git read-tree -m" (no tree-ish) gave a nonsense suggestion "use
+ --empty if you want to clear the index". With "-m", such a request
+ will still fail anyway, as you'd need to name at least one tree-ish
+ to be merged.
+ (merge b9b10d3681 jc/read-tree-empty-with-m later to maint).
+
+ * Make sure our tests would pass when the sources are checked out
+ with "platform native" line ending convention by default on
+ Windows. Some "text" files our tests use, and the test scripts
+ themselves that are meant to be run with /bin/sh, ought to be
+ checked out with eol=LF even on Windows.
+ (merge 2779f66505 js/eol-on-ourselves later to maint).
+
+ * Introduce the BUG() macro to improve die("BUG: ...").
+ (merge 3d7dd2d3b6 jk/bug-to-abort later to maint).
+
+ * Clarify documentation for include.path and includeIf.<condition>.path
+ configuration variables.
+ (merge ce933ebd5a jk/doc-config-include later to maint).
+
+ * Git sometimes gives advice in the form of rhetorical questions that
+ do not require an answer, which can confuse new users and non-native
+ speakers. These messages have been rephrased.
+ (merge 6963893943 ja/do-not-ask-needless-questions later to maint).
+
+ * A few http:// links that are redirected to https:// in the
+ documentation have been updated to https:// links.
+ (merge 5e68729fd9 jk/update-links-in-docs later to maint).
+
+ * "git for-each-ref --format=..." with %(HEAD) in the format used to
+ resolve the HEAD symref as many times as it had processed refs,
+ which was wasteful, and "git branch" shared the same problem.
+ (merge 613a0e52ea kn/ref-filter-branch-list later to maint).
+
+ * Regression fix to topic recently merged to 'master'.
+ (merge d096d7f1ef pw/rebase-i-regression-fix later to maint).
+
+ * The shell completion script (in contrib/) learned "git stash" has
+ a new "push" subcommand.
+ (merge 3851e4483f tg/stash-push-fixup later to maint).
+
+ * "git interpret-trailers", when used as GIT_EDITOR for "git commit
+ -v", looked for and appended to a trailer block at the very end,
+ i.e. at the end of the "diff" output. The command has been
+ corrected to pay attention to the cut-mark line "commit -v" adds to
+ the buffer---the real trailer block should appear just before it.
+ (merge d76650b8d1 bm/interpret-trailers-cut-line-is-eom later to maint).
+
+ * A test allowed both "git push" and "git receive-pack" on the other
+ end to write their traces into the same file. This is OK on platforms
+ that allow atomically appending to a file opened with O_APPEND,
+ but on other platforms led to mangled output, causing
+ intermittent test failures. This has been fixed by disabling
+ traces from "receive-pack" in the test.
+ (merge 71406ed4d6 jk/alternate-ref-optim later to maint).
+
+ * Tag objects that are not reachable from any ref and that point at
+ missing objects were mishandled by "git gc" and friends (they
+ should silently be ignored instead).
+ (merge a3ba6bf10a jk/ignore-broken-tags-when-ignoring-missing-links later to maint).
+
+ * "git describe --contains" penalized light-weight tags so much that
+ they were almost never considered. Instead, give them about the
+ same chance to be considered as an annotated tag that is the same
+ age as the underlying commit would.
+ (merge ef1e74065c jc/name-rev-lw-tag later to maint).
+
+ * The "run-command" API implementation has been made more robust
+ against dead-locking in a threaded environment.
+ (merge e3f43ce765 bw/forking-and-threading later to maint).
+
+ * A recent update to t5545-push-options.sh started skipping all the
+ tests in the script when web server testing is disabled or
+ unavailable, not just the ones that require a web server. Non-HTTP
+ tests have been salvaged to always run in this script.
+ (merge 2e397e4ddf jc/skip-test-in-the-middle later to maint).
+
+ * "git send-email" now uses Net::SMTP::SSL, which is obsolete, only
+ when needed. Recent versions of Net::SMTP can do TLS natively.
+ (merge 0ead000c3a dk/send-email-avoid-net-smtp-ssl-when-able later to maint).
+
+ * "foo\bar\baz" in "git fetch foo\bar\baz", even though there is no
+ slashes in it, cannot be a nickname for a remote on Windows, as
+ that is likely to be a pathname on a local filesystem.
+ (merge d9244ecf4f js/bs-is-a-dir-sep-on-windows later to maint).
+
+ * "git clean -d" used to clean directories that has ignored files,
+ even though the command should not lose ignored ones without "-x".
+ "git status --ignored" did not list ignored and untracked files
+ without "-uall". These have been corrected.
+ (merge 6b1db43109 sl/clean-d-ignored-fix later to maint).
+
+ * The result from "git diff" that compares two blobs, e.g. "git diff
+ $commit1:$path $commit2:$path", used to be shown with the full
+ object name as given on the command line, but it is more natural to
+ use the $path in the output and use it to look up .gitattributes.
+ (merge 30d005c020 jk/diff-blob later to maint).
+
+ * The "collision detecting" SHA-1 implementation shipped with 2.13
+ was quite broken on some big-endian platforms and/or platforms that
+ do not like unaligned fetches. Update to the upstream code which
+ has already fixed these issues.
+ (merge a0103914c2 ab/sha1dc-maint later to maint).
+
+ * "git am -h" triggered a BUG().
+ (merge f3a2fffe06 jk/unbreak-am-h later to maint).
+
+ * The interaction of "url.*.insteadOf" and custom URL scheme's
+ whitelisting is now documented better.
+ (merge 2c9a2ae285 jk/url-insteadof-config later to maint).
+
* Other minor doc, test and build updates and code cleanups.
(merge 515360f9e9 jn/credential-doc-on-clear later to maint).
(merge 0e6d899fee ab/aix-needs-compat-regex later to maint).
(merge e294e8959f jc/apply-fix-mismerge later to maint).
(merge 7f1b225153 bw/submodule-with-bs-path later to maint).
(merge c8f7c8b704 tb/dedup-crlf-tests later to maint).
+ (merge 449456ad47 sg/core-filemode-doc-typofix later to maint).
+ (merge ba4dce784e km/log-showsignature-doc later to maint).
+ (merge c5a9157393 jh/memihash-opt later to maint).
+ (merge 80f4cd8046 ab/ref-filter-no-contains later to maint).
+ (merge b275da816c ah/doc-interpret-trailers-ifexists later to maint).
+ (merge fc7a5edb55 ah/doc-pretty-format-fix later to maint).
+ (merge 7e95fcb4b5 sb/t5531-update-desc later to maint).
+ (merge b8f354f294 sd/t3200-typofix later to maint).
+ (merge ba746ff9c9 ah/doc-filter-branch-export-env later to maint).
+ (merge 44e2ff09ce ab/t3070-test-dedup later to maint).
+ (merge 9ee4aa95db rf/completion-config-commit later to maint).
+ (merge fb87327aee ah/doc-rev-parse-short-default later to maint).
Includes
~~~~~~~~
+The `include` and `includeIf` sections allow you to include config
+directives from another source. These sections behave identically to
+each other with the exception that `includeIf` sections may be ignored
+if their condition does not evaluate to true; see "Conditional includes"
+below.
+
You can include a config file from another by setting the special
-`include.path` variable to the name of the file to be included. The
-variable takes a pathname as its value, and is subject to tilde
-expansion. `include.path` can be given multiple times.
+`include.path` (or `includeIf.*.path`) variable to the name of the file
+to be included. The variable takes a pathname as its value, and is
+subject to tilde expansion. These variables can be given multiple times.
-The included file is expanded immediately, as if its contents had been
-found at the location of the include directive. If the value of the
-`include.path` variable is a relative path, the path is considered to
+The contents of the included file are inserted immediately, as if they
+had been found at the location of the include directive. If the value of the
+variable is a relative path, the path is considered to
be relative to the configuration file in which the include directive
was found. See below for examples.
You can include a config file from another conditionally by setting a
`includeIf.<condition>.path` variable to the name of the file to be
-included. The variable's value is treated the same way as
-`include.path`. `includeIf.<condition>.path` can be given multiple times.
+included.
The condition starts with a keyword followed by a colon and some data
whose format and meaning depends on the keyword. Supported keywords
* Symlinks in `$GIT_DIR` are not resolved before matching.
+ * Both the symlink & realpath versions of paths will be matched
+ outside of `$GIT_DIR`. E.g. if ~/git is a symlink to
+ /mnt/storage/git, both `gitdir:~/git` and `gitdir:/mnt/storage/git`
+ will match.
++
+This was not the case in the initial release of this feature in
+v2.13.0, which only matched the realpath version. Configuration that
+wants to be compatible with the initial release of this feature needs
+to either specify only the realpath version, or both versions.
+
* Note that "../" is not special and will match literally, which is
unlikely what you want.
[include]
path = /path/to/foo.inc ; include by absolute path
- path = foo ; expand "foo" relative to the current file
- path = ~/foo ; expand "foo" in your `$HOME` directory
+ path = foo.inc ; find "foo.inc" relative to the current file
+ path = ~/foo.inc ; find "foo.inc" in your `$HOME` directory
; include if $GIT_DIR is /path/to/foo/.git
[includeIf "gitdir:/path/to/foo/.git"]
[includeIf "gitdir:~/to/group/"]
path = /path/to/foo.inc
+ ; relative paths are always relative to the including
+ ; file (if the condition is true); their location is not
+ ; affected by the condition
+ [includeIf "gitdir:/path/to/group/"]
+ path = foo.inc
+
Values
~~~~~~
is to be honored.
+
Some filesystems lose the executable bit when a file that is
-marked as executable is checked out, or checks out an
+marked as executable is checked out, or checks out a
non-executable file with executable bit on.
linkgit:git-clone[1] or linkgit:git-init[1] probe the filesystem
to see if it handles the executable bit correctly
computed based on the approximate number of packed objects
in your repository, which hopefully is enough for
abbreviated object names to stay unique for some time.
+ The minimum length is 4.
add.ignoreErrors::
add.ignore-errors (deprecated)::
Tools like linkgit:git-log[1] or linkgit:git-whatchanged[1], which
normally hide the root commit will now show it. True by default.
+log.showSignature::
+ If true, makes linkgit:git-log[1], linkgit:git-show[1], and
+ linkgit:git-whatchanged[1] assume `--show-signature`.
+
log.mailmap::
If true, makes linkgit:git-log[1], linkgit:git-show[1], and
linkgit:git-whatchanged[1] assume `--use-mailmap`.
the best alternative for the particular user, even for a
never-before-seen repository on the site. When more than one
insteadOf strings match a given URL, the longest match is used.
++
+Note that any protocol restrictions will be applied to the rewritten
+URL. If the rewrite changes the URL to use a custom protocol or remote
+helper, you may need to adjust the `protocol.*.allow` config to permit
+the request. In particular, protocols you expect to use for submodules
+must be set to `always` rather than the default of `user`. See the
+description of `protocol.allow` above.
url.<base>.pushInsteadOf::
Any URL that starts with this value will not be pushed to;
This filter may be used if you only need to modify the environment
in which the commit will be performed. Specifically, you might
want to rewrite the author/committer name/email/time environment
- variables (see linkgit:git-commit-tree[1] for details). Do not forget
- to re-export the variables.
+ variables (see linkgit:git-commit-tree[1] for details).
--tree-filter <command>::
This is the filter for rewriting the tree and its contents.
if test "$GIT_AUTHOR_EMAIL" = "root@localhost"
then
GIT_AUTHOR_EMAIL=john@example.com
- export GIT_AUTHOR_EMAIL
fi
if test "$GIT_COMMITTER_EMAIL" = "root@localhost"
then
GIT_COMMITTER_EMAIL=john@example.com
- export GIT_COMMITTER_EMAIL
fi
' -- --all
--------------------------------------------------------
-P::
--perl-regexp::
- Use Perl-compatible regexp for patterns. Requires libpcre to be
- compiled in.
+ Use Perl-compatible regular expressions for patterns.
++
+Support for these types of regular expressions is an optional
+compile-time dependency. If Git wasn't compiled with support for them,
+providing this option will cause it to die.
-F::
--fixed-strings::
same <token> in the message.
+
The valid values for this option are: `addIfDifferentNeighbor` (this
-is the default), `addIfDifferent`, `add`, `overwrite` or `doNothing`.
+is the default), `addIfDifferent`, `add`, `replace` or `doNothing`.
+
With `addIfDifferentNeighbor`, a new trailer will be added only if no
trailer with the same (<token>, <value>) pair is above or below the line
-------
If `-m` is specified, 'git read-tree' can perform 3 kinds of
merge, a single tree merge if only 1 tree is given, a
-fast-forward merge with 2 trees, or a 3-way merge if 3 trees are
+fast-forward merge with 2 trees, or a 3-way merge if 3 or more trees are
provided.
SYNOPSIS
--------
[verse]
-'git repack' [-a] [-A] [-d] [-f] [-F] [-l] [-n] [-q] [-b] [--window=<n>] [--depth=<n>]
+'git repack' [-a] [-A] [-d] [-f] [-F] [-l] [-n] [-q] [-b] [--window=<n>] [--depth=<n>] [--threads=<n>]
DESCRIPTION
-----------
to be applied that many times to get to the necessary object.
The default value for --window is 10 and --depth is 50.
+--threads=<n>::
+ This option is passed through to `git pack-objects`.
+
--window-memory=<n>::
This option provides an additional limit on top of `--window`;
the window size will dynamically scale down so as to not take
'git diff-{asterisk}'). In contrast to the `--sq-quote` option,
the command input is still interpreted as usual.
+--short[=length]::
+ Same as `--verify` but shortens the object name to a unique
+ prefix with at least `length` characters. The minimum length
+ is 4; the default is the effective value of the `core.abbrev`
+ configuration variable (see linkgit:git-config[1]).
+
--not::
When showing object names, prefix them with '{caret}' and
strip '{caret}' prefix from the object names that already have
The option core.warnAmbiguousRefs is used to select the strict
abbreviation mode.
---short::
---short=number::
- Instead of outputting the full SHA-1 values of object names try to
- abbreviate them to a shorter unique name. When no length is specified
- 7 is used. The minimum length is 4.
-
--symbolic::
Usually the object names are output in SHA-1 form (with
possible '{caret}' prefix); this option makes them output in a
Currently, validation means the following:
+
--
+ * Invoke the sendemail-validate hook if present (see linkgit:githooks[5]).
* Warn of patches that contain lines longer than 998 characters; this
is due to SMTP limits as described by http://www.ietf.org/rfc/rfc2821.txt.
--
'git tag' [-a | -s | -u <keyid>] [-f] [-m <msg> | -F <file>]
<tagname> [<commit> | <object>]
'git tag' -d <tagname>...
-'git tag' [-n[<num>]] -l [--contains <commit>] [--contains <commit>]
+'git tag' [-n[<num>]] -l [--contains <commit>] [--no-contains <commit>]
[--points-at <object>] [--column[=<options>] | --no-column]
[--create-reflog] [--sort=<key>] [--format=<format>]
[--[no-]merged [<commit>]] [<pattern>...]
The commits are guaranteed to be listed in the order that they were
processed by rebase.
+sendemail-validate
+~~~~~~~~~~~~~~~~~~
+
+This hook is invoked by 'git send-email'. It takes a single parameter,
+the name of the file that holds the e-mail to be sent. Exiting with a
+non-zero status causes 'git send-email' to abort before sending any
+e-mails.
+
GIT
---
* Fields use modified URI encoding, defined in RFC 3986, section 2.1
(Percent-Encoding), or rather "Query string encoding" (see
-http://en.wikipedia.org/wiki/Query_string#URL_encoding[]), the difference
+https://en.wikipedia.org/wiki/Query_string#URL_encoding[]), the difference
being that SP (" ") can be encoded as "{plus}" (and therefore "{plus}" has to be
also percent-encoded).
+
than given and there are spaces on its left, use those spaces
- '%><(<N>)', '%><|(<N>)': similar to '%<(<N>)', '%<|(<N>)'
respectively, but padding both sides (i.e. the text is centered)
--%(trailers): display the trailers of the body as interpreted by
+- %(trailers): display the trailers of the body as interpreted by
linkgit:git-interpret-trailers[1]
NOTE: Some placeholders may depend on other options given to the
pattern as a regular expression).
--perl-regexp::
- Consider the limiting patterns to be Perl-compatible regular expressions.
- Requires libpcre to be compiled in.
+ Consider the limiting patterns to be Perl-compatible regular
+ expressions.
++
+Support for these types of regular expressions is an optional
+compile-time dependency. If Git wasn't compiled with support for them,
+providing this option will cause it to die.
--remove-empty::
Stop when a given path disappears from the tree.
Similar to `DIR_SHOW_IGNORED`, but return ignored files in `ignored[]`
in addition to untracked files in `entries[]`.
+`DIR_KEEP_UNTRACKED_CONTENTS`:::
+
+ Only has meaning if `DIR_SHOW_IGNORED_TOO` is also set; if this is set, the
+ untracked contents of untracked directories are also returned in
+ `entries[]`.
+
`DIR_COLLECT_IGNORED`:::
Special mode for git-add. Return ignored files in `ignored[]` and
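The new `DIR_KEEP_UNTRACKED_CONTENTS` flag only changes what ends up in
`entries[]`; a caller still walks the two arrays as before. As a rough
illustration, here is a minimal sketch of a caller in the spirit of the
"git clean -d" hunks later in this series. It builds only inside the Git
source tree, and `list_untracked_and_ignored()` is a made-up name used for
illustration, not an existing function.

    #include "cache.h"
    #include "dir.h"
    #include "pathspec.h"

    /*
     * Illustrative sketch: collect untracked paths, keeping the contents
     * of untracked directories the way "git clean -d" does in this series.
     */
    static void list_untracked_and_ignored(const struct pathspec *pathspec)
    {
            struct dir_struct dir;
            int i;

            memset(&dir, 0, sizeof(dir));
            dir.flags |= DIR_SHOW_OTHER_DIRECTORIES |
                         DIR_SHOW_IGNORED_TOO | DIR_KEEP_UNTRACKED_CONTENTS;

            /* the updated signature takes the index to work on explicitly */
            fill_directory(&dir, &the_index, pathspec);

            for (i = 0; i < dir.nr; i++)
                    printf("untracked: %s\n", dir.entries[i]->name);
            for (i = 0; i < dir.ignored_nr; i++)
                    printf("ignored:   %s\n", dir.ignored[i]->name);
    }

As the clean.c hunk below shows, the caller also frees the `entries[]` and
`ignored[]` elements once it is done with them.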
--- /dev/null
+sub-process API
+===============
+
+The sub-process API makes it possible to run background sub-processes
+for the entire lifetime of a Git invocation. If Git needs to communicate
+with an external process multiple times, then this can reduce the process
+invocation overhead. Git and the sub-process communicate through stdin and
+stdout.
+
+The sub-processes are kept in a hashmap by command name and looked up
+via the subprocess_find_entry function. If an existing instance cannot
+be found, then a new process should be created and started. When the
+parent git command terminates, all sub-processes are also terminated.
+
+This API is based on the run-command API.
+
+Data structures
+---------------
+
+* `struct subprocess_entry`
+
+The sub-process structure. Members should not be accessed directly.
+
+Types
+-----
+
+'int(*subprocess_start_fn)(struct subprocess_entry *entry)'::
+
+ User-supplied function to initialize the sub-process. This is
+ typically used to negotiate the interface version and capabilities.
+
+
+Functions
+---------
+
+`cmd2process_cmp`::
+
+ Function to test two subprocess hashmap entries for equality.
+
+`subprocess_start`::
+
+ Start a subprocess and add it to the subprocess hashmap.
+
+`subprocess_stop`::
+
+ Kill a subprocess and remove it from the subprocess hashmap.
+
+`subprocess_find_entry`::
+
+ Find a subprocess in the subprocess hashmap.
+
+`subprocess_get_child_process`::
+
+ Get the underlying `struct child_process` from a subprocess.
+
+`subprocess_read_status`::
+
+ Helper function to read packets looking for the last "status=<foo>"
+ key/value pair.
# Define NO_OPENSSL environment variable if you do not have OpenSSL.
# This also implies BLK_SHA1.
#
-# Define USE_LIBPCRE if you have and want to use libpcre. git-grep will be
-# able to use Perl-compatible regular expressions.
+# Define USE_LIBPCRE if you have and want to use libpcre. Various
+# commands such as log and grep offer runtime options to use
+# Perl-compatible regular expressions instead of standard or extended
+# POSIX regular expressions.
#
# Define LIBPCREDIR=/foo/bar if your libpcre header and library files are in
# /foo/bar/include and /foo/bar/lib directories.
LIB_OBJS += string-list.o
LIB_OBJS += submodule.o
LIB_OBJS += submodule-config.o
+LIB_OBJS += sub-process.o
LIB_OBJS += symlinks.o
LIB_OBJS += tag.o
LIB_OBJS += tempfile.o
endif
ifdef USE_LIBPCRE
- BASIC_CFLAGS += -DUSE_LIBPCRE
+ BASIC_CFLAGS += -DUSE_LIBPCRE1
ifdef LIBPCREDIR
BASIC_CFLAGS += -I$(LIBPCREDIR)/include
EXTLIBS += -L$(LIBPCREDIR)/$(lib) $(CC_LD_DYNPATH)$(LIBPCREDIR)/$(lib)
DC_SHA1 := YesPlease
LIB_OBJS += sha1dc/sha1.o
LIB_OBJS += sha1dc/ubc_check.o
- BASIC_CFLAGS += -DSHA1_DC
+ BASIC_CFLAGS += \
+ -DSHA1_DC \
+ -DSHA1DC_NO_STANDARD_INCLUDES \
+ -DSHA1DC_INIT_SAFE_HASH_DEFAULT=0 \
+ -DSHA1DC_CUSTOM_INCLUDE_SHA1_C="\"cache.h\"" \
+ -DSHA1DC_CUSTOM_TRAILING_INCLUDE_SHA1_C="\"sha1dc_git.c\"" \
+ -DSHA1DC_CUSTOM_TRAILING_INCLUDE_SHA1_H="\"sha1dc_git.h\"" \
+ -DSHA1DC_CUSTOM_INCLUDE_UBC_CHECK_C="\"git-compat-util.h\""
endif
endif
endif
@echo TAR=\''$(subst ','\'',$(subst ','\'',$(TAR)))'\' >>$@+
@echo NO_CURL=\''$(subst ','\'',$(subst ','\'',$(NO_CURL)))'\' >>$@+
@echo NO_EXPAT=\''$(subst ','\'',$(subst ','\'',$(NO_EXPAT)))'\' >>$@+
- @echo USE_LIBPCRE=\''$(subst ','\'',$(subst ','\'',$(USE_LIBPCRE)))'\' >>$@+
+ @echo USE_LIBPCRE1=\''$(subst ','\'',$(subst ','\'',$(USE_LIBPCRE)))'\' >>$@+
@echo NO_PERL=\''$(subst ','\'',$(subst ','\'',$(NO_PERL)))'\' >>$@+
+ @echo NO_PTHREADS=\''$(subst ','\'',$(subst ','\'',$(NO_PTHREADS)))'\' >>$@+
@echo NO_PYTHON=\''$(subst ','\'',$(subst ','\'',$(NO_PYTHON)))'\' >>$@+
@echo NO_UNIX_SOCKETS=\''$(subst ','\'',$(subst ','\'',$(NO_UNIX_SOCKETS)))'\' >>$@+
@echo PAGER_ENV=\''$(subst ','\'',$(subst ','\'',$(PAGER_ENV)))'\' >>$@+
ifdef GIT_PERF_MAKE_OPTS
@echo GIT_PERF_MAKE_OPTS=\''$(subst ','\'',$(subst ','\'',$(GIT_PERF_MAKE_OPTS)))'\' >>$@+
endif
+ifdef GIT_PERF_MAKE_COMMAND
+ @echo GIT_PERF_MAKE_COMMAND=\''$(subst ','\'',$(subst ','\'',$(GIT_PERF_MAKE_COMMAND)))'\' >>$@+
+endif
ifdef GIT_INTEROP_MAKE_OPTS
@echo GIT_INTEROP_MAKE_OPTS=\''$(subst ','\'',$(subst ','\'',$(GIT_INTEROP_MAKE_OPTS)))'\' >>$@+
endif
/*
* Custom integer square root from
- * http://en.wikipedia.org/wiki/Integer_square_root
+ * https://en.wikipedia.org/wiki/Integer_square_root
*/
static int sqrti(int val)
{
steps_msg = xstrfmt(Q_("(roughly %d step)", "(roughly %d steps)",
steps), steps);
- /* TRANSLATORS: the last %s will be replaced with
- "(roughly %d steps)" translation */
+ /*
+ * TRANSLATORS: the last %s will be replaced with "(roughly %d
+ * steps)" translation.
+ */
printf(Q_("Bisecting: %d revision left to test after this %s\n",
"Bisecting: %d revisions left to test after this %s\n",
nr), nr, steps_msg);
#include "revision.h"
#include "bulk-checkin.h"
#include "argv-array.h"
+#include "submodule.h"
static const char * const builtin_add_usage[] = {
N_("git add [<options>] [--] <pathspec>..."),
*dst++ = entry;
}
dir->nr = dst - dir->entries;
- add_pathspec_matches_against_index(pathspec, seen);
+ add_pathspec_matches_against_index(pathspec, &the_index, seen);
return seen;
}
if (read_cache() < 0)
die(_("index file corrupt"));
+ die_in_unpopulated_submodule(&the_index, prefix);
+
/*
* Check the "pathspec '%s' did not match any files" block
* below before enabling new magic.
*/
parse_pathspec(&pathspec, 0,
PATHSPEC_PREFER_FULL |
- PATHSPEC_SYMLINK_LEADING_PATH |
- PATHSPEC_STRIP_SUBMODULE_SLASH_EXPENSIVE,
+ PATHSPEC_SYMLINK_LEADING_PATH,
prefix, argv);
+ die_path_inside_submodule(&the_index, &pathspec);
+
if (add_new_files) {
int baselen;
}
/* This picks up the paths that are not tracked */
- baselen = fill_directory(&dir, &pathspec);
+ baselen = fill_directory(&dir, &the_index, &pathspec);
if (pathspec.nr)
seen = prune_directory(&dir, &pathspec, baselen);
}
int i;
if (!seen)
- seen = find_pathspecs_matching_against_index(&pathspec);
+ seen = find_pathspecs_matching_against_index(&pathspec, &the_index);
/*
* file_exists() assumes exact match
!file_exists(path))) {
if (ignore_missing) {
int dtype = DT_UNKNOWN;
- if (is_excluded(&dir, path, &dtype))
- dir_add_ignored(&dir, path, pathspec.items[i].len);
+ if (is_excluded(&dir, &the_index, path, &dtype))
+ dir_add_ignored(&dir, &the_index,
+ path, pathspec.items[i].len);
} else
die(_("pathspec '%s' did not match any files"),
pathspec.items[i].original);
}
if (is_empty_file(am_path(state, "patch"))) {
- printf_ln(_("Patch is empty. Was it split wrong?"));
+ printf_ln(_("Patch is empty."));
die_user_resolve(state);
}
struct strbuf sb = STRBUF_INIT;
FILE *fp = xfopen(mail, "r");
const char *x;
+ int ret = 0;
- if (strbuf_getline_lf(&sb, fp))
- return -1;
-
- if (!skip_prefix(sb.buf, "From ", &x))
- return -1;
-
- if (get_oid_hex(x, commit_id) < 0)
- return -1;
+ if (strbuf_getline_lf(&sb, fp) ||
+ !skip_prefix(sb.buf, "From ", &x) ||
+ get_oid_hex(x, commit_id) < 0)
+ ret = -1;
strbuf_release(&sb);
fclose(fp);
- return 0;
+ return ret;
}
/**
if (unmerged_cache()) {
printf_ln(_("You still have unmerged paths in your index.\n"
- "Did you forget to use 'git add'?"));
+ "You should 'git add' each file with resolved conflicts to mark them as such.\n"
+ "You might run `git rm` on a file to accept \"deleted by them\" for it."));
die_user_resolve(state);
}
OPT_END()
};
+ if (argc == 2 && !strcmp(argv[1], "-h"))
+ usage_with_options(usage, options);
+
git_config(git_am_config, NULL);
am_state_init(&state);
blame_date_width = sizeof("2006-10-19");
break;
case DATE_RELATIVE:
- /* TRANSLATORS: This string is used to tell us the maximum
- display width for a relative timestamp in "git blame"
- output. For C locale, "4 years, 11 months ago", which
- takes 22 places, is the longest among various forms of
- relative timestamps, but your language may need more or
- fewer display columns. */
+ /*
+ * TRANSLATORS: This string is used to tell us the
+ * maximum display width for a relative timestamp in
+ * "git blame" output. For C locale, "4 years, 11
+ * months ago", which takes 22 places, is the longest
+ * among various forms of relative timestamps, but
+ * your language may need more or fewer display
+ * columns.
+ */
blame_date_width = utf8_strwidth(_("4 years, 11 months ago")) + 1; /* add the null */
break;
case DATE_NORMAL:
if (unknown_type)
flags |= LOOKUP_UNKNOWN_OBJECT;
- if (get_sha1_with_context(obj_name, 0, oid.hash, &obj_context))
+ if (get_sha1_with_context(obj_name, GET_SHA1_RECORD_PATH,
+ oid.hash, &obj_context))
die("Not a valid object name %s", obj_name);
if (!path)
die("git cat-file %s: bad file", obj_name);
write_or_die(1, buf, size);
+ free(buf);
+ free(obj_context.path);
return 0;
}
#include "quote.h"
#include "pathspec.h"
#include "parse-options.h"
+#include "submodule.h"
static int quiet, verbose, stdin_paths, show_non_matching, no_index;
static const char * const check_ignore_usage[] = {
parse_pathspec(&pathspec,
PATHSPEC_ALL_MAGIC & ~PATHSPEC_FROMTOP,
PATHSPEC_SYMLINK_LEADING_PATH |
- PATHSPEC_STRIP_SUBMODULE_SLASH_EXPENSIVE |
PATHSPEC_KEEP_ORDER,
prefix, argv);
+ die_path_inside_submodule(&the_index, &pathspec);
+
/*
* look for pathspecs matching entries in the index, since these
* should not be ignored, in order to be consistent with
* 'git status', 'git add' etc.
*/
- seen = find_pathspecs_matching_against_index(&pathspec);
+ seen = find_pathspecs_matching_against_index(&pathspec, &the_index);
for (i = 0; i < pathspec.nr; i++) {
full_path = pathspec.items[i].match;
exclude = NULL;
if (!seen[i]) {
- exclude = last_exclude_matching(dir, full_path, &dtype);
+ exclude = last_exclude_matching(dir, &the_index,
+ full_path, &dtype);
}
if (!quiet && (exclude || show_non_matching))
output_exclude(pathspec.items[i].original, exclude);
/*
* NEEDSWORK:
* There is absolutely no reason to write this as a blob object
- * and create a phony cache entry just to leak. This hack is
- * primarily to get to the write_entry() machinery that massages
- * the contents to work-tree format and writes out which only
- * allows it for a cache entry. The code in write_entry() needs
- * to be refactored to allow us to feed a <buffer, size, mode>
- * instead of a cache entry. Such a refactoring would help
- * merge_recursive as well (it also writes the merge result to the
- * object database even when it may contain conflicts).
+ * and create a phony cache entry. This hack is primarily to get
+ * to the write_entry() machinery that massages the contents to
+ * work-tree format and writes out which only allows it for a
+ * cache entry. The code in write_entry() needs to be refactored
+ * to allow us to feed a <buffer, size, mode> instead of a cache
+ * entry. Such a refactoring would help merge_recursive as well
+ * (it also writes the merge result to the object database even
+ * when it may contain conflicts).
*/
if (write_sha1_file(result_buf.ptr, result_buf.size,
blob_type, oid.hash))
die(_("Unable to add merge result for '%s'"), path);
+ free(result_buf.ptr);
ce = make_cache_entry(mode, oid.hash, path, 2, 0);
if (!ce)
die(_("make_cache_entry failed for path '%s'"), path);
status = checkout_entry(ce, state, NULL);
+ free(ce);
return status;
}
* new_branch && argc > 1 will be caught later.
*/
if (opts.new_branch && argc == 1)
- die(_("Cannot update paths and switch to branch '%s' at the same time.\n"
- "Did you intend to checkout '%s' which can not be resolved as commit?"),
- opts.new_branch, argv[0]);
+ die(_("'%s' is not a commit and a branch '%s' cannot be created from it"),
+ argv[0], opts.new_branch);
if (opts.force_detach)
die(_("git checkout: --detach does not take a path argument '%s'"),
for_each_string_list_item(item, &del_list) {
int dtype = DT_UNKNOWN;
- if (is_excluded(&dir, item->string, &dtype)) {
+ if (is_excluded(&dir, &the_index, item->string, &dtype)) {
*item->string = '\0';
changed++;
}
}
}
+static void correct_untracked_entries(struct dir_struct *dir)
+{
+ int src, dst, ign;
+
+ for (src = dst = ign = 0; src < dir->nr; src++) {
+ /* skip paths in ignored[] that cannot be inside entries[src] */
+ while (ign < dir->ignored_nr &&
+ 0 <= cmp_dir_entry(&dir->entries[src], &dir->ignored[ign]))
+ ign++;
+
+ if (ign < dir->ignored_nr &&
+ check_dir_entry_contains(dir->entries[src], dir->ignored[ign])) {
+ /* entries[src] contains an ignored path, so we drop it */
+ free(dir->entries[src]);
+ } else {
+ struct dir_entry *ent = dir->entries[src++];
+
+ /* entries[src] does not contain an ignored path, so we keep it */
+ dir->entries[dst++] = ent;
+
+ /* then discard paths in entries[] contained inside entries[src] */
+ while (src < dir->nr &&
+ check_dir_entry_contains(ent, dir->entries[src]))
+ free(dir->entries[src++]);
+
+ /* compensate for the outer loop's loop control */
+ src--;
+ }
+ }
+ dir->nr = dst;
+}
+
int cmd_clean(int argc, const char **argv, const char *prefix)
{
int i, res;
dir.flags |= DIR_SHOW_OTHER_DIRECTORIES;
+ if (remove_directories)
+ dir.flags |= DIR_SHOW_IGNORED_TOO | DIR_KEEP_UNTRACKED_CONTENTS;
+
if (read_cache() < 0)
die(_("index file corrupt"));
PATHSPEC_PREFER_CWD,
prefix, argv);
- fill_directory(&dir, &pathspec);
+ fill_directory(&dir, &the_index, &pathspec);
+ correct_untracked_entries(&dir);
for (i = 0; i < dir.nr; i++) {
struct dir_entry *ent = dir.entries[i];
string_list_append(&del_list, rel);
}
+ for (i = 0; i < dir.nr; i++)
+ free(dir.entries[i]);
+
+ for (i = 0; i < dir.ignored_nr; i++)
+ free(dir.ignored[i]);
+
if (interactive && del_list.nr > 0)
interactive_main_loop();
if (verbose || /* Truncate the message just before the diff, if any. */
cleanup_mode == CLEANUP_SCISSORS)
- wt_status_truncate_message_at_cut_line(&sb);
+ strbuf_setlen(&sb, wt_status_locate_end(sb.buf, sb.len));
if (cleanup_mode != CLEANUP_NONE)
strbuf_stripspace(&sb, cleanup_mode == CLEANUP_ALL);
usage_with_options(builtin_config_usage, builtin_config_options);
}
+ if (use_local_config && nongit)
+ die(_("--local can only be used inside a git repository"));
+
if (given_config_source.file &&
!strcmp(given_config_source.file, "-")) {
given_config_source.file = NULL;
int result;
unsigned options = 0;
+ git_config(git_diff_basic_config, NULL); /* no "diff" UI options */
init_revisions(&rev, prefix);
gitmodules_config();
- git_config(git_diff_basic_config, NULL); /* no "diff" UI options */
rev.abbrev = 0;
precompose_argv(argc, argv);
int i;
int result;
+ git_config(git_diff_basic_config, NULL); /* no "diff" UI options */
init_revisions(&rev, prefix);
gitmodules_config();
- git_config(git_diff_basic_config, NULL); /* no "diff" UI options */
rev.abbrev = 0;
precompose_argv(argc, argv);
struct setup_revision_opt s_r_opt;
int read_stdin = 0;
+ git_config(git_diff_basic_config, NULL); /* no "diff" UI options */
init_revisions(opt, prefix);
gitmodules_config();
- git_config(git_diff_basic_config, NULL); /* no "diff" UI options */
opt->abbrev = 0;
opt->diff = 1;
opt->disable_stdin = 1;
#define DIFF_NO_INDEX_EXPLICIT 1
#define DIFF_NO_INDEX_IMPLICIT 2
-struct blobinfo {
- struct object_id oid;
- const char *name;
- unsigned mode;
-};
-
static const char builtin_diff_usage[] =
"git diff [<options>] [<commit> [<commit>]] [--] [<path>...]";
+static const char *blob_path(struct object_array_entry *entry)
+{
+ return entry->path ? entry->path : entry->name;
+}
+
static void stuff_change(struct diff_options *opt,
unsigned old_mode, unsigned new_mode,
const struct object_id *old_oid,
const struct object_id *new_oid,
int old_oid_valid,
int new_oid_valid,
- const char *old_name,
- const char *new_name)
+ const char *old_path,
+ const char *new_path)
{
struct diff_filespec *one, *two;
if (DIFF_OPT_TST(opt, REVERSE_DIFF)) {
SWAP(old_mode, new_mode);
SWAP(old_oid, new_oid);
- SWAP(old_name, new_name);
+ SWAP(old_path, new_path);
}
if (opt->prefix &&
- (strncmp(old_name, opt->prefix, opt->prefix_length) ||
- strncmp(new_name, opt->prefix, opt->prefix_length)))
+ (strncmp(old_path, opt->prefix, opt->prefix_length) ||
+ strncmp(new_path, opt->prefix, opt->prefix_length)))
return;
- one = alloc_filespec(old_name);
- two = alloc_filespec(new_name);
+ one = alloc_filespec(old_path);
+ two = alloc_filespec(new_path);
fill_filespec(one, old_oid->hash, old_oid_valid, old_mode);
fill_filespec(two, new_oid->hash, new_oid_valid, new_mode);
static int builtin_diff_b_f(struct rev_info *revs,
int argc, const char **argv,
- struct blobinfo *blob)
+ struct object_array_entry **blob)
{
/* Blob vs file in the working tree*/
struct stat st;
diff_set_mnemonic_prefix(&revs->diffopt, "o/", "w/");
- if (blob[0].mode == S_IFINVALID)
- blob[0].mode = canon_mode(st.st_mode);
+ if (blob[0]->mode == S_IFINVALID)
+ blob[0]->mode = canon_mode(st.st_mode);
stuff_change(&revs->diffopt,
- blob[0].mode, canon_mode(st.st_mode),
- &blob[0].oid, &null_oid,
+ blob[0]->mode, canon_mode(st.st_mode),
+ &blob[0]->item->oid, &null_oid,
1, 0,
- path, path);
+ blob[0]->path ? blob[0]->path : path,
+ path);
diffcore_std(&revs->diffopt);
diff_flush(&revs->diffopt);
return 0;
static int builtin_diff_blobs(struct rev_info *revs,
int argc, const char **argv,
- struct blobinfo *blob)
+ struct object_array_entry **blob)
{
unsigned mode = canon_mode(S_IFREG | 0644);
if (argc > 1)
usage(builtin_diff_usage);
- if (blob[0].mode == S_IFINVALID)
- blob[0].mode = mode;
+ if (blob[0]->mode == S_IFINVALID)
+ blob[0]->mode = mode;
- if (blob[1].mode == S_IFINVALID)
- blob[1].mode = mode;
+ if (blob[1]->mode == S_IFINVALID)
+ blob[1]->mode = mode;
stuff_change(&revs->diffopt,
- blob[0].mode, blob[1].mode,
- &blob[0].oid, &blob[1].oid,
+ blob[0]->mode, blob[1]->mode,
+ &blob[0]->item->oid, &blob[1]->item->oid,
1, 1,
- blob[0].name, blob[1].name);
+ blob_path(blob[0]), blob_path(blob[1]));
diffcore_std(&revs->diffopt);
diff_flush(&revs->diffopt);
return 0;
struct rev_info rev;
struct object_array ent = OBJECT_ARRAY_INIT;
int blobs = 0, paths = 0;
- struct blobinfo blob[2];
+ struct object_array_entry *blob[2];
int nongit = 0, no_index = 0;
int result = 0;
} else if (obj->type == OBJ_BLOB) {
if (2 <= blobs)
die(_("more than two blobs given: '%s'"), name);
- oidcpy(&blob[blobs].oid, &obj->oid);
- blob[blobs].name = name;
- blob[blobs].mode = entry->mode;
+ blob[blobs] = entry;
blobs++;
} else {
hashmap_entry_init(entry, strhash(buf.buf));
hashmap_add(result, entry);
}
+ fclose(fp);
if (finish_command(&diff_files))
die("diff-files did not exit properly");
strbuf_release(&index_env);
}
if (lmode && status != 'C') {
- if (checkout_path(lmode, &loid, src_path, &lstate))
- return error("could not write '%s'", src_path);
+ if (checkout_path(lmode, &loid, src_path, &lstate)) {
+ ret = error("could not write '%s'", src_path);
+ goto finish;
+ }
}
if (rmode && !S_ISLNK(rmode)) {
hashmap_add(&working_tree_dups, entry);
if (!use_wt_file(workdir, dst_path, &roid)) {
- if (checkout_path(rmode, &roid, dst_path, &rstate))
- return error("could not write '%s'",
- dst_path);
+ if (checkout_path(rmode, &roid, dst_path,
+ &rstate)) {
+ ret = error("could not write '%s'",
+ dst_path);
+ goto finish;
+ }
} else if (!is_null_oid(&roid)) {
/*
* Changes in the working tree need special
ADD_CACHE_JUST_APPEND);
add_path(&rdir, rdir_len, dst_path);
- if (ensure_leading_directories(rdir.buf))
- return error("could not create "
- "directory for '%s'",
- dst_path);
+ if (ensure_leading_directories(rdir.buf)) {
+ ret = error("could not create "
+ "directory for '%s'",
+ dst_path);
+ goto finish;
+ }
add_path(&wtdir, wtdir_len, dst_path);
if (symlinks) {
if (symlink(wtdir.buf, rdir.buf)) {
}
}
+ fclose(fp);
+ fp = NULL;
if (finish_command(&child)) {
ret = error("error occurred running diff --raw");
goto finish;
}
if (!i)
- return 0;
+ goto finish;
/*
* Changes to submodules require special treatment. This loop writes a
exit_cleanup(tmpdir, rc);
finish:
+ if (fp)
+ fclose(fp);
+
free(lbase_dir);
free(rbase_dir);
strbuf_release(&ldir);
oid_to_hex(&tag->object.oid));
case DROP:
/* Ignore this tag altogether */
+ free(buf);
return;
case REWRITE:
if (tagged->type != OBJ_COMMIT) {
(int)(tagger_end - tagger), tagger,
tagger == tagger_end ? "" : "\n",
(int)message_size, (int)message_size, message ? message : "");
+ free(buf);
}
static struct commit *get_commit(struct rev_cmdline_entry *e, char *full_name)
static inline void grep_lock(void)
{
- if (num_threads)
- pthread_mutex_lock(&grep_mutex);
+ assert(num_threads);
+ pthread_mutex_lock(&grep_mutex);
}
static inline void grep_unlock(void)
{
- if (num_threads)
- pthread_mutex_unlock(&grep_mutex);
+ assert(num_threads);
+ pthread_mutex_unlock(&grep_mutex);
}
/* Signalled when a new work_item is added to todo. */
if (num_threads < 0)
die(_("invalid number of threads specified (%d) for %s"),
num_threads, var);
+#ifdef NO_PTHREADS
+ else if (num_threads && num_threads != 1) {
+ /*
+ * TRANSLATORS: %s is the configuration
+ * variable for tweaking threads, currently
+ * grep.threads
+ */
+ warning(_("no threads support, ignoring %s"), var);
+ num_threads = 0;
+ }
+#endif
}
return st;
break;
case GREP_PATTERN_TYPE_UNSPECIFIED:
break;
+ default:
+ die("BUG: Added a new grep pattern type without updating switch statement");
}
for (pattern = opt->pattern_list; pattern != NULL;
if (exc_std)
setup_standard_excludes(&dir);
- fill_directory(&dir, pathspec);
+ fill_directory(&dir, &the_index, pathspec);
for (i = 0; i < dir.nr; i++) {
if (!dir_path_match(dir.entries[i], pathspec, 0, NULL))
continue;
break;
}
- if (get_sha1_with_context(arg, 0, oid.hash, &oc)) {
+ if (get_sha1_with_context(arg, GET_SHA1_RECORD_PATH,
+ oid.hash, &oc)) {
if (seen_dashdash)
die(_("unable to resolve revision: %s"), arg);
break;
if (!seen_dashdash)
verify_non_filename(prefix, arg);
add_object_array_with_path(object, arg, &list, oc.mode, oc.path);
+ free(oc.path);
}
/*
else if (num_threads < 0)
die(_("invalid number of threads specified (%d)"), num_threads);
#else
+ if (num_threads)
+ warning(_("no threads support, ignoring --threads"));
num_threads = 0;
#endif
!DIFF_OPT_TST(&rev->diffopt, ALLOW_TEXTCONV))
return stream_blob_to_fd(1, oid, NULL, 0);
- if (get_sha1_with_context(obj_name, 0, oidc.hash, &obj_context))
+ if (get_sha1_with_context(obj_name, GET_SHA1_RECORD_PATH,
+ oidc.hash, &obj_context))
die(_("Not a valid object name %s"), obj_name);
- if (!obj_context.path[0] ||
- !textconv_object(obj_context.path, obj_context.mode, &oidc, 1, &buf, &size))
+ if (!obj_context.path ||
+ !textconv_object(obj_context.path, obj_context.mode, &oidc, 1, &buf, &size)) {
+ free(obj_context.path);
return stream_blob_to_fd(1, oid, NULL, 0);
+ }
if (!buf)
die(_("git show %s: bad file"), obj_name);
write_or_die(1, buf, size);
+ free(obj_context.path);
return 0;
}
{
int len = max_prefix_len;
- if (len >= ent->len)
+ if (len > ent->len)
die("git ls-files: internal error - directory entry not superset of prefix");
if (!dir_path_match(ent, &pathspec, len, ps_matched))
strbuf_addstr(&name, super_prefix);
strbuf_addstr(&name, ce->name);
- if (len >= ce_namelen(ce))
+ if (len > ce_namelen(ce))
die("git ls-files: internal error - cache entry not superset of prefix");
if (recurse_submodules && S_ISGITLINK(ce->ce_mode) &&
static int ce_excluded(struct dir_struct *dir, const struct cache_entry *ce)
{
int dtype = ce_to_dtype(ce);
- return is_excluded(dir, ce->name, &dtype);
+ return is_excluded(dir, &the_index, ce->name, &dtype);
}
static void show_files(struct dir_struct *dir)
if (show_others || show_killed) {
if (!show_others)
dir->flags |= DIR_COLLECT_KILLED_ONLY;
- fill_directory(dir, &pathspec);
+ fill_directory(dir, &the_index, &pathspec);
if (show_others)
show_other_files(dir);
if (show_killed)
active_nr = last - pos;
}
+static int get_common_prefix_len(const char *common_prefix)
+{
+ int common_prefix_len;
+
+ if (!common_prefix)
+ return 0;
+
+ common_prefix_len = strlen(common_prefix);
+
+ /*
+ * If the prefix has a trailing slash, strip it so that submodules won't
+ * be pruned from the index.
+ */
+ if (common_prefix[common_prefix_len - 1] == '/')
+ common_prefix_len--;
+
+ return common_prefix_len;
+}
+
/*
* Read the tree specified with --with-tree option
* (typically, HEAD) into stage #1 and then
"--error-unmatch");
parse_pathspec(&pathspec, 0,
- PATHSPEC_PREFER_CWD |
- PATHSPEC_STRIP_SUBMODULE_SLASH_CHEAP,
+ PATHSPEC_PREFER_CWD,
prefix, argv);
/*
max_prefix = NULL;
else
max_prefix = common_prefix(&pathspec);
- max_prefix_len = max_prefix ? strlen(max_prefix) : 0;
+ max_prefix_len = get_common_prefix_len(max_prefix);
+
+ prune_cache(max_prefix, max_prefix_len);
/* Treat unmatching pathspec elements as errors */
if (pathspec.nr && error_unmatch)
show_killed || show_modified || show_resolve_undo))
show_cached = 1;
- prune_cache(max_prefix, max_prefix_len);
if (with_tree) {
/*
* Basic sanity check; show-stages and show-unmerged
do {
peek = fgetc(f);
+ if (peek == EOF) {
+ if (f == stdin)
+ /* empty stdin is OK */
+ ret = skip;
+ else {
+ fclose(f);
+ error(_("empty mbox: '%s'"), file);
+ }
+ goto out;
+ }
} while (isspace(peek));
ungetc(peek, f);
unsigned mode;
enum object_type mode_type; /* object type derived from mode */
enum object_type obj_type; /* object type derived from sha */
- char *path;
+ char *path, *to_free = NULL;
unsigned char sha1[20];
ptr = buf;
struct strbuf p_uq = STRBUF_INIT;
if (unquote_c_style(&p_uq, path, NULL))
die("invalid quoting");
- path = strbuf_detach(&p_uq, NULL);
+ path = to_free = strbuf_detach(&p_uq, NULL);
}
/*
}
append_to_tree(mode, sha1, path);
+ free(to_free);
}
int cmd_mktree(int ac, const char **av, const char *prefix)
timestamp_t taggerdate;
int generation;
int distance;
+ int from_tag;
} rev_name;
-static long cutoff = LONG_MAX;
+static timestamp_t cutoff = TIME_MAX;
/* How many generations are maximally preferred over _one_ merge traversal? */
#define MERGE_TRAVERSAL_WEIGHT 65535
+static int is_better_name(struct rev_name *name,
+ const char *tip_name,
+ timestamp_t taggerdate,
+ int generation,
+ int distance,
+ int from_tag)
+{
+ /*
+ * When comparing names based on tags, prefer names
+ * based on the older tag, even if it is farther away.
+ */
+ if (from_tag && name->from_tag)
+ return (name->taggerdate > taggerdate ||
+ (name->taggerdate == taggerdate &&
+ name->distance > distance));
+
+ /*
+ * We know that at least one of them is a non-tag at this point.
+ * favor a tag over a non-tag.
+ */
+ if (name->from_tag != from_tag)
+ return from_tag;
+
+ /*
+ * We are now looking at two non-tags. Tiebreak to favor
+ * shorter hops.
+ */
+ if (name->distance != distance)
+ return name->distance > distance;
+
+ /* ... or tiebreak to favor older date */
+ if (name->taggerdate != taggerdate)
+ return name->taggerdate > taggerdate;
+
+ /* keep the current one if we cannot decide */
+ return 0;
+}
+
static void name_rev(struct commit *commit,
const char *tip_name, timestamp_t taggerdate,
- int generation, int distance,
+ int generation, int distance, int from_tag,
int deref)
{
struct rev_name *name = (struct rev_name *)commit->util;
struct commit_list *parents;
int parent_number = 1;
+ char *to_free = NULL;
parse_commit(commit);
return;
if (deref) {
- tip_name = xstrfmt("%s^0", tip_name);
+ tip_name = to_free = xstrfmt("%s^0", tip_name);
if (generation)
die("generation: %d, but deref?", generation);
name = xmalloc(sizeof(rev_name));
commit->util = name;
goto copy_data;
- } else if (name->taggerdate > taggerdate ||
- (name->taggerdate == taggerdate &&
- name->distance > distance)) {
+ } else if (is_better_name(name, tip_name, taggerdate,
+ generation, distance, from_tag)) {
copy_data:
name->tip_name = tip_name;
name->taggerdate = taggerdate;
name->generation = generation;
name->distance = distance;
- } else
+ name->from_tag = from_tag;
+ } else {
+ free(to_free);
return;
+ }
for (parents = commit->parents;
parents;
parent_number);
name_rev(parents->item, new_name, taggerdate, 0,
- distance + MERGE_TRAVERSAL_WEIGHT, 0);
+ distance + MERGE_TRAVERSAL_WEIGHT,
+ from_tag, 0);
} else {
name_rev(parents->item, tip_name, taggerdate,
- generation + 1, distance + 1, 0);
+ generation + 1, distance + 1,
+ from_tag, 0);
}
}
}
}
if (o && o->type == OBJ_COMMIT) {
struct commit *commit = (struct commit *)o;
+ int from_tag = starts_with(path, "refs/tags/");
+ if (taggerdate == ULONG_MAX)
+ taggerdate = ((struct commit *)o)->date;
path = name_ref_abbrev(path, can_abbreviate_output);
- name_rev(commit, xstrdup(path), taggerdate, 0, 0, deref);
+ name_rev(commit, xstrdup(path), taggerdate, 0, 0,
+ from_tag, deref);
}
return 0;
}
ref = (flags & NOTES_INIT_WRITABLE) ? t->update_ref : t->ref;
if (!starts_with(ref, "refs/notes/"))
- /* TRANSLATORS: the first %s will be replaced by a
- git notes command: 'add', 'merge', 'remove', etc.*/
+ /*
+ * TRANSLATORS: the first %s will be replaced by a git
+ * notes command: 'add', 'merge', 'remove', etc.
+ */
die(_("refusing to %s notes in %s (outside of refs/notes/)"),
subcommand, ref);
return t;
die("invalid number of threads specified (%d)",
delta_search_threads);
#ifdef NO_PTHREADS
- if (delta_search_threads != 1)
+ if (delta_search_threads != 1) {
warning("no threads support, ignoring %s", k);
+ delta_search_threads = 0;
+ }
#endif
return 0;
}
*/
static int pack_options_allow_reuse(void)
{
- return pack_to_stdout && allow_ofs_delta;
+ return pack_to_stdout &&
+ allow_ofs_delta &&
+ !ignore_packed_keep &&
+ (!local || !have_non_local_packs) &&
+ !incremental;
}
static int get_object_list_from_bitmap(struct rev_info *revs)
/* return if there are no objects missing from the unique set */
if (missing->size == 0) {
*min = unique;
+ free(missing);
return;
}
die("failed to unpack tree object %s", arg);
stage++;
}
- if (nr_trees == 0 && !read_empty)
+ if (!nr_trees && !read_empty && !opts.merge)
warning("read-tree: emptying the index with no arguments is deprecated; use --empty");
else if (nr_trees > 0 && read_empty)
die("passing trees as arguments contradicts --empty");
setup_work_tree();
if (opts.merge) {
- if (stage < 2)
- die("just how do you expect me to merge %d trees?", stage-1);
switch (stage - 1) {
+ case 0:
+ die("you must specify at least one tree to merge");
+ break;
case 1:
opts.fn = opts.prefix ? bind_merge : oneway_merge;
break;
{
const char *name = cmd->ref_name;
struct strbuf namespaced_name_buf = STRBUF_INIT;
- const char *namespaced_name, *ret;
+ static char *namespaced_name;
+ const char *ret;
struct object_id *old_oid = &cmd->old_oid;
struct object_id *new_oid = &cmd->new_oid;
}
strbuf_addf(&namespaced_name_buf, "%s%s", get_git_namespace(), name);
+ free(namespaced_name);
namespaced_name = strbuf_detach(&namespaced_name_buf, NULL);
if (is_ref_checked_out(namespaced_name)) {
url_nr = states.remote->url_nr;
}
for (i = 0; i < url_nr; i++)
- /* TRANSLATORS: the colon ':' should align with
- the one in " Fetch URL: %s" translation */
+ /*
+ * TRANSLATORS: the colon ':' should align
+ * with the one in " Fetch URL: %s"
+ * translation.
+ */
printf_ln(_(" Push URL: %s"), url[i]);
if (!i)
printf_ln(_(" Push URL: %s"), _("(no URL)"));
int keep_unreachable = 0;
const char *window = NULL, *window_memory = NULL;
const char *depth = NULL;
+ const char *threads = NULL;
const char *max_pack_size = NULL;
int no_reuse_delta = 0, no_reuse_object = 0;
int no_update_server_info = 0;
N_("same as the above, but limit memory size instead of entries count")),
OPT_STRING(0, "depth", &depth, N_("n"),
N_("limits the maximum delta depth")),
+ OPT_STRING(0, "threads", &threads, N_("n"),
+ N_("limits the maximum number of threads")),
OPT_STRING(0, "max-pack-size", &max_pack_size, N_("bytes"),
N_("maximum size of each packfile")),
OPT_BOOL(0, "pack-kept-objects", &pack_kept_objects,
argv_array_pushf(&cmd.args, "--window-memory=%s", window_memory);
if (depth)
argv_array_pushf(&cmd.args, "--depth=%s", depth);
+ if (threads)
+ argv_array_pushf(&cmd.args, "--threads=%s", threads);
if (max_pack_size)
argv_array_pushf(&cmd.args, "--max-pack-size=%s", max_pack_size);
if (no_reuse_delta)
#include "parse-options.h"
#include "unpack-trees.h"
#include "cache-tree.h"
+#include "submodule.h"
+#include "submodule-config.h"
+
+static int recurse_submodules = RECURSE_SUBMODULES_DEFAULT;
+
+static int option_parse_recurse_submodules(const struct option *opt,
+ const char *arg, int unset)
+{
+ if (unset) {
+ recurse_submodules = RECURSE_SUBMODULES_OFF;
+ return 0;
+ }
+ if (arg)
+ recurse_submodules =
+ parse_update_recurse_submodules_arg(opt->long_name,
+ arg);
+ else
+ recurse_submodules = RECURSE_SUBMODULES_ON;
+
+ return 0;
+}
static const char * const git_reset_usage[] = {
N_("git reset [--mixed | --soft | --hard | --merge | --keep] [-q] [<commit>]"),
parse_pathspec(pathspec, 0,
PATHSPEC_PREFER_FULL |
- PATHSPEC_STRIP_SUBMODULE_SLASH_CHEAP |
(patch_mode ? PATHSPEC_PREFIX_ORIGIN : 0),
prefix, argv);
}
N_("reset HEAD, index and working tree"), MERGE),
OPT_SET_INT(0, "keep", &reset_type,
N_("reset HEAD but keep local changes"), KEEP),
+ { OPTION_CALLBACK, 0, "recurse-submodules", &recurse_submodules,
+ "reset", "control recursive updating of submodules",
+ PARSE_OPT_OPTARG, option_parse_recurse_submodules },
OPT_BOOL('p', "patch", &patch_mode, N_("select hunks interactively")),
OPT_BOOL('N', "intent-to-add", &intent_to_add,
N_("record only the fact that removed paths will be added later")),
PARSE_OPT_KEEP_DASHDASH);
parse_args(&pathspec, argv, prefix, patch_mode, &rev);
+ if (recurse_submodules != RECURSE_SUBMODULES_DEFAULT) {
+ gitmodules_config();
+ git_config(submodule_config, NULL);
+ set_config_update_recurse_submodules(RECURSE_SUBMODULES_ON);
+ }
+
unborn = !strcmp(rev, "HEAD") && get_sha1("HEAD", oid.hash);
if (unborn) {
/* reset on unborn branch: treat as reset to empty tree */
die(_("index file corrupt"));
parse_pathspec(&pathspec, 0,
- PATHSPEC_PREFER_CWD |
- PATHSPEC_STRIP_SUBMODULE_SLASH_CHEAP,
+ PATHSPEC_PREFER_CWD,
prefix, argv);
refresh_index(&the_index, REFRESH_QUIET, &pathspec, NULL, NULL);
int i, result = 0;
char *ps_matched = NULL;
parse_pathspec(pathspec, 0,
- PATHSPEC_PREFER_FULL |
- PATHSPEC_STRIP_SUBMODULE_SLASH_CHEAP,
+ PATHSPEC_PREFER_FULL,
prefix, argv);
if (pathspec->nr)
find_unique_abbrev(wt->head_sha1, DEFAULT_ABBREV));
if (wt->is_detached)
strbuf_addstr(&sb, "(detached HEAD)");
- else if (wt->head_ref)
- strbuf_addf(&sb, "[%s]", shorten_unambiguous_ref(wt->head_ref, 0));
- else
+ else if (wt->head_ref) {
+ char *ref = shorten_unambiguous_ref(wt->head_ref, 0);
+ strbuf_addf(&sb, "[%s]", ref);
+ free(ref);
+ } else
strbuf_addstr(&sb, "(error)");
}
printf("%s\n", sb.buf);
#define CLOSE_LOCK (1 << 1)
extern int write_locked_index(struct index_state *, struct lock_file *lock, unsigned flags);
extern int discard_index(struct index_state *);
+extern void move_index_extensions(struct index_state *dst, struct index_state *src);
extern int unmerged_index(const struct index_state *);
extern int verify_path(const char *path);
extern int strcmp_offset(const char *s1, const char *s2, size_t *first_change);
struct object_context {
unsigned char tree[20];
- char path[PATH_MAX];
unsigned mode;
/*
* symlink_path is only used by get_tree_entry_follow_symlinks,
* and only for symlinks that point outside the repository.
*/
struct strbuf symlink_path;
+ /*
+ * If GET_SHA1_RECORD_PATH is set, this will record the path (if any)
+ * found when resolving the name. The caller is responsible for
+ * releasing the memory.
+ */
+ char *path;
};
#define GET_SHA1_QUIETLY 01
#define GET_SHA1_TREEISH 020
#define GET_SHA1_BLOB 040
#define GET_SHA1_FOLLOW_SYMLINKS 0100
+#define GET_SHA1_RECORD_PATH 0200
#define GET_SHA1_ONLY_TO_DIE 04000
#define GET_SHA1_DISAMBIGUATORS \
extern int get_sha1_treeish(const char *str, unsigned char *sha1);
extern int get_sha1_blob(const char *str, unsigned char *sha1);
extern void maybe_die_on_misspelt_object_name(const char *name, const char *prefix);
-extern int get_sha1_with_context(const char *str, unsigned flags, unsigned char *sha1, struct object_context *orc);
+extern int get_sha1_with_context(const char *str, unsigned flags, unsigned char *sha1, struct object_context *oc);
extern int get_oid(const char *str, struct object_id *oid);
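As an illustrative aside (editorial, not part of the patch), a caller that wants the in-tree path recorded by the new GET_SHA1_RECORD_PATH flag could look like the hypothetical helper below; per the comment above, the recorded path is heap-allocated and owned by the caller.

static void show_resolved_path(const char *name)	/* hypothetical example */
{
	struct object_context oc;
	unsigned char sha1[20];

	memset(&oc, 0, sizeof(oc));
	if (!get_sha1_with_context(name, GET_SHA1_RECORD_PATH, sha1, &oc)) {
		if (oc.path)
			printf("'%s' names the in-tree path '%s'\n", name, oc.path);
		free(oc.path);	/* caller releases the recorded path */
	}
}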
#include "commit-slab.h"
#include "prio-queue.h"
#include "sha1-lookup.h"
+#include "wt-status.h"
static struct commit_extra_header *read_commit_extra_header_lines(const char *buf, size_t len, const char **);
/*
* Inspect the given string and determine the true "end" of the log message, in
* order to find where to put a new Signed-off-by: line. Ignored are
- * trailing comment lines and blank lines, and also the traditional
- * "Conflicts:" block that is not commented out, so that we can use
- * "git commit -s --amend" on an existing commit that forgot to remove
- * it.
+ * trailing comment lines and blank lines. To support "git commit -s
+ * --amend" on an existing commit, we also ignore "Conflicts:". To
+ * support "git commit -v", we truncate at cut lines.
*
* Returns the number of bytes from the tail to ignore, to be fed as
* the second parameter to append_signoff().
int boc = 0;
int bol = 0;
int in_old_conflicts_block = 0;
+ size_t cutoff = wt_status_locate_end(buf, len);
- while (bol < len) {
+ while (bol < cutoff) {
const char *next_line = memchr(buf + bol, '\n', len - bol);
if (!next_line)
}
bol = next_line - buf;
}
- return boc ? len - boc : 0;
+ return boc ? len - boc : len - cutoff;
}
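A minimal sketch of the new cut-line behaviour (editorial, not part of the patch): with a "git commit -v" style buffer, everything from the scissors line onward counts as ignorable tail, and the returned count is what the comment above says to feed to append_signoff(). The function modified here is ignore_non_trailer() in commit.c.

static void example_cut_line(void)	/* hypothetical example */
{
	const char *msg =
		"subject\n"
		"\n"
		"body text\n"
		"# ------------------------ >8 ------------------------\n"
		"diff --git a/file b/file\n";
	int tail = ignore_non_trailer(msg, strlen(msg));

	/* feed "tail" as the second parameter to append_signoff() */
	(void)tail;
}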
return p+1;
}
-/*
- * Splits the PATH into parts.
- */
-static char **get_path_split(void)
-{
- char *p, **path, *envpath = mingw_getenv("PATH");
- int i, n = 0;
-
- if (!envpath || !*envpath)
- return NULL;
-
- envpath = xstrdup(envpath);
- p = envpath;
- while (p) {
- char *dir = p;
- p = strchr(p, ';');
- if (p) *p++ = '\0';
- if (*dir) { /* not earlier, catches series of ; */
- ++n;
- }
- }
- if (!n)
- return NULL;
-
- ALLOC_ARRAY(path, n + 1);
- p = envpath;
- i = 0;
- do {
- if (*p)
- path[i++] = xstrdup(p);
- p = p+strlen(p)+1;
- } while (i < n);
- path[i] = NULL;
-
- free(envpath);
-
- return path;
-}
-
-static void free_path_split(char **path)
-{
- char **p = path;
-
- if (!path)
- return;
-
- while (*p)
- free(*p++);
- free(path);
-}
-
/*
* exe_only means that we only want to detect .exe files, but not scripts
* (which do not have an extension)
*/
-static char *lookup_prog(const char *dir, const char *cmd, int isexe, int exe_only)
+static char *lookup_prog(const char *dir, int dirlen, const char *cmd,
+ int isexe, int exe_only)
{
char path[MAX_PATH];
- snprintf(path, sizeof(path), "%s/%s.exe", dir, cmd);
+ snprintf(path, sizeof(path), "%.*s\\%s.exe", dirlen, dir, cmd);
if (!isexe && access(path, F_OK) == 0)
return xstrdup(path);
* Determines the absolute path of cmd by searching the directories in PATH.
* If cmd contains a slash or backslash, no lookup is performed.
*/
-static char *path_lookup(const char *cmd, char **path, int exe_only)
+static char *path_lookup(const char *cmd, int exe_only)
{
+ const char *path;
char *prog = NULL;
int len = strlen(cmd);
int isexe = len >= 4 && !strcasecmp(cmd+len-4, ".exe");
if (strchr(cmd, '/') || strchr(cmd, '\\'))
- prog = xstrdup(cmd);
+ return xstrdup(cmd);
+
+ path = mingw_getenv("PATH");
+ if (!path)
+ return NULL;
- while (!prog && *path)
- prog = lookup_prog(*path++, cmd, isexe, exe_only);
+ while (!prog) {
+ const char *sep = strchrnul(path, ';');
+ int dirlen = sep - path;
+ if (dirlen)
+ prog = lookup_prog(path, dirlen, cmd, isexe, exe_only);
+ if (!*sep)
+ break;
+ path = sep + 1;
+ }
return prog;
}
int fhin, int fhout, int fherr)
{
pid_t pid;
- char **path = get_path_split();
- char *prog = path_lookup(cmd, path, 0);
+ char *prog = path_lookup(cmd, 0);
if (!prog) {
errno = ENOENT;
if (interpr) {
const char *argv0 = argv[0];
- char *iprog = path_lookup(interpr, path, 1);
+ char *iprog = path_lookup(interpr, 1);
argv[0] = prog;
if (!iprog) {
errno = ENOENT;
fhin, fhout, fherr);
free(prog);
}
- free_path_split(path);
return pid;
}
static int try_shell_exec(const char *cmd, char *const *argv)
{
const char *interpr = parse_interpreter(cmd);
- char **path;
char *prog;
int pid = 0;
if (!interpr)
return 0;
- path = get_path_split();
- prog = path_lookup(interpr, path, 1);
+ prog = path_lookup(interpr, 1);
if (prog) {
int argc = 0;
const char **argv2;
free(prog);
free(argv2);
}
- free_path_split(path);
return pid;
}
int mingw_execvp(const char *cmd, char *const *argv)
{
- char **path = get_path_split();
- char *prog = path_lookup(cmd, path, 0);
+ char *prog = path_lookup(cmd, 0);
if (prog) {
mingw_execv(prog, argv);
} else
errno = ENOENT;
- free_path_split(path);
return -1;
}
(isalpha(*(path)) && (path)[1] == ':' ? 2 : 0)
int mingw_skip_dos_drive_prefix(char **path);
#define skip_dos_drive_prefix mingw_skip_dos_drive_prefix
-#define is_dir_sep(c) ((c) == '/' || (c) == '\\')
+static inline int mingw_is_dir_sep(int c)
+{
+ return c == '/' || c == '\\';
+}
+#define is_dir_sep mingw_is_dir_sep
static inline char *mingw_find_last_dir_sep(const char *path)
{
char *ret = NULL;
if (!fd) {
if (!GetConsoleMode(hcon, &mode))
return 0;
+ /*
+ * This code path is only reached if there is no console
+ * attached to stdout/stderr, i.e. we will not need to output
+ * any text to any console; we might therefore just as well
+ * use black as the foreground color.
+ */
+ sbi.wAttributes = 0;
} else if (!GetConsoleScreenBufferInfo(hcon, &sbi))
return 0;
/* convert utf-8 to utf-16 */
int wlen = xutftowcsn(wbuf, (char*) str, ARRAY_SIZE(wbuf), len);
+ if (wlen < 0) {
+ wchar_t *err = L"[invalid]";
+ WriteConsoleW(console, err, wcslen(err), &dummy, NULL);
+ return;
+ }
/* write directly to console */
WriteConsoleW(console, wbuf, wlen, &dummy, NULL);
struct strbuf pattern = STRBUF_INIT;
int ret = 0, prefix;
const char *git_dir;
+ int already_tried_absolute = 0;
if (opts->git_dir)
git_dir = opts->git_dir;
strbuf_add(&pattern, cond, cond_len);
prefix = prepare_include_condition_pattern(&pattern);
+again:
if (prefix < 0)
goto done;
ret = !wildmatch(pattern.buf + prefix, text.buf + prefix,
icase ? WM_CASEFOLD : 0, NULL);
+ if (!ret && !already_tried_absolute) {
+ /*
+ * We've tried e.g. matching gitdir:~/work, but if
+ * ~/work is a symlink to /mnt/storage/work,
+ * strbuf_realpath() will expand it, so the rule won't
+ * match. Let's match against a
+ * strbuf_add_absolute_path() version of the path,
+ * which'll do the right thing.
+ */
+ strbuf_reset(&text);
+ strbuf_add_absolute_path(&text, git_dir);
+ already_tried_absolute = 1;
+ goto again;
+ }
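	/*
	 * Illustrative note (editorial, not in the original source): with
	 *
	 *	[includeIf "gitdir:~/work/"]
	 *		path = .gitconfig-work
	 *
	 * and ~/work being a symlink to /mnt/storage/work (example paths),
	 * the first pass compares the realpath()-style
	 * "/mnt/storage/work/repo/.git/" and misses; this second pass
	 * compares the merely absolute "/home/user/work/repo/.git/" and
	 * matches.
	 */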
done:
strbuf_release(&pattern);
strbuf_release(&text);
struct lock_file *lock;
int out_fd;
char buf[1024];
- FILE *config_file;
+ FILE *config_file = NULL;
struct stat st;
if (new_name && !section_name_is_ok(new_name)) {
}
}
fclose(config_file);
+ config_file = NULL;
commit_and_out:
if (commit_lock_file(lock) < 0)
ret = error_errno("could not write config file %s",
config_filename);
out:
+ if (config_file)
+ fclose(config_file);
rollback_lock_file(lock);
out_no_rollback:
free(filename_buf);
AS_HELP_STRING([], [ARG can be prefix for openssl library and headers]),
GIT_PARSE_WITH([openssl]))
-# Define USE_LIBPCRE if you have and want to use libpcre. git-grep will be
-# able to use Perl-compatible regular expressions.
+# Define USE_LIBPCRE if you have and want to use libpcre. Various
+# commands such as log and grep offer runtime options to use
+# Perl-compatible regular expressions instead of standard or extended
+# POSIX regular expressions.
#
# Define LIBPCREDIR=/foo/bar if your libpcre header and library files are in
# /foo/bar/include and /foo/bar/lib directories.
GIT_CONF_SUBST([NO_OPENSSL])
#
-# Define USE_LIBPCRE if you have and want to use libpcre. git-grep will be
-# able to use Perl-compatible regular expressions.
+# Define USE_LIBPCRE if you have and want to use libpcre. Various
+# commands such as log and grep offer runtime options to use
+# Perl-compatible regular expressions instead of standard or extended
+# POSIX regular expressions.
#
if test -n "$USE_LIBPCRE"; then
--- /dev/null
+*.bash eol=lf
color.status.untracked
color.status.updated
color.ui
+ commit.cleanup
+ commit.gpgSign
commit.status
commit.template
+ commit.verbose
core.abbrev
core.askpass
core.attributesfile
_git_stash ()
{
local save_opts='--all --keep-index --no-keep-index --quiet --patch --include-untracked'
- local subcommands='save list show apply clear drop pop create branch'
+ local subcommands='push save list show apply clear drop pop create branch'
local subcommand="$(__git_find_on_cmdline "$subcommands")"
if [ -z "$subcommand" ]; then
case "$cur" in
esac
else
case "$subcommand,$cur" in
+ push,--*)
+ __gitcomp "$save_opts --message"
+ ;;
save,--*)
__gitcomp "$save_opts"
;;
[url "persistent-http"]
insteadof = http
+You may also want to allow the use of the persistent-https helper for
+submodule URLs (since any https URLs pointing to submodules will be
+rewritten, and Git's out-of-the-box defaults forbid submodules from
+using unknown remote helpers):
+
+[protocol "persistent-https"]
+ allow = always
+[protocol "persistent-http"]
+ allow = always
+
#####################################################################
# BUILDING FROM SOURCE
--- /dev/null
+/git-new-workdir eol=lf
#include "quote.h"
#include "sigchain.h"
#include "pkt-line.h"
+#include "sub-process.h"
/*
* convert.c - convert a file when checking it out and checking it in.
#define CAP_SMUDGE (1u<<1)
struct cmd2process {
- struct hashmap_entry ent; /* must be the first member! */
+ struct subprocess_entry subprocess; /* must be the first member! */
unsigned int supported_capabilities;
- const char *cmd;
- struct child_process process;
};
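/*
 * Illustrative note (editorial, not in the original source): keeping the
 * generic subprocess_entry as the first member lets the hashmap-based
 * sub-process API hand back entries that can simply be cast to the
 * enclosing type, e.g.
 *
 *	struct cmd2process *entry =
 *		(struct cmd2process *)subprocess_find_entry(&subprocess_map, cmd);
 */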
-static int cmd_process_map_initialized;
-static struct hashmap cmd_process_map;
-
-static int cmd2process_cmp(const struct cmd2process *e1,
- const struct cmd2process *e2,
- const void *unused)
-{
- return strcmp(e1->cmd, e2->cmd);
-}
-
-static struct cmd2process *find_multi_file_filter_entry(struct hashmap *hashmap, const char *cmd)
-{
- struct cmd2process key;
- hashmap_entry_init(&key, strhash(cmd));
- key.cmd = cmd;
- return hashmap_get(hashmap, &key, NULL);
-}
+static int subprocess_map_initialized;
+static struct hashmap subprocess_map;
-static int packet_write_list(int fd, const char *line, ...)
+static int start_multi_file_filter_fn(struct subprocess_entry *subprocess)
{
- va_list args;
int err;
- va_start(args, line);
- for (;;) {
- if (!line)
- break;
- if (strlen(line) > LARGE_PACKET_DATA_MAX)
- return -1;
- err = packet_write_fmt_gently(fd, "%s\n", line);
- if (err)
- return err;
- line = va_arg(args, const char*);
- }
- va_end(args);
- return packet_flush_gently(fd);
-}
-
-static void read_multi_file_filter_status(int fd, struct strbuf *status)
-{
- struct strbuf **pair;
- char *line;
- for (;;) {
- line = packet_read_line(fd, NULL);
- if (!line)
- break;
- pair = strbuf_split_str(line, '=', 2);
- if (pair[0] && pair[0]->len && pair[1]) {
- /* the last "status=<foo>" line wins */
- if (!strcmp(pair[0]->buf, "status=")) {
- strbuf_reset(status);
- strbuf_addbuf(status, pair[1]);
- }
- }
- strbuf_list_free(pair);
- }
-}
-
-static void kill_multi_file_filter(struct hashmap *hashmap, struct cmd2process *entry)
-{
- if (!entry)
- return;
-
- entry->process.clean_on_exit = 0;
- kill(entry->process.pid, SIGTERM);
- finish_command(&entry->process);
-
- hashmap_remove(hashmap, entry, NULL);
- free(entry);
-}
-
-static void stop_multi_file_filter(struct child_process *process)
-{
- sigchain_push(SIGPIPE, SIG_IGN);
- /* Closing the pipe signals the filter to initiate a shutdown. */
- close(process->in);
- close(process->out);
- sigchain_pop(SIGPIPE);
- /* Finish command will wait until the shutdown is complete. */
- finish_command(process);
-}
-
-static struct cmd2process *start_multi_file_filter(struct hashmap *hashmap, const char *cmd)
-{
- int err;
- struct cmd2process *entry;
- struct child_process *process;
- const char *argv[] = { cmd, NULL };
+ struct cmd2process *entry = (struct cmd2process *)subprocess;
struct string_list cap_list = STRING_LIST_INIT_NODUP;
char *cap_buf;
const char *cap_name;
-
- entry = xmalloc(sizeof(*entry));
- entry->cmd = cmd;
- entry->supported_capabilities = 0;
- process = &entry->process;
-
- child_process_init(process);
- process->argv = argv;
- process->use_shell = 1;
- process->in = -1;
- process->out = -1;
- process->clean_on_exit = 1;
- process->clean_on_exit_handler = stop_multi_file_filter;
-
- if (start_command(process)) {
- error("cannot fork to run external filter '%s'", cmd);
- return NULL;
- }
-
- hashmap_entry_init(entry, strhash(cmd));
+ struct child_process *process = &subprocess->process;
+ const char *cmd = subprocess->cmd;
sigchain_push(SIGPIPE, SIG_IGN);
- err = packet_write_list(process->in, "git-filter-client", "version=2", NULL);
+ err = packet_writel(process->in, "git-filter-client", "version=2", NULL);
if (err)
goto done;
if (err)
goto done;
- err = packet_write_list(process->in, "capability=clean", "capability=smudge", NULL);
+ err = packet_writel(process->in, "capability=clean", "capability=smudge", NULL);
for (;;) {
cap_buf = packet_read_line(process->out, NULL);
done:
sigchain_pop(SIGPIPE);
- if (err || errno == EPIPE) {
- error("initialization for external filter '%s' failed", cmd);
- kill_multi_file_filter(hashmap, entry);
- return NULL;
- }
-
- hashmap_add(hashmap, entry);
- return entry;
+ return err;
}
static int apply_multi_file_filter(const char *path, const char *src, size_t len,
struct strbuf filter_status = STRBUF_INIT;
const char *filter_type;
- if (!cmd_process_map_initialized) {
- cmd_process_map_initialized = 1;
- hashmap_init(&cmd_process_map, (hashmap_cmp_fn) cmd2process_cmp, 0);
+ if (!subprocess_map_initialized) {
+ subprocess_map_initialized = 1;
+ hashmap_init(&subprocess_map, (hashmap_cmp_fn) cmd2process_cmp, 0);
entry = NULL;
} else {
- entry = find_multi_file_filter_entry(&cmd_process_map, cmd);
+ entry = (struct cmd2process *)subprocess_find_entry(&subprocess_map, cmd);
}
fflush(NULL);
if (!entry) {
- entry = start_multi_file_filter(&cmd_process_map, cmd);
- if (!entry)
+ entry = xmalloc(sizeof(*entry));
+ entry->supported_capabilities = 0;
+
+ if (subprocess_start(&subprocess_map, &entry->subprocess, cmd, start_multi_file_filter_fn)) {
+ free(entry);
return 0;
+ }
}
- process = &entry->process;
+ process = &entry->subprocess.process;
if (!(wanted_capability & entry->supported_capabilities))
return 0;
if (err)
goto done;
- read_multi_file_filter_status(process->out, &filter_status);
+ err = subprocess_read_status(process->out, &filter_status);
+ if (err)
+ goto done;
+
err = strcmp(filter_status.buf, "success");
if (err)
goto done;
if (err)
goto done;
- read_multi_file_filter_status(process->out, &filter_status);
+ err = subprocess_read_status(process->out, &filter_status);
+ if (err)
+ goto done;
+
err = strcmp(filter_status.buf, "success");
done:
sigchain_pop(SIGPIPE);
- if (err || errno == EPIPE) {
+ if (err) {
if (!strcmp(filter_status.buf, "error")) {
/* The filter signaled a problem with the file. */
} else if (!strcmp(filter_status.buf, "abort")) {
* Force shutdown and restart if another blob requires filtering.
*/
error("external filter '%s' failed", cmd);
- kill_multi_file_filter(&cmd_process_map, entry);
+ subprocess_stop(&subprocess_map, &entry->subprocess);
+ free(entry);
}
} else {
strbuf_swap(dst, &nbuf);
#endif
static int diff_detect_rename_default;
-static int diff_indent_heuristic; /* experimental */
+static int diff_indent_heuristic = 1;
static int diff_rename_limit_default = 400;
static int diff_suppress_blank_empty;
static int diff_use_color_default = -1;
return 0;
}
- if (git_diff_heuristic_config(var, value, cb) < 0)
- return -1;
-
if (!strcmp(var, "diff.wserrorhighlight")) {
int val = parse_ws_error_highlight(value);
if (val < 0)
if (starts_with(var, "submodule."))
return parse_submodule_config_option(var, value);
+ if (git_diff_heuristic_config(var, value, cb) < 0)
+ return -1;
+
return git_default_config(var, value, cb);
}
* Copyright (C) Linus Torvalds, 2005-2006
* Junio Hamano, 2005-2006
*/
+#define NO_THE_INDEX_COMPATIBILITY_MACROS
#include "cache.h"
#include "dir.h"
#include "attr.h"
};
static enum path_treatment read_directory_recursive(struct dir_struct *dir,
- const char *path, int len, struct untracked_cache_dir *untracked,
+ struct index_state *istate, const char *path, int len,
+ struct untracked_cache_dir *untracked,
int check_only, const struct pathspec *pathspec);
-static int get_dtype(struct dirent *de, const char *path, int len);
+static int get_dtype(struct dirent *de, struct index_state *istate,
+ const char *path, int len);
int fspathcmp(const char *a, const char *b)
{
return len ? xmemdupz(pathspec->items[0].match, len) : NULL;
}
-int fill_directory(struct dir_struct *dir, const struct pathspec *pathspec)
+int fill_directory(struct dir_struct *dir,
+ struct index_state *istate,
+ const struct pathspec *pathspec)
{
const char *prefix;
size_t prefix_len;
prefix = prefix_len ? pathspec->items[0].match : "";
/* Read the directory and prune it */
- read_directory(dir, prefix, prefix_len, pathspec);
+ read_directory(dir, istate, prefix, prefix_len, pathspec);
return prefix_len;
}
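As an illustrative aside (editorial, not part of the patch): with the index to consult now an explicit parameter, a hypothetical caller reads as below; existing in-tree callers simply pass &the_index.

static void example_list_untracked(const struct pathspec *pathspec)	/* hypothetical */
{
	struct dir_struct dir;
	int i;

	memset(&dir, 0, sizeof(dir));
	setup_standard_excludes(&dir);
	fill_directory(&dir, &the_index, pathspec);	/* was: fill_directory(&dir, pathspec) */
	for (i = 0; i < dir.nr; i++)
		printf("%s\n", dir.entries[i]->name);
}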
x->el = el;
}
-static void *read_skip_worktree_file_from_index(const char *path, size_t *size,
+static void *read_skip_worktree_file_from_index(const struct index_state *istate,
+ const char *path, size_t *size,
struct sha1_stat *sha1_stat)
{
int pos, len;
void *data;
len = strlen(path);
- pos = cache_name_pos(path, len);
+ pos = index_name_pos(istate, path, len);
if (pos < 0)
return NULL;
- if (!ce_skip_worktree(active_cache[pos]))
+ if (!ce_skip_worktree(istate->cache[pos]))
return NULL;
- data = read_sha1_file(active_cache[pos]->oid.hash, &type, &sz);
+ data = read_sha1_file(istate->cache[pos]->oid.hash, &type, &sz);
if (!data || type != OBJ_BLOB) {
free(data);
return NULL;
*size = xsize_t(sz);
if (sha1_stat) {
memset(&sha1_stat->stat, 0, sizeof(sha1_stat->stat));
- hashcpy(sha1_stat->sha1, active_cache[pos]->oid.hash);
+ hashcpy(sha1_stat->sha1, istate->cache[pos]->oid.hash);
}
return data;
}
/*
* Given a file with name "fname", read it (either from disk, or from
- * the index if "check_index" is non-zero), parse it and store the
+ * an index if 'istate' is non-null), parse it and store the
* exclude rules in "el".
*
* If "ss" is not NULL, compute SHA-1 of the exclude file and fill
* ss_valid is non-zero, "ss" must contain a good value as input.
*/
static int add_excludes(const char *fname, const char *base, int baselen,
- struct exclude_list *el, int check_index,
+ struct exclude_list *el,
+ struct index_state *istate,
struct sha1_stat *sha1_stat)
{
struct stat st;
warn_on_inaccessible(fname);
if (0 <= fd)
close(fd);
- if (!check_index ||
- (buf = read_skip_worktree_file_from_index(fname, &size, sha1_stat)) == NULL)
+ if (!istate ||
+ (buf = read_skip_worktree_file_from_index(istate, fname, &size, sha1_stat)) == NULL)
return -1;
if (size == 0) {
free(buf);
if (sha1_stat) {
int pos;
if (sha1_stat->valid &&
- !match_stat_data_racy(&the_index, &sha1_stat->stat, &st))
+ !match_stat_data_racy(istate, &sha1_stat->stat, &st))
; /* no content change, ss->sha1 still good */
- else if (check_index &&
- (pos = cache_name_pos(fname, strlen(fname))) >= 0 &&
- !ce_stage(active_cache[pos]) &&
- ce_uptodate(active_cache[pos]) &&
+ else if (istate &&
+ (pos = index_name_pos(istate, fname, strlen(fname))) >= 0 &&
+ !ce_stage(istate->cache[pos]) &&
+ ce_uptodate(istate->cache[pos]) &&
!would_convert_to_git(fname))
hashcpy(sha1_stat->sha1,
- active_cache[pos]->oid.hash);
+ istate->cache[pos]->oid.hash);
else
hash_sha1_file(buf, size, "blob", sha1_stat->sha1);
fill_stat_data(&sha1_stat->stat, &st);
int add_excludes_from_file_to_list(const char *fname, const char *base,
int baselen, struct exclude_list *el,
- int check_index)
+ struct index_state *istate)
{
- return add_excludes(fname, base, baselen, el, check_index, NULL);
+ return add_excludes(fname, base, baselen, el, istate, NULL);
}
struct exclude_list *add_exclude_list(struct dir_struct *dir,
if (!dir->untracked)
dir->unmanaged_exclude_files++;
el = add_exclude_list(dir, EXC_FILE, fname);
- if (add_excludes(fname, "", 0, el, 0, sha1_stat) < 0)
+ if (add_excludes(fname, "", 0, el, NULL, sha1_stat) < 0)
die("cannot use %s as an exclude file", fname);
}
int pathlen,
const char *basename,
int *dtype,
- struct exclude_list *el)
+ struct exclude_list *el,
+ struct index_state *istate)
{
struct exclude *exc = NULL; /* undecided */
int i;
if (x->flags & EXC_FLAG_MUSTBEDIR) {
if (*dtype == DT_UNKNOWN)
- *dtype = get_dtype(NULL, pathname, pathlen);
+ *dtype = get_dtype(NULL, istate, pathname, pathlen);
if (*dtype != DT_DIR)
continue;
}
*/
int is_excluded_from_list(const char *pathname,
int pathlen, const char *basename, int *dtype,
- struct exclude_list *el)
+ struct exclude_list *el, struct index_state *istate)
{
struct exclude *exclude;
- exclude = last_exclude_matching_from_list(pathname, pathlen, basename, dtype, el);
+ exclude = last_exclude_matching_from_list(pathname, pathlen, basename,
+ dtype, el, istate);
if (exclude)
return exclude->flags & EXC_FLAG_NEGATIVE ? 0 : 1;
return -1; /* undecided */
}
static struct exclude *last_exclude_matching_from_lists(struct dir_struct *dir,
+ struct index_state *istate,
const char *pathname, int pathlen, const char *basename,
int *dtype_p)
{
for (j = group->nr - 1; j >= 0; j--) {
exclude = last_exclude_matching_from_list(
pathname, pathlen, basename, dtype_p,
- &group->el[j]);
+ &group->el[j], istate);
if (exclude)
return exclude;
}
* Loads the per-directory exclude list for the substring of base
* which has a char length of baselen.
*/
-static void prep_exclude(struct dir_struct *dir, const char *base, int baselen)
+static void prep_exclude(struct dir_struct *dir,
+ struct index_state *istate,
+ const char *base, int baselen)
{
struct exclude_list_group *group;
struct exclude_list *el;
int dt = DT_DIR;
dir->basebuf.buf[stk->baselen - 1] = 0;
dir->exclude = last_exclude_matching_from_lists(dir,
+ istate,
dir->basebuf.buf, stk->baselen - 1,
dir->basebuf.buf + current, &dt);
dir->basebuf.buf[stk->baselen - 1] = '/';
strbuf_addbuf(&sb, &dir->basebuf);
strbuf_addstr(&sb, dir->exclude_per_dir);
el->src = strbuf_detach(&sb, NULL);
- add_excludes(el->src, el->src, stk->baselen, el, 1,
+ add_excludes(el->src, el->src, stk->baselen, el, istate,
untracked ? &sha1_stat : NULL);
}
/*
* undecided.
*/
struct exclude *last_exclude_matching(struct dir_struct *dir,
- const char *pathname,
- int *dtype_p)
+ struct index_state *istate,
+ const char *pathname,
+ int *dtype_p)
{
int pathlen = strlen(pathname);
const char *basename = strrchr(pathname, '/');
basename = (basename) ? basename+1 : pathname;
- prep_exclude(dir, pathname, basename-pathname);
+ prep_exclude(dir, istate, pathname, basename-pathname);
if (dir->exclude)
return dir->exclude;
- return last_exclude_matching_from_lists(dir, pathname, pathlen,
+ return last_exclude_matching_from_lists(dir, istate, pathname, pathlen,
basename, dtype_p);
}
* scans all exclude lists to determine whether pathname is excluded.
* Returns 1 if true, otherwise 0.
*/
-int is_excluded(struct dir_struct *dir, const char *pathname, int *dtype_p)
+int is_excluded(struct dir_struct *dir, struct index_state *istate,
+ const char *pathname, int *dtype_p)
{
struct exclude *exclude =
- last_exclude_matching(dir, pathname, dtype_p);
+ last_exclude_matching(dir, istate, pathname, dtype_p);
if (exclude)
return exclude->flags & EXC_FLAG_NEGATIVE ? 0 : 1;
return 0;
return ent;
}
-static struct dir_entry *dir_add_name(struct dir_struct *dir, const char *pathname, int len)
+static struct dir_entry *dir_add_name(struct dir_struct *dir,
+ struct index_state *istate,
+ const char *pathname, int len)
{
- if (cache_file_exists(pathname, len, ignore_case))
+ if (index_file_exists(istate, pathname, len, ignore_case))
return NULL;
ALLOC_GROW(dir->entries, dir->nr+1, dir->alloc);
return dir->entries[dir->nr++] = dir_entry_new(pathname, len);
}
-struct dir_entry *dir_add_ignored(struct dir_struct *dir, const char *pathname, int len)
+struct dir_entry *dir_add_ignored(struct dir_struct *dir,
+ struct index_state *istate,
+ const char *pathname, int len)
{
- if (!cache_name_is_other(pathname, len))
+ if (!index_name_is_other(istate, pathname, len))
return NULL;
ALLOC_GROW(dir->ignored, dir->ignored_nr+1, dir->ignored_alloc);
* the directory name; instead, use the case insensitive
* directory hash.
*/
-static enum exist_status directory_exists_in_index_icase(const char *dirname, int len)
+static enum exist_status directory_exists_in_index_icase(struct index_state *istate,
+ const char *dirname, int len)
{
struct cache_entry *ce;
- if (cache_dir_exists(dirname, len))
+ if (index_dir_exists(istate, dirname, len))
return index_directory;
- ce = cache_file_exists(dirname, len, ignore_case);
+ ce = index_file_exists(istate, dirname, len, ignore_case);
if (ce && S_ISGITLINK(ce->ce_mode))
return index_gitdir;
* the files it contains) will sort with the '/' at the
* end.
*/
-static enum exist_status directory_exists_in_index(const char *dirname, int len)
+static enum exist_status directory_exists_in_index(struct index_state *istate,
+ const char *dirname, int len)
{
int pos;
if (ignore_case)
- return directory_exists_in_index_icase(dirname, len);
+ return directory_exists_in_index_icase(istate, dirname, len);
- pos = cache_name_pos(dirname, len);
+ pos = index_name_pos(istate, dirname, len);
if (pos < 0)
pos = -pos-1;
- while (pos < active_nr) {
- const struct cache_entry *ce = active_cache[pos++];
+ while (pos < istate->cache_nr) {
+ const struct cache_entry *ce = istate->cache[pos++];
unsigned char endchar;
if (strncmp(ce->name, dirname, len))
* (c) otherwise, we recurse into it.
*/
static enum path_treatment treat_directory(struct dir_struct *dir,
+ struct index_state *istate,
struct untracked_cache_dir *untracked,
const char *dirname, int len, int baselen, int exclude,
const struct pathspec *pathspec)
{
/* The "len-1" is to strip the final '/' */
- switch (directory_exists_in_index(dirname, len-1)) {
+ switch (directory_exists_in_index(istate, dirname, len-1)) {
case index_directory:
return path_recurse;
untracked = lookup_untracked(dir->untracked, untracked,
dirname + baselen, len - baselen);
- return read_directory_recursive(dir, dirname, len,
+ return read_directory_recursive(dir, istate, dirname, len,
untracked, 1, pathspec);
}
return 0;
}
-static int get_index_dtype(const char *path, int len)
+static int get_index_dtype(struct index_state *istate,
+ const char *path, int len)
{
int pos;
const struct cache_entry *ce;
- ce = cache_file_exists(path, len, 0);
+ ce = index_file_exists(istate, path, len, 0);
if (ce) {
if (!ce_uptodate(ce))
return DT_UNKNOWN;
}
/* Try to look it up as a directory */
- pos = cache_name_pos(path, len);
+ pos = index_name_pos(istate, path, len);
if (pos >= 0)
return DT_UNKNOWN;
pos = -pos-1;
- while (pos < active_nr) {
- ce = active_cache[pos++];
+ while (pos < istate->cache_nr) {
+ ce = istate->cache[pos++];
if (strncmp(ce->name, path, len))
break;
if (ce->name[len] > '/')
return DT_UNKNOWN;
}
-static int get_dtype(struct dirent *de, const char *path, int len)
+static int get_dtype(struct dirent *de, struct index_state *istate,
+ const char *path, int len)
{
int dtype = de ? DTYPE(de) : DT_UNKNOWN;
struct stat st;
if (dtype != DT_UNKNOWN)
return dtype;
- dtype = get_index_dtype(path, len);
+ dtype = get_index_dtype(istate, path, len);
if (dtype != DT_UNKNOWN)
return dtype;
if (lstat(path, &st))
static enum path_treatment treat_one_path(struct dir_struct *dir,
struct untracked_cache_dir *untracked,
+ struct index_state *istate,
struct strbuf *path,
int baselen,
const struct pathspec *pathspec,
int dtype, struct dirent *de)
{
int exclude;
- int has_path_in_index = !!cache_file_exists(path->buf, path->len, ignore_case);
+ int has_path_in_index = !!index_file_exists(istate, path->buf, path->len, ignore_case);
if (dtype == DT_UNKNOWN)
- dtype = get_dtype(de, path->buf, path->len);
+ dtype = get_dtype(de, istate, path->buf, path->len);
/* Always exclude indexed files */
if (dtype != DT_DIR && has_path_in_index)
if ((dir->flags & DIR_COLLECT_KILLED_ONLY) &&
(dtype == DT_DIR) &&
!has_path_in_index &&
- (directory_exists_in_index(path->buf, path->len) == index_nonexistent))
+ (directory_exists_in_index(istate, path->buf, path->len) == index_nonexistent))
return path_none;
- exclude = is_excluded(dir, path->buf, &dtype);
+ exclude = is_excluded(dir, istate, path->buf, &dtype);
/*
* Excluded? If we don't explicitly want to show
return path_none;
case DT_DIR:
strbuf_addch(path, '/');
- return treat_directory(dir, untracked, path->buf, path->len,
+ return treat_directory(dir, istate, untracked, path->buf, path->len,
baselen, exclude, pathspec);
case DT_REG:
case DT_LNK:
static enum path_treatment treat_path_fast(struct dir_struct *dir,
struct untracked_cache_dir *untracked,
struct cached_dir *cdir,
+ struct index_state *istate,
struct strbuf *path,
int baselen,
const struct pathspec *pathspec)
* to its bottom. Verify again the same set of directories
* with check_only set.
*/
- return read_directory_recursive(dir, path->buf, path->len,
+ return read_directory_recursive(dir, istate, path->buf, path->len,
cdir->ucd, 1, pathspec);
/*
* We get path_recurse in the first run when
static enum path_treatment treat_path(struct dir_struct *dir,
struct untracked_cache_dir *untracked,
struct cached_dir *cdir,
+ struct index_state *istate,
struct strbuf *path,
int baselen,
const struct pathspec *pathspec)
struct dirent *de = cdir->de;
if (!de)
- return treat_path_fast(dir, untracked, cdir, path,
+ return treat_path_fast(dir, untracked, cdir, istate, path,
baselen, pathspec);
if (is_dot_or_dotdot(de->d_name) || !strcmp(de->d_name, ".git"))
return path_none;
return path_none;
dtype = DTYPE(de);
- return treat_one_path(dir, untracked, path, baselen, pathspec, dtype, de);
+ return treat_one_path(dir, untracked, istate, path, baselen, pathspec, dtype, de);
}
static void add_untracked(struct untracked_cache_dir *dir, const char *name)
static int valid_cached_dir(struct dir_struct *dir,
struct untracked_cache_dir *untracked,
+ struct index_state *istate,
struct strbuf *path,
int check_only)
{
return 0;
}
if (!untracked->valid ||
- match_stat_data_racy(&the_index, &untracked->stat_data, &st)) {
+ match_stat_data_racy(istate, &untracked->stat_data, &st)) {
if (untracked->valid)
invalidate_directory(dir->untracked, untracked);
fill_stat_data(&untracked->stat_data, &st);
*/
if (path->len && path->buf[path->len - 1] != '/') {
strbuf_addch(path, '/');
- prep_exclude(dir, path->buf, path->len);
+ prep_exclude(dir, istate, path->buf, path->len);
strbuf_setlen(path, path->len - 1);
} else
- prep_exclude(dir, path->buf, path->len);
+ prep_exclude(dir, istate, path->buf, path->len);
/* hopefully prep_exclude() hasn't invalidated this entry... */
return untracked->valid;
static int open_cached_dir(struct cached_dir *cdir,
struct dir_struct *dir,
struct untracked_cache_dir *untracked,
+ struct index_state *istate,
struct strbuf *path,
int check_only)
{
memset(cdir, 0, sizeof(*cdir));
cdir->untracked = untracked;
- if (valid_cached_dir(dir, untracked, path, check_only))
+ if (valid_cached_dir(dir, untracked, istate, path, check_only))
return 0;
cdir->fdir = opendir(path->len ? path->buf : ".");
if (dir->untracked)
* Returns the most significant path_treatment value encountered in the scan.
*/
static enum path_treatment read_directory_recursive(struct dir_struct *dir,
- const char *base, int baselen,
- struct untracked_cache_dir *untracked, int check_only,
- const struct pathspec *pathspec)
+ struct index_state *istate, const char *base, int baselen,
+ struct untracked_cache_dir *untracked, int check_only,
+ const struct pathspec *pathspec)
{
struct cached_dir cdir;
enum path_treatment state, subdir_state, dir_state = path_none;
strbuf_add(&path, base, baselen);
- if (open_cached_dir(&cdir, dir, untracked, &path, check_only))
+ if (open_cached_dir(&cdir, dir, untracked, istate, &path, check_only))
goto out;
if (untracked)
while (!read_cached_dir(&cdir)) {
/* check how the file or directory should be treated */
- state = treat_path(dir, untracked, &cdir, &path,
+ state = treat_path(dir, untracked, &cdir, istate, &path,
baselen, pathspec);
if (state > dir_state)
dir_state = state;
/* recurse into subdir if instructed by treat_path */
- if (state == path_recurse) {
+ if ((state == path_recurse) ||
+ ((state == path_untracked) &&
+ (dir->flags & DIR_SHOW_IGNORED_TOO) &&
+ (get_dtype(cdir.de, istate, path.buf, path.len) == DT_DIR))) {
struct untracked_cache_dir *ud;
ud = lookup_untracked(dir->untracked, untracked,
path.buf + baselen,
path.len - baselen);
subdir_state =
- read_directory_recursive(dir, path.buf,
+ read_directory_recursive(dir, istate, path.buf,
path.len, ud,
check_only, pathspec);
if (subdir_state > dir_state)
switch (state) {
case path_excluded:
if (dir->flags & DIR_SHOW_IGNORED)
- dir_add_name(dir, path.buf, path.len);
+ dir_add_name(dir, istate, path.buf, path.len);
else if ((dir->flags & DIR_SHOW_IGNORED_TOO) ||
((dir->flags & DIR_COLLECT_IGNORED) &&
exclude_matches_pathspec(path.buf, path.len,
pathspec)))
- dir_add_ignored(dir, path.buf, path.len);
+ dir_add_ignored(dir, istate, path.buf, path.len);
break;
case path_untracked:
if (dir->flags & DIR_SHOW_IGNORED)
break;
- dir_add_name(dir, path.buf, path.len);
+ dir_add_name(dir, istate, path.buf, path.len);
if (cdir.fdir)
add_untracked(untracked, path.buf + baselen);
break;
return dir_state;
}
-static int cmp_name(const void *p1, const void *p2)
+int cmp_dir_entry(const void *p1, const void *p2)
{
const struct dir_entry *e1 = *(const struct dir_entry **)p1;
const struct dir_entry *e2 = *(const struct dir_entry **)p2;
return name_compare(e1->name, e1->len, e2->name, e2->len);
}
+/* check if *out lexically strictly contains *in */
+int check_dir_entry_contains(const struct dir_entry *out, const struct dir_entry *in)
+{
+ return (out->len < in->len) &&
+ (out->name[out->len - 1] == '/') &&
+ !memcmp(out->name, in->name, out->len);
+}
+
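/*
 * Illustrative note (editorial, not in the original source): with entries
 * created by dir_entry_new() above, a directory entry strictly contains
 * every path beneath it, e.g.
 *
 *	struct dir_entry *d = dir_entry_new("untracked/", 10);
 *	struct dir_entry *f = dir_entry_new("untracked/sub/file", 18);
 *	check_dir_entry_contains(d, f);	-> returns 1
 *	check_dir_entry_contains(f, d);	-> returns 0
 */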
static int treat_leading_path(struct dir_struct *dir,
+ struct index_state *istate,
const char *path, int len,
const struct pathspec *pathspec)
{
break;
if (simplify_away(sb.buf, sb.len, pathspec))
break;
- if (treat_one_path(dir, NULL, &sb, baselen, pathspec,
+ if (treat_one_path(dir, NULL, istate, &sb, baselen, pathspec,
DT_DIR, NULL) == path_none)
break; /* do not recurse into it */
if (len <= baselen) {
return root;
}
-int read_directory(struct dir_struct *dir, const char *path,
- int len, const struct pathspec *pathspec)
+int read_directory(struct dir_struct *dir, struct index_state *istate,
+ const char *path, int len, const struct pathspec *pathspec)
{
struct untracked_cache_dir *untracked;
* e.g. prep_exclude()
*/
dir->untracked = NULL;
- if (!len || treat_leading_path(dir, path, len, pathspec))
- read_directory_recursive(dir, path, len, untracked, 0, pathspec);
- QSORT(dir->entries, dir->nr, cmp_name);
- QSORT(dir->ignored, dir->ignored_nr, cmp_name);
+ if (!len || treat_leading_path(dir, istate, path, len, pathspec))
+ read_directory_recursive(dir, istate, path, len, untracked, 0, pathspec);
+ QSORT(dir->entries, dir->nr, cmp_dir_entry);
+ QSORT(dir->ignored, dir->ignored_nr, cmp_dir_entry);
+
+ /*
+ * If DIR_SHOW_IGNORED_TOO is set, read_directory_recursive() will
+ * also pick up untracked contents of untracked dirs; by default
+ * we discard these, but given DIR_KEEP_UNTRACKED_CONTENTS we do not.
+ */
+ if ((dir->flags & DIR_SHOW_IGNORED_TOO) &&
+ !(dir->flags & DIR_KEEP_UNTRACKED_CONTENTS)) {
+ int i, j;
+
+ /* remove from dir->entries untracked contents of untracked dirs */
+ for (i = j = 0; j < dir->nr; j++) {
+ if (i &&
+ check_dir_entry_contains(dir->entries[i - 1], dir->entries[j])) {
+ free(dir->entries[j]);
+ dir->entries[j] = NULL;
+ } else {
+ dir->entries[i++] = dir->entries[j];
+ }
+ }
+
+ dir->nr = i;
+ }
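	/*
	 * Illustrative note (editorial, not in the original source): for a
	 * sorted list such as
	 *	"untracked-dir/", "untracked-dir/a", "untracked-dir/b", "zlib.c"
	 * the loop above keeps only "untracked-dir/" and "zlib.c", because
	 * the two middle entries are lexically contained in "untracked-dir/".
	 */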
+
if (dir->untracked) {
static struct trace_key trace_untracked_stats = TRACE_KEY_INIT(UNTRACKED_STATS);
trace_printf_key(&trace_untracked_stats,
dir->untracked->gitignore_invalidated,
dir->untracked->dir_invalidated,
dir->untracked->dir_opened);
- if (dir->untracked == the_index.untracked &&
+ if (dir->untracked == istate->untracked &&
(dir->untracked->dir_opened ||
dir->untracked->gitignore_invalidated ||
dir->untracked->dir_invalidated))
- the_index.cache_changed |= UNTRACKED_CHANGED;
- if (dir->untracked != the_index.untracked) {
+ istate->cache_changed |= UNTRACKED_CHANGED;
+ if (dir->untracked != istate->untracked) {
free(dir->untracked);
dir->untracked = NULL;
}
DIR_NO_GITLINKS = 1<<3,
DIR_COLLECT_IGNORED = 1<<4,
DIR_SHOW_IGNORED_TOO = 1<<5,
- DIR_COLLECT_KILLED_ONLY = 1<<6
+ DIR_COLLECT_KILLED_ONLY = 1<<6,
+ DIR_KEEP_UNTRACKED_CONTENTS = 1<<7
} flags;
struct dir_entry **entries;
struct dir_entry **ignored;
extern int report_path_error(const char *ps_matched, const struct pathspec *pathspec, const char *prefix);
extern int within_depth(const char *name, int namelen, int depth, int max_depth);
-extern int fill_directory(struct dir_struct *dir, const struct pathspec *pathspec);
-extern int read_directory(struct dir_struct *, const char *path, int len, const struct pathspec *pathspec);
-
-extern int is_excluded_from_list(const char *pathname, int pathlen, const char *basename,
- int *dtype, struct exclude_list *el);
-struct dir_entry *dir_add_ignored(struct dir_struct *dir, const char *pathname, int len);
+extern int fill_directory(struct dir_struct *dir,
+ struct index_state *istate,
+ const struct pathspec *pathspec);
+extern int read_directory(struct dir_struct *, struct index_state *istate,
+ const char *path, int len,
+ const struct pathspec *pathspec);
+
+extern int is_excluded_from_list(const char *pathname, int pathlen,
+ const char *basename, int *dtype,
+ struct exclude_list *el,
+ struct index_state *istate);
+struct dir_entry *dir_add_ignored(struct dir_struct *dir,
+ struct index_state *istate,
+ const char *pathname, int len);
/*
* these implement the matching logic for dir.c:excluded_from_list and
const char *, int, int, unsigned);
extern struct exclude *last_exclude_matching(struct dir_struct *dir,
+ struct index_state *istate,
const char *name, int *dtype);
-extern int is_excluded(struct dir_struct *dir, const char *name, int *dtype);
+extern int is_excluded(struct dir_struct *dir,
+ struct index_state *istate,
+ const char *name, int *dtype);
extern struct exclude_list *add_exclude_list(struct dir_struct *dir,
int group_type, const char *src);
extern int add_excludes_from_file_to_list(const char *fname, const char *base, int baselen,
- struct exclude_list *el, int check_index);
+ struct exclude_list *el, struct index_state *istate);
extern void add_excludes_from_file(struct dir_struct *, const char *fname);
extern void parse_exclude_pattern(const char **string, int *patternlen, unsigned *flags, int *nowildcardlen);
extern void add_exclude(const char *string, const char *base,
has_trailing_dir);
}
+int cmp_dir_entry(const void *p1, const void *p2);
+int check_dir_entry_contains(const struct dir_entry *out, const struct dir_entry *in);
+
void untracked_cache_invalidate_path(struct index_state *, const char *);
void untracked_cache_remove_from_index(struct index_state *, const char *);
void untracked_cache_add_to_index(struct index_state *, const char *);
sub = submodule_from_ce(ce);
if (sub)
return submodule_move_head(ce->name,
- NULL, oid_to_hex(&ce->oid), SUBMODULE_MOVE_HEAD_FORCE);
+ NULL, oid_to_hex(&ce->oid),
+ state->force ? SUBMODULE_MOVE_HEAD_FORCE : 0);
break;
default:
return error("unknown file mode for %s in index", path);
unlink_or_warn(ce->name);
return submodule_move_head(ce->name,
- NULL, oid_to_hex(&ce->oid),
- SUBMODULE_MOVE_HEAD_FORCE);
+ NULL, oid_to_hex(&ce->oid), 0);
} else
return submodule_move_head(ce->name,
"HEAD", oid_to_hex(&ce->oid),
- SUBMODULE_MOVE_HEAD_FORCE);
+ state->force ? SUBMODULE_MOVE_HEAD_FORCE : 0);
}
if (!changed)
git_dir = getenv(GIT_DIR_ENVIRONMENT);
if (!git_dir) {
if (!startup_info->have_repository)
- die("BUG: setup_git_env called without repository");
+ BUG("setup_git_env called without repository");
git_dir = DEFAULT_GIT_DIR_ENVIRONMENT;
}
gitfile = read_gitfile(git_dir);
#include "version.h"
#include "prio-queue.h"
#include "sha1-array.h"
+#include "oidset.h"
static int transfer_unpack_limit = -1;
static int fetch_unpack_limit = -1;
}
}
+static void add_refs_to_oidset(struct oidset *oids, struct ref *refs)
+{
+ for (; refs; refs = refs->next)
+ oidset_insert(oids, &refs->old_oid);
+}
+
+static int tip_oids_contain(struct oidset *tip_oids,
+ struct ref *unmatched, struct ref *newlist,
+ const struct object_id *id)
+{
+ /*
+ * Note that this only looks at the ref lists the first time it's
+ * called. This works out in filter_refs() because even though it may
+ * add to "newlist" between calls, the additions will always be for
+ * oids that are already in the set.
+ */
+ if (!tip_oids->map.tablesize) {
+ add_refs_to_oidset(tip_oids, unmatched);
+ add_refs_to_oidset(tip_oids, newlist);
+ }
+ return oidset_contains(tip_oids, id);
+}
+
static void filter_refs(struct fetch_pack_args *args,
struct ref **refs,
struct ref **sought, int nr_sought)
{
struct ref *newlist = NULL;
struct ref **newtail = &newlist;
+ struct ref *unmatched = NULL;
struct ref *ref, *next;
+ struct oidset tip_oids = OIDSET_INIT;
int i;
i = 0;
ref->next = NULL;
newtail = &ref->next;
} else {
- free(ref);
+ ref->next = unmatched;
+ unmatched = ref;
}
}
continue;
if ((allow_unadvertised_object_request &
- (ALLOW_TIP_SHA1 | ALLOW_REACHABLE_SHA1))) {
+ (ALLOW_TIP_SHA1 | ALLOW_REACHABLE_SHA1)) ||
+ tip_oids_contain(&tip_oids, unmatched, newlist,
+ &ref->old_oid)) {
ref->match_status = REF_MATCHED;
*newtail = copy_ref(ref);
newtail = &(*newtail)->next;
ref->match_status = REF_UNADVERTISED_NOT_ALLOWED;
}
}
+
+ oidset_clear(&tip_oids);
+ for (ref = unmatched; ref; ref = next) {
+ next = ref->next;
+ free(ref);
+ }
+
*refs = newlist;
}
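As an illustrative aside (editorial, not part of the patch), the oidset API used above also reports duplicates on insertion; the hypothetical sketch below relies on that return value.

static int count_distinct_tips(struct ref *refs)	/* hypothetical example */
{
	struct oidset seen = OIDSET_INIT;
	struct ref *r;
	int distinct = 0;

	for (r = refs; r; r = r->next)
		if (!oidset_insert(&seen, &r->old_oid))	/* 0 means newly added */
			distinct++;
	oidset_clear(&seen);
	return distinct;
}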
my $normal_color = $repo->get_color("", "reset");
my $diff_algorithm = $repo->config('diff.algorithm');
-my $diff_indent_heuristic = $repo->config_bool('diff.indentheuristic');
my $diff_filter = $repo->config('interactive.difffilter');
my $use_readkey = 0;
if (defined $diff_algorithm) {
splice @diff_cmd, 1, 0, "--diff-algorithm=${diff_algorithm}";
}
- if ($diff_indent_heuristic) {
- splice @diff_cmd, 1, 0, "--indent-heuristic";
- }
if (defined $patch_mode_revision) {
push @diff_cmd, get_diff_reference($patch_mode_revision);
}
extern void set_warn_routine(void (*routine)(const char *warn, va_list params));
extern void (*get_warn_routine(void))(const char *warn, va_list params);
extern void set_die_is_recursing_routine(int (*routine)(void));
-extern void set_error_handle(FILE *);
extern int starts_with(const char *str, const char *prefix);
#define HAVE_VARIADIC_MACROS 1
#endif
+#ifdef HAVE_VARIADIC_MACROS
+__attribute__((format (printf, 3, 4))) NORETURN
+void BUG_fl(const char *file, int line, const char *fmt, ...);
+#define BUG(...) BUG_fl(__FILE__, __LINE__, __VA_ARGS__)
+#else
+__attribute__((format (printf, 1, 2))) NORETURN
+void BUG(const char *fmt, ...);
+#endif
+
/*
* Preserves errno, prints a message, but gives no warning for ENOENT.
* Returns 0 on success, which includes trying to unlink an object that does
sed -e '/^^/d' "$tempdir"/raw-heads >"$tempdir"/heads
test -s "$tempdir"/heads ||
- die "Which ref do you want to rewrite?"
+ die "You must specify a ref to rewrite."
GIT_INDEX_FILE="$(pwd)/../index"
export GIT_INDEX_FILE
* encoding=US-ASCII
git-gui.sh encoding=UTF-8
/po/*.po encoding=UTF-8
+/GIT-VERSION-GEN eol=lf
use Text::ParseWords;
use Term::ANSIColor;
use File::Temp qw/ tempdir tempfile /;
-use File::Spec::Functions qw(catfile);
+use File::Spec::Functions qw(catdir catfile);
use Error qw(:try);
+use Cwd qw(abs_path cwd);
use Git;
use Git::I18N;
die __("The required SMTP server is not properly defined.")
}
+ require Net::SMTP;
+ my $use_net_smtp_ssl = version->parse($Net::SMTP::VERSION) < version->parse("2.34");
+ $smtp_domain ||= maildomain();
+
if ($smtp_encryption eq 'ssl') {
$smtp_server_port ||= 465; # ssmtp
- require Net::SMTP::SSL;
- $smtp_domain ||= maildomain();
require IO::Socket::SSL;
# Suppress "variable accessed once" warning.
# Net::SMTP::SSL->new() does not forward any SSL options
IO::Socket::SSL::set_client_defaults(
ssl_verify_params());
- $smtp ||= Net::SMTP::SSL->new($smtp_server,
- Hello => $smtp_domain,
- Port => $smtp_server_port,
- Debug => $debug_net_smtp);
+
+ if ($use_net_smtp_ssl) {
+ require Net::SMTP::SSL;
+ $smtp ||= Net::SMTP::SSL->new($smtp_server,
+ Hello => $smtp_domain,
+ Port => $smtp_server_port,
+ Debug => $debug_net_smtp);
+ }
+ else {
+ $smtp ||= Net::SMTP->new($smtp_server,
+ Hello => $smtp_domain,
+ Port => $smtp_server_port,
+ Debug => $debug_net_smtp,
+ SSL => 1);
+ }
}
else {
- require Net::SMTP;
- $smtp_domain ||= maildomain();
$smtp_server_port ||= 25;
$smtp ||= Net::SMTP->new($smtp_server,
Hello => $smtp_domain,
Debug => $debug_net_smtp,
Port => $smtp_server_port);
if ($smtp_encryption eq 'tls' && $smtp) {
- require Net::SMTP::SSL;
- $smtp->command('STARTTLS');
- $smtp->response();
- if ($smtp->code == 220) {
+ if ($use_net_smtp_ssl) {
+ $smtp->command('STARTTLS');
+ $smtp->response();
+ if ($smtp->code != 220) {
+ die sprintf(__("Server does not support STARTTLS! %s"), $smtp->message);
+ }
+ require Net::SMTP::SSL;
$smtp = Net::SMTP::SSL->start_SSL($smtp,
ssl_verify_params())
- or die "STARTTLS failed! ".IO::Socket::SSL::errstr();
- $smtp_encryption = '';
- # Send EHLO again to receive fresh
- # supported commands
- $smtp->hello($smtp_domain);
- } else {
- die sprintf(__("Server does not support STARTTLS! %s"), $smtp->message);
+ or die sprintf(__("STARTTLS failed! %s"), IO::Socket::SSL::errstr());
+ }
+ else {
+ $smtp->starttls(ssl_verify_params())
+ or die sprintf(__("STARTTLS failed! %s"), IO::Socket::SSL::errstr());
}
+ $smtp_encryption = '';
+ # Send EHLO again to receive fresh
+ # supported commands
+ $smtp->hello($smtp_domain);
}
}
sub validate_patch {
my $fn = shift;
+
+ if ($repo) {
+ my $validate_hook = catfile(catdir($repo->repo_path(), 'hooks'),
+ 'sendemail-validate');
+ my $hook_error;
+ if (-x $validate_hook) {
+ my $target = abs_path($fn);
+ # The hook needs a correct cwd and GIT_DIR.
+ my $cwd_save = cwd();
+ chdir($repo->wc_path() or $repo->repo_path())
+ or die("chdir: $!");
+ local $ENV{"GIT_DIR"} = $repo->repo_path();
+ $hook_error = "rejected by sendemail-validate hook"
+ if system($validate_hook, $target);
+ chdir($cwd_save) or die("chdir: $!");
+ }
+ return $hook_error if $hook_error;
+ }
+
open(my $fh, '<', $fn)
or die sprintf(__("unable to open %s: %s\n"), $fn, $!);
while (my $line = <$fh>) {
<p><strong>Pattern</strong> is by default a normal string that is matched precisely (but without
regard to case, except in the case of pickaxe). However, when you check the <em>re</em> checkbox,
the pattern entered is recognized as the POSIX extended
-<a href="http://en.wikipedia.org/wiki/Regular_expression">regular expression</a> (also case
+<a href="https://en.wikipedia.org/wiki/Regular_expression">regular expression</a> (also case
insensitive).</p>
<dl>
<dt><b>commit</b></dt>
case GREP_PATTERN_TYPE_BRE:
opt->fixed = 0;
- opt->pcre = 0;
- opt->regflags &= ~REG_EXTENDED;
+ opt->pcre1 = 0;
break;
case GREP_PATTERN_TYPE_ERE:
opt->fixed = 0;
- opt->pcre = 0;
+ opt->pcre1 = 0;
opt->regflags |= REG_EXTENDED;
break;
case GREP_PATTERN_TYPE_FIXED:
opt->fixed = 1;
- opt->pcre = 0;
- opt->regflags &= ~REG_EXTENDED;
+ opt->pcre1 = 0;
break;
case GREP_PATTERN_TYPE_PCRE:
opt->fixed = 0;
- opt->pcre = 1;
- opt->regflags &= ~REG_EXTENDED;
+ opt->pcre1 = 1;
break;
}
}
die("%s'%s': %s", where, p->pattern, error);
}
-#ifdef USE_LIBPCRE
-static void compile_pcre_regexp(struct grep_pat *p, const struct grep_opt *opt)
+static int is_fixed(const char *s, size_t len)
+{
+ size_t i;
+
+ for (i = 0; i < len; i++) {
+ if (is_regex_special(s[i]))
+ return 0;
+ }
+
+ return 1;
+}
+
+static int has_null(const char *s, size_t len)
+{
+ /*
+ * regcomp cannot accept patterns with NULs so when using it
+ * we consider any pattern containing a NUL fixed.
+ */
+ if (memchr(s, 0, len))
+ return 1;
+
+ return 0;
+}
+
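/*
 * Illustrative note (editorial, not in the original source): a pattern such
 * as "TODO_marker" has no regex-special characters and can be matched as a
 * fixed string with kws, while "TODO.*marker" contains '.' and '*' and is
 * handed to regcomp()/PCRE instead; a pattern containing a NUL byte is
 * treated as fixed because regcomp() cannot accept it.
 */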
+#ifdef USE_LIBPCRE1
+static void compile_pcre1_regexp(struct grep_pat *p, const struct grep_opt *opt)
{
const char *error;
int erroffset;
if (opt->ignore_case) {
if (has_non_ascii(p->pattern))
- p->pcre_tables = pcre_maketables();
+ p->pcre1_tables = pcre_maketables();
options |= PCRE_CASELESS;
}
if (is_utf8_locale() && has_non_ascii(p->pattern))
options |= PCRE_UTF8;
- p->pcre_regexp = pcre_compile(p->pattern, options, &error, &erroffset,
- p->pcre_tables);
- if (!p->pcre_regexp)
+ p->pcre1_regexp = pcre_compile(p->pattern, options, &error, &erroffset,
+ p->pcre1_tables);
+ if (!p->pcre1_regexp)
compile_regexp_failed(p, error);
- p->pcre_extra_info = pcre_study(p->pcre_regexp, 0, &error);
- if (!p->pcre_extra_info && error)
+ p->pcre1_extra_info = pcre_study(p->pcre1_regexp, 0, &error);
+ if (!p->pcre1_extra_info && error)
die("%s", error);
}
-static int pcrematch(struct grep_pat *p, const char *line, const char *eol,
+static int pcre1match(struct grep_pat *p, const char *line, const char *eol,
regmatch_t *match, int eflags)
{
int ovector[30], ret, flags = 0;
if (eflags & REG_NOTBOL)
flags |= PCRE_NOTBOL;
- ret = pcre_exec(p->pcre_regexp, p->pcre_extra_info, line, eol - line,
+ ret = pcre_exec(p->pcre1_regexp, p->pcre1_extra_info, line, eol - line,
0, flags, ovector, ARRAY_SIZE(ovector));
if (ret < 0 && ret != PCRE_ERROR_NOMATCH)
die("pcre_exec failed with error code %d", ret);
return ret;
}
-static void free_pcre_regexp(struct grep_pat *p)
+static void free_pcre1_regexp(struct grep_pat *p)
{
- pcre_free(p->pcre_regexp);
- pcre_free(p->pcre_extra_info);
- pcre_free((void *)p->pcre_tables);
+ pcre_free(p->pcre1_regexp);
+ pcre_free(p->pcre1_extra_info);
+ pcre_free((void *)p->pcre1_tables);
}
-#else /* !USE_LIBPCRE */
-static void compile_pcre_regexp(struct grep_pat *p, const struct grep_opt *opt)
+#else /* !USE_LIBPCRE1 */
+static void compile_pcre1_regexp(struct grep_pat *p, const struct grep_opt *opt)
{
die("cannot use Perl-compatible regexes when not compiled with USE_LIBPCRE");
}
-static int pcrematch(struct grep_pat *p, const char *line, const char *eol,
+static int pcre1match(struct grep_pat *p, const char *line, const char *eol,
regmatch_t *match, int eflags)
{
return 1;
}
-static void free_pcre_regexp(struct grep_pat *p)
+static void free_pcre1_regexp(struct grep_pat *p)
{
}
-#endif /* !USE_LIBPCRE */
-
-static int is_fixed(const char *s, size_t len)
-{
- size_t i;
-
- /* regcomp cannot accept patterns with NULs so we
- * consider any pattern containing a NUL fixed.
- */
- if (memchr(s, 0, len))
- return 1;
-
- for (i = 0; i < len; i++) {
- if (is_regex_special(s[i]))
- return 0;
- }
-
- return 1;
-}
+#endif /* !USE_LIBPCRE1 */
static void compile_fixed_regexp(struct grep_pat *p, struct grep_opt *opt)
{
struct strbuf sb = STRBUF_INIT;
int err;
- int regflags;
+ int regflags = opt->regflags;
basic_regex_quote_buf(&sb, p->pattern);
- regflags = opt->regflags & ~REG_EXTENDED;
if (opt->ignore_case)
regflags |= REG_ICASE;
err = regcomp(&p->regexp, sb.buf, regflags);
* simple string match using kws. p->fixed tells us if we
* want to use kws.
*/
- if (opt->fixed || is_fixed(p->pattern, p->patternlen))
+ if (opt->fixed ||
+ has_null(p->pattern, p->patternlen) ||
+ is_fixed(p->pattern, p->patternlen))
p->fixed = !icase || ascii_only;
else
p->fixed = 0;
return;
}
- if (opt->pcre) {
- compile_pcre_regexp(p, opt);
+ if (opt->pcre1) {
+ compile_pcre1_regexp(p, opt);
return;
}
case GREP_PATTERN_BODY:
if (p->kws)
kwsfree(p->kws);
- else if (p->pcre_regexp)
- free_pcre_regexp(p);
+ else if (p->pcre1_regexp)
+ free_pcre1_regexp(p);
else
regfree(&p->regexp);
free(p->pattern);
if (p->fixed)
hit = !fixmatch(p, line, eol, match);
- else if (p->pcre_regexp)
- hit = !pcrematch(p, line, eol, match, eflags);
+ else if (p->pcre1_regexp)
+ hit = !pcre1match(p, line, eol, match, eflags);
else
hit = !regexec_buf(&p->regexp, line, eol - line, 1, match,
eflags);
#ifndef GREP_H
#define GREP_H
#include "color.h"
-#ifdef USE_LIBPCRE
+#ifdef USE_LIBPCRE1
#include <pcre.h>
#else
typedef int pcre;
size_t patternlen;
enum grep_header_field field;
regex_t regexp;
- pcre *pcre_regexp;
- pcre_extra *pcre_extra_info;
- const unsigned char *pcre_tables;
+ pcre *pcre1_regexp;
+ pcre_extra *pcre1_extra_info;
+ const unsigned char *pcre1_tables;
kwset_t kws;
unsigned fixed:1;
unsigned ignore_case:1;
int allow_textconv;
int extended;
int use_reflog_filter;
- int pcre;
+ int pcre1;
int relative;
int pathname;
int null_following_name;
#include "cache.h"
#include "builtin.h"
#include "exec_cmd.h"
+#include "run-command.h"
#include "levenshtein.h"
#include "help.h"
#include "common-cmds.h"
string_list_clear(&list, 0);
}
-static int is_executable(const char *name)
-{
- struct stat st;
-
- if (stat(name, &st) || /* stat, not lstat */
- !S_ISREG(st.st_mode))
- return 0;
-
-#if defined(GIT_WINDOWS_NATIVE)
- /*
- * On Windows there is no executable bit. The file extension
- * indicates whether it can be run as an executable, and Git
- * has special-handling to detect scripts and launch them
- * through the indicated script interpreter. We test for the
- * file extension first because virus scanners may make
- * it quite expensive to open many files.
- */
- if (ends_with(name, ".exe"))
- return S_IXUSR;
-
-{
- /*
- * Now that we know it does not have an executable extension,
- * peek into the file instead.
- */
- char buf[3] = { 0 };
- int n;
- int fd = open(name, O_RDONLY);
- st.st_mode &= ~S_IXUSR;
- if (fd >= 0) {
- n = read(fd, buf, 2);
- if (n == 2)
- /* look for a she-bang */
- if (!strcmp(buf, "#!"))
- st.st_mode |= S_IXUSR;
- close(fd);
- }
-}
-#endif
- return st.st_mode & S_IXUSR;
-}
-
static void list_commands_in_dir(struct cmdnames *cmds,
const char *path,
const char *prefix)
if (SIMILAR_ENOUGH(best_similarity)) {
fprintf_ln(stderr,
- Q_("\nDid you mean this?",
- "\nDid you mean one of these?",
+ Q_("\nThe most similar command is",
+ "\nThe most similar commands are",
n));
for (i = 0; i < n; i++)
changed = process_all_files(&parent_range, rev, &queue, range);
if (parent)
add_line_range(rev, parent, parent_range);
+ free_line_log_data(parent_range);
return changed;
}
for (;;) {
int peek;
- peek = fgetc(in); ungetc(peek, in);
+ peek = fgetc(in);
+ if (peek == EOF)
+ break;
+ ungetc(peek, in);
if (peek != ' ' && peek != '\t')
break;
if (strbuf_getline_lf(&continuation, in))
do {
peek = fgetc(mi->input);
+ if (peek == EOF) {
+ fclose(cmitmsg);
+ return error("empty patch: '%s'", patch);
+ }
} while (isspace(peek));
ungetc(peek, mi->input);
c->mode_from_env = 1;
c->combine = parse_combine_notes_fn(rewrite_mode_env);
if (!c->combine)
- /* TRANSLATORS: The first %s is the name of the
- environment variable, the second %s is its value */
+ /*
+ * TRANSLATORS: The first %s is the name of
+ * the environment variable, the second %s is
+ * its value.
+ */
error(_("Bad %s value: '%s'"), GIT_NOTES_REWRITE_MODE_ENVIRONMENT,
rewrite_mode_env);
}
fprintf_ln(outfile, _("usage: %s"), _(*usagestr++));
while (*usagestr && **usagestr)
- /* TRANSLATORS: the colon here should align with the
- one in "usage: %s" translation */
+ /*
+ * TRANSLATORS: the colon here should align with the
+ * one in "usage: %s" translation.
+ */
fprintf_ln(outfile, _(" or: %s"), _(*usagestr++));
while (*usagestr) {
if (**usagestr)
struct patch_id *add_commit_patch_id(struct commit *commit,
struct patch_ids *ids)
{
- struct patch_id *key = xcalloc(1, sizeof(*key));
+ struct patch_id *key;
if (!patch_id_defined(commit))
return NULL;
+ key = xcalloc(1, sizeof(*key));
if (init_patch_id_entry(key, commit, ids)) {
free(key);
return NULL;
+#define NO_THE_INDEX_COMPATIBILITY_MACROS
#include "cache.h"
#include "dir.h"
#include "pathspec.h"
* to use find_pathspecs_matching_against_index() instead.
*/
void add_pathspec_matches_against_index(const struct pathspec *pathspec,
+ const struct index_state *istate,
char *seen)
{
int num_unmatched = 0, i;
num_unmatched++;
if (!num_unmatched)
return;
- for (i = 0; i < active_nr; i++) {
- const struct cache_entry *ce = active_cache[i];
+ for (i = 0; i < istate->cache_nr; i++) {
+ const struct cache_entry *ce = istate->cache[i];
ce_path_match(ce, pathspec, seen);
}
}
* nature of the "closest" (i.e. most specific) matches which each of the
* given pathspecs achieves against all items in the index.
*/
-char *find_pathspecs_matching_against_index(const struct pathspec *pathspec)
+char *find_pathspecs_matching_against_index(const struct pathspec *pathspec,
+ const struct index_state *istate)
{
char *seen = xcalloc(pathspec->nr, 1);
- add_pathspec_matches_against_index(pathspec, seen);
+ add_pathspec_matches_against_index(pathspec, istate, seen);
return seen;
}
return parse_short_magic(magic, elem);
}
-static void strip_submodule_slash_cheap(struct pathspec_item *item)
-{
- if (item->len >= 1 && item->match[item->len - 1] == '/') {
- int i = cache_name_pos(item->match, item->len - 1);
-
- if (i >= 0 && S_ISGITLINK(active_cache[i]->ce_mode)) {
- item->len--;
- item->match[item->len] = '\0';
- }
- }
-}
-
-static void strip_submodule_slash_expensive(struct pathspec_item *item)
-{
- int i;
-
- for (i = 0; i < active_nr; i++) {
- struct cache_entry *ce = active_cache[i];
- int ce_len = ce_namelen(ce);
-
- if (!S_ISGITLINK(ce->ce_mode))
- continue;
-
- if (item->len <= ce_len || item->match[ce_len] != '/' ||
- memcmp(ce->name, item->match, ce_len))
- continue;
-
- if (item->len == ce_len + 1) {
- /* strip trailing slash */
- item->len--;
- item->match[item->len] = '\0';
- } else {
- die(_("Pathspec '%s' is in submodule '%.*s'"),
- item->original, ce_len, ce->name);
- }
- }
-}
-
-static void die_inside_submodule_path(struct pathspec_item *item)
-{
- int i;
-
- for (i = 0; i < active_nr; i++) {
- struct cache_entry *ce = active_cache[i];
- int ce_len = ce_namelen(ce);
-
- if (!S_ISGITLINK(ce->ce_mode))
- continue;
-
- if (item->len < ce_len ||
- !(item->match[ce_len] == '/' || item->match[ce_len] == '\0') ||
- memcmp(ce->name, item->match, ce_len))
- continue;
-
- die(_("Pathspec '%s' is in submodule '%.*s'"),
- item->original, ce_len, ce->name);
- }
-}
-
/*
* Perform the initialization of a pathspec_item based on a pathspec element.
*/
item->original = xstrdup(elt);
}
- if (flags & PATHSPEC_STRIP_SUBMODULE_SLASH_CHEAP)
- strip_submodule_slash_cheap(item);
-
- if (flags & PATHSPEC_STRIP_SUBMODULE_SLASH_EXPENSIVE)
- strip_submodule_slash_expensive(item);
-
if (magic & PATHSPEC_LITERAL) {
item->nowildcard_len = item->len;
} else {
/* sanity checks, pathspec matchers assume these are sane */
if (item->nowildcard_len > item->len ||
item->prefix > item->len) {
- /*
- * This case can be triggered by the user pointing us to a
- * pathspec inside a submodule, which is an input error.
- * Detect that here and complain, but fallback in the
- * non-submodule case to a BUG, as we have no idea what
- * would trigger that.
- */
- die_inside_submodule_path(item);
- die ("BUG: item->nowildcard_len > item->len || item->prefix > item->len)");
+ die ("BUG: error initializing pathspec_item");
}
}
#define PATHSPEC_PREFER_CWD (1<<0) /* No args means match cwd */
#define PATHSPEC_PREFER_FULL (1<<1) /* No args means match everything */
#define PATHSPEC_MAXDEPTH_VALID (1<<2) /* max_depth field is valid */
-/* strip the trailing slash if the given path is a gitlink */
-#define PATHSPEC_STRIP_SUBMODULE_SLASH_CHEAP (1<<3)
/* die if a symlink is part of the given path's directory */
-#define PATHSPEC_SYMLINK_LEADING_PATH (1<<4)
-/*
- * This is like a combination of ..LEADING_PATH and .._SLASH_CHEAP
- * (but not the same): it strips the trailing slash if the given path
- * is a gitlink but also checks and dies if gitlink is part of the
- * leading path (i.e. the given path goes beyond a submodule). It's
- * safer than _SLASH_CHEAP and also more expensive.
- */
-#define PATHSPEC_STRIP_SUBMODULE_SLASH_EXPENSIVE (1<<5)
-#define PATHSPEC_PREFIX_ORIGIN (1<<6)
-#define PATHSPEC_KEEP_ORDER (1<<7)
+#define PATHSPEC_SYMLINK_LEADING_PATH (1<<3)
+#define PATHSPEC_PREFIX_ORIGIN (1<<4)
+#define PATHSPEC_KEEP_ORDER (1<<5)
/*
* For the callers that just need pure paths from somewhere else, not
* from command line. Global --*-pathspecs options are ignored. No
* magic is parsed in each pathspec either. If PATHSPEC_LITERAL is
* allowed, then it will automatically set for every pathspec.
*/
-#define PATHSPEC_LITERAL_PATH (1<<8)
+#define PATHSPEC_LITERAL_PATH (1<<6)
extern void parse_pathspec(struct pathspec *pathspec,
unsigned magic_mask,
return strcmp(s1, s2);
}
-extern char *find_pathspecs_matching_against_index(const struct pathspec *pathspec);
-extern void add_pathspec_matches_against_index(const struct pathspec *pathspec, char *seen);
+extern void add_pathspec_matches_against_index(const struct pathspec *pathspec,
+ const struct index_state *istate,
+ char *seen);
+extern char *find_pathspecs_matching_against_index(const struct pathspec *pathspec,
+ const struct index_state *istate);
#endif /* PATHSPEC_H */
return status;
}
+int packet_writel(int fd, const char *line, ...)
+{
+ va_list args;
+ int err;
+ va_start(args, line);
+ for (;;) {
+ if (!line)
+ break;
+ if (strlen(line) > LARGE_PACKET_DATA_MAX)
+ return -1;
+ err = packet_write_fmt_gently(fd, "%s\n", line);
+ if (err)
+ return err;
+ line = va_arg(args, const char*);
+ }
+ va_end(args);
+ return packet_flush_gently(fd);
+}
+
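As a usage sketch (the descriptor and payload strings are invented, not part of this patch), packet_writel() sends each argument as its own pkt-line and finishes with a flush packet; the argument list must be NULL-terminated:

#include "pkt-line.h"

/* Illustrative only: fd is assumed to be a pipe speaking pkt-line. */
static int send_example_lines(int fd)
{
	/* the trailing NULL is mandatory (LAST_ARG_MUST_BE_NULL) */
	return packet_writel(fd, "hello", "capability=example", NULL);
}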
static int packet_write_gently(const int fd_out, const char *buf, size_t size)
{
static char packet_write_buffer[LARGE_PACKET_MAX];
PACKET_READ_CHOMP_NEWLINE);
if (dst_len)
*dst_len = len;
- return len ? packet_buffer : NULL;
+ return (len > 0) ? packet_buffer : NULL;
}
char *packet_read_line(int fd, int *len_p)
return packet_read_line_generic(fd, NULL, NULL, len_p);
}
+int packet_read_line_gently(int fd, int *dst_len, char **dst_line)
+{
+ int len = packet_read(fd, NULL, NULL,
+ packet_buffer, sizeof(packet_buffer),
+ PACKET_READ_CHOMP_NEWLINE|PACKET_READ_GENTLE_ON_EOF);
+ if (dst_len)
+ *dst_len = len;
+ if (dst_line)
+ *dst_line = (len > 0) ? packet_buffer : NULL;
+ return len;
+}
+
char *packet_read_line_buf(char **src, size_t *src_len, int *dst_len)
{
return packet_read_line_generic(-1, src, src_len, dst_len);
void packet_buf_write(struct strbuf *buf, const char *fmt, ...) __attribute__((format (printf, 2, 3)));
int packet_flush_gently(int fd);
int packet_write_fmt_gently(int fd, const char *fmt, ...) __attribute__((format (printf, 2, 3)));
+LAST_ARG_MUST_BE_NULL
+int packet_writel(int fd, const char *line, ...);
int write_packetized_from_fd(int fd_in, int fd_out);
int write_packetized_from_buf(const char *src_in, size_t len, int fd_out);
*/
char *packet_read_line(int fd, int *size);
+/*
+ * Convenience wrapper for packet_read that sets the PACKET_READ_GENTLE_ON_EOF
+ * and CHOMP_NEWLINE options. The return value specifies the number of bytes
+ * read into the buffer, or -1 on truncated input. If the dst_line parameter
+ * is not NULL, *dst_line is set to NULL for a flush packet or when zero
+ * bytes were copied, and otherwise points to a static buffer (that may be
+ * overwritten by subsequent calls). If the size parameter is not NULL, the
+ * length of the packet is written to it.
+ */
+int packet_read_line_gently(int fd, int *size, char **dst_line);
+
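A minimal reader sketch for the gentle variant, assuming a descriptor that speaks pkt-line (the helper name is invented): a positive return is a data packet, zero with *dst_line set to NULL is a flush packet, and -1 signals truncated input instead of die()ing.

#include "pkt-line.h"

static void read_example_lines(int fd)
{
	char *line;
	int len;

	/* stop on a flush packet (len == 0) or truncated input (len < 0) */
	while ((len = packet_read_line_gently(fd, NULL, &line)) > 0)
		fprintf(stderr, "got %d bytes: %s\n", len, line);
}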
/*
* Same as packet_read_line, but read from a buf rather than a descriptor;
* see packet_read for details on how src_* is used.
{
int i;
- for (i = 0; i < istate->cache_nr; i++) {
- if (istate->cache[i]->index &&
- istate->split_index &&
- istate->split_index->base &&
- istate->cache[i]->index <= istate->split_index->base->cache_nr &&
- istate->cache[i] == istate->split_index->base->cache[istate->cache[i]->index - 1])
- continue;
+ unshare_split_index(istate, 1);
+ for (i = 0; i < istate->cache_nr; i++)
free(istate->cache[i]);
- }
resolve_undo_clear_index(istate);
istate->cache_nr = 0;
istate->cache_changed = 0;
rollback_lock_file(lockfile);
}
-static int do_write_index(struct index_state *istate, int newfd,
+static int do_write_index(struct index_state *istate, struct tempfile *tempfile,
int strip_extensions)
{
+ int newfd = tempfile->fd;
git_SHA_CTX c;
struct cache_header hdr;
int i, err, removed, extended, hdr_version;
return -1;
}
- if (ce_flush(&c, newfd, istate->sha1) || fstat(newfd, &st))
+ if (ce_flush(&c, newfd, istate->sha1))
+ return -1;
+ if (close_tempfile(tempfile))
+ return error(_("could not close '%s'"), tempfile->filename.buf);
+ if (stat(tempfile->filename.buf, &st))
return -1;
istate->timestamp.sec = (unsigned int)st.st_mtime;
istate->timestamp.nsec = ST_MTIME_NSEC(st);
static int do_write_locked_index(struct index_state *istate, struct lock_file *lock,
unsigned flags)
{
- int ret = do_write_index(istate, get_lock_file_fd(lock), 0);
+ int ret = do_write_index(istate, &lock->tempfile, 0);
if (ret)
return ret;
assert((flags & (COMMIT_LOCK | CLOSE_LOCK)) !=
return do_write_locked_index(istate, lock, flags);
}
move_cache_to_base_index(istate);
- ret = do_write_index(si->base, fd, 1);
+ ret = do_write_index(si->base, &temporary_sharedindex, 1);
if (ret) {
delete_tempfile(&temporary_sharedindex);
return ret;
fill_stat_data(sv->sd, &st);
}
}
+
+void move_index_extensions(struct index_state *dst, struct index_state *src)
+{
+ dst->untracked = src->untracked;
+ src->untracked = NULL;
+}
unsigned int length;
} objectname;
struct refname_atom refname;
+ char *head;
} u;
} *used_atom;
static int used_atom_cnt, need_tagged, need_symref;
}
}
+static void head_atom_parser(struct used_atom *atom, const char *arg)
+{
+ struct object_id unused;
+
+ atom->u.head = resolve_refdup("HEAD", RESOLVE_REF_READING, unused.hash, NULL);
+}
static struct {
const char *name;
{ "push", FIELD_STR, remote_ref_atom_parser },
{ "symref", FIELD_STR, refname_atom_parser },
{ "flag" },
- { "HEAD" },
+ { "HEAD", FIELD_STR, head_atom_parser },
{ "color", FIELD_STR, color_atom_parser },
{ "align", FIELD_STR, align_atom_parser },
{ "end" },
state.branch);
else if (state.detached_from) {
if (state.detached_at)
- /* TRANSLATORS: make sure this matches
- "HEAD detached at " in wt-status.c */
+ /*
+ * TRANSLATORS: make sure this matches "HEAD
+ * detached at " in wt-status.c
+ */
strbuf_addf(&desc, _("(HEAD detached at %s)"),
state.detached_from);
else
- /* TRANSLATORS: make sure this matches
- "HEAD detached from " in wt-status.c */
+ /*
+ * TRANSLATORS: make sure this matches "HEAD
+ * detached from " in wt-status.c
+ */
strbuf_addf(&desc, _("(HEAD detached from %s)"),
state.detached_from);
}
} else if (!deref && grab_objectname(name, ref->objectname.hash, v, atom)) {
continue;
} else if (!strcmp(name, "HEAD")) {
- const char *head;
- struct object_id oid;
-
- head = resolve_ref_unsafe("HEAD", RESOLVE_REF_READING,
- oid.hash, NULL);
- if (head && !strcmp(ref->refname, head))
+ if (atom->u.head && !strcmp(ref->refname, atom->u.head))
v->s = "*";
else
v->s = " ";
if (!reflogs || reflogs->nr == 0) {
struct object_id oid;
char *b;
- if (dwim_log(branch, strlen(branch), oid.hash, &b) == 1) {
+ int ret = dwim_log(branch, strlen(branch),
+ oid.hash, &b);
+ if (ret > 1)
+ free(b);
+ else if (ret == 1) {
if (reflogs) {
free(reflogs->ref);
free(reflogs);
reflogs = read_complete_reflog(branch);
}
}
- if (!reflogs || reflogs->nr == 0)
+ if (!reflogs || reflogs->nr == 0) {
+ if (reflogs) {
+ free(reflogs->ref);
+ free(reflogs);
+ }
+ free(branch);
return -1;
+ }
string_list_insert(&info->complete_reflogs, branch)->util
= reflogs;
}
+ free(branch);
commit_reflog = xcalloc(1, sizeof(struct commit_reflog));
if (recno < 0) {
commit_reflog->recno = get_reflog_recno_by_time(reflogs, timestamp);
if (commit_reflog->recno < 0) {
- free(branch);
+ if (reflogs) {
+ free(reflogs->ref);
+ free(reflogs);
+ }
free(commit_reflog);
return -1;
}
{
if (!name[0] || is_dot_or_dotdot(name))
return 0;
- return !strchr(name, '/'); /* no slash */
+
+ /* remote nicknames cannot contain slashes */
+ while (*name)
+ if (is_dir_sep(*name++))
+ return 0;
+ return 1;
}
const char *remote_for_branch(struct branch *branch, int *explicit)
else if (is_null_oid(&matched_src->new_oid))
error("unable to delete '%s': remote ref does not exist",
dst_value);
- else if ((dst_guess = guess_ref(dst_value, matched_src)))
+ else if ((dst_guess = guess_ref(dst_value, matched_src))) {
matched_dst = make_linked_ref(dst_guess, dst_tail);
- else
+ free(dst_guess);
+ } else
error("unable to push to unqualified destination: %s\n"
"The destination refspec neither matches an "
"existing ref on the remote nor\n"
die("bad tag");
object = parse_object(&tag->tagged->oid);
if (!object) {
- if (flags & UNINTERESTING)
+ if (revs->ignore_missing_links || (flags & UNINTERESTING))
return NULL;
die("bad object %s", oid_to_hex(&tag->tagged->oid));
}
revs->limited = 1;
}
+static int dotdot_missing(const char *arg, char *dotdot,
+ struct rev_info *revs, int symmetric)
+{
+ if (revs->ignore_missing)
+ return 0;
+ /* de-munge so we report the full argument */
+ *dotdot = '.';
+ die(symmetric
+ ? "Invalid symmetric difference expression %s"
+ : "Invalid revision range %s", arg);
+}
+
+static int handle_dotdot_1(const char *arg, char *dotdot,
+ struct rev_info *revs, int flags,
+ int cant_be_filename,
+ struct object_context *a_oc,
+ struct object_context *b_oc)
+{
+ const char *a_name, *b_name;
+ struct object_id a_oid, b_oid;
+ struct object *a_obj, *b_obj;
+ unsigned int a_flags, b_flags;
+ int symmetric = 0;
+ unsigned int flags_exclude = flags ^ (UNINTERESTING | BOTTOM);
+ unsigned int oc_flags = GET_SHA1_COMMITTISH | GET_SHA1_RECORD_PATH;
+
+ a_name = arg;
+ if (!*a_name)
+ a_name = "HEAD";
+
+ b_name = dotdot + 2;
+ if (*b_name == '.') {
+ symmetric = 1;
+ b_name++;
+ }
+ if (!*b_name)
+ b_name = "HEAD";
+
+ if (get_sha1_with_context(a_name, oc_flags, a_oid.hash, a_oc) ||
+ get_sha1_with_context(b_name, oc_flags, b_oid.hash, b_oc))
+ return -1;
+
+ if (!cant_be_filename) {
+ *dotdot = '.';
+ verify_non_filename(revs->prefix, arg);
+ *dotdot = '\0';
+ }
+
+ a_obj = parse_object(&a_oid);
+ b_obj = parse_object(&b_oid);
+ if (!a_obj || !b_obj)
+ return dotdot_missing(arg, dotdot, revs, symmetric);
+
+ if (!symmetric) {
+ /* just A..B */
+ b_flags = flags;
+ a_flags = flags_exclude;
+ } else {
+ /* A...B -- find merge bases between the two */
+ struct commit *a, *b;
+ struct commit_list *exclude;
+
+ a = lookup_commit_reference(&a_obj->oid);
+ b = lookup_commit_reference(&b_obj->oid);
+ if (!a || !b)
+ return dotdot_missing(arg, dotdot, revs, symmetric);
+
+ exclude = get_merge_bases(a, b);
+ add_rev_cmdline_list(revs, exclude, REV_CMD_MERGE_BASE,
+ flags_exclude);
+ add_pending_commit_list(revs, exclude, flags_exclude);
+ free_commit_list(exclude);
+
+ b_flags = flags;
+ a_flags = flags | SYMMETRIC_LEFT;
+ }
+
+ a_obj->flags |= a_flags;
+ b_obj->flags |= b_flags;
+ add_rev_cmdline(revs, a_obj, a_name, REV_CMD_LEFT, a_flags);
+ add_rev_cmdline(revs, b_obj, b_name, REV_CMD_RIGHT, b_flags);
+ add_pending_object_with_path(revs, a_obj, a_name, a_oc->mode, a_oc->path);
+ add_pending_object_with_path(revs, b_obj, b_name, b_oc->mode, b_oc->path);
+ return 0;
+}
+
+static int handle_dotdot(const char *arg,
+ struct rev_info *revs, int flags,
+ int cant_be_filename)
+{
+ struct object_context a_oc, b_oc;
+ char *dotdot = strstr(arg, "..");
+ int ret;
+
+ if (!dotdot)
+ return -1;
+
+ memset(&a_oc, 0, sizeof(a_oc));
+ memset(&b_oc, 0, sizeof(b_oc));
+
+ *dotdot = '\0';
+ ret = handle_dotdot_1(arg, dotdot, revs, flags, cant_be_filename,
+ &a_oc, &b_oc);
+ *dotdot = '.';
+
+ free(a_oc.path);
+ free(b_oc.path);
+
+ return ret;
+}
+
int handle_revision_arg(const char *arg_, struct rev_info *revs, int flags, unsigned revarg_opt)
{
struct object_context oc;
- char *dotdot;
+ char *mark;
struct object *object;
struct object_id oid;
int local_flags;
const char *arg = arg_;
int cant_be_filename = revarg_opt & REVARG_CANNOT_BE_FILENAME;
- unsigned get_sha1_flags = 0;
+ unsigned get_sha1_flags = GET_SHA1_RECORD_PATH;
flags = flags & UNINTERESTING ? flags | BOTTOM : flags & ~BOTTOM;
- dotdot = strstr(arg, "..");
- if (dotdot) {
- struct object_id from_oid;
- const char *next = dotdot + 2;
- const char *this = arg;
- int symmetric = *next == '.';
- unsigned int flags_exclude = flags ^ (UNINTERESTING | BOTTOM);
- static const char head_by_default[] = "HEAD";
- unsigned int a_flags;
-
- *dotdot = 0;
- next += symmetric;
-
- if (!*next)
- next = head_by_default;
- if (dotdot == arg)
- this = head_by_default;
- if (this == head_by_default && next == head_by_default &&
- !symmetric) {
- /*
- * Just ".."? That is not a range but the
- * pathspec for the parent directory.
- */
- if (!cant_be_filename) {
- *dotdot = '.';
- return -1;
- }
- }
- if (!get_sha1_committish(this, from_oid.hash) &&
- !get_sha1_committish(next, oid.hash)) {
- struct object *a_obj, *b_obj;
-
- if (!cant_be_filename) {
- *dotdot = '.';
- verify_non_filename(revs->prefix, arg);
- }
-
- a_obj = parse_object(&from_oid);
- b_obj = parse_object(&oid);
- if (!a_obj || !b_obj) {
- missing:
- if (revs->ignore_missing)
- return 0;
- die(symmetric
- ? "Invalid symmetric difference expression %s"
- : "Invalid revision range %s", arg);
- }
-
- if (!symmetric) {
- /* just A..B */
- a_flags = flags_exclude;
- } else {
- /* A...B -- find merge bases between the two */
- struct commit *a, *b;
- struct commit_list *exclude;
-
- a = (a_obj->type == OBJ_COMMIT
- ? (struct commit *)a_obj
- : lookup_commit_reference(&a_obj->oid));
- b = (b_obj->type == OBJ_COMMIT
- ? (struct commit *)b_obj
- : lookup_commit_reference(&b_obj->oid));
- if (!a || !b)
- goto missing;
- exclude = get_merge_bases(a, b);
- add_rev_cmdline_list(revs, exclude,
- REV_CMD_MERGE_BASE,
- flags_exclude);
- add_pending_commit_list(revs, exclude,
- flags_exclude);
- free_commit_list(exclude);
-
- a_flags = flags | SYMMETRIC_LEFT;
- }
-
- a_obj->flags |= a_flags;
- b_obj->flags |= flags;
- add_rev_cmdline(revs, a_obj, this,
- REV_CMD_LEFT, a_flags);
- add_rev_cmdline(revs, b_obj, next,
- REV_CMD_RIGHT, flags);
- add_pending_object(revs, a_obj, this);
- add_pending_object(revs, b_obj, next);
- return 0;
- }
- *dotdot = '.';
+ if (!cant_be_filename && !strcmp(arg, "..")) {
+ /*
+ * Just ".."? That is not a range but the
+ * pathspec for the parent directory.
+ */
+ return -1;
}
- dotdot = strstr(arg, "^@");
- if (dotdot && !dotdot[2]) {
- *dotdot = 0;
+ if (!handle_dotdot(arg, revs, flags, revarg_opt))
+ return 0;
+
+ mark = strstr(arg, "^@");
+ if (mark && !mark[2]) {
+ *mark = 0;
if (add_parents_only(revs, arg, flags, 0))
return 0;
- *dotdot = '^';
+ *mark = '^';
}
- dotdot = strstr(arg, "^!");
- if (dotdot && !dotdot[2]) {
- *dotdot = 0;
+ mark = strstr(arg, "^!");
+ if (mark && !mark[2]) {
+ *mark = 0;
if (!add_parents_only(revs, arg, flags ^ (UNINTERESTING | BOTTOM), 0))
- *dotdot = '^';
+ *mark = '^';
}
- dotdot = strstr(arg, "^-");
- if (dotdot) {
+ mark = strstr(arg, "^-");
+ if (mark) {
int exclude_parent = 1;
- if (dotdot[2]) {
+ if (mark[2]) {
char *end;
- exclude_parent = strtoul(dotdot + 2, &end, 10);
+ exclude_parent = strtoul(mark + 2, &end, 10);
if (*end != '\0' || !exclude_parent)
return -1;
}
- *dotdot = 0;
+ *mark = 0;
if (!add_parents_only(revs, arg, flags ^ (UNINTERESTING | BOTTOM), exclude_parent))
- *dotdot = '^';
+ *mark = '^';
}
local_flags = 0;
}
if (revarg_opt & REVARG_COMMITTISH)
- get_sha1_flags = GET_SHA1_COMMITTISH;
+ get_sha1_flags |= GET_SHA1_COMMITTISH;
if (get_sha1_with_context(arg, get_sha1_flags, oid.hash, &oc))
return revs->ignore_missing ? 0 : -1;
verify_non_filename(revs->prefix, arg);
object = get_reference(revs, arg, &oid, flags ^ local_flags);
add_rev_cmdline(revs, object, arg_, REV_CMD_REV, flags ^ local_flags);
- add_pending_object_with_mode(revs, object, arg, oc.mode);
+ add_pending_object_with_path(revs, object, arg, oc.mode, oc.path);
+ free(oc.path);
return 0;
}
} else if (!strcmp(arg, "--extended-regexp") || !strcmp(arg, "-E")) {
revs->grep_filter.pattern_type_option = GREP_PATTERN_TYPE_ERE;
} else if (!strcmp(arg, "--regexp-ignore-case") || !strcmp(arg, "-i")) {
+ revs->grep_filter.ignore_case = 1;
revs->grep_filter.regflags |= REG_ICASE;
DIFF_OPT_SET(&revs->diffopt, PICKAXE_IGNORE_CASE);
} else if (!strcmp(arg, "--fixed-strings") || !strcmp(arg, "-F")) {
close(fd[1]);
}
-#ifndef GIT_WINDOWS_NATIVE
-static inline void dup_devnull(int to)
+int is_executable(const char *name)
{
- int fd = open("/dev/null", O_RDWR);
- if (fd < 0)
- die_errno(_("open /dev/null failed"));
- if (dup2(fd, to) < 0)
- die_errno(_("dup2(%d,%d) failed"), fd, to);
- close(fd);
+ struct stat st;
+
+ if (stat(name, &st) || /* stat, not lstat */
+ !S_ISREG(st.st_mode))
+ return 0;
+
+#if defined(GIT_WINDOWS_NATIVE)
+ /*
+ * On Windows there is no executable bit. The file extension
+ * indicates whether it can be run as an executable, and Git
+ * has special-handling to detect scripts and launch them
+ * through the indicated script interpreter. We test for the
+ * file extension first because virus scanners may make
+ * it quite expensive to open many files.
+ */
+ if (ends_with(name, ".exe"))
+ return S_IXUSR;
+
+{
+ /*
+ * Now that we know it does not have an executable extension,
+ * peek into the file instead.
+ */
+ char buf[3] = { 0 };
+ int n;
+ int fd = open(name, O_RDONLY);
+ st.st_mode &= ~S_IXUSR;
+ if (fd >= 0) {
+ n = read(fd, buf, 2);
+ if (n == 2)
+ /* look for a she-bang */
+ if (!strcmp(buf, "#!"))
+ st.st_mode |= S_IXUSR;
+ close(fd);
+ }
}
#endif
+ return st.st_mode & S_IXUSR;
+}
+/*
+ * Search $PATH for a command. This emulates the path search that
+ * execvp would perform, without actually executing the command so it
+ * can be used before fork() to prepare to run a command using
+ * execve() or after execvp() to diagnose why it failed.
+ *
+ * The caller should ensure that file contains no directory
+ * separators.
+ *
+ * Returns the path to the command, as found in $PATH or NULL if the
+ * command could not be found. The caller inherits ownership of the memory
+ * used to store the resultant path.
+ *
+ * This should not be used on Windows, where the $PATH search rules
+ * are more complicated (e.g., a search for "foo" should find
+ * "foo.exe").
+ */
static char *locate_in_PATH(const char *file)
{
const char *p = getenv("PATH");
}
strbuf_addstr(&buf, file);
- if (!access(buf.buf, F_OK))
+ if (is_executable(buf.buf))
return strbuf_detach(&buf, NULL);
if (!*end)
}
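A conceptual sketch of the behaviour change (the command name is invented, and locate_in_PATH() remains static to run-command.c, so this is not a real call site): the lookup now only reports entries that pass is_executable(), where the old code accepted any existing file.

/* As if written inside run-command.c. */
static void show_resolution(void)
{
	char *path = locate_in_PATH("frotz");	/* hypothetical command name */

	if (path)
		fprintf(stderr, "would execve '%s'\n", path);
	else
		fprintf(stderr, "not in $PATH; exec tries \"frotz\" directly\n");
	free(path);	/* the caller owns the returned string */
}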
#ifndef GIT_WINDOWS_NATIVE
-static int execv_shell_cmd(const char **argv)
+static int child_notifier = -1;
+
+enum child_errcode {
+ CHILD_ERR_CHDIR,
+ CHILD_ERR_DUP2,
+ CHILD_ERR_CLOSE,
+ CHILD_ERR_SIGPROCMASK,
+ CHILD_ERR_ENOENT,
+ CHILD_ERR_SILENT,
+ CHILD_ERR_ERRNO
+};
+
+struct child_err {
+ enum child_errcode err;
+ int syserr; /* errno */
+};
+
+static void child_die(enum child_errcode err)
{
- struct argv_array nargv = ARGV_ARRAY_INIT;
- prepare_shell_cmd(&nargv, argv);
- trace_argv_printf(nargv.argv, "trace: exec:");
- sane_execvp(nargv.argv[0], (char **)nargv.argv);
- argv_array_clear(&nargv);
- return -1;
+ struct child_err buf;
+
+ buf.err = err;
+ buf.syserr = errno;
+
+ /* write(2) on buf smaller than PIPE_BUF (min 512) is atomic: */
+ xwrite(child_notifier, &buf, sizeof(buf));
+ _exit(1);
}
-#endif
-#ifndef GIT_WINDOWS_NATIVE
-static int child_notifier = -1;
+static void child_dup2(int fd, int to)
+{
+ if (dup2(fd, to) < 0)
+ child_die(CHILD_ERR_DUP2);
+}
-static void notify_parent(void)
+static void child_close(int fd)
{
+ if (close(fd))
+ child_die(CHILD_ERR_CLOSE);
+}
+
+static void child_close_pair(int fd[2])
+{
+ child_close(fd[0]);
+ child_close(fd[1]);
+}
+
+/*
+ * The parent will make it look like the child spewed a fatal error and died;
+ * this is needed to prevent changes to t0061.
+ */
+static void fake_fatal(const char *err, va_list params)
+{
+ vreportf("fatal: ", err, params);
+}
+
+static void child_error_fn(const char *err, va_list params)
+{
+ const char msg[] = "error() should not be called in child\n";
+ xwrite(2, msg, sizeof(msg) - 1);
+}
+
+static void child_warn_fn(const char *err, va_list params)
+{
+ const char msg[] = "warn() should not be called in child\n";
+ xwrite(2, msg, sizeof(msg) - 1);
+}
+
+static void NORETURN child_die_fn(const char *err, va_list params)
+{
+ const char msg[] = "die() should not be called in child\n";
+ xwrite(2, msg, sizeof(msg) - 1);
+ _exit(2);
+}
+
+/* this runs in the parent process */
+static void child_err_spew(struct child_process *cmd, struct child_err *cerr)
+{
+ static void (*old_errfn)(const char *err, va_list params);
+
+ old_errfn = get_error_routine();
+ set_error_routine(fake_fatal);
+ errno = cerr->syserr;
+
+ switch (cerr->err) {
+ case CHILD_ERR_CHDIR:
+ error_errno("exec '%s': cd to '%s' failed",
+ cmd->argv[0], cmd->dir);
+ break;
+ case CHILD_ERR_DUP2:
+ error_errno("dup2() in child failed");
+ break;
+ case CHILD_ERR_CLOSE:
+ error_errno("close() in child failed");
+ break;
+ case CHILD_ERR_SIGPROCMASK:
+ error_errno("sigprocmask failed restoring signals");
+ break;
+ case CHILD_ERR_ENOENT:
+ error_errno("cannot run %s", cmd->argv[0]);
+ break;
+ case CHILD_ERR_SILENT:
+ break;
+ case CHILD_ERR_ERRNO:
+ error_errno("cannot exec '%s'", cmd->argv[0]);
+ break;
+ }
+ set_error_routine(old_errfn);
+}
+
+static void prepare_cmd(struct argv_array *out, const struct child_process *cmd)
+{
+ if (!cmd->argv[0])
+ die("BUG: command is empty");
+
+ /*
+ * Add SHELL_PATH so in the event exec fails with ENOEXEC we can
+ * attempt to interpret the command with 'sh'.
+ */
+ argv_array_push(out, SHELL_PATH);
+
+ if (cmd->git_cmd) {
+ argv_array_push(out, "git");
+ argv_array_pushv(out, cmd->argv);
+ } else if (cmd->use_shell) {
+ prepare_shell_cmd(out, cmd->argv);
+ } else {
+ argv_array_pushv(out, cmd->argv);
+ }
+
/*
- * execvp failed. If possible, we'd like to let start_command
- * know, so failures like ENOENT can be handled right away; but
- * otherwise, finish_command will still report the error.
+ * If there are no '/' characters in the command then perform a path
+ * lookup and use the resolved path as the command to exec. If the
+ * command contains a '/' or was not found in the path, have exec
+ * attempt to invoke the command directly.
*/
- xwrite(child_notifier, "", 1);
+ if (!strchr(out->argv[1], '/')) {
+ char *program = locate_in_PATH(out->argv[1]);
+ if (program) {
+ free((char *)out->argv[1]);
+ out->argv[1] = program;
+ }
+ }
+}
+
+static char **prep_childenv(const char *const *deltaenv)
+{
+ extern char **environ;
+ char **childenv;
+ struct string_list env = STRING_LIST_INIT_DUP;
+ struct strbuf key = STRBUF_INIT;
+ const char *const *p;
+ int i;
+
+ /* Construct a sorted string list consisting of the current environ */
+ for (p = (const char *const *) environ; p && *p; p++) {
+ const char *equals = strchr(*p, '=');
+
+ if (equals) {
+ strbuf_reset(&key);
+ strbuf_add(&key, *p, equals - *p);
+ string_list_append(&env, key.buf)->util = (void *) *p;
+ } else {
+ string_list_append(&env, *p)->util = (void *) *p;
+ }
+ }
+ string_list_sort(&env);
+
+ /* Merge in 'deltaenv' with the current environ */
+ for (p = deltaenv; p && *p; p++) {
+ const char *equals = strchr(*p, '=');
+
+ if (equals) {
+ /* ('key=value'), insert or replace entry */
+ strbuf_reset(&key);
+ strbuf_add(&key, *p, equals - *p);
+ string_list_insert(&env, key.buf)->util = (void *) *p;
+ } else {
+ /* otherwise ('key') remove existing entry */
+ string_list_remove(&env, *p, 0);
+ }
+ }
+
+ /* Create an array of 'char *' to be used as the childenv */
+ childenv = xmalloc((env.nr + 1) * sizeof(char *));
+ for (i = 0; i < env.nr; i++)
+ childenv[i] = env.items[i].util;
+ childenv[env.nr] = NULL;
+
+ string_list_clear(&env, 0);
+ strbuf_release(&key);
+ return childenv;
+}
+
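To make the deltaenv semantics concrete, a hedged sketch (the variable names are invented, and prep_childenv() is static to run-command.c): a "key=value" entry inserts or replaces that variable, while a bare "key" entry removes it from the inherited environment.

/* As if written inside run-command.c. */
static char **example_childenv(void)
{
	const char *deltaenv[] = {
		"GIT_DIR=.git",		/* insert or replace GIT_DIR */
		"GIT_WORK_TREE",	/* drop GIT_WORK_TREE, if inherited */
		NULL
	};

	return prep_childenv(deltaenv);	/* free() the result when done */
}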
+struct atfork_state {
+#ifndef NO_PTHREADS
+ int cs;
+#endif
+ sigset_t old;
+};
+
+#ifndef NO_PTHREADS
+static void bug_die(int err, const char *msg)
+{
+ if (err) {
+ errno = err;
+ die_errno("BUG: %s", msg);
+ }
}
#endif
+static void atfork_prepare(struct atfork_state *as)
+{
+ sigset_t all;
+
+ if (sigfillset(&all))
+ die_errno("sigfillset");
+#ifdef NO_PTHREADS
+ if (sigprocmask(SIG_SETMASK, &all, &as->old))
+ die_errno("sigprocmask");
+#else
+ bug_die(pthread_sigmask(SIG_SETMASK, &all, &as->old),
+ "blocking all signals");
+ bug_die(pthread_setcancelstate(PTHREAD_CANCEL_DISABLE, &as->cs),
+ "disabling cancellation");
+#endif
+}
+
+static void atfork_parent(struct atfork_state *as)
+{
+#ifdef NO_PTHREADS
+ if (sigprocmask(SIG_SETMASK, &as->old, NULL))
+ die_errno("sigprocmask");
+#else
+ bug_die(pthread_setcancelstate(as->cs, NULL),
+ "re-enabling cancellation");
+ bug_die(pthread_sigmask(SIG_SETMASK, &as->old, NULL),
+ "restoring signal mask");
+#endif
+}
+#endif /* GIT_WINDOWS_NATIVE */
+
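The intended call sequence around fork(), sketched for a POSIX build (this mirrors what start_command() does further down; error handling is trimmed and the helper name is invented):

/* As if written inside run-command.c. */
static pid_t fork_with_signals_blocked(void)
{
	struct atfork_state as;
	pid_t pid;

	atfork_prepare(&as);	/* block all signals, disable cancellation */
	pid = fork();
	if (!pid)
		/* child: only async-signal-safe calls from here on */
		sigprocmask(SIG_SETMASK, &as.old, NULL);
	else
		atfork_parent(&as);	/* parent: restore mask and cancel state */
	return pid;
}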
static inline void set_cloexec(int fd)
{
int flags = fcntl(fd, F_GETFD);
code += 128;
} else if (WIFEXITED(status)) {
code = WEXITSTATUS(status);
- /*
- * Convert special exit code when execvp failed.
- */
- if (code == 127) {
- code = -1;
- failed_errno = ENOENT;
- }
} else {
error("waitpid is confused (%s)", argv0);
}
#ifndef GIT_WINDOWS_NATIVE
{
int notify_pipe[2];
+ int null_fd = -1;
+ char **childenv;
+ struct argv_array argv = ARGV_ARRAY_INIT;
+ struct child_err cerr;
+ struct atfork_state as;
+
if (pipe(notify_pipe))
notify_pipe[0] = notify_pipe[1] = -1;
+ if (cmd->no_stdin || cmd->no_stdout || cmd->no_stderr) {
+ null_fd = open("/dev/null", O_RDWR | O_CLOEXEC);
+ if (null_fd < 0)
+ die_errno(_("open /dev/null failed"));
+ set_cloexec(null_fd);
+ }
+
+ prepare_cmd(&argv, cmd);
+ childenv = prep_childenv(cmd->env);
+ atfork_prepare(&as);
+
+ /*
+ * NOTE: In order to prevent deadlocking when using threads special
+ * care should be taken with the function calls made in between the
+ * fork() and exec() calls. No calls should be made to functions which
+ * require acquiring a lock (e.g. malloc) as the lock could have been
+ * held by another thread at the time of forking, causing the lock to
+ * never be released in the child process. This means only
+ * Async-Signal-Safe functions are permitted in the child.
+ */
cmd->pid = fork();
failed_errno = errno;
if (!cmd->pid) {
+ int sig;
/*
- * Redirect the channel to write syscall error messages to
- * before redirecting the process's stderr so that all die()
- * in subsequent call paths use the parent's stderr.
+ * Ensure the default die/error/warn routines do not get
+ * called, they can take stdio locks and malloc.
*/
- if (cmd->no_stderr || need_err) {
- int child_err = dup(2);
- set_cloexec(child_err);
- set_error_handle(fdopen(child_err, "w"));
- }
+ set_die_routine(child_die_fn);
+ set_error_routine(child_error_fn);
+ set_warn_routine(child_warn_fn);
close(notify_pipe[0]);
set_cloexec(notify_pipe[1]);
child_notifier = notify_pipe[1];
- atexit(notify_parent);
if (cmd->no_stdin)
- dup_devnull(0);
+ child_dup2(null_fd, 0);
else if (need_in) {
- dup2(fdin[0], 0);
- close_pair(fdin);
+ child_dup2(fdin[0], 0);
+ child_close_pair(fdin);
} else if (cmd->in) {
- dup2(cmd->in, 0);
- close(cmd->in);
+ child_dup2(cmd->in, 0);
+ child_close(cmd->in);
}
if (cmd->no_stderr)
- dup_devnull(2);
+ child_dup2(null_fd, 2);
else if (need_err) {
- dup2(fderr[1], 2);
- close_pair(fderr);
+ child_dup2(fderr[1], 2);
+ child_close_pair(fderr);
} else if (cmd->err > 1) {
- dup2(cmd->err, 2);
- close(cmd->err);
+ child_dup2(cmd->err, 2);
+ child_close(cmd->err);
}
if (cmd->no_stdout)
- dup_devnull(1);
+ child_dup2(null_fd, 1);
else if (cmd->stdout_to_stderr)
- dup2(2, 1);
+ child_dup2(2, 1);
else if (need_out) {
- dup2(fdout[1], 1);
- close_pair(fdout);
+ child_dup2(fdout[1], 1);
+ child_close_pair(fdout);
} else if (cmd->out > 1) {
- dup2(cmd->out, 1);
- close(cmd->out);
+ child_dup2(cmd->out, 1);
+ child_close(cmd->out);
}
if (cmd->dir && chdir(cmd->dir))
- die_errno("exec '%s': cd to '%s' failed", cmd->argv[0],
- cmd->dir);
- if (cmd->env) {
- for (; *cmd->env; cmd->env++) {
- if (strchr(*cmd->env, '='))
- putenv((char *)*cmd->env);
- else
- unsetenv(*cmd->env);
- }
+ child_die(CHILD_ERR_CHDIR);
+
+ /*
+ * restore default signal handlers here, in case
+ * we catch a signal right before execve below
+ */
+ for (sig = 1; sig < NSIG; sig++) {
+ /* ignored signals get reset to SIG_DFL on execve */
+ if (signal(sig, SIG_DFL) == SIG_IGN)
+ signal(sig, SIG_IGN);
}
- if (cmd->git_cmd)
- execv_git_cmd(cmd->argv);
- else if (cmd->use_shell)
- execv_shell_cmd(cmd->argv);
- else
- sane_execvp(cmd->argv[0], (char *const*) cmd->argv);
+
+ if (sigprocmask(SIG_SETMASK, &as.old, NULL) != 0)
+ child_die(CHILD_ERR_SIGPROCMASK);
+
+ /*
+ * Attempt to exec using the command and arguments starting at
+ * argv.argv[1]. argv.argv[0] contains SHELL_PATH which will
+ * be used in the event exec failed with ENOEXEC at which point
+ * we will try to interpret the command using 'sh'.
+ */
+ execve(argv.argv[1], (char *const *) argv.argv + 1,
+ (char *const *) childenv);
+ if (errno == ENOEXEC)
+ execve(argv.argv[0], (char *const *) argv.argv,
+ (char *const *) childenv);
+
if (errno == ENOENT) {
- if (!cmd->silent_exec_failure)
- error("cannot run %s: %s", cmd->argv[0],
- strerror(ENOENT));
- exit(127);
+ if (cmd->silent_exec_failure)
+ child_die(CHILD_ERR_SILENT);
+ child_die(CHILD_ERR_ENOENT);
} else {
- die_errno("cannot exec '%s'", cmd->argv[0]);
+ child_die(CHILD_ERR_ERRNO);
}
}
+ atfork_parent(&as);
if (cmd->pid < 0)
error_errno("cannot fork() for %s", cmd->argv[0]);
else if (cmd->clean_on_exit)
mark_child_for_cleanup(cmd->pid, cmd);
/*
- * Wait for child's execvp. If the execvp succeeds (or if fork()
+ * Wait for child's exec. If the exec succeeds (or if fork()
* failed), EOF is seen immediately by the parent. Otherwise, the
- * child process sends a single byte.
+ * child process sends a child_err struct.
* Note that use of this infrastructure is completely advisory,
* therefore, we keep error checks minimal.
*/
close(notify_pipe[1]);
- if (read(notify_pipe[0], &notify_pipe[1], 1) == 1) {
+ if (xread(notify_pipe[0], &cerr, sizeof(cerr)) == sizeof(cerr)) {
/*
- * At this point we know that fork() succeeded, but execvp()
+ * At this point we know that fork() succeeded, but exec()
* failed. Errors have been reported to our stderr.
*/
wait_or_whine(cmd->pid, cmd->argv[0], 0);
+ child_err_spew(cmd, &cerr);
failed_errno = errno;
cmd->pid = -1;
}
close(notify_pipe[0]);
+
+ if (null_fd >= 0)
+ close(null_fd);
+ argv_array_clear(&argv);
+ free(childenv);
}
#else
{
#define CHILD_PROCESS_INIT { NULL, ARGV_ARRAY_INIT, ARGV_ARRAY_INIT }
void child_process_init(struct child_process *);
void child_process_clear(struct child_process *);
+extern int is_executable(const char *name);
int start_command(struct child_process *);
int finish_command(struct child_process *);
if (active_cache_changed &&
write_locked_index(&the_index, &index_lock, COMMIT_LOCK))
- /* TRANSLATORS: %s will be "revert", "cherry-pick" or
+ /*
+ * TRANSLATORS: %s will be "revert", "cherry-pick" or
* "rebase -i".
*/
return error(_("%s: Unable to write new index file"),
strbuf_trim(&stash_sha1);
child.git_cmd = 1;
+ child.no_stdout = 1;
+ child.no_stderr = 1;
argv_array_push(&child.args, "stash");
argv_array_push(&child.args, "apply");
argv_array_push(&child.args, stash_sha1.buf);
if (!run_command(&child))
- printf(_("Applied autostash."));
+ printf(_("Applied autostash.\n"));
else {
struct child_process store = CHILD_PROCESS_INIT;
res = error(_("could not read orig-head"));
goto cleanup_head_ref;
}
+ strbuf_reset(&buf);
if (!read_oneliner(&buf, rebase_path_onto(), 0)) {
res = error(_("could not read 'onto'"));
goto cleanup_head_ref;
/* --work-tree is set without --git-dir; use discovered one */
if (getenv(GIT_WORK_TREE_ENVIRONMENT) || git_work_tree_cfg) {
+ char *to_free = NULL;
+ const char *ret;
+
if (offset != cwd->len && !is_absolute_path(gitdir))
- gitdir = real_pathdup(gitdir, 1);
+ gitdir = to_free = real_pathdup(gitdir, 1);
if (chdir(cwd->buf))
die_errno("Could not come back to cwd");
- return setup_explicit_git_dir(gitdir, cwd, nongit_ok);
+ ret = setup_explicit_git_dir(gitdir, cwd, nongit_ok);
+ free(to_free);
+ return ret;
}
/* #16.2, #17.2, #20.2, #21.2, #24, #25, #28, #29 (see t1510) */
/* --work-tree is set without --git-dir; use discovered one */
if (getenv(GIT_WORK_TREE_ENVIRONMENT) || git_work_tree_cfg) {
- const char *gitdir;
+ static const char *gitdir;
gitdir = offset == cwd->len ? "." : xmemdupz(cwd->buf, offset);
if (chdir(cwd->buf))
memset(oc, 0, sizeof(*oc));
oc->mode = S_IFINVALID;
+ strbuf_init(&oc->symlink_path, 0);
ret = get_sha1_1(name, namelen, sha1, flags);
if (!ret)
return ret;
namelen = strlen(cp);
}
- strlcpy(oc->path, cp, sizeof(oc->path));
+ if (flags & GET_SHA1_RECORD_PATH)
+ oc->path = xstrdup(cp);
if (!active_cache)
read_cache();
}
}
hashcpy(oc->tree, tree_sha1);
- strlcpy(oc->path, filename, sizeof(oc->path));
+ if (flags & GET_SHA1_RECORD_PATH)
+ oc->path = xstrdup(filename);
free(new_filename);
return ret;
get_sha1_with_context_1(name, GET_SHA1_ONLY_TO_DIE, prefix, sha1, &oc);
}
-int get_sha1_with_context(const char *str, unsigned flags, unsigned char *sha1, struct object_context *orc)
+int get_sha1_with_context(const char *str, unsigned flags, unsigned char *sha1, struct object_context *oc)
{
if (flags & GET_SHA1_FOLLOW_SYMLINKS && flags & GET_SHA1_ONLY_TO_DIE)
die("BUG: incompatible flags for get_sha1_with_context");
- return get_sha1_with_context_1(str, flags, NULL, sha1, orc);
+ return get_sha1_with_context_1(str, flags, NULL, sha1, oc);
}
* https://opensource.org/licenses/MIT
***/
-#include "cache.h"
-#include "sha1dc/sha1.h"
-#include "sha1dc/ubc_check.h"
+#ifndef SHA1DC_NO_STANDARD_INCLUDES
+#include <string.h>
+#include <memory.h>
+#include <stdio.h>
+#include <stdlib.h>
+#endif
+
+#ifdef SHA1DC_CUSTOM_INCLUDE_SHA1_C
+#include SHA1DC_CUSTOM_INCLUDE_SHA1_C
+#endif
+
+#ifndef SHA1DC_INIT_SAFE_HASH_DEFAULT
+#define SHA1DC_INIT_SAFE_HASH_DEFAULT 1
+#endif
+
+#include "sha1.h"
+#include "ubc_check.h"
/*
If you are compiling on a big endian platform and your compiler does not define one of these,
you will have to add whatever macros your tool chain defines to indicate Big-Endianness.
*/
-#if (defined(__BYTE_ORDER) && (__BYTE_ORDER == __BIG_ENDIAN)) || \
+#ifdef SHA1DC_BIGENDIAN
+#undef SHA1DC_BIGENDIAN
+#endif
+#if (!defined SHA1DC_FORCE_LITTLEENDIAN) && \
+ ((defined(__BYTE_ORDER) && (__BYTE_ORDER == __BIG_ENDIAN)) || \
(defined(__BYTE_ORDER__) && (__BYTE_ORDER__ == __BIG_ENDIAN__)) || \
- defined(__BIG_ENDIAN__) || defined(__ARMEB__) || defined(__THUMBEB__) || defined(__AARCH64EB__) || \
- defined(_MIPSEB) || defined(__MIPSEB) || defined(__MIPSEB__)
+ defined(_BIG_ENDIAN) || defined(__BIG_ENDIAN__) || defined(__ARMEB__) || defined(__THUMBEB__) || defined(__AARCH64EB__) || \
+ defined(_MIPSEB) || defined(__MIPSEB) || defined(__MIPSEB__) || defined(SHA1DC_FORCE_BIGENDIAN))
+
+#define SHA1DC_BIGENDIAN
-#define SHA1DC_BIGENDIAN 1
-#else
-#undef SHA1DC_BIGENDIAN
#endif /*ENDIANNESS SELECTION*/
+#if (defined SHA1DC_FORCE_UNALIGNED_ACCESS || \
+ defined(__amd64__) || defined(__amd64) || defined(__x86_64__) || defined(__x86_64) || \
+ defined(i386) || defined(__i386) || defined(__i386__) || defined(__i486__) || \
+ defined(__i586__) || defined(__i686__) || defined(_M_IX86) || defined(__X86__) || \
+ defined(_X86_) || defined(__THW_INTEL__) || defined(__I86__) || defined(__INTEL__) || \
+ defined(__386) || defined(_M_X64) || defined(_M_AMD64))
+
+#define SHA1DC_ALLOW_UNALIGNED_ACCESS
+
+#endif /*UNALIGNMENT DETECTION*/
+
+
#define rotate_right(x,n) (((x)>>(n))|((x)<<(32-(n))))
#define rotate_left(x,n) (((x)<<(n))|((x)>>(32-(n))))
#define sha1_mix(W, t) (rotate_left(W[t - 3] ^ W[t - 8] ^ W[t - 14] ^ W[t - 16], 1))
-#if defined(SHA1DC_BIGENDIAN)
+#ifdef SHA1DC_BIGENDIAN
#define sha1_load(m, t, temp) { temp = m[t]; }
#else
#define sha1_load(m, t, temp) { temp = m[t]; sha1_bswap32(temp); }
-#endif /* !defined(SHA1DC_BIGENDIAN) */
+#endif
#define sha1_store(W, t, x) *(volatile uint32_t *)&W[t] = x
ihvout[0] = ihvin[0] + a; ihvout[1] = ihvin[1] + b; ihvout[2] = ihvin[2] + c; ihvout[3] = ihvin[3] + d; ihvout[4] = ihvin[4] + e; \
}
+#ifdef _MSC_VER
+#pragma warning(push)
+#pragma warning(disable: 4127) /* Compiler complains about the checks in the above macro being constant. */
+#endif
+
#ifdef DOSTORESTATE0
SHA1_RECOMPRESS(0)
#endif
SHA1_RECOMPRESS(79)
#endif
+#ifdef _MSC_VER
+#pragma warning(pop)
+#endif
+
static void sha1_recompression_step(uint32_t step, uint32_t ihvin[5], uint32_t ihvout[5], const uint32_t me2[80], const uint32_t state[5])
{
switch (step)
ctx->ihv[3] = 0x10325476;
ctx->ihv[4] = 0xC3D2E1F0;
ctx->found_collision = 0;
- ctx->safe_hash = 0;
+ ctx->safe_hash = SHA1DC_INIT_SAFE_HASH_DEFAULT;
ctx->ubc_check = 1;
ctx->detect_coll = 1;
ctx->reduced_round_coll = 0;
void SHA1DCUpdate(SHA1_CTX* ctx, const char* buf, size_t len)
{
unsigned left, fill;
+
if (len == 0)
return;
while (len >= 64)
{
ctx->total += 64;
+
+#if defined(SHA1DC_ALLOW_UNALIGNED_ACCESS)
sha1_process(ctx, (uint32_t*)(buf));
+#else
+ memcpy(ctx->buffer, buf, 64);
+ sha1_process(ctx, (uint32_t*)(ctx->buffer));
+#endif /* defined(SHA1DC_ALLOW_UNALIGNED_ACCESS) */
buf += 64;
len -= 64;
}
return ctx->found_collision;
}
-void git_SHA1DCFinal(unsigned char hash[20], SHA1_CTX *ctx)
-{
- if (!SHA1DCFinal(hash, ctx))
- return;
- die("SHA-1 appears to be part of a collision attack: %s",
- sha1_to_hex(hash));
-}
-
-void git_SHA1DCUpdate(SHA1_CTX *ctx, const void *vdata, unsigned long len)
-{
- const char *data = vdata;
- /* We expect an unsigned long, but sha1dc only takes an int */
- while (len > INT_MAX) {
- SHA1DCUpdate(ctx, data, INT_MAX);
- data += INT_MAX;
- len -= INT_MAX;
- }
- SHA1DCUpdate(ctx, data, len);
-}
+#ifdef SHA1DC_CUSTOM_TRAILING_INCLUDE_SHA1_C
+#include SHA1DC_CUSTOM_TRAILING_INCLUDE_SHA1_C
+#endif
* See accompanying file LICENSE.txt or copy at
* https://opensource.org/licenses/MIT
***/
+
#ifndef SHA1DC_SHA1_H
#define SHA1DC_SHA1_H
extern "C" {
#endif
-/* uses SHA-1 message expansion to expand the first 16 words of W[] to 80 words */
-/* void sha1_message_expansion(uint32_t W[80]); */
-
-/* sha-1 compression function; first version takes a message block pre-parsed as 16 32-bit integers, second version takes an already expanded message) */
-/* void sha1_compression(uint32_t ihv[5], const uint32_t m[16]);
-void sha1_compression_W(uint32_t ihv[5], const uint32_t W[80]); */
+#ifndef SHA1DC_NO_STANDARD_INCLUDES
+#include <stdint.h>
+#endif
-/* same as sha1_compression_W, but additionally store intermediate states */
+/* sha-1 compression function that takes an already expanded message, and additionally stores intermediate states */
/* only stores states ii (the state between step ii-1 and step ii) when DOSTORESTATEii is defined in ubc_check.h */
void sha1_compression_states(uint32_t[5], const uint32_t[16], uint32_t[80], uint32_t[80][5]);
/*
-// function type for sha1_recompression_step_T (uint32_t ihvin[5], uint32_t ihvout[5], const uint32_t me2[80], const uint32_t state[5])
-// where 0 <= T < 80
-// me2 is an expanded message (the expansion of an original message block XOR'ed with a disturbance vector's message block difference)
-// state is the internal state (a,b,c,d,e) before step T of the SHA-1 compression function while processing the original message block
-// the function will return:
-// ihvin: the reconstructed input chaining value
-// ihvout: the reconstructed output chaining value
+// Function type for sha1_recompression_step_T (uint32_t ihvin[5], uint32_t ihvout[5], const uint32_t me2[80], const uint32_t state[5]).
+// Where 0 <= T < 80
+// me2 is an expanded message (the expansion of an original message block XOR'ed with a disturbance vector's message block difference.)
+// state is the internal state (a,b,c,d,e) before step T of the SHA-1 compression function while processing the original message block.
+// The function will return:
+// ihvin: The reconstructed input chaining value.
+// ihvout: The reconstructed output chaining value.
*/
typedef void(*sha1_recompression_type)(uint32_t*, uint32_t*, const uint32_t*, const uint32_t*);
-/* table of sha1_recompression_step_0, ... , sha1_recompression_step_79 */
-/* extern sha1_recompression_type sha1_recompression_step[80];*/
-
-/* a callback function type that can be set to be called when a collision block has been found: */
+/* A callback function type that can be set to be called when a collision block has been found: */
/* void collision_block_callback(uint64_t byteoffset, const uint32_t ihvin1[5], const uint32_t ihvin2[5], const uint32_t m1[80], const uint32_t m2[80]) */
typedef void(*collision_block_callback)(uint64_t, const uint32_t*, const uint32_t*, const uint32_t*, const uint32_t*);
-/* the SHA-1 context */
+/* The SHA-1 context. */
typedef struct {
uint64_t total;
uint32_t ihv[5];
uint32_t states[80][5];
} SHA1_CTX;
-/* initialize SHA-1 context */
+/* Initialize SHA-1 context. */
void SHA1DCInit(SHA1_CTX*);
/*
-// function to enable safe SHA-1 hashing:
-// collision attacks are thwarted by hashing a detected near-collision block 3 times
-// think of it as extending SHA-1 from 80-steps to 240-steps for such blocks:
-// the best collision attacks against SHA-1 have complexity about 2^60,
-// thus for 240-steps an immediate lower-bound for the best cryptanalytic attacks would 2^180
-// an attacker would be better off using a generic birthday search of complexity 2^80
-//
-// enabling safe SHA-1 hashing will result in the correct SHA-1 hash for messages where no collision attack was detected
-// but it will result in a different SHA-1 hash for messages where a collision attack was detected
-// this will automatically invalidate SHA-1 based digital signature forgeries
-// enabled by default
+ Function to enable safe SHA-1 hashing:
+ Collision attacks are thwarted by hashing a detected near-collision block 3 times.
+ Think of it as extending SHA-1 from 80-steps to 240-steps for such blocks:
+ The best collision attacks against SHA-1 have complexity about 2^60,
+ thus for 240-steps an immediate lower-bound for the best cryptanalytic attacks would be 2^180.
+ An attacker would be better off using a generic birthday search of complexity 2^80.
+
+ Enabling safe SHA-1 hashing will result in the correct SHA-1 hash for messages where no collision attack was detected,
+ but it will result in a different SHA-1 hash for messages where a collision attack was detected.
+ This will automatically invalidate SHA-1 based digital signature forgeries.
+ Enabled by default.
*/
void SHA1DCSetSafeHash(SHA1_CTX*, int);
-/* function to disable or enable the use of Unavoidable Bitconditions (provides a significant speed up) */
-/* enabled by default */
+/*
+ Function to disable or enable the use of Unavoidable Bitconditions (provides a significant speed up).
+ Enabled by default.
+ */
void SHA1DCSetUseUBC(SHA1_CTX*, int);
-/* function to disable or enable the use of Collision Detection */
-/* enabled by default */
+/*
+ Function to disable or enable the use of Collision Detection.
+ Enabled by default.
+ */
void SHA1DCSetUseDetectColl(SHA1_CTX*, int);
/* function to disable or enable the detection of reduced-round SHA-1 collisions */
/* returns: 0 = no collision detected, otherwise = collision found => warn user for active attack */
int SHA1DCFinal(unsigned char[20], SHA1_CTX*);
-/*
- * Same as SHA1DCFinal, but convert collision attack case into a verbose die().
- */
-void git_SHA1DCFinal(unsigned char [20], SHA1_CTX *);
-
-/*
- * Same as SHA1DCUpdate, but adjust types to match git's usual interface.
- */
-void git_SHA1DCUpdate(SHA1_CTX *ctx, const void *data, unsigned long len);
-
-#define platform_SHA_CTX SHA1_CTX
-#define platform_SHA1_Init SHA1DCInit
-#define platform_SHA1_Update git_SHA1DCUpdate
-#define platform_SHA1_Final git_SHA1DCFinal
-
#if defined(__cplusplus)
}
#endif
-#endif /* SHA1DC_SHA1_H */
+#ifdef SHA1DC_CUSTOM_TRAILING_INCLUDE_SHA1_H
+#include SHA1DC_CUSTOM_TRAILING_INCLUDE_SHA1_H
+#endif
+
+#endif
// ubc_check has been verified against ubc_check_verify using the 'ubc_check_test' program in the tools section
*/
-#include "git-compat-util.h"
-#include "sha1dc/ubc_check.h"
+#ifndef SHA1DC_NO_STANDARD_INCLUDES
+#include <stdint.h>
+#endif
+#ifdef SHA1DC_CUSTOM_INCLUDE_UBC_CHECK_C
+#include SHA1DC_CUSTOM_INCLUDE_UBC_CHECK_C
+#endif
+#include "ubc_check.h"
static const uint32_t DV_I_43_0_bit = (uint32_t)(1) << 0;
static const uint32_t DV_I_44_0_bit = (uint32_t)(1) << 1;
dvmask[0]=mask;
}
+
+#ifdef SHA1DC_CUSTOM_TRAILING_INCLUDE_UBC_CHECK_C
+#include SHA1DC_CUSTOM_TRAILING_INCLUDE_UBC_CHECK_C
+#endif
// thus one needs to do the recompression check for each DV that has its bit set
*/
-#ifndef UBC_CHECK_H
-#define UBC_CHECK_H
+#ifndef SHA1DC_UBC_CHECK_H
+#define SHA1DC_UBC_CHECK_H
#if defined(__cplusplus)
extern "C" {
#endif
+#ifndef SHA1DC_NO_STANDARD_INCLUDES
+#include <stdint.h>
+#endif
+
#define DVMASKSIZE 1
typedef struct { int dvType; int dvK; int dvB; int testt; int maski; int maskb; uint32_t dm[80]; } dv_info_t;
extern dv_info_t sha1_dvs[];
}
#endif
-#endif /* UBC_CHECK_H */
+#ifdef SHA1DC_CUSTOM_TRAILING_INCLUDE_UBC_CHECK_H
+#include SHA1DC_CUSTOM_TRAILING_INCLUDE_UBC_CHECK_H
+#endif
+
+#endif
--- /dev/null
+/*
+ * This code is included at the end of sha1dc/sha1.c with the
+ * SHA1DC_CUSTOM_TRAILING_INCLUDE_SHA1_C macro.
+ */
+
+void git_SHA1DCFinal(unsigned char hash[20], SHA1_CTX *ctx)
+{
+ if (!SHA1DCFinal(hash, ctx))
+ return;
+ die("SHA-1 appears to be part of a collision attack: %s",
+ sha1_to_hex(hash));
+}
+
+void git_SHA1DCUpdate(SHA1_CTX *ctx, const void *vdata, unsigned long len)
+{
+ const char *data = vdata;
+ /* We expect an unsigned long, but sha1dc only takes an int */
+ while (len > INT_MAX) {
+ SHA1DCUpdate(ctx, data, INT_MAX);
+ data += INT_MAX;
+ len -= INT_MAX;
+ }
+ SHA1DCUpdate(ctx, data, len);
+}
--- /dev/null
+/*
+ * This code is included at the end of sha1dc/sha1.h with the
+ * SHA1DC_CUSTOM_TRAILING_INCLUDE_SHA1_H macro.
+ */
+
+/*
+ * Same as SHA1DCFinal, but convert collision attack case into a verbose die().
+ */
+void git_SHA1DCFinal(unsigned char [20], SHA1_CTX *);
+
+/*
+ * Same as SHA1DCUpdate, but adjust types to match git's usual interface.
+ */
+void git_SHA1DCUpdate(SHA1_CTX *ctx, const void *data, unsigned long len);
+
+#define platform_SHA_CTX SHA1_CTX
+#define platform_SHA1_Init SHA1DCInit
+#define platform_SHA1_Update git_SHA1DCUpdate
+#define platform_SHA1_Final git_SHA1DCFinal
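How the custom trailing includes are presumably wired up by the build (the exact compiler flags are an assumption and are not shown in this patch); once the two wrapper files above are pulled in, the rest of git keeps calling the generic interface:

/*
 * Assumed build wiring: SHA1DC_CUSTOM_TRAILING_INCLUDE_SHA1_C / _SHA1_H are
 * expected to be defined (e.g. via -D flags) to point at the two new wrapper
 * files above.  Code elsewhere then uses the usual macros:
 */
static void example_hash(const void *buf, unsigned long len, unsigned char out[20])
{
	platform_SHA_CTX ctx;

	platform_SHA1_Init(&ctx);		/* SHA1DCInit */
	platform_SHA1_Update(&ctx, buf, len);	/* git_SHA1DCUpdate */
	platform_SHA1_Final(out, &ctx);		/* git_SHA1DCFinal, dies on collision */
}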
struct commit_list *head = NULL;
int bitmap_nr = (info->nr_bits + 31) / 32;
size_t bitmap_size = st_mult(sizeof(uint32_t), bitmap_nr);
- uint32_t *tmp = xmalloc(bitmap_size); /* to be freed before return */
- uint32_t *bitmap = paint_alloc(info);
struct commit *c = lookup_commit_reference_gently(oid, 1);
+ uint32_t *tmp; /* to be freed before return */
+ uint32_t *bitmap;
+
if (!c)
return;
+
+ tmp = xmalloc(bitmap_size);
+ bitmap = paint_alloc(info);
memset(bitmap, 0, bitmap_size);
bitmap[id / 32] |= (1U << (id % 32));
commit_list_insert(c, &head);
int i;
/*
- * do not delete old si->base, its index entries may be shared
- * with istate->cache[]. Accept a bit of leaking here because
- * this code is only used by short-lived update-index.
+ * If "si" is shared with another index_state (e.g. by
+ * unpack-trees code), we will need to duplicate split_index
+ * struct. It's not happening now though, luckily.
*/
+ assert(si->refcount <= 1);
+
+ unshare_split_index(istate, 0);
+ if (si->base) {
+ discard_index(si->base);
+ free(si->base);
+ }
si->base = xcalloc(1, sizeof(*si->base));
si->base->version = istate->version;
/* zero timestamp disables racy test in ce_write_index() */
istate->cache_nr = si->saved_cache_nr;
}
+void unshare_split_index(struct index_state *istate, int discard)
+{
+ struct split_index *si = istate->split_index;
+ int i;
+
+ if (!si || !si->base)
+ return;
+
+ for (i = 0; i < istate->cache_nr; i++) {
+ struct cache_entry *ce = istate->cache[i];
+ struct cache_entry *new = NULL;
+
+ if (!ce->index ||
+ ce->index > si->base->cache_nr ||
+ ce != si->base->cache[ce->index - 1])
+ continue;
+
+ if (!discard) {
+ int len = ce_namelen(ce);
+ new = xcalloc(1, cache_entry_size(len));
+ copy_cache_entry(new, ce);
+ memcpy(new->name, ce->name, len);
+ new->index = 0;
+ }
+ istate->cache[i] = new;
+ }
+}
+
+
void discard_split_index(struct index_state *istate)
{
struct split_index *si = istate->split_index;
if (!si)
return;
+ unshare_split_index(istate, 0);
istate->split_index = NULL;
si->refcount--;
if (si->refcount)
void remove_split_index(struct index_state *istate)
{
- if (istate->split_index) {
- /*
- * can't discard_split_index(&the_index); because that
- * will destroy split_index->base->cache[], which may
- * be shared with the_index.cache[]. So yeah we're
- * leaking a bit here.
- */
- istate->split_index = NULL;
- istate->cache_changed |= SOMETHING_CHANGED;
- }
+ if (!istate->split_index)
+ return;
+ discard_split_index(istate);
+ istate->cache_changed |= SOMETHING_CHANGED;
}
void discard_split_index(struct index_state *istate);
void add_split_index(struct index_state *istate);
void remove_split_index(struct index_state *istate);
+void unshare_split_index(struct index_state *istate, int discard);
#endif
return list->items + index;
}
+void string_list_remove(struct string_list *list, const char *string,
+ int free_util)
+{
+ int exact_match;
+ int i = get_entry_index(list, string, &exact_match);
+
+ if (exact_match) {
+ if (list->strdup_strings)
+ free(list->items[i].string);
+ if (free_util)
+ free(list->items[i].util);
+
+ list->nr--;
+ memmove(list->items + i, list->items + i + 1,
+ (list->nr - i) * sizeof(struct string_list_item));
+ }
+}
+
int string_list_has_string(const struct string_list *list, const char *string)
{
int exact_match;
*/
struct string_list_item *string_list_insert(struct string_list *list, const char *string);
+/*
+ * Removes the given string from the sorted list.
+ * If the string doesn't exist, the list is not altered.
+ */
+extern void string_list_remove(struct string_list *list, const char *string,
+ int free_util);
+
/*
* Checks if the given string is part of a sorted list. If it is part of the list,
 * return the corresponding string_list_item, NULL otherwise.
--- /dev/null
+/*
+ * Generic implementation of background process infrastructure.
+ */
+#include "sub-process.h"
+#include "sigchain.h"
+#include "pkt-line.h"
+
+int cmd2process_cmp(const struct subprocess_entry *e1,
+ const struct subprocess_entry *e2,
+ const void *unused)
+{
+ return strcmp(e1->cmd, e2->cmd);
+}
+
+struct subprocess_entry *subprocess_find_entry(struct hashmap *hashmap, const char *cmd)
+{
+ struct subprocess_entry key;
+
+ hashmap_entry_init(&key, strhash(cmd));
+ key.cmd = cmd;
+ return hashmap_get(hashmap, &key, NULL);
+}
+
+int subprocess_read_status(int fd, struct strbuf *status)
+{
+ struct strbuf **pair;
+ char *line;
+ int len;
+
+ for (;;) {
+ len = packet_read_line_gently(fd, NULL, &line);
+ if ((len < 0) || !line)
+ break;
+ pair = strbuf_split_str(line, '=', 2);
+ if (pair[0] && pair[0]->len && pair[1]) {
+ /* the last "status=<foo>" line wins */
+ if (!strcmp(pair[0]->buf, "status=")) {
+ strbuf_reset(status);
+ strbuf_addbuf(status, pair[1]);
+ }
+ }
+ strbuf_list_free(pair);
+ }
+
+ return (len < 0) ? len : 0;
+}
+
+void subprocess_stop(struct hashmap *hashmap, struct subprocess_entry *entry)
+{
+ if (!entry)
+ return;
+
+ entry->process.clean_on_exit = 0;
+ kill(entry->process.pid, SIGTERM);
+ finish_command(&entry->process);
+
+ hashmap_remove(hashmap, entry, NULL);
+}
+
+static void subprocess_exit_handler(struct child_process *process)
+{
+ sigchain_push(SIGPIPE, SIG_IGN);
+ /* Closing the pipe signals the subprocess to initiate a shutdown. */
+ close(process->in);
+ close(process->out);
+ sigchain_pop(SIGPIPE);
+	/* finish_command() will wait until the shutdown is complete. */
+ finish_command(process);
+}
+
+int subprocess_start(struct hashmap *hashmap, struct subprocess_entry *entry, const char *cmd,
+ subprocess_start_fn startfn)
+{
+ int err;
+ struct child_process *process;
+ const char *argv[] = { cmd, NULL };
+
+ entry->cmd = cmd;
+ process = &entry->process;
+
+ child_process_init(process);
+ process->argv = argv;
+ process->use_shell = 1;
+ process->in = -1;
+ process->out = -1;
+ process->clean_on_exit = 1;
+ process->clean_on_exit_handler = subprocess_exit_handler;
+
+ err = start_command(process);
+ if (err) {
+ error("cannot fork to run subprocess '%s'", cmd);
+ return err;
+ }
+
+ hashmap_entry_init(entry, strhash(cmd));
+
+ err = startfn(entry);
+ if (err) {
+ error("initialization for subprocess '%s' failed", cmd);
+ subprocess_stop(hashmap, entry);
+ return err;
+ }
+
+ hashmap_add(hashmap, entry);
+ return 0;
+}
--- /dev/null
+#ifndef SUBPROCESS_H
+#define SUBPROCESS_H
+
+#include "git-compat-util.h"
+#include "hashmap.h"
+#include "run-command.h"
+
+/*
+ * Generic implementation of background process infrastructure.
+ * See Documentation/technical/api-background-process.txt.
+ */
+
+/* data structures */
+
+struct subprocess_entry {
+ struct hashmap_entry ent; /* must be the first member! */
+ const char *cmd;
+ struct child_process process;
+};
+
+/* subprocess functions */
+
+int cmd2process_cmp(const struct subprocess_entry *e1,
+ const struct subprocess_entry *e2, const void *unused);
+
+typedef int(*subprocess_start_fn)(struct subprocess_entry *entry);
+int subprocess_start(struct hashmap *hashmap, struct subprocess_entry *entry, const char *cmd,
+ subprocess_start_fn startfn);
+
+void subprocess_stop(struct hashmap *hashmap, struct subprocess_entry *entry);
+
+struct subprocess_entry *subprocess_find_entry(struct hashmap *hashmap, const char *cmd);
+
+/* subprocess helper functions */
+
+static inline struct child_process *subprocess_get_child_process(
+ struct subprocess_entry *entry)
+{
+ return &entry->process;
+}
+
+/*
+ * Helper function that will read packets looking for "status=<foo>"
+ * key/value pairs and return the value from the last "status" packet
+ */
+
+int subprocess_read_status(int fd, struct strbuf *status);
+
+#endif
return ret;
}
+/*
+ * Dies if the provided 'prefix' corresponds to an unpopulated submodule
+ */
+void die_in_unpopulated_submodule(const struct index_state *istate,
+ const char *prefix)
+{
+ int i, prefixlen;
+
+ if (!prefix)
+ return;
+
+ prefixlen = strlen(prefix);
+
+ for (i = 0; i < istate->cache_nr; i++) {
+ struct cache_entry *ce = istate->cache[i];
+ int ce_len = ce_namelen(ce);
+
+ if (!S_ISGITLINK(ce->ce_mode))
+ continue;
+ if (prefixlen <= ce_len)
+ continue;
+ if (strncmp(ce->name, prefix, ce_len))
+ continue;
+ if (prefix[ce_len] != '/')
+ continue;
+
+ die(_("in unpopulated submodule '%s'"), ce->name);
+ }
+}
+
+/*
+ * Dies if any path in the provided pathspec descends into a submodule
+ */
+void die_path_inside_submodule(const struct index_state *istate,
+ const struct pathspec *ps)
+{
+ int i, j;
+
+ for (i = 0; i < istate->cache_nr; i++) {
+ struct cache_entry *ce = istate->cache[i];
+ int ce_len = ce_namelen(ce);
+
+ if (!S_ISGITLINK(ce->ce_mode))
+ continue;
+
+ for (j = 0; j < ps->nr ; j++) {
+ const struct pathspec_item *item = &ps->items[j];
+
+ if (item->len <= ce_len)
+ continue;
+ if (item->match[ce_len] != '/')
+ continue;
+ if (strncmp(ce->name, item->match, ce_len))
+ continue;
+ if (item->len == ce_len + 1)
+ continue;
+
+ die(_("Pathspec '%s' is in submodule '%.*s'"),
+ item->original, ce_len, ce->name);
+ }
+ }
+}
+
int parse_submodule_update_strategy(const char *value,
struct submodule_update_strategy *dst)
{
cp.no_stdin = 1;
/* TODO: other options may need to be passed here. */
- argv_array_push(&cp.args, "diff");
+ argv_array_pushl(&cp.args, "diff", "--submodule=diff", NULL);
+
argv_array_pushf(&cp.args, "--line-prefix=%s", line_prefix);
if (DIFF_OPT_TST(o, REVERSE_DIFF)) {
argv_array_pushf(&cp.args, "--src-prefix=%s%s/",
{
struct child_process cp = CHILD_PROCESS_INIT;
- prepare_submodule_repo_env_no_git_dir(&cp.env_array);
+ prepare_submodule_repo_env(&cp.env_array);
cp.git_cmd = 1;
argv_array_pushl(&cp.args, "diff-index", "--quiet",
static void submodule_reset_index(const char *path)
{
struct child_process cp = CHILD_PROCESS_INIT;
- prepare_submodule_repo_env_no_git_dir(&cp.env_array);
+ prepare_submodule_repo_env(&cp.env_array);
cp.git_cmd = 1;
cp.no_stdin = 1;
int ret = 0;
struct child_process cp = CHILD_PROCESS_INIT;
const struct submodule *sub;
+ int *error_code_ptr, error_code;
+
+ if (!is_submodule_initialized(path))
+ return 0;
+
+ if (flags & SUBMODULE_MOVE_HEAD_FORCE)
+ /*
+		 * Pass a non-NULL pointer to is_submodule_populated_gently
+		 * to prevent it from die()-ing. We'll use connect_work_tree_and_git_dir
+		 * to fix up the submodule in the force case later.
+ */
+ error_code_ptr = &error_code;
+ else
+ error_code_ptr = NULL;
+
+ if (old && !is_submodule_populated_gently(path, error_code_ptr))
+ return 0;
sub = submodule_from_path(null_sha1, path);
absorb_git_dir_into_superproject("", path,
ABSORB_GITDIR_RECURSE_SUBMODULES);
} else {
- struct strbuf sb = STRBUF_INIT;
- strbuf_addf(&sb, "%s/modules/%s",
+ char *gitdir = xstrfmt("%s/modules/%s",
get_git_common_dir(), sub->name);
- connect_work_tree_and_git_dir(path, sb.buf);
- strbuf_release(&sb);
+ connect_work_tree_and_git_dir(path, gitdir);
+ free(gitdir);
/* make sure the index is clean as well */
submodule_reset_index(path);
}
+
+ if (old && (flags & SUBMODULE_MOVE_HEAD_FORCE)) {
+ char *gitdir = xstrfmt("%s/modules/%s",
+ get_git_common_dir(), sub->name);
+ connect_work_tree_and_git_dir(path, gitdir);
+ free(gitdir);
+ }
}
- prepare_submodule_repo_env_no_git_dir(&cp.env_array);
+ prepare_submodule_repo_env(&cp.env_array);
cp.git_cmd = 1;
cp.no_stdin = 1;
argv_array_pushf(&cp.args, "--super-prefix=%s%s/",
get_super_prefix_or_empty(), path);
- argv_array_pushl(&cp.args, "read-tree", NULL);
+ argv_array_pushl(&cp.args, "read-tree", "--recurse-submodules", NULL);
if (flags & SUBMODULE_MOVE_HEAD_DRY_RUN)
argv_array_push(&cp.args, "-n");
if (!(flags & SUBMODULE_MOVE_HEAD_DRY_RUN)) {
if (new) {
- struct child_process cp1 = CHILD_PROCESS_INIT;
+ child_process_init(&cp);
/* also set the HEAD accordingly */
- cp1.git_cmd = 1;
- cp1.no_stdin = 1;
- cp1.dir = path;
+ cp.git_cmd = 1;
+ cp.no_stdin = 1;
+ cp.dir = path;
- argv_array_pushl(&cp1.args, "update-ref", "HEAD", new, NULL);
+ prepare_submodule_repo_env(&cp.env_array);
+ argv_array_pushl(&cp.args, "update-ref", "HEAD", new, NULL);
- if (run_command(&cp1)) {
+ if (run_command(&cp)) {
ret = -1;
goto out;
}
* Otherwise the return error code is the same as of resolve_gitdir_gently.
*/
extern int is_submodule_populated_gently(const char *path, int *return_error_code);
+extern void die_in_unpopulated_submodule(const struct index_state *istate,
+ const char *prefix);
+extern void die_path_inside_submodule(const struct index_state *istate,
+ const struct pathspec *ps);
extern int parse_submodule_update_strategy(const char *value,
struct submodule_update_strategy *dst);
extern const char *submodule_strategy_to_string(const struct submodule_update_strategy *s);
t[0-9][0-9][0-9][0-9]/* -whitespace
-t0110/url-* binary
+/diff-lib/* eol=lf
+/t0110/url-* binary
+/t3900/*.txt eol=lf
+/t3901/*.txt eol=lf
+/t4034/*/* eol=lf
+/t4013/* eol=lf
+/t4018/* eol=lf
+/t4051/* eol=lf
+/t4100/* eol=lf
+/t4101/* eol=lf
+/t4109/* eol=lf
+/t4110/* eol=lf
+/t4135/* eol=lf
+/t4211/* eol=lf
+/t4252/* eol=lf
+/t5100/* eol=lf
+/t5515/* eol=lf
+/t556x_common eol=lf
+/t7500/* eol=lf
+/t8005/*.txt eol=lf
+/t9*/*.dump eol=lf
Test is not run by root user, and an attempt to write to an
unwritable file is expected to fail correctly.
- - LIBPCRE
+ - PCRE
- Git was compiled with USE_LIBPCRE=YesPlease. Wrap any tests
+ Git was compiled with support for PCRE. Wrap any tests
that use git-grep --perl-regexp or git-grep -P in these.
- CASE_INSENSITIVE_FS
Test is run on a filesystem which converts decomposed utf-8 (nfd)
to precomposed utf-8 (nfc).
+ - PTHREADS
+
+ Git wasn't compiled with NO_PTHREADS=YesPlease.
+
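+ For example (an illustrative snippet, not part of any existing
+ test), a check that needs both PCRE and thread support could be
+ guarded like this:
+
+	# skipped automatically when PCRE or PTHREADS is not available
+	test_expect_success PCRE,PTHREADS 'grep -P runs threaded' '
+		echo content >file &&
+		git add file &&
+		git grep --threads=2 --perl-regexp "con.ent" file
+	'
+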
Tips for Writing Tests
----------------------
git checkout -b "add_sub1" &&
git submodule add ../submodule_update_sub1 sub1 &&
+ git submodule add ../submodule_update_sub1 uninitialized_sub &&
git config -f .gitmodules submodule.sub1.ignore all &&
git config submodule.sub1.ignore all &&
git add .gitmodules &&
then
RESULTDS=failure
fi
- RESULTR=success
- if test "$KNOWN_FAILURE_SUBMODULE_RECURSIVE_NESTED" = 1
- then
- RESULTR=failure
- fi
RESULTOI=success
if test "$KNOWN_FAILURE_SUBMODULE_OVERWRITE_IGNORED_UNTRACKED" = 1
then
'
# recursing deeper than one level doesn't work yet.
- test_expect_$RESULTR "$command: modified submodule updates submodule recursively" '
+ test_expect_success "$command: modified submodule updates submodule recursively" '
prolog &&
reset_work_tree_to_interested add_nested_sub &&
(
)
'
# Updating a submodule from an invalid sha1 updates
- test_expect_success "$command: modified submodule does not update submodule work tree from invalid commit" '
+ test_expect_success "$command: modified submodule does update submodule work tree from invalid commit" '
prolog &&
reset_work_tree_to_interested invalid_sub1 &&
(
cd submodule_update &&
git branch -t valid_sub1 origin/valid_sub1 &&
- test_must_fail $command valid_sub1 &&
- test_superproject_content origin/invalid_sub1
+ $command valid_sub1 &&
+ test_superproject_content origin/valid_sub1 &&
+ test_submodule_content sub1 origin/valid_sub1
+ )
+ '
+
+	# Old versions of Git were buggy when writing the .git link file
+	# (e.g. before f8eaa0ba98b); moving a superproject repo whose
+	# submodules recorded absolute paths would then break those links.
+ test_expect_success "$command: updating submodules fixes .git links" '
+ prolog &&
+ reset_work_tree_to_interested add_sub1 &&
+ (
+ cd submodule_update &&
+ git branch -t modify_sub1 origin/modify_sub1 &&
+ echo "gitdir: bogus/path" >sub1/.git &&
+ $command modify_sub1 &&
+ test_superproject_content origin/modify_sub1 &&
+ test_submodule_content sub1 origin/modify_sub1
)
'
}
GIT_PERF_MAKE_OPTS
Options to use when automatically building a git tree for
- performance testing. E.g., -j6 would be useful.
+ performance testing. E.g., -j6 would be useful. Passed
+ directly to make as "make $GIT_PERF_MAKE_OPTS".
+
+ GIT_PERF_MAKE_COMMAND
+	An arbitrary command that will be run in place of the make
+	command. If set, the GIT_PERF_MAKE_OPTS variable is
+	ignored. Useful in cases where source tree changes might
+	require issuing a different make command to different
+	revisions.
+
+	This can be (ab)used to monkeypatch or otherwise change the
+	tree about to be built. Note that the build directory can be
+	re-used for subsequent runs, so the make command might get
+	executed multiple times on the same tree. Don't count on any
+	of that, though; it is an implementation detail that might
+	change in the future.
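+
+	For example, the two knobs above could be driven like this
+	(hypothetical revisions and perf script; adjust to your setup):
+
+	    # build each tested revision with "make -j8"
+	    GIT_PERF_MAKE_OPTS='-j8' ./run v2.12.0 v2.13.0 -- p0001-rev-list.sh
+
+	    # or take over the build step entirely; GIT_PERF_MAKE_OPTS
+	    # is ignored when GIT_PERF_MAKE_COMMAND is set
+	    GIT_PERF_MAKE_COMMAND='make -j8 DEVELOPER=1' ./run v2.12.0 v2.13.0 -- p0001-rev-list.sh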
GIT_PERF_REPO
GIT_PERF_LARGE_REPO
After that you will want to use some of the following:
+ test_perf_fresh_repo # sets up an empty repository
test_perf_default_repo # sets up a "normal" repository
test_perf_large_repo # sets up a "large" repository
test_checkout_worktree
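+
+ As an example of how these pieces fit together, a minimal perf
+ script (the test name and timed command are only illustrative)
+ might look like this:
+
+	#!/bin/sh
+	test_description='time git status in an empty repository'
+	. ./perf-lib.sh
+
+	# start from an empty repository in the trash directory
+	test_perf_fresh_repo
+
+	test_perf 'status in empty repo' '
+		git status >/dev/null
+	'
+
+	test_done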
test_expect_success 'verify both methods build the same hashmaps' '
- $GIT_BUILD_DIR/t/helper/test-lazy-init-name-hash$X --dump --single | sort >out.single &&
- $GIT_BUILD_DIR/t/helper/test-lazy-init-name-hash$X --dump --multi | sort >out.multi &&
- test_cmp out.single out.multi
+ test-lazy-init-name-hash --dump --single >out.single &&
+ if test-lazy-init-name-hash --dump --multi >out.multi
+ then
+ test_set_prereq REPO_BIG_ENOUGH_FOR_MULTI &&
+ sort <out.single >sorted.single &&
+ sort <out.multi >sorted.multi &&
+ test_cmp sorted.single sorted.multi
+ fi
'
-test_expect_success 'multithreaded should be faster' '
- $GIT_BUILD_DIR/t/helper/test-lazy-init-name-hash$X --perf >out.perf
+test_expect_success 'calibrate' '
+ entries=$(wc -l <out.single) &&
+
+ case $entries in
+ ?) count=1000000 ;;
+ ??) count=100000 ;;
+ ???) count=10000 ;;
+ ????) count=1000 ;;
+ ?????) count=100 ;;
+ ??????) count=10 ;;
+ *) count=1 ;;
+ esac &&
+ export count &&
+
+ case $entries in
+ 1) entries_desc="1 entry" ;;
+ *) entries_desc="$entries entries" ;;
+ esac &&
+
+ case $count in
+ 1) count_desc="1 round" ;;
+ *) count_desc="$count rounds" ;;
+ esac &&
+
+ desc="$entries_desc, $count_desc" &&
+ export desc
'
+test_perf "single-threaded, $desc" "
+ test-lazy-init-name-hash --single --count=$count
+"
+
+test_perf REPO_BIG_ENOUGH_FOR_MULTI "multi-threaded, $desc" "
+ test-lazy-init-name-hash --multi --count=$count
+"
+
test_done
--- /dev/null
+#!/bin/sh
+
+test_description="Tests pathological globbing performance
+
+Shows how Git's globbing performs when given the sort of
+pathological patterns described at https://research.swtch.com/glob
+"
+
+. ./perf-lib.sh
+
+test_globs_big='10 25 50 75 100'
+test_globs_small='1 2 3 4 5 6'
+
+test_perf_fresh_repo
+
+test_expect_success 'setup' '
+ for i in $(test_seq 1 100)
+ do
+ printf "a" >>refname &&
+ for j in $(test_seq 1 $i)
+ do
+ printf "a*" >>refglob.$i
+ done &&
+ echo b >>refglob.$i
+ done &&
+ test_commit test $(cat refname).t "" $(cat refname).t
+'
+
+for i in $test_globs_small
+do
+ test_perf "refglob((a*)^nb) against tag (a^100).t; n = $i" '
+ git for-each-ref "refs/tags/$(cat refglob.'$i')b"
+ '
+done
+
+for i in $test_globs_small
+do
+ test_perf "fileglob((a*)^nb) against file (a^100).t; n = $i" '
+ git ls-files "$(cat refglob.'$i')b"
+ '
+done
+
+test_done
test_perf_default_repo
-test_expect_success 'setup' '
+test_expect_success 'setup rebasing on top of a lot of changes' '
git checkout -f -b base &&
git checkout -b to-rebase &&
git checkout -b upstream &&
git rebase --onto base HEAD^
'
+test_expect_success 'setup rebasing many changes without split-index' '
+ git config core.splitIndex false &&
+ git checkout -b upstream2 to-rebase &&
+ git checkout -b to-rebase2 upstream
+'
+
+test_perf 'rebase a lot of unrelated changes without split-index' '
+ git rebase --onto upstream2 base &&
+ git rebase --onto base upstream2
+'
+
+test_expect_success 'setup rebasing many changes with split-index' '
+ git config core.splitIndex true
+'
+
+test_perf 'rebase a lot of unrelated changes with split-index' '
+ git rebase --onto upstream2 base &&
+ git rebase --onto base upstream2
+'
+
test_done
--- /dev/null
+#!/bin/sh
+
+test_description="Comparison of git-log's --grep regex engines
+
+Set GIT_PERF_4220_LOG_OPTS in the environment to pass options to
+git-grep. Make sure to include a leading space,
+e.g. GIT_PERF_4220_LOG_OPTS=' -i'. Some options to try:
+
+ -i
+ --invert-grep
+ -i --invert-grep
+"
+
+. ./perf-lib.sh
+
+test_perf_large_repo
+test_checkout_worktree
+
+for pattern in \
+ 'how.to' \
+ '^how to' \
+ '[how] to' \
+ '\(e.t[^ ]*\|v.ry\) rare' \
+ 'm\(ú\|u\)lt.b\(æ\|y\)te'
+do
+ for engine in basic extended perl
+ do
+ if test $engine != "basic"
+ then
+ # Poor man's basic -> extended converter.
+ pattern=$(echo $pattern | sed 's/\\//g')
+ fi
+ if test $engine = "perl" && ! test_have_prereq PCRE
+ then
+ prereq="PCRE"
+ else
+ prereq=""
+ fi
+ test_perf $prereq "$engine log$GIT_PERF_4220_LOG_OPTS --grep='$pattern'" "
+ git -c grep.patternType=$engine log --pretty=format:%h$GIT_PERF_4220_LOG_OPTS --grep='$pattern' >'out.$engine' || :
+ "
+ done
+
+ test_expect_success "assert that all engines found the same for$GIT_PERF_4220_LOG_OPTS '$pattern'" '
+ test_cmp out.basic out.extended &&
+ if test_have_prereq PCRE
+ then
+ test_cmp out.basic out.perl
+ fi
+ '
+done
+
+test_done
--- /dev/null
+#!/bin/sh
+
+test_description="Comparison of git-log's --grep regex engines with -F
+
+Set GIT_PERF_4221_LOG_OPTS in the environment to pass options to
+git-grep. Make sure to include a leading space,
+e.g. GIT_PERF_4221_LOG_OPTS=' -i'. Some options to try:
+
+ -i
+ --invert-grep
+ -i --invert-grep
+"
+
+. ./perf-lib.sh
+
+test_perf_large_repo
+test_checkout_worktree
+
+for pattern in 'int' 'uncommon' 'æ'
+do
+ for engine in fixed basic extended perl
+ do
+ if test $engine = "perl" && ! test_have_prereq PCRE
+ then
+ prereq="PCRE"
+ else
+ prereq=""
+ fi
+ test_perf $prereq "$engine log$GIT_PERF_4221_LOG_OPTS --grep='$pattern'" "
+ git -c grep.patternType=$engine log --pretty=format:%h$GIT_PERF_4221_LOG_OPTS --grep='$pattern' >'out.$engine' || :
+ "
+ done
+
+ test_expect_success "assert that all engines found the same for$GIT_PERF_4221_LOG_OPTS '$pattern'" '
+ test_cmp out.fixed out.basic &&
+ test_cmp out.fixed out.extended &&
+ if test_have_prereq PCRE
+ then
+ test_cmp out.fixed out.perl
+ fi
+ '
+done
+
+test_done
--- /dev/null
+#!/bin/sh
+
+test_description="Comparison of git-grep's regex engines
+
+Set GIT_PERF_7820_GREP_OPTS in the environment to pass options to
+git-grep. Make sure to include a leading space,
+e.g. GIT_PERF_7820_GREP_OPTS=' -i'. Some options to try:
+
+ -i
+ -w
+ -v
+ -vi
+ -vw
+ -viw
+"
+
+. ./perf-lib.sh
+
+test_perf_large_repo
+test_checkout_worktree
+
+for pattern in \
+ 'how.to' \
+ '^how to' \
+ '[how] to' \
+ '\(e.t[^ ]*\|v.ry\) rare' \
+ 'm\(ú\|u\)lt.b\(æ\|y\)te'
+do
+ for engine in basic extended perl
+ do
+ if test $engine != "basic"
+ then
+ # Poor man's basic -> extended converter.
+ pattern=$(echo "$pattern" | sed 's/\\//g')
+ fi
+ if test $engine = "perl" && ! test_have_prereq PCRE
+ then
+ prereq="PCRE"
+ else
+ prereq=""
+ fi
+ test_perf $prereq "$engine grep$GIT_PERF_7820_GREP_OPTS '$pattern'" "
+ git -c grep.patternType=$engine grep$GIT_PERF_7820_GREP_OPTS -- '$pattern' >'out.$engine' || :
+ "
+ done
+
+ test_expect_success "assert that all engines found the same for$GIT_PERF_7820_GREP_OPTS '$pattern'" '
+ test_cmp out.basic out.extended &&
+ if test_have_prereq PCRE
+ then
+ test_cmp out.basic out.perl
+ fi
+ '
+done
+
+test_done
--- /dev/null
+#!/bin/sh
+
+test_description="Comparison of git-grep's regex engines with -F
+
+Set GIT_PERF_7821_GREP_OPTS in the environment to pass options to
+git-grep. Make sure to include a leading space,
+e.g. GIT_PERF_7821_GREP_OPTS=' -w'. See p7820-grep-engines.sh for more
+options to try.
+"
+
+. ./perf-lib.sh
+
+test_perf_large_repo
+test_checkout_worktree
+
+for pattern in 'int' 'uncommon' 'æ'
+do
+ for engine in fixed basic extended perl
+ do
+ if test $engine = "perl" && ! test_have_prereq PCRE
+ then
+ prereq="PCRE"
+ else
+ prereq=""
+ fi
+ test_perf $prereq "$engine grep$GIT_PERF_7821_GREP_OPTS $pattern" "
+ git -c grep.patternType=$engine grep$GIT_PERF_7821_GREP_OPTS $pattern >'out.$engine' || :
+ "
+ done
+
+ test_expect_success "assert that all engines found the same for$GIT_PERF_7821_GREP_OPTS $pattern" '
+ test_cmp out.fixed out.basic &&
+ test_cmp out.fixed out.extended &&
+ if test_have_prereq PCRE
+ then
+ test_cmp out.fixed out.perl
+ fi
+ '
+done
+
+test_done
GIT_PERF_LARGE_REPO=$TEST_DIRECTORY/..
fi
+test_perf_do_repo_symlink_config_ () {
+ test_have_prereq SYMLINKS || git config core.symlinks false
+}
+
test_perf_create_repo_from () {
test "$#" = 2 ||
error "bug in the test script: not 2 parameters to test-create-repo"
) &&
(
cd "$repo" &&
- "$MODERN_GIT" init -q && {
- test_have_prereq SYMLINKS ||
- git config core.symlinks false
- } &&
+ "$MODERN_GIT" init -q &&
+ test_perf_do_repo_symlink_config_ &&
mv .git/hooks .git/hooks-disabled 2>/dev/null
) || error "failed to copy repository '$source' to '$repo'"
}
# call at least one of these to establish an appropriately-sized repository
+test_perf_fresh_repo () {
+ repo="${1:-$TRASH_DIRECTORY}"
+ "$MODERN_GIT" init -q "$repo" &&
+ (
+ cd "$repo" &&
+ test_perf_do_repo_symlink_config_
+ )
+}
+
test_perf_default_repo () {
test_perf_create_repo_from "${1:-$TRASH_DIRECTORY}" "$GIT_PERF_REPO"
}
unpack_git_rev () {
rev=$1
+ echo "=== Unpacking $rev in build/$rev ==="
mkdir -p build/$rev
(cd "$(git rev-parse --show-cdup)" && git archive --format=tar $rev) |
(cd build/$rev && tar x)
cp "../../$config" "build/$rev/"
fi
done
- (cd build/$rev && make $GIT_PERF_MAKE_OPTS) ||
- die "failed to build revision '$mydir'"
+ echo "=== Building $rev ==="
+ (
+ cd build/$rev &&
+ if test -n "$GIT_PERF_MAKE_COMMAND"
+ then
+ sh -c "$GIT_PERF_MAKE_COMMAND"
+ else
+ make $GIT_PERF_MAKE_OPTS
+ fi
+ ) || die "failed to build revision '$mydir'"
}
run_dirs_helper () {
test_cmp empty err
'
+test_expect_success !MINGW 'run_command can run a script without a #! line' '
+ cat >hello <<-\EOF &&
+ cat hello-script
+ EOF
+ chmod +x hello &&
+ test-run-command run-command ./hello >actual 2>err &&
+
+ test_cmp hello-script actual &&
+ test_cmp empty err
+'
+
+test_expect_success 'run_command does not try to execute a directory' '
+ test_when_finished "rm -rf bin1 bin2" &&
+ mkdir -p bin1/greet bin2 &&
+ write_script bin2/greet <<-\EOF &&
+ cat bin2/greet
+ EOF
+
+ PATH=$PWD/bin1:$PWD/bin2:$PATH \
+ test-run-command run-command greet >actual 2>err &&
+ test_cmp bin2/greet actual &&
+ test_cmp empty err
+'
+
+test_expect_success POSIXPERM 'run_command passes over non-executable file' '
+ test_when_finished "rm -rf bin1 bin2" &&
+ mkdir -p bin1 bin2 &&
+ write_script bin1/greet <<-\EOF &&
+ cat bin1/greet
+ EOF
+ chmod -x bin1/greet &&
+ write_script bin2/greet <<-\EOF &&
+ cat bin2/greet
+ EOF
+
+ PATH=$PWD/bin1:$PWD/bin2:$PATH \
+ test-run-command run-command greet >actual 2>err &&
+ test_cmp bin2/greet actual &&
+ test_cmp empty err
+'
+
test_expect_success POSIXPERM 'run_command reports EACCES' '
cat hello-script >hello.sh &&
chmod -x hello.sh &&
. ./lib-gettext.sh
test_expect_success 'git show a ISO-8859-1 commit under C locale' '
- . "$TEST_DIRECTORY"/t3901-8859-1.txt &&
+ . "$TEST_DIRECTORY"/t3901/8859-1.txt &&
test_commit "iso-c-commit" iso-under-c &&
git show >out 2>err &&
! test -s err &&
'
test_expect_success GETTEXT_LOCALE 'git show a ISO-8859-1 commit under a UTF-8 locale' '
- . "$TEST_DIRECTORY"/t3901-8859-1.txt &&
+ . "$TEST_DIRECTORY"/t3901/8859-1.txt &&
test_commit "iso-utf8-commit" iso-under-utf8 &&
LANGUAGE=is LC_ALL="$is_IS_locale" git show >out 2>err &&
! test -s err &&
. ./test-lib.sh
. "$TEST_DIRECTORY"/lib-submodule-update.sh
-KNOWN_FAILURE_SUBMODULE_RECURSIVE_NESTED=1
KNOWN_FAILURE_DIRECTORY_SUBMODULE_CONFLICTS=1
KNOWN_FAILURE_SUBMODULE_OVERWRITE_IGNORED_UNTRACKED=1
test_cmp expect output
'
+test_expect_success '--local requires a repo' '
+ # we expect 128 to ensure that we do not simply
+ # fail to find anything and return code "1"
+ test_expect_code 128 nongit git config --local foo.bar
+'
+
test_done
)
'
+test_expect_success SYMLINKS 'conditional include, gitdir matching symlink' '
+ ln -s foo bar &&
+ (
+ cd bar &&
+ echo "[includeIf \"gitdir:bar/\"]path=bar7" >>.git/config &&
+ echo "[test]seven=7" >.git/bar7 &&
+ echo 7 >expect &&
+ git config test.seven >actual &&
+ test_cmp expect actual
+ )
+'
+
+test_expect_success SYMLINKS 'conditional include, gitdir matching symlink, icase' '
+ (
+ cd bar &&
+ echo "[includeIf \"gitdir/i:BAR/\"]path=bar8" >>.git/config &&
+ echo "[test]eight=8" >.git/bar8 &&
+ echo 8 >expect &&
+ git config test.eight >actual &&
+ test_cmp expect actual
+ )
+'
+
test_expect_success 'include cycles are detected' '
cat >.gitconfig <<-\EOF &&
[test]value = gitconfig
'
KNOWN_FAILURE_DIRECTORY_SUBMODULE_CONFLICTS=1
-KNOWN_FAILURE_SUBMODULE_RECURSIVE_NESTED=1
test_submodule_switch_recursing "git checkout --recurse-submodules"
test_submodule_forced_switch_recursing "git checkout -f --recurse-submodules"
match 1 x 'f' '[[:xdigit:]]'
match 1 x 'D' '[[:xdigit:]]'
match 1 x '_' '[[:alnum:][:alpha:][:blank:][:cntrl:][:digit:][:graph:][:lower:][:print:][:punct:][:space:][:upper:][:xdigit:]]'
-match 1 x '_' '[[:alnum:][:alpha:][:blank:][:cntrl:][:digit:][:graph:][:lower:][:print:][:punct:][:space:][:upper:][:xdigit:]]'
match 1 x '.' '[^[:alnum:][:alpha:][:blank:][:cntrl:][:digit:][:lower:][:space:][:upper:][:xdigit:]]'
match 1 x '5' '[a-c[:digit:]x-z]'
match 1 x 'b' '[a-c[:digit:]x-z]'
test_expect_success 'config information was renamed, too' '
test $(git config branch.s.dummy) = Hello &&
- test_must_fail git config branch.s/s/dummy
+ test_must_fail git config branch.s/s.dummy
'
test_expect_success 'deleting a symref' '
+++ /dev/null
-: to be sourced in t3901 -- this is latin-1
-GIT_AUTHOR_NAME="Áéí óú" &&
-GIT_COMMITTER_NAME=$GIT_AUTHOR_NAME &&
-export GIT_AUTHOR_NAME GIT_COMMITTER_NAME
# use UTF-8 in author and committer name to match the
# i18n.commitencoding settings
- . "$TEST_DIRECTORY"/t3901-utf8.txt &&
+ . "$TEST_DIRECTORY"/t3901/utf8.txt &&
test_tick &&
echo "$GIT_AUTHOR_NAME" >mine &&
# the second one on the side branch is ISO-8859-1
git config i18n.commitencoding ISO8859-1 &&
# use author and committer name in ISO-8859-1 to match it.
- . "$TEST_DIRECTORY"/t3901-8859-1.txt
+ . "$TEST_DIRECTORY"/t3901/8859-1.txt
fi &&
test_tick &&
echo Yet another >theirs &&
# The result will be committed by GIT_COMMITTER_NAME --
# we want UTF-8 encoded name.
- . "$TEST_DIRECTORY"/t3901-utf8.txt &&
+ . "$TEST_DIRECTORY"/t3901/utf8.txt &&
git checkout -b test &&
git rebase master &&
test_expect_success 'rebase (U/L)' '
git config i18n.commitencoding UTF-8 &&
git config i18n.logoutputencoding ISO8859-1 &&
- . "$TEST_DIRECTORY"/t3901-utf8.txt &&
+ . "$TEST_DIRECTORY"/t3901/utf8.txt &&
git reset --hard side &&
git rebase master &&
# In this test we want ISO-8859-1 encoded commits as the result
git config i18n.commitencoding ISO8859-1 &&
git config i18n.logoutputencoding ISO8859-1 &&
- . "$TEST_DIRECTORY"/t3901-8859-1.txt &&
+ . "$TEST_DIRECTORY"/t3901/8859-1.txt &&
git reset --hard side &&
git rebase master &&
# to get ISO-8859-1 results.
git config i18n.commitencoding ISO8859-1 &&
git config i18n.logoutputencoding UTF-8 &&
- . "$TEST_DIRECTORY"/t3901-8859-1.txt &&
+ . "$TEST_DIRECTORY"/t3901/8859-1.txt &&
git reset --hard side &&
git rebase master &&
git config i18n.commitencoding UTF-8 &&
git config i18n.logoutputencoding UTF-8 &&
- . "$TEST_DIRECTORY"/t3901-utf8.txt &&
+ . "$TEST_DIRECTORY"/t3901/utf8.txt &&
git reset --hard master &&
git cherry-pick side^ &&
git config i18n.commitencoding ISO8859-1 &&
git config i18n.logoutputencoding ISO8859-1 &&
- . "$TEST_DIRECTORY"/t3901-8859-1.txt &&
+ . "$TEST_DIRECTORY"/t3901/8859-1.txt &&
git reset --hard master &&
git cherry-pick side^ &&
git config i18n.commitencoding UTF-8 &&
git config i18n.logoutputencoding ISO8859-1 &&
- . "$TEST_DIRECTORY"/t3901-utf8.txt &&
+ . "$TEST_DIRECTORY"/t3901/utf8.txt &&
git reset --hard master &&
git cherry-pick side^ &&
git config i18n.commitencoding ISO8859-1 &&
git config i18n.logoutputencoding UTF-8 &&
- . "$TEST_DIRECTORY"/t3901-8859-1.txt &&
+ . "$TEST_DIRECTORY"/t3901/8859-1.txt &&
git reset --hard master &&
git cherry-pick side^ &&
test_expect_success 'rebase --merge (U/U)' '
git config i18n.commitencoding UTF-8 &&
git config i18n.logoutputencoding UTF-8 &&
- . "$TEST_DIRECTORY"/t3901-utf8.txt &&
+ . "$TEST_DIRECTORY"/t3901/utf8.txt &&
git reset --hard side &&
git rebase --merge master &&
test_expect_success 'rebase --merge (U/L)' '
git config i18n.commitencoding UTF-8 &&
git config i18n.logoutputencoding ISO8859-1 &&
- . "$TEST_DIRECTORY"/t3901-utf8.txt &&
+ . "$TEST_DIRECTORY"/t3901/utf8.txt &&
git reset --hard side &&
git rebase --merge master &&
# In this test we want ISO-8859-1 encoded commits as the result
git config i18n.commitencoding ISO8859-1 &&
git config i18n.logoutputencoding ISO8859-1 &&
- . "$TEST_DIRECTORY"/t3901-8859-1.txt &&
+ . "$TEST_DIRECTORY"/t3901/8859-1.txt &&
git reset --hard side &&
git rebase --merge master &&
# to get ISO-8859-1 results.
git config i18n.commitencoding ISO8859-1 &&
git config i18n.logoutputencoding UTF-8 &&
- . "$TEST_DIRECTORY"/t3901-8859-1.txt &&
+ . "$TEST_DIRECTORY"/t3901/8859-1.txt &&
git reset --hard side &&
git rebase --merge master &&
test_expect_success 'am (U/U)' '
# Apply UTF-8 patches with UTF-8 commitencoding
git config i18n.commitencoding UTF-8 &&
- . "$TEST_DIRECTORY"/t3901-utf8.txt &&
+ . "$TEST_DIRECTORY"/t3901/utf8.txt &&
git reset --hard master &&
git am out-u1 out-u2 &&
test_expect_success !MINGW 'am (L/L)' '
# Apply ISO-8859-1 patches with ISO-8859-1 commitencoding
git config i18n.commitencoding ISO8859-1 &&
- . "$TEST_DIRECTORY"/t3901-8859-1.txt &&
+ . "$TEST_DIRECTORY"/t3901/8859-1.txt &&
git reset --hard master &&
git am out-l1 out-l2 &&
test_expect_success 'am (U/L)' '
# Apply ISO-8859-1 patches with UTF-8 commitencoding
git config i18n.commitencoding UTF-8 &&
- . "$TEST_DIRECTORY"/t3901-utf8.txt &&
+ . "$TEST_DIRECTORY"/t3901/utf8.txt &&
git reset --hard master &&
# am specifies --utf8 by default.
test_expect_success 'am --no-utf8 (U/L)' '
# Apply ISO-8859-1 patches with UTF-8 commitencoding
git config i18n.commitencoding UTF-8 &&
- . "$TEST_DIRECTORY"/t3901-utf8.txt &&
+ . "$TEST_DIRECTORY"/t3901/utf8.txt &&
git reset --hard master &&
git am --no-utf8 out-l1 out-l2 2>err &&
test_expect_success !MINGW 'am (L/U)' '
# Apply UTF-8 patches with ISO-8859-1 commitencoding
git config i18n.commitencoding ISO8859-1 &&
- . "$TEST_DIRECTORY"/t3901-8859-1.txt &&
+ . "$TEST_DIRECTORY"/t3901/8859-1.txt &&
git reset --hard master &&
# mailinfo will re-code the commit message to the charset specified by
+++ /dev/null
-: to be sourced in t3901 -- this is utf8
-GIT_AUTHOR_NAME="Áéí óú" &&
-GIT_COMMITTER_NAME=$GIT_AUTHOR_NAME &&
-export GIT_AUTHOR_NAME GIT_COMMITTER_NAME
--- /dev/null
+: to be sourced in t3901 -- this is latin-1
+GIT_AUTHOR_NAME="Áéí óú" &&
+GIT_COMMITTER_NAME=$GIT_AUTHOR_NAME &&
+export GIT_AUTHOR_NAME GIT_COMMITTER_NAME
--- /dev/null
+: to be sourced in t3901 -- this is utf8
+GIT_AUTHOR_NAME="Áéí óú" &&
+GIT_COMMITTER_NAME=$GIT_AUTHOR_NAME &&
+export GIT_AUTHOR_NAME GIT_COMMITTER_NAME
# overlap function context of 1st change and -u context of 2nd change
grep -v "delete me from hello" <"$dir/hello.c" >file.c &&
- sed 2p <"$dir/dummy.c" >>file.c &&
+ sed "2a\\
+ extra line" <"$dir/dummy.c" >>file.c &&
commit_and_tag changed_hello_dummy file.c &&
git checkout initial &&
test_cmp expected actual
'
+test_expect_success 'diff --submodule=diff recurses into nested submodules' '
+ cat >expected <<-EOF &&
+ Submodule sm2 contains modified content
+ Submodule sm2 a5a65c9..280969a:
+ diff --git a/sm2/.gitmodules b/sm2/.gitmodules
+ new file mode 100644
+ index 0000000..3a816b8
+ --- /dev/null
+ +++ b/sm2/.gitmodules
+ @@ -0,0 +1,3 @@
+ +[submodule "nested"]
+ + path = nested
+ + url = ../sm2
+ Submodule nested 0000000...b55928c (new submodule)
+ diff --git a/sm2/nested/file b/sm2/nested/file
+ new file mode 100644
+ index 0000000..ca281f5
+ --- /dev/null
+ +++ b/sm2/nested/file
+ @@ -0,0 +1 @@
+ +nested content
+ diff --git a/sm2/nested/foo8 b/sm2/nested/foo8
+ new file mode 100644
+ index 0000000..db9916b
+ --- /dev/null
+ +++ b/sm2/nested/foo8
+ @@ -0,0 +1 @@
+ +foo8
+ diff --git a/sm2/nested/foo9 b/sm2/nested/foo9
+ new file mode 100644
+ index 0000000..9c3b4f6
+ --- /dev/null
+ +++ b/sm2/nested/foo9
+ @@ -0,0 +1 @@
+ +foo9
+ EOF
+ git diff --submodule=diff >actual 2>err &&
+ test_must_be_empty err &&
+ test_cmp expected actual
+'
+
test_done
EOF
'
+# --- diff tests ----------------------------------------------------------
+
test_expect_success 'diff: ugly spaces' '
- git diff old new -- spaces.txt >out &&
+ git diff --no-indent-heuristic old new -- spaces.txt >out &&
compare_diff spaces-expect out
'
+test_expect_success 'diff: --no-indent-heuristic overrides config' '
+ git -c diff.indentHeuristic=true diff --no-indent-heuristic old new -- spaces.txt >out2 &&
+ compare_diff spaces-expect out2
+'
+
test_expect_success 'diff: nice spaces with --indent-heuristic' '
- git diff --indent-heuristic old new -- spaces.txt >out-compacted &&
+ git -c diff.indentHeuristic=false diff --indent-heuristic old new -- spaces.txt >out-compacted &&
compare_diff spaces-compacted-expect out-compacted
'
-test_expect_success 'diff: nice spaces with diff.indentHeuristic' '
+test_expect_success 'diff: nice spaces with diff.indentHeuristic=true' '
git -c diff.indentHeuristic=true diff old new -- spaces.txt >out-compacted2 &&
compare_diff spaces-compacted-expect out-compacted2
'
-test_expect_success 'diff: --no-indent-heuristic overrides config' '
- git -c diff.indentHeuristic=true diff --no-indent-heuristic old new -- spaces.txt >out2 &&
- compare_diff spaces-expect out2
-'
-
test_expect_success 'diff: --indent-heuristic with --patience' '
git diff --indent-heuristic --patience old new -- spaces.txt >out-compacted3 &&
compare_diff spaces-compacted-expect out-compacted3
'
test_expect_success 'diff: ugly functions' '
- git diff old new -- functions.c >out &&
+ git diff --no-indent-heuristic old new -- functions.c >out &&
compare_diff functions-expect out
'
compare_diff functions-compacted-expect out-compacted
'
-test_expect_success 'blame: ugly spaces' '
- git blame old..new -- spaces.txt >out-blame &&
- compare_blame spaces-expect out-blame
-'
+# --- blame tests ---------------------------------------------------------
test_expect_success 'blame: nice spaces with --indent-heuristic' '
git blame --indent-heuristic old..new -- spaces.txt >out-blame-compacted &&
compare_blame spaces-compacted-expect out-blame-compacted
'
-test_expect_success 'blame: nice spaces with diff.indentHeuristic' '
+test_expect_success 'blame: nice spaces with diff.indentHeuristic=true' '
git -c diff.indentHeuristic=true blame old..new -- spaces.txt >out-blame-compacted2 &&
compare_blame spaces-compacted-expect out-blame-compacted2
'
+test_expect_success 'blame: ugly spaces with --no-indent-heuristic' '
+ git blame --no-indent-heuristic old..new -- spaces.txt >out-blame &&
+ compare_blame spaces-expect out-blame
+'
+
+test_expect_success 'blame: ugly spaces with diff.indentHeuristic=false' '
+ git -c diff.indentHeuristic=false blame old..new -- spaces.txt >out-blame2 &&
+ compare_blame spaces-expect out-blame2
+'
+
test_expect_success 'blame: --no-indent-heuristic overrides config' '
- git -c diff.indentHeuristic=true blame --no-indent-heuristic old..new -- spaces.txt >out-blame2 &&
+ git -c diff.indentHeuristic=true blame --no-indent-heuristic old..new -- spaces.txt >out-blame3 &&
git blame old..new -- spaces.txt >out-blame &&
- compare_blame spaces-expect out-blame2
+ compare_blame spaces-expect out-blame3
+'
+
+test_expect_success 'blame: --indent-heuristic overrides config' '
+ git -c diff.indentHeuristic=false blame --indent-heuristic old..new -- spaces.txt >out-blame-compacted3 &&
+	compare_blame spaces-compacted-expect out-blame-compacted3
+'
+
+# --- diff-tree tests -----------------------------------------------------
+
+test_expect_success 'diff-tree: nice spaces with --indent-heuristic' '
+ git diff-tree --indent-heuristic -p old new -- spaces.txt >out-diff-tree-compacted &&
+ compare_diff spaces-compacted-expect out-diff-tree-compacted
+'
+
+test_expect_success 'diff-tree: nice spaces with diff.indentHeuristic=true' '
+ git -c diff.indentHeuristic=true diff-tree -p old new -- spaces.txt >out-diff-tree-compacted2 &&
+ compare_diff spaces-compacted-expect out-diff-tree-compacted2
+'
+
+test_expect_success 'diff-tree: ugly spaces with --no-indent-heuristic' '
+ git diff-tree --no-indent-heuristic -p old new -- spaces.txt >out-diff-tree &&
+ compare_diff spaces-expect out-diff-tree
+'
+
+test_expect_success 'diff-tree: ugly spaces with diff.indentHeuristic=false' '
+ git -c diff.indentHeuristic=false diff-tree -p old new -- spaces.txt >out-diff-tree2 &&
+ compare_diff spaces-expect out-diff-tree2
+'
+
+test_expect_success 'diff-tree: --indent-heuristic overrides config' '
+ git -c diff.indentHeuristic=false diff-tree --indent-heuristic -p old new -- spaces.txt >out-diff-tree-compacted3 &&
+ compare_diff spaces-compacted-expect out-diff-tree-compacted3
+'
+
+test_expect_success 'diff-tree: --no-indent-heuristic overrides config' '
+ git -c diff.indentHeuristic=true diff-tree --no-indent-heuristic -p old new -- spaces.txt >out-diff-tree3 &&
+ compare_diff spaces-expect out-diff-tree3
+'
+
+# --- diff-index tests ----------------------------------------------------
+
+test_expect_success 'diff-index: nice spaces with --indent-heuristic' '
+ git checkout -B diff-index &&
+ git reset --soft HEAD~ &&
+ git diff-index --indent-heuristic -p old -- spaces.txt >out-diff-index-compacted &&
+ compare_diff spaces-compacted-expect out-diff-index-compacted &&
+ git checkout -f master
+'
+
+test_expect_success 'diff-index: nice spaces with diff.indentHeuristic=true' '
+ git checkout -B diff-index &&
+ git reset --soft HEAD~ &&
+ git -c diff.indentHeuristic=true diff-index -p old -- spaces.txt >out-diff-index-compacted2 &&
+ compare_diff spaces-compacted-expect out-diff-index-compacted2 &&
+ git checkout -f master
+'
+
+test_expect_success 'diff-index: ugly spaces with --no-indent-heuristic' '
+ git checkout -B diff-index &&
+ git reset --soft HEAD~ &&
+ git diff-index --no-indent-heuristic -p old -- spaces.txt >out-diff-index &&
+ compare_diff spaces-expect out-diff-index &&
+ git checkout -f master
+'
+
+test_expect_success 'diff-index: ugly spaces with diff.indentHeuristic=false' '
+ git checkout -B diff-index &&
+ git reset --soft HEAD~ &&
+ git -c diff.indentHeuristic=false diff-index -p old -- spaces.txt >out-diff-index2 &&
+ compare_diff spaces-expect out-diff-index2 &&
+ git checkout -f master
+'
+
+test_expect_success 'diff-index: --indent-heuristic overrides config' '
+ git checkout -B diff-index &&
+ git reset --soft HEAD~ &&
+ git -c diff.indentHeuristic=false diff-index --indent-heuristic -p old -- spaces.txt >out-diff-index-compacted3 &&
+ compare_diff spaces-compacted-expect out-diff-index-compacted3 &&
+ git checkout -f master
+'
+
+test_expect_success 'diff-index: --no-indent-heuristic overrides config' '
+ git checkout -B diff-index &&
+ git reset --soft HEAD~ &&
+ git -c diff.indentHeuristic=true diff-index --no-indent-heuristic -p old -- spaces.txt >out-diff-index3 &&
+ compare_diff spaces-expect out-diff-index3 &&
+ git checkout -f master
+'
+
+# --- diff-files tests ----------------------------------------------------
+
+test_expect_success 'diff-files: nice spaces with --indent-heuristic' '
+ git checkout -B diff-files &&
+ git reset HEAD~ &&
+ git diff-files --indent-heuristic -p spaces.txt >out-diff-files-raw &&
+ grep -v index out-diff-files-raw >out-diff-files-compacted &&
+ compare_diff spaces-compacted-expect out-diff-files-compacted &&
+ git checkout -f master
+'
+
+test_expect_success 'diff-files: nice spaces with diff.indentHeuristic=true' '
+ git checkout -B diff-files &&
+ git reset HEAD~ &&
+ git -c diff.indentHeuristic=true diff-files -p spaces.txt >out-diff-files-raw2 &&
+ grep -v index out-diff-files-raw2 >out-diff-files-compacted2 &&
+ compare_diff spaces-compacted-expect out-diff-files-compacted2 &&
+ git checkout -f master
+'
+
+test_expect_success 'diff-files: ugly spaces with --no-indent-heuristic' '
+ git checkout -B diff-files &&
+ git reset HEAD~ &&
+ git diff-files --no-indent-heuristic -p spaces.txt >out-diff-files-raw &&
+ grep -v index out-diff-files-raw >out-diff-files &&
+ compare_diff spaces-expect out-diff-files &&
+ git checkout -f master
+'
+
+test_expect_success 'diff-files: ugly spaces with diff.indentHeuristic=false' '
+ git checkout -B diff-files &&
+ git reset HEAD~ &&
+ git -c diff.indentHeuristic=false diff-files -p spaces.txt >out-diff-files-raw2 &&
+ grep -v index out-diff-files-raw2 >out-diff-files &&
+ compare_diff spaces-expect out-diff-files &&
+ git checkout -f master
+'
+
+test_expect_success 'diff-files: --indent-heuristic overrides config' '
+ git checkout -B diff-files &&
+ git reset HEAD~ &&
+ git -c diff.indentHeuristic=false diff-files --indent-heuristic -p spaces.txt >out-diff-files-raw3 &&
+ grep -v index out-diff-files-raw3 >out-diff-files-compacted &&
+ compare_diff spaces-compacted-expect out-diff-files-compacted &&
+ git checkout -f master
+'
+
+test_expect_success 'diff-files: --no-indent-heuristic overrides config' '
+ git checkout -B diff-files &&
+ git reset HEAD~ &&
+ git -c diff.indentHeuristic=true diff-files --no-indent-heuristic -p spaces.txt >out-diff-files-raw4 &&
+ grep -v index out-diff-files-raw4 >out-diff-files &&
+ compare_diff spaces-expect out-diff-files &&
+ git checkout -f master
'
test_done
--- /dev/null
+#!/bin/sh
+
+test_description='test direct comparison of blobs via git-diff'
+. ./test-lib.sh
+
+run_diff () {
+ # use full-index to make it easy to match the index line
+ git diff --full-index "$@" >diff
+}
+
+check_index () {
+ grep "^index $1\\.\\.$2" diff
+}
+
+check_mode () {
+ grep "^old mode $1" diff &&
+ grep "^new mode $2" diff
+}
+
+check_paths () {
+ grep "^diff --git a/$1 b/$2" diff
+}
+
+test_expect_success 'create some blobs' '
+ echo one >one &&
+ echo two >two &&
+ chmod +x two &&
+ git add . &&
+
+ # cover systems where modes are ignored
+ git update-index --chmod=+x two &&
+
+ git commit -m base &&
+
+ sha1_one=$(git rev-parse HEAD:one) &&
+ sha1_two=$(git rev-parse HEAD:two)
+'
+
+test_expect_success 'diff by sha1' '
+ run_diff $sha1_one $sha1_two
+'
+test_expect_success 'index of sha1 diff' '
+ check_index $sha1_one $sha1_two
+'
+test_expect_success 'sha1 diff uses arguments as paths' '
+ check_paths $sha1_one $sha1_two
+'
+test_expect_success 'sha1 diff has no mode change' '
+ ! grep mode diff
+'
+
+test_expect_success 'diff by tree:path (run)' '
+ run_diff HEAD:one HEAD:two
+'
+test_expect_success 'index of tree:path diff' '
+ check_index $sha1_one $sha1_two
+'
+test_expect_success 'tree:path diff uses filenames as paths' '
+ check_paths one two
+'
+test_expect_success 'tree:path diff shows mode change' '
+ check_mode 100644 100755
+'
+
+test_expect_success 'diff by ranged tree:path' '
+ run_diff HEAD:one..HEAD:two
+'
+test_expect_success 'index of ranged tree:path diff' '
+ check_index $sha1_one $sha1_two
+'
+test_expect_success 'ranged tree:path diff uses filenames as paths' '
+ check_paths one two
+'
+test_expect_success 'ranged tree:path diff shows mode change' '
+ check_mode 100644 100755
+'
+
+test_expect_success 'diff blob against file' '
+ run_diff HEAD:one two
+'
+test_expect_success 'index of blob-file diff' '
+ check_index $sha1_one $sha1_two
+'
+test_expect_success 'blob-file diff uses filename as paths' '
+ check_paths one two
+'
+test_expect_success FILEMODE 'blob-file diff shows mode change' '
+ check_mode 100644 100755
+'
+
+test_expect_success 'blob-file diff prefers filename to sha1' '
+ run_diff $sha1_one two &&
+ check_paths two two
+'
+
+test_done
initial
EOF
test_expect_success 'log --invert-grep --grep' '
- git log --pretty="tformat:%s" --invert-grep --grep=th --grep=Sec >actual &&
- test_cmp expect actual
+ # Fixed
+ git -c grep.patternType=fixed log --pretty="tformat:%s" --invert-grep --grep=th --grep=Sec >actual &&
+ test_cmp expect actual &&
+
+ # POSIX basic
+ git -c grep.patternType=basic log --pretty="tformat:%s" --invert-grep --grep=t[h] --grep=S[e]c >actual &&
+ test_cmp expect actual &&
+
+ # POSIX extended
+	git -c grep.patternType=extended log --pretty="tformat:%s" --invert-grep --grep=t[h] --grep=S[e]c >actual &&
+ test_cmp expect actual &&
+
+ # PCRE
+ if test_have_prereq PCRE
+ then
+ git -c grep.patternType=perl log --pretty="tformat:%s" --invert-grep --grep=t[h] --grep=S[e]c >actual &&
+ test_cmp expect actual
+ fi
'
test_expect_success 'log --invert-grep --grep -i' '
echo initial >expect &&
- git log --pretty="tformat:%s" --invert-grep -i --grep=th --grep=Sec >actual &&
- test_cmp expect actual
+
+ # Fixed
+ git -c grep.patternType=fixed log --pretty="tformat:%s" --invert-grep -i --grep=th --grep=Sec >actual &&
+ test_cmp expect actual &&
+
+ # POSIX basic
+ git -c grep.patternType=basic log --pretty="tformat:%s" --invert-grep -i --grep=t[h] --grep=S[e]c >actual &&
+ test_cmp expect actual &&
+
+ # POSIX extended
+ git -c grep.patternType=extended log --pretty="tformat:%s" --invert-grep -i --grep=t[h] --grep=S[e]c >actual &&
+ test_cmp expect actual &&
+
+ # PCRE
+ if test_have_prereq PCRE
+ then
+ git -c grep.patternType=perl log --pretty="tformat:%s" --invert-grep -i --grep=t[h] --grep=S[e]c >actual &&
+ test_cmp expect actual
+ fi
'
test_expect_success 'log --grep option parsing' '
test_expect_success 'log --grep -i' '
echo Second >expect &&
+
+ # Fixed
git log -1 --pretty="tformat:%s" --grep=sec -i >actual &&
- test_cmp expect actual
+ test_cmp expect actual &&
+
+ # POSIX basic
+ git -c grep.patternType=basic log -1 --pretty="tformat:%s" --grep=s[e]c -i >actual &&
+ test_cmp expect actual &&
+
+ # POSIX extended
+ git -c grep.patternType=extended log -1 --pretty="tformat:%s" --grep=s[e]c -i >actual &&
+ test_cmp expect actual &&
+
+ # PCRE
+ if test_have_prereq PCRE
+ then
+ git -c grep.patternType=perl log -1 --pretty="tformat:%s" --grep=s[e]c -i >actual &&
+ test_cmp expect actual
+ fi
'
test_expect_success 'log -F -E --grep=<ere> uses ere' '
echo second >expect &&
- git log -1 --pretty="tformat:%s" -F -E --grep=s.c.nd >actual &&
+ # basic would need \(s\) to do the same
+ git log -1 --pretty="tformat:%s" -F -E --grep="(s).c.nd" >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success PCRE 'log -F -E --perl-regexp --grep=<pcre> uses PCRE' '
+ test_when_finished "rm -rf num_commits" &&
+ git init num_commits &&
+ (
+ cd num_commits &&
+ test_commit 1d &&
+ test_commit 2e
+ ) &&
+
+ # In PCRE \d in [\d] is like saying "0-9", and matches the 2
+ # in 2e...
+ echo 2e >expect &&
+ git -C num_commits log -1 --pretty="tformat:%s" -F -E --perl-regexp --grep="[\d]" >actual &&
+ test_cmp expect actual &&
+
+ # ...in POSIX basic and extended it is the same as [d],
+ # i.e. "d", which matches 1d, but does not match 2e.
+ echo 1d >expect &&
+ git -C num_commits log -1 --pretty="tformat:%s" -F -E --grep="[\d]" >actual &&
test_cmp expect actual
'
test_cmp expect actual
'
+test_expect_success 'log with various grep.patternType configurations & command-lines' '
+ git init pattern-type &&
+ (
+ cd pattern-type &&
+ test_commit 1 file A &&
+
+ # The tagname is overridden here because creating a
+ # tag called "(1|2)" as test_commit would otherwise
+ # implicitly do would fail on e.g. MINGW.
+ test_commit "(1|2)" file B 2 &&
+
+ echo "(1|2)" >expect.fixed &&
+ cp expect.fixed expect.basic &&
+ cp expect.fixed expect.extended &&
+ cp expect.fixed expect.perl &&
+
+ # A strcmp-like match with fixed.
+ git -c grep.patternType=fixed log --pretty=tformat:%s \
+ --grep="(1|2)" >actual.fixed &&
+
+ # POSIX basic matches (, | and ) literally.
+ git -c grep.patternType=basic log --pretty=tformat:%s \
+ --grep="(.|.)" >actual.basic &&
+
+ # POSIX extended needs to have | escaped to match it
+ # literally, whereas under basic this is the same as
+ # (|2), i.e. it would also match "1". This test checks
+ # for extended by asserting that it is not matching
+ # what basic would match.
+ git -c grep.patternType=extended log --pretty=tformat:%s \
+ --grep="\|2" >actual.extended &&
+ if test_have_prereq PCRE
+ then
+ # Only PCRE would match [\d]\| with only
+ # "(1|2)" due to [\d]. POSIX basic would match
+ # both it and "1" since similarly to the
+ # extended match above it is the same as
+ # \([\d]\|\). POSIX extended would
+ # match neither.
+ git -c grep.patternType=perl log --pretty=tformat:%s \
+ --grep="[\d]\|" >actual.perl &&
+ test_cmp expect.perl actual.perl
+ fi &&
+ test_cmp expect.fixed actual.fixed &&
+ test_cmp expect.basic actual.basic &&
+ test_cmp expect.extended actual.extended &&
+
+ git log --pretty=tformat:%s -F \
+ --grep="(1|2)" >actual.fixed.short-arg &&
+ git log --pretty=tformat:%s -E \
+ --grep="\|2" >actual.extended.short-arg &&
+ test_cmp expect.fixed actual.fixed.short-arg &&
+ test_cmp expect.extended actual.extended.short-arg &&
+
+ git log --pretty=tformat:%s --fixed-strings \
+ --grep="(1|2)" >actual.fixed.long-arg &&
+ git log --pretty=tformat:%s --basic-regexp \
+ --grep="(.|.)" >actual.basic.long-arg &&
+ git log --pretty=tformat:%s --extended-regexp \
+ --grep="\|2" >actual.extended.long-arg &&
+ if test_have_prereq PCRE
+ then
+ git log --pretty=tformat:%s --perl-regexp \
+ --grep="[\d]\|" >actual.perl.long-arg &&
+ test_cmp expect.perl actual.perl.long-arg
+ else
+ test_must_fail git log --perl-regexp \
+ --grep="[\d]\|"
+ fi &&
+ test_cmp expect.fixed actual.fixed.long-arg &&
+ test_cmp expect.basic actual.basic.long-arg &&
+ test_cmp expect.extended actual.extended.long-arg
+ )
+'
+
cat > expect <<EOF
* Second
* sixth
| |
| | Merge branch 'side'
| |
-| * commit side
+| * commit tags/side-2
| | Author: A U Thor <author@example.com>
| |
| | side-2
test_cmp expect actual
'
+test_expect_success 'log --source paints symmetric ranges' '
+ cat >expect <<-\EOF &&
+ 09e12a9 source-b three
+ 8e393e1 source-a two
+ EOF
+ git log --oneline --source source-a...source-b >actual &&
+ test_cmp expect actual
+'
+
test_done
test_path_is_file foo.idx
'
+test_expect_success !PTHREADS,C_LOCALE_OUTPUT 'index-pack --threads=N or pack.threads=N warns when no pthreads' '
+ test_must_fail git index-pack --threads=2 2>err &&
+ grep ^warning: err >warnings &&
+ test_line_count = 1 warnings &&
+ grep -F "no threads support, ignoring --threads=2" err &&
+
+ test_must_fail git -c pack.threads=2 index-pack 2>err &&
+ grep ^warning: err >warnings &&
+ test_line_count = 1 warnings &&
+ grep -F "no threads support, ignoring pack.threads" err &&
+
+ test_must_fail git -c pack.threads=2 index-pack --threads=4 2>err &&
+ grep ^warning: err >warnings &&
+ test_line_count = 2 warnings &&
+ grep -F "no threads support, ignoring --threads=4" err &&
+ grep -F "no threads support, ignoring pack.threads" err
+'
+
+test_expect_success !PTHREADS,C_LOCALE_OUTPUT 'pack-objects --threads=N or pack.threads=N warns when no pthreads' '
+ git pack-objects --threads=2 --stdout --all </dev/null >/dev/null 2>err &&
+ grep ^warning: err >warnings &&
+ test_line_count = 1 warnings &&
+ grep -F "no threads support, ignoring --threads" err &&
+
+ git -c pack.threads=2 pack-objects --stdout --all </dev/null >/dev/null 2>err &&
+ grep ^warning: err >warnings &&
+ test_line_count = 1 warnings &&
+ grep -F "no threads support, ignoring pack.threads" err &&
+
+ git -c pack.threads=2 pack-objects --threads=4 --stdout --all </dev/null >/dev/null 2>err &&
+ grep ^warning: err >warnings &&
+ test_line_count = 2 warnings &&
+ grep -F "no threads support, ignoring --threads" err &&
+ grep -F "no threads support, ignoring pack.threads" err
+'
+
#
# WARNING!
#
}
test_expect_success 'setup repo with moderate-sized history' '
- for i in $(test_seq 1 10); do
+ for i in $(test_seq 1 10)
+ do
test_commit $i
done &&
git checkout -b other HEAD~5 &&
- for i in $(test_seq 1 10); do
+ for i in $(test_seq 1 10)
+ do
test_commit side-$i
done &&
git checkout master &&
'
test_expect_success 'setup further non-bitmapped commits' '
- for i in $(test_seq 1 10); do
+ for i in $(test_seq 1 10)
+ do
test_commit further-$i
done
'
git -C no-bitmaps.git fetch .. HEAD
'
+test_expect_success 'set up reusable pack' '
+ rm -f .git/objects/pack/*.keep &&
+ git repack -adb &&
+ reusable_pack () {
+ git for-each-ref --format="%(objectname)" |
+ git pack-objects --delta-base-offset --revs --stdout "$@"
+ }
+'
+
+test_expect_success 'pack reuse respects --honor-pack-keep' '
+ test_when_finished "rm -f .git/objects/pack/*.keep" &&
+ for i in .git/objects/pack/*.pack
+ do
+ >${i%.pack}.keep
+ done &&
+ reusable_pack --honor-pack-keep >empty.pack &&
+ git index-pack empty.pack &&
+ >expect &&
+ git show-index <empty.idx >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'pack reuse respects --local' '
+ mv .git/objects/pack/* alt.git/objects/pack/ &&
+ test_when_finished "mv alt.git/objects/pack/* .git/objects/pack/" &&
+ reusable_pack --local >empty.pack &&
+ git index-pack empty.pack &&
+ >expect &&
+ git show-index <empty.idx >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'pack reuse respects --incremental' '
+ reusable_pack --incremental >empty.pack &&
+ git index-pack empty.pack &&
+ >expect &&
+ git show-index <empty.idx >actual &&
+ test_cmp expect actual
+'
test_done
$shared .have
EOF
- GIT_TRACE_PACKET=$(pwd)/trace git push fork HEAD:foo &&
+ GIT_TRACE_PACKET=$(pwd)/trace \
+ git push \
+ --receive-pack="unset GIT_TRACE_PACKET; git-receive-pack" \
+ fork HEAD:foo &&
extract_ref_advertisement <trace >refs &&
test_cmp expect refs
'
git fetch-pack hidden $(git -C hidden rev-parse refs/hidden/one)
'
+test_expect_success 'fetch-pack can fetch a raw sha1 that is advertised as a ref' '
+ rm -rf server client &&
+ git init server &&
+ test_commit -C server 1 &&
+
+ git init client &&
+ git -C client fetch-pack ../server \
+ $(git -C server rev-parse refs/heads/master)
+'
+
+test_expect_success 'fetch-pack can fetch a raw sha1 overlapping a named ref' '
+ rm -rf server client &&
+ git init server &&
+ test_commit -C server 1 &&
+ test_commit -C server 2 &&
+
+ git init client &&
+ git -C client fetch-pack ../server \
+ $(git -C server rev-parse refs/tags/1) refs/tags/1
+'
+
+test_expect_success 'fetch-pack cannot fetch a raw sha1 that is not advertised as a ref' '
+ rm -rf server &&
+
+ git init server &&
+ test_commit -C server 5 &&
+ git -C server tag -d 5 &&
+ test_commit -C server 6 &&
+
+ git init client &&
+ test_must_fail git -C client fetch-pack ../server \
+ $(git -C server rev-parse refs/heads/master^) 2>err &&
+ test_i18ngrep "Server does not allow request for unadvertised object" err
+'
+
check_prot_path () {
cat >expected <<-EOF &&
Diag: url=$1
#!/bin/sh
-test_description='unpack-objects'
+test_description='test push with submodules'
. ./test-lib.sh
)
'
-test_expect_success push '
+test_expect_success 'push works with recorded gitlink' '
(
cd work &&
git push ../pub.git master
test_description='pushing to a repository using push options'
. ./test-lib.sh
-. "$TEST_DIRECTORY"/lib-httpd.sh
-start_httpd
mk_repo_pair () {
rm -rf workbench upstream &&
test_cmp expect upstream/.git/hooks/post-receive.push_options
'
-test_expect_success 'push option denied properly by http server' '
- test_when_finished "rm -rf test_http_clone" &&
- test_when_finished "rm -rf \"$HTTPD_DOCUMENT_ROOT_PATH\"/upstream.git" &&
- mk_repo_pair &&
- git -C upstream config receive.advertisePushOptions false &&
- git -C upstream config http.receivepack true &&
- cp -R upstream/.git "$HTTPD_DOCUMENT_ROOT_PATH"/upstream.git &&
- git clone "$HTTPD_URL"/smart/upstream test_http_clone &&
- test_commit -C test_http_clone one &&
- test_must_fail git -C test_http_clone push --push-option=asdf origin master 2>actual &&
- test_i18ngrep "the receiving end does not support push options" actual &&
- git -C test_http_clone push origin master
-'
-
-test_expect_success 'push options work properly across http' '
- test_when_finished "rm -rf test_http_clone" &&
- test_when_finished "rm -rf \"$HTTPD_DOCUMENT_ROOT_PATH\"/upstream.git" &&
- mk_repo_pair &&
- git -C upstream config receive.advertisePushOptions true &&
- git -C upstream config http.receivepack true &&
- cp -R upstream/.git "$HTTPD_DOCUMENT_ROOT_PATH"/upstream.git &&
- git clone "$HTTPD_URL"/smart/upstream test_http_clone &&
-
- test_commit -C test_http_clone one &&
- git -C test_http_clone push origin master &&
- git -C "$HTTPD_DOCUMENT_ROOT_PATH"/upstream.git rev-parse --verify master >expect &&
- git -C test_http_clone rev-parse --verify master >actual &&
- test_cmp expect actual &&
-
- test_commit -C test_http_clone two &&
- git -C test_http_clone push --push-option=asdf --push-option="more structured text" origin master &&
- printf "asdf\nmore structured text\n" >expect &&
- test_cmp expect "$HTTPD_DOCUMENT_ROOT_PATH"/upstream.git/hooks/pre-receive.push_options &&
- test_cmp expect "$HTTPD_DOCUMENT_ROOT_PATH"/upstream.git/hooks/post-receive.push_options &&
-
- git -C "$HTTPD_DOCUMENT_ROOT_PATH"/upstream.git rev-parse --verify master >expect &&
- git -C test_http_clone rev-parse --verify master >actual &&
- test_cmp expect actual
-'
-
test_expect_success 'push options and submodules' '
test_when_finished "rm -rf parent" &&
test_when_finished "rm -rf parent_upstream" &&
test_cmp expect parent_upstream/.git/hooks/post-receive.push_options
'
+. "$TEST_DIRECTORY"/lib-httpd.sh
+start_httpd
+
+test_expect_success 'push option denied properly by http server' '
+ test_when_finished "rm -rf test_http_clone" &&
+ test_when_finished "rm -rf \"$HTTPD_DOCUMENT_ROOT_PATH\"/upstream.git" &&
+ mk_repo_pair &&
+ git -C upstream config receive.advertisePushOptions false &&
+ git -C upstream config http.receivepack true &&
+ cp -R upstream/.git "$HTTPD_DOCUMENT_ROOT_PATH"/upstream.git &&
+ git clone "$HTTPD_URL"/smart/upstream test_http_clone &&
+ test_commit -C test_http_clone one &&
+ test_must_fail git -C test_http_clone push --push-option=asdf origin master 2>actual &&
+ test_i18ngrep "the receiving end does not support push options" actual &&
+ git -C test_http_clone push origin master
+'
+
+test_expect_success 'push options work properly across http' '
+ test_when_finished "rm -rf test_http_clone" &&
+ test_when_finished "rm -rf \"$HTTPD_DOCUMENT_ROOT_PATH\"/upstream.git" &&
+ mk_repo_pair &&
+ git -C upstream config receive.advertisePushOptions true &&
+ git -C upstream config http.receivepack true &&
+ cp -R upstream/.git "$HTTPD_DOCUMENT_ROOT_PATH"/upstream.git &&
+ git clone "$HTTPD_URL"/smart/upstream test_http_clone &&
+
+ test_commit -C test_http_clone one &&
+ git -C test_http_clone push origin master &&
+ git -C "$HTTPD_DOCUMENT_ROOT_PATH"/upstream.git rev-parse --verify master >expect &&
+ git -C test_http_clone rev-parse --verify master >actual &&
+ test_cmp expect actual &&
+
+ test_commit -C test_http_clone two &&
+ git -C test_http_clone push --push-option=asdf --push-option="more structured text" origin master &&
+ printf "asdf\nmore structured text\n" >expect &&
+ test_cmp expect "$HTTPD_DOCUMENT_ROOT_PATH"/upstream.git/hooks/pre-receive.push_options &&
+ test_cmp expect "$HTTPD_DOCUMENT_ROOT_PATH"/upstream.git/hooks/post-receive.push_options &&
+
+ git -C "$HTTPD_DOCUMENT_ROOT_PATH"/upstream.git rev-parse --verify master >expect &&
+ git -C test_http_clone rev-parse --verify master >actual &&
+ test_cmp expect actual
+'
+
stop_httpd
test_done
(cd "$HTTPD_DOCUMENT_ROOT_PATH/repo.git" &&
git config core.bare true &&
mkdir -p hooks &&
- echo "exec git update-server-info" >hooks/post-update &&
- chmod +x hooks/post-update &&
+ write_script "hooks/post-update" <<-\EOF &&
+ exec git update-server-info
+ EOF
hooks/post-update
) &&
git remote add public "$HTTPD_DOCUMENT_ROOT_PATH/repo.git" &&
test_i18ncmp expect actual
'
-cat <<EOF >expect
-fatal: Pathspec '.' is in submodule 'sub'
-EOF
-
test_expect_success 'error message for path inside submodule from within submodule' '
test_must_fail git -C sub add . 2>actual &&
- test_i18ncmp expect actual
+ test_i18ngrep "in unpopulated submodule" actual
'
test_done
'
done
-test_expect_success 'do not complain about existing broken links' '
+test_expect_success 'do not complain about existing broken links (commit)' '
cat >broken-commit <<-\EOF &&
tree 0000000000000000000000000000000000000001
parent 0000000000000000000000000000000000000002
test_must_be_empty stderr
'
+test_expect_success 'do not complain about existing broken links (tree)' '
+ cat >broken-tree <<-\EOF &&
+ 100644 blob 0000000000000000000000000000000000000003 foo
+ EOF
+ tree=$(git mktree --missing <broken-tree) &&
+ git gc 2>stderr &&
+ git cat-file -e $tree &&
+ test_must_be_empty stderr
+'
+
+test_expect_success 'do not complain about existing broken links (tag)' '
+ cat >broken-tag <<-\EOF &&
+ object 0000000000000000000000000000000000000004
+ type commit
+ tag broken
+ tagger whatever <whatever@example.com> 1234 -0000
+
+ this is a broken tag
+ EOF
+ tag=$(git hash-object -t tag -w broken-tag) &&
+ git gc 2>stderr &&
+ git cat-file -e $tag &&
+ test_must_be_empty stderr
+'
+
test_done
. ./test-lib.sh
+nul_match () {
+ matches=$1
+ flags=$2
+ pattern=$3
+ pattern_human=$(echo "$pattern" | sed 's/Q/<NUL>/g')
+
+ if test "$matches" = 1
+ then
+ test_expect_success "git grep -f f $flags '$pattern_human' a" "
+ printf '$pattern' | q_to_nul >f &&
+ git grep -f f $flags a
+ "
+ elif test "$matches" = 0
+ then
+ test_expect_success "git grep -f f $flags '$pattern_human' a" "
+ printf '$pattern' | q_to_nul >f &&
+ test_must_fail git grep -f f $flags a
+ "
+ elif test "$matches" = T1
+ then
+ test_expect_failure "git grep -f f $flags '$pattern_human' a" "
+ printf '$pattern' | q_to_nul >f &&
+ git grep -f f $flags a
+ "
+ elif test "$matches" = T0
+ then
+ test_expect_failure "git grep -f f $flags '$pattern_human' a" "
+ printf '$pattern' | q_to_nul >f &&
+ test_must_fail git grep -f f $flags a
+ "
+ else
+ test_expect_success "PANIC: Test framework error. Unknown matches value $matches" 'false'
+ fi
+}
+
test_expect_success 'setup' "
- echo 'binaryQfile' | q_to_nul >a &&
+ echo 'binaryQfileQm[*]cQ*æQð' | q_to_nul >a &&
git add a &&
git commit -m.
"
git grep .fi a
'
-test_expect_success 'git grep -F y<NUL>f a' "
- printf 'yQf' | q_to_nul >f &&
- git grep -f f -F a
-"
-
-test_expect_success 'git grep -F y<NUL>x a' "
- printf 'yQx' | q_to_nul >f &&
- test_must_fail git grep -f f -F a
-"
-
-test_expect_success 'git grep -Fi Y<NUL>f a' "
- printf 'YQf' | q_to_nul >f &&
- git grep -f f -Fi a
-"
-
-test_expect_success 'git grep -Fi Y<NUL>x a' "
- printf 'YQx' | q_to_nul >f &&
- test_must_fail git grep -f f -Fi a
-"
-
-test_expect_success 'git grep y<NUL>f a' "
- printf 'yQf' | q_to_nul >f &&
- git grep -f f a
-"
-
-test_expect_success 'git grep y<NUL>x a' "
- printf 'yQx' | q_to_nul >f &&
- test_must_fail git grep -f f a
-"
+nul_match 1 '-F' 'yQf'
+nul_match 0 '-F' 'yQx'
+nul_match 1 '-Fi' 'YQf'
+nul_match 0 '-Fi' 'YQx'
+nul_match 1 '' 'yQf'
+nul_match 0 '' 'yQx'
+nul_match 1 '' 'æQð'
+nul_match 1 '-F' 'eQm[*]c'
+nul_match 1 '-Fi' 'EQM[*]C'
+
+# Regex patterns that would match but shouldn't with -F
+nul_match 0 '-F' 'yQ[f]'
+nul_match 0 '-F' '[y]Qf'
+nul_match 0 '-Fi' 'YQ[F]'
+nul_match 0 '-Fi' '[Y]QF'
+nul_match 0 '-F' 'æQ[ð]'
+nul_match 0 '-F' '[æ]Qð'
+nul_match 0 '-Fi' 'ÆQ[Ð]'
+nul_match 0 '-Fi' '[Æ]QÐ'
+
+# kwset is disabled on -i & non-ASCII. No way to match non-ASCII \0
+# patterns case-insensitively.
+nul_match T1 '-i' 'ÆQÐ'
+
+# \0 implicitly disables regexes. This is an undocumented internal
+# limitation.
+nul_match T1 '' 'yQ[f]'
+nul_match T1 '' '[y]Qf'
+nul_match T1 '-i' 'YQ[F]'
+nul_match T1 '-i' '[Y]Qf'
+nul_match T1 '' 'æQ[ð]'
+nul_match T1 '' '[æ]Qð'
+nul_match T1 '-i' 'ÆQ[Ð]'
+
+# ... because \0 implicitly disables regexes, regexes that should or
+# shouldn't match don't do the right thing.
+nul_match T1 '' 'eQm.*cQ'
+nul_match T1 '-i' 'EQM.*cQ'
+nul_match T0 '' 'eQm[*]c'
+nul_match T0 '-i' 'EQM[*]C'
+
+# Due to the REG_STARTEND extension, when kwset() is disabled on -i &
+# non-ASCII the string will be matched in its entirety, but the
+# pattern will be cut off at the first \0.
+nul_match 0 '-i' 'NOMATCHQð'
+nul_match T0 '-i' '[Æ]QNOMATCH'
+nul_match T0 '-i' '[æ]QNOMATCH'
+# Matches, but for the wrong reasons, just stops at [æ]
+nul_match 1 '-i' '[Æ]Qð'
+nul_match 1 '-i' '[æ]Qð'
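
As an aside, the REG_STARTEND behaviour described in the comment above is
easiest to see against the C regex API directly. A minimal stand-alone sketch
(editor's illustration, not part of this patch; it assumes a libc such as
glibc that provides the REG_STARTEND extension to regexec()):

	#include <regex.h>
	#include <stdio.h>

	int main(void)
	{
		/* 11 bytes of data plus the terminating NUL */
		const char haystack[] = "binary\0file";
		regex_t re;
		regmatch_t m[1];

		/*
		 * The literal continues past the \0, but regcomp() only ever
		 * sees "bin.*": the pattern argument is a plain C string, so
		 * it is cut off at the first NUL.
		 */
		if (regcomp(&re, "bin.*\0NEVER-SEEN", REG_EXTENDED))
			return 1;

		/*
		 * With REG_STARTEND the subject is given by offsets, so the
		 * embedded NUL in the haystack does not terminate it.
		 */
		m[0].rm_so = 0;
		m[0].rm_eo = sizeof(haystack) - 1;
		if (!regexec(&re, haystack, 1, m, REG_STARTEND))
			printf("matched despite the NUL in the haystack\n");
		regfree(&re);
		return 0;
	}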
+
+# Ensure that the matcher doesn't regress to something that stops at
+# \0
+nul_match 0 '-F' 'yQ[f]'
+nul_match 0 '-Fi' 'YQ[F]'
+nul_match 0 '' 'yQNOMATCH'
+nul_match 0 '' 'QNOMATCH'
+nul_match 0 '-i' 'YQNOMATCH'
+nul_match 0 '-i' 'QNOMATCH'
+nul_match 0 '-F' 'æQ[ð]'
+nul_match 0 '-Fi' 'ÆQ[Ð]'
+nul_match 0 '' 'yQNÓMATCH'
+nul_match 0 '' 'QNÓMATCH'
+nul_match 0 '-i' 'YQNÓMATCH'
+nul_match 0 '-i' 'QNÓMATCH'
test_expect_success 'grep respects binary diff attribute' '
echo text >t &&
'
test_expect_success 'grep --textconv honors textconv' '
- echo "a:binaryQfile" >expect &&
+ echo "a:binaryQfileQm[*]cQ*æQð" >expect &&
git grep --textconv Qfile >actual &&
test_cmp expect actual
'
'
test_expect_success 'grep --textconv blob honors textconv' '
- echo "HEAD:a:binaryQfile" >expect &&
+ echo "HEAD:a:binaryQfileQm[*]cQ*æQð" >expect &&
git grep --textconv Qfile HEAD:a >actual &&
test_cmp expect actual
'
?? actual
?? expected
?? untracked/
+!! untracked/ignored
EOF
test_expect_success 'status untracked directory with --ignored' '
test_i18ncmp ../expect ../err
'
+test_expect_success 'untracked cache survives a checkout' '
+ git commit --allow-empty -m empty &&
+ test-dump-untracked-cache >../before &&
+ test_when_finished "git checkout master" &&
+ git checkout -b other_branch &&
+ test-dump-untracked-cache >../after &&
+ test_cmp ../before ../after &&
+ test_commit test &&
+ test-dump-untracked-cache >../before &&
+ git checkout master &&
+ test-dump-untracked-cache >../after &&
+ test_cmp ../before ../after
+'
+
+test_expect_success 'untracked cache survives a commit' '
+ test-dump-untracked-cache >../before &&
+ git add done/two &&
+ git commit -m commit &&
+ test-dump-untracked-cache >../after &&
+ test_cmp ../before ../after
+'
+
test_done
. ./test-lib.sh
. "$TEST_DIRECTORY"/lib-submodule-update.sh
+KNOWN_FAILURE_SUBMODULE_RECURSIVE_NESTED=1
+KNOWN_FAILURE_DIRECTORY_SUBMODULE_CONFLICTS=1
+KNOWN_FAILURE_SUBMODULE_OVERWRITE_IGNORED_UNTRACKED=1
+
+test_submodule_switch_recursing "git reset --recurse-submodules --keep"
+
+test_submodule_forced_switch_recursing "git reset --hard --recurse-submodules"
+
test_submodule_switch "git reset --keep"
test_submodule_switch "git reset --merge"
test_path_is_dir foobar
'
+test_expect_success 'git clean -d skips untracked dirs containing ignored files' '
+ echo /foo/bar >.gitignore &&
+ echo ignoreme >>.gitignore &&
+ rm -rf foo &&
+ mkdir -p foo/a/aa/aaa foo/b/bb/bbb &&
+ touch foo/bar foo/baz foo/a/aa/ignoreme foo/b/ignoreme foo/b/bb/1 foo/b/bb/2 &&
+ git clean -df &&
+ test_path_is_dir foo &&
+ test_path_is_file foo/bar &&
+ test_path_is_missing foo/baz &&
+ test_path_is_file foo/a/aa/ignoreme &&
+ test_path_is_missing foo/a/aa/aaa &&
+ test_path_is_file foo/b/ignoreme &&
+ test_path_is_missing foo/b/bb
+'
+
test_done
test_cmp expected actual
'
+test_expect_success 'with cut line' '
+ cat >expected <<-\EOF &&
+ my subject
+
+ review: Brian
+ sign: A U Thor <author@example.com>
+ # ------------------------ >8 ------------------------
+ ignore this
+ EOF
+ git interpret-trailers --trailer review:Brian >actual <<-\EOF &&
+ my subject
+ # ------------------------ >8 ------------------------
+ ignore this
+ EOF
+ test_cmp expected actual
+'
+
test_done
test_cmp expected actual
'
- test_expect_success LIBPCRE "grep $L with grep.patterntype=perl" '
+ test_expect_success PCRE "grep $L with grep.patterntype=perl" '
echo "${HC}ab:a+b*c" >expected &&
git -c grep.patterntype=perl grep "a\x{2b}b\x{2a}c" $H ab >actual &&
test_cmp expected actual
'
+ test_expect_success !PCRE "grep $L with grep.patterntype=perl errors without PCRE" '
+ test_must_fail git -c grep.patterntype=perl grep "foo.*bar"
+ '
+
test_expect_success "grep $L with grep.patternType=default and grep.extendedRegexp=true" '
echo "${HC}ab:abc" >expected &&
git \
test_cmp expected actual
'
+for threads in $(test_seq 0 10)
+do
+ test_expect_success "grep --threads=$threads & -c grep.threads=$threads" "
+ git grep --threads=$threads . >actual.$threads &&
+ if test $threads -ge 1
+ then
+ test_cmp actual.\$(($threads - 1)) actual.$threads
+ fi &&
+ git -c grep.threads=$threads grep . >actual.$threads &&
+ if test $threads -ge 1
+ then
+ test_cmp actual.\$(($threads - 1)) actual.$threads
+ fi
+ "
+done
+
+test_expect_success !PTHREADS,C_LOCALE_OUTPUT 'grep --threads=N or grep.threads=N warns when no pthreads' '
+ git grep --threads=2 Hello hello_world 2>err &&
+ grep ^warning: err >warnings &&
+ test_line_count = 1 warnings &&
+ grep -F "no threads support, ignoring --threads" err &&
+ git -c grep.threads=2 grep Hello hello_world 2>err &&
+ grep ^warning: err >warnings &&
+ test_line_count = 1 warnings &&
+ grep -F "no threads support, ignoring grep.threads" err &&
+ git -c grep.threads=2 grep --threads=4 Hello hello_world 2>err &&
+ grep ^warning: err >warnings &&
+ test_line_count = 2 warnings &&
+ grep -F "no threads support, ignoring --threads" err &&
+ grep -F "no threads support, ignoring grep.threads" err &&
+ git -c grep.threads=0 grep --threads=0 Hello hello_world 2>err &&
+ test_line_count = 0 err
+'
+
test_expect_success 'grep from a subdirectory to search wider area (1)' '
mkdir -p s &&
(
hello.c: printf("Hello world.\n");
EOF
-test_expect_success LIBPCRE 'grep --perl-regexp pattern' '
+test_expect_success PCRE 'grep --perl-regexp pattern' '
git grep --perl-regexp "\p{Ps}.*?\p{Pe}" hello.c >actual &&
test_cmp expected actual
'
-test_expect_success LIBPCRE 'grep -P pattern' '
+test_expect_success !PCRE 'grep --perl-regexp pattern errors without PCRE' '
+ test_must_fail git grep --perl-regexp "foo.*bar"
+'
+
+test_expect_success PCRE 'grep -P pattern' '
git grep -P "\p{Ps}.*?\p{Pe}" hello.c >actual &&
test_cmp expected actual
'
+test_expect_success !PCRE 'grep -P pattern errors without PCRE' '
+ test_must_fail git grep -P "foo.*bar"
+'
+
test_expect_success 'grep pattern with grep.extendedRegexp=true' '
>empty &&
test_must_fail git -c grep.extendedregexp=true \
test_cmp empty actual
'
-test_expect_success LIBPCRE 'grep -P pattern with grep.extendedRegexp=true' '
+test_expect_success PCRE 'grep -P pattern with grep.extendedRegexp=true' '
git -c grep.extendedregexp=true \
grep -P "\p{Ps}.*?\p{Pe}" hello.c >actual &&
test_cmp expected actual
'
-test_expect_success LIBPCRE 'grep -P -v pattern' '
+test_expect_success PCRE 'grep -P -v pattern' '
{
echo "ab:a+b*c"
echo "ab:a+bc"
test_cmp expected actual
'
-test_expect_success LIBPCRE 'grep -P -i pattern' '
+test_expect_success PCRE 'grep -P -i pattern' '
cat >expected <<-EOF &&
hello.c: printf("Hello world.\n");
EOF
test_cmp expected actual
'
-test_expect_success LIBPCRE 'grep -P -w pattern' '
+test_expect_success PCRE 'grep -P -w pattern' '
{
echo "hello_world:Hello world"
echo "hello_world:HeLLo world"
test_cmp expected actual
'
+test_expect_success PCRE 'grep -P backreferences work (the PCRE NO_AUTO_CAPTURE flag is not set)' '
+ git grep -P -h "(?P<one>.)(?P=one)" hello_world >actual &&
+ test_cmp hello_world actual &&
+ git grep -P -h "(.)\1" hello_world >actual &&
+ test_cmp hello_world actual
+'
+
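For context, a minimal sketch of the behaviour the test above guards against
(editor's illustration, not part of the patch; it assumes the PCRE1 API that
USE_LIBPCRE1 builds link against): compiling a pattern with a backreference
fails once PCRE_NO_AUTO_CAPTURE is in effect, because "(.)" no longer captures
and "\1" has nothing to refer to.

	#include <pcre.h>
	#include <stdio.h>

	int main(void)
	{
		const char *err;
		int erroffset;
		pcre *re;

		/* Without the flag, "(.)\1" compiles and can match "ll" etc. */
		re = pcre_compile("(.)\\1", 0, &err, &erroffset, NULL);
		if (re) {
			printf("compiles without PCRE_NO_AUTO_CAPTURE\n");
			pcre_free(re);
		}

		/* With the flag, the same pattern is rejected at compile time. */
		re = pcre_compile("(.)\\1", PCRE_NO_AUTO_CAPTURE, &err, &erroffset, NULL);
		if (!re)
			printf("rejected with PCRE_NO_AUTO_CAPTURE: %s\n", err);
		return 0;
	}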
test_expect_success 'grep -G invalidpattern properly dies ' '
test_must_fail git grep -G "a["
'
test_must_fail git -c grep.patterntype=extended grep "a["
'
-test_expect_success LIBPCRE 'grep -P invalidpattern properly dies ' '
+test_expect_success PCRE 'grep -P invalidpattern properly dies ' '
test_must_fail git grep -P "a["
'
-test_expect_success LIBPCRE 'grep invalidpattern properly dies with grep.patternType=perl' '
+test_expect_success PCRE 'grep invalidpattern properly dies with grep.patternType=perl' '
test_must_fail git -c grep.patterntype=perl grep "a["
'
test_cmp expected actual
'
-test_expect_success LIBPCRE 'grep -G -F -E -P pattern' '
+test_expect_success PCRE 'grep -G -F -E -P pattern' '
echo "d0:0" >expected &&
git grep -G -F -E -P "[\d]" d0 >actual &&
test_cmp expected actual
'
-test_expect_success LIBPCRE 'grep pattern with grep.patternType=fixed, =basic, =extended, =perl' '
+test_expect_success PCRE 'grep pattern with grep.patternType=fixed, =basic, =extended, =perl' '
echo "d0:0" >expected &&
git \
-c grep.patterntype=fixed \
test_cmp expected actual
'
-test_expect_success LIBPCRE 'grep -P pattern with grep.patternType=fixed' '
+test_expect_success PCRE 'grep -P pattern with grep.patternType=fixed' '
echo "ab:a+b*c" >expected &&
git \
-c grep.patterntype=fixed \
space: line with leading space3
EOF
-test_expect_success LIBPCRE 'grep -E "^ "' '
+test_expect_success PCRE 'grep -E "^ "' '
git grep -E "^ " space >actual &&
test_cmp expected actual
'
-test_expect_success LIBPCRE 'grep -P "^ "' '
+test_expect_success PCRE 'grep -P "^ "' '
git grep -P "^ " space >actual &&
test_cmp expected actual
'
git grep -i "TILRAUN: HALLÓ HEIMUR!"
'
-test_expect_success GETTEXT_LOCALE,LIBPCRE 'grep pcre utf-8 icase' '
+test_expect_success GETTEXT_LOCALE,PCRE 'grep pcre utf-8 icase' '
git grep --perl-regexp "TILRAUN: H.lló Heimur!" &&
git grep --perl-regexp -i "TILRAUN: H.lló Heimur!" &&
git grep --perl-regexp -i "TILRAUN: H.LLÓ HEIMUR!"
'
-test_expect_success GETTEXT_LOCALE,LIBPCRE 'grep pcre utf-8 string with "+"' '
+test_expect_success GETTEXT_LOCALE,PCRE 'grep pcre utf-8 string with "+"' '
test_write_lines "TILRAUN: Hallóó Heimur!" >file2 &&
git add file2 &&
git grep -l --perl-regexp "TILRAUN: H.lló+ Heimur!" >actual &&
'
test_expect_success REGEX_LOCALE 'grep literal string, with -F' '
- git grep --debug -i -F "TILRAUN: Halló Heimur!" 2>&1 >/dev/null |
- grep fixed >debug1 &&
- test_write_lines "fixed TILRAUN: Halló Heimur!" >expect1 &&
- test_cmp expect1 debug1 &&
-
- git grep --debug -i -F "TILRAUN: HALLÓ HEIMUR!" 2>&1 >/dev/null |
- grep fixed >debug2 &&
- test_write_lines "fixed TILRAUN: HALLÓ HEIMUR!" >expect2 &&
- test_cmp expect2 debug2
+ git grep -i -F "TILRAUN: Halló Heimur!" &&
+ git grep -i -F "TILRAUN: HALLÓ HEIMUR!"
'
test_expect_success REGEX_LOCALE 'grep string with regex, with -F' '
- test_write_lines "^*TILR^AUN:.* \\Halló \$He[]imur!\$" >file &&
-
- git grep --debug -i -F "^*TILR^AUN:.* \\Halló \$He[]imur!\$" 2>&1 >/dev/null |
- grep fixed >debug1 &&
- test_write_lines "fixed \\^*TILR^AUN:\\.\\* \\\\Halló \$He\\[]imur!\\\$" >expect1 &&
- test_cmp expect1 debug1 &&
-
- git grep --debug -i -F "^*TILR^AUN:.* \\HALLÓ \$HE[]IMUR!\$" 2>&1 >/dev/null |
- grep fixed >debug2 &&
- test_write_lines "fixed \\^*TILR^AUN:\\.\\* \\\\HALLÓ \$HE\\[]IMUR!\\\$" >expect2 &&
- test_cmp expect2 debug2
+ test_write_lines "TILRAUN: Halló Heimur [abc]!" >file3 &&
+ git add file3 &&
+ git grep -i -F "TILRAUN: Halló Heimur [abc]!" file3
'
test_expect_success REGEX_LOCALE 'pickaxe -i on non-ascii' '
export LC_ALL
'
-test_expect_success GETTEXT_ISO_LOCALE,LIBPCRE 'grep pcre string' '
+test_expect_success GETTEXT_ISO_LOCALE,PCRE 'grep pcre string' '
git grep --perl-regexp -i "TILRAUN: H.lló Heimur!" &&
git grep --perl-regexp -i "TILRAUN: H.LLÓ HEIMUR!"
'
. ./test-lib.sh
test_expect_success 'setup directory structure and submodule' '
- echo "foobar" >a &&
+ echo "(1|2)d(3|4)" >a &&
mkdir b &&
- echo "bar" >b/b &&
+ echo "(3|4)" >b/b &&
git add a b &&
git commit -m "add a and b" &&
git init submodule &&
- echo "foobar" >submodule/a &&
+ echo "(1|2)d(3|4)" >submodule/a &&
git -C submodule add a &&
git -C submodule commit -m "add a" &&
git submodule add ./submodule &&
test_expect_success 'grep correctly finds patterns in a submodule' '
cat >expect <<-\EOF &&
- a:foobar
- b/b:bar
- submodule/a:foobar
+ a:(1|2)d(3|4)
+ b/b:(3|4)
+ submodule/a:(1|2)d(3|4)
EOF
- git grep -e "bar" --recurse-submodules >actual &&
+ git grep -e "(3|4)" --recurse-submodules >actual &&
test_cmp expect actual
'
test_expect_success 'grep and basic pathspecs' '
cat >expect <<-\EOF &&
- submodule/a:foobar
+ submodule/a:(1|2)d(3|4)
EOF
git grep -e. --recurse-submodules -- submodule >actual &&
test_expect_success 'grep and nested submodules' '
git init submodule/sub &&
- echo "foobar" >submodule/sub/a &&
+ echo "(1|2)d(3|4)" >submodule/sub/a &&
git -C submodule/sub add a &&
git -C submodule/sub commit -m "add a" &&
git -C submodule submodule add ./sub &&
git commit -m "updated submodule" &&
cat >expect <<-\EOF &&
- a:foobar
- b/b:bar
- submodule/a:foobar
- submodule/sub/a:foobar
+ a:(1|2)d(3|4)
+ b/b:(3|4)
+ submodule/a:(1|2)d(3|4)
+ submodule/sub/a:(1|2)d(3|4)
EOF
- git grep -e "bar" --recurse-submodules >actual &&
+ git grep -e "(3|4)" --recurse-submodules >actual &&
test_cmp expect actual
'
test_expect_success 'grep and multiple patterns' '
cat >expect <<-\EOF &&
- a:foobar
- submodule/a:foobar
- submodule/sub/a:foobar
+ a:(1|2)d(3|4)
+ submodule/a:(1|2)d(3|4)
+ submodule/sub/a:(1|2)d(3|4)
EOF
- git grep -e "bar" --and -e "foo" --recurse-submodules >actual &&
+ git grep -e "(3|4)" --and -e "(1|2)" --recurse-submodules >actual &&
test_cmp expect actual
'
test_expect_success 'grep and multiple patterns' '
cat >expect <<-\EOF &&
- b/b:bar
+ b/b:(3|4)
EOF
- git grep -e "bar" --and --not -e "foo" --recurse-submodules >actual &&
+ git grep -e "(3|4)" --and --not -e "(1|2)" --recurse-submodules >actual &&
test_cmp expect actual
'
test_expect_success 'basic grep tree' '
cat >expect <<-\EOF &&
- HEAD:a:foobar
- HEAD:b/b:bar
- HEAD:submodule/a:foobar
- HEAD:submodule/sub/a:foobar
+ HEAD:a:(1|2)d(3|4)
+ HEAD:b/b:(3|4)
+ HEAD:submodule/a:(1|2)d(3|4)
+ HEAD:submodule/sub/a:(1|2)d(3|4)
EOF
- git grep -e "bar" --recurse-submodules HEAD >actual &&
+ git grep -e "(3|4)" --recurse-submodules HEAD >actual &&
test_cmp expect actual
'
test_expect_success 'grep tree HEAD^' '
cat >expect <<-\EOF &&
- HEAD^:a:foobar
- HEAD^:b/b:bar
- HEAD^:submodule/a:foobar
+ HEAD^:a:(1|2)d(3|4)
+ HEAD^:b/b:(3|4)
+ HEAD^:submodule/a:(1|2)d(3|4)
EOF
- git grep -e "bar" --recurse-submodules HEAD^ >actual &&
+ git grep -e "(3|4)" --recurse-submodules HEAD^ >actual &&
test_cmp expect actual
'
test_expect_success 'grep tree HEAD^^' '
cat >expect <<-\EOF &&
- HEAD^^:a:foobar
- HEAD^^:b/b:bar
+ HEAD^^:a:(1|2)d(3|4)
+ HEAD^^:b/b:(3|4)
EOF
- git grep -e "bar" --recurse-submodules HEAD^^ >actual &&
+ git grep -e "(3|4)" --recurse-submodules HEAD^^ >actual &&
test_cmp expect actual
'
test_expect_success 'grep tree and pathspecs' '
cat >expect <<-\EOF &&
- HEAD:submodule/a:foobar
- HEAD:submodule/sub/a:foobar
+ HEAD:submodule/a:(1|2)d(3|4)
+ HEAD:submodule/sub/a:(1|2)d(3|4)
EOF
- git grep -e "bar" --recurse-submodules HEAD -- submodule >actual &&
+ git grep -e "(3|4)" --recurse-submodules HEAD -- submodule >actual &&
test_cmp expect actual
'
test_expect_success 'grep tree and pathspecs' '
cat >expect <<-\EOF &&
- HEAD:submodule/a:foobar
- HEAD:submodule/sub/a:foobar
+ HEAD:submodule/a:(1|2)d(3|4)
+ HEAD:submodule/sub/a:(1|2)d(3|4)
EOF
- git grep -e "bar" --recurse-submodules HEAD -- "submodule*a" >actual &&
+ git grep -e "(3|4)" --recurse-submodules HEAD -- "submodule*a" >actual &&
test_cmp expect actual
'
test_expect_success 'grep tree and more pathspecs' '
cat >expect <<-\EOF &&
- HEAD:submodule/a:foobar
+ HEAD:submodule/a:(1|2)d(3|4)
EOF
- git grep -e "bar" --recurse-submodules HEAD -- "submodul?/a" >actual &&
+ git grep -e "(3|4)" --recurse-submodules HEAD -- "submodul?/a" >actual &&
test_cmp expect actual
'
test_expect_success 'grep tree and more pathspecs' '
cat >expect <<-\EOF &&
- HEAD:submodule/sub/a:foobar
+ HEAD:submodule/sub/a:(1|2)d(3|4)
EOF
- git grep -e "bar" --recurse-submodules HEAD -- "submodul*/sub/a" >actual &&
+ git grep -e "(3|4)" --recurse-submodules HEAD -- "submodul*/sub/a" >actual &&
test_cmp expect actual
'
test_expect_success !MINGW 'grep recurse submodule colon in name' '
git init parent &&
test_when_finished "rm -rf parent" &&
- echo "foobar" >"parent/fi:le" &&
+ echo "(1|2)d(3|4)" >"parent/fi:le" &&
git -C parent add "fi:le" &&
git -C parent commit -m "add fi:le" &&
git init "su:b" &&
test_when_finished "rm -rf su:b" &&
- echo "foobar" >"su:b/fi:le" &&
+ echo "(1|2)d(3|4)" >"su:b/fi:le" &&
git -C "su:b" add "fi:le" &&
git -C "su:b" commit -m "add fi:le" &&
git -C parent commit -m "add submodule" &&
cat >expect <<-\EOF &&
- fi:le:foobar
- su:b/fi:le:foobar
+ fi:le:(1|2)d(3|4)
+ su:b/fi:le:(1|2)d(3|4)
EOF
- git -C parent grep -e "foobar" --recurse-submodules >actual &&
+ git -C parent grep -e "(1|2)d(3|4)" --recurse-submodules >actual &&
test_cmp expect actual &&
cat >expect <<-\EOF &&
- HEAD:fi:le:foobar
- HEAD:su:b/fi:le:foobar
+ HEAD:fi:le:(1|2)d(3|4)
+ HEAD:su:b/fi:le:(1|2)d(3|4)
EOF
- git -C parent grep -e "foobar" --recurse-submodules HEAD >actual &&
+ git -C parent grep -e "(1|2)d(3|4)" --recurse-submodules HEAD >actual &&
test_cmp expect actual
'
test_expect_success 'grep history with moved submodules' '
git init parent &&
test_when_finished "rm -rf parent" &&
- echo "foobar" >parent/file &&
+ echo "(1|2)d(3|4)" >parent/file &&
git -C parent add file &&
git -C parent commit -m "add file" &&
git init sub &&
test_when_finished "rm -rf sub" &&
- echo "foobar" >sub/file &&
+ echo "(1|2)d(3|4)" >sub/file &&
git -C sub add file &&
git -C sub commit -m "add file" &&
git -C parent commit -m "add submodule" &&
cat >expect <<-\EOF &&
- dir/sub/file:foobar
- file:foobar
+ dir/sub/file:(1|2)d(3|4)
+ file:(1|2)d(3|4)
EOF
- git -C parent grep -e "foobar" --recurse-submodules >actual &&
+ git -C parent grep -e "(1|2)d(3|4)" --recurse-submodules >actual &&
test_cmp expect actual &&
git -C parent mv dir/sub sub-moved &&
git -C parent commit -m "moved submodule" &&
cat >expect <<-\EOF &&
- file:foobar
- sub-moved/file:foobar
+ file:(1|2)d(3|4)
+ sub-moved/file:(1|2)d(3|4)
EOF
- git -C parent grep -e "foobar" --recurse-submodules >actual &&
+ git -C parent grep -e "(1|2)d(3|4)" --recurse-submodules >actual &&
test_cmp expect actual &&
cat >expect <<-\EOF &&
- HEAD^:dir/sub/file:foobar
- HEAD^:file:foobar
+ HEAD^:dir/sub/file:(1|2)d(3|4)
+ HEAD^:file:(1|2)d(3|4)
EOF
- git -C parent grep -e "foobar" --recurse-submodules HEAD^ >actual &&
+ git -C parent grep -e "(1|2)d(3|4)" --recurse-submodules HEAD^ >actual &&
test_cmp expect actual
'
test_expect_success 'grep using relative path' '
test_when_finished "rm -rf parent sub" &&
git init sub &&
- echo "foobar" >sub/file &&
+ echo "(1|2)d(3|4)" >sub/file &&
git -C sub add file &&
git -C sub commit -m "add file" &&
git init parent &&
- echo "foobar" >parent/file &&
+ echo "(1|2)d(3|4)" >parent/file &&
git -C parent add file &&
mkdir parent/src &&
- echo "foobar" >parent/src/file2 &&
+ echo "(1|2)d(3|4)" >parent/src/file2 &&
git -C parent add src/file2 &&
git -C parent submodule add ../sub &&
git -C parent commit -m "add files and submodule" &&
# From top works
cat >expect <<-\EOF &&
- file:foobar
- src/file2:foobar
- sub/file:foobar
+ file:(1|2)d(3|4)
+ src/file2:(1|2)d(3|4)
+ sub/file:(1|2)d(3|4)
EOF
- git -C parent grep --recurse-submodules -e "foobar" >actual &&
+ git -C parent grep --recurse-submodules -e "(1|2)d(3|4)" >actual &&
test_cmp expect actual &&
# Relative path to top
cat >expect <<-\EOF &&
- ../file:foobar
- file2:foobar
- ../sub/file:foobar
+ ../file:(1|2)d(3|4)
+ file2:(1|2)d(3|4)
+ ../sub/file:(1|2)d(3|4)
EOF
- git -C parent/src grep --recurse-submodules -e "foobar" -- .. >actual &&
+ git -C parent/src grep --recurse-submodules -e "(1|2)d(3|4)" -- .. >actual &&
test_cmp expect actual &&
# Relative path to submodule
cat >expect <<-\EOF &&
- ../sub/file:foobar
+ ../sub/file:(1|2)d(3|4)
EOF
- git -C parent/src grep --recurse-submodules -e "foobar" -- ../sub >actual &&
+ git -C parent/src grep --recurse-submodules -e "(1|2)d(3|4)" -- ../sub >actual &&
test_cmp expect actual
'
test_expect_success 'grep from a subdir' '
test_when_finished "rm -rf parent sub" &&
git init sub &&
- echo "foobar" >sub/file &&
+ echo "(1|2)d(3|4)" >sub/file &&
git -C sub add file &&
git -C sub commit -m "add file" &&
git init parent &&
mkdir parent/src &&
- echo "foobar" >parent/src/file &&
+ echo "(1|2)d(3|4)" >parent/src/file &&
git -C parent add src/file &&
git -C parent submodule add ../sub src/sub &&
git -C parent submodule add ../sub sub &&
# Verify grep from root works
cat >expect <<-\EOF &&
- src/file:foobar
- src/sub/file:foobar
- sub/file:foobar
+ src/file:(1|2)d(3|4)
+ src/sub/file:(1|2)d(3|4)
+ sub/file:(1|2)d(3|4)
EOF
- git -C parent grep --recurse-submodules -e "foobar" >actual &&
+ git -C parent grep --recurse-submodules -e "(1|2)d(3|4)" >actual &&
test_cmp expect actual &&
# Verify grep from a subdir works
cat >expect <<-\EOF &&
- file:foobar
- sub/file:foobar
+ file:(1|2)d(3|4)
+ sub/file:(1|2)d(3|4)
EOF
- git -C parent/src grep --recurse-submodules -e "foobar" >actual &&
+ git -C parent/src grep --recurse-submodules -e "(1|2)d(3|4)" >actual &&
test_cmp expect actual
'
test_incompatible_with_recurse_submodules --untracked
test_incompatible_with_recurse_submodules --no-index
+test_expect_success 'grep --recurse-submodules should pass the pattern type along' '
+ # Fixed
+ test_must_fail git grep -F --recurse-submodules -e "(.|.)[\d]" &&
+ test_must_fail git -c grep.patternType=fixed grep --recurse-submodules -e "(.|.)[\d]" &&
+
+ # Basic
+ git grep -G --recurse-submodules -e "(.|.)[\d]" >actual &&
+ cat >expect <<-\EOF &&
+ a:(1|2)d(3|4)
+ submodule/a:(1|2)d(3|4)
+ submodule/sub/a:(1|2)d(3|4)
+ EOF
+ test_cmp expect actual &&
+ git -c grep.patternType=basic grep --recurse-submodules -e "(.|.)[\d]" >actual &&
+ test_cmp expect actual &&
+
+ # Extended
+ git grep -E --recurse-submodules -e "(.|.)[\d]" >actual &&
+ cat >expect <<-\EOF &&
+ .gitmodules:[submodule "submodule"]
+ .gitmodules: path = submodule
+ .gitmodules: url = ./submodule
+ a:(1|2)d(3|4)
+ submodule/.gitmodules:[submodule "sub"]
+ submodule/a:(1|2)d(3|4)
+ submodule/sub/a:(1|2)d(3|4)
+ EOF
+ test_cmp expect actual &&
+ git -c grep.patternType=extended grep --recurse-submodules -e "(.|.)[\d]" >actual &&
+ test_cmp expect actual &&
+ git -c grep.extendedRegexp=true grep --recurse-submodules -e "(.|.)[\d]" >actual &&
+ test_cmp expect actual &&
+
+ # Perl
+ if test_have_prereq PCRE
+ then
+ git grep -P --recurse-submodules -e "(.|.)[\d]" >actual &&
+ cat >expect <<-\EOF &&
+ a:(1|2)d(3|4)
+ b/b:(3|4)
+ submodule/a:(1|2)d(3|4)
+ submodule/sub/a:(1|2)d(3|4)
+ EOF
+ test_cmp expect actual &&
+ git -c grep.patternType=perl grep --recurse-submodules -e "(.|.)[\d]" >actual &&
+ test_cmp expect actual
+ fi
+'
+
test_done
test_cmp expected-list actual-list
'
+test_expect_success $PREREQ 'invoke hook' '
+ mkdir -p .git/hooks &&
+
+ write_script .git/hooks/sendemail-validate <<-\EOF &&
+ # test that we have the correct environment variable, pwd, and
+ # argument
+ case "$GIT_DIR" in
+ *.git)
+ true
+ ;;
+ *)
+ false
+ ;;
+ esac &&
+ test -f 0001-add-master.patch &&
+ grep "add master" "$1"
+ EOF
+
+ mkdir subdir &&
+ (
+ # Test that it works even if we are not at the root of the
+ # working tree
+ cd subdir &&
+ git send-email \
+ --from="Example <nobody@example.com>" \
+ --to=nobody@example.com \
+ --smtp-server="$(pwd)/../fake.sendmail" \
+ ../0001-add-master.patch &&
+
+ # Verify error message when a patch is rejected by the hook
+ sed -e "s/add master/x/" ../0001-add-master.patch >../another.patch &&
+		test_must_fail git send-email \
+			--from="Example <nobody@example.com>" \
+			--to=nobody@example.com \
+			--smtp-server="$(pwd)/../fake.sendmail" \
+			../another.patch 2>err &&
+		test_i18ngrep "rejected by sendemail-validate hook" err
+ )
+'
+
+test_expect_success $PREREQ 'test that send-email works outside a repo' '
+ nongit git send-email \
+ --from="Example <nobody@example.com>" \
+ --to=nobody@example.com \
+ --smtp-server="$(pwd)/fake.sendmail" \
+ "$(pwd)/0001-add-master.patch"
+'
+
test_done
git config i18n.commitencoding ISO8859-1 &&
# use author and committer name in ISO-8859-1 to match it.
- . "$TEST_DIRECTORY"/t3901-8859-1.txt &&
+ . "$TEST_DIRECTORY"/t3901/8859-1.txt &&
test_tick &&
echo rosten >file &&
git commit -s -m den file &&
test_expect_success \
'encode(commit): utf8' \
- '. "$TEST_DIRECTORY"/t3901-utf8.txt &&
+ '. "$TEST_DIRECTORY"/t3901/utf8.txt &&
test_when_finished "GIT_AUTHOR_NAME=\"A U Thor\"" &&
test_when_finished "GIT_COMMITTER_NAME=\"C O Mitter\"" &&
echo "UTF-8" >> file &&
test_expect_success \
'encode(commit): iso-8859-1' \
- '. "$TEST_DIRECTORY"/t3901-8859-1.txt &&
+ '. "$TEST_DIRECTORY"/t3901/8859-1.txt &&
test_when_finished "GIT_AUTHOR_NAME=\"A U Thor\"" &&
test_when_finished "GIT_COMMITTER_NAME=\"C O Mitter\"" &&
echo "ISO-8859-1" >> file &&
fi
case "$test_failure" in
0)
- # Maybe print SKIP message
- if test -n "$skip_all" && test $test_count -gt 0
- then
- error "Can't use skip_all after running some tests"
- fi
- test -z "$skip_all" || skip_all=" # SKIP $skip_all"
-
if test $test_external_has_tap -eq 0
then
if test $test_remaining -gt 0
then
say_color pass "# passed all $msg"
fi
- say "1..$test_count$skip_all"
+
+ # Maybe print SKIP message
+ test -z "$skip_all" || skip_all="# SKIP $skip_all"
+ case "$test_count" in
+ 0)
+ say "1..$test_count${skip_all:+ $skip_all}"
+ ;;
+ *)
+ test -z "$skip_all" ||
+ say_color warn "$skip_all"
+ say "1..$test_count"
+ ;;
+ esac
fi
if test -z "$debug"
( COLUMNS=1 && test $COLUMNS = 1 ) && test_set_prereq COLUMNS_CAN_BE_1
test -z "$NO_PERL" && test_set_prereq PERL
+test -z "$NO_PTHREADS" && test_set_prereq PTHREADS
test -z "$NO_PYTHON" && test_set_prereq PYTHON
-test -n "$USE_LIBPCRE" && test_set_prereq LIBPCRE
+test -n "$USE_LIBPCRE1" && test_set_prereq PCRE
test -z "$NO_GETTEXT" && test_set_prereq GETTEXT
# Can we rely on git's output in the C locale?
int i;
init_tree_desc(&t, NULL, 0UL);
- strbuf_init(result_path, 0);
strbuf_addstr(&namebuf, name);
hashcpy(current_tree_sha1, tree_sha1);
const char *new_id,
struct unpack_trees_options *o)
{
+ unsigned flags = SUBMODULE_MOVE_HEAD_DRY_RUN;
const struct submodule *sub = submodule_from_ce(ce);
if (!sub)
return 0;
+ if (o->reset)
+ flags |= SUBMODULE_MOVE_HEAD_FORCE;
+
switch (sub->update_strategy.type) {
case SM_UPDATE_UNSPECIFIED:
case SM_UPDATE_CHECKOUT:
- if (submodule_move_head(ce->name, old_id, new_id, SUBMODULE_MOVE_HEAD_DRY_RUN))
+ if (submodule_move_head(ce->name, old_id, new_id, flags))
return o->gently ? -1 :
add_rejected_path(o, ERROR_WOULD_LOSE_SUBMODULE, ce->name);
return 0;
case SM_UPDATE_CHECKOUT:
case SM_UPDATE_REBASE:
case SM_UPDATE_MERGE:
+ /* state.force is set at the caller. */
submodule_move_head(ce->name, "HEAD", NULL,
SUBMODULE_MOVE_HEAD_FORCE);
break;
struct cache_entry **cache_end;
int dtype = DT_DIR;
int ret = is_excluded_from_list(prefix->buf, prefix->len,
- basename, &dtype, el);
+ basename, &dtype, el, &the_index);
int rc;
strbuf_addch(prefix, '/');
/* Non-directory */
dtype = ce_to_dtype(ce);
ret = is_excluded_from_list(ce->name, ce_namelen(ce),
- name, &dtype, el);
+ name, &dtype, el, &the_index);
if (ret < 0)
ret = defval;
if (ret > 0)
o->skip_sparse_checkout = 1;
if (!o->skip_sparse_checkout) {
char *sparse = git_pathdup("info/sparse-checkout");
- if (add_excludes_from_file_to_list(sparse, "", 0, &el, 0) < 0)
+ if (add_excludes_from_file_to_list(sparse, "", 0, &el, NULL) < 0)
o->skip_sparse_checkout = 1;
else
o->el = ⪙
WRITE_TREE_SILENT |
WRITE_TREE_REPAIR);
}
+ move_index_extensions(&o->result, o->dst_index);
discard_index(o->dst_index);
*o->dst_index = o->result;
} else {
memset(&d, 0, sizeof(d));
if (o->dir)
d.exclude_per_dir = o->dir->exclude_per_dir;
- i = read_directory(&d, pathbuf, namelen+1, NULL);
+ i = read_directory(&d, &the_index, pathbuf, namelen+1, NULL);
if (i)
return o->gently ? -1 :
add_rejected_path(o, ERROR_NOT_UPTODATE_DIR, ce->name);
return 0;
if (o->dir &&
- is_excluded(o->dir, name, &dtype))
+ is_excluded(o->dir, &the_index, name, &dtype))
/*
* ce->name is explicitly excluded, so it is Ok to
* overwrite it.
#include "git-compat-util.h"
#include "cache.h"
-static FILE *error_handle;
-
void vreportf(const char *prefix, const char *err, va_list params)
{
char msg[4096];
- FILE *fh = error_handle ? error_handle : stderr;
char *p;
vsnprintf(msg, sizeof(msg), err, params);
if (iscntrl(*p) && *p != '\t' && *p != '\n')
*p = '?';
}
- fprintf(fh, "%s%s\n", prefix, msg);
+ fprintf(stderr, "%s%s\n", prefix, msg);
}
static NORETURN void usage_builtin(const char *err, va_list params)
die_is_recursing = routine;
}
-void set_error_handle(FILE *fh)
-{
- error_handle = fh;
-}
-
void NORETURN usagef(const char *err, ...)
{
va_list params;
warn_routine(warn, params);
va_end(params);
}
+
+static NORETURN void BUG_vfl(const char *file, int line, const char *fmt, va_list params)
+{
+ char prefix[256];
+
+ /* truncation via snprintf is OK here */
+ if (file)
+ snprintf(prefix, sizeof(prefix), "BUG: %s:%d: ", file, line);
+ else
+ snprintf(prefix, sizeof(prefix), "BUG: ");
+
+ vreportf(prefix, fmt, params);
+ abort();
+}
+
+#ifdef HAVE_VARIADIC_MACROS
+NORETURN void BUG_fl(const char *file, int line, const char *fmt, ...)
+{
+ va_list ap;
+ va_start(ap, fmt);
+ BUG_vfl(file, line, fmt, ap);
+ va_end(ap);
+}
+#else
+NORETURN void BUG(const char *fmt, ...)
+{
+ va_list ap;
+ va_start(ap, fmt);
+ BUG_vfl(NULL, 0, fmt, ap);
+ va_end(ap);
+}
+#endif
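
A minimal illustration of how the new helper is meant to be called (editor's
sketch, not part of the patch; it assumes the usual BUG() wrapper macro in
git-compat-util.h that supplies __FILE__ and __LINE__ on platforms with
variadic macros):

	static void check_refcount(int refcount)
	{
		/*
		 * An internal invariant violation: the message is printed
		 * with a "BUG: file:line: " prefix via vreportf() and the
		 * process abort()s, so the failure cannot be ignored.
		 */
		if (refcount < 0)
			BUG("refcount went negative: %d", refcount);
	}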
/* The env would be set for the superproject. */
get_common_dir_noenv(&sb, submodule_gitdir);
+ free(submodule_gitdir);
/*
* The check below is only known to be good for repository format
/* See if there is any file inside the worktrees directory. */
dir = opendir(sb.buf);
strbuf_release(&sb);
- free(submodule_gitdir);
if (!dir)
return 0;
dir.untracked = the_index.untracked;
setup_standard_excludes(&dir);
- fill_directory(&dir, &s->pathspec);
+ fill_directory(&dir, &the_index, &s->pathspec);
for (i = 0; i < dir.nr; i++) {
struct dir_entry *ent = dir.entries[i];
status_printf_ln(s, GIT_COLOR_NORMAL, "%s", "");
}
-void wt_status_truncate_message_at_cut_line(struct strbuf *buf)
+size_t wt_status_locate_end(const char *s, size_t len)
{
const char *p;
struct strbuf pattern = STRBUF_INIT;
strbuf_addf(&pattern, "\n%c %s", comment_line_char, cut_line);
- if (starts_with(buf->buf, pattern.buf + 1))
- strbuf_setlen(buf, 0);
- else if ((p = strstr(buf->buf, pattern.buf)))
- strbuf_setlen(buf, p - buf->buf + 1);
+ if (starts_with(s, pattern.buf + 1))
+ len = 0;
+ else if ((p = strstr(s, pattern.buf)))
+ len = p - s + 1;
strbuf_release(&pattern);
+ return len;
}
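
For illustration (editor's sketch, not part of the patch): a caller that used
to have its strbuf truncated in place by wt_status_truncate_message_at_cut_line()
now asks where the message ends and decides for itself what to do with the
answer, e.g.

	static void truncate_at_cut_line(struct strbuf *buf)
	{
		/* keep only the part of the buffer before the scissors line */
		size_t len = wt_status_locate_end(buf->buf, buf->len);

		if (len < buf->len)
			strbuf_setlen(buf, len);
	}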
void wt_status_add_cut_line(FILE *fp)
static int split_commit_in_progress(struct wt_status *s)
{
int split_in_progress = 0;
- char *head = read_line_from_git_path("HEAD");
- char *orig_head = read_line_from_git_path("ORIG_HEAD");
- char *rebase_amend = read_line_from_git_path("rebase-merge/amend");
- char *rebase_orig_head = read_line_from_git_path("rebase-merge/orig-head");
+ char *head, *orig_head, *rebase_amend, *rebase_orig_head;
- if (!head || !orig_head || !rebase_amend || !rebase_orig_head ||
+ if ((!s->amend && !s->nowarn && !s->workdir_dirty) ||
!s->branch || strcmp(s->branch, "HEAD"))
- return split_in_progress;
+ return 0;
- if (!strcmp(rebase_amend, rebase_orig_head)) {
- if (strcmp(head, rebase_amend))
- split_in_progress = 1;
- } else if (strcmp(orig_head, rebase_orig_head)) {
- split_in_progress = 1;
- }
+ head = read_line_from_git_path("HEAD");
+ orig_head = read_line_from_git_path("ORIG_HEAD");
+ rebase_amend = read_line_from_git_path("rebase-merge/amend");
+ rebase_orig_head = read_line_from_git_path("rebase-merge/orig-head");
- if (!s->amend && !s->nowarn && !s->workdir_dirty)
- split_in_progress = 0;
+ if (!head || !orig_head || !rebase_amend || !rebase_orig_head)
+ ; /* fall through, no split in progress */
+ else if (!strcmp(rebase_amend, rebase_orig_head))
+ split_in_progress = !!strcmp(head, rebase_amend);
+ else if (strcmp(orig_head, rebase_orig_head))
+ split_in_progress = 1;
free(head);
free(orig_head);
free(rebase_amend);
free(rebase_orig_head);
+
return split_in_progress;
}
abbrev_sha1_in_line(&line);
string_list_append(lines, line.buf);
}
+ fclose(f);
return 0;
}
unsigned char cherry_pick_head_sha1[20];
};
-void wt_status_truncate_message_at_cut_line(struct strbuf *);
+size_t wt_status_locate_end(const char *s, size_t len);
void wt_status_add_cut_line(FILE *fp);
void wt_status_prepare(struct wt_status *s);
void wt_status_print(struct wt_status *s);