--- /dev/null
+Git v1.7.6.2 Release Notes
+==========================
+
+Fixes since v1.7.6.1
+--------------------
+
+ * v1.7.6.1 broke "git push --quiet"; it used to be a no-op against an old
+ version of Git running on the other end, but v1.7.6.1 made it abort.
--- /dev/null
+Git v1.7.6.3 Release Notes
+==========================
+
+Fixes since v1.7.6.2
+--------------------
+
+ * "git -c var=value subcmd" misparsed the custom configuration when
+ value contained an equal sign.
+
+ * "git fetch" had a major performance regression, wasting many
+ needless cycles in a repository where there is no submodules
+ present. This was especially bad, when there were many refs.
+
+ * "git reflog $refname" did not default to the "show" subcommand as
+ the documentation advertised the command to do.
+
+ * "git reset" did not leave meaningful log message in the reflog.
+
+ * "git status --ignored" did not show ignored items when there is no
+ untracked items.
+
+ * "git tag --contains $commit" was unnecessarily inefficient.
+
+Also contains minor fixes and documentation updates.
--- /dev/null
+Git v1.7.6.4 Release Notes
+==========================
+
+Fixes since v1.7.6.3
+--------------------
+
+ * The error reporting logic of "git am" when the command is fed a file
+ whose mail-storage format is unknown was fixed.
+
+ * "git branch --set-upstream @{-1} foo" did not expand @{-1} correctly.
+
+ * "git check-ref-format --print" used to parrot a candidate string that
+ began with a slash (e.g. /refs/heads/master) without stripping it, to make
+ the result a suitably normalized string the caller can append to "$GIT_DIR/".
+
+ * "git clone" failed to clone locally from a ".git" file that itself
+ is not a directory but is a pointer to one.
+
+ * "git clone" from a local repository that borrows from another
+ object store using a relative path in its objects/info/alternates
+ file did not adjust the alternates in the resulting repository.
+
+ * "git describe --dirty" did not refresh the index before checking the
+ state of the working tree files.
+
+ * "git ls-files ../$path" that is run from a subdirectory reported errors
+ incorrectly when there is no such path that matches the given pathspec.
+
+ * "git mergetool" could loop forever prompting when nothing can be read
+ from the standard input.
+
+Also contains minor fixes and documentation updates.
* Interix, Cygwin and Minix ports got updated.
- * Various updates git-p4 (in contrib/) and "git fast-import".
+ * Various updates to git-p4 (in contrib/), fast-import, and git-svn.
* Gitweb learned to read from /etc/gitweb-common.conf when it exists,
before reading from gitweb_config.perl or from /etc/gitweb.conf
platforms with 64-bit long, which has been corrected.
* Git now recognizes loose objects written by other implementations that
- uses non-standard window size for zlib deflation (e.g. Agit running on
+ use a non-standard window size for zlib deflation (e.g. Agit running on
Android with 4kb window). We used to reject anything that was not
deflated with 32kb window.
been improved, especially when a command that is not built-in was
involved.
- * "git am" learned to pass "--exclude=<path>" option through to underlying
+ * "git am" learned to pass the "--exclude=<path>" option through to underlying
"git apply".
- * You can now feed many empty lines before feeding a mbox file to
+ * You can now feed many empty lines before feeding an mbox file to
"git am".
* "git archive" can be told to pass the output to gzip compression and
produce "archive.tar.gz".
- * "git bisect" can be used in a bare repository (provided if the test
+ * "git bisect" can be used in a bare repository (provided that the test
you perform per each iteration does not need a working tree, of
course).
* The length of abbreviated object names in "git branch -v" output
- now honors core.abbrev configuration variable.
+ now honors the core.abbrev configuration variable.
* "git check-attr" can take relative paths from the command line.
- * "git check-attr" learned "--all" option to list the attributes for a
+ * "git check-attr" learned an "--all" option to list the attributes for a
given path.
* "git checkout" (both the code to update the files upon checking out a
- different branch, the code to checkout specific set of files) learned
+ different branch and the code to checkout a specific set of files) learned
to stream the data from object store when possible, without having to
- read the entire contents of a file in memory first. An earlier round
+ read the entire contents of a file into memory first. An earlier round
of this code that is not in any released version had a large leak but
now it has been plugged.
- * "git clone" can now take "--config key=value" option to set the
+ * "git clone" can now take a "--config key=value" option to set the
repository configuration options that affect the initial checkout.
* "git commit <paths>..." now lets you feed relative pathspecs that
- refer outside your current subdirectory.
+ refer to paths outside your current subdirectory.
- * "git diff --stat" learned --stat-count option to limit the output of
- diffstat report.
+ * "git diff --stat" learned a --stat-count option to limit the output of
+ a diffstat report.
- * "git diff" learned "--histogram" option, to use a different diff
+ * "git diff" learned a "--histogram" option to use a different diff
generation machinery stolen from jgit, which might give better
performance.
+ * "git diff" had a weird worst case behaviour that can be triggered
+ when comparing files with potentially many places that could match.
+
* "git fetch", "git push" and friends no longer show connection
- errors for addresses that couldn't be connected when at least one
+ errors for addresses that couldn't be connected to when at least one
address succeeds (this is arguably a regression but a deliberate
one).
- * "git grep" learned --break and --heading options, to let users mimic
- output format of "ack".
+ * "git grep" learned "--break" and "--heading" options, to let users mimic
+ the output format of "ack".
- * "git grep" learned "-W" option that shows wider context using the same
+ * "git grep" learned a "-W" option that shows wider context using the same
logic used by "git diff" to determine the hunk header.
+ * Invoking the low-level "git http-fetch" without "-a" option (which
+ git itself never did---normal users should not have to worry about
+ this) is now deprecated.
+
* The "--decorate" option to "git log" and its family learned to
highlight grafted and replaced commits.
* "git rebase master topci" no longer spews usage hints after giving
- "fatal: no such branch: topci" error message.
+ the "fatal: no such branch: topci" error message.
+
+ * The recursive merge strategy implementation got a fairly large
+ fix for many corner cases that rarely arise in real-world
+ projects (it has been verified that none of the 16000+ merges in
+ the Linux kernel history back to v2.6.12 is affected by the
+ corner-case bugs this update fixes).
- * "git stash" learned --include-untracked option.
+ * "git stash" learned an "--include-untracked option".
* "git submodule update" used to stop at the first error updating a
submodule; it now goes on to update other submodules that can be
updated, and reports the ones with errors at the end.
- * "git upload-pack" and "git receive-pack" learned to pretend only a
+ * "git push" can be told with the "--recurse-submodules=check" option to
+ refuse pushing of the supermodule, if any of its submodules'
+ commits hasn't been pushed out to their remotes.
+
+ * "git upload-pack" and "git receive-pack" learned to pretend that only a
subset of the refs exist in a repository. This may help a site to
put many tiny repositories into one repository (this would not be
useful for larger repositories as repacking would be problematic).
that is more efficient in reading objects in packfiles.
* test scripts for gitweb tried to run even when CGI-related perl modules
- are not installed; it now exits early when they are unavailable.
+ are not installed; they now exit early when the latter are unavailable.
Also contains various documentation updates and minor miscellaneous
changes.
Fixes since v1.7.6
------------------
-Unless otherwise noted, all the fixes in 1.7.6.X maintenance track are
+Unless otherwise noted, all fixes in the 1.7.6.X maintenance track are
included in this release.
- * "git branch --set-upstream @{-1} foo" did not expand @{-1} correctly.
- (merge e9d4f74 mg/branch-set-upstream-previous later to 'maint').
-
* "git branch -m" and "git checkout -b" incorrectly allowed the tip
of the branch that is currently checked out updated.
- (merge 55c4a67 ci/forbid-unwanted-current-branch-update later to 'maint').
-
- * "git clone" failed to clone locally from a ".git" file that itself
- is not a directory but is a pointer to one.
- (merge 9b0ebc7 nd/maint-clone-gitdir later to 'maint').
-
- * "git clone" from a local repository that borrows from another
- object store using a relative path in its objects/info/alternates
- file did not adjust the alternates in the resulting repository.
- (merge e6baf4a1 jc/maint-clone-alternates later to 'maint').
-
- * "git describe --dirty" did not refresh the index before checking the
- state of the working tree files.
- (cherry-pick bb57148 ac/describe-dirty-refresh later to 'maint').
-
- * "git ls-files ../$path" that is run from a subdirectory reported errors
- incorrectly when there is no such path that matches the given pathspec.
- (merge 0f64bfa cb/maint-ls-files-error-report later to 'maint').
-
---
-exec >/var/tmp/1
-echo O=$(git describe master)
-O=v1.7.6.1-415-g284daf2
-git log --first-parent --oneline $O..master
-echo
-git shortlog --no-merges ^maint ^$O master
(2) Generate your patch using git tools out of your commits.
-git based diff tools (git, Cogito, and StGIT included) generate
-unidiff which is the preferred format.
+git-based diff tools generate unidiff, which is the preferred format.
You do not have to be afraid to use -M option to "git diff" or
"git format-patch", if your patch involves file renames. The
when its superproject retrieves a commit that updates the submodule's
reference.
+fetch.fsckObjects::
+ If it is set to true, git-fetch-pack will check all fetched
+ objects. It will abort in the case of a malformed object or a
+ broken link. The result of an abort is only dangling objects.
+ Defaults to false. If not set, the value of `transfer.fsckObjects`
+ is used instead.
+
fetch.unpackLimit::
If the number of objects fetched over the git native
transfer is below this
If it is set to true, git-receive-pack will check all received
objects. It will abort in the case of a malformed object or a
 broken link. The result of an abort is only dangling objects.
- Defaults to false.
+ Defaults to false. If not set, the value of `transfer.fsckObjects`
+ is used instead.
receive.unpackLimit::
If the number of objects received in a push is below this
archiving user's umask will be used instead. See umask(2) and
linkgit:git-archive[1].
+transfer.fsckObjects::
+ When `fetch.fsckObjects` or `receive.fsckObjects` are
+ not set, the value of this variable is used instead.
+ Defaults to false.
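++
+A minimal illustration (assuming you want strict checking for objects
+received via push, but not for objects fetched) would be:
++
+------------
+$ git config transfer.fsckObjects true
+$ git config fetch.fsckObjects false
+------------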
+
transfer.unpackLimit::
When `fetch.unpackLimit` or `receive.unpackLimit` are
not set, the value of this variable is used instead.
--------
[verse]
'git cherry-pick' [--edit] [-n] [-m parent-number] [-s] [-x] [--ff] <commit>...
+'git cherry-pick' --reset
+'git cherry-pick' --continue
DESCRIPTION
-----------
Pass the merge strategy-specific option through to the
merge strategy. See linkgit:git-merge[1] for details.
+SEQUENCER SUBCOMMANDS
+---------------------
+include::sequencer.txt[]
+
EXAMPLES
--------
`git cherry-pick master`::
-e <pattern>::
--exclude=<pattern>::
- Specify special exceptions to not be cleaned. Each <pattern> is
- the same form as in $GIT_DIR/info/excludes and this option can be
- given multiple times.
+ In addition to those found in .gitignore (per directory) and
+ $GIT_DIR/info/exclude, also consider these patterns to be in the
+ set of the ignore rules in effect.
-x::
- Don't use the ignore rules. This allows removing all untracked
+ Don't use the standard ignore rules read from .gitignore (per
+ directory) and $GIT_DIR/info/exclude, but do still use the ignore
+ rules given with `-e` options. This allows removing all untracked
files, including build products. This can be used (possibly in
conjunction with 'git reset') to create a pristine
working directory to test a clean build.
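++
+As an illustration (the pattern is a placeholder), the following removes
+all untracked and ignored files and directories except those matching
+the `-e` pattern:
++
+------------
+$ git clean -f -d -x -e '*.local'
+------------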
Listen on an alternative port. Incompatible with '--inetd' option.
--init-timeout=<n>::
- Timeout between the moment the connection is established and the
- client request is received (typically a rather low value, since
+ Timeout (in seconds) between the moment the connection is established
+ and the client request is received (typically a rather low value, since
that should be basically immediate).
--timeout=<n>::
- Timeout for specific client sub-requests. This includes the time
- it takes for the server to process the sub-request and the time spent
- waiting for the next client's request.
+ Timeout (in seconds) for specific client sub-requests. This includes
+ the time it takes for the server to process the sub-request and the
+ time spent waiting for the next client's request.
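++
+One possible invocation (numbers and path chosen only for illustration),
+dropping slow-connecting clients quickly while allowing long-running
+requests:
++
+------------
+$ git daemon --init-timeout=5 --timeout=300 --export-all --base-path=/srv/git
+------------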
--max-connections=<n>::
Maximum number of concurrent clients, defaults to 32. Set it to
`committer`, and `tagger`) can be suffixed with `name`, `email`,
and `date` to extract the named component.
-The first line of the message in a commit and tag object is
-`subject`, the remaining lines are `body`. The whole message
-is `contents`.
+The complete message in a commit and tag object is `contents`.
+Its first line is `contents:subject`, the remaining lines
+are `contents:body` and the optional GPG signature
+is `contents:signature`.
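+
+For example (a sketch; output depends on the repository), the subject line
+associated with each tag can be listed with:
+
+------------
+$ git for-each-ref --format='%(refname:short) %(contents:subject)' refs/tags
+------------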
For sorting purposes, fields with numeric values sort in numeric
order (`objectsize`, `authordate`, `committerdate`, `taggerdate`).
--to=<email>::
Add a `To:` header to the email headers. This is in addition
to any configured headers, and may be used multiple times.
+ The negated form `--no-to` discards all `To:` headers added so
+ far (from config or command line).
--cc=<email>::
Add a `Cc:` header to the email headers. This is in addition
to any configured headers, and may be used multiple times.
+ The negated form `--no-cc` discards all `Cc:` headers added so
+ far (from config or command line).
--add-header=<header>::
Add an arbitrary header to the email headers. This is in addition
to any configured headers, and may be used multiple times.
- For example, `--add-header="Organization: git-foo"`
+ For example, `--add-header="Organization: git-foo"`.
+ The negated form `--no-add-header` discards *all* (`To:`,
+ `Cc:`, and custom) headers added so far from config or command
+ line.
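++
+One possible use (the address is a placeholder) is to drop any recipients
+picked up from the configuration and name a single recipient instead:
++
+------------
+$ git format-patch --no-to --to=maintainer@example.org -1
+------------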
--cover-letter::
In addition to the patches, generate a cover letter file
-C <object>::
--reuse-message=<object>::
- Take the note message from the given blob object (for
- example, another note).
+ Take the given blob object (for example, another note) as the
+ note message. (Use `git notes copy <object>` instead to
+ copy notes between objects.)
-c <object>::
--reedit-message=<object>::
$ git notes --ref=built add -C "$blob" HEAD
------------
+(You cannot simply use `git notes --ref=built add -F a.out HEAD`
+because that is not binary-safe.)
Of course, it doesn't make much sense to display non-text-format notes
with 'git log', so if you use such notes, you'll probably need to write
some special-purpose tools to do something useful with them.
is specified. This flag forces progress status even if the
standard error stream is not directed to a terminal.
+--recurse-submodules=check::
+ Check whether all submodule commits used by the revisions to be
+ pushed are available on a remote tracking branch. Otherwise the
+ push will be aborted and the command will exit with non-zero status.
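++
+A sketch of typical use (remote and branch names are placeholders):
++
+------------
+$ git push --recurse-submodules=check origin master
+------------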
+
+
include::urls-remotes.txt[]
OUTPUT
SYNOPSIS
--------
[verse]
-'git-receive-pack' [--quiet] <directory>
+'git-receive-pack' <directory>
DESCRIPTION
-----------
OPTIONS
-------
---quiet::
- Print only error messages.
-
<directory>::
The repository to sync into.
git, there is no need to re-link git to add a new helper, nor any
need to link the helper with the implementation of git.
-Every helper must support the "capabilities" command, which git will
-use to determine what other commands the helper will accept. Other
-commands generally concern facilities like discovering and updating
-remote refs, transporting objects between the object database and
-the remote repository, and updating the local object store.
-
-Helpers supporting the 'fetch' capability can discover refs from the
-remote repository and transfer objects reachable from those refs to
-the local object store. Helpers supporting the 'push' capability can
-transfer local objects to the remote repository and update remote refs.
+Every helper must support the "capabilities" command, which git
+uses to determine what other commands the helper will accept. Those
+other commands can be used to discover and update remote refs,
+transport objects between the object database and the remote repository,
+and update the local object store.
Git comes with a "curl" family of remote helpers, that handle various
transport protocols, such as 'git-remote-http', 'git-remote-https',
'git-remote-ftp' and 'git-remote-ftps'. They implement the capabilities
'fetch', 'option', and 'push'.
+INPUT FORMAT
+------------
+
+Git sends the remote helper a list of commands on standard input, one
+per line. The first command is always the 'capabilities' command, in
+response to which the remote helper must print a list of the
+capabilities it supports (see below) followed by a blank line. The
+response to the capabilities command determines what commands Git uses
+in the remainder of the command stream.
+
+The command stream is terminated by a blank line. In some cases
+(indicated in the documentation of the relevant commands), this blank
+line is followed by a payload in some other protocol (e.g., the pack
+protocol), while in others it indicates the end of input.
+
+Capabilities
+~~~~~~~~~~~~
+
+Each remote helper is expected to support only a subset of commands.
+The operations a helper supports are declared to git in the response
+to the `capabilities` command (see COMMANDS, below).
+
+'option'::
+ For specifying settings like `verbosity` (how much output to
+ write to stderr) and `depth` (how much history is wanted in the
+ case of a shallow clone) that affect how other commands are
+ carried out.
+
+'connect'::
+ For fetching and pushing using git's native packfile protocol
+ that requires a bidirectional, full-duplex connection.
+
+'push'::
+ For listing remote refs and pushing specified objects from the
+ local object store to remote refs.
+
+'fetch'::
+ For listing remote refs and fetching the associated history to
+ the local object store.
+
+'import'::
+ For listing remote refs and fetching the associated history as
+ a fast-import stream.
+
+'refspec' <refspec>::
+ This modifies the 'import' capability, allowing the produced
+ fast-import stream to modify refs in a private namespace
+ instead of writing to refs/heads or refs/remotes directly.
+ It is recommended that all importers providing the 'import'
+ capability use this.
++
+A helper advertising the capability
+`refspec refs/heads/{asterisk}:refs/svn/origin/branches/{asterisk}`
+is saying that, when it is asked to `import refs/heads/topic`, the
+stream it outputs will update the `refs/svn/origin/branches/topic`
+ref.
++
+This capability can be advertised multiple times. The first
+applicable refspec takes precedence. The left-hand side of refspecs
+advertised with this capability must cover all refs reported by
+the list command. If no 'refspec' capability is advertised,
+there is an implied `refspec {asterisk}:{asterisk}`.
+
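+As a toy sketch (the helper name, refspec and refs are made up), a shell
+helper that advertises only 'import' could start out like this; a real
+helper must also implement the commands the capability implies, such as
+'list' and 'import':
+
+------------
+#!/bin/sh
+# git runs the helper with the remote's name and, optionally, its URL
+# as arguments, and feeds commands on standard input.
+while read cmd; do
+	case "$cmd" in
+	capabilities)
+		echo import
+		echo "refspec refs/heads/*:refs/testgit/origin/heads/*"
+		echo		# a blank line ends the capabilities response
+		;;
+	"")
+		exit 0		# a blank line ends the command stream
+		;;
+	esac
+done
+------------
+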
+Capabilities for Pushing
+~~~~~~~~~~~~~~~~~~~~~~~~
+'connect'::
+ Can attempt to connect to 'git receive-pack' (for pushing),
+ 'git upload-pack', etc for communication using the
+ packfile protocol.
++
+Supported commands: 'connect'.
+
+'push'::
+ Can discover remote refs and push local commits and the
+ history leading up to them to new or existing remote refs.
++
+Supported commands: 'list for-push', 'push'.
+
+If a helper advertises both 'connect' and 'push', git will use
+'connect' if possible and fall back to 'push' if the helper requests
+so when connecting (see the 'connect' command under COMMANDS).
+
+Capabilities for Fetching
+~~~~~~~~~~~~~~~~~~~~~~~~~
+'connect'::
+ Can try to connect to 'git upload-pack' (for fetching),
+ 'git receive-pack', etc for communication using the
+ packfile protocol.
++
+Supported commands: 'connect'.
+
+'fetch'::
+ Can discover remote refs and transfer objects reachable from
+ them to the local object store.
++
+Supported commands: 'list', 'fetch'.
+
+'import'::
+ Can discover remote refs and output objects reachable from
+ them as a stream in fast-import format.
++
+Supported commands: 'list', 'import'.
+
+If a helper advertises 'connect', git will use it if possible and
+fall back to another capability if the helper requests so when
+connecting (see the 'connect' command under COMMANDS).
+When choosing between 'fetch' and 'import', git prefers 'fetch'.
+Other frontends may have some other order of preference.
+
+'refspec' <refspec>::
+ This modifies the 'import' capability.
++
+A helper advertising
+`refspec refs/heads/{asterisk}:refs/svn/origin/branches/{asterisk}`
+in its capabilities is saying that, when it handles
+`import refs/heads/topic`, the stream it outputs will update the
+`refs/svn/origin/branches/topic` ref.
++
+This capability can be advertised multiple times. The first
+applicable refspec takes precedence. The left-hand side of refspecs
+advertised with this capability must cover all refs reported by
+the list command. If no 'refspec' capability is advertised,
+there is an implied `refspec {asterisk}:{asterisk}`.
+
INVOCATION
----------
'push' +<src>:<dst>::
Pushes the given local <src> commit or branch to the
remote branch described by <dst>. A batch sequence of
- one or more push commands is terminated with a blank line.
+ one or more 'push' commands is terminated with a blank line
+ (if there is only one reference to push, a single 'push' command
+ is followed by a blank line). For example, the following would
+ be two batches of 'push', the first asking the remote-helper
+ to push the local ref 'master' to the remote ref 'master' and
+ the local 'HEAD' to the remote 'branch', and the second
+ asking to push ref 'foo' to ref 'bar' (forced update requested
+ by the '+').
++
+------------
+push refs/heads/master:refs/heads/master
+push HEAD:refs/heads/branch
+\n
+push +refs/heads/foo:refs/heads/bar
+\n
+------------
+
Zero or more protocol options may be entered after the last 'push'
command, before the batch's terminating blank line.
Especially useful for interoperability with a foreign versioning
system.
+
+Just like 'push', a batch sequence of one or more 'import' commands is
+terminated with a blank line. For each batch of 'import', the remote
+helper should produce a fast-import stream terminated by a 'done'
+command.
++
Supported if the helper has the "import" capability.
'connect' <service>::
Additional commands may be supported, as may be determined from
capabilities reported by the helper.
-CAPABILITIES
-------------
-
-'fetch'::
-'option'::
-'push'::
-'import'::
-'connect'::
- This helper supports the corresponding command with the same name.
-
-'refspec' 'spec'::
- When using the import command, expect the source ref to have
- been written to the destination ref. The earliest applicable
- refspec takes precedence. For example
- "refs/heads/{asterisk}:refs/svn/origin/branches/{asterisk}" means
- that, after an "import refs/heads/name", the script has written to
- refs/svn/origin/branches/name. If this capability is used at
- all, it must cover all refs reported by the list command; if
- it is not used, it is effectively "{asterisk}:{asterisk}"
-
REF LIST ATTRIBUTES
-------------------
--------
linkgit:git-remote[1]
+linkgit:git-remote-testgit[1]
+
GIT
---
Part of the linkgit:git[1] suite
--- /dev/null
+git-remote-testgit(1)
+=====================
+
+NAME
+----
+git-remote-testgit - Example remote-helper
+
+
+SYNOPSIS
+--------
+[verse]
+git clone testgit::<source-repo> [<destination>]
+
+DESCRIPTION
+-----------
+
+This command is a simple remote-helper that is used both as a
+testcase for the remote-helper functionality and as an example to
+show remote-helper authors one possible implementation.
+
+The best way to learn more is to read the comments and source code in
+'git-remote-testgit.py'.
+
+SEE ALSO
+--------
+linkgit:git-remote-helpers[1]
+
+GIT
+---
+Part of the linkgit:git[1] suite
--------
[verse]
'git revert' [--edit | --no-edit] [-n] [-m parent-number] [-s] <commit>...
+'git revert' --reset
+'git revert' --continue
DESCRIPTION
-----------
Pass the merge strategy-specific option through to the
merge strategy. See linkgit:git-merge[1] for details.
+SEQUENCER SUBCOMMANDS
+---------------------
+include::sequencer.txt[]
+
EXAMPLES
--------
`git revert HEAD~3`::
SYNOPSIS
--------
[verse]
-'git send-pack' [--all] [--dry-run] [--force] [--receive-pack=<git-receive-pack>] [--quiet] [--verbose] [--thin] [<host>:]<directory> [<ref>...]
+'git send-pack' [--all] [--dry-run] [--force] [--receive-pack=<git-receive-pack>] [--verbose] [--thin] [<host>:]<directory> [<ref>...]
DESCRIPTION
-----------
the remote repository can lose commits; use it with
care.
---quiet::
- Print only error messages.
-
--verbose::
Run verbosely.
affecting the working tree; and the 'rebase' command will be
able to update the working tree with the latest changes.
+--preserve-empty-dirs;;
+ Create a placeholder file in the local Git repository for each
+ empty directory fetched from Subversion. This includes directories
+ that become empty by removing all entries in the Subversion
+ repository (but not the directory itself). The placeholder files
+ are also tracked and removed when no longer necessary.
+
+--placeholder-filename=<filename>;;
+ Set the name of placeholder files created by --preserve-empty-dirs.
+ Default: ".gitignore"
+
'rebase'::
This fetches revisions from the SVN parent of the current HEAD
and rebases the current (uncommitted to SVN) work against it.
Add the given merge information during the dcommit
(e.g. `--mergeinfo="/branches/foo:1-10"`). All svn server versions can
store this information (as a property), and svn clients starting from
- version 1.5 can make use of it. 'git svn' currently does not use it
- and does not set it automatically.
+ version 1.5 can make use of it. To specify merge information from multiple
+ branches, use a single space character between the branches
+ (`--mergeinfo="/branches/foo:1-10 /branches/bar:3,5-6,8"`)
++
+[verse]
+config key: svn.pushmergeinfo
++
+This option will cause git-svn to attempt to automatically populate the
+svn:mergeinfo property in the SVN repository when possible. Currently, this can
+only be done when dcommitting non-fast-forward merges where all parents but the
+first have already been pushed into SVN.
'branch'::
Create a branch in the SVN repository.
branch of the `git.git` repository.
Documentation for older releases are available here:
-* link:v1.7.6.1/git.html[documentation for release 1.7.6.1]
+* link:v1.7.7/git.html[documentation for release 1.7.7]
* release notes for
- link:RelNotes/1.7.6.1.txt[1.7.6.1].
+ link:RelNotes/1.7.7.txt[1.7.7].
+
+* link:v1.7.6.4/git.html[documentation for release 1.7.6.4]
+
+* release notes for
+ link:RelNotes/1.7.6.4.txt[1.7.6.4],
+ link:RelNotes/1.7.6.3.txt[1.7.6.3],
+ link:RelNotes/1.7.6.2.txt[1.7.6.2],
+ link:RelNotes/1.7.6.1.txt[1.7.6.1],
link:RelNotes/1.7.6.txt[1.7.6].
* link:v1.7.5.4/git.html[documentation for release 1.7.5.4]
----
gitnamespaces - Git namespaces
+SYNOPSIS
+--------
+[verse]
+GIT_NAMESPACE=<namespace> 'git upload-pack'
+GIT_NAMESPACE=<namespace> 'git receive-pack'
+
+
DESCRIPTION
-----------
- Update "What's cooking" message to review the updates to
existing topics, newly added topics and graduated topics.
- This step is helped with Meta/UWC script (where Meta/ contains
+ This step is helped with Meta/cook script (where Meta/ contains
a checkout of the 'todo' branch).
- Merge topics to 'next'. For each branch whose tip is not
- Nothing is next-worthy; do not do anything.
- - Rebase topics that do not have any commit in next yet. This
- step is optional but sometimes is worth doing when an old
- series that is not in next can take advantage of low-level
- framework change that is merged to 'master' already.
+ - [** OBSOLETE **] Optionally rebase topics that do not have any commit
+ in next yet, when they can take advantage of a low-level framework
+ change that has already been merged to 'master'.
$ git rebase master ai/topic
pre-rebase hook to make sure that topics that are already in
'next' are not rebased beyond the merged commit.
- - Rebuild "pu" to merge the tips of topics not in 'next'.
+ - [** OBSOLETE **] Rebuild "pu" to merge the tips of topics not in 'next'.
$ git checkout pu
$ git reset --hard next
- Fetch html and man branches back from k.org, and push four
integration branches and the two documentation branches to
- repo.or.cz
+ repo.or.cz and other mirrors.
Some observations to be made.
--- /dev/null
+--reset::
+ Forget about the current operation in progress. Can be used
+ to clear the sequencer state after a failed cherry-pick or
+ revert.
+
+--continue::
+ Continue the operation in progress using the information in
+ '.git/sequencer'. Can be used to continue after resolving
+ conflicts in a failed cherry-pick or revert.
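++
+A possible session (using cherry-pick as the example; the revision range
+is a placeholder) might look like this:
++
+------------
+$ git cherry-pick maint..topic
+# ... stops at a conflicting commit; resolve it and "git add" the result ...
+$ git cherry-pick --continue
+------------
++
+To abandon the stopped operation and clear '.git/sequencer' instead, use
+`git cherry-pick --reset`.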
. Can sort an unsorted list using `sort_string_list`.
+. Can remove individual items from an unsorted list using
+ `unsorted_string_list_delete_item`.
+
. Finally it should free the list using `string_list_clear`.
Example:
The above two functions need to look through all items, as opposed to their
counterpart for sorted lists, which performs a binary search.
+`unsorted_string_list_delete_item`::
+
+ Remove an item from a string_list. The `string` pointer of the item
+ will be freed if the `strdup_strings` member of the string_list
+ is set. The third parameter controls whether the `util` pointer of the
+ item should be freed or not.
+
Data structures
---------------
#!/bin/sh
GVF=GIT-VERSION-FILE
-DEF_VER=v1.7.7-rc0
+DEF_VER=v1.7.7
LF='
'
# DEFAULT_EDITOR='$GIT_FALLBACK_EDITOR',
# DEFAULT_EDITOR='"C:\Program Files\Vim\gvim.exe" --nofork'
#
-# Define COMPUTE_HEADER_DEPENDENCIES if your compiler supports the -MMD option
-# and you want to avoid rebuilding objects when an unrelated header file
-# changes.
-#
# Define CHECK_HEADER_DEPENDENCIES to check for problems in the hard-coded
# dependency rules.
#
LIB_H += compat/bswap.h
LIB_H += compat/cygwin.h
LIB_H += compat/mingw.h
+LIB_H += compat/obstack.h
LIB_H += compat/win32/pthread.h
LIB_H += compat/win32/syslog.h
LIB_H += compat/win32/sys/poll.h
LIB_H += compat/win32/dirent.h
+LIB_H += connected.h
LIB_H += csum-file.h
LIB_H += decorate.h
LIB_H += delta.h
LIB_H += grep.h
LIB_H += hash.h
LIB_H += help.h
+LIB_H += kwset.h
LIB_H += levenshtein.h
LIB_H += list-objects.h
LIB_H += ll-merge.h
LIB_H += resolve-undo.h
LIB_H += revision.h
LIB_H += run-command.h
+LIB_H += sequencer.h
LIB_H += sha1-array.h
LIB_H += sha1-lookup.h
LIB_H += sideband.h
LIB_OBJS += color.o
LIB_OBJS += combine-diff.o
LIB_OBJS += commit.o
+LIB_OBJS += compat/obstack.o
LIB_OBJS += config.o
LIB_OBJS += connect.o
+LIB_OBJS += connected.o
LIB_OBJS += convert.o
LIB_OBJS += copy.o
LIB_OBJS += csum-file.o
LIB_OBJS += help.o
LIB_OBJS += hex.o
LIB_OBJS += ident.o
+LIB_OBJS += kwset.o
LIB_OBJS += levenshtein.o
LIB_OBJS += list-objects.o
LIB_OBJS += ll-merge.o
LIB_OBJS += run-command.o
LIB_OBJS += server-info.o
LIB_OBJS += setup.o
+LIB_OBJS += sequencer.o
LIB_OBJS += sha1-array.o
LIB_OBJS += sha1-lookup.o
LIB_OBJS += sha1_file.o
ifdef CHECK_HEADER_DEPENDENCIES
COMPUTE_HEADER_DEPENDENCIES =
USE_COMPUTED_HEADER_DEPENDENCIES =
+else
+ifndef COMPUTE_HEADER_DEPENDENCIES
+dep_check = $(shell $(CC) $(ALL_CFLAGS) \
+ -c -MF /dev/null -MMD -MP -x c /dev/null -o /dev/null 2>&1; \
+ echo $$?)
+ifeq ($(dep_check),0)
+COMPUTE_HEADER_DEPENDENCIES=YesPlease
+endif
+endif
endif
ifdef COMPUTE_HEADER_DEPENDENCIES
ifdef COMPUTE_HEADER_DEPENDENCIES
$(dep_dirs):
- mkdir -p $@
+ @mkdir -p $@
missing_dep_dirs := $(filter-out $(wildcard $(dep_dirs)),$(dep_dirs))
dep_file = $(dir $@).depend/$(notdir $@).d
{ "detachedhead", &advice_detached_head },
};
+void advise(const char *advice, ...)
+{
+ va_list params;
+
+ va_start(params, advice);
+ vreportf("hint: ", advice, params);
+ va_end(params);
+}
+
int git_default_advice_config(const char *var, const char *value)
{
const char *k = skip_prefix(var, "advice.");
return 0;
}
-void NORETURN die_resolve_conflict(const char *me)
+int error_resolve_conflict(const char *me)
{
- if (advice_resolve_conflict)
+ error("'%s' is not possible because you have unmerged files.", me);
+ if (advice_resolve_conflict) {
/*
* Message used both when 'git commit' fails and when
* other commands doing a merge do.
*/
- die("'%s' is not possible because you have unmerged files.\n"
- "Please, fix them up in the work tree, and then use 'git add/rm <file>' as\n"
- "appropriate to mark resolution and make a commit, or use 'git commit -a'.", me);
- else
- die("'%s' is not possible because you have unmerged files.", me);
+ advise("Fix them up in the work tree,");
+ advise("and then use 'git add/rm <file>' as");
+ advise("appropriate to mark resolution and make a commit,");
+ advise("or use 'git commit -a'.");
+ }
+ return -1;
+}
+
+void NORETURN die_resolve_conflict(const char *me)
+{
+ error_resolve_conflict(me);
+ die("Exiting because of an unresolved conflict.");
}
extern int advice_detached_head;
int git_default_advice_config(const char *var, const char *value);
-
+void advise(const char *advice, ...);
+int error_resolve_conflict(const char *me);
extern void NORETURN die_resolve_conflict(const char *me);
#endif /* ADVICE_H */
#include "refs.h"
#include "remote.h"
#include "commit.h"
+#include "sequencer.h"
struct tracking {
struct refspec spec;
return 0;
}
-int validate_new_branchname(const char *name, struct strbuf *ref, int force)
+int validate_new_branchname(const char *name, struct strbuf *ref,
+ int force, int attr_only)
{
- const char *head;
- unsigned char sha1[20];
-
if (strbuf_check_branch_ref(ref, name))
die("'%s' is not a valid branch name.", name);
if (!ref_exists(ref->buf))
return 0;
- else if (!force)
+ else if (!force && !attr_only)
die("A branch named '%s' already exists.", ref->buf + strlen("refs/heads/"));
- head = resolve_ref("HEAD", sha1, 0, NULL);
- if (!is_bare_repository() && head && !strcmp(head, ref->buf))
- die("Cannot force update the current branch.");
+ if (!attr_only) {
+ const char *head;
+ unsigned char sha1[20];
+ head = resolve_ref("HEAD", sha1, 0, NULL);
+ if (!is_bare_repository() && head && !strcmp(head, ref->buf))
+ die("Cannot force update the current branch.");
+ }
return 1;
}
if (track == BRANCH_TRACK_EXPLICIT || track == BRANCH_TRACK_OVERRIDE)
explicit_tracking = 1;
- if (validate_new_branchname(name, &ref, force || track == BRANCH_TRACK_OVERRIDE)) {
+ if (validate_new_branchname(name, &ref, force,
+ track == BRANCH_TRACK_OVERRIDE)) {
if (!force)
dont_change_ref = 1;
else
unlink(git_path("MERGE_MSG"));
unlink(git_path("MERGE_MODE"));
unlink(git_path("SQUASH_MSG"));
+ remove_sequencer_state(0);
}
* interpreted ref in ref, force indicates whether (non-head) branches
* may be overwritten. A non-zero return value indicates that the force
* parameter was non-zero and the branch already exists.
+ *
+ * Contrary to all of the above, when attr_only is 1, the caller is
+ * not interested in verifying if it is Ok to update the named
+ * branch to point at a potentially different commit. It is merely
+ * asking if it is OK to change some attribute for the named branch
+ * (e.g. tracking upstream).
+ *
+ * NEEDSWORK: This needs to be split into two separate functions in the
+ * longer run for sanity.
+ *
*/
-int validate_new_branchname(const char *name, struct strbuf *ref, int force);
+int validate_new_branchname(const char *name, struct strbuf *ref, int force, int attr_only);
/*
* Remove information about the state of working on the current
die(_("Invalid branch name: '%s'"), oldname);
}
- validate_new_branchname(newname, &newref, force);
+ validate_new_branchname(newname, &newref, force, 0);
strbuf_addf(&logmsg, "Branch: renamed %s to %s",
oldref.buf, newref.buf);
} else if (!strcmp(cmd, "unbundle")) {
if (!startup_info->have_repository)
die(_("Need a repository to unbundle."));
- return !!unbundle(&header, bundle_fd) ||
+ return !!unbundle(&header, bundle_fd, 0) ||
list_bundle_refs(&header, argc, argv);
} else
usage(builtin_bundle_usage);
" or: git check-ref-format --branch <branchname-shorthand>";
/*
- * Replace each run of adjacent slashes in src with a single slash,
- * and write the result to dst.
+ * Remove leading slashes and replace each run of adjacent slashes in
+ * src with a single slash, and write the result to dst.
*
* This function is similar to normalize_path_copy(), but stripped down
* to meet check_ref_format's simpler needs.
static void collapse_slashes(char *dst, const char *src)
{
char ch;
- char prev = '\0';
+ char prev = '/';
while ((ch = *src++) != '\0') {
if (prev == '/' && ch == prev)
if (opts.new_branch) {
struct strbuf buf = STRBUF_INIT;
- opts.branch_exists = validate_new_branchname(opts.new_branch, &buf, !!opts.new_branch_force);
+ opts.branch_exists = validate_new_branchname(opts.new_branch, &buf,
+ !!opts.new_branch_force, 0);
strbuf_release(&buf);
}
OPT_BOOLEAN('d', NULL, &remove_directories,
"remove whole directories"),
{ OPTION_CALLBACK, 'e', "exclude", &exclude_list, "pattern",
- "exclude <pattern>", PARSE_OPT_NONEG, exclude_cb },
+ "add <pattern> to ignore rules", PARSE_OPT_NONEG, exclude_cb },
OPT_BOOLEAN('x', NULL, &ignored, "remove ignored files, too"),
OPT_BOOLEAN('X', NULL, &ignored_only,
"remove only ignored files"),
setup_standard_excludes(&dir);
for (i = 0; i < exclude_list.nr; i++)
- add_exclude(exclude_list.items[i].string, "", 0, dir.exclude_list);
+ add_exclude(exclude_list.items[i].string, "", 0,
+ &dir.exclude_list[EXC_CMDL]);
pathspec = get_pathspec(prefix, argv);
static int fetch_unpack_limit = -1;
static int unpack_limit = 100;
static int prefer_ofs_delta = 1;
-static int no_done = 0;
+static int no_done;
+static int fetch_fsck_objects = -1;
+static int transfer_fsck_objects = -1;
static struct fetch_pack_args args = {
/* .uploadpack = */ "git-upload-pack",
};
}
if (*hdr_arg)
*av++ = hdr_arg;
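+	/* fetch.fsckobjects overrides transfer.fsckobjects; default is off */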
+ if (fetch_fsck_objects >= 0
+ ? fetch_fsck_objects
+ : transfer_fsck_objects >= 0
+ ? transfer_fsck_objects
+ : 0)
+ *av++ = "--strict";
*av++ = NULL;
cmd.in = demux.out;
return 0;
}
+ if (!strcmp(var, "fetch.fsckobjects")) {
+ fetch_fsck_objects = git_config_bool(var, value);
+ return 0;
+ }
+
+ if (!strcmp(var, "transfer.fsckobjects")) {
+ transfer_fsck_objects = git_config_bool(var, value);
+ return 0;
+ }
+
return git_default_config(var, value, cb);
}
#include "sigchain.h"
#include "transport.h"
#include "submodule.h"
+#include "connected.h"
static const char * const builtin_fetch_usage[] = {
"git fetch [<options>] [<repository> [<refspec>...]]",
}
}
+static int iterate_ref_map(void *cb_data, unsigned char sha1[20])
+{
+ struct ref **rm = cb_data;
+ struct ref *ref = *rm;
+
+ if (!ref)
+ return -1; /* end of the list */
+ *rm = ref->next;
+ hashcpy(sha1, ref->old_sha1);
+ return 0;
+}
+
static int store_updated_refs(const char *raw_url, const char *remote_name,
struct ref *ref_map)
{
url = transport_anonymize_url(raw_url);
else
url = xstrdup("foreign");
+
+ rm = ref_map;
+ if (check_everything_connected(iterate_ref_map, 0, &rm))
+ return error(_("%s did not send all necessary objects\n"), url);
+
for (rm = ref_map; rm; rm = rm->next) {
struct ref *ref = NULL;
* We would want to bypass the object transfer altogether if
* everything we are going to fetch already exists and is connected
* locally.
- *
- * The refs we are going to fetch are in ref_map. If running
- *
- * $ git rev-list --objects --stdin --not --all
- *
- * (feeding all the refs in ref_map on its standard input)
- * does not error out, that means everything reachable from the
- * refs we are going to fetch exists and is connected to some of
- * our existing refs.
*/
static int quickfetch(struct ref *ref_map)
{
- struct child_process revlist;
- struct ref *ref;
- int err;
- const char *argv[] = {"rev-list",
- "--quiet", "--objects", "--stdin", "--not", "--all", NULL};
+ struct ref *rm = ref_map;
/*
* If we are deepening a shallow clone we already have these
*/
if (depth)
return -1;
-
- if (!ref_map)
- return 0;
-
- memset(&revlist, 0, sizeof(revlist));
- revlist.argv = argv;
- revlist.git_cmd = 1;
- revlist.no_stdout = 1;
- revlist.no_stderr = 1;
- revlist.in = -1;
-
- err = start_command(&revlist);
- if (err) {
- error(_("could not run rev-list"));
- return err;
- }
-
- /*
- * If rev-list --stdin encounters an unknown commit, it terminates,
- * which will cause SIGPIPE in the write loop below.
- */
- sigchain_push(SIGPIPE, SIG_IGN);
-
- for (ref = ref_map; ref; ref = ref->next) {
- if (write_in_full(revlist.in, sha1_to_hex(ref->old_sha1), 40) < 0 ||
- write_str_in_full(revlist.in, "\n") < 0) {
- if (errno != EPIPE && errno != EINVAL)
- error(_("failed write to rev-list: %s"), strerror(errno));
- err = -1;
- break;
- }
- }
-
- if (close(revlist.in)) {
- error(_("failed to close rev-list's stdin: %s"), strerror(errno));
- err = -1;
- }
-
- sigchain_pop(SIGPIPE);
-
- return finish_command(&revlist) || err;
+ return check_everything_connected(iterate_ref_map, 1, &rm);
}
static int fetch_refs(struct transport *transport, struct ref *ref_map)
argc = parse_options(argc, argv, prefix,
builtin_fetch_options, builtin_fetch_usage, 0);
+ if (recurse_submodules != RECURSE_SUBMODULES_OFF) {
+ if (recurse_submodules_default) {
+ int arg = parse_fetch_recurse_submodules_arg("--recurse-submodules-default", recurse_submodules_default);
+ set_config_fetch_recurse_submodules(arg);
+ }
+ gitmodules_config();
+ git_config(submodule_config, NULL);
+ }
+
if (all) {
if (argc == 1)
die(_("fetch --all does not take a repository argument"));
if (!result && (recurse_submodules != RECURSE_SUBMODULES_OFF)) {
const char *options[10];
int num_options = 0;
- if (recurse_submodules_default) {
- int arg = parse_fetch_recurse_submodules_arg("--recurse-submodules-default", recurse_submodules_default);
- set_config_fetch_recurse_submodules(arg);
- }
- gitmodules_config();
- git_config(submodule_config, NULL);
add_options_to_argv(&num_options, options);
result = fetch_populated_submodules(num_options, options,
submodule_prefix,
{ "subject" },
{ "body" },
{ "contents" },
+ { "contents:subject" },
+ { "contents:body" },
+ { "contents:signature" },
{ "upstream" },
{ "symref" },
{ "flag" },
return xmemdupz(email, eoemail + 1 - email);
}
+static char *copy_subject(const char *buf, unsigned long len)
+{
+ char *r = xmemdupz(buf, len);
+ int i;
+
+ for (i = 0; i < len; i++)
+ if (r[i] == '\n')
+ r[i] = ' ';
+
+ return r;
+}
+
static void grab_date(const char *buf, struct atom_value *v, const char *atomname)
{
const char *eoemail = strstr(buf, "> ");
}
}
-static void find_subpos(const char *buf, unsigned long sz, const char **sub, const char **body)
+static void find_subpos(const char *buf, unsigned long sz,
+ const char **sub, unsigned long *sublen,
+ const char **body, unsigned long *bodylen,
+ unsigned long *nonsiglen,
+ const char **sig, unsigned long *siglen)
{
- while (*buf) {
- const char *eol = strchr(buf, '\n');
- if (!eol)
- return;
- if (eol[1] == '\n') {
- buf = eol + 1;
- break; /* found end of header */
- }
- buf = eol + 1;
+ const char *eol;
+ /* skip past header until we hit empty line */
+ while (*buf && *buf != '\n') {
+ eol = strchrnul(buf, '\n');
+ if (*eol)
+ eol++;
+ buf = eol;
}
+ /* skip any empty lines */
while (*buf == '\n')
buf++;
- if (!*buf)
- return;
- *sub = buf; /* first non-empty line */
- buf = strchr(buf, '\n');
- if (!buf) {
- *body = "";
- return; /* no body */
+
+ /* parse signature first; we might not even have a subject line */
+ *sig = buf + parse_signature(buf, strlen(buf));
+ *siglen = strlen(*sig);
+
+ /* subject is first non-empty line */
+ *sub = buf;
+ /* subject goes to first empty line */
+ while (buf < *sig && *buf && *buf != '\n') {
+ eol = strchrnul(buf, '\n');
+ if (*eol)
+ eol++;
+ buf = eol;
}
+ *sublen = buf - *sub;
+ /* drop trailing newline, if present */
+ if (*sublen && (*sub)[*sublen - 1] == '\n')
+ *sublen -= 1;
+
+ /* skip any empty lines */
while (*buf == '\n')
- buf++; /* skip blank between subject and body */
+ buf++;
*body = buf;
+ *bodylen = strlen(buf);
+ *nonsiglen = *sig - buf;
}
/* See grab_values */
static void grab_sub_body_contents(struct atom_value *val, int deref, struct object *obj, void *buf, unsigned long sz)
{
int i;
- const char *subpos = NULL, *bodypos = NULL;
+ const char *subpos = NULL, *bodypos = NULL, *sigpos = NULL;
+ unsigned long sublen = 0, bodylen = 0, nonsiglen = 0, siglen = 0;
for (i = 0; i < used_atom_cnt; i++) {
const char *name = used_atom[i];
name++;
if (strcmp(name, "subject") &&
strcmp(name, "body") &&
- strcmp(name, "contents"))
+ strcmp(name, "contents") &&
+ strcmp(name, "contents:subject") &&
+ strcmp(name, "contents:body") &&
+ strcmp(name, "contents:signature"))
continue;
if (!subpos)
- find_subpos(buf, sz, &subpos, &bodypos);
- if (!subpos)
- return;
+ find_subpos(buf, sz,
+ &subpos, &sublen,
+ &bodypos, &bodylen, &nonsiglen,
+ &sigpos, &siglen);
if (!strcmp(name, "subject"))
- v->s = copy_line(subpos);
+ v->s = copy_subject(subpos, sublen);
+ else if (!strcmp(name, "contents:subject"))
+ v->s = copy_subject(subpos, sublen);
else if (!strcmp(name, "body"))
- v->s = xstrdup(bodypos);
+ v->s = xmemdupz(bodypos, bodylen);
+ else if (!strcmp(name, "contents:body"))
+ v->s = xmemdupz(bodypos, nonsiglen);
+ else if (!strcmp(name, "contents:signature"))
+ v->s = xmemdupz(sigpos, siglen);
else if (!strcmp(name, "contents"))
v->s = xstrdup(subpos);
}
unsigned long size;
char *buf = read_sha1_file(obj->sha1,
&type, &size);
- if (buf) {
- if (fwrite(buf, size, 1, f) != 1)
- die_errno("Could not write '%s'",
- filename);
- free(buf);
- }
+ if (buf && fwrite(buf, 1, size, f) != size)
+ die_errno("Could not write '%s'", filename);
+ free(buf);
} else
fprintf(f, "%s\n", sha1_to_hex(obj->sha1));
if (fclose(f))
struct strbuf base;
int hit, len;
+ read_sha1_lock();
data = read_object_with_reference(obj->sha1, tree_type,
&size, NULL);
+ read_sha1_unlock();
+
if (!data)
die(_("unable to read tree (%s)"), sha1_to_hex(obj->sha1));
string_list_append(&extra_cc, value);
return 0;
}
- if (!strcmp(var, "diff.color") || !strcmp(var, "color.diff")) {
+ if (!strcmp(var, "diff.color") || !strcmp(var, "color.diff") ||
+ !strcmp(var, "color.ui")) {
return 0;
}
if (!strcmp(var, "format.numbered")) {
strbuf_release(&reflog_message);
}
+static struct object *want_commit(const char *name)
+{
+ struct object *obj;
+ unsigned char sha1[20];
+ if (get_sha1(name, sha1))
+ return NULL;
+ obj = parse_object(sha1);
+ return peel_to_type(name, 0, obj, OBJ_COMMIT);
+}
+
/* Get the name for the merge commit's message. */
static void merge_name(const char *remote, struct strbuf *msg)
{
remote = bname.buf;
memset(branch_head, 0, sizeof(branch_head));
- remote_head = peel_to_type(remote, 0, NULL, OBJ_COMMIT);
+ remote_head = want_commit(remote);
if (!remote_head)
die(_("'%s' does not point to a commit"), remote);
if (!allow_fast_forward)
die(_("Non-fast-forward commit does not make sense into "
"an empty head"));
- remote_head = peel_to_type(argv[0], 0, NULL, OBJ_COMMIT);
+ remote_head = want_commit(argv[0]);
if (!remote_head)
die(_("%s - not something we can merge"), argv[0]);
read_empty(remote_head->sha1, 0);
struct object *o;
struct commit *commit;
- o = peel_to_type(argv[i], 0, NULL, OBJ_COMMIT);
+ o = want_commit(argv[i]);
if (!o)
die(_("%s - not something we can merge"), argv[i]);
commit = lookup_commit(o->sha1);
if (have_message)
strbuf_addstr(&msg,
" (no commit created; -m option ignored)");
- o = peel_to_type(sha1_to_hex(remoteheads->item->object.sha1),
- 0, NULL, OBJ_COMMIT);
+ o = want_commit(sha1_to_hex(remoteheads->item->object.sha1));
if (!o)
return 1;
commit->object.flags |= OBJECT_ADDED;
}
-static void show_object(struct object *obj, const struct name_path *path, const char *last)
+static void show_object(struct object *obj,
+ const struct name_path *path, const char *last,
+ void *data)
{
char *name = path_name(path, last);
return 1;
}
-static int get_one_patchid(unsigned char *next_sha1, git_SHA_CTX *ctx)
+static int get_one_patchid(unsigned char *next_sha1, git_SHA_CTX *ctx, struct strbuf *line_buf)
{
- static char line[1000];
int patchlen = 0, found_next = 0;
int before = -1, after = -1;
- while (fgets(line, sizeof(line), stdin) != NULL) {
+ while (strbuf_getwholeline(line_buf, stdin, '\n') != EOF) {
+ char *line = line_buf->buf;
char *p = line;
int len;
unsigned char sha1[20], n[20];
git_SHA_CTX ctx;
int patchlen;
+ struct strbuf line_buf = STRBUF_INIT;
git_SHA1_Init(&ctx);
hashclr(sha1);
while (!feof(stdin)) {
- patchlen = get_one_patchid(n, &ctx);
+ patchlen = get_one_patchid(n, &ctx, &line_buf);
flush_current_id(patchlen, sha1, &ctx);
hashcpy(sha1, n);
}
+ strbuf_release(&line_buf);
}
static const char patch_id_usage[] = "git patch-id < patch";
#include "remote.h"
#include "transport.h"
#include "parse-options.h"
+#include "submodule.h"
static const char * const push_usage[] = {
"git push [<options>] [<repository> [<refspec>...]]",
return !!errs;
}
+static int option_parse_recurse_submodules(const struct option *opt,
+ const char *arg, int unset)
+{
+ int *flags = opt->value;
+ if (arg) {
+ if (!strcmp(arg, "check"))
+ *flags |= TRANSPORT_RECURSE_SUBMODULES_CHECK;
+ else
+ die("bad %s argument: %s", opt->long_name, arg);
+ } else
+ die("option %s needs an argument (check)", opt->long_name);
+
+ return 0;
+}
+
int cmd_push(int argc, const char **argv, const char *prefix)
{
int flags = 0;
OPT_BIT('n' , "dry-run", &flags, "dry run", TRANSPORT_PUSH_DRY_RUN),
OPT_BIT( 0, "porcelain", &flags, "machine-readable output", TRANSPORT_PUSH_PORCELAIN),
OPT_BIT('f', "force", &flags, "force updates", TRANSPORT_PUSH_FORCE),
+ { OPTION_CALLBACK, 0, "recurse-submodules", &flags, "check",
+ "controls recursive pushing of submodules",
+ PARSE_OPT_OPTARG, option_parse_recurse_submodules },
OPT_BOOLEAN( 0 , "thin", &thin, "use thin pack"),
OPT_STRING( 0 , "receive-pack", &receivepack, "receive-pack", "receive pack program"),
OPT_STRING( 0 , "exec", &receivepack, "receive-pack", "receive pack program"),
#include "transport.h"
#include "string-list.h"
#include "sha1-array.h"
+#include "connected.h"
static const char receive_pack_usage[] = "git receive-pack <git-dir>";
static int deny_non_fast_forwards;
static enum deny_action deny_current_branch = DENY_UNCONFIGURED;
static enum deny_action deny_delete_current = DENY_UNCONFIGURED;
-static int receive_fsck_objects;
+static int receive_fsck_objects = -1;
+static int transfer_fsck_objects = -1;
static int receive_unpack_limit = -1;
static int transfer_unpack_limit = -1;
static int unpack_limit = 100;
return 0;
}
+ if (strcmp(var, "transfer.fsckobjects") == 0) {
+ transfer_fsck_objects = git_config_bool(var, value);
+ return 0;
+ }
+
if (!strcmp(var, "receive.denycurrentbranch")) {
deny_current_branch = parse_deny_action(var, value);
return 0;
return 0;
}
-static int run_receive_hook(struct command *commands, const char *hook_name)
+typedef int (*feed_fn)(void *, const char **, size_t *);
+static int run_and_feed_hook(const char *hook_name, feed_fn feed, void *feed_state)
{
- static char buf[sizeof(commands->old_sha1) * 2 + PATH_MAX + 4];
- struct command *cmd;
struct child_process proc;
struct async muxer;
const char *argv[2];
- int have_input = 0, code;
-
- for (cmd = commands; !have_input && cmd; cmd = cmd->next) {
- if (!cmd->error_string)
- have_input = 1;
- }
+ int code;
- if (!have_input || access(hook_name, X_OK) < 0)
+ if (access(hook_name, X_OK) < 0)
return 0;
argv[0] = hook_name;
return code;
}
- for (cmd = commands; cmd; cmd = cmd->next) {
- if (!cmd->error_string) {
- size_t n = snprintf(buf, sizeof(buf), "%s %s %s\n",
- sha1_to_hex(cmd->old_sha1),
- sha1_to_hex(cmd->new_sha1),
- cmd->ref_name);
- if (write_in_full(proc.in, buf, n) != n)
- break;
- }
+ while (1) {
+ const char *buf;
+ size_t n;
+ if (feed(feed_state, &buf, &n))
+ break;
+ if (write_in_full(proc.in, buf, n) != n)
+ break;
}
close(proc.in);
if (use_sideband)
return finish_command(&proc);
}
+struct receive_hook_feed_state {
+ struct command *cmd;
+ struct strbuf buf;
+};
+
+static int feed_receive_hook(void *state_, const char **bufp, size_t *sizep)
+{
+ struct receive_hook_feed_state *state = state_;
+ struct command *cmd = state->cmd;
+
+ while (cmd && cmd->error_string)
+ cmd = cmd->next;
+ if (!cmd)
+ return -1; /* EOF */
+ strbuf_reset(&state->buf);
+ strbuf_addf(&state->buf, "%s %s %s\n",
+ sha1_to_hex(cmd->old_sha1), sha1_to_hex(cmd->new_sha1),
+ cmd->ref_name);
+ state->cmd = cmd->next;
+ if (bufp) {
+ *bufp = state->buf.buf;
+ *sizep = state->buf.len;
+ }
+ return 0;
+}
+
+static int run_receive_hook(struct command *commands, const char *hook_name)
+{
+ struct receive_hook_feed_state state;
+ int status;
+
+ strbuf_init(&state.buf, 0);
+ state.cmd = commands;
+ if (feed_receive_hook(&state, NULL, NULL))
+ return 0;
+ state.cmd = commands;
+ status = run_and_feed_hook(hook_name, feed_receive_hook, &state);
+ strbuf_release(&state.buf);
+ return status;
+}
+
static int run_update_hook(struct command *cmd)
{
static const char update_hook[] = "hooks/update";
string_list_clear(&ref_list, 0);
}
+static int command_singleton_iterator(void *cb_data, unsigned char sha1[20])
+{
+ struct command **cmd_list = cb_data;
+ struct command *cmd = *cmd_list;
+
+ if (!cmd)
+ return -1; /* end of list */
+ *cmd_list = NULL; /* this returns only one */
+ hashcpy(sha1, cmd->new_sha1);
+ return 0;
+}
+
+static void set_connectivity_errors(struct command *commands)
+{
+ struct command *cmd;
+
+ for (cmd = commands; cmd; cmd = cmd->next) {
+ struct command *singleton = cmd;
+ if (!check_everything_connected(command_singleton_iterator,
+ 0, &singleton))
+ continue;
+ cmd->error_string = "missing necessary objects";
+ }
+}
+
+static int iterate_receive_command_list(void *cb_data, unsigned char sha1[20])
+{
+ struct command **cmd_list = cb_data;
+ struct command *cmd = *cmd_list;
+
+ if (!cmd)
+ return -1; /* end of list */
+ *cmd_list = cmd->next;
+ hashcpy(sha1, cmd->new_sha1);
+ return 0;
+}
+
static void execute_commands(struct command *commands, const char *unpacker_error)
{
struct command *cmd;
return;
}
+ cmd = commands;
+ if (check_everything_connected(iterate_receive_command_list,
+ 0, &cmd))
+ set_connectivity_errors(commands);
+
if (run_receive_hook(commands, pre_receive_hook)) {
for (cmd = commands; cmd; cmd = cmd->next)
cmd->error_string = "pre-receive hook declined";
static const char *pack_lockfile;
-static const char *unpack(int quiet)
+static const char *unpack(void)
{
struct pack_header hdr;
const char *hdr_err;
char hdr_arg[38];
+ int fsck_objects = (receive_fsck_objects >= 0
+ ? receive_fsck_objects
+ : transfer_fsck_objects >= 0
+ ? transfer_fsck_objects
+ : 0);
hdr_err = parse_pack_header(&hdr);
if (hdr_err)
if (ntohl(hdr.hdr_entries) < unpack_limit) {
int code, i = 0;
- const char *unpacker[5];
+ const char *unpacker[4];
unpacker[i++] = "unpack-objects";
- if (quiet)
- unpacker[i++] = "-q";
- if (receive_fsck_objects)
+ if (fsck_objects)
unpacker[i++] = "--strict";
unpacker[i++] = hdr_arg;
unpacker[i++] = NULL;
keeper[i++] = "index-pack";
keeper[i++] = "--stdin";
- if (receive_fsck_objects)
+ if (fsck_objects)
keeper[i++] = "--strict";
keeper[i++] = "--fix-thin";
keeper[i++] = hdr_arg;
int cmd_receive_pack(int argc, const char **argv, const char *prefix)
{
- int quiet = 0;
int advertise_refs = 0;
int stateless_rpc = 0;
int i;
const char *arg = *argv++;
if (*arg == '-') {
- if (!strcmp(arg, "--quiet")) {
- quiet = 1;
- continue;
- }
-
if (!strcmp(arg, "--advertise-refs")) {
advertise_refs = 1;
continue;
const char *unpack_status = NULL;
if (!delete_only(commands))
- unpack_status = unpack(quiet);
+ unpack_status = unpack();
execute_commands(commands, unpack_status);
if (pack_lockfile)
unlink_or_warn(pack_lockfile);
commit->buffer = NULL;
}
-static void finish_object(struct object *obj, const struct name_path *path, const char *name)
+static void finish_object(struct object *obj,
+ const struct name_path *path, const char *name,
+ void *cb_data)
{
if (obj->type == OBJ_BLOB && !has_sha1_file(obj->sha1))
die("missing blob object '%s'", sha1_to_hex(obj->sha1));
}
-static void show_object(struct object *obj, const struct name_path *path, const char *component)
+static void show_object(struct object *obj,
+ const struct name_path *path, const char *component,
+ void *cb_data)
{
- char *name = path_name(path, component);
- /* An object with name "foo\n0000000..." can be used to
- * confuse downstream "git pack-objects" very badly.
- */
- const char *ep = strchr(name, '\n');
+ struct rev_info *info = cb_data;
- finish_object(obj, path, name);
- if (ep) {
- printf("%s %.*s\n", sha1_to_hex(obj->sha1),
- (int) (ep - name),
- name);
- }
- else
- printf("%s %s\n", sha1_to_hex(obj->sha1), name);
- free(name);
+ finish_object(obj, path, component, cb_data);
+ if (info->verify_objects && !obj->parsed && obj->type != OBJ_COMMIT)
+ parse_object(obj->sha1);
+ show_object_with_name(stdout, obj, path, component);
}
static void show_edge(struct commit *commit)
#include "rerere.h"
#include "merge-recursive.h"
#include "refs.h"
+#include "dir.h"
+#include "sequencer.h"
/*
* This implements the builtins revert and cherry-pick.
static const char * const revert_usage[] = {
"git revert [options] <commit-ish>",
+ "git revert <subcommand>",
NULL
};
static const char * const cherry_pick_usage[] = {
"git cherry-pick [options] <commit-ish>",
+ "git cherry-pick <subcommand>",
NULL
};
-static int edit, no_replay, no_commit, mainline, signoff, allow_ff;
-static enum { REVERT, CHERRY_PICK } action;
-static struct commit *commit;
-static int commit_argc;
-static const char **commit_argv;
-static int allow_rerere_auto;
-
-static const char *me;
-
-/* Merge strategy. */
-static const char *strategy;
-static const char **xopts;
-static size_t xopts_nr, xopts_alloc;
+enum replay_action { REVERT, CHERRY_PICK };
+enum replay_subcommand { REPLAY_NONE, REPLAY_RESET, REPLAY_CONTINUE };
+
+struct replay_opts {
+ enum replay_action action;
+ enum replay_subcommand subcommand;
+
+ /* Boolean options */
+ int edit;
+ int record_origin;
+ int no_commit;
+ int signoff;
+ int allow_ff;
+ int allow_rerere_auto;
+
+ int mainline;
+ int commit_argc;
+ const char **commit_argv;
+
+ /* Merge strategy */
+ const char *strategy;
+ const char **xopts;
+ size_t xopts_nr, xopts_alloc;
+};
#define GIT_REFLOG_ACTION "GIT_REFLOG_ACTION"
+static const char *action_name(const struct replay_opts *opts)
+{
+ return opts->action == REVERT ? "revert" : "cherry-pick";
+}
+
static char *get_encoding(const char *message);
-static const char * const *revert_or_cherry_pick_usage(void)
+static const char * const *revert_or_cherry_pick_usage(struct replay_opts *opts)
{
- return action == REVERT ? revert_usage : cherry_pick_usage;
+ return opts->action == REVERT ? revert_usage : cherry_pick_usage;
}
static int option_parse_x(const struct option *opt,
const char *arg, int unset)
{
+ struct replay_opts **opts_ptr = opt->value;
+ struct replay_opts *opts = *opts_ptr;
+
if (unset)
return 0;
- ALLOC_GROW(xopts, xopts_nr + 1, xopts_alloc);
- xopts[xopts_nr++] = xstrdup(arg);
+ ALLOC_GROW(opts->xopts, opts->xopts_nr + 1, opts->xopts_alloc);
+ opts->xopts[opts->xopts_nr++] = xstrdup(arg);
return 0;
}
-static void parse_args(int argc, const char **argv)
+static void verify_opt_compatible(const char *me, const char *base_opt, ...)
+{
+ const char *this_opt;
+ va_list ap;
+
+ va_start(ap, base_opt);
+ while ((this_opt = va_arg(ap, const char *))) {
+ if (va_arg(ap, int))
+ break;
+ }
+ va_end(ap);
+
+ if (this_opt)
+ die(_("%s: %s cannot be used with %s"), me, this_opt, base_opt);
+}
+
+static void verify_opt_mutually_compatible(const char *me, ...)
+{
+ const char *opt1, *opt2;
+ va_list ap;
+
+ va_start(ap, me);
+ while ((opt1 = va_arg(ap, const char *))) {
+ if (va_arg(ap, int))
+ break;
+ }
+ if (opt1) {
+ while ((opt2 = va_arg(ap, const char *))) {
+ if (va_arg(ap, int))
+ break;
+ }
+ }
+
+ if (opt1 && opt2)
+ die(_("%s: %s cannot be used with %s"), me, opt1, opt2);
+}
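
Callers hand verify_opt_compatible() the base option's name followed by (option-name, is-set) pairs terminated by a NULL name; the first pair whose flag is set triggers the die(). A minimal sketch mirroring the checks parse_args() performs below:

    /* sketch: reject --signoff or --no-commit once --ff has been given */
    verify_opt_compatible(me, "--ff",
    		"--signoff", opts->signoff,
    		"--no-commit", opts->no_commit,
    		NULL);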
+
+static void parse_args(int argc, const char **argv, struct replay_opts *opts)
{
- const char * const * usage_str = revert_or_cherry_pick_usage();
+ const char * const * usage_str = revert_or_cherry_pick_usage(opts);
+ const char *me = action_name(opts);
int noop;
+ int reset = 0;
+ int contin = 0;
struct option options[] = {
- OPT_BOOLEAN('n', "no-commit", &no_commit, "don't automatically commit"),
- OPT_BOOLEAN('e', "edit", &edit, "edit the commit message"),
+ OPT_BOOLEAN(0, "reset", &reset, "forget the current operation"),
+ OPT_BOOLEAN(0, "continue", &contin, "continue the current operation"),
+ OPT_BOOLEAN('n', "no-commit", &opts->no_commit, "don't automatically commit"),
+ OPT_BOOLEAN('e', "edit", &opts->edit, "edit the commit message"),
{ OPTION_BOOLEAN, 'r', NULL, &noop, NULL, "no-op (backward compatibility)",
PARSE_OPT_NOARG | PARSE_OPT_HIDDEN, NULL, 0 },
- OPT_BOOLEAN('s', "signoff", &signoff, "add Signed-off-by:"),
- OPT_INTEGER('m', "mainline", &mainline, "parent number"),
- OPT_RERERE_AUTOUPDATE(&allow_rerere_auto),
- OPT_STRING(0, "strategy", &strategy, "strategy", "merge strategy"),
- OPT_CALLBACK('X', "strategy-option", &xopts, "option",
+ OPT_BOOLEAN('s', "signoff", &opts->signoff, "add Signed-off-by:"),
+ OPT_INTEGER('m', "mainline", &opts->mainline, "parent number"),
+ OPT_RERERE_AUTOUPDATE(&opts->allow_rerere_auto),
+ OPT_STRING(0, "strategy", &opts->strategy, "strategy", "merge strategy"),
+ OPT_CALLBACK('X', "strategy-option", &opts, "option",
"option for merge strategy", option_parse_x),
OPT_END(),
OPT_END(),
OPT_END(),
};
- if (action == CHERRY_PICK) {
+ if (opts->action == CHERRY_PICK) {
struct option cp_extra[] = {
- OPT_BOOLEAN('x', NULL, &no_replay, "append commit name"),
- OPT_BOOLEAN(0, "ff", &allow_ff, "allow fast-forward"),
+ OPT_BOOLEAN('x', NULL, &opts->record_origin, "append commit name"),
+ OPT_BOOLEAN(0, "ff", &opts->allow_ff, "allow fast-forward"),
OPT_END(),
};
if (parse_options_concat(options, ARRAY_SIZE(options), cp_extra))
die(_("program error"));
}
- commit_argc = parse_options(argc, argv, NULL, options, usage_str,
- PARSE_OPT_KEEP_ARGV0 |
- PARSE_OPT_KEEP_UNKNOWN);
- if (commit_argc < 2)
+ opts->commit_argc = parse_options(argc, argv, NULL, options, usage_str,
+ PARSE_OPT_KEEP_ARGV0 |
+ PARSE_OPT_KEEP_UNKNOWN);
+
+ /* Check for incompatible subcommands */
+ verify_opt_mutually_compatible(me,
+ "--reset", reset,
+ "--continue", contin,
+ NULL);
+
+ /* Set the subcommand */
+ if (reset)
+ opts->subcommand = REPLAY_RESET;
+ else if (contin)
+ opts->subcommand = REPLAY_CONTINUE;
+ else
+ opts->subcommand = REPLAY_NONE;
+
+ /* Check for incompatible command line arguments */
+ if (opts->subcommand != REPLAY_NONE) {
+ const char *this_operation;
+ if (opts->subcommand == REPLAY_RESET)
+ this_operation = "--reset";
+ else
+ this_operation = "--continue";
+
+ verify_opt_compatible(me, this_operation,
+ "--no-commit", opts->no_commit,
+ "--signoff", opts->signoff,
+ "--mainline", opts->mainline,
+ "--strategy", opts->strategy ? 1 : 0,
+ "--strategy-option", opts->xopts ? 1 : 0,
+ "-x", opts->record_origin,
+ "--ff", opts->allow_ff,
+ NULL);
+ }
+
+ else if (opts->commit_argc < 2)
usage_with_options(usage_str, options);
- commit_argv = argv;
+ if (opts->allow_ff)
+ verify_opt_compatible(me, "--ff",
+ "--signoff", opts->signoff,
+ "--no-commit", opts->no_commit,
+ "-x", opts->record_origin,
+ "--edit", opts->edit,
+ NULL);
+ opts->commit_argv = argv;
}
struct commit_message {
const char *message;
};
-static int get_message(const char *raw_message, struct commit_message *out)
+static int get_message(struct commit *commit, struct commit_message *out)
{
const char *encoding;
const char *abbrev, *subject;
int abbrev_len, subject_len;
char *q;
- if (!raw_message)
+ if (!commit->buffer)
return -1;
- encoding = get_encoding(raw_message);
+ encoding = get_encoding(commit->buffer);
if (!encoding)
encoding = "UTF-8";
if (!git_commit_encoding)
git_commit_encoding = "UTF-8";
out->reencoded_message = NULL;
- out->message = raw_message;
+ out->message = commit->buffer;
if (strcmp(encoding, git_commit_encoding))
- out->reencoded_message = reencode_string(raw_message,
+ out->reencoded_message = reencode_string(commit->buffer,
git_commit_encoding, encoding);
if (out->reencoded_message)
out->message = out->reencoded_message;
{
const char *p = message, *eol;
- if (!p)
- die (_("Could not read commit message of %s"),
- sha1_to_hex(commit->object.sha1));
while (*p && *p != '\n') {
for (eol = p + 1; *eol && *eol != '\n'; eol++)
; /* do nothing */
return NULL;
}
-static void add_message_to_msg(struct strbuf *msgbuf, const char *message)
-{
- const char *p = message;
- while (*p && (*p != '\n' || p[1] != '\n'))
- p++;
-
- if (!*p)
- strbuf_addstr(msgbuf, sha1_to_hex(commit->object.sha1));
-
- p += 2;
- strbuf_addstr(msgbuf, p);
-}
-
-static void write_cherry_pick_head(void)
+static void write_cherry_pick_head(struct commit *commit)
{
int fd;
struct strbuf buf = STRBUF_INIT;
strbuf_release(&buf);
}
-static void advise(const char *advice, ...)
-{
- va_list params;
-
- va_start(params, advice);
- vreportf("hint: ", advice, params);
- va_end(params);
-}
-
static void print_advice(void)
{
char *msg = getenv("GIT_CHERRY_PICK_HELP");
return lookup_tree((const unsigned char *)EMPTY_TREE_SHA1_BIN);
}
-static NORETURN void die_dirty_index(const char *me)
+static int error_dirty_index(struct replay_opts *opts)
{
- if (read_cache_unmerged()) {
- die_resolve_conflict(me);
- } else {
- if (advice_commit_before_merge) {
- if (action == REVERT)
- die(_("Your local changes would be overwritten by revert.\n"
- "Please, commit your changes or stash them to proceed."));
- else
- die(_("Your local changes would be overwritten by cherry-pick.\n"
- "Please, commit your changes or stash them to proceed."));
- } else {
- if (action == REVERT)
- die(_("Your local changes would be overwritten by revert.\n"));
- else
- die(_("Your local changes would be overwritten by cherry-pick.\n"));
- }
- }
+ if (read_cache_unmerged())
+ return error_resolve_conflict(action_name(opts));
+
+ /* Different translation strings for cherry-pick and revert */
+ if (opts->action == CHERRY_PICK)
+ error(_("Your local changes would be overwritten by cherry-pick."));
+ else
+ error(_("Your local changes would be overwritten by revert."));
+
+ if (advice_commit_before_merge)
+ advise(_("Commit your changes or stash them to proceed."));
+ return -1;
}
static int fast_forward_to(const unsigned char *to, const unsigned char *from)
static int do_recursive_merge(struct commit *base, struct commit *next,
const char *base_label, const char *next_label,
- unsigned char *head, struct strbuf *msgbuf)
+ unsigned char *head, struct strbuf *msgbuf,
+ struct replay_opts *opts)
{
struct merge_options o;
struct tree *result, *next_tree, *base_tree, *head_tree;
next_tree = next ? next->tree : empty_tree();
base_tree = base ? base->tree : empty_tree();
- for (xopt = xopts; xopt != xopts + xopts_nr; xopt++)
+ for (xopt = opts->xopts; xopt != opts->xopts + opts->xopts_nr; xopt++)
parse_merge_opt(&o, *xopt);
clean = merge_trees(&o,
(write_cache(index_fd, active_cache, active_nr) ||
commit_locked_index(&index_lock)))
/* TRANSLATORS: %s will be "revert" or "cherry-pick" */
- die(_("%s: Unable to write new index file"), me);
+ die(_("%s: Unable to write new index file"), action_name(opts));
rollback_lock_file(&index_lock);
if (!clean) {
* If we are revert, or if our cherry-pick results in a hand merge,
* we had better say that the current user is responsible for that.
*/
-static int run_git_commit(const char *defmsg)
+static int run_git_commit(const char *defmsg, struct replay_opts *opts)
{
/* 6 is max possible length of our args array including NULL */
const char *args[6];
args[i++] = "commit";
args[i++] = "-n";
- if (signoff)
+ if (opts->signoff)
args[i++] = "-s";
- if (!edit) {
+ if (!opts->edit) {
args[i++] = "-F";
args[i++] = defmsg;
}
return run_command_v_opt(args, RUN_GIT_CMD);
}
-static int do_pick_commit(void)
+static int do_pick_commit(struct commit *commit, struct replay_opts *opts)
{
unsigned char head[20];
struct commit *base, *next, *parent;
struct strbuf msgbuf = STRBUF_INIT;
int res;
- if (no_commit) {
+ if (opts->no_commit) {
/*
* We do not intend to commit immediately. We just want to
* merge the differences in, so let's compute the tree
die (_("Your index file is unmerged."));
} else {
if (get_sha1("HEAD", head))
- die (_("You do not have a valid HEAD"));
+ return error(_("You do not have a valid HEAD"));
if (index_differs_from("HEAD", 0))
- die_dirty_index(me);
+ return error_dirty_index(opts);
}
discard_cache();
int cnt;
struct commit_list *p;
- if (!mainline)
- die(_("Commit %s is a merge but no -m option was given."),
- sha1_to_hex(commit->object.sha1));
+ if (!opts->mainline)
+ return error(_("Commit %s is a merge but no -m option was given."),
+ sha1_to_hex(commit->object.sha1));
for (cnt = 1, p = commit->parents;
- cnt != mainline && p;
+ cnt != opts->mainline && p;
cnt++)
p = p->next;
- if (cnt != mainline || !p)
- die(_("Commit %s does not have parent %d"),
- sha1_to_hex(commit->object.sha1), mainline);
+ if (cnt != opts->mainline || !p)
+ return error(_("Commit %s does not have parent %d"),
+ sha1_to_hex(commit->object.sha1), opts->mainline);
parent = p->item;
- } else if (0 < mainline)
- die(_("Mainline was specified but commit %s is not a merge."),
- sha1_to_hex(commit->object.sha1));
+ } else if (0 < opts->mainline)
+ return error(_("Mainline was specified but commit %s is not a merge."),
+ sha1_to_hex(commit->object.sha1));
else
parent = commit->parents->item;
- if (allow_ff && parent && !hashcmp(parent->object.sha1, head))
+ if (opts->allow_ff && parent && !hashcmp(parent->object.sha1, head))
return fast_forward_to(commit->object.sha1, head);
if (parent && parse_commit(parent) < 0)
/* TRANSLATORS: The first %s will be "revert" or
"cherry-pick", the second %s a SHA1 */
- die(_("%s: cannot parse parent commit %s"),
- me, sha1_to_hex(parent->object.sha1));
+ return error(_("%s: cannot parse parent commit %s"),
+ action_name(opts), sha1_to_hex(parent->object.sha1));
- if (get_message(commit->buffer, &msg) != 0)
- die(_("Cannot get commit message for %s"),
- sha1_to_hex(commit->object.sha1));
+ if (get_message(commit, &msg) != 0)
+ return error(_("Cannot get commit message for %s"),
+ sha1_to_hex(commit->object.sha1));
/*
* "commit" is an existing commit. We would want to apply
defmsg = git_pathdup("MERGE_MSG");
- if (action == REVERT) {
+ if (opts->action == REVERT) {
base = commit;
base_label = msg.label;
next = parent;
}
strbuf_addstr(&msgbuf, ".\n");
} else {
+ const char *p;
+
base = parent;
base_label = msg.parent_label;
next = commit;
next_label = msg.label;
- add_message_to_msg(&msgbuf, msg.message);
- if (no_replay) {
+
+ /*
+ * Append the commit log message to msgbuf; it starts
+ * after the tree, parent, author, committer
+ * information followed by "\n\n".
+ */
+ p = strstr(msg.message, "\n\n");
+ if (p) {
+ p += 2;
+ strbuf_addstr(&msgbuf, p);
+ }
+
+ if (opts->record_origin) {
strbuf_addstr(&msgbuf, "(cherry picked from commit ");
strbuf_addstr(&msgbuf, sha1_to_hex(commit->object.sha1));
strbuf_addstr(&msgbuf, ")\n");
}
- if (!no_commit)
- write_cherry_pick_head();
+ if (!opts->no_commit)
+ write_cherry_pick_head(commit);
}
- if (!strategy || !strcmp(strategy, "recursive") || action == REVERT) {
+ if (!opts->strategy || !strcmp(opts->strategy, "recursive") || opts->action == REVERT) {
res = do_recursive_merge(base, next, base_label, next_label,
- head, &msgbuf);
+ head, &msgbuf, opts);
write_message(&msgbuf, defmsg);
} else {
struct commit_list *common = NULL;
commit_list_insert(base, &common);
commit_list_insert(next, &remotes);
- res = try_merge_command(strategy, xopts_nr, xopts, common,
- sha1_to_hex(head), remotes);
+ res = try_merge_command(opts->strategy, opts->xopts_nr, opts->xopts,
+ common, sha1_to_hex(head), remotes);
free_commit_list(common);
free_commit_list(remotes);
}
if (res) {
- error(action == REVERT
+ error(opts->action == REVERT
? _("could not revert %s... %s")
: _("could not apply %s... %s"),
find_unique_abbrev(commit->object.sha1, DEFAULT_ABBREV),
msg.subject);
print_advice();
- rerere(allow_rerere_auto);
+ rerere(opts->allow_rerere_auto);
} else {
- if (!no_commit)
- res = run_git_commit(defmsg);
+ if (!opts->no_commit)
+ res = run_git_commit(defmsg, opts);
}
free_message(&msg);
return res;
}
-static void prepare_revs(struct rev_info *revs)
+static void prepare_revs(struct rev_info *revs, struct replay_opts *opts)
{
int argc;
init_revisions(revs, NULL);
revs->no_walk = 1;
- if (action != REVERT)
+ if (opts->action != REVERT)
revs->reverse = 1;
- argc = setup_revisions(commit_argc, commit_argv, revs, NULL);
+ argc = setup_revisions(opts->commit_argc, opts->commit_argv, revs, NULL);
if (argc > 1)
- usage(*revert_or_cherry_pick_usage());
+ usage(*revert_or_cherry_pick_usage(opts));
if (prepare_revision_walk(revs))
die(_("revision walk setup failed"));
die(_("empty commit set passed"));
}
-static void read_and_refresh_cache(const char *me)
+static void read_and_refresh_cache(struct replay_opts *opts)
{
static struct lock_file index_lock;
int index_fd = hold_locked_index(&index_lock, 0);
if (read_index_preload(&the_index, NULL) < 0)
- die(_("git %s: failed to read the index"), me);
+ die(_("git %s: failed to read the index"), action_name(opts));
refresh_index(&the_index, REFRESH_QUIET|REFRESH_UNMERGED, NULL, NULL, NULL);
if (the_index.cache_changed) {
if (write_index(&the_index, index_fd) ||
commit_locked_index(&index_lock))
- die(_("git %s: failed to refresh the index"), me);
+ die(_("git %s: failed to refresh the index"), action_name(opts));
}
rollback_lock_file(&index_lock);
}
-static int revert_or_cherry_pick(int argc, const char **argv)
+/*
+ * Append a commit to the end of the commit_list.
+ *
+ * next starts by pointing to the variable that holds the head of an
+ * empty commit_list, and is updated to point to the "next" field of
+ * the last item on the list as new commits are appended.
+ *
+ * Usage example:
+ *
+ * struct commit_list *list;
+ * struct commit_list **next = &list;
+ *
+ * next = commit_list_append(c1, next);
+ * next = commit_list_append(c2, next);
+ * assert(commit_list_count(list) == 2);
+ * return list;
+ */
+static struct commit_list **commit_list_append(struct commit *commit,
+ struct commit_list **next)
+{
+ struct commit_list *new = xmalloc(sizeof(struct commit_list));
+ new->item = commit;
+ *next = new;
+ new->next = NULL;
+ return &new->next;
+}
+
+static int format_todo(struct strbuf *buf, struct commit_list *todo_list,
+ struct replay_opts *opts)
+{
+ struct commit_list *cur = NULL;
+ struct commit_message msg = { NULL, NULL, NULL, NULL, NULL };
+ const char *sha1_abbrev = NULL;
+ const char *action_str = opts->action == REVERT ? "revert" : "pick";
+
+ for (cur = todo_list; cur; cur = cur->next) {
+ sha1_abbrev = find_unique_abbrev(cur->item->object.sha1, DEFAULT_ABBREV);
+ if (get_message(cur->item, &msg))
+ return error(_("Cannot get commit message for %s"), sha1_abbrev);
+ strbuf_addf(buf, "%s %s %s\n", action_str, sha1_abbrev, msg.subject);
+ }
+ return 0;
+}
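
format_todo() above writes one instruction per line as "<action> <abbreviated sha1> <subject>", which parse_insn_line() below reads back. An illustrative (entirely made-up) instruction sheet, as written to git_path(SEQ_TODO_FILE), for a three-commit cherry-pick:

    pick fa1afe1 Add frotz support
    pick b4dc0de Teach frotz about nitfol
    pick deadbee Update frotz documentation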
+
+static struct commit *parse_insn_line(char *start, struct replay_opts *opts)
+{
+ unsigned char commit_sha1[20];
+ char sha1_abbrev[40];
+ enum replay_action action;
+ int insn_len = 0;
+ char *p, *q;
+
+ if (!prefixcmp(start, "pick ")) {
+ action = CHERRY_PICK;
+ insn_len = strlen("pick");
+ p = start + insn_len + 1;
+ } else if (!prefixcmp(start, "revert ")) {
+ action = REVERT;
+ insn_len = strlen("revert");
+ p = start + insn_len + 1;
+ } else
+ return NULL;
+
+ q = strchr(p, ' ');
+ if (!q)
+ return NULL;
+ q++;
+
+ strlcpy(sha1_abbrev, p, q - p);
+
+ /*
+ * Verify that the action matches up with the one in
+ * opts; we don't support arbitrary instructions
+ */
+ if (action != opts->action) {
+ const char *action_str;
+ action_str = action == REVERT ? "revert" : "cherry-pick";
+ error(_("Cannot %s during a %s"), action_str, action_name(opts));
+ return NULL;
+ }
+
+ if (get_sha1(sha1_abbrev, commit_sha1) < 0)
+ return NULL;
+
+ return lookup_commit_reference(commit_sha1);
+}
+
+static int parse_insn_buffer(char *buf, struct commit_list **todo_list,
+ struct replay_opts *opts)
+{
+ struct commit_list **next = todo_list;
+ struct commit *commit;
+ char *p = buf;
+ int i;
+
+ for (i = 1; *p; i++) {
+ commit = parse_insn_line(p, opts);
+ if (!commit)
+ return error(_("Could not parse line %d."), i);
+ next = commit_list_append(commit, next);
+ p = strchrnul(p, '\n');
+ if (*p)
+ p++;
+ }
+ if (!*todo_list)
+ return error(_("No commits parsed."));
+ return 0;
+}
+
+static void read_populate_todo(struct commit_list **todo_list,
+ struct replay_opts *opts)
+{
+ const char *todo_file = git_path(SEQ_TODO_FILE);
+ struct strbuf buf = STRBUF_INIT;
+ int fd, res;
+
+ fd = open(todo_file, O_RDONLY);
+ if (fd < 0)
+ die_errno(_("Could not open %s."), todo_file);
+ if (strbuf_read(&buf, fd, 0) < 0) {
+ close(fd);
+ strbuf_release(&buf);
+ die(_("Could not read %s."), todo_file);
+ }
+ close(fd);
+
+ res = parse_insn_buffer(buf.buf, todo_list, opts);
+ strbuf_release(&buf);
+ if (res)
+ die(_("Unusable instruction sheet: %s"), todo_file);
+}
+
+static int populate_opts_cb(const char *key, const char *value, void *data)
+{
+ struct replay_opts *opts = data;
+ int error_flag = 1;
+
+ if (!value)
+ error_flag = 0;
+ else if (!strcmp(key, "options.no-commit"))
+ opts->no_commit = git_config_bool_or_int(key, value, &error_flag);
+ else if (!strcmp(key, "options.edit"))
+ opts->edit = git_config_bool_or_int(key, value, &error_flag);
+ else if (!strcmp(key, "options.signoff"))
+ opts->signoff = git_config_bool_or_int(key, value, &error_flag);
+ else if (!strcmp(key, "options.record-origin"))
+ opts->record_origin = git_config_bool_or_int(key, value, &error_flag);
+ else if (!strcmp(key, "options.allow-ff"))
+ opts->allow_ff = git_config_bool_or_int(key, value, &error_flag);
+ else if (!strcmp(key, "options.mainline"))
+ opts->mainline = git_config_int(key, value);
+ else if (!strcmp(key, "options.strategy"))
+ git_config_string(&opts->strategy, key, value);
+ else if (!strcmp(key, "options.strategy-option")) {
+ ALLOC_GROW(opts->xopts, opts->xopts_nr + 1, opts->xopts_alloc);
+ opts->xopts[opts->xopts_nr++] = xstrdup(value);
+ } else
+ return error(_("Invalid key: %s"), key);
+
+ if (!error_flag)
+ return error(_("Invalid value for %s: %s"), key, value);
+
+ return 0;
+}
+
+static void read_populate_opts(struct replay_opts **opts_ptr)
+{
+ const char *opts_file = git_path(SEQ_OPTS_FILE);
+
+ if (!file_exists(opts_file))
+ return;
+ if (git_config_from_file(populate_opts_cb, opts_file, *opts_ptr) < 0)
+ die(_("Malformed options sheet: %s"), opts_file);
+}
+
+static void walk_revs_populate_todo(struct commit_list **todo_list,
+ struct replay_opts *opts)
{
struct rev_info revs;
+ struct commit *commit;
+ struct commit_list **next;
- git_config(git_default_config, NULL);
- me = action == REVERT ? "revert" : "cherry-pick";
- setenv(GIT_REFLOG_ACTION, me, 0);
- parse_args(argc, argv);
-
- if (allow_ff) {
- if (signoff)
- die(_("cherry-pick --ff cannot be used with --signoff"));
- if (no_commit)
- die(_("cherry-pick --ff cannot be used with --no-commit"));
- if (no_replay)
- die(_("cherry-pick --ff cannot be used with -x"));
- if (edit)
- die(_("cherry-pick --ff cannot be used with --edit"));
+ prepare_revs(&revs, opts);
+
+ next = todo_list;
+ while ((commit = get_revision(&revs)))
+ next = commit_list_append(commit, next);
+}
+
+static int create_seq_dir(void)
+{
+ const char *seq_dir = git_path(SEQ_DIR);
+
+ if (file_exists(seq_dir))
+ return error(_("%s already exists."), seq_dir);
+ else if (mkdir(seq_dir, 0777) < 0)
+ die_errno(_("Could not create sequencer directory '%s'."), seq_dir);
+ return 0;
+}
+
+static void save_head(const char *head)
+{
+ const char *head_file = git_path(SEQ_HEAD_FILE);
+ static struct lock_file head_lock;
+ struct strbuf buf = STRBUF_INIT;
+ int fd;
+
+ fd = hold_lock_file_for_update(&head_lock, head_file, LOCK_DIE_ON_ERROR);
+ strbuf_addf(&buf, "%s\n", head);
+ if (write_in_full(fd, buf.buf, buf.len) < 0)
+ die_errno(_("Could not write to %s."), head_file);
+ if (commit_lock_file(&head_lock) < 0)
+ die(_("Error wrapping up %s."), head_file);
+}
+
+static void save_todo(struct commit_list *todo_list, struct replay_opts *opts)
+{
+ const char *todo_file = git_path(SEQ_TODO_FILE);
+ static struct lock_file todo_lock;
+ struct strbuf buf = STRBUF_INIT;
+ int fd;
+
+ fd = hold_lock_file_for_update(&todo_lock, todo_file, LOCK_DIE_ON_ERROR);
+ if (format_todo(&buf, todo_list, opts) < 0)
+ die(_("Could not format %s."), todo_file);
+ if (write_in_full(fd, buf.buf, buf.len) < 0) {
+ strbuf_release(&buf);
+ die_errno(_("Could not write to %s."), todo_file);
+ }
+ if (commit_lock_file(&todo_lock) < 0) {
+ strbuf_release(&buf);
+ die(_("Error wrapping up %s."), todo_file);
}
+ strbuf_release(&buf);
+}
- read_and_refresh_cache(me);
+static void save_opts(struct replay_opts *opts)
+{
+ const char *opts_file = git_path(SEQ_OPTS_FILE);
+
+ if (opts->no_commit)
+ git_config_set_in_file(opts_file, "options.no-commit", "true");
+ if (opts->edit)
+ git_config_set_in_file(opts_file, "options.edit", "true");
+ if (opts->signoff)
+ git_config_set_in_file(opts_file, "options.signoff", "true");
+ if (opts->record_origin)
+ git_config_set_in_file(opts_file, "options.record-origin", "true");
+ if (opts->allow_ff)
+ git_config_set_in_file(opts_file, "options.allow-ff", "true");
+ if (opts->mainline) {
+ struct strbuf buf = STRBUF_INIT;
+ strbuf_addf(&buf, "%d", opts->mainline);
+ git_config_set_in_file(opts_file, "options.mainline", buf.buf);
+ strbuf_release(&buf);
+ }
+ if (opts->strategy)
+ git_config_set_in_file(opts_file, "options.strategy", opts->strategy);
+ if (opts->xopts) {
+ int i;
+ for (i = 0; i < opts->xopts_nr; i++)
+ git_config_set_multivar_in_file(opts_file,
+ "options.strategy-option",
+ opts->xopts[i], "^$", 0);
+ }
+}
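
save_opts() records the options in config-file syntax via git_config_set_in_file(), and read_populate_opts()/populate_opts_cb() read them back with git_config_from_file(). An illustrative options sheet (the file named by SEQ_OPTS_FILE) might therefore look like:

    [options]
    	no-commit = true
    	strategy = recursive
    	strategy-option = ours
    	strategy-option = patience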
- prepare_revs(&revs);
+static int pick_commits(struct commit_list *todo_list, struct replay_opts *opts)
+{
+ struct commit_list *cur;
+ int res;
- while ((commit = get_revision(&revs))) {
- int res = do_pick_commit();
- if (res)
+ setenv(GIT_REFLOG_ACTION, action_name(opts), 0);
+ if (opts->allow_ff)
+ assert(!(opts->signoff || opts->no_commit ||
+ opts->record_origin || opts->edit));
+ read_and_refresh_cache(opts);
+
+ for (cur = todo_list; cur; cur = cur->next) {
+ save_todo(cur, opts);
+ res = do_pick_commit(cur->item, opts);
+ if (res) {
+ if (!cur->next)
+ /*
+ * An error was encountered while
+ * picking the last commit; the
+ * sequencer state is useless now --
+ * the user simply needs to resolve
+ * the conflict and commit
+ */
+ remove_sequencer_state(0);
return res;
+ }
}
+ /*
+ * Sequence of picks finished successfully; clean up by
+ * removing the .git/sequencer directory
+ */
+ remove_sequencer_state(1);
return 0;
}
+static int pick_revisions(struct replay_opts *opts)
+{
+ struct commit_list *todo_list = NULL;
+ unsigned char sha1[20];
+
+ read_and_refresh_cache(opts);
+
+ /*
+ * Decide what to do depending on the arguments; a fresh
+ * cherry-pick should be handled differently from an existing
+ * one that is being continued
+ */
+ if (opts->subcommand == REPLAY_RESET) {
+ remove_sequencer_state(1);
+ return 0;
+ } else if (opts->subcommand == REPLAY_CONTINUE) {
+ if (!file_exists(git_path(SEQ_TODO_FILE)))
+ goto error;
+ read_populate_opts(&opts);
+ read_populate_todo(&todo_list, opts);
+
+ /* Verify that the conflict has been resolved */
+ if (!index_differs_from("HEAD", 0))
+ todo_list = todo_list->next;
+ } else {
+ /*
+ * Start a new cherry-pick/revert sequence; but
+ * first, make sure that an existing one isn't in
+ * progress
+ */
+
+ walk_revs_populate_todo(&todo_list, opts);
+ if (create_seq_dir() < 0) {
+ error(_("A cherry-pick or revert is in progress."));
+ advise(_("Use --continue to continue the operation"));
+ advise(_("or --reset to forget about it"));
+ return -1;
+ }
+ if (get_sha1("HEAD", sha1)) {
+ if (opts->action == REVERT)
+ return error(_("Can't revert as initial commit"));
+ return error(_("Can't cherry-pick into empty head"));
+ }
+ save_head(sha1_to_hex(sha1));
+ save_opts(opts);
+ }
+ return pick_commits(todo_list, opts);
+error:
+ return error(_("No %s in progress"), action_name(opts));
+}
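
Putting the pieces together, the intended flow with the new subcommands looks roughly like this (a sketch; the saved state lives in the .git/sequencer directory mentioned in the comments above):

    git cherry-pick <commit>...   # queue one instruction per commit and start picking
    # a pick that conflicts stops the sequence; resolve, then commit the result
    git cherry-pick --continue    # resume with the remaining instructions
    git cherry-pick --reset       # or forget the saved state entirely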
+
int cmd_revert(int argc, const char **argv, const char *prefix)
{
+ struct replay_opts opts;
+ int res;
+
+ memset(&opts, 0, sizeof(opts));
if (isatty(0))
- edit = 1;
- action = REVERT;
- return revert_or_cherry_pick(argc, argv);
+ opts.edit = 1;
+ opts.action = REVERT;
+ git_config(git_default_config, NULL);
+ parse_args(argc, argv, &opts);
+ res = pick_revisions(&opts);
+ if (res < 0)
+ die(_("revert failed"));
+ return res;
}
int cmd_cherry_pick(int argc, const char **argv, const char *prefix)
{
- action = CHERRY_PICK;
- return revert_or_cherry_pick(argc, argv);
+ struct replay_opts opts;
+ int res;
+
+ memset(&opts, 0, sizeof(opts));
+ opts.action = CHERRY_PICK;
+ git_config(git_default_config, NULL);
+ parse_args(argc, argv, &opts);
+ res = pick_revisions(&opts);
+ if (res < 0)
+ die(_("cherry-pick failed"));
+ return res;
}
args.force_update = 1;
continue;
}
- if (!strcmp(arg, "--quiet")) {
- args.quiet = 1;
- continue;
- }
if (!strcmp(arg, "--verbose")) {
args.verbose = 1;
continue;
fd[0] = 0;
fd[1] = 1;
} else {
- struct strbuf sb = STRBUF_INIT;
- strbuf_addstr(&sb, receivepack);
- if (args.quiet)
- strbuf_addstr(&sb, " --quiet");
- conn = git_connect(fd, dest, sb.buf,
+ conn = git_connect(fd, dest, receivepack,
args.verbose ? CONNECT_VERBOSE : 0);
- strbuf_release(&sb);
}
memset(&extra_have, 0, sizeof(extra_have));
return 0;
}
-int unbundle(struct bundle_header *header, int bundle_fd)
+int unbundle(struct bundle_header *header, int bundle_fd, int flags)
{
const char *argv_index_pack[] = {"index-pack",
- "--fix-thin", "--stdin", NULL};
+ "--fix-thin", "--stdin", NULL, NULL};
struct child_process ip;
+ if (flags & BUNDLE_VERBOSE)
+ argv_index_pack[3] = "-v";
+
if (verify_bundle(header, 0))
return -1;
memset(&ip, 0, sizeof(ip));
int create_bundle(struct bundle_header *header, const char *path,
int argc, const char **argv);
int verify_bundle(struct bundle_header *header, int verbose);
-int unbundle(struct bundle_header *header, int bundle_fd);
+#define BUNDLE_VERBOSE 1
+int unbundle(struct bundle_header *header, int bundle_fd, int flags);
int list_bundle_refs(struct bundle_header *header,
int argc, const char **argv);
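
A hypothetical caller of the updated unbundle() signature; passing BUNDLE_VERBOSE merely makes it add "-v" to the underlying index-pack invocation shown earlier, and 0 keeps the old behaviour:

    /* sketch: 'header' and 'bundle_fd' as obtained from read_bundle_header() */
    if (unbundle(&header, bundle_fd, verbose ? BUNDLE_VERBOSE : 0))
    	return error("could not unbundle");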
extern int git_config_maybe_bool(const char *, const char *);
extern int git_config_string(const char **, const char *, const char *);
extern int git_config_pathname(const char **, const char *, const char *);
+extern int git_config_set_in_file(const char *, const char *, const char *);
extern int git_config_set(const char *, const char *);
extern int git_config_parse_key(const char *, char **, int *);
extern int git_config_set_multivar(const char *, const char *, const char *, int);
+extern int git_config_set_multivar_in_file(const char *, const char *, const char *, const char *, int);
extern int git_config_rename_section(const char *, const char *);
extern const char *git_etc_gitconfig(void);
extern int check_repository_format_version(const char *var, const char *value, void *cb);
--- /dev/null
+/* obstack.c - subroutines used implicitly by object stack macros
+ Copyright (C) 1988, 1989, 1990, 1991, 1992, 1993, 1994, 1996, 1997, 1998,
+ 1999, 2000, 2001, 2002, 2003, 2004, 2005 Free Software Foundation, Inc.
+ This file is part of the GNU C Library.
+
+ The GNU C Library is free software; you can redistribute it and/or
+ modify it under the terms of the GNU Lesser General Public
+ License as published by the Free Software Foundation; either
+ version 2.1 of the License, or (at your option) any later version.
+
+ The GNU C Library is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ Lesser General Public License for more details.
+
+ You should have received a copy of the GNU Lesser General Public
+ License along with the GNU C Library; if not, write to the Free
+ Software Foundation, Inc., 51 Franklin Street, Fifth Floor,
+ Boston, MA 02110-1301, USA. */
+
+#include "git-compat-util.h"
+#include <gettext.h>
+#include "obstack.h"
+
+/* NOTE BEFORE MODIFYING THIS FILE: This version number must be
+ incremented whenever callers compiled using an old obstack.h can no
+ longer properly call the functions in this obstack.c. */
+#define OBSTACK_INTERFACE_VERSION 1
+
+/* Comment out all this code if we are using the GNU C Library, and are not
+ actually compiling the library itself, and the installed library
+ supports the same library interface we do. This code is part of the GNU
+ C Library, but also included in many other GNU distributions. Compiling
+ and linking in this code is a waste when using the GNU C library
+ (especially if it is a shared library). Rather than having every GNU
+ program understand `configure --with-gnu-libc' and omit the object
+ files, it is simpler to just do this in the source for each such file. */
+
+#include <stdio.h> /* Random thing to get __GNU_LIBRARY__. */
+#if !defined _LIBC && defined __GNU_LIBRARY__ && __GNU_LIBRARY__ > 1
+# include <gnu-versions.h>
+# if _GNU_OBSTACK_INTERFACE_VERSION == OBSTACK_INTERFACE_VERSION
+# define ELIDE_CODE
+# endif
+#endif
+
+#include <stddef.h>
+
+#ifndef ELIDE_CODE
+
+
+# if HAVE_INTTYPES_H
+# include <inttypes.h>
+# endif
+# if HAVE_STDINT_H || defined _LIBC
+# include <stdint.h>
+# endif
+
+/* Determine default alignment. */
+union fooround
+{
+ uintmax_t i;
+ long double d;
+ void *p;
+};
+struct fooalign
+{
+ char c;
+ union fooround u;
+};
+/* If malloc were really smart, it would round addresses to DEFAULT_ALIGNMENT.
+ But in fact it might be less smart and round addresses to as much as
+ DEFAULT_ROUNDING. So we prepare for it to do that. */
+enum
+ {
+ DEFAULT_ALIGNMENT = offsetof (struct fooalign, u),
+ DEFAULT_ROUNDING = sizeof (union fooround)
+ };
+
+/* When we copy a long block of data, this is the unit to do it with.
+ On some machines, copying successive ints does not work;
+ in such a case, redefine COPYING_UNIT to `long' (if that works)
+ or `char' as a last resort. */
+# ifndef COPYING_UNIT
+# define COPYING_UNIT int
+# endif
+
+
+/* The functions allocating more room by calling `obstack_chunk_alloc'
+ jump to the handler pointed to by `obstack_alloc_failed_handler'.
+ This can be set to a user defined function which should either
+ abort gracefully or use longjump - but shouldn't return. This
+ variable by default points to the internal function
+ `print_and_abort'. */
+static void print_and_abort (void);
+void (*obstack_alloc_failed_handler) (void) = print_and_abort;
+
+# ifdef _LIBC
+# if SHLIB_COMPAT (libc, GLIBC_2_0, GLIBC_2_3_4)
+/* A looong time ago (before 1994, anyway; we're not sure) this global variable
+ was used by non-GNU-C macros to avoid multiple evaluation. The GNU C
+ library still exports it because somebody might use it. */
+struct obstack *_obstack_compat;
+compat_symbol (libc, _obstack_compat, _obstack, GLIBC_2_0);
+# endif
+# endif
+
+/* Define a macro that either calls functions with the traditional malloc/free
+ calling interface, or calls functions with the mmalloc/mfree interface
+ (that adds an extra first argument), based on the state of use_extra_arg.
+ For free, do not use ?:, since some compilers, like the MIPS compilers,
+ do not allow (expr) ? void : void. */
+
+# define CALL_CHUNKFUN(h, size) \
+ (((h) -> use_extra_arg) \
+ ? (*(h)->chunkfun) ((h)->extra_arg, (size)) \
+ : (*(struct _obstack_chunk *(*) (long)) (h)->chunkfun) ((size)))
+
+# define CALL_FREEFUN(h, old_chunk) \
+ do { \
+ if ((h) -> use_extra_arg) \
+ (*(h)->freefun) ((h)->extra_arg, (old_chunk)); \
+ else \
+ (*(void (*) (void *)) (h)->freefun) ((old_chunk)); \
+ } while (0)
+
+\f
+/* Initialize an obstack H for use. Specify chunk size SIZE (0 means default).
+ Objects start on multiples of ALIGNMENT (0 means use default).
+ CHUNKFUN is the function to use to allocate chunks,
+ and FREEFUN the function to free them.
+
+ Return nonzero if successful, calls obstack_alloc_failed_handler if
+ allocation fails. */
+
+int
+_obstack_begin (struct obstack *h,
+ int size, int alignment,
+ void *(*chunkfun) (long),
+ void (*freefun) (void *))
+{
+ register struct _obstack_chunk *chunk; /* points to new chunk */
+
+ if (alignment == 0)
+ alignment = DEFAULT_ALIGNMENT;
+ if (size == 0)
+ /* Default size is what GNU malloc can fit in a 4096-byte block. */
+ {
+ /* 12 is sizeof (mhead) and 4 is EXTRA from GNU malloc.
+ Use the values for range checking, because if range checking is off,
+ the extra bytes won't be missed terribly, but if range checking is on
+ and we used a larger request, a whole extra 4096 bytes would be
+ allocated.
+
+ These numbers are irrelevant to the new GNU malloc. I suspect it is
+ less sensitive to the size of the request. */
+ int extra = ((((12 + DEFAULT_ROUNDING - 1) & ~(DEFAULT_ROUNDING - 1))
+ + 4 + DEFAULT_ROUNDING - 1)
+ & ~(DEFAULT_ROUNDING - 1));
+ size = 4096 - extra;
+ }
+
+ h->chunkfun = (struct _obstack_chunk * (*)(void *, long)) chunkfun;
+ h->freefun = (void (*) (void *, struct _obstack_chunk *)) freefun;
+ h->chunk_size = size;
+ h->alignment_mask = alignment - 1;
+ h->use_extra_arg = 0;
+
+ chunk = h->chunk = CALL_CHUNKFUN (h, h -> chunk_size);
+ if (!chunk)
+ (*obstack_alloc_failed_handler) ();
+ h->next_free = h->object_base = __PTR_ALIGN ((char *) chunk, chunk->contents,
+ alignment - 1);
+ h->chunk_limit = chunk->limit
+ = (char *) chunk + h->chunk_size;
+ chunk->prev = NULL;
+ /* The initial chunk now contains no empty object. */
+ h->maybe_empty_object = 0;
+ h->alloc_failed = 0;
+ return 1;
+}
+
+int
+_obstack_begin_1 (struct obstack *h, int size, int alignment,
+ void *(*chunkfun) (void *, long),
+ void (*freefun) (void *, void *),
+ void *arg)
+{
+ register struct _obstack_chunk *chunk; /* points to new chunk */
+
+ if (alignment == 0)
+ alignment = DEFAULT_ALIGNMENT;
+ if (size == 0)
+ /* Default size is what GNU malloc can fit in a 4096-byte block. */
+ {
+ /* 12 is sizeof (mhead) and 4 is EXTRA from GNU malloc.
+ Use the values for range checking, because if range checking is off,
+ the extra bytes won't be missed terribly, but if range checking is on
+ and we used a larger request, a whole extra 4096 bytes would be
+ allocated.
+
+ These numbers are irrelevant to the new GNU malloc. I suspect it is
+ less sensitive to the size of the request. */
+ int extra = ((((12 + DEFAULT_ROUNDING - 1) & ~(DEFAULT_ROUNDING - 1))
+ + 4 + DEFAULT_ROUNDING - 1)
+ & ~(DEFAULT_ROUNDING - 1));
+ size = 4096 - extra;
+ }
+
+ h->chunkfun = (struct _obstack_chunk * (*)(void *,long)) chunkfun;
+ h->freefun = (void (*) (void *, struct _obstack_chunk *)) freefun;
+ h->chunk_size = size;
+ h->alignment_mask = alignment - 1;
+ h->extra_arg = arg;
+ h->use_extra_arg = 1;
+
+ chunk = h->chunk = CALL_CHUNKFUN (h, h -> chunk_size);
+ if (!chunk)
+ (*obstack_alloc_failed_handler) ();
+ h->next_free = h->object_base = __PTR_ALIGN ((char *) chunk, chunk->contents,
+ alignment - 1);
+ h->chunk_limit = chunk->limit
+ = (char *) chunk + h->chunk_size;
+ chunk->prev = NULL;
+ /* The initial chunk now contains no empty object. */
+ h->maybe_empty_object = 0;
+ h->alloc_failed = 0;
+ return 1;
+}
+
+/* Allocate a new current chunk for the obstack *H
+ on the assumption that LENGTH bytes need to be added
+ to the current object, or a new object of length LENGTH allocated.
+ Copies any partial object from the end of the old chunk
+ to the beginning of the new one. */
+
+void
+_obstack_newchunk (struct obstack *h, int length)
+{
+ register struct _obstack_chunk *old_chunk = h->chunk;
+ register struct _obstack_chunk *new_chunk;
+ register long new_size;
+ register long obj_size = h->next_free - h->object_base;
+ register long i;
+ long already;
+ char *object_base;
+
+ /* Compute size for new chunk. */
+ new_size = (obj_size + length) + (obj_size >> 3) + h->alignment_mask + 100;
+ if (new_size < h->chunk_size)
+ new_size = h->chunk_size;
+
+ /* Allocate and initialize the new chunk. */
+ new_chunk = CALL_CHUNKFUN (h, new_size);
+ if (!new_chunk)
+ (*obstack_alloc_failed_handler) ();
+ h->chunk = new_chunk;
+ new_chunk->prev = old_chunk;
+ new_chunk->limit = h->chunk_limit = (char *) new_chunk + new_size;
+
+ /* Compute an aligned object_base in the new chunk */
+ object_base =
+ __PTR_ALIGN ((char *) new_chunk, new_chunk->contents, h->alignment_mask);
+
+ /* Move the existing object to the new chunk.
+ Word at a time is fast and is safe if the object
+ is sufficiently aligned. */
+ if (h->alignment_mask + 1 >= DEFAULT_ALIGNMENT)
+ {
+ for (i = obj_size / sizeof (COPYING_UNIT) - 1;
+ i >= 0; i--)
+ ((COPYING_UNIT *)object_base)[i]
+ = ((COPYING_UNIT *)h->object_base)[i];
+ /* We used to copy the odd few remaining bytes as one extra COPYING_UNIT,
+ but that can cross a page boundary on a machine
+ which does not do strict alignment for COPYING_UNITS. */
+ already = obj_size / sizeof (COPYING_UNIT) * sizeof (COPYING_UNIT);
+ }
+ else
+ already = 0;
+ /* Copy remaining bytes one by one. */
+ for (i = already; i < obj_size; i++)
+ object_base[i] = h->object_base[i];
+
+ /* If the object just copied was the only data in OLD_CHUNK,
+ free that chunk and remove it from the chain.
+ But not if that chunk might contain an empty object. */
+ if (! h->maybe_empty_object
+ && (h->object_base
+ == __PTR_ALIGN ((char *) old_chunk, old_chunk->contents,
+ h->alignment_mask)))
+ {
+ new_chunk->prev = old_chunk->prev;
+ CALL_FREEFUN (h, old_chunk);
+ }
+
+ h->object_base = object_base;
+ h->next_free = h->object_base + obj_size;
+ /* The new chunk certainly contains no empty object yet. */
+ h->maybe_empty_object = 0;
+}
+# ifdef _LIBC
+libc_hidden_def (_obstack_newchunk)
+# endif
+
+/* Return nonzero if object OBJ has been allocated from obstack H.
+ This is here for debugging.
+ If you use it in a program, you are probably losing. */
+
+/* Suppress -Wmissing-prototypes warning. We don't want to declare this in
+ obstack.h because it is just for debugging. */
+int _obstack_allocated_p (struct obstack *h, void *obj);
+
+int
+_obstack_allocated_p (struct obstack *h, void *obj)
+{
+ register struct _obstack_chunk *lp; /* below addr of any objects in this chunk */
+ register struct _obstack_chunk *plp; /* point to previous chunk if any */
+
+ lp = (h)->chunk;
+ /* We use >= rather than > since the object cannot be exactly at
+ the beginning of the chunk but might be an empty object exactly
+ at the end of an adjacent chunk. */
+ while (lp != NULL && ((void *) lp >= obj || (void *) (lp)->limit < obj))
+ {
+ plp = lp->prev;
+ lp = plp;
+ }
+ return lp != NULL;
+}
+\f
+/* Free objects in obstack H, including OBJ and everything allocated
+ more recently than OBJ. If OBJ is zero, free everything in H. */
+
+# undef obstack_free
+
+void
+obstack_free (struct obstack *h, void *obj)
+{
+ register struct _obstack_chunk *lp; /* below addr of any objects in this chunk */
+ register struct _obstack_chunk *plp; /* point to previous chunk if any */
+
+ lp = h->chunk;
+ /* We use >= because there cannot be an object at the beginning of a chunk.
+ But there can be an empty object at that address
+ at the end of another chunk. */
+ while (lp != NULL && ((void *) lp >= obj || (void *) (lp)->limit < obj))
+ {
+ plp = lp->prev;
+ CALL_FREEFUN (h, lp);
+ lp = plp;
+ /* If we switch chunks, we can't tell whether the new current
+ chunk contains an empty object, so assume that it may. */
+ h->maybe_empty_object = 1;
+ }
+ if (lp)
+ {
+ h->object_base = h->next_free = (char *) (obj);
+ h->chunk_limit = lp->limit;
+ h->chunk = lp;
+ }
+ else if (obj != NULL)
+ /* obj is not in any of the chunks! */
+ abort ();
+}
+
+# ifdef _LIBC
+/* Older versions of libc used a function _obstack_free intended to be
+ called by non-GCC compilers. */
+strong_alias (obstack_free, _obstack_free)
+# endif
+\f
+int
+_obstack_memory_used (struct obstack *h)
+{
+ register struct _obstack_chunk* lp;
+ register int nbytes = 0;
+
+ for (lp = h->chunk; lp != NULL; lp = lp->prev)
+ {
+ nbytes += lp->limit - (char *) lp;
+ }
+ return nbytes;
+}
+\f
+# ifdef _LIBC
+# include <libio/iolibio.h>
+# endif
+
+# ifndef __attribute__
+/* This feature is available in gcc versions 2.5 and later. */
+# if __GNUC__ < 2 || (__GNUC__ == 2 && __GNUC_MINOR__ < 5)
+# define __attribute__(Spec) /* empty */
+# endif
+# endif
+
+static void
+print_and_abort (void)
+{
+ /* Don't change any of these strings. Yes, it would be possible to add
+ the newline to the string and use fputs or so. But this must not
+ happen because the "memory exhausted" message appears in other places
+ like this and the translation should be reused instead of creating
+ a very similar string which requires a separate translation. */
+# ifdef _LIBC
+ (void) __fxprintf (NULL, "%s\n", _("memory exhausted"));
+# else
+ fprintf (stderr, "%s\n", _("memory exhausted"));
+# endif
+ exit (1);
+}
+
+#endif /* !ELIDE_CODE */
--- /dev/null
+/* obstack.h - object stack macros
+ Copyright (C) 1988-1994,1996-1999,2003,2004,2005,2009
+ Free Software Foundation, Inc.
+ This file is part of the GNU C Library.
+
+ The GNU C Library is free software; you can redistribute it and/or
+ modify it under the terms of the GNU Lesser General Public
+ License as published by the Free Software Foundation; either
+ version 2.1 of the License, or (at your option) any later version.
+
+ The GNU C Library is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ Lesser General Public License for more details.
+
+ You should have received a copy of the GNU Lesser General Public
+ License along with the GNU C Library; if not, write to the Free
+ Software Foundation, Inc., 51 Franklin Street, Fifth Floor,
+ Boston, MA 02110-1301, USA. */
+
+/* Summary:
+
+All the apparent functions defined here are macros. The idea
+is that you would use these pre-tested macros to solve a
+very specific set of problems, and they would run fast.
+Caution: no side-effects in arguments please!! They may be
+evaluated MANY times!!
+
+These macros operate a stack of objects. Each object starts life
+small, and may grow to maturity. (Consider building a word syllable
+by syllable.) An object can move while it is growing. Once it has
+been "finished" it never changes address again. So the "top of the
+stack" is typically an immature growing object, while the rest of the
+stack is of mature, fixed size and fixed address objects.
+
+These routines grab large chunks of memory, using a function you
+supply, called `obstack_chunk_alloc'. On occasion, they free chunks,
+by calling `obstack_chunk_free'. You must define them and declare
+them before using any obstack macros.
+
+Each independent stack is represented by a `struct obstack'.
+Each of the obstack macros expects a pointer to such a structure
+as the first argument.
+
+One motivation for this package is the problem of growing char strings
+in symbol tables. Unless you are "fascist pig with a read-only mind"
+--Gosper's immortal quote from HAKMEM item 154, out of context--you
+would not like to put any arbitrary upper limit on the length of your
+symbols.
+
+In practice this often means you will build many short symbols and a
+few long symbols. At the time you are reading a symbol you don't know
+how long it is. One traditional method is to read a symbol into a
+buffer, realloc()ating the buffer every time you try to read a symbol
+that is longer than the buffer. This is beaut, but you still will
+want to copy the symbol from the buffer to a more permanent
+symbol-table entry say about half the time.
+
+With obstacks, you can work differently. Use one obstack for all symbol
+names. As you read a symbol, grow the name in the obstack gradually.
+When the name is complete, finalize it. Then, if the symbol exists already,
+free the newly read name.
+
+The way we do this is to take a large chunk, allocating memory from
+low addresses. When you want to build a symbol in the chunk you just
+add chars above the current "high water mark" in the chunk. When you
+have finished adding chars, because you got to the end of the symbol,
+you know how long the chars are, and you can create a new object.
+Mostly the chars will not burst over the highest address of the chunk,
+because you would typically expect a chunk to be (say) 100 times as
+long as an average object.
+
+In case that isn't clear, when we have enough chars to make up
+the object, THEY ARE ALREADY CONTIGUOUS IN THE CHUNK (guaranteed)
+so we just point to it where it lies. No moving of chars is
+needed and this is the second win: potentially long strings need
+never be explicitly shuffled. Once an object is formed, it does not
+change its address during its lifetime.
+
+When the chars burst over a chunk boundary, we allocate a larger
+chunk, and then copy the partly formed object from the end of the old
+chunk to the beginning of the new larger chunk. We then carry on
+accreting characters to the end of the object as we normally would.
+
+A special macro is provided to add a single char at a time to a
+growing object. This allows the use of register variables, which
+break the ordinary 'growth' macro.
+
+Summary:
+ We allocate large chunks.
+ We carve out one object at a time from the current chunk.
+ Once carved, an object never moves.
+ We are free to append data of any size to the currently
+ growing object.
+ Exactly one object is growing in an obstack at any one time.
+ You can run one obstack per control block.
+ You may have as many control blocks as you dare.
+ Because of the way we do it, you can `unwind' an obstack
+ back to a previous state. (You may remove objects much
+ as you would with a stack.)
+*/
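
A minimal usage sketch of the macro interface documented above (as the summary says, the caller must define obstack_chunk_alloc and obstack_chunk_free before using any obstack macro):

    #include <stdlib.h>
    #include "obstack.h"

    #define obstack_chunk_alloc malloc
    #define obstack_chunk_free  free

    static struct obstack names;	/* obstack_init(&names) before first use */

    static char *copy_name(const char *s, size_t len)
    {
    	obstack_grow0(&names, s, len);	/* accrete the chars, then a NUL */
    	return obstack_finish(&names);	/* the object is now fixed in place */
    }

    /* obstack_free(&names, NULL) releases everything in the obstack */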
+
+
+/* Don't do the contents of this file more than once. */
+
+#ifndef _OBSTACK_H
+#define _OBSTACK_H 1
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+\f
+/* We need the type of a pointer subtraction. If __PTRDIFF_TYPE__ is
+ defined, as with GNU C, use that; that way we don't pollute the
+ namespace with <stddef.h>'s symbols. Otherwise, include <stddef.h>
+ and use ptrdiff_t. */
+
+#ifdef __PTRDIFF_TYPE__
+# define PTR_INT_TYPE __PTRDIFF_TYPE__
+#else
+# include <stddef.h>
+# define PTR_INT_TYPE ptrdiff_t
+#endif
+
+/* If B is the base of an object addressed by P, return the result of
+ aligning P to the next multiple of A + 1. B and P must be of type
+ char *. A + 1 must be a power of 2. */
+
+#define __BPTR_ALIGN(B, P, A) ((B) + (((P) - (B) + (A)) & ~(A)))
+
+/* Similar to __BPTR_ALIGN (B, P, A), except optimize the common case
+ where pointers can be converted to integers, aligned as integers,
+ and converted back again. If PTR_INT_TYPE is narrower than a
+ pointer (e.g., the AS/400), play it safe and compute the alignment
+ relative to B. Otherwise, use the faster strategy of computing the
+ alignment relative to 0. */
+
+#define __PTR_ALIGN(B, P, A) \
+ __BPTR_ALIGN (sizeof (PTR_INT_TYPE) < sizeof (void *) ? (B) : (char *) 0, \
+ P, A)
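
A quick worked example of the rounding these macros perform, using 8-byte alignment (A = 7, so A + 1 = 8) and B = 0:

    __BPTR_ALIGN (0, 13, 7) = 0 + ((13 - 0 + 7) & ~7)
                            = 20 & ~7
                            = 16

that is, P is rounded up to the next multiple of A + 1.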
+
+#include <string.h>
+
+struct _obstack_chunk /* Lives at front of each chunk. */
+{
+ char *limit; /* 1 past end of this chunk */
+ struct _obstack_chunk *prev; /* address of prior chunk or NULL */
+ char contents[4]; /* objects begin here */
+};
+
+struct obstack /* control current object in current chunk */
+{
+ long chunk_size; /* preferred size to allocate chunks in */
+ struct _obstack_chunk *chunk; /* address of current struct obstack_chunk */
+ char *object_base; /* address of object we are building */
+ char *next_free; /* where to add next char to current object */
+ char *chunk_limit; /* address of char after current chunk */
+ union
+ {
+ PTR_INT_TYPE tempint;
+ void *tempptr;
+ } temp; /* Temporary for some macros. */
+ int alignment_mask; /* Mask of alignment for each object. */
+ /* These prototypes vary based on `use_extra_arg', and we use
+ casts to the prototypeless function type in all assignments,
+ but having prototypes here quiets -Wstrict-prototypes. */
+ struct _obstack_chunk *(*chunkfun) (void *, long);
+ void (*freefun) (void *, struct _obstack_chunk *);
+ void *extra_arg; /* first arg for chunk alloc/dealloc funcs */
+ unsigned use_extra_arg:1; /* chunk alloc/dealloc funcs take extra arg */
+ unsigned maybe_empty_object:1;/* There is a possibility that the current
+ chunk contains a zero-length object. This
+ prevents freeing the chunk if we allocate
+ a bigger chunk to replace it. */
+ unsigned alloc_failed:1; /* No longer used, as we now call the failed
+ handler on error, but retained for binary
+ compatibility. */
+};
+
+/* Declare the external functions we use; they are in obstack.c. */
+
+extern void _obstack_newchunk (struct obstack *, int);
+extern int _obstack_begin (struct obstack *, int, int,
+ void *(*) (long), void (*) (void *));
+extern int _obstack_begin_1 (struct obstack *, int, int,
+ void *(*) (void *, long),
+ void (*) (void *, void *), void *);
+extern int _obstack_memory_used (struct obstack *);
+
+void obstack_free (struct obstack *, void *);
+
+\f
+/* Error handler called when `obstack_chunk_alloc' failed to allocate
+ more memory. This can be set to a user defined function which
+ should either abort gracefully or use longjump - but shouldn't
+ return. The default action is to print a message and abort. */
+extern void (*obstack_alloc_failed_handler) (void);
+\f
+/* Pointer to beginning of object being allocated or to be allocated next.
+ Note that this might not be the final address of the object
+ because a new chunk might be needed to hold the final size. */
+
+#define obstack_base(h) ((void *) (h)->object_base)
+
+/* Size for allocating ordinary chunks. */
+
+#define obstack_chunk_size(h) ((h)->chunk_size)
+
+/* Pointer to next byte not yet allocated in current chunk. */
+
+#define obstack_next_free(h) ((h)->next_free)
+
+/* Mask specifying low bits that should be clear in address of an object. */
+
+#define obstack_alignment_mask(h) ((h)->alignment_mask)
+
+/* To prevent prototype warnings provide complete argument list. */
+#define obstack_init(h) \
+ _obstack_begin ((h), 0, 0, \
+ (void *(*) (long)) obstack_chunk_alloc, \
+ (void (*) (void *)) obstack_chunk_free)
+
+#define obstack_begin(h, size) \
+ _obstack_begin ((h), (size), 0, \
+ (void *(*) (long)) obstack_chunk_alloc, \
+ (void (*) (void *)) obstack_chunk_free)
+
+#define obstack_specify_allocation(h, size, alignment, chunkfun, freefun) \
+ _obstack_begin ((h), (size), (alignment), \
+ (void *(*) (long)) (chunkfun), \
+ (void (*) (void *)) (freefun))
+
+#define obstack_specify_allocation_with_arg(h, size, alignment, chunkfun, freefun, arg) \
+ _obstack_begin_1 ((h), (size), (alignment), \
+ (void *(*) (void *, long)) (chunkfun), \
+ (void (*) (void *, void *)) (freefun), (arg))
+
+#define obstack_chunkfun(h, newchunkfun) \
+ ((h) -> chunkfun = (struct _obstack_chunk *(*)(void *, long)) (newchunkfun))
+
+#define obstack_freefun(h, newfreefun) \
+ ((h) -> freefun = (void (*)(void *, struct _obstack_chunk *)) (newfreefun))
+
+#define obstack_1grow_fast(h,achar) (*((h)->next_free)++ = (achar))
+
+#define obstack_blank_fast(h,n) ((h)->next_free += (n))
+
+#define obstack_memory_used(h) _obstack_memory_used (h)
+\f
+#if defined __GNUC__ && defined __STDC__ && __STDC__
+/* NextStep 2.0 cc is really gcc 1.93 but it defines __GNUC__ = 2 and
+ does not implement __extension__. But that compiler doesn't define
+ __GNUC_MINOR__. */
+# if __GNUC__ < 2 || (__NeXT__ && !__GNUC_MINOR__)
+# define __extension__
+# endif
+
+/* For GNU C, if not -traditional,
+ we can define these macros to compute all args only once
+ without using a global variable.
+ Also, we can avoid using the `temp' slot, to make faster code. */
+
+# define obstack_object_size(OBSTACK) \
+ __extension__ \
+ ({ struct obstack const *__o = (OBSTACK); \
+ (unsigned) (__o->next_free - __o->object_base); })
+
+# define obstack_room(OBSTACK) \
+ __extension__ \
+ ({ struct obstack const *__o = (OBSTACK); \
+ (unsigned) (__o->chunk_limit - __o->next_free); })
+
+# define obstack_make_room(OBSTACK,length) \
+__extension__ \
+({ struct obstack *__o = (OBSTACK); \
+ int __len = (length); \
+ if (__o->chunk_limit - __o->next_free < __len) \
+ _obstack_newchunk (__o, __len); \
+ (void) 0; })
+
+# define obstack_empty_p(OBSTACK) \
+ __extension__ \
+ ({ struct obstack const *__o = (OBSTACK); \
+ (__o->chunk->prev == 0 \
+ && __o->next_free == __PTR_ALIGN ((char *) __o->chunk, \
+ __o->chunk->contents, \
+ __o->alignment_mask)); })
+
+# define obstack_grow(OBSTACK,where,length) \
+__extension__ \
+({ struct obstack *__o = (OBSTACK); \
+ int __len = (length); \
+ if (__o->next_free + __len > __o->chunk_limit) \
+ _obstack_newchunk (__o, __len); \
+ memcpy (__o->next_free, where, __len); \
+ __o->next_free += __len; \
+ (void) 0; })
+
+# define obstack_grow0(OBSTACK,where,length) \
+__extension__ \
+({ struct obstack *__o = (OBSTACK); \
+ int __len = (length); \
+ if (__o->next_free + __len + 1 > __o->chunk_limit) \
+ _obstack_newchunk (__o, __len + 1); \
+ memcpy (__o->next_free, where, __len); \
+ __o->next_free += __len; \
+ *(__o->next_free)++ = 0; \
+ (void) 0; })
+
+# define obstack_1grow(OBSTACK,datum) \
+__extension__ \
+({ struct obstack *__o = (OBSTACK); \
+ if (__o->next_free + 1 > __o->chunk_limit) \
+ _obstack_newchunk (__o, 1); \
+ obstack_1grow_fast (__o, datum); \
+ (void) 0; })
+
+/* These assume that the obstack alignment is good enough for pointers
+ or ints, and that the data added so far to the current object
+ shares that much alignment. */
+
+# define obstack_ptr_grow(OBSTACK,datum) \
+__extension__ \
+({ struct obstack *__o = (OBSTACK); \
+ if (__o->next_free + sizeof (void *) > __o->chunk_limit) \
+ _obstack_newchunk (__o, sizeof (void *)); \
+ obstack_ptr_grow_fast (__o, datum); }) \
+
+# define obstack_int_grow(OBSTACK,datum) \
+__extension__ \
+({ struct obstack *__o = (OBSTACK); \
+ if (__o->next_free + sizeof (int) > __o->chunk_limit) \
+ _obstack_newchunk (__o, sizeof (int)); \
+ obstack_int_grow_fast (__o, datum); })
+
+# define obstack_ptr_grow_fast(OBSTACK,aptr) \
+__extension__ \
+({ struct obstack *__o1 = (OBSTACK); \
+ *(const void **) __o1->next_free = (aptr); \
+ __o1->next_free += sizeof (const void *); \
+ (void) 0; })
+
+# define obstack_int_grow_fast(OBSTACK,aint) \
+__extension__ \
+({ struct obstack *__o1 = (OBSTACK); \
+ *(int *) __o1->next_free = (aint); \
+ __o1->next_free += sizeof (int); \
+ (void) 0; })
+
+# define obstack_blank(OBSTACK,length) \
+__extension__ \
+({ struct obstack *__o = (OBSTACK); \
+ int __len = (length); \
+ if (__o->chunk_limit - __o->next_free < __len) \
+ _obstack_newchunk (__o, __len); \
+ obstack_blank_fast (__o, __len); \
+ (void) 0; })
+
+# define obstack_alloc(OBSTACK,length) \
+__extension__ \
+({ struct obstack *__h = (OBSTACK); \
+ obstack_blank (__h, (length)); \
+ obstack_finish (__h); })
+
+# define obstack_copy(OBSTACK,where,length) \
+__extension__ \
+({ struct obstack *__h = (OBSTACK); \
+ obstack_grow (__h, (where), (length)); \
+ obstack_finish (__h); })
+
+# define obstack_copy0(OBSTACK,where,length) \
+__extension__ \
+({ struct obstack *__h = (OBSTACK); \
+ obstack_grow0 (__h, (where), (length)); \
+ obstack_finish (__h); })
+
+/* The local variable is named __o1 to avoid a name conflict
+ when obstack_blank is called. */
+# define obstack_finish(OBSTACK) \
+__extension__ \
+({ struct obstack *__o1 = (OBSTACK); \
+ void *__value = (void *) __o1->object_base; \
+ if (__o1->next_free == __value) \
+ __o1->maybe_empty_object = 1; \
+ __o1->next_free \
+ = __PTR_ALIGN (__o1->object_base, __o1->next_free, \
+ __o1->alignment_mask); \
+ if (__o1->next_free - (char *)__o1->chunk \
+ > __o1->chunk_limit - (char *)__o1->chunk) \
+ __o1->next_free = __o1->chunk_limit; \
+ __o1->object_base = __o1->next_free; \
+ __value; })
+
+# define obstack_free(OBSTACK, OBJ) \
+__extension__ \
+({ struct obstack *__o = (OBSTACK); \
+ void *__obj = (OBJ); \
+ if (__obj > (void *)__o->chunk && __obj < (void *)__o->chunk_limit) \
+ __o->next_free = __o->object_base = (char *)__obj; \
+ else (obstack_free) (__o, __obj); })
+\f
+#else /* not __GNUC__ or not __STDC__ */
+
+# define obstack_object_size(h) \
+ (unsigned) ((h)->next_free - (h)->object_base)
+
+# define obstack_room(h) \
+ (unsigned) ((h)->chunk_limit - (h)->next_free)
+
+# define obstack_empty_p(h) \
+ ((h)->chunk->prev == 0 \
+ && (h)->next_free == __PTR_ALIGN ((char *) (h)->chunk, \
+ (h)->chunk->contents, \
+ (h)->alignment_mask))
+
+/* Note that the call to _obstack_newchunk is enclosed in (..., 0)
+ so that we can avoid having void expressions
+ in the arms of the conditional expression.
+ Casting the third operand to void was tried before,
+ but some compilers won't accept it. */
+
+# define obstack_make_room(h,length) \
+( (h)->temp.tempint = (length), \
+ (((h)->next_free + (h)->temp.tempint > (h)->chunk_limit) \
+ ? (_obstack_newchunk ((h), (h)->temp.tempint), 0) : 0))
+
+# define obstack_grow(h,where,length) \
+( (h)->temp.tempint = (length), \
+ (((h)->next_free + (h)->temp.tempint > (h)->chunk_limit) \
+ ? (_obstack_newchunk ((h), (h)->temp.tempint), 0) : 0), \
+ memcpy ((h)->next_free, where, (h)->temp.tempint), \
+ (h)->next_free += (h)->temp.tempint)
+
+# define obstack_grow0(h,where,length) \
+( (h)->temp.tempint = (length), \
+ (((h)->next_free + (h)->temp.tempint + 1 > (h)->chunk_limit) \
+ ? (_obstack_newchunk ((h), (h)->temp.tempint + 1), 0) : 0), \
+ memcpy ((h)->next_free, where, (h)->temp.tempint), \
+ (h)->next_free += (h)->temp.tempint, \
+ *((h)->next_free)++ = 0)
+
+# define obstack_1grow(h,datum) \
+( (((h)->next_free + 1 > (h)->chunk_limit) \
+ ? (_obstack_newchunk ((h), 1), 0) : 0), \
+ obstack_1grow_fast (h, datum))
+
+# define obstack_ptr_grow(h,datum) \
+( (((h)->next_free + sizeof (char *) > (h)->chunk_limit) \
+ ? (_obstack_newchunk ((h), sizeof (char *)), 0) : 0), \
+ obstack_ptr_grow_fast (h, datum))
+
+# define obstack_int_grow(h,datum) \
+( (((h)->next_free + sizeof (int) > (h)->chunk_limit) \
+ ? (_obstack_newchunk ((h), sizeof (int)), 0) : 0), \
+ obstack_int_grow_fast (h, datum))
+
+# define obstack_ptr_grow_fast(h,aptr) \
+ (((const void **) ((h)->next_free += sizeof (void *)))[-1] = (aptr))
+
+# define obstack_int_grow_fast(h,aint) \
+ (((int *) ((h)->next_free += sizeof (int)))[-1] = (aint))
+
+# define obstack_blank(h,length) \
+( (h)->temp.tempint = (length), \
+ (((h)->chunk_limit - (h)->next_free < (h)->temp.tempint) \
+ ? (_obstack_newchunk ((h), (h)->temp.tempint), 0) : 0), \
+ obstack_blank_fast (h, (h)->temp.tempint))
+
+# define obstack_alloc(h,length) \
+ (obstack_blank ((h), (length)), obstack_finish ((h)))
+
+# define obstack_copy(h,where,length) \
+ (obstack_grow ((h), (where), (length)), obstack_finish ((h)))
+
+# define obstack_copy0(h,where,length) \
+ (obstack_grow0 ((h), (where), (length)), obstack_finish ((h)))
+
+# define obstack_finish(h) \
+( ((h)->next_free == (h)->object_base \
+ ? (((h)->maybe_empty_object = 1), 0) \
+ : 0), \
+ (h)->temp.tempptr = (h)->object_base, \
+ (h)->next_free \
+ = __PTR_ALIGN ((h)->object_base, (h)->next_free, \
+ (h)->alignment_mask), \
+ (((h)->next_free - (char *) (h)->chunk \
+ > (h)->chunk_limit - (char *) (h)->chunk) \
+ ? ((h)->next_free = (h)->chunk_limit) : 0), \
+ (h)->object_base = (h)->next_free, \
+ (h)->temp.tempptr)
+
+# define obstack_free(h,obj) \
+( (h)->temp.tempint = (char *) (obj) - (char *) (h)->chunk, \
+ ((((h)->temp.tempint > 0 \
+ && (h)->temp.tempint < (h)->chunk_limit - (char *) (h)->chunk)) \
+ ? (int) ((h)->next_free = (h)->object_base \
+ = (h)->temp.tempint + (char *) (h)->chunk) \
+ : (((obstack_free) ((h), (h)->temp.tempint + (char *) (h)->chunk), 0), 0)))
+
+#endif /* not __GNUC__ or not __STDC__ */
+
+#ifdef __cplusplus
+} /* C++ */
+#endif
+
+#endif /* obstack.h */
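
The header above is consumed purely through its macro API. As a quick orientation, here is a minimal, hedged sketch of typical obstack usage (grow an object piecewise, close it with obstack_finish(), release every chunk at once), wired the same way kwset.c further below wires it, with xmalloc as the chunk allocator; the demo() wrapper itself is invented for illustration.

#include "cache.h"
#include "compat/obstack.h"

/* The user of obstack.h must supply the chunk allocation hooks. */
#define obstack_chunk_alloc xmalloc
#define obstack_chunk_free  free

static void demo(void)
{
	struct obstack ob;
	char *word;

	obstack_init(&ob);            /* set up the first chunk */
	obstack_grow(&ob, "kw", 2);   /* append bytes to the growing object */
	obstack_grow0(&ob, "set", 3); /* append and NUL-terminate */
	word = obstack_finish(&ob);   /* close the object and get its address */

	printf("%s\n", word);         /* prints "kwset" */

	obstack_free(&ob, NULL);      /* release every chunk at once */
}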
return offset;
}
+int git_config_set_in_file(const char *config_filename,
+ const char *key, const char *value)
+{
+ return git_config_set_multivar_in_file(config_filename, key, value, NULL, 0);
+}
+
int git_config_set(const char *key, const char *value)
{
return git_config_set_multivar(key, value, NULL, 0);
* - the config file is removed and the lock file rename()d to it.
*
*/
-int git_config_set_multivar(const char *key, const char *value,
- const char *value_regex, int multi_replace)
+int git_config_set_multivar_in_file(const char *config_filename,
+ const char *key, const char *value,
+ const char *value_regex, int multi_replace)
{
int fd = -1, in_fd;
int ret;
- char *config_filename;
struct lock_file *lock = NULL;
- if (config_exclusive_filename)
- config_filename = xstrdup(config_exclusive_filename);
- else
- config_filename = git_pathdup("config");
-
/* parse-key returns negative; flip the sign to feed exit(3) */
ret = 0 - git_config_parse_key(key, &store.key, &store.baselen);
if (ret)
out_free:
if (lock)
rollback_lock_file(lock);
- free(config_filename);
return ret;
write_err_out:
}
+int git_config_set_multivar(const char *key, const char *value,
+ const char *value_regex, int multi_replace)
+{
+ const char *config_filename;
+ char *buf = NULL;
+ int ret;
+
+ if (config_exclusive_filename)
+ config_filename = config_exclusive_filename;
+ else
+ config_filename = buf = git_pathdup("config");
+
+ ret = git_config_set_multivar_in_file(config_filename, key, value,
+ value_regex, multi_replace);
+ free(buf);
+ return ret;
+}
+
static int section_name_match (const char *buf, const char *name)
{
int i = 0, j = 0, dot = 0;
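
The config.c change above only splits the choice of file out of git_config_set_multivar(); the rewrite logic itself is untouched. A hedged sketch of how a caller might use the new git_config_set_in_file() entry point, assuming its prototype is exported alongside the existing git_config_set() declarations; the helper name and the ".gitmodules" target are illustrative, not taken from this patch.

#include "cache.h"

/*
 * Hypothetical caller: write a key into an explicitly named config
 * file instead of the default $GIT_DIR/config.
 */
static void record_submodule_url(const char *name, const char *url)
{
	struct strbuf key = STRBUF_INIT;

	strbuf_addf(&key, "submodule.%s.url", name);
	if (git_config_set_in_file(".gitmodules", key.buf, url))
		die("could not update .gitmodules");
	strbuf_release(&key);
}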
--- /dev/null
+#include "cache.h"
+#include "run-command.h"
+#include "sigchain.h"
+#include "connected.h"
+
+/*
+ * If we feed all the commits we want to verify to this command
+ *
+ * $ git rev-list --verify-objects --stdin --not --all
+ *
+ * and if it does not error out, that means everything reachable from
+ * these commits locally exists and is connected to some of our
+ * existing refs.
+ *
+ * Returns 0 if everything is connected, non-zero otherwise.
+ */
+int check_everything_connected(sha1_iterate_fn fn, int quiet, void *cb_data)
+{
+ struct child_process rev_list;
+ const char *argv[] = {"rev-list", "--verify-objects",
+ "--stdin", "--not", "--all", NULL, NULL};
+ char commit[41];
+ unsigned char sha1[20];
+ int err = 0;
+
+ if (fn(cb_data, sha1))
+ return err;
+
+ if (quiet)
+ argv[5] = "--quiet";
+
+ memset(&rev_list, 0, sizeof(rev_list));
+ rev_list.argv = argv;
+ rev_list.git_cmd = 1;
+ rev_list.in = -1;
+ rev_list.no_stdout = 1;
+ rev_list.no_stderr = quiet;
+ if (start_command(&rev_list))
+ return error(_("Could not run 'git rev-list'"));
+
+ sigchain_push(SIGPIPE, SIG_IGN);
+
+ commit[40] = '\n';
+ do {
+ memcpy(commit, sha1_to_hex(sha1), 40);
+ if (write_in_full(rev_list.in, commit, 41) < 0) {
+ if (errno != EPIPE && errno != EINVAL)
+ error(_("failed write to rev-list: %s"),
+ strerror(errno));
+ err = -1;
+ break;
+ }
+ } while (!fn(cb_data, sha1));
+
+ if (close(rev_list.in)) {
+ error(_("failed to close rev-list's stdin: %s"), strerror(errno));
+ err = -1;
+ }
+
+ sigchain_pop(SIGPIPE);
+ return finish_command(&rev_list) || err;
+}
--- /dev/null
+#ifndef CONNECTED_H
+#define CONNECTED_H
+
+/*
+ * Take callback data, and return the next object name in the buffer.
+ * When called after returning the name for the last object, return -1
+ * to signal EOF; otherwise return 0.
+ */
+typedef int (*sha1_iterate_fn)(void *, unsigned char [20]);
+
+/*
+ * Make sure that our object store has all the commits necessary to
+ * connect the ancestry chain to some of our existing refs, and all
+ * the trees and blobs that these commits use.
+ *
+ * Return 0 if OK, non-zero otherwise (i.e. some objects are missing).
+ */
+extern int check_everything_connected(sha1_iterate_fn, int quiet, void *cb_data);
+
+#endif /* CONNECTED_H */
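
A hedged sketch of how a caller might drive this API: supply an iterator that hands back one SHA-1 per call and returns -1 after the last one. The sha1_array_iter structure and the function names are invented for illustration; only check_everything_connected() and the sha1_iterate_fn signature come from the header above.

#include "cache.h"
#include "connected.h"

/* Illustrative only: iterate over a flat array of object names. */
struct sha1_array_iter {
	const unsigned char (*sha1)[20];
	int nr, pos;
};

static int next_sha1(void *cb_data, unsigned char sha1[20])
{
	struct sha1_array_iter *it = cb_data;

	if (it->pos >= it->nr)
		return -1;	/* no more names: signal EOF */
	hashcpy(sha1, it->sha1[it->pos++]);
	return 0;
}

/* Return 0 when every listed tip is fully connected to our refs. */
static int verify_tips(const unsigned char (*tips)[20], int nr)
{
	struct sha1_array_iter it = { tips, nr, 0 };

	return check_everything_connected(next_sha1, 1 /* quiet */, &it);
}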
# will have put this somewhere standard. You should make this script
# executable then link to it in the repository you would like to use it in.
# For example, on debian the hook is stored in
-# /usr/share/doc/git-core/contrib/hooks/post-receive-email:
+# /usr/share/git-core/contrib/hooks/post-receive-email:
#
# chmod a+x post-receive-email
# cd /path/to/your/repository.git
-# ln -sf /usr/share/doc/git-core/contrib/hooks/post-receive-email hooks/post-receive
+# ln -sf /usr/share/git-core/contrib/hooks/post-receive-email hooks/post-receive
#
# This hook script assumes it is enabled on the central repository of a
# project, with all users pushing only to it and not between each other. It
dollar = memchr(src, '$', len);
if (!dollar)
break;
- memcpy(dst, src, dollar + 1 - src);
+ memmove(dst, src, dollar + 1 - src);
dst += dollar + 1 - src;
len -= dollar + 1 - src;
src = dollar + 1;
src = dollar + 1;
}
}
- memcpy(dst, src, len);
+ memmove(dst, src, len);
strbuf_setlen(buf, dst + len - buf->buf);
return 1;
}
static int match_tz(const char *date, int *offp)
{
char *end;
- int offset = strtoul(date+1, &end, 10);
- int min, hour;
- int n = end - date - 1;
+ int hour = strtoul(date + 1, &end, 10);
+ int n = end - (date + 1);
+ int min = 0;
- min = offset % 100;
- hour = offset / 100;
+ if (n == 4) {
+ /* hhmm */
+ min = hour % 100;
+ hour = hour / 100;
+ } else if (n != 2) {
+ min = 99; /* random crap */
+ } else if (*end == ':') {
+ /* hh:mm? */
+ min = strtoul(end + 1, &end, 10);
+ if (end - (date + 1) != 5)
+ min = 99; /* random crap */
+ } /* otherwise we parsed "hh" */
/*
- * Don't accept any random crap.. At least 3 digits, and
- * a valid minute. We might want to check that the minutes
- * are divisible by 30 or something too.
+ * Don't accept any random crap. Even though some places have
+ * offset larger than 12 hours (e.g. Pacific/Kiritimati is at
+ * UTC+14), there is something wrong if hour part is much
+ * larger than that. We might also want to check that the
+ * minutes are divisible by 15 or something too. (Offset of
+ * Kathmandu, Nepal is UTC+5:45)
*/
- if (min < 60 && n > 2) {
- offset = hour*60+min;
+ if (min < 60 && hour < 24) {
+ int offset = hour * 60 + min;
if (*date == '-')
offset = -offset;
-
*offp = offset;
}
return end - date;
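
The rewritten parser above accepts three spellings of a numeric offset: "+hhmm", "+hh:mm" and a bare "+hh". The following standalone sketch mirrors that decision logic on a few sample inputs; parse_offset() is an invented illustration, not the static match_tz() from date.c, and it simply rejects malformed input instead of recording an out-of-range minute value.

#include <stdio.h>
#include <stdlib.h>

/* Return 1 and store the offset in minutes, or 0 for rejected input. */
static int parse_offset(const char *tz, int *out)
{
	char *end;
	int hour = strtoul(tz + 1, &end, 10);
	int n = end - (tz + 1);
	int min = 0;

	if (n == 4) {			/* "+hhmm" */
		min = hour % 100;
		hour = hour / 100;
	} else if (n != 2) {
		return 0;		/* neither 2 nor 4 leading digits */
	} else if (*end == ':') {	/* "+hh:mm" */
		min = strtoul(end + 1, &end, 10);
		if (end - (tz + 1) != 5)
			return 0;
	}				/* otherwise a bare "+hh" */

	if (min >= 60 || hour >= 24)
		return 0;
	*out = (tz[0] == '-' ? -1 : 1) * (hour * 60 + min);
	return 1;
}

int main(void)
{
	const char *samples[] = { "+0530", "+05:45", "-08", "+1430" };
	int i, off;

	for (i = 0; i < 4; i++)
		if (parse_offset(samples[i], &off))
			printf("%s -> %d minutes\n", samples[i], off);
	/* prints 330, 345, -480 and 870 minutes respectively */
	return 0;
}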
opts.unpack_data = revs;
opts.src_index = &the_index;
opts.dst_index = NULL;
+ opts.pathspec = &revs->diffopt.pathspec;
init_tree_desc(&t, tree->buffer, tree->size);
return unpack_trees(1, &t, &opts);
#include "diff.h"
#include "diffcore.h"
#include "xdiff-interface.h"
+#include "kwset.h"
struct diffgrep_cb {
regex_t *regexp;
static unsigned int contains(struct diff_filespec *one,
const char *needle, unsigned long len,
- regex_t *regexp)
+ regex_t *regexp, kwset_t kws)
{
unsigned int cnt;
unsigned long sz;
} else { /* Classic exact string match */
while (sz) {
- const char *found = memmem(data, sz, needle, len);
- if (!found)
+ size_t offset = kwsexec(kws, data, sz, NULL);
+ const char *found;
+ if (offset == -1)
break;
+ else
+ found = data + offset;
sz -= found - data + len;
data = found + len;
cnt++;
unsigned long len = strlen(needle);
int i, has_changes;
regex_t regex, *regexp = NULL;
+ kwset_t kws = NULL;
struct diff_queue_struct outq;
DIFF_QUEUE_CLEAR(&outq);
die("invalid pickaxe regex: %s", errbuf);
}
regexp = ®ex;
+ } else {
+ kws = kwsalloc(NULL);
+ kwsincr(kws, needle, len);
+ kwsprep(kws);
}
if (opts & DIFF_PICKAXE_ALL) {
if (!DIFF_FILE_VALID(p->two))
continue; /* ignore unmerged */
/* created */
- if (contains(p->two, needle, len, regexp))
+ if (contains(p->two, needle, len, regexp, kws))
has_changes++;
}
else if (!DIFF_FILE_VALID(p->two)) {
- if (contains(p->one, needle, len, regexp))
+ if (contains(p->one, needle, len, regexp, kws))
has_changes++;
}
else if (!diff_unmodified_pair(p) &&
- contains(p->one, needle, len, regexp) !=
- contains(p->two, needle, len, regexp))
+ contains(p->one, needle, len, regexp, kws) !=
+ contains(p->two, needle, len, regexp, kws))
has_changes++;
}
if (has_changes)
if (!DIFF_FILE_VALID(p->two))
; /* ignore unmerged */
/* created */
- else if (contains(p->two, needle, len, regexp))
+ else if (contains(p->two, needle, len, regexp,
+ kws))
has_changes = 1;
}
else if (!DIFF_FILE_VALID(p->two)) {
- if (contains(p->one, needle, len, regexp))
+ if (contains(p->one, needle, len, regexp, kws))
has_changes = 1;
}
else if (!diff_unmodified_pair(p) &&
- contains(p->one, needle, len, regexp) !=
- contains(p->two, needle, len, regexp))
+ contains(p->one, needle, len, regexp, kws) !=
+ contains(p->two, needle, len, regexp, kws))
has_changes = 1;
if (has_changes)
if (opts & DIFF_PICKAXE_REGEX)
regfree(®ex);
+ else
+ kwsfree(kws);
free(q->queue);
*q = outq;
perl -ne 'BEGIN { $subject = 0 }
if ($subject > 1) { print ; }
elsif (/^\s+$/) { next ; }
- elsif (/^Author:/) { print s/Author/From/ ; }
+ elsif (/^Author:/) { s/Author/From/ ; print ;}
elsif (/^(From|Date)/) { print ; }
elsif ($subject) {
$subject = 2 ;
this=
msgnum=
;;
+ hg)
+ this=0
+ for hg in "$@"
+ do
+ this=$(( $this + 1 ))
+ msgnum=$(printf "%0${prec}d" $this)
+ # hg stores changeset metadata in #-commented lines preceding
+ # the commit message and diff(s). The only metadata we care about
+ # are the User and Date (Node ID and Parent are hashes which are
+ # only relevant to the hg repository and thus not useful to us)
+ # Since we cannot guarantee that the commit message is in
+ # git-friendly format, we put no Subject: line and just consume
+ # all of the message as the body
+ perl -M'POSIX qw(strftime)' -ne 'BEGIN { $subject = 0 }
+ if ($subject) { print ; }
+ elsif (/^\# User /) { s/\# User/From:/ ; print ; }
+ elsif (/^\# Date /) {
+ my ($hashsign, $str, $time, $tz) = split ;
+ $tz = sprintf "%+05d", (0-$tz)/36;
+ print "Date: " .
+ strftime("%a, %d %b %Y %H:%M:%S ",
+ localtime($time))
+ . "$tz\n";
+ } elsif (/^\# /) { next ; }
+ else {
+ print "\n", $_ ;
+ $subject = 1;
+ }
+ ' <"$hg" >"$dotest/$msgnum" || clean_abort
+ done
+ echo "$this" >"$dotest/last"
+ this=
+ msgnum=
+ ;;
*)
- if test -n "$parse_patch" ; then
+ if test -n "$patch_format"
+ then
clean_abort "$(eval_gettext "Patch format \$patch_format is not supported.")"
else
clean_abort "$(gettext "Patch format detection failed.")"
bisect_autostart() {
test -s "$GIT_DIR/BISECT_START" || {
- (
- gettext "You need to start by \"git bisect start\"" &&
- echo
- ) >&2
+ gettextln "You need to start by \"git bisect start\"" >&2
if test -t 0
then
# TRANSLATORS: Make sure to include [Y] and [n] in your
t,,good)
# have bad but not good. we could bisect although
# this is less optimum.
- (
- gettext "Warning: bisecting only with a bad commit." &&
- echo
- ) >&2
+ gettextln "Warning: bisecting only with a bad commit." >&2
if test -t 0
then
# TRANSLATORS: Make sure to include [Y] and [n] in your
if test -s "$GIT_DIR/BISECT_START"
then
- (
- gettext "You need to give me at least one good and one bad revisions.
-(You can use \"git bisect bad\" and \"git bisect good\" for that.)" &&
- echo
- ) >&2
+ gettextln "You need to give me at least one good and one bad revisions.
+(You can use \"git bisect bad\" and \"git bisect good\" for that.)" >&2
else
- (
- gettext "You need to start by \"git bisect start\".
+ gettextln "You need to start by \"git bisect start\".
You then need to give me at least one good and one bad revisions.
-(You can use \"git bisect bad\" and \"git bisect good\" for that.)" &&
- echo
- ) >&2
+(You can use \"git bisect bad\" and \"git bisect good\" for that.)" >&2
fi
exit 1 ;;
esac
bisect_reset() {
test -s "$GIT_DIR/BISECT_START" || {
- gettext "We are not bisecting."; echo
+ gettextln "We are not bisecting."
return
}
case "$#" in
while true
do
command="$@"
- eval_gettext "running \$command"; echo
+ eval_gettextln "running \$command"
"$@"
res=$?
# Check for really bad run error.
if [ $res -lt 0 -o $res -ge 128 ]
then
- (
- eval_gettext "bisect run failed:
-exit code \$res from '\$command' is < 0 or >= 128" &&
- echo
- ) >&2
+ eval_gettextln "bisect run failed:
+exit code \$res from '\$command' is < 0 or >= 128" >&2
exit $res
fi
if sane_grep "first bad commit could be any of" "$GIT_DIR/BISECT_RUN" \
> /dev/null
then
- (
- gettext "bisect run cannot continue any more" &&
- echo
- ) >&2
+ gettextln "bisect run cannot continue any more" >&2
exit $res
fi
if [ $res -ne 0 ]
then
- (
- eval_gettext "bisect run failed:
-'bisect_state \$state' exited with error code \$res" &&
- echo
- ) >&2
+ eval_gettextln "bisect run failed:
+'bisect_state \$state' exited with error code \$res" >&2
exit $res
fi
if sane_grep "is the first bad commit" "$GIT_DIR/BISECT_RUN" > /dev/null
then
- gettext "bisect run success"; echo
+ gettextln "bisect run success"
exit 0;
fi
. git-sh-setup
if [ "$(is_bare_repository)" = false ]; then
- git diff-files --ignore-submodules --quiet &&
- git diff-index --cached --quiet HEAD -- ||
- die "Cannot rewrite branch(es) with a dirty working directory."
+ require_clean_work_tree 'rewrite branches'
fi
tempdir=.git-rewrite
do
echo "$MERGED seems unchanged."
printf "Was the merge successful? [y/n] "
- read answer
+ read answer || return 1
case "$answer" in
y*|Y*) status=0; break ;;
n*|N*) status=1; break ;;
resolve_symlink_merge () {
while true; do
printf "Use (l)ocal or (r)emote, or (a)bort? "
- read ans
+ read ans || return 1
case "$ans" in
[lL]*)
git checkout-index -f --stage=2 -- "$MERGED"
git rev-parse --verify HEAD > "$state_dir"/stopped-sha
${SHELL:-@SHELL_PATH@} -c "$rest" # Actual execution
status=$?
+ # Run in subshell because require_clean_work_tree can die.
+ dirty=f
+ (require_clean_work_tree "rebase" 2>/dev/null) || dirty=t
if test "$status" -ne 0
then
warn "Execution failed: $rest"
+ test "$dirty" = f ||
+ warn "and made changes to the index and/or the working tree"
+
warn "You can fix the problem, and then run"
warn
warn " git rebase --continue"
warn
exit "$status"
- fi
- # Run in subshell because require_clean_work_tree can die.
- if ! (require_clean_work_tree "rebase")
+ elif test "$dirty" = t
then
+ warn "Execution succeeded: $rest"
+ warn "but left changes to the index and/or the working tree"
warn "Commit or stash your changes, and then run"
warn
warn " git rebase --continue"
then
: Nothing to commit -- skip this
else
+ if ! test -f "$author_script"
+ then
+ die "You have staged changes in your working tree. If these changes are meant to be
+squashed into the previous commit, run:
+
+ git commit --amend
+
+If they are meant to go into a new commit, run:
+
+ git commit
+
+In both cases, once you're done, continue with:
+
+ git rebase --continue
+"
+ fi
. "$author_script" ||
- die "Cannot find the author identity"
+ die "Error trying to find the author identity to amend commit"
current_head=
if test -f "$amend"
then
#!/usr/bin/env python
+# This command is a simple remote-helper that is used both as a
+# testcase for the remote-helper functionality, and as an example to
+# show remote-helper authors one possible implementation.
+#
+# This is a Git <-> Git importer/exporter, that simply uses git
+# fast-import and git fast-export to consume and produce fast-import
+# streams.
+#
+# To understand better the way things work, one can activate debug
+# traces by setting (to any value) the environment variables
+# GIT_TRANSPORT_HELPER_DEBUG and GIT_DEBUG_TESTGIT, to see messages
+# from the transport-helper side, or from this example remote-helper.
+
# hashlib is only available in python >= 2.5
try:
import hashlib
if test -n "$patch_mode" && test -n "$untracked"
then
- die "Can't use --patch and ---include-untracked or --all at the same time"
+ die "Can't use --patch and --include-untracked or --all at the same time"
fi
stash_msg="$*"
test "$untracked" = "all" && CLEAN_X_OPTION=-x || CLEAN_X_OPTION=
if test -n "$untracked"
then
- git clean --force --quiet $CLEAN_X_OPTION
+ git clean --force --quiet -d $CLEAN_X_OPTION
fi
if test "$keep_index" = "t" && test -n $i_tree
$_prefix, $_no_checkout, $_url, $_verbose,
$_git_format, $_commit_url, $_tag, $_merge_info);
$Git::SVN::_follow_parent = 1;
+$SVN::Git::Fetcher::_placeholder_filename = ".gitignore";
$_q ||= 0;
my %remote_opts = ( 'username=s' => \$Git::SVN::Prompt::_username,
'config-dir=s' => \$Git::SVN::Ra::config_dir,
%fc_opts } ],
clone => [ \&cmd_clone, "Initialize and fetch revisions",
{ 'revision|r=s' => \$_revision,
+ 'preserve-empty-dirs' =>
+ \$SVN::Git::Fetcher::_preserve_empty_dirs,
+ 'placeholder-filename=s' =>
+ \$SVN::Git::Fetcher::_placeholder_filename,
%fc_opts, %init_opts } ],
init => [ \&cmd_init, "Initialize a repo for tracking" .
" (requires URL argument)",
my $ignore_regex = \$SVN::Git::Fetcher::_ignore_regex;
command_noisy('config', "$pfx.ignore-paths", $$ignore_regex)
if defined $$ignore_regex;
+
+ if (defined $SVN::Git::Fetcher::_preserve_empty_dirs) {
+ my $fname = \$SVN::Git::Fetcher::_placeholder_filename;
+ command_noisy('config', "$pfx.preserve-empty-dirs", 'true');
+ command_noisy('config', "$pfx.placeholder-filename", $$fname);
+ }
}
sub init_subdir {
unlink $gs->{index};
}
+sub split_merge_info_range {
+ my ($range) = @_;
+ if ($range =~ /(\d+)-(\d+)/) {
+ return (int($1), int($2));
+ } else {
+ return (int($range), int($range));
+ }
+}
+
+sub combine_ranges {
+ my ($in) = @_;
+
+ my @fnums = ();
+ my @arr = split(/,/, $in);
+ for my $element (@arr) {
+ my ($start, $end) = split_merge_info_range($element);
+ push @fnums, $start;
+ }
+
+ my @sorted = @arr [ sort {
+ $fnums[$a] <=> $fnums[$b]
+ } 0..$#arr ];
+
+ my @return = ();
+ my $last = -1;
+ my $first = -1;
+ for my $element (@sorted) {
+ my ($start, $end) = split_merge_info_range($element);
+
+ if ($last == -1) {
+ $first = $start;
+ $last = $end;
+ next;
+ }
+ if ($start <= $last+1) {
+ if ($end > $last) {
+ $last = $end;
+ }
+ next;
+ }
+ if ($first == $last) {
+ push @return, "$first";
+ } else {
+ push @return, "$first-$last";
+ }
+ $first = $start;
+ $last = $end;
+ }
+
+ if ($first != -1) {
+ if ($first == $last) {
+ push @return, "$first";
+ } else {
+ push @return, "$first-$last";
+ }
+ }
+
+ return join(',', @return);
+}
+
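
combine_ranges() above consolidates one svn:mergeinfo revision list, so that for example "1-5,3-7,10,11" collapses to "1-7,10-11": sort the ranges by start, then fold in any range that begins at or directly after the end of the running range. Below is a hedged sketch of that sort-and-merge idea over plain integer pairs; struct range, by_start() and combine() are invented names, and this is an illustration of the algorithm, not a translation of the Perl above.

#include <stdio.h>
#include <stdlib.h>

struct range { int start, end; };

static int by_start(const void *a, const void *b)
{
	return ((const struct range *)a)->start - ((const struct range *)b)->start;
}

/* Merge overlapping or adjacent ranges in place; return the new count. */
static int combine(struct range *r, int n)
{
	int i, out = 0;

	qsort(r, n, sizeof(*r), by_start);
	for (i = 1; i < n; i++) {
		if (r[i].start <= r[out].end + 1) {
			if (r[i].end > r[out].end)
				r[out].end = r[i].end;	/* extend running range */
		} else {
			r[++out] = r[i];		/* start a new range */
		}
	}
	return n ? out + 1 : 0;
}

int main(void)
{
	/* "1-5,3-7,10,11" in git-svn's notation */
	struct range r[] = { { 1, 5 }, { 3, 7 }, { 10, 10 }, { 11, 11 } };
	int i, n = combine(r, 4);

	for (i = 0; i < n; i++)		/* prints "1-7 10-11" */
		printf("%d-%d ", r[i].start, r[i].end);
	printf("\n");
	return 0;
}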
+sub merge_revs_into_hash {
+ my ($hash, $minfo) = @_;
+ my @lines = split(' ', $minfo);
+
+ for my $line (@lines) {
+ my ($branchpath, $revs) = split(/:/, $line);
+
+ if (exists($hash->{$branchpath})) {
+ # Merge the two revision sets
+ my $combined = "$hash->{$branchpath},$revs";
+ $hash->{$branchpath} = combine_ranges($combined);
+ } else {
+ # Just do range combining for consolidation
+ $hash->{$branchpath} = combine_ranges($revs);
+ }
+ }
+}
+
+sub merge_merge_info {
+ my ($mergeinfo_one, $mergeinfo_two) = @_;
+ my %result_hash = ();
+
+ merge_revs_into_hash(\%result_hash, $mergeinfo_one);
+ merge_revs_into_hash(\%result_hash, $mergeinfo_two);
+
+ my $result = '';
+ # Sort below is for consistency's sake
+ for my $branchname (sort keys(%result_hash)) {
+ my $revlist = $result_hash{$branchname};
+ $result .= "$branchname:$revlist\n"
+ }
+ return $result;
+}
+
+sub populate_merge_info {
+ my ($d, $gs, $uuid, $linear_refs, $rewritten_parent) = @_;
+
+ my %parentshash;
+ read_commit_parents(\%parentshash, $d);
+ my @parents = @{$parentshash{$d}};
+ if ($#parents > 0) {
+ # Merge commit
+ my $all_parents_ok = 1;
+ my $aggregate_mergeinfo = '';
+ my $rooturl = $gs->repos_root;
+
+ if (defined($rewritten_parent)) {
+ # Replace first parent with newly-rewritten version
+ shift @parents;
+ unshift @parents, $rewritten_parent;
+ }
+
+ foreach my $parent (@parents) {
+ my ($branchurl, $svnrev, $paruuid) =
+ cmt_metadata($parent);
+
+ unless (defined($svnrev)) {
+ # Should have been caught by the preflight check
+ fatal "merge commit $d has ancestor $parent, but that change "
+ ."does not have git-svn metadata!";
+ }
+ unless ($branchurl =~ /^$rooturl(.*)/) {
+ fatal "commit $parent git-svn metadata changed mid-run!";
+ }
+ my $branchpath = $1;
+
+ my $ra = Git::SVN::Ra->new($branchurl);
+ my (undef, undef, $props) =
+ $ra->get_dir(canonicalize_path("."), $svnrev);
+ my $par_mergeinfo = $props->{'svn:mergeinfo'};
+ unless (defined $par_mergeinfo) {
+ $par_mergeinfo = '';
+ }
+ # Merge previous mergeinfo values
+ $aggregate_mergeinfo =
+ merge_merge_info($aggregate_mergeinfo,
+ $par_mergeinfo, 0);
+
+ next if $parent eq $parents[0]; # Skip first parent
+ # Add new changes being placed in tree by merge
+ my @cmd = (qw/rev-list --reverse/,
+ $parent, qw/--not/);
+ foreach my $par (@parents) {
+ unless ($par eq $parent) {
+ push @cmd, $par;
+ }
+ }
+ my @revsin = ();
+ my ($revlist, $ctx) = command_output_pipe(@cmd);
+ while (<$revlist>) {
+ my $irev = $_;
+ chomp $irev;
+ my (undef, $csvnrev, undef) =
+ cmt_metadata($irev);
+ unless (defined $csvnrev) {
+ # A child is missing SVN annotations...
+ # this might be OK, or might not be.
+ warn "W:child $irev is merged into revision "
+ ."$d but does not have git-svn metadata. "
+ ."This means git-svn cannot determine the "
+ ."svn revision numbers to place into the "
+ ."svn:mergeinfo property. You must ensure "
+ ."a branch is entirely committed to "
+ ."SVN before merging it in order for "
+ ."svn:mergeinfo population to function "
+ ."properly";
+ }
+ push @revsin, $csvnrev;
+ }
+ command_close_pipe($revlist, $ctx);
+
+ last unless $all_parents_ok;
+
+ # We now have a list of all SVN revnos which are
+ # merged by this particular parent. Integrate them.
+ next if $#revsin == -1;
+ my $newmergeinfo = "$branchpath:" . join(',', @revsin);
+ $aggregate_mergeinfo =
+ merge_merge_info($aggregate_mergeinfo,
+ $newmergeinfo, 1);
+ }
+ if ($all_parents_ok and $aggregate_mergeinfo) {
+ return $aggregate_mergeinfo;
+ }
+ }
+
+ return undef;
+}
+
sub cmd_dcommit {
my $head = shift;
command_noisy(qw/update-index --refresh/);
"without --no-rebase may be required."
}
my $expect_url = $url;
+
+ my $push_merge_info = eval {
+ command_oneline(qw/config --get svn.pushmergeinfo/)
+ };
+ if (not defined($push_merge_info)
+ or $push_merge_info eq "false"
+ or $push_merge_info eq "no"
+ or $push_merge_info eq "never") {
+ $push_merge_info = 0;
+ }
+
+ unless (defined($_merge_info) || ! $push_merge_info) {
+ # Preflight check of changes to ensure no issues with mergeinfo
+ # This includes check for uncommitted-to-SVN parents
+ # (other than the first parent, which we will handle),
+ # information from different SVN repos, and paths
+ # which are not underneath this repository root.
+ my $rooturl = $gs->repos_root;
+ foreach my $d (@$linear_refs) {
+ my %parentshash;
+ read_commit_parents(\%parentshash, $d);
+ my @realparents = @{$parentshash{$d}};
+ if ($#realparents > 0) {
+ # Merge commit
+ shift @realparents; # Remove/ignore first parent
+ foreach my $parent (@realparents) {
+ my ($branchurl, $svnrev, $paruuid) = cmt_metadata($parent);
+ unless (defined $paruuid) {
+ # A parent is missing SVN annotations...
+ # abort the whole operation.
+ fatal "$parent is merged into revision $d, "
+ ."but does not have git-svn metadata. "
+ ."Either dcommit the branch or use a "
+ ."local cherry-pick, FF merge, or rebase "
+ ."instead of an explicit merge commit.";
+ }
+
+ unless ($paruuid eq $uuid) {
+ # Parent has SVN metadata from different repository
+ fatal "merge parent $parent for change $d has "
+ ."git-svn uuid $paruuid, while current change "
+ ."has uuid $uuid!";
+ }
+
+ unless ($branchurl =~ /^$rooturl(.*)/) {
+ # This branch is very strange indeed.
+ fatal "merge parent $parent for $d is on branch "
+ ."$branchurl, which is not under the "
+ ."git-svn root $rooturl!";
+ }
+ }
+ }
+ }
+ }
+
+ my $rewritten_parent;
Git::SVN::remove_username($expect_url);
+ if (defined($_merge_info)) {
+ $_merge_info =~ tr{ }{\n};
+ }
while (1) {
my $d = shift @$linear_refs or last;
unless (defined $last_rev) {
print "diff-tree $d~1 $d\n";
} else {
my $cmt_rev;
+
+ unless (defined($_merge_info) || ! $push_merge_info) {
+ $_merge_info = populate_merge_info($d, $gs,
+ $uuid,
+ $linear_refs,
+ $rewritten_parent);
+ }
+
my %ed_opts = ( r => $last_rev,
log => get_commit_entry($d)->{log},
ra => Git::SVN::Ra->new($url),
@finish = qw/reset --mixed/;
}
command_noisy(@finish, $gs->refname);
+
+ $rewritten_parent = command_oneline(qw/rev-parse HEAD/);
+
if (@diff) {
@refs = ();
my ($url_, $rev_, $uuid_, $gs_) =
my (undef, $max_commit) = $gs->rev_map_max(1);
last if (!$max_commit);
my ($url) = ::cmt_metadata($max_commit);
- last if ($url eq $gs->full_url);
+ last if ($url eq $gs->metadata_url);
$ref_id .= '-';
}
print STDERR "Initializing parent: $ref_id\n" unless $::_q > 1;
}
package SVN::Git::Fetcher;
-use vars qw/@ISA/;
+use vars qw/@ISA $_ignore_regex $_preserve_empty_dirs $_placeholder_filename
+ @deleted_gpath %added_placeholder $repo_id/;
use strict;
use warnings;
use Carp qw/croak/;
+use File::Basename qw/dirname/;
use IO::File qw//;
-use vars qw/$_ignore_regex/;
# file baton members: path, mode_a, mode_b, pool, fh, blob, base
sub new {
$self->{empty_symlinks} =
_mark_empty_symlinks($git_svn, $switch_path);
}
- $self->{ignore_regex} = eval { command_oneline('config', '--get',
- "svn-remote.$git_svn->{repo_id}.ignore-paths") };
+
+ # some options are read globally, but can be overridden locally
+ # per [svn-remote "..."] section. Command-line options will *NOT*
+ # override options set in an [svn-remote "..."] section
+ $repo_id = $git_svn->{repo_id};
+ my $k = "svn-remote.$repo_id.ignore-paths";
+ my $v = eval { command_oneline('config', '--get', $k) };
+ $self->{ignore_regex} = $v;
+
+ $k = "svn-remote.$repo_id.preserve-empty-dirs";
+ $v = eval { command_oneline('config', '--get', '--bool', $k) };
+ if ($v && $v eq 'true') {
+ $_preserve_empty_dirs = 1;
+ $k = "svn-remote.$repo_id.placeholder-filename";
+ $v = eval { command_oneline('config', '--get', $k) };
+ $_placeholder_filename = $v;
+ }
+
+ # Load the list of placeholder files added during previous invocations.
+ $k = "svn-remote.$repo_id.added-placeholder";
+ $v = eval { command_oneline('config', '--get-all', $k) };
+ if ($_preserve_empty_dirs && $v) {
+ # command() prints errors to stderr, so we only call it if
+ # command_oneline() succeeded.
+ my @v = command('config', '--get-all', $k);
+ $added_placeholder{ dirname($_) } = $_ foreach @v;
+ }
+
$self->{empty} = {};
$self->{dir_prop} = {};
$self->{file_prop} = {};
$self->{gii}->remove($gpath);
print "\tD\t$gpath\n" unless $::_q;
}
+ # Don't add to @deleted_gpath if we're deleting a placeholder file.
+ push @deleted_gpath, $gpath unless $added_placeholder{dirname($path)};
$self->{empty}->{$path} = 0;
undef;
}
my ($dir, $file) = ($path =~ m#^(.*?)/?([^/]+)$#);
delete $self->{empty}->{$dir};
$mode = '100644';
+
+ if ($added_placeholder{$dir}) {
+ # Remove our placeholder file, if we created one.
+ delete_entry($self, $added_placeholder{$dir})
+ unless $path eq $added_placeholder{$dir};
+ delete $added_placeholder{$dir}
+ }
}
+
{ path => $path, mode_a => $mode, mode_b => $mode,
pool => SVN::Pool->new, action => 'A' };
}
chomp;
$self->{gii}->remove($_);
print "\tD\t$_\n" unless $::_q;
+ push @deleted_gpath, $gpath;
}
command_close_pipe($ls, $ctx);
$self->{empty}->{$path} = 0;
my ($dir, $file) = ($path =~ m#^(.*?)/?([^/]+)$#);
delete $self->{empty}->{$dir};
$self->{empty}->{$path} = 1;
+
+ if ($added_placeholder{$dir}) {
+ # Remove our placeholder file, if we created one.
+ delete_entry($self, $added_placeholder{$dir});
+ delete $added_placeholder{$dir}
+ }
+
out:
{ path => $path };
}
sub close_edit {
my $self = shift;
+
+ if ($_preserve_empty_dirs) {
+ my @empty_dirs;
+
+ # Any entry flagged as empty that also has an associated
+ # dir_prop represents a newly created empty directory.
+ foreach my $i (keys %{$self->{empty}}) {
+ push @empty_dirs, $i if exists $self->{dir_prop}->{$i};
+ }
+
+ # Search for directories that have become empty due to subsequent
+ # file deletes.
+ push @empty_dirs, $self->find_empty_directories();
+
+ # Finally, add a placeholder file to each empty directory.
+ $self->add_placeholder_file($_) foreach (@empty_dirs);
+
+ $self->stash_placeholder_list();
+ }
+
$self->{git_commit_ok} = 1;
$self->{nr} = $self->{gii}->{nr};
delete $self->{gii};
$self->SUPER::close_edit(@_);
}
+sub find_empty_directories {
+ my ($self) = @_;
+ my @empty_dirs;
+ my %dirs = map { dirname($_) => 1 } @deleted_gpath;
+
+ foreach my $dir (sort keys %dirs) {
+ next if $dir eq ".";
+
+ # If there have been any additions to this directory, there is
+ # no reason to check if it is empty.
+ my $skip_added = 0;
+ foreach my $t (qw/dir_prop file_prop/) {
+ foreach my $path (keys %{ $self->{$t} }) {
+ if (exists $self->{$t}->{dirname($path)}) {
+ $skip_added = 1;
+ last;
+ }
+ }
+ last if $skip_added;
+ }
+ next if $skip_added;
+
+ # Use `git ls-tree` to get the filenames of this directory
+ # that existed prior to this particular commit.
+ my $ls = command('ls-tree', '-z', '--name-only',
+ $self->{c}, "$dir/");
+ my %files = map { $_ => 1 } split(/\0/, $ls);
+
+ # Remove the filenames that were deleted during this commit.
+ delete $files{$_} foreach (@deleted_gpath);
+
+ # Report the directory if there are no filenames left.
+ push @empty_dirs, $dir unless (scalar %files);
+ }
+ @empty_dirs;
+}
+
+sub add_placeholder_file {
+ my ($self, $dir) = @_;
+ my $path = "$dir/$_placeholder_filename";
+ my $gpath = $self->git_path($path);
+
+ my $fh = $::_repository->temp_acquire($gpath);
+ my $hash = $::_repository->hash_and_insert_object(Git::temp_path($fh));
+ Git::temp_release($fh, 1);
+ $self->{gii}->update('100644', $hash, $gpath) or croak $!;
+
+ # The directory should no longer be considered empty.
+ delete $self->{empty}->{$dir} if exists $self->{empty}->{$dir};
+
+ # Keep track of any placeholder files we create.
+ $added_placeholder{$dir} = $path;
+}
+
+sub stash_placeholder_list {
+ my ($self) = @_;
+ my $k = "svn-remote.$repo_id.added-placeholder";
+ my $v = eval { command_oneline('config', '--get-all', $k) };
+ command_noisy('config', '--unset-all', $k) if $v;
+ foreach (values %added_placeholder) {
+ command_noisy('config', '--add', $k, $_);
+ }
+}
+
package SVN::Git::Editor;
use vars qw/@ISA $_rmdir $_cp_similarity $_find_copies_harder $_rename_limit/;
use strict;
}
#endif /* !USE_LIBPCRE */
+static int is_fixed(const char *s, size_t len)
+{
+ size_t i;
+
+ /* regcomp cannot accept patterns with NULs so we
+ * consider any pattern containing a NUL fixed.
+ */
+ if (memchr(s, 0, len))
+ return 1;
+
+ for (i = 0; i < len; i++) {
+ if (is_regex_special(s[i]))
+ return 0;
+ }
+
+ return 1;
+}
+
static void compile_regexp(struct grep_pat *p, struct grep_opt *opt)
{
int err;
p->word_regexp = opt->word_regexp;
p->ignore_case = opt->ignore_case;
- p->fixed = opt->fixed;
- if (p->fixed)
+ if (opt->fixed || is_fixed(p->pattern, p->patternlen))
+ p->fixed = 1;
+ else
+ p->fixed = 0;
+
+ if (p->fixed) {
+ if (opt->regflags & REG_ICASE || p->ignore_case) {
+ static char trans[256];
+ int i;
+ for (i = 0; i < 256; i++)
+ trans[i] = tolower(i);
+ p->kws = kwsalloc(trans);
+ } else {
+ p->kws = kwsalloc(NULL);
+ }
+ kwsincr(p->kws, p->pattern, p->patternlen);
+ kwsprep(p->kws);
return;
+ }
if (opt->pcre) {
compile_pcre_regexp(p, opt);
case GREP_PATTERN: /* atom */
case GREP_PATTERN_HEAD:
case GREP_PATTERN_BODY:
- if (p->pcre_regexp)
+ if (p->kws)
+ kwsfree(p->kws);
+ else if (p->pcre_regexp)
free_pcre_regexp(p);
else
regfree(&p->regexp);
static int fixmatch(struct grep_pat *p, char *line, char *eol,
regmatch_t *match)
{
- char *hit;
-
- if (p->ignore_case) {
- char *s = line;
- do {
- hit = strcasestr(s, p->pattern);
- if (hit)
- break;
- s += strlen(s) + 1;
- } while (s < eol);
- } else
- hit = memmem(line, eol - line, p->pattern, p->patternlen);
-
- if (!hit) {
+ struct kwsmatch kwsm;
+ size_t offset = kwsexec(p->kws, line, eol - line, &kwsm);
+ if (offset == -1) {
match->rm_so = match->rm_eo = -1;
return REG_NOMATCH;
- }
- else {
- match->rm_so = hit - line;
- match->rm_eo = match->rm_so + p->patternlen;
+ } else {
+ match->rm_so = offset;
+ match->rm_eo = match->rm_so + kwsm.size[0];
return 0;
}
}
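
With this change the fixed-string path goes through the kwset API end to end: compile_regexp() builds the keyword set (through a tolower translation table when matching case-insensitively) and fixmatch() queries it with kwsexec(). A minimal, hedged sketch of that API in isolation, using only the entry points declared in kwset.h; the wrapper function itself is invented.

#include "cache.h"
#include "kwset.h"

/*
 * Hedged sketch: case-insensitive fixed-string search via kwset,
 * mirroring what compile_regexp() and fixmatch() do above.
 */
static int find_fixed(const char *pattern, const char *buf, size_t len,
		      size_t *offset, size_t *matchlen)
{
	static char trans[256];
	struct kwsmatch m;
	kwset_t kws;
	size_t hit;
	int i;

	for (i = 0; i < 256; i++)
		trans[i] = tolower(i);

	kws = kwsalloc(trans);			/* pass NULL for case-sensitive */
	kwsincr(kws, pattern, strlen(pattern));
	kwsprep(kws);
	hit = kwsexec(kws, buf, len, &m);	/* (size_t)-1 means no match */
	kwsfree(kws);

	if (hit == (size_t)-1)
		return 0;
	*offset = m.offset[0];
	*matchlen = m.size[0];
	return 1;
}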
typedef int pcre;
typedef int pcre_extra;
#endif
+#include "kwset.h"
enum grep_pat_token {
GREP_PATTERN,
regex_t regexp;
pcre *pcre_regexp;
pcre_extra *pcre_extra_info;
+ kwset_t kws;
unsigned fixed:1;
unsigned ignore_case:1;
unsigned word_regexp:1;
--- /dev/null
+/*
+ * This file has been copied from commit e7ac713d^ in the GNU grep git
+ * repository. A few small changes have been made to adapt the code to
+ * Git.
+ */
+
+/* kwset.c - search for any of a set of keywords.
+ Copyright 1989, 1998, 2000, 2005 Free Software Foundation, Inc.
+
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2, or (at your option)
+ any later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with this program; if not, write to the Free Software
+ Foundation, Inc., 51 Franklin Street - Fifth Floor, Boston, MA
+ 02110-1301, USA. */
+
+/* Written August 1989 by Mike Haertel.
+ The author may be reached (Email) at the address mike@ai.mit.edu,
+ or (US mail) as Mike Haertel c/o Free Software Foundation. */
+
+/* The algorithm implemented by these routines bears a startling resemblance
+ to one discovered by Beate Commentz-Walter, although it is not identical.
+ See "A String Matching Algorithm Fast on the Average," Technical Report,
+ IBM-Germany, Scientific Center Heidelberg, Tiergartenstrasse 15, D-6900
+ Heidelberg, Germany. See also Aho, A.V., and M. Corasick, "Efficient
+ String Matching: An Aid to Bibliographic Search," CACM June 1975,
+ Vol. 18, No. 6, which describes the failure function used below. */
+
+#include "cache.h"
+
+#include "kwset.h"
+#include "compat/obstack.h"
+
+#define NCHAR (UCHAR_MAX + 1)
+#define obstack_chunk_alloc xmalloc
+#define obstack_chunk_free free
+
+#define U(c) ((unsigned char) (c))
+
+/* Balanced tree of edges and labels leaving a given trie node. */
+struct tree
+{
+ struct tree *llink; /* Left link; MUST be first field. */
+ struct tree *rlink; /* Right link (to larger labels). */
+ struct trie *trie; /* Trie node pointed to by this edge. */
+ unsigned char label; /* Label on this edge. */
+ char balance; /* Difference in depths of subtrees. */
+};
+
+/* Node of a trie representing a set of reversed keywords. */
+struct trie
+{
+ unsigned int accepting; /* Word index of accepted word, or zero. */
+ struct tree *links; /* Tree of edges leaving this node. */
+ struct trie *parent; /* Parent of this node. */
+ struct trie *next; /* List of all trie nodes in level order. */
+ struct trie *fail; /* Aho-Corasick failure function. */
+ int depth; /* Depth of this node from the root. */
+ int shift; /* Shift function for search failures. */
+ int maxshift; /* Max shift of self and descendents. */
+};
+
+/* Structure returned opaquely to the caller, containing everything. */
+struct kwset
+{
+ struct obstack obstack; /* Obstack for node allocation. */
+ int words; /* Number of words in the trie. */
+ struct trie *trie; /* The trie itself. */
+ int mind; /* Minimum depth of an accepting node. */
+ int maxd; /* Maximum depth of any node. */
+ unsigned char delta[NCHAR]; /* Delta table for rapid search. */
+ struct trie *next[NCHAR]; /* Table of children of the root. */
+ char *target; /* Target string if there's only one. */
+ int mind2; /* Used in Boyer-Moore search for one string. */
+ char const *trans; /* Character translation table. */
+};
+
+/* Allocate and initialize a keyword set object, returning an opaque
+ pointer to it. Return NULL if memory is not available. */
+kwset_t
+kwsalloc (char const *trans)
+{
+ struct kwset *kwset;
+
+ kwset = (struct kwset *) xmalloc(sizeof (struct kwset));
+
+ obstack_init(&kwset->obstack);
+ kwset->words = 0;
+ kwset->trie
+ = (struct trie *) obstack_alloc(&kwset->obstack, sizeof (struct trie));
+ if (!kwset->trie)
+ {
+ kwsfree((kwset_t) kwset);
+ return NULL;
+ }
+ kwset->trie->accepting = 0;
+ kwset->trie->links = NULL;
+ kwset->trie->parent = NULL;
+ kwset->trie->next = NULL;
+ kwset->trie->fail = NULL;
+ kwset->trie->depth = 0;
+ kwset->trie->shift = 0;
+ kwset->mind = INT_MAX;
+ kwset->maxd = -1;
+ kwset->target = NULL;
+ kwset->trans = trans;
+
+ return (kwset_t) kwset;
+}
+
+/* This upper bound is valid for CHAR_BIT >= 4 and
+ exact for CHAR_BIT in { 4..11, 13, 15, 17, 19 }. */
+#define DEPTH_SIZE (CHAR_BIT + CHAR_BIT/2)
+
+/* Add the given string to the contents of the keyword set. Return NULL
+ for success, an error message otherwise. */
+const char *
+kwsincr (kwset_t kws, char const *text, size_t len)
+{
+ struct kwset *kwset;
+ register struct trie *trie;
+ register unsigned char label;
+ register struct tree *link;
+ register int depth;
+ struct tree *links[DEPTH_SIZE];
+ enum { L, R } dirs[DEPTH_SIZE];
+ struct tree *t, *r, *l, *rl, *lr;
+
+ kwset = (struct kwset *) kws;
+ trie = kwset->trie;
+ text += len;
+
+ /* Descend the trie (built of reversed keywords) character-by-character,
+ installing new nodes when necessary. */
+ while (len--)
+ {
+ label = kwset->trans ? kwset->trans[U(*--text)] : *--text;
+
+ /* Descend the tree of outgoing links for this trie node,
+ looking for the current character and keeping track
+ of the path followed. */
+ link = trie->links;
+ links[0] = (struct tree *) &trie->links;
+ dirs[0] = L;
+ depth = 1;
+
+ while (link && label != link->label)
+ {
+ links[depth] = link;
+ if (label < link->label)
+ dirs[depth++] = L, link = link->llink;
+ else
+ dirs[depth++] = R, link = link->rlink;
+ }
+
+ /* The current character doesn't have an outgoing link at
+ this trie node, so build a new trie node and install
+ a link in the current trie node's tree. */
+ if (!link)
+ {
+ link = (struct tree *) obstack_alloc(&kwset->obstack,
+ sizeof (struct tree));
+ if (!link)
+ return "memory exhausted";
+ link->llink = NULL;
+ link->rlink = NULL;
+ link->trie = (struct trie *) obstack_alloc(&kwset->obstack,
+ sizeof (struct trie));
+ if (!link->trie)
+ {
+ obstack_free(&kwset->obstack, link);
+ return "memory exhausted";
+ }
+ link->trie->accepting = 0;
+ link->trie->links = NULL;
+ link->trie->parent = trie;
+ link->trie->next = NULL;
+ link->trie->fail = NULL;
+ link->trie->depth = trie->depth + 1;
+ link->trie->shift = 0;
+ link->label = label;
+ link->balance = 0;
+
+ /* Install the new tree node in its parent. */
+ if (dirs[--depth] == L)
+ links[depth]->llink = link;
+ else
+ links[depth]->rlink = link;
+
+ /* Back up the tree fixing the balance flags. */
+ while (depth && !links[depth]->balance)
+ {
+ if (dirs[depth] == L)
+ --links[depth]->balance;
+ else
+ ++links[depth]->balance;
+ --depth;
+ }
+
+ /* Rebalance the tree by pointer rotations if necessary. */
+ if (depth && ((dirs[depth] == L && --links[depth]->balance)
+ || (dirs[depth] == R && ++links[depth]->balance)))
+ {
+ switch (links[depth]->balance)
+ {
+ case (char) -2:
+ switch (dirs[depth + 1])
+ {
+ case L:
+ r = links[depth], t = r->llink, rl = t->rlink;
+ t->rlink = r, r->llink = rl;
+ t->balance = r->balance = 0;
+ break;
+ case R:
+ r = links[depth], l = r->llink, t = l->rlink;
+ rl = t->rlink, lr = t->llink;
+ t->llink = l, l->rlink = lr, t->rlink = r, r->llink = rl;
+ l->balance = t->balance != 1 ? 0 : -1;
+ r->balance = t->balance != (char) -1 ? 0 : 1;
+ t->balance = 0;
+ break;
+ default:
+ abort ();
+ }
+ break;
+ case 2:
+ switch (dirs[depth + 1])
+ {
+ case R:
+ l = links[depth], t = l->rlink, lr = t->llink;
+ t->llink = l, l->rlink = lr;
+ t->balance = l->balance = 0;
+ break;
+ case L:
+ l = links[depth], r = l->rlink, t = r->llink;
+ lr = t->llink, rl = t->rlink;
+ t->llink = l, l->rlink = lr, t->rlink = r, r->llink = rl;
+ l->balance = t->balance != 1 ? 0 : -1;
+ r->balance = t->balance != (char) -1 ? 0 : 1;
+ t->balance = 0;
+ break;
+ default:
+ abort ();
+ }
+ break;
+ default:
+ abort ();
+ }
+
+ if (dirs[depth - 1] == L)
+ links[depth - 1]->llink = t;
+ else
+ links[depth - 1]->rlink = t;
+ }
+ }
+
+ trie = link->trie;
+ }
+
+ /* Mark the node we finally reached as accepting, encoding the
+ index number of this word in the keyword set so far. */
+ if (!trie->accepting)
+ trie->accepting = 1 + 2 * kwset->words;
+ ++kwset->words;
+
+ /* Keep track of the longest and shortest string of the keyword set. */
+ if (trie->depth < kwset->mind)
+ kwset->mind = trie->depth;
+ if (trie->depth > kwset->maxd)
+ kwset->maxd = trie->depth;
+
+ return NULL;
+}
+
+/* Enqueue the trie nodes referenced from the given tree in the
+ given queue. */
+static void
+enqueue (struct tree *tree, struct trie **last)
+{
+ if (!tree)
+ return;
+ enqueue(tree->llink, last);
+ enqueue(tree->rlink, last);
+ (*last) = (*last)->next = tree->trie;
+}
+
+/* Compute the Aho-Corasick failure function for the trie nodes referenced
+ from the given tree, given the failure function for their parent as
+ well as a last resort failure node. */
+static void
+treefails (register struct tree const *tree, struct trie const *fail,
+ struct trie *recourse)
+{
+ register struct tree *link;
+
+ if (!tree)
+ return;
+
+ treefails(tree->llink, fail, recourse);
+ treefails(tree->rlink, fail, recourse);
+
+ /* Find, in the chain of fails going back to the root, the first
+ node that has a descendent on the current label. */
+ while (fail)
+ {
+ link = fail->links;
+ while (link && tree->label != link->label)
+ if (tree->label < link->label)
+ link = link->llink;
+ else
+ link = link->rlink;
+ if (link)
+ {
+ tree->trie->fail = link->trie;
+ return;
+ }
+ fail = fail->fail;
+ }
+
+ tree->trie->fail = recourse;
+}
+
+/* Set delta entries for the links of the given tree such that
+ the preexisting delta value is larger than the current depth. */
+static void
+treedelta (register struct tree const *tree,
+ register unsigned int depth,
+ unsigned char delta[])
+{
+ if (!tree)
+ return;
+ treedelta(tree->llink, depth, delta);
+ treedelta(tree->rlink, depth, delta);
+ if (depth < delta[tree->label])
+ delta[tree->label] = depth;
+}
+
+/* Return true if A has every label in B. */
+static int
+hasevery (register struct tree const *a, register struct tree const *b)
+{
+ if (!b)
+ return 1;
+ if (!hasevery(a, b->llink))
+ return 0;
+ if (!hasevery(a, b->rlink))
+ return 0;
+ while (a && b->label != a->label)
+ if (b->label < a->label)
+ a = a->llink;
+ else
+ a = a->rlink;
+ return !!a;
+}
+
+/* Compute a vector, indexed by character code, of the trie nodes
+ referenced from the given tree. */
+static void
+treenext (struct tree const *tree, struct trie *next[])
+{
+ if (!tree)
+ return;
+ treenext(tree->llink, next);
+ treenext(tree->rlink, next);
+ next[tree->label] = tree->trie;
+}
+
+/* Compute the shift for each trie node, as well as the delta
+ table and next cache for the given keyword set. */
+const char *
+kwsprep (kwset_t kws)
+{
+ register struct kwset *kwset;
+ register int i;
+ register struct trie *curr;
+ register char const *trans;
+ unsigned char delta[NCHAR];
+
+ kwset = (struct kwset *) kws;
+
+ /* Initial values for the delta table; will be changed later. The
+ delta entry for a given character is the smallest depth of any
+ node at which an outgoing edge is labeled by that character. */
+ memset(delta, kwset->mind < UCHAR_MAX ? kwset->mind : UCHAR_MAX, NCHAR);
+
+ /* Check if we can use the simple boyer-moore algorithm, instead
+ of the hairy commentz-walter algorithm. */
+ if (kwset->words == 1 && kwset->trans == NULL)
+ {
+ char c;
+
+ /* Looking for just one string. Extract it from the trie. */
+ kwset->target = obstack_alloc(&kwset->obstack, kwset->mind);
+ if (!kwset->target)
+ return "memory exhausted";
+ for (i = kwset->mind - 1, curr = kwset->trie; i >= 0; --i)
+ {
+ kwset->target[i] = curr->links->label;
+ curr = curr->links->trie;
+ }
+ /* Build the Boyer Moore delta. Boy that's easy compared to CW. */
+ for (i = 0; i < kwset->mind; ++i)
+ delta[U(kwset->target[i])] = kwset->mind - (i + 1);
+ /* Find the minimal delta2 shift that we might make after
+ a backwards match has failed. */
+ c = kwset->target[kwset->mind - 1];
+ for (i = kwset->mind - 2; i >= 0; --i)
+ if (kwset->target[i] == c)
+ break;
+ kwset->mind2 = kwset->mind - (i + 1);
+ }
+ else
+ {
+ register struct trie *fail;
+ struct trie *last, *next[NCHAR];
+
+ /* Traverse the nodes of the trie in level order, simultaneously
+ computing the delta table, failure function, and shift function. */
+ for (curr = last = kwset->trie; curr; curr = curr->next)
+ {
+ /* Enqueue the immediate descendents in the level order queue. */
+ enqueue(curr->links, &last);
+
+ curr->shift = kwset->mind;
+ curr->maxshift = kwset->mind;
+
+ /* Update the delta table for the descendents of this node. */
+ treedelta(curr->links, curr->depth, delta);
+
+ /* Compute the failure function for the decendents of this node. */
+ treefails(curr->links, curr->fail, kwset->trie);
+
+ /* Update the shifts at each node in the current node's chain
+ of fails back to the root. */
+ for (fail = curr->fail; fail; fail = fail->fail)
+ {
+ /* If the current node has some outgoing edge that the fail
+ doesn't, then the shift at the fail should be no larger
+ than the difference of their depths. */
+ if (!hasevery(fail->links, curr->links))
+ if (curr->depth - fail->depth < fail->shift)
+ fail->shift = curr->depth - fail->depth;
+
+ /* If the current node is accepting then the shift at the
+ fail and its descendents should be no larger than the
+ difference of their depths. */
+ if (curr->accepting && fail->maxshift > curr->depth - fail->depth)
+ fail->maxshift = curr->depth - fail->depth;
+ }
+ }
+
+ /* Traverse the trie in level order again, fixing up all nodes whose
+ shift exceeds their inherited maxshift. */
+ for (curr = kwset->trie->next; curr; curr = curr->next)
+ {
+ if (curr->maxshift > curr->parent->maxshift)
+ curr->maxshift = curr->parent->maxshift;
+ if (curr->shift > curr->maxshift)
+ curr->shift = curr->maxshift;
+ }
+
+ /* Create a vector, indexed by character code, of the outgoing links
+ from the root node. */
+ for (i = 0; i < NCHAR; ++i)
+ next[i] = NULL;
+ treenext(kwset->trie->links, next);
+
+ if ((trans = kwset->trans) != NULL)
+ for (i = 0; i < NCHAR; ++i)
+ kwset->next[i] = next[U(trans[i])];
+ else
+ memcpy(kwset->next, next, NCHAR * sizeof(struct trie *));
+ }
+
+ /* Fix things up for any translation table. */
+ if ((trans = kwset->trans) != NULL)
+ for (i = 0; i < NCHAR; ++i)
+ kwset->delta[i] = delta[U(trans[i])];
+ else
+ memcpy(kwset->delta, delta, NCHAR);
+
+ return NULL;
+}
+
+/* Fast boyer-moore search. */
+static size_t
+bmexec (kwset_t kws, char const *text, size_t size)
+{
+ struct kwset const *kwset;
+ register unsigned char const *d1;
+ register char const *ep, *sp, *tp;
+ register int d, gc, i, len, md2;
+
+ kwset = (struct kwset const *) kws;
+ len = kwset->mind;
+
+ if (len == 0)
+ return 0;
+ if (len > size)
+ return -1;
+ if (len == 1)
+ {
+ tp = memchr (text, kwset->target[0], size);
+ return tp ? tp - text : -1;
+ }
+
+ d1 = kwset->delta;
+ sp = kwset->target + len;
+ gc = U(sp[-2]);
+ md2 = kwset->mind2;
+ tp = text + len;
+
+ /* Significance of 12: 1 (initial offset) + 10 (skip loop) + 1 (md2). */
+ if (size > 12 * len)
+ /* 11 is not a bug, the initial offset happens only once. */
+ for (ep = text + size - 11 * len;;)
+ {
+ while (tp <= ep)
+ {
+ d = d1[U(tp[-1])], tp += d;
+ d = d1[U(tp[-1])], tp += d;
+ if (d == 0)
+ goto found;
+ d = d1[U(tp[-1])], tp += d;
+ d = d1[U(tp[-1])], tp += d;
+ d = d1[U(tp[-1])], tp += d;
+ if (d == 0)
+ goto found;
+ d = d1[U(tp[-1])], tp += d;
+ d = d1[U(tp[-1])], tp += d;
+ d = d1[U(tp[-1])], tp += d;
+ if (d == 0)
+ goto found;
+ d = d1[U(tp[-1])], tp += d;
+ d = d1[U(tp[-1])], tp += d;
+ }
+ break;
+ found:
+ if (U(tp[-2]) == gc)
+ {
+ for (i = 3; i <= len && U(tp[-i]) == U(sp[-i]); ++i)
+ ;
+ if (i > len)
+ return tp - len - text;
+ }
+ tp += md2;
+ }
+
+ /* Now we have only a few characters left to search. We
+ carefully avoid ever producing an out-of-bounds pointer. */
+ ep = text + size;
+ d = d1[U(tp[-1])];
+ while (d <= ep - tp)
+ {
+ d = d1[U((tp += d)[-1])];
+ if (d != 0)
+ continue;
+ if (U(tp[-2]) == gc)
+ {
+ for (i = 3; i <= len && U(tp[-i]) == U(sp[-i]); ++i)
+ ;
+ if (i > len)
+ return tp - len - text;
+ }
+ d = md2;
+ }
+
+ return -1;
+}
+
+/* Hairy multiple string search. */
+static size_t
+cwexec (kwset_t kws, char const *text, size_t len, struct kwsmatch *kwsmatch)
+{
+ struct kwset const *kwset;
+ struct trie * const *next;
+ struct trie const *trie;
+ struct trie const *accept;
+ char const *beg, *lim, *mch, *lmch;
+ register unsigned char c;
+ register unsigned char const *delta;
+ register int d;
+ register char const *end, *qlim;
+ register struct tree const *tree;
+ register char const *trans;
+
+ accept = NULL;
+
+ /* Initialize register copies and look for easy ways out. */
+ kwset = (struct kwset *) kws;
+ if (len < kwset->mind)
+ return -1;
+ next = kwset->next;
+ delta = kwset->delta;
+ trans = kwset->trans;
+ lim = text + len;
+ end = text;
+ if ((d = kwset->mind) != 0)
+ mch = NULL;
+ else
+ {
+ mch = text, accept = kwset->trie;
+ goto match;
+ }
+
+ if (len >= 4 * kwset->mind)
+ qlim = lim - 4 * kwset->mind;
+ else
+ qlim = NULL;
+
+ while (lim - end >= d)
+ {
+ if (qlim && end <= qlim)
+ {
+ end += d - 1;
+ while ((d = delta[c = *end]) && end < qlim)
+ {
+ end += d;
+ end += delta[U(*end)];
+ end += delta[U(*end)];
+ }
+ ++end;
+ }
+ else
+ d = delta[c = (end += d)[-1]];
+ if (d)
+ continue;
+ beg = end - 1;
+ trie = next[c];
+ if (trie->accepting)
+ {
+ mch = beg;
+ accept = trie;
+ }
+ d = trie->shift;
+ while (beg > text)
+ {
+ c = trans ? trans[U(*--beg)] : *--beg;
+ tree = trie->links;
+ while (tree && c != tree->label)
+ if (c < tree->label)
+ tree = tree->llink;
+ else
+ tree = tree->rlink;
+ if (tree)
+ {
+ trie = tree->trie;
+ if (trie->accepting)
+ {
+ mch = beg;
+ accept = trie;
+ }
+ }
+ else
+ break;
+ d = trie->shift;
+ }
+ if (mch)
+ goto match;
+ }
+ return -1;
+
+ match:
+ /* Given a known match, find the longest possible match anchored
+ at or before its starting point. This is nearly a verbatim
+ copy of the preceding main search loops. */
+ if (lim - mch > kwset->maxd)
+ lim = mch + kwset->maxd;
+ lmch = NULL;
+ d = 1;
+ while (lim - end >= d)
+ {
+ if ((d = delta[c = (end += d)[-1]]) != 0)
+ continue;
+ beg = end - 1;
+ if (!(trie = next[c]))
+ {
+ d = 1;
+ continue;
+ }
+ if (trie->accepting && beg <= mch)
+ {
+ lmch = beg;
+ accept = trie;
+ }
+ d = trie->shift;
+ while (beg > text)
+ {
+ c = trans ? trans[U(*--beg)] : *--beg;
+ tree = trie->links;
+ while (tree && c != tree->label)
+ if (c < tree->label)
+ tree = tree->llink;
+ else
+ tree = tree->rlink;
+ if (tree)
+ {
+ trie = tree->trie;
+ if (trie->accepting && beg <= mch)
+ {
+ lmch = beg;
+ accept = trie;
+ }
+ }
+ else
+ break;
+ d = trie->shift;
+ }
+ if (lmch)
+ {
+ mch = lmch;
+ goto match;
+ }
+ if (!d)
+ d = 1;
+ }
+
+ if (kwsmatch)
+ {
+ kwsmatch->index = accept->accepting / 2;
+ kwsmatch->offset[0] = mch - text;
+ kwsmatch->size[0] = accept->depth;
+ }
+ return mch - text;
+}
+
+/* Search through the given text for a match of any member of the
+ given keyword set. Return the offset (into TEXT) of the first
+ character of the matching substring, or (size_t) -1 if no match
+ is found. If KWSMATCH is non-NULL, store in the referenced
+ structure the index number of the matched keyword, along with
+ the offset and length of the match. */
+size_t
+kwsexec (kwset_t kws, char const *text, size_t size,
+ struct kwsmatch *kwsmatch)
+{
+ struct kwset const *kwset = (struct kwset *) kws;
+ if (kwset->words == 1 && kwset->trans == NULL)
+ {
+ size_t ret = bmexec (kws, text, size);
+ if (kwsmatch != NULL && ret != (size_t) -1)
+ {
+ kwsmatch->index = 0;
+ kwsmatch->offset[0] = ret;
+ kwsmatch->size[0] = kwset->mind;
+ }
+ return ret;
+ }
+ else
+ return cwexec(kws, text, size, kwsmatch);
+}
+
+/* Free the components of the given keyword set. */
+void
+kwsfree (kwset_t kws)
+{
+ struct kwset *kwset;
+
+ kwset = (struct kwset *) kws;
+ obstack_free(&kwset->obstack, NULL);
+ free(kws);
+}
--- /dev/null
+/* This file has been copied from commit e7ac713d^ in the GNU grep git
+ * repository. A few small changes have been made to adapt the code to
+ * Git.
+ */
+
+/* kwset.h - header declaring the keyword set library.
+ Copyright (C) 1989, 1998, 2005 Free Software Foundation, Inc.
+
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2, or (at your option)
+ any later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with this program; if not, write to the Free Software
+ Foundation, Inc., 51 Franklin Street - Fifth Floor, Boston, MA
+ 02110-1301, USA. */
+
+/* Written August 1989 by Mike Haertel.
+ The author may be reached (Email) at the address mike@ai.mit.edu,
+ or (US mail) as Mike Haertel c/o Free Software Foundation. */
+
+struct kwsmatch
+{
+ int index; /* Index number of matching keyword. */
+ size_t offset[1]; /* Offset of each submatch. */
+ size_t size[1]; /* Length of each submatch. */
+};
+
+struct kwset_t;
+typedef struct kwset_t* kwset_t;
+
+/* Return an opaque pointer to a newly allocated keyword set, or NULL
+ if enough memory cannot be obtained. The argument, if non-NULL,
+ specifies a table of character translations to be applied to all
+ pattern and search text. */
+extern kwset_t kwsalloc(char const *);
+
+/* Incrementally extend the keyword set to include the given string.
+ Return NULL for success, or an error message. Remember an index
+ number for each keyword included in the set. */
+extern const char *kwsincr(kwset_t, char const *, size_t);
+
+/* When the keyword set has been completely built, prepare it for
+ use. Return NULL for success, or an error message. */
+extern const char *kwsprep(kwset_t);
+
+/* Search through the given buffer for a member of the keyword set.
+ Return the offset of the leftmost longest match found, or
+ (size_t) -1 if no match is found. If the kwsmatch argument is
+ non-NULL, store in the referenced structure the index of the
+ matched keyword, along with the offset and length of the match. */
+extern size_t kwsexec(kwset_t, char const *, size_t, struct kwsmatch *);
+
+/* Deallocate the given keyword set and all its associated storage. */
+extern void kwsfree(kwset_t);
+
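Read together, the declarations above form a small build-then-search API: allocate a set, add keywords with kwsincr(), finalize it with kwsprep(), then call kwsexec() as often as needed and release the set with kwsfree(). A hedged usage sketch (the keywords and text are made up; as the implementation above shows, kwsexec() reports "no match" as (size_t)-1):

	#include <stdio.h>
	#include <string.h>
	#include "kwset.h"

	int main(void)
	{
		const char *text = "grep through this haystack for a needle";
		struct kwsmatch match;
		size_t offset;
		kwset_t kws;

		kws = kwsalloc(NULL);	/* no character translation table */
		if (!kws)
			return 1;
		/* kwsincr() and kwsprep() return NULL on success,
		   an error message otherwise */
		if (kwsincr(kws, "needle", 6) ||
		    kwsincr(kws, "haystack", 8) ||
		    kwsprep(kws))
			return 1;

		offset = kwsexec(kws, text, strlen(text), &match);
		if (offset != (size_t)-1)
			printf("keyword %d matched at offset %lu, length %lu\n",
			       match.index, (unsigned long)offset,
			       (unsigned long)match.size[0]);

		kwsfree(kws);
		return 0;
	}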
struct blob *blob,
show_object_fn show,
struct name_path *path,
- const char *name)
+ const char *name,
+ void *cb_data)
{
struct object *obj = &blob->object;
if (obj->flags & (UNINTERESTING | SEEN))
return;
obj->flags |= SEEN;
- show(obj, path, name);
+ show(obj, path, name, cb_data);
}
/*
const unsigned char *sha1,
show_object_fn show,
struct name_path *path,
- const char *name)
+ const char *name,
+ void *cb_data)
{
/* Nothing to do */
}
show_object_fn show,
struct name_path *path,
struct strbuf *base,
- const char *name)
+ const char *name,
+ void *cb_data)
{
struct object *obj = &tree->object;
struct tree_desc desc;
if (parse_tree(tree) < 0)
die("bad tree object %s", sha1_to_hex(obj->sha1));
obj->flags |= SEEN;
- show(obj, path, name);
+ show(obj, path, name, cb_data);
me.up = path;
me.elem = name;
me.elem_len = strlen(name);
if (S_ISDIR(entry.mode))
process_tree(revs,
lookup_tree(entry.sha1),
- show, &me, base, entry.path);
+ show, &me, base, entry.path,
+ cb_data);
else if (S_ISGITLINK(entry.mode))
process_gitlink(revs, entry.sha1,
- show, &me, entry.path);
+ show, &me, entry.path,
+ cb_data);
else
process_blob(revs,
lookup_blob(entry.sha1),
- show, &me, entry.path);
+ show, &me, entry.path,
+ cb_data);
}
strbuf_setlen(base, baselen);
free(tree->buffer);
continue;
if (obj->type == OBJ_TAG) {
obj->flags |= SEEN;
- show_object(obj, NULL, name);
+ show_object(obj, NULL, name, data);
continue;
}
if (obj->type == OBJ_TREE) {
process_tree(revs, (struct tree *)obj, show_object,
- NULL, &base, name);
+ NULL, &base, name, data);
continue;
}
if (obj->type == OBJ_BLOB) {
process_blob(revs, (struct blob *)obj, show_object,
- NULL, name);
+ NULL, name, data);
continue;
}
die("unknown pending object %s (%s)",
#define LIST_OBJECTS_H
typedef void (*show_commit_fn)(struct commit *, void *);
-typedef void (*show_object_fn)(struct object *, const struct name_path *, const char *);
-typedef void (*show_edge_fn)(struct commit *);
-
+typedef void (*show_object_fn)(struct object *, const struct name_path *, const char *, void *);
void traverse_commit_list(struct rev_info *, show_commit_fn, show_object_fn, void *);
+typedef void (*show_edge_fn)(struct commit *);
void mark_edges_uninteresting(struct commit_list *, struct rev_info *, show_edge_fn);
#endif
enum rename_type {
RENAME_NORMAL = 0,
RENAME_DELETE,
- RENAME_ONE_FILE_TO_TWO
+ RENAME_ONE_FILE_TO_ONE,
+ RENAME_ONE_FILE_TO_TWO,
+ RENAME_TWO_FILES_TO_ONE
};
-struct rename_df_conflict_info {
+struct rename_conflict_info {
enum rename_type rename_type;
struct diff_filepair *pair1;
struct diff_filepair *pair2;
const char *branch2;
struct stage_data *dst_entry1;
struct stage_data *dst_entry2;
+ struct diff_filespec ren1_other;
+ struct diff_filespec ren2_other;
};
/*
unsigned mode;
unsigned char sha[20];
} stages[4];
- struct rename_df_conflict_info *rename_df_conflict_info;
+ struct rename_conflict_info *rename_conflict_info;
unsigned processed:1;
};
-static inline void setup_rename_df_conflict_info(enum rename_type rename_type,
- struct diff_filepair *pair1,
- struct diff_filepair *pair2,
- const char *branch1,
- const char *branch2,
- struct stage_data *dst_entry1,
- struct stage_data *dst_entry2)
+static inline void setup_rename_conflict_info(enum rename_type rename_type,
+ struct diff_filepair *pair1,
+ struct diff_filepair *pair2,
+ const char *branch1,
+ const char *branch2,
+ struct stage_data *dst_entry1,
+ struct stage_data *dst_entry2,
+ struct merge_options *o,
+ struct stage_data *src_entry1,
+ struct stage_data *src_entry2)
{
- struct rename_df_conflict_info *ci = xcalloc(1, sizeof(struct rename_df_conflict_info));
+ struct rename_conflict_info *ci = xcalloc(1, sizeof(struct rename_conflict_info));
ci->rename_type = rename_type;
ci->pair1 = pair1;
ci->branch1 = branch1;
ci->branch2 = branch2;
ci->dst_entry1 = dst_entry1;
- dst_entry1->rename_df_conflict_info = ci;
+ dst_entry1->rename_conflict_info = ci;
dst_entry1->processed = 0;
assert(!pair2 == !dst_entry2);
if (dst_entry2) {
ci->dst_entry2 = dst_entry2;
ci->pair2 = pair2;
- dst_entry2->rename_df_conflict_info = ci;
- dst_entry2->processed = 0;
+ dst_entry2->rename_conflict_info = ci;
+ }
+
+ if (rename_type == RENAME_TWO_FILES_TO_ONE) {
+ /*
+ * For each rename, there could have been
+ * modifications on the side of history where that
+ * file was not renamed.
+ */
+ int ostage1 = o->branch1 == branch1 ? 3 : 2;
+ int ostage2 = ostage1 ^ 1;
+
+ ci->ren1_other.path = pair1->one->path;
+ hashcpy(ci->ren1_other.sha1, src_entry1->stages[ostage1].sha);
+ ci->ren1_other.mode = src_entry1->stages[ostage1].mode;
+
+ ci->ren2_other.path = pair2->one->path;
+ hashcpy(ci->ren2_other.sha1, src_entry2->stages[ostage2].sha);
+ ci->ren2_other.mode = src_entry2->stages[ostage2].mode;
}
}
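The ostage computation above leans on the index convention that stage 2 holds "ours" (branch1) and stage 3 holds "theirs" (branch2); because the two stage numbers differ only in their lowest bit, XOR with 1 yields the stage of the opposite side, a trick also used later via "stage ^ 1" in handle_file() and conflict_rename_rename_1to2(). A two-line sanity check of that identity:

	#include <assert.h>

	int main(void)
	{
		/* merge index stages: 2 = ours, 3 = theirs;
		   s ^ 1 gives the stage of the other side */
		assert((2 ^ 1) == 3);
		assert((3 ^ 1) == 2);
		return 0;
	}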
for (i = 0; i < active_nr; i++) {
struct cache_entry *ce = active_cache[i];
if (ce_stage(ce))
- fprintf(stderr, "BUG: %d %.*s", ce_stage(ce),
+ fprintf(stderr, "BUG: %d %.*s\n", ce_stage(ce),
(int)ce_namelen(ce), ce->name);
}
die("Bug in merge-recursive.c");
return unmerged;
}
-static void make_room_for_directories_of_df_conflicts(struct merge_options *o,
- struct string_list *entries)
+static int string_list_df_name_compare(const void *a, const void *b)
{
- /* If there are D/F conflicts, and the paths currently exist
- * in the working copy as a file, we want to remove them to
- * make room for the corresponding directory. Such paths will
- * later be processed in process_df_entry() at the end. If
- * the corresponding directory ends up being removed by the
- * merge, then the file will be reinstated at that time;
- * otherwise, if the file is not supposed to be removed by the
- * merge, the contents of the file will be placed in another
- * unique filename.
+ const struct string_list_item *one = a;
+ const struct string_list_item *two = b;
+ int onelen = strlen(one->string);
+ int twolen = strlen(two->string);
+ /*
+ * Here we only care that entries for D/F conflicts are
+ * adjacent, in particular with the file of the D/F conflict
+ * appearing before files below the corresponding directory.
+ * The order of the rest of the list is irrelevant for us.
*
- * NOTE: This function relies on the fact that entries for a
- * D/F conflict will appear adjacent in the index, with the
- * entries for the file appearing before entries for paths
- * below the corresponding directory.
+ * To achieve this, we sort with df_name_compare and provide
+ * the mode S_IFDIR so that D/F conflicts will sort correctly.
+ * We use the mode S_IFDIR for everything else for simplicity,
+ * since in other cases any changes in their order due to
+ * sorting cause no problems for us.
+ */
+ int cmp = df_name_compare(one->string, onelen, S_IFDIR,
+ two->string, twolen, S_IFDIR);
+ /*
+ * Now that 'foo' and 'foo/bar' compare equal, we have to make sure
+ * that 'foo' comes before 'foo/bar'.
*/
+ if (cmp)
+ return cmp;
+ return onelen - twolen;
+}
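The comparator relies on df_name_compare() treating both names as directories, so that "foo" and "foo/bar" compare equal and the length tie-break places the file of a D/F conflict immediately before the entries underneath the like-named directory. The same ordering rule can be sketched standalone (df_order and the sample paths are illustrative only, not code from this patch):

	#include <stdio.h>
	#include <stdlib.h>

	/* Order names as if each ended in '/', breaking ties so the shorter
	   name (the file side of a D/F conflict) comes directly before the
	   entries underneath the directory of the same name. */
	static int df_order(const void *va, const void *vb)
	{
		const char *a = *(const char *const *)va;
		const char *b = *(const char *const *)vb;

		while (*a && *b && *a == *b) {
			a++;
			b++;
		}
		if (*a && *b)
			return (unsigned char)*a - (unsigned char)*b;
		if (!*a && !*b)
			return 0;
		/* one name is a prefix of the other: compare the shorter
		   name's implied trailing '/' against the longer name's
		   next byte, shorter first on a tie */
		if (!*a)
			return *b == '/' ? -1 : '/' - (unsigned char)*b;
		return *a == '/' ? 1 : (unsigned char)*a - '/';
	}

	int main(void)
	{
		const char *paths[] = { "foo/bar", "foobar", "foo", "bar" };
		int i;

		qsort(paths, 4, sizeof(*paths), df_order);
		for (i = 0; i < 4; i++)
			printf("%s\n", paths[i]);	/* bar, foo, foo/bar, foobar */
		return 0;
	}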
+
+static void record_df_conflict_files(struct merge_options *o,
+ struct string_list *entries)
+{
+ /* If there is a D/F conflict and the file for such a conflict
+ * currently exists in the working copy, we want to allow it to be
+ * removed to make room for the corresponding directory if needed.
+ * The files underneath the directories of such D/F conflicts will
+ * be processed before the corresponding file involved in the D/F
+ * conflict. If the D/F directory ends up being removed by the
+ * merge, then we won't have to touch the D/F file. If the D/F
+ * directory needs to be written to the working copy, then the D/F
+ * file will simply be removed (in make_room_for_path()) to make
+ * room for the necessary paths. Note that if both the directory
+ * and the file need to be present, then the D/F file will be
+ * reinstated with a new unique name at the time it is processed.
+ */
+ struct string_list df_sorted_entries;
const char *last_file = NULL;
int last_len = 0;
int i;
+ /*
+ * If we're merging merge-bases, we don't want to bother with
+ * any working directory changes.
+ */
+ if (o->call_depth)
+ return;
+
+ /* Ensure D/F conflicts are adjacent in the entries list. */
+ memset(&df_sorted_entries, 0, sizeof(struct string_list));
for (i = 0; i < entries->nr; i++) {
- const char *path = entries->items[i].string;
+ struct string_list_item *next = &entries->items[i];
+ string_list_append(&df_sorted_entries, next->string)->util =
+ next->util;
+ }
+ qsort(df_sorted_entries.items, entries->nr, sizeof(*entries->items),
+ string_list_df_name_compare);
+
+ string_list_clear(&o->df_conflict_file_set, 1);
+ for (i = 0; i < df_sorted_entries.nr; i++) {
+ const char *path = df_sorted_entries.items[i].string;
int len = strlen(path);
- struct stage_data *e = entries->items[i].util;
+ struct stage_data *e = df_sorted_entries.items[i].util;
/*
* Check if last_file & path correspond to a D/F conflict;
* i.e. whether path is last_file+'/'+<something>.
- * If so, remove last_file to make room for path and friends.
+ * If so, record that it's okay to remove last_file to make
+ * room for path and friends if needed.
*/
if (last_file &&
len > last_len &&
memcmp(path, last_file, last_len) == 0 &&
path[last_len] == '/') {
- output(o, 3, "Removing %s to make room for subdirectory; may re-add later.", last_file);
- unlink(last_file);
+ string_list_insert(&o->df_conflict_file_set, last_file);
}
/*
last_file = NULL;
}
}
+ string_list_clear(&df_sorted_entries, 0);
}
struct rename {
return renames;
}
-static int update_stages_options(const char *path, struct diff_filespec *o,
- struct diff_filespec *a, struct diff_filespec *b,
- int clear, int options)
+static int update_stages(const char *path, const struct diff_filespec *o,
+ const struct diff_filespec *a,
+ const struct diff_filespec *b)
{
+
+ /*
+ * NOTE: It is usually a bad idea to call update_stages on a path
+ * before calling update_file on that same path, since it can
+ * sometimes lead to spurious "refusing to lose untracked file..."
+ * messages from update_file (via make_room_for_path via
+ * would_lose_untracked). Instead, reverse the order of the calls
+ * (executing update_file first and then update_stages).
+ */
+ int clear = 1;
+ int options = ADD_CACHE_OK_TO_ADD | ADD_CACHE_SKIP_DFCHECK;
if (clear)
if (remove_file_from_cache(path))
return -1;
return 0;
}
-static int update_stages(const char *path, struct diff_filespec *o,
- struct diff_filespec *a, struct diff_filespec *b,
- int clear)
+static void update_entry(struct stage_data *entry,
+ struct diff_filespec *o,
+ struct diff_filespec *a,
+ struct diff_filespec *b)
{
- int options = ADD_CACHE_OK_TO_ADD | ADD_CACHE_OK_TO_REPLACE;
- return update_stages_options(path, o, a, b, clear, options);
-}
-
-static int update_stages_and_entry(const char *path,
- struct stage_data *entry,
- struct diff_filespec *o,
- struct diff_filespec *a,
- struct diff_filespec *b,
- int clear)
-{
- int options;
-
entry->processed = 0;
entry->stages[1].mode = o->mode;
entry->stages[2].mode = a->mode;
hashcpy(entry->stages[1].sha, o->sha1);
hashcpy(entry->stages[2].sha, a->sha1);
hashcpy(entry->stages[3].sha, b->sha1);
- options = ADD_CACHE_OK_TO_ADD | ADD_CACHE_SKIP_DFCHECK;
- return update_stages_options(path, o, a, b, clear, options);
}
static int remove_file(struct merge_options *o, int clean,
}
}
-static int would_lose_untracked(const char *path)
+static int dir_in_way(const char *path, int check_working_copy)
+{
+ int pos, pathlen = strlen(path);
+ char *dirpath = xmalloc(pathlen + 2);
+ struct stat st;
+
+ strcpy(dirpath, path);
+ dirpath[pathlen] = '/';
+ dirpath[pathlen+1] = '\0';
+
+ pos = cache_name_pos(dirpath, pathlen+1);
+
+ if (pos < 0)
+ pos = -1 - pos;
+ if (pos < active_nr &&
+ !strncmp(dirpath, active_cache[pos]->name, pathlen+1)) {
+ free(dirpath);
+ return 1;
+ }
+
+ free(dirpath);
+ return check_working_copy && !lstat(path, &st) && S_ISDIR(st.st_mode);
+}
+
+static int was_tracked(const char *path)
{
int pos = cache_name_pos(path, strlen(path));
switch (ce_stage(active_cache[pos])) {
case 0:
case 2:
- return 0;
+ return 1;
}
pos++;
}
- return file_exists(path);
+ return 0;
}
-static int make_room_for_path(const char *path)
+static int would_lose_untracked(const char *path)
{
- int status;
+ return !was_tracked(path) && file_exists(path);
+}
+
+static int make_room_for_path(struct merge_options *o, const char *path)
+{
+ int status, i;
const char *msg = "failed to create path '%s'%s";
+ /* Unlink any D/F conflict files that are in the way */
+ for (i = 0; i < o->df_conflict_file_set.nr; i++) {
+ const char *df_path = o->df_conflict_file_set.items[i].string;
+ size_t pathlen = strlen(path);
+ size_t df_pathlen = strlen(df_path);
+ if (df_pathlen < pathlen &&
+ path[df_pathlen] == '/' &&
+ strncmp(path, df_path, df_pathlen) == 0) {
+ output(o, 3,
+ "Removing %s to make room for subdirectory\n",
+ df_path);
+ unlink(df_path);
+ unsorted_string_list_delete_item(&o->df_conflict_file_set,
+ i, 0);
+ break;
+ }
+ }
+
+ /* Make sure leading directories are created */
status = safe_create_leading_directories_const(path);
if (status) {
if (status == -3) {
}
}
- if (make_room_for_path(path) < 0) {
+ if (make_room_for_path(o, path) < 0) {
update_wd = 0;
free(buf);
goto update_index;
static int merge_3way(struct merge_options *o,
mmbuffer_t *result_buf,
- struct diff_filespec *one,
- struct diff_filespec *a,
- struct diff_filespec *b,
+ const struct diff_filespec *one,
+ const struct diff_filespec *a,
+ const struct diff_filespec *b,
const char *branch1,
const char *branch2)
{
return merge_status;
}
-static struct merge_file_info merge_file(struct merge_options *o,
- struct diff_filespec *one,
- struct diff_filespec *a,
- struct diff_filespec *b,
- const char *branch1,
- const char *branch2)
+static struct merge_file_info merge_file_1(struct merge_options *o,
+ const struct diff_filespec *one,
+ const struct diff_filespec *a,
+ const struct diff_filespec *b,
+ const char *branch1,
+ const char *branch2)
{
struct merge_file_info result;
result.merge = 0;
return result;
}
+static struct merge_file_info
+merge_file_special_markers(struct merge_options *o,
+ const struct diff_filespec *one,
+ const struct diff_filespec *a,
+ const struct diff_filespec *b,
+ const char *branch1,
+ const char *filename1,
+ const char *branch2,
+ const char *filename2)
+{
+ char *side1 = NULL;
+ char *side2 = NULL;
+ struct merge_file_info mfi;
+
+ if (filename1) {
+ side1 = xmalloc(strlen(branch1) + strlen(filename1) + 2);
+ sprintf(side1, "%s:%s", branch1, filename1);
+ }
+ if (filename2) {
+ side2 = xmalloc(strlen(branch2) + strlen(filename2) + 2);
+ sprintf(side2, "%s:%s", branch2, filename2);
+ }
+
+ mfi = merge_file_1(o, one, a, b,
+ side1 ? side1 : branch1, side2 ? side2 : branch2);
+ free(side1);
+ free(side2);
+ return mfi;
+}
+
+static struct merge_file_info merge_file(struct merge_options *o,
+ const char *path,
+ const unsigned char *o_sha, int o_mode,
+ const unsigned char *a_sha, int a_mode,
+ const unsigned char *b_sha, int b_mode,
+ const char *branch1,
+ const char *branch2)
+{
+ struct diff_filespec one, a, b;
+
+ one.path = a.path = b.path = (char *)path;
+ hashcpy(one.sha1, o_sha);
+ one.mode = o_mode;
+ hashcpy(a.sha1, a_sha);
+ a.mode = a_mode;
+ hashcpy(b.sha1, b_sha);
+ b.mode = b_mode;
+ return merge_file_1(o, &one, &a, &b, branch1, branch2);
+}
+
+static void handle_change_delete(struct merge_options *o,
+ const char *path,
+ const unsigned char *o_sha, int o_mode,
+ const unsigned char *a_sha, int a_mode,
+ const unsigned char *b_sha, int b_mode,
+ const char *change, const char *change_past)
+{
+ char *renamed = NULL;
+ if (dir_in_way(path, !o->call_depth)) {
+ renamed = unique_path(o, path, a_sha ? o->branch1 : o->branch2);
+ }
+
+ if (o->call_depth) {
+ /*
+ * We cannot arbitrarily accept either a_sha or b_sha as
+ * correct; since there is no true "middle point" between
+ * them, simply reuse the base version for virtual merge base.
+ */
+ remove_file_from_cache(path);
+ update_file(o, 0, o_sha, o_mode, renamed ? renamed : path);
+ } else if (!a_sha) {
+ output(o, 1, "CONFLICT (%s/delete): %s deleted in %s "
+ "and %s in %s. Version %s of %s left in tree%s%s.",
+ change, path, o->branch1,
+ change_past, o->branch2, o->branch2, path,
+ NULL == renamed ? "" : " at ",
+ NULL == renamed ? "" : renamed);
+ update_file(o, 0, b_sha, b_mode, renamed ? renamed : path);
+ } else {
+ output(o, 1, "CONFLICT (%s/delete): %s deleted in %s "
+ "and %s in %s. Version %s of %s left in tree%s%s.",
+ change, path, o->branch2,
+ change_past, o->branch1, o->branch1, path,
+ NULL == renamed ? "" : " at ",
+ NULL == renamed ? "" : renamed);
+ if (renamed)
+ update_file(o, 0, a_sha, a_mode, renamed);
+ /*
+ * No need to call update_file() on path when !renamed, since
+ * that would needlessly touch path. We could call
+ * update_file_flags() with update_cache=0 and update_wd=0,
+ * but that's a no-op.
+ */
+ }
+ free(renamed);
+}
+
static void conflict_rename_delete(struct merge_options *o,
struct diff_filepair *pair,
const char *rename_branch,
const char *other_branch)
{
- char *dest_name = pair->two->path;
- int df_conflict = 0;
- struct stat st;
+ const struct diff_filespec *orig = pair->one;
+ const struct diff_filespec *dest = pair->two;
+ const unsigned char *a_sha = NULL;
+ const unsigned char *b_sha = NULL;
+ int a_mode = 0;
+ int b_mode = 0;
+
+ if (rename_branch == o->branch1) {
+ a_sha = dest->sha1;
+ a_mode = dest->mode;
+ } else {
+ b_sha = dest->sha1;
+ b_mode = dest->mode;
+ }
- output(o, 1, "CONFLICT (rename/delete): Rename %s->%s in %s "
- "and deleted in %s",
- pair->one->path, pair->two->path, rename_branch,
- other_branch);
- if (!o->call_depth)
- update_stages(dest_name, NULL,
- rename_branch == o->branch1 ? pair->two : NULL,
- rename_branch == o->branch1 ? NULL : pair->two,
- 1);
- if (lstat(dest_name, &st) == 0 && S_ISDIR(st.st_mode)) {
- dest_name = unique_path(o, dest_name, rename_branch);
- df_conflict = 1;
+ handle_change_delete(o,
+ o->call_depth ? orig->path : dest->path,
+ orig->sha1, orig->mode,
+ a_sha, a_mode,
+ b_sha, b_mode,
+ "rename", "renamed");
+
+ if (o->call_depth) {
+ remove_file_from_cache(dest->path);
+ } else {
+ update_stages(dest->path, NULL,
+ rename_branch == o->branch1 ? dest : NULL,
+ rename_branch == o->branch1 ? NULL : dest);
}
- update_file(o, 0, pair->two->sha1, pair->two->mode, dest_name);
- if (df_conflict)
- free(dest_name);
+
}
-static void conflict_rename_rename_1to2(struct merge_options *o,
- struct diff_filepair *pair1,
- const char *branch1,
- struct diff_filepair *pair2,
- const char *branch2)
+static struct diff_filespec *filespec_from_entry(struct diff_filespec *target,
+ struct stage_data *entry,
+ int stage)
{
- /* One file was renamed in both branches, but to different names. */
- char *del[2];
- int delp = 0;
- const char *ren1_dst = pair1->two->path;
- const char *ren2_dst = pair2->two->path;
- const char *dst_name1 = ren1_dst;
- const char *dst_name2 = ren2_dst;
- struct stat st;
- if (lstat(ren1_dst, &st) == 0 && S_ISDIR(st.st_mode)) {
- dst_name1 = del[delp++] = unique_path(o, ren1_dst, branch1);
- output(o, 1, "%s is a directory in %s adding as %s instead",
- ren1_dst, branch2, dst_name1);
+ unsigned char *sha = entry->stages[stage].sha;
+ unsigned mode = entry->stages[stage].mode;
+ if (mode == 0 || is_null_sha1(sha))
+ return NULL;
+ hashcpy(target->sha1, sha);
+ target->mode = mode;
+ return target;
+}
+
+static void handle_file(struct merge_options *o,
+ struct diff_filespec *rename,
+ int stage,
+ struct rename_conflict_info *ci)
+{
+ char *dst_name = rename->path;
+ struct stage_data *dst_entry;
+ const char *cur_branch, *other_branch;
+ struct diff_filespec other;
+ struct diff_filespec *add;
+
+ if (stage == 2) {
+ dst_entry = ci->dst_entry1;
+ cur_branch = ci->branch1;
+ other_branch = ci->branch2;
+ } else {
+ dst_entry = ci->dst_entry2;
+ cur_branch = ci->branch2;
+ other_branch = ci->branch1;
}
- if (lstat(ren2_dst, &st) == 0 && S_ISDIR(st.st_mode)) {
- dst_name2 = del[delp++] = unique_path(o, ren2_dst, branch2);
- output(o, 1, "%s is a directory in %s adding as %s instead",
- ren2_dst, branch1, dst_name2);
+
+ add = filespec_from_entry(&other, dst_entry, stage ^ 1);
+ if (add) {
+ char *add_name = unique_path(o, rename->path, other_branch);
+ update_file(o, 0, add->sha1, add->mode, add_name);
+
+ remove_file(o, 0, rename->path, 0);
+ dst_name = unique_path(o, rename->path, cur_branch);
+ } else {
+ if (dir_in_way(rename->path, !o->call_depth)) {
+ dst_name = unique_path(o, rename->path, cur_branch);
+ output(o, 1, "%s is a directory in %s adding as %s instead",
+ rename->path, other_branch, dst_name);
+ }
}
+ update_file(o, 0, rename->sha1, rename->mode, dst_name);
+ if (stage == 2)
+ update_stages(rename->path, NULL, rename, add);
+ else
+ update_stages(rename->path, NULL, add, rename);
+
+ if (dst_name != rename->path)
+ free(dst_name);
+}
+
+static void conflict_rename_rename_1to2(struct merge_options *o,
+ struct rename_conflict_info *ci)
+{
+ /* One file was renamed in both branches, but to different names. */
+ struct diff_filespec *one = ci->pair1->one;
+ struct diff_filespec *a = ci->pair1->two;
+ struct diff_filespec *b = ci->pair2->two;
+
+ output(o, 1, "CONFLICT (rename/rename): "
+ "Rename \"%s\"->\"%s\" in branch \"%s\" "
+ "rename \"%s\"->\"%s\" in \"%s\"%s",
+ one->path, a->path, ci->branch1,
+ one->path, b->path, ci->branch2,
+ o->call_depth ? " (left unresolved)" : "");
if (o->call_depth) {
- remove_file_from_cache(dst_name1);
- remove_file_from_cache(dst_name2);
+ struct merge_file_info mfi;
+ struct diff_filespec other;
+ struct diff_filespec *add;
+ mfi = merge_file(o, one->path,
+ one->sha1, one->mode,
+ a->sha1, a->mode,
+ b->sha1, b->mode,
+ ci->branch1, ci->branch2);
/*
- * Uncomment to leave the conflicting names in the resulting tree
- *
- * update_file(o, 0, pair1->two->sha1, pair1->two->mode, dst_name1);
- * update_file(o, 0, pair2->two->sha1, pair2->two->mode, dst_name2);
+ * FIXME: For rename/add-source conflicts (if we could detect
+ * such), this is wrong. We should instead find a unique
+ * pathname and then either rename the add-source file to that
+ * unique path, or use that unique path instead of src here.
*/
- } else {
- update_stages(ren1_dst, NULL, pair1->two, NULL, 1);
- update_stages(ren2_dst, NULL, NULL, pair2->two, 1);
+ update_file(o, 0, mfi.sha, mfi.mode, one->path);
- update_file(o, 0, pair1->two->sha1, pair1->two->mode, dst_name1);
- update_file(o, 0, pair2->two->sha1, pair2->two->mode, dst_name2);
+ /*
+ * Above, we put the merged content at the merge-base's
+ * path. Now we usually need to delete both a->path and
+ * b->path. However, the rename on each side of the merge
+ * could also be involved in a rename/add conflict. In
+ * such cases, we should keep the added file around,
+ * resolving the conflict at that path in its favor.
+ */
+ add = filespec_from_entry(&other, ci->dst_entry1, 2 ^ 1);
+ if (add)
+ update_file(o, 0, add->sha1, add->mode, a->path);
+ else
+ remove_file_from_cache(a->path);
+ add = filespec_from_entry(&other, ci->dst_entry2, 3 ^ 1);
+ if (add)
+ update_file(o, 0, add->sha1, add->mode, b->path);
+ else
+ remove_file_from_cache(b->path);
+ } else {
+ handle_file(o, a, 2, ci);
+ handle_file(o, b, 3, ci);
}
- while (delp--)
- free(del[delp]);
}
static void conflict_rename_rename_2to1(struct merge_options *o,
- struct rename *ren1,
- const char *branch1,
- struct rename *ren2,
- const char *branch2)
+ struct rename_conflict_info *ci)
{
- /* Two files were renamed to the same thing. */
- char *new_path1 = unique_path(o, ren1->pair->two->path, branch1);
- char *new_path2 = unique_path(o, ren2->pair->two->path, branch2);
- output(o, 1, "Renaming %s to %s and %s to %s instead",
- ren1->pair->one->path, new_path1,
- ren2->pair->one->path, new_path2);
- remove_file(o, 0, ren1->pair->two->path, 0);
- update_file(o, 0, ren1->pair->two->sha1, ren1->pair->two->mode, new_path1);
- update_file(o, 0, ren2->pair->two->sha1, ren2->pair->two->mode, new_path2);
- free(new_path2);
- free(new_path1);
+ /* Two files, a & b, were renamed to the same thing, c. */
+ struct diff_filespec *a = ci->pair1->one;
+ struct diff_filespec *b = ci->pair2->one;
+ struct diff_filespec *c1 = ci->pair1->two;
+ struct diff_filespec *c2 = ci->pair2->two;
+ char *path = c1->path; /* == c2->path */
+ struct merge_file_info mfi_c1;
+ struct merge_file_info mfi_c2;
+
+ output(o, 1, "CONFLICT (rename/rename): "
+ "Rename %s->%s in %s. "
+ "Rename %s->%s in %s",
+ a->path, c1->path, ci->branch1,
+ b->path, c2->path, ci->branch2);
+
+ remove_file(o, 1, a->path, would_lose_untracked(a->path));
+ remove_file(o, 1, b->path, would_lose_untracked(b->path));
+
+ mfi_c1 = merge_file_special_markers(o, a, c1, &ci->ren1_other,
+ o->branch1, c1->path,
+ o->branch2, ci->ren1_other.path);
+ mfi_c2 = merge_file_special_markers(o, b, &ci->ren2_other, c2,
+ o->branch1, ci->ren2_other.path,
+ o->branch2, c2->path);
+
+ if (o->call_depth) {
+ /*
+ * If mfi_c1.clean && mfi_c2.clean, then it might make
+ * sense to do a two-way merge of those results. But, I
+ * think in all cases, it makes sense to have the virtual
+ * merge base just undo the renames; they can be detected
+ * again later for the non-recursive merge.
+ */
+ remove_file(o, 0, path, 0);
+ update_file(o, 0, mfi_c1.sha, mfi_c1.mode, a->path);
+ update_file(o, 0, mfi_c2.sha, mfi_c2.mode, b->path);
+ } else {
+ char *new_path1 = unique_path(o, path, ci->branch1);
+ char *new_path2 = unique_path(o, path, ci->branch2);
+ output(o, 1, "Renaming %s to %s and %s to %s instead",
+ a->path, new_path1, b->path, new_path2);
+ remove_file(o, 0, path, 0);
+ update_file(o, 0, mfi_c1.sha, mfi_c1.mode, new_path1);
+ update_file(o, 0, mfi_c2.sha, mfi_c2.mode, new_path2);
+ free(new_path2);
+ free(new_path1);
+ }
}
static int process_renames(struct merge_options *o,
for (i = 0; i < a_renames->nr; i++) {
sre = a_renames->items[i].util;
string_list_insert(&a_by_dst, sre->pair->two->path)->util
- = sre->dst_entry;
+ = (void *)sre;
}
for (i = 0; i < b_renames->nr; i++) {
sre = b_renames->items[i].util;
string_list_insert(&b_by_dst, sre->pair->two->path)->util
- = sre->dst_entry;
+ = (void *)sre;
}
for (i = 0, j = 0; i < a_renames->nr || j < b_renames->nr;) {
struct rename *ren1 = NULL, *ren2 = NULL;
const char *branch1, *branch2;
const char *ren1_src, *ren1_dst;
+ struct string_list_item *lookup;
if (i >= a_renames->nr) {
ren2 = b_renames->items[j++].util;
ren1 = tmp;
}
- ren1->dst_entry->processed = 1;
- ren1->src_entry->processed = 1;
-
if (ren1->processed)
continue;
ren1->processed = 1;
+ ren1->dst_entry->processed = 1;
+ /* BUG: We should only mark src_entry as processed if we
+ * are not dealing with a rename + add-source case.
+ */
+ ren1->src_entry->processed = 1;
ren1_src = ren1->pair->one->path;
ren1_dst = ren1->pair->two->path;
if (ren2) {
+ /* One file renamed on both sides */
const char *ren2_src = ren2->pair->one->path;
const char *ren2_dst = ren2->pair->two->path;
- /* Renamed in 1 and renamed in 2 */
+ enum rename_type rename_type;
if (strcmp(ren1_src, ren2_src) != 0)
- die("ren1.src != ren2.src");
+ die("ren1_src != ren2_src");
ren2->dst_entry->processed = 1;
ren2->processed = 1;
if (strcmp(ren1_dst, ren2_dst) != 0) {
- setup_rename_df_conflict_info(RENAME_ONE_FILE_TO_TWO,
- ren1->pair,
- ren2->pair,
- branch1,
- branch2,
- ren1->dst_entry,
- ren2->dst_entry);
+ rename_type = RENAME_ONE_FILE_TO_TWO;
+ clean_merge = 0;
} else {
+ rename_type = RENAME_ONE_FILE_TO_ONE;
+ /* BUG: We should only remove ren1_src in
+ * the base stage (think of rename +
+ * add-source cases).
+ */
remove_file(o, 1, ren1_src, 1);
- update_stages_and_entry(ren1_dst,
- ren1->dst_entry,
- ren1->pair->one,
- ren1->pair->two,
- ren2->pair->two,
- 1 /* clear */);
+ update_entry(ren1->dst_entry,
+ ren1->pair->one,
+ ren1->pair->two,
+ ren2->pair->two);
}
+ setup_rename_conflict_info(rename_type,
+ ren1->pair,
+ ren2->pair,
+ branch1,
+ branch2,
+ ren1->dst_entry,
+ ren2->dst_entry,
+ o,
+ NULL,
+ NULL);
+ } else if ((lookup = string_list_lookup(renames2Dst, ren1_dst))) {
+ /* Two different files renamed to the same thing */
+ char *ren2_dst;
+ ren2 = lookup->util;
+ ren2_dst = ren2->pair->two->path;
+ if (strcmp(ren1_dst, ren2_dst) != 0)
+ die("ren1_dst != ren2_dst");
+
+ clean_merge = 0;
+ ren2->processed = 1;
+ /*
+ * BUG: We should only mark src_entry as processed
+ * if we are not dealing with a rename + add-source
+ * case.
+ */
+ ren2->src_entry->processed = 1;
+
+ setup_rename_conflict_info(RENAME_TWO_FILES_TO_ONE,
+ ren1->pair,
+ ren2->pair,
+ branch1,
+ branch2,
+ ren1->dst_entry,
+ ren2->dst_entry,
+ o,
+ ren1->src_entry,
+ ren2->src_entry);
+
} else {
/* Renamed in 1, maybe changed in 2 */
- struct string_list_item *item;
/* we only use sha1 and mode of these */
struct diff_filespec src_other, dst_other;
int try_merge;
int renamed_stage = a_renames == renames1 ? 2 : 3;
int other_stage = a_renames == renames1 ? 3 : 2;
- remove_file(o, 1, ren1_src, o->call_depth || renamed_stage == 2);
+ /* BUG: We should only remove ren1_src in the base
+ * stage and in other_stage (think of rename +
+ * add-source case).
+ */
+ remove_file(o, 1, ren1_src,
+ renamed_stage == 2 || !was_tracked(ren1_src));
hashcpy(src_other.sha1, ren1->src_entry->stages[other_stage].sha);
src_other.mode = ren1->src_entry->stages[other_stage].mode;
try_merge = 0;
if (sha_eq(src_other.sha1, null_sha1)) {
- if (string_list_has_string(&o->current_directory_set, ren1_dst)) {
- ren1->dst_entry->processed = 0;
- setup_rename_df_conflict_info(RENAME_DELETE,
- ren1->pair,
- NULL,
- branch1,
- branch2,
- ren1->dst_entry,
- NULL);
- } else {
- clean_merge = 0;
- conflict_rename_delete(o, ren1->pair, branch1, branch2);
- }
+ setup_rename_conflict_info(RENAME_DELETE,
+ ren1->pair,
+ NULL,
+ branch1,
+ branch2,
+ ren1->dst_entry,
+ NULL,
+ o,
+ NULL,
+ NULL);
} else if ((dst_other.mode == ren1->pair->two->mode) &&
sha_eq(dst_other.sha1, ren1->pair->two->sha1)) {
- /* Added file on the other side
- identical to the file being
- renamed: clean merge */
- update_file(o, 1, ren1->pair->two->sha1, ren1->pair->two->mode, ren1_dst);
+ /*
+ * Added file on the other side identical to
+ * the file being renamed: clean merge.
+ * Also, there is no need to overwrite the
+ * file already in the working copy, so call
+ * update_file_flags() instead of
+ * update_file().
+ */
+ update_file_flags(o,
+ ren1->pair->two->sha1,
+ ren1->pair->two->mode,
+ ren1_dst,
+ 1, /* update_cache */
+ 0 /* update_wd */);
} else if (!sha_eq(dst_other.sha1, null_sha1)) {
- const char *new_path;
clean_merge = 0;
try_merge = 1;
output(o, 1, "CONFLICT (rename/add): Rename %s->%s in %s. "
ren1_dst, branch2);
if (o->call_depth) {
struct merge_file_info mfi;
- struct diff_filespec one, a, b;
-
- one.path = a.path = b.path =
- (char *)ren1_dst;
- hashcpy(one.sha1, null_sha1);
- one.mode = 0;
- hashcpy(a.sha1, ren1->pair->two->sha1);
- a.mode = ren1->pair->two->mode;
- hashcpy(b.sha1, dst_other.sha1);
- b.mode = dst_other.mode;
- mfi = merge_file(o, &one, &a, &b,
- branch1,
- branch2);
+ mfi = merge_file(o, ren1_dst, null_sha1, 0,
+ ren1->pair->two->sha1, ren1->pair->two->mode,
+ dst_other.sha1, dst_other.mode,
+ branch1, branch2);
output(o, 1, "Adding merged %s", ren1_dst);
- update_file(o, 0,
- mfi.sha,
- mfi.mode,
- ren1_dst);
+ update_file(o, 0, mfi.sha, mfi.mode, ren1_dst);
try_merge = 0;
} else {
- new_path = unique_path(o, ren1_dst, branch2);
+ char *new_path = unique_path(o, ren1_dst, branch2);
output(o, 1, "Adding as %s instead", new_path);
update_file(o, 0, dst_other.sha1, dst_other.mode, new_path);
+ free(new_path);
}
- } else if ((item = string_list_lookup(renames2Dst, ren1_dst))) {
- ren2 = item->util;
- clean_merge = 0;
- ren2->processed = 1;
- output(o, 1, "CONFLICT (rename/rename): "
- "Rename %s->%s in %s. "
- "Rename %s->%s in %s",
- ren1_src, ren1_dst, branch1,
- ren2->pair->one->path, ren2->pair->two->path, branch2);
- conflict_rename_rename_2to1(o, ren1, branch1, ren2, branch2);
} else
try_merge = 1;
b = ren1->pair->two;
a = &src_other;
}
- update_stages_and_entry(ren1_dst, ren1->dst_entry, one, a, b, 1);
- if (string_list_has_string(&o->current_directory_set, ren1_dst)) {
- setup_rename_df_conflict_info(RENAME_NORMAL,
- ren1->pair,
- NULL,
- branch1,
- NULL,
- ren1->dst_entry,
- NULL);
- }
+ update_entry(ren1->dst_entry, one, a, b);
+ setup_rename_conflict_info(RENAME_NORMAL,
+ ren1->pair,
+ NULL,
+ branch1,
+ NULL,
+ ren1->dst_entry,
+ NULL,
+ o,
+ NULL,
+ NULL);
}
}
}
return ret;
}
-static void handle_delete_modify(struct merge_options *o,
+static void handle_modify_delete(struct merge_options *o,
const char *path,
- const char *new_path,
+ unsigned char *o_sha, int o_mode,
unsigned char *a_sha, int a_mode,
unsigned char *b_sha, int b_mode)
{
- if (!a_sha) {
- output(o, 1, "CONFLICT (delete/modify): %s deleted in %s "
- "and modified in %s. Version %s of %s left in tree%s%s.",
- path, o->branch1,
- o->branch2, o->branch2, path,
- path == new_path ? "" : " at ",
- path == new_path ? "" : new_path);
- update_file(o, 0, b_sha, b_mode, new_path);
- } else {
- output(o, 1, "CONFLICT (delete/modify): %s deleted in %s "
- "and modified in %s. Version %s of %s left in tree%s%s.",
- path, o->branch2,
- o->branch1, o->branch1, path,
- path == new_path ? "" : " at ",
- path == new_path ? "" : new_path);
- update_file(o, 0, a_sha, a_mode, new_path);
- }
+ handle_change_delete(o,
+ path,
+ o_sha, o_mode,
+ a_sha, a_mode,
+ b_sha, b_mode,
+ "modify", "modified");
}
static int merge_content(struct merge_options *o,
unsigned char *o_sha, int o_mode,
unsigned char *a_sha, int a_mode,
unsigned char *b_sha, int b_mode,
- const char *df_rename_conflict_branch)
+ struct rename_conflict_info *rename_conflict_info)
{
const char *reason = "content";
+ const char *path1 = NULL, *path2 = NULL;
struct merge_file_info mfi;
struct diff_filespec one, a, b;
- struct stat st;
unsigned df_conflict_remains = 0;
if (!o_sha) {
hashcpy(b.sha1, b_sha);
b.mode = b_mode;
- mfi = merge_file(o, &one, &a, &b, o->branch1, o->branch2);
- if (df_rename_conflict_branch &&
- lstat(path, &st) == 0 && S_ISDIR(st.st_mode)) {
- df_conflict_remains = 1;
+ if (rename_conflict_info) {
+ struct diff_filepair *pair1 = rename_conflict_info->pair1;
+
+ path1 = (o->branch1 == rename_conflict_info->branch1) ?
+ pair1->two->path : pair1->one->path;
+ /* If rename_conflict_info->pair2 != NULL, we are in
+ * RENAME_ONE_FILE_TO_ONE case. Otherwise, we have a
+ * normal rename.
+ */
+ path2 = (rename_conflict_info->pair2 ||
+ o->branch2 == rename_conflict_info->branch1) ?
+ pair1->two->path : pair1->one->path;
+
+ if (dir_in_way(path, !o->call_depth))
+ df_conflict_remains = 1;
}
+ mfi = merge_file_special_markers(o, &one, &a, &b,
+ o->branch1, path1,
+ o->branch2, path2);
if (mfi.clean && !df_conflict_remains &&
- sha_eq(mfi.sha, a_sha) && mfi.mode == a.mode)
+ sha_eq(mfi.sha, a_sha) && mfi.mode == a_mode) {
+ int path_renamed_outside_HEAD;
output(o, 3, "Skipped %s (merged same as existing)", path);
- else
+ /*
+ * The content merge resulted in the same file contents we
+ * already had. We can return early if those file contents
+ * are recorded at the correct path (which may not be true
+ * if the merge involves a rename).
+ */
+ path_renamed_outside_HEAD = !path2 || !strcmp(path, path2);
+ if (!path_renamed_outside_HEAD) {
+ add_cacheinfo(mfi.mode, mfi.sha, path,
+ 0, (!o->call_depth), 0);
+ return mfi.clean;
+ }
+ } else
output(o, 2, "Auto-merging %s", path);
if (!mfi.clean) {
reason = "submodule";
output(o, 1, "CONFLICT (%s): Merge conflict in %s",
reason, path);
+ if (rename_conflict_info && !df_conflict_remains)
+ update_stages(path, &one, &a, &b);
}
if (df_conflict_remains) {
- const char *new_path;
- update_file_flags(o, mfi.sha, mfi.mode, path,
- o->call_depth || mfi.clean, 0);
- new_path = unique_path(o, path, df_rename_conflict_branch);
- mfi.clean = 0;
+ char *new_path;
+ if (o->call_depth) {
+ remove_file_from_cache(path);
+ } else {
+ if (!mfi.clean)
+ update_stages(path, &one, &a, &b);
+ else {
+ int file_from_stage2 = was_tracked(path);
+ struct diff_filespec merged;
+ hashcpy(merged.sha1, mfi.sha);
+ merged.mode = mfi.mode;
+
+ update_stages(path, NULL,
+ file_from_stage2 ? &merged : NULL,
+ file_from_stage2 ? NULL : &merged);
+ }
+
+ }
+ new_path = unique_path(o, path, rename_conflict_info->branch1);
output(o, 1, "Adding as %s instead", new_path);
- update_file_flags(o, mfi.sha, mfi.mode, new_path, 0, 1);
+ update_file(o, 0, mfi.sha, mfi.mode, new_path);
+ free(new_path);
+ mfi.clean = 0;
} else {
update_file(o, mfi.clean, mfi.sha, mfi.mode, path);
}
unsigned char *a_sha = stage_sha(entry->stages[2].sha, a_mode);
unsigned char *b_sha = stage_sha(entry->stages[3].sha, b_mode);
- if (entry->rename_df_conflict_info)
- return 1; /* Such cases are handled elsewhere. */
-
- entry->processed = 1;
- if (o_sha && (!a_sha || !b_sha)) {
- /* Case A: Deleted in one */
- if ((!a_sha && !b_sha) ||
- (!b_sha && blob_unchanged(o_sha, a_sha, normalize, path)) ||
- (!a_sha && blob_unchanged(o_sha, b_sha, normalize, path))) {
- /* Deleted in both or deleted in one and
- * unchanged in the other */
- if (a_sha)
- output(o, 2, "Removing %s", path);
- /* do not touch working file if it did not exist */
- remove_file(o, 1, path, !a_sha);
- } else if (string_list_has_string(&o->current_directory_set,
- path)) {
- entry->processed = 0;
- return 1; /* Assume clean until processed */
- } else {
- /* Deleted in one and changed in the other */
- clean_merge = 0;
- handle_delete_modify(o, path, path,
- a_sha, a_mode, b_sha, b_mode);
- }
-
- } else if ((!o_sha && a_sha && !b_sha) ||
- (!o_sha && !a_sha && b_sha)) {
- /* Case B: Added in one. */
- unsigned mode;
- const unsigned char *sha;
-
- if (a_sha) {
- mode = a_mode;
- sha = a_sha;
- } else {
- mode = b_mode;
- sha = b_sha;
- }
- if (string_list_has_string(&o->current_directory_set, path)) {
- /* Handle D->F conflicts after all subfiles */
- entry->processed = 0;
- return 1; /* Assume clean until processed */
- } else {
- output(o, 2, "Adding %s", path);
- update_file(o, 1, sha, mode, path);
- }
- } else if (a_sha && b_sha) {
- /* Case C: Added in both (check for same permissions) and */
- /* case D: Modified in both, but differently. */
- clean_merge = merge_content(o, path,
- o_sha, o_mode, a_sha, a_mode, b_sha, b_mode,
- NULL);
- } else if (!o_sha && !a_sha && !b_sha) {
- /*
- * this entry was deleted altogether. a_mode == 0 means
- * we had that path and want to actively remove it.
- */
- remove_file(o, 1, path, !a_mode);
- } else
- die("Fatal merge failure, shouldn't happen.");
-
- return clean_merge;
-}
-
-/*
- * Per entry merge function for D/F (and/or rename) conflicts. In the
- * cases we can cleanly resolve D/F conflicts, process_entry() can
- * clean out all the files below the directory for us. All D/F
- * conflict cases must be handled here at the end to make sure any
- * directories that can be cleaned out, are.
- *
- * Some rename conflicts may also be handled here that don't necessarily
- * involve D/F conflicts, since the code to handle them is generic enough
- * to handle those rename conflicts with or without D/F conflicts also
- * being involved.
- */
-static int process_df_entry(struct merge_options *o,
- const char *path, struct stage_data *entry)
-{
- int clean_merge = 1;
- unsigned o_mode = entry->stages[1].mode;
- unsigned a_mode = entry->stages[2].mode;
- unsigned b_mode = entry->stages[3].mode;
- unsigned char *o_sha = stage_sha(entry->stages[1].sha, o_mode);
- unsigned char *a_sha = stage_sha(entry->stages[2].sha, a_mode);
- unsigned char *b_sha = stage_sha(entry->stages[3].sha, b_mode);
- struct stat st;
-
entry->processed = 1;
- if (entry->rename_df_conflict_info) {
- struct rename_df_conflict_info *conflict_info = entry->rename_df_conflict_info;
- char *src;
+ if (entry->rename_conflict_info) {
+ struct rename_conflict_info *conflict_info = entry->rename_conflict_info;
switch (conflict_info->rename_type) {
case RENAME_NORMAL:
+ case RENAME_ONE_FILE_TO_ONE:
clean_merge = merge_content(o, path,
o_sha, o_mode, a_sha, a_mode, b_sha, b_mode,
- conflict_info->branch1);
+ conflict_info);
break;
case RENAME_DELETE:
clean_merge = 0;
conflict_info->branch2);
break;
case RENAME_ONE_FILE_TO_TWO:
- src = conflict_info->pair1->one->path;
clean_merge = 0;
- output(o, 1, "CONFLICT (rename/rename): "
- "Rename \"%s\"->\"%s\" in branch \"%s\" "
- "rename \"%s\"->\"%s\" in \"%s\"%s",
- src, conflict_info->pair1->two->path, conflict_info->branch1,
- src, conflict_info->pair2->two->path, conflict_info->branch2,
- o->call_depth ? " (left unresolved)" : "");
- if (o->call_depth) {
- remove_file_from_cache(src);
- update_file(o, 0, conflict_info->pair1->one->sha1,
- conflict_info->pair1->one->mode, src);
- }
- conflict_rename_rename_1to2(o, conflict_info->pair1,
- conflict_info->branch1,
- conflict_info->pair2,
- conflict_info->branch2);
- conflict_info->dst_entry2->processed = 1;
+ conflict_rename_rename_1to2(o, conflict_info);
+ break;
+ case RENAME_TWO_FILES_TO_ONE:
+ clean_merge = 0;
+ conflict_rename_rename_2to1(o, conflict_info);
break;
default:
entry->processed = 0;
break;
}
} else if (o_sha && (!a_sha || !b_sha)) {
- /* Modify/delete; deleted side may have put a directory in the way */
- const char *new_path = path;
- if (lstat(path, &st) == 0 && S_ISDIR(st.st_mode))
- new_path = unique_path(o, path, a_sha ? o->branch1 : o->branch2);
- clean_merge = 0;
- handle_delete_modify(o, path, new_path,
- a_sha, a_mode, b_sha, b_mode);
- } else if (!o_sha && !!a_sha != !!b_sha) {
- /* directory -> (directory, file) */
+ /* Case A: Deleted in one */
+ if ((!a_sha && !b_sha) ||
+ (!b_sha && blob_unchanged(o_sha, a_sha, normalize, path)) ||
+ (!a_sha && blob_unchanged(o_sha, b_sha, normalize, path))) {
+ /* Deleted in both or deleted in one and
+ * unchanged in the other */
+ if (a_sha)
+ output(o, 2, "Removing %s", path);
+ /* do not touch working file if it did not exist */
+ remove_file(o, 1, path, !a_sha);
+ } else {
+ /* Modify/delete; deleted side may have put a directory in the way */
+ clean_merge = 0;
+ handle_modify_delete(o, path, o_sha, o_mode,
+ a_sha, a_mode, b_sha, b_mode);
+ }
+ } else if ((!o_sha && a_sha && !b_sha) ||
+ (!o_sha && !a_sha && b_sha)) {
+ /* Case B: Added in one. */
+ /* [nothing|directory] -> ([nothing|directory], file) */
+
const char *add_branch;
const char *other_branch;
unsigned mode;
sha = b_sha;
conf = "directory/file";
}
- if (lstat(path, &st) == 0 && S_ISDIR(st.st_mode)) {
- const char *new_path = unique_path(o, path, add_branch);
+ if (dir_in_way(path, !o->call_depth)) {
+ char *new_path = unique_path(o, path, add_branch);
clean_merge = 0;
output(o, 1, "CONFLICT (%s): There is a directory with name %s in %s. "
"Adding %s as %s",
conf, path, other_branch, path, new_path);
+ if (o->call_depth)
+ remove_file_from_cache(path);
update_file(o, 0, sha, mode, new_path);
+ if (o->call_depth)
+ remove_file_from_cache(path);
+ free(new_path);
} else {
output(o, 2, "Adding %s", path);
- update_file(o, 1, sha, mode, path);
+ /* do not overwrite file if already present */
+ update_file_flags(o, sha, mode, path, 1, !a_sha);
}
- } else {
- entry->processed = 0;
- return 1; /* not handled; assume clean until processed */
- }
+ } else if (a_sha && b_sha) {
+ /* Case C: Added in both (check for same permissions) and */
+ /* case D: Modified in both, but differently. */
+ clean_merge = merge_content(o, path,
+ o_sha, o_mode, a_sha, a_mode, b_sha, b_mode,
+ NULL);
+ } else if (!o_sha && !a_sha && !b_sha) {
+ /*
+ * this entry was deleted altogether. a_mode == 0 means
+ * we had that path and want to actively remove it.
+ */
+ remove_file(o, 1, path, !a_mode);
+ } else
+ die("Fatal merge failure, shouldn't happen.");
return clean_merge;
}
get_files_dirs(o, merge);
entries = get_unmerged();
- make_room_for_directories_of_df_conflicts(o, entries);
+ record_df_conflict_files(o, entries);
re_head = get_renames(o, head, common, head, merge, entries);
re_merge = get_renames(o, merge, common, head, merge, entries);
clean = process_renames(o, re_head, re_merge);
- for (i = 0; i < entries->nr; i++) {
+ for (i = entries->nr-1; 0 <= i; i--) {
const char *path = entries->items[i].string;
struct stage_data *e = entries->items[i].util;
if (!e->processed
&& !process_entry(o, path, e))
clean = 0;
}
- for (i = 0; i < entries->nr; i++) {
- const char *path = entries->items[i].string;
- struct stage_data *e = entries->items[i].util;
- if (!e->processed
- && !process_df_entry(o, path, e))
- clean = 0;
- }
for (i = 0; i < entries->nr; i++) {
struct stage_data *e = entries->items[i].util;
if (!e->processed)
o->current_file_set.strdup_strings = 1;
memset(&o->current_directory_set, 0, sizeof(struct string_list));
o->current_directory_set.strdup_strings = 1;
+ memset(&o->df_conflict_file_set, 0, sizeof(struct string_list));
+ o->df_conflict_file_set.strdup_strings = 1;
}
int parse_merge_opt(struct merge_options *o, const char *s)
struct strbuf obuf;
struct string_list current_file_set;
struct string_list current_directory_set;
+ struct string_list df_conflict_file_set;
};
/* merge_trees() but with recursive ancestor consolidation */
static inline int bad_ref_char(int ch)
{
- if (((unsigned) ch) <= ' ' ||
+ if (((unsigned) ch) <= ' ' || ch == 0x7f ||
ch == '~' || ch == '^' || ch == ':' || ch == '\\')
return 1;
/* 2.13 Pattern Matching Notation */
argv[argc++] = "--thin";
if (options.dry_run)
argv[argc++] = "--dry-run";
- if (options.verbosity < 0)
- argv[argc++] = "--quiet";
- else if (options.verbosity > 1)
+ if (options.verbosity > 1)
argv[argc++] = "--verbose";
argv[argc++] = url;
for (i = 0; i < nr_spec; i++)
return n;
}
+static int show_path_component_truncated(FILE *out, const char *name, int len)
+{
+ int cnt;
+ for (cnt = 0; cnt < len; cnt++) {
+ int ch = name[cnt];
+ if (!ch || ch == '\n')
+ return -1;
+ fputc(ch, out);
+ }
+ return len;
+}
+
+static int show_path_truncated(FILE *out, const struct name_path *path)
+{
+ int emitted, ours;
+
+ if (!path)
+ return 0;
+ emitted = show_path_truncated(out, path->up);
+ if (emitted < 0)
+ return emitted;
+ if (emitted)
+ fputc('/', out);
+ ours = show_path_component_truncated(out, path->elem, path->elem_len);
+ if (ours < 0)
+ return ours;
+ return ours || emitted;
+}
+
+void show_object_with_name(FILE *out, struct object *obj, const struct name_path *path, const char *component)
+{
+ struct name_path leaf;
+ leaf.up = (struct name_path *)path;
+ leaf.elem = component;
+ leaf.elem_len = strlen(component);
+
+ fprintf(out, "%s ", sha1_to_hex(obj->sha1));
+ show_path_truncated(out, &leaf);
+ fputc('\n', out);
+}
+
void add_object(struct object *obj,
struct object_array *p,
struct name_path *path,
* to filter the result of "A..B" further to the ones that can actually
* reach A.
*/
-static struct commit_list *collect_bottom_commits(struct commit_list *list)
+static struct commit_list *collect_bottom_commits(struct rev_info *revs)
{
- struct commit_list *elem, *bottom = NULL;
- for (elem = list; elem; elem = elem->next)
- if (elem->item->object.flags & UNINTERESTING)
- commit_list_insert(elem->item, &bottom);
+ struct commit_list *bottom = NULL;
+ int i;
+ for (i = 0; i < revs->cmdline.nr; i++) {
+ struct rev_cmdline_entry *elem = &revs->cmdline.rev[i];
+ if ((elem->flags & UNINTERESTING) &&
+ elem->item->type == OBJ_COMMIT)
+ commit_list_insert((struct commit *)elem->item, &bottom);
+ }
return bottom;
}
struct commit_list *bottom = NULL;
if (revs->ancestry_path) {
- bottom = collect_bottom_commits(list);
+ bottom = collect_bottom_commits(revs);
if (!bottom)
die("--ancestry-path given but there are no bottom commits");
}
return 0;
}
+static void add_rev_cmdline(struct rev_info *revs,
+ struct object *item,
+ const char *name,
+ int whence,
+ unsigned flags)
+{
+ struct rev_cmdline_info *info = &revs->cmdline;
+ int nr = info->nr;
+
+ ALLOC_GROW(info->rev, nr + 1, info->alloc);
+ info->rev[nr].item = item;
+ info->rev[nr].name = name;
+ info->rev[nr].whence = whence;
+ info->rev[nr].flags = flags;
+ info->nr++;
+}
+
struct all_refs_cb {
int all_flags;
int warned_bad_reflog;
struct all_refs_cb *cb = cb_data;
struct object *object = get_reference(cb->all_revs, path, sha1,
cb->all_flags);
+ add_rev_cmdline(cb->all_revs, object, path, REV_CMD_REF, cb->all_flags);
add_pending_object(cb->all_revs, object, path);
return 0;
}
struct object *o = parse_object(sha1);
if (o) {
o->flags |= cb->all_flags;
+ /* ??? CMDLINEFLAGS ??? */
add_pending_object(cb->all_revs, o, "");
}
else if (!cb->warned_bad_reflog) {
for_each_reflog(handle_one_reflog, &cb);
}
-static int add_parents_only(struct rev_info *revs, const char *arg, int flags)
+static int add_parents_only(struct rev_info *revs, const char *arg_, int flags)
{
unsigned char sha1[20];
struct object *it;
struct commit *commit;
struct commit_list *parents;
+ const char *arg = arg_;
if (*arg == '^') {
flags ^= UNINTERESTING;
for (parents = commit->parents; parents; parents = parents->next) {
it = &parents->item->object;
it->flags |= flags;
+ add_rev_cmdline(revs, it, arg_, REV_CMD_PARENTS_ONLY, flags);
add_pending_object(revs, it, arg);
}
return 1;
revs->limited = 1;
}
-int handle_revision_arg(const char *arg, struct rev_info *revs,
+int handle_revision_arg(const char *arg_, struct rev_info *revs,
int flags,
int cant_be_filename)
{
struct object *object;
unsigned char sha1[20];
int local_flags;
+ const char *arg = arg_;
dotdot = strstr(arg, "..");
if (dotdot) {
const char *this = arg;
int symmetric = *next == '.';
unsigned int flags_exclude = flags ^ UNINTERESTING;
+ unsigned int a_flags;
*dotdot = 0;
next += symmetric;
add_pending_commit_list(revs, exclude,
flags_exclude);
free_commit_list(exclude);
- a->object.flags |= flags | SYMMETRIC_LEFT;
+ a_flags = flags | SYMMETRIC_LEFT;
} else
- a->object.flags |= flags_exclude;
+ a_flags = flags_exclude;
+ a->object.flags |= a_flags;
b->object.flags |= flags;
+ add_rev_cmdline(revs, &a->object, this,
+ REV_CMD_LEFT, a_flags);
+ add_rev_cmdline(revs, &b->object, next,
+ REV_CMD_RIGHT, flags);
add_pending_object(revs, &a->object, this);
add_pending_object(revs, &b->object, next);
return 0;
if (!cant_be_filename)
verify_non_filename(revs->prefix, arg);
object = get_reference(revs, arg, sha1, flags ^ local_flags);
+ add_rev_cmdline(revs, object, arg_, REV_CMD_REV, flags ^ local_flags);
add_pending_object_with_mode(revs, object, arg, mode);
return 0;
}
revs->tree_objects = 1;
revs->blob_objects = 1;
revs->edge_hint = 1;
+ } else if (!strcmp(arg, "--verify-objects")) {
+ revs->tag_objects = 1;
+ revs->tree_objects = 1;
+ revs->blob_objects = 1;
+ revs->verify_objects = 1;
} else if (!strcmp(arg, "--unpacked")) {
revs->unpacked = 1;
} else if (!prefixcmp(arg, "--unpacked=")) {
struct log_info;
struct string_list;
+struct rev_cmdline_info {
+ unsigned int nr;
+ unsigned int alloc;
+ struct rev_cmdline_entry {
+ struct object *item;
+ const char *name;
+ enum {
+ REV_CMD_REF,
+ REV_CMD_PARENTS_ONLY,
+ REV_CMD_LEFT,
+ REV_CMD_RIGHT,
+ REV_CMD_REV
+ } whence;
+ unsigned flags;
+ } *rev;
+};
+
struct rev_info {
/* Starting list */
struct commit_list *commits;
/* Parents of shown commits */
struct object_array boundary_commits;
+ /* The end-points specified by the end user */
+ struct rev_cmdline_info cmdline;
+
/* Basic information */
const char *prefix;
const char *def;
tag_objects:1,
tree_objects:1,
blob_objects:1,
+ verify_objects:1,
edge_hint:1,
limited:1,
unpacked:1,
char *path_name(const struct name_path *path, const char *name);
+extern void show_object_with_name(FILE *, struct object *, const struct name_path *, const char *);
+
extern void add_object(struct object *obj,
struct object_array *p,
struct name_path *path,
--- /dev/null
+#include "cache.h"
+#include "sequencer.h"
+#include "strbuf.h"
+#include "dir.h"
+
+void remove_sequencer_state(int aggressive)
+{
+ struct strbuf seq_dir = STRBUF_INIT;
+ struct strbuf seq_old_dir = STRBUF_INIT;
+
+ strbuf_addf(&seq_dir, "%s", git_path(SEQ_DIR));
+ strbuf_addf(&seq_old_dir, "%s", git_path(SEQ_OLD_DIR));
+ remove_dir_recursively(&seq_old_dir, 0);
+ rename(git_path(SEQ_DIR), git_path(SEQ_OLD_DIR));
+ if (aggressive)
+ remove_dir_recursively(&seq_old_dir, 0);
+ strbuf_release(&seq_dir);
+ strbuf_release(&seq_old_dir);
+}
--- /dev/null
+#ifndef SEQUENCER_H
+#define SEQUENCER_H
+
+#define SEQ_DIR "sequencer"
+#define SEQ_OLD_DIR "sequencer-old"
+#define SEQ_HEAD_FILE "sequencer/head"
+#define SEQ_TODO_FILE "sequencer/todo"
+#define SEQ_OPTS_FILE "sequencer/opts"
+
+/*
+ * Removes SEQ_OLD_DIR and renames SEQ_DIR to SEQ_OLD_DIR, ignoring
+ * any errors. Intended to be used by 'git reset'.
+ *
+ * With the aggressive flag, it additionally removes SEQ_OLD_DIR,
+ * ignoring any errors. Intended to be used by the sequencer's
+ * '--reset' subcommand.
+ */
+void remove_sequencer_state(int aggressive);
+
+#endif
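
remove_sequencer_state() above first drops any previous backup, then parks the current .git/sequencer directory as .git/sequencer-old, and with the aggressive flag wipes that backup as well, ignoring errors throughout. Below is a minimal standalone sketch of the same rename-then-remove dance using plain POSIX nftw() and rename() instead of Git's strbuf, git_path() and remove_dir_recursively() helpers; the hard-coded paths are for illustration only.

    /* Standalone sketch of the "drop backup, park current run, maybe wipe" dance. */
    #define _XOPEN_SOURCE 500
    #include <stdio.h>
    #include <sys/stat.h>
    #include <ftw.h>

    static int unlink_cb(const char *path, const struct stat *sb,
                         int typeflag, struct FTW *ftwbuf)
    {
        (void)sb; (void)typeflag; (void)ftwbuf;
        return remove(path);      /* post-order walk: files first, then dirs */
    }

    static void remove_tree(const char *dir)
    {
        /* Errors are ignored, just as the calls above ignore theirs. */
        nftw(dir, unlink_cb, 16, FTW_DEPTH | FTW_PHYS);
    }

    static void remove_state(int aggressive)
    {
        remove_tree(".git/sequencer-old");               /* drop the old backup */
        rename(".git/sequencer", ".git/sequencer-old");  /* park the current run */
        if (aggressive)
            remove_tree(".git/sequencer-old");           /* wipe even the backup */
    }

    int main(void)
    {
        remove_state(0);   /* non-aggressive, the way "git reset" would use it */
        return 0;
    }
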
const char *objdir = get_object_directory();
struct alternate_object_database *ent;
struct alternate_object_database *alt;
- /* 43 = 40-byte + 2 '/' + terminating NUL */
- int pfxlen = len;
- int entlen = pfxlen + 43;
- int base_len = -1;
+ int pfxlen, entlen;
+ struct strbuf pathbuf = STRBUF_INIT;
if (!is_absolute_path(entry) && relative_base) {
- /* Relative alt-odb */
- if (base_len < 0)
- base_len = strlen(relative_base) + 1;
- entlen += base_len;
- pfxlen += base_len;
+ strbuf_addstr(&pathbuf, real_path(relative_base));
+ strbuf_addch(&pathbuf, '/');
}
- ent = xmalloc(sizeof(*ent) + entlen);
+ strbuf_add(&pathbuf, entry, len);
- if (!is_absolute_path(entry) && relative_base) {
- memcpy(ent->base, relative_base, base_len - 1);
- ent->base[base_len - 1] = '/';
- memcpy(ent->base + base_len, entry, len);
- }
- else
- memcpy(ent->base, entry, pfxlen);
+ normalize_path_copy(pathbuf.buf, pathbuf.buf);
+
+ pfxlen = strlen(pathbuf.buf);
+
+ /*
+ * A trailing slash after the directory name is appended later by
+ * this function; strip any trailing slashes here to avoid duplicates.
+ */
+ while (pfxlen && pathbuf.buf[pfxlen-1] == '/')
+ pfxlen -= 1;
+
+ entlen = pfxlen + 43; /* '/' + 2 hex + '/' + 38 hex + NUL */
+ ent = xmalloc(sizeof(*ent) + entlen);
+ memcpy(ent->base, pathbuf.buf, pfxlen);
+ strbuf_release(&pathbuf);
ent->name = ent->base + pfxlen + 1;
ent->base[pfxlen + 3] = '/';
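
The rewritten setup above normalizes the alternate's path into a strbuf, strips any trailing slashes, and then sizes ent->base so that the 43 bytes after the directory name can hold the "/xx/" fan-out, the remaining 38 hex digits and the terminating NUL; the fan-out slash at base[pfxlen + 3] is patched in right after. A standalone sketch of the path layout that size is reserved for, built with plain snprintf and a made-up SHA-1:

    /* Standalone sketch: the loose-object path layout the 43 bytes cover. */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const char *base = "/srv/alt-objects";  /* normalized, no trailing slash */
        /* made-up 40-hex object name */
        const char *hex = "0123456789abcdef0123456789abcdef01234567";
        char path[4096];

        /* "<base>" '/' 2-hex '/' 38-hex NUL  ==  strlen(base) + 43 bytes */
        snprintf(path, sizeof(path), "%s/%.2s/%s", base, hex, hex + 2);
        printf("%s\n(needs %zu bytes including the NUL)\n",
               path, strlen(base) + 43);
        return 0;
    }
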
{
if (name && !namelen)
namelen = strlen(name);
- if (!o) {
- unsigned char sha1[20];
- if (get_sha1_1(name, namelen, sha1))
- return NULL;
- o = parse_object(sha1);
- }
while (1) {
if (!o || (!o->parsed && !parse_object(o->sha1)))
return NULL;
{
sb->alloc = sb->len = 0;
sb->buf = strbuf_slopbuf;
- if (hint) {
+ if (hint)
strbuf_grow(sb, hint);
- sb->buf[0] = '\0';
- }
}
void strbuf_release(struct strbuf *sb)
void strbuf_grow(struct strbuf *sb, size_t extra)
{
+ int new_buf = !sb->alloc;
if (unsigned_add_overflows(extra, 1) ||
unsigned_add_overflows(sb->len, extra + 1))
die("you want to use way too much memory");
- if (!sb->alloc)
+ if (new_buf)
sb->buf = NULL;
ALLOC_GROW(sb->buf, sb->len + extra + 1, sb->alloc);
+ if (new_buf)
+ sb->buf[0] = '\0';
}
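
With this change strbuf_grow() NUL-terminates a buffer the moment it is first allocated, which is why strbuf_init() no longer needs its own sb->buf[0] = '\0'. The standalone sketch below demonstrates the guaranteed invariant: after the very first grow, the buffer is already a valid empty C string. It uses a simplified struct with no slopbuf and illustrates the invariant, not Git's strbuf.

    /* Standalone sketch: growing a fresh buffer leaves it a valid empty string. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct buf { char *buf; size_t len, alloc; };

    static void buf_grow(struct buf *b, size_t extra)
    {
        int new_buf = !b->alloc;              /* first real allocation? */
        if (b->len + extra + 1 > b->alloc) {
            b->alloc = b->len + extra + 1;
            b->buf = realloc(b->buf, b->alloc);
        }
        if (new_buf)
            b->buf[0] = '\0';                 /* the invariant this patch adds */
    }

    int main(void)
    {
        struct buf b = { NULL, 0, 0 };

        buf_grow(&b, 64);                     /* like strbuf_init(&sb, 64) */
        printf("already usable as a string: \"%s\" (len %zu)\n",
               b.buf, strlen(b.buf));
        free(b.buf);
        return 0;
    }
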
void strbuf_trim(struct strbuf *sb)
return unsorted_string_list_lookup(list, string) != NULL;
}
+void unsorted_string_list_delete_item(struct string_list *list, int i, int free_util)
+{
+ if (list->strdup_strings)
+ free(list->items[i].string);
+ if (free_util)
+ free(list->items[i].util);
+ list->items[i] = list->items[list->nr-1];
+ list->nr--;
+}
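
unsorted_string_list_delete_item() removes an entry in O(1) by copying the last element into the vacated slot and shrinking the count; that is only legitimate because the list is explicitly unsorted, so the reordering is harmless. A standalone sketch of the same swap-with-last idiom on a plain int array:

    /* Standalone sketch: O(1) deletion from an array whose order does not matter. */
    #include <stdio.h>

    static void delete_unordered(int *a, int *nr, int i)
    {
        a[i] = a[*nr - 1];   /* move the last element into the hole... */
        (*nr)--;             /* ...and forget the now-duplicated last slot */
    }

    int main(void)
    {
        int a[] = { 10, 20, 30, 40 };
        int nr = 4, i;

        delete_unordered(a, &nr, 1);   /* drop 20; 40 takes its place */
        for (i = 0; i < nr; i++)
            printf("%d ", a[i]);       /* prints: 10 40 30 */
        printf("\n");
        return 0;
    }
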
int unsorted_string_list_has_string(struct string_list *list, const char *string);
struct string_list_item *unsorted_string_list_lookup(struct string_list *list,
const char *string);
+void unsorted_string_list_delete_item(struct string_list *list, int i, int free_util);
#endif /* STRING_LIST_H */
#include "diffcore.h"
#include "refs.h"
#include "string-list.h"
+#include "sha1-array.h"
static struct string_list config_name_for_path;
static struct string_list config_fetch_recurse_submodules_for_name;
static struct string_list config_ignore_for_name;
static int config_fetch_recurse_submodules = RECURSE_SUBMODULES_ON_DEMAND;
static struct string_list changed_submodule_paths;
+static int initialized_fetch_ref_tips;
+static struct sha1_array ref_tips_before_fetch;
+static struct sha1_array ref_tips_after_fetch;
+
/*
* The following flag is set if the .gitmodules file is unmerged. We then
* disable recursion for all submodules where .git/config doesn't have a
config_fetch_recurse_submodules = value;
}
+static int has_remote(const char *refname, const unsigned char *sha1, int flags, void *cb_data)
+{
+ return 1;
+}
+
+static int submodule_needs_pushing(const char *path, const unsigned char sha1[20])
+{
+ if (add_submodule_odb(path) || !lookup_commit_reference(sha1))
+ return 0;
+
+ if (for_each_remote_ref_submodule(path, has_remote, NULL) > 0) {
+ struct child_process cp;
+ const char *argv[] = {"rev-list", NULL, "--not", "--remotes", "-n", "1" , NULL};
+ struct strbuf buf = STRBUF_INIT;
+ int needs_pushing = 0;
+
+ argv[1] = sha1_to_hex(sha1);
+ memset(&cp, 0, sizeof(cp));
+ cp.argv = argv;
+ cp.env = local_repo_env;
+ cp.git_cmd = 1;
+ cp.no_stdin = 1;
+ cp.out = -1;
+ cp.dir = path;
+ if (start_command(&cp))
+ die("Could not run 'git rev-list %s --not --remotes -n 1' command in submodule %s",
+ sha1_to_hex(sha1), path);
+ if (strbuf_read(&buf, cp.out, 41))
+ needs_pushing = 1;
+ finish_command(&cp);
+ close(cp.out);
+ strbuf_release(&buf);
+ return needs_pushing;
+ }
+
+ return 0;
+}
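
submodule_needs_pushing() answers "does this submodule commit still need to be pushed?" by running, inside the submodule, `git rev-list <sha1> --not --remotes -n 1` and treating any output at all as "yes": if even one commit is listed, the commit is not reachable from any remote-tracking ref. The standalone sketch below performs the same probe with popen() instead of Git's run-command API; the submodule path and the use of a symbolic name instead of a raw SHA-1 are placeholders for illustration.

    /* Standalone sketch: "is this commit missing from every remote?" via rev-list. */
    #define _POSIX_C_SOURCE 200809L
    #include <stdio.h>

    /* Returns 1 if rev-list prints anything, i.e. the commit is on no remote. */
    static int probe_needs_pushing(const char *submodule_dir, const char *rev)
    {
        char cmd[512], line[64];
        int found = 0;
        FILE *fp;

        snprintf(cmd, sizeof(cmd),
                 "cd %s && git rev-list %s --not --remotes -n 1",
                 submodule_dir, rev);
        fp = popen(cmd, "r");
        if (!fp)
            return 0;
        if (fgets(line, sizeof(line), fp))
            found = 1;    /* at least one commit is unreachable from remotes */
        pclose(fp);
        return found;
    }

    int main(void)
    {
        /* "path/to/submodule" and "HEAD" are placeholders for illustration. */
        printf("needs pushing: %d\n",
               probe_needs_pushing("path/to/submodule", "HEAD"));
        return 0;
    }
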
+
+static void collect_submodules_from_diff(struct diff_queue_struct *q,
+ struct diff_options *options,
+ void *data)
+{
+ int i;
+ int *needs_pushing = data;
+
+ for (i = 0; i < q->nr; i++) {
+ struct diff_filepair *p = q->queue[i];
+ if (!S_ISGITLINK(p->two->mode))
+ continue;
+ if (submodule_needs_pushing(p->two->path, p->two->sha1)) {
+ *needs_pushing = 1;
+ break;
+ }
+ }
+}
+
+static void commit_need_pushing(struct commit *commit, struct commit_list *parent, int *needs_pushing)
+{
+ const unsigned char (*parents)[20];
+ unsigned int i, n;
+ struct rev_info rev;
+
+ n = commit_list_count(parent);
+ parents = xmalloc(n * sizeof(*parents));
+
+ for (i = 0; i < n; i++) {
+ hashcpy((unsigned char *)(parents + i), parent->item->object.sha1);
+ parent = parent->next;
+ }
+
+ init_revisions(&rev, NULL);
+ rev.diffopt.output_format |= DIFF_FORMAT_CALLBACK;
+ rev.diffopt.format_callback = collect_submodules_from_diff;
+ rev.diffopt.format_callback_data = needs_pushing;
+ diff_tree_combined(commit->object.sha1, parents, n, 1, &rev);
+
+ free(parents);
+}
+
+int check_submodule_needs_pushing(unsigned char new_sha1[20], const char *remotes_name)
+{
+ struct rev_info rev;
+ struct commit *commit;
+ const char *argv[] = {NULL, NULL, "--not", "NULL", NULL};
+ int argc = ARRAY_SIZE(argv) - 1;
+ char *sha1_copy;
+ int needs_pushing = 0;
+ struct strbuf remotes_arg = STRBUF_INIT;
+
+ strbuf_addf(&remotes_arg, "--remotes=%s", remotes_name);
+ init_revisions(&rev, NULL);
+ sha1_copy = xstrdup(sha1_to_hex(new_sha1));
+ argv[1] = sha1_copy;
+ argv[3] = remotes_arg.buf;
+ setup_revisions(argc, argv, &rev, NULL);
+ if (prepare_revision_walk(&rev))
+ die("revision walk setup failed");
+
+ while ((commit = get_revision(&rev)) && !needs_pushing)
+ commit_need_pushing(commit, commit->parents, &needs_pushing);
+
+ free(sha1_copy);
+ strbuf_release(&remotes_arg);
+
+ return needs_pushing;
+}
+
static int is_submodule_commit_present(const char *path, unsigned char sha1[20])
{
int is_present = 0;
}
}
+static int add_sha1_to_array(const char *ref, const unsigned char *sha1,
+ int flags, void *data)
+{
+ sha1_array_append(data, sha1);
+ return 0;
+}
+
void check_for_new_submodule_commits(unsigned char new_sha1[20])
+{
+ if (!initialized_fetch_ref_tips) {
+ for_each_ref(add_sha1_to_array, &ref_tips_before_fetch);
+ initialized_fetch_ref_tips = 1;
+ }
+
+ sha1_array_append(&ref_tips_after_fetch, new_sha1);
+}
+
+struct argv_array {
+ const char **argv;
+ unsigned int argc;
+ unsigned int alloc;
+};
+
+static void init_argv(struct argv_array *array)
+{
+ array->argv = NULL;
+ array->argc = 0;
+ array->alloc = 0;
+}
+
+static void push_argv(struct argv_array *array, const char *value)
+{
+ ALLOC_GROW(array->argv, array->argc + 2, array->alloc);
+ array->argv[array->argc++] = xstrdup(value);
+ array->argv[array->argc] = NULL;
+}
+
+static void clear_argv(struct argv_array *array)
+{
+ int i;
+ for (i = 0; i < array->argc; i++)
+ free((char *)array->argv[i]);
+ free(array->argv);
+ init_argv(array);
+}
+
+static void add_sha1_to_argv(const unsigned char sha1[20], void *data)
+{
+ push_argv(data, sha1_to_hex(sha1));
+}
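
The small argv_array added above keeps argv[argc] == NULL at all times, so the vector can be handed to setup_revisions() (or any execv-style consumer) at any point; push_argv() therefore always grows by "one new entry plus the terminator", and calculate_changed_submodule_paths() below uses it to assemble "-- <new tips> --not <old tips>" from the two sha1 arrays. A standalone sketch of the same always-NULL-terminated growth pattern, using realloc() instead of ALLOC_GROW; the tip values are made up:

    /* Standalone sketch: a grow-on-demand, always NULL-terminated argv vector. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct argv_vec { const char **argv; int argc, alloc; };

    static void push(struct argv_vec *v, const char *value)
    {
        if (v->argc + 2 > v->alloc) {                /* room for entry + NULL */
            v->alloc = v->alloc ? 2 * v->alloc : 8;
            v->argv = realloc(v->argv, v->alloc * sizeof(*v->argv));
        }
        v->argv[v->argc++] = strdup(value);
        v->argv[v->argc] = NULL;                     /* keep it exec-ready */
    }

    int main(void)
    {
        struct argv_vec v = { NULL, 0, 0 };
        int i;

        push(&v, "--");         /* stands in for the program-name slot */
        push(&v, "1111111");    /* made-up "new tip" */
        push(&v, "--not");
        push(&v, "2222222");    /* made-up "old tip" */
        for (i = 0; v.argv[i]; i++)
            printf("argv[%d] = %s\n", i, v.argv[i]);
        return 0;
    }
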
+
+static void calculate_changed_submodule_paths(void)
{
struct rev_info rev;
struct commit *commit;
- const char *argv[] = {NULL, NULL, "--not", "--all", NULL};
- int argc = ARRAY_SIZE(argv) - 1;
+ struct argv_array argv;
+
+ /* No need to check if there are no submodules configured */
+ if (!config_name_for_path.nr)
+ return;
init_revisions(&rev, NULL);
- argv[1] = xstrdup(sha1_to_hex(new_sha1));
- setup_revisions(argc, argv, &rev, NULL);
+ init_argv(&argv);
+ push_argv(&argv, "--"); /* argv[0] program name */
+ sha1_array_for_each_unique(&ref_tips_after_fetch,
+ add_sha1_to_argv, &argv);
+ push_argv(&argv, "--not");
+ sha1_array_for_each_unique(&ref_tips_before_fetch,
+ add_sha1_to_argv, &argv);
+ setup_revisions(argv.argc, argv.argv, &rev, NULL);
if (prepare_revision_walk(&rev))
die("revision walk setup failed");
parent = parent->next;
}
}
- free((char *)argv[1]);
+
+ clear_argv(&argv);
+ sha1_array_clear(&ref_tips_before_fetch);
+ sha1_array_clear(&ref_tips_after_fetch);
+ initialized_fetch_ref_tips = 0;
}
int fetch_populated_submodules(int num_options, const char **options,
cp.git_cmd = 1;
cp.no_stdin = 1;
+ calculate_changed_submodule_paths();
+
for (i = 0; i < active_nr; i++) {
struct strbuf submodule_path = STRBUF_INIT;
struct strbuf submodule_git_dir = STRBUF_INIT;
unsigned is_submodule_modified(const char *path, int ignore_untracked);
int merge_submodule(unsigned char result[20], const char *path, const unsigned char base[20],
const unsigned char a[20], const unsigned char b[20]);
+int check_submodule_needs_pushing(unsigned char new_sha1[20], const char *remotes_name);
#endif
--- /dev/null
+#!/bin/sh
+
+gpg_version=`gpg --version 2>&1`
+if test $? = 127; then
+ say "You do not seem to have gpg installed"
+else
+ # As said here: http://www.gnupg.org/documentation/faqs.html#q6.19
+ # the gpg version 1.0.6 didn't parse trust packets correctly, so for
+ # that version, creation of signed tags using the generated key fails.
+ case "$gpg_version" in
+ 'gpg (GnuPG) 1.0.6'*)
+ say "Your version of gpg (1.0.6) is too buggy for testing"
+ ;;
+ *)
+ # key generation info: gpg --homedir t/lib-gpg --gen-key
+ # Type DSA and Elgamal, size 2048 bits, no expiration date.
+ # Name and email: C O Mitter <committer@example.com>
+ # No password given, to enable non-interactive operation.
+ cp -R "$TEST_DIRECTORY"/lib-gpg ./gpghome
+ chmod 0700 gpghome
+ GNUPGHOME="$(pwd)/gpghome"
+ export GNUPGHOME
+ test_set_prereq GPG
+ ;;
+ esac
+fi
+
+sanitize_pgp() {
+ perl -ne '
+ /^-----END PGP/ and $in_pgp = 0;
+ print unless $in_pgp;
+ /^-----BEGIN PGP/ and $in_pgp = 1;
+ '
+}
check_parse 2008-02-14 bad
check_parse '2008-02-14 20:30:45' '2008-02-14 20:30:45 +0000'
check_parse '2008-02-14 20:30:45 -0500' '2008-02-14 20:30:45 -0500'
+check_parse '2008-02-14 20:30:45 -0015' '2008-02-14 20:30:45 -0015'
+check_parse '2008-02-14 20:30:45 -5' '2008-02-14 20:30:45 +0000'
+check_parse '2008-02-14 20:30:45 -5:' '2008-02-14 20:30:45 +0000'
+check_parse '2008-02-14 20:30:45 -05' '2008-02-14 20:30:45 -0500'
+check_parse '2008-02-14 20:30:45 -:30' '2008-02-14 20:30:45 +0000'
+check_parse '2008-02-14 20:30:45 -05:00' '2008-02-14 20:30:45 -0500'
check_parse '2008-02-14 20:30:45' '2008-02-14 20:30:45 -0500' EST5
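
The new check_parse lines pin down which shorthand UTC offsets the date parser accepts: a bare two-digit hour ("-05") and four-digit forms with or without a colon ("-0015", "-05:00") are taken as real zones, while single-digit or mangled forms ("-5", "-5:", "-:30") fall back to +0000. The standalone validator below reproduces exactly that accept/reject split; it is an illustration of the behaviour the tests document, not Git's date.c parser.

    /* Standalone sketch: the accept/reject split these check_parse lines document. */
    #include <stdio.h>
    #include <ctype.h>

    /* Returns 1 and sets *minutes on success; 0 means "fall back to +0000". */
    static int parse_tz(const char *s, int *minutes)
    {
        int sign = 1, h, m = 0;

        if (*s == '+' || *s == '-')
            sign = (*s++ == '-') ? -1 : 1;
        if (!isdigit((unsigned char)s[0]) || !isdigit((unsigned char)s[1]))
            return 0;                         /* "-5", "-5:", "-:30": no two-digit hour */
        h = (s[0] - '0') * 10 + (s[1] - '0');
        s += 2;
        if (*s == ':')
            s++;                              /* allow "-05:00" */
        if (isdigit((unsigned char)s[0]) && isdigit((unsigned char)s[1]) && !s[2])
            m = (s[0] - '0') * 10 + (s[1] - '0');   /* "-0015", "-05:00" */
        else if (*s)
            return 0;                         /* trailing garbage */
        *minutes = sign * (h * 60 + m);
        return 1;
    }

    int main(void)
    {
        const char *tests[] = { "-0015", "-5", "-5:", "-05", "-:30", "-05:00" };
        int i, off;

        for (i = 0; i < (int)(sizeof(tests) / sizeof(*tests)); i++)
            printf("%-7s -> %s\n", tests[i],
                   parse_tz(tests[i], &off) ? "accepted" : "falls back to +0000");
        return 0;
    }
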
check_approxidate() {
valid_ref 'foo/bar/baz'
valid_ref 'refs///heads/foo'
invalid_ref 'heads/foo/'
+valid_ref '/heads/foo'
+valid_ref '///heads/foo'
+invalid_ref '/foo'
invalid_ref './foo'
invalid_ref '.refs/foo'
invalid_ref 'heads/foo..bar'
valid_ref 'heads/foo@bar'
invalid_ref 'heads/v@{ation'
invalid_ref 'heads/foo\bar'
+invalid_ref "$(printf 'heads/foo\t')"
+invalid_ref "$(printf 'heads/foo\177')"
+valid_ref "$(printf 'heads/fu\303\237')"
test_expect_success "check-ref-format --branch @{-1}" '
T=$(git write-tree) &&
valid_ref_normalized 'heads/foo' 'heads/foo'
valid_ref_normalized 'refs///heads/foo' 'refs/heads/foo'
+valid_ref_normalized '/heads/foo' 'heads/foo'
+valid_ref_normalized '///heads/foo' 'heads/foo'
invalid_ref_normalized 'foo'
+invalid_ref_normalized '/foo'
invalid_ref_normalized 'heads/foo/../bar'
invalid_ref_normalized 'heads/./foo'
invalid_ref_normalized 'heads\foo'
for f in ../y*
do
echo "error: pathspec $sq$f$sq did not match any file(s) known to git."
- done >expect &&
- echo "Did you forget to ${sq}git add${sq}?" >>expect &&
- ls ../x* >>expect &&
- test_must_fail git ls-files -c --error-unmatch ../[xy]* >actual 2>&1 &&
- test_cmp expect actual
+ done >expect.err &&
+ echo "Did you forget to ${sq}git add${sq}?" >>expect.err &&
+ ls ../x* >expect.out &&
+ test_must_fail git ls-files -c --error-unmatch ../[xy]* >actual.out 2>actual.err &&
+ test_cmp expect.out actual.out &&
+ test_cmp expect.err actual.err
)
'
for f in ../x*
do
echo "error: pathspec $sq$f$sq did not match any file(s) known to git."
- done >expect &&
- echo "Did you forget to ${sq}git add${sq}?" >>expect &&
- ls ../y* >>expect &&
- test_must_fail git ls-files -o --error-unmatch ../[xy]* >actual 2>&1 &&
- test_cmp expect actual
+ done >expect.err &&
+ echo "Did you forget to ${sq}git add${sq}?" >>expect.err &&
+ ls ../y* >expect.out &&
+ test_must_fail git ls-files -o --error-unmatch ../[xy]* >actual.out 2>actual.err &&
+ test_cmp expect.out actual.out &&
+ test_cmp expect.err actual.err
)
'
ln -s e a &&
git add a e &&
test_tick &&
- git commit -m "rename a->e, symlink a->e"
+ git commit -m "rename a->e, symlink a->e" &&
+ oln=`printf e | git hash-object --stdin`
fi
'
if test_have_prereq SYMLINKS
then
- test_expect_success 'merge-recursive rename vs. rename/symlink' '
+ test_expect_failure 'merge-recursive rename vs. rename/symlink' '
git checkout -f rename &&
git merge rename-ln &&
( git ls-tree -r HEAD ; git ls-files -s ) >actual &&
(
+ echo "120000 blob $oln a"
echo "100644 blob $o0 b"
echo "100644 blob $o0 c"
echo "100644 blob $o0 d/e"
echo "100644 blob $o0 e"
+ echo "120000 $oln 0 a"
echo "100644 $o0 0 b"
echo "100644 $o0 0 c"
echo "100644 $o0 0 d/e"
test_must_fail git branch -d my10
'
+test_expect_success 'use set-upstream on the current branch' '
+ git checkout master &&
+ git --bare init myupstream.git &&
+ git push myupstream.git master:refs/heads/frotz &&
+ git remote add origin myupstream.git &&
+ git fetch &&
+ git branch --set-upstream master origin/frotz &&
+
+ test "z$(git config branch.master.remote)" = "zorigin" &&
+ test "z$(git config branch.master.merge)" = "zrefs/heads/frotz"
+
+'
+
test_done
git rebase --abort
'
+test_expect_success 'clean error after failed "exec"' '
+ test_tick &&
+ test_when_finished "git rebase --abort || :" &&
+ (
+ FAKE_LINES="1 exec_false" &&
+ export FAKE_LINES &&
+ test_must_fail git rebase -i HEAD^
+ ) &&
+ echo "edited again" > file7 &&
+ git add file7 &&
+ test_must_fail git rebase --continue 2>error &&
+ grep "You have staged changes in your working tree." error
+'
+
test_expect_success 'rebase a detached HEAD' '
grandparent=$(git rev-parse HEAD~2) &&
git checkout $(git rev-parse HEAD) &&
--- /dev/null
+#!/bin/sh
+
+test_description='Test cherry-pick continuation features
+
+ + anotherpick: rewrites foo to d
+ + picked: rewrites foo to c
+ + unrelatedpick: rewrites unrelated to reallyunrelated
+ + base: rewrites foo to b
+ + initial: writes foo as a, unrelated as unrelated
+
+'
+
+. ./test-lib.sh
+
+pristine_detach () {
+ git cherry-pick --reset &&
+ git checkout -f "$1^0" &&
+ git read-tree -u --reset HEAD &&
+ git clean -d -f -f -q -x
+}
+
+test_expect_success setup '
+ echo unrelated >unrelated &&
+ git add unrelated &&
+ test_commit initial foo a &&
+ test_commit base foo b &&
+ test_commit unrelatedpick unrelated reallyunrelated &&
+ test_commit picked foo c &&
+ test_commit anotherpick foo d &&
+ git config advice.detachedhead false
+
+'
+
+test_expect_success 'cherry-pick persists data on failure' '
+ pristine_detach initial &&
+ test_must_fail git cherry-pick -s base..anotherpick &&
+ test_path_is_dir .git/sequencer &&
+ test_path_is_file .git/sequencer/head &&
+ test_path_is_file .git/sequencer/todo &&
+ test_path_is_file .git/sequencer/opts
+'
+
+test_expect_success 'cherry-pick persists opts correctly' '
+ pristine_detach initial &&
+ test_must_fail git cherry-pick -s -m 1 --strategy=recursive -X patience -X ours base..anotherpick &&
+ test_path_is_dir .git/sequencer &&
+ test_path_is_file .git/sequencer/head &&
+ test_path_is_file .git/sequencer/todo &&
+ test_path_is_file .git/sequencer/opts &&
+ echo "true" >expect &&
+ git config --file=.git/sequencer/opts --get-all options.signoff >actual &&
+ test_cmp expect actual &&
+ echo "1" >expect &&
+ git config --file=.git/sequencer/opts --get-all options.mainline >actual &&
+ test_cmp expect actual &&
+ echo "recursive" >expect &&
+ git config --file=.git/sequencer/opts --get-all options.strategy >actual &&
+ test_cmp expect actual &&
+ cat >expect <<-\EOF &&
+ patience
+ ours
+ EOF
+ git config --file=.git/sequencer/opts --get-all options.strategy-option >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'cherry-pick cleans up sequencer state upon success' '
+ pristine_detach initial &&
+ git cherry-pick initial..picked &&
+ test_path_is_missing .git/sequencer
+'
+
+test_expect_success '--reset does not complain when no cherry-pick is in progress' '
+ pristine_detach initial &&
+ git cherry-pick --reset
+'
+
+test_expect_success '--reset cleans up sequencer state' '
+ pristine_detach initial &&
+ test_must_fail git cherry-pick base..picked &&
+ git cherry-pick --reset &&
+ test_path_is_missing .git/sequencer
+'
+
+test_expect_success 'cherry-pick cleans up sequencer state when one commit is left' '
+ pristine_detach initial &&
+ test_must_fail git cherry-pick base..picked &&
+ test_path_is_missing .git/sequencer &&
+ echo "resolved" >foo &&
+ git add foo &&
+ git commit &&
+ {
+ git rev-list HEAD |
+ git diff-tree --root --stdin |
+ sed "s/$_x40/OBJID/g"
+ } >actual &&
+ cat >expect <<-\EOF &&
+ OBJID
+ :100644 100644 OBJID OBJID M foo
+ OBJID
+ :100644 100644 OBJID OBJID M unrelated
+ OBJID
+ :000000 100644 OBJID OBJID A foo
+ :000000 100644 OBJID OBJID A unrelated
+ EOF
+ test_cmp expect actual
+'
+
+test_expect_success 'cherry-pick does not implicitly stomp an existing operation' '
+ pristine_detach initial &&
+ test_must_fail git cherry-pick base..anotherpick &&
+ test-chmtime -v +0 .git/sequencer >expect &&
+ test_must_fail git cherry-pick unrelatedpick &&
+ test-chmtime -v +0 .git/sequencer >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success '--continue complains when no cherry-pick is in progress' '
+ pristine_detach initial &&
+ test_must_fail git cherry-pick --continue
+'
+
+test_expect_success '--continue complains when there are unresolved conflicts' '
+ pristine_detach initial &&
+ test_must_fail git cherry-pick base..anotherpick &&
+ test_must_fail git cherry-pick --continue
+'
+
+test_expect_success '--continue continues after conflicts are resolved' '
+ pristine_detach initial &&
+ test_must_fail git cherry-pick base..anotherpick &&
+ echo "c" >foo &&
+ git add foo &&
+ git commit &&
+ git cherry-pick --continue &&
+ test_path_is_missing .git/sequencer &&
+ {
+ git rev-list HEAD |
+ git diff-tree --root --stdin |
+ sed "s/$_x40/OBJID/g"
+ } >actual &&
+ cat >expect <<-\EOF &&
+ OBJID
+ :100644 100644 OBJID OBJID M foo
+ OBJID
+ :100644 100644 OBJID OBJID M foo
+ OBJID
+ :100644 100644 OBJID OBJID M unrelated
+ OBJID
+ :000000 100644 OBJID OBJID A foo
+ :000000 100644 OBJID OBJID A unrelated
+ EOF
+ test_cmp expect actual
+'
+
+test_expect_success '--continue respects opts' '
+ pristine_detach initial &&
+ test_must_fail git cherry-pick -x base..anotherpick &&
+ echo "c" >foo &&
+ git add foo &&
+ git commit &&
+ git cherry-pick --continue &&
+ test_path_is_missing .git/sequencer &&
+ git cat-file commit HEAD >anotherpick_msg &&
+ git cat-file commit HEAD~1 >picked_msg &&
+ git cat-file commit HEAD~2 >unrelatedpick_msg &&
+ git cat-file commit HEAD~3 >initial_msg &&
+ test_must_fail grep "cherry picked from" initial_msg &&
+ grep "cherry picked from" unrelatedpick_msg &&
+ grep "cherry picked from" picked_msg &&
+ grep "cherry picked from" anotherpick_msg
+'
+
+test_expect_success '--signoff is not automatically propagated to resolved conflict' '
+ pristine_detach initial &&
+ test_must_fail git cherry-pick --signoff base..anotherpick &&
+ echo "c" >foo &&
+ git add foo &&
+ git commit &&
+ git cherry-pick --continue &&
+ test_path_is_missing .git/sequencer &&
+ git cat-file commit HEAD >anotherpick_msg &&
+ git cat-file commit HEAD~1 >picked_msg &&
+ git cat-file commit HEAD~2 >unrelatedpick_msg &&
+ git cat-file commit HEAD~3 >initial_msg &&
+ test_must_fail grep "Signed-off-by:" initial_msg &&
+ grep "Signed-off-by:" unrelatedpick_msg &&
+ test_must_fail grep "Signed-off-by:" picked_msg &&
+ grep "Signed-off-by:" anotherpick_msg
+'
+
+test_expect_success 'malformed instruction sheet 1' '
+ pristine_detach initial &&
+ test_must_fail git cherry-pick base..anotherpick &&
+ echo "resolved" >foo &&
+ git add foo &&
+ git commit &&
+ sed "s/pick /pick/" .git/sequencer/todo >new_sheet &&
+ cp new_sheet .git/sequencer/todo &&
+ test_must_fail git cherry-pick --continue
+'
+
+test_expect_success 'malformed instruction sheet 2' '
+ pristine_detach initial &&
+ test_must_fail git cherry-pick base..anotherpick &&
+ echo "resolved" >foo &&
+ git add foo &&
+ git commit &&
+ sed "s/pick/revert/" .git/sequencer/todo >new_sheet &&
+ cp new_sheet .git/sequencer/todo &&
+ test_must_fail git cherry-pick --continue
+'
+
+test_done
echo bar6 > file2 &&
git add file2 &&
git stash &&
- ! "git rev-parse --quiet --verify does-not-exist" &&
+ test_must_fail git rev-parse --quiet --verify does-not-exist &&
test_must_fail git stash drop does-not-exist &&
test_must_fail git stash drop does-not-exist@{0} &&
test_must_fail git stash pop does-not-exist &&
echo 3 > file &&
test_tick &&
echo 1 > file2 &&
+ mkdir untracked &&
+ echo untracked >untracked/untracked &&
git stash --include-untracked &&
git diff-files --quiet &&
git diff-index --cached --quiet HEAD
'
cat > expect <<EOF
+?? actual
?? expect
-?? output
EOF
test_expect_success 'stash save --include-untracked cleaned the untracked files' '
- git status --porcelain > output
- test_cmp output expect
+ git status --porcelain >actual &&
+ test_cmp expect actual
'
cat > expect.diff <<EOF
+++ b/file2
@@ -0,0 +1 @@
+1
+diff --git a/untracked/untracked b/untracked/untracked
+new file mode 100644
+index 0000000..5a72eb2
+--- /dev/null
++++ b/untracked/untracked
+@@ -0,0 +1 @@
++untracked
EOF
cat > expect.lstree <<EOF
file2
+untracked
EOF
test_expect_success 'stash save --include-untracked stashed the untracked files' '
test "!" -f file2 &&
- git diff HEAD..stash^3 -- file2 > output &&
- test_cmp output expect.diff &&
- git ls-tree --name-only stash^3: > output &&
- test_cmp output expect.lstree
+ test ! -e untracked &&
+ git diff HEAD stash^3 -- file2 untracked >actual &&
+ test_cmp expect.diff actual &&
+ git ls-tree --name-only stash^3: >actual &&
+ test_cmp expect.lstree actual
'
test_expect_success 'stash save --patch --include-untracked fails' '
test_must_fail git stash --patch --include-untracked
cat > expect <<EOF
M file
+?? actual
?? expect
?? file2
-?? output
+?? untracked/
EOF
test_expect_success 'stash pop after save --include-untracked leaves files untracked again' '
git stash pop &&
- git status --porcelain > output
- test_cmp output expect
+ git status --porcelain >actual &&
+ test_cmp expect actual &&
+ test "1" = "`cat file2`" &&
+ test untracked = "`cat untracked/untracked`"
'
-git clean --force --quiet
+git clean --force --quiet -d
test_expect_success 'stash save -u dirty index' '
echo 4 > file3 &&
test_expect_success 'stash save --include-untracked dirty index got stashed' '
git stash pop --index &&
- git diff --cached > output &&
- test_cmp output expect
+ git diff --cached >actual &&
+ test_cmp expect actual
'
git reset > /dev/null
cat > .gitignore <<EOF
.gitignore
ignored
+ignored.d/
EOF
test_expect_success 'stash save --include-untracked respects .gitignore' '
echo ignored > ignored &&
+ mkdir ignored.d &&
+ echo ignored >ignored.d/untracked &&
git stash -u &&
test -s ignored &&
+ test -s ignored.d/untracked &&
test -s .gitignore
'
test_expect_success 'stash save -u can stash with only untracked files different' '
echo 4 > file4 &&
- git stash -u
+ git stash -u &&
test "!" -f file4
'
test_expect_success 'stash save --all does not respect .gitignore' '
git stash -a &&
test "!" -f ignored &&
+ test "!" -e ignored.d &&
test "!" -f .gitignore
'
test_expect_success 'stash save --all is stash poppable' '
git stash pop &&
test -s ignored &&
+ test -s ignored.d/untracked &&
test -s .gitignore
'
grep "^To: R. E. Cipient <rcipient@example.com>\$" patch9
'
+# check_patch <patch>: Verify that <patch> looks like a half-sane
+# patch email to avoid a false positive with !grep
+check_patch () {
+ grep -e "^From:" "$1" &&
+ grep -e "^Date:" "$1" &&
+ grep -e "^Subject:" "$1"
+}
+
test_expect_success '--no-to overrides config.to' '
git config --replace-all format.to \
"R. E. Cipient <rcipient@example.com>" &&
git format-patch --no-to --stdout master..side |
sed -e "/^\$/q" >patch10 &&
+ check_patch patch10 &&
! grep "^To: R. E. Cipient <rcipient@example.com>\$" patch10
'
git format-patch --no-to --to="Someone Else <else@out.there>" \
--stdout master..side |
sed -e "/^\$/q" >patch11 &&
+ check_patch patch11 &&
! grep "^To: Someone <someone@out.there>\$" patch11 &&
grep "^To: Someone Else <else@out.there>\$" patch11
'
"C. E. Cipient <rcipient@example.com>" &&
git format-patch --no-cc --stdout master..side |
sed -e "/^\$/q" >patch12 &&
+ check_patch patch12 &&
! grep "^Cc: C. E. Cipient <rcipient@example.com>\$" patch12
'
-test_expect_success '--no-add-headers overrides config.headers' '
+test_expect_success '--no-add-header overrides config.headers' '
git config --replace-all format.headers \
"Header1: B. E. Cipient <rcipient@example.com>" &&
- git format-patch --no-add-headers --stdout master..side |
+ git format-patch --no-add-header --stdout master..side |
sed -e "/^\$/q" >patch13 &&
+ check_patch patch13 &&
! grep "^Header1: B. E. Cipient <rcipient@example.com>\$" patch13
'
'
test_expect_success 'thread via config' '
- git config format.thread true &&
+ test_config format.thread true &&
check_threading expect.thread master
'
test_expect_success 'thread deep via config' '
- git config format.thread deep &&
+ test_config format.thread deep &&
check_threading expect.deep master
'
test_expect_success 'thread config + override' '
- git config format.thread deep &&
+ test_config format.thread deep &&
check_threading expect.thread --thread master
'
test_expect_success 'thread config + --no-thread' '
- git config format.thread deep &&
+ test_config format.thread deep &&
check_threading expect.no-threading --no-thread master
'
git mv file foo &&
git commit -m foo &&
git format-patch --cover-letter -1 &&
+ check_patch 0000-cover-letter.patch &&
! grep "file => foo .* 0 *\$" 0000-cover-letter.patch &&
git format-patch --cover-letter -1 -M &&
grep "file => foo .* 0 *\$" 0000-cover-letter.patch
git config format.signature "config sig" &&
git format-patch --stdout --signature="my sig" --no-signature \
-1 >output &&
+ check_patch output &&
! grep "config sig" output &&
! grep "my sig" output &&
! grep "^-- \$" output
test_expect_success 'format.signature="" suppresses signatures' '
git config format.signature "" &&
git format-patch --stdout -1 >output &&
+ check_patch output &&
! grep "^-- \$" output
'
test_expect_success 'format-patch --no-signature suppresses signatures' '
git config --unset-all format.signature &&
git format-patch --stdout --no-signature -1 >output &&
+ check_patch output &&
! grep "^-- \$" output
'
test_expect_success 'format-patch --signature="" suppresses signatures' '
- git format-patch --signature="" -1 >output &&
+ git format-patch --stdout --signature="" -1 >output &&
+ check_patch output &&
! grep "^-- \$" output
'
test_cmp expect actual
'
+test_expect_success 'format patch ignores color.ui' '
+ test_unconfig color.ui &&
+ git format-patch --stdout -1 >expect &&
+ test_config color.ui always &&
+ git format-patch --stdout -1 >actual &&
+ test_cmp expect actual
+'
+
test_done
--- /dev/null
+#!/bin/sh
+
+test_description='fetch/receive strict mode'
+. ./test-lib.sh
+
+test_expect_success setup '
+ echo hello >greetings &&
+ git add greetings &&
+ git commit -m greetings &&
+
+ S=$(git rev-parse :greetings | sed -e "s|^..|&/|") &&
+ X=$(echo bye | git hash-object -w --stdin | sed -e "s|^..|&/|") &&
+ mv -f .git/objects/$X .git/objects/$S &&
+
+ test_must_fail git fsck
+'
+
+test_expect_success 'fetch without strict' '
+ rm -rf dst &&
+ git init dst &&
+ (
+ cd dst &&
+ git config fetch.fsckobjects false &&
+ git config transfer.fsckobjects false &&
+ test_must_fail git fetch ../.git master
+ )
+'
+
+test_expect_success 'fetch with !fetch.fsckobjects' '
+ rm -rf dst &&
+ git init dst &&
+ (
+ cd dst &&
+ git config fetch.fsckobjects false &&
+ git config transfer.fsckobjects true &&
+ test_must_fail git fetch ../.git master
+ )
+'
+
+test_expect_success 'fetch with fetch.fsckobjects' '
+ rm -rf dst &&
+ git init dst &&
+ (
+ cd dst &&
+ git config fetch.fsckobjects true &&
+ git config transfer.fsckobjects false &&
+ test_must_fail git fetch ../.git master
+ )
+'
+
+test_expect_success 'fetch with transfer.fsckobjects' '
+ rm -rf dst &&
+ git init dst &&
+ (
+ cd dst &&
+ git config transfer.fsckobjects true &&
+ test_must_fail git fetch ../.git master
+ )
+'
+
+test_expect_success 'push without strict' '
+ rm -rf dst &&
+ git init dst &&
+ (
+ cd dst &&
+ git config fetch.fsckobjects false &&
+ git config transfer.fsckobjects false
+ ) &&
+ git push dst master:refs/heads/test
+'
+
+test_expect_success 'push with !receive.fsckobjects' '
+ rm -rf dst &&
+ git init dst &&
+ (
+ cd dst &&
+ git config receive.fsckobjects false &&
+ git config transfer.fsckobjects true
+ ) &&
+ git push dst master:refs/heads/test
+'
+
+test_expect_success 'push with receive.fsckobjects' '
+ rm -rf dst &&
+ git init dst &&
+ (
+ cd dst &&
+ git config receive.fsckobjects true &&
+ git config transfer.fsckobjects false
+ ) &&
+ test_must_fail git push dst master:refs/heads/test
+'
+
+test_expect_success 'push with transfer.fsckobjects' '
+ rm -rf dst &&
+ git init dst &&
+ (
+ cd dst &&
+ git config transfer.fsckobjects true
+ ) &&
+ test_must_fail git push dst master:refs/heads/test
+'
+
+test_done
)
'
+test_expect_success 'push if submodule has no remote' '
+ (
+ cd work/gar/bage &&
+ >junk2 &&
+ git add junk2 &&
+ git commit -m "Second junk"
+ ) &&
+ (
+ cd work &&
+ git add gar/bage &&
+ git commit -m "Second commit for gar/bage" &&
+ git push --recurse-submodules=check ../pub.git master
+ )
+'
+
+test_expect_success 'push fails if submodule commit not on remote' '
+ (
+ cd work/gar &&
+ git clone --bare bage ../../submodule.git &&
+ cd bage &&
+ git remote add origin ../../../submodule.git &&
+ git fetch &&
+ >junk3 &&
+ git add junk3 &&
+ git commit -m "Third junk"
+ ) &&
+ (
+ cd work &&
+ git add gar/bage &&
+ git commit -m "Third commit for gar/bage" &&
+ test_must_fail git push --recurse-submodules=check ../pub.git master
+ )
+'
+
+test_expect_success 'push succeeds after commit was pushed to remote' '
+ (
+ cd work/gar/bage &&
+ git push origin master
+ ) &&
+ (
+ cd work &&
+ git push --recurse-submodules=check ../pub.git master
+ )
+'
+
+test_expect_success 'push fails when commit on multiple branches if one branch has no remote' '
+ (
+ cd work/gar/bage &&
+ >junk4 &&
+ git add junk4 &&
+ git commit -m "Fourth junk"
+ ) &&
+ (
+ cd work &&
+ git branch branch2 &&
+ git add gar/bage &&
+ git commit -m "Fourth commit for gar/bage" &&
+ git checkout branch2 &&
+ (
+ cd gar/bage &&
+ git checkout HEAD~1
+ ) &&
+ >junk1 &&
+ git add junk1 &&
+ git commit -m "First junk" &&
+ test_must_fail git push --recurse-submodules=check ../pub.git
+ )
+'
+
+test_expect_success 'push succeeds if submodule has no remote and is on the first superproject commit' '
+ git init --bare a &&
+ git clone a a1 &&
+ (
+ cd a1 &&
+ git init b &&
+ (
+ cd b &&
+ >junk &&
+ git add junk &&
+ git commit -m "initial"
+ ) &&
+ git add b &&
+ git commit -m "added submodule" &&
+ git push --recurse-submodules=check origin master
+ )
+'
+
test_done
x40="$x38$x2"
test_expect_success 'PUT and MOVE sends object to URLs with SHA-1 hash suffix' '
- sed -e "s/PUT /OP /" -e "s/MOVE /OP /" "$HTTPD_ROOT_PATH"/access.log |
- grep -e "\"OP .*/objects/$x2/${x38}_$x40 HTTP/[.0-9]*\" 20[0-9] "
+ sed \
+ -e "s/PUT /OP /" \
+ -e "s/MOVE /OP /" \
+ -e "s|/objects/$x2/${x38}_$x40|WANTED_PATH_REQUEST|" \
+ "$HTTPD_ROOT_PATH"/access.log |
+ grep -e "\"OP .*WANTED_PATH_REQUEST HTTP/[.0-9]*\" 20[0-9] "
'
test_cmp expect actual
'
+# b---bc
+# / \ /
+# a X
+# \ / \
+# c---cb
+#
+# All refnames prefixed with 'x' to avoid confusion with the tags
+# generated by test_commit on case-insensitive systems.
+test_expect_success 'setup criss-cross' '
+ mkdir criss-cross &&
+ (cd criss-cross &&
+ git init &&
+ test_commit A &&
+ git checkout -b xb master &&
+ test_commit B &&
+ git checkout -b xc master &&
+ test_commit C &&
+ git checkout -b xbc xb -- &&
+ git merge xc &&
+ git checkout -b xcb xc -- &&
+ git merge xb &&
+ git checkout master)
+'
+
+# no commits in bc descend from cb
+test_expect_success 'criss-cross: rev-list --ancestry-path cb..bc' '
+ (cd criss-cross &&
+ git rev-list --ancestry-path xcb..xbc > actual &&
+ test -z "$(cat actual)")
+'
+
+# no commits in repository descend from cb
+test_expect_success 'criss-cross: rev-list --ancestry-path --all ^cb' '
+ (cd criss-cross &&
+ git rev-list --ancestry-path --all ^xcb > actual &&
+ test -z "$(cat actual)")
+'
+
test_done
git add letters &&
git commit -m initial &&
+ # Throw in letters.txt for sorting order fun
+ # ("letters.txt" sorts between "letters" and "letters/file")
echo i >>letters &&
- git add letters &&
+ echo "version 2" >letters.txt &&
+ git add letters letters.txt &&
git commit -m modified &&
git checkout -b delete HEAD^ &&
git rm letters &&
mkdir letters &&
>letters/file &&
- git add letters &&
+ echo "version 1" >letters.txt &&
+ git add letters letters.txt &&
git commit -m deleted
'
git checkout delete^0 &&
test_must_fail git merge modify &&
- test 3 = $(git ls-files -s | wc -l) &&
- test 2 = $(git ls-files -u | wc -l) &&
- test 1 = $(git ls-files -o | wc -l) &&
+ test 5 -eq $(git ls-files -s | wc -l) &&
+ test 4 -eq $(git ls-files -u | wc -l) &&
+ test 1 -eq $(git ls-files -o | wc -l) &&
test -f letters/file &&
+ test -f letters.txt &&
test -f letters~modify
'
test_expect_success 'modify/delete + directory/file conflict; other way' '
+ # Yes, we really need the double reset since "letters" appears as
+ # both a file and a directory.
+ git reset --hard &&
git reset --hard &&
git clean -f &&
git checkout modify^0 &&
+
test_must_fail git merge delete &&
- test 3 = $(git ls-files -s | wc -l) &&
- test 2 = $(git ls-files -u | wc -l) &&
- test 1 = $(git ls-files -o | wc -l) &&
+ test 5 -eq $(git ls-files -s | wc -l) &&
+ test 4 -eq $(git ls-files -u | wc -l) &&
+ test 1 -eq $(git ls-files -o | wc -l) &&
test -f letters/file &&
+ test -f letters.txt &&
test -f letters~HEAD
'
git reset --hard &&
git checkout --orphan dir-in-way &&
git rm -rf . &&
+ git clean -fdqx &&
mkdir sub &&
mkdir dir &&
git checkout -q renamed-file-has-no-conflicts^0 &&
test_must_fail git merge --strategy=recursive dir-in-way >output &&
- grep "CONFLICT (delete/modify): dir/file-in-the-way" output &&
+ grep "CONFLICT (modify/delete): dir/file-in-the-way" output &&
grep "Auto-merging dir" output &&
grep "Adding as dir~HEAD instead" output &&
- test 2 -eq "$(git ls-files -u | wc -l)" &&
+ test 3 -eq "$(git ls-files -u | wc -l)" &&
test 2 -eq "$(git ls-files -u dir/file-in-the-way | wc -l)" &&
test_must_fail git diff --quiet &&
test_must_fail git merge --strategy=recursive renamed-file-has-no-conflicts >output 2>errors &&
! grep "error: refusing to lose untracked file at" errors &&
- grep "CONFLICT (delete/modify): dir/file-in-the-way" output &&
+ grep "CONFLICT (modify/delete): dir/file-in-the-way" output &&
grep "Auto-merging dir" output &&
grep "Adding as dir~renamed-file-has-no-conflicts instead" output &&
- test 2 -eq "$(git ls-files -u | wc -l)" &&
+ test 3 -eq "$(git ls-files -u | wc -l)" &&
test 2 -eq "$(git ls-files -u dir/file-in-the-way | wc -l)" &&
test_must_fail git diff --quiet &&
8
9
10
-<<<<<<< HEAD
+<<<<<<< HEAD:dir
12
=======
11
->>>>>>> dir-not-in-way
+>>>>>>> dir-not-in-way:sub/file
EOF
test_expect_success 'Rename+D/F conflict; renamed file cannot merge, dir not in way' '
8
9
10
-<<<<<<< HEAD
+<<<<<<< HEAD:sub/file
11
=======
12
->>>>>>> renamed-file-has-conflicts
+>>>>>>> renamed-file-has-conflicts:dir
EOF
test_expect_success 'Same as previous, but merged other way' '
! test -f original
'
+test_expect_success 'setup avoid unnecessary update, normal rename' '
+ git reset --hard &&
+ git checkout --orphan avoid-unnecessary-update-1 &&
+ git rm -rf . &&
+ git clean -fdqx &&
+
+ printf "1\n2\n3\n4\n5\n6\n7\n8\n9\n10\n" >original &&
+ git add -A &&
+ git commit -m "Common commmit" &&
+
+ git mv original rename &&
+ echo 11 >>rename &&
+ git add -u &&
+ git commit -m "Renamed and modified" &&
+
+ git checkout -b merge-branch-1 HEAD~1 &&
+ echo "random content" >random-file &&
+ git add -A &&
+ git commit -m "Random, unrelated changes"
+'
+
+test_expect_success 'avoid unnecessary update, normal rename' '
+ git checkout -q avoid-unnecessary-update-1^0 &&
+ test-chmtime =1000000000 rename &&
+ test-chmtime -v +0 rename >expect &&
+ git merge merge-branch-1 &&
+ test-chmtime -v +0 rename >actual &&
+ test_cmp expect actual # "rename" should have stayed intact
+'
+
+test_expect_success 'setup to test avoiding unnecessary update, with D/F conflict' '
+ git reset --hard &&
+ git checkout --orphan avoid-unnecessary-update-2 &&
+ git rm -rf . &&
+ git clean -fdqx &&
+
+ mkdir df &&
+ printf "1\n2\n3\n4\n5\n6\n7\n8\n9\n10\n" >df/file &&
+ git add -A &&
+ git commit -m "Common commmit" &&
+
+ git mv df/file temp &&
+ rm -rf df &&
+ git mv temp df &&
+ echo 11 >>df &&
+ git add -u &&
+ git commit -m "Renamed and modified" &&
+
+ git checkout -b merge-branch-2 HEAD~1 &&
+ >unrelated-change &&
+ git add unrelated-change &&
+ git commit -m "Only unrelated changes"
+'
+
+test_expect_success 'avoid unnecessary update, with D/F conflict' '
+ git checkout -q avoid-unnecessary-update-2^0 &&
+ test-chmtime =1000000000 df &&
+ test-chmtime -v +0 df >expect &&
+ git merge merge-branch-2 &&
+ test-chmtime -v +0 df >actual &&
+ test_cmp expect actual # "df" should have stayed intact
+'
+
+test_expect_success 'setup avoid unnecessary update, dir->(file,nothing)' '
+ git rm -rf . &&
+ git clean -fdqx &&
+ rm -rf .git &&
+ git init &&
+
+ >irrelevant &&
+ mkdir df &&
+ >df/file &&
+ git add -A &&
+ git commit -mA &&
+
+ git checkout -b side &&
+ git rm -rf df &&
+ git commit -mB &&
+
+ git checkout master &&
+ git rm -rf df &&
+ echo bla >df &&
+ git add -A &&
+ git commit -m "Add a newfile"
+'
+
+test_expect_success 'avoid unnecessary update, dir->(file,nothing)' '
+ git checkout -q master^0 &&
+ test-chmtime =1000000000 df &&
+ test-chmtime -v +0 df >expect &&
+ git merge side &&
+ test-chmtime -v +0 df >actual &&
+ test_cmp expect actual # "df" should have stayed intact
+'
+
+test_expect_success 'setup avoid unnecessary update, modify/delete' '
+ git rm -rf . &&
+ git clean -fdqx &&
+ rm -rf .git &&
+ git init &&
+
+ >irrelevant &&
+ >file &&
+ git add -A &&
+ git commit -mA &&
+
+ git checkout -b side &&
+ git rm -f file &&
+ git commit -m "Delete file" &&
+
+ git checkout master &&
+ echo bla >file &&
+ git add -A &&
+ git commit -m "Modify file"
+'
+
+test_expect_success 'avoid unnecessary update, modify/delete' '
+ git checkout -q master^0 &&
+ test-chmtime =1000000000 file &&
+ test-chmtime -v +0 file >expect &&
+ test_must_fail git merge side &&
+ test-chmtime -v +0 file >actual &&
+ test_cmp expect actual # "file" should have stayed intact
+'
+
+test_expect_success 'setup avoid unnecessary update, rename/add-dest' '
+ git rm -rf . &&
+ git clean -fdqx &&
+ rm -rf .git &&
+ git init &&
+
+ printf "1\n2\n3\n4\n5\n6\n7\n8\n" >file &&
+ git add -A &&
+ git commit -mA &&
+
+ git checkout -b side &&
+ cp file newfile &&
+ git add -A &&
+ git commit -m "Add file copy" &&
+
+ git checkout master &&
+ git mv file newfile &&
+ git commit -m "Rename file"
+'
+
+test_expect_success 'avoid unnecessary update, rename/add-dest' '
+ git checkout -q master^0 &&
+ test-chmtime =1000000000 newfile &&
+ test-chmtime -v +0 newfile >expect &&
+ git merge side &&
+ test-chmtime -v +0 newfile >actual &&
+ test_cmp expect actual # "file" should have stayed intact
+'
+
+test_expect_success 'setup merge of rename + small change' '
+ git reset --hard &&
+ git checkout --orphan rename-plus-small-change &&
+ git rm -rf . &&
+ git clean -fdqx &&
+
+ echo ORIGINAL >file &&
+ git add file &&
+
+ test_tick &&
+ git commit -m Initial &&
+ git checkout -b rename_branch &&
+ git mv file renamed_file &&
+ git commit -m Rename &&
+ git checkout rename-plus-small-change &&
+ echo NEW-VERSION >file &&
+ git commit -a -m Reformat
+'
+
+test_expect_success 'merge rename + small change' '
+ git merge rename_branch &&
+
+ test 1 -eq $(git ls-files -s | wc -l) &&
+ test 0 -eq $(git ls-files -o | wc -l) &&
+ test $(git rev-parse HEAD:renamed_file) = $(git rev-parse HEAD~1:file)
+'
+
+test_expect_success 'setup for use of extended merge markers' '
+ git rm -rf . &&
+ git clean -fdqx &&
+ rm -rf .git &&
+ git init &&
+
+ printf "1\n2\n3\n4\n5\n6\n7\n8\n" >original_file &&
+ git add original_file &&
+ git commit -mA &&
+
+ git checkout -b rename &&
+ echo 9 >>original_file &&
+ git add original_file &&
+ git mv original_file renamed_file &&
+ git commit -mB &&
+
+ git checkout master &&
+ echo 8.5 >>original_file &&
+ git add original_file &&
+ git commit -mC
+'
+
+cat >expected <<\EOF &&
+1
+2
+3
+4
+5
+6
+7
+8
+<<<<<<< HEAD:renamed_file
+9
+=======
+8.5
+>>>>>>> master^0:original_file
+EOF
+
+test_expect_success 'merge master into rename has correct extended markers' '
+ git checkout rename^0 &&
+ test_must_fail git merge -s recursive master^0 &&
+ test_cmp expected renamed_file
+'
+
+cat >expected <<\EOF &&
+1
+2
+3
+4
+5
+6
+7
+8
+<<<<<<< HEAD:original_file
+8.5
+=======
+9
+>>>>>>> rename^0:renamed_file
+EOF
+
+test_expect_success 'merge rename into master has correct extended markers' '
+ git reset --hard &&
+ git checkout master^0 &&
+ test_must_fail git merge -s recursive rename^0 &&
+ test_cmp expected renamed_file
+'
+
+test_expect_success 'setup spurious "refusing to lose untracked" message' '
+ git rm -rf . &&
+ git clean -fdqx &&
+ rm -rf .git &&
+ git init &&
+
+ > irrelevant_file &&
+ printf "1\n2\n3\n4\n5\n6\n7\n8\n" >original_file &&
+ git add irrelevant_file original_file &&
+ git commit -mA &&
+
+ git checkout -b rename &&
+ git mv original_file renamed_file &&
+ git commit -mB &&
+
+ git checkout master &&
+ git rm original_file &&
+ git commit -mC
+'
+
+test_expect_success 'no spurious "refusing to lose untracked" message' '
+ git checkout master^0 &&
+ test_must_fail git merge rename^0 2>errors.txt &&
+ ! grep "refusing to lose untracked file" errors.txt
+'
+
test_done
git bisect reset &&
git checkout broken &&
git bisect start broken master --no-checkout &&
- git bisect run sh -c '
+ git bisect run \"\$SHELL_PATH\" -c '
GOOD=\$(git for-each-ref \"--format=%(objectname)\" refs/bisect/good-*) &&
git rev-list --objects BISECT_HEAD --not \$GOOD >tmp.\$\$ &&
git pack-objects --stdout >/dev/null < tmp.\$\$
#!/bin/sh
-test_description='recursive merge corner cases'
+test_description='recursive merge corner cases involving criss-cross merges'
. ./test-lib.sh
+get_clean_checkout () {
+ git reset --hard &&
+ git clean -fdqx &&
+ git checkout "$1"
+}
+
#
# L1 L2
# o---o
test_must_fail git merge -s recursive R2^0 &&
- test 5 = $(git ls-files -s | wc -l) &&
- test 3 = $(git ls-files -u | wc -l) &&
- test 0 = $(git ls-files -o | wc -l) &&
+ test 2 = $(git ls-files -s | wc -l) &&
+ test 2 = $(git ls-files -u | wc -l) &&
+ test 2 = $(git ls-files -o | wc -l) &&
- test $(git rev-parse :0:one) = $(git rev-parse L2:one) &&
- test $(git rev-parse :0:two) = $(git rev-parse R2:two) &&
test $(git rev-parse :2:three) = $(git rev-parse L2:three) &&
test $(git rev-parse :3:three) = $(git rev-parse R2:three) &&
- cp two merged &&
- >empty &&
- test_must_fail git merge-file \
- -L "Temporary merge branch 2" \
- -L "" \
- -L "Temporary merge branch 1" \
- merged empty one &&
- test $(git rev-parse :1:three) = $(git hash-object merged)
+ test $(git rev-parse L2:three) = $(git hash-object three~HEAD) &&
+ test $(git rev-parse R2:three) = $(git hash-object three~R2^0)
'
#
test_must_fail git merge -s recursive R2^0 &&
- test 5 = $(git ls-files -s | wc -l) &&
- test 3 = $(git ls-files -u | wc -l) &&
- test 0 = $(git ls-files -o | wc -l) &&
+ test 2 = $(git ls-files -s | wc -l) &&
+ test 2 = $(git ls-files -u | wc -l) &&
+ test 2 = $(git ls-files -o | wc -l) &&
- test $(git rev-parse :0:one) = $(git rev-parse L2:one) &&
- test $(git rev-parse :0:two) = $(git rev-parse R2:two) &&
test $(git rev-parse :2:three) = $(git rev-parse L2:three) &&
test $(git rev-parse :3:three) = $(git rev-parse R2:three) &&
- head -n 10 two >merged &&
- cp one merge-me &&
- >empty &&
- test_must_fail git merge-file \
- -L "Temporary merge branch 2" \
- -L "" \
- -L "Temporary merge branch 1" \
- merged empty merge-me &&
- test $(git rev-parse :1:three) = $(git hash-object merged)
+ test $(git rev-parse L2:three) = $(git hash-object three~HEAD) &&
+ test $(git rev-parse R2:three) = $(git hash-object three~R2^0)
'
#
test $(git rev-parse :1:new_a) = $(git hash-object merged)
'
+#
+# criss-cross + modify/delete:
+#
+# B D
+# o---o
+# / \ / \
+# A o X ? F
+# \ / \ /
+# o---o
+# C E
+#
+# Commit A: file with contents 'A\n'
+# Commit B: file with contents 'B\n'
+# Commit C: file not present
+# Commit D: file with contents 'B\n'
+# Commit E: file not present
+#
+# Merging commits D & E should result in modify/delete conflict.
+
+test_expect_success 'setup criss-cross + modify/delete resolved differently' '
+ git rm -rf . &&
+ git clean -fdqx &&
+ rm -rf .git &&
+ git init &&
+
+ echo A >file &&
+ git add file &&
+ test_tick &&
+ git commit -m A &&
+
+ git branch B &&
+ git checkout -b C &&
+ git rm file &&
+ test_tick &&
+ git commit -m C &&
+
+ git checkout B &&
+ echo B >file &&
+ git add file &&
+ test_tick &&
+ git commit -m B &&
+
+ git checkout B^0 &&
+ test_must_fail git merge C &&
+ echo B >file &&
+ git add file &&
+ test_tick &&
+ git commit -m D &&
+ git tag D &&
+
+ git checkout C^0 &&
+ test_must_fail git merge B &&
+ git rm file &&
+ test_tick &&
+ git commit -m E &&
+ git tag E
+'
+
+test_expect_success 'git detects conflict merging criss-cross+modify/delete' '
+ git checkout D^0 &&
+
+ test_must_fail git merge -s recursive E^0 &&
+
+ test 2 -eq $(git ls-files -s | wc -l) &&
+ test 2 -eq $(git ls-files -u | wc -l) &&
+
+ test $(git rev-parse :1:file) = $(git rev-parse master:file) &&
+ test $(git rev-parse :2:file) = $(git rev-parse B:file)
+'
+
+test_expect_success 'git detects conflict merging criss-cross+modify/delete, reverse direction' '
+ git reset --hard &&
+ git checkout E^0 &&
+
+ test_must_fail git merge -s recursive D^0 &&
+
+ test 2 -eq $(git ls-files -s | wc -l) &&
+ test 2 -eq $(git ls-files -u | wc -l) &&
+
+ test $(git rev-parse :1:file) = $(git rev-parse master:file) &&
+ test $(git rev-parse :3:file) = $(git rev-parse B:file)
+'
+
+#
+# criss-cross + modify/modify with very contrived file contents:
+#
+# B D
+# o---o
+# / \ / \
+# A o X ? F
+# \ / \ /
+# o---o
+# C E
+#
+# Commit A: file with contents 'A\n'
+# Commit B: file with contents 'B\n'
+# Commit C: file with contents 'C\n'
+# Commit D: file with contents 'D\n'
+# Commit E: file with contents:
+# <<<<<<< Temporary merge branch 1
+# C
+# =======
+# B
+# >>>>>>> Temporary merge branch 2
+#
+# Now, when we merge commits D & E, does git detect the conflict?
+
+test_expect_success 'setup differently handled merges of content conflict' '
+ git clean -fdqx &&
+ rm -rf .git &&
+ git init &&
+
+ echo A >file &&
+ git add file &&
+ test_tick &&
+ git commit -m A &&
+
+ git branch B &&
+ git checkout -b C &&
+ echo C >file &&
+ git add file &&
+ test_tick &&
+ git commit -m C &&
+
+ git checkout B &&
+ echo B >file &&
+ git add file &&
+ test_tick &&
+ git commit -m B &&
+
+ git checkout B^0 &&
+ test_must_fail git merge C &&
+ echo D >file &&
+ git add file &&
+ test_tick &&
+ git commit -m D &&
+ git tag D &&
+
+ git checkout C^0 &&
+ test_must_fail git merge B &&
+ cat <<EOF >file &&
+<<<<<<< Temporary merge branch 1
+C
+=======
+B
+>>>>>>> Temporary merge branch 2
+EOF
+ git add file &&
+ test_tick &&
+ git commit -m E &&
+ git tag E
+'
+
+test_expect_failure 'git detects conflict w/ criss-cross+contrived resolution' '
+ git checkout D^0 &&
+
+ test_must_fail git merge -s recursive E^0 &&
+
+ test 3 -eq $(git ls-files -s | wc -l) &&
+ test 3 -eq $(git ls-files -u | wc -l) &&
+ test 0 -eq $(git ls-files -o | wc -l) &&
+
+ test $(git rev-parse :2:file) = $(git rev-parse D:file) &&
+ test $(git rev-parse :3:file) = $(git rev-parse E:file)
+'
+
+#
+# criss-cross + d/f conflict via add/add:
+# Commit A: Neither file 'a' nor directory 'a/' exist.
+# Commit B: Introduce 'a'
+# Commit C: Introduce 'a/file'
+# Commit D: Merge B & C, keeping 'a' and deleting 'a/'
+#
+# Two different later cases:
+# Commit E1: Merge B & C, deleting 'a' but keeping 'a/file'
+# Commit E2: Merge B & C, deleting 'a' but keeping a slightly modified 'a/file'
+#
+# B D
+# o---o
+# / \ / \
+# A o X ? F
+# \ / \ /
+# o---o
+# C E1 or E2
+#
+# Merging D & E1 requires we first create a virtual merge base X from
+# merging A & B in memory. Now, if X could keep both 'a' and 'a/file' in
+# the index, then the merge of D & E1 could be resolved cleanly with both
+# 'a' and 'a/file' removed. Since git does not currently allow creating
+# such a tree, the best we can do is have X contain both 'a~<unique>' and
+# 'a/file' resulting in the merge of D and E1 having a rename/delete
+# conflict for 'a'. (Although this merge appears to be unsolvable with git
+# currently, git could do a lot better than it currently does with these
+# d/f conflicts, which is the purpose of this test.)
+#
+# Merge of D & E2 has similar issues for path 'a', but should always result
+# in a modify/delete conflict for path 'a/file'.
+#
+# We run each merge in both directions, to check for directional issues
+# with D/F conflict handling.
+#
+
+test_expect_success 'setup differently handled merges of directory/file conflict' '
+ git rm -rf . &&
+ git clean -fdqx &&
+ rm -rf .git &&
+ git init &&
+
+ >ignore-me &&
+ git add ignore-me &&
+ test_tick &&
+ git commit -m A &&
+ git tag A &&
+
+ git branch B &&
+ git checkout -b C &&
+ mkdir a &&
+ echo 10 >a/file &&
+ git add a/file &&
+ test_tick &&
+ git commit -m C &&
+
+ git checkout B &&
+ echo 5 >a &&
+ git add a &&
+ test_tick &&
+ git commit -m B &&
+
+ git checkout B^0 &&
+ test_must_fail git merge C &&
+ git clean -f &&
+ rm -rf a/ &&
+ echo 5 >a &&
+ git add a &&
+ test_tick &&
+ git commit -m D &&
+ git tag D &&
+
+ git checkout C^0 &&
+ test_must_fail git merge B &&
+ git clean -f &&
+ git rm --cached a &&
+ echo 10 >a/file &&
+ git add a/file &&
+ test_tick &&
+ git commit -m E1 &&
+ git tag E1 &&
+
+ git checkout C^0 &&
+ test_must_fail git merge B &&
+ git clean -f &&
+ git rm --cached a &&
+ printf "10\n11\n" >a/file &&
+ git add a/file &&
+ test_tick &&
+ git commit -m E2 &&
+ git tag E2
+'
+
+test_expect_success 'merge of D & E1 fails but has appropriate contents' '
+ get_clean_checkout D^0 &&
+
+ test_must_fail git merge -s recursive E1^0 &&
+
+ test 2 -eq $(git ls-files -s | wc -l) &&
+ test 1 -eq $(git ls-files -u | wc -l) &&
+ test 0 -eq $(git ls-files -o | wc -l) &&
+
+ test $(git rev-parse :0:ignore-me) = $(git rev-parse A:ignore-me) &&
+ test $(git rev-parse :2:a) = $(git rev-parse B:a)
+'
+
+test_expect_success 'merge of E1 & D fails but has appropriate contents' '
+ get_clean_checkout E1^0 &&
+
+ test_must_fail git merge -s recursive D^0 &&
+
+ test 2 -eq $(git ls-files -s | wc -l) &&
+ test 1 -eq $(git ls-files -u | wc -l) &&
+ test 0 -eq $(git ls-files -o | wc -l) &&
+
+ test $(git rev-parse :0:ignore-me) = $(git rev-parse A:ignore-me) &&
+ test $(git rev-parse :3:a) = $(git rev-parse B:a)
+'
+
+test_expect_success 'merge of D & E2 fails but has appropriate contents' '
+ get_clean_checkout D^0 &&
+
+ test_must_fail git merge -s recursive E2^0 &&
+
+ test 4 -eq $(git ls-files -s | wc -l) &&
+ test 3 -eq $(git ls-files -u | wc -l) &&
+ test 1 -eq $(git ls-files -o | wc -l) &&
+
+ test $(git rev-parse :2:a) = $(git rev-parse B:a) &&
+ test $(git rev-parse :3:a/file) = $(git rev-parse E2:a/file) &&
+ test $(git rev-parse :1:a/file) = $(git rev-parse C:a/file) &&
+ test $(git rev-parse :0:ignore-me) = $(git rev-parse A:ignore-me) &&
+
+ test -f a~HEAD
+'
+
+test_expect_success 'merge of E2 & D fails but has appropriate contents' '
+ get_clean_checkout E2^0 &&
+
+ test_must_fail git merge -s recursive D^0 &&
+
+ test 4 -eq $(git ls-files -s | wc -l) &&
+ test 3 -eq $(git ls-files -u | wc -l) &&
+ test 1 -eq $(git ls-files -o | wc -l) &&
+
+ test $(git rev-parse :3:a) = $(git rev-parse B:a) &&
+ test $(git rev-parse :2:a/file) = $(git rev-parse E2:a/file) &&
+ test $(git rev-parse :1:a/file) = $(git rev-parse C:a/file) &&
+ test $(git rev-parse :0:ignore-me) = $(git rev-parse A:ignore-me) &&
+
+ test -f a~D^0
+'
+
+#
+# criss-cross with rename/rename(1to2)/modify followed by
+# rename/rename(2to1)/modify:
+#
+# B D
+# o---o
+# / \ / \
+# A o X ? F
+# \ / \ /
+# o---o
+# C E
+#
+# Commit A: new file: a
+# Commit B: rename a->b, modifying by adding a line
+# Commit C: rename a->c
+# Commit D: merge B&C, resolving conflict by keeping contents in newname
+# Commit E: merge B&C, resolving conflict similar to D but adding another line
+#
+# There is a conflict merging B & C, but it is one of filename, not of file
+# content. Whoever created D and E chose specific resolutions for that
+# conflict resolution. Now, since: (1) there is no content conflict
+# merging B & C, (2) D does not modify that merged content further, and (3)
+# both D & E resolve the name conflict in the same way, the modification to
+# newname in E should not cause any conflicts when it is merged with D.
+# (Note that this can be accomplished by having the virtual merge base have
+# the merged contents of b and c stored in a file named a, which seems like
+# the most logical choice anyway.)
+#
+# Comment from Junio: I do not necessarily agree with the choice "a", but
+# it feels sound to say "B and C do not agree what the final pathname
+# should be, but we know this content was derived from the common A:a so we
+# use one path whose name is arbitrary in the virtual merge base X between
+# D and E" and then further let the rename detection to notice that that
+# arbitrary path gets renamed between X-D to "newname" and X-E also to
+# "newname" to resolve it as both sides renaming it to the same new
+# name. It is akin to what we do at the content level, i.e. "B and C do not
+# agree what the final contents should be, so we leave the conflict marker
+# but that may cancel out at the final merge stage".
+
+test_expect_success 'setup rename/rename(1to2)/modify followed by what looks like rename/rename(2to1)/modify' '
+ git reset --hard &&
+ git rm -rf . &&
+ git clean -fdqx &&
+ rm -rf .git &&
+ git init &&
+
+ printf "1\n2\n3\n4\n5\n6\n" >a &&
+ git add a &&
+ git commit -m A &&
+ git tag A &&
+
+ git checkout -b B A &&
+ git mv a b &&
+ echo 7 >>b &&
+ git add -u &&
+ git commit -m B &&
+
+ git checkout -b C A &&
+ git mv a c &&
+ git commit -m C &&
+
+ git checkout -q B^0 &&
+ git merge --no-commit -s ours C^0 &&
+ git mv b newname &&
+ git commit -m "Merge commit C^0 into HEAD" &&
+ git tag D &&
+
+ git checkout -q C^0 &&
+ git merge --no-commit -s ours B^0 &&
+ git mv c newname &&
+ printf "7\n8\n" >>newname &&
+ git add -u &&
+ git commit -m "Merge commit B^0 into HEAD" &&
+ git tag E
+'
+
+test_expect_success 'handle rename/rename(1to2)/modify followed by what looks like rename/rename(2to1)/modify' '
+ git checkout D^0 &&
+
+ git merge -s recursive E^0 &&
+
+ test 1 -eq $(git ls-files -s | wc -l) &&
+ test 0 -eq $(git ls-files -u | wc -l) &&
+ test 0 -eq $(git ls-files -o | wc -l) &&
+
+ test $(git rev-parse HEAD:newname) = $(git rev-parse E:newname)
+'
+
+#
+# criss-cross with rename/rename(1to2)/add-source + resolvable modify/modify:
+#
+# B D
+# o---o
+# / \ / \
+# A o X ? F
+# \ / \ /
+# o---o
+# C E
+#
+# Commit A: new file: a
+# Commit B: rename a->b
+# Commit C: rename a->c, add different a
+# Commit D: merge B&C, keeping b&c and (new) a modified at beginning
+# Commit E: merge B&C, keeping b&c and (new) a modified at end
+#
+# Merging commits D & E should result in no conflict; doing so correctly
+# requires getting the virtual merge base (from merging B&C) right, handling
+# renames carefully (both in the virtual merge base and later), and getting
+# the content merge right.
+
+test_expect_success 'setup criss-cross + rename/rename/add + modify/modify' '
+ git rm -rf . &&
+ git clean -fdqx &&
+ rm -rf .git &&
+ git init &&
+
+ printf "lots\nof\nwords\nand\ncontent\n" >a &&
+ git add a &&
+ git commit -m A &&
+ git tag A &&
+
+ git checkout -b B A &&
+ git mv a b &&
+ git commit -m B &&
+
+ git checkout -b C A &&
+ git mv a c &&
+ printf "2\n3\n4\n5\n6\n7\n" >a &&
+ git add a &&
+ git commit -m C &&
+
+ git checkout B^0 &&
+ git merge --no-commit -s ours C^0 &&
+ git checkout C -- a c &&
+ mv a old_a &&
+ echo 1 >a &&
+ cat old_a >>a &&
+ rm old_a &&
+ git add -u &&
+ git commit -m "Merge commit C^0 into HEAD" &&
+ git tag D &&
+
+ git checkout C^0 &&
+ git merge --no-commit -s ours B^0 &&
+ git checkout B -- b &&
+ echo 8 >>a &&
+ git add -u &&
+ git commit -m "Merge commit B^0 into HEAD" &&
+ git tag E
+'
+
+test_expect_failure 'detect rename/rename/add-source for virtual merge-base' '
+ git checkout D^0 &&
+
+ git merge -s recursive E^0 &&
+
+ test 3 -eq $(git ls-files -s | wc -l) &&
+ test 0 -eq $(git ls-files -u | wc -l) &&
+ test 0 -eq $(git ls-files -o | wc -l) &&
+
+ test $(git rev-parse HEAD:b) = $(git rev-parse A:a) &&
+ test $(git rev-parse HEAD:c) = $(git rev-parse A:a) &&
+ test "$(cat a)" = "$(printf "1\n2\n3\n4\n5\n6\n7\n8\n")"
+'
+
+#
+# criss-cross with rename/rename(1to2)/add-dest + simple modify:
+#
+# B D
+# o---o
+# / \ / \
+# A o X ? F
+# \ / \ /
+# o---o
+# C E
+#
+# Commit A: new file: a
+# Commit B: rename a->b, add c
+# Commit C: rename a->c
+# Commit D: merge B&C, keeping A:a and B:c
+# Commit E: merge B&C, keeping A:a and slightly modified c from B
+#
+# Merging commits D & E should result in no conflict. For that to work,
+# though, the virtual merge base of B & C must not delete B:c...
+
+test_expect_success 'setup criss-cross+rename/rename/add-dest + simple modify' '
+ git rm -rf . &&
+ git clean -fdqx &&
+ rm -rf .git &&
+ git init &&
+
+ >a &&
+ git add a &&
+ git commit -m A &&
+ git tag A &&
+
+ git checkout -b B A &&
+ git mv a b &&
+ printf "1\n2\n3\n4\n5\n6\n7\n" >c &&
+ git add c &&
+ git commit -m B &&
+
+ git checkout -b C A &&
+ git mv a c &&
+ git commit -m C &&
+
+ git checkout B^0 &&
+ git merge --no-commit -s ours C^0 &&
+ git mv b a &&
+ git commit -m "D is like B but renames b back to a" &&
+ git tag D &&
+
+ git checkout B^0 &&
+ git merge --no-commit -s ours C^0 &&
+ git mv b a &&
+ echo 8 >>c &&
+ git add c &&
+ git commit -m "E like D but has mod in c" &&
+ git tag E
+'
+
+test_expect_success 'virtual merge base handles rename/rename(1to2)/add-dest' '
+ git checkout D^0 &&
+
+ git merge -s recursive E^0 &&
+
+ test 2 -eq $(git ls-files -s | wc -l) &&
+ test 0 -eq $(git ls-files -u | wc -l) &&
+ test 0 -eq $(git ls-files -o | wc -l) &&
+
+ test $(git rev-parse HEAD:a) = $(git rev-parse A:a) &&
+ test $(git rev-parse HEAD:c) = $(git rev-parse E:c)
+'
+
test_done
--- /dev/null
+#!/bin/sh
+
+test_description="recursive merge corner cases w/ renames but not criss-crosses"
+# t6036 has corner cases that involve both criss-cross merges and renames
+
+. ./test-lib.sh
+
+test_expect_success 'setup rename/delete + untracked file' '
+ echo "A pretty inscription" >ring &&
+ git add ring &&
+ test_tick &&
+ git commit -m beginning &&
+
+ git branch people &&
+ git checkout -b rename-the-ring &&
+ git mv ring one-ring-to-rule-them-all &&
+ test_tick &&
+ git commit -m fullname &&
+
+ git checkout people &&
+ git rm ring &&
+ echo gollum >owner &&
+ git add owner &&
+ test_tick &&
+ git commit -m track-people-instead-of-objects &&
+ echo "Myyy PRECIOUSSS" >ring
+'
+
+test_expect_success "Does git preserve Gollum's precious artifact?" '
+ test_must_fail git merge -s recursive rename-the-ring &&
+
+ # Make sure git did not delete an untracked file
+ test -f ring
+'
+
+# Testcase setup for rename/modify/add-source:
+# Commit A: new file: a
+# Commit B: modify a slightly
+# Commit C: rename a->b, add completely different a
+#
+# We should be able to merge B & C cleanly
+
+test_expect_success 'setup rename/modify/add-source conflict' '
+ git rm -rf . &&
+ git clean -fdqx &&
+ rm -rf .git &&
+ git init &&
+
+ printf "1\n2\n3\n4\n5\n6\n7\n" >a &&
+ git add a &&
+ git commit -m A &&
+ git tag A &&
+
+ git checkout -b B A &&
+ echo 8 >>a &&
+ git add a &&
+ git commit -m B &&
+
+ git checkout -b C A &&
+ git mv a b &&
+ echo something completely different >a &&
+ git add a &&
+ git commit -m C
+'
+
+test_expect_failure 'rename/modify/add-source conflict resolvable' '
+ git checkout B^0 &&
+
+ git merge -s recursive C^0 &&
+
+ test $(git rev-parse B:a) = $(git rev-parse b) &&
+ test $(git rev-parse C:a) = $(git rev-parse a)
+'
+
+test_expect_success 'setup resolvable conflict missed if rename missed' '
+ git rm -rf . &&
+ git clean -fdqx &&
+ rm -rf .git &&
+ git init &&
+
+ printf "1\n2\n3\n4\n5\n" >a &&
+ echo foo >b &&
+ git add a b &&
+ git commit -m A &&
+ git tag A &&
+
+ git checkout -b B A &&
+ git mv a c &&
+ echo "Completely different content" >a &&
+ git add a &&
+ git commit -m B &&
+
+ git checkout -b C A &&
+ echo 6 >>a &&
+ git add a &&
+ git commit -m C
+'
+
+test_expect_failure 'conflict caused if rename not detected' '
+ git checkout -q C^0 &&
+ git merge -s recursive B^0 &&
+
+ test 3 -eq $(git ls-files -s | wc -l) &&
+ test 0 -eq $(git ls-files -u | wc -l) &&
+ test 0 -eq $(git ls-files -o | wc -l) &&
+
+ test 6 -eq $(wc -l < c) &&
+ test $(git rev-parse HEAD:a) = $(git rev-parse B:a) &&
+ test $(git rev-parse HEAD:b) = $(git rev-parse A:b)
+'
+
+test_expect_success 'setup conflict resolved wrong if rename missed' '
+ git reset --hard &&
+ git clean -f &&
+
+ git checkout -b D A &&
+ echo 7 >>a &&
+ git add a &&
+ git mv a c &&
+ echo "Completely different content" >a &&
+ git add a &&
+ git commit -m D &&
+
+ git checkout -b E A &&
+ git rm a &&
+ echo "Completely different content" >>a &&
+ git add a &&
+ git commit -m E
+'
+
+test_expect_failure 'missed conflict if rename not detected' '
+ git checkout -q E^0 &&
+ test_must_fail git merge -s recursive D^0
+'
+
+# Tests for undetected rename/add-source causing a file to erroneously be
+# deleted (and for mishandled rename/rename(1to1) causing the same issue).
+#
+# This test uses a rename/rename(1to1)+add-source conflict (1to1 means the
+# same file is renamed on both sides to the same thing; it should trigger
+# the 1to2 logic, which it would do if the add-source didn't cause issues
+# for git's rename detection):
+# Commit A: new file: a
+# Commit B: rename a->b
+# Commit C: rename a->b, add unrelated a
+
+test_expect_success 'setup undetected rename/add-source causes data loss' '
+ git rm -rf . &&
+ git clean -fdqx &&
+ rm -rf .git &&
+ git init &&
+
+ printf "1\n2\n3\n4\n5\n" >a &&
+ git add a &&
+ git commit -m A &&
+ git tag A &&
+
+ git checkout -b B A &&
+ git mv a b &&
+ git commit -m B &&
+
+ git checkout -b C A &&
+ git mv a b &&
+ echo foobar >a &&
+ git add a &&
+ git commit -m C
+'
+
+test_expect_failure 'detect rename/add-source and preserve all data' '
+ git checkout B^0 &&
+
+ git merge -s recursive C^0 &&
+
+ test 2 -eq $(git ls-files -s | wc -l) &&
+ test 2 -eq $(git ls-files -u | wc -l) &&
+ test 0 -eq $(git ls-files -o | wc -l) &&
+
+ test -f a &&
+ test -f b &&
+
+ test $(git rev-parse HEAD:b) = $(git rev-parse A:a) &&
+ test $(git rev-parse HEAD:a) = $(git rev-parse C:a)
+'
+
+test_expect_failure 'detect rename/add-source and preserve all data, merge other way' '
+ git checkout C^0 &&
+
+ git merge -s recursive B^0 &&
+
+ test 2 -eq $(git ls-files -s | wc -l) &&
+ test 2 -eq $(git ls-files -u | wc -l) &&
+ test 0 -eq $(git ls-files -o | wc -l) &&
+
+ test -f a &&
+ test -f b &&
+
+ test $(git rev-parse HEAD:b) = $(git rev-parse A:a) &&
+ test $(git rev-parse HEAD:a) = $(git rev-parse C:a)
+'
+
+test_expect_success 'setup content merge + rename/directory conflict' '
+ git rm -rf . &&
+ git clean -fdqx &&
+ rm -rf .git &&
+ git init &&
+
+ printf "1\n2\n3\n4\n5\n6\n" >file &&
+ git add file &&
+ test_tick &&
+ git commit -m base &&
+ git tag base &&
+
+ git checkout -b right &&
+ echo 7 >>file &&
+ mkdir newfile &&
+ echo junk >newfile/realfile &&
+ git add file newfile/realfile &&
+ test_tick &&
+ git commit -m right &&
+
+ git checkout -b left-conflict base &&
+ echo 8 >>file &&
+ git add file &&
+ git mv file newfile &&
+ test_tick &&
+ git commit -m left &&
+
+ git checkout -b left-clean base &&
+ echo 0 >newfile &&
+ cat file >>newfile &&
+ git add newfile &&
+ git rm file &&
+ test_tick &&
+ git commit -m left
+'
+
+test_expect_success 'rename/directory conflict + clean content merge' '
+ git reset --hard &&
+ git reset --hard &&
+ git clean -fdqx &&
+
+ git checkout left-clean^0 &&
+
+ test_must_fail git merge -s recursive right^0 &&
+
+ test 2 -eq $(git ls-files -s | wc -l) &&
+ test 1 -eq $(git ls-files -u | wc -l) &&
+ test 1 -eq $(git ls-files -o | wc -l) &&
+
+ echo 0 >expect &&
+ git cat-file -p base:file >>expect &&
+ echo 7 >>expect &&
+ test_cmp expect newfile~HEAD &&
+
+ test $(git rev-parse :2:newfile) = $(git hash-object expect) &&
+
+ test -f newfile/realfile &&
+ test -f newfile~HEAD
+'
+
+test_expect_success 'rename/directory conflict + content merge conflict' '
+ git reset --hard &&
+ git reset --hard &&
+ git clean -fdqx &&
+
+ git checkout left-conflict^0 &&
+
+ test_must_fail git merge -s recursive right^0 &&
+
+ test 4 -eq $(git ls-files -s | wc -l) &&
+ test 3 -eq $(git ls-files -u | wc -l) &&
+ test 1 -eq $(git ls-files -o | wc -l) &&
+
+ git cat-file -p left-conflict:newfile >left &&
+ git cat-file -p base:file >base &&
+ git cat-file -p right:file >right &&
+ test_must_fail git merge-file \
+ -L "HEAD:newfile" \
+ -L "" \
+ -L "right^0:file" \
+ left base right &&
+ test_cmp left newfile~HEAD &&
+
+ test $(git rev-parse :1:newfile) = $(git rev-parse base:file) &&
+ test $(git rev-parse :2:newfile) = $(git rev-parse left-conflict:newfile) &&
+ test $(git rev-parse :3:newfile) = $(git rev-parse right:file) &&
+
+ test -f newfile/realfile &&
+ test -f newfile~HEAD
+'
+
+test_expect_success 'setup content merge + rename/directory conflict w/ disappearing dir' '
+ git reset --hard &&
+ git rm -rf . &&
+ git clean -fdqx &&
+ rm -rf .git &&
+ git init &&
+
+ mkdir sub &&
+ printf "1\n2\n3\n4\n5\n6\n" >sub/file &&
+ git add sub/file &&
+ test_tick &&
+ git commit -m base &&
+ git tag base &&
+
+ git checkout -b right &&
+ echo 7 >>sub/file &&
+ git add sub/file &&
+ test_tick &&
+ git commit -m right &&
+
+ git checkout -b left base &&
+ echo 0 >newfile &&
+ cat sub/file >>newfile &&
+ git rm sub/file &&
+ mv newfile sub &&
+ git add sub &&
+ test_tick &&
+ git commit -m left
+'
+
+test_expect_success 'disappearing dir in rename/directory conflict handled' '
+ git reset --hard &&
+ git clean -fdqx &&
+
+ git checkout left^0 &&
+
+ git merge -s recursive right^0 &&
+
+ test 1 -eq $(git ls-files -s | wc -l) &&
+ test 0 -eq $(git ls-files -u | wc -l) &&
+ test 0 -eq $(git ls-files -o | wc -l) &&
+
+ echo 0 >expect &&
+ git cat-file -p base:sub/file >>expect &&
+ echo 7 >>expect &&
+ test_cmp expect sub &&
+
+ test -f sub
+'
+
+# Test for all kinds of things that can go wrong with rename/rename (2to1):
+# Commit A: new files: a & b
+# Commit B: rename a->c, modify b
+# Commit C: rename b->c, modify a
+#
+# Merging of B & C should NOT be clean. Questions:
+# * Both a & b should be removed by the merge; are they?
+# * The two c's should contain modifications to a & b; do they?
+# * The index should contain two files, both for c; does it?
+# * The working copy should have two files, both of the form c~<unique>; does it?
+# * Nothing else should be present. Is anything?
+
+test_expect_success 'setup rename/rename (2to1) + modify/modify' '
+ git rm -rf . &&
+ git clean -fdqx &&
+ rm -rf .git &&
+ git init &&
+
+ printf "1\n2\n3\n4\n5\n" >a &&
+ printf "5\n4\n3\n2\n1\n" >b &&
+ git add a b &&
+ git commit -m A &&
+ git tag A &&
+
+ git checkout -b B A &&
+ git mv a c &&
+ echo 0 >>b &&
+ git add b &&
+ git commit -m B &&
+
+ git checkout -b C A &&
+ git mv b c &&
+ echo 6 >>a &&
+ git add a &&
+ git commit -m C
+'
+
+test_expect_success 'handle rename/rename (2to1) conflict correctly' '
+ git checkout B^0 &&
+
+ test_must_fail git merge -s recursive C^0 >out &&
+ grep "CONFLICT (rename/rename)" out &&
+
+ test 2 -eq $(git ls-files -s | wc -l) &&
+ test 2 -eq $(git ls-files -u | wc -l) &&
+ test 2 -eq $(git ls-files -u c | wc -l) &&
+ test 3 -eq $(git ls-files -o | wc -l) &&
+
+ test ! -f a &&
+ test ! -f b &&
+ test -f c~HEAD &&
+ test -f c~C^0 &&
+
+ test $(git hash-object c~HEAD) = $(git rev-parse C:a) &&
+ test $(git hash-object c~C^0) = $(git rev-parse B:b)
+'
+
+# Testcase setup for simple rename/rename (1to2) conflict:
+# Commit A: new file: a
+# Commit B: rename a->b
+# Commit C: rename a->c
+test_expect_success 'setup simple rename/rename (1to2) conflict' '
+ git rm -rf . &&
+ git clean -fdqx &&
+ rm -rf .git &&
+ git init &&
+
+ echo stuff >a &&
+ git add a &&
+ test_tick &&
+ git commit -m A &&
+ git tag A &&
+
+ git checkout -b B A &&
+ git mv a b &&
+ test_tick &&
+ git commit -m B &&
+
+ git checkout -b C A &&
+ git mv a c &&
+ test_tick &&
+ git commit -m C
+'
+
+test_expect_success 'merge has correct working tree contents' '
+ git checkout C^0 &&
+
+ test_must_fail git merge -s recursive B^0 &&
+
+ test 3 -eq $(git ls-files -s | wc -l) &&
+ test 3 -eq $(git ls-files -u | wc -l) &&
+ test 0 -eq $(git ls-files -o | wc -l) &&
+
+ test $(git rev-parse :1:a) = $(git rev-parse A:a) &&
+ test $(git rev-parse :3:b) = $(git rev-parse A:a) &&
+ test $(git rev-parse :2:c) = $(git rev-parse A:a) &&
+
+ test ! -f a &&
+ test $(git hash-object b) = $(git rev-parse A:a) &&
+ test $(git hash-object c) = $(git rev-parse A:a)
+'
+
+# Testcase setup for rename/rename(1to2)/add-source conflict:
+# Commit A: new file: a
+# Commit B: rename a->b
+# Commit C: rename a->c, add completely different a
+#
+# Merging of B & C should NOT be clean; there's a rename/rename conflict
+
+test_expect_success 'setup rename/rename(1to2)/add-source conflict' '
+ git rm -rf . &&
+ git clean -fdqx &&
+ rm -rf .git &&
+ git init &&
+
+ printf "1\n2\n3\n4\n5\n6\n7\n" >a &&
+ git add a &&
+ git commit -m A &&
+ git tag A &&
+
+ git checkout -b B A &&
+ git mv a b &&
+ git commit -m B &&
+
+ git checkout -b C A &&
+ git mv a c &&
+ echo something completely different >a &&
+ git add a &&
+ git commit -m C
+'
+
+test_expect_failure 'detect conflict with rename/rename(1to2)/add-source merge' '
+ git checkout B^0 &&
+
+ test_must_fail git merge -s recursive C^0 &&
+
+ test 4 -eq $(git ls-files -s | wc -l) &&
+ test 0 -eq $(git ls-files -o | wc -l) &&
+
+ test $(git rev-parse :3:a) = $(git rev-parse C:a) &&
+ test $(git rev-parse :1:a) = $(git rev-parse A:a) &&
+ test $(git rev-parse :2:b) = $(git rev-parse B:b) &&
+ test $(git rev-parse :3:c) = $(git rev-parse C:c) &&
+
+ test -f a &&
+ test -f b &&
+ test -f c
+'
+
+test_expect_success 'setup rename/rename(1to2)/add-source resolvable conflict' '
+ git rm -rf . &&
+ git clean -fdqx &&
+ rm -rf .git &&
+ git init &&
+
+ >a &&
+ git add a &&
+ test_tick &&
+ git commit -m base &&
+ git tag A &&
+
+ git checkout -b B A &&
+ git mv a b &&
+ test_tick &&
+ git commit -m one &&
+
+ git checkout -b C A &&
+ git mv a b &&
+ echo important-info >a &&
+ git add a &&
+ test_tick &&
+ git commit -m two
+'
+
+test_expect_failure 'rename/rename/add-source still tracks new a file' '
+ git checkout C^0 &&
+ git merge -s recursive B^0 &&
+
+ test 2 -eq $(git ls-files -s | wc -l) &&
+ test 0 -eq $(git ls-files -o | wc -l) &&
+
+ test $(git rev-parse HEAD:a) = $(git rev-parse C:a) &&
+ test $(git rev-parse HEAD:b) = $(git rev-parse A:a)
+'
+
+test_expect_success 'setup rename/rename(1to2)/add-dest conflict' '
+ git rm -rf . &&
+ git clean -fdqx &&
+ rm -rf .git &&
+ git init &&
+
+ echo stuff >a &&
+ git add a &&
+ test_tick &&
+ git commit -m base &&
+ git tag A &&
+
+ git checkout -b B A &&
+ git mv a b &&
+ echo precious-data >c &&
+ git add c &&
+ test_tick &&
+ git commit -m one &&
+
+ git checkout -b C A &&
+ git mv a c &&
+ echo important-info >b &&
+ git add b &&
+ test_tick &&
+ git commit -m two
+'
+
+test_expect_success 'rename/rename/add-dest merge still knows about conflicting file versions' '
+ git checkout C^0 &&
+ test_must_fail git merge -s recursive B^0 &&
+
+ test 5 -eq $(git ls-files -s | wc -l) &&
+ test 2 -eq $(git ls-files -u b | wc -l) &&
+ test 2 -eq $(git ls-files -u c | wc -l) &&
+ test 4 -eq $(git ls-files -o | wc -l) &&
+
+ test $(git rev-parse :1:a) = $(git rev-parse A:a) &&
+ test $(git rev-parse :2:b) = $(git rev-parse C:b) &&
+ test $(git rev-parse :3:b) = $(git rev-parse B:b) &&
+ test $(git rev-parse :2:c) = $(git rev-parse C:c) &&
+ test $(git rev-parse :3:c) = $(git rev-parse B:c) &&
+
+ test $(git hash-object c~HEAD) = $(git rev-parse C:c) &&
+ test $(git hash-object c~B\^0) = $(git rev-parse B:c) &&
+ test $(git hash-object b~HEAD) = $(git rev-parse C:b) &&
+ test $(git hash-object b~B\^0) = $(git rev-parse B:b) &&
+
+ test ! -f b &&
+ test ! -f c
+'
+
+test_done
test_description='for-each-ref test'
. ./test-lib.sh
+. "$TEST_DIRECTORY"/lib-gpg.sh
# Mon Jul 3 15:18:43 2006 +0000
datestamp=1151939923
case "$1" in
head) ref=refs/heads/master ;;
tag) ref=refs/tags/testtag ;;
+ *) ref=$1 ;;
esac
printf '%s\n' "$3" >expected
- test_expect_${4:-success} "basic atom: $1 $2" "
+ test_expect_${4:-success} $PREREQ "basic atom: $1 $2" "
git for-each-ref --format='%($2)' $ref >actual &&
- test_cmp expected actual
+ sanitize_pgp <actual >actual.clean &&
+ test_cmp expected actual.clean
"
}
test_atom head creator 'C O Mitter <committer@example.com> 1151939923 +0200'
test_atom head creatordate 'Mon Jul 3 17:18:43 2006 +0200'
test_atom head subject 'Initial'
+test_atom head contents:subject 'Initial'
test_atom head body ''
+test_atom head contents:body ''
+test_atom head contents:signature ''
test_atom head contents 'Initial
'
test_atom tag creator 'C O Mitter <committer@example.com> 1151939925 +0200'
test_atom tag creatordate 'Mon Jul 3 17:18:45 2006 +0200'
test_atom tag subject 'Tagging at 1151939927'
+test_atom tag contents:subject 'Tagging at 1151939927'
test_atom tag body ''
+test_atom tag contents:body ''
+test_atom tag contents:signature ''
test_atom tag contents 'Tagging at 1151939927
'
'
+test_expect_success 'create tag with subject and body content' '
+ cat >>msg <<-\EOF &&
+ the subject line
+
+ first body line
+ second body line
+ EOF
+ git tag -F msg subject-body
+'
+test_atom refs/tags/subject-body subject 'the subject line'
+test_atom refs/tags/subject-body body 'first body line
+second body line
+'
+test_atom refs/tags/subject-body contents 'the subject line
+
+first body line
+second body line
+'
+
+test_expect_success 'create tag with multiline subject' '
+ cat >msg <<-\EOF &&
+ first subject line
+ second subject line
+
+ first body line
+ second body line
+ EOF
+ git tag -F msg multiline
+'
+test_atom refs/tags/multiline subject 'first subject line second subject line'
+test_atom refs/tags/multiline contents:subject 'first subject line second subject line'
+test_atom refs/tags/multiline body 'first body line
+second body line
+'
+test_atom refs/tags/multiline contents:body 'first body line
+second body line
+'
+test_atom refs/tags/multiline contents:signature ''
+test_atom refs/tags/multiline contents 'first subject line
+second subject line
+
+first body line
+second body line
+'
+
+test_expect_success GPG 'create signed tags' '
+ git tag -s -m "" signed-empty &&
+ git tag -s -m "subject line" signed-short &&
+ cat >msg <<-\EOF &&
+ subject line
+
+ body contents
+ EOF
+ git tag -s -F msg signed-long
+'
+
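+# sanitize_pgp (from lib-gpg.sh) is expected to strip everything between the
+# BEGIN/END PGP markers, so a "signature" below is just the two marker lines.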
+sig='-----BEGIN PGP SIGNATURE-----
+-----END PGP SIGNATURE-----
+'
+
+PREREQ=GPG
+test_atom refs/tags/signed-empty subject ''
+test_atom refs/tags/signed-empty contents:subject ''
+test_atom refs/tags/signed-empty body "$sig"
+test_atom refs/tags/signed-empty contents:body ''
+test_atom refs/tags/signed-empty contents:signature "$sig"
+test_atom refs/tags/signed-empty contents "$sig"
+
+test_atom refs/tags/signed-short subject 'subject line'
+test_atom refs/tags/signed-short contents:subject 'subject line'
+test_atom refs/tags/signed-short body "$sig"
+test_atom refs/tags/signed-short contents:body ''
+test_atom refs/tags/signed-short contents:signature "$sig"
+test_atom refs/tags/signed-short contents "subject line
+$sig"
+
+test_atom refs/tags/signed-long subject 'subject line'
+test_atom refs/tags/signed-long contents:subject 'subject line'
+test_atom refs/tags/signed-long body "body contents
+$sig"
+test_atom refs/tags/signed-long contents:body 'body contents
+'
+test_atom refs/tags/signed-long contents:signature "$sig"
+test_atom refs/tags/signed-long contents "subject line
+
+body contents
+$sig"
+
test_done
Tests for operations with tags.'
. ./test-lib.sh
+. "$TEST_DIRECTORY"/lib-gpg.sh
# creating and listing lightweight tags:
test_cmp expect actual
'
-# subsequent tests require gpg; check if it is available
-gpg --version >/dev/null 2>/dev/null
-if [ $? -eq 127 ]; then
- say "# gpg not found - skipping tag signing and verification tests"
-else
- # As said here: http://www.gnupg.org/documentation/faqs.html#q6.19
- # the gpg version 1.0.6 didn't parse trust packets correctly, so for
- # that version, creation of signed tags using the generated key fails.
- case "$(gpg --version)" in
- 'gpg (GnuPG) 1.0.6'*)
- say "Skipping signed tag tests, because a bug in 1.0.6 version"
- ;;
- *)
- test_set_prereq GPG
- ;;
- esac
-fi
-
# trying to verify annotated non-signed tags:
test_expect_success GPG \
# creating and verifying signed tags:
-# key generation info: gpg --homedir t/t7004 --gen-key
-# Type DSA and Elgamal, size 2048 bits, no expiration date.
-# Name and email: C O Mitter <committer@example.com>
-# No password given, to enable non-interactive operation.
-
-cp -R "$TEST_DIRECTORY"/t7004 ./gpghome
-chmod 0700 gpghome
-GNUPGHOME="$(pwd)/gpghome"
-export GNUPGHOME
-
get_tag_header signed-tag $commit commit $time >expect
echo 'A signed tag message' >>expect
echo '-----BEGIN PGP SIGNATURE-----' >>expect
git grep -f f -Fi a
"
-test_expect_failure 'git grep -Fi Y<NUL>x a' "
+test_expect_success 'git grep -Fi Y<NUL>x a' "
printf 'YQx' | q_to_nul >f &&
test_must_fail git grep -f f -Fi a
"
git grep -f f a
"
-test_expect_failure 'git grep y<NUL>x a' "
+test_expect_success 'git grep y<NUL>x a' "
printf 'yQx' | q_to_nul >f &&
test_must_fail git grep -f f a
"
--- /dev/null
+#!/bin/sh
+
+test_description='Test interaction of reset --hard with sequencer
+
+ + anotherpick: rewrites foo to d
+ + picked: rewrites foo to c
+ + unrelatedpick: rewrites unrelated to reallyunrelated
+ + base: rewrites foo to b
+ + initial: writes foo as a, unrelated as unrelated
+'
+
+. ./test-lib.sh
+
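+# pristine_detach: forget any sequencer state left over from an earlier
+# (failed) cherry-pick, detach HEAD at the given commit, and scrub the
+# index and worktree so each test starts from a clean slate.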
+pristine_detach () {
+ git cherry-pick --reset &&
+ git checkout -f "$1^0" &&
+ git read-tree -u --reset HEAD &&
+ git clean -d -f -f -q -x
+}
+
+test_expect_success setup '
+ echo unrelated >unrelated &&
+ git add unrelated &&
+ test_commit initial foo a &&
+ test_commit base foo b &&
+ test_commit unrelatedpick unrelated reallyunrelated &&
+ test_commit picked foo c &&
+ test_commit anotherpick foo d &&
+ git config advice.detachedhead false
+
+'
+
+test_expect_success 'reset --hard cleans up sequencer state, providing one-level undo' '
+ pristine_detach initial &&
+ test_must_fail git cherry-pick base..anotherpick &&
+ test_path_is_dir .git/sequencer &&
+ git reset --hard &&
+ test_path_is_missing .git/sequencer &&
+ test_path_is_dir .git/sequencer-old &&
+ git reset --hard &&
+ test_path_is_missing .git/sequencer-old
+'
+
+test_done
test "$mergeinfo" = "/branches/foo:1-10"
'
+test_expect_success 'change svn:mergeinfo multiline' '
+ touch baz &&
+ git add baz &&
+ git commit -m "baz" &&
+ git svn dcommit --mergeinfo="/branches/bar:1-10 /branches/other:3-5,8,10-11"
+'
+
+test_expect_success 'verify svn:mergeinfo multiline' '
+ mergeinfo=$(svn_cmd propget svn:mergeinfo "$svnrepo"/trunk)
+ test "$mergeinfo" = "/branches/bar:1-10
+/branches/other:3-5,8,10-11"
+'
+
test_done
--- /dev/null
+#!/bin/sh
+#
+# Copyright (c) 2011 Ray Chen
+#
+
+test_description='git svn test (option --preserve-empty-dirs)
+
+This test uses git to clone a Subversion repository that contains empty
+directories, and checks that corresponding directories are created in the
+local Git repository with placeholder files.'
+
+. ./lib-git-svn.sh
+
+say 'define NO_SVN_TESTS to skip git svn tests'
+GIT_REPO=git-svn-repo
+
+test_expect_success 'initialize source svn repo containing empty dirs' '
+ svn_cmd mkdir -m x "$svnrepo"/trunk &&
+ svn_cmd co "$svnrepo"/trunk "$SVN_TREE" &&
+ (
+ cd "$SVN_TREE" &&
+ mkdir -p 1 2 3/a 3/b 4 5 6 &&
+ echo "First non-empty file" > 2/file1.txt &&
+ echo "Second non-empty file" > 2/file2.txt &&
+ echo "Third non-empty file" > 3/a/file1.txt &&
+ echo "Fourth non-empty file" > 3/b/file1.txt &&
+ svn_cmd add 1 2 3 4 5 6 &&
+ svn_cmd commit -m "initial commit" &&
+
+ mkdir 4/a &&
+ svn_cmd add 4/a &&
+ svn_cmd commit -m "nested empty directory" &&
+ mkdir 4/a/b &&
+ svn_cmd add 4/a/b &&
+ svn_cmd commit -m "deeply nested empty directory" &&
+ mkdir 4/a/b/c &&
+ svn_cmd add 4/a/b/c &&
+ svn_cmd commit -m "really deeply nested empty directory" &&
+ echo "Kill the placeholder file" > 4/a/b/c/foo &&
+ svn_cmd add 4/a/b/c/foo &&
+ svn_cmd commit -m "Regular file to remove placeholder" &&
+
+ svn_cmd del 2/file2.txt &&
+ svn_cmd del 3/b &&
+ svn_cmd commit -m "delete non-last entry in directory" &&
+
+ svn_cmd del 2/file1.txt &&
+ svn_cmd del 3/a &&
+ svn_cmd commit -m "delete last entry in directory" &&
+
+ echo "Conflict file" > 5/.placeholder &&
+ mkdir 6/.placeholder &&
+ svn_cmd add 5/.placeholder 6/.placeholder &&
+ svn_cmd commit -m "Placeholder Namespace conflict"
+ ) &&
+ rm -rf "$SVN_TREE"
+'
+
+test_expect_success 'clone svn repo with --preserve-empty-dirs' '
+ git svn clone "$svnrepo"/trunk --preserve-empty-dirs "$GIT_REPO"
+'
+
+# "$GIT_REPO"/1 should only contain the placeholder file.
+test_expect_success 'directory empty from inception' '
+ test -f "$GIT_REPO"/1/.gitignore &&
+ test $(find "$GIT_REPO"/1 -type f | wc -l) = "1"
+'
+
+# "$GIT_REPO"/2 and "$GIT_REPO"/3 should only contain the placeholder file.
+test_expect_success 'directory empty from subsequent svn commit' '
+ test -f "$GIT_REPO"/2/.gitignore &&
+ test $(find "$GIT_REPO"/2 -type f | wc -l) = "1" &&
+ test -f "$GIT_REPO"/3/.gitignore &&
+ test $(find "$GIT_REPO"/3 -type f | wc -l) = "1"
+'
+
+# No placeholder files should exist in "$GIT_REPO"/4, even though one was
+# generated for every sub-directory at some point in the repo's history.
+test_expect_success 'add entry to previously empty directory' '
+ test $(find "$GIT_REPO"/4 -type f | wc -l) = "1" &&
+ test -f "$GIT_REPO"/4/a/b/c/foo
+'
+
+# The HEAD~2 commit should not have introduced .gitignore placeholder files.
+test_expect_success 'remove non-last entry from directory' '
+ (
+ cd "$GIT_REPO" &&
+ git checkout HEAD~2
+ ) &&
+ test ! -f "$GIT_REPO"/2/.gitignore &&
+ test ! -f "$GIT_REPO"/3/.gitignore
+'
+
+# After re-cloning the repository with --placeholder-file specified, there
+# should be 5 files named ".placeholder" in the local Git repo.
+test_expect_success 'clone svn repo with --placeholder-file specified' '
+ rm -rf "$GIT_REPO" &&
+ git svn clone "$svnrepo"/trunk --preserve-empty-dirs \
+ --placeholder-file=.placeholder "$GIT_REPO" &&
+ find "$GIT_REPO" -type f -name ".placeholder" &&
+ test $(find "$GIT_REPO" -type f -name ".placeholder" | wc -l) = "5"
+'
+
+# "$GIT_REPO"/5/.placeholder should be a file, and non-empty.
+test_expect_success 'placeholder namespace conflict with file' '
+ test -s "$GIT_REPO"/5/.placeholder
+'
+
+# "$GIT_REPO"/6/.placeholder should be a directory, and the "$GIT_REPO"/6 tree
+# should only contain one file: the placeholder.
+test_expect_success 'placeholder namespace conflict with directory' '
+ test -d "$GIT_REPO"/6/.placeholder &&
+ test -f "$GIT_REPO"/6/.placeholder/.placeholder &&
+ test $(find "$GIT_REPO"/6 -type f | wc -l) = "1"
+'
+
+# Prepare a second set of svn commits to test persistence during rebase.
+test_expect_success 'second set of svn commits and rebase' '
+ svn_cmd co "$svnrepo"/trunk "$SVN_TREE" &&
+ (
+ cd "$SVN_TREE" &&
+ mkdir -p 7 &&
+ echo "This should remove placeholder" > 1/file1.txt &&
+ echo "This should not remove placeholder" > 5/file1.txt &&
+ svn_cmd add 7 1/file1.txt 5/file1.txt &&
+ svn_cmd commit -m "subsequent svn commit for persistence tests"
+ ) &&
+ rm -rf "$SVN_TREE" &&
+ (
+ cd "$GIT_REPO" &&
+ git svn rebase
+ )
+'
+
+# Check that the --preserve-empty-dirs and --placeholder-file settings
+# persist across multiple invocations.
+test_expect_success 'flag persistence during subsequent rebase' '
+ test -f "$GIT_REPO"/7/.placeholder &&
+ test $(find "$GIT_REPO"/7 -type f | wc -l) = "1"
+'
+
+# Check that placeholder files are properly removed when unnecessary,
+# even across multiple invocations.
+test_expect_success 'placeholder list persistence during subsequent rebase' '
+ test -f "$GIT_REPO"/1/file1.txt &&
+ test $(find "$GIT_REPO"/1 -type f | wc -l) = "1" &&
+
+ test -f "$GIT_REPO"/5/file1.txt &&
+ test -f "$GIT_REPO"/5/.placeholder &&
+ test $(find "$GIT_REPO"/5 -type f | wc -l) = "2"
+'
+
+test_done
--- /dev/null
+#!/bin/sh
+#
+# Portions copyright (c) 2007, 2009 Sam Vilain
+# Portions copyright (c) 2011 Bryan Jacobs
+#
+
+test_description='git-svn svn mergeinfo propagation'
+
+. ./lib-git-svn.sh
+
+test_expect_success 'load svn dump' "
+ svnadmin load -q '$rawsvnrepo' \
+ < '$TEST_DIRECTORY/t9161/branches.dump' &&
+ git svn init --minimize-url -R svnmerge \
+ -T trunk -b branches '$svnrepo' &&
+ git svn fetch --all
+ "
+
+test_expect_success 'propagate merge information' '
+ git config svn.pushmergeinfo yes &&
+ git checkout svnb1 &&
+ git merge --no-ff svnb2 &&
+ git svn dcommit
+ '
+
+test_expect_success 'check svn:mergeinfo' '
+ mergeinfo=$(svn_cmd propget svn:mergeinfo "$svnrepo"/branches/svnb1)
+ test "$mergeinfo" = "/branches/svnb2:3,8"
+ '
+
+test_expect_success 'merge another branch' '
+ git merge --no-ff svnb3 &&
+ git svn dcommit
+ '
+
+test_expect_success 'check primary parent mergeinfo respected' '
+ mergeinfo=$(svn_cmd propget svn:mergeinfo "$svnrepo"/branches/svnb1)
+ test "$mergeinfo" = "/branches/svnb2:3,8
+/branches/svnb3:4,9"
+ '
+
+test_expect_success 'merge existing merge' '
+ git merge --no-ff svnb4 &&
+ git svn dcommit
+ '
+
+test_expect_success "check both parents' mergeinfo respected" '
+ mergeinfo=$(svn_cmd propget svn:mergeinfo "$svnrepo"/branches/svnb1)
+ test "$mergeinfo" = "/branches/svnb2:3,8
+/branches/svnb3:4,9
+/branches/svnb4:5-6,10-12
+/branches/svnb5:6,11"
+ '
+
+test_expect_success 'make further commits to branch' '
+ git checkout svnb2 &&
+ touch newb2file &&
+ git add newb2file &&
+ git commit -m "later b2 commit" &&
+ touch newb2file-2 &&
+ git add newb2file-2 &&
+ git commit -m "later b2 commit 2" &&
+ git svn dcommit
+ '
+
+test_expect_success 'second forward merge' '
+ git checkout svnb1 &&
+ git merge --no-ff svnb2 &&
+ git svn dcommit
+ '
+
+test_expect_success 'check new mergeinfo added' '
+ mergeinfo=$(svn_cmd propget svn:mergeinfo "$svnrepo"/branches/svnb1)
+ test "$mergeinfo" = "/branches/svnb2:3,8,16-17
+/branches/svnb3:4,9
+/branches/svnb4:5-6,10-12
+/branches/svnb5:6,11"
+ '
+
+test_expect_success 'reintegration merge' '
+ git checkout svnb4 &&
+ git merge --no-ff svnb1 &&
+ git svn dcommit
+ '
+
+test_expect_success 'check reintegration mergeinfo' '
+ mergeinfo=$(svn_cmd propget svn:mergeinfo "$svnrepo"/branches/svnb4)
+ test "$mergeinfo" = "/branches/svnb1:2-4,7-9,13-18
+/branches/svnb2:3,8,16-17
+/branches/svnb3:4,9
+/branches/svnb4:5-6,10-12
+/branches/svnb5:6,11"
+ '
+
+test_expect_success 'dcommit a merge at the top of a stack' '
+ git checkout svnb1 &&
+ touch anotherfile &&
+ git add anotherfile &&
+ git commit -m "a commit" &&
+ git merge svnb4 &&
+ git svn dcommit
+ '
+
+test_done
--- /dev/null
+SVN-fs-dump-format-version: 2
+
+UUID: 1ef08553-f2d1-45df-b38c-19af6b7c926d
+
+Revision-number: 0
+Prop-content-length: 56
+Content-length: 56
+
+K 8
+svn:date
+V 27
+2011-09-02T16:08:02.941384Z
+PROPS-END
+
+Revision-number: 1
+Prop-content-length: 114
+Content-length: 114
+
+K 7
+svn:log
+V 12
+Base commit
+
+K 10
+svn:author
+V 7
+bjacobs
+K 8
+svn:date
+V 27
+2011-09-02T16:08:27.205062Z
+PROPS-END
+
+Node-path: branches
+Node-kind: dir
+Node-action: add
+Prop-content-length: 10
+Content-length: 10
+
+PROPS-END
+
+
+Node-path: trunk
+Node-kind: dir
+Node-action: add
+Prop-content-length: 10
+Content-length: 10
+
+PROPS-END
+
+
+Revision-number: 2
+Prop-content-length: 121
+Content-length: 121
+
+K 7
+svn:log
+V 19
+Create branch svnb1
+K 10
+svn:author
+V 7
+bjacobs
+K 8
+svn:date
+V 27
+2011-09-02T16:09:43.628137Z
+PROPS-END
+
+Node-path: branches/svnb1
+Node-kind: dir
+Node-action: add
+Node-copyfrom-rev: 1
+Node-copyfrom-path: trunk
+
+
+Revision-number: 3
+Prop-content-length: 121
+Content-length: 121
+
+K 7
+svn:log
+V 19
+Create branch svnb2
+K 10
+svn:author
+V 7
+bjacobs
+K 8
+svn:date
+V 27
+2011-09-02T16:09:46.339930Z
+PROPS-END
+
+Node-path: branches/svnb2
+Node-kind: dir
+Node-action: add
+Node-copyfrom-rev: 1
+Node-copyfrom-path: trunk
+
+
+Revision-number: 4
+Prop-content-length: 121
+Content-length: 121
+
+K 7
+svn:log
+V 19
+Create branch svnb3
+K 10
+svn:author
+V 7
+bjacobs
+K 8
+svn:date
+V 27
+2011-09-02T16:09:49.394515Z
+PROPS-END
+
+Node-path: branches/svnb3
+Node-kind: dir
+Node-action: add
+Node-copyfrom-rev: 1
+Node-copyfrom-path: trunk
+
+
+Revision-number: 5
+Prop-content-length: 121
+Content-length: 121
+
+K 7
+svn:log
+V 19
+Create branch svnb4
+K 10
+svn:author
+V 7
+bjacobs
+K 8
+svn:date
+V 27
+2011-09-02T16:09:54.114607Z
+PROPS-END
+
+Node-path: branches/svnb4
+Node-kind: dir
+Node-action: add
+Node-copyfrom-rev: 1
+Node-copyfrom-path: trunk
+
+
+Revision-number: 6
+Prop-content-length: 121
+Content-length: 121
+
+K 7
+svn:log
+V 19
+Create branch svnb5
+K 10
+svn:author
+V 7
+bjacobs
+K 8
+svn:date
+V 27
+2011-09-02T16:09:58.602623Z
+PROPS-END
+
+Node-path: branches/svnb5
+Node-kind: dir
+Node-action: add
+Node-copyfrom-rev: 1
+Node-copyfrom-path: trunk
+
+
+Revision-number: 7
+Prop-content-length: 110
+Content-length: 110
+
+K 7
+svn:log
+V 9
+b1 commit
+K 10
+svn:author
+V 7
+bjacobs
+K 8
+svn:date
+V 27
+2011-09-02T16:10:20.292369Z
+PROPS-END
+
+Node-path: branches/svnb1/b1file
+Node-kind: file
+Node-action: add
+Prop-content-length: 10
+Text-content-length: 0
+Text-content-md5: d41d8cd98f00b204e9800998ecf8427e
+Text-content-sha1: da39a3ee5e6b4b0d3255bfef95601890afd80709
+Content-length: 10
+
+PROPS-END
+
+
+Revision-number: 8
+Prop-content-length: 110
+Content-length: 110
+
+K 7
+svn:log
+V 9
+b2 commit
+K 10
+svn:author
+V 7
+bjacobs
+K 8
+svn:date
+V 27
+2011-09-02T16:10:38.429199Z
+PROPS-END
+
+Node-path: branches/svnb2/b2file
+Node-kind: file
+Node-action: add
+Prop-content-length: 10
+Text-content-length: 0
+Text-content-md5: d41d8cd98f00b204e9800998ecf8427e
+Text-content-sha1: da39a3ee5e6b4b0d3255bfef95601890afd80709
+Content-length: 10
+
+PROPS-END
+
+
+Revision-number: 9
+Prop-content-length: 110
+Content-length: 110
+
+K 7
+svn:log
+V 9
+b3 commit
+K 10
+svn:author
+V 7
+bjacobs
+K 8
+svn:date
+V 27
+2011-09-02T16:10:52.843023Z
+PROPS-END
+
+Node-path: branches/svnb3/b3file
+Node-kind: file
+Node-action: add
+Prop-content-length: 10
+Text-content-length: 0
+Text-content-md5: d41d8cd98f00b204e9800998ecf8427e
+Text-content-sha1: da39a3ee5e6b4b0d3255bfef95601890afd80709
+Content-length: 10
+
+PROPS-END
+
+
+Revision-number: 10
+Prop-content-length: 110
+Content-length: 110
+
+K 7
+svn:log
+V 9
+b4 commit
+K 10
+svn:author
+V 7
+bjacobs
+K 8
+svn:date
+V 27
+2011-09-02T16:11:17.489870Z
+PROPS-END
+
+Node-path: branches/svnb4/b4file
+Node-kind: file
+Node-action: add
+Prop-content-length: 10
+Text-content-length: 0
+Text-content-md5: d41d8cd98f00b204e9800998ecf8427e
+Text-content-sha1: da39a3ee5e6b4b0d3255bfef95601890afd80709
+Content-length: 10
+
+PROPS-END
+
+
+Revision-number: 11
+Prop-content-length: 110
+Content-length: 110
+
+K 7
+svn:log
+V 9
+b5 commit
+K 10
+svn:author
+V 7
+bjacobs
+K 8
+svn:date
+V 27
+2011-09-02T16:11:32.277404Z
+PROPS-END
+
+Node-path: branches/svnb5/b5file
+Node-kind: file
+Node-action: add
+Prop-content-length: 10
+Text-content-length: 0
+Text-content-md5: d41d8cd98f00b204e9800998ecf8427e
+Text-content-sha1: da39a3ee5e6b4b0d3255bfef95601890afd80709
+Content-length: 10
+
+PROPS-END
+
+
+Revision-number: 12
+Prop-content-length: 192
+Content-length: 192
+
+K 7
+svn:log
+V 90
+Merge remote-tracking branch 'svnb5' into HEAD
+
+* svnb5:
+ b5 commit
+ Create branch svnb5
+K 10
+svn:author
+V 7
+bjacobs
+K 8
+svn:date
+V 27
+2011-09-02T16:11:54.274722Z
+PROPS-END
+
+Node-path: branches/svnb4
+Node-kind: dir
+Node-action: change
+Prop-content-length: 56
+Content-length: 56
+
+K 13
+svn:mergeinfo
+V 21
+/branches/svnb5:6,11
+
+PROPS-END
+
+
+Node-path: branches/svnb4/b5file
+Node-kind: file
+Node-action: add
+Prop-content-length: 10
+Text-content-length: 0
+Text-content-md5: d41d8cd98f00b204e9800998ecf8427e
+Text-content-sha1: da39a3ee5e6b4b0d3255bfef95601890afd80709
+Content-length: 10
+
+PROPS-END
+
+
cd branch1 &&
echo file1 >file1 &&
echo file2 >file2 &&
- p4 add file* &&
+ p4 add file1 file2 &&
p4 submit -d "branch1" &&
p4 integrate //depot/branch1/... //depot/branch2/... &&
p4 submit -d "branch2" &&
# Finally, make an update to branch1 on P4 side to check if it is imported
# correctly by git-p4.
test_expect_success 'git-p4 clone simple branches' '
- git init "$git" &&
+ test_when_finished cleanup_git &&
+ test_create_repo "$git" &&
cd "$git" &&
git config git-p4.branchList branch1:branch2 &&
git config --add git-p4.branchList branch1:branch3 &&
- cd "$TRASH_DIRECTORY" &&
- "$GITP4" clone --dest="$git" --detect-branches //depot@all &&
- cd "$git" &&
+ "$GITP4" clone --dest=. --detect-branches //depot@all &&
git log --all --graph --decorate --stat &&
git reset --hard p4/depot/branch1 &&
test -f file1 &&
git reset --hard p4/depot/branch2 &&
test -f file1 &&
test -f file2 &&
- test \! -z file3 &&
+ test ! -f file3 &&
! grep -q update file2 &&
git reset --hard p4/depot/branch3 &&
test -f file1 &&
cd "$cli" &&
cd branch1 &&
p4 edit file2 &&
- echo file2_ >> file2 &&
- p4 submit -d "update file2 in branch3" &&
+ echo file2_ >>file2 &&
+ p4 submit -d "update file2 in branch1" &&
cd "$git" &&
git reset --hard p4/depot/branch1 &&
"$GITP4" rebase &&
- grep -q file2_ file2 &&
- cd "$TRASH_DIRECTORY" &&
- rm -rf "$git" && mkdir "$git"
+ grep -q file2_ file2
'
test_expect_success 'shutdown' '
do
make_valgrind_symlink $file
done
+ # special-case the mergetools loadables
+ make_symlink "$GIT_BUILD_DIR"/mergetools "$GIT_VALGRIND/bin/mergetools"
OLDIFS=$IFS
IFS=:
for path in $PATH
+++ /dev/null
-#!/bin/sh
-#
-# An example hook script that is called after a successful
-# commit is made.
-#
-# To enable this hook, rename this file to "post-commit".
-
-: Nothing
+++ /dev/null
-#!/bin/sh
-#
-# An example hook script for the "post-receive" event.
-#
-# The "post-receive" script is run after receive-pack has accepted a pack
-# and the repository has been updated. It is passed arguments in through
-# stdin in the form
-# <oldrev> <newrev> <refname>
-# For example:
-# aa453216d1b3e49e7f6f98441fa56946ddcd6a20 68f7abf4e6f922807889f52bc043ecd31b79f814 refs/heads/master
-#
-# see contrib/hooks/ for a sample, or uncomment the next line and
-# rename the file to "post-receive".
-
-#. /usr/share/doc/git-core/contrib/hooks/post-receive-email
#include "refs.h"
#include "branch.h"
#include "url.h"
+#include "submodule.h"
/* rsync support */
int nr_heads, struct ref **to_fetch)
{
struct bundle_transport_data *data = transport->data;
- return unbundle(&data->header, data->fd);
+ return unbundle(&data->header, data->fd,
+ transport->progress ? BUNDLE_VERBOSE : 0);
}
static int close_bundle(struct transport *transport)
static int connect_setup(struct transport *transport, int for_push, int verbose)
{
struct git_transport_data *data = transport->data;
- struct strbuf sb = STRBUF_INIT;
if (data->conn)
return 0;
- strbuf_addstr(&sb, for_push ? data->options.receivepack :
- data->options.uploadpack);
- if (for_push && transport->verbose < 0)
- strbuf_addstr(&sb, " --quiet");
- data->conn = git_connect(data->fd, transport->url, sb.buf,
+ data->conn = git_connect(data->fd, transport->url,
+ for_push ? data->options.receivepack :
+ data->options.uploadpack,
verbose ? CONNECT_VERBOSE : 0);
- strbuf_release(&sb);
return 0;
}
flags & TRANSPORT_PUSH_MIRROR,
flags & TRANSPORT_PUSH_FORCE);
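+ /*
+ * When the submodule check was requested (e.g. by "git push
+ * --recurse-submodules=check"), refuse to push commits that record
+ * submodule commits which have not been pushed out themselves.
+ */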
+ if ((flags & TRANSPORT_RECURSE_SUBMODULES_CHECK) && !is_bare_repository()) {
+ struct ref *ref = remote_refs;
+ for (; ref; ref = ref->next)
+ if (!is_null_sha1(ref->new_sha1) &&
+ check_submodule_needs_pushing(ref->new_sha1, transport->remote->name))
+ die("There are unpushed submodules, aborting.");
+ }
+
push_ret = transport->push_refs(transport, remote_refs, flags);
err = push_had_errors(remote_refs);
ret = push_ret | err;
#define TRANSPORT_PUSH_MIRROR 8
#define TRANSPORT_PUSH_PORCELAIN 16
#define TRANSPORT_PUSH_SET_UPSTREAM 32
+#define TRANSPORT_RECURSE_SUBMODULES_CHECK 64
#define TRANSPORT_SUMMARY_WIDTH (2 * DEFAULT_ABBREV + 3)
}
}
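+/*
+ * Decide whether entries at the current traversal point are still worth
+ * passing to the callback.  With no pathspec, or once an earlier check
+ * returned 2 ("this and everything below it matches"), everything stays
+ * interesting; a negative value means nothing later can match and is
+ * propagated; otherwise the entry is re-checked against the pathspec.
+ */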
+static inline int prune_traversal(struct name_entry *e,
+ struct traverse_info *info,
+ struct strbuf *base,
+ int still_interesting)
+{
+ if (!info->pathspec || still_interesting == 2)
+ return 2;
+ if (still_interesting < 0)
+ return still_interesting;
+ return tree_entry_interesting(e, base, 0, info->pathspec);
+}
+
int traverse_trees(int n, struct tree_desc *t, struct traverse_info *info)
{
int ret = 0;
struct name_entry *entry = xmalloc(n*sizeof(*entry));
int i;
struct tree_desc_x *tx = xcalloc(n, sizeof(*tx));
+ struct strbuf base = STRBUF_INIT;
+ int interesting = 1;
for (i = 0; i < n; i++)
tx[i].d = t[i];
+ if (info->prev) {
+ strbuf_grow(&base, info->pathlen);
+ make_traverse_path(base.buf, info->prev, &info->name);
+ base.buf[info->pathlen-1] = '/';
+ strbuf_setlen(&base, info->pathlen);
+ }
for (;;) {
unsigned long mask, dirmask;
const char *first = NULL;
mask |= 1ul << i;
if (S_ISDIR(entry[i].mode))
dirmask |= 1ul << i;
+ e = &entry[i];
}
if (!mask)
break;
- ret = info->fn(n, mask, dirmask, entry, info);
- if (ret < 0) {
- error = ret;
- if (!info->show_all_errors)
- break;
+ interesting = prune_traversal(e, info, &base, interesting);
+ if (interesting < 0)
+ break;
+ if (interesting) {
+ ret = info->fn(n, mask, dirmask, entry, info);
+ if (ret < 0) {
+ error = ret;
+ if (!info->show_all_errors)
+ break;
+ }
+ mask &= ret;
}
- mask &= ret;
ret = 0;
for (i = 0; i < n; i++)
if (mask & (1ul << i))
for (i = 0; i < n; i++)
free_extended_entry(tx + i);
free(tx);
+ strbuf_release(&base);
return error;
}
struct traverse_info *prev;
struct name_entry name;
int pathlen;
+ struct pathspec *pathspec;
unsigned long conflicts;
traverse_callback_t fn;
newinfo = *info;
newinfo.prev = info;
+ newinfo.pathspec = info->pathspec;
newinfo.name = *p;
newinfo.pathlen += tree_entry_len(p->path, p->sha1) + 1;
newinfo.conflicts |= df_conflicts;
info.fn = unpack_callback;
info.data = o;
info.show_all_errors = o->show_all_errors;
+ info.pathspec = o->pathspec;
if (o->prefix) {
/*
const char *prefix;
int cache_bottom;
struct dir_struct *dir;
+ struct pathspec *pathspec;
merge_fn_t fn;
const char *msgs[NB_UNPACK_TREES_ERROR_TYPES];
/*
commit->buffer = NULL;
}
-static void show_object(struct object *obj, const struct name_path *path, const char *component)
+static void show_object(struct object *obj,
+ const struct name_path *path, const char *component,
+ void *cb_data)
{
- /* An object with name "foo\n0000000..." can be used to
- * confuse downstream git-pack-objects very badly.
- */
- const char *name = path_name(path, component);
- const char *ep = strchr(name, '\n');
- if (ep) {
- fprintf(pack_pipe, "%s %.*s\n", sha1_to_hex(obj->sha1),
- (int) (ep - name),
- name);
- }
- else
- fprintf(pack_pipe, "%s %s\n",
- sha1_to_hex(obj->sha1), name);
- free((char *)name);
+ show_object_with_name(pack_pipe, obj, path, component);
}
static void show_edge(struct commit *commit)
char const *line;
long size;
long idx;
+ long len1, len2;
} xdlclass_t;
typedef struct s_xdlclassifier {
long hsize;
xdlclass_t **rchash;
chastore_t ncha;
+ xdlclass_t **rcrecs;
+ long alloc;
long count;
long flags;
} xdlclassifier_t;
static int xdl_init_classifier(xdlclassifier_t *cf, long size, long flags);
static void xdl_free_classifier(xdlclassifier_t *cf);
-static int xdl_classify_record(xdlclassifier_t *cf, xrecord_t **rhash, unsigned int hbits,
- xrecord_t *rec);
-static int xdl_prepare_ctx(mmfile_t *mf, long narec, xpparam_t const *xpp,
+static int xdl_classify_record(unsigned int pass, xdlclassifier_t *cf, xrecord_t **rhash,
+ unsigned int hbits, xrecord_t *rec);
+static int xdl_prepare_ctx(unsigned int pass, mmfile_t *mf, long narec, xpparam_t const *xpp,
xdlclassifier_t *cf, xdfile_t *xdf);
static void xdl_free_ctx(xdfile_t *xdf);
static int xdl_clean_mmatch(char const *dis, long i, long s, long e);
-static int xdl_cleanup_records(xdfile_t *xdf1, xdfile_t *xdf2);
+static int xdl_cleanup_records(xdlclassifier_t *cf, xdfile_t *xdf1, xdfile_t *xdf2);
static int xdl_trim_ends(xdfile_t *xdf1, xdfile_t *xdf2);
-static int xdl_optimize_ctxs(xdfile_t *xdf1, xdfile_t *xdf2);
+static int xdl_optimize_ctxs(xdlclassifier_t *cf, xdfile_t *xdf1, xdfile_t *xdf2);
}
memset(cf->rchash, 0, cf->hsize * sizeof(xdlclass_t *));
+ cf->alloc = size;
+ if (!(cf->rcrecs = (xdlclass_t **) xdl_malloc(cf->alloc * sizeof(xdlclass_t *)))) {
+
+ xdl_free(cf->rchash);
+ xdl_cha_free(&cf->ncha);
+ return -1;
+ }
+
cf->count = 0;
return 0;
static void xdl_free_classifier(xdlclassifier_t *cf) {
+ xdl_free(cf->rcrecs);
xdl_free(cf->rchash);
xdl_cha_free(&cf->ncha);
}
-static int xdl_classify_record(xdlclassifier_t *cf, xrecord_t **rhash, unsigned int hbits,
- xrecord_t *rec) {
+static int xdl_classify_record(unsigned int pass, xdlclassifier_t *cf, xrecord_t **rhash,
+ unsigned int hbits, xrecord_t *rec) {
long hi;
char const *line;
xdlclass_t *rcrec;
+ xdlclass_t **rcrecs;
line = rec->ptr;
hi = (long) XDL_HASHLONG(rec->ha, cf->hbits);
return -1;
}
rcrec->idx = cf->count++;
+ if (cf->count > cf->alloc) {
+ cf->alloc *= 2;
+ if (!(rcrecs = (xdlclass_t **) xdl_realloc(cf->rcrecs, cf->alloc * sizeof(xdlclass_t *)))) {
+
+ return -1;
+ }
+ cf->rcrecs = rcrecs;
+ }
+ cf->rcrecs[rcrec->idx] = rcrec;
rcrec->line = line;
rcrec->size = rec->size;
rcrec->ha = rec->ha;
+ rcrec->len1 = rcrec->len2 = 0;
rcrec->next = cf->rchash[hi];
cf->rchash[hi] = rcrec;
}
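+ /* count how many lines of each file fall into this class (used by xdl_cleanup_records) */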
+ (pass == 1) ? rcrec->len1++ : rcrec->len2++;
+
rec->ha = (unsigned long) rcrec->idx;
hi = (long) XDL_HASHLONG(rec->ha, hbits);
}
-static int xdl_prepare_ctx(mmfile_t *mf, long narec, xpparam_t const *xpp,
+static int xdl_prepare_ctx(unsigned int pass, mmfile_t *mf, long narec, xpparam_t const *xpp,
xdlclassifier_t *cf, xdfile_t *xdf) {
unsigned int hbits;
long nrec, hsize, bsize;
recs[nrec++] = crec;
if (!(xpp->flags & XDF_HISTOGRAM_DIFF) &&
- xdl_classify_record(cf, rhash, hbits, crec) < 0)
+ xdl_classify_record(pass, cf, rhash, hbits, crec) < 0)
goto abort;
}
}
long enl1, enl2, sample;
xdlclassifier_t cf;
+ memset(&cf, 0, sizeof(cf));
+
/*
* For histogram diff, we can afford a smaller sample size and
* thus a poorer estimate of the number of lines, as the hash
return -1;
}
- if (xdl_prepare_ctx(mf1, enl1, xpp, &cf, &xe->xdf1) < 0) {
+ if (xdl_prepare_ctx(1, mf1, enl1, xpp, &cf, &xe->xdf1) < 0) {
xdl_free_classifier(&cf);
return -1;
}
- if (xdl_prepare_ctx(mf2, enl2, xpp, &cf, &xe->xdf2) < 0) {
+ if (xdl_prepare_ctx(2, mf2, enl2, xpp, &cf, &xe->xdf2) < 0) {
xdl_free_ctx(&xe->xdf1);
xdl_free_classifier(&cf);
return -1;
}
- if (!(xpp->flags & XDF_HISTOGRAM_DIFF))
- xdl_free_classifier(&cf);
-
if (!(xpp->flags & XDF_PATIENCE_DIFF) &&
!(xpp->flags & XDF_HISTOGRAM_DIFF) &&
- xdl_optimize_ctxs(&xe->xdf1, &xe->xdf2) < 0) {
+ xdl_optimize_ctxs(&cf, &xe->xdf1, &xe->xdf2) < 0) {
xdl_free_ctx(&xe->xdf2);
xdl_free_ctx(&xe->xdf1);
return -1;
}
+ if (!(xpp->flags & XDF_HISTOGRAM_DIFF))
+ xdl_free_classifier(&cf);
+
return 0;
}
* matches on the other file. Also, lines that have multiple matches
* might be potentially discarded if they appear in a run of discardable.
*/
-static int xdl_cleanup_records(xdfile_t *xdf1, xdfile_t *xdf2) {
- long i, nm, rhi, nreff, mlim;
- unsigned long hav;
+static int xdl_cleanup_records(xdlclassifier_t *cf, xdfile_t *xdf1, xdfile_t *xdf2) {
+ long i, nm, nreff;
xrecord_t **recs;
- xrecord_t *rec;
+ xdlclass_t *rcrec;
char *dis, *dis1, *dis2;
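+ /*
+ * The classifier already recorded, for every line class, how many
+ * lines of each file hash into it (len1/len2), so matches on the
+ * other file can be looked up directly instead of walking the
+ * hash chains as before.
+ */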
if (!(dis = (char *) xdl_malloc(xdf1->nrec + xdf2->nrec + 2))) {
dis1 = dis;
dis2 = dis1 + xdf1->nrec + 1;
- if ((mlim = xdl_bogosqrt(xdf1->nrec)) > XDL_MAX_EQLIMIT)
- mlim = XDL_MAX_EQLIMIT;
for (i = xdf1->dstart, recs = &xdf1->recs[xdf1->dstart]; i <= xdf1->dend; i++, recs++) {
- hav = (*recs)->ha;
- rhi = (long) XDL_HASHLONG(hav, xdf2->hbits);
- for (nm = 0, rec = xdf2->rhash[rhi]; rec; rec = rec->next)
- if (rec->ha == hav && ++nm == mlim)
- break;
- dis1[i] = (nm == 0) ? 0: (nm >= mlim) ? 2: 1;
+ rcrec = cf->rcrecs[(*recs)->ha];
+ nm = rcrec ? rcrec->len2 : 0;
+ dis1[i] = (nm == 0) ? 0: 1;
}
- if ((mlim = xdl_bogosqrt(xdf2->nrec)) > XDL_MAX_EQLIMIT)
- mlim = XDL_MAX_EQLIMIT;
for (i = xdf2->dstart, recs = &xdf2->recs[xdf2->dstart]; i <= xdf2->dend; i++, recs++) {
- hav = (*recs)->ha;
- rhi = (long) XDL_HASHLONG(hav, xdf1->hbits);
- for (nm = 0, rec = xdf1->rhash[rhi]; rec; rec = rec->next)
- if (rec->ha == hav && ++nm == mlim)
- break;
- dis2[i] = (nm == 0) ? 0: (nm >= mlim) ? 2: 1;
+ rcrec = cf->rcrecs[(*recs)->ha];
+ nm = rcrec ? rcrec->len1 : 0;
+ dis2[i] = (nm == 0) ? 0: 1;
}
for (nreff = 0, i = xdf1->dstart, recs = &xdf1->recs[xdf1->dstart];
}
-static int xdl_optimize_ctxs(xdfile_t *xdf1, xdfile_t *xdf2) {
+static int xdl_optimize_ctxs(xdlclassifier_t *cf, xdfile_t *xdf1, xdfile_t *xdf2) {
if (xdl_trim_ends(xdf1, xdf2) < 0 ||
- xdl_cleanup_records(xdf1, xdf2) < 0) {
+ xdl_cleanup_records(cf, xdf1, xdf2) < 0) {
return -1;
}