/git-init-db
/git-interpret-trailers
/git-instaweb
+/git-legacy-stash
/git-log
/git-ls-files
/git-ls-remote
or commands:
Literal examples (e.g. use of command-line options, command names,
- branch names, configuration and environment variables) must be
- typeset in monospace (i.e. wrapped with backticks):
+ branch names, URLs, pathnames (files and directories), configuration and
+ environment variables) must be typeset in monospace (i.e. wrapped with
+ backticks):
`--pretty=oneline`
`git rev-list`
`remote.pushDefault`
+ `http://git.example.com`
+ `.git/config`
`GIT_DIR`
`HEAD`
$(wildcard git-*.txt))
MAN1_TXT += git.txt
MAN1_TXT += gitk.txt
-MAN1_TXT += gitremote-helpers.txt
MAN1_TXT += gitweb.txt
MAN5_TXT += gitattributes.txt
MAN7_TXT += giteveryday.txt
MAN7_TXT += gitglossary.txt
MAN7_TXT += gitnamespaces.txt
+MAN7_TXT += gitremote-helpers.txt
MAN7_TXT += gitrevisions.txt
MAN7_TXT += gitsubmodules.txt
MAN7_TXT += gittutorial-2.txt
configuration (when available), which allows --list-cmds to honour
a repository specific setting of completion.commands, for example.
+ * "git mergetool" learned to offer Sublime Merge (smerge) as one of
+ its backends.
+
+ * A new hook "post-index-change" is called when the on-disk index
+ file changes, which can help e.g. a virtualized working tree
+ implementation.
+
+ * "git difftool" can now run outside a repository.
+
+ * "git checkout -m <other>" was about carrying the differences
+ between HEAD and the working-tree files forward while checking out
+ another branch, and ignored the differences between HEAD and the
+ index. The command has been taught to abort when the index and the
+ HEAD are different.
+
+ * A progress indicator has been added to the "index-pack" step, which
+ often makes users wait for completion during "git clone".
+
+ * "git submodule" learns "set-branch" subcommand that allows the
+ submodule.*.branch settings to be modified.
+
Performance, Internal Implementation, Development Support etc.
* "git multi-pack-index verify" did not scale well with the number of
packfiles, which is being improved.
+ * "git stash" has been rewritten in C.
+
+ * The "check-docs" Makefile target to support developers has been
+ updated.
+
+ * The tests have been updated not to rely on the abbreviated option
+ names the parse-options API offers, to protect us from an
+ abbreviated form of an option that used to be unique within the
+   command becoming non-unique when a new option that shares the same
+ prefix is added.
+
+ * The scripted version of "git rebase -i" wrote and rewrote the todo
+ list many times during a single step of its operation, and the
+ recent C-rewrite made a faithful conversion of the logic to C. The
+ implementation has been updated to carry necessary information
+ around in-core to avoid rewriting the same file over and over
+ unnecessarily.
+
+ * Test framework update to more robustly clean up leftover files and
+ processes after tests are done.
+
+ * Conversion from unsigned char[20] to struct object_id continues.
+
+ * While running "git diff" in a lazy clone, we can know upfront which
+   missing blobs we will need, instead of waiting for the on-demand
+   machinery to discover them one by one.  The code has been taught to
+   batch the requests for these promised blobs for better performance.
+
Fixes since v2.21
-----------------
* A corner-case object name ambiguity while the sequencer machinery
is working (e.g. "rebase -i -x") has been fixed.
- * "git format-patch" used overwrite an existing patch/cover-letter
- file. A new "--no-clobber" option stops it.
+ * "git format-patch" did not diagnose an error while opening the
+ output file for the cover-letter, which has been corrected.
(merge 2fe95f494c jc/format-patch-error-check later to maint).
* "git checkout -f <branch>" while the index has an unmerged path
* dumb-http walker has been updated to share more error recovery
strategy with the normal codepath.
+ * A buglet in the configuration parser has been fixed.
+ (merge 19e7fdaa58 nd/include-if-wildmatch later to maint).
+
+ * The documentation for "git read-tree --reset -u" has been updated.
+ (merge b5a0bd694c nd/read-tree-reset-doc later to maint).
+
+ * Code clean-up around a much-less-important-than-it-used-to-be
+   update_server_info() function.
+ (merge b3223761c8 jk/server-info-rabbit-hole later to maint).
+
+ * The message given when "git commit -a <paths>" errors out has been
+ updated.
+ (merge 5a1dbd48bc nd/commit-a-with-paths-msg-update later to maint).
+
+ * "git cherry-pick --options A..B", after giving control back to the
+   user to ask for help resolving a conflicted step, did not honor the
+ options it originally received, which has been corrected.
+
+ * Various glitches in "git gc" around reflog handling have been fixed.
+
+ * The code to read from the commit-graph file has been cleaned up
+   with more careful error checking before using data read from it.
+
+ * Performance fix around "git fetch" that grabs many refs.
+ (merge b764300912 jt/fetch-pack-wanted-refs-optim later to maint).
+
+ * Protocol v2 support in "git fetch-pack" of shallow clones has been
+ corrected.
+
+ * Performance fix around "git blame", especially in a linear history
+ (which is the norm we should optimize for).
+ (merge f892014943 dk/blame-keep-origin-blob later to maint).
+
+ * Performance fix for "rev-list --parents -- pathspec".
+ (merge 8320b1dbe7 jk/revision-rewritten-parents-in-prio-queue later to maint).
+
+ * Updating the display with progress messages has been cleaned up to
+ deal better with overlong messages.
+ (merge 545dc345eb sg/overlong-progress-fix later to maint).
+
+ * "git blame -- path" in a non-bare repository starts blaming from
+ the working tree, and the same command in a bare repository errors
+ out because there is no working tree by definition. The command
+ has been taught to instead start blaming from the commit at HEAD,
+ which is more useful.
+ (merge a544fb08f8 sg/blame-in-bare-start-at-head later to maint).
+
+ * An underallocation in the code to read the untracked cache
+ extension has been corrected.
+ (merge 3a7b45a623 js/untracked-cache-allocfix later to maint).
+
+ * The code is updated to check the result of memory allocation before
+ it is used in more places, by using xmalloc and/or xcalloc calls.
+ (merge 999b951b28 jk/xmalloc later to maint).
+
+ * The GETTEXT_POISON test option has been quite broken ever since it
+ was made runtime-tunable, which has been fixed.
+ (merge f88b9cb603 jc/gettext-test-fix later to maint).
+
+ * Test fix for APFS, which is incapable of storing paths in Latin-1.
+ (merge 3889149619 js/iso8895-test-on-apfs later to maint).
+
+ * "git submodule foreach <command> --quiet" did not pass the option
+ down correctly, which has been corrected.
+ (merge a282f5a906 nd/submodule-foreach-quiet later to maint).
+
+ * "git send-email" has been taught to use quoted-printable when the
+   payload contains a carriage-return.  The use of the mechanism is in
+   line with the design that originally added the codepath choosing QP
+ when the payload has overly long lines.
+ (merge 74d76a1701 bc/send-email-qp-cr later to maint).
+
+ * The recently added feature to add addresses that are on
+ anything-by: trailers in 'git send-email' was found to be way too
+   eager and considered nonsense strings as if they could be the
+   legitimate beginning of a *-by: trailer.  This has been tightened.
+
+ * Build with gettext breaks on recent macOS w/ Homebrew when
+ /usr/local/bin is not on PATH, which has been corrected.
+ (merge 92a1377a2a js/macos-gettext-build later to maint).
+
* Code cleanup, docfix, build fix, etc.
(merge 11f470aee7 jc/test-yes-doc later to maint).
(merge 90503a240b js/doc-symref-in-proto-v1 later to maint).
(merge a7256debd4 nd/checkout-m-doc-update later to maint).
(merge 3a9e1ad78d jt/t5551-protocol-v2-does-not-have-half-auth later to maint).
(merge 0b918b75af sg/t5318-cleanup later to maint).
+ (merge 68ed71b53c cb/doco-mono later to maint).
+ (merge a34dca2451 nd/interpret-trailers-docfix later to maint).
+ (merge cf7b857a77 en/fast-import-parsing-fix later to maint).
+ (merge fe61ccbc35 po/rerere-doc-fmt later to maint).
+ (merge ffea0248bf po/describe-not-necessarily-7 later to maint).
+ (merge 7cb7283adb tg/ls-files-debug-format-fix later to maint).
+ (merge f64a21bd82 tz/doc-apostrophe-no-longer-needed later to maint).
+ (merge dbe7b41019 js/t3301-unbreak-notes-test later to maint).
+ (merge d8083e4180 km/t3000-retitle later to maint).
Some parts of the system have dedicated maintainers with their own
repositories.
-- 'git-gui/' comes from git-gui project, maintained by Pat Thoyts:
+- `git-gui/` comes from git-gui project, maintained by Pat Thoyts:
git://repo.or.cz/git-gui.git
-- 'gitk-git/' comes from Paul Mackerras's gitk project:
+- `gitk-git/` comes from Paul Mackerras's gitk project:
git://ozlabs.org/~paulus/gitk
-- 'po/' comes from the localization coordinator, Jiang Xin:
+- `po/` comes from the localization coordinator, Jiang Xin:
https://github.com/git-l10n/git-po/
waitingForEditor::
Print a message to the terminal whenever Git is waiting for
editor input from the user.
+ nestedTag::
+ Advice shown if a user attempts to recursively tag a tag object.
--
core.excludesFile::
Specifies the pathname to the file that contains patterns to
describe paths that are not meant to be tracked, in addition
- to '.gitignore' (per-directory) and '.git/info/exclude'.
+ to `.gitignore` (per-directory) and `.git/info/exclude`.
Defaults to `$XDG_CONFIG_HOME/git/ignore`.
If `$XDG_CONFIG_HOME` is either not set or empty, `$HOME/.config/git/ignore`
is used instead. See linkgit:gitignore[5].
command-line argument and write the password on its STDOUT.
core.attributesFile::
- In addition to '.gitattributes' (per-directory) and
- '.git/info/attributes', Git looks into this file for attributes
+ In addition to `.gitattributes` (per-directory) and
+ `.git/info/attributes`, Git looks into this file for attributes
(see linkgit:gitattributes[5]). Path expansions are made the same
way as for `core.excludesFile`. Its default value is
`$XDG_CONFIG_HOME/git/attributes`. If `$XDG_CONFIG_HOME` is either not
core.hooksPath::
By default Git will look for your hooks in the
- '$GIT_DIR/hooks' directory. Set this to different path,
- e.g. '/etc/git/hooks', and Git will try to find your hooks in
- that directory, e.g. '/etc/git/hooks/pre-receive' instead of
- in '$GIT_DIR/hooks/pre-receive'.
+ `$GIT_DIR/hooks` directory. Set this to different path,
+ e.g. `/etc/git/hooks`, and Git will try to find your hooks in
+ that directory, e.g. `/etc/git/hooks/pre-receive` instead of
+ in `$GIT_DIR/hooks/pre-receive`.
+
The path can be either absolute or relative. A relative path is
taken as relative to the directory where the hooks are run (see
gc.aggressiveDepth::
The depth parameter used in the delta compression
algorithm used by 'git gc --aggressive'. This defaults
- to 50.
+ to 50, which is the default for the `--depth` option when
+ `--aggressive` isn't in use.
++
+See the documentation for the `--depth` option in
+linkgit:git-repack[1] for more details.
gc.aggressiveWindow::
The window size parameter used in the delta compression
algorithm used by 'git gc --aggressive'. This defaults
- to 250.
+ to 250, which is a much more aggressive window size than
+ the default `--window` of 10.
++
+See the documentation for the `--window` option in
+linkgit:git-repack[1] for more details.
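For a one-off aggressive repack these knobs can also be overridden on the
command line instead of being stored in the configuration; a minimal sketch
(the values shown are purely illustrative, not recommendations):

------------------------------------------------
$ git -c gc.aggressiveWindow=500 -c gc.aggressiveDepth=50 gc --aggressive
------------------------------------------------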
gc.auto::
When there are approximately more than this many loose
objects in the repository, `git gc --auto` will pack them.
Some Porcelain commands use this command to perform a
light-weight garbage collection from time to time. The
- default value is 6700. Setting this to 0 disables it.
+ default value is 6700.
++
+Setting this to 0 disables not only automatic packing based on the
+number of loose objects, but any other heuristic `git gc --auto` will
+otherwise use to determine if there's work to do, such as
+`gc.autoPackLimit`.
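For example, to turn off this automatic housekeeping entirely for all of a
user's repositories:

------------------------------------------------
$ git config --global gc.auto 0
------------------------------------------------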
gc.autoPackLimit::
When there are more than this many packs that are not
marked with `*.keep` file in the repository, `git gc
--auto` consolidates them into one larger pack. The
default value is 50. Setting this to 0 disables it.
+ Setting `gc.auto` to 0 will also disable this.
++
+See the `gc.bigPackThreshold` configuration variable below. When in
+use, it'll affect how the auto pack limit works.
gc.autoDetach::
Make `git gc --auto` return immediately and run in background
this configuration variable is ignored, all packs except the base pack
will be repacked. After this the number of packs should go below
gc.autoPackLimit and gc.bigPackThreshold should be respected again.
++
+If the amount of memory estimated for `git repack` to run smoothly is
+not available and `gc.bigPackThreshold` is not set, the largest pack
+will also be excluded (this is the equivalent of running `git gc` with
+`--keep-largest-pack`).
gc.writeCommitGraph::
If true, then gc will rewrite the commit-graph file when
- linkgit:git-gc[1] is run. When using linkgit:git-gc[1]
- '--auto' the commit-graph will be updated if housekeeping is
+ linkgit:git-gc[1] is run. When using `git gc --auto`
+ the commit-graph will be updated if housekeeping is
required. Default is false. See linkgit:git-commit-graph[1]
for details.
With "<pattern>" (e.g. "refs/stash")
in the middle, the setting applies only to the refs that
match the <pattern>.
++
+These types of entries are generally created as a result of using `git
+commit --amend` or `git rebase` and are the commits prior to the amend
+or rebase occurring. Since these changes are not part of the current
+project most users will want to expire them sooner, which is why the
+default is more aggressive than `gc.reflogExpire`.
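For example, non-default expiry values can be scoped to remote-tracking
branches only; an illustrative snippet for a configuration file:

------------
[gc "refs/remotes/*"]
	reflogExpire = never
	reflogExpireUnreachable = 3 days
------------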
gc.rerereResolved::
Records of conflicted merge you resolved earlier are
is turned off.
merge.renames::
- Whether and how Git detects renames. If set to "false",
- rename detection is disabled. If set to "true", basic rename
- detection is enabled. Defaults to the value of diff.renames.
+ Whether Git detects renames. If set to "false", rename detection
+ is disabled. If set to "true", basic rename detection is enabled.
+ Defaults to the value of diff.renames.
+
+merge.directoryRenames::
+ Whether Git detects directory renames, affecting what happens at
+ merge time to new files added to a directory on one side of
+ history when that directory was renamed on the other side of
+ history. If merge.directoryRenames is set to "false", directory
+ rename detection is disabled, meaning that such new files will be
+ left behind in the old directory. If set to "true", directory
+ rename detection is enabled, meaning that such new files will be
+ moved into the new directory. If set to "conflict", a conflict
+ will be reported for such paths. If merge.renames is false,
+ merge.directoryRenames is ignored and treated as false. Defaults
+ to "conflict".
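For instance, a user who prefers such new files to stay behind in the old
directory can disable the detection; a minimal, illustrative setting:

------------------------------------------------
$ git config merge.directoryRenames false
------------------------------------------------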
merge.renormalize::
Tell Git that canonical representation of files in the
This is sort of "Git root" - if you run 'git daemon' with
'--base-path=/srv/git' on example.com, then if you later try to pull
'git://example.com/hello.git', 'git daemon' will interpret the path
- as '/srv/git/hello.git'.
+ as `/srv/git/hello.git`.
--base-path-relaxed::
If --base-path is enabled and repo lookup fails, with this option
The number of additional commits is the number
of commits which would be displayed by "git log v1.0.4..parent".
-The hash suffix is "-g" + 7-char abbreviation for the tip commit
+The hash suffix is "-g" + unambiguous abbreviation for the tip commit
of parent (which was `2414721b194453f058079d897d13c4e377f92dc6`).
The "g" prefix stands for "git" and is used to allow describing the version of
a software depending on the SCM the software is managed with. This is useful
include::pretty-formats.txt[]
+
include::diff-format.txt[]
GIT
all `filemodify`, `filecopy`, `filerename` and `notemodify` commands in
the same commit, as `filedeleteall` wipes the branch clean (see below).
-The `LF` after the command is optional (it used to be required).
+The `LF` after the command is optional (it used to be required). Note
+that for reasons of backward compatibility, if the commit ends with a
+`data` command (i.e. it has no `from`, `merge`, `filemodify`,
+`filedelete`, `filecopy`, `filerename`, `filedeleteall` or
+`notemodify` commands) then two `LF` commands may appear at the end of
+the command instead of just one.
`author`
^^^^^^^^
'get-mark' SP ':' <idnum> LF
....
-This command can be used anywhere in the stream that comments are
-accepted. In particular, the `get-mark` command can be used in the
-middle of a commit but not in the middle of a `data` command.
-
See ``Responses To Commands'' below for details about how to read
this output safely.
<contents> LF
====
-This command can be used anywhere in the stream that comments are
-accepted. In particular, the `cat-blob` command can be used in the
-middle of a commit but not in the middle of a `data` command.
+This command can be used where a `filemodify` directive can appear,
+allowing it to be used in the middle of a commit. For a `filemodify`
+using an inline directive, it can also appear right before the `data`
+directive.
See ``Responses To Commands'' below for details about how to read
this output safely.
blob or tree from a previous commit for use in the current one (with
`filemodify`).
-The `ls` command can be used anywhere in the stream that comments are
-accepted, including the middle of a commit.
+The `ls` command can also be used where a `filemodify` directive can
+appear, allowing it to be used in the middle of a commit.
Reading from the active commit::
This form can only be used in the middle of a `commit`.
to force recomputation of all deltas can significantly reduce the
final packfile size (30-50% smaller can be quite typical).
+Instead of running `git repack` you can also run `git gc
+--aggressive`, which will also optimize other things after an import
+(e.g. pack loose refs). As noted in the "AGGRESSIVE" section in
+linkgit:git-gc[1] the `--aggressive` option will find new deltas with
+the `-f` option to linkgit:git-repack[1]. For the reasons elaborated
+on above, using `--aggressive` after a fast-import is one of the few
+cases where it's known to be worthwhile.
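A typical post-import optimization could therefore look like the following
sketch; the `--depth` and `--window` values are only illustrative, see
linkgit:git-repack[1] for what is appropriate for a given repository:

------------------------------------------------
$ git repack -a -d -f --depth=50 --window=250
# or, alternatively, let gc drive the repack:
$ git gc --aggressive
------------------------------------------------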
MEMORY UTILIZATION
------------------
rewriting. When applying a tree filter, the command needs to
temporarily check out the tree to some directory, which may consume
considerable space in case of large projects. By default it
- does this in the '.git-rewrite/' directory but you can override
+ does this in the `.git-rewrite/` directory but you can override
that choice by this parameter.
-f::
reflog, rerere metadata or stale working trees. May also update ancillary
indexes such as the commit-graph.
-Users are encouraged to run this task on a regular basis within
-each repository to maintain good disk space utilization and good
-operating performance.
+When common porcelain operations that create objects are run, they
+will check whether the repository has grown substantially since the
+last maintenance, and if so run `git gc` automatically. See `gc.auto`
+below for how to disable this behavior.
-Some git commands may automatically run 'git gc'; see the `--auto` flag
-below for details. If you know what you're doing and all you want is to
-disable this behavior permanently without further considerations, just do:
-
-----------------------
-$ git config --global gc.auto 0
-----------------------
+Running `git gc` manually should only be needed when adding objects to
+a repository without regularly running such porcelain commands, to do
+a one-off repository optimization, or e.g. to clean up a suboptimal
+mass-import. See the "PACKFILE OPTIMIZATION" section in
+linkgit:git-fast-import[1] for more details on the import case.
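For such cases a manual run needs no special options; as a minimal
illustration:

------------------------------------------------
$ git gc          # do a one-off round of housekeeping now
$ git gc --auto   # or only do it if the heuristics say it is needed
------------------------------------------------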
OPTIONS
-------
space utilization and performance. This option will cause
'git gc' to more aggressively optimize the repository at the expense
of taking much more time. The effects of this optimization are
- persistent, so this option only needs to be used occasionally; every
- few hundred changesets or so.
+ mostly persistent. See the "AGGRESSIVE" section below for details.
--auto::
With this option, 'git gc' checks whether any housekeeping is
required; if not, it exits without performing any work.
- Some git commands run `git gc --auto` after performing
- operations that could create many loose objects. Housekeeping
- is required if there are too many loose objects or too many
- packs in the repository.
-+
-If the number of loose objects exceeds the value of the `gc.auto`
-configuration variable, then all loose objects are combined into a
-single pack using `git repack -d -l`. Setting the value of `gc.auto`
-to 0 disables automatic packing of loose objects.
+
-If the number of packs exceeds the value of `gc.autoPackLimit`,
-then existing packs (except those marked with a `.keep` file
-or over `gc.bigPackThreshold` limit)
-are consolidated into a single pack by using the `-A` option of
-'git repack'.
-If the amount of memory is estimated not enough for `git repack` to
-run smoothly and `gc.bigPackThreshold` is not set, the largest
-pack will also be excluded (this is the equivalent of running `git gc`
-with `--keep-base-pack`).
-Setting `gc.autoPackLimit` to 0 disables automatic consolidation of
-packs.
+See the `gc.auto` option in the "CONFIGURATION" section below for how
+this heuristic works.
+
-If houskeeping is required due to many loose objects or packs, all
+Once housekeeping is triggered by exceeding the limits of
+configuration options such as `gc.auto` and `gc.autoPackLimit`, all
other housekeeping tasks (e.g. rerere, working trees, reflog...) will
be performed as well.
`.keep` files are consolidated into a single pack. When this
option is used, `gc.bigPackThreshold` is ignored.
+AGGRESSIVE
+----------
+
+When the `--aggressive` option is supplied, linkgit:git-repack[1] will
+be invoked with the `-f` flag, which in turn will pass
+`--no-reuse-delta` to linkgit:git-pack-objects[1]. This will throw
+away any existing deltas and re-compute them, at the expense of
+spending much more time on the repacking.
+
+The effects of this are mostly persistent, e.g. when packs and loose
+objects are coalesced into a new pack, the existing deltas in
+that pack might get re-used, but there are also various cases where we
+might pick a sub-optimal delta from a newer pack instead.
+
+Furthermore, supplying `--aggressive` will tweak the `--depth` and
+`--window` options passed to linkgit:git-repack[1]. See the
+`gc.aggressiveDepth` and `gc.aggressiveWindow` settings below. By
+using a larger window size we're more likely to find more optimal
+deltas.
+
+It's probably not worth it to use this option on a given repository
+without running tailored performance benchmarks on it. It takes a lot
+more time, and the resulting space/delta optimization may or may not
+be worth it. Not using this at all is the right trade-off for most
+users and their repositories.
+
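One rough way to run such a benchmark on a copy of a repository is to compare
the packed size and the wall-clock time before and after (a sketch only; the
relevant figure in the output is "size-pack"):

------------------------------------------------
$ git count-objects -v
$ time git gc --aggressive
$ git count-objects -v
------------------------------------------------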
CONFIGURATION
-------------
-The optional configuration variable `gc.reflogExpire` can be
-set to indicate how long historical entries within each branch's
-reflog should remain available in this repository. The setting is
-expressed as a length of time, for example '90 days' or '3 months'.
-It defaults to '90 days'.
-
-The optional configuration variable `gc.reflogExpireUnreachable`
-can be set to indicate how long historical reflog entries which
-are not part of the current branch should remain available in
-this repository. These types of entries are generally created as
-a result of using `git commit --amend` or `git rebase` and are the
-commits prior to the amend or rebase occurring. Since these changes
-are not part of the current project most users will want to expire
-them sooner. This option defaults to '30 days'.
-
-The above two configuration variables can be given to a pattern. For
-example, this sets non-default expiry values only to remote-tracking
-branches:
-
-------------
-[gc "refs/remotes/*"]
- reflogExpire = never
- reflogExpireUnreachable = 3 days
-------------
-
-The optional configuration variable `gc.rerereResolved` indicates
-how long records of conflicted merge you resolved earlier are
-kept. This defaults to 60 days.
-
-The optional configuration variable `gc.rerereUnresolved` indicates
-how long records of conflicted merge you have not resolved are
-kept. This defaults to 15 days.
-
-The optional configuration variable `gc.packRefs` determines if
-'git gc' runs 'git pack-refs'. This can be set to "notbare" to enable
-it within all non-bare repos or it can be set to a boolean value.
-This defaults to true.
-
-The optional configuration variable `gc.writeCommitGraph` determines if
-'git gc' should run 'git commit-graph write'. This can be set to a
-boolean value. This defaults to false.
-
-The optional configuration variable `gc.aggressiveWindow` controls how
-much time is spent optimizing the delta compression of the objects in
-the repository when the --aggressive option is specified. The larger
-the value, the more time is spent optimizing the delta compression. See
-the documentation for the --window option in linkgit:git-repack[1] for
-more details. This defaults to 250.
-
-Similarly, the optional configuration variable `gc.aggressiveDepth`
-controls --depth option in linkgit:git-repack[1]. This defaults to 50.
-
-The optional configuration variable `gc.pruneExpire` controls how old
-the unreferenced loose objects have to be before they are pruned. The
-default is "2 weeks ago".
-
-Optional configuration variable `gc.worktreePruneExpire` controls how
-old a stale working tree should be before `git worktree prune` deletes
-it. Default is "3 months ago".
+The below documentation is the same as what's found in
+linkgit:git-config[1]:
+include::config/gc.txt[]
NOTES
-----
particular, it will keep not only objects referenced by your current set
of branches and tags, but also objects referenced by the index,
remote-tracking branches, refs saved by 'git filter-branch' in
-refs/original/, or reflogs (which may reference commits in branches
-that were later amended or rewound).
+refs/original/, reflogs (which may reference commits in branches
+that were later amended or rewound), and anything else in the refs/* namespace.
If you are expecting some objects to be deleted and they aren't, check
all of those locations and decide whether it makes sense in your case to
remove those references.
However, these features fall short of a complete solution, so users who
run commands concurrently have to live with some risk of corruption (which
-seems to be low in practice) unless they turn off automatic garbage
-collection with 'git config gc.auto 0'.
+seems to be low in practice).
HOOKS
-----
already opened konqueror in a new tab if possible.
For consistency, we also try such a trick if 'man.konqueror.path' is
-set to something like 'A_PATH_TO/konqueror'. That means we will try to
-launch 'A_PATH_TO/kfmclient' instead.
+set to something like `A_PATH_TO/konqueror`. That means we will try to
+launch `A_PATH_TO/kfmclient` instead.
If you really want to use 'konqueror', then you can use something like
the following:
NAME
----
-git-interpret-trailers - add or parse structured information in commit messages
+git-interpret-trailers - Add or parse structured information in commit messages
SYNOPSIS
--------
taken as relative to the current working directory. E.g. when you are
in a directory 'sub' that has a directory 'dir', you can run 'git
ls-tree -r HEAD dir' to list the contents of the tree (that is
- 'sub/dir' in `HEAD`). You don't want to give a tree that is not at the
+ `sub/dir` in `HEAD`). You don't want to give a tree that is not at the
root level (e.g. `git ls-tree -r HEAD:sub dir`) in this case, as that
- would result in asking for 'sub/sub/dir' in the `HEAD` commit.
+ would result in asking for `sub/sub/dir` in the `HEAD` commit.
However, the current working directory can be ignored by passing
--full-tree option.
started.
--reset::
- Same as -m, except that unmerged entries are discarded
- instead of failing.
+ Same as -m, except that unmerged entries are discarded instead
+ of failing. When used with `-u`, updates leading to loss of
+ working tree changes will not abort the operation.
-u::
After a successful merge, update the files in the work
Instead of reading tree object(s) into the index, just empty
it.
+-q::
+--quiet::
+ Quiet, suppress feedback messages.
+
<tree-ish#>::
The id of the tree object(s) to be read/merged.
link-level address).
"ext::git-server-alias foo %G/repo% with% spaces %Vfoo"::
- Represents a repository with path '/repo with spaces' accessed
+ Represents a repository with path `/repo with spaces` accessed
using the helper program "git-server-alias foo". The hostname for
the remote server passed in the protocol stream will be "foo"
(this allows multiple virtual Git servers to share a
SEE ALSO
--------
-linkgit:gitremote-helpers[1]
+linkgit:gitremote-helpers[7]
GIT
---
SEE ALSO
--------
-linkgit:gitremote-helpers[1]
+linkgit:gitremote-helpers[7]
GIT
---
git-remote-helpers
==================
-This document has been moved to linkgit:gitremote-helpers[1].
+This document has been moved to linkgit:gitremote-helpers[7].
Please let the owners of the referring site know so that they can update the
link you clicked to get here.
+++ /dev/null
-git-remote-testgit(1)
-=====================
-
-NAME
-----
-git-remote-testgit - Example remote-helper
-
-
-SYNOPSIS
---------
-[verse]
-git clone testgit::<source-repo> [<destination>]
-
-DESCRIPTION
------------
-
-This command is a simple remote-helper, that is used both as a
-testcase for the remote-helper functionality, and as an example to
-show remote-helper authors one possible implementation.
-
-The best way to learn more is to read the comments and source code in
-'git-remote-testgit'.
-
-SEE ALSO
---------
-linkgit:gitremote-helpers[1]
-
-GIT
----
-Part of the linkgit:git[1] suite
hand resolutions to their corresponding automerge results.
[NOTE]
-You need to set the configuration variable rerere.enabled in order to
+You need to set the configuration variable `rerere.enabled` in order to
enable this command.
------------------------------------------------
These three branches all forked from a common commit, [master],
-whose commit message is "Add {apostrophe}git show-branch{apostrophe}".
+whose commit message is "Add \'git show-branch'".
The "fixes" branch adds one commit "Introduce "reset type" flag to
"git reset"". The "mhf" branch adds many other commits.
The current branch is "master".
--------
[verse]
'git stash' list [<options>]
-'git stash' show [<stash>]
+'git stash' show [<options>] [<stash>]
'git stash' drop [-q|--quiet] [<stash>]
'git stash' ( pop | apply ) [--index] [-q|--quiet] [<stash>]
'git stash' branch <branchname> [<stash>]
The command takes options applicable to the 'git log'
command to control what is shown and how. See linkgit:git-log[1].
-show [<stash>]::
+show [<options>] [<stash>]::
Show the changes recorded in the stash entry as a diff between the
stashed contents and the commit back when the stash entry was first
command line arguments. Parsers should ignore headers they
don't recognize.
-### Branch Headers
+Branch Headers
+^^^^^^^^^^^^^^
If `--branch` is given, a series of header lines are printed with
information about the current branch.
------------------------------------------------------------
....
-### Changed Tracked Entries
+Changed Tracked Entries
+^^^^^^^^^^^^^^^^^^^^^^^
Following the headers, a series of lines are printed for tracked
entries. One of three different line formats may be used to describe
--------------------------------------------------------
....
-### Other Items
+Other Items
+^^^^^^^^^^^
Following the tracked entries (and if requested), a series of
lines will be printed for untracked and then ignored items
! <path>
-### Pathname Format Notes and -z
+Pathname Format Notes and -z
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
When the `-z` option is given, pathnames are printed as is and
without any quoting and lines are terminated with a NUL (ASCII 0x00)
'git submodule' [--quiet] init [--] [<path>...]
'git submodule' [--quiet] deinit [-f|--force] (--all|[--] <path>...)
'git submodule' [--quiet] update [<options>] [--] [<path>...]
+'git submodule' [--quiet] set-branch [<options>] [--] <path>
'git submodule' [--quiet] summary [<options>] [--] [<path>...]
'git submodule' [--quiet] foreach [--recursive] <command>
'git submodule' [--quiet] sync [--recursive] [--] [<path>...]
or ../), the location relative to the superproject's default remote
repository (Please note that to specify a repository 'foo.git'
which is located right next to a superproject 'bar.git', you'll
-have to use '../foo.git' instead of './foo.git' - as one might expect
+have to use `../foo.git` instead of `./foo.git` - as one might expect
when following the rules for relative URLs - because the evaluation
of relative URLs in Git is identical to that of relative directories).
+
If `--recursive` is specified, this command will recurse into the
registered submodules, and update any nested submodules within.
--
+set-branch ((-d|--default)|(-b|--branch <branch>)) [--] <path>::
+ Sets the default remote tracking branch for the submodule. The
+ `--branch` option allows the remote branch to be specified. The
+ `--default` option removes the submodule.<name>.branch configuration
+ key, which causes the tracking branch to default to 'master'.
+
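For example, to track a branch other than 'master' in a submodule and later
revert to the default (the submodule path `lib/foo` and the branch name are
hypothetical):

------------------------------------------------
$ git submodule set-branch --branch stable -- lib/foo
$ git submodule set-branch --default -- lib/foo
------------------------------------------------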
summary [--cached|--files] [(-n|--summary-limit) <n>] [commit] [--] [<path>...]::
Show commit summary between the given commit (defaults to HEAD) and
working tree/index. For a submodule in question, a series of commits
This option is only valid for the deinit command. Unregister all
submodules in the working tree.
--b::
---branch::
+-b <branch>::
+--branch <branch>::
Branch of repository to add as submodule.
The name of the branch is recorded as `submodule.<name>.branch` in
`.gitmodules` for `update --remote`. A special value of `.` is used to
indicate that the name of the branch in the submodule should be the
- same name as the current branch in the current repository.
+ same name as the current branch in the current repository. If the
+ option is not specified, it defaults to 'master'.
-f::
--force::
tags = tags/*/project-a:refs/remotes/project-a/tags/*
------------------------------------------------------------------------
-Keep in mind that the '\*' (asterisk) wildcard of the local ref
-(right of the ':') *must* be the farthest right path component;
+Keep in mind that the `*` (asterisk) wildcard of the local ref
+(right of the `:`) *must* be the farthest right path component;
however the remote wildcard may be anywhere as long as it's an
-independent path component (surrounded by '/' or EOL). This
+independent path component (surrounded by `/` or EOL). This
type of configuration is not automatically created by 'init' and
should be manually entered with a text-editor or using 'git config'.
man page on an already opened konqueror in a new tab if possible.
For consistency, we also try such a trick if 'browser.konqueror.path' is
-set to something like 'A_PATH_TO/konqueror'. That means we will try to
-launch 'A_PATH_TO/kfmclient' instead.
+set to something like `A_PATH_TO/konqueror`. That means we will try to
+launch `A_PATH_TO/kfmclient` instead.
If you really want to use 'konqueror', then you can use something like
the following:
from standard input. Exiting with non-zero status from this script prevents
`git-p4 submit` from launching. Run `git-p4 submit --help` for details.
+post-index-change
+~~~~~~~~~~~~~~~~~
+
+This hook is invoked when the index is written in read-cache.c
+do_write_locked_index.
+
+The first parameter passed to the hook indicates whether the working
+directory was updated: "1" means the working directory was updated,
+"0" means it was not.
+
+The second parameter indicates whether the index was updated and
+whether the skip-worktree bits could have changed: "1" means the
+skip-worktree bits could have been updated, "0" means they were not.
+
+Only one of the two parameters will be set to "1" when the hook runs;
+the hook should never be invoked with both set to "1".
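A minimal hook along these lines, shown only as an illustration of the two
parameters, might do nothing more than log them:

------------------------------------------------
#!/bin/sh
# .git/hooks/post-index-change (illustrative only)
# $1: "1" if the working directory was updated, otherwise "0"
# $2: "1" if skip-worktree bits may have changed, otherwise "0"
echo "index written: worktree-updated=$1 skip-worktree-updated=$2" >&2
------------------------------------------------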
+
GIT
---
Part of the linkgit:git[1] suite
-----
User configuration and preferences are stored at:
-* '$XDG_CONFIG_HOME/git/gitk' if it exists, otherwise
-* '$HOME/.gitk' if it exists
+* `$XDG_CONFIG_HOME/git/gitk` if it exists, otherwise
+* `$HOME/.gitk` if it exists
-If neither of the above exist then '$XDG_CONFIG_HOME/git/gitk' is created and
+If neither of the above exist then `$XDG_CONFIG_HOME/git/gitk` is created and
used by default. If '$XDG_CONFIG_HOME' is not set it defaults to
-'$HOME/.config' in all cases.
+`$HOME/.config` in all cases.
History
-------
This defines two submodules, `libfoo` and `libbar`. These are expected to
-be checked out in the paths 'include/foo' and 'include/bar', and for both
+be checked out in the paths `include/foo` and `include/bar`, and for both
submodules a URL is specified which can be used for cloning the submodules.
SEE ALSO
-gitremote-helpers(1)
+gitremote-helpers(7)
====================
NAME
linkgit:git-remote-fd[1]
-linkgit:git-remote-testgit[1]
-
linkgit:git-fast-import[1]
GIT
to the object database, not to the repository!) in your
alternates file, but it will not work if you use absolute
paths unless the absolute path in filesystem and web URL
- is the same. See also 'objects/info/http-alternates'.
+ is the same. See also `objects/info/http-alternates`.
objects/info/http-alternates::
This file records URLs to alternate object stores that
* built-in values (some set during build stage),
* common system-wide configuration file (defaults to
- '/etc/gitweb-common.conf'),
+ `/etc/gitweb-common.conf`),
* either per-instance configuration file (defaults to 'gitweb_config.perl'
  in the same directory as the installed gitweb), or if it does not exist
- then fallback system-wide configuration file (defaults to '/etc/gitweb.conf').
+ then fallback system-wide configuration file (defaults to `/etc/gitweb.conf`).
Values obtained in later configuration files override values obtained earlier
in the above sequence.
subroutine. For example, one might want to put gitweb configuration
related to access control for viewing repositories via Gitolite (one
of Git repository management tools) in a separate file, e.g. in
-'/etc/gitweb-gitolite.conf'. To include it, put
+`/etc/gitweb-gitolite.conf`. To include it, put
--------------------------------------------------
read_config_file("/etc/gitweb-gitolite.conf");
http://git.example.com/gitweb.cgi/foo/bar.git
------------------------------------------------
+
-will map to the path '/srv/git/foo/bar.git' on the filesystem.
+will map to the path `/srv/git/foo/bar.git` on the filesystem.
$projects_list::
Name of a plain text file listing projects, or a name of directory
$mimetypes_file::
File to use for (filename extension based) guessing of MIME types before
- trying '/etc/mime.types'. *NOTE* that this path, if relative, is taken
+ trying `/etc/mime.types`. *NOTE* that this path, if relative, is taken
as relative to the current Git repository, not to CGI script. If unset,
- only '/etc/mime.types' is used (if present on filesystem). If no mimetypes
+ only `/etc/mime.types` is used (if present on filesystem). If no mimetypes
file is found, mimetype guessing based on extension of file is disabled.
Unset by default.
+
This list should contain the URI of gitweb's standard stylesheet. The default
URI of gitweb stylesheet can be set at build time using the `GITWEB_CSS`
-makefile variable. Its default value is 'static/gitweb.css'
-(or 'static/gitweb.min.css' if the `CSSMIN` variable is defined,
+makefile variable. Its default value is `static/gitweb.css`
+(or `static/gitweb.min.css` if the `CSSMIN` variable is defined,
i.e. if CSS minifier is used during build).
+
*Note*: there is also a legacy `$stylesheet` configuration variable, which was
is displayed in the top right corner of each gitweb page and used as
a logo for the Atom feed. Relative to the base URI of gitweb (as a path).
Can be adjusted when building gitweb using `GITWEB_LOGO` variable
- By default set to 'static/git-logo.png'.
+ By default set to `static/git-logo.png`.
$favicon::
Points to the location where you put 'git-favicon.png' on your web
may display them in the browser's URL bar and next to the site name in
bookmarks. Relative to the base URI of gitweb. Can be adjusted at
build time using `GITWEB_FAVICON` variable.
- By default set to 'static/git-favicon.png'.
+ By default set to `static/git-favicon.png`.
$javascript::
Points to the location where you put 'gitweb.js' on your web server,
Relative to the base URI of gitweb. Can be set at build time using
the `GITWEB_JS` build-time configuration variable.
+
-The default value is either 'static/gitweb.js', or 'static/gitweb.min.js' if
+The default value is either `static/gitweb.js`, or `static/gitweb.min.js` if
the `JSMIN` build variable was defined, i.e. if JavaScript minifier was used
at build time. *Note* that this single file is generated from multiple
individual JavaScript "modules".
doesn't result in some other type; by default "text/plain".
Gitweb guesses mimetype of a file to display based on extension
of its filename, using `$mimetypes_file` (if set and file exists)
- and '/etc/mime.types' files (see *mime.types*(5) manpage; only
+ and `/etc/mime.types` files (see *mime.types*(5) manpage; only
filename extension rules are supported by gitweb).
$default_text_plain_charset::
(for example one for `git://` protocol, and one for `http://`
protocol).
+
-Note that per repository configuration can be set in '$GIT_DIR/cloneurl'
+Note that per repository configuration can be set in `$GIT_DIR/cloneurl`
file, or as values of multi-value `gitweb.url` configuration variable in
project config. Per-repository configuration takes precedence over value
composed from `@git_base_url_list` elements and project name.
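Either form of per-repository override could be set up as follows; the URL is
a placeholder:

------------------------------------------------
# inside the repository's $GIT_DIR:
$ echo git://git.example.org/foo.git >cloneurl
# or, equivalently, in the project configuration:
$ git config --add gitweb.url git://git.example.org/foo.git
------------------------------------------------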
If the server load exceeds this value then gitweb will return
"503 Service Unavailable" error. The server load is taken to be 0
if gitweb cannot determine its value. Currently it works only on Linux,
- where it uses '/proc/loadavg'; the load there is the number of active
+ where it uses `/proc/loadavg`; the load there is the number of active
tasks on the system -- processes that are actually running -- averaged
over the last minute.
+
Only one provider at a time can be selected ('default' is one element list).
If an unknown provider is specified, the feature is disabled.
*Note* that some providers might require extra Perl packages to be
-installed; see 'gitweb/INSTALL' for more details.
+installed; see `gitweb/INSTALL` for more details.
+
This feature can be configured on a per-repository basis via
repository's `gitweb.avatar` configuration variable.
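For example, to select the 'gravatar' provider for a single repository
(assuming the required Perl packages are installed):

------------------------------------------------
$ git config gitweb.avatar gravatar
------------------------------------------------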
CONFIGURATION
-------------
Various aspects of gitweb's behavior can be controlled through the configuration
-file 'gitweb_config.perl' or '/etc/gitweb.conf'. See the linkgit:gitweb.conf[5]
+file `gitweb_config.perl` or `/etc/gitweb.conf`. See the linkgit:gitweb.conf[5]
for details.
Repositories
our $projectroot = '/path/to/parent/directory';
-----------------------------------------------------------------------
-The default value for `$projectroot` is '/pub/git'. You can change it during
+The default value for `$projectroot` is `/pub/git`. You can change it during
building gitweb via `GITWEB_PROJECTROOT` build configuration variable.
By default all Git repositories under `$projectroot` are visible and available
-------------------------------------------------------------------------------
+
from the template during repository creation, usually installed in
-'/usr/share/git-core/templates/'. You can use the `gitweb.description` repo
+`/usr/share/git-core/templates/`. You can use the `gitweb.description` repo
configuration variable, but the file takes precedence.
category (or `gitweb.category`)::
Apache as CGI
~~~~~~~~~~~~~
Apache must be configured to support CGI scripts in the directory in
-which gitweb is installed. Let's assume that it is '/var/www/cgi-bin'
+which gitweb is installed. Let's assume that it is `/var/www/cgi-bin`
directory.
-----------------------------------------------------------------------
(for mod_perl 1.x) or ModPerl::Registry (for mod_perl 2.x) to enable
this support.
-Assuming that gitweb is installed to '/var/www/perl', the following
+Assuming that gitweb is installed to `/var/www/perl`, the following
Apache configuration (for mod_perl 2.x) is suitable.
-----------------------------------------------------------------------
~~~~~~~~~~~~~~~~~~~
Gitweb works with Apache and FastCGI. First you need to rename, copy
or symlink gitweb.cgi to gitweb.fcgi. Let's assume that gitweb is
-installed in '/usr/share/gitweb' directory. The following Apache
+installed in `/usr/share/gitweb` directory. The following Apache
configuration is suitable (UNTESTED!)
-----------------------------------------------------------------------
-----------------------------------------------------------------------
The above configuration expects your public repositories to live under
-'/pub/git' and will serve them as `http://git.domain.org/dir-under-pub-git`,
+`/pub/git` and will serve them as `http://git.domain.org/dir-under-pub-git`,
both as clonable Git URL and as browseable gitweb interface. If you then
start your linkgit:git-daemon[1] with `--base-path=/pub/git --export-all`
then you can even use the `git://` URL with exactly the same path.
Setting the environment variable `GITWEB_CONFIG` will tell gitweb to use the
-named file (i.e. in this example '/etc/gitweb.conf') as a configuration for
+named file (i.e. in this example `/etc/gitweb.conf`) as a configuration for
gitweb. You don't really need it in above example; it is required only if
your configuration file is in different place than built-in (during
-compiling gitweb) 'gitweb_config.perl' or '/etc/gitweb.conf'. See
+compiling gitweb) 'gitweb_config.perl' or `/etc/gitweb.conf`. See
linkgit:gitweb.conf[5] for details, especially information about precedence
rules.
If you use the rewrite rules from the example you *might* also need
something like the following in your gitweb configuration file
-('/etc/gitweb.conf' following example):
+(`/etc/gitweb.conf` following example):
----------------------------------------------------------------------------
@stylesheets = ("/some/absolute/path/gitweb.css");
$my_uri = "/";
Here actual project root is passed to gitweb via `GITWEB_PROJECT_ROOT`
environment variable from a web server, so you need to put the following
-line in gitweb configuration file ('/etc/gitweb.conf' in above example):
+line in gitweb configuration file (`/etc/gitweb.conf` in above example):
--------------------------------------------------------------------------
$projectroot = $ENV{'GITWEB_PROJECTROOT'} || "/pub/git";
--------------------------------------------------------------------------
These configurations enable two things. First, each unix user (`<user>`) of
the server will be able to browse through gitweb Git repositories found in
-'~/public_git/' with the following url:
+`~/public_git/` with the following url:
http://git.example.org/~<user>/
use the \'~' as first character, just comment or remove the second rewrite
rule, and uncomment one of the following according to what you want.
-Second, repositories found in '/pub/scm/' and '/var/git/' will be accessible
+Second, repositories found in `/pub/scm/` and `/var/git/` will be accessible
through `http://git.example.org/scm/` and `http://git.example.org/var/`.
You can add as many project roots as you want by adding rewrite rules like
the third and the fourth.
http://git.example.com/project.git/shortlog/sometag
i.e. without 'gitweb.cgi' part, by using a configuration such as the
-following. This configuration assumes that '/var/www/gitweb' is the
+following. This configuration assumes that `/var/www/gitweb` is the
DocumentRoot of your webserver, contains the gitweb.cgi script and
complementary static files (stylesheet, favicon, JavaScript):
`@stylesheets`, `$my_uri` and `$home_link`, but you lose "dumb client"
access to your project .git dirs (described in "Single URL for gitweb and
for fetching" section). A possible workaround for the latter is the
-following: in your project root dir (e.g. '/pub/git') have the projects
-named *without* a .git extension (e.g. '/pub/git/project' instead of
-'/pub/git/project.git') and configure Apache as follows:
+following: in your project root dir (e.g. `/pub/git`) have the projects
+named *without* a .git extension (e.g. `/pub/git/project` instead of
+`/pub/git/project.git`) and configure Apache as follows:
----------------------------------------------------------------------------
<VirtualHost *:80>
ServerAlias git.example.com
will provide human-friendly gitweb access.
This solution is not 100% bulletproof, in the sense that if some project has
-a named ref (branch, tag) starting with 'git/', then paths such as
+a named ref (branch, tag) starting with `git/`, then paths such as
http://git.example.com/project/command/abranch..git/abranch
--------
linkgit:gitweb.conf[5], linkgit:git-instaweb[1]
-'gitweb/README', 'gitweb/INSTALL'
+`gitweb/README`, `gitweb/INSTALL`
GIT
---
--------------
If you have to access the WebDAV server from behind an HTTP(S) proxy,
-set the variable 'all_proxy' to 'http://proxy-host.com:port', or
-'http://login-on-proxy:passwd-on-proxy@proxy-host.com:port'. See 'man
+set the variable 'all_proxy' to `http://proxy-host.com:port`, or
+`http://login-on-proxy:passwd-on-proxy@proxy-host.com:port`. See 'man
curl' for details.
author's). If `-local` is appended to the format (e.g.,
`iso-local`), the user's local time zone is used instead.
+
+--
`--date=relative` shows dates relative to the current time,
e.g. ``2 hours ago''. The `-local` option has no effect for
`--date=relative`.
-+
+
`--date=local` is an alias for `--date=default-local`.
-+
+
`--date=iso` (or `--date=iso8601`) shows timestamps in an ISO 8601-like format.
The differences to the strict ISO 8601 format are:
- a space between time and time zone
- no colon between hours and minutes of the time zone
-+
`--date=iso-strict` (or `--date=iso8601-strict`) shows timestamps in strict
ISO 8601 format.
-+
+
`--date=rfc` (or `--date=rfc2822`) shows timestamps in RFC 2822
format, often found in email messages.
-+
+
`--date=short` shows only the date, but not the time, in `YYYY-MM-DD` format.
-+
+
`--date=raw` shows the date as seconds since the epoch (1970-01-01
00:00:00 UTC), followed by a space, and then the timezone as an offset
from UTC (a `+` or `-` with four digits; the first two are hours, and
Note that the `-local` option does not affect the seconds-since-epoch
value (which is always measured in UTC), but does switch the accompanying
timezone value.
-+
+
`--date=human` shows the timezone if the timezone does not match the
current time-zone, and doesn't print the whole date if that matches
(ie skip printing year for dates that are "this year", but also skip
the whole date itself if it's in the last few days and we can just say
what weekday it was). For older dates the hour and minute is also
omitted.
-+
+
`--date=unix` shows the date as a Unix epoch timestamp (seconds since
1970). As with `--raw`, this is always in UTC and therefore `-local`
has no effect.
-+
+
`--date=format:...` feeds the format `...` to your system `strftime`,
except for %z and %Z, which are handled internally.
Use `--date=format:%c` to show the date in your system locale's
preferred format. See the `strftime` manual for a complete list of
format placeholders. When using `-local`, the correct syntax is
`--date=format-local:...`.
-+
+
`--date=default` is the default format, and is similar to
`--date=rfc2822`, with a few exceptions:
-
+--
- there is no comma after the day-of-week
- the time zone is omitted when the local time zone is used
when you run `git cherry-pick`.
+
Note that any of the 'refs/*' cases above may come either from
-the '$GIT_DIR/refs' directory or from the '$GIT_DIR/packed-refs' file.
+the `$GIT_DIR/refs` directory or from the `$GIT_DIR/packed-refs` file.
While the ref name encoding is unspecified, UTF-8 is preferred as
some output processing may assume ref names in UTF-8.
`git push` were run while `branchname` was checked out (or the current
`HEAD` if no branchname is specified). Since our push destination is
in a remote repository, of course, we report the local tracking branch
- that corresponds to that branch (i.e., something in 'refs/remotes/').
+ that corresponds to that branch (i.e., something in `refs/remotes/`).
+
Here's an example to make it more clear:
+
--continue::
Continue the operation in progress using the information in
- '.git/sequencer'. Can be used to continue after resolving
+ `.git/sequencer`. Can be used to continue after resolving
conflicts in a failed cherry-pick or revert.
--quit::
config-like files that the caller specifies (i.e., files like `.gitmodules`,
`~/.gitconfig` etc.). For example,
----------------------------------------
+----------------------------------------
struct config_set gm_config;
git_configset_init(&gm_config);
int b;
The filename will be prefixed by passing the filename along with
the prefix argument of `parse_options()` to `prefix_filename()`.
-`OPT_ARGUMENT(long, description)`::
+`OPT_ARGUMENT(long, &int_var, description)`::
Introduce a long-option argument that will be kept in `argv[]`.
+ If this option was seen, `int_var` will be set to one (except
+ if a `NULL` pointer was passed).
`OPT_NUMBER_CALLBACK(&var, description, func_ptr)`::
Recognize numerical options like -123 and feed the integer as
- Git Wire Protocol, Version 2
-==============================
+Git Wire Protocol, Version 2
+============================
This document presents a specification for a version 2 of Git's wire
protocol. Protocol v2 will improve upon v1 in the following ways:
has completed, a client can reuse the connection and request that other
commands be executed.
- Packet-Line Framing
----------------------
+Packet-Line Framing
+-------------------
All communication is done using packet-line framing, just as in v1. See
`Documentation/technical/pack-protocol.txt` and
* '0000' Flush Packet (flush-pkt) - indicates the end of a message
* '0001' Delimiter Packet (delim-pkt) - separates sections of a message
- Initial Client Request
-------------------------
+Initial Client Request
+----------------------
In general a client can request to speak protocol v2 by sending
`version=2` through the respective side-channel for the transport being
found in `pack-protocol.txt` and `http-protocol.txt`. In all cases the
response from the server is the capability advertisement.
- Git Transport
-~~~~~~~~~~~~~~~
+Git Transport
+~~~~~~~~~~~~~
When using the git:// transport, you can request to use protocol v2 by
sending "version=2" as an extra parameter:
003egit-upload-pack /project.git\0host=myserver.com\0\0version=2\0
- SSH and File Transport
-~~~~~~~~~~~~~~~~~~~~~~~~
+SSH and File Transport
+~~~~~~~~~~~~~~~~~~~~~~
When using either the ssh:// or file:// transport, the GIT_PROTOCOL
environment variable must be set explicitly to include "version=2".
- HTTP Transport
-~~~~~~~~~~~~~~~~
+HTTP Transport
+~~~~~~~~~~~~~~
When using the http:// or https:// transport a client makes a "smart"
info/refs request as described in `http-protocol.txt` and requests that
Subsequent requests are then made directly to the service
`$GIT_URL/git-upload-pack`. (This works the same for git-receive-pack).
- Capability Advertisement
---------------------------
+Capability Advertisement
+------------------------
A server which decides to communicate (based on a request from a client)
using protocol version 2, notifies the client by sending a version string
key = 1*(ALPHA | DIGIT | "-_")
value = 1*(ALPHA | DIGIT | " -_.,?\/{}[]()<>!@#$%^&*+=:;")
- Command Request
------------------
+Command Request
+---------------
After receiving the capability advertisement, a client can then issue a
request to select the command it wants with any particular capabilities
optionally send an empty request consisting of just a flush-pkt to
indicate that no more requests will be made.
- Capabilities
---------------
+Capabilities
+------------
There are two different types of capabilities: normal capabilities,
which can be used to convey information or alter the behavior of a
permits simple round-robin load-balancing on the server side, without
needing to worry about state management.
- agent
-~~~~~~~
+agent
+~~~~~
The server can advertise the `agent` capability with a value `X` (in the
form `agent=X`) to notify the client that the server is running version
and debugging purposes, and MUST NOT be used to programmatically assume
the presence or absence of particular features.
- ls-refs
-~~~~~~~~~
+ls-refs
+~~~~~~~
`ls-refs` is the command used to request a reference advertisement in v2.
Unlike the current reference advertisement, ls-refs takes in arguments
symref = "symref-target:" symref-target
peeled = "peeled:" obj-id
- fetch
-~~~~~~~
+fetch
+~~~~~
`fetch` is the command used to fetch a packfile in v2. It can be looked
at as a modified version of the v1 fetch where the ref-advertisement is
2 - progress messages
3 - fatal error message just before stream aborts
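The packfile section is therefore sideband-encoded: the first byte of each
pkt-line payload selects the stream. A rough, self-contained sketch of
demultiplexing it (assuming the input starts at the packfile data, and skipping
Git's real pkt-line and sideband helpers):

    #include <stdio.h>
    #include <stdlib.h>

    /* Read one pkt-line payload; return its length, 0 for flush/delim, -1 on EOF. */
    static int read_pkt_line(FILE *in, char *buf, size_t bufsz)
    {
            char hex[5] = { 0 };
            unsigned long len;

            if (fread(hex, 1, 4, in) != 4)
                    return -1;
            len = strtoul(hex, NULL, 16);
            if (len < 4)
                    return 0; /* flush-pkt or delim-pkt */
            len -= 4;         /* the length prefix counts itself */
            if (len > bufsz || fread(buf, 1, len, in) != len)
                    return -1;
            return (int)len;
    }

    int main(void)
    {
            static char buf[65520];
            int len;

            while ((len = read_pkt_line(stdin, buf, sizeof(buf))) > 0) {
                    switch (buf[0]) {
                    case 1: fwrite(buf + 1, 1, len - 1, stdout); break; /* pack data */
                    case 2: fwrite(buf + 1, 1, len - 1, stderr); break; /* progress */
                    case 3: fwrite(buf + 1, 1, len - 1, stderr); return 1; /* fatal error */
                    }
            }
            return 0;
    }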
- server-option
-~~~~~~~~~~~~~~~
+server-option
+~~~~~~~~~~~~~
If advertised, indicates that any number of server specific options can be
included in a request. This is done by sending each option as a
where <address> may be a path, a server and path, or an arbitrary
URL-like string recognized by the specific remote helper being
-invoked. See linkgit:gitremote-helpers[1] for details.
+invoked. See linkgit:gitremote-helpers[7] for details.
If there are a large number of similarly-named remote repositories and
you want to use a different format for them (such that the URLs you
SCRIPT_SH += git-merge-resolve.sh
SCRIPT_SH += git-mergetool.sh
SCRIPT_SH += git-quiltimport.sh
+SCRIPT_SH += git-legacy-stash.sh
SCRIPT_SH += git-remote-testgit.sh
SCRIPT_SH += git-request-pull.sh
-SCRIPT_SH += git-stash.sh
SCRIPT_SH += git-submodule.sh
SCRIPT_SH += git-web--browse.sh
BUILTIN_OBJS += builtin/show-branch.o
BUILTIN_OBJS += builtin/show-index.o
BUILTIN_OBJS += builtin/show-ref.o
+BUILTIN_OBJS += builtin/stash.o
BUILTIN_OBJS += builtin/stripspace.o
BUILTIN_OBJS += builtin/submodule--helper.o
BUILTIN_OBJS += builtin/symbolic-ref.o
.PHONY: check-docs
check-docs::
$(MAKE) -C Documentation lint-docs
- @(for v in $(ALL_COMMANDS); \
+ @(for v in $(patsubst %$X,%,$(ALL_COMMANDS)); \
do \
case "$$v" in \
git-merge-octopus | git-merge-ours | git-merge-recursive | \
( \
sed -e '1,/^### command list/d' \
-e '/^#/d' \
+ -e '/guide$$/d' \
-e 's/[ ].*//' \
-e 's/^/listed /' command-list.txt; \
$(MAKE) -C Documentation print-man1 | \
grep '\.txt$$' | \
- sed -e 's|Documentation/|documented |' \
+ sed -e 's|^|documented |' \
-e 's/\.txt//'; \
) | while read how cmd; \
do \
- case " $(ALL_COMMANDS) " in \
+ case " $(patsubst %$X,%,$(ALL_COMMANDS)) " in \
*" $$cmd "*) ;; \
*) echo "removed but $$how: $$cmd" ;; \
esac; \
int advice_waiting_for_editor = 1;
int advice_graft_file_deprecated = 1;
int advice_checkout_ambiguous_remote_branch_name = 1;
+int advice_nested_tag = 1;
static int advice_use_color = -1;
static char advice_colors[][COLOR_MAXLEN] = {
{ "waitingForEditor", &advice_waiting_for_editor },
{ "graftFileDeprecated", &advice_graft_file_deprecated },
{ "checkoutAmbiguousRemoteBranchName", &advice_checkout_ambiguous_remote_branch_name },
+ { "nestedTag", &advice_nested_tag },
/* make this an alias for backward compatibility */
{ "pushNonFastForward", &advice_push_update_rejected }
extern int advice_waiting_for_editor;
extern int advice_graft_file_deprecated;
extern int advice_checkout_ambiguous_remote_branch_name;
+extern int advice_nested_tag;
int git_default_advice_config(const char *var, const char *value);
__attribute__((format (printf, 1, 2)))
static void write_global_extended_header(struct archiver_args *args)
{
- const unsigned char *sha1 = args->commit_sha1;
+ const struct object_id *oid = args->commit_oid;
struct strbuf ext_header = STRBUF_INIT;
struct ustar_header header;
unsigned int mode;
- if (sha1)
+ if (oid)
strbuf_append_ext_header(&ext_header, "comment",
- sha1_to_hex(sha1), 40);
+ oid_to_hex(oid),
+ the_hash_algo->hexsz);
if (args->time > USTAR_MAX_MTIME) {
strbuf_append_ext_header_uint(&ext_header, "mtime",
args->time);
write_or_die(1, &locator64, ZIP64_DIR_TRAILER_LOCATOR_SIZE);
}
-static void write_zip_trailer(const unsigned char *sha1)
+static void write_zip_trailer(const struct object_id *oid)
{
struct zip_dir_trailer trailer;
int clamped = 0;
copy_le16_clamp(trailer.entries, zip_dir_entries, &clamped);
copy_le32(trailer.size, zip_dir.len);
copy_le32_clamp(trailer.offset, zip_offset, &clamped);
- copy_le16(trailer.comment_length, sha1 ? GIT_SHA1_HEXSZ : 0);
+ copy_le16(trailer.comment_length, oid ? the_hash_algo->hexsz : 0);
write_or_die(1, zip_dir.buf, zip_dir.len);
if (clamped)
write_zip64_trailer();
write_or_die(1, &trailer, ZIP_DIR_TRAILER_SIZE);
- if (sha1)
- write_or_die(1, sha1_to_hex(sha1), GIT_SHA1_HEXSZ);
+ if (oid)
+ write_or_die(1, oid_to_hex(oid), the_hash_algo->hexsz);
}
static void dos_time(timestamp_t *timestamp, int *dos_date, int *dos_time)
err = write_archive_entries(args, write_zip_entry);
if (!err)
- write_zip_trailer(args->commit_sha1);
+ write_zip_trailer(args->commit_oid);
strbuf_release(&zip_dir);
int remote)
{
const char *name = argv[0];
- const unsigned char *commit_sha1;
+ const struct object_id *commit_oid;
time_t archive_time;
struct tree *tree;
const struct commit *commit;
commit = lookup_commit_reference_gently(ar_args->repo, &oid, 1);
if (commit) {
- commit_sha1 = commit->object.oid.hash;
+ commit_oid = &commit->object.oid;
archive_time = commit->date;
} else {
- commit_sha1 = NULL;
+ commit_oid = NULL;
archive_time = time(NULL);
}
if (prefix) {
struct object_id tree_oid;
- unsigned int mode;
+ unsigned short mode;
int err;
err = get_tree_entry(&tree->object.oid, prefix, &tree_oid,
tree = parse_tree_indirect(&tree_oid);
}
ar_args->tree = tree;
- ar_args->commit_sha1 = commit_sha1;
+ ar_args->commit_oid = commit_oid;
ar_args->commit = commit;
ar_args->time = archive_time;
}
const char *base;
size_t baselen;
struct tree *tree;
- const unsigned char *commit_sha1;
+ const struct object_id *commit_oid;
const struct commit *commit;
timestamp_t time;
struct pathspec pathspec;
for (parents = work_tree->parents; parents; parents = parents->next) {
const struct object_id *commit_oid = &parents->item->object.oid;
struct object_id blob_oid;
- unsigned mode;
+ unsigned short mode;
if (!get_tree_entry(commit_oid, path, &blob_oid, &mode) &&
oid_object_info(r, &blob_oid, NULL) == OBJ_BLOB)
}
for (i = 0; i < num_sg; i++) {
if (sg_origin[i]) {
- drop_origin_blob(sg_origin[i]);
+ if (!sg_origin[i]->suspects)
+ drop_origin_blob(sg_origin[i]);
blame_origin_decref(sg_origin[i]);
}
}
struct blame_entry *suspects;
mmfile_t file;
struct object_id blob_oid;
- unsigned mode;
+ unsigned short mode;
/* guilty gets set when shipping any suspects to the final
* blame list instead of other commits
*/
extern int cmd_show_branch(int argc, const char **argv, const char *prefix);
extern int cmd_show_index(int argc, const char **argv, const char *prefix);
extern int cmd_status(int argc, const char **argv, const char *prefix);
+extern int cmd_stash(int argc, const char **argv, const char *prefix);
extern int cmd_stripspace(int argc, const char **argv, const char *prefix);
extern int cmd_submodule__helper(int argc, const char **argv, const char *prefix);
extern int cmd_symbolic_ref(int argc, const char **argv, const char *prefix);
}
for (i = 0; i < dir->nr; i++) {
- check_embedded_repo(dir->entries[i]->name);
if (add_file_to_index(&the_index, dir->entries[i]->name, flags)) {
if (!ignore_add_errors)
die(_("adding files failed"));
exit_status = 1;
+ } else {
+ check_embedded_repo(dir->entries[i]->name);
}
}
return exit_status;
while (!strbuf_getline_lf(&sb, fp)) {
struct object_id from_obj, to_obj;
+ const char *p;
- if (sb.len != GIT_SHA1_HEXSZ * 2 + 1) {
+ if (sb.len != the_hash_algo->hexsz * 2 + 1) {
ret = error(invalid_line, sb.buf);
goto finish;
}
- if (get_oid_hex(sb.buf, &from_obj)) {
+ if (parse_oid_hex(sb.buf, &from_obj, &p)) {
ret = error(invalid_line, sb.buf);
goto finish;
}
- if (sb.buf[GIT_SHA1_HEXSZ] != ' ') {
+ if (*p != ' ') {
ret = error(invalid_line, sb.buf);
goto finish;
}
- if (get_oid_hex(sb.buf + GIT_SHA1_HEXSZ + 1, &to_obj)) {
+ if (get_oid_hex(p + 1, &to_obj)) {
ret = error(invalid_line, sb.buf);
goto finish;
}
* review them with extra care to spot mismerges.
*/
struct rev_info rev_info;
- const char *diff_filter_str = "--diff-filter=AM";
repo_init_revisions(the_repository, &rev_info, NULL);
rev_info.diffopt.output_format = DIFF_FORMAT_NAME_STATUS;
- diff_opt_parse(&rev_info.diffopt, &diff_filter_str, 1, rev_info.prefix);
+ rev_info.diffopt.filter |= diff_filter_bit('A');
+ rev_info.diffopt.filter |= diff_filter_bit('M');
add_pending_oid(&rev_info, "HEAD", &our_tree, 0);
diff_setup_done(&rev_info.diffopt);
run_diff_index(&rev_info, 1);
#include "object-store.h"
#include "blame.h"
#include "string-list.h"
+#include "refs.h"
static char blame_usage[] = N_("git blame [<options>] [<rev-opts>] [<rev>] [--] <file>");
revs.disable_stdin = 1;
setup_revisions(argc, argv, &revs, NULL);
+ if (!revs.pending.nr && is_bare_repository()) {
+ struct commit *head_commit;
+ struct object_id head_oid;
+
+ if (!resolve_ref_unsafe("HEAD", RESOLVE_REF_READING,
+ &head_oid, NULL) ||
+ !(head_commit = lookup_commit_reference_gently(revs.repo,
+ &head_oid, 1)))
+ die("no such ref: HEAD");
+
+ add_pending_object(&revs, &head_commit->object, "HEAD");
+ }
init_scoreboard(&sb);
sb.revs = &revs;
OPT_MERGED(&filter, N_("print only branches that are merged")),
OPT_NO_MERGED(&filter, N_("print only branches that are not merged")),
OPT_COLUMN(0, "column", &colopts, N_("list branches in columns")),
- OPT_CALLBACK(0 , "sort", sorting_tail, N_("key"),
- N_("field name to sort on"), &parse_opt_ref_sorting),
+ OPT_REF_SORT(sorting_tail),
{
OPTION_CALLBACK, 0, "points-at", &filter.points_at, N_("object"),
N_("print only branches of the object"), 0, parse_opt_object_name
ps_matched,
opts);
- if (report_path_error(ps_matched, &opts->pathspec, opts->prefix)) {
+ if (report_path_error(ps_matched, &opts->pathspec)) {
free(ps_matched);
return 1;
}
topts.initial_checkout = is_cache_unborn();
topts.update = 1;
topts.merge = 1;
- topts.gently = opts->merge && old_branch_info->commit;
+ topts.quiet = opts->merge && old_branch_info->commit;
topts.verbose_update = opts->show_progress;
topts.fn = twoway_merge;
if (opts->overwrite_ignore) {
*/
struct tree *result;
struct tree *work;
+ struct tree *old_tree;
struct merge_options o;
struct strbuf sb = STRBUF_INIT;
*/
if (!old_branch_info->commit)
return 1;
+ old_tree = get_commit_tree(old_branch_info->commit);
+
+ if (repo_index_has_changes(the_repository, old_tree, &sb))
+ die(_("cannot continue with staged changes in "
+ "the following files:\n%s"), sb.buf);
+ strbuf_release(&sb);
if (repo_index_has_changes(the_repository,
get_commit_tree(old_branch_info->commit),
ret = merge_trees(&o,
get_commit_tree(new_branch_info->commit),
work,
- get_commit_tree(old_branch_info->commit),
+ old_tree,
&result);
if (ret < 0)
exit(128);
{
struct commit_graph *graph = NULL;
char *graph_name;
+ int open_ok;
+ int fd;
+ struct stat st;
static struct option builtin_commit_graph_verify_options[] = {
OPT_STRING(0, "object-dir", &opts.obj_dir,
opts.obj_dir = get_object_directory();
graph_name = get_commit_graph_filename(opts.obj_dir);
- graph = load_commit_graph_one(graph_name);
+ open_ok = open_commit_graph(graph_name, &fd, &st);
+ if (!open_ok && errno == ENOENT)
+ return 0;
+ if (!open_ok)
+ die_errno(_("Could not open commit-graph '%s'"), graph_name);
+ graph = load_commit_graph_one_fd_st(fd, &st);
FREE_AND_NULL(graph_name);
if (!graph)
- return 0;
+ return 1;
UNLEAK(graph);
return verify_commit_graph(the_repository, graph);
{
struct commit_graph *graph = NULL;
char *graph_name;
+ int open_ok;
+ int fd;
+ struct stat st;
static struct option builtin_commit_graph_read_options[] = {
OPT_STRING(0, "object-dir", &opts.obj_dir,
opts.obj_dir = get_object_directory();
graph_name = get_commit_graph_filename(opts.obj_dir);
- graph = load_commit_graph_one(graph_name);
+ open_ok = open_commit_graph(graph_name, &fd, &st);
+ if (!open_ok)
+ die_errno(_("Could not open commit-graph '%s'"), graph_name);
+
+ graph = load_commit_graph_one_fd_st(fd, &st);
if (!graph)
- die("graph file %s does not exist", graph_name);
+ return 1;
FREE_AND_NULL(graph_name);
* and return the paths that match the given pattern in list.
*/
static int list_paths(struct string_list *list, const char *with_tree,
- const char *prefix, const struct pathspec *pattern)
+ const struct pathspec *pattern)
{
int i, ret;
char *m;
item->util = item; /* better a valid pointer than a fake one */
}
- ret = report_path_error(m, pattern, prefix);
+ ret = report_path_error(m, pattern);
free(m);
return ret;
}
die(_("cannot do a partial commit during a cherry-pick."));
}
- if (list_paths(&partial, !current_head ? NULL : "HEAD", prefix, &pathspec))
+ if (list_paths(&partial, !current_head ? NULL : "HEAD", &pathspec))
exit(1);
discard_cache();
handle_untracked_files_arg(s);
if (all && argc > 0)
- die(_("Paths with -a does not make sense."));
+ die(_("paths '%s ...' with -a does not make sense"),
+ argv[0]);
if (status_format != STATUS_FORMAT_NONE)
dry_run = 1;
repo_init_revisions(the_repository, &rev, prefix);
- if (no_index && argc != i + 2) {
- if (no_index == DIFF_NO_INDEX_IMPLICIT) {
- /*
- * There was no --no-index and there were not two
- * paths. It is possible that the user intended
- * to do an inside-repository operation.
- */
- fprintf(stderr, "Not a git repository\n");
- fprintf(stderr,
- "To compare two paths outside a working tree:\n");
- }
- /* Give the usage message for non-repository usage and exit. */
- usagef("git diff %s <path> <path>",
- no_index == DIFF_NO_INDEX_EXPLICIT ?
- "--no-index" : "[--no-index]");
-
- }
-
/* Set up defaults that will apply to both no-index and regular diffs. */
rev.diffopt.stat_width = -1;
rev.diffopt.stat_graph_width = -1;
/* If this is a no-index diff, just run it and exit there. */
if (no_index)
- diff_no_index(&rev, argc, argv);
+ exit(diff_no_index(&rev, no_index == DIFF_NO_INDEX_IMPLICIT,
+ argc, argv));
+
/*
* Otherwise, we are doing the usual "git" diff; set up any
*mode2 = (int)strtol(p + 1, &p, 8);
if (*p != ' ')
return error("expected ' ', got '%c'", *p);
- if (get_oid_hex(++p, oid1))
- return error("expected object ID, got '%s'", p + 1);
- p += GIT_SHA1_HEXSZ;
+ if (parse_oid_hex(++p, oid1, (const char **)&p))
+ return error("expected object ID, got '%s'", p);
if (*p != ' ')
return error("expected ' ', got '%c'", *p);
- if (get_oid_hex(++p, oid2))
- return error("expected object ID, got '%s'", p + 1);
- p += GIT_SHA1_HEXSZ;
+ if (parse_oid_hex(++p, oid2, (const char **)&p))
+ return error("expected object ID, got '%s'", p);
if (*p != ' ')
return error("expected ' ', got '%c'", *p);
*status = *++p;
int cmd_difftool(int argc, const char **argv, const char *prefix)
{
int use_gui_tool = 0, dir_diff = 0, prompt = -1, symlinks = 0,
- tool_help = 0;
+ tool_help = 0, no_index = 0;
static char *difftool_cmd = NULL, *extcmd = NULL;
struct option builtin_difftool_options[] = {
OPT_BOOL('g', "gui", &use_gui_tool,
"tool returns a non - zero exit code")),
OPT_STRING('x', "extcmd", &extcmd, N_("command"),
N_("specify a custom command for viewing diffs")),
+ OPT_ARGUMENT("no-index", &no_index, N_("passed to `diff`")),
OPT_END()
};
if (tool_help)
return print_tool_help();
- /* NEEDSWORK: once we no longer spawn anything, remove this */
- setenv(GIT_DIR_ENVIRONMENT, absolute_path(get_git_dir()), 1);
- setenv(GIT_WORK_TREE_ENVIRONMENT, absolute_path(get_git_work_tree()), 1);
+ if (!no_index && !startup_info->have_repository)
+ die(_("difftool requires worktree or --no-index"));
+
+ if (!no_index) {
+ setup_work_tree();
+ setenv(GIT_DIR_ENVIRONMENT, absolute_path(get_git_dir()), 1);
+ setenv(GIT_WORK_TREE_ENVIRONMENT, absolute_path(get_git_work_tree()), 1);
+ }
if (use_gui_tool && diff_gui_tool && *diff_gui_tool)
setenv("GIT_DIFF_TOOL", diff_gui_tool, 1);
BUG("unknown protocol version");
}
- ref = fetch_pack(&args, fd, conn, ref, dest, sought, nr_sought,
+ ref = fetch_pack(&args, fd, ref, sought, nr_sought,
&shallow, pack_lockfile_ptr, version);
if (pack_lockfile) {
printf("lock %s\n", pack_lockfile);
OPT_INTEGER( 0 , "count", &maxcount, N_("show only <n> matched refs")),
OPT_STRING( 0 , "format", &format.format, N_("format"), N_("format to use for the output")),
OPT__COLOR(&format.use_color, N_("respect format colors")),
- OPT_CALLBACK(0 , "sort", sorting_tail, N_("key"),
- N_("field name to sort on"), &parse_opt_ref_sorting),
+ OPT_REF_SORT(sorting_tail),
OPT_CALLBACK(0, "points-at", &filter.points_at,
N_("object"), N_("print only refs which points at the given object"),
parse_opt_object_name),
raise(signo);
}
+static int gc_config_is_timestamp_never(const char *var)
+{
+ const char *value;
+ timestamp_t expire;
+
+ if (!git_config_get_value(var, &value) && value) {
+ if (parse_expiry_date(value, &expire))
+ die(_("failed to parse '%s' value '%s'"), var, value);
+ return expire == 0;
+ }
+ return 0;
+}
+
static void gc_config(void)
{
const char *value;
pack_refs = git_config_bool("gc.packrefs", value);
}
+ if (gc_config_is_timestamp_never("gc.reflogexpire") &&
+ gc_config_is_timestamp_never("gc.reflogexpireunreachable"))
+ prune_reflogs = 0;
+
git_config_get_int("gc.aggressivewindow", &aggressive_window);
git_config_get_int("gc.aggressivedepth", &aggressive_depth);
git_config_get_int("gc.auto", &gc_auto_threshold);
int auto_threshold;
int num_loose = 0;
int needed = 0;
-
- if (gc_auto_threshold <= 0)
- return 0;
+ const unsigned hexsz_loose = the_hash_algo->hexsz - 2;
dir = opendir(git_path("objects/17"));
if (!dir)
auto_threshold = DIV_ROUND_UP(gc_auto_threshold, 256);
while ((ent = readdir(dir)) != NULL) {
- if (strspn(ent->d_name, "0123456789abcdef") != 38 ||
- ent->d_name[38] != '\0')
+ if (strspn(ent->d_name, "0123456789abcdef") != hexsz_loose ||
+ ent->d_name[hexsz_loose] != '\0')
continue;
if (++num_loose > auto_threshold) {
needed = 1;
static void gc_before_repack(void)
{
+ /*
+ * We may be called twice, as both the pre- and
+ * post-daemonized phases will call us, but running these
+ * commands more than once is pointless and wasteful.
+ */
+ static int done = 0;
+ if (done++)
+ return;
+
if (pack_refs && run_command_v_opt(pack_refs_cmd.argv, RUN_GIT_CMD))
die(FAILED_RUN, pack_refs_cmd.argv[0]);
if (prune_reflogs && run_command_v_opt(reflog.argv, RUN_GIT_CMD))
die(FAILED_RUN, reflog.argv[0]);
-
- pack_refs = 0;
- prune_reflogs = 0;
}
int cmd_gc(int argc, const char **argv, const char *prefix)
char *content = buffer + RECORDSIZE;
const char *comment;
ssize_t n;
+ long len;
+ char *end;
if (argc != 1)
usage(builtin_get_tar_commit_id_usage);
die_errno("git get-tar-commit-id: EOF before reading tar header");
if (header->typeflag[0] != 'g')
return 1;
- if (!skip_prefix(content, "52 comment=", &comment))
+
+ errno = 0;
+ len = strtol(content, &end, 10);
+ if (errno == ERANGE || end == content || len < 0)
+ return 1;
+ if (!skip_prefix(end, " comment=", &comment))
+ return 1;
+ len -= comment - content;
+ if (len < 1 || !(len % 2) ||
+ hash_algo_by_length((len - 1) / 2) == GIT_HASH_UNKNOWN)
return 1;
- if (write_in_full(1, comment, 41) < 0)
+ if (write_in_full(1, comment, len) < 0)
die_errno("git get-tar-commit-id: write error");
return 0;
unsigned i, max, foreign_nr = 0;
max = get_max_object_index();
- for (i = 0; i < max; i++)
+
+ if (verbose)
+ progress = start_delayed_progress(_("Checking objects"), max);
+
+ for (i = 0; i < max; i++) {
foreign_nr += check_object(get_indexed_object(i));
+ display_progress(progress, i + 1);
+ }
+
+ stop_progress(&progress);
return foreign_nr;
}
* This gives a rough estimate for how many commits we
* will print out in the list.
*/
-static int estimate_commit_count(struct rev_info *rev, struct commit_list *list)
+static int estimate_commit_count(struct commit_list *list)
{
int n = 0;
switch (simplify_commit(revs, commit)) {
case commit_show:
if (show_header) {
- int n = estimate_commit_count(revs, list);
+ int n = estimate_commit_count(list);
show_early_header(revs, "incomplete", n);
show_header = 0;
}
show_early_output = log_show_early;
}
-static void setup_early_output(struct rev_info *rev)
+static void setup_early_output(void)
{
struct sigaction sa;
static void finish_early_output(struct rev_info *rev)
{
- int n = estimate_commit_count(rev, rev->commits);
+ int n = estimate_commit_count(rev->commits);
signal(SIGALRM, SIG_IGN);
show_early_header(rev, "done", n);
}
int saved_dcctc = 0, close_file = rev->diffopt.close_file;
if (rev->early_output)
- setup_early_output(rev);
+ setup_early_output();
if (prepare_revision_walk(rev))
die(_("revision walk setup failed"));
return cmd_log_walk(&rev);
}
-static void show_tagger(char *buf, int len, struct rev_info *rev)
+static void show_tagger(const char *buf, struct rev_info *rev)
{
struct strbuf out = STRBUF_INIT;
struct pretty_print_context pp = {0};
assert(type == OBJ_TAG);
while (offset < size && buf[offset] != '\n') {
int new_offset = offset + 1;
+ const char *ident;
while (new_offset < size && buf[new_offset++] != '\n')
; /* do nothing */
- if (starts_with(buf + offset, "tagger "))
- show_tagger(buf + offset + 7,
- new_offset - offset - 7, rev);
+ if (skip_prefix(buf + offset, "tagger ", &ident))
+ show_tagger(ident, rev);
offset = new_offset;
}
if (debug_mode) {
const struct stat_data *sd = &ce->ce_stat_data;
- printf(" ctime: %d:%d\n", sd->sd_ctime.sec, sd->sd_ctime.nsec);
- printf(" mtime: %d:%d\n", sd->sd_mtime.sec, sd->sd_mtime.nsec);
- printf(" dev: %d\tino: %d\n", sd->sd_dev, sd->sd_ino);
- printf(" uid: %d\tgid: %d\n", sd->sd_uid, sd->sd_gid);
- printf(" size: %d\tflags: %x\n", sd->sd_size, ce->ce_flags);
+ printf(" ctime: %u:%u\n", sd->sd_ctime.sec, sd->sd_ctime.nsec);
+ printf(" mtime: %u:%u\n", sd->sd_mtime.sec, sd->sd_mtime.nsec);
+ printf(" dev: %u\tino: %u\n", sd->sd_dev, sd->sd_ino);
+ printf(" uid: %u\tgid: %u\n", sd->sd_uid, sd->sd_gid);
+ printf(" size: %u\tflags: %x\n", sd->sd_size, ce->ce_flags);
}
}
if (ps_matched) {
int bad;
- bad = report_path_error(ps_matched, &pathspec, prefix);
+ bad = report_path_error(ps_matched, &pathspec);
if (bad)
fprintf(stderr, "Did you forget to 'git add'?\n");
OPT_BIT(0, "refs", &flags, N_("do not show peeled tags"), REF_NORMAL),
OPT_BOOL(0, "get-url", &get_url,
N_("take url.<base>.insteadOf into account")),
- OPT_CALLBACK(0 , "sort", sorting_tail, N_("key"),
- N_("field name to sort on"), &parse_opt_ref_sorting),
+ OPT_REF_SORT(sorting_tail),
OPT_SET_INT_F(0, "exit-code", &status,
N_("exit with exit code 2 if no matching refs are found"),
2, PARSE_OPT_NOCOMPLETE),
static void name_rev_line(char *p, struct name_ref_data *data)
{
struct strbuf buf = STRBUF_INIT;
- int forty = 0;
+ int counter = 0;
char *p_start;
+ const unsigned hexsz = the_hash_algo->hexsz;
+
for (p_start = p; *p; p++) {
#define ishex(x) (isdigit((x)) || ((x) >= 'a' && (x) <= 'f'))
if (!ishex(*p))
- forty = 0;
- else if (++forty == GIT_SHA1_HEXSZ &&
+ counter = 0;
+ else if (++counter == hexsz &&
!ishex(*(p+1))) {
struct object_id oid;
const char *name = NULL;
char c = *(p+1);
int p_len = p - p_start + 1;
- forty = 0;
+ counter = 0;
*(p+1) = 0;
- if (!get_oid(p - (GIT_SHA1_HEXSZ - 1), &oid)) {
+ if (!get_oid(p - (hexsz - 1), &oid)) {
struct object *o =
lookup_object(the_repository,
oid.hash);
continue;
if (data->name_only)
- printf("%.*s%s", p_len - GIT_SHA1_HEXSZ, p_start, name);
+ printf("%.*s%s", p_len - hexsz, p_start, name);
else
printf("%.*s (%s)", p_len, p_start, name);
p_start = p + 1;
if (written != nr_result)
die(_("wrote %"PRIu32" objects while expecting %"PRIu32),
written, nr_result);
+ trace2_data_intmax("pack-objects", the_repository,
+ "write_pack_file/wrote", nr_result);
}
static int no_try_delta(const char *path)
struct object_entry **base_out)
{
struct object_entry *base;
+ struct object_id base_oid;
if (!base_sha1)
return 0;
* even if it was buried too deep in history to make it into the
* packing list.
*/
- if (thin && bitmap_has_sha1_in_uninteresting(bitmap_git, base_sha1)) {
+ oidread(&base_oid, base_sha1);
+ if (thin && bitmap_has_oid_in_uninteresting(bitmap_git, &base_oid)) {
if (use_delta_islands) {
- struct object_id base_oid;
- hashcpy(base_oid.hash, base_sha1);
if (!in_same_island(&delta->idx.oid, &base_oid))
return 0;
}
pl = red = pack_list_difference(local_packs, min);
while (pl) {
printf("%s\n%s\n",
- sha1_pack_index_name(pl->pack->sha1),
+ sha1_pack_index_name(pl->pack->hash),
pl->pack->pack_name);
pl = pl->next;
}
fp = xfopen(filename, "r");
while (strbuf_getline_lf(&sb, fp) != EOF) {
- if (get_oid_hex(sb.buf, &oid))
- continue; /* invalid line: does not start with SHA1 */
- if (starts_with(sb.buf + GIT_SHA1_HEXSZ, "\tnot-for-merge\t"))
+ const char *p;
+ if (parse_oid_hex(sb.buf, &oid, &p))
+ continue; /* invalid line: does not start with object ID */
+ if (starts_with(p, "\tnot-for-merge\t"))
continue; /* ref is not-for-merge */
oid_array_append(merge_heads, &oid);
}
cp.no_stderr = 1;
cp.git_cmd = 1;
- ret = capture_command(&cp, &sb, GIT_SHA1_HEXSZ);
+ ret = capture_command(&cp, &sb, GIT_MAX_HEXSZ);
if (ret)
goto cleanup;
}
/**
- * Given the current HEAD SHA1, the merge head returned from git-fetch and the
+ * Given the current HEAD oid, the merge head returned from git-fetch and the
* fork point calculated by get_rebase_fork_point(), runs git-rebase with the
* appropriate arguments and returns its exit status.
*/
int creation_factor = RANGE_DIFF_CREATION_FACTOR_DEFAULT;
struct diff_options diffopt = { NULL };
int simple_color = -1;
- struct option options[] = {
+ struct option range_diff_options[] = {
OPT_INTEGER(0, "creation-factor", &creation_factor,
N_("Percentage by which creation is weighted")),
OPT_BOOL(0, "no-dual-color", &simple_color,
N_("use simple diff colors")),
OPT_END()
};
- int i, j, res = 0;
+ struct option *options;
+ int res = 0;
struct strbuf range1 = STRBUF_INIT, range2 = STRBUF_INIT;
git_config(git_diff_ui_config, NULL);
repo_diff_setup(the_repository, &diffopt);
+ options = parse_options_concat(range_diff_options, diffopt.parseopts);
argc = parse_options(argc, argv, NULL, options,
- builtin_range_diff_usage, PARSE_OPT_KEEP_UNKNOWN |
- PARSE_OPT_KEEP_DASHDASH | PARSE_OPT_KEEP_ARGV0);
-
- for (i = j = 1; i < argc && strcmp("--", argv[i]); ) {
- int c = diff_opt_parse(&diffopt, argv + i, argc - i, prefix);
+ builtin_range_diff_usage, 0);
- if (!c)
- argv[j++] = argv[i++];
- else
- i += c;
- }
- while (i < argc)
- argv[j++] = argv[i++];
- argc = j;
diff_setup_done(&diffopt);
- /* Make sure that there are no unparsed options */
- argc = parse_options(argc, argv, NULL,
- options + ARRAY_SIZE(options) - 1, /* OPT_END */
- builtin_range_diff_usage, 0);
-
/* force color when --dual-color was used */
if (!simple_color)
diffopt.use_color = 1;
error(_("need two commit ranges"));
usage_with_options(builtin_range_diff_usage, options);
}
+ FREE_AND_NULL(options);
res = show_range_diff(range1.buf, range2.buf, creation_factor,
simple_color < 1, &diffopt);
{ OPTION_CALLBACK, 0, "recurse-submodules", NULL,
"checkout", "control recursive updating of submodules",
PARSE_OPT_OPTARG, option_parse_recurse_submodules_worktree_updater },
+ OPT__QUIET(&opts.quiet, N_("suppress feedback messages")),
OPT_END()
};
static GIT_PATH_FUNC(path_squash_onto, "rebase-merge/squash-onto")
static GIT_PATH_FUNC(path_interactive, "rebase-merge/interactive")
+static int add_exec_commands(struct string_list *commands)
+{
+ const char *todo_file = rebase_path_todo();
+ struct todo_list todo_list = TODO_LIST_INIT;
+ int res;
+
+ if (strbuf_read_file(&todo_list.buf, todo_file, 0) < 0)
+ return error_errno(_("could not read '%s'."), todo_file);
+
+ if (todo_list_parse_insn_buffer(the_repository, todo_list.buf.buf,
+ &todo_list)) {
+ todo_list_release(&todo_list);
+ return error(_("unusable todo list: '%s'"), todo_file);
+ }
+
+ todo_list_add_exec_commands(&todo_list, commands);
+ res = todo_list_write_to_file(the_repository, &todo_list,
+ todo_file, NULL, NULL, -1, 0);
+ todo_list_release(&todo_list);
+
+ if (res)
+ return error_errno(_("could not write '%s'."), todo_file);
+ return 0;
+}
+
+static int rearrange_squash_in_todo_file(void)
+{
+ const char *todo_file = rebase_path_todo();
+ struct todo_list todo_list = TODO_LIST_INIT;
+ int res = 0;
+
+ if (strbuf_read_file(&todo_list.buf, todo_file, 0) < 0)
+ return error_errno(_("could not read '%s'."), todo_file);
+ if (todo_list_parse_insn_buffer(the_repository, todo_list.buf.buf,
+ &todo_list)) {
+ todo_list_release(&todo_list);
+ return error(_("unusable todo list: '%s'"), todo_file);
+ }
+
+ res = todo_list_rearrange_squash(&todo_list);
+ if (!res)
+ res = todo_list_write_to_file(the_repository, &todo_list,
+ todo_file, NULL, NULL, -1, 0);
+
+ todo_list_release(&todo_list);
+
+ if (res)
+ return error_errno(_("could not write '%s'."), todo_file);
+ return 0;
+}
+
+static int transform_todo_file(unsigned flags)
+{
+ const char *todo_file = rebase_path_todo();
+ struct todo_list todo_list = TODO_LIST_INIT;
+ int res;
+
+ if (strbuf_read_file(&todo_list.buf, todo_file, 0) < 0)
+ return error_errno(_("could not read '%s'."), todo_file);
+
+ if (todo_list_parse_insn_buffer(the_repository, todo_list.buf.buf,
+ &todo_list)) {
+ todo_list_release(&todo_list);
+ return error(_("unusable todo list: '%s'"), todo_file);
+ }
+
+ res = todo_list_write_to_file(the_repository, &todo_list, todo_file,
+ NULL, NULL, -1, flags);
+ todo_list_release(&todo_list);
+
+ if (res)
+ return error_errno(_("could not write '%s'."), todo_file);
+ return 0;
+}
+
+static int edit_todo_file(unsigned flags)
+{
+ const char *todo_file = rebase_path_todo();
+ struct todo_list todo_list = TODO_LIST_INIT,
+ new_todo = TODO_LIST_INIT;
+ int res = 0;
+
+ if (strbuf_read_file(&todo_list.buf, todo_file, 0) < 0)
+ return error_errno(_("could not read '%s'."), todo_file);
+
+ strbuf_stripspace(&todo_list.buf, 1);
+ res = edit_todo_list(the_repository, &todo_list, &new_todo, NULL, NULL, flags);
+ if (!res && todo_list_write_to_file(the_repository, &new_todo, todo_file,
+ NULL, NULL, -1, flags & ~(TODO_LIST_SHORTEN_IDS)))
+ res = error_errno(_("could not write '%s'"), todo_file);
+
+ todo_list_release(&todo_list);
+ todo_list_release(&new_todo);
+
+ return res;
+}
+
static int get_revision_ranges(const char *upstream, const char *onto,
const char **head_hash,
char **revisions, char **shortrevisions)
const char *onto, const char *onto_name,
const char *squash_onto, const char *head_name,
const char *restrict_revision, char *raw_strategies,
- const char *cmd, unsigned autosquash)
+ struct string_list *commands, unsigned autosquash)
{
int ret;
const char *head_hash = NULL;
char *revisions = NULL, *shortrevisions = NULL;
struct argv_array make_script_args = ARGV_ARRAY_INIT;
- FILE *todo_list;
+ struct todo_list todo_list = TODO_LIST_INIT;
if (prepare_branch_to_be_rebased(opts, switch_to))
return -1;
if (!upstream && squash_onto)
write_file(path_squash_onto(), "%s\n", squash_onto);
- todo_list = fopen(rebase_path_todo(), "w");
- if (!todo_list) {
- free(revisions);
- free(shortrevisions);
-
- return error_errno(_("could not open %s"), rebase_path_todo());
- }
-
argv_array_pushl(&make_script_args, "", revisions, NULL);
if (restrict_revision)
argv_array_push(&make_script_args, restrict_revision);
- ret = sequencer_make_script(the_repository, todo_list,
+ ret = sequencer_make_script(the_repository, &todo_list.buf,
make_script_args.argc, make_script_args.argv,
flags);
- fclose(todo_list);
if (ret)
error(_("could not generate todo list"));
else {
discard_cache();
- ret = complete_action(the_repository, opts, flags,
- shortrevisions, onto_name, onto,
- head_hash, cmd, autosquash);
+ if (todo_list_parse_insn_buffer(the_repository, todo_list.buf.buf,
+ &todo_list))
+ BUG("unusable todo list");
+
+ ret = complete_action(the_repository, opts, flags, shortrevisions, onto_name,
+ onto, head_hash, commands, autosquash, &todo_list);
}
free(revisions);
free(shortrevisions);
+ todo_list_release(&todo_list);
argv_array_clear(&make_script_args);
return ret;
const char *onto = NULL, *onto_name = NULL, *restrict_revision = NULL,
*squash_onto = NULL, *upstream = NULL, *head_name = NULL,
*switch_to = NULL, *cmd = NULL;
+ struct string_list commands = STRING_LIST_INIT_DUP;
char *raw_strategies = NULL;
enum {
NONE = 0, CONTINUE, SKIP, EDIT_TODO, SHOW_CURRENT_PATCH,
warning(_("--[no-]rebase-cousins has no effect without "
"--rebase-merges"));
+ if (cmd && *cmd) {
+ string_list_split(&commands, cmd, '\n', -1);
+
+ /*
+ * rebase.c adds a newline to cmd after every command,
+ * so here the last command is always empty.
+ */
+ string_list_remove_empty_items(&commands, 0);
+ }
+
switch (command) {
case NONE:
if (!onto && !upstream)
ret = do_interactive_rebase(&opts, flags, switch_to, upstream, onto,
onto_name, squash_onto, head_name, restrict_revision,
- raw_strategies, cmd, autosquash);
+ raw_strategies, &commands, autosquash);
break;
case SKIP: {
struct string_list merge_rr = STRING_LIST_INIT_DUP;
break;
}
case EDIT_TODO:
- ret = edit_todo_list(the_repository, flags);
+ ret = edit_todo_file(flags);
break;
case SHOW_CURRENT_PATCH: {
struct child_process cmd = CHILD_PROCESS_INIT;
}
case SHORTEN_OIDS:
case EXPAND_OIDS:
- ret = transform_todos(the_repository, flags);
+ ret = transform_todo_file(flags);
break;
case CHECK_TODO_LIST:
- ret = check_todo_list(the_repository);
+ ret = check_todo_list_from_file(the_repository);
break;
case REARRANGE_SQUASH:
- ret = rearrange_squash(the_repository);
+ ret = rearrange_squash_in_todo_file();
break;
case ADD_EXEC:
- ret = sequencer_add_exec_commands(the_repository, cmd);
+ ret = add_exec_commands(&commands);
break;
default:
BUG("invalid command '%d'", command);
}
+ string_list_clear(&commands, 0);
return !!ret;
}
int flags = quiet ? REFRESH_QUIET : REFRESH_IN_PORCELAIN;
if (read_from_tree(&pathspec, &oid, intent_to_add))
return 1;
+ the_index.updated_skipworktree = 1;
if (!quiet && get_git_work_tree()) {
uint64_t t_begin, t_delta_in_ms;
const struct cache_entry *ce;
const char *name = list.entry[i].name;
struct object_id oid;
- unsigned mode;
+ unsigned short mode;
int local_changes = 0;
int staged_changes = 0;
--- /dev/null
+#define USE_THE_INDEX_COMPATIBILITY_MACROS
+#include "builtin.h"
+#include "config.h"
+#include "parse-options.h"
+#include "refs.h"
+#include "lockfile.h"
+#include "cache-tree.h"
+#include "unpack-trees.h"
+#include "merge-recursive.h"
+#include "argv-array.h"
+#include "run-command.h"
+#include "dir.h"
+#include "rerere.h"
+#include "revision.h"
+#include "log-tree.h"
+#include "diffcore.h"
+#include "exec-cmd.h"
+
+#define INCLUDE_ALL_FILES 2
+
+static const char * const git_stash_usage[] = {
+ N_("git stash list [<options>]"),
+ N_("git stash show [<options>] [<stash>]"),
+ N_("git stash drop [-q|--quiet] [<stash>]"),
+ N_("git stash ( pop | apply ) [--index] [-q|--quiet] [<stash>]"),
+ N_("git stash branch <branchname> [<stash>]"),
+ N_("git stash clear"),
+ N_("git stash [push [-p|--patch] [-k|--[no-]keep-index] [-q|--quiet]\n"
+ " [-u|--include-untracked] [-a|--all] [-m|--message <message>]\n"
+ " [--] [<pathspec>...]]"),
+ N_("git stash save [-p|--patch] [-k|--[no-]keep-index] [-q|--quiet]\n"
+ " [-u|--include-untracked] [-a|--all] [<message>]"),
+ NULL
+};
+
+static const char * const git_stash_list_usage[] = {
+ N_("git stash list [<options>]"),
+ NULL
+};
+
+static const char * const git_stash_show_usage[] = {
+ N_("git stash show [<options>] [<stash>]"),
+ NULL
+};
+
+static const char * const git_stash_drop_usage[] = {
+ N_("git stash drop [-q|--quiet] [<stash>]"),
+ NULL
+};
+
+static const char * const git_stash_pop_usage[] = {
+ N_("git stash pop [--index] [-q|--quiet] [<stash>]"),
+ NULL
+};
+
+static const char * const git_stash_apply_usage[] = {
+ N_("git stash apply [--index] [-q|--quiet] [<stash>]"),
+ NULL
+};
+
+static const char * const git_stash_branch_usage[] = {
+ N_("git stash branch <branchname> [<stash>]"),
+ NULL
+};
+
+static const char * const git_stash_clear_usage[] = {
+ N_("git stash clear"),
+ NULL
+};
+
+static const char * const git_stash_store_usage[] = {
+ N_("git stash store [-m|--message <message>] [-q|--quiet] <commit>"),
+ NULL
+};
+
+static const char * const git_stash_push_usage[] = {
+ N_("git stash [push [-p|--patch] [-k|--[no-]keep-index] [-q|--quiet]\n"
+ " [-u|--include-untracked] [-a|--all] [-m|--message <message>]\n"
+ " [--] [<pathspec>...]]"),
+ NULL
+};
+
+static const char * const git_stash_save_usage[] = {
+ N_("git stash save [-p|--patch] [-k|--[no-]keep-index] [-q|--quiet]\n"
+ " [-u|--include-untracked] [-a|--all] [<message>]"),
+ NULL
+};
+
+static const char *ref_stash = "refs/stash";
+static struct strbuf stash_index_path = STRBUF_INIT;
+
+/*
+ * w_commit is set to the commit containing the working tree
+ * b_commit is set to the base commit
+ * i_commit is set to the commit containing the index tree
+ * u_commit is set to the commit containing the untracked files tree
+ * w_tree is set to the working tree
+ * b_tree is set to the base tree
+ * i_tree is set to the index tree
+ * u_tree is set to the untracked files tree
+ */
+struct stash_info {
+ struct object_id w_commit;
+ struct object_id b_commit;
+ struct object_id i_commit;
+ struct object_id u_commit;
+ struct object_id w_tree;
+ struct object_id b_tree;
+ struct object_id i_tree;
+ struct object_id u_tree;
+ struct strbuf revision;
+ int is_stash_ref;
+ int has_u;
+};
+
+static void free_stash_info(struct stash_info *info)
+{
+ strbuf_release(&info->revision);
+}
+
+static void assert_stash_like(struct stash_info *info, const char *revision)
+{
+ if (get_oidf(&info->b_commit, "%s^1", revision) ||
+ get_oidf(&info->w_tree, "%s:", revision) ||
+ get_oidf(&info->b_tree, "%s^1:", revision) ||
+ get_oidf(&info->i_tree, "%s^2:", revision))
+ die(_("'%s' is not a stash-like commit"), revision);
+}
+
+static int get_stash_info(struct stash_info *info, int argc, const char **argv)
+{
+ int ret;
+ char *end_of_rev;
+ char *expanded_ref;
+ const char *revision;
+ const char *commit = NULL;
+ struct object_id dummy;
+ struct strbuf symbolic = STRBUF_INIT;
+
+ if (argc > 1) {
+ int i;
+ struct strbuf refs_msg = STRBUF_INIT;
+
+ for (i = 0; i < argc; i++)
+ strbuf_addf(&refs_msg, " '%s'", argv[i]);
+
+ fprintf_ln(stderr, _("Too many revisions specified:%s"),
+ refs_msg.buf);
+ strbuf_release(&refs_msg);
+
+ return -1;
+ }
+
+ if (argc == 1)
+ commit = argv[0];
+
+ strbuf_init(&info->revision, 0);
+ if (!commit) {
+ if (!ref_exists(ref_stash)) {
+ free_stash_info(info);
+ fprintf_ln(stderr, _("No stash entries found."));
+ return -1;
+ }
+
+ strbuf_addf(&info->revision, "%s@{0}", ref_stash);
+ } else if (strspn(commit, "0123456789") == strlen(commit)) {
+ strbuf_addf(&info->revision, "%s@{%s}", ref_stash, commit);
+ } else {
+ strbuf_addstr(&info->revision, commit);
+ }
+
+ revision = info->revision.buf;
+
+ if (get_oid(revision, &info->w_commit)) {
+ error(_("%s is not a valid reference"), revision);
+ free_stash_info(info);
+ return -1;
+ }
+
+ assert_stash_like(info, revision);
+
+ info->has_u = !get_oidf(&info->u_tree, "%s^3:", revision);
+
+ end_of_rev = strchrnul(revision, '@');
+ strbuf_add(&symbolic, revision, end_of_rev - revision);
+
+ ret = dwim_ref(symbolic.buf, symbolic.len, &dummy, &expanded_ref);
+ strbuf_release(&symbolic);
+ switch (ret) {
+ case 0: /* Not found, but valid ref */
+ info->is_stash_ref = 0;
+ break;
+ case 1:
+ info->is_stash_ref = !strcmp(expanded_ref, ref_stash);
+ break;
+ default: /* Invalid or ambiguous */
+ free_stash_info(info);
+ }
+
+ free(expanded_ref);
+ return !(ret == 0 || ret == 1);
+}
+
+static int do_clear_stash(void)
+{
+ struct object_id obj;
+ if (get_oid(ref_stash, &obj))
+ return 0;
+
+ return delete_ref(NULL, ref_stash, &obj, 0);
+}
+
+static int clear_stash(int argc, const char **argv, const char *prefix)
+{
+ struct option options[] = {
+ OPT_END()
+ };
+
+ argc = parse_options(argc, argv, prefix, options,
+ git_stash_clear_usage,
+ PARSE_OPT_STOP_AT_NON_OPTION);
+
+ if (argc)
+ return error(_("git stash clear with parameters is "
+ "unimplemented"));
+
+ return do_clear_stash();
+}
+
+static int reset_tree(struct object_id *i_tree, int update, int reset)
+{
+ int nr_trees = 1;
+ struct unpack_trees_options opts;
+ struct tree_desc t[MAX_UNPACK_TREES];
+ struct tree *tree;
+ struct lock_file lock_file = LOCK_INIT;
+
+ read_cache_preload(NULL);
+ if (refresh_cache(REFRESH_QUIET))
+ return -1;
+
+ hold_locked_index(&lock_file, LOCK_DIE_ON_ERROR);
+
+ memset(&opts, 0, sizeof(opts));
+
+ tree = parse_tree_indirect(i_tree);
+ if (parse_tree(tree))
+ return -1;
+
+ init_tree_desc(t, tree->buffer, tree->size);
+
+ opts.head_idx = 1;
+ opts.src_index = &the_index;
+ opts.dst_index = &the_index;
+ opts.merge = 1;
+ opts.reset = reset;
+ opts.update = update;
+ opts.fn = oneway_merge;
+
+ if (unpack_trees(nr_trees, t, &opts))
+ return -1;
+
+ if (write_locked_index(&the_index, &lock_file, COMMIT_LOCK))
+ return error(_("unable to write new index file"));
+
+ return 0;
+}
+
+static int diff_tree_binary(struct strbuf *out, struct object_id *w_commit)
+{
+ struct child_process cp = CHILD_PROCESS_INIT;
+ const char *w_commit_hex = oid_to_hex(w_commit);
+
+ /*
+ * Diff-tree would not be very hard to replace with a native function;
+ * however, it should be done together with apply_cached.
+ */
+ cp.git_cmd = 1;
+ argv_array_pushl(&cp.args, "diff-tree", "--binary", NULL);
+ argv_array_pushf(&cp.args, "%s^2^..%s^2", w_commit_hex, w_commit_hex);
+
+ return pipe_command(&cp, NULL, 0, out, 0, NULL, 0);
+}
+
+static int apply_cached(struct strbuf *out)
+{
+ struct child_process cp = CHILD_PROCESS_INIT;
+
+ /*
+ * Apply currently only reads either from stdin or a file, thus
+ * apply_all_patches would have to be updated to optionally take a
+ * buffer.
+ */
+ cp.git_cmd = 1;
+ argv_array_pushl(&cp.args, "apply", "--cached", NULL);
+ return pipe_command(&cp, out->buf, out->len, NULL, 0, NULL, 0);
+}
+
+static int reset_head(void)
+{
+ struct child_process cp = CHILD_PROCESS_INIT;
+
+ /*
+ * Reset is overall quite simple, however there is no current public
+ * API for resetting.
+ */
+ cp.git_cmd = 1;
+ argv_array_push(&cp.args, "reset");
+
+ return run_command(&cp);
+}
+
+static void add_diff_to_buf(struct diff_queue_struct *q,
+ struct diff_options *options,
+ void *data)
+{
+ int i;
+
+ for (i = 0; i < q->nr; i++) {
+ strbuf_addstr(data, q->queue[i]->one->path);
+
+ /* NUL-terminate: will be fed to update-index -z */
+ strbuf_addch(data, '\0');
+ }
+}
+
+static int get_newly_staged(struct strbuf *out, struct object_id *c_tree)
+{
+ struct child_process cp = CHILD_PROCESS_INIT;
+ const char *c_tree_hex = oid_to_hex(c_tree);
+
+ /*
+ * diff-index is very similar to diff-tree above, and should be
+ * converted together with update_index.
+ */
+ cp.git_cmd = 1;
+ argv_array_pushl(&cp.args, "diff-index", "--cached", "--name-only",
+ "--diff-filter=A", NULL);
+ argv_array_push(&cp.args, c_tree_hex);
+ return pipe_command(&cp, NULL, 0, out, 0, NULL, 0);
+}
+
+static int update_index(struct strbuf *out)
+{
+ struct child_process cp = CHILD_PROCESS_INIT;
+
+ /*
+ * Update-index is very complicated and may need to have a public
+ * function exposed in order to remove this forking.
+ */
+ cp.git_cmd = 1;
+ argv_array_pushl(&cp.args, "update-index", "--add", "--stdin", NULL);
+ return pipe_command(&cp, out->buf, out->len, NULL, 0, NULL, 0);
+}
+
+static int restore_untracked(struct object_id *u_tree)
+{
+ int res;
+ struct child_process cp = CHILD_PROCESS_INIT;
+
+ /*
+ * We need to restore files from a given index, but without
+ * affecting the current index, so we use GIT_INDEX_FILE with
+ * run_command to fork processes that will not interfere.
+ */
+ cp.git_cmd = 1;
+ argv_array_push(&cp.args, "read-tree");
+ argv_array_push(&cp.args, oid_to_hex(u_tree));
+ argv_array_pushf(&cp.env_array, "GIT_INDEX_FILE=%s",
+ stash_index_path.buf);
+ if (run_command(&cp)) {
+ remove_path(stash_index_path.buf);
+ return -1;
+ }
+
+ child_process_init(&cp);
+ cp.git_cmd = 1;
+ argv_array_pushl(&cp.args, "checkout-index", "--all", NULL);
+ argv_array_pushf(&cp.env_array, "GIT_INDEX_FILE=%s",
+ stash_index_path.buf);
+
+ res = run_command(&cp);
+ remove_path(stash_index_path.buf);
+ return res;
+}
+
+static int do_apply_stash(const char *prefix, struct stash_info *info,
+ int index, int quiet)
+{
+ int ret;
+ int has_index = index;
+ struct merge_options o;
+ struct object_id c_tree;
+ struct object_id index_tree;
+ struct commit *result;
+ const struct object_id *bases[1];
+
+ read_cache_preload(NULL);
+ if (refresh_cache(REFRESH_QUIET))
+ return -1;
+
+ if (write_cache_as_tree(&c_tree, 0, NULL))
+ return error(_("cannot apply a stash in the middle of a merge"));
+
+ if (index) {
+ if (oideq(&info->b_tree, &info->i_tree) ||
+ oideq(&c_tree, &info->i_tree)) {
+ has_index = 0;
+ } else {
+ struct strbuf out = STRBUF_INIT;
+
+ if (diff_tree_binary(&out, &info->w_commit)) {
+ strbuf_release(&out);
+ return error(_("could not generate diff %s^!."),
+ oid_to_hex(&info->w_commit));
+ }
+
+ ret = apply_cached(&out);
+ strbuf_release(&out);
+ if (ret)
+ return error(_("conflicts in index."
+ "Try without --index."));
+
+ discard_cache();
+ read_cache();
+ if (write_cache_as_tree(&index_tree, 0, NULL))
+ return error(_("could not save index tree"));
+
+ reset_head();
+ }
+ }
+
+ if (info->has_u && restore_untracked(&info->u_tree))
+ return error(_("could not restore untracked files from stash"));
+
+ init_merge_options(&o, the_repository);
+
+ o.branch1 = "Updated upstream";
+ o.branch2 = "Stashed changes";
+
+ if (oideq(&info->b_tree, &c_tree))
+ o.branch1 = "Version stash was based on";
+
+ if (quiet)
+ o.verbosity = 0;
+
+ if (o.verbosity >= 3)
+ printf_ln(_("Merging %s with %s"), o.branch1, o.branch2);
+
+ bases[0] = &info->b_tree;
+
+ ret = merge_recursive_generic(&o, &c_tree, &info->w_tree, 1, bases,
+ &result);
+ if (ret) {
+ rerere(0);
+
+ if (index)
+ fprintf_ln(stderr, _("Index was not unstashed."));
+
+ return ret;
+ }
+
+ if (has_index) {
+ if (reset_tree(&index_tree, 0, 0))
+ return -1;
+ } else {
+ struct strbuf out = STRBUF_INIT;
+
+ if (get_newly_staged(&out, &c_tree)) {
+ strbuf_release(&out);
+ return -1;
+ }
+
+ if (reset_tree(&c_tree, 0, 1)) {
+ strbuf_release(&out);
+ return -1;
+ }
+
+ ret = update_index(&out);
+ strbuf_release(&out);
+ if (ret)
+ return -1;
+
+ discard_cache();
+ }
+
+ if (quiet) {
+ if (refresh_cache(REFRESH_QUIET))
+ warning("could not refresh index");
+ } else {
+ struct child_process cp = CHILD_PROCESS_INIT;
+
+ /*
+ * Status is quite simple and could be replaced with calls to
+ * wt_status in the future, but it adds complexities which may
+ * require more tests.
+ */
+ cp.git_cmd = 1;
+ cp.dir = prefix;
+ argv_array_push(&cp.args, "status");
+ run_command(&cp);
+ }
+
+ return 0;
+}
+
+static int apply_stash(int argc, const char **argv, const char *prefix)
+{
+ int ret;
+ int quiet = 0;
+ int index = 0;
+ struct stash_info info;
+ struct option options[] = {
+ OPT__QUIET(&quiet, N_("be quiet, only report errors")),
+ OPT_BOOL(0, "index", &index,
+ N_("attempt to recreate the index")),
+ OPT_END()
+ };
+
+ argc = parse_options(argc, argv, prefix, options,
+ git_stash_apply_usage, 0);
+
+ if (get_stash_info(&info, argc, argv))
+ return -1;
+
+ ret = do_apply_stash(prefix, &info, index, quiet);
+ free_stash_info(&info);
+ return ret;
+}
+
+static int do_drop_stash(struct stash_info *info, int quiet)
+{
+ int ret;
+ struct child_process cp_reflog = CHILD_PROCESS_INIT;
+ struct child_process cp = CHILD_PROCESS_INIT;
+
+ /*
+ * reflog does not provide a simple function for deleting refs. One will
+ * need to be added to avoid implementing too much reflog code here.
+ */
+
+ cp_reflog.git_cmd = 1;
+ argv_array_pushl(&cp_reflog.args, "reflog", "delete", "--updateref",
+ "--rewrite", NULL);
+ argv_array_push(&cp_reflog.args, info->revision.buf);
+ ret = run_command(&cp_reflog);
+ if (!ret) {
+ if (!quiet)
+ printf_ln(_("Dropped %s (%s)"), info->revision.buf,
+ oid_to_hex(&info->w_commit));
+ } else {
+ return error(_("%s: Could not drop stash entry"),
+ info->revision.buf);
+ }
+
+ /*
+ * This could easily be replaced by get_oid, but currently it will throw
+ * a fatal error when a reflog is empty, which we cannot recover from.
+ */
+ cp.git_cmd = 1;
+ /* Even though --quiet is specified, rev-parse still outputs the hash */
+ cp.no_stdout = 1;
+ argv_array_pushl(&cp.args, "rev-parse", "--verify", "--quiet", NULL);
+ argv_array_pushf(&cp.args, "%s@{0}", ref_stash);
+ ret = run_command(&cp);
+
+ /* do_clear_stash if we just dropped the last stash entry */
+ if (ret)
+ do_clear_stash();
+
+ return 0;
+}
+
+static void assert_stash_ref(struct stash_info *info)
+{
+ if (!info->is_stash_ref) {
+ error(_("'%s' is not a stash reference"), info->revision.buf);
+ free_stash_info(info);
+ exit(1);
+ }
+}
+
+static int drop_stash(int argc, const char **argv, const char *prefix)
+{
+ int ret;
+ int quiet = 0;
+ struct stash_info info;
+ struct option options[] = {
+ OPT__QUIET(&quiet, N_("be quiet, only report errors")),
+ OPT_END()
+ };
+
+ argc = parse_options(argc, argv, prefix, options,
+ git_stash_drop_usage, 0);
+
+ if (get_stash_info(&info, argc, argv))
+ return -1;
+
+ assert_stash_ref(&info);
+
+ ret = do_drop_stash(&info, quiet);
+ free_stash_info(&info);
+ return ret;
+}
+
+static int pop_stash(int argc, const char **argv, const char *prefix)
+{
+ int ret;
+ int index = 0;
+ int quiet = 0;
+ struct stash_info info;
+ struct option options[] = {
+ OPT__QUIET(&quiet, N_("be quiet, only report errors")),
+ OPT_BOOL(0, "index", &index,
+ N_("attempt to recreate the index")),
+ OPT_END()
+ };
+
+ argc = parse_options(argc, argv, prefix, options,
+ git_stash_pop_usage, 0);
+
+ if (get_stash_info(&info, argc, argv))
+ return -1;
+
+ assert_stash_ref(&info);
+ if ((ret = do_apply_stash(prefix, &info, index, quiet)))
+ printf_ln(_("The stash entry is kept in case "
+ "you need it again."));
+ else
+ ret = do_drop_stash(&info, quiet);
+
+ free_stash_info(&info);
+ return ret;
+}
+
+static int branch_stash(int argc, const char **argv, const char *prefix)
+{
+ int ret;
+ const char *branch = NULL;
+ struct stash_info info;
+ struct child_process cp = CHILD_PROCESS_INIT;
+ struct option options[] = {
+ OPT_END()
+ };
+
+ argc = parse_options(argc, argv, prefix, options,
+ git_stash_branch_usage, 0);
+
+ if (!argc) {
+ fprintf_ln(stderr, _("No branch name specified"));
+ return -1;
+ }
+
+ branch = argv[0];
+
+ if (get_stash_info(&info, argc - 1, argv + 1))
+ return -1;
+
+ cp.git_cmd = 1;
+ argv_array_pushl(&cp.args, "checkout", "-b", NULL);
+ argv_array_push(&cp.args, branch);
+ argv_array_push(&cp.args, oid_to_hex(&info.b_commit));
+ ret = run_command(&cp);
+ if (!ret)
+ ret = do_apply_stash(prefix, &info, 1, 0);
+ if (!ret && info.is_stash_ref)
+ ret = do_drop_stash(&info, 0);
+
+ free_stash_info(&info);
+
+ return ret;
+}
+
+static int list_stash(int argc, const char **argv, const char *prefix)
+{
+ struct child_process cp = CHILD_PROCESS_INIT;
+ struct option options[] = {
+ OPT_END()
+ };
+
+ argc = parse_options(argc, argv, prefix, options,
+ git_stash_list_usage,
+ PARSE_OPT_KEEP_UNKNOWN);
+
+ if (!ref_exists(ref_stash))
+ return 0;
+
+ cp.git_cmd = 1;
+ argv_array_pushl(&cp.args, "log", "--format=%gd: %gs", "-g",
+ "--first-parent", "-m", NULL);
+ argv_array_pushv(&cp.args, argv);
+ argv_array_push(&cp.args, ref_stash);
+ argv_array_push(&cp.args, "--");
+ return run_command(&cp);
+}
+
+static int show_stat = 1;
+static int show_patch;
+
+static int git_stash_config(const char *var, const char *value, void *cb)
+{
+ if (!strcmp(var, "stash.showstat")) {
+ show_stat = git_config_bool(var, value);
+ return 0;
+ }
+ if (!strcmp(var, "stash.showpatch")) {
+ show_patch = git_config_bool(var, value);
+ return 0;
+ }
+ return git_default_config(var, value, cb);
+}
+
+static int show_stash(int argc, const char **argv, const char *prefix)
+{
+ int i;
+ int opts = 0;
+ int ret = 0;
+ struct stash_info info;
+ struct rev_info rev;
+ struct argv_array stash_args = ARGV_ARRAY_INIT;
+ struct option options[] = {
+ OPT_END()
+ };
+
+ init_diff_ui_defaults();
+ git_config(git_diff_ui_config, NULL);
+ init_revisions(&rev, prefix);
+
+ for (i = 1; i < argc; i++) {
+ if (argv[i][0] != '-')
+ argv_array_push(&stash_args, argv[i]);
+ else
+ opts++;
+ }
+
+ ret = get_stash_info(&info, stash_args.argc, stash_args.argv);
+ argv_array_clear(&stash_args);
+ if (ret)
+ return -1;
+
+ /*
+ * The config settings are applied only if no options are passed.
+ */
+ if (!opts) {
+ git_config(git_stash_config, NULL);
+ if (show_stat)
+ rev.diffopt.output_format = DIFF_FORMAT_DIFFSTAT;
+
+ if (show_patch)
+ rev.diffopt.output_format |= DIFF_FORMAT_PATCH;
+
+ if (!show_stat && !show_patch) {
+ free_stash_info(&info);
+ return 0;
+ }
+ }
+
+ argc = setup_revisions(argc, argv, &rev, NULL);
+ if (argc > 1) {
+ free_stash_info(&info);
+ usage_with_options(git_stash_show_usage, options);
+ }
+ if (!rev.diffopt.output_format) {
+ rev.diffopt.output_format = DIFF_FORMAT_PATCH;
+ diff_setup_done(&rev.diffopt);
+ }
+
+ rev.diffopt.flags.recursive = 1;
+ setup_diff_pager(&rev.diffopt);
+ diff_tree_oid(&info.b_commit, &info.w_commit, "", &rev.diffopt);
+ log_tree_diff_flush(&rev);
+
+ free_stash_info(&info);
+ return diff_result_code(&rev.diffopt, 0);
+}
+
+static int do_store_stash(const struct object_id *w_commit, const char *stash_msg,
+ int quiet)
+{
+ if (!stash_msg)
+ stash_msg = "Created via \"git stash store\".";
+
+ if (update_ref(stash_msg, ref_stash, w_commit, NULL,
+ REF_FORCE_CREATE_REFLOG,
+ quiet ? UPDATE_REFS_QUIET_ON_ERR :
+ UPDATE_REFS_MSG_ON_ERR)) {
+ if (!quiet) {
+ fprintf_ln(stderr, _("Cannot update %s with %s"),
+ ref_stash, oid_to_hex(w_commit));
+ }
+ return -1;
+ }
+
+ return 0;
+}
+
+static int store_stash(int argc, const char **argv, const char *prefix)
+{
+ int quiet = 0;
+ const char *stash_msg = NULL;
+ struct object_id obj;
+ struct object_context dummy;
+ struct option options[] = {
+ OPT__QUIET(&quiet, N_("be quiet")),
+ OPT_STRING('m', "message", &stash_msg, "message",
+ N_("stash message")),
+ OPT_END()
+ };
+
+ argc = parse_options(argc, argv, prefix, options,
+ git_stash_store_usage,
+ PARSE_OPT_KEEP_UNKNOWN);
+
+ if (argc != 1) {
+ if (!quiet)
+ fprintf_ln(stderr, _("\"git stash store\" requires one "
+ "<commit> argument"));
+ return -1;
+ }
+
+ if (get_oid_with_context(the_repository,
+ argv[0], quiet ? GET_OID_QUIETLY : 0, &obj,
+ &dummy)) {
+ if (!quiet)
+ fprintf_ln(stderr, _("Cannot update %s with %s"),
+ ref_stash, argv[0]);
+ return -1;
+ }
+
+ return do_store_stash(&obj, stash_msg, quiet);
+}
+
+static void add_pathspecs(struct argv_array *args,
+ const struct pathspec *ps)
+{
+ int i;
+
+ for (i = 0; i < ps->nr; i++)
+ argv_array_push(args, ps->items[i].original);
+}
+
+/*
+ * `untracked_files` will be filled with the names of untracked files.
+ * The return value is:
+ *
+ * = 0 if there are no untracked files
+ * > 0 if there are untracked files
+ */
+static int get_untracked_files(const struct pathspec *ps, int include_untracked,
+ struct strbuf *untracked_files)
+{
+ int i;
+ int max_len;
+ int found = 0;
+ char *seen;
+ struct dir_struct dir;
+
+ memset(&dir, 0, sizeof(dir));
+ if (include_untracked != INCLUDE_ALL_FILES)
+ setup_standard_excludes(&dir);
+
+ seen = xcalloc(ps->nr, 1);
+
+ max_len = fill_directory(&dir, the_repository->index, ps);
+ for (i = 0; i < dir.nr; i++) {
+ struct dir_entry *ent = dir.entries[i];
+ if (dir_path_match(&the_index, ent, ps, max_len, seen)) {
+ found++;
+ strbuf_addstr(untracked_files, ent->name);
+ /* NUL-terminate: will be fed to update-index -z */
+ strbuf_addch(untracked_files, '\0');
+ }
+ free(ent);
+ }
+
+ free(seen);
+ free(dir.entries);
+ free(dir.ignored);
+ clear_directory(&dir);
+ return found;
+}
+
+/*
+ * The return value of `check_changes_tracked_files()` can be:
+ *
+ * < 0 if there was an error
+ * = 0 if there are no changes.
+ * > 0 if there are changes.
+ */
+static int check_changes_tracked_files(const struct pathspec *ps)
+{
+ int result;
+ struct rev_info rev;
+ struct object_id dummy;
+ int ret = 0;
+
+ /* No initial commit. */
+ if (get_oid("HEAD", &dummy))
+ return -1;
+
+ if (read_cache() < 0)
+ return -1;
+
+ init_revisions(&rev, NULL);
+ copy_pathspec(&rev.prune_data, ps);
+
+ rev.diffopt.flags.quick = 1;
+ rev.diffopt.flags.ignore_submodules = 1;
+ rev.abbrev = 0;
+
+ add_head_to_pending(&rev);
+ diff_setup_done(&rev.diffopt);
+
+ result = run_diff_index(&rev, 1);
+ if (diff_result_code(&rev.diffopt, result)) {
+ ret = 1;
+ goto done;
+ }
+
+ object_array_clear(&rev.pending);
+ result = run_diff_files(&rev, 0);
+ if (diff_result_code(&rev.diffopt, result)) {
+ ret = 1;
+ goto done;
+ }
+
+done:
+ clear_pathspec(&rev.prune_data);
+ return ret;
+}
+
+/*
+ * The function will fill `untracked_files` with the names of untracked files.
+ * It will return 1 if there were any changes and 0 if there were not.
+ */
+static int check_changes(const struct pathspec *ps, int include_untracked,
+ struct strbuf *untracked_files)
+{
+ int ret = 0;
+ if (check_changes_tracked_files(ps))
+ ret = 1;
+
+ if (include_untracked && get_untracked_files(ps, include_untracked,
+ untracked_files))
+ ret = 1;
+
+ return ret;
+}
+
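+/*
+ * Stage the NUL-separated list of untracked paths in `files` in a
+ * temporary index, write that index out as info->u_tree and commit it
+ * as info->u_commit with an "untracked files on ..." message.
+ */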
+static int save_untracked_files(struct stash_info *info, struct strbuf *msg,
+ struct strbuf files)
+{
+ int ret = 0;
+ struct strbuf untracked_msg = STRBUF_INIT;
+ struct child_process cp_upd_index = CHILD_PROCESS_INIT;
+ struct index_state istate = { NULL };
+
+ cp_upd_index.git_cmd = 1;
+ argv_array_pushl(&cp_upd_index.args, "update-index", "-z", "--add",
+ "--remove", "--stdin", NULL);
+ argv_array_pushf(&cp_upd_index.env_array, "GIT_INDEX_FILE=%s",
+ stash_index_path.buf);
+
+ strbuf_addf(&untracked_msg, "untracked files on %s\n", msg->buf);
+ if (pipe_command(&cp_upd_index, files.buf, files.len, NULL, 0,
+ NULL, 0)) {
+ ret = -1;
+ goto done;
+ }
+
+ if (write_index_as_tree(&info->u_tree, &istate, stash_index_path.buf, 0,
+ NULL)) {
+ ret = -1;
+ goto done;
+ }
+
+ if (commit_tree(untracked_msg.buf, untracked_msg.len,
+ &info->u_tree, NULL, &info->u_commit, NULL, NULL)) {
+ ret = -1;
+ goto done;
+ }
+
+done:
+ discard_index(&istate);
+ strbuf_release(&untracked_msg);
+ remove_path(stash_index_path.buf);
+ return ret;
+}
+
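+/*
+ * Interactive (--patch) selection: seed a temporary index from HEAD,
+ * let the user pick hunks via "add--interactive --patch=stash", record
+ * the result as info->w_tree and capture the selected changes as a
+ * patch in out_patch.  Returns a positive value when nothing was
+ * selected.
+ */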
+static int stash_patch(struct stash_info *info, const struct pathspec *ps,
+ struct strbuf *out_patch, int quiet)
+{
+ int ret = 0;
+ struct child_process cp_read_tree = CHILD_PROCESS_INIT;
+ struct child_process cp_add_i = CHILD_PROCESS_INIT;
+ struct child_process cp_diff_tree = CHILD_PROCESS_INIT;
+ struct index_state istate = { NULL };
+
+ remove_path(stash_index_path.buf);
+
+ cp_read_tree.git_cmd = 1;
+ argv_array_pushl(&cp_read_tree.args, "read-tree", "HEAD", NULL);
+ argv_array_pushf(&cp_read_tree.env_array, "GIT_INDEX_FILE=%s",
+ stash_index_path.buf);
+ if (run_command(&cp_read_tree)) {
+ ret = -1;
+ goto done;
+ }
+
+ /* Find out what the user wants. */
+ cp_add_i.git_cmd = 1;
+ argv_array_pushl(&cp_add_i.args, "add--interactive", "--patch=stash",
+ "--", NULL);
+ add_pathspecs(&cp_add_i.args, ps);
+ argv_array_pushf(&cp_add_i.env_array, "GIT_INDEX_FILE=%s",
+ stash_index_path.buf);
+ if (run_command(&cp_add_i)) {
+ ret = -1;
+ goto done;
+ }
+
+ /* State of the working tree. */
+ if (write_index_as_tree(&info->w_tree, &istate, stash_index_path.buf, 0,
+ NULL)) {
+ ret = -1;
+ goto done;
+ }
+
+ cp_diff_tree.git_cmd = 1;
+ argv_array_pushl(&cp_diff_tree.args, "diff-tree", "-p", "HEAD",
+ oid_to_hex(&info->w_tree), "--", NULL);
+ if (pipe_command(&cp_diff_tree, NULL, 0, out_patch, 0, NULL, 0)) {
+ ret = -1;
+ goto done;
+ }
+
+ if (!out_patch->len) {
+ if (!quiet)
+ fprintf_ln(stderr, _("No changes selected"));
+ ret = 1;
+ }
+
+done:
+ discard_index(&istate);
+ remove_path(stash_index_path.buf);
+ return ret;
+}
+
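+/*
+ * Record the working-tree state: seed a temporary index from
+ * info->i_tree, collect the paths that changed relative to the base
+ * commit, stage them in that temporary index and write it out as
+ * info->w_tree.
+ */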
+static int stash_working_tree(struct stash_info *info, const struct pathspec *ps)
+{
+ int ret = 0;
+ struct rev_info rev;
+ struct child_process cp_upd_index = CHILD_PROCESS_INIT;
+ struct strbuf diff_output = STRBUF_INIT;
+ struct index_state istate = { NULL };
+
+ init_revisions(&rev, NULL);
+ copy_pathspec(&rev.prune_data, ps);
+
+ set_alternate_index_output(stash_index_path.buf);
+ if (reset_tree(&info->i_tree, 0, 0)) {
+ ret = -1;
+ goto done;
+ }
+ set_alternate_index_output(NULL);
+
+ rev.diffopt.output_format = DIFF_FORMAT_CALLBACK;
+ rev.diffopt.format_callback = add_diff_to_buf;
+ rev.diffopt.format_callback_data = &diff_output;
+
+ if (read_cache_preload(&rev.diffopt.pathspec) < 0) {
+ ret = -1;
+ goto done;
+ }
+
+ add_pending_object(&rev, parse_object(the_repository, &info->b_commit),
+ "");
+ if (run_diff_index(&rev, 0)) {
+ ret = -1;
+ goto done;
+ }
+
+ cp_upd_index.git_cmd = 1;
+ argv_array_pushl(&cp_upd_index.args, "update-index", "-z", "--add",
+ "--remove", "--stdin", NULL);
+ argv_array_pushf(&cp_upd_index.env_array, "GIT_INDEX_FILE=%s",
+ stash_index_path.buf);
+
+ if (pipe_command(&cp_upd_index, diff_output.buf, diff_output.len,
+ NULL, 0, NULL, 0)) {
+ ret = -1;
+ goto done;
+ }
+
+ if (write_index_as_tree(&info->w_tree, &istate, stash_index_path.buf, 0,
+ NULL)) {
+ ret = -1;
+ goto done;
+ }
+
+done:
+ discard_index(&istate);
+ UNLEAK(rev);
+ object_array_clear(&rev.pending);
+ clear_pathspec(&rev.prune_data);
+ strbuf_release(&diff_output);
+ remove_path(stash_index_path.buf);
+ return ret;
+}
+
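+/*
+ * Build the stash commits (the saved index state, optionally the
+ * untracked files, and the working-tree state) and fill in *info,
+ * without updating the stash ref or touching the working tree.
+ * Returns a negative value on error and a positive value when there is
+ * nothing to stash.
+ */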
+static int do_create_stash(const struct pathspec *ps, struct strbuf *stash_msg_buf,
+ int include_untracked, int patch_mode,
+ struct stash_info *info, struct strbuf *patch,
+ int quiet)
+{
+ int ret = 0;
+ int flags = 0;
+ int untracked_commit_option = 0;
+ const char *head_short_sha1 = NULL;
+ const char *branch_ref = NULL;
+ const char *branch_name = "(no branch)";
+ struct commit *head_commit = NULL;
+ struct commit_list *parents = NULL;
+ struct strbuf msg = STRBUF_INIT;
+ struct strbuf commit_tree_label = STRBUF_INIT;
+ struct strbuf untracked_files = STRBUF_INIT;
+
+ prepare_fallback_ident("git stash", "git@stash");
+
+ read_cache_preload(NULL);
+ refresh_cache(REFRESH_QUIET);
+
+ if (get_oid("HEAD", &info->b_commit)) {
+ if (!quiet)
+ fprintf_ln(stderr, _("You do not have "
+ "the initial commit yet"));
+ ret = -1;
+ goto done;
+ } else {
+ head_commit = lookup_commit(the_repository, &info->b_commit);
+ }
+
+ if (!check_changes(ps, include_untracked, &untracked_files)) {
+ ret = 1;
+ goto done;
+ }
+
+ branch_ref = resolve_ref_unsafe("HEAD", 0, NULL, &flags);
+ if (flags & REF_ISSYMREF)
+ branch_name = strrchr(branch_ref, '/') + 1;
+ head_short_sha1 = find_unique_abbrev(&head_commit->object.oid,
+ DEFAULT_ABBREV);
+ strbuf_addf(&msg, "%s: %s ", branch_name, head_short_sha1);
+ pp_commit_easy(CMIT_FMT_ONELINE, head_commit, &msg);
+
+ strbuf_addf(&commit_tree_label, "index on %s\n", msg.buf);
+ commit_list_insert(head_commit, &parents);
+ if (write_cache_as_tree(&info->i_tree, 0, NULL) ||
+ commit_tree(commit_tree_label.buf, commit_tree_label.len,
+ &info->i_tree, parents, &info->i_commit, NULL, NULL)) {
+ if (!quiet)
+ fprintf_ln(stderr, _("Cannot save the current "
+ "index state"));
+ ret = -1;
+ goto done;
+ }
+
+ if (include_untracked) {
+ if (save_untracked_files(info, &msg, untracked_files)) {
+ if (!quiet)
+ fprintf_ln(stderr, _("Cannot save "
+ "the untracked files"));
+ ret = -1;
+ goto done;
+ }
+ untracked_commit_option = 1;
+ }
+ if (patch_mode) {
+ ret = stash_patch(info, ps, patch, quiet);
+ if (ret < 0) {
+ if (!quiet)
+ fprintf_ln(stderr, _("Cannot save the current "
+ "worktree state"));
+ goto done;
+ } else if (ret > 0) {
+ goto done;
+ }
+ } else {
+ if (stash_working_tree(info, ps)) {
+ if (!quiet)
+ fprintf_ln(stderr, _("Cannot save the current "
+ "worktree state"));
+ ret = -1;
+ goto done;
+ }
+ }
+
+ if (!stash_msg_buf->len)
+ strbuf_addf(stash_msg_buf, "WIP on %s", msg.buf);
+ else
+ strbuf_insertf(stash_msg_buf, 0, "On %s: ", branch_name);
+
+ /*
+ * `parents` will be empty after calling `commit_tree()`, so there is
+ * no need to call `free_commit_list()`
+ */
+ parents = NULL;
+ if (untracked_commit_option)
+ commit_list_insert(lookup_commit(the_repository,
+ &info->u_commit),
+ &parents);
+ commit_list_insert(lookup_commit(the_repository, &info->i_commit),
+ &parents);
+ commit_list_insert(head_commit, &parents);
+
+ if (commit_tree(stash_msg_buf->buf, stash_msg_buf->len, &info->w_tree,
+ parents, &info->w_commit, NULL, NULL)) {
+ if (!quiet)
+ fprintf_ln(stderr, _("Cannot record "
+ "working tree state"));
+ ret = -1;
+ goto done;
+ }
+
+done:
+ strbuf_release(&commit_tree_label);
+ strbuf_release(&msg);
+ strbuf_release(&untracked_files);
+ return ret;
+}
+
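+/*
+ * Implements "git stash create": build a stash commit from the current
+ * state and print its object name, without recording it on the stash
+ * ref.
+ */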
+static int create_stash(int argc, const char **argv, const char *prefix)
+{
+ int ret = 0;
+ struct strbuf stash_msg_buf = STRBUF_INIT;
+ struct stash_info info;
+ struct pathspec ps;
+
+ /* Starting with argv[1], since argv[0] is "create" */
+ strbuf_join_argv(&stash_msg_buf, argc - 1, ++argv, ' ');
+
+ memset(&ps, 0, sizeof(ps));
+ if (!check_changes_tracked_files(&ps))
+ return 0;
+
+ ret = do_create_stash(&ps, &stash_msg_buf, 0, 0, &info,
+ NULL, 0);
+ if (!ret)
+ printf_ln("%s", oid_to_hex(&info.w_commit));
+
+ strbuf_release(&stash_msg_buf);
+ return ret;
+}
+
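+/*
+ * Shared backend of "git stash push" and "git stash save": create and
+ * store the stash, then remove the stashed changes from the working
+ * tree and index according to --patch, --keep-index and
+ * --include-untracked/--all.
+ */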
+static int do_push_stash(const struct pathspec *ps, const char *stash_msg, int quiet,
+ int keep_index, int patch_mode, int include_untracked)
+{
+ int ret = 0;
+ struct stash_info info;
+ struct strbuf patch = STRBUF_INIT;
+ struct strbuf stash_msg_buf = STRBUF_INIT;
+ struct strbuf untracked_files = STRBUF_INIT;
+
+ if (patch_mode && keep_index == -1)
+ keep_index = 1;
+
+ if (patch_mode && include_untracked) {
+ fprintf_ln(stderr, _("Can't use --patch and --include-untracked"
+ " or --all at the same time"));
+ ret = -1;
+ goto done;
+ }
+
+ read_cache_preload(NULL);
+ if (!include_untracked && ps->nr) {
+ int i;
+ char *ps_matched = xcalloc(ps->nr, 1);
+
+ for (i = 0; i < active_nr; i++)
+ ce_path_match(&the_index, active_cache[i], ps,
+ ps_matched);
+
+ if (report_path_error(ps_matched, ps)) {
+ fprintf_ln(stderr, _("Did you forget to 'git add'?"));
+ ret = -1;
+ free(ps_matched);
+ goto done;
+ }
+ free(ps_matched);
+ }
+
+ if (refresh_cache(REFRESH_QUIET)) {
+ ret = -1;
+ goto done;
+ }
+
+ if (!check_changes(ps, include_untracked, &untracked_files)) {
+ if (!quiet)
+ printf_ln(_("No local changes to save"));
+ goto done;
+ }
+
+ if (!reflog_exists(ref_stash) && do_clear_stash()) {
+ ret = -1;
+ if (!quiet)
+ fprintf_ln(stderr, _("Cannot initialize stash"));
+ goto done;
+ }
+
+ if (stash_msg)
+ strbuf_addstr(&stash_msg_buf, stash_msg);
+ if (do_create_stash(ps, &stash_msg_buf, include_untracked, patch_mode,
+ &info, &patch, quiet)) {
+ ret = -1;
+ goto done;
+ }
+
+ if (do_store_stash(&info.w_commit, stash_msg_buf.buf, 1)) {
+ ret = -1;
+ if (!quiet)
+ fprintf_ln(stderr, _("Cannot save the current status"));
+ goto done;
+ }
+
+ if (!quiet)
+ printf_ln(_("Saved working directory and index state %s"),
+ stash_msg_buf.buf);
+
+ if (!patch_mode) {
+ if (include_untracked && !ps->nr) {
+ struct child_process cp = CHILD_PROCESS_INIT;
+
+ cp.git_cmd = 1;
+ argv_array_pushl(&cp.args, "clean", "--force",
+ "--quiet", "-d", NULL);
+ if (include_untracked == INCLUDE_ALL_FILES)
+ argv_array_push(&cp.args, "-x");
+ if (run_command(&cp)) {
+ ret = -1;
+ goto done;
+ }
+ }
+ discard_cache();
+ if (ps->nr) {
+ struct child_process cp_add = CHILD_PROCESS_INIT;
+ struct child_process cp_diff = CHILD_PROCESS_INIT;
+ struct child_process cp_apply = CHILD_PROCESS_INIT;
+ struct strbuf out = STRBUF_INIT;
+
+ cp_add.git_cmd = 1;
+ argv_array_push(&cp_add.args, "add");
+ if (!include_untracked)
+ argv_array_push(&cp_add.args, "-u");
+ if (include_untracked == INCLUDE_ALL_FILES)
+ argv_array_push(&cp_add.args, "--force");
+ argv_array_push(&cp_add.args, "--");
+ add_pathspecs(&cp_add.args, ps);
+ if (run_command(&cp_add)) {
+ ret = -1;
+ goto done;
+ }
+
+ cp_diff.git_cmd = 1;
+ argv_array_pushl(&cp_diff.args, "diff-index", "-p",
+ "--cached", "--binary", "HEAD", "--",
+ NULL);
+ add_pathspecs(&cp_diff.args, ps);
+ if (pipe_command(&cp_diff, NULL, 0, &out, 0, NULL, 0)) {
+ ret = -1;
+ goto done;
+ }
+
+ cp_apply.git_cmd = 1;
+ argv_array_pushl(&cp_apply.args, "apply", "--index",
+ "-R", NULL);
+ if (pipe_command(&cp_apply, out.buf, out.len, NULL, 0,
+ NULL, 0)) {
+ ret = -1;
+ goto done;
+ }
+ } else {
+ struct child_process cp = CHILD_PROCESS_INIT;
+ cp.git_cmd = 1;
+ argv_array_pushl(&cp.args, "reset", "--hard", "-q",
+ NULL);
+ if (run_command(&cp)) {
+ ret = -1;
+ goto done;
+ }
+ }
+
+ if (keep_index == 1 && !is_null_oid(&info.i_tree)) {
+ struct child_process cp_ls = CHILD_PROCESS_INIT;
+ struct child_process cp_checkout = CHILD_PROCESS_INIT;
+ struct strbuf out = STRBUF_INIT;
+
+ if (reset_tree(&info.i_tree, 0, 1)) {
+ ret = -1;
+ goto done;
+ }
+
+ cp_ls.git_cmd = 1;
+ argv_array_pushl(&cp_ls.args, "ls-files", "-z",
+ "--modified", "--", NULL);
+
+ add_pathspecs(&cp_ls.args, ps);
+ if (pipe_command(&cp_ls, NULL, 0, &out, 0, NULL, 0)) {
+ ret = -1;
+ goto done;
+ }
+
+ cp_checkout.git_cmd = 1;
+ argv_array_pushl(&cp_checkout.args, "checkout-index",
+ "-z", "--force", "--stdin", NULL);
+ if (pipe_command(&cp_checkout, out.buf, out.len, NULL,
+ 0, NULL, 0)) {
+ ret = -1;
+ goto done;
+ }
+ }
+ goto done;
+ } else {
+ struct child_process cp = CHILD_PROCESS_INIT;
+
+ cp.git_cmd = 1;
+ argv_array_pushl(&cp.args, "apply", "-R", NULL);
+
+ if (pipe_command(&cp, patch.buf, patch.len, NULL, 0, NULL, 0)) {
+ if (!quiet)
+ fprintf_ln(stderr, _("Cannot remove "
+ "worktree changes"));
+ ret = -1;
+ goto done;
+ }
+
+ if (keep_index < 1) {
+ struct child_process cp = CHILD_PROCESS_INIT;
+
+ cp.git_cmd = 1;
+ argv_array_pushl(&cp.args, "reset", "-q", "--", NULL);
+ add_pathspecs(&cp.args, ps);
+ if (run_command(&cp)) {
+ ret = -1;
+ goto done;
+ }
+ }
+ goto done;
+ }
+
+done:
+ strbuf_release(&stash_msg_buf);
+ return ret;
+}
+
+static int push_stash(int argc, const char **argv, const char *prefix)
+{
+ int keep_index = -1;
+ int patch_mode = 0;
+ int include_untracked = 0;
+ int quiet = 0;
+ const char *stash_msg = NULL;
+ struct pathspec ps;
+ struct option options[] = {
+ OPT_BOOL('k', "keep-index", &keep_index,
+ N_("keep index")),
+ OPT_BOOL('p', "patch", &patch_mode,
+ N_("stash in patch mode")),
+ OPT__QUIET(&quiet, N_("quiet mode")),
+ OPT_BOOL('u', "include-untracked", &include_untracked,
+ N_("include untracked files in stash")),
+ OPT_SET_INT('a', "all", &include_untracked,
+ N_("include ignore files"), 2),
+ OPT_STRING('m', "message", &stash_msg, N_("message"),
+ N_("stash message")),
+ OPT_END()
+ };
+
+ if (argc)
+ argc = parse_options(argc, argv, prefix, options,
+ git_stash_push_usage,
+ 0);
+
+ parse_pathspec(&ps, 0, PATHSPEC_PREFER_FULL | PATHSPEC_PREFIX_ORIGIN,
+ prefix, argv);
+ return do_push_stash(&ps, stash_msg, quiet, keep_index, patch_mode,
+ include_untracked);
+}
+
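+/*
+ * Implements "git stash save": unlike "push", any remaining arguments
+ * are joined into the stash message instead of being treated as a
+ * pathspec.
+ */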
+static int save_stash(int argc, const char **argv, const char *prefix)
+{
+ int keep_index = -1;
+ int patch_mode = 0;
+ int include_untracked = 0;
+ int quiet = 0;
+ int ret = 0;
+ const char *stash_msg = NULL;
+ struct pathspec ps;
+ struct strbuf stash_msg_buf = STRBUF_INIT;
+ struct option options[] = {
+ OPT_BOOL('k', "keep-index", &keep_index,
+ N_("keep index")),
+ OPT_BOOL('p', "patch", &patch_mode,
+ N_("stash in patch mode")),
+ OPT__QUIET(&quiet, N_("quiet mode")),
+ OPT_BOOL('u', "include-untracked", &include_untracked,
+ N_("include untracked files in stash")),
+ OPT_SET_INT('a', "all", &include_untracked,
+ N_("include ignore files"), 2),
+ OPT_STRING('m', "message", &stash_msg, "message",
+ N_("stash message")),
+ OPT_END()
+ };
+
+ argc = parse_options(argc, argv, prefix, options,
+ git_stash_save_usage,
+ PARSE_OPT_KEEP_DASHDASH);
+
+ if (argc)
+ stash_msg = strbuf_join_argv(&stash_msg_buf, argc, argv, ' ');
+
+ memset(&ps, 0, sizeof(ps));
+ ret = do_push_stash(&ps, stash_msg, quiet, keep_index,
+ patch_mode, include_untracked);
+
+ strbuf_release(&stash_msg_buf);
+ return ret;
+}
+
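+/*
+ * Decide whether to run the builtin implementation: the
+ * GIT_TEST_STASH_USE_BUILTIN environment variable, when set, wins;
+ * otherwise honor stash.usebuiltin, defaulting to the builtin.
+ */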
+static int use_builtin_stash(void)
+{
+ struct child_process cp = CHILD_PROCESS_INIT;
+ struct strbuf out = STRBUF_INIT;
+ int ret, env = git_env_bool("GIT_TEST_STASH_USE_BUILTIN", -1);
+
+ if (env != -1)
+ return env;
+
+ argv_array_pushl(&cp.args,
+ "config", "--bool", "stash.usebuiltin", NULL);
+ cp.git_cmd = 1;
+ if (capture_command(&cp, &out, 6)) {
+ strbuf_release(&out);
+ return 1;
+ }
+
+ strbuf_trim(&out);
+ ret = !strcmp("true", out.buf);
+ strbuf_release(&out);
+ return ret;
+}
+
+int cmd_stash(int argc, const char **argv, const char *prefix)
+{
+ int i = -1;
+ pid_t pid = getpid();
+ const char *index_file;
+ struct argv_array args = ARGV_ARRAY_INIT;
+
+ struct option options[] = {
+ OPT_END()
+ };
+
+ if (!use_builtin_stash()) {
+ const char *path = mkpath("%s/git-legacy-stash",
+ git_exec_path());
+
+ if (sane_execvp(path, (char **)argv) < 0)
+ die_errno(_("could not exec %s"), path);
+ else
+ BUG("sane_execvp() returned???");
+ }
+
+ prefix = setup_git_directory();
+ trace_repo_setup(prefix);
+ setup_work_tree();
+
+ git_config(git_diff_basic_config, NULL);
+
+ argc = parse_options(argc, argv, prefix, options, git_stash_usage,
+ PARSE_OPT_KEEP_UNKNOWN | PARSE_OPT_KEEP_DASHDASH);
+
+ index_file = get_index_file();
+ strbuf_addf(&stash_index_path, "%s.stash.%" PRIuMAX, index_file,
+ (uintmax_t)pid);
+
+ if (!argc)
+ return !!push_stash(0, NULL, prefix);
+ else if (!strcmp(argv[0], "apply"))
+ return !!apply_stash(argc, argv, prefix);
+ else if (!strcmp(argv[0], "clear"))
+ return !!clear_stash(argc, argv, prefix);
+ else if (!strcmp(argv[0], "drop"))
+ return !!drop_stash(argc, argv, prefix);
+ else if (!strcmp(argv[0], "pop"))
+ return !!pop_stash(argc, argv, prefix);
+ else if (!strcmp(argv[0], "branch"))
+ return !!branch_stash(argc, argv, prefix);
+ else if (!strcmp(argv[0], "list"))
+ return !!list_stash(argc, argv, prefix);
+ else if (!strcmp(argv[0], "show"))
+ return !!show_stash(argc, argv, prefix);
+ else if (!strcmp(argv[0], "store"))
+ return !!store_stash(argc, argv, prefix);
+ else if (!strcmp(argv[0], "create"))
+ return !!create_stash(argc, argv, prefix);
+ else if (!strcmp(argv[0], "push"))
+ return !!push_stash(argc, argv, prefix);
+ else if (!strcmp(argv[0], "save"))
+ return !!save_stash(argc, argv, prefix);
+ else if (*argv[0] != '-')
+ usage_msg_opt(xstrfmt(_("unknown subcommand: %s"), argv[0]),
+ git_stash_usage, options);
+
+ if (strcmp(argv[0], "-p")) {
+ while (++i < argc && strcmp(argv[i], "--")) {
+ /*
+ * `akpqu` is a string which contains all short options,
+ * except `-m` which is verified separately.
+ */
+ if ((strlen(argv[i]) == 2) && *argv[i] == '-' &&
+ strchr("akpqu", argv[i][1]))
+ continue;
+
+ if (!strcmp(argv[i], "--all") ||
+ !strcmp(argv[i], "--keep-index") ||
+ !strcmp(argv[i], "--no-keep-index") ||
+ !strcmp(argv[i], "--patch") ||
+ !strcmp(argv[i], "--quiet") ||
+ !strcmp(argv[i], "--include-untracked"))
+ continue;
+
+ /*
+ * `-m` and `--message=` are verified separately because
+ * they need to be immediately followed by a string
+ * (i.e. `-m"foobar"` or `--message="foobar"`).
+ */
+ if (starts_with(argv[i], "-m") ||
+ starts_with(argv[i], "--message="))
+ continue;
+
+ usage_with_options(git_stash_usage, options);
+ }
+ }
+
+ argv_array_push(&args, "push");
+ argv_array_pushv(&args, argv);
+ return !!push_stash(args.argc, args.argv, prefix);
+}
i++;
}
- if (ps_matched && report_path_error(ps_matched, pathspec, prefix))
+ if (ps_matched && report_path_error(ps_matched, pathspec))
result = -1;
free(ps_matched);
};
const char *const git_submodule_helper_usage[] = {
- N_("git submodule--helper foreach [--quiet] [--recursive] <command>"),
+ N_("git submodule--helper foreach [--quiet] [--recursive] [--] <command>"),
NULL
};
argc = parse_options(argc, argv, prefix, module_foreach_options,
- git_submodule_helper_usage, PARSE_OPT_KEEP_UNKNOWN);
+ git_submodule_helper_usage, 0);
if (module_list_compute(0, NULL, prefix, &pathspec, &list) < 0)
return 1;
};
const char *const git_submodule_helper_usage[] = {
- N_("git submodule--helper init [<path>]"),
+ N_("git submodule--helper init [<options>] [<path>]"),
NULL
};
};
const char *const git_submodule_helper_usage[] = {
- N_("git submodule--helper embed-git-dir [<path>...]"),
+ N_("git submodule--helper absorb-git-dirs [<options>] [<path>...]"),
NULL
};
static int module_config(int argc, const char **argv, const char *prefix)
{
enum {
- CHECK_WRITEABLE = 1
+ CHECK_WRITEABLE = 1,
+ DO_UNSET = 2
} command = 0;
struct option module_config_options[] = {
OPT_CMDMODE(0, "check-writeable", &command,
N_("check if it is safe to write to the .gitmodules file"),
CHECK_WRITEABLE),
+ OPT_CMDMODE(0, "unset", &command,
+ N_("unset the config in the .gitmodules file"),
+ DO_UNSET),
OPT_END()
};
const char *const git_submodule_helper_usage[] = {
- N_("git submodule--helper config name [value]"),
+ N_("git submodule--helper config <name> [<value>]"),
+ N_("git submodule--helper config --unset <name>"),
N_("git submodule--helper config --check-writeable"),
NULL
};
return is_writing_gitmodules_ok() ? 0 : -1;
/* Equivalent to ACTION_GET in builtin/config.c */
- if (argc == 2)
+ if (argc == 2 && command != DO_UNSET)
return print_config_from_gitmodules(the_repository, argv[1]);
/* Equivalent to ACTION_SET in builtin/config.c */
- if (argc == 3) {
+ if (argc == 3 || (argc == 2 && command == DO_UNSET)) {
+ const char *value = (argc == 3) ? argv[2] : NULL;
+
if (!is_writing_gitmodules_ok())
die(_("please make sure that the .gitmodules file is in the working tree"));
- return config_set_in_gitmodules_file_gently(argv[1], argv[2]);
+ return config_set_in_gitmodules_file_gently(argv[1], value);
}
usage_with_options(git_submodule_helper_usage, module_config_options);
#include "ref-filter.h"
static const char * const git_tag_usage[] = {
- N_("git tag [-a | -s | -u <key-id>] [-f] [-m <msg> | -F <file>] <tagname> [<head>]"),
+ N_("git tag [-a | -s | -u <key-id>] [-f] [-m <msg> | -F <file>]\n"
+ "\t\t<tagname> [<head>]"),
N_("git tag -d <tagname>..."),
- N_("git tag -l [-n[<num>]] [--contains <commit>] [--no-contains <commit>] [--points-at <object>]"
- "\n\t\t[--format=<format>] [--[no-]merged [<commit>]] [<pattern>...]"),
+ N_("git tag -l [-n[<num>]] [--contains <commit>] [--no-contains <commit>] [--points-at <object>]\n"
+ "\t\t[--format=<format>] [--[no-]merged [<commit>]] [<pattern>...]"),
N_("git tag -v [--format=<format>] <tagname>..."),
NULL
};
} cleanup_mode;
};
-static void create_tag(const struct object_id *object, const char *tag,
+static const char message_advice_nested_tag[] =
+ N_("You have created a nested tag. The object referred to by your new is\n"
+ "already a tag. If you meant to tag the object that it points to, use:\n"
+ "\n"
+ "\tgit tag -f %s %s^{}");
+
+static void create_tag(const struct object_id *object, const char *object_ref,
+ const char *tag,
struct strbuf *buf, struct create_tag_options *opt,
struct object_id *prev, struct object_id *result)
{
type = oid_object_info(the_repository, object, NULL);
if (type <= OBJ_NONE)
- die(_("bad object type."));
+ die(_("bad object type."));
+
+ if (type == OBJ_TAG && advice_nested_tag)
+ advise(_(message_advice_nested_tag), tag, object_ref);
strbuf_addf(&header,
"object %s\n"
OPT_WITHOUT(&filter.no_commit, N_("print only tags that don't contain the commit")),
OPT_MERGED(&filter, N_("print only tags that are merged")),
OPT_NO_MERGED(&filter, N_("print only tags that are not merged")),
- OPT_CALLBACK(0 , "sort", sorting_tail, N_("key"),
- N_("field name to sort on"), &parse_opt_ref_sorting),
+ OPT_REF_SORT(sorting_tail),
{
OPTION_CALLBACK, 0, "points-at", &filter.points_at, N_("object"),
N_("print only tags of the object"), PARSE_OPT_LASTARG_DEFAULT,
if (create_tag_object) {
if (force_sign_annotate && !annotate)
opt.sign = 1;
- create_tag(&object, tag, &buf, &opt, &prev, &object);
+ create_tag(&object, object_ref, tag, &buf, &opt, &prev, &object);
}
transaction = ref_transaction_begin(&err);
struct object_id *ent, const char *path,
int namelen, int stage)
{
- unsigned mode;
+ unsigned short mode;
struct object_id oid;
struct cache_entry *ce;
}
static int do_reupdate(int ac, const char **av,
- const char *prefix, int prefix_length)
+ const char *prefix)
{
/* Read HEAD and run update-index on paths that are
* merged and already different between index and HEAD.
/* consume remaining arguments. */
setup_work_tree();
- *has_errors = do_reupdate(ctx->argc, ctx->argv,
- prefix, prefix ? strlen(prefix) : 0);
+ *has_errors = do_reupdate(ctx->argc, ctx->argv, prefix);
if (*has_errors)
active_cache_changed = 0;
if (entries < 0)
die("cache corrupted");
+ the_index.updated_skipworktree = 1;
+
/*
* Custom copy of parse_options() because we want to handle
* filename arguments as they come.
struct cache_time timestamp;
unsigned name_hash_initialized : 1,
initialized : 1,
- drop_cache_tree : 1;
+ drop_cache_tree : 1,
+ updated_workdir : 1,
+ updated_skipworktree : 1;
struct hashmap name_hash;
struct hashmap dir_hash;
struct object_id oid;
#define FALLBACK_DEFAULT_ABBREV 7
struct object_context {
- unsigned mode;
+ unsigned short mode;
/*
* symlink_path is only used by get_tree_entry_follow_symlinks,
* and only for symlinks that point outside the repository.
};
extern int get_oid(const char *str, struct object_id *oid);
+extern int get_oidf(struct object_id *oid, const char *fmt, ...);
extern int get_oid_commit(const char *str, struct object_id *oid);
extern int get_oid_committish(const char *str, struct object_id *oid);
extern int get_oid_tree(const char *str, struct object_id *oid);
extern const char *git_pager(int stdout_is_tty);
extern int is_terminal_dumb(void);
extern int git_ident_config(const char *, const char *, void *);
+/*
+ * Prepare an ident to fall back on if the user didn't configure it.
+ */
+void prepare_fallback_ident(const char *name, const char *email);
extern void reset_ident_date(void);
struct ident_split {
Documentation)
sudo apt-get -q update
sudo apt-get -q -y install asciidoc xmlto
+
+ test -n "$ALREADY_HAVE_ASCIIDOCTOR" ||
+ gem install --version 1.5.8 asciidoctor
;;
esac
. ${0%/*}/lib.sh
-test -n "$ALREADY_HAVE_ASCIIDOCTOR" ||
-gem install asciidoctor
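+# Drop lines that are expected to show up on stderr (the GIT_VERSION
+# line and the "new asciidoc flags" notice) so that only unexpected
+# warnings remain for the emptiness checks below.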
+filter_log () {
+ sed -e '/^GIT_VERSION = /d' \
+ -e '/^ \* new asciidoc flags$/d' \
+ "$1"
+}
make check-builtins
make check-docs
# Build docs with AsciiDoc
-make doc > >(tee stdout.log) 2> >(tee stderr.log >&2)
-! test -s stderr.log
+make doc > >(tee stdout.log) 2> >(tee stderr.raw >&2)
+cat stderr.raw
+filter_log stderr.raw >stderr.log
+test ! -s stderr.log
test -s Documentation/git.html
test -s Documentation/git.xml
test -s Documentation/git.1
grep '<meta name="generator" content="AsciiDoc ' Documentation/git.html
-rm -f stdout.log stderr.log
+rm -f stdout.log stderr.log stderr.raw
check_unignored_build_artifacts
# Build docs with AsciiDoctor
make clean
-make USE_ASCIIDOCTOR=1 doc > >(tee stdout.log) 2> >(tee stderr.log >&2)
-sed '/^GIT_VERSION = / d' stderr.log
-! test -s stderr.log
+make USE_ASCIIDOCTOR=1 doc > >(tee stdout.log) 2> >(tee stderr.raw >&2)
+cat stderr.raw
+filter_log stderr.raw >stderr.log
+test ! -s stderr.log
test -s Documentation/git.html
grep '<meta name="generator" content="Asciidoctor ' Documentation/git.html
-rm -f stdout.log stderr.log
+rm -f stdout.log stderr.log stderr.raw
check_unignored_build_artifacts
save_good_tree
return 1;
}
-struct commit_graph *load_commit_graph_one(const char *graph_file)
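+/*
+ * Open the commit-graph file and fill in *fd and *st; returns 1 on
+ * success, 0 if the file cannot be opened or fstat'ed.
+ */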
+int open_commit_graph(const char *graph_file, int *fd, struct stat *st)
+{
+ *fd = git_open(graph_file);
+ if (*fd < 0)
+ return 0;
+ if (fstat(*fd, st)) {
+ close(*fd);
+ return 0;
+ }
+ return 1;
+}
+
+struct commit_graph *load_commit_graph_one_fd_st(int fd, struct stat *st)
{
void *graph_map;
size_t graph_size;
- struct stat st;
struct commit_graph *ret;
- int fd = git_open(graph_file);
- if (fd < 0)
- return NULL;
- if (fstat(fd, &st)) {
- close(fd);
- return NULL;
- }
- graph_size = xsize_t(st.st_size);
+ graph_size = xsize_t(st->st_size);
if (graph_size < GRAPH_MIN_SIZE) {
close(fd);
- die(_("graph file %s is too small"), graph_file);
+ error(_("commit-graph file is too small"));
+ return NULL;
}
graph_map = xmmap(NULL, graph_size, PROT_READ, MAP_PRIVATE, fd, 0);
ret = parse_commit_graph(graph_map, fd, graph_size);
if (!ret) {
munmap(graph_map, graph_size);
close(fd);
- exit(1);
}
return ret;
}
+static int verify_commit_graph_lite(struct commit_graph *g)
+{
+ /*
+ * Basic validation shared between parse_commit_graph(), which will
+ * be called every time the graph is used, and the much more
+ * expensive verify_commit_graph() used by "commit-graph verify".
+ *
+ * Only very basic checks belong here, enough to ensure that we
+ * don't e.g. segfault in fill_commit_in_graph(); because this is a
+ * very hot codepath, avoid anything that e.g. loops over
+ * g->num_commits or runs a checksum on the commit-graph itself.
+ */
+ if (!g->chunk_oid_fanout) {
+ error("commit-graph is missing the OID Fanout chunk");
+ return 1;
+ }
+ if (!g->chunk_oid_lookup) {
+ error("commit-graph is missing the OID Lookup chunk");
+ return 1;
+ }
+ if (!g->chunk_commit_data) {
+ error("commit-graph is missing the Commit Data chunk");
+ return 1;
+ }
+
+ return 0;
+}
+
struct commit_graph *parse_commit_graph(void *graph_map, int fd,
size_t graph_size)
{
graph_signature = get_be32(data);
if (graph_signature != GRAPH_SIGNATURE) {
- error(_("graph signature %X does not match signature %X"),
+ error(_("commit-graph signature %X does not match signature %X"),
graph_signature, GRAPH_SIGNATURE);
return NULL;
}
graph_version = *(unsigned char*)(data + 4);
if (graph_version != GRAPH_VERSION) {
- error(_("graph version %X does not match version %X"),
+ error(_("commit-graph version %X does not match version %X"),
graph_version, GRAPH_VERSION);
return NULL;
}
hash_version = *(unsigned char*)(data + 5);
if (hash_version != oid_version()) {
- error(_("hash version %X does not match version %X"),
+ error(_("commit-graph hash version %X does not match version %X"),
hash_version, oid_version());
return NULL;
}
if (data + graph_size - chunk_lookup <
GRAPH_CHUNKLOOKUP_WIDTH) {
- error(_("chunk lookup table entry missing; graph file may be incomplete"));
+ error(_("commit-graph chunk lookup table entry missing; file may be incomplete"));
free(graph);
return NULL;
}
chunk_lookup += GRAPH_CHUNKLOOKUP_WIDTH;
if (chunk_offset > graph_size - the_hash_algo->rawsz) {
- error(_("improper chunk offset %08x%08x"), (uint32_t)(chunk_offset >> 32),
+ error(_("commit-graph improper chunk offset %08x%08x"), (uint32_t)(chunk_offset >> 32),
(uint32_t)chunk_offset);
free(graph);
return NULL;
}
if (chunk_repeated) {
- error(_("chunk id %08x appears multiple times"), chunk_id);
+ error(_("commit-graph chunk id %08x appears multiple times"), chunk_id);
free(graph);
return NULL;
}
last_chunk_offset = chunk_offset;
}
+ if (verify_commit_graph_lite(graph))
+ return NULL;
+
return graph;
}
+static struct commit_graph *load_commit_graph_one(const char *graph_file)
+{
+
+ struct stat st;
+ int fd;
+ int open_ok = open_commit_graph(graph_file, &fd, &st);
+
+ if (!open_ok)
+ return NULL;
+
+ return load_commit_graph_one_fd_st(fd, &st);
+}
+
static void prepare_commit_graph_one(struct repository *r, const char *obj_dir)
{
char *graph_name;
struct object_directory *odb;
int config_value;
+ if (git_env_bool(GIT_TEST_COMMIT_GRAPH_DIE_ON_LOAD, 0))
+ die("dying as requested by the '%s' variable on commit-graph load!",
+ GIT_TEST_COMMIT_GRAPH_DIE_ON_LOAD);
+
if (r->objects->commit_graph_attempted)
return !!r->objects->commit_graph;
r->objects->commit_graph_attempted = 1;
uint32_t packedDate[2];
display_progress(progress, ++*progress_cnt);
- parse_commit(*list);
+ parse_commit_no_graph(*list);
hashwrite(f, get_commit_tree_oid(*list)->hash, hash_len);
parent = (*list)->parents;
display_progress(progress, i + 1);
commit = lookup_commit(the_repository, &oids->list[i]);
- if (commit && !parse_commit(commit))
+ if (commit && !parse_commit_no_graph(commit))
add_missing_parents(oids, commit);
}
stop_progress(&progress);
continue;
commits.list[commits.nr] = lookup_commit(the_repository, &oids.list[i]);
- parse_commit(commits.list[commits.nr]);
+ parse_commit_no_graph(commits.list[commits.nr]);
for (parent = commits.list[commits.nr]->parents;
parent; parent = parent->next)
return 1;
}
- verify_commit_graph_error = 0;
-
- if (!g->chunk_oid_fanout)
- graph_report("commit-graph is missing the OID Fanout chunk");
- if (!g->chunk_oid_lookup)
- graph_report("commit-graph is missing the OID Lookup chunk");
- if (!g->chunk_commit_data)
- graph_report("commit-graph is missing the Commit Data chunk");
-
+ verify_commit_graph_error = verify_commit_graph_lite(g);
if (verify_commit_graph_error)
return verify_commit_graph_error;
hashcpy(cur_oid.hash, g->chunk_oid_lookup + g->hash_len * i);
if (i && oidcmp(&prev_oid, &cur_oid) >= 0)
- graph_report("commit-graph has incorrect OID order: %s then %s",
+ graph_report(_("commit-graph has incorrect OID order: %s then %s"),
oid_to_hex(&prev_oid),
oid_to_hex(&cur_oid));
uint32_t fanout_value = get_be32(g->chunk_oid_fanout + cur_fanout_pos);
if (i != fanout_value)
- graph_report("commit-graph has incorrect fanout value: fanout[%d] = %u != %u",
+ graph_report(_("commit-graph has incorrect fanout value: fanout[%d] = %u != %u"),
cur_fanout_pos, fanout_value, i);
cur_fanout_pos++;
}
graph_commit = lookup_commit(r, &cur_oid);
if (!parse_commit_in_graph_one(r, g, graph_commit))
- graph_report("failed to parse %s from commit-graph",
+ graph_report(_("failed to parse commit %s from commit-graph"),
oid_to_hex(&cur_oid));
}
uint32_t fanout_value = get_be32(g->chunk_oid_fanout + cur_fanout_pos);
if (g->num_commits != fanout_value)
- graph_report("commit-graph has incorrect fanout value: fanout[%d] = %u != %u",
+ graph_report(_("commit-graph has incorrect fanout value: fanout[%d] = %u != %u"),
cur_fanout_pos, fanout_value, i);
cur_fanout_pos++;
graph_commit = lookup_commit(r, &cur_oid);
odb_commit = (struct commit *)create_object(r, cur_oid.hash, alloc_commit_node(r));
if (parse_commit_internal(odb_commit, 0, 0)) {
- graph_report("failed to parse %s from object database",
+ graph_report(_("failed to parse commit %s from object database for commit-graph"),
oid_to_hex(&cur_oid));
continue;
}
if (!oideq(&get_commit_tree_in_graph_one(r, g, graph_commit)->object.oid,
get_commit_tree_oid(odb_commit)))
- graph_report("root tree OID for commit %s in commit-graph is %s != %s",
+ graph_report(_("root tree OID for commit %s in commit-graph is %s != %s"),
oid_to_hex(&cur_oid),
oid_to_hex(get_commit_tree_oid(graph_commit)),
oid_to_hex(get_commit_tree_oid(odb_commit)));
while (graph_parents) {
if (odb_parents == NULL) {
- graph_report("commit-graph parent list for commit %s is too long",
+ graph_report(_("commit-graph parent list for commit %s is too long"),
oid_to_hex(&cur_oid));
break;
}
if (!oideq(&graph_parents->item->object.oid, &odb_parents->item->object.oid))
- graph_report("commit-graph parent for %s is %s != %s",
+ graph_report(_("commit-graph parent for %s is %s != %s"),
oid_to_hex(&cur_oid),
oid_to_hex(&graph_parents->item->object.oid),
oid_to_hex(&odb_parents->item->object.oid));
}
if (odb_parents != NULL)
- graph_report("commit-graph parent list for commit %s terminates early",
+ graph_report(_("commit-graph parent list for commit %s terminates early"),
oid_to_hex(&cur_oid));
if (!graph_commit->generation) {
if (generation_zero == GENERATION_NUMBER_EXISTS)
- graph_report("commit-graph has generation number zero for commit %s, but non-zero elsewhere",
+ graph_report(_("commit-graph has generation number zero for commit %s, but non-zero elsewhere"),
oid_to_hex(&cur_oid));
generation_zero = GENERATION_ZERO_EXISTS;
} else if (generation_zero == GENERATION_ZERO_EXISTS)
- graph_report("commit-graph has non-zero generation number for commit %s, but zero elsewhere",
+ graph_report(_("commit-graph has non-zero generation number for commit %s, but zero elsewhere"),
oid_to_hex(&cur_oid));
if (generation_zero == GENERATION_ZERO_EXISTS)
max_generation--;
if (graph_commit->generation != max_generation + 1)
- graph_report("commit-graph generation for commit %s is %u != %u",
+ graph_report(_("commit-graph generation for commit %s is %u != %u"),
oid_to_hex(&cur_oid),
graph_commit->generation,
max_generation + 1);
if (graph_commit->date != odb_commit->date)
- graph_report("commit date for commit %s in commit-graph is %"PRItime" != %"PRItime,
+ graph_report(_("commit date for commit %s in commit-graph is %"PRItime" != %"PRItime),
oid_to_hex(&cur_oid),
graph_commit->date,
odb_commit->date);
#include "cache.h"
#define GIT_TEST_COMMIT_GRAPH "GIT_TEST_COMMIT_GRAPH"
+#define GIT_TEST_COMMIT_GRAPH_DIE_ON_LOAD "GIT_TEST_COMMIT_GRAPH_DIE_ON_LOAD"
struct commit;
char *get_commit_graph_filename(const char *obj_dir);
+int open_commit_graph(const char *graph_file, int *fd, struct stat *st);
/*
* Given a commit struct, try to fill the commit struct info, including:
const unsigned char *chunk_extra_edges;
};
-struct commit_graph *load_commit_graph_one(const char *graph_file);
+struct commit_graph *load_commit_graph_one_fd_st(int fd, struct stat *st);
struct commit_graph *parse_commit_graph(void *graph_map, int fd,
size_t graph_size);
{
return repo_parse_commit_gently(r, item, 0);
}
+
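+/*
+ * Parse a commit from the object database without consulting the
+ * commit-graph; the graph-writing code uses this so that it does not
+ * depend on existing graph data for the commits it is about to write.
+ */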
+static inline int parse_commit_no_graph(struct commit *commit)
+{
+ return repo_parse_commit_internal(the_repository, commit, 0, 0);
+}
+
#ifndef NO_THE_REPOSITORY_COMPATIBILITY_MACROS
#define parse_commit_internal(item, quiet, use) repo_parse_commit_internal(the_repository, item, quiet, use)
#define parse_commit_gently(item, quiet) repo_parse_commit_gently(the_repository, item, quiet)
extern const char *setup_temporary_shallow(const struct oid_array *extra);
extern void advertise_shallow_grafts(int);
+/*
+ * Initialize with prepare_shallow_info() or zero-initialize (equivalent to
+ * prepare_shallow_info with a NULL oid_array).
+ */
struct shallow_info {
struct oid_array *shallow;
int *ours, nr_ours;
}
ret = !wildmatch(pattern.buf + prefix, text.buf + prefix,
- icase ? WM_CASEFOLD : 0);
+ WM_PATHNAME | (icase ? WM_CASEFOLD : 0));
if (!ret && !already_tried_absolute) {
/*
HAVE_BSD_SYSCTL = YesPlease
FREAD_READS_DIRECTORIES = UnfortunatelyYes
HAVE_NS_GET_EXECUTABLE_PATH = YesPlease
+ BASIC_CFLAGS += -I/usr/local/include
+ BASIC_LDFLAGS += -L/usr/local/lib
endif
ifeq ($(uname_S),SunOS)
NEEDS_SOCKET = YesPlease
--- /dev/null
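+// Prefer the FLEX_ALLOC_STR()/FLEXPTR_ALLOC_STR() helpers over
+// open-coding strlen() with the *_ALLOC_MEM() variants.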
+@@
+expression str;
+identifier x, flexname;
+@@
+- FLEX_ALLOC_MEM(x, flexname, str, strlen(str));
++ FLEX_ALLOC_STR(x, flexname, str);
+
+@@
+expression str;
+identifier x, ptrname;
+@@
+- FLEXPTR_ALLOC_MEM(x, ptrname, str, strlen(str));
++ FLEXPTR_ALLOC_STR(x, ptrname, str);
}
__git_mergetools_common="diffuse diffmerge ecmerge emerge kdiff3 meld opendiff
- tkdiff vimdiff gvimdiff xxdiff araxis p4merge bc codecompare
+ tkdiff vimdiff gvimdiff xxdiff araxis p4merge bc
+ codecompare smerge
"
_git_difftool ()
{
__git_has_doubledash && return
- local subcommands="add status init deinit update summary foreach sync absorbgitdirs"
+ local subcommands="add status init deinit update set-branch summary foreach sync absorbgitdirs"
local subcommand="$(__git_find_on_cmdline "$subcommands")"
if [ -z "$subcommand" ]; then
case "$cur" in
--force --rebase --merge --reference --depth --recursive --jobs
"
;;
+ set-branch,--*)
+ __gitcomp "--default --branch"
+ ;;
summary,--*)
__gitcomp "--cached --files --summary-limit"
;;
#include "revision.h"
#include "log-tree.h"
#include "builtin.h"
+#include "parse-options.h"
#include "string-list.h"
#include "dir.h"
}
}
-void diff_no_index(struct rev_info *revs,
- int argc, const char **argv)
+static const char * const diff_no_index_usage[] = {
+ N_("git diff --no-index [<options>] <path> <path>"),
+ NULL
+};
+
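+/*
+ * `implicit_no_index` indicates that --no-index was implied (e.g. we
+ * are not inside a repository) rather than given explicitly, so that
+ * the usage error for a wrong number of paths can hint at passing
+ * --no-index.
+ */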
+int diff_no_index(struct rev_info *revs,
+ int implicit_no_index,
+ int argc, const char **argv)
{
- int i;
+ int i, no_index;
const char *paths[2];
struct strbuf replacement = STRBUF_INIT;
const char *prefix = revs->prefix;
-
- for (i = 1; i < argc - 2; ) {
- int j;
- if (!strcmp(argv[i], "--no-index"))
- i++;
- else if (!strcmp(argv[i], "--"))
- i++;
- else {
- j = diff_opt_parse(&revs->diffopt, argv + i, argc - i,
- revs->prefix);
- if (j <= 0)
- die("invalid diff option/value: %s", argv[i]);
- i += j;
- }
+ struct option no_index_options[] = {
+ OPT_BOOL_F(0, "no-index", &no_index, "",
+ PARSE_OPT_NONEG | PARSE_OPT_HIDDEN),
+ OPT_END(),
+ };
+ struct option *options;
+
+ options = parse_options_concat(no_index_options,
+ revs->diffopt.parseopts);
+ argc = parse_options(argc, argv, revs->prefix, options,
+ diff_no_index_usage, 0);
+ if (argc != 2) {
+ if (implicit_no_index)
+ warning(_("Not a git repository. Use --no-index to "
+ "compare two paths outside a working tree"));
+ usage_with_options(diff_no_index_usage, options);
}
-
+ FREE_AND_NULL(options);
for (i = 0; i < 2; i++) {
const char *p = argv[argc - 2 + i];
if (!strcmp(p, "-"))
revs->diffopt.flags.exit_with_status = 1;
if (queue_diff(&revs->diffopt, paths[0], paths[1]))
- exit(1);
+ return 1;
diff_set_mnemonic_prefix(&revs->diffopt, "1/", "2/");
diffcore_std(&revs->diffopt);
diff_flush(&revs->diffopt);
* The return code for --no-index imitates diff(1):
* 0 = no changes, 1 = changes, else error
*/
- exit(diff_result_code(&revs->diffopt, 0));
+ return diff_result_code(&revs->diffopt, 0);
}
#include "packfile.h"
#include "parse-options.h"
#include "help.h"
+#include "fetch-object.h"
#ifdef NO_FAST_WORKING_DIRECTORY
#define FAST_WORKING_DIRECTORY 0
FREE_AND_NULL(options->parseopts);
}
-static int opt_arg(const char *arg, int arg_short, const char *arg_long, int *val)
-{
- char c, *eq;
- int len;
-
- if (*arg != '-')
- return 0;
- c = *++arg;
- if (!c)
- return 0;
- if (c == arg_short) {
- c = *++arg;
- if (!c)
- return 1;
- if (val && isdigit(c)) {
- char *end;
- int n = strtoul(arg, &end, 10);
- if (*end)
- return 0;
- *val = n;
- return 1;
- }
- return 0;
- }
- if (c != '-')
- return 0;
- arg++;
- eq = strchrnul(arg, '=');
- len = eq - arg;
- if (!len || strncmp(arg, arg_long, len))
- return 0;
- if (*eq) {
- int n;
- char *end;
- if (!isdigit(*++eq))
- return 0;
- n = strtoul(eq, &end, 10);
- if (*end)
- return 0;
- *val = n;
- }
- return 1;
-}
-
-static inline int short_opt(char opt, const char **argv,
- const char **optarg)
-{
- const char *arg = argv[0];
- if (arg[0] != '-' || arg[1] != opt)
- return 0;
- if (arg[2] != '\0') {
- *optarg = arg + 2;
- return 1;
- }
- if (!argv[1])
- die("Option '%c' requires a value", opt);
- *optarg = argv[1];
- return 2;
-}
-
int parse_long_opt(const char *opt, const char **argv,
const char **optarg)
{
return opt->filter & filter_bit[(int) status];
}
-static int parse_diff_filter_opt(const char *optarg, struct diff_options *opt)
+unsigned diff_filter_bit(char status)
+{
+ prepare_filter_bits();
+ return filter_bit[(int) status];
+}
+
+static int diff_opt_diff_filter(const struct option *option,
+ const char *optarg, int unset)
{
+ struct diff_options *opt = option->value;
int i, optch;
+ BUG_ON_OPT_NEG(unset);
prepare_filter_bits();
/*
bit = (0 <= optch && optch <= 'Z') ? filter_bit[optch] : 0;
if (!bit)
- return optarg[i];
+ return error(_("unknown change class '%c' in --diff-filter=%s"),
+ optarg[i], optarg);
if (negate)
opt->filter &= ~bit;
else
*fmt |= DIFF_FORMAT_PATCH;
}
-static int parse_ws_error_highlight_opt(struct diff_options *opt, const char *arg)
+static int diff_opt_ws_error_highlight(const struct option *option,
+ const char *arg, int unset)
{
+ struct diff_options *opt = option->value;
int val = parse_ws_error_highlight(arg);
- if (val < 0) {
- error("unknown value after ws-error-highlight=%.*s",
- -1 - val, arg);
- return 0;
- }
+ BUG_ON_OPT_NEG(unset);
+ if (val < 0)
+ return error(_("unknown value after ws-error-highlight=%.*s"),
+ -1 - val, arg);
opt->ws_error_highlight = val;
- return 1;
+ return 0;
}
-static int parse_objfind_opt(struct diff_options *opt, const char *arg)
+static int diff_opt_find_object(const struct option *option,
+ const char *arg, int unset)
{
+ struct diff_options *opt = option->value;
struct object_id oid;
+ BUG_ON_OPT_NEG(unset);
if (get_oid(arg, &oid))
- return error("unable to resolve '%s'", arg);
+ return error(_("unable to resolve '%s'"), arg);
if (!opt->objfind)
opt->objfind = xcalloc(1, sizeof(*opt->objfind));
opt->flags.recursive = 1;
opt->flags.tree_in_recursive = 1;
oidset_insert(opt->objfind, &oid);
- return 1;
+ return 0;
}
static int diff_opt_anchored(const struct option *opt,
return 0;
}
+static int diff_opt_color_moved(const struct option *opt,
+ const char *arg, int unset)
+{
+ struct diff_options *options = opt->value;
+
+ if (unset) {
+ options->color_moved = COLOR_MOVED_NO;
+ } else if (!arg) {
+ if (diff_color_moved_default)
+ options->color_moved = diff_color_moved_default;
+ if (options->color_moved == COLOR_MOVED_NO)
+ options->color_moved = COLOR_MOVED_DEFAULT;
+ } else {
+ int cm = parse_color_moved(arg);
+ if (cm < 0)
+ return error(_("bad --color-moved argument: %s"), arg);
+ options->color_moved = cm;
+ }
+ return 0;
+}
+
+static int diff_opt_color_moved_ws(const struct option *opt,
+ const char *arg, int unset)
+{
+ struct diff_options *options = opt->value;
+ unsigned cm;
+
+ if (unset) {
+ options->color_moved_ws_handling = 0;
+ return 0;
+ }
+
+ cm = parse_color_moved_ws(arg);
+ if (cm & COLOR_MOVED_WS_ERROR)
+ return error(_("invalid mode '%s' in --color-moved-ws"), arg);
+ options->color_moved_ws_handling = cm;
+ return 0;
+}
+
static int diff_opt_color_words(const struct option *opt,
const char *arg, int unset)
{
return 0;
}
+static int diff_opt_line_prefix(const struct option *opt,
+ const char *optarg, int unset)
+{
+ struct diff_options *options = opt->value;
+
+ BUG_ON_OPT_NEG(unset);
+ options->line_prefix = optarg;
+ options->line_prefix_length = strlen(options->line_prefix);
+ graph_setup_line_prefix(options);
+ return 0;
+}
+
+static int diff_opt_no_prefix(const struct option *opt,
+ const char *optarg, int unset)
+{
+ struct diff_options *options = opt->value;
+
+ BUG_ON_OPT_NEG(unset);
+ BUG_ON_OPT_ARG(optarg);
+ options->a_prefix = "";
+ options->b_prefix = "";
+ return 0;
+}
+
static enum parse_opt_result diff_opt_output(struct parse_opt_ctx_t *ctx,
const struct option *opt,
const char *arg, int unset)
return 0;
}
+static int diff_opt_pickaxe_regex(const struct option *opt,
+ const char *arg, int unset)
+{
+ struct diff_options *options = opt->value;
+
+ BUG_ON_OPT_NEG(unset);
+ options->pickaxe = arg;
+ options->pickaxe_opts |= DIFF_PICKAXE_KIND_G;
+ return 0;
+}
+
+static int diff_opt_pickaxe_string(const struct option *opt,
+ const char *arg, int unset)
+{
+ struct diff_options *options = opt->value;
+
+ BUG_ON_OPT_NEG(unset);
+ options->pickaxe = arg;
+ options->pickaxe_opts |= DIFF_PICKAXE_KIND_S;
+ return 0;
+}
+
static int diff_opt_relative(const struct option *opt,
const char *arg, int unset)
{
N_("show full pre- and post-image object names on the \"index\" lines")),
OPT_COLOR_FLAG(0, "color", &options->use_color,
N_("show colored diff")),
+ OPT_CALLBACK_F(0, "ws-error-highlight", options, N_("<kind>"),
+ N_("highlight whitespace errors in the 'context', 'old' or 'new' lines in the diff"),
+ PARSE_OPT_NONEG, diff_opt_ws_error_highlight),
+ OPT_SET_INT('z', NULL, &options->line_termination,
+ N_("do not munge pathnames and use NULs as output field terminators in --raw or --numstat"),
+ 0),
+ OPT__ABBREV(&options->abbrev),
+ OPT_STRING_F(0, "src-prefix", &options->a_prefix, N_("<prefix>"),
+ N_("show the given source prefix instead of \"a/\""),
+ PARSE_OPT_NONEG),
+ OPT_STRING_F(0, "dst-prefix", &options->b_prefix, N_("<prefix>"),
+ N_("show the given source prefix instead of \"b/\""),
+ PARSE_OPT_NONEG),
+ OPT_CALLBACK_F(0, "line-prefix", options, N_("<prefix>"),
+ N_("prepend an additional prefix to every line of output"),
+ PARSE_OPT_NONEG, diff_opt_line_prefix),
+ OPT_CALLBACK_F(0, "no-prefix", options, NULL,
+ N_("do not show any source or destination prefix"),
+ PARSE_OPT_NONEG | PARSE_OPT_NOARG, diff_opt_no_prefix),
+ OPT_INTEGER_F(0, "inter-hunk-context", &options->interhunkcontext,
+ N_("show context between diff hunks up to the specified number of lines"),
+ PARSE_OPT_NONEG),
OPT_CALLBACK_F(0, "output-indicator-new",
&options->output_indicators[OUTPUT_INDICATOR_NEW],
N_("<char>"),
OPT_CALLBACK_F(0, "follow", options, NULL,
N_("continue listing the history of a file beyond renames"),
PARSE_OPT_NOARG, diff_opt_follow),
+ OPT_INTEGER('l', NULL, &options->rename_limit,
+ N_("prevent rename/copy detection if the number of rename/copy targets exceeds given limit")),
OPT_GROUP(N_("Diff algorithm options")),
OPT_BIT(0, "minimal", &options->xdl_opts,
OPT_CALLBACK_F(0, "color-words", options, N_("<regex>"),
N_("equivalent to --word-diff=color --word-diff-regex=<regex>"),
PARSE_OPT_NONEG | PARSE_OPT_OPTARG, diff_opt_color_words),
+ OPT_CALLBACK_F(0, "color-moved", options, N_("<mode>"),
+ N_("move lines of code are colored differently"),
+ PARSE_OPT_OPTARG, diff_opt_color_moved),
+ OPT_CALLBACK_F(0, "color-moved-ws", options, N_("<mode>"),
+ N_("how white spaces are ignored in --color-moved"),
+ 0, diff_opt_color_moved_ws),
OPT_GROUP(N_("Diff other options")),
OPT_CALLBACK_F(0, "relative", options, N_("<prefix>"),
N_("specify how differences in submodules are shown"),
PARSE_OPT_NONEG | PARSE_OPT_OPTARG,
diff_opt_submodule),
+ OPT_SET_INT_F(0, "ita-invisible-in-index", &options->ita_invisible_in_index,
+ N_("hide 'git add -N' entries from the index"),
+ 1, PARSE_OPT_NONEG),
+ OPT_SET_INT_F(0, "ita-visible-in-index", &options->ita_invisible_in_index,
+ N_("treat 'git add -N' entries as real in the index"),
+ 0, PARSE_OPT_NONEG),
+ OPT_CALLBACK_F('S', NULL, options, N_("<string>"),
+ N_("look for differences that change the number of occurrences of the specified string"),
+ 0, diff_opt_pickaxe_string),
+ OPT_CALLBACK_F('G', NULL, options, N_("<regex>"),
+ N_("look for differences that change the number of occurrences of the specified regex"),
+ 0, diff_opt_pickaxe_regex),
+ OPT_BIT_F(0, "pickaxe-all", &options->pickaxe_opts,
+ N_("show all changes in the changeset with -S or -G"),
+ DIFF_PICKAXE_ALL, PARSE_OPT_NONEG),
+ OPT_BIT_F(0, "pickaxe-regex", &options->pickaxe_opts,
+ N_("treat <string> in -S as extended POSIX regular expression"),
+ DIFF_PICKAXE_REGEX, PARSE_OPT_NONEG),
+ OPT_FILENAME('O', NULL, &options->orderfile,
+ N_("control the order in which files appear in the output")),
+ OPT_CALLBACK_F(0, "find-object", options, N_("<object-id>"),
+ N_("look for differences that change the number of occurrences of the specified object"),
+ PARSE_OPT_NONEG, diff_opt_find_object),
+ OPT_CALLBACK_F(0, "diff-filter", options, N_("[(A|C|D|M|R|T|U|X|B)...[*]]"),
+ N_("select files by diff type"),
+ PARSE_OPT_NONEG, diff_opt_diff_filter),
{ OPTION_CALLBACK, 0, "output", options, N_("<file>"),
N_("Output to a specific file"),
PARSE_OPT_NONEG, NULL, 0, diff_opt_output },
int diff_opt_parse(struct diff_options *options,
const char **av, int ac, const char *prefix)
{
- const char *arg = av[0];
- const char *optarg;
- int argcount;
-
if (!prefix)
prefix = "";
PARSE_OPT_ONE_SHOT |
PARSE_OPT_STOP_AT_NON_OPTION);
- if (ac)
- return ac;
-
- /* flags options */
- if (!strcmp(arg, "--color-moved")) {
- if (diff_color_moved_default)
- options->color_moved = diff_color_moved_default;
- if (options->color_moved == COLOR_MOVED_NO)
- options->color_moved = COLOR_MOVED_DEFAULT;
- } else if (!strcmp(arg, "--no-color-moved"))
- options->color_moved = COLOR_MOVED_NO;
- else if (skip_prefix(arg, "--color-moved=", &arg)) {
- int cm = parse_color_moved(arg);
- if (cm < 0)
- return error("bad --color-moved argument: %s", arg);
- options->color_moved = cm;
- } else if (!strcmp(arg, "--no-color-moved-ws")) {
- options->color_moved_ws_handling = 0;
- } else if (skip_prefix(arg, "--color-moved-ws=", &arg)) {
- unsigned cm = parse_color_moved_ws(arg);
- if (cm & COLOR_MOVED_WS_ERROR)
- return -1;
- options->color_moved_ws_handling = cm;
- } else if (skip_prefix(arg, "--ws-error-highlight=", &arg))
- return parse_ws_error_highlight_opt(options, arg);
- else if (!strcmp(arg, "--ita-invisible-in-index"))
- options->ita_invisible_in_index = 1;
- else if (!strcmp(arg, "--ita-visible-in-index"))
- options->ita_invisible_in_index = 0;
-
- /* misc options */
- else if (!strcmp(arg, "-z"))
- options->line_termination = 0;
- else if ((argcount = short_opt('l', av, &optarg))) {
- options->rename_limit = strtoul(optarg, NULL, 10);
- return argcount;
- }
- else if ((argcount = short_opt('S', av, &optarg))) {
- options->pickaxe = optarg;
- options->pickaxe_opts |= DIFF_PICKAXE_KIND_S;
- return argcount;
- } else if ((argcount = short_opt('G', av, &optarg))) {
- options->pickaxe = optarg;
- options->pickaxe_opts |= DIFF_PICKAXE_KIND_G;
- return argcount;
- }
- else if (!strcmp(arg, "--pickaxe-all"))
- options->pickaxe_opts |= DIFF_PICKAXE_ALL;
- else if (!strcmp(arg, "--pickaxe-regex"))
- options->pickaxe_opts |= DIFF_PICKAXE_REGEX;
- else if ((argcount = short_opt('O', av, &optarg))) {
- options->orderfile = prefix_filename(prefix, optarg);
- return argcount;
- } else if (skip_prefix(arg, "--find-object=", &arg))
- return parse_objfind_opt(options, arg);
- else if ((argcount = parse_long_opt("diff-filter", av, &optarg))) {
- int offending = parse_diff_filter_opt(optarg, options);
- if (offending)
- die("unknown change class '%c' in --diff-filter=%s",
- offending, optarg);
- return argcount;
- }
- else if (!strcmp(arg, "--no-abbrev"))
- options->abbrev = 0;
- else if (!strcmp(arg, "--abbrev"))
- options->abbrev = DEFAULT_ABBREV;
- else if (skip_prefix(arg, "--abbrev=", &arg)) {
- options->abbrev = strtoul(arg, NULL, 10);
- if (options->abbrev < MINIMUM_ABBREV)
- options->abbrev = MINIMUM_ABBREV;
- else if (the_hash_algo->hexsz < options->abbrev)
- options->abbrev = the_hash_algo->hexsz;
- }
- else if ((argcount = parse_long_opt("src-prefix", av, &optarg))) {
- options->a_prefix = optarg;
- return argcount;
- }
- else if ((argcount = parse_long_opt("line-prefix", av, &optarg))) {
- options->line_prefix = optarg;
- options->line_prefix_length = strlen(options->line_prefix);
- graph_setup_line_prefix(options);
- return argcount;
- }
- else if ((argcount = parse_long_opt("dst-prefix", av, &optarg))) {
- options->b_prefix = optarg;
- return argcount;
- }
- else if (!strcmp(arg, "--no-prefix"))
- options->a_prefix = options->b_prefix = "";
- else if (opt_arg(arg, '\0', "inter-hunk-context",
- &options->interhunkcontext))
- ;
- else
- return 0;
- return 1;
+ return ac;
}
int parse_rename_score(const char **cp_p)
QSORT(q->queue, q->nr, diffnamecmp);
}
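+/*
+ * Add the object named by the filespec to `to_fetch` if it has a valid
+ * OID but is missing from the local object store, so that it can be
+ * prefetched in a single batch by the caller.
+ */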
+static void add_if_missing(struct repository *r,
+ struct oid_array *to_fetch,
+ const struct diff_filespec *filespec)
+{
+ if (filespec && filespec->oid_valid &&
+ oid_object_info_extended(r, &filespec->oid, NULL,
+ OBJECT_INFO_FOR_PREFETCH))
+ oid_array_append(to_fetch, &filespec->oid);
+}
+
void diffcore_std(struct diff_options *options)
{
+ if (options->repo == the_repository &&
+ repository_format_partial_clone) {
+ /*
+ * Prefetch the diff pairs that are about to be flushed.
+ */
+ int i;
+ struct diff_queue_struct *q = &diff_queued_diff;
+ struct oid_array to_fetch = OID_ARRAY_INIT;
+
+ for (i = 0; i < q->nr; i++) {
+ struct diff_filepair *p = q->queue[i];
+ add_if_missing(options->repo, &to_fetch, p->one);
+ add_if_missing(options->repo, &to_fetch, p->two);
+ }
+ if (to_fetch.nr)
+ /*
+ * NEEDSWORK: Consider deduplicating the OIDs sent.
+ */
+ fetch_objects(repository_format_partial_clone,
+ to_fetch.oid, to_fetch.nr);
+ oid_array_clear(&to_fetch);
+ }
+
/* NOTE please keep the following in sync with diff_tree_combined() */
if (options->skip_stat_unmatch)
diffcore_skip_stat_unmatch(options);
struct option *parseopts;
};
+unsigned diff_filter_bit(char status);
+
void diff_emit_submodule_del(struct diff_options *o, const char *line);
void diff_emit_submodule_add(struct diff_options *o, const char *line);
void diff_emit_submodule_untracked(struct diff_options *o, const char *path);
int diff_result_code(struct diff_options *, int);
-void diff_no_index(struct rev_info *, int, const char **);
+int diff_no_index(struct rev_info *,
+ int implicit_no_index, int, const char **);
int index_differs_from(struct repository *r, const char *def,
const struct diff_flags *flags,
}
int report_path_error(const char *ps_matched,
- const struct pathspec *pathspec,
- const char *prefix)
+ const struct pathspec *pathspec)
{
/*
* Make sure all pathspec matched; otherwise it is an error.
return path_none;
}
if (!(dir->flags & DIR_NO_GITLINKS)) {
- struct object_id oid;
- if (resolve_gitlink_ref(dirname, "HEAD", &oid) == 0)
+ struct strbuf sb = STRBUF_INIT;
+ int is_repo;
+
+ strbuf_addstr(&sb, dirname);
+ is_repo = is_nonbare_repository_dir(&sb);
+ strbuf_release(&sb);
+ if (is_repo)
return exclude ? path_excluded : path_untracked;
}
return path_recurse;
}
struct stat_data info_exclude_stat;
struct stat_data excludes_file_stat;
uint32_t dir_flags;
- unsigned char info_exclude_sha1[20];
- unsigned char excludes_file_sha1[20];
- char exclude_per_dir[FLEX_ARRAY];
};
#define ouc_offset(x) offsetof(struct ondisk_untracked_cache, x)
-#define ouc_size(len) (ouc_offset(exclude_per_dir) + len + 1)
struct write_data {
int index; /* number of written untracked_cache_dir */
struct write_data wd;
unsigned char varbuf[16];
int varint_len;
- size_t len = strlen(untracked->exclude_per_dir);
+ const unsigned hashsz = the_hash_algo->rawsz;
- FLEX_ALLOC_MEM(ouc, exclude_per_dir, untracked->exclude_per_dir, len);
+ ouc = xcalloc(1, sizeof(*ouc));
stat_data_to_disk(&ouc->info_exclude_stat, &untracked->ss_info_exclude.stat);
stat_data_to_disk(&ouc->excludes_file_stat, &untracked->ss_excludes_file.stat);
- hashcpy(ouc->info_exclude_sha1, untracked->ss_info_exclude.oid.hash);
- hashcpy(ouc->excludes_file_sha1, untracked->ss_excludes_file.oid.hash);
ouc->dir_flags = htonl(untracked->dir_flags);
varint_len = encode_varint(untracked->ident.len, varbuf);
strbuf_add(out, varbuf, varint_len);
strbuf_addbuf(out, &untracked->ident);
- strbuf_add(out, ouc, ouc_size(len));
+ strbuf_add(out, ouc, sizeof(*ouc));
+ strbuf_add(out, untracked->ss_info_exclude.oid.hash, hashsz);
+ strbuf_add(out, untracked->ss_excludes_file.oid.hash, hashsz);
+ strbuf_add(out, untracked->exclude_per_dir, strlen(untracked->exclude_per_dir) + 1);
FREE_AND_NULL(ouc);
if (!untracked->root) {
next = data + len + 1;
if (next > rd->end)
return -1;
- *untracked_ = untracked = xmalloc(st_add(sizeof(*untracked), len));
+ *untracked_ = untracked = xmalloc(st_add3(sizeof(*untracked), len, 1));
memcpy(untracked, &ud, sizeof(ud));
memcpy(untracked->name, data, len + 1);
data = next;
int ident_len;
ssize_t len;
const char *exclude_per_dir;
+ const unsigned hashsz = the_hash_algo->rawsz;
+ const unsigned offset = sizeof(struct ondisk_untracked_cache);
+ const unsigned exclude_per_dir_offset = offset + 2 * hashsz;
if (sz <= 1 || end[-1] != '\0')
return NULL;
ident = (const char *)next;
next += ident_len;
- if (next + ouc_size(0) > end)
+ if (next + exclude_per_dir_offset + 1 > end)
return NULL;
uc = xcalloc(1, sizeof(*uc));
strbuf_add(&uc->ident, ident, ident_len);
load_oid_stat(&uc->ss_info_exclude,
next + ouc_offset(info_exclude_stat),
- next + ouc_offset(info_exclude_sha1));
+ next + offset);
load_oid_stat(&uc->ss_excludes_file,
next + ouc_offset(excludes_file_stat),
- next + ouc_offset(excludes_file_sha1));
+ next + offset + hashsz);
uc->dir_flags = get_be32(next + ouc_offset(dir_flags));
- exclude_per_dir = (const char *)next + ouc_offset(exclude_per_dir);
+ exclude_per_dir = (const char *)next + exclude_per_dir_offset;
uc->exclude_per_dir = xstrdup(exclude_per_dir);
/* NUL after exclude_per_dir is covered by sizeof(*ouc) */
- next += ouc_size(strlen(exclude_per_dir));
+ next += exclude_per_dir_offset + strlen(exclude_per_dir) + 1;
if (next >= end)
goto done2;
const struct pathspec *pathspec,
const char *name, int namelen,
int prefix, char *seen, int is_dir);
-extern int report_path_error(const char *ps_matched, const struct pathspec *pathspec, const char *prefix);
+extern int report_path_error(const char *ps_matched, const struct pathspec *pathspec);
extern int within_depth(const char *name, int namelen, int depth, int max_depth);
extern int fill_directory(struct dir_struct *dir,
*/
#define NO_DELTA S_ISUID
+/*
+ * The amount of additional space required in order to write an object into the
+ * current pack. This is the hash lengths at the end of the pack, plus the
+ * length of one object ID.
+ */
+#define PACK_SIZE_THRESHOLD (the_hash_algo->rawsz * 3)
+
struct object_entry {
struct pack_idx_entry idx;
struct object_entry *next;
if (c != last)
die("internal consistency error creating the index");
- tmpfile = write_idx_file(NULL, idx, object_count, &pack_idx_opts, pack_data->sha1);
+ tmpfile = write_idx_file(NULL, idx, object_count, &pack_idx_opts,
+ pack_data->hash);
free(idx);
return tmpfile;
}
struct strbuf name = STRBUF_INIT;
int keep_fd;
- odb_pack_name(&name, pack_data->sha1, "keep");
+ odb_pack_name(&name, pack_data->hash, "keep");
keep_fd = odb_pack_keep(name.buf);
if (keep_fd < 0)
die_errno("cannot create keep file");
if (close(keep_fd))
die_errno("failed to write keep file");
- odb_pack_name(&name, pack_data->sha1, "pack");
+ odb_pack_name(&name, pack_data->hash, "pack");
if (finalize_object_file(pack_data->pack_name, name.buf))
die("cannot store pack file");
- odb_pack_name(&name, pack_data->sha1, "idx");
+ odb_pack_name(&name, pack_data->hash, "idx");
if (finalize_object_file(curr_index_name, name.buf))
die("cannot store index file");
free((void *)curr_index_name);
for (k = 0; k < pack_id; k++) {
struct packed_git *p = all_packs[k];
- odb_pack_name(&name, p->sha1, "keep");
+ odb_pack_name(&name, p->hash, "keep");
unlink_or_warn(name.buf);
}
strbuf_release(&name);
close_pack_windows(pack_data);
finalize_hashfile(pack_file, cur_pack_oid.hash, 0);
- fixup_pack_header_footer(pack_data->pack_fd, pack_data->sha1,
- pack_data->pack_name, object_count,
- cur_pack_oid.hash, pack_size);
+ fixup_pack_header_footer(pack_data->pack_fd, pack_data->hash,
+ pack_data->pack_name, object_count,
+ cur_pack_oid.hash, pack_size);
if (object_count <= unpack_limit) {
if (!loosen_small_pack(pack_data)) {
git_deflate_end(&s);
/* Determine if we should auto-checkpoint. */
- if ((max_packsize && (pack_size + 60 + s.total_out) > max_packsize)
- || (pack_size + 60 + s.total_out) < pack_size) {
+ if ((max_packsize
+ && (pack_size + PACK_SIZE_THRESHOLD + s.total_out) > max_packsize)
+ || (pack_size + PACK_SIZE_THRESHOLD + s.total_out) < pack_size) {
/* This new object needs to *not* have the current pack_id. */
e->pack_id = pack_id + 1;
int status = Z_OK;
/* Determine if we should auto-checkpoint. */
- if ((max_packsize && (pack_size + 60 + len) > max_packsize)
- || (pack_size + 60 + len) < pack_size)
+ if ((max_packsize
+ && (pack_size + PACK_SIZE_THRESHOLD + len) > max_packsize)
+ || (pack_size + PACK_SIZE_THRESHOLD + len) < pack_size)
cycle_packfile();
hashfile_checkpoint(pack_file, &checkpoint);
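
The PACK_SIZE_THRESHOLD macro above replaces the literal 60 in the two auto-checkpoint checks: 60 was 3 x 20, three SHA-1 lengths, and writing it as the_hash_algo->rawsz * 3 yields 96 once the hash is SHA-256. A small self-contained check of that arithmetic; the 2 GiB cap is a made-up value and would_checkpoint() only mirrors the condition shown above:

    /*
     * Quick check of the headroom arithmetic: the old literal 60 was
     * 3 x 20 (SHA-1); rawsz * 3 keeps the same margin for SHA-256.
     * The max_packsize cap below is made up, not a git default.
     */
    #include <stdio.h>

    static int would_checkpoint(unsigned long pack_size, unsigned long out_len,
                                unsigned long max_packsize, unsigned int rawsz)
    {
            unsigned long threshold = rawsz * 3;    /* PACK_SIZE_THRESHOLD */
            unsigned long need = pack_size + threshold + out_len;

            /* cut over to a new packfile if we would exceed the cap or wrap */
            return (max_packsize && need > max_packsize) || need < pack_size;
    }

    int main(void)
    {
            unsigned long cap = 2UL * 1024 * 1024 * 1024;   /* 2 GiB, arbitrary */

            printf("SHA-1 headroom:   %d bytes\n", 20 * 3);
            printf("SHA-256 headroom: %d bytes\n", 32 * 3);
            printf("checkpoint near the cap? %d\n",
                   would_checkpoint(cap - 50, 10, cap, 32));
            return 0;
    }
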
c += e->name->str_len + 1;
hashcpy(e->versions[0].oid.hash, (unsigned char *)c);
hashcpy(e->versions[1].oid.hash, (unsigned char *)c);
- c += GIT_SHA1_RAWSZ;
+ c += the_hash_algo->rawsz;
}
free(buf);
}
strbuf_addf(b, "%o %s%c",
(unsigned int)(e->versions[v].mode & ~NO_DELTA),
e->name->str_dat, '\0');
- strbuf_add(b, e->versions[v].oid.hash, GIT_SHA1_RAWSZ);
+ strbuf_add(b, e->versions[v].oid.hash, the_hash_algo->rawsz);
}
}
}
for (;;) {
- const char *p;
-
if (unread_command_buf) {
unread_command_buf = 0;
} else {
rc->prev->next = rc;
cmd_tail = rc;
}
- if (skip_prefix(command_buf.buf, "get-mark ", &p)) {
- parse_get_mark(p);
- continue;
- }
- if (skip_prefix(command_buf.buf, "cat-blob ", &p)) {
- parse_cat_blob(p);
- continue;
- }
if (command_buf.buf[0] == '#')
continue;
return 0;
unsigned int i, tmp_hex_oid_len, tmp_fullpath_len;
uintmax_t num_notes = 0;
struct object_id oid;
- char realpath[60];
+ /* hex oid + '/' between each pair of hex digits + NUL */
+ char realpath[GIT_MAX_HEXSZ + ((GIT_MAX_HEXSZ / 2) - 1) + 1];
+ const unsigned hexsz = the_hash_algo->hexsz;
if (!root->tree)
load_tree(root);
* of 2 chars.
*/
if (!e->versions[1].mode ||
- tmp_hex_oid_len > GIT_SHA1_HEXSZ ||
+ tmp_hex_oid_len > hexsz ||
e->name->str_len % 2)
continue;
tmp_fullpath_len += e->name->str_len;
fullpath[tmp_fullpath_len] = '\0';
- if (tmp_hex_oid_len == GIT_SHA1_HEXSZ && !get_oid_hex(hex_oid, &oid)) {
+ if (tmp_hex_oid_len == hexsz && !get_oid_hex(hex_oid, &oid)) {
/* This is a note entry */
if (fanout == 0xff) {
/* Counting mode, no rename */
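
The realpath buffer in the notes code above is sized for the worst case its comment spells out: a full hex object name with '/' inserted after every pair of digits, plus the terminating NUL, i.e. 40 + 19 + 1 = 60 bytes for SHA-1 and 64 + 31 + 1 = 96 for SHA-256. A standalone sketch of that sizing; the object name below is an arbitrary placeholder:

    /*
     * Sketch of the buffer sizing: a fully fanned-out notes path puts
     * '/' after every two hex digits, so a hexsz-character object name
     * needs hexsz + (hexsz / 2 - 1) + 1 bytes including the NUL.
     */
    #include <stdio.h>
    #include <string.h>

    #define MAX_HEXSZ 64    /* longest supported hex object name (SHA-256) */

    static void fanout_path(const char *hex, char *out)
    {
            size_t n = strlen(hex), j = 0;

            for (size_t i = 0; i < n; i += 2) {
                    if (i)
                            out[j++] = '/';
                    out[j++] = hex[i];
                    out[j++] = hex[i + 1];
            }
            out[j] = '\0';
    }

    int main(void)
    {
            char path[MAX_HEXSZ + (MAX_HEXSZ / 2 - 1) + 1];
            const char *oid = "1234567890123456789012345678901234567890";

            fanout_path(oid, path);
            printf("used %zu of %zu bytes: %s\n",
                   strlen(path) + 1, sizeof(path), path);
            return 0;
    }
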
strbuf_addstr(&uq, p);
p = uq.buf;
}
- read_next_command();
- parse_and_store_blob(&last_blob, &oid, 0);
+ while (read_next_command() != EOF) {
+ const char *v;
+ if (skip_prefix(command_buf.buf, "cat-blob ", &v))
+ parse_cat_blob(v);
+ else {
+ parse_and_store_blob(&last_blob, &oid, 0);
+ break;
+ }
+ }
} else {
enum object_type expected = S_ISDIR(mode) ?
OBJ_TREE: OBJ_BLOB;
struct object_entry *oe;
struct branch *s;
struct object_id oid, commit_oid;
- char path[60];
+ char path[GIT_MAX_RAWSZ * 3];
uint16_t inline_data = 0;
unsigned char new_fanout;
char *buf = read_object_with_reference(&commit_oid,
commit_type, &size,
&commit_oid);
- if (!buf || size < 46)
+ if (!buf || size < the_hash_algo->hexsz + 6)
die("Not a valid commit: %s", p);
free(buf);
} else
static void parse_from_commit(struct branch *b, char *buf, unsigned long size)
{
- if (!buf || size < GIT_SHA1_HEXSZ + 6)
+ if (!buf || size < the_hash_algo->hexsz + 6)
die("Not a valid commit: %s", oid_to_hex(&b->oid));
if (memcmp("tree ", buf, 5)
|| get_oid_hex(buf + 5, &b->branch_tree.versions[1].oid))
char *buf = read_object_with_reference(&n->oid,
commit_type,
&size, &n->oid);
- if (!buf || size < 46)
+ if (!buf || size < the_hash_algo->hexsz + 6)
die("Not a valid commit: %s", from);
free(buf);
} else
file_change_deleteall(b);
else if (skip_prefix(command_buf.buf, "ls ", &v))
parse_ls(v, b);
+ else if (skip_prefix(command_buf.buf, "cat-blob ", &v))
+ parse_cat_blob(v);
else {
unread_command_buf = 1;
break;
die("Unknown mark: %s", command_buf.buf);
xsnprintf(output, sizeof(output), "%s\n", oid_to_hex(&oe->idx.oid));
- cat_blob_write(output, GIT_SHA1_HEXSZ + 1);
+ cat_blob_write(output, the_hash_algo->hexsz + 1);
}
static void parse_cat_blob(const char *p)
{
unsigned long size;
char *buf = NULL;
+ const unsigned hexsz = the_hash_algo->hexsz;
+
if (!oe) {
enum object_type type = oid_object_info(the_repository, oid,
NULL);
/* Peel one layer. */
switch (oe->type) {
case OBJ_TAG:
- if (size < GIT_SHA1_HEXSZ + strlen("object ") ||
+ if (size < hexsz + strlen("object ") ||
get_oid_hex(buf + strlen("object "), oid))
die("Invalid SHA1 in tag: %s", command_buf.buf);
break;
case OBJ_COMMIT:
- if (size < GIT_SHA1_HEXSZ + strlen("tree ") ||
+ if (size < hexsz + strlen("tree ") ||
get_oid_hex(buf + strlen("tree "), oid))
die("Invalid SHA1 in commit: %s", command_buf.buf);
}
return e;
}
-static void print_ls(int mode, const unsigned char *sha1, const char *path)
+static void print_ls(int mode, const unsigned char *hash, const char *path)
{
static struct strbuf line = STRBUF_INIT;
/* mode SP type SP object_name TAB path LF */
strbuf_reset(&line);
strbuf_addf(&line, "%06o %s %s\t",
- mode & ~NO_DELTA, type, sha1_to_hex(sha1));
+ mode & ~NO_DELTA, type, hash_to_hex(hash));
quote_c_style(path, &line, NULL, 0);
strbuf_addch(&line, '\n');
}
const char *v;
if (!strcmp("blob", command_buf.buf))
parse_new_blob();
- else if (skip_prefix(command_buf.buf, "ls ", &v))
- parse_ls(v, NULL);
else if (skip_prefix(command_buf.buf, "commit ", &v))
parse_new_commit(v);
else if (skip_prefix(command_buf.buf, "tag ", &v))
parse_new_tag(v);
else if (skip_prefix(command_buf.buf, "reset ", &v))
parse_reset_branch(v);
+ else if (skip_prefix(command_buf.buf, "ls ", &v))
+ parse_ls(v, NULL);
+ else if (skip_prefix(command_buf.buf, "cat-blob ", &v))
+ parse_cat_blob(v);
+ else if (skip_prefix(command_buf.buf, "get-mark ", &v))
+ parse_get_mark(v);
else if (!strcmp("checkpoint", command_buf.buf))
parse_checkpoint();
else if (!strcmp("done", command_buf.buf))
next = ref->next;
if (starts_with(ref->name, "refs/") &&
- check_refname_format(ref->name, 0))
- ; /* trash */
- else {
+ check_refname_format(ref->name, 0)) {
+ /*
+ * trash or a peeled value; do not even add it to
+ * unmatched list
+ */
+ free_one_ref(ref);
+ continue;
+ } else {
while (i < nr_sought) {
int cmp = strcmp(ref->name, sought[i]->name);
if (cmp < 0)
}
oidset_clear(&tip_oids);
- for (ref = unmatched; ref; ref = next) {
- next = ref->next;
- free(ref);
- }
+ free_refs(unmatched);
*refs = newlist;
}
}
static void receive_shallow_info(struct fetch_pack_args *args,
- struct packet_reader *reader)
+ struct packet_reader *reader,
+ struct oid_array *shallows,
+ struct shallow_info *si)
{
- int line_received = 0;
+ int unshallow_received = 0;
process_section_header(reader, "shallow-info", 0);
while (packet_reader_read(reader) == PACKET_READ_NORMAL) {
if (skip_prefix(reader->line, "shallow ", &arg)) {
if (get_oid_hex(arg, &oid))
die(_("invalid shallow line: %s"), reader->line);
- register_shallow(the_repository, &oid);
- line_received = 1;
+ oid_array_append(shallows, &oid);
continue;
}
if (skip_prefix(reader->line, "unshallow ", &arg)) {
die(_("error in object: %s"), reader->line);
if (unregister_shallow(&oid))
die(_("no shallow found: %s"), reader->line);
- line_received = 1;
+ unshallow_received = 1;
continue;
}
die(_("expected shallow/unshallow, got %s"), reader->line);
reader->status != PACKET_READ_DELIM)
die(_("error processing shallow info: %d"), reader->status);
- if (line_received) {
+ if (args->deepen || unshallow_received) {
+ /*
+ * Treat these as shallow lines caused by our depth settings.
+ * In v0, these lines cannot cause refs to be rejected; do the
+ * same.
+ */
+ int i;
+
+ for (i = 0; i < shallows->nr; i++)
+ register_shallow(the_repository, &shallows->oid[i]);
setup_alternate_shallow(&shallow_lock, &alternate_shallow_file,
NULL);
args->deepen = 1;
+ } else if (shallows->nr) {
+ /*
+ * Treat these as shallow lines caused by the remote being
+ * shallow. In v0, remote refs that reach these objects are
+ * rejected (unless --update-shallow is set); do the same.
+ */
+ prepare_shallow_info(si, shallows);
+ if (si->nr_ours || si->nr_theirs)
+ alternate_shallow_file =
+ setup_temporary_shallow(si->shallow);
+ else
+ alternate_shallow_file = NULL;
} else {
alternate_shallow_file = NULL;
}
}
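
The two block comments above map protocol v2 shallow lines back onto the v0 behaviours: lines caused by the client's own depth request register the shallows and write the shallow file, while lines caused by a shallow remote go through prepare_shallow_info() so refs reaching those objects can be rejected unless --update-shallow is given. Restated as a standalone decision function, purely illustrative and with invented names:

    /*
     * Not git code: a compact restatement of the branching above,
     * naming the three v0-equivalent outcomes of a v2 shallow-info
     * section.
     */
    #include <stdio.h>

    enum shallow_handling {
            NO_SHALLOWS,            /* nothing to record */
            CLIENT_DEPTH,           /* caused by our own --depth/--shallow-* request */
            REMOTE_IS_SHALLOW       /* the remote itself is shallow */
    };

    static enum shallow_handling classify(int deepen_requested,
                                          int unshallow_received,
                                          int nr_shallow_lines)
    {
            if (deepen_requested || unshallow_received)
                    return CLIENT_DEPTH;        /* register shallows, write shallow file */
            if (nr_shallow_lines)
                    return REMOTE_IS_SHALLOW;   /* refs may be rejected unless --update-shallow */
            return NO_SHALLOWS;
    }

    int main(void)
    {
            printf("%d %d %d\n",
                   classify(1, 0, 3), classify(0, 0, 3), classify(0, 0, 0));
            return 0;
    }
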
+static int cmp_name_ref(const void *name, const void *ref)
+{
+ return strcmp(name, (*(struct ref **)ref)->name);
+}
+
static void receive_wanted_refs(struct packet_reader *reader,
struct ref **sought, int nr_sought)
{
while (packet_reader_read(reader) == PACKET_READ_NORMAL) {
struct object_id oid;
const char *end;
- int i;
+ struct ref **found;
if (parse_oid_hex(reader->line, &oid, &end) || *end++ != ' ')
die(_("expected wanted-ref, got '%s'"), reader->line);
- for (i = 0; i < nr_sought; i++) {
- if (!strcmp(end, sought[i]->name)) {
- oidcpy(&sought[i]->old_oid, &oid);
- break;
- }
- }
-
- if (i == nr_sought)
+ found = bsearch(end, sought, nr_sought, sizeof(*sought),
+ cmp_name_ref);
+ if (!found)
die(_("unexpected wanted-ref: '%s'"), reader->line);
+ oidcpy(&(*found)->old_oid, &oid);
}
if (reader->status != PACKET_READ_DELIM)
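
receive_wanted_refs() above switches from a linear scan to bsearch(3) keyed on the ref name, which is valid only because the sought[] array is kept sorted by name. A self-contained example of that comparator pattern, with invented ref data:

    /*
     * Minimal illustration of the lookup: bsearch(3) where the key is
     * a plain string and each element is a struct pointer. It only
     * works because the array is sorted by name, as sought[] is in
     * fetch-pack. The struct and ref names are made up.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct ref {
            const char *name;
    };

    static int cmp_name_ref(const void *name, const void *ref)
    {
            return strcmp(name, (*(struct ref *const *)ref)->name);
    }

    int main(void)
    {
            struct ref a = { "refs/heads/maint" };
            struct ref b = { "refs/heads/master" };
            struct ref c = { "refs/tags/v2.21.0" };
            struct ref *sought[] = { &a, &b, &c };  /* sorted by name */
            struct ref **found;

            found = bsearch("refs/heads/master", sought,
                            sizeof(sought) / sizeof(*sought), sizeof(*sought),
                            cmp_name_ref);
            printf("%s\n", found ? (*found)->name : "not found");
            return 0;
    }
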
int fd[2],
const struct ref *orig_ref,
struct ref **sought, int nr_sought,
+ struct oid_array *shallows,
+ struct shallow_info *si,
char **pack_lockfile)
{
struct ref *ref = copy_ref_list(orig_ref);
case FETCH_GET_PACK:
/* Check for shallow-info section */
if (process_section_header(&reader, "shallow-info", 1))
- receive_shallow_info(args, &reader);
+ receive_shallow_info(args, &reader, shallows, si);
if (process_section_header(&reader, "wanted-refs", 1))
receive_wanted_refs(&reader, sought, nr_sought);
}
struct ref *fetch_pack(struct fetch_pack_args *args,
- int fd[], struct child_process *conn,
+ int fd[],
const struct ref *ref,
- const char *dest,
struct ref **sought, int nr_sought,
struct oid_array *shallow,
char **pack_lockfile,
{
struct ref *ref_cpy;
struct shallow_info si;
+ struct oid_array shallows_scratch = OID_ARRAY_INIT;
fetch_pack_setup();
if (nr_sought)
packet_flush(fd[1]);
die(_("no matching remote head"));
}
- prepare_shallow_info(&si, shallow);
- if (version == protocol_v2)
+ if (version == protocol_v2) {
+ if (shallow->nr)
+ BUG("Protocol V2 does not provide shallows at this point in the fetch");
+ memset(&si, 0, sizeof(si));
ref_cpy = do_fetch_pack_v2(args, fd, ref, sought, nr_sought,
+ &shallows_scratch, &si,
pack_lockfile);
- else
+ } else {
+ prepare_shallow_info(&si, shallow);
ref_cpy = do_fetch_pack(args, fd, ref, sought, nr_sought,
&si, pack_lockfile);
+ }
reprepare_packed_git(the_repository);
if (!args->cloning && args->deepen) {
update_shallow(args, sought, nr_sought, &si);
cleanup:
clear_shallow_info(&si);
+ oid_array_clear(&shallows_scratch);
return ref_cpy;
}
* marked as such.
*/
struct ref *fetch_pack(struct fetch_pack_args *args,
- int fd[], struct child_process *conn,
+ int fd[],
const struct ref *ref,
- const char *dest,
struct ref **sought,
int nr_sought,
struct oid_array *shallow,
o_name = NULL;
while (desc.size) {
- unsigned mode;
+ unsigned short mode;
const char *name;
const struct object_id *oid;
--- /dev/null
+#!/bin/sh
+# Copyright (c) 2007, Nanako Shiraishi
+
+dashless=$(basename "$0" | sed -e 's/-/ /')
+USAGE="list [<options>]
+ or: $dashless show [<stash>]
+ or: $dashless drop [-q|--quiet] [<stash>]
+ or: $dashless ( pop | apply ) [--index] [-q|--quiet] [<stash>]
+ or: $dashless branch <branchname> [<stash>]
+ or: $dashless save [--patch] [-k|--[no-]keep-index] [-q|--quiet]
+ [-u|--include-untracked] [-a|--all] [<message>]
+ or: $dashless [push [--patch] [-k|--[no-]keep-index] [-q|--quiet]
+ [-u|--include-untracked] [-a|--all] [-m <message>]
+ [-- <pathspec>...]]
+ or: $dashless clear"
+
+SUBDIRECTORY_OK=Yes
+OPTIONS_SPEC=
+START_DIR=$(pwd)
+. git-sh-setup
+require_work_tree
+prefix=$(git rev-parse --show-prefix) || exit 1
+cd_to_toplevel
+
+TMP="$GIT_DIR/.git-stash.$$"
+TMPindex=${GIT_INDEX_FILE-"$(git rev-parse --git-path index)"}.stash.$$
+trap 'rm -f "$TMP-"* "$TMPindex"' 0
+
+ref_stash=refs/stash
+
+if git config --get-colorbool color.interactive; then
+ help_color="$(git config --get-color color.interactive.help 'red bold')"
+ reset_color="$(git config --get-color '' reset)"
+else
+ help_color=
+ reset_color=
+fi
+
+no_changes () {
+ git diff-index --quiet --cached HEAD --ignore-submodules -- "$@" &&
+ git diff-files --quiet --ignore-submodules -- "$@" &&
+ (test -z "$untracked" || test -z "$(untracked_files "$@")")
+}
+
+untracked_files () {
+ if test "$1" = "-z"
+ then
+ shift
+ z=-z
+ else
+ z=
+ fi
+ excl_opt=--exclude-standard
+ test "$untracked" = "all" && excl_opt=
+ git ls-files -o $z $excl_opt -- "$@"
+}
+
+prepare_fallback_ident () {
+ if ! git -c user.useconfigonly=yes var GIT_COMMITTER_IDENT >/dev/null 2>&1
+ then
+ GIT_AUTHOR_NAME="git stash"
+ GIT_AUTHOR_EMAIL=git@stash
+ GIT_COMMITTER_NAME="git stash"
+ GIT_COMMITTER_EMAIL=git@stash
+ export GIT_AUTHOR_NAME
+ export GIT_AUTHOR_EMAIL
+ export GIT_COMMITTER_NAME
+ export GIT_COMMITTER_EMAIL
+ fi
+}
+
+clear_stash () {
+ if test $# != 0
+ then
+ die "$(gettext "git stash clear with parameters is unimplemented")"
+ fi
+ if current=$(git rev-parse --verify --quiet $ref_stash)
+ then
+ git update-ref -d $ref_stash $current
+ fi
+}
+
+maybe_quiet () {
+ case "$1" in
+ --keep-stdout)
+ shift
+ if test -n "$GIT_QUIET"
+ then
+ "$@" 2>/dev/null
+ else
+ "$@"
+ fi
+ ;;
+ *)
+ if test -n "$GIT_QUIET"
+ then
+ "$@" >/dev/null 2>&1
+ else
+ "$@"
+ fi
+ ;;
+ esac
+}
+
+create_stash () {
+
+ prepare_fallback_ident
+
+ stash_msg=
+ untracked=
+ while test $# != 0
+ do
+ case "$1" in
+ -m|--message)
+ shift
+ stash_msg=${1?"BUG: create_stash () -m requires an argument"}
+ ;;
+ -m*)
+ stash_msg=${1#-m}
+ ;;
+ --message=*)
+ stash_msg=${1#--message=}
+ ;;
+ -u|--include-untracked)
+ shift
+ untracked=${1?"BUG: create_stash () -u requires an argument"}
+ ;;
+ --)
+ shift
+ break
+ ;;
+ esac
+ shift
+ done
+
+ git update-index -q --refresh
+ if maybe_quiet no_changes "$@"
+ then
+ exit 0
+ fi
+
+ # state of the base commit
+ if b_commit=$(maybe_quiet --keep-stdout git rev-parse --verify HEAD)
+ then
+ head=$(git rev-list --oneline -n 1 HEAD --)
+ elif test -n "$GIT_QUIET"
+ then
+ exit 1
+ else
+ die "$(gettext "You do not have the initial commit yet")"
+ fi
+
+ if branch=$(git symbolic-ref -q HEAD)
+ then
+ branch=${branch#refs/heads/}
+ else
+ branch='(no branch)'
+ fi
+ msg=$(printf '%s: %s' "$branch" "$head")
+
+ # state of the index
+ i_tree=$(git write-tree) &&
+ i_commit=$(printf 'index on %s\n' "$msg" |
+ git commit-tree $i_tree -p $b_commit) ||
+ die "$(gettext "Cannot save the current index state")"
+
+ if test -n "$untracked"
+ then
+ # Untracked files are stored by themselves in a parentless commit, for
+ # ease of unpacking later.
+ u_commit=$(
+ untracked_files -z "$@" | (
+ GIT_INDEX_FILE="$TMPindex" &&
+ export GIT_INDEX_FILE &&
+ rm -f "$TMPindex" &&
+ git update-index -z --add --remove --stdin &&
+ u_tree=$(git write-tree) &&
+ printf 'untracked files on %s\n' "$msg" | git commit-tree $u_tree &&
+ rm -f "$TMPindex"
+ ) ) || die "$(gettext "Cannot save the untracked files")"
+
+ untracked_commit_option="-p $u_commit";
+ else
+ untracked_commit_option=
+ fi
+
+ if test -z "$patch_mode"
+ then
+
+ # state of the working tree
+ w_tree=$( (
+ git read-tree --index-output="$TMPindex" -m $i_tree &&
+ GIT_INDEX_FILE="$TMPindex" &&
+ export GIT_INDEX_FILE &&
+ git diff-index --name-only -z HEAD -- "$@" >"$TMP-stagenames" &&
+ git update-index -z --add --remove --stdin <"$TMP-stagenames" &&
+ git write-tree &&
+ rm -f "$TMPindex"
+ ) ) ||
+ die "$(gettext "Cannot save the current worktree state")"
+
+ else
+
+ rm -f "$TMP-index" &&
+ GIT_INDEX_FILE="$TMP-index" git read-tree HEAD &&
+
+ # find out what the user wants
+ GIT_INDEX_FILE="$TMP-index" \
+ git add--interactive --patch=stash -- "$@" &&
+
+ # state of the working tree
+ w_tree=$(GIT_INDEX_FILE="$TMP-index" git write-tree) ||
+ die "$(gettext "Cannot save the current worktree state")"
+
+ git diff-tree -p HEAD $w_tree -- >"$TMP-patch" &&
+ test -s "$TMP-patch" ||
+ die "$(gettext "No changes selected")"
+
+ rm -f "$TMP-index" ||
+ die "$(gettext "Cannot remove temporary index (can't happen)")"
+
+ fi
+
+ # create the stash
+ if test -z "$stash_msg"
+ then
+ stash_msg=$(printf 'WIP on %s' "$msg")
+ else
+ stash_msg=$(printf 'On %s: %s' "$branch" "$stash_msg")
+ fi
+ w_commit=$(printf '%s\n' "$stash_msg" |
+ git commit-tree $w_tree -p $b_commit -p $i_commit $untracked_commit_option) ||
+ die "$(gettext "Cannot record working tree state")"
+}
+
+store_stash () {
+ while test $# != 0
+ do
+ case "$1" in
+ -m|--message)
+ shift
+ stash_msg="$1"
+ ;;
+ -m*)
+ stash_msg=${1#-m}
+ ;;
+ --message=*)
+ stash_msg=${1#--message=}
+ ;;
+ -q|--quiet)
+ quiet=t
+ ;;
+ *)
+ break
+ ;;
+ esac
+ shift
+ done
+ test $# = 1 ||
+ die "$(eval_gettext "\"$dashless store\" requires one <commit> argument")"
+
+ w_commit="$1"
+ if test -z "$stash_msg"
+ then
+ stash_msg="Created via \"git stash store\"."
+ fi
+
+ git update-ref --create-reflog -m "$stash_msg" $ref_stash $w_commit
+ ret=$?
+ test $ret != 0 && test -z "$quiet" &&
+ die "$(eval_gettext "Cannot update \$ref_stash with \$w_commit")"
+ return $ret
+}
+
+push_stash () {
+ keep_index=
+ patch_mode=
+ untracked=
+ stash_msg=
+ while test $# != 0
+ do
+ case "$1" in
+ -k|--keep-index)
+ keep_index=t
+ ;;
+ --no-keep-index)
+ keep_index=n
+ ;;
+ -p|--patch)
+ patch_mode=t
+ # only default to keep if we don't already have an override
+ test -z "$keep_index" && keep_index=t
+ ;;
+ -q|--quiet)
+ GIT_QUIET=t
+ ;;
+ -u|--include-untracked)
+ untracked=untracked
+ ;;
+ -a|--all)
+ untracked=all
+ ;;
+ -m|--message)
+ shift
+ test -z ${1+x} && usage
+ stash_msg=$1
+ ;;
+ -m*)
+ stash_msg=${1#-m}
+ ;;
+ --message=*)
+ stash_msg=${1#--message=}
+ ;;
+ --help)
+ show_help
+ ;;
+ --)
+ shift
+ break
+ ;;
+ -*)
+ option="$1"
+ eval_gettextln "error: unknown option for 'stash push': \$option"
+ usage
+ ;;
+ *)
+ break
+ ;;
+ esac
+ shift
+ done
+
+ eval "set $(git rev-parse --sq --prefix "$prefix" -- "$@")"
+
+ if test -n "$patch_mode" && test -n "$untracked"
+ then
+ die "$(gettext "Can't use --patch and --include-untracked or --all at the same time")"
+ fi
+
+ test -n "$untracked" || git ls-files --error-unmatch -- "$@" >/dev/null || exit 1
+
+ git update-index -q --refresh
+ if maybe_quiet no_changes "$@"
+ then
+ say "$(gettext "No local changes to save")"
+ exit 0
+ fi
+
+ git reflog exists $ref_stash ||
+ clear_stash || die "$(gettext "Cannot initialize stash")"
+
+ create_stash -m "$stash_msg" -u "$untracked" -- "$@"
+ store_stash -m "$stash_msg" -q $w_commit ||
+ die "$(gettext "Cannot save the current status")"
+ say "$(eval_gettext "Saved working directory and index state \$stash_msg")"
+
+ if test -z "$patch_mode"
+ then
+ test "$untracked" = "all" && CLEAN_X_OPTION=-x || CLEAN_X_OPTION=
+ if test -n "$untracked" && test $# = 0
+ then
+ git clean --force --quiet -d $CLEAN_X_OPTION
+ fi
+
+ if test $# != 0
+ then
+ test -z "$untracked" && UPDATE_OPTION="-u" || UPDATE_OPTION=
+ test "$untracked" = "all" && FORCE_OPTION="--force" || FORCE_OPTION=
+ git add $UPDATE_OPTION $FORCE_OPTION -- "$@"
+ git diff-index -p --cached --binary HEAD -- "$@" |
+ git apply --index -R
+ else
+ git reset --hard -q
+ fi
+
+ if test "$keep_index" = "t" && test -n "$i_tree"
+ then
+ git read-tree --reset $i_tree
+ git ls-files -z --modified -- "$@" |
+ git checkout-index -z --force --stdin
+ fi
+ else
+ git apply -R < "$TMP-patch" ||
+ die "$(gettext "Cannot remove worktree changes")"
+
+ if test "$keep_index" != "t"
+ then
+ git reset -q -- "$@"
+ fi
+ fi
+}
+
+save_stash () {
+ push_options=
+ while test $# != 0
+ do
+ case "$1" in
+ -q|--quiet)
+ GIT_QUIET=t
+ ;;
+ --)
+ shift
+ break
+ ;;
+ -*)
+ # pass all options through to push_stash
+ push_options="$push_options $1"
+ ;;
+ *)
+ break
+ ;;
+ esac
+ shift
+ done
+
+ stash_msg="$*"
+
+ if test -z "$stash_msg"
+ then
+ push_stash $push_options
+ else
+ push_stash $push_options -m "$stash_msg"
+ fi
+}
+
+have_stash () {
+ git rev-parse --verify --quiet $ref_stash >/dev/null
+}
+
+list_stash () {
+ have_stash || return 0
+ git log --format="%gd: %gs" -g --first-parent -m "$@" $ref_stash --
+}
+
+show_stash () {
+ ALLOW_UNKNOWN_FLAGS=t
+ assert_stash_like "$@"
+
+ if test -z "$FLAGS"
+ then
+ if test "$(git config --bool stash.showStat || echo true)" = "true"
+ then
+ FLAGS=--stat
+ fi
+
+ if test "$(git config --bool stash.showPatch || echo false)" = "true"
+ then
+ FLAGS=${FLAGS}${FLAGS:+ }-p
+ fi
+
+ if test -z "$FLAGS"
+ then
+ return 0
+ fi
+ fi
+
+ git diff ${FLAGS} $b_commit $w_commit
+}
+
+show_help () {
+ exec git help stash
+ exit 1
+}
+
+#
+# Parses the remaining options looking for flags and
+# at most one revision defaulting to ${ref_stash}@{0}
+# if none found.
+#
+# Derives related tree and commit objects from the
+# revision, if one is found.
+#
+# stash records the work tree, and is a merge between the
+# base commit (first parent) and the index tree (second parent).
+#
+# REV is set to the symbolic version of the specified stash-like commit
+# IS_STASH_LIKE is non-blank if ${REV} looks like a stash
+# IS_STASH_REF is non-blank if the ${REV} looks like a stash ref
+# s is set to the SHA1 of the stash commit
+# w_commit is set to the commit containing the working tree
+# b_commit is set to the base commit
+# i_commit is set to the commit containing the index tree
+# u_commit is set to the commit containing the untracked files tree
+# w_tree is set to the working tree
+# b_tree is set to the base tree
+# i_tree is set to the index tree
+# u_tree is set to the untracked files tree
+#
+# GIT_QUIET is set to t if -q is specified
+# INDEX_OPTION is set to --index if --index is specified.
+# FLAGS is set to the remaining flags (if allowed)
+#
+# dies if:
+# * too many revisions specified
+# * no revision is specified and there is no stash stack
+# * a revision is specified which cannot be resolved to a SHA1
+# * a non-existent stash reference is specified
+# * unknown flags were set and ALLOW_UNKNOWN_FLAGS is not "t"
+#
+
+parse_flags_and_rev()
+{
+ test "$PARSE_CACHE" = "$*" && return 0 # optimisation
+ PARSE_CACHE="$*"
+
+ IS_STASH_LIKE=
+ IS_STASH_REF=
+ INDEX_OPTION=
+ s=
+ w_commit=
+ b_commit=
+ i_commit=
+ u_commit=
+ w_tree=
+ b_tree=
+ i_tree=
+ u_tree=
+
+ FLAGS=
+ REV=
+ for opt
+ do
+ case "$opt" in
+ -q|--quiet)
+ GIT_QUIET=-t
+ ;;
+ --index)
+ INDEX_OPTION=--index
+ ;;
+ --help)
+ show_help
+ ;;
+ -*)
+ test "$ALLOW_UNKNOWN_FLAGS" = t ||
+ die "$(eval_gettext "unknown option: \$opt")"
+ FLAGS="${FLAGS}${FLAGS:+ }$opt"
+ ;;
+ *)
+ REV="${REV}${REV:+ }'$opt'"
+ ;;
+ esac
+ done
+
+ eval set -- $REV
+
+ case $# in
+ 0)
+ have_stash || die "$(gettext "No stash entries found.")"
+ set -- ${ref_stash}@{0}
+ ;;
+ 1)
+ :
+ ;;
+ *)
+ die "$(eval_gettext "Too many revisions specified: \$REV")"
+ ;;
+ esac
+
+ case "$1" in
+ *[!0-9]*)
+ :
+ ;;
+ *)
+ set -- "${ref_stash}@{$1}"
+ ;;
+ esac
+
+ REV=$(git rev-parse --symbolic --verify --quiet "$1") || {
+ reference="$1"
+ die "$(eval_gettext "\$reference is not a valid reference")"
+ }
+
+ i_commit=$(git rev-parse --verify --quiet "$REV^2") &&
+ set -- $(git rev-parse "$REV" "$REV^1" "$REV:" "$REV^1:" "$REV^2:" 2>/dev/null) &&
+ s=$1 &&
+ w_commit=$1 &&
+ b_commit=$2 &&
+ w_tree=$3 &&
+ b_tree=$4 &&
+ i_tree=$5 &&
+ IS_STASH_LIKE=t &&
+ test "$ref_stash" = "$(git rev-parse --symbolic-full-name "${REV%@*}")" &&
+ IS_STASH_REF=t
+
+ u_commit=$(git rev-parse --verify --quiet "$REV^3") &&
+ u_tree=$(git rev-parse "$REV^3:" 2>/dev/null)
+}
+
+is_stash_like()
+{
+ parse_flags_and_rev "$@"
+ test -n "$IS_STASH_LIKE"
+}
+
+assert_stash_like() {
+ is_stash_like "$@" || {
+ args="$*"
+ die "$(eval_gettext "'\$args' is not a stash-like commit")"
+ }
+}
+
+is_stash_ref() {
+ is_stash_like "$@" && test -n "$IS_STASH_REF"
+}
+
+assert_stash_ref() {
+ is_stash_ref "$@" || {
+ args="$*"
+ die "$(eval_gettext "'\$args' is not a stash reference")"
+ }
+}
+
+apply_stash () {
+
+ assert_stash_like "$@"
+
+ git update-index -q --refresh || die "$(gettext "unable to refresh index")"
+
+ # current index state
+ c_tree=$(git write-tree) ||
+ die "$(gettext "Cannot apply a stash in the middle of a merge")"
+
+ unstashed_index_tree=
+ if test -n "$INDEX_OPTION" && test "$b_tree" != "$i_tree" &&
+ test "$c_tree" != "$i_tree"
+ then
+ git diff-tree --binary $s^2^..$s^2 | git apply --cached
+ test $? -ne 0 &&
+ die "$(gettext "Conflicts in index. Try without --index.")"
+ unstashed_index_tree=$(git write-tree) ||
+ die "$(gettext "Could not save index tree")"
+ git reset
+ fi
+
+ if test -n "$u_tree"
+ then
+ GIT_INDEX_FILE="$TMPindex" git read-tree "$u_tree" &&
+ GIT_INDEX_FILE="$TMPindex" git checkout-index --all &&
+ rm -f "$TMPindex" ||
+ die "$(gettext "Could not restore untracked files from stash entry")"
+ fi
+
+ eval "
+ GITHEAD_$w_tree='Stashed changes' &&
+ GITHEAD_$c_tree='Updated upstream' &&
+ GITHEAD_$b_tree='Version stash was based on' &&
+ export GITHEAD_$w_tree GITHEAD_$c_tree GITHEAD_$b_tree
+ "
+
+ if test -n "$GIT_QUIET"
+ then
+ GIT_MERGE_VERBOSITY=0 && export GIT_MERGE_VERBOSITY
+ fi
+ if git merge-recursive $b_tree -- $c_tree $w_tree
+ then
+ # No conflict
+ if test -n "$unstashed_index_tree"
+ then
+ git read-tree "$unstashed_index_tree"
+ else
+ a="$TMP-added" &&
+ git diff-index --cached --name-only --diff-filter=A $c_tree >"$a" &&
+ git read-tree --reset $c_tree &&
+ git update-index --add --stdin <"$a" ||
+ die "$(gettext "Cannot unstage modified files")"
+ rm -f "$a"
+ fi
+ squelch=
+ if test -n "$GIT_QUIET"
+ then
+ squelch='>/dev/null 2>&1'
+ fi
+ (cd "$START_DIR" && eval "git status $squelch") || :
+ else
+ # Merge conflict; keep the exit status from merge-recursive
+ status=$?
+ git rerere
+ if test -n "$INDEX_OPTION"
+ then
+ gettextln "Index was not unstashed." >&2
+ fi
+ exit $status
+ fi
+}
+
+pop_stash() {
+ assert_stash_ref "$@"
+
+ if apply_stash "$@"
+ then
+ drop_stash "$@"
+ else
+ status=$?
+ say "$(gettext "The stash entry is kept in case you need it again.")"
+ exit $status
+ fi
+}
+
+drop_stash () {
+ assert_stash_ref "$@"
+
+ git reflog delete --updateref --rewrite "${REV}" &&
+ say "$(eval_gettext "Dropped \${REV} (\$s)")" ||
+ die "$(eval_gettext "\${REV}: Could not drop stash entry")"
+
+ # clear_stash if we just dropped the last stash entry
+ git rev-parse --verify --quiet "$ref_stash@{0}" >/dev/null ||
+ clear_stash
+}
+
+apply_to_branch () {
+ test -n "$1" || die "$(gettext "No branch name specified")"
+ branch=$1
+ shift 1
+
+ set -- --index "$@"
+ assert_stash_like "$@"
+
+ git checkout -b $branch $REV^ &&
+ apply_stash "$@" && {
+ test -z "$IS_STASH_REF" || drop_stash "$@"
+ }
+}
+
+test "$1" = "-p" && set "push" "$@"
+
+PARSE_CACHE='--not-parsed'
+# The default command is "push" if nothing but options are given
+seen_non_option=
+for opt
+do
+ case "$opt" in
+ --) break ;;
+ -*) ;;
+ *) seen_non_option=t; break ;;
+ esac
+done
+
+test -n "$seen_non_option" || set "push" "$@"
+
+# Main command set
+case "$1" in
+list)
+ shift
+ list_stash "$@"
+ ;;
+show)
+ shift
+ show_stash "$@"
+ ;;
+save)
+ shift
+ save_stash "$@"
+ ;;
+push)
+ shift
+ push_stash "$@"
+ ;;
+apply)
+ shift
+ apply_stash "$@"
+ ;;
+clear)
+ shift
+ clear_stash "$@"
+ ;;
+create)
+ shift
+ create_stash -m "$*" && echo "$w_commit"
+ ;;
+store)
+ shift
+ store_stash "$@"
+ ;;
+drop)
+ shift
+ drop_stash "$@"
+ ;;
+pop)
+ shift
+ pop_stash "$@"
+ ;;
+branch)
+ shift
+ apply_to_branch "$@"
+ ;;
+*)
+ case $# in
+ 0)
+ push_stash &&
+ say "$(gettext "(To restore them type \"git stash apply\")")"
+ ;;
+ *)
+ usage
+ esac
+ ;;
+esac
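
The parse_flags_and_rev comment block earlier in this script describes the shape of a stash entry: w_commit (stash@{0}) is a merge whose first parent is the base commit and whose second parent is a commit holding the index, with an optional third parent carrying untracked files. A toy model of that graph, not git code, just to make the parent numbering concrete:

    /*
     * Not git code: a toy model of the commit graph a stash entry
     * forms, matching the parse_flags_and_rev description. The labels
     * are descriptive strings only.
     */
    #include <stdio.h>

    struct toy_commit {
            const char *label;
            const struct toy_commit *parents[3];
            int nr_parents;
    };

    int main(void)
    {
            struct toy_commit b = { "b_commit: HEAD when the stash was made", { 0 }, 0 };
            struct toy_commit i = { "i_commit: snapshot of the index", { &b }, 1 };
            struct toy_commit u = { "u_commit: untracked files (only with -u/-a)", { 0 }, 0 };
            struct toy_commit w = { "w_commit: stash@{0}, the working tree", { &b, &i, &u }, 3 };

            for (int p = 0; p < w.nr_parents; p++)
                    printf("stash@{0}^%d -> %s\n", p + 1, w.parents[p]->label);
            return 0;
    }
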
fi
tools="$tools gvimdiff diffuse diffmerge ecmerge"
tools="$tools p4merge araxis bc codecompare"
+ tools="$tools smerge"
fi
case "${VISUAL:-$EDITOR}" in
*vim*)
# Now parse the message body
while(<$fh>) {
$message .= $_;
- if (/^([a-z-]*-by|Cc): (.*)/i) {
+ if (/^([a-z][a-z-]*-by|Cc): (.*)/i) {
chomp;
my ($what, $c) = ($1, $2);
# strip garbage for the address we'll use:
$message = MIME::Base64::decode($message)
if ($from eq 'base64');
- $to = ($message =~ /.{999,}/) ? 'quoted-printable' : '8bit'
+ $to = ($message =~ /(?:.{999,}|\r)/) ? 'quoted-printable' : '8bit'
if $to eq 'auto';
die __("cannot send message as 7bit")
case "$1" in
-h)
echo "$LONG_USAGE"
+ case "$0" in *git-legacy-stash) exit 129;; esac
exit
esac
fi
+++ /dev/null
-#!/bin/sh
-# Copyright (c) 2007, Nanako Shiraishi
-
-dashless=$(basename "$0" | sed -e 's/-/ /')
-USAGE="list [<options>]
- or: $dashless show [<stash>]
- or: $dashless drop [-q|--quiet] [<stash>]
- or: $dashless ( pop | apply ) [--index] [-q|--quiet] [<stash>]
- or: $dashless branch <branchname> [<stash>]
- or: $dashless save [--patch] [-k|--[no-]keep-index] [-q|--quiet]
- [-u|--include-untracked] [-a|--all] [<message>]
- or: $dashless [push [--patch] [-k|--[no-]keep-index] [-q|--quiet]
- [-u|--include-untracked] [-a|--all] [-m <message>]
- [-- <pathspec>...]]
- or: $dashless clear"
-
-SUBDIRECTORY_OK=Yes
-OPTIONS_SPEC=
-START_DIR=$(pwd)
-. git-sh-setup
-require_work_tree
-prefix=$(git rev-parse --show-prefix) || exit 1
-cd_to_toplevel
-
-TMP="$GIT_DIR/.git-stash.$$"
-TMPindex=${GIT_INDEX_FILE-"$(git rev-parse --git-path index)"}.stash.$$
-trap 'rm -f "$TMP-"* "$TMPindex"' 0
-
-ref_stash=refs/stash
-
-if git config --get-colorbool color.interactive; then
- help_color="$(git config --get-color color.interactive.help 'red bold')"
- reset_color="$(git config --get-color '' reset)"
-else
- help_color=
- reset_color=
-fi
-
-no_changes () {
- git diff-index --quiet --cached HEAD --ignore-submodules -- "$@" &&
- git diff-files --quiet --ignore-submodules -- "$@" &&
- (test -z "$untracked" || test -z "$(untracked_files "$@")")
-}
-
-untracked_files () {
- if test "$1" = "-z"
- then
- shift
- z=-z
- else
- z=
- fi
- excl_opt=--exclude-standard
- test "$untracked" = "all" && excl_opt=
- git ls-files -o $z $excl_opt -- "$@"
-}
-
-prepare_fallback_ident () {
- if ! git -c user.useconfigonly=yes var GIT_COMMITTER_IDENT >/dev/null 2>&1
- then
- GIT_AUTHOR_NAME="git stash"
- GIT_AUTHOR_EMAIL=git@stash
- GIT_COMMITTER_NAME="git stash"
- GIT_COMMITTER_EMAIL=git@stash
- export GIT_AUTHOR_NAME
- export GIT_AUTHOR_EMAIL
- export GIT_COMMITTER_NAME
- export GIT_COMMITTER_EMAIL
- fi
-}
-
-clear_stash () {
- if test $# != 0
- then
- die "$(gettext "git stash clear with parameters is unimplemented")"
- fi
- if current=$(git rev-parse --verify --quiet $ref_stash)
- then
- git update-ref -d $ref_stash $current
- fi
-}
-
-create_stash () {
-
- prepare_fallback_ident
-
- stash_msg=
- untracked=
- while test $# != 0
- do
- case "$1" in
- -m|--message)
- shift
- stash_msg=${1?"BUG: create_stash () -m requires an argument"}
- ;;
- -m*)
- stash_msg=${1#-m}
- ;;
- --message=*)
- stash_msg=${1#--message=}
- ;;
- -u|--include-untracked)
- shift
- untracked=${1?"BUG: create_stash () -u requires an argument"}
- ;;
- --)
- shift
- break
- ;;
- esac
- shift
- done
-
- git update-index -q --refresh
- if no_changes "$@"
- then
- exit 0
- fi
-
- # state of the base commit
- if b_commit=$(git rev-parse --verify HEAD)
- then
- head=$(git rev-list --oneline -n 1 HEAD --)
- else
- die "$(gettext "You do not have the initial commit yet")"
- fi
-
- if branch=$(git symbolic-ref -q HEAD)
- then
- branch=${branch#refs/heads/}
- else
- branch='(no branch)'
- fi
- msg=$(printf '%s: %s' "$branch" "$head")
-
- # state of the index
- i_tree=$(git write-tree) &&
- i_commit=$(printf 'index on %s\n' "$msg" |
- git commit-tree $i_tree -p $b_commit) ||
- die "$(gettext "Cannot save the current index state")"
-
- if test -n "$untracked"
- then
- # Untracked files are stored by themselves in a parentless commit, for
- # ease of unpacking later.
- u_commit=$(
- untracked_files -z "$@" | (
- GIT_INDEX_FILE="$TMPindex" &&
- export GIT_INDEX_FILE &&
- rm -f "$TMPindex" &&
- git update-index -z --add --remove --stdin &&
- u_tree=$(git write-tree) &&
- printf 'untracked files on %s\n' "$msg" | git commit-tree $u_tree &&
- rm -f "$TMPindex"
- ) ) || die "$(gettext "Cannot save the untracked files")"
-
- untracked_commit_option="-p $u_commit";
- else
- untracked_commit_option=
- fi
-
- if test -z "$patch_mode"
- then
-
- # state of the working tree
- w_tree=$( (
- git read-tree --index-output="$TMPindex" -m $i_tree &&
- GIT_INDEX_FILE="$TMPindex" &&
- export GIT_INDEX_FILE &&
- git diff-index --name-only -z HEAD -- "$@" >"$TMP-stagenames" &&
- git update-index -z --add --remove --stdin <"$TMP-stagenames" &&
- git write-tree &&
- rm -f "$TMPindex"
- ) ) ||
- die "$(gettext "Cannot save the current worktree state")"
-
- else
-
- rm -f "$TMP-index" &&
- GIT_INDEX_FILE="$TMP-index" git read-tree HEAD &&
-
- # find out what the user wants
- GIT_INDEX_FILE="$TMP-index" \
- git add--interactive --patch=stash -- "$@" &&
-
- # state of the working tree
- w_tree=$(GIT_INDEX_FILE="$TMP-index" git write-tree) ||
- die "$(gettext "Cannot save the current worktree state")"
-
- git diff-tree -p HEAD $w_tree -- >"$TMP-patch" &&
- test -s "$TMP-patch" ||
- die "$(gettext "No changes selected")"
-
- rm -f "$TMP-index" ||
- die "$(gettext "Cannot remove temporary index (can't happen)")"
-
- fi
-
- # create the stash
- if test -z "$stash_msg"
- then
- stash_msg=$(printf 'WIP on %s' "$msg")
- else
- stash_msg=$(printf 'On %s: %s' "$branch" "$stash_msg")
- fi
- w_commit=$(printf '%s\n' "$stash_msg" |
- git commit-tree $w_tree -p $b_commit -p $i_commit $untracked_commit_option) ||
- die "$(gettext "Cannot record working tree state")"
-}
-
-store_stash () {
- while test $# != 0
- do
- case "$1" in
- -m|--message)
- shift
- stash_msg="$1"
- ;;
- -m*)
- stash_msg=${1#-m}
- ;;
- --message=*)
- stash_msg=${1#--message=}
- ;;
- -q|--quiet)
- quiet=t
- ;;
- *)
- break
- ;;
- esac
- shift
- done
- test $# = 1 ||
- die "$(eval_gettext "\"$dashless store\" requires one <commit> argument")"
-
- w_commit="$1"
- if test -z "$stash_msg"
- then
- stash_msg="Created via \"git stash store\"."
- fi
-
- git update-ref --create-reflog -m "$stash_msg" $ref_stash $w_commit
- ret=$?
- test $ret != 0 && test -z "$quiet" &&
- die "$(eval_gettext "Cannot update \$ref_stash with \$w_commit")"
- return $ret
-}
-
-push_stash () {
- keep_index=
- patch_mode=
- untracked=
- stash_msg=
- while test $# != 0
- do
- case "$1" in
- -k|--keep-index)
- keep_index=t
- ;;
- --no-keep-index)
- keep_index=n
- ;;
- -p|--patch)
- patch_mode=t
- # only default to keep if we don't already have an override
- test -z "$keep_index" && keep_index=t
- ;;
- -q|--quiet)
- GIT_QUIET=t
- ;;
- -u|--include-untracked)
- untracked=untracked
- ;;
- -a|--all)
- untracked=all
- ;;
- -m|--message)
- shift
- test -z ${1+x} && usage
- stash_msg=$1
- ;;
- -m*)
- stash_msg=${1#-m}
- ;;
- --message=*)
- stash_msg=${1#--message=}
- ;;
- --help)
- show_help
- ;;
- --)
- shift
- break
- ;;
- -*)
- option="$1"
- eval_gettextln "error: unknown option for 'stash push': \$option"
- usage
- ;;
- *)
- break
- ;;
- esac
- shift
- done
-
- eval "set $(git rev-parse --sq --prefix "$prefix" -- "$@")"
-
- if test -n "$patch_mode" && test -n "$untracked"
- then
- die "$(gettext "Can't use --patch and --include-untracked or --all at the same time")"
- fi
-
- test -n "$untracked" || git ls-files --error-unmatch -- "$@" >/dev/null || exit 1
-
- git update-index -q --refresh
- if no_changes "$@"
- then
- say "$(gettext "No local changes to save")"
- exit 0
- fi
-
- git reflog exists $ref_stash ||
- clear_stash || die "$(gettext "Cannot initialize stash")"
-
- create_stash -m "$stash_msg" -u "$untracked" -- "$@"
- store_stash -m "$stash_msg" -q $w_commit ||
- die "$(gettext "Cannot save the current status")"
- say "$(eval_gettext "Saved working directory and index state \$stash_msg")"
-
- if test -z "$patch_mode"
- then
- test "$untracked" = "all" && CLEAN_X_OPTION=-x || CLEAN_X_OPTION=
- if test -n "$untracked" && test $# = 0
- then
- git clean --force --quiet -d $CLEAN_X_OPTION
- fi
-
- if test $# != 0
- then
- test -z "$untracked" && UPDATE_OPTION="-u" || UPDATE_OPTION=
- test "$untracked" = "all" && FORCE_OPTION="--force" || FORCE_OPTION=
- git add $UPDATE_OPTION $FORCE_OPTION -- "$@"
- git diff-index -p --cached --binary HEAD -- "$@" |
- git apply --index -R
- else
- git reset --hard -q
- fi
-
- if test "$keep_index" = "t" && test -n "$i_tree"
- then
- git read-tree --reset $i_tree
- git ls-files -z --modified -- "$@" |
- git checkout-index -z --force --stdin
- fi
- else
- git apply -R < "$TMP-patch" ||
- die "$(gettext "Cannot remove worktree changes")"
-
- if test "$keep_index" != "t"
- then
- git reset -q -- "$@"
- fi
- fi
-}
-
-save_stash () {
- push_options=
- while test $# != 0
- do
- case "$1" in
- --)
- shift
- break
- ;;
- -*)
- # pass all options through to push_stash
- push_options="$push_options $1"
- ;;
- *)
- break
- ;;
- esac
- shift
- done
-
- stash_msg="$*"
-
- if test -z "$stash_msg"
- then
- push_stash $push_options
- else
- push_stash $push_options -m "$stash_msg"
- fi
-}
-
-have_stash () {
- git rev-parse --verify --quiet $ref_stash >/dev/null
-}
-
-list_stash () {
- have_stash || return 0
- git log --format="%gd: %gs" -g --first-parent -m "$@" $ref_stash --
-}
-
-show_stash () {
- ALLOW_UNKNOWN_FLAGS=t
- assert_stash_like "$@"
-
- if test -z "$FLAGS"
- then
- if test "$(git config --bool stash.showStat || echo true)" = "true"
- then
- FLAGS=--stat
- fi
-
- if test "$(git config --bool stash.showPatch || echo false)" = "true"
- then
- FLAGS=${FLAGS}${FLAGS:+ }-p
- fi
-
- if test -z "$FLAGS"
- then
- return 0
- fi
- fi
-
- git diff ${FLAGS} $b_commit $w_commit
-}
-
-show_help () {
- exec git help stash
- exit 1
-}
-
-#
-# Parses the remaining options looking for flags and
-# at most one revision defaulting to ${ref_stash}@{0}
-# if none found.
-#
-# Derives related tree and commit objects from the
-# revision, if one is found.
-#
-# stash records the work tree, and is a merge between the
-# base commit (first parent) and the index tree (second parent).
-#
-# REV is set to the symbolic version of the specified stash-like commit
-# IS_STASH_LIKE is non-blank if ${REV} looks like a stash
-# IS_STASH_REF is non-blank if the ${REV} looks like a stash ref
-# s is set to the SHA1 of the stash commit
-# w_commit is set to the commit containing the working tree
-# b_commit is set to the base commit
-# i_commit is set to the commit containing the index tree
-# u_commit is set to the commit containing the untracked files tree
-# w_tree is set to the working tree
-# b_tree is set to the base tree
-# i_tree is set to the index tree
-# u_tree is set to the untracked files tree
-#
-# GIT_QUIET is set to t if -q is specified
-# INDEX_OPTION is set to --index if --index is specified.
-# FLAGS is set to the remaining flags (if allowed)
-#
-# dies if:
-# * too many revisions specified
-# * no revision is specified and there is no stash stack
-# * a revision is specified which cannot be resolve to a SHA1
-# * a non-existent stash reference is specified
-# * unknown flags were set and ALLOW_UNKNOWN_FLAGS is not "t"
-#
-
-parse_flags_and_rev()
-{
- test "$PARSE_CACHE" = "$*" && return 0 # optimisation
- PARSE_CACHE="$*"
-
- IS_STASH_LIKE=
- IS_STASH_REF=
- INDEX_OPTION=
- s=
- w_commit=
- b_commit=
- i_commit=
- u_commit=
- w_tree=
- b_tree=
- i_tree=
- u_tree=
-
- FLAGS=
- REV=
- for opt
- do
- case "$opt" in
- -q|--quiet)
- GIT_QUIET=-t
- ;;
- --index)
- INDEX_OPTION=--index
- ;;
- --help)
- show_help
- ;;
- -*)
- test "$ALLOW_UNKNOWN_FLAGS" = t ||
- die "$(eval_gettext "unknown option: \$opt")"
- FLAGS="${FLAGS}${FLAGS:+ }$opt"
- ;;
- *)
- REV="${REV}${REV:+ }'$opt'"
- ;;
- esac
- done
-
- eval set -- $REV
-
- case $# in
- 0)
- have_stash || die "$(gettext "No stash entries found.")"
- set -- ${ref_stash}@{0}
- ;;
- 1)
- :
- ;;
- *)
- die "$(eval_gettext "Too many revisions specified: \$REV")"
- ;;
- esac
-
- case "$1" in
- *[!0-9]*)
- :
- ;;
- *)
- set -- "${ref_stash}@{$1}"
- ;;
- esac
-
- REV=$(git rev-parse --symbolic --verify --quiet "$1") || {
- reference="$1"
- die "$(eval_gettext "\$reference is not a valid reference")"
- }
-
- i_commit=$(git rev-parse --verify --quiet "$REV^2") &&
- set -- $(git rev-parse "$REV" "$REV^1" "$REV:" "$REV^1:" "$REV^2:" 2>/dev/null) &&
- s=$1 &&
- w_commit=$1 &&
- b_commit=$2 &&
- w_tree=$3 &&
- b_tree=$4 &&
- i_tree=$5 &&
- IS_STASH_LIKE=t &&
- test "$ref_stash" = "$(git rev-parse --symbolic-full-name "${REV%@*}")" &&
- IS_STASH_REF=t
-
- u_commit=$(git rev-parse --verify --quiet "$REV^3") &&
- u_tree=$(git rev-parse "$REV^3:" 2>/dev/null)
-}
-
-is_stash_like()
-{
- parse_flags_and_rev "$@"
- test -n "$IS_STASH_LIKE"
-}
-
-assert_stash_like() {
- is_stash_like "$@" || {
- args="$*"
- die "$(eval_gettext "'\$args' is not a stash-like commit")"
- }
-}
-
-is_stash_ref() {
- is_stash_like "$@" && test -n "$IS_STASH_REF"
-}
-
-assert_stash_ref() {
- is_stash_ref "$@" || {
- args="$*"
- die "$(eval_gettext "'\$args' is not a stash reference")"
- }
-}
-
-apply_stash () {
-
- assert_stash_like "$@"
-
- git update-index -q --refresh || die "$(gettext "unable to refresh index")"
-
- # current index state
- c_tree=$(git write-tree) ||
- die "$(gettext "Cannot apply a stash in the middle of a merge")"
-
- unstashed_index_tree=
- if test -n "$INDEX_OPTION" && test "$b_tree" != "$i_tree" &&
- test "$c_tree" != "$i_tree"
- then
- git diff-tree --binary $s^2^..$s^2 | git apply --cached
- test $? -ne 0 &&
- die "$(gettext "Conflicts in index. Try without --index.")"
- unstashed_index_tree=$(git write-tree) ||
- die "$(gettext "Could not save index tree")"
- git reset
- fi
-
- if test -n "$u_tree"
- then
- GIT_INDEX_FILE="$TMPindex" git read-tree "$u_tree" &&
- GIT_INDEX_FILE="$TMPindex" git checkout-index --all &&
- rm -f "$TMPindex" ||
- die "$(gettext "Could not restore untracked files from stash entry")"
- fi
-
- eval "
- GITHEAD_$w_tree='Stashed changes' &&
- GITHEAD_$c_tree='Updated upstream' &&
- GITHEAD_$b_tree='Version stash was based on' &&
- export GITHEAD_$w_tree GITHEAD_$c_tree GITHEAD_$b_tree
- "
-
- if test -n "$GIT_QUIET"
- then
- GIT_MERGE_VERBOSITY=0 && export GIT_MERGE_VERBOSITY
- fi
- if git merge-recursive $b_tree -- $c_tree $w_tree
- then
- # No conflict
- if test -n "$unstashed_index_tree"
- then
- git read-tree "$unstashed_index_tree"
- else
- a="$TMP-added" &&
- git diff-index --cached --name-only --diff-filter=A $c_tree >"$a" &&
- git read-tree --reset $c_tree &&
- git update-index --add --stdin <"$a" ||
- die "$(gettext "Cannot unstage modified files")"
- rm -f "$a"
- fi
- squelch=
- if test -n "$GIT_QUIET"
- then
- squelch='>/dev/null 2>&1'
- fi
- (cd "$START_DIR" && eval "git status $squelch") || :
- else
- # Merge conflict; keep the exit status from merge-recursive
- status=$?
- git rerere
- if test -n "$INDEX_OPTION"
- then
- gettextln "Index was not unstashed." >&2
- fi
- exit $status
- fi
-}
-
-pop_stash() {
- assert_stash_ref "$@"
-
- if apply_stash "$@"
- then
- drop_stash "$@"
- else
- status=$?
- say "$(gettext "The stash entry is kept in case you need it again.")"
- exit $status
- fi
-}
-
-drop_stash () {
- assert_stash_ref "$@"
-
- git reflog delete --updateref --rewrite "${REV}" &&
- say "$(eval_gettext "Dropped \${REV} (\$s)")" ||
- die "$(eval_gettext "\${REV}: Could not drop stash entry")"
-
- # clear_stash if we just dropped the last stash entry
- git rev-parse --verify --quiet "$ref_stash@{0}" >/dev/null ||
- clear_stash
-}
-
-apply_to_branch () {
- test -n "$1" || die "$(gettext "No branch name specified")"
- branch=$1
- shift 1
-
- set -- --index "$@"
- assert_stash_like "$@"
-
- git checkout -b $branch $REV^ &&
- apply_stash "$@" && {
- test -z "$IS_STASH_REF" || drop_stash "$@"
- }
-}
-
-test "$1" = "-p" && set "push" "$@"
-
-PARSE_CACHE='--not-parsed'
-# The default command is "push" if nothing but options are given
-seen_non_option=
-for opt
-do
- case "$opt" in
- --) break ;;
- -*) ;;
- *) seen_non_option=t; break ;;
- esac
-done
-
-test -n "$seen_non_option" || set "push" "$@"
-
-# Main command set
-case "$1" in
-list)
- shift
- list_stash "$@"
- ;;
-show)
- shift
- show_stash "$@"
- ;;
-save)
- shift
- save_stash "$@"
- ;;
-push)
- shift
- push_stash "$@"
- ;;
-apply)
- shift
- apply_stash "$@"
- ;;
-clear)
- shift
- clear_stash "$@"
- ;;
-create)
- shift
- create_stash -m "$*" && echo "$w_commit"
- ;;
-store)
- shift
- store_stash "$@"
- ;;
-drop)
- shift
- drop_stash "$@"
- ;;
-pop)
- shift
- pop_stash "$@"
- ;;
-branch)
- shift
- apply_to_branch "$@"
- ;;
-*)
- case $# in
- 0)
- push_stash &&
- say "$(gettext "(To restore them type \"git stash apply\")")"
- ;;
- *)
- usage
- esac
- ;;
-esac
or: $dashless [--quiet] init [--] [<path>...]
or: $dashless [--quiet] deinit [-f|--force] (--all| [--] <path>...)
or: $dashless [--quiet] update [--init] [--remote] [-N|--no-fetch] [-f|--force] [--checkout|--merge|--rebase] [--[no-]recommend-shallow] [--reference <repository>] [--recursive] [--] [<path>...]
+ or: $dashless [--quiet] set-branch (--default|--branch <branch>) [--] <path>
or: $dashless [--quiet] summary [--cached|--files] [--summary-limit <n>] [commit] [--] [<path>...]
or: $dashless [--quiet] foreach [--recursive] <command>
or: $dashless [--quiet] sync [--recursive] [--] [<path>...]
die "$(eval_gettext "'\$sm_path' already exists in the index and is not a submodule")"
fi
+ if test -d "$sm_path" &&
+ test -z $(git -C "$sm_path" rev-parse --show-cdup 2>/dev/null)
+ then
+ git -C "$sm_path" rev-parse --verify -q HEAD >/dev/null ||
+ die "$(eval_gettext "'\$sm_path' does not have a commit checked out")"
+ fi
+
if test -z "$force" &&
! git add --dry-run --ignore-missing --no-warn-embedded-repo "$sm_path" > /dev/null 2>&1
then
shift
done
- git ${wt_prefix:+-C "$wt_prefix"} ${prefix:+--super-prefix "$prefix"} submodule--helper foreach ${GIT_QUIET:+--quiet} ${recursive:+--recursive} "$@"
+ git ${wt_prefix:+-C "$wt_prefix"} ${prefix:+--super-prefix "$prefix"} submodule--helper foreach ${GIT_QUIET:+--quiet} ${recursive:+--recursive} -- "$@"
}
#
shift
done
- git ${wt_prefix:+-C "$wt_prefix"} ${prefix:+--super-prefix "$prefix"} submodule--helper init ${GIT_QUIET:+--quiet} "$@"
+ git ${wt_prefix:+-C "$wt_prefix"} ${prefix:+--super-prefix "$prefix"} submodule--helper init ${GIT_QUIET:+--quiet} -- "$@"
}
#
shift
done
- git ${wt_prefix:+-C "$wt_prefix"} submodule--helper deinit ${GIT_QUIET:+--quiet} ${prefix:+--prefix "$prefix"} ${force:+--force} ${deinit_all:+--all} "$@"
+ git ${wt_prefix:+-C "$wt_prefix"} submodule--helper deinit ${GIT_QUIET:+--quiet} ${prefix:+--prefix "$prefix"} ${force:+--force} ${deinit_all:+--all} -- "$@"
}
is_tip_reachable () (
${depth:+--depth "$depth"} \
$recommend_shallow \
$jobs \
+ -- \
"$@" || echo "#unmatched" $?
} | {
err=
}
}
+#
+# Configures a submodule's default branch
+#
+# $@ = requested path
+#
+cmd_set_branch() {
+ unset_branch=false
+ branch=
+
+ while test $# -ne 0
+ do
+ case "$1" in
+ -q|--quiet)
+ # we don't do anything with this but we need to accept it
+ ;;
+ -d|--default)
+ unset_branch=true
+ ;;
+ -b|--branch)
+ case "$2" in '') usage ;; esac
+ branch=$2
+ shift
+ ;;
+ --)
+ shift
+ break
+ ;;
+ -*)
+ usage
+ ;;
+ *)
+ break
+ ;;
+ esac
+ shift
+ done
+
+ if test $# -ne 1
+ then
+ usage
+ fi
+
+ # we can't use `git submodule--helper name` here because internally, it
+ # hashes the path so a trailing slash could lead to an unintentional no match
+ name="$(git submodule--helper list "$1" | cut -f2)"
+ if test -z "$name"
+ then
+ exit 1
+ fi
+
+ test -n "$branch"; has_branch=$?
+ test "$unset_branch" = true; has_unset_branch=$?
+
+ if test $((!$has_branch != !$has_unset_branch)) -eq 0
+ then
+ usage
+ fi
+
+ if test $has_branch -eq 0
+ then
+ git submodule--helper config submodule."$name".branch "$branch"
+ else
+ git submodule--helper config --unset submodule."$name".branch
+ fi
+}
+
#
# Show commit summary for submodules in index or working tree
#
shift
done
- git ${wt_prefix:+-C "$wt_prefix"} ${prefix:+--super-prefix "$prefix"} submodule--helper status ${GIT_QUIET:+--quiet} ${cached:+--cached} ${recursive:+--recursive} "$@"
+ git ${wt_prefix:+-C "$wt_prefix"} ${prefix:+--super-prefix "$prefix"} submodule--helper status ${GIT_QUIET:+--quiet} ${cached:+--cached} ${recursive:+--recursive} -- "$@"
}
#
# Sync remote urls for submodules
esac
done
- git ${wt_prefix:+-C "$wt_prefix"} ${prefix:+--super-prefix "$prefix"} submodule--helper sync ${GIT_QUIET:+--quiet} ${recursive:+--recursive} "$@"
+ git ${wt_prefix:+-C "$wt_prefix"} ${prefix:+--super-prefix "$prefix"} submodule--helper sync ${GIT_QUIET:+--quiet} ${recursive:+--recursive} -- "$@"
}
cmd_absorbgitdirs()
while test $# != 0 && test -z "$command"
do
case "$1" in
- add | foreach | init | deinit | update | status | summary | sync | absorbgitdirs)
+ add | foreach | init | deinit | update | set-branch | status | summary | sync | absorbgitdirs)
command=$1
;;
-q|--quiet)
fi
fi
-# "-b branch" is accepted only by "add"
-if test -n "$branch" && test "$command" != add
+# "-b branch" is accepted only by "add" and "set-branch"
+if test -n "$branch" && (test "$command" != add || test "$command" != set-branch)
then
usage
fi
usage
fi
-"cmd_$command" "$@"
+"cmd_$(echo $command | sed -e s/-/_/g)" "$@"
{ "diff-files", cmd_diff_files, RUN_SETUP | NEED_WORK_TREE | NO_PARSEOPT },
{ "diff-index", cmd_diff_index, RUN_SETUP | NO_PARSEOPT },
{ "diff-tree", cmd_diff_tree, RUN_SETUP | NO_PARSEOPT },
- { "difftool", cmd_difftool, RUN_SETUP | NEED_WORK_TREE },
+ { "difftool", cmd_difftool, RUN_SETUP_GENTLY },
{ "fast-export", cmd_fast_export, RUN_SETUP },
{ "fetch", cmd_fetch, RUN_SETUP },
{ "fetch-pack", cmd_fetch_pack, RUN_SETUP | NO_PARSEOPT },
{ "show-index", cmd_show_index },
{ "show-ref", cmd_show_ref, RUN_SETUP },
{ "stage", cmd_add, RUN_SETUP | NEED_WORK_TREE },
+ /*
+ * NEEDSWORK: Until the builtin stash is thoroughly robust and no
+ * longer needs redirection to the stash shell script this is kept as
+ * is, then should be changed to RUN_SETUP | NEED_WORK_TREE
+ */
+ { "stash", cmd_stash },
{ "status", cmd_status, RUN_SETUP | NEED_WORK_TREE },
{ "stripspace", cmd_stripspace },
{ "submodule--helper", cmd_submodule__helper, RUN_SETUP | SUPPORT_SUPER_PREFIX | NO_PARSEOPT },
# ======================================================================
# input validation and dispatch
+# Various hash size-related values.
+my $sha1_len = 40;
+my $sha256_extra_len = 24;
+my $sha256_len = $sha1_len + $sha256_extra_len;
+
+# A regex matching $len hex characters. $len may be a range (e.g. 7,64).
+sub oid_nlen_regex {
+ my $len = shift;
+ my $hchr = qr/[0-9a-fA-F]/;
+ return qr/(?:(?:$hchr){$len})/;
+}
+
+# A regex matching two sets of $nlen hex characters, prefixed by the literal
+# string $prefix and with the literal string $infix between them.
+sub oid_nlen_prefix_infix_regex {
+ my $nlen = shift;
+ my $prefix = shift;
+ my $infix = shift;
+
+ my $rx = oid_nlen_regex($nlen);
+
+ return qr/^\Q$prefix\E$rx\Q$infix\E$rx$/;
+}
+
+# A regex matching a valid object ID.
+our $oid_regex;
+{
+ my $x = oid_nlen_regex($sha1_len);
+ my $y = oid_nlen_regex($sha256_extra_len);
+ $oid_regex = qr/(?:$x(?:$y)?)/;
+}
+
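
The helpers above let gitweb accept both a 40-hex-digit SHA-1 name and a 64-digit SHA-256 name (40 plus 24 extra digits) with a single pattern. A rough POSIX-regex analogue in C, using placeholder object names; gitweb itself builds the equivalent pattern in Perl as shown:

    /*
     * Rough C analogue of the $oid_regex construction: accept a full
     * SHA-1 name (40 hex digits) optionally followed by the 24 extra
     * digits of a SHA-256 name. Uses POSIX regex(3); the IDs below
     * are placeholders.
     */
    #include <regex.h>
    #include <stdio.h>

    int main(void)
    {
            const char *pattern = "^[0-9a-fA-F]{40}([0-9a-fA-F]{24})?$";
            const char *tests[] = {
                    "dae86e1950b1277e545cee180551750029cfe735",     /* 40 digits */
                    "dae86e1950b1277e545cee180551750029cfe735"
                    "123456789012345678901234",                     /* 64 digits */
                    "dae86e19"                                      /* abbreviated */
            };
            regex_t re;

            if (regcomp(&re, pattern, REG_EXTENDED | REG_NOSUB))
                    return 1;
            for (int i = 0; i < 3; i++)
                    printf("%.16s... %s\n", tests[i],
                           regexec(&re, tests[i], 0, NULL, 0) ? "no match" : "match");
            regfree(&re);
            return 0;
    }
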
# input parameters can be collected from a variety of sources (presently, CGI
# and PATH_INFO), so we define an %input_params hash that collects them all
# together during validation: this allows subsequent uses (e.g. href()) to be
return undef unless defined $input;
# textual hashes are O.K.
- if ($input =~ m/^[0-9a-fA-F]{40}$/) {
+ if ($input =~ m/^$oid_regex$/) {
return 1;
}
# it must be correct pathname
sub format_log_line_html {
my $line = shift;
+ # Potentially abbreviated OID.
+ my $regex = oid_nlen_regex("7,64");
+
$line = esc_html($line, -nbsp=>1);
$line =~ s{
\b
(?<!-) # see strbuf_check_tag_ref(). Tags can't start with -
[A-Za-z0-9.-]+
(?!\.) # refs can't end with ".", see check_refname_format()
- -g[0-9a-fA-F]{7,40}
+ -g$regex
|
# Just a normal looking Git SHA1
- [0-9a-fA-F]{7,40}
+ $regex
)
\b
}{
')</span>';
}
# match <hash>
- if ($line =~ m/^index [0-9a-fA-F]{40},[0-9a-fA-F]{40}/) {
+ if ($line =~ oid_nlen_prefix_infix_regex($sha1_len, "index ", ",") |
+ $line =~ oid_nlen_prefix_infix_regex($sha256_len, "index ", ",")) {
# can match only for combined diff
$line = 'index ';
for (my $i = 0; $i < $diffinfo->{'nparents'}; $i++) {
$line .= '0' x 7;
}
- } elsif ($line =~ m/^index [0-9a-fA-F]{40}..[0-9a-fA-F]{40}/) {
+ } elsif ($line =~ oid_nlen_prefix_infix_regex($sha1_len, "index ", "..") |
+ $line =~ oid_nlen_prefix_infix_regex($sha256_len, "index ", "..")) {
# can match only for ordinary diff
my ($from_link, $to_link);
if ($from->{'href'}) {
}
#'100644 blob 0fa3f3a66fb6a137f6ec2c19351ed4d807070ffa panic.c'
- $line =~ m/^([0-9]+) (.+) ([0-9a-fA-F]{40})\t/;
+ $line =~ m/^([0-9]+) (.+) ($oid_regex)\t/;
if (defined $type && $type ne $2) {
# type doesn't match
return undef;
while (my $line = <$fd>) {
chomp $line;
- if ($line =~ m!^([0-9a-fA-F]{40})\srefs/($type.*)$!) {
+ if ($line =~ m!^($oid_regex)\srefs/($type.*)$!) {
if (defined $refs{$1}) {
push @{$refs{$1}}, $2;
} else {
$tag{'id'} = $tag_id;
while (my $line = <$fd>) {
chomp $line;
- if ($line =~ m/^object ([0-9a-fA-F]{40})$/) {
+ if ($line =~ m/^object ($oid_regex)$/) {
$tag{'object'} = $1;
} elsif ($line =~ m/^type (.+)$/) {
$tag{'type'} = $1;
}
my $header = shift @commit_lines;
- if ($header !~ m/^[0-9a-fA-F]{40}/) {
+ if ($header !~ m/^$oid_regex/) {
return;
}
($co{'id'}, my @parents) = split ' ', $header;
while (my $line = shift @commit_lines) {
last if $line eq "\n";
- if ($line =~ m/^tree ([0-9a-fA-F]{40})$/) {
+ if ($line =~ m/^tree ($oid_regex)$/) {
$co{'tree'} = $1;
- } elsif ((!defined $withparents) && ($line =~ m/^parent ([0-9a-fA-F]{40})$/)) {
+ } elsif ((!defined $withparents) && ($line =~ m/^parent ($oid_regex)$/)) {
push @parents, $1;
} elsif ($line =~ m/^author (.*) ([0-9]+) (.*)$/) {
$co{'author'} = to_utf8($1);
# ':100644 100644 03b218260e99b78c6df0ed378e59ed9205ccc96d 3b93d5e7cc7f7dd4ebed13a5cc1a4ad976fc94d8 M ls-files.c'
# ':100644 100644 7f9281985086971d3877aca27704f2aaf9c448ce bc190ebc71bbd923f2b728e505408f5e54bd073a M rev-tree.c'
- if ($line =~ m/^:([0-7]{6}) ([0-7]{6}) ([0-9a-fA-F]{40}) ([0-9a-fA-F]{40}) (.)([0-9]{0,3})\t(.*)$/) {
+ if ($line =~ m/^:([0-7]{6}) ([0-7]{6}) ($oid_regex) ($oid_regex) (.)([0-9]{0,3})\t(.*)$/) {
$res{'from_mode'} = $1;
$res{'to_mode'} = $2;
$res{'from_id'} = $3;
}
# '::100755 100755 100755 60e79ca1b01bc8b057abe17ddab484699a7f5fdb 94067cc5f73388f33722d52ae02f44692bc07490 94067cc5f73388f33722d52ae02f44692bc07490 MR git-gui/git-gui.sh'
# combined diff (for merge commit)
- elsif ($line =~ s/^(::+)((?:[0-7]{6} )+)((?:[0-9a-fA-F]{40} )+)([a-zA-Z]+)\t(.*)$//) {
+ elsif ($line =~ s/^(::+)((?:[0-7]{6} )+)((?:$oid_regex )+)([a-zA-Z]+)\t(.*)$//) {
$res{'nparents'} = length($1);
$res{'from_mode'} = [ split(' ', $2) ];
$res{'to_mode'} = pop @{$res{'from_mode'}};
$res{'to_file'} = unquote($5);
}
# 'c512b523472485aef4fff9e57b229d9d243c967f'
- elsif ($line =~ m/^([0-9a-fA-F]{40})$/) {
+ elsif ($line =~ m/^($oid_regex)$/) {
$res{'commit'} = $1;
}
if ($opts{'-l'}) {
#'100644 blob 0fa3f3a66fb6a137f6ec2c19351ed4d807070ffa 16717 panic.c'
- $line =~ m/^([0-9]+) (.+) ([0-9a-fA-F]{40}) +(-|[0-9]+)\t(.+)$/s;
+ $line =~ m/^([0-9]+) (.+) ($oid_regex) +(-|[0-9]+)\t(.+)$/s;
$res{'mode'} = $1;
$res{'type'} = $2;
}
} else {
#'100644 blob 0fa3f3a66fb6a137f6ec2c19351ed4d807070ffa panic.c'
- $line =~ m/^([0-9]+) (.+) ([0-9a-fA-F]{40})\t(.+)$/s;
+ $line =~ m/^([0-9]+) (.+) ($oid_regex)\t(.+)$/s;
$res{'mode'} = $1;
$res{'type'} = $2;
sub is_deleted {
my $diffinfo = shift;
- return $diffinfo->{'to_id'} eq ('0' x 40);
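+	# a deleted path is reported with an all-zero ("null") to_id,
+	# which is 40 or 64 characters long depending on the hash in use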
+ return $diffinfo->{'to_id'} eq ('0' x 40) || $diffinfo->{'to_id'} eq ('0' x 64);
}
# does patch correspond to [previous] difftree raw line
-class => "list subject"},
chop_and_escape_str($co{'title'}, 50) . "<br/>");
} elsif (defined $set{'to_id'}) {
- next if ($set{'to_id'} =~ m/^0{40}$/);
+ next if is_deleted(\%set);
print $cgi->a({-href => href(action=>"blob", hash_base=>$co{'id'},
hash=>$set{'to_id'}, file_name=>$set{'to_file'}),
# the header: <SHA-1> <src lineno> <dst lineno> [<lines in group>]
# no <lines in group> for subsequent lines in group of lines
my ($full_rev, $orig_lineno, $lineno, $group_size) =
- ($line =~ /^([0-9a-f]{40}) (\d+) (\d+)(?: (\d+))?$/);
+ ($line =~ /^($oid_regex) (\d+) (\d+)(?: (\d+))?$/);
if (!exists $metainfo{$full_rev}) {
$metainfo{$full_rev} = { 'nprevious' => 0 };
}
}
# 'previous' <sha1 of parent commit> <filename at commit>
if (exists $meta->{'previous'} &&
- $meta->{'previous'} =~ /^([a-fA-F0-9]{40}) (.*)$/) {
+ $meta->{'previous'} =~ /^($oid_regex) (.*)$/) {
$meta->{'parent'} = $1;
$meta->{'file_parent'} = unquote($2);
}
} else {
die_error(400, "No file name defined");
}
- } elsif ($hash =~ m/^[0-9a-fA-F]{40}$/) {
+ } elsif ($hash =~ m/^$oid_regex$/) {
# blobs defined by non-textual hash id's can be cached
$expires = "+1d";
}
} else {
die_error(400, "No file name defined");
}
- } elsif ($hash =~ m/^[0-9a-fA-F]{40}$/) {
+ } elsif ($hash =~ m/^$oid_regex$/) {
# blobs defined by non-textual hash id's can be cached
$expires = "+1d";
}
# non-textual hash id's can be cached
my $expires;
- if ($hash =~ m/^[0-9a-fA-F]{40}$/) {
+ if ($hash =~ m/^$oid_regex$/) {
$expires = "+1d";
}
my $refs = git_get_references();
close $fd;
#'100644 blob 0fa3f3a66fb6a137f6ec2c19351ed4d807070ffa panic.c'
- unless ($line && $line =~ m/^([0-9]+) (.+) ([0-9a-fA-F]{40})\t/) {
+ unless ($line && $line =~ m/^([0-9]+) (.+) ($oid_regex)\t/) {
die_error(404, "File or directory for given base does not exist");
}
$type = $2;
or die_error(404, "Blob diff not found");
} elsif (defined $hash &&
- $hash =~ /[0-9a-fA-F]{40}/) {
+ $hash =~ $oid_regex) {
# try to find filename from $hash
# read filtered raw output
@difftree =
# ':100644 100644 03b21826... 3b93d5e7... M ls-files.c'
# $hash == to_id
- grep { /^:[0-7]{6} [0-7]{6} [0-9a-fA-F]{40} $hash/ }
+ grep { /^:[0-7]{6} [0-7]{6} $oid_regex $hash/ }
map { chomp; $_ } <$fd>;
close $fd
or die_error(404, "Reading git-diff-tree failed");
$hash ||= $diffinfo{'to_id'};
# non-textual hash id's can be cached
- if ($hash_base =~ m/^[0-9a-fA-F]{40}$/ &&
- $hash_parent_base =~ m/^[0-9a-fA-F]{40}$/) {
+ if ($hash_base =~ m/^$oid_regex$/ &&
+ $hash_parent_base =~ m/^$oid_regex$/) {
$expires = '+1d';
}
$hash_parent ne '-c' && $hash_parent ne '--cc') {
# commitdiff with two commits given
my $hash_parent_short = $hash_parent;
- if ($hash_parent =~ m/^[0-9a-fA-F]{40}$/) {
+ if ($hash_parent =~ m/^$oid_regex$/) {
$hash_parent_short = substr($hash_parent, 0, 7);
}
$formats_nav .=
# non-textual hash id's can be cached
my $expires;
- if ($hash =~ m/^[0-9a-fA-F]{40}$/) {
+ if ($hash =~ m/^$oid_regex$/) {
$expires = "+1d";
}
int hash_algo_by_name(const char *name);
/* Identical, except based on the format ID. */
int hash_algo_by_id(uint32_t format_id);
+/* Identical, except based on the length. */
+int hash_algo_by_length(int len);
/* Identical, except for a pointer to struct git_hash_algo. */
static inline int hash_algo_by_ptr(const struct git_hash_algo *p)
{
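A minimal caller sketch follows; it is not part of the patch, and it assumes the new lookup is keyed on the raw (binary) digest length and that hash.h's hash_algos[] array and GIT_HASH_UNKNOWN constant are available:

    /* illustration only: pick the algorithm that produced a digest of a given size */
    static const struct git_hash_algo *algo_for_digest_len(size_t len)
    {
            int algo = hash_algo_by_length(len);

            if (algo == GIT_HASH_UNKNOWN)
                    return NULL; /* e.g. neither 20 (SHA-1) nor 32 (SHA-256) bytes */
            return &hash_algos[algo];
    }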
{"GET", "/objects/info/http-alternates$", get_text_file},
{"GET", "/objects/info/packs$", get_info_packs},
{"GET", "/objects/[0-9a-f]{2}/[0-9a-f]{38}$", get_loose_object},
+ {"GET", "/objects/[0-9a-f]{2}/[0-9a-f]{62}$", get_loose_object},
{"GET", "/objects/pack/pack-[0-9a-f]{40}\\.pack$", get_pack_file},
+ {"GET", "/objects/pack/pack-[0-9a-f]{64}\\.pack$", get_pack_file},
{"GET", "/objects/pack/pack-[0-9a-f]{40}\\.idx$", get_idx_file},
+ {"GET", "/objects/pack/pack-[0-9a-f]{64}\\.idx$", get_idx_file},
{"POST", "/git-upload-pack$", service_rpc},
{"POST", "/git-receive-pack$", service_rpc}
char *url;
char *owner;
char *token;
- char tmpfile_suffix[41];
+ char tmpfile_suffix[GIT_MAX_HEXSZ + 1];
time_t start_time;
long timeout;
int refreshing;
return;
}
- fprintf(stderr, "Fetching pack %s\n", sha1_to_hex(target->sha1));
+ fprintf(stderr, "Fetching pack %s\n",
+ hash_to_hex(target->hash));
fprintf(stderr, " which contains %s\n", oid_to_hex(&request->obj->oid));
preq = new_http_pack_request(target, repo->url);
request->dest = strbuf_detach(&buf, NULL);
append_remote_object_url(&buf, repo->url, hex, 0);
- strbuf_add(&buf, request->lock->tmpfile_suffix, 41);
+ strbuf_add(&buf, request->lock->tmpfile_suffix, the_hash_algo->hexsz + 1);
request->url = strbuf_detach(&buf, NULL);
slot = get_active_slot();
static void handle_new_lock_ctx(struct xml_ctx *ctx, int tag_closed)
{
struct remote_lock *lock = (struct remote_lock *)ctx->userData;
- git_SHA_CTX sha_ctx;
- unsigned char lock_token_sha1[20];
+ git_hash_ctx hash_ctx;
+ unsigned char lock_token_hash[GIT_MAX_RAWSZ];
if (tag_closed && ctx->cdata) {
if (!strcmp(ctx->name, DAV_ACTIVELOCK_OWNER)) {
} else if (!strcmp(ctx->name, DAV_ACTIVELOCK_TOKEN)) {
lock->token = xstrdup(ctx->cdata);
- git_SHA1_Init(&sha_ctx);
- git_SHA1_Update(&sha_ctx, lock->token, strlen(lock->token));
- git_SHA1_Final(lock_token_sha1, &sha_ctx);
+ the_hash_algo->init_fn(&hash_ctx);
+ the_hash_algo->update_fn(&hash_ctx, lock->token, strlen(lock->token));
+ the_hash_algo->final_fn(lock_token_hash, &hash_ctx);
lock->tmpfile_suffix[0] = '_';
- memcpy(lock->tmpfile_suffix + 1, sha1_to_hex(lock_token_sha1), 40);
+ memcpy(lock->tmpfile_suffix + 1, hash_to_hex(lock_token_hash), the_hash_algo->hexsz);
}
}
}
/* extract hex from sharded "xx/<rest of hex>" filename */
static int get_oid_hex_from_objpath(const char *path, struct object_id *oid)
{
- if (strlen(path) != GIT_SHA1_HEXSZ + 1)
+ if (strlen(path) != the_hash_algo->hexsz + 1)
return -1;
if (hex_to_bytes(oid->hash, path, 1))
path += 2;
path++; /* skip '/' */
- return hex_to_bytes(oid->hash + 1, path, GIT_SHA1_RAWSZ - 1);
+ return hex_to_bytes(oid->hash + 1, path, the_hash_algo->rawsz - 1);
}
static void process_ls_object(struct remote_ls_ctx *ls)
return count;
}
-static int update_remote(unsigned char *sha1, struct remote_lock *lock)
+static int update_remote(const struct object_id *oid, struct remote_lock *lock)
{
struct active_request_slot *slot;
struct slot_results results;
dav_headers = get_dav_token_headers(lock, DAV_HEADER_IF);
- strbuf_addf(&out_buffer.buf, "%s\n", sha1_to_hex(sha1));
+ strbuf_addf(&out_buffer.buf, "%s\n", oid_to_hex(oid));
slot = get_active_slot();
slot->results = &results;
run_request_queue();
/* Update the remote branch if all went well */
- if (aborted || !update_remote(ref->new_oid.hash, ref_lock))
+ if (aborted || !update_remote(&ref->new_oid, ref_lock))
rc = 1;
if (!rc)
if (walker->get_verbosely) {
fprintf(stderr, "Getting pack %s\n",
- sha1_to_hex(target->sha1));
+ hash_to_hex(target->hash));
fprintf(stderr, " which contains %s\n",
- sha1_to_hex(sha1));
+ hash_to_hex(sha1));
}
preq = new_http_pack_request(target, repo->base);
release_object_request(obj_req);
}
-static int fetch_object(struct walker *walker, unsigned char *sha1)
+static int fetch_object(struct walker *walker, unsigned char *hash)
{
- char *hex = sha1_to_hex(sha1);
+ char *hex = hash_to_hex(hash);
int ret = 0;
struct object_request *obj_req = NULL;
struct http_object_request *req;
list_for_each(pos, head) {
obj_req = list_entry(pos, struct object_request, node);
- if (hasheq(obj_req->oid.hash, sha1))
+ if (hasheq(obj_req->oid.hash, hash))
break;
}
if (obj_req == NULL)
return ret;
}
-static int fetch(struct walker *walker, unsigned char *sha1)
+static int fetch(struct walker *walker, unsigned char *hash)
{
struct walker_data *data = walker->data;
struct alt_base *altbase = data->alt;
- if (!fetch_object(walker, sha1))
+ if (!fetch_object(walker, hash))
return 0;
while (altbase) {
- if (!http_fetch_pack(walker, altbase, sha1))
+ if (!http_fetch_pack(walker, altbase, hash))
return 0;
fetch_alternates(walker, data->alt->base);
altbase = altbase->next;
}
- return error("Unable to find %s under %s", sha1_to_hex(sha1),
+ return error("Unable to find %s under %s", hash_to_hex(hash),
data->alt->base);
}
url = quote_ref_url(base, ref->name);
if (http_get_strbuf(url, &buffer, &options) == HTTP_OK) {
strbuf_rtrim(&buffer);
- if (buffer.len == 40)
+ if (buffer.len == the_hash_algo->hexsz)
ret = get_oid_hex(buffer.buf, &ref->old_oid);
else if (starts_with(buffer.buf, "ref: ")) {
ref->symref = xstrdup(buffer.buf + 5);
}
/* Helpers for fetching packs */
-static char *fetch_pack_index(unsigned char *sha1, const char *base_url)
+static char *fetch_pack_index(unsigned char *hash, const char *base_url)
{
char *url, *tmp;
struct strbuf buf = STRBUF_INIT;
if (http_is_verbose)
- fprintf(stderr, "Getting index for pack %s\n", sha1_to_hex(sha1));
+ fprintf(stderr, "Getting index for pack %s\n", hash_to_hex(hash));
end_url_with_slash(&buf, base_url);
- strbuf_addf(&buf, "objects/pack/pack-%s.idx", sha1_to_hex(sha1));
+ strbuf_addf(&buf, "objects/pack/pack-%s.idx", hash_to_hex(hash));
url = strbuf_detach(&buf, NULL);
- strbuf_addf(&buf, "%s.temp", sha1_pack_index_name(sha1));
+ strbuf_addf(&buf, "%s.temp", sha1_pack_index_name(hash));
tmp = strbuf_detach(&buf, NULL);
if (http_get_file(url, tmp, NULL) != HTTP_OK) {
int http_get_info_packs(const char *base_url, struct packed_git **packs_head)
{
struct http_get_options options = {0};
- int ret = 0, i = 0;
- char *url, *data;
+ int ret = 0;
+ char *url;
+ const char *data;
struct strbuf buf = STRBUF_INIT;
- unsigned char hash[GIT_MAX_RAWSZ];
- const unsigned hexsz = the_hash_algo->hexsz;
+ struct object_id oid;
end_url_with_slash(&buf, base_url);
strbuf_addstr(&buf, "objects/info/packs");
goto cleanup;
data = buf.buf;
- while (i < buf.len) {
- switch (data[i]) {
- case 'P':
- i++;
- if (i + hexsz + 12 <= buf.len &&
- starts_with(data + i, " pack-") &&
- starts_with(data + i + hexsz + 6, ".pack\n")) {
- get_sha1_hex(data + i + 6, hash);
- fetch_and_setup_pack_index(packs_head, hash,
- base_url);
- i += hexsz + 11;
- break;
- }
- default:
- while (i < buf.len && data[i] != '\n')
- i++;
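+	/*
+	 * The info/packs file is a sequence of lines of the form
+	 * "P pack-<hex object id>.pack"; unrecognized lines are skipped
+	 * up to the next newline.
+	 */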
+ while (*data) {
+ if (skip_prefix(data, "P pack-", &data) &&
+ !parse_oid_hex(data, &oid, &data) &&
+ skip_prefix(data, ".pack", &data) &&
+ (*data == '\n' || *data == '\0')) {
+ fetch_and_setup_pack_index(packs_head, oid.hash, base_url);
+ } else {
+ data = strchrnul(data, '\n');
}
- i++;
+ if (*data)
+ data++; /* skip past newline */
}
cleanup:
return -1;
}
- unlink(sha1_pack_index_name(p->sha1));
+ unlink(sha1_pack_index_name(p->hash));
- if (finalize_object_file(preq->tmpfile.buf, sha1_pack_name(p->sha1))
- || finalize_object_file(tmp_idx, sha1_pack_index_name(p->sha1))) {
+ if (finalize_object_file(preq->tmpfile.buf, sha1_pack_name(p->hash))
+ || finalize_object_file(tmp_idx, sha1_pack_index_name(p->hash))) {
free(tmp_idx);
return -1;
}
end_url_with_slash(&buf, base_url);
strbuf_addf(&buf, "objects/pack/pack-%s.pack",
- sha1_to_hex(target->sha1));
+ hash_to_hex(target->hash));
preq->url = strbuf_detach(&buf, NULL);
- strbuf_addf(&preq->tmpfile, "%s.temp", sha1_pack_name(target->sha1));
+ strbuf_addf(&preq->tmpfile, "%s.temp", sha1_pack_name(target->hash));
preq->packfile = fopen(preq->tmpfile.buf, "a");
if (!preq->packfile) {
error("Unable to open local file %s for pack",
if (http_is_verbose)
fprintf(stderr,
"Resuming fetch of pack %s at byte %"PRIuMAX"\n",
- sha1_to_hex(target->sha1), (uintmax_t)prev_posn);
+ hash_to_hex(target->hash),
+ (uintmax_t)prev_posn);
http_opt_request_remainder(preq->slot->curl, prev_posn);
}
freq->stream.next_out = expn;
freq->stream.avail_out = sizeof(expn);
freq->zret = git_inflate(&freq->stream, Z_SYNC_FLUSH);
- git_SHA1_Update(&freq->c, expn,
- sizeof(expn) - freq->stream.avail_out);
+ the_hash_algo->update_fn(&freq->c, expn,
+ sizeof(expn) - freq->stream.avail_out);
} while (freq->stream.avail_in && freq->zret == Z_OK);
return size;
}
git_inflate_init(&freq->stream);
- git_SHA1_Init(&freq->c);
+ the_hash_algo->init_fn(&freq->c);
freq->url = get_remote_object_url(base_url, hex, 0);
if (prev_read == -1) {
memset(&freq->stream, 0, sizeof(freq->stream));
git_inflate_init(&freq->stream);
- git_SHA1_Init(&freq->c);
+ the_hash_algo->init_fn(&freq->c);
if (prev_posn>0) {
prev_posn = 0;
lseek(freq->localfile, 0, SEEK_SET);
}
git_inflate_end(&freq->stream);
- git_SHA1_Final(freq->real_oid.hash, &freq->c);
+ the_hash_algo->final_fn(freq->real_oid.hash, &freq->c);
if (freq->zret != Z_STREAM_END) {
unlink_or_warn(freq->tmpfile.buf);
return -1;
long http_code;
struct object_id oid;
struct object_id real_oid;
- git_SHA_CTX c;
+ git_hash_ctx c;
git_zstream stream;
int zret;
int rename;
return set_ident(var, value);
}
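+/*
+ * Set the given environment variable as a fallback: do nothing if this
+ * piece of the ident was already marked as given (the *given bits) or
+ * the variable is already present in the environment; otherwise export
+ * it and record that it is now given.
+ */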
+static void set_env_if(const char *key, const char *value, int *given, int bit)
+{
+ if ((*given & bit) || getenv(key))
+ return; /* nothing to do */
+ setenv(key, value, 0);
+ *given |= bit;
+}
+
+void prepare_fallback_ident(const char *name, const char *email)
+{
+ set_env_if("GIT_AUTHOR_NAME", name,
+ &author_ident_explicitly_given, IDENT_NAME_GIVEN);
+ set_env_if("GIT_AUTHOR_EMAIL", email,
+ &author_ident_explicitly_given, IDENT_MAIL_GIVEN);
+ set_env_if("GIT_COMMITTER_NAME", name,
+ &committer_ident_explicitly_given, IDENT_NAME_GIVEN);
+ set_env_if("GIT_COMMITTER_EMAIL", email,
+ &committer_ident_explicitly_given, IDENT_MAIL_GIVEN);
+}
+
static int buf_cmp(const char *a_begin, const char *a_end,
const char *b_begin, const char *b_end)
{
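For illustration only (the command name and e-mail below are invented, not taken from the patch), a builtin that needs to create commits before any identity is configured could install a fallback early on; because set_env_if() above never overwrites an ident piece that was already given, a configured user.name/user.email or an exported GIT_AUTHOR_*/GIT_COMMITTER_* variable still takes precedence:

    /* hypothetical caller sketch */
    static void setup_example_fallback_ident(void)
    {
            /*
             * Later commit creation falls back to this identity only when
             * the user has not supplied one via config or environment.
             */
            prepare_fallback_ident("git example", "example@git.invalid");
    }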
KHASH_INIT(sha1_pos, const unsigned char *, int, 1, sha1hash, __kh_oid_cmp)
typedef kh_sha1_pos_t khash_sha1_pos;
+static inline unsigned int oid_hash(struct object_id oid)
+{
+ return sha1hash(oid.hash);
+}
+
+static inline int oid_equal(struct object_id a, struct object_id b)
+{
+ return oideq(&a, &b);
+}
+
+KHASH_INIT(oid, struct object_id, int, 0, oid_hash, oid_equal)
+
+KHASH_INIT(oid_map, struct object_id, void *, 1, oid_hash, oid_equal)
+typedef kh_oid_map_t khash_oid_map;
+
+KHASH_INIT(oid_pos, struct object_id, int, 1, oid_hash, oid_equal)
+typedef kh_oid_pos_t khash_oid_pos;
+
#endif /* __AC_KHASH_H */
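As a usage illustration (not part of the patch), the helpers that KHASH_INIT(oid, ...) generates can back a simple "already seen?" set of object IDs; mark_seen() is a made-up name, and a caller would allocate the table with kh_init_oid() and release it with kh_destroy_oid():

    /*
     * Illustration only: kh_put_oid() reports through its int out-parameter
     * whether the key was newly inserted (non-zero) or already present (zero).
     */
    static int mark_seen(kh_oid_t *seen, const struct object_id *oid)
    {
            int was_new;

            kh_put_oid(seen, *oid, &was_new);
            return was_new;
    }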
static void fill_blob_sha1(struct commit *commit, struct diff_filespec *spec)
{
- unsigned mode;
+ unsigned short mode;
struct object_id oid;
if (get_tree_entry(&commit->object.oid, spec->path, &oid, &mode))
while (one.size) {
const char *path;
const struct object_id *elem;
- unsigned mode;
+ unsigned short mode;
int score;
elem = tree_entry_extract(&one, &path, &mode);
rewrite_here = NULL;
while (desc.size) {
const char *name;
- unsigned mode;
+ unsigned short mode;
tree_entry_extract(&desc, &name, &mode);
if (strlen(name) == toplen &&
if (add_score < del_score) {
/* We need to pick a subtree of two */
- unsigned mode;
+ unsigned short mode;
if (!*del_prefix)
return;
const char *shift_prefix)
{
struct object_id sub1, sub2;
- unsigned mode1, mode2;
+ unsigned short mode1, mode2;
unsigned candidate = 0;
/* Can hash2 be a tree at shift_prefix in tree hash1? */
hashmap_init(map, (hashmap_cmp_fn) collision_cmp, NULL, 0);
}
-static void flush_output(struct merge_options *o)
+static void flush_output(struct merge_options *opt)
{
- if (o->buffer_output < 2 && o->obuf.len) {
- fputs(o->obuf.buf, stdout);
- strbuf_reset(&o->obuf);
+ if (opt->buffer_output < 2 && opt->obuf.len) {
+ fputs(opt->obuf.buf, stdout);
+ strbuf_reset(&opt->obuf);
}
}
-static int err(struct merge_options *o, const char *err, ...)
+static int err(struct merge_options *opt, const char *err, ...)
{
va_list params;
- if (o->buffer_output < 2)
- flush_output(o);
+ if (opt->buffer_output < 2)
+ flush_output(opt);
else {
- strbuf_complete(&o->obuf, '\n');
- strbuf_addstr(&o->obuf, "error: ");
+ strbuf_complete(&opt->obuf, '\n');
+ strbuf_addstr(&opt->obuf, "error: ");
}
va_start(params, err);
- strbuf_vaddf(&o->obuf, err, params);
+ strbuf_vaddf(&opt->obuf, err, params);
va_end(params);
- if (o->buffer_output > 1)
- strbuf_addch(&o->obuf, '\n');
+ if (opt->buffer_output > 1)
+ strbuf_addch(&opt->obuf, '\n');
else {
- error("%s", o->obuf.buf);
- strbuf_reset(&o->obuf);
+ error("%s", opt->obuf.buf);
+ strbuf_reset(&opt->obuf);
}
return -1;
RENAME_TWO_FILES_TO_ONE
};
-struct rename_conflict_info {
- enum rename_type rename_type;
- struct diff_filepair *pair1;
- struct diff_filepair *pair2;
- const char *branch1;
- const char *branch2;
- struct stage_data *dst_entry1;
- struct stage_data *dst_entry2;
- struct diff_filespec ren1_other;
- struct diff_filespec ren2_other;
-};
-
/*
* Since we want to write the index eventually, we cannot reuse the index
* for these (temporary) data.
*/
struct stage_data {
- struct {
- unsigned mode;
- struct object_id oid;
- } stages[4];
+ struct diff_filespec stages[4]; /* mostly for oid & mode; maybe path */
struct rename_conflict_info *rename_conflict_info;
unsigned processed:1;
};
+struct rename {
+ unsigned processed:1;
+ struct diff_filepair *pair;
+ const char *branch; /* branch that the rename occurred on */
+ /*
+ * If directory rename detection affected this rename, what was its
+ * original type ('A' or 'R') and its original destination before
+ * the directory rename (otherwise, '\0' and NULL for these two vars).
+ */
+ char dir_rename_original_type;
+ char *dir_rename_original_dest;
+ /*
+ * Purpose of src_entry and dst_entry:
+ *
+ * If 'before' is renamed to 'after' then src_entry will contain
+ * the versions of 'before' from the merge_base, HEAD, and MERGE in
+ * stages 1, 2, and 3; dst_entry will contain the respective
+ * versions of 'after' in corresponding locations. Thus, we have a
+ * total of six modes and oids, though some will be null. (Stage 0
+ * is ignored; we're interested in handling conflicts.)
+ *
+ * Since we don't turn on break-rewrites by default, neither
+ * src_entry nor dst_entry can have all three of their stages have
+ * non-null oids, meaning at most four of the six will be non-null.
+ * Also, since this is a rename, both src_entry and dst_entry will
+ * have at least one non-null oid, meaning at least two will be
+ * non-null. Of the six oids, a typical rename will have three be
+ * non-null. Only two implies a rename/delete, and four implies a
+ * rename/add.
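+ *
+ * For example, if 'before' is renamed to 'after' on one side and
+ * merely modified on the other, src_entry holds the base version
+ * and the non-renaming side's version of 'before', while dst_entry
+ * holds only the renaming side's 'after': three non-null oids.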
+ */
+ struct stage_data *src_entry;
+ struct stage_data *dst_entry;
+};
+
+struct rename_conflict_info {
+ enum rename_type rename_type;
+ struct rename *ren1;
+ struct rename *ren2;
+};
+
static inline void setup_rename_conflict_info(enum rename_type rename_type,
- struct diff_filepair *pair1,
- struct diff_filepair *pair2,
- const char *branch1,
- const char *branch2,
- struct stage_data *dst_entry1,
- struct stage_data *dst_entry2,
- struct merge_options *o,
- struct stage_data *src_entry1,
- struct stage_data *src_entry2)
-{
- int ostage1 = 0, ostage2;
+ struct merge_options *opt,
+ struct rename *ren1,
+ struct rename *ren2)
+{
struct rename_conflict_info *ci;
/*
* When we have two renames involved, it's easiest to get the
* correct things into stage 2 and 3, and to make sure that the
* content merge puts HEAD before the other branch if we just
- * ensure that branch1 == o->branch1. So, simply flip arguments
+ * ensure that branch1 == opt->branch1. So, simply flip arguments
* around if we don't have that.
*/
- if (dst_entry2 && branch1 != o->branch1) {
- setup_rename_conflict_info(rename_type,
- pair2, pair1,
- branch2, branch1,
- dst_entry2, dst_entry1,
- o,
- src_entry2, src_entry1);
+ if (ren2 && ren1->branch != opt->branch1) {
+ setup_rename_conflict_info(rename_type, opt, ren2, ren1);
return;
}
ci = xcalloc(1, sizeof(struct rename_conflict_info));
ci->rename_type = rename_type;
- ci->pair1 = pair1;
- ci->branch1 = branch1;
- ci->branch2 = branch2;
+ ci->ren1 = ren1;
+ ci->ren2 = ren2;
- ci->dst_entry1 = dst_entry1;
- dst_entry1->rename_conflict_info = ci;
- dst_entry1->processed = 0;
-
- assert(!pair2 == !dst_entry2);
- if (dst_entry2) {
- ci->dst_entry2 = dst_entry2;
- ci->pair2 = pair2;
- dst_entry2->rename_conflict_info = ci;
- }
-
- /*
- * For each rename, there could have been
- * modifications on the side of history where that
- * file was not renamed.
- */
- if (rename_type == RENAME_ADD ||
- rename_type == RENAME_TWO_FILES_TO_ONE) {
- ostage1 = o->branch1 == branch1 ? 3 : 2;
-
- ci->ren1_other.path = pair1->one->path;
- oidcpy(&ci->ren1_other.oid, &src_entry1->stages[ostage1].oid);
- ci->ren1_other.mode = src_entry1->stages[ostage1].mode;
- }
-
- if (rename_type == RENAME_TWO_FILES_TO_ONE) {
- ostage2 = ostage1 ^ 1;
-
- ci->ren2_other.path = pair2->one->path;
- oidcpy(&ci->ren2_other.oid, &src_entry2->stages[ostage2].oid);
- ci->ren2_other.mode = src_entry2->stages[ostage2].mode;
+ ci->ren1->dst_entry->processed = 0;
+ ci->ren1->dst_entry->rename_conflict_info = ci;
+ if (ren2) {
+ ci->ren2->dst_entry->rename_conflict_info = ci;
}
}
-static int show(struct merge_options *o, int v)
+static int show(struct merge_options *opt, int v)
{
- return (!o->call_depth && o->verbosity >= v) || o->verbosity >= 5;
+ return (!opt->call_depth && opt->verbosity >= v) || opt->verbosity >= 5;
}
__attribute__((format (printf, 3, 4)))
-static void output(struct merge_options *o, int v, const char *fmt, ...)
+static void output(struct merge_options *opt, int v, const char *fmt, ...)
{
va_list ap;
- if (!show(o, v))
+ if (!show(opt, v))
return;
- strbuf_addchars(&o->obuf, ' ', o->call_depth * 2);
+ strbuf_addchars(&opt->obuf, ' ', opt->call_depth * 2);
va_start(ap, fmt);
- strbuf_vaddf(&o->obuf, fmt, ap);
+ strbuf_vaddf(&opt->obuf, fmt, ap);
va_end(ap);
- strbuf_addch(&o->obuf, '\n');
- if (!o->buffer_output)
- flush_output(o);
+ strbuf_addch(&opt->obuf, '\n');
+ if (!opt->buffer_output)
+ flush_output(opt);
}
-static void output_commit_title(struct merge_options *o, struct commit *commit)
+static void output_commit_title(struct merge_options *opt, struct commit *commit)
{
struct merge_remote_desc *desc;
- strbuf_addchars(&o->obuf, ' ', o->call_depth * 2);
+ strbuf_addchars(&opt->obuf, ' ', opt->call_depth * 2);
desc = merge_remote_util(commit);
if (desc)
- strbuf_addf(&o->obuf, "virtual %s\n", desc->name);
+ strbuf_addf(&opt->obuf, "virtual %s\n", desc->name);
else {
- strbuf_add_unique_abbrev(&o->obuf, &commit->object.oid,
+ strbuf_add_unique_abbrev(&opt->obuf, &commit->object.oid,
DEFAULT_ABBREV);
- strbuf_addch(&o->obuf, ' ');
+ strbuf_addch(&opt->obuf, ' ');
if (parse_commit(commit) != 0)
- strbuf_addstr(&o->obuf, _("(bad commit)\n"));
+ strbuf_addstr(&opt->obuf, _("(bad commit)\n"));
else {
const char *title;
const char *msg = get_commit_buffer(commit, NULL);
int len = find_commit_subject(msg, &title);
if (len)
- strbuf_addf(&o->obuf, "%.*s\n", len, title);
+ strbuf_addf(&opt->obuf, "%.*s\n", len, title);
unuse_commit_buffer(commit, msg);
}
}
- flush_output(o);
+ flush_output(opt);
}
-static int add_cacheinfo(struct merge_options *o,
- unsigned int mode, const struct object_id *oid,
+static int add_cacheinfo(struct merge_options *opt,
+ const struct diff_filespec *blob,
const char *path, int stage, int refresh, int options)
{
- struct index_state *istate = o->repo->index;
+ struct index_state *istate = opt->repo->index;
struct cache_entry *ce;
int ret;
- ce = make_cache_entry(istate, mode, oid ? oid : &null_oid, path, stage, 0);
+ ce = make_cache_entry(istate, blob->mode, &blob->oid, path, stage, 0);
if (!ce)
- return err(o, _("add_cacheinfo failed for path '%s'; merge aborting."), path);
+ return err(opt, _("add_cacheinfo failed for path '%s'; merge aborting."), path);
ret = add_index_entry(istate, ce, options);
if (refresh) {
nce = refresh_cache_entry(istate, ce,
CE_MATCH_REFRESH | CE_MATCH_IGNORE_MISSING);
if (!nce)
- return err(o, _("add_cacheinfo failed to refresh for path '%s'; merge aborting."), path);
+ return err(opt, _("add_cacheinfo failed to refresh for path '%s'; merge aborting."), path);
if (nce != ce)
ret = add_index_entry(istate, nce, options);
}
init_tree_desc(desc, tree->buffer, tree->size);
}
-static int unpack_trees_start(struct merge_options *o,
+static int unpack_trees_start(struct merge_options *opt,
struct tree *common,
struct tree *head,
struct tree *merge)
struct tree_desc t[3];
struct index_state tmp_index = { NULL };
- memset(&o->unpack_opts, 0, sizeof(o->unpack_opts));
- if (o->call_depth)
- o->unpack_opts.index_only = 1;
+ memset(&opt->unpack_opts, 0, sizeof(opt->unpack_opts));
+ if (opt->call_depth)
+ opt->unpack_opts.index_only = 1;
else
- o->unpack_opts.update = 1;
- o->unpack_opts.merge = 1;
- o->unpack_opts.head_idx = 2;
- o->unpack_opts.fn = threeway_merge;
- o->unpack_opts.src_index = o->repo->index;
- o->unpack_opts.dst_index = &tmp_index;
- o->unpack_opts.aggressive = !merge_detect_rename(o);
- setup_unpack_trees_porcelain(&o->unpack_opts, "merge");
+ opt->unpack_opts.update = 1;
+ opt->unpack_opts.merge = 1;
+ opt->unpack_opts.head_idx = 2;
+ opt->unpack_opts.fn = threeway_merge;
+ opt->unpack_opts.src_index = opt->repo->index;
+ opt->unpack_opts.dst_index = &tmp_index;
+ opt->unpack_opts.aggressive = !merge_detect_rename(opt);
+ setup_unpack_trees_porcelain(&opt->unpack_opts, "merge");
init_tree_desc_from_tree(t+0, common);
init_tree_desc_from_tree(t+1, head);
init_tree_desc_from_tree(t+2, merge);
- rc = unpack_trees(3, t, &o->unpack_opts);
- cache_tree_free(&o->repo->index->cache_tree);
+ rc = unpack_trees(3, t, &opt->unpack_opts);
+ cache_tree_free(&opt->repo->index->cache_tree);
/*
- * Update o->repo->index to match the new results, AFTER saving a copy
- * in o->orig_index. Update src_index to point to the saved copy.
+ * Update opt->repo->index to match the new results, AFTER saving a copy
+ * in opt->orig_index. Update src_index to point to the saved copy.
* (verify_uptodate() checks src_index, and the original index is
* the one that had the necessary modification timestamps.)
*/
- o->orig_index = *o->repo->index;
- *o->repo->index = tmp_index;
- o->unpack_opts.src_index = &o->orig_index;
+ opt->orig_index = *opt->repo->index;
+ *opt->repo->index = tmp_index;
+ opt->unpack_opts.src_index = &opt->orig_index;
return rc;
}
-static void unpack_trees_finish(struct merge_options *o)
+static void unpack_trees_finish(struct merge_options *opt)
{
- discard_index(&o->orig_index);
- clear_unpack_trees_porcelain(&o->unpack_opts);
+ discard_index(&opt->orig_index);
+ clear_unpack_trees_porcelain(&opt->unpack_opts);
}
-struct tree *write_tree_from_memory(struct merge_options *o)
+struct tree *write_tree_from_memory(struct merge_options *opt)
{
struct tree *result = NULL;
- struct index_state *istate = o->repo->index;
+ struct index_state *istate = opt->repo->index;
if (unmerged_index(istate)) {
int i;
if (!cache_tree_fully_valid(istate->cache_tree) &&
cache_tree_update(istate, 0) < 0) {
- err(o, _("error building trees"));
+ err(opt, _("error building trees"));
return NULL;
}
- result = lookup_tree(o->repo, &istate->cache_tree->oid);
+ result = lookup_tree(opt->repo, &istate->cache_tree->oid);
return result;
}
{
struct path_hashmap_entry *entry;
int baselen = base->len;
- struct merge_options *o = context;
+ struct merge_options *opt = context;
strbuf_addstr(base, path);
FLEX_ALLOC_MEM(entry, path, base->buf, base->len);
hashmap_entry_init(entry, path_hash(entry->path));
- hashmap_add(&o->current_file_dir_set, entry);
+ hashmap_add(&opt->current_file_dir_set, entry);
strbuf_setlen(base, baselen);
return (S_ISDIR(mode) ? READ_TREE_RECURSIVE : 0);
}
-static void get_files_dirs(struct merge_options *o, struct tree *tree)
+static void get_files_dirs(struct merge_options *opt, struct tree *tree)
{
struct pathspec match_all;
memset(&match_all, 0, sizeof(match_all));
read_tree_recursive(the_repository, tree, "", 0, 0,
- &match_all, save_files_dirs, o);
+ &match_all, save_files_dirs, opt);
}
static int get_tree_entry_if_blob(const struct object_id *tree,
const char *path,
- struct object_id *hashy,
- unsigned int *mode_o)
+ struct diff_filespec *dfs)
{
int ret;
- ret = get_tree_entry(tree, path, hashy, mode_o);
- if (S_ISDIR(*mode_o)) {
- oidcpy(hashy, &null_oid);
- *mode_o = 0;
+ ret = get_tree_entry(tree, path, &dfs->oid, &dfs->mode);
+ if (S_ISDIR(dfs->mode)) {
+ oidcpy(&dfs->oid, &null_oid);
+ dfs->mode = 0;
}
return ret;
}
{
struct string_list_item *item;
struct stage_data *e = xcalloc(1, sizeof(struct stage_data));
- get_tree_entry_if_blob(&o->object.oid, path,
- &e->stages[1].oid, &e->stages[1].mode);
- get_tree_entry_if_blob(&a->object.oid, path,
- &e->stages[2].oid, &e->stages[2].mode);
- get_tree_entry_if_blob(&b->object.oid, path,
- &e->stages[3].oid, &e->stages[3].mode);
+ get_tree_entry_if_blob(&o->object.oid, path, &e->stages[1]);
+ get_tree_entry_if_blob(&a->object.oid, path, &e->stages[2]);
+ get_tree_entry_if_blob(&b->object.oid, path, &e->stages[3]);
item = string_list_insert(entries, path);
item->util = e;
return e;
return onelen - twolen;
}
-static void record_df_conflict_files(struct merge_options *o,
+static void record_df_conflict_files(struct merge_options *opt,
struct string_list *entries)
{
/* If there is a D/F conflict and the file for such a conflict
* If we're merging merge-bases, we don't want to bother with
* any working directory changes.
*/
- if (o->call_depth)
+ if (opt->call_depth)
return;
/* Ensure D/F conflicts are adjacent in the entries list. */
df_sorted_entries.cmp = string_list_df_name_compare;
string_list_sort(&df_sorted_entries);
- string_list_clear(&o->df_conflict_file_set, 1);
+ string_list_clear(&opt->df_conflict_file_set, 1);
for (i = 0; i < df_sorted_entries.nr; i++) {
const char *path = df_sorted_entries.items[i].string;
int len = strlen(path);
len > last_len &&
memcmp(path, last_file, last_len) == 0 &&
path[last_len] == '/') {
- string_list_insert(&o->df_conflict_file_set, last_file);
+ string_list_insert(&opt->df_conflict_file_set, last_file);
}
/*
string_list_clear(&df_sorted_entries, 0);
}
-struct rename {
- struct diff_filepair *pair;
- /*
- * Purpose of src_entry and dst_entry:
- *
- * If 'before' is renamed to 'after' then src_entry will contain
- * the versions of 'before' from the merge_base, HEAD, and MERGE in
- * stages 1, 2, and 3; dst_entry will contain the respective
- * versions of 'after' in corresponding locations. Thus, we have a
- * total of six modes and oids, though some will be null. (Stage 0
- * is ignored; we're interested in handling conflicts.)
- *
- * Since we don't turn on break-rewrites by default, neither
- * src_entry nor dst_entry can have all three of their stages have
- * non-null oids, meaning at most four of the six will be non-null.
- * Also, since this is a rename, both src_entry and dst_entry will
- * have at least one non-null oid, meaning at least two will be
- * non-null. Of the six oids, a typical rename will have three be
- * non-null. Only two implies a rename/delete, and four implies a
- * rename/add.
- */
- struct stage_data *src_entry;
- struct stage_data *dst_entry;
- unsigned add_turned_into_rename:1;
- unsigned processed:1;
-};
-
static int update_stages(struct merge_options *opt, const char *path,
const struct diff_filespec *o,
const struct diff_filespec *a,
if (remove_file_from_index(opt->repo->index, path))
return -1;
if (o)
- if (add_cacheinfo(opt, o->mode, &o->oid, path, 1, 0, options))
+ if (add_cacheinfo(opt, o, path, 1, 0, options))
return -1;
if (a)
- if (add_cacheinfo(opt, a->mode, &a->oid, path, 2, 0, options))
+ if (add_cacheinfo(opt, a, path, 2, 0, options))
return -1;
if (b)
- if (add_cacheinfo(opt, b->mode, &b->oid, path, 3, 0, options))
+ if (add_cacheinfo(opt, b, path, 3, 0, options))
return -1;
return 0;
}
oidcpy(&entry->stages[3].oid, &b->oid);
}
-static int remove_file(struct merge_options *o, int clean,
+static int remove_file(struct merge_options *opt, int clean,
const char *path, int no_wd)
{
- int update_cache = o->call_depth || clean;
- int update_working_directory = !o->call_depth && !no_wd;
+ int update_cache = opt->call_depth || clean;
+ int update_working_directory = !opt->call_depth && !no_wd;
if (update_cache) {
- if (remove_file_from_index(o->repo->index, path))
+ if (remove_file_from_index(opt->repo->index, path))
return -1;
}
if (update_working_directory) {
if (ignore_case) {
struct cache_entry *ce;
- ce = index_file_exists(o->repo->index, path, strlen(path),
+ ce = index_file_exists(opt->repo->index, path, strlen(path),
ignore_case);
if (ce && ce_stage(ce) == 0 && strcmp(path, ce->name))
return 0;
out->buf[i] = '_';
}
-static char *unique_path(struct merge_options *o, const char *path, const char *branch)
+static char *unique_path(struct merge_options *opt, const char *path, const char *branch)
{
struct path_hashmap_entry *entry;
struct strbuf newpath = STRBUF_INIT;
add_flattened_path(&newpath, branch);
base_len = newpath.len;
- while (hashmap_get_from_hash(&o->current_file_dir_set,
+ while (hashmap_get_from_hash(&opt->current_file_dir_set,
path_hash(newpath.buf), newpath.buf) ||
- (!o->call_depth && file_exists(newpath.buf))) {
+ (!opt->call_depth && file_exists(newpath.buf))) {
strbuf_setlen(&newpath, base_len);
strbuf_addf(&newpath, "_%d", suffix++);
}
FLEX_ALLOC_MEM(entry, path, newpath.buf, newpath.len);
hashmap_entry_init(entry, path_hash(entry->path));
- hashmap_add(&o->current_file_dir_set, entry);
+ hashmap_add(&opt->current_file_dir_set, entry);
return strbuf_detach(&newpath, NULL);
}
* Returns whether path was tracked in the index before the merge started,
* and its oid and mode match the specified values
*/
-static int was_tracked_and_matches(struct merge_options *o, const char *path,
- const struct object_id *oid, unsigned mode)
+static int was_tracked_and_matches(struct merge_options *opt, const char *path,
+ const struct diff_filespec *blob)
{
- int pos = index_name_pos(&o->orig_index, path, strlen(path));
+ int pos = index_name_pos(&opt->orig_index, path, strlen(path));
struct cache_entry *ce;
if (0 > pos)
return 0;
/* See if the file we were tracking before matches */
- ce = o->orig_index.cache[pos];
- return (oid_eq(&ce->oid, oid) && ce->ce_mode == mode);
+ ce = opt->orig_index.cache[pos];
+ return (oid_eq(&ce->oid, &blob->oid) && ce->ce_mode == blob->mode);
}
/*
* Returns whether path was tracked in the index before the merge started
*/
-static int was_tracked(struct merge_options *o, const char *path)
+static int was_tracked(struct merge_options *opt, const char *path)
{
- int pos = index_name_pos(&o->orig_index, path, strlen(path));
+ int pos = index_name_pos(&opt->orig_index, path, strlen(path));
if (0 <= pos)
/* we were tracking this path before the merge */
return 0;
}
-static int would_lose_untracked(struct merge_options *o, const char *path)
+static int would_lose_untracked(struct merge_options *opt, const char *path)
{
- struct index_state *istate = o->repo->index;
+ struct index_state *istate = opt->repo->index;
/*
* This may look like it can be simplified to:
- * return !was_tracked(o, path) && file_exists(path)
+ * return !was_tracked(opt, path) && file_exists(path)
* but it can't. This function needs to know whether path was in
* the working tree due to EITHER having been tracked in the index
* before the merge OR having been put into the working copy and
return file_exists(path);
}
-static int was_dirty(struct merge_options *o, const char *path)
+static int was_dirty(struct merge_options *opt, const char *path)
{
struct cache_entry *ce;
int dirty = 1;
- if (o->call_depth || !was_tracked(o, path))
+ if (opt->call_depth || !was_tracked(opt, path))
return !dirty;
- ce = index_file_exists(o->unpack_opts.src_index,
+ ce = index_file_exists(opt->unpack_opts.src_index,
path, strlen(path), ignore_case);
- dirty = verify_uptodate(ce, &o->unpack_opts) != 0;
+ dirty = verify_uptodate(ce, &opt->unpack_opts) != 0;
return dirty;
}
-static int make_room_for_path(struct merge_options *o, const char *path)
+static int make_room_for_path(struct merge_options *opt, const char *path)
{
int status, i;
const char *msg = _("failed to create path '%s'%s");
/* Unlink any D/F conflict files that are in the way */
- for (i = 0; i < o->df_conflict_file_set.nr; i++) {
- const char *df_path = o->df_conflict_file_set.items[i].string;
+ for (i = 0; i < opt->df_conflict_file_set.nr; i++) {
+ const char *df_path = opt->df_conflict_file_set.items[i].string;
size_t pathlen = strlen(path);
size_t df_pathlen = strlen(df_path);
if (df_pathlen < pathlen &&
path[df_pathlen] == '/' &&
strncmp(path, df_path, df_pathlen) == 0) {
- output(o, 3,
+ output(opt, 3,
_("Removing %s to make room for subdirectory\n"),
df_path);
unlink(df_path);
- unsorted_string_list_delete_item(&o->df_conflict_file_set,
+ unsorted_string_list_delete_item(&opt->df_conflict_file_set,
i, 0);
break;
}
if (status) {
if (status == SCLD_EXISTS)
/* something else exists */
- return err(o, msg, path, _(": perhaps a D/F conflict?"));
- return err(o, msg, path, "");
+ return err(opt, msg, path, _(": perhaps a D/F conflict?"));
+ return err(opt, msg, path, "");
}
/*
* Do not unlink a file in the work tree if we are not
* tracking it.
*/
- if (would_lose_untracked(o, path))
- return err(o, _("refusing to lose untracked file at '%s'"),
+ if (would_lose_untracked(opt, path))
+ return err(opt, _("refusing to lose untracked file at '%s'"),
path);
/* Successful unlink is good.. */
if (errno == ENOENT)
return 0;
/* .. but not some other error (who really cares what?) */
- return err(o, msg, path, _(": perhaps a D/F conflict?"));
+ return err(opt, msg, path, _(": perhaps a D/F conflict?"));
}
-static int update_file_flags(struct merge_options *o,
- const struct object_id *oid,
- unsigned mode,
+static int update_file_flags(struct merge_options *opt,
+ const struct diff_filespec *contents,
const char *path,
int update_cache,
int update_wd)
{
int ret = 0;
- if (o->call_depth)
+ if (opt->call_depth)
update_wd = 0;
if (update_wd) {
void *buf;
unsigned long size;
- if (S_ISGITLINK(mode)) {
+ if (S_ISGITLINK(contents->mode)) {
/*
* We may later decide to recursively descend into
* the submodule directory and update its index
goto update_index;
}
- buf = read_object_file(oid, &type, &size);
+ buf = read_object_file(&contents->oid, &type, &size);
if (!buf)
- return err(o, _("cannot read object %s '%s'"), oid_to_hex(oid), path);
+ return err(opt, _("cannot read object %s '%s'"),
+ oid_to_hex(&contents->oid), path);
if (type != OBJ_BLOB) {
- ret = err(o, _("blob expected for %s '%s'"), oid_to_hex(oid), path);
+ ret = err(opt, _("blob expected for %s '%s'"),
+ oid_to_hex(&contents->oid), path);
goto free_buf;
}
- if (S_ISREG(mode)) {
+ if (S_ISREG(contents->mode)) {
struct strbuf strbuf = STRBUF_INIT;
- if (convert_to_working_tree(o->repo->index, path, buf, size, &strbuf)) {
+ if (convert_to_working_tree(opt->repo->index, path, buf, size, &strbuf)) {
free(buf);
size = strbuf.len;
buf = strbuf_detach(&strbuf, NULL);
}
}
- if (make_room_for_path(o, path) < 0) {
+ if (make_room_for_path(opt, path) < 0) {
update_wd = 0;
goto free_buf;
}
- if (S_ISREG(mode) || (!has_symlinks && S_ISLNK(mode))) {
+ if (S_ISREG(contents->mode) ||
+ (!has_symlinks && S_ISLNK(contents->mode))) {
int fd;
- if (mode & 0100)
- mode = 0777;
- else
- mode = 0666;
+ int mode = (contents->mode & 0100 ? 0777 : 0666);
+
fd = open(path, O_WRONLY | O_TRUNC | O_CREAT, mode);
if (fd < 0) {
- ret = err(o, _("failed to open '%s': %s"),
+ ret = err(opt, _("failed to open '%s': %s"),
path, strerror(errno));
goto free_buf;
}
write_in_full(fd, buf, size);
close(fd);
- } else if (S_ISLNK(mode)) {
+ } else if (S_ISLNK(contents->mode)) {
char *lnk = xmemdupz(buf, size);
safe_create_leading_directories_const(path);
unlink(path);
if (symlink(lnk, path))
- ret = err(o, _("failed to symlink '%s': %s"),
+ ret = err(opt, _("failed to symlink '%s': %s"),
path, strerror(errno));
free(lnk);
} else
- ret = err(o,
+ ret = err(opt,
_("do not know what to do with %06o %s '%s'"),
- mode, oid_to_hex(oid), path);
+ contents->mode, oid_to_hex(&contents->oid), path);
free_buf:
free(buf);
}
update_index:
if (!ret && update_cache)
- if (add_cacheinfo(o, mode, oid, path, 0, update_wd,
+ if (add_cacheinfo(opt, contents, path, 0, update_wd,
ADD_CACHE_OK_TO_ADD))
return -1;
return ret;
}
-static int update_file(struct merge_options *o,
+static int update_file(struct merge_options *opt,
int clean,
- const struct object_id *oid,
- unsigned mode,
+ const struct diff_filespec *contents,
const char *path)
{
- return update_file_flags(o, oid, mode, path, o->call_depth || clean, !o->call_depth);
+ return update_file_flags(opt, contents, path,
+ opt->call_depth || clean, !opt->call_depth);
}
/* Low level file merging, update and removal */
struct merge_file_info {
- struct object_id oid;
- unsigned mode;
+ struct diff_filespec blob; /* mostly use oid & mode; sometimes path */
unsigned clean:1,
merge:1;
};
-static int merge_3way(struct merge_options *o,
+static int merge_3way(struct merge_options *opt,
mmbuffer_t *result_buf,
- const struct diff_filespec *one,
+ const struct diff_filespec *o,
const struct diff_filespec *a,
const struct diff_filespec *b,
const char *branch1,
char *base_name, *name1, *name2;
int merge_status;
- ll_opts.renormalize = o->renormalize;
+ ll_opts.renormalize = opt->renormalize;
ll_opts.extra_marker_size = extra_marker_size;
- ll_opts.xdl_opts = o->xdl_opts;
+ ll_opts.xdl_opts = opt->xdl_opts;
- if (o->call_depth) {
+ if (opt->call_depth) {
ll_opts.virtual_ancestor = 1;
ll_opts.variant = 0;
} else {
- switch (o->recursive_variant) {
+ switch (opt->recursive_variant) {
case MERGE_RECURSIVE_OURS:
ll_opts.variant = XDL_MERGE_FAVOR_OURS;
break;
}
}
+ assert(a->path && b->path);
if (strcmp(a->path, b->path) ||
- (o->ancestor != NULL && strcmp(a->path, one->path) != 0)) {
- base_name = o->ancestor == NULL ? NULL :
- mkpathdup("%s:%s", o->ancestor, one->path);
+ (opt->ancestor != NULL && strcmp(a->path, o->path) != 0)) {
+ base_name = opt->ancestor == NULL ? NULL :
+ mkpathdup("%s:%s", opt->ancestor, o->path);
name1 = mkpathdup("%s:%s", branch1, a->path);
name2 = mkpathdup("%s:%s", branch2, b->path);
} else {
- base_name = o->ancestor == NULL ? NULL :
- mkpathdup("%s", o->ancestor);
+ base_name = opt->ancestor == NULL ? NULL :
+ mkpathdup("%s", opt->ancestor);
name1 = mkpathdup("%s", branch1);
name2 = mkpathdup("%s", branch2);
}
- read_mmblob(&orig, &one->oid);
+ read_mmblob(&orig, &o->oid);
read_mmblob(&src1, &a->oid);
read_mmblob(&src2, &b->oid);
merge_status = ll_merge(result_buf, a->path, &orig, base_name,
&src1, name1, &src2, name2,
- o->repo->index, &ll_opts);
+ opt->repo->index, &ll_opts);
free(base_name);
free(name1);
struct commit *commit;
int contains_another;
- char merged_revision[42];
+ char merged_revision[GIT_MAX_HEXSZ + 2];
const char *rev_args[] = { "rev-list", "--merges", "--ancestry-path",
"--all", merged_revision, NULL };
struct rev_info revs;
strbuf_release(&sb);
}
-static int merge_submodule(struct merge_options *o,
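+/* Is there an entry here at all, i.e. a nonzero mode and a non-null oid? */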
+static int is_valid(const struct diff_filespec *dfs)
+{
+ return dfs->mode != 0 && !is_null_oid(&dfs->oid);
+}
+
+static int merge_submodule(struct merge_options *opt,
struct object_id *result, const char *path,
const struct object_id *base, const struct object_id *a,
const struct object_id *b)
struct object_array merges;
int i;
- int search = !o->call_depth;
+ int search = !opt->call_depth;
/* store a in result in case we fail */
oidcpy(result, a);
return 0;
if (add_submodule_odb(path)) {
- output(o, 1, _("Failed to merge submodule %s (not checked out)"), path);
+ output(opt, 1, _("Failed to merge submodule %s (not checked out)"), path);
return 0;
}
- if (!(commit_base = lookup_commit_reference(o->repo, base)) ||
- !(commit_a = lookup_commit_reference(o->repo, a)) ||
- !(commit_b = lookup_commit_reference(o->repo, b))) {
- output(o, 1, _("Failed to merge submodule %s (commits not present)"), path);
+ if (!(commit_base = lookup_commit_reference(opt->repo, base)) ||
+ !(commit_a = lookup_commit_reference(opt->repo, a)) ||
+ !(commit_b = lookup_commit_reference(opt->repo, b))) {
+ output(opt, 1, _("Failed to merge submodule %s (commits not present)"), path);
return 0;
}
/* check whether both changes are forward */
if (!in_merge_bases(commit_base, commit_a) ||
!in_merge_bases(commit_base, commit_b)) {
- output(o, 1, _("Failed to merge submodule %s (commits don't follow merge-base)"), path);
+ output(opt, 1, _("Failed to merge submodule %s (commits don't follow merge-base)"), path);
return 0;
}
/* Case #1: a is contained in b or vice versa */
if (in_merge_bases(commit_a, commit_b)) {
oidcpy(result, b);
- if (show(o, 3)) {
- output(o, 3, _("Fast-forwarding submodule %s to the following commit:"), path);
- output_commit_title(o, commit_b);
- } else if (show(o, 2))
- output(o, 2, _("Fast-forwarding submodule %s"), path);
+ if (show(opt, 3)) {
+ output(opt, 3, _("Fast-forwarding submodule %s to the following commit:"), path);
+ output_commit_title(opt, commit_b);
+ } else if (show(opt, 2))
+ output(opt, 2, _("Fast-forwarding submodule %s"), path);
else
; /* no output */
}
if (in_merge_bases(commit_b, commit_a)) {
oidcpy(result, a);
- if (show(o, 3)) {
- output(o, 3, _("Fast-forwarding submodule %s to the following commit:"), path);
- output_commit_title(o, commit_a);
- } else if (show(o, 2))
- output(o, 2, _("Fast-forwarding submodule %s"), path);
+ if (show(opt, 3)) {
+ output(opt, 3, _("Fast-forwarding submodule %s to the following commit:"), path);
+ output_commit_title(opt, commit_a);
+ } else if (show(opt, 2))
+ output(opt, 2, _("Fast-forwarding submodule %s"), path);
else
; /* no output */
return 0;
/* find commit which merges them */
- parent_count = find_first_merges(o->repo, &merges, path,
+ parent_count = find_first_merges(opt->repo, &merges, path,
commit_a, commit_b);
switch (parent_count) {
case 0:
- output(o, 1, _("Failed to merge submodule %s (merge following commits not found)"), path);
+ output(opt, 1, _("Failed to merge submodule %s (merge following commits not found)"), path);
break;
case 1:
- output(o, 1, _("Failed to merge submodule %s (not fast-forward)"), path);
- output(o, 2, _("Found a possible merge resolution for the submodule:\n"));
+ output(opt, 1, _("Failed to merge submodule %s (not fast-forward)"), path);
+ output(opt, 2, _("Found a possible merge resolution for the submodule:\n"));
print_commit((struct commit *) merges.objects[0].item);
- output(o, 2, _(
+ output(opt, 2, _(
"If this is correct simply add it to the index "
"for example\n"
"by using:\n\n"
break;
default:
- output(o, 1, _("Failed to merge submodule %s (multiple merges found)"), path);
+ output(opt, 1, _("Failed to merge submodule %s (multiple merges found)"), path);
for (i = 0; i < merges.nr; i++)
print_commit((struct commit *) merges.objects[i].item);
}
return 0;
}
-static int merge_mode_and_contents(struct merge_options *o,
- const struct diff_filespec *one,
+static int merge_mode_and_contents(struct merge_options *opt,
+ const struct diff_filespec *o,
const struct diff_filespec *a,
const struct diff_filespec *b,
const char *filename,
const int extra_marker_size,
struct merge_file_info *result)
{
- if (o->branch1 != branch1) {
+ if (opt->branch1 != branch1) {
/*
* It's weird getting a reverse merge with HEAD on the bottom
* side of the conflict markers and the other branch on the
* top. Fix that.
*/
- return merge_mode_and_contents(o, one, b, a,
+ return merge_mode_and_contents(opt, o, b, a,
filename,
branch2, branch1,
extra_marker_size, result);
if ((S_IFMT & a->mode) != (S_IFMT & b->mode)) {
result->clean = 0;
if (S_ISREG(a->mode)) {
- result->mode = a->mode;
- oidcpy(&result->oid, &a->oid);
+ result->blob.mode = a->mode;
+ oidcpy(&result->blob.oid, &a->oid);
} else {
- result->mode = b->mode;
- oidcpy(&result->oid, &b->oid);
+ result->blob.mode = b->mode;
+ oidcpy(&result->blob.oid, &b->oid);
}
} else {
- if (!oid_eq(&a->oid, &one->oid) && !oid_eq(&b->oid, &one->oid))
+ if (!oid_eq(&a->oid, &o->oid) && !oid_eq(&b->oid, &o->oid))
result->merge = 1;
/*
* Merge modes
*/
- if (a->mode == b->mode || a->mode == one->mode)
- result->mode = b->mode;
+ if (a->mode == b->mode || a->mode == o->mode)
+ result->blob.mode = b->mode;
else {
- result->mode = a->mode;
- if (b->mode != one->mode) {
+ result->blob.mode = a->mode;
+ if (b->mode != o->mode) {
result->clean = 0;
result->merge = 1;
}
}
- if (oid_eq(&a->oid, &b->oid) || oid_eq(&a->oid, &one->oid))
- oidcpy(&result->oid, &b->oid);
- else if (oid_eq(&b->oid, &one->oid))
- oidcpy(&result->oid, &a->oid);
+ if (oid_eq(&a->oid, &b->oid) || oid_eq(&a->oid, &o->oid))
+ oidcpy(&result->blob.oid, &b->oid);
+ else if (oid_eq(&b->oid, &o->oid))
+ oidcpy(&result->blob.oid, &a->oid);
else if (S_ISREG(a->mode)) {
mmbuffer_t result_buf;
int ret = 0, merge_status;
- merge_status = merge_3way(o, &result_buf, one, a, b,
+ merge_status = merge_3way(opt, &result_buf, o, a, b,
branch1, branch2,
extra_marker_size);
if ((merge_status < 0) || !result_buf.ptr)
- ret = err(o, _("Failed to execute internal merge"));
+ ret = err(opt, _("Failed to execute internal merge"));
if (!ret &&
write_object_file(result_buf.ptr, result_buf.size,
- blob_type, &result->oid))
- ret = err(o, _("Unable to add %s to database"),
+ blob_type, &result->blob.oid))
+ ret = err(opt, _("Unable to add %s to database"),
a->path);
free(result_buf.ptr);
return ret;
result->clean = (merge_status == 0);
} else if (S_ISGITLINK(a->mode)) {
- result->clean = merge_submodule(o, &result->oid,
- one->path,
- &one->oid,
+ result->clean = merge_submodule(opt, &result->blob.oid,
+ o->path,
+ &o->oid,
&a->oid,
&b->oid);
} else if (S_ISLNK(a->mode)) {
- switch (o->recursive_variant) {
+ switch (opt->recursive_variant) {
case MERGE_RECURSIVE_NORMAL:
- oidcpy(&result->oid, &a->oid);
+ oidcpy(&result->blob.oid, &a->oid);
if (!oid_eq(&a->oid, &b->oid))
result->clean = 0;
break;
case MERGE_RECURSIVE_OURS:
- oidcpy(&result->oid, &a->oid);
+ oidcpy(&result->blob.oid, &a->oid);
break;
case MERGE_RECURSIVE_THEIRS:
- oidcpy(&result->oid, &b->oid);
+ oidcpy(&result->blob.oid, &b->oid);
break;
}
} else
}
if (result->merge)
- output(o, 2, _("Auto-merging %s"), filename);
+ output(opt, 2, _("Auto-merging %s"), filename);
return 0;
}
-static int handle_rename_via_dir(struct merge_options *o,
- struct diff_filepair *pair,
- const char *rename_branch)
+static int handle_rename_via_dir(struct merge_options *opt,
+ struct rename_conflict_info *ci)
{
/*
* Handle file adds that need to be renamed due to directory rename
* there is no content merge to do; just move the file into the
* desired final location.
*/
- const struct diff_filespec *dest = pair->two;
+ const struct rename *ren = ci->ren1;
+ const struct diff_filespec *dest = ren->pair->two;
+ char *file_path = dest->path;
+ int mark_conflicted = (opt->detect_directory_renames == 1);
+ assert(ren->dir_rename_original_dest);
- if (!o->call_depth && would_lose_untracked(o, dest->path)) {
- char *alt_path = unique_path(o, dest->path, rename_branch);
+ if (!opt->call_depth && would_lose_untracked(opt, dest->path)) {
+ mark_conflicted = 1;
+ file_path = unique_path(opt, dest->path, ren->branch);
+ output(opt, 1, _("Error: Refusing to lose untracked file at %s; "
+ "writing to %s instead."),
+ dest->path, file_path);
+ }
- output(o, 1, _("Error: Refusing to lose untracked file at %s; "
- "writing to %s instead."),
- dest->path, alt_path);
+ if (mark_conflicted) {
/*
- * Write the file in worktree at alt_path, but not in the
- * index. Instead, write to dest->path for the index but
- * only at the higher appropriate stage.
+ * Write the file in worktree at file_path. In the index,
+ * only record the file at dest->path in the appropriate
+ * higher stage.
*/
- if (update_file(o, 0, &dest->oid, dest->mode, alt_path))
+ if (update_file(opt, 0, dest, file_path))
return -1;
- free(alt_path);
- return update_stages(o, dest->path, NULL,
- rename_branch == o->branch1 ? dest : NULL,
- rename_branch == o->branch1 ? NULL : dest);
+ if (file_path != dest->path)
+ free(file_path);
+ if (update_stages(opt, dest->path, NULL,
+ ren->branch == opt->branch1 ? dest : NULL,
+ ren->branch == opt->branch1 ? NULL : dest))
+ return -1;
+ return 0; /* not clean, but conflicted */
+ } else {
+ /* Update dest->path both in index and in worktree */
+ if (update_file(opt, 1, dest, dest->path))
+ return -1;
+ return 1; /* clean */
}
-
- /* Update dest->path both in index and in worktree */
- if (update_file(o, 1, &dest->oid, dest->mode, dest->path))
- return -1;
- return 0;
}
-static int handle_change_delete(struct merge_options *o,
+static int handle_change_delete(struct merge_options *opt,
const char *path, const char *old_path,
- const struct object_id *o_oid, int o_mode,
- const struct object_id *changed_oid,
- int changed_mode,
+ const struct diff_filespec *o,
+ const struct diff_filespec *changed,
const char *change_branch,
const char *delete_branch,
const char *change, const char *change_past)
const char *update_path = path;
int ret = 0;
- if (dir_in_way(o->repo->index, path, !o->call_depth, 0) ||
- (!o->call_depth && would_lose_untracked(o, path))) {
- update_path = alt_path = unique_path(o, path, change_branch);
+ if (dir_in_way(opt->repo->index, path, !opt->call_depth, 0) ||
+ (!opt->call_depth && would_lose_untracked(opt, path))) {
+ update_path = alt_path = unique_path(opt, path, change_branch);
}
- if (o->call_depth) {
+ if (opt->call_depth) {
/*
* We cannot arbitrarily accept either a_sha or b_sha as
* correct; since there is no true "middle point" between
* them, simply reuse the base version for virtual merge base.
*/
- ret = remove_file_from_index(o->repo->index, path);
+ ret = remove_file_from_index(opt->repo->index, path);
if (!ret)
- ret = update_file(o, 0, o_oid, o_mode, update_path);
+ ret = update_file(opt, 0, o, update_path);
} else {
/*
* Despite the four nearly duplicate messages and argument
*/
if (!alt_path) {
if (!old_path) {
- output(o, 1, _("CONFLICT (%s/delete): %s deleted in %s "
+ output(opt, 1, _("CONFLICT (%s/delete): %s deleted in %s "
"and %s in %s. Version %s of %s left in tree."),
change, path, delete_branch, change_past,
change_branch, change_branch, path);
} else {
- output(o, 1, _("CONFLICT (%s/delete): %s deleted in %s "
+ output(opt, 1, _("CONFLICT (%s/delete): %s deleted in %s "
"and %s to %s in %s. Version %s of %s left in tree."),
change, old_path, delete_branch, change_past, path,
change_branch, change_branch, path);
}
} else {
if (!old_path) {
- output(o, 1, _("CONFLICT (%s/delete): %s deleted in %s "
+ output(opt, 1, _("CONFLICT (%s/delete): %s deleted in %s "
"and %s in %s. Version %s of %s left in tree at %s."),
change, path, delete_branch, change_past,
change_branch, change_branch, path, alt_path);
} else {
- output(o, 1, _("CONFLICT (%s/delete): %s deleted in %s "
+ output(opt, 1, _("CONFLICT (%s/delete): %s deleted in %s "
"and %s to %s in %s. Version %s of %s left in tree at %s."),
change, old_path, delete_branch, change_past, path,
change_branch, change_branch, path, alt_path);
}
/*
* No need to call update_file() on path when change_branch ==
- * o->branch1 && !alt_path, since that would needlessly touch
+ * opt->branch1 && !alt_path, since that would needlessly touch
* path. We could call update_file_flags() with update_cache=0
* and update_wd=0, but that's a no-op.
*/
- if (change_branch != o->branch1 || alt_path)
- ret = update_file(o, 0, changed_oid, changed_mode, update_path);
+ if (change_branch != opt->branch1 || alt_path)
+ ret = update_file(opt, 0, changed, update_path);
}
free(alt_path);
return ret;
}
-static int handle_rename_delete(struct merge_options *o,
- struct diff_filepair *pair,
- const char *rename_branch,
- const char *delete_branch)
+static int handle_rename_delete(struct merge_options *opt,
+ struct rename_conflict_info *ci)
{
- const struct diff_filespec *orig = pair->one;
- const struct diff_filespec *dest = pair->two;
-
- if (handle_change_delete(o,
- o->call_depth ? orig->path : dest->path,
- o->call_depth ? NULL : orig->path,
- &orig->oid, orig->mode,
- &dest->oid, dest->mode,
+ const struct rename *ren = ci->ren1;
+ const struct diff_filespec *orig = ren->pair->one;
+ const struct diff_filespec *dest = ren->pair->two;
+ const char *rename_branch = ren->branch;
+ const char *delete_branch = (opt->branch1 == ren->branch ?
+ opt->branch2 : opt->branch1);
+
+ if (handle_change_delete(opt,
+ opt->call_depth ? orig->path : dest->path,
+ opt->call_depth ? NULL : orig->path,
+ orig, dest,
rename_branch, delete_branch,
_("rename"), _("renamed")))
return -1;
- if (o->call_depth)
- return remove_file_from_index(o->repo->index, dest->path);
+ if (opt->call_depth)
+ return remove_file_from_index(opt->repo->index, dest->path);
else
- return update_stages(o, dest->path, NULL,
- rename_branch == o->branch1 ? dest : NULL,
- rename_branch == o->branch1 ? NULL : dest);
+ return update_stages(opt, dest->path, NULL,
+ rename_branch == opt->branch1 ? dest : NULL,
+ rename_branch == opt->branch1 ? NULL : dest);
}
-static struct diff_filespec *filespec_from_entry(struct diff_filespec *target,
- struct stage_data *entry,
- int stage)
-{
- struct object_id *oid = &entry->stages[stage].oid;
- unsigned mode = entry->stages[stage].mode;
- if (mode == 0 || is_null_oid(oid))
- return NULL;
- oidcpy(&target->oid, oid);
- target->mode = mode;
- return target;
-}
-
-static int handle_file_collision(struct merge_options *o,
+static int handle_file_collision(struct merge_options *opt,
const char *collide_path,
const char *prev_path1,
const char *prev_path2,
const char *branch1, const char *branch2,
- const struct object_id *a_oid,
- unsigned int a_mode,
- const struct object_id *b_oid,
- unsigned int b_mode)
+ struct diff_filespec *a,
+ struct diff_filespec *b)
{
struct merge_file_info mfi;
- struct diff_filespec null, a, b;
+ struct diff_filespec null;
char *alt_path = NULL;
const char *update_path = collide_path;
/*
* It's easiest to get the correct things into stage 2 and 3, and
* to make sure that the content merge puts HEAD before the other
- * branch if we just ensure that branch1 == o->branch1. So, simply
+ * branch if we just ensure that branch1 == opt->branch1. So, simply
* flip arguments around if we don't have that.
*/
- if (branch1 != o->branch1) {
- return handle_file_collision(o, collide_path,
+ if (branch1 != opt->branch1) {
+ return handle_file_collision(opt, collide_path,
prev_path2, prev_path1,
branch2, branch1,
- b_oid, b_mode,
- a_oid, a_mode);
+ b, a);
}
/*
* In the recursive case, we just opt to undo renames
*/
- if (o->call_depth && (prev_path1 || prev_path2)) {
- /* Put first file (a_oid, a_mode) in its original spot */
+ if (opt->call_depth && (prev_path1 || prev_path2)) {
+ /* Put first file (a->oid, a->mode) in its original spot */
if (prev_path1) {
- if (update_file(o, 1, a_oid, a_mode, prev_path1))
+ if (update_file(opt, 1, a, prev_path1))
return -1;
} else {
- if (update_file(o, 1, a_oid, a_mode, collide_path))
+ if (update_file(opt, 1, a, collide_path))
return -1;
}
- /* Put second file (b_oid, b_mode) in its original spot */
+ /* Put second file (b->oid, b->mode) in its original spot */
if (prev_path2) {
- if (update_file(o, 1, b_oid, b_mode, prev_path2))
+ if (update_file(opt, 1, b, prev_path2))
return -1;
} else {
- if (update_file(o, 1, b_oid, b_mode, collide_path))
+ if (update_file(opt, 1, b, collide_path))
return -1;
}
/* Don't leave something at collision path if unrenaming both */
if (prev_path1 && prev_path2)
- remove_file(o, 1, collide_path, 0);
+ remove_file(opt, 1, collide_path, 0);
return 0;
}
/* Remove rename sources if rename/add or rename/rename(2to1) */
if (prev_path1)
- remove_file(o, 1, prev_path1,
- o->call_depth || would_lose_untracked(o, prev_path1));
+ remove_file(opt, 1, prev_path1,
+ opt->call_depth || would_lose_untracked(opt, prev_path1));
if (prev_path2)
- remove_file(o, 1, prev_path2,
- o->call_depth || would_lose_untracked(o, prev_path2));
+ remove_file(opt, 1, prev_path2,
+ opt->call_depth || would_lose_untracked(opt, prev_path2));
/*
* Remove the collision path, if it wouldn't cause dirty contents
* or an untracked file to get lost. We'll either overwrite with
* merged contents, or just write out to differently named files.
*/
- if (was_dirty(o, collide_path)) {
- output(o, 1, _("Refusing to lose dirty file at %s"),
+ if (was_dirty(opt, collide_path)) {
+ output(opt, 1, _("Refusing to lose dirty file at %s"),
collide_path);
- update_path = alt_path = unique_path(o, collide_path, "merged");
- } else if (would_lose_untracked(o, collide_path)) {
+ update_path = alt_path = unique_path(opt, collide_path, "merged");
+ } else if (would_lose_untracked(opt, collide_path)) {
/*
* Only way we get here is if both renames were from
* a directory rename AND user had an untracked file
* at the location where both files end up after the
* two directory renames. See testcase 10d of t6043.
*/
- output(o, 1, _("Refusing to lose untracked file at "
+ output(opt, 1, _("Refusing to lose untracked file at "
"%s, even though it's in the way."),
collide_path);
- update_path = alt_path = unique_path(o, collide_path, "merged");
+ update_path = alt_path = unique_path(opt, collide_path, "merged");
} else {
/*
* FIXME: It's possible that the two files are identical
* merge-recursive interoperate anyway, so punting for
* now...
*/
- remove_file(o, 0, collide_path, 0);
+ remove_file(opt, 0, collide_path, 0);
}
/* Store things in diff_filespecs for functions that need it */
- memset(&a, 0, sizeof(struct diff_filespec));
- memset(&b, 0, sizeof(struct diff_filespec));
- null.path = a.path = b.path = (char *)collide_path;
+ null.path = (char *)collide_path;
oidcpy(&null.oid, &null_oid);
null.mode = 0;
- oidcpy(&a.oid, a_oid);
- a.mode = a_mode;
- a.oid_valid = 1;
- oidcpy(&b.oid, b_oid);
- b.mode = b_mode;
- b.oid_valid = 1;
-
- if (merge_mode_and_contents(o, &null, &a, &b, collide_path,
- branch1, branch2, o->call_depth * 2, &mfi))
+
+ if (merge_mode_and_contents(opt, &null, a, b, collide_path,
+ branch1, branch2, opt->call_depth * 2, &mfi))
return -1;
mfi.clean &= !alt_path;
- if (update_file(o, mfi.clean, &mfi.oid, mfi.mode, update_path))
+ if (update_file(opt, mfi.clean, &mfi.blob, update_path))
return -1;
- if (!mfi.clean && !o->call_depth &&
- update_stages(o, collide_path, NULL, &a, &b))
+ if (!mfi.clean && !opt->call_depth &&
+ update_stages(opt, collide_path, NULL, a, b))
return -1;
free(alt_path);
/*
return mfi.clean;
}
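
handle_file_collision() starts by normalizing its arguments: when branch1 is
not opt->branch1 it recurses once with everything swapped, so the rest of the
function can assume that stage 2 belongs to HEAD.  A standalone sketch of that
flip-and-recurse pattern; the names below are hypothetical and not part of the
Git API:

    #include <stdio.h>
    #include <string.h>

    /*
     * If the arguments arrive in the "wrong" order (branch1 is not HEAD's
     * branch), recurse once with the sides swapped so that the remainder
     * of the function only has to handle one layout.
     */
    static int handle_collision(const char *head_branch,
                                const char *branch1, const char *branch2,
                                int content1, int content2)
    {
        if (strcmp(branch1, head_branch))
            return handle_collision(head_branch,
                                    branch2, branch1,
                                    content2, content1);
        printf("stage 2 gets %d (%s), stage 3 gets %d (%s)\n",
               content1, branch1, content2, branch2);
        return 0;
    }

    int main(void)
    {
        /* Arguments intentionally arrive with HEAD's side second. */
        return handle_collision("HEAD", "topic", "HEAD", 7, 42);
    }
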
-static int handle_rename_add(struct merge_options *o,
+static int handle_rename_add(struct merge_options *opt,
struct rename_conflict_info *ci)
{
/* a was renamed to c, and a separate c was added. */
- struct diff_filespec *a = ci->pair1->one;
- struct diff_filespec *c = ci->pair1->two;
+ struct diff_filespec *a = ci->ren1->pair->one;
+ struct diff_filespec *c = ci->ren1->pair->two;
char *path = c->path;
char *prev_path_desc;
struct merge_file_info mfi;
- int other_stage = (ci->branch1 == o->branch1 ? 3 : 2);
+ const char *rename_branch = ci->ren1->branch;
+ const char *add_branch = (opt->branch1 == rename_branch ?
+ opt->branch2 : opt->branch1);
+ int other_stage = (ci->ren1->branch == opt->branch1 ? 3 : 2);
- output(o, 1, _("CONFLICT (rename/add): "
+ output(opt, 1, _("CONFLICT (rename/add): "
"Rename %s->%s in %s. Added %s in %s"),
- a->path, c->path, ci->branch1,
- c->path, ci->branch2);
+ a->path, c->path, rename_branch,
+ c->path, add_branch);
prev_path_desc = xstrfmt("version of %s from %s", path, a->path);
- if (merge_mode_and_contents(o, a, c, &ci->ren1_other, prev_path_desc,
- o->branch1, o->branch2,
- 1 + o->call_depth * 2, &mfi))
+ if (merge_mode_and_contents(opt, a, c,
+ &ci->ren1->src_entry->stages[other_stage],
+ prev_path_desc,
+ opt->branch1, opt->branch2,
+ 1 + opt->call_depth * 2, &mfi))
return -1;
free(prev_path_desc);
- return handle_file_collision(o,
+ ci->ren1->dst_entry->stages[other_stage].path = mfi.blob.path = c->path;
+ return handle_file_collision(opt,
c->path, a->path, NULL,
- ci->branch1, ci->branch2,
- &mfi.oid, mfi.mode,
- &ci->dst_entry1->stages[other_stage].oid,
- ci->dst_entry1->stages[other_stage].mode);
+ rename_branch, add_branch,
+ &mfi.blob,
+ &ci->ren1->dst_entry->stages[other_stage]);
}
-static char *find_path_for_conflict(struct merge_options *o,
+static char *find_path_for_conflict(struct merge_options *opt,
const char *path,
const char *branch1,
const char *branch2)
{
char *new_path = NULL;
- if (dir_in_way(o->repo->index, path, !o->call_depth, 0)) {
- new_path = unique_path(o, path, branch1);
- output(o, 1, _("%s is a directory in %s adding "
+ if (dir_in_way(opt->repo->index, path, !opt->call_depth, 0)) {
+ new_path = unique_path(opt, path, branch1);
+ output(opt, 1, _("%s is a directory in %s adding "
"as %s instead"),
path, branch2, new_path);
- } else if (would_lose_untracked(o, path)) {
- new_path = unique_path(o, path, branch1);
- output(o, 1, _("Refusing to lose untracked file"
+ } else if (would_lose_untracked(opt, path)) {
+ new_path = unique_path(opt, path, branch1);
+ output(opt, 1, _("Refusing to lose untracked file"
" at %s; adding as %s instead"),
path, new_path);
}
return new_path;
}
-static int handle_rename_rename_1to2(struct merge_options *o,
+static int handle_rename_rename_1to2(struct merge_options *opt,
struct rename_conflict_info *ci)
{
/* One file was renamed in both branches, but to different names. */
struct merge_file_info mfi;
- struct diff_filespec other;
struct diff_filespec *add;
- struct diff_filespec *one = ci->pair1->one;
- struct diff_filespec *a = ci->pair1->two;
- struct diff_filespec *b = ci->pair2->two;
+ struct diff_filespec *o = ci->ren1->pair->one;
+ struct diff_filespec *a = ci->ren1->pair->two;
+ struct diff_filespec *b = ci->ren2->pair->two;
char *path_desc;
- output(o, 1, _("CONFLICT (rename/rename): "
+ output(opt, 1, _("CONFLICT (rename/rename): "
"Rename \"%s\"->\"%s\" in branch \"%s\" "
"rename \"%s\"->\"%s\" in \"%s\"%s"),
- one->path, a->path, ci->branch1,
- one->path, b->path, ci->branch2,
- o->call_depth ? _(" (left unresolved)") : "");
+ o->path, a->path, ci->ren1->branch,
+ o->path, b->path, ci->ren2->branch,
+ opt->call_depth ? _(" (left unresolved)") : "");
path_desc = xstrfmt("%s and %s, both renamed from %s",
- a->path, b->path, one->path);
- if (merge_mode_and_contents(o, one, a, b, path_desc,
- ci->branch1, ci->branch2,
- o->call_depth * 2, &mfi))
+ a->path, b->path, o->path);
+ if (merge_mode_and_contents(opt, o, a, b, path_desc,
+ ci->ren1->branch, ci->ren2->branch,
+ opt->call_depth * 2, &mfi))
return -1;
free(path_desc);
- if (o->call_depth) {
+ if (opt->call_depth) {
/*
* FIXME: For rename/add-source conflicts (if we could detect
* such), this is wrong. We should instead find a unique
* pathname and then either rename the add-source file to that
* unique path, or use that unique path instead of src here.
*/
- if (update_file(o, 0, &mfi.oid, mfi.mode, one->path))
+ if (update_file(opt, 0, &mfi.blob, o->path))
return -1;
/*
* such cases, we should keep the added file around,
* resolving the conflict at that path in its favor.
*/
- add = filespec_from_entry(&other, ci->dst_entry1, 2 ^ 1);
- if (add) {
- if (update_file(o, 0, &add->oid, add->mode, a->path))
+ add = &ci->ren1->dst_entry->stages[2 ^ 1];
+ if (is_valid(add)) {
+ if (update_file(opt, 0, add, a->path))
return -1;
}
else
- remove_file_from_index(o->repo->index, a->path);
- add = filespec_from_entry(&other, ci->dst_entry2, 3 ^ 1);
- if (add) {
- if (update_file(o, 0, &add->oid, add->mode, b->path))
+ remove_file_from_index(opt->repo->index, a->path);
+ add = &ci->ren2->dst_entry->stages[3 ^ 1];
+ if (is_valid(add)) {
+ if (update_file(opt, 0, add, b->path))
return -1;
}
else
- remove_file_from_index(o->repo->index, b->path);
+ remove_file_from_index(opt->repo->index, b->path);
} else {
/*
* For each destination path, we need to see if there is a
* rename/add collision. If not, we can write the file out
* to the specified location.
*/
- add = filespec_from_entry(&other, ci->dst_entry1, 2 ^ 1);
- if (add) {
- if (handle_file_collision(o, a->path,
+ add = &ci->ren1->dst_entry->stages[2 ^ 1];
+ if (is_valid(add)) {
+ add->path = mfi.blob.path = a->path;
+ if (handle_file_collision(opt, a->path,
NULL, NULL,
- ci->branch1, ci->branch2,
- &mfi.oid, mfi.mode,
- &add->oid, add->mode) < 0)
+ ci->ren1->branch,
+ ci->ren2->branch,
+ &mfi.blob, add) < 0)
return -1;
} else {
- char *new_path = find_path_for_conflict(o, a->path,
- ci->branch1,
- ci->branch2);
- if (update_file(o, 0, &mfi.oid, mfi.mode, new_path ? new_path : a->path))
+ char *new_path = find_path_for_conflict(opt, a->path,
+ ci->ren1->branch,
+ ci->ren2->branch);
+ if (update_file(opt, 0, &mfi.blob,
+ new_path ? new_path : a->path))
return -1;
free(new_path);
- if (update_stages(o, a->path, NULL, a, NULL))
+ if (update_stages(opt, a->path, NULL, a, NULL))
return -1;
}
- add = filespec_from_entry(&other, ci->dst_entry2, 3 ^ 1);
- if (add) {
- if (handle_file_collision(o, b->path,
+ add = &ci->ren2->dst_entry->stages[3 ^ 1];
+ if (is_valid(add)) {
+ add->path = mfi.blob.path = b->path;
+ if (handle_file_collision(opt, b->path,
NULL, NULL,
- ci->branch1, ci->branch2,
- &add->oid, add->mode,
- &mfi.oid, mfi.mode) < 0)
+ ci->ren1->branch,
+ ci->ren2->branch,
+ add, &mfi.blob) < 0)
return -1;
} else {
- char *new_path = find_path_for_conflict(o, b->path,
- ci->branch2,
- ci->branch1);
- if (update_file(o, 0, &mfi.oid, mfi.mode, new_path ? new_path : b->path))
+ char *new_path = find_path_for_conflict(opt, b->path,
+ ci->ren2->branch,
+ ci->ren1->branch);
+ if (update_file(opt, 0, &mfi.blob,
+ new_path ? new_path : b->path))
return -1;
free(new_path);
- if (update_stages(o, b->path, NULL, NULL, b))
+ if (update_stages(opt, b->path, NULL, NULL, b))
return -1;
}
}
return 0;
}
-static int handle_rename_rename_2to1(struct merge_options *o,
+static int handle_rename_rename_2to1(struct merge_options *opt,
struct rename_conflict_info *ci)
{
/* Two files, a & b, were renamed to the same thing, c. */
- struct diff_filespec *a = ci->pair1->one;
- struct diff_filespec *b = ci->pair2->one;
- struct diff_filespec *c1 = ci->pair1->two;
- struct diff_filespec *c2 = ci->pair2->two;
+ struct diff_filespec *a = ci->ren1->pair->one;
+ struct diff_filespec *b = ci->ren2->pair->one;
+ struct diff_filespec *c1 = ci->ren1->pair->two;
+ struct diff_filespec *c2 = ci->ren2->pair->two;
char *path = c1->path; /* == c2->path */
char *path_side_1_desc;
char *path_side_2_desc;
struct merge_file_info mfi_c1;
struct merge_file_info mfi_c2;
+ int ostage1, ostage2;
- output(o, 1, _("CONFLICT (rename/rename): "
+ output(opt, 1, _("CONFLICT (rename/rename): "
"Rename %s->%s in %s. "
"Rename %s->%s in %s"),
- a->path, c1->path, ci->branch1,
- b->path, c2->path, ci->branch2);
+ a->path, c1->path, ci->ren1->branch,
+ b->path, c2->path, ci->ren2->branch);
path_side_1_desc = xstrfmt("version of %s from %s", path, a->path);
path_side_2_desc = xstrfmt("version of %s from %s", path, b->path);
- if (merge_mode_and_contents(o, a, c1, &ci->ren1_other, path_side_1_desc,
- o->branch1, o->branch2,
- 1 + o->call_depth * 2, &mfi_c1) ||
- merge_mode_and_contents(o, b, &ci->ren2_other, c2, path_side_2_desc,
- o->branch1, o->branch2,
- 1 + o->call_depth * 2, &mfi_c2))
+ ostage1 = ci->ren1->branch == opt->branch1 ? 3 : 2;
+ ostage2 = ostage1 ^ 1;
+ ci->ren1->src_entry->stages[ostage1].path = a->path;
+ ci->ren2->src_entry->stages[ostage2].path = b->path;
+ if (merge_mode_and_contents(opt, a, c1,
+ &ci->ren1->src_entry->stages[ostage1],
+ path_side_1_desc,
+ opt->branch1, opt->branch2,
+ 1 + opt->call_depth * 2, &mfi_c1) ||
+ merge_mode_and_contents(opt, b,
+ &ci->ren2->src_entry->stages[ostage2],
+ c2, path_side_2_desc,
+ opt->branch1, opt->branch2,
+ 1 + opt->call_depth * 2, &mfi_c2))
return -1;
free(path_side_1_desc);
free(path_side_2_desc);
+ mfi_c1.blob.path = path;
+ mfi_c2.blob.path = path;
- return handle_file_collision(o, path, a->path, b->path,
- ci->branch1, ci->branch2,
- &mfi_c1.oid, mfi_c1.mode,
- &mfi_c2.oid, mfi_c2.mode);
+ return handle_file_collision(opt, path, a->path, b->path,
+ ci->ren1->branch, ci->ren2->branch,
+ &mfi_c1.blob, &mfi_c2.blob);
}
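
Both rename/rename handlers pick the "other" side's stage with an XOR: stages
2 and 3 differ only in their lowest bit, so `stage ^ 1` maps 2 to 3 and 3 to 2
(see the `2 ^ 1`, `3 ^ 1` and `ostage1 ^ 1` expressions above).  A tiny
standalone check of that arithmetic:

    #include <assert.h>
    #include <stdio.h>

    /*
     * entry->stages[] is indexed with 2 for "ours" and 3 for "theirs";
     * `stage ^ 1` flips between the two, which is what the code relies on.
     */
    int main(void)
    {
        assert((2 ^ 1) == 3);
        assert((3 ^ 1) == 2);

        for (int stage = 2; stage <= 3; stage++)
            printf("stage %d -> other side is stage %d\n", stage, stage ^ 1);
        return 0;
    }
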
/*
* Get the diff_filepairs changed between o_tree and tree.
*/
-static struct diff_queue_struct *get_diffpairs(struct merge_options *o,
+static struct diff_queue_struct *get_diffpairs(struct merge_options *opt,
struct tree *o_tree,
struct tree *tree)
{
struct diff_queue_struct *ret;
struct diff_options opts;
- repo_diff_setup(o->repo, &opts);
+ repo_diff_setup(opt->repo, &opts);
opts.flags.recursive = 1;
opts.flags.rename_empty = 0;
- opts.detect_rename = merge_detect_rename(o);
+ opts.detect_rename = merge_detect_rename(opt);
/*
* We do not have logic to handle the detection of copies. In
* fact, it may not even make sense to add such logic: would we
*/
if (opts.detect_rename > DIFF_DETECT_RENAME)
opts.detect_rename = DIFF_DETECT_RENAME;
- opts.rename_limit = o->merge_rename_limit >= 0 ? o->merge_rename_limit :
- o->diff_rename_limit >= 0 ? o->diff_rename_limit :
+ opts.rename_limit = opt->merge_rename_limit >= 0 ? opt->merge_rename_limit :
+ opt->diff_rename_limit >= 0 ? opt->diff_rename_limit :
1000;
- opts.rename_score = o->rename_score;
- opts.show_rename_progress = o->show_rename_progress;
+ opts.rename_score = opt->rename_score;
+ opts.show_rename_progress = opt->show_rename_progress;
opts.output_format = DIFF_FORMAT_NO_OUTPUT;
diff_setup_done(&opts);
diff_tree_oid(&o_tree->object.oid, &tree->object.oid, "", &opts);
diffcore_std(&opts);
- if (opts.needed_rename_limit > o->needed_rename_limit)
- o->needed_rename_limit = opts.needed_rename_limit;
+ if (opts.needed_rename_limit > opt->needed_rename_limit)
+ opt->needed_rename_limit = opts.needed_rename_limit;
ret = xmalloc(sizeof(*ret));
*ret = diff_queued_diff;
static int tree_has_path(struct tree *tree, const char *path)
{
struct object_id hashy;
- unsigned int mode_o;
+ unsigned short mode_o;
return !get_tree_entry(&tree->object.oid, path,
&hashy, &mode_o);
* level conflicts for the renamed location. If there is a rename and
* there are no conflicts, return the new name. Otherwise, return NULL.
*/
-static char *handle_path_level_conflicts(struct merge_options *o,
+static char *handle_path_level_conflicts(struct merge_options *opt,
const char *path,
struct dir_rename_entry *entry,
struct hashmap *collisions,
/* This should only happen when entry->non_unique_new_dir set */
if (!entry->non_unique_new_dir)
BUG("entry->non_unqiue_dir not set and !new_path");
- output(o, 1, _("CONFLICT (directory rename split): "
+ output(opt, 1, _("CONFLICT (directory rename split): "
"Unclear where to place %s because directory "
"%s was renamed to multiple other directories, "
"with no destination getting a majority of the "
collision_ent->reported_already = 1;
strbuf_add_separated_string_list(&collision_paths, ", ",
&collision_ent->source_files);
- output(o, 1, _("CONFLICT (implicit dir rename): Existing "
+ output(opt, 1, _("CONFLICT (implicit dir rename): Existing "
"file/dir at %s in the way of implicit "
"directory rename(s) putting the following "
"path(s) there: %s."),
collision_ent->reported_already = 1;
strbuf_add_separated_string_list(&collision_paths, ", ",
&collision_ent->source_files);
- output(o, 1, _("CONFLICT (implicit dir rename): Cannot map "
+ output(opt, 1, _("CONFLICT (implicit dir rename): Cannot map "
"more than one path to %s; implicit directory "
"renames tried to put these paths there: %s"),
new_path, collision_paths.buf);
* causes conflicts for files within those merged directories, then
* that should be detected at the individual path level.
*/
-static void handle_directory_level_conflicts(struct merge_options *o,
+static void handle_directory_level_conflicts(struct merge_options *opt,
struct hashmap *dir_re_head,
struct tree *head,
struct hashmap *dir_re_merge,
* know that head_ent->new_dir and merge_ent->new_dir
* are different strings.
*/
- output(o, 1, _("CONFLICT (rename/rename): "
+ output(opt, 1, _("CONFLICT (rename/rename): "
"Rename directory %s->%s in %s. "
"Rename directory %s->%s in %s"),
- head_ent->dir, head_ent->new_dir.buf, o->branch1,
- head_ent->dir, merge_ent->new_dir.buf, o->branch2);
+ head_ent->dir, head_ent->new_dir.buf, opt->branch1,
+ head_ent->dir, merge_ent->new_dir.buf, opt->branch2);
string_list_append(&remove_from_head,
head_ent->dir)->util = head_ent;
strbuf_release(&head_ent->new_dir);
}
}
-static char *check_for_directory_rename(struct merge_options *o,
+static char *check_for_directory_rename(struct merge_options *opt,
const char *path,
struct tree *tree,
struct hashmap *dir_renames,
*/
oentry = dir_rename_find_entry(dir_rename_exclusions, entry->new_dir.buf);
if (oentry) {
- output(o, 1, _("WARNING: Avoiding applying %s -> %s rename "
+ output(opt, 1, _("WARNING: Avoiding applying %s -> %s rename "
"to %s, because %s itself was renamed."),
entry->dir, entry->new_dir.buf, path, entry->new_dir.buf);
} else {
- new_path = handle_path_level_conflicts(o, path, entry,
+ new_path = handle_path_level_conflicts(opt, path, entry,
collisions, tree);
*clean_merge &= (new_path != NULL);
}
return new_path;
}
-static void apply_directory_rename_modifications(struct merge_options *o,
+static void apply_directory_rename_modifications(struct merge_options *opt,
struct diff_filepair *pair,
char *new_path,
struct rename *re,
* saying the file would have been overwritten), but it might
* be dirty, though.
*/
- update_wd = !was_dirty(o, pair->two->path);
+ update_wd = !was_dirty(opt, pair->two->path);
if (!update_wd)
- output(o, 1, _("Refusing to lose dirty file at %s"),
+ output(opt, 1, _("Refusing to lose dirty file at %s"),
pair->two->path);
- remove_file(o, 1, pair->two->path, !update_wd);
+ remove_file(opt, 1, pair->two->path, !update_wd);
/* Find or create a new re->dst_entry */
item = string_list_lookup(entries, new_path);
&re->dst_entry->stages[stage].oid,
&re->dst_entry->stages[stage].mode);
- /* Update pair status */
- if (pair->status == 'A') {
- /*
- * Recording rename information for this add makes it look
- * like a rename/delete conflict. Make sure we can
- * correctly handle this as an add that was moved to a new
- * directory instead of reporting a rename/delete conflict.
- */
- re->add_turned_into_rename = 1;
- }
+ /*
+ * Record the original change status (or 'type' of change). If it
+ * was originally an add ('A'), this lets us differentiate later
+ * between a RENAME_DELETE conflict and RENAME_VIA_DIR (they
+ * otherwise look the same). If it was originally a rename ('R'),
+ * this lets us remember and report accurately about the transitive
+ * renaming that occurred via the directory rename detection. Also,
+ * record the original destination name.
+ */
+ re->dir_rename_original_type = pair->status;
+ re->dir_rename_original_dest = pair->two->path;
+
/*
* We don't actually look at pair->status again, but it seems
* pedagogically correct to adjust it.
* to be able to associate the correct cache entries with the rename
* information; tree is always equal to either a_tree or b_tree.
*/
-static struct string_list *get_renames(struct merge_options *o,
+static struct string_list *get_renames(struct merge_options *opt,
+ const char *branch,
struct diff_queue_struct *pairs,
struct hashmap *dir_renames,
struct hashmap *dir_rename_exclusions,
diff_free_filepair(pair);
continue;
}
- new_path = check_for_directory_rename(o, pair->two->path, tree,
+ new_path = check_for_directory_rename(opt, pair->two->path, tree,
dir_renames,
dir_rename_exclusions,
&collisions,
re = xmalloc(sizeof(*re));
re->processed = 0;
- re->add_turned_into_rename = 0;
re->pair = pair;
+ re->branch = branch;
+ re->dir_rename_original_type = '\0';
+ re->dir_rename_original_dest = NULL;
item = string_list_lookup(entries, re->pair->one->path);
if (!item)
re->src_entry = insert_stage_data(re->pair->one->path,
item = string_list_insert(renames, pair->one->path);
item->util = re;
if (new_path)
- apply_directory_rename_modifications(o, pair, new_path,
+ apply_directory_rename_modifications(opt, pair, new_path,
re, tree, o_tree,
a_tree, b_tree,
entries);
return renames;
}
-static int process_renames(struct merge_options *o,
+static int process_renames(struct merge_options *opt,
struct string_list *a_renames,
struct string_list *b_renames)
{
for (i = 0, j = 0; i < a_renames->nr || j < b_renames->nr;) {
struct string_list *renames1, *renames2Dst;
struct rename *ren1 = NULL, *ren2 = NULL;
- const char *branch1, *branch2;
const char *ren1_src, *ren1_dst;
struct string_list_item *lookup;
if (ren1) {
renames1 = a_renames;
renames2Dst = &b_by_dst;
- branch1 = o->branch1;
- branch2 = o->branch2;
} else {
renames1 = b_renames;
renames2Dst = &a_by_dst;
- branch1 = o->branch2;
- branch2 = o->branch1;
SWAP(ren2, ren1);
}
* the base stage (think of rename +
* add-source cases).
*/
- remove_file(o, 1, ren1_src, 1);
+ remove_file(opt, 1, ren1_src, 1);
update_entry(ren1->dst_entry,
ren1->pair->one,
ren1->pair->two,
ren2->pair->two);
}
- setup_rename_conflict_info(rename_type,
- ren1->pair,
- ren2->pair,
- branch1,
- branch2,
- ren1->dst_entry,
- ren2->dst_entry,
- o,
- NULL,
- NULL);
+ setup_rename_conflict_info(rename_type, opt, ren1, ren2);
} else if ((lookup = string_list_lookup(renames2Dst, ren1_dst))) {
/* Two different files renamed to the same thing */
char *ren2_dst;
ren2->src_entry->processed = 1;
setup_rename_conflict_info(RENAME_TWO_FILES_TO_ONE,
- ren1->pair,
- ren2->pair,
- branch1,
- branch2,
- ren1->dst_entry,
- ren2->dst_entry,
- o,
- ren1->src_entry,
- ren2->src_entry);
-
+ opt, ren1, ren2);
} else {
/* Renamed in 1, maybe changed in 2 */
/* we only use sha1 and mode of these */
* stage and in other_stage (think of rename +
* add-source case).
*/
- remove_file(o, 1, ren1_src,
- renamed_stage == 2 || !was_tracked(o, ren1_src));
+ remove_file(opt, 1, ren1_src,
+ renamed_stage == 2 || !was_tracked(opt, ren1_src));
oidcpy(&src_other.oid,
&ren1->src_entry->stages[other_stage].oid);
try_merge = 0;
if (oid_eq(&src_other.oid, &null_oid) &&
- ren1->add_turned_into_rename) {
+ ren1->dir_rename_original_type == 'A') {
setup_rename_conflict_info(RENAME_VIA_DIR,
- ren1->pair,
- NULL,
- branch1,
- branch2,
- ren1->dst_entry,
- NULL,
- o,
- NULL,
- NULL);
+ opt, ren1, NULL);
} else if (oid_eq(&src_other.oid, &null_oid)) {
setup_rename_conflict_info(RENAME_DELETE,
- ren1->pair,
- NULL,
- branch1,
- branch2,
- ren1->dst_entry,
- NULL,
- o,
- NULL,
- NULL);
+ opt, ren1, NULL);
} else if ((dst_other.mode == ren1->pair->two->mode) &&
oid_eq(&dst_other.oid, &ren1->pair->two->oid)) {
/*
* update_file_flags() instead of
* update_file().
*/
- if (update_file_flags(o,
- &ren1->pair->two->oid,
- ren1->pair->two->mode,
+ if (update_file_flags(opt,
+ ren1->pair->two,
ren1_dst,
1, /* update_cache */
0 /* update_wd */))
* file, then the merge will be clean.
*/
setup_rename_conflict_info(RENAME_ADD,
- ren1->pair,
- NULL,
- branch1,
- branch2,
- ren1->dst_entry,
- NULL,
- o,
- ren1->src_entry,
- NULL);
+ opt, ren1, NULL);
} else
try_merge = 1;
if (clean_merge < 0)
goto cleanup_and_return;
if (try_merge) {
- struct diff_filespec *one, *a, *b;
+ struct diff_filespec *o, *a, *b;
src_other.path = (char *)ren1_src;
- one = ren1->pair->one;
+ o = ren1->pair->one;
if (a_renames == renames1) {
a = ren1->pair->two;
b = &src_other;
b = ren1->pair->two;
a = &src_other;
}
- update_entry(ren1->dst_entry, one, a, b);
+ update_entry(ren1->dst_entry, o, a, b);
setup_rename_conflict_info(RENAME_NORMAL,
- ren1->pair,
- NULL,
- branch1,
- NULL,
- ren1->dst_entry,
- NULL,
- o,
- NULL,
- NULL);
+ opt, ren1, NULL);
}
}
}
free(pairs);
}
-static int detect_and_process_renames(struct merge_options *o,
+static int detect_and_process_renames(struct merge_options *opt,
struct tree *common,
struct tree *head,
struct tree *merge,
ri->head_renames = NULL;
ri->merge_renames = NULL;
- if (!merge_detect_rename(o))
+ if (!merge_detect_rename(opt))
return 1;
- head_pairs = get_diffpairs(o, common, head);
- merge_pairs = get_diffpairs(o, common, merge);
+ head_pairs = get_diffpairs(opt, common, head);
+ merge_pairs = get_diffpairs(opt, common, merge);
- if (o->detect_directory_renames) {
+ if (opt->detect_directory_renames) {
dir_re_head = get_directory_renames(head_pairs);
dir_re_merge = get_directory_renames(merge_pairs);
- handle_directory_level_conflicts(o,
+ handle_directory_level_conflicts(opt,
dir_re_head, head,
dir_re_merge, merge);
} else {
dir_rename_init(dir_re_merge);
}
- ri->head_renames = get_renames(o, head_pairs,
+ ri->head_renames = get_renames(opt, opt->branch1, head_pairs,
dir_re_merge, dir_re_head, head,
common, head, merge, entries,
&clean);
if (clean < 0)
goto cleanup;
- ri->merge_renames = get_renames(o, merge_pairs,
+ ri->merge_renames = get_renames(opt, opt->branch2, merge_pairs,
dir_re_head, dir_re_merge, merge,
common, head, merge, entries,
&clean);
if (clean < 0)
goto cleanup;
- clean &= process_renames(o, ri->head_renames, ri->merge_renames);
+ clean &= process_renames(opt, ri->head_renames, ri->merge_renames);
cleanup:
/*
final_cleanup_rename(re_info->merge_renames);
}
-static struct object_id *stage_oid(const struct object_id *oid, unsigned mode)
-{
- return (is_null_oid(oid) || mode == 0) ? NULL: (struct object_id *)oid;
-}
-
-static int read_oid_strbuf(struct merge_options *o,
+static int read_oid_strbuf(struct merge_options *opt,
const struct object_id *oid,
struct strbuf *dst)
{
unsigned long size;
buf = read_object_file(oid, &type, &size);
if (!buf)
- return err(o, _("cannot read object %s"), oid_to_hex(oid));
+ return err(opt, _("cannot read object %s"), oid_to_hex(oid));
if (type != OBJ_BLOB) {
free(buf);
- return err(o, _("object %s is not a blob"), oid_to_hex(oid));
+ return err(opt, _("object %s is not a blob"), oid_to_hex(oid));
}
strbuf_attach(dst, buf, size, size + 1);
return 0;
}
static int blob_unchanged(struct merge_options *opt,
- const struct object_id *o_oid,
- unsigned o_mode,
- const struct object_id *a_oid,
- unsigned a_mode,
+ const struct diff_filespec *o,
+ const struct diff_filespec *a,
int renormalize, const char *path)
{
- struct strbuf o = STRBUF_INIT;
- struct strbuf a = STRBUF_INIT;
+ struct strbuf obuf = STRBUF_INIT;
+ struct strbuf abuf = STRBUF_INIT;
int ret = 0; /* assume changed for safety */
+ const struct index_state *idx = opt->repo->index;
- if (a_mode != o_mode)
+ if (a->mode != o->mode)
return 0;
- if (oid_eq(o_oid, a_oid))
+ if (oid_eq(&o->oid, &a->oid))
return 1;
if (!renormalize)
return 0;
- assert(o_oid && a_oid);
- if (read_oid_strbuf(opt, o_oid, &o) || read_oid_strbuf(opt, a_oid, &a))
+ if (read_oid_strbuf(opt, &o->oid, &obuf) ||
+ read_oid_strbuf(opt, &a->oid, &abuf))
goto error_return;
/*
* Note: binary | is used so that both renormalizations are
* performed. Comparison can be skipped if both files are
* unchanged since their sha1s have already been compared.
*/
- if (renormalize_buffer(opt->repo->index, path, o.buf, o.len, &o) |
- renormalize_buffer(opt->repo->index, path, a.buf, a.len, &a))
- ret = (o.len == a.len && !memcmp(o.buf, a.buf, o.len));
+ if (renormalize_buffer(idx, path, obuf.buf, obuf.len, &obuf) |
+ renormalize_buffer(idx, path, abuf.buf, abuf.len, &abuf))
+ ret = (obuf.len == abuf.len && !memcmp(obuf.buf, abuf.buf, obuf.len));
error_return:
- strbuf_release(&o);
- strbuf_release(&a);
+ strbuf_release(&obuf);
+ strbuf_release(&abuf);
return ret;
}
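
blob_unchanged() deliberately combines the two renormalize_buffer() calls with
a binary `|` so that both buffers get renormalized even when the first call
already reports a change; `||` would short-circuit the second call and leave
one buffer untouched before the memcmp().  A standalone illustration with a
hypothetical counting helper:

    #include <stdio.h>

    static int calls;

    /* Hypothetical stand-in that just records how often it was invoked. */
    static int pretend_renormalize(int changed)
    {
        calls++;
        return changed;
    }

    int main(void)
    {
        int changed;

        calls = 0;
        changed = pretend_renormalize(1) | pretend_renormalize(1);
        printf("with '|':  changed=%d after %d calls\n", changed, calls); /* 2 calls */

        calls = 0;
        changed = pretend_renormalize(1) || pretend_renormalize(1);
        printf("with '||': changed=%d after %d calls\n", changed, calls); /* 1 call */
        return 0;
    }
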
-static int handle_modify_delete(struct merge_options *o,
+static int handle_modify_delete(struct merge_options *opt,
const char *path,
- struct object_id *o_oid, int o_mode,
- struct object_id *a_oid, int a_mode,
- struct object_id *b_oid, int b_mode)
+ const struct diff_filespec *o,
+ const struct diff_filespec *a,
+ const struct diff_filespec *b)
{
const char *modify_branch, *delete_branch;
- struct object_id *changed_oid;
- int changed_mode;
-
- if (a_oid) {
- modify_branch = o->branch1;
- delete_branch = o->branch2;
- changed_oid = a_oid;
- changed_mode = a_mode;
+ const struct diff_filespec *changed;
+
+ if (is_valid(a)) {
+ modify_branch = opt->branch1;
+ delete_branch = opt->branch2;
+ changed = a;
} else {
- modify_branch = o->branch2;
- delete_branch = o->branch1;
- changed_oid = b_oid;
- changed_mode = b_mode;
+ modify_branch = opt->branch2;
+ delete_branch = opt->branch1;
+ changed = b;
}
- return handle_change_delete(o,
+ return handle_change_delete(opt,
path, NULL,
- o_oid, o_mode,
- changed_oid, changed_mode,
+ o, changed,
modify_branch, delete_branch,
_("modify"), _("modified"));
}
-static int handle_content_merge(struct merge_options *o,
+static int handle_content_merge(struct merge_file_info *mfi,
+ struct merge_options *opt,
const char *path,
int is_dirty,
- struct object_id *o_oid, int o_mode,
- struct object_id *a_oid, int a_mode,
- struct object_id *b_oid, int b_mode,
- struct rename_conflict_info *rename_conflict_info)
+ const struct diff_filespec *o,
+ const struct diff_filespec *a,
+ const struct diff_filespec *b,
+ struct rename_conflict_info *ci)
{
const char *reason = _("content");
- const char *path1 = NULL, *path2 = NULL;
- struct merge_file_info mfi;
- struct diff_filespec one, a, b;
unsigned df_conflict_remains = 0;
- if (!o_oid) {
+ if (!is_valid(o))
reason = _("add/add");
- o_oid = (struct object_id *)&null_oid;
- }
- one.path = a.path = b.path = (char *)path;
- oidcpy(&one.oid, o_oid);
- one.mode = o_mode;
- oidcpy(&a.oid, a_oid);
- a.mode = a_mode;
- oidcpy(&b.oid, b_oid);
- b.mode = b_mode;
-
- if (rename_conflict_info) {
- struct diff_filepair *pair1 = rename_conflict_info->pair1;
-
- path1 = (o->branch1 == rename_conflict_info->branch1) ?
- pair1->two->path : pair1->one->path;
- /* If rename_conflict_info->pair2 != NULL, we are in
- * RENAME_ONE_FILE_TO_ONE case. Otherwise, we have a
- * normal rename.
- */
- path2 = (rename_conflict_info->pair2 ||
- o->branch2 == rename_conflict_info->branch1) ?
- pair1->two->path : pair1->one->path;
- one.path = pair1->one->path;
- a.path = (char *)path1;
- b.path = (char *)path2;
-
- if (dir_in_way(o->repo->index, path, !o->call_depth,
- S_ISGITLINK(pair1->two->mode)))
- df_conflict_remains = 1;
- }
- if (merge_mode_and_contents(o, &one, &a, &b, path,
- o->branch1, o->branch2,
- o->call_depth * 2, &mfi))
+
+ assert(o->path && a->path && b->path);
+ if (ci && dir_in_way(opt->repo->index, path, !opt->call_depth,
+ S_ISGITLINK(ci->ren1->pair->two->mode)))
+ df_conflict_remains = 1;
+
+ if (merge_mode_and_contents(opt, o, a, b, path,
+ opt->branch1, opt->branch2,
+ opt->call_depth * 2, mfi))
return -1;
/*
* b) The merge matches what was in HEAD (content, mode, pathname)
* c) The target path is usable (i.e. not involved in D/F conflict)
*/
- if (mfi.clean &&
- was_tracked_and_matches(o, path, &mfi.oid, mfi.mode) &&
+ if (mfi->clean && was_tracked_and_matches(opt, path, &mfi->blob) &&
!df_conflict_remains) {
int pos;
struct cache_entry *ce;
- output(o, 3, _("Skipped %s (merged same as existing)"), path);
- if (add_cacheinfo(o, mfi.mode, &mfi.oid, path,
- 0, (!o->call_depth && !is_dirty), 0))
+ output(opt, 3, _("Skipped %s (merged same as existing)"), path);
+ if (add_cacheinfo(opt, &mfi->blob, path,
+ 0, (!opt->call_depth && !is_dirty), 0))
return -1;
/*
* However, add_cacheinfo() will delete the old cache entry
* flag to avoid making the file appear as if it were
* deleted by the user.
*/
- pos = index_name_pos(&o->orig_index, path, strlen(path));
- ce = o->orig_index.cache[pos];
+ pos = index_name_pos(&opt->orig_index, path, strlen(path));
+ ce = opt->orig_index.cache[pos];
if (ce_skip_worktree(ce)) {
- pos = index_name_pos(o->repo->index, path, strlen(path));
- ce = o->repo->index->cache[pos];
+ pos = index_name_pos(opt->repo->index, path, strlen(path));
+ ce = opt->repo->index->cache[pos];
ce->ce_flags |= CE_SKIP_WORKTREE;
}
- return mfi.clean;
+ return mfi->clean;
}
- if (!mfi.clean) {
- if (S_ISGITLINK(mfi.mode))
+ if (!mfi->clean) {
+ if (S_ISGITLINK(mfi->blob.mode))
reason = _("submodule");
- output(o, 1, _("CONFLICT (%s): Merge conflict in %s"),
+ output(opt, 1, _("CONFLICT (%s): Merge conflict in %s"),
reason, path);
- if (rename_conflict_info && !df_conflict_remains)
- if (update_stages(o, path, &one, &a, &b))
+ if (ci && !df_conflict_remains)
+ if (update_stages(opt, path, o, a, b))
return -1;
}
if (df_conflict_remains || is_dirty) {
char *new_path;
- if (o->call_depth) {
- remove_file_from_index(o->repo->index, path);
+ if (opt->call_depth) {
+ remove_file_from_index(opt->repo->index, path);
} else {
- if (!mfi.clean) {
- if (update_stages(o, path, &one, &a, &b))
+ if (!mfi->clean) {
+ if (update_stages(opt, path, o, a, b))
return -1;
} else {
- int file_from_stage2 = was_tracked(o, path);
- struct diff_filespec merged;
- oidcpy(&merged.oid, &mfi.oid);
- merged.mode = mfi.mode;
-
- if (update_stages(o, path, NULL,
- file_from_stage2 ? &merged : NULL,
- file_from_stage2 ? NULL : &merged))
+ int file_from_stage2 = was_tracked(opt, path);
+
+ if (update_stages(opt, path, NULL,
+ file_from_stage2 ? &mfi->blob : NULL,
+ file_from_stage2 ? NULL : &mfi->blob))
return -1;
}
}
- new_path = unique_path(o, path, rename_conflict_info->branch1);
+ new_path = unique_path(opt, path, ci->ren1->branch);
if (is_dirty) {
- output(o, 1, _("Refusing to lose dirty file at %s"),
+ output(opt, 1, _("Refusing to lose dirty file at %s"),
path);
}
- output(o, 1, _("Adding as %s instead"), new_path);
- if (update_file(o, 0, &mfi.oid, mfi.mode, new_path)) {
+ output(opt, 1, _("Adding as %s instead"), new_path);
+ if (update_file(opt, 0, &mfi->blob, new_path)) {
free(new_path);
return -1;
}
free(new_path);
- mfi.clean = 0;
- } else if (update_file(o, mfi.clean, &mfi.oid, mfi.mode, path))
+ mfi->clean = 0;
+ } else if (update_file(opt, mfi->clean, &mfi->blob, path))
return -1;
- return !is_dirty && mfi.clean;
+ return !is_dirty && mfi->clean;
}
-static int handle_rename_normal(struct merge_options *o,
+static int handle_rename_normal(struct merge_options *opt,
const char *path,
- struct object_id *o_oid, unsigned int o_mode,
- struct object_id *a_oid, unsigned int a_mode,
- struct object_id *b_oid, unsigned int b_mode,
+ const struct diff_filespec *o,
+ const struct diff_filespec *a,
+ const struct diff_filespec *b,
struct rename_conflict_info *ci)
{
+ struct rename *ren = ci->ren1;
+ struct merge_file_info mfi;
+ int clean;
+ int side = (ren->branch == opt->branch1 ? 2 : 3);
+
/* Merge the content and write it out */
- return handle_content_merge(o, path, was_dirty(o, path),
- o_oid, o_mode, a_oid, a_mode, b_oid, b_mode,
- ci);
+ clean = handle_content_merge(&mfi, opt, path, was_dirty(opt, path),
+ o, a, b, ci);
+
+ if (clean && opt->detect_directory_renames == 1 &&
+ ren->dir_rename_original_dest) {
+ if (update_stages(opt, path,
+ NULL,
+ side == 2 ? &mfi.blob : NULL,
+ side == 2 ? NULL : &mfi.blob))
+ return -1;
+ clean = 0; /* not clean, but conflicted */
+ }
+ return clean;
+}
+
+static void dir_rename_warning(const char *msg,
+ int is_add,
+ int clean,
+ struct merge_options *opt,
+ struct rename *ren)
+{
+ const char *other_branch;
+ other_branch = (ren->branch == opt->branch1 ?
+ opt->branch2 : opt->branch1);
+ if (is_add) {
+ output(opt, clean ? 2 : 1, msg,
+ ren->pair->one->path, ren->branch,
+ other_branch, ren->pair->two->path);
+ return;
+ }
+ output(opt, clean ? 2 : 1, msg,
+ ren->pair->one->path, ren->dir_rename_original_dest, ren->branch,
+ other_branch, ren->pair->two->path);
+}
+static int warn_about_dir_renamed_entries(struct merge_options *opt,
+ struct rename *ren)
+{
+ const char *msg;
+ int clean = 1, is_add;
+
+ if (!ren)
+ return clean;
+
+ /* Return early if ren was not affected/created by a directory rename */
+ if (!ren->dir_rename_original_dest)
+ return clean;
+
+ /* Sanity checks */
+ assert(opt->detect_directory_renames > 0);
+ assert(ren->dir_rename_original_type == 'A' ||
+ ren->dir_rename_original_type == 'R');
+
+ /* Check whether to treat directory renames as a conflict */
+ clean = (opt->detect_directory_renames == 2);
+
+ is_add = (ren->dir_rename_original_type == 'A');
+ if (ren->dir_rename_original_type == 'A' && clean) {
+ msg = _("Path updated: %s added in %s inside a "
+ "directory that was renamed in %s; moving it to %s.");
+ } else if (ren->dir_rename_original_type == 'A' && !clean) {
+ msg = _("CONFLICT (file location): %s added in %s "
+ "inside a directory that was renamed in %s, "
+ "suggesting it should perhaps be moved to %s.");
+ } else if (ren->dir_rename_original_type == 'R' && clean) {
+ msg = _("Path updated: %s renamed to %s in %s, inside a "
+ "directory that was renamed in %s; moving it to %s.");
+ } else if (ren->dir_rename_original_type == 'R' && !clean) {
+ msg = _("CONFLICT (file location): %s renamed to %s in %s, "
+ "inside a directory that was renamed in %s, "
+ "suggesting it should perhaps be moved to %s.");
+ } else {
+ BUG("Impossible dir_rename_original_type/clean combination");
+ }
+ dir_rename_warning(msg, is_add, clean, opt, ren);
+
+ return clean;
}
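
warn_about_dir_renamed_entries() chooses among four messages based only on
whether the original change was an add ('A') or a rename ('R'), and on whether
directory renames are applied silently (detect_directory_renames == 2, the
"true" setting) or flagged (== 1, the "conflict" setting).  A small standalone
table of that decision; message_class() is a hypothetical reading aid, not the
real message text:

    #include <stdio.h>

    static const char *message_class(char type, int clean)
    {
        if (type == 'A')
            return clean ? "Path updated (add)"
                         : "CONFLICT (file location): add";
        if (type == 'R')
            return clean ? "Path updated (rename)"
                         : "CONFLICT (file location): rename";
        return "BUG: impossible combination";
    }

    int main(void)
    {
        const char types[] = { 'A', 'R' };

        for (int t = 0; t < 2; t++)
            for (int clean = 0; clean <= 1; clean++)
                printf("type=%c clean=%d -> %s\n",
                       types[t], clean, message_class(types[t], clean));
        return 0;
    }
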
/* Per entry merge function */
-static int process_entry(struct merge_options *o,
+static int process_entry(struct merge_options *opt,
const char *path, struct stage_data *entry)
{
int clean_merge = 1;
- int normalize = o->renormalize;
- unsigned o_mode = entry->stages[1].mode;
- unsigned a_mode = entry->stages[2].mode;
- unsigned b_mode = entry->stages[3].mode;
- struct object_id *o_oid = stage_oid(&entry->stages[1].oid, o_mode);
- struct object_id *a_oid = stage_oid(&entry->stages[2].oid, a_mode);
- struct object_id *b_oid = stage_oid(&entry->stages[3].oid, b_mode);
+ int normalize = opt->renormalize;
+
+ struct diff_filespec *o = &entry->stages[1];
+ struct diff_filespec *a = &entry->stages[2];
+ struct diff_filespec *b = &entry->stages[3];
+ int o_valid = is_valid(o);
+ int a_valid = is_valid(a);
+ int b_valid = is_valid(b);
+ o->path = a->path = b->path = (char*)path;
entry->processed = 1;
if (entry->rename_conflict_info) {
- struct rename_conflict_info *conflict_info = entry->rename_conflict_info;
- switch (conflict_info->rename_type) {
+ struct rename_conflict_info *ci = entry->rename_conflict_info;
+ struct diff_filespec *temp;
+ int path_clean;
+
+ path_clean = warn_about_dir_renamed_entries(opt, ci->ren1);
+ path_clean &= warn_about_dir_renamed_entries(opt, ci->ren2);
+
+ /*
+ * For cases with a single rename, {o,a,b}->path have all been
+ * set to the rename target path; we need to set two of these
+ * back to the rename source.
+ * For rename/rename conflicts, we'll manually fix paths below.
+ */
+ temp = (opt->branch1 == ci->ren1->branch) ? b : a;
+ o->path = temp->path = ci->ren1->pair->one->path;
+ if (ci->ren2) {
+ assert(opt->branch1 == ci->ren1->branch);
+ }
+
+ switch (ci->rename_type) {
case RENAME_NORMAL:
case RENAME_ONE_FILE_TO_ONE:
- clean_merge = handle_rename_normal(o,
- path,
- o_oid, o_mode,
- a_oid, a_mode,
- b_oid, b_mode,
- conflict_info);
+ clean_merge = handle_rename_normal(opt, path, o, a, b,
+ ci);
break;
case RENAME_VIA_DIR:
- clean_merge = 1;
- if (handle_rename_via_dir(o,
- conflict_info->pair1,
- conflict_info->branch1))
- clean_merge = -1;
+ clean_merge = handle_rename_via_dir(opt, ci);
break;
case RENAME_ADD:
/*
* two-way merged cleanly with the added file, I
* guess it's a clean merge?
*/
- clean_merge = handle_rename_add(o, conflict_info);
+ clean_merge = handle_rename_add(opt, ci);
break;
case RENAME_DELETE:
clean_merge = 0;
- if (handle_rename_delete(o,
- conflict_info->pair1,
- conflict_info->branch1,
- conflict_info->branch2))
+ if (handle_rename_delete(opt, ci))
clean_merge = -1;
break;
case RENAME_ONE_FILE_TO_TWO:
+ /*
+ * Manually fix up paths; note:
+ * ren[12]->pair->one->path are equal.
+ */
+ o->path = ci->ren1->pair->one->path;
+ a->path = ci->ren1->pair->two->path;
+ b->path = ci->ren2->pair->two->path;
+
clean_merge = 0;
- if (handle_rename_rename_1to2(o, conflict_info))
+ if (handle_rename_rename_1to2(opt, ci))
clean_merge = -1;
break;
case RENAME_TWO_FILES_TO_ONE:
+ /*
+ * Manually fix up paths; note,
+ * ren[12]->pair->two->path are actually equal.
+ */
+ o->path = NULL;
+ a->path = ci->ren1->pair->two->path;
+ b->path = ci->ren2->pair->two->path;
+
/*
* Probably unclean merge, but if the two renamed
* files merge cleanly and the two resulting files
* can then be two-way merged cleanly, I guess it's
* a clean merge?
*/
- clean_merge = handle_rename_rename_2to1(o,
- conflict_info);
+ clean_merge = handle_rename_rename_2to1(opt, ci);
break;
default:
entry->processed = 0;
break;
}
- } else if (o_oid && (!a_oid || !b_oid)) {
+ if (path_clean < clean_merge)
+ clean_merge = path_clean;
+ } else if (o_valid && (!a_valid || !b_valid)) {
/* Case A: Deleted in one */
- if ((!a_oid && !b_oid) ||
- (!b_oid && blob_unchanged(o, o_oid, o_mode, a_oid, a_mode, normalize, path)) ||
- (!a_oid && blob_unchanged(o, o_oid, o_mode, b_oid, b_mode, normalize, path))) {
+ if ((!a_valid && !b_valid) ||
+ (!b_valid && blob_unchanged(opt, o, a, normalize, path)) ||
+ (!a_valid && blob_unchanged(opt, o, b, normalize, path))) {
/* Deleted in both or deleted in one and
* unchanged in the other */
- if (a_oid)
- output(o, 2, _("Removing %s"), path);
+ if (a_valid)
+ output(opt, 2, _("Removing %s"), path);
/* do not touch working file if it did not exist */
- remove_file(o, 1, path, !a_oid);
+ remove_file(opt, 1, path, !a_valid);
} else {
/* Modify/delete; deleted side may have put a directory in the way */
clean_merge = 0;
- if (handle_modify_delete(o, path, o_oid, o_mode,
- a_oid, a_mode, b_oid, b_mode))
+ if (handle_modify_delete(opt, path, o, a, b))
clean_merge = -1;
}
- } else if ((!o_oid && a_oid && !b_oid) ||
- (!o_oid && !a_oid && b_oid)) {
+ } else if ((!o_valid && a_valid && !b_valid) ||
+ (!o_valid && !a_valid && b_valid)) {
/* Case B: Added in one. */
/* [nothing|directory] -> ([nothing|directory], file) */
const char *add_branch;
const char *other_branch;
- unsigned mode;
- const struct object_id *oid;
const char *conf;
+ const struct diff_filespec *contents;
- if (a_oid) {
- add_branch = o->branch1;
- other_branch = o->branch2;
- mode = a_mode;
- oid = a_oid;
+ if (a_valid) {
+ add_branch = opt->branch1;
+ other_branch = opt->branch2;
+ contents = a;
conf = _("file/directory");
} else {
- add_branch = o->branch2;
- other_branch = o->branch1;
- mode = b_mode;
- oid = b_oid;
+ add_branch = opt->branch2;
+ other_branch = opt->branch1;
+ contents = b;
conf = _("directory/file");
}
- if (dir_in_way(o->repo->index, path,
- !o->call_depth && !S_ISGITLINK(a_mode),
+ if (dir_in_way(opt->repo->index, path,
+ !opt->call_depth && !S_ISGITLINK(a->mode),
0)) {
- char *new_path = unique_path(o, path, add_branch);
+ char *new_path = unique_path(opt, path, add_branch);
clean_merge = 0;
- output(o, 1, _("CONFLICT (%s): There is a directory with name %s in %s. "
+ output(opt, 1, _("CONFLICT (%s): There is a directory with name %s in %s. "
"Adding %s as %s"),
conf, path, other_branch, path, new_path);
- if (update_file(o, 0, oid, mode, new_path))
+ if (update_file(opt, 0, contents, new_path))
clean_merge = -1;
- else if (o->call_depth)
- remove_file_from_index(o->repo->index, path);
+ else if (opt->call_depth)
+ remove_file_from_index(opt->repo->index, path);
free(new_path);
} else {
- output(o, 2, _("Adding %s"), path);
+ output(opt, 2, _("Adding %s"), path);
/* do not overwrite file if already present */
- if (update_file_flags(o, oid, mode, path, 1, !a_oid))
+ if (update_file_flags(opt, contents, path, 1, !a_valid))
clean_merge = -1;
}
- } else if (a_oid && b_oid) {
- if (!o_oid) {
+ } else if (a_valid && b_valid) {
+ if (!o_valid) {
/* Case C: Added in both (check for same permissions) */
- output(o, 1,
+ output(opt, 1,
_("CONFLICT (add/add): Merge conflict in %s"),
path);
- clean_merge = handle_file_collision(o,
+ clean_merge = handle_file_collision(opt,
path, NULL, NULL,
- o->branch1,
- o->branch2,
- a_oid, a_mode,
- b_oid, b_mode);
+ opt->branch1,
+ opt->branch2,
+ a, b);
} else {
/* case D: Modified in both, but differently. */
+ struct merge_file_info mfi;
int is_dirty = 0; /* unpack_trees would have bailed if dirty */
- clean_merge = handle_content_merge(o, path,
+ clean_merge = handle_content_merge(&mfi, opt, path,
is_dirty,
- o_oid, o_mode,
- a_oid, a_mode,
- b_oid, b_mode,
- NULL);
+ o, a, b, NULL);
}
- } else if (!o_oid && !a_oid && !b_oid) {
+ } else if (!o_valid && !a_valid && !b_valid) {
/*
* this entry was deleted altogether. a_mode == 0 means
* we had that path and want to actively remove it.
*/
- remove_file(o, 1, path, !a_mode);
+ remove_file(opt, 1, path, !a->mode);
} else
BUG("fatal merge failure, shouldn't happen.");
return clean_merge;
}
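
Outside the rename cases, process_entry() branches purely on which of the
three stages (base o, HEAD a, MERGE b) still hold a valid blob.  A rough
standalone map of those branches, intended only as a reading aid rather than
the real control flow; classify() is hypothetical:

    #include <stdio.h>

    static const char *classify(int o_valid, int a_valid, int b_valid)
    {
        if (o_valid && (!a_valid || !b_valid))
            return "deleted on at least one side (delete or modify/delete)";
        if (!o_valid && (a_valid ^ b_valid))
            return "added on one side only";
        if (a_valid && b_valid)
            return o_valid ? "modified on both sides" : "added on both sides";
        return "entry gone everywhere; just remove it";
    }

    int main(void)
    {
        for (int mask = 0; mask < 8; mask++) {
            int o = (mask >> 2) & 1, a = (mask >> 1) & 1, b = mask & 1;

            printf("o=%d a=%d b=%d: %s\n", o, a, b, classify(o, a, b));
        }
        return 0;
    }
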
-int merge_trees(struct merge_options *o,
+int merge_trees(struct merge_options *opt,
struct tree *head,
struct tree *merge,
struct tree *common,
struct tree **result)
{
- struct index_state *istate = o->repo->index;
+ struct index_state *istate = opt->repo->index;
int code, clean;
struct strbuf sb = STRBUF_INIT;
- if (!o->call_depth && repo_index_has_changes(o->repo, head, &sb)) {
- err(o, _("Your local changes to the following files would be overwritten by merge:\n %s"),
+ if (!opt->call_depth && repo_index_has_changes(opt->repo, head, &sb)) {
+ err(opt, _("Your local changes to the following files would be overwritten by merge:\n %s"),
sb.buf);
return -1;
}
- if (o->subtree_shift) {
- merge = shift_tree_object(o->repo, head, merge, o->subtree_shift);
- common = shift_tree_object(o->repo, head, common, o->subtree_shift);
+ if (opt->subtree_shift) {
+ merge = shift_tree_object(opt->repo, head, merge, opt->subtree_shift);
+ common = shift_tree_object(opt->repo, head, common, opt->subtree_shift);
}
if (oid_eq(&common->object.oid, &merge->object.oid)) {
- output(o, 0, _("Already up to date!"));
+ output(opt, 0, _("Already up to date!"));
*result = head;
return 1;
}
- code = unpack_trees_start(o, common, head, merge);
+ code = unpack_trees_start(opt, common, head, merge);
if (code != 0) {
- if (show(o, 4) || o->call_depth)
- err(o, _("merging of trees %s and %s failed"),
+ if (show(opt, 4) || opt->call_depth)
+ err(opt, _("merging of trees %s and %s failed"),
oid_to_hex(&head->object.oid),
oid_to_hex(&merge->object.oid));
- unpack_trees_finish(o);
+ unpack_trees_finish(opt);
return -1;
}
 * opposed to declaring a local hashmap is for convenience
 * so that we don't have to pass it around.
*/
- hashmap_init(&o->current_file_dir_set, path_hashmap_cmp, NULL, 512);
- get_files_dirs(o, head);
- get_files_dirs(o, merge);
+ hashmap_init(&opt->current_file_dir_set, path_hashmap_cmp, NULL, 512);
+ get_files_dirs(opt, head);
+ get_files_dirs(opt, merge);
- entries = get_unmerged(o->repo->index);
- clean = detect_and_process_renames(o, common, head, merge,
+ entries = get_unmerged(opt->repo->index);
+ clean = detect_and_process_renames(opt, common, head, merge,
entries, &re_info);
- record_df_conflict_files(o, entries);
+ record_df_conflict_files(opt, entries);
if (clean < 0)
goto cleanup;
for (i = entries->nr-1; 0 <= i; i--) {
const char *path = entries->items[i].string;
struct stage_data *e = entries->items[i].util;
if (!e->processed) {
- int ret = process_entry(o, path, e);
+ int ret = process_entry(opt, path, e);
if (!ret)
clean = 0;
else if (ret < 0) {
string_list_clear(entries, 1);
free(entries);
- hashmap_free(&o->current_file_dir_set, 1);
+ hashmap_free(&opt->current_file_dir_set, 1);
if (clean < 0) {
- unpack_trees_finish(o);
+ unpack_trees_finish(opt);
return clean;
}
}
else
clean = 1;
- unpack_trees_finish(o);
+ unpack_trees_finish(opt);
- if (o->call_depth && !(*result = write_tree_from_memory(o)))
+ if (opt->call_depth && !(*result = write_tree_from_memory(opt)))
return -1;
return clean;
* Merge the commits h1 and h2, return the resulting virtual
* commit object and a flag indicating the cleanness of the merge.
*/
-int merge_recursive(struct merge_options *o,
+int merge_recursive(struct merge_options *opt,
struct commit *h1,
struct commit *h2,
struct commit_list *ca,
struct tree *mrtree;
int clean;
- if (show(o, 4)) {
- output(o, 4, _("Merging:"));
- output_commit_title(o, h1);
- output_commit_title(o, h2);
+ if (show(opt, 4)) {
+ output(opt, 4, _("Merging:"));
+ output_commit_title(opt, h1);
+ output_commit_title(opt, h2);
}
if (!ca) {
ca = reverse_commit_list(ca);
}
- if (show(o, 5)) {
+ if (show(opt, 5)) {
unsigned cnt = commit_list_count(ca);
- output(o, 5, Q_("found %u common ancestor:",
+ output(opt, 5, Q_("found %u common ancestor:",
"found %u common ancestors:", cnt), cnt);
for (iter = ca; iter; iter = iter->next)
- output_commit_title(o, iter->item);
+ output_commit_title(opt, iter->item);
}
merged_common_ancestors = pop_commit(&ca);
/* if there is no common ancestor, use an empty tree */
struct tree *tree;
- tree = lookup_tree(o->repo, o->repo->hash_algo->empty_tree);
- merged_common_ancestors = make_virtual_commit(o->repo, tree, "ancestor");
+ tree = lookup_tree(opt->repo, opt->repo->hash_algo->empty_tree);
+ merged_common_ancestors = make_virtual_commit(opt->repo, tree, "ancestor");
}
for (iter = ca; iter; iter = iter->next) {
const char *saved_b1, *saved_b2;
- o->call_depth++;
+ opt->call_depth++;
/*
* When the merge fails, the result contains files
* with conflict markers. The cleanness flag is
* overwritten it: the committed "conflicts" were
* already resolved.
*/
- discard_index(o->repo->index);
- saved_b1 = o->branch1;
- saved_b2 = o->branch2;
- o->branch1 = "Temporary merge branch 1";
- o->branch2 = "Temporary merge branch 2";
- if (merge_recursive(o, merged_common_ancestors, iter->item,
+ discard_index(opt->repo->index);
+ saved_b1 = opt->branch1;
+ saved_b2 = opt->branch2;
+ opt->branch1 = "Temporary merge branch 1";
+ opt->branch2 = "Temporary merge branch 2";
+ if (merge_recursive(opt, merged_common_ancestors, iter->item,
NULL, &merged_common_ancestors) < 0)
return -1;
- o->branch1 = saved_b1;
- o->branch2 = saved_b2;
- o->call_depth--;
+ opt->branch1 = saved_b1;
+ opt->branch2 = saved_b2;
+ opt->call_depth--;
if (!merged_common_ancestors)
- return err(o, _("merge returned no commit"));
+ return err(opt, _("merge returned no commit"));
}
- discard_index(o->repo->index);
- if (!o->call_depth)
- repo_read_index(o->repo);
+ discard_index(opt->repo->index);
+ if (!opt->call_depth)
+ repo_read_index(opt->repo);
- o->ancestor = "merged common ancestors";
- clean = merge_trees(o, get_commit_tree(h1), get_commit_tree(h2),
+ opt->ancestor = "merged common ancestors";
+ clean = merge_trees(opt, get_commit_tree(h1), get_commit_tree(h2),
get_commit_tree(merged_common_ancestors),
&mrtree);
if (clean < 0) {
- flush_output(o);
+ flush_output(opt);
return clean;
}
- if (o->call_depth) {
- *result = make_virtual_commit(o->repo, mrtree, "merged tree");
+ if (opt->call_depth) {
+ *result = make_virtual_commit(opt->repo, mrtree, "merged tree");
commit_list_insert(h1, &(*result)->parents);
commit_list_insert(h2, &(*result)->parents->next);
}
- flush_output(o);
- if (!o->call_depth && o->buffer_output < 2)
- strbuf_release(&o->obuf);
- if (show(o, 2))
+ flush_output(opt);
+ if (!opt->call_depth && opt->buffer_output < 2)
+ strbuf_release(&opt->obuf);
+ if (show(opt, 2))
diff_warn_rename_limit("merge.renamelimit",
- o->needed_rename_limit, 0);
+ opt->needed_rename_limit, 0);
return clean;
}
return (struct commit *)object;
}
-int merge_recursive_generic(struct merge_options *o,
+int merge_recursive_generic(struct merge_options *opt,
const struct object_id *head,
const struct object_id *merge,
int num_base_list,
{
int clean;
struct lock_file lock = LOCK_INIT;
- struct commit *head_commit = get_ref(o->repo, head, o->branch1);
- struct commit *next_commit = get_ref(o->repo, merge, o->branch2);
+ struct commit *head_commit = get_ref(opt->repo, head, opt->branch1);
+ struct commit *next_commit = get_ref(opt->repo, merge, opt->branch2);
struct commit_list *ca = NULL;
if (base_list) {
int i;
for (i = 0; i < num_base_list; ++i) {
struct commit *base;
- if (!(base = get_ref(o->repo, base_list[i], oid_to_hex(base_list[i]))))
- return err(o, _("Could not parse object '%s'"),
+ if (!(base = get_ref(opt->repo, base_list[i], oid_to_hex(base_list[i]))))
+ return err(opt, _("Could not parse object '%s'"),
oid_to_hex(base_list[i]));
commit_list_insert(base, &ca);
}
}
- repo_hold_locked_index(o->repo, &lock, LOCK_DIE_ON_ERROR);
- clean = merge_recursive(o, head_commit, next_commit, ca,
+ repo_hold_locked_index(opt->repo, &lock, LOCK_DIE_ON_ERROR);
+ clean = merge_recursive(opt, head_commit, next_commit, ca,
result);
if (clean < 0) {
rollback_lock_file(&lock);
return clean;
}
- if (write_locked_index(o->repo->index, &lock,
+ if (write_locked_index(opt->repo->index, &lock,
COMMIT_LOCK | SKIP_IF_UNCHANGED))
- return err(o, _("Unable to write index."));
+ return err(opt, _("Unable to write index."));
return clean ? 0 : 1;
}
-static void merge_recursive_config(struct merge_options *o)
+static void merge_recursive_config(struct merge_options *opt)
{
char *value = NULL;
- git_config_get_int("merge.verbosity", &o->verbosity);
- git_config_get_int("diff.renamelimit", &o->diff_rename_limit);
- git_config_get_int("merge.renamelimit", &o->merge_rename_limit);
+ git_config_get_int("merge.verbosity", &opt->verbosity);
+ git_config_get_int("diff.renamelimit", &opt->diff_rename_limit);
+ git_config_get_int("merge.renamelimit", &opt->merge_rename_limit);
if (!git_config_get_string("diff.renames", &value)) {
- o->diff_detect_rename = git_config_rename("diff.renames", value);
+ opt->diff_detect_rename = git_config_rename("diff.renames", value);
free(value);
}
if (!git_config_get_string("merge.renames", &value)) {
- o->merge_detect_rename = git_config_rename("merge.renames", value);
+ opt->merge_detect_rename = git_config_rename("merge.renames", value);
+ free(value);
+ }
+ if (!git_config_get_string("merge.directoryrenames", &value)) {
+ int boolval = git_parse_maybe_bool(value);
+ if (0 <= boolval) {
+ opt->detect_directory_renames = boolval ? 2 : 0;
+ } else if (!strcasecmp(value, "conflict")) {
+ opt->detect_directory_renames = 1;
+ } /* avoid erroring on values from future versions of git */
free(value);
}
git_config(git_xmerge_config, NULL);
}
-void init_merge_options(struct merge_options *o,
+void init_merge_options(struct merge_options *opt,
struct repository *repo)
{
const char *merge_verbosity;
- memset(o, 0, sizeof(struct merge_options));
- o->repo = repo;
- o->verbosity = 2;
- o->buffer_output = 1;
- o->diff_rename_limit = -1;
- o->merge_rename_limit = -1;
- o->renormalize = 0;
- o->diff_detect_rename = -1;
- o->merge_detect_rename = -1;
- o->detect_directory_renames = 1;
- merge_recursive_config(o);
+ memset(opt, 0, sizeof(struct merge_options));
+ opt->repo = repo;
+ opt->verbosity = 2;
+ opt->buffer_output = 1;
+ opt->diff_rename_limit = -1;
+ opt->merge_rename_limit = -1;
+ opt->renormalize = 0;
+ opt->diff_detect_rename = -1;
+ opt->merge_detect_rename = -1;
+ opt->detect_directory_renames = 1;
+ merge_recursive_config(opt);
merge_verbosity = getenv("GIT_MERGE_VERBOSITY");
if (merge_verbosity)
- o->verbosity = strtol(merge_verbosity, NULL, 10);
- if (o->verbosity >= 5)
- o->buffer_output = 0;
- strbuf_init(&o->obuf, 0);
- string_list_init(&o->df_conflict_file_set, 1);
+ opt->verbosity = strtol(merge_verbosity, NULL, 10);
+ if (opt->verbosity >= 5)
+ opt->buffer_output = 0;
+ strbuf_init(&opt->obuf, 0);
+ string_list_init(&opt->df_conflict_file_set, 1);
}
-int parse_merge_opt(struct merge_options *o, const char *s)
+int parse_merge_opt(struct merge_options *opt, const char *s)
{
const char *arg;
if (!s || !*s)
return -1;
if (!strcmp(s, "ours"))
- o->recursive_variant = MERGE_RECURSIVE_OURS;
+ opt->recursive_variant = MERGE_RECURSIVE_OURS;
else if (!strcmp(s, "theirs"))
- o->recursive_variant = MERGE_RECURSIVE_THEIRS;
+ opt->recursive_variant = MERGE_RECURSIVE_THEIRS;
else if (!strcmp(s, "subtree"))
- o->subtree_shift = "";
+ opt->subtree_shift = "";
else if (skip_prefix(s, "subtree=", &arg))
- o->subtree_shift = arg;
+ opt->subtree_shift = arg;
else if (!strcmp(s, "patience"))
- o->xdl_opts = DIFF_WITH_ALG(o, PATIENCE_DIFF);
+ opt->xdl_opts = DIFF_WITH_ALG(opt, PATIENCE_DIFF);
else if (!strcmp(s, "histogram"))
- o->xdl_opts = DIFF_WITH_ALG(o, HISTOGRAM_DIFF);
+ opt->xdl_opts = DIFF_WITH_ALG(opt, HISTOGRAM_DIFF);
else if (skip_prefix(s, "diff-algorithm=", &arg)) {
long value = parse_algorithm_value(arg);
if (value < 0)
return -1;
/* clear out previous settings */
- DIFF_XDL_CLR(o, NEED_MINIMAL);
- o->xdl_opts &= ~XDF_DIFF_ALGORITHM_MASK;
- o->xdl_opts |= value;
+ DIFF_XDL_CLR(opt, NEED_MINIMAL);
+ opt->xdl_opts &= ~XDF_DIFF_ALGORITHM_MASK;
+ opt->xdl_opts |= value;
}
else if (!strcmp(s, "ignore-space-change"))
- DIFF_XDL_SET(o, IGNORE_WHITESPACE_CHANGE);
+ DIFF_XDL_SET(opt, IGNORE_WHITESPACE_CHANGE);
else if (!strcmp(s, "ignore-all-space"))
- DIFF_XDL_SET(o, IGNORE_WHITESPACE);
+ DIFF_XDL_SET(opt, IGNORE_WHITESPACE);
else if (!strcmp(s, "ignore-space-at-eol"))
- DIFF_XDL_SET(o, IGNORE_WHITESPACE_AT_EOL);
+ DIFF_XDL_SET(opt, IGNORE_WHITESPACE_AT_EOL);
else if (!strcmp(s, "ignore-cr-at-eol"))
- DIFF_XDL_SET(o, IGNORE_CR_AT_EOL);
+ DIFF_XDL_SET(opt, IGNORE_CR_AT_EOL);
else if (!strcmp(s, "renormalize"))
- o->renormalize = 1;
+ opt->renormalize = 1;
else if (!strcmp(s, "no-renormalize"))
- o->renormalize = 0;
+ opt->renormalize = 0;
else if (!strcmp(s, "no-renames"))
- o->merge_detect_rename = 0;
+ opt->merge_detect_rename = 0;
else if (!strcmp(s, "find-renames")) {
- o->merge_detect_rename = 1;
- o->rename_score = 0;
+ opt->merge_detect_rename = 1;
+ opt->rename_score = 0;
}
else if (skip_prefix(s, "find-renames=", &arg) ||
skip_prefix(s, "rename-threshold=", &arg)) {
- if ((o->rename_score = parse_rename_score(&arg)) == -1 || *arg != 0)
+ if ((opt->rename_score = parse_rename_score(&arg)) == -1 || *arg != 0)
return -1;
- o->merge_detect_rename = 1;
+ opt->merge_detect_rename = 1;
}
/*
* Please update $__git_merge_strategy_options in
--- /dev/null
+diff_cmd () {
+ "$merge_tool_path" mergetool "$LOCAL" "$REMOTE" -o "$MERGED"
+}
+
+merge_cmd () {
+ if $base_present
+ then
+ "$merge_tool_path" mergetool "$BASE" "$LOCAL" "$REMOTE" -o "$MERGED"
+ else
+ "$merge_tool_path" mergetool "$LOCAL" "$REMOTE" -o "$MERGED"
+ fi
+}
midx_map = xmmap(NULL, midx_size, PROT_READ, MAP_PRIVATE, fd, 0);
- FLEX_ALLOC_MEM(m, object_dir, object_dir, strlen(object_dir));
+ FLEX_ALLOC_STR(m, object_dir, object_dir);
m->fd = fd;
m->data = midx_map;
m->data_len = midx_size;
return nth_midxed_pack_entry(m, e, pos);
}
-int midx_contains_pack(struct multi_pack_index *m, const char *idx_name)
+/* Match "foo.idx" against either "foo.pack" _or_ "foo.idx". */
+static int cmp_idx_or_pack_name(const char *idx_or_pack_name,
+ const char *idx_name)
+{
+ /* Skip past any initial matching prefix. */
+ while (*idx_name && *idx_name == *idx_or_pack_name) {
+ idx_name++;
+ idx_or_pack_name++;
+ }
+
+ /*
+ * If we didn't match completely, we may have matched "pack-1234." and
+ * be left with "idx" and "pack" respectively, which is also OK. We do
+ * not have to check for "idx" and "idx", because that would have been
+ * a complete match (and in that case these strcmps will be false, but
+ * we'll correctly return 0 from the final strcmp() below).
+ *
+ * Technically this matches "fooidx" and "foopack", but we'd never have
+ * such names in the first place.
+ */
+ if (!strcmp(idx_name, "idx") && !strcmp(idx_or_pack_name, "pack"))
+ return 0;
+
+ /*
+ * This not only checks for a complete match, but also orders based on
+ * the first non-identical character, which means our ordering will
+ * match a raw strcmp(). That makes it OK to use this to binary search
+ * a naively-sorted list.
+ */
+ return strcmp(idx_or_pack_name, idx_name);
+}
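
The matching rule above is easiest to see with concrete names. Below is a minimal standalone sketch: the helper is copied verbatim so it compiles outside of Git, and the pack names in main() are made up purely for illustration.

#include <stdio.h>
#include <string.h>

/* copied from the helper above */
static int cmp_idx_or_pack_name(const char *idx_or_pack_name,
				const char *idx_name)
{
	while (*idx_name && *idx_name == *idx_or_pack_name) {
		idx_name++;
		idx_or_pack_name++;
	}
	if (!strcmp(idx_name, "idx") && !strcmp(idx_or_pack_name, "pack"))
		return 0;
	return strcmp(idx_or_pack_name, idx_name);
}

int main(void)
{
	/* made-up pack names, one per interesting case */
	printf("%d\n", cmp_idx_or_pack_name("pack-1234.pack", "pack-1234.idx")); /* 0: same pack */
	printf("%d\n", cmp_idx_or_pack_name("pack-1234.idx", "pack-1234.idx"));  /* 0: exact match */
	printf("%d\n", cmp_idx_or_pack_name("pack-5678.idx", "pack-1234.idx"));  /* >0: orders like strcmp() */
	return 0;
}
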
+
+int midx_contains_pack(struct multi_pack_index *m, const char *idx_or_pack_name)
{
uint32_t first = 0, last = m->num_packs;
int cmp;
current = m->pack_names[mid];
- cmp = strcmp(idx_name, current);
+ cmp = cmp_idx_or_pack_name(idx_or_pack_name, current);
if (!cmp)
return 1;
if (cmp > 0) {
struct multi_pack_index *m,
uint32_t n);
int fill_midx_entry(const struct object_id *oid, struct pack_entry *e, struct multi_pack_index *m);
-int midx_contains_pack(struct multi_pack_index *m, const char *idx_name);
+int midx_contains_pack(struct multi_pack_index *m, const char *idx_or_pack_name);
int prepare_multi_pack_index_one(struct repository *r, const char *object_dir, int local);
int write_midx_file(const char *object_dir);
static int path_to_oid(const char *path, struct object_id *oid)
{
- char hex_oid[GIT_SHA1_HEXSZ];
+ char hex_oid[GIT_MAX_HEXSZ];
int i = 0;
- while (*path && i < GIT_SHA1_HEXSZ) {
+ while (*path && i < the_hash_algo->hexsz) {
if (*path != '/')
hex_oid[i++] = *path;
path++;
}
- if (*path || i != GIT_SHA1_HEXSZ)
+ if (*path || i != the_hash_algo->hexsz)
return -1;
return get_oid_hex(hex_oid, oid);
}
#define GET_NIBBLE(n, sha1) ((((sha1)[(n) >> 1]) >> ((~(n) & 0x01) << 2)) & 0x0f)
-#define KEY_INDEX (GIT_SHA1_RAWSZ - 1)
-#define FANOUT_PATH_SEPARATORS ((GIT_SHA1_HEXSZ / 2) - 1)
+#define KEY_INDEX (the_hash_algo->rawsz - 1)
+#define FANOUT_PATH_SEPARATORS (the_hash_algo->rawsz - 1)
+#define FANOUT_PATH_SEPARATORS_MAX ((GIT_MAX_HEXSZ / 2) - 1)
#define SUBTREE_SHA1_PREFIXCMP(key_sha1, subtree_sha1) \
(memcmp(key_sha1, subtree_sha1, subtree_sha1[KEY_INDEX]))
struct leaf_node *entry)
{
struct leaf_node *l;
- struct int_node *parent_stack[GIT_SHA1_RAWSZ];
+ struct int_node *parent_stack[GIT_MAX_RAWSZ];
unsigned char i, j;
void **p = note_tree_search(t, &tree, &n, entry->key_oid.hash);
void *buf;
struct tree_desc desc;
struct name_entry entry;
+ const unsigned hashsz = the_hash_algo->rawsz;
buf = fill_tree_descriptor(&desc, &subtree->val_oid);
if (!buf)
oid_to_hex(&subtree->val_oid));
prefix_len = subtree->key_oid.hash[KEY_INDEX];
- if (prefix_len >= GIT_SHA1_RAWSZ)
+ if (prefix_len >= hashsz)
BUG("prefix_len (%"PRIuMAX") is out of range", (uintmax_t)prefix_len);
if (prefix_len * 2 < n)
BUG("prefix_len (%"PRIuMAX") is too small", (uintmax_t)prefix_len);
struct leaf_node *l;
size_t path_len = strlen(entry.path);
- if (path_len == 2 * (GIT_SHA1_RAWSZ - prefix_len)) {
+ if (path_len == 2 * (hashsz - prefix_len)) {
/* This is potentially the remainder of the SHA-1 */
if (!S_ISREG(entry.mode))
goto handle_non_note;
if (hex_to_bytes(object_oid.hash + prefix_len, entry.path,
- GIT_SHA1_RAWSZ - prefix_len))
+ hashsz - prefix_len))
goto handle_non_note; /* entry.path is not a SHA1 */
type = PTR_TYPE_NOTE;
* except for the last byte, where we write
* the length:
*/
- memset(object_oid.hash + len, 0, GIT_SHA1_RAWSZ - len - 1);
+ memset(object_oid.hash + len, 0, hashsz - len - 1);
object_oid.hash[KEY_INDEX] = (unsigned char)len;
type = PTR_TYPE_SUBTREE;
return fanout + 1;
}
-/* hex SHA1 + 19 * '/' + NUL */
-#define FANOUT_PATH_MAX GIT_SHA1_HEXSZ + FANOUT_PATH_SEPARATORS + 1
+/* hex oid + '/' between each pair of hex digits + NUL */
+#define FANOUT_PATH_MAX GIT_MAX_HEXSZ + FANOUT_PATH_SEPARATORS_MAX + 1
-static void construct_path_with_fanout(const unsigned char *sha1,
+static void construct_path_with_fanout(const unsigned char *hash,
unsigned char fanout, char *path)
{
unsigned int i = 0, j = 0;
- const char *hex_sha1 = sha1_to_hex(sha1);
- assert(fanout < GIT_SHA1_RAWSZ);
+ const char *hex_hash = hash_to_hex(hash);
+ assert(fanout < the_hash_algo->rawsz);
while (fanout) {
- path[i++] = hex_sha1[j++];
- path[i++] = hex_sha1[j++];
+ path[i++] = hex_hash[j++];
+ path[i++] = hex_hash[j++];
path[i++] = '/';
fanout--;
}
- xsnprintf(path + i, FANOUT_PATH_MAX - i, "%s", hex_sha1 + j);
+ xsnprintf(path + i, FANOUT_PATH_MAX - i, "%s", hex_hash + j);
}
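
A standalone sketch of what the fanout path looks like, with the loop logic copied from above; the 40-hex name is made up, and plain snprintf() stands in for Git's xsnprintf(). For SHA-256 the same code simply walks more digit pairs.

#include <stdio.h>

/* enough room for a 64-hex oid, a '/' between every digit pair, and a NUL */
#define PATH_MAX_SKETCH (64 + 31 + 1)

static void fanout_path(const char *hex, unsigned char fanout, char *path)
{
	unsigned int i = 0, j = 0;

	while (fanout--) {
		path[i++] = hex[j++];
		path[i++] = hex[j++];
		path[i++] = '/';
	}
	snprintf(path + i, PATH_MAX_SKETCH - i, "%s", hex + j);
}

int main(void)
{
	char path[PATH_MAX_SKETCH];

	/* made-up note name; fanout 2 means two leading directory levels */
	fanout_path("1234abcd1234abcd1234abcd1234abcd1234abcd", 2, path);
	puts(path); /* prints "12/34/abcd1234abcd1234abcd1234abcd1234abcd" */
	return 0;
}
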
static int for_each_note_helper(struct notes_tree *t, struct int_node *tree,
static void write_tree_entry(struct strbuf *buf, unsigned int mode,
const char *path, unsigned int path_len, const
- unsigned char *sha1)
+ unsigned char *hash)
{
strbuf_addf(buf, "%o %.*s%c", mode, path_len, path, '\0');
- strbuf_add(buf, sha1, GIT_SHA1_RAWSZ);
+ strbuf_add(buf, hash, the_hash_algo->rawsz);
}
static void tree_write_stack_init_subtree(struct tree_write_stack *tws,
n = (struct tree_write_stack *)
xmalloc(sizeof(struct tree_write_stack));
n->next = NULL;
- strbuf_init(&n->buf, 256 * (32 + GIT_SHA1_HEXSZ)); /* assume 256 entries per tree */
+ strbuf_init(&n->buf, 256 * (32 + the_hash_algo->hexsz)); /* assume 256 entries per tree */
n->path[0] = n->path[1] = '\0';
tws->next = n;
tws->path[0] = path[0];
note_path[note_path_len] = '\0';
mode = 040000;
}
- assert(note_path_len <= GIT_SHA1_HEXSZ + FANOUT_PATH_SEPARATORS);
+ assert(note_path_len <= GIT_MAX_HEXSZ + FANOUT_PATH_SEPARATORS);
/* Weave non-note entries into note entries */
return write_each_non_note_until(note_path, d) ||
combine_notes_fn combine_notes, int flags)
{
struct object_id oid, object_oid;
- unsigned mode;
+ unsigned short mode;
struct leaf_node root_tree;
if (!t)
/* Prepare for traversal of current notes tree */
root.next = NULL; /* last forward entry in list is grounded */
- strbuf_init(&root.buf, 256 * (32 + GIT_SHA1_HEXSZ)); /* assume 256 entries */
+ strbuf_init(&root.buf, 256 * (32 + the_hash_algo->hexsz)); /* assume 256 entries */
root.path[0] = root.path[1] = '\0';
cb_data.root = &root;
cb_data.next_non_note = t->first_non_note;
while (l) {
if (flags & NOTES_PRUNE_VERBOSE)
- printf("%s\n", sha1_to_hex(l->sha1));
+ printf("%s\n", hash_to_hex(l->sha1));
if (!(flags & NOTES_PRUNE_DRYRUN))
remove_note(t, l->sha1);
l = l->next;
freshened:1,
do_not_close:1,
pack_promisor:1;
- unsigned char sha1[20];
+ unsigned char hash[GIT_MAX_RAWSZ];
struct revindex_entry *revindex;
/* something like ".git/objects/pack/xxxxx.pack" */
char pack_name[FLEX_ARRAY]; /* more */
#define OBJECT_INFO_QUICK 8
/* Do not check loose object */
#define OBJECT_INFO_IGNORE_LOOSE 16
+/*
+ * Do not attempt to fetch the object if missing (even if fetch_if_missing is
+ * nonzero). This is meant for bulk prefetching of missing blobs in a partial
+ * clone. Implies OBJECT_INFO_QUICK.
+ */
+#define OBJECT_INFO_FOR_PREFETCH (32 + OBJECT_INFO_QUICK)
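
To make the intended use of the new flag concrete, here is a hedged sketch of a bulk-prefetch caller, assuming it is compiled inside the Git tree: oid_object_info_extended() and the flag come from the header above, the oid_array helpers are Git's sha1-array API, and bulk_fetch_missing() is a hypothetical stand-in for the batching machinery.

/* hypothetical helper; the real batching code is not shown here */
void bulk_fetch_missing(struct repository *r, const struct oid_array *oids);

static void prefetch_missing(struct repository *r,
			     const struct oid_array *wanted)
{
	struct oid_array to_fetch = OID_ARRAY_INIT;
	int i;

	for (i = 0; i < wanted->nr; i++) {
		/* FOR_PREFETCH implies QUICK and suppresses the lazy fetch */
		if (oid_object_info_extended(r, &wanted->oid[i], NULL,
					     OBJECT_INFO_FOR_PREFETCH) < 0)
			oid_array_append(&to_fetch, &wanted->oid[i]);
	}
	/* one request for everything that turned out to be missing */
	bulk_fetch_missing(r, &to_fetch);
	oid_array_clear(&to_fetch);
}
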
int oid_object_info_extended(struct repository *r,
const struct object_id *,
* table overhead.
*/
-static inline unsigned int oid_hash(struct object_id oid)
-{
- return sha1hash(oid.hash);
-}
-
-static inline int oid_equal(struct object_id a, struct object_id b)
-{
- return oideq(&a, &b);
-}
-
-KHASH_INIT(oid, struct object_id, int, 0, oid_hash, oid_equal)
-
/**
* A single oidset; should be zero-initialized (or use OIDSET_INIT).
*/
seen_objects_nr = 0;
}
-static uint32_t find_object_pos(const unsigned char *sha1)
+static uint32_t find_object_pos(const unsigned char *hash)
{
- struct object_entry *entry = packlist_find(writer.to_pack, sha1, NULL);
+ struct object_entry *entry = packlist_find(writer.to_pack, hash, NULL);
if (!entry) {
die("Failed to write bitmap index. Packfile doesn't have full closure "
- "(object %s is missing)", sha1_to_hex(sha1));
+ "(object %s is missing)", hash_to_hex(hash));
}
return oe_in_pack_pos(writer.to_pack, entry);
header.entry_count = htonl(writer.selected_nr);
hashcpy(header.checksum, writer.pack_checksum);
- hashwrite(f, &header, sizeof(header));
+ hashwrite(f, &header, sizeof(header) - GIT_MAX_RAWSZ + the_hash_algo->rawsz);
dump_bitmap(f, writer.commits);
dump_bitmap(f, writer.trees);
dump_bitmap(f, writer.blobs);
* commit.
*/
struct stored_bitmap {
- unsigned char sha1[20];
+ struct object_id oid;
struct ewah_bitmap *root;
struct stored_bitmap *xor;
int flags;
struct ewah_bitmap *blobs;
struct ewah_bitmap *tags;
- /* Map from SHA1 -> `stored_bitmap` for all the bitmapped commits */
- khash_sha1 *bitmaps;
+ /* Map from object ID -> `stored_bitmap` for all the bitmapped commits */
+ kh_oid_map_t *bitmaps;
/* Number of bitmapped commits */
uint32_t entry_count;
struct object **objects;
uint32_t *hashes;
uint32_t count, alloc;
- khash_sha1_pos *positions;
+ kh_oid_pos_t *positions;
} ext_index;
/* Bitmap result of the last performed walk */
{
struct bitmap_disk_header *header = (void *)index->map;
- if (index->map_size < sizeof(*header) + 20)
+ if (index->map_size < sizeof(*header) + the_hash_algo->rawsz)
return error("Corrupted bitmap index (missing header data)");
if (memcmp(header->magic, BITMAP_IDX_SIGNATURE, sizeof(BITMAP_IDX_SIGNATURE)) != 0)
"(Git requires BITMAP_OPT_FULL_DAG)");
if (flags & BITMAP_OPT_HASH_CACHE) {
- unsigned char *end = index->map + index->map_size - 20;
+ unsigned char *end = index->map + index->map_size - the_hash_algo->rawsz;
index->hashes = ((uint32_t *)end) - index->pack->num_objects;
}
}
index->entry_count = ntohl(header->entry_count);
- index->map_pos += sizeof(*header);
+ index->map_pos += sizeof(*header) - GIT_MAX_RAWSZ + the_hash_algo->rawsz;
return 0;
}
static struct stored_bitmap *store_bitmap(struct bitmap_index *index,
struct ewah_bitmap *root,
- const unsigned char *sha1,
+ const unsigned char *hash,
struct stored_bitmap *xor_with,
int flags)
{
stored->root = root;
stored->xor = xor_with;
stored->flags = flags;
- hashcpy(stored->sha1, sha1);
+ oidread(&stored->oid, hash);
- hash_pos = kh_put_sha1(index->bitmaps, stored->sha1, &ret);
+ hash_pos = kh_put_oid_map(index->bitmaps, stored->oid, &ret);
/* a 0 return code means the insertion succeeded with no changes,
* because the SHA1 already existed on the map. this is bad, there
* shouldn't be duplicated commits in the index */
if (ret == 0) {
- error("Duplicate entry in bitmap index: %s", sha1_to_hex(sha1));
+ error("Duplicate entry in bitmap index: %s", hash_to_hex(hash));
return NULL;
}
{
assert(bitmap_git->map);
- bitmap_git->bitmaps = kh_init_sha1();
- bitmap_git->ext_index.positions = kh_init_sha1_pos();
- load_pack_revindex(bitmap_git->pack);
+ bitmap_git->bitmaps = kh_init_oid_map();
+ bitmap_git->ext_index.positions = kh_init_oid_pos();
+ if (load_pack_revindex(bitmap_git->pack))
+ goto failed;
if (!(bitmap_git->commits = read_bitmap_1(bitmap_git)) ||
!(bitmap_git->trees = read_bitmap_1(bitmap_git)) ||
};
static inline int bitmap_position_extended(struct bitmap_index *bitmap_git,
- const unsigned char *sha1)
+ const struct object_id *oid)
{
- khash_sha1_pos *positions = bitmap_git->ext_index.positions;
- khiter_t pos = kh_get_sha1_pos(positions, sha1);
+ kh_oid_pos_t *positions = bitmap_git->ext_index.positions;
+ khiter_t pos = kh_get_oid_pos(positions, *oid);
if (pos < kh_end(positions)) {
int bitmap_pos = kh_value(positions, pos);
}
static inline int bitmap_position_packfile(struct bitmap_index *bitmap_git,
- const unsigned char *sha1)
+ const struct object_id *oid)
{
- off_t offset = find_pack_entry_one(sha1, bitmap_git->pack);
+ off_t offset = find_pack_entry_one(oid->hash, bitmap_git->pack);
if (!offset)
return -1;
}
static int bitmap_position(struct bitmap_index *bitmap_git,
- const unsigned char *sha1)
+ const struct object_id *oid)
{
- int pos = bitmap_position_packfile(bitmap_git, sha1);
- return (pos >= 0) ? pos : bitmap_position_extended(bitmap_git, sha1);
+ int pos = bitmap_position_packfile(bitmap_git, oid);
+ return (pos >= 0) ? pos : bitmap_position_extended(bitmap_git, oid);
}
static int ext_index_add_object(struct bitmap_index *bitmap_git,
int hash_ret;
int bitmap_pos;
- hash_pos = kh_put_sha1_pos(eindex->positions, object->oid.hash, &hash_ret);
+ hash_pos = kh_put_oid_pos(eindex->positions, object->oid, &hash_ret);
if (hash_ret > 0) {
if (eindex->count >= eindex->alloc) {
eindex->alloc = (eindex->alloc + 16) * 3 / 2;
struct bitmap_show_data *data = data_;
int bitmap_pos;
- bitmap_pos = bitmap_position(data->bitmap_git, object->oid.hash);
+ bitmap_pos = bitmap_position(data->bitmap_git, &object->oid);
if (bitmap_pos < 0)
bitmap_pos = ext_index_add_object(data->bitmap_git, object,
static int add_to_include_set(struct bitmap_index *bitmap_git,
struct include_data *data,
- const unsigned char *sha1,
+ const struct object_id *oid,
int bitmap_pos)
{
khiter_t hash_pos;
if (bitmap_get(data->base, bitmap_pos))
return 0;
- hash_pos = kh_get_sha1(bitmap_git->bitmaps, sha1);
+ hash_pos = kh_get_oid_map(bitmap_git->bitmaps, *oid);
if (hash_pos < kh_end(bitmap_git->bitmaps)) {
struct stored_bitmap *st = kh_value(bitmap_git->bitmaps, hash_pos);
bitmap_or_ewah(data->base, lookup_stored_bitmap(st));
struct include_data *data = _data;
int bitmap_pos;
- bitmap_pos = bitmap_position(data->bitmap_git, commit->object.oid.hash);
+ bitmap_pos = bitmap_position(data->bitmap_git, &commit->object.oid);
if (bitmap_pos < 0)
bitmap_pos = ext_index_add_object(data->bitmap_git,
(struct object *)commit,
NULL);
- if (!add_to_include_set(data->bitmap_git, data, commit->object.oid.hash,
+ if (!add_to_include_set(data->bitmap_git, data, &commit->object.oid,
bitmap_pos)) {
struct commit_list *parent = commit->parents;
roots = roots->next;
if (object->type == OBJ_COMMIT) {
- khiter_t pos = kh_get_sha1(bitmap_git->bitmaps, object->oid.hash);
+ khiter_t pos = kh_get_oid_map(bitmap_git->bitmaps, object->oid);
if (pos < kh_end(bitmap_git->bitmaps)) {
struct stored_bitmap *st = kh_value(bitmap_git->bitmaps, pos);
int pos;
roots = roots->next;
- pos = bitmap_position(bitmap_git, object->oid.hash);
+ pos = bitmap_position(bitmap_git, &object->oid);
if (pos < 0 || base == NULL || !bitmap_get(base, pos)) {
object->flags &= ~UNINTERESTING;
fprintf(stderr, "Failed to reuse at %d (%016llx)\n",
reuse_objects, result->words[i]);
- fprintf(stderr, " %s\n", sha1_to_hex(sha1));
+ fprintf(stderr, " %s\n", hash_to_hex(sha1));
}
#endif
struct bitmap_test_data *tdata = data;
int bitmap_pos;
- bitmap_pos = bitmap_position(tdata->bitmap_git, object->oid.hash);
+ bitmap_pos = bitmap_position(tdata->bitmap_git, &object->oid);
if (bitmap_pos < 0)
die("Object not in bitmap: %s\n", oid_to_hex(&object->oid));
int bitmap_pos;
bitmap_pos = bitmap_position(tdata->bitmap_git,
- commit->object.oid.hash);
+ &commit->object.oid);
if (bitmap_pos < 0)
die("Object not in bitmap: %s\n", oid_to_hex(&commit->object.oid));
bitmap_git->version, bitmap_git->entry_count);
root = revs->pending.objects[0].item;
- pos = kh_get_sha1(bitmap_git->bitmaps, root->oid.hash);
+ pos = kh_get_oid_map(bitmap_git->bitmaps, root->oid);
if (pos < kh_end(bitmap_git->bitmaps)) {
struct stored_bitmap *st = kh_value(bitmap_git->bitmaps, pos);
lookup_stored_bitmap(stored),
rebuild)) {
hash_pos = kh_put_sha1(reused_bitmaps,
- stored->sha1,
+ stored->oid.hash,
&hash_ret);
kh_value(reused_bitmaps, hash_pos) =
bitmap_to_ewah(rebuild);
ewah_pool_free(b->trees);
ewah_pool_free(b->blobs);
ewah_pool_free(b->tags);
- kh_destroy_sha1(b->bitmaps);
+ kh_destroy_oid_map(b->bitmaps);
free(b->ext_index.objects);
free(b->ext_index.hashes);
bitmap_free(b->result);
free(b);
}
-int bitmap_has_sha1_in_uninteresting(struct bitmap_index *bitmap_git,
- const unsigned char *sha1)
+int bitmap_has_oid_in_uninteresting(struct bitmap_index *bitmap_git,
+ const struct object_id *oid)
{
int pos;
if (!bitmap_git->haves)
return 0; /* walk had no "haves" */
- pos = bitmap_position_packfile(bitmap_git, sha1);
+ pos = bitmap_position_packfile(bitmap_git, oid);
if (pos < 0)
return 0;
uint16_t version;
uint16_t options;
uint32_t entry_count;
- unsigned char checksum[20];
+ unsigned char checksum[GIT_MAX_RAWSZ];
};
static const char BITMAP_IDX_SIGNATURE[] = {'B', 'I', 'T', 'M'};
* queried to see if a particular object was reachable from any of the
* objects flagged as UNINTERESTING.
*/
-int bitmap_has_sha1_in_uninteresting(struct bitmap_index *, const unsigned char *sha1);
+int bitmap_has_oid_in_uninteresting(struct bitmap_index *, const struct object_id *oid);
void bitmap_writer_show_progress(int show);
void bitmap_writer_set_checksum(unsigned char *sha1);
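
For orientation, a hedged fragment of how a caller such as pack-objects might consult this query when deciding whether a thin-pack delta base can be reused; everything except the bitmap_has_oid_in_uninteresting() call itself is an assumption, and reuse_delta_base() is a hypothetical helper.

	/*
	 * Illustrative only: if the bitmap walk already proved the other side
	 * has base_oid, a delta against it can be emitted without sending the
	 * base object itself.
	 */
	if (bitmap_git &&
	    bitmap_has_oid_in_uninteresting(bitmap_git, &base_oid))
		reuse_delta_base(&base_oid);	/* hypothetical helper */
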
#include "cache.h"
#include "pack-revindex.h"
#include "object-store.h"
+#include "packfile.h"
/*
* Pack index for existing packs give us easy access to the offsets into
sort_revindex(p->revindex, num_ent, p->pack_size);
}
-void load_pack_revindex(struct packed_git *p)
+int load_pack_revindex(struct packed_git *p)
{
- if (!p->revindex)
+ if (!p->revindex) {
+ if (open_pack_index(p))
+ return -1;
create_pack_revindex(p);
+ }
+ return 0;
}
int find_revindex_position(struct packed_git *p, off_t ofs)
{
int pos;
- load_pack_revindex(p);
+ if (load_pack_revindex(p))
+ return NULL;
+
pos = find_revindex_position(p, ofs);
if (pos < 0)
unsigned int nr;
};
-void load_pack_revindex(struct packed_git *p);
+int load_pack_revindex(struct packed_git *p);
int find_revindex_position(struct packed_git *p, off_t ofs);
struct revindex_entry *find_pack_revindex(struct packed_git *p, off_t ofs);
struct packed_git *p = alloc_packed_git(alloc);
memcpy(p->pack_name, path, alloc); /* includes NUL */
- hashcpy(p->sha1, sha1);
+ hashcpy(p->hash, sha1);
if (check_packed_git_idx(idx_path, p)) {
free(p);
return NULL;
#endif
}
+const char *pack_basename(struct packed_git *p)
+{
+ const char *ret = strrchr(p->pack_name, '/');
+ if (ret)
+ ret = ret + 1; /* skip past slash */
+ else
+ ret = p->pack_name; /* we only have a base */
+ return ret;
+}
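
A hedged caller-side fragment: this mirrors the packfile.c hunk just below, where the basename (rather than the full ".git/objects/pack/..." path) is what gets compared against the names stored in a multi-pack-index. The enclosing loop over packs and the variables "m" and "p" are assumed context.

	/* assumed: "m" is a struct multi_pack_index, "p" a struct packed_git */
	if (midx_contains_pack(m, pack_basename(p)))
		continue;	/* already covered by the multi-pack-index */
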
+
/*
* Do not call this directly as this leaks p->pack_fd on error return;
* call open_packed_git() instead.
if (!p->index_data) {
struct multi_pack_index *m;
- const char *pack_name = strrchr(p->pack_name, '/');
+ const char *pack_name = pack_basename(p);
for (m = the_repository->objects->multi_pack_index;
m; m = m->next) {
p->pack_local = local;
p->mtime = st.st_mtime;
if (path_len < the_hash_algo->hexsz ||
- get_sha1_hex(path + path_len - the_hash_algo->hexsz, p->sha1))
- hashclr(p->sha1);
+ get_sha1_hex(path + path_len - the_hash_algo->hexsz, p->hash))
+ hashclr(p->hash);
return p;
}
uint32_t i;
int r = 0;
- if (flags & FOR_EACH_OBJECT_PACK_ORDER)
- load_pack_revindex(p);
+ if (flags & FOR_EACH_OBJECT_PACK_ORDER) {
+ if (load_pack_revindex(p))
+ return -1;
+ }
for (i = 0; i < p->num_objects; i++) {
uint32_t pos;
*
* Example: odb_pack_name(out, sha1, "idx") => ".git/objects/pack/pack-1234..idx"
*/
-extern char *odb_pack_name(struct strbuf *buf, const unsigned char *sha1, const char *ext);
+char *odb_pack_name(struct strbuf *buf, const unsigned char *sha1, const char *ext);
/*
* Return the name of the (local) packfile with the specified sha1 in
* its name. The return value is a pointer to memory that is
* overwritten each time this function is called.
*/
-extern char *sha1_pack_name(const unsigned char *sha1);
+char *sha1_pack_name(const unsigned char *sha1);
/*
* Return the name of the (local) pack index file with the specified
* sha1 in its name. The return value is a pointer to memory that is
* overwritten each time this function is called.
*/
-extern char *sha1_pack_index_name(const unsigned char *sha1);
+char *sha1_pack_index_name(const unsigned char *sha1);
-extern struct packed_git *parse_pack_index(unsigned char *sha1, const char *idx_path);
+/*
+ * Return the basename of the packfile, omitting any containing directory
+ * (e.g., "pack-1234abcd[...].pack").
+ */
+const char *pack_basename(struct packed_git *p);
+
+struct packed_git *parse_pack_index(unsigned char *sha1, const char *idx_path);
typedef void each_file_in_pack_dir_fn(const char *full_path, size_t full_path_len,
const char *file_path, void *data);
#define PACKDIR_FILE_GARBAGE 4
extern void (*report_garbage)(unsigned seen_bits, const char *path);
-extern void reprepare_packed_git(struct repository *r);
-extern void install_packed_git(struct repository *r, struct packed_git *pack);
+void reprepare_packed_git(struct repository *r);
+void install_packed_git(struct repository *r, struct packed_git *pack);
struct packed_git *get_packed_git(struct repository *r);
struct list_head *get_packed_git_mru(struct repository *r);
*/
unsigned long approximate_object_count(void);
-extern struct packed_git *find_sha1_pack(const unsigned char *sha1,
- struct packed_git *packs);
+struct packed_git *find_sha1_pack(const unsigned char *sha1,
+ struct packed_git *packs);
-extern void pack_report(void);
+void pack_report(void);
/*
* mmap the index file for the specified packfile (if it is not
* already mmapped). Return 0 on success.
*/
-extern int open_pack_index(struct packed_git *);
+int open_pack_index(struct packed_git *);
/*
* munmap the index file for the specified packfile (if it is
* currently mmapped).
*/
-extern void close_pack_index(struct packed_git *);
+void close_pack_index(struct packed_git *);
int close_pack_fd(struct packed_git *p);
-extern uint32_t get_pack_fanout(struct packed_git *p, uint32_t value);
+uint32_t get_pack_fanout(struct packed_git *p, uint32_t value);
-extern unsigned char *use_pack(struct packed_git *, struct pack_window **, off_t, unsigned long *);
-extern void close_pack_windows(struct packed_git *);
-extern void close_pack(struct packed_git *);
-extern void close_all_packs(struct raw_object_store *o);
-extern void unuse_pack(struct pack_window **);
-extern void clear_delta_base_cache(void);
-extern struct packed_git *add_packed_git(const char *path, size_t path_len, int local);
+unsigned char *use_pack(struct packed_git *, struct pack_window **, off_t, unsigned long *);
+void close_pack_windows(struct packed_git *);
+void close_pack(struct packed_git *);
+void close_all_packs(struct raw_object_store *o);
+void unuse_pack(struct pack_window **);
+void clear_delta_base_cache(void);
+struct packed_git *add_packed_git(const char *path, size_t path_len, int local);
/*
* Make sure that a pointer access into an mmap'd index file is within bounds,
* (like the 64-bit extended offset table), as we compare the size to the
* fixed-length parts when we open the file.
*/
-extern void check_pack_index_ptr(const struct packed_git *p, const void *ptr);
+void check_pack_index_ptr(const struct packed_git *p, const void *ptr);
/*
* Return the SHA-1 of the nth object within the specified packfile.
* Open the index if it is not already open. The return value points
* at the SHA-1 within the mmapped index. Return NULL if there is an
* error.
*/
-extern const unsigned char *nth_packed_object_sha1(struct packed_git *, uint32_t n);
+const unsigned char *nth_packed_object_sha1(struct packed_git *, uint32_t n);
/*
* Like nth_packed_object_sha1, but write the data into the object specified by
* the first argument. Returns the first argument on success, and NULL on
* error.
*/
-extern const struct object_id *nth_packed_object_oid(struct object_id *, struct packed_git *, uint32_t n);
+const struct object_id *nth_packed_object_oid(struct object_id *, struct packed_git *, uint32_t n);
/*
* Return the offset of the nth object within the specified packfile.
* The index must already be opened.
*/
-extern off_t nth_packed_object_offset(const struct packed_git *, uint32_t n);
+off_t nth_packed_object_offset(const struct packed_git *, uint32_t n);
/*
* If the object named sha1 is present in the specified packfile,
* return its offset within the packfile; otherwise, return 0.
*/
-extern off_t find_pack_entry_one(const unsigned char *sha1, struct packed_git *);
+off_t find_pack_entry_one(const unsigned char *sha1, struct packed_git *);
-extern int is_pack_valid(struct packed_git *);
-extern void *unpack_entry(struct repository *r, struct packed_git *, off_t, enum object_type *, unsigned long *);
-extern unsigned long unpack_object_header_buffer(const unsigned char *buf, unsigned long len, enum object_type *type, unsigned long *sizep);
-extern unsigned long get_size_from_delta(struct packed_git *, struct pack_window **, off_t);
-extern int unpack_object_header(struct packed_git *, struct pack_window **, off_t *, unsigned long *);
+int is_pack_valid(struct packed_git *);
+void *unpack_entry(struct repository *r, struct packed_git *, off_t, enum object_type *, unsigned long *);
+unsigned long unpack_object_header_buffer(const unsigned char *buf, unsigned long len, enum object_type *type, unsigned long *sizep);
+unsigned long get_size_from_delta(struct packed_git *, struct pack_window **, off_t);
+int unpack_object_header(struct packed_git *, struct pack_window **, off_t *, unsigned long *);
-extern void release_pack_memory(size_t);
+void release_pack_memory(size_t);
/* global flag to enable extra checks when accessing packed objects */
extern int do_check_packed_object_crc;
-extern int packed_object_info(struct repository *r,
- struct packed_git *pack,
- off_t offset, struct object_info *);
+int packed_object_info(struct repository *r,
+ struct packed_git *pack,
+ off_t offset, struct object_info *);
-extern void mark_bad_packed_object(struct packed_git *p, const unsigned char *sha1);
-extern const struct packed_git *has_packed_and_bad(struct repository *r, const unsigned char *sha1);
+void mark_bad_packed_object(struct packed_git *p, const unsigned char *sha1);
+const struct packed_git *has_packed_and_bad(struct repository *r, const unsigned char *sha1);
/*
* Iff a pack file in the given repository contains the object named by sha1,
* return true and store its location to e.
*/
-extern int find_pack_entry(struct repository *r, const struct object_id *oid, struct pack_entry *e);
+int find_pack_entry(struct repository *r, const struct object_id *oid, struct pack_entry *e);
-extern int has_object_pack(const struct object_id *oid);
+int has_object_pack(const struct object_id *oid);
-extern int has_pack_index(const unsigned char *sha1);
+int has_pack_index(const unsigned char *sha1);
/*
* Return 1 if an object in a promisor packfile is or refers to the given
* object, 0 otherwise.
*/
-extern int is_promisor_object(const struct object_id *oid);
+int is_promisor_object(const struct object_id *oid);
/*
* Expose a function for fuzz testing: load_idx() is public only so that we
* have a convenient entry-point for fuzz testing. For real uses, you should
* probably use open_pack_index() or parse_pack_index() instead.
*/
-extern int load_idx(const char *path, const unsigned int hashsz, void *idx_map,
- size_t idx_size, struct packed_git *p);
+int load_idx(const char *path, const unsigned int hashsz, void *idx_map,
+ size_t idx_size, struct packed_git *p);
#endif
opt->long_name);
if (v && v < MINIMUM_ABBREV)
v = MINIMUM_ABBREV;
- else if (v > 40)
- v = 40;
+ else if (v > the_hash_algo->hexsz)
+ v = the_hash_algo->hexsz;
}
*(int *)(opt->value) = v;
return 0;
#include "color.h"
#include "utf8.h"
+static int disallow_abbreviated_options;
+
#define OPT_SHORT 1
#define OPT_UNSET 2
optname(options, flags));
if (*rest)
continue;
+ if (options->value)
+ *(int *)options->value = options->defval;
p->out[p->cpidx++] = arg - 2;
return PARSE_OPT_DONE;
}
return get_value(p, options, all_opts, flags ^ opt_flags);
}
+ if (disallow_abbreviated_options && (ambiguous_option || abbrev_option))
+ die("disallowed abbreviated or ambiguous option '%.*s'",
+ (int)(arg_end - arg), arg);
+
if (ambiguous_option) {
error(_("ambiguous option: %s "
"(could be --%s%s or --%s%s)"),
}
}
-static int show_gitcomp(struct parse_opt_ctx_t *ctx,
- const struct option *opts)
+static int show_gitcomp(const struct option *opts)
{
const struct option *original_opts = opts;
int nr_noopts = 0;
/* lone --git-completion-helper is asked by git-completion.bash */
if (ctx->total == 1 && !strcmp(arg + 1, "-git-completion-helper"))
- return show_gitcomp(ctx, options);
+ return show_gitcomp(options);
if (arg[1] != '-') {
ctx->opt = arg + 1;
{
struct parse_opt_ctx_t ctx;
+ disallow_abbreviated_options =
+ git_env_bool("GIT_TEST_DISALLOW_ABBREVIATED_OPTIONS", 0);
+
parse_options_start(&ctx, argc, argv, prefix, options, flags);
switch (parse_options_step(&ctx, options, usagestr)) {
case PARSE_OPT_HELP:
#define OPT_BOOL_F(s, l, v, h, f) OPT_SET_INT_F(s, l, v, h, 1, f)
#define OPT_CALLBACK_F(s, l, v, a, h, f, cb) \
{ OPTION_CALLBACK, (s), (l), (v), (a), (h), (f), (cb) }
+#define OPT_STRING_F(s, l, v, a, h, f) { OPTION_STRING, (s), (l), (v), (a), (h), (f) }
+#define OPT_INTEGER_F(s, l, v, h, f) { OPTION_INTEGER, (s), (l), (v), N_("n"), (h), (f) }
#define OPT_END() { OPTION_END }
-#define OPT_ARGUMENT(l, h) { OPTION_ARGUMENT, 0, (l), NULL, NULL, \
- (h), PARSE_OPT_NOARG}
+#define OPT_ARGUMENT(l, v, h) { OPTION_ARGUMENT, 0, (l), (v), NULL, \
+ (h), PARSE_OPT_NOARG, NULL, 1 }
#define OPT_GROUP(h) { OPTION_GROUP, 0, NULL, NULL, NULL, (h) }
#define OPT_BIT(s, l, v, h, b) OPT_BIT_F(s, l, v, h, b, 0)
#define OPT_BITOP(s, l, v, h, set, clear) { OPTION_BITOP, (s), (l), (v), NULL, (h), \
(h), PARSE_OPT_NOARG | PARSE_OPT_HIDDEN, NULL, 1}
#define OPT_CMDMODE(s, l, v, h, i) { OPTION_CMDMODE, (s), (l), (v), NULL, \
(h), PARSE_OPT_NOARG|PARSE_OPT_NONEG, NULL, (i) }
-#define OPT_INTEGER(s, l, v, h) { OPTION_INTEGER, (s), (l), (v), N_("n"), (h) }
+#define OPT_INTEGER(s, l, v, h) OPT_INTEGER_F(s, l, v, h, 0)
#define OPT_MAGNITUDE(s, l, v, h) { OPTION_MAGNITUDE, (s), (l), (v), \
N_("n"), (h), PARSE_OPT_NONEG }
-#define OPT_STRING(s, l, v, a, h) { OPTION_STRING, (s), (l), (v), (a), (h) }
+#define OPT_STRING(s, l, v, a, h) OPT_STRING_F(s, l, v, a, h, 0)
#define OPT_STRING_LIST(s, l, v, a, h) \
{ OPTION_CALLBACK, (s), (l), (v), (a), \
(h), 0, &parse_opt_string_list }
return -1;
}
- if ($description !~ /^[0-9a-fA-F]{40} \S+ (\d+)$/) {
+ if ($description !~ /^[0-9a-fA-F]{40}(?:[0-9a-fA-F]{24})? \S+ (\d+)$/) {
carp "Unexpected result returned from git cat-file";
return -1;
}
return PACKET_READ_EOF;
}
- if ((options & PACKET_READ_DIE_ON_ERR_PACKET) &&
- starts_with(buffer, "ERR "))
- die(_("remote error: %s"), buffer + 4);
-
if ((options & PACKET_READ_CHOMP_NEWLINE) &&
len && buffer[len-1] == '\n')
len--;
buffer[len] = 0;
packet_trace(buffer, len, 0);
+
+ if ((options & PACKET_READ_DIE_ON_ERR_PACKET) &&
+ starts_with(buffer, "ERR "))
+ die(_("remote error: %s"), buffer + 4);
+
*pktlen = len;
return PACKET_READ_NORMAL;
}
return !(isalnum(ch) || ch == '!' || ch == '*' || ch == '+' || ch == '-' || ch == '/');
}
-static int needs_rfc2047_encoding(const char *line, int len,
- enum rfc2047_type type)
+static int needs_rfc2047_encoding(const char *line, int len)
{
int i;
}
strbuf_addstr(sb, "From: ");
- if (needs_rfc2047_encoding(namebuf, namelen, RFC2047_ADDRESS)) {
+ if (needs_rfc2047_encoding(namebuf, namelen)) {
add_rfc2047(sb, namebuf, namelen,
encoding, RFC2047_ADDRESS);
max_length = 76; /* per rfc2047 */
return rest - placeholder;
}
-static size_t parse_padding_placeholder(struct strbuf *sb,
- const char *placeholder,
+static size_t parse_padding_placeholder(const char *placeholder,
struct format_commit_context *c)
{
const char *ch = placeholder;
case '<':
case '>':
- return parse_padding_placeholder(sb, placeholder, c);
+ return parse_padding_placeholder(placeholder, c);
}
/* these depend on the commit */
if (pp->print_email_subject) {
if (pp->rev)
fmt_output_email_subject(sb, pp->rev);
- if (needs_rfc2047_encoding(title.buf, title.len, RFC2047_SUBJECT))
+ if (needs_rfc2047_encoding(title.buf, title.len))
add_rfc2047(sb, title.buf, title.len,
encoding, RFC2047_SUBJECT);
else
* published by the Free Software Foundation.
*/
-#include "git-compat-util.h"
+#include "cache.h"
#include "gettext.h"
#include "progress.h"
#include "strbuf.h"
#include "trace.h"
+#include "utf8.h"
#define TP_IDX_MAX 8
unsigned sparse;
struct throughput *throughput;
uint64_t start_ns;
+ struct strbuf counters_sb;
+ int title_len;
+ int split;
};
static volatile sig_atomic_t progress_update;
return tpgrp < 0 || tpgrp == getpgid(0);
}
-static int display(struct progress *progress, uint64_t n, const char *done)
+static void display(struct progress *progress, uint64_t n, const char *done)
{
- const char *eol, *tp;
+ const char *tp;
+ struct strbuf *counters_sb = &progress->counters_sb;
+ int show_update = 0;
+ int last_count_len = counters_sb->len;
if (progress->delay && (!progress_update || --progress->delay))
- return 0;
+ return;
progress->last_value = n;
tp = (progress->throughput) ? progress->throughput->display.buf : "";
- eol = done ? done : " \r";
if (progress->total) {
unsigned percent = n * 100 / progress->total;
if (percent != progress->last_percent || progress_update) {
progress->last_percent = percent;
- if (is_foreground_fd(fileno(stderr)) || done) {
- fprintf(stderr, "%s: %3u%% (%"PRIuMAX"/%"PRIuMAX")%s%s",
- progress->title, percent,
- (uintmax_t)n, (uintmax_t)progress->total,
- tp, eol);
- fflush(stderr);
- }
- progress_update = 0;
- return 1;
+
+ strbuf_reset(counters_sb);
+ strbuf_addf(counters_sb,
+ "%3u%% (%"PRIuMAX"/%"PRIuMAX")%s", percent,
+ (uintmax_t)n, (uintmax_t)progress->total,
+ tp);
+ show_update = 1;
}
} else if (progress_update) {
+ strbuf_reset(counters_sb);
+ strbuf_addf(counters_sb, "%"PRIuMAX"%s", (uintmax_t)n, tp);
+ show_update = 1;
+ }
+
+ if (show_update) {
if (is_foreground_fd(fileno(stderr)) || done) {
- fprintf(stderr, "%s: %"PRIuMAX"%s%s",
- progress->title, (uintmax_t)n, tp, eol);
+ const char *eol = done ? done : "\r";
+ size_t clear_len = counters_sb->len < last_count_len ?
+ last_count_len - counters_sb->len + 1 :
+ 0;
+ size_t progress_line_len = progress->title_len +
+ counters_sb->len + 2;
+ int cols = term_columns();
+
+ if (progress->split) {
+ fprintf(stderr, " %s%*s", counters_sb->buf,
+ (int) clear_len, eol);
+ } else if (!done && cols < progress_line_len) {
+ clear_len = progress->title_len + 1 < cols ?
+ cols - progress->title_len : 0;
+ fprintf(stderr, "%s:%*s\n %s%s",
+ progress->title, (int) clear_len, "",
+ counters_sb->buf, eol);
+ progress->split = 1;
+ } else {
+ fprintf(stderr, "%s: %s%*s", progress->title,
+ counters_sb->buf, (int) clear_len, eol);
+ }
fflush(stderr);
}
progress_update = 0;
- return 1;
}
-
- return 0;
}
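
The split-line behaviour above is easier to see in isolation. The following standalone sketch shows only the formatting decision; the real code derives the width from term_columns(), keeps the counters in a strbuf, and pads to erase a previously longer line, all of which is omitted here.

#include <stdio.h>
#include <string.h>

/* Print a progress line, splitting onto a second line when it would not fit. */
static void show(const char *title, const char *counters, int cols, int *split)
{
	size_t line_len = strlen(title) + strlen(counters) + 2; /* ": " */

	if (*split)
		fprintf(stderr, "  %s\r", counters);
	else if ((size_t)cols < line_len) {
		fprintf(stderr, "%s:\n  %s\r", title, counters);
		*split = 1;	/* from now on, update only the counters line */
	} else
		fprintf(stderr, "%s: %s\r", title, counters);
}

int main(void)
{
	int split = 0;

	/* a made-up 40-column terminal forces the split on the first update */
	show("Receiving objects", "42% (4200/10000), 1.2 MiB | 1.0 MiB/s", 40, &split);
	show("Receiving objects", "43% (4300/10000), 1.3 MiB | 1.0 MiB/s", 40, &split);
	fputs("\n", stderr);
	return 0;
}
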
static void throughput_string(struct strbuf *buf, uint64_t total,
now_ns = getnanotime();
if (!tp) {
- progress->throughput = tp = calloc(1, sizeof(*tp));
- if (tp) {
- tp->prev_total = tp->curr_total = total;
- tp->prev_ns = now_ns;
- strbuf_init(&tp->display, 0);
- }
+ progress->throughput = tp = xcalloc(1, sizeof(*tp));
+ tp->prev_total = tp->curr_total = total;
+ tp->prev_ns = now_ns;
+ strbuf_init(&tp->display, 0);
return;
}
tp->curr_total = total;
display(progress, progress->last_value, NULL);
}
-int display_progress(struct progress *progress, uint64_t n)
+void display_progress(struct progress *progress, uint64_t n)
{
- return progress ? display(progress, n, NULL) : 0;
+ if (progress)
+ display(progress, n, NULL);
}
static struct progress *start_progress_delay(const char *title, uint64_t total,
unsigned delay, unsigned sparse)
{
- struct progress *progress = malloc(sizeof(*progress));
- if (!progress) {
- /* unlikely, but here's a good fallback */
- fprintf(stderr, "%s...\n", title);
- fflush(stderr);
- return NULL;
- }
+ struct progress *progress = xmalloc(sizeof(*progress));
progress->title = title;
progress->total = total;
progress->last_value = -1;
progress->sparse = sparse;
progress->throughput = NULL;
progress->start_ns = getnanotime();
+ strbuf_init(&progress->counters_sb, 0);
+ progress->title_len = utf8_strwidth(title);
+ progress->split = 0;
set_progress_signal();
return progress;
}
free(buf);
}
clear_progress_signal();
+ strbuf_release(&progress->counters_sb);
if (progress->throughput)
strbuf_release(&progress->throughput->display);
free(progress->throughput);
struct progress;
void display_throughput(struct progress *progress, uint64_t total);
-int display_progress(struct progress *progress, uint64_t n);
+void display_progress(struct progress *progress, uint64_t n);
struct progress *start_progress(const char *title, uint64_t total);
struct progress *start_sparse_progress(const char *title, uint64_t total);
struct progress *start_delayed_progress(const char *title, uint64_t total);
#include "commit.h"
#include "blob.h"
#include "resolve-undo.h"
+#include "run-command.h"
#include "strbuf.h"
#include "varint.h"
#include "split-index.h"
int add_option = (ADD_CACHE_OK_TO_ADD|ADD_CACHE_OK_TO_REPLACE|
(intent_only ? ADD_CACHE_NEW_ONLY : 0));
int hash_flags = HASH_WRITE_OBJECT;
+ struct object_id oid;
if (flags & ADD_CACHE_RENORMALIZE)
hash_flags |= HASH_RENORMALIZE;
namelen = strlen(path);
if (S_ISDIR(st_mode)) {
+ if (resolve_gitlink_ref(path, "HEAD", &oid) < 0)
+ return error(_("'%s' does not have a commit checked out"), path);
while (namelen && path[namelen-1] == '/')
namelen--;
}
uint32_t uid;
uint32_t gid;
uint32_t size;
- unsigned char sha1[20];
- uint16_t flags;
- char name[FLEX_ARRAY]; /* more */
-};
-
-/*
- * This struct is used when CE_EXTENDED bit is 1
- * The struct must match ondisk_cache_entry exactly from
- * ctime till flags
- */
-struct ondisk_cache_entry_extended {
- struct cache_time ctime;
- struct cache_time mtime;
- uint32_t dev;
- uint32_t ino;
- uint32_t mode;
- uint32_t uid;
- uint32_t gid;
- uint32_t size;
- unsigned char sha1[20];
- uint16_t flags;
- uint16_t flags2;
- char name[FLEX_ARRAY]; /* more */
+ /*
+ * unsigned char hash[hashsz];
+ * uint16_t flags;
+ * if (flags & CE_EXTENDED)
+ * uint16_t flags2;
+ */
+ unsigned char data[GIT_MAX_RAWSZ + 2 * sizeof(uint16_t)];
+ char name[FLEX_ARRAY];
};
/* These are only used for v3 or lower */
#define align_padding_size(size, len) ((size + (len) + 8) & ~7) - (size + len)
-#define align_flex_name(STRUCT,len) ((offsetof(struct STRUCT,name) + (len) + 8) & ~7)
+#define align_flex_name(STRUCT,len) ((offsetof(struct STRUCT,data) + (len) + 8) & ~7)
#define ondisk_cache_entry_size(len) align_flex_name(ondisk_cache_entry,len)
-#define ondisk_cache_entry_extended_size(len) align_flex_name(ondisk_cache_entry_extended,len)
-#define ondisk_ce_size(ce) (((ce)->ce_flags & CE_EXTENDED) ? \
- ondisk_cache_entry_extended_size(ce_namelen(ce)) : \
- ondisk_cache_entry_size(ce_namelen(ce)))
+#define ondisk_data_size(flags, len) (the_hash_algo->rawsz + \
+ ((flags & CE_EXTENDED) ? 2 : 1) * sizeof(uint16_t) + len)
+#define ondisk_data_size_max(len) (ondisk_data_size(CE_EXTENDED, len))
+#define ondisk_ce_size(ce) (ondisk_cache_entry_size(ondisk_data_size((ce)->ce_flags, ce_namelen(ce))))
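
The size arithmetic can be checked in isolation. A standalone sketch follows, reusing the same expressions with assumed values: a 40-byte fixed stat area standing in for offsetof(struct ondisk_cache_entry, data), CE_EXTENDED as defined in cache.h, and hash sizes of 20 (SHA-1) and 32 (SHA-256).

#include <stdint.h>
#include <stdio.h>

#define CE_EXTENDED 0x4000	/* assumed, matches cache.h */
#define FIXED_PART  40		/* assumed offsetof(..., data): ctime..size */

/* hash + flags (+ flags2 when CE_EXTENDED) + name, as in ondisk_data_size() */
static size_t data_size(unsigned flags, size_t rawsz, size_t namelen)
{
	return rawsz + ((flags & CE_EXTENDED) ? 2 : 1) * sizeof(uint16_t) + namelen;
}

/* 8-byte alignment of the whole entry, as in align_flex_name() */
static size_t entry_size(unsigned flags, size_t rawsz, size_t namelen)
{
	return (FIXED_PART + data_size(flags, rawsz, namelen) + 8) & ~(size_t)7;
}

int main(void)
{
	printf("SHA-1,   plain:    %zu\n", entry_size(0, 20, 8));		/* "Makefile" */
	printf("SHA-1,   extended: %zu\n", entry_size(CE_EXTENDED, 20, 8));
	printf("SHA-256, plain:    %zu\n", entry_size(0, 32, 8));
	return 0;
}
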
/* Allow fsck to force verification of the index checksum. */
int verify_index_checksum;
struct cache_entry *ce;
size_t len;
const char *name;
+ const unsigned hashsz = the_hash_algo->rawsz;
+ const uint16_t *flagsp = (const uint16_t *)(ondisk->data + hashsz);
unsigned int flags;
size_t copy_len = 0;
/*
int expand_name_field = version == 4;
/* On-disk flags are just 16 bits */
- flags = get_be16(&ondisk->flags);
+ flags = get_be16(flagsp);
len = flags & CE_NAMEMASK;
if (flags & CE_EXTENDED) {
- struct ondisk_cache_entry_extended *ondisk2;
int extended_flags;
- ondisk2 = (struct ondisk_cache_entry_extended *)ondisk;
- extended_flags = get_be16(&ondisk2->flags2) << 16;
+ extended_flags = get_be16(flagsp + 1) << 16;
/* We do not yet understand any bit out of CE_EXTENDED_FLAGS */
if (extended_flags & ~CE_EXTENDED_FLAGS)
die(_("unknown index entry format 0x%08x"), extended_flags);
flags |= extended_flags;
- name = ondisk2->name;
+ name = (const char *)(flagsp + 2);
}
else
- name = ondisk->name;
+ name = (const char *)(flagsp + 1);
if (expand_name_field) {
const unsigned char *cp = (const unsigned char *)name;
ce->ce_flags = flags & ~CE_NAMEMASK;
ce->ce_namelen = len;
ce->index = 0;
- hashcpy(ce->oid.hash, ondisk->sha1);
+ hashcpy(ce->oid.hash, ondisk->data);
+ memcpy(ce->name, name, len);
+ ce->name[len] = '\0';
if (expand_name_field) {
if (copy_len)
struct cache_entry *ce)
{
short flags;
+ const unsigned hashsz = the_hash_algo->rawsz;
+ uint16_t *flagsp = (uint16_t *)(ondisk->data + hashsz);
ondisk->ctime.sec = htonl(ce->ce_stat_data.sd_ctime.sec);
ondisk->mtime.sec = htonl(ce->ce_stat_data.sd_mtime.sec);
ondisk->uid = htonl(ce->ce_stat_data.sd_uid);
ondisk->gid = htonl(ce->ce_stat_data.sd_gid);
ondisk->size = htonl(ce->ce_stat_data.sd_size);
- hashcpy(ondisk->sha1, ce->oid.hash);
+ hashcpy(ondisk->data, ce->oid.hash);
flags = ce->ce_flags & ~CE_NAMEMASK;
flags |= (ce_namelen(ce) >= CE_NAMEMASK ? CE_NAMEMASK : ce_namelen(ce));
- ondisk->flags = htons(flags);
+ flagsp[0] = htons(flags);
if (ce->ce_flags & CE_EXTENDED) {
- struct ondisk_cache_entry_extended *ondisk2;
- ondisk2 = (struct ondisk_cache_entry_extended *)ondisk;
- ondisk2->flags2 = htons((ce->ce_flags & CE_EXTENDED_FLAGS) >> 16);
+ flagsp[1] = htons((ce->ce_flags & CE_EXTENDED_FLAGS) >> 16);
}
}
stripped_name = 1;
}
- if (ce->ce_flags & CE_EXTENDED)
- size = offsetof(struct ondisk_cache_entry_extended, name);
- else
- size = offsetof(struct ondisk_cache_entry, name);
+ size = offsetof(struct ondisk_cache_entry,data) + ondisk_data_size(ce->ce_flags, 0);
if (!previous_name) {
int len = ce_namelen(ce);
struct cache_entry **cache = istate->cache;
int entries = istate->cache_nr;
struct stat st;
- struct ondisk_cache_entry_extended ondisk;
+ struct ondisk_cache_entry ondisk;
struct strbuf previous_name_buf = STRBUF_INIT, *previous_name;
int drop_cache_tree = istate->drop_cache_tree;
off_t offset;
if (ret)
return ret;
if (flags & COMMIT_LOCK)
- return commit_locked_index(lock);
- return close_lock_file_gently(lock);
+ ret = commit_locked_index(lock);
+ else
+ ret = close_lock_file_gently(lock);
+
+ run_hook_le(NULL, "post-index-change",
+ istate->updated_workdir ? "1" : "0",
+ istate->updated_skipworktree ? "1" : "0", NULL);
+ istate->updated_workdir = 0;
+ istate->updated_skipworktree = 0;
+
+ return ret;
}
static int write_split_index(struct index_state *istate,
#include "cache.h"
#include "commit.h"
-#include "rebase-interactive.h"
#include "sequencer.h"
+#include "rebase-interactive.h"
#include "strbuf.h"
+#include "commit-slab.h"
+#include "config.h"
+
+enum missing_commit_check_level {
+ MISSING_COMMIT_CHECK_IGNORE = 0,
+ MISSING_COMMIT_CHECK_WARN,
+ MISSING_COMMIT_CHECK_ERROR
+};
+
+static enum missing_commit_check_level get_missing_commit_check_level(void)
+{
+ const char *value;
+
+ if (git_config_get_value("rebase.missingcommitscheck", &value) ||
+ !strcasecmp("ignore", value))
+ return MISSING_COMMIT_CHECK_IGNORE;
+ if (!strcasecmp("warn", value))
+ return MISSING_COMMIT_CHECK_WARN;
+ if (!strcasecmp("error", value))
+ return MISSING_COMMIT_CHECK_ERROR;
+ warning(_("unrecognized setting %s for option "
+ "rebase.missingCommitsCheck. Ignoring."), value);
+ return MISSING_COMMIT_CHECK_IGNORE;
+}
-void append_todo_help(unsigned edit_todo, unsigned keep_empty,
+void append_todo_help(unsigned keep_empty, int command_count,
+ const char *shortrevisions, const char *shortonto,
struct strbuf *buf)
{
const char *msg = _("\nCommands:\n"
". specified). Use -c <commit> to reword the commit message.\n"
"\n"
"These lines can be re-ordered; they are executed from top to bottom.\n");
+ unsigned edit_todo = !(shortrevisions && shortonto);
+
+ if (!edit_todo) {
+ strbuf_addch(buf, '\n');
+ strbuf_commented_addf(buf, Q_("Rebase %s onto %s (%d command)",
+ "Rebase %s onto %s (%d commands)",
+ command_count),
+ shortrevisions, shortonto, command_count);
+ }
strbuf_add_commented_lines(buf, msg, strlen(msg));
}
}
-int edit_todo_list(struct repository *r, unsigned flags)
+int edit_todo_list(struct repository *r, struct todo_list *todo_list,
+ struct todo_list *new_todo, const char *shortrevisions,
+ const char *shortonto, unsigned flags)
{
- struct strbuf buf = STRBUF_INIT;
const char *todo_file = rebase_path_todo();
+ unsigned initial = shortrevisions && shortonto;
- if (strbuf_read_file(&buf, todo_file, 0) < 0)
- return error_errno(_("could not read '%s'."), todo_file);
+ /* If the user is editing the todo list, we first try to parse
+ * it. If there is an error, we do not return, because the user
+ * might want to fix it in the first place. */
+ if (!initial)
+ todo_list_parse_insn_buffer(r, todo_list->buf.buf, todo_list);
- strbuf_stripspace(&buf, 1);
- if (write_message(buf.buf, buf.len, todo_file, 0)) {
- strbuf_release(&buf);
- return -1;
- }
+ if (todo_list_write_to_file(r, todo_list, todo_file, shortrevisions, shortonto,
+ -1, flags | TODO_LIST_SHORTEN_IDS | TODO_LIST_APPEND_TODO_HELP))
+ return error_errno(_("could not write '%s'"), todo_file);
+
+ if (initial && copy_file(rebase_path_todo_backup(), todo_file, 0666))
+ return error(_("could not copy '%s' to '%s'."), todo_file,
+ rebase_path_todo_backup());
- strbuf_release(&buf);
+ if (launch_sequence_editor(todo_file, &new_todo->buf, NULL))
+ return -2;
- transform_todos(r, flags | TODO_LIST_SHORTEN_IDS);
+ strbuf_stripspace(&new_todo->buf, 1);
+ if (initial && new_todo->buf.len == 0)
+ return -3;
- if (strbuf_read_file(&buf, todo_file, 0) < 0)
- return error_errno(_("could not read '%s'."), todo_file);
+ /* For the initial edit, the todo list gets parsed in
+ * complete_action(). */
+ if (!initial)
+ return todo_list_parse_insn_buffer(r, new_todo->buf.buf, new_todo);
- append_todo_help(1, 0, &buf);
- if (write_message(buf.buf, buf.len, todo_file, 0)) {
- strbuf_release(&buf);
- return -1;
+ return 0;
+}
+
+define_commit_slab(commit_seen, unsigned char);
+/*
+ * Check if the user dropped some commits by mistake
+ * Behaviour determined by rebase.missingCommitsCheck.
+ * Check if there is an unrecognized command or a
+ * bad SHA-1 in a command.
+ */
+int todo_list_check(struct todo_list *old_todo, struct todo_list *new_todo)
+{
+ enum missing_commit_check_level check_level = get_missing_commit_check_level();
+ struct strbuf missing = STRBUF_INIT;
+ int res = 0, i;
+ struct commit_seen commit_seen;
+
+ init_commit_seen(&commit_seen);
+
+ if (check_level == MISSING_COMMIT_CHECK_IGNORE)
+ goto leave_check;
+
+ /* Mark the commits in git-rebase-todo as seen */
+ for (i = 0; i < new_todo->nr; i++) {
+ struct commit *commit = new_todo->items[i].commit;
+ if (commit)
+ *commit_seen_at(&commit_seen, commit) = 1;
}
- strbuf_release(&buf);
+ /* Find commits in git-rebase-todo.backup yet unseen */
+ for (i = old_todo->nr - 1; i >= 0; i--) {
+ struct todo_item *item = old_todo->items + i;
+ struct commit *commit = item->commit;
+ if (commit && !*commit_seen_at(&commit_seen, commit)) {
+ strbuf_addf(&missing, " - %s %.*s\n",
+ find_unique_abbrev(&commit->object.oid, DEFAULT_ABBREV),
+ item->arg_len,
+ todo_item_get_arg(old_todo, item));
+ *commit_seen_at(&commit_seen, commit) = 1;
+ }
+ }
- if (launch_sequence_editor(todo_file, NULL, NULL))
- return -1;
+ /* Warn about missing commits */
+ if (!missing.len)
+ goto leave_check;
- transform_todos(r, flags & ~(TODO_LIST_SHORTEN_IDS));
+ if (check_level == MISSING_COMMIT_CHECK_ERROR)
+ res = 1;
- return 0;
+ fprintf(stderr,
+ _("Warning: some commits may have been dropped accidentally.\n"
+ "Dropped commits (newer to older):\n"));
+
+ /* Make the list user-friendly and display */
+ fputs(missing.buf, stderr);
+ strbuf_release(&missing);
+
+ fprintf(stderr, _("To avoid this message, use \"drop\" to "
+ "explicitly remove a commit.\n\n"
+ "Use 'git config rebase.missingCommitsCheck' to change "
+ "the level of warnings.\n"
+ "The possible behaviours are: ignore, warn, error.\n\n"));
+
+leave_check:
+ clear_commit_seen(&commit_seen);
+ return res;
}
struct strbuf;
struct repository;
+struct todo_list;
-void append_todo_help(unsigned edit_todo, unsigned keep_empty,
+void append_todo_help(unsigned keep_empty, int command_count,
+ const char *shortrevisions, const char *shortonto,
struct strbuf *buf);
-int edit_todo_list(struct repository *r, unsigned flags);
+int edit_todo_list(struct repository *r, struct todo_list *todo_list,
+ struct todo_list *new_todo, const char *shortrevisions,
+ const char *shortonto, unsigned flags);
+int todo_list_check(struct todo_list *old_todo, struct todo_list *new_todo);
#endif
int parse_opt_ref_sorting(const struct option *opt, const char *arg, int unset)
{
- if (!arg) /* should --no-sort void the list ? */
- return -1;
+ /*
+ * NEEDSWORK: We should probably clear the list in this case, but we've
+ * already munged the global used_atoms list, which would need to be
+ * undone.
+ */
+ BUG_ON_OPT_NEG(unset);
+
parse_ref_sorting(opt->value, arg);
return 0;
}
#define OPT_MERGED(f, h) _OPT_MERGED_NO_MERGED("merged", f, h)
#define OPT_NO_MERGED(f, h) _OPT_MERGED_NO_MERGED("no-merged", f, h)
+#define OPT_REF_SORT(var) \
+ OPT_CALLBACK_F(0, "sort", (var), \
+ N_("key"), N_("field name to sort on"), \
+ PARSE_OPT_NONEG, parse_opt_ref_sorting)
+
/*
* API for filtering a set of refs. Based on the type of refs the user
* has requested, we iterate through those refs and apply filters
/* LHS */
if (!*item->src)
; /* empty is ok; it means "HEAD" */
- else if (llen == GIT_SHA1_HEXSZ && !get_oid_hex(item->src, &unused))
+ else if (llen == the_hash_algo->hexsz && !get_oid_hex(item->src, &unused))
item->exact_sha1 = 1; /* ok */
else if (!check_refname_format(item->src, flags))
; /* valid looking ref is ok */
if (data[i] == '\t')
mid = &data[i];
if (data[i] == '\n') {
- if (mid - start != 40)
+ if (mid - start != the_hash_algo->hexsz)
die(_("%sinfo/refs not valid: is this a git repository?"),
transport_anonymize_url(url.buf));
data[i] = 0;
const char *name;
struct ref *ref;
struct object_id old_oid;
+ const char *q;
- if (get_oid_hex(p, &old_oid))
+ if (parse_oid_hex(p, &old_oid, &q))
die(_("protocol error: expected sha/ref, got %s'"), p);
- if (p[GIT_SHA1_HEXSZ] == ' ')
- name = p + GIT_SHA1_HEXSZ + 1;
- else if (!p[GIT_SHA1_HEXSZ])
+ if (*q == ' ')
+ name = q + 1;
+ else if (!*q)
name = "";
else
die(_("protocol error: expected sha/ref, got %s'"), p);
return ret;
}
-static void free_ref(struct ref *ref)
+void free_one_ref(struct ref *ref)
{
if (!ref)
return;
- free_ref(ref->peer_ref);
+ free_one_ref(ref->peer_ref);
free(ref->remote_status);
free(ref->symref);
free(ref);
struct ref *next;
while (ref) {
next = ref->next;
- free_ref(ref);
+ free_one_ref(ref);
ref = next;
}
}
int check_ref_type(const struct ref *ref, int flags);
/*
- * Frees the entire list and peers of elements.
+ * Free a single ref and its peer, or an entire list of refs and their peers,
+ * respectively.
*/
+void free_one_ref(struct ref *ref);
void free_refs(struct ref *ref);
struct oid_array;
commit->object.flags |= TREESAME;
}
-static void commit_list_insert_by_date_cached(struct commit *p, struct commit_list **head,
- struct commit_list *cached_base, struct commit_list **cache)
-{
- struct commit_list *new_entry;
-
- if (cached_base && p->date < cached_base->item->date)
- new_entry = commit_list_insert_by_date(p, &cached_base->next);
- else
- new_entry = commit_list_insert_by_date(p, head);
-
- if (cache && (!*cache || p->date < (*cache)->item->date))
- *cache = new_entry;
-}
-
static int process_parents(struct rev_info *revs, struct commit *commit,
- struct commit_list **list, struct commit_list **cache_ptr)
+ struct commit_list **list, struct prio_queue *queue)
{
struct commit_list *parent = commit->parents;
unsigned left_flag;
- struct commit_list *cached_base = cache_ptr ? *cache_ptr : NULL;
if (commit->object.flags & ADDED)
return 0;
continue;
p->object.flags |= SEEN;
if (list)
- commit_list_insert_by_date_cached(p, list, cached_base, cache_ptr);
+ commit_list_insert_by_date(p, list);
+ if (queue)
+ prio_queue_put(queue, p);
}
return 0;
}
if (!(p->object.flags & SEEN)) {
p->object.flags |= SEEN;
if (list)
- commit_list_insert_by_date_cached(p, list, cached_base, cache_ptr);
+ commit_list_insert_by_date(p, list);
+ if (queue)
+ prio_queue_put(queue, p);
}
if (revs->first_parent_only)
break;
return 0;
}
-static void read_pathspec_from_stdin(struct rev_info *revs, struct strbuf *sb,
+static void read_pathspec_from_stdin(struct strbuf *sb,
struct argv_array *prune)
{
while (strbuf_getline(sb, stdin) != EOF)
die("bad revision '%s'", sb.buf);
}
if (seen_dashdash)
- read_pathspec_from_stdin(revs, &sb, prune);
+ read_pathspec_from_stdin(&sb, prune);
strbuf_release(&sb);
warn_on_object_refname_ambiguity = save_warning;
return st;
}
-static int mark_redundant_parents(struct rev_info *revs, struct commit *commit)
+static int mark_redundant_parents(struct commit *commit)
{
struct commit_list *h = reduce_heads(commit->parents);
int i = 0, marked = 0;
return marked;
}
-static int mark_treesame_root_parents(struct rev_info *revs, struct commit *commit)
+static int mark_treesame_root_parents(struct commit *commit)
{
struct commit_list *p;
int marked = 0;
* Detect and simplify both cases.
*/
if (1 < cnt) {
- int marked = mark_redundant_parents(revs, commit);
- marked += mark_treesame_root_parents(revs, commit);
+ int marked = mark_redundant_parents(commit);
+ marked += mark_treesame_root_parents(commit);
if (marked)
marked -= leave_one_treesame_to_parent(revs, commit);
if (marked)
return 0;
}
-static enum rewrite_result rewrite_one(struct rev_info *revs, struct commit **pp)
+static enum rewrite_result rewrite_one_1(struct rev_info *revs,
+ struct commit **pp,
+ struct prio_queue *queue)
{
- struct commit_list *cache = NULL;
-
for (;;) {
struct commit *p = *pp;
if (!revs->limited)
- if (process_parents(revs, p, &revs->commits, &cache) < 0)
+ if (process_parents(revs, p, NULL, queue) < 0)
return rewrite_one_error;
if (p->object.flags & UNINTERESTING)
return rewrite_one_ok;
}
}
+static void merge_queue_into_list(struct prio_queue *q, struct commit_list **list)
+{
+ while (q->nr) {
+ struct commit *item = prio_queue_peek(q);
+ struct commit_list *p = *list;
+
+ if (p && p->item->date >= item->date)
+ list = &p->next;
+ else {
+ p = commit_list_insert(item, list);
+ list = &p->next; /* skip newly added item */
+ prio_queue_get(q); /* pop item */
+ }
+ }
+}
+
+static enum rewrite_result rewrite_one(struct rev_info *revs, struct commit **pp)
+{
+ struct prio_queue queue = { compare_commits_by_commit_date };
+ enum rewrite_result ret = rewrite_one_1(revs, pp, &queue);
+ merge_queue_into_list(&queue, &revs->commits);
+ clear_prio_queue(&queue);
+ return ret;
+}
+
int rewrite_parents(struct rev_info *revs, struct commit *commit,
rewrite_parent_fn_t rewrite_parent)
{
* file and written to the tail of 'done'.
*/
GIT_PATH_FUNC(rebase_path_todo, "rebase-merge/git-rebase-todo")
-static GIT_PATH_FUNC(rebase_path_todo_backup,
- "rebase-merge/git-rebase-todo.backup")
+GIT_PATH_FUNC(rebase_path_todo_backup, "rebase-merge/git-rebase-todo.backup")
/*
* The rebase command lines that have already been processed. A line
}
}
-int write_message(const void *buf, size_t len, const char *filename,
- int append_eol)
+static int write_message(const void *buf, size_t len, const char *filename,
+ int append_eol)
{
struct lock_file msg_file = LOCK_INIT;
return 1;
}
-/*
- * Note that ordering matters in this enum. Not only must it match the mapping
- * below, it is also divided into several sections that matter. When adding
- * new commands, make sure you add it in the right section.
- */
-enum todo_command {
- /* commands that handle commits */
- TODO_PICK = 0,
- TODO_REVERT,
- TODO_EDIT,
- TODO_REWORD,
- TODO_FIXUP,
- TODO_SQUASH,
- /* commands that do something else than handling a single commit */
- TODO_EXEC,
- TODO_BREAK,
- TODO_LABEL,
- TODO_RESET,
- TODO_MERGE,
- /* commands that do nothing but are counted for reporting progress */
- TODO_NOOP,
- TODO_DROP,
- /* comments (not counted for reporting progress) */
- TODO_COMMENT
-};
-
static struct {
char c;
const char *str;
TODO_EDIT_MERGE_MSG = 1
};
-struct todo_item {
- enum todo_command command;
- struct commit *commit;
- unsigned int flags;
- const char *arg;
- int arg_len;
- size_t offset_in_buf;
-};
-
-struct todo_list {
- struct strbuf buf;
- struct todo_item *items;
- int nr, alloc, current;
- int done_nr, total_nr;
- struct stat_data stat;
-};
-
-#define TODO_LIST_INIT { STRBUF_INIT }
-
-static void todo_list_release(struct todo_list *todo_list)
+void todo_list_release(struct todo_list *todo_list)
{
strbuf_release(&todo_list->buf);
FREE_AND_NULL(todo_list->items);
return todo_list->items + todo_list->nr++;
}
+const char *todo_item_get_arg(struct todo_list *todo_list,
+ struct todo_item *item)
+{
+ return todo_list->buf.buf + item->arg_offset;
+}
+
static int parse_insn_line(struct repository *r, struct todo_item *item,
- const char *bol, char *eol)
+ const char *buf, const char *bol, char *eol)
{
struct object_id commit_oid;
char *end_of_object_name;
if (bol == eol || *bol == '\r' || *bol == comment_line_char) {
item->command = TODO_COMMENT;
item->commit = NULL;
- item->arg = bol;
+ item->arg_offset = bol - buf;
item->arg_len = eol - bol;
return 0;
}
return error(_("%s does not accept arguments: '%s'"),
command_to_string(item->command), bol);
item->commit = NULL;
- item->arg = bol;
+ item->arg_offset = bol - buf;
item->arg_len = eol - bol;
return 0;
}
if (item->command == TODO_EXEC || item->command == TODO_LABEL ||
item->command == TODO_RESET) {
item->commit = NULL;
- item->arg = bol;
+ item->arg_offset = bol - buf;
item->arg_len = (int)(eol - bol);
return 0;
}
} else {
item->flags |= TODO_EDIT_MERGE_MSG;
item->commit = NULL;
- item->arg = bol;
+ item->arg_offset = bol - buf;
item->arg_len = (int)(eol - bol);
return 0;
}
status = get_oid(bol, &commit_oid);
*end_of_object_name = saved;
- item->arg = end_of_object_name + strspn(end_of_object_name, " \t");
- item->arg_len = (int)(eol - item->arg);
+ bol = end_of_object_name + strspn(end_of_object_name, " \t");
+ item->arg_offset = bol - buf;
+ item->arg_len = (int)(eol - bol);
if (status < 0)
return error(_("could not parse '%.*s'"),
return !item->commit;
}
-static int parse_insn_buffer(struct repository *r, char *buf,
- struct todo_list *todo_list)
+int todo_list_parse_insn_buffer(struct repository *r, char *buf,
+ struct todo_list *todo_list)
{
struct todo_item *item;
char *p = buf, *next_p;
int i, res = 0, fixup_okay = file_exists(rebase_path_done());
+ todo_list->current = todo_list->nr = 0;
+
for (i = 1; *p; i++, p = next_p) {
char *eol = strchrnul(p, '\n');
item = append_new_todo(todo_list);
item->offset_in_buf = p - todo_list->buf.buf;
- if (parse_insn_line(r, item, p, eol)) {
+ if (parse_insn_line(r, item, buf, p, eol)) {
res = error(_("invalid line %d: %.*s"),
i, (int)(eol - p), p);
- item->command = TODO_NOOP;
+ item->command = TODO_COMMENT + 1;
+ item->arg_offset = p - buf;
+ item->arg_len = (int)(eol - p);
+ item->commit = NULL;
}
if (fixup_okay)
return error(_("could not stat '%s'"), todo_file);
fill_stat_data(&todo_list->stat, &st);
- res = parse_insn_buffer(r, todo_list->buf.buf, todo_list);
+ res = todo_list_parse_insn_buffer(r, todo_list->buf.buf, todo_list);
if (res) {
if (is_rebase_i(opts))
return error(_("please fix this using "
FILE *f = fopen_or_warn(rebase_path_msgtotal(), "w");
if (strbuf_read_file(&done.buf, rebase_path_done(), 0) > 0 &&
- !parse_insn_buffer(r, done.buf.buf, &done))
+ !todo_list_parse_insn_buffer(r, done.buf.buf, &done))
todo_list->done_nr = count_commands(&done);
else
todo_list->done_nr = 0;
opts->no_commit = git_config_bool_or_int(key, value, &error_flag);
else if (!strcmp(key, "options.edit"))
opts->edit = git_config_bool_or_int(key, value, &error_flag);
+ else if (!strcmp(key, "options.allow-empty"))
+ opts->allow_empty =
+ git_config_bool_or_int(key, value, &error_flag);
+ else if (!strcmp(key, "options.allow-empty-message"))
+ opts->allow_empty_message =
+ git_config_bool_or_int(key, value, &error_flag);
+ else if (!strcmp(key, "options.keep-redundant-commits"))
+ opts->keep_redundant_commits =
+ git_config_bool_or_int(key, value, &error_flag);
else if (!strcmp(key, "options.signoff"))
opts->signoff = git_config_bool_or_int(key, value, &error_flag);
else if (!strcmp(key, "options.record-origin"))
item->command = command;
item->commit = commit;
- item->arg = NULL;
+ item->arg_offset = 0;
item->arg_len = 0;
item->offset_in_buf = todo_list->buf.len;
subject_len = find_commit_subject(commit_buffer, &subject);
int res = 0;
if (opts->no_commit)
- res |= git_config_set_in_file_gently(opts_file, "options.no-commit", "true");
+ res |= git_config_set_in_file_gently(opts_file,
+ "options.no-commit", "true");
if (opts->edit)
- res |= git_config_set_in_file_gently(opts_file, "options.edit", "true");
+ res |= git_config_set_in_file_gently(opts_file,
+ "options.edit", "true");
+ if (opts->allow_empty)
+ res |= git_config_set_in_file_gently(opts_file,
+ "options.allow-empty", "true");
+ if (opts->allow_empty_message)
+ res |= git_config_set_in_file_gently(opts_file,
+ "options.allow-empty-message", "true");
+ if (opts->keep_redundant_commits)
+ res |= git_config_set_in_file_gently(opts_file,
+ "options.keep-redundant-commits", "true");
if (opts->signoff)
- res |= git_config_set_in_file_gently(opts_file, "options.signoff", "true");
+ res |= git_config_set_in_file_gently(opts_file,
+ "options.signoff", "true");
if (opts->record_origin)
- res |= git_config_set_in_file_gently(opts_file, "options.record-origin", "true");
+ res |= git_config_set_in_file_gently(opts_file,
+ "options.record-origin", "true");
if (opts->allow_ff)
- res |= git_config_set_in_file_gently(opts_file, "options.allow-ff", "true");
+ res |= git_config_set_in_file_gently(opts_file,
+ "options.allow-ff", "true");
if (opts->mainline) {
struct strbuf buf = STRBUF_INIT;
strbuf_addf(&buf, "%d", opts->mainline);
- res |= git_config_set_in_file_gently(opts_file, "options.mainline", buf.buf);
+ res |= git_config_set_in_file_gently(opts_file,
+ "options.mainline", buf.buf);
strbuf_release(&buf);
}
if (opts->strategy)
- res |= git_config_set_in_file_gently(opts_file, "options.strategy", opts->strategy);
+ res |= git_config_set_in_file_gently(opts_file,
+ "options.strategy", opts->strategy);
if (opts->gpg_sign)
- res |= git_config_set_in_file_gently(opts_file, "options.gpg-sign", opts->gpg_sign);
+ res |= git_config_set_in_file_gently(opts_file,
+ "options.gpg-sign", opts->gpg_sign);
if (opts->xopts) {
int i;
for (i = 0; i < opts->xopts_nr; i++)
res |= git_config_set_multivar_in_file_gently(opts_file,
- "options.strategy-option",
- opts->xopts[i], "^$", 0);
+ "options.strategy-option",
+ opts->xopts[i], "^$", 0);
}
if (opts->allow_rerere_auto)
- res |= git_config_set_in_file_gently(opts_file, "options.allow-rerere-auto",
- opts->allow_rerere_auto == RERERE_AUTOUPDATE ?
- "true" : "false");
+ res |= git_config_set_in_file_gently(opts_file,
+ "options.allow-rerere-auto",
+ opts->allow_rerere_auto == RERERE_AUTOUPDATE ?
+ "true" : "false");
return res;
}
while (todo_list->current < todo_list->nr) {
struct todo_item *item = todo_list->items + todo_list->current;
+ const char *arg = todo_item_get_arg(todo_list, item);
+
if (save_todo(todo_list, opts))
return -1;
if (is_rebase_i(opts)) {
fprintf(stderr,
_("Stopped at %s... %.*s\n"),
short_commit_name(commit),
- item->arg_len, item->arg);
+ item->arg_len, arg);
return error_with_patch(r, commit,
- item->arg, item->arg_len, opts, res,
- !res);
+ arg, item->arg_len, opts, res, !res);
}
if (is_rebase_i(opts) && !res)
record_in_rewritten(&item->commit->object.oid,
if (res == 1)
intend_to_amend();
return error_failed_squash(r, item->commit, opts,
- item->arg_len, item->arg);
+ item->arg_len, arg);
} else if (res && is_rebase_i(opts) && item->commit) {
int to_amend = 0;
struct object_id oid;
to_amend = 1;
return res | error_with_patch(r, item->commit,
- item->arg, item->arg_len, opts,
+ arg, item->arg_len, opts,
res, to_amend);
}
} else if (item->command == TODO_EXEC) {
- char *end_of_arg = (char *)(item->arg + item->arg_len);
+ char *end_of_arg = (char *)(arg + item->arg_len);
int saved = *end_of_arg;
struct stat st;
*end_of_arg = '\0';
- res = do_exec(r, item->arg);
+ res = do_exec(r, arg);
*end_of_arg = saved;
if (res) {
todo_list->current = -1;
}
} else if (item->command == TODO_LABEL) {
- if ((res = do_label(r, item->arg, item->arg_len)))
+ if ((res = do_label(r, arg, item->arg_len)))
reschedule = 1;
} else if (item->command == TODO_RESET) {
- if ((res = do_reset(r, item->arg, item->arg_len, opts)))
+ if ((res = do_reset(r, arg, item->arg_len, opts)))
reschedule = 1;
} else if (item->command == TODO_MERGE) {
if ((res = do_merge(r, item->commit,
- item->arg, item->arg_len,
+ arg, item->arg_len,
item->flags, opts)) < 0)
reschedule = 1;
else if (item->commit)
if (res > 0)
/* failed with merge conflicts */
return error_with_patch(r, item->commit,
- item->arg,
- item->arg_len, opts,
- res, 0);
+ arg, item->arg_len,
+ opts, res, 0);
} else if (!is_noop(item->command))
return error(_("unknown command %d"), item->command);
if (item->commit)
return error_with_patch(r,
item->commit,
- item->arg,
- item->arg_len, opts,
- res, 0);
+ arg, item->arg_len,
+ opts, res, 0);
}
todo_list->current++;
}
static int make_script_with_merges(struct pretty_print_context *pp,
- struct rev_info *revs, FILE *out,
+ struct rev_info *revs, struct strbuf *out,
unsigned flags)
{
int keep_empty = flags & TODO_LIST_KEEP_EMPTY;
* gathering commits not yet shown, reversing the list on the fly,
* then outputting that list (labeling revisions as needed).
*/
- fprintf(out, "%s onto\n", cmd_label);
+ strbuf_addf(out, "%s onto\n", cmd_label);
for (iter = tips; iter; iter = iter->next) {
struct commit_list *list = NULL, *iter2;
entry = oidmap_get(&state.commit2label, &commit->object.oid);
if (entry)
- fprintf(out, "\n%c Branch %s\n", comment_line_char, entry->string);
+ strbuf_addf(out, "\n%c Branch %s\n", comment_line_char, entry->string);
else
- fprintf(out, "\n");
+ strbuf_addch(out, '\n');
while (oidset_contains(&interesting, &commit->object.oid) &&
!oidset_contains(&shown, &commit->object.oid)) {
}
if (!commit)
- fprintf(out, "%s %s\n", cmd_reset,
- rebase_cousins ? "onto" : "[new root]");
+ strbuf_addf(out, "%s %s\n", cmd_reset,
+ rebase_cousins ? "onto" : "[new root]");
else {
const char *to = NULL;
&state);
if (!to || !strcmp(to, "onto"))
- fprintf(out, "%s onto\n", cmd_reset);
+ strbuf_addf(out, "%s onto\n", cmd_reset);
else {
strbuf_reset(&oneline);
pretty_print_commit(pp, commit, &oneline);
- fprintf(out, "%s %s # %s\n",
- cmd_reset, to, oneline.buf);
+ strbuf_addf(out, "%s %s # %s\n",
+ cmd_reset, to, oneline.buf);
}
}
entry = oidmap_get(&commit2todo, oid);
/* only show if not already upstream */
if (entry)
- fprintf(out, "%s\n", entry->string);
+ strbuf_addf(out, "%s\n", entry->string);
entry = oidmap_get(&state.commit2label, oid);
if (entry)
- fprintf(out, "%s %s\n",
- cmd_label, entry->string);
+ strbuf_addf(out, "%s %s\n",
+ cmd_label, entry->string);
oidset_insert(&shown, oid);
}
return 0;
}
-int sequencer_make_script(struct repository *r, FILE *out,
- int argc, const char **argv,
- unsigned flags)
+int sequencer_make_script(struct repository *r, struct strbuf *out, int argc,
+ const char **argv, unsigned flags)
{
char *format = NULL;
struct pretty_print_context pp = {0};
- struct strbuf buf = STRBUF_INIT;
struct rev_info revs;
struct commit *commit;
int keep_empty = flags & TODO_LIST_KEEP_EMPTY;
if (!is_empty && (commit->object.flags & PATCHSAME))
continue;
- strbuf_reset(&buf);
if (!keep_empty && is_empty)
- strbuf_addf(&buf, "%c ", comment_line_char);
- strbuf_addf(&buf, "%s %s ", insn,
+ strbuf_addf(out, "%c ", comment_line_char);
+ strbuf_addf(out, "%s %s ", insn,
oid_to_hex(&commit->object.oid));
- pretty_print_commit(&pp, commit, &buf);
- strbuf_addch(&buf, '\n');
- fputs(buf.buf, out);
+ pretty_print_commit(&pp, commit, out);
+ strbuf_addch(out, '\n');
}
- strbuf_release(&buf);
return 0;
}
* Add commands after pick and (series of) squash/fixup commands
* in the todo list.
*/
-int sequencer_add_exec_commands(struct repository *r,
- const char *commands)
+void todo_list_add_exec_commands(struct todo_list *todo_list,
+ struct string_list *commands)
{
- const char *todo_file = rebase_path_todo();
- struct todo_list todo_list = TODO_LIST_INIT;
- struct strbuf *buf = &todo_list.buf;
- size_t offset = 0, commands_len = strlen(commands);
- int i, insert;
+ struct strbuf *buf = &todo_list->buf;
+ size_t base_offset = buf->len;
+ int i, insert, nr = 0, alloc = 0;
+ struct todo_item *items = NULL, *base_items = NULL;
+
+ base_items = xcalloc(commands->nr, sizeof(struct todo_item));
+ for (i = 0; i < commands->nr; i++) {
+ size_t command_len = strlen(commands->items[i].string);
- if (strbuf_read_file(&todo_list.buf, todo_file, 0) < 0)
- return error(_("could not read '%s'."), todo_file);
+ strbuf_addstr(buf, commands->items[i].string);
+ strbuf_addch(buf, '\n');
- if (parse_insn_buffer(r, todo_list.buf.buf, &todo_list)) {
- todo_list_release(&todo_list);
- return error(_("unusable todo list: '%s'"), todo_file);
+ base_items[i].command = TODO_EXEC;
+ base_items[i].offset_in_buf = base_offset;
+ base_items[i].arg_offset = base_offset + strlen("exec ");
+ base_items[i].arg_len = command_len - strlen("exec ");
+
+ base_offset += command_len + 1;
}
/*
* Insert <commands> after every pick. Here, fixup/squash chains
* are considered part of the pick, so we insert the commands *after*
* those chains if there are any.
+ *
+ * As we insert the exec commands immediately after rearranging
+ * any fixups and before the user edits the list, a fixup chain
+ * can never contain comments (any comments are empty picks that
+ * have been commented out because the user did not specify
+ * --keep-empty). So, it is safe to insert an exec command
+ * without looking at the command following a comment.
*/
- insert = -1;
- for (i = 0; i < todo_list.nr; i++) {
- enum todo_command command = todo_list.items[i].command;
-
- if (insert >= 0) {
- /* skip fixup/squash chains */
- if (command == TODO_COMMENT)
- continue;
- else if (is_fixup(command)) {
- insert = i + 1;
- continue;
- }
- strbuf_insert(buf,
- todo_list.items[insert].offset_in_buf +
- offset, commands, commands_len);
- offset += commands_len;
- insert = -1;
+ insert = 0;
+ for (i = 0; i < todo_list->nr; i++) {
+ enum todo_command command = todo_list->items[i].command;
+ if (insert && !is_fixup(command)) {
+ ALLOC_GROW(items, nr + commands->nr, alloc);
+ COPY_ARRAY(items + nr, base_items, commands->nr);
+ nr += commands->nr;
+
+ insert = 0;
}
+ ALLOC_GROW(items, nr + 1, alloc);
+ items[nr++] = todo_list->items[i];
+
if (command == TODO_PICK || command == TODO_MERGE)
- insert = i + 1;
+ insert = 1;
}
/* insert or append final <commands> */
- if (insert >= 0 && insert < todo_list.nr)
- strbuf_insert(buf, todo_list.items[insert].offset_in_buf +
- offset, commands, commands_len);
- else if (insert >= 0 || !offset)
- strbuf_add(buf, commands, commands_len);
+ if (insert || nr == todo_list->nr) {
+ ALLOC_GROW(items, nr + commands->nr, alloc);
+ COPY_ARRAY(items + nr, base_items, commands->nr);
+ nr += commands->nr;
+ }
- i = write_message(buf->buf, buf->len, todo_file, 0);
- todo_list_release(&todo_list);
- return i;
+ free(base_items);
+ FREE_AND_NULL(todo_list->items);
+ todo_list->items = items;
+ todo_list->nr = nr;
+ todo_list->alloc = alloc;
}
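/*
 * For illustration (hypothetical object names and subjects), with a
 * commands list containing the single entry "exec make test", the rules
 * above turn the todo list
 *
 *	pick 123456 first topic
 *	fixup 234567 fixup! first topic
 *	pick 345678 second topic
 *
 * into
 *
 *	pick 123456 first topic
 *	fixup 234567 fixup! first topic
 *	exec make test
 *	pick 345678 second topic
 *	exec make test
 */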
-int transform_todos(struct repository *r, unsigned flags)
+static void todo_list_to_strbuf(struct repository *r, struct todo_list *todo_list,
+ struct strbuf *buf, int num, unsigned flags)
{
- const char *todo_file = rebase_path_todo();
- struct todo_list todo_list = TODO_LIST_INIT;
- struct strbuf buf = STRBUF_INIT;
struct todo_item *item;
- int i;
-
- if (strbuf_read_file(&todo_list.buf, todo_file, 0) < 0)
- return error(_("could not read '%s'."), todo_file);
+ int i, max = todo_list->nr;
- if (parse_insn_buffer(r, todo_list.buf.buf, &todo_list)) {
- todo_list_release(&todo_list);
- return error(_("unusable todo list: '%s'"), todo_file);
- }
+ if (num > 0 && num < max)
+ max = num;
- for (item = todo_list.items, i = 0; i < todo_list.nr; i++, item++) {
+ for (item = todo_list->items, i = 0; i < max; i++, item++) {
/* if the item is not a command write it and continue */
if (item->command >= TODO_COMMENT) {
- strbuf_addf(&buf, "%.*s\n", item->arg_len, item->arg);
+ strbuf_addf(buf, "%.*s\n", item->arg_len,
+ todo_item_get_arg(todo_list, item));
continue;
}
/* add command to the buffer */
if (flags & TODO_LIST_ABBREVIATE_CMDS)
- strbuf_addch(&buf, command_to_char(item->command));
+ strbuf_addch(buf, command_to_char(item->command));
else
- strbuf_addstr(&buf, command_to_string(item->command));
+ strbuf_addstr(buf, command_to_string(item->command));
/* add commit id */
if (item->commit) {
if (item->command == TODO_MERGE) {
if (item->flags & TODO_EDIT_MERGE_MSG)
- strbuf_addstr(&buf, " -c");
+ strbuf_addstr(buf, " -c");
else
- strbuf_addstr(&buf, " -C");
+ strbuf_addstr(buf, " -C");
}
- strbuf_addf(&buf, " %s", oid);
+ strbuf_addf(buf, " %s", oid);
}
/* add all the rest */
if (!item->arg_len)
- strbuf_addch(&buf, '\n');
+ strbuf_addch(buf, '\n');
else
- strbuf_addf(&buf, " %.*s\n", item->arg_len, item->arg);
+ strbuf_addf(buf, " %.*s\n", item->arg_len,
+ todo_item_get_arg(todo_list, item));
}
-
- i = write_message(buf.buf, buf.len, todo_file, 0);
- todo_list_release(&todo_list);
- return i;
}
-enum missing_commit_check_level get_missing_commit_check_level(void)
+int todo_list_write_to_file(struct repository *r, struct todo_list *todo_list,
+ const char *file, const char *shortrevisions,
+ const char *shortonto, int num, unsigned flags)
{
- const char *value;
+ int res;
+ struct strbuf buf = STRBUF_INIT;
- if (git_config_get_value("rebase.missingcommitscheck", &value) ||
- !strcasecmp("ignore", value))
- return MISSING_COMMIT_CHECK_IGNORE;
- if (!strcasecmp("warn", value))
- return MISSING_COMMIT_CHECK_WARN;
- if (!strcasecmp("error", value))
- return MISSING_COMMIT_CHECK_ERROR;
- warning(_("unrecognized setting %s for option "
- "rebase.missingCommitsCheck. Ignoring."), value);
- return MISSING_COMMIT_CHECK_IGNORE;
-}
+ todo_list_to_strbuf(r, todo_list, &buf, num, flags);
+ if (flags & TODO_LIST_APPEND_TODO_HELP)
+ append_todo_help(flags & TODO_LIST_KEEP_EMPTY, count_commands(todo_list),
+ shortrevisions, shortonto, &buf);
-define_commit_slab(commit_seen, unsigned char);
-/*
- * Check if the user dropped some commits by mistake
- * Behaviour determined by rebase.missingCommitsCheck.
- * Check if there is an unrecognized command or a
- * bad SHA-1 in a command.
- */
-int check_todo_list(struct repository *r)
-{
- enum missing_commit_check_level check_level = get_missing_commit_check_level();
- struct strbuf todo_file = STRBUF_INIT;
- struct todo_list todo_list = TODO_LIST_INIT;
- struct strbuf missing = STRBUF_INIT;
- int advise_to_edit_todo = 0, res = 0, i;
- struct commit_seen commit_seen;
+ res = write_message(buf.buf, buf.len, file, 0);
+ strbuf_release(&buf);
- init_commit_seen(&commit_seen);
+ return res;
+}
- strbuf_addstr(&todo_file, rebase_path_todo());
- if (strbuf_read_file_or_whine(&todo_list.buf, todo_file.buf) < 0) {
- res = -1;
- goto leave_check;
- }
- advise_to_edit_todo = res =
- parse_insn_buffer(r, todo_list.buf.buf, &todo_list);
+static const char edit_todo_list_advice[] =
+N_("You can fix this with 'git rebase --edit-todo' "
+"and then run 'git rebase --continue'.\n"
+"Or you can abort the rebase with 'git rebase"
+" --abort'.\n");
- if (res || check_level == MISSING_COMMIT_CHECK_IGNORE)
- goto leave_check;
+int check_todo_list_from_file(struct repository *r)
+{
+ struct todo_list old_todo = TODO_LIST_INIT, new_todo = TODO_LIST_INIT;
+ int res = 0;
- /* Mark the commits in git-rebase-todo as seen */
- for (i = 0; i < todo_list.nr; i++) {
- struct commit *commit = todo_list.items[i].commit;
- if (commit)
- *commit_seen_at(&commit_seen, commit) = 1;
+ if (strbuf_read_file_or_whine(&new_todo.buf, rebase_path_todo()) < 0) {
+ res = -1;
+ goto out;
}
- todo_list_release(&todo_list);
- strbuf_addstr(&todo_file, ".backup");
- if (strbuf_read_file_or_whine(&todo_list.buf, todo_file.buf) < 0) {
+ if (strbuf_read_file_or_whine(&old_todo.buf, rebase_path_todo_backup()) < 0) {
res = -1;
- goto leave_check;
- }
- strbuf_release(&todo_file);
- res = !!parse_insn_buffer(r, todo_list.buf.buf, &todo_list);
-
- /* Find commits in git-rebase-todo.backup yet unseen */
- for (i = todo_list.nr - 1; i >= 0; i--) {
- struct todo_item *item = todo_list.items + i;
- struct commit *commit = item->commit;
- if (commit && !*commit_seen_at(&commit_seen, commit)) {
- strbuf_addf(&missing, " - %s %.*s\n",
- short_commit_name(commit),
- item->arg_len, item->arg);
- *commit_seen_at(&commit_seen, commit) = 1;
- }
+ goto out;
}
- /* Warn about missing commits */
- if (!missing.len)
- goto leave_check;
-
- if (check_level == MISSING_COMMIT_CHECK_ERROR)
- advise_to_edit_todo = res = 1;
-
- fprintf(stderr,
- _("Warning: some commits may have been dropped accidentally.\n"
- "Dropped commits (newer to older):\n"));
-
- /* Make the list user-friendly and display */
- fputs(missing.buf, stderr);
- strbuf_release(&missing);
-
- fprintf(stderr, _("To avoid this message, use \"drop\" to "
- "explicitly remove a commit.\n\n"
- "Use 'git config rebase.missingCommitsCheck' to change "
- "the level of warnings.\n"
- "The possible behaviours are: ignore, warn, error.\n\n"));
-
-leave_check:
- clear_commit_seen(&commit_seen);
- strbuf_release(&todo_file);
- todo_list_release(&todo_list);
-
- if (advise_to_edit_todo)
- fprintf(stderr,
- _("You can fix this with 'git rebase --edit-todo' "
- "and then run 'git rebase --continue'.\n"
- "Or you can abort the rebase with 'git rebase"
- " --abort'.\n"));
+ res = todo_list_parse_insn_buffer(r, old_todo.buf.buf, &old_todo);
+ if (!res)
+ res = todo_list_parse_insn_buffer(r, new_todo.buf.buf, &new_todo);
+ if (!res)
+ res = todo_list_check(&old_todo, &new_todo);
+ if (res)
+ fprintf(stderr, _(edit_todo_list_advice));
+out:
+ todo_list_release(&old_todo);
+ todo_list_release(&new_todo);
return res;
}
-static int rewrite_file(const char *path, const char *buf, size_t len)
-{
- int rc = 0;
- int fd = open(path, O_WRONLY | O_TRUNC);
- if (fd < 0)
- return error_errno(_("could not open '%s' for writing"), path);
- if (write_in_full(fd, buf, len) < 0)
- rc = error_errno(_("could not write to '%s'"), path);
- if (close(fd) && !rc)
- rc = error_errno(_("could not close '%s'"), path);
- return rc;
-}
-
/* skip picking commits whose parents are unchanged */
-static int skip_unnecessary_picks(struct repository *r, struct object_id *output_oid)
+static int skip_unnecessary_picks(struct repository *r,
+ struct todo_list *todo_list,
+ struct object_id *base_oid)
{
- const char *todo_file = rebase_path_todo();
- struct strbuf buf = STRBUF_INIT;
- struct todo_list todo_list = TODO_LIST_INIT;
struct object_id *parent_oid;
- int fd, i;
-
- if (!read_oneliner(&buf, rebase_path_onto(), 0))
- return error(_("could not read 'onto'"));
- if (get_oid(buf.buf, output_oid)) {
- strbuf_release(&buf);
- return error(_("need a HEAD to fixup"));
- }
- strbuf_release(&buf);
-
- if (strbuf_read_file_or_whine(&todo_list.buf, todo_file) < 0)
- return -1;
- if (parse_insn_buffer(r, todo_list.buf.buf, &todo_list) < 0) {
- todo_list_release(&todo_list);
- return -1;
- }
+ int i;
- for (i = 0; i < todo_list.nr; i++) {
- struct todo_item *item = todo_list.items + i;
+ for (i = 0; i < todo_list->nr; i++) {
+ struct todo_item *item = todo_list->items + i;
if (item->command >= TODO_NOOP)
continue;
if (item->command != TODO_PICK)
break;
if (parse_commit(item->commit)) {
- todo_list_release(&todo_list);
return error(_("could not parse commit '%s'"),
oid_to_hex(&item->commit->object.oid));
}
if (item->commit->parents->next)
break; /* merge commit */
parent_oid = &item->commit->parents->item->object.oid;
- if (!oideq(parent_oid, output_oid))
+ if (!oideq(parent_oid, base_oid))
break;
- oidcpy(output_oid, &item->commit->object.oid);
+ oidcpy(base_oid, &item->commit->object.oid);
}
if (i > 0) {
- int offset = get_item_line_offset(&todo_list, i);
const char *done_path = rebase_path_done();
- fd = open(done_path, O_CREAT | O_WRONLY | O_APPEND, 0666);
- if (fd < 0) {
- error_errno(_("could not open '%s' for writing"),
- done_path);
- todo_list_release(&todo_list);
- return -1;
- }
- if (write_in_full(fd, todo_list.buf.buf, offset) < 0) {
+ if (todo_list_write_to_file(r, todo_list, done_path, NULL, NULL, i, 0)) {
error_errno(_("could not write to '%s'"), done_path);
- todo_list_release(&todo_list);
- close(fd);
return -1;
}
- close(fd);
- if (rewrite_file(rebase_path_todo(), todo_list.buf.buf + offset,
- todo_list.buf.len - offset) < 0) {
- todo_list_release(&todo_list);
- return -1;
- }
+ MOVE_ARRAY(todo_list->items, todo_list->items + i, todo_list->nr - i);
+ todo_list->nr -= i;
+ todo_list->current = 0;
- todo_list.current = i;
- if (is_fixup(peek_command(&todo_list, 0)))
- record_in_rewritten(output_oid, peek_command(&todo_list, 0));
+ if (is_fixup(peek_command(todo_list, 0)))
+ record_in_rewritten(base_oid, peek_command(todo_list, 0));
}
- todo_list_release(&todo_list);
-
return 0;
}
int complete_action(struct repository *r, struct replay_opts *opts, unsigned flags,
const char *shortrevisions, const char *onto_name,
- const char *onto, const char *orig_head, const char *cmd,
- unsigned autosquash)
+ const char *onto, const char *orig_head, struct string_list *commands,
+ unsigned autosquash, struct todo_list *todo_list)
{
const char *shortonto, *todo_file = rebase_path_todo();
- struct todo_list todo_list = TODO_LIST_INIT;
- struct strbuf *buf = &(todo_list.buf);
+ struct todo_list new_todo = TODO_LIST_INIT;
+ struct strbuf *buf = &todo_list->buf;
struct object_id oid;
- struct stat st;
+ int res;
get_oid(onto, &oid);
shortonto = find_unique_abbrev(&oid, DEFAULT_ABBREV);
- if (!lstat(todo_file, &st) && st.st_size == 0 &&
- write_message("noop\n", 5, todo_file, 0))
- return -1;
+ if (buf->len == 0) {
+ struct todo_item *item = append_new_todo(todo_list);
+ item->command = TODO_NOOP;
+ item->commit = NULL;
+ item->arg_len = item->arg_offset = item->flags = item->offset_in_buf = 0;
+ }
- if (autosquash && rearrange_squash(r))
+ if (autosquash && todo_list_rearrange_squash(todo_list))
return -1;
- if (cmd && *cmd)
- sequencer_add_exec_commands(r, cmd);
-
- if (strbuf_read_file(buf, todo_file, 0) < 0)
- return error_errno(_("could not read '%s'."), todo_file);
-
- if (parse_insn_buffer(r, buf->buf, &todo_list)) {
- todo_list_release(&todo_list);
- return error(_("unusable todo list: '%s'"), todo_file);
- }
+ if (commands->nr)
+ todo_list_add_exec_commands(todo_list, commands);
- if (count_commands(&todo_list) == 0) {
+ if (count_commands(todo_list) == 0) {
apply_autostash(opts);
sequencer_remove_state(opts);
- todo_list_release(&todo_list);
return error(_("nothing to do"));
}
- strbuf_addch(buf, '\n');
- strbuf_commented_addf(buf, Q_("Rebase %s onto %s (%d command)",
- "Rebase %s onto %s (%d commands)",
- count_commands(&todo_list)),
- shortrevisions, shortonto, count_commands(&todo_list));
- append_todo_help(0, flags & TODO_LIST_KEEP_EMPTY, buf);
-
- if (write_message(buf->buf, buf->len, todo_file, 0)) {
- todo_list_release(&todo_list);
+ res = edit_todo_list(r, todo_list, &new_todo, shortrevisions,
+ shortonto, flags);
+ if (res == -1)
return -1;
- }
-
- if (copy_file(rebase_path_todo_backup(), todo_file, 0666))
- return error(_("could not copy '%s' to '%s'."), todo_file,
- rebase_path_todo_backup());
-
- if (transform_todos(r, flags | TODO_LIST_SHORTEN_IDS))
- return error(_("could not transform the todo list"));
-
- strbuf_reset(buf);
-
- if (launch_sequence_editor(todo_file, buf, NULL)) {
+ else if (res == -2) {
apply_autostash(opts);
sequencer_remove_state(opts);
- todo_list_release(&todo_list);
return -1;
- }
-
- strbuf_stripspace(buf, 1);
- if (buf->len == 0) {
+ } else if (res == -3) {
apply_autostash(opts);
sequencer_remove_state(opts);
- todo_list_release(&todo_list);
+ todo_list_release(&new_todo);
return error(_("nothing to do"));
}
- todo_list_release(&todo_list);
-
- if (check_todo_list(r)) {
+ if (todo_list_parse_insn_buffer(r, new_todo.buf.buf, &new_todo) ||
+ todo_list_check(todo_list, &new_todo)) {
+ fprintf(stderr, _(edit_todo_list_advice));
checkout_onto(opts, onto_name, onto, orig_head);
+ todo_list_release(&new_todo);
+
return -1;
}
- if (transform_todos(r, flags & ~(TODO_LIST_SHORTEN_IDS)))
- return error(_("could not transform the todo list"));
-
- if (opts->allow_ff && skip_unnecessary_picks(r, &oid))
+ if (opts->allow_ff && skip_unnecessary_picks(r, &new_todo, &oid)) {
+ todo_list_release(&new_todo);
return error(_("could not skip unnecessary pick commands"));
+ }
+
+ if (todo_list_write_to_file(r, &new_todo, todo_file, NULL, NULL, -1,
+ flags & ~(TODO_LIST_SHORTEN_IDS))) {
+ todo_list_release(&new_todo);
+ return error_errno(_("could not write '%s'"), todo_file);
+ }
+
+ todo_list_release(&new_todo);
if (checkout_onto(opts, onto_name, oid_to_hex(&oid), orig_head))
return -1;
* message will have to be retrieved from the commit (as the oneline in the
* script cannot be trusted) in order to normalize the autosquash arrangement.
*/
-int rearrange_squash(struct repository *r)
+int todo_list_rearrange_squash(struct todo_list *todo_list)
{
- const char *todo_file = rebase_path_todo();
- struct todo_list todo_list = TODO_LIST_INIT;
struct hashmap subject2item;
- int res = 0, rearranged = 0, *next, *tail, i;
+ int rearranged = 0, *next, *tail, i, nr = 0, alloc = 0;
char **subjects;
struct commit_todo_item commit_todo;
-
- if (strbuf_read_file_or_whine(&todo_list.buf, todo_file) < 0)
- return -1;
- if (parse_insn_buffer(r, todo_list.buf.buf, &todo_list) < 0) {
- todo_list_release(&todo_list);
- return -1;
- }
+ struct todo_item *items = NULL;
init_commit_todo_item(&commit_todo);
/*
* be moved to appear after the i'th.
*/
hashmap_init(&subject2item, (hashmap_cmp_fn) subject2item_cmp,
- NULL, todo_list.nr);
- ALLOC_ARRAY(next, todo_list.nr);
- ALLOC_ARRAY(tail, todo_list.nr);
- ALLOC_ARRAY(subjects, todo_list.nr);
- for (i = 0; i < todo_list.nr; i++) {
+ NULL, todo_list->nr);
+ ALLOC_ARRAY(next, todo_list->nr);
+ ALLOC_ARRAY(tail, todo_list->nr);
+ ALLOC_ARRAY(subjects, todo_list->nr);
+ for (i = 0; i < todo_list->nr; i++) {
struct strbuf buf = STRBUF_INIT;
- struct todo_item *item = todo_list.items + i;
+ struct todo_item *item = todo_list->items + i;
const char *commit_buffer, *subject, *p;
size_t subject_len;
int i2 = -1;
}
if (is_fixup(item->command)) {
- todo_list_release(&todo_list);
clear_commit_todo_item(&commit_todo);
return error(_("the script was already rearranged."));
}
*commit_todo_item_at(&commit_todo, commit2))
/* found by commit name */
i2 = *commit_todo_item_at(&commit_todo, commit2)
- - todo_list.items;
+ - todo_list->items;
else {
/* copy can be a prefix of the commit subject */
for (i2 = 0; i2 < i; i2++)
}
if (i2 >= 0) {
rearranged = 1;
- todo_list.items[i].command =
+ todo_list->items[i].command =
starts_with(subject, "fixup!") ?
TODO_FIXUP : TODO_SQUASH;
if (next[i2] < 0)
}
if (rearranged) {
- struct strbuf buf = STRBUF_INIT;
-
- for (i = 0; i < todo_list.nr; i++) {
- enum todo_command command = todo_list.items[i].command;
+ for (i = 0; i < todo_list->nr; i++) {
+ enum todo_command command = todo_list->items[i].command;
int cur = i;
/*
continue;
while (cur >= 0) {
- const char *bol =
- get_item_line(&todo_list, cur);
- const char *eol =
- get_item_line(&todo_list, cur + 1);
-
- /* replace 'pick', by 'fixup' or 'squash' */
- command = todo_list.items[cur].command;
- if (is_fixup(command)) {
- strbuf_addstr(&buf,
- todo_command_info[command].str);
- bol += strcspn(bol, " \t");
- }
-
- strbuf_add(&buf, bol, eol - bol);
-
+ ALLOC_GROW(items, nr + 1, alloc);
+ items[nr++] = todo_list->items[cur];
cur = next[cur];
}
}
- res = rewrite_file(todo_file, buf.buf, buf.len);
- strbuf_release(&buf);
+ FREE_AND_NULL(todo_list->items);
+ todo_list->items = items;
+ todo_list->nr = nr;
+ todo_list->alloc = alloc;
}
free(next);
free(tail);
- for (i = 0; i < todo_list.nr; i++)
+ for (i = 0; i < todo_list->nr; i++)
free(subjects[i]);
free(subjects);
hashmap_free(&subject2item, 1);
- todo_list_release(&todo_list);
clear_commit_todo_item(&commit_todo);
- return res;
+
+ return 0;
}
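/*
 * For illustration (hypothetical object names and subjects), with
 * autosquash enabled a todo list of
 *
 *	pick 111111 feature
 *	pick 222222 unrelated
 *	pick 333333 fixup! feature
 *
 * is rearranged into
 *
 *	pick 111111 feature
 *	fixup 333333 fixup! feature
 *	pick 222222 unrelated
 */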
const char *git_path_commit_editmsg(void);
const char *git_path_seq_dir(void);
const char *rebase_path_todo(void);
+const char *rebase_path_todo_backup(void);
#define APPEND_SIGNOFF_DEDUP (1u << 0)
};
#define REPLAY_OPTS_INIT { .action = -1, .current_fixups = STRBUF_INIT }
-enum missing_commit_check_level {
- MISSING_COMMIT_CHECK_IGNORE = 0,
- MISSING_COMMIT_CHECK_WARN,
- MISSING_COMMIT_CHECK_ERROR
+/*
+ * Note that ordering matters in this enum. Not only must it match the mapping
+ * of todo_command_info (in sequencer.c), it is also divided into several
+ * sections that matter. When adding new commands, make sure you add them in
+ * the right section.
+ */
+enum todo_command {
+ /* commands that handle commits */
+ TODO_PICK = 0,
+ TODO_REVERT,
+ TODO_EDIT,
+ TODO_REWORD,
+ TODO_FIXUP,
+ TODO_SQUASH,
+ /* commands that do something else than handling a single commit */
+ TODO_EXEC,
+ TODO_BREAK,
+ TODO_LABEL,
+ TODO_RESET,
+ TODO_MERGE,
+ /* commands that do nothing but are counted for reporting progress */
+ TODO_NOOP,
+ TODO_DROP,
+ /* comments (not counted for reporting progress) */
+ TODO_COMMENT
};
-int write_message(const void *buf, size_t len, const char *filename,
- int append_eol);
+struct todo_item {
+ enum todo_command command;
+ struct commit *commit;
+ unsigned int flags;
+ int arg_len;
+ /* The offset of the command and its argument in the strbuf */
+ size_t offset_in_buf, arg_offset;
+};
+
+struct todo_list {
+ struct strbuf buf;
+ struct todo_item *items;
+ int nr, alloc, current;
+ int done_nr, total_nr;
+ struct stat_data stat;
+};
+
+#define TODO_LIST_INIT { STRBUF_INIT }
+
+int todo_list_parse_insn_buffer(struct repository *r, char *buf,
+ struct todo_list *todo_list);
+int todo_list_write_to_file(struct repository *r, struct todo_list *todo_list,
+ const char *file, const char *shortrevisions,
+ const char *shortonto, int num, unsigned flags);
+void todo_list_release(struct todo_list *todo_list);
+const char *todo_item_get_arg(struct todo_list *todo_list,
+ struct todo_item *item);
/* Call this to setup defaults before parsing command line options */
void sequencer_init_config(struct replay_opts *opts);
* commits should be rebased onto the new base, this flag needs to be passed.
*/
#define TODO_LIST_REBASE_COUSINS (1U << 4)
-int sequencer_make_script(struct repository *repo, FILE *out,
- int argc, const char **argv,
- unsigned flags);
-
-int sequencer_add_exec_commands(struct repository *r, const char *command);
-int transform_todos(struct repository *r, unsigned flags);
-enum missing_commit_check_level get_missing_commit_check_level(void);
-int check_todo_list(struct repository *r);
+#define TODO_LIST_APPEND_TODO_HELP (1U << 5)
+
+int sequencer_make_script(struct repository *r, struct strbuf *out, int argc,
+ const char **argv, unsigned flags);
+
+void todo_list_add_exec_commands(struct todo_list *todo_list,
+ struct string_list *commands);
+int check_todo_list_from_file(struct repository *r);
int complete_action(struct repository *r, struct replay_opts *opts, unsigned flags,
const char *shortrevisions, const char *onto_name,
- const char *onto, const char *orig_head, const char *cmd,
- unsigned autosquash);
-int rearrange_squash(struct repository *r);
+ const char *onto, const char *orig_head, struct string_list *commands,
+ unsigned autosquash, struct todo_list *todo_list);
+int todo_list_rearrange_squash(struct todo_list *todo_list);
/*
* Append a signoff to the commit message in "msgbuf". The ignore_footer
return for_each_ref(add_info_ref, fp);
}
-static int update_info_refs(int force)
+static int update_info_refs(void)
{
char *path = git_pathdup("info/refs");
int ret = update_info_file(path, generate_info_refs);
struct packed_git *p;
int old_num;
int new_num;
- int nr_alloc;
} **info;
static int num_pack;
-static const char *objdir;
-static int objdirlen;
static struct pack_info *find_pack_by_name(const char *name)
{
int i;
for (i = 0; i < num_pack; i++) {
struct packed_git *p = info[i]->p;
- /* skip "/pack/" after ".git/objects" */
- if (!strcmp(p->pack_name + objdirlen + 6, name))
+ if (!strcmp(pack_basename(p), name))
return info[i];
}
return NULL;
/* Returns non-zero when we detect that the info in the
* old file is useless.
*/
-static int parse_pack_def(const char *line, int old_cnt)
+static int parse_pack_def(const char *packname, int old_cnt)
{
- struct pack_info *i = find_pack_by_name(line + 2);
+ struct pack_info *i = find_pack_by_name(packname);
if (i) {
i->old_num = old_cnt;
return 0;
static int read_pack_info_file(const char *infofile)
{
FILE *fp;
- char line[1000];
+ struct strbuf line = STRBUF_INIT;
int old_cnt = 0;
+ int stale = 1;
fp = fopen_or_warn(infofile, "r");
if (!fp)
return 1; /* nonexistent is not an error. */
- while (fgets(line, sizeof(line), fp)) {
- int len = strlen(line);
- if (len && line[len-1] == '\n')
- line[--len] = 0;
+ while (strbuf_getline(&line, fp) != EOF) {
+ const char *arg;
- if (!len)
+ if (!line.len)
continue;
- switch (line[0]) {
- case 'P': /* P name */
- if (parse_pack_def(line, old_cnt++))
+ if (skip_prefix(line.buf, "P ", &arg)) {
+ /* P name */
+ if (parse_pack_def(arg, old_cnt++))
goto out_stale;
- break;
- case 'D': /* we used to emit D but that was misguided. */
- case 'T': /* we used to emit T but nobody uses it. */
+ } else if (line.buf[0] == 'D') {
+ /* we used to emit D but that was misguided. */
goto out_stale;
- default:
- error("unrecognized: %s", line);
- break;
+ } else if (line.buf[0] == 'T') {
+ /* we used to emit T but nobody uses it. */
+ goto out_stale;
+ } else {
+ error("unrecognized: %s", line.buf);
}
}
- fclose(fp);
- return 0;
+ stale = 0;
+
out_stale:
+ strbuf_release(&line);
fclose(fp);
- return 1;
+ return stale;
}
static int compare_info(const void *a_, const void *b_)
int stale;
int i = 0;
- objdir = get_object_directory();
- objdirlen = strlen(objdir);
-
for (p = get_all_packs(the_repository); p; p = p->next) {
/* we ignore things on alternate path since they are
* not available to the pullers in general.
for (i = 0, p = get_all_packs(the_repository); p; p = p->next) {
if (!p->pack_local)
continue;
+ assert(i < num_pack);
info[i] = xcalloc(1, sizeof(struct pack_info));
info[i]->p = p;
info[i]->old_num = -1;
{
int i;
for (i = 0; i < num_pack; i++) {
- if (fprintf(fp, "P %s\n", info[i]->p->pack_name + objdirlen + 6) < 0)
+ if (fprintf(fp, "P %s\n", pack_basename(info[i]->p)) < 0)
return -1;
}
if (fputc('\n', fp) == EOF)
*/
int errs = 0;
- errs = errs | update_info_refs(force);
+ errs = errs | update_info_refs();
errs = errs | update_info_packs(force);
/* remove leftover rev-cache file if there is any */
return GIT_HASH_UNKNOWN;
}
+int hash_algo_by_length(int len)
+{
+ int i;
+ for (i = 1; i < GIT_HASH_NALGOS; i++)
+ if (len == hash_algos[i].rawsz)
+ return i;
+ return GIT_HASH_UNKNOWN;
+}
/*
* This is meant to hold a *small* number of objects that you would
/* Check if it is a missing object */
if (fetch_if_missing && repository_format_partial_clone &&
- !already_retried && r == the_repository) {
+ !already_retried && r == the_repository &&
+ !(flags & OBJECT_INFO_FOR_PREFETCH)) {
/*
* TODO Investigate having fetch_object() return
* TODO error/success and stopping the music here.
return get_oid_with_context(the_repository, name, 0, oid, &unused);
}
+/*
+ * This returns a non-zero value if the string (built using the printf
+ * format and the given arguments) does not name a valid object.
+ */
+int get_oidf(struct object_id *oid, const char *fmt, ...)
+{
+ va_list ap;
+ int ret;
+ struct strbuf sb = STRBUF_INIT;
+
+ va_start(ap, fmt);
+ strbuf_vaddf(&sb, fmt, ap);
+ va_end(ap);
+
+ ret = get_oid(sb.buf, oid);
+ strbuf_release(&sb);
+
+ return ret;
+}
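/*
 * A minimal usage sketch (the helper and the "refname" parameter are
 * illustrative): get_oidf() builds the object name from a format string,
 * sparing callers a temporary strbuf of their own.
 */
static int example_resolve_tree(const char *refname, struct object_id *oid)
{
	/* equivalent to formatting "<refname>^{tree}" and calling get_oid() */
	if (get_oidf(oid, "%s^{tree}", refname))
		return error(_("'%s' does not name a tree-ish"), refname);
	return 0;
}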
/*
* Many callers know that the user meant to name a commit-ish by
int object_name_len)
{
struct object_id oid;
- unsigned mode;
+ unsigned short mode;
if (!prefix)
prefix = "";
strbuf_splice(sb, pos, 0, data, len);
}
+void strbuf_vinsertf(struct strbuf *sb, size_t pos, const char *fmt, va_list ap)
+{
+ int len, len2;
+ char save;
+ va_list cp;
+
+ if (pos > sb->len)
+ die("`pos' is too far after the end of the buffer");
+ va_copy(cp, ap);
+ len = vsnprintf(sb->buf + sb->len, 0, fmt, cp);
+ va_end(cp);
+ if (len < 0)
+ BUG("your vsnprintf is broken (returned %d)", len);
+ if (!len)
+ return; /* nothing to do */
+ if (unsigned_add_overflows(sb->len, len))
+ die("you want to use way too much memory");
+ strbuf_grow(sb, len);
+ memmove(sb->buf + pos + len, sb->buf + pos, sb->len - pos);
+ /* vsnprintf() will append a NUL, overwriting one of our characters */
+ save = sb->buf[pos + len];
+ len2 = vsnprintf(sb->buf + pos, len + 1, fmt, ap);
+ sb->buf[pos + len] = save;
+ if (len2 != len)
+ BUG("your vsnprintf is broken (returns inconsistent lengths)");
+ strbuf_setlen(sb, sb->len + len);
+}
+
+void strbuf_insertf(struct strbuf *sb, size_t pos, const char *fmt, ...)
+{
+ va_list ap;
+ va_start(ap, fmt);
+ strbuf_vinsertf(sb, pos, fmt, ap);
+ va_end(ap);
+}
+
void strbuf_remove(struct strbuf *sb, size_t pos, size_t len)
{
strbuf_splice(sb, pos, len, "", 0);
strbuf_setlen(sb, sb->len + sb2->len);
}
+const char *strbuf_join_argv(struct strbuf *buf,
+ int argc, const char **argv, char delim)
+{
+ if (!argc)
+ return buf->buf;
+
+ strbuf_addstr(buf, *argv);
+ while (--argc) {
+ strbuf_addch(buf, delim);
+ strbuf_addstr(buf, *(++argv));
+ }
+
+ return buf->buf;
+}
+
void strbuf_addchars(struct strbuf *sb, int c, size_t n)
{
strbuf_grow(sb, n);
*/
void strbuf_insert(struct strbuf *sb, size_t pos, const void *, size_t);
+/**
+ * Insert data at the given position of the buffer, using a printf format
+ * string. The existing contents are shifted, not overwritten.
+ */
+void strbuf_vinsertf(struct strbuf *sb, size_t pos, const char *fmt,
+ va_list ap);
+
+void strbuf_insertf(struct strbuf *sb, size_t pos, const char *fmt, ...);
+
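/*
 * For example (a minimal sketch; the values are illustrative), inserting
 * at position 0 shifts the existing contents right instead of
 * overwriting them:
 *
 *	struct strbuf sb = STRBUF_INIT;
 *	strbuf_addstr(&sb, "world");
 *	strbuf_insertf(&sb, 0, "hello, %s ", "dear");
 *	strbuf_release(&sb);
 *
 * Just before the release, sb.buf reads "hello, dear world".
 */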
/**
* Remove given amount of data from a given position of the buffer.
*/
*/
void strbuf_addbuf(struct strbuf *sb, const struct strbuf *sb2);
+/**
+ * Join the arguments into a buffer, inserting `delim` between each
+ * pair of adjacent arguments.
+ */
+const char *strbuf_join_argv(struct strbuf *buf, int argc,
+ const char **argv, char delim);
+
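/*
 * For example (a minimal sketch; the argv values are illustrative):
 *
 *	const char *args[] = { "rebase", "-i", "HEAD~3" };
 *	struct strbuf buf = STRBUF_INIT;
 *	strbuf_join_argv(&buf, 3, args, ' ');
 *	strbuf_release(&buf);
 *
 * Just before the release, buf.buf reads "rebase -i HEAD~3".
 */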
/**
* This function can be used to expand a format string containing
* placeholders. To that end, it parses the string and calls the specified
if (start_command(&cp))
die("Could not run 'git rev-list <commits> --not --remotes -n 1' command in submodule %s",
path);
- if (strbuf_read(&buf, cp.out, 41))
+ if (strbuf_read(&buf, cp.out, the_hash_algo->hexsz + 1))
needs_pushing = 1;
finish_command(&cp);
close(cp.out);
GIT_TEST_PRELOAD_INDEX=<boolean> exercises the preload-index code path
by overriding the minimum number of cache entries required per thread.
+GIT_TEST_STASH_USE_BUILTIN=<boolean>, when false, disables the
+built-in version of git-stash. See 'stash.useBuiltin' in
+git-config(1).
+
GIT_TEST_INDEX_THREADS=<n> enables exercising the multi-threaded loading
of the index for the whole test suite by bypassing the default number of
cache entries and thread minimums. Setting this to 1 will make the
fetch-pack to not request sideband-all (even if the server advertises
sideband-all).
+GIT_TEST_DISALLOW_ABBREVIATED_OPTIONS=<boolean>, when true (which is
+the default when running tests), errors out when an abbreviated option
+is used.
+
Naming Tests
------------
...
'
+ - test_atexit <script>
+
+ Prepend <script> to a list of commands to run unconditionally to
+ clean up before the test script exits, e.g. to stop a daemon:
+
+ test_expect_success 'test git daemon' '
+ git daemon &
+ daemon_pid=$! &&
+ test_atexit 'kill $daemon_pid' &&
+ hello world
+ '
+
+ The commands will be executed before the trash directory is removed,
+ i.e. the atexit commands will still be able to access any pidfiles or
+ socket files.
+
+ Note that these commands will be run even when a test script run
+ with '--immediate' fails. Be careful with your atexit commands to
+ minimize any changes to the failed state.
+
- test_write_lines <lines>
Write <lines> on standard output, one line per argument.
check_count A 2
'
+test_expect_success 'blame in a bare repo without starting commit' '
+ git clone --bare . bare.git &&
+ (
+ cd bare.git &&
+ check_count A 2
+ )
+'
+
test_expect_success 'blame by tag objects' '
git tag -m "test tag" testTag &&
git tag -m "test tag #2" testTag2 testTag &&
}
}
-static void parse_dates(const char **argv, struct timeval *now)
+static void parse_dates(const char **argv)
{
struct strbuf result = STRBUF_INIT;
else if (skip_prefix(*argv, "show:", &x))
show_dates(argv+1, x);
else if (!strcmp(*argv, "parse"))
- parse_dates(argv+1, &now);
+ parse_dates(argv+1);
else if (!strcmp(*argv, "approxidate"))
parse_approxidate(argv+1, &now);
else if (!strcmp(*argv, "timestamp"))
OPT_NOOP_NOARG(0, "obsolete"),
OPT_STRING_LIST(0, "list", &list, "str", "add str to list"),
OPT_GROUP("Magic arguments"),
- OPT_ARGUMENT("quux", "means --quux"),
+ OPT_ARGUMENT("quux", NULL, "means --quux"),
OPT_NUMBER_CALLBACK(&integer, "set integer to NUM",
number_callback),
{ OPTION_COUNTUP, '+', NULL, &boolean, NULL, "same as -b",
} else if (!strcmp(*argv, "stack")) {
pq.compare = NULL;
} else {
- int *v = malloc(sizeof(*v));
+ int *v = xmalloc(sizeof(*v));
*v = atoi(*argv);
prio_queue_put(&pq, v);
}
test_cmp expect actual
'
-stop_git_daemon
test_done
#
# test_expect_success ...
#
-# stop_git_daemon
# test_done
test_tristate GIT_TEST_GIT_DAEMON
test_set_port LIB_GIT_DAEMON_PORT
GIT_DAEMON_PID=
+GIT_DAEMON_PIDFILE="$PWD"/daemon.pid
GIT_DAEMON_DOCUMENT_ROOT_PATH="$PWD"/repo
GIT_DAEMON_HOST_PORT=127.0.0.1:$LIB_GIT_DAEMON_PORT
GIT_DAEMON_URL=git://$GIT_DAEMON_HOST_PORT
+registered_stop_git_daemon_atexit_handler=
start_git_daemon() {
if test -n "$GIT_DAEMON_PID"
then
mkdir -p "$GIT_DAEMON_DOCUMENT_ROOT_PATH"
- trap 'code=$?; stop_git_daemon; (exit $code); die' EXIT
+ # One of the test scripts stops and then re-starts 'git daemon'.
+ # Don't register and then run the same atexit handlers several times.
+ if test -z "$registered_stop_git_daemon_atexit_handler"
+ then
+ test_atexit 'stop_git_daemon'
+ registered_stop_git_daemon_atexit_handler=AlreadyDone
+ fi
say >&3 "Starting git daemon ..."
mkfifo git_daemon_output
${LIB_GIT_DAEMON_COMMAND:-git daemon} \
--listen=127.0.0.1 --port="$LIB_GIT_DAEMON_PORT" \
- --reuseaddr --verbose \
+ --reuseaddr --verbose --pid-file="$GIT_DAEMON_PIDFILE" \
--base-path="$GIT_DAEMON_DOCUMENT_ROOT_PATH" \
"$@" "$GIT_DAEMON_DOCUMENT_ROOT_PATH" \
>&3 2>git_daemon_output &
then
kill "$GIT_DAEMON_PID"
wait "$GIT_DAEMON_PID"
- trap 'die' EXIT
+ unset GIT_DAEMON_PID
test_skip_or_die $GIT_TEST_GIT_DAEMON \
"git daemon failed to start"
fi
return
fi
- trap 'die' EXIT
-
# kill git-daemon child of git
say >&3 "Stopping git daemon ..."
kill "$GIT_DAEMON_PID"
then
error "git daemon exited with status: $ret"
fi
+ kill "$(cat "$GIT_DAEMON_PIDFILE")" 2>/dev/null
GIT_DAEMON_PID=
- rm -f git_daemon_output
+ rm -f git_daemon_output "$GIT_DAEMON_PIDFILE"
}
# A stripped-down version of a netcat client, that connects to a "host:port"
echo "$path"
}
-# On Solaris the 'date +%s' function is not supported and therefore we
-# need this replacement.
-# Attention: This function is not safe again against time offset updates
-# at runtime (e.g. via NTP). The 'clock_gettime(CLOCK_MONOTONIC)'
-# function could fix that but it is not in Python until 3.3.
-time_in_seconds () {
- (cd / && "$PYTHON_PATH" -c 'import time; print(int(time.time()))')
-}
-
test_set_port P4DPORT
P4PORT=localhost:$P4DPORT
git="$TRASH_DIRECTORY/git"
pidfile="$TRASH_DIRECTORY/p4d.pid"
-# Sometimes "prove" seems to hang on exit because p4d is still running
-cleanup () {
- if test -f "$pidfile"
- then
- kill -9 $(cat "$pidfile") 2>/dev/null && exit 255
- fi
+stop_p4d_and_watchdog () {
+ kill -9 $p4d_pid $watchdog_pid
}
-trap cleanup EXIT
# git p4 submit generates a temp file, which will
# not get cleaned up if the submission fails. Don't
TMPDIR="$TRASH_DIRECTORY"
export TMPDIR
+registered_stop_p4d_atexit_handler=
start_p4d () {
+ # One of the test scripts stops and then re-starts p4d.
+ # Don't register and then run the same atexit handlers several times.
+ if test -z "$registered_stop_p4d_atexit_handler"
+ then
+ test_atexit 'stop_p4d_and_watchdog'
+ registered_stop_p4d_atexit_handler=AlreadyDone
+ fi
+
mkdir -p "$db" "$cli" "$git" &&
rm -f "$pidfile" &&
(
echo $! >"$pidfile"
}
) &&
+ p4d_pid=$(cat "$pidfile")
# This gives p4d a long time to start up, as it can be
# quite slow depending on the machine. Set this environment
# an automated test setup. If the p4d process dies, that
# will be caught with the "kill -0" check below.
i=${P4D_START_PATIENCE:-300}
- pid=$(cat "$pidfile")
- timeout=$(($(time_in_seconds) + $P4D_TIMEOUT))
+ nr_tries_left=$P4D_TIMEOUT
while true
do
- if test $(time_in_seconds) -gt $timeout
+ if test $nr_tries_left -eq 0
then
- kill -9 $pid
+ kill -9 $p4d_pid
exit 1
fi
sleep 1
- done &
+ nr_tries_left=$(($nr_tries_left - 1))
+ done 2>/dev/null 4>&2 &
watchdog_pid=$!
ready=
break
fi
# fail if p4d died
- kill -0 $pid 2>/dev/null || break
+ kill -0 $p4d_pid 2>/dev/null || break
echo waiting for p4d to start
sleep 1
i=$(( $i - 1 ))
}
retry_until_success () {
- timeout=$(($(time_in_seconds) + $RETRY_TIMEOUT))
- until "$@" 2>/dev/null || test $(time_in_seconds) -gt $timeout
- do
- sleep 1
- done
-}
-
-retry_until_fail () {
- timeout=$(($(time_in_seconds) + $RETRY_TIMEOUT))
- until ! "$@" 2>/dev/null || test $(time_in_seconds) -gt $timeout
+ nr_tries_left=$RETRY_TIMEOUT
+ until "$@" 2>/dev/null || test $nr_tries_left -eq 0
do
sleep 1
+ nr_tries_left=$(($nr_tries_left - 1))
done
}
-kill_p4d () {
- pid=$(cat "$pidfile")
- retry_until_fail kill $pid
- retry_until_fail kill -9 $pid
- # complain if it would not die
- test_must_fail kill $pid >/dev/null 2>&1 &&
- rm -rf "$db" "$cli" "$pidfile" &&
- retry_until_fail kill -9 $watchdog_pid
+stop_and_cleanup_p4d () {
+ kill -9 $p4d_pid $watchdog_pid
+ wait $p4d_pid
+ rm -rf "$db" "$cli" "$pidfile"
}
cleanup_git () {
LIB_HTTPD_SVN="$loc"
start_httpd
;;
- *)
- stop_httpd () {
- : noop
- }
- ;;
esac
}
#
# test_expect_success ...
#
-# stop_httpd
# test_done
#
# Can be configured using the following variables.
start_httpd() {
prepare_httpd >&3 2>&4
- trap 'code=$?; stop_httpd; (exit $code); die' EXIT
+ test_atexit stop_httpd
"$LIB_HTTPD_PATH" -d "$HTTPD_ROOT_PATH" \
-f "$TEST_PATH/apache.conf" $HTTPD_PARA \
>&3 2>&4
if test $? -ne 0
then
- trap 'die' EXIT
cat "$HTTPD_ROOT_PATH"/error.log >&4 2>/dev/null
test_skip_or_die $GIT_TEST_HTTPD "web server setup failed"
fi
}
stop_httpd() {
- trap 'die' EXIT
-
"$LIB_HTTPD_PATH" -d "$HTTPD_ROOT_PATH" \
-f "$TEST_PATH/apache.conf" $HTTPD_PARA -k stop
}
git revert HEAD &&
git checkout -b invalid_sub1 add_sub1 &&
- git update-index --cacheinfo 160000 0123456789012345678901234567890123456789 sub1 &&
+ git update-index --cacheinfo 160000 $(test_oid numeric) sub1 &&
git commit -m "Invalid sub1 commit" &&
git checkout -b valid_sub1 &&
git revert HEAD &&
# the submodule repo if it doesn't exist and configures the most problematic
# settings for diff.ignoreSubmodules.
prolog () {
+ test_oid_init &&
(test -d submodule_update_repo || create_lib_submodule_repo) &&
test_config_global diff.ignoreSubmodules all &&
test_config diff.ignoreSubmodules all
git rev-list --all --objects >/dev/null
'
+test_perf 'rev-list --parents' '
+ git rev-list --parents HEAD >/dev/null
+'
+
+test_expect_success 'create dummy file' '
+ echo unlikely-to-already-be-there >dummy &&
+ git add dummy &&
+ git commit -m dummy
+'
+
+test_perf 'rev-list -- dummy' '
+ git rev-list HEAD -- dummy
+'
+
+test_perf 'rev-list --parents -- dummy' '
+ git rev-list --parents HEAD -- dummy
+'
+
test_expect_success 'create new unreferenced commit' '
commit=$(git commit-tree HEAD^{tree} -p HEAD) &&
test_export commit
EOF
"
+test_expect_success 'test_atexit is run' "
+ test_must_fail run_sub_test_lib_test \
+ atexit-cleanup 'Run atexit commands' -i <<-\\EOF &&
+ test_expect_success 'tests clean up even after a failure' '
+ > ../../clean-atexit &&
+ test_atexit rm ../../clean-atexit &&
+ > ../../also-clean-atexit &&
+ test_atexit rm ../../also-clean-atexit &&
+ > ../../dont-clean-atexit &&
+ (exit 1)
+ '
+ test_done
+ EOF
+ test_path_is_file dont-clean-atexit &&
+ test_path_is_missing clean-atexit &&
+ test_path_is_missing also-clean-atexit
+"
+
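For reference, the "test_atexit" helper exercised above complements "test_when_finished": commands registered with "test_when_finished" run when the current test ends, while "test_atexit" commands run only once the whole script finishes or aborts, which is why the daemon, p4d and httpd test libraries now use it in place of a raw "trap ... EXIT". A minimal sketch within the git test framework, using throwaway scratch files (the file names are purely illustrative):

    test_expect_success 'cleanup scopes (sketch)' '
        >per-test-scratch &&
        test_when_finished "rm -f per-test-scratch" &&    # runs when this test ends
        >script-wide-scratch &&
        test_atexit "rm -f script-wide-scratch"           # runs when the whole script exits
    '
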
test_expect_success 'test_oid setup' '
test_oid_init
'
EOF
test_expect_success 'unambiguously abbreviated option' '
+ GIT_TEST_DISALLOW_ABBREVIATED_OPTIONS=false \
test-tool parse-options --int 2 --boolean --no-bo >output 2>output.err &&
test_must_be_empty output.err &&
test_cmp expect output
'
test_expect_success 'unambiguously abbreviated option with "="' '
+ GIT_TEST_DISALLOW_ABBREVIATED_OPTIONS=false \
test-tool parse-options --expect="integer: 2" --int=2
'
test_expect_success 'ambiguously abbreviated option' '
- test_expect_code 129 test-tool parse-options --strin 123
+ test_expect_code 129 env GIT_TEST_DISALLOW_ABBREVIATED_OPTIONS=false \
+ test-tool parse-options --strin 123
'
test_expect_success 'non ambiguous option (after two options it abbreviates)' '
+ GIT_TEST_DISALLOW_ABBREVIATED_OPTIONS=false \
test-tool parse-options --expect="string: 123" --st 123
'
EOF
test_expect_success 'negation of OPT_NONEG flags is not ambiguous' '
+ GIT_TEST_DISALLOW_ABBREVIATED_OPTIONS=false \
test-tool parse-options --no-ambig >output 2>output.err &&
test_must_be_empty output.err &&
test_cmp expect output
test-tool parse-options --expect="verbose: 0" -v -v -v --no-verbose
'
+test_expect_success 'GIT_TEST_DISALLOW_ABBREVIATED_OPTIONS works' '
+ GIT_TEST_DISALLOW_ABBREVIATED_OPTIONS=false \
+ test-tool parse-options --ye &&
+ test_must_fail env GIT_TEST_DISALLOW_ABBREVIATED_OPTIONS=true \
+ test-tool parse-options --ye
+'
+
test_done
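
Beyond individual test bodies, the same knob can be exported for a whole run so that any test still leaning on a unique-prefix abbreviation fails; a sketch, assuming a built git source tree (the variable name is real, the invocations below are illustrative):

    # reject abbreviated long options everywhere in the test suite
    GIT_TEST_DISALLOW_ABBREVIATED_OPTIONS=true make test

    # or for a single script, run from within t/
    GIT_TEST_DISALLOW_ABBREVIATED_OPTIONS=true ./t0040-parse-options.sh -v
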
}
# don't leave a stale daemon running
-trap 'code=$?; git credential-cache exit; (exit $code); die' EXIT
+test_atexit 'git credential-cache exit'
# test that the daemon works with no special setup
helper_test cache
helper_test_timeout cache --timeout=1
-# we can't rely on our "trap" above working after test_done,
-# as test_done will delete the trash directory containing
-# our socket, leaving us with no way to access the daemon.
-git credential-cache exit
-
test_done
git verify-pack --verbose "$IDX" | grep "$HASH"
'
-stop_httpd
-
test_done
)
'
+test_expect_success 'conditional include with /**/' '
+ REPO=foo/bar/repo &&
+ git init $REPO &&
+ cat >>$REPO/.git/config <<-\EOF &&
+ [includeIf "gitdir:**/foo/**/bar/**"]
+ path=bar7
+ EOF
+ echo "[test]seven=7" >$REPO/.git/bar7 &&
+ echo 7 >expect &&
+ git -C $REPO config test.seven >actual &&
+ test_cmp expect actual
+'
+
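The new test covers "gitdir" patterns that use "/**/" to match any number of intermediate directories. A minimal user-facing sketch of the same conditional include, with hypothetical paths and file name:

    # apply work-specific settings to any repository found anywhere under
    # a directory named "work" (the included file name is hypothetical)
    git config --global "includeIf.gitdir:**/work/**.path" "~/.gitconfig-work"
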
test_expect_success SYMLINKS 'conditional include, set up symlinked $HOME' '
mkdir real-home &&
ln -s real-home home &&
'
test_expect_success 'gc.reflogexpire=never' '
+ test_config gc.reflogexpire never &&
+ test_config gc.reflogexpireunreachable never &&
+
+ git reflog expire --verbose --all >output &&
+ test_line_count = 9 output &&
- git config gc.reflogexpire never &&
- git config gc.reflogexpireunreachable never &&
- git reflog expire --verbose --all &&
git reflog refs/heads/master >output &&
test_line_count = 4 output
'
test_expect_success 'gc.reflogexpire=false' '
+ test_config gc.reflogexpire false &&
+ test_config gc.reflogexpireunreachable false &&
- git config gc.reflogexpire false &&
- git config gc.reflogexpireunreachable false &&
git reflog expire --verbose --all &&
git reflog refs/heads/master >output &&
- test_line_count = 4 output &&
+ test_line_count = 4 output
+
+'
- git config --unset gc.reflogexpire &&
- git config --unset gc.reflogexpireunreachable
+test_expect_success 'git reflog expire unknown reference' '
+ test_config gc.reflogexpire never &&
+ test_config gc.reflogexpireunreachable never &&
+ test_must_fail git reflog expire master@{123} 2>stderr &&
+ test_i18ngrep "points nowhere" stderr &&
+ test_must_fail git reflog expire does-not-exist 2>stderr &&
+ test_i18ngrep "points nowhere" stderr
'
test_expect_success 'checkout should not delete log for packed ref' '
# Copyright (c) 2005 Junio C Hamano
#
-test_description='git ls-files test (--others should pick up symlinks).
+test_description='basic tests for ls-files --others
This test runs git ls-files --others with the following on the
filesystem.
--- /dev/null
+#!/bin/sh
+
+test_description='test git ls-files --others with non-submodule repositories
+
+This test runs git ls-files --others with the following working tree:
+
+ nonrepo-no-files/
+ plain directory with no files
+ nonrepo-untracked-file/
+ plain directory with an untracked file
+ repo-no-commit-no-files/
+ git repository without a commit or a file
+ repo-no-commit-untracked-file/
+ git repository without a commit but with an untracked file
+ repo-with-commit-no-files/
+ git repository with a commit and no untracked files
+ repo-with-commit-untracked-file/
+ git repository with a commit and an untracked file
+'
+
+. ./test-lib.sh
+
+test_expect_success 'setup: directories' '
+ mkdir nonrepo-no-files/ &&
+ mkdir nonrepo-untracked-file &&
+ : >nonrepo-untracked-file/untracked &&
+ git init repo-no-commit-no-files &&
+ git init repo-no-commit-untracked-file &&
+ : >repo-no-commit-untracked-file/untracked &&
+ git init repo-with-commit-no-files &&
+ git -C repo-with-commit-no-files commit --allow-empty -mmsg &&
+ git init repo-with-commit-untracked-file &&
+ test_commit -C repo-with-commit-untracked-file msg &&
+ : >repo-with-commit-untracked-file/untracked
+'
+
+test_expect_success 'ls-files --others handles untracked git repositories' '
+ git ls-files -o >output &&
+ cat >expect <<-EOF &&
+ nonrepo-untracked-file/untracked
+ output
+ repo-no-commit-no-files/
+ repo-no-commit-untracked-file/
+ repo-with-commit-no-files/
+ repo-with-commit-untracked-file/
+ EOF
+ test_cmp expect output
+'
+
+test_done
test_config notes.rewriteMode overwrite &&
test_config notes.rewriteRef refs/notes/other &&
echo $(git rev-parse HEAD^) $(git rev-parse HEAD) |
- GIT_NOTES_REWRITE_REF= git notes copy --for-rewrite=foo &&
+ GIT_NOTES_REWRITE_REF=refs/notes/commits \
+ git notes copy --for-rewrite=foo &&
git log -1 >actual &&
- test_cmp expect actual
+ grep "replacement note 3" actual
'
test_expect_success 'git notes copy diagnoses too many or too few parameters' '
git checkout B^0 &&
set_fake_editor &&
- FAKE_LINES="1" git rebase --interactive A &&
+ FAKE_LINES="1" git -c merge.directoryRenames=true rebase --interactive A &&
git ls-files -s >out &&
test_line_count = 5 out &&
git checkout B^0 &&
- git rebase A &&
+ git -c merge.directoryRenames=true rebase A &&
git ls-files -s >out &&
test_line_count = 5 out &&
git checkout B^0 &&
- git rebase --merge A &&
+ git -c merge.directoryRenames=true rebase --merge A &&
git ls-files -s >out &&
test_line_count = 5 out &&
git format-patch -1 B &&
- git am --3way 0001*.patch &&
+ git -c merge.directoryRenames=true am --3way 0001*.patch &&
git ls-files -s >out &&
test_line_count = 5 out &&
(
set_cat_todo_editor &&
test_must_fail git -c rebase.instructionFormat= \
- rebase --autosquash --force -i HEAD^ >actual &&
+ rebase --autosquash --force-rebase -i HEAD^ >actual &&
git log -1 --format="pick %h %s" >expect &&
test_cmp expect actual
)
EOF
test_config sequence.editor \""$PWD"/replace-editor.sh\" &&
test_tick &&
- git rebase -i --force --root -r &&
+ git rebase -i --force-rebase --root -r &&
test "Parsnip" = "$(git show -s --format=%an HEAD^)" &&
test $(git rev-parse second-root^0) != $(git rev-parse HEAD^) &&
test $(git rev-parse second-root:second-root.t) = \
test_cmp_rev HEAD $before &&
test_tick &&
- git rebase -i --force -r HEAD^^ &&
+ git rebase -i --force-rebase -r HEAD^^ &&
test "Hank" = "$(git show -s --format=%an HEAD)" &&
test "$before" != $(git rev-parse HEAD) &&
test_cmp_graph HEAD^^.. <<-\EOF
test_commit base foo b &&
test_commit picked foo c &&
test_commit --signoff picked-signed foo d &&
+ git checkout -b topic initial &&
+ test_commit redundant-pick foo c redundant &&
+ git commit --allow-empty --allow-empty-message &&
+ git tag empty &&
+ git checkout master &&
git config advice.detachedhead false
'
test_i18ngrep ! "Changes not staged for commit:" actual
'
+test_expect_success 'cherry-pick --continue remembers --keep-redundant-commits' '
+ test_when_finished "git cherry-pick --abort || :" &&
+ pristine_detach initial &&
+ test_must_fail git cherry-pick --keep-redundant-commits picked redundant &&
+ echo c >foo &&
+ git add foo &&
+ git cherry-pick --continue
+'
+
+test_expect_success 'cherry-pick --continue remembers --allow-empty and --allow-empty-message' '
+ test_when_finished "git cherry-pick --abort || :" &&
+ pristine_detach initial &&
+ test_must_fail git cherry-pick --allow-empty --allow-empty-message \
+ picked empty &&
+ echo c >foo &&
+ git add foo &&
+ git cherry-pick --continue
+'
+
test_done
)
'
+test_expect_success 'error on a repository with no commits' '
+ rm -fr empty &&
+ git init empty &&
+ test_must_fail git add empty >actual 2>&1 &&
+ cat >expect <<-EOF &&
+ error: '"'empty/'"' does not have a commit checked out
+ fatal: adding files failed
+ EOF
+ test_i18ncmp expect actual
+'
+
test_expect_success 'git add --dry-run of existing changed file' "
echo new >>track-this &&
git add --dry-run track-this >actual 2>&1 &&
'
test_expect_success 'all statuses changed in folder if . is given' '
+ rm -fr empty &&
git add --chmod=+x . &&
test $(git ls-files --stage | grep ^100644 | wc -l) -eq 0 &&
git add --chmod=-x . &&
. ./test-lib.sh
test_expect_success 'stash some dirty working directory' '
- echo 1 > file &&
+ echo 1 >file &&
git add file &&
echo unrelated >other-file &&
git add other-file &&
test_tick &&
git commit -m initial &&
- echo 2 > file &&
+ echo 2 >file &&
git add file &&
- echo 3 > file &&
+ echo 3 >file &&
test_tick &&
git stash &&
git diff-files --quiet &&
git diff-index --cached --quiet HEAD
'
-cat > expect << EOF
+cat >expect <<EOF
diff --git a/file b/file
index 0cfbf08..00750ed 100644
--- a/file
test_expect_success 'parents of stash' '
test $(git rev-parse stash^) = $(git rev-parse HEAD) &&
- git diff stash^2..stash > output &&
+ git diff stash^2..stash >output &&
test_cmp expect output
'
test_expect_success 'apply stashed changes (including index)' '
git reset --hard HEAD^ &&
- echo 6 > other-file &&
+ echo 6 >other-file &&
git add other-file &&
test_tick &&
git commit -m other-file &&
test_expect_success 'drop top stash' '
git reset --hard &&
- git stash list > stashlist1 &&
- echo 7 > file &&
+ git stash list >expected &&
+ echo 7 >file &&
git stash &&
git stash drop &&
- git stash list > stashlist2 &&
- test_cmp stashlist1 stashlist2 &&
+ git stash list >actual &&
+ test_cmp expected actual &&
git stash apply &&
test 3 = $(cat file) &&
test 1 = $(git show :file) &&
test_expect_success 'drop middle stash' '
git reset --hard &&
- echo 8 > file &&
+ echo 8 >file &&
git stash &&
- echo 9 > file &&
+ echo 9 >file &&
git stash &&
git stash drop stash@{1} &&
test 2 = $(git stash list | wc -l) &&
test 0 = $(git stash list | wc -l)
'
-cat > expect << EOF
+cat >expect <<EOF
diff --git a/file2 b/file2
new file mode 100644
index 0000000..1fe912c
+bar2
EOF
-cat > expect1 << EOF
+cat >expect1 <<EOF
diff --git a/file b/file
index 257cc56..5716ca5 100644
--- a/file
+bar
EOF
-cat > expect2 << EOF
+cat >expect2 <<EOF
diff --git a/file b/file
index 7601807..5716ca5 100644
--- a/file
EOF
test_expect_success 'stash branch' '
- echo foo > file &&
+ echo foo >file &&
git commit file -m first &&
- echo bar > file &&
- echo bar2 > file2 &&
+ echo bar >file &&
+ echo bar2 >file2 &&
git add file2 &&
git stash &&
- echo baz > file &&
+ echo baz >file &&
git commit file -m second &&
git stash branch stashbranch &&
test refs/heads/stashbranch = $(git symbolic-ref HEAD) &&
test $(git rev-parse HEAD) = $(git rev-parse master^) &&
- git diff --cached > output &&
+ git diff --cached >output &&
test_cmp expect output &&
- git diff > output &&
+ git diff >output &&
test_cmp expect1 output &&
git add file &&
git commit -m alternate\ second &&
- git diff master..stashbranch > output &&
+ git diff master..stashbranch >output &&
test_cmp output expect2 &&
test 0 = $(git stash list | wc -l)
'
test_expect_success 'apply -q is quiet' '
- echo foo > file &&
+ echo foo >file &&
git stash &&
- git stash apply -q > output.out 2>&1 &&
+ git stash apply -q >output.out 2>&1 &&
test_must_be_empty output.out
'
test_expect_success 'save -q is quiet' '
- git stash save --quiet > output.out 2>&1 &&
+ git stash save --quiet >output.out 2>&1 &&
test_must_be_empty output.out
'
test_expect_success 'pop -q is quiet' '
- git stash pop -q > output.out 2>&1 &&
+ git stash pop -q >output.out 2>&1 &&
test_must_be_empty output.out
'
test_expect_success 'pop -q --index works and is quiet' '
- echo foo > file &&
+ echo foo >file &&
git add file &&
git stash save --quiet &&
- git stash pop -q --index > output.out 2>&1 &&
+ git stash pop -q --index >output.out 2>&1 &&
test foo = "$(git show :file)" &&
test_must_be_empty output.out
'
test_expect_success 'drop -q is quiet' '
git stash &&
- git stash drop -q > output.out 2>&1 &&
+ git stash drop -q >output.out 2>&1 &&
test_must_be_empty output.out
'
test_expect_success 'stash -k' '
- echo bar3 > file &&
- echo bar4 > file2 &&
+ echo bar3 >file &&
+ echo bar4 >file2 &&
git add file2 &&
git stash -k &&
test bar,bar4 = $(cat file),$(cat file2)
'
test_expect_success 'stash --no-keep-index' '
- echo bar33 > file &&
- echo bar44 > file2 &&
+ echo bar33 >file &&
+ echo bar44 >file2 &&
git add file2 &&
git stash --no-keep-index &&
test bar,bar2 = $(cat file),$(cat file2)
'
test_expect_success 'stash --invalid-option' '
- echo bar5 > file &&
- echo bar6 > file2 &&
+ echo bar5 >file &&
+ echo bar6 >file2 &&
git add file2 &&
test_must_fail git stash --invalid-option &&
test_must_fail git stash save --invalid-option &&
test new = "$(cat file3)"
'
+test_expect_success 'stash --intent-to-add file' '
+ git reset --hard &&
+ echo new >file4 &&
+ git add --intent-to-add file4 &&
+ test_when_finished "git rm -f file4" &&
+ test_must_fail git stash
+'
+
test_expect_success 'stash rm then recreate' '
git reset --hard &&
git rm file &&
test foo = "$(cat file/file)"
'
+test_expect_success 'giving too many ref arguments does not modify files' '
+ git stash clear &&
+ test_when_finished "git reset --hard HEAD" &&
+ echo foo >file2 &&
+ git stash &&
+ echo bar >file2 &&
+ git stash &&
+ test-tool chmtime =123456789 file2 &&
+ for type in apply pop "branch stash-branch"
+ do
+ test_must_fail git stash $type stash@{0} stash@{1} 2>err &&
+ test_i18ngrep "Too many revisions" err &&
+ test 123456789 = $(test-tool chmtime -g file2) || return 1
+ done
+'
+
+test_expect_success 'drop: too many arguments errors out (does nothing)' '
+ git stash list >expect &&
+ test_must_fail git stash drop stash@{0} stash@{1} 2>err &&
+ test_i18ngrep "Too many revisions" err &&
+ git stash list >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'show: too many arguments errors out (does nothing)' '
+ test_must_fail git stash show stash@{0} stash@{1} 2>err 1>out &&
+ test_i18ngrep "Too many revisions" err &&
+ test_must_be_empty out
+'
+
test_expect_success 'stash create - no changes' '
git stash clear &&
test_when_finished "git reset --hard HEAD" &&
git stash clear &&
test_when_finished "git reset --hard HEAD" &&
git reset --hard &&
- echo foo >> file &&
+ echo foo >>file &&
STASH_ID=$(git stash create) &&
git reset --hard &&
git stash branch stash-branch ${STASH_ID} &&
- test_when_finished "git reset --hard HEAD && git checkout master && git branch -D stash-branch" &&
+ test_when_finished "git reset --hard HEAD && git checkout master &&
+ git branch -D stash-branch" &&
test $(git ls-files --modified | wc -l) -eq 1
'
git stash clear &&
test_when_finished "git reset --hard HEAD" &&
git reset --hard &&
- echo foo >> file &&
+ echo foo >>file &&
git stash &&
test_when_finished "git stash drop" &&
- echo bar >> file &&
+ echo bar >>file &&
STASH_ID=$(git stash create) &&
git reset --hard &&
git stash branch stash-branch ${STASH_ID} &&
- test_when_finished "git reset --hard HEAD && git checkout master && git branch -D stash-branch" &&
+ test_when_finished "git reset --hard HEAD && git checkout master &&
+ git branch -D stash-branch" &&
test $(git ls-files --modified | wc -l) -eq 1
'
+test_expect_success 'stash branch complains with no arguments' '
+ test_must_fail git stash branch 2>err &&
+ test_i18ngrep "No branch name specified" err
+'
+
test_expect_success 'stash show format defaults to --stat' '
git stash clear &&
test_when_finished "git reset --hard HEAD" &&
git reset --hard &&
- echo foo >> file &&
+ echo foo >>file &&
git stash &&
test_when_finished "git stash drop" &&
- echo bar >> file &&
+ echo bar >>file &&
STASH_ID=$(git stash create) &&
git reset --hard &&
cat >expected <<-EOF &&
git stash clear &&
test_when_finished "git reset --hard HEAD" &&
git reset --hard &&
- echo foo >> file &&
+ echo foo >>file &&
git stash &&
test_when_finished "git stash drop" &&
- echo bar >> file &&
+ echo bar >>file &&
STASH_ID=$(git stash create) &&
git reset --hard &&
echo "1 0 file" >expected &&
git stash clear &&
test_when_finished "git reset --hard HEAD" &&
git reset --hard &&
- echo foo >> file &&
+ echo foo >>file &&
git stash &&
test_when_finished "git stash drop" &&
- echo bar >> file &&
+ echo bar >>file &&
STASH_ID=$(git stash create) &&
git reset --hard &&
cat >expected <<-EOF &&
git stash clear &&
test_when_finished "git reset --hard HEAD" &&
git reset --hard &&
- echo foo >> file &&
+ echo foo >>file &&
STASH_ID=$(git stash create) &&
git reset --hard &&
echo "1 0 file" >expected &&
git stash clear &&
test_when_finished "git reset --hard HEAD" &&
git reset --hard &&
- echo foo >> file &&
+ echo foo >>file &&
STASH_ID=$(git stash create) &&
git reset --hard &&
cat >expected <<-EOF &&
test_cmp expected actual
'
-test_expect_success 'stash drop - fail early if specified stash is not a stash reference' '
+test_expect_success 'stash show --patience shows diff' '
+ git reset --hard &&
+ echo foo >>file &&
+ STASH_ID=$(git stash create) &&
+ git reset --hard &&
+ cat >expected <<-EOF &&
+ diff --git a/file b/file
+ index 7601807..71b52c4 100644
+ --- a/file
+ +++ b/file
+ @@ -1 +1,2 @@
+ baz
+ +foo
+ EOF
+ git stash show --patience ${STASH_ID} >actual &&
+ test_cmp expected actual
+'
+
+test_expect_success 'drop: fail early if specified stash is not a stash ref' '
git stash clear &&
test_when_finished "git reset --hard HEAD && git stash clear" &&
git reset --hard &&
- echo foo > file &&
+ echo foo >file &&
git stash &&
- echo bar > file &&
+ echo bar >file &&
git stash &&
test_must_fail git stash drop $(git rev-parse stash@{0}) &&
git stash pop &&
git reset --hard HEAD
'
-test_expect_success 'stash pop - fail early if specified stash is not a stash reference' '
+test_expect_success 'pop: fail early if specified stash is not a stash ref' '
git stash clear &&
test_when_finished "git reset --hard HEAD && git stash clear" &&
git reset --hard &&
- echo foo > file &&
+ echo foo >file &&
git stash &&
- echo bar > file &&
+ echo bar >file &&
git stash &&
test_must_fail git stash pop $(git rev-parse stash@{0}) &&
git stash pop &&
test_expect_success 'ref with non-existent reflog' '
git stash clear &&
- echo bar5 > file &&
- echo bar6 > file2 &&
+ echo bar5 >file &&
+ echo bar6 >file2 &&
git add file2 &&
git stash &&
test_must_fail git rev-parse --quiet --verify does-not-exist &&
test_expect_success 'invalid ref of the form stash@{n}, n >= N' '
git stash clear &&
test_must_fail git stash drop stash@{0} &&
- echo bar5 > file &&
- echo bar6 > file2 &&
+ echo bar5 >file &&
+ echo bar6 >file2 &&
git add file2 &&
git stash &&
test_must_fail git stash drop stash@{1} &&
git stash drop
'
-test_expect_success 'stash branch should not drop the stash if the branch exists' '
+test_expect_success 'branch: do not drop the stash if the branch exists' '
git stash clear &&
echo foo >file &&
git add file &&
git rev-parse stash@{0} --
'
-test_expect_success 'stash branch should not drop the stash if the apply fails' '
+test_expect_success 'branch: should not drop the stash if the apply fails' '
git stash clear &&
git reset HEAD~1 --hard &&
echo foo >file &&
git rev-parse stash@{0} --
'
-test_expect_success 'stash apply shows status same as git status (relative to current directory)' '
+test_expect_success 'apply: show same status as git status (relative to ./)' '
git stash clear &&
echo 1 >subdir/subfile1 &&
echo 2 >subdir/subfile2 &&
test_i18ncmp expect actual
'
-cat > expect << EOF
+cat >expect <<EOF
diff --git a/HEAD b/HEAD
new file mode 100644
index 0000000..fe0cbee
test_expect_success 'stash where working directory contains "HEAD" file' '
git stash clear &&
git reset --hard &&
- echo file-not-a-ref > HEAD &&
+ echo file-not-a-ref >HEAD &&
git add HEAD &&
test_tick &&
git stash &&
git diff-files --quiet &&
git diff-index --cached --quiet HEAD &&
test "$(git rev-parse stash^)" = "$(git rev-parse HEAD)" &&
- git diff stash^..stash > output &&
+ git diff stash^..stash >output &&
test_cmp expect output
'
test_i18ncmp expect actual
'
-test_expect_success 'stash push with pathspec shows no changes when there are none' '
+test_expect_success 'push <pathspec>: show no changes when there are none' '
>foo &&
git add foo &&
git commit -m "tmp" &&
test_i18ncmp expect actual
'
-test_expect_success 'stash push with pathspec not in the repository errors out' '
+test_expect_success 'push: <pathspec> not in the repository errors out' '
>untracked &&
test_must_fail git stash push untracked &&
test_path_is_file untracked
'
+test_expect_success 'push: -q is quiet with changes' '
+ >foo &&
+ git add foo &&
+ git stash push -q >output 2>&1 &&
+ test_must_be_empty output
+'
+
+test_expect_success 'push: -q is quiet with no changes' '
+ git stash push -q >output 2>&1 &&
+ test_must_be_empty output
+'
+
+test_expect_success 'push: -q is quiet even if there is no initial commit' '
+ git init foo_dir &&
+ test_when_finished rm -rf foo_dir &&
+ (
+ cd foo_dir &&
+ >bar &&
+ test_must_fail git stash push -q >output 2>&1 &&
+ test_must_be_empty output
+ )
+'
+
test_expect_success 'untracked files are left in place when -u is not given' '
>file &&
git add file &&
test_path_is_file subdir/untracked
'
+test_expect_success 'stash with user.name and user.email set works' '
+ test_config user.name "A U Thor" &&
+ test_config user.email "a.u@thor" &&
+ git stash
+'
+
test_expect_success 'stash works when user.name and user.email are not set' '
git reset &&
>1 &&
test_i18ncmp expect actual
'
+test_expect_success 'stash -u with globs' '
+ >untracked.txt &&
+ git stash -u -- ":(glob)**/*.txt" &&
+ test_path_is_missing untracked.txt
+'
+
test_done
--- /dev/null
+#!/bin/sh
+
+test_description='Test git stash show configuration.'
+
+. ./test-lib.sh
+
+test_expect_success 'setup' '
+ test_commit file
+'
+
+# takes three parameters:
+# 1. the stash.showStat value (or "<unset>")
+# 2. the stash.showPatch value (or "<unset>")
+# 3. the diff options of the expected output (or nothing for no output)
+test_stat_and_patch () {
+ if test "<unset>" = "$1"
+ then
+ test_unconfig stash.showStat
+ else
+ test_config stash.showStat "$1"
+ fi &&
+
+ if test "<unset>" = "$2"
+ then
+ test_unconfig stash.showPatch
+ else
+ test_config stash.showPatch "$2"
+ fi &&
+
+ shift 2 &&
+ echo 2 >file.t &&
+ if test $# != 0
+ then
+ git diff "$@" >expect
+ fi &&
+ git stash &&
+ git stash show >actual &&
+
+ if test $# = 0
+ then
+ test_must_be_empty actual
+ else
+ test_cmp expect actual
+ fi
+}
+
+test_expect_success 'showStat unset showPatch unset' '
+ test_stat_and_patch "<unset>" "<unset>" --stat
+'
+
+test_expect_success 'showStat unset showPatch false' '
+ test_stat_and_patch "<unset>" false --stat
+'
+
+test_expect_success 'showStat unset showPatch true' '
+ test_stat_and_patch "<unset>" true --stat -p
+'
+
+test_expect_success 'showStat false showPatch unset' '
+ test_stat_and_patch false "<unset>"
+'
+
+test_expect_success 'showStat false showPatch false' '
+ test_stat_and_patch false false
+'
+
+test_expect_success 'showStat false showPatch true' '
+ test_stat_and_patch false true -p
+'
+
+test_expect_success 'showStat true showPatch unset' '
+ test_stat_and_patch true "<unset>" --stat
+'
+
+test_expect_success 'showStat true showPatch false' '
+ test_stat_and_patch true false --stat
+'
+
+test_expect_success 'showStat true showPatch true' '
+ test_stat_and_patch true true --stat -p
+'
+
+test_done
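
In user-facing terms the matrix above reduces to: "git stash show" adds "--stat" when stash.showStat is true (its default), adds "-p" when stash.showPatch is true, and prints nothing when both are false. A quick sketch, assuming a repository that already has a stash entry:

    git config stash.showStat true
    git config stash.showPatch true
    git stash show     # behaves like "git stash show --stat -p"

    git config stash.showPatch false
    git stash show     # back to the default diffstat-only output
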
export GIT_CEILING_DIRECTORIES &&
cd non/git &&
test_must_fail git diff --no-index a 2>actual.err &&
- echo "usage: git diff --no-index <path> <path>" >expect.err &&
- test_cmp expect.err actual.err
+ test_i18ngrep "usage: git diff --no-index" actual.err
)
'
--- /dev/null
+#!/bin/sh
+
+test_description='behavior of diff when reading objects in a partial clone'
+
+. ./test-lib.sh
+
+test_expect_success 'git show batches blobs' '
+ test_when_finished "rm -rf server client trace" &&
+
+ test_create_repo server &&
+ echo a >server/a &&
+ echo b >server/b &&
+ git -C server add a b &&
+ git -C server commit -m x &&
+
+ test_config -C server uploadpack.allowfilter 1 &&
+ test_config -C server uploadpack.allowanysha1inwant 1 &&
+ git clone --bare --filter=blob:limit=0 "file://$(pwd)/server" client &&
+
+ # Ensure that there is exactly 1 negotiation by checking that there is
+ # only 1 "done" line sent. ("done" marks the end of negotiation.)
+ GIT_TRACE_PACKET="$(pwd)/trace" git -C client show HEAD &&
+ grep "git> done" trace >done_lines &&
+ test_line_count = 1 done_lines
+'
+
+test_expect_success 'diff batches blobs' '
+ test_when_finished "rm -rf server client trace" &&
+
+ test_create_repo server &&
+ echo a >server/a &&
+ echo b >server/b &&
+ git -C server add a b &&
+ git -C server commit -m x &&
+ echo c >server/c &&
+ echo d >server/d &&
+ git -C server add c d &&
+ git -C server commit -m x &&
+
+ test_config -C server uploadpack.allowfilter 1 &&
+ test_config -C server uploadpack.allowanysha1inwant 1 &&
+ git clone --bare --filter=blob:limit=0 "file://$(pwd)/server" client &&
+
+ # Ensure that there is exactly 1 negotiation by checking that there is
+ # only 1 "done" line sent. ("done" marks the end of negotiation.)
+ GIT_TRACE_PACKET="$(pwd)/trace" git -C client diff HEAD^ HEAD &&
+ grep "git> done" trace >done_lines &&
+ test_line_count = 1 done_lines
+'
+
+test_expect_success 'diff skips same-OID blobs' '
+ test_when_finished "rm -rf server client trace" &&
+
+ test_create_repo server &&
+ echo a >server/a &&
+ echo b >server/b &&
+ git -C server add a b &&
+ git -C server commit -m x &&
+ echo another-a >server/a &&
+ git -C server add a &&
+ git -C server commit -m x &&
+
+ test_config -C server uploadpack.allowfilter 1 &&
+ test_config -C server uploadpack.allowanysha1inwant 1 &&
+ git clone --bare --filter=blob:limit=0 "file://$(pwd)/server" client &&
+
+ echo a | git hash-object --stdin >hash-old-a &&
+ echo another-a | git hash-object --stdin >hash-new-a &&
+ echo b | git hash-object --stdin >hash-b &&
+
+ # Ensure that only a and another-a are fetched.
+ GIT_TRACE_PACKET="$(pwd)/trace" git -C client diff HEAD^ HEAD &&
+ grep "want $(cat hash-old-a)" trace &&
+ grep "want $(cat hash-new-a)" trace &&
+ ! grep "want $(cat hash-b)" trace
+'
+
+test_expect_success 'diff with rename detection batches blobs' '
+ test_when_finished "rm -rf server client trace" &&
+
+ test_create_repo server &&
+ echo a >server/a &&
+ printf "b\nb\nb\nb\nb\n" >server/b &&
+ git -C server add a b &&
+ git -C server commit -m x &&
+ rm server/b &&
+ printf "b\nb\nb\nb\nbX\n" >server/c &&
+ git -C server add c &&
+ git -C server commit -a -m x &&
+
+ test_config -C server uploadpack.allowfilter 1 &&
+ test_config -C server uploadpack.allowanysha1inwant 1 &&
+ git clone --bare --filter=blob:limit=0 "file://$(pwd)/server" client &&
+
+ # Ensure that there is exactly 1 negotiation by checking that there is
+ # only 1 "done" line sent. ("done" marks the end of negotiation.)
+ GIT_TRACE_PACKET="$(pwd)/trace" git -C client diff -M HEAD^ HEAD >out &&
+ grep "similarity index" out &&
+ grep "git> done" trace >done_lines &&
+ test_line_count = 1 done_lines
+'
+
+test_done
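
These tests pin down the blob-batching behaviour mentioned in the release notes: in a blob-less partial clone, "git diff" and "git show" now determine up front which missing blobs they need and fetch them in a single negotiation instead of one round trip per blob. A rough way to observe this by hand, assuming a server repository prepared as in the tests above (uploadpack.allowfilter and uploadpack.allowanysha1inwant enabled):

    # a blob:limit=0 filter leaves every blob behind on the server
    git clone --bare --filter=blob:limit=0 "file://$(pwd)/server" client &&

    # a single "git> done" packet means the blobs needed for the diff
    # were requested in one batch
    GIT_TRACE_PACKET="$(pwd)/trace" git -C client diff HEAD^ HEAD &&
    grep -c "git> done" trace
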
awk -f print_2.awk ls_files_result |
sort >expected &&
- git -C r1 pack-objects --rev --stdout >all.pack <<-EOF &&
+ git -C r1 pack-objects --revs --stdout >all.pack <<-EOF &&
HEAD
EOF
git -C r1 index-pack ../all.pack &&
'
test_expect_success 'verify blob:none packfile has no blobs' '
- git -C r1 pack-objects --rev --stdout --filter=blob:none >filter.pack <<-EOF &&
+ git -C r1 pack-objects --revs --stdout --filter=blob:none >filter.pack <<-EOF &&
HEAD
EOF
git -C r1 index-pack ../filter.pack &&
git -C r5 commit -m "foo" &&
del=$(git -C r5 rev-parse HEAD^{tree} | sed "s|..|&/|") &&
rm r5/.git/objects/$del &&
- test_must_fail git -C r5 pack-objects --rev --stdout 2>bad_tree <<-EOF &&
+ test_must_fail git -C r5 pack-objects --revs --stdout 2>bad_tree <<-EOF &&
HEAD
EOF
grep "bad tree object" bad_tree
'
test_expect_success 'verify tree:0 packfile has no blobs or trees' '
- git -C r1 pack-objects --rev --stdout --filter=tree:0 >commitsonly.pack <<-EOF &&
+ git -C r1 pack-objects --revs --stdout --filter=tree:0 >commitsonly.pack <<-EOF &&
HEAD
EOF
git -C r1 index-pack ../commitsonly.pack &&
test_expect_success 'grab tree directly when using tree:0' '
# We should get the tree specified directly but not its blobs or subtrees.
- git -C r1 pack-objects --rev --stdout --filter=tree:0 >commitsonly.pack <<-EOF &&
+ git -C r1 pack-objects --revs --stdout --filter=tree:0 >commitsonly.pack <<-EOF &&
HEAD:
EOF
git -C r1 index-pack ../commitsonly.pack &&
awk -f print_2.awk ls_files_result |
sort >expected &&
- git -C r2 pack-objects --rev --stdout >all.pack <<-EOF &&
+ git -C r2 pack-objects --revs --stdout >all.pack <<-EOF &&
HEAD
EOF
git -C r2 index-pack ../all.pack &&
'
test_expect_success 'verify blob:limit=500 omits all blobs' '
- git -C r2 pack-objects --rev --stdout --filter=blob:limit=500 >filter.pack <<-EOF &&
+ git -C r2 pack-objects --revs --stdout --filter=blob:limit=500 >filter.pack <<-EOF &&
HEAD
EOF
git -C r2 index-pack ../filter.pack &&
'
test_expect_success 'verify blob:limit=1000' '
- git -C r2 pack-objects --rev --stdout --filter=blob:limit=1000 >filter.pack <<-EOF &&
+ git -C r2 pack-objects --revs --stdout --filter=blob:limit=1000 >filter.pack <<-EOF &&
HEAD
EOF
git -C r2 index-pack ../filter.pack &&
awk -f print_2.awk ls_files_result |
sort >expected &&
- git -C r2 pack-objects --rev --stdout --filter=blob:limit=1001 >filter.pack <<-EOF &&
+ git -C r2 pack-objects --revs --stdout --filter=blob:limit=1001 >filter.pack <<-EOF &&
HEAD
EOF
git -C r2 index-pack ../filter.pack &&
awk -f print_2.awk ls_files_result |
sort >expected &&
- git -C r2 pack-objects --rev --stdout --filter=blob:limit=10001 >filter.pack <<-EOF &&
+ git -C r2 pack-objects --revs --stdout --filter=blob:limit=10001 >filter.pack <<-EOF &&
HEAD
EOF
git -C r2 index-pack ../filter.pack &&
awk -f print_2.awk ls_files_result |
sort >expected &&
- git -C r2 pack-objects --rev --stdout --filter=blob:limit=1k >filter.pack <<-EOF &&
+ git -C r2 pack-objects --revs --stdout --filter=blob:limit=1k >filter.pack <<-EOF &&
HEAD
EOF
git -C r2 index-pack ../filter.pack &&
awk -f print_2.awk ls_files_result |
sort >expected &&
- git -C r2 pack-objects --rev --stdout --filter=blob:limit=1k >filter.pack <<-EOF &&
+ git -C r2 pack-objects --revs --stdout --filter=blob:limit=1k >filter.pack <<-EOF &&
HEAD
$(git -C r2 rev-parse HEAD:large.10000)
EOF
awk -f print_2.awk ls_files_result |
sort >expected &&
- git -C r2 pack-objects --rev --stdout --filter=blob:limit=1m >filter.pack <<-EOF &&
+ git -C r2 pack-objects --revs --stdout --filter=blob:limit=1m >filter.pack <<-EOF &&
HEAD
EOF
git -C r2 index-pack ../filter.pack &&
awk -f print_2.awk ls_files_result |
sort >expected &&
- git -C r3 pack-objects --rev --stdout >all.pack <<-EOF &&
+ git -C r3 pack-objects --revs --stdout >all.pack <<-EOF &&
HEAD
EOF
git -C r3 index-pack ../all.pack &&
awk -f print_2.awk ls_files_result |
sort >expected &&
- git -C r3 pack-objects --rev --stdout --filter=sparse:path=../pattern1 >filter.pack <<-EOF &&
+ git -C r3 pack-objects --revs --stdout --filter=sparse:path=../pattern1 >filter.pack <<-EOF &&
HEAD
EOF
git -C r3 index-pack ../filter.pack &&
awk -f print_2.awk ls_files_result |
sort >expected &&
- git -C r3 pack-objects --rev --stdout --filter=sparse:path=../pattern2 >filter.pack <<-EOF &&
+ git -C r3 pack-objects --revs --stdout --filter=sparse:path=../pattern2 >filter.pack <<-EOF &&
HEAD
EOF
git -C r3 index-pack ../filter.pack &&
awk -f print_2.awk ls_files_result |
sort >expected &&
- git -C r4 pack-objects --rev --stdout >all.pack <<-EOF &&
+ git -C r4 pack-objects --revs --stdout >all.pack <<-EOF &&
HEAD
EOF
git -C r4 index-pack ../all.pack &&
sort >expected &&
oid=$(git -C r4 ls-files -s pattern | awk -f print_2.awk) &&
- git -C r4 pack-objects --rev --stdout --filter=sparse:oid=$oid >filter.pack <<-EOF &&
+ git -C r4 pack-objects --revs --stdout --filter=sparse:oid=$oid >filter.pack <<-EOF &&
HEAD
EOF
git -C r4 index-pack ../filter.pack &&
awk -f print_2.awk ls_files_result |
sort >expected &&
- git -C r4 pack-objects --rev --stdout --filter=sparse:oid=master:pattern >filter.pack <<-EOF &&
+ git -C r4 pack-objects --revs --stdout --filter=sparse:oid=master:pattern >filter.pack <<-EOF &&
HEAD
EOF
git -C r4 index-pack ../filter.pack &&
'
test_expect_success 'verify pack-objects fails w/ missing objects' '
- test_must_fail git -C r1 pack-objects --rev --stdout >miss.pack <<-EOF
+ test_must_fail git -C r1 pack-objects --revs --stdout >miss.pack <<-EOF
HEAD
EOF
'
test_expect_success 'verify pack-objects fails w/ --missing=error' '
- test_must_fail git -C r1 pack-objects --rev --stdout --missing=error >miss.pack <<-EOF
+ test_must_fail git -C r1 pack-objects --revs --stdout --missing=error >miss.pack <<-EOF
HEAD
EOF
'
test_expect_success 'verify pack-objects w/ --missing=allow-any' '
- git -C r1 pack-objects --rev --stdout --missing=allow-any >miss.pack <<-EOF
+ git -C r1 pack-objects --revs --stdout --missing=allow-any >miss.pack <<-EOF
HEAD
EOF
'
GRAPH_BYTE_OCTOPUS=$(($GRAPH_OCTOPUS_DATA_OFFSET + 4))
GRAPH_BYTE_FOOTER=$(($GRAPH_OCTOPUS_DATA_OFFSET + 4 * $NUM_OCTOPUS_EDGES))
+corrupt_graph_setup() {
+ cd "$TRASH_DIRECTORY/full" &&
+ test_when_finished mv commit-graph-backup $objdir/info/commit-graph &&
+ cp $objdir/info/commit-graph commit-graph-backup
+}
+
+corrupt_graph_verify() {
+ grepstr=$1
+ test_must_fail git commit-graph verify 2>test_err &&
+ grep -v "^+" test_err >err &&
+ test_i18ngrep "$grepstr" err &&
+ if test "$2" != "no-copy"
+ then
+ cp $objdir/info/commit-graph commit-graph-pre-write-test
+ fi &&
+ git status --short &&
+ GIT_TEST_COMMIT_GRAPH_DIE_ON_LOAD=true git commit-graph write &&
+ git commit-graph verify
+}
+
# usage: corrupt_graph_and_verify <position> <data> <string> [<zero_pos>]
# Manipulates the commit-graph file at the position
# by inserting the data, optionally zeroing the file
pos=$1
data="${2:-\0}"
grepstr=$3
- cd "$TRASH_DIRECTORY/full" &&
+ corrupt_graph_setup &&
orig_size=$(wc -c < $objdir/info/commit-graph) &&
zero_pos=${4:-${orig_size}} &&
- test_when_finished mv commit-graph-backup $objdir/info/commit-graph &&
- cp $objdir/info/commit-graph commit-graph-backup &&
printf "$data" | dd of="$objdir/info/commit-graph" bs=1 seek="$pos" conv=notrunc &&
dd of="$objdir/info/commit-graph" bs=1 seek="$zero_pos" if=/dev/null &&
generate_zero_bytes $(($orig_size - $zero_pos)) >>"$objdir/info/commit-graph" &&
- test_must_fail git commit-graph verify 2>test_err &&
- grep -v "^+" test_err >err &&
- test_i18ngrep "$grepstr" err
+ corrupt_graph_verify "$grepstr"
+
}
+test_expect_success POSIXPERM,SANITY 'detect permission problem' '
+ corrupt_graph_setup &&
+ chmod 000 $objdir/info/commit-graph &&
+ corrupt_graph_verify "Could not open" "no-copy"
+'
+
+test_expect_success 'detect too small' '
+ corrupt_graph_setup &&
+ echo "a small graph" >$objdir/info/commit-graph &&
+ corrupt_graph_verify "too small"
+'
+
test_expect_success 'detect bad signature' '
corrupt_graph_and_verify 0 "\0" \
"graph signature"
git fsck &&
corrupt_graph_and_verify $GRAPH_BYTE_FOOTER "\00" \
"incorrect checksum" &&
+ cp commit-graph-pre-write-test $objdir/info/commit-graph &&
test_must_fail git fsck
'
'
midx_git_two_modes () {
+ git -c core.multiPackIndex=false $1 >expect &&
+ git -c core.multiPackIndex=true $1 >actual &&
if [ "$2" = "sorted" ]
then
- git -c core.multiPackIndex=false $1 | sort >expect &&
- git -c core.multiPackIndex=true $1 | sort >actual
- else
- git -c core.multiPackIndex=false $1 >expect &&
- git -c core.multiPackIndex=true $1 >actual
+ sort <expect >expect.sorted &&
+ mv expect.sorted expect &&
+ sort <actual >actual.sorted &&
+ mv actual.sorted actual
fi &&
test_cmp expect actual
}
midx_git_two_modes "rev-list --objects --all" &&
midx_git_two_modes "log --raw" &&
midx_git_two_modes "count-objects --verbose" &&
- midx_git_two_modes "cat-file --batch-all-objects --buffer --batch-check" &&
- midx_git_two_modes "cat-file --batch-all-objects --buffer --batch-check --unsorted" sorted
+ midx_git_two_modes "cat-file --batch-all-objects --batch-check" &&
+ midx_git_two_modes "cat-file --batch-all-objects --batch-check --unordered" sorted
'
}
compare_results_with_midx "one v2 pack"
+test_expect_success 'corrupt idx not opened' '
+ idx=$(test-tool read-midx $objdir | grep "\.idx\$") &&
+ mv $objdir/pack/$idx backup-$idx &&
+ test_when_finished "mv backup-\$idx \$objdir/pack/\$idx" &&
+
+ # This is the minimum size for a sha-1 based .idx; this lets
+ # us pass perfunctory tests, but anything that actually opens and reads
+ # the idx file will complain.
+ test_copy_bytes 1064 <backup-$idx >$objdir/pack/$idx &&
+
+ git -c core.multiPackIndex=true rev-list --objects --all 2>err &&
+ test_must_be_empty err
+'
+
test_expect_success 'add more objects' '
for i in $(test_seq 6 10)
do
test_expect_success 'git rebase with implicit use of interactive backend' '
git reset --hard D &&
clear_hook_input &&
- test_must_fail git rebase --keep --onto A B &&
+ test_must_fail git rebase --keep-empty --onto A B &&
echo C > foo &&
git add foo &&
git rebase --continue &&
test_expect_success 'git rebase --skip with implicit use of interactive backend' '
git reset --hard D &&
clear_hook_input &&
- test_must_fail git rebase --keep --onto A B &&
+ test_must_fail git rebase --keep-empty --onto A B &&
test_must_fail git rebase --skip &&
echo D > foo &&
git add foo &&
fetch_filter_blob_limit_zero "$HTTPD_DOCUMENT_ROOT_PATH/server" "$HTTPD_URL/smart/server"
'
-stop_httpd
-
-
test_done
check_negotiation_tip
'
-stop_httpd
-
test_done
cd shallow &&
# Some protocol versions (e.g. 2) support fetching
# unadvertised objects, so restrict this test to v0.
- test_must_fail ok=sigpipe env GIT_TEST_PROTOCOL_VERSION= \
+ test_must_fail env GIT_TEST_PROTOCOL_VERSION= \
git fetch ../testrepo/.git $SHA1_3 &&
- test_must_fail ok=sigpipe env GIT_TEST_PROTOCOL_VERSION= \
+ test_must_fail env GIT_TEST_PROTOCOL_VERSION= \
git fetch ../testrepo/.git $SHA1_1 &&
git --git-dir=../testrepo/.git config uploadpack.allowreachablesha1inwant true &&
git fetch ../testrepo/.git $SHA1_1 &&
test_must_fail git cat-file commit $SHA1_2 &&
git fetch ../testrepo/.git $SHA1_2 &&
git cat-file commit $SHA1_2 &&
- test_must_fail ok=sigpipe env GIT_TEST_PROTOCOL_VERSION= \
- git fetch ../testrepo/.git $SHA1_3
+ test_must_fail env GIT_TEST_PROTOCOL_VERSION= \
+ git fetch ../testrepo/.git $SHA1_3 2>err &&
+ test_i18ngrep "remote error:.*not our ref.*$SHA1_3\$" err
)
'
done
test_cmp expect actual
'
+test_expect_success 'peeled advertisements are not considered ref tips' '
+ mk_empty testrepo &&
+ git -C testrepo commit --allow-empty -m one &&
+ git -C testrepo commit --allow-empty -m two &&
+ git -C testrepo tag -m foo mytag HEAD^ &&
+ oid=$(git -C testrepo rev-parse mytag^{commit}) &&
+ test_must_fail env GIT_TEST_PROTOCOL_VERSION= \
+ git fetch testrepo $oid 2>err &&
+ test_i18ngrep "Server does not allow request for unadvertised object" err
+'
+
test_expect_success 'pushing a specific ref applies remote.$name.push as refmap' '
mk_test testrepo heads/master &&
rm -fr src dst &&
test_cmp expect actual
'
-test_expect_success 'push --follow-tag only pushes relevant tags' '
+test_expect_success 'push --follow-tags only pushes relevant tags' '
mk_test testrepo heads/master &&
rm -fr src dst &&
git init src &&
git tag -m "future" future &&
git checkout master &&
git for-each-ref refs/heads/master refs/tags/tag >../expect &&
- git push --follow-tag ../dst master
+ git push --follow-tags ../dst master
) &&
(
cd dst &&
grep "bad tree object" output.err
'
-test_expect_success 'upload-pack error message when bad ref requested' '
+test_expect_success 'upload-pack fails due to bad want (no object)' '
printf "0045want %s multi_ack_detailed\n00000009done\n0000" \
"deadbeefdeadbeefdeadbeefdeadbeefdeadbeef" >input &&
test_must_fail git upload-pack . <input >output 2>output.err &&
- grep -q "not our ref" output.err &&
- ! grep -q multi_ack_detailed output.err
+ grep "not our ref" output.err &&
+ grep "ERR" output &&
+ ! grep multi_ack_detailed output.err
+'
+
+test_expect_success 'upload-pack fails due to bad want (not tip)' '
+
+ oid=$(echo an object we have | git hash-object -w --stdin) &&
+ printf "0045want %s multi_ack_detailed\n00000009done\n0000" \
+ "$oid" >input &&
+ test_must_fail git upload-pack . <input >output 2>output.err &&
+ grep "not our ref" output.err &&
+ grep "ERR" output &&
+ ! grep multi_ack_detailed output.err
'
test_expect_success 'upload-pack fails due to error in pack-objects enumeration' '
) &&
git add b &&
git commit -m "added submodule" &&
- git push --recurse-submodule=check origin master
+ git push --recurse-submodules=check origin master
)
'
git -C client fsck
'
-stop_httpd
-
test_done
)
'
-stop_httpd
test_done
test_cmp expect actual
'
-stop_httpd
-
test_done
test_i18ngrep ! "^hint: " decoded
'
-stop_httpd
test_done
)
'
-stop_httpd
test_done
test_cmp expect "$HTTPD_DOCUMENT_ROOT_PATH"/upstream.git/hooks/pre-receive.push_options
'
-stop_httpd
-
test_done
git -c http.followredirects=true clone "$HTTPD_URL/dumb/alt-child.git"
'
-stop_httpd
test_done
grep "server-side error" actual
'
-stop_httpd
test_done
check_access_log exp
'
-stop_httpd
test_done
test_expect_success 'fetch notices corrupt idx' '
cp -R "$GIT_DAEMON_DOCUMENT_ROOT_PATH"/repo_pack.git "$GIT_DAEMON_DOCUMENT_ROOT_PATH"/repo_bad2.git &&
(cd "$GIT_DAEMON_DOCUMENT_ROOT_PATH"/repo_bad2.git &&
+ rm -f objects/pack/multi-pack-index &&
p=$(ls objects/pack/pack-*.idx) &&
chmod u+w $p &&
printf %0256d 0 | dd of=$p bs=256 count=1 seek=1 conv=notrunc
test_cmp expect actual
'
-stop_git_daemon
test_done
grep "< HTTP/1.1 500 Intentional Breakage" curl_log
'
-stop_httpd
-
test_done
partial_clone "$HTTPD_DOCUMENT_ROOT_PATH/server" "$HTTPD_URL/smart/server"
'
-stop_httpd
-
test_done
! test -e "$HTTPD_ROOT_PATH/one-time-sed"
'
-stop_httpd
-
test_done
grep "git< version 1" log
'
-stop_httpd
-
test_done
test_i18ngrep "expected no other sections to be sent after no .ready." err
'
-stop_httpd
-
test_done
test_i18ngrep "fatal: remote error: unknown ref refs/heads/raster" err
'
-stop_httpd
-
REPO="$(pwd)/repo"
LOCAL_PRISTINE="$(pwd)/local_pristine"
clone "$HTTPD_URL/smart-redir-perm/repo.git" redir.git
'
-stop_httpd
test_done
git checkout A^0 &&
- git merge -s recursive B^0 &&
+ git -c merge.directoryRenames=true merge -s recursive B^0 >out &&
git ls-files -s >out &&
test_line_count = 4 out &&
git checkout A^0 &&
- git merge -s recursive B^0 &&
+ git -c merge.directoryRenames=true merge -s recursive B^0 &&
git ls-files -s >out &&
test_line_count = 4 out &&
git checkout A^0 &&
- git merge -s recursive B^0 &&
+ git -c merge.directoryRenames=true merge -s recursive B^0 >out &&
git ls-files -s >out &&
test_line_count = 3 out &&
git checkout A^0 &&
- test_must_fail git merge -s recursive B^0 >out &&
+ test_must_fail git -c merge.directoryRenames=true merge -s recursive B^0 >out &&
test_i18ngrep "CONFLICT (rename/rename)" out &&
git ls-files -s >out &&
git checkout A^0 &&
- git merge -s recursive B^0 &&
+ git -c merge.directoryRenames=true merge -s recursive B^0 &&
git ls-files -s >out &&
test_line_count = 3 out &&
git checkout A^0 &&
- git merge -s recursive B^0 &&
+ git -c merge.directoryRenames=true merge -s recursive B^0 &&
git ls-files -s >out &&
test_line_count = 6 out &&
git checkout A^0 &&
- test_must_fail git merge -s recursive B^0 >out &&
+ test_must_fail git -c merge.directoryRenames=true merge -s recursive B^0 >out &&
test_i18ngrep "CONFLICT.*directory rename split" out &&
git ls-files -s >out &&
git checkout A^0 &&
- git merge -s recursive B^0 >out &&
+ git -c merge.directoryRenames=true merge -s recursive B^0 >out &&
git ls-files -s >out &&
test_line_count = 3 out &&
git checkout A^0 &&
- git merge -s recursive B^0 &&
+ git -c merge.directoryRenames=true merge -s recursive B^0 &&
git ls-files -s >out &&
test_line_count = 3 out &&
git checkout A^0 &&
- test_must_fail git merge -s recursive B^0 >out &&
+ test_must_fail git -c merge.directoryRenames=true merge -s recursive B^0 >out &&
test_i18ngrep CONFLICT.*rename/rename.*z/d.*x/d.*w/d out &&
test_i18ngrep ! CONFLICT.*rename/rename.*y/d out &&
git checkout A^0 &&
- git merge -s recursive B^0 &&
+ git -c merge.directoryRenames=true merge -s recursive B^0 &&
git ls-files -s >out &&
test_line_count = 5 out &&
git checkout A^0 &&
- test_must_fail git merge -s recursive B^0 >out &&
+ test_must_fail git -c merge.directoryRenames=true merge -s recursive B^0 >out &&
test_i18ngrep "CONFLICT.*implicit dir rename" out &&
git ls-files -s >out &&
git checkout A^0 &&
- test_must_fail git merge -s recursive B^0 >out &&
+ test_must_fail git -c merge.directoryRenames=true merge -s recursive B^0 >out &&
test_i18ngrep "CONFLICT (add/add).* y/d" out &&
git ls-files -s >out &&
git checkout A^0 &&
- test_must_fail git merge -s recursive B^0 >out &&
+ test_must_fail git -c merge.directoryRenames=true merge -s recursive B^0 >out &&
test_i18ngrep "CONFLICT (rename/rename).*x/d.*w/d.*z/d" out &&
test_i18ngrep "CONFLICT (add/add).* y/d" out &&
git checkout A^0 &&
- test_must_fail git merge -s recursive B^0 >out &&
+ test_must_fail git -c merge.directoryRenames=true merge -s recursive B^0 >out &&
test_i18ngrep "CONFLICT (file/directory).*y/d" out &&
git ls-files -s >out &&
git checkout A^0 &&
- test_must_fail git merge -s recursive B^0 >out &&
+ test_must_fail git -c merge.directoryRenames=true merge -s recursive B^0 >out &&
test_i18ngrep "CONFLICT (rename/delete).*z/c.*y/c" out &&
git ls-files -s >out &&
git checkout A^0 &&
- git merge -s recursive B^0 &&
+ git -c merge.directoryRenames=true merge -s recursive B^0 &&
git ls-files -s >out &&
test_line_count = 3 out &&
git checkout A^0 &&
- git merge -s recursive B^0 &&
+ git -c merge.directoryRenames=true merge -s recursive B^0 &&
git ls-files -s >out &&
test_line_count = 3 out &&
git checkout A^0 &&
- git merge -s recursive B^0 &&
+ git -c merge.directoryRenames=true merge -s recursive B^0 &&
git ls-files -s >out &&
test_line_count = 3 out &&
git checkout A^0 &&
- git merge -s recursive B^0 &&
+ git -c merge.directoryRenames=true merge -s recursive B^0 &&
git ls-files -s >out &&
test_line_count = 4 out &&
git checkout A^0 &&
- test_must_fail git merge -s recursive B^0 >out &&
+ test_must_fail git -c merge.directoryRenames=true merge -s recursive B^0 >out &&
test_i18ngrep "CONFLICT (rename/rename).*z/b.*y/b.*w/b" out &&
test_i18ngrep "CONFLICT (rename/rename).*z/c.*y/c.*x/c" out &&
git checkout A^0 &&
- test_must_fail git merge -s recursive B^0 >out &&
+ test_must_fail git -c merge.directoryRenames=true merge -s recursive B^0 >out &&
test_i18ngrep "CONFLICT (rename/rename)" out &&
git ls-files -s >out &&
git checkout A^0 &&
- test_must_fail git merge -s recursive B^0 >out &&
+ test_must_fail git -c merge.directoryRenames=true merge -s recursive B^0 >out &&
test_i18ngrep "CONFLICT (rename/rename).*x/d.*w/d.*y/d" out &&
git ls-files -s >out &&
git checkout A^0 &&
- test_must_fail git merge -s recursive B^0 >out &&
+ test_must_fail git -c merge.directoryRenames=true merge -s recursive B^0 >out &&
test_i18ngrep "CONFLICT (rename/delete).*x/d.*y/d" out &&
git ls-files -s >out &&
git checkout A^0 &&
- test_must_fail git merge -s recursive B^0 >out &&
+ test_must_fail git -c merge.directoryRenames=true merge -s recursive B^0 >out &&
test_i18ngrep "CONFLICT (rename/delete).*x/d.*y/d" out &&
git ls-files -s >out &&
git checkout A^0 &&
- git merge -s recursive B^0 &&
+ git -c merge.directoryRenames=true merge -s recursive B^0 &&
git ls-files -s >out &&
test_line_count = 6 out &&
git checkout A^0 &&
- git merge -s recursive B^0 &&
+ git -c merge.directoryRenames=true merge -s recursive B^0 &&
git ls-files -s >out &&
test_line_count = 6 out &&
git checkout A^0 &&
- test_must_fail git merge -s recursive B^0 >out &&
+ test_must_fail git -c merge.directoryRenames=true merge -s recursive B^0 >out &&
test_i18ngrep "CONFLICT (modify/delete).* z/d" out &&
git ls-files -s >out &&
git checkout A^0 &&
- git merge -s recursive B^0 &&
+ git -c merge.directoryRenames=true merge -s recursive B^0 &&
git ls-files -s >out &&
test_line_count = 3 out &&
git checkout A^0 &&
- test_must_fail git merge -s recursive B^0 >out 2>err &&
+ test_must_fail git -c merge.directoryRenames=true merge -s recursive B^0 >out 2>err &&
test_i18ngrep CONFLICT.*rename/rename.*z/c.*y/c.*w/c out &&
test_i18ngrep CONFLICT.*rename/rename.*z/b.*y/b.*w/b out &&
git checkout A^0 &&
- git merge -s recursive B^0 &&
+ git -c merge.directoryRenames=true merge -s recursive B^0 &&
git ls-files -s >out &&
test_line_count = 7 out &&
git checkout A^0 &&
- git merge -s recursive B^0 &&
+ git -c merge.directoryRenames=true merge -s recursive B^0 &&
git ls-files -s >out &&
test_line_count = 3 out &&
git checkout A^0 &&
- git merge -s recursive B^0 >out &&
+ git -c merge.directoryRenames=true merge -s recursive B^0 >out &&
test_i18ngrep "WARNING: Avoiding applying x -> z rename to x/f" out &&
git ls-files -s >out &&
git checkout A^0 &&
- git merge -s recursive B^0 >out &&
+ git -c merge.directoryRenames=true merge -s recursive B^0 >out &&
test_i18ngrep "WARNING: Avoiding applying z -> y rename to z/t" out &&
test_i18ngrep "WARNING: Avoiding applying y -> x rename to y/a" out &&
test_i18ngrep "WARNING: Avoiding applying x -> w rename to x/b" out &&
git checkout A^0 &&
- test_must_fail git merge -s recursive B^0 >out &&
+ test_must_fail git -c merge.directoryRenames=true merge -s recursive B^0 >out &&
grep "CONFLICT (implicit dir rename): Cannot map more than one path to combined/yo" out >error_line &&
grep -q dir1/yo error_line &&
grep -q dir2/yo error_line &&
git checkout A^0 &&
- git merge -s recursive B^0 &&
+ git -c merge.directoryRenames=true merge -s recursive B^0 &&
git ls-files -s >out &&
test_line_count = 4 out &&
git checkout A^0 &&
- git merge -s recursive B^0 &&
+ git -c merge.directoryRenames=true merge -s recursive B^0 &&
git ls-files -s >out &&
test_line_count = 4 out &&
git checkout A^0 &&
- git merge -s recursive B^0 &&
+ git -c merge.directoryRenames=true merge -s recursive B^0 &&
git ls-files -s >out &&
test_line_count = 3 out &&
echo very >z/c &&
echo important >z/d &&
- test_must_fail git merge -s recursive B^0 >out 2>err &&
+ test_must_fail git -c merge.directoryRenames=true merge -s recursive B^0 >out 2>err &&
test_i18ngrep "The following untracked working tree files would be overwritten by merge" err &&
git ls-files -s >out &&
echo important >y/d &&
echo contents >y/e &&
- test_must_fail git merge -s recursive B^0 >out 2>err &&
+ test_must_fail git -c merge.directoryRenames=true merge -s recursive B^0 >out 2>err &&
test_i18ngrep "CONFLICT (rename/delete).*Version B\^0 of y/d left in tree at y/d~B\^0" out &&
test_i18ngrep "Error: Refusing to lose untracked file at y/e; writing to y/e~B\^0 instead" out &&
git checkout A^0 &&
echo important >y/c &&
- test_must_fail git merge -s recursive B^0 >out 2>err &&
+ test_must_fail git -c merge.directoryRenames=true merge -s recursive B^0 >out 2>err &&
test_i18ngrep "CONFLICT (rename/rename)" out &&
test_i18ngrep "Refusing to lose untracked file at y/c; adding as y/c~B\^0 instead" out &&
mkdir y &&
echo important >y/c &&
- test_must_fail git merge -s recursive A^0 >out 2>err &&
+ test_must_fail git -c merge.directoryRenames=true merge -s recursive A^0 >out 2>err &&
test_i18ngrep "CONFLICT (rename/rename)" out &&
test_i18ngrep "Refusing to lose untracked file at y/c; adding as y/c~HEAD instead" out &&
git checkout A^0 &&
echo important >y/wham &&
- test_must_fail git merge -s recursive B^0 >out 2>err &&
+ test_must_fail git -c merge.directoryRenames=true merge -s recursive B^0 >out 2>err &&
test_i18ngrep "CONFLICT (rename/rename)" out &&
test_i18ngrep "Refusing to lose untracked file at y/wham" out &&
mkdir z &&
echo random >z/c &&
- git merge -s recursive B^0 >out 2>err &&
+ git -c merge.directoryRenames=true merge -s recursive B^0 >out 2>err &&
test_i18ngrep ! "following untracked working tree files would be overwritten by merge" err &&
git ls-files -s >out &&
git checkout A^0 &&
echo stuff >>z/c &&
- test_must_fail git merge -s recursive B^0 >out 2>err &&
+ test_must_fail git -c merge.directoryRenames=true merge -s recursive B^0 >out 2>err &&
test_i18ngrep "Refusing to lose dirty file at z/c" out &&
test_seq 1 10 >expected &&
git checkout A^0 &&
echo stuff >>z/c &&
- git merge -s recursive B^0 >out 2>err &&
+ git -c merge.directoryRenames=true merge -s recursive B^0 >out 2>err &&
test_i18ngrep "Refusing to lose dirty file at z/c" out &&
grep -q stuff z/c &&
git checkout A^0 &&
echo stuff >>y/c &&
- test_must_fail git merge -s recursive B^0 >out 2>err &&
+ test_must_fail git -c merge.directoryRenames=true merge -s recursive B^0 >out 2>err &&
test_i18ngrep "following files would be overwritten by merge" err &&
grep -q stuff y/c &&
git checkout A^0 &&
echo stuff >>z/c &&
- test_must_fail git merge -s recursive B^0 >out 2>err &&
+ test_must_fail git -c merge.directoryRenames=true merge -s recursive B^0 >out 2>err &&
test_i18ngrep "Refusing to lose dirty file at z/c" out &&
grep -q stuff z/c &&
git checkout A^0 &&
echo mods >>y/c &&
- test_must_fail git merge -s recursive B^0 >out 2>err &&
+ test_must_fail git -c merge.directoryRenames=true merge -s recursive B^0 >out 2>err &&
test_i18ngrep "CONFLICT (rename/rename)" out &&
test_i18ngrep "Refusing to lose dirty file at y/c" out &&
git checkout A^0 &&
echo important >>y/wham &&
- test_must_fail git merge -s recursive B^0 >out 2>err &&
+ test_must_fail git -c merge.directoryRenames=true merge -s recursive B^0 >out 2>err &&
test_i18ngrep "CONFLICT (rename/rename)" out &&
test_i18ngrep "Refusing to lose dirty file at y/wham" out &&
git checkout A^0 &&
- git merge -s recursive B^0 &&
+ git -c merge.directoryRenames=true merge -s recursive B^0 &&
git ls-files -s >out &&
test_line_count = 6 out &&
# To which, I can do no more than shrug my shoulders and say that
# even simple rules give weird results when given weird inputs.
-test_expect_success '12b-setup: Moving one directory hierarchy into another' '
+test_expect_success '12b-setup: Moving two directory hierarchies into each other' '
test_create_repo 12b &&
(
cd 12b &&
)
'
-test_expect_success '12b-check: Moving one directory hierarchy into another' '
+test_expect_success '12b-check: Moving two directory hierarchies into each other' '
(
cd 12b &&
git checkout A^0 &&
- git merge -s recursive B^0 &&
+ git -c merge.directoryRenames=true merge -s recursive B^0 &&
git ls-files -s >out &&
test_line_count = 4 out &&
git checkout A^0 &&
- test_must_fail git merge -s recursive B^0 &&
+ test_must_fail git -c merge.directoryRenames=true merge -s recursive B^0 &&
git ls-files -u >out &&
test_line_count = 12 out &&
)
'
+###########################################################################
+# SECTION 13: Checking informational and conflict messages
+#
+# A year after directory rename detection became the default, it was
+# instead decided to report a conflict for the pathname, on the basis that
+# some users may expect files newly added or moved into a directory to
+# be unrelated to all the other files in that directory, and thus find
+# directory rename detection unexpected. Test that the messages printed
+# match our expectation.
+###########################################################################
+
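As background for this section, a minimal sketch of the three merge.directoryRenames settings contrasted by the tests below (branch names follow the tests; the exact messages are what the checks themselves verify):

    # the default exercised by the "(conflict)" checks: detect the directory
    # rename, but record a conflict for the affected path
    git merge -s recursive B^0                                  # i.e. merge.directoryRenames=conflict
    # move added/renamed paths into the renamed directory, printing "Path updated:" notices
    git -c merge.directoryRenames=true merge -s recursive B^0
    # disable directory rename detection entirely
    git -c merge.directoryRenames=false merge -s recursive B^0
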
+# Testcase 13a, Basic directory rename with newly added files
+# Commit O: z/{b,c}
+# Commit A: y/{b,c}
+# Commit B: z/{b,c,d,e/f}
+# Expected: y/{b,c,d,e/f}, with notices/conflicts for both y/d and y/e/f
+
+test_expect_success '13a-setup: messages for newly added files' '
+ test_create_repo 13a &&
+ (
+ cd 13a &&
+
+ mkdir z &&
+ echo b >z/b &&
+ echo c >z/c &&
+ git add z &&
+ test_tick &&
+ git commit -m "O" &&
+
+ git branch O &&
+ git branch A &&
+ git branch B &&
+
+ git checkout A &&
+ git mv z y &&
+ test_tick &&
+ git commit -m "A" &&
+
+ git checkout B &&
+ echo d >z/d &&
+ mkdir z/e &&
+ echo f >z/e/f &&
+ git add z/d z/e/f &&
+ test_tick &&
+ git commit -m "B"
+ )
+'
+
+test_expect_success '13a-check(conflict): messages for newly added files' '
+ (
+ cd 13a &&
+
+ git checkout A^0 &&
+
+ test_must_fail git merge -s recursive B^0 >out 2>err &&
+
+ test_i18ngrep CONFLICT..file.location.*z/e/f.added.in.B^0.*y/e/f out &&
+ test_i18ngrep CONFLICT..file.location.*z/d.added.in.B^0.*y/d out &&
+
+ git ls-files >paths &&
+ ! grep z/ paths &&
+ grep "y/[de]" paths &&
+
+ test_path_is_missing z/d &&
+ test_path_is_file y/d &&
+ test_path_is_missing z/e/f &&
+ test_path_is_file y/e/f
+ )
+'
+
+test_expect_success '13a-check(info): messages for newly added files' '
+ (
+ cd 13a &&
+
+ git reset --hard &&
+ git checkout A^0 &&
+
+ git -c merge.directoryRenames=true merge -s recursive B^0 >out 2>err &&
+
+ test_i18ngrep Path.updated:.*z/e/f.added.in.B^0.*y/e/f out &&
+ test_i18ngrep Path.updated:.*z/d.added.in.B^0.*y/d out &&
+
+ git ls-files >paths &&
+ ! grep z/ paths &&
+ grep "y/[de]" paths &&
+
+ test_path_is_missing z/d &&
+ test_path_is_file y/d &&
+ test_path_is_missing z/e/f &&
+ test_path_is_file y/e/f
+ )
+'
+
+# Testcase 13b, Transitive rename with conflicted content merge and default
+# "conflict" setting
+# (Related to testcase 1c, 9b)
+# Commit O: z/{b,c}, x/d_1
+# Commit A: y/{b,c}, x/d_2
+# Commit B: z/{b,c,d_3}
+# Expected: y/{b,c,d_merged}, with two conflict messages for y/d,
+# one about content, and one about file location
+
+test_expect_success '13b-setup: messages for transitive rename with conflicted content' '
+ test_create_repo 13b &&
+ (
+ cd 13b &&
+
+ mkdir x &&
+ mkdir z &&
+ test_seq 1 10 >x/d &&
+ echo b >z/b &&
+ echo c >z/c &&
+ git add x z &&
+ test_tick &&
+ git commit -m "O" &&
+
+ git branch O &&
+ git branch A &&
+ git branch B &&
+
+ git checkout A &&
+ git mv z y &&
+ echo 11 >>x/d &&
+ git add x/d &&
+ test_tick &&
+ git commit -m "A" &&
+
+ git checkout B &&
+ echo eleven >>x/d &&
+ git mv x/d z/d &&
+ git add z/d &&
+ test_tick &&
+ git commit -m "B"
+ )
+'
+
+test_expect_success '13b-check(conflict): messages for transitive rename with conflicted content' '
+ (
+ cd 13b &&
+
+ git checkout A^0 &&
+
+ test_must_fail git merge -s recursive B^0 >out 2>err &&
+
+ test_i18ngrep CONFLICT.*content.*Merge.conflict.in.y/d out &&
+ test_i18ngrep CONFLICT..file.location.*x/d.renamed.to.z/d.*moved.to.y/d out &&
+
+ git ls-files >paths &&
+ ! grep z/ paths &&
+ grep "y/d" paths &&
+
+ test_path_is_missing z/d &&
+ test_path_is_file y/d
+ )
+'
+
+test_expect_success '13b-check(info): messages for transitive rename with conflicted content' '
+ (
+ cd 13b &&
+
+ git reset --hard &&
+ git checkout A^0 &&
+
+ test_must_fail git -c merge.directoryRenames=true merge -s recursive B^0 >out 2>err &&
+
+ test_i18ngrep CONFLICT.*content.*Merge.conflict.in.y/d out &&
+ test_i18ngrep Path.updated:.*x/d.renamed.to.z/d.in.B^0.*moving.it.to.y/d out &&
+
+ git ls-files >paths &&
+ ! grep z/ paths &&
+ grep "y/d" paths &&
+
+ test_path_is_missing z/d &&
+ test_path_is_file y/d
+ )
+'
+
+# Testcase 13c, Rename/rename(1to1) due to directory rename
+# Commit O: z/{b,c}, x/{d,e}
+# Commit A: y/{b,c,d}, x/e
+# Commit B: z/{b,c,d}, x/e
+# Expected: y/{b,c,d}, x/e, with info or conflict messages for d:
+# A renamed z/ -> y/ AND renamed x/d -> y/d; B renamed x/d -> z/d.
+# One could argue B had partial knowledge of what was done with
+# d and A had full knowledge, but that's a slippery slope as
+# shown in testcase 13d.
+
+test_expect_success '13c-setup: messages for rename/rename(1to1) via transitive rename' '
+ test_create_repo 13c &&
+ (
+ cd 13c &&
+
+ mkdir x &&
+ mkdir z &&
+ test_seq 1 10 >x/d &&
+ echo e >x/e &&
+ echo b >z/b &&
+ echo c >z/c &&
+ git add x z &&
+ test_tick &&
+ git commit -m "O" &&
+
+ git branch O &&
+ git branch A &&
+ git branch B &&
+
+ git checkout A &&
+ git mv z y &&
+ git mv x/d y/ &&
+ test_tick &&
+ git commit -m "A" &&
+
+ git checkout B &&
+ git mv x/d z/d &&
+ git add z/d &&
+ test_tick &&
+ git commit -m "B"
+ )
+'
+
+test_expect_success '13c-check(conflict): messages for rename/rename(1to1) via transitive rename' '
+ (
+ cd 13c &&
+
+ git checkout A^0 &&
+
+ test_must_fail git merge -s recursive B^0 >out 2>err &&
+
+ test_i18ngrep CONFLICT..file.location.*x/d.renamed.to.z/d.*moved.to.y/d out &&
+
+ git ls-files >paths &&
+ ! grep z/ paths &&
+ grep "y/d" paths &&
+
+ test_path_is_missing z/d &&
+ test_path_is_file y/d
+ )
+'
+
+test_expect_success '13c-check(info): messages for rename/rename(1to1) via transitive rename' '
+ (
+ cd 13c &&
+
+ git reset --hard &&
+ git checkout A^0 &&
+
+ git -c merge.directoryRenames=true merge -s recursive B^0 >out 2>err &&
+
+ test_i18ngrep Path.updated:.*x/d.renamed.to.z/d.in.B^0.*moving.it.to.y/d out &&
+
+ git ls-files >paths &&
+ ! grep z/ paths &&
+ grep "y/d" paths &&
+
+ test_path_is_missing z/d &&
+ test_path_is_file y/d
+ )
+'
+
+# Testcase 13d, Rename/rename(1to1) due to directory rename on both sides
+# Commit O: a/{z,y}, b/x, c/w
+# Commit A: a/z, b/{y,x}, d/w
+# Commit B: a/z, d/x, c/{y,w}
+# Expected: a/z, d/{y,x,w} with no file location conflict for x
+# Easy cases:
+# * z is always in a; so it stays in a.
+# * x starts in b, only modified on one side to move into d/
+# * w starts in c, only modified on one side to move into d/
+# Hard case:
+# * A renames a/y to b/y, and B renames b/->d/ => a/y -> d/y
+# * B renames a/y to c/y, and A renames c/->d/ => a/y -> d/y
+# No conflict in where a/y ends up, so put it in d/y.
+
+test_expect_success '13d-setup: messages for rename/rename(1to1) via dual transitive rename' '
+ test_create_repo 13d &&
+ (
+ cd 13d &&
+
+ mkdir a &&
+ mkdir b &&
+ mkdir c &&
+ echo z >a/z &&
+ echo y >a/y &&
+ echo x >b/x &&
+ echo w >c/w &&
+ git add a b c &&
+ test_tick &&
+ git commit -m "O" &&
+
+ git branch O &&
+ git branch A &&
+ git branch B &&
+
+ git checkout A &&
+ git mv a/y b/ &&
+ git mv c/ d/ &&
+ test_tick &&
+ git commit -m "A" &&
+
+ git checkout B &&
+ git mv a/y c/ &&
+ git mv b/ d/ &&
+ test_tick &&
+ git commit -m "B"
+ )
+'
+
+test_expect_success '13d-check(conflict): messages for rename/rename(1to1) via dual transitive rename' '
+ (
+ cd 13d &&
+
+ git checkout A^0 &&
+
+ test_must_fail git merge -s recursive B^0 >out 2>err &&
+
+ test_i18ngrep CONFLICT..file.location.*a/y.renamed.to.b/y.*moved.to.d/y out &&
+ test_i18ngrep CONFLICT..file.location.*a/y.renamed.to.c/y.*moved.to.d/y out &&
+
+ git ls-files >paths &&
+ ! grep b/ paths &&
+ ! grep c/ paths &&
+ grep "d/y" paths &&
+
+ test_path_is_missing b/y &&
+ test_path_is_missing c/y &&
+ test_path_is_file d/y
+ )
+'
+
+test_expect_success '13d-check(info): messages for rename/rename(1to1) via dual transitive rename' '
+ (
+ cd 13d &&
+
+ git reset --hard &&
+ git checkout A^0 &&
+
+ git -c merge.directoryRenames=true merge -s recursive B^0 >out 2>err &&
+
+ test_i18ngrep Path.updated.*a/y.renamed.to.b/y.*moving.it.to.d/y out &&
+ test_i18ngrep Path.updated.*a/y.renamed.to.c/y.*moving.it.to.d/y out &&
+
+ git ls-files >paths &&
+ ! grep b/ paths &&
+ ! grep c/ paths &&
+ grep "d/y" paths &&
+
+ test_path_is_missing b/y &&
+ test_path_is_missing c/y &&
+ test_path_is_file d/y
+ )
+'
+
test_done
git checkout A^0 &&
- GIT_MERGE_VERBOSITY=3 git merge -s recursive B^0 >out 2>err &&
+ GIT_MERGE_VERBOSITY=3 git -c merge.directoryRenames=true merge -s recursive B^0 >out 2>err &&
test_i18ngrep ! "Skipped bar/bq" out &&
test_must_be_empty err &&
git checkout B^0 &&
- GIT_MERGE_VERBOSITY=3 git merge -s recursive A^0 >out 2>err &&
+ GIT_MERGE_VERBOSITY=3 git -c merge.directoryRenames=true merge -s recursive A^0 >out 2>err &&
test_i18ngrep ! "Skipped bar/bq" out &&
test_must_be_empty err &&
git checkout A^0 &&
- GIT_MERGE_VERBOSITY=3 git merge -s recursive B^0 >out 2>err &&
+ GIT_MERGE_VERBOSITY=3 git -c merge.directoryRenames=true merge -s recursive B^0 >out 2>err &&
test_i18ngrep ! "Skipped bar/bq" out &&
test_must_be_empty err &&
git checkout B^0 &&
- GIT_MERGE_VERBOSITY=3 git merge -s recursive A^0 >out 2>err &&
+ GIT_MERGE_VERBOSITY=3 git -c merge.directoryRenames=true merge -s recursive A^0 >out 2>err &&
test_i18ngrep ! "Skipped bar/bq" out &&
test_must_be_empty err &&
test_must_be_empty stderr
'
+test_expect_success 'gc.reflogExpire{Unreachable,}=never skips "expire" via "gc"' '
+ test_config gc.reflogExpire never &&
+ test_config gc.reflogExpireUnreachable never &&
+
+ GIT_TRACE=$(pwd)/trace.out git gc &&
+
+ # Check that git-pack-refs is run as a sanity check (done via
+ # gc_before_repack()) but that "git reflog expire" is not.
+ grep -E "^trace: (built-in|exec|run_command): git pack-refs --" trace.out &&
+ ! grep -E "^trace: (built-in|exec|run_command): git reflog expire --" trace.out
+'
+
+test_expect_success 'one of gc.reflogExpire{Unreachable,}=never does not skip "expire" via "gc"' '
+ >trace.out &&
+ test_config gc.reflogExpire never &&
+ GIT_TRACE=$(pwd)/trace.out git gc &&
+ grep -E "^trace: (built-in|exec|run_command): git reflog expire --" trace.out
+'
+
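The two tests above amount to the following configuration sketch (the trace path is illustrative): "git gc" skips reflog expiry only when both knobs are set to "never".

    git config gc.reflogExpire never &&
    git config gc.reflogExpireUnreachable never &&
    GIT_TRACE="$PWD/trace.out" git gc &&
    ! grep "reflog expire" trace.out    # "git reflog expire" was not spawned
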
run_and_wait_for_auto_gc () {
# We read stdout from gc for the side effect of waiting until the
# background gc process exits, closing its fd 9. Furthermore, the
test_cmp expect actual
'
+test_expect_success 'recursive tagging should give advice' '
+ sed -e "s/|$//" <<-EOF >expect &&
+ hint: You have created a nested tag. The object referred to by your new tag is
+ hint: already a tag. If you meant to tag the object that it points to, use:
+ hint: |
+ hint: git tag -f nested annotated-v4.0^{}
+ EOF
+ git tag -m nested nested annotated-v4.0 2>actual &&
+ test_i18ncmp expect actual
+'
+
test_expect_success 'multiple --points-at are OR-ed together' '
cat >expect <<-\EOF &&
v2.0
--- /dev/null
+#!/bin/sh
+
+test_description='post index change hook'
+
+. ./test-lib.sh
+
+test_expect_success 'setup' '
+ mkdir -p dir1 &&
+ touch dir1/file1.txt &&
+ echo testing >dir1/file2.txt &&
+ git add . &&
+ git commit -m "initial"
+'
+
+test_expect_success 'test status, add, commit, others trigger hook without flags set' '
+ mkdir -p .git/hooks &&
+ write_script .git/hooks/post-index-change <<-\EOF &&
+ if test "$1" -eq 1; then
+ echo "Invalid combination of flags passed to hook; updated_workdir is set." >testfailure
+ exit 1
+ fi
+ if test "$2" -eq 1; then
+ echo "Invalid combination of flags passed to hook; updated_skipworktree is set." >testfailure
+ exit 1
+ fi
+ if test -f ".git/index.lock"; then
+ echo ".git/index.lock exists" >testfailure
+ exit 3
+ fi
+ if ! test -f ".git/index"; then
+ echo ".git/index does not exist" >testfailure
+ exit 3
+ fi
+ echo "success" >testsuccess
+ EOF
+ mkdir -p dir2 &&
+ touch dir2/file1.txt &&
+ touch dir2/file2.txt &&
+ : force index to be dirty &&
+ test-tool chmtime +60 dir1/file1.txt &&
+ git status &&
+ test_path_is_file testsuccess && rm -f testsuccess &&
+ test_path_is_missing testfailure &&
+ git add . &&
+ test_path_is_file testsuccess && rm -f testsuccess &&
+ test_path_is_missing testfailure &&
+ git commit -m "second" &&
+ test_path_is_file testsuccess && rm -f testsuccess &&
+ test_path_is_missing testfailure &&
+ git checkout -- dir1/file1.txt &&
+ test_path_is_file testsuccess && rm -f testsuccess &&
+ test_path_is_missing testfailure &&
+ git update-index &&
+ test_path_is_missing testsuccess &&
+ test_path_is_missing testfailure &&
+ git reset --soft &&
+ test_path_is_missing testsuccess &&
+ test_path_is_missing testfailure
+'
+
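For reference, a minimal post-index-change hook sketch based on the arguments exercised above ($1 is the updated_workdir flag, $2 the updated_skipworktree flag); the log file name is illustrative. These are the contents of an executable .git/hooks/post-index-change file:

    #!/bin/sh
    # $1: 1 if the working tree was also updated (checkout, reset --hard, ...)
    # $2: 1 if entries had their skip-worktree bit changed
    echo "index written: updated_workdir=$1 updated_skipworktree=$2" >>.git/index-change.log
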
+test_expect_success 'test checkout and reset trigger the hook' '
+ write_script .git/hooks/post-index-change <<-\EOF &&
+ if test "$1" -eq 1 && test "$2" -eq 1; then
+ echo "Invalid combination of flags passed to hook; updated_workdir and updated_skipworktree are both set." >testfailure
+ exit 1
+ fi
+ if test "$1" -eq 0 && test "$2" -eq 0; then
+ echo "Invalid combination of flags passed to hook; neither updated_workdir or updated_skipworktree are set." >testfailure
+ exit 2
+ fi
+ if test "$1" -eq 1; then
+ if test -f ".git/index.lock"; then
+ echo "updated_workdir set but .git/index.lock exists" >testfailure
+ exit 3
+ fi
+ if ! test -f ".git/index"; then
+ echo "updated_workdir set but .git/index does not exist" >testfailure
+ exit 3
+ fi
+ else
+ echo "update_workdir should be set for checkout" >testfailure
+ exit 4
+ fi
+ echo "success" >testsuccess
+ EOF
+ : force index to be dirty &&
+ test-tool chmtime +60 dir1/file1.txt &&
+ git checkout master &&
+ test_path_is_file testsuccess && rm -f testsuccess &&
+ test_path_is_missing testfailure &&
+ test-tool chmtime +60 dir1/file1.txt &&
+ git checkout HEAD &&
+ test_path_is_file testsuccess && rm -f testsuccess &&
+ test_path_is_missing testfailure &&
+ test-tool chmtime +60 dir1/file1.txt &&
+ git reset --hard &&
+ test_path_is_file testsuccess && rm -f testsuccess &&
+ test_path_is_missing testfailure &&
+ git checkout -B test &&
+ test_path_is_file testsuccess && rm -f testsuccess &&
+ test_path_is_missing testfailure
+'
+
+test_expect_success 'test reset --mixed and update-index triggers the hook' '
+ write_script .git/hooks/post-index-change <<-\EOF &&
+ if test "$1" -eq 1 && test "$2" -eq 1; then
+ echo "Invalid combination of flags passed to hook; updated_workdir and updated_skipworktree are both set." >testfailure
+ exit 1
+ fi
+ if test "$1" -eq 0 && test "$2" -eq 0; then
+ echo "Invalid combination of flags passed to hook; neither updated_workdir or updated_skipworktree are set." >testfailure
+ exit 2
+ fi
+ if test "$2" -eq 1; then
+ if test -f ".git/index.lock"; then
+ echo "updated_skipworktree set but .git/index.lock exists" >testfailure
+ exit 3
+ fi
+ if ! test -f ".git/index"; then
+ echo "updated_skipworktree set but .git/index does not exist" >testfailure
+ exit 3
+ fi
+ else
+ echo "updated_skipworktree should be set for reset --mixed and update-index" >testfailure
+ exit 4
+ fi
+ echo "success" >testsuccess
+ EOF
+ : force index to be dirty &&
+ test-tool chmtime +60 dir1/file1.txt &&
+ git reset --mixed --quiet HEAD~1 &&
+ test_path_is_file testsuccess && rm -f testsuccess &&
+ test_path_is_missing testfailure &&
+ git hash-object -w --stdin <dir1/file2.txt >expect &&
+ git update-index --cacheinfo 100644 "$(cat expect)" dir1/file1.txt &&
+ test_path_is_file testsuccess && rm -f testsuccess &&
+ test_path_is_missing testfailure &&
+ git update-index --skip-worktree dir1/file2.txt &&
+ git update-index --remove dir1/file2.txt &&
+ test_path_is_file testsuccess && rm -f testsuccess &&
+ test_path_is_missing testfailure
+'
+
+test_done
test_must_fail git checkout simple 2>errs &&
test_i18ngrep overwritten errs &&
- git checkout --merge simple 2>errs &&
- test_i18ngrep ! overwritten errs &&
- git ls-files -u &&
- test_must_fail git cat-file -t :0:two &&
- test "$(git cat-file -t :1:two)" = blob &&
- test "$(git cat-file -t :2:two)" = blob &&
- test_must_fail git cat-file -t :3:two
+ test_must_fail git read-tree --quiet -m -u HEAD simple 2>errs &&
+ test_must_be_empty errs
'
test_expect_success 'checkout to detach HEAD (with advice declined)' '
test_must_fail git submodule init
'
+test_expect_success 'add aborts on repository with no commits' '
+ cat >expect <<-\EOF &&
+ '"'repo-no-commits'"' does not have a commit checked out
+ EOF
+ git init repo-no-commits &&
+ test_must_fail git submodule add ../a ./repo-no-commits 2>actual &&
+ test_i18ncmp expect actual
+'
+
test_expect_success 'setup - repository in init subdirectory' '
mkdir init &&
(
cp pristine-.git-config .git/config &&
cp pristine-.gitmodules .gitmodules &&
mkdir -p a/b/c &&
- (cd a/b/c && git init) &&
+ (cd a/b/c && git init && test_commit msg) &&
git config remote.origin.url ../foo/bar.git &&
git submodule add ../bar/a/b/c ./a/b/c &&
git submodule init &&
test_cmp expected actual
'
+test_expect_success 'option-like arguments passed to foreach commands are not lost' '
+ (
+ cd super &&
+ git submodule foreach "echo be --quiet" > ../expected &&
+ git submodule foreach echo be --quiet > ../actual
+ ) &&
+ grep -sq -e "--quiet" expected &&
+ test_cmp expected actual
+'
+
test_done
)
'
+test_expect_success 'unsetting submodules config from the working tree with "submodule--helper config --unset"' '
+ (cd super &&
+ git submodule--helper config --unset submodule.submodule.url &&
+ git submodule--helper config submodule.submodule.url >actual &&
+ test_must_be_empty actual
+ )
+'
+
+
test_expect_success 'writing submodules config with "submodule--helper config"' '
(cd super &&
echo "new_url" >expect &&
--- /dev/null
+#!/bin/sh
+#
+# Copyright (c) 2019 Denton Liu
+#
+
+test_description='Test submodules set-branch subcommand
+
+This test verifies that the set-branch subcommand of git-submodule is working
+as expected.
+'
+
+TEST_NO_CREATE_REPO=1
+. ./test-lib.sh
+
+test_expect_success 'submodule config cache setup' '
+ mkdir submodule &&
+ (cd submodule &&
+ git init &&
+ echo a >a &&
+ git add . &&
+ git commit -ma &&
+ git checkout -b topic &&
+ echo b >a &&
+ git add . &&
+ git commit -mb
+ ) &&
+ mkdir super &&
+ (cd super &&
+ git init &&
+ git submodule add ../submodule &&
+ git commit -m "add submodule"
+ )
+'
+
+test_expect_success 'ensure submodule branch is unset' '
+ (cd super &&
+ test_must_fail grep branch .gitmodules
+ )
+'
+
+test_expect_success 'test submodule set-branch --branch' '
+ (cd super &&
+ git submodule set-branch --branch topic submodule &&
+ grep "branch = topic" .gitmodules &&
+ git submodule update --remote &&
+ cat <<-\EOF >expect &&
+ b
+ EOF
+ git -C submodule show -s --pretty=%s >actual &&
+ test_cmp expect actual
+ )
+'
+
+test_expect_success 'test submodule set-branch --default' '
+ (cd super &&
+ git submodule set-branch --default submodule &&
+ test_must_fail grep branch .gitmodules &&
+ git submodule update --remote &&
+ cat <<-\EOF >expect &&
+ a
+ EOF
+ git -C submodule show -s --pretty=%s >actual &&
+ test_cmp expect actual
+ )
+'
+
+test_expect_success 'test submodule set-branch -b' '
+ (cd super &&
+ git submodule set-branch -b topic submodule &&
+ grep "branch = topic" .gitmodules &&
+ git submodule update --remote &&
+ cat <<-\EOF >expect &&
+ b
+ EOF
+ git -C submodule show -s --pretty=%s >actual &&
+ test_cmp expect actual
+ )
+'
+
+test_expect_success 'test submodule set-branch -d' '
+ (cd super &&
+ git submodule set-branch -d submodule &&
+ test_must_fail grep branch .gitmodules &&
+ git submodule update --remote &&
+ cat <<-\EOF >expect &&
+ a
+ EOF
+ git -C submodule show -s --pretty=%s >actual &&
+ test_cmp expect actual
+ )
+'
+
+test_done
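
A quick usage sketch of the new subcommand exercised above (submodule path and branch name taken from the test):

    git submodule set-branch --branch topic submodule   # records "branch = topic" for the submodule in .gitmodules
    git submodule update --remote                       # the submodule now tracks "topic"
    git submodule set-branch --default submodule        # drops the branch setting again
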
test_i18ngrep "deleted:" actual &&
test_i18ngrep "new file:" actual &&
- git status --find-rename=100% >actual &&
+ git status --find-renames=100% >actual &&
test_i18ngrep "deleted:" actual &&
test_i18ngrep "new file:" actual
'
git status -M=01% >actual &&
test_i18ngrep "renamed:" actual &&
- git status --find-rename=01% >actual &&
+ git status --find-renames=01% >actual &&
test_i18ngrep "renamed:" actual
'
-test_expect_success 'copies not overridden by find-rename' '
+test_expect_success 'copies not overridden by find-renames' '
cp renamed copy &&
git add copy &&
test_i18ngrep "copied:" actual &&
test_i18ngrep "renamed:" actual &&
- git -c status.renames=copies status --find-rename=01% >actual &&
+ git -c status.renames=copies status --find-renames=01% >actual &&
test_i18ngrep "copied:" actual &&
test_i18ngrep "renamed:" actual
'
test_cmp expect actual
'
+test_expect_success 'outside worktree' '
+ echo 1 >1 &&
+ echo 2 >2 &&
+ test_expect_code 1 nongit git \
+ -c diff.tool=echo -c difftool.echo.cmd="echo \$LOCAL \$REMOTE" \
+ difftool --no-prompt --no-index ../1 ../2 >actual &&
+ echo "../1 ../2" >expect &&
+ test_cmp expect actual
+'
+
test_done
test_cmp expected actual
'
- test_expect_success "grep -w $L (with --column, --invert)" '
+ test_expect_success "grep -w $L (with --column, --invert-match)" '
{
echo ${HC}file:1:foo mmap bar
echo ${HC}file:1:foo_mmap bar
echo ${HC}file:1:foo_mmap bar mmap
echo ${HC}file:1:foo mmap bar_mmap
} >expected &&
- git grep --column --invert -w -e baz $H -- file >actual &&
+ git grep --column --invert-match -w -e baz $H -- file >actual &&
test_cmp expected actual
'
- test_expect_success "grep $L (with --column, --invert, extended OR)" '
+ test_expect_success "grep $L (with --column, --invert-match, extended OR)" '
{
echo ${HC}hello_world:6:HeLLo_world
} >expected &&
- git grep --column --invert -e ll --or --not -e _ $H -- hello_world \
+ git grep --column --invert-match -e ll --or --not -e _ $H -- hello_world \
>actual &&
test_cmp expected actual
'
- test_expect_success "grep $L (with --column, --invert, extended AND)" '
+ test_expect_success "grep $L (with --column, --invert-match, extended AND)" '
{
echo ${HC}hello_world:3:Hello world
echo ${HC}hello_world:3:Hello_world
echo ${HC}hello_world:6:HeLLo_world
} >expected &&
- git grep --column --invert --not -e _ --and --not -e ll $H -- hello_world \
+ git grep --column --invert-match --not -e _ --and --not -e ll $H -- hello_world \
>actual &&
test_cmp expected actual
'
echo ".gitignore:.*o*" &&
cat ../expect.full
} >../expect.with.ignored &&
- git grep --no-index --no-exclude o >../actual.full &&
+ git grep --no-index --no-exclude-standard o >../actual.full &&
test_cmp ../expect.with.ignored ../actual.full
)
'
echo ".gitignore:.*o*" &&
cat ../expect.full
} >../expect.with.ignored &&
- git -c grep.fallbackToNoIndex grep --no-exclude o >../actual.full &&
+ git -c grep.fallbackToNoIndex grep --no-exclude-standard o >../actual.full &&
test_cmp ../expect.with.ignored ../actual.full
)
'
grep "Content-Transfer-Encoding: quoted-printable" msgtxt1
'
+test_expect_success $PREREQ 'carriage returns with auto encoding are quoted-printable' '
+ clean_fake_sendmail &&
+ cp $patches cr.patch &&
+ printf "this is a line\r\n" >>cr.patch &&
+ git send-email \
+ --from="Example <nobody@example.com>" \
+ --to=nobody@example.com \
+ --smtp-server="$(pwd)/fake.sendmail" \
+ --transfer-encoding=auto \
+ --no-validate \
+ cr.patch &&
+ grep "Content-Transfer-Encoding: quoted-printable" msgtxt1
+'
+
for enc in auto quoted-printable base64
do
test_expect_success $PREREQ "--validate passes with encoding $enc" '
git svn dcommit
'
-stop_httpd
-
test_done
)
'
-stop_httpd
-
test_done
)
'
-stop_httpd
-
test_done
( cd g && git rev-parse --symbolic --verify HEAD )
'
-stop_httpd
-
test_done
background_import_still_running
'
+###
+### series W (get-mark and empty orphan commits)
+###
+
+cat >>W-input <<-W_INPUT_END
+ commit refs/heads/W-branch
+ mark :1
+ author Full Name <user@company.tld> 1000000000 +0100
+ committer Full Name <user@company.tld> 1000000000 +0100
+ data 27
+ Intentionally empty commit
+ LFsget-mark :1
+ W_INPUT_END
+
+test_expect_success !MINGW 'W: get-mark & empty orphan commit with no newlines' '
+ sed -e s/LFs// W-input | tr L "\n" | git fast-import
+'
+
+test_expect_success !MINGW 'W: get-mark & empty orphan commit with one newline' '
+ sed -e s/LFs/L/ W-input | tr L "\n" | git fast-import
+'
+
+test_expect_success !MINGW 'W: get-mark & empty orphan commit with ugly second newline' '
+ # Technically, this should fail as it has too many linefeeds
+ # according to the grammar in fast-import.txt. But, for whatever
+ # reason, it works. Since using the correct number of newlines
+ # does not work with older (pre-2.22) versions of git, allow apps
+ # that used this second-newline workaround to keep working by
+ # checking it with this test...
+ sed -e s/LFs/LL/ W-input | tr L "\n" | git fast-import
+'
+
+test_expect_success !MINGW 'W: get-mark & empty orphan commit with erroneous third newline' '
+ # ...but do NOT allow more empty lines than that (see previous test).
+ sed -e s/LFs/LLL/ W-input | tr L "\n" | test_must_fail git fast-import
+'
+
test_done
)
'
-test_expect_success 'kill p4d' '
- kill_p4d
-'
-
test_done
'
test_expect_success 'restart p4d' '
- kill_p4d &&
+ stop_and_cleanup_p4d &&
start_p4d
'
'
test_expect_success 'restart p4d' '
- kill_p4d &&
+ stop_and_cleanup_p4d &&
start_p4d
'
)
'
-test_expect_success 'kill p4d' '
- kill_p4d
-'
-
test_done
)
'
-test_expect_success 'kill p4d' '
- kill_p4d
-'
-
test_done
)
'
-test_expect_success 'kill p4d' '
- kill_p4d
-'
-
test_done
)
'
-test_expect_success 'kill p4d' '
- kill_p4d
-'
-
test_done
)
'
-test_expect_success 'kill p4d' '
- kill_p4d
-'
-
test_done
test_path_is_file "$git"/cli_file2.t
'
-
-test_expect_success 'kill p4d' '
- kill_p4d
-'
-
test_done
)
'
-test_expect_success 'kill p4d' '
- kill_p4d
-'
-
test_done
)
'
-test_expect_success 'kill p4d' '
- kill_p4d
-'
-
test_done
git_verify "cdir 1/file11" "cdir 1/file12"
'
-test_expect_success 'kill p4d' '
- kill_p4d
-'
-
test_done
)
'
-test_expect_success 'kill p4d' '
- kill_p4d
-'
-
test_done
(
cd "$cli" &&
p4 sync ... &&
- !(p4 labels | grep GIT_TAG_ON_A_BRANCH)
+ ! p4 labels | grep GIT_TAG_ON_A_BRANCH
)
'
)
'
-
-test_expect_success 'kill p4d' '
- kill_p4d
-'
-
test_done
)
'
-test_expect_success 'kill p4d' '
- kill_p4d
-'
-
test_done
)
'
-test_expect_success 'kill p4d' '
- kill_p4d
-'
-
test_done
)
'
-test_expect_success 'kill p4d' '
- kill_p4d
-'
-
test_done
)
'
-test_expect_success 'kill p4d' '
- kill_p4d
-'
-
test_done
)
'
-test_expect_success 'kill p4d' '
- kill_p4d
-'
-
test_done
)
'
-test_expect_success 'kill p4d' '
- kill_p4d
-'
-
test_done
test_line_count \> 10 log
'
-test_expect_success 'kill p4d' '
- kill_p4d
-'
-
test_done
test_must_fail git p4 clone //depot/uc/...
'
-test_expect_success 'kill p4d' '
- kill_p4d
-'
-
test_done
)
'
-test_expect_success 'kill p4d' '
- kill_p4d
-'
-
test_done
)
'
-test_expect_success 'kill p4d' '
- kill_p4d
-'
-
test_done
UTF8_ESCAPED="a-\303\244_o-\303\266_u-\303\274.txt"
ISO8859_ESCAPED="a-\344_o-\366_u-\374.txt"
+ISO8859="$(printf "$ISO8859_ESCAPED")" &&
+echo content123 >"$ISO8859" &&
+rm "$ISO8859" || {
+ skip_all="fs does not accept ISO-8859-1 filenames"
+ test_done
+}
+
test_expect_success 'start p4d' '
start_p4d
'
)
'
-test_expect_success 'kill p4d' '
- kill_p4d
-'
-
test_done
)
'
-test_expect_success 'kill p4d' '
- kill_p4d
-'
-
test_done
)
'
-test_expect_success 'kill p4d' '
- kill_p4d
-'
-
test_done
git p4 clone --dest="$git" //depot
'
-test_expect_success 'kill p4d' '
- kill_p4d
-'
-
test_done
)
'
-test_expect_success 'kill p4d' '
- kill_p4d
-'
-
test_done
)
'
-test_expect_success 'kill p4d' '
- kill_p4d
-'
-
test_done
)
'
-test_expect_success 'kill p4d' '
- kill_p4d
-'
-
test_done
)
'
-test_expect_success 'kill p4d' '
- kill_p4d
-'
-
test_done
'
-test_expect_success 'kill p4d' '
- kill_p4d
-'
-
test_done
)
'
-test_expect_success 'kill p4d' '
- kill_p4d
-'
-
test_done
test_path_is_file file_to_shelve
)
'
-test_expect_success 'kill p4d' '
- kill_p4d
-'
test_done
)
'
-test_expect_success 'kill p4d' '
- kill_p4d
-'
-
-
test_done
} && (exit \"\$eval_ret\"); eval_ret=\$?; $test_cleanup"
}
+# This function can be used to schedule some commands to be run
+# unconditionally at the end of the test script, e.g. to stop a daemon:
+#
+# test_expect_success 'test git daemon' '
+# git daemon &
+# daemon_pid=$! &&
+# test_atexit 'kill $daemon_pid' &&
+# hello world
+# '
+#
+# The commands will be executed before the trash directory is removed,
+# i.e. the atexit commands will still be able to access any pidfiles or
+# socket files.
+#
+# Note that these commands will be run even when a test script run
+# with '--immediate' fails. Be careful with your atexit commands to
+# minimize any changes to the failed state.
+
+test_atexit () {
+ # We cannot detect when we are in a subshell in general, but
+ # doing so on Bash is better than nothing (the test will
+ # silently pass on other shells).
+ test "${BASH_SUBSHELL-0}" = 0 ||
+ error "bug in test script: test_atexit does nothing in a subshell"
+ test_atexit_cleanup="{ $*
+ } && (exit \"\$eval_ret\"); eval_ret=\$?; $test_atexit_cleanup"
+}
+
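This helper is what allows the per-script "kill p4d" and "stop_httpd" cleanup tests removed earlier in this patch to go away: a test library can register its daemon cleanup once when the daemon is started. A sketch with a hypothetical daemon command (not the actual lib-git-p4.sh code):

    start_test_daemon () {
        some_daemon --pid-file=daemon.pid &&    # hypothetical daemon start-up
        test_atexit 'test -f daemon.pid && kill "$(cat daemon.pid)"'
    }
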
# Most tests can use the created repository, but some may need to create more.
# Usage: test_create_repo <directory>
test_create_repo () {
. "$GIT_BUILD_DIR"/GIT-BUILD-OPTIONS
export PERL_PATH SHELL_PATH
+# Disallow the use of abbreviated options in the test suite by default
+if test -z "${GIT_TEST_DISALLOW_ABBREVIATED_OPTIONS}"
+then
+ GIT_TEST_DISALLOW_ABBREVIATED_OPTIONS=true
+ export GIT_TEST_DISALLOW_ABBREVIATED_OPTIONS
+fi
+
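This knob is why option spellings in the tests earlier in this patch were changed to their full forms (--invert-match, --find-renames, --no-exclude-standard); a sketch, assuming parse-options rejects abbreviated long options while it is set:

    GIT_TEST_DISALLOW_ABBREVIATED_OPTIONS=true git grep --invert foo       # rejected: abbreviation of --invert-match
    GIT_TEST_DISALLOW_ABBREVIATED_OPTIONS=true git grep --invert-match foo # full spelling is required
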
################################################################
# It appears that people try to run tests without building...
"${GIT_TEST_INSTALLED:-$GIT_BUILD_DIR}/git$X" >/dev/null
my @env = keys %ENV;
my $ok = join("|", qw(
TRACE
+ TR2_
DEBUG
TEST
.*_TEST
die () {
code=$?
+ # This is responsible for running the atexit commands even when a
+ # test script run with '--immediate' fails, or when the user hits
+ # ctrl-C, i.e. when 'test_done' is not invoked at all.
+ test_atexit_handler || code=$?
if test -n "$GIT_EXIT_OK"
then
exit $code
GIT_EXIT_OK=
trap 'die' EXIT
-trap 'exit $?' INT TERM HUP
+# Disable '-x' tracing, because with some shells, notably dash, it
+# prevents running the cleanup commands when a test script run with
+# '--verbose-log -x' is interrupted.
+trap '{ code=$?; set +x; } 2>/dev/null; exit $code' INT TERM HUP
# The user-facing functions are loaded from a separate file so that
# test_perf subshells can have them too
junit_have_testcase=t
}
+test_atexit_cleanup=:
+test_atexit_handler () {
+ # In a succeeding test script 'test_atexit_handler' is invoked
+ # twice: first from 'test_done', then from 'die' in the trap on
+ # EXIT.
+ # This condition and resetting 'test_atexit_cleanup' below makes
+ # sure that the registered cleanup commands are run only once.
+ test : != "$test_atexit_cleanup" || return 0
+
+ setup_malloc_check
+ test_eval_ "$test_atexit_cleanup"
+ test_atexit_cleanup=:
+ teardown_malloc_check
+}
+
test_done () {
GIT_EXIT_OK=t
+ # Run the atexit commands _before_ the trash directory is
+ # removed, so the commands can access pidfiles and socket files.
+ test_atexit_handler
+
if test -n "$write_junit_xml" && test -n "$junit_xml_path"
then
test -n "$junit_have_testcase" || {
if test -n "$GIT_TEST_GETTEXT_POISON_ORIG"
then
GIT_TEST_GETTEXT_POISON=$GIT_TEST_GETTEXT_POISON_ORIG
+ export GIT_TEST_GETTEXT_POISON
unset GIT_TEST_GETTEXT_POISON_ORIG
fi
int ret = 0;
struct git_transport_data *data = transport->data;
struct ref *refs = NULL;
- char *dest = xstrdup(transport->url);
struct fetch_pack_args args;
struct ref *refs_tmp = NULL;
switch (data->version) {
case protocol_v2:
- refs = fetch_pack(&args, data->fd, data->conn,
+ refs = fetch_pack(&args, data->fd,
refs_tmp ? refs_tmp : transport->remote_refs,
- dest, to_fetch, nr_heads, &data->shallow,
+ to_fetch, nr_heads, &data->shallow,
&transport->pack_lockfile, data->version);
break;
case protocol_v1:
case protocol_v0:
- refs = fetch_pack(&args, data->fd, data->conn,
+ refs = fetch_pack(&args, data->fd,
refs_tmp ? refs_tmp : transport->remote_refs,
- dest, to_fetch, nr_heads, &data->shallow,
+ to_fetch, nr_heads, &data->shallow,
&transport->pack_lockfile, data->version);
break;
case protocol_unknown_version:
free_refs(refs_tmp);
free_refs(refs);
- free(dest);
return ret;
}
struct tree_desc *t, struct tree_desc *tp,
int imin)
{
- unsigned mode;
+ unsigned short mode;
const char *path;
const struct object_id *oid;
int pathlen;
struct object_id oid;
};
-static int find_tree_entry(struct tree_desc *t, const char *name, struct object_id *result, unsigned *mode)
+static int find_tree_entry(struct tree_desc *t, const char *name, struct object_id *result, unsigned short *mode)
{
int namelen = strlen(name);
while (t->size) {
return -1;
}
-int get_tree_entry(const struct object_id *tree_oid, const char *name, struct object_id *oid, unsigned *mode)
+int get_tree_entry(const struct object_id *tree_oid, const char *name, struct object_id *oid, unsigned short *mode)
{
int retval;
void *tree;
* See the code for enum get_oid_result for a description of
* the return values.
*/
-enum get_oid_result get_tree_entry_follow_symlinks(struct object_id *tree_oid, const char *name, struct object_id *result, struct strbuf *result_path, unsigned *mode)
+enum get_oid_result get_tree_entry_follow_symlinks(struct object_id *tree_oid, const char *name, struct object_id *result, struct strbuf *result_path, unsigned short *mode)
{
int retval = MISSING_OBJECT;
struct dir_state *parents = NULL;
unsigned int size;
};
-static inline const struct object_id *tree_entry_extract(struct tree_desc *desc, const char **pathp, unsigned int *modep)
+static inline const struct object_id *tree_entry_extract(struct tree_desc *desc, const char **pathp, unsigned short *modep)
{
*pathp = desc->entry.path;
*modep = desc->entry.mode;
typedef int (*traverse_callback_t)(int n, unsigned long mask, unsigned long dirmask, struct name_entry *entry, struct traverse_info *);
int traverse_trees(struct index_state *istate, int n, struct tree_desc *t, struct traverse_info *info);
-enum get_oid_result get_tree_entry_follow_symlinks(struct object_id *tree_oid, const char *name, struct object_id *result, struct strbuf *result_path, unsigned *mode);
+enum get_oid_result get_tree_entry_follow_symlinks(struct object_id *tree_oid, const char *name, struct object_id *result, struct strbuf *result_path, unsigned short *mode);
struct traverse_info {
const char *traverse_path;
int show_all_errors;
};
-int get_tree_entry(const struct object_id *, const char *, struct object_id *, unsigned *);
+int get_tree_entry(const struct object_id *, const char *, struct object_id *, unsigned short *);
extern char *make_traverse_path(char *path, const struct traverse_info *info, const struct name_entry *n);
extern void setup_traverse_info(struct traverse_info *info, const char *base);
enum unpack_trees_error_types e,
const char *path)
{
+ if (o->quiet)
+ return -1;
+
if (!o->show_all_errors)
return error(ERRORMSG(o, e), super_prefixed(path));
flags |= SUBMODULE_MOVE_HEAD_FORCE;
if (submodule_move_head(ce->name, old_id, new_id, flags))
- return o->gently ? -1 :
- add_rejected_path(o, ERROR_WOULD_LOSE_SUBMODULE, ce->name);
+ return add_rejected_path(o, ERROR_WOULD_LOSE_SUBMODULE, ce->name);
return 0;
}
* below.
*/
struct oid_array to_fetch = OID_ARRAY_INIT;
- int fetch_if_missing_store = fetch_if_missing;
- fetch_if_missing = 0;
for (i = 0; i < index->cache_nr; i++) {
struct cache_entry *ce = index->cache[i];
- if ((ce->ce_flags & CE_UPDATE) &&
- !S_ISGITLINK(ce->ce_mode)) {
- if (!has_object_file(&ce->oid))
- oid_array_append(&to_fetch, &ce->oid);
- }
+
+ if (!(ce->ce_flags & CE_UPDATE) ||
+ S_ISGITLINK(ce->ce_mode))
+ continue;
+ if (!oid_object_info_extended(the_repository, &ce->oid,
+ NULL,
+ OBJECT_INFO_FOR_PREFETCH))
+ continue;
+ oid_array_append(&to_fetch, &ce->oid);
}
if (to_fetch.nr)
fetch_objects(repository_format_partial_clone,
to_fetch.oid, to_fetch.nr);
- fetch_if_missing = fetch_if_missing_store;
oid_array_clear(&to_fetch);
}
for (i = 0; i < index->cache_nr; i++) {
* instead of ODB since we already know what these trees contain.
*/
static int traverse_by_cache_tree(int pos, int nr_entries, int nr_names,
- struct name_entry *names,
struct traverse_info *info)
{
struct cache_entry *src[MAX_UNPACK_TREES + 1] = { NULL, };
* unprocessed entries before 'pos'.
*/
bottom = o->cache_bottom;
- ret = traverse_by_cache_tree(pos, nr_entries, n, names, info);
+ ret = traverse_by_cache_tree(pos, nr_entries, n, info);
o->cache_bottom = bottom;
return ret;
}
static int unpack_failed(struct unpack_trees_options *o, const char *message)
{
discard_index(&o->result);
- if (!o->gently && !o->exiting_early) {
+ if (!o->quiet && !o->exiting_early) {
if (message)
return error("%s", message);
return -1;
WRITE_TREE_SILENT |
WRITE_TREE_REPAIR);
}
+
+ o->result.updated_workdir = 1;
discard_index(o->dst_index);
*o->dst_index = o->result;
} else {
static int reject_merge(const struct cache_entry *ce,
struct unpack_trees_options *o)
{
- return o->gently ? -1 :
- add_rejected_path(o, ERROR_WOULD_OVERWRITE, ce->name);
+ return add_rejected_path(o, ERROR_WOULD_OVERWRITE, ce->name);
}
static int same(const struct cache_entry *a, const struct cache_entry *b)
int r = check_submodule_move_head(ce,
"HEAD", oid_to_hex(&ce->oid), o);
if (r)
- return o->gently ? -1 :
- add_rejected_path(o, error_type, ce->name);
+ return add_rejected_path(o, error_type, ce->name);
return 0;
}
}
if (errno == ENOENT)
return 0;
- return o->gently ? -1 :
- add_rejected_path(o, error_type, ce->name);
+ return add_rejected_path(o, error_type, ce->name);
}
int verify_uptodate(const struct cache_entry *ce,
*/
static int verify_clean_submodule(const char *old_sha1,
const struct cache_entry *ce,
- enum unpack_trees_error_types error_type,
struct unpack_trees_options *o)
{
if (!submodule_from_ce(ce))
}
static int verify_clean_subdirectory(const struct cache_entry *ce,
- enum unpack_trees_error_types error_type,
struct unpack_trees_options *o)
{
/*
if (!sub_head && oideq(&oid, &ce->oid))
return 0;
return verify_clean_submodule(sub_head ? NULL : oid_to_hex(&oid),
- ce, error_type, o);
+ ce, o);
}
/*
d.exclude_per_dir = o->dir->exclude_per_dir;
i = read_directory(&d, o->src_index, pathbuf, namelen+1, NULL);
if (i)
- return o->gently ? -1 :
- add_rejected_path(o, ERROR_NOT_UPTODATE_DIR, ce->name);
+ return add_rejected_path(o, ERROR_NOT_UPTODATE_DIR, ce->name);
free(pathbuf);
return cnt;
}
* files that are in "foo/" we would lose
* them.
*/
- if (verify_clean_subdirectory(ce, error_type, o) < 0)
+ if (verify_clean_subdirectory(ce, o) < 0)
return -1;
return 0;
}
return 0;
}
- return o->gently ? -1 :
- add_rejected_path(o, error_type, name);
+ return add_rejected_path(o, error_type, name);
}
/*
return error("Cannot do a bind merge of %d trees",
o->merge_size);
if (a && old)
- return o->gently ? -1 :
+ return o->quiet ? -1 :
error(ERRORMSG(o, ERROR_BIND_OVERLAP),
super_prefixed(a->name),
super_prefixed(old->name));
diff_index_cached,
debug_unpack,
skip_sparse_checkout,
- gently,
+ quiet,
exiting_early,
show_all_errors,
dry_run;
return 1;
}
-static void check_non_tip(struct object_array *want_obj)
+static void check_non_tip(struct object_array *want_obj,
+ struct packet_writer *writer)
{
int i;
/* Pick one of them (we know there at least is one) */
for (i = 0; i < want_obj->nr; i++) {
struct object *o = want_obj->objects[i].item;
- if (!is_our_ref(o))
+ if (!is_our_ref(o)) {
+ packet_writer_error(writer,
+ "upload-pack: not our ref %s",
+ oid_to_hex(&o->oid));
die("git upload-pack: not our ref %s",
oid_to_hex(&o->oid));
+ }
}
}
* by another process that handled the initial request.
*/
if (has_non_tip)
- check_non_tip(want_obj);
+ check_non_tip(want_obj, &writer);
if (!use_sideband && daemon_mode)
no_progress = 1;
} bdiffparam_t;
-#define xdl_malloc(x) malloc(x)
+#define xdl_malloc(x) xmalloc(x)
#define xdl_free(ptr) free(ptr)
-#define xdl_realloc(ptr,x) realloc(ptr,x)
+#define xdl_realloc(ptr,x) xrealloc(ptr,x)
void *xdl_mmfile_first(mmfile_t *mmf, long *size);
long xdl_mmfile_size(mmfile_t *mmf);
#if !defined(XINCLUDE_H)
#define XINCLUDE_H
-#include <ctype.h>
-#include <stdio.h>
-#include <stdlib.h>
-#include <unistd.h>
-#include <string.h>
-#include <limits.h>
-
+#include "git-compat-util.h"
#include "xmacros.h"
#include "xdiff.h"
#include "xtypes.h"