/test-dump-cache-tree
/test-scrap-cache-tree
/test-genrandom
+/test-hashmap
/test-index-version
/test-line-buffer
/test-match-trees
"char * string". This makes it easier to understand code
like "char *string, c;".
+ - Use whitespace around operators and keywords, but not inside
+ parentheses and not around functions. So:
+
+ while (condition)
+ func(bar + 1);
+
+ and not:
+
+ while( condition )
+ func (bar+1);
+
- We avoid using braces unnecessarily. I.e.
if (bla) {
--- /dev/null
+Git v1.9.1 Release Notes
+========================
+
+Fixes since v1.9.0
+------------------
+
+ * "git clean -d pathspec" did not use the given pathspec correctly
+ and ended up cleaning too much.
+
+ * "git difftool" misbehaved when the repository is bound to the
+ working tree with the ".git file" mechanism, where a textual file
+ ".git" tells us where it is.
+
+ * "git push" did not pay attention to branch.*.pushremote if it is
+ defined earlier than remote.pushdefault; the order of these two
+ variables in the configuration file should not matter, but it did
+ by mistake.
+
+ * Codepaths that parse timestamps in commit objects have been
+ tightened.
+
+ * "git diff --external-diff" incorrectly fed the submodule directory
+   in the working tree to the external diff driver when it knew it was
+ the same as one of the versions being compared.
+
+ * "git reset" needs to refresh the index when working in a working
+ tree (it can also be used to match the index to the HEAD in an
+ otherwise bare repository), but it failed to set up the working
+ tree properly, causing GIT_WORK_TREE to be ignored.
+
+ * "git check-attr" when working on a repository with a working tree
+ did not work well when the working tree was specified via the
+ --work-tree (and obviously with --git-dir) option.
+
+ * "merge-recursive" was broken in 1.7.7 era and stopped working in
+ an empty (temporary) working tree, when there are renames
+ involved. This has been corrected.
+
+ * "git rev-parse" was loose in rejecting command line arguments
+ that do not make sense, e.g. "--default" without the required
+ value for that option.
+
+ * include.path variable (or any variable that expects a path that
+ can use ~username expansion) in the configuration file is not a
+ boolean, but the code failed to check it.
+
+ * "git diff --quiet -- pathspec1 pathspec2" sometimes did not return
+   the correct status value.
+
+ * Attempting to deepen a shallow repository by fetching over smart
+ HTTP transport failed in the protocol exchange, when no-done
+ extension was used. The fetching side waited for the list of
+ shallow boundary commits after the sending end stopped talking to
+ it.
+
+ * Allow "git cmd path/", when the 'path' is where a submodule is
+ bound to the top-level working tree, to match 'path', despite the
+ extra and unnecessary trailing slash (such a slash is often
+ given by command line completion).
--- /dev/null
+Git v1.9.2 Release Notes
+========================
+
+Fixes since v1.9.1
+------------------
+
+ * "git mv" that moves a submodule forgot to adjust the array it
+   uses to keep track of which submodules were to be moved, to update
+   their configuration.
+
+ * Length limit for the pathname used when removing a path in a deep
+ subdirectory has been removed to avoid buffer overflows.
+
+ * The test helper lib-terminal always ran an actual test_expect_*
+   when included, which interfered with a skip-all that may
+   have to be done later.
+
+ * "git index-pack" used a wrong variable to name the keep-file in an
+ error message when the file cannot be written or closed.
+
+ * "rebase -i" produced a broken instruction sheet when the title of a commit
+ happened to contain '\n' (or ended with '\c') due to a careless use
+ of 'echo'.
+
+ * There were a few instances of 'git-foo' remaining in the
+ documentation that should have been spelled 'git foo'.
+
+ * Serving objects from a shallow repository needs to write a
+   new file to hold the temporary shallow boundaries, but it was not
+   cleaned up when we exited due to die() or a signal.
+
+ * When "git stash pop" stops after failing to apply the stash
+ (e.g. due to conflicting changes), the stash is not dropped. State
+ that explicitly in the output to let the users know.
+
+ * The labels in "git status" output that describe the nature of
+ conflicts (e.g. "both deleted") were limited to 20 bytes, which was
+ too short for some l10n (e.g. fr).
--- /dev/null
+Git v2.0 Release Notes
+======================
+
+Backward compatibility notes
+----------------------------
+
+When "git push [$there]" does not say what to push, we have used the
+traditional "matching" semantics so far (all your branches were sent
+to the remote as long as there already are branches of the same name
+over there). In Git 2.0, the default is now the "simple" semantics,
+which pushes:
+
+ - only the current branch to the branch with the same name, and only
+ when the current branch is set to integrate with that remote
+ branch, if you are pushing to the same remote as you fetch from; or
+
+ - only the current branch to the branch with the same name, if you
+ are pushing to a remote that is not where you usually fetch from.
+
+You can use the configuration variable "push.default" to change
+this. If you are an old-timer who wants to keep using the
+"matching" semantics, you can set the variable to "matching", for
+example. Read the documentation for other possibilities.
+
+When "git add -u" and "git add -A" are run inside a subdirectory
+without specifying which paths to add on the command line, they
+operate on the entire tree for consistency with "git commit -a" and
+other commands (these commands used to operate only on the current
+subdirectory). Say "git add -u ." or "git add -A ." if you want to
+limit the operation to the current directory.
+
+"git add <path>" is the same as "git add -A <path>" now, so that
+"git add dir/" will notice paths you removed from the directory and
+record the removal. In older versions of Git, "git add <path>" used
+to ignore removals. You can say "git add --ignore-removal <path>" to
+add only added or modified paths in <path>, if you really want to.
+
+The "-q" option to "git diff-files", which does *NOT* mean "quiet",
+has been removed (it told Git to ignore deletion, which you can do
+with "git diff-files --diff-filter=d").
+
+"git request-pull" lost a few "heuristics" that often led to mistakes.
+
+
+Updates since v1.9 series
+-------------------------
+
+UI, Workflows & Features
+
+ * "git gc --aggressive" learned "--depth" option and
+ "gc.aggressiveDepth" configuration variable to allow use of a less
+ insane depth than the built-in default value of 250.
+
+ * "git log" learned the "--show-linear-break" option to show where a
+ single strand-of-pearls is broken in its output.
+
+ * The "rev-parse --parseopt" mechanism used by scripted Porcelains to
+ parse command line options and to give help text learned to take
+ the argv-help (the placeholder string for an option parameter,
+ e.g. "key-id" in "--gpg-sign=<key-id>").
+
+ * The pattern used in "diff" and "grep -p" to find where a function
+   begins in C/C++ source has been updated to handle C++ better.
+
+ * "git rebase" learned to interpret a lone "-" as "@{-1}", the
+ branch that we were previously on.
+
+ * "git commit --cleanup=<mode>" learned a new mode, scissors.
+
+ * "git tag --list" output can be sorted using "version sort" with
+ "--sort=version:refname".
+
+ * Discard the accumulated "heuristics" to guess which branch the
+   result wants to be pulled from, and make sure what the end user
+   specified is not second-guessed by "git request-pull", to avoid
+ mistakes. When you pushed out your 'master' branch to your public
+ repository as 'for-linus', use the new "master:for-linus" syntax to
+ denote the branch to be pulled.
+
+ * "git grep" learned to behave in a way similar to native grep when
+ "-h" (no header) and "-c" (count) options are given.
+
+ * transport-helper, fast-import and fast-export have been updated to
+ allow the ref mapping and ref deletion in a way similar to the
+ natively supported transports.
+
+ * The "simple" mode is the default for "git push".
+
+ * "git add -u" and "git add -A", when run without any pathspec, are
+   tree-wide operations even when run inside a subdirectory of a
+   working tree.
+
+ * "git add <path>" is the same as "git add -A <path>" now.
+
+ * "core.statinfo" configuration variable, which is a
+ never-advertised synonym to "core.checkstat", has been removed.
+
+ * The "-q" option to "git diff-files", which does *NOT* mean
+ "quiet", has been removed (it told Git to ignore deletion, which
+ you can do with "git diff-files --diff-filter=d").
+
+ * Server operators can loosen the "tips of refs only" restriction for
+ the remote archive service with the uploadarchive.allowUnreachable
+ configuration option.
+
+ * The progress indicators from various time-consuming commands have
+ been marked for i18n/l10n.
+
+ * "git notes -C <blob>" diagnoses an attempt to use an object that
+ is not a blob as an error.
+
+ * "git config" learned to read from the standard input when "-" is
+ given as the value to its "--file" parameter (attempting an
+ operation to update the configuration in the standard input of
+ course is rejected).
+
+ * Trailing whitespace in .gitignore files, unless it is quoted for
+   fnmatch(3), e.g. "path\ ", now gets a warning and is ignored.
+   Strictly speaking, this is a backward-incompatible change, but it
+   is very unlikely to bite any sane user and adjusting should be
+   obvious and easy.
+
+ * Many commands that create commits, e.g. "pull", "rebase",
+ learned to take the --gpg-sign option on the command line.
+
+ * "git commit" can be told to always GPG sign the resulting commit
+ by setting "commit.gpgsign" configuration variable to true (the
+ command line option --no-gpg-sign should override it).
+
+ * "git pull" can be told to only accept fast-forward updates by
+   setting the new "pull.ff" configuration variable.
+
+ * "git reset" learned "-N" option, which does not reset the index
+ fully for paths the index knows about but the tree-ish the command
+ resets to does not (these paths are kept as intend-to-add entries).
+
+
+Performance, Internal Implementation, etc.
+
+ * The compilation options to port to AIX have been updated.
+
+ * We started using wildmatch() in place of fnmatch(3) a few releases
+ ago; complete the process and stop using fnmatch(3).
+
+ * Uses of curl's "multi" interface and "easy" interface do not mix
+ well when we attempt to reuse outgoing connections. Teach the RPC
+ over http code, used in the smart HTTP transport, not to use the
+ "easy" interface.
+
+ * The bitmap-index feature from JGit has been ported, which should
+   significantly improve performance when serving objects from a
+ repository that uses it.
+
+ * The way "git log --cc" shows a combined diff against multiple
+   parents has been optimized.
+
+ * The prefixcmp() and suffixcmp() functions are gone. Use
+ starts_with() and ends_with(), and also consider if skip_prefix()
+ suits your needs better when using the former.
+
+
+Also contains various documentation updates and code clean-ups. Many
+of them came from a flurry of activity around GSoC candidate
+microproject exercises.
+
+
+Fixes since v1.9 series
+-----------------------
+
+Unless otherwise noted, all the fixes since v1.9 in the maintenance
+track are contained in this release (see the maintenance releases'
+notes for details).
+
+ * "git diff --no-index -Mq a b" fell into an infinite loop.
+ (merge ad1c3fb jc/fix-diff-no-index-diff-opt-parse later to maint).
+
+ * "git fetch --prune", when the right-hand-side of multiple fetch
+ refspecs overlap (e.g. storing "refs/heads/*" to
+ "refs/remotes/origin/*", while storing "refs/frotz/*" to
+ "refs/remotes/origin/fr/*"), aggressively thought that lack of
+ "refs/heads/fr/otz" on the origin site meant we should remove
+ "refs/remotes/origin/fr/otz" from us, without checking their
+ "refs/frotz/otz" first.
+
+ Note that such a configuration is inherently unsafe (think what
+ should happen when "refs/heads/fr/otz" does appear on the origin
+ site), but that is not a reason not to be extra careful.
+ (merge e6f6371 cn/fetch-prune-overlapping-destination later to maint).
+
+ * "git status --porcelain --branch" showed its output with labels
+ "ahead/behind/gone" translated to the user's locale.
+ (merge 7a76c28 mm/status-porcelain-format-i18n-fix later to maint).
+
+ * "git repack" died when asked to (re)pack with the reachability
+ bitmap when a bitmap cannot be built; instead, just (re)pack
+ without producing a bitmap in such a case, with a warning.
+ (merge 373c67d jk/pack-bitmap later to maint).
+
+ * The progress output while repacking and transferring objects showed
+ an apparent large silence while writing the objects out of existing
+ packfiles, when the reachability bitmap was in use.
+ (merge 78d2214 jk/pack-bitmap-progress later to maint).
+
+ * A stray environment variable $prefix could have leaked into and
+ affected the behaviour of the "subtree" script (in contrib/).
+
+ * When it is not necessary to edit a commit log message (e.g. "git
+ commit -m" is given a message without specifying "-e"), we used to
+ disable the spawning of the editor by overriding GIT_EDITOR, but
+   this meant that all the uses of the editor, other than to edit the
+   commit log message, were also affected.
+ (merge b549be0 bp/commit-p-editor later to maint).
+
+ * "git mv" that moves a submodule forgot to adjust the array it
+   uses to keep track of which submodules were to be moved, to update
+   their configuration.
+ (merge fb8a4e8 jk/mv-submodules-fix later to maint).
+
+ * Length limit for the pathname used when removing a path in a deep
+ subdirectory has been removed to avoid buffer overflows.
+ (merge 2f29e0c mh/remove-subtree-long-pathname-fix later to maint).
+
+ * The test helper lib-terminal always ran an actual test_expect_*
+   when included, which interfered with a skip-all that may
+   have to be done later.
+ (merge 7e27173 jk/lib-terminal-lazy later to maint).
+
+ * "git index-pack" used a wrong variable to name the keep-file in an
+ error message when the file cannot be written or closed.
+ (merge de983a0 nd/index-pack-error-message later to maint).
+
+ * "rebase -i" produced a broken instruction sheet when the title of a commit
+ happened to contain '\n' (or ended with '\c') due to a careless use
+ of 'echo'.
+ (merge cb1aefd us/printf-not-echo later to maint).
+
+ * There were a few instances of 'git-foo' remaining in the
+ documentation that should have been spelled 'git foo'.
+ (merge 3c3e6f5 rr/doc-merge-strategies later to maint).
+
+ * Serving objects from a shallow repository needs to write a
+   new file to hold the temporary shallow boundaries, but it was not
+   cleaned up when we exited due to die() or a signal.
+ (merge 7839632 jk/shallow-update-fix later to maint).
+
+ * When "git stash pop" stops after failing to apply the stash
+ (e.g. due to conflicting changes), the stash is not dropped. State
+ that explicitly in the output to let the users know.
+ (merge 2d4c993 jc/stash-pop-not-popped later to maint).
+
+ * The labels in "git status" output that describe the nature of
+ conflicts (e.g. "both deleted") were limited to 20 bytes, which was
+ too short for some l10n (e.g. fr).
+ (merge c7cb333 jn/wt-status later to maint).
+
+ * "git clean -d pathspec" did not use the given pathspec correctly
+ and ended up cleaning too much.
+ (merge 1f2e108 jk/clean-d-pathspec later to maint).
+
+ * "git difftool" misbehaved when the repository is bound to the
+ working tree with the ".git file" mechanism, where a textual file
+ ".git" tells us where it is.
+ (merge fcfec8b da/difftool-git-files later to maint).
+
+ * "git push" did not pay attention to branch.*.pushremote if it is
+ defined earlier than remote.pushdefault; the order of these two
+ variables in the configuration file should not matter, but it did
+ by mistake.
+ (merge 98b406f jk/remote-pushremote-config-reading later to maint).
+
+ * Codepaths that parse timestamps in commit objects have been
+ tightened.
+ (merge 3f419d4 jk/commit-dates-parsing-fix later to maint).
+
+ * "git diff --external-diff" incorrectly fed the submodule directory
+   in the working tree to the external diff driver when it knew it was
+ the same as one of the versions being compared.
+ (merge aba4727 tr/diff-submodule-no-reuse-worktree later to maint).
+
+ * "git reset" needs to refresh the index when working in a working
+ tree (it can also be used to match the index to the HEAD in an
+ otherwise bare repository), but it failed to set up the working
+ tree properly, causing GIT_WORK_TREE to be ignored.
+ (merge b7756d4 nd/reset-setup-worktree later to maint).
+
+ * "git check-attr" when working on a repository with a working tree
+ did not work well when the working tree was specified via the
+ --work-tree (and obviously with --git-dir) option.
+ (merge cdbf623 jc/check-attr-honor-working-tree later to maint).
+
+ * "merge-recursive" was broken in 1.7.7 era and stopped working in
+ an empty (temporary) working tree, when there are renames
+ involved. This has been corrected.
+ (merge 6e2068a bk/refresh-missing-ok-in-merge-recursive later to maint.)
+
+ * "git rev-parse" was loose in rejecting command line arguments
+ that do not make sense, e.g. "--default" without the required
+ value for that option.
+ (merge a43219f ds/rev-parse-required-args later to maint.)
+
+ * include.path variable (or any variable that expects a path that
+ can use ~username expansion) in the configuration file is not a
+ boolean, but the code failed to check it.
+ (merge 67beb60 jk/config-path-include-fix later to maint.)
+
+ * Commands that take pathspecs on the command line misbehaved when
+ the pathspec is given as an absolute pathname (which is a
+ practice not particularly encouraged) that points at a symbolic
+ link in the working tree.
+   (merge 655ee9e mw/symlinks later to maint.)
+
+ * "git diff --quiet -- pathspec1 pathspec2" sometimes did not return
+   the correct status value.
+ (merge f34b205 nd/diff-quiet-stat-dirty later to maint.)
+
+ * Attempting to deepen a shallow repository by fetching over smart
+ HTTP transport failed in the protocol exchange, when no-done
+ extension was used. The fetching side waited for the list of
+ shallow boundary commits after the sending end stopped talking to
+ it.
+ (merge 0232852 nd/http-fetch-shallow-fix later to maint.)
+
+ * Allow "git cmd path/", when the 'path' is where a submodule is
+ bound to the top-level working tree, to match 'path', despite the
+ extra and unnecessary trailing slash (such a slash is often
+ given by command line completion).
+ (merge 2e70c01 nd/submodule-pathspec-ending-with-slash later to maint.)
Note that this list is non-comprehensive and not necessarily complete.
For command-specific variables, you will find a more detailed description
-in the appropriate manual page. You will find a description of non-core
-porcelain configuration variables in the respective porcelain documentation.
+in the appropriate manual page.
+
+Other git-related tools may and do use their own variables. When
+inventing new variables for use in your own tool, make sure their
+names do not conflict with those that are used by Git itself and
+other popular tools, and describe them in your documentation.
+
advice.*::
These variables control various optional help messages designed to
--
pushUpdateRejected::
Set this variable to 'false' if you want to disable
- 'pushNonFFCurrent', 'pushNonFFDefault',
+ 'pushNonFFCurrent',
'pushNonFFMatching', 'pushAlreadyExists',
'pushFetchFirst', and 'pushNeedsForce'
simultaneously.
pushNonFFCurrent::
Advice shown when linkgit:git-push[1] fails due to a
non-fast-forward update to the current branch.
- pushNonFFDefault::
- Advice to set 'push.default' to 'upstream' or 'current'
- when you ran linkgit:git-push[1] and pushed 'matching
- refs' by default (i.e. you did not provide an explicit
- refspec, and no 'push.default' configuration was set)
- and it resulted in a non-fast-forward error.
pushNonFFMatching::
Advice shown when you ran linkgit:git-push[1] and pushed
'matching refs' explicitly (i.e. you used ':', or
have to remove the help lines that begin with `#` in the commit log
template yourself, if you do this).
+commit.gpgsign::
+
+ A boolean to specify whether all commits should be GPG signed.
+ Use of this option when doing operations such as rebase can
+ result in a large number of commits being signed. It may be
+ convenient to use an agent to avoid typing your GPG passphrase
+ several times.
+
commit.status::
A boolean to enable/disable inclusion of status information in the
commit message template when using an editor to prepare the commit
object to a worktree file upon checkout. See
linkgit:gitattributes[5] for details.
+gc.aggressiveDepth::
+ The depth parameter used in the delta compression
+ algorithm used by 'git gc --aggressive'. This defaults
+ to 250.
+
gc.aggressiveWindow::
The window size parameter used in the delta compression
algorithm used by 'git gc --aggressive'. This defaults
--auto` consolidates them into one larger pack. The
default value is 50. Setting this to 0 disables it.
+gc.autodetach::
+	Make `git gc --auto` return immediately and run in the background
+ if the system supports it. Default is true.
+
gc.packrefs::
Running `git pack-refs` in a repository renders it
unclonable by Git versions prior to 1.5.1.2 over dumb
The configuration variables in the 'imap' section are described
in linkgit:git-imap-send[1].
+index.version::
+ Specify the version with which new index files should be
+ initialized. This does not affect existing repositories.
+
init.templatedir::
Specify the directory from which templates will be copied.
(See the "TEMPLATE DIRECTORY" section of linkgit:git-init[1].)
linkgit:git-add[1], linkgit:git-checkout[1], linkgit:git-commit[1],
linkgit:git-reset[1], and linkgit:git-stash[1]. Note that this
setting is silently ignored if portable keystroke input
- is not available.
+ is not available; requires the Perl module Term::ReadKey.
log.abbrevCommit::
If true, makes linkgit:git-log[1], linkgit:git-show[1], and
Common unit suffixes of 'k', 'm', or 'g' are
supported.
+pack.useBitmaps::
+ When true, git will use pack bitmaps (if available) when packing
+ to stdout (e.g., during the server side of a fetch). Defaults to
+ true. You should not generally need to turn this off unless
+ you are debugging pack bitmaps.
+
+pack.writeBitmaps::
+ When true, git will write a bitmap index when packing all
+ objects to disk (e.g., when `git repack -a` is run). This
+ index can speed up the "counting objects" phase of subsequent
+ packs created for clones and fetches, at the cost of some disk
+ space and extra time spent on the initial repack. Defaults to
+ false.
+
+pack.writeBitmapHashCache::
+ When true, git will include a "hash cache" section in the bitmap
+ index (if one is written). This cache can be used to feed git's
+ delta heuristics, potentially leading to better deltas between
+ bitmapped and non-bitmapped objects (e.g., when serving a fetch
+ between an older, bitmapped pack and objects that have been
+ pushed since the last gc). The downside is that it consumes 4
+ bytes per object of disk space, and that JGit's bitmap
+ implementation does not understand it, causing it to complain if
+ Git and JGit are used on the same repository. Defaults to false.
+
pager.<cmd>::
If the value is boolean, turns on or off pagination of the
output of a particular Git subcommand when writing to a tty.
Note that an alias with the same name as a built-in format
will be silently ignored.
+pull.ff::
+ By default, Git does not create an extra merge commit when merging
+ a commit that is a descendant of the current commit. Instead, the
+ tip of the current branch is fast-forwarded. When set to `false`,
+ this variable tells Git to create an extra merge commit in such
+ a case (equivalent to giving the `--no-ff` option from the command
+ line). When set to `only`, only such fast-forward merges are
+ allowed (equivalent to giving the `--ff-only` option from the
+ command line).
+
pull.rebase::
When true, rebase branches on top of the fetched branch, instead
of merging the default branch from the default remote when "git
pull from, work as `current`. This is the safest option and is suited
for beginners.
+
-This mode will become the default in Git 2.0.
+This mode has become the default in Git 2.0.
* `matching` - push all branches having the same name on both ends.
This makes the repository you are pushing to remember the set of
people may add new branches there, or update the tip of existing
branches outside your control.
+
-This is currently the default, but Git 2.0 will change the default
-to `simple`.
+This used to be the default, but not since Git 2.0 (`simple` is the
+new default).
--
"false" and repack. Access from old Git versions over the
native protocol are unaffected by this option.
+repack.packKeptObjects::
+ If set to true, makes `git repack` act as if
+ `--pack-kept-objects` was passed. See linkgit:git-repack[1] for
+ details. Defaults to `false` normally, but `true` if a bitmap
+ index is being written (either via `--write-bitmap-index` or
+ `pack.writeBitmaps`).
+
rerere.autoupdate::
When set to true, `git-rerere` updates the index with the
resulting contents after it cleanly resolves conflicts using
not set, the value of this variable is used instead.
The default value is 100.
+uploadarchive.allowUnreachable::
+ If true, allow clients to use `git archive --remote` to request
+ any tree, whether reachable from the ref tips or not. See the
+ discussion in the `SECURITY` section of
+ linkgit:git-upload-archive[1] for more details. Defaults to
+ `false`.
+
uploadpack.hiderefs::
String(s) `upload-pack` uses to decide which refs to omit
from its initial advertisement. Use more than one
Files to add content from. Fileglobs (e.g. `*.c`) can
be given to add all matching files. Also a
leading directory name (e.g. `dir` to add `dir/file1`
- and `dir/file2`) can be given to add all files in the
- directory, recursively.
+ and `dir/file2`) can be given to update the index to
+ match the current state of the directory as a whole (e.g.
+ specifying `dir` will record not just a file `dir/file1`
+ modified in the working tree, a file `dir/file2` added to
+ the working tree, but also a file `dir/file3` removed from
+ the working tree. Note that older versions of Git used
+ to ignore removed files; use `--no-all` option if you want
+ to add modified or new files but ignore removed ones.
-n::
--dry-run::
<pathspec>. This removes as well as modifies index entries to
match the working tree, but adds no new files.
+
-If no <pathspec> is given, the current version of Git defaults to
-"."; in other words, update all tracked files in the current directory
-and its subdirectories. This default will change in a future version
-of Git, hence the form without <pathspec> should not be used.
+If no <pathspec> is given when `-u` option is used, all
+tracked files in the entire working tree are updated (old versions
+of Git used to limit the update to the current directory and its
+subdirectories).
-A::
--all::
entry. This adds, modifies, and removes index entries to
match the working tree.
+
-If no <pathspec> is given, the current version of Git defaults to
-"."; in other words, update all files in the current directory
-and its subdirectories. This default will change in a future version
-of Git, hence the form without <pathspec> should not be used.
+If no <pathspec> is given when `-A` option is used, all
+files in the entire working tree are updated (old versions
+of Git used to limit the update to the current directory and its
+subdirectories).
--no-all::
--ignore-removal::
files that have been removed from the working tree. This
option is a no-op when no <pathspec> is used.
+
-This option is primarily to help the current users of Git, whose
-"git add <pathspec>..." ignores removed files. In future versions
-of Git, "git add <pathspec>..." will be a synonym to "git add -A
-<pathspec>..." and "git add --ignore-removal <pathspec>..." will behave like
-today's "git add <pathspec>...", ignoring removed files.
+This option is primarily to help users who are used to older
+versions of Git, whose "git add <pathspec>..." was a synonym
+for "git add --no-all <pathspec>...", i.e. ignored removed files.
-N::
--intent-to-add::
[--ignore-date] [--ignore-space-change | --ignore-whitespace]
[--whitespace=<option>] [-C<n>] [-p<n>] [--directory=<dir>]
[--exclude=<path>] [--include=<path>] [--reject] [-q | --quiet]
- [--[no-]scissors]
+ [--[no-]scissors] [-S[<keyid>]] [--patch-format=<format>]
[(<mbox> | <Maildir>)...]
'git am' (--continue | --skip | --abort)
program that applies
the patch.
+--patch-format::
+ By default the command will try to detect the patch format
+ automatically. This option allows the user to bypass the automatic
+ detection and specify the patch format that the patch(es) should be
+ interpreted as. Valid formats are mbox, stgit, stgit-series and hg.
+
-i::
--interactive::
Run interactively.
Skip the current patch. This is only meaningful when
restarting an aborted patch.
+-S[<keyid>]::
+--gpg-sign[=<keyid>]::
+ GPG-sign commits.
+
--continue::
-r::
--resolved::
commits that is more easily fixed by changing the mailbox (e.g.
errors in the "From:" lines).
+HOOKS
+-----
+This command can run `applypatch-msg`, `pre-applypatch`,
+and `post-applypatch` hooks. See linkgit:githooks[5] for more
+information.
SEE ALSO
--------
--remote=<repo>::
Instead of making a tar archive from the local repository,
- retrieve a tar archive from a remote repository.
+ retrieve a tar archive from a remote repository. Note that the
+ remote repository may place restrictions on which sha1
+ expressions may be allowed in `<tree-ish>`. See
+ linkgit:git-upload-archive[1] for details.
--exec=<git-upload-archive>::
Used with --remote to specify the path to the
development history for when a code snippet occurred in a change. This makes it
possible to track when a code snippet was added to a file, moved or copied
between files, and eventually deleted or replaced. It works by searching for
-a text string in the diff. A small example:
+a text string in the diff. A small example of the pickaxe interface
+that searches for `blame_usage`:
-----------------------------------------------------------------------------
$ git log --pretty=oneline -S'blame_usage'
SYNOPSIS
--------
[verse]
-'git cherry-pick' [--edit] [-n] [-m parent-number] [-s] [-x] [--ff] <commit>...
+'git cherry-pick' [--edit] [-n] [-m parent-number] [-s] [-x] [--ff]
+ [-S[<key-id>]] <commit>...
'git cherry-pick' --continue
'git cherry-pick' --quit
'git cherry-pick' --abort
--signoff::
Add Signed-off-by line at the end of the commit message.
+-S[<key-id>]::
+--gpg-sign[=<key-id>]::
+ GPG-sign commits.
+
--ff::
If the current HEAD is the same as the parent of the
cherry-pick'ed commit, then a fast forward to this commit will
never use the local optimizations). Specifying `--no-local` will
override the default when `/path/to/repo` is given, using the regular
Git transport instead.
-+
-To force copying instead of hardlinking (which may be desirable if you
-are trying to make a back-up of your repository), but still avoid the
-usual "Git aware" transport mechanism, `--no-hardlinks` can be used.
--no-hardlinks::
- Optimize the cloning process from a repository on a
- local filesystem by copying files under `.git/objects`
- directory.
+ Force the cloning process from a repository on a local
+ filesystem to copy the files under the `.git/objects`
+ directory instead of using hardlinks. This may be desirable
+ if you are trying to make a back-up of your repository.
--shared::
-s::
from the standard input.
-S[<keyid>]::
+--gpg-sign[=<keyid>]::
GPG-sign commit.
+--no-gpg-sign::
+ Countermand `commit.gpgsign` configuration variable that is
+ set to force each and every commit to be signed.
+
Commit Information
------------------
[-F <file> | -m <msg>] [--reset-author] [--allow-empty]
[--allow-empty-message] [--no-verify] [-e] [--author=<author>]
[--date=<date>] [--cleanup=<mode>] [--[no-]status]
- [-i | -o] [-S[<keyid>]] [--] [<file>...]
+ [-i | -o] [-S[<key-id>]] [--] [<file>...]
DESCRIPTION
-----------
--cleanup=<mode>::
This option determines how the supplied commit message should be
cleaned up before committing. The '<mode>' can be `strip`,
- `whitespace`, `verbatim`, or `default`.
+ `whitespace`, `verbatim`, `scissors` or `default`.
+
--
strip::
Same as `strip` except #commentary is not removed.
verbatim::
Do not change the message at all.
+scissors::
+ Same as `whitespace`, except that everything from (and
+ including) the line
+ "`# ------------------------ >8 ------------------------`"
+ is truncated if the message is to be edited. "`#`" can be
+ customized with core.commentChar.
default::
Same as `strip` if the message is to be edited.
Otherwise `whitespace`.
--gpg-sign[=<keyid>]::
GPG-sign commit.
+--no-gpg-sign::
+ Countermand `commit.gpgsign` configuration variable that is
+ set to force each and every commit to be signed.
+
\--::
Do not interpret any more arguments as options.
*WARNING:* `git cvsimport` uses cvsps version 2, which is considered
deprecated; it does not work with cvsps version 3 and later. If you are
performing a one-shot import of a CVS repository consider using
-link:http://cvs2svn.tigris.org/cvs2git.html[cvs2git] or
-link:https://github.com/BartMassey/parsecvs[parsecvs].
+http://cvs2svn.tigris.org/cvs2git.html[cvs2git] or
+https://github.com/BartMassey/parsecvs[parsecvs].
Imports a CVS repository into Git. It will either create a new
repository, or incrementally import into an existing one.
of your Git history, but you probably don't need this flexibility if
you're simply _removing unwanted data_ like large files or passwords.
For those operations you may want to consider
-link:http://rtyley.github.io/bfg-repo-cleaner/[The BFG Repo-Cleaner],
+http://rtyley.github.io/bfg-repo-cleaner/[The BFG Repo-Cleaner],
a JVM-based alternative to git-filter-branch, typically at least
10-50x faster for those use-cases, and with quite different
characteristics:
_is_ possible to write filters that include their own parallelism,
in the scripts executed against each commit.
-* The link:http://rtyley.github.io/bfg-repo-cleaner/#examples[command options]
+* The http://rtyley.github.io/bfg-repo-cleaner/#examples[command options]
are much more restrictive than git-filter-branch, and dedicated just
to the task of removing unwanted data, e.g.
`--strip-blobs-bigger-than 1M`.
the documentation for the '--window' option in linkgit:git-repack[1] for
more details. This defaults to 250.
+Similarly, the optional configuration variable 'gc.aggressiveDepth'
+controls the '--depth' option in linkgit:git-repack[1]. This defaults to 250.
+
The optional configuration variable 'gc.pruneExpire' controls how old
the unreferenced loose objects have to be before they are pruned. The
default is "2 weeks ago".
--------
[verse]
'git merge' [-n] [--stat] [--no-commit] [--squash] [--[no-]edit]
- [-s <strategy>] [-X <strategy-option>] [-S[<keyid>]]
+ [-s <strategy>] [-X <strategy-option>] [-S[<key-id>]]
[--[no-]rerere-autoupdate] [-m <msg>] [<commit>...]
'git merge' <msg> HEAD <commit>...
'git merge' --abort
'git notes' append [-F <file> | -m <msg> | (-c | -C) <object>] [<object>]
'git notes' edit [<object>]
'git notes' show [<object>]
-'git notes' merge [-v | -q] [-s <strategy> ] <notes_ref>
+'git notes' merge [-v | -q] [-s <strategy> ] <notes-ref>
'git notes' merge --commit [-v | -q]
'git notes' merge --abort [-v | -q]
'git notes' remove [--ignore-missing] [--stdin] [<object>...]
the same way as 'git rev-list' with the `--objects` flag
uses its `commit` arguments to build the list of objects it
outputs. The objects on the resulting list are packed.
+ Besides revisions, `--not` or `--shallow <SHA-1>` lines are
+ also accepted.
--unpacked::
This implies `--revs`. When processing the list of
already exists on the remote side.
--all::
- Instead of naming each ref to push, specifies that all
- refs under `refs/heads/` be pushed.
+ Push all branches (i.e. refs under `refs/heads/`); cannot be
+ used with other <refspec>.
--prune::
Remove remote branches that don't have a local counterpart. For example
configured for the current branch).
`git push origin`::
- Without additional configuration, works like
- `git push origin :`.
+ Without additional configuration, pushes the current branch to
+ the configured upstream (`remote.origin.merge` configuration
+ variable) if it has the same name as the current branch, and
+ errors out without pushing otherwise.
+
The default behavior of this command when no <refspec> is given can be
configured by setting the `push` option of the remote, or the `push.default`
specified, `-s recursive`. Note the reversal of 'ours' and
'theirs' as noted above for the `-m` option.
+-S[<keyid>]::
+--gpg-sign[=<keyid>]::
+ GPG-sign commits.
+
-q::
--quiet::
Be quiet. Implies --no-stat.
NAME
----
-git-remote - manage set of tracked repositories
+git-remote - Manage set of tracked repositories
SYNOPSIS
SYNOPSIS
--------
[verse]
-'git repack' [-a] [-A] [-d] [-f] [-F] [-l] [-n] [-q] [--window=<n>] [--depth=<n>]
+'git repack' [-a] [-A] [-d] [-f] [-F] [-l] [-n] [-q] [-b] [--window=<n>] [--depth=<n>]
DESCRIPTION
-----------
The default is unlimited, unless the config variable
`pack.packSizeLimit` is set.
+-b::
+--write-bitmap-index::
+ Write a reachability bitmap index as part of the repack. This
+ only makes sense when used with `-a` or `-A`, as the bitmaps
+ must be able to refer to all reachable objects. This option
+	overrides the setting of `pack.writeBitmaps`.
+
+--pack-kept-objects::
+ Include objects in `.keep` files when repacking. Note that we
+ still do not delete `.keep` packs after `pack-objects` finishes.
+ This means that we may duplicate objects, but this makes the
+ option safe to use when there are concurrent pushes or fetches.
+ This option is generally only useful if you are writing bitmaps
+	with `-b` or `pack.writeBitmaps`, as it ensures that the
+ bitmapped packfile has the necessary objects.
Configuration
-------------
DESCRIPTION
-----------
-Summarizes the changes between two commits to the standard output, and includes
-the given URL in the generated summary.
+Generate a request asking your upstream project to pull changes into
+their tree. The request, printed to the standard output, summarizes
+the changes and indicates from where they can be pulled.
+
+The upstream project is expected to have the commit named by
+`<start>` and the output asks it to integrate the changes you made
+since that commit, up to the commit named by `<end>`, by visiting
+the repository named by `<url>`.
+
OPTIONS
-------
-p::
- Show patch text
+ Include patch text in the output.
<start>::
- Commit to start at.
+ Commit to start at. This names a commit that is already in
+ the upstream history.
<url>::
- URL to include in the summary.
+ The repository URL to be pulled from.
<end>::
- Commit to end at; defaults to HEAD.
+ Commit to end at (defaults to HEAD). This names the commit
+ at the tip of the history you are asking to be pulled.
++
+When the repository named by `<url>` has the commit at a tip of a
+ref that is different from the ref you have locally, you can use the
+`<local>:<remote>` syntax: give its local name, a colon `:`, and
+its remote name.
+
+
+EXAMPLE
+-------
+
+Imagine that you built your work on your `master` branch on top of
+the `v1.0` release, and want it to be integrated to the project.
+First you push that change to your public repository for others to
+see:
+
+ git push https://git.ko.xz/project master
+
+Then, you run this command:
+
+ git request-pull v1.0 https://git.ko.xz/project master
+
+which will produce a request to the upstream, summarizing the
+changes between the `v1.0` release and your `master`, to pull it
+from your public repository.
+
+If you pushed your change to a branch whose name is different from
+the one you have locally, e.g.
+
+ git push https://git.ko.xz/project master:for-linus
+
+then you can ask that to be pulled with
+
+ git request-pull v1.0 https://git.ko.xz/project master:for-linus
+
GIT
---
[verse]
'git reset' [-q] [<tree-ish>] [--] <paths>...
'git reset' (--patch | -p) [<tree-ish>] [--] [<paths>...]
-'git reset' [--soft | --mixed | --hard | --merge | --keep] [-q] [<commit>]
+'git reset' [--soft | --mixed [-N] | --hard | --merge | --keep] [-q] [<commit>]
DESCRIPTION
-----------
Resets the index but not the working tree (i.e., the changed files
are preserved but not marked for commit) and reports what has not
been updated. This is the default action.
++
+If `-N` is specified, removed paths are marked as intent-to-add (see
+linkgit:git-add[1]).
--hard::
Resets the index and working tree. Any changes to tracked files in the
[ \--reverse ]
[ \--walk-reflogs ]
[ \--no-walk ] [ \--do-walk ]
+ [ \--use-bitmap-index ]
<commit>... [ \-- <paths>... ]
DESCRIPTION
'git rev-parse --parseopt' input format is fully text based. It has two parts,
separated by a line that contains only `--`. The lines before the separator
-(should be more than one) are used for the usage.
+(should be one or more) are used for the usage.
The lines after the separator describe the options.
Each line of options has this format:
------------
-<opt_spec><flags>* SP+ help LF
+<opt-spec><flags>*<arg-hint>? SP+ help LF
------------
-`<opt_spec>`::
+`<opt-spec>`::
its format is the short option character, then the long option name
separated by a comma. You do not have to give both, though at least
one is necessary. `h,help`, `dry-run` and `f` are all three correct
- `<opt_spec>`.
+ `<opt-spec>`.
`<flags>`::
`<flags>` consist of `*`, `=`, `?` or `!`.
* Use `!` to not make the corresponding negated long option available.
+`<arg-hint>`::
+ `<arg-hint>`, if specified, is used as a name of the argument in the
+ help output, for options that take arguments. `<arg-hint>` is
+ terminated by the first whitespace. It is customary to use a
+ dash to separate words in a multi-word argument hint.
+
The remainder of the line, after stripping the spaces, is used
as the help associated to the option.
foo some nifty option --foo
bar= some cool option --bar with an argument
+baz=arg another cool option --baz with a named argument
+qux?path qux may take a path argument but has meaning by itself
An option group Header
C? option C with an optional argument"
eval "$(echo "$OPTS_SPEC" | git rev-parse --parseopt -- "$@" || echo exit $?)"
------------
+
+Usage text
+~~~~~~~~~~
+
+When `"$@"` is `-h` or `--help` in the above example, the following
+usage text would be shown:
+
+------------
+usage: some-command [options] <args>...
+
+ some-command does foo and bar!
+
+ -h, --help show the help
+ --foo some nifty option --foo
+ --bar ... some cool option --bar with an argument
+ --baz <arg> another cool option --baz with a named argument
+ --qux[=<path>] qux may take a path argument but has meaning by itself
+
+An option group Header
+ -C[...] option C with an optional argument
+------------
+
SQ-QUOTE
--------
SYNOPSIS
--------
[verse]
-'git revert' [--[no-]edit] [-n] [-m parent-number] [-s] <commit>...
+'git revert' [--[no-]edit] [-n] [-m parent-number] [-s] [-S[<key-id>]] <commit>...
'git revert' --continue
'git revert' --quit
'git revert' --abort
This is useful when reverting the effect of more than one
commit to your index in a row.
+-S[<key-id>]::
+--gpg-sign[=<key-id>]::
+ GPG-sign commits.
+
-s::
--signoff::
Add Signed-off-by line at the end of the commit message.
OPTIONS
-------
-save [-p|--patch] [--[no-]keep-index] [-u|--include-untracked] [-a|--all] [-q|--quiet] [<message>]::
+save [-p|--patch] [-k|--[no-]keep-index] [-u|--include-untracked] [-a|--all] [-q|--quiet] [<message>]::
Save your local modifications to a new 'stash', and run `git reset
--hard` to revert them. The <message> part is optional and gives
OUTPUT
------
The output from this command is designed to be used as a commit
-template comment, and all the output lines are prefixed with '#'.
+template comment.
The default, long format, is designed to be human readable,
verbose and descriptive. Its contents and format are subject to change
at any time.
'git submodule' [--quiet] init [--] [<path>...]
'git submodule' [--quiet] deinit [-f|--force] [--] <path>...
'git submodule' [--quiet] update [--init] [--remote] [-N|--no-fetch]
- [-f|--force] [--rebase] [--reference <repository>] [--depth <depth>]
- [--merge] [--recursive] [--] [<path>...]
+ [-f|--force] [--rebase|--merge] [--reference <repository>]
+ [--depth <depth>] [--recursive] [--] [<path>...]
'git submodule' [--quiet] summary [--cached|--files] [(-n|--summary-limit) <n>]
[commit] [--] [<path>...]
'git submodule' [--quiet] foreach [--recursive] <command>
-b::
--branch::
Branch of repository to add as submodule.
- The name of the branch is recorded as `submodule.<path>.branch` in
+ The name of the branch is recorded as `submodule.<name>.branch` in
`.gitmodules` for `update --remote`.
-f::
fetches the submodule's remote repository before calculating the
SHA-1. If you don't want to fetch, you should use `submodule update
--remote --no-fetch`.
++
+Use this option to integrate changes from the upstream subproject with
+your submodule's current HEAD. Alternatively, you can run `git pull`
+from the submodule, which is equivalent except for the remote branch
+name: `update --remote` uses the default upstream repository and
+`submodule.<name>.branch`, while `git pull` uses the submodule's
+`branch.<name>.merge`. Prefer `submodule.<name>.branch` if you want
+to distribute the default upstream branch with the superproject and
+`branch.<name>.merge` if you want a more native feel while working in
+the submodule itself.
-N::
--no-fetch::
This option is only valid for the update command.
Don't fetch new objects from the remote site.
+--checkout::
+ This option is only valid for the update command.
+ Checkout the commit recorded in the superproject on a detached HEAD
+	in the submodule. This is the default behavior; the main use of
+ this option is to override `submodule.$name.update` when set to
+ `merge`, `rebase` or `none`.
+ If the key `submodule.$name.update` is either not explicitly set or
+ set to `checkout`, this option is implicit.
+
--merge::
This option is only valid for the update command.
Merge the commit recorded in the superproject into the current branch
using fnmatch(3)). Multiple patterns may be given; if any of
them matches, the tag is shown.
+--sort=<type>::
+	Sort in a specific order. Supported types are "refname"
+	(lexicographic order), and "version:refname" or "v:refname" (tag
+	names are treated as versions). Prepend "-" to reverse sort
+ order.
+
--column[=<options>]::
--no-column::
Display tag listing in columns. See configuration variable
'git update-index'
[--add] [--remove | --force-remove] [--replace]
[--refresh] [-q] [--unmerged] [--ignore-missing]
- [(--cacheinfo <mode> <object> <file>)...]
+ [(--cacheinfo <mode>,<object>,<file>)...]
[--chmod=(+|-)x]
[--[no-]assume-unchanged]
[--[no-]skip-worktree]
--ignore-missing::
Ignores missing files during a --refresh
+--cacheinfo <mode>,<object>,<path>::
--cacheinfo <mode> <object> <path>::
- Directly insert the specified info into the index.
+ Directly insert the specified info into the index. For
+ backward compatibility, you can also give these three
+ arguments as three separate parameters, but new users are
+ encouraged to use a single-parameter form.
--index-info::
Read index information from stdin.
for the protocol is on the 'git archive' side, and the program pair
is meant to be used to get an archive from a remote repository.
+SECURITY
+--------
+
+In order to protect the privacy of objects that have been removed from
+history but may not yet have been pruned, `git-upload-archive` avoids
+serving archives for commits and trees that are not reachable from the
+repository's refs. However, because calculating object reachability is
+computationally expensive, `git-upload-archive` implements a stricter
+but easier-to-check set of rules:
+
+ 1. Clients may request a commit or tree that is pointed to directly by
+ a ref. E.g., `git archive --remote=origin v1.0`.
+
+ 2. Clients may request a sub-tree within a commit or tree using the
+ `ref:path` syntax. E.g., `git archive --remote=origin v1.0:Documentation`.
+
+ 3. Clients may _not_ use other sha1 expressions, even if the end
+ result is reachable. E.g., neither a relative commit like `master^`
+ nor a literal sha1 like `abcd1234` is allowed, even if the result
+ is reachable from the refs.
+
+Note that rule 3 disallows many cases that do not have any privacy
+implications. These rules are subject to change in future versions of
+git, and the server accessed by `git archive --remote` may or may not
+follow these exact rules.
+
+If the config option `uploadArchive.allowUnreachable` is true, these
+rules are ignored, and clients may use arbitrary sha1 expressions.
+This is useful if you do not care about the privacy of unreachable
+objects, or if your object database is already publicly available for
+access via non-smart-http.
+
OPTIONS
-------
<directory>::
branch of the `git.git` repository.
Documentation for older releases are available here:
-* link:v1.9.0/git.html[documentation for release 1.9.0]
+* link:v1.9.1/git.html[documentation for release 1.9.1]
* release notes for
+ link:RelNotes/1.9.1.txt[1.9.1],
link:RelNotes/1.9.0.txt[1.9.0].
* link:v1.8.5.5/git.html[documentation for release 1.8.5.5]
index file. If not specified, the default of `$GIT_DIR/index`
is used.
+'GIT_INDEX_VERSION'::
+ This environment variable allows the specification of an index
+ version for new repositories. It won't affect existing index
+	files. By default index file version 2 or 3 is used.
+
'GIT_OBJECT_DIRECTORY'::
If the object storage directory is specified via this
environment variable then the sha1 directories are created
convenient to organize your project with an informal hierarchy
of developers. Linux kernel development is run this way. There
is a nice illustration (page 17, "Merges to Mainline") in
-link:http://www.xenotime.net/linux/mentor/linux-mentoring-2006.pdf[Randy Dunlap's presentation].
+http://www.xenotime.net/linux/mentor/linux-mentoring-2006.pdf[Randy Dunlap's presentation].
It should be stressed that this hierarchy is purely *informal*.
There is nothing fundamental in Git that enforces the "chain of
-----------------------
First, install version 2.1 or higher of cvsps from
-link:http://www.cobite.com/cvsps/[http://www.cobite.com/cvsps/] and make
+http://www.cobite.com/cvsps/[http://www.cobite.com/cvsps/] and make
sure it is in your path. Then cd to a checked out CVS working directory
of the project you are interested in and run linkgit:git-cvsimport[1]:
Put a backslash ("`\`") in front of the first hash for patterns
that begin with a hash.
+ - Trailing spaces are ignored unless they are quoted with backslash
+ ("`\`").
+
- An optional prefix "`!`" which negates the pattern; any
matching file excluded by a previous pattern will become
included again. It is not possible to re-include a file if a parent
Files
-----
-Gitk creates the .gitk file in your $HOME directory to store preferences
-such as display options, font, and colors.
+User configuration and preferences are stored at:
+
+* '$XDG_CONFIG_HOME/git/gitk' if it exists, otherwise
+* '$HOME/.gitk' if it exists
+
+If neither of the above exist then '$XDG_CONFIG_HOME/git/gitk' is created and
+used by default. If '$XDG_CONFIG_HOME' is not set it defaults to
+'$HOME/.config' in all cases.
History
-------
'option check-connectivity' \{'true'|'false'\}::
Request the helper to check connectivity of a clone.
+'option force' \{'true'|'false'\}::
+ Request the helper to perform a force update. Defaults to
+ 'false'.
+
'option cloning' \{'true'|'false'\}::
Notify the helper this is a clone request (i.e. the current
repository is guaranteed empty).
per line describes a commit and its fake parents by
listing their 40-byte hexadecimal object names separated
by a space and terminated by a newline.
++
+Note that the grafts mechanism is outdated and can lead to problems
+transferring objects between repositories; see linkgit:git-replace[1]
+for a more flexible and robust system to do the same thing.
info/exclude::
This file, by convention among Porcelains, stores the
* Fields use modified URI encoding, defined in RFC 3986, section 2.1
(Percent-Encoding), or rather "Query string encoding" (see
-link:http://en.wikipedia.org/wiki/Query_string#URL_encoding[]), the difference
+http://en.wikipedia.org/wiki/Query_string#URL_encoding[]), the difference
being that SP (" ") can be encoded as "{plus}" (and therefore "{plus}" has to be
also percent-encoded).
+
you can make Git pretend the set of <<def_parent,parents>> a <<def_commit,commit>> has
is different from what was recorded when the commit was
created. Configured via the `.git/info/grafts` file.
++
+Note that the grafts mechanism is outdated and can lead to problems
+transferring objects between repositories; see linkgit:git-replace[1]
+for a more flexible and robust system to do the same thing.
[[def_hash]]hash::
In Git's context, synonym for <<def_object_name,object name>>.
MERGE STRATEGIES
----------------
-The merge mechanism ('git-merge' and 'git-pull' commands) allows the
+The merge mechanism (`git merge` and `git pull` commands) allows the
backend 'merge strategies' to be chosen with `-s` option. Some strategies
can also take their own options, which can be passed by giving `-X<option>`
-arguments to 'git-merge' and/or 'git-pull'.
+arguments to `git merge` and/or `git pull`.
resolve::
This can only resolve two heads (i.e. the current branch
merged tree of the common ancestors and uses that as
the reference tree for the 3-way merge. This has been
reported to result in fewer merge conflicts without
- causing mis-merges by tests done on actual merge commits
+ causing mismerges by tests done on actual merge commits
taken from Linux 2.6 kernel development history.
Additionally this can detect and handle merges involving
renames. This is the default merge strategy when
Output excluded boundary commits. Boundary commits are
prefixed with `-`.
+ifdef::git-rev-list[]
+--use-bitmap-index::
+
+ Try to speed up the traversal using the pack bitmap index (if
+ one is available). Note that when traversing with `--objects`,
+ trees and blobs will not have their associated path printed.
+endif::git-rev-list[]
+
--
History Simplification
This implies the `--topo-order` option by default, but the
`--date-order` option may also be specified.
+--show-linear-break[=<barrier>]::
+	When --graph is not used, all history branches are flattened,
+	which can make it hard to see that two consecutive commits
+ do not belong to a linear branch. This option puts a barrier
+ in between them in that case. If `<barrier>` is specified, it
+ is the string that will be shown instead of the default one.
+
ifdef::git-rev-list[]
--count::
Print a number stating how many commits would have been
+++ /dev/null
-hash API
-========
-
-The hash API is a collection of simple hash table functions. Users are expected
-to implement their own hashing.
-
-Data Structures
----------------
-
-`struct hash_table`::
-
- The hash table structure. The `array` member points to the hash table
- entries. The `size` member counts the total number of valid and invalid
- entries in the table. The `nr` member keeps track of the number of
- valid entries.
-
-`struct hash_table_entry`::
-
- An opaque structure representing an entry in the hash table. The `hash`
- member is the entry's hash key and the `ptr` member is the entry's
- value.
-
-Functions
----------
-
-`init_hash`::
-
- Initialize the hash table.
-
-`free_hash`::
-
- Release memory associated with the hash table.
-
-`insert_hash`::
-
- Insert a pointer into the hash table. If an entry with that hash
- already exists, a pointer to the existing entry's value is returned.
- Otherwise NULL is returned. This allows callers to implement
- chaining, etc.
-
-`lookup_hash`::
-
- Lookup an entry in the hash table. If an entry with that hash exists
- the entry's value is returned. Otherwise NULL is returned.
-
-`for_each_hash`::
-
- Call a function for each entry in the hash table. The function is
- expected to take the entry's value as its only argument and return an
- int. If the function returns a negative int the loop is aborted
- immediately. Otherwise, the return value is accumulated and the sum
- returned upon completion of the loop.
--- /dev/null
+hashmap API
+===========
+
+The hashmap API is a generic implementation of hash-based key-value mappings.
+
+Data Structures
+---------------
+
+`struct hashmap`::
+
+ The hash table structure.
++
+The `size` member keeps track of the total number of entries. The `cmpfn`
+member is a function used to compare two entries for equality. The `table` and
+`tablesize` members store the hash table and its size, respectively.
+
+`struct hashmap_entry`::
+
+ An opaque structure representing an entry in the hash table, which must
+	An opaque structure representing an entry in the hash table, which
+	must be used as the first member of user data structures. Ideally it
+	should be followed by an int-sized member to prevent unused memory on
+	64-bit systems due to alignment.
++
+The `hash` member is the entry's hash code and the `next` member points to the
+next entry in case of collisions (i.e. if multiple entries map to the same
+bucket).
+
+`struct hashmap_iter`::
+
+ An iterator structure, to be used with hashmap_iter_* functions.
+
+Types
+-----
+
+`int (*hashmap_cmp_fn)(const void *entry, const void *entry_or_key, const void *keydata)`::
+
+ User-supplied function to test two hashmap entries for equality. Shall
+ return 0 if the entries are equal.
++
+This function is always called with non-NULL `entry` and `entry_or_key`
+parameters that have the same hash code. When looking up an entry, the `key`
+and `keydata` parameters passed to hashmap_get and hashmap_remove are
+forwarded as the second and third arguments, respectively. Otherwise,
+`keydata` is NULL.
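+
+For illustration, a matching comparison function for entries keyed by an
+embedded NUL-terminated string might look like this (a minimal sketch;
+`struct str_entry` is a hypothetical user structure, not part of the API):
+
+[source,c]
+------------
+struct str_entry {
+	struct hashmap_entry ent; /* must be the first member */
+	char key[FLEX_ARRAY];     /* FLEX_ARRAY as in git-compat-util.h */
+};
+
+static int str_entry_cmp(const void *entry, const void *entry_or_key,
+			 const void *keydata)
+{
+	const struct str_entry *e1 = entry;
+	const struct str_entry *e2 = entry_or_key;
+	/* during hashmap_get/hashmap_remove, keydata may carry the key
+	 * directly; otherwise entry_or_key is a full entry */
+	const char *key = keydata ? (const char *)keydata : e2->key;
+	return strcmp(e1->key, key);
+}
+------------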
+
+Functions
+---------
+
+`unsigned int strhash(const char *buf)`::
+`unsigned int strihash(const char *buf)`::
+`unsigned int memhash(const void *buf, size_t len)`::
+`unsigned int memihash(const void *buf, size_t len)`::
+
+ Ready-to-use hash functions for strings, using the FNV-1 algorithm (see
+ http://www.isthe.com/chongo/tech/comp/fnv).
++
+`strhash` and `strihash` take 0-terminated strings, while `memhash` and
+`memihash` operate on arbitrary-length memory.
++
+`strihash` and `memihash` are the case-insensitive versions (a minimal
+sketch of the FNV-1 loop follows the function list below).
+
+`void hashmap_init(struct hashmap *map, hashmap_cmp_fn equals_function, size_t initial_size)`::
+
+ Initializes a hashmap structure.
++
+`map` is the hashmap to initialize.
++
+The `equals_function` can be specified to compare two entries for equality.
+If NULL, entries are considered equal if their hash codes are equal.
++
+If the total number of entries is known in advance, the `initial_size`
+parameter may be used to preallocate a sufficiently large table and thus
+prevent expensive resizing. If 0, the table is dynamically resized.
+
+`void hashmap_free(struct hashmap *map, int free_entries)`::
+
+ Frees a hashmap structure and allocated memory.
++
+`map` is the hashmap to free.
++
+If `free_entries` is true, each hashmap_entry in the map is freed as well
+(using stdlib's free()).
+
+`void hashmap_entry_init(void *entry, unsigned int hash)`::
+
+ Initializes a hashmap_entry structure.
++
+`entry` points to the entry to initialize.
++
+`hash` is the hash code of the entry.
+
+`void *hashmap_get(const struct hashmap *map, const void *key, const void *keydata)`::
+
+ Returns the hashmap entry for the specified key, or NULL if not found.
++
+`map` is the hashmap structure.
++
+`key` is a hashmap_entry structure (or user data structure that starts with
+hashmap_entry) that has at least been initialized with the proper hash code
+(via `hashmap_entry_init`).
++
+If an entry with matching hash code is found, `key` and `keydata` are passed
+to `hashmap_cmp_fn` to decide whether the entry matches the key.
+
+`void *hashmap_get_next(const struct hashmap *map, const void *entry)`::
+
+ Returns the next equal hashmap entry, or NULL if not found. This can be
+ used to iterate over duplicate entries (see `hashmap_add`).
++
+`map` is the hashmap structure.
++
+`entry` is the hashmap_entry to start the search from, obtained via a previous
+call to `hashmap_get` or `hashmap_get_next`.
+
+`void hashmap_add(struct hashmap *map, void *entry)`::
+
+	Adds a hashmap entry. This allows adding duplicate entries (i.e.
+	separate values with the same key according to hashmap_cmp_fn).
++
+`map` is the hashmap structure.
++
+`entry` is the entry to add.
+
+`void *hashmap_put(struct hashmap *map, void *entry)`::
+
+ Adds or replaces a hashmap entry. If the hashmap contains duplicate
+ entries equal to the specified entry, only one of them will be replaced.
++
+`map` is the hashmap structure.
++
+`entry` is the entry to add or replace.
++
+Returns the replaced entry, or NULL if not found (i.e. the entry was added).
+
+`void *hashmap_remove(struct hashmap *map, const void *key, const void *keydata)`::
+
+ Removes a hashmap entry matching the specified key. If the hashmap
+ contains duplicate entries equal to the specified key, only one of
+ them will be removed.
++
+`map` is the hashmap structure.
++
+`key` is a hashmap_entry structure (or user data structure that starts with
+hashmap_entry) that has at least been initialized with the proper hash code
+(via `hashmap_entry_init`).
++
+If an entry with matching hash code is found, `key` and `keydata` are
+passed to `hashmap_cmp_fn` to decide whether the entry matches the key.
++
+Returns the removed entry, or NULL if not found.
+
+`void hashmap_iter_init(struct hashmap *map, struct hashmap_iter *iter)`::
+`void *hashmap_iter_next(struct hashmap_iter *iter)`::
+`void *hashmap_iter_first(struct hashmap *map, struct hashmap_iter *iter)`::
+
+ Used to iterate over all entries of a hashmap.
++
+`hashmap_iter_init` initializes a `hashmap_iter` structure.
++
+`hashmap_iter_next` returns the next hashmap_entry, or NULL if there are no
+more entries.
++
+`hashmap_iter_first` is a combination of both (i.e. initializes the iterator
+and returns the first entry, if any).
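+
+As a concrete illustration of the FNV-1 scheme mentioned above, a 32-bit
+string hash along the following lines could be used (the constants are the
+standard FNV-1 offset basis and prime; the in-tree implementation may differ
+in minor details):
+
+[source,c]
+------------
+static unsigned int fnv1_strhash(const char *str)
+{
+	/* 2166136261 and 16777619 are the standard 32-bit FNV parameters;
+	 * FNV-1 multiplies first, then XORs in the next byte */
+	unsigned int c, hash = 2166136261u;
+	while ((c = (unsigned char) *str++))
+		hash = (hash * 16777619) ^ c;
+	return hash;
+}
+------------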
+
+Usage example
+-------------
+
+Here's a simple usage example that maps long keys to double values.
+[source,c]
+------------
+struct hashmap map;
+
+struct long2double {
+ struct hashmap_entry ent; /* must be the first member! */
+ long key;
+ double value;
+};
+
+static int long2double_cmp(const struct long2double *e1, const struct long2double *e2, const void *unused)
+{
+ return !(e1->key == e2->key);
+}
+
+void long2double_init(void)
+{
+ hashmap_init(&map, (hashmap_cmp_fn) long2double_cmp, 0);
+}
+
+void long2double_free(void)
+{
+ hashmap_free(&map, 1);
+}
+
+static struct long2double *find_entry(long key)
+{
+ struct long2double k;
+ hashmap_entry_init(&k, memhash(&key, sizeof(long)));
+ k.key = key;
+ return hashmap_get(&map, &k, NULL);
+}
+
+double get_value(long key)
+{
+ struct long2double *e = find_entry(key);
+ return e ? e->value : 0;
+}
+
+void set_value(long key, double value)
+{
+ struct long2double *e = find_entry(key);
+ if (!e) {
+ e = malloc(sizeof(struct long2double));
+ hashmap_entry_init(e, memhash(&key, sizeof(long)));
+ e->key = key;
+ hashmap_add(&map, e);
+ }
+ e->value = value;
+}
+------------
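+
+Building on this example, the iterator functions can be used to visit all
+stored entries; a minimal sketch (iteration order is unspecified):
+
+[source,c]
+------------
+void print_all(void)
+{
+	struct hashmap_iter iter;
+	struct long2double *e;
+
+	/* hashmap_iter_first initializes the iterator and returns the first
+	 * entry; hashmap_iter_next returns the rest, then NULL */
+	for (e = hashmap_iter_first(&map, &iter); e;
+	     e = hashmap_iter_next(&iter))
+		printf("%ld -> %f\n", e->key, e->value);
+}
+------------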
+
+Using variable-sized keys
+-------------------------
+
+The `hashmap_get` and `hashmap_remove` functions expect an ordinary
+`hashmap_entry` structure as key to find the correct entry. If the key data is
+variable-sized (e.g. a FLEX_ARRAY string) or quite large, it is undesirable
+to create a full-fledged entry structure on the heap and copy all the key data
+into the structure.
+
+In this case, the `keydata` parameter can be used to pass
+variable-sized key data directly to the comparison function, and the `key`
+parameter can be a stripped-down, fixed-size entry structure allocated on the
+stack.
+
+See test-hashmap.c for an example using arbitrary-length strings as keys.
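+
+For example, a lookup by string key can pass the string via `keydata` and use
+a bare `hashmap_entry` on the stack as the key; a minimal sketch, reusing the
+hypothetical `str_entry`/`str_entry_cmp` from the earlier comparison-function
+sketch:
+
+[source,c]
+------------
+static struct str_entry *find_str_entry(struct hashmap *map, const char *str)
+{
+	struct hashmap_entry key;
+
+	/* only the hash code is needed in the key; the string itself is
+	 * handed to the comparison function through keydata */
+	hashmap_entry_init(&key, strhash(str));
+	return hashmap_get(map, &key, str);
+}
+------------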
--- /dev/null
+GIT bitmap v1 format
+====================
+
+ - A header appears at the beginning:
+
+ 4-byte signature: {'B', 'I', 'T', 'M'}
+
+ 2-byte version number (network byte order)
+ The current implementation only supports version 1
+ of the bitmap index (the same one as JGit).
+
+ 2-byte flags (network byte order)
+
+ The following flags are supported:
+
+ - BITMAP_OPT_FULL_DAG (0x1) REQUIRED
+ This flag must always be present. It implies that the bitmap
+ index has been generated for a packfile with full closure
+ (i.e. where every single object in the packfile can find
+ its parent links inside the same packfile). This is a
+ requirement for the bitmap index format, also present in JGit,
+ that greatly reduces the complexity of the implementation.
+
+ - BITMAP_OPT_HASH_CACHE (0x4)
+ If present, the end of the bitmap file contains
+ `N` 32-bit name-hash values, one per object in the
+ pack. The format and meaning of the name-hash is
+ described below.
+
+ 4-byte entry count (network byte order)
+
+ The total count of entries (bitmapped commits) in this bitmap index.
+
+ 20-byte checksum
+
+ The SHA1 checksum of the pack this bitmap index belongs to.
+
+ - 4 EWAH bitmaps that act as type indexes
+
+ Type indexes are serialized after the hash cache in the shape
+ of four EWAH bitmaps stored consecutively (see Appendix A for
+ the serialization format of an EWAH bitmap).
+
+ There is a bitmap for each Git object type, stored in the following
+ order:
+
+ - Commits
+ - Trees
+ - Blobs
+ - Tags
+
+ In each bitmap, the `n`th bit is set to true if the `n`th object
+ in the packfile is of that type.
+
+ The obvious consequence is that the OR of all 4 bitmaps will result
+ in a full set (all bits set), and the AND of all 4 bitmaps will
+ result in an empty bitmap (no bits set).
+
+ - N entries with compressed bitmaps, one for each indexed commit
+
+	Where `N` is the total number of entries in this bitmap index.
+ Each entry contains the following:
+
+ - 4-byte object position (network byte order)
+ The position **in the index for the packfile** where the
+ bitmap for this commit is found.
+
+ - 1-byte XOR-offset
+ The xor offset used to compress this bitmap. For an entry
+ in position `x`, a XOR offset of `y` means that the actual
+ bitmap representing this commit is composed by XORing the
+ bitmap for this entry with the bitmap in entry `x-y` (i.e.
+ the bitmap `y` entries before this one).
+
+		Note that this compression can be recursive. In order to
+		XOR this entry with a previous one, the previous entry needs
+		to be decompressed first, and so on (a reconstruction sketch
+		follows this list).
+
+ The hard-limit for this offset is 160 (an entry can only be
+ xor'ed against one of the 160 entries preceding it). This
+ number is always positive, and hence entries are always xor'ed
+ with **previous** bitmaps, not bitmaps that will come afterwards
+ in the index.
+
+ - 1-byte flags for this bitmap
+ At the moment the only available flag is `0x1`, which hints
+ that this bitmap can be re-used when rebuilding bitmap indexes
+ for the repository.
+
+ - The compressed bitmap itself, see Appendix A.
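+
+To make the XOR-offset rule concrete, reconstructing the bitmap for entry
+`x` can be sketched as follows, assuming every stored bitmap has already
+been EWAH-decompressed into `nwords` 64-bit words (the types and names
+below are illustrative, not the actual implementation):
+
+    #include <stdint.h>
+    #include <stddef.h>
+
+    /*
+     * raw[i] points to the decompressed words of the i-th stored bitmap,
+     * xor_offset[i] is the 1-byte XOR offset from its index entry.
+     */
+    static void reconstruct(uint64_t *out, uint64_t **raw,
+                            const uint8_t *xor_offset,
+                            size_t nwords, size_t x)
+    {
+        size_t i;
+
+        if (xor_offset[x]) {
+            /* entry x is stored XORed against entry x - y; rebuild
+             * that base first (this may recurse further) */
+            reconstruct(out, raw, xor_offset, nwords, x - xor_offset[x]);
+            for (i = 0; i < nwords; i++)
+                out[i] ^= raw[x][i];
+        } else {
+            for (i = 0; i < nwords; i++)
+                out[i] = raw[x][i];
+        }
+    }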
+
+== Appendix A: Serialization format for an EWAH bitmap
+
+Ewah bitmaps are serialized in the same format as the JAVAEWAH
+library, making them backwards compatible with the JGit
+implementation:
+
+ - 4-byte number of bits of the resulting UNCOMPRESSED bitmap
+
+ - 4-byte number of words of the COMPRESSED bitmap, when stored
+
+ - N x 8-byte words, as specified by the previous field
+
+ This is the actual content of the compressed bitmap.
+
+ - 4-byte position of the current RLW for the compressed
+ bitmap
+
+All words are stored in network byte order for their corresponding
+sizes.
+
+The compressed bitmap is stored in a form of run-length encoding, as
+follows. It consists of a concatenation of an arbitrary number of
+chunks. Each chunk consists of one or more 64-bit words:
+
+ H L_1 L_2 L_3 .... L_M
+
+H is called RLW (run length word). It consists of (from lower to higher
+order bits):
+
+ - 1 bit: the repeated bit B
+
+ - 32 bits: repetition count K (unsigned)
+
+ - 31 bits: literal word count M (unsigned)
+
+The bitstream represented by the above chunk is then:
+
+ - K repetitions of B
+
+ - The bits stored in `L_1` through `L_M`. Within a word, bits at
+ lower order come earlier in the stream than those at higher
+ order.
+
+The next word after `L_M` (if any) must again be an RLW, for the next
+chunk. For efficient appending to the bitstream, the EWAH stores a
+pointer to the last RLW in the stream.
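+
+A decoder for the RLW layout described above might unpack the three fields
+like this (a sketch; the names are illustrative):
+
+    #include <stdint.h>
+
+    /* from lower to higher order bits: 1-bit B, 32-bit K, 31-bit M */
+    struct rlw_fields {
+        unsigned repeated_bit;   /* B */
+        uint64_t run_length;     /* K */
+        uint64_t literal_words;  /* M */
+    };
+
+    static struct rlw_fields decode_rlw(uint64_t w)
+    {
+        struct rlw_fields f;
+        f.repeated_bit  = (unsigned)(w & 1);
+        f.run_length    = (w >> 1) & 0xffffffffULL;
+        f.literal_words = (w >> 33) & 0x7fffffffULL;
+        return f;
+    }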
+
+
+== Appendix B: Optional Bitmap Sections
+
+These sections may or may not be present in the `.bitmap` file; their
+presence is indicated by the header flags section described above.
+
+Name-hash cache
+---------------
+
+If the BITMAP_OPT_HASH_CACHE flag is set, the end of the bitmap contains
+a cache of 32-bit values, one per object in the pack. The value at
+position `i` is the hash of the pathname at which the `i`th object
+(counting in index order) in the pack can be found. This can be fed
+into the delta heuristics to compare objects with similar pathnames.
+
+The hash algorithm used is:
+
+ hash = 0;
+ while ((c = *name++))
+ if (!isspace(c))
+ hash = (hash >> 2) + (c << 24);
+
+Note that this hashing scheme is tied to the BITMAP_OPT_HASH_CACHE flag.
+If implementations want to choose a different hashing scheme, they are
+free to do so, but MUST allocate a new header flag (because comparing
+hashes made under two different schemes would be pointless).
References
----------
-link:http://www.ietf.org/rfc/rfc1738.txt[RFC 1738: Uniform Resource Locators (URL)]
-link:http://www.ietf.org/rfc/rfc2616.txt[RFC 2616: Hypertext Transfer Protocol -- HTTP/1.1]
+http://www.ietf.org/rfc/rfc1738.txt[RFC 1738: Uniform Resource Locators (URL)]
+http://www.ietf.org/rfc/rfc2616.txt[RFC 2616: Hypertext Transfer Protocol -- HTTP/1.1]
link:technical/pack-protocol.html
link:technical/protocol-capabilities.html
ancestor is found before we give up entirely.
Once the 'done' line is read from the client, the server will either
-send a final 'ACK obj-id' or it will send a 'NAK'. The server only sends
+send a final 'ACK obj-id' or it will send a 'NAK'. 'obj-id' is the object
+name of the last commit determined to be common. The server only sends
ACK after 'done' if there is at least one common base and multi_ack or
multi_ack_detailed is enabled. The server always sends NAK after 'done'
if there is no common base found.
Without multi_ack the client would have sent that c-b-a chain anyway,
interleaved with S-R-Q.
+multi_ack_detailed
+------------------
+This is an extension of multi_ack that permits the client to better
+understand the server's in-memory state. See pack-protocol.txt,
+section "Packfile Negotiation" for more information.
+
+no-done
+-------
+This capability should only be used with the smart HTTP protocol. If
+multi_ack_detailed and no-done are both present, then the sender is
+free to immediately send a pack following its first "ACK obj-id ready"
+message.
+
+Without no-done in the smart HTTP protocol, the server session ends
+and the client has to make another trip to send "done" before the
+server can send the pack. no-done removes this last round trip and
+thus slightly reduces latency.
+
thin-pack
---------
#!/bin/sh
GVF=GIT-VERSION-FILE
-DEF_VER=v1.9.0
+DEF_VER=v1.9.0.GIT
LF='
'
# FreeBSD can use either, but MinGW and some others need to use
# libcharset.h's locale_charset() instead.
#
-# Define CHARSET_LIB to you need to link with library other than -liconv to
+# Define CHARSET_LIB to the library you need to link with in order to
# use locale_charset() function. On some platforms this needs to set to
-# -lcharset
+# -lcharset, on others to -liconv .
#
# Define LIBC_CONTAINS_LIBINTL if your gettext implementation doesn't
# need -lintl when linking.
#
# Define NO_MKSTEMPS if you don't have mkstemps in the C library.
#
-# Define NO_FNMATCH if you don't have fnmatch in the C library.
-#
-# Define NO_FNMATCH_CASEFOLD if your fnmatch function doesn't have the
-# FNM_CASEFOLD GNU extension.
-#
-# Define NO_WILDMATCH if you do not want to use Git's wildmatch
-# implementation as fnmatch
-#
# Define NO_GECOS_IN_PWENT if you don't have pw_gecos in struct passwd
# in the C library.
#
# Define DEFAULT_HELP_FORMAT to "man", "info" or "html"
# (defaults to "man") if you want to have a different default when
# "git help" is called without a parameter specifying the format.
+#
+# Define TEST_GIT_INDEX_VERSION to 2, 3 or 4 to run the test suite
+# with a different indexfile format version. If it isn't set the index
+# file format used is index-v[23].
+#
+# Define GMTIME_UNRELIABLE_ERRORS if your gmtime() function does not
+# return NULL when it receives a bogus time_t.
GIT-VERSION-FILE: FORCE
@$(SHELL_PATH) ./GIT-VERSION-GEN
TEST_PROGRAMS_NEED_X += test-delta
TEST_PROGRAMS_NEED_X += test-dump-cache-tree
TEST_PROGRAMS_NEED_X += test-genrandom
+TEST_PROGRAMS_NEED_X += test-hashmap
TEST_PROGRAMS_NEED_X += test-index-version
TEST_PROGRAMS_NEED_X += test-line-buffer
TEST_PROGRAMS_NEED_X += test-match-trees
LIB_H += diffcore.h
LIB_H += dir.h
LIB_H += exec_cmd.h
+LIB_H += ewah/ewok.h
+LIB_H += ewah/ewok_rlw.h
LIB_H += fetch-pack.h
LIB_H += fmt-merge-msg.h
LIB_H += fsck.h
LIB_H += gpg-interface.h
LIB_H += graph.h
LIB_H += grep.h
-LIB_H += hash.h
+LIB_H += hashmap.h
LIB_H += help.h
LIB_H += http.h
LIB_H += kwset.h
LIB_H += notes-utils.h
LIB_H += notes.h
LIB_H += object.h
+LIB_H += pack-objects.h
LIB_H += pack-revindex.h
LIB_H += pack.h
+LIB_H += pack-bitmap.h
LIB_H += parse-options.h
LIB_H += patch-ids.h
LIB_H += pathspec.h
LIB_OBJS += editor.o
LIB_OBJS += entry.o
LIB_OBJS += environment.o
+LIB_OBJS += ewah/bitmap.o
+LIB_OBJS += ewah/ewah_bitmap.o
+LIB_OBJS += ewah/ewah_io.o
+LIB_OBJS += ewah/ewah_rlw.o
LIB_OBJS += exec_cmd.o
LIB_OBJS += fetch-pack.o
LIB_OBJS += fsck.o
LIB_OBJS += gpg-interface.o
LIB_OBJS += graph.o
LIB_OBJS += grep.o
-LIB_OBJS += hash.o
+LIB_OBJS += hashmap.o
LIB_OBJS += help.o
LIB_OBJS += hex.o
LIB_OBJS += ident.o
LIB_OBJS += notes-merge.o
LIB_OBJS += notes-utils.o
LIB_OBJS += object.o
+LIB_OBJS += pack-bitmap.o
+LIB_OBJS += pack-bitmap-write.o
LIB_OBJS += pack-check.o
+LIB_OBJS += pack-objects.o
LIB_OBJS += pack-revindex.o
LIB_OBJS += pack-write.o
LIB_OBJS += pager.o
LIB_OBJS += utf8.o
LIB_OBJS += varint.o
LIB_OBJS += version.o
+LIB_OBJS += versioncmp.o
LIB_OBJS += walker.o
LIB_OBJS += wildmatch.o
LIB_OBJS += wrapper.o
ifdef NO_STRTOULL
COMPAT_CFLAGS += -DNO_STRTOULL
endif
-ifdef NO_FNMATCH
- COMPAT_CFLAGS += -Icompat/fnmatch
- COMPAT_CFLAGS += -DNO_FNMATCH
- COMPAT_OBJS += compat/fnmatch/fnmatch.o
-else
-ifdef NO_FNMATCH_CASEFOLD
- COMPAT_CFLAGS += -Icompat/fnmatch
- COMPAT_CFLAGS += -DNO_FNMATCH_CASEFOLD
- COMPAT_OBJS += compat/fnmatch/fnmatch.o
-endif
-endif
-ifndef NO_WILDMATCH
- COMPAT_CFLAGS += -DUSE_WILDMATCH
-endif
ifdef NO_SETENV
COMPAT_CFLAGS += -DNO_SETENV
COMPAT_OBJS += compat/setenv.o
BASIC_CFLAGS += -DXDL_FAST_HASH
endif
+ifdef GMTIME_UNRELIABLE_ERRORS
+ COMPAT_OBJS += compat/gmtime.o
+ BASIC_CFLAGS += -DGMTIME_UNRELIABLE_ERRORS
+endif
+
ifeq ($(TCLTK_PATH),)
NO_TCLTK = NoThanks
endif
ifdef GIT_PERF_MAKE_OPTS
@echo GIT_PERF_MAKE_OPTS=\''$(subst ','\'',$(subst ','\'',$(GIT_PERF_MAKE_OPTS)))'\' >>$@
endif
+ifdef TEST_GIT_INDEX_VERSION
+ @echo TEST_GIT_INDEX_VERSION=\''$(subst ','\'',$(subst ','\'',$(TEST_GIT_INDEX_VERSION)))'\' >>$@
+endif
### Detect Python interpreter path changes
ifndef NO_PYTHON
$(RM) $(addsuffix *.gcno,$(addprefix $(PROFILE_DIR)/, $(object_dirs)))
clean: profile-clean coverage-clean
- $(RM) *.o *.res block-sha1/*.o ppc/*.o compat/*.o compat/*/*.o xdiff/*.o vcs-svn/*.o \
- builtin/*.o $(LIB_FILE) $(XDIFF_LIB) $(VCSSVN_LIB)
+ $(RM) *.o *.res block-sha1/*.o ppc/*.o compat/*.o compat/*/*.o
+ $(RM) xdiff/*.o vcs-svn/*.o ewah/*.o builtin/*.o
+ $(RM) $(LIB_FILE) $(XDIFF_LIB) $(VCSSVN_LIB)
$(RM) $(ALL_PROGRAMS) $(SCRIPT_LIB) $(BUILT_INS) git$X
$(RM) $(TEST_PROGRAMS) $(NO_INSTALL)
$(RM) -r bin-wrappers $(dep_dirs)
-Documentation/RelNotes/1.9.0.txt
\ No newline at end of file
+Documentation/RelNotes/2.0.0.txt
\ No newline at end of file
int advice_push_update_rejected = 1;
int advice_push_non_ff_current = 1;
-int advice_push_non_ff_default = 1;
int advice_push_non_ff_matching = 1;
int advice_push_already_exists = 1;
int advice_push_fetch_first = 1;
} advice_config[] = {
{ "pushupdaterejected", &advice_push_update_rejected },
{ "pushnonffcurrent", &advice_push_non_ff_current },
- { "pushnonffdefault", &advice_push_non_ff_default },
{ "pushnonffmatching", &advice_push_non_ff_matching },
{ "pushalreadyexists", &advice_push_already_exists },
{ "pushfetchfirst", &advice_push_fetch_first },
extern int advice_push_update_rejected;
extern int advice_push_non_ff_current;
-extern int advice_push_non_ff_default;
extern int advice_push_non_ff_matching;
extern int advice_push_already_exists;
extern int advice_push_fetch_first;
static const struct archiver **archivers;
static int nr_archivers;
static int alloc_archivers;
+static int remote_allow_unreachable;
void register_archiver(struct archiver *ar)
{
unsigned char sha1[20];
/* Remotes are only allowed to fetch actual refs */
- if (remote) {
+ if (remote && !remote_allow_unreachable) {
char *ref = NULL;
- const char *colon = strchr(name, ':');
- int refnamelen = colon ? colon - name : strlen(name);
+ const char *colon = strchrnul(name, ':');
+ int refnamelen = colon - name;
if (!dwim_ref(name, refnamelen, sha1, &ref))
die("no such ref: %.*s", refnamelen, name);
return argc;
}
+static int git_default_archive_config(const char *var, const char *value,
+ void *cb)
+{
+ if (!strcmp(var, "uploadarchive.allowunreachable"))
+ remote_allow_unreachable = git_config_bool(var, value);
+ return git_default_config(var, value, cb);
+}
+
int write_archive(int argc, const char **argv, const char *prefix,
int setup_prefix, const char *name_hint, int remote)
{
if (setup_prefix && prefix == NULL)
prefix = setup_git_directory_gently(&nongit);
- git_config(git_default_config, NULL);
+ git_config(git_default_archive_config, NULL);
init_tar_archiver();
init_zip_archiver();
a = parse_attr_line(line, src, lineno, macro_ok);
if (!a)
return;
- if (res->alloc <= res->num_matches) {
- res->alloc = alloc_nr(res->num_matches);
- res->attrs = xrealloc(res->attrs,
- sizeof(struct match_attr *) *
- res->alloc);
- }
+ ALLOC_GROW(res->attrs, res->num_matches + 1, res->alloc);
res->attrs[res->num_matches++] = a;
}
static const char *argv_show_branch[] = {"show-branch", NULL, NULL};
static const char *argv_update_ref[] = {"update-ref", "--no-deref", "BISECT_HEAD", NULL, NULL};
-/* bits #0-15 in revision.h */
-
+/* Remember to update object flag allocation in object.h */
#define COUNTED (1u<<16)
/*
static int bisect_checkout(char *bisect_rev_hex, int no_checkout)
{
- int res;
mark_expected_rev(bisect_rev_hex);
die("update-ref --no-deref HEAD failed on %s",
bisect_rev_hex);
} else {
+ int res;
res = run_command_v_opt(argv_checkout, RUN_GIT_CMD);
if (res)
exit(res);
#define setW(x, val) (W(x) = (val))
#endif
-/*
- * Performance might be improved if the CPU architecture is OK with
- * unaligned 32-bit loads and a fast ntohl() is available.
- * Otherwise fall back to byte loads and shifts which is portable,
- * and is faster on architectures with memory alignment issues.
- */
-
-#if defined(__i386__) || defined(__x86_64__) || \
- defined(_M_IX86) || defined(_M_X64) || \
- defined(__ppc__) || defined(__ppc64__) || \
- defined(__powerpc__) || defined(__powerpc64__) || \
- defined(__s390__) || defined(__s390x__)
-
-#define get_be32(p) ntohl(*(unsigned int *)(p))
-#define put_be32(p, v) do { *(unsigned int *)(p) = htonl(v); } while (0)
-
-#else
-
-#define get_be32(p) ( \
- (*((unsigned char *)(p) + 0) << 24) | \
- (*((unsigned char *)(p) + 1) << 16) | \
- (*((unsigned char *)(p) + 2) << 8) | \
- (*((unsigned char *)(p) + 3) << 0) )
-#define put_be32(p, v) do { \
- unsigned int __v = (v); \
- *((unsigned char *)(p) + 0) = __v >> 24; \
- *((unsigned char *)(p) + 1) = __v >> 16; \
- *((unsigned char *)(p) + 2) = __v >> 8; \
- *((unsigned char *)(p) + 3) = __v >> 0; } while (0)
-
-#endif
-
/* This "rolls" over the 512-bit array */
#define W(x) (array[(x)&15])
+#include "git-compat-util.h"
#include "cache.h"
#include "branch.h"
#include "refs.h"
void install_branch_config(int flag, const char *local, const char *origin, const char *remote)
{
- const char *shortname = remote + 11;
- int remote_is_branch = starts_with(remote, "refs/heads/");
+ const char *shortname = skip_prefix(remote, "refs/heads/");
struct strbuf key = STRBUF_INIT;
int rebasing = should_setup_rebase(origin);
- if (remote_is_branch
+ if (shortname
&& !strcmp(local, shortname)
&& !origin) {
warning(_("Not setting branch %s as its own upstream."),
strbuf_release(&key);
if (flag & BRANCH_CONFIG_VERBOSE) {
- if (remote_is_branch && origin)
- printf_ln(rebasing ?
- _("Branch %s set up to track remote branch %s from %s by rebasing.") :
- _("Branch %s set up to track remote branch %s from %s."),
- local, shortname, origin);
- else if (remote_is_branch && !origin)
- printf_ln(rebasing ?
- _("Branch %s set up to track local branch %s by rebasing.") :
- _("Branch %s set up to track local branch %s."),
- local, shortname);
- else if (!remote_is_branch && origin)
- printf_ln(rebasing ?
- _("Branch %s set up to track remote ref %s by rebasing.") :
- _("Branch %s set up to track remote ref %s."),
- local, remote);
- else if (!remote_is_branch && !origin)
- printf_ln(rebasing ?
- _("Branch %s set up to track local ref %s by rebasing.") :
- _("Branch %s set up to track local ref %s."),
- local, remote);
- else
- die("BUG: impossible combination of %d and %p",
- remote_is_branch, origin);
+ if (shortname) {
+ if (origin)
+ printf_ln(rebasing ?
+ _("Branch %s set up to track remote branch %s from %s by rebasing.") :
+ _("Branch %s set up to track remote branch %s from %s."),
+ local, shortname, origin);
+ else
+ printf_ln(rebasing ?
+ _("Branch %s set up to track local branch %s by rebasing.") :
+ _("Branch %s set up to track local branch %s."),
+ local, shortname);
+ } else {
+ if (origin)
+ printf_ln(rebasing ?
+ _("Branch %s set up to track remote ref %s by rebasing.") :
+ _("Branch %s set up to track remote ref %s."),
+ local, remote);
+ else
+ printf_ln(rebasing ?
+ _("Branch %s set up to track local ref %s by rebasing.") :
+ _("Branch %s set up to track local ref %s."),
+ local, remote);
+ }
}
}
struct tracking tracking;
int config_flags = quiet ? 0 : BRANCH_CONFIG_VERBOSE;
- if (strlen(new_ref) > 1024 - 7 - 7 - 1)
- return error(_("Tracking not set up: name too long: %s"),
- new_ref);
-
memset(&tracking, 0, sizeof(tracking));
tracking.spec.dst = (char *)orig_ref;
if (for_each_remote(find_tracked_branch, &tracking))
#include "diffcore.h"
#include "revision.h"
#include "bulk-checkin.h"
+#include "argv-array.h"
static const char * const builtin_add_usage[] = {
N_("git add [options] [--] <pathspec>..."),
struct update_callback_data {
int flags;
int add_errors;
- const char *implicit_dot;
- size_t implicit_dot_len;
-
- /* only needed for 2.0 transition preparation */
- int warn_add_would_remove;
};
-static const char *option_with_implicit_dot;
-static const char *short_option_with_implicit_dot;
-
-static void warn_pathless_add(void)
-{
- static int shown;
- assert(option_with_implicit_dot && short_option_with_implicit_dot);
-
- if (shown)
- return;
- shown = 1;
-
- /*
- * To be consistent with "git add -p" and most Git
- * commands, we should default to being tree-wide, but
- * this is not the original behavior and can't be
- * changed until users trained themselves not to type
- * "git add -u" or "git add -A". For now, we warn and
- * keep the old behavior. Later, the behavior can be changed
- * to tree-wide, keeping the warning for a while, and
- * eventually we can drop the warning.
- */
- warning(_("The behavior of 'git add %s (or %s)' with no path argument from a\n"
- "subdirectory of the tree will change in Git 2.0 and should not be used anymore.\n"
- "To add content for the whole tree, run:\n"
- "\n"
- " git add %s :/\n"
- " (or git add %s :/)\n"
- "\n"
- "To restrict the command to the current directory, run:\n"
- "\n"
- " git add %s .\n"
- " (or git add %s .)\n"
- "\n"
- "With the current Git version, the command is restricted to "
- "the current directory.\n"
- ""),
- option_with_implicit_dot, short_option_with_implicit_dot,
- option_with_implicit_dot, short_option_with_implicit_dot,
- option_with_implicit_dot, short_option_with_implicit_dot);
-}
-
static int fix_unmerged_status(struct diff_filepair *p,
struct update_callback_data *data)
{
return DIFF_STATUS_MODIFIED;
}
-static const char *add_would_remove_warning = N_(
- "You ran 'git add' with neither '-A (--all)' or '--ignore-removal',\n"
-"whose behaviour will change in Git 2.0 with respect to paths you removed.\n"
-"Paths like '%s' that are\n"
-"removed from your working tree are ignored with this version of Git.\n"
-"\n"
-"* 'git add --ignore-removal <pathspec>', which is the current default,\n"
-" ignores paths you removed from your working tree.\n"
-"\n"
-"* 'git add --all <pathspec>' will let you also record the removals.\n"
-"\n"
-"Run 'git status' to check the paths you removed from your working tree.\n");
-
-static void warn_add_would_remove(const char *path)
-{
- warning(_(add_would_remove_warning), path);
-}
-
static void update_callback(struct diff_queue_struct *q,
struct diff_options *opt, void *cbdata)
{
int i;
struct update_callback_data *data = cbdata;
- const char *implicit_dot = data->implicit_dot;
- size_t implicit_dot_len = data->implicit_dot_len;
for (i = 0; i < q->nr; i++) {
struct diff_filepair *p = q->queue[i];
const char *path = p->one->path;
- /*
- * Check if "git add -A" or "git add -u" was run from a
- * subdirectory with a modified file outside that directory,
- * and warn if so.
- *
- * "git add -u" will behave like "git add -u :/" instead of
- * "git add -u ." in the future. This warning prepares for
- * that change.
- */
- if (implicit_dot &&
- strncmp_icase(path, implicit_dot, implicit_dot_len)) {
- warn_pathless_add();
- continue;
- }
switch (fix_unmerged_status(p, data)) {
default:
die(_("unexpected diff status %c"), p->status);
}
break;
case DIFF_STATUS_DELETED:
- if (data->warn_add_would_remove) {
- warn_add_would_remove(path);
- data->warn_add_would_remove = 0;
- }
if (data->flags & ADD_CACHE_IGNORE_REMOVAL)
break;
if (!(data->flags & ADD_CACHE_PRETEND))
}
}
-static void update_files_in_cache(const char *prefix,
- const struct pathspec *pathspec,
- struct update_callback_data *data)
+int add_files_to_cache(const char *prefix,
+ const struct pathspec *pathspec, int flags)
{
+ struct update_callback_data data;
struct rev_info rev;
+ memset(&data, 0, sizeof(data));
+ data.flags = flags;
+
init_revisions(&rev, prefix);
setup_revisions(0, NULL, &rev, NULL);
if (pathspec)
copy_pathspec(&rev.prune_data, pathspec);
rev.diffopt.output_format = DIFF_FORMAT_CALLBACK;
rev.diffopt.format_callback = update_callback;
- rev.diffopt.format_callback_data = data;
+ rev.diffopt.format_callback_data = &data;
rev.max_count = 0; /* do not compare unmerged paths with stage #2 */
run_diff_files(&rev, DIFF_RACY_IS_MODIFIED);
-}
-
-int add_files_to_cache(const char *prefix,
- const struct pathspec *pathspec, int flags)
-{
- struct update_callback_data data;
-
- memset(&data, 0, sizeof(data));
- data.flags = flags;
- update_files_in_cache(prefix, pathspec, &data);
return !!data.add_errors;
}
-#define WARN_IMPLICIT_DOT (1u << 0)
-static char *prune_directory(struct dir_struct *dir, struct pathspec *pathspec,
- int prefix, unsigned flag)
+static char *prune_directory(struct dir_struct *dir, struct pathspec *pathspec, int prefix)
{
char *seen;
int i;
i = dir->nr;
while (--i >= 0) {
struct dir_entry *entry = *src++;
- if (match_pathspec_depth(pathspec, entry->name, entry->len,
- prefix, seen))
+ if (dir_path_match(entry, pathspec, prefix, seen))
*dst++ = entry;
- else if (flag & WARN_IMPLICIT_DOT)
- /*
- * "git add -A" was run from a subdirectory with a
- * new file outside that directory.
- *
- * "git add -A" will behave like "git add -A :/"
- * instead of "git add -A ." in the future.
- * Warn about the coming behavior change.
- */
- warn_pathless_add();
}
dir->nr = dst - dir->entries;
add_pathspec_matches_against_index(pathspec, seen);
int run_add_interactive(const char *revision, const char *patch_mode,
const struct pathspec *pathspec)
{
- int status, ac, i;
- const char **args;
+ int status, i;
+ struct argv_array argv = ARGV_ARRAY_INIT;
- args = xcalloc(sizeof(const char *), (pathspec->nr + 6));
- ac = 0;
- args[ac++] = "add--interactive";
+ argv_array_push(&argv, "add--interactive");
if (patch_mode)
- args[ac++] = patch_mode;
+ argv_array_push(&argv, patch_mode);
if (revision)
- args[ac++] = revision;
- args[ac++] = "--";
+ argv_array_push(&argv, revision);
+ argv_array_push(&argv, "--");
for (i = 0; i < pathspec->nr; i++)
/* pass original pathspec, to be re-parsed */
- args[ac++] = pathspec->items[i].original;
+ argv_array_push(&argv, pathspec->items[i].original);
- status = run_command_v_opt(args, RUN_GIT_CMD);
- free(args);
+ status = run_command_v_opt(argv.argv, RUN_GIT_CMD);
+ argv_array_clear(&argv);
return status;
}
static int verbose, show_only, ignored_too, refresh_only;
static int ignore_add_errors, intent_to_add, ignore_missing;
-#define ADDREMOVE_DEFAULT 0 /* Change to 1 in Git 2.0 */
+#define ADDREMOVE_DEFAULT 1
static int addremove = ADDREMOVE_DEFAULT;
static int addremove_explicit = -1; /* unspecified */
int add_new_files;
int require_pathspec;
char *seen = NULL;
- int implicit_dot = 0;
- struct update_callback_data update_data;
git_config(add_config, NULL);
if (addremove && take_worktree_changes)
die(_("-A and -u are mutually incompatible"));
- /*
- * Warn when "git add pathspec..." was given without "-u" or "-A"
- * and pathspec... covers a removed path.
- */
- memset(&update_data, 0, sizeof(update_data));
- if (!take_worktree_changes && addremove_explicit < 0)
- update_data.warn_add_would_remove = 1;
-
if (!take_worktree_changes && addremove_explicit < 0 && argc)
- /*
- * Turn "git add pathspec..." to "git add -A pathspec..."
- * in Git 2.0 but not yet
- */
- ; /* addremove = 1; */
+ /* Turn "git add pathspec..." to "git add -A pathspec..." */
+ addremove = 1;
if (!show_only && ignore_missing)
die(_("Option --ignore-missing can only be used together with --dry-run"));
- if (addremove) {
- option_with_implicit_dot = "--all";
- short_option_with_implicit_dot = "-A";
- }
- if (take_worktree_changes) {
- option_with_implicit_dot = "--update";
- short_option_with_implicit_dot = "-u";
- }
- if (option_with_implicit_dot && !argc) {
- static const char *here[2] = { ".", NULL };
+
+ if ((0 < addremove_explicit || take_worktree_changes) && !argc) {
+ static const char *whole[2] = { ":/", NULL };
argc = 1;
- argv = here;
- implicit_dot = 1;
+ argv = whole;
}
add_new_files = !take_worktree_changes && !refresh_only;
(intent_to_add ? ADD_CACHE_INTENT : 0) |
(ignore_add_errors ? ADD_CACHE_IGNORE_ERRORS : 0) |
(!(addremove || take_worktree_changes)
- ? ADD_CACHE_IGNORE_REMOVAL : 0)) |
- (implicit_dot ? ADD_CACHE_IMPLICIT_DOT : 0);
+ ? ADD_CACHE_IGNORE_REMOVAL : 0));
if (require_pathspec && argc == 0) {
fprintf(stderr, _("Nothing specified, nothing added.\n"));
memset(&empty_pathspec, 0, sizeof(empty_pathspec));
/* This picks up the paths that are not tracked */
- baselen = fill_directory(&dir, implicit_dot ? &empty_pathspec : &pathspec);
+ baselen = fill_directory(&dir, &pathspec);
if (pathspec.nr)
- seen = prune_directory(&dir, &pathspec, baselen,
- implicit_dot ? WARN_IMPLICIT_DOT : 0);
+ seen = prune_directory(&dir, &pathspec, baselen);
}
if (refresh_only) {
refresh(verbose, &pathspec);
goto finish;
}
- if (implicit_dot && prefix)
- refresh_cache(REFRESH_QUIET);
if (pathspec.nr) {
int i;
plug_bulk_checkin();
- if ((flags & ADD_CACHE_IMPLICIT_DOT) && prefix) {
- /*
- * Check for modified files throughout the worktree so
- * update_callback has a chance to warn about changes
- * outside the cwd.
- */
- update_data.implicit_dot = prefix;
- update_data.implicit_dot_len = strlen(prefix);
- free_pathspec(&pathspec);
- memset(&pathspec, 0, sizeof(pathspec));
- }
- update_data.flags = flags & ~ADD_CACHE_IMPLICIT_DOT;
- update_files_in_cache(prefix, &pathspec, &update_data);
+ exit_status |= add_files_to_cache(prefix, &pathspec, flags);
- exit_status |= !!update_data.add_errors;
if (add_new_files)
exit_status |= add_files(&dir, flags);
size - offset - hdrsize, patch);
if (!patchsize) {
- static const char *binhdr[] = {
- "Binary files ",
- "Files ",
- NULL,
- };
static const char git_binary[] = "GIT binary patch\n";
- int i;
int hd = hdrsize + offset;
unsigned long llen = linelen(buffer + hd, size - hd);
patchsize = 0;
}
else if (!memcmp(" differ\n", buffer + hd + llen - 8, 8)) {
+ static const char *binhdr[] = {
+ "Binary files ",
+ "Files ",
+ NULL,
+ };
+ int i;
for (i = 0; binhdr[i]; i++) {
int len = strlen(binhdr[i]);
if (len < size - hd &&
/* See if it matches any of exclude/include rule */
for (i = 0; i < limit_by_name.nr; i++) {
struct string_list_item *it = &limit_by_name.items[i];
- if (!fnmatch(it->string, pathname, 0))
+ if (!wildmatch(it->string, pathname, 0, NULL))
return (it->util != NULL);
}
#define BLAME_DEFAULT_MOVE_SCORE 20
#define BLAME_DEFAULT_COPY_SCORE 40
-/* bits #0..7 in revision.h, #8..11 used for merge_bases() in commit.c */
+/* Remember to update object flag allocation in object.h */
#define METAINFO_SHOWN (1u<<12)
#define MORE_THAN_ONE_PATH (1u<<13)
* scoreboard structure, sorted by the target line number.
*/
struct blame_entry {
- struct blame_entry *prev;
struct blame_entry *next;
/* the first line of this group in the final image;
int *lineno;
};
-static inline int same_suspect(struct origin *a, struct origin *b)
-{
- if (a == b)
- return 1;
- if (a->commit != b->commit)
- return 0;
- return !strcmp(a->path, b->path);
-}
-
static void sanity_check_refcnt(struct scoreboard *);
/*
struct blame_entry *ent, *next;
for (ent = sb->ent; ent && (next = ent->next); ent = next) {
- if (same_suspect(ent->suspect, next->suspect) &&
+ if (ent->suspect == next->suspect &&
ent->guilty == next->guilty &&
ent->s_lno + ent->num_lines == next->s_lno) {
ent->num_lines += next->num_lines;
ent->next = next->next;
- if (ent->next)
- ent->next->prev = ent;
origin_decref(next->suspect);
free(next);
ent->score = 0;
prev = ent;
/* prev, if not NULL, is the last one that is below e */
- e->prev = prev;
+
if (prev) {
e->next = prev->next;
prev->next = e;
e->next = sb->ent;
sb->ent = e;
}
- if (e->next)
- e->next->prev = e;
}
/*
*/
static void dup_entry(struct blame_entry *dst, struct blame_entry *src)
{
- struct blame_entry *p, *n;
+ struct blame_entry *n;
- p = dst->prev;
n = dst->next;
origin_incref(src->suspect);
origin_decref(dst->suspect);
memcpy(dst, src, sizeof(*src));
- dst->prev = p;
dst->next = n;
dst->score = 0;
}
int last_in_target = -1;
for (e = sb->ent; e; e = e->next) {
- if (e->guilty || !same_suspect(e->suspect, target))
+ if (e->guilty || e->suspect != target)
continue;
if (last_in_target < e->s_lno + e->num_lines)
last_in_target = e->s_lno + e->num_lines;
struct blame_entry *e;
for (e = sb->ent; e; e = e->next) {
- if (e->guilty || !same_suspect(e->suspect, target))
+ if (e->guilty || e->suspect != target)
continue;
if (same <= e->s_lno)
continue;
mmfile_t *file_p)
{
const char *cp;
- int cnt;
mmfile_t file_o;
struct handle_split_cb_data d;
*/
cp = nth_line(sb, ent->lno);
file_o.ptr = (char *) cp;
- cnt = ent->num_lines;
-
- while (cnt && cp < sb->final_buf + sb->final_buf_size) {
- if (*cp++ == '\n')
- cnt--;
- }
- file_o.size = cp - file_o.ptr;
+ file_o.size = nth_line(sb, ent->lno + ent->num_lines) - cp;
/*
* file_o is a part of final image we are annotating.
while (made_progress) {
made_progress = 0;
for (e = sb->ent; e; e = e->next) {
- if (e->guilty || !same_suspect(e->suspect, target) ||
+ if (e->guilty || e->suspect != target ||
ent_score(sb, e) < blame_move_score)
continue;
find_copy_in_blob(sb, e, parent, split, &file_p);
for (e = sb->ent, num_ents = 0; e; e = e->next)
if (!e->scanned && !e->guilty &&
- same_suspect(e->suspect, target) &&
+ e->suspect == target &&
min_score < ent_score(sb, e))
num_ents++;
if (num_ents) {
blame_list = xcalloc(num_ents, sizeof(struct blame_list));
for (e = sb->ent, i = 0; e; e = e->next)
if (!e->scanned && !e->guilty &&
- same_suspect(e->suspect, target) &&
+ e->suspect == target &&
min_score < ent_score(sb, e))
blame_list[i++].ent = e;
}
origin->file.ptr = NULL;
}
for (e = sb->ent; e; e = e->next) {
- if (!same_suspect(e->suspect, origin))
+ if (e->suspect != origin)
continue;
origin_incref(porigin);
origin_decref(e->suspect);
/* Take responsibility for the remaining entries */
for (ent = sb->ent; ent; ent = ent->next)
- if (same_suspect(ent->suspect, suspect))
+ if (ent->suspect == suspect)
found_guilty_entry(ent);
origin_decref(suspect);
int show_raw_time)
{
static char time_buf[128];
- const char *time_str;
- int time_len;
- int tz;
if (show_raw_time) {
snprintf(time_buf, sizeof(time_buf), "%lu %s", time, tz_str);
}
else {
+ const char *time_str;
+ int time_len;
+ int tz;
tz = atoi(tz_str);
time_str = show_date(time, tz, blame_date_mode);
time_len = strlen(time_str);
{
const char *buf = sb->final_buf;
unsigned long len = sb->final_buf_size;
- int num = 0, incomplete = 0, bol = 1;
+ const char *end = buf + len;
+ const char *p;
+ int *lineno;
+ int num = 0, incomplete = 0;
- if (len && buf[len-1] != '\n')
- incomplete++; /* incomplete line at the end */
- while (len--) {
- if (bol) {
- sb->lineno = xrealloc(sb->lineno,
- sizeof(int *) * (num + 1));
- sb->lineno[num] = buf - sb->final_buf;
- bol = 0;
- }
- if (*buf++ == '\n') {
+ for (p = buf;;) {
+ p = memchr(p, '\n', end - p);
+ if (p) {
+ p++;
num++;
- bol = 1;
+ continue;
}
+ break;
}
- sb->lineno = xrealloc(sb->lineno,
- sizeof(int *) * (num + incomplete + 1));
- sb->lineno[num + incomplete] = buf - sb->final_buf;
+
+ if (len && end[-1] != '\n')
+ incomplete++; /* incomplete line at the end */
+
+ sb->lineno = xmalloc(sizeof(*sb->lineno) * (num + incomplete + 1));
+ lineno = sb->lineno;
+
+ *lineno++ = 0;
+ for (p = buf;;) {
+ p = memchr(p, '\n', end - p);
+ if (p) {
+ p++;
+ *lineno++ = p - buf;
+ continue;
+ }
+ break;
+ }
+
+ if (incomplete)
+ *lineno++ = len;
+
sb->num_lines = num + incomplete;
return sb->num_lines;
}
ent->suspect = o;
ent->s_lno = bottom;
ent->next = next;
- if (next)
- next->prev = ent;
origin_incref(o);
}
origin_decref(o);
if (!*pattern)
return 1; /* no pattern always matches */
while (*pattern) {
- if (!fnmatch(*pattern, refname, 0))
+ if (!wildmatch(*pattern, refname, 0, NULL))
return 1;
pattern++;
}
{
struct strbuf buf = STRBUF_INIT;
struct expand_data data;
+ int save_warning;
+ int retval = 0;
if (!opt->format)
opt->format = "%(objectname) %(objecttype) %(objectsize)";
* warn) ends up dwarfing the actual cost of the object lookups
* themselves. We can work around it by just turning off the warning.
*/
+ save_warning = warn_on_object_refname_ambiguity;
warn_on_object_refname_ambiguity = 0;
while (strbuf_getline(&buf, stdin, '\n') != EOF) {
- int error;
-
if (data.split_on_whitespace) {
/*
* Split at first whitespace, tying off the beginning
data.rest = p;
}
- error = batch_one_object(buf.buf, opt, &data);
- if (error)
- return error;
+ retval = batch_one_object(buf.buf, opt, &data);
+ if (retval)
+ break;
}
- return 0;
+ strbuf_release(&buf);
+ warn_on_object_refname_ambiguity = save_warning;
+ return retval;
}
static const char * const cat_file_usage[] = {
struct git_attr_check *check;
int cnt, i, doubledash, filei;
+ if (!is_bare_repository())
+ setup_work_tree();
+
git_config(git_default_config, NULL);
argc = parse_options(argc, argv, prefix, check_attr_options,
static int post_checkout_hook(struct commit *old, struct commit *new,
int changed)
{
- return run_hook(NULL, "post-checkout",
- sha1_to_hex(old ? old->object.sha1 : null_sha1),
- sha1_to_hex(new ? new->object.sha1 : null_sha1),
- changed ? "1" : "0", NULL);
+ return run_hook_le(NULL, "post-checkout",
+ sha1_to_hex(old ? old->object.sha1 : null_sha1),
+ sha1_to_hex(new ? new->object.sha1 : null_sha1),
+ changed ? "1" : "0", NULL);
/* "new" can be NULL when checking out from the index before
a commit exists. */
* match_pathspec() for _all_ entries when
* opts->source_tree != NULL.
*/
- if (match_pathspec_depth(&opts->pathspec, ce->name, ce_namelen(ce),
- 0, ps_matched))
+ if (ce_path_match(ce, &opts->pathspec, ps_matched))
ce->ce_flags |= CE_MATCHED;
}
OPT_BOOL(0, "detach", &opts.force_detach, N_("detach the HEAD at named commit")),
OPT_SET_INT('t', "track", &opts.track, N_("set upstream info for new branch"),
BRANCH_TRACK_EXPLICIT),
- OPT_STRING(0, "orphan", &opts.new_orphan_branch, N_("new branch"), N_("new unparented branch")),
+ OPT_STRING(0, "orphan", &opts.new_orphan_branch, N_("new-branch"), N_("new unparented branch")),
OPT_SET_INT('2', "ours", &opts.writeout_stage, N_("checkout our version for unmerged files"),
2),
OPT_SET_INT('3', "theirs", &opts.writeout_stage, N_("checkout their version for unmerged files"),
DIR *dir;
struct strbuf quoted = STRBUF_INIT;
struct dirent *e;
- int res = 0, ret = 0, gone = 1, original_len = path->len, len, i;
+ int res = 0, ret = 0, gone = 1, original_len = path->len, len;
unsigned char submodule_head[20];
struct string_list dels = STRING_LIST_INIT_DUP;
}
if (!*dir_gone && !quiet) {
+ int i;
for (i = 0; i < dels.nr; i++)
printf(dry_run ? _(msg_would_remove) : _(msg_remove), dels.items[i].string);
}
for (i = 0; i < dir.nr; i++) {
struct dir_entry *ent = dir.entries[i];
- int len, pos;
int matches = 0;
- const struct cache_entry *ce;
struct stat st;
const char *rel;
- /*
- * Remove the '/' at the end that directory
- * walking adds for directory entries.
- */
- len = ent->len;
- if (len && ent->name[len-1] == '/')
- len--;
- pos = cache_name_pos(ent->name, len);
- if (0 <= pos)
- continue; /* exact match */
- pos = -pos - 1;
- if (pos < active_nr) {
- ce = active_cache[pos];
- if (ce_namelen(ce) == len &&
- !memcmp(ce->name, ent->name, len))
- continue; /* Yup, this one exists unmerged */
- }
+ if (!cache_name_is_other(ent->name, ent->len))
+ continue;
if (lstat(ent->name, &st))
die_errno("Cannot lstat '%s'", ent->name);
if (pathspec.nr)
- matches = match_pathspec_depth(&pathspec, ent->name,
- len, 0, NULL);
+ matches = dir_path_match(ent, &pathspec, 0, NULL);
- if (S_ISDIR(st.st_mode)) {
- if (remove_directories || (matches == MATCHED_EXACTLY)) {
- rel = relative_path(ent->name, prefix, &buf);
- string_list_append(&del_list, rel);
- }
- } else {
- if (pathspec.nr && !matches)
- continue;
- rel = relative_path(ent->name, prefix, &buf);
- string_list_append(&del_list, rel);
- }
+ if (pathspec.nr && !matches)
+ continue;
+
+ if (S_ISDIR(st.st_mode) && !remove_directories &&
+ matches != MATCHED_EXACTLY)
+ continue;
+
+ rel = relative_path(ent->name, prefix, &buf);
+ string_list_append(&del_list, rel);
}
if (interactive && del_list.nr > 0)
commit_locked_index(lock_file))
die(_("unable to write new index file"));
- err |= run_hook(NULL, "post-checkout", sha1_to_hex(null_sha1),
- sha1_to_hex(sha1), "1", NULL);
+ err |= run_hook_le(NULL, "post-checkout", sha1_to_hex(null_sha1),
+ sha1_to_hex(sha1), "1", NULL);
if (!err && option_recursive)
err = run_command_v_opt(argv_submodule, RUN_GIT_CMD);
static const char commit_tree_usage[] = "git commit-tree [(-p <sha1>)...] [-S[<keyid>]] [-m <message>] [-F <file>] <sha1> <changelog";
+static const char *sign_commit;
+
static void new_parent(struct commit *parent, struct commit_list **parents_p)
{
unsigned char *sha1 = parent->object.sha1;
int status = git_gpg_config(var, value, NULL);
if (status)
return status;
+ if (!strcmp(var, "commit.gpgsign")) {
+ sign_commit = git_config_bool(var, value) ? "" : NULL;
+ return 0;
+ }
return git_default_config(var, value, cb);
}
unsigned char tree_sha1[20];
unsigned char commit_sha1[20];
struct strbuf buffer = STRBUF_INIT;
- const char *sign_commit = NULL;
git_config(commit_tree_config, NULL);
continue;
}
+ if (!strcmp(arg, "--no-gpg-sign")) {
+ sign_commit = NULL;
+ continue;
+ }
+
if (!strcmp(arg, "-m")) {
if (argc <= ++i)
usage(commit_tree_usage);
static enum {
CLEANUP_SPACE,
CLEANUP_NONE,
+ CLEANUP_SCISSORS,
CLEANUP_ALL
} cleanup_mode;
static const char *cleanup_arg;
if (ce->ce_flags & CE_UPDATE)
continue;
- if (!match_pathspec_depth(pattern, ce->name, ce_namelen(ce), 0, m))
+ if (!ce_path_match(ce, pattern, m))
continue;
item = string_list_insert(list, ce->name);
if (ce_skip_worktree(ce))
int fd;
struct string_list partial;
struct pathspec pathspec;
- char *old_index_env = NULL;
int refresh_flags = REFRESH_QUIET;
if (is_status)
die(_("index file corrupt"));
if (interactive) {
+ char *old_index_env = NULL;
fd = hold_locked_index(&index_lock, 1);
refresh_cache_or_die(refresh_flags);
{
struct stat statbuf;
struct strbuf committer_ident = STRBUF_INIT;
- int commitable, saved_color_setting;
+ int commitable;
struct strbuf sb = STRBUF_INIT;
- char *buffer;
const char *hook_arg1 = NULL;
const char *hook_arg2 = NULL;
- int ident_shown = 0;
int clean_message_contents = (cleanup_mode != CLEANUP_NONE);
int old_display_comment_prefix;
/* This checks and barfs if author is badly specified */
determine_author_info(author_ident);
- if (!no_verify && run_hook(index_file, "pre-commit", NULL))
+ if (!no_verify && run_commit_hook(use_editor, index_file, "pre-commit", NULL))
return 0;
if (squash_message) {
logfile);
hook_arg1 = "message";
} else if (use_message) {
+ char *buffer;
buffer = strstr(use_message_buffer, "\n\n");
if (!use_editor && (!buffer || buffer[2] == '\0'))
die(_("commit has empty message"));
/* This checks if committer ident is explicitly given */
strbuf_addstr(&committer_ident, git_committer_info(IDENT_STRICT));
if (use_editor && include_status) {
+ int ident_shown = 0;
+ int saved_color_setting;
char *ai_tmp, *ci_tmp;
- if (whence != FROM_COMMIT)
+ if (whence != FROM_COMMIT) {
+ if (cleanup_mode == CLEANUP_SCISSORS)
+ wt_status_add_cut_line(s->fp);
status_printf_ln(s, GIT_COLOR_NORMAL,
whence == FROM_MERGE
? _("\n"
git_path(whence == FROM_MERGE
? "MERGE_HEAD"
: "CHERRY_PICK_HEAD"));
+ }
fprintf(s->fp, "\n");
if (cleanup_mode == CLEANUP_ALL)
_("Please enter the commit message for your changes."
" Lines starting\nwith '%c' will be ignored, and an empty"
" message aborts the commit.\n"), comment_line_char);
+ else if (cleanup_mode == CLEANUP_SCISSORS && whence == FROM_COMMIT)
+ wt_status_add_cut_line(s->fp);
else /* CLEANUP_SPACE, that is. */
status_printf(s, GIT_COLOR_NORMAL,
_("Please enter the commit message for your changes."
return 0;
}
- if (run_hook(index_file, "prepare-commit-msg",
- git_path(commit_editmsg), hook_arg1, hook_arg2, NULL))
+ if (run_commit_hook(use_editor, index_file, "prepare-commit-msg",
+ git_path(commit_editmsg), hook_arg1, hook_arg2, NULL))
return 0;
if (use_editor) {
}
if (!no_verify &&
- run_hook(index_file, "commit-msg", git_path(commit_editmsg), NULL)) {
+ run_commit_hook(use_editor, index_file, "commit-msg", git_path(commit_editmsg), NULL)) {
return 0;
}
use_editor = 0;
if (0 <= edit_flag)
use_editor = edit_flag;
- if (!use_editor)
- setenv("GIT_EDITOR", ":", 1);
/* Sanity check options */
if (amend && !current_head)
cleanup_mode = CLEANUP_SPACE;
else if (!strcmp(cleanup_arg, "strip"))
cleanup_mode = CLEANUP_ALL;
+ else if (!strcmp(cleanup_arg, "scissors"))
+ cleanup_mode = use_editor ? CLEANUP_SCISSORS : CLEANUP_SPACE;
else
die(_("Invalid cleanup mode %s"), cleanup_arg);
}
if (!strcmp(k, "commit.cleanup"))
return git_config_string(&cleanup_arg, k, v);
+ if (!strcmp(k, "commit.gpgsign")) {
+ sign_commit = git_config_bool(k, v) ? "" : NULL;
+ return 0;
+ }
status = git_gpg_config(k, v, NULL);
if (status)
return finish_command(&proc);
}
+int run_commit_hook(int editor_is_used, const char *index_file, const char *name, ...)
+{
+ const char *hook_env[3] = { NULL };
+ char index[PATH_MAX];
+ va_list args;
+ int ret;
+
+ snprintf(index, sizeof(index), "GIT_INDEX_FILE=%s", index_file);
+ hook_env[0] = index;
+
+ /*
+ * Let the hook know that no editor will be launched.
+ */
+ if (!editor_is_used)
+ hook_env[1] = "GIT_EDITOR=:";
+
+ va_start(args, name);
+ ret = run_hook_ve(hook_env, name, args);
+ va_end(args);
+
+ return ret;
+}
+
int cmd_commit(int argc, const char **argv, const char *prefix)
{
static struct wt_status s;
OPT_BOOL('e', "edit", &edit_flag, N_("force edit of commit")),
OPT_STRING(0, "cleanup", &cleanup_arg, N_("default"), N_("how to strip spaces and #comments from message")),
OPT_BOOL(0, "status", &include_status, N_("include status in commit message template")),
- { OPTION_STRING, 'S', "gpg-sign", &sign_commit, N_("key id"),
+ { OPTION_STRING, 'S', "gpg-sign", &sign_commit, N_("key-id"),
N_("GPG sign commit"), PARSE_OPT_OPTARG, NULL, (intptr_t) "" },
/* end commit message options */
struct ref_lock *ref_lock;
struct commit_list *parents = NULL, **pptr = &parents;
struct stat statbuf;
- int allow_fast_forward = 1;
struct commit *current_head = NULL;
struct commit_extra_header *extra = NULL;
} else if (whence == FROM_MERGE) {
struct strbuf m = STRBUF_INIT;
FILE *fp;
+ int allow_fast_forward = 1;
if (!reflog_msg)
reflog_msg = "commit (merge)";
die(_("could not read commit message: %s"), strerror(saved_errno));
}
- /* Truncate the message just before the diff, if any. */
- if (verbose)
+ if (verbose || /* Truncate the message just before the diff, if any. */
+ cleanup_mode == CLEANUP_SCISSORS)
wt_status_truncate_message_at_cut_line(&sb);
if (cleanup_mode != CLEANUP_NONE)
"not exceeded, and then \"git reset HEAD\" to recover."));
rerere(0);
- run_hook(get_index_file(), "post-commit", NULL);
+ run_commit_hook(use_editor, get_index_file(), "post-commit", NULL);
if (amend && !no_post_rewrite) {
struct notes_rewrite_cfg *cfg;
cfg = init_copy_notes_for_rewrite("amend");
static char term = '\n';
static int use_global_config, use_system_config, use_local_config;
-static const char *given_config_file;
-static const char *given_config_blob;
+static struct git_config_source given_config_source;
static int actions, types;
static const char *get_color_slot, *get_colorbool_slot;
static int end_null;
OPT_BOOL(0, "global", &use_global_config, N_("use global config file")),
OPT_BOOL(0, "system", &use_system_config, N_("use system config file")),
OPT_BOOL(0, "local", &use_local_config, N_("use repository config file")),
- OPT_STRING('f', "file", &given_config_file, N_("file"), N_("use given config file")),
- OPT_STRING(0, "blob", &given_config_blob, N_("blob-id"), N_("read config from given blob object")),
+ OPT_STRING('f', "file", &given_config_source.file, N_("file"), N_("use given config file")),
+ OPT_STRING(0, "blob", &given_config_source.blob, N_("blob-id"), N_("read config from given blob object")),
OPT_GROUP(N_("Action")),
OPT_BIT(0, "get", &actions, N_("get value: name [value-regex]"), ACTION_GET),
OPT_BIT(0, "get-all", &actions, N_("get all values: key [value-regex]"), ACTION_GET_ALL),
}
git_config_with_options(collect_config, &values,
- given_config_file, given_config_blob,
- respect_includes);
+ &given_config_source, respect_includes);
ret = !values.nr;
get_color_found = 0;
parsed_color[0] = '\0';
git_config_with_options(git_get_color_config, NULL,
- given_config_file, given_config_blob,
- respect_includes);
+ &given_config_source, respect_includes);
if (!get_color_found && def_color)
color_parse(def_color, "command line", parsed_color);
get_diff_color_found = -1;
get_color_ui_found = -1;
git_config_with_options(git_get_colorbool_config, NULL,
- given_config_file, given_config_blob,
- respect_includes);
+ &given_config_source, respect_includes);
if (get_colorbool_found < 0) {
if (!strcmp(get_colorbool_slot, "color.diff"))
return get_colorbool_found ? 0 : 1;
}
-static void check_blob_write(void)
+static void check_write(void)
{
- if (given_config_blob)
+ if (given_config_source.use_stdin)
+ die("writing to stdin is not supported");
+
+ if (given_config_source.blob)
die("writing config blobs is not supported");
}
}
git_config_with_options(urlmatch_config_entry, &config,
- given_config_file, NULL, respect_includes);
+ &given_config_source, respect_includes);
for_each_string_list_item(item, &values) {
struct urlmatch_current_candidate_value *matched = item->util;
int nongit = !startup_info->have_repository;
char *value;
- given_config_file = getenv(CONFIG_ENVIRONMENT);
+ given_config_source.file = getenv(CONFIG_ENVIRONMENT);
argc = parse_options(argc, argv, prefix, builtin_config_options,
builtin_config_usage,
PARSE_OPT_STOP_AT_NON_OPTION);
if (use_global_config + use_system_config + use_local_config +
- !!given_config_file + !!given_config_blob > 1) {
+ !!given_config_source.file + !!given_config_source.blob > 1) {
error("only one config file at a time.");
usage_with_options(builtin_config_usage, builtin_config_options);
}
+ if (given_config_source.file &&
+ !strcmp(given_config_source.file, "-")) {
+ given_config_source.file = NULL;
+ given_config_source.use_stdin = 1;
+ }
+
if (use_global_config) {
char *user_config = NULL;
char *xdg_config = NULL;
if (access_or_warn(user_config, R_OK, 0) &&
xdg_config && !access_or_warn(xdg_config, R_OK, 0))
- given_config_file = xdg_config;
+ given_config_source.file = xdg_config;
else
- given_config_file = user_config;
+ given_config_source.file = user_config;
}
else if (use_system_config)
- given_config_file = git_etc_gitconfig();
+ given_config_source.file = git_etc_gitconfig();
else if (use_local_config)
- given_config_file = git_pathdup("config");
- else if (given_config_file) {
- if (!is_absolute_path(given_config_file) && prefix)
- given_config_file =
+ given_config_source.file = git_pathdup("config");
+ else if (given_config_source.file) {
+ if (!is_absolute_path(given_config_source.file) && prefix)
+ given_config_source.file =
xstrdup(prefix_filename(prefix,
strlen(prefix),
- given_config_file));
+ given_config_source.file));
}
if (respect_includes == -1)
- respect_includes = !given_config_file;
+ respect_includes = !given_config_source.file;
if (end_null) {
term = '\0';
if (actions == ACTION_LIST) {
check_argc(argc, 0, 0);
if (git_config_with_options(show_all_config, NULL,
- given_config_file,
- given_config_blob,
+ &given_config_source,
respect_includes) < 0) {
- if (given_config_file)
+ if (given_config_source.file)
die_errno("unable to read config file '%s'",
- given_config_file);
+ given_config_source.file);
else
die("error processing config file(s)");
}
}
else if (actions == ACTION_EDIT) {
check_argc(argc, 0, 0);
- if (!given_config_file && nongit)
+ if (!given_config_source.file && nongit)
die("not in a git directory");
- if (given_config_blob)
+ if (given_config_source.use_stdin)
+ die("editing stdin is not supported");
+ if (given_config_source.blob)
die("editing blobs is not supported");
git_config(git_default_config, NULL);
- launch_editor(given_config_file ?
- given_config_file : git_path("config"),
+ launch_editor(given_config_source.file ?
+ given_config_source.file : git_path("config"),
NULL, NULL);
}
else if (actions == ACTION_SET) {
int ret;
- check_blob_write();
+ check_write();
check_argc(argc, 2, 2);
value = normalize_value(argv[0], argv[1]);
- ret = git_config_set_in_file(given_config_file, argv[0], value);
+ ret = git_config_set_in_file(given_config_source.file, argv[0], value);
if (ret == CONFIG_NOTHING_SET)
error("cannot overwrite multiple values with a single value\n"
" Use a regexp, --add or --replace-all to change %s.", argv[0]);
return ret;
}
else if (actions == ACTION_SET_ALL) {
- check_blob_write();
+ check_write();
check_argc(argc, 2, 3);
value = normalize_value(argv[0], argv[1]);
- return git_config_set_multivar_in_file(given_config_file,
+ return git_config_set_multivar_in_file(given_config_source.file,
argv[0], value, argv[2], 0);
}
else if (actions == ACTION_ADD) {
- check_blob_write();
+ check_write();
check_argc(argc, 2, 2);
value = normalize_value(argv[0], argv[1]);
- return git_config_set_multivar_in_file(given_config_file,
+ return git_config_set_multivar_in_file(given_config_source.file,
argv[0], value, "^$", 0);
}
else if (actions == ACTION_REPLACE_ALL) {
- check_blob_write();
+ check_write();
check_argc(argc, 2, 3);
value = normalize_value(argv[0], argv[1]);
- return git_config_set_multivar_in_file(given_config_file,
+ return git_config_set_multivar_in_file(given_config_source.file,
argv[0], value, argv[2], 1);
}
else if (actions == ACTION_GET) {
return get_urlmatch(argv[0], argv[1]);
}
else if (actions == ACTION_UNSET) {
- check_blob_write();
+ check_write();
check_argc(argc, 1, 2);
if (argc == 2)
- return git_config_set_multivar_in_file(given_config_file,
+ return git_config_set_multivar_in_file(given_config_source.file,
argv[0], NULL, argv[1], 0);
else
- return git_config_set_in_file(given_config_file,
+ return git_config_set_in_file(given_config_source.file,
argv[0], NULL);
}
else if (actions == ACTION_UNSET_ALL) {
- check_blob_write();
+ check_write();
check_argc(argc, 1, 2);
- return git_config_set_multivar_in_file(given_config_file,
+ return git_config_set_multivar_in_file(given_config_source.file,
argv[0], NULL, argv[1], 1);
}
else if (actions == ACTION_RENAME_SECTION) {
int ret;
- check_blob_write();
+ check_write();
check_argc(argc, 2, 2);
- ret = git_config_rename_section_in_file(given_config_file,
+ ret = git_config_rename_section_in_file(given_config_source.file,
argv[0], argv[1]);
if (ret < 0)
return ret;
}
else if (actions == ACTION_REMOVE_SECTION) {
int ret;
- check_blob_write();
+ check_write();
check_argc(argc, 1, 1);
- ret = git_config_rename_section_in_file(given_config_file,
+ ret = git_config_rename_section_in_file(given_config_source.file,
argv[0], NULL);
if (ret < 0)
return ret;
#include "exec_cmd.h"
#include "parse-options.h"
#include "diff.h"
-#include "hash.h"
+#include "hashmap.h"
#include "argv-array.h"
#define SEEN (1u << 0)
static int first_parent;
static int abbrev = -1; /* unspecified */
static int max_candidates = 10;
-static struct hash_table names;
+static struct hashmap names;
static int have_util;
static const char *pattern;
static int always;
};
struct commit_name {
- struct commit_name *next;
+ struct hashmap_entry entry;
unsigned char peeled[20];
struct tag *tag;
unsigned prio:2; /* annotated tag = 2, tag = 1, head = 0 */
"head", "lightweight", "annotated",
};
+static int commit_name_cmp(const struct commit_name *cn1,
+ const struct commit_name *cn2, const void *peeled)
+{
+ return hashcmp(cn1->peeled, peeled ? peeled : cn2->peeled);
+}
+
static inline unsigned int hash_sha1(const unsigned char *sha1)
{
unsigned int hash;
static inline struct commit_name *find_commit_name(const unsigned char *peeled)
{
- struct commit_name *n = lookup_hash(hash_sha1(peeled), &names);
- while (n && !!hashcmp(peeled, n->peeled))
- n = n->next;
- return n;
-}
-
-static int set_util(void *chain, void *data)
-{
- struct commit_name *n;
- for (n = chain; n; n = n->next) {
- struct commit *c = lookup_commit_reference_gently(n->peeled, 1);
- if (c)
- c->util = n;
- }
- return 0;
+ struct commit_name key;
+ hashmap_entry_init(&key, hash_sha1(peeled));
+ return hashmap_get(&names, &key, peeled);
}
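
The lookup above shows the general pattern for the hashmap API that replaces the old chained hash table: a struct hashmap_entry embedded as the first member carries the hash, and a comparison callback (commit_name_cmp() here) decides equality against either another entry or raw key data. A condensed sketch of the full add/find lifecycle follows, using only the calls that appear in these hunks; the function name is hypothetical and the snippet assumes the definitions above rather than compiling on its own.

static struct commit_name *names_example(struct hashmap *map,
					 const unsigned char *peeled)
{
	struct commit_name *e, key;

	/* add: the hash is stored in the embedded entry; the other
	 * fields of 'e' are left unset for brevity */
	e = xmalloc(sizeof(*e));
	hashcpy(e->peeled, peeled);
	hashmap_entry_init(e, hash_sha1(peeled));
	hashmap_add(map, e);

	/* find: a throwaway "key" entry plus the raw key as keydata;
	 * commit_name_cmp() receives 'peeled' as its third argument */
	hashmap_entry_init(&key, hash_sha1(peeled));
	return hashmap_get(map, &key, peeled);
}
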
static int replace_name(struct commit_name *e,
struct tag *tag = NULL;
if (replace_name(e, prio, sha1, &tag)) {
if (!e) {
- void **pos;
e = xmalloc(sizeof(struct commit_name));
hashcpy(e->peeled, peeled);
- pos = insert_hash(hash_sha1(peeled), e, &names);
- if (pos) {
- e->next = *pos;
- *pos = e;
- } else {
- e->next = NULL;
- }
+ hashmap_entry_init(e, hash_sha1(peeled));
+ hashmap_add(&names, e);
e->path = NULL;
}
e->tag = tag;
return 0;
/* Accept only tags that match the pattern, if given */
- if (pattern && (!is_tag || fnmatch(pattern, path + 10, 0)))
+ if (pattern && (!is_tag || wildmatch(pattern, path + 10, 0, NULL)))
return 0;
/* Is it annotated? */
fprintf(stderr, _("searching to describe %s\n"), arg);
if (!have_util) {
- for_each_hash(&names, set_util, NULL);
+ struct hashmap_iter iter;
+ struct commit *c;
+ struct commit_name *n = hashmap_iter_first(&names, &iter);
+ for (; n; n = hashmap_iter_next(&iter)) {
+ c = lookup_commit_reference_gently(n->peeled, 1);
+ if (c)
+ c->util = n;
+ }
have_util = 1;
}
return cmd_name_rev(args.argc, args.argv, prefix);
}
- init_hash(&names);
+ hashmap_init(&names, (hashmap_cmp_fn) commit_name_cmp, 0);
for_each_rawref(get_name, NULL);
- if (!names.nr && !always)
+ if (!names.size && !always)
die(_("No names found, cannot describe anything."));
if (argc == 0) {
static int fetch_one(struct remote *remote, int argc, const char **argv)
{
- int i;
static const char **refs = NULL;
struct refspec *refspec;
int ref_nr = 0;
if (argc > 0) {
int j = 0;
+ int i;
refs = xcalloc(argc + 1, sizeof(const char *));
for (i = 0; i < argc; i++) {
if (!strcmp(argv[i], "tag")) {
refname[plen] == '/' ||
p[plen-1] == '/'))
break;
- if (!fnmatch(p, refname, FNM_PATHNAME))
+ if (!wildmatch(p, refname, WM_PATHNAME, NULL))
break;
}
if (!*pattern)
unsigned int nr = 0;
int result = 0;
if (show_progress)
- progress = start_progress_delay("Checking connectivity", 0, 0, 2);
+ progress = start_progress_delay(_("Checking connectivity"), 0, 0, 2);
while (pending.nr) {
struct object_array_entry *entry;
struct object *obj;
fprintf(stderr, "Checking object directory\n");
if (show_progress)
- progress = start_progress("Checking object directories", 256);
+ progress = start_progress(_("Checking object directories"), 256);
for (i = 0; i < 256; i++) {
static char dir[4096];
sprintf(dir, "%s/%02x", path, i);
struct alternate_object_database *alt;
errors_found = 0;
- read_replace_refs = 0;
+ check_replace_refs = 0;
argc = parse_options(argc, argv, prefix, fsck_opts, fsck_usage, 0);
total += p->num_objects;
}
- progress = start_progress("Checking objects", total);
+ progress = start_progress(_("Checking objects"), total);
}
for (p = packed_git; p; p = p->next) {
/* verify gives error messages itself */
};
static int pack_refs = 1;
+static int aggressive_depth = 250;
static int aggressive_window = 250;
static int gc_auto_threshold = 6700;
static int gc_auto_pack_limit = 50;
+static int detach_auto = 1;
static const char *prune_expire = "2.weeks.ago";
static struct argv_array pack_refs_cmd = ARGV_ARRAY_INIT;
aggressive_window = git_config_int(var, value);
return 0;
}
+ if (!strcmp(var, "gc.aggressivedepth")) {
+ aggressive_depth = git_config_int(var, value);
+ return 0;
+ }
if (!strcmp(var, "gc.auto")) {
gc_auto_threshold = git_config_int(var, value);
return 0;
gc_auto_pack_limit = git_config_int(var, value);
return 0;
}
+ if (!strcmp(var, "gc.autodetach")) {
+ detach_auto = git_config_bool(var, value);
+ return 0;
+ }
if (!strcmp(var, "gc.pruneexpire")) {
if (value && strcmp(value, "now")) {
unsigned long now = approxidate("now");
else if (!too_many_loose_objects())
return 0;
- if (run_hook(NULL, "pre-auto-gc", NULL))
+ if (run_hook_le(NULL, "pre-auto-gc", NULL))
return 0;
return 1;
}
static const char *lock_repo_for_gc(int force, pid_t* ret_pid)
{
static struct lock_file lock;
- static char locking_host[128];
char my_host[128];
struct strbuf sb = STRBUF_INIT;
struct stat st;
uintmax_t pid;
FILE *fp;
- int fd, should_exit;
+ int fd;
if (pidfile)
/* already locked */
fd = hold_lock_file_for_update(&lock, git_path("gc.pid"),
LOCK_DIE_ON_ERROR);
if (!force) {
+ static char locking_host[128];
+ int should_exit;
fp = fopen(git_path("gc.pid"), "r");
memset(locking_host, 0, sizeof(locking_host));
should_exit =
if (aggressive) {
argv_array_push(&repack, "-f");
- argv_array_push(&repack, "--depth=250");
+ if (aggressive_depth > 0)
+ argv_array_pushf(&repack, "--depth=%d", aggressive_depth);
if (aggressive_window > 0)
argv_array_pushf(&repack, "--window=%d", aggressive_window);
}
*/
if (!need_to_gc())
return 0;
- if (!quiet)
- fprintf(stderr,
- _("Auto packing the repository for optimum performance. You may also\n"
- "run \"git gc\" manually. See "
- "\"git help gc\" for more information.\n"));
+ if (!quiet) {
+ if (detach_auto)
+ fprintf(stderr, _("Auto packing the repository in background for optimum performance.\n"));
+ else
+ fprintf(stderr, _("Auto packing the repository for optimum performance.\n"));
+ fprintf(stderr, _("See \"git help gc\" for manual housekeeping.\n"));
+ }
+ if (detach_auto)
+ /*
+ * failure to daemonize is ok, we'll continue
+ * in foreground
+ */
+ daemonize();
} else
add_repack_all_option();
const struct cache_entry *ce = active_cache[nr];
if (!S_ISREG(ce->ce_mode))
continue;
- if (!match_pathspec_depth(pathspec, ce->name, ce_namelen(ce), 0, NULL))
+ if (!ce_path_match(ce, pathspec, NULL))
continue;
/*
* If CE_VALID is on, we assume worktree file and its cache entry
fill_directory(&dir, pathspec);
for (i = 0; i < dir.nr; i++) {
- const char *name = dir.entries[i]->name;
- int namelen = strlen(name);
- if (!match_pathspec_depth(pathspec, name, namelen, 0, NULL))
+ if (!dir_path_match(dir.entries[i], pathspec, 0, NULL))
continue;
hit |= grep_file(opt, dir.entries[i]->name);
if (hit && opt->status_only)
if (keep_fd < 0) {
if (errno != EEXIST)
die_errno(_("cannot write keep file '%s'"),
- keep_name);
+ keep_name ? keep_name : name);
} else {
if (keep_msg_len > 0) {
write_or_die(keep_fd, keep_msg, keep_msg_len);
}
if (close(keep_fd) != 0)
die_errno(_("cannot close written keep file '%s'"),
- keep_name);
+ keep_name ? keep_name : name);
report = "keep";
}
}
if (argc == 2 && !strcmp(argv[1], "-h"))
usage(index_pack_usage);
- read_replace_refs = 0;
+ check_replace_refs = 0;
reset_pack_idx_option(&opts);
git_config(git_index_pack_config, &opts);
if (len >= ent->len)
die("git ls-files: internal error - directory entry not superset of prefix");
- if (!match_pathspec_depth(&pathspec, ent->name, ent->len, len, ps_matched))
+ if (!dir_path_match(ent, &pathspec, len, ps_matched))
return;
fputs(tag, stdout);
if (len >= ce_namelen(ce))
die("git ls-files: internal error - cache entry not superset of prefix");
- if (!match_pathspec_depth(&pathspec, ce->name, ce_namelen(ce), len, ps_matched))
+ if (!match_pathspec(&pathspec, ce->name, ce_namelen(ce),
+ len, ps_matched,
+ S_ISDIR(ce->ce_mode) || S_ISGITLINK(ce->ce_mode)))
return;
if (tag && *tag && show_valid_bit &&
len = strlen(path);
if (len < max_prefix_len)
continue; /* outside of the prefix */
- if (!match_pathspec_depth(&pathspec, path, len, max_prefix_len, ps_matched))
+ if (!match_pathspec(&pathspec, path, len,
+ max_prefix_len, ps_matched, 0))
continue; /* uninterested */
for (i = 0; i < 3; i++) {
if (!ui->mode[i])
if (snprintf(pathbuf, sizeof(pathbuf), "/%s", path) > sizeof(pathbuf))
return error("insanely long ref %.*s...", 20, path);
while ((p = *(pattern++)) != NULL) {
- if (!fnmatch(p, pathbuf, 0))
+ if (!wildmatch(p, pathbuf, 0, NULL))
return 1;
}
return 0;
* show_recursive() rolls its own matching code and is
* generally ignorant of 'struct pathspec'. The magic mask
* cannot be lifted until it is converted to use
- * match_pathspec_depth() or tree_entry_interesting()
+ * match_pathspec() or tree_entry_interesting()
*/
parse_pathspec(&pathspec, PATHSPEC_GLOB | PATHSPEC_ICASE,
PATHSPEC_PREFER_CWD,
OPT_BOOL(0, "abort", &abort_current_merge,
N_("abort the current in-progress merge")),
OPT_SET_INT(0, "progress", &show_progress, N_("force progress reporting"), 1),
- { OPTION_STRING, 'S', "gpg-sign", &sign_commit, N_("key id"),
+ { OPTION_STRING, 'S', "gpg-sign", &sign_commit, N_("key-id"),
N_("GPG sign commit"), PARSE_OPT_OPTARG, NULL, (intptr_t) "" },
OPT_BOOL(0, "overwrite-ignore", &overwrite_ignore, N_("update ignored files (default)")),
OPT_END()
}
/* Run a post-merge hook */
- run_hook(NULL, "post-merge", squash ? "1" : "0", NULL);
+ run_hook_le(NULL, "post-merge", squash ? "1" : "0", NULL);
strbuf_release(&reflog_message);
}
} else if (!strcmp(k, "merge.defaulttoupstream")) {
default_to_upstream = git_config_bool(k, v);
return 0;
+ } else if (!strcmp(k, "commit.gpgsign")) {
+ sign_commit = git_config_bool(k, v) ? "" : NULL;
+ return 0;
}
status = fmt_merge_msg_config(k, v, cb);
if (0 < option_edit)
strbuf_commented_addf(&msg, _(merge_editor_comment), comment_line_char);
write_merge_msg(&msg);
- if (run_hook(get_index_file(), "prepare-commit-msg",
- git_path("MERGE_MSG"), "merge", NULL, NULL))
+ if (run_commit_hook(0 < option_edit, get_index_file(), "prepare-commit-msg",
+ git_path("MERGE_MSG"), "merge", NULL))
abort_commit(remoteheads, NULL);
if (0 < option_edit) {
if (launch_editor(git_path("MERGE_MSG"), NULL, NULL))
if (strchr(path, '/'))
die("path %s contains slash", path);
- if (alloc <= used) {
- alloc = alloc_nr(used);
- entries = xrealloc(entries, sizeof(*entries) * alloc);
- }
+ ALLOC_GROW(entries, used + 1, alloc);
ent = entries[used++] = xmalloc(sizeof(**entries) + len + 1);
ent->mode = mode;
ent->len = len;
if (strncmp(path, src_w_slash, len_w_slash))
break;
}
- free((char *)src_w_slash);
+ if (src_w_slash != src)
+ free((char *)src_w_slash);
if (last - first < 1)
bad = _("source directory is empty");
modes = xrealloc(modes,
(argc + last - first)
* sizeof(enum update_mode));
+ submodule_gitfile = xrealloc(submodule_gitfile,
+ (argc + last - first)
+ * sizeof(char *));
}
dst = add_slash(dst);
prefix_path(dst, dst_len,
path + length + 1);
modes[argc + j] = INDEX;
+ submodule_gitfile[argc + j] = NULL;
}
argc += last - first;
}
memmove(destination + i,
destination + i + 1,
(argc - i) * sizeof(char *));
+ memmove(modes + i, modes + i + 1,
+ (argc - i) * sizeof(enum update_mode));
+ memmove(submodule_gitfile + i,
+ submodule_gitfile + i + 1,
+ (argc - i) * sizeof(char *));
i--;
}
} else
const char *subpath = path;
while (subpath) {
- if (!fnmatch(filter, subpath, 0))
+ if (!wildmatch(filter, subpath, 0, NULL))
return subpath - path;
subpath = strchr(subpath, '/');
if (subpath)
die(_("Failed to resolve '%s' as a valid ref."), arg);
if (!(buf = read_sha1_file(object, &type, &len)) || !len) {
free(buf);
- die(_("Failed to read object '%s'."), arg);;
+ die(_("Failed to read object '%s'."), arg);
+ }
+ if (type != OBJ_BLOB) {
+ free(buf);
+ die(_("Cannot read note data from non-blob object '%s'."), arg);
}
strbuf_add(&(msg->buf), buf, len);
free(buf);
int result;
const char *override_notes_ref = NULL;
struct option options[] = {
- OPT_STRING(0, "ref", &override_notes_ref, N_("notes_ref"),
+ OPT_STRING(0, "ref", &override_notes_ref, N_("notes-ref"),
N_("use notes from <notes_ref>")),
OPT_END()
};
#include "diff.h"
#include "revision.h"
#include "list-objects.h"
+#include "pack-objects.h"
#include "progress.h"
#include "refs.h"
#include "streaming.h"
#include "thread-utils.h"
+#include "pack-bitmap.h"
static const char *pack_usage[] = {
N_("git pack-objects --stdout [options...] [< ref-list | < object-list]"),
NULL
};
-struct object_entry {
- struct pack_idx_entry idx;
- unsigned long size; /* uncompressed size */
- struct packed_git *in_pack; /* already in pack */
- off_t in_pack_offset;
- struct object_entry *delta; /* delta base object */
- struct object_entry *delta_child; /* deltified objects who bases me */
- struct object_entry *delta_sibling; /* other deltified objects who
- * uses the same base as me
- */
- void *delta_data; /* cached delta (uncompressed) */
- unsigned long delta_size; /* delta data size (uncompressed) */
- unsigned long z_delta_size; /* delta data size (compressed) */
- enum object_type type;
- enum object_type in_pack_type; /* could be delta */
- uint32_t hash; /* name hint hash */
- unsigned char in_pack_header_size;
- unsigned preferred_base:1; /*
- * we do not pack this, but is available
- * to be used as the base object to delta
- * objects against.
- */
- unsigned no_try_delta:1;
- unsigned tagged:1; /* near the very tip of refs */
- unsigned filled:1; /* assigned write-order */
-};
-
/*
- * Objects we are going to pack are collected in objects array (dynamically
- * expanded). nr_objects & nr_alloc controls this array. They are stored
- * in the order we see -- typically rev-list --objects order that gives us
- * nice "minimum seek" order.
+ * Objects we are going to pack are collected in the `to_pack` structure.
+ * It contains an array (dynamically expanded) of the object data, and a map
+ * that can resolve SHA1s to their position in the array.
*/
-static struct object_entry *objects;
+static struct packing_data to_pack;
+
static struct pack_idx_entry **written_list;
-static uint32_t nr_objects, nr_alloc, nr_result, nr_written;
+static uint32_t nr_result, nr_written;
static int non_empty;
static int reuse_delta = 1, reuse_object = 1;
static int pack_compression_level = Z_DEFAULT_COMPRESSION;
static int pack_compression_seen;
+static struct packed_git *reuse_packfile;
+static uint32_t reuse_packfile_objects;
+static off_t reuse_packfile_offset;
+
+static int use_bitmap_index = 1;
+static int write_bitmap_index;
+static uint16_t write_bitmap_options;
+
static unsigned long delta_cache_size = 0;
static unsigned long max_delta_cache_size = 256 * 1024 * 1024;
static unsigned long cache_max_small_delta_size = 1000;
static unsigned long window_memory_limit = 0;
-/*
- * The object names in objects array are hashed with this hashtable,
- * to help looking up the entry by object name.
- * This hashtable is built after all the objects are seen.
- */
-static int *object_ix;
-static int object_ix_hashsz;
-static struct object_entry *locate_object_entry(const unsigned char *sha1);
-
/*
* stats
*/
static uint32_t written, written_delta;
static uint32_t reused, reused_delta;
+/*
+ * Indexed commits
+ */
+static struct commit **indexed_commits;
+static unsigned int indexed_commits_nr;
+static unsigned int indexed_commits_alloc;
+
+static void index_commit_for_bitmap(struct commit *commit)
+{
+ if (indexed_commits_nr >= indexed_commits_alloc) {
+ indexed_commits_alloc = (indexed_commits_alloc + 32) * 2;
+ indexed_commits = xrealloc(indexed_commits,
+ indexed_commits_alloc * sizeof(struct commit *));
+ }
+
+ indexed_commits[indexed_commits_nr++] = commit;
+}
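
The open-coded reallocation above could equally be written with the ALLOC_GROW helper that other hunks in this series already switch to (the mktree hunk above and check_pbase_path() below). A hypothetical equivalent, not part of the patch:

static void index_commit_for_bitmap(struct commit *commit)
{
	/* grow the array so that indexed_commits_nr + 1 entries fit */
	ALLOC_GROW(indexed_commits, indexed_commits_nr + 1, indexed_commits_alloc);
	indexed_commits[indexed_commits_nr++] = commit;
}
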
static void *get_delta(struct object_entry *entry)
{
void *cb_data)
{
unsigned char peeled[20];
- struct object_entry *entry = locate_object_entry(sha1);
+ struct object_entry *entry = packlist_find(&to_pack, sha1, NULL);
if (entry)
entry->tagged = 1;
if (!peel_ref(path, peeled)) {
- entry = locate_object_entry(peeled);
+ entry = packlist_find(&to_pack, peeled, NULL);
if (entry)
entry->tagged = 1;
}
{
unsigned int i, wo_end, last_untagged;
- struct object_entry **wo = xmalloc(nr_objects * sizeof(*wo));
+ struct object_entry **wo = xmalloc(to_pack.nr_objects * sizeof(*wo));
+ struct object_entry *objects = to_pack.objects;
- for (i = 0; i < nr_objects; i++) {
+ for (i = 0; i < to_pack.nr_objects; i++) {
objects[i].tagged = 0;
objects[i].filled = 0;
objects[i].delta_child = NULL;
* Make sure delta_sibling is sorted in the original
* recency order.
*/
- for (i = nr_objects; i > 0;) {
+ for (i = to_pack.nr_objects; i > 0;) {
struct object_entry *e = &objects[--i];
if (!e->delta)
continue;
* Give the objects in the original recency order until
* we see a tagged tip.
*/
- for (i = wo_end = 0; i < nr_objects; i++) {
+ for (i = wo_end = 0; i < to_pack.nr_objects; i++) {
if (objects[i].tagged)
break;
add_to_write_order(wo, &wo_end, &objects[i]);
/*
* Then fill all the tagged tips.
*/
- for (; i < nr_objects; i++) {
+ for (; i < to_pack.nr_objects; i++) {
if (objects[i].tagged)
add_to_write_order(wo, &wo_end, &objects[i]);
}
/*
* And then all remaining commits and tags.
*/
- for (i = last_untagged; i < nr_objects; i++) {
+ for (i = last_untagged; i < to_pack.nr_objects; i++) {
if (objects[i].type != OBJ_COMMIT &&
objects[i].type != OBJ_TAG)
continue;
/*
* And then all the trees.
*/
- for (i = last_untagged; i < nr_objects; i++) {
+ for (i = last_untagged; i < to_pack.nr_objects; i++) {
if (objects[i].type != OBJ_TREE)
continue;
add_to_write_order(wo, &wo_end, &objects[i]);
/*
* Finally all the rest in really tight order
*/
- for (i = last_untagged; i < nr_objects; i++) {
+ for (i = last_untagged; i < to_pack.nr_objects; i++) {
if (!objects[i].filled)
add_family_to_write_order(wo, &wo_end, &objects[i]);
}
- if (wo_end != nr_objects)
- die("ordered %u objects, expected %"PRIu32, wo_end, nr_objects);
+ if (wo_end != to_pack.nr_objects)
+ die("ordered %u objects, expected %"PRIu32, wo_end, to_pack.nr_objects);
return wo;
}
+static off_t write_reused_pack(struct sha1file *f)
+{
+ unsigned char buffer[8192];
+ off_t to_write, total;
+ int fd;
+
+ if (!is_pack_valid(reuse_packfile))
+ die("packfile is invalid: %s", reuse_packfile->pack_name);
+
+ fd = git_open_noatime(reuse_packfile->pack_name);
+ if (fd < 0)
+ die_errno("unable to open packfile for reuse: %s",
+ reuse_packfile->pack_name);
+
+ if (lseek(fd, sizeof(struct pack_header), SEEK_SET) == -1)
+ die_errno("unable to seek in reused packfile");
+
+ if (reuse_packfile_offset < 0)
+ reuse_packfile_offset = reuse_packfile->pack_size - 20;
+
+ total = to_write = reuse_packfile_offset - sizeof(struct pack_header);
+
+ while (to_write) {
+ int read_pack = xread(fd, buffer, sizeof(buffer));
+
+ if (read_pack <= 0)
+ die_errno("unable to read from reused packfile");
+
+ if (read_pack > to_write)
+ read_pack = to_write;
+
+ sha1write(f, buffer, read_pack);
+ to_write -= read_pack;
+
+ /*
+ * We don't know the actual number of objects written,
+ * only how many bytes have been written, how many bytes total, and
+ * how many objects total. So we can fake it by pretending all
+ * objects we are writing are the same size. This gives us a
+ * smooth progress meter, and at the end it matches the true
+ * answer.
+ */
+ written = reuse_packfile_objects *
+ (((double)(total - to_write)) / total);
+ display_progress(progress_state, written);
+ }
+
+ close(fd);
+ written = reuse_packfile_objects;
+ display_progress(progress_state, written);
+ return reuse_packfile_offset - sizeof(struct pack_header);
+}
+
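
The comment in write_reused_pack() explains the progress approximation: only byte counts are known while copying, so the object counter is scaled by the fraction of bytes already written. A worked example with hypothetical numbers (the variable roles mirror the function above; sketch only, assuming the usual system headers for off_t):

static uint32_t faked_progress(void)
{
	uint32_t objects = 1000;		/* reuse_packfile_objects */
	off_t total = 40 * 1024 * 1024;		/* bytes to copy in total */
	off_t to_write = 30 * 1024 * 1024;	/* bytes still left to copy */

	/* same expression as above: 25% of the bytes -> 25% of the objects */
	return objects * (((double)(total - to_write)) / total);	/* == 250 */
}

When to_write reaches zero the scaled value equals reuse_packfile_objects, so the meter ends on the true count.
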
static void write_pack_file(void)
{
uint32_t i = 0, j;
struct object_entry **write_order;
if (progress > pack_to_stdout)
- progress_state = start_progress("Writing objects", nr_result);
- written_list = xmalloc(nr_objects * sizeof(*written_list));
+ progress_state = start_progress(_("Writing objects"), nr_result);
+ written_list = xmalloc(to_pack.nr_objects * sizeof(*written_list));
write_order = compute_write_order();
do {
f = create_tmp_packfile(&pack_tmp_name);
offset = write_pack_header(f, nr_remaining);
+
+ if (reuse_packfile) {
+ off_t packfile_size;
+ assert(pack_to_stdout);
+
+ packfile_size = write_reused_pack(f);
+ offset += packfile_size;
+ }
+
nr_written = 0;
- for (; i < nr_objects; i++) {
+ for (; i < to_pack.nr_objects; i++) {
struct object_entry *e = write_order[i];
if (write_one(f, e, &offset) == WRITE_ONE_BREAK)
break;
if (!pack_to_stdout) {
struct stat st;
- char tmpname[PATH_MAX];
+ struct strbuf tmpname = STRBUF_INIT;
/*
* Packs are runtime accessed in their mtime
utb.modtime = --last_mtime;
if (utime(pack_tmp_name, &utb) < 0)
warning("failed utime() on %s: %s",
- tmpname, strerror(errno));
+ pack_tmp_name, strerror(errno));
}
- /* Enough space for "-<sha-1>.pack"? */
- if (sizeof(tmpname) <= strlen(base_name) + 50)
- die("pack base name '%s' too long", base_name);
- snprintf(tmpname, sizeof(tmpname), "%s-", base_name);
- finish_tmp_packfile(tmpname, pack_tmp_name,
+ strbuf_addf(&tmpname, "%s-", base_name);
+
+ if (write_bitmap_index) {
+ bitmap_writer_set_checksum(sha1);
+ bitmap_writer_build_type_index(written_list, nr_written);
+ }
+
+ finish_tmp_packfile(&tmpname, pack_tmp_name,
written_list, nr_written,
&pack_idx_opts, sha1);
+
+ if (write_bitmap_index) {
+ strbuf_addf(&tmpname, "%s.bitmap", sha1_to_hex(sha1));
+
+ stop_progress(&progress_state);
+
+ bitmap_writer_show_progress(progress);
+ bitmap_writer_reuse_bitmaps(&to_pack);
+ bitmap_writer_select_commits(indexed_commits, indexed_commits_nr, -1);
+ bitmap_writer_build(&to_pack);
+ bitmap_writer_finish(written_list, nr_written,
+ tmpname.buf, write_bitmap_options);
+ write_bitmap_index = 0;
+ }
+
+ strbuf_release(&tmpname);
free(pack_tmp_name);
puts(sha1_to_hex(sha1));
}
written_list[j]->offset = (off_t)-1;
}
nr_remaining -= nr_written;
- } while (nr_remaining && i < nr_objects);
+ } while (nr_remaining && i < to_pack.nr_objects);
free(written_list);
free(write_order);
written, nr_result);
}
-static int locate_object_entry_hash(const unsigned char *sha1)
-{
- int i;
- unsigned int ui;
- memcpy(&ui, sha1, sizeof(unsigned int));
- i = ui % object_ix_hashsz;
- while (0 < object_ix[i]) {
- if (!hashcmp(sha1, objects[object_ix[i] - 1].idx.sha1))
- return i;
- if (++i == object_ix_hashsz)
- i = 0;
- }
- return -1 - i;
-}
-
-static struct object_entry *locate_object_entry(const unsigned char *sha1)
-{
- int i;
-
- if (!object_ix_hashsz)
- return NULL;
-
- i = locate_object_entry_hash(sha1);
- if (0 <= i)
- return &objects[object_ix[i]-1];
- return NULL;
-}
-
-static void rehash_objects(void)
-{
- uint32_t i;
- struct object_entry *oe;
-
- object_ix_hashsz = nr_objects * 3;
- if (object_ix_hashsz < 1024)
- object_ix_hashsz = 1024;
- object_ix = xrealloc(object_ix, sizeof(int) * object_ix_hashsz);
- memset(object_ix, 0, sizeof(int) * object_ix_hashsz);
- for (i = 0, oe = objects; i < nr_objects; i++, oe++) {
- int ix = locate_object_entry_hash(oe->idx.sha1);
- if (0 <= ix)
- continue;
- ix = -1 - ix;
- object_ix[ix] = i + 1;
- }
-}
-
-static uint32_t name_hash(const char *name)
-{
- uint32_t c, hash = 0;
-
- if (!name)
- return 0;
-
- /*
- * This effectively just creates a sortable number from the
- * last sixteen non-whitespace characters. Last characters
- * count "most", so things that end in ".c" sort together.
- */
- while ((c = *name++) != 0) {
- if (isspace(c))
- continue;
- hash = (hash >> 2) + (c << 24);
- }
- return hash;
-}
-
static void setup_delta_attr_check(struct git_attr_check *check)
{
static struct git_attr *attr_delta;
return 0;
}
-static int add_object_entry(const unsigned char *sha1, enum object_type type,
- const char *name, int exclude)
+/*
+ * When adding an object, check whether we have already added it
+ * to our packing list. If so, we can skip. However, if we are
+ * being asked to exclude it, but the previous mention was to include
+ * it, make sure to adjust its flags and tweak our numbers accordingly.
+ *
+ * As an optimization, we pass out the index position where we would have
+ * found the item, since that saves us from having to look it up again a
+ * few lines later when we want to add the new entry.
+ */
+static int have_duplicate_entry(const unsigned char *sha1,
+ int exclude,
+ uint32_t *index_pos)
{
struct object_entry *entry;
- struct packed_git *p, *found_pack = NULL;
- off_t found_offset = 0;
- int ix;
- uint32_t hash = name_hash(name);
-
- ix = nr_objects ? locate_object_entry_hash(sha1) : -1;
- if (ix >= 0) {
- if (exclude) {
- entry = objects + object_ix[ix] - 1;
- if (!entry->preferred_base)
- nr_result--;
- entry->preferred_base = 1;
- }
+
+ entry = packlist_find(&to_pack, sha1, index_pos);
+ if (!entry)
return 0;
+
+ if (exclude) {
+ if (!entry->preferred_base)
+ nr_result--;
+ entry->preferred_base = 1;
}
+ return 1;
+}
+
+/*
+ * Check whether we want the object in the pack (e.g., we do not want
+ * objects found in non-local stores if the "--local" option was used).
+ *
+ * As a side effect of this check, we will find the packed version of this
+ * object, if any. We therefore pass out the pack information to avoid having
+ * to look it up again later.
+ */
+static int want_object_in_pack(const unsigned char *sha1,
+ int exclude,
+ struct packed_git **found_pack,
+ off_t *found_offset)
+{
+ struct packed_git *p;
+
if (!exclude && local && has_loose_object_nonlocal(sha1))
return 0;
+ *found_pack = NULL;
+ *found_offset = 0;
+
for (p = packed_git; p; p = p->next) {
off_t offset = find_pack_entry_one(sha1, p);
if (offset) {
- if (!found_pack) {
+ if (!*found_pack) {
if (!is_pack_valid(p)) {
warning("packfile %s cannot be accessed", p->pack_name);
continue;
}
- found_offset = offset;
- found_pack = p;
+ *found_offset = offset;
+ *found_pack = p;
}
if (exclude)
- break;
+ return 1;
if (incremental)
return 0;
if (local && !p->pack_local)
}
}
- if (nr_objects >= nr_alloc) {
- nr_alloc = (nr_alloc + 1024) * 3 / 2;
- objects = xrealloc(objects, nr_alloc * sizeof(*entry));
- }
+ return 1;
+}
+
+static void create_object_entry(const unsigned char *sha1,
+ enum object_type type,
+ uint32_t hash,
+ int exclude,
+ int no_try_delta,
+ uint32_t index_pos,
+ struct packed_git *found_pack,
+ off_t found_offset)
+{
+ struct object_entry *entry;
- entry = objects + nr_objects++;
- memset(entry, 0, sizeof(*entry));
- hashcpy(entry->idx.sha1, sha1);
+ entry = packlist_alloc(&to_pack, sha1, index_pos);
entry->hash = hash;
if (type)
entry->type = type;
entry->in_pack_offset = found_offset;
}
- if (object_ix_hashsz * 3 <= nr_objects * 4)
- rehash_objects();
- else
- object_ix[-1 - ix] = nr_objects;
+ entry->no_try_delta = no_try_delta;
+}
+
+static const char no_closure_warning[] = N_(
+"disabling bitmap writing, as some objects are not being packed"
+);
+
+static int add_object_entry(const unsigned char *sha1, enum object_type type,
+ const char *name, int exclude)
+{
+ struct packed_git *found_pack;
+ off_t found_offset;
+ uint32_t index_pos;
+
+ if (have_duplicate_entry(sha1, exclude, &index_pos))
+ return 0;
+
+ if (!want_object_in_pack(sha1, exclude, &found_pack, &found_offset)) {
+ /* The pack is missing an object, so it will not have closure */
+ if (write_bitmap_index) {
+ warning(_(no_closure_warning));
+ write_bitmap_index = 0;
+ }
+ return 0;
+ }
+
+ create_object_entry(sha1, type, pack_name_hash(name),
+ exclude, name && no_try_delta(name),
+ index_pos, found_pack, found_offset);
+
+ display_progress(progress_state, nr_result);
+ return 1;
+}
- display_progress(progress_state, nr_objects);
+static int add_object_entry_from_bitmap(const unsigned char *sha1,
+ enum object_type type,
+ int flags, uint32_t name_hash,
+ struct packed_git *pack, off_t offset)
+{
+ uint32_t index_pos;
+
+ if (have_duplicate_entry(sha1, 0, &index_pos))
+ return 0;
- if (name && no_try_delta(name))
- entry->no_try_delta = 1;
+ create_object_entry(sha1, type, name_hash, 0, 0, index_pos, pack, offset);
+ display_progress(progress_state, nr_result);
return 1;
}
if (0 <= pos)
return 1;
pos = -pos - 1;
- if (done_pbase_paths_alloc <= done_pbase_paths_num) {
- done_pbase_paths_alloc = alloc_nr(done_pbase_paths_alloc);
- done_pbase_paths = xrealloc(done_pbase_paths,
- done_pbase_paths_alloc *
- sizeof(unsigned));
- }
+ ALLOC_GROW(done_pbase_paths,
+ done_pbase_paths_num + 1,
+ done_pbase_paths_alloc);
done_pbase_paths_num++;
if (pos < done_pbase_paths_num)
memmove(done_pbase_paths + pos + 1,
{
struct pbase_tree *it;
int cmplen;
- unsigned hash = name_hash(name);
+ unsigned hash = pack_name_hash(name);
if (!num_preferred_base || check_pbase_path(hash))
return;
break;
}
- if (base_ref && (base_entry = locate_object_entry(base_ref))) {
+ if (base_ref && (base_entry = packlist_find(&to_pack, base_ref, NULL))) {
/*
* If base_ref was set above that means we wish to
* reuse delta data, and we even found that base
uint32_t i;
struct object_entry **sorted_by_offset;
- sorted_by_offset = xcalloc(nr_objects, sizeof(struct object_entry *));
- for (i = 0; i < nr_objects; i++)
- sorted_by_offset[i] = objects + i;
- qsort(sorted_by_offset, nr_objects, sizeof(*sorted_by_offset), pack_offset_sort);
+ sorted_by_offset = xcalloc(to_pack.nr_objects, sizeof(struct object_entry *));
+ for (i = 0; i < to_pack.nr_objects; i++)
+ sorted_by_offset[i] = to_pack.objects + i;
+ qsort(sorted_by_offset, to_pack.nr_objects, sizeof(*sorted_by_offset), pack_offset_sort);
- for (i = 0; i < nr_objects; i++) {
+ for (i = 0; i < to_pack.nr_objects; i++) {
struct object_entry *entry = sorted_by_offset[i];
check_object(entry);
if (big_file_threshold < entry->size)
if (starts_with(path, "refs/tags/") && /* is a tag? */
!peel_ref(path, peeled) && /* peelable? */
- locate_object_entry(peeled)) /* object packed? */
+ packlist_find(&to_pack, peeled, NULL)) /* object packed? */
add_object_entry(sha1, OBJ_TAG, NULL, 0);
return 0;
}
if (!pack_to_stdout)
do_check_packed_object_crc = 1;
- if (!nr_objects || !window || !depth)
+ if (!to_pack.nr_objects || !window || !depth)
return;
- delta_list = xmalloc(nr_objects * sizeof(*delta_list));
+ delta_list = xmalloc(to_pack.nr_objects * sizeof(*delta_list));
nr_deltas = n = 0;
- for (i = 0; i < nr_objects; i++) {
- struct object_entry *entry = objects + i;
+ for (i = 0; i < to_pack.nr_objects; i++) {
+ struct object_entry *entry = to_pack.objects + i;
if (entry->delta)
/* This happens if we decided to reuse existing
if (nr_deltas && n > 1) {
unsigned nr_done = 0;
if (progress)
- progress_state = start_progress("Compressing objects",
+ progress_state = start_progress(_("Compressing objects"),
nr_deltas);
qsort(delta_list, n, sizeof(*delta_list), type_size_sort);
ll_find_deltas(delta_list, n, window+1, depth, &nr_done);
cache_max_small_delta_size = git_config_int(k, v);
return 0;
}
+ if (!strcmp(k, "pack.writebitmaps")) {
+ write_bitmap_index = git_config_bool(k, v);
+ return 0;
+ }
+ if (!strcmp(k, "pack.writebitmaphashcache")) {
+ if (git_config_bool(k, v))
+ write_bitmap_options |= BITMAP_OPT_HASH_CACHE;
+ else
+ write_bitmap_options &= ~BITMAP_OPT_HASH_CACHE;
+ }
+ if (!strcmp(k, "pack.usebitmaps")) {
+ use_bitmap_index = git_config_bool(k, v);
+ return 0;
+ }
if (!strcmp(k, "pack.threads")) {
delta_search_threads = git_config_int(k, v);
if (delta_search_threads < 0)
{
add_object_entry(commit->object.sha1, OBJ_COMMIT, NULL, 0);
commit->object.flags |= OBJECT_ADDED;
+
+ if (write_bitmap_index)
+ index_commit_for_bitmap(commit);
}
static void show_object(struct object *obj,
for (i = 0; i < p->num_objects; i++) {
sha1 = nth_packed_object_sha1(p, i);
- if (!locate_object_entry(sha1) &&
+ if (!packlist_find(&to_pack, sha1, NULL) &&
!has_sha1_pack_kept_or_nonlocal(sha1))
if (force_object_loose(sha1, p->mtime))
die("unable to force loose object");
}
}
+static int get_object_list_from_bitmap(struct rev_info *revs)
+{
+ if (prepare_bitmap_walk(revs) < 0)
+ return -1;
+
+ if (!reuse_partial_packfile_from_bitmap(
+ &reuse_packfile,
+ &reuse_packfile_objects,
+ &reuse_packfile_offset)) {
+ assert(reuse_packfile_objects);
+ nr_result += reuse_packfile_objects;
+ display_progress(progress_state, nr_result);
+ }
+
+ traverse_bitmap_commit_list(&add_object_entry_from_bitmap);
+ return 0;
+}
+
static void get_object_list(int ac, const char **av)
{
struct rev_info revs;
save_commit_buffer = 0;
setup_revisions(ac, av, &revs, NULL);
+ /* make sure shallows are read */
+ is_repository_shallow();
+
while (fgets(line, sizeof(line), stdin) != NULL) {
int len = strlen(line);
if (len && line[len - 1] == '\n')
if (*line == '-') {
if (!strcmp(line, "--not")) {
flags ^= UNINTERESTING;
+ write_bitmap_index = 0;
+ continue;
+ }
+ if (starts_with(line, "--shallow ")) {
+ unsigned char sha1[20];
+ if (get_sha1_hex(line + 10, sha1))
+ die("not an SHA-1 '%s'", line + 10);
+ register_shallow(sha1);
continue;
}
die("not a rev '%s'", line);
die("bad revision '%s'", line);
}
+ if (use_bitmap_index && !get_object_list_from_bitmap(&revs))
+ return;
+
if (prepare_revision_walk(&revs))
die("revision walk setup failed");
mark_edges_uninteresting(&revs, show_edge);
N_("pack compression level")),
OPT_SET_INT(0, "keep-true-parents", &grafts_replace_parents,
N_("do not hide commits by grafts"), 0),
+ OPT_BOOL(0, "use-bitmap-index", &use_bitmap_index,
+ N_("use a bitmap index if available to speed up counting objects")),
+ OPT_BOOL(0, "write-bitmap-index", &write_bitmap_index,
+ N_("write a bitmap index together with the pack index")),
OPT_END(),
};
- read_replace_refs = 0;
+ check_replace_refs = 0;
reset_pack_idx_option(&pack_idx_opts);
git_config(git_pack_config, NULL);
if (keep_unreachable && unpack_unreachable)
die("--keep-unreachable and --unpack-unreachable are incompatible.");
+ if (!use_internal_rev_list || !pack_to_stdout || is_repository_shallow())
+ use_bitmap_index = 0;
+
+ if (pack_to_stdout || !rev_list_all)
+ write_bitmap_index = 0;
+
if (progress && all_progress_implied)
progress = 2;
prepare_packed_git();
if (progress)
- progress_state = start_progress("Counting objects", 0);
+ progress_state = start_progress(_("Counting objects"), 0);
if (!use_internal_rev_list)
read_object_list_from_stdin();
else {
strbuf_addstr(&pathname, dir);
if (opts & PRUNE_PACKED_VERBOSE)
- progress = start_progress_delay("Removing duplicate objects",
+ progress = start_progress_delay(_("Removing duplicate objects"),
256, 95, 2);
if (pathname.len && pathname.buf[pathname.len - 1] != '/')
expire = ULONG_MAX;
save_commit_buffer = 0;
- read_replace_refs = 0;
+ check_replace_refs = 0;
init_revisions(&revs, prefix);
argc = parse_options(argc, argv, prefix, options, prune_usage, 0);
if (show_progress == -1)
show_progress = isatty(2);
if (show_progress)
- progress = start_progress_delay("Checking connectivity", 0, 0, 2);
+ progress = start_progress_delay(_("Checking connectivity"), 0, 0, 2);
mark_reachable_objects(&revs, 1, progress);
stop_progress(&progress);
static const char **refspec;
static int refspec_nr;
static int refspec_alloc;
-static int default_matching_used;
static void add_refspec(const char *ref)
{
}
if (push_default == PUSH_DEFAULT_UPSTREAM &&
- !prefixcmp(matched->name, "refs/heads/")) {
+ starts_with(matched->name, "refs/heads/")) {
struct branch *branch = branch_get(matched->name + 11);
if (branch->merge_nr == 1 && branch->merge[0]->src) {
struct strbuf buf = STRBUF_INIT;
}
static char warn_unspecified_push_default_msg[] =
-N_("push.default is unset; its implicit value is changing in\n"
+N_("push.default is unset; its implicit value has changed in\n"
"Git 2.0 from 'matching' to 'simple'. To squelch this message\n"
- "and maintain the current behavior after the default changes, use:\n"
+ "and maintain the traditional behavior, use:\n"
"\n"
" git config --global push.default matching\n"
"\n"
"When push.default is set to 'matching', git will push local branches\n"
"to the remote branches that already exist with the same name.\n"
"\n"
- "In Git 2.0, Git will default to the more conservative 'simple'\n"
+ "Since Git 2.0, Git defaults to the more conservative 'simple'\n"
"behavior, which only pushes the current branch to the corresponding\n"
"remote branch that 'git pull' uses to update the current branch.\n"
"\n"
switch (push_default) {
default:
- case PUSH_DEFAULT_UNSPECIFIED:
- default_matching_used = 1;
- warn_unspecified_push_default_configuration();
- /* fallthru */
case PUSH_DEFAULT_MATCHING:
add_refspec(":");
break;
+ case PUSH_DEFAULT_UNSPECIFIED:
+ warn_unspecified_push_default_configuration();
+ /* fallthru */
+
case PUSH_DEFAULT_SIMPLE:
if (triangular)
setup_push_current(remote, branch);
"'git pull ...') before pushing again.\n"
"See the 'Note about fast-forwards' in 'git push --help' for details.");
-static const char message_advice_use_upstream[] =
- N_("Updates were rejected because a pushed branch tip is behind its remote\n"
- "counterpart. If you did not intend to push that branch, you may want to\n"
- "specify branches to push or set the 'push.default' configuration variable\n"
- "to 'simple', 'current' or 'upstream' to push only the current branch.");
-
static const char message_advice_checkout_pull_push[] =
N_("Updates were rejected because a pushed branch tip is behind its remote\n"
"counterpart. Check out this branch and integrate the remote changes\n"
advise(_(message_advice_pull_before_push));
}
-static void advise_use_upstream(void)
-{
- if (!advice_push_non_ff_default || !advice_push_update_rejected)
- return;
- advise(_(message_advice_use_upstream));
-}
-
static void advise_checkout_pull_push(void)
{
if (!advice_push_non_ff_matching || !advice_push_update_rejected)
if (reject_reasons & REJECT_NON_FF_HEAD) {
advise_pull_before_push();
} else if (reject_reasons & REJECT_NON_FF_OTHER) {
- if (default_matching_used)
- advise_use_upstream();
- else
- advise_checkout_pull_push();
+ advise_checkout_pull_push();
} else if (reject_reasons & REJECT_ALREADY_EXISTS) {
advise_ref_already_exists();
} else if (reject_reasons & REJECT_FETCH_FIRST) {
}
}
- if (shallow_update) {
- if (!checked_connectivity)
- error("BUG: run 'git fsck' for safety.\n"
- "If there are errors, try to remove "
- "the reported refs above");
- if (alt_shallow_file && *alt_shallow_file)
- unlink(alt_shallow_file);
- }
+ if (shallow_update && !checked_connectivity)
+ error("BUG: run 'git fsck' for safety.\n"
+ "If there are errors, try to remove "
+ "the reported refs above");
}
static struct command *read_head_info(struct sha1_array *shallow)
cmd->skip_update = 1;
}
}
- if (alt_shallow_file && *alt_shallow_file) {
- unlink(alt_shallow_file);
- alt_shallow_file = NULL;
- }
free(ref_status);
}
return; /* both given explicitly -- nothing to tweak */
for (ent = reflog_expire_cfg; ent; ent = ent->next) {
- if (!fnmatch(ent->pattern, ref, 0)) {
+ if (!wildmatch(ent->pattern, ref, 0, NULL)) {
if (!(slot & EXPIRE_TOTAL))
cb->expire_total = ent->expire_total;
if (!(slot & EXPIRE_UNREACH))
#include "argv-array.h"
static int delta_base_offset = 1;
+static int pack_kept_objects = -1;
static char *packdir, *packtmp;
static const char *const git_repack_usage[] = {
delta_base_offset = git_config_bool(var, value);
return 0;
}
+ if (!strcmp(var, "repack.packkeptobjects")) {
+ pack_kept_objects = git_config_bool(var, value);
+ return 0;
+ }
return git_default_config(var, value, cb);
}
static void remove_redundant_pack(const char *dir_name, const char *base_name)
{
- const char *exts[] = {".pack", ".idx", ".keep"};
+ const char *exts[] = {".pack", ".idx", ".keep", ".bitmap"};
int i;
struct strbuf buf = STRBUF_INIT;
size_t plen;
int cmd_repack(int argc, const char **argv, const char *prefix)
{
- const char *exts[2] = {".pack", ".idx"};
+ struct {
+ const char *name;
+ unsigned optional:1;
+ } exts[] = {
+ {".pack"},
+ {".idx"},
+ {".bitmap", 1},
+ };
struct child_process cmd;
struct string_list_item *item;
struct argv_array cmd_args = ARGV_ARRAY_INIT;
int no_update_server_info = 0;
int quiet = 0;
int local = 0;
+ int write_bitmap = -1;
struct option builtin_repack_options[] = {
OPT_BIT('a', NULL, &pack_everything,
OPT__QUIET(&quiet, N_("be quiet")),
OPT_BOOL('l', "local", &local,
N_("pass --local to git-pack-objects")),
+ OPT_BOOL('b', "write-bitmap-index", &write_bitmap,
+ N_("write bitmap index")),
OPT_STRING(0, "unpack-unreachable", &unpack_unreachable, N_("approxidate"),
N_("with -A, do not loosen objects older than this")),
OPT_STRING(0, "window", &window, N_("n"),
N_("limits the maximum delta depth")),
OPT_STRING(0, "max-pack-size", &max_pack_size, N_("bytes"),
N_("maximum size of each packfile")),
+ OPT_BOOL(0, "pack-kept-objects", &pack_kept_objects,
+ N_("repack objects in packs marked with .keep")),
OPT_END()
};
argc = parse_options(argc, argv, prefix, builtin_repack_options,
git_repack_usage, 0);
+ if (pack_kept_objects < 0)
+ pack_kept_objects = write_bitmap;
+
packdir = mkpathdup("%s/pack", get_object_directory());
packtmp = mkpathdup("%s/.tmp-%d-pack", packdir, (int)getpid());
argv_array_push(&cmd_args, "pack-objects");
argv_array_push(&cmd_args, "--keep-true-parents");
- argv_array_push(&cmd_args, "--honor-pack-keep");
+ if (!pack_kept_objects)
+ argv_array_push(&cmd_args, "--honor-pack-keep");
argv_array_push(&cmd_args, "--non-empty");
argv_array_push(&cmd_args, "--all");
argv_array_push(&cmd_args, "--reflog");
argv_array_pushf(&cmd_args, "--no-reuse-delta");
if (no_reuse_object)
argv_array_pushf(&cmd_args, "--no-reuse-object");
+ if (write_bitmap >= 0)
+ argv_array_pushf(&cmd_args, "--%swrite-bitmap-index",
+ write_bitmap ? "" : "no-");
if (pack_everything & ALL_INTO_ONE) {
get_non_kept_pack_filenames(&existing_packs);
*/
failed = 0;
for_each_string_list_item(item, &names) {
- for (ext = 0; ext < 2; ext++) {
+ for (ext = 0; ext < ARRAY_SIZE(exts); ext++) {
char *fname, *fname_old;
fname = mkpathdup("%s/pack-%s%s", packdir,
- item->string, exts[ext]);
+ item->string, exts[ext].name);
if (!file_exists(fname)) {
free(fname);
continue;
}
fname_old = mkpath("%s/old-%s%s", packdir,
- item->string, exts[ext]);
+ item->string, exts[ext].name);
if (file_exists(fname_old))
if (unlink(fname_old))
failed = 1;
/* Now the ones with the same name are out of the way... */
for_each_string_list_item(item, &names) {
- for (ext = 0; ext < 2; ext++) {
+ for (ext = 0; ext < ARRAY_SIZE(exts); ext++) {
char *fname, *fname_old;
struct stat statbuffer;
+ int exists = 0;
fname = mkpathdup("%s/pack-%s%s",
- packdir, item->string, exts[ext]);
+ packdir, item->string, exts[ext].name);
fname_old = mkpathdup("%s-%s%s",
- packtmp, item->string, exts[ext]);
+ packtmp, item->string, exts[ext].name);
if (!stat(fname_old, &statbuffer)) {
statbuffer.st_mode &= ~(S_IWUSR | S_IWGRP | S_IWOTH);
chmod(fname_old, statbuffer.st_mode);
+ exists = 1;
+ }
+ if (exists || !exts[ext].optional) {
+ if (rename(fname_old, fname))
+ die_errno(_("renaming '%s' failed"), fname_old);
}
- if (rename(fname_old, fname))
- die_errno(_("renaming '%s' failed"), fname_old);
free(fname);
free(fname_old);
}
/* Remove the "old-" files */
for_each_string_list_item(item, &names) {
- for (ext = 0; ext < 2; ext++) {
+ for (ext = 0; ext < ARRAY_SIZE(exts); ext++) {
char *fname;
fname = mkpath("%s/old-%s%s",
packdir,
item->string,
- exts[ext]);
+ exts[ext].name);
if (remove_path(fname))
warning(_("removing '%s' failed"), fname);
}
{
struct show_data *data = cb_data;
- if (!fnmatch(data->pattern, refname, 0)) {
+ if (!wildmatch(data->pattern, refname, 0, NULL)) {
if (data->format == REPLACE_FORMAT_SHORT)
printf("%s\n", refname);
else if (data->format == REPLACE_FORMAT_MEDIUM)
OPT_END()
};
- read_replace_refs = 0;
+ check_replace_refs = 0;
argc = parse_options(argc, argv, prefix, options, git_replace_usage, 0);
struct diff_options *opt, void *data)
{
int i;
+ int intent_to_add = *(int *)data;
for (i = 0; i < q->nr; i++) {
struct diff_filespec *one = q->queue[i]->one;
- if (one->mode && !is_null_sha1(one->sha1)) {
- struct cache_entry *ce;
- ce = make_cache_entry(one->mode, one->sha1, one->path,
- 0, 0);
- if (!ce)
- die(_("make_cache_entry failed for path '%s'"),
- one->path);
- add_cache_entry(ce, ADD_CACHE_OK_TO_ADD |
- ADD_CACHE_OK_TO_REPLACE);
- } else
+ int is_missing = !(one->mode && !is_null_sha1(one->sha1));
+ struct cache_entry *ce;
+
+ if (is_missing && !intent_to_add) {
remove_file_from_cache(one->path);
+ continue;
+ }
+
+ ce = make_cache_entry(one->mode, one->sha1, one->path,
+ 0, 0);
+ if (!ce)
+ die(_("make_cache_entry failed for path '%s'"),
+ one->path);
+ if (is_missing) {
+ ce->ce_flags |= CE_INTENT_TO_ADD;
+ set_object_name_for_intent_to_add_entry(ce);
+ }
+ add_cache_entry(ce, ADD_CACHE_OK_TO_ADD | ADD_CACHE_OK_TO_REPLACE);
}
}
static int read_from_tree(const struct pathspec *pathspec,
- unsigned char *tree_sha1)
+ unsigned char *tree_sha1,
+ int intent_to_add)
{
struct diff_options opt;
copy_pathspec(&opt.pathspec, pathspec);
opt.output_format = DIFF_FORMAT_CALLBACK;
opt.format_callback = update_index_from_diff;
+ opt.format_callback_data = &intent_to_add;
if (do_diff_cache(tree_sha1, &opt))
return 1;
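
read_from_tree() hands the new flag to the diff machinery through format_callback_data, and update_index_from_diff() recovers it with the *(int *)data cast at the top. The same void-pointer threading, reduced to a standalone illustration with hypothetical names:

#include <stdio.h>

static void per_path_callback(const char *path, void *data)
{
	int intent_to_add = *(int *)data;	/* cast back what the caller threaded through */
	printf("%s: intent_to_add=%d\n", path, intent_to_add);
}

int main(void)
{
	int intent_to_add = 1;			/* lives in the caller's frame */
	per_path_callback("removed-file.c", &intent_to_add);
	return 0;
}
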
const char *rev;
unsigned char sha1[20];
struct pathspec pathspec;
+ int intent_to_add = 0;
const struct option options[] = {
OPT__QUIET(&quiet, N_("be quiet, only report errors")),
OPT_SET_INT(0, "mixed", &reset_type,
OPT_SET_INT(0, "keep", &reset_type,
N_("reset HEAD but keep local changes"), KEEP),
OPT_BOOL('p', "patch", &patch_mode, N_("select hunks interactively")),
+ OPT_BOOL('N', "intent-to-add", &intent_to_add,
+ N_("record only the fact that removed paths will be added later")),
OPT_END()
};
if (reset_type == NONE)
reset_type = MIXED; /* by default */
- if (reset_type != SOFT && reset_type != MIXED)
+ if (reset_type != SOFT && (reset_type != MIXED || get_git_work_tree()))
setup_work_tree();
if (reset_type == MIXED && is_bare_repository())
die(_("%s reset is not allowed in a bare repository"),
_(reset_type_names[reset_type]));
+ if (intent_to_add && reset_type != MIXED)
+ die(_("-N can only be used with --mixed"));
+
/* Soft reset does not touch the index file nor the working tree
* at all, but requires them in a good order. Other resets reset
* the index file to the tree object we are switching to. */
int newfd = hold_locked_index(lock, 1);
if (reset_type == MIXED) {
int flags = quiet ? REFRESH_QUIET : REFRESH_IN_PORCELAIN;
- if (read_from_tree(&pathspec, sha1))
+ if (read_from_tree(&pathspec, sha1, intent_to_add))
return 1;
- refresh_index(&the_index, flags, NULL, NULL,
- _("Unstaged changes after reset:"));
+ if (get_git_work_tree())
+ refresh_index(&the_index, flags, NULL, NULL,
+ _("Unstaged changes after reset:"));
} else {
int err = reset_index(sha1, reset_type, quiet);
if (reset_type == KEEP && !err)
#include "diff.h"
#include "revision.h"
#include "list-objects.h"
+#include "pack.h"
+#include "pack-bitmap.h"
#include "builtin.h"
#include "log-tree.h"
#include "graph.h"
return 0;
}
+static int show_object_fast(
+ const unsigned char *sha1,
+ enum object_type type,
+ int exclude,
+ uint32_t name_hash,
+ struct packed_git *found_pack,
+ off_t found_offset)
+{
+ fprintf(stdout, "%s\n", sha1_to_hex(sha1));
+ return 1;
+}
+
int cmd_rev_list(int argc, const char **argv, const char *prefix)
{
struct rev_info revs;
int bisect_list = 0;
int bisect_show_vars = 0;
int bisect_find_all = 0;
+ int use_bitmap_index = 0;
git_config(git_default_config, NULL);
init_revisions(&revs, prefix);
bisect_show_vars = 1;
continue;
}
+ if (!strcmp(arg, "--use-bitmap-index")) {
+ use_bitmap_index = 1;
+ continue;
+ }
+ if (!strcmp(arg, "--test-bitmap")) {
+ test_bitmap_walk(&revs);
+ return 0;
+ }
usage(rev_list_usage);
}
if (bisect_list)
revs.limited = 1;
+ if (use_bitmap_index) {
+ if (revs.count && !revs.left_right && !revs.cherry_mark) {
+ uint32_t commit_count;
+ if (!prepare_bitmap_walk(&revs)) {
+ count_bitmap_commit_list(&commit_count, NULL, NULL, NULL);
+ printf("%d\n", commit_count);
+ return 0;
+ }
+ } else if (revs.tag_objects && revs.tree_objects && revs.blob_objects) {
+ if (!prepare_bitmap_walk(&revs)) {
+ traverse_bitmap_commit_list(&show_object_fast);
+ return 0;
+ }
+ }
+ }
+
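
The bitmap fast path is only taken when a bitmap can answer the query exactly: either a plain --count (with neither --left-right nor --cherry-mark), or a full object listing where tags, trees and blobs are all requested. A condensed restatement of that gate, with a hypothetical helper name:

static int can_use_bitmap(const struct rev_info *revs)
{
	if (revs->count && !revs->left_right && !revs->cherry_mark)
		return 1;	/* plain commit count */
	return revs->tag_objects && revs->tree_objects && revs->blob_objects;
}
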
if (prepare_revision_walk(&revs))
die("revision walk setup failed");
if (revs.tree_objects)
usage[unb++] = strbuf_detach(&sb, NULL);
}
- /* parse: (<short>|<short>,<long>|<long>)[=?]? SP+ <help> */
+ /* parse: (<short>|<short>,<long>|<long>)[*=?!]*<arghint>? SP+ <help> */
while (strbuf_getline(&sb, stdin, '\n') != EOF) {
const char *s;
+ const char *end;
struct option *o;
if (!sb.len)
o->value = &parsed;
o->flags = PARSE_OPT_NOARG;
o->callback = &parseopt_dump;
+
+ /* Possible argument name hint */
+ end = s;
+ while (s > sb.buf && strchr("*=?!", s[-1]) == NULL)
+ --s;
+ if (s != sb.buf && s != end)
+ o->argh = xmemdupz(s, end - s);
+ if (s == sb.buf)
+ s = end;
+
while (s > sb.buf && strchr("*=?!", s[-1])) {
switch (*--s) {
case '=':
continue;
}
if (!strcmp(arg, "--default")) {
- def = argv[i+1];
- i++;
+ def = argv[++i];
+ if (!def)
+ die("--default requires an argument");
continue;
}
if (!strcmp(arg, "--prefix")) {
- prefix = argv[i+1];
+ prefix = argv[++i];
+ if (!prefix)
+ die("--prefix requires an argument");
startup_info->prefix = prefix;
output_prefix = 1;
- i++;
continue;
}
if (!strcmp(arg, "--revs-only")) {
continue;
}
if (!strcmp(arg, "--resolve-git-dir")) {
- const char *gitdir = resolve_gitdir(argv[i+1]);
+ const char *gitdir = argv[++i];
+ if (!gitdir)
+ die("--resolve-git-dir requires an argument");
+ gitdir = resolve_gitdir(gitdir);
if (!gitdir)
- die("not a gitdir '%s'", argv[i+1]);
+ die("not a gitdir '%s'", argv[i]);
puts(gitdir);
continue;
}
OPT_STRING(0, "strategy", &opts->strategy, N_("strategy"), N_("merge strategy")),
OPT_CALLBACK('X', "strategy-option", &opts, N_("option"),
N_("option for merge strategy"), option_parse_x),
+ { OPTION_STRING, 'S', "gpg-sign", &opts->gpg_sign, N_("key-id"),
+ N_("GPG sign commit"), PARSE_OPT_OPTARG, NULL, (intptr_t) "" },
OPT_END(),
OPT_END(),
OPT_END(),
for (i = 0; i < active_nr; i++) {
const struct cache_entry *ce = active_cache[i];
- if (!match_pathspec_depth(&pathspec, ce->name, ce_namelen(ce), 0, seen))
+ if (!ce_path_match(ce, &pathspec, seen))
continue;
ALLOC_GROW(list.entry, list.nr + 1, list.alloc);
- list.entry[list.nr].name = ce->name;
+ list.entry[list.nr].name = xstrdup(ce->name);
list.entry[list.nr].is_submodule = S_ISGITLINK(ce->ce_mode);
if (list.entry[list.nr++].is_submodule &&
!is_staging_gitmodules_ok())
slash--;
if (!*tail)
return 0;
- if (fnmatch(match_ref_pattern, tail, 0))
+ if (wildmatch(match_ref_pattern, tail, 0, NULL))
return 0;
if (starts_with(refname, "refs/heads/"))
return append_head_ref(refname, sha1, flag, cb_data);
NULL
};
+#define STRCMP_SORT 0 /* must be zero */
+#define VERCMP_SORT 1
+#define SORT_MASK 0x7fff
+#define REVERSE_SORT 0x8000
+
struct tag_filter {
const char **patterns;
int lines;
+ int sort;
+ struct string_list tags;
struct commit_list *with_commit;
};
if (!*patterns)
return 1;
for (; *patterns; patterns++)
- if (!fnmatch(*patterns, ref, 0))
+ if (!wildmatch(*patterns, ref, 0, NULL))
return 1;
return 0;
}
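
This is one of several hunks in the series that swap fnmatch() for wildmatch(): both return 0 on a match, the flags move to the third argument (WM_PATHNAME standing in for FNM_PATHNAME in the fetch hunk above), and the extra fourth parameter is NULL everywhere here. A minimal sketch of the drop-in shape, with a hypothetical pattern and assuming wildmatch.h from the same tree:

static int matches_tag_pattern(const char *refname)
{
	/* fnmatch(pat, str, 0) becomes wildmatch(pat, str, 0, NULL);
	 * zero still means "matched", so the !... tests stay as they were */
	return !wildmatch("refs/tags/v*", refname, 0, NULL);
}
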
return 0;
if (!filter->lines) {
- printf("%s\n", refname);
+ if (filter->sort)
+ string_list_append(&filter->tags, refname);
+ else
+ printf("%s\n", refname);
return 0;
}
printf("%-15s ", refname);
return 0;
}
+static int sort_by_version(const void *a_, const void *b_)
+{
+ const struct string_list_item *a = a_;
+ const struct string_list_item *b = b_;
+ return versioncmp(a->string, b->string);
+}
+
static int list_tags(const char **patterns, int lines,
- struct commit_list *with_commit)
+ struct commit_list *with_commit, int sort)
{
struct tag_filter filter;
filter.patterns = patterns;
filter.lines = lines;
+ filter.sort = sort;
filter.with_commit = with_commit;
+ memset(&filter.tags, 0, sizeof(filter.tags));
+ filter.tags.strdup_strings = 1;
for_each_tag_ref(show_reference, (void *) &filter);
-
+ if (sort) {
+ int i;
+ if ((sort & SORT_MASK) == VERCMP_SORT)
+ qsort(filter.tags.items, filter.tags.nr,
+ sizeof(struct string_list_item), sort_by_version);
+ if (sort & REVERSE_SORT)
+ for (i = filter.tags.nr - 1; i >= 0; i--)
+ printf("%s\n", filter.tags.items[i].string);
+ else
+ for (i = 0; i < filter.tags.nr; i++)
+ printf("%s\n", filter.tags.items[i].string);
+ string_list_clear(&filter.tags, 0);
+ }
return 0;
}
return 0;
}
+static int parse_opt_sort(const struct option *opt, const char *arg, int unset)
+{
+ int *sort = opt->value;
+ int flags = 0;
+
+ if (*arg == '-') {
+ flags |= REVERSE_SORT;
+ arg++;
+ }
+ if (starts_with(arg, "version:")) {
+ *sort = VERCMP_SORT;
+ arg += 8;
+ } else if (starts_with(arg, "v:")) {
+ *sort = VERCMP_SORT;
+ arg += 2;
+ } else
+ *sort = STRCMP_SORT;
+ if (strcmp(arg, "refname"))
+ die(_("unsupported sort specification %s"), arg);
+ *sort |= flags;
+ return 0;
+}
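+
+/* Aside (not part of the patch): parse_opt_sort() packs the comparison
+ * type into the low bits and the direction into the top bit.  A small
+ * standalone sketch, with the macros copied from above for illustration,
+ * showing how the value produced for "--sort=-version:refname" is
+ * decoded the way list_tags() decodes it: */
+
+#include <stdio.h>
+
+#define STRCMP_SORT 0		/* must be zero */
+#define VERCMP_SORT 1
+#define SORT_MASK 0x7fff
+#define REVERSE_SORT 0x8000
+
+int main(void)
+{
+	/* what parse_opt_sort() yields for "--sort=-version:refname" */
+	int sort = VERCMP_SORT | REVERSE_SORT;
+
+	printf("comparison: %s\n",
+	       (sort & SORT_MASK) == VERCMP_SORT ? "versioncmp" : "strcmp");
+	printf("direction: %s\n",
+	       (sort & REVERSE_SORT) ? "descending" : "ascending");
+	return 0;
+}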
+
int cmd_tag(int argc, const char **argv, const char *prefix)
{
struct strbuf buf = STRBUF_INIT;
struct create_tag_options opt;
char *cleanup_arg = NULL;
int annotate = 0, force = 0, lines = -1;
- int cmdmode = 0;
+ int cmdmode = 0, sort = 0;
const char *msgfile = NULL, *keyid = NULL;
struct msg_arg msg = { 0, STRBUF_INIT };
struct commit_list *with_commit = NULL;
OPT_BOOL('s', "sign", &opt.sign, N_("annotated and GPG-signed tag")),
OPT_STRING(0, "cleanup", &cleanup_arg, N_("mode"),
N_("how to strip spaces and #comments from message")),
- OPT_STRING('u', "local-user", &keyid, N_("key id"),
+ OPT_STRING('u', "local-user", &keyid, N_("key-id"),
N_("use another key to sign the tag")),
OPT__FORCE(&force, N_("replace the tag if exists")),
OPT_COLUMN(0, "column", &colopts, N_("show tag list in columns")),
+ {
+ OPTION_CALLBACK, 0, "sort", &sort, N_("type"), N_("sort tags"),
+ PARSE_OPT_NONEG, parse_opt_sort
+ },
OPT_GROUP(N_("Tag listing options")),
{
PARSE_OPT_LASTARG_DEFAULT,
parse_opt_with_commit, (intptr_t)"HEAD",
},
+ {
+ OPTION_CALLBACK, 0, "with", &with_commit, N_("commit"),
+ N_("print only tags that contain the commit"),
+ PARSE_OPT_HIDDEN | PARSE_OPT_LASTARG_DEFAULT,
+ parse_opt_with_commit, (intptr_t)"HEAD",
+ },
{
OPTION_CALLBACK, 0, "points-at", NULL, N_("object"),
N_("print only tags of the object"), 0, parse_opt_points_at
copts.padding = 2;
run_column_filter(colopts, &copts);
}
- ret = list_tags(argv, lines == -1 ? 0 : lines, with_commit);
+ if (lines != -1 && sort)
+ die(_("--sort and -n are incompatible"));
+ ret = list_tags(argv, lines == -1 ? 0 : lines, with_commit, sort);
if (column_active(colopts))
stop_column_filter();
return ret;
use(sizeof(struct pack_header));
if (!quiet)
- progress = start_progress("Unpacking objects", nr_objects);
+ progress = start_progress(_("Unpacking objects"), nr_objects);
obj_list = xcalloc(nr_objects, sizeof(*obj_list));
for (i = 0; i < nr_objects; i++) {
unpack_one(i);
int i;
unsigned char sha1[20];
- read_replace_refs = 0;
+ check_replace_refs = 0;
git_config(git_default_config, NULL);
#include "resolve-undo.h"
#include "parse-options.h"
#include "pathspec.h"
+#include "dir.h"
/*
* Default to not allowing changes to the list of files. The
die("git update-index: cannot chmod %cx '%s'", flip, path);
}
-static void update_one(const char *path, const char *prefix, int prefix_length)
+static void update_one(const char *path)
{
- const char *p = prefix_path(prefix, prefix_length, path);
- if (!verify_path(p)) {
+ if (!verify_path(path)) {
fprintf(stderr, "Ignoring path %s\n", path);
- goto free_return;
+ return;
}
if (mark_valid_only) {
- if (mark_ce_flags(p, CE_VALID, mark_valid_only == MARK_FLAG))
+ if (mark_ce_flags(path, CE_VALID, mark_valid_only == MARK_FLAG))
die("Unable to mark file %s", path);
- goto free_return;
+ return;
}
if (mark_skip_worktree_only) {
- if (mark_ce_flags(p, CE_SKIP_WORKTREE, mark_skip_worktree_only == MARK_FLAG))
+ if (mark_ce_flags(path, CE_SKIP_WORKTREE, mark_skip_worktree_only == MARK_FLAG))
die("Unable to mark file %s", path);
- goto free_return;
+ return;
}
if (force_remove) {
- if (remove_file_from_cache(p))
+ if (remove_file_from_cache(path))
die("git update-index: unable to remove %s", path);
report("remove '%s'", path);
- goto free_return;
+ return;
}
- if (process_path(p))
+ if (process_path(path))
die("Unable to process path %s", path);
report("add '%s'", path);
- free_return:
- if (p < path || p > path + strlen(path))
- free((char *)p);
}
static void read_index_info(int line_termination)
const struct cache_entry *ce = active_cache[pos];
struct cache_entry *old = NULL;
int save_nr;
+ char *path;
- if (ce_stage(ce) || !ce_path_match(ce, &pathspec))
+ if (ce_stage(ce) || !ce_path_match(ce, &pathspec, NULL))
continue;
if (has_head)
old = read_one_ent(NULL, head_sha1,
* or worse yet 'allow_replace', active_nr may decrease.
*/
save_nr = active_nr;
- update_one(ce->name + prefix_length, prefix, prefix_length);
+ path = xstrdup(ce->name);
+ update_one(path);
+ free(path);
if (save_nr != active_nr)
goto redo;
}
return 0;
}
+static int parse_new_style_cacheinfo(const char *arg,
+ unsigned int *mode,
+ unsigned char sha1[],
+ const char **path)
+{
+ unsigned long ul;
+ char *endp;
+
+ errno = 0;
+ ul = strtoul(arg, &endp, 8);
+ if (errno || endp == arg || *endp != ',' || (unsigned int) ul != ul)
+ return -1; /* not a new-style cacheinfo */
+ *mode = ul;
+ endp++;
+ if (get_sha1_hex(endp, sha1) || endp[40] != ',')
+ return -1;
+ *path = endp + 41;
+ return 0;
+}
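+
+/* Minimal sketch (illustration only, assumes compilation inside git so
+ * that get_sha1_hex() and stdio are available) of driving the helper
+ * above; the argument string is invented.  The old space-separated
+ * "--cacheinfo <mode> <object> <path>" form keeps working through the
+ * fallback in cacheinfo_callback() below. */
+static void demo_new_style_cacheinfo(void)
+{
+	unsigned int mode;
+	unsigned char sha1[20];
+	const char *path;
+	const char *arg =
+		"100644,e69de29bb2d1d6434b8b29ae775ad8c2e48c5391,docs/README";
+
+	if (!parse_new_style_cacheinfo(arg, &mode, sha1, &path))
+		printf("mode %o path '%s'\n", mode, path);
+	else
+		printf("'%s' is not a new-style --cacheinfo argument\n", arg);
+}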
+
static int cacheinfo_callback(struct parse_opt_ctx_t *ctx,
const struct option *opt, int unset)
{
unsigned char sha1[20];
unsigned int mode;
+ const char *path;
+ if (!parse_new_style_cacheinfo(ctx->argv[1], &mode, sha1, &path)) {
+ if (add_cacheinfo(mode, sha1, path, 0))
+ die("git update-index: --cacheinfo cannot add %s", path);
+ ctx->argv++;
+ ctx->argc--;
+ return 0;
+ }
if (ctx->argc <= 3)
- return error("option 'cacheinfo' expects three arguments");
+ return error("option 'cacheinfo' expects <mode>,<sha1>,<path>");
if (strtoul_ui(*++ctx->argv, 8, &mode) ||
get_sha1_hex(*++ctx->argv, sha1) ||
add_cacheinfo(mode, sha1, *++ctx->argv, 0))
PARSE_OPT_NOARG | PARSE_OPT_NONEG,
really_refresh_callback},
{OPTION_LOWLEVEL_CALLBACK, 0, "cacheinfo", NULL,
- N_("<mode> <object> <path>"),
+ N_("<mode>,<object>,<path>"),
N_("add the specified entry to the index"),
- PARSE_OPT_NOARG | /* disallow --cacheinfo=<mode> form */
+ PARSE_OPT_NOARG | /* disallow --cacheinfo=<mode> form */
PARSE_OPT_NONEG | PARSE_OPT_LITERAL_ARGHELP,
(parse_opt_cb *) cacheinfo_callback},
{OPTION_CALLBACK, 0, "chmod", &set_executable_bit, N_("(+/-)x"),
setup_work_tree();
p = prefix_path(prefix, prefix_length, path);
- update_one(p, NULL, 0);
+ update_one(p);
if (set_executable_bit)
chmod_path(set_executable_bit, p);
- if (p < path || p > path + strlen(path))
- free((char *)p);
+ free((char *)p);
ctx.argc--;
ctx.argv++;
break;
strbuf_swap(&buf, &nbuf);
}
p = prefix_path(prefix, prefix_length, buf.buf);
- update_one(p, NULL, 0);
+ update_one(p);
if (set_executable_bit)
chmod_path(set_executable_bit, p);
- if (p < buf.buf || p > buf.buf + buf.len)
- free((char *)p);
+ free((char *)p);
}
strbuf_release(&nbuf);
strbuf_release(&buf);
#include "bulk-checkin.h"
#include "csum-file.h"
#include "pack.h"
+#include "strbuf.h"
static int pack_compression_level = Z_DEFAULT_COMPRESSION;
static void finish_bulk_checkin(struct bulk_checkin_state *state)
{
unsigned char sha1[20];
- char packname[PATH_MAX];
+ struct strbuf packname = STRBUF_INIT;
int i;
if (!state->f)
close(fd);
}
- sprintf(packname, "%s/pack/pack-", get_object_directory());
- finish_tmp_packfile(packname, state->pack_tmp_name,
+ strbuf_addf(&packname, "%s/pack/pack-", get_object_directory());
+ finish_tmp_packfile(&packname, state->pack_tmp_name,
state->written, state->nr_written,
&state->pack_idx_opts, sha1);
for (i = 0; i < state->nr_written; i++)
free(state->written);
memset(state, 0, sizeof(*state));
+ strbuf_release(&packname);
/* Make objects we just wrote available to ourselves */
reprepare_packed_git();
}
static void add_to_ref_list(const unsigned char *sha1, const char *name,
struct ref_list *list)
{
- if (list->nr + 1 >= list->alloc) {
- list->alloc = alloc_nr(list->nr + 1);
- list->list = xrealloc(list->list,
- list->alloc * sizeof(list->list[0]));
- }
- memcpy(list->list[list->nr].sha1, sha1, 20);
+ ALLOC_GROW(list->list, list->nr + 1, list->alloc);
+ hashcpy(list->list[list->nr].sha1, sha1);
list->list[list->nr].name = xstrdup(name);
list->nr++;
}
return 0;
}
+/* Remember to update object flag allocation in object.h */
#define PREREQ_MARK (1u<<16)
int verify_bundle(struct bundle_header *header, int verbose)
return NULL;
pos = -pos-1;
- if (it->subtree_alloc <= it->subtree_nr) {
- it->subtree_alloc = alloc_nr(it->subtree_alloc);
- it->down = xrealloc(it->down, it->subtree_alloc *
- sizeof(*it->down));
- }
+ ALLOC_GROW(it->down, it->subtree_nr + 1, it->subtree_alloc);
it->subtree_nr++;
down = xmalloc(sizeof(*down) + pathlen + 1);
if (!it)
return;
- slash = strchr(path, '/');
+ slash = strchrnul(path, '/');
+ namelen = slash - path;
it->entry_count = -1;
- if (!slash) {
+ if (!*slash) {
int pos;
- namelen = strlen(path);
pos = subtree_pos(it, path, namelen);
if (0 <= pos) {
cache_tree_free(&it->down[pos]->cache_tree);
}
return;
}
- namelen = slash - path;
down = find_subtree(it, path, namelen, 0);
if (down)
cache_tree_invalidate_path(down->cache_tree, slash + 1);
const char *slash;
struct cache_tree_sub *sub;
- slash = strchr(path, '/');
- if (!slash)
- slash = path + strlen(path);
- /* between path and slash is the name of the
- * subtree to look for.
+ slash = strchrnul(path, '/');
+ /*
+ * Between path and slash is the name of the subtree
+ * to look for.
*/
sub = find_subtree(it, path, slash - path, 0);
if (!sub)
return NULL;
it = sub->cache_tree;
- if (slash)
- while (*slash && *slash == '/')
- slash++;
- if (!slash || !*slash)
- return it; /* prefix ended with slashes */
+
path = slash;
+ while (*path == '/')
+ path++;
}
return it;
}
#include "git-compat-util.h"
#include "strbuf.h"
-#include "hash.h"
+#include "hashmap.h"
#include "advice.h"
#include "gettext.h"
#include "convert.h"
};
struct cache_entry {
+ struct hashmap_entry ent;
struct stat_data ce_stat_data;
unsigned int ce_mode;
unsigned int ce_flags;
unsigned int ce_namelen;
unsigned char sha1[20];
- struct cache_entry *next;
char name[FLEX_ARRAY]; /* more */
};
#define CE_ADDED (1 << 19)
#define CE_HASHED (1 << 20)
-#define CE_UNHASHED (1 << 21)
#define CE_WT_REMOVE (1 << 22) /* remove in work directory */
#define CE_CONFLICTED (1 << 23)
* Copy the sha1 and stat state of a cache entry from one to
* another. But we never change the name, or the hash state!
*/
-#define CE_STATE_MASK (CE_HASHED | CE_UNHASHED)
static inline void copy_cache_entry(struct cache_entry *dst,
const struct cache_entry *src)
{
- unsigned int state = dst->ce_flags & CE_STATE_MASK;
+ unsigned int state = dst->ce_flags & CE_HASHED;
/* Don't copy hash chain and name */
- memcpy(dst, src, offsetof(struct cache_entry, next));
+ memcpy(&dst->ce_stat_data, &src->ce_stat_data,
+ offsetof(struct cache_entry, name) -
+ offsetof(struct cache_entry, ce_stat_data));
/* Restore the hash state */
- dst->ce_flags = (dst->ce_flags & ~CE_STATE_MASK) | state;
+ dst->ce_flags = (dst->ce_flags & ~CE_HASHED) | state;
}
static inline unsigned create_ce_flags(unsigned stage)
struct cache_time timestamp;
unsigned name_hash_initialized : 1,
initialized : 1;
- struct hash_table name_hash;
- struct hash_table dir_hash;
+ struct hashmap name_hash;
+ struct hashmap dir_hash;
};
extern struct index_state the_index;
#define ce_modified(ce, st, options) ie_modified(&the_index, (ce), (st), (options))
#define cache_dir_exists(name, namelen) index_dir_exists(&the_index, (name), (namelen))
#define cache_file_exists(name, namelen, igncase) index_file_exists(&the_index, (name), (namelen), (igncase))
-#define cache_name_exists(name, namelen, igncase) index_name_exists(&the_index, (name), (namelen), (igncase))
#define cache_name_is_other(name, namelen) index_name_is_other(&the_index, (name), (namelen))
#define resolve_undo_clear() resolve_undo_clear_index(&the_index)
#define unmerge_cache_entry_at(at) unmerge_index_entry_at(&the_index, at)
extern int init_db(const char *template_dir, unsigned int flags);
extern void sanitize_stdfds(void);
+extern int daemonize(void);
#define alloc_nr(x) (((x)+16)*3/2)
extern int verify_path(const char *path);
extern struct cache_entry *index_dir_exists(struct index_state *istate, const char *name, int namelen);
extern struct cache_entry *index_file_exists(struct index_state *istate, const char *name, int namelen, int igncase);
-extern struct cache_entry *index_name_exists(struct index_state *istate, const char *name, int namelen, int igncase);
extern int index_name_pos(const struct index_state *, const char *name, int namelen);
#define ADD_CACHE_OK_TO_ADD 1 /* Ok to add */
#define ADD_CACHE_OK_TO_REPLACE 2 /* Ok to replace file/directory */
#define ADD_CACHE_IGNORE_ERRORS 4
#define ADD_CACHE_IGNORE_REMOVAL 8
#define ADD_CACHE_INTENT 16
-#define ADD_CACHE_IMPLICIT_DOT 32 /* internal to "git add -u/-A" */
extern int add_to_index(struct index_state *, const char *path, struct stat *, int flags);
extern int add_file_to_index(struct index_state *, const char *path, int flags);
-extern struct cache_entry *make_cache_entry(unsigned int mode, const unsigned char *sha1, const char *path, int stage, int refresh);
+extern struct cache_entry *make_cache_entry(unsigned int mode, const unsigned char *sha1, const char *path, int stage, unsigned int refresh_options);
extern int ce_same_name(const struct cache_entry *a, const struct cache_entry *b);
+extern void set_object_name_for_intent_to_add_entry(struct cache_entry *ce);
extern int index_name_is_other(const struct index_state *, const char *, int);
extern void *read_blob_data_from_index(struct index_state *, const char *, unsigned long *);
#define CE_MATCH_RACY_IS_DIRTY 02
/* do stat comparison even if CE_SKIP_WORKTREE is true */
#define CE_MATCH_IGNORE_SKIP_WORKTREE 04
+/* ignore non-existent files during stat update */
+#define CE_MATCH_IGNORE_MISSING 0x08
+/* enable stat refresh */
+#define CE_MATCH_REFRESH 0x10
extern int ie_match_stat(const struct index_state *, const struct cache_entry *, struct stat *, unsigned int);
extern int ie_modified(const struct index_state *, const struct cache_entry *, struct stat *, unsigned int);
-extern int ce_path_match(const struct cache_entry *ce, const struct pathspec *pathspec);
-
#define HASH_WRITE_OBJECT 1
#define HASH_FORMAT_CHECK 2
extern int index_fd(unsigned char *sha1, int fd, struct stat *st, enum object_type type, const char *path, unsigned flags);
extern size_t delta_base_cache_limit;
extern unsigned long big_file_threshold;
extern unsigned long pack_size_limit_cfg;
-extern int read_replace_refs;
+
+/*
+ * Do replace refs need to be checked this run? This variable is
+ * initialized to true unless --no-replace-object is used or
+ * $GIT_NO_REPLACE_OBJECTS is set, but is set to false by some
+ * commands that do not want replace references to be active. As an
+ * optimization it is also set to false if replace references have
+ * been sought but there were none.
+ */
+extern int check_replace_refs;
+
extern int fsync_object_files;
extern int core_preload_index;
extern int core_apply_sparse_checkout;
extern char *git_path_submodule(const char *path, const char *fmt, ...)
__attribute__((format (printf, 2, 3)));
-extern char *sha1_file_name(const unsigned char *sha1);
+/*
+ * Return the name of the file in the local object database that would
+ * be used to store a loose object with the specified sha1. The
+ * return value is a pointer to a statically allocated buffer that is
+ * overwritten each time the function is called.
+ */
+extern const char *sha1_file_name(const unsigned char *sha1);
+
+/*
+ * Return the name of the (local) packfile with the specified sha1 in
+ * its name. The return value is a pointer to memory that is
+ * overwritten each time this function is called.
+ */
extern char *sha1_pack_name(const unsigned char *sha1);
+
+/*
+ * Return the name of the (local) pack index file with the specified
+ * sha1 in its name. The return value is a pointer to memory that is
+ * overwritten each time this function is called.
+ */
extern char *sha1_pack_index_name(const unsigned char *sha1);
+
extern const char *find_unique_abbrev(const unsigned char *sha1, int);
extern const unsigned char null_sha1[20];
{
return read_sha1_file_extended(sha1, type, size, LOOKUP_REPLACE_OBJECT);
}
+
+/*
+ * This internal function is only declared here for the benefit of
+ * lookup_replace_object(). Please do not call it directly.
+ */
extern const unsigned char *do_lookup_replace_object(const unsigned char *sha1);
+
+/*
+ * If object sha1 should be replaced, return the replacement object's
+ * name (replaced recursively, if necessary). The return value is
+ * either sha1 or a pointer to a permanently-allocated value. When
+ * object replacement is suppressed, always return sha1.
+ */
static inline const unsigned char *lookup_replace_object(const unsigned char *sha1)
{
- if (!read_replace_refs)
+ if (!check_replace_refs)
return sha1;
return do_lookup_replace_object(sha1);
}
+
static inline const unsigned char *lookup_replace_object_extended(const unsigned char *sha1, unsigned flag)
{
if (!(flag & LOOKUP_REPLACE_OBJECT))
extern int write_sha1_file(const void *buf, unsigned long len, const char *type, unsigned char *return_sha1);
extern int pretend_sha1_file(void *, unsigned long, enum object_type, unsigned char *);
extern int force_object_loose(const unsigned char *sha1, time_t mtime);
+extern int git_open_noatime(const char *name);
extern void *map_sha1_file(const unsigned char *sha1, unsigned long *size);
extern int unpack_sha1_header(git_zstream *stream, unsigned char *map, unsigned long mapsize, void *buffer, unsigned long bufsiz);
extern int parse_sha1_header(const char *hdr, unsigned long *sizep);
extern int move_temp_to_file(const char *tmpfile, const char *filename);
extern int has_sha1_pack(const unsigned char *sha1);
+
+/*
+ * Return true iff we have an object named sha1, whether local or in
+ * an alternate object database, and whether packed or loose. This
+ * function does not respect replace references.
+ */
extern int has_sha1_file(const unsigned char *sha1);
+
+/*
+ * Return true iff an alternate object database has a loose object
+ * with the specified name. This function does not respect replace
+ * references.
+ */
extern int has_loose_object_nonlocal(const unsigned char *sha1);
extern int has_pack_index(const unsigned char *sha1);
unsigned long approxidate_careful(const char *, int *);
unsigned long approxidate_relative(const char *date, const struct timeval *now);
enum date_mode parse_date_format(const char *format);
+int date_overflows(unsigned long date);
#define IDENT_STRICT 1
#define IDENT_NO_DATE 2
struct packed_git *packs);
extern void pack_report(void);
+
+/*
+ * mmap the index file for the specified packfile (if it is not
+ * already mmapped). Return 0 on success.
+ */
extern int open_pack_index(struct packed_git *);
+
+/*
+ * munmap the index file for the specified packfile (if it is
+ * currently mmapped).
+ */
extern void close_pack_index(struct packed_git *);
+
extern unsigned char *use_pack(struct packed_git *, struct pack_window **, off_t, unsigned long *);
extern void close_pack_windows(struct packed_git *);
extern void unuse_pack(struct pack_window **);
extern void free_pack_by_name(const char *);
extern void clear_delta_base_cache(void);
extern struct packed_git *add_packed_git(const char *, int, int);
-extern const unsigned char *nth_packed_object_sha1(struct packed_git *, uint32_t);
-extern off_t nth_packed_object_offset(const struct packed_git *, uint32_t);
-extern off_t find_pack_entry_one(const unsigned char *, struct packed_git *);
+
+/*
+ * Return the SHA-1 of the nth object within the specified packfile.
+ * Open the index if it is not already open. The return value points
+ * at the SHA-1 within the mmapped index. Return NULL if there is an
+ * error.
+ */
+extern const unsigned char *nth_packed_object_sha1(struct packed_git *, uint32_t n);
+
+/*
+ * Return the offset of the nth object within the specified packfile.
+ * The index must already be opened.
+ */
+extern off_t nth_packed_object_offset(const struct packed_git *, uint32_t n);
+
+/*
+ * If the object named sha1 is present in the specified packfile,
+ * return its offset within the packfile; otherwise, return 0.
+ */
+extern off_t find_pack_entry_one(const unsigned char *sha1, struct packed_git *);
+
extern int is_pack_valid(struct packed_git *);
extern void *unpack_entry(struct packed_git *, off_t, enum object_type *, unsigned long *);
extern unsigned long unpack_object_header_buffer(const unsigned char *buf, unsigned long len, enum object_type *type, unsigned long *sizep);
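
/* Hedged sketch (not part of the patch) of walking a pack with the
 * helpers documented above; it assumes git's internal declarations
 * (struct packed_git with its num_objects field, sha1_to_hex()) are in
 * scope. */
static void list_pack_objects(struct packed_git *p)
{
	uint32_t i;

	if (open_pack_index(p))
		return;	/* could not mmap the .idx file */
	for (i = 0; i < p->num_objects; i++) {
		const unsigned char *sha1 = nth_packed_object_sha1(p, i);
		if (sha1)
			printf("%s at offset %lu\n", sha1_to_hex(sha1),
			       (unsigned long)nth_packed_object_offset(p, i));
	}
}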
#define CONFIG_INVALID_PATTERN 6
#define CONFIG_GENERIC_ERROR 7
+struct git_config_source {
+ unsigned int use_stdin:1;
+ const char *file;
+ const char *blob;
+};
+
typedef int (*config_fn_t)(const char *, const char *, void *);
extern int git_default_config(const char *, const char *, void *);
extern int git_config_from_file(config_fn_t fn, const char *, void *);
extern int git_config_from_parameters(config_fn_t fn, void *data);
extern int git_config(config_fn_t fn, void *);
extern int git_config_with_options(config_fn_t fn, void *,
- const char *filename,
- const char *blob_ref,
+ struct git_config_source *config_source,
int respect_includes);
extern int git_config_early(config_fn_t fn, void *, const char *repo_config);
extern int git_parse_ulong(const char *, unsigned long *);
*/
void stat_validity_update(struct stat_validity *sv, int fd);
+int versioncmp(const char *s1, const char *s2);
+
#endif /* CACHE_H */
bad=0
while read builtin
do
- base=`expr "$builtin" : 'git-\(.*\)'`
- x=`sed -ne 's/.*{ "'$base'", \(cmd_[^, ]*\).*/'$base' \1/p' git.c`
+ base=$(expr "$builtin" : 'git-\(.*\)')
+ x=$(sed -ne 's/.*{ "'$base'", \(cmd_[^, ]*\).*/'$base' \1/p' git.c)
if test -z "$x"
then
echo "$base is builtin but not listed in git.c command list"
static struct combine_diff_path *intersect_paths(struct combine_diff_path *curr, int n, int num_parent)
{
struct diff_queue_struct *q = &diff_queued_diff;
- struct combine_diff_path *p;
- int i;
+ struct combine_diff_path *p, **tail = &curr;
+ int i, cmp;
if (!n) {
- struct combine_diff_path *list = NULL, **tail = &list;
for (i = 0; i < q->nr; i++) {
int len;
const char *path;
p->path = (char *) &(p->parent[num_parent]);
memcpy(p->path, path, len);
p->path[len] = 0;
- p->len = len;
p->next = NULL;
memset(p->parent, 0,
sizeof(p->parent[0]) * num_parent);
*tail = p;
tail = &p->next;
}
- return list;
+ return curr;
}
- for (p = curr; p; p = p->next) {
- int found = 0;
- if (!p->len)
+ /*
+ * paths in curr (linked list) and q->queue[] (array) are
+ * both sorted in the tree order.
+ */
+ i = 0;
+ while ((p = *tail) != NULL) {
+ cmp = ((i >= q->nr)
+ ? -1 : strcmp(p->path, q->queue[i]->two->path));
+
+ if (cmp < 0) {
+ /* p->path not in q->queue[]; drop it */
+ *tail = p->next;
+ free(p);
continue;
- for (i = 0; i < q->nr; i++) {
- const char *path;
- int len;
+ }
- if (diff_unmodified_pair(q->queue[i]))
- continue;
- path = q->queue[i]->two->path;
- len = strlen(path);
- if (len == p->len && !memcmp(path, p->path, len)) {
- found = 1;
- hashcpy(p->parent[n].sha1, q->queue[i]->one->sha1);
- p->parent[n].mode = q->queue[i]->one->mode;
- p->parent[n].status = q->queue[i]->status;
- break;
- }
+ if (cmp > 0) {
+ /* q->queue[i] not in p->path; skip it */
+ i++;
+ continue;
}
- if (!found)
- p->len = 0;
+
+ hashcpy(p->parent[n].sha1, q->queue[i]->one->sha1);
+ p->parent[n].mode = q->queue[i]->one->mode;
+ p->parent[n].status = q->queue[i]->status;
+
+ tail = &p->next;
+ i++;
}
return curr;
}
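
/* The rewritten loop above is a plain merge-style intersection of two
 * sorted sequences.  The same pattern in standalone form, using arrays
 * of names instead of git's combine_diff_path list; the data is
 * invented for illustration: */

#include <stdio.h>
#include <string.h>

int main(void)
{
	const char *curr[] = { "Makefile", "cache.h", "diff.c", "tag.c" };
	const char *queue[] = { "cache.h", "combine-diff.c", "tag.c" };
	size_t nc = 4, nq = 3, i = 0, j = 0;

	while (i < nc) {
		int cmp = (j >= nq) ? -1 : strcmp(curr[i], queue[j]);
		if (cmp < 0)		/* only in curr: drop it */
			i++;
		else if (cmp > 0)	/* only in queue: skip it */
			j++;
		else {			/* in both: keep it */
			printf("%s\n", curr[i]);
			i++;
			j++;
		}
	}
	return 0;
}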
{
struct diff_options *opt = &rev->diffopt;
- if (!p->len)
- return;
if (opt->output_format & (DIFF_FORMAT_RAW |
DIFF_FORMAT_NAME |
DIFF_FORMAT_NAME_STATUS))
q.queue = xcalloc(num_paths, sizeof(struct diff_filepair *));
q.alloc = num_paths;
q.nr = num_paths;
- for (i = 0, p = paths; p; p = p->next) {
- if (!p->len)
- continue;
+ for (i = 0, p = paths; p; p = p->next)
q.queue[i++] = combined_pair(p, num_parent);
- }
opt->format_callback(&q, opt, opt->format_callback_data);
for (i = 0; i < num_paths; i++)
free_combined_pair(q.queue[i]);
free(q.queue);
}
+static const char *path_path(void *obj)
+{
+ struct combine_diff_path *path = (struct combine_diff_path *)obj;
+
+ return path->path;
+}
+
void diff_tree_combined(const unsigned char *sha1,
const struct sha1_array *parents,
int dense,
diffopts.output_format = DIFF_FORMAT_NO_OUTPUT;
DIFF_OPT_SET(&diffopts, RECURSIVE);
DIFF_OPT_CLR(&diffopts, ALLOW_EXTERNAL);
+ /* tell diff_tree to emit paths in sorted (=tree) order */
+ diffopts.orderfile = NULL;
show_log_first = !!rev->loginfo && !rev->no_commit_id;
needsep = 0;
printf("%s%c", diff_line_prefix(opt),
opt->line_termination);
}
+
+ /* if showing diff, show it in requested order */
+ if (diffopts.output_format != DIFF_FORMAT_NO_OUTPUT &&
+ opt->orderfile) {
+ diffcore_order(opt->orderfile);
+ }
+
diff_flush(&diffopts);
}
- /* find out surviving paths */
- for (num_paths = 0, p = paths; p; p = p->next) {
- if (p->len)
- num_paths++;
+ /* find out number of surviving paths */
+ for (num_paths = 0, p = paths; p; p = p->next)
+ num_paths++;
+
+ /* order paths according to diffcore_order */
+ if (opt->orderfile && num_paths) {
+ struct obj_order *o;
+
+ o = xmalloc(sizeof(*o) * num_paths);
+ for (i = 0, p = paths; p; p = p->next, i++)
+ o[i].obj = p;
+ order_objects(opt->orderfile, path_path, o, num_paths);
+ for (i = 0; i < num_paths - 1; i++) {
+ p = o[i].obj;
+ p->next = o[i+1].obj;
+ }
+
+ p = o[num_paths-1].obj;
+ p->next = NULL;
+ paths = o[0].obj;
+ free(o);
}
+
+
if (num_paths) {
if (opt->output_format & (DIFF_FORMAT_RAW |
DIFF_FORMAT_NAME |
DIFF_FORMAT_NAME_STATUS)) {
- for (p = paths; p; p = p->next) {
- if (p->len)
- show_raw_diff(p, num_parent, rev);
- }
+ for (p = paths; p; p = p->next)
+ show_raw_diff(p, num_parent, rev);
needsep = 1;
}
else if (opt->output_format &
if (needsep)
printf("%s%c", diff_line_prefix(opt),
opt->line_termination);
- for (p = paths; p; p = p->next) {
- if (p->len)
- show_patch_diff(p, num_parent, dense,
- 0, rev);
- }
+ for (p = paths; p; p = p->next)
+ show_patch_diff(p, num_parent, dense,
+ 0, rev);
}
}
#include "mergesort.h"
#include "commit-slab.h"
#include "prio-queue.h"
+#include "sha1-lookup.h"
static struct commit_extra_header *read_commit_extra_header_lines(const char *buf, size_t len, const char **);
static struct commit_graft **commit_graft;
static int commit_graft_alloc, commit_graft_nr;
+static const unsigned char *commit_graft_sha1_access(size_t index, void *table)
+{
+ struct commit_graft **commit_graft_table = table;
+ return commit_graft_table[index]->sha1;
+}
+
static int commit_graft_pos(const unsigned char *sha1)
{
- int lo, hi;
- lo = 0;
- hi = commit_graft_nr;
- while (lo < hi) {
- int mi = (lo + hi) / 2;
- struct commit_graft *graft = commit_graft[mi];
- int cmp = hashcmp(sha1, graft->sha1);
- if (!cmp)
- return mi;
- if (cmp < 0)
- hi = mi;
- else
- lo = mi + 1;
- }
- return -lo - 1;
+ return sha1_pos(sha1, commit_graft, commit_graft_nr,
+ commit_graft_sha1_access);
}
int register_commit_graft(struct commit_graft *graft, int ignore_dups)
return 1;
}
pos = -pos - 1;
- if (commit_graft_alloc <= ++commit_graft_nr) {
- commit_graft_alloc = alloc_nr(commit_graft_alloc);
- commit_graft = xrealloc(commit_graft,
- sizeof(*commit_graft) *
- commit_graft_alloc);
- }
+ ALLOC_GROW(commit_graft, commit_graft_nr + 1, commit_graft_alloc);
+ commit_graft_nr++;
if (pos < commit_graft_nr)
memmove(commit_graft + pos + 1,
commit_graft + pos,
static void record_author_date(struct author_date_slab *author_date,
struct commit *commit)
{
- const char *buf, *line_end;
+ const char *buf, *line_end, *ident_line;
char *buffer = NULL;
struct ident_split ident;
char *date_end;
buf;
buf = line_end + 1) {
line_end = strchrnul(buf, '\n');
- if (!starts_with(buf, "author ")) {
+ ident_line = skip_prefix(buf, "author ");
+ if (!ident_line) {
if (!line_end[0] || line_end[1] == '\n')
return; /* end of header */
continue;
}
if (split_ident_line(&ident,
- buf + strlen("author "),
- line_end - (buf + strlen("author "))) ||
+ ident_line, line_end - ident_line) ||
!ident.date_begin || !ident.date_end)
goto fail_exit; /* malformed "author" line */
break;
/* merge-base stuff */
-/* bits #0..15 in revision.h */
+/* Remember to update object flag allocation in object.h */
#define PARENT1 (1u<<16)
#define PARENT2 (1u<<17)
#define STALE (1u<<18)
for (i = 0; i < ARRAY_SIZE(sigcheck_gpg_status); i++) {
const char *found, *next;
- if (starts_with(buf, sigcheck_gpg_status[i].check + 1)) {
- /* At the very beginning of the buffer */
- found = buf + strlen(sigcheck_gpg_status[i].check + 1);
- } else {
+ found = skip_prefix(buf, sigcheck_gpg_status[i].check + 1);
+ if (!found) {
found = strstr(buf, sigcheck_gpg_status[i].check);
if (!found)
continue;
extern void setup_alternate_shallow(struct lock_file *shallow_lock,
const char **alternate_shallow_file,
const struct sha1_array *extra);
-extern char *setup_temporary_shallow(const struct sha1_array *extra);
+extern const char *setup_temporary_shallow(const struct sha1_array *extra);
extern void advertise_shallow_grafts(int);
struct shallow_info {
int compare_commits_by_commit_date(const void *a_, const void *b_, void *unused);
+LAST_ARG_MUST_BE_NULL
+extern int run_commit_hook(int editor_is_used, const char *index_file, const char *name, ...);
+
#endif /* COMMIT_H */
((val & 0x000000ff) << 24));
}
+static inline uint64_t default_bswap64(uint64_t val)
+{
+ return (((val & (uint64_t)0x00000000000000ffULL) << 56) |
+ ((val & (uint64_t)0x000000000000ff00ULL) << 40) |
+ ((val & (uint64_t)0x0000000000ff0000ULL) << 24) |
+ ((val & (uint64_t)0x00000000ff000000ULL) << 8) |
+ ((val & (uint64_t)0x000000ff00000000ULL) >> 8) |
+ ((val & (uint64_t)0x0000ff0000000000ULL) >> 24) |
+ ((val & (uint64_t)0x00ff000000000000ULL) >> 40) |
+ ((val & (uint64_t)0xff00000000000000ULL) >> 56));
+}
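+
+/* Quick standalone sanity check for the byte-swap helper above,
+ * assuming this header (git's compat/bswap.h) is included: */
+
+#include <assert.h>
+#include <stdint.h>
+
+int main(void)
+{
+	assert(default_bswap64(UINT64_C(0x0102030405060708)) ==
+	       UINT64_C(0x0807060504030201));
+	return 0;
+}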
+
#undef bswap32
+#undef bswap64
#if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))
return result;
}
+#define bswap64 git_bswap64
+#if defined(__x86_64__)
+static inline uint64_t git_bswap64(uint64_t x)
+{
+ uint64_t result;
+ if (__builtin_constant_p(x))
+ result = default_bswap64(x);
+ else
+ __asm__("bswap %q0" : "=r" (result) : "0" (x));
+ return result;
+}
+#else
+static inline uint64_t git_bswap64(uint64_t x)
+{
+ union { uint64_t i64; uint32_t i32[2]; } tmp, result;
+ if (__builtin_constant_p(x))
+ result.i64 = default_bswap64(x);
+ else {
+ tmp.i64 = x;
+ result.i32[0] = git_bswap32(tmp.i32[1]);
+ result.i32[1] = git_bswap32(tmp.i32[0]);
+ }
+ return result.i64;
+}
+#endif
+
#elif defined(_MSC_VER) && (defined(_M_IX86) || defined(_M_X64))
#include <stdlib.h>
#define bswap32(x) _byteswap_ulong(x)
+#define bswap64(x) _byteswap_uint64(x)
#endif
-#ifdef bswap32
+#if defined(bswap32)
#undef ntohl
#undef htonl
#define htonl(x) bswap32(x)
#endif
+
+#if defined(bswap64)
+
+#undef ntohll
+#undef htonll
+#define ntohll(x) bswap64(x)
+#define htonll(x) bswap64(x)
+
+#else
+
+#undef ntohll
+#undef htonll
+
+#if !defined(__BYTE_ORDER)
+# if defined(BYTE_ORDER) && defined(LITTLE_ENDIAN) && defined(BIG_ENDIAN)
+# define __BYTE_ORDER BYTE_ORDER
+# define __LITTLE_ENDIAN LITTLE_ENDIAN
+# define __BIG_ENDIAN BIG_ENDIAN
+# endif
+#endif
+
+#if !defined(__BYTE_ORDER)
+# error "Cannot determine endianness"
+#endif
+
+#if __BYTE_ORDER == __BIG_ENDIAN
+# define ntohll(n) (n)
+# define htonll(n) (n)
+#else
+# define ntohll(n) default_bswap64(n)
+# define htonll(n) default_bswap64(n)
+#endif
+
+#endif
+
+/*
+ * Performance might be improved if the CPU architecture is OK with
+ * unaligned 32-bit loads and a fast ntohl() is available.
+ * Otherwise fall back to byte loads and shifts which is portable,
+ * and is faster on architectures with memory alignment issues.
+ */
+
+#if defined(__i386__) || defined(__x86_64__) || \
+ defined(_M_IX86) || defined(_M_X64) || \
+ defined(__ppc__) || defined(__ppc64__) || \
+ defined(__powerpc__) || defined(__powerpc64__) || \
+ defined(__s390__) || defined(__s390x__)
+
+#define get_be16(p) ntohs(*(unsigned short *)(p))
+#define get_be32(p) ntohl(*(unsigned int *)(p))
+#define put_be32(p, v) do { *(unsigned int *)(p) = htonl(v); } while (0)
+
+#else
+
+#define get_be16(p) ( \
+ (*((unsigned char *)(p) + 0) << 8) | \
+ (*((unsigned char *)(p) + 1) << 0) )
+#define get_be32(p) ( \
+ (*((unsigned char *)(p) + 0) << 24) | \
+ (*((unsigned char *)(p) + 1) << 16) | \
+ (*((unsigned char *)(p) + 2) << 8) | \
+ (*((unsigned char *)(p) + 3) << 0) )
+#define put_be32(p, v) do { \
+ unsigned int __v = (v); \
+ *((unsigned char *)(p) + 0) = __v >> 24; \
+ *((unsigned char *)(p) + 1) = __v >> 16; \
+ *((unsigned char *)(p) + 2) = __v >> 8; \
+ *((unsigned char *)(p) + 3) = __v >> 0; } while (0)
+
+#endif
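+
+/* Either branch of the #if above yields the same observable behaviour.
+ * A small usage sketch, assuming the macros above are in scope (for
+ * example via compat/bswap.h): */
+
+#include <stdio.h>
+
+int main(void)
+{
+	unsigned char buf[4];
+
+	put_be32(buf, 0x12345678);	/* buf now holds 12 34 56 78 */
+	printf("%02x %02x %02x %02x\n", buf[0], buf[1], buf[2], buf[3]);
+	printf("%08x\n", (unsigned int)get_be32(buf));	/* 12345678 */
+	return 0;
+}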
+++ /dev/null
-/* Copyright (C) 1991, 92, 93, 96, 97, 98, 99 Free Software Foundation, Inc.
- This file is part of the GNU C Library.
-
- This library is free software; you can redistribute it and/or
- modify it under the terms of the GNU Library General Public License as
- published by the Free Software Foundation; either version 2 of the
- License, or (at your option) any later version.
-
- This library is distributed in the hope that it will be useful,
- but WITHOUT ANY WARRANTY; without even the implied warranty of
- MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- Library General Public License for more details.
-
- You should have received a copy of the GNU Library General Public
- License along with this library; see the file COPYING.LIB. If not,
- write to the Free Software Foundation, Inc., 59 Temple Place - Suite 330,
- Boston, MA 02111-1307, USA. */
-
-#if HAVE_CONFIG_H
-# include <config.h>
-#endif
-
-/* Enable GNU extensions in fnmatch.h. */
-#ifndef _GNU_SOURCE
-# define _GNU_SOURCE 1
-#endif
-
-#include <stddef.h>
-#include <errno.h>
-#include <fnmatch.h>
-#include <ctype.h>
-
-#if HAVE_STRING_H || defined _LIBC
-# include <string.h>
-#else
-# include <strings.h>
-#endif
-
-#if defined STDC_HEADERS || defined _LIBC
-# include <stdlib.h>
-#endif
-
-/* For platforms which support the ISO C amendment 1 functionality we
- support user defined character classes. */
-#if defined _LIBC || (defined HAVE_WCTYPE_H && defined HAVE_WCHAR_H)
-/* Solaris 2.5 has a bug: <wchar.h> must be included before <wctype.h>. */
-# include <wchar.h>
-# include <wctype.h>
-#endif
-
-/* Comment out all this code if we are using the GNU C Library, and are not
- actually compiling the library itself. This code is part of the GNU C
- Library, but also included in many other GNU distributions. Compiling
- and linking in this code is a waste when using the GNU C library
- (especially if it is a shared library). Rather than having every GNU
- program understand `configure --with-gnu-libc' and omit the object files,
- it is simpler to just do this in the source for each such file. */
-
-#if defined NO_FNMATCH || defined NO_FNMATCH_CASEFOLD || \
- defined _LIBC || !defined __GNU_LIBRARY__
-
-
-# if defined STDC_HEADERS || !defined isascii
-# define ISASCII(c) 1
-# else
-# define ISASCII(c) isascii(c)
-# endif
-
-# ifdef isblank
-# define ISBLANK(c) (ISASCII (c) && isblank (c))
-# else
-# define ISBLANK(c) ((c) == ' ' || (c) == '\t')
-# endif
-# ifdef isgraph
-# define ISGRAPH(c) (ISASCII (c) && isgraph (c))
-# else
-# define ISGRAPH(c) (ISASCII (c) && isprint (c) && !isspace (c))
-# endif
-
-# define ISPRINT(c) (ISASCII (c) && isprint (c))
-# define ISDIGIT(c) (ISASCII (c) && isdigit (c))
-# define ISALNUM(c) (ISASCII (c) && isalnum (c))
-# define ISALPHA(c) (ISASCII (c) && isalpha (c))
-# define ISCNTRL(c) (ISASCII (c) && iscntrl (c))
-# define ISLOWER(c) (ISASCII (c) && islower (c))
-# define ISPUNCT(c) (ISASCII (c) && ispunct (c))
-# define ISSPACE(c) (ISASCII (c) && isspace (c))
-# define ISUPPER(c) (ISASCII (c) && isupper (c))
-# define ISXDIGIT(c) (ISASCII (c) && isxdigit (c))
-
-# define STREQ(s1, s2) ((strcmp (s1, s2) == 0))
-
-# if defined _LIBC || (defined HAVE_WCTYPE_H && defined HAVE_WCHAR_H)
-/* The GNU C library provides support for user-defined character classes
- and the functions from ISO C amendment 1. */
-# ifdef CHARCLASS_NAME_MAX
-# define CHAR_CLASS_MAX_LENGTH CHARCLASS_NAME_MAX
-# else
-/* This shouldn't happen but some implementation might still have this
- problem. Use a reasonable default value. */
-# define CHAR_CLASS_MAX_LENGTH 256
-# endif
-
-# ifdef _LIBC
-# define IS_CHAR_CLASS(string) __wctype (string)
-# else
-# define IS_CHAR_CLASS(string) wctype (string)
-# endif
-# else
-# define CHAR_CLASS_MAX_LENGTH 6 /* Namely, `xdigit'. */
-
-# define IS_CHAR_CLASS(string) \
- (STREQ (string, "alpha") || STREQ (string, "upper") \
- || STREQ (string, "lower") || STREQ (string, "digit") \
- || STREQ (string, "alnum") || STREQ (string, "xdigit") \
- || STREQ (string, "space") || STREQ (string, "print") \
- || STREQ (string, "punct") || STREQ (string, "graph") \
- || STREQ (string, "cntrl") || STREQ (string, "blank"))
-# endif
-
-/* Avoid depending on library functions or files
- whose names are inconsistent. */
-
-# if !defined _LIBC && !defined getenv
-extern char *getenv (const char *name);
-# endif
-
-# ifndef errno
-extern int errno;
-# endif
-
-# ifndef NULL
-# define NULL 0
-# endif
-
-/* This function doesn't exist on most systems. */
-
-# if !defined HAVE___STRCHRNUL && !defined _LIBC
-static char *
-__strchrnul (const char *s, int c)

-{
- char *result = strchr (s, c);
- if (result == NULL)
- result = strchr (s, '\0');
- return result;
-}
-# endif
-
-# ifndef internal_function
-/* Inside GNU libc we mark some function in a special way. In other
- environments simply ignore the marking. */
-# define internal_function
-# endif
-
-/* Match STRING against the filename pattern PATTERN, returning zero if
- it matches, nonzero if not. */
-static int internal_fnmatch __P ((const char *pattern, const char *string,
- int no_leading_period, int flags))
- internal_function;
-static int
-internal_function
-internal_fnmatch (const char *pattern, const char *string, int no_leading_period, int flags)
-{
- register const char *p = pattern, *n = string;
- register unsigned char c;
-
-/* Note that this evaluates C many times. */
-# ifdef _LIBC
-# define FOLD(c) ((flags & FNM_CASEFOLD) ? tolower (c) : (c))
-# else
-# define FOLD(c) ((flags & FNM_CASEFOLD) && ISUPPER (c) ? tolower (c) : (c))
-# endif
-
- while ((c = *p++) != '\0')
- {
- c = FOLD (c);
-
- switch (c)
- {
- case '?':
- if (*n == '\0')
- return FNM_NOMATCH;
- else if (*n == '/' && (flags & FNM_FILE_NAME))
- return FNM_NOMATCH;
- else if (*n == '.' && no_leading_period
- && (n == string
- || (n[-1] == '/' && (flags & FNM_FILE_NAME))))
- return FNM_NOMATCH;
- break;
-
- case '\\':
- if (!(flags & FNM_NOESCAPE))
- {
- c = *p++;
- if (c == '\0')
- /* Trailing \ loses. */
- return FNM_NOMATCH;
- c = FOLD (c);
- }
- if (FOLD ((unsigned char) *n) != c)
- return FNM_NOMATCH;
- break;
-
- case '*':
- if (*n == '.' && no_leading_period
- && (n == string
- || (n[-1] == '/' && (flags & FNM_FILE_NAME))))
- return FNM_NOMATCH;
-
- for (c = *p++; c == '?' || c == '*'; c = *p++)
- {
- if (*n == '/' && (flags & FNM_FILE_NAME))
- /* A slash does not match a wildcard under FNM_FILE_NAME. */
- return FNM_NOMATCH;
- else if (c == '?')
- {
- /* A ? needs to match one character. */
- if (*n == '\0')
- /* There isn't another character; no match. */
- return FNM_NOMATCH;
- else
- /* One character of the string is consumed in matching
- this ? wildcard, so *??? won't match if there are
- less than three characters. */
- ++n;
- }
- }
-
- if (c == '\0')
- /* The wildcard(s) is/are the last element of the pattern.
- If the name is a file name and contains another slash
- this does mean it cannot match. */
- return ((flags & FNM_FILE_NAME) && strchr (n, '/') != NULL
- ? FNM_NOMATCH : 0);
- else
- {
- const char *endp;
-
- endp = __strchrnul (n, (flags & FNM_FILE_NAME) ? '/' : '\0');
-
- if (c == '[')
- {
- int flags2 = ((flags & FNM_FILE_NAME)
- ? flags : (flags & ~FNM_PERIOD));
-
- for (--p; n < endp; ++n)
- if (internal_fnmatch (p, n,
- (no_leading_period
- && (n == string
- || (n[-1] == '/'
- && (flags
- & FNM_FILE_NAME)))),
- flags2)
- == 0)
- return 0;
- }
- else if (c == '/' && (flags & FNM_FILE_NAME))
- {
- while (*n != '\0' && *n != '/')
- ++n;
- if (*n == '/'
- && (internal_fnmatch (p, n + 1, flags & FNM_PERIOD,
- flags) == 0))
- return 0;
- }
- else
- {
- int flags2 = ((flags & FNM_FILE_NAME)
- ? flags : (flags & ~FNM_PERIOD));
-
- if (c == '\\' && !(flags & FNM_NOESCAPE))
- c = *p;
- c = FOLD (c);
- for (--p; n < endp; ++n)
- if (FOLD ((unsigned char) *n) == c
- && (internal_fnmatch (p, n,
- (no_leading_period
- && (n == string
- || (n[-1] == '/'
- && (flags
- & FNM_FILE_NAME)))),
- flags2) == 0))
- return 0;
- }
- }
-
- /* If we come here no match is possible with the wildcard. */
- return FNM_NOMATCH;
-
- case '[':
- {
- /* Nonzero if the sense of the character class is inverted. */
- static int posixly_correct;
- register int not;
- char cold;
-
- if (posixly_correct == 0)
- posixly_correct = getenv ("POSIXLY_CORRECT") != NULL ? 1 : -1;
-
- if (*n == '\0')
- return FNM_NOMATCH;
-
- if (*n == '.' && no_leading_period && (n == string
- || (n[-1] == '/'
- && (flags
- & FNM_FILE_NAME))))
- return FNM_NOMATCH;
-
- if (*n == '/' && (flags & FNM_FILE_NAME))
- /* `/' cannot be matched. */
- return FNM_NOMATCH;
-
- not = (*p == '!' || (posixly_correct < 0 && *p == '^'));
- if (not)
- ++p;
-
- c = *p++;
- for (;;)
- {
- unsigned char fn = FOLD ((unsigned char) *n);
-
- if (!(flags & FNM_NOESCAPE) && c == '\\')
- {
- if (*p == '\0')
- return FNM_NOMATCH;
- c = FOLD ((unsigned char) *p);
- ++p;
-
- if (c == fn)
- goto matched;
- }
- else if (c == '[' && *p == ':')
- {
- /* Leave room for the null. */
- char str[CHAR_CLASS_MAX_LENGTH + 1];
- size_t c1 = 0;
-# if defined _LIBC || (defined HAVE_WCTYPE_H && defined HAVE_WCHAR_H)
- wctype_t wt;
-# endif
- const char *startp = p;
-
- for (;;)
- {
- if (c1 > CHAR_CLASS_MAX_LENGTH)
- /* The name is too long and therefore the pattern
- is ill-formed. */
- return FNM_NOMATCH;
-
- c = *++p;
- if (c == ':' && p[1] == ']')
- {
- p += 2;
- break;
- }
- if (c < 'a' || c >= 'z')
- {
- /* This cannot possibly be a character class name.
- Match it as a normal range. */
- p = startp;
- c = '[';
- goto normal_bracket;
- }
- str[c1++] = c;
- }
- str[c1] = '\0';
-
-# if defined _LIBC || (defined HAVE_WCTYPE_H && defined HAVE_WCHAR_H)
- wt = IS_CHAR_CLASS (str);
- if (wt == 0)
- /* Invalid character class name. */
- return FNM_NOMATCH;
-
- if (__iswctype (__btowc ((unsigned char) *n), wt))
- goto matched;
-# else
- if ((STREQ (str, "alnum") && ISALNUM ((unsigned char) *n))
- || (STREQ (str, "alpha") && ISALPHA ((unsigned char) *n))
- || (STREQ (str, "blank") && ISBLANK ((unsigned char) *n))
- || (STREQ (str, "cntrl") && ISCNTRL ((unsigned char) *n))
- || (STREQ (str, "digit") && ISDIGIT ((unsigned char) *n))
- || (STREQ (str, "graph") && ISGRAPH ((unsigned char) *n))
- || (STREQ (str, "lower") && ISLOWER ((unsigned char) *n))
- || (STREQ (str, "print") && ISPRINT ((unsigned char) *n))
- || (STREQ (str, "punct") && ISPUNCT ((unsigned char) *n))
- || (STREQ (str, "space") && ISSPACE ((unsigned char) *n))
- || (STREQ (str, "upper") && ISUPPER ((unsigned char) *n))
- || (STREQ (str, "xdigit") && ISXDIGIT ((unsigned char) *n)))
- goto matched;
-# endif
- }
- else if (c == '\0')
- /* [ (unterminated) loses. */
- return FNM_NOMATCH;
- else
- {
- normal_bracket:
- if (FOLD (c) == fn)
- goto matched;
-
- cold = c;
- c = *p++;
-
- if (c == '-' && *p != ']')
- {
- /* It is a range. */
- unsigned char cend = *p++;
- if (!(flags & FNM_NOESCAPE) && cend == '\\')
- cend = *p++;
- if (cend == '\0')
- return FNM_NOMATCH;
-
- if (cold <= fn && fn <= FOLD (cend))
- goto matched;
-
- c = *p++;
- }
- }
-
- if (c == ']')
- break;
- }
-
- if (!not)
- return FNM_NOMATCH;
- break;
-
- matched:
- /* Skip the rest of the [...] that already matched. */
- while (c != ']')
- {
- if (c == '\0')
- /* [... (unterminated) loses. */
- return FNM_NOMATCH;
-
- c = *p++;
- if (!(flags & FNM_NOESCAPE) && c == '\\')
- {
- if (*p == '\0')
- return FNM_NOMATCH;
- /* XXX 1003.2d11 is unclear if this is right. */
- ++p;
- }
- else if (c == '[' && *p == ':')
- {
- do
- if (*++p == '\0')
- return FNM_NOMATCH;
- while (*p != ':' || p[1] == ']');
- p += 2;
- c = *p;
- }
- }
- if (not)
- return FNM_NOMATCH;
- }
- break;
-
- default:
- if (c != FOLD ((unsigned char) *n))
- return FNM_NOMATCH;
- }
-
- ++n;
- }
-
- if (*n == '\0')
- return 0;
-
- if ((flags & FNM_LEADING_DIR) && *n == '/')
- /* The FNM_LEADING_DIR flag says that "foo*" matches "foobar/frobozz". */
- return 0;
-
- return FNM_NOMATCH;
-
-# undef FOLD
-}
-
-
-int
-fnmatch (const char *pattern, const char *string, int flags)
-{
- return internal_fnmatch (pattern, string, flags & FNM_PERIOD, flags);
-}
-
-#endif /* _LIBC or not __GNU_LIBRARY__. */
+++ /dev/null
-/* Copyright (C) 1991, 92, 93, 96, 97, 98, 99 Free Software Foundation, Inc.
- This file is part of the GNU C Library.
-
- The GNU C Library is free software; you can redistribute it and/or
- modify it under the terms of the GNU Library General Public License as
- published by the Free Software Foundation; either version 2 of the
- License, or (at your option) any later version.
-
- The GNU C Library is distributed in the hope that it will be useful,
- but WITHOUT ANY WARRANTY; without even the implied warranty of
- MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- Library General Public License for more details.
-
- You should have received a copy of the GNU Library General Public
- License along with the GNU C Library; see the file COPYING.LIB. If not,
- write to the Free Software Foundation, Inc., 59 Temple Place - Suite 330,
- Boston, MA 02111-1307, USA. */
-
-#ifndef _FNMATCH_H
-#define _FNMATCH_H 1
-
-#ifdef __cplusplus
-extern "C" {
-#endif
-
-#if defined __cplusplus || (defined __STDC__ && __STDC__) || defined WINDOWS32
-# if !defined __GLIBC__ || !defined __P
-# undef __P
-# define __P(protos) protos
-# endif
-#else /* Not C++ or ANSI C. */
-# undef __P
-# define __P(protos) ()
-/* We can get away without defining `const' here only because in this file
- it is used only inside the prototype for `fnmatch', which is elided in
- non-ANSI C where `const' is problematical. */
-#endif /* C++ or ANSI C. */
-
-#ifndef const
-# if (defined __STDC__ && __STDC__) || defined __cplusplus
-# define __const const
-# else
-# define __const
-# endif
-#endif
-
-/* We #undef these before defining them because some losing systems
- (HP-UX A.08.07 for example) define these in <unistd.h>. */
-#undef FNM_PATHNAME
-#undef FNM_NOESCAPE
-#undef FNM_PERIOD
-
-/* Bits set in the FLAGS argument to `fnmatch'. */
-#define FNM_PATHNAME (1 << 0) /* No wildcard can ever match `/'. */
-#define FNM_NOESCAPE (1 << 1) /* Backslashes don't quote special chars. */
-#define FNM_PERIOD (1 << 2) /* Leading `.' is matched only explicitly. */
-
-#if !defined _POSIX_C_SOURCE || _POSIX_C_SOURCE < 2 || defined _GNU_SOURCE
-# define FNM_FILE_NAME FNM_PATHNAME /* Preferred GNU name. */
-# define FNM_LEADING_DIR (1 << 3) /* Ignore `/...' after a match. */
-# define FNM_CASEFOLD (1 << 4) /* Compare without regard to case. */
-#endif
-
-/* Value returned by `fnmatch' if STRING does not match PATTERN. */
-#define FNM_NOMATCH 1
-
-/* This value is returned if the implementation does not support
- `fnmatch'. Since this is not the case here it will never be
- returned but the conformance test suites still require the symbol
- to be defined. */
-#ifdef _XOPEN_SOURCE
-# define FNM_NOSYS (-1)
-#endif
-
-/* Match NAME against the filename pattern PATTERN,
- returning zero if it matches, FNM_NOMATCH if not. */
-extern int fnmatch __P ((__const char *__pattern, __const char *__name,
- int __flags));
-
-#ifdef __cplusplus
-}
-#endif
-
-#endif /* fnmatch.h */
--- /dev/null
+#include "../git-compat-util.h"
+#undef gmtime
+#undef gmtime_r
+
+struct tm *git_gmtime(const time_t *timep)
+{
+ static struct tm result;
+ return git_gmtime_r(timep, &result);
+}
+
+struct tm *git_gmtime_r(const time_t *timep, struct tm *result)
+{
+ struct tm *ret;
+
+ memset(result, 0, sizeof(*result));
+ ret = gmtime_r(timep, result);
+
+ /*
+ * Rather than NULL, FreeBSD gmtime simply leaves the "struct tm"
+ * untouched when it encounters overflow. Since "mday" cannot otherwise
+ * be zero, we can test this very quickly.
+ */
+ if (ret && !ret->tm_mday) {
+ ret = NULL;
+ errno = EOVERFLOW;
+ }
+
+ return ret;
+}
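+
+/* Illustrative caller, assuming a build where GMTIME_UNRELIABLE_ERRORS
+ * is set so that git-compat-util.h declares git_gmtime_r() and routes
+ * gmtime_r() to the wrapper above.  The point is that overflow is
+ * reported uniformly as NULL with errno set to EOVERFLOW. */
+
+#include <errno.h>
+#include <stdio.h>
+#include <time.h>
+
+int main(void)
+{
+	time_t now = time(NULL);
+	struct tm tm;
+
+	if (!git_gmtime_r(&now, &tm)) {
+		if (errno == EOVERFLOW)
+			fprintf(stderr, "timestamp out of range for struct tm\n");
+		return 1;
+	}
+	printf("%04d-%02d-%02d\n",
+	       tm.tm_year + 1900, tm.tm_mon + 1, tm.tm_mday);
+	return 0;
+}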
push(@args, "libeay32.lib");
} elsif ("$arg" eq "-lssl") {
push(@args, "ssleay32.lib");
+ } elsif ("$arg" eq "-lcurl") {
+ push(@args, "libcurl.lib");
} elsif ("$arg" =~ /^-L/ && "$arg" ne "-LTCG") {
$arg =~ s/^-L/-LIBPATH:/;
push(@args, $arg);
} buf;
} u;
const char *name;
+ const char *path;
int die_on_error;
int linenr;
int eof;
{
int ret = 0;
struct strbuf buf = STRBUF_INIT;
- char *expanded = expand_user_path(path);
+ char *expanded;
+ if (!path)
+ return config_error_nonbool("include.path");
+
+ expanded = expand_user_path(path);
if (!expanded)
return error("Could not expand include path '%s'", path);
path = expanded;
if (!is_absolute_path(path)) {
char *slash;
- if (!cf || !cf->name)
+ if (!cf || !cf->path)
return error("relative config includes must come from files");
- slash = find_last_dir_sep(cf->name);
+ slash = find_last_dir_sep(cf->path);
if (slash)
- strbuf_add(&buf, cf->name, slash - cf->name + 1);
+ strbuf_add(&buf, cf->path, slash - cf->path + 1);
strbuf_addstr(&buf, path);
path = buf.buf;
}
trust_ctime = git_config_bool(var, value);
return 0;
}
- if (!strcmp(var, "core.statinfo") ||
- !strcmp(var, "core.checkstat")) {
- /*
- * NEEDSWORK: statinfo was a typo in v1.8.2 that has
- * never been advertised. we will remove it at Git
- * 2.0 boundary.
- */
- if (!strcmp(var, "core.statinfo")) {
- static int warned;
- if (!warned++) {
- warning("'core.statinfo' will be removed in Git 2.0; "
- "use 'core.checkstat' instead.");
- }
- }
+ if (!strcmp(var, "core.checkstat")) {
if (!strcasecmp(value, "default"))
check_stat = 1;
else if (!strcasecmp(value, "minimal"))
return ret;
}
-int git_config_from_file(config_fn_t fn, const char *filename, void *data)
+static int do_config_from_file(config_fn_t fn,
+ const char *name, const char *path, FILE *f, void *data)
{
- int ret;
- FILE *f = fopen(filename, "r");
+ struct config_source top;
- ret = -1;
- if (f) {
- struct config_source top;
+ top.u.file = f;
+ top.name = name;
+ top.path = path;
+ top.die_on_error = 1;
+ top.do_fgetc = config_file_fgetc;
+ top.do_ungetc = config_file_ungetc;
+ top.do_ftell = config_file_ftell;
- top.u.file = f;
- top.name = filename;
- top.die_on_error = 1;
- top.do_fgetc = config_file_fgetc;
- top.do_ungetc = config_file_ungetc;
- top.do_ftell = config_file_ftell;
+ return do_config_from(&top, fn, data);
+}
- ret = do_config_from(&top, fn, data);
+static int git_config_from_stdin(config_fn_t fn, void *data)
+{
+ return do_config_from_file(fn, "<stdin>", NULL, stdin, data);
+}
+int git_config_from_file(config_fn_t fn, const char *filename, void *data)
+{
+ int ret = -1;
+ FILE *f;
+
+ f = fopen(filename, "r");
+ if (f) {
+ ret = do_config_from_file(fn, filename, filename, f, data);
fclose(f);
}
return ret;
top.u.buf.len = len;
top.u.buf.pos = 0;
top.name = name;
+ top.path = NULL;
top.die_on_error = 0;
top.do_fgetc = config_buf_fgetc;
top.do_ungetc = config_buf_ungetc;
}
int git_config_with_options(config_fn_t fn, void *data,
- const char *filename,
- const char *blob_ref,
+ struct git_config_source *config_source,
int respect_includes)
{
char *repo_config = NULL;
* If we have a specific filename, use it. Otherwise, follow the
* regular lookup sequence.
*/
- if (filename)
- return git_config_from_file(fn, filename, data);
- else if (blob_ref)
- return git_config_from_blob_ref(fn, blob_ref, data);
+ if (config_source && config_source->use_stdin)
+ return git_config_from_stdin(fn, data);
+ else if (config_source && config_source->file)
+ return git_config_from_file(fn, config_source->file, data);
+ else if (config_source && config_source->blob)
+ return git_config_from_blob_ref(fn, config_source->blob, data);
repo_config = git_pathdup("config");
ret = git_config_early(fn, data, repo_config);
int git_config(config_fn_t fn, void *data)
{
- return git_config_with_options(fn, data, NULL, NULL, 1);
+ return git_config_with_options(fn, data, NULL, 1);
}
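
/* Hypothetical caller sketch for the reworked entry point, assuming
 * git's cache.h declarations are available; it reads one explicit file
 * through the new git_config_source abstraction.  The callback and
 * function names are invented for illustration. */
static int dump_config(const char *var, const char *value, void *cb)
{
	printf("%s=%s\n", var, value ? value : "(true)");
	return 0;
}

static int read_one_config_file(const char *file)
{
	struct git_config_source source;

	memset(&source, 0, sizeof(source));
	source.file = file;	/* alternatively .use_stdin = 1 or .blob = "HEAD:.gitconfig" */
	return git_config_with_options(dump_config, NULL, &source, 1);
}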
/*
NO_MKDTEMP = YesPlease
NO_MKSTEMPS = YesPlease
NO_REGEX = YesPlease
- NO_FNMATCH_CASEFOLD = YesPlease
NO_MSGFMT_EXTENDED_OPTIONS = YesPlease
HAVE_DEV_TTY = YesPlease
ifeq ($(uname_R),5.6)
endif
PYTHON_PATH = /usr/local/bin/python
HAVE_PATHS_H = YesPlease
+ GMTIME_UNRELIABLE_ERRORS = UnfortunatelyYes
endif
ifeq ($(uname_S),OpenBSD)
NO_STRCASESTR = YesPlease
# issue, comment out the NO_MMAP statement.
NO_MMAP = YesPlease
NO_REGEX = YesPlease
- NO_FNMATCH_CASEFOLD = YesPlease
SNPRINTF_RETURNS_BOGUS = YesPlease
SHELL_PATH = /usr/gnu/bin/bash
NEEDS_LIBGEN = YesPlease
# issue, comment out the NO_MMAP statement.
NO_MMAP = YesPlease
NO_REGEX = YesPlease
- NO_FNMATCH_CASEFOLD = YesPlease
SNPRINTF_RETURNS_BOGUS = YesPlease
SHELL_PATH = /usr/gnu/bin/bash
NEEDS_LIBGEN = YesPlease
NO_UNSETENV = YesPlease
NO_HSTRERROR = YesPlease
NO_SYS_SELECT_H = YesPlease
- NO_FNMATCH_CASEFOLD = YesPlease
SNPRINTF_RETURNS_BOGUS = YesPlease
NO_NSEC = YesPlease
ifeq ($(uname_R),B.11.00)
NO_UNSETENV = YesPlease
NO_STRCASESTR = YesPlease
NO_STRLCPY = YesPlease
- NO_FNMATCH = YesPlease
NO_MEMMEM = YesPlease
# NEEDS_LIBICONV = YesPlease
NO_ICONV = YesPlease
NO_MKSTEMPS = YesPlease
SNPRINTF_RETURNS_BOGUS = YesPlease
NO_SVN_TESTS = YesPlease
- NO_PERL_MAKEMAKER = YesPlease
RUNTIME_PREFIX = YesPlease
NO_ST_BLOCKS_IN_STRUCT_STAT = YesPlease
NO_NSEC = YesPlease
UNRELIABLE_FSTAT = UnfortunatelyYes
OBJECT_CREATION_USES_RENAMES = UnfortunatelyNeedsTo
NO_REGEX = YesPlease
- NO_CURL = YesPlease
NO_GETTEXT = YesPlease
NO_PYTHON = YesPlease
BLK_SHA1 = YesPlease
compat/win32/dirent.o
COMPAT_CFLAGS = -D__USE_MINGW_ACCESS -DNOGDI -DHAVE_STRING_H -DHAVE_ALLOCA_H -Icompat -Icompat/regex -Icompat/win32 -DSTRIP_EXTENSION=\".exe\"
BASIC_LDFLAGS = -IGNORE:4217 -IGNORE:4049 -NOLOGO -SUBSYSTEM:CONSOLE -NODEFAULTLIB:MSVCRT.lib
- EXTLIBS = user32.lib advapi32.lib shell32.lib wininet.lib ws2_32.lib
+ EXTLIBS = user32.lib advapi32.lib shell32.lib wininet.lib ws2_32.lib invalidcontinue.obj
PTHREAD_LIBS =
lib =
ifndef DEBUG
NO_INET_NTOP = YesPlease
NO_INET_PTON = YesPlease
NO_SOCKADDR_STORAGE = YesPlease
- NO_FNMATCH_CASEFOLD = YesPlease
endif
ifeq ($(uname_R),5.2)
NO_INET_NTOP = YesPlease
NO_INET_PTON = YesPlease
NO_SOCKADDR_STORAGE = YesPlease
- NO_FNMATCH_CASEFOLD = YesPlease
endif
endif
ifeq ($(uname_S),Minix)
NO_D_TYPE_IN_DIRENT = YesPlease
NO_HSTRERROR = YesPlease
NO_STRCASESTR = YesPlease
- NO_FNMATCH_CASEFOLD = YesPlease
NO_MEMMEM = YesPlease
NO_STRLCPY = YesPlease
NO_SETENV = YesPlease
NO_UNSETENV = YesPlease
NO_STRCASESTR = YesPlease
NO_STRLCPY = YesPlease
- NO_FNMATCH = YesPlease
NO_MEMMEM = YesPlease
NEEDS_LIBICONV = YesPlease
NO_STRTOUMAX = YesPlease
EXPAT_NEEDS_XMLPARSE_H = YesPlease
HAVE_STRINGS_H = YesPlease
NEEDS_SOCKET = YesPlease
- NO_FNMATCH_CASEFOLD = YesPlease
NO_GETPAGESIZE = YesPlease
NO_ICONV = YesPlease
NO_MEMMEM = YesPlease
# and libcharset does
CHARSET_LIB=
AC_CHECK_LIB([iconv], [locale_charset],
- [],
+ [CHARSET_LIB=-liconv],
[AC_CHECK_LIB([charset], [locale_charset],
[CHARSET_LIB=-lcharset])])
GIT_CONF_SUBST([CHARSET_LIB])
[NO_STRCASESTR=YesPlease])
GIT_CONF_SUBST([NO_STRCASESTR])
#
-# Define NO_FNMATCH if you don't have fnmatch
-GIT_CHECK_FUNC(fnmatch,
-[NO_FNMATCH=],
-[NO_FNMATCH=YesPlease])
-GIT_CONF_SUBST([NO_FNMATCH])
-#
-# Define NO_FNMATCH_CASEFOLD if your fnmatch function doesn't have the
-# FNM_CASEFOLD GNU extension.
-AC_CACHE_CHECK([whether the fnmatch function supports the FNMATCH_CASEFOLD GNU extension],
- [ac_cv_c_excellent_fnmatch], [
-AC_EGREP_CPP(yippeeyeswehaveit,
- AC_LANG_PROGRAM([
-#include <fnmatch.h>
-],
-[#ifdef FNM_CASEFOLD
-yippeeyeswehaveit
-#endif
-]),
- [ac_cv_c_excellent_fnmatch=yes],
- [ac_cv_c_excellent_fnmatch=no])
-])
-if test $ac_cv_c_excellent_fnmatch = yes; then
- NO_FNMATCH_CASEFOLD=
-else
- NO_FNMATCH_CASEFOLD=YesPlease
-fi
-GIT_CONF_SUBST([NO_FNMATCH_CASEFOLD])
-#
# Define NO_MEMMEM if you don't have memmem.
GIT_CHECK_FUNC(memmem,
[NO_MEMMEM=],
*arg++ = port;
}
*arg++ = ssh_host;
- } else {
+ } else {
/* remove repo-local variables from the environment */
conn->env = local_repo_env;
conn->use_shell = 1;
__git_complete_revlist_file
}
+__git_fetch_recurse_submodules="yes on-demand no"
+
__git_fetch_options="
--quiet --verbose --append --upload-pack --force --keep --depth=
- --tags --no-tags --all --prune --dry-run
+ --tags --no-tags --all --prune --dry-run --recurse-submodules=
"
_git_fetch ()
{
case "$cur" in
+ --recurse-submodules=*)
+ __gitcomp "$__git_fetch_recurse_submodules" "" "${cur##--recurse-submodules=}"
+ return
+ ;;
--*)
__gitcomp "$__git_fetch_options"
return
__git_complete_strategy && return
case "$cur" in
+ --recurse-submodules=*)
+ __gitcomp "$__git_fetch_recurse_submodules" "" "${cur##--recurse-submodules=}"
+ return
+ ;;
--*)
__gitcomp "
--rebase --no-rebase
__git_complete_remote_or_refspec
}
+__git_push_recurse_submodules="check on-demand"
+
_git_push ()
{
case "$prev" in
__gitcomp_nl "$(__git_remotes)" "" "${cur##--repo=}"
return
;;
+ --recurse-submodules=*)
+ __gitcomp "$__git_push_recurse_submodules" "" "${cur##--recurse-submodules=}"
+ return
+ ;;
--*)
__gitcomp "
--all --mirror --tags --dry-run --force --verbose
--receive-pack= --repo= --set-upstream
+ --recurse-submodules=
"
return
;;
next unless $id;
if (m{^--- (?:a/(.+)|/dev/null)$}) {
$source = $1;
- } elsif (/^--- /) {
- die "Cannot parse hunk source: $_\n";
} elsif (/^@@ -(\d+)(?:,(\d+))?/ && $source) {
my $len = defined($2) ? $2 : 1;
push @{$sources->{$source}{$id}}, [$1, $len] if $len;
branches."
OPTIONS_KEEPDASHDASH=
+OPTIONS_STUCKLONG=
OPTIONS_SPEC="\
git resurrect $USAGE
--
peer = bzrlib.branch.Branch.open(peers[name],
possible_transports=transports)
try:
- peer.bzrdir.push_branch(branch, revision_id=revid)
+ peer.bzrdir.push_branch(branch, revision_id=revid,
+ overwrite=force)
except bzrlib.errors.DivergedBranches:
print "error %s non-fast forward" % ref
continue
print "*import-marks %s" % path
print "*export-marks %s" % path
+ print "option"
print
+class InvalidOptionValue(Exception):
+ pass
+
+def get_bool_option(val):
+ if val == 'true':
+ return True
+ elif val == 'false':
+ return False
+ else:
+ raise InvalidOptionValue()
+
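+# Handle an "option <name> <value>" command; only 'force' is supported,
+# anything else is answered with 'unsupported'.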
+def do_option(parser):
+ global force
+ opt, val = parser[1:3]
+ try:
+ if opt == 'force':
+ force = get_bool_option(val)
+ print 'ok'
+ else:
+ print 'unsupported'
+ except InvalidOptionValue:
+ print "error '%s' is not a valid value for option '%s'" % (val, opt)
+
def ref_is_valid(name):
return not True in [c in name for c in '~^: \\']
global is_tmp
global branches, peers
global transports
+ global force
marks = None
is_tmp = False
branches = {}
peers = {}
transports = []
+ force = False
if alias[5:] == url:
is_tmp = True
do_import(parser)
elif parser.check('export'):
do_export(parser)
+ elif parser.check('option'):
+ do_option(parser)
else:
die('unhandled command: %s' % line)
sys.stdout.flush()
print "? refs/heads/branches/%s" % gitref(branch)
for bmark in bmarks:
- print "? refs/heads/%s" % gitref(bmark)
+ if bmarks[bmark].hex() == '0000000000000000000000000000000000000000':
+ warn("Ignoring invalid bookmark '%s'", bmark)
+ else:
+ print "? refs/heads/%s" % gitref(bmark)
for tag, node in repo.tagslist():
if tag == 'tip':
test_cmp expected actual
'
+test_expect_success 'forced pushing' '
+ (
+ cd gitrepo &&
+ echo three-new >content &&
+ git commit -a --amend -m three-new &&
+ git push -f
+ ) &&
+
+ (
+ cd bzrrepo &&
+ # the forced update overwrites the bzr branch but not the bzr
+ # working directory (it tries to merge instead)
+ bzr revert
+ ) &&
+
+ echo three-new >expected &&
+ cat bzrrepo/content >actual &&
+ test_cmp expected actual
+'
+
test_expect_success 'roundtrip' '
(
cd gitrepo &&
git pull &&
git log --format="%s" -1 origin/master >actual
) &&
- echo three >expected &&
+ echo three-new >expected &&
test_cmp expected actual &&
(cd gitrepo && git push && git pull) &&
)
'
-test_expect_failure 'remote big push force' '
+test_expect_success 'remote big push force' '
test_when_finished "rm -rf hgrepo gitrepo*" &&
setup_big_push
check_bookmark hgrepo new_bmark six
'
-test_expect_failure 'remote big push dry-run' '
+test_expect_success 'remote big push dry-run' '
test_when_finished "rm -rf hgrepo gitrepo*" &&
setup_big_push
)
'
+test_expect_success 'clone remote with master null bookmark, then push to the bookmark' '
+ test_when_finished "rm -rf gitrepo* hgrepo*" &&
+
+ hg init hgrepo &&
+ (
+ cd hgrepo &&
+ echo a >a &&
+ hg add a &&
+ hg commit -m a &&
+ hg bookmark -r null master
+ ) &&
+
+ git clone "hg::hgrepo" gitrepo &&
+ check gitrepo HEAD a &&
+ (
+ cd gitrepo &&
+ git checkout --quiet -b master &&
+ echo b >b &&
+ git add b &&
+ git commit -m b &&
+ git push origin master
+ )
+'
+
+test_expect_success 'clone remote with default null bookmark, then push to the bookmark' '
+ test_when_finished "rm -rf gitrepo* hgrepo*" &&
+
+ hg init hgrepo &&
+ (
+ cd hgrepo &&
+ echo a >a &&
+ hg add a &&
+ hg commit -m a &&
+ hg bookmark -r null -f default
+ ) &&
+
+ git clone "hg::hgrepo" gitrepo &&
+ check gitrepo HEAD a &&
+ (
+ cd gitrepo &&
+ git checkout --quiet -b default &&
+ echo b >b &&
+ git add b &&
+ git commit -m b &&
+ git push origin default
+ )
+'
+
+test_expect_success 'clone remote with generic null bookmark, then push to the bookmark' '
+ test_when_finished "rm -rf gitrepo* hgrepo*" &&
+
+ hg init hgrepo &&
+ (
+ cd hgrepo &&
+ echo a >a &&
+ hg add a &&
+ hg commit -m a &&
+ hg bookmark -r null bmark
+ ) &&
+
+ git clone "hg::hgrepo" gitrepo &&
+ check gitrepo HEAD a &&
+ (
+ cd gitrepo &&
+ git checkout --quiet -b bmark &&
+ git remote -v &&
+ echo b >b &&
+ git add b &&
+ git commit -m b &&
+ git push origin bmark
+ )
+'
+
test_done
annotate=
squash=
message=
+prefix=
debug()
{
/* nothing */
}
-static void daemonize(void)
-{
- die("--detach not supported on this platform");
-}
-
static struct credentials *prepare_credentials(const char *user_name,
const char *group_name)
{
return &c;
}
-
-static void daemonize(void)
-{
- switch (fork()) {
- case 0:
- break;
- case -1:
- die_errno("fork failed");
- default:
- exit(0);
- }
- if (setsid() == -1)
- die_errno("setsid failed");
- close(0);
- close(1);
- close(2);
- sanitize_stdfds();
-}
#endif
static void store_pid(const char *path)
if (inetd_mode || serve_mode)
return execute();
- if (detach)
- daemonize();
- else
+ if (detach) {
+ if (daemonize())
+ die("--detach not supported on this platform");
+ } else
sanitize_stdfds();
if (pid_file)
tz = local_tzoffset(time);
tm = time_to_tm(time, tz);
- if (!tm)
- return NULL;
+ if (!tm) {
+ tm = time_to_tm(0, 0);
+ tz = 0;
+ }
strbuf_reset(&timebuf);
if (mode == DATE_SHORT)
gettimeofday(&tv, NULL);
return approxidate_str(date, &tv, error_ret);
}
+
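+/*
+ * Check that a parsed timestamp is representable both as an unsigned
+ * long and as the system time_t before handing it to system functions.
+ */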
+int date_overflows(unsigned long t)
+{
+ time_t sys;
+
+ /* If we overflowed our unsigned long, that's bad... */
+ if (t == ULONG_MAX)
+ return 1;
+
+ /*
+ * ...but we also are going to feed the result to system
+ * functions that expect time_t, which is often "signed long".
+ * Make sure that we fit into time_t, as well.
+ */
+ sys = t;
+ return t != sys || (t < 1) != (sys < 1);
+}
#include "unpack-trees.h"
#include "refs.h"
#include "submodule.h"
+#include "dir.h"
/*
* diff-files
unsigned ce_option = ((option & DIFF_RACY_IS_MODIFIED)
? CE_MATCH_RACY_IS_DIRTY : 0);
- if (option & DIFF_SILENT_ON_REMOVED)
- handle_deprecated_show_diff_q(&revs->diffopt);
-
diff_set_mnemonic_prefix(&revs->diffopt, "i/", "w/");
if (diff_unmerged_stage < 0)
if (diff_can_quit_early(&revs->diffopt))
break;
- if (!ce_path_match(ce, &revs->prune_data))
+ if (!ce_path_match(ce, &revs->prune_data, NULL))
continue;
if (ce_stage(ce)) {
dpath->path = (char *) &(dpath->parent[5]);
dpath->next = NULL;
- dpath->len = path_len;
memcpy(dpath->path, ce->name, path_len);
dpath->path[path_len] = '\0';
hashclr(dpath->sha1);
p = xmalloc(combine_diff_path_size(2, pathlen));
p->path = (char *) &p->parent[2];
p->next = NULL;
- p->len = pathlen;
memcpy(p->path, new->name, pathlen);
p->path[pathlen] = 0;
p->mode = mode;
if (tree == o->df_conflict_entry)
tree = NULL;
- if (ce_path_match(idx ? idx : tree, &revs->prune_data)) {
+ if (ce_path_match(idx ? idx : tree, &revs->prune_data, NULL)) {
do_oneway_diff(o, idx, tree);
if (diff_can_quit_early(&revs->diffopt)) {
o->exiting_early = 1;
#include "log-tree.h"
#include "builtin.h"
#include "string-list.h"
+#include "dir.h"
-static int read_directory(const char *path, struct string_list *list)
+static int read_directory_contents(const char *path, struct string_list *list)
{
DIR *dir;
struct dirent *e;
return error("Could not open directory %s", path);
while ((e = readdir(dir)))
- if (strcmp(".", e->d_name) && strcmp("..", e->d_name))
+ if (!is_dot_or_dotdot(e->d_name))
string_list_insert(list, e->d_name);
closedir(dir);
int i1, i2, ret = 0;
size_t len1 = 0, len2 = 0;
- if (name1 && read_directory(name1, &p1))
+ if (name1 && read_directory_contents(name1, &p1))
return -1;
- if (name2 && read_directory(name2, &p2)) {
+ if (name2 && read_directory_contents(name2, &p2)) {
string_list_clear(&p1, 0);
return -1;
}
const char *prefix)
{
int i, prefixlen;
- unsigned deprecated_show_diff_q_option_used = 0;
const char *paths[2];
diff_setup(&revs->diffopt);
int j;
if (!strcmp(argv[i], "--no-index"))
i++;
- else if (!strcmp(argv[i], "-q")) {
- deprecated_show_diff_q_option_used = 1;
- i++;
- }
else if (!strcmp(argv[i], "--"))
i++;
else {
j = diff_opt_parse(&revs->diffopt, argv + i, argc - i);
- if (!j)
+ if (j <= 0)
die("invalid diff option/value: %s", argv[i]);
i += j;
}
revs->max_count = -2;
diff_setup_done(&revs->diffopt);
- if (deprecated_show_diff_q_option_used)
- handle_deprecated_show_diff_q(&revs->diffopt);
-
setup_diff_pager(&revs->diffopt);
DIFF_OPT_SET(&revs->diffopt, EXIT_WITH_STATUS);
{
struct diffstat_file *x;
x = xcalloc(sizeof (*x), 1);
- if (diffstat->nr == diffstat->alloc) {
- diffstat->alloc = alloc_nr(diffstat->alloc);
- diffstat->files = xrealloc(diffstat->files,
- diffstat->alloc * sizeof(x));
- }
+ ALLOC_GROW(diffstat->files, diffstat->nr + 1, diffstat->alloc);
diffstat->files[diffstat->nr++] = x;
if (name_b) {
x->from_name = xstrdup(name_a);
remove_tempfile_installed = 1;
}
- if (!one->sha1_valid ||
- reuse_worktree_file(name, one->sha1, 1)) {
+ if (!S_ISGITLINK(one->mode) &&
+ (!one->sha1_valid ||
+ reuse_worktree_file(name, one->sha1, 1))) {
struct stat st;
if (lstat(name, &st) < 0) {
if (errno == ENOENT)
if (c != '-')
return 0;
arg++;
- eq = strchr(arg, '=');
- if (eq)
- len = eq - arg;
- else
- len = strlen(arg);
+ eq = strchrnul(arg, '=');
+ len = eq - arg;
if (!len || strncmp(arg, arg_long, len))
return 0;
- if (eq) {
+ if (*eq) {
int n;
char *end;
if (!isdigit(*++eq))
return 0;
}
-/* Used only by "diff-files" and "diff --no-index" */
-void handle_deprecated_show_diff_q(struct diff_options *opt)
-{
- warning("'diff -q' and 'diff-files -q' are deprecated.");
- warning("Use 'diff --diff-filter=d' instead to ignore deleted filepairs.");
- parse_diff_filter_opt("d", opt);
-}
-
static void enable_patch_output(int *fmt) {
*fmt &= ~DIFF_FORMAT_NO_OUTPUT;
*fmt |= DIFF_FORMAT_PATCH;
void diff_q(struct diff_queue_struct *queue, struct diff_filepair *dp)
{
- if (queue->alloc <= queue->nr) {
- queue->alloc = alloc_nr(queue->alloc);
- queue->queue = xrealloc(queue->queue,
- sizeof(dp) * queue->alloc);
- }
+ ALLOC_GROW(queue->queue, queue->nr + 1, queue->alloc);
queue->queue[queue->nr++] = dp;
}
return !memcmp(one->data, two->data, one->size);
}
+static int diff_filespec_check_stat_unmatch(struct diff_filepair *p)
+{
+ if (p->done_skip_stat_unmatch)
+ return p->skip_stat_unmatch_result;
+
+ p->done_skip_stat_unmatch = 1;
+ p->skip_stat_unmatch_result = 0;
+ /*
+ * 1. Entries that come from stat info dirtiness
+ * always have both sides (iow, not create/delete),
+ * one side of the object name is unknown, with
+ * the same mode and size. Keep the ones that
+ * do not match these criteria. They have real
+ * differences.
+ *
+ * 2. At this point, the file is known to be modified,
+ * with the same mode and size, and the object
+ * name of one side is unknown. Need to inspect
+ * the identical contents.
+ */
+ if (!DIFF_FILE_VALID(p->one) || /* (1) */
+ !DIFF_FILE_VALID(p->two) ||
+ (p->one->sha1_valid && p->two->sha1_valid) ||
+ (p->one->mode != p->two->mode) ||
+ diff_populate_filespec(p->one, 1) ||
+ diff_populate_filespec(p->two, 1) ||
+ (p->one->size != p->two->size) ||
+ !diff_filespec_is_identical(p->one, p->two)) /* (2) */
+ p->skip_stat_unmatch_result = 1;
+ return p->skip_stat_unmatch_result;
+}
+
static void diffcore_skip_stat_unmatch(struct diff_options *diffopt)
{
int i;
for (i = 0; i < q->nr; i++) {
struct diff_filepair *p = q->queue[i];
- /*
- * 1. Entries that come from stat info dirtiness
- * always have both sides (iow, not create/delete),
- * one side of the object name is unknown, with
- * the same mode and size. Keep the ones that
- * do not match these criteria. They have real
- * differences.
- *
- * 2. At this point, the file is known to be modified,
- * with the same mode and size, and the object
- * name of one side is unknown. Need to inspect
- * the identical contents.
- */
- if (!DIFF_FILE_VALID(p->one) || /* (1) */
- !DIFF_FILE_VALID(p->two) ||
- (p->one->sha1_valid && p->two->sha1_valid) ||
- (p->one->mode != p->two->mode) ||
- diff_populate_filespec(p->one, 1) ||
- diff_populate_filespec(p->two, 1) ||
- (p->one->size != p->two->size) ||
- !diff_filespec_is_identical(p->one, p->two)) /* (2) */
+ if (diff_filespec_check_stat_unmatch(p))
diff_q(&outq, p);
else {
/*
unsigned old_dirty_submodule, unsigned new_dirty_submodule)
{
struct diff_filespec *one, *two;
+ struct diff_filepair *p;
if (S_ISGITLINK(old_mode) && S_ISGITLINK(new_mode) &&
is_submodule_ignored(concatpath, options))
fill_filespec(two, new_sha1, new_sha1_valid, new_mode);
one->dirty_submodule = old_dirty_submodule;
two->dirty_submodule = new_dirty_submodule;
+ p = diff_queue(&diff_queued_diff, one, two);
- diff_queue(&diff_queued_diff, one, two);
- if (!DIFF_OPT_TST(options, DIFF_FROM_CONTENTS))
- DIFF_OPT_SET(options, HAS_CHANGES);
+ if (DIFF_OPT_TST(options, DIFF_FROM_CONTENTS))
+ return;
+
+ if (DIFF_OPT_TST(options, QUICK) && options->skip_stat_unmatch &&
+ !diff_filespec_check_stat_unmatch(p))
+ return;
+
+ DIFF_OPT_SET(options, HAS_CHANGES);
}
struct diff_filepair *diff_unmerge(struct diff_options *options, const char *path)
struct combine_diff_path {
struct combine_diff_path *next;
- int len;
char *path;
unsigned int mode;
unsigned char sha1[20];
extern long parse_algorithm_value(const char *value);
-extern void handle_deprecated_show_diff_q(struct diff_options *);
-
extern int print_stat_summary(FILE *fp, int files,
int insertions, int deletions);
extern void setup_diff_pager(struct diff_options *);
}
}
-struct pair_order {
- struct diff_filepair *pair;
- int orig_order;
- int order;
-};
-
static int match_order(const char *path)
{
int i;
strbuf_addstr(&p, path);
while (p.buf[0]) {
char *cp;
- if (!fnmatch(order[i], p.buf, 0))
+ if (!wildmatch(order[i], p.buf, 0, NULL))
return i;
cp = strrchr(p.buf, '/');
if (!cp)
return order_cnt;
}
-static int compare_pair_order(const void *a_, const void *b_)
+static int compare_objs_order(const void *a_, const void *b_)
{
- struct pair_order const *a, *b;
- a = (struct pair_order const *)a_;
- b = (struct pair_order const *)b_;
+ struct obj_order const *a, *b;
+ a = (struct obj_order const *)a_;
+ b = (struct obj_order const *)b_;
if (a->order != b->order)
return a->order - b->order;
return a->orig_order - b->orig_order;
}
+void order_objects(const char *orderfile, obj_path_fn_t obj_path,
+ struct obj_order *objs, int nr)
+{
+ int i;
+
+ if (!nr)
+ return;
+
+ prepare_order(orderfile);
+ for (i = 0; i < nr; i++) {
+ objs[i].orig_order = i;
+ objs[i].order = match_order(obj_path(objs[i].obj));
+ }
+ qsort(objs, nr, sizeof(*objs), compare_objs_order);
+}
+
+static const char *pair_pathtwo(void *obj)
+{
+ struct diff_filepair *pair = (struct diff_filepair *)obj;
+
+ return pair->two->path;
+}
+
void diffcore_order(const char *orderfile)
{
struct diff_queue_struct *q = &diff_queued_diff;
- struct pair_order *o;
+ struct obj_order *o;
int i;
if (!q->nr)
return;
o = xmalloc(sizeof(*o) * q->nr);
- prepare_order(orderfile);
- for (i = 0; i < q->nr; i++) {
- o[i].pair = q->queue[i];
- o[i].orig_order = i;
- o[i].order = match_order(o[i].pair->two->path);
- }
- qsort(o, q->nr, sizeof(*o), compare_pair_order);
for (i = 0; i < q->nr; i++)
- q->queue[i] = o[i].pair;
+ o[i].obj = q->queue[i];
+ order_objects(orderfile, pair_pathtwo, o, q->nr);
+ for (i = 0; i < q->nr; i++)
+ q->queue[i] = o[i].obj;
free(o);
return;
}
struct diff_options *o,
regex_t *regexp, kwset_t kws);
-static int pickaxe_match(struct diff_filepair *p, struct diff_options *o,
- regex_t *regexp, kwset_t kws, pickaxe_fn fn);
-
-static void pickaxe(struct diff_queue_struct *q, struct diff_options *o,
- regex_t *regexp, kwset_t kws, pickaxe_fn fn)
-{
- int i;
- struct diff_queue_struct outq;
-
- DIFF_QUEUE_CLEAR(&outq);
-
- if (o->pickaxe_opts & DIFF_PICKAXE_ALL) {
- /* Showing the whole changeset if needle exists */
- for (i = 0; i < q->nr; i++) {
- struct diff_filepair *p = q->queue[i];
- if (pickaxe_match(p, o, regexp, kws, fn))
- return; /* do not munge the queue */
- }
-
- /*
- * Otherwise we will clear the whole queue by copying
- * the empty outq at the end of this function, but
- * first clear the current entries in the queue.
- */
- for (i = 0; i < q->nr; i++)
- diff_free_filepair(q->queue[i]);
- } else {
- /* Showing only the filepairs that has the needle */
- for (i = 0; i < q->nr; i++) {
- struct diff_filepair *p = q->queue[i];
- if (pickaxe_match(p, o, regexp, kws, fn))
- diff_q(&outq, p);
- else
- diff_free_filepair(p);
- }
- }
-
- free(q->queue);
- *q = outq;
-}
-
struct diffgrep_cb {
regex_t *regexp;
int hit;
return ecbdata.hit;
}
-static void diffcore_pickaxe_grep(struct diff_options *o)
-{
- int err;
- regex_t regex;
- int cflags = REG_EXTENDED | REG_NEWLINE;
-
- if (DIFF_OPT_TST(o, PICKAXE_IGNORE_CASE))
- cflags |= REG_ICASE;
-
- err = regcomp(&regex, o->pickaxe, cflags);
- if (err) {
- char errbuf[1024];
- regerror(err, &regex, errbuf, 1024);
- regfree(&regex);
- die("invalid regex: %s", errbuf);
- }
-
- pickaxe(&diff_queued_diff, o, &regex, NULL, diff_grep);
-
- regfree(&regex);
- return;
-}
-
static unsigned int contains(mmfile_t *mf, regex_t *regexp, kwset_t kws)
{
unsigned int cnt;
while (sz) {
struct kwsmatch kwsm;
size_t offset = kwsexec(kws, data, sz, &kwsm);
- const char *found;
if (offset == -1)
break;
- else
- found = data + offset;
- sz -= found - data + kwsm.size[0];
- data = found + kwsm.size[0];
+ sz -= offset + kwsm.size[0];
+ data += offset + kwsm.size[0];
cnt++;
}
}
return ret;
}
-static void diffcore_pickaxe_count(struct diff_options *o)
+static void pickaxe(struct diff_queue_struct *q, struct diff_options *o,
+ regex_t *regexp, kwset_t kws, pickaxe_fn fn)
+{
+ int i;
+ struct diff_queue_struct outq;
+
+ DIFF_QUEUE_CLEAR(&outq);
+
+ if (o->pickaxe_opts & DIFF_PICKAXE_ALL) {
+ /* Showing the whole changeset if needle exists */
+ for (i = 0; i < q->nr; i++) {
+ struct diff_filepair *p = q->queue[i];
+ if (pickaxe_match(p, o, regexp, kws, fn))
+ return; /* do not munge the queue */
+ }
+
+ /*
+ * Otherwise we will clear the whole queue by copying
+ * the empty outq at the end of this function, but
+ * first clear the current entries in the queue.
+ */
+ for (i = 0; i < q->nr; i++)
+ diff_free_filepair(q->queue[i]);
+ } else {
+ /* Showing only the filepairs that has the needle */
+ for (i = 0; i < q->nr; i++) {
+ struct diff_filepair *p = q->queue[i];
+ if (pickaxe_match(p, o, regexp, kws, fn))
+ diff_q(&outq, p);
+ else
+ diff_free_filepair(p);
+ }
+ }
+
+ free(q->queue);
+ *q = outq;
+}
+
+void diffcore_pickaxe(struct diff_options *o)
{
const char *needle = o->pickaxe;
int opts = o->pickaxe_opts;
- unsigned long len = strlen(needle);
regex_t regex, *regexp = NULL;
kwset_t kws = NULL;
- if (opts & DIFF_PICKAXE_REGEX) {
+ if (opts & (DIFF_PICKAXE_REGEX | DIFF_PICKAXE_KIND_G)) {
int err;
- err = regcomp(&regex, needle, REG_EXTENDED | REG_NEWLINE);
+ int cflags = REG_EXTENDED | REG_NEWLINE;
+ if (DIFF_OPT_TST(o, PICKAXE_IGNORE_CASE))
+ cflags |= REG_ICASE;
+ err = regcomp(&regex, needle, cflags);
if (err) {
/* The POSIX.2 people are surely sick */
char errbuf[1024];
} else {
kws = kwsalloc(DIFF_OPT_TST(o, PICKAXE_IGNORE_CASE)
? tolower_trans_tbl : NULL);
- kwsincr(kws, needle, len);
+ kwsincr(kws, needle, strlen(needle));
kwsprep(kws);
}
- pickaxe(&diff_queued_diff, o, regexp, kws, has_changes);
+ /* Might want to warn when both S and G are on; I don't care... */
+ pickaxe(&diff_queued_diff, o, regexp, kws,
+ (opts & DIFF_PICKAXE_KIND_G) ? diff_grep : has_changes);
- if (opts & DIFF_PICKAXE_REGEX)
- regfree(&regex);
+ if (regexp)
+ regfree(regexp);
else
kwsfree(kws);
return;
}
-
-void diffcore_pickaxe(struct diff_options *o)
-{
- /* Might want to warn when both S and G are on; I don't care... */
- if (o->pickaxe_opts & DIFF_PICKAXE_KIND_G)
- diffcore_pickaxe_grep(o);
- else
- diffcore_pickaxe_count(o);
-}
#include "cache.h"
#include "diff.h"
#include "diffcore.h"
-#include "hash.h"
+#include "hashmap.h"
#include "progress.h"
/* Table of rename/copy destinations */
if (!insert_ok)
return NULL;
/* insert to make it at "first" */
- if (rename_dst_alloc <= rename_dst_nr) {
- rename_dst_alloc = alloc_nr(rename_dst_alloc);
- rename_dst = xrealloc(rename_dst,
- rename_dst_alloc * sizeof(*rename_dst));
- }
+ ALLOC_GROW(rename_dst, rename_dst_nr + 1, rename_dst_alloc);
rename_dst_nr++;
if (first < rename_dst_nr)
memmove(rename_dst + first + 1, rename_dst + first,
}
/* insert to make it at "first" */
- if (rename_src_alloc <= rename_src_nr) {
- rename_src_alloc = alloc_nr(rename_src_alloc);
- rename_src = xrealloc(rename_src,
- rename_src_alloc * sizeof(*rename_src));
- }
+ ALLOC_GROW(rename_src, rename_src_nr + 1, rename_src_alloc);
rename_src_nr++;
if (first < rename_src_nr)
memmove(rename_src + first + 1, rename_src + first,
}
struct file_similarity {
- int src_dst, index;
+ struct hashmap_entry entry;
+ int index;
struct diff_filespec *filespec;
- struct file_similarity *next;
};
-static int find_identical_files(struct file_similarity *src,
- struct file_similarity *dst,
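+/*
+ * Hash a filespec by its blob object name, computing the name first
+ * if it is not yet known.
+ */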
+static unsigned int hash_filespec(struct diff_filespec *filespec)
+{
+ unsigned int hash;
+ if (!filespec->sha1_valid) {
+ if (diff_populate_filespec(filespec, 0))
+ return 0;
+ hash_sha1_file(filespec->data, filespec->size, "blob", filespec->sha1);
+ }
+ memcpy(&hash, filespec->sha1, sizeof(hash));
+ return hash;
+}
+
+static int find_identical_files(struct hashmap *srcs,
+ int dst_index,
struct diff_options *options)
{
int renames = 0;
+ struct diff_filespec *target = rename_dst[dst_index].two;
+ struct file_similarity *p, *best, dst;
+ int i = 100, best_score = -1;
+
/*
- * Walk over all the destinations ...
+ * Find the best source match for specified destination.
*/
- do {
- struct diff_filespec *target = dst->filespec;
- struct file_similarity *p, *best;
- int i = 100, best_score = -1;
-
- /*
- * .. to find the best source match
- */
- best = NULL;
- for (p = src; p; p = p->next) {
- int score;
- struct diff_filespec *source = p->filespec;
-
- /* False hash collision? */
- if (hashcmp(source->sha1, target->sha1))
- continue;
- /* Non-regular files? If so, the modes must match! */
- if (!S_ISREG(source->mode) || !S_ISREG(target->mode)) {
- if (source->mode != target->mode)
- continue;
- }
- /* Give higher scores to sources that haven't been used already */
- score = !source->rename_used;
- if (source->rename_used && options->detect_rename != DIFF_DETECT_COPY)
+ best = NULL;
+ hashmap_entry_init(&dst, hash_filespec(target));
+ for (p = hashmap_get(srcs, &dst, NULL); p; p = hashmap_get_next(srcs, p)) {
+ int score;
+ struct diff_filespec *source = p->filespec;
+
+ /* False hash collision? */
+ if (hashcmp(source->sha1, target->sha1))
+ continue;
+ /* Non-regular files? If so, the modes must match! */
+ if (!S_ISREG(source->mode) || !S_ISREG(target->mode)) {
+ if (source->mode != target->mode)
continue;
- score += basename_same(source, target);
- if (score > best_score) {
- best = p;
- best_score = score;
- if (score == 2)
- break;
- }
-
- /* Too many identical alternatives? Pick one */
- if (!--i)
- break;
}
- if (best) {
- record_rename_pair(dst->index, best->index, MAX_SCORE);
- renames++;
+ /* Give higher scores to sources that haven't been used already */
+ score = !source->rename_used;
+ if (source->rename_used && options->detect_rename != DIFF_DETECT_COPY)
+ continue;
+ score += basename_same(source, target);
+ if (score > best_score) {
+ best = p;
+ best_score = score;
+ if (score == 2)
+ break;
}
- } while ((dst = dst->next) != NULL);
- return renames;
-}
-static void free_similarity_list(struct file_similarity *p)
-{
- while (p) {
- struct file_similarity *entry = p;
- p = p->next;
- free(entry);
+ /* Too many identical alternatives? Pick one */
+ if (!--i)
+ break;
}
-}
-
-static int find_same_files(void *ptr, void *data)
-{
- int ret;
- struct file_similarity *p = ptr;
- struct file_similarity *src = NULL, *dst = NULL;
- struct diff_options *options = data;
-
- /* Split the hash list up into sources and destinations */
- do {
- struct file_similarity *entry = p;
- p = p->next;
- if (entry->src_dst < 0) {
- entry->next = src;
- src = entry;
- } else {
- entry->next = dst;
- dst = entry;
- }
- } while (p);
-
- /*
- * If we have both sources *and* destinations, see if
- * we can match them up
- */
- ret = (src && dst) ? find_identical_files(src, dst, options) : 0;
-
- /* Free the hashes and return the number of renames found */
- free_similarity_list(src);
- free_similarity_list(dst);
- return ret;
-}
-
-static unsigned int hash_filespec(struct diff_filespec *filespec)
-{
- unsigned int hash;
- if (!filespec->sha1_valid) {
- if (diff_populate_filespec(filespec, 0))
- return 0;
- hash_sha1_file(filespec->data, filespec->size, "blob", filespec->sha1);
+ if (best) {
+ record_rename_pair(dst_index, best->index, MAX_SCORE);
+ renames++;
}
- memcpy(&hash, filespec->sha1, sizeof(hash));
- return hash;
+ return renames;
}
-static void insert_file_table(struct hash_table *table, int src_dst, int index, struct diff_filespec *filespec)
+static void insert_file_table(struct hashmap *table, int index, struct diff_filespec *filespec)
{
- void **pos;
- unsigned int hash;
struct file_similarity *entry = xmalloc(sizeof(*entry));
- entry->src_dst = src_dst;
entry->index = index;
entry->filespec = filespec;
- entry->next = NULL;
-
- hash = hash_filespec(filespec);
- pos = insert_hash(hash, entry, table);
- /* We already had an entry there? */
- if (pos) {
- entry->next = *pos;
- *pos = entry;
- }
+ hashmap_entry_init(entry, hash_filespec(filespec));
+ hashmap_add(table, entry);
}
/*
*/
static int find_exact_renames(struct diff_options *options)
{
- int i;
- struct hash_table file_table;
+ int i, renames = 0;
+ struct hashmap file_table;
- init_hash(&file_table);
- preallocate_hash(&file_table, rename_src_nr + rename_dst_nr);
+ /* Add all sources to the hash table */
+ hashmap_init(&file_table, NULL, rename_src_nr);
for (i = 0; i < rename_src_nr; i++)
- insert_file_table(&file_table, -1, i, rename_src[i].p->one);
+ insert_file_table(&file_table, i, rename_src[i].p->one);
+ /* Walk the destinations and find best source match */
for (i = 0; i < rename_dst_nr; i++)
- insert_file_table(&file_table, 1, i, rename_dst[i].two);
-
- /* Find the renames */
- i = for_each_hash(&file_table, find_same_files, options);
+ renames += find_identical_files(&file_table, i, options);
- /* .. and free the hash data structure */
- free_hash(&file_table);
+ /* Free the hash data structure and entries */
+ hashmap_free(&file_table, 1);
- return i;
+ return renames;
}
#define NUM_CANDIDATE_PER_DST 4
if (options->show_rename_progress) {
progress = start_progress_delay(
- "Performing inexact rename detection",
+ _("Performing inexact rename detection"),
rename_dst_nr * rename_src_nr, 50, 1);
}
unsigned is_stdin : 1;
unsigned has_more_entries : 1; /* only appear in combined diff */
/* data should be considered "binary"; -1 means "don't know yet" */
- int is_binary : 2;
+ signed int is_binary : 2;
struct userdiff_driver *driver;
};
unsigned broken_pair : 1;
unsigned renamed_pair : 1;
unsigned is_unmerged : 1;
+ unsigned done_skip_stat_unmatch : 1;
+ unsigned skip_stat_unmatch_result : 1;
};
#define DIFF_PAIR_UNMERGED(p) ((p)->is_unmerged)
extern void diffcore_pickaxe(struct diff_options *);
extern void diffcore_order(const char *orderfile);
+/* low-level interface to diffcore_order */
+struct obj_order {
+ void *obj; /* setup by caller */
+
+ /* setup/used by order_objects() */
+ int orig_order;
+ int order;
+};
+
+typedef const char *(*obj_path_fn_t)(void *obj);
+
+void order_objects(const char *orderfile, obj_path_fn_t obj_path,
+ struct obj_order *objs, int nr);
+
#define DIFF_DEBUG 0
#if DIFF_DEBUG
void diff_debug_filespec(struct diff_filespec *, int, const char *);
int fnmatch_icase(const char *pattern, const char *string, int flags)
{
- return fnmatch(pattern, string, flags | (ignore_case ? FNM_CASEFOLD : 0));
+ return wildmatch(pattern, string,
+ flags | (ignore_case ? WM_CASEFOLD : 0),
+ NULL);
}
-inline int git_fnmatch(const struct pathspec_item *item,
- const char *pattern, const char *string,
- int prefix)
+int git_fnmatch(const struct pathspec_item *item,
+ const char *pattern, const char *string,
+ int prefix)
{
if (prefix > 0) {
if (ps_strncmp(item, pattern, string, prefix))
- return FNM_NOMATCH;
+ return WM_NOMATCH;
pattern += prefix;
string += prefix;
}
NULL);
else
/* wildmatch has not learned no FNM_PATHNAME mode yet */
- return fnmatch(pattern, string,
- item->magic & PATHSPEC_ICASE ? FNM_CASEFOLD : 0);
+ return wildmatch(pattern, string,
+ item->magic & PATHSPEC_ICASE ? WM_CASEFOLD : 0,
+ NULL);
}
static int fnmatch_icase_mem(const char *pattern, int patternlen,
return 1;
}
+#define DO_MATCH_EXCLUDE 1
+#define DO_MATCH_DIRECTORY 2
+
/*
* Does 'match' match the given name?
* A match is found if
* It returns 0 when there is no match.
*/
static int match_pathspec_item(const struct pathspec_item *item, int prefix,
- const char *name, int namelen)
+ const char *name, int namelen, unsigned flags)
{
/* name/namelen has prefix cut off by caller */
const char *match = item->match + prefix;
* The normal call pattern is:
* 1. prefix = common_prefix_len(ps);
* 2. prune something, or fill_directory
- * 3. match_pathspec_depth()
+ * 3. match_pathspec()
*
* 'prefix' at #1 may be shorter than the command's prefix and
* it's ok for #2 to match extra files. Those extras will be
if (match[matchlen-1] == '/' || name[matchlen] == '/')
return MATCHED_RECURSIVELY;
- }
+ } else if ((flags & DO_MATCH_DIRECTORY) &&
+ match[matchlen - 1] == '/' &&
+ namelen == matchlen - 1 &&
+ !ps_strncmp(item, match, name, namelen))
+ return MATCHED_EXACTLY;
if (item->nowildcard_len < item->len &&
!git_fnmatch(item, match, name,
* pathspec did not match any names, which could indicate that the
* user mistyped the nth pathspec.
*/
-static int match_pathspec_depth_1(const struct pathspec *ps,
- const char *name, int namelen,
- int prefix, char *seen,
- int exclude)
+static int do_match_pathspec(const struct pathspec *ps,
+ const char *name, int namelen,
+ int prefix, char *seen,
+ unsigned flags)
{
- int i, retval = 0;
+ int i, retval = 0, exclude = flags & DO_MATCH_EXCLUDE;
GUARD_PATHSPEC(ps,
PATHSPEC_FROMTOP |
*/
if (seen && ps->items[i].magic & PATHSPEC_EXCLUDE)
seen[i] = MATCHED_FNMATCH;
- how = match_pathspec_item(ps->items+i, prefix, name, namelen);
+ how = match_pathspec_item(ps->items+i, prefix, name,
+ namelen, flags);
if (ps->recursive &&
(ps->magic & PATHSPEC_MAXDEPTH) &&
ps->max_depth != -1 &&
return retval;
}
-int match_pathspec_depth(const struct pathspec *ps,
- const char *name, int namelen,
- int prefix, char *seen)
+int match_pathspec(const struct pathspec *ps,
+ const char *name, int namelen,
+ int prefix, char *seen, int is_dir)
{
int positive, negative;
- positive = match_pathspec_depth_1(ps, name, namelen, prefix, seen, 0);
+ unsigned flags = is_dir ? DO_MATCH_DIRECTORY : 0;
+ positive = do_match_pathspec(ps, name, namelen,
+ prefix, seen, flags);
if (!(ps->magic & PATHSPEC_EXCLUDE) || !positive)
return positive;
- negative = match_pathspec_depth_1(ps, name, namelen, prefix, seen, 1);
+ negative = do_match_pathspec(ps, name, namelen,
+ prefix, seen,
+ flags | DO_MATCH_EXCLUDE);
return negative ? 0 : positive;
}
el->filebuf = NULL;
}
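+/*
+ * Strip trailing spaces from an exclude pattern line, keeping spaces
+ * that are backslash-escaped.
+ */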
+static void trim_trailing_spaces(char *buf)
+{
+ int i, last_space = -1, nr_spaces, len = strlen(buf);
+ for (i = 0; i < len; i++)
+ if (buf[i] == '\\')
+ i++;
+ else if (buf[i] == ' ') {
+ if (last_space == -1) {
+ last_space = i;
+ nr_spaces = 1;
+ } else
+ nr_spaces++;
+ } else
+ last_space = -1;
+
+ if (last_space != -1 && last_space + nr_spaces == len)
+ buf[last_space] = '\0';
+}
+
int add_excludes_from_file_to_list(const char *fname,
const char *base,
int baselen,
if (buf[i] == '\n') {
if (entry != buf + i && entry[0] != '#') {
buf[i - (i && buf[i-1] == '\r')] = 0;
+ trim_trailing_spaces(entry);
add_exclude(entry, base, baselen, el, lineno);
}
lineno++;
for (nr = 0 ; ; nr++) {
const char *match;
- if (nr >= alloc) {
- alloc = alloc_nr(alloc);
- simplify = xrealloc(simplify, alloc * sizeof(*simplify));
- }
+ ALLOC_GROW(simplify, nr + 1, alloc);
match = *pathspec++;
if (!match)
break;
extern int simple_length(const char *match);
extern int no_wildcard(const char *string);
extern char *common_prefix(const struct pathspec *pathspec);
-extern int match_pathspec_depth(const struct pathspec *pathspec,
- const char *name, int namelen,
- int prefix, char *seen);
+extern int match_pathspec(const struct pathspec *pathspec,
+ const char *name, int namelen,
+ int prefix, char *seen, int is_dir);
extern int within_depth(const char *name, int namelen, int depth, int max_depth);
extern int fill_directory(struct dir_struct *dir, const struct pathspec *pathspec);
const char *pattern, const char *string,
int prefix);
+static inline int ce_path_match(const struct cache_entry *ce,
+ const struct pathspec *pathspec,
+ char *seen)
+{
+ return match_pathspec(pathspec, ce->name, ce_namelen(ce), 0, seen,
+ S_ISDIR(ce->ce_mode) || S_ISGITLINK(ce->ce_mode));
+}
+
+static inline int dir_path_match(const struct dir_entry *ent,
+ const struct pathspec *pathspec,
+ int prefix, char *seen)
+{
+ int has_trailing_dir = ent->len && ent->name[ent->len - 1] == '/';
+ int len = has_trailing_dir ? ent->len - 1 : ent->len;
+ return match_pathspec(pathspec, ent->name, len, prefix, seen,
+ has_trailing_dir);
+}
+
#endif
free(buf);
}
-static void remove_subtree(const char *path)
+static void remove_subtree(struct strbuf *path)
{
- DIR *dir = opendir(path);
+ DIR *dir = opendir(path->buf);
struct dirent *de;
- char pathbuf[PATH_MAX];
- char *name;
+ int origlen = path->len;
if (!dir)
- die_errno("cannot opendir '%s'", path);
- strcpy(pathbuf, path);
- name = pathbuf + strlen(path);
- *name++ = '/';
+ die_errno("cannot opendir '%s'", path->buf);
while ((de = readdir(dir)) != NULL) {
struct stat st;
+
if (is_dot_or_dotdot(de->d_name))
continue;
- strcpy(name, de->d_name);
- if (lstat(pathbuf, &st))
- die_errno("cannot lstat '%s'", pathbuf);
+
+ strbuf_addch(path, '/');
+ strbuf_addstr(path, de->d_name);
+ if (lstat(path->buf, &st))
+ die_errno("cannot lstat '%s'", path->buf);
if (S_ISDIR(st.st_mode))
- remove_subtree(pathbuf);
- else if (unlink(pathbuf))
- die_errno("cannot unlink '%s'", pathbuf);
+ remove_subtree(path);
+ else if (unlink(path->buf))
+ die_errno("cannot unlink '%s'", path->buf);
+ strbuf_setlen(path, origlen);
}
closedir(dir);
- if (rmdir(path))
- die_errno("cannot rmdir '%s'", path);
+ if (rmdir(path->buf))
+ die_errno("cannot rmdir '%s'", path->buf);
}
static int create_file(const char *path, unsigned int mode)
int checkout_entry(struct cache_entry *ce,
const struct checkout *state, char *topath)
{
- static struct strbuf path_buf = STRBUF_INIT;
- char *path;
+ static struct strbuf path = STRBUF_INIT;
struct stat st;
- int len;
if (topath)
return write_entry(ce, topath, state, 1);
- strbuf_reset(&path_buf);
- strbuf_add(&path_buf, state->base_dir, state->base_dir_len);
- strbuf_add(&path_buf, ce->name, ce_namelen(ce));
- path = path_buf.buf;
- len = path_buf.len;
+ strbuf_reset(&path);
+ strbuf_add(&path, state->base_dir, state->base_dir_len);
+ strbuf_add(&path, ce->name, ce_namelen(ce));
- if (!check_path(path, len, &st, state->base_dir_len)) {
+ if (!check_path(path.buf, path.len, &st, state->base_dir_len)) {
unsigned changed = ce_match_stat(ce, &st, CE_MATCH_IGNORE_VALID|CE_MATCH_IGNORE_SKIP_WORKTREE);
if (!changed)
return 0;
if (!state->force) {
if (!state->quiet)
- fprintf(stderr, "%s already exists, no checkout\n", path);
+ fprintf(stderr,
+ "%s already exists, no checkout\n",
+ path.buf);
return -1;
}
if (S_ISGITLINK(ce->ce_mode))
return 0;
if (!state->force)
- return error("%s is a directory", path);
- remove_subtree(path);
- } else if (unlink(path))
- return error("unable to unlink old '%s' (%s)", path, strerror(errno));
+ return error("%s is a directory", path.buf);
+ remove_subtree(&path);
+ } else if (unlink(path.buf))
+ return error("unable to unlink old '%s' (%s)",
+ path.buf, strerror(errno));
} else if (state->not_new)
return 0;
- create_directories(path, len, state);
- return write_entry(ce, path, state, 0);
+
+ create_directories(path.buf, path.len, state);
+ return write_entry(ce, path.buf, state, 0);
}
const char *askpass_program;
const char *excludes_file;
enum auto_crlf auto_crlf = AUTO_CRLF_FALSE;
-int read_replace_refs = 1; /* NEEDSWORK: rename to use_replace_refs */
+int check_replace_refs = 1;
enum eol core_eol = EOL_UNSET;
enum safe_crlf safe_crlf = SAFE_CRLF_WARN;
unsigned whitespace_rule_cfg = WS_DEFAULT_RULE;
if (!git_graft_file)
git_graft_file = git_pathdup("info/grafts");
if (getenv(NO_REPLACE_OBJECTS_ENVIRONMENT))
- read_replace_refs = 0;
+ check_replace_refs = 0;
namespace = expand_namespace(getenv(GIT_NAMESPACE_ENVIRONMENT));
namespace_len = strlen(namespace);
shallow_file = getenv(GIT_SHALLOW_FILE_ENVIRONMENT);
return xmkstemp_mode(template, mode);
}
-int odb_pack_keep(char *name, size_t namesz, unsigned char *sha1)
+int odb_pack_keep(char *name, size_t namesz, const unsigned char *sha1)
{
int fd;
--- /dev/null
+/**
+ * Copyright 2013, GitHub, Inc
+ * Copyright 2009-2013, Daniel Lemire, Cliff Moon,
+ * David McIntosh, Robert Becho, Google Inc. and Veronika Zenz
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version 2
+ * of the License, or (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
+ */
+#include "git-compat-util.h"
+#include "ewok.h"
+
+#define MASK(x) ((eword_t)1 << (x % BITS_IN_WORD))
+#define BLOCK(x) (x / BITS_IN_WORD)
+
+struct bitmap *bitmap_new(void)
+{
+ struct bitmap *bitmap = ewah_malloc(sizeof(struct bitmap));
+ bitmap->words = ewah_calloc(32, sizeof(eword_t));
+ bitmap->word_alloc = 32;
+ return bitmap;
+}
+
+void bitmap_set(struct bitmap *self, size_t pos)
+{
+ size_t block = BLOCK(pos);
+
+ if (block >= self->word_alloc) {
+ size_t old_size = self->word_alloc;
+ self->word_alloc = block * 2;
+ self->words = ewah_realloc(self->words,
+ self->word_alloc * sizeof(eword_t));
+
+ memset(self->words + old_size, 0x0,
+ (self->word_alloc - old_size) * sizeof(eword_t));
+ }
+
+ self->words[block] |= MASK(pos);
+}
+
+void bitmap_clear(struct bitmap *self, size_t pos)
+{
+ size_t block = BLOCK(pos);
+
+ if (block < self->word_alloc)
+ self->words[block] &= ~MASK(pos);
+}
+
+int bitmap_get(struct bitmap *self, size_t pos)
+{
+ size_t block = BLOCK(pos);
+ return block < self->word_alloc &&
+ (self->words[block] & MASK(pos)) != 0;
+}
+
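+/* Compress an uncompressed bitmap into its EWAH run-length encoded form. */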
+struct ewah_bitmap *bitmap_to_ewah(struct bitmap *bitmap)
+{
+ struct ewah_bitmap *ewah = ewah_new();
+ size_t i, running_empty_words = 0;
+ eword_t last_word = 0;
+
+ for (i = 0; i < bitmap->word_alloc; ++i) {
+ if (bitmap->words[i] == 0) {
+ running_empty_words++;
+ continue;
+ }
+
+ if (last_word != 0)
+ ewah_add(ewah, last_word);
+
+ if (running_empty_words > 0) {
+ ewah_add_empty_words(ewah, 0, running_empty_words);
+ running_empty_words = 0;
+ }
+
+ last_word = bitmap->words[i];
+ }
+
+ ewah_add(ewah, last_word);
+ return ewah;
+}
+
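+/* Expand an EWAH-compressed bitmap back into an uncompressed bitmap. */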
+struct bitmap *ewah_to_bitmap(struct ewah_bitmap *ewah)
+{
+ struct bitmap *bitmap = bitmap_new();
+ struct ewah_iterator it;
+ eword_t blowup;
+ size_t i = 0;
+
+ ewah_iterator_init(&it, ewah);
+
+ while (ewah_iterator_next(&blowup, &it)) {
+ if (i >= bitmap->word_alloc) {
+ bitmap->word_alloc *= 1.5;
+ bitmap->words = ewah_realloc(
+ bitmap->words, bitmap->word_alloc * sizeof(eword_t));
+ }
+
+ bitmap->words[i++] = blowup;
+ }
+
+ bitmap->word_alloc = i;
+ return bitmap;
+}
+
+void bitmap_and_not(struct bitmap *self, struct bitmap *other)
+{
+ const size_t count = (self->word_alloc < other->word_alloc) ?
+ self->word_alloc : other->word_alloc;
+
+ size_t i;
+
+ for (i = 0; i < count; ++i)
+ self->words[i] &= ~other->words[i];
+}
+
+void bitmap_or_ewah(struct bitmap *self, struct ewah_bitmap *other)
+{
+ size_t original_size = self->word_alloc;
+ size_t other_final = (other->bit_size / BITS_IN_WORD) + 1;
+ size_t i = 0;
+ struct ewah_iterator it;
+ eword_t word;
+
+ if (self->word_alloc < other_final) {
+ self->word_alloc = other_final;
+ self->words = ewah_realloc(self->words,
+ self->word_alloc * sizeof(eword_t));
+ memset(self->words + original_size, 0x0,
+ (self->word_alloc - original_size) * sizeof(eword_t));
+ }
+
+ ewah_iterator_init(&it, other);
+
+ while (ewah_iterator_next(&word, &it))
+ self->words[i++] |= word;
+}
+
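+/* Call the callback with the position of every bit that is set. */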
+void bitmap_each_bit(struct bitmap *self, ewah_callback callback, void *data)
+{
+ size_t pos = 0, i;
+
+ for (i = 0; i < self->word_alloc; ++i) {
+ eword_t word = self->words[i];
+ uint32_t offset;
+
+ if (word == (eword_t)~0) {
+ for (offset = 0; offset < BITS_IN_WORD; ++offset)
+ callback(pos++, data);
+ } else {
+ for (offset = 0; offset < BITS_IN_WORD; ++offset) {
+ if ((word >> offset) == 0)
+ break;
+
+ offset += ewah_bit_ctz64(word >> offset);
+ callback(pos + offset, data);
+ }
+ pos += BITS_IN_WORD;
+ }
+ }
+}
+
+size_t bitmap_popcount(struct bitmap *self)
+{
+ size_t i, count = 0;
+
+ for (i = 0; i < self->word_alloc; ++i)
+ count += ewah_bit_popcount64(self->words[i]);
+
+ return count;
+}
+
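+/* Bitmaps are equal when they agree on every word; missing words count as zero. */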
+int bitmap_equals(struct bitmap *self, struct bitmap *other)
+{
+ struct bitmap *big, *small;
+ size_t i;
+
+ if (self->word_alloc < other->word_alloc) {
+ small = self;
+ big = other;
+ } else {
+ small = other;
+ big = self;
+ }
+
+ for (i = 0; i < small->word_alloc; ++i) {
+ if (small->words[i] != big->words[i])
+ return 0;
+ }
+
+ for (; i < big->word_alloc; ++i) {
+ if (big->words[i] != 0)
+ return 0;
+ }
+
+ return 1;
+}
+
+void bitmap_reset(struct bitmap *bitmap)
+{
+ memset(bitmap->words, 0x0, bitmap->word_alloc * sizeof(eword_t));
+}
+
+void bitmap_free(struct bitmap *bitmap)
+{
+ if (bitmap == NULL)
+ return;
+
+ free(bitmap->words);
+ free(bitmap);
+}
--- /dev/null
+/**
+ * Copyright 2013, GitHub, Inc
+ * Copyright 2009-2013, Daniel Lemire, Cliff Moon,
+ * David McIntosh, Robert Becho, Google Inc. and Veronika Zenz
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version 2
+ * of the License, or (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
+ */
+#include "git-compat-util.h"
+#include "ewok.h"
+#include "ewok_rlw.h"
+
+static inline size_t min_size(size_t a, size_t b)
+{
+ return a < b ? a : b;
+}
+
+static inline size_t max_size(size_t a, size_t b)
+{
+ return a > b ? a : b;
+}
+
+static inline void buffer_grow(struct ewah_bitmap *self, size_t new_size)
+{
+ size_t rlw_offset = (uint8_t *)self->rlw - (uint8_t *)self->buffer;
+
+ if (self->alloc_size >= new_size)
+ return;
+
+ self->alloc_size = new_size;
+ self->buffer = ewah_realloc(self->buffer,
+ self->alloc_size * sizeof(eword_t));
+ self->rlw = self->buffer + (rlw_offset / sizeof(size_t));
+}
+
+static inline void buffer_push(struct ewah_bitmap *self, eword_t value)
+{
+ if (self->buffer_size + 1 >= self->alloc_size)
+ buffer_grow(self, self->buffer_size * 3 / 2);
+
+ self->buffer[self->buffer_size++] = value;
+}
+
+static void buffer_push_rlw(struct ewah_bitmap *self, eword_t value)
+{
+ buffer_push(self, value);
+ self->rlw = self->buffer + self->buffer_size - 1;
+}
+
+static size_t add_empty_words(struct ewah_bitmap *self, int v, size_t number)
+{
+ size_t added = 0;
+ eword_t runlen, can_add;
+
+ if (rlw_get_run_bit(self->rlw) != v && rlw_size(self->rlw) == 0) {
+ rlw_set_run_bit(self->rlw, v);
+ } else if (rlw_get_literal_words(self->rlw) != 0 ||
+ rlw_get_run_bit(self->rlw) != v) {
+ buffer_push_rlw(self, 0);
+ if (v) rlw_set_run_bit(self->rlw, v);
+ added++;
+ }
+
+ runlen = rlw_get_running_len(self->rlw);
+ can_add = min_size(number, RLW_LARGEST_RUNNING_COUNT - runlen);
+
+ rlw_set_running_len(self->rlw, runlen + can_add);
+ number -= can_add;
+
+ while (number >= RLW_LARGEST_RUNNING_COUNT) {
+ buffer_push_rlw(self, 0);
+ added++;
+ if (v) rlw_set_run_bit(self->rlw, v);
+ rlw_set_running_len(self->rlw, RLW_LARGEST_RUNNING_COUNT);
+ number -= RLW_LARGEST_RUNNING_COUNT;
+ }
+
+ if (number > 0) {
+ buffer_push_rlw(self, 0);
+ added++;
+
+ if (v) rlw_set_run_bit(self->rlw, v);
+ rlw_set_running_len(self->rlw, number);
+ }
+
+ return added;
+}
+
+size_t ewah_add_empty_words(struct ewah_bitmap *self, int v, size_t number)
+{
+ if (number == 0)
+ return 0;
+
+ self->bit_size += number * BITS_IN_WORD;
+ return add_empty_words(self, v, number);
+}
+
+static size_t add_literal(struct ewah_bitmap *self, eword_t new_data)
+{
+ eword_t current_num = rlw_get_literal_words(self->rlw);
+
+ if (current_num >= RLW_LARGEST_LITERAL_COUNT) {
+ buffer_push_rlw(self, 0);
+
+ rlw_set_literal_words(self->rlw, 1);
+ buffer_push(self, new_data);
+ return 2;
+ }
+
+ rlw_set_literal_words(self->rlw, current_num + 1);
+
+ /* sanity check */
+ assert(rlw_get_literal_words(self->rlw) == current_num + 1);
+
+ buffer_push(self, new_data);
+ return 1;
+}
+
+void ewah_add_dirty_words(
+ struct ewah_bitmap *self, const eword_t *buffer,
+ size_t number, int negate)
+{
+ size_t literals, can_add;
+
+ while (1) {
+ literals = rlw_get_literal_words(self->rlw);
+ can_add = min_size(number, RLW_LARGEST_LITERAL_COUNT - literals);
+
+ rlw_set_literal_words(self->rlw, literals + can_add);
+
+ if (self->buffer_size + can_add >= self->alloc_size)
+ buffer_grow(self, (self->buffer_size + can_add) * 3 / 2);
+
+ if (negate) {
+ size_t i;
+ for (i = 0; i < can_add; ++i)
+ self->buffer[self->buffer_size++] = ~buffer[i];
+ } else {
+ memcpy(self->buffer + self->buffer_size,
+ buffer, can_add * sizeof(eword_t));
+ self->buffer_size += can_add;
+ }
+
+ self->bit_size += can_add * BITS_IN_WORD;
+
+ if (number - can_add == 0)
+ break;
+
+ buffer_push_rlw(self, 0);
+ buffer += can_add;
+ number -= can_add;
+ }
+}
+
+static size_t add_empty_word(struct ewah_bitmap *self, int v)
+{
+ int no_literal = (rlw_get_literal_words(self->rlw) == 0);
+ eword_t run_len = rlw_get_running_len(self->rlw);
+
+ if (no_literal && run_len == 0) {
+ rlw_set_run_bit(self->rlw, v);
+ assert(rlw_get_run_bit(self->rlw) == v);
+ }
+
+ if (no_literal && rlw_get_run_bit(self->rlw) == v &&
+ run_len < RLW_LARGEST_RUNNING_COUNT) {
+ rlw_set_running_len(self->rlw, run_len + 1);
+ assert(rlw_get_running_len(self->rlw) == run_len + 1);
+ return 0;
+ } else {
+ buffer_push_rlw(self, 0);
+
+ assert(rlw_get_running_len(self->rlw) == 0);
+ assert(rlw_get_run_bit(self->rlw) == 0);
+ assert(rlw_get_literal_words(self->rlw) == 0);
+
+ rlw_set_run_bit(self->rlw, v);
+ assert(rlw_get_run_bit(self->rlw) == v);
+
+ rlw_set_running_len(self->rlw, 1);
+ assert(rlw_get_running_len(self->rlw) == 1);
+ assert(rlw_get_literal_words(self->rlw) == 0);
+ return 1;
+ }
+}
+
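+/* Append one word, run-length encoding all-zero and all-one words. */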
+size_t ewah_add(struct ewah_bitmap *self, eword_t word)
+{
+ self->bit_size += BITS_IN_WORD;
+
+ if (word == 0)
+ return add_empty_word(self, 0);
+
+ if (word == (eword_t)(~0))
+ return add_empty_word(self, 1);
+
+ return add_literal(self, word);
+}
+
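+/* Set bit i; bits must be set in strictly increasing order. */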
+void ewah_set(struct ewah_bitmap *self, size_t i)
+{
+ const size_t dist =
+ (i + BITS_IN_WORD) / BITS_IN_WORD -
+ (self->bit_size + BITS_IN_WORD - 1) / BITS_IN_WORD;
+
+ assert(i >= self->bit_size);
+
+ self->bit_size = i + 1;
+
+ if (dist > 0) {
+ if (dist > 1)
+ add_empty_words(self, 0, dist - 1);
+
+ add_literal(self, (eword_t)1 << (i % BITS_IN_WORD));
+ return;
+ }
+
+ if (rlw_get_literal_words(self->rlw) == 0) {
+ rlw_set_running_len(self->rlw,
+ rlw_get_running_len(self->rlw) - 1);
+ add_literal(self, (eword_t)1 << (i % BITS_IN_WORD));
+ return;
+ }
+
+ self->buffer[self->buffer_size - 1] |=
+ ((eword_t)1 << (i % BITS_IN_WORD));
+
+ /* check if we just completed a stream of 1s */
+ if (self->buffer[self->buffer_size - 1] == (eword_t)(~0)) {
+ self->buffer[--self->buffer_size] = 0;
+ rlw_set_literal_words(self->rlw,
+ rlw_get_literal_words(self->rlw) - 1);
+ add_empty_word(self, 1);
+ }
+}
+
+void ewah_each_bit(struct ewah_bitmap *self, void (*callback)(size_t, void*), void *payload)
+{
+ size_t pos = 0;
+ size_t pointer = 0;
+ size_t k;
+
+ while (pointer < self->buffer_size) {
+ eword_t *word = &self->buffer[pointer];
+
+ if (rlw_get_run_bit(word)) {
+ size_t len = rlw_get_running_len(word) * BITS_IN_WORD;
+ for (k = 0; k < len; ++k, ++pos)
+ callback(pos, payload);
+ } else {
+ pos += rlw_get_running_len(word) * BITS_IN_WORD;
+ }
+
+ ++pointer;
+
+ for (k = 0; k < rlw_get_literal_words(word); ++k) {
+ int c;
+
+ /* todo: zero count optimization */
+ for (c = 0; c < BITS_IN_WORD; ++c, ++pos) {
+ if ((self->buffer[pointer] & ((eword_t)1 << c)) != 0)
+ callback(pos, payload);
+ }
+
+ ++pointer;
+ }
+ }
+}
+
+struct ewah_bitmap *ewah_new(void)
+{
+ struct ewah_bitmap *self;
+
+ self = ewah_malloc(sizeof(struct ewah_bitmap));
+ if (self == NULL)
+ return NULL;
+
+ self->buffer = ewah_malloc(32 * sizeof(eword_t));
+ self->alloc_size = 32;
+
+ ewah_clear(self);
+ return self;
+}
+
+void ewah_clear(struct ewah_bitmap *self)
+{
+ self->buffer_size = 1;
+ self->buffer[0] = 0;
+ self->bit_size = 0;
+ self->rlw = self->buffer;
+}
+
+void ewah_free(struct ewah_bitmap *self)
+{
+ if (!self)
+ return;
+
+ if (self->alloc_size)
+ free(self->buffer);
+
+ free(self);
+}
+
+static void read_new_rlw(struct ewah_iterator *it)
+{
+ const eword_t *word = NULL;
+
+ it->literals = 0;
+ it->compressed = 0;
+
+ while (1) {
+ word = &it->buffer[it->pointer];
+
+ it->rl = rlw_get_running_len(word);
+ it->lw = rlw_get_literal_words(word);
+ it->b = rlw_get_run_bit(word);
+
+ if (it->rl || it->lw)
+ return;
+
+ if (it->pointer < it->buffer_size - 1) {
+ it->pointer++;
+ } else {
+ it->pointer = it->buffer_size;
+ return;
+ }
+ }
+}
+
+int ewah_iterator_next(eword_t *next, struct ewah_iterator *it)
+{
+ if (it->pointer >= it->buffer_size)
+ return 0;
+
+ if (it->compressed < it->rl) {
+ it->compressed++;
+ *next = it->b ? (eword_t)(~0) : 0;
+ } else {
+ assert(it->literals < it->lw);
+
+ it->literals++;
+ it->pointer++;
+
+ assert(it->pointer < it->buffer_size);
+
+ *next = it->buffer[it->pointer];
+ }
+
+ if (it->compressed == it->rl && it->literals == it->lw) {
+ if (++it->pointer < it->buffer_size)
+ read_new_rlw(it);
+ }
+
+ return 1;
+}
+
+void ewah_iterator_init(struct ewah_iterator *it, struct ewah_bitmap *parent)
+{
+ it->buffer = parent->buffer;
+ it->buffer_size = parent->buffer_size;
+ it->pointer = 0;
+
+ it->lw = 0;
+ it->rl = 0;
+ it->compressed = 0;
+ it->literals = 0;
+ it->b = 0;
+
+ if (it->pointer < it->buffer_size)
+ read_new_rlw(it);
+}
+
+void ewah_not(struct ewah_bitmap *self)
+{
+ size_t pointer = 0;
+
+ while (pointer < self->buffer_size) {
+ eword_t *word = &self->buffer[pointer];
+ size_t literals, k;
+
+ rlw_xor_run_bit(word);
+ ++pointer;
+
+ literals = rlw_get_literal_words(word);
+ for (k = 0; k < literals; ++k) {
+ self->buffer[pointer] = ~self->buffer[pointer];
+ ++pointer;
+ }
+ }
+}
+
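+/* Store the bitwise XOR of ewah_i and ewah_j into out. */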
+void ewah_xor(
+ struct ewah_bitmap *ewah_i,
+ struct ewah_bitmap *ewah_j,
+ struct ewah_bitmap *out)
+{
+ struct rlw_iterator rlw_i;
+ struct rlw_iterator rlw_j;
+ size_t literals;
+
+ rlwit_init(&rlw_i, ewah_i);
+ rlwit_init(&rlw_j, ewah_j);
+
+ while (rlwit_word_size(&rlw_i) > 0 && rlwit_word_size(&rlw_j) > 0) {
+ while (rlw_i.rlw.running_len > 0 || rlw_j.rlw.running_len > 0) {
+ struct rlw_iterator *prey, *predator;
+ size_t index;
+ int negate_words;
+
+ if (rlw_i.rlw.running_len < rlw_j.rlw.running_len) {
+ prey = &rlw_i;
+ predator = &rlw_j;
+ } else {
+ prey = &rlw_j;
+ predator = &rlw_i;
+ }
+
+ negate_words = !!predator->rlw.running_bit;
+ index = rlwit_discharge(prey, out,
+ predator->rlw.running_len, negate_words);
+
+ ewah_add_empty_words(out, negate_words,
+ predator->rlw.running_len - index);
+
+ rlwit_discard_first_words(predator,
+ predator->rlw.running_len);
+ }
+
+ literals = min_size(
+ rlw_i.rlw.literal_words,
+ rlw_j.rlw.literal_words);
+
+ if (literals) {
+ size_t k;
+
+ for (k = 0; k < literals; ++k) {
+ ewah_add(out,
+ rlw_i.buffer[rlw_i.literal_word_start + k] ^
+ rlw_j.buffer[rlw_j.literal_word_start + k]
+ );
+ }
+
+ rlwit_discard_first_words(&rlw_i, literals);
+ rlwit_discard_first_words(&rlw_j, literals);
+ }
+ }
+
+ if (rlwit_word_size(&rlw_i) > 0)
+ rlwit_discharge(&rlw_i, out, ~0, 0);
+ else
+ rlwit_discharge(&rlw_j, out, ~0, 0);
+
+ out->bit_size = max_size(ewah_i->bit_size, ewah_j->bit_size);
+}
+
+void ewah_and(
+ struct ewah_bitmap *ewah_i,
+ struct ewah_bitmap *ewah_j,
+ struct ewah_bitmap *out)
+{
+ struct rlw_iterator rlw_i;
+ struct rlw_iterator rlw_j;
+ size_t literals;
+
+ rlwit_init(&rlw_i, ewah_i);
+ rlwit_init(&rlw_j, ewah_j);
+
+ while (rlwit_word_size(&rlw_i) > 0 && rlwit_word_size(&rlw_j) > 0) {
+ while (rlw_i.rlw.running_len > 0 || rlw_j.rlw.running_len > 0) {
+ struct rlw_iterator *prey, *predator;
+
+ if (rlw_i.rlw.running_len < rlw_j.rlw.running_len) {
+ prey = &rlw_i;
+ predator = &rlw_j;
+ } else {
+ prey = &rlw_j;
+ predator = &rlw_i;
+ }
+
+ if (predator->rlw.running_bit == 0) {
+ ewah_add_empty_words(out, 0,
+ predator->rlw.running_len);
+ rlwit_discard_first_words(prey,
+ predator->rlw.running_len);
+ rlwit_discard_first_words(predator,
+ predator->rlw.running_len);
+ } else {
+ size_t index = rlwit_discharge(prey, out,
+ predator->rlw.running_len, 0);
+ ewah_add_empty_words(out, 0,
+ predator->rlw.running_len - index);
+ rlwit_discard_first_words(predator,
+ predator->rlw.running_len);
+ }
+ }
+
+ literals = min_size(
+ rlw_i.rlw.literal_words,
+ rlw_j.rlw.literal_words);
+
+ if (literals) {
+ size_t k;
+
+ for (k = 0; k < literals; ++k) {
+ ewah_add(out,
+ rlw_i.buffer[rlw_i.literal_word_start + k] &
+ rlw_j.buffer[rlw_j.literal_word_start + k]
+ );
+ }
+
+ rlwit_discard_first_words(&rlw_i, literals);
+ rlwit_discard_first_words(&rlw_j, literals);
+ }
+ }
+
+ if (rlwit_word_size(&rlw_i) > 0)
+ rlwit_discharge_empty(&rlw_i, out);
+ else
+ rlwit_discharge_empty(&rlw_j, out);
+
+ out->bit_size = max_size(ewah_i->bit_size, ewah_j->bit_size);
+}
+
+void ewah_and_not(
+ struct ewah_bitmap *ewah_i,
+ struct ewah_bitmap *ewah_j,
+ struct ewah_bitmap *out)
+{
+ struct rlw_iterator rlw_i;
+ struct rlw_iterator rlw_j;
+ size_t literals;
+
+ rlwit_init(&rlw_i, ewah_i);
+ rlwit_init(&rlw_j, ewah_j);
+
+ while (rlwit_word_size(&rlw_i) > 0 && rlwit_word_size(&rlw_j) > 0) {
+ while (rlw_i.rlw.running_len > 0 || rlw_j.rlw.running_len > 0) {
+ struct rlw_iterator *prey, *predator;
+
+ if (rlw_i.rlw.running_len < rlw_j.rlw.running_len) {
+ prey = &rlw_i;
+ predator = &rlw_j;
+ } else {
+ prey = &rlw_j;
+ predator = &rlw_i;
+ }
+
+ if ((predator->rlw.running_bit && prey == &rlw_i) ||
+ (!predator->rlw.running_bit && prey != &rlw_i)) {
+ ewah_add_empty_words(out, 0,
+ predator->rlw.running_len);
+ rlwit_discard_first_words(prey,
+ predator->rlw.running_len);
+ rlwit_discard_first_words(predator,
+ predator->rlw.running_len);
+ } else {
+ size_t index;
+ int negate_words;
+
+ negate_words = (&rlw_i != prey);
+ index = rlwit_discharge(prey, out,
+ predator->rlw.running_len, negate_words);
+ ewah_add_empty_words(out, negate_words,
+ predator->rlw.running_len - index);
+ rlwit_discard_first_words(predator,
+ predator->rlw.running_len);
+ }
+ }
+
+ literals = min_size(
+ rlw_i.rlw.literal_words,
+ rlw_j.rlw.literal_words);
+
+ if (literals) {
+ size_t k;
+
+ for (k = 0; k < literals; ++k) {
+ ewah_add(out,
+ rlw_i.buffer[rlw_i.literal_word_start + k] &
+ ~(rlw_j.buffer[rlw_j.literal_word_start + k])
+ );
+ }
+
+ rlwit_discard_first_words(&rlw_i, literals);
+ rlwit_discard_first_words(&rlw_j, literals);
+ }
+ }
+
+ if (rlwit_word_size(&rlw_i) > 0)
+ rlwit_discharge(&rlw_i, out, ~0, 0);
+ else
+ rlwit_discharge_empty(&rlw_j, out);
+
+ out->bit_size = max_size(ewah_i->bit_size, ewah_j->bit_size);
+}
+
+void ewah_or(
+ struct ewah_bitmap *ewah_i,
+ struct ewah_bitmap *ewah_j,
+ struct ewah_bitmap *out)
+{
+ struct rlw_iterator rlw_i;
+ struct rlw_iterator rlw_j;
+ size_t literals;
+
+ rlwit_init(&rlw_i, ewah_i);
+ rlwit_init(&rlw_j, ewah_j);
+
+ while (rlwit_word_size(&rlw_i) > 0 && rlwit_word_size(&rlw_j) > 0) {
+ while (rlw_i.rlw.running_len > 0 || rlw_j.rlw.running_len > 0) {
+ struct rlw_iterator *prey, *predator;
+
+ if (rlw_i.rlw.running_len < rlw_j.rlw.running_len) {
+ prey = &rlw_i;
+ predator = &rlw_j;
+ } else {
+ prey = &rlw_j;
+ predator = &rlw_i;
+ }
+
+ if (predator->rlw.running_bit) {
+ ewah_add_empty_words(out, 0,
+ predator->rlw.running_len);
+ rlwit_discard_first_words(prey,
+ predator->rlw.running_len);
+ rlwit_discard_first_words(predator,
+ predator->rlw.running_len);
+ } else {
+ size_t index = rlwit_discharge(prey, out,
+ predator->rlw.running_len, 0);
+ ewah_add_empty_words(out, 0,
+ predator->rlw.running_len - index);
+ rlwit_discard_first_words(predator,
+ predator->rlw.running_len);
+ }
+ }
+
+ literals = min_size(
+ rlw_i.rlw.literal_words,
+ rlw_j.rlw.literal_words);
+
+ if (literals) {
+ size_t k;
+
+ for (k = 0; k < literals; ++k) {
+ ewah_add(out,
+ rlw_i.buffer[rlw_i.literal_word_start + k] |
+ rlw_j.buffer[rlw_j.literal_word_start + k]
+ );
+ }
+
+ rlwit_discard_first_words(&rlw_i, literals);
+ rlwit_discard_first_words(&rlw_j, literals);
+ }
+ }
+
+ if (rlwit_word_size(&rlw_i) > 0)
+ rlwit_discharge(&rlw_i, out, ~0, 0);
+ else
+ rlwit_discharge(&rlw_j, out, ~0, 0);
+
+ out->bit_size = max_size(ewah_i->bit_size, ewah_j->bit_size);
+}
+
+
+#define BITMAP_POOL_MAX 16
+static struct ewah_bitmap *bitmap_pool[BITMAP_POOL_MAX];
+static size_t bitmap_pool_size;
+
+struct ewah_bitmap *ewah_pool_new(void)
+{
+ if (bitmap_pool_size)
+ return bitmap_pool[--bitmap_pool_size];
+
+ return ewah_new();
+}
+
+void ewah_pool_free(struct ewah_bitmap *self)
+{
+ if (self == NULL)
+ return;
+
+ if (bitmap_pool_size == BITMAP_POOL_MAX ||
+ self->alloc_size == 0) {
+ ewah_free(self);
+ return;
+ }
+
+ ewah_clear(self);
+ bitmap_pool[bitmap_pool_size++] = self;
+}
+
+uint32_t ewah_checksum(struct ewah_bitmap *self)
+{
+ const uint8_t *p = (uint8_t *)self->buffer;
+ uint32_t crc = (uint32_t)self->bit_size;
+ size_t size = self->buffer_size * sizeof(eword_t);
+
+ while (size--)
+ crc = (crc << 5) - crc + (uint32_t)*p++;
+
+ return crc;
+}
--- /dev/null
+/**
+ * Copyright 2013, GitHub, Inc
+ * Copyright 2009-2013, Daniel Lemire, Cliff Moon,
+ * David McIntosh, Robert Becho, Google Inc. and Veronika Zenz
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version 2
+ * of the License, or (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
+ */
+#include "git-compat-util.h"
+#include "ewok.h"
+
+int ewah_serialize_native(struct ewah_bitmap *self, int fd)
+{
+ uint32_t write32;
+ size_t to_write = self->buffer_size * 8;
+
+ /* 32 bit -- bit size for the map */
+ write32 = (uint32_t)self->bit_size;
+ if (write(fd, &write32, 4) != 4)
+ return -1;
+
+ /** 32 bit -- number of compressed 64-bit words */
+ write32 = (uint32_t)self->buffer_size;
+ if (write(fd, &write32, 4) != 4)
+ return -1;
+
+ if (write(fd, self->buffer, to_write) != to_write)
+ return -1;
+
+ /** 32 bit -- position for the RLW */
+ write32 = self->rlw - self->buffer;
+ if (write(fd, &write32, 4) != 4)
+ return -1;
+
+ return (3 * 4) + to_write;
+}
+
+int ewah_serialize_to(struct ewah_bitmap *self,
+ int (*write_fun)(void *, const void *, size_t),
+ void *data)
+{
+ size_t i;
+ eword_t dump[2048];
+ const size_t words_per_dump = sizeof(dump) / sizeof(eword_t);
+ uint32_t bitsize, word_count, rlw_pos;
+
+ const eword_t *buffer;
+ size_t words_left;
+
+ /* 32 bit -- bit size for the map */
+ bitsize = htonl((uint32_t)self->bit_size);
+ if (write_fun(data, &bitsize, 4) != 4)
+ return -1;
+
+ /** 32 bit -- number of compressed 64-bit words */
+ word_count = htonl((uint32_t)self->buffer_size);
+ if (write_fun(data, &word_count, 4) != 4)
+ return -1;
+
+ /** 64 bit x N -- compressed words */
+ buffer = self->buffer;
+ words_left = self->buffer_size;
+
+ while (words_left >= words_per_dump) {
+ for (i = 0; i < words_per_dump; ++i, ++buffer)
+ dump[i] = htonll(*buffer);
+
+ if (write_fun(data, dump, sizeof(dump)) != sizeof(dump))
+ return -1;
+
+ words_left -= words_per_dump;
+ }
+
+ if (words_left) {
+ for (i = 0; i < words_left; ++i, ++buffer)
+ dump[i] = htonll(*buffer);
+
+ if (write_fun(data, dump, words_left * 8) != words_left * 8)
+ return -1;
+ }
+
+ /** 32 bit -- position for the RLW */
+	rlw_pos = (uint8_t *)self->rlw - (uint8_t *)self->buffer;
+ rlw_pos = htonl(rlw_pos / sizeof(eword_t));
+
+ if (write_fun(data, &rlw_pos, 4) != 4)
+ return -1;
+
+ return (3 * 4) + (self->buffer_size * 8);
+}
+
+static int write_helper(void *fd, const void *buf, size_t len)
+{
+ return write((intptr_t)fd, buf, len);
+}
+
+int ewah_serialize(struct ewah_bitmap *self, int fd)
+{
+ return ewah_serialize_to(self, write_helper, (void *)(intptr_t)fd);
+}
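As the wrapper above shows, ewah_serialize() is just ewah_serialize_to() with a write(2)-backed callback; the callback only has to write `len` bytes from `buf` to the sink described by its first argument and report how many bytes it wrote. Purely as an illustration (the struct and helper names below are made up and not part of the ewah API), a callback that serializes into a growable in-memory buffer could be sketched as:

	struct membuf {
		unsigned char *data;
		size_t len, alloc;
	};

	static int membuf_write(void *out, const void *buf, size_t len)
	{
		struct membuf *mb = out;

		if (mb->len + len > mb->alloc) {
			mb->alloc = (mb->len + len) * 2;
			mb->data = xrealloc(mb->data, mb->alloc);
		}
		memcpy(mb->data + mb->len, buf, len);
		mb->len += len;
		return len;	/* caller compares this with the requested length */
	}

	/* ... then: ewah_serialize_to(bitmap, membuf_write, &mb); */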
+
+int ewah_read_mmap(struct ewah_bitmap *self, void *map, size_t len)
+{
+ uint8_t *ptr = map;
+ size_t i;
+
+ self->bit_size = get_be32(ptr);
+ ptr += sizeof(uint32_t);
+
+ self->buffer_size = self->alloc_size = get_be32(ptr);
+ ptr += sizeof(uint32_t);
+
+ self->buffer = ewah_realloc(self->buffer,
+ self->alloc_size * sizeof(eword_t));
+
+ if (!self->buffer)
+ return -1;
+
+ /*
+ * Copy the raw data for the bitmap as a whole chunk;
+ * if we're in a little-endian platform, we'll perform
+ * the endianness conversion in a separate pass to ensure
+ * we're loading 8-byte aligned words.
+ */
+ memcpy(self->buffer, ptr, self->buffer_size * sizeof(uint64_t));
+ ptr += self->buffer_size * sizeof(uint64_t);
+
+ for (i = 0; i < self->buffer_size; ++i)
+ self->buffer[i] = ntohll(self->buffer[i]);
+
+ self->rlw = self->buffer + get_be32(ptr);
+
+ return (3 * 4) + (self->buffer_size * 8);
+}
+
+int ewah_deserialize(struct ewah_bitmap *self, int fd)
+{
+ size_t i;
+ eword_t dump[2048];
+ const size_t words_per_dump = sizeof(dump) / sizeof(eword_t);
+ uint32_t bitsize, word_count, rlw_pos;
+
+ eword_t *buffer = NULL;
+ size_t words_left;
+
+ ewah_clear(self);
+
+ /* 32 bit -- bit size for the map */
+ if (read(fd, &bitsize, 4) != 4)
+ return -1;
+
+ self->bit_size = (size_t)ntohl(bitsize);
+
+ /** 32 bit -- number of compressed 64-bit words */
+ if (read(fd, &word_count, 4) != 4)
+ return -1;
+
+ self->buffer_size = self->alloc_size = (size_t)ntohl(word_count);
+ self->buffer = ewah_realloc(self->buffer,
+ self->alloc_size * sizeof(eword_t));
+
+ if (!self->buffer)
+ return -1;
+
+ /** 64 bit x N -- compressed words */
+ buffer = self->buffer;
+ words_left = self->buffer_size;
+
+ while (words_left >= words_per_dump) {
+ if (read(fd, dump, sizeof(dump)) != sizeof(dump))
+ return -1;
+
+ for (i = 0; i < words_per_dump; ++i, ++buffer)
+ *buffer = ntohll(dump[i]);
+
+ words_left -= words_per_dump;
+ }
+
+ if (words_left) {
+ if (read(fd, dump, words_left * 8) != words_left * 8)
+ return -1;
+
+ for (i = 0; i < words_left; ++i, ++buffer)
+ *buffer = ntohll(dump[i]);
+ }
+
+ /** 32 bit -- position for the RLW */
+ if (read(fd, &rlw_pos, 4) != 4)
+ return -1;
+
+ self->rlw = self->buffer + ntohl(rlw_pos);
+ return 0;
+}
--- /dev/null
+/**
+ * Copyright 2013, GitHub, Inc
+ * Copyright 2009-2013, Daniel Lemire, Cliff Moon,
+ * David McIntosh, Robert Becho, Google Inc. and Veronika Zenz
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version 2
+ * of the License, or (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
+ */
+#include "git-compat-util.h"
+#include "ewok.h"
+#include "ewok_rlw.h"
+
+static inline int next_word(struct rlw_iterator *it)
+{
+ if (it->pointer >= it->size)
+ return 0;
+
+ it->rlw.word = &it->buffer[it->pointer];
+ it->pointer += rlw_get_literal_words(it->rlw.word) + 1;
+
+ it->rlw.literal_words = rlw_get_literal_words(it->rlw.word);
+ it->rlw.running_len = rlw_get_running_len(it->rlw.word);
+ it->rlw.running_bit = rlw_get_run_bit(it->rlw.word);
+ it->rlw.literal_word_offset = 0;
+
+ return 1;
+}
+
+void rlwit_init(struct rlw_iterator *it, struct ewah_bitmap *from_ewah)
+{
+ it->buffer = from_ewah->buffer;
+ it->size = from_ewah->buffer_size;
+ it->pointer = 0;
+
+ next_word(it);
+
+ it->literal_word_start = rlwit_literal_words(it) +
+ it->rlw.literal_word_offset;
+}
+
+void rlwit_discard_first_words(struct rlw_iterator *it, size_t x)
+{
+ while (x > 0) {
+ size_t discard;
+
+ if (it->rlw.running_len > x) {
+ it->rlw.running_len -= x;
+ return;
+ }
+
+ x -= it->rlw.running_len;
+ it->rlw.running_len = 0;
+
+ discard = (x > it->rlw.literal_words) ? it->rlw.literal_words : x;
+
+ it->literal_word_start += discard;
+ it->rlw.literal_words -= discard;
+ x -= discard;
+
+ if (x > 0 || rlwit_word_size(it) == 0) {
+ if (!next_word(it))
+ break;
+
+ it->literal_word_start =
+ rlwit_literal_words(it) + it->rlw.literal_word_offset;
+ }
+ }
+}
+
+size_t rlwit_discharge(
+ struct rlw_iterator *it, struct ewah_bitmap *out, size_t max, int negate)
+{
+ size_t index = 0;
+
+ while (index < max && rlwit_word_size(it) > 0) {
+ size_t pd, pl = it->rlw.running_len;
+
+ if (index + pl > max)
+ pl = max - index;
+
+ ewah_add_empty_words(out, it->rlw.running_bit ^ negate, pl);
+ index += pl;
+
+ pd = it->rlw.literal_words;
+ if (pd + index > max)
+ pd = max - index;
+
+ ewah_add_dirty_words(out,
+ it->buffer + it->literal_word_start, pd, negate);
+
+ rlwit_discard_first_words(it, pd + pl);
+ index += pd;
+ }
+
+ return index;
+}
+
+void rlwit_discharge_empty(struct rlw_iterator *it, struct ewah_bitmap *out)
+{
+ while (rlwit_word_size(it) > 0) {
+ ewah_add_empty_words(out, 0, rlwit_word_size(it));
+ rlwit_discard_first_words(it, rlwit_word_size(it));
+ }
+}
--- /dev/null
+/**
+ * Copyright 2013, GitHub, Inc
+ * Copyright 2009-2013, Daniel Lemire, Cliff Moon,
+ * David McIntosh, Robert Becho, Google Inc. and Veronika Zenz
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version 2
+ * of the License, or (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
+ */
+#ifndef __EWOK_BITMAP_H__
+#define __EWOK_BITMAP_H__
+
+#ifndef ewah_malloc
+# define ewah_malloc xmalloc
+#endif
+#ifndef ewah_realloc
+# define ewah_realloc xrealloc
+#endif
+#ifndef ewah_calloc
+# define ewah_calloc xcalloc
+#endif
+
+typedef uint64_t eword_t;
+#define BITS_IN_WORD (sizeof(eword_t) * 8)
+
+/**
+ * Do not use __builtin_popcountll. The GCC implementation
+ * is notoriously slow on all platforms.
+ *
+ * See: http://gcc.gnu.org/bugzilla/show_bug.cgi?id=36041
+ */
+static inline uint32_t ewah_bit_popcount64(uint64_t x)
+{
+ x = (x & 0x5555555555555555ULL) + ((x >> 1) & 0x5555555555555555ULL);
+ x = (x & 0x3333333333333333ULL) + ((x >> 2) & 0x3333333333333333ULL);
+ x = (x & 0x0F0F0F0F0F0F0F0FULL) + ((x >> 4) & 0x0F0F0F0F0F0F0F0FULL);
+ return (x * 0x0101010101010101ULL) >> 56;
+}
+
+#ifdef __GNUC__
+#define ewah_bit_ctz64(x) __builtin_ctzll(x)
+#else
+static inline int ewah_bit_ctz64(uint64_t x)
+{
+ int n = 0;
+ if ((x & 0xffffffff) == 0) { x >>= 32; n += 32; }
+ if ((x & 0xffff) == 0) { x >>= 16; n += 16; }
+ if ((x & 0xff) == 0) { x >>= 8; n += 8; }
+ if ((x & 0xf) == 0) { x >>= 4; n += 4; }
+ if ((x & 0x3) == 0) { x >>= 2; n += 2; }
+ if ((x & 0x1) == 0) { x >>= 1; n += 1; }
+ return n + !x;
+}
+#endif
+
+struct ewah_bitmap {
+ eword_t *buffer;
+ size_t buffer_size;
+ size_t alloc_size;
+ size_t bit_size;
+ eword_t *rlw;
+};
+
+typedef void (*ewah_callback)(size_t pos, void *);
+
+struct ewah_bitmap *ewah_pool_new(void);
+void ewah_pool_free(struct ewah_bitmap *self);
+
+/**
+ * Allocate a new EWAH Compressed bitmap
+ */
+struct ewah_bitmap *ewah_new(void);
+
+/**
+ * Clear all the bits in the bitmap. Does not free or resize
+ * memory.
+ */
+void ewah_clear(struct ewah_bitmap *self);
+
+/**
+ * Free all the memory of the bitmap
+ */
+void ewah_free(struct ewah_bitmap *self);
+
+int ewah_serialize_to(struct ewah_bitmap *self,
+ int (*write_fun)(void *out, const void *buf, size_t len),
+ void *out);
+int ewah_serialize(struct ewah_bitmap *self, int fd);
+int ewah_serialize_native(struct ewah_bitmap *self, int fd);
+
+int ewah_deserialize(struct ewah_bitmap *self, int fd);
+int ewah_read_mmap(struct ewah_bitmap *self, void *map, size_t len);
+int ewah_read_mmap_native(struct ewah_bitmap *self, void *map, size_t len);
+
+uint32_t ewah_checksum(struct ewah_bitmap *self);
+
+/**
+ * Logical not (bitwise negation) in-place on the bitmap
+ *
+ * This operation is linear time based on the size of the bitmap.
+ */
+void ewah_not(struct ewah_bitmap *self);
+
+/**
+ * Call the given callback with the position of every single bit
+ * that has been set on the bitmap.
+ *
+ * This is an efficient operation that does not fully decompress
+ * the bitmap.
+ */
+void ewah_each_bit(struct ewah_bitmap *self, ewah_callback callback, void *payload);
+
+/**
+ * Set a given bit on the bitmap.
+ *
+ * The bit at position `pos` will be set to true. Because of the
+ * way that the bitmap is compressed, a set bit cannot be unset
+ * later on.
+ *
+ * Furthermore, since the bitmap uses streaming compression, bits
+ * can only be set incrementally.
+ *
+ * E.g.
+ * ewah_set(bitmap, 1); // ok
+ * ewah_set(bitmap, 76); // ok
+ * ewah_set(bitmap, 77); // ok
+ * ewah_set(bitmap, 8712800127); // ok
+ * ewah_set(bitmap, 25); // failed, assert raised
+ */
+void ewah_set(struct ewah_bitmap *self, size_t i);
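A minimal usage sketch tying the two entry points above together (the function names `print_bit` and `example` are hypothetical): positions are added in strictly increasing order with ewah_set() and read back, in order, through ewah_each_bit().

	static void print_bit(size_t pos, void *payload)
	{
		printf("bit %lu is set\n", (unsigned long)pos);
	}

	static void example(void)
	{
		struct ewah_bitmap *map = ewah_new();

		ewah_set(map, 1);
		ewah_set(map, 76);
		ewah_set(map, 8712800127);	/* fine: still strictly increasing */
		ewah_each_bit(map, print_bit, NULL);
		ewah_free(map);
	}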
+
+struct ewah_iterator {
+ const eword_t *buffer;
+ size_t buffer_size;
+
+ size_t pointer;
+ eword_t compressed, literals;
+ eword_t rl, lw;
+ int b;
+};
+
+/**
+ * Initialize a new iterator to run through the bitmap in uncompressed form.
+ *
+ * The iterator can be stack allocated. The underlying bitmap must not be freed
+ * before the iteration is over.
+ *
+ * E.g.
+ *
+ * struct ewah_bitmap *bitmap = ewah_new();
+ * struct ewah_iterator it;
+ *
+ * ewah_iterator_init(&it, bitmap);
+ */
+void ewah_iterator_init(struct ewah_iterator *it, struct ewah_bitmap *parent);
+
+/**
+ * Yield every single word in the bitmap in uncompressed form. That is,
+ * yield single words (32-64 bits) where each bit represents an actual
+ * bit from the bitmap.
+ *
+ * Return: true if a word was yielded, false if there are no words left
+ */
+int ewah_iterator_next(eword_t *next, struct ewah_iterator *it);
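For instance, a small sketch (the helper name is illustrative) that counts the set bits by combining the iterator with ewah_bit_popcount64() from this header:

	static size_t count_set_bits(struct ewah_bitmap *bitmap)
	{
		struct ewah_iterator it;
		eword_t word;
		size_t count = 0;

		ewah_iterator_init(&it, bitmap);
		while (ewah_iterator_next(&word, &it))
			count += ewah_bit_popcount64(word);

		return count;
	}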
+
+void ewah_or(
+ struct ewah_bitmap *ewah_i,
+ struct ewah_bitmap *ewah_j,
+ struct ewah_bitmap *out);
+
+void ewah_and_not(
+ struct ewah_bitmap *ewah_i,
+ struct ewah_bitmap *ewah_j,
+ struct ewah_bitmap *out);
+
+void ewah_xor(
+ struct ewah_bitmap *ewah_i,
+ struct ewah_bitmap *ewah_j,
+ struct ewah_bitmap *out);
+
+void ewah_and(
+ struct ewah_bitmap *ewah_i,
+ struct ewah_bitmap *ewah_j,
+ struct ewah_bitmap *out);
+
+/**
+ * Direct word access
+ */
+size_t ewah_add_empty_words(struct ewah_bitmap *self, int v, size_t number);
+void ewah_add_dirty_words(
+ struct ewah_bitmap *self, const eword_t *buffer, size_t number, int negate);
+size_t ewah_add(struct ewah_bitmap *self, eword_t word);
+
+
+/**
+ * Uncompressed, old-school bitmap that can be efficiently compressed
+ * into an `ewah_bitmap`.
+ */
+struct bitmap {
+ eword_t *words;
+ size_t word_alloc;
+};
+
+struct bitmap *bitmap_new(void);
+void bitmap_set(struct bitmap *self, size_t pos);
+void bitmap_clear(struct bitmap *self, size_t pos);
+int bitmap_get(struct bitmap *self, size_t pos);
+void bitmap_reset(struct bitmap *self);
+void bitmap_free(struct bitmap *self);
+int bitmap_equals(struct bitmap *self, struct bitmap *other);
+int bitmap_is_subset(struct bitmap *self, struct bitmap *super);
+
+struct ewah_bitmap *bitmap_to_ewah(struct bitmap *bitmap);
+struct bitmap *ewah_to_bitmap(struct ewah_bitmap *ewah);
+
+void bitmap_and_not(struct bitmap *self, struct bitmap *other);
+void bitmap_or_ewah(struct bitmap *self, struct ewah_bitmap *other);
+void bitmap_or(struct bitmap *self, const struct bitmap *other);
+
+void bitmap_each_bit(struct bitmap *self, ewah_callback callback, void *data);
+size_t bitmap_popcount(struct bitmap *self);
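A hedged sketch of the intended round trip (the helper name is made up): unlike the compressed form, bits can be set on `struct bitmap` in any order, and the result is compressed in one pass at the end.

	static struct ewah_bitmap *compress_example(void)
	{
		struct bitmap *raw = bitmap_new();
		struct ewah_bitmap *packed;

		bitmap_set(raw, 42);
		bitmap_set(raw, 7);	/* unlike ewah_set(), order does not matter */

		packed = bitmap_to_ewah(raw);
		bitmap_free(raw);
		return packed;
	}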
+
+#endif
--- /dev/null
+/**
+ * Copyright 2013, GitHub, Inc
+ * Copyright 2009-2013, Daniel Lemire, Cliff Moon,
+ * David McIntosh, Robert Becho, Google Inc. and Veronika Zenz
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version 2
+ * of the License, or (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
+ */
+#ifndef __EWOK_RLW_H__
+#define __EWOK_RLW_H__
+
+#define RLW_RUNNING_BITS (sizeof(eword_t) * 4)
+#define RLW_LITERAL_BITS (sizeof(eword_t) * 8 - 1 - RLW_RUNNING_BITS)
+
+#define RLW_LARGEST_RUNNING_COUNT (((eword_t)1 << RLW_RUNNING_BITS) - 1)
+#define RLW_LARGEST_LITERAL_COUNT (((eword_t)1 << RLW_LITERAL_BITS) - 1)
+
+#define RLW_LARGEST_RUNNING_COUNT_SHIFT (RLW_LARGEST_RUNNING_COUNT << 1)
+
+#define RLW_RUNNING_LEN_PLUS_BIT (((eword_t)1 << (RLW_RUNNING_BITS + 1)) - 1)
+
+static inline int rlw_get_run_bit(const eword_t *word)
+{
+ return *word & (eword_t)1;
+}
+
+static inline void rlw_set_run_bit(eword_t *word, int b)
+{
+ if (b) {
+ *word |= (eword_t)1;
+ } else {
+ *word &= (eword_t)(~1);
+ }
+}
+
+static inline void rlw_xor_run_bit(eword_t *word)
+{
+ if (*word & 1) {
+ *word &= (eword_t)(~1);
+ } else {
+ *word |= (eword_t)1;
+ }
+}
+
+static inline void rlw_set_running_len(eword_t *word, eword_t l)
+{
+ *word |= RLW_LARGEST_RUNNING_COUNT_SHIFT;
+ *word &= (l << 1) | (~RLW_LARGEST_RUNNING_COUNT_SHIFT);
+}
+
+static inline eword_t rlw_get_running_len(const eword_t *word)
+{
+ return (*word >> 1) & RLW_LARGEST_RUNNING_COUNT;
+}
+
+static inline eword_t rlw_get_literal_words(const eword_t *word)
+{
+ return *word >> (1 + RLW_RUNNING_BITS);
+}
+
+static inline void rlw_set_literal_words(eword_t *word, eword_t l)
+{
+ *word |= ~RLW_RUNNING_LEN_PLUS_BIT;
+ *word &= (l << (RLW_RUNNING_BITS + 1)) | RLW_RUNNING_LEN_PLUS_BIT;
+}
+
+static inline eword_t rlw_size(const eword_t *self)
+{
+ return rlw_get_running_len(self) + rlw_get_literal_words(self);
+}
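To make the RLW layout concrete: bit 0 holds the running bit, the next RLW_RUNNING_BITS bits hold the running length, and the remaining high bits hold the literal word count. A small self-check sketch (the values are arbitrary and the function is illustrative only; assert() is assumed to be available via git-compat-util.h):

	static void rlw_layout_example(void)
	{
		eword_t w = 0;

		rlw_set_run_bit(&w, 1);		/* run of all-ones words... */
		rlw_set_running_len(&w, 3);	/* ...three of them */
		rlw_set_literal_words(&w, 2);	/* followed by two literal words */

		assert(rlw_get_run_bit(&w) == 1);
		assert(rlw_get_running_len(&w) == 3);
		assert(rlw_get_literal_words(&w) == 2);
		assert(rlw_size(&w) == 5);
	}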
+
+struct rlw_iterator {
+ const eword_t *buffer;
+ size_t size;
+ size_t pointer;
+ size_t literal_word_start;
+
+ struct {
+ const eword_t *word;
+ int literal_words;
+ int running_len;
+ int literal_word_offset;
+ int running_bit;
+ } rlw;
+};
+
+void rlwit_init(struct rlw_iterator *it, struct ewah_bitmap *bitmap);
+void rlwit_discard_first_words(struct rlw_iterator *it, size_t x);
+size_t rlwit_discharge(
+ struct rlw_iterator *it, struct ewah_bitmap *out, size_t max, int negate);
+void rlwit_discharge_empty(struct rlw_iterator *it, struct ewah_bitmap *out);
+
+static inline size_t rlwit_word_size(struct rlw_iterator *it)
+{
+ return it->rlw.running_len + it->rlw.literal_words;
+}
+
+static inline size_t rlwit_literal_words(struct rlw_iterator *it)
+{
+ return it->pointer - it->rlw.literal_words;
+}
+
+#endif
unsigned int i, n;
struct tree_entry *e;
- slash1 = strchr(p, '/');
- if (slash1)
- n = slash1 - p;
- else
- n = strlen(p);
+ slash1 = strchrnul(p, '/');
+ n = slash1 - p;
if (!n)
die("Empty path component found in input");
- if (!slash1 && !S_ISDIR(mode) && subtree)
+ if (!*slash1 && !S_ISDIR(mode) && subtree)
die("Non-directories cannot have subtrees");
if (!root->tree)
for (i = 0; i < t->entry_count; i++) {
e = t->entries[i];
if (e->name->str_len == n && !strncmp_icase(p, e->name->str_dat, n)) {
- if (!slash1) {
+ if (!*slash1) {
if (!S_ISDIR(mode)
&& e->versions[1].mode == mode
&& !hashcmp(e->versions[1].sha1, sha1))
e->versions[0].mode = 0;
hashclr(e->versions[0].sha1);
t->entries[t->entry_count++] = e;
- if (slash1) {
+ if (*slash1) {
e->tree = new_tree_content(8);
e->versions[1].mode = S_IFDIR;
tree_content_set(e, slash1 + 1, sha1, mode, subtree);
unsigned int i, n;
struct tree_entry *e;
- slash1 = strchr(p, '/');
- if (slash1)
- n = slash1 - p;
- else
- n = strlen(p);
+ slash1 = strchrnul(p, '/');
+ n = slash1 - p;
if (!root->tree)
load_tree(root);
for (i = 0; i < t->entry_count; i++) {
e = t->entries[i];
if (e->name->str_len == n && !strncmp_icase(p, e->name->str_dat, n)) {
- if (slash1 && !S_ISDIR(e->versions[1].mode))
+ if (*slash1 && !S_ISDIR(e->versions[1].mode))
/*
* If p names a file in some subdirectory, and a
* file or symlink matching the name of the
* exist and need not be deleted.
*/
return 1;
- if (!slash1 || !S_ISDIR(e->versions[1].mode))
+ if (!*slash1 || !S_ISDIR(e->versions[1].mode))
goto del_entry;
if (!e->tree)
load_tree(e);
unsigned int i, n;
struct tree_entry *e;
- slash1 = strchr(p, '/');
- if (slash1)
- n = slash1 - p;
- else
- n = strlen(p);
+ slash1 = strchrnul(p, '/');
+ n = slash1 - p;
if (!n && !allow_root)
die("Empty path component found in input");
for (i = 0; i < t->entry_count; i++) {
e = t->entries[i];
if (e->name->str_len == n && !strncmp_icase(p, e->name->str_dat, n)) {
- if (!slash1)
+ if (!*slash1)
goto found_entry;
if (!S_ISDIR(e->versions[1].mode))
return 0;
static struct lock_file shallow_lock;
static const char *alternate_shallow_file;
+/* Remember to update object flag allocation in object.h */
#define COMPLETE (1U << 0)
#define COMMON (1U << 1)
#define COMMON_REF (1U << 2)
}
strbuf_release(&req_buf);
- consume_shallow_list(args, fd[0]);
+ if (!got_ready || !no_done)
+ consume_shallow_list(args, fd[0]);
while (flushes || multi_ack) {
int ack = get_ack(fd[0], result_sha1);
if (ack) {
if (!si->shallow || !si->shallow->nr)
return;
- if (alternate_shallow_file) {
- /*
- * The temporary shallow file is only useful for
- * index-pack and unpack-objects because it may
- * contain more roots than we want. Delete it.
- */
- if (*alternate_shallow_file)
- unlink(alternate_shallow_file);
- free((char *)alternate_shallow_file);
- }
-
if (args->cloning) {
/*
* remote is shallow, but this is a clone, there are
sha1 = tree_entry_extract(&desc, &name, &mode);
- if (is_null_sha1(sha1))
- has_null_sha1 = 1;
- if (strchr(name, '/'))
- has_full_path = 1;
- if (!*name)
- has_empty_name = 1;
- if (!strcmp(name, "."))
- has_dot = 1;
- if (!strcmp(name, ".."))
- has_dotdot = 1;
- if (!strcmp(name, ".git"))
- has_dotgit = 1;
+ has_null_sha1 |= is_null_sha1(sha1);
+ has_full_path |= !!strchr(name, '/');
+ has_empty_name |= !*name;
+ has_dot |= !strcmp(name, ".");
+ has_dotdot |= !strcmp(name, "..");
+ has_dotgit |= !strcmp(name, ".git");
has_zero_pad |= *(char *)desc.buffer == '0';
update_tree_entry(&desc);
return retval;
}
-static int fsck_ident(char **ident, struct object *obj, fsck_error error_func)
+static int fsck_ident(const char **ident, struct object *obj, fsck_error error_func)
{
+ char *end;
+
if (**ident == '<')
return error_func(obj, FSCK_ERROR, "invalid author/committer line - missing space before email");
*ident += strcspn(*ident, "<>\n");
(*ident)++;
if (**ident == '0' && (*ident)[1] != ' ')
return error_func(obj, FSCK_ERROR, "invalid author/committer line - zero-padded date");
- *ident += strspn(*ident, "0123456789");
- if (**ident != ' ')
+ if (date_overflows(strtoul(*ident, &end, 10)))
+ return error_func(obj, FSCK_ERROR, "invalid author/committer line - date causes integer overflow");
+ if (end == *ident || *end != ' ')
return error_func(obj, FSCK_ERROR, "invalid author/committer line - bad date");
- (*ident)++;
+ *ident = end + 1;
if ((**ident != '+' && **ident != '-') ||
!isdigit((*ident)[1]) ||
!isdigit((*ident)[2]) ||
static int fsck_commit(struct commit *commit, fsck_error error_func)
{
- char *buffer = commit->buffer;
+ const char *buffer = commit->buffer, *tmp;
unsigned char tree_sha1[20], sha1[20];
struct commit_graft *graft;
int parents = 0;
int err;
- if (commit->date == ULONG_MAX)
- return error_func(&commit->object, FSCK_ERROR, "invalid author/committer line");
-
- if (memcmp(buffer, "tree ", 5))
+ buffer = skip_prefix(buffer, "tree ");
+ if (!buffer)
return error_func(&commit->object, FSCK_ERROR, "invalid format - expected 'tree' line");
- if (get_sha1_hex(buffer+5, tree_sha1) || buffer[45] != '\n')
+ if (get_sha1_hex(buffer, tree_sha1) || buffer[40] != '\n')
return error_func(&commit->object, FSCK_ERROR, "invalid 'tree' line format - bad sha1");
- buffer += 46;
- while (!memcmp(buffer, "parent ", 7)) {
- if (get_sha1_hex(buffer+7, sha1) || buffer[47] != '\n')
+ buffer += 41;
+ while ((tmp = skip_prefix(buffer, "parent "))) {
+ buffer = tmp;
+ if (get_sha1_hex(buffer, sha1) || buffer[40] != '\n')
return error_func(&commit->object, FSCK_ERROR, "invalid 'parent' line format - bad sha1");
- buffer += 48;
+ buffer += 41;
parents++;
}
graft = lookup_commit_graft(commit->object.sha1);
if (p || parents)
return error_func(&commit->object, FSCK_ERROR, "parent objects missing");
}
- if (memcmp(buffer, "author ", 7))
+ buffer = skip_prefix(buffer, "author ");
+ if (!buffer)
return error_func(&commit->object, FSCK_ERROR, "invalid format - expected 'author' line");
- buffer += 7;
err = fsck_ident(&buffer, &commit->object, error_func);
if (err)
return err;
- if (memcmp(buffer, "committer ", strlen("committer ")))
+ buffer = skip_prefix(buffer, "committer ");
+ if (!buffer)
return error_func(&commit->object, FSCK_ERROR, "invalid format - expected 'committer' line");
- buffer += strlen("committer ");
err = fsck_ident(&buffer, &commit->object, error_func);
if (err)
return err;
Term::ReadKey->import;
$use_readkey = 1;
};
+ if (!$use_readkey) {
+ print STDERR "missing Term::ReadKey, disabling interactive.singlekey\n";
+ }
eval {
require Term::Cap;
my $termcap = Term::Cap->Tgetent;
SUBDIRECTORY_OK=Yes
OPTIONS_KEEPDASHDASH=
+OPTIONS_STUCKLONG=t
OPTIONS_SPEC="\
git am [options] [(<mbox>|<Maildir>)...]
git am [options] (--continue | --skip | --abort)
committer-date-is-author-date lie about committer date
ignore-date use current timestamp for author date
rerere-autoupdate update the index with reused conflict resolution if possible
+S,gpg-sign? GPG-sign commits
rebasing* (internal use for git-rebase)"
. git-sh-setup
}
fall_back_3way () {
- O_OBJECT=`cd "$GIT_OBJECT_DIRECTORY" && pwd`
+ O_OBJECT=$(cd "$GIT_OBJECT_DIRECTORY" && pwd)
rm -fr "$dotest"/patch-merge-*
mkdir "$dotest/patch-merge-tmp-dir"
then
clean_abort "$(gettext "Only one StGIT patch series can be applied at once")"
fi
- series_dir=`dirname "$1"`
+ series_dir=$(dirname "$1")
series_file="$1"
shift
{
this=0
for stgit in "$@"
do
- this=`expr "$this" + 1`
- msgnum=`printf "%0${prec}d" $this`
+ this=$(expr "$this" + 1)
+ msgnum=$(printf "%0${prec}d" $this)
# Perl version of StGIT parse_patch. The first nonempty line
# not starting with Author, From or Date is the
# subject, and the body starts with the next nonempty
committer_date_is_author_date=
ignore_date=
allow_rerere_autoupdate=
+gpg_sign_opt=
if test "$(git config --bool --get am.keepcr)" = true
then
abort=t ;;
--rebasing)
rebasing=t threeway=t ;;
- --resolvemsg)
- shift; resolvemsg=$1 ;;
- --whitespace|--directory|--exclude|--include)
- git_apply_opt="$git_apply_opt $(sq "$1=$2")"; shift ;;
- -C|-p)
- git_apply_opt="$git_apply_opt $(sq "$1$2")"; shift ;;
- --patch-format)
- shift ; patch_format="$1" ;;
+ --resolvemsg=*)
+ resolvemsg="${1#--resolvemsg=}" ;;
+ --whitespace=*|--directory=*|--exclude=*|--include=*)
+ git_apply_opt="$git_apply_opt $(sq "$1")" ;;
+ -C*|-p*)
+ git_apply_opt="$git_apply_opt $(sq "$1")" ;;
+ --patch-format=*)
+ patch_format="${1#--patch-format=}" ;;
--reject|--ignore-whitespace|--ignore-space-change)
git_apply_opt="$git_apply_opt $1" ;;
--committer-date-is-author-date)
keepcr=t ;;
--no-keep-cr)
keepcr=f ;;
+ --gpg-sign)
+ gpg_sign_opt=-S ;;
+ --gpg-sign=*)
+ gpg_sign_opt="-S${1#--gpg-sign=}" ;;
--)
shift; break ;;
*)
git_apply_opt=$(cat "$dotest/apply-opt")
if test "$(cat "$dotest/sign")" = t
then
- SIGNOFF=`git var GIT_COMMITTER_IDENT | sed -e '
+ SIGNOFF=$(git var GIT_COMMITTER_IDENT | sed -e '
s/>.*/>/
s/^/Signed-off-by: /'
- `
+ )
else
SIGNOFF=
fi
-last=`cat "$dotest/last"`
-this=`cat "$dotest/next"`
+last=$(cat "$dotest/last")
+this=$(cat "$dotest/next")
if test "$skip" = t
then
- this=`expr "$this" + 1`
+ this=$(expr "$this" + 1)
resume=
fi
while test "$this" -le "$last"
do
- msgnum=`printf "%0${prec}d" $this`
- next=`expr "$this" + 1`
+ msgnum=$(printf "%0${prec}d" $this)
+ next=$(expr "$this" + 1)
test -f "$dotest/$msgnum" || {
resume=
go_next
'')
if test '' != "$SIGNOFF"
then
- LAST_SIGNED_OFF_BY=`
+ LAST_SIGNED_OFF_BY=$(
sed -ne '/^Signed-off-by: /p' \
"$dotest/msg-clean" |
sed -ne '$p'
- `
- ADD_SIGNOFF=`
+ )
+ ADD_SIGNOFF=$(
test "$LAST_SIGNED_OFF_BY" = "$SIGNOFF" || {
test '' = "$LAST_SIGNED_OFF_BY" && echo
echo "$SIGNOFF"
- }`
+ })
else
ADD_SIGNOFF=
fi
GIT_COMMITTER_DATE="$GIT_AUTHOR_DATE"
export GIT_COMMITTER_DATE
fi &&
- git commit-tree $tree ${parent:+-p} $parent <"$dotest/final-commit"
+ git commit-tree ${parent:+-p} $parent ${gpg_sign_opt:+"$gpg_sign_opt"} $tree \
+ <"$dotest/final-commit"
) &&
git update-ref -m "$GIT_REFLOG_ACTION: $FIRSTLINE" HEAD $commit $parent ||
stop_here $this
}
case "$#" in
0) branch=$(cat "$GIT_DIR/BISECT_START") ;;
- 1) git rev-parse --quiet --verify "$1^{commit}" > /dev/null || {
+ 1) git rev-parse --quiet --verify "$1^{commit}" >/dev/null || {
invalid="$1"
die "$(eval_gettext "'\$invalid' is not a valid commit")"
}
fi
# We have to use a subshell because "bisect_state" can exit.
- ( bisect_state $state > "$GIT_DIR/BISECT_RUN" )
+ ( bisect_state $state >"$GIT_DIR/BISECT_RUN" )
res=$?
cat "$GIT_DIR/BISECT_RUN"
if sane_grep "first bad commit could be any of" "$GIT_DIR/BISECT_RUN" \
- > /dev/null
+ >/dev/null
then
gettextln "bisect run cannot continue any more" >&2
exit $res
exit $res
fi
- if sane_grep "is the first bad commit" "$GIT_DIR/BISECT_RUN" > /dev/null
+ if sane_grep "is the first bad commit" "$GIT_DIR/BISECT_RUN" >/dev/null
then
gettextln "bisect run success"
exit 0;
#include <sys/time.h>
#include <time.h>
#include <signal.h>
-#ifndef USE_WILDMATCH
-#include <fnmatch.h>
-#endif
#include <assert.h>
#include <regex.h>
#include <utime.h>
#include "compat/bswap.h"
-#ifdef USE_WILDMATCH
#include "wildmatch.h"
-#define FNM_PATHNAME WM_PATHNAME
-#define FNM_CASEFOLD WM_CASEFOLD
-#define FNM_NOMATCH WM_NOMATCH
-static inline int fnmatch(const char *pattern, const char *string, int flags)
-{
- return wildmatch(pattern, string, flags, NULL);
-}
-#endif
/* General helper functions */
extern void vreportf(const char *prefix, const char *err, va_list params);
extern void set_die_is_recursing_routine(int (*routine)(void));
extern int starts_with(const char *str, const char *prefix);
-extern int prefixcmp(const char *str, const char *prefix);
extern int ends_with(const char *str, const char *suffix);
-extern int suffixcmp(const char *str, const char *suffix);
static inline const char *skip_prefix(const char *str, const char *prefix)
{
- size_t len = strlen(prefix);
- return strncmp(str, prefix, len) ? NULL : str + len;
+ do {
+ if (!*prefix)
+ return str;
+ } while (*str++ == *prefix++);
+ return NULL;
}
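The rewrite above drops the strlen() pass: both strings are walked once, and the function returns the remainder of str once the prefix is fully consumed, or NULL on the first mismatch. A hedged usage sketch (the buffer contents and function name are made up; the fsck_commit() change later in this series uses the same pattern):

	static void parse_tree_line(const char *buffer)
	{
		const char *rest = skip_prefix(buffer, "tree ");

		if (!rest)
			die("expected a 'tree' line");
		/* rest now points at the hex object name */
	}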
#if defined(NO_MMAP) || defined(USE_WIN32_MMAP)
#endif
#ifdef SNPRINTF_RETURNS_BOGUS
+#ifdef snprintf
+#undef snprintf
+#endif
#define snprintf git_snprintf
extern int git_snprintf(char *str, size_t maxsize,
const char *format, ...);
+#ifdef vsnprintf
+#undef vsnprintf
+#endif
#define vsnprintf git_vsnprintf
extern int git_vsnprintf(char *str, size_t maxsize,
const char *format, va_list ap);
extern int xmkstemp(char *template);
extern int xmkstemp_mode(char *template, int mode);
extern int odb_mkstemp(char *template, size_t limit, const char *pattern);
-extern int odb_pack_keep(char *name, size_t namesz, unsigned char *sha1);
+extern int odb_pack_keep(char *name, size_t namesz, const unsigned char *sha1);
static inline size_t xsize_t(off_t len)
{
/* Get the passwd entry for the UID of the current process. */
struct passwd *xgetpwuid_self(void);
+#ifdef GMTIME_UNRELIABLE_ERRORS
+struct tm *git_gmtime(const time_t *);
+struct tm *git_gmtime_r(const time_t *, struct tm *);
+#define gmtime git_gmtime
+#define gmtime_r git_gmtime_r
+#endif
+
#endif
sub find_worktree
{
- my ($repo) = @_;
-
# Git->repository->wc_path() does not honor changes to the working
# tree location made by $ENV{GIT_WORK_TREE} or the 'core.worktree'
# config variable.
- my $worktree;
- my $env_worktree = $ENV{GIT_WORK_TREE};
- my $core_worktree = Git::config('core.worktree');
-
- if (defined($env_worktree) and (length($env_worktree) > 0)) {
- $worktree = $env_worktree;
- } elsif (defined($core_worktree) and (length($core_worktree) > 0)) {
- $worktree = $core_worktree;
- } else {
- $worktree = $repo->wc_path();
- }
-
- return $worktree;
+ return Git::command_oneline('rev-parse', '--show-toplevel');
}
sub print_tool_help
my $rc;
my $error = 0;
my $repo = Git->repository();
- my $workdir = find_worktree($repo);
+ my $workdir = find_worktree();
my ($a, $b, $tmpdir, @worktree) =
setup_dir_diff($repo, $workdir, $symlinks);
PERL='@@PERL@@'
OPTIONS_KEEPDASHDASH=
+OPTIONS_STUCKLONG=
OPTIONS_SPEC="\
git instaweb [options] (--start | --stop | --restart)
--
print "Rebasing the current branch onto %s" % upstream
oldHead = read_pipe("git rev-parse HEAD").strip()
system("git rebase %s" % upstream)
- system("git diff-tree --stat --summary -M %s HEAD" % oldHead)
+ system("git diff-tree --stat --summary -M %s HEAD --" % oldHead)
return True
class P4Clone(P4Sync):
#
# Fetch one or more remote refs and merge it/them into the current HEAD.
-USAGE='[-n | --no-stat] [--[no-]commit] [--[no-]squash] [--[no-]ff] [--[no-]rebase|--rebase=preserve] [-s strategy]... [<fetch-options>] <repo> <head>...'
+USAGE='[-n | --no-stat] [--[no-]commit] [--[no-]squash] [--[no-]ff|--ff-only] [--[no-]rebase|--rebase=preserve] [-s strategy]... [<fetch-options>] <repo> <head>...'
LONG_USAGE='Fetch one or more remote refs and integrate it/them with the current HEAD.'
SUBDIRECTORY_OK=Yes
OPTIONS_SPEC=
then
rebase=$(bool_or_string_config pull.rebase)
fi
+
+# Setup default fast-forward options via `pull.ff`
+pull_ff=$(git config pull.ff)
+case "$pull_ff" in
+false)
+ no_ff=--no-ff
+ ;;
+only)
+ ff_only=--ff-only
+ ;;
+esac
+
dry_run=
while :
do
--no-verify-signatures)
verify_signatures=--no-verify-signatures
;;
+ --gpg-sign|-S)
+ gpg_sign_args=-S
+ ;;
+ --gpg-sign=*)
+ gpg_sign_args=$(git rev-parse --sq-quote "-S${1#--gpg-sign=}")
+ ;;
+ -S*)
+ gpg_sign_args=$(git rev-parse --sq-quote "$1")
+ ;;
--d|--dr|--dry|--dry-|--dry-r|--dry-ru|--dry-run)
dry_run=--dry-run
;;
case "$rebase" in
true)
eval="git-rebase $diffstat $strategy_args $merge_args $rebase_args $verbosity"
+ eval="$eval $gpg_sign_args"
eval="$eval --onto $merge_head ${oldremoteref:-$merge_head}"
;;
*)
eval="git-merge $diffstat $no_commit $verify_signatures $edit $squash $no_ff $ff_only"
- eval="$eval $log_arg $strategy_args $merge_args $verbosity $progress"
+ eval="$eval $log_arg $strategy_args $merge_args $verbosity $progress"
+ eval="$eval $gpg_sign_args"
eval="$eval \"\$merge_name\" HEAD $merge_head"
;;
esac
#!/bin/sh
OPTIONS_KEEPDASHDASH=
+OPTIONS_STUCKLONG=
OPTIONS_SPEC="\
git quiltimport [options]
--
case "$action" in
continue)
- git am --resolved --resolvemsg="$resolvemsg" &&
+ git am --resolved --resolvemsg="$resolvemsg" \
+ ${gpg_sign_opt:+"$gpg_sign_opt"} &&
move_to_original_branch
return
;;
# empty commits and even if it didn't the format doesn't really lend
# itself well to recording empty patches. fortunately, cherry-pick
# makes this easy
- git cherry-pick --allow-empty "$revisions"
+ git cherry-pick ${gpg_sign_opt:+"$gpg_sign_opt"} --allow-empty "$revisions"
ret=$?
else
rm -f "$GIT_DIR/rebased-patches"
return $?
fi
- git am $git_am_opt --rebasing --resolvemsg="$resolvemsg" <"$GIT_DIR/rebased-patches"
+ git am $git_am_opt --rebasing --resolvemsg="$resolvemsg" \
+ ${gpg_sign_opt:+"$gpg_sign_opt"} <"$GIT_DIR/rebased-patches"
ret=$?
rm -f "$GIT_DIR/rebased-patches"
echo "$1" > "$state_dir"/stopped-sha
make_patch $1
git rev-parse --verify HEAD > "$amend"
+ gpg_sign_opt_quoted=${gpg_sign_opt:+$(git rev-parse --sq-quote "$gpg_sign_opt")}
warn "You can amend the commit now, with"
warn
- warn " git commit --amend"
+ warn " git commit --amend $gpg_sign_opt_quoted"
warn
warn "Once you are satisfied with your changes, run"
warn
test -d "$rewritten" &&
pick_one_preserving_merges "$@" && return
- output eval git cherry-pick "$strategy_args" $empty_args $ff "$@"
+ output eval git cherry-pick \
+ ${gpg_sign_opt:+$(git rev-parse --sq-quote "$gpg_sign_opt")} \
+ "$strategy_args" $empty_args $ff "$@"
}
pick_one_preserving_merges () {
new_parents=${new_parents# $first_parent}
merge_args="--no-log --no-ff"
if ! do_with_author output eval \
- 'git merge $merge_args $strategy_args -m "$msg_content" $new_parents'
+ 'git merge ${gpg_sign_opt:+"$gpg_sign_opt"} \
+ $merge_args $strategy_args -m "$msg_content" $new_parents'
then
printf "%s\n" "$msg_content" > "$GIT_DIR"/MERGE_MSG
die_with_patch $sha1 "Error redoing merge $sha1"
echo "$sha1 $(git rev-parse HEAD^0)" >> "$rewritten_list"
;;
*)
- output eval git cherry-pick "$strategy_args" "$@" ||
+ output eval git cherry-pick \
+ ${gpg_sign_opt:+$(git rev-parse --sq-quote "$gpg_sign_opt")} \
+ "$strategy_args" "$@" ||
die_with_patch $sha1 "Could not pick $sha1"
;;
esac
--no-post-rewrite -n -q -C $1 &&
pick_one -n $1 &&
git commit --allow-empty --allow-empty-message \
- --amend --no-post-rewrite -n -q -C $1 ||
+ --amend --no-post-rewrite -n -q -C $1 \
+ ${gpg_sign_opt:+"$gpg_sign_opt"} ||
die_with_patch $1 "Could not apply $1... $2"
else
pick_one $1 ||
mark_action_done
do_pick $sha1 "$rest"
- git commit --amend --no-post-rewrite || {
+ git commit --amend --no-post-rewrite ${gpg_sign_opt:+"$gpg_sign_opt"} || {
warn "Could not amend commit after successfully picking $sha1... $rest"
warn "This is most likely due to an empty commit message, or the pre-commit hook"
warn "failed. If the pre-commit hook failed, you may need to resolve the issue before"
squash|s|fixup|f)
# This is an intermediate commit; its message will only be
# used in case of trouble. So use the long version:
- do_with_author output git commit --amend --no-verify -F "$squash_msg" ||
+ do_with_author output git commit --amend --no-verify -F "$squash_msg" \
+ ${gpg_sign_opt:+"$gpg_sign_opt"} ||
die_failed_squash $sha1 "$rest"
;;
*)
# This is the final command of this squash/fixup group
if test -f "$fixup_msg"
then
- do_with_author git commit --amend --no-verify -F "$fixup_msg" ||
+ do_with_author git commit --amend --no-verify -F "$fixup_msg" \
+ ${gpg_sign_opt:+"$gpg_sign_opt"} ||
die_failed_squash $sha1 "$rest"
else
cp "$squash_msg" "$GIT_DIR"/SQUASH_MSG || exit
rm -f "$GIT_DIR"/MERGE_MSG
- do_with_author git commit --amend --no-verify -F "$GIT_DIR"/SQUASH_MSG -e ||
+ do_with_author git commit --amend --no-verify -F "$GIT_DIR"/SQUASH_MSG -e \
+ ${gpg_sign_opt:+"$gpg_sign_opt"} ||
die_failed_squash $sha1 "$rest"
fi
rm -f "$squash_msg" "$fixup_msg"
;;
esac
done
- echo "$sha1 $action $prefix $rest"
+ printf '%s %s %s %s\n' "$sha1" "$action" "$prefix" "$rest"
# if it's a single word, try to resolve to a full sha1 and
# emit a second copy. This allows us to match on both message
# and on sha1 prefix
else
if ! test -f "$author_script"
then
+ gpg_sign_opt_quoted=${gpg_sign_opt:+$(git rev-parse --sq-quote "$gpg_sign_opt")}
die "You have staged changes in your working tree. If these changes are meant to be
squashed into the previous commit, run:
- git commit --amend
+ git commit --amend $gpg_sign_opt_quoted
If they are meant to go into a new commit, run:
- git commit
+ git commit $gpg_sign_opt_quoted
In both cases, once you're done, continue with:
die "\
You have uncommitted changes in your working tree. Please, commit them
first and then run 'git rebase --continue' again."
- do_with_author git commit --amend --no-verify -F "$msg" -e ||
+ do_with_author git commit --amend --no-verify -F "$msg" -e \
+ ${gpg_sign_opt:+"$gpg_sign_opt"} ||
die "Could not commit staged changes."
else
- do_with_author git commit --no-verify -F "$msg" -e ||
+ do_with_author git commit --no-verify -F "$msg" -e \
+ ${gpg_sign_opt:+"$gpg_sign_opt"} ||
die "Could not commit staged changes."
fi
fi
cmt=`cat "$state_dir/current"`
if ! git diff-index --quiet --ignore-submodules HEAD --
then
- if ! git commit --no-verify -C "$cmt"
+ if ! git commit ${gpg_sign_opt:+"$gpg_sign_opt"} --no-verify -C "$cmt"
then
echo "Commit failed, please do not call \"git commit\""
echo "directly, but instead do one of the following: "
SUBDIRECTORY_OK=Yes
OPTIONS_KEEPDASHDASH=
+OPTIONS_STUCKLONG=t
OPTIONS_SPEC="\
git rebase [-i] [options] [--exec <cmd>] [--onto <newbase>] [<upstream>] [<branch>]
git rebase [-i] [options] [--exec <cmd>] [--onto <newbase>] --root [<branch>]
whitespace=! passed to 'git apply'
ignore-whitespace! passed to 'git apply'
C=! passed to 'git apply'
+S,gpg-sign? GPG-sign commits
Actions:
continue! continue
abort! abort and check out the original branch
autosquash=
keep_empty=
test "$(git config --bool rebase.autosquash)" = "true" && autosquash=t
+gpg_sign_opt=
read_basic_state () {
test -f "$state_dir/head-name" &&
strategy_opts="$(cat "$state_dir"/strategy_opts)"
test -f "$state_dir"/allow_rerere_autoupdate &&
allow_rerere_autoupdate="$(cat "$state_dir"/allow_rerere_autoupdate)"
+ test -f "$state_dir"/gpg_sign_opt &&
+ gpg_sign_opt="$(cat "$state_dir"/gpg_sign_opt)"
}
write_basic_state () {
"$state_dir"/strategy_opts
test -n "$allow_rerere_autoupdate" && echo "$allow_rerere_autoupdate" > \
"$state_dir"/allow_rerere_autoupdate
+ test -n "$gpg_sign_opt" && echo "$gpg_sign_opt" > "$state_dir"/gpg_sign_opt
}
output () {
test $total_argc -eq 2 || usage
action=${1##--}
;;
- --onto)
- test 2 -le "$#" || usage
- onto="$2"
- shift
+ --onto=*)
+ onto="${1#--onto=}"
;;
- -x)
- test 2 -le "$#" || usage
- cmd="${cmd}exec $2${LF}"
- shift
+ --exec=*)
+ cmd="${cmd}exec ${1#--exec=}${LF}"
;;
- -i)
+ --interactive)
interactive_rebase=explicit
;;
- -k)
+ --keep-empty)
keep_empty=yes
;;
- -p)
+ --preserve-merges)
preserve_merges=t
test -z "$interactive_rebase" && interactive_rebase=implied
;;
--no-fork-point)
fork_point=
;;
- -M|-m)
+ --merge)
do_merge=t
;;
- -X)
- shift
- strategy_opts="$strategy_opts $(git rev-parse --sq-quote "--$1")"
+ --strategy-option=*)
+ strategy_opts="$strategy_opts $(git rev-parse --sq-quote "--${1#--strategy-option=}")"
do_merge=t
test -z "$strategy" && strategy=recursive
;;
- -s)
- shift
- strategy="$1"
+ --strategy=*)
+ strategy="${1#--strategy=}"
do_merge=t
;;
- -n)
+ --no-stat)
diffstat=
;;
--stat)
--autostash)
autostash=true
;;
- -v)
+ --verbose)
verbose=t
diffstat=t
GIT_QUIET=
;;
- -q)
+ --quiet)
GIT_QUIET=t
git_am_opt="$git_am_opt -q"
verbose=
diffstat=
;;
- --whitespace)
- shift
- git_am_opt="$git_am_opt --whitespace=$1"
- case "$1" in
+ --whitespace=*)
+ git_am_opt="$git_am_opt --whitespace=${1#--whitespace=}"
+ case "${1#--whitespace=}" in
fix|strip)
force_rebase=t
;;
git_am_opt="$git_am_opt $1"
force_rebase=t
;;
- -C)
- shift
- git_am_opt="$git_am_opt -C$1"
+ -C*)
+ git_am_opt="$git_am_opt $1"
;;
--root)
rebase_root=t
;;
- -f|--no-ff)
+ --force-rebase|--no-ff)
force_rebase=t
;;
--rerere-autoupdate|--no-rerere-autoupdate)
allow_rerere_autoupdate="$1"
;;
+ --gpg-sign)
+ gpg_sign_opt=-S
+ ;;
+ --gpg-sign=*)
+ gpg_sign_opt="-S${1#--gpg-sign=}"
+ ;;
--)
shift
break
test "$fork_point" = auto && fork_point=t
;;
*) upstream_name="$1"
+ if test "$upstream_name" = "-"
+ then
+ upstream_name="@{-1}"
+ fi
shift
;;
esac
export GIT_DIR="$url/.git"
+force=
+
mkdir -p "$dir"
if test -z "$GIT_REMOTE_TESTGIT_NO_MARKS"
fi
test -n "$GIT_REMOTE_TESTGIT_SIGNED_TAGS" && echo "signed-tags"
test -n "$GIT_REMOTE_TESTGIT_NO_PRIVATE_UPDATE" && echo "no-private-update"
+ echo 'option'
echo
;;
list)
before=$(git for-each-ref --format=' %(refname) %(objectname) ')
git fast-import \
+ ${force:+--force} \
${testgitmarks:+"--import-marks=$testgitmarks"} \
${testgitmarks:+"--export-marks=$testgitmarks"} \
--quiet
echo
;;
+ option\ *)
+ read cmd opt val <<-EOF
+ $line
+ EOF
+ case $opt in
+ force)
+ test $val = "true" && force="true" || force=
+ echo "ok"
+ ;;
+ *)
+ echo "unsupported"
+ ;;
+ esac
+ ;;
'')
exit
;;
and includes the given URL in the generated summary.'
SUBDIRECTORY_OK='Yes'
OPTIONS_KEEPDASHDASH=
+OPTIONS_STUCKLONG=
OPTIONS_SPEC='git request-pull [options] start url [end]
--
p show patch text as well
shift
done
-base=$1 url=$2 head=${3-HEAD} status=0 branch_name=
-
-headref=$(git symbolic-ref -q "$head")
-if git show-ref -q --verify "$headref"
-then
- branch_name=${headref#refs/heads/}
- if test "z$branch_name" = "z$headref" ||
- ! git config "branch.$branch_name.description" >/dev/null
- then
- branch_name=
- fi
-fi
-
-tag_name=$(git describe --exact "$head^0" 2>/dev/null)
+base=$1 url=$2 status=0
test -n "$base" && test -n "$url" || usage
die "fatal: Not a valid revision: $base"
fi
+#
+# $3 must be a symbolic ref, a unique ref, or
+# a SHA object expression. It can also be of
+# the format 'local-name:remote-name'.
+#
+local=${3%:*}
+local=${local:-HEAD}
+remote=${3#*:}
+pretty_remote=${remote#refs/}
+pretty_remote=${pretty_remote#heads/}
+head=$(git symbolic-ref -q "$local")
+head=${head:-$(git show-ref --heads --tags "$local" | cut -d' ' -f2)}
+head=${head:-$(git rev-parse --quiet --verify "$local")}
+
+# None of the above? Bad.
+test -z "$head" && die "fatal: Not a valid revision: $local"
+
+# This also verifies that the resulting head is unique:
+# "git show-ref" could have shown multiple matching refs..
headrev=$(git rev-parse --verify --quiet "$head"^0)
-if test -z "$headrev"
+test -z "$headrev" && die "fatal: Ambiguous revision: $local"
+
+# Was it a branch with a description?
+branch_name=${head#refs/heads/}
+if test "z$branch_name" = "z$head" ||
+ ! git config "branch.$branch_name.description" >/dev/null
then
- die "fatal: Not a valid revision: $head"
+ branch_name=
fi
merge_base=$(git merge-base $baserev $headrev) ||
die "fatal: No commits in common between $base and $head"
-# $head is the token given from the command line, and $tag_name, if
-# exists, is the tag we are going to show the commit information for.
-# If that tag exists at the remote and it points at the commit, use it.
-# Otherwise, if a branch with the same name as $head exists at the remote
-# and their values match, use that instead.
+# $head is the refname from the command line.
+# If a ref with the same name as $head exists at the remote
+# and their values match, use that.
#
# Otherwise find a random ref that matches $headrev.
find_matching_ref='
- sub abbr {
- my $ref = shift;
- if ($ref =~ s|^refs/heads/|| || $ref =~ s|^refs/tags/|tags/|) {
- return $ref;
- } else {
- return $ref;
- }
- }
+ my ($head,$headrev) = (@ARGV);
+ my ($found);
- my ($tagged, $branch, $found);
while (<STDIN>) {
- my ($sha1, $ref, $deref) = /^(\S+)\s+(\S+?)(\^\{\})?$/;
- next unless ($sha1 eq $ARGV[1]);
- $found = abbr($ref);
- if ($deref && $ref eq "tags/$ARGV[2]") {
- $tagged = $found;
- last;
+ chomp;
+ my ($sha1, $ref, $deref) = /^(\S+)\s+([^^]+)(\S*)$/;
+ my ($pattern);
+ next unless ($sha1 eq $headrev);
+
+ $pattern="/$head\$";
+ if ($ref eq $head) {
+ $found = $ref;
+ }
+ if ($ref =~ /$pattern/) {
+ $found = $ref;
}
- if ($ref =~ m|/\Q$ARGV[0]\E$|) {
- $exact = $found;
+ if ($sha1 eq $head) {
+ $found = $sha1;
}
}
- if ($tagged) {
- print "$tagged\n";
- } elsif ($exact) {
- print "$exact\n";
- } elsif ($found) {
+ if ($found) {
print "$found\n";
}
'
-ref=$(git ls-remote "$url" | @@PERL@@ -e "$find_matching_ref" "$head" "$headrev" "$tag_name")
+ref=$(git ls-remote "$url" | @@PERL@@ -e "$find_matching_ref" "${remote:-HEAD}" "$headrev")
+
+if test -z "$ref"
+then
+ echo "warn: No match for commit $headrev found at $url" >&2
+ echo "warn: Are you sure you pushed '${remote:-HEAD}' there?" >&2
+ status=1
+fi
url=$(git ls-remote --get-url "$url")
are available in the git repository at:
' $merge_base &&
-echo " $url${ref+ $ref}" &&
+echo " $url $pretty_remote" &&
git show -s --format='
for you to fetch changes up to %H:
----------------------------------------------------------------' $headrev &&
-if test -n "$branch_name"
+if test $(git cat-file -t "$head") = tag
then
- echo "(from the branch description for $branch_name local branch)"
- echo
- git config "branch.$branch_name.description"
-fi &&
-
-if test -n "$tag_name"
-then
- if test -z "$ref" || test "$ref" != "tags/$tag_name"
- then
- echo >&2 "warn: You locally have $tag_name but it does not (yet)"
- echo >&2 "warn: appear to be at $url"
- echo >&2 "warn: Do you want to push it there, perhaps?"
- fi
- git cat-file tag "$tag_name" |
+ git cat-file tag "$head" |
sed -n -e '1,/^$/d' -e '/^-----BEGIN PGP /q' -e p
echo
+ echo "----------------------------------------------------------------"
fi &&
-if test -n "$branch_name" || test -n "$tag_name"
+if test -n "$branch_name"
then
+ echo "(from the branch description for $branch_name local branch)"
+ echo
+ git config "branch.$branch_name.description"
echo "----------------------------------------------------------------"
fi &&
git shortlog ^$baserev $headrev &&
git diff -M --stat --summary $patch $merge_base..$headrev || status=1
-if test -z "$ref"
-then
- echo "warn: No branch of $url is at:" >&2
- git show -s --format='warn: %h: %s' $headrev >&2
- echo "warn: Are you sure you pushed '$head' there?" >&2
- status=1
-fi
exit $status
parseopt_extra=
[ -n "$OPTIONS_KEEPDASHDASH" ] &&
parseopt_extra="--keep-dashdash"
+ [ -n "$OPTIONS_STUCKLONG" ] &&
+ parseopt_extra="$parseopt_extra --stuck-long"
eval "$(
echo "$OPTIONS_SPEC" |
pop_stash() {
assert_stash_ref "$@"
- apply_stash "$@" &&
- drop_stash "$@"
+ if apply_stash "$@"
+ then
+ drop_stash "$@"
+ else
+ status=$?
+ say "The stash is kept in case you need it again."
+ exit $status
+ fi
}
drop_stash () {
or: $dashless [--quiet] status [--cached] [--recursive] [--] [<path>...]
or: $dashless [--quiet] init [--] [<path>...]
or: $dashless [--quiet] deinit [-f|--force] [--] <path>...
- or: $dashless [--quiet] update [--init] [--remote] [-N|--no-fetch] [-f|--force] [--rebase] [--reference <repository>] [--merge] [--recursive] [--] [<path>...]
+ or: $dashless [--quiet] update [--init] [--remote] [-N|--no-fetch] [-f|--force] [--checkout|--merge|--rebase] [--reference <repository>] [--recursive] [--] [<path>...]
or: $dashless [--quiet] summary [--cached|--files] [--summary-limit <n>] [commit] [--] [<path>...]
or: $dashless [--quiet] foreach [--recursive] <command>
or: $dashless [--quiet] sync [--recursive] [--] [<path>...]"
#
# Clone a submodule
#
+# $1 = submodule path
+# $2 = submodule name
+# $3 = URL to clone
+# $4 = reference repository to reuse (empty for independent)
+# $5 = depth argument for shallow clones (empty for deep)
+#
# Prior to calling, cmd_update checks that a possibly existing
# path is not a git repository.
# Likewise, cmd_add checks that path does not exist at all,
update_module=$update
else
update_module=$(git config submodule."$name".update)
- case "$update_module" in
- '')
- ;; # Unset update mode
- checkout | rebase | merge | none)
- ;; # Known update modes
- !*)
- ;; # Custom update command
- *)
- die "$(eval_gettext "Invalid update mode '$update_module' for submodule '$name'")"
- ;;
- esac
+ if test -z "$update_module"
+ then
+ update_module="checkout"
+ fi
fi
displaypath=$(relative_path "$prefix$sm_path")
case ";$cloned_modules;" in
*";$name;"*)
# then there is no local change to integrate
- update_module= ;;
+ update_module=checkout ;;
esac
must_die_on_failure=
case "$update_module" in
+ checkout)
+ command="git checkout $subforce -q"
+ die_msg="$(eval_gettext "Unable to checkout '\$sha1' in submodule path '\$displaypath'")"
+ say_msg="$(eval_gettext "Submodule path '\$displaypath': checked out '\$sha1'")"
+ ;;
rebase)
command="git rebase"
die_msg="$(eval_gettext "Unable to rebase '\$sha1' in submodule path '\$displaypath'")"
must_die_on_failure=yes
;;
*)
- command="git checkout $subforce -q"
- die_msg="$(eval_gettext "Unable to checkout '\$sha1' in submodule path '\$displaypath'")"
- say_msg="$(eval_gettext "Submodule path '\$displaypath': checked out '\$sha1'")"
- ;;
+ die "$(eval_gettext "Invalid update mode '$update_module' for submodule '$name'")"
esac
if (clear_local_git_env; cd "$sm_path" && $command "$sha1")
if (envchanged)
*envchanged = 1;
} else if (!strcmp(cmd, "--no-replace-objects")) {
- read_replace_refs = 0;
+ check_replace_refs = 0;
setenv(NO_REPLACE_OBJECTS_ENVIRONMENT, "1", 1);
if (envchanged)
*envchanged = 1;
git_print_page_path($file_name, "blob", $hash_base);
print "<div class=\"page_body\">\n";
if ($mimetype =~ m!^image/!) {
- print qq!<img type="!.esc_attr($mimetype).qq!"!;
+ print qq!<img class="blob" type="!.esc_attr($mimetype).qq!"!;
if ($file_name) {
print qq! alt="!.esc_attr($file_name).qq!" title="!.esc_attr($file_name).qq!"!;
}
vertical-align: middle;
}
+img.blob {
+ max-height: 100%;
+ max-width: 100%;
+}
+
a.list img.avatar {
border-style: none;
}
*/
if (opt->count && count) {
char buf[32];
- output_color(opt, gs->name, strlen(gs->name), opt->color_filename);
- output_sep(opt, ':');
+ if (opt->pathname) {
+ output_color(opt, gs->name, strlen(gs->name),
+ opt->color_filename);
+ output_sep(opt, ':');
+ }
snprintf(buf, sizeof(buf), "%u\n", count);
opt->output(opt, buf, strlen(buf));
return 1;
break;
case GREP_SOURCE_SHA1:
gs->identifier = xmalloc(20);
- memcpy(gs->identifier, identifier, 20);
+ hashcpy(gs->identifier, identifier);
break;
case GREP_SOURCE_BUF:
gs->identifier = NULL;
+++ /dev/null
-/*
- * Some generic hashing helpers.
- */
-#include "cache.h"
-#include "hash.h"
-
-/*
- * Look up a hash entry in the hash table. Return the pointer to
- * the existing entry, or the empty slot if none existed. The caller
- * can then look at the (*ptr) to see whether it existed or not.
- */
-static struct hash_table_entry *lookup_hash_entry(unsigned int hash, const struct hash_table *table)
-{
- unsigned int size = table->size, nr = hash % size;
- struct hash_table_entry *array = table->array;
-
- while (array[nr].ptr) {
- if (array[nr].hash == hash)
- break;
- nr++;
- if (nr >= size)
- nr = 0;
- }
- return array + nr;
-}
-
-
-/*
- * Insert a new hash entry pointer into the table.
- *
- * If that hash entry already existed, return the pointer to
- * the existing entry (and the caller can create a list of the
- * pointers or do anything else). If it didn't exist, return
- * NULL (and the caller knows the pointer has been inserted).
- */
-static void **insert_hash_entry(unsigned int hash, void *ptr, struct hash_table *table)
-{
- struct hash_table_entry *entry = lookup_hash_entry(hash, table);
-
- if (!entry->ptr) {
- entry->ptr = ptr;
- entry->hash = hash;
- table->nr++;
- return NULL;
- }
- return &entry->ptr;
-}
-
-static void grow_hash_table(struct hash_table *table)
-{
- unsigned int i;
- unsigned int old_size = table->size, new_size;
- struct hash_table_entry *old_array = table->array, *new_array;
-
- new_size = alloc_nr(old_size);
- new_array = xcalloc(sizeof(struct hash_table_entry), new_size);
- table->size = new_size;
- table->array = new_array;
- table->nr = 0;
- for (i = 0; i < old_size; i++) {
- unsigned int hash = old_array[i].hash;
- void *ptr = old_array[i].ptr;
- if (ptr)
- insert_hash_entry(hash, ptr, table);
- }
- free(old_array);
-}
-
-void *lookup_hash(unsigned int hash, const struct hash_table *table)
-{
- if (!table->array)
- return NULL;
- return lookup_hash_entry(hash, table)->ptr;
-}
-
-void **insert_hash(unsigned int hash, void *ptr, struct hash_table *table)
-{
- unsigned int nr = table->nr;
- if (nr >= table->size/2)
- grow_hash_table(table);
- return insert_hash_entry(hash, ptr, table);
-}
-
-int for_each_hash(const struct hash_table *table, int (*fn)(void *, void *), void *data)
-{
- int sum = 0;
- unsigned int i;
- unsigned int size = table->size;
- struct hash_table_entry *array = table->array;
-
- for (i = 0; i < size; i++) {
- void *ptr = array->ptr;
- array++;
- if (ptr) {
- int val = fn(ptr, data);
- if (val < 0)
- return val;
- sum += val;
- }
- }
- return sum;
-}
-
-void free_hash(struct hash_table *table)
-{
- free(table->array);
- table->array = NULL;
- table->size = 0;
- table->nr = 0;
-}
+++ /dev/null
-#ifndef HASH_H
-#define HASH_H
-
-/*
- * These are some simple generic hash table helper functions.
- * Not necessarily suitable for all users, but good for things
- * where you want to just keep track of a list of things, and
- * have a good hash to use on them.
- *
- * It keeps the hash table at roughly 50-75% free, so the memory
- * cost of the hash table itself is roughly
- *
- * 3 * 2*sizeof(void *) * nr_of_objects
- *
- * bytes.
- *
- * FIXME: on 64-bit architectures, we waste memory. It would be
- * good to have just 32-bit pointers, requiring a special allocator
- * for hashed entries or something.
- */
-struct hash_table_entry {
- unsigned int hash;
- void *ptr;
-};
-
-struct hash_table {
- unsigned int size, nr;
- struct hash_table_entry *array;
-};
-
-extern void *lookup_hash(unsigned int hash, const struct hash_table *table);
-extern void **insert_hash(unsigned int hash, void *ptr, struct hash_table *table);
-extern int for_each_hash(const struct hash_table *table, int (*fn)(void *, void *), void *data);
-extern void free_hash(struct hash_table *table);
-
-static inline void init_hash(struct hash_table *table)
-{
- table->size = 0;
- table->nr = 0;
- table->array = NULL;
-}
-
-static inline void preallocate_hash(struct hash_table *table, unsigned int elts)
-{
- assert(table->size == 0 && table->nr == 0 && table->array == NULL);
- table->size = elts * 2;
- table->array = xcalloc(sizeof(struct hash_table_entry), table->size);
-}
-
-#endif
--- /dev/null
+/*
+ * Generic implementation of hash-based key value mappings.
+ */
+#include "cache.h"
+#include "hashmap.h"
+
+#define FNV32_BASE ((unsigned int) 0x811c9dc5)
+#define FNV32_PRIME ((unsigned int) 0x01000193)
+
+unsigned int strhash(const char *str)
+{
+ unsigned int c, hash = FNV32_BASE;
+ while ((c = (unsigned char) *str++))
+ hash = (hash * FNV32_PRIME) ^ c;
+ return hash;
+}
+
+unsigned int strihash(const char *str)
+{
+ unsigned int c, hash = FNV32_BASE;
+ while ((c = (unsigned char) *str++)) {
+ if (c >= 'a' && c <= 'z')
+ c -= 'a' - 'A';
+ hash = (hash * FNV32_PRIME) ^ c;
+ }
+ return hash;
+}
+
+unsigned int memhash(const void *buf, size_t len)
+{
+ unsigned int hash = FNV32_BASE;
+ unsigned char *ucbuf = (unsigned char *) buf;
+ while (len--) {
+ unsigned int c = *ucbuf++;
+ hash = (hash * FNV32_PRIME) ^ c;
+ }
+ return hash;
+}
+
+unsigned int memihash(const void *buf, size_t len)
+{
+ unsigned int hash = FNV32_BASE;
+ unsigned char *ucbuf = (unsigned char *) buf;
+ while (len--) {
+ unsigned int c = *ucbuf++;
+ if (c >= 'a' && c <= 'z')
+ c -= 'a' - 'A';
+ hash = (hash * FNV32_PRIME) ^ c;
+ }
+ return hash;
+}
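
As an aside (illustration only, not part of the patch): the case-insensitive variants fold ASCII 'a'-'z' to upper case before mixing, so case variants of a key hash identically by construction, while the case-sensitive variants keep the distinction:

	assert(strihash("Makefile") == strihash("MAKEFILE"));
	assert(memihash("abc", 3) == memihash("ABC", 3));
	/* strhash("abc") and strhash("ABC") mix different bytes and
	 * will in practice differ. */
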
+
+#define HASHMAP_INITIAL_SIZE 64
+/* grow / shrink by 2^2 */
+#define HASHMAP_RESIZE_BITS 2
+/* load factor in percent */
+#define HASHMAP_LOAD_FACTOR 80
+
+static void alloc_table(struct hashmap *map, unsigned int size)
+{
+ map->tablesize = size;
+ map->table = xcalloc(size, sizeof(struct hashmap_entry *));
+
+ /* calculate resize thresholds for new size */
+ map->grow_at = (unsigned int) ((uint64_t) size * HASHMAP_LOAD_FACTOR / 100);
+ if (size <= HASHMAP_INITIAL_SIZE)
+ map->shrink_at = 0;
+ else
+ /*
+ * The shrink-threshold must be slightly smaller than
+ * (grow-threshold / resize-factor) to prevent erratic resizing,
+ * thus we divide by (resize-factor + 1).
+ */
+ map->shrink_at = map->grow_at / ((1 << HASHMAP_RESIZE_BITS) + 1);
+}
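
To make the threshold comment above concrete, here is the arithmetic for the two smallest table sizes, using only the constants defined in this file (illustration only, not part of the patch):

	size =  64: grow_at = 64 * 80 / 100  = 51, shrink_at = 0
	size = 256: grow_at = 256 * 80 / 100 = 204,
	            shrink_at = 204 / ((1 << 2) + 1) = 40

Shrinking from 256 to 64 buckets therefore only happens below 40 entries, comfortably under the smaller table's grow threshold of 51; dividing by the resize factor alone (204 / 4 = 51) would leave no hysteresis between the two thresholds.
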
+
+static inline int entry_equals(const struct hashmap *map,
+ const struct hashmap_entry *e1, const struct hashmap_entry *e2,
+ const void *keydata)
+{
+ return (e1 == e2) || (e1->hash == e2->hash && !map->cmpfn(e1, e2, keydata));
+}
+
+static inline unsigned int bucket(const struct hashmap *map,
+ const struct hashmap_entry *key)
+{
+ return key->hash & (map->tablesize - 1);
+}
+
+static void rehash(struct hashmap *map, unsigned int newsize)
+{
+ unsigned int i, oldsize = map->tablesize;
+ struct hashmap_entry **oldtable = map->table;
+
+ alloc_table(map, newsize);
+ for (i = 0; i < oldsize; i++) {
+ struct hashmap_entry *e = oldtable[i];
+ while (e) {
+ struct hashmap_entry *next = e->next;
+ unsigned int b = bucket(map, e);
+ e->next = map->table[b];
+ map->table[b] = e;
+ e = next;
+ }
+ }
+ free(oldtable);
+}
+
+static inline struct hashmap_entry **find_entry_ptr(const struct hashmap *map,
+ const struct hashmap_entry *key, const void *keydata)
+{
+ struct hashmap_entry **e = &map->table[bucket(map, key)];
+ while (*e && !entry_equals(map, *e, key, keydata))
+ e = &(*e)->next;
+ return e;
+}
+
+static int always_equal(const void *unused1, const void *unused2, const void *unused3)
+{
+ return 0;
+}
+
+void hashmap_init(struct hashmap *map, hashmap_cmp_fn equals_function,
+ size_t initial_size)
+{
+ unsigned int size = HASHMAP_INITIAL_SIZE;
+ map->size = 0;
+ map->cmpfn = equals_function ? equals_function : always_equal;
+
+ /* calculate initial table size and allocate the table */
+ initial_size = (unsigned int) ((uint64_t) initial_size * 100
+ / HASHMAP_LOAD_FACTOR);
+ while (initial_size > size)
+ size <<= HASHMAP_RESIZE_BITS;
+ alloc_table(map, size);
+}
+
+void hashmap_free(struct hashmap *map, int free_entries)
+{
+ if (!map || !map->table)
+ return;
+ if (free_entries) {
+ struct hashmap_iter iter;
+ struct hashmap_entry *e;
+ hashmap_iter_init(map, &iter);
+ while ((e = hashmap_iter_next(&iter)))
+ free(e);
+ }
+ free(map->table);
+ memset(map, 0, sizeof(*map));
+}
+
+void *hashmap_get(const struct hashmap *map, const void *key, const void *keydata)
+{
+ return *find_entry_ptr(map, key, keydata);
+}
+
+void *hashmap_get_next(const struct hashmap *map, const void *entry)
+{
+ struct hashmap_entry *e = ((struct hashmap_entry *) entry)->next;
+ for (; e; e = e->next)
+ if (entry_equals(map, entry, e, NULL))
+ return e;
+ return NULL;
+}
+
+void hashmap_add(struct hashmap *map, void *entry)
+{
+ unsigned int b = bucket(map, entry);
+
+ /* add entry */
+ ((struct hashmap_entry *) entry)->next = map->table[b];
+ map->table[b] = entry;
+
+ /* fix size and rehash if appropriate */
+ map->size++;
+ if (map->size > map->grow_at)
+ rehash(map, map->tablesize << HASHMAP_RESIZE_BITS);
+}
+
+void *hashmap_remove(struct hashmap *map, const void *key, const void *keydata)
+{
+ struct hashmap_entry *old;
+ struct hashmap_entry **e = find_entry_ptr(map, key, keydata);
+ if (!*e)
+ return NULL;
+
+ /* remove existing entry */
+ old = *e;
+ *e = old->next;
+ old->next = NULL;
+
+ /* fix size and rehash if appropriate */
+ map->size--;
+ if (map->size < map->shrink_at)
+ rehash(map, map->tablesize >> HASHMAP_RESIZE_BITS);
+ return old;
+}
+
+void *hashmap_put(struct hashmap *map, void *entry)
+{
+ struct hashmap_entry *old = hashmap_remove(map, entry, NULL);
+ hashmap_add(map, entry);
+ return old;
+}
+
+void hashmap_iter_init(struct hashmap *map, struct hashmap_iter *iter)
+{
+ iter->map = map;
+ iter->tablepos = 0;
+ iter->next = NULL;
+}
+
+void *hashmap_iter_next(struct hashmap_iter *iter)
+{
+ struct hashmap_entry *current = iter->next;
+ for (;;) {
+ if (current) {
+ iter->next = current->next;
+ return current;
+ }
+
+ if (iter->tablepos >= iter->map->tablesize)
+ return NULL;
+
+ current = iter->map->table[iter->tablepos++];
+ }
+}
--- /dev/null
+#ifndef HASHMAP_H
+#define HASHMAP_H
+
+/*
+ * Generic implementation of hash-based key-value mappings.
+ * See Documentation/technical/api-hashmap.txt.
+ */
+
+/* FNV-1 functions */
+
+extern unsigned int strhash(const char *buf);
+extern unsigned int strihash(const char *buf);
+extern unsigned int memhash(const void *buf, size_t len);
+extern unsigned int memihash(const void *buf, size_t len);
+
+/* data structures */
+
+struct hashmap_entry {
+ struct hashmap_entry *next;
+ unsigned int hash;
+};
+
+typedef int (*hashmap_cmp_fn)(const void *entry, const void *entry_or_key,
+ const void *keydata);
+
+struct hashmap {
+ struct hashmap_entry **table;
+ hashmap_cmp_fn cmpfn;
+ unsigned int size, tablesize, grow_at, shrink_at;
+};
+
+struct hashmap_iter {
+ struct hashmap *map;
+ struct hashmap_entry *next;
+ unsigned int tablepos;
+};
+
+/* hashmap functions */
+
+extern void hashmap_init(struct hashmap *map, hashmap_cmp_fn equals_function,
+ size_t initial_size);
+extern void hashmap_free(struct hashmap *map, int free_entries);
+
+/* hashmap_entry functions */
+
+static inline void hashmap_entry_init(void *entry, unsigned int hash)
+{
+ struct hashmap_entry *e = entry;
+ e->hash = hash;
+ e->next = NULL;
+}
+extern void *hashmap_get(const struct hashmap *map, const void *key,
+ const void *keydata);
+extern void *hashmap_get_next(const struct hashmap *map, const void *entry);
+extern void hashmap_add(struct hashmap *map, void *entry);
+extern void *hashmap_put(struct hashmap *map, void *entry);
+extern void *hashmap_remove(struct hashmap *map, const void *key,
+ const void *keydata);
+
+/* hashmap_iter functions */
+
+extern void hashmap_iter_init(struct hashmap *map, struct hashmap_iter *iter);
+extern void *hashmap_iter_next(struct hashmap_iter *iter);
+static inline void *hashmap_iter_first(struct hashmap *map,
+ struct hashmap_iter *iter)
+{
+ hashmap_iter_init(map, iter);
+ return hashmap_iter_next(iter);
+}
+
+#endif
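
For readers unfamiliar with the API, a minimal usage sketch (illustration only; struct pool_entry, pool_entry_cmp and the literal key are made up for this example). Entries embed struct hashmap_entry as their first member and supply a comparison callback at init time:

	struct pool_entry {
		struct hashmap_entry ent; /* must come first */
		const char *key;
	};

	static int pool_entry_cmp(const struct pool_entry *e1,
				  const struct pool_entry *e2, const char *keydata)
	{
		/* keydata is non-NULL when looking up by plain key */
		return strcmp(e1->key, keydata ? keydata : e2->key);
	}

	static void example(void)
	{
		struct hashmap map;
		struct hashmap_entry lookup;
		struct pool_entry *e = xmalloc(sizeof(*e));

		hashmap_init(&map, (hashmap_cmp_fn) pool_entry_cmp, 0);

		e->key = "hello";
		hashmap_entry_init(e, strhash(e->key));
		hashmap_add(&map, e);

		hashmap_entry_init(&lookup, strhash("hello"));
		if (hashmap_get(&map, &lookup, "hello"))
			; /* found */

		hashmap_free(&map, 1); /* 1: also free the entries themselves */
	}
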
cmds->cnt = cj;
}
-static void pretty_print_string_list(struct cmdnames *cmds,
- unsigned int colopts)
+static void pretty_print_cmdnames(struct cmdnames *cmds, unsigned int colopts)
{
struct string_list list = STRING_LIST_INIT_NODUP;
struct column_options copts;
const char *exec_path = git_exec_path();
printf_ln(_("available git commands in '%s'"), exec_path);
putchar('\n');
- pretty_print_string_list(main_cmds, colopts);
+ pretty_print_cmdnames(main_cmds, colopts);
putchar('\n');
}
if (other_cmds->cnt) {
printf_ln(_("git commands available from elsewhere on your $PATH"));
putchar('\n');
- pretty_print_string_list(other_cmds, colopts);
+ pretty_print_cmdnames(other_cmds, colopts);
putchar('\n');
}
}
#define LOCK_TIME 600
#define LOCK_REFRESH 30
-/* bits #0-15 in revision.h */
-
+/* Remember to update object flag allocation in object.h */
#define LOCAL (1u<<16)
#define REMOTE (1u<<17)
#define FETCHING (1u<<18)
}
}
+int run_one_slot(struct active_request_slot *slot,
+ struct slot_results *results)
+{
+ slot->results = results;
+ if (!start_active_slot(slot)) {
+ snprintf(curl_errorstr, sizeof(curl_errorstr),
+ "failed to start HTTP request");
+ return HTTP_START_FAILED;
+ }
+
+ run_active_slot(slot);
+ return handle_curl_result(results);
+}
+
static CURLcode curlinfo_strbuf(CURL *curl, CURLINFO info, struct strbuf *buf)
{
char *ptr;
int ret;
slot = get_active_slot();
- slot->results = &results;
curl_easy_setopt(slot->curl, CURLOPT_HTTPGET, 1);
if (result == NULL) {
curl_easy_setopt(slot->curl, CURLOPT_HTTPHEADER, headers);
curl_easy_setopt(slot->curl, CURLOPT_ENCODING, "gzip");
- if (start_active_slot(slot)) {
- run_active_slot(slot);
- ret = handle_curl_result(&results);
- } else {
- snprintf(curl_errorstr, sizeof(curl_errorstr),
- "failed to start HTTP request");
- ret = HTTP_START_FAILED;
- }
+ ret = run_one_slot(slot, &results);
if (options && options->content_type)
curlinfo_strbuf(slot->curl, CURLINFO_CONTENT_TYPE,
unsigned char *sha1)
{
char *hex = sha1_to_hex(sha1);
- char *filename;
+ const char *filename;
char prevfile[PATH_MAX];
int prevlocal;
char prev_buf[PREV_BUF_SIZE];
extern void finish_all_active_slots(void);
extern int handle_curl_result(struct slot_results *results);
+/*
+ * This will run one slot to completion in a blocking manner, similar to how
+ * curl_easy_perform would work (but we don't want to use that, because
+ * we do not want to intermingle calls to curl_multi and curl_easy).
+ */
+int run_one_slot(struct active_request_slot *slot,
+ struct slot_results *results);
+
#ifdef USE_CURL_MULTI
extern void fill_active_slots(void);
extern void add_fill_function(void *data, int (*fill)(void *));
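
A hedged sketch of how a caller is expected to use the new helper, mirroring the http.c conversion above (the curl options shown are placeholders for whatever the request needs):

	struct active_request_slot *slot = get_active_slot();
	struct slot_results results;

	curl_easy_setopt(slot->curl, CURLOPT_HTTPGET, 1);
	/* ... further curl options for this request ... */

	/* starts the slot, blocks until it completes, maps the outcome */
	if (run_one_slot(slot, &results) != HTTP_OK)
		error("request failed: %s", curl_errorstr);
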
--- /dev/null
+/* The MIT License
+
+ Copyright (c) 2008, 2009, 2011 by Attractive Chaos <attractor@live.co.uk>
+
+ Permission is hereby granted, free of charge, to any person obtaining
+ a copy of this software and associated documentation files (the
+ "Software"), to deal in the Software without restriction, including
+ without limitation the rights to use, copy, modify, merge, publish,
+ distribute, sublicense, and/or sell copies of the Software, and to
+ permit persons to whom the Software is furnished to do so, subject to
+ the following conditions:
+
+ The above copyright notice and this permission notice shall be
+ included in all copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
+*/
+
+#ifndef __AC_KHASH_H
+#define __AC_KHASH_H
+
+#define AC_VERSION_KHASH_H "0.2.8"
+
+typedef uint32_t khint32_t;
+typedef uint64_t khint64_t;
+
+typedef khint32_t khint_t;
+typedef khint_t khiter_t;
+
+#define __ac_isempty(flag, i) ((flag[i>>4]>>((i&0xfU)<<1))&2)
+#define __ac_isdel(flag, i) ((flag[i>>4]>>((i&0xfU)<<1))&1)
+#define __ac_iseither(flag, i) ((flag[i>>4]>>((i&0xfU)<<1))&3)
+#define __ac_set_isdel_false(flag, i) (flag[i>>4]&=~(1ul<<((i&0xfU)<<1)))
+#define __ac_set_isempty_false(flag, i) (flag[i>>4]&=~(2ul<<((i&0xfU)<<1)))
+#define __ac_set_isboth_false(flag, i) (flag[i>>4]&=~(3ul<<((i&0xfU)<<1)))
+#define __ac_set_isdel_true(flag, i) (flag[i>>4]|=1ul<<((i&0xfU)<<1))
+
+#define __ac_fsize(m) ((m) < 16? 1 : (m)>>4)
+
+#define kroundup32(x) (--(x), (x)|=(x)>>1, (x)|=(x)>>2, (x)|=(x)>>4, (x)|=(x)>>8, (x)|=(x)>>16, ++(x))
+
+static inline khint_t __ac_X31_hash_string(const char *s)
+{
+ khint_t h = (khint_t)*s;
+ if (h) for (++s ; *s; ++s) h = (h << 5) - h + (khint_t)*s;
+ return h;
+}
+
+#define kh_str_hash_func(key) __ac_X31_hash_string(key)
+#define kh_str_hash_equal(a, b) (strcmp(a, b) == 0)
+
+static const double __ac_HASH_UPPER = 0.77;
+
+#define __KHASH_TYPE(name, khkey_t, khval_t) \
+ typedef struct { \
+ khint_t n_buckets, size, n_occupied, upper_bound; \
+ khint32_t *flags; \
+ khkey_t *keys; \
+ khval_t *vals; \
+ } kh_##name##_t;
+
+#define __KHASH_PROTOTYPES(name, khkey_t, khval_t) \
+ extern kh_##name##_t *kh_init_##name(void); \
+ extern void kh_destroy_##name(kh_##name##_t *h); \
+ extern void kh_clear_##name(kh_##name##_t *h); \
+ extern khint_t kh_get_##name(const kh_##name##_t *h, khkey_t key); \
+ extern int kh_resize_##name(kh_##name##_t *h, khint_t new_n_buckets); \
+ extern khint_t kh_put_##name(kh_##name##_t *h, khkey_t key, int *ret); \
+ extern void kh_del_##name(kh_##name##_t *h, khint_t x);
+
+#define __KHASH_IMPL(name, SCOPE, khkey_t, khval_t, kh_is_map, __hash_func, __hash_equal) \
+ SCOPE kh_##name##_t *kh_init_##name(void) { \
+ return (kh_##name##_t*)xcalloc(1, sizeof(kh_##name##_t)); \
+ } \
+ SCOPE void kh_destroy_##name(kh_##name##_t *h) \
+ { \
+ if (h) { \
+ free((void *)h->keys); free(h->flags); \
+ free((void *)h->vals); \
+ free(h); \
+ } \
+ } \
+ SCOPE void kh_clear_##name(kh_##name##_t *h) \
+ { \
+ if (h && h->flags) { \
+ memset(h->flags, 0xaa, __ac_fsize(h->n_buckets) * sizeof(khint32_t)); \
+ h->size = h->n_occupied = 0; \
+ } \
+ } \
+ SCOPE khint_t kh_get_##name(const kh_##name##_t *h, khkey_t key) \
+ { \
+ if (h->n_buckets) { \
+ khint_t k, i, last, mask, step = 0; \
+ mask = h->n_buckets - 1; \
+ k = __hash_func(key); i = k & mask; \
+ last = i; \
+ while (!__ac_isempty(h->flags, i) && (__ac_isdel(h->flags, i) || !__hash_equal(h->keys[i], key))) { \
+ i = (i + (++step)) & mask; \
+ if (i == last) return h->n_buckets; \
+ } \
+ return __ac_iseither(h->flags, i)? h->n_buckets : i; \
+ } else return 0; \
+ } \
+ SCOPE int kh_resize_##name(kh_##name##_t *h, khint_t new_n_buckets) \
+ { /* This function uses 0.25*n_buckets bytes of working space instead of [sizeof(key_t+val_t)+.25]*n_buckets. */ \
+ khint32_t *new_flags = NULL; \
+ khint_t j = 1; \
+ { \
+ kroundup32(new_n_buckets); \
+ if (new_n_buckets < 4) new_n_buckets = 4; \
+ if (h->size >= (khint_t)(new_n_buckets * __ac_HASH_UPPER + 0.5)) j = 0; /* requested size is too small */ \
+ else { /* hash table size to be changed (shrink or expand); rehash */ \
+ new_flags = (khint32_t*)xmalloc(__ac_fsize(new_n_buckets) * sizeof(khint32_t)); \
+ if (!new_flags) return -1; \
+ memset(new_flags, 0xaa, __ac_fsize(new_n_buckets) * sizeof(khint32_t)); \
+ if (h->n_buckets < new_n_buckets) { /* expand */ \
+ khkey_t *new_keys = (khkey_t*)xrealloc((void *)h->keys, new_n_buckets * sizeof(khkey_t)); \
+ if (!new_keys) return -1; \
+ h->keys = new_keys; \
+ if (kh_is_map) { \
+ khval_t *new_vals = (khval_t*)xrealloc((void *)h->vals, new_n_buckets * sizeof(khval_t)); \
+ if (!new_vals) return -1; \
+ h->vals = new_vals; \
+ } \
+ } /* otherwise shrink */ \
+ } \
+ } \
+ if (j) { /* rehashing is needed */ \
+ for (j = 0; j != h->n_buckets; ++j) { \
+ if (__ac_iseither(h->flags, j) == 0) { \
+ khkey_t key = h->keys[j]; \
+ khval_t val; \
+ khint_t new_mask; \
+ new_mask = new_n_buckets - 1; \
+ if (kh_is_map) val = h->vals[j]; \
+ __ac_set_isdel_true(h->flags, j); \
+ while (1) { /* kick-out process; sort of like in Cuckoo hashing */ \
+ khint_t k, i, step = 0; \
+ k = __hash_func(key); \
+ i = k & new_mask; \
+ while (!__ac_isempty(new_flags, i)) i = (i + (++step)) & new_mask; \
+ __ac_set_isempty_false(new_flags, i); \
+ if (i < h->n_buckets && __ac_iseither(h->flags, i) == 0) { /* kick out the existing element */ \
+ { khkey_t tmp = h->keys[i]; h->keys[i] = key; key = tmp; } \
+ if (kh_is_map) { khval_t tmp = h->vals[i]; h->vals[i] = val; val = tmp; } \
+ __ac_set_isdel_true(h->flags, i); /* mark it as deleted in the old hash table */ \
+ } else { /* write the element and jump out of the loop */ \
+ h->keys[i] = key; \
+ if (kh_is_map) h->vals[i] = val; \
+ break; \
+ } \
+ } \
+ } \
+ } \
+ if (h->n_buckets > new_n_buckets) { /* shrink the hash table */ \
+ h->keys = (khkey_t*)xrealloc((void *)h->keys, new_n_buckets * sizeof(khkey_t)); \
+ if (kh_is_map) h->vals = (khval_t*)xrealloc((void *)h->vals, new_n_buckets * sizeof(khval_t)); \
+ } \
+ free(h->flags); /* free the working space */ \
+ h->flags = new_flags; \
+ h->n_buckets = new_n_buckets; \
+ h->n_occupied = h->size; \
+ h->upper_bound = (khint_t)(h->n_buckets * __ac_HASH_UPPER + 0.5); \
+ } \
+ return 0; \
+ } \
+ SCOPE khint_t kh_put_##name(kh_##name##_t *h, khkey_t key, int *ret) \
+ { \
+ khint_t x; \
+ if (h->n_occupied >= h->upper_bound) { /* update the hash table */ \
+ if (h->n_buckets > (h->size<<1)) { \
+ if (kh_resize_##name(h, h->n_buckets - 1) < 0) { /* clear "deleted" elements */ \
+ *ret = -1; return h->n_buckets; \
+ } \
+ } else if (kh_resize_##name(h, h->n_buckets + 1) < 0) { /* expand the hash table */ \
+ *ret = -1; return h->n_buckets; \
+ } \
+ } /* TODO: to implement automatically shrinking; resize() already support shrinking */ \
+ { \
+ khint_t k, i, site, last, mask = h->n_buckets - 1, step = 0; \
+ x = site = h->n_buckets; k = __hash_func(key); i = k & mask; \
+ if (__ac_isempty(h->flags, i)) x = i; /* for speed up */ \
+ else { \
+ last = i; \
+ while (!__ac_isempty(h->flags, i) && (__ac_isdel(h->flags, i) || !__hash_equal(h->keys[i], key))) { \
+ if (__ac_isdel(h->flags, i)) site = i; \
+ i = (i + (++step)) & mask; \
+ if (i == last) { x = site; break; } \
+ } \
+ if (x == h->n_buckets) { \
+ if (__ac_isempty(h->flags, i) && site != h->n_buckets) x = site; \
+ else x = i; \
+ } \
+ } \
+ } \
+ if (__ac_isempty(h->flags, x)) { /* not present at all */ \
+ h->keys[x] = key; \
+ __ac_set_isboth_false(h->flags, x); \
+ ++h->size; ++h->n_occupied; \
+ *ret = 1; \
+ } else if (__ac_isdel(h->flags, x)) { /* deleted */ \
+ h->keys[x] = key; \
+ __ac_set_isboth_false(h->flags, x); \
+ ++h->size; \
+ *ret = 2; \
+ } else *ret = 0; /* Don't touch h->keys[x] if present and not deleted */ \
+ return x; \
+ } \
+ SCOPE void kh_del_##name(kh_##name##_t *h, khint_t x) \
+ { \
+ if (x != h->n_buckets && !__ac_iseither(h->flags, x)) { \
+ __ac_set_isdel_true(h->flags, x); \
+ --h->size; \
+ } \
+ }
+
+#define KHASH_DECLARE(name, khkey_t, khval_t) \
+ __KHASH_TYPE(name, khkey_t, khval_t) \
+ __KHASH_PROTOTYPES(name, khkey_t, khval_t)
+
+#define KHASH_INIT2(name, SCOPE, khkey_t, khval_t, kh_is_map, __hash_func, __hash_equal) \
+ __KHASH_TYPE(name, khkey_t, khval_t) \
+ __KHASH_IMPL(name, SCOPE, khkey_t, khval_t, kh_is_map, __hash_func, __hash_equal)
+
+#define KHASH_INIT(name, khkey_t, khval_t, kh_is_map, __hash_func, __hash_equal) \
+ KHASH_INIT2(name, static inline, khkey_t, khval_t, kh_is_map, __hash_func, __hash_equal)
+
+/* Other convenient macros... */
+
+/*! @function
+ @abstract Test whether a bucket contains data.
+ @param h Pointer to the hash table [khash_t(name)*]
+ @param x Iterator to the bucket [khint_t]
+ @return 1 if containing data; 0 otherwise [int]
+ */
+#define kh_exist(h, x) (!__ac_iseither((h)->flags, (x)))
+
+/*! @function
+ @abstract Get key given an iterator
+ @param h Pointer to the hash table [khash_t(name)*]
+ @param x Iterator to the bucket [khint_t]
+ @return Key [type of keys]
+ */
+#define kh_key(h, x) ((h)->keys[x])
+
+/*! @function
+ @abstract Get value given an iterator
+ @param h Pointer to the hash table [khash_t(name)*]
+ @param x Iterator to the bucket [khint_t]
+ @return Value [type of values]
+ @discussion For hash sets, calling this results in segfault.
+ */
+#define kh_val(h, x) ((h)->vals[x])
+
+/*! @function
+ @abstract Alias of kh_val()
+ */
+#define kh_value(h, x) ((h)->vals[x])
+
+/*! @function
+ @abstract Get the start iterator
+ @param h Pointer to the hash table [khash_t(name)*]
+ @return The start iterator [khint_t]
+ */
+#define kh_begin(h) (khint_t)(0)
+
+/*! @function
+ @abstract Get the end iterator
+ @param h Pointer to the hash table [khash_t(name)*]
+ @return The end iterator [khint_t]
+ */
+#define kh_end(h) ((h)->n_buckets)
+
+/*! @function
+ @abstract Get the number of elements in the hash table
+ @param h Pointer to the hash table [khash_t(name)*]
+ @return Number of elements in the hash table [khint_t]
+ */
+#define kh_size(h) ((h)->size)
+
+/*! @function
+ @abstract Get the number of buckets in the hash table
+ @param h Pointer to the hash table [khash_t(name)*]
+ @return Number of buckets in the hash table [khint_t]
+ */
+#define kh_n_buckets(h) ((h)->n_buckets)
+
+/*! @function
+ @abstract Iterate over the entries in the hash table
+ @param h Pointer to the hash table [khash_t(name)*]
+ @param kvar Variable to which key will be assigned
+ @param vvar Variable to which value will be assigned
+ @param code Block of code to execute
+ */
+#define kh_foreach(h, kvar, vvar, code) { khint_t __i; \
+ for (__i = kh_begin(h); __i != kh_end(h); ++__i) { \
+ if (!kh_exist(h,__i)) continue; \
+ (kvar) = kh_key(h,__i); \
+ (vvar) = kh_val(h,__i); \
+ code; \
+ } }
+
+/*! @function
+ @abstract Iterate over the values in the hash table
+ @param h Pointer to the hash table [khash_t(name)*]
+ @param vvar Variable to which value will be assigned
+ @param code Block of code to execute
+ */
+#define kh_foreach_value(h, vvar, code) { khint_t __i; \
+ for (__i = kh_begin(h); __i != kh_end(h); ++__i) { \
+ if (!kh_exist(h,__i)) continue; \
+ (vvar) = kh_val(h,__i); \
+ code; \
+ } }
+
+static inline khint_t __kh_oid_hash(const unsigned char *oid)
+{
+ khint_t hash;
+ memcpy(&hash, oid, sizeof(hash));
+ return hash;
+}
+
+#define __kh_oid_cmp(a, b) (hashcmp(a, b) == 0)
+
+KHASH_INIT(sha1, const unsigned char *, void *, 1, __kh_oid_hash, __kh_oid_cmp)
+typedef kh_sha1_t khash_sha1;
+
+KHASH_INIT(sha1_pos, const unsigned char *, int, 1, __kh_oid_hash, __kh_oid_cmp)
+typedef kh_sha1_pos_t khash_sha1_pos;
+
+#endif /* __AC_KHASH_H */
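
A small usage sketch of the sha1 map instantiated at the end of this header (illustration only; sha1, value and use() are placeholders). Note that khash stores the key pointer itself, so the key must outlive the map:

	khash_sha1 *map = kh_init_sha1();
	khiter_t pos;
	int ret;

	pos = kh_put_sha1(map, sha1, &ret);
	if (ret == 0)
		die("duplicate sha1 in map");	/* key was already present */
	kh_value(map, pos) = value;

	pos = kh_get_sha1(map, sha1);
	if (pos < kh_end(map))
		use(kh_value(map, pos));	/* found */

	kh_destroy_sha1(map);
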
}
}
-static void load_tree_desc(struct tree_desc *desc, void **tree,
- const unsigned char *sha1)
-{
- unsigned long size;
- *tree = read_object_with_reference(sha1, tree_type, &size, NULL);
- if (!*tree)
- die("Unable to read tree (%s)", sha1_to_hex(sha1));
- init_tree_desc(desc, *tree, size);
-}
-
static int count_parents(struct commit *commit)
{
struct commit_list *parents = commit->parents;
struct diff_queue_struct *queue,
struct commit *commit, struct commit *parent)
{
- void *tree1 = NULL, *tree2 = NULL;
- struct tree_desc desc1, desc2;
-
assert(commit);
- load_tree_desc(&desc2, &tree2, commit->tree->object.sha1);
- if (parent)
- load_tree_desc(&desc1, &tree1, parent->tree->object.sha1);
- else
- init_tree_desc(&desc1, "", 0);
DIFF_QUEUE_CLEAR(&diff_queued_diff);
- diff_tree(&desc1, &desc2, "", opt);
+ diff_tree_sha1(parent ? parent->tree->object.sha1 : NULL,
+ commit->tree->object.sha1, "", opt);
if (opt->detect_rename) {
filter_diffs_for_paths(range, 1);
if (diff_might_be_rename())
filter_diffs_for_paths(range, 0);
}
move_diff_queue(queue, &diff_queued_diff);
-
- if (tree1)
- free(tree1);
- if (tree2)
- free(tree2);
}
static char *get_nth_line(long line, unsigned long *ends, void *data)
if (starts_with(refname, "refs/replace/")) {
unsigned char original_sha1[20];
- if (!read_replace_refs)
+ if (!check_replace_refs)
return 0;
if (get_sha1_hex(refname + 13, original_sha1)) {
warning("invalid replace ref %s", refname);
if (opt->line_level_traverse)
return line_log_print(opt, commit);
+ if (opt->track_linear && !opt->linear && !opt->reverse_output_stage)
+ printf("\n%s\n", opt->break_bar);
shown = log_tree_diff(opt, commit, &log);
if (!shown && opt->loginfo && opt->always_show_header) {
log.parent = NULL;
show_log(opt);
shown = 1;
}
+ if (opt->track_linear && !opt->linear && opt->reverse_output_stage)
+ printf("\n%s\n", opt->break_bar);
opt->loginfo = NULL;
maybe_flush_or_die(stdout, "stdout");
return shown;
enum object_type type;
int status;
- subpath = strchr(prefix, '/');
- if (!subpath)
- toplen = strlen(prefix);
- else {
- toplen = subpath - prefix;
+ subpath = strchrnul(prefix, '/');
+ toplen = subpath - prefix;
+ if (*subpath)
subpath++;
- }
buf = read_sha1_file(hash1, &type, &sz);
if (!buf)
if (!rewrite_here)
die("entry %.*s not found in tree %s",
toplen, prefix, sha1_to_hex(hash1));
- if (subpath) {
+ if (*subpath) {
status = splice_tree(rewrite_here, subpath, hash2, subtree);
if (status)
return status;
const char *path, int stage, int refresh, int options)
{
struct cache_entry *ce;
- ce = make_cache_entry(mode, sha1 ? sha1 : null_sha1, path, stage, refresh);
+ ce = make_cache_entry(mode, sha1 ? sha1 : null_sha1, path, stage,
+ (refresh ? (CE_MATCH_REFRESH |
+ CE_MATCH_IGNORE_MISSING) : 0 ));
if (!ce)
return error(_("addinfo_cache failed for path '%s'"), path);
return add_cache_entry(ce, options);
#define NO_THE_INDEX_COMPATIBILITY_MACROS
#include "cache.h"
-/*
- * This removes bit 5 if bit 6 is set.
- *
- * That will make US-ASCII characters hash to their upper-case
- * equivalent. We could easily do this one whole word at a time,
- * but that's for future worries.
- */
-static inline unsigned char icase_hash(unsigned char c)
-{
- return c & ~((c & 0x40) >> 1);
-}
-
-static unsigned int hash_name(const char *name, int namelen)
-{
- unsigned int hash = 0x123;
-
- while (namelen--) {
- unsigned char c = *name++;
- c = icase_hash(c);
- hash = hash*101 + c;
- }
- return hash;
-}
-
struct dir_entry {
- struct dir_entry *next;
+ struct hashmap_entry ent;
struct dir_entry *parent;
struct cache_entry *ce;
int nr;
unsigned int namelen;
};
+static int dir_entry_cmp(const struct dir_entry *e1,
+ const struct dir_entry *e2, const char *name)
+{
+ return e1->namelen != e2->namelen || strncasecmp(e1->ce->name,
+ name ? name : e2->ce->name, e1->namelen);
+}
+
static struct dir_entry *find_dir_entry(struct index_state *istate,
const char *name, unsigned int namelen)
{
- unsigned int hash = hash_name(name, namelen);
- struct dir_entry *dir;
-
- for (dir = lookup_hash(hash, &istate->dir_hash); dir; dir = dir->next)
- if (dir->namelen == namelen &&
- !strncasecmp(dir->ce->name, name, namelen))
- return dir;
- return NULL;
+ struct dir_entry key;
+ hashmap_entry_init(&key, memihash(name, namelen));
+ key.namelen = namelen;
+ return hashmap_get(&istate->dir_hash, &key, name);
}
static struct dir_entry *hash_dir_entry(struct index_state *istate,
dir = find_dir_entry(istate, ce->name, namelen);
if (!dir) {
/* not found, create it and add to hash table */
- void **pdir;
- unsigned int hash = hash_name(ce->name, namelen);
-
dir = xcalloc(1, sizeof(struct dir_entry));
+ hashmap_entry_init(dir, memihash(ce->name, namelen));
dir->namelen = namelen;
dir->ce = ce;
-
- pdir = insert_hash(hash, dir, &istate->dir_hash);
- if (pdir) {
- dir->next = *pdir;
- *pdir = dir;
- }
+ hashmap_add(&istate->dir_hash, dir);
/* recursively add missing parent directories */
dir->parent = hash_dir_entry(istate, ce, namelen);
static void remove_dir_entry(struct index_state *istate, struct cache_entry *ce)
{
/*
- * Release reference to the directory entry (and parents if 0).
- *
- * Note: we do not remove / free the entry because there's no
- * hash.[ch]::remove_hash and dir->next may point to other entries
- * that are still valid, so we must not free the memory.
+ * Release reference to the directory entry. If 0, remove and continue
+ * with parent directory.
*/
struct dir_entry *dir = hash_dir_entry(istate, ce, ce_namelen(ce));
- while (dir && dir->nr && !(--dir->nr))
- dir = dir->parent;
+ while (dir && !(--dir->nr)) {
+ struct dir_entry *parent = dir->parent;
+ hashmap_remove(&istate->dir_hash, dir, NULL);
+ free(dir);
+ dir = parent;
+ }
}
static void hash_index_entry(struct index_state *istate, struct cache_entry *ce)
{
- void **pos;
- unsigned int hash;
-
if (ce->ce_flags & CE_HASHED)
return;
ce->ce_flags |= CE_HASHED;
- ce->next = NULL;
- hash = hash_name(ce->name, ce_namelen(ce));
- pos = insert_hash(hash, ce, &istate->name_hash);
- if (pos) {
- ce->next = *pos;
- *pos = ce;
- }
+ hashmap_entry_init(ce, memihash(ce->name, ce_namelen(ce)));
+ hashmap_add(&istate->name_hash, ce);
- if (ignore_case && !(ce->ce_flags & CE_UNHASHED))
+ if (ignore_case)
add_dir_entry(istate, ce);
}
+static int cache_entry_cmp(const struct cache_entry *ce1,
+ const struct cache_entry *ce2, const void *remove)
+{
+ /*
+ * For remove_name_hash, find the exact entry (pointer equality); for
+ * index_file_exists, find all entries with matching hash code and
+ * decide whether the entry matches in same_name.
+ */
+ return remove ? !(ce1 == ce2) : 0;
+}
+
static void lazy_init_name_hash(struct index_state *istate)
{
int nr;
if (istate->name_hash_initialized)
return;
- if (istate->cache_nr)
- preallocate_hash(&istate->name_hash, istate->cache_nr);
+ hashmap_init(&istate->name_hash, (hashmap_cmp_fn) cache_entry_cmp,
+ istate->cache_nr);
+ hashmap_init(&istate->dir_hash, (hashmap_cmp_fn) dir_entry_cmp, 0);
for (nr = 0; nr < istate->cache_nr; nr++)
hash_index_entry(istate, istate->cache[nr]);
istate->name_hash_initialized = 1;
void add_name_hash(struct index_state *istate, struct cache_entry *ce)
{
- /* if already hashed, add reference to directory entries */
- if (ignore_case && (ce->ce_flags & CE_STATE_MASK) == CE_STATE_MASK)
- add_dir_entry(istate, ce);
-
- ce->ce_flags &= ~CE_UNHASHED;
if (istate->name_hash_initialized)
hash_index_entry(istate, ce);
}
-/*
- * We don't actually *remove* it, we can just mark it invalid so that
- * we won't find it in lookups.
- *
- * Not only would we have to search the lists (simple enough), but
- * we'd also have to rehash other hash buckets in case this makes the
- * hash bucket empty (common). So it's much better to just mark
- * it.
- */
void remove_name_hash(struct index_state *istate, struct cache_entry *ce)
{
- /* if already hashed, release reference to directory entries */
- if (ignore_case && (ce->ce_flags & CE_STATE_MASK) == CE_HASHED)
- remove_dir_entry(istate, ce);
+ if (!istate->name_hash_initialized || !(ce->ce_flags & CE_HASHED))
+ return;
+ ce->ce_flags &= ~CE_HASHED;
+ hashmap_remove(&istate->name_hash, ce, ce);
- ce->ce_flags |= CE_UNHASHED;
+ if (ignore_case)
+ remove_dir_entry(istate, ce);
}
static int slow_same_name(const char *name1, int len1, const char *name2, int len2)
struct cache_entry *index_file_exists(struct index_state *istate, const char *name, int namelen, int icase)
{
- unsigned int hash = hash_name(name, namelen);
struct cache_entry *ce;
+ struct hashmap_entry key;
lazy_init_name_hash(istate);
- ce = lookup_hash(hash, &istate->name_hash);
+ hashmap_entry_init(&key, memihash(name, namelen));
+ ce = hashmap_get(&istate->name_hash, &key, NULL);
while (ce) {
- if (!(ce->ce_flags & CE_UNHASHED)) {
- if (same_name(ce, name, namelen, icase))
- return ce;
- }
- ce = ce->next;
+ if (same_name(ce, name, namelen, icase))
+ return ce;
+ ce = hashmap_get_next(&istate->name_hash, ce);
}
return NULL;
}
-struct cache_entry *index_name_exists(struct index_state *istate, const char *name, int namelen, int icase)
-{
- if (namelen > 0 && name[namelen - 1] == '/')
- return index_dir_exists(istate, name, namelen - 1);
- return index_file_exists(istate, name, namelen, icase);
-}
-
-static int free_dir_entry(void *entry, void *unused)
-{
- struct dir_entry *dir = entry;
- while (dir) {
- struct dir_entry *next = dir->next;
- free(dir);
- dir = next;
- }
- return 0;
-}
-
void free_name_hash(struct index_state *istate)
{
if (!istate->name_hash_initialized)
return;
istate->name_hash_initialized = 0;
- if (ignore_case)
- /* free directory entries */
- for_each_hash(&istate->dir_hash, free_dir_entry, NULL);
- free_hash(&istate->name_hash);
- free_hash(&istate->dir_hash);
+ hashmap_free(&istate->name_hash, 0);
+ hashmap_free(&istate->dir_hash, 1);
}
return 0;
} else if (!c->mode_from_env && !strcmp(k, "notes.rewritemode")) {
if (!v)
- config_error_nonbool(k);
+ return config_error_nonbool(k);
c->combine = parse_combine_notes_fn(v);
if (!c->combine) {
error(_("Bad notes.rewriteMode value: '%s'"), v);
die("invalid object type \"%s\"", str);
}
+/*
+ * Return a numerical hash value between 0 and n-1 for the object with
+ * the specified sha1. n must be a power of 2. Please note that the
+ * return value is *not* consistent across computer architectures.
+ */
static unsigned int hash_obj(const unsigned char *sha1, unsigned int n)
{
unsigned int hash;
+
+ /*
+ * Since the sha1 is essentially random, we just take the
+ * required number of bits directly from the first
+ * sizeof(unsigned int) bytes of sha1. First we have to copy
+ * the bytes into a properly aligned integer. If we cared
+ * about getting consistent results across architectures, we
+ * would have to call ntohl() here, too.
+ */
memcpy(&hash, sha1, sizeof(unsigned int));
- /* Assumes power-of-2 hash sizes in grow_object_hash */
return hash & (n - 1);
}
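
As a worked example: with n = 32 the mask is n - 1 = 31 (0x1f), so a first word of 0x7f3a9c01 selects bucket 0x01; only the lowest log2(n) bits of the first sizeof(unsigned int) bytes of the sha1 ever matter.
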
+/*
+ * Insert obj into the hash table hash, which has length size (which
+ * must be a power of 2). On collisions, simply overflow to the next
+ * empty bucket.
+ */
static void insert_obj_hash(struct object *obj, struct object **hash, unsigned int size)
{
unsigned int j = hash_obj(obj->sha1, size);
hash[j] = obj;
}
+/*
+ * Look up the record for the given sha1 in the hash map stored in
+ * obj_hash. Return NULL if it was not found.
+ */
struct object *lookup_object(const unsigned char *sha1)
{
unsigned int i, first;
return obj;
}
+/*
+ * Increase the size of the hash map stored in obj_hash to the next
+ * power of 2 (but at least 32). Copy the existing values to the new
+ * hash map.
+ */
static void grow_object_hash(void)
{
int i;
#define OBJECT_ARRAY_INIT { 0, 0, NULL }
#define TYPE_BITS 3
+/*
+ * object flag allocation:
+ * revision.h: 0---------10 26
+ * fetch-pack.c: 0---4
+ * walker.c: 0-2
+ * upload-pack.c: 11----------------19
+ * builtin/blame.c: 12-13
+ * bisect.c: 16
+ * bundle.c: 16
+ * http-push.c: 16-----19
+ * commit.c: 16-----19
+ * sha1_name.c: 20
+ */
#define FLAG_BITS 27
/*
extern const char *typename(unsigned int type);
extern int type_from_string(const char *str);
+/*
+ * Return the current number of buckets in the object hashmap.
+ */
extern unsigned int get_max_object_index(void);
+
+/*
+ * Return the object from the specified bucket in the object hashmap.
+ */
extern struct object *get_indexed_object(unsigned int);
/*
--- /dev/null
+#include "cache.h"
+#include "commit.h"
+#include "tag.h"
+#include "diff.h"
+#include "revision.h"
+#include "list-objects.h"
+#include "progress.h"
+#include "pack-revindex.h"
+#include "pack.h"
+#include "pack-bitmap.h"
+#include "sha1-lookup.h"
+#include "pack-objects.h"
+
+struct bitmapped_commit {
+ struct commit *commit;
+ struct ewah_bitmap *bitmap;
+ struct ewah_bitmap *write_as;
+ int flags;
+ int xor_offset;
+ uint32_t commit_pos;
+};
+
+struct bitmap_writer {
+ struct ewah_bitmap *commits;
+ struct ewah_bitmap *trees;
+ struct ewah_bitmap *blobs;
+ struct ewah_bitmap *tags;
+
+ khash_sha1 *bitmaps;
+ khash_sha1 *reused;
+ struct packing_data *to_pack;
+
+ struct bitmapped_commit *selected;
+ unsigned int selected_nr, selected_alloc;
+
+ struct progress *progress;
+ int show_progress;
+ unsigned char pack_checksum[20];
+};
+
+static struct bitmap_writer writer;
+
+void bitmap_writer_show_progress(int show)
+{
+ writer.show_progress = show;
+}
+
+/**
+ * Build the initial type index for the packfile
+ */
+void bitmap_writer_build_type_index(struct pack_idx_entry **index,
+ uint32_t index_nr)
+{
+ uint32_t i;
+
+ writer.commits = ewah_new();
+ writer.trees = ewah_new();
+ writer.blobs = ewah_new();
+ writer.tags = ewah_new();
+
+ for (i = 0; i < index_nr; ++i) {
+ struct object_entry *entry = (struct object_entry *)index[i];
+ enum object_type real_type;
+
+ entry->in_pack_pos = i;
+
+ switch (entry->type) {
+ case OBJ_COMMIT:
+ case OBJ_TREE:
+ case OBJ_BLOB:
+ case OBJ_TAG:
+ real_type = entry->type;
+ break;
+
+ default:
+ real_type = sha1_object_info(entry->idx.sha1, NULL);
+ break;
+ }
+
+ switch (real_type) {
+ case OBJ_COMMIT:
+ ewah_set(writer.commits, i);
+ break;
+
+ case OBJ_TREE:
+ ewah_set(writer.trees, i);
+ break;
+
+ case OBJ_BLOB:
+ ewah_set(writer.blobs, i);
+ break;
+
+ case OBJ_TAG:
+ ewah_set(writer.tags, i);
+ break;
+
+ default:
+ die("Missing type information for %s (%d/%d)",
+ sha1_to_hex(entry->idx.sha1), real_type, entry->type);
+ }
+ }
+}
+
+/**
+ * Compute the actual bitmaps
+ */
+static struct object **seen_objects;
+static unsigned int seen_objects_nr, seen_objects_alloc;
+
+static inline void push_bitmapped_commit(struct commit *commit, struct ewah_bitmap *reused)
+{
+ if (writer.selected_nr >= writer.selected_alloc) {
+ writer.selected_alloc = (writer.selected_alloc + 32) * 2;
+ writer.selected = xrealloc(writer.selected,
+ writer.selected_alloc * sizeof(struct bitmapped_commit));
+ }
+
+ writer.selected[writer.selected_nr].commit = commit;
+ writer.selected[writer.selected_nr].bitmap = reused;
+ writer.selected[writer.selected_nr].flags = 0;
+
+ writer.selected_nr++;
+}
+
+static inline void mark_as_seen(struct object *object)
+{
+ ALLOC_GROW(seen_objects, seen_objects_nr + 1, seen_objects_alloc);
+ seen_objects[seen_objects_nr++] = object;
+}
+
+static inline void reset_all_seen(void)
+{
+ unsigned int i;
+ for (i = 0; i < seen_objects_nr; ++i) {
+ seen_objects[i]->flags &= ~(SEEN | ADDED | SHOWN);
+ }
+ seen_objects_nr = 0;
+}
+
+static uint32_t find_object_pos(const unsigned char *sha1)
+{
+ struct object_entry *entry = packlist_find(writer.to_pack, sha1, NULL);
+
+ if (!entry) {
+ die("Failed to write bitmap index. Packfile doesn't have full closure "
+ "(object %s is missing)", sha1_to_hex(sha1));
+ }
+
+ return entry->in_pack_pos;
+}
+
+static void show_object(struct object *object, const struct name_path *path,
+ const char *last, void *data)
+{
+ struct bitmap *base = data;
+ bitmap_set(base, find_object_pos(object->sha1));
+ mark_as_seen(object);
+}
+
+static void show_commit(struct commit *commit, void *data)
+{
+ mark_as_seen((struct object *)commit);
+}
+
+static int
+add_to_include_set(struct bitmap *base, struct commit *commit)
+{
+ khiter_t hash_pos;
+ uint32_t bitmap_pos = find_object_pos(commit->object.sha1);
+
+ if (bitmap_get(base, bitmap_pos))
+ return 0;
+
+ hash_pos = kh_get_sha1(writer.bitmaps, commit->object.sha1);
+ if (hash_pos < kh_end(writer.bitmaps)) {
+ struct bitmapped_commit *bc = kh_value(writer.bitmaps, hash_pos);
+ bitmap_or_ewah(base, bc->bitmap);
+ return 0;
+ }
+
+ bitmap_set(base, bitmap_pos);
+ return 1;
+}
+
+static int
+should_include(struct commit *commit, void *_data)
+{
+ struct bitmap *base = _data;
+
+ if (!add_to_include_set(base, commit)) {
+ struct commit_list *parent = commit->parents;
+
+ mark_as_seen((struct object *)commit);
+
+ while (parent) {
+ parent->item->object.flags |= SEEN;
+ mark_as_seen((struct object *)parent->item);
+ parent = parent->next;
+ }
+
+ return 0;
+ }
+
+ return 1;
+}
+
+static void compute_xor_offsets(void)
+{
+ static const int MAX_XOR_OFFSET_SEARCH = 10;
+
+ int i, next = 0;
+
+ while (next < writer.selected_nr) {
+ struct bitmapped_commit *stored = &writer.selected[next];
+
+ int best_offset = 0;
+ struct ewah_bitmap *best_bitmap = stored->bitmap;
+ struct ewah_bitmap *test_xor;
+
+ for (i = 1; i <= MAX_XOR_OFFSET_SEARCH; ++i) {
+ int curr = next - i;
+
+ if (curr < 0)
+ break;
+
+ test_xor = ewah_pool_new();
+ ewah_xor(writer.selected[curr].bitmap, stored->bitmap, test_xor);
+
+ if (test_xor->buffer_size < best_bitmap->buffer_size) {
+ if (best_bitmap != stored->bitmap)
+ ewah_pool_free(best_bitmap);
+
+ best_bitmap = test_xor;
+ best_offset = i;
+ } else {
+ ewah_pool_free(test_xor);
+ }
+ }
+
+ stored->xor_offset = best_offset;
+ stored->write_as = best_bitmap;
+
+ next++;
+ }
+}
+
+void bitmap_writer_build(struct packing_data *to_pack)
+{
+ static const double REUSE_BITMAP_THRESHOLD = 0.2;
+
+ int i, reuse_after, need_reset;
+ struct bitmap *base = bitmap_new();
+ struct rev_info revs;
+
+ writer.bitmaps = kh_init_sha1();
+ writer.to_pack = to_pack;
+
+ if (writer.show_progress)
+ writer.progress = start_progress("Building bitmaps", writer.selected_nr);
+
+ init_revisions(&revs, NULL);
+ revs.tag_objects = 1;
+ revs.tree_objects = 1;
+ revs.blob_objects = 1;
+ revs.no_walk = 0;
+
+ revs.include_check = should_include;
+ reset_revision_walk();
+
+ reuse_after = writer.selected_nr * REUSE_BITMAP_THRESHOLD;
+ need_reset = 0;
+
+ for (i = writer.selected_nr - 1; i >= 0; --i) {
+ struct bitmapped_commit *stored;
+ struct object *object;
+
+ khiter_t hash_pos;
+ int hash_ret;
+
+ stored = &writer.selected[i];
+ object = (struct object *)stored->commit;
+
+ if (stored->bitmap == NULL) {
+ if (i < writer.selected_nr - 1 &&
+ (need_reset ||
+ !in_merge_bases(writer.selected[i + 1].commit,
+ stored->commit))) {
+ bitmap_reset(base);
+ reset_all_seen();
+ }
+
+ add_pending_object(&revs, object, "");
+ revs.include_check_data = base;
+
+ if (prepare_revision_walk(&revs))
+ die("revision walk setup failed");
+
+ traverse_commit_list(&revs, show_commit, show_object, base);
+
+ revs.pending.nr = 0;
+ revs.pending.alloc = 0;
+ revs.pending.objects = NULL;
+
+ stored->bitmap = bitmap_to_ewah(base);
+ need_reset = 0;
+ } else
+ need_reset = 1;
+
+ if (i >= reuse_after)
+ stored->flags |= BITMAP_FLAG_REUSE;
+
+ hash_pos = kh_put_sha1(writer.bitmaps, object->sha1, &hash_ret);
+ if (hash_ret == 0)
+ die("Duplicate entry when writing index: %s",
+ sha1_to_hex(object->sha1));
+
+ kh_value(writer.bitmaps, hash_pos) = stored;
+ display_progress(writer.progress, writer.selected_nr - i);
+ }
+
+ bitmap_free(base);
+ stop_progress(&writer.progress);
+
+ compute_xor_offsets();
+}
+
+/**
+ * Select the commits that will be bitmapped
+ */
+static inline unsigned int next_commit_index(unsigned int idx)
+{
+ static const unsigned int MIN_COMMITS = 100;
+ static const unsigned int MAX_COMMITS = 5000;
+
+ static const unsigned int MUST_REGION = 100;
+ static const unsigned int MIN_REGION = 20000;
+
+ unsigned int offset, next;
+
+ if (idx <= MUST_REGION)
+ return 0;
+
+ if (idx <= MIN_REGION) {
+ offset = idx - MUST_REGION;
+ return (offset < MIN_COMMITS) ? offset : MIN_COMMITS;
+ }
+
+ offset = idx - MIN_REGION;
+ next = (offset < MAX_COMMITS) ? offset : MAX_COMMITS;
+
+ return (next > MIN_COMMITS) ? next : MIN_COMMITS;
+}
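
Worked values for the spacing function above, following directly from its constants (illustration only); the return value is how many commits the caller skips before picking the next bitmap candidate:

	idx =    50  ->    0    (every commit in the newest 100 is a candidate)
	idx =  5000  ->  100    (roughly one candidate per 100 commits up to 20000)
	idx = 30000  -> 5000    (roughly one candidate per 5000 commits beyond that)
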
+
+static int date_compare(const void *_a, const void *_b)
+{
+ struct commit *a = *(struct commit **)_a;
+ struct commit *b = *(struct commit **)_b;
+ return (long)b->date - (long)a->date;
+}
+
+void bitmap_writer_reuse_bitmaps(struct packing_data *to_pack)
+{
+ if (prepare_bitmap_git() < 0)
+ return;
+
+ writer.reused = kh_init_sha1();
+ rebuild_existing_bitmaps(to_pack, writer.reused, writer.show_progress);
+}
+
+static struct ewah_bitmap *find_reused_bitmap(const unsigned char *sha1)
+{
+ khiter_t hash_pos;
+
+ if (!writer.reused)
+ return NULL;
+
+ hash_pos = kh_get_sha1(writer.reused, sha1);
+ if (hash_pos >= kh_end(writer.reused))
+ return NULL;
+
+ return kh_value(writer.reused, hash_pos);
+}
+
+void bitmap_writer_select_commits(struct commit **indexed_commits,
+ unsigned int indexed_commits_nr,
+ int max_bitmaps)
+{
+ unsigned int i = 0, j, next;
+
+ qsort(indexed_commits, indexed_commits_nr, sizeof(indexed_commits[0]),
+ date_compare);
+
+ if (writer.show_progress)
+ writer.progress = start_progress("Selecting bitmap commits", 0);
+
+ if (indexed_commits_nr < 100) {
+ for (i = 0; i < indexed_commits_nr; ++i)
+ push_bitmapped_commit(indexed_commits[i], NULL);
+ return;
+ }
+
+ for (;;) {
+ struct ewah_bitmap *reused_bitmap = NULL;
+ struct commit *chosen = NULL;
+
+ next = next_commit_index(i);
+
+ if (i + next >= indexed_commits_nr)
+ break;
+
+ if (max_bitmaps > 0 && writer.selected_nr >= max_bitmaps) {
+ writer.selected_nr = max_bitmaps;
+ break;
+ }
+
+ if (next == 0) {
+ chosen = indexed_commits[i];
+ reused_bitmap = find_reused_bitmap(chosen->object.sha1);
+ } else {
+ chosen = indexed_commits[i + next];
+
+ for (j = 0; j <= next; ++j) {
+ struct commit *cm = indexed_commits[i + j];
+
+ reused_bitmap = find_reused_bitmap(cm->object.sha1);
+ if (reused_bitmap || (cm->object.flags & NEEDS_BITMAP) != 0) {
+ chosen = cm;
+ break;
+ }
+
+ if (cm->parents && cm->parents->next)
+ chosen = cm;
+ }
+ }
+
+ push_bitmapped_commit(chosen, reused_bitmap);
+
+ i += next + 1;
+ display_progress(writer.progress, i);
+ }
+
+ stop_progress(&writer.progress);
+}
+
+
+static int sha1write_ewah_helper(void *f, const void *buf, size_t len)
+{
+ /* sha1write will die on error */
+ sha1write(f, buf, len);
+ return len;
+}
+
+/**
+ * Write the bitmap index to disk
+ */
+static inline void dump_bitmap(struct sha1file *f, struct ewah_bitmap *bitmap)
+{
+ if (ewah_serialize_to(bitmap, sha1write_ewah_helper, f) < 0)
+ die("Failed to write bitmap index");
+}
+
+static const unsigned char *sha1_access(size_t pos, void *table)
+{
+ struct pack_idx_entry **index = table;
+ return index[pos]->sha1;
+}
+
+static void write_selected_commits_v1(struct sha1file *f,
+ struct pack_idx_entry **index,
+ uint32_t index_nr)
+{
+ int i;
+
+ for (i = 0; i < writer.selected_nr; ++i) {
+ struct bitmapped_commit *stored = &writer.selected[i];
+ struct bitmap_disk_entry on_disk;
+
+ int commit_pos =
+ sha1_pos(stored->commit->object.sha1, index, index_nr, sha1_access);
+
+ if (commit_pos < 0)
+ die("BUG: trying to write commit not in index");
+
+ on_disk.object_pos = htonl(commit_pos);
+ on_disk.xor_offset = stored->xor_offset;
+ on_disk.flags = stored->flags;
+
+ sha1write(f, &on_disk, sizeof(on_disk));
+ dump_bitmap(f, stored->write_as);
+ }
+}
+
+static void write_hash_cache(struct sha1file *f,
+ struct pack_idx_entry **index,
+ uint32_t index_nr)
+{
+ uint32_t i;
+
+ for (i = 0; i < index_nr; ++i) {
+ struct object_entry *entry = (struct object_entry *)index[i];
+ uint32_t hash_value = htonl(entry->hash);
+ sha1write(f, &hash_value, sizeof(hash_value));
+ }
+}
+
+void bitmap_writer_set_checksum(unsigned char *sha1)
+{
+ hashcpy(writer.pack_checksum, sha1);
+}
+
+void bitmap_writer_finish(struct pack_idx_entry **index,
+ uint32_t index_nr,
+ const char *filename,
+ uint16_t options)
+{
+ static char tmp_file[PATH_MAX];
+ static uint16_t default_version = 1;
+ static uint16_t flags = BITMAP_OPT_FULL_DAG;
+ struct sha1file *f;
+
+ struct bitmap_disk_header header;
+
+ int fd = odb_mkstemp(tmp_file, sizeof(tmp_file), "pack/tmp_bitmap_XXXXXX");
+
+ if (fd < 0)
+ die_errno("unable to create '%s'", tmp_file);
+ f = sha1fd(fd, tmp_file);
+
+ memcpy(header.magic, BITMAP_IDX_SIGNATURE, sizeof(BITMAP_IDX_SIGNATURE));
+ header.version = htons(default_version);
+ header.options = htons(flags | options);
+ header.entry_count = htonl(writer.selected_nr);
+ hashcpy(header.checksum, writer.pack_checksum);
+
+ sha1write(f, &header, sizeof(header));
+ dump_bitmap(f, writer.commits);
+ dump_bitmap(f, writer.trees);
+ dump_bitmap(f, writer.blobs);
+ dump_bitmap(f, writer.tags);
+ write_selected_commits_v1(f, index, index_nr);
+
+ if (options & BITMAP_OPT_HASH_CACHE)
+ write_hash_cache(f, index, index_nr);
+
+ sha1close(f, NULL, CSUM_FSYNC);
+
+ if (adjust_shared_perm(tmp_file))
+ die_errno("unable to make temporary bitmap file readable");
+
+ if (rename(tmp_file, filename))
+ die_errno("unable to rename temporary bitmap file to '%s'", filename);
+}
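
A hedged sketch of one calling order that satisfies the dependencies in this file (the index array, commit list, checksum and filename are placeholders supplied by the pack writer); the type index must be built before bitmap_writer_build() because find_object_pos() relies on the in_pack_pos values it assigns:

	bitmap_writer_show_progress(1);
	bitmap_writer_set_checksum(pack_checksum);
	bitmap_writer_build_type_index(index, index_nr);
	bitmap_writer_reuse_bitmaps(&to_pack);	/* optional: seed from an existing .bitmap */
	bitmap_writer_select_commits(indexed_commits, indexed_commits_nr, -1);
	bitmap_writer_build(&to_pack);
	bitmap_writer_finish(index, index_nr, bitmap_filename, 0);
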
--- /dev/null
+#include "cache.h"
+#include "commit.h"
+#include "tag.h"
+#include "diff.h"
+#include "revision.h"
+#include "progress.h"
+#include "list-objects.h"
+#include "pack.h"
+#include "pack-bitmap.h"
+#include "pack-revindex.h"
+#include "pack-objects.h"
+
+/*
+ * An entry on the bitmap index, representing the bitmap for a given
+ * commit.
+ */
+struct stored_bitmap {
+ unsigned char sha1[20];
+ struct ewah_bitmap *root;
+ struct stored_bitmap *xor;
+ int flags;
+};
+
+/*
+ * The currently active bitmap index. By design, repositories only have
+ * a single bitmap index available (the index for the biggest packfile in
+ * the repository), since bitmap indexes need full closure.
+ *
+ * If there is more than one bitmap index available (e.g. because of alternates),
+ * the active bitmap index is the largest one.
+ */
+static struct bitmap_index {
+	/* Packfile to which this bitmap index belongs */
+ struct packed_git *pack;
+
+ /* reverse index for the packfile */
+ struct pack_revindex *reverse_index;
+
+ /*
+ * Mark the first `reuse_objects` in the packfile as reused:
+ * they will be sent as-is without using them for repacking
+ * calculations
+ */
+ uint32_t reuse_objects;
+
+ /* mmapped buffer of the whole bitmap index */
+ unsigned char *map;
+	size_t map_size; /* size of the mmapped buffer */
+ size_t map_pos; /* current position when loading the index */
+
+ /*
+ * Type indexes.
+ *
+ * Each bitmap marks which objects in the packfile are of the given
+ * type. This provides type information when yielding the objects from
+ * the packfile during a walk, which allows for better delta bases.
+ */
+ struct ewah_bitmap *commits;
+ struct ewah_bitmap *trees;
+ struct ewah_bitmap *blobs;
+ struct ewah_bitmap *tags;
+
+	/* Map from SHA1 -> `stored_bitmap` for all the bitmapped commits */
+ khash_sha1 *bitmaps;
+
+ /* Number of bitmapped commits */
+ uint32_t entry_count;
+
+ /* Name-hash cache (or NULL if not present). */
+ uint32_t *hashes;
+
+ /*
+ * Extended index.
+ *
+ * When trying to perform bitmap operations with objects that are not
+ * packed in `pack`, these objects are added to this "fake index" and
+ * are assumed to appear at the end of the packfile for all operations
+ */
+ struct eindex {
+ struct object **objects;
+ uint32_t *hashes;
+ uint32_t count, alloc;
+ khash_sha1_pos *positions;
+ } ext_index;
+
+ /* Bitmap result of the last performed walk */
+ struct bitmap *result;
+
+ /* Version of the bitmap index */
+ unsigned int version;
+
+ unsigned loaded : 1;
+
+} bitmap_git;
+
+static struct ewah_bitmap *lookup_stored_bitmap(struct stored_bitmap *st)
+{
+ struct ewah_bitmap *parent;
+ struct ewah_bitmap *composed;
+
+ if (st->xor == NULL)
+ return st->root;
+
+ composed = ewah_pool_new();
+ parent = lookup_stored_bitmap(st->xor);
+ ewah_xor(st->root, parent, composed);
+
+ ewah_pool_free(st->root);
+ st->root = composed;
+ st->xor = NULL;
+
+ return composed;
+}
+
+/*
+ * Read a bitmap from the current read position on the mmapped
+ * index, and increase the read position accordingly
+ */
+static struct ewah_bitmap *read_bitmap_1(struct bitmap_index *index)
+{
+ struct ewah_bitmap *b = ewah_pool_new();
+
+ int bitmap_size = ewah_read_mmap(b,
+ index->map + index->map_pos,
+ index->map_size - index->map_pos);
+
+ if (bitmap_size < 0) {
+ error("Failed to load bitmap index (corrupted?)");
+ ewah_pool_free(b);
+ return NULL;
+ }
+
+ index->map_pos += bitmap_size;
+ return b;
+}
+
+static int load_bitmap_header(struct bitmap_index *index)
+{
+ struct bitmap_disk_header *header = (void *)index->map;
+
+ if (index->map_size < sizeof(*header) + 20)
+ return error("Corrupted bitmap index (missing header data)");
+
+ if (memcmp(header->magic, BITMAP_IDX_SIGNATURE, sizeof(BITMAP_IDX_SIGNATURE)) != 0)
+ return error("Corrupted bitmap index file (wrong header)");
+
+ index->version = ntohs(header->version);
+ if (index->version != 1)
+ return error("Unsupported version for bitmap index file (%d)", index->version);
+
+ /* Parse known bitmap format options */
+ {
+ uint32_t flags = ntohs(header->options);
+
+ if ((flags & BITMAP_OPT_FULL_DAG) == 0)
+ return error("Unsupported options for bitmap index file "
+ "(Git requires BITMAP_OPT_FULL_DAG)");
+
+ if (flags & BITMAP_OPT_HASH_CACHE) {
+ unsigned char *end = index->map + index->map_size - 20;
+ index->hashes = ((uint32_t *)end) - index->pack->num_objects;
+ }
+ }
+
+ index->entry_count = ntohl(header->entry_count);
+ index->map_pos += sizeof(*header);
+ return 0;
+}
+
+static struct stored_bitmap *store_bitmap(struct bitmap_index *index,
+ struct ewah_bitmap *root,
+ const unsigned char *sha1,
+ struct stored_bitmap *xor_with,
+ int flags)
+{
+ struct stored_bitmap *stored;
+ khiter_t hash_pos;
+ int ret;
+
+ stored = xmalloc(sizeof(struct stored_bitmap));
+ stored->root = root;
+ stored->xor = xor_with;
+ stored->flags = flags;
+ hashcpy(stored->sha1, sha1);
+
+ hash_pos = kh_put_sha1(index->bitmaps, stored->sha1, &ret);
+
+	/* A zero return code means the insertion succeeded with no changes
+	 * because the SHA1 already existed in the map; this is bad, as there
+	 * should not be duplicate commits in the index. */
+ if (ret == 0) {
+ error("Duplicate entry in bitmap index: %s", sha1_to_hex(sha1));
+ return NULL;
+ }
+
+ kh_value(index->bitmaps, hash_pos) = stored;
+ return stored;
+}
+
+static int load_bitmap_entries_v1(struct bitmap_index *index)
+{
+ static const size_t MAX_XOR_OFFSET = 160;
+
+ uint32_t i;
+ struct stored_bitmap **recent_bitmaps;
+ struct bitmap_disk_entry *entry;
+
+ recent_bitmaps = xcalloc(MAX_XOR_OFFSET, sizeof(struct stored_bitmap));
+
+ for (i = 0; i < index->entry_count; ++i) {
+ int xor_offset, flags;
+ struct ewah_bitmap *bitmap = NULL;
+ struct stored_bitmap *xor_bitmap = NULL;
+ uint32_t commit_idx_pos;
+ const unsigned char *sha1;
+
+ entry = (struct bitmap_disk_entry *)(index->map + index->map_pos);
+ index->map_pos += sizeof(struct bitmap_disk_entry);
+
+ commit_idx_pos = ntohl(entry->object_pos);
+ sha1 = nth_packed_object_sha1(index->pack, commit_idx_pos);
+
+ xor_offset = (int)entry->xor_offset;
+ flags = (int)entry->flags;
+
+ bitmap = read_bitmap_1(index);
+ if (!bitmap)
+ return -1;
+
+ if (xor_offset > MAX_XOR_OFFSET || xor_offset > i)
+ return error("Corrupted bitmap pack index");
+
+ if (xor_offset > 0) {
+ xor_bitmap = recent_bitmaps[(i - xor_offset) % MAX_XOR_OFFSET];
+
+ if (xor_bitmap == NULL)
+ return error("Invalid XOR offset in bitmap pack index");
+ }
+
+ recent_bitmaps[i % MAX_XOR_OFFSET] = store_bitmap(
+ index, bitmap, sha1, xor_bitmap, flags);
+ }
+
+ return 0;
+}
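
Only the last MAX_XOR_OFFSET stored bitmaps are kept around in the `recent_bitmaps` ring buffer, because the on-disk xor_offset is a single byte counting backwards from the current entry. A small sketch of the ring-buffer arithmetic, with hypothetical values:

    #include <assert.h>
    #include <stddef.h>

    int main(void)
    {
            const size_t MAX_XOR_OFFSET = 160;
            size_t i = 200;        /* entry currently being loaded */
            size_t xor_offset = 3; /* its delta base was loaded three entries earlier */

            /* entry 197 was stored in slot 197 % 160 == 37 and is still there */
            assert((i - xor_offset) % MAX_XOR_OFFSET == 37);
            return 0;
    }
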
+
+static int open_pack_bitmap_1(struct packed_git *packfile)
+{
+ int fd;
+ struct stat st;
+ char *idx_name;
+
+ if (open_pack_index(packfile))
+ return -1;
+
+ idx_name = pack_bitmap_filename(packfile);
+ fd = git_open_noatime(idx_name);
+ free(idx_name);
+
+ if (fd < 0)
+ return -1;
+
+ if (fstat(fd, &st)) {
+ close(fd);
+ return -1;
+ }
+
+ if (bitmap_git.pack) {
+ warning("ignoring extra bitmap file: %s", packfile->pack_name);
+ close(fd);
+ return -1;
+ }
+
+ bitmap_git.pack = packfile;
+ bitmap_git.map_size = xsize_t(st.st_size);
+ bitmap_git.map = xmmap(NULL, bitmap_git.map_size, PROT_READ, MAP_PRIVATE, fd, 0);
+ bitmap_git.map_pos = 0;
+ close(fd);
+
+ if (load_bitmap_header(&bitmap_git) < 0) {
+ munmap(bitmap_git.map, bitmap_git.map_size);
+ bitmap_git.map = NULL;
+ bitmap_git.map_size = 0;
+ return -1;
+ }
+
+ return 0;
+}
+
+static int load_pack_bitmap(void)
+{
+ assert(bitmap_git.map && !bitmap_git.loaded);
+
+ bitmap_git.bitmaps = kh_init_sha1();
+ bitmap_git.ext_index.positions = kh_init_sha1_pos();
+ bitmap_git.reverse_index = revindex_for_pack(bitmap_git.pack);
+
+ if (!(bitmap_git.commits = read_bitmap_1(&bitmap_git)) ||
+ !(bitmap_git.trees = read_bitmap_1(&bitmap_git)) ||
+ !(bitmap_git.blobs = read_bitmap_1(&bitmap_git)) ||
+ !(bitmap_git.tags = read_bitmap_1(&bitmap_git)))
+ goto failed;
+
+ if (load_bitmap_entries_v1(&bitmap_git) < 0)
+ goto failed;
+
+ bitmap_git.loaded = 1;
+ return 0;
+
+failed:
+ munmap(bitmap_git.map, bitmap_git.map_size);
+ bitmap_git.map = NULL;
+ bitmap_git.map_size = 0;
+ return -1;
+}
+
+char *pack_bitmap_filename(struct packed_git *p)
+{
+ char *idx_name;
+ int len;
+
+ len = strlen(p->pack_name) - strlen(".pack");
+ idx_name = xmalloc(len + strlen(".bitmap") + 1);
+
+ memcpy(idx_name, p->pack_name, len);
+ memcpy(idx_name + len, ".bitmap", strlen(".bitmap") + 1);
+
+ return idx_name;
+}
+
+static int open_pack_bitmap(void)
+{
+ struct packed_git *p;
+ int ret = -1;
+
+ assert(!bitmap_git.map && !bitmap_git.loaded);
+
+ prepare_packed_git();
+ for (p = packed_git; p; p = p->next) {
+ if (open_pack_bitmap_1(p) == 0)
+ ret = 0;
+ }
+
+ return ret;
+}
+
+int prepare_bitmap_git(void)
+{
+ if (bitmap_git.loaded)
+ return 0;
+
+ if (!open_pack_bitmap())
+ return load_pack_bitmap();
+
+ return -1;
+}
+
+struct include_data {
+ struct bitmap *base;
+ struct bitmap *seen;
+};
+
+static inline int bitmap_position_extended(const unsigned char *sha1)
+{
+ khash_sha1_pos *positions = bitmap_git.ext_index.positions;
+ khiter_t pos = kh_get_sha1_pos(positions, sha1);
+
+ if (pos < kh_end(positions)) {
+ int bitmap_pos = kh_value(positions, pos);
+ return bitmap_pos + bitmap_git.pack->num_objects;
+ }
+
+ return -1;
+}
+
+static inline int bitmap_position_packfile(const unsigned char *sha1)
+{
+ off_t offset = find_pack_entry_one(sha1, bitmap_git.pack);
+ if (!offset)
+ return -1;
+
+ return find_revindex_position(bitmap_git.reverse_index, offset);
+}
+
+static int bitmap_position(const unsigned char *sha1)
+{
+ int pos = bitmap_position_packfile(sha1);
+ return (pos >= 0) ? pos : bitmap_position_extended(sha1);
+}
+
+static int ext_index_add_object(struct object *object, const char *name)
+{
+ struct eindex *eindex = &bitmap_git.ext_index;
+
+ khiter_t hash_pos;
+ int hash_ret;
+ int bitmap_pos;
+
+ hash_pos = kh_put_sha1_pos(eindex->positions, object->sha1, &hash_ret);
+ if (hash_ret > 0) {
+ if (eindex->count >= eindex->alloc) {
+ eindex->alloc = (eindex->alloc + 16) * 3 / 2;
+ eindex->objects = xrealloc(eindex->objects,
+ eindex->alloc * sizeof(struct object *));
+ eindex->hashes = xrealloc(eindex->hashes,
+ eindex->alloc * sizeof(uint32_t));
+ }
+
+ bitmap_pos = eindex->count;
+ eindex->objects[eindex->count] = object;
+ eindex->hashes[eindex->count] = pack_name_hash(name);
+ kh_value(eindex->positions, hash_pos) = bitmap_pos;
+ eindex->count++;
+ } else {
+ bitmap_pos = kh_value(eindex->positions, hash_pos);
+ }
+
+ return bitmap_pos + bitmap_git.pack->num_objects;
+}
+
+static void show_object(struct object *object, const struct name_path *path,
+ const char *last, void *data)
+{
+ struct bitmap *base = data;
+ int bitmap_pos;
+
+ bitmap_pos = bitmap_position(object->sha1);
+
+ if (bitmap_pos < 0) {
+ char *name = path_name(path, last);
+ bitmap_pos = ext_index_add_object(object, name);
+ free(name);
+ }
+
+ bitmap_set(base, bitmap_pos);
+}
+
+static void show_commit(struct commit *commit, void *data)
+{
+}
+
+static int add_to_include_set(struct include_data *data,
+ const unsigned char *sha1,
+ int bitmap_pos)
+{
+ khiter_t hash_pos;
+
+ if (data->seen && bitmap_get(data->seen, bitmap_pos))
+ return 0;
+
+ if (bitmap_get(data->base, bitmap_pos))
+ return 0;
+
+ hash_pos = kh_get_sha1(bitmap_git.bitmaps, sha1);
+ if (hash_pos < kh_end(bitmap_git.bitmaps)) {
+ struct stored_bitmap *st = kh_value(bitmap_git.bitmaps, hash_pos);
+ bitmap_or_ewah(data->base, lookup_stored_bitmap(st));
+ return 0;
+ }
+
+ bitmap_set(data->base, bitmap_pos);
+ return 1;
+}
+
+static int should_include(struct commit *commit, void *_data)
+{
+ struct include_data *data = _data;
+ int bitmap_pos;
+
+ bitmap_pos = bitmap_position(commit->object.sha1);
+ if (bitmap_pos < 0)
+ bitmap_pos = ext_index_add_object((struct object *)commit, NULL);
+
+ if (!add_to_include_set(data, commit->object.sha1, bitmap_pos)) {
+ struct commit_list *parent = commit->parents;
+
+ while (parent) {
+ parent->item->object.flags |= SEEN;
+ parent = parent->next;
+ }
+
+ return 0;
+ }
+
+ return 1;
+}
+
+static struct bitmap *find_objects(struct rev_info *revs,
+ struct object_list *roots,
+ struct bitmap *seen)
+{
+ struct bitmap *base = NULL;
+ int needs_walk = 0;
+
+ struct object_list *not_mapped = NULL;
+
+ /*
+ * Go through all the roots for the walk. The ones that have bitmaps
+ * on the bitmap index will be `or`ed together to form an initial
+ * global reachability analysis.
+ *
+ * The ones without bitmaps in the index will be stored in the
+ * `not_mapped` list for further processing.
+ */
+ while (roots) {
+ struct object *object = roots->item;
+ roots = roots->next;
+
+ if (object->type == OBJ_COMMIT) {
+ khiter_t pos = kh_get_sha1(bitmap_git.bitmaps, object->sha1);
+
+ if (pos < kh_end(bitmap_git.bitmaps)) {
+ struct stored_bitmap *st = kh_value(bitmap_git.bitmaps, pos);
+ struct ewah_bitmap *or_with = lookup_stored_bitmap(st);
+
+ if (base == NULL)
+ base = ewah_to_bitmap(or_with);
+ else
+ bitmap_or_ewah(base, or_with);
+
+ object->flags |= SEEN;
+ continue;
+ }
+ }
+
+ object_list_insert(object, &not_mapped);
+ }
+
+ /*
+ * Best case scenario: We found bitmaps for all the roots,
+ * so the resulting `or` bitmap has the full reachability analysis
+ */
+ if (not_mapped == NULL)
+ return base;
+
+ roots = not_mapped;
+
+ /*
+ * Let's iterate through all the roots that don't have bitmaps to
+ * check if we can determine them to be reachable from the existing
+ * global bitmap.
+ *
+ * If we cannot find them in the existing global bitmap, we'll need
+ * to push them to an actual walk and run it until we can confirm
+ * they are reachable
+ */
+ while (roots) {
+ struct object *object = roots->item;
+ int pos;
+
+ roots = roots->next;
+ pos = bitmap_position(object->sha1);
+
+ if (pos < 0 || base == NULL || !bitmap_get(base, pos)) {
+ object->flags &= ~UNINTERESTING;
+ add_pending_object(revs, object, "");
+ needs_walk = 1;
+ } else {
+ object->flags |= SEEN;
+ }
+ }
+
+ if (needs_walk) {
+ struct include_data incdata;
+
+ if (base == NULL)
+ base = bitmap_new();
+
+ incdata.base = base;
+ incdata.seen = seen;
+
+ revs->include_check = should_include;
+ revs->include_check_data = &incdata;
+
+ if (prepare_revision_walk(revs))
+ die("revision walk setup failed");
+
+ traverse_commit_list(revs, show_commit, show_object, base);
+ }
+
+ return base;
+}
+
+static void show_extended_objects(struct bitmap *objects,
+ show_reachable_fn show_reach)
+{
+ struct eindex *eindex = &bitmap_git.ext_index;
+ uint32_t i;
+
+ for (i = 0; i < eindex->count; ++i) {
+ struct object *obj;
+
+ if (!bitmap_get(objects, bitmap_git.pack->num_objects + i))
+ continue;
+
+ obj = eindex->objects[i];
+ show_reach(obj->sha1, obj->type, 0, eindex->hashes[i], NULL, 0);
+ }
+}
+
+static void show_objects_for_type(
+ struct bitmap *objects,
+ struct ewah_bitmap *type_filter,
+ enum object_type object_type,
+ show_reachable_fn show_reach)
+{
+ size_t pos = 0, i = 0;
+ uint32_t offset;
+
+ struct ewah_iterator it;
+ eword_t filter;
+
+ if (bitmap_git.reuse_objects == bitmap_git.pack->num_objects)
+ return;
+
+ ewah_iterator_init(&it, type_filter);
+
+ while (i < objects->word_alloc && ewah_iterator_next(&filter, &it)) {
+ eword_t word = objects->words[i] & filter;
+
+ for (offset = 0; offset < BITS_IN_WORD; ++offset) {
+ const unsigned char *sha1;
+ struct revindex_entry *entry;
+ uint32_t hash = 0;
+
+ if ((word >> offset) == 0)
+ break;
+
+ offset += ewah_bit_ctz64(word >> offset);
+
+ if (pos + offset < bitmap_git.reuse_objects)
+ continue;
+
+ entry = &bitmap_git.reverse_index->revindex[pos + offset];
+ sha1 = nth_packed_object_sha1(bitmap_git.pack, entry->nr);
+
+ if (bitmap_git.hashes)
+ hash = ntohl(bitmap_git.hashes[entry->nr]);
+
+ show_reach(sha1, object_type, 0, hash, bitmap_git.pack, entry->offset);
+ }
+
+ pos += BITS_IN_WORD;
+ i++;
+ }
+}
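
This function, like rebuild_bitmap() further down in this file, walks the set bits of each 64-bit word with a count-trailing-zeros loop instead of testing every bit. A self-contained sketch of the pattern, using GCC/Clang's __builtin_ctzll as a stand-in for ewah_bit_ctz64:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            uint64_t word = 0x8000000000000011ULL; /* bits 0, 4 and 63 set */
            unsigned offset;

            for (offset = 0; offset < 64; offset++) {
                    if ((word >> offset) == 0)
                            break;                              /* no set bits left */
                    offset += __builtin_ctzll(word >> offset);  /* skip to the next set bit */
                    printf("bit %u is set\n", offset);
            }
            return 0;
    }
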
+
+static int in_bitmapped_pack(struct object_list *roots)
+{
+ while (roots) {
+ struct object *object = roots->item;
+ roots = roots->next;
+
+ if (find_pack_entry_one(object->sha1, bitmap_git.pack) > 0)
+ return 1;
+ }
+
+ return 0;
+}
+
+int prepare_bitmap_walk(struct rev_info *revs)
+{
+ unsigned int i;
+ unsigned int pending_nr = revs->pending.nr;
+ struct object_array_entry *pending_e = revs->pending.objects;
+
+ struct object_list *wants = NULL;
+ struct object_list *haves = NULL;
+
+ struct bitmap *wants_bitmap = NULL;
+ struct bitmap *haves_bitmap = NULL;
+
+ if (!bitmap_git.loaded) {
+ /* try to open a bitmapped pack, but don't parse it yet
+ * because we may not need to use it */
+ if (open_pack_bitmap() < 0)
+ return -1;
+ }
+
+ for (i = 0; i < pending_nr; ++i) {
+ struct object *object = pending_e[i].item;
+
+ if (object->type == OBJ_NONE)
+ parse_object_or_die(object->sha1, NULL);
+
+ while (object->type == OBJ_TAG) {
+ struct tag *tag = (struct tag *) object;
+
+ if (object->flags & UNINTERESTING)
+ object_list_insert(object, &haves);
+ else
+ object_list_insert(object, &wants);
+
+ if (!tag->tagged)
+ die("bad tag");
+ object = parse_object_or_die(tag->tagged->sha1, NULL);
+ }
+
+ if (object->flags & UNINTERESTING)
+ object_list_insert(object, &haves);
+ else
+ object_list_insert(object, &wants);
+ }
+
+ /*
+ * if we have a HAVES list, but none of those haves is contained
+ * in the packfile that has a bitmap, we don't have anything to
+ * optimize here
+ */
+ if (haves && !in_bitmapped_pack(haves))
+ return -1;
+
+ /* if we don't want anything, we're done here */
+ if (!wants)
+ return -1;
+
+ /*
+ * now we're going to use bitmaps, so load the actual bitmap entries
+ * from disk. this is the point of no return; after this the rev_list
+ * becomes invalidated and we must perform the revwalk through bitmaps
+ */
+ if (!bitmap_git.loaded && load_pack_bitmap() < 0)
+ return -1;
+
+ revs->pending.nr = 0;
+ revs->pending.alloc = 0;
+ revs->pending.objects = NULL;
+
+ if (haves) {
+ haves_bitmap = find_objects(revs, haves, NULL);
+ reset_revision_walk();
+
+ if (haves_bitmap == NULL)
+ die("BUG: failed to perform bitmap walk");
+ }
+
+ wants_bitmap = find_objects(revs, wants, haves_bitmap);
+
+ if (!wants_bitmap)
+ die("BUG: failed to perform bitmap walk");
+
+ if (haves_bitmap)
+ bitmap_and_not(wants_bitmap, haves_bitmap);
+
+ bitmap_git.result = wants_bitmap;
+
+ bitmap_free(haves_bitmap);
+ return 0;
+}
+
+int reuse_partial_packfile_from_bitmap(struct packed_git **packfile,
+ uint32_t *entries,
+ off_t *up_to)
+{
+ /*
+ * Reuse the packfile content if we need more than
+ * 90% of its objects
+ */
+ static const double REUSE_PERCENT = 0.9;
+
+ struct bitmap *result = bitmap_git.result;
+ uint32_t reuse_threshold;
+ uint32_t i, reuse_objects = 0;
+
+ assert(result);
+
+ for (i = 0; i < result->word_alloc; ++i) {
+ if (result->words[i] != (eword_t)~0) {
+ reuse_objects += ewah_bit_ctz64(~result->words[i]);
+ break;
+ }
+
+ reuse_objects += BITS_IN_WORD;
+ }
+
+#ifdef GIT_BITMAP_DEBUG
+ {
+ const unsigned char *sha1;
+ struct revindex_entry *entry;
+
+ entry = &bitmap_git.reverse_index->revindex[reuse_objects];
+ sha1 = nth_packed_object_sha1(bitmap_git.pack, entry->nr);
+
+ fprintf(stderr, "Failed to reuse at %d (%016llx)\n",
+ reuse_objects, result->words[i]);
+ fprintf(stderr, " %s\n", sha1_to_hex(sha1));
+ }
+#endif
+
+ if (!reuse_objects)
+ return -1;
+
+ if (reuse_objects >= bitmap_git.pack->num_objects) {
+ bitmap_git.reuse_objects = *entries = bitmap_git.pack->num_objects;
+ *up_to = -1; /* reuse the full pack */
+ *packfile = bitmap_git.pack;
+ return 0;
+ }
+
+ reuse_threshold = bitmap_popcount(bitmap_git.result) * REUSE_PERCENT;
+
+ if (reuse_objects < reuse_threshold)
+ return -1;
+
+ bitmap_git.reuse_objects = *entries = reuse_objects;
+ *up_to = bitmap_git.reverse_index->revindex[reuse_objects].offset;
+ *packfile = bitmap_git.pack;
+
+ return 0;
+}
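
The prefix-counting loop near the top of this function relies on a complement trick: a word that is all ones contributes BITS_IN_WORD reusable objects, and the first word that is not all ones contributes its count of trailing ones, i.e. the trailing zeros of its complement. A tiny sketch, again with __builtin_ctzll standing in for ewah_bit_ctz64:

    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
            /* low 20 bits set, then a gap: only 20 leading objects are reusable */
            uint64_t word = 0x00000000000fffffULL;

            assert(word != (uint64_t)~0);
            assert(__builtin_ctzll(~word) == 20);
            return 0;
    }
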
+
+void traverse_bitmap_commit_list(show_reachable_fn show_reachable)
+{
+ assert(bitmap_git.result);
+
+ show_objects_for_type(bitmap_git.result, bitmap_git.commits,
+ OBJ_COMMIT, show_reachable);
+ show_objects_for_type(bitmap_git.result, bitmap_git.trees,
+ OBJ_TREE, show_reachable);
+ show_objects_for_type(bitmap_git.result, bitmap_git.blobs,
+ OBJ_BLOB, show_reachable);
+ show_objects_for_type(bitmap_git.result, bitmap_git.tags,
+ OBJ_TAG, show_reachable);
+
+ show_extended_objects(bitmap_git.result, show_reachable);
+
+ bitmap_free(bitmap_git.result);
+ bitmap_git.result = NULL;
+}
+
+static uint32_t count_object_type(struct bitmap *objects,
+ enum object_type type)
+{
+ struct eindex *eindex = &bitmap_git.ext_index;
+
+ uint32_t i = 0, count = 0;
+ struct ewah_iterator it;
+ eword_t filter;
+
+ switch (type) {
+ case OBJ_COMMIT:
+ ewah_iterator_init(&it, bitmap_git.commits);
+ break;
+
+ case OBJ_TREE:
+ ewah_iterator_init(&it, bitmap_git.trees);
+ break;
+
+ case OBJ_BLOB:
+ ewah_iterator_init(&it, bitmap_git.blobs);
+ break;
+
+ case OBJ_TAG:
+ ewah_iterator_init(&it, bitmap_git.tags);
+ break;
+
+ default:
+ return 0;
+ }
+
+ while (i < objects->word_alloc && ewah_iterator_next(&filter, &it)) {
+ eword_t word = objects->words[i++] & filter;
+ count += ewah_bit_popcount64(word);
+ }
+
+ for (i = 0; i < eindex->count; ++i) {
+ if (eindex->objects[i]->type == type &&
+ bitmap_get(objects, bitmap_git.pack->num_objects + i))
+ count++;
+ }
+
+ return count;
+}
+
+void count_bitmap_commit_list(uint32_t *commits, uint32_t *trees,
+ uint32_t *blobs, uint32_t *tags)
+{
+ assert(bitmap_git.result);
+
+ if (commits)
+ *commits = count_object_type(bitmap_git.result, OBJ_COMMIT);
+
+ if (trees)
+ *trees = count_object_type(bitmap_git.result, OBJ_TREE);
+
+ if (blobs)
+ *blobs = count_object_type(bitmap_git.result, OBJ_BLOB);
+
+ if (tags)
+ *tags = count_object_type(bitmap_git.result, OBJ_TAG);
+}
+
+struct bitmap_test_data {
+ struct bitmap *base;
+ struct progress *prg;
+ size_t seen;
+};
+
+static void test_show_object(struct object *object,
+ const struct name_path *path,
+ const char *last, void *data)
+{
+ struct bitmap_test_data *tdata = data;
+ int bitmap_pos;
+
+ bitmap_pos = bitmap_position(object->sha1);
+ if (bitmap_pos < 0)
+ die("Object not in bitmap: %s\n", sha1_to_hex(object->sha1));
+
+ bitmap_set(tdata->base, bitmap_pos);
+ display_progress(tdata->prg, ++tdata->seen);
+}
+
+static void test_show_commit(struct commit *commit, void *data)
+{
+ struct bitmap_test_data *tdata = data;
+ int bitmap_pos;
+
+ bitmap_pos = bitmap_position(commit->object.sha1);
+ if (bitmap_pos < 0)
+ die("Object not in bitmap: %s\n", sha1_to_hex(commit->object.sha1));
+
+ bitmap_set(tdata->base, bitmap_pos);
+ display_progress(tdata->prg, ++tdata->seen);
+}
+
+void test_bitmap_walk(struct rev_info *revs)
+{
+ struct object *root;
+ struct bitmap *result = NULL;
+ khiter_t pos;
+ size_t result_popcnt;
+ struct bitmap_test_data tdata;
+
+ if (prepare_bitmap_git())
+ die("failed to load bitmap indexes");
+
+ if (revs->pending.nr != 1)
+ die("you must specify exactly one commit to test");
+
+ fprintf(stderr, "Bitmap v%d test (%d entries loaded)\n",
+ bitmap_git.version, bitmap_git.entry_count);
+
+ root = revs->pending.objects[0].item;
+ pos = kh_get_sha1(bitmap_git.bitmaps, root->sha1);
+
+ if (pos < kh_end(bitmap_git.bitmaps)) {
+ struct stored_bitmap *st = kh_value(bitmap_git.bitmaps, pos);
+ struct ewah_bitmap *bm = lookup_stored_bitmap(st);
+
+ fprintf(stderr, "Found bitmap for %s. %d bits / %08x checksum\n",
+ sha1_to_hex(root->sha1), (int)bm->bit_size, ewah_checksum(bm));
+
+ result = ewah_to_bitmap(bm);
+ }
+
+ if (result == NULL)
+ die("Commit %s doesn't have an indexed bitmap", sha1_to_hex(root->sha1));
+
+ revs->tag_objects = 1;
+ revs->tree_objects = 1;
+ revs->blob_objects = 1;
+
+ result_popcnt = bitmap_popcount(result);
+
+ if (prepare_revision_walk(revs))
+ die("revision walk setup failed");
+
+ tdata.base = bitmap_new();
+ tdata.prg = start_progress("Verifying bitmap entries", result_popcnt);
+ tdata.seen = 0;
+
+ traverse_commit_list(revs, &test_show_commit, &test_show_object, &tdata);
+
+ stop_progress(&tdata.prg);
+
+ if (bitmap_equals(result, tdata.base))
+ fprintf(stderr, "OK!\n");
+ else
+ fprintf(stderr, "Mismatch!\n");
+}
+
+static int rebuild_bitmap(uint32_t *reposition,
+ struct ewah_bitmap *source,
+ struct bitmap *dest)
+{
+ uint32_t pos = 0;
+ struct ewah_iterator it;
+ eword_t word;
+
+ ewah_iterator_init(&it, source);
+
+ while (ewah_iterator_next(&word, &it)) {
+ uint32_t offset, bit_pos;
+
+ for (offset = 0; offset < BITS_IN_WORD; ++offset) {
+ if ((word >> offset) == 0)
+ break;
+
+ offset += ewah_bit_ctz64(word >> offset);
+
+ bit_pos = reposition[pos + offset];
+ if (bit_pos > 0)
+ bitmap_set(dest, bit_pos - 1);
+ else /* can't reuse, we don't have the object */
+ return -1;
+ }
+
+ pos += BITS_IN_WORD;
+ }
+ return 0;
+}
+
+int rebuild_existing_bitmaps(struct packing_data *mapping,
+ khash_sha1 *reused_bitmaps,
+ int show_progress)
+{
+ uint32_t i, num_objects;
+ uint32_t *reposition;
+ struct bitmap *rebuild;
+ struct stored_bitmap *stored;
+ struct progress *progress = NULL;
+
+ khiter_t hash_pos;
+ int hash_ret;
+
+ if (prepare_bitmap_git() < 0)
+ return -1;
+
+ num_objects = bitmap_git.pack->num_objects;
+ reposition = xcalloc(num_objects, sizeof(uint32_t));
+
+ for (i = 0; i < num_objects; ++i) {
+ const unsigned char *sha1;
+ struct revindex_entry *entry;
+ struct object_entry *oe;
+
+ entry = &bitmap_git.reverse_index->revindex[i];
+ sha1 = nth_packed_object_sha1(bitmap_git.pack, entry->nr);
+ oe = packlist_find(mapping, sha1, NULL);
+
+ if (oe)
+ reposition[i] = oe->in_pack_pos + 1;
+ }
+
+ rebuild = bitmap_new();
+ i = 0;
+
+ if (show_progress)
+ progress = start_progress("Reusing bitmaps", 0);
+
+ kh_foreach_value(bitmap_git.bitmaps, stored, {
+ if (stored->flags & BITMAP_FLAG_REUSE) {
+ if (!rebuild_bitmap(reposition,
+ lookup_stored_bitmap(stored),
+ rebuild)) {
+ hash_pos = kh_put_sha1(reused_bitmaps,
+ stored->sha1,
+ &hash_ret);
+ kh_value(reused_bitmaps, hash_pos) =
+ bitmap_to_ewah(rebuild);
+ }
+ bitmap_reset(rebuild);
+ display_progress(progress, ++i);
+ }
+ });
+
+ stop_progress(&progress);
+
+ free(reposition);
+ bitmap_free(rebuild);
+ return 0;
+}
--- /dev/null
+#ifndef PACK_BITMAP_H
+#define PACK_BITMAP_H
+
+#include "ewah/ewok.h"
+#include "khash.h"
+#include "pack-objects.h"
+
+struct bitmap_disk_entry {
+ uint32_t object_pos;
+ uint8_t xor_offset;
+ uint8_t flags;
+} __attribute__((packed));
+
+struct bitmap_disk_header {
+ char magic[4];
+ uint16_t version;
+ uint16_t options;
+ uint32_t entry_count;
+ unsigned char checksum[20];
+};
+
+static const char BITMAP_IDX_SIGNATURE[] = {'B', 'I', 'T', 'M'};
+
+#define NEEDS_BITMAP (1u<<22)
+
+enum pack_bitmap_opts {
+ BITMAP_OPT_FULL_DAG = 1,
+ BITMAP_OPT_HASH_CACHE = 4,
+};
+
+enum pack_bitmap_flags {
+ BITMAP_FLAG_REUSE = 0x1
+};
+
+typedef int (*show_reachable_fn)(
+ const unsigned char *sha1,
+ enum object_type type,
+ int flags,
+ uint32_t hash,
+ struct packed_git *found_pack,
+ off_t found_offset);
+
+int prepare_bitmap_git(void);
+void count_bitmap_commit_list(uint32_t *commits, uint32_t *trees, uint32_t *blobs, uint32_t *tags);
+void traverse_bitmap_commit_list(show_reachable_fn show_reachable);
+void test_bitmap_walk(struct rev_info *revs);
+char *pack_bitmap_filename(struct packed_git *p);
+int prepare_bitmap_walk(struct rev_info *revs);
+int reuse_partial_packfile_from_bitmap(struct packed_git **packfile, uint32_t *entries, off_t *up_to);
+int rebuild_existing_bitmaps(struct packing_data *mapping, khash_sha1 *reused_bitmaps, int show_progress);
+
+void bitmap_writer_show_progress(int show);
+void bitmap_writer_set_checksum(unsigned char *sha1);
+void bitmap_writer_build_type_index(struct pack_idx_entry **index, uint32_t index_nr);
+void bitmap_writer_reuse_bitmaps(struct packing_data *to_pack);
+void bitmap_writer_select_commits(struct commit **indexed_commits,
+ unsigned int indexed_commits_nr, int max_bitmaps);
+void bitmap_writer_build(struct packing_data *to_pack);
+void bitmap_writer_finish(struct pack_idx_entry **index,
+ uint32_t index_nr,
+ const char *filename,
+ uint16_t options);
+
+#endif
--- /dev/null
+#include "cache.h"
+#include "object.h"
+#include "pack.h"
+#include "pack-objects.h"
+
+static uint32_t locate_object_entry_hash(struct packing_data *pdata,
+ const unsigned char *sha1,
+ int *found)
+{
+ uint32_t i, hash, mask = (pdata->index_size - 1);
+
+ memcpy(&hash, sha1, sizeof(uint32_t));
+ i = hash & mask;
+
+ while (pdata->index[i] > 0) {
+ uint32_t pos = pdata->index[i] - 1;
+
+ if (!hashcmp(sha1, pdata->objects[pos].idx.sha1)) {
+ *found = 1;
+ return i;
+ }
+
+ i = (i + 1) & mask;
+ }
+
+ *found = 0;
+ return i;
+}
+
+static inline uint32_t closest_pow2(uint32_t v)
+{
+ v = v - 1;
+ v |= v >> 1;
+ v |= v >> 2;
+ v |= v >> 4;
+ v |= v >> 8;
+ v |= v >> 16;
+ return v + 1;
+}
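
closest_pow2() is the usual bit-smearing idiom for rounding up to the next power of two: subtract one, OR the high bit into every lower position, then add one back. A quick standalone check of its behavior:

    #include <assert.h>
    #include <stdint.h>

    static inline uint32_t closest_pow2(uint32_t v)
    {
            v = v - 1;
            v |= v >> 1;
            v |= v >> 2;
            v |= v >> 4;
            v |= v >> 8;
            v |= v >> 16;
            return v + 1;
    }

    int main(void)
    {
            assert(closest_pow2(1) == 1);
            assert(closest_pow2(1000) == 1024);
            assert(closest_pow2(1024) == 1024);
            assert(closest_pow2(1025) == 2048);
            /* note: an input of 0 wraps around and yields 0 */
            assert(closest_pow2(0) == 0);
            return 0;
    }
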
+
+static void rehash_objects(struct packing_data *pdata)
+{
+ uint32_t i;
+ struct object_entry *entry;
+
+ pdata->index_size = closest_pow2(pdata->nr_objects * 3);
+ if (pdata->index_size < 1024)
+ pdata->index_size = 1024;
+
+ pdata->index = xrealloc(pdata->index, sizeof(uint32_t) * pdata->index_size);
+ memset(pdata->index, 0, sizeof(int) * pdata->index_size);
+
+ entry = pdata->objects;
+
+ for (i = 0; i < pdata->nr_objects; i++) {
+ int found;
+ uint32_t ix = locate_object_entry_hash(pdata, entry->idx.sha1, &found);
+
+ if (found)
+ die("BUG: Duplicate object in hash");
+
+ pdata->index[ix] = i + 1;
+ entry++;
+ }
+}
+
+struct object_entry *packlist_find(struct packing_data *pdata,
+ const unsigned char *sha1,
+ uint32_t *index_pos)
+{
+ uint32_t i;
+ int found;
+
+ if (!pdata->index_size)
+ return NULL;
+
+ i = locate_object_entry_hash(pdata, sha1, &found);
+
+ if (index_pos)
+ *index_pos = i;
+
+ if (!found)
+ return NULL;
+
+ return &pdata->objects[pdata->index[i] - 1];
+}
+
+struct object_entry *packlist_alloc(struct packing_data *pdata,
+ const unsigned char *sha1,
+ uint32_t index_pos)
+{
+ struct object_entry *new_entry;
+
+ if (pdata->nr_objects >= pdata->nr_alloc) {
+ pdata->nr_alloc = (pdata->nr_alloc + 1024) * 3 / 2;
+ pdata->objects = xrealloc(pdata->objects,
+ pdata->nr_alloc * sizeof(*new_entry));
+ }
+
+ new_entry = pdata->objects + pdata->nr_objects++;
+
+ memset(new_entry, 0, sizeof(*new_entry));
+ hashcpy(new_entry->idx.sha1, sha1);
+
+ if (pdata->index_size * 3 <= pdata->nr_objects * 4)
+ rehash_objects(pdata);
+ else
+ pdata->index[index_pos] = pdata->nr_objects;
+
+ return new_entry;
+}
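
The two lookup helpers are meant to be used as a pair: packlist_find() reports the probe slot it stopped at even on a miss, so the caller can pass that slot straight to packlist_alloc() without probing again. A hedged usage sketch; the wrapper function and the name-hash assignment are illustrative, not taken from this patch:

    /* assumes the includes that pack-objects.h itself depends on */
    #include "pack-objects.h"

    static struct object_entry *add_or_find(struct packing_data *to_pack,
                                            const unsigned char *sha1,
                                            const char *name)
    {
            uint32_t index_pos = 0;
            struct object_entry *entry = packlist_find(to_pack, sha1, &index_pos);

            if (!entry) {
                    /* miss: insert at the slot the failed probe already located */
                    entry = packlist_alloc(to_pack, sha1, index_pos);
                    entry->hash = pack_name_hash(name);
            }
            return entry;
    }
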
--- /dev/null
+#ifndef PACK_OBJECTS_H
+#define PACK_OBJECTS_H
+
+struct object_entry {
+ struct pack_idx_entry idx;
+ unsigned long size; /* uncompressed size */
+ struct packed_git *in_pack; /* already in pack */
+ off_t in_pack_offset;
+ struct object_entry *delta; /* delta base object */
+ struct object_entry *delta_child; /* deltified objects that use me as a base */
+ struct object_entry *delta_sibling; /* other deltified objects that
+ * use the same base as me
+ */
+ void *delta_data; /* cached delta (uncompressed) */
+ unsigned long delta_size; /* delta data size (uncompressed) */
+ unsigned long z_delta_size; /* delta data size (compressed) */
+ enum object_type type;
+ enum object_type in_pack_type; /* could be delta */
+ uint32_t hash; /* name hint hash */
+ unsigned int in_pack_pos;
+ unsigned char in_pack_header_size;
+ unsigned preferred_base:1; /*
+ * we do not pack this, but it is
+ * available to be used as a base
+ * object to delta other objects against.
+ */
+ unsigned no_try_delta:1;
+ unsigned tagged:1; /* near the very tip of refs */
+ unsigned filled:1; /* assigned write-order */
+};
+
+struct packing_data {
+ struct object_entry *objects;
+ uint32_t nr_objects, nr_alloc;
+
+ int32_t *index;
+ uint32_t index_size;
+};
+
+struct object_entry *packlist_alloc(struct packing_data *pdata,
+ const unsigned char *sha1,
+ uint32_t index_pos);
+
+struct object_entry *packlist_find(struct packing_data *pdata,
+ const unsigned char *sha1,
+ uint32_t *index_pos);
+
+static inline uint32_t pack_name_hash(const char *name)
+{
+ uint32_t c, hash = 0;
+
+ if (!name)
+ return 0;
+
+ /*
+ * This effectively just creates a sortable number from the
+ * last sixteen non-whitespace characters. Last characters
+ * count "most", so things that end in ".c" sort together.
+ */
+ while ((c = *name++) != 0) {
+ if (isspace(c))
+ continue;
+ hash = (hash >> 2) + (c << 24);
+ }
+ return hash;
+}
+
+#endif
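
Because pack_name_hash() is accumulated from the trailing characters, paths sharing a suffix produce numerically close values and sort near each other, as the comment above notes. A standalone sketch that reproduces the function to show the effect:

    #include <ctype.h>
    #include <stdint.h>
    #include <stdio.h>

    static uint32_t name_hash(const char *name)
    {
            uint32_t c, hash = 0;

            while ((c = *name++) != 0) {
                    if (isspace(c))
                            continue;
                    hash = (hash >> 2) + (c << 24);
            }
            return hash;
    }

    int main(void)
    {
            /* the two ".c" paths hash close together; README lands far away */
            printf("%08x %08x %08x\n",
                   name_hash("dir/foo.c"),
                   name_hash("lib/bar.c"),
                   name_hash("README"));
            return 0;
    }
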
* get the object sha1 from the main index.
*/
-struct pack_revindex {
- struct packed_git *p;
- struct revindex_entry *revindex;
-};
-
static struct pack_revindex *pack_revindex;
static int pack_revindex_hashsz;
sort_revindex(rix->revindex, num_ent, p->pack_size);
}
-struct revindex_entry *find_pack_revindex(struct packed_git *p, off_t ofs)
+struct pack_revindex *revindex_for_pack(struct packed_git *p)
{
int num;
- unsigned lo, hi;
struct pack_revindex *rix;
- struct revindex_entry *revindex;
if (!pack_revindex_hashsz)
init_pack_revindex();
+
num = pack_revindex_ix(p);
if (num < 0)
die("internal error: pack revindex fubar");
rix = &pack_revindex[num];
if (!rix->revindex)
create_pack_revindex(rix);
- revindex = rix->revindex;
- lo = 0;
- hi = p->num_objects + 1;
+ return rix;
+}
+
+int find_revindex_position(struct pack_revindex *pridx, off_t ofs)
+{
+ int lo = 0;
+ int hi = pridx->p->num_objects + 1;
+ struct revindex_entry *revindex = pridx->revindex;
+
do {
unsigned mi = lo + (hi - lo) / 2;
if (revindex[mi].offset == ofs) {
- return revindex + mi;
+ return mi;
} else if (ofs < revindex[mi].offset)
hi = mi;
else
lo = mi + 1;
} while (lo < hi);
+
error("bad offset for revindex");
- return NULL;
+ return -1;
}
-void discard_revindex(void)
+struct revindex_entry *find_pack_revindex(struct packed_git *p, off_t ofs)
{
- if (pack_revindex_hashsz) {
- int i;
- for (i = 0; i < pack_revindex_hashsz; i++)
- free(pack_revindex[i].revindex);
- free(pack_revindex);
- pack_revindex_hashsz = 0;
- }
+ struct pack_revindex *pridx = revindex_for_pack(p);
+ int pos = find_revindex_position(pridx, ofs);
+
+ if (pos < 0)
+ return NULL;
+
+ return pridx->revindex + pos;
}
unsigned int nr;
};
+struct pack_revindex {
+ struct packed_git *p;
+ struct revindex_entry *revindex;
+};
+
+struct pack_revindex *revindex_for_pack(struct packed_git *p);
+int find_revindex_position(struct pack_revindex *pridx, off_t ofs);
+
struct revindex_entry *find_pack_revindex(struct packed_git *p, off_t ofs);
-void discard_revindex(void);
#endif
return sha1fd(fd, *pack_tmp_name);
}
-void finish_tmp_packfile(char *name_buffer,
+void finish_tmp_packfile(struct strbuf *name_buffer,
const char *pack_tmp_name,
struct pack_idx_entry **written_list,
uint32_t nr_written,
unsigned char sha1[])
{
const char *idx_tmp_name;
- char *end_of_name_prefix = strrchr(name_buffer, 0);
+ int basename_len = name_buffer->len;
if (adjust_shared_perm(pack_tmp_name))
die_errno("unable to make temporary pack file readable");
if (adjust_shared_perm(idx_tmp_name))
die_errno("unable to make temporary index file readable");
- sprintf(end_of_name_prefix, "%s.pack", sha1_to_hex(sha1));
- free_pack_by_name(name_buffer);
+ strbuf_addf(name_buffer, "%s.pack", sha1_to_hex(sha1));
+ free_pack_by_name(name_buffer->buf);
- if (rename(pack_tmp_name, name_buffer))
+ if (rename(pack_tmp_name, name_buffer->buf))
die_errno("unable to rename temporary pack file");
- sprintf(end_of_name_prefix, "%s.idx", sha1_to_hex(sha1));
- if (rename(idx_tmp_name, name_buffer))
+ strbuf_setlen(name_buffer, basename_len);
+
+ strbuf_addf(name_buffer, "%s.idx", sha1_to_hex(sha1));
+ if (rename(idx_tmp_name, name_buffer->buf))
die_errno("unable to rename temporary index file");
+ strbuf_setlen(name_buffer, basename_len);
+
free((void *)idx_tmp_name);
}
extern int read_pack_header(int fd, struct pack_header *);
extern struct sha1file *create_tmp_packfile(char **pack_tmp_name);
-extern void finish_tmp_packfile(char *name_buffer, const char *pack_tmp_name, struct pack_idx_entry **written_list, uint32_t nr_written, struct pack_idx_option *pack_idx_opts, unsigned char sha1[]);
+extern void finish_tmp_packfile(struct strbuf *name_buffer, const char *pack_tmp_name, struct pack_idx_entry **written_list, uint32_t nr_written, struct pack_idx_option *pack_idx_opts, unsigned char sha1[]);
#endif
const struct option *options)
{
const struct option *all_opts = options;
- const char *arg_end = strchr(arg, '=');
+ const char *arg_end = strchrnul(arg, '=');
const struct option *abbrev_option = NULL, *ambiguous_option = NULL;
int abbrev_flags = 0, ambiguous_flags = 0;
- if (!arg_end)
- arg_end = arg + strlen(arg);
-
for (; options->type != OPTION_END; options++) {
const char *rest, *long_name = options->long_name;
int flags = 0, opt_flags = 0;
default:
; /* ok. (usually accepts an argument) */
}
+ if (opts->argh &&
+ strcspn(opts->argh, " _") != strlen(opts->argh))
+ err |= optbug(opts, "multi-word argh should use dash to separate words");
}
if (err)
exit(128);
{ OPTION_CALLBACK, (s), (l), (v), N_("time"),(h), 0, \
parse_opt_approxidate_cb }
#define OPT_EXPIRY_DATE(s, l, v, h) \
- { OPTION_CALLBACK, (s), (l), (v), N_("expiry date"),(h), 0, \
+ { OPTION_CALLBACK, (s), (l), (v), N_("expiry-date"),(h), 0, \
parse_opt_expiry_date_cb }
#define OPT_CALLBACK(s, l, v, a, h, f) \
{ OPTION_CALLBACK, (s), (l), (v), (a), (h), 0, (f) }
ent = &bucket->bucket[bucket->nr++];
hashcpy(ent->patch_id, sha1);
- if (ids->alloc <= ids->nr) {
- ids->alloc = alloc_nr(ids->nr);
- ids->table = xrealloc(ids->table, sizeof(ent) * ids->alloc);
- }
+ ALLOC_GROW(ids->table, ids->nr + 1, ids->alloc);
if (pos < ids->nr)
memmove(ids->table + pos + 1, ids->table + pos,
sizeof(ent) * (ids->nr - pos));
char *expand_user_path(const char *path)
{
struct strbuf user_path = STRBUF_INIT;
- const char *first_slash = strchrnul(path, '/');
const char *to_copy = path;
if (path == NULL)
goto return_null;
if (path[0] == '~') {
+ const char *first_slash = strchrnul(path, '/');
const char *username = path + 1;
size_t username_len = first_slash - username;
if (username_len == 0) {
return;
for (i = 0; i < active_nr; i++) {
const struct cache_entry *ce = active_cache[i];
- match_pathspec_depth(pathspec, ce->name, ce_namelen(ce), 0, seen);
+ ce_path_match(ce, pathspec, seen);
}
}
#: remote.c:1875
msgid " (use \"git branch --unset-upstream\" to fixup)\n"
-msgstr " (utilisez \"git branch -unset-upstream\" pour corriger)\n"
+msgstr " (utilisez \"git branch --unset-upstream\" pour corriger)\n"
#: remote.c:1878
#, c-format
#: builtin/branch.c:1027
#, c-format
msgid " git branch --set-upstream-to %s\n"
-msgstr " git branch -set-upstream-to %s\n"
+msgstr " git branch --set-upstream-to %s\n"
#: builtin/bundle.c:47
#, c-format
*/
#include "cache.h"
#include "pathspec.h"
+#include "dir.h"
#ifdef NO_PTHREADS
static void preload_index(struct index_state *index,
continue;
if (ce_uptodate(ce))
continue;
- if (!ce_path_match(ce, &p->pathspec))
+ if (!ce_path_match(ce, &p->pathspec, NULL))
continue;
if (threaded_has_symlink_leading_path(&cache, ce->name, ce_namelen(ce)))
continue;
enum date_mode mode)
{
unsigned long date = 0;
- int tz = 0;
+ long tz = 0;
if (ident->date_begin && ident->date_end)
date = strtoul(ident->date_begin, NULL, 10);
- if (ident->tz_begin && ident->tz_end)
- tz = strtol(ident->tz_begin, NULL, 10);
+ if (date_overflows(date))
+ date = 0;
+ else {
+ if (ident->tz_begin && ident->tz_end)
+ tz = strtol(ident->tz_begin, NULL, 10);
+ if (tz >= INT_MAX || tz <= INT_MIN)
+ tz = 0;
+ }
return show_date(date, tz, mode);
}
const char *line = msg;
while (line) {
- const char *eol = strchr(line, '\n'), *next;
+ const char *eol = strchrnul(line, '\n'), *next;
if (line == eol)
return NULL;
- if (!eol) {
+ if (!*eol) {
warning("malformed commit (header is missing newline): %s",
sha1_to_hex(commit->object.sha1));
- eol = line + strlen(line);
next = NULL;
} else
next = eol + 1;
*/
#include "git-compat-util.h"
+#include "gettext.h"
#include "progress.h"
#include "strbuf.h"
void stop_progress(struct progress **p_progress)
{
- stop_progress_msg(p_progress, "done");
+ stop_progress_msg(p_progress, _("done"));
}
void stop_progress_msg(struct progress **p_progress, const char *msg)
#include "strbuf.h"
#include "varint.h"
-static struct cache_entry *refresh_cache_entry(struct cache_entry *ce, int really);
+static struct cache_entry *refresh_cache_entry(struct cache_entry *ce,
+ unsigned int options);
/* Mask for the name length in ce_flags in the on-disk index */
struct cache_entry *old = istate->cache[nr];
remove_name_hash(istate, old);
+ free(old);
set_index_entry(istate, nr, ce);
istate->cache_changed = 1;
}
new = xmalloc(cache_entry_size(namelen));
copy_cache_entry(new, old);
- new->ce_flags &= ~CE_STATE_MASK;
+ new->ce_flags &= ~CE_HASHED;
new->ce_namelen = namelen;
memcpy(new->name, new_name, namelen + 1);
record_resolve_undo(istate, ce);
remove_name_hash(istate, ce);
+ free(ce);
istate->cache_changed = 1;
istate->cache_nr--;
if (pos >= istate->cache_nr)
unsigned int i, j;
for (i = j = 0; i < istate->cache_nr; i++) {
- if (ce_array[i]->ce_flags & CE_REMOVE)
+ if (ce_array[i]->ce_flags & CE_REMOVE) {
remove_name_hash(istate, ce_array[i]);
+ free(ce_array[i]);
+ }
else
ce_array[j++] = ce_array[i];
}
return new;
}
-static void record_intent_to_add(struct cache_entry *ce)
+void set_object_name_for_intent_to_add_entry(struct cache_entry *ce)
{
unsigned char sha1[20];
if (write_sha1_file("", 0, blob_type, sha1))
if (index_path(ce->sha1, path, st, HASH_WRITE_OBJECT))
return error("unable to index file %s", path);
} else
- record_intent_to_add(ce);
+ set_object_name_for_intent_to_add_entry(ce);
if (ignore_case && alias && different_name(ce, alias))
ce = create_alias_ce(ce, alias);
struct cache_entry *make_cache_entry(unsigned int mode,
const unsigned char *sha1, const char *path, int stage,
- int refresh)
+ unsigned int refresh_options)
{
int size, len;
struct cache_entry *ce;
ce->ce_namelen = len;
ce->ce_mode = create_ce_mode(mode);
- if (refresh)
- return refresh_cache_entry(ce, 0);
-
- return ce;
+ return refresh_cache_entry(ce, refresh_options);
}
int ce_same_name(const struct cache_entry *a, const struct cache_entry *b)
return ce_namelen(b) == len && !memcmp(a->name, b->name, len);
}
-int ce_path_match(const struct cache_entry *ce, const struct pathspec *pathspec)
-{
- return match_pathspec_depth(pathspec, ce->name, ce_namelen(ce), 0, NULL);
-}
-
/*
* We fundamentally don't like some paths: we don't want
* dot or dot-dot anywhere, and for obvious reasons don't
}
/* Make sure the array is big enough .. */
- if (istate->cache_nr == istate->cache_alloc) {
- istate->cache_alloc = alloc_nr(istate->cache_alloc);
- istate->cache = xrealloc(istate->cache,
- istate->cache_alloc * sizeof(*istate->cache));
- }
+ ALLOC_GROW(istate->cache, istate->cache_nr + 1, istate->cache_alloc);
/* Add it in.. */
istate->cache_nr++;
struct stat st;
struct cache_entry *updated;
int changed, size;
+ int refresh = options & CE_MATCH_REFRESH;
int ignore_valid = options & CE_MATCH_IGNORE_VALID;
int ignore_skip_worktree = options & CE_MATCH_IGNORE_SKIP_WORKTREE;
+ int ignore_missing = options & CE_MATCH_IGNORE_MISSING;
- if (ce_uptodate(ce))
+ if (!refresh || ce_uptodate(ce))
return ce;
/*
}
if (lstat(ce->name, &st) < 0) {
+ if (ignore_missing && errno == ENOENT)
+ return ce;
if (err)
*err = errno;
return NULL;
int ignore_submodules = (flags & REFRESH_IGNORE_SUBMODULES) != 0;
int first = 1;
int in_porcelain = (flags & REFRESH_IN_PORCELAIN);
- unsigned int options = really ? CE_MATCH_IGNORE_VALID : 0;
+ unsigned int options = (CE_MATCH_REFRESH |
+ (really ? CE_MATCH_IGNORE_VALID : 0) |
+ (not_new ? CE_MATCH_IGNORE_MISSING : 0));
const char *modified_fmt;
const char *deleted_fmt;
const char *typechange_fmt;
if (ignore_submodules && S_ISGITLINK(ce->ce_mode))
continue;
- if (pathspec &&
- !match_pathspec_depth(pathspec, ce->name, ce_namelen(ce), 0, seen))
+ if (pathspec && !ce_path_match(ce, pathspec, seen))
filtered = 1;
if (ce_stage(ce)) {
if (!new) {
const char *fmt;
- if (not_new && cache_errno == ENOENT)
- continue;
if (really && cache_errno == EINVAL) {
/* If we are doing --really-refresh that
* means the index is not valid anymore.
return has_errors;
}
-static struct cache_entry *refresh_cache_entry(struct cache_entry *ce, int really)
+static struct cache_entry *refresh_cache_entry(struct cache_entry *ce,
+ unsigned int options)
{
- return refresh_cache_ent(&the_index, ce, really, NULL, NULL);
+ return refresh_cache_ent(&the_index, ce, options, NULL, NULL);
}
#define INDEX_FORMAT_DEFAULT 3
+static int index_format_config(const char *var, const char *value, void *cb)
+{
+ unsigned int *version = cb;
+ if (!strcmp(var, "index.version")) {
+ *version = git_config_int(var, value);
+ return 0;
+ }
+ return 1;
+}
+
+static unsigned int get_index_format_default(void)
+{
+ char *envversion = getenv("GIT_INDEX_VERSION");
+ char *endp;
+ unsigned int version = INDEX_FORMAT_DEFAULT;
+
+ if (!envversion) {
+ git_config(index_format_config, &version);
+ if (version < INDEX_FORMAT_LB || INDEX_FORMAT_UB < version) {
+ warning(_("index.version set, but the value is invalid.\n"
+ "Using version %i"), INDEX_FORMAT_DEFAULT);
+ return INDEX_FORMAT_DEFAULT;
+ }
+ return version;
+ }
+
+ version = strtoul(envversion, &endp, 10);
+ if (*endp ||
+ version < INDEX_FORMAT_LB || INDEX_FORMAT_UB < version) {
+ warning(_("GIT_INDEX_VERSION set, but the value is invalid.\n"
+ "Using version %i"), INDEX_FORMAT_DEFAULT);
+ version = INDEX_FORMAT_DEFAULT;
+ }
+ return version;
+}
+
/*
* dev/ino/uid/gid/size are also just tracked to the low 32 bits
* Again - this is just a (very strong in practice) heuristic that
return read_index_from(istate, get_index_file());
}
-#ifndef NEEDS_ALIGNED_ACCESS
-#define ntoh_s(var) ntohs(var)
-#define ntoh_l(var) ntohl(var)
-#else
-static inline uint16_t ntoh_s_force_align(void *p)
-{
- uint16_t x;
- memcpy(&x, p, sizeof(x));
- return ntohs(x);
-}
-static inline uint32_t ntoh_l_force_align(void *p)
-{
- uint32_t x;
- memcpy(&x, p, sizeof(x));
- return ntohl(x);
-}
-#define ntoh_s(var) ntoh_s_force_align(&(var))
-#define ntoh_l(var) ntoh_l_force_align(&(var))
-#endif
-
static struct cache_entry *cache_entry_from_ondisk(struct ondisk_cache_entry *ondisk,
unsigned int flags,
const char *name,
{
struct cache_entry *ce = xmalloc(cache_entry_size(len));
- ce->ce_stat_data.sd_ctime.sec = ntoh_l(ondisk->ctime.sec);
- ce->ce_stat_data.sd_mtime.sec = ntoh_l(ondisk->mtime.sec);
- ce->ce_stat_data.sd_ctime.nsec = ntoh_l(ondisk->ctime.nsec);
- ce->ce_stat_data.sd_mtime.nsec = ntoh_l(ondisk->mtime.nsec);
- ce->ce_stat_data.sd_dev = ntoh_l(ondisk->dev);
- ce->ce_stat_data.sd_ino = ntoh_l(ondisk->ino);
- ce->ce_mode = ntoh_l(ondisk->mode);
- ce->ce_stat_data.sd_uid = ntoh_l(ondisk->uid);
- ce->ce_stat_data.sd_gid = ntoh_l(ondisk->gid);
- ce->ce_stat_data.sd_size = ntoh_l(ondisk->size);
+ ce->ce_stat_data.sd_ctime.sec = get_be32(&ondisk->ctime.sec);
+ ce->ce_stat_data.sd_mtime.sec = get_be32(&ondisk->mtime.sec);
+ ce->ce_stat_data.sd_ctime.nsec = get_be32(&ondisk->ctime.nsec);
+ ce->ce_stat_data.sd_mtime.nsec = get_be32(&ondisk->mtime.nsec);
+ ce->ce_stat_data.sd_dev = get_be32(&ondisk->dev);
+ ce->ce_stat_data.sd_ino = get_be32(&ondisk->ino);
+ ce->ce_mode = get_be32(&ondisk->mode);
+ ce->ce_stat_data.sd_uid = get_be32(&ondisk->uid);
+ ce->ce_stat_data.sd_gid = get_be32(&ondisk->gid);
+ ce->ce_stat_data.sd_size = get_be32(&ondisk->size);
ce->ce_flags = flags & ~CE_NAMEMASK;
ce->ce_namelen = len;
hashcpy(ce->sha1, ondisk->sha1);
unsigned int flags;
/* On-disk flags are just 16 bits */
- flags = ntoh_s(ondisk->flags);
+ flags = get_be16(&ondisk->flags);
len = flags & CE_NAMEMASK;
if (flags & CE_EXTENDED) {
struct ondisk_cache_entry_extended *ondisk2;
int extended_flags;
ondisk2 = (struct ondisk_cache_entry_extended *)ondisk;
- extended_flags = ntoh_s(ondisk2->flags2) << 16;
+ extended_flags = get_be16(&ondisk2->flags2) << 16;
/* We do not yet understand any bit out of CE_EXTENDED_FLAGS */
if (extended_flags & ~CE_EXTENDED_FLAGS)
die("Unknown index entry format %08x", extended_flags);
}
if (!istate->version)
- istate->version = INDEX_FORMAT_DEFAULT;
+ istate->version = get_index_format_default();
/* demote version 3 to version 2 when the latter suffices */
if (istate->version == 3 || istate->version == 2)
new_ce->ce_mode = ce->ce_mode;
if (add_index_entry(istate, new_ce, 0))
return error("%s: cannot drop to stage #0",
- ce->name);
+ new_ce->name);
i = index_name_pos(istate, new_ce->name, len);
}
return unmerged;
struct complete_reflogs *array = cb_data;
struct reflog_info *item;
- if (array->nr >= array->alloc) {
- array->alloc = alloc_nr(array->nr + 1);
- array->items = xrealloc(array->items, array->alloc *
- sizeof(struct reflog_info));
- }
+ ALLOC_GROW(array->items, array->nr + 1, array->alloc);
item = array->items + array->nr;
- memcpy(item->osha1, osha1, 20);
- memcpy(item->nsha1, nsha1, 20);
+ hashcpy(item->osha1, osha1);
+ hashcpy(item->nsha1, nsha1);
item->email = xstrdup(email);
item->timestamp = timestamp;
item->tz = tz;
struct commit_info_lifo *lifo)
{
struct commit_info *info;
- if (lifo->nr >= lifo->alloc) {
- lifo->alloc = alloc_nr(lifo->nr + 1);
- lifo->items = xrealloc(lifo->items,
- lifo->alloc * sizeof(struct commit_info));
- }
+ ALLOC_GROW(lifo->items, lifo->nr + 1, lifo->alloc);
info = lifo->items + lifo->nr;
info->commit = commit;
info->util = util;
if (ref == NULL)
return -1;
- memcpy(sha1, ref->u.value.sha1, 20);
+ hashcpy(sha1, ref->u.value.sha1);
return 0;
}
void *data)
{
struct ref_filter *filter = (struct ref_filter *)data;
- if (fnmatch(filter->pattern, refname, 0))
+ if (wildmatch(filter->pattern, refname, 0, NULL))
return 0;
return filter->fn(refname, sha1, flags, filter->cb_data);
}
if (!results)
results = &results_buf;
- slot->results = results;
- slot->curl_result = curl_easy_perform(slot->curl);
- finish_active_slot(slot);
+ err = run_one_slot(slot, results);
- err = handle_curl_result(results);
if (err != HTTP_OK && err != HTTP_REAUTH) {
error("RPC failed; result=%d, HTTP code = %ld",
results->curl_result, results->http_code);
size_t len;
while (*msg) {
- end = strchr(msg, '\n');
- len = end ? end - msg : strlen(msg);
+ end = strchrnul(msg, '\n');
+ len = end - msg;
key = "Revision-number: ";
if (starts_with(msg, key)) {
static struct branch *current_branch;
static const char *default_remote_name;
+static const char *branch_pushremote_name;
static const char *pushremote_name;
static int explicit_default_remote_name;
}
} else if (!strcmp(subkey, ".pushremote")) {
if (branch == current_branch)
- if (git_config_string(&pushremote_name, key, value))
+ if (git_config_string(&branch_pushremote_name, key, value))
return -1;
} else if (!strcmp(subkey, ".merge")) {
if (!value)
make_branch(head_ref + strlen("refs/heads/"), 0);
}
git_config(handle_config, NULL);
+ if (branch_pushremote_name) {
+ free((char *)pushremote_name);
+ pushremote_name = branch_pushremote_name;
+ }
alias_all_urls();
}
return ret;
}
+static void query_refspecs_multiple(struct refspec *refs, int ref_count, struct refspec *query, struct string_list *results)
+{
+ int i;
+ int find_src = !query->src;
+
+ if (find_src && !query->dst)
+ error("query_refspecs_multiple: need either src or dst");
+
+ for (i = 0; i < ref_count; i++) {
+ struct refspec *refspec = &refs[i];
+ const char *key = find_src ? refspec->dst : refspec->src;
+ const char *value = find_src ? refspec->src : refspec->dst;
+ const char *needle = find_src ? query->dst : query->src;
+ char **result = find_src ? &query->src : &query->dst;
+
+ if (!refspec->dst)
+ continue;
+ if (refspec->pattern) {
+ if (match_name_with_pattern(key, needle, value, result))
+ string_list_append_nodup(results, *result);
+ } else if (!strcmp(needle, key)) {
+ string_list_append(results, value);
+ }
+ }
+}
+
int query_refspecs(struct refspec *refs, int ref_count, struct refspec *query)
{
int i;
}
}
if (!matched) {
- *matched_ref = matched_weak;
+ if (matched_ref)
+ *matched_ref = matched_weak;
return weak_match;
}
else {
- *matched_ref = matched;
+ if (matched_ref)
+ *matched_ref = matched;
return match;
}
}
return ref;
}
-static struct ref *try_explicit_object_name(const char *name)
+static int try_explicit_object_name(const char *name,
+ struct ref **match)
{
unsigned char sha1[20];
- struct ref *ref;
- if (!*name)
- return alloc_delete_ref();
+ if (!*name) {
+ if (match)
+ *match = alloc_delete_ref();
+ return 0;
+ }
+
if (get_sha1(name, sha1))
- return NULL;
- ref = alloc_ref(name);
- hashcpy(ref->new_sha1, sha1);
- return ref;
+ return -1;
+
+ if (match) {
+ *match = alloc_ref(name);
+ hashcpy((*match)->new_sha1, sha1);
+ }
+ return 0;
}
static struct ref *make_linked_ref(const char *name, struct ref ***tail)
return strbuf_detach(&buf, NULL);
}
+static int match_explicit_lhs(struct ref *src,
+ struct refspec *rs,
+ struct ref **match,
+ int *allocated_match)
+{
+ switch (count_refspec_match(rs->src, src, match)) {
+ case 1:
+ if (allocated_match)
+ *allocated_match = 0;
+ return 0;
+ case 0:
+ /* The source could be in the get_sha1() format,
+ * not a reference name. :refs/other is a
+ * way to delete the 'other' ref at the remote end.
+ */
+ if (try_explicit_object_name(rs->src, match) < 0)
+ return error("src refspec %s does not match any.", rs->src);
+ if (allocated_match)
+ *allocated_match = 1;
+ return 0;
+ default:
+ return error("src refspec %s matches more than one.", rs->src);
+ }
+}
+
static int match_explicit(struct ref *src, struct ref *dst,
struct ref ***dst_tail,
struct refspec *rs)
{
struct ref *matched_src, *matched_dst;
- int copy_src;
+ int allocated_src;
const char *dst_value = rs->dst;
char *dst_guess;
return 0;
matched_src = matched_dst = NULL;
- switch (count_refspec_match(rs->src, src, &matched_src)) {
- case 1:
- copy_src = 1;
- break;
- case 0:
- /* The source could be in the get_sha1() format
- * not a reference name. :refs/other is a
- * way to delete 'other' ref at the remote end.
- */
- matched_src = try_explicit_object_name(rs->src);
- if (!matched_src)
- return error("src refspec %s does not match any.", rs->src);
- copy_src = 0;
- break;
- default:
- return error("src refspec %s matches more than one.", rs->src);
- }
+ if (match_explicit_lhs(src, rs, &matched_src, &allocated_src) < 0)
+ return -1;
if (!dst_value) {
unsigned char sha1[20];
return error("dst ref %s receives from more than one src.",
matched_dst->name);
else {
- matched_dst->peer_ref = copy_src ? copy_ref(matched_src) : matched_src;
+ matched_dst->peer_ref = allocated_src ?
+ matched_src :
+ copy_ref(matched_src);
matched_dst->force = rs->force;
}
return 0;
sort_string_list(ref_index);
}
+/*
+ * Given only the set of local refs, sanity-check the set of push
+ * refspecs. We can't catch all errors that match_push_refs would,
+ * but we can catch some errors early before even talking to the
+ * remote side.
+ */
+int check_push_refs(struct ref *src, int nr_refspec, const char **refspec_names)
+{
+ struct refspec *refspec = parse_push_refspec(nr_refspec, refspec_names);
+ int ret = 0;
+ int i;
+
+ for (i = 0; i < nr_refspec; i++) {
+ struct refspec *rs = refspec + i;
+
+ if (rs->pattern || rs->matching)
+ continue;
+
+ ret |= match_explicit_lhs(src, rs, NULL, NULL);
+ }
+
+ free_refspec(nr_refspec, refspec);
+ return ret;
+}
+
/*
* Given the set of refs the local repository has, the set of refs the
* remote repository has, and the refspec used for push, determine
const unsigned char *sha1, int flags, void *cb_data)
{
struct stale_heads_info *info = cb_data;
+ struct string_list matches = STRING_LIST_INIT_DUP;
struct refspec query;
+ int i, stale = 1;
memset(&query, 0, sizeof(struct refspec));
query.dst = (char *)refname;
- if (query_refspecs(info->refs, info->ref_count, &query))
- return 0; /* No matches */
+ query_refspecs_multiple(info->refs, info->ref_count, &query, &matches);
+ if (matches.nr == 0)
+ goto clean_exit; /* No matches */
/*
* If we did find a suitable refspec and it's not a symref and
* it's not in the list of refs that currently exist in that
- * remote we consider it to be stale.
+ * remote, we consider it to be stale. In order to deal with
+ * overlapping refspecs, we need to go over all of the
+ * matching refs.
*/
- if (!((flags & REF_ISSYMREF) ||
- string_list_has_string(info->ref_names, query.src))) {
+ if (flags & REF_ISSYMREF)
+ goto clean_exit;
+
+ for (i = 0; stale && i < matches.nr; i++)
+ if (string_list_has_string(info->ref_names, matches.items[i].string))
+ stale = 0;
+
+ if (stale) {
struct ref *ref = make_linked_ref(refname, &info->stale_refs_tail);
hashcpy(ref->new_sha1, sha1);
}
- free(query.src);
+clean_exit:
+ string_list_clear(&matches, 0);
return 0;
}
char *apply_refspecs(struct refspec *refspecs, int nr_refspec,
const char *name);
+int check_push_refs(struct ref *src, int nr_refspec, const char **refspec);
int match_push_refs(struct ref *src, struct ref **dst,
int nr_refspec, const char **refspec, int all);
void set_ref_status_for_push(struct ref *remote_refs, int send_mirror,
#include "refs.h"
#include "commit.h"
+/*
+ * An array of replacements. The array is kept sorted by the original
+ * sha1.
+ */
static struct replace_object {
- unsigned char sha1[2][20];
+ unsigned char original[20];
+ unsigned char replacement[20];
} **replace_object;
static int replace_object_alloc, replace_object_nr;
static const unsigned char *replace_sha1_access(size_t index, void *table)
{
struct replace_object **replace = table;
- return replace[index]->sha1[0];
+ return replace[index]->original;
}
static int replace_object_pos(const unsigned char *sha1)
static int register_replace_object(struct replace_object *replace,
int ignore_dups)
{
- int pos = replace_object_pos(replace->sha1[0]);
+ int pos = replace_object_pos(replace->original);
if (0 <= pos) {
if (ignore_dups)
return 1;
}
pos = -pos - 1;
- if (replace_object_alloc <= ++replace_object_nr) {
- replace_object_alloc = alloc_nr(replace_object_alloc);
- replace_object = xrealloc(replace_object,
- sizeof(*replace_object) *
- replace_object_alloc);
- }
+ ALLOC_GROW(replace_object, replace_object_nr + 1, replace_object_alloc);
+ replace_object_nr++;
if (pos < replace_object_nr)
memmove(replace_object + pos + 1,
replace_object + pos,
const char *hash = slash ? slash + 1 : refname;
struct replace_object *repl_obj = xmalloc(sizeof(*repl_obj));
- if (strlen(hash) != 40 || get_sha1_hex(hash, repl_obj->sha1[0])) {
+ if (strlen(hash) != 40 || get_sha1_hex(hash, repl_obj->original)) {
free(repl_obj);
warning("bad replace ref name: %s", refname);
return 0;
}
/* Copy sha1 from the read ref */
- hashcpy(repl_obj->sha1[1], sha1);
+ hashcpy(repl_obj->replacement, sha1);
/* Register new object */
if (register_replace_object(repl_obj, 1))
for_each_replace_ref(register_replace_ref, NULL);
replace_object_prepared = 1;
if (!replace_object_nr)
- read_replace_refs = 0;
+ check_replace_refs = 0;
}
/* We allow "recursive" replacement. Only within reason, though */
#define MAXREPLACEDEPTH 5
+/*
+ * If a replacement for object sha1 has been set up, return the
+ * replacement object's name (replaced recursively, if necessary).
+ * The return value is either sha1 or a pointer to a
+ * permanently-allocated value. This function always respects replace
+ * references, regardless of the value of check_replace_refs.
+ */
const unsigned char *do_lookup_replace_object(const unsigned char *sha1)
{
int pos, depth = MAXREPLACEDEPTH;
pos = replace_object_pos(cur);
if (0 <= pos)
- cur = replace_object[pos]->sha1[1];
+ cur = replace_object[pos]->replacement;
} while (0 <= pos);
return cur;
find_conflict(&conflict);
for (i = 0; i < conflict.nr; i++) {
struct string_list_item *it = &conflict.items[i];
- if (!match_pathspec_depth(pathspec, it->string, strlen(it->string),
- 0, NULL))
+ if (!match_pathspec(pathspec, it->string,
+ strlen(it->string), 0, NULL, 0))
continue;
rerere_forget_one_path(it->string, &merge_rr);
}
struct string_list_item *item;
struct resolve_undo_info *ru;
int i, err = 0, matched;
+ char *name;
if (!istate->resolve_undo)
return pos;
if (!ru)
return pos;
matched = ce->ce_flags & CE_MATCHED;
+ name = xstrdup(ce->name);
remove_index_entry_at(istate, pos);
for (i = 0; i < 3; i++) {
struct cache_entry *nce;
if (!ru->mode[i])
continue;
nce = make_cache_entry(ru->mode[i], ru->sha1[i],
- ce->name, i + 1, 0);
+ name, i + 1, 0);
if (matched)
nce->ce_flags |= CE_MATCHED;
if (add_index_entry(istate, nce, ADD_CACHE_OK_TO_ADD)) {
err = 1;
- error("cannot unmerge '%s'", ce->name);
+ error("cannot unmerge '%s'", name);
}
}
+ free(name);
if (err)
return pos;
free(ru);
for (i = 0; i < istate->cache_nr; i++) {
const struct cache_entry *ce = istate->cache[i];
- if (!match_pathspec_depth(pathspec, ce->name, ce_namelen(ce), 0, NULL))
+ if (!ce_path_match(ce, pathspec, NULL))
continue;
i = unmerge_index_entry_at(istate, i);
}
#include "line-log.h"
#include "mailmap.h"
#include "commit-slab.h"
+#include "dir.h"
volatile show_early_output_fn_t show_early_output;
static int rev_same_tree_as_empty(struct rev_info *revs, struct commit *commit)
{
int retval;
- void *tree;
- unsigned long size;
- struct tree_desc empty, real;
struct tree *t1 = commit->tree;
if (!t1)
return 0;
- tree = read_object_with_reference(t1->object.sha1, tree_type, &size, NULL);
- if (!tree)
- return 0;
- init_tree_desc(&real, tree, size);
- init_tree_desc(&empty, "", 0);
-
tree_difference = REV_TREE_SAME;
DIFF_OPT_CLR(&revs->pruning, HAS_CHANGES);
- retval = diff_tree(&empty, &real, "", &revs->pruning);
- free(tree);
+ retval = diff_tree_sha1(NULL, t1->object.sha1, "", &revs->pruning);
return retval >= 0 && (tree_difference == REV_TREE_SAME);
}
return 0;
commit->object.flags |= ADDED;
+ if (revs->include_check &&
+ !revs->include_check(commit, revs->include_check_data))
+ return 0;
+
/*
* If the commit is uninteresting, don't try to
* prune parents - we want the maximal uninteresting
if (!ref_excludes)
return 0;
for_each_string_list_item(item, ref_excludes) {
- if (!fnmatch(item->string, path, 0))
+ if (!wildmatch(item->string, path, 0, NULL))
return 1;
}
return 0;
const struct cache_entry *ce = active_cache[i];
if (!ce_stage(ce))
continue;
- if (ce_path_match(ce, &revs->prune_data)) {
+ if (ce_path_match(ce, &revs->prune_data, NULL)) {
prune_num++;
prune = xrealloc(prune, sizeof(*prune) * prune_num);
prune[prune_num-2] = ce->name;
{
struct strbuf sb;
int seen_dashdash = 0;
+ int save_warning;
+
+ save_warning = warn_on_object_refname_ambiguity;
+ warn_on_object_refname_ambiguity = 0;
strbuf_init(&sb, 1000);
while (strbuf_getwholeline(&sb, stdin, '\n') != EOF) {
}
if (seen_dashdash)
read_pathspec_from_stdin(revs, &sb, prune);
+
strbuf_release(&sb);
+ warn_on_object_refname_ambiguity = save_warning;
}
static void add_grep(struct rev_info *revs, const char *ptn, enum grep_pat_token what)
revs->notes_opt.use_default_notes = 1;
} else if (!strcmp(arg, "--show-signature")) {
revs->show_signature = 1;
+ } else if (!strcmp(arg, "--show-linear-break") ||
+ starts_with(arg, "--show-linear-break=")) {
+ if (starts_with(arg, "--show-linear-break="))
+ revs->break_bar = xstrdup(arg + 20);
+ else
+ revs->break_bar = " ..........";
+ revs->track_linear = 1;
+ revs->track_first_time = 1;
} else if (starts_with(arg, "--show-notes=") ||
starts_with(arg, "--notes=")) {
struct strbuf buf = STRBUF_INIT;
unkv[(*unkc)++] = arg;
return opts;
}
+ if (revs->graph && revs->track_linear)
+ die("--show-linear-break and --graph are incompatible");
return 1;
}
return action;
}
+static void track_linear(struct rev_info *revs, struct commit *commit)
+{
+ if (revs->track_first_time) {
+ revs->linear = 1;
+ revs->track_first_time = 0;
+ } else {
+ struct commit_list *p;
+ for (p = revs->previous_parents; p; p = p->next)
+ if (p->item == NULL || /* first commit */
+ !hashcmp(p->item->object.sha1, commit->object.sha1))
+ break;
+ revs->linear = p != NULL;
+ }
+ if (revs->reverse) {
+ if (revs->linear)
+ commit->object.flags |= TRACK_LINEAR;
+ }
+ free_commit_list(revs->previous_parents);
+ revs->previous_parents = copy_commit_list(commit->parents);
+}
+
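track_linear() records whether the commit just emitted continues a linear run from the previously shown commit's parents, so that the output code can print the break bar between unrelated segments. A usage sketch (the custom bar string is illustrative):

    git log --oneline --show-linear-break
    git log --oneline --show-linear-break="----8<----"
    # note: --show-linear-break and --graph are mutually exclusive
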
static struct commit *get_revision_1(struct rev_info *revs)
{
if (!revs->commits)
die("Failed to simplify parents of commit %s",
sha1_to_hex(commit->object.sha1));
default:
+ if (revs->track_linear)
+ track_linear(revs, commit);
return commit;
}
} while (revs->commits);
revs->reverse_output_stage = 1;
}
- if (revs->reverse_output_stage)
- return pop_commit(&revs->commits);
+ if (revs->reverse_output_stage) {
+ c = pop_commit(&revs->commits);
+ if (revs->track_linear)
+ revs->linear = !!(c && c->object.flags & TRACK_LINEAR);
+ return c;
+ }
c = get_revision_internal(revs);
if (c && revs->graph)
graph_update(revs->graph, c);
- if (!c)
+ if (!c) {
free_saved_parents(revs);
+ if (revs->previous_parents) {
+ free_commit_list(revs->previous_parents);
+ revs->previous_parents = NULL;
+ }
+ }
return c;
}
#include "commit.h"
#include "diff.h"
+/* Remember to update object flag allocation in object.h */
#define SEEN (1u<<0)
#define UNINTERESTING (1u<<1)
#define TREESAME (1u<<2)
#define SYMMETRIC_LEFT (1u<<8)
#define PATCHSAME (1u<<9)
#define BOTTOM (1u<<10)
-#define ALL_REV_FLAGS ((1u<<11)-1)
+#define TRACK_LINEAR (1u<<26)
+#define ALL_REV_FLAGS (((1u<<11)-1) | TRACK_LINEAR)
#define DECORATE_SHORT_REFS 1
#define DECORATE_FULL_REFS 2
preserve_subject:1;
unsigned int disable_stdin:1;
unsigned int leak_pending:1;
+ /* --show-linear-break */
+ unsigned int track_linear:1,
+ track_first_time:1,
+ linear:1;
enum date_mode date_mode;
unsigned long min_age;
int min_parents;
int max_parents;
+ int (*include_check)(struct commit *, void *);
+ void *include_check_data;
/* diff info for patches and for paths limiting */
struct diff_options diffopt;
/* copies of the parent lists, for --full-diff display */
struct saved_parents *saved_parents_slab;
+
+ struct commit_list *previous_parents;
+ const char *break_bar;
};
extern int ref_excluded(struct string_list *, const char *path);
return path;
}
-int run_hook(const char *index_file, const char *name, ...)
+int run_hook_ve(const char *const *env, const char *name, va_list args)
{
struct child_process hook;
struct argv_array argv = ARGV_ARRAY_INIT;
- const char *p, *env[2];
- char index[PATH_MAX];
- va_list args;
+ const char *p;
int ret;
p = find_hook(name);
argv_array_push(&argv, p);
- va_start(args, name);
while ((p = va_arg(args, const char *)))
argv_array_push(&argv, p);
- va_end(args);
memset(&hook, 0, sizeof(hook));
hook.argv = argv.argv;
+ hook.env = env;
hook.no_stdin = 1;
hook.stdout_to_stderr = 1;
- if (index_file) {
- snprintf(index, sizeof(index), "GIT_INDEX_FILE=%s", index_file);
- env[0] = index;
- env[1] = NULL;
- hook.env = env;
- }
ret = run_command(&hook);
argv_array_clear(&argv);
return ret;
}
+
+int run_hook_le(const char *const *env, const char *name, ...)
+{
+ va_list args;
+ int ret;
+
+ va_start(args, name);
+ ret = run_hook_ve(env, name, args);
+ va_end(args);
+
+ return ret;
+}
+
+int run_hook_with_custom_index(const char *index_file, const char *name, ...)
+{
+ const char *hook_env[3] = { NULL };
+ char index[PATH_MAX];
+ va_list args;
+ int ret;
+
+ snprintf(index, sizeof(index), "GIT_INDEX_FILE=%s", index_file);
+ hook_env[0] = index;
+
+ va_start(args, name);
+ ret = run_hook_ve(hook_env, name, args);
+ va_end(args);
+
+ return ret;
+}
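
run_hook_with_custom_index() is kept only as a deprecated wrapper; callers that need extra environment variables should build their own env array and call run_hook_le(). Conceptually the wrapper amounts to the following sketch (the real invocation goes through run_command(), and the hook name here is illustrative):

    GIT_INDEX_FILE=/path/to/custom-index .git/hooks/pre-commit
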
extern char *find_hook(const char *name);
LAST_ARG_MUST_BE_NULL
-extern int run_hook(const char *index_file, const char *name, ...);
+extern int run_hook_le(const char *const *env, const char *name, ...);
+extern int run_hook_ve(const char *const *env, const char *name, va_list args);
+
+LAST_ARG_MUST_BE_NULL
+__attribute__((deprecated))
+extern int run_hook_with_custom_index(const char *index_file, const char *name, ...);
#define RUN_COMMAND_NO_STDIN 1
#define RUN_GIT_CMD 2 /*If this is to be git sub-command */
{
struct argv_array array;
int rc;
+ char *gpg_sign;
argv_array_init(&array);
argv_array_push(&array, "commit");
argv_array_push(&array, "-n");
+ if (opts->gpg_sign) {
+ gpg_sign = xmalloc(3 + strlen(opts->gpg_sign));
+ sprintf(gpg_sign, "-S%s", opts->gpg_sign);
+ argv_array_push(&array, gpg_sign);
+ free(gpg_sign);
+ }
if (opts->signoff)
argv_array_push(&array, "-s");
if (!opts->edit) {
opts->mainline = git_config_int(key, value);
else if (!strcmp(key, "options.strategy"))
git_config_string(&opts->strategy, key, value);
+ else if (!strcmp(key, "options.gpg-sign"))
+ git_config_string(&opts->gpg_sign, key, value);
else if (!strcmp(key, "options.strategy-option")) {
ALLOC_GROW(opts->xopts, opts->xopts_nr + 1, opts->xopts_alloc);
opts->xopts[opts->xopts_nr++] = xstrdup(value);
}
if (opts->strategy)
git_config_set_in_file(opts_file, "options.strategy", opts->strategy);
+ if (opts->gpg_sign)
+ git_config_set_in_file(opts_file, "options.gpg-sign", opts->gpg_sign);
if (opts->xopts) {
int i;
for (i = 0; i < opts->xopts_nr; i++)
int mainline;
+ const char *gpg_sign;
+
/* Merge strategy */
const char *strategy;
const char **xopts;
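
Because the key is written out as options.gpg-sign in the sequencer state, signing survives an interrupted run. A usage sketch (the key id is a placeholder):

    git cherry-pick -S1234ABCD maint~3..maint
    # ... resolve conflicts, "git add" the results, then:
    git cherry-pick --continue    # the remaining commits are still signed with 1234ABCD
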
static int inside_git_dir = -1;
static int inside_work_tree = -1;
+/*
+ * The input parameter must contain an absolute path, and it must already be
+ * normalized.
+ *
+ * Find the part of an absolute path that lies inside the work tree by
+ * dereferencing symlinks outside the work tree, for example:
+ * /dir1/repo/dir2/file (work tree is /dir1/repo) -> dir2/file
+ * /dir/file (work tree is /) -> dir/file
+ * /dir/symlink1/symlink2 (symlink1 points to work tree) -> symlink2
+ * /dir/repolink/file (repolink points to /dir/repo) -> file
+ * /dir/repo (exactly equal to work tree) -> (empty string)
+ */
+static int abspath_part_inside_repo(char *path)
+{
+ size_t len;
+ size_t wtlen;
+ char *path0;
+ int off;
+ const char *work_tree = get_git_work_tree();
+
+ if (!work_tree)
+ return -1;
+ wtlen = strlen(work_tree);
+ len = strlen(path);
+ off = 0;
+
+ /* check if work tree is already the prefix */
+ if (wtlen <= len && !strncmp(path, work_tree, wtlen)) {
+ if (path[wtlen] == '/') {
+ memmove(path, path + wtlen + 1, len - wtlen);
+ return 0;
+ } else if (path[wtlen - 1] == '/' || path[wtlen] == '\0') {
+ /* work tree is the root, or the whole path */
+ memmove(path, path + wtlen, len - wtlen + 1);
+ return 0;
+ }
+ /* work tree might match beginning of a symlink to work tree */
+ off = wtlen;
+ }
+ path0 = path;
+ path += offset_1st_component(path) + off;
+
+ /* check each '/'-terminated level */
+ while (*path) {
+ path++;
+ if (*path == '/') {
+ *path = '\0';
+ if (strcmp(real_path(path0), work_tree) == 0) {
+ memmove(path0, path + 1, len - (path - path0));
+ return 0;
+ }
+ *path = '/';
+ }
+ }
+
+ /* check whole path */
+ if (strcmp(real_path(path0), work_tree) == 0) {
+ *path0 = '\0';
+ return 0;
+ }
+
+ return -1;
+}
+
/*
* Normalize "path", prepending the "prefix" for relative paths. If
* remaining_prefix is not NULL, return the actual prefix still
const char *orig = path;
char *sanitized;
if (is_absolute_path(orig)) {
- const char *temp = real_path(path);
- sanitized = xmalloc(len + strlen(temp) + 1);
- strcpy(sanitized, temp);
+ sanitized = xmalloc(strlen(path) + 1);
if (remaining_prefix)
*remaining_prefix = 0;
+ if (normalize_path_copy_len(sanitized, path, remaining_prefix)) {
+ free(sanitized);
+ return NULL;
+ }
+ if (abspath_part_inside_repo(sanitized)) {
+ free(sanitized);
+ return NULL;
+ }
} else {
sanitized = xmalloc(len + strlen(path) + 1);
if (len)
strcpy(sanitized + len, path);
if (remaining_prefix)
*remaining_prefix = len;
- }
- if (normalize_path_copy_len(sanitized, sanitized, remaining_prefix))
- goto error_out;
- if (is_absolute_path(orig)) {
- size_t root_len, len, total;
- const char *work_tree = get_git_work_tree();
- if (!work_tree)
- goto error_out;
- len = strlen(work_tree);
- root_len = offset_1st_component(work_tree);
- total = strlen(sanitized) + 1;
- if (strncmp(sanitized, work_tree, len) ||
- (len > root_len && sanitized[len] != '\0' && sanitized[len] != '/')) {
- error_out:
+ if (normalize_path_copy_len(sanitized, sanitized, remaining_prefix)) {
free(sanitized);
return NULL;
}
- if (sanitized[len] == '/')
- len++;
- memmove(sanitized, sanitized + len, total - len);
}
return sanitized;
}
if (fd > 2)
close(fd);
}
+
+int daemonize(void)
+{
+#ifdef NO_POSIX_GOODIES
+ errno = ENOSYS;
+ return -1;
+#else
+ switch (fork()) {
+ case 0:
+ break;
+ case -1:
+ die_errno("fork failed");
+ default:
+ exit(0);
+ }
+ if (setsid() == -1)
+ die_errno("setsid failed");
+ close(0);
+ close(1);
+ close(2);
+ sanitize_stdfds();
+ return 0;
+#endif
+}
qsort (slp->item, slp->nitems, sizeof (slp->item[0]), cmp_string);
}
-/* Test whether a string list contains a given string. */
-static inline int
-string_list_member (const string_list_ty *slp, const char *s)
-{
- size_t j;
-
- for (j = 0; j < slp->nitems; ++j)
- if (strcmp (slp->item[j], s) == 0)
- return 1;
- return 0;
-}
-
/* Test whether a sorted string list contains a given string. */
static int
sorted_string_list_member (const string_list_ty *slp, const char *s)
0
};
+/*
+ * A pointer to the last packed_git in which an object was found.
+ * When an object is sought, we look in this packfile first, because
+ * objects that are looked up at similar times are often in the same
+ * packfile as one another.
+ */
static struct packed_git *last_found_pack;
static struct cached_object *find_cached_object(const unsigned char *sha1)
}
}
-/*
- * NOTE! This returns a statically allocated buffer, so you have to be
- * careful about using it. Do an "xstrdup()" if you need to save the
- * filename.
- *
- * Also note that this returns the location for creating. Reading
- * SHA1 file can happen from any alternate directory listed in the
- * DB_ENVIRONMENT environment variable if it is not found in
- * the primary object database.
- */
-char *sha1_file_name(const unsigned char *sha1)
+const char *sha1_file_name(const unsigned char *sha1)
{
static char buf[PATH_MAX];
const char *objdir;
return buf;
}
+/*
+ * Return the name of the pack or index file with the specified sha1
+ * in its filename. *base and *name are scratch space that must be
+ * provided by the caller; "which" should be "pack" or "idx".
+ */
static char *sha1_get_pack_name(const unsigned char *sha1,
char **name, char **base, const char *which)
{
struct alternate_object_database *alt_odb_list;
static struct alternate_object_database **alt_odb_tail;
-static int git_open_noatime(const char *name);
-
/*
* Prepare alternate object database registry.
*
static int has_loose_object_local(const unsigned char *sha1)
{
- char *name = sha1_file_name(sha1);
- return !access(name, F_OK);
+ return !access(sha1_file_name(sha1), F_OK);
}
int has_loose_object_nonlocal(const unsigned char *sha1)
sz_fmt(pack_mapped), sz_fmt(peak_pack_mapped));
}
-static int check_packed_git_idx(const char *path, struct packed_git *p)
+/*
+ * Open and mmap the index file at path, perform a couple of
+ * consistency checks, then record its information to p. Return 0 on
+ * success.
+ */
+static int check_packed_git_idx(const char *path, struct packed_git *p)
{
void *idx_map;
struct pack_idx_header *hdr;
if (has_extension(de->d_name, ".idx") ||
has_extension(de->d_name, ".pack") ||
+ has_extension(de->d_name, ".bitmap") ||
has_extension(de->d_name, ".keep"))
string_list_append(&garbage, path);
else
void reprepare_packed_git(void)
{
- discard_revindex();
prepare_packed_git_run_once = 0;
prepare_packed_git();
}
return hashcmp(sha1, real_sha1) ? -1 : 0;
}
-static int git_open_noatime(const char *name)
+int git_open_noatime(const char *name)
{
static int sha1_file_open_flag = O_NOATIME;
static int stat_sha1_file(const unsigned char *sha1, struct stat *st)
{
- char *name = sha1_file_name(sha1);
struct alternate_object_database *alt;
- if (!lstat(name, st))
+ if (!lstat(sha1_file_name(sha1), st))
return 0;
prepare_alt_odb();
errno = ENOENT;
for (alt = alt_odb_list; alt; alt = alt->next) {
- name = alt->name;
- fill_sha1_path(name, sha1);
+ fill_sha1_path(alt->name, sha1);
if (!lstat(alt->base, st))
return 0;
}
static int open_sha1_file(const unsigned char *sha1)
{
int fd;
- char *name = sha1_file_name(sha1);
struct alternate_object_database *alt;
- fd = git_open_noatime(name);
+ fd = git_open_noatime(sha1_file_name(sha1));
if (fd >= 0)
return fd;
prepare_alt_odb();
errno = ENOENT;
for (alt = alt_odb_list; alt; alt = alt->next) {
- name = alt->name;
- fill_sha1_path(name, sha1);
+ fill_sha1_path(alt->name, sha1);
fd = git_open_noatime(alt->base);
if (fd >= 0)
return fd;
*final_size = size;
unuse_pack(&w_curs);
+
+ if (delta_stack != small_delta_stack)
+ free(delta_stack);
+
return data;
}
return 1;
}
+/*
+ * Iff a pack file contains the object named by sha1, return true and
+ * store its location to e.
+ */
static int find_pack_entry(const unsigned char *sha1, struct pack_entry *e)
{
struct packed_git *p;
return 1;
for (p = packed_git; p; p = p->next) {
- if (p == last_found_pack || !fill_pack_entry(sha1, e, p))
- continue;
+ if (p == last_found_pack)
+ continue; /* we already checked this one */
- last_found_pack = p;
- return 1;
+ if (fill_pack_entry(sha1, e, p)) {
+ last_found_pack = p;
+ return 1;
+ }
}
return 0;
}
hash_sha1_file(buf, len, typename(type), sha1);
if (has_sha1_file(sha1) || find_cached_object(sha1))
return 0;
- if (cached_object_alloc <= cached_object_nr) {
- cached_object_alloc = alloc_nr(cached_object_alloc);
- cached_objects = xrealloc(cached_objects,
- sizeof(*cached_objects) *
- cached_object_alloc);
- }
+ ALLOC_GROW(cached_objects, cached_object_nr + 1, cached_object_alloc);
co = &cached_objects[cached_object_nr++];
co->size = len;
co->type = type;
unsigned flag)
{
void *data;
- char *path;
const struct packed_git *p;
const unsigned char *repl = lookup_replace_object_extended(sha1, flag);
sha1_to_hex(repl), sha1_to_hex(sha1));
if (has_loose_object(repl)) {
- path = sha1_file_name(sha1);
+ const char *path = sha1_file_name(sha1);
+
die("loose object %s (stored in %s) is corrupt",
sha1_to_hex(repl), path);
}
git_zstream stream;
git_SHA_CTX c;
unsigned char parano_sha1[20];
- char *filename;
static char tmp_file[PATH_MAX];
+ const char *filename = sha1_file_name(sha1);
- filename = sha1_file_name(sha1);
fd = create_tmpfile(tmp_file, sizeof(tmp_file), filename);
if (fd < 0) {
if (errno == EACCES)
* For future extension, ':/!' is reserved. If you want to match a message
* beginning with a '!', you have to repeat the exclamation mark.
*/
+
+/* Remember to update object flag allocation in object.h */
#define ONELINE_SEEN (1u<<20)
static int handle_one_ref(const char *path,
#include "diff.h"
#include "revision.h"
#include "commit-slab.h"
+#include "sigchain.h"
static int is_shallow = -1;
-static struct stat shallow_stat;
+static struct stat_validity shallow_stat;
static char *alternate_shallow_file;
void set_alternate_shallow_file(const char *path, int override)
* shallow file should be used. We could just open it and it
* will likely fail. But let's do an explicit check instead.
*/
- if (!*path ||
- stat(path, &shallow_stat) ||
- (fp = fopen(path, "r")) == NULL) {
+ if (!*path || (fp = fopen(path, "r")) == NULL) {
+ stat_validity_clear(&shallow_stat);
is_shallow = 0;
return is_shallow;
}
+ stat_validity_update(&shallow_stat, fileno(fp));
is_shallow = 1;
while (fgets(buf, sizeof(buf), fp)) {
void check_shallow_file_for_update(void)
{
- struct stat st;
-
- if (!is_shallow)
- return;
- else if (is_shallow == -1)
+ if (is_shallow == -1)
die("BUG: shallow must be initialized by now");
- if (stat(git_path("shallow"), &st))
- die("shallow file was removed during fetch");
- else if (st.st_mtime != shallow_stat.st_mtime
-#ifdef USE_NSEC
- || ST_MTIME_NSEC(st) != ST_MTIME_NSEC(shallow_stat)
-#endif
- )
- die("shallow file was changed during fetch");
+ if (!stat_validity_check(&shallow_stat, git_path("shallow")))
+ die("shallow file has changed since we read it");
}
#define SEEN_ONLY 1
return write_shallow_commits_1(out, use_pack_protocol, extra, 0);
}
-char *setup_temporary_shallow(const struct sha1_array *extra)
+static struct strbuf temporary_shallow = STRBUF_INIT;
+
+static void remove_temporary_shallow(void)
+{
+ if (temporary_shallow.len) {
+ unlink_or_warn(temporary_shallow.buf);
+ strbuf_reset(&temporary_shallow);
+ }
+}
+
+static void remove_temporary_shallow_on_signal(int signo)
+{
+ remove_temporary_shallow();
+ sigchain_pop(signo);
+ raise(signo);
+}
+
+const char *setup_temporary_shallow(const struct sha1_array *extra)
{
+ static int installed_handler;
struct strbuf sb = STRBUF_INIT;
int fd;
+ if (temporary_shallow.len)
+ die("BUG: attempt to create two temporary shallow files");
+
if (write_shallow_commits(&sb, 0, extra)) {
- struct strbuf path = STRBUF_INIT;
- strbuf_addstr(&path, git_path("shallow_XXXXXX"));
- fd = xmkstemp(path.buf);
+ strbuf_addstr(&temporary_shallow, git_path("shallow_XXXXXX"));
+ fd = xmkstemp(temporary_shallow.buf);
+
+ if (!installed_handler) {
+ atexit(remove_temporary_shallow);
+ sigchain_push_common(remove_temporary_shallow_on_signal);
+ }
+
if (write_in_full(fd, sb.buf, sb.len) != sb.len)
die_errno("failed to write to %s",
- path.buf);
+ temporary_shallow.buf);
close(fd);
strbuf_release(&sb);
- return strbuf_detach(&path, NULL);
+ return temporary_shallow.buf;
}
/*
* is_repository_shallow() sees empty string as "no shallow
* file".
*/
- return xstrdup("");
+ return temporary_shallow.buf;
}
void setup_alternate_shallow(struct lock_file *shallow_lock,
struct strbuf sb = STRBUF_INIT;
int fd;
- check_shallow_file_for_update();
fd = hold_lock_file_for_update(shallow_lock, git_path("shallow"),
LOCK_DIE_ON_ERROR);
+ check_shallow_file_for_update();
if (write_shallow_commits(&sb, 0, extra)) {
if (write_in_full(fd, sb.buf, sb.len) != sb.len)
die_errno("failed to write to %s",
strbuf_release(&sb);
return;
}
- check_shallow_file_for_update();
fd = hold_lock_file_for_update(&shallow_lock, git_path("shallow"),
LOCK_DIE_ON_ERROR);
+ check_shallow_file_for_update();
if (write_shallow_commits_1(&sb, 0, NULL, SEEN_ONLY)) {
if (write_in_full(fd, sb.buf, sb.len) != sb.len)
die_errno("failed to write to %s",
return 0;
}
-int prefixcmp(const char *str, const char *prefix)
-{
- for (; ; str++, prefix++)
- if (!*prefix)
- return 0;
- else if (*str != *prefix)
- return (unsigned char)*prefix - (unsigned char)*str;
-}
-
int ends_with(const char *str, const char *suffix)
{
int len = strlen(str), suflen = strlen(suffix);
return !strcmp(str + len - suflen, suffix);
}
-int suffixcmp(const char *str, const char *suffix)
-{
- int len = strlen(str), suflen = strlen(suffix);
- if (len < suflen)
- return -1;
- else
- return strcmp(str + len - suflen, suffix);
-}
-
/*
* Used as the default ->buf value, so that people can always assume
* buf is non NULL and ->buf is NUL terminated even for a freshly
extern void strbuf_release(struct strbuf *);
extern char *strbuf_detach(struct strbuf *, size_t *);
extern void strbuf_attach(struct strbuf *, void *, size_t, size_t);
-static inline void strbuf_swap(struct strbuf *a, struct strbuf *b) {
+static inline void strbuf_swap(struct strbuf *a, struct strbuf *b)
+{
struct strbuf tmp = *a;
*a = *b;
*b = tmp;
}
/*----- strbuf size related -----*/
-static inline size_t strbuf_avail(const struct strbuf *sb) {
+static inline size_t strbuf_avail(const struct strbuf *sb)
+{
return sb->alloc ? sb->alloc - sb->len - 1 : 0;
}
extern void strbuf_grow(struct strbuf *, size_t);
-static inline void strbuf_setlen(struct strbuf *sb, size_t len) {
+static inline void strbuf_setlen(struct strbuf *sb, size_t len)
+{
if (len > (sb->alloc ? sb->alloc - 1 : 0))
die("BUG: strbuf_setlen() beyond buffer");
sb->len = len;
extern void strbuf_list_free(struct strbuf **);
/*----- add data in your buffer -----*/
-static inline void strbuf_addch(struct strbuf *sb, int c) {
+static inline void strbuf_addch(struct strbuf *sb, int c)
+{
strbuf_grow(sb, 1);
sb->buf[sb->len++] = c;
sb->buf[sb->len] = '\0';
extern void strbuf_add_commented_lines(struct strbuf *out, const char *buf, size_t size);
extern void strbuf_add(struct strbuf *, const void *, size_t);
-static inline void strbuf_addstr(struct strbuf *sb, const char *s) {
+static inline void strbuf_addstr(struct strbuf *sb, const char *s)
+{
strbuf_add(sb, s, strlen(s));
}
-static inline void strbuf_addbuf(struct strbuf *sb, const struct strbuf *sb2) {
+static inline void strbuf_addbuf(struct strbuf *sb, const struct strbuf *sb2)
+{
strbuf_grow(sb, sb2->len);
strbuf_add(sb, sb2->buf, sb2->len);
}
if (filter) {
/* Add "&& !is_null_stream_filter(filter)" for performance */
struct git_istream *nst = attach_stream_filter(st, filter);
- if (!nst)
+ if (!nst) {
close_istream(st);
+ return NULL;
+ }
st = nst;
}
void stage_updated_gitmodules(void)
{
- struct strbuf buf = STRBUF_INIT;
- struct stat st;
- int pos;
- struct cache_entry *ce;
- int namelen = strlen(".gitmodules");
-
- pos = cache_name_pos(".gitmodules", namelen);
- if (pos < 0) {
- warning(_("could not find .gitmodules in index"));
- return;
- }
- ce = active_cache[pos];
- ce->ce_flags = namelen;
- if (strbuf_read_file(&buf, ".gitmodules", 0) < 0)
- die(_("reading updated .gitmodules failed"));
- if (lstat(".gitmodules", &st) < 0)
- die_errno(_("unable to stat updated .gitmodules"));
- fill_stat_cache_info(ce, &st);
- ce->ce_mode = ce_mode_from_stat(ce, st.st_mode);
- if (remove_cache_entry_at(pos) < 0)
- die(_("unable to remove .gitmodules from index"));
- if (write_sha1_file(buf.buf, buf.len, blob_type, ce->sha1))
- die(_("adding updated .gitmodules failed"));
- if (add_cache_entry(ce, ADD_CACHE_OK_TO_ADD|ADD_CACHE_OK_TO_REPLACE))
+ if (add_file_to_cache(".gitmodules", 0))
die(_("staging updated .gitmodules failed"));
}
$(MAKE) aggregate-results-and-cleanup
prove: pre-clean $(TEST_LINT)
- @echo "*** prove ***"; GIT_CONFIG=.git/config $(PROVE) --exec '$(SHELL_PATH_SQ)' $(GIT_PROVE_OPTS) $(T) :: $(GIT_TEST_OPTS)
+ @echo "*** prove ***"; $(PROVE) --exec '$(SHELL_PATH_SQ)' $(GIT_PROVE_OPTS) $(T) :: $(GIT_TEST_OPTS)
$(MAKE) clean-except-prove-cache
$(T):
- @echo "*** $@ ***"; GIT_CONFIG=.git/config '$(SHELL_PATH_SQ)' $@ $(GIT_TEST_OPTS)
+ @echo "*** $@ ***"; '$(SHELL_PATH_SQ)' $@ $(GIT_TEST_OPTS)
pre-clean:
$(RM) -r '$(TEST_RESULTS_DIRECTORY_SQ)'
# stop_git_daemon
# test_done
-if test -z "$GIT_TEST_GIT_DAEMON"
+test_tristate GIT_TEST_GIT_DAEMON
+if test "$GIT_TEST_GIT_DAEMON" = false
then
- skip_all="git-daemon testing disabled (define GIT_TEST_GIT_DAEMON to enable)"
+ skip_all="git-daemon testing disabled (unset GIT_TEST_GIT_DAEMON to enable)"
test_done
fi
-LIB_GIT_DAEMON_PORT=${LIB_GIT_DAEMON_PORT-'8121'}
+LIB_GIT_DAEMON_PORT=${LIB_GIT_DAEMON_PORT-${this_test#t}}
GIT_DAEMON_PID=
GIT_DAEMON_DOCUMENT_ROOT_PATH="$PWD"/repo
kill "$GIT_DAEMON_PID"
wait "$GIT_DAEMON_PID"
trap 'die' EXIT
- error "git daemon failed to start"
+ test_skip_or_die $GIT_TEST_GIT_DAEMON \
+ "git daemon failed to start"
fi
}
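
With test_tristate, leaving GIT_TEST_GIT_DAEMON unset runs the daemon tests but merely skips them when the daemon cannot be started; setting it to true turns such a failure into a hard error, and false skips the tests outright. GIT_TEST_HTTPD now behaves the same way. For example (the test script name is illustrative):

    GIT_TEST_GIT_DAEMON=true  ./t5570-git-daemon.sh   # fail if the daemon cannot start
    GIT_TEST_GIT_DAEMON=false ./t5570-git-daemon.sh   # skip all daemon tests
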
# Copyright (c) 2008 Clemens Buchacher <drizzd@aon.at>
#
-if test -z "$GIT_TEST_HTTPD"
+test_tristate GIT_TEST_HTTPD
+if test "$GIT_TEST_HTTPD" = false
then
- skip_all="Network testing disabled (define GIT_TEST_HTTPD to enable)"
+ skip_all="Network testing disabled (unset GIT_TEST_HTTPD to enable)"
test_done
fi
esac
LIB_HTTPD_PATH=${LIB_HTTPD_PATH-"$DEFAULT_HTTPD_PATH"}
-LIB_HTTPD_PORT=${LIB_HTTPD_PORT-'8111'}
+LIB_HTTPD_PORT=${LIB_HTTPD_PORT-${this_test#t}}
TEST_PATH="$TEST_DIRECTORY"/lib-httpd
HTTPD_ROOT_PATH="$PWD"/httpd
if ! test -x "$LIB_HTTPD_PATH"
then
- skip_all="skipping test, no web server found at '$LIB_HTTPD_PATH'"
- test_done
+ test_skip_or_die $GIT_TEST_HTTPD "no web server found at '$LIB_HTTPD_PATH'"
fi
HTTPD_VERSION=`$LIB_HTTPD_PATH -v | \
then
if ! test $HTTPD_VERSION -ge 2
then
- skip_all="skipping test, at least Apache version 2 is required"
- test_done
+ test_skip_or_die $GIT_TEST_HTTPD \
+ "at least Apache version 2 is required"
fi
if ! test -d "$DEFAULT_HTTPD_MODULE_PATH"
then
- skip_all="Apache module directory not found. Skipping tests."
- test_done
+ test_skip_or_die $GIT_TEST_HTTPD \
+ "Apache module directory not found"
fi
LIB_HTTPD_MODULE_PATH="$DEFAULT_HTTPD_MODULE_PATH"
fi
else
- error "Could not identify web server at '$LIB_HTTPD_PATH'"
+ test_skip_or_die $GIT_TEST_HTTPD \
+ "Could not identify web server at '$LIB_HTTPD_PATH'"
fi
prepare_httpd() {
>&3 2>&4
if test $? -ne 0
then
- skip_all="skipping test, web server setup failed"
trap 'die' EXIT
- test_done
+ test_skip_or_die $GIT_TEST_HTTPD "web server setup failed"
fi
}
# Helpers for terminal output tests.
-test_expect_success PERL 'set up terminal for tests' '
+# Catch tests which should depend on TTY but forgot to. There's no need
+# to additionally check that the TTY prereq is set here. If the test declared
+# it and we are running the test, then it must have been set.
+test_terminal () {
+ if ! test_declared_prereq TTY
+ then
+ echo >&4 "test_terminal: need to declare TTY prerequisite"
+ return 127
+ fi
+ perl "$TEST_DIRECTORY"/test-terminal.perl "$@"
+}
+
+test_lazy_prereq TTY '
+ test_have_prereq PERL &&
+
# Reading from the pty master seems to get stuck _sometimes_
# on Mac OS X 10.5.0, using Perl 5.10.0 or 5.8.9.
#
# After 2000 iterations or so it hangs.
# https://rt.cpan.org/Ticket/Display.html?id=65692
#
- if test "$(uname -s)" = Darwin
- then
- :
- elif
- perl "$TEST_DIRECTORY"/test-terminal.perl \
- sh -c "test -t 1 && test -t 2"
- then
- test_set_prereq TTY &&
- test_terminal () {
- if ! test_declared_prereq TTY
- then
- echo >&4 "test_terminal: need to declare TTY prerequisite"
- return 127
- fi
- perl "$TEST_DIRECTORY"/test-terminal.perl "$@"
- }
- fi
+ test "$(uname -s)" != Darwin &&
+
+ perl "$TEST_DIRECTORY"/test-terminal.perl \
+ sh -c "test -t 1 && test -t 2"
'
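With the prerequisite made lazy, a test only pays the cost of probing for a usable pseudo-terminal when it actually declares TTY. A hypothetical test using the helper:

    test_expect_success TTY 'command runs with a terminal on stdout' '
    	test_terminal git log --oneline -1
    '
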
--- /dev/null
+#!/bin/sh
+
+test_description='Tests pack performance using bitmaps'
+. ./perf-lib.sh
+
+test_perf_large_repo
+
+# note that we do everything through config,
+# since we want to be able to compare bitmap-aware
+# git versus non-bitmap git
+test_expect_success 'setup bitmap config' '
+ git config pack.writebitmaps true &&
+ git config pack.writebitmaphashcache true
+'
+
+test_perf 'repack to disk' '
+ git repack -ad
+'
+
+test_perf 'simulated clone' '
+ git pack-objects --stdout --all </dev/null >/dev/null
+'
+
+test_perf 'simulated fetch' '
+ have=$(git rev-list HEAD~100 -1) &&
+ {
+ echo HEAD &&
+ echo ^$have
+ } | git pack-objects --revs --stdout >/dev/null
+'
+
+test_expect_success 'create partial bitmap state' '
+ # pick a commit to represent the repo tip in the past
+ cutoff=$(git rev-list HEAD~100 -1) &&
+ orig_tip=$(git rev-parse HEAD) &&
+
+ # now kill off all of the refs and pretend we had
+ # just the one tip
+	rm -rf .git/logs .git/refs/* .git/packed-refs &&
+	git update-ref HEAD $cutoff &&
+
+ # and then repack, which will leave us with a nice
+ # big bitmap pack of the "old" history, and all of
+ # the new history will be loose, as if it had been pushed
+ # up incrementally and exploded via unpack-objects
+	git repack -Ad &&
+
+ # and now restore our original tip, as if the pushes
+ # had happened
+ git update-ref HEAD $orig_tip
+'
+
+test_perf 'partial bitmap' '
+ git pack-objects --stdout --all </dev/null >/dev/null
+'
+
+test_done
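Outside the perf harness, the same configuration enables bitmap generation for a repository that mostly serves fetches; the bitmap is written by the next full repack. A sketch using the config keys exercised above:

    git config pack.writebitmaps true &&
    git config pack.writebitmaphashcache true &&
    git repack -Ad
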
echo "expected a directory $1, a file $1/config and $1/refs"
return 1
fi
- bare=$(GIT_CONFIG="$1/config" git config --bool core.bare)
- worktree=$(GIT_CONFIG="$1/config" git config core.worktree) ||
+ bare=$(cd "$1" && git config --bool core.bare)
+ worktree=$(cd "$1" && git config core.worktree) ||
worktree=unset
test "$bare" = "$2" && test "$worktree" = "$3" || {
}
test_expect_success 'plain' '
- (
- sane_unset GIT_DIR GIT_WORK_TREE &&
- mkdir plain &&
- cd plain &&
- git init
- ) &&
+ git init plain &&
check_config plain/.git false unset
'
test_expect_success 'plain nested in bare' '
(
- sane_unset GIT_DIR GIT_WORK_TREE &&
git init --bare bare-ancestor.git &&
cd bare-ancestor.git &&
mkdir plain-nested &&
test_expect_success 'plain through aliased command, outside any git repo' '
(
- sane_unset GIT_DIR GIT_WORK_TREE &&
HOME=$(pwd)/alias-config &&
export HOME &&
mkdir alias-config &&
test_expect_failure 'plain nested through aliased command' '
(
- sane_unset GIT_DIR GIT_WORK_TREE &&
git init plain-ancestor-aliased &&
cd plain-ancestor-aliased &&
echo "[alias] aliasedinit = init" >>.git/config &&
test_expect_failure 'plain nested in bare through aliased command' '
(
- sane_unset GIT_DIR GIT_WORK_TREE &&
git init --bare bare-ancestor-aliased.git &&
cd bare-ancestor-aliased.git &&
echo "[alias] aliasedinit = init" >>config &&
'
test_expect_success 'plain with GIT_WORK_TREE' '
- if (
- sane_unset GIT_DIR &&
- mkdir plain-wt &&
- cd plain-wt &&
- GIT_WORK_TREE=$(pwd) git init
- )
- then
- echo Should have failed -- GIT_WORK_TREE should not be used
- false
- fi
+ mkdir plain-wt &&
+ test_must_fail env GIT_WORK_TREE="$(pwd)/plain-wt" git init plain-wt
'
test_expect_success 'plain bare' '
- (
- sane_unset GIT_DIR GIT_WORK_TREE GIT_CONFIG &&
- mkdir plain-bare-1 &&
- cd plain-bare-1 &&
- git --bare init
- ) &&
+ git --bare init plain-bare-1 &&
check_config plain-bare-1 true unset
'
test_expect_success 'plain bare with GIT_WORK_TREE' '
- if (
- sane_unset GIT_DIR GIT_CONFIG &&
- mkdir plain-bare-2 &&
- cd plain-bare-2 &&
- GIT_WORK_TREE=$(pwd) git --bare init
- )
- then
- echo Should have failed -- GIT_WORK_TREE should not be used
- false
- fi
+ mkdir plain-bare-2 &&
+ test_must_fail \
+ env GIT_WORK_TREE="$(pwd)/plain-bare-2" \
+ git --bare init plain-bare-2
'
test_expect_success 'GIT_DIR bare' '
-
- (
- sane_unset GIT_CONFIG &&
- mkdir git-dir-bare.git &&
- GIT_DIR=git-dir-bare.git git init
- ) &&
+ mkdir git-dir-bare.git &&
+ GIT_DIR=git-dir-bare.git git init &&
check_config git-dir-bare.git true unset
'
test_expect_success 'init --bare' '
-
- (
- sane_unset GIT_DIR GIT_WORK_TREE GIT_CONFIG &&
- mkdir init-bare.git &&
- cd init-bare.git &&
- git init --bare
- ) &&
+ git init --bare init-bare.git &&
check_config init-bare.git true unset
'
test_expect_success 'GIT_DIR non-bare' '
(
- sane_unset GIT_CONFIG &&
mkdir non-bare &&
cd non-bare &&
GIT_DIR=.git git init
test_expect_success 'GIT_DIR & GIT_WORK_TREE (1)' '
(
- sane_unset GIT_CONFIG &&
mkdir git-dir-wt-1.git &&
GIT_WORK_TREE=$(pwd) GIT_DIR=git-dir-wt-1.git git init
) &&
'
test_expect_success 'GIT_DIR & GIT_WORK_TREE (2)' '
-
- if (
- sane_unset GIT_CONFIG &&
- mkdir git-dir-wt-2.git &&
- GIT_WORK_TREE=$(pwd) GIT_DIR=git-dir-wt-2.git git --bare init
- )
- then
- echo Should have failed -- --bare should not be used
- false
- fi
+ mkdir git-dir-wt-2.git &&
+ test_must_fail env \
+ GIT_WORK_TREE="$(pwd)" \
+ GIT_DIR=git-dir-wt-2.git \
+ git --bare init
'
test_expect_success 'reinit' '
(
- sane_unset GIT_CONFIG GIT_WORK_TREE GIT_CONFIG &&
-
mkdir again &&
cd again &&
git init >out1 2>err1 &&
test_expect_success 'init with --template' '
mkdir template-source &&
echo content >template-source/file &&
- (
- mkdir template-custom &&
- cd template-custom &&
- git init --template=../template-source
- ) &&
+ git init --template=../template-source template-custom &&
test_cmp template-source/file template-custom/.git/file
'
test_expect_success 'init with --template (blank)' '
- (
- mkdir template-plain &&
- cd template-plain &&
- git init
- ) &&
- test -f template-plain/.git/info/exclude &&
- (
- mkdir template-blank &&
- cd template-blank &&
- git init --template=
- ) &&
- ! test -f template-blank/.git/info/exclude
+ git init template-plain &&
+ test_path_is_file template-plain/.git/info/exclude &&
+ git init --template= template-blank &&
+ test_path_is_missing template-blank/.git/info/exclude
'
test_expect_success 'init with init.templatedir set' '
mkdir templatedir-source &&
echo Content >templatedir-source/file &&
+ test_config_global init.templatedir "${HOME}/templatedir-source" &&
(
- test_config="${HOME}/.gitconfig" &&
- git config -f "$test_config" init.templatedir "${HOME}/templatedir-source" &&
mkdir templatedir-set &&
cd templatedir-set &&
sane_unset GIT_TEMPLATE_DIR &&
'
test_expect_success 'init --bare/--shared overrides system/global config' '
- (
- test_config="$HOME"/.gitconfig &&
- git config -f "$test_config" core.bare false &&
- git config -f "$test_config" core.sharedRepository 0640 &&
- mkdir init-bare-shared-override &&
- cd init-bare-shared-override &&
- git init --bare --shared=0666
- ) &&
+ test_config_global core.bare false &&
+ test_config_global core.sharedRepository 0640 &&
+ git init --bare --shared=0666 init-bare-shared-override &&
check_config init-bare-shared-override true unset &&
test x0666 = \
x`git config -f init-bare-shared-override/config core.sharedRepository`
'
test_expect_success 'init honors global core.sharedRepository' '
- (
- test_config="$HOME"/.gitconfig &&
- git config -f "$test_config" core.sharedRepository 0666 &&
- mkdir shared-honor-global &&
- cd shared-honor-global &&
- git init
- ) &&
+ test_config_global core.sharedRepository 0666 &&
+ git init shared-honor-global &&
test x0666 = \
x`git config -f shared-honor-global/.git/config core.sharedRepository`
'
test_expect_success 'init rejects insanely long --template' '
- (
- insane=$(printf "x%09999dx" 1) &&
- mkdir test &&
- cd test &&
- test_must_fail git init --template=$insane
- )
+ test_must_fail git init --template=$(printf "x%09999dx" 1) test
'
test_expect_success 'init creates a new directory' '
rm -fr newdir &&
- (
- git init newdir &&
- test -d newdir/.git/refs
- )
+ git init newdir &&
+ test_path_is_dir newdir/.git/refs
'
test_expect_success 'init creates a new bare directory' '
rm -fr newdir &&
- (
- git init --bare newdir &&
- test -d newdir/refs
- )
+ git init --bare newdir &&
+ test_path_is_dir newdir/refs
'
test_expect_success 'init recreates a directory' '
rm -fr newdir &&
- (
- mkdir newdir &&
- git init newdir &&
- test -d newdir/.git/refs
- )
+ mkdir newdir &&
+ git init newdir &&
+ test_path_is_dir newdir/.git/refs
'
test_expect_success 'init recreates a new bare directory' '
rm -fr newdir &&
- (
- mkdir newdir &&
- git init --bare newdir &&
- test -d newdir/refs
- )
+ mkdir newdir &&
+ git init --bare newdir &&
+ test_path_is_dir newdir/refs
'
test_expect_success 'init creates a new deep directory' '
rm -fr newdir &&
git init newdir/a/b/c &&
- test -d newdir/a/b/c/.git/refs
+ test_path_is_dir newdir/a/b/c/.git/refs
'
test_expect_success POSIXPERM 'init creates a new deep directory (umask vs. shared)' '
# the repository itself should follow "shared"
umask 002 &&
git init --bare --shared=0660 newdir/a/b/c &&
- test -d newdir/a/b/c/refs &&
+ test_path_is_dir newdir/a/b/c/refs &&
ls -ld newdir/a newdir/a/b > lsab.out &&
! grep -v "^drwxrw[sx]r-x" lsab.out &&
ls -ld newdir/a/b/c > lsc.out &&
test_expect_success 'init notices EEXIST (1)' '
rm -fr newdir &&
- (
- >newdir &&
- test_must_fail git init newdir &&
- test -f newdir
- )
+ >newdir &&
+ test_must_fail git init newdir &&
+ test_path_is_file newdir
'
test_expect_success 'init notices EEXIST (2)' '
rm -fr newdir &&
- (
- mkdir newdir &&
- >newdir/a
- test_must_fail git init newdir/a/b &&
- test -f newdir/a
- )
+ mkdir newdir &&
+ >newdir/a &&
+ test_must_fail git init newdir/a/b &&
+ test_path_is_file newdir/a
'
test_expect_success POSIXPERM,SANITY 'init notices EPERM' '
rm -fr newdir &&
- (
- mkdir newdir &&
- chmod -w newdir &&
- test_must_fail git init newdir/a/b
- )
+ mkdir newdir &&
+ chmod -w newdir &&
+ test_must_fail git init newdir/a/b
'
test_expect_success 'init creates a new bare directory with global --bare' '
rm -rf newdir &&
git --bare init newdir &&
- test -d newdir/refs
+ test_path_is_dir newdir/refs
'
test_expect_success 'init prefers command line to GIT_DIR' '
rm -rf newdir &&
mkdir otherdir &&
GIT_DIR=otherdir git --bare init newdir &&
- test -d newdir/refs &&
- ! test -d otherdir/refs
+ test_path_is_dir newdir/refs &&
+ test_path_is_missing otherdir/refs
'
test_expect_success 'init with separate gitdir' '
git init --separate-git-dir realgitdir newdir &&
echo "gitdir: `pwd`/realgitdir" >expected &&
test_cmp expected newdir/.git &&
- test -d realgitdir/refs
+ test_path_is_dir realgitdir/refs
'
test_expect_success 're-init on .git file' '
) &&
echo "gitdir: `pwd`/surrealgitdir" >expected &&
test_cmp expected newdir/.git &&
- test -d surrealgitdir/refs &&
- ! test -d realgitdir/refs
+ test_path_is_dir surrealgitdir/refs &&
+ test_path_is_missing realgitdir/refs
'
test_expect_success 're-init to move gitdir' '
) &&
echo "gitdir: `pwd`/realgitdir" >expected &&
test_cmp expected newdir/.git &&
- test -d realgitdir/refs
+ test_path_is_dir realgitdir/refs
'
test_expect_success SYMLINKS 're-init to move gitdir symlink' '
) &&
echo "gitdir: `pwd`/realgitdir" >expected &&
test_cmp expected newdir/.git &&
- test -d realgitdir/refs &&
- ! test -d newdir/here
+ test_cmp expected newdir/here &&
+ test_path_is_dir realgitdir/refs
'
test_done
test_line_count = 0 err
'
+test_expect_success 'using --git-dir and --work-tree' '
+ mkdir unreal real &&
+ git init real &&
+ echo "file test=in-real" >real/.gitattributes &&
+ (
+ cd unreal &&
+ attr_check file in-real "--git-dir ../real/.git --work-tree ../real"
+ )
+'
+
test_expect_success 'setup bare' '
- git clone --bare . bare.git &&
- cd bare.git
+ git clone --bare . bare.git
'
test_expect_success 'bare repository: check that .gitattribute is ignored' '
(
- echo "f test=f"
- echo "a/i test=a/i"
- ) >.gitattributes &&
- attr_check f unspecified &&
- attr_check a/f unspecified &&
- attr_check a/c/f unspecified &&
- attr_check a/i unspecified &&
- attr_check subdir/a/i unspecified
+ cd bare.git &&
+ (
+ echo "f test=f"
+ echo "a/i test=a/i"
+ ) >.gitattributes &&
+ attr_check f unspecified &&
+ attr_check a/f unspecified &&
+ attr_check a/c/f unspecified &&
+ attr_check a/i unspecified &&
+ attr_check subdir/a/i unspecified
+ )
'
test_expect_success 'bare repository: check that --cached honors index' '
- GIT_INDEX_FILE=../.git/index \
- git check-attr --cached --stdin --all <../stdin-all |
- sort >actual &&
- test_cmp ../specified-all actual
+ (
+ cd bare.git &&
+ GIT_INDEX_FILE=../.git/index \
+ git check-attr --cached --stdin --all <../stdin-all |
+ sort >actual &&
+ test_cmp ../specified-all actual
+ )
'
test_expect_success 'bare repository: test info/attributes' '
(
- echo "f test=f"
- echo "a/i test=a/i"
- ) >info/attributes &&
- attr_check f f &&
- attr_check a/f f &&
- attr_check a/c/f f &&
- attr_check a/i a/i &&
- attr_check subdir/a/i unspecified
+ cd bare.git &&
+ (
+ echo "f test=f"
+ echo "a/i test=a/i"
+ ) >info/attributes &&
+ attr_check f f &&
+ attr_check a/f f &&
+ attr_check a/c/f f &&
+ attr_check a/i a/i &&
+ attr_check subdir/a/i unspecified
+ )
'
test_done
echo "$response" | grep "^:: two"
'
+############################################################################
+#
+# test whitespace handling
+
+test_expect_success 'trailing whitespace is ignored' '
+ mkdir whitespace &&
+ >whitespace/trailing &&
+ >whitespace/untracked &&
+ echo "whitespace/trailing " >ignore &&
+ cat >expect <<EOF &&
+whitespace/untracked
+EOF
+ : >err.expect &&
+ git ls-files -o -X ignore whitespace >actual 2>err &&
+ test_cmp expect actual &&
+ test_cmp err.expect err
+'
+
+test_expect_success !MINGW 'quoting allows trailing whitespace' '
+ rm -rf whitespace &&
+ mkdir whitespace &&
+ >"whitespace/trailing " &&
+ >whitespace/untracked &&
+ echo "whitespace/trailing\\ \\ " >ignore &&
+ echo whitespace/untracked >expect &&
+ : >err.expect &&
+ git ls-files -o -X ignore whitespace >actual 2>err &&
+ test_cmp expect actual &&
+ test_cmp err.expect err
+'
+
test_done
--- /dev/null
+#!/bin/sh
+
+test_description='test hashmap and string hash functions'
+. ./test-lib.sh
+
+test_hashmap () {
+	echo "$1" | test-hashmap $3 >actual &&
+	echo "$2" >expect &&
+	test_cmp expect actual
+}
+
+test_expect_success 'hash functions' '
+
+test_hashmap "hash key1" "2215982743 2215982743 116372151 116372151" &&
+test_hashmap "hash key2" "2215982740 2215982740 116372148 116372148" &&
+test_hashmap "hash fooBarFrotz" "1383912807 1383912807 3189766727 3189766727" &&
+test_hashmap "hash foobarfrotz" "2862305959 2862305959 3189766727 3189766727"
+
+'
+
+test_expect_success 'put' '
+
+test_hashmap "put key1 value1
+put key2 value2
+put fooBarFrotz value3
+put foobarfrotz value4
+size" "NULL
+NULL
+NULL
+NULL
+64 4"
+
+'
+
+test_expect_success 'put (case insensitive)' '
+
+test_hashmap "put key1 value1
+put key2 value2
+put fooBarFrotz value3
+size" "NULL
+NULL
+NULL
+64 3" ignorecase
+
+'
+
+test_expect_success 'replace' '
+
+test_hashmap "put key1 value1
+put key1 value2
+put fooBarFrotz value3
+put fooBarFrotz value4
+size" "NULL
+value1
+NULL
+value3
+64 2"
+
+'
+
+test_expect_success 'replace (case insensitive)' '
+
+test_hashmap "put key1 value1
+put Key1 value2
+put fooBarFrotz value3
+put foobarfrotz value4
+size" "NULL
+value1
+NULL
+value3
+64 2" ignorecase
+
+'
+
+test_expect_success 'get' '
+
+test_hashmap "put key1 value1
+put key2 value2
+put fooBarFrotz value3
+put foobarfrotz value4
+get key1
+get key2
+get fooBarFrotz
+get notInMap" "NULL
+NULL
+NULL
+NULL
+value1
+value2
+value3
+NULL"
+
+'
+
+test_expect_success 'get (case insensitive)' '
+
+test_hashmap "put key1 value1
+put key2 value2
+put fooBarFrotz value3
+get Key1
+get keY2
+get foobarfrotz
+get notInMap" "NULL
+NULL
+NULL
+value1
+value2
+value3
+NULL" ignorecase
+
+'
+
+test_expect_success 'add' '
+
+test_hashmap "add key1 value1
+add key1 value2
+add fooBarFrotz value3
+add fooBarFrotz value4
+get key1
+get fooBarFrotz
+get notInMap" "value2
+value1
+value4
+value3
+NULL"
+
+'
+
+test_expect_success 'add (case insensitive)' '
+
+test_hashmap "add key1 value1
+add Key1 value2
+add fooBarFrotz value3
+add foobarfrotz value4
+get key1
+get Foobarfrotz
+get notInMap" "value2
+value1
+value4
+value3
+NULL" ignorecase
+
+'
+
+test_expect_success 'remove' '
+
+test_hashmap "put key1 value1
+put key2 value2
+put fooBarFrotz value3
+remove key1
+remove key2
+remove notInMap
+size" "NULL
+NULL
+NULL
+value1
+value2
+NULL
+64 1"
+
+'
+
+test_expect_success 'remove (case insensitive)' '
+
+test_hashmap "put key1 value1
+put key2 value2
+put fooBarFrotz value3
+remove Key1
+remove keY2
+remove notInMap
+size" "NULL
+NULL
+NULL
+value1
+value2
+NULL
+64 1" ignorecase
+
+'
+
+test_expect_success 'iterate' '
+
+test_hashmap "put key1 value1
+put key2 value2
+put fooBarFrotz value3
+iterate" "NULL
+NULL
+NULL
+key2 value2
+key1 value1
+fooBarFrotz value3"
+
+'
+
+test_expect_success 'iterate (case insensitive)' '
+
+test_hashmap "put key1 value1
+put key2 value2
+put fooBarFrotz value3
+iterate" "NULL
+NULL
+NULL
+fooBarFrotz value3
+key2 value2
+key1 value1" ignorecase
+
+'
+
+test_expect_success 'grow / shrink' '
+
+ rm -f in &&
+ rm -f expect &&
+ for n in $(test_seq 51)
+ do
+ echo put key$n value$n >> in &&
+ echo NULL >> expect
+ done &&
+ echo size >> in &&
+ echo 64 51 >> expect &&
+ echo put key52 value52 >> in &&
+	echo NULL >> expect &&
+ echo size >> in &&
+ echo 256 52 >> expect &&
+ for n in $(test_seq 12)
+ do
+ echo remove key$n >> in &&
+ echo value$n >> expect
+ done &&
+ echo size >> in &&
+ echo 256 40 >> expect &&
+ echo remove key40 >> in &&
+ echo value40 >> expect &&
+ echo size >> in &&
+ echo 64 39 >> expect &&
+	test-hashmap <in >out &&
+ test_cmp expect out
+
+'
+
+test_done
test "$sym" = "$(test-path-utils real_path "$dir2/syml")"
'
+test_expect_success SYMLINKS 'prefix_path works with absolute paths to work tree symlinks' '
+ ln -s target symlink &&
+ test "$(test-path-utils prefix_path prefix "$(pwd)/symlink")" = "symlink"
+'
+
+test_expect_success 'prefix_path works with only absolute path to work tree' '
+ echo "" >expected &&
+ test-path-utils prefix_path prefix "$(pwd)" >actual &&
+ test_cmp expected actual
+'
+
+test_expect_success 'prefix_path rejects absolute path to dir with same beginning as work tree' '
+ test_must_fail test-path-utils prefix_path prefix "$(pwd)a"
+'
+
+test_expect_success SYMLINKS 'prefix_path works with absolute path to a symlink to work tree having same beginning as work tree' '
+ git init repo &&
+ ln -s repo repolink &&
+ test "a" = "$(cd repo && test-path-utils prefix_path prefix "$(pwd)/../repolink/a")"
+'
+
relative_path /foo/a/b/c/ /foo/a/b/ c/
relative_path /foo/a/b/c/ /foo/a/b c/
relative_path /foo/a//b//c/ ///foo/a/b// c/ POSIX
(
cd dir &&
printf "change" >two &&
- env GIT_EXTERNAL_DIFF=./diff git diff >../actual
+ GIT_EXTERNAL_DIFF=./diff git diff >../actual
git checkout -- two
) &&
test_cmp expect actual
test_cmp expect .git/config
'
-test_expect_success 'alternative GIT_CONFIG (non-existing file should fail)' '
+test_expect_success 'alternative --file (non-existing file should fail)' '
test_must_fail git config --file non-existing-config -l
'
EOF
test_expect_success 'alternative GIT_CONFIG' '
- GIT_CONFIG=other-config git config -l >output &&
+ GIT_CONFIG=other-config git config --list >output &&
test_cmp expect output
'
test_expect_success 'alternative GIT_CONFIG (--file)' '
- git config --file other-config -l > output &&
+ git config --file other-config --list >output &&
test_cmp expect output
'
+test_expect_success 'alternative GIT_CONFIG (--file=-)' '
+ git config --file - --list <other-config >output &&
+ test_cmp expect output
+'
+
+test_expect_success 'setting a value in stdin is an error' '
+ test_must_fail git config --file - some.value foo
+'
+
+test_expect_success 'editing stdin is an error' '
+ test_must_fail git config --file - --edit
+'
+
test_expect_success 'refer config from subdirectory' '
mkdir x &&
(
'
-test_expect_success 'refer config from subdirectory via GIT_CONFIG' '
+test_expect_success 'refer config from subdirectory via --file' '
(
cd x &&
- GIT_CONFIG=../other-config git config --get ein.bahn >actual &&
+ git config --file=../other-config --get ein.bahn >actual &&
test_cmp expect actual
)
'
park = ausweis
EOF
-test_expect_success '--set in alternative GIT_CONFIG' '
- GIT_CONFIG=other-config git config anwohner.park ausweis &&
+test_expect_success '--set in alternative file' '
+ git config --file=other-config anwohner.park ausweis &&
test_cmp expect other-config
'
test_expect_success SYMLINKS 'symlinked configuration' '
ln -s notyet myconfig &&
- GIT_CONFIG=myconfig git config test.frotz nitfol &&
+ git config --file=myconfig test.frotz nitfol &&
test -h myconfig &&
test -f notyet &&
- test "z$(GIT_CONFIG=notyet git config test.frotz)" = znitfol &&
- GIT_CONFIG=myconfig git config test.xyzzy rezrov &&
+ test "z$(git config --file=notyet test.frotz)" = znitfol &&
+ git config --file=myconfig test.xyzzy rezrov &&
test -h myconfig &&
test -f notyet &&
cat >expect <<-\EOF &&
rezrov
EOF
{
- GIT_CONFIG=notyet git config test.frotz &&
- GIT_CONFIG=notyet git config test.xyzzy
+ git config --file=notyet test.frotz &&
+ git config --file=notyet test.xyzzy
} >actual &&
test_cmp expect actual
'
test_expect_success 'nonexistent configuration' '
- (
- GIT_CONFIG=doesnotexist &&
- export GIT_CONFIG &&
- test_must_fail git config --list &&
- test_must_fail git config test.xyzzy
- )
+ test_must_fail git config --file=doesnotexist --list &&
+ test_must_fail git config --file=doesnotexist test.xyzzy
'
test_expect_success SYMLINKS 'symlink to nonexistent configuration' '
ln -s doesnotexist linktonada &&
ln -s linktonada linktolinktonada &&
- (
- GIT_CONFIG=linktonada &&
- export GIT_CONFIG &&
- test_must_fail git config --list &&
- GIT_CONFIG=linktolinktonada &&
- test_must_fail git config --list
- )
+ test_must_fail git config --file=linktonada --list &&
+ test_must_fail git config --file=linktolinktonada --list
'
test_expect_success 'check split_cmdline return' "
test_create_repo "test" &&
test_create_repo "test2" &&
- GIT_CONFIG=test2/.git/config git config core.repositoryformatversion 99
+ git config --file=test2/.git/config core.repositoryformatversion 99
'
test_expect_success 'gitdir selection on normal repos' '
test_expect_success 'absolute includes from command line work' '
echo "[test]one = 1" >one &&
echo 1 >expect &&
- git -c include.path="$PWD/one" config test.one >actual &&
+ git -c include.path="$(pwd)/one" config test.one >actual &&
test_cmp expect actual
'
test_must_fail git -c include.path=one config test.one
'
+test_expect_success 'absolute includes from blobs work' '
+ echo "[test]one = 1" >one &&
+ echo "[include]path=$(pwd)/one" >blob &&
+ blob=$(git hash-object -w blob) &&
+ echo 1 >expect &&
+ git config --blob=$blob test.one >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'relative includes from blobs fail' '
+ echo "[test]one = 1" >one &&
+ echo "[include]path=one" >blob &&
+ blob=$(git hash-object -w blob) &&
+ test_must_fail git config --blob=$blob test.one
+'
+
+test_expect_success 'absolute includes from stdin work' '
+ echo "[test]one = 1" >one &&
+ echo 1 >expect &&
+ echo "[include]path=\"$(pwd)/one\"" |
+ git config --file - test.one >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'relative includes from stdin line fail' '
+ echo "[test]one = 1" >one &&
+ echo "[include]path=one" |
+ test_must_fail git config --file - test.one
+'
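Reading configuration from a blob or from standard input is useful for inspecting config-formatted files that are not checked out; for example (assuming the repository carries a .gitmodules file):

    git config --blob HEAD:.gitmodules --list
    git show HEAD:.gitmodules | git config --file - --get-regexp "submodule\..*\.path"
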
+
test_expect_success 'include cycles are detected' '
cat >.gitconfig <<-\EOF &&
[test]value = gitconfig
grep "error in commit $new" out
'
+# date is 2^64 + 1
+test_expect_success 'integer overflow in timestamps is reported' '
+ git cat-file commit HEAD >basis &&
+ sed "s/^\\(author .*>\\) [0-9]*/\\1 18446744073709551617/" \
+ <basis >bad-timestamp &&
+ new=$(git hash-object -t commit -w --stdin <bad-timestamp) &&
+ test_when_finished "remove_object $new" &&
+ git update-ref refs/heads/bogus "$new" &&
+ test_when_finished "git update-ref -d refs/heads/bogus" &&
+ git fsck 2>out &&
+ cat out &&
+ grep "error in commit $new.*integer overflow" out
+'
+
test_expect_success 'tag pointing to nonexistent' '
cat >invalid-tag <<-\EOF &&
object ffffffffffffffffffffffffffffffffffffffff
test_description='test git rev-parse --parseopt'
. ./test-lib.sh
-cat > expect <<\END_EXPECT
-cat <<\EOF
-usage: some-command [options] <args>...
-
- some-command does foo and bar!
-
- -h, --help show the help
- --foo some nifty option --foo
- --bar ... some cool option --bar with an argument
- -b, --baz a short and long option
-
-An option group Header
- -C[...] option C with an optional argument
- -d, --data[=...] short and long option with an optional argument
-
-Extras
- --extra1 line above used to cause a segfault but no longer does
-
-EOF
+sed -e 's/^|//' >expect <<\END_EXPECT
+|cat <<\EOF
+|usage: some-command [options] <args>...
+|
+| some-command does foo and bar!
+|
+| -h, --help show the help
+| --foo some nifty option --foo
+| --bar ... some cool option --bar with an argument
+| -b, --baz a short and long option
+|
+|An option group Header
+| -C[...] option C with an optional argument
+| -d, --data[=...] short and long option with an optional argument
+|
+|Argument hints
+| -b <arg> short option required argument
+| --bar2 <arg> long option required argument
+| -e, --fuz <with-space>
+| short and long option required argument
+| -s[<some>] short option optional argument
+| --long[=<data>] long option optional argument
+| -g, --fluf[=<path>] short and long option optional argument
+| --longest <very-long-argument-hint>
+| a very long argument hint
+|
+|Extras
+| --extra1 line above used to cause a segfault but no longer does
+|
+|EOF
END_EXPECT
-cat > optionspec << EOF
-some-command [options] <args>...
-
-some-command does foo and bar!
---
-h,help show the help
-
-foo some nifty option --foo
-bar= some cool option --bar with an argument
-b,baz a short and long option
-
- An option group Header
-C? option C with an optional argument
-d,data? short and long option with an optional argument
-
-Extras
-extra1 line above used to cause a segfault but no longer does
+sed -e 's/^|//' >optionspec <<\EOF
+|some-command [options] <args>...
+|
+|some-command does foo and bar!
+|--
+|h,help show the help
+|
+|foo some nifty option --foo
+|bar= some cool option --bar with an argument
+|b,baz a short and long option
+|
+| An option group Header
+|C? option C with an optional argument
+|d,data? short and long option with an optional argument
+|
+| Argument hints
+|b=arg short option required argument
+|bar2=arg long option required argument
+|e,fuz=with-space short and long option required argument
+|s?some short option optional argument
+|long?data long option optional argument
+|g,fluf?path short and long option optional argument
+|longest=very-long-argument-hint a very long argument hint
+|
+|Extras
+|extra1 line above used to cause a segfault but no longer does
EOF
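A script consumes such a spec by feeding it to "git rev-parse --parseopt" on standard input and eval-ing the normalized result, e.g. (reusing the optionspec file created above):

    eval "$(git rev-parse --parseopt -- "$@" <optionspec || echo exit $?)"
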
test_expect_success 'test --parseopt help output' '
setup_repo 30 "$here/30" gitfile true &&
(
cd 30 &&
- GIT_DIR=.git &&
- export GIT_DIR &&
- test_must_fail git symbolic-ref HEAD 2>result
+ test_must_fail env GIT_DIR=.git git symbolic-ref HEAD 2>result
) &&
grep "core.bare and core.worktree" 30/result
'
--- /dev/null
+#!/bin/sh
+
+test_description='index file specific tests'
+
+. ./test-lib.sh
+
+test_expect_success 'setup' '
+ echo 1 >a
+'
+
+test_expect_success 'bogus GIT_INDEX_VERSION issues warning' '
+ (
+ rm -f .git/index &&
+ GIT_INDEX_VERSION=2bogus &&
+ export GIT_INDEX_VERSION &&
+ git add a 2>&1 | sed "s/[0-9]//" >actual.err &&
+ sed -e "s/ Z$/ /" <<-\EOF >expect.err &&
+ warning: GIT_INDEX_VERSION set, but the value is invalid.
+ Using version Z
+ EOF
+ test_i18ncmp expect.err actual.err
+ )
+'
+
+test_expect_success 'out of bounds GIT_INDEX_VERSION issues warning' '
+ (
+ rm -f .git/index &&
+ GIT_INDEX_VERSION=1 &&
+ export GIT_INDEX_VERSION &&
+ git add a 2>&1 | sed "s/[0-9]//" >actual.err &&
+ sed -e "s/ Z$/ /" <<-\EOF >expect.err &&
+ warning: GIT_INDEX_VERSION set, but the value is invalid.
+ Using version Z
+ EOF
+ test_i18ncmp expect.err actual.err
+ )
+'
+
+test_expect_success 'no warning with bogus GIT_INDEX_VERSION and existing index' '
+ (
+ GIT_INDEX_VERSION=1 &&
+ export GIT_INDEX_VERSION &&
+ git add a 2>actual.err &&
+ >expect.err &&
+ test_i18ncmp expect.err actual.err
+ )
+'
+
+test_expect_success 'out of bounds index.version issues warning' '
+ (
+ sane_unset GIT_INDEX_VERSION &&
+ rm -f .git/index &&
+ git config --add index.version 1 &&
+ git add a 2>&1 | sed "s/[0-9]//" >actual.err &&
+ sed -e "s/ Z$/ /" <<-\EOF >expect.err &&
+ warning: index.version set, but the value is invalid.
+ Using version Z
+ EOF
+ test_i18ncmp expect.err actual.err
+ )
+'
+
+test_expect_success 'GIT_INDEX_VERSION takes precedence over config' '
+ (
+ rm -f .git/index &&
+ GIT_INDEX_VERSION=4 &&
+ export GIT_INDEX_VERSION &&
+ git config --add index.version 2 &&
+ git add a 2>&1 &&
+ echo 4 >expect &&
+ test-index-version <.git/index >actual &&
+ test_cmp expect actual
+ )
+'
+
+test_done
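GIT_INDEX_VERSION only matters when a new index file is created; an existing index keeps its on-disk version unless converted explicitly. A quick sketch:

    rm -f .git/index &&
    GIT_INDEX_VERSION=4 git add a &&        # newly written index uses version 4
    test-index-version <.git/index &&       # prints 4
    git update-index --index-version 2      # convert the existing index to version 2
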
. ./test-lib.sh
+test_set_index_version 3
+
cat >expect.full <<EOF
H 1
H 2
test_cmp expect actual
'
+test_expect_success '--cacheinfo mode,sha1,path (new syntax)' '
+ echo content >file &&
+ git hash-object -w --stdin <file >expect &&
+
+ git update-index --add --cacheinfo 100644 "$(cat expect)" file &&
+ git rev-parse :file >actual &&
+ test_cmp expect actual &&
+
+ git update-index --add --cacheinfo "100644,$(cat expect),elif" &&
+ git rev-parse :elif >actual &&
+ test_cmp expect actual
+'
+
test_done
'
-# Note that this is scheduled to change in Git 2.0, when
-# "git add -u" will become full-tree by default.
-test_expect_success 'non-limited update in subdir leaves root alone' '
+test_expect_success 'non-qualified update in subdir updates from the root' '
(
cd dir1 &&
echo even more >>sub2 &&
git add -u
) &&
- cat >expect <<-\EOF &&
- check
- top
- EOF
+ : >expect &&
git diff-files --name-only >actual &&
test_cmp expect actual
'
test_i18ngrep "[Uu]sage: git ls-files " broken/usage
'
+test_expect_success SYMLINKS 'ls-files with absolute paths to symlinks' '
+ mkdir subs &&
+ ln -s nosuch link &&
+ ln -s ../nosuch subs/link &&
+ git add link subs/link &&
+ git ls-files -s link subs/link >expect &&
+ git ls-files -s "$(pwd)/link" "$(pwd)/subs/link" >actual &&
+ test_cmp expect actual &&
+
+ (
+ cd subs &&
+ git ls-files -s link >../expect &&
+ git ls-files -s "$(pwd)/link" >../actual
+ ) &&
+ test_cmp expect actual
+'
+
test_done
git add e &&
test_tick &&
git commit -m "rename a->e" &&
+ c7=$(git rev-parse --verify HEAD) &&
git checkout rename-ln &&
git mv a e &&
test_ln_s_add e a &&
'
+test_expect_success 'merge-recursive w/ empty work tree - ours has rename' '
+ (
+ GIT_WORK_TREE="$PWD/ours-has-rename-work" &&
+ export GIT_WORK_TREE &&
+ GIT_INDEX_FILE="$PWD/ours-has-rename-index" &&
+ export GIT_INDEX_FILE &&
+ mkdir "$GIT_WORK_TREE" &&
+ git read-tree -i -m $c7 &&
+ git update-index --ignore-missing --refresh &&
+ git merge-recursive $c0 -- $c7 $c3 &&
+ git ls-files -s >actual-files
+ ) 2>actual-err &&
+ >expected-err &&
+ cat >expected-files <<-EOF &&
+ 100644 $o3 0 b/c
+ 100644 $o0 0 c
+ 100644 $o0 0 d/e
+ 100644 $o0 0 e
+ EOF
+ test_cmp expected-files actual-files &&
+ test_cmp expected-err actual-err
+'
+
+test_expect_success 'merge-recursive w/ empty work tree - theirs has rename' '
+ (
+ GIT_WORK_TREE="$PWD/theirs-has-rename-work" &&
+ export GIT_WORK_TREE &&
+ GIT_INDEX_FILE="$PWD/theirs-has-rename-index" &&
+ export GIT_INDEX_FILE &&
+ mkdir "$GIT_WORK_TREE" &&
+ git read-tree -i -m $c3 &&
+ git update-index --ignore-missing --refresh &&
+ git merge-recursive $c0 -- $c3 $c7 &&
+ git ls-files -s >actual-files
+ ) 2>actual-err &&
+ >expected-err &&
+ cat >expected-files <<-EOF &&
+ 100644 $o3 0 b/c
+ 100644 $o0 0 c
+ 100644 $o0 0 d/e
+ 100644 $o0 0 e
+ EOF
+ test_cmp expected-files actual-files &&
+ test_cmp expected-err actual-err
+'
+
test_expect_success 'merge removes empty directories' '
git reset --hard master &&
! test-wildmatch wildmatch '$3' '$4'
"
fi
- if [ $2 = 1 ]; then
- test_expect_success "fnmatch: match '$3' '$4'" "
- test-wildmatch fnmatch '$3' '$4'
- "
- elif [ $2 = 0 ]; then
- test_expect_success "fnmatch: no match '$3' '$4'" "
- ! test-wildmatch fnmatch '$3' '$4'
- "
-# else
-# test_expect_success BROKEN_FNMATCH "fnmatch: '$3' '$4'" "
-# ! test-wildmatch fnmatch '$3' '$4'
-# "
- fi
}
imatch() {
test_cmp expected actual
'
+test_expect_success '--set-upstream-to notices an error to set branch as own upstream' '
+ git branch --set-upstream-to refs/heads/my13 my13 2>actual &&
+ cat >expected <<-\EOF &&
+ warning: Not setting branch my13 as its own upstream.
+ EOF
+ test_expect_code 1 git config branch.my13.remote &&
+ test_expect_code 1 git config branch.my13.merge &&
+ test_i18ncmp expected actual
+'
+
# Keep this test last, as it changes the current branch
cat >expect <<EOF
$_z40 $HEAD $GIT_COMMITTER_NAME <$GIT_COMMITTER_EMAIL> 1117150200 +0000 branch: Created from master
write_script editor <<-\EOF &&
echo "New contents" >"$1"
EOF
- (
- EDITOR=./editor &&
- export EDITOR &&
- test_must_fail git branch --edit-description no-such-branch
- )
+ test_must_fail env EDITOR=./editor git branch --edit-description no-such-branch
'
test_expect_success 'refuse --edit-description on unborn branch for now' '
echo "New contents" >"$1"
EOF
git checkout --orphan unborn &&
- (
- EDITOR=./editor &&
- export EDITOR &&
- test_must_fail git branch --edit-description
- )
+ test_must_fail env EDITOR=./editor git branch --edit-description
'
test_expect_success '--merged catches invalid object names' '
export GIT_EDITOR
test_expect_success 'cannot annotate non-existing HEAD' '
- (MSG=3 && export MSG && test_must_fail git notes add)
+ test_must_fail env MSG=3 git notes add
'
test_expect_success setup '
'
test_expect_success 'need valid notes ref' '
- (MSG=1 GIT_NOTES_REF=/ && export MSG GIT_NOTES_REF &&
- test_must_fail git notes add) &&
- (MSG=2 GIT_NOTES_REF=/ && export MSG GIT_NOTES_REF &&
- test_must_fail git notes show)
+	test_must_fail env MSG=1 GIT_NOTES_REF=/ git notes add &&
+ test_must_fail env MSG=2 GIT_NOTES_REF=/ git notes show
'
test_expect_success 'refusing to add notes in refs/heads/' '
- (MSG=1 GIT_NOTES_REF=refs/heads/bogus &&
- export MSG GIT_NOTES_REF &&
- test_must_fail git notes add)
+ test_must_fail env MSG=1 GIT_NOTES_REF=refs/heads/bogus git notes add
'
test_expect_success 'refusing to edit notes in refs/remotes/' '
- (MSG=1 GIT_NOTES_REF=refs/remotes/bogus &&
- export MSG GIT_NOTES_REF &&
- test_must_fail git notes edit)
+	test_must_fail env MSG=1 GIT_NOTES_REF=refs/remotes/bogus git notes edit
'
# 1 indicates caught gracefully by die, 128 means git-show barked
test_must_fail git notes list HEAD
'
+test_expect_success 'create note from non-blob with "git notes add -C" fails' '
+ commit=$(git rev-parse --verify HEAD) &&
+ tree=$(git rev-parse --verify HEAD:) &&
+ test_must_fail git notes add -C $commit &&
+ test_must_fail git notes add -C $tree &&
+ test_must_fail git notes list HEAD
+'
+
+cat > expect << EOF
+commit 80d796defacd5db327b7a4e50099663902fbdc5c
+Author: A U Thor <author@example.com>
+Date:   Thu Apr 7 15:20:13 2005 -0700
+
+    8th
+
+Notes (other):
+    This is a blob object
+EOF
+
+test_expect_success 'create note from blob with "git notes add -C" reuses blob id' '
+ blob=$(echo "This is a blob object" | git hash-object -w --stdin) &&
+ git notes add -C $blob &&
+ git log -1 > actual &&
+ test_cmp expect actual &&
+ test "$(git notes list HEAD)" = "$blob"
+'
+
cat > expect << EOF
commit 016e982bad97eacdbda0fcbd7ce5b0ba87c81f1b
Author: A U Thor <author@example.com>
git add a10 &&
test_tick &&
git commit -m 10th &&
- (
- MSG="yet another note" &&
- export MSG &&
- test_must_fail git notes add -c deadbeef
- ) &&
+ test_must_fail env MSG="yet another note" git notes add -c deadbeef &&
test_must_fail git notes list HEAD
'
git rebase master
'
+test_expect_success 'rebase off of the previous branch using "-"' '
+ git checkout master &&
+ git checkout HEAD^ &&
+ git rebase @{-1} >expect.messages &&
+ git merge-base master HEAD >expect.forkpoint &&
+
+ git checkout master &&
+ git checkout HEAD^ &&
+ git rebase - >actual.messages &&
+ git merge-base master HEAD >actual.forkpoint &&
+
+ test_cmp expect.forkpoint actual.forkpoint &&
+ # the next one is dubious---we may want to say "-",
+ # instead of @{-1}, in the message
+ test_i18ncmp expect.messages actual.messages
+'
+
test_expect_success 'rebase a single mode change' '
git checkout master &&
git branch -D topic &&
test_expect_success 'rebase -i with the exec command checks tree cleanness' '
git checkout master &&
- (
set_fake_editor &&
- FAKE_LINES="exec_echo_foo_>file1 1" &&
- export FAKE_LINES &&
- test_must_fail git rebase -i HEAD^
- ) &&
+ test_must_fail env FAKE_LINES="exec_echo_foo_>file1 1" git rebase -i HEAD^ &&
test_cmp_rev master^ HEAD &&
git reset --hard &&
git rebase --continue
test_expect_success 'rebase -i with exec of inexistent command' '
git checkout master &&
test_when_finished "git rebase --abort" &&
- (
set_fake_editor &&
- FAKE_LINES="exec_this-command-does-not-exist 1" &&
- export FAKE_LINES &&
- test_must_fail git rebase -i HEAD^ >actual 2>&1
- ) &&
+ test_must_fail env FAKE_LINES="exec_this-command-does-not-exist 1" \
+ git rebase -i HEAD^ >actual 2>&1 &&
! grep "Maybe git-rebase is broken" actual
'
git checkout -b conflict-fixup conflict-branch &&
base=$(git rev-parse HEAD~4) &&
set_fake_editor &&
- (
- FAKE_LINES="1 fixup 3 fixup 4" &&
- export FAKE_LINES &&
- test_must_fail git rebase -i $base
- ) &&
+ test_must_fail env FAKE_LINES="1 fixup 3 fixup 4" git rebase -i $base &&
echo three > conflict &&
git add conflict &&
FAKE_COMMIT_AMEND="ONCE" EXPECT_HEADER_COUNT=2 \
git checkout -b conflict-squash conflict-branch &&
base=$(git rev-parse HEAD~4) &&
set_fake_editor &&
- (
- FAKE_LINES="1 fixup 3 squash 4" &&
- export FAKE_LINES &&
- test_must_fail git rebase -i $base
- ) &&
+ test_must_fail env FAKE_LINES="1 fixup 3 squash 4" git rebase -i $base &&
echo three > conflict &&
git add conflict &&
FAKE_COMMIT_AMEND="TWICE" EXPECT_HEADER_COUNT=2 \
git checkout -b interrupted-squash conflict-branch &&
one=$(git rev-parse HEAD~3) &&
set_fake_editor &&
- (
- FAKE_LINES="1 squash 3 2" &&
- export FAKE_LINES &&
- test_must_fail git rebase -i HEAD~3
- ) &&
+ test_must_fail env FAKE_LINES="1 squash 3 2" git rebase -i HEAD~3 &&
(echo one; echo two; echo four) > conflict &&
git add conflict &&
test_must_fail git rebase --continue &&
git checkout -b interrupted-squash2 conflict-branch &&
one=$(git rev-parse HEAD~3) &&
set_fake_editor &&
- (
- FAKE_LINES="3 squash 1 2" &&
- export FAKE_LINES &&
- test_must_fail git rebase -i HEAD~3
- ) &&
+ test_must_fail env FAKE_LINES="3 squash 1 2" git rebase -i HEAD~3 &&
(echo one; echo four) > conflict &&
git add conflict &&
test_must_fail git rebase --continue &&
FAKE_LINES="edit 1" git rebase -i HEAD^ &&
echo "edited again" > file7 &&
git add file7 &&
- (
- FAKE_COMMIT_MESSAGE=" " &&
- export FAKE_COMMIT_MESSAGE &&
- test_must_fail git rebase --continue
- ) &&
+ test_must_fail env FAKE_COMMIT_MESSAGE=" " git rebase --continue &&
test $old = $(git rev-parse HEAD) &&
git rebase --abort
'
echo "and again" > file7 &&
git add file7 &&
test_tick &&
- (
- FAKE_COMMIT_MESSAGE="and again" &&
- export FAKE_COMMIT_MESSAGE &&
- test_must_fail git rebase --continue
- ) &&
+ test_must_fail env FAKE_COMMIT_MESSAGE="and again" git rebase --continue &&
git rebase --abort
'
test_tick &&
test_when_finished "git rebase --abort || :" &&
set_fake_editor &&
- (
- FAKE_LINES="1 exec_false" &&
- export FAKE_LINES &&
- test_must_fail git rebase -i HEAD^
- ) &&
+ test_must_fail env FAKE_LINES="1 exec_false" git rebase -i HEAD^ &&
echo "edited again" > file7 &&
git add file7 &&
test_must_fail git rebase --continue 2>error &&
test_expect_success 'rebase -i --root temporary sentinel commit' '
git checkout B &&
- (
- set_fake_editor &&
- FAKE_LINES="2" &&
- export FAKE_LINES &&
- test_must_fail git rebase -i --root
- ) &&
+ set_fake_editor &&
+ test_must_fail env FAKE_LINES="2" git rebase -i --root &&
git cat-file commit HEAD | grep "^tree 4b825dc642cb" &&
git rebase --abort
'
test_when_finished "git rebase --abort; git reset --hard $current_head; rm -f error" &&
test_commit TO-REMOVE will-conflict old-content &&
test_commit "\temp" will-conflict new-content dummy &&
- (
- EDITOR=true &&
- export EDITOR &&
- test_must_fail git rebase -i HEAD^ --onto HEAD^^ 2>error
- ) &&
+ test_must_fail env EDITOR=true git rebase -i HEAD^ --onto HEAD^^ 2>error &&
test_expect_code 1 grep " emp" error
'
test_expect_success 'pre-rebase hook stops rebase (2)' '
git checkout test &&
git reset --hard side &&
- (
- EDITOR=:
- export EDITOR
- test_must_fail git rebase -i master
- ) &&
+ test_must_fail env EDITOR=: git rebase -i master &&
test "z$(git symbolic-ref HEAD)" = zrefs/heads/test &&
test 0 = $(git rev-list HEAD...side | wc -l)
'
git submodule update &&
git checkout -q HEAD^ 2>actual &&
git checkout -q master 2>actual &&
- echo "warning: unable to rmdir submod: Directory not empty" >expected &&
- test_i18ncmp expected actual &&
+ test_i18ngrep "^warning: unable to rmdir submod:" actual &&
git status -s submod >actual &&
echo "?? submod/" >expected &&
test_cmp expected actual &&
test_cmp expect actual
'
+test_expect_success 'diff-cache ignores trailing slash on submodule path' '
+ git diff --name-only HEAD^ submod >expect &&
+ git diff --name-only HEAD^ submod/ >actual &&
+ test_cmp expect actual
+'
+
test_done
test_expect_success TTY 'format-patch --stdout paginates' '
rm -f pager_used &&
- (
- GIT_PAGER="wc >pager_used" &&
- export GIT_PAGER &&
- test_terminal git format-patch --stdout --all
- ) &&
+ test_terminal env GIT_PAGER="wc >pager_used" git format-patch --stdout --all &&
test_path_is_file pager_used
'
test_expect_success TTY 'format-patch --stdout pagination can be disabled' '
rm -f pager_used &&
- (
- GIT_PAGER="wc >pager_used" &&
- export GIT_PAGER &&
- test_terminal git --no-pager format-patch --stdout --all &&
- test_terminal git -c "pager.format-patch=false" format-patch --stdout --all
- ) &&
+ test_terminal env GIT_PAGER="wc >pager_used" git --no-pager format-patch --stdout --all &&
+ test_terminal env GIT_PAGER="wc >pager_used" git -c "pager.format-patch=false" format-patch --stdout --all &&
test_path_is_missing pager_used &&
test_path_is_missing .git/pager_used
'
. ./test-lib.sh
-LF='
-'
-cat >Beer.java <<\EOF
-public class Beer
-{
- int special;
- public static void main(String args[])
- {
- String s=" ";
- for(int x = 99; x > 0; x--)
- {
- System.out.print(x + " bottles of beer on the wall "
- + x + " bottles of beer\n"
- + "Take one down, pass it around, " + (x - 1)
- + " bottles of beer on the wall.\n");
- }
- System.out.print("Go to the store, buy some more,\n"
- + "99 bottles of beer on the wall.\n");
- }
-}
-EOF
-sed 's/beer\\/beer,\\/' <Beer.java >Beer-correct.java
-cat >Beer.perl <<\EOT
-package Beer;
-
-use strict;
-use warnings;
-use parent qw(Exporter);
-our @EXPORT_OK = qw(round finalround);
-
-sub other; # forward declaration
-
-# hello
-
-sub round {
- my ($n) = @_;
- print "$n bottles of beer on the wall ";
- print "$n bottles of beer\n";
- print "Take one down, pass it around, ";
- $n = $n - 1;
- print "$n bottles of beer on the wall.\n";
-}
-
-sub finalround
-{
- print "Go to the store, buy some more\n";
- print "99 bottles of beer on the wall.\n");
-}
-
-sub withheredocument {
- print <<"EOF"
-decoy here-doc
-EOF
- # some lines of context
- # to pad it out
- print "hello\n";
-}
-
-__END__
-
-=head1 NAME
-
-Beer - subroutine to output fragment of a drinking song
-
-=head1 SYNOPSIS
-
- use Beer qw(round finalround);
-
- sub song {
- for (my $i = 99; $i > 0; $i--) {
- round $i;
- }
- finalround;
- }
+test_expect_success 'setup' '
+ # a non-trivial custom pattern
+ git config diff.custom1.funcname "!static
+!String
+[^ ].*s.*" &&
- song;
+ # a custom pattern which matches to end of line
+ git config diff.custom2.funcname "......Beer\$" &&
-=cut
-EOT
-sed -e '
- s/hello/goodbye/
- s/beer\\/beer,\\/
- s/more\\/more,\\/
- s/song;/song();/
-' <Beer.perl >Beer-correct.perl
+ # alternation in pattern
+ git config diff.custom3.funcname "Beer$" &&
+ git config diff.custom3.xfuncname "^[ ]*((public|static).*)$" &&
-test_expect_funcname () {
- lang=${2-java}
- test_expect_code 1 git diff --no-index -U1 \
- "Beer.$lang" "Beer-correct.$lang" >diff &&
- grep "^@@.*@@ $1" diff
-}
+ # for regexp compilation tests
+ echo A >A.java &&
+ echo B >B.java
+'
-for p in ada bibtex cpp csharp fortran html java matlab objc pascal perl php python ruby tex
+diffpatterns="
+ ada
+ bibtex
+ cpp
+ csharp
+ fortran
+ html
+ java
+ matlab
+ objc
+ pascal
+ perl
+ php
+ python
+ ruby
+ tex
+ custom1
+ custom2
+ custom3
+"
+
+for p in $diffpatterns
do
test_expect_success "builtin $p pattern compiles" '
echo "*.java diff=$p" >.gitattributes &&
test_expect_code 1 git diff --no-index \
- Beer.java Beer-correct.java 2>msg &&
- ! grep fatal msg &&
- ! grep error msg
+ A.java B.java 2>msg &&
+ ! test_i18ngrep fatal msg &&
+ ! test_i18ngrep error msg
'
test_expect_success "builtin $p wordRegex pattern compiles" '
echo "*.java diff=$p" >.gitattributes &&
test_expect_code 1 git diff --no-index --word-diff \
- Beer.java Beer-correct.java 2>msg &&
- ! grep fatal msg &&
- ! grep error msg
+ A.java B.java 2>msg &&
+ ! test_i18ngrep fatal msg &&
+ ! test_i18ngrep error msg
'
done
-test_expect_success 'default behaviour' '
- rm -f .gitattributes &&
- test_expect_funcname "public class Beer\$"
-'
-
-test_expect_success 'set up .gitattributes declaring drivers to test' '
- cat >.gitattributes <<-\EOF
- *.java diff=java
- *.perl diff=perl
- EOF
-'
-
-test_expect_success 'preset java pattern' '
- test_expect_funcname "public static void main("
-'
-
-test_expect_success 'preset perl pattern' '
- test_expect_funcname "sub round {\$" perl
-'
-
-test_expect_success 'perl pattern accepts K&R style brace placement, too' '
- test_expect_funcname "sub finalround\$" perl
-'
-
-test_expect_success 'but is not distracted by end of <<here document' '
- test_expect_funcname "sub withheredocument {\$" perl
-'
-
-test_expect_success 'perl pattern is not distracted by sub within POD' '
- test_expect_funcname "=head" perl
-'
-
-test_expect_success 'perl pattern gets full line of POD header' '
- test_expect_funcname "=head1 SYNOPSIS\$" perl
-'
-
-test_expect_success 'perl pattern is not distracted by forward declaration' '
- test_expect_funcname "package Beer;\$" perl
-'
-
-test_expect_success 'custom pattern' '
- test_config diff.java.funcname "!static
-!String
-[^ ].*s.*" &&
- test_expect_funcname "int special;\$"
-'
-
test_expect_success 'last regexp must not be negated' '
+ echo "*.java diff=java" >.gitattributes &&
test_config diff.java.funcname "!static" &&
- test_expect_code 128 git diff --no-index Beer.java Beer-correct.java 2>msg &&
- grep ": Last expression must not be negated:" msg
+ test_expect_code 128 git diff --no-index A.java B.java 2>msg &&
+ test_i18ngrep ": Last expression must not be negated:" msg
'
-test_expect_success 'pattern which matches to end of line' '
- test_config diff.java.funcname "Beer\$" &&
- test_expect_funcname "Beer\$"
+test_expect_success 'setup hunk header tests' '
+ for i in $diffpatterns
+ do
+ echo "$i-* diff=$i"
+ done > .gitattributes &&
+
+ # add all test files to the index
+ (
+ cd "$TEST_DIRECTORY"/t4018 &&
+ git --git-dir="$TRASH_DIRECTORY/.git" add .
+ ) &&
+
+ # place modified files in the worktree
+ for i in $(git ls-files)
+ do
+ sed -e "s/ChangeMe/IWasChanged/" <"$TEST_DIRECTORY/t4018/$i" >"$i" || return 1
+ done
'
-test_expect_success 'alternation in pattern' '
- test_config diff.java.funcname "Beer$" &&
- test_config diff.java.xfuncname "^[ ]*((public|static).*)$" &&
- test_expect_funcname "public static void main("
-'
+# check each individual file
+for i in $(git ls-files)
+do
+ if grep broken "$i" >/dev/null 2>&1
+ then
+ result=failure
+ else
+ result=success
+ fi
+ test_expect_$result "hunk header: $i" "
+ test_when_finished 'cat actual' && # for debugging only
+ git diff -U1 $i >actual &&
+ grep '@@ .* @@.*RIGHT' actual
+ "
+done
test_done
--- /dev/null
+How to write RIGHT test cases
+=============================
+
+Insert the word "ChangeMe" (exactly this form) at a distance of
+at least two lines from the line that must appear in the hunk header.
+
+The text that must appear in the hunk header must contain the word
+"right", but in all upper-case, like in the title above.
+
+To mark a test case that highlights a malfunction, insert the word
+BROKEN in all lower-case somewhere in the file.
+
+This text is a bit twisted and out of order, but it is itself a
+test case for the default hunk header pattern. Know what you are doing
+if you change it.
+
+BTW, this tests that the head line goes to the hunk header, not the line
+of equal signs.
--- /dev/null
+Item RIGHT::DoSomething( Args with_spaces )
+{
+ ChangeMe;
+}
--- /dev/null
+Item::Item(int RIGHT)
+{
+ ChangeMe;
+}
--- /dev/null
+Item::Item(int RIGHT) :
+ member(0)
+{
+ ChangeMe;
+}
--- /dev/null
+class RIGHT
+{
+ int ChangeMe;
+};
--- /dev/null
+class RIGHT :
+ public Baseclass
+{
+ int ChangeMe;
+};
--- /dev/null
+RIGHT::~RIGHT()
+{
+ ChangeMe;
+}
--- /dev/null
+::Item get::it::RIGHT()
+{
+ ChangeMe;
+}
--- /dev/null
+get::Item get::it::RIGHT()
+{
+ ChangeMe;
+}
+
--- /dev/null
+const char *get_it_RIGHT(char *ptr)
+{
+ ChangeMe;
+}
--- /dev/null
+string& get::it::RIGHT(char *ptr)
+{
+ ChangeMe;
+}
--- /dev/null
+const char *
+RIGHT(int arg)
+{
+ ChangeMe;
+}
--- /dev/null
+namespace RIGHT
+{
+ ChangeMe;
+}
--- /dev/null
+Value operator+(Value LEFT, Value RIGHT)
+{
+ ChangeMe;
+}
--- /dev/null
+class RIGHT : public Baseclass
+{
+public:
+protected:
+private:
+ void DoSomething();
+ int ChangeMe;
+};
--- /dev/null
+struct item RIGHT(int i)
+// Do not
+// pick up
+/* these
+** comments.
+*/
+{
+ ChangeMe;
+}
--- /dev/null
+void RIGHT (void)
+{
+repeat: // C++ comment
+next: /* C comment */
+ do_something();
+
+ ChangeMe;
+}
--- /dev/null
+struct RIGHT {
+ unsigned
+ /* this bit field looks like a label and should not be picked up */
+ decoy_bitfield: 2,
+ more : 1;
+ int filler;
+
+ int ChangeMe;
+};
--- /dev/null
+void wrong()
+{
+}
+
+struct RIGHT_iterator_tag {};
+
+int ChangeMe;
--- /dev/null
+template<class T> int RIGHT(T arg)
+{
+ ChangeMe;
+}
--- /dev/null
+union RIGHT {
+ double v;
+ int ChangeMe;
+};
--- /dev/null
+void RIGHT (void)
+{
+ ChangeMe;
+}
--- /dev/null
+public class Beer
+{
+ int special, RIGHT;
+ public static void main(String args[])
+ {
+ String s=" ";
+ for(int x = 99; x > 0; x--)
+ {
+ System.out.print(x + " bottles of beer on the wall "
+ + x + " bottles of beer\n" // ChangeMe
+ + "Take one down, pass it around, " + (x - 1)
+ + " bottles of beer on the wall.\n");
+ }
+ System.out.print("Go to the store, buy some more,\n"
+ + "99 bottles of beer on the wall.\n");
+ }
+}
--- /dev/null
+public class RIGHT_Beer
+{
+ int special;
+ public static void main(String args[])
+ {
+ System.out.print("ChangeMe");
+ }
+}
--- /dev/null
+public class Beer
+{
+ int special;
+ public static void main(String RIGHT[])
+ {
+ String s=" ";
+ for(int x = 99; x > 0; x--)
+ {
+ System.out.print(x + " bottles of beer on the wall "
+ + x + " bottles of beer\n" // ChangeMe
+ + "Take one down, pass it around, " + (x - 1)
+ + " bottles of beer on the wall.\n");
+ }
+ System.out.print("Go to the store, buy some more,\n"
+ + "99 bottles of beer on the wall.\n");
+ }
+}
--- /dev/null
+public class Beer
+{
+ int special;
+ public static void main(String RIGHT[])
+ {
+ System.out.print("ChangeMe");
+ }
+}
--- /dev/null
+sub RIGHTwithheredocument {
+ print <<"EOF"
+decoy here-doc
+EOF
+ # some lines of context
+ # to pad it out
+ print "ChangeMe\n";
+}
--- /dev/null
+package RIGHT;
+
+use strict;
+use warnings;
+use parent qw(Exporter);
+our @EXPORT_OK = qw(round finalround);
+
+sub other; # forward declaration
+
+# ChangeMe
--- /dev/null
+=head1 NAME
+
+Beer - subroutine to output fragment of a drinking song
+
+=head1 SYNOPSIS_RIGHT
+
+ use Beer qw(round finalround);
+
+ sub song {
+ for (my $i = 99; $i > 0; $i--) {
+ round $i;
+ }
+ finalround;
+ }
+
+ ChangeMe;
+
+=cut
--- /dev/null
+sub RIGHT {
+ my ($n) = @_;
+ print "ChangeMe";
+}
--- /dev/null
+sub RIGHT
+{
+ print "ChangeMe\n";
+}
}
test_expect_success 'external diff with autocrlf = true' '
- git config core.autocrlf true &&
+ test_config core.autocrlf true &&
GIT_EXTERNAL_DIFF=./fake-diff.sh git diff &&
test $(wc -l < crlfed.txt) = $(cat crlfed.txt | keep_only_cr | wc -c)
'
test_expect_success 'diff --cached' '
+ test_config core.autocrlf true &&
git add file &&
git update-index --assume-unchanged file &&
echo second >file &&
test_cmp "$TEST_DIRECTORY"/t4020/diff.NUL actual
'
+test_expect_success 'clean up crlf leftovers' '
+ git update-index --no-assume-unchanged file &&
+ rm -f file* &&
+ git reset --hard
+'
+
+test_expect_success 'submodule diff' '
+ git init sub &&
+ ( cd sub && test_commit sub1 ) &&
+ git add sub &&
+ test_tick &&
+ git commit -m "add submodule" &&
+ ( cd sub && test_commit sub2 ) &&
+ write_script gather_pre_post.sh <<-\EOF &&
+ echo "$1 $4" # path, mode
+ cat "$2" # old file
+ cat "$5" # new file
+ EOF
+ GIT_EXTERNAL_DIFF=./gather_pre_post.sh git diff >actual &&
+ cat >expected <<-EOF &&
+ sub 160000
+ Subproject commit $(git rev-parse HEAD:sub)
+ Subproject commit $(cd sub && git rev-parse HEAD)
+ EOF
+ test_cmp expected actual
+'
+
test_done
)
'
+test_expect_success 'git diff --quiet ignores stat-change only entries' '
+ test-chmtime +10 a &&
+ echo modified >>b &&
+ test_expect_code 1 git diff --quiet
+'
+
test_done
'
done
+test_expect_success 'setup for testing combine-diff order' '
+ git checkout -b tmp HEAD~ &&
+ create_files 3 &&
+ git checkout master &&
+ git merge --no-commit -s ours tmp &&
+ create_files 5
+'
+
+test_expect_success "combine-diff: no order (=tree object order)" '
+ git diff --name-only HEAD HEAD^ HEAD^2 >actual &&
+ test_cmp expect_none actual
+'
+
+for i in 1 2
+do
+ test_expect_success "combine-diff: orderfile using option ($i)" '
+ git diff -Oorder_file_$i --name-only HEAD HEAD^ HEAD^2 >actual &&
+ test_cmp expect_$i actual
+ '
+done
+
test_done
--- /dev/null
+#!/bin/sh
+
+test_description='combined diff show only paths that are different to all parents'
+
+. ./test-lib.sh
+
+# verify that diffc.expect matches output of
+# `git diff -c --name-only HEAD HEAD^ HEAD^2`
+diffc_verify () {
+ git diff -c --name-only HEAD HEAD^ HEAD^2 >diffc.actual &&
+ test_cmp diffc.expect diffc.actual
+}
+
+test_expect_success 'trivial merge - combine-diff empty' '
+ for i in $(test_seq 1 9)
+ do
+ echo $i >$i.txt &&
+ git add $i.txt
+ done &&
+ git commit -m "init" &&
+ git checkout -b side &&
+ for i in $(test_seq 2 9)
+ do
+ echo $i/2 >>$i.txt
+ done &&
+ git commit -a -m "side 2-9" &&
+ git checkout master &&
+ echo 1/2 >1.txt &&
+ git commit -a -m "master 1" &&
+ git merge side &&
+ >diffc.expect &&
+ diffc_verify
+'
+
+
+test_expect_success 'only one trully conflicting path' '
+ git checkout side &&
+ for i in $(test_seq 2 9)
+ do
+ echo $i/3 >>$i.txt
+ done &&
+ echo "4side" >>4.txt &&
+ git commit -a -m "side 2-9 +4" &&
+ git checkout master &&
+ for i in $(test_seq 1 9)
+ do
+ echo $i/3 >>$i.txt
+ done &&
+ echo "4master" >>4.txt &&
+ git commit -a -m "master 1-9 +4" &&
+ test_must_fail git merge side &&
+ cat <<-\EOF >4.txt &&
+ 4
+ 4/2
+ 4/3
+ 4master
+ 4side
+ EOF
+ git add 4.txt &&
+ git commit -m "merge side (2)" &&
+ echo 4.txt >diffc.expect &&
+ diffc_verify
+'
+
+test_expect_success 'merge introduces new file' '
+ git checkout side &&
+ for i in $(test_seq 5 9)
+ do
+ echo $i/4 >>$i.txt
+ done &&
+ git commit -a -m "side 5-9" &&
+ git checkout master &&
+ for i in $(test_seq 1 3)
+ do
+ echo $i/4 >>$i.txt
+ done &&
+ git commit -a -m "master 1-3 +4hello" &&
+ git merge side &&
+ echo "Hello World" >4hello.txt &&
+ git add 4hello.txt &&
+ git commit --amend &&
+ echo 4hello.txt >diffc.expect &&
+ diffc_verify
+'
+
+test_expect_success 'merge removed a file' '
+ git checkout side &&
+ for i in $(test_seq 5 9)
+ do
+ echo $i/5 >>$i.txt
+ done &&
+ git commit -a -m "side 5-9" &&
+ git checkout master &&
+ for i in $(test_seq 1 3)
+ do
+ echo $i/4 >>$i.txt
+ done &&
+ git commit -a -m "master 1-3" &&
+ git merge side &&
+ git rm 4.txt &&
+ git commit --amend &&
+ echo 4.txt >diffc.expect &&
+ diffc_verify
+'
+
+test_done
test_description='log --grep/--author/--regexp-ignore-case/-S/-G'
. ./test-lib.sh
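+# test_log <expect-file> <option> <pattern> [<git-log args>...] generates one
+# test that runs "git log <args> <option><pattern> --format=%H" (an "=" is
+# inserted after long options) and compares the output with <expect-file>,
+# e.g. "test_log expect_initial --grep initial".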
+test_log () {
+ expect=$1
+ kind=$2
+ needle=$3
+ shift 3
+ rest=$@
+
+ case $kind in
+ --*)
+ opt=$kind=$needle
+ ;;
+ *)
+ opt=$kind$needle
+ ;;
+ esac
+ case $expect in
+ expect_nomatch)
+ match=nomatch
+ ;;
+ *)
+ match=match
+ ;;
+ esac
+
+ test_expect_success "log $kind${rest:+ $rest} ($match)" "
+ git log $rest $opt --format=%H >actual &&
+ test_cmp $expect actual
+ "
+}
+
+# test -i and --regexp-ignore-case and expect both to behave the same way
+test_log_icase () {
+ test_log $@ --regexp-ignore-case
+ test_log $@ -i
+}
+
test_expect_success setup '
+ >expect_nomatch &&
+
>file &&
git add file &&
test_tick &&
git commit -m initial &&
+ git rev-parse --verify HEAD >expect_initial &&
echo Picked >file &&
+ git add file &&
test_tick &&
- git commit -a --author="Another Person <another@example.com>" -m second
-'
-
-test_expect_success 'log --grep' '
- git log --grep=initial --format=%H >actual &&
- git rev-parse --verify HEAD^ >expect &&
- test_cmp expect actual
-'
-
-test_expect_success 'log --grep --regexp-ignore-case' '
- git log --regexp-ignore-case --grep=InItial --format=%H >actual &&
- git rev-parse --verify HEAD^ >expect &&
- test_cmp expect actual
-'
-
-test_expect_success 'log --grep -i' '
- git log -i --grep=InItial --format=%H >actual &&
- git rev-parse --verify HEAD^ >expect &&
- test_cmp expect actual
-'
-
-test_expect_success 'log --author --regexp-ignore-case' '
- git log --regexp-ignore-case --author=person --format=%H >actual &&
- git rev-parse --verify HEAD >expect &&
- test_cmp expect actual
-'
-
-test_expect_success 'log --author -i' '
- git log -i --author=person --format=%H >actual &&
- git rev-parse --verify HEAD >expect &&
- test_cmp expect actual
-'
-
-test_expect_success 'log -G (nomatch)' '
- git log -Gpicked --format=%H >actual &&
- >expect &&
- test_cmp expect actual
-'
-
-test_expect_success 'log -G (match)' '
- git log -GPicked --format=%H >actual &&
- git rev-parse --verify HEAD >expect &&
- test_cmp expect actual
-'
-
-test_expect_success 'log -G --regexp-ignore-case (nomatch)' '
- git log --regexp-ignore-case -Gpickle --format=%H >actual &&
- >expect &&
- test_cmp expect actual
+ git commit --author="Another Person <another@example.com>" -m second &&
+ git rev-parse --verify HEAD >expect_second
'
-test_expect_success 'log -G -i (nomatch)' '
- git log -i -Gpickle --format=%H >actual &&
- >expect &&
- test_cmp expect actual
-'
+test_log expect_initial --grep initial
+test_log expect_nomatch --grep InItial
+test_log_icase expect_initial --grep InItial
+test_log_icase expect_nomatch --grep initail
-test_expect_success 'log -G --regexp-ignore-case (match)' '
- git log --regexp-ignore-case -Gpicked --format=%H >actual &&
- git rev-parse --verify HEAD >expect &&
- test_cmp expect actual
-'
+test_log expect_second --author Person
+test_log expect_nomatch --author person
+test_log_icase expect_second --author person
+test_log_icase expect_nomatch --author spreon
-test_expect_success 'log -G -i (match)' '
- git log -i -Gpicked --format=%H >actual &&
- git rev-parse --verify HEAD >expect &&
- test_cmp expect actual
-'
+test_log expect_nomatch -G picked
+test_log expect_second -G Picked
+test_log_icase expect_nomatch -G pickle
+test_log_icase expect_second -G picked
test_expect_success 'log -G --textconv (missing textconv tool)' '
echo "* diff=test" >.gitattributes &&
test_expect_success 'log -G --no-textconv (missing textconv tool)' '
echo "* diff=test" >.gitattributes &&
git -c diff.test.textconv=missing log -Gfoo --no-textconv >actual &&
- >expect &&
- test_cmp expect actual &&
+ test_cmp expect_nomatch actual &&
rm .gitattributes
'
-test_expect_success 'log -S (nomatch)' '
- git log -Spicked --format=%H >actual &&
- >expect &&
- test_cmp expect actual
-'
-
-test_expect_success 'log -S (match)' '
- git log -SPicked --format=%H >actual &&
- git rev-parse --verify HEAD >expect &&
- test_cmp expect actual
-'
-
-test_expect_success 'log -S --regexp-ignore-case (match)' '
- git log --regexp-ignore-case -Spicked --format=%H >actual &&
- git rev-parse --verify HEAD >expect &&
- test_cmp expect actual
-'
-
-test_expect_success 'log -S -i (match)' '
- git log -i -Spicked --format=%H >actual &&
- git rev-parse --verify HEAD >expect &&
- test_cmp expect actual
-'
-
-test_expect_success 'log -S --regexp-ignore-case (nomatch)' '
- git log --regexp-ignore-case -Spickle --format=%H >actual &&
- >expect &&
- test_cmp expect actual
-'
+test_log expect_nomatch -S picked
+test_log expect_second -S Picked
+test_log_icase expect_second -S picked
+test_log_icase expect_nomatch -S pickle
-test_expect_success 'log -S -i (nomatch)' '
- git log -i -Spickle --format=%H >actual &&
- >expect &&
- test_cmp expect actual
-'
+test_log expect_nomatch -S p.cked --pickaxe-regex
+test_log expect_second -S P.cked --pickaxe-regex
+test_log_icase expect_second -S p.cked --pickaxe-regex
+test_log_icase expect_nomatch -S p.ckle --pickaxe-regex
test_expect_success 'log -S --textconv (missing textconv tool)' '
echo "* diff=test" >.gitattributes &&
test_expect_success 'log -S --no-textconv (missing textconv tool)' '
echo "* diff=test" >.gitattributes &&
git -c diff.test.textconv=missing log -Sfoo --no-textconv >actual &&
- >expect &&
- test_cmp expect actual &&
+ test_cmp expect_nomatch actual &&
rm .gitattributes
'
test_cmp expect.err actual.err
'
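+# munge_author_date <commit> <timestamp>: create a copy of <commit> with its
+# author timestamp replaced by <timestamp> and print the new object name.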
+munge_author_date () {
+ git cat-file commit "$1" >commit.orig &&
+ sed "s/^\(author .*>\) [0-9]*/\1 $2/" <commit.orig >commit.munge &&
+ git hash-object -w -t commit commit.munge
+}
+
+test_expect_success 'unparsable dates produce sentinel value' '
+ commit=$(munge_author_date HEAD totally_bogus) &&
+	echo "Date:   Thu Jan 1 00:00:00 1970 +0000" >expect &&
+ git log -1 $commit >actual.full &&
+ grep Date <actual.full >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'unparsable dates produce sentinel value (%ad)' '
+ commit=$(munge_author_date HEAD totally_bogus) &&
+ echo >expect &&
+ git log -1 --format=%ad $commit >actual
+	git log -1 --format=%ad $commit >actual &&
+'
+
+# date is 2^64 + 1
+test_expect_success 'date parser recognizes integer overflow' '
+ commit=$(munge_author_date HEAD 18446744073709551617) &&
+ echo "Thu Jan 1 00:00:00 1970 +0000" >expect &&
+ git log -1 --format=%ad $commit >actual &&
+ test_cmp expect actual
+'
+
+# date is 2^64 - 2
+test_expect_success 'date parser recognizes time_t overflow' '
+ commit=$(munge_author_date HEAD 18446744073709551614) &&
+ echo "Thu Jan 1 00:00:00 1970 +0000" >expect &&
+ git log -1 --format=%ad $commit >actual &&
+ test_cmp expect actual
+'
+
+# date is within 2^63-1, but enough to choke glibc's gmtime
+test_expect_success 'absurdly far-in-future date' '
+ commit=$(munge_author_date HEAD 999999999999999999) &&
+ git log -1 --format=%ad $commit
+'
+
test_done
test_must_fail git archive --remote=. $sha1 >remote.tar
'
+test_expect_success 'upload-archive can allow unreachable commits' '
+ test_commit unreachable1 &&
+	sha1=$(git rev-parse HEAD) &&
+ git reset --hard HEAD^ &&
+ git archive $sha1 >remote.tar &&
+ test_config uploadarchive.allowUnreachable true &&
+ git archive --remote=. $sha1 >remote.tar
+'
+
test_expect_success 'setup tar filters' '
git config tar.tar.foo.command "tr ab ba" &&
git config tar.bar.command "tr ab ba" &&
s/[-0-9]\{10\} [:0-9]\{8\} [-+][0-9]\{4\}/DATE/g
s/ [^ ].*/ SUBJECT/g
s/ [^ ].* (DATE)/ SUBJECT (DATE)/g
- s/for-upstream/BRANCH/g
+ s|tags/full|BRANCH|g
s/mnemonic.txt/FILENAME/g
s/^version [0-9]/VERSION/
/^ FILENAME | *[0-9]* [-+]*\$/ b diffstat
test_must_fail git request-pull initial "$downstream_url" \
2>../err
) &&
- grep "No branch of.*is at:\$" err &&
+ grep "No match for commit .*" err &&
grep "Are you sure you pushed" err
'
git checkout initial &&
git merge --ff-only master &&
git push origin master:for-upstream &&
- git request-pull initial origin >../request
+ git request-pull initial origin master:for-upstream >../request
) &&
sed -nf read-request.sed <request >digest &&
cat digest &&
'
-test_expect_success 'request names an appropriate branch' '
+test_expect_success 'request asks HEAD to be pulled' '
rm -fr downstream.git &&
git init --bare downstream.git &&
read repository &&
read branch
} <digest &&
- test "$branch" = tags/full
+ test -z "$branch"
'
cd local &&
git checkout initial &&
git merge --ff-only master &&
- git push origin master:for-upstream &&
- git request-pull initial "$downstream_url" >../request
+ git push origin tags/full &&
+ git request-pull initial "$downstream_url" tags/full >../request
) &&
<request sed -nf fuzz.sed >request.fuzzy &&
- test_i18ncmp expect request.fuzzy
+ test_i18ncmp expect request.fuzzy &&
+ (
+ cd local &&
+ git request-pull initial "$downstream_url" tags/full:refs/tags/full
+ ) >request &&
+ sed -nf fuzz.sed <request >request.fuzzy &&
+ test_i18ncmp expect request.fuzzy
'
test_expect_success 'request-pull ignores OPTIONS_KEEPDASHDASH poison' '
git checkout initial &&
git merge --ff-only master &&
git push origin master:for-upstream &&
- git request-pull -- initial "$downstream_url" >../request
+ git request-pull -- initial "$downstream_url" master:for-upstream >../request
)
'
test_expect_success 'check unpacked result (have commit, no tag)' '
git rev-list --objects $commit >list.expect &&
(
- GIT_DIR=clone.git &&
- export GIT_DIR &&
- test_must_fail git cat-file -e $tag &&
+ test_must_fail env GIT_DIR=clone.git git cat-file -e $tag &&
git rev-list --objects $commit
) >list.actual &&
test_cmp list.expect list.actual
--- /dev/null
+#!/bin/sh
+
+test_description='exercise basic bitmap functionality'
+. ./test-lib.sh
+
+test_expect_success 'setup repo with moderate-sized history' '
+ for i in $(test_seq 1 10); do
+ test_commit $i
+ done &&
+ git checkout -b other HEAD~5 &&
+ for i in $(test_seq 1 10); do
+ test_commit side-$i
+ done &&
+ git checkout master &&
+ blob=$(echo tagged-blob | git hash-object -w --stdin) &&
+ git tag tagged-blob $blob &&
+ git config pack.writebitmaps true &&
+ git config pack.writebitmaphashcache true
+'
+
+test_expect_success 'full repack creates bitmaps' '
+ git repack -ad &&
+ ls .git/objects/pack/ | grep bitmap >output &&
+ test_line_count = 1 output
+'
+
+test_expect_success 'rev-list --test-bitmap verifies bitmaps' '
+ git rev-list --test-bitmap HEAD
+'
+
+rev_list_tests() {
+ state=$1
+
+ test_expect_success "counting commits via bitmap ($state)" '
+ git rev-list --count HEAD >expect &&
+ git rev-list --use-bitmap-index --count HEAD >actual &&
+ test_cmp expect actual
+ '
+
+ test_expect_success "counting partial commits via bitmap ($state)" '
+ git rev-list --count HEAD~5..HEAD >expect &&
+ git rev-list --use-bitmap-index --count HEAD~5..HEAD >actual &&
+ test_cmp expect actual
+ '
+
+ test_expect_success "counting non-linear history ($state)" '
+ git rev-list --count other...master >expect &&
+ git rev-list --use-bitmap-index --count other...master >actual &&
+ test_cmp expect actual
+ '
+
+ test_expect_success "enumerate --objects ($state)" '
+ git rev-list --objects --use-bitmap-index HEAD >tmp &&
+ cut -d" " -f1 <tmp >tmp2 &&
+ sort <tmp2 >actual &&
+ git rev-list --objects HEAD >tmp &&
+ cut -d" " -f1 <tmp >tmp2 &&
+ sort <tmp2 >expect &&
+ test_cmp expect actual
+ '
+
+ test_expect_success "bitmap --objects handles non-commit objects ($state)" '
+ git rev-list --objects --use-bitmap-index HEAD tagged-blob >actual &&
+ grep $blob actual
+ '
+}
+
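+# The rev-list checks below run twice: first when every object is covered by
+# the bitmap, then again after "setup further non-bitmapped commits".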
+rev_list_tests 'full bitmap'
+
+test_expect_success 'clone from bitmapped repository' '
+ git clone --no-local --bare . clone.git &&
+ git rev-parse HEAD >expect &&
+ git --git-dir=clone.git rev-parse HEAD >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'setup further non-bitmapped commits' '
+ for i in $(test_seq 1 10); do
+ test_commit further-$i
+ done
+'
+
+rev_list_tests 'partial bitmap'
+
+test_expect_success 'fetch (partial bitmap)' '
+ git --git-dir=clone.git fetch origin master:master &&
+ git rev-parse HEAD >expect &&
+ git --git-dir=clone.git rev-parse HEAD >actual &&
+ test_cmp expect actual
+'
+
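+# A plain incremental "git repack -d" never writes a bitmap, so the set of
+# existing *.bitmap files must be unchanged afterwards.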
+test_expect_success 'incremental repack cannot create bitmaps' '
+ test_commit more-1 &&
+ find .git/objects/pack -name "*.bitmap" >expect &&
+ git repack -d &&
+ find .git/objects/pack -name "*.bitmap" >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'incremental repack can disable bitmaps' '
+ test_commit more-2 &&
+ git repack -d --no-write-bitmap-index
+'
+
+test_expect_success 'full repack, reusing previous bitmaps' '
+ git repack -ad &&
+ ls .git/objects/pack/ | grep bitmap >output &&
+ test_line_count = 1 output
+'
+
+test_expect_success 'fetch (full bitmap)' '
+ git --git-dir=clone.git fetch origin master:master &&
+ git rev-parse HEAD >expect &&
+ git --git-dir=clone.git rev-parse HEAD >actual &&
+ test_cmp expect actual
+'
+
+test_lazy_prereq JGIT '
+ type jgit
+'
+
+test_expect_success JGIT 'we can read jgit bitmaps' '
+ git clone . compat-jgit &&
+ (
+ cd compat-jgit &&
+ rm -f .git/objects/pack/*.bitmap &&
+ jgit gc &&
+ git rev-list --test-bitmap HEAD
+ )
+'
+
+test_expect_success JGIT 'jgit can read our bitmaps' '
+ git clone . compat-us &&
+ (
+ cd compat-us &&
+ git repack -adb &&
+ # jgit gc will barf if it does not like our bitmaps
+ jgit gc
+ )
+'
+
+test_done
# Set the child to auto-pack if more than one pack exists
cd child &&
git config gc.autopacklimit 1 &&
+ git config gc.autodetach false &&
git branch test_auto_gc &&
# And create a file that follows the temporary object naming
# convention for the auto-gc to remove
git rev-parse origin/master
'
+test_expect_success 'fetch --prune handles overlapping refspecs' '
+ cd "$D" &&
+ git update-ref refs/pull/42/head master &&
+ git clone . prune-overlapping &&
+ cd prune-overlapping &&
+ git config --add remote.origin.fetch refs/pull/*/head:refs/remotes/origin/pr/* &&
+
+ git fetch --prune origin &&
+ git rev-parse origin/master &&
+ git rev-parse origin/pr/42 &&
+
+	git config --unset-all remote.origin.fetch &&
+ git config remote.origin.fetch refs/pull/*/head:refs/remotes/origin/pr/* &&
+ git config --add remote.origin.fetch refs/heads/*:refs/remotes/origin/* &&
+
+ git fetch --prune origin &&
+ git rev-parse origin/master &&
+ git rev-parse origin/pr/42
+'
+
test_expect_success 'fetch --prune --tags prunes branches but not tags' '
cd "$D" &&
git clone . prune-tags &&
mkdir rsynced &&
(cd rsynced &&
git init --bare &&
- git fetch "rsync:$(pwd)/../.git" master:refs/heads/master &&
+ git fetch "rsync:../.git" master:refs/heads/master &&
git gc --prune &&
test $(git rev-parse master) = $(cd .. && git rev-parse master) &&
git fsck --full)
(cd rsynced2 &&
git init) &&
(cd rsynced &&
- git push "rsync:$(pwd)/../rsynced2/.git" master) &&
+ git push "rsync:../rsynced2/.git" master) &&
(cd rsynced2 &&
git gc --prune &&
test $(git rev-parse master) = $(cd .. && git rev-parse master) &&
mkdir rsynced3 &&
(cd rsynced3 &&
git init) &&
- git push --all "rsync:$(pwd)/rsynced3/.git" &&
+ git push --all "rsync:rsynced3/.git" &&
(cd rsynced3 &&
test $(git rev-parse master) = $(cd .. && git rev-parse master) &&
git fsck --full)
check_push_result down_repo $the_commit heads/master
'
+test_expect_success 'branch.*.pushremote config order is irrelevant' '
+ mk_test one_repo heads/master &&
+ mk_test two_repo heads/master &&
+ test_config remote.one.url one_repo &&
+ test_config remote.two.url two_repo &&
+ test_config branch.master.pushremote two_repo &&
+ test_config remote.pushdefault one_repo &&
+ test_config push.default matching &&
+ git push &&
+ check_push_result one_repo $the_first_commit heads/master &&
+ check_push_result two_repo $the_commit heads/master
+'
+
test_expect_success 'push with dry-run' '
mk_test testrepo heads/master &&
--- /dev/null
+#!/bin/sh
+
+test_description='detect some push errors early (before contacting remote)'
+. ./test-lib.sh
+
+test_expect_success 'setup commits' '
+ test_commit one
+'
+
+test_expect_success 'setup remote' '
+ git init --bare remote.git &&
+ git remote add origin remote.git
+'
+
+test_expect_success 'setup fake receive-pack' '
+ FAKE_RP_ROOT=$(pwd) &&
+ export FAKE_RP_ROOT &&
+ write_script fake-rp <<-\EOF &&
+ echo yes >"$FAKE_RP_ROOT"/rp-ran
+ exit 1
+ EOF
+ git config remote.origin.receivepack "\"\$FAKE_RP_ROOT/fake-rp\""
+'
+
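+# The fake receive-pack above records "yes" in rp-ran when it is run; the
+# tests below expect the file to still say "no", i.e. that the push failed
+# before the remote was contacted.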
+test_expect_success 'detect missing branches early' '
+ echo no >rp-ran &&
+ echo no >expect &&
+ test_must_fail git push origin missing &&
+ test_cmp expect rp-ran
+'
+
+test_expect_success 'detect missing sha1 expressions early' '
+ echo no >rp-ran &&
+ echo no >expect &&
+ test_must_fail git push origin master~2:master &&
+ test_cmp expect rp-ran
+'
+
+test_expect_success 'detect ambiguous refs early' '
+ git branch foo &&
+ git tag foo &&
+ echo no >rp-ran &&
+ echo no >expect &&
+ test_must_fail git push origin foo &&
+ test_cmp expect rp-ran
+'
+
+test_done
)
'
+test_expect_success POSIXPERM,SANITY 'shallow fetch from a read-only repo' '
+ cp -R .git read-only.git &&
+ find read-only.git -print | xargs chmod -w &&
+ test_when_finished "find read-only.git -type d -print | xargs chmod +w" &&
+ git clone --no-local --depth=2 read-only.git from-read-only &&
+ git --git-dir=from-read-only/.git log --format=%s >actual &&
+ cat >expect <<EOF &&
+add-1-back
+4
+EOF
+ test_cmp expect actual
+'
+
stop_httpd
test_done
test_done
fi
-LIB_HTTPD_PORT=${LIB_HTTPD_PORT-'5537'}
. "$TEST_DIRECTORY"/lib-httpd.sh
start_httpd
--- /dev/null
+#!/bin/sh
+
+test_description='fetch/clone from a shallow clone over http'
+
+. ./test-lib.sh
+
+if test -n "$NO_CURL"; then
+ skip_all='skipping test, git built without http support'
+ test_done
+fi
+
+. "$TEST_DIRECTORY"/lib-httpd.sh
+start_httpd
+
+commit() {
+ echo "$1" >tracked &&
+ git add tracked &&
+ git commit -m "$1"
+}
+
+test_expect_success 'setup shallow clone' '
+ commit 1 &&
+ commit 2 &&
+ commit 3 &&
+ commit 4 &&
+ commit 5 &&
+ commit 6 &&
+ commit 7 &&
+ git clone --no-local --depth=5 .git shallow &&
+ git config --global transfer.fsckObjects true
+'
+
+test_expect_success 'clone http repository' '
+ git clone --bare --no-local shallow "$HTTPD_DOCUMENT_ROOT_PATH/repo.git" &&
+ git clone $HTTPD_URL/smart/repo.git clone &&
+ (
+ cd clone &&
+ git fsck &&
+ git log --format=%s origin/master >actual &&
+ cat <<EOF >expect &&
+7
+6
+5
+4
+3
+EOF
+ test_cmp expect actual
+ )
+'
+
+# This test is tricky. We need large enough "have"s that fetch-pack
+# will put pkt-flush in between. Then we need a "have" the server
+# does not have, so that it will send "ACK %s ready".
+test_expect_success 'no shallow lines after receiving ACK ready' '
+ (
+ cd shallow &&
+ for i in $(test_seq 15)
+ do
+ git checkout --orphan unrelated$i &&
+ test_commit unrelated$i &&
+ git push -q "$HTTPD_DOCUMENT_ROOT_PATH/repo.git" \
+ refs/heads/unrelated$i:refs/heads/unrelated$i &&
+ git push -q ../clone/.git \
+ refs/heads/unrelated$i:refs/heads/unrelated$i ||
+ exit 1
+ done &&
+ git checkout master &&
+ test_commit new &&
+ git push "$HTTPD_DOCUMENT_ROOT_PATH/repo.git" master
+ ) &&
+ (
+ cd clone &&
+ git checkout --orphan newnew &&
+ test_commit new-too &&
+ GIT_TRACE_PACKET="$TRASH_DIRECTORY/trace" git fetch --depth=2 &&
+ grep "fetch-pack< ACK .* ready" ../trace &&
+ ! grep "fetch-pack> done" ../trace
+ )
+'
+
+stop_httpd
+test_done
--- /dev/null
+#!/bin/sh
+#
+# Copyright (c) 2008 Clemens Buchacher <drizzd@aon.at>
+#
+
+test_description='test WebDAV http-push
+
+This test runs various sanity checks on http-push.'
+
+. ./test-lib.sh
+
+if git http-push > /dev/null 2>&1 || [ $? -eq 128 ]
+then
+ skip_all="skipping test, USE_CURL_MULTI is not defined"
+ test_done
+fi
+
+LIB_HTTPD_DAV=t
+. "$TEST_DIRECTORY"/lib-httpd.sh
+ROOT_PATH="$PWD"
+start_httpd
+
+test_expect_success 'setup remote repository' '
+ cd "$ROOT_PATH" &&
+ mkdir test_repo &&
+ cd test_repo &&
+ git init &&
+ : >path1 &&
+ git add path1 &&
+ test_tick &&
+ git commit -m initial &&
+ cd - &&
+ git clone --bare test_repo test_repo.git &&
+ cd test_repo.git &&
+ git --bare update-server-info &&
+ mv hooks/post-update.sample hooks/post-update &&
+ ORIG_HEAD=$(git rev-parse --verify HEAD) &&
+ cd - &&
+ mv test_repo.git "$HTTPD_DOCUMENT_ROOT_PATH"
+'
+
+test_expect_success 'create password-protected repository' '
+ mkdir -p "$HTTPD_DOCUMENT_ROOT_PATH/auth/dumb" &&
+ cp -Rf "$HTTPD_DOCUMENT_ROOT_PATH/test_repo.git" \
+ "$HTTPD_DOCUMENT_ROOT_PATH/auth/dumb/test_repo.git"
+'
+
+setup_askpass_helper
+
+test_expect_success 'clone remote repository' '
+ cd "$ROOT_PATH" &&
+ git clone $HTTPD_URL/dumb/test_repo.git test_repo_clone
+'
+
+test_expect_success 'push to remote repository with packed refs' '
+ cd "$ROOT_PATH"/test_repo_clone &&
+ : >path2 &&
+ git add path2 &&
+ test_tick &&
+ git commit -m path2 &&
+ HEAD=$(git rev-parse --verify HEAD) &&
+ git push &&
+ (cd "$HTTPD_DOCUMENT_ROOT_PATH"/test_repo.git &&
+ test $HEAD = $(git rev-parse --verify HEAD))
+'
+
+test_expect_success 'push already up-to-date' '
+ git push
+'
+
+test_expect_success 'push to remote repository with unpacked refs' '
+ (cd "$HTTPD_DOCUMENT_ROOT_PATH"/test_repo.git &&
+ rm packed-refs &&
+ git update-ref refs/heads/master $ORIG_HEAD &&
+ git --bare update-server-info) &&
+ git push &&
+ (cd "$HTTPD_DOCUMENT_ROOT_PATH"/test_repo.git &&
+ test $HEAD = $(git rev-parse --verify HEAD))
+'
+
+test_expect_success 'http-push fetches unpacked objects' '
+ cp -R "$HTTPD_DOCUMENT_ROOT_PATH"/test_repo.git \
+ "$HTTPD_DOCUMENT_ROOT_PATH"/test_repo_unpacked.git &&
+
+ git clone $HTTPD_URL/dumb/test_repo_unpacked.git \
+ "$ROOT_PATH"/fetch_unpacked &&
+
+ # By reset, we force git to retrieve the object
+ (cd "$ROOT_PATH"/fetch_unpacked &&
+ git reset --hard HEAD^ &&
+ git remote rm origin &&
+ git reflog expire --expire=0 --all &&
+ git prune &&
+ git push -f -v $HTTPD_URL/dumb/test_repo_unpacked.git master)
+'
+
+test_expect_success 'http-push fetches packed objects' '
+ cp -R "$HTTPD_DOCUMENT_ROOT_PATH"/test_repo.git \
+ "$HTTPD_DOCUMENT_ROOT_PATH"/test_repo_packed.git &&
+
+ git clone $HTTPD_URL/dumb/test_repo_packed.git \
+ "$ROOT_PATH"/test_repo_clone_packed &&
+
+ (cd "$HTTPD_DOCUMENT_ROOT_PATH"/test_repo_packed.git &&
+ git --bare repack &&
+ git --bare prune-packed) &&
+
+ # By reset, we force git to retrieve the packed object
+ (cd "$ROOT_PATH"/test_repo_clone_packed &&
+ git reset --hard HEAD^ &&
+ git remote remove origin &&
+ git reflog expire --expire=0 --all &&
+ git prune &&
+ git push -f -v $HTTPD_URL/dumb/test_repo_packed.git master)
+'
+
+test_expect_success 'create and delete remote branch' '
+ cd "$ROOT_PATH"/test_repo_clone &&
+ git checkout -b dev &&
+ : >path3 &&
+ git add path3 &&
+ test_tick &&
+ git commit -m dev &&
+ git push origin dev &&
+ git push origin :dev &&
+ test_must_fail git show-ref --verify refs/remotes/origin/dev
+'
+
+test_expect_success 'MKCOL sends directory names with trailing slashes' '
+
+ ! grep "\"MKCOL.*[^/] HTTP/[^ ]*\"" < "$HTTPD_ROOT_PATH"/access.log
+
+'
+
+x1="[0-9a-f]"
+x2="$x1$x1"
+x5="$x1$x1$x1$x1$x1"
+x38="$x5$x5$x5$x5$x5$x5$x5$x1$x1$x1"
+x40="$x38$x2"
+
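+# $x2/$x38 is the usual directory/file split of a 40-hex loose object name;
+# the sed expression below matches it with a trailing "_$x40" suffix in the
+# request path.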
+test_expect_success 'PUT and MOVE sends object to URLs with SHA-1 hash suffix' '
+ sed \
+ -e "s/PUT /OP /" \
+ -e "s/MOVE /OP /" \
+ -e "s|/objects/$x2/${x38}_$x40|WANTED_PATH_REQUEST|" \
+ "$HTTPD_ROOT_PATH"/access.log |
+ grep -e "\"OP .*WANTED_PATH_REQUEST HTTP/[.0-9]*\" 20[0-9] "
+
+'
+
+test_http_push_nonff "$HTTPD_DOCUMENT_ROOT_PATH"/test_repo.git \
+ "$ROOT_PATH"/test_repo_clone master
+
+test_expect_success 'push to password-protected repository (user in URL)' '
+ test_commit pw-user &&
+ set_askpass user@host pass@host &&
+ git push "$HTTPD_URL_USER/auth/dumb/test_repo.git" HEAD &&
+ git rev-parse --verify HEAD >expect &&
+ git --git-dir="$HTTPD_DOCUMENT_ROOT_PATH/auth/dumb/test_repo.git" \
+ rev-parse --verify HEAD >actual &&
+ test_cmp expect actual
+'
+
+test_expect_failure 'user was prompted only once for password' '
+ expect_askpass pass user@host
+'
+
+test_expect_failure 'push to password-protected repository (no user in URL)' '
+ test_commit pw-nouser &&
+ set_askpass user@host pass@host &&
+ git push "$HTTPD_URL/auth/dumb/test_repo.git" HEAD &&
+	expect_askpass both user@host &&
+ git rev-parse --verify HEAD >expect &&
+ git --git-dir="$HTTPD_DOCUMENT_ROOT_PATH/auth/dumb/test_repo.git" \
+ rev-parse --verify HEAD >actual &&
+ test_cmp expect actual
+'
+
+stop_httpd
+
+test_done
+++ /dev/null
-#!/bin/sh
-#
-# Copyright (c) 2008 Clemens Buchacher <drizzd@aon.at>
-#
-
-test_description='test WebDAV http-push
-
-This test runs various sanity checks on http-push.'
-
-. ./test-lib.sh
-
-if git http-push > /dev/null 2>&1 || [ $? -eq 128 ]
-then
- skip_all="skipping test, USE_CURL_MULTI is not defined"
- test_done
-fi
-
-LIB_HTTPD_DAV=t
-LIB_HTTPD_PORT=${LIB_HTTPD_PORT-'5540'}
-. "$TEST_DIRECTORY"/lib-httpd.sh
-ROOT_PATH="$PWD"
-start_httpd
-
-test_expect_success 'setup remote repository' '
- cd "$ROOT_PATH" &&
- mkdir test_repo &&
- cd test_repo &&
- git init &&
- : >path1 &&
- git add path1 &&
- test_tick &&
- git commit -m initial &&
- cd - &&
- git clone --bare test_repo test_repo.git &&
- cd test_repo.git &&
- git --bare update-server-info &&
- mv hooks/post-update.sample hooks/post-update &&
- ORIG_HEAD=$(git rev-parse --verify HEAD) &&
- cd - &&
- mv test_repo.git "$HTTPD_DOCUMENT_ROOT_PATH"
-'
-
-test_expect_success 'create password-protected repository' '
- mkdir -p "$HTTPD_DOCUMENT_ROOT_PATH/auth/dumb" &&
- cp -Rf "$HTTPD_DOCUMENT_ROOT_PATH/test_repo.git" \
- "$HTTPD_DOCUMENT_ROOT_PATH/auth/dumb/test_repo.git"
-'
-
-setup_askpass_helper
-
-test_expect_success 'clone remote repository' '
- cd "$ROOT_PATH" &&
- git clone $HTTPD_URL/dumb/test_repo.git test_repo_clone
-'
-
-test_expect_success 'push to remote repository with packed refs' '
- cd "$ROOT_PATH"/test_repo_clone &&
- : >path2 &&
- git add path2 &&
- test_tick &&
- git commit -m path2 &&
- HEAD=$(git rev-parse --verify HEAD) &&
- git push &&
- (cd "$HTTPD_DOCUMENT_ROOT_PATH"/test_repo.git &&
- test $HEAD = $(git rev-parse --verify HEAD))
-'
-
-test_expect_success 'push already up-to-date' '
- git push
-'
-
-test_expect_success 'push to remote repository with unpacked refs' '
- (cd "$HTTPD_DOCUMENT_ROOT_PATH"/test_repo.git &&
- rm packed-refs &&
- git update-ref refs/heads/master $ORIG_HEAD &&
- git --bare update-server-info) &&
- git push &&
- (cd "$HTTPD_DOCUMENT_ROOT_PATH"/test_repo.git &&
- test $HEAD = $(git rev-parse --verify HEAD))
-'
-
-test_expect_success 'http-push fetches unpacked objects' '
- cp -R "$HTTPD_DOCUMENT_ROOT_PATH"/test_repo.git \
- "$HTTPD_DOCUMENT_ROOT_PATH"/test_repo_unpacked.git &&
-
- git clone $HTTPD_URL/dumb/test_repo_unpacked.git \
- "$ROOT_PATH"/fetch_unpacked &&
-
- # By reset, we force git to retrieve the object
- (cd "$ROOT_PATH"/fetch_unpacked &&
- git reset --hard HEAD^ &&
- git remote rm origin &&
- git reflog expire --expire=0 --all &&
- git prune &&
- git push -f -v $HTTPD_URL/dumb/test_repo_unpacked.git master)
-'
-
-test_expect_success 'http-push fetches packed objects' '
- cp -R "$HTTPD_DOCUMENT_ROOT_PATH"/test_repo.git \
- "$HTTPD_DOCUMENT_ROOT_PATH"/test_repo_packed.git &&
-
- git clone $HTTPD_URL/dumb/test_repo_packed.git \
- "$ROOT_PATH"/test_repo_clone_packed &&
-
- (cd "$HTTPD_DOCUMENT_ROOT_PATH"/test_repo_packed.git &&
- git --bare repack &&
- git --bare prune-packed) &&
-
- # By reset, we force git to retrieve the packed object
- (cd "$ROOT_PATH"/test_repo_clone_packed &&
- git reset --hard HEAD^ &&
- git remote remove origin &&
- git reflog expire --expire=0 --all &&
- git prune &&
- git push -f -v $HTTPD_URL/dumb/test_repo_packed.git master)
-'
-
-test_expect_success 'create and delete remote branch' '
- cd "$ROOT_PATH"/test_repo_clone &&
- git checkout -b dev &&
- : >path3 &&
- git add path3 &&
- test_tick &&
- git commit -m dev &&
- git push origin dev &&
- git push origin :dev &&
- test_must_fail git show-ref --verify refs/remotes/origin/dev
-'
-
-test_expect_success 'MKCOL sends directory names with trailing slashes' '
-
- ! grep "\"MKCOL.*[^/] HTTP/[^ ]*\"" < "$HTTPD_ROOT_PATH"/access.log
-
-'
-
-x1="[0-9a-f]"
-x2="$x1$x1"
-x5="$x1$x1$x1$x1$x1"
-x38="$x5$x5$x5$x5$x5$x5$x5$x1$x1$x1"
-x40="$x38$x2"
-
-test_expect_success 'PUT and MOVE sends object to URLs with SHA-1 hash suffix' '
- sed \
- -e "s/PUT /OP /" \
- -e "s/MOVE /OP /" \
- -e "s|/objects/$x2/${x38}_$x40|WANTED_PATH_REQUEST|" \
- "$HTTPD_ROOT_PATH"/access.log |
- grep -e "\"OP .*WANTED_PATH_REQUEST HTTP/[.0-9]*\" 20[0-9] "
-
-'
-
-test_http_push_nonff "$HTTPD_DOCUMENT_ROOT_PATH"/test_repo.git \
- "$ROOT_PATH"/test_repo_clone master
-
-test_expect_success 'push to password-protected repository (user in URL)' '
- test_commit pw-user &&
- set_askpass user@host pass@host &&
- git push "$HTTPD_URL_USER/auth/dumb/test_repo.git" HEAD &&
- git rev-parse --verify HEAD >expect &&
- git --git-dir="$HTTPD_DOCUMENT_ROOT_PATH/auth/dumb/test_repo.git" \
- rev-parse --verify HEAD >actual &&
- test_cmp expect actual
-'
-
-test_expect_failure 'user was prompted only once for password' '
- expect_askpass pass user@host
-'
-
-test_expect_failure 'push to password-protected repository (no user in URL)' '
- test_commit pw-nouser &&
- set_askpass user@host pass@host &&
- git push "$HTTPD_URL/auth/dumb/test_repo.git" HEAD &&
- expect_askpass both user@host
- git rev-parse --verify HEAD >expect &&
- git --git-dir="$HTTPD_DOCUMENT_ROOT_PATH/auth/dumb/test_repo.git" \
- rev-parse --verify HEAD >actual &&
- test_cmp expect actual
-'
-
-stop_httpd
-
-test_done
--- /dev/null
+#!/bin/sh
+#
+# Copyright (c) 2008 Clemens Buchacher <drizzd@aon.at>
+#
+
+test_description='test smart pushing over http via http-backend'
+. ./test-lib.sh
+
+if test -n "$NO_CURL"; then
+ skip_all='skipping test, git built without http support'
+ test_done
+fi
+
+ROOT_PATH="$PWD"
+. "$TEST_DIRECTORY"/lib-httpd.sh
+. "$TEST_DIRECTORY"/lib-terminal.sh
+start_httpd
+
+test_expect_success 'setup remote repository' '
+ cd "$ROOT_PATH" &&
+ mkdir test_repo &&
+ cd test_repo &&
+ git init &&
+ : >path1 &&
+ git add path1 &&
+ test_tick &&
+ git commit -m initial &&
+ cd - &&
+ git clone --bare test_repo test_repo.git &&
+ cd test_repo.git &&
+ git config http.receivepack true &&
+ git config core.logallrefupdates true &&
+ ORIG_HEAD=$(git rev-parse --verify HEAD) &&
+ cd - &&
+ mv test_repo.git "$HTTPD_DOCUMENT_ROOT_PATH"
+'
+
+setup_askpass_helper
+
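+# Expected apache access-log entries for the clone below: one info/refs GET
+# for the ref advertisement and one git-upload-pack POST for the pack data;
+# the trailing slash in the clone URL must not produce doubled slashes in
+# these request paths.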
+cat >exp <<EOF
+GET /smart/test_repo.git/info/refs?service=git-upload-pack HTTP/1.1 200
+POST /smart/test_repo.git/git-upload-pack HTTP/1.1 200
+EOF
+test_expect_success 'no empty path components' '
+ # In the URL, add a trailing slash, and see if git appends yet another
+ # slash.
+ cd "$ROOT_PATH" &&
+ git clone $HTTPD_URL/smart/test_repo.git/ test_repo_clone &&
+
+ sed -e "
+ s/^.* \"//
+ s/\"//
+ s/ [1-9][0-9]*\$//
+ s/^GET /GET /
+ " >act <"$HTTPD_ROOT_PATH"/access.log &&
+
+	# Clear the log, so that it does not affect the "used receive-pack
+	# service" test further down, which reads the log too.
+	#
+	# Do this before the comparison, so the log is reset even when
+	# test_cmp fails.
+ echo > "$HTTPD_ROOT_PATH"/access.log &&
+
+ test_cmp exp act
+'
+
+test_expect_success 'clone remote repository' '
+ rm -rf test_repo_clone &&
+ git clone $HTTPD_URL/smart/test_repo.git test_repo_clone &&
+ (
+ cd test_repo_clone && git config push.default matching
+ )
+'
+
+test_expect_success 'push to remote repository (standard)' '
+ cd "$ROOT_PATH"/test_repo_clone &&
+ : >path2 &&
+ git add path2 &&
+ test_tick &&
+ git commit -m path2 &&
+ HEAD=$(git rev-parse --verify HEAD) &&
+ GIT_CURL_VERBOSE=1 git push -v -v 2>err &&
+ ! grep "Expect: 100-continue" err &&
+ grep "POST git-receive-pack ([0-9]* bytes)" err &&
+ (cd "$HTTPD_DOCUMENT_ROOT_PATH"/test_repo.git &&
+ test $HEAD = $(git rev-parse --verify HEAD))
+'
+
+test_expect_success 'push already up-to-date' '
+ git push
+'
+
+test_expect_success 'create and delete remote branch' '
+ cd "$ROOT_PATH"/test_repo_clone &&
+ git checkout -b dev &&
+ : >path3 &&
+ git add path3 &&
+ test_tick &&
+ git commit -m dev &&
+ git push origin dev &&
+ git push origin :dev &&
+ test_must_fail git show-ref --verify refs/remotes/origin/dev
+'
+
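+# Install an update hook that rejects every ref update, so the next test
+# can check how a "remote rejected" push is reported over smart HTTP.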
+cat >"$HTTPD_DOCUMENT_ROOT_PATH/test_repo.git/hooks/update" <<EOF
+#!/bin/sh
+exit 1
+EOF
+chmod a+x "$HTTPD_DOCUMENT_ROOT_PATH/test_repo.git/hooks/update"
+
+cat >exp <<EOF
+remote: error: hook declined to update refs/heads/dev2
+To http://127.0.0.1:$LIB_HTTPD_PORT/smart/test_repo.git
+ ! [remote rejected] dev2 -> dev2 (hook declined)
+error: failed to push some refs to 'http://127.0.0.1:$LIB_HTTPD_PORT/smart/test_repo.git'
+EOF
+
+test_expect_success 'rejected update prints status' '
+ cd "$ROOT_PATH"/test_repo_clone &&
+ git checkout -b dev2 &&
+ : >path4 &&
+ git add path4 &&
+ test_tick &&
+ git commit -m dev2 &&
+ test_must_fail git push origin dev2 2>act &&
+ sed -e "/^remote: /s/ *$//" <act >cmp &&
+ test_cmp exp cmp
+'
+rm -f "$HTTPD_DOCUMENT_ROOT_PATH/test_repo.git/hooks/update"
+
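+# Cumulative access log expected so far: the clone (upload-pack GET + POST)
+# followed by one receive-pack GET + POST per push; the no-op "already
+# up-to-date" push issues only the GET.  The leading empty line is the
+# newline written when the log was cleared earlier.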
+cat >exp <<EOF
+
+GET /smart/test_repo.git/info/refs?service=git-upload-pack HTTP/1.1 200
+POST /smart/test_repo.git/git-upload-pack HTTP/1.1 200
+GET /smart/test_repo.git/info/refs?service=git-receive-pack HTTP/1.1 200
+POST /smart/test_repo.git/git-receive-pack HTTP/1.1 200
+GET /smart/test_repo.git/info/refs?service=git-receive-pack HTTP/1.1 200
+GET /smart/test_repo.git/info/refs?service=git-receive-pack HTTP/1.1 200
+POST /smart/test_repo.git/git-receive-pack HTTP/1.1 200
+GET /smart/test_repo.git/info/refs?service=git-receive-pack HTTP/1.1 200
+POST /smart/test_repo.git/git-receive-pack HTTP/1.1 200
+GET /smart/test_repo.git/info/refs?service=git-receive-pack HTTP/1.1 200
+POST /smart/test_repo.git/git-receive-pack HTTP/1.1 200
+EOF
+test_expect_success 'used receive-pack service' '
+ sed -e "
+ s/^.* \"//
+ s/\"//
+ s/ [1-9][0-9]*\$//
+ s/^GET /GET /
+ " >act <"$HTTPD_ROOT_PATH"/access.log &&
+ test_cmp exp act
+'
+
+test_http_push_nonff "$HTTPD_DOCUMENT_ROOT_PATH"/test_repo.git \
+ "$ROOT_PATH"/test_repo_clone master success
+
+test_expect_success 'push fails for non-fast-forward refs unmatched by remote helper' '
+ # create a dissimilarly-named remote ref so that git is unable to match the
+ # two refs (viz. local, remote) unless an explicit refspec is provided.
+	git push origin master:retsam &&
+
+ echo "change changed" > path2 &&
+ git commit -a -m path2 --amend &&
+
+ # push master too; this ensures there is at least one '"'push'"' command to
+ # the remote helper and triggers interaction with the helper.
+ test_must_fail git push -v origin +master master:retsam >output 2>&1'
+
+test_expect_success 'push fails for non-fast-forward refs unmatched by remote helper: remote output' '
+ grep "^ + [a-f0-9]*\.\.\.[a-f0-9]* *master -> master (forced update)$" output &&
+ grep "^ ! \[rejected\] *master -> retsam (non-fast-forward)$" output
+'
+
+test_expect_success 'push fails for non-fast-forward refs unmatched by remote helper: our output' '
+ test_i18ngrep "Updates were rejected because" \
+ output
+'
+
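+# Shrinking http.postbuffer to 4 bytes forces the request body over the
+# buffer limit, so the push is sent with chunked transfer encoding instead
+# of a fixed Content-Length, which the verbose trace lets us verify.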
+test_expect_success 'push (chunked)' '
+ git checkout master &&
+ test_commit commit path3 &&
+ HEAD=$(git rev-parse --verify HEAD) &&
+ test_config http.postbuffer 4 &&
+ git push -v -v origin $BRANCH 2>err &&
+ grep "POST git-receive-pack (chunked)" err &&
+ (cd "$HTTPD_DOCUMENT_ROOT_PATH"/test_repo.git &&
+ test $HEAD = $(git rev-parse --verify HEAD))
+'
+
+test_expect_success 'push --all can push to empty repo' '
+ d=$HTTPD_DOCUMENT_ROOT_PATH/empty-all.git &&
+ git init --bare "$d" &&
+ git --git-dir="$d" config http.receivepack true &&
+ git push --all "$HTTPD_URL"/smart/empty-all.git
+'
+
+test_expect_success 'push --mirror can push to empty repo' '
+ d=$HTTPD_DOCUMENT_ROOT_PATH/empty-mirror.git &&
+ git init --bare "$d" &&
+ git --git-dir="$d" config http.receivepack true &&
+ git push --mirror "$HTTPD_URL"/smart/empty-mirror.git
+'
+
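+# The next destinations are created with "clone --shared", so they borrow
+# their objects from test_repo.git via objects/info/alternates; pushing
+# into a repository that relies on alternates must still work.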
+test_expect_success 'push --all to repo with alternates' '
+ s=$HTTPD_DOCUMENT_ROOT_PATH/test_repo.git &&
+ d=$HTTPD_DOCUMENT_ROOT_PATH/alternates-all.git &&
+ git clone --bare --shared "$s" "$d" &&
+ git --git-dir="$d" config http.receivepack true &&
+ git --git-dir="$d" repack -adl &&
+ git push --all "$HTTPD_URL"/smart/alternates-all.git
+'
+
+test_expect_success 'push --mirror to repo with alternates' '
+ s=$HTTPD_DOCUMENT_ROOT_PATH/test_repo.git &&
+ d=$HTTPD_DOCUMENT_ROOT_PATH/alternates-mirror.git &&
+ git clone --bare --shared "$s" "$d" &&
+ git --git-dir="$d" config http.receivepack true &&
+ git --git-dir="$d" repack -adl &&
+ git push --mirror "$HTTPD_URL"/smart/alternates-mirror.git
+'
+
+test_expect_success TTY 'push shows progress when stderr is a tty' '
+ cd "$ROOT_PATH"/test_repo_clone &&
+ test_commit noisy &&
+ test_terminal git push >output 2>&1 &&
+ grep "^Writing objects" output
+'
+
+test_expect_success TTY 'push --quiet silences status and progress' '
+ cd "$ROOT_PATH"/test_repo_clone &&
+ test_commit quiet &&
+ test_terminal git push --quiet >output 2>&1 &&
+ test_cmp /dev/null output
+'
+
+test_expect_success TTY 'push --no-progress silences progress but not status' '
+ cd "$ROOT_PATH"/test_repo_clone &&
+ test_commit no-progress &&
+ test_terminal git push --no-progress >output 2>&1 &&
+ grep "^To http" output &&
+	! grep "^Writing objects" output
+'
+
+test_expect_success 'push --progress shows progress to non-tty' '
+ cd "$ROOT_PATH"/test_repo_clone &&
+ test_commit progress &&
+ git push --progress >output 2>&1 &&
+ grep "^To http" output &&
+ grep "^Writing objects" output
+'
+
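+# The server repository logs all ref updates, so every push writes a reflog
+# entry there; with no committer identity supplied over HTTP, the server
+# side falls back to "anonymous" with the client address in the email part.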
+test_expect_success 'http push gives sane defaults to reflog' '
+ cd "$ROOT_PATH"/test_repo_clone &&
+ test_commit reflog-test &&
+ git push "$HTTPD_URL"/smart/test_repo.git &&
+ git --git-dir="$HTTPD_DOCUMENT_ROOT_PATH/test_repo.git" \
+ log -g -1 --format="%gn <%ge>" >actual &&
+ echo "anonymous <anonymous@http.127.0.0.1>" >expect &&
+ test_cmp expect actual
+'
+
+test_expect_success 'http push respects GIT_COMMITTER_* in reflog' '
+ cd "$ROOT_PATH"/test_repo_clone &&
+ test_commit custom-reflog-test &&
+ git push "$HTTPD_URL"/smart_custom_env/test_repo.git &&
+ git --git-dir="$HTTPD_DOCUMENT_ROOT_PATH/test_repo.git" \
+ log -g -1 --format="%gn <%ge>" >actual &&
+ echo "Custom User <custom@example.com>" >expect &&
+ test_cmp expect actual
+'
+
+test_expect_success 'push over smart http with auth' '
+ cd "$ROOT_PATH/test_repo_clone" &&
+ echo push-auth-test >expect &&
+ test_commit push-auth-test &&
+ set_askpass user@host pass@host &&
+ git push "$HTTPD_URL"/auth/smart/test_repo.git &&
+ git --git-dir="$HTTPD_DOCUMENT_ROOT_PATH/test_repo.git" \
+ log -1 --format=%s >actual &&
+ expect_askpass both user@host &&
+ test_cmp expect actual
+'
+
+test_expect_success 'push to auth-only-for-push repo' '
+ cd "$ROOT_PATH/test_repo_clone" &&
+ echo push-half-auth >expect &&
+ test_commit push-half-auth &&
+ set_askpass user@host pass@host &&
+ git push "$HTTPD_URL"/auth-push/smart/test_repo.git &&
+ git --git-dir="$HTTPD_DOCUMENT_ROOT_PATH/test_repo.git" \
+ log -1 --format=%s >actual &&
+ expect_askpass both user@host &&
+ test_cmp expect actual
+'
+
+test_expect_success 'create repo without http.receivepack set' '
+ cd "$ROOT_PATH" &&
+ git init half-auth &&
+ (
+ cd half-auth &&
+ test_commit one
+ ) &&
+ git clone --bare half-auth "$HTTPD_DOCUMENT_ROOT_PATH/half-auth.git"
+'
+
+test_expect_success 'clone via half-auth-complete does not need password' '
+ cd "$ROOT_PATH" &&
+ set_askpass wrong &&
+ git clone "$HTTPD_URL"/half-auth-complete/smart/half-auth.git \
+ half-auth-clone &&
+ expect_askpass none
+'
+
+test_expect_success 'push into half-auth-complete requires password' '
+ cd "$ROOT_PATH/half-auth-clone" &&
+ echo two >expect &&
+ test_commit two &&
+ set_askpass user@host pass@host &&
+ git push "$HTTPD_URL/half-auth-complete/smart/half-auth.git" &&
+ git --git-dir="$HTTPD_DOCUMENT_ROOT_PATH/half-auth.git" \
+ log -1 --format=%s >actual &&
+ expect_askpass both user@host &&
+ test_cmp expect actual
+'
+
+stop_httpd
+test_done
+++ /dev/null
-#!/bin/sh
-#
-# Copyright (c) 2008 Clemens Buchacher <drizzd@aon.at>
-#
-
-test_description='test smart pushing over http via http-backend'
-. ./test-lib.sh
-
-if test -n "$NO_CURL"; then
- skip_all='skipping test, git built without http support'
- test_done
-fi
-
-ROOT_PATH="$PWD"
-LIB_HTTPD_PORT=${LIB_HTTPD_PORT-'5541'}
-. "$TEST_DIRECTORY"/lib-httpd.sh
-. "$TEST_DIRECTORY"/lib-terminal.sh
-start_httpd
-
-test_expect_success 'setup remote repository' '
- cd "$ROOT_PATH" &&
- mkdir test_repo &&
- cd test_repo &&
- git init &&
- : >path1 &&
- git add path1 &&
- test_tick &&
- git commit -m initial &&
- cd - &&
- git clone --bare test_repo test_repo.git &&
- cd test_repo.git &&
- git config http.receivepack true &&
- git config core.logallrefupdates true &&
- ORIG_HEAD=$(git rev-parse --verify HEAD) &&
- cd - &&
- mv test_repo.git "$HTTPD_DOCUMENT_ROOT_PATH"
-'
-
-setup_askpass_helper
-
-cat >exp <<EOF
-GET /smart/test_repo.git/info/refs?service=git-upload-pack HTTP/1.1 200
-POST /smart/test_repo.git/git-upload-pack HTTP/1.1 200
-EOF
-test_expect_success 'no empty path components' '
- # In the URL, add a trailing slash, and see if git appends yet another
- # slash.
- cd "$ROOT_PATH" &&
- git clone $HTTPD_URL/smart/test_repo.git/ test_repo_clone &&
-
- sed -e "
- s/^.* \"//
- s/\"//
- s/ [1-9][0-9]*\$//
- s/^GET /GET /
- " >act <"$HTTPD_ROOT_PATH"/access.log &&
-
- # Clear the log, so that it does not affect the "used receive-pack
- # service" test which reads the log too.
- #
- # We do this before the actual comparison to ensure the log is cleared.
- echo > "$HTTPD_ROOT_PATH"/access.log &&
-
- test_cmp exp act
-'
-
-test_expect_success 'clone remote repository' '
- rm -rf test_repo_clone &&
- git clone $HTTPD_URL/smart/test_repo.git test_repo_clone &&
- (
- cd test_repo_clone && git config push.default matching
- )
-'
-
-test_expect_success 'push to remote repository (standard)' '
- cd "$ROOT_PATH"/test_repo_clone &&
- : >path2 &&
- git add path2 &&
- test_tick &&
- git commit -m path2 &&
- HEAD=$(git rev-parse --verify HEAD) &&
- GIT_CURL_VERBOSE=1 git push -v -v 2>err &&
- ! grep "Expect: 100-continue" err &&
- grep "POST git-receive-pack ([0-9]* bytes)" err &&
- (cd "$HTTPD_DOCUMENT_ROOT_PATH"/test_repo.git &&
- test $HEAD = $(git rev-parse --verify HEAD))
-'
-
-test_expect_success 'push already up-to-date' '
- git push
-'
-
-test_expect_success 'create and delete remote branch' '
- cd "$ROOT_PATH"/test_repo_clone &&
- git checkout -b dev &&
- : >path3 &&
- git add path3 &&
- test_tick &&
- git commit -m dev &&
- git push origin dev &&
- git push origin :dev &&
- test_must_fail git show-ref --verify refs/remotes/origin/dev
-'
-
-cat >"$HTTPD_DOCUMENT_ROOT_PATH/test_repo.git/hooks/update" <<EOF
-#!/bin/sh
-exit 1
-EOF
-chmod a+x "$HTTPD_DOCUMENT_ROOT_PATH/test_repo.git/hooks/update"
-
-cat >exp <<EOF
-remote: error: hook declined to update refs/heads/dev2
-To http://127.0.0.1:$LIB_HTTPD_PORT/smart/test_repo.git
- ! [remote rejected] dev2 -> dev2 (hook declined)
-error: failed to push some refs to 'http://127.0.0.1:$LIB_HTTPD_PORT/smart/test_repo.git'
-EOF
-
-test_expect_success 'rejected update prints status' '
- cd "$ROOT_PATH"/test_repo_clone &&
- git checkout -b dev2 &&
- : >path4 &&
- git add path4 &&
- test_tick &&
- git commit -m dev2 &&
- test_must_fail git push origin dev2 2>act &&
- sed -e "/^remote: /s/ *$//" <act >cmp &&
- test_cmp exp cmp
-'
-rm -f "$HTTPD_DOCUMENT_ROOT_PATH/test_repo.git/hooks/update"
-
-cat >exp <<EOF
-
-GET /smart/test_repo.git/info/refs?service=git-upload-pack HTTP/1.1 200
-POST /smart/test_repo.git/git-upload-pack HTTP/1.1 200
-GET /smart/test_repo.git/info/refs?service=git-receive-pack HTTP/1.1 200
-POST /smart/test_repo.git/git-receive-pack HTTP/1.1 200
-GET /smart/test_repo.git/info/refs?service=git-receive-pack HTTP/1.1 200
-GET /smart/test_repo.git/info/refs?service=git-receive-pack HTTP/1.1 200
-POST /smart/test_repo.git/git-receive-pack HTTP/1.1 200
-GET /smart/test_repo.git/info/refs?service=git-receive-pack HTTP/1.1 200
-POST /smart/test_repo.git/git-receive-pack HTTP/1.1 200
-GET /smart/test_repo.git/info/refs?service=git-receive-pack HTTP/1.1 200
-POST /smart/test_repo.git/git-receive-pack HTTP/1.1 200
-EOF
-test_expect_success 'used receive-pack service' '
- sed -e "
- s/^.* \"//
- s/\"//
- s/ [1-9][0-9]*\$//
- s/^GET /GET /
- " >act <"$HTTPD_ROOT_PATH"/access.log &&
- test_cmp exp act
-'
-
-test_http_push_nonff "$HTTPD_DOCUMENT_ROOT_PATH"/test_repo.git \
- "$ROOT_PATH"/test_repo_clone master success
-
-test_expect_success 'push fails for non-fast-forward refs unmatched by remote helper' '
- # create a dissimilarly-named remote ref so that git is unable to match the
- # two refs (viz. local, remote) unless an explicit refspec is provided.
- git push origin master:retsam
-
- echo "change changed" > path2 &&
- git commit -a -m path2 --amend &&
-
- # push master too; this ensures there is at least one '"'push'"' command to
- # the remote helper and triggers interaction with the helper.
- test_must_fail git push -v origin +master master:retsam >output 2>&1'
-
-test_expect_success 'push fails for non-fast-forward refs unmatched by remote helper: remote output' '
- grep "^ + [a-f0-9]*\.\.\.[a-f0-9]* *master -> master (forced update)$" output &&
- grep "^ ! \[rejected\] *master -> retsam (non-fast-forward)$" output
-'
-
-test_expect_success 'push fails for non-fast-forward refs unmatched by remote helper: our output' '
- test_i18ngrep "Updates were rejected because" \
- output
-'
-
-test_expect_success 'push (chunked)' '
- git checkout master &&
- test_commit commit path3 &&
- HEAD=$(git rev-parse --verify HEAD) &&
- test_config http.postbuffer 4 &&
- git push -v -v origin $BRANCH 2>err &&
- grep "POST git-receive-pack (chunked)" err &&
- (cd "$HTTPD_DOCUMENT_ROOT_PATH"/test_repo.git &&
- test $HEAD = $(git rev-parse --verify HEAD))
-'
-
-test_expect_success 'push --all can push to empty repo' '
- d=$HTTPD_DOCUMENT_ROOT_PATH/empty-all.git &&
- git init --bare "$d" &&
- git --git-dir="$d" config http.receivepack true &&
- git push --all "$HTTPD_URL"/smart/empty-all.git
-'
-
-test_expect_success 'push --mirror can push to empty repo' '
- d=$HTTPD_DOCUMENT_ROOT_PATH/empty-mirror.git &&
- git init --bare "$d" &&
- git --git-dir="$d" config http.receivepack true &&
- git push --mirror "$HTTPD_URL"/smart/empty-mirror.git
-'
-
-test_expect_success 'push --all to repo with alternates' '
- s=$HTTPD_DOCUMENT_ROOT_PATH/test_repo.git &&
- d=$HTTPD_DOCUMENT_ROOT_PATH/alternates-all.git &&
- git clone --bare --shared "$s" "$d" &&
- git --git-dir="$d" config http.receivepack true &&
- git --git-dir="$d" repack -adl &&
- git push --all "$HTTPD_URL"/smart/alternates-all.git
-'
-
-test_expect_success 'push --mirror to repo with alternates' '
- s=$HTTPD_DOCUMENT_ROOT_PATH/test_repo.git &&
- d=$HTTPD_DOCUMENT_ROOT_PATH/alternates-mirror.git &&
- git clone --bare --shared "$s" "$d" &&
- git --git-dir="$d" config http.receivepack true &&
- git --git-dir="$d" repack -adl &&
- git push --mirror "$HTTPD_URL"/smart/alternates-mirror.git
-'
-
-test_expect_success TTY 'push shows progress when stderr is a tty' '
- cd "$ROOT_PATH"/test_repo_clone &&
- test_commit noisy &&
- test_terminal git push >output 2>&1 &&
- grep "^Writing objects" output
-'
-
-test_expect_success TTY 'push --quiet silences status and progress' '
- cd "$ROOT_PATH"/test_repo_clone &&
- test_commit quiet &&
- test_terminal git push --quiet >output 2>&1 &&
- test_cmp /dev/null output
-'
-
-test_expect_success TTY 'push --no-progress silences progress but not status' '
- cd "$ROOT_PATH"/test_repo_clone &&
- test_commit no-progress &&
- test_terminal git push --no-progress >output 2>&1 &&
- grep "^To http" output &&
- ! grep "^Writing objects"
-'
-
-test_expect_success 'push --progress shows progress to non-tty' '
- cd "$ROOT_PATH"/test_repo_clone &&
- test_commit progress &&
- git push --progress >output 2>&1 &&
- grep "^To http" output &&
- grep "^Writing objects" output
-'
-
-test_expect_success 'http push gives sane defaults to reflog' '
- cd "$ROOT_PATH"/test_repo_clone &&
- test_commit reflog-test &&
- git push "$HTTPD_URL"/smart/test_repo.git &&
- git --git-dir="$HTTPD_DOCUMENT_ROOT_PATH/test_repo.git" \
- log -g -1 --format="%gn <%ge>" >actual &&
- echo "anonymous <anonymous@http.127.0.0.1>" >expect &&
- test_cmp expect actual
-'
-
-test_expect_success 'http push respects GIT_COMMITTER_* in reflog' '
- cd "$ROOT_PATH"/test_repo_clone &&
- test_commit custom-reflog-test &&
- git push "$HTTPD_URL"/smart_custom_env/test_repo.git &&
- git --git-dir="$HTTPD_DOCUMENT_ROOT_PATH/test_repo.git" \
- log -g -1 --format="%gn <%ge>" >actual &&
- echo "Custom User <custom@example.com>" >expect &&
- test_cmp expect actual
-'
-
-test_expect_success 'push over smart http with auth' '
- cd "$ROOT_PATH/test_repo_clone" &&
- echo push-auth-test >expect &&
- test_commit push-auth-test &&
- set_askpass user@host pass@host &&
- git push "$HTTPD_URL"/auth/smart/test_repo.git &&
- git --git-dir="$HTTPD_DOCUMENT_ROOT_PATH/test_repo.git" \
- log -1 --format=%s >actual &&
- expect_askpass both user@host &&
- test_cmp expect actual
-'
-
-test_expect_success 'push to auth-only-for-push repo' '
- cd "$ROOT_PATH/test_repo_clone" &&
- echo push-half-auth >expect &&
- test_commit push-half-auth &&
- set_askpass user@host pass@host &&
- git push "$HTTPD_URL"/auth-push/smart/test_repo.git &&
- git --git-dir="$HTTPD_DOCUMENT_ROOT_PATH/test_repo.git" \
- log -1 --format=%s >actual &&
- expect_askpass both user@host &&
- test_cmp expect actual
-'
-
-test_expect_success 'create repo without http.receivepack set' '
- cd "$ROOT_PATH" &&
- git init half-auth &&
- (
- cd half-auth &&
- test_commit one
- ) &&
- git clone --bare half-auth "$HTTPD_DOCUMENT_ROOT_PATH/half-auth.git"
-'
-
-test_expect_success 'clone via half-auth-complete does not need password' '
- cd "$ROOT_PATH" &&
- set_askpass wrong &&
- git clone "$HTTPD_URL"/half-auth-complete/smart/half-auth.git \
- half-auth-clone &&
- expect_askpass none
-'
-
-test_expect_success 'push into half-auth-complete requires password' '
- cd "$ROOT_PATH/half-auth-clone" &&
- echo two >expect &&
- test_commit two &&
- set_askpass user@host pass@host &&
- git push "$HTTPD_URL/half-auth-complete/smart/half-auth.git" &&
- git --git-dir="$HTTPD_DOCUMENT_ROOT_PATH/half-auth.git" \
- log -1 --format=%s >actual &&
- expect_askpass both user@host &&
- test_cmp expect actual
-'
-
-stop_httpd
-test_done
--- /dev/null
+#!/bin/sh
+
+test_description='test dumb fetching over http via static file'
+. ./test-lib.sh
+
+if test -n "$NO_CURL"; then
+ skip_all='skipping test, git built without http support'
+ test_done
+fi
+
+. "$TEST_DIRECTORY"/lib-httpd.sh
+start_httpd
+
+test_expect_success 'setup repository' '
+ git config push.default matching &&
+ echo content1 >file &&
+ git add file &&
+	git commit -m one &&
+ echo content2 >file &&
+ git add file &&
+ git commit -m two
+'
+
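+# Dumb HTTP clients depend on the files written by update-server-info, so
+# the post-update hook below regenerates info/refs and objects/info/packs
+# after every push into the served repository.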
+test_expect_success 'create http-accessible bare repository with loose objects' '
+ cp -R .git "$HTTPD_DOCUMENT_ROOT_PATH/repo.git" &&
+ (cd "$HTTPD_DOCUMENT_ROOT_PATH/repo.git" &&
+ git config core.bare true &&
+ mkdir -p hooks &&
+ echo "exec git update-server-info" >hooks/post-update &&
+ chmod +x hooks/post-update &&
+ hooks/post-update
+ ) &&
+ git remote add public "$HTTPD_DOCUMENT_ROOT_PATH/repo.git" &&
+ git push public master:master
+'
+
+test_expect_success 'clone http repository' '
+ git clone $HTTPD_URL/dumb/repo.git clone-tmpl &&
+ cp -R clone-tmpl clone &&
+ test_cmp file clone/file
+'
+
+test_expect_success 'create password-protected repository' '
+ mkdir -p "$HTTPD_DOCUMENT_ROOT_PATH/auth/dumb/" &&
+ cp -Rf "$HTTPD_DOCUMENT_ROOT_PATH/repo.git" \
+ "$HTTPD_DOCUMENT_ROOT_PATH/auth/dumb/repo.git"
+'
+
+setup_askpass_helper
+
+test_expect_success 'cloning password-protected repository can fail' '
+ set_askpass wrong &&
+ test_must_fail git clone "$HTTPD_URL/auth/dumb/repo.git" clone-auth-fail &&
+ expect_askpass both wrong
+'
+
+test_expect_success 'http auth can use user/pass in URL' '
+ set_askpass wrong &&
+ git clone "$HTTPD_URL_USER_PASS/auth/dumb/repo.git" clone-auth-none &&
+ expect_askpass none
+'
+
+test_expect_success 'http auth can use just user in URL' '
+ set_askpass wrong pass@host &&
+ git clone "$HTTPD_URL_USER/auth/dumb/repo.git" clone-auth-pass &&
+ expect_askpass pass user@host
+'
+
+test_expect_success 'http auth can request both user and pass' '
+ set_askpass user@host pass@host &&
+ git clone "$HTTPD_URL/auth/dumb/repo.git" clone-auth-both &&
+ expect_askpass both user@host
+'
+
+test_expect_success 'http auth respects credential helper config' '
+ test_config_global credential.helper "!f() {
+ cat >/dev/null
+ echo username=user@host
+ echo password=pass@host
+ }; f" &&
+ set_askpass wrong &&
+ git clone "$HTTPD_URL/auth/dumb/repo.git" clone-auth-helper &&
+ expect_askpass none
+'
+
+test_expect_success 'http auth can get username from config' '
+ test_config_global "credential.$HTTPD_URL.username" user@host &&
+ set_askpass wrong pass@host &&
+ git clone "$HTTPD_URL/auth/dumb/repo.git" clone-auth-user &&
+ expect_askpass pass user@host
+'
+
+test_expect_success 'configured username does not override URL' '
+ test_config_global "credential.$HTTPD_URL.username" wrong &&
+ set_askpass wrong pass@host &&
+ git clone "$HTTPD_URL_USER/auth/dumb/repo.git" clone-auth-user2 &&
+ expect_askpass pass user@host
+'
+
+test_expect_success 'fetch changes via http' '
+ echo content >>file &&
+ git commit -a -m two &&
+ git push public &&
+ (cd clone && git pull) &&
+ test_cmp file clone/file
+'
+
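+# "git http-fetch" is the low-level dumb-protocol downloader: it takes a
+# commit to fetch, a ref to write (-w), and the base URL, and walks the
+# needed objects over plain HTTP.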
+test_expect_success 'fetch changes via manual http-fetch' '
+ cp -R clone-tmpl clone2 &&
+
+ HEAD=$(git rev-parse --verify HEAD) &&
+ (cd clone2 &&
+ git http-fetch -a -w heads/master-new $HEAD $(git config remote.origin.url) &&
+ git checkout master-new &&
+ test $HEAD = $(git rev-parse --verify HEAD)) &&
+ test_cmp file clone2/file
+'
+
+test_expect_success 'http remote detects correct HEAD' '
+ git push public master:other &&
+ (cd clone &&
+ git remote set-head origin -d &&
+ git remote set-head origin -a &&
+ git symbolic-ref refs/remotes/origin/HEAD > output &&
+ echo refs/remotes/origin/master > expect &&
+ test_cmp expect output
+ )
+'
+
+test_expect_success 'fetch packed objects' '
+ cp -R "$HTTPD_DOCUMENT_ROOT_PATH"/repo.git "$HTTPD_DOCUMENT_ROOT_PATH"/repo_pack.git &&
+ (cd "$HTTPD_DOCUMENT_ROOT_PATH"/repo_pack.git &&
+ git --bare repack -a -d
+ ) &&
+ git clone $HTTPD_URL/dumb/repo_pack.git
+'
+
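+# The corruption tests below overwrite bytes in the middle of the pack (or
+# its idx) with dd; the dumb-HTTP client must notice the damage and end up
+# without a usable pack instead of silently accepting corrupt data.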
+test_expect_success 'fetch notices corrupt pack' '
+ cp -R "$HTTPD_DOCUMENT_ROOT_PATH"/repo_pack.git "$HTTPD_DOCUMENT_ROOT_PATH"/repo_bad1.git &&
+ (cd "$HTTPD_DOCUMENT_ROOT_PATH"/repo_bad1.git &&
+ p=`ls objects/pack/pack-*.pack` &&
+ chmod u+w $p &&
+ printf %0256d 0 | dd of=$p bs=256 count=1 seek=1 conv=notrunc
+ ) &&
+ mkdir repo_bad1.git &&
+ (cd repo_bad1.git &&
+ git --bare init &&
+ test_must_fail git --bare fetch $HTTPD_URL/dumb/repo_bad1.git &&
+ test 0 = `ls objects/pack/pack-*.pack | wc -l`
+ )
+'
+
+test_expect_success 'fetch notices corrupt idx' '
+ cp -R "$HTTPD_DOCUMENT_ROOT_PATH"/repo_pack.git "$HTTPD_DOCUMENT_ROOT_PATH"/repo_bad2.git &&
+ (cd "$HTTPD_DOCUMENT_ROOT_PATH"/repo_bad2.git &&
+ p=`ls objects/pack/pack-*.idx` &&
+ chmod u+w $p &&
+ printf %0256d 0 | dd of=$p bs=256 count=1 seek=1 conv=notrunc
+ ) &&
+ mkdir repo_bad2.git &&
+ (cd repo_bad2.git &&
+ git --bare init &&
+ test_must_fail git --bare fetch $HTTPD_URL/dumb/repo_bad2.git &&
+ test 0 = `ls objects/pack | wc -l`
+ )
+'
+
+test_expect_success 'did not use upload-pack service' '
+	# grep is expected to find nothing, so its (failing) exit status is
+	# deliberately left out of the &&-chain.
+	grep "/git-upload-pack" <"$HTTPD_ROOT_PATH"/access.log >act
+	: >exp &&
+	test_cmp exp act
+'
+
+stop_httpd
+test_done
+++ /dev/null
-#!/bin/sh
-
-test_description='test dumb fetching over http via static file'
-. ./test-lib.sh
-
-if test -n "$NO_CURL"; then
- skip_all='skipping test, git built without http support'
- test_done
-fi
-
-LIB_HTTPD_PORT=${LIB_HTTPD_PORT-'5550'}
-. "$TEST_DIRECTORY"/lib-httpd.sh
-start_httpd
-
-test_expect_success 'setup repository' '
- git config push.default matching &&
- echo content1 >file &&
- git add file &&
- git commit -m one
- echo content2 >file &&
- git add file &&
- git commit -m two
-'
-
-test_expect_success 'create http-accessible bare repository with loose objects' '
- cp -R .git "$HTTPD_DOCUMENT_ROOT_PATH/repo.git" &&
- (cd "$HTTPD_DOCUMENT_ROOT_PATH/repo.git" &&
- git config core.bare true &&
- mkdir -p hooks &&
- echo "exec git update-server-info" >hooks/post-update &&
- chmod +x hooks/post-update &&
- hooks/post-update
- ) &&
- git remote add public "$HTTPD_DOCUMENT_ROOT_PATH/repo.git" &&
- git push public master:master
-'
-
-test_expect_success 'clone http repository' '
- git clone $HTTPD_URL/dumb/repo.git clone-tmpl &&
- cp -R clone-tmpl clone &&
- test_cmp file clone/file
-'
-
-test_expect_success 'create password-protected repository' '
- mkdir -p "$HTTPD_DOCUMENT_ROOT_PATH/auth/dumb/" &&
- cp -Rf "$HTTPD_DOCUMENT_ROOT_PATH/repo.git" \
- "$HTTPD_DOCUMENT_ROOT_PATH/auth/dumb/repo.git"
-'
-
-setup_askpass_helper
-
-test_expect_success 'cloning password-protected repository can fail' '
- set_askpass wrong &&
- test_must_fail git clone "$HTTPD_URL/auth/dumb/repo.git" clone-auth-fail &&
- expect_askpass both wrong
-'
-
-test_expect_success 'http auth can use user/pass in URL' '
- set_askpass wrong &&
- git clone "$HTTPD_URL_USER_PASS/auth/dumb/repo.git" clone-auth-none &&
- expect_askpass none
-'
-
-test_expect_success 'http auth can use just user in URL' '
- set_askpass wrong pass@host &&
- git clone "$HTTPD_URL_USER/auth/dumb/repo.git" clone-auth-pass &&
- expect_askpass pass user@host
-'
-
-test_expect_success 'http auth can request both user and pass' '
- set_askpass user@host pass@host &&
- git clone "$HTTPD_URL/auth/dumb/repo.git" clone-auth-both &&
- expect_askpass both user@host
-'
-
-test_expect_success 'http auth respects credential helper config' '
- test_config_global credential.helper "!f() {
- cat >/dev/null
- echo username=user@host
- echo password=pass@host
- }; f" &&
- set_askpass wrong &&
- git clone "$HTTPD_URL/auth/dumb/repo.git" clone-auth-helper &&
- expect_askpass none
-'
-
-test_expect_success 'http auth can get username from config' '
- test_config_global "credential.$HTTPD_URL.username" user@host &&
- set_askpass wrong pass@host &&
- git clone "$HTTPD_URL/auth/dumb/repo.git" clone-auth-user &&
- expect_askpass pass user@host
-'
-
-test_expect_success 'configured username does not override URL' '
- test_config_global "credential.$HTTPD_URL.username" wrong &&
- set_askpass wrong pass@host &&
- git clone "$HTTPD_URL_USER/auth/dumb/repo.git" clone-auth-user2 &&
- expect_askpass pass user@host
-'
-
-test_expect_success 'fetch changes via http' '
- echo content >>file &&
- git commit -a -m two &&
- git push public &&
- (cd clone && git pull) &&
- test_cmp file clone/file
-'
-
-test_expect_success 'fetch changes via manual http-fetch' '
- cp -R clone-tmpl clone2 &&
-
- HEAD=$(git rev-parse --verify HEAD) &&
- (cd clone2 &&
- git http-fetch -a -w heads/master-new $HEAD $(git config remote.origin.url) &&
- git checkout master-new &&
- test $HEAD = $(git rev-parse --verify HEAD)) &&
- test_cmp file clone2/file
-'
-
-test_expect_success 'http remote detects correct HEAD' '
- git push public master:other &&
- (cd clone &&
- git remote set-head origin -d &&
- git remote set-head origin -a &&
- git symbolic-ref refs/remotes/origin/HEAD > output &&
- echo refs/remotes/origin/master > expect &&
- test_cmp expect output
- )
-'
-
-test_expect_success 'fetch packed objects' '
- cp -R "$HTTPD_DOCUMENT_ROOT_PATH"/repo.git "$HTTPD_DOCUMENT_ROOT_PATH"/repo_pack.git &&
- (cd "$HTTPD_DOCUMENT_ROOT_PATH"/repo_pack.git &&
- git --bare repack -a -d
- ) &&
- git clone $HTTPD_URL/dumb/repo_pack.git
-'
-
-test_expect_success 'fetch notices corrupt pack' '
- cp -R "$HTTPD_DOCUMENT_ROOT_PATH"/repo_pack.git "$HTTPD_DOCUMENT_ROOT_PATH"/repo_bad1.git &&
- (cd "$HTTPD_DOCUMENT_ROOT_PATH"/repo_bad1.git &&
- p=`ls objects/pack/pack-*.pack` &&
- chmod u+w $p &&
- printf %0256d 0 | dd of=$p bs=256 count=1 seek=1 conv=notrunc
- ) &&
- mkdir repo_bad1.git &&
- (cd repo_bad1.git &&
- git --bare init &&
- test_must_fail git --bare fetch $HTTPD_URL/dumb/repo_bad1.git &&
- test 0 = `ls objects/pack/pack-*.pack | wc -l`
- )
-'
-
-test_expect_success 'fetch notices corrupt idx' '
- cp -R "$HTTPD_DOCUMENT_ROOT_PATH"/repo_pack.git "$HTTPD_DOCUMENT_ROOT_PATH"/repo_bad2.git &&
- (cd "$HTTPD_DOCUMENT_ROOT_PATH"/repo_bad2.git &&
- p=`ls objects/pack/pack-*.idx` &&
- chmod u+w $p &&
- printf %0256d 0 | dd of=$p bs=256 count=1 seek=1 conv=notrunc
- ) &&
- mkdir repo_bad2.git &&
- (cd repo_bad2.git &&
- git --bare init &&
- test_must_fail git --bare fetch $HTTPD_URL/dumb/repo_bad2.git &&
- test 0 = `ls objects/pack | wc -l`
- )
-'
-
-test_expect_success 'did not use upload-pack service' '
- grep '/git-upload-pack' <"$HTTPD_ROOT_PATH"/access.log >act
- : >exp
- test_cmp exp act
-'
-
-stop_httpd
-test_done
--- /dev/null
+#!/bin/sh
+
+test_description='test smart fetching over http via http-backend'
+. ./test-lib.sh
+
+if test -n "$NO_CURL"; then
+ skip_all='skipping test, git built without http support'
+ test_done
+fi
+
+. "$TEST_DIRECTORY"/lib-httpd.sh
+start_httpd
+
+test_expect_success 'setup repository' '
+ git config push.default matching &&
+ echo content >file &&
+ git add file &&
+ git commit -m one
+'
+
+test_expect_success 'create http-accessible bare repository' '
+ mkdir "$HTTPD_DOCUMENT_ROOT_PATH/repo.git" &&
+ (cd "$HTTPD_DOCUMENT_ROOT_PATH/repo.git" &&
+ git --bare init
+ ) &&
+ git remote add public "$HTTPD_DOCUMENT_ROOT_PATH/repo.git" &&
+ git push public master:master
+'
+
+setup_askpass_helper
+
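+# Expected curl trace for a smart-HTTP clone, with ">" marking request lines
+# and "<" marking response lines; volatile headers are normalized or dropped
+# by the sed script in the test below.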
+cat >exp <<EOF
+> GET /smart/repo.git/info/refs?service=git-upload-pack HTTP/1.1
+> Accept: */*
+> Accept-Encoding: gzip
+> Pragma: no-cache
+< HTTP/1.1 200 OK
+< Pragma: no-cache
+< Cache-Control: no-cache, max-age=0, must-revalidate
+< Content-Type: application/x-git-upload-pack-advertisement
+> POST /smart/repo.git/git-upload-pack HTTP/1.1
+> Accept-Encoding: gzip
+> Content-Type: application/x-git-upload-pack-request
+> Accept: application/x-git-upload-pack-result
+> Content-Length: xxx
+< HTTP/1.1 200 OK
+< Pragma: no-cache
+< Cache-Control: no-cache, max-age=0, must-revalidate
+< Content-Type: application/x-git-upload-pack-result
+EOF
+test_expect_success 'clone http repository' '
+ GIT_CURL_VERBOSE=1 git clone --quiet $HTTPD_URL/smart/repo.git clone 2>err &&
+ test_cmp file clone/file &&
+ tr '\''\015'\'' Q <err |
+ sed -e "
+ s/Q\$//
+ /^[*] /d
+ /^$/d
+ /^< $/d
+
+ /^[^><]/{
+ s/^/> /
+ }
+
+ /^> User-Agent: /d
+ /^> Host: /d
+ /^> POST /,$ {
+ /^> Accept: [*]\\/[*]/d
+ }
+ s/^> Content-Length: .*/> Content-Length: xxx/
+ /^> 00..want /d
+ /^> 00.*done/d
+
+ /^< Server: /d
+ /^< Expires: /d
+ /^< Date: /d
+ /^< Content-Length: /d
+ /^< Transfer-Encoding: /d
+ " >act &&
+ test_cmp exp act
+'
+
+test_expect_success 'fetch changes via http' '
+ echo content >>file &&
+ git commit -a -m two &&
+	git push public &&
+ (cd clone && git pull) &&
+ test_cmp file clone/file
+'
+
+cat >exp <<EOF
+GET /smart/repo.git/info/refs?service=git-upload-pack HTTP/1.1 200
+POST /smart/repo.git/git-upload-pack HTTP/1.1 200
+GET /smart/repo.git/info/refs?service=git-upload-pack HTTP/1.1 200
+POST /smart/repo.git/git-upload-pack HTTP/1.1 200
+EOF
+test_expect_success 'used upload-pack service' '
+ sed -e "
+ s/^.* \"//
+ s/\"//
+ s/ [1-9][0-9]*\$//
+ s/^GET /GET /
+ " >act <"$HTTPD_ROOT_PATH"/access.log &&
+ test_cmp exp act
+'
+
+test_expect_success 'follow redirects (301)' '
+ git clone $HTTPD_URL/smart-redir-perm/repo.git --quiet repo-p
+'
+
+test_expect_success 'follow redirects (302)' '
+ git clone $HTTPD_URL/smart-redir-temp/repo.git --quiet repo-t
+'
+
+test_expect_success 'redirects re-root further requests' '
+ git clone $HTTPD_URL/smart-redir-limited/repo.git repo-redir-limited
+'
+
+test_expect_success 'clone from password-protected repository' '
+ echo two >expect &&
+ set_askpass user@host pass@host &&
+ git clone --bare "$HTTPD_URL/auth/smart/repo.git" smart-auth &&
+ expect_askpass both user@host &&
+ git --git-dir=smart-auth log -1 --format=%s >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'clone from auth-only-for-push repository' '
+ echo two >expect &&
+ set_askpass wrong &&
+ git clone --bare "$HTTPD_URL/auth-push/smart/repo.git" smart-noauth &&
+ expect_askpass none &&
+ git --git-dir=smart-noauth log -1 --format=%s >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'clone from auth-only-for-objects repository' '
+ echo two >expect &&
+ set_askpass user@host pass@host &&
+ git clone --bare "$HTTPD_URL/auth-fetch/smart/repo.git" half-auth &&
+ expect_askpass both user@host &&
+ git --git-dir=half-auth log -1 --format=%s >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'no-op half-auth fetch does not require a password' '
+ set_askpass wrong &&
+ git --git-dir=half-auth fetch &&
+ expect_askpass none
+'
+
+test_expect_success 'redirects send auth to new location' '
+ set_askpass user@host pass@host &&
+ git -c credential.useHttpPath=true \
+ clone $HTTPD_URL/smart-redir-auth/repo.git repo-redir-auth &&
+ expect_askpass both user@host auth/smart/repo.git
+'
+
+test_expect_success 'disable dumb http on server' '
+ git --git-dir="$HTTPD_DOCUMENT_ROOT_PATH/repo.git" \
+ config http.getanyfile false
+'
+
+test_expect_success 'GIT_SMART_HTTP can disable smart http' '
+ (GIT_SMART_HTTP=0 &&
+ export GIT_SMART_HTTP &&
+ cd clone &&
+ test_must_fail git fetch)
+'
+
+test_expect_success 'invalid Content-Type rejected' '
+	test_must_fail git clone $HTTPD_URL/broken_smart/repo.git 2>actual &&
+ grep "not valid:" actual
+'
+
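+# Ref namespaces (refs/namespaces/ns/...) let one repository serve several
+# logical repositories; the smart_namespace URL is served with the "ns"
+# namespace selected on the server side, so clones through it must see the
+# "namespaced" commit at HEAD.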
+test_expect_success 'create namespaced refs' '
+ test_commit namespaced &&
+ git push public HEAD:refs/namespaces/ns/refs/heads/master &&
+ git --git-dir="$HTTPD_DOCUMENT_ROOT_PATH/repo.git" \
+ symbolic-ref refs/namespaces/ns/HEAD refs/namespaces/ns/refs/heads/master
+'
+
+test_expect_success 'smart clone respects namespace' '
+ git clone "$HTTPD_URL/smart_namespace/repo.git" ns-smart &&
+ echo namespaced >expect &&
+ git --git-dir=ns-smart/.git log -1 --format=%s >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'dumb clone via http-backend respects namespace' '
+ git --git-dir="$HTTPD_DOCUMENT_ROOT_PATH/repo.git" \
+ config http.getanyfile true &&
+ GIT_SMART_HTTP=0 git clone \
+ "$HTTPD_URL/smart_namespace/repo.git" ns-dumb &&
+ echo namespaced >expect &&
+ git --git-dir=ns-dumb/.git log -1 --format=%s >actual &&
+ test_cmp expect actual
+'
+
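+# cookies.txt uses the curl/Netscape cookie-jar format (domain, subdomain
+# flag, path, secure flag, expiry, name, value); the test checks that the
+# cookie the server sets during ls-remote is saved alongside the
+# pre-existing entry when http.savecookies is enabled.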
+cat >cookies.txt <<EOF
+127.0.0.1 FALSE /smart_cookies/ FALSE 0 othername othervalue
+EOF
+cat >expect_cookies.txt <<EOF
+
+127.0.0.1 FALSE /smart_cookies/ FALSE 0 othername othervalue
+127.0.0.1 FALSE /smart_cookies/repo.git/info/ FALSE 0 name value
+EOF
+test_expect_success 'cookies stored in http.cookiefile when http.savecookies set' '
+ git config http.cookiefile cookies.txt &&
+ git config http.savecookies true &&
+ git ls-remote $HTTPD_URL/smart_cookies/repo.git master &&
+	tail -3 cookies.txt >cookies_tail.txt &&
+ test_cmp expect_cookies.txt cookies_tail.txt
+'
+
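+# The EXPENSIVE test pair builds 50,000 dangling commits with fast-import,
+# turns each of them into a tag by appending to packed-refs directly, and
+# then clones over smart HTTP to make sure the huge ref advertisement does
+# not overflow any OS command-line limits.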
+test -n "$GIT_TEST_LONG" && test_set_prereq EXPENSIVE
+
+test_expect_success EXPENSIVE 'create 50,000 tags in the repo' '
+ (
+ cd "$HTTPD_DOCUMENT_ROOT_PATH/repo.git" &&
+ for i in `test_seq 50000`
+ do
+ echo "commit refs/heads/too-many-refs"
+ echo "mark :$i"
+ echo "committer git <git@example.com> $i +0000"
+ echo "data 0"
+ echo "M 644 inline bla.txt"
+ echo "data 4"
+ echo "bla"
+ # make every commit dangling by always
+ # rewinding the branch after each commit
+ echo "reset refs/heads/too-many-refs"
+ echo "from :1"
+ done | git fast-import --export-marks=marks &&
+
+ # now assign tags to all the dangling commits we created above
+ tag=$(perl -e "print \"bla\" x 30") &&
+ sed -e "s|^:\([^ ]*\) \(.*\)$|\2 refs/tags/$tag-\1|" <marks >>packed-refs
+ )
+'
+
+test_expect_success EXPENSIVE 'clone the 50,000 tag repo to check OS command line overflow' '
+ git clone $HTTPD_URL/smart/repo.git too-many-refs 2>err &&
+ test_line_count = 0 err &&
+ (
+ cd too-many-refs &&
+ test $(git for-each-ref refs/tags | wc -l) = 50000
+ )
+'
+
+stop_httpd
+test_done
+++ /dev/null
-#!/bin/sh
-
-test_description='test smart fetching over http via http-backend'
-. ./test-lib.sh
-
-if test -n "$NO_CURL"; then
- skip_all='skipping test, git built without http support'
- test_done
-fi
-
-LIB_HTTPD_PORT=${LIB_HTTPD_PORT-'5551'}
-. "$TEST_DIRECTORY"/lib-httpd.sh
-start_httpd
-
-test_expect_success 'setup repository' '
- git config push.default matching &&
- echo content >file &&
- git add file &&
- git commit -m one
-'
-
-test_expect_success 'create http-accessible bare repository' '
- mkdir "$HTTPD_DOCUMENT_ROOT_PATH/repo.git" &&
- (cd "$HTTPD_DOCUMENT_ROOT_PATH/repo.git" &&
- git --bare init
- ) &&
- git remote add public "$HTTPD_DOCUMENT_ROOT_PATH/repo.git" &&
- git push public master:master
-'
-
-setup_askpass_helper
-
-cat >exp <<EOF
-> GET /smart/repo.git/info/refs?service=git-upload-pack HTTP/1.1
-> Accept: */*
-> Accept-Encoding: gzip
-> Pragma: no-cache
-< HTTP/1.1 200 OK
-< Pragma: no-cache
-< Cache-Control: no-cache, max-age=0, must-revalidate
-< Content-Type: application/x-git-upload-pack-advertisement
-> POST /smart/repo.git/git-upload-pack HTTP/1.1
-> Accept-Encoding: gzip
-> Content-Type: application/x-git-upload-pack-request
-> Accept: application/x-git-upload-pack-result
-> Content-Length: xxx
-< HTTP/1.1 200 OK
-< Pragma: no-cache
-< Cache-Control: no-cache, max-age=0, must-revalidate
-< Content-Type: application/x-git-upload-pack-result
-EOF
-test_expect_success 'clone http repository' '
- GIT_CURL_VERBOSE=1 git clone --quiet $HTTPD_URL/smart/repo.git clone 2>err &&
- test_cmp file clone/file &&
- tr '\''\015'\'' Q <err |
- sed -e "
- s/Q\$//
- /^[*] /d
- /^$/d
- /^< $/d
-
- /^[^><]/{
- s/^/> /
- }
-
- /^> User-Agent: /d
- /^> Host: /d
- /^> POST /,$ {
- /^> Accept: [*]\\/[*]/d
- }
- s/^> Content-Length: .*/> Content-Length: xxx/
- /^> 00..want /d
- /^> 00.*done/d
-
- /^< Server: /d
- /^< Expires: /d
- /^< Date: /d
- /^< Content-Length: /d
- /^< Transfer-Encoding: /d
- " >act &&
- test_cmp exp act
-'
-
-test_expect_success 'fetch changes via http' '
- echo content >>file &&
- git commit -a -m two &&
- git push public
- (cd clone && git pull) &&
- test_cmp file clone/file
-'
-
-cat >exp <<EOF
-GET /smart/repo.git/info/refs?service=git-upload-pack HTTP/1.1 200
-POST /smart/repo.git/git-upload-pack HTTP/1.1 200
-GET /smart/repo.git/info/refs?service=git-upload-pack HTTP/1.1 200
-POST /smart/repo.git/git-upload-pack HTTP/1.1 200
-EOF
-test_expect_success 'used upload-pack service' '
- sed -e "
- s/^.* \"//
- s/\"//
- s/ [1-9][0-9]*\$//
- s/^GET /GET /
- " >act <"$HTTPD_ROOT_PATH"/access.log &&
- test_cmp exp act
-'
-
-test_expect_success 'follow redirects (301)' '
- git clone $HTTPD_URL/smart-redir-perm/repo.git --quiet repo-p
-'
-
-test_expect_success 'follow redirects (302)' '
- git clone $HTTPD_URL/smart-redir-temp/repo.git --quiet repo-t
-'
-
-test_expect_success 'redirects re-root further requests' '
- git clone $HTTPD_URL/smart-redir-limited/repo.git repo-redir-limited
-'
-
-test_expect_success 'clone from password-protected repository' '
- echo two >expect &&
- set_askpass user@host pass@host &&
- git clone --bare "$HTTPD_URL/auth/smart/repo.git" smart-auth &&
- expect_askpass both user@host &&
- git --git-dir=smart-auth log -1 --format=%s >actual &&
- test_cmp expect actual
-'
-
-test_expect_success 'clone from auth-only-for-push repository' '
- echo two >expect &&
- set_askpass wrong &&
- git clone --bare "$HTTPD_URL/auth-push/smart/repo.git" smart-noauth &&
- expect_askpass none &&
- git --git-dir=smart-noauth log -1 --format=%s >actual &&
- test_cmp expect actual
-'
-
-test_expect_success 'clone from auth-only-for-objects repository' '
- echo two >expect &&
- set_askpass user@host pass@host &&
- git clone --bare "$HTTPD_URL/auth-fetch/smart/repo.git" half-auth &&
- expect_askpass both user@host &&
- git --git-dir=half-auth log -1 --format=%s >actual &&
- test_cmp expect actual
-'
-
-test_expect_success 'no-op half-auth fetch does not require a password' '
- set_askpass wrong &&
- git --git-dir=half-auth fetch &&
- expect_askpass none
-'
-
-test_expect_success 'redirects send auth to new location' '
- set_askpass user@host pass@host &&
- git -c credential.useHttpPath=true \
- clone $HTTPD_URL/smart-redir-auth/repo.git repo-redir-auth &&
- expect_askpass both user@host auth/smart/repo.git
-'
-
-test_expect_success 'disable dumb http on server' '
- git --git-dir="$HTTPD_DOCUMENT_ROOT_PATH/repo.git" \
- config http.getanyfile false
-'
-
-test_expect_success 'GIT_SMART_HTTP can disable smart http' '
- (GIT_SMART_HTTP=0 &&
- export GIT_SMART_HTTP &&
- cd clone &&
- test_must_fail git fetch)
-'
-
-test_expect_success 'invalid Content-Type rejected' '
- test_must_fail git clone $HTTPD_URL/broken_smart/repo.git 2>actual
- grep "not valid:" actual
-'
-
-test_expect_success 'create namespaced refs' '
- test_commit namespaced &&
- git push public HEAD:refs/namespaces/ns/refs/heads/master &&
- git --git-dir="$HTTPD_DOCUMENT_ROOT_PATH/repo.git" \
- symbolic-ref refs/namespaces/ns/HEAD refs/namespaces/ns/refs/heads/master
-'
-
-test_expect_success 'smart clone respects namespace' '
- git clone "$HTTPD_URL/smart_namespace/repo.git" ns-smart &&
- echo namespaced >expect &&
- git --git-dir=ns-smart/.git log -1 --format=%s >actual &&
- test_cmp expect actual
-'
-
-test_expect_success 'dumb clone via http-backend respects namespace' '
- git --git-dir="$HTTPD_DOCUMENT_ROOT_PATH/repo.git" \
- config http.getanyfile true &&
- GIT_SMART_HTTP=0 git clone \
- "$HTTPD_URL/smart_namespace/repo.git" ns-dumb &&
- echo namespaced >expect &&
- git --git-dir=ns-dumb/.git log -1 --format=%s >actual &&
- test_cmp expect actual
-'
-
-cat >cookies.txt <<EOF
-127.0.0.1 FALSE /smart_cookies/ FALSE 0 othername othervalue
-EOF
-cat >expect_cookies.txt <<EOF
-
-127.0.0.1 FALSE /smart_cookies/ FALSE 0 othername othervalue
-127.0.0.1 FALSE /smart_cookies/repo.git/info/ FALSE 0 name value
-EOF
-test_expect_success 'cookies stored in http.cookiefile when http.savecookies set' '
- git config http.cookiefile cookies.txt &&
- git config http.savecookies true &&
- git ls-remote $HTTPD_URL/smart_cookies/repo.git master &&
- tail -3 cookies.txt > cookies_tail.txt
- test_cmp expect_cookies.txt cookies_tail.txt
-'
-
-test -n "$GIT_TEST_LONG" && test_set_prereq EXPENSIVE
-
-test_expect_success EXPENSIVE 'create 50,000 tags in the repo' '
- (
- cd "$HTTPD_DOCUMENT_ROOT_PATH/repo.git" &&
- for i in `test_seq 50000`
- do
- echo "commit refs/heads/too-many-refs"
- echo "mark :$i"
- echo "committer git <git@example.com> $i +0000"
- echo "data 0"
- echo "M 644 inline bla.txt"
- echo "data 4"
- echo "bla"
- # make every commit dangling by always
- # rewinding the branch after each commit
- echo "reset refs/heads/too-many-refs"
- echo "from :1"
- done | git fast-import --export-marks=marks &&
-
- # now assign tags to all the dangling commits we created above
- tag=$(perl -e "print \"bla\" x 30") &&
- sed -e "s|^:\([^ ]*\) \(.*\)$|\2 refs/tags/$tag-\1|" <marks >>packed-refs
- )
-'
-
-test_expect_success EXPENSIVE 'clone the 50,000 tag repo to check OS command line overflow' '
- git clone $HTTPD_URL/smart/repo.git too-many-refs 2>err &&
- test_line_count = 0 err &&
- (
- cd too-many-refs &&
- test $(git for-each-ref refs/tags | wc -l) = 50000
- )
-'
-
-stop_httpd
-test_done
test_done
fi
-LIB_HTTPD_PORT=${LIB_HTTPD_PORT-'5561'}
. "$TEST_DIRECTORY"/lib-httpd.sh
start_httpd
test_description='test fetching over git protocol'
. ./test-lib.sh
-LIB_GIT_DAEMON_PORT=${LIB_GIT_DAEMON_PORT-5570}
. "$TEST_DIRECTORY"/lib-git-daemon.sh
start_git_daemon
'
test_expect_success 'clone calls git upload-pack unqualified with no -u option' '
- (
- GIT_SSH=./not_ssh &&
- export GIT_SSH &&
- test_must_fail git clone localhost:/path/to/repo junk
- ) &&
+ test_must_fail env GIT_SSH=./not_ssh git clone localhost:/path/to/repo junk &&
echo "localhost git-upload-pack '\''/path/to/repo'\''" >expected &&
test_cmp expected not_ssh_output
'
test_expect_success 'clone calls specified git upload-pack with -u option' '
- (
- GIT_SSH=./not_ssh &&
- export GIT_SSH &&
- test_must_fail git clone -u ./something/bin/git-upload-pack localhost:/path/to/repo junk
- ) &&
+ test_must_fail env GIT_SSH=./not_ssh \
+ git clone -u ./something/bin/git-upload-pack localhost:/path/to/repo junk &&
echo "localhost ./something/bin/git-upload-pack '\''/path/to/repo'\''" >expected &&
test_cmp expected not_ssh_output
'
: >file && git add . && git commit -m1 &&
git clone --bare . a.git &&
git clone --bare . x &&
- test "$(GIT_CONFIG=a.git/config git config --bool core.bare)" = true &&
- test "$(GIT_CONFIG=x/config git config --bool core.bare)" = true &&
+ test "$(cd a.git && git config --bool core.bare)" = true &&
+ test "$(cd x && git config --bool core.bare)" = true &&
git bundle create b1.bundle --all &&
git bundle create b2.bundle master &&
mkdir dir &&
test_expect_success 'local clone without .git suffix' '
git clone -l -s a b &&
(cd b &&
- test "$(GIT_CONFIG=.git/config git config --bool core.bare)" = false &&
+ test "$(git config --bool core.bare)" = false &&
git fetch)
'
compare_refs local HEAD server refs/heads/new-refspec
'
+test_expect_success 'forced push' '
+ (cd local &&
+ git checkout -b force-test &&
+ echo content >> file &&
+ git commit -a -m eight &&
+ git push origin force-test &&
+ echo content >> file &&
+ git commit -a --amend -m eight-modified &&
+ git push --force origin force-test
+ ) &&
+ compare_refs local refs/heads/force-test server refs/heads/force-test
+'
+
test_expect_success 'cloning without refspec' '
GIT_REMOTE_TESTGIT_REFSPEC="" \
git clone "testgit::${PWD}/server" local2 2>error &&
'
test_expect_success 'proper failure checks for pushing' '
- (GIT_REMOTE_TESTGIT_FAILURE=1 &&
- export GIT_REMOTE_TESTGIT_FAILURE &&
- cd local &&
- test_must_fail git push --all
+ (cd local &&
+ test_must_fail env GIT_REMOTE_TESTGIT_FAILURE=1 git push --all
)
'
'
test_expect_success TTY '%C(auto) respects --color=auto (stdout is tty)' '
- (
- TERM=vt100 && export TERM &&
- test_terminal \
- git log --format=$AUTO_COLOR -1 --color=auto >actual &&
- has_color actual
- )
+ test_terminal env TERM=vt100 \
+ git log --format=$AUTO_COLOR -1 --color=auto >actual &&
+ has_color actual
'
test_expect_success '%C(auto) respects --color=auto (stdout not tty)' '
test_cmp expect actual
'
-test_expect_success 'match_pathspec_depth matches :(icase)bar' '
+test_expect_success 'match_pathspec matches :(icase)bar' '
cat <<-EOF >expect &&
BAR
bAr
test_cmp expect actual
'
-test_expect_success 'match_pathspec_depth matches :(icase)bar with prefix' '
+test_expect_success 'match_pathspec matches :(icase)bar with prefix' '
cat <<-EOF >expect &&
fOo/BAR
fOo/bAr
test_cmp expect actual
'
-test_expect_success 'match_pathspec_depth matches :(icase)bar with empty prefix' '
+test_expect_success 'match_pathspec matches :(icase)bar with empty prefix' '
cat <<-EOF >expect &&
bar
fOo/BAR
git submodule add ./. sub &&
echo content >file &&
git add file &&
- git commit -m "added sub and file"
+ git commit -m "added sub and file" &&
+ git branch submodule
'
test_expect_success 'git mv cannot move a submodule in a file' '
git mv sub sub2 &&
git commit -m "moved sub to sub2" &&
git checkout -q HEAD^ 2>actual &&
- echo "warning: unable to rmdir sub2: Directory not empty" >expected &&
- test_i18ncmp expected actual &&
+ test_i18ngrep "^warning: unable to rmdir sub2:" actual &&
git status -s sub2 >actual &&
echo "?? sub2/" >expected &&
test_cmp expected actual &&
! test -s actual
'
+test_expect_success 'mv -k does not accidentally destroy submodules' '
+ git checkout submodule &&
+ mkdir dummy dest &&
+ git mv -k dummy sub dest &&
+ git status --porcelain >actual &&
+ grep "^R sub -> dest/sub" actual &&
+ git reset --hard &&
+ git checkout .
+'
+
test_done
test_cmp expect actual
'
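+# "git tag -l --sort=<key>" sorts lexically by refname by default (so
+# foo1.10 comes before foo1.3); the version:refname key compares tag names
+# as version numbers instead, and a leading "-" reverses either order.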
+test_expect_success 'lexical sort' '
+ git tag foo1.3 &&
+ git tag foo1.6 &&
+ git tag foo1.10 &&
+ git tag -l --sort=refname "foo*" >actual &&
+ cat >expect <<EOF &&
+foo1.10
+foo1.3
+foo1.6
+EOF
+ test_cmp expect actual
+'
+
+test_expect_success 'version sort' '
+ git tag -l --sort=version:refname "foo*" >actual &&
+ cat >expect <<EOF &&
+foo1.3
+foo1.6
+foo1.10
+EOF
+ test_cmp expect actual
+'
+
+test_expect_success 'reverse version sort' '
+ git tag -l --sort=-version:refname "foo*" >actual &&
+ cat >expect <<EOF &&
+foo1.10
+foo1.6
+foo1.3
+EOF
+ test_cmp expect actual
+'
+
+test_expect_success 'reverse lexical sort' '
+ git tag -l --sort=-refname "foo*" >actual &&
+ cat >expect <<EOF &&
+foo1.6
+foo1.3
+foo1.10
+EOF
+ test_cmp expect actual
+'
+
test_done
test_expect_success TTY 'color when writing to a pager' '
rm -f paginated.out &&
test_config color.ui auto &&
- (
- TERM=vt100 &&
- export TERM &&
- test_terminal git log
- ) &&
+ test_terminal env TERM=vt100 git log &&
colorful paginated.out
'
rm -f paginated.out &&
test_config color.ui auto &&
test_config color.pager false &&
- (
- TERM=vt100 &&
- export TERM &&
- test_terminal git log
- ) &&
+ test_terminal env TERM=vt100 git log &&
! colorful paginated.out
'
test_expect_success TTY 'colors are sent to pager for external commands' '
test_config alias.externallog "!git log" &&
test_config color.ui auto &&
- (
- TERM=vt100 &&
- export TERM &&
- test_terminal git -p externallog
- ) &&
+ test_terminal env TERM=vt100 git -p externallog &&
colorful paginated.out
'
Unmerged paths:
(use "git add/rm <file>..." as appropriate to mark resolution)
- deleted by us: foo
+ deleted by us: foo
no changes added to commit (use "git add" and/or "git commit -a")
EOF
Unmerged paths:
(use "git add/rm <file>..." as appropriate to mark resolution)
- both added: conflict.txt
- deleted by them: main.txt
+ both added: conflict.txt
+ deleted by them: main.txt
no changes added to commit (use "git add" and/or "git commit -a")
EOF
Unmerged paths:
(use "git add/rm <file>..." as appropriate to mark resolution)
- both deleted: main.txt
- added by them: sub_master.txt
- added by us: sub_second.txt
+ both deleted: main.txt
+ added by them: sub_master.txt
+ added by us: sub_second.txt
no changes added to commit (use "git add" and/or "git commit -a")
EOF
Unmerged paths:
(use "git rm <file>..." to mark resolution)
- both deleted: main.txt
+ both deleted: main.txt
Untracked files not listed (use -u option to show untracked files)
EOF
--- /dev/null
+#!/bin/sh
+#
+# Copyright (c) 2006 Shawn Pearce
+#
+
+test_description='git reset should cull empty subdirs'
+. ./test-lib.sh
+
+test_expect_success \
+ 'creating initial files' \
+ 'mkdir path0 &&
+ cp "$TEST_DIRECTORY"/../COPYING path0/COPYING &&
+ git add path0/COPYING &&
+ git commit -m add -a'
+
+test_expect_success \
+ 'creating second files' \
+ 'mkdir path1 &&
+ mkdir path1/path2 &&
+ cp "$TEST_DIRECTORY"/../COPYING path1/path2/COPYING &&
+ cp "$TEST_DIRECTORY"/../COPYING path1/COPYING &&
+ cp "$TEST_DIRECTORY"/../COPYING COPYING &&
+ cp "$TEST_DIRECTORY"/../COPYING path0/COPYING-TOO &&
+ git add path1/path2/COPYING &&
+ git add path1/COPYING &&
+ git add COPYING &&
+ git add path0/COPYING-TOO &&
+ git commit -m change -a'
+
+test_expect_success \
+ 'resetting tree HEAD^' \
+ 'git reset --hard HEAD^'
+
+test_expect_success \
+ 'checking initial files exist after rewind' \
+ 'test -d path0 &&
+ test -f path0/COPYING'
+
+test_expect_success \
+ 'checking lack of path1/path2/COPYING' \
+ '! test -f path1/path2/COPYING'
+
+test_expect_success \
+ 'checking lack of path1/COPYING' \
+ '! test -f path1/COPYING'
+
+test_expect_success \
+ 'checking lack of COPYING' \
+ '! test -f COPYING'
+
+test_expect_success \
+	'checking lack of path0/COPYING-TOO' \
+ '! test -f path0/COPYING-TOO'
+
+test_expect_success \
+ 'checking lack of path1/path2' \
+ '! test -d path1/path2'
+
+test_expect_success \
+ 'checking lack of path1' \
+ '! test -d path1'
+
+test_done
+++ /dev/null
-#!/bin/sh
-#
-# Copyright (c) 2006 Shawn Pearce
-#
-
-test_description='git reset should cull empty subdirs'
-. ./test-lib.sh
-
-test_expect_success \
- 'creating initial files' \
- 'mkdir path0 &&
- cp "$TEST_DIRECTORY"/../COPYING path0/COPYING &&
- git add path0/COPYING &&
- git commit -m add -a'
-
-test_expect_success \
- 'creating second files' \
- 'mkdir path1 &&
- mkdir path1/path2 &&
- cp "$TEST_DIRECTORY"/../COPYING path1/path2/COPYING &&
- cp "$TEST_DIRECTORY"/../COPYING path1/COPYING &&
- cp "$TEST_DIRECTORY"/../COPYING COPYING &&
- cp "$TEST_DIRECTORY"/../COPYING path0/COPYING-TOO &&
- git add path1/path2/COPYING &&
- git add path1/COPYING &&
- git add COPYING &&
- git add path0/COPYING-TOO &&
- git commit -m change -a'
-
-test_expect_success \
- 'resetting tree HEAD^' \
- 'git reset --hard HEAD^'
-
-test_expect_success \
- 'checking initial files exist after rewind' \
- 'test -d path0 &&
- test -f path0/COPYING'
-
-test_expect_success \
- 'checking lack of path1/path2/COPYING' \
- '! test -f path1/path2/COPYING'
-
-test_expect_success \
- 'checking lack of path1/COPYING' \
- '! test -f path1/COPYING'
-
-test_expect_success \
- 'checking lack of COPYING' \
- '! test -f COPYING'
-
-test_expect_success \
- 'checking checking lack of path1/COPYING-TOO' \
- '! test -f path0/COPYING-TOO'
-
-test_expect_success \
- 'checking lack of path1/path2' \
- '! test -d path1/path2'
-
-test_expect_success \
- 'checking lack of path1' \
- '! test -d path1'
-
-test_done
git diff HEAD --exit-code
'
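+# "git reset -N" records the paths it removes from the index as
+# intent-to-add: they stay out of the next write-tree but still show up
+# in "git diff".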
+test_expect_success 'reset -N keeps removed files as intent-to-add' '
+ echo new-file >new-file &&
+ git add new-file &&
+ git reset -N HEAD &&
+
+ tree=$(git write-tree) &&
+ git ls-tree $tree new-file >actual &&
+ >expect &&
+ test_cmp expect actual &&
+
+ git diff --name-only >actual &&
+ echo new-file >expect &&
+ test_cmp expect actual
+'
+
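+# "reset --mixed" refreshes the index and therefore needs a work tree; when
+# --git-dir and --work-tree are given from outside the repository it must
+# still set one up, and the refresh should report nothing to update.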
+test_expect_success 'reset --mixed sets up work tree' '
+ git init mixed_worktree &&
+ (
+ cd mixed_worktree &&
+ test_commit dummy
+ ) &&
+ : >expect &&
+ git --git-dir=mixed_worktree/.git --work-tree=mixed_worktree reset >actual &&
+ test_cmp expect actual
+'
+
test_done
--- /dev/null
+#!/bin/sh
+
+test_description='reset --hard unmerged'
+
+. ./test-lib.sh
+
+test_expect_success setup '
+
+ mkdir before later &&
+ >before/1 &&
+ >before/2 &&
+ >hello &&
+ >later/3 &&
+ git add before hello later &&
+ git commit -m world &&
+
+ H=$(git rev-parse :hello) &&
+ git rm --cached hello &&
+ echo "100644 $H 2 hello" | git update-index --index-info &&
+
+ rm -f hello &&
+ mkdir -p hello &&
+ >hello/world &&
+ test "$(git ls-files -o)" = hello/world
+
+'
+
+test_expect_success 'reset --hard should restore unmerged ones' '
+
+ git reset --hard &&
+ git ls-files --error-unmatch before/1 before/2 hello later/3 &&
+ test -f hello
+
+'
+
+test_expect_success 'reset --hard did not corrupt index nor cached-tree' '
+
+ T=$(git write-tree) &&
+ rm -f .git/index &&
+ git add before hello later &&
+ U=$(git write-tree) &&
+ test "$T" = "$U"
+
+'
+
+test_done
+++ /dev/null
-#!/bin/sh
-
-test_description='reset --hard unmerged'
-
-. ./test-lib.sh
-
-test_expect_success setup '
-
- mkdir before later &&
- >before/1 &&
- >before/2 &&
- >hello &&
- >later/3 &&
- git add before hello later &&
- git commit -m world &&
-
- H=$(git rev-parse :hello) &&
- git rm --cached hello &&
- echo "100644 $H 2 hello" | git update-index --index-info &&
-
- rm -f hello &&
- mkdir -p hello &&
- >hello/world &&
- test "$(git ls-files -o)" = hello/world
-
-'
-
-test_expect_success 'reset --hard should restore unmerged ones' '
-
- git reset --hard &&
- git ls-files --error-unmatch before/1 before/2 hello later/3 &&
- test -f hello
-
-'
-
-test_expect_success 'reset --hard did not corrupt index nor cached-tree' '
-
- T=$(git write-tree) &&
- rm -f .git/index &&
- git add before hello later &&
- U=$(git write-tree) &&
- test "$T" = "$U"
-
-'
-
-test_done
! test -d foo
'
+test_expect_success 'git clean -d respects pathspecs (dir is prefix of pathspec)' '
+ mkdir -p foo &&
+ mkdir -p foobar &&
+ git clean -df foobar &&
+ test_path_is_dir foo &&
+ test_path_is_missing foobar
+'
+
+test_expect_success 'git clean -d respects pathspecs (pathspec is prefix of dir)' '
+ mkdir -p foo &&
+ mkdir -p foobar &&
+ git clean -df foo &&
+ test_path_is_missing foo &&
+ test_path_is_dir foobar
+'
+
test_done
'
test_expect_success 'setup - add an example entry to .gitmodules' '
- GIT_CONFIG=.gitmodules \
- git config submodule.example.url git://example.com/init.git
+ git config --file=.gitmodules submodule.example.url git://example.com/init.git
'
test_expect_success 'status should fail for unmapped paths' '
path = init
EOF
- GIT_CONFIG=.gitmodules git config submodule.example.path init &&
+ git config --file=.gitmodules submodule.example.path init &&
test_cmp expect .gitmodules
'
test_i18ngrep "Submodule path .deeper/submodule/subsubmodule.: checked out" actual
)
'
-
test_done
test_must_fail git commit -m initial
'
+test_expect_success '--dry-run fails with nothing to commit' '
+ test_must_fail git commit -m initial --dry-run
+'
+
+test_expect_success '--short fails with nothing to commit' '
+ test_must_fail git commit -m initial --short
+'
+
+test_expect_success '--porcelain fails with nothing to commit' '
+ test_must_fail git commit -m initial --porcelain
+'
+
+test_expect_success '--long fails with nothing to commit' '
+ test_must_fail git commit -m initial --long
+'
+
test_expect_success 'setup: non-initial commit' '
echo bongo bongo bongo >file &&
git commit -m next -a
'
+test_expect_success '--dry-run with stuff to commit returns ok' '
+ echo bongo bongo bongo >>file &&
+ git commit -m next -a --dry-run
+'
+
+test_expect_failure '--short with stuff to commit returns ok' '
+ echo bongo bongo bongo >>file &&
+ git commit -m next -a --short
+'
+
+test_expect_failure '--porcelain with stuff to commit returns ok' '
+ echo bongo bongo bongo >>file &&
+ git commit -m next -a --porcelain
+'
+
+test_expect_success '--long with stuff to commit returns ok' '
+ echo bongo bongo bongo >>file &&
+ git commit -m next -a --long
+'
+
test_expect_success 'commit message from non-existing file' '
echo more bongo: bongo bongo bongo bongo >file &&
test_must_fail git commit -F gah -a
'
+test_expect_success 'cleanup commit messages (scissors option,-F,-e)' '
+
+ echo >>negative &&
+ cat >text <<EOF &&
+
+# to be kept
+# ------------------------ >8 ------------------------
+to be removed
+EOF
+ echo "# to be kept" >expect &&
+ git commit --cleanup=scissors -e -F text -a &&
+	git cat-file -p HEAD | sed -e "1,/^\$/d" >actual &&
+ test_cmp expect actual
+
+'
+
test_expect_success 'cleanup commit messages (strip option,-F)' '
echo >>negative &&
test_expect_success 'with hook (merge)' '
- head=`git rev-parse HEAD` &&
- git checkout -b other HEAD@{1} &&
- echo "more" >> file &&
+ test_when_finished "git checkout -f master" &&
+ git checkout -B other HEAD@{1} &&
+ echo "more" >>file &&
+ git add file &&
+ git commit -m other &&
+ git checkout - &&
+ git merge --no-ff other &&
+ test "`git log -1 --pretty=format:%s`" = "merge (no editor)"
+'
+
+test_expect_success 'with hook and editor (merge)' '
+
+ test_when_finished "git checkout -f master" &&
+ git checkout -B other HEAD@{1} &&
+ echo "more" >>file &&
git add file &&
git commit -m other &&
git checkout - &&
- git merge other &&
- test "`git log -1 --pretty=format:%s`" = merge
+ env GIT_EDITOR="\"\$FAKE_EDITOR\"" git merge --no-ff -e other &&
+ test "`git log -1 --pretty=format:%s`" = "merge"
'
cat > "$HOOK" <<'EOF'
test_expect_success 'with failing hook' '
+ test_when_finished "git checkout -f master" &&
head=`git rev-parse HEAD` &&
echo "more" >> file &&
git add file &&
- ! GIT_EDITOR="\"\$FAKE_EDITOR\"" git commit -c $head
+ test_must_fail env GIT_EDITOR="\"\$FAKE_EDITOR\"" git commit -c $head
'
test_expect_success 'with failing hook (--no-verify)' '
+ test_when_finished "git checkout -f master" &&
head=`git rev-parse HEAD` &&
echo "more" >> file &&
git add file &&
- ! GIT_EDITOR="\"\$FAKE_EDITOR\"" git commit --no-verify -c $head
+ test_must_fail env GIT_EDITOR="\"\$FAKE_EDITOR\"" git commit --no-verify -c $head
'
test_expect_success 'with failing hook (merge)' '
+ test_when_finished "git checkout -f master" &&
git checkout -B other HEAD@{1} &&
echo "more" >> file &&
git add file &&
rm -f "$HOOK" &&
git commit -m other &&
- write_script "$HOOK" <<-EOF
+ write_script "$HOOK" <<-EOF &&
exit 1
EOF
git checkout - &&
- test_must_fail git merge other
+ test_must_fail git merge --no-ff other
'
. "$TEST_DIRECTORY/lib-gpg.sh"
test_expect_success GPG 'create signed commits' '
+ test_when_finished "test_unconfig commit.gpgsign" &&
+
echo 1 >file && git add file &&
test_tick && git commit -S -m initial &&
git tag initial &&
git tag fourth-unsigned &&
test_tick && git commit --amend -S -m "fourth signed" &&
- git tag fourth-signed
+ git tag fourth-signed &&
+
+ git config commit.gpgsign true &&
+ echo 5 >file && test_tick && git commit -a -m "fifth signed" &&
+ git tag fifth-signed &&
+
+ git config commit.gpgsign false &&
+ echo 6 >file && test_tick && git commit -a -m "sixth" &&
+ git tag sixth-unsigned &&
+
+ git config commit.gpgsign true &&
+ echo 7 >file && test_tick && git commit -a -m "seventh" --no-gpg-sign &&
+ git tag seventh-unsigned &&
+
+ test_tick && git rebase -f HEAD^^ && git tag sixth-signed HEAD^ &&
+ git tag seventh-signed
'
test_expect_success GPG 'show signatures' '
(
- for commit in initial second merge master
+ for commit in initial second merge fourth-signed fifth-signed sixth-signed master
do
git show --pretty=short --show-signature $commit >actual &&
grep "Good signature from" actual || exit 1
done
) &&
(
- for commit in merge^2 fourth-unsigned
+ for commit in merge^2 fourth-unsigned sixth-unsigned seventh-unsigned
do
git show --pretty=short --show-signature $commit >actual &&
grep "Good signature from" actual && exit 1
test_expect_success GPG 'detect fudged signature' '
git cat-file commit master >raw &&
- sed -e "s/fourth signed/4th forged/" raw >forged1 &&
+ sed -e "s/seventh/7th forged/" raw >forged1 &&
git hash-object -w -t commit forged1 >forged1.commit &&
git show --pretty=short --show-signature $(cat forged1.commit) >actual1 &&
grep "BAD signature from" actual1 &&
Unmerged paths:
(use "git add <file>..." to mark resolution)
- both modified: main.txt
+ both modified: main.txt
no changes added to commit (use "git add" and/or "git commit -a")
EOF
(use "git reset HEAD <file>..." to unstage)
(use "git add <file>..." to mark resolution)
- both modified: main.txt
+ both modified: main.txt
no changes added to commit (use "git add" and/or "git commit -a")
EOF
(use "git reset HEAD <file>..." to unstage)
(use "git add <file>..." to mark resolution)
- both modified: main.txt
+ both modified: main.txt
no changes added to commit (use "git add" and/or "git commit -a")
EOF
You are currently rebasing branch '\''statushints_disabled'\'' on '\''$ONTO'\''.
Unmerged paths:
- both modified: main.txt
+ both modified: main.txt
no changes added to commit
EOF
Unmerged paths:
(use "git add <file>..." to mark resolution)
- both modified: main.txt
+ both modified: main.txt
no changes added to commit (use "git add" and/or "git commit -a")
EOF
(use "git reset HEAD <file>..." to unstage)
(use "git add <file>..." to mark resolution)
- both modified: to-revert.txt
+ both modified: to-revert.txt
no changes added to commit (use "git add" and/or "git commit -a")
EOF
--- /dev/null
+#!/bin/sh
+
+test_description='hunk edit with "commit -p -m"'
+. ./test-lib.sh
+
+if ! test_have_prereq PERL
+then
+ skip_all="skipping '$test_description' tests, perl not available"
+ test_done
+fi
+
+test_expect_success 'setup (initial)' '
+ echo line1 >file &&
+ git add file &&
+ git commit -m commit1
+'
+
+test_expect_success 'edit hunk "commit -p -m message"' '
+ test_when_finished "rm -f editor_was_started" &&
+ rm -f editor_was_started &&
+ echo more >>file &&
+ echo e | env GIT_EDITOR=": >editor_was_started" git commit -p -m commit2 file &&
+ test -r editor_was_started
+'
+
+test_expect_success 'edit hunk "commit --dry-run -p -m message"' '
+ test_when_finished "rm -f editor_was_started" &&
+ rm -f editor_was_started &&
+ echo more >>file &&
+	echo e | env GIT_EDITOR=": >editor_was_started" git commit --dry-run -p -m commit3 file &&
+ test -r editor_was_started
+'
+
+test_done
test -f c2.c
'
+test_expect_success 'fast-forward pull succeeds with "true" in pull.ff' '
+ git reset --hard c0 &&
+ test_config pull.ff true &&
+ git pull . c1 &&
+ test "$(git rev-parse HEAD)" = "$(git rev-parse c1)"
+'
+
+test_expect_success 'fast-forward pull creates merge with "false" in pull.ff' '
+ git reset --hard c0 &&
+ test_config pull.ff false &&
+ git pull . c1 &&
+ test "$(git rev-parse HEAD^1)" = "$(git rev-parse c0)" &&
+ test "$(git rev-parse HEAD^2)" = "$(git rev-parse c1)"
+'
+
+test_expect_success 'pull prevents non-fast-forward with "only" in pull.ff' '
+ git reset --hard c1 &&
+ test_config pull.ff only &&
+ test_must_fail git pull . c3
+'
+
test_expect_success 'merge c1 with c2 (ours in pull.twohead)' '
git reset --hard c1 &&
git config pull.twohead ours &&
objsha1=$(git verify-pack -v pack-$packsha1.idx | head -n 1 |
sed -e "s/^\([0-9a-f]\{40\}\).*/\1/") &&
mv pack-* .git/objects/pack/ &&
- git repack -A -d -l &&
+ git repack --no-pack-kept-objects -A -d -l &&
git prune-packed &&
for p in .git/objects/pack/*.idx; do
idx=$(basename $p)
test -z "$found_duplicate_object"
'
+test_expect_success 'writing bitmaps can duplicate .keep objects' '
+ # build on $objsha1, $packsha1, and .keep state from previous
+ git repack -Adl &&
+ test_when_finished "found_duplicate_object=" &&
+ for p in .git/objects/pack/*.idx; do
+ idx=$(basename $p)
+ test "pack-$packsha1.idx" = "$idx" && continue
+ if git verify-pack -v $p | egrep "^$objsha1"; then
+ found_duplicate_object=1
+ echo "DUPLICATE OBJECT FOUND"
+ break
+ fi
+ done &&
+ test "$found_duplicate_object" = 1
+'
+
test_expect_success 'loose objects in alternate ODB are not repacked' '
mkdir alt_objects &&
echo `pwd`/alt_objects > .git/objects/info/alternates &&
)
'
+test_expect_success PERL 'difftool properly honors gitlink and core.worktree' '
+ git submodule add ./. submod/ule &&
+ (
+ cd submod/ule &&
+ test_config diff.tool checktrees &&
+ test_config difftool.checktrees.cmd '\''
+ test -d "$LOCAL" && test -d "$REMOTE" && echo good
+ '\'' &&
+ echo good >expect &&
+ git difftool --tool=checktrees --dir-diff HEAD~ >actual &&
+ test_cmp expect actual
+ )
+'
+
test_done
test_expect_success "grep -w $L (w)" '
: >expected &&
- test_must_fail git grep -n -w -e "^w" >actual &&
+ test_must_fail git grep -n -w -e "^w" $H >actual &&
test_cmp expected actual
'
test_cmp expected actual
'
test_expect_success "grep $L with grep.extendedRegexp=false" '
- echo "ab:a+bc" >expected &&
- git -c grep.extendedRegexp=false grep "a+b*c" ab >actual &&
+ echo "${HC}ab:a+bc" >expected &&
+ git -c grep.extendedRegexp=false grep "a+b*c" $H ab >actual &&
test_cmp expected actual
'
test_expect_success "grep $L with grep.extendedRegexp=true" '
- echo "ab:abc" >expected &&
- git -c grep.extendedRegexp=true grep "a+b*c" ab >actual &&
+ echo "${HC}ab:abc" >expected &&
+ git -c grep.extendedRegexp=true grep "a+b*c" $H ab >actual &&
test_cmp expected actual
'
test_expect_success "grep $L with grep.patterntype=basic" '
- echo "ab:a+bc" >expected &&
- git -c grep.patterntype=basic grep "a+b*c" ab >actual &&
+ echo "${HC}ab:a+bc" >expected &&
+ git -c grep.patterntype=basic grep "a+b*c" $H ab >actual &&
test_cmp expected actual
'
test_expect_success "grep $L with grep.patterntype=extended" '
- echo "ab:abc" >expected &&
- git -c grep.patterntype=extended grep "a+b*c" ab >actual &&
+ echo "${HC}ab:abc" >expected &&
+ git -c grep.patterntype=extended grep "a+b*c" $H ab >actual &&
test_cmp expected actual
'
test_expect_success "grep $L with grep.patterntype=fixed" '
- echo "ab:a+b*c" >expected &&
- git -c grep.patterntype=fixed grep "a+b*c" ab >actual &&
+ echo "${HC}ab:a+b*c" >expected &&
+ git -c grep.patterntype=fixed grep "a+b*c" $H ab >actual &&
test_cmp expected actual
'
test_expect_success LIBPCRE "grep $L with grep.patterntype=perl" '
- echo "ab:a+b*c" >expected &&
- git -c grep.patterntype=perl grep "a\x{2b}b\x{2a}c" ab >actual &&
+ echo "${HC}ab:a+b*c" >expected &&
+ git -c grep.patterntype=perl grep "a\x{2b}b\x{2a}c" $H ab >actual &&
test_cmp expected actual
'
test_expect_success "grep $L with grep.patternType=default and grep.extendedRegexp=true" '
- echo "ab:abc" >expected &&
+ echo "${HC}ab:abc" >expected &&
git \
-c grep.patternType=default \
-c grep.extendedRegexp=true \
- grep "a+b*c" ab >actual &&
+ grep "a+b*c" $H ab >actual &&
test_cmp expected actual
'
test_expect_success "grep $L with grep.extendedRegexp=true and grep.patternType=default" '
- echo "ab:abc" >expected &&
+ echo "${HC}ab:abc" >expected &&
git \
-c grep.extendedRegexp=true \
-c grep.patternType=default \
- grep "a+b*c" ab >actual &&
+ grep "a+b*c" $H ab >actual &&
test_cmp expected actual
'
- test_expect_success 'grep $L with grep.patternType=extended and grep.extendedRegexp=false' '
- echo "ab:abc" >expected &&
+ test_expect_success "grep $L with grep.patternType=extended and grep.extendedRegexp=false" '
+ echo "${HC}ab:abc" >expected &&
git \
-c grep.patternType=extended \
-c grep.extendedRegexp=false \
- grep "a+b*c" ab >actual &&
+ grep "a+b*c" $H ab >actual &&
test_cmp expected actual
'
- test_expect_success 'grep $L with grep.patternType=basic and grep.extendedRegexp=true' '
- echo "ab:a+bc" >expected &&
+ test_expect_success "grep $L with grep.patternType=basic and grep.extendedRegexp=true" '
+ echo "${HC}ab:a+bc" >expected &&
git \
-c grep.patternType=basic \
-c grep.extendedRegexp=true \
- grep "a+b*c" ab >actual &&
+ grep "a+b*c" $H ab >actual &&
test_cmp expected actual
'
- test_expect_success 'grep $L with grep.extendedRegexp=false and grep.patternType=extended' '
- echo "ab:abc" >expected &&
+ test_expect_success "grep $L with grep.extendedRegexp=false and grep.patternType=extended" '
+ echo "${HC}ab:abc" >expected &&
git \
-c grep.extendedRegexp=false \
-c grep.patternType=extended \
- grep "a+b*c" ab >actual &&
+ grep "a+b*c" $H ab >actual &&
test_cmp expected actual
'
- test_expect_success 'grep $L with grep.extendedRegexp=true and grep.patternType=basic' '
- echo "ab:a+bc" >expected &&
+ test_expect_success "grep $L with grep.extendedRegexp=true and grep.patternType=basic" '
+ echo "${HC}ab:a+bc" >expected &&
git \
-c grep.extendedRegexp=true \
-c grep.patternType=basic \
- grep "a+b*c" ab >actual &&
+ grep "a+b*c" $H ab >actual &&
+ test_cmp expected actual
+ '
+
+ test_expect_success "grep --count $L" '
+ echo ${HC}ab:3 >expected &&
+ git grep --count -e b $H -- ab >actual &&
+ test_cmp expected actual
+ '
+
+ test_expect_success "grep --count -h $L" '
+ echo 3 >expected &&
+ git grep --count -h -e b $H -- ab >actual &&
test_cmp expected actual
'
done
(echo "From Example <from@example.com>"
echo "To Example <to@example.com>"
echo ""
- ) | env GIT_SEND_EMAIL_NOTTY=1 git send-email \
+ ) | GIT_SEND_EMAIL_NOTTY=1 git send-email \
--smtp-server="$(pwd)/fake.sendmail" \
$patches 2>errors &&
! grep "^In-Reply-To: < *>" msgtxt1
'
tmp_config_get () {
- GIT_CONFIG=.git/svn/.metadata git config --get "$1"
+ git config --file=.git/svn/.metadata --get "$1"
}
test_expect_success 'failure happened without negative side effects' '
test x = x"$(git config svn.authorsfile)" &&
test_config="$HOME"/.gitconfig &&
sane_unset GIT_DIR &&
- sane_unset GIT_CONFIG &&
git config --global \
svn.authorsfile "$HOME"/svn-authors &&
test x"$HOME"/svn-authors = x"$(git config svn.authorsfile)" &&
"
test_expect_success 'add gre branch' "
- GIT_CONFIG=.git/svn/.metadata git config --unset svn-remote.svn.branches-maxRev &&
+ git config --file=.git/svn/.metadata --unset svn-remote.svn.branches-maxRev &&
git config svn-remote.svn.branches 'branches/{red,gre}:refs/remotes/*' &&
git svn fetch &&
git rev-parse refs/remotes/red &&
"
test_expect_success 'add green branch' "
- GIT_CONFIG=.git/svn/.metadata git config --unset svn-remote.svn.branches-maxRev &&
+ git config --file=.git/svn/.metadata --unset svn-remote.svn.branches-maxRev &&
git config svn-remote.svn.branches 'branches/{red,green}:refs/remotes/*' &&
git svn fetch &&
git rev-parse refs/remotes/red &&
"
test_expect_success 'add all branches' "
- GIT_CONFIG=.git/svn/.metadata git config --unset svn-remote.svn.branches-maxRev &&
+ git config --file=.git/svn/.metadata --unset svn-remote.svn.branches-maxRev &&
git config svn-remote.svn.branches 'branches/*:refs/remotes/*' &&
git svn fetch &&
git rev-parse refs/remotes/red &&
test_done
}
-unset GIT_DIR GIT_CONFIG
WORKDIR=$(pwd)
SERVERDIR=$(pwd)/gitcvs.git
git_config="$SERVERDIR/config"
export EDITOR
}
+test_set_index_version () {
+ GIT_INDEX_VERSION="$1"
+ export GIT_INDEX_VERSION
+}
+
test_decode_color () {
awk '
function name(n) {
command "$PERL_PATH" "$@"
}
+# Normalize the given value to "true" or "false" if it is one of the
+# various ways to spell a boolean; fail otherwise.
+test_normalize_bool () {
+ git -c magic.variable="$1" config --bool magic.variable 2>/dev/null
+}
+
+# Given a variable $1, normalize the value of it to one of "true",
+# "false", or "auto" and store the result to it.
+#
+# test_tristate GIT_TEST_HTTPD
+#
+# A variable set to an empty string is set to 'false'.
+# A variable set to 'false' or 'auto' keeps its value.
+# Anything else is set to 'true'.
+# An unset variable defaults to 'auto'.
+#
+# The empty-string rule is to allow people to set the variable to an
+# empty string and export it to decline testing the particular feature
+# for versions both before and after this change.  We used to treat
+# both an unset and an empty variable as a signal for "do not test" and
+# took any non-empty string as "please test".
+
+test_tristate () {
+ if eval "test x\"\${$1+isset}\" = xisset"
+ then
+ # explicitly set
+ eval "
+ case \"\$$1\" in
+ '') $1=false ;;
+ auto) ;;
+ *) $1=\$(test_normalize_bool \$$1 || echo true) ;;
+ esac
+ "
+ else
+ eval "$1=auto"
+ fi
+}
+
+# Exit the test suite, either by skipping all remaining tests or by
+# exiting with an error. If "$1" is "auto", then we assume we were
+# opportunistically trying to set up some tests and we skip. If it is
+# "true", then we report a failure.
+#
+# The error/skip message should be given by $2.
+#
+test_skip_or_die () {
+ case "$1" in
+ auto)
+ skip_all=$2
+ test_done
+ ;;
+ true)
+ error "$2"
+ ;;
+ *)
+ error "BUG: test tristate is '$1' (real error: $2)"
+ esac
+}
+
# The following mingw_* functions obey POSIX shell syntax, but are actually
# bash scripts, and are meant to be used only with bash on Windows.
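
The test_tristate and test_skip_or_die helpers above are meant to be used together from a feature library that may or may not be able to provide an optional facility. A minimal usage sketch, assuming a hypothetical lib-foo.sh with a GIT_TEST_FOO knob and a start_foo_daemon command (both names are illustrative, not part of this patch):

    # hypothetical lib-foo.sh
    test_tristate GIT_TEST_FOO
    if test "$GIT_TEST_FOO" = false
    then
        skip_all="foo testing disabled by GIT_TEST_FOO"
        test_done
    fi

    if ! start_foo_daemon
    then
        # "auto" skips quietly; an explicit "true" turns this into an error
        test_skip_or_die $GIT_TEST_FOO "could not start foo daemon"
    fi

test_normalize_bool is only an internal helper here; callers are expected to go through test_tristate.
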
export GIT_COMMITTER_EMAIL GIT_COMMITTER_NAME
export EDITOR
+if test -n "${TEST_GIT_INDEX_VERSION:+isset}"
+then
+ GIT_INDEX_VERSION="$TEST_GIT_INDEX_VERSION"
+ export GIT_INDEX_VERSION
+fi
+
# Add libc MALLOC and MALLOC_PERTURB test
# only if we are not executing the test with valgrind
if expr " $GIT_TEST_OPTS " : ".* --valgrind " >/dev/null ||
if test "$help" = "t"
then
- echo "$test_description"
+ printf '%s\n' "$test_description"
exit 0
fi
test_failure=$(($test_failure + 1))
say_color error "not ok $test_count - $1"
shift
- echo "$@" | sed -e 's/^/# /'
+ printf '%s\n' "$*" | sed -e 's/^/# /'
test "$immediate" = "" || { GIT_EXIT_OK=t; exit 1; }
}
fi
fi
GIT_TEMPLATE_DIR="$GIT_BUILD_DIR"/templates/blt
-unset GIT_CONFIG
GIT_CONFIG_NOSYSTEM=1
GIT_ATTR_NOSYSTEM=1
export PATH GIT_EXEC_PATH GIT_TEMPLATE_DIR GIT_CONFIG_NOSYSTEM GIT_ATTR_NOSYSTEM
--- /dev/null
+#include "git-compat-util.h"
+#include "hashmap.h"
+
+struct test_entry {
+ struct hashmap_entry ent;
+ /* key and value as two \0-terminated strings */
+ char key[FLEX_ARRAY];
+};
+
+static const char *get_value(const struct test_entry *e)
+{
+ return e->key + strlen(e->key) + 1;
+}
+
+static int test_entry_cmp(const struct test_entry *e1,
+	const struct test_entry *e2, const char *key)
+{
+ return strcmp(e1->key, key ? key : e2->key);
+}
+
+static int test_entry_cmp_icase(const struct test_entry *e1,
+ const struct test_entry *e2, const char* key)
+{
+ return strcasecmp(e1->key, key ? key : e2->key);
+}
+
+static struct test_entry *alloc_test_entry(int hash, char *key, int klen,
+ char *value, int vlen)
+{
+ struct test_entry *entry = malloc(sizeof(struct test_entry) + klen
+ + vlen + 2);
+ hashmap_entry_init(entry, hash);
+ memcpy(entry->key, key, klen + 1);
+ memcpy(entry->key + klen + 1, value, vlen + 1);
+ return entry;
+}
+
+#define HASH_METHOD_FNV 0
+#define HASH_METHOD_I 1
+#define HASH_METHOD_IDIV10 2
+#define HASH_METHOD_0 3
+#define HASH_METHOD_X2 4
+#define TEST_SPARSE 8
+#define TEST_ADD 16
+#define TEST_SIZE 100000
+
+static unsigned int hash(unsigned int method, unsigned int i, const char *key)
+{
+ unsigned int hash;
+	switch (method & 3) {
+ case HASH_METHOD_FNV:
+ hash = strhash(key);
+ break;
+ case HASH_METHOD_I:
+ hash = i;
+ break;
+ case HASH_METHOD_IDIV10:
+ hash = i / 10;
+ break;
+ case HASH_METHOD_0:
+ hash = 0;
+ break;
+ }
+
+ if (method & HASH_METHOD_X2)
+ hash = 2 * hash;
+ return hash;
+}
+
+/*
+ * Test performance of hashmap.[ch]
+ * Usage: time echo "perfhashmap method rounds" | test-hashmap
+ */
+static void perf_hashmap(unsigned int method, unsigned int rounds)
+{
+ struct hashmap map;
+ char buf[16];
+ struct test_entry **entries;
+ unsigned int *hashes;
+ unsigned int i, j;
+
+ entries = malloc(TEST_SIZE * sizeof(struct test_entry *));
+ hashes = malloc(TEST_SIZE * sizeof(int));
+ for (i = 0; i < TEST_SIZE; i++) {
+ snprintf(buf, sizeof(buf), "%i", i);
+ entries[i] = alloc_test_entry(0, buf, strlen(buf), "", 0);
+ hashes[i] = hash(method, i, entries[i]->key);
+ }
+
+ if (method & TEST_ADD) {
+ /* test adding to the map */
+ for (j = 0; j < rounds; j++) {
+ hashmap_init(&map, (hashmap_cmp_fn) test_entry_cmp, 0);
+
+ /* add entries */
+ for (i = 0; i < TEST_SIZE; i++) {
+ hashmap_entry_init(entries[i], hashes[i]);
+ hashmap_add(&map, entries[i]);
+ }
+
+ hashmap_free(&map, 0);
+ }
+ } else {
+ /* test map lookups */
+ hashmap_init(&map, (hashmap_cmp_fn) test_entry_cmp, 0);
+
+ /* fill the map (sparsely if specified) */
+ j = (method & TEST_SPARSE) ? TEST_SIZE / 10 : TEST_SIZE;
+ for (i = 0; i < j; i++) {
+ hashmap_entry_init(entries[i], hashes[i]);
+ hashmap_add(&map, entries[i]);
+ }
+
+ for (j = 0; j < rounds; j++) {
+ for (i = 0; i < TEST_SIZE; i++) {
+ struct hashmap_entry key;
+ hashmap_entry_init(&key, hashes[i]);
+ hashmap_get(&map, &key, entries[i]->key);
+ }
+ }
+
+ hashmap_free(&map, 0);
+ }
+}
+
+#define DELIM " \t\r\n"
+
+/*
+ * Read stdin line by line and print result of commands to stdout:
+ *
+ * hash key -> strhash(key) memhash(key) strihash(key) memihash(key)
+ * put key value -> NULL / old value
+ * get key -> NULL / value
+ * remove key -> NULL / old value
+ * iterate -> key1 value1\nkey2 value2\n...
+ * size -> tablesize numentries
+ *
+ * perfhashmap method rounds -> test hashmap.[ch] performance
+ */
+int main(int argc, char *argv[])
+{
+ char line[1024];
+ struct hashmap map;
+ int icase;
+
+ /* init hash map */
+ icase = argc > 1 && !strcmp("ignorecase", argv[1]);
+ hashmap_init(&map, (hashmap_cmp_fn) (icase ? test_entry_cmp_icase
+ : test_entry_cmp), 0);
+
+ /* process commands from stdin */
+ while (fgets(line, sizeof(line), stdin)) {
+ char *cmd, *p1 = NULL, *p2 = NULL;
+ int l1 = 0, l2 = 0, hash = 0;
+ struct test_entry *entry;
+
+ /* break line into command and up to two parameters */
+ cmd = strtok(line, DELIM);
+		/* skip empty lines and comment lines */
+ if (!cmd || *cmd == '#')
+ continue;
+
+ p1 = strtok(NULL, DELIM);
+ if (p1) {
+ l1 = strlen(p1);
+ hash = icase ? strihash(p1) : strhash(p1);
+ p2 = strtok(NULL, DELIM);
+ if (p2)
+ l2 = strlen(p2);
+ }
+
+ if (!strcmp("hash", cmd) && l1) {
+
+ /* print results of different hash functions */
+ printf("%u %u %u %u\n", strhash(p1), memhash(p1, l1),
+ strihash(p1), memihash(p1, l1));
+
+ } else if (!strcmp("add", cmd) && l1 && l2) {
+
+ /* create entry with key = p1, value = p2 */
+ entry = alloc_test_entry(hash, p1, l1, p2, l2);
+
+ /* add to hashmap */
+ hashmap_add(&map, entry);
+
+ } else if (!strcmp("put", cmd) && l1 && l2) {
+
+ /* create entry with key = p1, value = p2 */
+ entry = alloc_test_entry(hash, p1, l1, p2, l2);
+
+ /* add / replace entry */
+ entry = hashmap_put(&map, entry);
+
+ /* print and free replaced entry, if any */
+ puts(entry ? get_value(entry) : "NULL");
+ free(entry);
+
+ } else if (!strcmp("get", cmd) && l1) {
+
+ /* setup static key */
+ struct hashmap_entry key;
+ hashmap_entry_init(&key, hash);
+
+ /* lookup entry in hashmap */
+ entry = hashmap_get(&map, &key, p1);
+
+ /* print result */
+ if (!entry)
+ puts("NULL");
+ while (entry) {
+ puts(get_value(entry));
+ entry = hashmap_get_next(&map, entry);
+ }
+
+ } else if (!strcmp("remove", cmd) && l1) {
+
+ /* setup static key */
+ struct hashmap_entry key;
+ hashmap_entry_init(&key, hash);
+
+ /* remove entry from hashmap */
+ entry = hashmap_remove(&map, &key, p1);
+
+ /* print result and free entry*/
+ puts(entry ? get_value(entry) : "NULL");
+ free(entry);
+
+ } else if (!strcmp("iterate", cmd)) {
+
+ struct hashmap_iter iter;
+ hashmap_iter_init(&map, &iter);
+ while ((entry = hashmap_iter_next(&iter)))
+ printf("%s %s\n", entry->key, get_value(entry));
+
+ } else if (!strcmp("size", cmd)) {
+
+			/* print table size and number of entries */
+ printf("%u %u\n", map.tablesize, map.size);
+
+ } else if (!strcmp("perfhashmap", cmd) && l1 && l2) {
+
+ perf_hashmap(atoi(p1), atoi(p2));
+
+ } else {
+
+ printf("Unknown command %s\n", cmd);
+
+ }
+ }
+
+ hashmap_free(&map, 1);
+ return 0;
+}
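
As a rough illustration of the stdin protocol documented in the comment block above, the helper can be driven by piping commands to it; this is a sketch of expected behaviour under the interface as described, not a test from this patch (the ./test-hashmap path assumes the helper has been built in its usual location):

    {
        echo put key1 value1
        echo put key2 value2
        echo get key1
        echo put key1 changed
        echo iterate
        echo remove key2
        echo size
    } | ./test-hashmap

Each put and remove echoes the key's previous value, or NULL if there was none; get prints the stored value; size prints the table size followed by the number of entries. The timing mode follows the usage given above, e.g. time echo "perfhashmap 1 100" | ./test-hashmap.
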
-#ifdef USE_WILDMATCH
-#undef USE_WILDMATCH /* We need real fnmatch implementation here */
-#endif
#include "cache.h"
-#include "wildmatch.h"
-
-static int perf(int ac, char **av)
-{
- struct timeval tv1, tv2;
- struct stat st;
- int fd, i, n, flags1 = 0, flags2 = 0;
- char *buffer, *p;
- uint32_t usec1, usec2;
- const char *lang;
- const char *file = av[0];
- const char *pattern = av[1];
-
- lang = getenv("LANG");
- if (lang && strcmp(lang, "C"))
- die("Please test it on C locale.");
-
- if ((fd = open(file, O_RDONLY)) == -1 || fstat(fd, &st))
- die_errno("file open");
-
- buffer = xmalloc(st.st_size + 2);
- if (read(fd, buffer, st.st_size) != st.st_size)
- die_errno("read");
-
- buffer[st.st_size] = '\0';
- buffer[st.st_size + 1] = '\0';
- for (i = 0; i < st.st_size; i++)
- if (buffer[i] == '\n')
- buffer[i] = '\0';
-
- n = atoi(av[2]);
- if (av[3] && !strcmp(av[3], "pathname")) {
- flags1 = WM_PATHNAME;
- flags2 = FNM_PATHNAME;
- }
-
- gettimeofday(&tv1, NULL);
- for (i = 0; i < n; i++) {
- for (p = buffer; *p; p += strlen(p) + 1)
- wildmatch(pattern, p, flags1, NULL);
- }
- gettimeofday(&tv2, NULL);
-
- usec1 = (uint32_t)tv2.tv_sec * 1000000 + tv2.tv_usec;
- usec1 -= (uint32_t)tv1.tv_sec * 1000000 + tv1.tv_usec;
- printf("wildmatch %ds %dus\n",
- (int)(usec1 / 1000000),
- (int)(usec1 % 1000000));
-
- gettimeofday(&tv1, NULL);
- for (i = 0; i < n; i++) {
- for (p = buffer; *p; p += strlen(p) + 1)
- fnmatch(pattern, p, flags2);
- }
- gettimeofday(&tv2, NULL);
-
- usec2 = (uint32_t)tv2.tv_sec * 1000000 + tv2.tv_usec;
- usec2 -= (uint32_t)tv1.tv_sec * 1000000 + tv1.tv_usec;
- if (usec2 > usec1)
- printf("fnmatch %ds %dus or %.2f%% slower\n",
- (int)((usec2 - usec1) / 1000000),
- (int)((usec2 - usec1) % 1000000),
- (float)(usec2 - usec1) / usec1 * 100);
- else
- printf("fnmatch %ds %dus or %.2f%% faster\n",
- (int)((usec1 - usec2) / 1000000),
- (int)((usec1 - usec2) % 1000000),
- (float)(usec1 - usec2) / usec1 * 100);
- return 0;
-}
int main(int argc, char **argv)
{
int i;
-
- if (!strcmp(argv[1], "perf"))
- return perf(argc - 2, argv + 2);
-
for (i = 2; i < argc; i++) {
if (argv[i][0] == '/')
die("Forward slash is not allowed at the beginning of the\n"
return !!wildmatch(argv[3], argv[2], WM_PATHNAME | WM_CASEFOLD, NULL);
else if (!strcmp(argv[1], "pathmatch"))
return !!wildmatch(argv[3], argv[2], 0, NULL);
- else if (!strcmp(argv[1], "fnmatch"))
- return !!fnmatch(argv[3], argv[2], FNM_PATHNAME);
else
return 1;
}
struct ref *remote_refs)
{
char *refname, *msg;
- int status;
+ int status, forced = 0;
if (starts_with(buf->buf, "ok ")) {
status = REF_STATUS_OK;
free(msg);
msg = NULL;
}
+ else if (!strcmp(msg, "forced update")) {
+ forced = 1;
+ free(msg);
+ msg = NULL;
+ }
}
if (*ref)
}
(*ref)->status = status;
+ (*ref)->forced_update |= forced;
(*ref)->remote_status = msg;
return !(status == REF_STATUS_OK);
}
static void push_update_refs_status(struct helper_data *data,
- struct ref *remote_refs)
+ struct ref *remote_refs,
+ int flags)
{
struct strbuf buf = STRBUF_INIT;
struct ref *ref = remote_refs;
if (push_update_ref_status(&buf, &ref, remote_refs))
continue;
- if (!data->refspecs || data->no_private_update)
+ if (flags & TRANSPORT_PUSH_DRY_RUN || !data->refspecs || data->no_private_update)
continue;
/* propagate back the update to the remote namespace */
sendline(data, &buf);
strbuf_release(&buf);
- push_update_refs_status(data, remote_refs);
+ push_update_refs_status(data, remote_refs, flags);
return 0;
}
die("helper %s does not support dry-run", data->name);
}
+ if (flags & TRANSPORT_PUSH_FORCE) {
+ if (set_helper_option(transport, "force", "true") != 0)
+ warning("helper %s does not support 'force'", data->name);
+ }
+
helper = get_helper(transport);
write_constant(helper->in, "export\n");
}
free(private);
- if (ref->deletion)
- die("remote-helpers do not support ref deletion");
-
if (ref->peer_ref) {
if (strcmp(ref->peer_ref->name, ref->name))
die("remote-helpers do not support old:new syntax");
if (finish_command(&exporter))
die("Error while running fast-export");
- push_update_refs_status(data, remote_refs);
+ push_update_refs_status(data, remote_refs, flags);
return 0;
}
return transport->push(transport, refspec_nr, refspec, flags);
} else if (transport->push_refs) {
- struct ref *remote_refs =
- transport->get_refs_list(transport, 1);
+ struct ref *remote_refs;
struct ref *local_refs = get_local_heads();
int match_flags = MATCH_REFS_NONE;
int verbose = (transport->verbose > 0);
int pretend = flags & TRANSPORT_PUSH_DRY_RUN;
int push_ret, ret, err;
+ if (check_push_refs(local_refs, refspec_nr, refspec) < 0)
+ return -1;
+
+ remote_refs = transport->get_refs_list(transport, 1);
+
if (flags & TRANSPORT_PUSH_ALL)
match_flags |= MATCH_REFS_ALL;
if (flags & TRANSPORT_PUSH_MIRROR)
unsigned long size1, size2;
int retval;
- tree1 = read_object_with_reference(old, tree_type, &size1, NULL);
- if (!tree1)
- die("unable to read source tree (%s)", sha1_to_hex(old));
- tree2 = read_object_with_reference(new, tree_type, &size2, NULL);
- if (!tree2)
- die("unable to read destination tree (%s)", sha1_to_hex(new));
- init_tree_desc(&t1, tree1, size1);
- init_tree_desc(&t2, tree2, size2);
+ tree1 = fill_tree_descriptor(&t1, old);
+ tree2 = fill_tree_descriptor(&t2, new);
+ size1 = t1.size;
+ size2 = t2.size;
retval = diff_tree(&t1, &t2, base, opt);
if (!*base && DIFF_OPT_TST(opt, FOLLOW_RENAMES) && diff_might_be_rename()) {
init_tree_desc(&t1, tree1, size1);
int diff_root_tree_sha1(const unsigned char *new, const char *base, struct diff_options *opt)
{
- int retval;
- void *tree;
- unsigned long size;
- struct tree_desc empty, real;
-
- tree = read_object_with_reference(new, tree_type, &size, NULL);
- if (!tree)
- die("unable to read root tree (%s)", sha1_to_hex(new));
- init_tree_desc(&real, tree, size);
-
- init_tree_desc(&empty, "", 0);
- retval = diff_tree(&empty, &real, base, opt);
- free(tree);
- return retval;
+ return diff_tree_sha1(NULL, new, base, opt);
}
/* Initialize the descriptor entry */
desc->entry.path = path;
- desc->entry.mode = mode;
+ desc->entry.mode = canon_mode(mode);
desc->entry.sha1 = (const unsigned char *)(path + len);
}
static inline const unsigned char *tree_entry_extract(struct tree_desc *desc, const char **pathp, unsigned int *modep)
{
*pathp = desc->entry.path;
- *modep = canon_mode(desc->entry.mode);
+ *modep = desc->entry.mode;
return desc->entry.sha1;
}
static void do_add_entry(struct unpack_trees_options *o, struct cache_entry *ce,
unsigned int set, unsigned int clear)
{
- clear |= CE_HASHED | CE_UNHASHED;
+ clear |= CE_HASHED;
if (set & CE_REMOVE)
set |= CE_WT_REMOVE;
- ce->next = NULL;
ce->ce_flags = (ce->ce_flags & ~clear) | set;
add_index_entry(&o->result, ce,
ADD_CACHE_OK_TO_ADD | ADD_CACHE_OK_TO_REPLACE);
total++;
}
- progress = start_progress_delay("Checking out files",
+ progress = start_progress_delay(_("Checking out files"),
total, 50, 1);
cnt = 0;
}
static const char upload_pack_usage[] = "git upload-pack [--strict] [--timeout=<n>] <dir>";
-/* bits #0..7 in revision.h, #8..10 in commit.c */
+/* Remember to update object flag allocation in object.h */
#define THEY_HAVE (1u << 11)
#define OUR_REF (1u << 12)
#define WANTED (1u << 13)
return sz;
}
+static int write_one_shallow(const struct commit_graft *graft, void *cb_data)
+{
+ FILE *fp = cb_data;
+ if (graft->nr_parent == -1)
+ fprintf(fp, "--shallow %s\n", sha1_to_hex(graft->sha1));
+ return 0;
+}
+
static void create_pack_file(void)
{
struct child_process pack_objects;
const char *argv[12];
int i, arg = 0;
FILE *pipe_fd;
- char *shallow_file = NULL;
if (shallow_nr) {
- shallow_file = setup_temporary_shallow(NULL);
argv[arg++] = "--shallow-file";
- argv[arg++] = shallow_file;
+ argv[arg++] = "";
}
argv[arg++] = "pack-objects";
argv[arg++] = "--revs";
pipe_fd = xfdopen(pack_objects.in, "w");
+ if (shallow_nr)
+ for_each_commit_graft(write_one_shallow, pipe_fd);
+
for (i = 0; i < want_obj.nr; i++)
fprintf(pipe_fd, "%s\n",
sha1_to_hex(want_obj.objects[i].item->sha1));
error("git upload-pack: git-pack-objects died with error.");
goto fail;
}
- if (shallow_file) {
- if (*shallow_file)
- unlink(shallow_file);
- free(shallow_file);
- }
/* flush the data */
if (0 <= buffered) {
packet_trace_identity("upload-pack");
git_extract_argv0_path(argv[0]);
- read_replace_refs = 0;
+ check_replace_refs = 0;
for (i = 1; i < argc; i++) {
char *arg = argv[i];
"\\\\[a-zA-Z@]+|\\\\.|[a-zA-Z0-9\x80-\xff]+"),
PATTERNS("cpp",
/* Jump targets or access declarations */
- "!^[ \t]*[A-Za-z_][A-Za-z_0-9]*:.*$\n"
- /* C/++ functions/methods at top level */
- "^([A-Za-z_][A-Za-z_0-9]*([ \t*]+[A-Za-z_][A-Za-z_0-9]*([ \t]*::[ \t]*[^[:space:]]+)?){1,}[ \t]*\\([^;]*)$\n"
- /* compound type at top level */
- "^((struct|class|enum)[^;]*)$",
+ "!^[ \t]*[A-Za-z_][A-Za-z_0-9]*:[[:space:]]*($|/[/*])\n"
+ /* functions/methods, variables, and compounds at top level */
+ "^((::[[:space:]]*)?[A-Za-z_].*)$",
/* -- */
"[a-zA-Z_][a-zA-Z0-9_]*"
- "|[-+0-9.e]+[fFlL]?|0[xXbB]?[0-9a-fA-F]+[lL]?"
- "|[-+*/<>%&^|=!]=|--|\\+\\+|<<=?|>>=?|&&|\\|\\||::|->"),
+ "|[-+0-9.e]+[fFlL]?|0[xXbB]?[0-9a-fA-F]+[lLuU]*"
+ "|[-+*/<>%&^|=!]=|--|\\+\\+|<<=?|>>=?|&&|\\|\\||::|->\\*?|\\.\\*"),
PATTERNS("csharp",
/* Keywords */
"!^[ \t]*(do|while|for|if|else|instanceof|new|return|switch|case|throw|catch|using)\n"
/* This code is originally from http://www.cl.cam.ac.uk/~mgk25/ucs/ */
struct interval {
- int first;
- int last;
+ ucs_char_t first;
+ ucs_char_t last;
};
size_t display_mode_esc_sequence_len(const char *s)
while (1) {
size_t cnt = iconv(conv, &cp, &insz, &outpos, &outsz);
- if (cnt == -1) {
+ if (cnt == (size_t) -1) {
size_t sofar;
if (errno != E2BIG) {
free(out);
--- /dev/null
+#include "cache.h"
+
+/*
+ * versioncmp(): copied from string/strverscmp.c in glibc commit
+ * ee9247c38a8def24a59eb5cfb7196a98bef8cfdc, reformatted to Git coding
+ * style. The implementation is under LGPL-2.1 and Git relicenses it
+ * to GPLv2.
+ */
+
+/*
+ * states: S_N: normal, S_I: comparing integral part, S_F: comparing
+ * fractional parts, S_Z: idem but with leading Zeroes only
+ */
+#define S_N 0x0
+#define S_I 0x3
+#define S_F 0x6
+#define S_Z 0x9
+
+/* result_type: CMP: return diff; LEN: compare using len_diff/diff */
+#define CMP 2
+#define LEN 3
+
+
+/*
+ * Compare S1 and S2 as strings holding indices/version numbers,
+ * returning less than, equal to or greater than zero if S1 is less
+ * than, equal to or greater than S2 (for more info, see the texinfo
+ * doc).
+ */
+
+int versioncmp(const char *s1, const char *s2)
+{
+ const unsigned char *p1 = (const unsigned char *) s1;
+ const unsigned char *p2 = (const unsigned char *) s2;
+ unsigned char c1, c2;
+ int state, diff;
+
+ /*
+ * Symbol(s) 0 [1-9] others
+ * Transition (10) 0 (01) d (00) x
+ */
+ static const uint8_t next_state[] = {
+ /* state x d 0 */
+ /* S_N */ S_N, S_I, S_Z,
+ /* S_I */ S_N, S_I, S_I,
+ /* S_F */ S_N, S_F, S_F,
+ /* S_Z */ S_N, S_F, S_Z
+ };
+
+ static const int8_t result_type[] = {
+ /* state x/x x/d x/0 d/x d/d d/0 0/x 0/d 0/0 */
+
+ /* S_N */ CMP, CMP, CMP, CMP, LEN, CMP, CMP, CMP, CMP,
+ /* S_I */ CMP, -1, -1, +1, LEN, LEN, +1, LEN, LEN,
+ /* S_F */ CMP, CMP, CMP, CMP, CMP, CMP, CMP, CMP, CMP,
+ /* S_Z */ CMP, +1, +1, -1, CMP, CMP, -1, CMP, CMP
+ };
+
+ if (p1 == p2)
+ return 0;
+
+ c1 = *p1++;
+ c2 = *p2++;
+ /* Hint: '0' is a digit too. */
+	state = S_N + ((c1 == '0') + (isdigit(c1) != 0));
+
+ while ((diff = c1 - c2) == 0) {
+ if (c1 == '\0')
+ return diff;
+
+ state = next_state[state];
+ c1 = *p1++;
+ c2 = *p2++;
+		state += (c1 == '0') + (isdigit(c1) != 0);
+ }
+
+	state = result_type[state * 3 + ((c2 == '0') + (isdigit(c2) != 0))];
+
+ switch (state) {
+ case CMP:
+ return diff;
+
+ case LEN:
+		while (isdigit(*p1++))
+			if (!isdigit(*p2++))
+				return 1;
+
+		return isdigit(*p2) ? -1 : diff;
+
+ default:
+ return state;
+ }
+}
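
For intuition: when both strings reach a run of digits, versioncmp() effectively compares the runs by numeric value rather than byte by byte, so "1.8.10" orders after "1.8.2" even though '1' sorts before '2' at the first differing byte. A hedged sketch of the version-aware ordering this comparator is meant to back (the --sort=version:refname spelling is an assumption about how the helper is wired up elsewhere in this series, not something shown in this file):

    git tag v1.8.2 &&
    git tag v1.8.10 &&
    git tag --list --sort=version:refname "v1.8.*"
    # prints v1.8.2 before v1.8.10; a plain byte-wise sort would list
    # v1.8.10 first
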
return 0;
}
+/* Remember to update object flag allocation in object.h */
#define COMPLETE (1U << 0)
#define SEEN (1U << 1)
#define TO_SCAN (1U << 2)
int negated = 0;
string = string + strspn(string, ", \t\n\r");
- ep = strchr(string, ',');
- if (!ep)
- len = strlen(string);
- else
- len = ep - string;
+ ep = strchrnul(string, ',');
+ len = ep - string;
if (*string == '-') {
negated = 1;
#include "strbuf.h"
#include "utf8.h"
-static char cut_line[] =
+static const char cut_line[] =
"------------------------ >8 ------------------------\n";
static char default_wt_status_colors[][COLOR_MAXLEN] = {
#define quote_path quote_path_relative
-static void wt_status_print_unmerged_data(struct wt_status *s,
- struct string_list_item *it)
+static const char *wt_status_unmerged_status_string(int stagemask)
{
- const char *c = color(WT_STATUS_UNMERGED, s);
- struct wt_status_change_data *d = it->util;
- struct strbuf onebuf = STRBUF_INIT;
- const char *one, *how = _("bug");
-
- one = quote_path(it->string, s->prefix, &onebuf);
- status_printf(s, color(WT_STATUS_HEADER, s), "\t");
- switch (d->stagemask) {
- case 1: how = _("both deleted:"); break;
- case 2: how = _("added by us:"); break;
- case 3: how = _("deleted by them:"); break;
- case 4: how = _("added by them:"); break;
- case 5: how = _("deleted by us:"); break;
- case 6: how = _("both added:"); break;
- case 7: how = _("both modified:"); break;
+ switch (stagemask) {
+ case 1:
+ return _("both deleted:");
+ case 2:
+ return _("added by us:");
+ case 3:
+ return _("deleted by them:");
+ case 4:
+ return _("added by them:");
+ case 5:
+ return _("deleted by us:");
+ case 6:
+ return _("both added:");
+ case 7:
+ return _("both modified:");
+ default:
+ die(_("bug: unhandled unmerged status %x"), stagemask);
}
- status_printf_more(s, c, "%-20s%s\n", how, one);
- strbuf_release(&onebuf);
}
static const char *wt_status_diff_status_string(int status)
{
switch (status) {
case DIFF_STATUS_ADDED:
- return _("new file");
+ return _("new file:");
case DIFF_STATUS_COPIED:
- return _("copied");
+ return _("copied:");
case DIFF_STATUS_DELETED:
- return _("deleted");
+ return _("deleted:");
case DIFF_STATUS_MODIFIED:
- return _("modified");
+ return _("modified:");
case DIFF_STATUS_RENAMED:
- return _("renamed");
+ return _("renamed:");
case DIFF_STATUS_TYPE_CHANGED:
- return _("typechange");
+ return _("typechange:");
case DIFF_STATUS_UNKNOWN:
- return _("unknown");
+ return _("unknown:");
case DIFF_STATUS_UNMERGED:
- return _("unmerged");
+ return _("unmerged:");
default:
return NULL;
}
}
+static int maxwidth(const char *(*label)(int), int minval, int maxval)
+{
+ int result = 0, i;
+
+ for (i = minval; i <= maxval; i++) {
+ const char *s = label(i);
+ int len = s ? utf8_strwidth(s) : 0;
+ if (len > result)
+ result = len;
+ }
+ return result;
+}
+
+static void wt_status_print_unmerged_data(struct wt_status *s,
+ struct string_list_item *it)
+{
+ const char *c = color(WT_STATUS_UNMERGED, s);
+ struct wt_status_change_data *d = it->util;
+ struct strbuf onebuf = STRBUF_INIT;
+ static char *padding;
+ static int label_width;
+ const char *one, *how;
+ int len;
+
+ if (!padding) {
+ label_width = maxwidth(wt_status_unmerged_status_string, 1, 7);
+ label_width += strlen(" ");
+ padding = xmallocz(label_width);
+ memset(padding, ' ', label_width);
+ }
+
+ one = quote_path(it->string, s->prefix, &onebuf);
+ status_printf(s, color(WT_STATUS_HEADER, s), "\t");
+
+ how = wt_status_unmerged_status_string(d->stagemask);
+ len = label_width - utf8_strwidth(how);
+ status_printf_more(s, c, "%s%.*s%s\n", how, len, padding, one);
+ strbuf_release(&onebuf);
+}
+
static void wt_status_print_change_data(struct wt_status *s,
int change_type,
struct string_list_item *it)
struct strbuf onebuf = STRBUF_INIT, twobuf = STRBUF_INIT;
struct strbuf extra = STRBUF_INIT;
static char *padding;
+ static int label_width;
const char *what;
int len;
if (!padding) {
- int width = 0;
- /* If DIFF_STATUS_* uses outside this range, we're in trouble */
- for (status = 'A'; status <= 'Z'; status++) {
- what = wt_status_diff_status_string(status);
- len = what ? strlen(what) : 0;
- if (len > width)
- width = len;
- }
- width += 2; /* colon and a space */
- padding = xmallocz(width);
- memset(padding, ' ', width);
+		/* If DIFF_STATUS_* ever uses a letter outside the range [A..Z], we're in trouble */
+ label_width = maxwidth(wt_status_diff_status_string, 'A', 'Z');
+ label_width += strlen(" ");
+ padding = xmallocz(label_width);
+ memset(padding, ' ', label_width);
}
one_name = two_name = it->string;
what = wt_status_diff_status_string(status);
if (!what)
die(_("bug: unhandled diff status %c"), status);
- /* 1 for colon, which is not part of "what" */
- len = strlen(padding) - (utf8_strwidth(what) + 1);
+ len = label_width - utf8_strwidth(what);
assert(len >= 0);
if (status == DIFF_STATUS_COPIED || status == DIFF_STATUS_RENAMED)
- status_printf_more(s, c, "%s:%.*s%s -> %s",
+ status_printf_more(s, c, "%s%.*s%s -> %s",
what, len, padding, one, two);
else
- status_printf_more(s, c, "%s:%.*s%s",
+ status_printf_more(s, c, "%s%.*s%s",
what, len, padding, one);
if (extra.len) {
status_printf_more(s, color(WT_STATUS_HEADER, s), "%s", extra.buf);
struct wt_status_change_data *d;
const struct cache_entry *ce = active_cache[i];
- if (!ce_path_match(ce, &s->pathspec))
+ if (!ce_path_match(ce, &s->pathspec, NULL))
continue;
it = string_list_insert(&s->change, ce->name);
d = it->util;
for (i = 0; i < dir.nr; i++) {
struct dir_entry *ent = dir.entries[i];
if (cache_name_is_other(ent->name, ent->len) &&
- match_pathspec_depth(&s->pathspec, ent->name, ent->len, 0, NULL))
+ dir_path_match(ent, &s->pathspec, 0, NULL))
string_list_insert(&s->untracked, ent->name);
free(ent);
}
for (i = 0; i < dir.ignored_nr; i++) {
struct dir_entry *ent = dir.ignored[i];
if (cache_name_is_other(ent->name, ent->len) &&
- match_pathspec_depth(&s->pathspec, ent->name, ent->len, 0, NULL))
+ dir_path_match(ent, &s->pathspec, 0, NULL))
string_list_insert(&s->ignored, ent->name);
free(ent);
}
strbuf_release(&pattern);
}
+void wt_status_add_cut_line(FILE *fp)
+{
+ const char *explanation = _("Do not touch the line above.\nEverything below will be removed.");
+ struct strbuf buf = STRBUF_INIT;
+
+ fprintf(fp, "%c %s", comment_line_char, cut_line);
+ strbuf_add_commented_lines(&buf, explanation, strlen(explanation));
+ fputs(buf.buf, fp);
+ strbuf_release(&buf);
+}
+
static void wt_status_print_verbose(struct wt_status *s)
{
struct rev_info rev;
* diff before committing.
*/
if (s->fp != stdout) {
- const char *explanation = _("Do not touch the line above.\nEverything below will be removed.");
- struct strbuf buf = STRBUF_INIT;
-
rev.diffopt.use_color = 0;
- fprintf(s->fp, "%c %s", comment_line_char, cut_line);
- strbuf_add_commented_lines(&buf, explanation, strlen(explanation));
- fputs(buf.buf, s->fp);
- strbuf_release(&buf);
+ wt_status_add_cut_line(s->fp);
}
run_diff_index(&rev, 1);
}
return;
}
+#define LABEL(string) (s->no_gettext ? (string) : _(string))
+
color_fprintf(s->fp, header_color, " [");
if (upstream_is_gone) {
- color_fprintf(s->fp, header_color, _("gone"));
+ color_fprintf(s->fp, header_color, LABEL(N_("gone")));
} else if (!num_ours) {
- color_fprintf(s->fp, header_color, _("behind "));
+ color_fprintf(s->fp, header_color, LABEL(N_("behind ")));
color_fprintf(s->fp, branch_color_remote, "%d", num_theirs);
} else if (!num_theirs) {
- color_fprintf(s->fp, header_color, _("ahead "));
+		color_fprintf(s->fp, header_color, LABEL(N_("ahead ")));
color_fprintf(s->fp, branch_color_local, "%d", num_ours);
} else {
- color_fprintf(s->fp, header_color, _("ahead "));
+		color_fprintf(s->fp, header_color, LABEL(N_("ahead ")));
color_fprintf(s->fp, branch_color_local, "%d", num_ours);
- color_fprintf(s->fp, header_color, _(", behind "));
+ color_fprintf(s->fp, header_color, ", %s", LABEL(N_("behind ")));
color_fprintf(s->fp, branch_color_remote, "%d", num_theirs);
}
s->use_color = 0;
s->relative_paths = 0;
s->prefix = NULL;
+ s->no_gettext = 1;
wt_shortstatus_print(s);
}
enum commit_whence whence;
int nowarn;
int use_color;
+ int no_gettext;
int display_comment_prefix;
int relative_paths;
int submodule_summary;
};
void wt_status_truncate_message_at_cut_line(struct strbuf *);
+void wt_status_add_cut_line(FILE *fp);
void wt_status_prepare(struct wt_status *s);
void wt_status_print(struct wt_status *s);
void wt_status_collect(struct wt_status *s);