matrix:
  include:
+    - env: Windows
+      os: linux
+      compiler:
+      addons:
+      before_install:
+      before_script:
+      script:
+        - >
+          test "$TRAVIS_REPO_SLUG" != "git/git" ||
+          ci/run-windows-build.sh $TRAVIS_BRANCH $(git rev-parse HEAD)
+      after_failure:
    - env: Linux32
      os: linux
      services:
release (yet).
* The historical argument order "git merge <msg> HEAD <commit>..."
- has been deprecated for quite some time, and will be removed in a
- future release.
+ has been deprecated for quite some time, and is now removed.
* The default location "~/.git-credential-cache/socket" for the
socket used to communicate with the credential-cache daemon has
been moved to "~/.cache/git/credential/socket".
+ * Git now avoids blindly falling back to ".git" when the setup
+ sequence said we are _not_ in a Git repository. A corner case that
+ happens to work right now may be broken by a call to die("BUG").
+ We've tried hard to locate such cases and fixed them, but there
+ might still be cases that need to be addressed--bug reports are
+ greatly appreciated.
+
Updates since v2.12
-------------------
doing other things, output from reset seeped out). These, and
other chattiness, have been fixed.
+ * "git merge <message> HEAD <commit>" syntax that has been deprecated
+ since October 2007 has been removed.
+
+ * The refs completion for a large number of refs has been sped up,
+ partly by giving up disambiguating ambiguous refs and partly by
+ eliminating most of the shell processing between 'git for-each-ref'
+ and 'ls-remote' and Bash's completion facility.
+
+ * On many keyboards, typing "@{" involves holding down the SHIFT key, and
+ one can easily end up with "@{Up..." when typing "@{upstream}". As
+ the upstream/push keywords do not appear anywhere else in the syntax,
+ they are now accepted case-insensitively, which solves this without
+ introducing ambiguity or confusion.
+
+ * The "git tag/branch/for-each-ref" family of commands has long allowed
+ filtering the refs by "--contains X" (show only the refs that are
+ descendants of X), "--merged X" (show only the refs that are
+ ancestors of X), and "--no-merged X" (show only the refs that are not
+ ancestors of X). The one curious omission, "--no-contains X" (show
+ only the refs that are not descendants of X), has now been added to
+ them.
+
+ * The default behaviour of "git log" in an interactive session has
+ been changed to enable "--decorate".
+
+ * The output from "git status --short" has been extended to show
+ various kinds of dirtiness in submodules differently; instead of a
+ single "M" for modified, 'm' and '?' can be shown to signal changes
+ only to the working tree of the submodule and not to the commit that
+ is checked out.
+
Performance, Internal Implementation, Development Support etc.
* An earlier version of sha1dc/sha1.c that was merged to 'master'
compiled incorrectly on Windows, which has been fixed.
+ * "what URL do we want to update this submodule?" and "are we
+ interested in this submodule?" are now two distinct concepts, and
+ the way used to express the latter has been extended, paving the way
+ to make it easier to manage a project with many submodules and to
+ later extend the use of multiple worktrees to a project with
+ submodules.
+
+ * Some debugging output from "git describe" was marked for l10n, but
+ some was not; the missing ones have now been marked.
+
+ * Define a new task in .travis.yml that triggers a Windows test
+ session run elsewhere.
+
+ * Conversion from unsigned char [40] to struct object_id continues.
+
+ * The "submodule" specific field in the ref_store structure is
+ replaced with a more generic "gitdir" that can later be used also
+ when dealing with ref_store that represents the set of refs visible
+ from the other worktrees.
Also contains various documentation updates and code clean-ups.
* Fix for NO_PTHREADS option.
(merge 2225e1ea20 bw/grep-recurse-submodules later to maint).
+ * Git now avoids blindly falling back to ".git" when the setup
+ sequence said we are _not_ in a Git repository. A corner case that
+ happens to work right now may be broken by a call to die("BUG").
+ (merge b1ef400eec jk/no-looking-at-dotgit-outside-repo-final later to maint).
+
+ * A few commands that recently learned the "--recurse-submodules"
+ option misbehaved when started from a subdirectory of the
+ superproject.
+ (merge b2dfeb7c00 bw/recurse-submodules-relative-fix later to maint).
+
+ * The FreeBSD implementation of getcwd(3) behaved differently when an
+ intermediate directory was unreadable/unsearchable, depending on the
+ length of the buffer provided, which our strbuf_getcwd() was not
+ aware of. strbuf_getcwd() has been taught to cope with it better.
+ (merge a54e938e5b rs/freebsd-getcwd-workaround later to maint).
+
+ * A recent update to "rebase -i" stopped running hooks for the "git
+ commit" command during "reword" action, which has been fixed.
+
+ * Removing an entry from a notes tree and then looking up another note
+ entry from the resulting tree using the internal notes API
+ functions did not work as expected. No in-tree user of the API
+ has such an access pattern, but it is still worth fixing.
+
+ * "git receive-pack" could have been forced to die by attempting
+ to allocate an unreasonably large amount of memory with a crafted push
+ certificate; this has been fixed.
+ (merge f2214dede9 bc/push-cert-receive-fix later to maint).
+
+ * Update error handling for the codepath that deals with corrupt loose
+ objects.
+ (merge 51054177b3 jk/loose-object-info-report-error later to maint).
+
+ * "git diff --submodule=diff" learned to work better in a project
+ with a submodule that in turn has its own submodules.
+ (merge 17b254cda6 sb/show-diff-for-submodule-in-diff-fix later to maint).
+
+ * Update the build dependency so that an update to /usr/bin/perl
+ etc. results in recomputation of the perl.mak file.
+ (merge c59c4939c2 ab/regen-perl-mak-with-different-perl later to maint).
+
+ * "git push --recurse-submodules --push-option=<string>" learned to
+ propagate the push option recursively down to pushes in submodules.
+
+ * If a patch e-mail had its first paragraph after an in-body header
+ indented (even after a blank line after the in-body header line),
+ the indented line was mistaken for a continuation of the in-body
+ header. This has been fixed.
+ (merge fd1062e52e lt/mailinfo-in-body-header-continuation later to maint).
+
+ * Clean up fallouts from recent tightening of the set-up sequence,
+ where Git barfs when repository information is accessed without
+ first ensuring that it was started in a repository.
+ (merge bccb22cbb1 jk/no-looking-at-dotgit-outside-repo later to maint).
+
+ * "git p4" used "name-rev HEAD" when it wanted to learn what branch is
+ checked out; it has been updated to use "symbolic-ref HEAD" instead.
+ (merge eff451101d ld/p4-current-branch-fix later to maint).
+
* Other minor doc, test and build updates and code cleanups.
(merge df2a6e38b7 jk/pager-in-use later to maint).
(merge 75ec4a6cb0 ab/branch-list-doc later to maint).
(merge 3e5b36c637 sg/skip-prefix-in-prettify-refname later to maint).
(merge 2c5e2865cc jk/fast-import-cleanup later to maint).
+ (merge 4473060bc2 ab/test-readme-updates later to maint).
+ (merge 48a96972fd ab/doc-submitting later to maint).
+ (merge f5c2bc2b96 jk/make-coccicheck-detect-errors later to maint).
+ (merge c105f563d1 cc/untracked later to maint).
+ (merge 8668976b53 jc/unused-symbols later to maint).
+ (merge fba275dc93 jc/bs-t-is-not-a-tab-for-sed later to maint).
+ (merge be6ed145de mm/ls-files-s-doc later to maint).
+ (merge 60b091c679 qp/bisect-docfix later to maint).
+ (merge 47242cd103 ah/diff-files-ours-theirs-doc later to maint).
+ (merge 35ad44cbd8 sb/submodule-rm-absorb later to maint).
+ (merge 0301f1fd92 va/i18n-perl-scripts later to maint).
+ (merge 733e064d98 vn/revision-shorthand-for-side-branch-log later to maint).
prefix the first line with "area: " where the area is a filename or
identifier for the general area of the code being modified, e.g.
- . archive: ustar header checksum is computed unsigned
- . git-cherry-pick.txt: clarify the use of revision range notation
+ . doc: clarify distinction between sign-off and pgp-signing
+ . githooks.txt: improve the intro section
If in doubt which identifier to use, run "git log --no-merges" on the
files you are modifying to see the current conventions.
+It's customary to start the remainder of the first line after "area: "
+with a lower-case letter. E.g. "doc: clarify...", not "doc:
+Clarify...", or "githooks.txt: improve...", not "githooks.txt:
+Improve...".
+
The body should provide a meaningful commit message, which:
. explains the problem the change tries to solve, iow, what is wrong
noticed that ...
The "Copy commit summary" command of gitk can be used to obtain this
-format.
+format, or this invocation of "git show":
+ git show -s --date=short --pretty='format:%h ("%s", %ad)' <commit>
(3) Generate your patch using Git tools out of your commits.
The URL for a submodule. This variable is copied from the .gitmodules
file to the git config via 'git submodule init'. The user can change
the configured URL before obtaining the submodule via 'git submodule
- update'. After obtaining the submodule, the presence of this variable
- is used as a sign whether the submodule is of interest to git commands.
+ update'. If neither submodule.<name>.active nor submodule.active is
+ set, the presence of this variable is used as a fallback to indicate
+ whether the submodule is of interest to git commands.
See linkgit:git-submodule[1] and linkgit:gitmodules[5] for details.
submodule.<name>.update::
"--ignore-submodules" option. The 'git submodule' commands are not
affected by this setting.
+submodule.<name>.active::
+ Boolean value indicating if the submodule is of interest to git
+ commands. This config option takes precedence over the
+ submodule.active config option.
+
+submodule.active::
+ A repeated field which contains a pathspec used to match against a
+ submodule's path to determine if the submodule is of interest to git
+ commands.
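++
+For example (submodule name and paths hypothetical), the following makes
+every submodule under "lib/" interesting to git commands except the one
+named "lib/huge":
++
+------------
+$ git config --add submodule.active "lib/"
+$ git config submodule.lib/huge.active false
+------------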
+
submodule.fetchJobs::
Specifies how many submodules are fetched/cloned at the same time.
A positive integer allows up to that number of submodules fetched
mix "good" and "bad" with "old" and "new" in a single session.)
In this more general usage, you provide `git bisect` with a "new"
-commit has some property and an "old" commit that doesn't have that
+commit that has some property and an "old" commit that doesn't have that
property. Each time `git bisect` checks out a commit, you test if that
commit has the property. If it does, mark the commit as "new";
otherwise, mark it as "old". When the bisection is done, `git bisect`
[verse]
'git branch' [--color[=<when>] | --no-color] [-r | -a]
[--list] [-v [--abbrev=<length> | --no-abbrev]]
- [--column[=<options>] | --no-column]
- [(--merged | --no-merged | --contains) [<commit>]] [--sort=<key>]
+ [--column[=<options>] | --no-column] [--sort=<key>]
+ [(--merged | --no-merged) [<commit>]]
+ [--contains [<commit>]] [--no-contains [<commit>]]
[--points-at <object>] [--format=<format>] [<pattern>...]
'git branch' [--set-upstream | --track | --no-track] [-l] [-f] <branchname> [<start-point>]
'git branch' (--set-upstream-to=<upstream> | -u <upstream>) [<branchname>]
With `--contains`, shows only the branches that contain the named commit
(in other words, the branches whose tip commits are descendants of the
-named commit). With `--merged`, only branches merged into the named
-commit (i.e. the branches whose tip commits are reachable from the named
-commit) will be listed. With `--no-merged` only branches not merged into
-the named commit will be listed. If the <commit> argument is missing it
-defaults to `HEAD` (i.e. the tip of the current branch).
+named commit); `--no-contains` inverts it. With `--merged`, only branches
+merged into the named commit (i.e. the branches whose tip commits are
+reachable from the named commit) will be listed. With `--no-merged` only
+branches not merged into the named commit will be listed. If the <commit>
+argument is missing it defaults to `HEAD` (i.e. the tip of the current
+branch).
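+
+A brief illustration of the four modes (the tag name is just an example,
+assuming a tag `v2.12.0` exists):
+
+------------
+$ git branch --contains v2.12.0     # branches that have v2.12.0
+$ git branch --no-contains v2.12.0  # branches that lack it
+$ git branch --merged               # branches fully merged into HEAD
+------------
+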
The command's second form creates a new branch head named <branchname>
which points to the current `HEAD`, or <start-point> if given.
Only list branches which contain the specified commit (HEAD
if not specified). Implies `--list`.
+--no-contains [<commit>]::
+ Only list branches which don't contain the specified commit
+ (HEAD if not specified). Implies `--list`.
+
--merged [<commit>]::
Only list branches whose tips are reachable from the
- specified commit (HEAD if not specified). Implies `--list`.
+ specified commit (HEAD if not specified). Implies `--list`,
+ incompatible with `--no-merged`.
--no-merged [<commit>]::
Only list branches whose tips are not reachable from the
- specified commit (HEAD if not specified). Implies `--list`.
+ specified commit (HEAD if not specified). Implies `--list`,
+ incompatible with `--merged`.
<branchname>::
The name of the branch to create or delete.
easier to use the git checkout command with its `-b` option to create
a branch and check it out with a single command.
-The options `--contains`, `--merged` and `--no-merged` serve three related
-but different purposes:
+The options `--contains`, `--no-contains`, `--merged` and `--no-merged`
+serve four related but different purposes:
- `--contains <commit>` is used to find all branches which will need
special attention if <commit> were to be rebased or amended, since those
branches contain the specified <commit>.
+- `--no-contains <commit>` is the inverse of that, i.e. branches that don't
+ contain the specified <commit>.
+
- `--merged` is used to find all branches which can be safely deleted,
since those branches are fully contained by HEAD.
[-o <name>] [-b <name>] [-u <upload-pack>] [--reference <repository>]
[--dissociate] [--separate-git-dir <git dir>]
[--depth <depth>] [--[no-]single-branch]
- [--recursive | --recurse-submodules] [--[no-]shallow-submodules]
+ [--recurse-submodules] [--[no-]shallow-submodules]
[--jobs <n>] [--] <repository> [<directory>]
DESCRIPTION
branch when `--single-branch` clone was made, no remote-tracking
branch is created.
---recursive::
---recurse-submodules::
- After the clone is created, initialize all submodules within,
- using their default settings. This is equivalent to running
+--recurse-submodules[=<pathspec>]::
+ After the clone is created, initialize and clone submodules
+ within based on the provided pathspec. If no pathspec is
+ provided, all submodules are initialized and cloned.
+ Submodules are initialized and cloned using their default
+ settings. The resulting clone has `submodule.active` set to
+ the provided pathspec, or "." (meaning all submodules) if no
+ pathspec is provided. This is equivalent to running
`git submodule update --init --recursive` immediately after
the clone is finished. This option is ignored if the cloned
repository does not have a worktree/checkout (i.e. if any of
:git-diff: 1
include::diff-options.txt[]
+-1 --base::
+-2 --ours::
+-3 --theirs::
+ Compare the working tree with the "base" version (stage #1),
+ "our branch" (stage #2) or "their branch" (stage #3). The
+ index contains these stages only for unmerged entries i.e.
+ while resolving conflicts. See linkgit:git-read-tree[1]
+ section "3-Way Merge" for detailed information.
+
+-0::
+ Omit diff output for unmerged entries and just show
+ "Unmerged". Can be used only when comparing the working tree
+ with the index.
+
<path>...::
The <paths> parameters, when given, are used to limit
the diff to the named paths (you can give directory
'git for-each-ref' [--count=<count>] [--shell|--perl|--python|--tcl]
[(--sort=<key>)...] [--format=<format>] [<pattern>...]
[--points-at <object>] [(--merged | --no-merged) [<object>]]
- [--contains [<object>]]
+ [--contains [<object>]] [--no-contains [<object>]]
DESCRIPTION
-----------
--merged [<object>]::
Only list refs whose tips are reachable from the
- specified commit (HEAD if not specified).
+ specified commit (HEAD if not specified),
+ incompatible with `--no-merged`.
--no-merged [<object>]::
Only list refs whose tips are not reachable from the
- specified commit (HEAD if not specified).
+ specified commit (HEAD if not specified),
+ incompatible with `--merged`.
--contains [<object>]::
Only list refs which contain the specified commit (HEAD if not
specified).
+--no-contains [<object>]::
+ Only list refs which don't contain the specified commit (HEAD
+ if not specified).
+
--ignore-case::
Sorting and filtering refs are case insensitive.
-s::
--stage::
- Show staged contents' object name, mode bits and stage number in the output.
+ Show staged contents' mode bits, object name and stage number in the output.
--directory::
If a whole directory is classified as "other", show just its
[-s <strategy>] [-X <strategy-option>] [-S[<keyid>]]
[--[no-]allow-unrelated-histories]
[--[no-]rerere-autoupdate] [-m <msg>] [<commit>...]
-'git merge' <msg> HEAD <commit>...
'git merge' --abort
'git merge' --continue
D---E---F---G---H master
------------
-The second syntax (<msg> `HEAD` <commit>...) is supported for
-historical reasons. Do not use it from the command line or in
-new scripts. It is the same as `git merge -m <msg> <commit>...`.
-
-The third syntax ("`git merge --abort`") can only be run after the
+The second syntax ("`git merge --abort`") can only be run after the
merge has resulted in conflicts. 'git merge --abort' will abort the
merge process and try to reconstruct the pre-merge state. However,
if there were uncommitted changes when the merge started (and
! ! ignored
-------------------------------------------------
+Submodules have more state and instead report
+ M the submodule has a different HEAD than
+ recorded in the index
+ m the submodule has modified content
+ ? the submodule has untracked files
+since modified content or untracked files in a submodule cannot be added
+via `git add` in the superproject to prepare a commit.
+
+'m' and '?' are applied recursively. For example if a nested submodule
+in a submodule contains an untracked file, this is reported as '?' as well.
+
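+A hypothetical example: with one submodule pointing at a new commit, one
+with modified content, and one with an untracked file, the short format
+could show
+
+-------------------------------------------------
+ M sub-new-head
+ m sub-dirty
+ ? sub-untracked
+-------------------------------------------------
+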
If -b is used the short-format status is preceded by a line
## branchname tracking info
characters are not specially formatted; no quoting or
backslash-escaping is performed.
+Any submodule changes are reported as modified `M` instead of `m` or a single `?`.
+
Porcelain Format Version 2
~~~~~~~~~~~~~~~~~~~~~~~~~~
repository will be assumed to be upstream.
+
Optional <path> arguments limit which submodules will be initialized.
-If no path is specified, all submodules are initialized.
+If no path is specified and submodule.active has been configured, submodules
+configured to be active will be initialized; otherwise all submodules are
+initialized.
+
When present, it will also copy the value of `submodule.$name.update`.
This command does not alter existing information in .git/config.
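
A sketch of the effect (pathspec and submodule paths hypothetical):

-------------
$ git config --add submodule.active "plugins/"
$ git submodule init              # initializes only submodules under plugins/
$ git submodule init doc/manual   # an explicit path is still initialized
-------------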
'git tag' [-a | -s | -u <keyid>] [-f] [-m <msg> | -F <file>]
<tagname> [<commit> | <object>]
'git tag' -d <tagname>...
-'git tag' [-n[<num>]] -l [--contains <commit>] [--points-at <object>]
- [--column[=<options>] | --no-column] [--create-reflog] [--sort=<key>]
- [--format=<format>] [--[no-]merged [<commit>]] [<pattern>...]
+'git tag' [-n[<num>]] -l [--contains <commit>] [--no-contains <commit>]
+ [--points-at <object>] [--column[=<options>] | --no-column]
+ [--create-reflog] [--sort=<key>] [--format=<format>]
+ [--[no-]merged [<commit>]] [<pattern>...]
'git tag' -v [--format=<format>] <tagname>...
DESCRIPTION
-n<num>::
<num> specifies how many lines from the annotation, if any,
- are printed when using -l.
- The default is not to print any annotation lines.
- If no number is given to `-n`, only the first line is printed.
- If the tag is not annotated, the commit message is displayed instead.
-
--l <pattern>::
---list <pattern>::
- List tags with names that match the given pattern (or all if no
- pattern is given). Running "git tag" without arguments also
- lists all tags. The pattern is a shell wildcard (i.e., matched
- using fnmatch(3)). Multiple patterns may be given; if any of
- them matches, the tag is shown.
+ are printed when using -l. Implies `--list`.
++
+The default is not to print any annotation lines.
+If no number is given to `-n`, only the first line is printed.
+If the tag is not annotated, the commit message is displayed instead.
+
+-l::
+--list::
+ List tags. With optional `<pattern>...`, e.g. `git tag --list
+ 'v-*'`, list only the tags that match the pattern(s).
++
+Running "git tag" without arguments also lists all tags. The pattern
+is a shell wildcard (i.e., matched using fnmatch(3)). Multiple
+patterns may be given; if any of them matches, the tag is shown.
++
+This option is implicitly supplied if any other list-like option such
+as `--contains` is provided. See the documentation for each of those
+options for details.
--sort=<key>::
Sort based on the key given. Prefix `-` to sort in
--contains [<commit>]::
Only list tags which contain the specified commit (HEAD if not
- specified).
+ specified). Implies `--list`.
+
+--no-contains [<commit>]::
+ Only list tags which don't contain the specified commit (HEAD if
+ not specified). Implies `--list`.
+
+--merged [<commit>]::
+ Only list tags whose commits are reachable from the specified
+ commit (`HEAD` if not specified), incompatible with `--no-merged`.
+
+--no-merged [<commit>]::
+ Only list tags whose commits are not reachable from the specified
+ commit (`HEAD` if not specified), incompatible with `--merged`.
--points-at <object>::
- Only list tags of the given object.
+ Only list tags of the given object (HEAD if not
+ specified). Implies `--list`.
-m <msg>::
--message=<msg>::
that of linkgit:git-for-each-ref[1]. When unspecified,
defaults to `%(refname:strip=2)`.
---[no-]merged [<commit>]::
- Only list tags whose tips are reachable, or not reachable
- if `--no-merged` is used, from the specified commit (`HEAD`
- if not specified).
-
CONFIGURATION
-------------
By default, 'git tag' in sign-with-default mode (-s) will use your
-------------------------------------------------
$ echo "* text=auto" >.gitattributes
-$ rm .git/index # Remove the index to force Git to
-$ git reset # re-scan the working directory
+$ rm .git/index # Remove the index to re-scan the working directory
+$ git add .
$ git status # Show files that will be normalized
-$ git add -u
-$ git add .gitattributes
$ git commit -m "Introduce end-of-line normalization"
-------------------------------------------------
refers to the branch that the branch specified by branchname is set to build on
top of (configured with `branch.<name>.remote` and
`branch.<name>.merge`). A missing branchname defaults to the
- current one.
+ current one. These suffixes are also accepted when spelled in uppercase, and
+ they mean the same thing no matter the case.
'<branchname>@\{push\}', e.g. 'master@\{push\}', '@\{push\}'::
The suffix '@\{push}' reports the branch "where we would push to" if
Note in the example that we set up a triangular workflow, where we pull
from one location and push to another. In a non-triangular workflow,
'@\{push}' is the same as '@\{upstream}', and there is no need for it.
++
+This suffix is also accepted when spelled in uppercase, and means the same
+thing no matter the case.
'<rev>{caret}', e.g. 'HEAD{caret}, v1.5.1{caret}0'::
A suffix '{caret}' to a revision parameter means the first parent of
The 'r1{caret}!' notation includes commit 'r1' but excludes all of its parents.
By itself, this notation denotes the single commit 'r1'.
-The '<rev>{caret}-{<n>}' notation includes '<rev>' but excludes the <n>th
+The '<rev>{caret}-<n>' notation includes '<rev>' but excludes the <n>th
parent (i.e. a shorthand for '<rev>{caret}<n>..<rev>'), with '<n>' = 1 if
not given. This is typically useful for merge commits where you
can just pass '<commit>{caret}-' to get all the commits in the branch
as giving commit '<rev>' and then all its parents prefixed with
'{caret}' to exclude them (and their ancestors).
-'<rev>{caret}-{<n>}', e.g. 'HEAD{caret}-, HEAD{caret}-2'::
+'<rev>{caret}-<n>', e.g. 'HEAD{caret}-, HEAD{caret}-2'::
Equivalent to '<rev>{caret}<n>..<rev>', with '<n>' = 1 if not
given.
--- /dev/null
+oid-array API
+==============
+
+The oid-array API provides storage and manipulation of sets of object
+identifiers. The emphasis is on storage and processing efficiency,
+making them suitable for large lists. Note that the ordering of items is
+not preserved over some operations.
+
+Data Structures
+---------------
+
+`struct oid_array`::
+
+ A single array of object IDs. This should be initialized by
+ assignment from `OID_ARRAY_INIT`. The `oid` member contains
+ the actual data. The `nr` member contains the number of items in
+ the set. The `alloc` and `sorted` members are used internally,
+ and should not be needed by API callers.
+
+Functions
+---------
+
+`oid_array_append`::
+ Add an item to the set. The object ID will be placed at the end of
+ the array (but note that some operations below may lose this
+ ordering).
+
+`oid_array_lookup`::
+ Perform a binary search of the array for a specific object ID.
+ If found, returns the offset (in number of elements) of the
+ object ID. If not found, returns a negative integer. If the array
+ is not sorted, this function has the side effect of sorting it.
+
+`oid_array_clear`::
+ Free all memory associated with the array and return it to the
+ initial, empty state.
+
+`oid_array_for_each_unique`::
+ Efficiently iterate over each unique element of the list,
+ executing the callback function for each one. If the array is
+ not sorted, this function has the side effect of sorting it. If
+ the callback returns a non-zero value, the iteration ends
+ immediately and the callback's return is propagated; otherwise,
+ 0 is returned.
+
+Examples
+--------
+
+-----------------------------------------
+int print_callback(const struct object_id *oid,
+ void *data)
+{
+ printf("%s\n", oid_to_hex(oid));
+ return 0; /* always continue */
+}
+
+void some_func(void)
+{
+ struct oid_array hashes = OID_ARRAY_INIT;
+ struct object_id oid;
+
+ /* Read objects into our set */
+ while (read_object_from_stdin(oid.hash))
+ oid_array_append(&hashes, &oid);
+
+ /* Check if some objects are in our set */
+ while (read_object_from_stdin(oid.hash)) {
+ if (oid_array_lookup(&hashes, &oid) >= 0)
+ printf("it's in there!\n");
+ }
+
+ /*
+ * Print the unique set of objects. We could also have
+ * avoided adding duplicate objects in the first place,
+ * but we would end up re-sorting the array repeatedly.
+ * Instead, this will sort once and then skip duplicates
+ * in linear time.
+ */
+ oid_array_for_each_unique(&hashes, print_callback, NULL);
+}
+-----------------------------------------
+++ /dev/null
-sha1-array API
-==============
-
-The sha1-array API provides storage and manipulation of sets of SHA-1
-identifiers. The emphasis is on storage and processing efficiency,
-making them suitable for large lists. Note that the ordering of items is
-not preserved over some operations.
-
-Data Structures
----------------
-
-`struct sha1_array`::
-
- A single array of SHA-1 hashes. This should be initialized by
- assignment from `SHA1_ARRAY_INIT`. The `sha1` member contains
- the actual data. The `nr` member contains the number of items in
- the set. The `alloc` and `sorted` members are used internally,
- and should not be needed by API callers.
-
-Functions
----------
-
-`sha1_array_append`::
- Add an item to the set. The sha1 will be placed at the end of
- the array (but note that some operations below may lose this
- ordering).
-
-`sha1_array_lookup`::
- Perform a binary search of the array for a specific sha1.
- If found, returns the offset (in number of elements) of the
- sha1. If not found, returns a negative integer. If the array is
- not sorted, this function has the side effect of sorting it.
-
-`sha1_array_clear`::
- Free all memory associated with the array and return it to the
- initial, empty state.
-
-`sha1_array_for_each_unique`::
- Efficiently iterate over each unique element of the list,
- executing the callback function for each one. If the array is
- not sorted, this function has the side effect of sorting it. If
- the callback returns a non-zero value, the iteration ends
- immediately and the callback's return is propagated; otherwise,
- 0 is returned.
-
-Examples
---------
-
------------------------------------------
-int print_callback(const unsigned char sha1[20],
- void *data)
-{
- printf("%s\n", sha1_to_hex(sha1));
- return 0; /* always continue */
-}
-
-void some_func(void)
-{
- struct sha1_array hashes = SHA1_ARRAY_INIT;
- unsigned char sha1[20];
-
- /* Read objects into our set */
- while (read_object_from_stdin(sha1))
- sha1_array_append(&hashes, sha1);
-
- /* Check if some objects are in our set */
- while (read_object_from_stdin(sha1)) {
- if (sha1_array_lookup(&hashes, sha1) >= 0)
- printf("it's in there!\n");
-
- /*
- * Print the unique set of objects. We could also have
- * avoided adding duplicate objects in the first place,
- * but we would end up re-sorting the array repeatedly.
- * Instead, this will sort once and then skip duplicates
- * in linear time.
- */
- sha1_array_for_each_unique(&hashes, print_callback, NULL);
-}
------------------------------------------
#!/bin/sh
GVF=GIT-VERSION-FILE
-DEF_VER=v2.12.GIT
+DEF_VER=v2.13.0-rc0
LF='
'
TEST_PROGRAMS_NEED_X += test-match-trees
TEST_PROGRAMS_NEED_X += test-mergesort
TEST_PROGRAMS_NEED_X += test-mktemp
+TEST_PROGRAMS_NEED_X += test-online-cpus
TEST_PROGRAMS_NEED_X += test-parse-options
TEST_PROGRAMS_NEED_X += test-path-utils
TEST_PROGRAMS_NEED_X += test-prio-queue
TEST_PROGRAMS_NEED_X += test-read-cache
+TEST_PROGRAMS_NEED_X += test-ref-store
TEST_PROGRAMS_NEED_X += test-regex
TEST_PROGRAMS_NEED_X += test-revision-walking
TEST_PROGRAMS_NEED_X += test-run-command
perl/PM.stamp: FORCE
@$(FIND) perl -type f -name '*.pm' | sort >$@+ && \
+ $(PERL_PATH) -V >>$@+ && \
{ cmp $@+ $@ >/dev/null 2>/dev/null || mv $@+ $@; } && \
$(RM) $@+
C_SOURCES = $(patsubst %.o,%.c,$(C_OBJ))
%.cocci.patch: %.cocci $(C_SOURCES)
@echo ' ' SPATCH $<; \
+ ret=0; \
for f in $(C_SOURCES); do \
- $(SPATCH) --sp-file $< $$f $(SPATCH_FLAGS); \
- done >$@ 2>$@.log; \
+ $(SPATCH) --sp-file $< $$f $(SPATCH_FLAGS) || \
+ { ret=$$?; break; }; \
+ done >$@+ 2>$@.log; \
+ if test $$ret != 0; \
+ then \
+ cat $@.log; \
+ exit 1; \
+ fi; \
+ mv $@+ $@; \
if test -s $@; \
then \
echo ' ' SPATCH result: $@; \
#include "sha1-array.h"
#include "argv-array.h"
-static struct sha1_array good_revs;
-static struct sha1_array skipped_revs;
+static struct oid_array good_revs;
+static struct oid_array skipped_revs;
static struct object_id *current_bad_oid;
{
struct commit_list *p;
struct commit_dist *array = xcalloc(nr, sizeof(*array));
+ struct strbuf buf = STRBUF_INIT;
int cnt, i;
for (p = list, cnt = 0; p; p = p->next) {
}
QSORT(array, cnt, compare_commit_dist);
for (p = list, i = 0; i < cnt; i++) {
- char buf[100]; /* enough for dist=%d */
struct object *obj = &(array[i].commit->object);
- snprintf(buf, sizeof(buf), "dist=%d", array[i].distance);
- add_name_decoration(DECORATION_NONE, buf, obj);
+ strbuf_reset(&buf);
+ strbuf_addf(&buf, "dist=%d", array[i].distance);
+ add_name_decoration(DECORATION_NONE, buf.buf, obj);
p->item = array[i].commit;
p = p->next;
}
if (p)
p->next = NULL;
+ strbuf_release(&buf);
free(array);
return list;
}
current_bad_oid = xmalloc(sizeof(*current_bad_oid));
oidcpy(current_bad_oid, oid);
} else if (starts_with(refname, good_prefix.buf)) {
- sha1_array_append(&good_revs, oid->hash);
+ oid_array_append(&good_revs, oid);
} else if (starts_with(refname, "skip-")) {
- sha1_array_append(&skipped_revs, oid->hash);
+ oid_array_append(&skipped_revs, oid);
}
strbuf_release(&good_prefix);
fclose(fp);
}
-static char *join_sha1_array_hex(struct sha1_array *array, char delim)
+static char *join_sha1_array_hex(struct oid_array *array, char delim)
{
struct strbuf joined_hexs = STRBUF_INIT;
int i;
for (i = 0; i < array->nr; i++) {
- strbuf_addstr(&joined_hexs, sha1_to_hex(array->sha1[i]));
+ strbuf_addstr(&joined_hexs, oid_to_hex(array->oid + i));
if (i + 1 < array->nr)
strbuf_addch(&joined_hexs, delim);
}
while (list) {
struct commit_list *next = list->next;
list->next = NULL;
- if (0 <= sha1_array_lookup(&skipped_revs,
- list->item->object.oid.hash)) {
+ if (0 <= oid_array_lookup(&skipped_revs, &list->item->object.oid)) {
if (skipped_first && !*skipped_first)
*skipped_first = 1;
/* Move current to tried list */
argv_array_pushf(&rev_argv, bad_format, oid_to_hex(current_bad_oid));
for (i = 0; i < good_revs.nr; i++)
argv_array_pushf(&rev_argv, good_format,
- sha1_to_hex(good_revs.sha1[i]));
+ oid_to_hex(good_revs.oid + i));
argv_array_push(&rev_argv, "--");
if (read_paths)
read_bisect_paths(&rev_argv);
static int bisect_checkout(const unsigned char *bisect_rev, int no_checkout)
{
- char bisect_rev_hex[GIT_SHA1_HEXSZ + 1];
+ char bisect_rev_hex[GIT_MAX_HEXSZ + 1];
memcpy(bisect_rev_hex, sha1_to_hex(bisect_rev), GIT_SHA1_HEXSZ + 1);
update_ref(NULL, "BISECT_EXPECTED_REV", bisect_rev, NULL, 0, UPDATE_REFS_DIE_ON_ERR);
return run_command_v_opt(argv_show_branch, RUN_GIT_CMD);
}
-static struct commit *get_commit_reference(const unsigned char *sha1)
+static struct commit *get_commit_reference(const struct object_id *oid)
{
- struct commit *r = lookup_commit_reference(sha1);
+ struct commit *r = lookup_commit_reference(oid->hash);
if (!r)
- die(_("Not a valid commit name %s"), sha1_to_hex(sha1));
+ die(_("Not a valid commit name %s"), oid_to_hex(oid));
return r;
}
int i, n = 0;
ALLOC_ARRAY(rev, 1 + good_revs.nr);
- rev[n++] = get_commit_reference(current_bad_oid->hash);
+ rev[n++] = get_commit_reference(current_bad_oid);
for (i = 0; i < good_revs.nr; i++)
- rev[n++] = get_commit_reference(good_revs.sha1[i]);
+ rev[n++] = get_commit_reference(good_revs.oid + i);
*rev_nr = n;
return rev;
exit(1);
}
-static void handle_skipped_merge_base(const unsigned char *mb)
+static void handle_skipped_merge_base(const struct object_id *mb)
{
- char *mb_hex = sha1_to_hex(mb);
+ char *mb_hex = oid_to_hex(mb);
char *bad_hex = oid_to_hex(current_bad_oid);
char *good_hex = join_sha1_array_hex(&good_revs, ' ');
result = get_merge_bases_many(rev[0], rev_nr - 1, rev + 1);
for (; result; result = result->next) {
- const unsigned char *mb = result->item->object.oid.hash;
- if (!hashcmp(mb, current_bad_oid->hash)) {
+ const struct object_id *mb = &result->item->object.oid;
+ if (!oidcmp(mb, current_bad_oid)) {
handle_bad_merge_base();
- } else if (0 <= sha1_array_lookup(&good_revs, mb)) {
+ } else if (0 <= oid_array_lookup(&good_revs, mb)) {
continue;
- } else if (0 <= sha1_array_lookup(&skipped_revs, mb)) {
+ } else if (0 <= oid_array_lookup(&skipped_revs, mb)) {
handle_skipped_merge_base(mb);
} else {
printf(_("Bisecting: a merge base must be tested\n"));
- exit(bisect_checkout(mb, no_checkout));
+ exit(bisect_checkout(mb->hash, no_checkout));
}
}
{
struct commit *commit;
unsigned char sha1[20];
- char *real_ref, msg[PATH_MAX + 20];
+ char *real_ref;
struct strbuf ref = STRBUF_INIT;
int forcing = 0;
int dont_change_ref = 0;
die(_("Not a valid branch point: '%s'."), start_name);
hashcpy(sha1, commit->object.oid.hash);
- if (forcing)
- snprintf(msg, sizeof msg, "branch: Reset to %s",
- start_name);
- else if (!dont_change_ref)
- snprintf(msg, sizeof msg, "branch: Created from %s",
- start_name);
-
if (reflog)
log_all_ref_updates = LOG_REFS_NORMAL;
if (!dont_change_ref) {
struct ref_transaction *transaction;
struct strbuf err = STRBUF_INIT;
+ char *msg;
+
+ if (forcing)
+ msg = xstrfmt("branch: Reset to %s", start_name);
+ else
+ msg = xstrfmt("branch: Created from %s", start_name);
transaction = ref_transaction_begin(&err);
if (!transaction ||
die("%s", err.buf);
ref_transaction_free(transaction);
strbuf_release(&err);
+ free(msg);
}
if (real_ref && track)
int cnt;
const char *cp;
struct origin *suspect = ent->suspect;
- char hex[GIT_SHA1_HEXSZ + 1];
+ char hex[GIT_MAX_HEXSZ + 1];
oid_to_hex_r(hex, &suspect->commit->object.oid);
printf("%s %d %d %d\n",
const char *cp;
struct origin *suspect = ent->suspect;
struct commit_info ci;
- char hex[GIT_SHA1_HEXSZ + 1];
+ char hex[GIT_MAX_HEXSZ + 1];
int show_raw_time = !!(opt & OUTPUT_RAW_TIMESTAMP);
get_commit_info(suspect->commit, &ci, 1);
OPT_SET_INT('r', "remotes", &filter.kind, N_("act on remote-tracking branches"),
FILTER_REFS_REMOTES),
OPT_CONTAINS(&filter.with_commit, N_("print only branches that contain the commit")),
+ OPT_NO_CONTAINS(&filter.no_commit, N_("print only branches that don't contain the commit")),
OPT_WITH(&filter.with_commit, N_("print only branches that contain the commit")),
+ OPT_WITHOUT(&filter.no_commit, N_("print only branches that don't contain the commit")),
OPT__ABBREV(&filter.abbrev),
OPT_GROUP(N_("Specific git-branch actions:")),
if (!delete && !rename && !edit_description && !new_upstream && !unset_upstream && argc == 0)
list = 1;
- if (filter.with_commit || filter.merge != REF_FILTER_MERGED_NONE || filter.points_at.nr)
+ if (filter.with_commit || filter.merge != REF_FILTER_MERGED_NONE || filter.points_at.nr ||
+ filter.no_commit)
list = 1;
if (!!delete + !!rename + !!new_upstream +
struct expand_data *expand;
};
-static int batch_object_cb(const unsigned char sha1[20], void *vdata)
+static int batch_object_cb(const struct object_id *oid, void *vdata)
{
struct object_cb_data *data = vdata;
- hashcpy(data->expand->oid.hash, sha1);
+ oidcpy(&data->expand->oid, oid);
batch_object_write(NULL, data->opt, data->expand);
return 0;
}
const char *path,
void *data)
{
- sha1_array_append(data, oid->hash);
+ oid_array_append(data, oid);
return 0;
}
uint32_t pos,
void *data)
{
- sha1_array_append(data, oid->hash);
+ oid_array_append(data, oid);
return 0;
}
data.info.typep = &data.type;
if (opt->all_objects) {
- struct sha1_array sa = SHA1_ARRAY_INIT;
+ struct oid_array sa = OID_ARRAY_INIT;
struct object_cb_data cb;
for_each_loose_object(batch_loose_object, &sa, 0);
cb.opt = opt;
cb.expand = &data;
- sha1_array_for_each_unique(&sa, batch_object_cb, &cb);
+ oid_array_for_each_unique(&sa, batch_object_cb, &cb);
- sha1_array_clear(&sa);
+ oid_array_clear(&sa);
return 0;
}
static const char *unique_tracking_name(const char *name, struct object_id *oid)
{
struct tracking_name_data cb_data = { NULL, NULL, NULL, 1 };
- char src_ref[PATH_MAX];
- snprintf(src_ref, PATH_MAX, "refs/heads/%s", name);
- cb_data.src_ref = src_ref;
+ cb_data.src_ref = xstrfmt("refs/heads/%s", name);
cb_data.dst_oid = oid;
for_each_remote(check_tracking_name, &cb_data);
+ free(cb_data.src_ref);
if (cb_data.unique)
return cb_data.dst_ref;
free(cb_data.dst_ref);
};
static int option_no_checkout, option_bare, option_mirror, option_single_branch = -1;
-static int option_local = -1, option_no_hardlinks, option_shared, option_recursive;
+static int option_local = -1, option_no_hardlinks, option_shared;
static int option_shallow_submodules;
static int deepen;
static char *option_template, *option_depth, *option_since;
static struct string_list option_optional_reference = STRING_LIST_INIT_NODUP;
static int option_dissociate;
static int max_jobs = -1;
+static struct string_list option_recurse_submodules = STRING_LIST_INIT_NODUP;
+
+static int recurse_submodules_cb(const struct option *opt,
+ const char *arg, int unset)
+{
+ if (unset)
+ string_list_clear((struct string_list *)opt->value, 0);
+ else if (arg)
+ string_list_append((struct string_list *)opt->value, arg);
+ else
+ string_list_append((struct string_list *)opt->value,
+ (const char *)opt->defval);
+
+ return 0;
+}
static struct option builtin_clone_options[] = {
OPT__VERBOSITY(&option_verbosity),
N_("don't use local hardlinks, always copy")),
OPT_BOOL('s', "shared", &option_shared,
N_("setup as shared repository")),
- OPT_BOOL(0, "recursive", &option_recursive,
- N_("initialize submodules in the clone")),
- OPT_BOOL(0, "recurse-submodules", &option_recursive,
- N_("initialize submodules in the clone")),
+ { OPTION_CALLBACK, 0, "recursive", &option_recurse_submodules,
+ N_("pathspec"), N_("initialize submodules in the clone"),
+ PARSE_OPT_OPTARG | PARSE_OPT_HIDDEN, recurse_submodules_cb,
+ (intptr_t)"." },
+ { OPTION_CALLBACK, 0, "recurse-submodules", &option_recurse_submodules,
+ N_("pathspec"), N_("initialize submodules in the clone"),
+ PARSE_OPT_OPTARG, recurse_submodules_cb, (intptr_t)"." },
OPT_INTEGER('j', "jobs", &max_jobs,
N_("number of submodules cloned in parallel")),
OPT_STRING(0, "template", &option_template, N_("template-directory"),
err |= run_hook_le(NULL, "post-checkout", sha1_to_hex(null_sha1),
oid_to_hex(&oid), "1", NULL);
- if (!err && option_recursive) {
+ if (!err && (option_recurse_submodules.nr > 0)) {
struct argv_array args = ARGV_ARRAY_INIT;
argv_array_pushl(&args, "submodule", "update", "--init", "--recursive", NULL);
fprintf(stderr, _("Cloning into '%s'...\n"), dir);
}
- if (option_recursive) {
+ if (option_recurse_submodules.nr > 0) {
+ struct string_list_item *item;
+ struct strbuf sb = STRBUF_INIT;
+
+ /* remove duplicates */
+ string_list_sort(&option_recurse_submodules);
+ string_list_remove_duplicates(&option_recurse_submodules, 0);
+
+ /*
+ * NEEDSWORK: In a multi-working-tree world, this needs to be
+ * set in the per-worktree config.
+ */
+ for_each_string_list_item(item, &option_recurse_submodules) {
+ strbuf_addf(&sb, "submodule.active=%s",
+ item->string);
+ string_list_append(&option_config,
+ strbuf_detach(&sb, NULL));
+ }
+
if (option_required_reference.nr &&
option_optional_reference.nr)
die(_("clone --recursive is not compatible with "
static const char *implicit_ident_advice(void)
{
- char *user_config = expand_user_path("~/.gitconfig");
+ char *user_config = expand_user_path("~/.gitconfig", 0);
char *xdg_config = xdg_config_home("config");
int config_exists = file_exists(user_config) || file_exists(xdg_config);
}
if (use_global_config) {
- char *user_config = expand_user_path("~/.gitconfig");
+ char *user_config = expand_user_path("~/.gitconfig", 0);
char *xdg_config = xdg_config_home("config");
if (!user_config)
};
static const char *prio_names[] = {
- "head", "lightweight", "annotated",
+ N_("head"), N_("lightweight"), N_("annotated"),
};
static int commit_name_cmp(const struct commit_name *cn1,
free_commit_list(list);
if (debug) {
+ static int label_width = -1;
+ if (label_width < 0) {
+ int i, w;
+ for (i = 0; i < ARRAY_SIZE(prio_names); i++) {
+ w = strlen(_(prio_names[i]));
+ if (label_width < w)
+ label_width = w;
+ }
+ }
for (cur_match = 0; cur_match < match_cnt; cur_match++) {
struct possible_tag *t = &all_matches[cur_match];
- fprintf(stderr, " %-11s %8d %s\n",
- prio_names[t->name->prio],
+ fprintf(stderr, " %-*s %8d %s\n",
+ label_width, _(prio_names[t->name->prio]),
t->depth, t->name->path);
}
fprintf(stderr, _("traversed %lu commits\n"), seen_commits);
#define DIFF_NO_INDEX_IMPLICIT 2
struct blobinfo {
- unsigned char sha1[20];
+ struct object_id oid;
const char *name;
unsigned mode;
};
static void stuff_change(struct diff_options *opt,
unsigned old_mode, unsigned new_mode,
- const unsigned char *old_sha1,
- const unsigned char *new_sha1,
- int old_sha1_valid,
- int new_sha1_valid,
+ const struct object_id *old_oid,
+ const struct object_id *new_oid,
+ int old_oid_valid,
+ int new_oid_valid,
const char *old_name,
const char *new_name)
{
struct diff_filespec *one, *two;
- if (!is_null_sha1(old_sha1) && !is_null_sha1(new_sha1) &&
- !hashcmp(old_sha1, new_sha1) && (old_mode == new_mode))
+ if (!is_null_oid(old_oid) && !is_null_oid(new_oid) &&
+ !oidcmp(old_oid, new_oid) && (old_mode == new_mode))
return;
if (DIFF_OPT_TST(opt, REVERSE_DIFF)) {
SWAP(old_mode, new_mode);
- SWAP(old_sha1, new_sha1);
+ SWAP(old_oid, new_oid);
SWAP(old_name, new_name);
}
one = alloc_filespec(old_name);
two = alloc_filespec(new_name);
- fill_filespec(one, old_sha1, old_sha1_valid, old_mode);
- fill_filespec(two, new_sha1, new_sha1_valid, new_mode);
+ fill_filespec(one, old_oid->hash, old_oid_valid, old_mode);
+ fill_filespec(two, new_oid->hash, new_oid_valid, new_mode);
diff_queue(&diff_queued_diff, one, two);
}
stuff_change(&revs->diffopt,
blob[0].mode, canon_mode(st.st_mode),
- blob[0].sha1, null_sha1,
+ &blob[0].oid, &null_oid,
1, 0,
path, path);
diffcore_std(&revs->diffopt);
stuff_change(&revs->diffopt,
blob[0].mode, blob[1].mode,
- blob[0].sha1, blob[1].sha1,
+ &blob[0].oid, &blob[1].oid,
1, 1,
blob[0].name, blob[1].name);
diffcore_std(&revs->diffopt);
struct object_array_entry *ent0,
struct object_array_entry *ent1)
{
- const unsigned char *(sha1[2]);
+ const struct object_id *(oid[2]);
int swap = 0;
if (argc > 1)
*/
if (ent1->item->flags & UNINTERESTING)
swap = 1;
- sha1[swap] = ent0->item->oid.hash;
- sha1[1 - swap] = ent1->item->oid.hash;
- diff_tree_sha1(sha1[0], sha1[1], "", &revs->diffopt);
+ oid[swap] = &ent0->item->oid;
+ oid[1 - swap] = &ent1->item->oid;
+ diff_tree_sha1(oid[0]->hash, oid[1]->hash, "", &revs->diffopt);
log_tree_diff_flush(revs);
return 0;
}
struct object_array_entry *ent,
int ents)
{
- struct sha1_array parents = SHA1_ARRAY_INIT;
+ struct oid_array parents = OID_ARRAY_INIT;
int i;
if (argc > 1)
if (!revs->dense_combined_merges && !revs->combine_merges)
revs->dense_combined_merges = revs->combine_merges = 1;
for (i = 1; i < ents; i++)
- sha1_array_append(&parents, ent[i].item->oid.hash);
+ oid_array_append(&parents, &ent[i].item->oid);
diff_tree_combined(ent[0].item->oid.hash, &parents,
revs->dense_combined_merges, revs);
- sha1_array_clear(&parents);
+ oid_array_clear(&parents);
return 0;
}
} else if (obj->type == OBJ_BLOB) {
if (2 <= blobs)
die(_("more than two blobs given: '%s'"), name);
- hashcpy(blob[blobs].sha1, obj->oid.hash);
+ hashcpy(blob[blobs].oid.hash, obj->oid.hash);
blob[blobs].name = name;
blob[blobs].mode = entry->mode;
blobs++;
return data;
}
+static int checkout_path(unsigned mode, struct object_id *oid,
+ const char *path, const struct checkout *state)
+{
+ struct cache_entry *ce;
+ int ret;
+
+ ce = make_cache_entry(mode, oid->hash, path, 0, 0);
+ ret = checkout_entry(ce, state, NULL);
+
+ free(ce);
+ return ret;
+}
+
static int run_dir_diff(const char *extcmd, int symlinks, const char *prefix,
int argc, const char **argv)
{
struct strbuf rpath = STRBUF_INIT, buf = STRBUF_INIT;
struct strbuf ldir = STRBUF_INIT, rdir = STRBUF_INIT;
struct strbuf wtdir = STRBUF_INIT;
+ char *lbase_dir, *rbase_dir;
size_t ldir_len, rdir_len, wtdir_len;
- struct cache_entry *ce = xcalloc(1, sizeof(ce) + PATH_MAX + 1);
const char *workdir, *tmp;
int ret = 0, i;
FILE *fp;
memset(&wtindex, 0, sizeof(wtindex));
memset(&lstate, 0, sizeof(lstate));
- lstate.base_dir = ldir.buf;
+ lstate.base_dir = lbase_dir = xstrdup(ldir.buf);
lstate.base_dir_len = ldir.len;
lstate.force = 1;
memset(&rstate, 0, sizeof(rstate));
- rstate.base_dir = rdir.buf;
+ rstate.base_dir = rbase_dir = xstrdup(rdir.buf);
rstate.base_dir_len = rdir.len;
rstate.force = 1;
struct object_id loid, roid;
char status;
const char *src_path, *dst_path;
- size_t src_path_len, dst_path_len;
if (starts_with(info.buf, "::"))
die(N_("combined diff formats('-c' and '--cc') are "
if (strbuf_getline_nul(&lpath, fp))
break;
src_path = lpath.buf;
- src_path_len = lpath.len;
i++;
if (status != 'C' && status != 'R') {
dst_path = src_path;
- dst_path_len = src_path_len;
} else {
if (strbuf_getline_nul(&rpath, fp))
break;
dst_path = rpath.buf;
- dst_path_len = rpath.len;
}
if (S_ISGITLINK(lmode) || S_ISGITLINK(rmode)) {
}
if (lmode && status != 'C') {
- ce->ce_mode = lmode;
- oidcpy(&ce->oid, &loid);
- strcpy(ce->name, src_path);
- ce->ce_namelen = src_path_len;
- if (checkout_entry(ce, &lstate, NULL))
+ if (checkout_path(lmode, &loid, src_path, &lstate))
return error("could not write '%s'", src_path);
}
hashmap_add(&working_tree_dups, entry);
if (!use_wt_file(workdir, dst_path, &roid)) {
- ce->ce_mode = rmode;
- oidcpy(&ce->oid, &roid);
- strcpy(ce->name, dst_path);
- ce->ce_namelen = dst_path_len;
- if (checkout_entry(ce, &rstate, NULL))
+ if (checkout_path(rmode, &roid, dst_path, &rstate))
return error("could not write '%s'",
dst_path);
} else if (!is_null_oid(&roid)) {
exit_cleanup(tmpdir, rc);
finish:
- free(ce);
+ free(lbase_dir);
+ free(rbase_dir);
strbuf_release(&ldir);
strbuf_release(&rdir);
strbuf_release(&wtdir);
char **pack_lockfile_ptr = NULL;
struct child_process *conn;
struct fetch_pack_args args;
- struct sha1_array shallow = SHA1_ARRAY_INIT;
+ struct oid_array shallow = OID_ARRAY_INIT;
struct string_list deepen_not = STRING_LIST_INIT_DUP;
packet_trace_identity("fetch-pack");
struct ref *ref,
int check_old)
{
- char msg[1024];
+ char *msg;
char *rla = getenv("GIT_REFLOG_ACTION");
struct ref_transaction *transaction;
struct strbuf err = STRBUF_INIT;
return 0;
if (!rla)
rla = default_rla.buf;
- snprintf(msg, sizeof(msg), "%s: %s", rla, action);
+ msg = xstrfmt("%s: %s", rla, action);
transaction = ref_transaction_begin(&err);
if (!transaction ||
ref_transaction_free(transaction);
strbuf_release(&err);
+ free(msg);
return 0;
fail:
ref_transaction_free(transaction);
error("%s", err.buf);
strbuf_release(&err);
+ free(msg);
return df_conflict ? STORE_REF_ERROR_DF_CONFLICT
: STORE_REF_ERROR_OTHER;
}
if ((recurse_submodules != RECURSE_SUBMODULES_OFF) &&
(recurse_submodules != RECURSE_SUBMODULES_ON))
- check_for_new_submodule_commits(ref->new_oid.hash);
+ check_for_new_submodule_commits(&ref->new_oid);
r = s_update_ref(msg, ref, 0);
format_display(display, r ? '!' : '*', what,
r ? _("unable to update local ref") : NULL,
strbuf_add_unique_abbrev(&quickref, ref->new_oid.hash, DEFAULT_ABBREV);
if ((recurse_submodules != RECURSE_SUBMODULES_OFF) &&
(recurse_submodules != RECURSE_SUBMODULES_ON))
- check_for_new_submodule_commits(ref->new_oid.hash);
+ check_for_new_submodule_commits(&ref->new_oid);
r = s_update_ref("fast-forward", ref, 1);
format_display(display, r ? '!' : ' ', quickref.buf,
r ? _("unable to update local ref") : NULL,
strbuf_add_unique_abbrev(&quickref, ref->new_oid.hash, DEFAULT_ABBREV);
if ((recurse_submodules != RECURSE_SUBMODULES_OFF) &&
(recurse_submodules != RECURSE_SUBMODULES_ON))
- check_for_new_submodule_commits(ref->new_oid.hash);
+ check_for_new_submodule_commits(&ref->new_oid);
r = s_update_ref("forced-update", ref, 1);
format_display(display, r ? '!' : '+', quickref.buf,
r ? _("unable to update local ref") : _("forced update"),
static char const * const for_each_ref_usage[] = {
N_("git for-each-ref [<options>] [<pattern>]"),
N_("git for-each-ref [--points-at <object>]"),
- N_("git for-each-ref [(--merged | --no-merged) [<object>]]"),
- N_("git for-each-ref [--contains [<object>]]"),
+ N_("git for-each-ref [(--merged | --no-merged) [<commit>]]"),
+ N_("git for-each-ref [--contains [<commit>]] [--no-contains [<commit>]]"),
NULL
};
OPT_MERGED(&filter, N_("print only refs that are merged")),
OPT_NO_MERGED(&filter, N_("print only refs that are not merged")),
OPT_CONTAINS(&filter.with_commit, N_("print only refs which contain the commit")),
+ OPT_NO_CONTAINS(&filter.no_commit, N_("print only refs which don't contain the commit")),
OPT_BOOL(0, "ignore-case", &icase, N_("sorting and filtering are case insensitive")),
OPT_END(),
};
* distributed, we can check only one and get a reasonable
* estimate.
*/
- char path[PATH_MAX];
- const char *objdir = get_object_directory();
DIR *dir;
struct dirent *ent;
int auto_threshold;
if (gc_auto_threshold <= 0)
return 0;
- if (sizeof(path) <= snprintf(path, sizeof(path), "%s/17", objdir)) {
- warning(_("insanely long object directory %.*s"), 50, objdir);
- return 0;
- }
- dir = opendir(path);
+ dir = opendir(git_path("objects/17"));
if (!dir)
return 0;
{
struct strbuf pathbuf = STRBUF_INIT;
- if (opt->relative && opt->prefix_length) {
- quote_path_relative(filename + tree_name_len, opt->prefix, &pathbuf);
- strbuf_insert(&pathbuf, 0, filename, tree_name_len);
- } else if (super_prefix) {
+ if (super_prefix) {
strbuf_add(&pathbuf, filename, tree_name_len);
strbuf_addstr(&pathbuf, super_prefix);
strbuf_addstr(&pathbuf, filename + tree_name_len);
strbuf_addstr(&pathbuf, filename);
}
+ if (opt->relative && opt->prefix_length) {
+ char *name = strbuf_detach(&pathbuf, NULL);
+ quote_path_relative(name + tree_name_len, opt->prefix, &pathbuf);
+ strbuf_insert(&pathbuf, 0, name, tree_name_len);
+ free(name);
+ }
+
#ifndef NO_PTHREADS
if (num_threads) {
add_work(opt, GREP_SOURCE_SHA1, pathbuf.buf, path, oid);
{
struct strbuf buf = STRBUF_INIT;
+ if (super_prefix)
+ strbuf_addstr(&buf, super_prefix);
+ strbuf_addstr(&buf, filename);
+
if (opt->relative && opt->prefix_length) {
- quote_path_relative(filename, opt->prefix, &buf);
- } else {
- if (super_prefix)
- strbuf_addstr(&buf, super_prefix);
- strbuf_addstr(&buf, filename);
+ char *name = strbuf_detach(&buf, NULL);
+ quote_path_relative(name, opt->prefix, &buf);
+ free(name);
}
#ifndef NO_PTHREADS
}
static void compile_submodule_options(const struct grep_opt *opt,
- const struct pathspec *pathspec,
+ const char **argv,
int cached, int untracked,
int opt_exclude, int use_index,
int pattern_type_arg)
{
struct grep_pat *pattern;
- int i;
if (recurse_submodules)
argv_array_push(&submodule_options, "--recurse-submodules");
/* Add Pathspecs */
argv_array_push(&submodule_options, "--");
- for (i = 0; i < pathspec->nr; i++)
- argv_array_push(&submodule_options,
- pathspec->items[i].original);
+ for (; *argv; argv++)
+ argv_array_push(&submodule_options, *argv);
}
/*
prepare_submodule_repo_env(&cp.env_array);
argv_array_push(&cp.env_array, GIT_DIR_ENVIRONMENT);
+ if (opt->relative && opt->prefix_length)
+ argv_array_pushf(&cp.env_array, "%s=%s",
+ GIT_TOPLEVEL_PREFIX_ENVIRONMENT,
+ opt->prefix);
+
/* Add super prefix */
argv_array_pushf(&cp.args, "--super-prefix=%s%s/",
super_prefix ? super_prefix : "",
OPT_SET_INT(0, "exclude-standard", &opt_exclude,
N_("ignore files specified via '.gitignore'"), 1),
OPT_BOOL(0, "recurse-submodules", &recurse_submodules,
- N_("recursivley search in each submodule")),
+ N_("recursively search in each submodule")),
OPT_STRING(0, "parent-basename", &parent_basename,
N_("basename"),
N_("prepend parent project's basename to output")),
if (recurse_submodules) {
gitmodules_config();
- compile_submodule_options(&opt, &pathspec, cached, untracked,
+ compile_submodule_options(&opt, argv + i, cached, untracked,
opt_exclude, use_index,
pattern_type_arg);
}
hit |= wait_all();
if (hit && show_in_pager)
run_pager(&opt, prefix);
+ clear_pathspec(&pathspec);
free_grep_patterns(&opt);
return !hit;
}
if (from_stdin) {
input_fd = 0;
if (!pack_name) {
- static char tmp_file[PATH_MAX];
- output_fd = odb_mkstemp(tmp_file, sizeof(tmp_file),
+ struct strbuf tmp_file = STRBUF_INIT;
+ output_fd = odb_mkstemp(&tmp_file,
"pack/tmp_pack_XXXXXX");
- pack_name = xstrdup(tmp_file);
- } else
+ pack_name = strbuf_detach(&tmp_file, NULL);
+ } else {
output_fd = open(pack_name, O_CREAT|O_EXCL|O_RDWR, 0600);
- if (output_fd < 0)
- die_errno(_("unable to create '%s'"), pack_name);
+ if (output_fd < 0)
+ die_errno(_("unable to create '%s'"), pack_name);
+ }
nothread_data.pack_fd = output_fd;
} else {
input_fd = open(pack_name, O_RDONLY);
unsigned long has_size;
read_lock();
has_type = sha1_object_info(sha1, &has_size);
+ if (has_type < 0)
+ die(_("cannot read existing object info %s"), sha1_to_hex(sha1));
if (has_type != type || has_size != size)
die(_("SHA1 COLLISION FOUND WITH %s !"), sha1_to_hex(sha1));
has_data = read_sha1_file(sha1, &has_type, &has_size);
if (!from_stdin) {
printf("%s\n", sha1_to_hex(sha1));
} else {
- char buf[48];
- int len = snprintf(buf, sizeof(buf), "%s\t%s\n",
- report, sha1_to_hex(sha1));
- write_or_die(1, buf, len);
+ struct strbuf buf = STRBUF_INIT;
+
+ strbuf_addf(&buf, "%s\t%s\n", report, sha1_to_hex(sha1));
+ write_or_die(1, buf.buf, buf.len);
+ strbuf_release(&buf);
/*
* Let's just mimic git-unpack-objects here and write
struct string_list args;
};
+static int auto_decoration_style(void)
+{
+ return (isatty(1) || pager_in_use()) ? DECORATE_SHORT_REFS : 0;
+}
+
static int parse_decoration_style(const char *var, const char *value)
{
switch (git_config_maybe_bool(var, value)) {
else if (!strcmp(value, "short"))
return DECORATE_SHORT_REFS;
else if (!strcmp(value, "auto"))
- return (isatty(1) || pager_in_use()) ? DECORATE_SHORT_REFS : 0;
+ return auto_decoration_style();
return -1;
}
if (decoration_style < 0)
decoration_style = 0; /* maybe warn? */
return 0;
+ } else {
+ decoration_style = auto_decoration_style();
}
if (!strcmp(var, "log.showroot")) {
default_show_root = git_config_bool(var, value);
static int debug_mode;
static int show_eol;
static int recurse_submodules;
-static struct argv_array submodules_options = ARGV_ARRAY_INIT;
+static struct argv_array submodule_options = ARGV_ARRAY_INIT;
static const char *prefix;
static const char *super_prefix;
/*
* Compile an argv_array with all of the options supported by --recurse_submodules
*/
-static void compile_submodule_options(const struct dir_struct *dir, int show_tag)
+static void compile_submodule_options(const char **argv,
+ const struct dir_struct *dir,
+ int show_tag)
{
if (line_terminator == '\0')
- argv_array_push(&submodules_options, "-z");
+ argv_array_push(&submodule_options, "-z");
if (show_tag)
- argv_array_push(&submodules_options, "-t");
+ argv_array_push(&submodule_options, "-t");
if (show_valid_bit)
- argv_array_push(&submodules_options, "-v");
+ argv_array_push(&submodule_options, "-v");
if (show_cached)
- argv_array_push(&submodules_options, "--cached");
+ argv_array_push(&submodule_options, "--cached");
if (show_eol)
- argv_array_push(&submodules_options, "--eol");
+ argv_array_push(&submodule_options, "--eol");
if (debug_mode)
- argv_array_push(&submodules_options, "--debug");
+ argv_array_push(&submodule_options, "--debug");
+
+ /* Add Pathspecs */
+ argv_array_push(&submodule_options, "--");
+ for (; *argv; argv++)
+ argv_array_push(&submodule_options, *argv);
}
/**
{
struct child_process cp = CHILD_PROCESS_INIT;
int status;
- int i;
+ if (prefix_len)
+ argv_array_pushf(&cp.env_array, "%s=%s",
+ GIT_TOPLEVEL_PREFIX_ENVIRONMENT,
+ prefix);
argv_array_pushf(&cp.args, "--super-prefix=%s%s/",
super_prefix ? super_prefix : "",
ce->name);
argv_array_push(&cp.args, "--recurse-submodules");
/* add supported options */
- argv_array_pushv(&cp.args, submodules_options.argv);
-
- /*
- * Pass in the original pathspec args. The submodule will be
- * responsible for prepending the 'submodule_prefix' prior to comparing
- * against the pathspec for matches.
- */
- argv_array_push(&cp.args, "--");
- for (i = 0; i < pathspec.nr; i++)
- argv_array_push(&cp.args, pathspec.items[i].original);
+ argv_array_pushv(&cp.args, submodule_options.argv);
cp.git_cmd = 1;
cp.dir = ce->name;
setup_work_tree();
if (recurse_submodules)
- compile_submodule_options(&dir, show_tag);
+ compile_submodule_options(argv, &dir, show_tag);
if (recurse_submodules &&
(show_stage || show_deleted || show_others || show_unmerged ||
static int tail_match(const char **pattern, const char *path)
{
const char *p;
- char pathbuf[PATH_MAX];
+ char *pathbuf;
if (!pattern)
return 1; /* no restriction */
- if (snprintf(pathbuf, sizeof(pathbuf), "/%s", path) > sizeof(pathbuf))
- return error("insanely long ref %.*s...", 20, path);
+ pathbuf = xstrfmt("/%s", path);
while ((p = *(pattern++)) != NULL) {
- if (!wildmatch(p, pathbuf, 0, NULL))
+ if (!wildmatch(p, pathbuf, 0, NULL)) {
+ free(pathbuf);
return 1;
+ }
}
+ free(pathbuf);
return 0;
}
{
int found;
const char *arguments[] = { pgm, "", "", "", path, "", "", "", NULL };
- char hexbuf[4][GIT_SHA1_HEXSZ + 1];
+ char hexbuf[4][GIT_MAX_HEXSZ + 1];
char ownbuf[4][60];
if (pos >= active_nr)
static const char * const builtin_merge_usage[] = {
N_("git merge [<options>] [<commit>...]"),
- N_("git merge [<options>] <msg> HEAD <commit>"),
N_("git merge --abort"),
N_("git merge --continue"),
NULL
static int try_merge_strategy(const char *strategy, struct commit_list *common,
struct commit_list *remoteheads,
- struct commit *head, const char *head_arg)
+ struct commit *head)
{
static struct lock_file lock;
+ const char *head_arg = "HEAD";
hold_locked_index(&lock, LOCK_DIE_ON_ERROR);
refresh_cache(REFRESH_QUIET);
return 1;
}
-static struct commit *is_old_style_invocation(int argc, const char **argv,
- const struct object_id *head)
-{
- struct commit *second_token = NULL;
- if (argc > 2) {
- struct object_id second_oid;
-
- if (get_oid(argv[1], &second_oid))
- return NULL;
- second_token = lookup_commit_reference_gently(second_oid.hash, 0);
- if (!second_token)
- die(_("'%s' is not a commit"), argv[1]);
- if (oidcmp(&second_token->object.oid, head))
- return NULL;
- }
- return second_token;
-}
-
static int evaluate_result(void)
{
int cnt = 0;
struct object_id result_tree, stash, head_oid;
struct commit *head_commit;
struct strbuf buf = STRBUF_INIT;
- const char *head_arg;
int i, ret = 0, head_subsumed;
int best_cnt = -1, merge_was_ok = 0, automerge_was_ok = 0;
struct commit_list *common = NULL;
}
/*
- * This could be traditional "merge <msg> HEAD <commit>..." and
- * the way we can tell it is to see if the second token is HEAD,
- * but some people might have misused the interface and used a
- * commit-ish that is the same as HEAD there instead.
- * Traditional format never would have "-m" so it is an
- * additional safety measure to check for it.
+ * All the rest are the commits being merged; prepare
+ * the standard merge summary message to be appended
+ * to the given message.
*/
- if (!have_message &&
- is_old_style_invocation(argc, argv, &head_commit->object.oid)) {
- warning("old-style 'git merge <msg> HEAD <commit>' is deprecated.");
- strbuf_addstr(&merge_msg, argv[0]);
- head_arg = argv[1];
- argv += 2;
- argc -= 2;
- remoteheads = collect_parents(head_commit, &head_subsumed,
- argc, argv, NULL);
- } else {
- /* We are invoked directly as the first-class UI. */
- head_arg = "HEAD";
-
- /*
- * All the rest are the commits being merged; prepare
- * the standard merge summary message to be appended
- * to the given message.
- */
- remoteheads = collect_parents(head_commit, &head_subsumed,
- argc, argv, &merge_msg);
- }
+ remoteheads = collect_parents(head_commit, &head_subsumed,
+ argc, argv, &merge_msg);
if (!head_commit || !argc)
usage_with_options(builtin_merge_usage,
if (verify_signatures) {
for (p = remoteheads; p; p = p->next) {
struct commit *commit = p->item;
- char hex[GIT_SHA1_HEXSZ + 1];
+ char hex[GIT_MAX_HEXSZ + 1];
struct signature_check signature_check;
memset(&signature_check, 0, sizeof(signature_check));
ret = try_merge_strategy(use_strategies[i]->name,
common, remoteheads,
- head_commit, head_arg);
+ head_commit);
if (!option_commit && !ret) {
merge_was_ok = 1;
/*
printf(_("Using the %s to prepare resolving by hand.\n"),
best_strategy);
try_merge_strategy(best_strategy, common, remoteheads,
- head_commit, head_arg);
+ head_commit);
}
if (squash)
return NULL;
}
-/* returns a static buffer */
-static const char *get_rev_name(const struct object *o)
+/* may return a constant string or use "buf" as scratch space */
+static const char *get_rev_name(const struct object *o, struct strbuf *buf)
{
- static char buffer[1024];
struct rev_name *n;
struct commit *c;
int len = strlen(n->tip_name);
if (len > 2 && !strcmp(n->tip_name + len - 2, "^0"))
len -= 2;
- snprintf(buffer, sizeof(buffer), "%.*s~%d", len, n->tip_name,
- n->generation);
-
- return buffer;
+ strbuf_reset(buf);
+ strbuf_addf(buf, "%.*s~%d", len, n->tip_name, n->generation);
+ return buf->buf;
}
}
{
const char *name;
const struct object_id *oid = &obj->oid;
+ struct strbuf buf = STRBUF_INIT;
if (!name_only)
printf("%s ", caller_name ? caller_name : oid_to_hex(oid));
- name = get_rev_name(obj);
+ name = get_rev_name(obj, &buf);
if (name)
printf("%s\n", name);
else if (allow_undefined)
printf("%s\n", find_unique_abbrev(oid->hash, DEFAULT_ABBREV));
else
die("cannot describe '%s'", oid_to_hex(oid));
+ strbuf_release(&buf);
}
static char const * const name_rev_usage[] = {
static void name_rev_line(char *p, struct name_ref_data *data)
{
+ struct strbuf buf = STRBUF_INIT;
int forty = 0;
char *p_start;
for (p_start = p; *p; p++) {
struct object *o =
lookup_object(sha1);
if (o)
- name = get_rev_name(o);
+ name = get_rev_name(o, &buf);
}
*(p+1) = c;
/* flush */
if (p_start != p)
fwrite(p_start, p - p_start, 1, stdout);
+
+ strbuf_release(&buf);
}
int cmd_name_rev(int argc, const char **argv, const char *prefix)
struct notes_tree *t;
unsigned char object[20], new_note[20];
const unsigned char *note;
- char logmsg[100];
+ char *logmsg;
const char * const *usage;
struct note_data d = { 0, 0, NULL, STRBUF_INIT };
struct option options[] = {
write_note_data(&d, new_note);
if (add_note(t, object, new_note, combine_notes_overwrite))
die("BUG: combine_notes_overwrite failed");
- snprintf(logmsg, sizeof(logmsg), "Notes added by 'git notes %s'",
- argv[0]);
+ logmsg = xstrfmt("Notes added by 'git notes %s'", argv[0]);
} else {
fprintf(stderr, _("Removing note for object %s\n"),
sha1_to_hex(object));
remove_note(t, object);
- snprintf(logmsg, sizeof(logmsg), "Notes removed by 'git notes %s'",
- argv[0]);
+ logmsg = xstrfmt("Notes removed by 'git notes %s'", argv[0]);
}
commit_notes(t, logmsg);
+ free(logmsg);
free_note_data(&d);
free_notes(t);
return 0;
*
* This is filled by get_object_list.
*/
-static struct sha1_array recent_objects;
+static struct oid_array recent_objects;
-static int loosened_object_can_be_discarded(const unsigned char *sha1,
+static int loosened_object_can_be_discarded(const struct object_id *oid,
unsigned long mtime)
{
if (!unpack_unreachable_expiration)
return 0;
if (mtime > unpack_unreachable_expiration)
return 0;
- if (sha1_array_lookup(&recent_objects, sha1) >= 0)
+ if (oid_array_lookup(&recent_objects, oid) >= 0)
return 0;
return 1;
}
{
struct packed_git *p;
uint32_t i;
- const unsigned char *sha1;
+ struct object_id oid;
for (p = packed_git; p; p = p->next) {
if (!p->pack_local || p->pack_keep)
die("cannot open pack index");
for (i = 0; i < p->num_objects; i++) {
- sha1 = nth_packed_object_sha1(p, i);
- if (!packlist_find(&to_pack, sha1, NULL) &&
- !has_sha1_pack_kept_or_nonlocal(sha1) &&
- !loosened_object_can_be_discarded(sha1, p->mtime))
- if (force_object_loose(sha1, p->mtime))
+ nth_packed_object_oid(&oid, p, i);
+ if (!packlist_find(&to_pack, oid.hash, NULL) &&
+ !has_sha1_pack_kept_or_nonlocal(oid.hash) &&
+ !loosened_object_can_be_discarded(&oid, p->mtime))
+ if (force_object_loose(oid.hash, p->mtime))
die("unable to force loose object");
}
}
const char *name,
void *data)
{
- sha1_array_append(&recent_objects, obj->oid.hash);
+ oid_array_append(&recent_objects, &obj->oid);
}
static void record_recent_commit(struct commit *commit, void *data)
{
- sha1_array_append(&recent_objects, commit->object.oid.hash);
+ oid_array_append(&recent_objects, &commit->object.oid);
}
static void get_object_list(int ac, const char **av)
if (unpack_unreachable)
loosen_unused_packed_objects(&revs);
- sha1_array_clear(&recent_objects);
+ oid_array_clear(&recent_objects);
}
static int option_parse_index_version(const struct option *opt,
};
if (parse_options(argc, argv, prefix, opts, pack_refs_usage, 0))
usage_with_options(pack_refs_usage, opts);
- return pack_refs(flags);
+ return refs_pack_refs(get_main_ref_store(), flags);
}
static void flush_one_hunk(struct object_id *result, git_SHA_CTX *ctx)
{
- unsigned char hash[GIT_SHA1_RAWSZ];
+ unsigned char hash[GIT_MAX_RAWSZ];
unsigned short carry = 0;
int i;
* Appends merge candidates from FETCH_HEAD that are not marked not-for-merge
* into merge_heads.
*/
-static void get_merge_heads(struct sha1_array *merge_heads)
+static void get_merge_heads(struct oid_array *merge_heads)
{
const char *filename = git_path("FETCH_HEAD");
FILE *fp;
struct strbuf sb = STRBUF_INIT;
- unsigned char sha1[GIT_SHA1_RAWSZ];
+ struct object_id oid;
if (!(fp = fopen(filename, "r")))
die_errno(_("could not open '%s' for reading"), filename);
while (strbuf_getline_lf(&sb, fp) != EOF) {
- if (get_sha1_hex(sb.buf, sha1))
+ if (get_oid_hex(sb.buf, &oid))
continue; /* invalid line: does not start with SHA1 */
if (starts_with(sb.buf + GIT_SHA1_HEXSZ, "\tnot-for-merge\t"))
continue; /* ref is not-for-merge */
- sha1_array_append(merge_heads, sha1);
+ oid_array_append(merge_heads, &oid);
}
fclose(fp);
strbuf_release(&sb);
/**
* "Pulls into void" by branching off merge_head.
*/
-static int pull_into_void(const unsigned char *merge_head,
- const unsigned char *curr_head)
+static int pull_into_void(const struct object_id *merge_head,
+ const struct object_id *curr_head)
{
/*
* Two-way merge: we treat the index as based on an empty tree,
* index/worktree changes that the user already made on the unborn
* branch.
*/
- if (checkout_fast_forward(EMPTY_TREE_SHA1_BIN, merge_head, 0))
+ if (checkout_fast_forward(EMPTY_TREE_SHA1_BIN, merge_head->hash, 0))
return 1;
- if (update_ref("initial pull", "HEAD", merge_head, curr_head, 0, UPDATE_REFS_DIE_ON_ERR))
+ if (update_ref("initial pull", "HEAD", merge_head->hash, curr_head->hash, 0, UPDATE_REFS_DIE_ON_ERR))
return 1;
return 0;
* current branch forked from its remote tracking branch. Returns 0 on success,
* -1 on failure.
*/
-static int get_rebase_fork_point(unsigned char *fork_point, const char *repo,
+static int get_rebase_fork_point(struct object_id *fork_point, const char *repo,
const char *refspec)
{
int ret;
if (ret)
goto cleanup;
- ret = get_sha1_hex(sb.buf, fork_point);
+ ret = get_oid_hex(sb.buf, fork_point);
if (ret)
goto cleanup;
* Sets merge_base to the octopus merge base of curr_head, merge_head and
* fork_point. Returns 0 if a merge base is found, 1 otherwise.
*/
-static int get_octopus_merge_base(unsigned char *merge_base,
- const unsigned char *curr_head,
- const unsigned char *merge_head,
- const unsigned char *fork_point)
+static int get_octopus_merge_base(struct object_id *merge_base,
+ const struct object_id *curr_head,
+ const struct object_id *merge_head,
+ const struct object_id *fork_point)
{
struct commit_list *revs = NULL, *result;
- commit_list_insert(lookup_commit_reference(curr_head), &revs);
- commit_list_insert(lookup_commit_reference(merge_head), &revs);
- if (!is_null_sha1(fork_point))
- commit_list_insert(lookup_commit_reference(fork_point), &revs);
+ commit_list_insert(lookup_commit_reference(curr_head->hash), &revs);
+ commit_list_insert(lookup_commit_reference(merge_head->hash), &revs);
+ if (!is_null_oid(fork_point))
+ commit_list_insert(lookup_commit_reference(fork_point->hash), &revs);
result = reduce_heads(get_octopus_merge_bases(revs));
free_commit_list(revs);
if (!result)
return 1;
- hashcpy(merge_base, result->item->object.oid.hash);
+ oidcpy(merge_base, &result->item->object.oid);
return 0;
}
* fork point calculated by get_rebase_fork_point(), runs git-rebase with the
* appropriate arguments and returns its exit status.
*/
-static int run_rebase(const unsigned char *curr_head,
- const unsigned char *merge_head,
- const unsigned char *fork_point)
+static int run_rebase(const struct object_id *curr_head,
+ const struct object_id *merge_head,
+ const struct object_id *fork_point)
{
int ret;
- unsigned char oct_merge_base[GIT_SHA1_RAWSZ];
+ struct object_id oct_merge_base;
struct argv_array args = ARGV_ARRAY_INIT;
- if (!get_octopus_merge_base(oct_merge_base, curr_head, merge_head, fork_point))
- if (!is_null_sha1(fork_point) && !hashcmp(oct_merge_base, fork_point))
+ if (!get_octopus_merge_base(&oct_merge_base, curr_head, merge_head, fork_point))
+ if (!is_null_oid(fork_point) && !oidcmp(&oct_merge_base, fork_point))
fork_point = NULL;
argv_array_push(&args, "rebase");
warning(_("ignoring --verify-signatures for rebase"));
argv_array_push(&args, "--onto");
- argv_array_push(&args, sha1_to_hex(merge_head));
+ argv_array_push(&args, oid_to_hex(merge_head));
- if (fork_point && !is_null_sha1(fork_point))
- argv_array_push(&args, sha1_to_hex(fork_point));
+ if (fork_point && !is_null_oid(fork_point))
+ argv_array_push(&args, oid_to_hex(fork_point));
else
- argv_array_push(&args, sha1_to_hex(merge_head));
+ argv_array_push(&args, oid_to_hex(merge_head));
ret = run_command_v_opt(args.argv, RUN_GIT_CMD);
argv_array_clear(&args);
int cmd_pull(int argc, const char **argv, const char *prefix)
{
const char *repo, **refspecs;
- struct sha1_array merge_heads = SHA1_ARRAY_INIT;
- unsigned char orig_head[GIT_SHA1_RAWSZ], curr_head[GIT_SHA1_RAWSZ];
- unsigned char rebase_fork_point[GIT_SHA1_RAWSZ];
+ struct oid_array merge_heads = OID_ARRAY_INIT;
+ struct object_id orig_head, curr_head;
+ struct object_id rebase_fork_point;
if (!getenv("GIT_REFLOG_ACTION"))
set_reflog_message(argc, argv);
if (file_exists(git_path("MERGE_HEAD")))
die_conclude_merge();
- if (get_sha1("HEAD", orig_head))
- hashclr(orig_head);
+ if (get_oid("HEAD", &orig_head))
+ oidclr(&orig_head);
if (!opt_rebase && opt_autostash != -1)
die(_("--[no-]autostash option is only valid with --rebase."));
if (opt_autostash != -1)
autostash = opt_autostash;
- if (is_null_sha1(orig_head) && !is_cache_unborn())
+ if (is_null_oid(&orig_head) && !is_cache_unborn())
die(_("Updating an unborn branch with changes added to the index."));
if (!autostash)
require_clean_work_tree(N_("pull with rebase"),
_("please commit or stash them."), 1, 0);
- if (get_rebase_fork_point(rebase_fork_point, repo, *refspecs))
- hashclr(rebase_fork_point);
+ if (get_rebase_fork_point(&rebase_fork_point, repo, *refspecs))
+ oidclr(&rebase_fork_point);
}
if (run_fetch(repo, refspecs))
if (opt_dry_run)
return 0;
- if (get_sha1("HEAD", curr_head))
- hashclr(curr_head);
+ if (get_oid("HEAD", &curr_head))
+ oidclr(&curr_head);
- if (!is_null_sha1(orig_head) && !is_null_sha1(curr_head) &&
- hashcmp(orig_head, curr_head)) {
+ if (!is_null_oid(&orig_head) && !is_null_oid(&curr_head) &&
+ oidcmp(&orig_head, &curr_head)) {
/*
* The fetch involved updating the current branch.
*
warning(_("fetch updated the current branch head.\n"
"fast-forwarding your working tree from\n"
- "commit %s."), sha1_to_hex(orig_head));
+ "commit %s."), oid_to_hex(&orig_head));
- if (checkout_fast_forward(orig_head, curr_head, 0))
+ if (checkout_fast_forward(orig_head.hash, curr_head.hash, 0))
die(_("Cannot fast-forward your working tree.\n"
"After making sure that you saved anything precious from\n"
"$ git diff %s\n"
"output, run\n"
"$ git reset --hard\n"
- "to recover."), sha1_to_hex(orig_head));
+ "to recover."), oid_to_hex(&orig_head));
}
get_merge_heads(&merge_heads);
if (!merge_heads.nr)
die_no_merge_candidates(repo, refspecs);
- if (is_null_sha1(orig_head)) {
+ if (is_null_oid(&orig_head)) {
if (merge_heads.nr > 1)
die(_("Cannot merge multiple branches into empty head."));
- return pull_into_void(*merge_heads.sha1, curr_head);
+ return pull_into_void(merge_heads.oid, &curr_head);
}
if (opt_rebase && merge_heads.nr > 1)
die(_("Cannot rebase onto multiple branches."));
struct commit_list *list = NULL;
struct commit *merge_head, *head;
- head = lookup_commit_reference(orig_head);
+ head = lookup_commit_reference(orig_head.hash);
commit_list_insert(head, &list);
- merge_head = lookup_commit_reference(merge_heads.sha1[0]);
+ merge_head = lookup_commit_reference(merge_heads.oid[0].hash);
if (is_descendant_of(merge_head, list)) {
/* we can fast-forward this without invoking rebase */
opt_ff = "--ff-only";
return run_merge();
}
- return run_rebase(curr_head, *merge_heads.sha1, rebase_fork_point);
+ return run_rebase(&curr_head, merge_heads.oid, &rebase_fork_point);
} else {
return run_merge();
}
int push_cert = -1;
int rc;
const char *repo = NULL; /* default repository */
- static struct string_list push_options = STRING_LIST_INIT_DUP;
- static struct string_list_item *item;
+ struct string_list push_options = STRING_LIST_INIT_DUP;
+ const struct string_list_item *item;
struct option options[] = {
OPT__VERBOSITY(&verbosity),
die(_("push options must not have new line characters"));
rc = do_push(repo, flags, &push_options);
+ string_list_clear(&push_options, 0);
if (rc == -1)
usage_with_options(push_usage, options);
else
return git_default_config(var, value, cb);
}
-static void show_ref(const char *path, const unsigned char *sha1)
+static void show_ref(const char *path, const struct object_id *oid)
{
if (sent_capabilities) {
- packet_write_fmt(1, "%s %s\n", sha1_to_hex(sha1), path);
+ packet_write_fmt(1, "%s %s\n", oid_to_hex(oid), path);
} else {
struct strbuf cap = STRBUF_INIT;
strbuf_addstr(&cap, " push-options");
strbuf_addf(&cap, " agent=%s", git_user_agent_sanitized());
packet_write_fmt(1, "%s %s%c%s\n",
- sha1_to_hex(sha1), path, 0, cap.buf);
+ oid_to_hex(oid), path, 0, cap.buf);
strbuf_release(&cap);
sent_capabilities = 1;
}
} else {
oidset_insert(seen, oid);
}
- show_ref(path, oid->hash);
+ show_ref(path, oid);
return 0;
}
if (oidset_insert(seen, oid))
return;
- show_ref(".have", oid->hash);
+ show_ref(".have", oid);
}
static void write_head_info(void)
for_each_alternate_ref(show_one_alternate_ref, &seen);
oidset_clear(&seen);
if (!sent_capabilities)
- show_ref("capabilities^{}", null_sha1);
+ show_ref("capabilities^{}", &null_oid);
advertise_shallow_grafts(1);
unsigned int skip_update:1,
did_not_exist:1;
int index;
- unsigned char old_sha1[20];
- unsigned char new_sha1[20];
+ struct object_id old_oid;
+ struct object_id new_oid;
char ref_name[FLEX_ARRAY]; /* more */
};
return -1; /* EOF */
strbuf_reset(&state->buf);
strbuf_addf(&state->buf, "%s %s %s\n",
- sha1_to_hex(cmd->old_sha1), sha1_to_hex(cmd->new_sha1),
+ oid_to_hex(&cmd->old_oid), oid_to_hex(&cmd->new_oid),
cmd->ref_name);
state->cmd = cmd->next;
if (bufp) {
return 0;
argv[1] = cmd->ref_name;
- argv[2] = sha1_to_hex(cmd->old_sha1);
- argv[3] = sha1_to_hex(cmd->new_sha1);
+ argv[2] = oid_to_hex(&cmd->old_oid);
+ argv[3] = oid_to_hex(&cmd->new_oid);
argv[4] = NULL;
proc.no_stdin = 1;
static int update_shallow_ref(struct command *cmd, struct shallow_info *si)
{
static struct lock_file shallow_lock;
- struct sha1_array extra = SHA1_ARRAY_INIT;
+ struct oid_array extra = OID_ARRAY_INIT;
struct check_connected_options opt = CHECK_CONNECTED_INIT;
uint32_t mask = 1 << (cmd->index % 32);
int i;
if (si->used_shallow[i] &&
(si->used_shallow[i][cmd->index / 32] & mask) &&
!delayed_reachability_test(si, i))
- sha1_array_append(&extra, si->shallow->sha1[i]);
+ oid_array_append(&extra, &si->shallow->oid[i]);
opt.env = tmp_objdir_env(tmp_objdir);
setup_alternate_shallow(&shallow_lock, &opt.shallow_file, &extra);
if (check_connected(command_singleton_iterator, cmd, &opt)) {
rollback_lock_file(&shallow_lock);
- sha1_array_clear(&extra);
+ oid_array_clear(&extra);
return -1;
}
* not lose these new roots..
*/
for (i = 0; i < extra.nr; i++)
- register_shallow(extra.sha1[i]);
+ register_shallow(extra.oid[i].hash);
si->shallow_ref[cmd->index] = 0;
- sha1_array_clear(&extra);
+ oid_array_clear(&extra);
return 0;
}
const char *name = cmd->ref_name;
struct strbuf namespaced_name_buf = STRBUF_INIT;
const char *namespaced_name, *ret;
- unsigned char *old_sha1 = cmd->old_sha1;
- unsigned char *new_sha1 = cmd->new_sha1;
+ struct object_id *old_oid = &cmd->old_oid;
+ struct object_id *new_oid = &cmd->new_oid;
/* only refs/... are allowed */
if (!starts_with(name, "refs/") || check_refname_format(name + 5, 0)) {
refuse_unconfigured_deny();
return "branch is currently checked out";
case DENY_UPDATE_INSTEAD:
- ret = update_worktree(new_sha1);
+ ret = update_worktree(new_oid->hash);
if (ret)
return ret;
break;
}
}
- if (!is_null_sha1(new_sha1) && !has_sha1_file(new_sha1)) {
+ if (!is_null_oid(new_oid) && !has_object_file(new_oid)) {
error("unpack should have generated %s, "
- "but I can't find it!", sha1_to_hex(new_sha1));
+ "but I can't find it!", oid_to_hex(new_oid));
return "bad pack";
}
- if (!is_null_sha1(old_sha1) && is_null_sha1(new_sha1)) {
+ if (!is_null_oid(old_oid) && is_null_oid(new_oid)) {
if (deny_deletes && starts_with(name, "refs/heads/")) {
rp_error("denying ref deletion for %s", name);
return "deletion prohibited";
}
}
- if (deny_non_fast_forwards && !is_null_sha1(new_sha1) &&
- !is_null_sha1(old_sha1) &&
+ if (deny_non_fast_forwards && !is_null_oid(new_oid) &&
+ !is_null_oid(old_oid) &&
starts_with(name, "refs/heads/")) {
struct object *old_object, *new_object;
struct commit *old_commit, *new_commit;
- old_object = parse_object(old_sha1);
- new_object = parse_object(new_sha1);
+ old_object = parse_object(old_oid->hash);
+ new_object = parse_object(new_oid->hash);
if (!old_object || !new_object ||
old_object->type != OBJ_COMMIT ||
return "hook declined";
}
- if (is_null_sha1(new_sha1)) {
+ if (is_null_oid(new_oid)) {
struct strbuf err = STRBUF_INIT;
- if (!parse_object(old_sha1)) {
- old_sha1 = NULL;
+ if (!parse_object(old_oid->hash)) {
+ old_oid = NULL;
if (ref_exists(name)) {
rp_warning("Allowing deletion of corrupt ref.");
} else {
}
if (ref_transaction_delete(transaction,
namespaced_name,
- old_sha1,
+ old_oid->hash,
0, "push", &err)) {
rp_error("%s", err.buf);
strbuf_release(&err);
if (ref_transaction_update(transaction,
namespaced_name,
- new_sha1, old_sha1,
+ new_oid->hash, old_oid->hash,
0, "push",
&err)) {
rp_error("%s", err.buf);
const char *dst_name;
struct string_list_item *item;
struct command *dst_cmd;
- unsigned char sha1[GIT_SHA1_RAWSZ];
+ unsigned char sha1[GIT_MAX_RAWSZ];
int flag;
strbuf_addf(&buf, "%s%s", get_git_namespace(), cmd->ref_name);
dst_cmd = (struct command *) item->util;
- if (!hashcmp(cmd->old_sha1, dst_cmd->old_sha1) &&
- !hashcmp(cmd->new_sha1, dst_cmd->new_sha1))
+ if (!oidcmp(&cmd->old_oid, &dst_cmd->old_oid) &&
+ !oidcmp(&cmd->new_oid, &dst_cmd->new_oid))
return;
dst_cmd->skip_update = 1;
rp_error("refusing inconsistent update between symref '%s' (%s..%s) and"
" its target '%s' (%s..%s)",
cmd->ref_name,
- find_unique_abbrev(cmd->old_sha1, DEFAULT_ABBREV),
- find_unique_abbrev(cmd->new_sha1, DEFAULT_ABBREV),
+ find_unique_abbrev(cmd->old_oid.hash, DEFAULT_ABBREV),
+ find_unique_abbrev(cmd->new_oid.hash, DEFAULT_ABBREV),
dst_cmd->ref_name,
- find_unique_abbrev(dst_cmd->old_sha1, DEFAULT_ABBREV),
- find_unique_abbrev(dst_cmd->new_sha1, DEFAULT_ABBREV));
+ find_unique_abbrev(dst_cmd->old_oid.hash, DEFAULT_ABBREV),
+ find_unique_abbrev(dst_cmd->new_oid.hash, DEFAULT_ABBREV));
cmd->error_string = dst_cmd->error_string =
"inconsistent aliased update";
struct command **cmd_list = cb_data;
struct command *cmd = *cmd_list;
- if (!cmd || is_null_sha1(cmd->new_sha1))
+ if (!cmd || is_null_oid(&cmd->new_oid))
return -1; /* end of list */
*cmd_list = NULL; /* this returns only one */
- hashcpy(sha1, cmd->new_sha1);
+ hashcpy(sha1, cmd->new_oid.hash);
return 0;
}
if (shallow_update && data->si->shallow_ref[cmd->index])
/* to be checked in update_shallow_ref() */
continue;
- if (!is_null_sha1(cmd->new_sha1) && !cmd->skip_update) {
- hashcpy(sha1, cmd->new_sha1);
+ if (!is_null_oid(&cmd->new_oid) && !cmd->skip_update) {
+ hashcpy(sha1, cmd->new_oid.hash);
*cmd_list = cmd->next;
return 0;
}
if (!ref_is_hidden(cmd->ref_name, refname_full.buf))
continue;
- if (is_null_sha1(cmd->new_sha1))
+ if (is_null_oid(&cmd->new_oid))
cmd->error_string = "deny deleting a hidden ref";
else
cmd->error_string = "deny updating a hidden ref";
const char *line,
int linelen)
{
- unsigned char old_sha1[20], new_sha1[20];
+ struct object_id old_oid, new_oid;
struct command *cmd;
const char *refname;
int reflen;
+ const char *p;
- if (linelen < 83 ||
- line[40] != ' ' ||
- line[81] != ' ' ||
- get_sha1_hex(line, old_sha1) ||
- get_sha1_hex(line + 41, new_sha1))
+ if (parse_oid_hex(line, &old_oid, &p) ||
+ *p++ != ' ' ||
+ parse_oid_hex(p, &new_oid, &p) ||
+ *p++ != ' ')
die("protocol error: expected old/new/ref, got '%s'", line);
- refname = line + 82;
- reflen = linelen - 82;
+ refname = p;
+ reflen = linelen - (p - line);
FLEX_ALLOC_MEM(cmd, ref_name, refname, reflen);
- hashcpy(cmd->old_sha1, old_sha1);
- hashcpy(cmd->new_sha1, new_sha1);
+ oidcpy(&cmd->old_oid, &old_oid);
+ oidcpy(&cmd->new_oid, &new_oid);
*tail = cmd;
return &cmd->next;
}
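For illustration only (an editor's sketch, not part of the patch): parse_oid_hex() consumes one hex object name and reports where it stopped, which is what lets queue_command() above drop the hard-coded 40/41/82 offsets:

	/* given "line" pointing at "<old-hex> <new-hex> <refname>" */
	struct object_id old_oid;
	const char *p;

	if (parse_oid_hex(line, &old_oid, &p) || *p++ != ' ')
		die("protocol error: expected old/new/ref");
	/* p now points at the start of <new-hex>; no fixed offsets needed */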
while (boc < eoc) {
const char *eol = memchr(boc, '\n', eoc - boc);
- tail = queue_command(tail, boc, eol ? eol - boc : eoc - eol);
+ tail = queue_command(tail, boc, eol ? eol - boc : eoc - boc);
boc = eol ? eol + 1 : eoc;
}
}
-static struct command *read_head_info(struct sha1_array *shallow)
+static struct command *read_head_info(struct oid_array *shallow)
{
struct command *commands = NULL;
struct command **p = &commands;
if (!line)
break;
- if (len == 48 && starts_with(line, "shallow ")) {
- unsigned char sha1[20];
- if (get_sha1_hex(line + 8, sha1))
+ if (len > 8 && starts_with(line, "shallow ")) {
+ struct object_id oid;
+ if (get_oid_hex(line + 8, &oid))
die("protocol error: expected shallow sha, got '%s'",
line + 8);
- sha1_array_append(shallow, sha1);
+ oid_array_append(shallow, &oid);
continue;
}
static const char *pack_lockfile;
+static void push_header_arg(struct argv_array *args, struct pack_header *hdr)
+{
+ argv_array_pushf(args, "--pack_header=%"PRIu32",%"PRIu32,
+ ntohl(hdr->hdr_version), ntohl(hdr->hdr_entries));
+}
+
static const char *unpack(int err_fd, struct shallow_info *si)
{
struct pack_header hdr;
const char *hdr_err;
int status;
- char hdr_arg[38];
struct child_process child = CHILD_PROCESS_INIT;
int fsck_objects = (receive_fsck_objects >= 0
? receive_fsck_objects
close(err_fd);
return hdr_err;
}
- snprintf(hdr_arg, sizeof(hdr_arg),
- "--pack_header=%"PRIu32",%"PRIu32,
- ntohl(hdr.hdr_version), ntohl(hdr.hdr_entries));
if (si->nr_ours || si->nr_theirs) {
alt_shallow_file = setup_temporary_shallow(si->shallow);
tmp_objdir_add_as_alternate(tmp_objdir);
if (ntohl(hdr.hdr_entries) < unpack_limit) {
- argv_array_pushl(&child.args, "unpack-objects", hdr_arg, NULL);
+ argv_array_push(&child.args, "unpack-objects");
+ push_header_arg(&child.args, &hdr);
if (quiet)
argv_array_push(&child.args, "-q");
if (fsck_objects)
} else {
char hostname[256];
- argv_array_pushl(&child.args, "index-pack",
- "--stdin", hdr_arg, NULL);
+ argv_array_pushl(&child.args, "index-pack", "--stdin", NULL);
+ push_header_arg(&child.args, &hdr);
if (gethostname(hostname, sizeof(hostname)))
xsnprintf(hostname, sizeof(hostname), "localhost");
static void update_shallow_info(struct command *commands,
struct shallow_info *si,
- struct sha1_array *ref)
+ struct oid_array *ref)
{
struct command *cmd;
int *ref_status;
}
for (cmd = commands; cmd; cmd = cmd->next) {
- if (is_null_sha1(cmd->new_sha1))
+ if (is_null_oid(&cmd->new_oid))
continue;
- sha1_array_append(ref, cmd->new_sha1);
+ oid_array_append(ref, &cmd->new_oid);
cmd->index = ref->nr - 1;
}
si->ref = ref;
ALLOC_ARRAY(ref_status, ref->nr);
assign_shallow_commits_to_refs(si, NULL, ref_status);
for (cmd = commands; cmd; cmd = cmd->next) {
- if (is_null_sha1(cmd->new_sha1))
+ if (is_null_oid(&cmd->new_oid))
continue;
if (ref_status[cmd->index]) {
cmd->error_string = "shallow update not allowed";
{
struct command *cmd;
for (cmd = commands; cmd; cmd = cmd->next) {
- if (!is_null_sha1(cmd->new_sha1))
+ if (!is_null_oid(&cmd->new_oid))
return 0;
}
return 1;
{
int advertise_refs = 0;
struct command *commands;
- struct sha1_array shallow = SHA1_ARRAY_INIT;
- struct sha1_array ref = SHA1_ARRAY_INIT;
+ struct oid_array shallow = OID_ARRAY_INIT;
+ struct oid_array ref = OID_ARRAY_INIT;
struct shallow_info si;
struct option options[] = {
}
if (use_sideband)
packet_flush(1);
- sha1_array_clear(&shallow);
- sha1_array_clear(&ref);
+ oid_array_clear(&shallow);
+ oid_array_clear(&ref);
free((void *)push_cert_nonce);
return 0;
}
static int for_each_replace_name(const char **argv, each_replace_name_fn fn)
{
const char **p, *full_hex;
- char ref[PATH_MAX];
+ struct strbuf ref = STRBUF_INIT;
+ size_t base_len;
int had_error = 0;
struct object_id oid;
+ strbuf_addstr(&ref, git_replace_ref_base);
+ base_len = ref.len;
+
for (p = argv; *p; p++) {
if (get_oid(*p, &oid)) {
error("Failed to resolve '%s' as a valid ref.", *p);
had_error = 1;
continue;
}
- full_hex = oid_to_hex(&oid);
- snprintf(ref, sizeof(ref), "%s%s", git_replace_ref_base, full_hex);
- /* read_ref() may reuse the buffer */
- full_hex = ref + strlen(git_replace_ref_base);
- if (read_ref(ref, oid.hash)) {
+
+ strbuf_setlen(&ref, base_len);
+ strbuf_addstr(&ref, oid_to_hex(&oid));
+ full_hex = ref.buf + base_len;
+
+ if (read_ref(ref.buf, oid.hash)) {
error("replace ref '%s' not found.", full_hex);
had_error = 1;
continue;
}
- if (fn(full_hex, ref, &oid))
+ if (fn(full_hex, ref.buf, &oid))
had_error = 1;
}
return had_error;
static void check_ref_valid(struct object_id *object,
struct object_id *prev,
- char *ref,
- int ref_size,
+ struct strbuf *ref,
int force)
{
- if (snprintf(ref, ref_size,
- "%s%s", git_replace_ref_base,
- oid_to_hex(object)) > ref_size - 1)
- die("replace ref name too long: %.*s...", 50, ref);
- if (check_refname_format(ref, 0))
- die("'%s' is not a valid ref name.", ref);
-
- if (read_ref(ref, prev->hash))
+ strbuf_reset(ref);
+ strbuf_addf(ref, "%s%s", git_replace_ref_base, oid_to_hex(object));
+ if (check_refname_format(ref->buf, 0))
+ die("'%s' is not a valid ref name.", ref->buf);
+
+ if (read_ref(ref->buf, prev->hash))
oidclr(prev);
else if (!force)
- die("replace ref '%s' already exists", ref);
+ die("replace ref '%s' already exists", ref->buf);
}
static int replace_object_oid(const char *object_ref,
{
struct object_id prev;
enum object_type obj_type, repl_type;
- char ref[PATH_MAX];
+ struct strbuf ref = STRBUF_INIT;
struct ref_transaction *transaction;
struct strbuf err = STRBUF_INIT;
object_ref, typename(obj_type),
replace_ref, typename(repl_type));
- check_ref_valid(object, &prev, ref, sizeof(ref), force);
+ check_ref_valid(object, &prev, &ref, force);
transaction = ref_transaction_begin(&err);
if (!transaction ||
- ref_transaction_update(transaction, ref, repl->hash, prev.hash,
+ ref_transaction_update(transaction, ref.buf, repl->hash, prev.hash,
0, NULL, &err) ||
ref_transaction_commit(transaction, &err))
die("%s", err.buf);
ref_transaction_free(transaction);
+ strbuf_release(&ref);
return 0;
}
char *tmpfile = git_pathdup("REPLACE_EDITOBJ");
enum object_type type;
struct object_id old, new, prev;
- char ref[PATH_MAX];
+ struct strbuf ref = STRBUF_INIT;
if (get_oid(object_ref, &old) < 0)
die("Not a valid object name: '%s'", object_ref);
if (type < 0)
die("unable to get object type for %s", oid_to_hex(&old));
- check_ref_valid(&old, &prev, ref, sizeof(ref), force);
+ check_ref_valid(&old, &prev, &ref, force);
+ strbuf_release(&ref);
export_object(&old, type, raw, tmpfile);
if (launch_editor(tmpfile, NULL, NULL) < 0)
static int show_bisect_vars(struct rev_list_info *info, int reaches, int all)
{
int cnt, flags = info->flags;
- char hex[GIT_SHA1_HEXSZ + 1] = "";
+ char hex[GIT_MAX_HEXSZ + 1] = "";
struct commit_list *tried;
struct rev_info *revs = info->revs;
return 0;
}
-static int show_abbrev(const unsigned char *sha1, void *cb_data)
+static int show_abbrev(const struct object_id *oid, void *cb_data)
{
- show_rev(NORMAL, sha1, NULL);
+ show_rev(NORMAL, oid->hash, NULL);
return 0;
}
static void show_datestring(const char *flag, const char *datestr)
{
- static char buffer[100];
+ char *buffer;
/* date handling requires both flags and revs */
if ((filter & (DO_FLAGS | DO_REVS)) != (DO_FLAGS | DO_REVS))
return;
- snprintf(buffer, sizeof(buffer), "%s%lu", flag, approxidate(datestr));
+ buffer = xstrfmt("%s%lu", flag, approxidate(datestr));
show(buffer);
+ free(buffer);
}
static int show_file(const char *arg, int output_prefix)
const char *dest = NULL;
int fd[2];
struct child_process *conn;
- struct sha1_array extra_have = SHA1_ARRAY_INIT;
- struct sha1_array shallow = SHA1_ARRAY_INIT;
+ struct oid_array extra_have = OID_ARRAY_INIT;
+ struct oid_array shallow = OID_ARRAY_INIT;
struct ref *remote_refs, *local_refs;
int ret;
int helper_status = 0;
return result;
}
+static void module_list_active(struct module_list *list)
+{
+ int i;
+ struct module_list active_modules = MODULE_LIST_INIT;
+
+ gitmodules_config();
+
+ for (i = 0; i < list->nr; i++) {
+ const struct cache_entry *ce = list->entries[i];
+
+ if (!is_submodule_initialized(ce->name))
+ continue;
+
+ ALLOC_GROW(active_modules.entries,
+ active_modules.nr + 1,
+ active_modules.alloc);
+ active_modules.entries[active_modules.nr++] = ce;
+ }
+
+ free(list->entries);
+ *list = active_modules;
+}
+
static int module_list(int argc, const char **argv, const char *prefix)
{
int i;
die(_("No url found for submodule path '%s' in .gitmodules"),
displaypath);
+ /*
+ * NEEDSWORK: In a multi-working-tree world, this needs to be
+ * set in the per-worktree config.
+ *
+ * Set active flag for the submodule being initialized
+ */
+ if (!is_submodule_initialized(path)) {
+ strbuf_reset(&sb);
+ strbuf_addf(&sb, "submodule.%s.active", sub->name);
+ git_config_set_gently(sb.buf, "true");
+ }
+
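For illustration only (editor's note, not part of the patch): initializing a hypothetical submodule named "lib" records "submodule.lib.active = true" in .git/config, which is one of the settings is_submodule_initialized() consults when deciding whether a module counts as active.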
/*
* Copy url setting when it is not set yet.
* To look up the url in .git/config, we must not fall back to
if (module_list_compute(argc, argv, prefix, &pathspec, &list) < 0)
return 1;
+ /*
+ * If there are no path args and submodule.active is set then,
+ * by default, only initialize 'active' modules.
+ */
+ if (!argc && git_config_get_value_multi("submodule.active"))
+ module_list_active(&list);
+
for (i = 0; i < list.nr; i++)
init_submodule(list.entries[i]->name, prefix, quiet);
struct strbuf displaypath_sb = STRBUF_INIT;
struct strbuf sb = STRBUF_INIT;
const char *displaypath = NULL;
- char *url = NULL;
int needs_cloning = 0;
if (ce_stage(ce)) {
goto cleanup;
}
- /*
- * Looking up the url in .git/config.
- * We must not fall back to .gitmodules as we only want
- * to process configured submodules.
- */
- strbuf_reset(&sb);
- strbuf_addf(&sb, "submodule.%s.url", sub->name);
- git_config_get_string(sb.buf, &url);
- if (!url) {
+ /* Check if the submodule has been initialized. */
+ if (!is_submodule_initialized(ce->name)) {
next_submodule_warn_missing(suc, out, displaypath);
goto cleanup;
}
argv_array_push(&child->args, "--depth=1");
argv_array_pushl(&child->args, "--path", sub->path, NULL);
argv_array_pushl(&child->args, "--name", sub->name, NULL);
- argv_array_pushl(&child->args, "--url", url, NULL);
+ argv_array_pushl(&child->args, "--url", sub->url, NULL);
if (suc->references.nr) {
struct string_list_item *item;
for_each_string_list_item(item, &suc->references)
argv_array_push(&child->args, suc->depth);
cleanup:
- free(url);
strbuf_reset(&displaypath_sb);
strbuf_reset(&sb);
return 0;
}
+static int push_check(int argc, const char **argv, const char *prefix)
+{
+ struct remote *remote;
+
+ if (argc < 2)
+ die("submodule--helper push-check requires at least 1 argument");
+
+ /*
+ * The remote must be configured.
+ * This is to avoid pushing to the exact same URL as the parent.
+ */
+ remote = pushremote_get(argv[1]);
+ if (!remote || remote->origin == REMOTE_UNCONFIGURED)
+ die("remote '%s' not configured", argv[1]);
+
+ /* Check the refspec */
+ if (argc > 2) {
+ int i, refspec_nr = argc - 2;
+ struct ref *local_refs = get_local_heads();
+ struct refspec *refspec = parse_push_refspec(refspec_nr,
+ argv + 2);
+
+ for (i = 0; i < refspec_nr; i++) {
+ struct refspec *rs = refspec + i;
+
+ if (rs->pattern || rs->matching)
+ continue;
+
+ /*
+ * LHS must match a single ref
+ * NEEDSWORK: add logic to special case 'HEAD' once
+ * working with submodules in a detached head state
+ * ceases to be the norm.
+ */
+ if (count_refspec_match(rs->src, local_refs, NULL) != 1)
+ die("src refspec '%s' must name a ref",
+ rs->src);
+ }
+ free_refspec(refspec_nr, refspec);
+ }
+
+ return 0;
+}
+
static int absorb_git_dirs(int argc, const char **argv, const char *prefix)
{
int i;
return 0;
}
+static int is_active(int argc, const char **argv, const char *prefix)
+{
+ if (argc != 2)
+ die("submodule--helper is-active takes exactly 1 argument");
+
+ gitmodules_config();
+
+ return !is_submodule_initialized(argv[1]);
+}
+
#define SUPPORT_SUPER_PREFIX (1<<0)
struct cmd_struct {
{"resolve-relative-url-test", resolve_relative_url_test, 0},
{"init", module_init, SUPPORT_SUPER_PREFIX},
{"remote-branch", resolve_remote_submodule_branch, 0},
+ {"push-check", push_check, 0},
{"absorb-git-dirs", absorb_git_dirs, SUPPORT_SUPER_PREFIX},
+ {"is-active", is_active, 0},
};
int cmd_submodule__helper(int argc, const char **argv, const char *prefix)
static const char * const git_tag_usage[] = {
N_("git tag [-a | -s | -u <key-id>] [-f] [-m <msg> | -F <file>] <tagname> [<head>]"),
N_("git tag -d <tagname>..."),
- N_("git tag -l [-n[<num>]] [--contains <commit>] [--points-at <object>]"
+ N_("git tag -l [-n[<num>]] [--contains <commit>] [--no-contains <commit>] [--points-at <object>]"
"\n\t\t[--format=<format>] [--[no-]merged [<commit>]] [<pattern>...]"),
N_("git tag -v [--format=<format>] <tagname>..."),
NULL
const void *cb_data)
{
const char **p;
- char ref[PATH_MAX];
+ struct strbuf ref = STRBUF_INIT;
int had_error = 0;
unsigned char sha1[20];
for (p = argv; *p; p++) {
- if (snprintf(ref, sizeof(ref), "refs/tags/%s", *p)
- >= sizeof(ref)) {
- error(_("tag name too long: %.*s..."), 50, *p);
- had_error = 1;
- continue;
- }
- if (read_ref(ref, sha1)) {
+ strbuf_reset(&ref);
+ strbuf_addf(&ref, "refs/tags/%s", *p);
+ if (read_ref(ref.buf, sha1)) {
error(_("tag '%s' not found."), *p);
had_error = 1;
continue;
}
- if (fn(*p, ref, sha1, cb_data))
+ if (fn(*p, ref.buf, sha1, cb_data))
had_error = 1;
}
+ strbuf_release(&ref);
return had_error;
}
unsigned char *prev, unsigned char *result)
{
enum object_type type;
- char header_buf[1024];
- int header_len;
+ struct strbuf header = STRBUF_INIT;
char *path = NULL;
type = sha1_object_info(object, NULL);
if (type <= OBJ_NONE)
die(_("bad object type."));
- header_len = snprintf(header_buf, sizeof(header_buf),
- "object %s\n"
- "type %s\n"
- "tag %s\n"
- "tagger %s\n\n",
- sha1_to_hex(object),
- typename(type),
- tag,
- git_committer_info(IDENT_STRICT));
-
- if (header_len > sizeof(header_buf) - 1)
- die(_("tag header too big."));
+ strbuf_addf(&header,
+ "object %s\n"
+ "type %s\n"
+ "tag %s\n"
+ "tagger %s\n\n",
+ sha1_to_hex(object),
+ typename(type),
+ tag,
+ git_committer_info(IDENT_STRICT));
if (!opt->message_given) {
int fd;
if (!opt->message_given && !buf->len)
die(_("no tag message?"));
- strbuf_insert(buf, 0, header_buf, header_len);
+ strbuf_insert(buf, 0, header.buf, header.len);
+ strbuf_release(&header);
if (build_tag_object(buf, opt->sign, result) < 0) {
if (path)
OPT_GROUP(N_("Tag listing options")),
OPT_COLUMN(0, "column", &colopts, N_("show tag list in columns")),
OPT_CONTAINS(&filter.with_commit, N_("print only tags that contain the commit")),
+ OPT_NO_CONTAINS(&filter.no_commit, N_("print only tags that don't contain the commit")),
OPT_WITH(&filter.with_commit, N_("print only tags that contain the commit")),
+ OPT_WITHOUT(&filter.no_commit, N_("print only tags that don't contain the commit")),
OPT_MERGED(&filter, N_("print only tags that are merged")),
OPT_NO_MERGED(&filter, N_("print only tags that are not merged")),
OPT_CALLBACK(0 , "sort", sorting_tail, N_("key"),
N_("field name to sort on"), &parse_opt_ref_sorting),
{
OPTION_CALLBACK, 0, "points-at", &filter.points_at, N_("object"),
- N_("print only tags of the object"), 0, parse_opt_object_name
+ N_("print only tags of the object"), PARSE_OPT_LASTARG_DEFAULT,
+ parse_opt_object_name, (intptr_t) "HEAD"
},
OPT_STRING( 0 , "format", &format, N_("format"), N_("format to use for the output")),
OPT_BOOL('i', "ignore-case", &icase, N_("sorting and filtering are case insensitive")),
}
create_tag_object = (opt.sign || annotate || msg.given || msgfile);
- if (argc == 0 && !cmdmode)
- cmdmode = 'l';
+ if (!cmdmode) {
+ if (argc == 0)
+ cmdmode = 'l';
+ else if (filter.with_commit || filter.no_commit ||
+ filter.points_at.nr || filter.merge_commit ||
+ filter.lines != -1)
+ cmdmode = 'l';
+ }
if ((create_tag_object || force) && (cmdmode != 0))
usage_with_options(git_tag_usage, options);
return ret;
}
if (filter.lines != -1)
- die(_("-n option is only allowed with -l."));
+ die(_("-n option is only allowed in list mode"));
if (filter.with_commit)
- die(_("--contains option is only allowed with -l."));
+ die(_("--contains option is only allowed in list mode"));
+ if (filter.no_commit)
+ die(_("--no-contains option is only allowed in list mode"));
if (filter.points_at.nr)
- die(_("--points-at option is only allowed with -l."));
+ die(_("--points-at option is only allowed in list mode"));
if (filter.merge_commit)
- die(_("--merged and --no-merged option are only allowed with -l"));
+ die(_("--merged and --no-merged options are only allowed in list mode"));
if (cmdmode == 'd')
return for_each_tag_name(argv, delete_tag, NULL);
if (cmdmode == 'v') {
struct stat st;
struct stat_data base;
int fd, ret = 0;
+ char *cwd;
strbuf_addstr(&mtime_dir, "mtime-test-XXXXXX");
if (!mkdtemp(mtime_dir.buf))
die_errno("Could not make temporary directory");
- fprintf(stderr, _("Testing mtime in '%s' "), xgetcwd());
+ cwd = xgetcwd();
+ fprintf(stderr, _("Testing mtime in '%s' "), cwd);
+ free(cwd);
+
atexit(remove_test_directory);
xstat_mtime_dir(&st);
fill_stat_data(&base, &st);
#define GIT_SHA1_RAWSZ 20
#define GIT_SHA1_HEXSZ (2 * GIT_SHA1_RAWSZ)
+/* The length in bytes and in hex digits of the largest possible hash value. */
+#define GIT_MAX_RAWSZ GIT_SHA1_RAWSZ
+#define GIT_MAX_HEXSZ GIT_SHA1_HEXSZ
+
struct object_id {
- unsigned char hash[GIT_SHA1_RAWSZ];
+ unsigned char hash[GIT_MAX_RAWSZ];
};
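A minimal illustration (editor's sketch, not from the patch): with the new macros, fixed-size scratch buffers are declared against the largest hash value the code may ever have to hold, as the converted hunks elsewhere in this series do:

	char hex[GIT_MAX_HEXSZ + 1];		/* hex object name plus NUL */
	unsigned char raw[GIT_MAX_RAWSZ];	/* binary object name */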
#if defined(DT_UNKNOWN) && !defined(NO_D_TYPE_IN_DIRENT)
#define GIT_WORK_TREE_ENVIRONMENT "GIT_WORK_TREE"
#define GIT_PREFIX_ENVIRONMENT "GIT_PREFIX"
#define GIT_SUPER_PREFIX_ENVIRONMENT "GIT_INTERNAL_SUPER_PREFIX"
+#define GIT_TOPLEVEL_PREFIX_ENVIRONMENT "GIT_INTERNAL_TOPLEVEL_PREFIX"
#define DEFAULT_GIT_DIR_ENVIRONMENT ".git"
#define DB_ENVIRONMENT "GIT_OBJECT_DIRECTORY"
#define INDEX_ENVIRONMENT "GIT_INDEX_FILE"
extern const char *find_unique_abbrev(const unsigned char *sha1, int len);
extern int find_unique_abbrev_r(char *hex, const unsigned char *sha1, int len);
-extern const unsigned char null_sha1[GIT_SHA1_RAWSZ];
+extern const unsigned char null_sha1[GIT_MAX_RAWSZ];
extern const struct object_id null_oid;
static inline int hashcmp(const unsigned char *sha1, const unsigned char *sha2)
int raceproof_create_file(const char *path, create_file_fn fn, void *cb);
int mkdir_in_gitdir(const char *path);
-extern char *expand_user_path(const char *path);
+extern char *expand_user_path(const char *path, int real_home);
const char *enter_repo(const char *path, int strict);
static inline int is_absolute_path(const char *path)
{
extern int get_oid(const char *str, struct object_id *oid);
-typedef int each_abbrev_fn(const unsigned char *sha1, void *);
+typedef int each_abbrev_fn(const struct object_id *oid, void *);
extern int for_each_abbrev(const char *prefix, each_abbrev_fn, void *);
extern int set_disambiguate_hint_config(const char *var, const char *value);
extern void pack_report(void);
/*
- * Create a temporary file rooted in the object database directory.
+ * Create a temporary file rooted in the object database directory, or
+ * die on failure. The filename is taken from "pattern", which should have the
+ * usual "XXXXXX" trailer, and the resulting filename is written into the
+ * "template" buffer. Returns the open descriptor.
*/
-extern int odb_mkstemp(char *template, size_t limit, const char *pattern);
+extern int odb_mkstemp(struct strbuf *template, const char *pattern);
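An illustrative sketch of a caller (editor's note, not part of the patch), mirroring the index-pack hunk earlier in this series: the strbuf receives the generated path, which can be detached if the name must outlive the buffer.

	struct strbuf tmp = STRBUF_INIT;
	int fd = odb_mkstemp(&tmp, "pack/tmp_pack_XXXXXX");
	/* ... write data to fd ... */
	char *path = strbuf_detach(&tmp, NULL);	/* keep the generated name */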
/*
* Generate the filename to be used for a pack file with checksum "sha1" and
extern int git_config_int(const char *, const char *);
extern int64_t git_config_int64(const char *, const char *);
extern unsigned long git_config_ulong(const char *, const char *);
+extern ssize_t git_config_ssize_t(const char *, const char *);
extern int git_config_bool_or_int(const char *, const char *, int *);
extern int git_config_bool(const char *, const char *);
extern int git_config_maybe_bool(const char *, const char *);
--- /dev/null
+#!/usr/bin/env bash
+#
+# Script to trigger a Git for Windows build and test run.
+# Set the $GFW_CI_TOKEN environment variable.
+# Pass the branch (only branches on https://github.com/git/git are
+# supported) and a commit hash.
+#
+
+test $# -ne 2 && echo "Unexpected number of parameters" && exit 1
+test -z "$GFW_CI_TOKEN" && echo "GFW_CI_TOKEN not defined" && exit
+
+BRANCH=$1
+COMMIT=$2
+
+gfwci () {
+ local CURL_ERROR_CODE HTTP_CODE
+ exec 3>&1
+ HTTP_CODE=$(curl \
+ -H "Authentication: Bearer $GFW_CI_TOKEN" \
+ --silent --retry 5 --write-out '%{HTTP_CODE}' \
+ --output >(sed "$(printf '1s/^\xef\xbb\xbf//')" >&3) \
+ "https://git-for-windows-ci.azurewebsites.net/api/TestNow?$1" \
+ )
+ CURL_ERROR_CODE=$?
+ if test $CURL_ERROR_CODE -ne 0
+ then
+ return $CURL_ERROR_CODE
+ fi
+ if test "$HTTP_CODE" -ge 400 && test "$HTTP_CODE" -lt 600
+ then
+ return 127
+ fi
+}
+
+# Trigger build job
+BUILD_ID=$(gfwci "action=trigger&branch=$BRANCH&commit=$COMMIT&skipTests=false")
+if test $? -ne 0
+then
+ echo "Unable to trigger Visual Studio Team Services Build"
+ echo "$BUILD_ID"
+ exit 1
+fi
+
+# Check if the $BUILD_ID contains a number
+case $BUILD_ID in
+''|*[!0-9]*) echo "Unexpected build number: $BUILD_ID" && exit 1
+esac
+
+echo "Visual Studio Team Services Build #${BUILD_ID}"
+
+# Wait until build job finished
+STATUS=
+RESULT=
+while true
+do
+ LAST_STATUS=$STATUS
+ STATUS=$(gfwci "action=status&buildId=$BUILD_ID")
+ test "$STATUS" = "$LAST_STATUS" || printf "\nStatus: $STATUS "
+ printf "."
+
+ case "$STATUS" in
+ inProgress|postponed|notStarted) sleep 10 ;; # continue
+ "completed: succeeded") RESULT="success"; break;; # success
+ *) echo "Unhandled status: $STATUS"; break;; # failure
+ esac
+done
+
+# Print log
+echo ""
+echo ""
+gfwci "action=log&buildId=$BUILD_ID" | cut -c 30-
+
+# Set exit code for TravisCI
+test "$RESULT" = "success"
enum object_type type;
if (S_ISGITLINK(mode)) {
- blob = xmalloc(100);
- *size = snprintf(blob, 100,
- "Subproject commit %s\n", oid_to_hex(oid));
+ struct strbuf buf = STRBUF_INIT;
+ strbuf_addf(&buf, "Subproject commit %s\n", oid_to_hex(oid));
+ *size = buf.len;
+ blob = strbuf_detach(&buf, NULL);
} else if (is_null_oid(oid)) {
/* deleted blob */
*size = 0;
/* find set of paths that every parent touches */
static struct combine_diff_path *find_paths_generic(const unsigned char *sha1,
- const struct sha1_array *parents, struct diff_options *opt)
+ const struct oid_array *parents, struct diff_options *opt)
{
struct combine_diff_path *paths = NULL;
int i, num_parent = parents->nr;
opt->output_format = stat_opt;
else
opt->output_format = DIFF_FORMAT_NO_OUTPUT;
- diff_tree_sha1(parents->sha1[i], sha1, "", opt);
+ diff_tree_sha1(parents->oid[i].hash, sha1, "", opt);
diffcore_std(opt);
paths = intersect_paths(paths, i, num_parent);
* rename/copy detection, etc, comparing all trees simultaneously (= faster).
*/
static struct combine_diff_path *find_paths_multitree(
- const unsigned char *sha1, const struct sha1_array *parents,
+ const unsigned char *sha1, const struct oid_array *parents,
struct diff_options *opt)
{
int i, nparent = parents->nr;
ALLOC_ARRAY(parents_sha1, nparent);
for (i = 0; i < nparent; i++)
- parents_sha1[i] = parents->sha1[i];
+ parents_sha1[i] = parents->oid[i].hash;
/* fake list head, so worker can assume it is non-NULL */
paths_head.next = NULL;
void diff_tree_combined(const unsigned char *sha1,
- const struct sha1_array *parents,
+ const struct oid_array *parents,
int dense,
struct rev_info *rev)
{
if (stat_opt) {
diffopts.output_format = stat_opt;
- diff_tree_sha1(parents->sha1[0], sha1, "", &diffopts);
+ diff_tree_sha1(parents->oid[0].hash, sha1, "", &diffopts);
diffcore_std(&diffopts);
if (opt->orderfile)
diffcore_order(opt->orderfile);
struct rev_info *rev)
{
struct commit_list *parent = get_saved_parents(rev, commit);
- struct sha1_array parents = SHA1_ARRAY_INIT;
+ struct oid_array parents = OID_ARRAY_INIT;
while (parent) {
- sha1_array_append(&parents, parent->item->object.oid.hash);
+ oid_array_append(&parents, &parent->item->object.oid);
parent = parent->next;
}
diff_tree_combined(commit->object.oid.hash, &parents, dense, rev);
- sha1_array_clear(&parents);
+ oid_array_clear(&parents);
}
/* largest positive number a signed 32-bit integer can contain */
#define INFINITE_DEPTH 0x7fffffff
-struct sha1_array;
+struct oid_array;
struct ref;
extern int register_shallow(const unsigned char *sha1);
extern int unregister_shallow(const unsigned char *sha1);
int ac, const char **av, int shallow_flag, int not_shallow_flag);
extern void set_alternate_shallow_file(const char *path, int override);
extern int write_shallow_commits(struct strbuf *out, int use_pack_protocol,
- const struct sha1_array *extra);
+ const struct oid_array *extra);
extern void setup_alternate_shallow(struct lock_file *shallow_lock,
const char **alternate_shallow_file,
- const struct sha1_array *extra);
-extern const char *setup_temporary_shallow(const struct sha1_array *extra);
+ const struct oid_array *extra);
+extern const char *setup_temporary_shallow(const struct oid_array *extra);
extern void advertise_shallow_grafts(int);
struct shallow_info {
- struct sha1_array *shallow;
+ struct oid_array *shallow;
int *ours, nr_ours;
int *theirs, nr_theirs;
- struct sha1_array *ref;
+ struct oid_array *ref;
/* for receive-pack */
uint32_t **used_shallow;
int nr_commits;
};
-extern void prepare_shallow_info(struct shallow_info *, struct sha1_array *);
+extern void prepare_shallow_info(struct shallow_info *, struct oid_array *);
extern void clear_shallow_info(struct shallow_info *);
extern void remove_nonexistent_theirs_shallow(struct shallow_info *);
extern void assign_shallow_commits_to_refs(struct shallow_info *info,
if (!path)
return config_error_nonbool("include.path");
- expanded = expand_user_path(path);
+ expanded = expand_user_path(path, 0);
if (!expanded)
return error("could not expand include path '%s'", path);
path = expanded;
char *expanded;
int prefix = 0;
- expanded = expand_user_path(pat->buf);
+ expanded = expand_user_path(pat->buf, 1);
if (expanded) {
strbuf_reset(pat);
strbuf_addstr(pat, expanded);
return error(_("relative config include "
"conditionals must come from files"));
- strbuf_add_absolute_path(&path, cf->path);
+ strbuf_realpath(&path, cf->path, 1);
slash = find_last_dir_sep(path.buf);
if (!slash)
die("BUG: how is this possible?");
struct strbuf pattern = STRBUF_INIT;
int ret = 0, prefix;
- strbuf_add_absolute_path(&text, get_git_dir());
+ strbuf_realpath(&text, get_git_dir(), 1);
strbuf_add(&pattern, cond, cond_len);
prefix = prepare_include_condition_pattern(&pattern);
return 1;
}
+static int git_parse_ssize_t(const char *value, ssize_t *ret)
+{
+ intmax_t tmp;
+ if (!git_parse_signed(value, &tmp, maximum_signed_value_of_type(ssize_t)))
+ return 0;
+ *ret = tmp;
+ return 1;
+}
+
NORETURN
static void die_bad_number(const char *name, const char *value)
{
return ret;
}
+ssize_t git_config_ssize_t(const char *name, const char *value)
+{
+ ssize_t ret;
+ if (!git_parse_ssize_t(value, &ret))
+ die_bad_number(name, value);
+ return ret;
+}
+
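A minimal sketch of a caller (editor's illustration; "demo.bufferSize" is a made-up key, not one this patch adds): the helper parses the value as a signed size and dies on malformed numbers, like the other git_config_* numeric helpers.

	static ssize_t buffer_size;

	static int demo_config(const char *var, const char *value, void *cb)
	{
		if (!strcmp(var, "demo.buffersize")) {
			buffer_size = git_config_ssize_t(var, value);
			return 0;
		}
		return git_default_config(var, value, cb);
	}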
int git_parse_maybe_bool(const char *value)
{
if (!value)
{
if (!value)
return config_error_nonbool(var);
- *dest = expand_user_path(value);
+ *dest = expand_user_path(value, 0);
if (!*dest)
die(_("failed to expand user dir in: '%s'"), value);
return 0;
{
int ret = 0;
char *xdg_config = xdg_config_home("config");
- char *user_config = expand_user_path("~/.gitconfig");
+ char *user_config = expand_user_path("~/.gitconfig", 0);
char *repo_config = have_git_dir() ? git_pathdup("config") : NULL;
current_parsing_scope = CONFIG_SCOPE_SYSTEM;
*/
struct ref **get_remote_heads(int in, char *src_buf, size_t src_len,
struct ref **list, unsigned int flags,
- struct sha1_array *extra_have,
- struct sha1_array *shallow_points)
+ struct oid_array *extra_have,
+ struct oid_array *shallow_points)
{
struct ref **orig_list = list;
die("protocol error: expected shallow sha-1, got '%s'", arg);
if (!shallow_points)
die("repository on the other end cannot be shallow");
- sha1_array_append(shallow_points, old_oid.hash);
+ oid_array_append(shallow_points, &old_oid);
continue;
}
}
if (extra_have && !strcmp(name, ".have")) {
- sha1_array_append(extra_have, old_oid.hash);
+ oid_array_append(extra_have, &old_oid);
continue;
}
const char **ssh_argv;
p = xstrdup(ssh_command);
- if (split_cmdline(p, &ssh_argv)) {
+ if (split_cmdline(p, &ssh_argv) > 0) {
variant = basename((char *)ssh_argv[0]);
/*
* At this point, variant points into the buffer
}
fi
+# Fills the COMPREPLY array with prefiltered words without any additional
+# processing.
+# Callers must take care of providing only words that match the current word
+# to be completed and adding any prefix and/or suffix (trailing space!), if
+# necessary.
+# 1: List of newline-separated matching completion words, complete with
+# prefix and suffix.
+__gitcomp_direct ()
+{
+ local IFS=$'\n'
+
+ COMPREPLY=($1)
+}
+
__gitcompappend ()
{
local x i=${#COMPREPLY[@]}
done | sort | uniq
}
+# Lists branches from the local repository.
+# 1: A prefix to be added to each listed branch (optional).
+# 2: List only branches matching this word (optional; list all branches if
+# unset or empty).
+# 3: A suffix to be appended to each listed branch (optional).
__git_heads ()
{
- __git for-each-ref --format='%(refname:short)' refs/heads
+ local pfx="${1-}" cur_="${2-}" sfx="${3-}"
+
+ __git for-each-ref --format="${pfx//\%/%%}%(refname:strip=2)$sfx" \
+ "refs/heads/$cur_*" "refs/heads/$cur_*/**"
}
+# Lists tags from the local repository.
+# Accepts the same positional parameters as __git_heads() above.
__git_tags ()
{
- __git for-each-ref --format='%(refname:short)' refs/tags
+ local pfx="${1-}" cur_="${2-}" sfx="${3-}"
+
+ __git for-each-ref --format="${pfx//\%/%%}%(refname:strip=2)$sfx" \
+ "refs/tags/$cur_*" "refs/tags/$cur_*/**"
}
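
# For example, "git tag -d" completion lists existing tags matching the
# current word, each followed by a space:
#
#   __gitcomp_direct "$(__git_tags "" "$cur" " ")"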
# Lists refs from the local (by default) or from a remote repository.
# Can be the name of a configured remote, a path, or a URL.
# 2: In addition to local refs, list unique branches from refs/remotes/ for
# 'git checkout's tracking DWIMery (optional; ignored, if set but empty).
+# 3: A prefix to be added to each listed ref (optional).
+# 4: List only refs matching this word (optional; list all refs if unset or
+# empty).
+# 5: A suffix to be appended to each listed ref (optional; ignored, if set
+# but empty).
+#
+# Use __git_complete_refs() instead.
__git_refs ()
{
local i hash dir track="${2-}"
local list_refs_from=path remote="${1-}"
- local format refs pfx
+ local format refs
+ local pfx="${3-}" cur_="${4-$cur}" sfx="${5-}"
+ local match="${4-}"
+ local fer_pfx="${pfx//\%/%%}" # "escape" for-each-ref format specifiers
__git_find_repo_path
dir="$__git_repo_path"
fi
if [ "$list_refs_from" = path ]; then
- case "$cur" in
+ if [[ "$cur_" == ^* ]]; then
+ pfx="$pfx^"
+ fer_pfx="$fer_pfx^"
+ cur_=${cur_#^}
+ match=${match#^}
+ fi
+ case "$cur_" in
refs|refs/*)
format="refname"
- refs="${cur%/*}"
+ refs=("$match*" "$match*/**")
track=""
;;
*)
- [[ "$cur" == ^* ]] && pfx="^"
for i in HEAD FETCH_HEAD ORIG_HEAD MERGE_HEAD; do
- if [ -e "$dir/$i" ]; then echo $pfx$i; fi
+ case "$i" in
+ $match*)
+ if [ -e "$dir/$i" ]; then
+ echo "$pfx$i$sfx"
+ fi
+ ;;
+ esac
done
- format="refname:short"
- refs="refs/tags refs/heads refs/remotes"
+ format="refname:strip=2"
+ refs=("refs/tags/$match*" "refs/tags/$match*/**"
+ "refs/heads/$match*" "refs/heads/$match*/**"
+ "refs/remotes/$match*" "refs/remotes/$match*/**")
;;
esac
- __git_dir="$dir" __git for-each-ref --format="$pfx%($format)" \
- $refs
+ __git_dir="$dir" __git for-each-ref --format="$fer_pfx%($format)$sfx" \
+ "${refs[@]}"
if [ -n "$track" ]; then
# employ the heuristic used by git checkout
# Try to find a remote branch that matches the completion word
# but only output if the branch name is unique
- local ref entry
- __git for-each-ref --shell --format="ref=%(refname:short)" \
- "refs/remotes/" | \
- while read -r entry; do
- eval "$entry"
- ref="${ref#*/}"
- if [[ "$ref" == "$cur"* ]]; then
- echo "$ref"
- fi
- done | sort | uniq -u
+ __git for-each-ref --format="$fer_pfx%(refname:strip=3)$sfx" \
+ --sort="refname:strip=3" \
+ "refs/remotes/*/$match*" "refs/remotes/*/$match*/**" | \
+ uniq -u
fi
return
fi
- case "$cur" in
+ case "$cur_" in
refs|refs/*)
- __git ls-remote "$remote" "$cur*" | \
+ __git ls-remote "$remote" "$match*" | \
while read -r hash i; do
case "$i" in
*^{}) ;;
- *) echo "$i" ;;
+ *) echo "$pfx$i$sfx" ;;
esac
done
;;
*)
if [ "$list_refs_from" = remote ]; then
- echo "HEAD"
- __git for-each-ref --format="%(refname:short)" \
- "refs/remotes/$remote/" | sed -e "s#^$remote/##"
+ case "HEAD" in
+ $match*) echo "${pfx}HEAD$sfx" ;;
+ esac
+ __git for-each-ref --format="$fer_pfx%(refname:strip=3)$sfx" \
+ "refs/remotes/$remote/$match*" \
+ "refs/remotes/$remote/$match*/**"
else
- __git ls-remote "$remote" HEAD \
- "refs/tags/*" "refs/heads/*" "refs/remotes/*" |
+ local query_symref
+ case "HEAD" in
+ $match*) query_symref="HEAD" ;;
+ esac
+ __git ls-remote "$remote" $query_symref \
+ "refs/tags/$match*" "refs/heads/$match*" \
+ "refs/remotes/$match*" |
while read -r hash i; do
case "$i" in
*^{}) ;;
- refs/*) echo "${i#refs/*/}" ;;
- *) echo "$i" ;; # symbolic refs
+ refs/*) echo "$pfx${i#refs/*/}$sfx" ;;
+ *) echo "$pfx$i$sfx" ;; # symbolic refs
esac
done
fi
esac
}
+# Completes refs, short and long, local and remote, symbolic and pseudo.
+#
+# Usage: __git_complete_refs [<option>]...
+# --remote=<remote>: The remote to list refs from, can be the name of a
+# configured remote, a path, or a URL.
+# --track: List unique remote branches for 'git checkout's tracking DWIMery.
+# --pfx=<prefix>: A prefix to be added to each ref.
+# --cur=<word>: The current ref to be completed. Defaults to the current
+# word to be completed.
+# --sfx=<suffix>: A suffix to be appended to each ref instead of the default
+# space.
+__git_complete_refs ()
+{
+ local remote track pfx cur_="$cur" sfx=" "
+
+ while test $# != 0; do
+ case "$1" in
+ --remote=*) remote="${1##--remote=}" ;;
+ --track) track="yes" ;;
+ --pfx=*) pfx="${1##--pfx=}" ;;
+ --cur=*) cur_="${1##--cur=}" ;;
+ --sfx=*) sfx="${1##--sfx=}" ;;
+ *) return 1 ;;
+ esac
+ shift
+ done
+
+ __gitcomp_direct "$(__git_refs "$remote" "$track" "$pfx" "$cur_" "$sfx")"
+}
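+#
+# For example, completing the ref in "git branch --set-upstream-to=<ref>"
+# reduces to:
+#
+#   __git_complete_refs --cur="${cur##--set-upstream-to=}"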
+
# __git_refs2 requires 1 argument (to pass to __git_refs)
+# Deprecated: use __git_complete_fetch_refspecs() instead.
__git_refs2 ()
{
local i
done
}
+# Completes refspecs for fetching from a remote repository.
+# 1: The remote repository.
+# 2: A prefix to be added to each listed refspec (optional).
+# 3: The ref to be completed as a refspec instead of the current word to be
+# completed (optional)
+# 4: A suffix to be appended to each listed refspec instead of the default
+# space (optional).
+__git_complete_fetch_refspecs ()
+{
+ local i remote="$1" pfx="${2-}" cur_="${3-$cur}" sfx="${4- }"
+
+ __gitcomp_direct "$(
+ for i in $(__git_refs "$remote" "" "" "$cur_") ; do
+ echo "$pfx$i:$i$sfx"
+ done
+ )"
+}
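+#
+# For example, completing the left-hand side of a fetch refspec becomes:
+#
+#   __git_complete_fetch_refspecs "$remote" "$pfx" "$cur_"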
+
# __git_refs_remotes requires 1 argument (to pass to ls-remote)
__git_refs_remotes ()
{
*...*)
pfx="${cur_%...*}..."
cur_="${cur_#*...}"
- __gitcomp_nl "$(__git_refs)" "$pfx" "$cur_"
+ __git_complete_refs --pfx="$pfx" --cur="$cur_"
;;
*..*)
pfx="${cur_%..*}.."
cur_="${cur_#*..}"
- __gitcomp_nl "$(__git_refs)" "$pfx" "$cur_"
+ __git_complete_refs --pfx="$pfx" --cur="$cur_"
;;
*)
- __gitcomp_nl "$(__git_refs)"
+ __git_complete_refs
;;
esac
}
case "$cmd" in
fetch)
if [ $lhs = 1 ]; then
- __gitcomp_nl "$(__git_refs2 "$remote")" "$pfx" "$cur_"
+ __git_complete_fetch_refspecs "$remote" "$pfx" "$cur_"
else
- __gitcomp_nl "$(__git_refs)" "$pfx" "$cur_"
+ __git_complete_refs --pfx="$pfx" --cur="$cur_"
fi
;;
pull|remote)
if [ $lhs = 1 ]; then
- __gitcomp_nl "$(__git_refs "$remote")" "$pfx" "$cur_"
+ __git_complete_refs --remote="$remote" --pfx="$pfx" --cur="$cur_"
else
- __gitcomp_nl "$(__git_refs)" "$pfx" "$cur_"
+ __git_complete_refs --pfx="$pfx" --cur="$cur_"
fi
;;
push)
if [ $lhs = 1 ]; then
- __gitcomp_nl "$(__git_refs)" "$pfx" "$cur_"
+ __git_complete_refs --pfx="$pfx" --cur="$cur_"
else
- __gitcomp_nl "$(__git_refs "$remote")" "$pfx" "$cur_"
+ __git_complete_refs --remote="$remote" --pfx="$pfx" --cur="$cur_"
fi
;;
esac
case "$subcommand" in
bad|good|reset|skip|start)
- __gitcomp_nl "$(__git_refs)"
+ __git_complete_refs
;;
*)
;;
case "$cur" in
--set-upstream-to=*)
- __gitcomp_nl "$(__git_refs)" "" "${cur##--set-upstream-to=}"
+ __git_complete_refs --cur="${cur##--set-upstream-to=}"
;;
--*)
__gitcomp "
--color --no-color --verbose --abbrev= --no-abbrev
- --track --no-track --contains --merged --no-merged
+ --track --no-track --contains --no-contains --merged --no-merged
--set-upstream-to= --edit-description --list
--unset-upstream --delete --move --remotes
--column --no-column --sort= --points-at
;;
*)
if [ $only_local_ref = "y" -a $has_r = "n" ]; then
- __gitcomp_nl "$(__git_heads)"
+ __gitcomp_direct "$(__git_heads "" "$cur" " ")"
else
- __gitcomp_nl "$(__git_refs)"
+ __git_complete_refs
fi
;;
esac
*)
# check if --track, --no-track, or --no-guess was specified
# if so, disable DWIM mode
- local flags="--track --no-track --no-guess" track=1
+ local flags="--track --no-track --no-guess" track_opt="--track"
if [ -n "$(__git_find_on_cmdline "$flags")" ]; then
- track=''
+ track_opt=''
fi
- __gitcomp_nl "$(__git_refs '' $track)"
+ __git_complete_refs $track_opt
;;
esac
}
_git_cherry ()
{
- __gitcomp_nl "$(__git_refs)"
+ __git_complete_refs
}
_git_cherry_pick ()
__gitcomp "--edit --no-commit --signoff --strategy= --mainline"
;;
*)
- __gitcomp_nl "$(__git_refs)"
+ __git_complete_refs
;;
esac
}
{
case "$prev" in
-c|-C)
- __gitcomp_nl "$(__git_refs)" "" "${cur}"
+ __git_complete_refs
return
;;
esac
;;
--reuse-message=*|--reedit-message=*|\
--fixup=*|--squash=*)
- __gitcomp_nl "$(__git_refs)" "" "${cur#*=}"
+ __git_complete_refs --cur="${cur#*=}"
return
;;
--untracked-files=*)
"
return
esac
- __gitcomp_nl "$(__git_refs)"
+ __git_complete_refs
}
__git_diff_algorithms="myers minimal patience histogram"
_gitk
}
-__git_match_ctag() {
- awk "/^${1//\//\\/}/ { print \$1 }" "$2"
+# Lists matching symbol names from a tag (as in ctags) file.
+# 1: List symbol names matching this word.
+# 2: The tag file to list symbol names from.
+# 3: A prefix to be added to each listed symbol name (optional).
+# 4: A suffix to be appended to each listed symbol name (optional).
+__git_match_ctag () {
+ awk -v pfx="${3-}" -v sfx="${4-}" "
+ /^${1//\//\\/}/ { print pfx \$1 sfx }
+ " "$2"
+}
+
+# Complete symbol names from a tag file.
+# Usage: __git_complete_symbol [<option>]...
+# --tags=<file>: The tag file to list symbol names from instead of the
+# default "tags".
+# --pfx=<prefix>: A prefix to be added to each symbol name.
+# --cur=<word>: The current symbol name to be completed. Defaults to
+# the current word to be completed.
+# --sfx=<suffix>: A suffix to be appended to each symbol name instead
+# of the default space.
+__git_complete_symbol () {
+ local tags=tags pfx="" cur_="${cur-}" sfx=" "
+
+ while test $# != 0; do
+ case "$1" in
+ --tags=*) tags="${1##--tags=}" ;;
+ --pfx=*) pfx="${1##--pfx=}" ;;
+ --cur=*) cur_="${1##--cur=}" ;;
+ --sfx=*) sfx="${1##--sfx=}" ;;
+ *) return 1 ;;
+ esac
+ shift
+ done
+
+ if test -r "$tags"; then
+ __gitcomp_direct "$(__git_match_ctag "$cur_" "$tags" "$pfx" "$sfx")"
+ fi
}
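
# For example, the function name in "git log -L:<funcname>:<file>" is
# completed with:
#
#   __git_complete_symbol --cur="${cur#-L:}" --sfx=":"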
_git_grep ()
case "$cword,$prev" in
2,*|*,-*)
- if test -r tags; then
- __gitcomp_nl "$(__git_match_ctag "$cur" tags)"
- return
- fi
+ __git_complete_symbol && return
;;
esac
- __gitcomp_nl "$(__git_refs)"
+ __git_complete_refs
}
_git_help ()
if [ -f "$__git_repo_path/MERGE_HEAD" ]; then
merge="--merge"
fi
+ case "$prev,$cur" in
+ -L,:*:*)
+ return # fall back to Bash filename completion
+ ;;
+ -L,:*)
+ __git_complete_symbol --cur="${cur#:}" --sfx=":"
+ return
+ ;;
+ -G,*|-S,*)
+ __git_complete_symbol
+ return
+ ;;
+ esac
case "$cur" in
--pretty=*|--format=*)
__gitcomp "$__git_log_pretty_formats $(__git_pretty_aliases)
"
return
;;
+ -L:*:*)
+ return # fall back to Bash filename completion
+ ;;
+ -L:*)
+ __git_complete_symbol --cur="${cur#-L:}" --sfx=":"
+ return
+ ;;
+ -G*)
+ __git_complete_symbol --pfx="-G" --cur="${cur#-G}"
+ return
+ ;;
+ -S*)
+ __git_complete_symbol --pfx="-S" --cur="${cur#-S}"
+ return
+ ;;
esac
__git_complete_revlist
}
--rerere-autoupdate --no-rerere-autoupdate --abort --continue"
return
esac
- __gitcomp_nl "$(__git_refs)"
+ __git_complete_refs
}
_git_mergetool ()
return
;;
esac
- __gitcomp_nl "$(__git_refs)"
+ __git_complete_refs
}
_git_mv ()
,*)
case "$prev" in
--ref)
- __gitcomp_nl "$(__git_refs)"
+ __git_complete_refs
;;
*)
__gitcomp "$subcommands --ref"
;;
add,--reuse-message=*|append,--reuse-message=*|\
add,--reedit-message=*|append,--reedit-message=*)
- __gitcomp_nl "$(__git_refs)" "" "${cur#*=}"
+ __git_complete_refs --cur="${cur#*=}"
;;
add,--*|append,--*)
__gitcomp '--file= --message= --reedit-message=
-m|-F)
;;
*)
- __gitcomp_nl "$(__git_refs)"
+ __git_complete_refs
;;
esac
;;
--*=)
;;
*:*)
- __gitcomp_nl "$(__git_refs)" "" "${cur_#*:}"
+ __git_complete_refs --cur="${cur_#*:}"
;;
*)
- __gitcomp_nl "$(__git_refs)" "" "$cur_"
+ __git_complete_refs --cur="$cur_"
;;
esac
}
return
esac
- __gitcomp_nl "$(__git_refs)"
+ __git_complete_refs
}
_git_reflog ()
if [ -z "$subcommand" ]; then
__gitcomp "$subcommands"
else
- __gitcomp_nl "$(__git_refs)"
+ __git_complete_refs
fi
}
return
;;
branch.*.merge)
- __gitcomp_nl "$(__git_refs)"
+ __git_complete_refs
return
;;
branch.*.rebase)
;;
branch.*)
local pfx="${cur%.*}." cur_="${cur#*.}"
- __gitcomp_nl "$(__git_heads)" "$pfx" "$cur_" "."
+ __gitcomp_direct "$(__git_heads "$pfx" "$cur_" ".")"
__gitcomp_nl_append $'autosetupmerge\nautosetuprebase\n' "$pfx" "$cur_"
return
;;
return
;;
esac
- __gitcomp_nl "$(__git_refs)"
+ __git_complete_refs
}
_git_rerere ()
return
;;
esac
- __gitcomp_nl "$(__git_refs)"
+ __git_complete_refs
}
_git_revert ()
return
;;
esac
- __gitcomp_nl "$(__git_refs)"
+ __git_complete_refs
}
_git_rm ()
;;
branch,*)
if [ $cword -eq 3 ]; then
- __gitcomp_nl "$(__git_refs)";
+ __git_complete_refs
else
__gitcomp_nl "$(__git stash list \
| sed -n -e 's/:.*//p')"
i="${words[c]}"
case "$i" in
-d|-v)
- __gitcomp_nl "$(__git_tags)"
+ __gitcomp_direct "$(__git_tags "" "$cur" " ")"
return
;;
-f)
;;
-*|tag)
if [ $f = 1 ]; then
- __gitcomp_nl "$(__git_tags)"
+ __gitcomp_direct "$(__git_tags "" "$cur" " ")"
fi
;;
*)
- __gitcomp_nl "$(__git_refs)"
+ __git_complete_refs
;;
esac
__gitcomp "
--list --delete --verify --annotate --message --file
--sign --cleanup --local-user --force --column --sort=
- --contains --points-at --merged --no-merged --create-reflog
+ --contains --no-contains --points-at --merged --no-merged --create-reflog
"
;;
esac
esac
}
+ __gitcomp_direct ()
+ {
+ emulate -L zsh
+
+ local IFS=$'\n'
+ compset -P '*[=:]'
+ compadd -Q -- ${=1} && _ret=0
+ }
+
__gitcomp_nl ()
{
emulate -L zsh
esac
}
+__gitcomp_direct ()
+{
+ emulate -L zsh
+
+ local IFS=$'\n'
+ compset -P '*[=:]'
+ compadd -Q -- ${=1} && _ret=0
+}
+
__gitcomp_nl ()
{
emulate -L zsh
. git-sh-setup
search_reflog () {
- sed -ne 's~^\([^ ]*\) .*\tcheckout: moving from '"$1"' .*~\1~p' \
+ sed -ne 's~^\([^ ]*\) .* checkout: moving from '"$1"' .*~\1~p' \
< "$GIT_DIR"/logs/HEAD
}
search_reflog_merges () {
git rev-parse $(
- sed -ne 's~^[^ ]* \([^ ]*\) .*\tmerge '"$1"':.*~\1^2~p' \
+ sed -ne 's~^[^ ]* \([^ ]*\) .* merge '"$1"':.*~\1^2~p' \
< "$GIT_DIR"/logs/HEAD
)
}
{
struct stat sb;
char *old_dir, *socket;
- old_dir = expand_user_path("~/.git-credential-cache");
+ old_dir = expand_user_path("~/.git-credential-cache", 0);
if (old_dir && !stat(old_dir, &sb) && S_ISDIR(sb.st_mode))
socket = xstrfmt("%s/socket", old_dir);
else
if (file) {
string_list_append(&fns, file);
} else {
- if ((file = expand_user_path("~/.git-credentials")))
+ if ((file = expand_user_path("~/.git-credentials", 0)))
string_list_append_nodup(&fns, file);
file = xdg_config_home("credentials");
if (file)
fclose(fp);
}
-static int run_service_command(const char **argv)
+static int run_service_command(struct child_process *cld)
{
- struct child_process cld = CHILD_PROCESS_INIT;
-
- cld.argv = argv;
- cld.git_cmd = 1;
- cld.err = -1;
- if (start_command(&cld))
+ argv_array_push(&cld->args, ".");
+ cld->git_cmd = 1;
+ cld->err = -1;
+ if (start_command(cld))
return -1;
close(0);
close(1);
- copy_to_log(cld.err);
+ copy_to_log(cld->err);
- return finish_command(&cld);
+ return finish_command(cld);
}
static int upload_pack(void)
{
- /* Timeout as string */
- char timeout_buf[64];
- const char *argv[] = { "upload-pack", "--strict", NULL, ".", NULL };
-
- argv[2] = timeout_buf;
-
- snprintf(timeout_buf, sizeof timeout_buf, "--timeout=%u", timeout);
- return run_service_command(argv);
+ struct child_process cld = CHILD_PROCESS_INIT;
+ argv_array_pushl(&cld.args, "upload-pack", "--strict", NULL);
+ argv_array_pushf(&cld.args, "--timeout=%u", timeout);
+ return run_service_command(&cld);
}
static int upload_archive(void)
{
- static const char *argv[] = { "upload-archive", ".", NULL };
- return run_service_command(argv);
+ struct child_process cld = CHILD_PROCESS_INIT;
+ argv_array_push(&cld.args, "upload-archive");
+ return run_service_command(&cld);
}
static int receive_pack(void)
{
- static const char *argv[] = { "receive-pack", ".", NULL };
- return run_service_command(argv);
+ struct child_process cld = CHILD_PROCESS_INIT;
+ argv_array_push(&cld.args, "receive-pack");
+ return run_service_command(&cld);
}
static struct daemon_service daemon_service[] = {
*/
const char *name;
- char hex[GIT_SHA1_HEXSZ + 1];
+ char hex[GIT_MAX_HEXSZ + 1];
char mode[10];
/*
* uniqueness across all objects (statistically speaking).
*/
if (abblen < GIT_SHA1_HEXSZ - 3) {
- static char hex[GIT_SHA1_HEXSZ + 1];
+ static char hex[GIT_MAX_HEXSZ + 1];
if (len < abblen && abblen <= len + 2)
xsnprintf(hex, sizeof(hex), "%s%.*s", abbrev, len+3-abblen, "..");
else
data->patchlen += new_len;
}
+static void patch_id_add_string(git_SHA_CTX *ctx, const char *str)
+{
+ git_SHA1_Update(ctx, str, strlen(str));
+}
+
+static void patch_id_add_mode(git_SHA_CTX *ctx, unsigned mode)
+{
+ /* large enough for 2^32 in octal */
+ char buf[12];
+ int len = xsnprintf(buf, sizeof(buf), "%06o", mode);
+ git_SHA1_Update(ctx, buf, len);
+}
+
/* returns 0 upon success, and writes result into sha1 */
static int diff_get_patch_id(struct diff_options *options, unsigned char *sha1, int diff_header_only)
{
int i;
git_SHA_CTX ctx;
struct patch_id_t data;
- char buffer[PATH_MAX * 4 + 20];
git_SHA1_Init(&ctx);
memset(&data, 0, sizeof(struct patch_id_t));
len1 = remove_space(p->one->path, strlen(p->one->path));
len2 = remove_space(p->two->path, strlen(p->two->path));
- if (p->one->mode == 0)
- len1 = snprintf(buffer, sizeof(buffer),
- "diff--gita/%.*sb/%.*s"
- "newfilemode%06o"
- "---/dev/null"
- "+++b/%.*s",
- len1, p->one->path,
- len2, p->two->path,
- p->two->mode,
- len2, p->two->path);
- else if (p->two->mode == 0)
- len1 = snprintf(buffer, sizeof(buffer),
- "diff--gita/%.*sb/%.*s"
- "deletedfilemode%06o"
- "---a/%.*s"
- "+++/dev/null",
- len1, p->one->path,
- len2, p->two->path,
- p->one->mode,
- len1, p->one->path);
- else
- len1 = snprintf(buffer, sizeof(buffer),
- "diff--gita/%.*sb/%.*s"
- "---a/%.*s"
- "+++b/%.*s",
- len1, p->one->path,
- len2, p->two->path,
- len1, p->one->path,
- len2, p->two->path);
- git_SHA1_Update(&ctx, buffer, len1);
+ patch_id_add_string(&ctx, "diff--git");
+ patch_id_add_string(&ctx, "a/");
+ git_SHA1_Update(&ctx, p->one->path, len1);
+ patch_id_add_string(&ctx, "b/");
+ git_SHA1_Update(&ctx, p->two->path, len2);
+
+ if (p->one->mode == 0) {
+ patch_id_add_string(&ctx, "newfilemode");
+ patch_id_add_mode(&ctx, p->two->mode);
+ patch_id_add_string(&ctx, "---/dev/null");
+ patch_id_add_string(&ctx, "+++b/");
+ git_SHA1_Update(&ctx, p->two->path, len2);
+ } else if (p->two->mode == 0) {
+ patch_id_add_string(&ctx, "deletedfilemode");
+ patch_id_add_mode(&ctx, p->one->mode);
+ patch_id_add_string(&ctx, "---a/");
+ git_SHA1_Update(&ctx, p->one->path, len1);
+ patch_id_add_string(&ctx, "+++/dev/null");
+ } else {
+ patch_id_add_string(&ctx, "---a/");
+ git_SHA1_Update(&ctx, p->one->path, len1);
+ patch_id_add_string(&ctx, "+++b/");
+ git_SHA1_Update(&ctx, p->two->path, len2);
+ }
if (diff_header_only)
continue;
struct strbuf;
struct diff_filespec;
struct userdiff_driver;
-struct sha1_array;
+struct oid_array;
struct commit;
struct combine_diff_path;
extern void show_combined_diff(struct combine_diff_path *elem, int num_parent,
int dense, struct rev_info *);
-extern void diff_tree_combined(const unsigned char *sha1, const struct sha1_array *parents, int dense, struct rev_info *rev);
+extern void diff_tree_combined(const unsigned char *sha1, const struct oid_array *parents, int dense, struct rev_info *rev);
extern void diff_tree_combined_merge(const struct commit *commit, int dense, struct rev_info *rev);
const char *replace_ref_base;
git_dir = getenv(GIT_DIR_ENVIRONMENT);
- if (!git_dir)
+ if (!git_dir) {
+ if (!startup_info->have_repository)
+ die("BUG: setup_git_env called without repository");
git_dir = DEFAULT_GIT_DIR_ENVIRONMENT;
+ }
gitfile = read_gitfile(git_dir);
git_dir = xstrdup(gitfile ? gitfile : git_dir);
if (get_common_dir(&sb, git_dir))
return git_object_dir;
}
-int odb_mkstemp(char *template, size_t limit, const char *pattern)
+int odb_mkstemp(struct strbuf *template, const char *pattern)
{
int fd;
/*
* restrictive except to remove write permission.
*/
int mode = 0444;
- snprintf(template, limit, "%s/%s",
- get_object_directory(), pattern);
- fd = git_mkstemp_mode(template, mode);
+ git_path_buf(template, "objects/%s", pattern);
+ fd = git_mkstemp_mode(template->buf, mode);
if (0 <= fd)
return fd;
/* slow path */
/* some mkstemp implementations erase template on failure */
- snprintf(template, limit, "%s/%s",
- get_object_directory(), pattern);
- safe_create_leading_directories(template);
- return xmkstemp_mode(template, mode);
+ git_path_buf(template, "objects/%s", pattern);
+ safe_create_leading_directories(template->buf);
+ return xmkstemp_mode(template->buf, mode);
}
int odb_pack_keep(const char *name)
static void start_packfile(void)
{
- static char tmp_file[PATH_MAX];
+ struct strbuf tmp_file = STRBUF_INIT;
struct packed_git *p;
struct pack_header hdr;
int pack_fd;
- pack_fd = odb_mkstemp(tmp_file, sizeof(tmp_file),
- "pack/tmp_pack_XXXXXX");
- FLEX_ALLOC_STR(p, pack_name, tmp_file);
+ pack_fd = odb_mkstemp(&tmp_file, "pack/tmp_pack_XXXXXX");
+ FLEX_ALLOC_STR(p, pack_name, tmp_file.buf);
+ strbuf_release(&tmp_file);
+
p->pack_fd = pack_fd;
p->do_not_close = 1;
pack_file = sha1fd(pack_fd, p->pack_name);
struct ref **sought, int nr_sought,
struct shallow_info *si)
{
- struct sha1_array ref = SHA1_ARRAY_INIT;
+ struct oid_array ref = OID_ARRAY_INIT;
int *status;
int i;
* shallow points that exist in the pack (iow in repo
* after get_pack() and reprepare_packed_git())
*/
- struct sha1_array extra = SHA1_ARRAY_INIT;
- unsigned char (*sha1)[20] = si->shallow->sha1;
+ struct oid_array extra = OID_ARRAY_INIT;
+ struct object_id *oid = si->shallow->oid;
for (i = 0; i < si->shallow->nr; i++)
- if (has_sha1_file(sha1[i]))
- sha1_array_append(&extra, sha1[i]);
+ if (has_object_file(&oid[i]))
+ oid_array_append(&extra, &oid[i]);
if (extra.nr) {
setup_alternate_shallow(&shallow_lock,
&alternate_shallow_file,
&extra);
commit_lock_file(&shallow_lock);
}
- sha1_array_clear(&extra);
+ oid_array_clear(&extra);
return;
}
if (!si->nr_ours && !si->nr_theirs)
return;
for (i = 0; i < nr_sought; i++)
- sha1_array_append(&ref, sought[i]->old_oid.hash);
+ oid_array_append(&ref, &sought[i]->old_oid);
si->ref = &ref;
if (args->update_shallow) {
* shallow roots that are actually reachable from new
* refs.
*/
- struct sha1_array extra = SHA1_ARRAY_INIT;
- unsigned char (*sha1)[20] = si->shallow->sha1;
+ struct oid_array extra = OID_ARRAY_INIT;
+ struct object_id *oid = si->shallow->oid;
assign_shallow_commits_to_refs(si, NULL, NULL);
if (!si->nr_ours && !si->nr_theirs) {
- sha1_array_clear(&ref);
+ oid_array_clear(&ref);
return;
}
for (i = 0; i < si->nr_ours; i++)
- sha1_array_append(&extra, sha1[si->ours[i]]);
+ oid_array_append(&extra, &oid[si->ours[i]]);
for (i = 0; i < si->nr_theirs; i++)
- sha1_array_append(&extra, sha1[si->theirs[i]]);
+ oid_array_append(&extra, &oid[si->theirs[i]]);
setup_alternate_shallow(&shallow_lock,
&alternate_shallow_file,
&extra);
commit_lock_file(&shallow_lock);
- sha1_array_clear(&extra);
- sha1_array_clear(&ref);
+ oid_array_clear(&extra);
+ oid_array_clear(&ref);
return;
}
sought[i]->status = REF_STATUS_REJECT_SHALLOW;
}
free(status);
- sha1_array_clear(&ref);
+ oid_array_clear(&ref);
}
struct ref *fetch_pack(struct fetch_pack_args *args,
const struct ref *ref,
const char *dest,
struct ref **sought, int nr_sought,
- struct sha1_array *shallow,
+ struct oid_array *shallow,
char **pack_lockfile)
{
struct ref *ref_cpy;
#include "string-list.h"
#include "run-command.h"
-struct sha1_array;
+struct oid_array;
struct fetch_pack_args {
const char *uploadpack;
const char *dest,
struct ref **sought,
int nr_sought,
- struct sha1_array *shallow,
+ struct oid_array *shallow,
char **pack_lockfile);
/*
static void init_skiplist(struct fsck_options *options, const char *path)
{
- static struct sha1_array skiplist = SHA1_ARRAY_INIT;
+ static struct oid_array skiplist = OID_ARRAY_INIT;
int sorted, fd;
- char buffer[41];
- unsigned char sha1[20];
+ char buffer[GIT_MAX_HEXSZ + 1];
+ struct object_id oid;
if (options->skiplist)
sorted = options->skiplist->sorted;
if (fd < 0)
die("Could not open skip list: %s", path);
for (;;) {
+ const char *p;
int result = read_in_full(fd, buffer, sizeof(buffer));
if (result < 0)
die_errno("Could not read '%s'", path);
if (!result)
break;
- if (get_sha1_hex(buffer, sha1) || buffer[40] != '\n')
+ if (parse_oid_hex(buffer, &oid, &p) || *p != '\n')
die("Invalid SHA-1: %s", buffer);
- sha1_array_append(&skiplist, sha1);
+ oid_array_append(&skiplist, &oid);
if (sorted && skiplist.nr > 1 &&
- hashcmp(skiplist.sha1[skiplist.nr - 2],
- sha1) > 0)
+ oidcmp(&skiplist.oid[skiplist.nr - 2],
+ &oid) > 0)
sorted = 0;
}
close(fd);
return 0;
if (options->skiplist && object &&
- sha1_array_lookup(options->skiplist, object->oid.hash) >= 0)
+ oid_array_lookup(options->skiplist, &object->oid) >= 0)
return 0;
if (msg_type == FSCK_FATAL)
fsck_error error_func;
unsigned strict:1;
int *msg_type;
- struct sha1_array *skiplist;
+ struct oid_array *skiplist;
struct decoration *object_names;
};
marked for applying."),
checkout_index => N__(
"If the patch applies cleanly, the edited hunk will immediately be
-marked for discarding"),
+marked for discarding."),
checkout_head => N__(
"If the patch applies cleanly, the edited hunk will immediately be
marked for discarding."),
real_cmd = p4_build_cmd(c)
return write_pipe(real_cmd, stdin)
-def read_pipe(c, ignore_error=False):
+def read_pipe_full(c):
+ """ Read output from command. Returns a tuple
+ of the return status, stdout text and stderr
+ text.
+ """
if verbose:
sys.stderr.write('Reading pipe: %s\n' % str(c))
expand = isinstance(c,basestring)
p = subprocess.Popen(c, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=expand)
(out, err) = p.communicate()
- if p.returncode != 0 and not ignore_error:
- die('Command failed: %s\nError: %s' % (str(c), err))
+ return (p.returncode, out, err)
+
+def read_pipe(c, ignore_error=False):
+ """ Read output from command. Returns the output text on
+ success. On failure, terminates execution, unless
+ ignore_error is True, in which case it returns an empty string.
+ """
+ (retcode, out, err) = read_pipe_full(c)
+ if retcode != 0:
+ if ignore_error:
+ out = ""
+ else:
+ die('Command failed: %s\nError: %s' % (str(c), err))
return out
+def read_pipe_text(c):
+ """ Read output from a command with trailing whitespace stripped.
+ On error, returns None.
+ """
+ (retcode, out, err) = read_pipe_full(c)
+ if retcode != 0:
+ return None
+ else:
+ return out.rstrip()
+
def p4_read_pipe(c, ignore_error=False):
real_cmd = p4_build_cmd(c)
return read_pipe(real_cmd, ignore_error)
return clientPath
def currentGitBranch():
- retcode = system(["git", "symbolic-ref", "-q", "HEAD"], ignore_error=True)
- if retcode != 0:
- # on a detached head
- return None
- else:
- return read_pipe(["git", "name-rev", "HEAD"]).split(" ")[1].strip()
+ return read_pipe_text(["git", "symbolic-ref", "--short", "-q", "HEAD"])
def isValidGitDir(path):
return git_dir(path) != None
fi &&
git add --force .gitmodules ||
die "$(eval_gettext "Failed to register submodule '\$sm_path'")"
+
+ # NEEDSWORK: In a multi-working-tree world, this needs to be
+ # set in the per-worktree config.
+ if git config --get submodule.active >/dev/null
+ then
+ # If the submodule being added isn't already covered by the
+ # currently configured pathspec, set the submodule's active flag
+ if ! git submodule--helper is-active "$sm_path"
+ then
+ git config submodule."$sm_name".active "true"
+ fi
+ else
+ git config submodule."$sm_name".active "true"
+ fi
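+
+ # For illustration (submodule names made up): a superproject can restrict
+ # the set of interesting submodules with a pathspec, while the per-name
+ # setting written above still activates individual ones, e.g.:
+ #
+ #   git config submodule.active "lib/*"
+ #   git config submodule.mylib.active true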
}
#
do
die_if_unmatched "$mode" "$sha1"
name=$(git submodule--helper name "$sm_path") || exit
- url=$(git config submodule."$name".url)
displaypath=$(git submodule--helper relative-path "$prefix$sm_path" "$wt_prefix")
if test "$stage" = U
then
say "U$sha1 $displaypath"
continue
fi
- if test -z "$url" ||
+ if ! git submodule--helper is-active "$sm_path" ||
{
! test -d "$sm_path"/.git &&
! test -f "$sm_path"/.git
while read mode sha1 stage sm_path
do
die_if_unmatched "$mode" "$sha1"
+
+ # skip inactive submodules
+ if ! git submodule--helper is-active "$sm_path"
+ then
+ continue
+ fi
+
name=$(git submodule--helper name "$sm_path")
url=$(git config -f .gitmodules --get submodule."$name".url)
;;
esac
- if git config "submodule.$name.url" >/dev/null 2>/dev/null
+ displaypath=$(git submodule--helper relative-path "$prefix$sm_path" "$wt_prefix")
+ say "$(eval_gettext "Synchronizing submodule url for '\$displaypath'")"
+ git config submodule."$name".url "$super_config_url"
+
+ if test -e "$sm_path"/.git
then
- displaypath=$(git submodule--helper relative-path "$prefix$sm_path" "$wt_prefix")
- say "$(eval_gettext "Synchronizing submodule url for '\$displaypath'")"
- git config submodule."$name".url "$super_config_url"
+ (
+ sanitize_submodule_env
+ cd "$sm_path"
+ remote=$(get_default_remote)
+ git config remote."$remote".url "$sub_origin_url"
- if test -e "$sm_path"/.git
+ if test -n "$recursive"
then
- (
- sanitize_submodule_env
- cd "$sm_path"
- remote=$(get_default_remote)
- git config remote."$remote".url "$sub_origin_url"
-
- if test -n "$recursive"
- then
- prefix="$prefix$sm_path/"
- eval cmd_sync
- fi
- )
+ prefix="$prefix$sm_path/"
+ eval cmd_sync
fi
+ )
fi
done
}
if (!help && get_super_prefix()) {
if (!(p->option & SUPPORT_SUPER_PREFIX))
die("%s doesn't support --super-prefix", p->cmd);
- if (prefix)
- die("can't use --super-prefix from a subdirectory");
}
if (!help && p->option & NEED_WORK_TREE)
}
if (opt->linenum) {
char buf[32];
- snprintf(buf, sizeof(buf), "%d", lno);
+ xsnprintf(buf, sizeof(buf), "%d", lno);
output_color(opt, buf, strlen(buf), opt->color_lineno);
output_sep(opt, sign);
}
opt->color_filename);
output_sep(opt, ':');
}
- snprintf(buf, sizeof(buf), "%u\n", count);
+ xsnprintf(buf, sizeof(buf), "%u\n", count);
opt->output(opt, buf, strlen(buf));
return 1;
}
char *sha1_to_hex(const unsigned char *sha1)
{
static int bufno;
- static char hexbuffer[4][GIT_SHA1_HEXSZ + 1];
+ static char hexbuffer[4][GIT_MAX_HEXSZ + 1];
bufno = (bufno + 1) % ARRAY_SIZE(hexbuffer);
return sha1_to_hex_r(hexbuffer[bufno], sha1);
}
#endif
int active_requests;
int http_is_verbose;
-size_t http_post_buffer = 16 * LARGE_PACKET_MAX;
+ssize_t http_post_buffer = 16 * LARGE_PACKET_MAX;
#if LIBCURL_VERSION_NUM >= 0x070a06
#define LIBCURL_CAN_HANDLE_AUTH_ANY
}
if (!strcmp("http.postbuffer", var)) {
- http_post_buffer = git_config_int(var, value);
+ http_post_buffer = git_config_ssize_t(var, value);
+ if (http_post_buffer < 0)
+ warning(_("negative value for http.postbuffer; defaulting to %d"), LARGE_PACKET_MAX);
if (http_post_buffer < LARGE_PACKET_MAX)
http_post_buffer = LARGE_PACKET_MAX;
return 0;
}
}
- if (curl_http_proxy) {
- curl_easy_setopt(result, CURLOPT_PROXY, curl_http_proxy);
+ if (curl_http_proxy && curl_http_proxy[0] == '\0') {
+ /*
+ * Handle the case of an empty http.proxy value here, to keep the
+ * common code clean.
+ * NB: an empty value disables proxying altogether.
+ */
+ curl_easy_setopt(result, CURLOPT_PROXY, "");
+ } else if (curl_http_proxy) {
#if LIBCURL_VERSION_NUM >= 0x071800
if (starts_with(curl_http_proxy, "socks5h"))
curl_easy_setopt(result,
strbuf_release(&url);
}
+ if (!proxy_auth.host)
+ die("Invalid proxy URL '%s'", curl_http_proxy);
+
curl_easy_setopt(result, CURLOPT_PROXY, proxy_auth.host);
#if LIBCURL_VERSION_NUM >= 0x071304
var_override(&curl_no_proxy, getenv("NO_PROXY"));
* FAILONERROR it is lost, so we can give only the numeric
* status code.
*/
- snprintf(curl_errorstr, sizeof(curl_errorstr),
- "The requested URL returned error: %ld",
- results->http_code);
+ xsnprintf(curl_errorstr, sizeof(curl_errorstr),
+ "The requested URL returned error: %ld",
+ results->http_code);
}
if (results->curl_result == CURLE_OK) {
{
slot->results = results;
if (!start_active_slot(slot)) {
- snprintf(curl_errorstr, sizeof(curl_errorstr),
- "failed to start HTTP request");
+ xsnprintf(curl_errorstr, sizeof(curl_errorstr),
+ "failed to start HTTP request");
return HTTP_START_FAILED;
}
extern long int git_curl_ipresolve;
extern int active_requests;
extern int http_is_verbose;
-extern size_t http_post_buffer;
+extern ssize_t http_post_buffer;
extern struct credential http_auth;
extern char curl_errorstr[CURL_ERROR_SIZE];
int gai;
char portstr[6];
- snprintf(portstr, sizeof(portstr), "%d", srvc->port);
+ xsnprintf(portstr, sizeof(portstr), "%d", srvc->port);
memset(&hints, 0, sizeof(hints));
hints.ai_socktype = SOCK_STREAM;
assert(!mi->filter_stage);
if (mi->header_stage) {
- if (!line->len || (line->len == 1 && line->buf[0] == '\n'))
+ if (!line->len || (line->len == 1 && line->buf[0] == '\n')) {
+ if (mi->inbody_header_accum.len) {
+ flush_inbody_header_accum(mi);
+ mi->header_stage = 0;
+ }
return 0;
+ }
}
if (mi->use_inbody_headers && mi->header_stage) {
* Scan forward in the index array for index entries having the same
* path prefix (that are also in this directory).
*/
- if (strncmp(istate->cache[k_start + 1]->name, prefix->buf, prefix->len) > 0)
+ if (k_start + 1 >= k_end)
+ k = k_end;
+ else if (strncmp(istate->cache[k_start + 1]->name, prefix->buf, prefix->len) > 0)
k = k_start + 1;
else if (strncmp(istate->cache[k_end - 1]->name, prefix->buf, prefix->len) == 0)
k = k_end;
* How to consolidate an int_node:
* If there are > 1 non-NULL entries, give up and return non-zero.
* Otherwise replace the int_node at the given index in the given parent node
- * with the only entry (or a NULL entry if no entries) from the given tree,
- * and return 0.
+ * with the only NOTE entry (or a NULL entry if no entries) from the given
+ * tree, and return 0.
*/
static int note_tree_consolidate(struct int_node *tree,
struct int_node *parent, unsigned char index)
}
}
+ if (p && (GET_PTR_TYPE(p) != PTR_TYPE_NOTE))
+ return -2;
/* replace tree with p in parent[index] */
parent->a[index] = p;
free(tree);
const char *filename,
uint16_t options)
{
- static char tmp_file[PATH_MAX];
static uint16_t default_version = 1;
static uint16_t flags = BITMAP_OPT_FULL_DAG;
+ struct strbuf tmp_file = STRBUF_INIT;
struct sha1file *f;
struct bitmap_disk_header header;
- int fd = odb_mkstemp(tmp_file, sizeof(tmp_file), "pack/tmp_bitmap_XXXXXX");
+ int fd = odb_mkstemp(&tmp_file, "pack/tmp_bitmap_XXXXXX");
- if (fd < 0)
- die_errno("unable to create '%s'", tmp_file);
- f = sha1fd(fd, tmp_file);
+ f = sha1fd(fd, tmp_file.buf);
memcpy(header.magic, BITMAP_IDX_SIGNATURE, sizeof(BITMAP_IDX_SIGNATURE));
header.version = htons(default_version);
sha1close(f, NULL, CSUM_FSYNC);
- if (adjust_shared_perm(tmp_file))
+ if (adjust_shared_perm(tmp_file.buf))
die_errno("unable to make temporary bitmap file readable");
- if (rename(tmp_file, filename))
+ if (rename(tmp_file.buf, filename))
die_errno("unable to rename temporary bitmap file to '%s'", filename);
+
+ strbuf_release(&tmp_file);
}
f = sha1fd_check(index_name);
} else {
if (!index_name) {
- static char tmp_file[PATH_MAX];
- fd = odb_mkstemp(tmp_file, sizeof(tmp_file), "pack/tmp_idx_XXXXXX");
- index_name = xstrdup(tmp_file);
+ struct strbuf tmp_file = STRBUF_INIT;
+ fd = odb_mkstemp(&tmp_file, "pack/tmp_idx_XXXXXX");
+ index_name = strbuf_detach(&tmp_file, NULL);
} else {
unlink(index_name);
fd = open(index_name, O_CREAT|O_EXCL|O_WRONLY, 0600);
+ if (fd < 0)
+ die_errno("unable to create '%s'", index_name);
}
- if (fd < 0)
- die_errno("unable to create '%s'", index_name);
f = sha1fd(fd, index_name);
}
struct sha1file *create_tmp_packfile(char **pack_tmp_name)
{
- char tmpname[PATH_MAX];
+ struct strbuf tmpname = STRBUF_INIT;
int fd;
- fd = odb_mkstemp(tmpname, sizeof(tmpname), "pack/tmp_pack_XXXXXX");
- *pack_tmp_name = xstrdup(tmpname);
+ fd = odb_mkstemp(&tmpname, "pack/tmp_pack_XXXXXX");
+ *pack_tmp_name = strbuf_detach(&tmpname, NULL);
return sha1fd(fd, *pack_tmp_name);
}
int parse_opt_object_name(const struct option *opt, const char *arg, int unset)
{
- unsigned char sha1[20];
+ struct object_id oid;
if (unset) {
- sha1_array_clear(opt->value);
+ oid_array_clear(opt->value);
return 0;
}
if (!arg)
return -1;
- if (get_sha1(arg, sha1))
+ if (get_oid(arg, &oid))
return error(_("malformed object name '%s'"), arg);
- sha1_array_append(opt->value, sha1);
+ oid_array_append(opt->value, &oid);
return 0;
}
PARSE_OPT_LASTARG_DEFAULT | flag, \
parse_opt_commits, (intptr_t) "HEAD" \
}
-#define OPT_CONTAINS(v, h) _OPT_CONTAINS_OR_WITH("contains", v, h, 0)
-#define OPT_WITH(v, h) _OPT_CONTAINS_OR_WITH("with", v, h, PARSE_OPT_HIDDEN)
+#define OPT_CONTAINS(v, h) _OPT_CONTAINS_OR_WITH("contains", v, h, PARSE_OPT_NONEG)
+#define OPT_NO_CONTAINS(v, h) _OPT_CONTAINS_OR_WITH("no-contains", v, h, PARSE_OPT_NONEG)
+#define OPT_WITH(v, h) _OPT_CONTAINS_OR_WITH("with", v, h, PARSE_OPT_HIDDEN | PARSE_OPT_NONEG)
+#define OPT_WITHOUT(v, h) _OPT_CONTAINS_OR_WITH("without", v, h, PARSE_OPT_HIDDEN | PARSE_OPT_NONEG)
#endif
struct commit *commit,
struct patch_ids *ids)
{
- unsigned char header_only_patch_id[GIT_SHA1_RAWSZ];
+ unsigned char header_only_patch_id[GIT_MAX_RAWSZ];
patch->commit = commit;
if (commit_patch_id(commit, &ids->diffopts, header_only_patch_id, 1))
struct patch_id {
struct hashmap_entry ent;
- unsigned char patch_id[GIT_SHA1_RAWSZ];
+ unsigned char patch_id[GIT_MAX_RAWSZ];
struct commit *commit;
};
}
/* Returns 0 on success, negative on failure. */
-#define SUBMODULE_PATH_ERR_NOT_CONFIGURED -1
static int do_submodule_path(struct strbuf *buf, const char *path,
const char *fmt, va_list args)
{
- const char *git_dir;
struct strbuf git_submodule_common_dir = STRBUF_INIT;
struct strbuf git_submodule_dir = STRBUF_INIT;
- const struct submodule *sub;
- int err = 0;
+ int ret;
- strbuf_addstr(buf, path);
- strbuf_complete(buf, '/');
- strbuf_addstr(buf, ".git");
-
- git_dir = read_gitfile(buf->buf);
- if (git_dir) {
- strbuf_reset(buf);
- strbuf_addstr(buf, git_dir);
- }
- if (!is_git_directory(buf->buf)) {
- gitmodules_config();
- sub = submodule_from_path(null_sha1, path);
- if (!sub) {
- err = SUBMODULE_PATH_ERR_NOT_CONFIGURED;
- goto cleanup;
- }
- strbuf_reset(buf);
- strbuf_git_path(buf, "%s/%s", "modules", sub->name);
- }
-
- strbuf_addch(buf, '/');
- strbuf_addbuf(&git_submodule_dir, buf);
+ ret = submodule_to_gitdir(&git_submodule_dir, path);
+ if (ret)
+ goto cleanup;
+ strbuf_complete(&git_submodule_dir, '/');
+ strbuf_addbuf(buf, &git_submodule_dir);
strbuf_vaddf(buf, fmt, args);
if (get_common_dir_noenv(&git_submodule_common_dir, git_submodule_dir.buf))
cleanup:
strbuf_release(&git_submodule_dir);
strbuf_release(&git_submodule_common_dir);
-
- return err;
+ return ret;
}
char *git_pathdup_submodule(const char *path, const char *fmt, ...)
* Return a string with ~ and ~user expanded via getpw*. The result is a
* newly allocated string. Returns NULL on getpw failure or if path is NULL.
+ *
+ * If real_home is true, real_path($HOME) is used in the expansion.
*/
-char *expand_user_path(const char *path)
+char *expand_user_path(const char *path, int real_home)
{
struct strbuf user_path = STRBUF_INIT;
const char *to_copy = path;
const char *home = getenv("HOME");
if (!home)
goto return_null;
- strbuf_addstr(&user_path, home);
+ if (real_home)
+ strbuf_addstr(&user_path, real_path(home));
+ else
+ strbuf_addstr(&user_path, home);
#ifdef GIT_WINDOWS_NATIVE
convert_slashes(user_path.buf);
#endif
strbuf_add(&validated_path, path, len);
if (used_path.buf[0] == '~') {
- char *newpath = expand_user_path(used_path.buf);
+ char *newpath = expand_user_path(used_path.buf, 0);
if (!newpath)
return NULL;
strbuf_attach(&used_path, newpath, strlen(newpath),
free(pathspec->items[i].match);
free(pathspec->items[i].original);
- for (j = 0; j < pathspec->items[j].attr_match_nr; j++)
+ for (j = 0; j < pathspec->items[i].attr_match_nr; j++)
free(pathspec->items[i].attr_match[j].value);
free(pathspec->items[i].attr_match);
msgid "git describe [<options>] --dirty"
msgstr "git describe [<Optionen>] --dirty"
-#: builtin/describe.c:217
+#: builtin/describe.c:52
+msgid "head"
+msgstr "Branch"
+
+#: builtin/describe.c:52
+msgid "lightweight"
+msgstr "nicht-annotiert"
+
+#: builtin/describe.c:52
+msgid "annotated"
+msgstr "annotiert"
+
+#: builtin/describe.c:249
#, c-format
msgid "annotated tag %s not available"
msgstr "annotiertes Tag %s ist nicht verfügbar"
struct ref_array *array;
struct ref_filter *filter;
struct contains_cache contains_cache;
+ struct contains_cache no_contains_cache;
};
/*
}
static int commit_contains(struct ref_filter *filter, struct commit *commit,
- struct contains_cache *cache)
+ struct commit_list *list, struct contains_cache *cache)
{
if (filter->with_commit_tag_algo)
- return contains_tag_algo(commit, filter->with_commit, cache) == CONTAINS_YES;
- return is_descendant_of(commit, filter->with_commit);
+ return contains_tag_algo(commit, list, cache) == CONTAINS_YES;
+ return is_descendant_of(commit, list);
}
/*
* the need to parse the object via parse_object(). peel_ref() might be a
* more efficient alternative to obtain the pointee.
*/
-static const unsigned char *match_points_at(struct sha1_array *points_at,
- const unsigned char *sha1,
- const char *refname)
+static const struct object_id *match_points_at(struct oid_array *points_at,
+ const struct object_id *oid,
+ const char *refname)
{
- const unsigned char *tagged_sha1 = NULL;
+ const struct object_id *tagged_oid = NULL;
struct object *obj;
- if (sha1_array_lookup(points_at, sha1) >= 0)
- return sha1;
- obj = parse_object(sha1);
+ if (oid_array_lookup(points_at, oid) >= 0)
+ return oid;
+ obj = parse_object(oid->hash);
if (!obj)
die(_("malformed object at '%s'"), refname);
if (obj->type == OBJ_TAG)
- tagged_sha1 = ((struct tag *)obj)->tagged->oid.hash;
- if (tagged_sha1 && sha1_array_lookup(points_at, tagged_sha1) >= 0)
- return tagged_sha1;
+ tagged_oid = &((struct tag *)obj)->tagged->oid;
+ if (tagged_oid && oid_array_lookup(points_at, tagged_oid) >= 0)
+ return tagged_oid;
return NULL;
}
if (!filter_pattern_match(filter, refname))
return 0;
- if (filter->points_at.nr && !match_points_at(&filter->points_at, oid->hash, refname))
+ if (filter->points_at.nr && !match_points_at(&filter->points_at, oid, refname))
return 0;
/*
* obtain the commit using the 'oid' available and discard all
* non-commits early. The actual filtering is done later.
*/
- if (filter->merge_commit || filter->with_commit || filter->verbose) {
+ if (filter->merge_commit || filter->with_commit || filter->no_commit || filter->verbose) {
commit = lookup_commit_reference_gently(oid->hash, 1);
if (!commit)
return 0;
- /* We perform the filtering for the '--contains' option */
+ /* We perform the filtering for the '--contains' option... */
if (filter->with_commit &&
- !commit_contains(filter, commit, &ref_cbdata->contains_cache))
+ !commit_contains(filter, commit, filter->with_commit, &ref_cbdata->contains_cache))
+ return 0;
+ /* ...or for the `--no-contains' option */
+ if (filter->no_commit &&
+ commit_contains(filter, commit, filter->no_commit, &ref_cbdata->no_contains_cache))
return 0;
}
filter->kind = type & FILTER_REFS_KIND_MASK;
init_contains_cache(&ref_cbdata.contains_cache);
+ init_contains_cache(&ref_cbdata.no_contains_cache);
/* Simple per-ref filtering */
if (!filter->kind)
}
clear_contains_cache(&ref_cbdata.contains_cache);
+ clear_contains_cache(&ref_cbdata.no_contains_cache);
/* Filters that need revision walking */
if (filter->merge_commit)
{
struct ref_filter *rf = opt->value;
unsigned char sha1[20];
+ int no_merged = starts_with(opt->long_name, "no");
+
+ if (rf->merge) {
+ if (no_merged) {
+ return opterror(opt, "is incompatible with --merged", 0);
+ } else {
+ return opterror(opt, "is incompatible with --no-merged", 0);
+ }
+ }
- rf->merge = starts_with(opt->long_name, "no")
+ rf->merge = no_merged
? REF_FILTER_MERGED_OMIT
: REF_FILTER_MERGED_INCLUDE;
struct ref_filter {
const char **name_patterns;
- struct sha1_array points_at;
+ struct oid_array points_at;
struct commit_list *with_commit;
+ struct commit_list *no_commit;
enum {
REF_FILTER_MERGED_NONE = 0,
#include "refs/refs-internal.h"
#include "object.h"
#include "tag.h"
+#include "submodule.h"
/*
* List of all available backends
return 1;
}
+char *refs_resolve_refdup(struct ref_store *refs,
+ const char *refname, int resolve_flags,
+ unsigned char *sha1, int *flags)
+{
+ const char *result;
+
+ result = refs_resolve_ref_unsafe(refs, refname, resolve_flags,
+ sha1, flags);
+ return xstrdup_or_null(result);
+}
+
char *resolve_refdup(const char *refname, int resolve_flags,
unsigned char *sha1, int *flags)
{
- return xstrdup_or_null(resolve_ref_unsafe(refname, resolve_flags,
- sha1, flags));
+ return refs_resolve_refdup(get_main_ref_store(),
+ refname, resolve_flags,
+ sha1, flags);
}
/* The argument to filter_refs */
void *cb_data;
};
-int read_ref_full(const char *refname, int resolve_flags, unsigned char *sha1, int *flags)
+int refs_read_ref_full(struct ref_store *refs, const char *refname,
+ int resolve_flags, unsigned char *sha1, int *flags)
{
- if (resolve_ref_unsafe(refname, resolve_flags, sha1, flags))
+ if (refs_resolve_ref_unsafe(refs, refname, resolve_flags, sha1, flags))
return 0;
return -1;
}
+int read_ref_full(const char *refname, int resolve_flags, unsigned char *sha1, int *flags)
+{
+ return refs_read_ref_full(get_main_ref_store(), refname,
+ resolve_flags, sha1, flags);
+}
+
int read_ref(const char *refname, unsigned char *sha1)
{
return read_ref_full(refname, RESOLVE_REF_READING, sha1, NULL);
for_each_rawref(warn_if_dangling_symref, &data);
}
+int refs_for_each_tag_ref(struct ref_store *refs, each_ref_fn fn, void *cb_data)
+{
+ return refs_for_each_ref_in(refs, "refs/tags/", fn, cb_data);
+}
+
int for_each_tag_ref(each_ref_fn fn, void *cb_data)
{
- return for_each_ref_in("refs/tags/", fn, cb_data);
+ return refs_for_each_tag_ref(get_main_ref_store(), fn, cb_data);
}
int for_each_tag_ref_submodule(const char *submodule, each_ref_fn fn, void *cb_data)
{
- return for_each_ref_in_submodule(submodule, "refs/tags/", fn, cb_data);
+ return refs_for_each_tag_ref(get_submodule_ref_store(submodule),
+ fn, cb_data);
+}
+
+int refs_for_each_branch_ref(struct ref_store *refs, each_ref_fn fn, void *cb_data)
+{
+ return refs_for_each_ref_in(refs, "refs/heads/", fn, cb_data);
}
int for_each_branch_ref(each_ref_fn fn, void *cb_data)
{
- return for_each_ref_in("refs/heads/", fn, cb_data);
+ return refs_for_each_branch_ref(get_main_ref_store(), fn, cb_data);
}
int for_each_branch_ref_submodule(const char *submodule, each_ref_fn fn, void *cb_data)
{
- return for_each_ref_in_submodule(submodule, "refs/heads/", fn, cb_data);
+ return refs_for_each_branch_ref(get_submodule_ref_store(submodule),
+ fn, cb_data);
+}
+
+int refs_for_each_remote_ref(struct ref_store *refs, each_ref_fn fn, void *cb_data)
+{
+ return refs_for_each_ref_in(refs, "refs/remotes/", fn, cb_data);
}
int for_each_remote_ref(each_ref_fn fn, void *cb_data)
{
- return for_each_ref_in("refs/remotes/", fn, cb_data);
+ return refs_for_each_remote_ref(get_main_ref_store(), fn, cb_data);
}
int for_each_remote_ref_submodule(const char *submodule, each_ref_fn fn, void *cb_data)
{
- return for_each_ref_in_submodule(submodule, "refs/remotes/", fn, cb_data);
+ return refs_for_each_remote_ref(get_submodule_ref_store(submodule),
+ fn, cb_data);
}
int head_ref_namespaced(each_ref_fn fn, void *cb_data)
{
const char **p, *r;
int refs_found = 0;
+ struct strbuf fullref = STRBUF_INIT;
*ref = NULL;
for (p = ref_rev_parse_rules; *p; p++) {
- char fullref[PATH_MAX];
unsigned char sha1_from_ref[20];
unsigned char *this_result;
int flag;
this_result = refs_found ? sha1_from_ref : sha1;
- mksnpath(fullref, sizeof(fullref), *p, len, str);
- r = resolve_ref_unsafe(fullref, RESOLVE_REF_READING,
+ strbuf_reset(&fullref);
+ strbuf_addf(&fullref, *p, len, str);
+ r = resolve_ref_unsafe(fullref.buf, RESOLVE_REF_READING,
this_result, &flag);
if (r) {
if (!refs_found++)
*ref = xstrdup(r);
if (!warn_ambiguous_refs)
break;
- } else if ((flag & REF_ISSYMREF) && strcmp(fullref, "HEAD")) {
- warning("ignoring dangling symref %s.", fullref);
- } else if ((flag & REF_ISBROKEN) && strchr(fullref, '/')) {
- warning("ignoring broken ref %s.", fullref);
+ } else if ((flag & REF_ISSYMREF) && strcmp(fullref.buf, "HEAD")) {
+ warning("ignoring dangling symref %s.", fullref.buf);
+ } else if ((flag & REF_ISBROKEN) && strchr(fullref.buf, '/')) {
+ warning("ignoring broken ref %s.", fullref.buf);
}
}
+ strbuf_release(&fullref);
return refs_found;
}
char *last_branch = substitute_branch_name(&str, &len);
const char **p;
int logs_found = 0;
+ struct strbuf path = STRBUF_INIT;
*log = NULL;
for (p = ref_rev_parse_rules; *p; p++) {
unsigned char hash[20];
- char path[PATH_MAX];
const char *ref, *it;
- mksnpath(path, sizeof(path), *p, len, str);
- ref = resolve_ref_unsafe(path, RESOLVE_REF_READING,
+ strbuf_reset(&path);
+ strbuf_addf(&path, *p, len, str);
+ ref = resolve_ref_unsafe(path.buf, RESOLVE_REF_READING,
hash, NULL);
if (!ref)
continue;
- if (reflog_exists(path))
- it = path;
- else if (strcmp(ref, path) && reflog_exists(ref))
+ if (reflog_exists(path.buf))
+ it = path.buf;
+ else if (strcmp(ref, path.buf) && reflog_exists(ref))
it = ref;
else
continue;
if (!warn_ambiguous_refs)
break;
}
+ strbuf_release(&path);
free(last_branch);
return logs_found;
}
return 0;
}
-int delete_ref(const char *msg, const char *refname,
- const unsigned char *old_sha1, unsigned int flags)
+int refs_delete_ref(struct ref_store *refs, const char *msg,
+ const char *refname,
+ const unsigned char *old_sha1,
+ unsigned int flags)
{
struct ref_transaction *transaction;
struct strbuf err = STRBUF_INIT;
- if (ref_type(refname) == REF_TYPE_PSEUDOREF)
+ if (ref_type(refname) == REF_TYPE_PSEUDOREF) {
+ assert(refs == get_main_ref_store());
return delete_pseudoref(refname, old_sha1);
+ }
- transaction = ref_transaction_begin(&err);
+ transaction = ref_store_transaction_begin(refs, &err);
if (!transaction ||
ref_transaction_delete(transaction, refname, old_sha1,
flags, msg, &err) ||
return 0;
}
+int delete_ref(const char *msg, const char *refname,
+ const unsigned char *old_sha1, unsigned int flags)
+{
+ return refs_delete_ref(get_main_ref_store(), msg, refname,
+ old_sha1, flags);
+}
+
int copy_reflog_msg(char *buf, const char *msg)
{
char *cp = buf;
return 1;
}
-struct ref_transaction *ref_transaction_begin(struct strbuf *err)
+struct ref_transaction *ref_store_transaction_begin(struct ref_store *refs,
+ struct strbuf *err)
{
+ struct ref_transaction *tr;
assert(err);
- return xcalloc(1, sizeof(struct ref_transaction));
+ tr = xcalloc(1, sizeof(struct ref_transaction));
+ tr->ref_store = refs;
+ return tr;
+}
+
+struct ref_transaction *ref_transaction_begin(struct strbuf *err)
+{
+ return ref_store_transaction_begin(get_main_ref_store(), err);
}
void ref_transaction_free(struct ref_transaction *transaction)
old_oid ? old_oid->hash : NULL, flags, onerr);
}
-int update_ref(const char *msg, const char *refname,
- const unsigned char *new_sha1, const unsigned char *old_sha1,
- unsigned int flags, enum action_on_err onerr)
+int refs_update_ref(struct ref_store *refs, const char *msg,
+ const char *refname, const unsigned char *new_sha1,
+ const unsigned char *old_sha1, unsigned int flags,
+ enum action_on_err onerr)
{
struct ref_transaction *t = NULL;
struct strbuf err = STRBUF_INIT;
int ret = 0;
if (ref_type(refname) == REF_TYPE_PSEUDOREF) {
+ assert(refs == get_main_ref_store());
ret = write_pseudoref(refname, new_sha1, old_sha1, &err);
} else {
- t = ref_transaction_begin(&err);
+ t = ref_store_transaction_begin(refs, &err);
if (!t ||
ref_transaction_update(t, refname, new_sha1, old_sha1,
flags, msg, &err) ||
return 0;
}
+int update_ref(const char *msg, const char *refname,
+ const unsigned char *new_sha1,
+ const unsigned char *old_sha1,
+ unsigned int flags, enum action_on_err onerr)
+{
+ return refs_update_ref(get_main_ref_store(), msg, refname, new_sha1,
+ old_sha1, flags, onerr);
+}
+
char *shorten_unambiguous_ref(const char *refname, int strict)
{
int i;
static char **scanf_fmts;
static int nr_rules;
char *short_name;
+ struct strbuf resolved_buf = STRBUF_INIT;
if (!nr_rules) {
/*
*/
for (j = 0; j < rules_to_fail; j++) {
const char *rule = ref_rev_parse_rules[j];
- char refname[PATH_MAX];
/* skip matched rule */
if (i == j)
* (with this previous rule) to a valid ref
* read_ref() returns 0 on success
*/
- mksnpath(refname, sizeof(refname),
- rule, short_name_len, short_name);
- if (ref_exists(refname))
+ strbuf_reset(&resolved_buf);
+ strbuf_addf(&resolved_buf, rule,
+ short_name_len, short_name);
+ if (ref_exists(resolved_buf.buf))
break;
}
* short name is non-ambiguous if all previous rules
* haven't resolved to a valid ref
*/
- if (j == rules_to_fail)
+ if (j == rules_to_fail) {
+ strbuf_release(&resolved_buf);
return short_name;
+ }
}
+ strbuf_release(&resolved_buf);
free(short_name);
return xstrdup(refname);
}
return NULL;
}
-int rename_ref_available(const char *old_refname, const char *new_refname)
+int refs_rename_ref_available(struct ref_store *refs,
+ const char *old_refname,
+ const char *new_refname)
{
struct string_list skip = STRING_LIST_INIT_NODUP;
struct strbuf err = STRBUF_INIT;
int ok;
string_list_insert(&skip, old_refname);
- ok = !verify_refname_available(new_refname, NULL, &skip, &err);
+ ok = !refs_verify_refname_available(refs, new_refname,
+ NULL, &skip, &err);
if (!ok)
error("%s", err.buf);
* non-zero value, stop the iteration and return that value;
* otherwise, return 0.
*/
-static int do_for_each_ref(const char *submodule, const char *prefix,
+static int do_for_each_ref(struct ref_store *refs, const char *prefix,
each_ref_fn fn, int trim, int flags, void *cb_data)
{
- struct ref_store *refs = get_ref_store(submodule);
struct ref_iterator *iter;
if (!refs)
return do_for_each_ref_iterator(iter, fn, cb_data);
}
+int refs_for_each_ref(struct ref_store *refs, each_ref_fn fn, void *cb_data)
+{
+ return do_for_each_ref(refs, "", fn, 0, 0, cb_data);
+}
+
int for_each_ref(each_ref_fn fn, void *cb_data)
{
- return do_for_each_ref(NULL, "", fn, 0, 0, cb_data);
+ return refs_for_each_ref(get_main_ref_store(), fn, cb_data);
}
int for_each_ref_submodule(const char *submodule, each_ref_fn fn, void *cb_data)
{
- return do_for_each_ref(submodule, "", fn, 0, 0, cb_data);
+ return refs_for_each_ref(get_submodule_ref_store(submodule), fn, cb_data);
+}
+
+int refs_for_each_ref_in(struct ref_store *refs, const char *prefix,
+ each_ref_fn fn, void *cb_data)
+{
+ return do_for_each_ref(refs, prefix, fn, strlen(prefix), 0, cb_data);
}
int for_each_ref_in(const char *prefix, each_ref_fn fn, void *cb_data)
{
- return do_for_each_ref(NULL, prefix, fn, strlen(prefix), 0, cb_data);
+ return refs_for_each_ref_in(get_main_ref_store(), prefix, fn, cb_data);
}
int for_each_fullref_in(const char *prefix, each_ref_fn fn, void *cb_data, unsigned int broken)
if (broken)
flag = DO_FOR_EACH_INCLUDE_BROKEN;
- return do_for_each_ref(NULL, prefix, fn, 0, flag, cb_data);
+ return do_for_each_ref(get_main_ref_store(),
+ prefix, fn, 0, flag, cb_data);
}
int for_each_ref_in_submodule(const char *submodule, const char *prefix,
- each_ref_fn fn, void *cb_data)
+ each_ref_fn fn, void *cb_data)
{
- return do_for_each_ref(submodule, prefix, fn, strlen(prefix), 0, cb_data);
+ return refs_for_each_ref_in(get_submodule_ref_store(submodule),
+ prefix, fn, cb_data);
}
int for_each_replace_ref(each_ref_fn fn, void *cb_data)
{
- return do_for_each_ref(NULL, git_replace_ref_base, fn,
- strlen(git_replace_ref_base), 0, cb_data);
+ return do_for_each_ref(get_main_ref_store(),
+ git_replace_ref_base, fn,
+ strlen(git_replace_ref_base),
+ 0, cb_data);
}
int for_each_namespaced_ref(each_ref_fn fn, void *cb_data)
struct strbuf buf = STRBUF_INIT;
int ret;
strbuf_addf(&buf, "%srefs/", get_git_namespace());
- ret = do_for_each_ref(NULL, buf.buf, fn, 0, 0, cb_data);
+ ret = do_for_each_ref(get_main_ref_store(),
+ buf.buf, fn, 0, 0, cb_data);
strbuf_release(&buf);
return ret;
}
-int for_each_rawref(each_ref_fn fn, void *cb_data)
+int refs_for_each_rawref(struct ref_store *refs, each_ref_fn fn, void *cb_data)
{
- return do_for_each_ref(NULL, "", fn, 0,
+ return do_for_each_ref(refs, "", fn, 0,
DO_FOR_EACH_INCLUDE_BROKEN, cb_data);
}
+int for_each_rawref(each_ref_fn fn, void *cb_data)
+{
+ return refs_for_each_rawref(get_main_ref_store(), fn, cb_data);
+}
+
/* This function needs to return a meaningful errno on failure */
-const char *resolve_ref_recursively(struct ref_store *refs,
+const char *refs_resolve_ref_unsafe(struct ref_store *refs,
const char *refname,
int resolve_flags,
unsigned char *sha1, int *flags)
/* backend functions */
int refs_init_db(struct strbuf *err)
{
- struct ref_store *refs = get_ref_store(NULL);
+ struct ref_store *refs = get_main_ref_store();
return refs->be->init_db(refs, err);
}
const char *resolve_ref_unsafe(const char *refname, int resolve_flags,
unsigned char *sha1, int *flags)
{
- return resolve_ref_recursively(get_ref_store(NULL), refname,
+ return refs_resolve_ref_unsafe(get_main_ref_store(), refname,
resolve_flags, sha1, flags);
}
/* We need to strip off one or more trailing slashes */
char *stripped = xmemdupz(submodule, len);
- refs = get_ref_store(stripped);
+ refs = get_submodule_ref_store(stripped);
free(stripped);
} else {
- refs = get_ref_store(submodule);
+ refs = get_submodule_ref_store(submodule);
}
if (!refs)
return -1;
- if (!resolve_ref_recursively(refs, refname, 0, sha1, &flags) ||
+ if (!refs_resolve_ref_unsafe(refs, refname, 0, sha1, &flags) ||
is_null_sha1(sha1))
return -1;
return 0;
static struct hashmap submodule_ref_stores;
/*
- * Return the ref_store instance for the specified submodule (or the
- * main repository if submodule is NULL). If that ref_store hasn't
- * been initialized yet, return NULL.
+ * Return the ref_store instance for the specified submodule. If that
+ * ref_store hasn't been initialized yet, return NULL.
*/
-static struct ref_store *lookup_ref_store(const char *submodule)
+static struct ref_store *lookup_submodule_ref_store(const char *submodule)
{
struct submodule_hash_entry *entry;
- if (!submodule)
- return main_ref_store;
-
if (!submodule_ref_stores.tablesize)
/* It's initialized on demand in register_ref_store(). */
return NULL;
return entry ? entry->refs : NULL;
}
-/*
- * Register the specified ref_store to be the one that should be used
- * for submodule (or the main repository if submodule is NULL). It is
- * a fatal error to call this function twice for the same submodule.
- */
-static void register_ref_store(struct ref_store *refs, const char *submodule)
-{
- if (!submodule) {
- if (main_ref_store)
- die("BUG: main_ref_store initialized twice");
-
- main_ref_store = refs;
- } else {
- if (!submodule_ref_stores.tablesize)
- hashmap_init(&submodule_ref_stores, submodule_hash_cmp, 0);
-
- if (hashmap_put(&submodule_ref_stores,
- alloc_submodule_hash_entry(submodule, refs)))
- die("BUG: ref_store for submodule '%s' initialized twice",
- submodule);
- }
-}
-
/*
* Create, record, and return a ref_store instance for the specified
- * submodule (or the main repository if submodule is NULL).
+ * gitdir.
*/
-static struct ref_store *ref_store_init(const char *submodule)
+static struct ref_store *ref_store_init(const char *gitdir,
+ unsigned int flags)
{
const char *be_name = "files";
struct ref_storage_be *be = find_ref_storage_backend(be_name);
if (!be)
die("BUG: reference backend %s is unknown", be_name);
- refs = be->init(submodule);
- register_ref_store(refs, submodule);
+ refs = be->init(gitdir, flags);
return refs;
}
-struct ref_store *get_ref_store(const char *submodule)
+struct ref_store *get_main_ref_store(void)
+{
+ if (main_ref_store)
+ return main_ref_store;
+
+ main_ref_store = ref_store_init(get_git_dir(),
+ (REF_STORE_READ |
+ REF_STORE_WRITE |
+ REF_STORE_ODB |
+ REF_STORE_MAIN));
+ return main_ref_store;
+}
+
+/*
+ * Register the specified ref_store to be the one that should be used
+ * for submodule. It is a fatal error to call this function twice for
+ * the same submodule.
+ */
+static void register_submodule_ref_store(struct ref_store *refs,
+ const char *submodule)
{
+ if (!submodule_ref_stores.tablesize)
+ hashmap_init(&submodule_ref_stores, submodule_hash_cmp, 0);
+
+ if (hashmap_put(&submodule_ref_stores,
+ alloc_submodule_hash_entry(submodule, refs)))
+ die("BUG: ref_store for submodule '%s' initialized twice",
+ submodule);
+}
+
+struct ref_store *get_submodule_ref_store(const char *submodule)
+{
+ struct strbuf submodule_sb = STRBUF_INIT;
struct ref_store *refs;
+ int ret;
if (!submodule || !*submodule) {
- refs = lookup_ref_store(NULL);
+ /*
+ * FIXME: This case is ideally not allowed. But that
+ * can't happen until we clean up all the callers.
+ */
+ return get_main_ref_store();
+ }
- if (!refs)
- refs = ref_store_init(NULL);
- } else {
- refs = lookup_ref_store(submodule);
+ refs = lookup_submodule_ref_store(submodule);
+ if (refs)
+ return refs;
- if (!refs) {
- struct strbuf submodule_sb = STRBUF_INIT;
+ strbuf_addstr(&submodule_sb, submodule);
+ ret = is_nonbare_repository_dir(&submodule_sb);
+ strbuf_release(&submodule_sb);
+ if (!ret)
+ return NULL;
- strbuf_addstr(&submodule_sb, submodule);
- if (is_nonbare_repository_dir(&submodule_sb))
- refs = ref_store_init(submodule);
- strbuf_release(&submodule_sb);
- }
+ ret = submodule_to_gitdir(&submodule_sb, submodule);
+ if (ret) {
+ strbuf_release(&submodule_sb);
+ return NULL;
}
+ /* assume that add_submodule_odb() has been called */
+ refs = ref_store_init(submodule_sb.buf,
+ REF_STORE_READ | REF_STORE_ODB);
+ register_submodule_ref_store(refs, submodule);
+
+ strbuf_release(&submodule_sb);
return refs;
}
}
/* backend functions */
-int pack_refs(unsigned int flags)
+int refs_pack_refs(struct ref_store *refs, unsigned int flags)
{
- struct ref_store *refs = get_ref_store(NULL);
-
return refs->be->pack_refs(refs, flags);
}
+int refs_peel_ref(struct ref_store *refs, const char *refname,
+ unsigned char *sha1)
+{
+ return refs->be->peel_ref(refs, refname, sha1);
+}
+
int peel_ref(const char *refname, unsigned char *sha1)
{
- struct ref_store *refs = get_ref_store(NULL);
+ return refs_peel_ref(get_main_ref_store(), refname, sha1);
+}
- return refs->be->peel_ref(refs, refname, sha1);
+int refs_create_symref(struct ref_store *refs,
+ const char *ref_target,
+ const char *refs_heads_master,
+ const char *logmsg)
+{
+ return refs->be->create_symref(refs, ref_target,
+ refs_heads_master,
+ logmsg);
}
int create_symref(const char *ref_target, const char *refs_heads_master,
const char *logmsg)
{
- struct ref_store *refs = get_ref_store(NULL);
-
- return refs->be->create_symref(refs, ref_target, refs_heads_master,
- logmsg);
+ return refs_create_symref(get_main_ref_store(), ref_target,
+ refs_heads_master, logmsg);
}
int ref_transaction_commit(struct ref_transaction *transaction,
struct strbuf *err)
{
- struct ref_store *refs = get_ref_store(NULL);
+ struct ref_store *refs = transaction->ref_store;
return refs->be->transaction_commit(refs, transaction, err);
}
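A minimal sketch of the caller-visible effect (assumed caller code, not a hunk from this series; the ref name and message are illustrative): a transaction begun with ref_store_transaction_begin() remembers its store, so the commit step no longer reaches for the main ref store behind the caller's back.

	unsigned char new_sha1[20];
	struct strbuf err = STRBUF_INIT;
	struct ref_transaction *t;

	if (get_sha1("HEAD", new_sha1))
		die("no HEAD?");
	t = ref_store_transaction_begin(get_main_ref_store(), &err);
	if (!t ||
	    ref_transaction_update(t, "refs/heads/topic", new_sha1, NULL,
				   0, "example update", &err) ||
	    ref_transaction_commit(t, &err))
		error("%s", err.buf);
	ref_transaction_free(t);
	strbuf_release(&err);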
-int verify_refname_available(const char *refname,
- const struct string_list *extra,
- const struct string_list *skip,
- struct strbuf *err)
+int refs_verify_refname_available(struct ref_store *refs,
+ const char *refname,
+ const struct string_list *extra,
+ const struct string_list *skip,
+ struct strbuf *err)
{
- struct ref_store *refs = get_ref_store(NULL);
-
return refs->be->verify_refname_available(refs, refname, extra, skip, err);
}
-int for_each_reflog(each_ref_fn fn, void *cb_data)
+int refs_for_each_reflog(struct ref_store *refs, each_ref_fn fn, void *cb_data)
{
- struct ref_store *refs = get_ref_store(NULL);
struct ref_iterator *iter;
iter = refs->be->reflog_iterator_begin(refs);
return do_for_each_ref_iterator(iter, fn, cb_data);
}
-int for_each_reflog_ent_reverse(const char *refname, each_reflog_ent_fn fn,
- void *cb_data)
+int for_each_reflog(each_ref_fn fn, void *cb_data)
{
- struct ref_store *refs = get_ref_store(NULL);
+ return refs_for_each_reflog(get_main_ref_store(), fn, cb_data);
+}
+int refs_for_each_reflog_ent_reverse(struct ref_store *refs,
+ const char *refname,
+ each_reflog_ent_fn fn,
+ void *cb_data)
+{
return refs->be->for_each_reflog_ent_reverse(refs, refname,
fn, cb_data);
}
+int for_each_reflog_ent_reverse(const char *refname, each_reflog_ent_fn fn,
+ void *cb_data)
+{
+ return refs_for_each_reflog_ent_reverse(get_main_ref_store(),
+ refname, fn, cb_data);
+}
+
+int refs_for_each_reflog_ent(struct ref_store *refs, const char *refname,
+ each_reflog_ent_fn fn, void *cb_data)
+{
+ return refs->be->for_each_reflog_ent(refs, refname, fn, cb_data);
+}
+
int for_each_reflog_ent(const char *refname, each_reflog_ent_fn fn,
void *cb_data)
{
- struct ref_store *refs = get_ref_store(NULL);
+ return refs_for_each_reflog_ent(get_main_ref_store(), refname,
+ fn, cb_data);
+}
- return refs->be->for_each_reflog_ent(refs, refname, fn, cb_data);
+int refs_reflog_exists(struct ref_store *refs, const char *refname)
+{
+ return refs->be->reflog_exists(refs, refname);
}
int reflog_exists(const char *refname)
{
- struct ref_store *refs = get_ref_store(NULL);
+ return refs_reflog_exists(get_main_ref_store(), refname);
+}
- return refs->be->reflog_exists(refs, refname);
+int refs_create_reflog(struct ref_store *refs, const char *refname,
+ int force_create, struct strbuf *err)
+{
+ return refs->be->create_reflog(refs, refname, force_create, err);
}
int safe_create_reflog(const char *refname, int force_create,
struct strbuf *err)
{
- struct ref_store *refs = get_ref_store(NULL);
+ return refs_create_reflog(get_main_ref_store(), refname,
+ force_create, err);
+}
- return refs->be->create_reflog(refs, refname, force_create, err);
+int refs_delete_reflog(struct ref_store *refs, const char *refname)
+{
+ return refs->be->delete_reflog(refs, refname);
}
int delete_reflog(const char *refname)
{
- struct ref_store *refs = get_ref_store(NULL);
+ return refs_delete_reflog(get_main_ref_store(), refname);
+}
- return refs->be->delete_reflog(refs, refname);
+int refs_reflog_expire(struct ref_store *refs,
+ const char *refname, const unsigned char *sha1,
+ unsigned int flags,
+ reflog_expiry_prepare_fn prepare_fn,
+ reflog_expiry_should_prune_fn should_prune_fn,
+ reflog_expiry_cleanup_fn cleanup_fn,
+ void *policy_cb_data)
+{
+ return refs->be->reflog_expire(refs, refname, sha1, flags,
+ prepare_fn, should_prune_fn,
+ cleanup_fn, policy_cb_data);
}
int reflog_expire(const char *refname, const unsigned char *sha1,
reflog_expiry_cleanup_fn cleanup_fn,
void *policy_cb_data)
{
- struct ref_store *refs = get_ref_store(NULL);
-
- return refs->be->reflog_expire(refs, refname, sha1, flags,
- prepare_fn, should_prune_fn,
- cleanup_fn, policy_cb_data);
+ return refs_reflog_expire(get_main_ref_store(),
+ refname, sha1, flags,
+ prepare_fn, should_prune_fn,
+ cleanup_fn, policy_cb_data);
}
int initial_ref_transaction_commit(struct ref_transaction *transaction,
struct strbuf *err)
{
- struct ref_store *refs = get_ref_store(NULL);
+ struct ref_store *refs = transaction->ref_store;
return refs->be->initial_transaction_commit(refs, transaction, err);
}
-int delete_refs(struct string_list *refnames, unsigned int flags)
+int refs_delete_refs(struct ref_store *refs, struct string_list *refnames,
+ unsigned int flags)
{
- struct ref_store *refs = get_ref_store(NULL);
-
return refs->be->delete_refs(refs, refnames, flags);
}
-int rename_ref(const char *oldref, const char *newref, const char *logmsg)
+int delete_refs(struct string_list *refnames, unsigned int flags)
{
- struct ref_store *refs = get_ref_store(NULL);
+ return refs_delete_refs(get_main_ref_store(), refnames, flags);
+}
+int refs_rename_ref(struct ref_store *refs, const char *oldref,
+ const char *newref, const char *logmsg)
+{
return refs->be->rename_ref(refs, oldref, newref, logmsg);
}
+
+int rename_ref(const char *oldref, const char *newref, const char *logmsg)
+{
+ return refs_rename_ref(get_main_ref_store(), oldref, newref, logmsg);
+}
#ifndef REFS_H
#define REFS_H
+struct object_id;
+struct ref_store;
+struct strbuf;
+struct string_list;
+
/*
 * Resolve a reference, recursively following symbolic references.
*
#define RESOLVE_REF_NO_RECURSE 0x02
#define RESOLVE_REF_ALLOW_BAD_NAME 0x04
+const char *refs_resolve_ref_unsafe(struct ref_store *refs,
+ const char *refname,
+ int resolve_flags,
+ unsigned char *sha1,
+ int *flags);
const char *resolve_ref_unsafe(const char *refname, int resolve_flags,
unsigned char *sha1, int *flags);
+char *refs_resolve_refdup(struct ref_store *refs,
+ const char *refname, int resolve_flags,
+ unsigned char *sha1, int *flags);
char *resolve_refdup(const char *refname, int resolve_flags,
unsigned char *sha1, int *flags);
+int refs_read_ref_full(struct ref_store *refs, const char *refname,
+ int resolve_flags, unsigned char *sha1, int *flags);
int read_ref_full(const char *refname, int resolve_flags,
unsigned char *sha1, int *flags);
int read_ref(const char *refname, unsigned char *sha1);
+/*
+ * Return 0 if a reference named refname could be created without
+ * conflicting with the name of an existing reference. Otherwise,
+ * return a negative value and write an explanation to err. If extras
+ * is non-NULL, it is a list of additional refnames with which refname
+ * is not allowed to conflict. If skip is non-NULL, ignore potential
+ * conflicts with refs in skip (e.g., because they are scheduled for
+ * deletion in the same operation). Behavior is undefined if the same
+ * name is listed in both extras and skip.
+ *
+ * Two reference names conflict if one of them exactly matches the
+ * leading components of the other; e.g., "foo/bar" conflicts with
+ * both "foo" and with "foo/bar/baz" but not with "foo/bar" or
+ * "foo/barbados".
+ *
+ * extras and skip must be sorted.
+ */
+
+int refs_verify_refname_available(struct ref_store *refs,
+ const char *refname,
+ const struct string_list *extra,
+ const struct string_list *skip,
+ struct strbuf *err);
+
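For example (a sketch under the rules stated above, not part of the patch), a caller could check availability against the main store before creating a new branch name:

	struct strbuf err = STRBUF_INIT;

	if (refs_verify_refname_available(get_main_ref_store(),
					  "refs/heads/topic", NULL, NULL, &err))
		error("%s", err.buf);	/* e.g. a D/F conflict with "refs/heads/topic/1" */
	strbuf_release(&err);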
int ref_exists(const char *refname);
int should_autocreate_reflog(const char *refname);
* Symbolic references are considered unpeelable, even if they
* ultimately resolve to a peelable tag.
*/
+int refs_peel_ref(struct ref_store *refs, const char *refname,
+ unsigned char *sha1);
int peel_ref(const char *refname, unsigned char *sha1);
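A small usage sketch (the tag name is assumed): peeling through an explicit store mirrors the classic peel_ref() call.

	unsigned char peeled[20];

	if (!refs_peel_ref(get_main_ref_store(), "refs/tags/v1.0", peeled))
		printf("%s\n", sha1_to_hex(peeled));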
/**
* it is not safe to modify references while an iteration is in
* progress, unless the same callback function invocation that
* modifies the reference also returns a nonzero value to immediately
- * stop the iteration.
+ * stop the iteration. Returned references are sorted.
*/
+int refs_for_each_ref(struct ref_store *refs,
+ each_ref_fn fn, void *cb_data);
+int refs_for_each_ref_in(struct ref_store *refs, const char *prefix,
+ each_ref_fn fn, void *cb_data);
+int refs_for_each_tag_ref(struct ref_store *refs,
+ each_ref_fn fn, void *cb_data);
+int refs_for_each_branch_ref(struct ref_store *refs,
+ each_ref_fn fn, void *cb_data);
+int refs_for_each_remote_ref(struct ref_store *refs,
+ each_ref_fn fn, void *cb_data);
+
int head_ref(each_ref_fn fn, void *cb_data);
int for_each_ref(each_ref_fn fn, void *cb_data);
int for_each_ref_in(const char *prefix, each_ref_fn fn, void *cb_data);
int for_each_namespaced_ref(each_ref_fn fn, void *cb_data);
/* can be used to learn about broken ref and symref */
+int refs_for_each_rawref(struct ref_store *refs, each_ref_fn fn, void *cb_data);
int for_each_rawref(each_ref_fn fn, void *cb_data);
static inline const char *has_glob_specials(const char *pattern)
* Write a packed-refs file for the current repository.
* flags: Combination of the above PACK_REFS_* flags.
*/
-int pack_refs(unsigned int flags);
+int refs_pack_refs(struct ref_store *refs, unsigned int flags);
/*
* Flags controlling ref_transaction_update(), ref_transaction_create(), etc.
/*
* Setup reflog before using. Fill in err and return -1 on failure.
*/
+int refs_create_reflog(struct ref_store *refs, const char *refname,
+ int force_create, struct strbuf *err);
int safe_create_reflog(const char *refname, int force_create, struct strbuf *err);
/** Reads log for the value of ref during at_time. **/
unsigned long *cutoff_time, int *cutoff_tz, int *cutoff_cnt);
/** Check if a particular reflog exists */
+int refs_reflog_exists(struct ref_store *refs, const char *refname);
int reflog_exists(const char *refname);
/*
* exists, regardless of its old value. It is an error for old_sha1 to
* be NULL_SHA1. flags is passed through to ref_transaction_delete().
*/
+int refs_delete_ref(struct ref_store *refs, const char *msg,
+ const char *refname,
+ const unsigned char *old_sha1,
+ unsigned int flags);
int delete_ref(const char *msg, const char *refname,
const unsigned char *old_sha1, unsigned int flags);
* an all-or-nothing transaction). flags is passed through to
* ref_transaction_delete().
*/
+int refs_delete_refs(struct ref_store *refs, struct string_list *refnames,
+ unsigned int flags);
int delete_refs(struct string_list *refnames, unsigned int flags);
/** Delete a reflog */
+int refs_delete_reflog(struct ref_store *refs, const char *refname);
int delete_reflog(const char *refname);
/* iterate over reflog entries */
const char *committer, unsigned long timestamp,
int tz, const char *msg, void *cb_data);
+int refs_for_each_reflog_ent(struct ref_store *refs, const char *refname,
+ each_reflog_ent_fn fn, void *cb_data);
+int refs_for_each_reflog_ent_reverse(struct ref_store *refs,
+ const char *refname,
+ each_reflog_ent_fn fn,
+ void *cb_data);
int for_each_reflog_ent(const char *refname, each_reflog_ent_fn fn, void *cb_data);
int for_each_reflog_ent_reverse(const char *refname, each_reflog_ent_fn fn, void *cb_data);
/*
* Calls the specified function for each reflog file until it returns nonzero,
- * and returns the value
+ * and returns the value. Reflog file order is unspecified.
*/
+int refs_for_each_reflog(struct ref_store *refs, each_ref_fn fn, void *cb_data);
int for_each_reflog(each_ref_fn fn, void *cb_data);
#define REFNAME_ALLOW_ONELEVEL 1
char *shorten_unambiguous_ref(const char *refname, int strict);
/** rename ref, return 0 on success **/
+int refs_rename_ref(struct ref_store *refs, const char *oldref,
+ const char *newref, const char *logmsg);
int rename_ref(const char *oldref, const char *newref, const char *logmsg);
+int refs_create_symref(struct ref_store *refs, const char *refname,
+ const char *target, const char *logmsg);
int create_symref(const char *refname, const char *target, const char *logmsg);
/*
* Begin a reference transaction. The reference transaction must
* be freed by calling ref_transaction_free().
*/
+struct ref_transaction *ref_store_transaction_begin(struct ref_store *refs,
+ struct strbuf *err);
struct ref_transaction *ref_transaction_begin(struct strbuf *err);
/*
* ref_transaction_update(). Handle errors as requested by the `onerr`
* argument.
*/
+int refs_update_ref(struct ref_store *refs, const char *msg, const char *refname,
+ const unsigned char *new_sha1, const unsigned char *old_sha1,
+ unsigned int flags, enum action_on_err onerr);
int update_ref(const char *msg, const char *refname,
const unsigned char *new_sha1, const unsigned char *old_sha1,
unsigned int flags, enum action_on_err onerr);
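Hedged sketch of the convenience wrapper (caller code assumed): refs_update_ref() behaves like update_ref(), but against an explicitly chosen store.

	unsigned char sha1[20];

	if (!get_sha1("HEAD", sha1))
		refs_update_ref(get_main_ref_store(), "example: save HEAD",
				"ORIG_HEAD", sha1, NULL, 0,
				UPDATE_REFS_DIE_ON_ERR);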
* enum expire_reflog_flags. The three function pointers are described
* above. On success, return zero.
*/
+int refs_reflog_expire(struct ref_store *refs,
+ const char *refname,
+ const unsigned char *sha1,
+ unsigned int flags,
+ reflog_expiry_prepare_fn prepare_fn,
+ reflog_expiry_should_prune_fn should_prune_fn,
+ reflog_expiry_cleanup_fn cleanup_fn,
+ void *policy_cb_data);
int reflog_expire(const char *refname, const unsigned char *sha1,
unsigned int flags,
reflog_expiry_prepare_fn prepare_fn,
int ref_storage_backend_exists(const char *name);
+struct ref_store *get_main_ref_store(void);
+/*
+ * Return the ref_store instance for the specified submodule. For the
+ * main repository, use submodule==NULL; such a call cannot fail. For
+ * a submodule, the submodule must exist and be a nonbare repository,
+ * otherwise return NULL. If the requested reference store has not yet
+ * been initialized, initialize it first.
+ *
+ * For backwards compatibility, submodule=="" is treated the same as
+ * submodule==NULL.
+ */
+struct ref_store *get_submodule_ref_store(const char *submodule);
+
#endif /* REFS_H */
const char *dirname, size_t len,
int incomplete);
static void add_entry_to_dir(struct ref_dir *dir, struct ref_entry *entry);
+static int files_log_ref_write(struct files_ref_store *refs,
+ const char *refname, const unsigned char *old_sha1,
+ const unsigned char *new_sha1, const char *msg,
+ int flags, struct strbuf *err);
static struct ref_dir *get_ref_dir(struct ref_entry *entry)
{
*/
struct files_ref_store {
struct ref_store base;
+ unsigned int store_flags;
- /*
- * The name of the submodule represented by this object, or
- * NULL if it represents the main repository's reference
- * store:
- */
- const char *submodule;
+ char *gitdir;
+ char *gitcommondir;
+ char *packed_refs_path;
struct ref_entry *loose;
struct packed_ref_cache *packed;
* Create a new submodule ref cache and add it to the internal
* set of caches.
*/
-static struct ref_store *files_ref_store_create(const char *submodule)
+static struct ref_store *files_ref_store_create(const char *gitdir,
+ unsigned int flags)
{
struct files_ref_store *refs = xcalloc(1, sizeof(*refs));
struct ref_store *ref_store = (struct ref_store *)refs;
+ struct strbuf sb = STRBUF_INIT;
base_ref_store_init(ref_store, &refs_be_files);
+ refs->store_flags = flags;
- refs->submodule = xstrdup_or_null(submodule);
+ refs->gitdir = xstrdup(gitdir);
+ get_common_dir_noenv(&sb, gitdir);
+ refs->gitcommondir = strbuf_detach(&sb, NULL);
+ strbuf_addf(&sb, "%s/packed-refs", refs->gitcommondir);
+ refs->packed_refs_path = strbuf_detach(&sb, NULL);
return ref_store;
}
/*
- * Die if refs is for a submodule (i.e., not for the main repository).
- * caller is used in any necessary error messages.
+ * Die if refs is not the main ref store. caller is used in any
+ * necessary error messages.
*/
static void files_assert_main_repository(struct files_ref_store *refs,
const char *caller)
{
- if (refs->submodule)
- die("BUG: %s called for a submodule", caller);
+ if (refs->store_flags & REF_STORE_MAIN)
+ return;
+
+ die("BUG: operation %s only allowed for main ref store", caller);
}
/*
* Downcast ref_store to files_ref_store. Die if ref_store is not a
- * files_ref_store. If submodule_allowed is not true, then also die if
- * files_ref_store is for a submodule (i.e., not for the main
- * repository). caller is used in any necessary error messages.
+ * files_ref_store. required_flags is compared with ref_store's
+ * store_flags to ensure the ref_store has all required capabilities.
+ * "caller" is used in any necessary error messages.
*/
-static struct files_ref_store *files_downcast(
- struct ref_store *ref_store, int submodule_allowed,
- const char *caller)
+static struct files_ref_store *files_downcast(struct ref_store *ref_store,
+ unsigned int required_flags,
+ const char *caller)
{
struct files_ref_store *refs;
refs = (struct files_ref_store *)ref_store;
- if (!submodule_allowed)
- files_assert_main_repository(refs, caller);
+ if ((refs->store_flags & required_flags) != required_flags)
+ die("BUG: operation %s requires abilities 0x%x, but only have 0x%x",
+ caller, required_flags, refs->store_flags);
return refs;
}
strbuf_release(&line);
}
+static const char *files_packed_refs_path(struct files_ref_store *refs)
+{
+ return refs->packed_refs_path;
+}
+
+static void files_reflog_path(struct files_ref_store *refs,
+ struct strbuf *sb,
+ const char *refname)
+{
+ if (!refname) {
+ /*
+ * FIXME: of course this is wrong in multi worktree
+ * setting. To be fixed real soon.
+ */
+ strbuf_addf(sb, "%s/logs", refs->gitcommondir);
+ return;
+ }
+
+ switch (ref_type(refname)) {
+ case REF_TYPE_PER_WORKTREE:
+ case REF_TYPE_PSEUDOREF:
+ strbuf_addf(sb, "%s/logs/%s", refs->gitdir, refname);
+ break;
+ case REF_TYPE_NORMAL:
+ strbuf_addf(sb, "%s/logs/%s", refs->gitcommondir, refname);
+ break;
+ default:
+ die("BUG: unknown ref type %d of ref %s",
+ ref_type(refname), refname);
+ }
+}
+
+static void files_ref_path(struct files_ref_store *refs,
+ struct strbuf *sb,
+ const char *refname)
+{
+ switch (ref_type(refname)) {
+ case REF_TYPE_PER_WORKTREE:
+ case REF_TYPE_PSEUDOREF:
+ strbuf_addf(sb, "%s/%s", refs->gitdir, refname);
+ break;
+ case REF_TYPE_NORMAL:
+ strbuf_addf(sb, "%s/%s", refs->gitcommondir, refname);
+ break;
+ default:
+ die("BUG: unknown ref type %d of ref %s",
+ ref_type(refname), refname);
+ }
+}
+
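To make the worktree split concrete, a rough illustration of where the two helpers point (paths assumed; $GIT_DIR is the per-worktree directory, $GIT_COMMON_DIR the shared one):

/*
 * files_ref_path(refs, &sb, "HEAD")
 *	-> "$GIT_DIR/HEAD"			(per-worktree)
 * files_ref_path(refs, &sb, "refs/heads/master")
 *	-> "$GIT_COMMON_DIR/refs/heads/master"	(shared)
 * files_reflog_path(refs, &sb, "refs/heads/master")
 *	-> "$GIT_COMMON_DIR/logs/refs/heads/master"
 * files_reflog_path(refs, &sb, "refs/bisect/bad")
 *	-> "$GIT_DIR/logs/refs/bisect/bad"	(per-worktree ref)
 */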
/*
* Get the packed_ref_cache for the specified files_ref_store,
* creating it if necessary.
*/
static struct packed_ref_cache *get_packed_ref_cache(struct files_ref_store *refs)
{
- char *packed_refs_file;
-
- if (refs->submodule)
- packed_refs_file = git_pathdup_submodule(refs->submodule,
- "packed-refs");
- else
- packed_refs_file = git_pathdup("packed-refs");
+ const char *packed_refs_file = files_packed_refs_path(refs);
if (refs->packed &&
!stat_validity_check(&refs->packed->validity, packed_refs_file))
fclose(f);
}
}
- free(packed_refs_file);
return refs->packed;
}
struct strbuf refname;
struct strbuf path = STRBUF_INIT;
size_t path_baselen;
- int err = 0;
- if (refs->submodule)
- err = strbuf_git_path_submodule(&path, refs->submodule, "%s", dirname);
- else
- strbuf_git_path(&path, "%s", dirname);
+ files_ref_path(refs, &path, dirname);
path_baselen = path.len;
- if (err) {
- strbuf_release(&path);
- return;
- }
-
d = opendir(path.buf);
if (!d) {
strbuf_release(&path);
create_dir_entry(refs, refname.buf,
refname.len, 1));
} else {
- if (!resolve_ref_recursively(&refs->base,
+ if (!refs_resolve_ref_unsafe(&refs->base,
refname.buf,
RESOLVE_REF_READING,
sha1, &flag)) {
struct strbuf *referent, unsigned int *type)
{
struct files_ref_store *refs =
- files_downcast(ref_store, 1, "read_raw_ref");
+ files_downcast(ref_store, REF_STORE_READ, "read_raw_ref");
struct strbuf sb_contents = STRBUF_INIT;
struct strbuf sb_path = STRBUF_INIT;
const char *path;
*type = 0;
strbuf_reset(&sb_path);
- if (refs->submodule)
- strbuf_git_path_submodule(&sb_path, refs->submodule, "%s", refname);
- else
- strbuf_git_path(&sb_path, "%s", refname);
+ files_ref_path(refs, &sb_path, refname);
path = sb_path.buf;
*lock_p = lock = xcalloc(1, sizeof(*lock));
lock->ref_name = xstrdup(refname);
- strbuf_git_path(&ref_file, "%s", refname);
+ files_ref_path(refs, &ref_file, refname);
retry:
switch (safe_create_leading_directories(ref_file.buf)) {
* another reference such as "refs/foo". There is no
* reason to expect this error to be transitory.
*/
- if (verify_refname_available(refname, extras, skip, err)) {
+ if (refs_verify_refname_available(&refs->base, refname,
+ extras, skip, err)) {
if (mustexist) {
/*
* To the user the relevant error is
static int files_peel_ref(struct ref_store *ref_store,
const char *refname, unsigned char *sha1)
{
- struct files_ref_store *refs = files_downcast(ref_store, 0, "peel_ref");
+ struct files_ref_store *refs =
+ files_downcast(ref_store, REF_STORE_READ | REF_STORE_ODB,
+ "peel_ref");
int flag;
unsigned char base[20];
return 0;
}
- if (read_ref_full(refname, RESOLVE_REF_READING, base, &flag))
+ if (refs_read_ref_full(ref_store, refname,
+ RESOLVE_REF_READING, base, &flag))
return -1;
/*
struct ref_store *ref_store,
const char *prefix, unsigned int flags)
{
- struct files_ref_store *refs =
- files_downcast(ref_store, 1, "ref_iterator_begin");
+ struct files_ref_store *refs;
struct ref_dir *loose_dir, *packed_dir;
struct ref_iterator *loose_iter, *packed_iter;
struct files_ref_iterator *iter;
struct ref_iterator *ref_iterator;
- if (!refs)
- return empty_ref_iterator_begin();
-
if (ref_paranoia < 0)
ref_paranoia = git_env_bool("GIT_REF_PARANOIA", 0);
if (ref_paranoia)
flags |= DO_FOR_EACH_INCLUDE_BROKEN;
+ refs = files_downcast(ref_store,
+ REF_STORE_READ | (ref_paranoia ? 0 : REF_STORE_ODB),
+ "ref_iterator_begin");
+
iter = xcalloc(1, sizeof(*iter));
ref_iterator = &iter->base;
base_ref_iterator_init(ref_iterator, &files_ref_iterator_vtable);
* on success. On error, write an error message to err, set errno, and
* return a negative value.
*/
-static int verify_lock(struct ref_lock *lock,
+static int verify_lock(struct ref_store *ref_store, struct ref_lock *lock,
const unsigned char *old_sha1, int mustexist,
struct strbuf *err)
{
assert(err);
- if (read_ref_full(lock->ref_name,
- mustexist ? RESOLVE_REF_READING : 0,
- lock->old_oid.hash, NULL)) {
+ if (refs_read_ref_full(ref_store, lock->ref_name,
+ mustexist ? RESOLVE_REF_READING : 0,
+ lock->old_oid.hash, NULL)) {
if (old_sha1) {
int save_errno = errno;
strbuf_addf(err, "can't verify ref '%s'", lock->ref_name);
if (flags & REF_DELETING)
resolve_flags |= RESOLVE_REF_ALLOW_BAD_NAME;
- strbuf_git_path(&ref_file, "%s", refname);
- resolved = !!resolve_ref_unsafe(refname, resolve_flags,
- lock->old_oid.hash, type);
+ files_ref_path(refs, &ref_file, refname);
+ resolved = !!refs_resolve_ref_unsafe(&refs->base,
+ refname, resolve_flags,
+ lock->old_oid.hash, type);
if (!resolved && errno == EISDIR) {
/*
* we are trying to lock foo but we used to
refname);
goto error_return;
}
- resolved = !!resolve_ref_unsafe(refname, resolve_flags,
- lock->old_oid.hash, type);
+ resolved = !!refs_resolve_ref_unsafe(&refs->base,
+ refname, resolve_flags,
+ lock->old_oid.hash, type);
}
if (!resolved) {
last_errno = errno;
goto error_return;
}
- if (verify_lock(lock, old_sha1, mustexist, err)) {
+ if (verify_lock(&refs->base, lock, old_sha1, mustexist, err)) {
last_errno = errno;
goto error_return;
}
}
if (hold_lock_file_for_update_timeout(
- &packlock, git_path("packed-refs"),
+ &packlock, files_packed_refs_path(refs),
flags, timeout_value) < 0)
return -1;
/*
* subdirs. flags is a combination of REMOVE_EMPTY_PARENTS_REF and/or
* REMOVE_EMPTY_PARENTS_REFLOG.
*/
-static void try_remove_empty_parents(const char *refname, unsigned int flags)
+static void try_remove_empty_parents(struct files_ref_store *refs,
+ const char *refname,
+ unsigned int flags)
{
struct strbuf buf = STRBUF_INIT;
+ struct strbuf sb = STRBUF_INIT;
char *p, *q;
int i;
if (q == p)
break;
strbuf_setlen(&buf, q - buf.buf);
- if ((flags & REMOVE_EMPTY_PARENTS_REF) &&
- rmdir(git_path("%s", buf.buf)))
+
+ strbuf_reset(&sb);
+ files_ref_path(refs, &sb, buf.buf);
+ if ((flags & REMOVE_EMPTY_PARENTS_REF) && rmdir(sb.buf))
flags &= ~REMOVE_EMPTY_PARENTS_REF;
- if ((flags & REMOVE_EMPTY_PARENTS_REFLOG) &&
- rmdir(git_path("logs/%s", buf.buf)))
+
+ strbuf_reset(&sb);
+ files_reflog_path(refs, &sb, buf.buf);
+ if ((flags & REMOVE_EMPTY_PARENTS_REFLOG) && rmdir(sb.buf))
flags &= ~REMOVE_EMPTY_PARENTS_REFLOG;
}
strbuf_release(&buf);
+ strbuf_release(&sb);
}
/* make sure nobody touched the ref, and unlink */
-static void prune_ref(struct ref_to_prune *r)
+static void prune_ref(struct files_ref_store *refs, struct ref_to_prune *r)
{
struct ref_transaction *transaction;
struct strbuf err = STRBUF_INIT;
if (check_refname_format(r->name, 0))
return;
- transaction = ref_transaction_begin(&err);
+ transaction = ref_store_transaction_begin(&refs->base, &err);
if (!transaction ||
ref_transaction_delete(transaction, r->name, r->sha1,
REF_ISPRUNING | REF_NODEREF, NULL, &err) ||
strbuf_release(&err);
}
-static void prune_refs(struct ref_to_prune *r)
+static void prune_refs(struct files_ref_store *refs, struct ref_to_prune *r)
{
while (r) {
- prune_ref(r);
+ prune_ref(refs, r);
r = r->next;
}
}
static int files_pack_refs(struct ref_store *ref_store, unsigned int flags)
{
struct files_ref_store *refs =
- files_downcast(ref_store, 0, "pack_refs");
+ files_downcast(ref_store, REF_STORE_WRITE | REF_STORE_ODB,
+ "pack_refs");
struct pack_refs_cb_data cbdata;
memset(&cbdata, 0, sizeof(cbdata));
if (commit_packed_refs(refs))
die_errno("unable to overwrite old ref-pack file");
- prune_refs(cbdata.ref_to_prune);
+ prune_refs(refs, cbdata.ref_to_prune);
return 0;
}
return 0; /* no refname exists in packed refs */
if (lock_packed_refs(refs, 0)) {
- unable_to_lock_message(git_path("packed-refs"), errno, err);
+ unable_to_lock_message(files_packed_refs_path(refs), errno, err);
return -1;
}
packed = get_packed_refs(refs);
struct string_list *refnames, unsigned int flags)
{
struct files_ref_store *refs =
- files_downcast(ref_store, 0, "delete_refs");
+ files_downcast(ref_store, REF_STORE_WRITE, "delete_refs");
struct strbuf err = STRBUF_INIT;
int i, result = 0;
for (i = 0; i < refnames->nr; i++) {
const char *refname = refnames->items[i].string;
- if (delete_ref(NULL, refname, NULL, flags))
+ if (refs_delete_ref(&refs->base, NULL, refname, NULL, flags))
result |= error(_("could not remove reference %s"), refname);
}
* IOW, to avoid cross device rename errors, the temporary renamed log must
* live into logs/refs.
*/
-#define TMP_RENAMED_LOG "logs/refs/.tmp-renamed-log"
+#define TMP_RENAMED_LOG "refs/.tmp-renamed-log"
+
+struct rename_cb {
+ const char *tmp_renamed_log;
+ int true_errno;
+};
-static int rename_tmp_log_callback(const char *path, void *cb)
+static int rename_tmp_log_callback(const char *path, void *cb_data)
{
- int *true_errno = cb;
+ struct rename_cb *cb = cb_data;
- if (rename(git_path(TMP_RENAMED_LOG), path)) {
+ if (rename(cb->tmp_renamed_log, path)) {
/*
* rename(a, b) when b is an existing directory ought
* to result in ISDIR, but Solaris 5.8 gives ENOTDIR.
* but report EISDIR to raceproof_create_file() so
* that it knows to retry.
*/
- *true_errno = errno;
+ cb->true_errno = errno;
if (errno == ENOTDIR)
errno = EISDIR;
return -1;
}
}
-static int rename_tmp_log(const char *newrefname)
+static int rename_tmp_log(struct files_ref_store *refs, const char *newrefname)
{
- char *path = git_pathdup("logs/%s", newrefname);
- int ret, true_errno;
+ struct strbuf path = STRBUF_INIT;
+ struct strbuf tmp = STRBUF_INIT;
+ struct rename_cb cb;
+ int ret;
- ret = raceproof_create_file(path, rename_tmp_log_callback, &true_errno);
+ files_reflog_path(refs, &path, newrefname);
+ files_reflog_path(refs, &tmp, TMP_RENAMED_LOG);
+ cb.tmp_renamed_log = tmp.buf;
+ ret = raceproof_create_file(path.buf, rename_tmp_log_callback, &cb);
if (ret) {
if (errno == EISDIR)
- error("directory not empty: %s", path);
+ error("directory not empty: %s", path.buf);
else
error("unable to move logfile %s to %s: %s",
- git_path(TMP_RENAMED_LOG), path,
- strerror(true_errno));
+ tmp.buf, path.buf,
+ strerror(cb.true_errno));
}
- free(path);
+ strbuf_release(&path);
+ strbuf_release(&tmp);
return ret;
}
struct strbuf *err)
{
struct files_ref_store *refs =
- files_downcast(ref_store, 1, "verify_refname_available");
+ files_downcast(ref_store, REF_STORE_READ, "verify_refname_available");
struct ref_dir *packed_refs = get_packed_refs(refs);
struct ref_dir *loose_refs = get_loose_refs(refs);
const char *logmsg)
{
struct files_ref_store *refs =
- files_downcast(ref_store, 0, "rename_ref");
+ files_downcast(ref_store, REF_STORE_WRITE, "rename_ref");
unsigned char sha1[20], orig_sha1[20];
int flag = 0, logmoved = 0;
struct ref_lock *lock;
struct stat loginfo;
- int log = !lstat(git_path("logs/%s", oldrefname), &loginfo);
+ struct strbuf sb_oldref = STRBUF_INIT;
+ struct strbuf sb_newref = STRBUF_INIT;
+ struct strbuf tmp_renamed_log = STRBUF_INIT;
+ int log, ret;
struct strbuf err = STRBUF_INIT;
- if (log && S_ISLNK(loginfo.st_mode))
- return error("reflog for %s is a symlink", oldrefname);
+ files_reflog_path(refs, &sb_oldref, oldrefname);
+ files_reflog_path(refs, &sb_newref, newrefname);
+ files_reflog_path(refs, &tmp_renamed_log, TMP_RENAMED_LOG);
- if (!resolve_ref_unsafe(oldrefname, RESOLVE_REF_READING | RESOLVE_REF_NO_RECURSE,
- orig_sha1, &flag))
- return error("refname %s not found", oldrefname);
+ log = !lstat(sb_oldref.buf, &loginfo);
+ if (log && S_ISLNK(loginfo.st_mode)) {
+ ret = error("reflog for %s is a symlink", oldrefname);
+ goto out;
+ }
- if (flag & REF_ISSYMREF)
- return error("refname %s is a symbolic ref, renaming it is not supported",
- oldrefname);
- if (!rename_ref_available(oldrefname, newrefname))
- return 1;
+ if (!refs_resolve_ref_unsafe(&refs->base, oldrefname,
+ RESOLVE_REF_READING | RESOLVE_REF_NO_RECURSE,
+ orig_sha1, &flag)) {
+ ret = error("refname %s not found", oldrefname);
+ goto out;
+ }
- if (log && rename(git_path("logs/%s", oldrefname), git_path(TMP_RENAMED_LOG)))
- return error("unable to move logfile logs/%s to "TMP_RENAMED_LOG": %s",
- oldrefname, strerror(errno));
+ if (flag & REF_ISSYMREF) {
+ ret = error("refname %s is a symbolic ref, renaming it is not supported",
+ oldrefname);
+ goto out;
+ }
+ if (!refs_rename_ref_available(&refs->base, oldrefname, newrefname)) {
+ ret = 1;
+ goto out;
+ }
- if (delete_ref(logmsg, oldrefname, orig_sha1, REF_NODEREF)) {
+ if (log && rename(sb_oldref.buf, tmp_renamed_log.buf)) {
+ ret = error("unable to move logfile logs/%s to logs/"TMP_RENAMED_LOG": %s",
+ oldrefname, strerror(errno));
+ goto out;
+ }
+
+ if (refs_delete_ref(&refs->base, logmsg, oldrefname,
+ orig_sha1, REF_NODEREF)) {
error("unable to delete old %s", oldrefname);
goto rollback;
}
* the safety anyway; we want to delete the reference whatever
* its current value.
*/
- if (!read_ref_full(newrefname, RESOLVE_REF_READING | RESOLVE_REF_NO_RECURSE,
- sha1, NULL) &&
- delete_ref(NULL, newrefname, NULL, REF_NODEREF)) {
+ if (!refs_read_ref_full(&refs->base, newrefname,
+ RESOLVE_REF_READING | RESOLVE_REF_NO_RECURSE,
+ sha1, NULL) &&
+ refs_delete_ref(&refs->base, NULL, newrefname,
+ NULL, REF_NODEREF)) {
if (errno == EISDIR) {
struct strbuf path = STRBUF_INIT;
int result;
- strbuf_git_path(&path, "%s", newrefname);
+ files_ref_path(refs, &path, newrefname);
result = remove_empty_directories(&path);
strbuf_release(&path);
}
}
- if (log && rename_tmp_log(newrefname))
+ if (log && rename_tmp_log(refs, newrefname))
goto rollback;
logmoved = log;
goto rollback;
}
- return 0;
+ ret = 0;
+ goto out;
rollback:
lock = lock_ref_sha1_basic(refs, oldrefname, NULL, NULL, NULL,
log_all_ref_updates = flag;
rollbacklog:
- if (logmoved && rename(git_path("logs/%s", newrefname), git_path("logs/%s", oldrefname)))
+ if (logmoved && rename(sb_newref.buf, sb_oldref.buf))
error("unable to restore logfile %s from %s: %s",
oldrefname, newrefname, strerror(errno));
if (!logmoved && log &&
- rename(git_path(TMP_RENAMED_LOG), git_path("logs/%s", oldrefname)))
- error("unable to restore logfile %s from "TMP_RENAMED_LOG": %s",
+ rename(tmp_renamed_log.buf, sb_oldref.buf))
+ error("unable to restore logfile %s from logs/"TMP_RENAMED_LOG": %s",
oldrefname, strerror(errno));
+ ret = 1;
+ out:
+ strbuf_release(&sb_newref);
+ strbuf_release(&sb_oldref);
+ strbuf_release(&tmp_renamed_log);
- return 1;
+ return ret;
}
static int close_ref(struct ref_lock *lock)
* set *logfd to -1. On failure, fill in *err, set *logfd to -1, and
* return -1.
*/
-static int log_ref_setup(const char *refname, int force_create,
+static int log_ref_setup(struct files_ref_store *refs,
+ const char *refname, int force_create,
int *logfd, struct strbuf *err)
{
- char *logfile = git_pathdup("logs/%s", refname);
+ struct strbuf logfile_sb = STRBUF_INIT;
+ char *logfile;
+
+ files_reflog_path(refs, &logfile_sb, refname);
+ logfile = strbuf_detach(&logfile_sb, NULL);
if (force_create || should_autocreate_reflog(refname)) {
if (raceproof_create_file(logfile, open_or_create_logfile, logfd)) {
const char *refname, int force_create,
struct strbuf *err)
{
+ struct files_ref_store *refs =
+ files_downcast(ref_store, REF_STORE_WRITE, "create_reflog");
int fd;
- /* Check validity (but we don't need the result): */
- files_downcast(ref_store, 0, "create_reflog");
-
- if (log_ref_setup(refname, force_create, &fd, err))
+ if (log_ref_setup(refs, refname, force_create, &fd, err))
return -1;
if (fd >= 0)
return 0;
}
-int files_log_ref_write(const char *refname, const unsigned char *old_sha1,
- const unsigned char *new_sha1, const char *msg,
- int flags, struct strbuf *err)
+static int files_log_ref_write(struct files_ref_store *refs,
+ const char *refname, const unsigned char *old_sha1,
+ const unsigned char *new_sha1, const char *msg,
+ int flags, struct strbuf *err)
{
int logfd, result;
if (log_all_ref_updates == LOG_REFS_UNSET)
log_all_ref_updates = is_bare_repository() ? LOG_REFS_NONE : LOG_REFS_NORMAL;
- result = log_ref_setup(refname, flags & REF_FORCE_CREATE_REFLOG,
+ result = log_ref_setup(refs, refname,
+ flags & REF_FORCE_CREATE_REFLOG,
&logfd, err);
if (result)
result = log_ref_write_fd(logfd, old_sha1, new_sha1,
git_committer_info(0), msg);
if (result) {
+ struct strbuf sb = STRBUF_INIT;
int save_errno = errno;
+ files_reflog_path(refs, &sb, refname);
strbuf_addf(err, "unable to append to '%s': %s",
- git_path("logs/%s", refname), strerror(save_errno));
+ sb.buf, strerror(save_errno));
+ strbuf_release(&sb);
close(logfd);
return -1;
}
if (close(logfd)) {
+ struct strbuf sb = STRBUF_INIT;
int save_errno = errno;
+ files_reflog_path(refs, &sb, refname);
strbuf_addf(err, "unable to append to '%s': %s",
- git_path("logs/%s", refname), strerror(save_errno));
+ sb.buf, strerror(save_errno));
+ strbuf_release(&sb);
return -1;
}
return 0;
files_assert_main_repository(refs, "commit_ref_update");
clear_loose_ref_cache(refs);
- if (files_log_ref_write(lock->ref_name, lock->old_oid.hash, sha1,
+ if (files_log_ref_write(refs, lock->ref_name,
+ lock->old_oid.hash, sha1,
logmsg, 0, err)) {
char *old_msg = strbuf_detach(err, NULL);
strbuf_addf(err, "cannot update the ref '%s': %s",
int head_flag;
const char *head_ref;
- head_ref = resolve_ref_unsafe("HEAD", RESOLVE_REF_READING,
- head_sha1, &head_flag);
+ head_ref = refs_resolve_ref_unsafe(&refs->base, "HEAD",
+ RESOLVE_REF_READING,
+ head_sha1, &head_flag);
if (head_ref && (head_flag & REF_ISSYMREF) &&
!strcmp(head_ref, lock->ref_name)) {
struct strbuf log_err = STRBUF_INIT;
- if (files_log_ref_write("HEAD", lock->old_oid.hash, sha1,
- logmsg, 0, &log_err)) {
+ if (files_log_ref_write(refs, "HEAD",
+ lock->old_oid.hash, sha1,
+ logmsg, 0, &log_err)) {
error("%s", log_err.buf);
strbuf_release(&log_err);
}
return ret;
}
-static void update_symref_reflog(struct ref_lock *lock, const char *refname,
+static void update_symref_reflog(struct files_ref_store *refs,
+ struct ref_lock *lock, const char *refname,
const char *target, const char *logmsg)
{
struct strbuf err = STRBUF_INIT;
unsigned char new_sha1[20];
- if (logmsg && !read_ref(target, new_sha1) &&
- files_log_ref_write(refname, lock->old_oid.hash, new_sha1,
- logmsg, 0, &err)) {
+ if (logmsg &&
+ !refs_read_ref_full(&refs->base, target,
+ RESOLVE_REF_READING, new_sha1, NULL) &&
+ files_log_ref_write(refs, refname, lock->old_oid.hash,
+ new_sha1, logmsg, 0, &err)) {
error("%s", err.buf);
strbuf_release(&err);
}
}
-static int create_symref_locked(struct ref_lock *lock, const char *refname,
+static int create_symref_locked(struct files_ref_store *refs,
+ struct ref_lock *lock, const char *refname,
const char *target, const char *logmsg)
{
if (prefer_symlink_refs && !create_ref_symlink(lock, target)) {
- update_symref_reflog(lock, refname, target, logmsg);
+ update_symref_reflog(refs, lock, refname, target, logmsg);
return 0;
}
return error("unable to fdopen %s: %s",
lock->lk->tempfile.filename.buf, strerror(errno));
- update_symref_reflog(lock, refname, target, logmsg);
+ update_symref_reflog(refs, lock, refname, target, logmsg);
/* no error check; commit_ref will check ferror */
fprintf(lock->lk->tempfile.fp, "ref: %s\n", target);
const char *logmsg)
{
struct files_ref_store *refs =
- files_downcast(ref_store, 0, "create_symref");
+ files_downcast(ref_store, REF_STORE_WRITE, "create_symref");
struct strbuf err = STRBUF_INIT;
struct ref_lock *lock;
int ret;
return -1;
}
- ret = create_symref_locked(lock, refname, target, logmsg);
+ ret = create_symref_locked(refs, lock, refname, target, logmsg);
unlock_ref(lock);
return ret;
}
int set_worktree_head_symref(const char *gitdir, const char *target, const char *logmsg)
{
+ /*
+ * FIXME: this obviously will not work well for future refs
+ * backends. This function needs to die.
+ */
+ struct files_ref_store *refs =
+ files_downcast(get_main_ref_store(),
+ REF_STORE_WRITE,
+ "set_head_symref");
+
static struct lock_file head_lock;
struct ref_lock *lock;
struct strbuf head_path = STRBUF_INIT;
lock->lk = &head_lock;
lock->ref_name = xstrdup(head_rel);
- ret = create_symref_locked(lock, head_rel, target, logmsg);
+ ret = create_symref_locked(refs, lock, head_rel, target, logmsg);
unlock_ref(lock); /* will free lock */
strbuf_release(&head_path);
static int files_reflog_exists(struct ref_store *ref_store,
const char *refname)
{
+ struct files_ref_store *refs =
+ files_downcast(ref_store, REF_STORE_READ, "reflog_exists");
+ struct strbuf sb = STRBUF_INIT;
struct stat st;
+ int ret;
- /* Check validity (but we don't need the result): */
- files_downcast(ref_store, 0, "reflog_exists");
-
- return !lstat(git_path("logs/%s", refname), &st) &&
- S_ISREG(st.st_mode);
+ files_reflog_path(refs, &sb, refname);
+ ret = !lstat(sb.buf, &st) && S_ISREG(st.st_mode);
+ strbuf_release(&sb);
+ return ret;
}
static int files_delete_reflog(struct ref_store *ref_store,
const char *refname)
{
- /* Check validity (but we don't need the result): */
- files_downcast(ref_store, 0, "delete_reflog");
+ struct files_ref_store *refs =
+ files_downcast(ref_store, REF_STORE_WRITE, "delete_reflog");
+ struct strbuf sb = STRBUF_INIT;
+ int ret;
- return remove_path(git_path("logs/%s", refname));
+ files_reflog_path(refs, &sb, refname);
+ ret = remove_path(sb.buf);
+ strbuf_release(&sb);
+ return ret;
}
static int show_one_reflog_ent(struct strbuf *sb, each_reflog_ent_fn fn, void *cb_data)
each_reflog_ent_fn fn,
void *cb_data)
{
+ struct files_ref_store *refs =
+ files_downcast(ref_store, REF_STORE_READ,
+ "for_each_reflog_ent_reverse");
struct strbuf sb = STRBUF_INIT;
FILE *logfp;
long pos;
int ret = 0, at_tail = 1;
- /* Check validity (but we don't need the result): */
- files_downcast(ref_store, 0, "for_each_reflog_ent_reverse");
-
- logfp = fopen(git_path("logs/%s", refname), "r");
+ files_reflog_path(refs, &sb, refname);
+ logfp = fopen(sb.buf, "r");
+ strbuf_release(&sb);
if (!logfp)
return -1;
const char *refname,
each_reflog_ent_fn fn, void *cb_data)
{
+ struct files_ref_store *refs =
+ files_downcast(ref_store, REF_STORE_READ,
+ "for_each_reflog_ent");
FILE *logfp;
struct strbuf sb = STRBUF_INIT;
int ret = 0;
- /* Check validity (but we don't need the result): */
- files_downcast(ref_store, 0, "for_each_reflog_ent");
-
- logfp = fopen(git_path("logs/%s", refname), "r");
+ files_reflog_path(refs, &sb, refname);
+ logfp = fopen(sb.buf, "r");
+ strbuf_release(&sb);
if (!logfp)
return -1;
struct files_reflog_iterator {
struct ref_iterator base;
+ struct ref_store *ref_store;
struct dir_iterator *dir_iterator;
struct object_id oid;
};
if (ends_with(diter->basename, ".lock"))
continue;
- if (read_ref_full(diter->relative_path, 0,
- iter->oid.hash, &flags)) {
+ if (refs_read_ref_full(iter->ref_store,
+ diter->relative_path, 0,
+ iter->oid.hash, &flags)) {
error("bad ref for %s", diter->path.buf);
continue;
}
static struct ref_iterator *files_reflog_iterator_begin(struct ref_store *ref_store)
{
+ struct files_ref_store *refs =
+ files_downcast(ref_store, REF_STORE_READ,
+ "reflog_iterator_begin");
struct files_reflog_iterator *iter = xcalloc(1, sizeof(*iter));
struct ref_iterator *ref_iterator = &iter->base;
-
- /* Check validity (but we don't need the result): */
- files_downcast(ref_store, 0, "reflog_iterator_begin");
+ struct strbuf sb = STRBUF_INIT;
base_ref_iterator_init(ref_iterator, &files_reflog_iterator_vtable);
- iter->dir_iterator = dir_iterator_begin(git_path("logs"));
+ files_reflog_path(refs, &sb, NULL);
+ iter->dir_iterator = dir_iterator_begin(sb.buf);
+ iter->ref_store = ref_store;
+ strbuf_release(&sb);
return ref_iterator;
}
* the transaction, so we have to read it here
* to record and possibly check old_sha1:
*/
- if (read_ref_full(referent.buf, 0,
- lock->old_oid.hash, NULL)) {
+ if (refs_read_ref_full(&refs->base,
+ referent.buf, 0,
+ lock->old_oid.hash, NULL)) {
if (update->flags & REF_HAVE_OLD) {
strbuf_addf(err, "cannot lock ref '%s': "
"error reading reference",
struct strbuf *err)
{
struct files_ref_store *refs =
- files_downcast(ref_store, 0, "ref_transaction_commit");
+ files_downcast(ref_store, REF_STORE_WRITE,
+ "ref_transaction_commit");
int ret = 0, i;
struct string_list refs_to_delete = STRING_LIST_INIT_NODUP;
struct string_list_item *ref_to_delete;
char *head_ref = NULL;
int head_type;
struct object_id head_oid;
+ struct strbuf sb = STRBUF_INIT;
assert(err);
* head_ref within the transaction, then split_head_update()
* arranges for the reflog of HEAD to be updated, too.
*/
- head_ref = resolve_refdup("HEAD", RESOLVE_REF_NO_RECURSE,
- head_oid.hash, &head_type);
+ head_ref = refs_resolve_refdup(ref_store, "HEAD",
+ RESOLVE_REF_NO_RECURSE,
+ head_oid.hash, &head_type);
if (head_ref && !(head_type & REF_ISSYMREF)) {
free(head_ref);
if (update->flags & REF_NEEDS_COMMIT ||
update->flags & REF_LOG_ONLY) {
- if (files_log_ref_write(lock->ref_name,
+ if (files_log_ref_write(refs,
+ lock->ref_name,
lock->old_oid.hash,
update->new_sha1,
update->msg, update->flags,
if (!(update->type & REF_ISPACKED) ||
update->type & REF_ISSYMREF) {
/* It is a loose reference. */
- if (unlink_or_msg(git_path("%s", lock->ref_name), err)) {
+ strbuf_reset(&sb);
+ files_ref_path(refs, &sb, lock->ref_name);
+ if (unlink_or_msg(sb.buf, err)) {
ret = TRANSACTION_GENERIC_ERROR;
goto cleanup;
}
/* Delete the reflogs of any references that were deleted: */
for_each_string_list_item(ref_to_delete, &refs_to_delete) {
- if (!unlink_or_warn(git_path("logs/%s", ref_to_delete->string)))
- try_remove_empty_parents(ref_to_delete->string,
+ strbuf_reset(&sb);
+ files_reflog_path(refs, &sb, ref_to_delete->string);
+ if (!unlink_or_warn(sb.buf))
+ try_remove_empty_parents(refs, ref_to_delete->string,
REMOVE_EMPTY_PARENTS_REFLOG);
}
clear_loose_ref_cache(refs);
cleanup:
+ strbuf_release(&sb);
transaction->state = REF_TRANSACTION_CLOSED;
for (i = 0; i < transaction->nr; i++) {
* can only work because we have already
* removed the lockfile.)
*/
- try_remove_empty_parents(update->refname,
+ try_remove_empty_parents(refs, update->refname,
REMOVE_EMPTY_PARENTS_REF);
}
}
struct strbuf *err)
{
struct files_ref_store *refs =
- files_downcast(ref_store, 0, "initial_ref_transaction_commit");
+ files_downcast(ref_store, REF_STORE_WRITE,
+ "initial_ref_transaction_commit");
int ret = 0, i;
struct string_list affected_refnames = STRING_LIST_INIT_NODUP;
* so here we really only check that none of the references
* that we are creating already exists.
*/
- if (for_each_rawref(ref_present, &affected_refnames))
+ if (refs_for_each_rawref(&refs->base, ref_present,
+ &affected_refnames))
die("BUG: initial ref transaction called with existing refs");
for (i = 0; i < transaction->nr; i++) {
if ((update->flags & REF_HAVE_OLD) &&
!is_null_sha1(update->old_sha1))
die("BUG: initial ref transaction with old_sha1 set");
- if (verify_refname_available(update->refname,
- &affected_refnames, NULL,
- err)) {
+ if (refs_verify_refname_available(&refs->base, update->refname,
+ &affected_refnames, NULL,
+ err)) {
ret = TRANSACTION_NAME_CONFLICT;
goto cleanup;
}
void *policy_cb_data)
{
struct files_ref_store *refs =
- files_downcast(ref_store, 0, "reflog_expire");
+ files_downcast(ref_store, REF_STORE_WRITE, "reflog_expire");
static struct lock_file reflog_lock;
struct expire_reflog_cb cb;
struct ref_lock *lock;
+ struct strbuf log_file_sb = STRBUF_INIT;
char *log_file;
int status = 0;
int type;
strbuf_release(&err);
return -1;
}
- if (!reflog_exists(refname)) {
+ if (!refs_reflog_exists(ref_store, refname)) {
unlock_ref(lock);
return 0;
}
- log_file = git_pathdup("logs/%s", refname);
+ files_reflog_path(refs, &log_file_sb, refname);
+ log_file = strbuf_detach(&log_file_sb, NULL);
if (!(flags & EXPIRE_REFLOGS_DRY_RUN)) {
/*
* Even though holding $GIT_DIR/logs/$reflog.lock has
}
(*prepare_fn)(refname, sha1, cb.policy_cb);
- for_each_reflog_ent(refname, expire_reflog_ent, &cb);
+ refs_for_each_reflog_ent(ref_store, refname, expire_reflog_ent, &cb);
(*cleanup_fn)(cb.policy_cb);
if (!(flags & EXPIRE_REFLOGS_DRY_RUN)) {
static int files_init_db(struct ref_store *ref_store, struct strbuf *err)
{
- /* Check validity (but we don't need the result): */
- files_downcast(ref_store, 0, "init_db");
+ struct files_ref_store *refs =
+ files_downcast(ref_store, REF_STORE_WRITE, "init_db");
+ struct strbuf sb = STRBUF_INIT;
/*
* Create .git/refs/{heads,tags}
*/
- safe_create_dir(git_path("refs/heads"), 1);
- safe_create_dir(git_path("refs/tags"), 1);
- if (get_shared_repository()) {
- adjust_shared_perm(git_path("refs/heads"));
- adjust_shared_perm(git_path("refs/tags"));
- }
+ files_ref_path(refs, &sb, "refs/heads");
+ safe_create_dir(sb.buf, 1);
+
+ strbuf_reset(&sb);
+ files_ref_path(refs, &sb, "refs/tags");
+ safe_create_dir(sb.buf, 1);
+
+ strbuf_release(&sb);
return 0;
}
*/
enum peel_status peel_object(const unsigned char *name, unsigned char *sha1);
-/*
- * Return 0 if a reference named refname could be created without
- * conflicting with the name of an existing reference. Otherwise,
- * return a negative value and write an explanation to err. If extras
- * is non-NULL, it is a list of additional refnames with which refname
- * is not allowed to conflict. If skip is non-NULL, ignore potential
- * conflicts with refs in skip (e.g., because they are scheduled for
- * deletion in the same operation). Behavior is undefined if the same
- * name is listed in both extras and skip.
- *
- * Two reference names conflict if one of them exactly matches the
- * leading components of the other; e.g., "foo/bar" conflicts with
- * both "foo" and with "foo/bar/baz" but not with "foo/bar" or
- * "foo/barbados".
- *
- * extras and skip must be sorted.
- */
-int verify_refname_available(const char *newname,
- const struct string_list *extras,
- const struct string_list *skip,
- struct strbuf *err);
-
/*
* Copy the reflog message msg to buf, which has been allocated sufficiently
* large, while cleaning up the whitespaces. Especially, convert LF to space,
* as atomically as possible. This structure is opaque to callers.
*/
struct ref_transaction {
+ struct ref_store *ref_store;
struct ref_update **updates;
size_t alloc;
size_t nr;
enum ref_transaction_state state;
};
-int files_log_ref_write(const char *refname, const unsigned char *old_sha1,
- const unsigned char *new_sha1, const char *msg,
- int flags, struct strbuf *err);
-
/*
* Check for entries in extras that are within the specified
* directory, where dirname is a reference directory name including
* processes (though rename_ref() catches some races that might get by
* this check).
*/
-int rename_ref_available(const char *old_refname, const char *new_refname);
+int refs_rename_ref_available(struct ref_store *refs,
+ const char *old_refname,
+ const char *new_refname);
/* We allow "recursive" symbolic refs. Only within reason, though */
#define SYMREF_MAXDEPTH 5
/* refs backends */
+/* ref_store_init flags */
+#define REF_STORE_READ (1 << 0)
+#define REF_STORE_WRITE (1 << 1) /* can perform update operations */
+#define REF_STORE_ODB (1 << 2) /* has access to object database */
+#define REF_STORE_MAIN (1 << 3)
+
/*
- * Initialize the ref_store for the specified submodule, or for the
- * main repository if submodule == NULL. These functions should call
- * base_ref_store_init() to initialize the shared part of the
- * ref_store and to record the ref_store for later lookup.
+ * Initialize the ref_store for the specified gitdir. These functions
+ * should call base_ref_store_init() to initialize the shared part of
+ * the ref_store and to record the ref_store for later lookup.
*/
-typedef struct ref_store *ref_store_init_fn(const char *submodule);
+typedef struct ref_store *ref_store_init_fn(const char *gitdir,
+ unsigned int flags);
typedef int ref_init_db_fn(struct ref_store *refs, struct strbuf *err);
void base_ref_store_init(struct ref_store *refs,
const struct ref_storage_be *be);
-/*
- * Return the ref_store instance for the specified submodule. For the
- * main repository, use submodule==NULL; such a call cannot fail. For
- * a submodule, the submodule must exist and be a nonbare repository,
- * otherwise return NULL. If the requested reference store has not yet
- * been initialized, initialize it first.
- *
- * For backwards compatibility, submodule=="" is treated the same as
- * submodule==NULL.
- */
-struct ref_store *get_ref_store(const char *submodule);
-
-const char *resolve_ref_recursively(struct ref_store *refs,
- const char *refname,
- int resolve_flags,
- unsigned char *sha1, int *flags);
-
#endif /* REFS_REFS_INTERNAL_H */
char *buf;
size_t len;
struct ref *refs;
- struct sha1_array shallow;
+ struct oid_array shallow;
unsigned proto_git : 1;
};
static struct discovery *last_discovery;
if (d) {
if (d == last_discovery)
last_discovery = NULL;
- free(d->shallow.sha1);
+ free(d->shallow.oid);
free(d->buf_alloc);
free_refs(d->refs);
free(d);
return err;
}
+static curl_off_t xcurl_off_t(ssize_t len) {
+ if (len > maximum_signed_value_of_type(curl_off_t))
+ die("cannot handle pushes this big");
+ return (curl_off_t) len;
+}
+
static int post_rpc(struct rpc_state *rpc)
{
struct active_request_slot *slot;
* and we just need to send it.
*/
curl_easy_setopt(slot->curl, CURLOPT_POSTFIELDS, gzip_body);
- curl_easy_setopt(slot->curl, CURLOPT_POSTFIELDSIZE, gzip_size);
+ curl_easy_setopt(slot->curl, CURLOPT_POSTFIELDSIZE_LARGE, xcurl_off_t(gzip_size));
} else if (use_gzip && 1024 < rpc->len) {
/* The client backend isn't giving us compressed data so
headers = curl_slist_append(headers, "Content-Encoding: gzip");
curl_easy_setopt(slot->curl, CURLOPT_POSTFIELDS, gzip_body);
- curl_easy_setopt(slot->curl, CURLOPT_POSTFIELDSIZE, gzip_size);
+ curl_easy_setopt(slot->curl, CURLOPT_POSTFIELDSIZE_LARGE, xcurl_off_t(gzip_size));
if (options.verbosity > 1) {
fprintf(stderr, "POST %s (gzip %lu to %lu bytes)\n",
* more normal Content-Length approach.
*/
curl_easy_setopt(slot->curl, CURLOPT_POSTFIELDS, rpc->buf);
- curl_easy_setopt(slot->curl, CURLOPT_POSTFIELDSIZE, rpc->len);
+ curl_easy_setopt(slot->curl, CURLOPT_POSTFIELDSIZE_LARGE, xcurl_off_t(rpc->len));
if (options.verbosity > 1) {
fprintf(stderr, "POST %s (%lu bytes)\n",
rpc->service_name, (unsigned long)rpc->len);
return parse_refspec_internal(nr_refspec, refspec, 1, 0);
}
-static struct refspec *parse_push_refspec(int nr_refspec, const char **refspec)
+struct refspec *parse_push_refspec(int nr_refspec, const char **refspec)
{
return parse_refspec_internal(nr_refspec, refspec, 0, 0);
}
return entry;
}
-int parse_push_cas_option(struct push_cas_option *cas, const char *arg, int unset)
+static int parse_push_cas_option(struct push_cas_option *cas, const char *arg, int unset)
{
const char *colon;
struct push_cas *entry;
*/
void free_refs(struct ref *ref);
-struct sha1_array;
+struct oid_array;
extern struct ref **get_remote_heads(int in, char *src_buf, size_t src_len,
struct ref **list, unsigned int flags,
- struct sha1_array *extra_have,
- struct sha1_array *shallow);
+ struct oid_array *extra_have,
+ struct oid_array *shallow);
int resolve_remote_symref(struct ref *ref, struct ref *list);
int ref_newer(const struct object_id *new_oid, const struct object_id *old_oid);
int valid_fetch_refspec(const char *refspec);
struct refspec *parse_fetch_refspec(int nr_refspec, const char **refspec);
+extern struct refspec *parse_push_refspec(int nr_refspec, const char **refspec);
void free_refspec(int nr_refspec, struct refspec *refspec);
};
extern int parseopt_push_cas_option(const struct option *, const char *arg, int unset);
-extern int parse_push_cas_option(struct push_cas_option *, const char *arg, int unset);
extern int is_empty_cas(const struct push_cas_option *);
void apply_push_cas(struct push_cas_option *, struct remote *, struct ref *);
/*
* Make a pack stream and spit it out into file descriptor fd
*/
-static int pack_objects(int fd, struct ref *refs, struct sha1_array *extra, struct send_pack_args *args)
+static int pack_objects(int fd, struct ref *refs, struct oid_array *extra, struct send_pack_args *args)
{
/*
* The child becomes pack-objects --revs; we feed
*/
po_in = xfdopen(po.in, "w");
for (i = 0; i < extra->nr; i++)
- feed_object(extra->sha1[i], po_in, 1);
+ feed_object(extra->oid[i].hash, po_in, 1);
while (refs) {
if (!is_null_oid(&refs->old_oid))
int send_pack(struct send_pack_args *args,
int fd[], struct child_process *conn,
struct ref *remote_refs,
- struct sha1_array *extra_have)
+ struct oid_array *extra_have)
{
int in = fd[0];
int out = fd[1];
int send_pack(struct send_pack_args *args,
int fd[], struct child_process *conn,
- struct ref *remote_refs, struct sha1_array *extra_have);
+ struct ref *remote_refs, struct oid_array *extra_have);
#endif
"\n"
" git rebase --continue\n");
+#define ALLOW_EMPTY (1<<0)
+#define EDIT_MSG (1<<1)
+#define AMEND_MSG (1<<2)
+#define CLEANUP_MSG (1<<3)
+#define VERIFY_MSG (1<<4)
+
/*
* If we are cherry-pick, and if the merge did not result in
* hand-editing, we will hit this commit and inherit the original
* author metadata.
*/
static int run_git_commit(const char *defmsg, struct replay_opts *opts,
- int allow_empty, int edit, int amend,
- int cleanup_commit_message)
+ unsigned int flags)
{
struct child_process cmd = CHILD_PROCESS_INIT;
const char *value;
cmd.git_cmd = 1;
if (is_rebase_i(opts)) {
- if (!edit) {
+ if (!(flags & EDIT_MSG)) {
cmd.stdout_to_stderr = 1;
cmd.err = -1;
}
}
argv_array_push(&cmd.args, "commit");
- argv_array_push(&cmd.args, "-n");
- if (amend)
+ if (!(flags & VERIFY_MSG))
+ argv_array_push(&cmd.args, "-n");
+ if ((flags & AMEND_MSG))
argv_array_push(&cmd.args, "--amend");
if (opts->gpg_sign)
argv_array_pushf(&cmd.args, "-S%s", opts->gpg_sign);
argv_array_push(&cmd.args, "-s");
if (defmsg)
argv_array_pushl(&cmd.args, "-F", defmsg, NULL);
- if (cleanup_commit_message)
+ if ((flags & CLEANUP_MSG))
argv_array_push(&cmd.args, "--cleanup=strip");
- if (edit)
+ if ((flags & EDIT_MSG))
argv_array_push(&cmd.args, "-e");
- else if (!cleanup_commit_message &&
+ else if (!(flags & CLEANUP_MSG) &&
!opts->signoff && !opts->record_origin &&
git_config_get_value("commit.cleanup", &value))
argv_array_push(&cmd.args, "--cleanup=verbatim");
- if (allow_empty)
+ if ((flags & ALLOW_EMPTY))
argv_array_push(&cmd.args, "--allow-empty");
if (opts->allow_empty_message)
static int do_pick_commit(enum todo_command command, struct commit *commit,
struct replay_opts *opts, int final_fixup)
{
- int edit = opts->edit, cleanup_commit_message = 0;
- const char *msg_file = edit ? NULL : git_path_merge_msg();
+ unsigned int flags = opts->edit ? EDIT_MSG : 0;
+ const char *msg_file = opts->edit ? NULL : git_path_merge_msg();
unsigned char head[20];
struct commit *base, *next, *parent;
const char *base_label, *next_label;
struct commit_message msg = { NULL, NULL, NULL, NULL };
struct strbuf msgbuf = STRBUF_INIT;
- int res, unborn = 0, amend = 0, allow = 0;
+ int res, unborn = 0, allow;
if (opts->no_commit) {
/*
opts);
if (res || command != TODO_REWORD)
goto leave;
- edit = amend = 1;
+ flags |= EDIT_MSG | AMEND_MSG;
+ if (command == TODO_REWORD)
+ flags |= VERIFY_MSG;
msg_file = NULL;
goto fast_forward_edit;
}
}
if (command == TODO_REWORD)
- edit = 1;
+ flags |= EDIT_MSG | VERIFY_MSG;
else if (is_fixup(command)) {
if (update_squash_messages(command, commit, opts))
return -1;
- amend = 1;
+ flags |= AMEND_MSG;
if (!final_fixup)
msg_file = rebase_path_squash_msg();
else if (file_exists(rebase_path_fixup_msg())) {
- cleanup_commit_message = 1;
+ flags |= CLEANUP_MSG;
msg_file = rebase_path_fixup_msg();
} else {
const char *dest = git_path("SQUASH_MSG");
rebase_path_squash_msg(), dest);
unlink(git_path("MERGE_MSG"));
msg_file = dest;
- edit = 1;
+ flags |= EDIT_MSG;
}
}
if (allow < 0) {
res = allow;
goto leave;
- }
+ } else if (allow)
+ flags |= ALLOW_EMPTY;
if (!opts->no_commit)
fast_forward_edit:
- res = run_git_commit(msg_file, opts, allow, edit, amend,
- cleanup_commit_message);
+ res = run_git_commit(msg_file, opts, flags);
if (!res && final_fixup) {
unlink(rebase_path_fixup_msg());
static int commit_staged_changes(struct replay_opts *opts)
{
- int amend = 0;
+ unsigned int flags = ALLOW_EMPTY | EDIT_MSG;
if (has_unstaged_changes(1))
return error(_("cannot rebase: You have unstaged changes."));
"--continue' again."));
strbuf_release(&rev);
- amend = 1;
+ flags |= AMEND_MSG;
}
- if (run_git_commit(rebase_path_message(), opts, 1, 1, amend, 0))
+ if (run_git_commit(rebase_path_message(), opts, flags))
return error(_("could not commit staged changes."));
unlink(rebase_path_amend());
return 0;
{
static struct strbuf cwd = STRBUF_INIT;
struct strbuf dir = STRBUF_INIT, gitdir = STRBUF_INIT;
- const char *prefix;
+ const char *prefix, *env_prefix;
/*
* We may have read an incomplete configuration before
die("BUG: unhandled setup_git_directory_1() result");
}
+ env_prefix = getenv(GIT_TOPLEVEL_PREFIX_ENVIRONMENT);
+ if (env_prefix)
+ prefix = env_prefix;
+
if (prefix)
setenv(GIT_PREFIX_ENVIRONMENT, prefix, 1);
else
#include "sha1-array.h"
#include "sha1-lookup.h"
-void sha1_array_append(struct sha1_array *array, const unsigned char *sha1)
+void oid_array_append(struct oid_array *array, const struct object_id *oid)
{
- ALLOC_GROW(array->sha1, array->nr + 1, array->alloc);
- hashcpy(array->sha1[array->nr++], sha1);
+ ALLOC_GROW(array->oid, array->nr + 1, array->alloc);
+ oidcpy(&array->oid[array->nr++], oid);
array->sorted = 0;
}
static int void_hashcmp(const void *a, const void *b)
{
- return hashcmp(a, b);
+ return oidcmp(a, b);
}
-static void sha1_array_sort(struct sha1_array *array)
+static void oid_array_sort(struct oid_array *array)
{
- QSORT(array->sha1, array->nr, void_hashcmp);
+ QSORT(array->oid, array->nr, void_hashcmp);
array->sorted = 1;
}
static const unsigned char *sha1_access(size_t index, void *table)
{
- unsigned char (*array)[20] = table;
- return array[index];
+ struct object_id *array = table;
+ return array[index].hash;
}
-int sha1_array_lookup(struct sha1_array *array, const unsigned char *sha1)
+int oid_array_lookup(struct oid_array *array, const struct object_id *oid)
{
if (!array->sorted)
- sha1_array_sort(array);
- return sha1_pos(sha1, array->sha1, array->nr, sha1_access);
+ oid_array_sort(array);
+ return sha1_pos(oid->hash, array->oid, array->nr, sha1_access);
}
-void sha1_array_clear(struct sha1_array *array)
+void oid_array_clear(struct oid_array *array)
{
- free(array->sha1);
- array->sha1 = NULL;
+ free(array->oid);
+ array->oid = NULL;
array->nr = 0;
array->alloc = 0;
array->sorted = 0;
}
-int sha1_array_for_each_unique(struct sha1_array *array,
- for_each_sha1_fn fn,
+int oid_array_for_each_unique(struct oid_array *array,
+ for_each_oid_fn fn,
void *data)
{
int i;
if (!array->sorted)
- sha1_array_sort(array);
+ oid_array_sort(array);
for (i = 0; i < array->nr; i++) {
int ret;
- if (i > 0 && !hashcmp(array->sha1[i], array->sha1[i-1]))
+ if (i > 0 && !oidcmp(array->oid + i, array->oid + i - 1))
continue;
- ret = fn(array->sha1[i], data);
+ ret = fn(array->oid + i, data);
if (ret)
return ret;
}
#ifndef SHA1_ARRAY_H
#define SHA1_ARRAY_H
-struct sha1_array {
- unsigned char (*sha1)[20];
+struct oid_array {
+ struct object_id *oid;
int nr;
int alloc;
int sorted;
};
-#define SHA1_ARRAY_INIT { NULL, 0, 0, 0 }
+#define OID_ARRAY_INIT { NULL, 0, 0, 0 }
-void sha1_array_append(struct sha1_array *array, const unsigned char *sha1);
-int sha1_array_lookup(struct sha1_array *array, const unsigned char *sha1);
-void sha1_array_clear(struct sha1_array *array);
+void oid_array_append(struct oid_array *array, const struct object_id *oid);
+int oid_array_lookup(struct oid_array *array, const struct object_id *oid);
+void oid_array_clear(struct oid_array *array);
-typedef int (*for_each_sha1_fn)(const unsigned char sha1[20],
- void *data);
-int sha1_array_for_each_unique(struct sha1_array *array,
- for_each_sha1_fn fn,
+typedef int (*for_each_oid_fn)(const struct object_id *oid,
+ void *data);
+int oid_array_for_each_unique(struct oid_array *array,
+ for_each_oid_fn fn,
void *data);
#endif /* SHA1_ARRAY_H */
if (!hashcmp(sha1, p->bad_object_sha1 + GIT_SHA1_RAWSZ * i))
return;
p->bad_object_sha1 = xrealloc(p->bad_object_sha1,
- st_mult(GIT_SHA1_RAWSZ,
+ st_mult(GIT_MAX_RAWSZ,
st_add(p->num_bad_objects, 1)));
hashcpy(p->bad_object_sha1 + GIT_SHA1_RAWSZ * p->num_bad_objects, sha1);
p->num_bad_objects++;
if (status && oi->typep)
*oi->typep = status;
strbuf_release(&hdrbuf);
- return 0;
+ return (status < 0) ? status : 0;
}
int sha1_object_info_extended(const unsigned char *sha1, struct object_info *oi, unsigned flags)
{
struct pack_entry e;
+ if (!startup_info->have_repository)
+ return 0;
if (find_pack_entry(sha1, &e))
return 1;
if (has_loose_object(sha1))
strbuf_addf(path, "/%s", de->d_name);
if (strlen(de->d_name) == GIT_SHA1_HEXSZ - 2) {
- char hex[GIT_SHA1_HEXSZ+1];
+ char hex[GIT_MAX_HEXSZ+1];
struct object_id oid;
- snprintf(hex, sizeof(hex), "%02x%s",
- subdir_nr, de->d_name);
+ xsnprintf(hex, sizeof(hex), "%02x%s",
+ subdir_nr, de->d_name);
if (!get_oid_hex(hex, &oid)) {
if (obj_cb) {
r = obj_cb(&oid, path->buf, data);
const unsigned char *expected_sha1)
{
git_SHA_CTX c;
- unsigned char real_sha1[GIT_SHA1_RAWSZ];
+ unsigned char real_sha1[GIT_MAX_RAWSZ];
unsigned char buf[4096];
unsigned long total_read;
int status = Z_OK;
static int get_sha1_oneline(const char *, unsigned char *, struct commit_list *);
-typedef int (*disambiguate_hint_fn)(const unsigned char *, void *);
+typedef int (*disambiguate_hint_fn)(const struct object_id *, void *);
struct disambiguate_state {
int len; /* length of prefix in hex chars */
- char hex_pfx[GIT_SHA1_HEXSZ + 1];
- unsigned char bin_pfx[GIT_SHA1_RAWSZ];
+ char hex_pfx[GIT_MAX_HEXSZ + 1];
+ struct object_id bin_pfx;
disambiguate_hint_fn fn;
void *cb_data;
- unsigned char candidate[GIT_SHA1_RAWSZ];
+ struct object_id candidate;
unsigned candidate_exists:1;
unsigned candidate_checked:1;
unsigned candidate_ok:1;
unsigned always_call_fn:1;
};
-static void update_candidates(struct disambiguate_state *ds, const unsigned char *current)
+static void update_candidates(struct disambiguate_state *ds, const struct object_id *current)
{
if (ds->always_call_fn) {
ds->ambiguous = ds->fn(current, ds->cb_data) ? 1 : 0;
}
if (!ds->candidate_exists) {
/* this is the first candidate */
- hashcpy(ds->candidate, current);
+ oidcpy(&ds->candidate, current);
ds->candidate_exists = 1;
return;
- } else if (!hashcmp(ds->candidate, current)) {
+ } else if (!oidcmp(&ds->candidate, current)) {
/* the same as what we already have seen */
return;
}
}
if (!ds->candidate_checked) {
- ds->candidate_ok = ds->fn(ds->candidate, ds->cb_data);
+ ds->candidate_ok = ds->fn(&ds->candidate, ds->cb_data);
ds->disambiguate_fn_used = 1;
ds->candidate_checked = 1;
}
if (!ds->candidate_ok) {
/* discard the candidate; we know it does not satisfy fn */
- hashcpy(ds->candidate, current);
+ oidcpy(&ds->candidate, current);
ds->candidate_checked = 0;
return;
}
static void find_short_object_filename(struct disambiguate_state *ds)
{
struct alternate_object_database *alt;
- char hex[GIT_SHA1_HEXSZ];
+ char hex[GIT_MAX_HEXSZ];
static struct alternate_object_database *fakeent;
if (!fakeent) {
continue;
while (!ds->ambiguous && (de = readdir(dir)) != NULL) {
- unsigned char sha1[20];
+ struct object_id oid;
- if (strlen(de->d_name) != 38)
+ if (strlen(de->d_name) != GIT_SHA1_HEXSZ - 2)
continue;
if (memcmp(de->d_name, ds->hex_pfx + 2, ds->len - 2))
continue;
- memcpy(hex + 2, de->d_name, 38);
- if (!get_sha1_hex(hex, sha1))
- update_candidates(ds, sha1);
+ memcpy(hex + 2, de->d_name, GIT_SHA1_HEXSZ - 2);
+ if (!get_oid_hex(hex, &oid))
+ update_candidates(ds, &oid);
}
closedir(dir);
}
struct disambiguate_state *ds)
{
uint32_t num, last, i, first = 0;
- const unsigned char *current = NULL;
+ const struct object_id *current = NULL;
open_pack_index(p);
num = p->num_objects;
int cmp;
current = nth_packed_object_sha1(p, mid);
- cmp = hashcmp(ds->bin_pfx, current);
+ cmp = hashcmp(ds->bin_pfx.hash, current);
if (!cmp) {
first = mid;
break;
* 0, 1 or more objects that actually match(es).
*/
for (i = first; i < num && !ds->ambiguous; i++) {
- current = nth_packed_object_sha1(p, i);
- if (!match_sha(ds->len, ds->bin_pfx, current))
+ struct object_id oid;
+ current = nth_packed_object_oid(&oid, p, i);
+ if (!match_sha(ds->len, ds->bin_pfx.hash, current->hash))
break;
update_candidates(ds, current);
}
* same repository!
*/
ds->candidate_ok = (!ds->disambiguate_fn_used ||
- ds->fn(ds->candidate, ds->cb_data));
+ ds->fn(&ds->candidate, ds->cb_data));
if (!ds->candidate_ok)
return SHORT_NAME_AMBIGUOUS;
- hashcpy(sha1, ds->candidate);
+ hashcpy(sha1, ds->candidate.hash);
return 0;
}
-static int disambiguate_commit_only(const unsigned char *sha1, void *cb_data_unused)
+static int disambiguate_commit_only(const struct object_id *oid, void *cb_data_unused)
{
- int kind = sha1_object_info(sha1, NULL);
+ int kind = sha1_object_info(oid->hash, NULL);
return kind == OBJ_COMMIT;
}
-static int disambiguate_committish_only(const unsigned char *sha1, void *cb_data_unused)
+static int disambiguate_committish_only(const struct object_id *oid, void *cb_data_unused)
{
struct object *obj;
int kind;
- kind = sha1_object_info(sha1, NULL);
+ kind = sha1_object_info(oid->hash, NULL);
if (kind == OBJ_COMMIT)
return 1;
if (kind != OBJ_TAG)
return 0;
/* We need to do this the hard way... */
- obj = deref_tag(parse_object(sha1), NULL, 0);
+ obj = deref_tag(parse_object(oid->hash), NULL, 0);
if (obj && obj->type == OBJ_COMMIT)
return 1;
return 0;
}
-static int disambiguate_tree_only(const unsigned char *sha1, void *cb_data_unused)
+static int disambiguate_tree_only(const struct object_id *oid, void *cb_data_unused)
{
- int kind = sha1_object_info(sha1, NULL);
+ int kind = sha1_object_info(oid->hash, NULL);
return kind == OBJ_TREE;
}
-static int disambiguate_treeish_only(const unsigned char *sha1, void *cb_data_unused)
+static int disambiguate_treeish_only(const struct object_id *oid, void *cb_data_unused)
{
struct object *obj;
int kind;
- kind = sha1_object_info(sha1, NULL);
+ kind = sha1_object_info(oid->hash, NULL);
if (kind == OBJ_TREE || kind == OBJ_COMMIT)
return 1;
if (kind != OBJ_TAG)
return 0;
/* We need to do this the hard way... */
- obj = deref_tag(parse_object(sha1), NULL, 0);
+ obj = deref_tag(parse_object(oid->hash), NULL, 0);
if (obj && (obj->type == OBJ_TREE || obj->type == OBJ_COMMIT))
return 1;
return 0;
}
-static int disambiguate_blob_only(const unsigned char *sha1, void *cb_data_unused)
+static int disambiguate_blob_only(const struct object_id *oid, void *cb_data_unused)
{
- int kind = sha1_object_info(sha1, NULL);
+ int kind = sha1_object_info(oid->hash, NULL);
return kind == OBJ_BLOB;
}
ds->hex_pfx[i] = c;
if (!(i & 1))
val <<= 4;
- ds->bin_pfx[i >> 1] |= val;
+ ds->bin_pfx.hash[i >> 1] |= val;
}
ds->len = len;
return 0;
}
-static int show_ambiguous_object(const unsigned char *sha1, void *data)
+static int show_ambiguous_object(const struct object_id *oid, void *data)
{
const struct disambiguate_state *ds = data;
struct strbuf desc = STRBUF_INIT;
int type;
- if (ds->fn && !ds->fn(sha1, ds->cb_data))
+
+ if (ds->fn && !ds->fn(oid, ds->cb_data))
return 0;
- type = sha1_object_info(sha1, NULL);
+ type = sha1_object_info(oid->hash, NULL);
if (type == OBJ_COMMIT) {
- struct commit *commit = lookup_commit(sha1);
+ struct commit *commit = lookup_commit(oid->hash);
if (commit) {
struct pretty_print_context pp = {0};
pp.date_mode.type = DATE_SHORT;
format_commit_message(commit, " %ad - %s", &desc, &pp);
}
} else if (type == OBJ_TAG) {
- struct tag *tag = lookup_tag(sha1);
+ struct tag *tag = lookup_tag(oid->hash);
if (!parse_tag(tag) && tag->tag)
strbuf_addf(&desc, " %s", tag->tag);
}
advise(" %s %s%s",
- find_unique_abbrev(sha1, DEFAULT_ABBREV),
+ find_unique_abbrev(oid->hash, DEFAULT_ABBREV),
typename(type) ? typename(type) : "unknown type",
desc.buf);
return status;
}
-static int collect_ambiguous(const unsigned char *sha1, void *data)
+static int collect_ambiguous(const struct object_id *oid, void *data)
{
- sha1_array_append(data, sha1);
+ oid_array_append(data, oid);
return 0;
}
int for_each_abbrev(const char *prefix, each_abbrev_fn fn, void *cb_data)
{
- struct sha1_array collect = SHA1_ARRAY_INIT;
+ struct oid_array collect = OID_ARRAY_INIT;
struct disambiguate_state ds;
int ret;
find_short_object_filename(&ds);
find_short_packed_object(&ds);
- ret = sha1_array_for_each_unique(&collect, fn, cb_data);
- sha1_array_clear(&collect);
+ ret = oid_array_for_each_unique(&collect, fn, cb_data);
+ oid_array_clear(&collect);
return ret;
}
const char *find_unique_abbrev(const unsigned char *sha1, int len)
{
static int bufno;
- static char hexbuffer[4][GIT_SHA1_HEXSZ + 1];
+ static char hexbuffer[4][GIT_MAX_HEXSZ + 1];
char *hex = hexbuffer[bufno];
bufno = (bufno + 1) % ARRAY_SIZE(hexbuffer);
find_unique_abbrev_r(hex, sha1, len);
for (i = 0; i < nr; i++) {
int suffix_len = strlen(suffix[i]);
if (suffix_len <= len
- && !memcmp(string, suffix[i], suffix_len))
+ && !strncasecmp(string, suffix[i], suffix_len))
return suffix_len;
}
return 0;
}
static int write_shallow_commits_1(struct strbuf *out, int use_pack_protocol,
- const struct sha1_array *extra,
+ const struct oid_array *extra,
unsigned flags)
{
struct write_shallow_data data;
if (!extra)
return data.count;
for (i = 0; i < extra->nr; i++) {
- strbuf_addstr(out, sha1_to_hex(extra->sha1[i]));
+ strbuf_addstr(out, oid_to_hex(extra->oid + i));
strbuf_addch(out, '\n');
data.count++;
}
}
int write_shallow_commits(struct strbuf *out, int use_pack_protocol,
- const struct sha1_array *extra)
+ const struct oid_array *extra)
{
return write_shallow_commits_1(out, use_pack_protocol, extra, 0);
}
static struct tempfile temporary_shallow;
-const char *setup_temporary_shallow(const struct sha1_array *extra)
+const char *setup_temporary_shallow(const struct oid_array *extra)
{
struct strbuf sb = STRBUF_INIT;
int fd;
void setup_alternate_shallow(struct lock_file *shallow_lock,
const char **alternate_shallow_file,
- const struct sha1_array *extra)
+ const struct oid_array *extra)
{
struct strbuf sb = STRBUF_INIT;
int fd;
* Step 1, split sender shallow commits into "ours" and "theirs"
* Step 2, clean "ours" based on .git/shallow
*/
-void prepare_shallow_info(struct shallow_info *info, struct sha1_array *sa)
+void prepare_shallow_info(struct shallow_info *info, struct oid_array *sa)
{
int i;
trace_printf_key(&trace_shallow, "shallow: prepare_shallow_info\n");
ALLOC_ARRAY(info->ours, sa->nr);
ALLOC_ARRAY(info->theirs, sa->nr);
for (i = 0; i < sa->nr; i++) {
- if (has_sha1_file(sa->sha1[i])) {
+ if (has_object_file(sa->oid + i)) {
struct commit_graft *graft;
- graft = lookup_commit_graft(sa->sha1[i]);
+ graft = lookup_commit_graft(sa->oid[i].hash);
if (graft && graft->nr_parent < 0)
continue;
info->ours[info->nr_ours++] = i;
void remove_nonexistent_theirs_shallow(struct shallow_info *info)
{
- unsigned char (*sha1)[20] = info->shallow->sha1;
+ struct object_id *oid = info->shallow->oid;
int i, dst;
trace_printf_key(&trace_shallow, "shallow: remove_nonexistent_theirs_shallow\n");
for (i = dst = 0; i < info->nr_theirs; i++) {
if (i != dst)
info->theirs[dst] = info->theirs[i];
- if (has_sha1_file(sha1[info->theirs[i]]))
+ if (has_object_file(oid + info->theirs[i]))
dst++;
}
info->nr_theirs = dst;
void assign_shallow_commits_to_refs(struct shallow_info *info,
uint32_t **used, int *ref_status)
{
- unsigned char (*sha1)[20] = info->shallow->sha1;
- struct sha1_array *ref = info->ref;
+ struct object_id *oid = info->shallow->oid;
+ struct oid_array *ref = info->ref;
unsigned int i, nr;
int *shallow, nr_shallow = 0;
struct paint_info pi;
/* Mark potential bottoms so we won't go out of bound */
for (i = 0; i < nr_shallow; i++) {
- struct commit *c = lookup_commit(sha1[shallow[i]]);
+ struct commit *c = lookup_commit(oid[shallow[i]].hash);
c->object.flags |= BOTTOM;
}
for (i = 0; i < ref->nr; i++)
- paint_down(&pi, ref->sha1[i], i);
+ paint_down(&pi, ref->oid[i].hash, i);
if (used) {
int bitmap_size = ((pi.nr_bits + 31) / 32) * sizeof(uint32_t);
memset(used, 0, sizeof(*used) * info->shallow->nr);
for (i = 0; i < nr_shallow; i++) {
- const struct commit *c = lookup_commit(sha1[shallow[i]]);
+ const struct commit *c = lookup_commit(oid[shallow[i]].hash);
uint32_t **map = ref_bitmap_at(&pi.ref_bitmap, c);
if (*map)
used[shallow[i]] = xmemdupz(*map, bitmap_size);
struct ref_bitmap *ref_bitmap,
int *ref_status)
{
- unsigned char (*sha1)[20] = info->shallow->sha1;
+ struct object_id *oid = info->shallow->oid;
struct commit *c;
uint32_t **bitmap;
int dst, i, j;
for (i = dst = 0; i < info->nr_theirs; i++) {
if (i != dst)
info->theirs[dst] = info->theirs[i];
- c = lookup_commit(sha1[info->theirs[i]]);
+ c = lookup_commit(oid[info->theirs[i]].hash);
bitmap = ref_bitmap_at(ref_bitmap, c);
if (!*bitmap)
continue;
for (i = dst = 0; i < info->nr_ours; i++) {
if (i != dst)
info->ours[dst] = info->ours[i];
- c = lookup_commit(sha1[info->ours[i]]);
+ c = lookup_commit(oid[info->ours[i]].hash);
bitmap = ref_bitmap_at(ref_bitmap, c);
if (!*bitmap)
continue;
int delayed_reachability_test(struct shallow_info *si, int c)
{
if (si->need_reachability_test[c]) {
- struct commit *commit = lookup_commit(si->shallow->sha1[c]);
+ struct commit *commit = lookup_commit(si->shallow->oid[c].hash);
if (!si->commits) {
struct commit_array ca;
strbuf_setlen(sb, strlen(sb->buf));
return 0;
}
+
+ /*
+ * If getcwd(3) is implemented as a syscall that falls
+ * back to a regular lookup using readdir(3) etc., then
+ * we may be able to avoid EACCES by providing enough
+ * space to the syscall, as it is not necessarily bound
+ * to the same restrictions as the fallback.
+ */
+ if (errno == EACCES && guessed_len < PATH_MAX)
+ continue;
+
if (errno != ERANGE)
break;
}
if (exact_match)
return -1 - index;
- if (list->nr + 1 >= list->alloc) {
- list->alloc += 32;
- REALLOC_ARRAY(list->items, list->alloc);
- }
+ ALLOC_GROW(list->items, list->nr+1, list->alloc);
if (index < list->nr)
memmove(list->items + index + 1, list->items + index,
(list->nr - index)
#include "blob.h"
#include "thread-utils.h"
#include "quote.h"
+#include "remote.h"
#include "worktree.h"
static int config_fetch_recurse_submodules = RECURSE_SUBMODULES_ON_DEMAND;
static int parallel_jobs = 1;
static struct string_list changed_submodule_paths = STRING_LIST_INIT_NODUP;
static int initialized_fetch_ref_tips;
-static struct sha1_array ref_tips_before_fetch;
-static struct sha1_array ref_tips_after_fetch;
+static struct oid_array ref_tips_before_fetch;
+static struct oid_array ref_tips_after_fetch;
/*
* The following flag is set if the .gitmodules file is unmerged. We then
}
/*
+ * NEEDSWORK: With the addition of different configuration options to determine
+ * if a submodule is of interest, the validity of this function's name comes
+ * into question. Once the dust has settled and more concrete terminology is
+ * decided upon, come up with a more appropriate name for this function. One
+ * potential candidate could be 'is_submodule_active()'.
+ *
* Determine if a submodule has been initialized at a given 'path'
*/
int is_submodule_initialized(const char *path)
{
int ret = 0;
- const struct submodule *module = NULL;
+ char *key = NULL;
+ char *value = NULL;
+ const struct string_list *sl;
+ const struct submodule *module = submodule_from_path(null_sha1, path);
- module = submodule_from_path(null_sha1, path);
+ /* early return if there isn't a path->module mapping */
+ if (!module)
+ return 0;
- if (module) {
- char *key = xstrfmt("submodule.%s.url", module->name);
- char *value = NULL;
+ /* submodule.<name>.active is set */
+ key = xstrfmt("submodule.%s.active", module->name);
+ if (!git_config_get_bool(key, &ret)) {
+ free(key);
+ return ret;
+ }
+ free(key);
- ret = !git_config_get_string(key, &value);
+ /* submodule.active is set */
+ sl = git_config_get_value_multi("submodule.active");
+ if (sl) {
+ struct pathspec ps;
+ struct argv_array args = ARGV_ARRAY_INIT;
+ const struct string_list_item *item;
- free(value);
- free(key);
+ for_each_string_list_item(item, sl) {
+ argv_array_push(&args, item->string);
+ }
+
+ parse_pathspec(&ps, 0, 0, NULL, args.argv);
+ ret = match_pathspec(&ps, path, strlen(path), 0, NULL, 1);
+
+ argv_array_clear(&args);
+ clear_pathspec(&ps);
+ return ret;
}
+ /* fallback to checking if the URL is set */
+ key = xstrfmt("submodule.%s.url", module->name);
+ ret = !git_config_get_string(key, &value);
+
+ free(value);
+ free(key);
return ret;
}
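
For illustration, the lookup order implemented above can be driven entirely from configuration; a minimal sketch, assuming a hypothetical submodule "mylib" checked out under lib/:

    # Per-submodule switch, consulted first:
    git config submodule.mylib.active true
    # Repo-wide pathspec list, consulted next:
    git config --add submodule.active "lib/*"
    # Legacy fallback: a configured URL alone also counts as "initialized":
    git config submodule.mylib.url ./mylib
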
if (!(dirty_submodule & DIRTY_SUBMODULE_MODIFIED))
argv_array_push(&cp.args, oid_to_hex(new));
+ prepare_submodule_repo_env(&cp.env_array);
if (run_command(&cp))
fprintf(f, "(diff failed)\n");
return 1;
}
-static int append_sha1_to_argv(const unsigned char sha1[20], void *data)
+static int append_oid_to_argv(const struct object_id *oid, void *data)
{
struct argv_array *argv = data;
- argv_array_push(argv, sha1_to_hex(sha1));
+ argv_array_push(argv, oid_to_hex(oid));
return 0;
}
-static int check_has_commit(const unsigned char sha1[20], void *data)
+static int check_has_commit(const struct object_id *oid, void *data)
{
int *has_commit = data;
- if (!lookup_commit_reference(sha1))
+ if (!lookup_commit_reference(oid->hash))
*has_commit = 0;
return 0;
}
-static int submodule_has_commits(const char *path, struct sha1_array *commits)
+static int submodule_has_commits(const char *path, struct oid_array *commits)
{
int has_commit = 1;
if (add_submodule_odb(path))
return 0;
- sha1_array_for_each_unique(commits, check_has_commit, &has_commit);
+ oid_array_for_each_unique(commits, check_has_commit, &has_commit);
return has_commit;
}
-static int submodule_needs_pushing(const char *path, struct sha1_array *commits)
+static int submodule_needs_pushing(const char *path, struct oid_array *commits)
{
if (!submodule_has_commits(path, commits))
/*
int needs_pushing = 0;
argv_array_push(&cp.args, "rev-list");
- sha1_array_for_each_unique(commits, append_sha1_to_argv, &cp.args);
+ oid_array_for_each_unique(commits, append_oid_to_argv, &cp.args);
argv_array_pushl(&cp.args, "--not", "--remotes", "-n", "1" , NULL);
prepare_submodule_repo_env(&cp.env_array);
return 0;
}
-static struct sha1_array *submodule_commits(struct string_list *submodules,
+static struct oid_array *submodule_commits(struct string_list *submodules,
const char *path)
{
struct string_list_item *item;
item = string_list_insert(submodules, path);
if (item->util)
- return (struct sha1_array *) item->util;
+ return (struct oid_array *) item->util;
- /* NEEDSWORK: should we have sha1_array_init()? */
- item->util = xcalloc(1, sizeof(struct sha1_array));
- return (struct sha1_array *) item->util;
+ /* NEEDSWORK: should we have oid_array_init()? */
+ item->util = xcalloc(1, sizeof(struct oid_array));
+ return (struct oid_array *) item->util;
}
static void collect_submodules_from_diff(struct diff_queue_struct *q,
for (i = 0; i < q->nr; i++) {
struct diff_filepair *p = q->queue[i];
- struct sha1_array *commits;
+ struct oid_array *commits;
if (!S_ISGITLINK(p->two->mode))
continue;
commits = submodule_commits(submodules, p->two->path);
- sha1_array_append(commits, p->two->oid.hash);
+ oid_array_append(commits, &p->two->oid);
}
}
{
struct string_list_item *item;
for_each_string_list_item(item, submodules)
- sha1_array_clear((struct sha1_array *) item->util);
+ oid_array_clear((struct oid_array *) item->util);
string_list_clear(submodules, 1);
}
-int find_unpushed_submodules(struct sha1_array *commits,
+int find_unpushed_submodules(struct oid_array *commits,
const char *remotes_name, struct string_list *needs_pushing)
{
struct rev_info rev;
/* argv.argv[0] will be ignored by setup_revisions */
argv_array_push(&argv, "find_unpushed_submodules");
- sha1_array_for_each_unique(commits, append_sha1_to_argv, &argv);
+ oid_array_for_each_unique(commits, append_oid_to_argv, &argv);
argv_array_push(&argv, "--not");
argv_array_pushf(&argv, "--remotes=%s", remotes_name);
argv_array_clear(&argv);
for_each_string_list_item(submodule, &submodules) {
- struct sha1_array *commits = (struct sha1_array *) submodule->util;
+ struct oid_array *commits = (struct oid_array *) submodule->util;
if (submodule_needs_pushing(submodule->string, commits))
string_list_insert(needs_pushing, submodule->string);
return needs_pushing->nr;
}
-static int push_submodule(const char *path, int dry_run)
+static int push_submodule(const char *path,
+ const struct remote *remote,
+ const char **refspec, int refspec_nr,
+ const struct string_list *push_options,
+ int dry_run)
{
if (add_submodule_odb(path))
return 1;
if (dry_run)
argv_array_push(&cp.args, "--dry-run");
+ if (push_options && push_options->nr) {
+ const struct string_list_item *item;
+ for_each_string_list_item(item, push_options)
+ argv_array_pushf(&cp.args, "--push-option=%s",
+ item->string);
+ }
+
+ if (remote->origin != REMOTE_UNCONFIGURED) {
+ int i;
+ argv_array_push(&cp.args, remote->name);
+ for (i = 0; i < refspec_nr; i++)
+ argv_array_push(&cp.args, refspec[i]);
+ }
+
prepare_submodule_repo_env(&cp.env_array);
cp.git_cmd = 1;
cp.no_stdin = 1;
return 1;
}
-int push_unpushed_submodules(struct sha1_array *commits,
- const char *remotes_name,
+/*
+ * Perform a check in the submodule to see if the remote and refspec work.
+ * Die if the submodule can't be pushed.
+ */
+static void submodule_push_check(const char *path, const struct remote *remote,
+ const char **refspec, int refspec_nr)
+{
+ struct child_process cp = CHILD_PROCESS_INIT;
+ int i;
+
+ argv_array_push(&cp.args, "submodule--helper");
+ argv_array_push(&cp.args, "push-check");
+ argv_array_push(&cp.args, remote->name);
+
+ for (i = 0; i < refspec_nr; i++)
+ argv_array_push(&cp.args, refspec[i]);
+
+ prepare_submodule_repo_env(&cp.env_array);
+ cp.git_cmd = 1;
+ cp.no_stdin = 1;
+ cp.no_stdout = 1;
+ cp.dir = path;
+
+ /*
+ * Simply indicate if 'submodule--helper push-check' failed.
+ * More detailed error information will be provided by the
+ * child process.
+ */
+ if (run_command(&cp))
+ die("process for submodule '%s' failed", path);
+}
+
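
For illustration, inside each submodule that still needs pushing, the check above amounts to roughly the following, with a placeholder remote and refspec:

    git -C sub submodule--helper push-check origin refs/heads/master:refs/heads/master
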
+int push_unpushed_submodules(struct oid_array *commits,
+ const struct remote *remote,
+ const char **refspec, int refspec_nr,
+ const struct string_list *push_options,
int dry_run)
{
int i, ret = 1;
struct string_list needs_pushing = STRING_LIST_INIT_DUP;
- if (!find_unpushed_submodules(commits, remotes_name, &needs_pushing))
+ if (!find_unpushed_submodules(commits, remote->name, &needs_pushing))
return 1;
+ /*
+ * Verify that the remote and refspec can be propagated to all
+ * submodules. This check can be skipped if the remote and refspec
+ * won't be propagated due to the remote being unconfigured (e.g. a URL
+ * instead of a remote name).
+ */
+ if (remote->origin != REMOTE_UNCONFIGURED)
+ for (i = 0; i < needs_pushing.nr; i++)
+ submodule_push_check(needs_pushing.items[i].string,
+ remote, refspec, refspec_nr);
+
+ /* Actually push the submodules */
for (i = 0; i < needs_pushing.nr; i++) {
const char *path = needs_pushing.items[i].string;
fprintf(stderr, "Pushing submodule '%s'\n", path);
- if (!push_submodule(path, dry_run)) {
+ if (!push_submodule(path, remote, refspec, refspec_nr,
+ push_options, dry_run)) {
fprintf(stderr, "Unable to push submodule '%s'\n", path);
ret = 0;
}
static int add_sha1_to_array(const char *ref, const struct object_id *oid,
int flags, void *data)
{
- sha1_array_append(data, oid->hash);
+ oid_array_append(data, oid);
return 0;
}
-void check_for_new_submodule_commits(unsigned char new_sha1[20])
+void check_for_new_submodule_commits(struct object_id *oid)
{
if (!initialized_fetch_ref_tips) {
for_each_ref(add_sha1_to_array, &ref_tips_before_fetch);
initialized_fetch_ref_tips = 1;
}
- sha1_array_append(&ref_tips_after_fetch, new_sha1);
+ oid_array_append(&ref_tips_after_fetch, oid);
}
-static int add_sha1_to_argv(const unsigned char sha1[20], void *data)
+static int add_oid_to_argv(const struct object_id *oid, void *data)
{
- argv_array_push(data, sha1_to_hex(sha1));
+ argv_array_push(data, oid_to_hex(oid));
return 0;
}
init_revisions(&rev, NULL);
argv_array_push(&argv, "--"); /* argv[0] program name */
- sha1_array_for_each_unique(&ref_tips_after_fetch,
- add_sha1_to_argv, &argv);
+ oid_array_for_each_unique(&ref_tips_after_fetch,
+ add_oid_to_argv, &argv);
argv_array_push(&argv, "--not");
- sha1_array_for_each_unique(&ref_tips_before_fetch,
- add_sha1_to_argv, &argv);
+ oid_array_for_each_unique(&ref_tips_before_fetch,
+ add_oid_to_argv, &argv);
setup_revisions(argv.argc, argv.argv, &rev, NULL);
if (prepare_revision_walk(&rev))
die("revision walk setup failed");
}
argv_array_clear(&argv);
- sha1_array_clear(&ref_tips_before_fetch);
- sha1_array_clear(&ref_tips_after_fetch);
+ oid_array_clear(&ref_tips_before_fetch);
+ oid_array_clear(&ref_tips_after_fetch);
initialized_fetch_ref_tips = 0;
}
unsigned is_submodule_modified(const char *path, int ignore_untracked)
{
- ssize_t len;
struct child_process cp = CHILD_PROCESS_INIT;
- const char *argv[] = {
- "status",
- "--porcelain",
- NULL,
- NULL,
- };
struct strbuf buf = STRBUF_INIT;
+ FILE *fp;
unsigned dirty_submodule = 0;
- const char *line, *next_line;
const char *git_dir;
+ int ignore_cp_exit_code = 0;
strbuf_addf(&buf, "%s/.git", path);
git_dir = read_gitfile(buf.buf);
if (!git_dir)
git_dir = buf.buf;
- if (!is_directory(git_dir)) {
+ if (!is_git_directory(git_dir)) {
+ if (is_directory(git_dir))
+ die(_("'%s' not recognized as a git repository"), git_dir);
strbuf_release(&buf);
/* The submodule is not checked out, so it is not modified */
return 0;
-
}
strbuf_reset(&buf);
+ argv_array_pushl(&cp.args, "status", "--porcelain=2", NULL);
if (ignore_untracked)
- argv[2] = "-uno";
+ argv_array_push(&cp.args, "-uno");
- cp.argv = argv;
prepare_submodule_repo_env(&cp.env_array);
cp.git_cmd = 1;
cp.no_stdin = 1;
cp.out = -1;
cp.dir = path;
if (start_command(&cp))
- die("Could not run 'git status --porcelain' in submodule %s", path);
+ die("Could not run 'git status --porcelain=2' in submodule %s", path);
- len = strbuf_read(&buf, cp.out, 1024);
- line = buf.buf;
- while (len > 2) {
- if ((line[0] == '?') && (line[1] == '?')) {
+ fp = xfdopen(cp.out, "r");
+ while (strbuf_getwholeline(&buf, fp, '\n') != EOF) {
+ /* regular untracked files */
+ if (buf.buf[0] == '?')
dirty_submodule |= DIRTY_SUBMODULE_UNTRACKED;
- if (dirty_submodule & DIRTY_SUBMODULE_MODIFIED)
- break;
- } else {
- dirty_submodule |= DIRTY_SUBMODULE_MODIFIED;
- if (ignore_untracked ||
- (dirty_submodule & DIRTY_SUBMODULE_UNTRACKED))
- break;
+
+ if (buf.buf[0] == 'u' ||
+ buf.buf[0] == '1' ||
+ buf.buf[0] == '2') {
+ /* T = line type, XY = status, SSSS = submodule state */
+ if (buf.len < strlen("T XY SSSS"))
+ die("BUG: invalid status --porcelain=2 line %s",
+ buf.buf);
+
+ if (buf.buf[5] == 'S' && buf.buf[8] == 'U')
+ /* nested untracked file */
+ dirty_submodule |= DIRTY_SUBMODULE_UNTRACKED;
+
+ if (buf.buf[0] == 'u' ||
+ buf.buf[0] == '2' ||
+ memcmp(buf.buf + 5, "S..U", 4))
+ /* other change */
+ dirty_submodule |= DIRTY_SUBMODULE_MODIFIED;
}
- next_line = strchr(line, '\n');
- if (!next_line)
+
+ if ((dirty_submodule & DIRTY_SUBMODULE_MODIFIED) &&
+ ((dirty_submodule & DIRTY_SUBMODULE_UNTRACKED) ||
+ ignore_untracked)) {
+ /*
+ * We are not interested in any further information from
+ * the child, neither its output nor its exit code.
+ */
+ ignore_cp_exit_code = 1;
break;
- next_line++;
- len -= (next_line - line);
- line = next_line;
+ }
}
- close(cp.out);
+ fclose(fp);
- if (finish_command(&cp))
- die("'git status --porcelain' failed in submodule %s", path);
+ if (finish_command(&cp) && !ignore_cp_exit_code)
+ die("'git status --porcelain=2' failed in submodule %s", path);
strbuf_release(&buf);
return dirty_submodule;
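
For illustration, the parser above consumes "git status --porcelain=2" lines of roughly this shape; the paths and object names are placeholders:

    git -C superproject status --porcelain=2
    # 1 .M S..U 160000 160000 160000 <sha1> <sha1> sub    (only untracked files inside "sub")
    # 1 .M S.M. 160000 160000 160000 <sha1> <sha1> sub    (tracked changes inside "sub")
    # ? junkfile                                           (untracked file in the superproject)
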
cp.dir = path;
if (start_command(&cp)) {
if (flags & SUBMODULE_REMOVAL_DIE_ON_ERROR)
- die(_("could not start 'git status in submodule '%s'"),
+ die(_("could not start 'git status' in submodule '%s'"),
path);
ret = -1;
goto out;
if (finish_command(&cp)) {
if (flags & SUBMODULE_REMOVAL_DIE_ON_ERROR)
- die(_("could not run 'git status in submodule '%s'"),
+ die(_("could not run 'git status' in submodule '%s'"),
path);
ret = -1;
}
memset(&rev_opts, 0, sizeof(rev_opts));
/* get all revisions that merge commit a */
- snprintf(merged_revision, sizeof(merged_revision), "^%s",
+ xsnprintf(merged_revision, sizeof(merged_revision), "^%s",
oid_to_hex(&a->object.oid));
init_revisions(&revs, NULL);
rev_opts.submodule = path;
return ret;
}
+
+int submodule_to_gitdir(struct strbuf *buf, const char *submodule)
+{
+ const struct submodule *sub;
+ const char *git_dir;
+ int ret = 0;
+
+ strbuf_reset(buf);
+ strbuf_addstr(buf, submodule);
+ strbuf_complete(buf, '/');
+ strbuf_addstr(buf, ".git");
+
+ git_dir = read_gitfile(buf->buf);
+ if (git_dir) {
+ strbuf_reset(buf);
+ strbuf_addstr(buf, git_dir);
+ }
+ if (!is_git_directory(buf->buf)) {
+ gitmodules_config();
+ sub = submodule_from_path(null_sha1, submodule);
+ if (!sub) {
+ ret = -1;
+ goto cleanup;
+ }
+ strbuf_reset(buf);
+ strbuf_git_path(buf, "%s/%s", "modules", sub->name);
+ }
+
+cleanup:
+ return ret;
+}
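
For illustration, a rough shell analogue of what submodule_to_gitdir() resolves, assuming a hypothetical submodule checked out at "sub": either sub/.git is itself a repository, or it is a file redirecting into the superproject's modules area:

    cat sub/.git
    # gitdir: ../.git/modules/sub
    git -C sub rev-parse --git-dir
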
struct diff_options;
struct argv_array;
-struct sha1_array;
+struct oid_array;
+struct remote;
enum {
RECURSE_SUBMODULES_ONLY = -5,
* and it should be updated. Returns NULL otherwise.
*/
extern const struct submodule *submodule_from_ce(const struct cache_entry *ce);
-extern void check_for_new_submodule_commits(unsigned char new_sha1[20]);
+extern void check_for_new_submodule_commits(struct object_id *oid);
extern int fetch_populated_submodules(const struct argv_array *options,
const char *prefix, int command_line_option,
int quiet, int max_parallel_jobs);
const unsigned char base[20],
const unsigned char a[20],
const unsigned char b[20], int search);
-extern int find_unpushed_submodules(struct sha1_array *commits,
+extern int find_unpushed_submodules(struct oid_array *commits,
const char *remotes_name,
struct string_list *needs_pushing);
-extern int push_unpushed_submodules(struct sha1_array *commits,
- const char *remotes_name,
+extern int push_unpushed_submodules(struct oid_array *commits,
+ const struct remote *remote,
+ const char **refspec, int refspec_nr,
+ const struct string_list *push_options,
int dry_run);
extern void connect_work_tree_and_git_dir(const char *work_tree, const char *git_dir);
extern int parallel_submodules(void);
+/*
+ * Given a submodule path (as in the index), return the repository
+ * path of that submodule in 'buf'. Return -1 on error or when the
+ * submodule is not initialized.
+ */
+int submodule_to_gitdir(struct strbuf *buf, const char *submodule);
#define SUBMODULE_MOVE_HEAD_DRY_RUN (1<<0)
#define SUBMODULE_MOVE_HEAD_FORCE (1<<1)
their output.
You can glean some further possible issues from the TAP grammar
- (see http://search.cpan.org/perldoc?TAP::Parser::Grammar#TAP_Grammar)
+ (see https://metacpan.org/pod/TAP::Parser::Grammar#TAP-GRAMMAR)
but the best indication is to just run the tests with prove(1),
it'll complain if anything is amiss.
Keep in mind:
- - Inside <script> part, the standard output and standard error
+ - Inside the <script> part, the standard output and standard error
streams are discarded, and the test harness only reports "ok" or
"not ok" to the end user running the tests. Under --verbose, they
are shown to help debugging the tests.
- test_have_prereq <prereq>
- Check if we have a prerequisite previously set with
- test_set_prereq. The most common use of this directly is to skip
- all the tests if we don't have some essential prerequisite:
+ Check if we have a prerequisite previously set with test_set_prereq.
+ The most common way to use this explicitly (as opposed to the
+ implicit use when an argument is passed to test_expect_*) is to skip
+ all the tests at the start of the test script if we don't have some
+ essential prerequisite:
if ! test_have_prereq PERL
then
/test-match-trees
/test-mergesort
/test-mktemp
+/test-online-cpus
/test-parse-options
/test-path-utils
/test-prio-queue
/test-read-cache
+/test-ref-store
/test-regex
/test-revision-walking
/test-run-command
--- /dev/null
+#include "git-compat-util.h"
+#include "thread-utils.h"
+
+int cmd_main(int argc, const char **argv)
+{
+ printf("%d\n", online_cpus());
+ return 0;
+}
int i, cnt = 1;
if (argc == 2)
cnt = strtol(argv[1], NULL, 0);
+ setup_git_directory();
for (i = 0; i < cnt; i++) {
read_cache();
discard_cache();
--- /dev/null
+#include "cache.h"
+#include "refs.h"
+
+static const char *notnull(const char *arg, const char *name)
+{
+ if (!arg)
+ die("%s required", name);
+ return arg;
+}
+
+static unsigned int arg_flags(const char *arg, const char *name)
+{
+ return atoi(notnull(arg, name));
+}
+
+static const char **get_store(const char **argv, struct ref_store **refs)
+{
+ const char *gitdir;
+
+ if (!argv[0]) {
+ die("ref store required");
+ } else if (!strcmp(argv[0], "main")) {
+ *refs = get_main_ref_store();
+ } else if (skip_prefix(argv[0], "submodule:", &gitdir)) {
+ struct strbuf sb = STRBUF_INIT;
+ int ret;
+
+ ret = strbuf_git_path_submodule(&sb, gitdir, "objects/");
+ if (ret)
+ die("strbuf_git_path_submodule failed: %d", ret);
+ add_to_alternates_memory(sb.buf);
+ strbuf_release(&sb);
+
+ *refs = get_submodule_ref_store(gitdir);
+ } else
+ die("unknown backend %s", argv[0]);
+
+ if (!*refs)
+ die("no ref store");
+
+ /* consume store-specific optional arguments if needed */
+
+ return argv + 1;
+}
+
+
+static int cmd_pack_refs(struct ref_store *refs, const char **argv)
+{
+ unsigned int flags = arg_flags(*argv++, "flags");
+
+ return refs_pack_refs(refs, flags);
+}
+
+static int cmd_peel_ref(struct ref_store *refs, const char **argv)
+{
+ const char *refname = notnull(*argv++, "refname");
+ unsigned char sha1[20];
+ int ret;
+
+ ret = refs_peel_ref(refs, refname, sha1);
+ if (!ret)
+ puts(sha1_to_hex(sha1));
+ return ret;
+}
+
+static int cmd_create_symref(struct ref_store *refs, const char **argv)
+{
+ const char *refname = notnull(*argv++, "refname");
+ const char *target = notnull(*argv++, "target");
+ const char *logmsg = *argv++;
+
+ return refs_create_symref(refs, refname, target, logmsg);
+}
+
+static int cmd_delete_refs(struct ref_store *refs, const char **argv)
+{
+ unsigned int flags = arg_flags(*argv++, "flags");
+ struct string_list refnames = STRING_LIST_INIT_NODUP;
+
+ while (*argv)
+ string_list_append(&refnames, *argv++);
+
+ return refs_delete_refs(refs, &refnames, flags);
+}
+
+static int cmd_rename_ref(struct ref_store *refs, const char **argv)
+{
+ const char *oldref = notnull(*argv++, "oldref");
+ const char *newref = notnull(*argv++, "newref");
+ const char *logmsg = *argv++;
+
+ return refs_rename_ref(refs, oldref, newref, logmsg);
+}
+
+static int each_ref(const char *refname, const struct object_id *oid,
+ int flags, void *cb_data)
+{
+ printf("%s %s 0x%x\n", oid_to_hex(oid), refname, flags);
+ return 0;
+}
+
+static int cmd_for_each_ref(struct ref_store *refs, const char **argv)
+{
+ const char *prefix = notnull(*argv++, "prefix");
+
+ return refs_for_each_ref_in(refs, prefix, each_ref, NULL);
+}
+
+static int cmd_resolve_ref(struct ref_store *refs, const char **argv)
+{
+ unsigned char sha1[20];
+ const char *refname = notnull(*argv++, "refname");
+ int resolve_flags = arg_flags(*argv++, "resolve-flags");
+ int flags;
+ const char *ref;
+
+ ref = refs_resolve_ref_unsafe(refs, refname, resolve_flags,
+ sha1, &flags);
+ printf("%s %s 0x%x\n", sha1_to_hex(sha1), ref, flags);
+ return ref ? 0 : 1;
+}
+
+static int cmd_verify_ref(struct ref_store *refs, const char **argv)
+{
+ const char *refname = notnull(*argv++, "refname");
+ struct strbuf err = STRBUF_INIT;
+ int ret;
+
+ ret = refs_verify_refname_available(refs, refname, NULL, NULL, &err);
+ if (err.len)
+ puts(err.buf);
+ return ret;
+}
+
+static int cmd_for_each_reflog(struct ref_store *refs, const char **argv)
+{
+ return refs_for_each_reflog(refs, each_ref, NULL);
+}
+
+static int each_reflog(struct object_id *old_oid, struct object_id *new_oid,
+ const char *committer, unsigned long timestamp,
+ int tz, const char *msg, void *cb_data)
+{
+ printf("%s %s %s %lu %d %s\n",
+ oid_to_hex(old_oid), oid_to_hex(new_oid),
+ committer, timestamp, tz, msg);
+ return 0;
+}
+
+static int cmd_for_each_reflog_ent(struct ref_store *refs, const char **argv)
+{
+ const char *refname = notnull(*argv++, "refname");
+
+ return refs_for_each_reflog_ent(refs, refname, each_reflog, refs);
+}
+
+static int cmd_for_each_reflog_ent_reverse(struct ref_store *refs, const char **argv)
+{
+ const char *refname = notnull(*argv++, "refname");
+
+ return refs_for_each_reflog_ent_reverse(refs, refname, each_reflog, refs);
+}
+
+static int cmd_reflog_exists(struct ref_store *refs, const char **argv)
+{
+ const char *refname = notnull(*argv++, "refname");
+
+ return !refs_reflog_exists(refs, refname);
+}
+
+static int cmd_create_reflog(struct ref_store *refs, const char **argv)
+{
+ const char *refname = notnull(*argv++, "refname");
+ int force_create = arg_flags(*argv++, "force-create");
+ struct strbuf err = STRBUF_INIT;
+ int ret;
+
+ ret = refs_create_reflog(refs, refname, force_create, &err);
+ if (err.len)
+ puts(err.buf);
+ return ret;
+}
+
+static int cmd_delete_reflog(struct ref_store *refs, const char **argv)
+{
+ const char *refname = notnull(*argv++, "refname");
+
+ return refs_delete_reflog(refs, refname);
+}
+
+static int cmd_reflog_expire(struct ref_store *refs, const char **argv)
+{
+ die("not supported yet");
+}
+
+static int cmd_delete_ref(struct ref_store *refs, const char **argv)
+{
+ const char *msg = notnull(*argv++, "msg");
+ const char *refname = notnull(*argv++, "refname");
+ const char *sha1_buf = notnull(*argv++, "old-sha1");
+ unsigned int flags = arg_flags(*argv++, "flags");
+ unsigned char old_sha1[20];
+
+ if (get_sha1_hex(sha1_buf, old_sha1))
+ die("not sha-1");
+
+ return refs_delete_ref(refs, msg, refname, old_sha1, flags);
+}
+
+static int cmd_update_ref(struct ref_store *refs, const char **argv)
+{
+ const char *msg = notnull(*argv++, "msg");
+ const char *refname = notnull(*argv++, "refname");
+ const char *new_sha1_buf = notnull(*argv++, "new-sha1");
+ const char *old_sha1_buf = notnull(*argv++, "old-sha1");
+ unsigned int flags = arg_flags(*argv++, "flags");
+ unsigned char old_sha1[20];
+ unsigned char new_sha1[20];
+
+ if (get_sha1_hex(old_sha1_buf, old_sha1) ||
+ get_sha1_hex(new_sha1_buf, new_sha1))
+ die("not sha-1");
+
+ return refs_update_ref(refs, msg, refname,
+ new_sha1, old_sha1,
+ flags, UPDATE_REFS_DIE_ON_ERR);
+}
+
+struct command {
+ const char *name;
+ int (*func)(struct ref_store *refs, const char **argv);
+};
+
+static struct command commands[] = {
+ { "pack-refs", cmd_pack_refs },
+ { "peel-ref", cmd_peel_ref },
+ { "create-symref", cmd_create_symref },
+ { "delete-refs", cmd_delete_refs },
+ { "rename-ref", cmd_rename_ref },
+ { "for-each-ref", cmd_for_each_ref },
+ { "resolve-ref", cmd_resolve_ref },
+ { "verify-ref", cmd_verify_ref },
+ { "for-each-reflog", cmd_for_each_reflog },
+ { "for-each-reflog-ent", cmd_for_each_reflog_ent },
+ { "for-each-reflog-ent-reverse", cmd_for_each_reflog_ent_reverse },
+ { "reflog-exists", cmd_reflog_exists },
+ { "create-reflog", cmd_create_reflog },
+ { "delete-reflog", cmd_delete_reflog },
+ { "reflog-expire", cmd_reflog_expire },
+ /*
+ * backend transaction functions can't be tested separately
+ */
+ { "delete-ref", cmd_delete_ref },
+ { "update-ref", cmd_update_ref },
+ { NULL, NULL }
+};
+
+int cmd_main(int argc, const char **argv)
+{
+ struct ref_store *refs;
+ const char *func;
+ struct command *cmd;
+
+ setup_git_directory();
+
+ argv = get_store(argv + 1, &refs);
+
+ func = *argv++;
+ if (!func)
+ die("ref function required");
+ for (cmd = commands; cmd->name; cmd++) {
+ if (!strcmp(func, cmd->name))
+ return cmd->func(refs, argv);
+ }
+ die("unknown function %s", func);
+ return 0;
+}
#include "cache.h"
#include "sha1-array.h"
-static int print_sha1(const unsigned char sha1[20], void *data)
+static int print_oid(const struct object_id *oid, void *data)
{
- puts(sha1_to_hex(sha1));
+ puts(oid_to_hex(oid));
return 0;
}
int cmd_main(int argc, const char **argv)
{
- struct sha1_array array = SHA1_ARRAY_INIT;
+ struct oid_array array = OID_ARRAY_INIT;
struct strbuf line = STRBUF_INIT;
while (strbuf_getline(&line, stdin) != EOF) {
const char *arg;
- unsigned char sha1[20];
+ struct object_id oid;
if (skip_prefix(line.buf, "append ", &arg)) {
- if (get_sha1_hex(arg, sha1))
+ if (get_oid_hex(arg, &oid))
die("not a hexadecimal SHA1: %s", arg);
- sha1_array_append(&array, sha1);
+ oid_array_append(&array, &oid);
} else if (skip_prefix(line.buf, "lookup ", &arg)) {
- if (get_sha1_hex(arg, sha1))
+ if (get_oid_hex(arg, &oid))
die("not a hexadecimal SHA1: %s", arg);
- printf("%d\n", sha1_array_lookup(&array, sha1));
+ printf("%d\n", oid_array_lookup(&array, &oid));
} else if (!strcmp(line.buf, "clear"))
- sha1_array_clear(&array);
+ oid_array_clear(&array);
else if (!strcmp(line.buf, "for_each_unique"))
- sha1_array_for_each_unique(&array, print_sha1, NULL);
+ oid_array_for_each_unique(&array, print_oid, NULL);
else
die("unknown command: %s", line.buf);
}
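
For illustration, and assuming the helper above is built as test-sha1-array, the converted array code can be exercised interactively; the object name below is just a placeholder:

    printf "%s\n" \
        "append 1234567890123456789012345678901234567890" \
        "append 1234567890123456789012345678901234567890" \
        "for_each_unique" | ./test-sha1-array
    # the name is printed once, since for_each_unique skips duplicates
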
--- /dev/null
+#!/bin/sh
+#
+# This test measures the performance of various read-tree
+# and status operations. It is primarily interested in
+# the algorithmic costs of index operations and recursive
+# tree traversal -- and NOT disk I/O on thousands of files.
+
+test_description="Tests performance of read-tree"
+
+. ./perf-lib.sh
+
+test_perf_default_repo
+
+# If the test repo was generated by ./repos/many-files.sh
+# then we know something about the data shape and branches,
+# so we can isolate testing to the ballast-related commits
+# and setup sparse-checkout so we don't have to populate
+# the ballast files and directories.
+#
+# Otherwise, we make some general assumptions about the
+# repo and consider the entire history of the current
+# branch to be the ballast.
+
+test_expect_success "setup repo" '
+ if git rev-parse --verify refs/heads/p0006-ballast^{commit}
+ then
+ echo Assuming synthetic repo from many-files.sh
+ git branch br_base master
+ git branch br_ballast p0006-ballast
+ git config --local core.sparsecheckout 1
+ cat >.git/info/sparse-checkout <<-EOF
+ /*
+ !ballast/*
+ EOF
+ else
+ echo Assuming non-synthetic repo...
+ git branch br_base $(git rev-list HEAD | tail -n 1)
+ git branch br_ballast HEAD
+ fi &&
+ git checkout -q br_ballast &&
+ nr_files=$(git ls-files | wc -l)
+'
+
+test_perf "read-tree status br_ballast ($nr_files)" '
+ git read-tree HEAD &&
+ git status
+'
+
+test_done
test_path_is_dir realgitdir/refs
'
+test_expect_success 'init in long base path' '
+ # exceed initial buffer size of strbuf_getcwd()
+ component=123456789abcdef &&
+ test_when_finished "chmod 0700 $component; rm -rf $component" &&
+ p31=$component/$component &&
+ p127=$p31/$p31/$p31/$p31 &&
+ mkdir -p $p127 &&
+ chmod 0111 $component &&
+ (
+ cd $p127 &&
+ git init newdir
+ )
+'
+
test_expect_success 're-init on .git file' '
( cd newdir && git init )
'
test -z "$LFwithNULdiff"
'
+test_expect_success 'prepare unnormalized' '
+ > .gitattributes &&
+ git config core.autocrlf false &&
+ printf "LINEONE\nLINETWO\r\n" >mixed &&
+ git add mixed .gitattributes &&
+ git commit -m "Add mixed" &&
+ git ls-files --eol | egrep "i/crlf" &&
+ git ls-files --eol | egrep "i/mixed"
+'
+
+test_expect_success 'normalize unnormalized' '
+ echo "* text=auto" >.gitattributes &&
+ rm .git/index &&
+ git add . &&
+ git commit -m "Introduce end-of-line normalization" &&
+ git ls-files --eol | tr "\\t" " " | sort >act &&
+cat >exp <<EOF &&
+i/-text w/-text attr/text=auto LFwithNUL
+i/lf w/crlf attr/text=auto CRLFonly
+i/lf w/crlf attr/text=auto LFonly
+i/lf w/lf attr/text=auto .gitattributes
+i/lf w/mixed attr/text=auto mixed
+EOF
+ test_cmp exp act
+'
+
test_done
cd bit-error &&
test_commit content &&
corrupt_byte HEAD:content.t 10
+ ) &&
+ git init no-bit-error &&
+ (
+ # distinct commit from bit-error, but containing a
+ # non-corrupted version of the same blob
+ cd no-bit-error &&
+ test_tick &&
+ test_commit content
)
'
)
'
+test_expect_success 'getting type of a corrupt blob fails' '
+ (
+ cd bit-error &&
+ test_must_fail git cat-file -s HEAD:content.t
+ )
+'
+
test_expect_success 'read-tree -u detects bit-errors in blobs' '
(
cd bit-error &&
test_must_fail git clone --local misnamed misnamed-checkout
'
+test_expect_success 'fetch into corrupted repo with index-pack' '
+ (
+ cd bit-error &&
+ test_must_fail git -c transfer.unpackLimit=1 \
+ fetch ../no-bit-error 2>stderr &&
+ test_i18ngrep ! -i collision stderr
+ )
+'
+
test_done
test_description='test config file include directives'
. ./test-lib.sh
+# Force setup_explicit_git_dir() to run until the end. This is needed
+# by some tests to make sure real_path() is called on $GIT_DIR. The
+# caller needs to make sure git commands are run from a subdirectory,
+# though, or real_path() will not be called.
+force_setup_explicit_git_dir() {
+ GIT_DIR="$(pwd)/.git"
+ GIT_WORK_TREE="$(pwd)"
+ export GIT_DIR GIT_WORK_TREE
+}
+
test_expect_success 'include file by absolute path' '
echo "[test]one = 1" >one &&
echo "[include]path = \"$(pwd)/one\"" >.gitconfig &&
)
'
+test_expect_success SYMLINKS 'conditional include, set up symlinked $HOME' '
+ mkdir real-home &&
+ ln -s real-home home &&
+ (
+ HOME="$TRASH_DIRECTORY/home" &&
+ export HOME &&
+ cd "$HOME" &&
+
+ git init foo &&
+ cd foo &&
+ mkdir sub
+ )
+'
+
+test_expect_success SYMLINKS 'conditional include, $HOME expansion with symlinks' '
+ (
+ HOME="$TRASH_DIRECTORY/home" &&
+ export HOME &&
+ cd "$HOME"/foo &&
+
+ echo "[includeIf \"gitdir:~/foo/\"]path=bar2" >>.git/config &&
+ echo "[test]two=2" >.git/bar2 &&
+ echo 2 >expect &&
+ force_setup_explicit_git_dir &&
+ git -C sub config test.two >actual &&
+ test_cmp expect actual
+ )
+'
+
+test_expect_success SYMLINKS 'conditional include, relative path with symlinks' '
+ echo "[includeIf \"gitdir:./foo/.git\"]path=bar4" >home/.gitconfig &&
+ echo "[test]four=4" >home/bar4 &&
+ (
+ HOME="$TRASH_DIRECTORY/home" &&
+ export HOME &&
+ cd "$HOME"/foo &&
+
+ echo 4 >expect &&
+ force_setup_explicit_git_dir &&
+ git -C sub config test.four >actual &&
+ test_cmp expect actual
+ )
+'
+
test_expect_success 'include cycles are detected' '
cat >.gitconfig <<-\EOF &&
[test]value = gitconfig
--- /dev/null
+#!/bin/sh
+
+test_description='test main ref store api'
+
+. ./test-lib.sh
+
+RUN="test-ref-store main"
+
+test_expect_success 'pack_refs(PACK_REFS_ALL | PACK_REFS_PRUNE)' '
+ test_commit one &&
+ N=`find .git/refs -type f | wc -l` &&
+ test "$N" != 0 &&
+ $RUN pack-refs 3 &&
+ N=`find .git/refs -type f | wc -l`
+'
+
+test_expect_success 'peel_ref(new-tag)' '
+ git rev-parse HEAD >expected &&
+ git tag -a -m new-tag new-tag HEAD &&
+ $RUN peel-ref refs/tags/new-tag >actual &&
+ test_cmp expected actual
+'
+
+test_expect_success 'create_symref(FOO, refs/heads/master)' '
+ $RUN create-symref FOO refs/heads/master nothing &&
+ echo refs/heads/master >expected &&
+ git symbolic-ref FOO >actual &&
+ test_cmp expected actual
+'
+
+test_expect_success 'delete_refs(FOO, refs/tags/new-tag)' '
+ git rev-parse FOO -- &&
+ git rev-parse refs/tags/new-tag -- &&
+ $RUN delete-refs 0 FOO refs/tags/new-tag &&
+ test_must_fail git rev-parse FOO -- &&
+ test_must_fail git rev-parse refs/tags/new-tag --
+'
+
+test_expect_success 'rename_refs(master, new-master)' '
+ git rev-parse master >expected &&
+ $RUN rename-ref refs/heads/master refs/heads/new-master &&
+ git rev-parse new-master >actual &&
+ test_cmp expected actual &&
+ test_commit recreate-master
+'
+
+test_expect_success 'for_each_ref(refs/heads/)' '
+ $RUN for-each-ref refs/heads/ | cut -c 42- >actual &&
+ cat >expected <<-\EOF &&
+ master 0x0
+ new-master 0x0
+ EOF
+ test_cmp expected actual
+'
+
+test_expect_success 'for_each_ref() is sorted' '
+ $RUN for-each-ref refs/heads/ | cut -c 42- >actual &&
+ sort actual > expected &&
+ test_cmp expected actual
+'
+
+test_expect_success 'resolve_ref(new-master)' '
+ SHA1=`git rev-parse new-master` &&
+ echo "$SHA1 refs/heads/new-master 0x0" >expected &&
+ $RUN resolve-ref refs/heads/new-master 0 >actual &&
+ test_cmp expected actual
+'
+
+test_expect_success 'verify_ref(new-master)' '
+ $RUN verify-ref refs/heads/new-master
+'
+
+test_expect_success 'for_each_reflog()' '
+ $RUN for-each-reflog | sort | cut -c 42- >actual &&
+ cat >expected <<-\EOF &&
+ HEAD 0x1
+ refs/heads/master 0x0
+ refs/heads/new-master 0x0
+ EOF
+ test_cmp expected actual
+'
+
+test_expect_success 'for_each_reflog_ent()' '
+ $RUN for-each-reflog-ent HEAD >actual &&
+ head -n1 actual | grep one &&
+ tail -n2 actual | head -n1 | grep recreate-master
+'
+
+test_expect_success 'for_each_reflog_ent_reverse()' '
+ $RUN for-each-reflog-ent-reverse HEAD >actual &&
+ head -n1 actual | grep recreate-master &&
+ tail -n2 actual | head -n1 | grep one
+'
+
+test_expect_success 'reflog_exists(HEAD)' '
+ $RUN reflog-exists HEAD
+'
+
+test_expect_success 'delete_reflog(HEAD)' '
+ $RUN delete-reflog HEAD &&
+ ! test -f .git/logs/HEAD
+'
+
+test_expect_success 'create-reflog(HEAD)' '
+ $RUN create-reflog HEAD 1 &&
+ test -f .git/logs/HEAD
+'
+
+test_expect_success 'update_ref(refs/heads/foo)' '
+ git checkout -b foo &&
+ FOO_SHA1=`git rev-parse foo` &&
+ git checkout --detach &&
+ test_commit bar-commit &&
+ git checkout -b bar &&
+ BAR_SHA1=`git rev-parse bar` &&
+ $RUN update-ref updating refs/heads/foo $BAR_SHA1 $FOO_SHA1 0 &&
+ echo $BAR_SHA1 >expected &&
+ git rev-parse refs/heads/foo >actual &&
+ test_cmp expected actual
+'
+
+test_expect_success 'delete_ref(refs/heads/foo)' '
+ SHA1=`git rev-parse foo` &&
+ git checkout --detach &&
+ $RUN delete-ref msg refs/heads/foo $SHA1 0 &&
+ test_must_fail git rev-parse refs/heads/foo --
+'
+
+test_done
--- /dev/null
+#!/bin/sh
+
+test_description='test submodule ref store api'
+
+. ./test-lib.sh
+
+RUN="test-ref-store submodule:sub"
+
+test_expect_success 'setup' '
+ git init sub &&
+ (
+ cd sub &&
+ test_commit first &&
+ git checkout -b new-master
+ )
+'
+
+test_expect_success 'pack_refs() not allowed' '
+ test_must_fail $RUN pack-refs 3
+'
+
+test_expect_success 'peel_ref(new-tag)' '
+ git -C sub rev-parse HEAD >expected &&
+ git -C sub tag -a -m new-tag new-tag HEAD &&
+ $RUN peel-ref refs/tags/new-tag >actual &&
+ test_cmp expected actual
+'
+
+test_expect_success 'create_symref() not allowed' '
+ test_must_fail $RUN create-symref FOO refs/heads/master nothing
+'
+
+test_expect_success 'delete_refs() not allowed' '
+ test_must_fail $RUN delete-refs 0 FOO refs/tags/new-tag
+'
+
+test_expect_success 'rename_refs() not allowed' '
+ test_must_fail $RUN rename-ref refs/heads/master refs/heads/new-master
+'
+
+test_expect_success 'for_each_ref(refs/heads/)' '
+ $RUN for-each-ref refs/heads/ | cut -c 42- >actual &&
+ cat >expected <<-\EOF &&
+ master 0x0
+ new-master 0x0
+ EOF
+ test_cmp expected actual
+'
+
+test_expect_success 'for_each_ref() is sorted' '
+ $RUN for-each-ref refs/heads/ | cut -c 42- >actual &&
+ sort actual > expected &&
+ test_cmp expected actual
+'
+
+test_expect_success 'resolve_ref(master)' '
+ SHA1=`git -C sub rev-parse master` &&
+ echo "$SHA1 refs/heads/master 0x0" >expected &&
+ $RUN resolve-ref refs/heads/master 0 >actual &&
+ test_cmp expected actual
+'
+
+test_expect_success 'verify_ref(new-master)' '
+ $RUN verify-ref refs/heads/new-master
+'
+
+test_expect_success 'for_each_reflog()' '
+ $RUN for-each-reflog | sort | cut -c 42- >actual &&
+ cat >expected <<-\EOF &&
+ HEAD 0x1
+ refs/heads/master 0x0
+ refs/heads/new-master 0x0
+ EOF
+ test_cmp expected actual
+'
+
+test_expect_success 'for_each_reflog_ent()' '
+ $RUN for-each-reflog-ent HEAD >actual && cat actual &&
+ head -n1 actual | grep first &&
+ tail -n2 actual | head -n1 | grep master.to.new
+'
+
+test_expect_success 'for_each_reflog_ent_reverse()' '
+ $RUN for-each-reflog-ent-reverse HEAD >actual &&
+ head -n1 actual | grep master.to.new &&
+ tail -n2 actual | head -n1 | grep first
+'
+
+test_expect_success 'reflog_exists(HEAD)' '
+ $RUN reflog-exists HEAD
+'
+
+test_expect_success 'delete_reflog() not allowed' '
+ test_must_fail $RUN delete-reflog HEAD
+'
+
+test_expect_success 'create-reflog() not allowed' '
+ test_must_fail $RUN create-reflog HEAD 1
+'
+
+test_done
}
test_expect_success '@{upstream} resolves to correct full name' '
- test refs/remotes/origin/master = "$(full_name @{upstream})"
+ test refs/remotes/origin/master = "$(full_name @{upstream})" &&
+ test refs/remotes/origin/master = "$(full_name @{UPSTREAM})" &&
+ test refs/remotes/origin/master = "$(full_name @{UpSTReam})"
'
test_expect_success '@{u} resolves to correct full name' '
- test refs/remotes/origin/master = "$(full_name @{u})"
+ test refs/remotes/origin/master = "$(full_name @{u})" &&
+ test refs/remotes/origin/master = "$(full_name @{U})"
'
test_expect_success 'my-side@{upstream} resolves to correct full name' '
test_expect_success 'upstream of branch with @ in middle' '
full_name fun@ny@{u} >actual &&
echo refs/remotes/origin/side >expect &&
+ test_cmp expect actual &&
+ full_name fun@ny@{U} >actual &&
test_cmp expect actual
'
test_expect_success '<branch>@{u}@{1} resolves correctly' '
test_commit 6 &&
(cd clone && git fetch) &&
- test 5 = $(commit_subject my-side@{u}@{1})
+ test 5 = $(commit_subject my-side@{u}@{1}) &&
+ test 5 = $(commit_subject my-side@{U}@{1})
'
test_expect_success '@{u} without specifying branch fails on a detached HEAD' '
git checkout HEAD^0 &&
- test_must_fail git rev-parse @{u}
+ test_must_fail git rev-parse @{u} &&
+ test_must_fail git rev-parse @{U}
'
test_expect_success 'checkout -b new my-side@{u} forks from the same' '
test_expect_success '@{push} with default=nothing' '
test_config push.default nothing &&
- test_must_fail git rev-parse master@{push}
+ test_must_fail git rev-parse master@{push} &&
+ test_must_fail git rev-parse master@{PUSH} &&
+ test_must_fail git rev-parse master@{PuSH}
'
test_expect_success '@{push} with default=simple' '
test_config push.default simple &&
- resolve master@{push} refs/remotes/origin/master
+ resolve master@{push} refs/remotes/origin/master &&
+ resolve master@{PUSH} refs/remotes/origin/master &&
+ resolve master@{pUSh} refs/remotes/origin/master
'
test_expect_success 'triangular @{push} fails with default=simple' '
test_cmp expect actual
'
+test_expect_success '--recurse-submodules and relative paths' '
+ # From subdir
+ cat >expect <<-\EOF &&
+ b
+ EOF
+ git -C b ls-files --recurse-submodules >actual &&
+ test_cmp expect actual &&
+
+ # Relative path to top
+ cat >expect <<-\EOF &&
+ ../.gitmodules
+ ../a
+ b
+ ../h.txt
+ ../sib/file
+ ../sub/file
+ ../submodule/.gitmodules
+ ../submodule/c
+ ../submodule/f.TXT
+ ../submodule/g.txt
+ ../submodule/subsub/d
+ ../submodule/subsub/e.txt
+ EOF
+ git -C b ls-files --recurse-submodules -- .. >actual &&
+ test_cmp expect actual &&
+
+ # Relative path to submodule
+ cat >expect <<-\EOF &&
+ ../submodule/.gitmodules
+ ../submodule/c
+ ../submodule/f.TXT
+ ../submodule/g.txt
+ ../submodule/subsub/d
+ ../submodule/subsub/e.txt
+ EOF
+ git -C b ls-files --recurse-submodules -- ../submodule >actual &&
+ test_cmp expect actual
+'
+
test_expect_success '--recurse-submodules does not support --error-unmatch' '
test_must_fail git ls-files --recurse-submodules --error-unmatch 2>actual &&
test_i18ngrep "does not support --error-unmatch" actual
--- /dev/null
+#!/bin/sh
+
+test_description='Test the lazy init name hash with various folder structures'
+
+. ./test-lib.sh
+
+if test 1 -eq $($GIT_BUILD_DIR/t/helper/test-online-cpus)
+then
+ skip_all='skipping lazy-init tests, single cpu'
+ test_done
+fi
+
+LAZY_THREAD_COST=2000
+
+test_expect_success 'no buffer overflow in lazy_init_name_hash' '
+ (
+ test_seq $LAZY_THREAD_COST | sed "s/^/a_/"
+ echo b/b/b
+ test_seq $LAZY_THREAD_COST | sed "s/^/c_/"
+ test_seq 50 | sed "s/^/d_/" | tr "\n" "/"; echo d
+ ) |
+ sed "s/^/100644 $EMPTY_BLOB /" |
+ git update-index --index-info &&
+ test-lazy-init-name-hash -m
+'
+
+test_done
test_must_fail git branch --merged 0000000000000000000000000000000000000000
'
+test_expect_success '--merged is incompatible with --no-merged' '
+ test_must_fail git branch --merged HEAD --no-merged HEAD
+'
+
test_expect_success 'tracking with unexpected .fetch refspec' '
rm -rf a b c d &&
git init a &&
#!/bin/sh
-test_description='branch --contains <commit>, --merged, and --no-merged'
+test_description='branch --contains <commit>, --no-contains <commit>, --merged, and --no-merged'
. ./test-lib.sh
'
+test_expect_success 'branch --no-contains=master' '
+
+ git branch --no-contains=master >actual &&
+ >expect &&
+ test_cmp expect actual
+
+'
+
+test_expect_success 'branch --no-contains master' '
+
+ git branch --no-contains master >actual &&
+ >expect &&
+ test_cmp expect actual
+
+'
+
test_expect_success 'branch --contains=side' '
git branch --contains=side >actual &&
'
+test_expect_success 'branch --no-contains=side' '
+
+ git branch --no-contains=side >actual &&
+ {
+ echo " master"
+ } >expect &&
+ test_cmp expect actual
+
+'
+
test_expect_success 'branch --contains with pattern implies --list' '
git branch --contains=master master >actual &&
'
+test_expect_success 'branch --no-contains with pattern implies --list' '
+
+ git branch --no-contains=master master >actual &&
+ >expect &&
+ test_cmp expect actual
+
+'
+
test_expect_success 'side: branch --merged' '
git branch --merged >actual &&
test_expect_success 'implicit --list conflicts with modification options' '
test_must_fail git branch --contains=master -d &&
- test_must_fail git branch --contains=master -m foo
+ test_must_fail git branch --contains=master -m foo &&
+ test_must_fail git branch --no-contains=master -d &&
+ test_must_fail git branch --no-contains=master -m foo
+
+'
+test_expect_success 'Assert that --contains only works on commits, not trees & blobs' '
+ test_must_fail git branch --contains master^{tree} &&
+ blob=$(git hash-object -w --stdin <<-\EOF
+ Some blob
+ EOF
+ ) &&
+ test_must_fail git branch --contains $blob &&
+ test_must_fail git branch --no-contains $blob
'
# We want to set up a case where the walk for the tracking info
test_i18ncmp expect actual
'
+test_expect_success 'branch --contains combined with --no-contains' '
+ git branch --contains zzz --no-contains topic >actual &&
+ cat >expect <<-\EOF &&
+ master
+ side
+ zzz
+ EOF
+ test_cmp expect actual
+
+'
+
test_done
M submod
EOF
+cat >expect.modified_inside <<EOF
+ m submod
+EOF
+
+cat >expect.modified_untracked <<EOF
+ ? submod
+EOF
+
cat >expect.cached <<EOF
D submod
EOF
test -d submod &&
test -f submod/.git &&
git status -s -uno --ignore-submodules=none >actual &&
- test_cmp expect.modified actual &&
+ test_cmp expect.modified_inside actual &&
git rm -f submod &&
test ! -d submod &&
git status -s -uno --ignore-submodules=none >actual &&
test -d submod &&
test -f submod/.git &&
git status -s -uno --ignore-submodules=none >actual &&
- test_cmp expect.modified actual &&
+ test_cmp expect.modified_untracked actual &&
git rm -f submod &&
test ! -d submod &&
git status -s -uno --ignore-submodules=none >actual &&
test -d submod &&
test -f submod/.git &&
git status -s -uno --ignore-submodules=none >actual &&
- test_cmp expect.modified actual &&
+ test_cmp expect.modified_inside actual &&
git rm -f submod &&
test ! -d submod &&
git status -s -uno --ignore-submodules=none >actual &&
test -d submod &&
test -f submod/.git &&
git status -s -uno --ignore-submodules=none >actual &&
- test_cmp expect.modified actual &&
+ test_cmp expect.modified_inside actual &&
git rm -f submod &&
test ! -d submod &&
git status -s -uno --ignore-submodules=none >actual &&
test -d submod &&
test -f submod/.git &&
git status -s -uno --ignore-submodules=none >actual &&
- test_cmp expect.modified actual &&
+ test_cmp expect.modified_untracked actual &&
git rm -f submod &&
test ! -d submod &&
git status -s -uno --ignore-submodules=none >actual &&
test_cmp expected actual
'
+test_expect_success 'setup nested submodule' '
+ git submodule add -f ./sm2 &&
+ git commit -a -m "add sm2" &&
+ git -C sm2 submodule add ../sm2 nested &&
+ git -C sm2 commit -a -m "nested sub"
+'
+
+test_expect_success 'move nested submodule HEAD' '
+ echo "nested content" >sm2/nested/file &&
+ git -C sm2/nested add file &&
+ git -C sm2/nested commit --allow-empty -m "new HEAD"
+'
+
+test_expect_success 'diff --submodule=diff with moved nested submodule HEAD' '
+ cat >expected <<-EOF &&
+ Submodule nested a5a65c9..b55928c:
+ diff --git a/nested/file b/nested/file
+ new file mode 100644
+ index 0000000..ca281f5
+ --- /dev/null
+ +++ b/nested/file
+ @@ -0,0 +1 @@
+ +nested content
+ EOF
+ git -C sm2 diff --submodule=diff >actual 2>err &&
+ test_must_be_empty err &&
+ test_cmp expected actual
+'
+
test_done
rm -fr .git/rebase-apply &&
git checkout -f first &&
echo one >> file &&
- git commit -am "$LONG" --author="$LONG <long@example.com>" &&
+ git commit -am "$LONG
+
+ Body test" --author="$LONG <long@example.com>" &&
git format-patch --stdout -1 >patch &&
# bump from, date, and subject down to in-body header
perl -lpe "
git am msg &&
# Ensure that the author and full message are present
git cat-file commit HEAD | grep "^author.*long@example.com" &&
- git cat-file commit HEAD | grep "^$LONG"
+ git cat-file commit HEAD | grep "^$LONG$"
'
test_done
. ./test-lib.sh
. "$TEST_DIRECTORY/lib-gpg.sh"
+. "$TEST_DIRECTORY/lib-terminal.sh"
test_expect_success setup '
'
test_expect_success 'log.decorate configuration' '
- git log --oneline >expect.none &&
+ git log --oneline --no-decorate >expect.none &&
git log --oneline --decorate >expect.short &&
git log --oneline --decorate=full >expect.full &&
'
+test_expect_success TTY 'log output on a TTY' '
+ git log --oneline --decorate >expect.short &&
+
+ test_terminal git log --oneline >actual &&
+ test_cmp expect.short actual
+'
+
test_expect_success 'reflog is expected format' '
git log -g --abbrev-commit --pretty=oneline >expect &&
git reflog >actual &&
test_cmp expected_pub actual_pub
'
+test_expect_success 'push propagating the remotes name to a submodule' '
+ git -C work remote add origin ../pub.git &&
+ git -C work remote add pub ../pub.git &&
+
+ > work/gar/bage/junk10 &&
+ git -C work/gar/bage add junk10 &&
+ git -C work/gar/bage commit -m "Tenth junk" &&
+ git -C work add gar/bage &&
+ git -C work commit -m "Tenth junk added to gar/bage" &&
+
+ # Fails when submodule does not have a matching remote
+ test_must_fail git -C work push --recurse-submodules=on-demand pub master &&
+ # Succeeds when submodules has matching remote and refspec
+ git -C work push --recurse-submodules=on-demand origin master &&
+
+ git -C submodule.git rev-parse master >actual_submodule &&
+ git -C pub.git rev-parse master >actual_pub &&
+ git -C work/gar/bage rev-parse master >expected_submodule &&
+ git -C work rev-parse master >expected_pub &&
+ test_cmp expected_submodule actual_submodule &&
+ test_cmp expected_pub actual_pub
+'
+
+test_expect_success 'push propagating refspec to a submodule' '
+ > work/gar/bage/junk11 &&
+ git -C work/gar/bage add junk11 &&
+ git -C work/gar/bage commit -m "Eleventh junk" &&
+
+ git -C work checkout branch2 &&
+ git -C work add gar/bage &&
+ git -C work commit -m "updating gar/bage in branch2" &&
+
+ # Fails when submodule does not have a matching branch
+ test_must_fail git -C work push --recurse-submodules=on-demand origin branch2 &&
+ # Fails when refspec includes an object id
+ test_must_fail git -C work push --recurse-submodules=on-demand origin \
+ "$(git -C work rev-parse branch2):refs/heads/branch2" &&
+ # Fails when refspec includes 'HEAD' as it is unsupported at this time
+ test_must_fail git -C work push --recurse-submodules=on-demand origin \
+ HEAD:refs/heads/branch2 &&
+
+ git -C work/gar/bage branch branch2 master &&
+ git -C work push --recurse-submodules=on-demand origin branch2 &&
+
+ git -C submodule.git rev-parse branch2 >actual_submodule &&
+ git -C pub.git rev-parse branch2 >actual_pub &&
+ git -C work/gar/bage rev-parse branch2 >expected_submodule &&
+ git -C work rev-parse branch2 >expected_pub &&
+ test_cmp expected_submodule actual_submodule &&
+ test_cmp expected_pub actual_pub
+'
+
test_done
test_cmp expect actual
'
+test_expect_success 'push options and submodules' '
+ test_when_finished "rm -rf parent" &&
+ test_when_finished "rm -rf parent_upstream" &&
+ mk_repo_pair &&
+ git -C upstream config receive.advertisePushOptions true &&
+ cp -r upstream parent_upstream &&
+ test_commit -C upstream one &&
+
+ test_create_repo parent &&
+ git -C parent remote add up ../parent_upstream &&
+ test_commit -C parent one &&
+ git -C parent push --mirror up &&
+
+ git -C parent submodule add ../upstream workbench &&
+ git -C parent/workbench remote add up ../../upstream &&
+ git -C parent commit -m "add submodule" &&
+
+ test_commit -C parent/workbench two &&
+ git -C parent add workbench &&
+ git -C parent commit -m "update workbench" &&
+
+ git -C parent push \
+ --push-option=asdf --push-option="more structured text" \
+ --recurse-submodules=on-demand up master &&
+
+ git -C upstream rev-parse --verify master >expect &&
+ git -C parent/workbench rev-parse --verify master >actual &&
+ test_cmp expect actual &&
+
+ git -C parent_upstream rev-parse --verify master >expect &&
+ git -C parent rev-parse --verify master >actual &&
+ test_cmp expect actual &&
+
+ printf "asdf\nmore structured text\n" >expect &&
+ test_cmp expect upstream/.git/hooks/pre-receive.push_options &&
+ test_cmp expect upstream/.git/hooks/post-receive.push_options &&
+ test_cmp expect parent_upstream/.git/hooks/pre-receive.push_options &&
+ test_cmp expect parent_upstream/.git/hooks/post-receive.push_options
+'
+
stop_httpd
test_done
expect_ssh "-batch -P 123" myhost src
'
+test_expect_success 'clean failure on broken quoting' '
+ test_must_fail \
+ env GIT_SSH_COMMAND="${SQ}plink.exe -v" \
+ git clone "[myhost:123]:src" sq-failure
+'
+
# Reset the GIT_SSH environment variable for clone tests.
setup_ssh_wrapper
test_cmp expect actual
'
+test_expect_success 'filtering with --no-contains' '
+ cat >expect <<-\EOF &&
+ refs/tags/one
+ EOF
+ git for-each-ref --format="%(refname)" --no-contains=two >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'filtering with --contains and --no-contains' '
+ cat >expect <<-\EOF &&
+ refs/tags/two
+ EOF
+ git for-each-ref --format="%(refname)" --contains=two --no-contains=three >actual &&
+ test_cmp expect actual
+'
+
test_expect_success '%(color) must fail' '
test_must_fail git for-each-ref --format="%(color)%(refname)"
'
test_cmp expect actual
'
+test_expect_success '--merged is incompatible with --no-merged' '
+ test_must_fail git for-each-ref --merged HEAD --no-merged HEAD
+'
+
test_done
test_line_count = 2 new # There is one new pack and its .idx
'
+run_and_wait_for_auto_gc () {
+ # We read stdout from gc for the side effect of waiting until the
+ # background gc process exits, closing its fd 9. Furthermore, the
+ # variable assignment from a command substitution preserves the
+ # exit status of the main gc process.
+ # Note: this fd trickery doesn't work on Windows, but there is no
+ # need for it, because on Windows the auto gc always runs in the foreground.
+ doesnt_matter=$(git gc --auto 9>&1)
+}
+
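+# A stand-alone sketch of the same fd trick (illustrative only, not part of
+# this patch), assuming a background job that inherits fd 9 but redirects its
+# stdout away: the command substitution cannot see EOF until the last writer
+# to the pipe (the inherited fd 9) exits, and the variable assignment keeps
+# the exit status of the foreground command:
+#
+#	wait_for_detached_child () {
+#		out=$( { sleep 1 >/dev/null & } 9>&1 )
+#	}
+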
test_expect_success 'background auto gc does not run if gc.log is present and recent but does if it is old' '
test_commit foo &&
test_commit bar &&
test-chmtime =-345600 .git/gc.log &&
test_must_fail git gc --auto &&
test_config gc.logexpiry 2.days &&
- git gc --auto
+ run_and_wait_for_auto_gc &&
+ ls .git/objects/pack/pack-*.pack >packs &&
+ test_line_count = 1 packs
'
+# DO NOT leave a detached auto gc process running near the end of the
+# test script: it can run long enough in the background to racily
+# interfere with the cleanup in 'test_done'.
+
test_done
git show-ref --quiet --verify refs/tags/"$1"
}
-# todo: git tag -l now returns always zero, when fixed, change this test
test_expect_success 'listing all tags in an empty tree should succeed' '
git tag -l &&
git tag
git tag
'
+cat >expect <<EOF
+mytag
+EOF
+test_expect_success 'Multiple -l or --list options are equivalent to one -l option' '
+ git tag -l -l >actual &&
+ test_cmp expect actual &&
+ git tag --list --list >actual &&
+ test_cmp expect actual &&
+ git tag --list -l --list >actual &&
+ test_cmp expect actual
+'
+
test_expect_success 'listing all tags if one exists should output that tag' '
test $(git tag -l) = mytag &&
test $(git tag) = mytag
'listing a tag using a matching pattern should output that tag' \
'test $(git tag -l mytag) = mytag'
-# todo: git tag -l now returns always zero, when fixed, change this test
test_expect_success \
- 'listing tags using a non-matching pattern should suceed' \
+ 'listing tags using a non-matching pattern should succeed' \
'git tag -l xxx'
test_expect_success \
test_cmp expect actual
'
+# Between v1.7.7 & v2.13.0 a fair reading of the git-tag documentation
+# could leave you with the impression that "-l <pattern> -l <pattern>"
+# was how we wanted to accept multiple patterns.
+#
+# This test should not imply that this is a sane thing to support, but
+# since the documentation was worded like it was, let's at least find
+# out if we're going to break this long-documented form of taking
+# multiple patterns.
+test_expect_success 'tag -l <pattern> -l <pattern> works, as our buggy documentation previously suggested' '
+ git tag -l "v1*" -l "v0*" >actual &&
+ test_cmp expect actual
+'
+
test_expect_success 'listing tags in column' '
COLUMNS=40 git tag -l --column=row >actual &&
cat >expected <<\EOF &&
git tag -n0 -l tag-one-line >actual &&
test_cmp expect actual &&
+ git tag -n0 | grep "^tag-one-line" >actual &&
+ test_cmp expect actual &&
+ git tag -n0 tag-one-line >actual &&
+ test_cmp expect actual &&
+
echo "tag-one-line A msg" >expect &&
git tag -n1 -l | grep "^tag-one-line" >actual &&
test_cmp expect actual &&
test_cmp expect actual
'
+test_expect_success 'The -n 100 invocation means -n --list 100, not -n100' '
+ >expect &&
+ git tag -n 100 >actual &&
+ test_cmp expect actual &&
+
+ git tag -m "A msg" 100 &&
+ echo "100 A msg" >expect &&
+ git tag -n 100 >actual &&
+ test_cmp expect actual
+'
+
test_expect_success \
'listing the zero-lines message of a non-signed tag should succeed' '
git tag -m "" tag-zero-lines &&
test_cmp expected actual
"
+# All the --contains tests above, but with --no-contains
+test_expect_success 'checking that first commit is not listed in any tag with --no-contains (hash)' "
+ >expected &&
+ git tag -l --no-contains $hash1 v* >actual &&
+ test_cmp expected actual
+"
+
+test_expect_success 'checking that first commit is in all tags (tag)' "
+ git tag -l --no-contains v1.0 v* >actual &&
+ test_cmp expected actual
+"
+
+test_expect_success 'checking that first commit is in all tags (relative)' "
+ git tag -l --no-contains HEAD~2 v* >actual &&
+ test_cmp expected actual
+"
+
cat > expected <<EOF
v2.0
EOF
test_cmp expected actual
"
+cat > expected <<EOF
+v0.2.1
+v1.0
+v1.0.1
+v1.1.3
+EOF
+
+test_expect_success 'inverse of the last test, with --no-contains' "
+ git tag -l --no-contains $hash2 v* >actual &&
+ test_cmp expected actual
+"
cat > expected <<EOF
EOF
test_cmp expected actual
"
+cat > expected <<EOF
+v0.2.1
+v1.0
+v1.0.1
+v1.1.3
+v2.0
+EOF
+
+test_expect_success 'conversely --no-contains on the third commit lists all tags' "
+ git tag -l --no-contains $hash3 v* >actual &&
+ test_cmp expected actual
+"
+
# how about a simple merge?
test_expect_success 'creating simple branch' '
test_cmp expected actual
"
+cat > expected <<EOF
+v0.2.1
+v1.0
+v1.0.1
+v1.1.3
+v2.0
+EOF
+
+test_expect_success 'checking that branch head with --no-contains lists all but one tag' "
+ git tag -l --no-contains $hash4 v* >actual &&
+ test_cmp expected actual
+"
+
test_expect_success 'merging original branch into this branch' '
git merge --strategy=ours master &&
git tag v4.0
test_cmp expected actual
"
+cat > expected <<EOF
+v0.2.1
+v1.0
+v1.0.1
+v1.1.3
+v2.0
+v3.0
+EOF
+
+test_expect_success 'checking that original branch head with --no-contains lists all but one tag now' "
+ git tag -l --no-contains $hash3 v* >actual &&
+ test_cmp expected actual
+"
+
cat > expected <<EOF
v0.2.1
v1.0
test_cmp expected actual
"
+test_expect_success 'checking that --contains can be used in non-list mode' '
+ git tag --contains $hash1 v* >actual &&
+ test_cmp expected actual
+'
+
+test_expect_success 'checking that initial commit is in all tags with --no-contains' "
+ >expected &&
+ git tag -l --no-contains $hash1 v* >actual &&
+ test_cmp expected actual
+"
+
# mixing modes and options:
test_expect_success 'mixing incompatibles modes and options is forbidden' '
test_must_fail git tag -a &&
+ test_must_fail git tag -a -l &&
+ test_must_fail git tag -s &&
+ test_must_fail git tag -s -l &&
+ test_must_fail git tag -m &&
+ test_must_fail git tag -m -l &&
+ test_must_fail git tag -m "hlagh" &&
+ test_must_fail git tag -m "hlagh" -l &&
+ test_must_fail git tag -F &&
+ test_must_fail git tag -F -l &&
+ test_must_fail git tag -f &&
+ test_must_fail git tag -f -l &&
+ test_must_fail git tag -a -s -m -F &&
+ test_must_fail git tag -a -s -m -F -l &&
test_must_fail git tag -l -v &&
- test_must_fail git tag -n 100 &&
+ test_must_fail git tag -l -d &&
+ test_must_fail git tag -l -v -d &&
+ test_must_fail git tag -n 100 -v &&
test_must_fail git tag -l -m msg &&
test_must_fail git tag -l -F some file &&
- test_must_fail git tag -v -s
-'
+ test_must_fail git tag -v -s &&
+ test_must_fail git tag --contains tag-tree &&
+ test_must_fail git tag --contains tag-blob &&
+ test_must_fail git tag --no-contains tag-tree &&
+ test_must_fail git tag --no-contains tag-blob &&
+ test_must_fail git tag --contains --no-contains &&
+ test_must_fail git tag --no-with HEAD &&
+ test_must_fail git tag --no-without HEAD
+'
+
+for option in --contains --with --no-contains --without --merged --no-merged --points-at
+do
+ test_expect_success "mixing incompatible modes with $option is forbidden" "
+ test_must_fail git tag -d $option HEAD &&
+ test_must_fail git tag -d $option HEAD some-tag &&
+ test_must_fail git tag -v $option HEAD
+ "
+ test_expect_success "Doing 'git tag --list-like $option <commit> <pattern>' is permitted" "
+ git tag -n $option HEAD HEAD &&
+ git tag $option HEAD HEAD &&
+ git tag $option
+ "
+done
# check points-at
-test_expect_success '--points-at cannot be used in non-list mode' '
- test_must_fail git tag --points-at=v4.0 foo
+test_expect_success '--points-at can be used in non-list mode' '
+ echo v4.0 >expect &&
+ git tag --points-at=v4.0 "v*" >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success '--points-at is a synonym for --points-at HEAD' '
+ echo v4.0 >expect &&
+ git tag --points-at >actual &&
+ test_cmp expect actual
'
test_expect_success '--points-at finds lightweight tags' '
test_lazy_prereq ULIMIT_STACK_SIZE 'run_with_limited_stack true'
# we require ulimit, this excludes Windows
-test_expect_success ULIMIT_STACK_SIZE '--contains works in a deep repo' '
+test_expect_success ULIMIT_STACK_SIZE '--contains and --no-contains work in a deep repo' '
>expect &&
i=1 &&
while test $i -lt 8000
git checkout master &&
git tag far-far-away HEAD^ &&
run_with_limited_stack git tag --contains HEAD >actual &&
- test_cmp expect actual
+ test_cmp expect actual &&
+ run_with_limited_stack git tag --no-contains HEAD >actual &&
+ test_line_count ">" 10 actual
'
test_expect_success '--format should list tags as per format given' '
git tag mergetest-3 HEAD
'
-test_expect_success '--merged cannot be used in non-list mode' '
- test_must_fail git tag --merged=mergetest-2 foo
+test_expect_success '--merged can be used in non-list mode' '
+ cat >expect <<-\EOF &&
+ mergetest-1
+ mergetest-2
+ EOF
+ git tag --merged=mergetest-2 "mergetest*" >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success '--merged is incompatible with --no-merged' '
+ test_must_fail git tag --merged HEAD --no-merged HEAD
'
test_expect_success '--merged shows merged tags' '
test_cmp expect actual
'
+test_expect_success '--no-merged can be used in non-list mode' '
+ git tag --no-merged=mergetest-2 mergetest-* >actual &&
+ test_cmp expect actual
+'
+
test_expect_success 'ambiguous branch/tags not marked' '
git tag ambiguous &&
git branch ambiguous &&
test_cmp expect actual
'
+test_expect_success '--contains combined with --no-contains' '
+ (
+ git init no-contains &&
+ cd no-contains &&
+ test_commit v0.1 &&
+ test_commit v0.2 &&
+ test_commit v0.3 &&
+ test_commit v0.4 &&
+ test_commit v0.5 &&
+ cat >expected <<-\EOF &&
+ v0.2
+ v0.3
+ v0.4
+ EOF
+ git tag --contains v0.2 --no-contains v0.5 >actual &&
+ test_cmp expected actual
+ )
+'
+
+# As the docs say, list tags which contain a specified *commit*. We
+# don't recurse down to tags for trees or blobs pointed to by *those*
+# commits.
+test_expect_success 'Does --[no-]contains stop at commits? Yes!' '
+ cd no-contains &&
+ blob=$(git rev-parse v0.3:v0.3.t) &&
+ tree=$(git rev-parse v0.3^{tree}) &&
+ git tag tag-blob $blob &&
+ git tag tag-tree $tree &&
+ git tag --contains v0.3 >actual &&
+ cat >expected <<-\EOF &&
+ v0.3
+ v0.4
+ v0.5
+ EOF
+ test_cmp expected actual &&
+ git tag --no-contains v0.3 >actual &&
+ cat >expected <<-\EOF &&
+ v0.1
+ v0.2
+ EOF
+ test_cmp expected actual
+'
+
test_done
test_cmp expect actual
'
+test_expect_success 'setup superproject with submodules' '
+ git init sub1 &&
+ test_commit -C sub1 test &&
+ test_commit -C sub1 test2 &&
+ git init multisuper &&
+ git -C multisuper submodule add ../sub1 sub0 &&
+ git -C multisuper submodule add ../sub1 sub1 &&
+ git -C multisuper submodule add ../sub1 sub2 &&
+ git -C multisuper submodule add ../sub1 sub3 &&
+ git -C multisuper commit -m "add some submodules"
+'
+
+cat >expect <<-EOF
+-sub0
+ sub1 (test2)
+ sub2 (test2)
+ sub3 (test2)
+EOF
+
+test_expect_success 'submodule update --init with a specification' '
+ test_when_finished "rm -rf multisuper_clone" &&
+ pwd=$(pwd) &&
+ git clone file://"$pwd"/multisuper multisuper_clone &&
+ git -C multisuper_clone submodule update --init . ":(exclude)sub0" &&
+ git -C multisuper_clone submodule status |cut -c 1,43- >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'submodule update --init with submodule.active set' '
+ test_when_finished "rm -rf multisuper_clone" &&
+ pwd=$(pwd) &&
+ git clone file://"$pwd"/multisuper multisuper_clone &&
+ git -C multisuper_clone config submodule.active "." &&
+ git -C multisuper_clone config --add submodule.active ":(exclude)sub0" &&
+ git -C multisuper_clone submodule update --init &&
+ git -C multisuper_clone submodule status |cut -c 1,43- >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'submodule update and setting submodule.<name>.active' '
+ test_when_finished "rm -rf multisuper_clone" &&
+ pwd=$(pwd) &&
+ git clone file://"$pwd"/multisuper multisuper_clone &&
+ git -C multisuper_clone config --bool submodule.sub0.active "true" &&
+ git -C multisuper_clone config --bool submodule.sub1.active "false" &&
+ git -C multisuper_clone config --bool submodule.sub2.active "true" &&
+
+ cat >expect <<-\EOF &&
+ sub0 (test2)
+ -sub1
+ sub2 (test2)
+ -sub3
+ EOF
+ git -C multisuper_clone submodule update &&
+ git -C multisuper_clone submodule status |cut -c 1,43- >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'clone --recurse-submodules with a pathspec works' '
+ test_when_finished "rm -rf multisuper_clone" &&
+ cat >expected <<-\EOF &&
+ sub0 (test2)
+ -sub1
+ -sub2
+ -sub3
+ EOF
+
+ git clone --recurse-submodules="sub0" multisuper multisuper_clone &&
+ git -C multisuper_clone submodule status |cut -c1,43- >actual &&
+ test_cmp actual expected
+'
+
+test_expect_success 'clone with multiple --recurse-submodules options' '
+ test_when_finished "rm -rf multisuper_clone" &&
+ cat >expect <<-\EOF &&
+ -sub0
+ sub1 (test2)
+ -sub2
+ sub3 (test2)
+ EOF
+
+ git clone --recurse-submodules="." \
+ --recurse-submodules=":(exclude)sub0" \
+ --recurse-submodules=":(exclude)sub2" \
+ multisuper multisuper_clone &&
+ git -C multisuper_clone submodule status |cut -c1,43- >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'clone and subsequent updates correctly auto-initialize submodules' '
+ test_when_finished "rm -rf multisuper_clone" &&
+ cat <<-\EOF >expect &&
+ -sub0
+ sub1 (test2)
+ -sub2
+ sub3 (test2)
+ EOF
+
+ cat <<-\EOF >expect2 &&
+ -sub0
+ sub1 (test2)
+ -sub2
+ sub3 (test2)
+ -sub4
+ sub5 (test2)
+ EOF
+
+ git clone --recurse-submodules="." \
+ --recurse-submodules=":(exclude)sub0" \
+ --recurse-submodules=":(exclude)sub2" \
+ --recurse-submodules=":(exclude)sub4" \
+ multisuper multisuper_clone &&
+
+ git -C multisuper_clone submodule status |cut -c1,43- >actual &&
+ test_cmp expect actual &&
+
+ git -C multisuper submodule add ../sub1 sub4 &&
+ git -C multisuper submodule add ../sub1 sub5 &&
+ git -C multisuper commit -m "add more submodules" &&
+ # obtain the new superproject
+ git -C multisuper_clone pull &&
+ git -C multisuper_clone submodule update --init &&
+ git -C multisuper_clone submodule status |cut -c1,43- >actual &&
+ test_cmp expect2 actual
+'
+
+test_expect_success 'init properly sets the config' '
+ test_when_finished "rm -rf multisuper_clone" &&
+ git clone --recurse-submodules="." \
+ --recurse-submodules=":(exclude)sub0" \
+ multisuper multisuper_clone &&
+
+ git -C multisuper_clone submodule init -- sub0 sub1 &&
+ git -C multisuper_clone config --get submodule.sub0.active &&
+ test_must_fail git -C multisuper_clone config --get submodule.sub1.active
+'
test_done
--- /dev/null
+#!/bin/sh
+
+test_description='Test submodule--helper is-active
+
+This test verifies that `git submodule--helper is-active` correctly identifies
+submodules which are "active" and interesting to the user.
+'
+
+. ./test-lib.sh
+
+test_expect_success 'setup' '
+ git init sub &&
+ test_commit -C sub initial &&
+ git init super &&
+ test_commit -C super initial &&
+ git -C super submodule add ../sub sub1 &&
+ git -C super submodule add ../sub sub2 &&
+
+ # Remove submodule.<name>.active entries in order to test in an
+ # environment where only URLs are present in the config
+ git -C super config --unset submodule.sub1.active &&
+ git -C super config --unset submodule.sub2.active &&
+
+ git -C super commit -a -m "add 2 submodules at sub{1,2}"
+'
+
+test_expect_success 'is-active works with urls' '
+ git -C super submodule--helper is-active sub1 &&
+ git -C super submodule--helper is-active sub2 &&
+
+ git -C super config --unset submodule.sub1.URL &&
+ test_must_fail git -C super submodule--helper is-active sub1 &&
+ git -C super config submodule.sub1.URL ../sub &&
+ git -C super submodule--helper is-active sub1
+'
+
+test_expect_success 'is-active works with submodule.<name>.active config' '
+ test_when_finished "git -C super config --unset submodule.sub1.active" &&
+ test_when_finished "git -C super config submodule.sub1.URL ../sub" &&
+
+ git -C super config --bool submodule.sub1.active "false" &&
+ test_must_fail git -C super submodule--helper is-active sub1 &&
+
+ git -C super config --bool submodule.sub1.active "true" &&
+ git -C super config --unset submodule.sub1.URL &&
+ git -C super submodule--helper is-active sub1
+'
+
+test_expect_success 'is-active works with basic submodule.active config' '
+ test_when_finished "git -C super config submodule.sub1.URL ../sub" &&
+ test_when_finished "git -C super config --unset-all submodule.active" &&
+
+ git -C super config --add submodule.active "." &&
+ git -C super config --unset submodule.sub1.URL &&
+
+ git -C super submodule--helper is-active sub1 &&
+ git -C super submodule--helper is-active sub2
+'
+
+test_expect_success 'is-active correctly works with paths that are not submodules' '
+ test_when_finished "git -C super config --unset-all submodule.active" &&
+
+ test_must_fail git -C super submodule--helper is-active not-a-submodule &&
+
+ git -C super config --add submodule.active "." &&
+ test_must_fail git -C super submodule--helper is-active not-a-submodule
+'
+
+test_expect_success 'is-active works with exclusions in submodule.active config' '
+ test_when_finished "git -C super config --unset-all submodule.active" &&
+
+ git -C super config --add submodule.active "." &&
+ git -C super config --add submodule.active ":(exclude)sub1" &&
+
+ test_must_fail git -C super submodule--helper is-active sub1 &&
+ git -C super submodule--helper is-active sub2
+'
+
+test_expect_success 'is-active with submodule.active and submodule.<name>.active' '
+ test_when_finished "git -C super config --unset-all submodule.active" &&
+ test_when_finished "git -C super config --unset submodule.sub1.active" &&
+ test_when_finished "git -C super config --unset submodule.sub2.active" &&
+
+ git -C super config --add submodule.active "sub1" &&
+ git -C super config --bool submodule.sub1.active "false" &&
+ git -C super config --bool submodule.sub2.active "true" &&
+
+ test_must_fail git -C super submodule--helper is-active sub1 &&
+ git -C super submodule--helper is-active sub2
+'
+
+test_expect_success 'is-active, submodule.active and submodule add' '
+ test_when_finished "rm -rf super2" &&
+ git init super2 &&
+ test_commit -C super2 initial &&
+ git -C super2 config --add submodule.active "sub*" &&
+
+ # submodule add should only add submodule.<name>.active
+ # to the config if not matched by the pathspec
+ git -C super2 submodule add ../sub sub1 &&
+ test_must_fail git -C super2 config --get submodule.sub1.active &&
+
+ git -C super2 submodule add ../sub mod &&
+ git -C super2 config --get submodule.mod.active
+'
+
+test_done
'
+# set up fake editor to replace `pick` by `reword`
+cat > reword-editor <<'EOF'
+#!/bin/sh
+mv "$1" "$1".bup &&
+sed 's/^pick/reword/' <"$1".bup >"$1"
+EOF
+chmod +x reword-editor
+REWORD_EDITOR="$(pwd)/reword-editor"
+export REWORD_EDITOR
+
+test_expect_success 'hook is called for reword during `rebase -i`' '
+
+ GIT_SEQUENCE_EDITOR="\"$REWORD_EDITOR\"" git rebase -i HEAD^ &&
+ commit_msg_is "new message"
+
+'
+
test_done
)
}
+sanitize_output () {
+ sed -e "s/$_x40/HASH/" -e "s/$_x40/HASH/" output >output2 &&
+ mv output2 output
+}
+
+
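+# For illustration only (hypothetical object names): sanitize_output rewrites
+# a --porcelain=2 submodule entry such as
+#
+#	1 .M S..U 160000 160000 160000 <40-hex-head> <40-hex-index> sub1
+#
+# into
+#
+#	1 .M S..U 160000 160000 160000 HASH HASH sub1
+#
+# so the expected output in the tests below does not depend on actual hashes.
+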
test_expect_success 'setup' '
test_create_repo_with_commit sub &&
echo output > .gitignore &&
EOF
'
+test_expect_success 'status with modified file in submodule (short)' '
+ (cd sub && git reset --hard) &&
+ echo "changed" >sub/foo &&
+ git status --short >output &&
+ diff output - <<-\EOF
+ m sub
+ EOF
+'
+
test_expect_success 'status with added file in submodule' '
(cd sub && git reset --hard && echo >foo && git add foo) &&
git status >output &&
EOF
'
+test_expect_success 'status with added file in submodule (short)' '
+ (cd sub && git reset --hard && echo >foo && git add foo) &&
+ git status --short >output &&
+ diff output - <<-\EOF
+ m sub
+ EOF
+'
+
test_expect_success 'status with untracked file in submodule' '
(cd sub && git reset --hard) &&
echo "content" >sub/new-file &&
EOF
'
+test_expect_success 'status with untracked file in submodule (short)' '
+ git status --short >output &&
+ diff output - <<-\EOF
+ ? sub
+ EOF
+'
+
test_expect_success 'status with added and untracked file in submodule' '
(cd sub && git reset --hard && echo >foo && git add foo) &&
echo "content" >sub/new-file &&
test_i18ngrep "modified: sub (new commits, modified content)" output
'
+test_expect_success 'status with a lot of untracked files in the submodule' '
+ (
+ cd sub
+ i=0 &&
+ while test $i -lt 1024
+ do
+ >some-file-$i
+ i=$(( $i + 1 ))
+ done
+ ) &&
+ git status --porcelain sub 2>err.actual &&
+ test_must_be_empty err.actual &&
+ rm err.actual
+'
+
test_expect_success 'rm submodule contents' '
- rm -rf sub/* sub/.git
+ rm -rf sub &&
+ mkdir sub
'
test_expect_success 'status clean (empty submodule dir)' '
test_cmp diff_submodule_actual diff_submodule_expect
'
+# We'll set up different cases for further testing:
+# sub1 will contain a nested submodule,
+# sub2 will have an untracked file
+# sub3 will have an untracked repository
+test_expect_success 'setup superproject with untracked file in nested submodule' '
+ (
+ cd super &&
+ git clean -dfx &&
+ rm .gitmodules &&
+ git submodule add -f ./sub1 &&
+ git submodule add -f ./sub2 &&
+ git submodule add -f ./sub1 sub3 &&
+ git commit -a -m "messy merge in superproject" &&
+ (
+ cd sub1 &&
+ git submodule add ../sub2 &&
+ git commit -a -m "add sub2 to sub1"
+ ) &&
+ git add sub1 &&
+ git commit -a -m "update sub1 to contain nested sub"
+ ) &&
+ echo content >super/sub1/sub2/file &&
+ echo content >super/sub2/file &&
+ git -C super/sub3 clone ../../sub2 untracked_repository
+'
+
+test_expect_success 'status with untracked file in nested submodule (porcelain)' '
+ git -C super status --porcelain >output &&
+ diff output - <<-\EOF
+ M sub1
+ M sub2
+ M sub3
+ EOF
+'
+
+test_expect_success 'status with untracked file in nested submodule (porcelain=2)' '
+ git -C super status --porcelain=2 >output &&
+ sanitize_output output &&
+ diff output - <<-\EOF
+ 1 .M S..U 160000 160000 160000 HASH HASH sub1
+ 1 .M S..U 160000 160000 160000 HASH HASH sub2
+ 1 .M S..U 160000 160000 160000 HASH HASH sub3
+ EOF
+'
+
+test_expect_success 'status with untracked file in nested submodule (short)' '
+ git -C super status --short >output &&
+ diff output - <<-\EOF
+ ? sub1
+ ? sub2
+ ? sub3
+ EOF
+'
+
+test_expect_success 'setup superproject with modified file in nested submodule' '
+ git -C super/sub1/sub2 add file &&
+ git -C super/sub2 add file
+'
+
+test_expect_success 'status with added file in nested submodule (porcelain)' '
+ git -C super status --porcelain >output &&
+ diff output - <<-\EOF
+ M sub1
+ M sub2
+ M sub3
+ EOF
+'
+
+test_expect_success 'status with added file in nested submodule (porcelain=2)' '
+ git -C super status --porcelain=2 >output &&
+ sanitize_output output &&
+ diff output - <<-\EOF
+ 1 .M S.M. 160000 160000 160000 HASH HASH sub1
+ 1 .M S.M. 160000 160000 160000 HASH HASH sub2
+ 1 .M S..U 160000 160000 160000 HASH HASH sub3
+ EOF
+'
+
+test_expect_success 'status with added file in nested submodule (short)' '
+ git -C super status --short >output &&
+ diff output - <<-\EOF
+ m sub1
+ m sub2
+ ? sub3
+ EOF
+'
+
test_done
git commit -m "modified both"
'
+test_expect_success 'difftool -d with growing paths' '
+ a=aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa &&
+ git init growing &&
+ (
+ cd growing &&
+ echo "test -f \"\$2/b\"" | write_script .git/test-for-b.sh &&
+ one=$(printf 1 | git hash-object -w --stdin) &&
+ two=$(printf 2 | git hash-object -w --stdin) &&
+ git update-index --add \
+ --cacheinfo 100644,$one,$a --cacheinfo 100644,$two,b &&
+ tree1=$(git write-tree) &&
+ git update-index --add \
+ --cacheinfo 100644,$two,$a --cacheinfo 100644,$one,b &&
+ tree2=$(git write-tree) &&
+ git checkout -- $a &&
+ git difftool -d --extcmd .git/test-for-b.sh $tree1 $tree2
+ )
+'
+
run_dir_diff_test () {
test_expect_success "$1 --no-symlinks" "
symlinks=--no-symlinks &&
test_cmp expect actual
'
+test_expect_success 'grep using relative path' '
+ test_when_finished "rm -rf parent sub" &&
+ git init sub &&
+ echo "foobar" >sub/file &&
+ git -C sub add file &&
+ git -C sub commit -m "add file" &&
+
+ git init parent &&
+ echo "foobar" >parent/file &&
+ git -C parent add file &&
+ mkdir parent/src &&
+ echo "foobar" >parent/src/file2 &&
+ git -C parent add src/file2 &&
+ git -C parent submodule add ../sub &&
+ git -C parent commit -m "add files and submodule" &&
+
+ # From top works
+ cat >expect <<-\EOF &&
+ file:foobar
+ src/file2:foobar
+ sub/file:foobar
+ EOF
+ git -C parent grep --recurse-submodules -e "foobar" >actual &&
+ test_cmp expect actual &&
+
+ # Relative path to top
+ cat >expect <<-\EOF &&
+ ../file:foobar
+ file2:foobar
+ ../sub/file:foobar
+ EOF
+ git -C parent/src grep --recurse-submodules -e "foobar" -- .. >actual &&
+ test_cmp expect actual &&
+
+ # Relative path to submodule
+ cat >expect <<-\EOF &&
+ ../sub/file:foobar
+ EOF
+ git -C parent/src grep --recurse-submodules -e "foobar" -- ../sub >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'grep from a subdir' '
+ test_when_finished "rm -rf parent sub" &&
+ git init sub &&
+ echo "foobar" >sub/file &&
+ git -C sub add file &&
+ git -C sub commit -m "add file" &&
+
+ git init parent &&
+ mkdir parent/src &&
+ echo "foobar" >parent/src/file &&
+ git -C parent add src/file &&
+ git -C parent submodule add ../sub src/sub &&
+ git -C parent submodule add ../sub sub &&
+ git -C parent commit -m "add files and submodules" &&
+
+ # Verify grep from root works
+ cat >expect <<-\EOF &&
+ src/file:foobar
+ src/sub/file:foobar
+ sub/file:foobar
+ EOF
+ git -C parent grep --recurse-submodules -e "foobar" >actual &&
+ test_cmp expect actual &&
+
+ # Verify grep from a subdir works
+ cat >expect <<-\EOF &&
+ file:foobar
+ sub/file:foobar
+ EOF
+ git -C parent/src grep --recurse-submodules -e "foobar" >actual &&
+ test_cmp expect actual
+'
+
test_incompatible_with_recurse_submodules ()
{
test_expect_success "--recurse-submodules and $1 are incompatible" "
)
'
+test_expect_success 'allow submit from branch with same revision but different name' '
+ test_when_finished cleanup_git &&
+ git p4 clone --dest="$git" //depot &&
+ (
+ cd "$git" &&
+ test_commit "file8" &&
+ git checkout -b branch1 &&
+ git checkout -b branch2 &&
+ git config git-p4.skipSubmitEdit true &&
+ git config git-p4.allowSubmit "branch1" &&
+ test_must_fail git p4 submit &&
+ git checkout branch1 &&
+ git p4 submit
+ )
+'
+
#
# Basic submit tests, the five handled cases
#
test_cmp expected "$actual"
'
+test_expect_success '__gitcomp_direct - puts everything into COMPREPLY as-is' '
+ sed -e "s/Z$//g" >expected <<-EOF &&
+ with-trailing-space Z
+ without-trailing-spaceZ
+ --option Z
+ --option=Z
+ $invalid_variable_name Z
+ EOF
+ (
+ cur=should_be_ignored &&
+ __gitcomp_direct "$(cat expected)" &&
+ print_comp
+ ) &&
+ test_cmp expected out
+'
+
test_expect_success '__gitcomp - trailing space - options' '
test_gitcomp "--re" "--dry-run --reuse-message= --reedit-message=
--reset-author" <<-EOF
cat >expected <<-EOF &&
refs/heads/master
refs/heads/matching-branch
+ refs/remotes/other/branch-in-other
+ refs/remotes/other/master-in-other
+ refs/tags/matching-tag
EOF
(
cur=refs/heads/ &&
test_expect_success '__git_refs - configured remote - full refs' '
cat >expected <<-EOF &&
+ HEAD
refs/heads/branch-in-other
refs/heads/master-in-other
refs/tags/tag-in-other
test_expect_success '__git_refs - configured remote - full refs - repo given on the command line' '
cat >expected <<-EOF &&
+ HEAD
refs/heads/branch-in-other
refs/heads/master-in-other
refs/tags/tag-in-other
test_expect_success '__git_refs - URL remote - full refs' '
cat >expected <<-EOF &&
+ HEAD
refs/heads/branch-in-other
refs/heads/master-in-other
refs/tags/tag-in-other
test_cmp expected "$actual"
'
+test_expect_success '__git_refs - after --opt=' '
+ cat >expected <<-EOF &&
+ HEAD
+ master
+ matching-branch
+ other/branch-in-other
+ other/master-in-other
+ matching-tag
+ EOF
+ (
+ cur="--opt=" &&
+ __git_refs "" "" "" "" >"$actual"
+ ) &&
+ test_cmp expected "$actual"
+'
+
+test_expect_success '__git_refs - after --opt= - full refs' '
+ cat >expected <<-EOF &&
+ refs/heads/master
+ refs/heads/matching-branch
+ refs/remotes/other/branch-in-other
+ refs/remotes/other/master-in-other
+ refs/tags/matching-tag
+ EOF
+ (
+ cur="--opt=refs/" &&
+ __git_refs "" "" "" refs/ >"$actual"
+ ) &&
+ test_cmp expected "$actual"
+'
+
+test_expect_success '__git_refs - excluding refs' '
+ cat >expected <<-EOF &&
+ ^HEAD
+ ^master
+ ^matching-branch
+ ^other/branch-in-other
+ ^other/master-in-other
+ ^matching-tag
+ EOF
+ (
+ cur=^ &&
+ __git_refs >"$actual"
+ ) &&
+ test_cmp expected "$actual"
+'
+
+test_expect_success '__git_refs - excluding full refs' '
+ cat >expected <<-EOF &&
+ ^refs/heads/master
+ ^refs/heads/matching-branch
+ ^refs/remotes/other/branch-in-other
+ ^refs/remotes/other/master-in-other
+ ^refs/tags/matching-tag
+ EOF
+ (
+ cur=^refs/ &&
+ __git_refs >"$actual"
+ ) &&
+ test_cmp expected "$actual"
+'
+
+test_expect_success 'setup for filtering matching refs' '
+ git branch matching/branch &&
+ git tag matching/tag &&
+ git -C otherrepo branch matching/branch-in-other &&
+ git fetch --no-tags other &&
+ rm -f .git/FETCH_HEAD
+'
+
+test_expect_success '__git_refs - dont filter refs unless told so' '
+ cat >expected <<-EOF &&
+ HEAD
+ master
+ matching-branch
+ matching/branch
+ other/branch-in-other
+ other/master-in-other
+ other/matching/branch-in-other
+ matching-tag
+ matching/tag
+ EOF
+ (
+ cur=master &&
+ __git_refs >"$actual"
+ ) &&
+ test_cmp expected "$actual"
+'
+
+test_expect_success '__git_refs - only matching refs' '
+ cat >expected <<-EOF &&
+ matching-branch
+ matching/branch
+ matching-tag
+ matching/tag
+ EOF
+ (
+ cur=mat &&
+ __git_refs "" "" "" "$cur" >"$actual"
+ ) &&
+ test_cmp expected "$actual"
+'
+
+test_expect_success '__git_refs - only matching refs - full refs' '
+ cat >expected <<-EOF &&
+ refs/heads/matching-branch
+ refs/heads/matching/branch
+ EOF
+ (
+ cur=refs/heads/mat &&
+ __git_refs "" "" "" "$cur" >"$actual"
+ ) &&
+ test_cmp expected "$actual"
+'
+
+test_expect_success '__git_refs - only matching refs - remote on local file system' '
+ cat >expected <<-EOF &&
+ master-in-other
+ matching/branch-in-other
+ EOF
+ (
+ cur=ma &&
+ __git_refs otherrepo "" "" "$cur" >"$actual"
+ ) &&
+ test_cmp expected "$actual"
+'
+
+test_expect_success '__git_refs - only matching refs - configured remote' '
+ cat >expected <<-EOF &&
+ master-in-other
+ matching/branch-in-other
+ EOF
+ (
+ cur=ma &&
+ __git_refs other "" "" "$cur" >"$actual"
+ ) &&
+ test_cmp expected "$actual"
+'
+
+test_expect_success '__git_refs - only matching refs - remote - full refs' '
+ cat >expected <<-EOF &&
+ refs/heads/master-in-other
+ refs/heads/matching/branch-in-other
+ EOF
+ (
+ cur=refs/heads/ma &&
+ __git_refs other "" "" "$cur" >"$actual"
+ ) &&
+ test_cmp expected "$actual"
+'
+
+test_expect_success '__git_refs - only matching refs - checkout DWIMery' '
+ cat >expected <<-EOF &&
+ matching-branch
+ matching/branch
+ matching-tag
+ matching/tag
+ matching/branch-in-other
+ EOF
+ for remote_ref in refs/remotes/other/ambiguous \
+ refs/remotes/remote/ambiguous \
+ refs/remotes/remote/branch-in-remote
+ do
+ git update-ref $remote_ref master &&
+ test_when_finished "git update-ref -d $remote_ref"
+ done &&
+ (
+ cur=mat &&
+ __git_refs "" 1 "" "$cur" >"$actual"
+ ) &&
+ test_cmp expected "$actual"
+'
+
+test_expect_success 'teardown after filtering matching refs' '
+ git branch -d matching/branch &&
+ git tag -d matching/tag &&
+ git update-ref -d refs/remotes/other/matching/branch-in-other &&
+ git -C otherrepo branch -D matching/branch-in-other
+'
+
+test_expect_success '__git_refs - for-each-ref format specifiers in prefix' '
+ cat >expected <<-EOF &&
+ evil-%%-%42-%(refname)..master
+ EOF
+ (
+ cur="evil-%%-%42-%(refname)..mas" &&
+ __git_refs "" "" "evil-%%-%42-%(refname).." mas >"$actual"
+ ) &&
+ test_cmp expected "$actual"
+'
+
+test_expect_success '__git_complete_refs - simple' '
+ sed -e "s/Z$//" >expected <<-EOF &&
+ HEAD Z
+ master Z
+ matching-branch Z
+ other/branch-in-other Z
+ other/master-in-other Z
+ matching-tag Z
+ EOF
+ (
+ cur= &&
+ __git_complete_refs &&
+ print_comp
+ ) &&
+ test_cmp expected out
+'
+
+test_expect_success '__git_complete_refs - matching' '
+ sed -e "s/Z$//" >expected <<-EOF &&
+ matching-branch Z
+ matching-tag Z
+ EOF
+ (
+ cur=mat &&
+ __git_complete_refs &&
+ print_comp
+ ) &&
+ test_cmp expected out
+'
+
+test_expect_success '__git_complete_refs - remote' '
+ sed -e "s/Z$//" >expected <<-EOF &&
+ HEAD Z
+ branch-in-other Z
+ master-in-other Z
+ EOF
+ (
+ cur=
+ __git_complete_refs --remote=other &&
+ print_comp
+ ) &&
+ test_cmp expected out
+'
+
+test_expect_success '__git_complete_refs - track' '
+ sed -e "s/Z$//" >expected <<-EOF &&
+ HEAD Z
+ master Z
+ matching-branch Z
+ other/branch-in-other Z
+ other/master-in-other Z
+ matching-tag Z
+ branch-in-other Z
+ master-in-other Z
+ EOF
+ (
+ cur=
+ __git_complete_refs --track &&
+ print_comp
+ ) &&
+ test_cmp expected out
+'
+
+test_expect_success '__git_complete_refs - current word' '
+ sed -e "s/Z$//" >expected <<-EOF &&
+ matching-branch Z
+ matching-tag Z
+ EOF
+ (
+ cur="--option=mat" &&
+ __git_complete_refs --cur="${cur#*=}" &&
+ print_comp
+ ) &&
+ test_cmp expected out
+'
+
+test_expect_success '__git_complete_refs - prefix' '
+ sed -e "s/Z$//" >expected <<-EOF &&
+ v1.0..matching-branch Z
+ v1.0..matching-tag Z
+ EOF
+ (
+ cur=v1.0..mat &&
+ __git_complete_refs --pfx=v1.0.. --cur=mat &&
+ print_comp
+ ) &&
+ test_cmp expected out
+'
+
+test_expect_success '__git_complete_refs - suffix' '
+ cat >expected <<-EOF &&
+ HEAD.
+ master.
+ matching-branch.
+ other/branch-in-other.
+ other/master-in-other.
+ matching-tag.
+ EOF
+ (
+ cur= &&
+ __git_complete_refs --sfx=. &&
+ print_comp
+ ) &&
+ test_cmp expected out
+'
+
+test_expect_success '__git_complete_fetch_refspecs - simple' '
+ sed -e "s/Z$//" >expected <<-EOF &&
+ HEAD:HEAD Z
+ branch-in-other:branch-in-other Z
+ master-in-other:master-in-other Z
+ EOF
+ (
+ cur= &&
+ __git_complete_fetch_refspecs other &&
+ print_comp
+ ) &&
+ test_cmp expected out
+'
+
+test_expect_success '__git_complete_fetch_refspecs - matching' '
+ sed -e "s/Z$//" >expected <<-EOF &&
+ branch-in-other:branch-in-other Z
+ EOF
+ (
+ cur=br &&
+ __git_complete_fetch_refspecs other "" br &&
+ print_comp
+ ) &&
+ test_cmp expected out
+'
+
+test_expect_success '__git_complete_fetch_refspecs - prefix' '
+ sed -e "s/Z$//" >expected <<-EOF &&
+ +HEAD:HEAD Z
+ +branch-in-other:branch-in-other Z
+ +master-in-other:master-in-other Z
+ EOF
+ (
+ cur="+" &&
+ __git_complete_fetch_refspecs other "+" "" &&
+ print_comp
+ ) &&
+ test_cmp expected out
+'
+
+test_expect_success '__git_complete_fetch_refspecs - fully qualified' '
+ sed -e "s/Z$//" >expected <<-EOF &&
+ refs/heads/branch-in-other:refs/heads/branch-in-other Z
+ refs/heads/master-in-other:refs/heads/master-in-other Z
+ refs/tags/tag-in-other:refs/tags/tag-in-other Z
+ EOF
+ (
+ cur=refs/ &&
+ __git_complete_fetch_refspecs other "" refs/ &&
+ print_comp
+ ) &&
+ test_cmp expected out
+'
+
+test_expect_success '__git_complete_fetch_refspecs - fully qualified & prefix' '
+ sed -e "s/Z$//" >expected <<-EOF &&
+ +refs/heads/branch-in-other:refs/heads/branch-in-other Z
+ +refs/heads/master-in-other:refs/heads/master-in-other Z
+ +refs/tags/tag-in-other:refs/tags/tag-in-other Z
+ EOF
+ (
+ cur=+refs/ &&
+ __git_complete_fetch_refspecs other + refs/ &&
+ print_comp
+ ) &&
+ test_cmp expected out
+'
+
test_expect_success 'teardown after ref completion' '
git branch -d matching-branch &&
git tag -d matching-tag &&
static void standard_options(struct transport *t)
{
char buf[16];
- int n;
int v = t->verbose;
set_helper_option(t, "progress", t->progress ? "true" : "false");
- n = snprintf(buf, sizeof(buf), "%d", v + 1);
- if (n >= sizeof(buf))
- die("impossibly large verbosity value");
+ xsnprintf(buf, sizeof(buf), "%d", v + 1);
set_helper_option(t, "verbosity", buf);
switch (t->family) {
struct child_process *conn;
int fd[2];
unsigned got_remote_heads : 1;
- struct sha1_array extra_have;
- struct sha1_array shallow;
+ struct oid_array extra_have;
+ struct oid_array shallow;
};
static int set_git_option(struct git_transport_options *opts,
static int measure_abbrev(const struct object_id *oid, int sofar)
{
- char hex[GIT_SHA1_HEXSZ + 1];
+ char hex[GIT_MAX_HEXSZ + 1];
int w = find_unique_abbrev_r(hex, oid->hash, DEFAULT_ABBREV);
return (w < sofar) ? sofar : w;
TRANSPORT_RECURSE_SUBMODULES_ONLY)) &&
!is_bare_repository()) {
struct ref *ref = remote_refs;
- struct sha1_array commits = SHA1_ARRAY_INIT;
+ struct oid_array commits = OID_ARRAY_INIT;
for (; ref; ref = ref->next)
if (!is_null_oid(&ref->new_oid))
- sha1_array_append(&commits, ref->new_oid.hash);
+ oid_array_append(&commits,
+ &ref->new_oid);
if (!push_unpushed_submodules(&commits,
- transport->remote->name,
+ transport->remote,
+ refspec, refspec_nr,
+ transport->push_options,
pretend)) {
- sha1_array_clear(&commits);
+ oid_array_clear(&commits);
die("Failed to push all needed submodules!");
}
- sha1_array_clear(&commits);
+ oid_array_clear(&commits);
}
if (((flags & TRANSPORT_RECURSE_SUBMODULES_CHECK) ||
!pretend)) && !is_bare_repository()) {
struct ref *ref = remote_refs;
struct string_list needs_pushing = STRING_LIST_INIT_DUP;
- struct sha1_array commits = SHA1_ARRAY_INIT;
+ struct oid_array commits = OID_ARRAY_INIT;
for (; ref; ref = ref->next)
if (!is_null_oid(&ref->new_oid))
- sha1_array_append(&commits, ref->new_oid.hash);
+ oid_array_append(&commits,
+ &ref->new_oid);
if (find_unpushed_submodules(&commits, transport->remote->name,
&needs_pushing)) {
- sha1_array_clear(&commits);
+ oid_array_clear(&commits);
die_with_unpushed_submodules(&needs_pushing);
}
string_list_clear(&needs_pushing, 0);
- sha1_array_clear(&commits);
+ oid_array_clear(&commits);
}
if (!(flags & TRANSPORT_RECURSE_SUBMODULES_ONLY))
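
The two hunks above are part of the sha1_array to oid_array conversion; the collection pattern they use is shown in isolation below. This is a sketch for illustration only, assuming Git's internal headers and a remote_refs list as in transport_push(), not code taken from the series:

    struct oid_array commits = OID_ARRAY_INIT;
    struct ref *ref;

    /* collect the object names we are about to push */
    for (ref = remote_refs; ref; ref = ref->next)
            if (!is_null_oid(&ref->new_oid))
                    oid_array_append(&commits, &ref->new_oid);

    /* ... hand "commits" to the submodule checks ... */

    oid_array_clear(&commits);
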
msgs[ERROR_WOULD_LOSE_ORPHANED_REMOVED] =
_("The following working tree files would be removed by sparse checkout update:\n%s");
msgs[ERROR_WOULD_LOSE_SUBMODULE] =
- _("Submodule '%s' cannot checkout new HEAD");
+ _("Cannot update submodule:\n%s");
opts->show_all_errors = 1;
/* rejected paths may not have a static buffer */
return ret;
}
+static inline int are_same_oid(struct name_entry *name_j, struct name_entry *name_k)
+{
+ return name_j->oid && name_k->oid && !oidcmp(name_j->oid, name_k->oid);
+}
+
static int traverse_trees_recursive(int n, unsigned long dirmask,
unsigned long df_conflicts,
struct name_entry *names,
struct traverse_info *info)
{
int i, ret, bottom;
+ int nr_buf = 0;
struct tree_desc t[MAX_UNPACK_TREES];
void *buf[MAX_UNPACK_TREES];
struct traverse_info newinfo;
newinfo.pathlen += tree_entry_len(p) + 1;
newinfo.df_conflicts |= df_conflicts;
+ /*
+ * Fetch the tree from the ODB for each peer directory in the
+ * n commits.
+ *
+ * For 2- and 3-way traversals, we try to avoid hitting the
+ * ODB twice for the same OID. This should yield a nice speed
+ * up in checkouts and merges when the commits are similar.
+ *
+ * We don't bother doing the full O(n^2) search for larger n,
+ * because wider traversals don't happen that often and we
+ * avoid the search setup.
+ *
+ * When 2 peer OIDs are the same, we just copy the tree
+ * descriptor data. This implicitly borrows the buffer
+ * data from the earlier cell.
+ */
for (i = 0; i < n; i++, dirmask >>= 1) {
- const unsigned char *sha1 = NULL;
- if (dirmask & 1)
- sha1 = names[i].oid->hash;
- buf[i] = fill_tree_descriptor(t+i, sha1);
+ if (i > 0 && are_same_oid(&names[i], &names[i - 1]))
+ t[i] = t[i - 1];
+ else if (i > 1 && are_same_oid(&names[i], &names[i - 2]))
+ t[i] = t[i - 2];
+ else {
+ const unsigned char *sha1 = NULL;
+ if (dirmask & 1)
+ sha1 = names[i].oid->hash;
+ buf[nr_buf++] = fill_tree_descriptor(t+i, sha1);
+ }
}
bottom = switch_cache_bottom(&newinfo);
ret = traverse_trees(n, t, &newinfo);
restore_cache_bottom(&newinfo, bottom);
- for (i = 0; i < n; i++)
+ for (i = 0; i < nr_buf; i++)
free(buf[i]);
return ret;
strbuf_release(&twobuf);
}
+static char short_submodule_status(struct wt_status_change_data *d)
+{
+ if (d->new_submodule_commits)
+ return 'M';
+ if (d->dirty_submodule & DIRTY_SUBMODULE_MODIFIED)
+ return 'm';
+ if (d->dirty_submodule & DIRTY_SUBMODULE_UNTRACKED)
+ return '?';
+ return d->worktree_status;
+}
+
static void wt_status_collect_changed_cb(struct diff_queue_struct *q,
struct diff_options *options,
void *data)
}
if (!d->worktree_status)
d->worktree_status = p->status;
- d->dirty_submodule = p->two->dirty_submodule;
- if (S_ISGITLINK(p->two->mode))
+ if (S_ISGITLINK(p->two->mode)) {
+ d->dirty_submodule = p->two->dirty_submodule;
d->new_submodule_commits = !!oidcmp(&p->one->oid,
&p->two->oid);
+ if (s->status_format == STATUS_FORMAT_SHORT)
+ d->worktree_status = short_submodule_status(d);
+ }
switch (p->status) {
case DIFF_STATUS_ADDED:
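
With short_submodule_status() above feeding the worktree column, "git status --short" can tell the different kinds of submodule dirtiness apart. Roughly, the output looks like the following; the submodule names are invented and the trailing notes are not part of the actual output:

    $ git status --short
     M sub-new-commits        <- a different commit is checked out
     m sub-modified-content   <- tracked files changed in its working tree
     ? sub-untracked-files    <- untracked files in its working tree
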
int hints;
enum wt_status_format status_format;
- unsigned char sha1_commit[GIT_SHA1_RAWSZ]; /* when not Initial */
+ unsigned char sha1_commit[GIT_MAX_RAWSZ]; /* when not Initial */
/* These are computed during processing of the individual sections */
int commitable;