Allow a worktree to be locked immediately after it is created. This
helps prevent a race between "git worktree add; git worktree lock" and
"git worktree prune".
* nd/worktree-add-lock:
worktree add: add --lock option
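
A minimal sketch of the new option in use (path and branch name are made
up for illustration):

    git worktree add --lock ../hotfix-wt hotfix   # created and locked in one step
    git worktree unlock ../hotfix-wt              # once pruning should be allowed again
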
# Use the following command to debug the docker build locally:
# $ docker run -itv "${PWD}:/usr/src/git" --entrypoint /bin/bash daald/ubuntu32:xenial
# root@container:/# /usr/src/git/ci/run-linux32-build.sh
+ - env: Static Analysis
+ os: linux
+ compiler:
+ addons:
+ apt:
+ packages:
+ - coccinelle
+ before_install:
+ script:
+ # "before_script" that builds Git is inherited from base job
+ - make coccicheck
+ after_failure:
- env: Documentation
os: linux
compiler: clang
* The default behaviour of "git log" in an interactive session has
been changed to enable "--decorate".
+ * The output from "git status --short" has been extended to show
+ various kinds of dirtiness in submodules differently; instead of just
+ "M" for modified, 'm' and '?' can be shown to signal changes only
+ to the working tree of the submodule but not to the commit that is
+ checked out.
+
+ * Allow the http.postbuffer configuration variable to be set to a
+ size that can be expressed in size_t, which can be larger than
+ ulong on some platforms.
+
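
A hedged illustration (the value is arbitrary): a size beyond what a
32-bit ulong can hold can now be configured where the platform's size_t
allows it, e.g.

    git config http.postbuffer 5g
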
Performance, Internal Implementation, Development Support etc.
older one and the newer one interoperate happily has now become
possible.
- * "uchar [40]" to "struct object_id" conversion continues.
-
* "git tag --contains" used to (ab)use the object bits to keep track
of the state of object reachability without clearing them after
use; this has been cleaned up and made to use the newer commit-slab
* Define a new task in .travis.yml that triggers a test session on
Windows run elsewhere.
+ * Conversion from uchar[20] to struct object_id continues.
+
+ * The "submodule" specific field in the ref_store structure is
+ replaced with a more generic "gitdir" that can later be used also
+ when dealing with ref_store that represents the set of refs visible
+ from the other worktrees.
+
+ * The string-list API used a custom reallocation strategy that was
+ very inefficient instead of using the usual ALLOC_GROW() macro;
+ this has been fixed.
+ (merge 950a234cbd jh/string-list-micro-optim later to maint).
+
+ * In a 2- and 3-way merge of trees, more than one source tree often
+ ends up sharing an identical subtree; optimize by not reading the
+ same tree multiple times in such a case.
+ (merge d12a8cf0af jh/unpack-trees-micro-optim later to maint).
+
+ * The index file has a trailing SHA-1 checksum to detect file
+ corruption, and historically we checked it every time the index
+ file is used. Omit the validation during normal use, and instead
+ verify only in "git fsck".
+
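
A hedged usage note: the checksum can still be verified on demand, for
example by asking fsck to also look at the index:

    git fsck --cache
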
Also contains various documentation updates and code clean-ups.
certificate; this has been fixed.
(merge f2214dede9 bc/push-cert-receive-fix later to maint).
+ * Update error handling for the codepath that deals with corrupt loose
+ objects.
+ (merge 51054177b3 jk/loose-object-info-report-error later to maint).
+
+ * "git diff --submodule=diff" learned to work better in a project
+ with a submodule that in turn has its own submodules.
+ (merge 17b254cda6 sb/show-diff-for-submodule-in-diff-fix later to maint).
+
+ * Update the build dependency so that an update to /usr/bin/perl
+ etc. results in recomputation of the perl.mak file.
+ (merge c59c4939c2 ab/regen-perl-mak-with-different-perl later to maint).
+
+ * "git push --recurse-submodules --push-option=<string>" learned to
+ propagate the push option recursively down to pushes in submodules.
+
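
For the item above, a minimal sketch (remote name and option string are
illustrative):

    git push --recurse-submodules=on-demand --push-option=ci.skip origin HEAD
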
+ * If a patch e-mail had its first paragraph after an in-body header
+ indented (even after a blank line after the in-body header line),
+ the indented line was mistaken for a continuation of the in-body
+ header. This has been fixed.
+ (merge fd1062e52e lt/mailinfo-in-body-header-continuation later to maint).
+
+ * Clean up fallouts from recent tightening of the set-up sequence,
+ where Git barfs when repository information is accessed without
+ first ensuring that it was started in a repository.
+ (merge bccb22cbb1 jk/no-looking-at-dotgit-outside-repo later to maint).
+
+ * "git p4" used "name-rev HEAD" when it wants to learn what branch is
+ checked out; it should use "symbolic-ref HEAD".
+ (merge eff451101d ld/p4-current-branch-fix later to maint).
+
+ * "http.proxy" set to an empty string is used to disable the usage of
+ proxy. We broke this early last year.
+ (merge ae51d91105 sr/http-proxy-configuration-fix later to maint).
+
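
For the item above, for example, disabling the proxy for a single remote
relies on the empty-string behaviour:

    git config remote.origin.proxy ""
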
+ * $GIT_DIR may in some cases be normalized with all symlinks resolved
+ while "gitdir" path expansion in the pattern does not receive the
+ same treatment, leading to incorrect mismatch. This has been fixed.
+
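
For the item above, a hedged sketch of an affected conditional include
(paths made up); the "gitdir" pattern now matches regardless of whether
$GIT_DIR was reached through symlinks:

    git config --global 'includeIf.gitdir:~/work/.path' '~/work/.gitconfig'
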
+ * "git submodule" script does not work well with strange pathnames.
+ Protect it from a path with slashes in them, at least.
+
+ * "git fetch-pack" was not prepared to accept ERR packet that the
+ upload-pack can send with a human-readable error message. It
+ showed the packet contents with ERR prefix, so there was no data
+ loss, but it was redundant to say "ERR" in an error message.
+ (merge 8e2c7bef03 jt/fetch-pack-error-reporting later to maint).
+
+ * "ls-files --recurse-submodules" did not quite work well in a
+ project with nested submodules.
+
+ * gethostname(2) may not NUL-terminate the buffer if the hostname does
+ not fit; unfortunately there is no easy way to see if our buffer
+ was too small, but at least this will make sure we will not end up
+ using garbage past the end of the buffer.
+ (merge 5781a9a270 dt/xgethostname-nul-termination later to maint).
+
* Other minor doc, test and build updates and code cleanups.
(merge df2a6e38b7 jk/pager-in-use later to maint).
(merge 75ec4a6cb0 ab/branch-list-doc later to maint).
(merge 48a96972fd ab/doc-submitting later to maint).
(merge f5c2bc2b96 jk/make-coccicheck-detect-errors later to maint).
(merge c105f563d1 cc/untracked later to maint).
+ (merge 8668976b53 jc/unused-symbols later to maint).
+ (merge fba275dc93 jc/bs-t-is-not-a-tab-for-sed later to maint).
+ (merge be6ed145de mm/ls-files-s-doc later to maint).
+ (merge 60b091c679 qp/bisect-docfix later to maint).
+ (merge 47242cd103 ah/diff-files-ours-theirs-doc later to maint).
+ (merge 35ad44cbd8 sb/submodule-rm-absorb later to maint).
+ (merge 0301f1fd92 va/i18n-perl-scripts later to maint).
+ (merge 733e064d98 vn/revision-shorthand-for-side-branch-log later to maint).
+ (merge 85999743e7 tb/doc-eol-normalization later to maint).
+ (merge 0747fb49fd jk/loose-object-fsck later to maint).
+ (merge d8f4481c4f jk/quarantine-received-objects later to maint).
+ (merge 7ba1ceef95 xy/format-patch-base later to maint).
+ (merge fa1912c89a rs/misc-cppcheck-fixes later to maint).
pruned.
-a::
- Attempt to auto-register archives at http://mirrors.sourcecontrol.net
+ Attempt to auto-register archives at `http://mirrors.sourcecontrol.net`
This is particularly useful with the -D option.
-t <tmpdir>::
References
----------
-- [[[1]]] http://www.nist.gov/public_affairs/releases/n02-10.htm['Software Errors Cost U.S. Economy $59.5 Billion Annually'. Nist News Release.]
-- [[[2]]] http://java.sun.com/docs/codeconv/html/CodeConventions.doc.html#16712['Code Conventions for the Java Programming Language'. Sun Microsystems.]
-- [[[3]]] http://en.wikipedia.org/wiki/Software_maintenance['Software maintenance'. Wikipedia.]
+- [[[1]]] https://www.nist.gov/sites/default/files/documents/director/planning/report02-3.pdf['The Economic Impacts of Inadequate Infrastructure for Software Testing'. Nist Planning Report 02-3], see Executive Summary and Chapter 8.
+- [[[2]]] http://www.oracle.com/technetwork/java/codeconvtoc-136057.html['Code Conventions for the Java Programming Language'. Sun Microsystems.]
+- [[[3]]] https://en.wikipedia.org/wiki/Software_maintenance['Software maintenance'. Wikipedia.]
- [[[4]]] http://article.gmane.org/gmane.comp.version-control.git/45195/[Junio C Hamano. 'Automated bisect success story'. Gmane.]
-- [[[5]]] http://lwn.net/Articles/317154/[Christian Couder. 'Fully automated bisecting with "git bisect run"'. LWN.net.]
-- [[[6]]] http://lwn.net/Articles/277872/[Jonathan Corbet. 'Bisection divides users and developers'. LWN.net.]
+- [[[5]]] https://lwn.net/Articles/317154/[Christian Couder. 'Fully automated bisecting with "git bisect run"'. LWN.net.]
+- [[[6]]] https://lwn.net/Articles/277872/[Jonathan Corbet. 'Bisection divides users and developers'. LWN.net.]
- [[[7]]] http://article.gmane.org/gmane.linux.scsi/36652/[Ingo Molnar. 'Re: BUG 2.6.23-rc3 can't see sd partitions on Alpha'. Gmane.]
-- [[[8]]] http://www.kernel.org/pub/software/scm/git/docs/git-bisect.html[Junio C Hamano and the git-list. 'git-bisect(1) Manual Page'. Linux Kernel Archives.]
-- [[[9]]] http://github.com/Ealdwulf/bbchop[Ealdwulf. 'bbchop'. GitHub.]
+- [[[8]]] https://www.kernel.org/pub/software/scm/git/docs/git-bisect.html[Junio C Hamano and the git-list. 'git-bisect(1) Manual Page'. Linux Kernel Archives.]
+- [[[9]]] https://github.com/Ealdwulf/bbchop[Ealdwulf. 'bbchop'. GitHub.]
mix "good" and "bad" with "old" and "new" in a single session.)
In this more general usage, you provide `git bisect` with a "new"
-commit has some property and an "old" commit that doesn't have that
+commit that has some property and an "old" commit that doesn't have that
property. Each time `git bisect` checks out a commit, you test if that
commit has the property. If it does, mark the commit as "new";
otherwise, mark it as "old". When the bisection is done, `git bisect`
:git-diff: 1
include::diff-options.txt[]
+-1 --base::
+-2 --ours::
+-3 --theirs::
+ Compare the working tree with the "base" version (stage #1),
+ "our branch" (stage #2) or "their branch" (stage #3). The
+ index contains these stages only for unmerged entries, i.e.
+ while resolving conflicts. See linkgit:git-read-tree[1]
+ section "3-Way Merge" for detailed information.
+
+-0::
+ Omit diff output for unmerged entries and just show
+ "Unmerged". Can be used only when comparing the working tree
+ with the index.
+
<path>...::
The <paths> parameters, when given, are used to limit
the diff to the named paths (you can give directory
................................................
With `git format-patch --base=P -3 C` (or variants thereof, e.g. with
-`--cover-letter` of using `Z..C` instead of `-3 C` to specify the
+`--cover-letter` or using `Z..C` instead of `-3 C` to specify the
range), the base tree information block is shown at the end of the
first message the command outputs (either the first patch, or the
cover letter), like this:
-s::
--stage::
- Show staged contents' object name, mode bits and stage number in the output.
+ Show staged contents' mode bits, object name and stage number in the output.
--directory::
If a whole directory is classified as "other", show just its
+
"--no-force-with-lease" will cancel all the previous --force-with-lease on the
command line.
++
+A general note on safety: supplying this option without an expected
+value, i.e. as `--force-with-lease` or `--force-with-lease=<refname>`,
+interacts very badly with anything that implicitly runs `git fetch` on
+the remote to be pushed to in the background, e.g. `git fetch origin`
+on your repository in a cronjob.
++
+The protection it offers over `--force` is ensuring that subsequent
+changes your work wasn't based on aren't clobbered, but this is
+trivially defeated if some background process is updating refs in the
+background. We don't have anything except the remote tracking info to
+go by as a heuristic for refs you're expected to have seen and are
+willing to clobber.
++
+If your editor or some other system is running `git fetch` in the
+background for you, a way to mitigate this is to simply set up another
+remote:
++
+ git remote add origin-push $(git config remote.origin.url)
+ git fetch origin-push
++
+Now, when the background process runs `git fetch origin`, the references
+on `origin-push` won't be updated, and thus commands like:
++
+ git push --force-with-lease origin-push
++
+will fail unless you manually run `git fetch origin-push`. This method
+is of course entirely defeated by something that runs `git fetch
+--all`; in that case you'd need to either disable it or do something
+more tedious like:
++
+ git fetch # update 'master' from remote
+ git tag base master # mark our base point
+ git rebase -i master # rewrite some commits
+ git push --force-with-lease=master:base master:master
++
+I.e. create a `base` tag for versions of the upstream code that you've
+seen and are willing to overwrite, then rewrite history, and finally
+force push changes to `master` if the remote version is still at
+`base`, regardless of what your local `remotes/origin/master` has been
+updated to in the background.
-f::
--force::
of the rebased commits (see linkgit:git-am[1]).
Incompatible with the --interactive option.
+--signoff::
+ This flag is passed to 'git am' to sign off all the rebased
+ commits (see linkgit:git-am[1]). Incompatible with the
+ --interactive option.
+
-i::
--interactive::
Make a list of the commits which are about to be rebased. Let the
hooks will not be invoked either. This can be useful to quickly
bail out if the update is not to be supported.
+See the notes on the quarantine environment below.
+
update Hook
-----------
Before each ref is updated, if $GIT_DIR/hooks/update file exists
exec git update-server-info
+Quarantine Environment
+----------------------
+
+When `receive-pack` takes in objects, they are placed into a temporary
+"quarantine" directory within the `$GIT_DIR/objects` directory and
+migrated into the main object store only after the `pre-receive` hook
+has completed. If the push fails before then, the temporary directory is
+removed entirely.
+
+This has a few user-visible effects and caveats:
+
+ 1. Pushes which fail due to problems with the incoming pack, missing
+ objects, or due to the `pre-receive` hook will not leave any
+ on-disk data. This is usually helpful to prevent repeated failed
+ pushes from filling up your disk, but can make debugging more
+ challenging.
+
+ 2. Any objects created by the `pre-receive` hook will be created in
+ the quarantine directory (and migrated only if the push succeeds).
+
+ 3. The `pre-receive` hook MUST NOT update any refs to point to
+ quarantined objects. Other programs accessing the repository will
+ not be able to see the objects (and if the pre-receive hook fails,
+ those refs would become corrupted). For safety, any ref updates
+ from within `pre-receive` are automatically rejected.
+
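A hedged illustration of point 1 above (host and repository path made up):
a push rejected by the `pre-receive` hook leaves the server's object store
at its previous size, because the quarantine directory is removed:

    ssh git.example.org 'du -sh /srv/git/project.git/objects'   # before
    git push origin topic      # rejected by the pre-receive hook
    ssh git.example.org 'du -sh /srv/git/project.git/objects'   # unchanged
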
+
SEE ALSO
--------
linkgit:git-send-pack[1], linkgit:gitnamespaces[7]
! ! ignored
-------------------------------------------------
+Submodules have more state and instead report
+ M the submodule has a different HEAD than
+ recorded in the index
+ m the submodule has modified content
+ ? the submodule has untracked files
+since modified content or untracked files in a submodule cannot be added
+via `git add` in the superproject to prepare a commit.
+
+'m' and '?' are applied recursively. For example, if a nested submodule
+in a submodule contains an untracked file, this is reported as '?' as well.
+
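A hedged walk-through: each of the following changes inside a submodule
"sub", taken on its own, shows up in the superproject's short status with
the corresponding letter (file names made up):

    git -C sub checkout HEAD~1     # 'M'  different commit checked out
    echo tweak >>sub/tracked.txt   # 'm'  modified content in the submodule
    touch sub/untracked.txt        # '?'  untracked file in the submodule
    git status --short
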
If -b is used the short-format status is preceded by a line
## branchname tracking info
characters are not specially formatted; no quoting or
backslash-escaping is performed.
+Any submodule changes are reported as modified `M` instead of `m` or a single `?`.
+
Porcelain Format Version 2
~~~~~~~~~~~~~~~~~~~~~~~~~~
more efficiently, so this manually-maintained list has been retired.
See also the `contrib/` area, and the Git wiki:
-http://git.or.cz/gitwiki/InterfacesFrontendsAndTools
+https://git.wiki.kernel.org/index.php/InterfacesFrontendsAndTools
-------------------------------------------------
$ echo "* text=auto" >.gitattributes
-$ rm .git/index # Remove the index to force Git to
-$ git reset # re-scan the working directory
+$ rm .git/index # Remove the index to re-scan the working directory
+$ git add .
$ git status # Show files that will be normalized
-$ git add -u
-$ git add .gitattributes
$ git commit -m "Introduce end-of-line normalization"
-------------------------------------------------
convenient to organize your project with an informal hierarchy
of developers. Linux kernel development is run this way. There
is a nice illustration (page 17, "Merges to Mainline") in
-http://www.xenotime.net/linux/mentor/linux-mentoring-2006.pdf[Randy Dunlap's presentation].
+https://web.archive.org/web/20120915203609/http://www.xenotime.net/linux/mentor/linux-mentoring-2006.pdf[Randy Dunlap's presentation].
It should be stressed that this hierarchy is purely *informal*.
There is nothing fundamental in Git that enforces the "chain of
to use push options, but doesn't transmit any, the count variable
will be set to zero, `GIT_PUSH_OPTION_COUNT=0`.
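
For instance, a hook written in shell might walk the transmitted options
like this (a minimal sketch):

    i=0
    while test "$i" -lt "${GIT_PUSH_OPTION_COUNT:-0}"
    do
        eval "option=\$GIT_PUSH_OPTION_$i"
        echo "push option $i: $option" >&2
        i=$((i + 1))
    done
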
+See the section on "Quarantine Environment" in
+linkgit:git-receive-pack[1] for some caveats.
+
[[update]]
update
~~~~~~
submodule.<name>.ignore::
Defines under what circumstances "git status" and the diff family show
- a submodule as modified. When set to "all", it will never be considered
- modified (but will nonetheless show up in the output of status and
- commit when it has been staged), "dirty" will ignore all changes
- to the submodules work tree and
- takes only differences between the HEAD of the submodule and the commit
- recorded in the superproject into account. "untracked" will additionally
- let submodules with modified tracked files in their work tree show up.
- Using "none" (the default when this option is not set) also shows
- submodules that have untracked files in their work tree as changed.
- If this option is also present in the submodules entry in .git/config of
- the superproject, the setting there will override the one found in
+ a submodule as modified. The following values are supported:
+
+ all;; The submodule will never be considered modified (but will
+ nonetheless show up in the output of status and commit when it has
+ been staged).
+
+ dirty;; All changes to the submodule's work tree will be ignored; only
+ committed differences between the HEAD of the submodule and its
+ recorded state in the superproject are taken into account.
+
+ untracked;; Only untracked files in submodules will be ignored.
+ Committed differences and modifications to tracked files will show
+ up.
+
+ none;; No modifications to submodules are ignored; all committed
+ differences, as well as modifications to tracked and untracked files,
+ are shown. This is the default option.
+
+ If this option is also present in the submodules entry in .git/config
+ of the superproject, the setting there will override the one found in
.gitmodules.
Both settings can be overridden on the command line by using the
"--ignore-submodule" option. The 'git submodule' commands are not
submodule.<name>.shallow::
When set to true, a clone of this submodule will be performed as a
- shallow clone unless the user explicitly asks for a non-shallow
- clone.
+ shallow clone (with a history depth of 1) unless the user explicitly
+ asks for a non-shallow clone.
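
A hedged example (submodule name made up) of recording the recommendation
in .gitmodules and of overriding it:

    git config -f .gitmodules submodule.libfoo.shallow true
    git submodule update --init libfoo                         # honours the recommendation
    git submodule update --init --no-recommend-shallow libfoo  # asks for full history instead
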
EXAMPLES
$logo_label::
URI and label (title) for the Git logo link (or your site logo,
if you chose to use different logo image). By default, these both
- refer to Git homepage, http://git-scm.com[]; in the past, they pointed
- to Git documentation at http://www.kernel.org[].
+ refer to Git homepage, https://git-scm.com[]; in the past, they pointed
+ to Git documentation at https://www.kernel.org[].
Changing gitweb's look
Date: Fri, 26 Aug 2005 18:19:10 -0700
Abstract: In this how-to article, JC talks about how he
uses the post-update hook to automate Git documentation page
- shown at http://www.kernel.org/pub/software/scm/git/docs/.
+ shown at https://www.kernel.org/pub/software/scm/git/docs/.
Content-type: text/asciidoc
How to rebuild from update hook
===============================
-The pages under http://www.kernel.org/pub/software/scm/git/docs/
+The pages under https://www.kernel.org/pub/software/scm/git/docs/
are built from Documentation/ directory of the git.git project
and needed to be kept up-to-date. The www.kernel.org/ servers
are mirrored and I was told that the origin of the mirror is on
The 'r1{caret}!' notation includes commit 'r1' but excludes all of its parents.
By itself, this notation denotes the single commit 'r1'.
-The '<rev>{caret}-{<n>}' notation includes '<rev>' but excludes the <n>th
+The '<rev>{caret}-<n>' notation includes '<rev>' but excludes the <n>th
parent (i.e. a shorthand for '<rev>{caret}<n>..<rev>'), with '<n>' = 1 if
not given. This is typically useful for merge commits where you
can just pass '<commit>{caret}-' to get all the commits in the branch
as giving commit '<rev>' and then all its parents prefixed with
'{caret}' to exclude them (and their ancestors).
-'<rev>{caret}-{<n>}', e.g. 'HEAD{caret}-, HEAD{caret}-2'::
+'<rev>{caret}-<n>', e.g. 'HEAD{caret}-, HEAD{caret}-2'::
Equivalent to '<rev>{caret}<n>..<rev>', with '<n>' = 1 if not
given.
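
For example, with `M` naming some merge commit (name made up for
illustration):

    git log M^-     # same as: git log M^1..M  (the side branch's commits plus M)
    git log M^-2    # same as: git log M^2..M
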
--- /dev/null
+oid-array API
+==============
+
+The oid-array API provides storage and manipulation of sets of object
+identifiers. The emphasis is on storage and processing efficiency,
+making them suitable for large lists. Note that the ordering of items is
+not preserved over some operations.
+
+Data Structures
+---------------
+
+`struct oid_array`::
+
+ A single array of object IDs. This should be initialized by
+ assignment from `OID_ARRAY_INIT`. The `oid` member contains
+ the actual data. The `nr` member contains the number of items in
+ the set. The `alloc` and `sorted` members are used internally,
+ and should not be needed by API callers.
+
+Functions
+---------
+
+`oid_array_append`::
+ Add an item to the set. The object ID will be placed at the end of
+ the array (but note that some operations below may lose this
+ ordering).
+
+`oid_array_lookup`::
+ Perform a binary search of the array for a specific object ID.
+ If found, returns the offset (in number of elements) of the
+ object ID. If not found, returns a negative integer. If the array
+ is not sorted, this function has the side effect of sorting it.
+
+`oid_array_clear`::
+ Free all memory associated with the array and return it to the
+ initial, empty state.
+
+`oid_array_for_each_unique`::
+ Efficiently iterate over each unique element of the list,
+ executing the callback function for each one. If the array is
+ not sorted, this function has the side effect of sorting it. If
+ the callback returns a non-zero value, the iteration ends
+ immediately and the callback's return is propagated; otherwise,
+ 0 is returned.
+
+Examples
+--------
+
+-----------------------------------------
+int print_callback(const struct object_id *oid,
+ void *data)
+{
+ printf("%s\n", oid_to_hex(oid));
+ return 0; /* always continue */
+}
+
+void some_func(void)
+{
+ struct oid_array hashes = OID_ARRAY_INIT;
+ struct object_id oid;
+
+ /* Read objects into our set */
+ while (read_object_from_stdin(oid.hash))
+ oid_array_append(&hashes, &oid);
+
+ /* Check if some objects are in our set */
+ while (read_object_from_stdin(oid.hash)) {
+ if (oid_array_lookup(&hashes, &oid) >= 0)
+ printf("it's in there!\n");
+
+ /*
+ * Print the unique set of objects. We could also have
+ * avoided adding duplicate objects in the first place,
+ * but we would end up re-sorting the array repeatedly.
+ * Instead, this will sort once and then skip duplicates
+ * in linear time.
+ */
+ oid_array_for_each_unique(&hashes, print_callback, NULL);
+}
+-----------------------------------------
+++ /dev/null
-sha1-array API
-==============
-
-The sha1-array API provides storage and manipulation of sets of SHA-1
-identifiers. The emphasis is on storage and processing efficiency,
-making them suitable for large lists. Note that the ordering of items is
-not preserved over some operations.
-
-Data Structures
----------------
-
-`struct sha1_array`::
-
- A single array of SHA-1 hashes. This should be initialized by
- assignment from `SHA1_ARRAY_INIT`. The `sha1` member contains
- the actual data. The `nr` member contains the number of items in
- the set. The `alloc` and `sorted` members are used internally,
- and should not be needed by API callers.
-
-Functions
----------
-
-`sha1_array_append`::
- Add an item to the set. The sha1 will be placed at the end of
- the array (but note that some operations below may lose this
- ordering).
-
-`sha1_array_lookup`::
- Perform a binary search of the array for a specific sha1.
- If found, returns the offset (in number of elements) of the
- sha1. If not found, returns a negative integer. If the array is
- not sorted, this function has the side effect of sorting it.
-
-`sha1_array_clear`::
- Free all memory associated with the array and return it to the
- initial, empty state.
-
-`sha1_array_for_each_unique`::
- Efficiently iterate over each unique element of the list,
- executing the callback function for each one. If the array is
- not sorted, this function has the side effect of sorting it. If
- the callback returns a non-zero value, the iteration ends
- immediately and the callback's return is propagated; otherwise,
- 0 is returned.
-
-Examples
---------
-
------------------------------------------
-int print_callback(const unsigned char sha1[20],
- void *data)
-{
- printf("%s\n", sha1_to_hex(sha1));
- return 0; /* always continue */
-}
-
-void some_func(void)
-{
- struct sha1_array hashes = SHA1_ARRAY_INIT;
- unsigned char sha1[20];
-
- /* Read objects into our set */
- while (read_object_from_stdin(sha1))
- sha1_array_append(&hashes, sha1);
-
- /* Check if some objects are in our set */
- while (read_object_from_stdin(sha1)) {
- if (sha1_array_lookup(&hashes, sha1) >= 0)
- printf("it's in there!\n");
-
- /*
- * Print the unique set of objects. We could also have
- * avoided adding duplicate objects in the first place,
- * but we would end up re-sorting the array repeatedly.
- * Instead, this will sort once and then skip duplicates
- * in linear time.
- */
- sha1_array_for_each_unique(&hashes, print_callback, NULL);
-}
------------------------------------------
multi_ack_detailed is enabled. The server always sends NAK after 'done'
if there is no common base found.
+Instead of 'ACK' or 'NAK', the server may send an error message (for
+example, if it does not recognize an object in a 'want' line received
+from the client).
+
Then the server will start sending its packfile data.
----
- server-response = *ack_multi ack / nak
+ server-response = *ack_multi ack / nak / error-line
ack_multi = PKT-LINE("ACK" SP obj-id ack_status)
ack_status = "continue" / "common" / "ready"
ack = PKT-LINE("ACK" SP obj-id)
nak = PKT-LINE("NAK")
+ error-line = PKT-LINE("ERR" SP explanation-text)
----
A simple clone may look like this (with no 'have' lines):
#!/bin/sh
GVF=GIT-VERSION-FILE
-DEF_VER=v2.12.GIT
+DEF_VER=v2.13.0-rc0
LF='
'
TEST_PROGRAMS_NEED_X += test-match-trees
TEST_PROGRAMS_NEED_X += test-mergesort
TEST_PROGRAMS_NEED_X += test-mktemp
+TEST_PROGRAMS_NEED_X += test-online-cpus
TEST_PROGRAMS_NEED_X += test-parse-options
TEST_PROGRAMS_NEED_X += test-path-utils
TEST_PROGRAMS_NEED_X += test-prio-queue
TEST_PROGRAMS_NEED_X += test-read-cache
+TEST_PROGRAMS_NEED_X += test-ref-store
TEST_PROGRAMS_NEED_X += test-regex
TEST_PROGRAMS_NEED_X += test-revision-walking
TEST_PROGRAMS_NEED_X += test-run-command
TEST_PROGRAMS_NEED_X += test-sha1
TEST_PROGRAMS_NEED_X += test-sha1-array
TEST_PROGRAMS_NEED_X += test-sigchain
+TEST_PROGRAMS_NEED_X += test-strcmp-offset
TEST_PROGRAMS_NEED_X += test-string-list
TEST_PROGRAMS_NEED_X += test-submodule-config
TEST_PROGRAMS_NEED_X += test-subprocess
perl/PM.stamp: FORCE
@$(FIND) perl -type f -name '*.pm' | sort >$@+ && \
+ $(PERL_PATH) -V >>$@+ && \
{ cmp $@+ $@ >/dev/null 2>/dev/null || mv $@+ $@; } && \
$(RM) $@+
#include "sha1-array.h"
#include "argv-array.h"
-static struct sha1_array good_revs;
-static struct sha1_array skipped_revs;
+static struct oid_array good_revs;
+static struct oid_array skipped_revs;
static struct object_id *current_bad_oid;
{
struct commit_list *p;
struct commit_dist *array = xcalloc(nr, sizeof(*array));
+ struct strbuf buf = STRBUF_INIT;
int cnt, i;
for (p = list, cnt = 0; p; p = p->next) {
}
QSORT(array, cnt, compare_commit_dist);
for (p = list, i = 0; i < cnt; i++) {
- char buf[100]; /* enough for dist=%d */
struct object *obj = &(array[i].commit->object);
- snprintf(buf, sizeof(buf), "dist=%d", array[i].distance);
- add_name_decoration(DECORATION_NONE, buf, obj);
+ strbuf_reset(&buf);
+ strbuf_addf(&buf, "dist=%d", array[i].distance);
+ add_name_decoration(DECORATION_NONE, buf.buf, obj);
p->item = array[i].commit;
p = p->next;
}
if (p)
p->next = NULL;
+ strbuf_release(&buf);
free(array);
return list;
}
current_bad_oid = xmalloc(sizeof(*current_bad_oid));
oidcpy(current_bad_oid, oid);
} else if (starts_with(refname, good_prefix.buf)) {
- sha1_array_append(&good_revs, oid->hash);
+ oid_array_append(&good_revs, oid);
} else if (starts_with(refname, "skip-")) {
- sha1_array_append(&skipped_revs, oid->hash);
+ oid_array_append(&skipped_revs, oid);
}
strbuf_release(&good_prefix);
static GIT_PATH_FUNC(git_path_bisect_names, "BISECT_NAMES")
static GIT_PATH_FUNC(git_path_bisect_expected_rev, "BISECT_EXPECTED_REV")
+static GIT_PATH_FUNC(git_path_bisect_terms, "BISECT_TERMS")
static void read_bisect_paths(struct argv_array *array)
{
fclose(fp);
}
-static char *join_sha1_array_hex(struct sha1_array *array, char delim)
+static char *join_sha1_array_hex(struct oid_array *array, char delim)
{
struct strbuf joined_hexs = STRBUF_INIT;
int i;
for (i = 0; i < array->nr; i++) {
- strbuf_addstr(&joined_hexs, sha1_to_hex(array->sha1[i]));
+ strbuf_addstr(&joined_hexs, oid_to_hex(array->oid + i));
if (i + 1 < array->nr)
strbuf_addch(&joined_hexs, delim);
}
while (list) {
struct commit_list *next = list->next;
list->next = NULL;
- if (0 <= sha1_array_lookup(&skipped_revs,
- list->item->object.oid.hash)) {
+ if (0 <= oid_array_lookup(&skipped_revs, &list->item->object.oid)) {
if (skipped_first && !*skipped_first)
*skipped_first = 1;
/* Move current to tried list */
argv_array_pushf(&rev_argv, bad_format, oid_to_hex(current_bad_oid));
for (i = 0; i < good_revs.nr; i++)
argv_array_pushf(&rev_argv, good_format,
- sha1_to_hex(good_revs.sha1[i]));
+ oid_to_hex(good_revs.oid + i));
argv_array_push(&rev_argv, "--");
if (read_paths)
read_bisect_paths(&rev_argv);
static int bisect_checkout(const unsigned char *bisect_rev, int no_checkout)
{
- char bisect_rev_hex[GIT_SHA1_HEXSZ + 1];
+ char bisect_rev_hex[GIT_MAX_HEXSZ + 1];
memcpy(bisect_rev_hex, sha1_to_hex(bisect_rev), GIT_SHA1_HEXSZ + 1);
update_ref(NULL, "BISECT_EXPECTED_REV", bisect_rev, NULL, 0, UPDATE_REFS_DIE_ON_ERR);
return run_command_v_opt(argv_show_branch, RUN_GIT_CMD);
}
-static struct commit *get_commit_reference(const unsigned char *sha1)
+static struct commit *get_commit_reference(const struct object_id *oid)
{
- struct commit *r = lookup_commit_reference(sha1);
+ struct commit *r = lookup_commit_reference(oid->hash);
if (!r)
- die(_("Not a valid commit name %s"), sha1_to_hex(sha1));
+ die(_("Not a valid commit name %s"), oid_to_hex(oid));
return r;
}
int i, n = 0;
ALLOC_ARRAY(rev, 1 + good_revs.nr);
- rev[n++] = get_commit_reference(current_bad_oid->hash);
+ rev[n++] = get_commit_reference(current_bad_oid);
for (i = 0; i < good_revs.nr; i++)
- rev[n++] = get_commit_reference(good_revs.sha1[i]);
+ rev[n++] = get_commit_reference(good_revs.oid + i);
*rev_nr = n;
return rev;
exit(1);
}
-static void handle_skipped_merge_base(const unsigned char *mb)
+static void handle_skipped_merge_base(const struct object_id *mb)
{
- char *mb_hex = sha1_to_hex(mb);
+ char *mb_hex = oid_to_hex(mb);
char *bad_hex = oid_to_hex(current_bad_oid);
char *good_hex = join_sha1_array_hex(&good_revs, ' ');
result = get_merge_bases_many(rev[0], rev_nr - 1, rev + 1);
for (; result; result = result->next) {
- const unsigned char *mb = result->item->object.oid.hash;
- if (!hashcmp(mb, current_bad_oid->hash)) {
+ const struct object_id *mb = &result->item->object.oid;
+ if (!oidcmp(mb, current_bad_oid)) {
handle_bad_merge_base();
- } else if (0 <= sha1_array_lookup(&good_revs, mb)) {
+ } else if (0 <= oid_array_lookup(&good_revs, mb)) {
continue;
- } else if (0 <= sha1_array_lookup(&skipped_revs, mb)) {
+ } else if (0 <= oid_array_lookup(&skipped_revs, mb)) {
handle_skipped_merge_base(mb);
} else {
printf(_("Bisecting: a merge base must be tested\n"));
- exit(bisect_checkout(mb, no_checkout));
+ exit(bisect_checkout(mb->hash, no_checkout));
}
}
void read_bisect_terms(const char **read_bad, const char **read_good)
{
struct strbuf str = STRBUF_INIT;
- const char *filename = git_path("BISECT_TERMS");
+ const char *filename = git_path_bisect_terms();
FILE *fp = fopen(filename, "r");
if (!fp) {
{
struct commit *commit;
unsigned char sha1[20];
- char *real_ref, msg[PATH_MAX + 20];
+ char *real_ref;
struct strbuf ref = STRBUF_INIT;
int forcing = 0;
int dont_change_ref = 0;
die(_("Not a valid branch point: '%s'."), start_name);
hashcpy(sha1, commit->object.oid.hash);
- if (forcing)
- snprintf(msg, sizeof msg, "branch: Reset to %s",
- start_name);
- else if (!dont_change_ref)
- snprintf(msg, sizeof msg, "branch: Created from %s",
- start_name);
-
if (reflog)
log_all_ref_updates = LOG_REFS_NORMAL;
if (!dont_change_ref) {
struct ref_transaction *transaction;
struct strbuf err = STRBUF_INIT;
+ char *msg;
+
+ if (forcing)
+ msg = xstrfmt("branch: Reset to %s", start_name);
+ else
+ msg = xstrfmt("branch: Created from %s", start_name);
transaction = ref_transaction_begin(&err);
if (!transaction ||
die("%s", err.buf);
ref_transaction_free(transaction);
strbuf_release(&err);
+ free(msg);
}
if (real_ref && track)
};
/**
- * Initializes am_state with the default values. The state directory is set to
- * dir.
+ * Initializes am_state with the default values.
*/
-static void am_state_init(struct am_state *state, const char *dir)
+static void am_state_init(struct am_state *state)
{
int gpgsign;
memset(state, 0, sizeof(*state));
- assert(dir);
- state->dir = xstrdup(dir);
+ state->dir = git_pathdup("rebase-apply");
state->prec = 4;
mail = mkpath("%s/%0*d", state->dir, state->prec, i + 1);
out = fopen(mail, "w");
- if (!out)
+ if (!out) {
+ if (in != stdin)
+ fclose(in);
return error_errno(_("could not open '%s' for writing"),
mail);
+ }
ret = fn(out, in, keep_cr);
fclose(out);
- fclose(in);
+ if (in != stdin)
+ fclose(in);
if (ret)
return error(_("could not parse patch '%s'"), *paths);
exit(128);
}
-static void am_signoff(struct strbuf *sb)
+/**
+ * Appends signoff to the "msg" field of the am_state.
+ */
+static void am_append_signoff(struct am_state *state)
{
char *cp;
struct strbuf mine = STRBUF_INIT;
+ struct strbuf sb = STRBUF_INIT;
- /* Does it end with our own sign-off? */
+ strbuf_attach(&sb, state->msg, state->msg_len, state->msg_len);
+
+ /* our sign-off */
strbuf_addf(&mine, "\n%s%s\n",
sign_off_header,
fmt_name(getenv("GIT_COMMITTER_NAME"),
getenv("GIT_COMMITTER_EMAIL")));
- if (mine.len < sb->len &&
- !strcmp(mine.buf, sb->buf + sb->len - mine.len))
+
+ /* Does sb end with it already? */
+ if (mine.len < sb.len &&
+ !strcmp(mine.buf, sb.buf + sb.len - mine.len))
goto exit; /* no need to duplicate */
/* Does it have any Signed-off-by: in the text */
- for (cp = sb->buf;
+ for (cp = sb.buf;
cp && *cp && (cp = strstr(cp, sign_off_header)) != NULL;
cp = strchr(cp, '\n')) {
- if (sb->buf == cp || cp[-1] == '\n')
+ if (sb.buf == cp || cp[-1] == '\n')
break;
}
- strbuf_addstr(sb, mine.buf + !!cp);
+ strbuf_addstr(&sb, mine.buf + !!cp);
exit:
strbuf_release(&mine);
-}
-
-/**
- * Appends signoff to the "msg" field of the am_state.
- */
-static void am_append_signoff(struct am_state *state)
-{
- struct strbuf sb = STRBUF_INIT;
-
- strbuf_attach(&sb, state->msg, state->msg_len, state->msg_len);
- am_signoff(&sb);
state->msg = strbuf_detach(&sb, &state->msg_len);
}
strbuf_addbuf(&msg, &mi.log_message);
strbuf_stripspace(&msg, 0);
- if (state->signoff)
- am_signoff(&msg);
-
assert(!state->author_name);
state->author_name = strbuf_detach(&author_name, NULL);
if (skip)
goto next; /* mail should be skipped */
+ if (state->signoff)
+ am_append_signoff(state);
+
write_author_script(state);
write_commit_msg(state);
}
git_config(git_am_config, NULL);
- am_state_init(&state, git_path("rebase-apply"));
+ am_state_init(&state);
in_progress = am_in_progress(&state);
if (in_progress)
int cnt;
const char *cp;
struct origin *suspect = ent->suspect;
- char hex[GIT_SHA1_HEXSZ + 1];
+ char hex[GIT_MAX_HEXSZ + 1];
oid_to_hex_r(hex, &suspect->commit->object.oid);
printf("%s %d %d %d\n",
const char *cp;
struct origin *suspect = ent->suspect;
struct commit_info ci;
- char hex[GIT_SHA1_HEXSZ + 1];
+ char hex[GIT_MAX_HEXSZ + 1];
int show_raw_time = !!(opt & OUTPUT_RAW_TIMESTAMP);
get_commit_info(suspect->commit, &ci, 1);
strbuf_release(&newsection);
}
-static const char edit_description[] = "BRANCH_DESCRIPTION";
+static GIT_PATH_FUNC(edit_description, "EDIT_DESCRIPTION")
static int edit_branch_description(const char *branch_name)
{
" %s\n"
"Lines starting with '%c' will be stripped.\n"),
branch_name, comment_line_char);
- write_file_buf(git_path(edit_description), buf.buf, buf.len);
+ write_file_buf(edit_description(), buf.buf, buf.len);
strbuf_reset(&buf);
- if (launch_editor(git_path(edit_description), &buf, NULL)) {
+ if (launch_editor(edit_description(), &buf, NULL)) {
strbuf_release(&buf);
return -1;
}
struct expand_data *expand;
};
-static int batch_object_cb(const unsigned char sha1[20], void *vdata)
+static int batch_object_cb(const struct object_id *oid, void *vdata)
{
struct object_cb_data *data = vdata;
- hashcpy(data->expand->oid.hash, sha1);
+ oidcpy(&data->expand->oid, oid);
batch_object_write(NULL, data->opt, data->expand);
return 0;
}
const char *path,
void *data)
{
- sha1_array_append(data, oid->hash);
+ oid_array_append(data, oid);
return 0;
}
uint32_t pos,
void *data)
{
- sha1_array_append(data, oid->hash);
+ oid_array_append(data, oid);
return 0;
}
data.info.typep = &data.type;
if (opt->all_objects) {
- struct sha1_array sa = SHA1_ARRAY_INIT;
+ struct oid_array sa = OID_ARRAY_INIT;
struct object_cb_data cb;
for_each_loose_object(batch_loose_object, &sa, 0);
cb.opt = opt;
cb.expand = &data;
- sha1_array_for_each_unique(&sa, batch_object_cb, &cb);
+ oid_array_for_each_unique(&sa, batch_object_cb, &cb);
- sha1_array_clear(&sa);
+ oid_array_clear(&sa);
return 0;
}
static const char *unique_tracking_name(const char *name, struct object_id *oid)
{
struct tracking_name_data cb_data = { NULL, NULL, NULL, 1 };
- char src_ref[PATH_MAX];
- snprintf(src_ref, PATH_MAX, "refs/heads/%s", name);
- cb_data.src_ref = src_ref;
+ cb_data.src_ref = xstrfmt("refs/heads/%s", name);
cb_data.dst_oid = oid;
for_each_remote(check_tracking_name, &cb_data);
+ free(cb_data.src_ref);
if (cb_data.unique)
return cb_data.dst_ref;
free(cb_data.dst_ref);
"If this is not correct, please remove the file\n"
" %s\n"
"and try again.\n"),
- git_path(whence == FROM_MERGE
- ? "MERGE_HEAD"
- : "CHERRY_PICK_HEAD"));
+ whence == FROM_MERGE ?
+ git_path_merge_head() :
+ git_path_cherry_pick_head());
}
fprintf(s->fp, "\n");
static const char *implicit_ident_advice(void)
{
- char *user_config = expand_user_path("~/.gitconfig");
+ char *user_config = expand_user_path("~/.gitconfig", 0);
char *xdg_config = xdg_config_home("config");
int config_exists = file_exists(user_config) || file_exists(xdg_config);
static struct git_config_source given_config_source;
static int actions, types;
static int end_null;
-static int respect_includes = -1;
+static int respect_includes_opt = -1;
+static struct config_options config_options;
static int show_origin;
#define ACTION_GET (1<<0)
OPT_GROUP(N_("Other")),
OPT_BOOL('z', "null", &end_null, N_("terminate values with NUL byte")),
OPT_BOOL(0, "name-only", &omit_values, N_("show variable names only")),
- OPT_BOOL(0, "includes", &respect_includes, N_("respect include directives on lookup")),
+ OPT_BOOL(0, "includes", &respect_includes_opt, N_("respect include directives on lookup")),
OPT_BOOL(0, "show-origin", &show_origin, N_("show origin of config (file, standard input, blob, command line)")),
OPT_END(),
};
}
git_config_with_options(collect_config, &values,
- &given_config_source, respect_includes);
+ &given_config_source, &config_options);
ret = !values.nr;
get_color_found = 0;
parsed_color[0] = '\0';
git_config_with_options(git_get_color_config, NULL,
- &given_config_source, respect_includes);
+ &given_config_source, &config_options);
if (!get_color_found && def_color) {
if (color_parse(def_color, parsed_color) < 0)
get_diff_color_found = -1;
get_color_ui_found = -1;
git_config_with_options(git_get_colorbool_config, NULL,
- &given_config_source, respect_includes);
+ &given_config_source, &config_options);
if (get_colorbool_found < 0) {
if (!strcmp(get_colorbool_slot, "color.diff"))
}
git_config_with_options(urlmatch_config_entry, &config,
- &given_config_source, respect_includes);
+ &given_config_source, &config_options);
ret = !values.nr;
}
if (use_global_config) {
- char *user_config = expand_user_path("~/.gitconfig");
+ char *user_config = expand_user_path("~/.gitconfig", 0);
char *xdg_config = xdg_config_home("config");
if (!user_config)
prefix_filename(prefix, given_config_source.file);
}
- if (respect_includes == -1)
- respect_includes = !given_config_source.file;
+ if (respect_includes_opt == -1)
+ config_options.respect_includes = !given_config_source.file;
+ else
+ config_options.respect_includes = respect_includes_opt;
if (end_null) {
term = '\0';
check_argc(argc, 0, 0);
if (git_config_with_options(show_all_config, NULL,
&given_config_source,
- respect_includes) < 0) {
+ &config_options) < 0) {
if (given_config_source.file)
die_errno("unable to read config file '%s'",
given_config_source.file);
if (given_config_source.blob)
die("editing blobs is not supported");
git_config(git_default_config, NULL);
- config_file = xstrdup(given_config_source.file ?
- given_config_source.file : git_path("config"));
+ config_file = given_config_source.file ?
+ xstrdup(given_config_source.file) :
+ git_pathdup("config");
if (use_global_config) {
int fd = open(config_file, O_CREAT | O_EXCL | O_WRONLY, 0666);
if (fd >= 0) {
#define DIFF_NO_INDEX_IMPLICIT 2
struct blobinfo {
- unsigned char sha1[20];
+ struct object_id oid;
const char *name;
unsigned mode;
};
static void stuff_change(struct diff_options *opt,
unsigned old_mode, unsigned new_mode,
- const unsigned char *old_sha1,
- const unsigned char *new_sha1,
- int old_sha1_valid,
- int new_sha1_valid,
+ const struct object_id *old_oid,
+ const struct object_id *new_oid,
+ int old_oid_valid,
+ int new_oid_valid,
const char *old_name,
const char *new_name)
{
struct diff_filespec *one, *two;
- if (!is_null_sha1(old_sha1) && !is_null_sha1(new_sha1) &&
- !hashcmp(old_sha1, new_sha1) && (old_mode == new_mode))
+ if (!is_null_oid(old_oid) && !is_null_oid(new_oid) &&
+ !oidcmp(old_oid, new_oid) && (old_mode == new_mode))
return;
if (DIFF_OPT_TST(opt, REVERSE_DIFF)) {
SWAP(old_mode, new_mode);
- SWAP(old_sha1, new_sha1);
+ SWAP(old_oid, new_oid);
SWAP(old_name, new_name);
}
one = alloc_filespec(old_name);
two = alloc_filespec(new_name);
- fill_filespec(one, old_sha1, old_sha1_valid, old_mode);
- fill_filespec(two, new_sha1, new_sha1_valid, new_mode);
+ fill_filespec(one, old_oid->hash, old_oid_valid, old_mode);
+ fill_filespec(two, new_oid->hash, new_oid_valid, new_mode);
diff_queue(&diff_queued_diff, one, two);
}
stuff_change(&revs->diffopt,
blob[0].mode, canon_mode(st.st_mode),
- blob[0].sha1, null_sha1,
+ &blob[0].oid, &null_oid,
1, 0,
path, path);
diffcore_std(&revs->diffopt);
stuff_change(&revs->diffopt,
blob[0].mode, blob[1].mode,
- blob[0].sha1, blob[1].sha1,
+ &blob[0].oid, &blob[1].oid,
1, 1,
blob[0].name, blob[1].name);
diffcore_std(&revs->diffopt);
struct object_array_entry *ent0,
struct object_array_entry *ent1)
{
- const unsigned char *(sha1[2]);
+ const struct object_id *(oid[2]);
int swap = 0;
if (argc > 1)
*/
if (ent1->item->flags & UNINTERESTING)
swap = 1;
- sha1[swap] = ent0->item->oid.hash;
- sha1[1 - swap] = ent1->item->oid.hash;
- diff_tree_sha1(sha1[0], sha1[1], "", &revs->diffopt);
+ oid[swap] = &ent0->item->oid;
+ oid[1 - swap] = &ent1->item->oid;
+ diff_tree_sha1(oid[0]->hash, oid[1]->hash, "", &revs->diffopt);
log_tree_diff_flush(revs);
return 0;
}
struct object_array_entry *ent,
int ents)
{
- struct sha1_array parents = SHA1_ARRAY_INIT;
+ struct oid_array parents = OID_ARRAY_INIT;
int i;
if (argc > 1)
if (!revs->dense_combined_merges && !revs->combine_merges)
revs->dense_combined_merges = revs->combine_merges = 1;
for (i = 1; i < ents; i++)
- sha1_array_append(&parents, ent[i].item->oid.hash);
+ oid_array_append(&parents, &ent[i].item->oid);
diff_tree_combined(ent[0].item->oid.hash, &parents,
revs->dense_combined_merges, revs);
- sha1_array_clear(&parents);
+ oid_array_clear(&parents);
return 0;
}
} else if (obj->type == OBJ_BLOB) {
if (2 <= blobs)
die(_("more than two blobs given: '%s'"), name);
- hashcpy(blob[blobs].sha1, obj->oid.hash);
+ hashcpy(blob[blobs].oid.hash, obj->oid.hash);
blob[blobs].name = name;
blob[blobs].mode = entry->mode;
blobs++;
return data;
}
+static int checkout_path(unsigned mode, struct object_id *oid,
+ const char *path, const struct checkout *state)
+{
+ struct cache_entry *ce;
+ int ret;
+
+ ce = make_cache_entry(mode, oid->hash, path, 0, 0);
+ ret = checkout_entry(ce, state, NULL);
+
+ free(ce);
+ return ret;
+}
+
static int run_dir_diff(const char *extcmd, int symlinks, const char *prefix,
int argc, const char **argv)
{
struct strbuf rpath = STRBUF_INIT, buf = STRBUF_INIT;
struct strbuf ldir = STRBUF_INIT, rdir = STRBUF_INIT;
struct strbuf wtdir = STRBUF_INIT;
+ char *lbase_dir, *rbase_dir;
size_t ldir_len, rdir_len, wtdir_len;
- struct cache_entry *ce = xcalloc(1, sizeof(ce) + PATH_MAX + 1);
const char *workdir, *tmp;
int ret = 0, i;
FILE *fp;
memset(&wtindex, 0, sizeof(wtindex));
memset(&lstate, 0, sizeof(lstate));
- lstate.base_dir = ldir.buf;
+ lstate.base_dir = lbase_dir = xstrdup(ldir.buf);
lstate.base_dir_len = ldir.len;
lstate.force = 1;
memset(&rstate, 0, sizeof(rstate));
- rstate.base_dir = rdir.buf;
+ rstate.base_dir = rbase_dir = xstrdup(rdir.buf);
rstate.base_dir_len = rdir.len;
rstate.force = 1;
struct object_id loid, roid;
char status;
const char *src_path, *dst_path;
- size_t src_path_len, dst_path_len;
if (starts_with(info.buf, "::"))
die(N_("combined diff formats('-c' and '--cc') are "
if (strbuf_getline_nul(&lpath, fp))
break;
src_path = lpath.buf;
- src_path_len = lpath.len;
i++;
if (status != 'C' && status != 'R') {
dst_path = src_path;
- dst_path_len = src_path_len;
} else {
if (strbuf_getline_nul(&rpath, fp))
break;
dst_path = rpath.buf;
- dst_path_len = rpath.len;
}
if (S_ISGITLINK(lmode) || S_ISGITLINK(rmode)) {
}
if (lmode && status != 'C') {
- ce->ce_mode = lmode;
- oidcpy(&ce->oid, &loid);
- strcpy(ce->name, src_path);
- ce->ce_namelen = src_path_len;
- if (checkout_entry(ce, &lstate, NULL))
+ if (checkout_path(lmode, &loid, src_path, &lstate))
return error("could not write '%s'", src_path);
}
hashmap_add(&working_tree_dups, entry);
if (!use_wt_file(workdir, dst_path, &roid)) {
- ce->ce_mode = rmode;
- oidcpy(&ce->oid, &roid);
- strcpy(ce->name, dst_path);
- ce->ce_namelen = dst_path_len;
- if (checkout_entry(ce, &rstate, NULL))
+ if (checkout_path(rmode, &roid, dst_path, &rstate))
return error("could not write '%s'",
dst_path);
} else if (!is_null_oid(&roid)) {
exit_cleanup(tmpdir, rc);
finish:
- free(ce);
+ free(lbase_dir);
+ free(rbase_dir);
strbuf_release(&ldir);
strbuf_release(&rdir);
strbuf_release(&wtdir);
char **pack_lockfile_ptr = NULL;
struct child_process *conn;
struct fetch_pack_args args;
- struct sha1_array shallow = SHA1_ARRAY_INIT;
+ struct oid_array shallow = OID_ARRAY_INIT;
struct string_list deepen_not = STRING_LIST_INIT_DUP;
packet_trace_identity("fetch-pack");
struct ref *ref,
int check_old)
{
- char msg[1024];
+ char *msg;
char *rla = getenv("GIT_REFLOG_ACTION");
struct ref_transaction *transaction;
struct strbuf err = STRBUF_INIT;
return 0;
if (!rla)
rla = default_rla.buf;
- snprintf(msg, sizeof(msg), "%s: %s", rla, action);
+ msg = xstrfmt("%s: %s", rla, action);
transaction = ref_transaction_begin(&err);
if (!transaction ||
ref_transaction_free(transaction);
strbuf_release(&err);
+ free(msg);
return 0;
fail:
ref_transaction_free(transaction);
error("%s", err.buf);
strbuf_release(&err);
+ free(msg);
return df_conflict ? STORE_REF_ERROR_DF_CONFLICT
: STORE_REF_ERROR_OTHER;
}
if ((recurse_submodules != RECURSE_SUBMODULES_OFF) &&
(recurse_submodules != RECURSE_SUBMODULES_ON))
- check_for_new_submodule_commits(ref->new_oid.hash);
+ check_for_new_submodule_commits(&ref->new_oid);
r = s_update_ref(msg, ref, 0);
format_display(display, r ? '!' : '*', what,
r ? _("unable to update local ref") : NULL,
strbuf_add_unique_abbrev(&quickref, ref->new_oid.hash, DEFAULT_ABBREV);
if ((recurse_submodules != RECURSE_SUBMODULES_OFF) &&
(recurse_submodules != RECURSE_SUBMODULES_ON))
- check_for_new_submodule_commits(ref->new_oid.hash);
+ check_for_new_submodule_commits(&ref->new_oid);
r = s_update_ref("fast-forward", ref, 1);
format_display(display, r ? '!' : ' ', quickref.buf,
r ? _("unable to update local ref") : NULL,
strbuf_add_unique_abbrev(&quickref, ref->new_oid.hash, DEFAULT_ABBREV);
if ((recurse_submodules != RECURSE_SUBMODULES_OFF) &&
(recurse_submodules != RECURSE_SUBMODULES_ON))
- check_for_new_submodule_commits(ref->new_oid.hash);
+ check_for_new_submodule_commits(&ref->new_oid);
r = s_update_ref("forced-update", ref, 1);
format_display(display, r ? '!' : '+', quickref.buf,
r ? _("unable to update local ref") : _("forced update"),
}
if (keep_cache_objects) {
+ verify_index_checksum = 1;
read_cache();
for (i = 0; i < active_nr; i++) {
unsigned int mode;
* distributed, we can check only one and get a reasonable
* estimate.
*/
- char path[PATH_MAX];
- const char *objdir = get_object_directory();
DIR *dir;
struct dirent *ent;
int auto_threshold;
if (gc_auto_threshold <= 0)
return 0;
- if (sizeof(path) <= snprintf(path, sizeof(path), "%s/17", objdir)) {
- warning(_("insanely long object directory %.*s"), 50, objdir);
- return 0;
- }
- dir = opendir(path);
+ dir = opendir(git_path("objects/17"));
if (!dir)
return 0;
static const char *lock_repo_for_gc(int force, pid_t* ret_pid)
{
static struct lock_file lock;
- char my_host[128];
+ char my_host[HOST_NAME_MAX + 1];
struct strbuf sb = STRBUF_INIT;
struct stat st;
uintmax_t pid;
/* already locked */
return NULL;
- if (gethostname(my_host, sizeof(my_host)))
+ if (xgethostname(my_host, sizeof(my_host)))
xsnprintf(my_host, sizeof(my_host), "unknown");
pidfile_path = git_pathdup("gc.pid");
fd = hold_lock_file_for_update(&lock, pidfile_path,
LOCK_DIE_ON_ERROR);
if (!force) {
- static char locking_host[128];
+ static char locking_host[HOST_NAME_MAX + 1];
+ static char *scan_fmt;
int should_exit;
+
+ if (!scan_fmt)
+ scan_fmt = xstrfmt("%s %%%dc", "%"SCNuMAX, HOST_NAME_MAX);
fp = fopen(pidfile_path, "r");
memset(locking_host, 0, sizeof(locking_host));
should_exit =
* running.
*/
time(NULL) - st.st_mtime <= 12 * 3600 &&
- fscanf(fp, "%"SCNuMAX" %127c", &pid, locking_host) == 2 &&
+ fscanf(fp, scan_fmt, &pid, locking_host) == 2 &&
/* be gentle to concurrent "gc" on remote hosts */
(strcmp(locking_host, my_host) || !kill(pid, 0) || errno == EPERM);
if (fp != NULL)
hit |= wait_all();
if (hit && show_in_pager)
run_pager(&opt, prefix);
+ clear_pathspec(&pathspec);
free_grep_patterns(&opt);
return !hit;
}
if (from_stdin) {
input_fd = 0;
if (!pack_name) {
- static char tmp_file[PATH_MAX];
- output_fd = odb_mkstemp(tmp_file, sizeof(tmp_file),
+ struct strbuf tmp_file = STRBUF_INIT;
+ output_fd = odb_mkstemp(&tmp_file,
"pack/tmp_pack_XXXXXX");
- pack_name = xstrdup(tmp_file);
- } else
+ pack_name = strbuf_detach(&tmp_file, NULL);
+ } else {
output_fd = open(pack_name, O_CREAT|O_EXCL|O_RDWR, 0600);
- if (output_fd < 0)
- die_errno(_("unable to create '%s'"), pack_name);
+ if (output_fd < 0)
+ die_errno(_("unable to create '%s'"), pack_name);
+ }
nothread_data.pack_fd = output_fd;
} else {
input_fd = open(pack_name, O_RDONLY);
unsigned long has_size;
read_lock();
has_type = sha1_object_info(sha1, &has_size);
+ if (has_type < 0)
+ die(_("cannot read existing object info %s"), sha1_to_hex(sha1));
if (has_type != type || has_size != size)
die(_("SHA1 COLLISION FOUND WITH %s !"), sha1_to_hex(sha1));
has_data = read_sha1_file(sha1, &has_type, &has_size);
if (!from_stdin) {
printf("%s\n", sha1_to_hex(sha1));
} else {
- char buf[48];
- int len = snprintf(buf, sizeof(buf), "%s\t%s\n",
- report, sha1_to_hex(sha1));
- write_or_die(1, buf, len);
+ struct strbuf buf = STRBUF_INIT;
+
+ strbuf_addf(&buf, "%s\t%s\n", report, sha1_to_hex(sha1));
+ write_or_die(1, buf.buf, buf.len);
+ strbuf_release(&buf);
/*
* Let's just mimic git-unpack-objects here and write
#include "string-list.h"
#include "pathspec.h"
#include "run-command.h"
+#include "submodule.h"
static int abbrev;
static int show_deleted;
{
struct child_process cp = CHILD_PROCESS_INIT;
int status;
+ char *dir;
+
+ prepare_submodule_repo_env(&cp.env_array);
+ argv_array_push(&cp.env_array, GIT_DIR_ENVIRONMENT);
if (prefix_len)
argv_array_pushf(&cp.env_array, "%s=%s",
argv_array_pushv(&cp.args, submodule_options.argv);
cp.git_cmd = 1;
- cp.dir = ce->name;
+ dir = mkpathdup("%s/%s", get_git_work_tree(), ce->name);
+ cp.dir = dir;
status = run_command(&cp);
+ free(dir);
if (status)
exit(status);
}
static int tail_match(const char **pattern, const char *path)
{
const char *p;
- char pathbuf[PATH_MAX];
+ char *pathbuf;
if (!pattern)
return 1; /* no restriction */
- if (snprintf(pathbuf, sizeof(pathbuf), "/%s", path) > sizeof(pathbuf))
- return error("insanely long ref %.*s...", 20, path);
+ pathbuf = xstrfmt("/%s", path);
while ((p = *(pattern++)) != NULL) {
- if (!wildmatch(p, pathbuf, 0, NULL))
+ if (!wildmatch(p, pathbuf, 0, NULL)) {
+ free(pathbuf);
return 1;
+ }
}
+ free(pathbuf);
return 0;
}
{
int found;
const char *arguments[] = { pgm, "", "", "", path, "", "", "", NULL };
- char hexbuf[4][GIT_SHA1_HEXSZ + 1];
+ char hexbuf[4][GIT_MAX_HEXSZ + 1];
char ownbuf[4][60];
if (pos >= active_nr)
if (verify_signatures) {
for (p = remoteheads; p; p = p->next) {
struct commit *commit = p->item;
- char hex[GIT_SHA1_HEXSZ + 1];
+ char hex[GIT_MAX_HEXSZ + 1];
struct signature_check signature_check;
memset(&signature_check, 0, sizeof(signature_check));
return NULL;
}
-/* returns a static buffer */
-static const char *get_rev_name(const struct object *o)
+/* may return a constant string or use "buf" as scratch space */
+static const char *get_rev_name(const struct object *o, struct strbuf *buf)
{
- static char buffer[1024];
struct rev_name *n;
struct commit *c;
int len = strlen(n->tip_name);
if (len > 2 && !strcmp(n->tip_name + len - 2, "^0"))
len -= 2;
- snprintf(buffer, sizeof(buffer), "%.*s~%d", len, n->tip_name,
- n->generation);
-
- return buffer;
+ strbuf_reset(buf);
+ strbuf_addf(buf, "%.*s~%d", len, n->tip_name, n->generation);
+ return buf->buf;
}
}
{
const char *name;
const struct object_id *oid = &obj->oid;
+ struct strbuf buf = STRBUF_INIT;
if (!name_only)
printf("%s ", caller_name ? caller_name : oid_to_hex(oid));
- name = get_rev_name(obj);
+ name = get_rev_name(obj, &buf);
if (name)
printf("%s\n", name);
else if (allow_undefined)
printf("%s\n", find_unique_abbrev(oid->hash, DEFAULT_ABBREV));
else
die("cannot describe '%s'", oid_to_hex(oid));
+ strbuf_release(&buf);
}
static char const * const name_rev_usage[] = {
static void name_rev_line(char *p, struct name_ref_data *data)
{
+ struct strbuf buf = STRBUF_INIT;
int forty = 0;
char *p_start;
for (p_start = p; *p; p++) {
struct object *o =
lookup_object(sha1);
if (o)
- name = get_rev_name(o);
+ name = get_rev_name(o, &buf);
}
*(p+1) = c;
/* flush */
if (p_start != p)
fwrite(p_start, p - p_start, 1, stdout);
+
+ strbuf_release(&buf);
}
int cmd_name_rev(int argc, const char **argv, const char *prefix)
struct notes_tree *t;
unsigned char object[20], new_note[20];
const unsigned char *note;
- char logmsg[100];
+ char *logmsg;
const char * const *usage;
struct note_data d = { 0, 0, NULL, STRBUF_INIT };
struct option options[] = {
write_note_data(&d, new_note);
if (add_note(t, object, new_note, combine_notes_overwrite))
die("BUG: combine_notes_overwrite failed");
- snprintf(logmsg, sizeof(logmsg), "Notes added by 'git notes %s'",
- argv[0]);
+ logmsg = xstrfmt("Notes added by 'git notes %s'", argv[0]);
} else {
fprintf(stderr, _("Removing note for object %s\n"),
sha1_to_hex(object));
remove_note(t, object);
- snprintf(logmsg, sizeof(logmsg), "Notes removed by 'git notes %s'",
- argv[0]);
+ logmsg = xstrfmt("Notes removed by 'git notes %s'", argv[0]);
}
commit_notes(t, logmsg);
+ free(logmsg);
free_note_data(&d);
free_notes(t);
return 0;
*
* This is filled by get_object_list.
*/
-static struct sha1_array recent_objects;
+static struct oid_array recent_objects;
-static int loosened_object_can_be_discarded(const unsigned char *sha1,
+static int loosened_object_can_be_discarded(const struct object_id *oid,
unsigned long mtime)
{
if (!unpack_unreachable_expiration)
return 0;
if (mtime > unpack_unreachable_expiration)
return 0;
- if (sha1_array_lookup(&recent_objects, sha1) >= 0)
+ if (oid_array_lookup(&recent_objects, oid) >= 0)
return 0;
return 1;
}
{
struct packed_git *p;
uint32_t i;
- const unsigned char *sha1;
+ struct object_id oid;
for (p = packed_git; p; p = p->next) {
if (!p->pack_local || p->pack_keep)
die("cannot open pack index");
for (i = 0; i < p->num_objects; i++) {
- sha1 = nth_packed_object_sha1(p, i);
- if (!packlist_find(&to_pack, sha1, NULL) &&
- !has_sha1_pack_kept_or_nonlocal(sha1) &&
- !loosened_object_can_be_discarded(sha1, p->mtime))
- if (force_object_loose(sha1, p->mtime))
+ nth_packed_object_oid(&oid, p, i);
+ if (!packlist_find(&to_pack, oid.hash, NULL) &&
+ !has_sha1_pack_kept_or_nonlocal(oid.hash) &&
+ !loosened_object_can_be_discarded(&oid, p->mtime))
+ if (force_object_loose(oid.hash, p->mtime))
die("unable to force loose object");
}
}
const char *name,
void *data)
{
- sha1_array_append(&recent_objects, obj->oid.hash);
+ oid_array_append(&recent_objects, &obj->oid);
}
static void record_recent_commit(struct commit *commit, void *data)
{
- sha1_array_append(&recent_objects, commit->object.oid.hash);
+ oid_array_append(&recent_objects, &commit->object.oid);
}
static void get_object_list(int ac, const char **av)
if (unpack_unreachable)
loosen_unused_packed_objects(&revs);
- sha1_array_clear(&recent_objects);
+ oid_array_clear(&recent_objects);
}
static int option_parse_index_version(const struct option *opt,
};
if (parse_options(argc, argv, prefix, opts, pack_refs_usage, 0))
usage_with_options(pack_refs_usage, opts);
- return pack_refs(flags);
+ return refs_pack_refs(get_main_ref_store(), flags);
}
static void flush_one_hunk(struct object_id *result, git_SHA_CTX *ctx)
{
- unsigned char hash[GIT_SHA1_RAWSZ];
+ unsigned char hash[GIT_MAX_RAWSZ];
unsigned short carry = 0;
int i;
* Appends merge candidates from FETCH_HEAD that are not marked not-for-merge
* into merge_heads.
*/
-static void get_merge_heads(struct sha1_array *merge_heads)
+static void get_merge_heads(struct oid_array *merge_heads)
{
- const char *filename = git_path("FETCH_HEAD");
+ const char *filename = git_path_fetch_head();
FILE *fp;
struct strbuf sb = STRBUF_INIT;
- unsigned char sha1[GIT_SHA1_RAWSZ];
+ struct object_id oid;
if (!(fp = fopen(filename, "r")))
die_errno(_("could not open '%s' for reading"), filename);
while (strbuf_getline_lf(&sb, fp) != EOF) {
- if (get_sha1_hex(sb.buf, sha1))
+ if (get_oid_hex(sb.buf, &oid))
continue; /* invalid line: does not start with SHA1 */
if (starts_with(sb.buf + GIT_SHA1_HEXSZ, "\tnot-for-merge\t"))
continue; /* ref is not-for-merge */
- sha1_array_append(merge_heads, sha1);
+ oid_array_append(merge_heads, &oid);
}
fclose(fp);
strbuf_release(&sb);
/**
* "Pulls into void" by branching off merge_head.
*/
-static int pull_into_void(const unsigned char *merge_head,
- const unsigned char *curr_head)
+static int pull_into_void(const struct object_id *merge_head,
+ const struct object_id *curr_head)
{
/*
* Two-way merge: we treat the index as based on an empty tree,
* index/worktree changes that the user already made on the unborn
* branch.
*/
- if (checkout_fast_forward(EMPTY_TREE_SHA1_BIN, merge_head, 0))
+ if (checkout_fast_forward(EMPTY_TREE_SHA1_BIN, merge_head->hash, 0))
return 1;
- if (update_ref("initial pull", "HEAD", merge_head, curr_head, 0, UPDATE_REFS_DIE_ON_ERR))
+ if (update_ref("initial pull", "HEAD", merge_head->hash, curr_head->hash, 0, UPDATE_REFS_DIE_ON_ERR))
return 1;
return 0;
* current branch forked from its remote tracking branch. Returns 0 on success,
* -1 on failure.
*/
-static int get_rebase_fork_point(unsigned char *fork_point, const char *repo,
+static int get_rebase_fork_point(struct object_id *fork_point, const char *repo,
const char *refspec)
{
int ret;
if (ret)
goto cleanup;
- ret = get_sha1_hex(sb.buf, fork_point);
+ ret = get_oid_hex(sb.buf, fork_point);
if (ret)
goto cleanup;
* Sets merge_base to the octopus merge base of curr_head, merge_head and
* fork_point. Returns 0 if a merge base is found, 1 otherwise.
*/
-static int get_octopus_merge_base(unsigned char *merge_base,
- const unsigned char *curr_head,
- const unsigned char *merge_head,
- const unsigned char *fork_point)
+static int get_octopus_merge_base(struct object_id *merge_base,
+ const struct object_id *curr_head,
+ const struct object_id *merge_head,
+ const struct object_id *fork_point)
{
struct commit_list *revs = NULL, *result;
- commit_list_insert(lookup_commit_reference(curr_head), &revs);
- commit_list_insert(lookup_commit_reference(merge_head), &revs);
- if (!is_null_sha1(fork_point))
- commit_list_insert(lookup_commit_reference(fork_point), &revs);
+ commit_list_insert(lookup_commit_reference(curr_head->hash), &revs);
+ commit_list_insert(lookup_commit_reference(merge_head->hash), &revs);
+ if (!is_null_oid(fork_point))
+ commit_list_insert(lookup_commit_reference(fork_point->hash), &revs);
result = reduce_heads(get_octopus_merge_bases(revs));
free_commit_list(revs);
if (!result)
return 1;
- hashcpy(merge_base, result->item->object.oid.hash);
+ oidcpy(merge_base, &result->item->object.oid);
return 0;
}
* fork point calculated by get_rebase_fork_point(), runs git-rebase with the
* appropriate arguments and returns its exit status.
*/
-static int run_rebase(const unsigned char *curr_head,
- const unsigned char *merge_head,
- const unsigned char *fork_point)
+static int run_rebase(const struct object_id *curr_head,
+ const struct object_id *merge_head,
+ const struct object_id *fork_point)
{
int ret;
- unsigned char oct_merge_base[GIT_SHA1_RAWSZ];
+ struct object_id oct_merge_base;
struct argv_array args = ARGV_ARRAY_INIT;
- if (!get_octopus_merge_base(oct_merge_base, curr_head, merge_head, fork_point))
- if (!is_null_sha1(fork_point) && !hashcmp(oct_merge_base, fork_point))
+ if (!get_octopus_merge_base(&oct_merge_base, curr_head, merge_head, fork_point))
+ if (!is_null_oid(fork_point) && !oidcmp(&oct_merge_base, fork_point))
fork_point = NULL;
argv_array_push(&args, "rebase");
warning(_("ignoring --verify-signatures for rebase"));
argv_array_push(&args, "--onto");
- argv_array_push(&args, sha1_to_hex(merge_head));
+ argv_array_push(&args, oid_to_hex(merge_head));
- if (fork_point && !is_null_sha1(fork_point))
- argv_array_push(&args, sha1_to_hex(fork_point));
+ if (fork_point && !is_null_oid(fork_point))
+ argv_array_push(&args, oid_to_hex(fork_point));
else
- argv_array_push(&args, sha1_to_hex(merge_head));
+ argv_array_push(&args, oid_to_hex(merge_head));
ret = run_command_v_opt(args.argv, RUN_GIT_CMD);
argv_array_clear(&args);
int cmd_pull(int argc, const char **argv, const char *prefix)
{
const char *repo, **refspecs;
- struct sha1_array merge_heads = SHA1_ARRAY_INIT;
- unsigned char orig_head[GIT_SHA1_RAWSZ], curr_head[GIT_SHA1_RAWSZ];
- unsigned char rebase_fork_point[GIT_SHA1_RAWSZ];
+ struct oid_array merge_heads = OID_ARRAY_INIT;
+ struct object_id orig_head, curr_head;
+ struct object_id rebase_fork_point;
if (!getenv("GIT_REFLOG_ACTION"))
set_reflog_message(argc, argv);
if (read_cache_unmerged())
die_resolve_conflict("pull");
- if (file_exists(git_path("MERGE_HEAD")))
+ if (file_exists(git_path_merge_head()))
die_conclude_merge();
- if (get_sha1("HEAD", orig_head))
- hashclr(orig_head);
+ if (get_oid("HEAD", &orig_head))
+ oidclr(&orig_head);
if (!opt_rebase && opt_autostash != -1)
die(_("--[no-]autostash option is only valid with --rebase."));
if (opt_autostash != -1)
autostash = opt_autostash;
- if (is_null_sha1(orig_head) && !is_cache_unborn())
+ if (is_null_oid(&orig_head) && !is_cache_unborn())
die(_("Updating an unborn branch with changes added to the index."));
if (!autostash)
require_clean_work_tree(N_("pull with rebase"),
_("please commit or stash them."), 1, 0);
- if (get_rebase_fork_point(rebase_fork_point, repo, *refspecs))
- hashclr(rebase_fork_point);
+ if (get_rebase_fork_point(&rebase_fork_point, repo, *refspecs))
+ oidclr(&rebase_fork_point);
}
if (run_fetch(repo, refspecs))
if (opt_dry_run)
return 0;
- if (get_sha1("HEAD", curr_head))
- hashclr(curr_head);
+ if (get_oid("HEAD", &curr_head))
+ oidclr(&curr_head);
- if (!is_null_sha1(orig_head) && !is_null_sha1(curr_head) &&
- hashcmp(orig_head, curr_head)) {
+ if (!is_null_oid(&orig_head) && !is_null_oid(&curr_head) &&
+ oidcmp(&orig_head, &curr_head)) {
/*
* The fetch involved updating the current branch.
*
warning(_("fetch updated the current branch head.\n"
"fast-forwarding your working tree from\n"
- "commit %s."), sha1_to_hex(orig_head));
+ "commit %s."), oid_to_hex(&orig_head));
- if (checkout_fast_forward(orig_head, curr_head, 0))
+ if (checkout_fast_forward(orig_head.hash, curr_head.hash, 0))
die(_("Cannot fast-forward your working tree.\n"
"After making sure that you saved anything precious from\n"
"$ git diff %s\n"
"output, run\n"
"$ git reset --hard\n"
- "to recover."), sha1_to_hex(orig_head));
+ "to recover."), oid_to_hex(&orig_head));
}
get_merge_heads(&merge_heads);
if (!merge_heads.nr)
die_no_merge_candidates(repo, refspecs);
- if (is_null_sha1(orig_head)) {
+ if (is_null_oid(&orig_head)) {
if (merge_heads.nr > 1)
die(_("Cannot merge multiple branches into empty head."));
- return pull_into_void(*merge_heads.sha1, curr_head);
+ return pull_into_void(merge_heads.oid, &curr_head);
}
if (opt_rebase && merge_heads.nr > 1)
die(_("Cannot rebase onto multiple branches."));
struct commit_list *list = NULL;
struct commit *merge_head, *head;
- head = lookup_commit_reference(orig_head);
+ head = lookup_commit_reference(orig_head.hash);
commit_list_insert(head, &list);
- merge_head = lookup_commit_reference(merge_heads.sha1[0]);
+ merge_head = lookup_commit_reference(merge_heads.oid[0].hash);
if (is_descendant_of(merge_head, list)) {
/* we can fast-forward this without invoking rebase */
opt_ff = "--ff-only";
return run_merge();
}
- return run_rebase(curr_head, *merge_heads.sha1, rebase_fork_point);
+ return run_rebase(&curr_head, merge_heads.oid, &rebase_fork_point);
} else {
return run_merge();
}
int push_cert = -1;
int rc;
const char *repo = NULL; /* default repository */
- static struct string_list push_options = STRING_LIST_INIT_DUP;
- static struct string_list_item *item;
+ struct string_list push_options = STRING_LIST_INIT_DUP;
+ const struct string_list_item *item;
struct option options[] = {
OPT__VERBOSITY(&verbosity),
die(_("push options must not have new line characters"));
rc = do_push(repo, flags, &push_options);
+ string_list_clear(&push_options, 0);
if (rc == -1)
usage_with_options(push_usage, options);
else
return git_default_config(var, value, cb);
}
-static void show_ref(const char *path, const unsigned char *sha1)
+static void show_ref(const char *path, const struct object_id *oid)
{
if (sent_capabilities) {
- packet_write_fmt(1, "%s %s\n", sha1_to_hex(sha1), path);
+ packet_write_fmt(1, "%s %s\n", oid_to_hex(oid), path);
} else {
struct strbuf cap = STRBUF_INIT;
strbuf_addstr(&cap, " push-options");
strbuf_addf(&cap, " agent=%s", git_user_agent_sanitized());
packet_write_fmt(1, "%s %s%c%s\n",
- sha1_to_hex(sha1), path, 0, cap.buf);
+ oid_to_hex(oid), path, 0, cap.buf);
strbuf_release(&cap);
sent_capabilities = 1;
}
} else {
oidset_insert(seen, oid);
}
- show_ref(path, oid->hash);
+ show_ref(path, oid);
return 0;
}
if (oidset_insert(seen, oid))
return;
- show_ref(".have", oid->hash);
+ show_ref(".have", oid);
}
static void write_head_info(void)
for_each_alternate_ref(show_one_alternate_ref, &seen);
oidset_clear(&seen);
if (!sent_capabilities)
- show_ref("capabilities^{}", null_sha1);
+ show_ref("capabilities^{}", &null_oid);
advertise_shallow_grafts(1);
unsigned int skip_update:1,
did_not_exist:1;
int index;
- unsigned char old_sha1[20];
- unsigned char new_sha1[20];
+ struct object_id old_oid;
+ struct object_id new_oid;
char ref_name[FLEX_ARRAY]; /* more */
};
return -1; /* EOF */
strbuf_reset(&state->buf);
strbuf_addf(&state->buf, "%s %s %s\n",
- sha1_to_hex(cmd->old_sha1), sha1_to_hex(cmd->new_sha1),
+ oid_to_hex(&cmd->old_oid), oid_to_hex(&cmd->new_oid),
cmd->ref_name);
state->cmd = cmd->next;
if (bufp) {
return 0;
argv[1] = cmd->ref_name;
- argv[2] = sha1_to_hex(cmd->old_sha1);
- argv[3] = sha1_to_hex(cmd->new_sha1);
+ argv[2] = oid_to_hex(&cmd->old_oid);
+ argv[3] = oid_to_hex(&cmd->new_oid);
argv[4] = NULL;
proc.no_stdin = 1;
proc.stdout_to_stderr = 1;
proc.err = use_sideband ? -1 : 0;
proc.argv = argv;
- proc.env = tmp_objdir_env(tmp_objdir);
code = start_command(&proc);
if (code)
static int update_shallow_ref(struct command *cmd, struct shallow_info *si)
{
static struct lock_file shallow_lock;
- struct sha1_array extra = SHA1_ARRAY_INIT;
+ struct oid_array extra = OID_ARRAY_INIT;
struct check_connected_options opt = CHECK_CONNECTED_INIT;
uint32_t mask = 1 << (cmd->index % 32);
int i;
if (si->used_shallow[i] &&
(si->used_shallow[i][cmd->index / 32] & mask) &&
!delayed_reachability_test(si, i))
- sha1_array_append(&extra, si->shallow->sha1[i]);
+ oid_array_append(&extra, &si->shallow->oid[i]);
opt.env = tmp_objdir_env(tmp_objdir);
setup_alternate_shallow(&shallow_lock, &opt.shallow_file, &extra);
if (check_connected(command_singleton_iterator, cmd, &opt)) {
rollback_lock_file(&shallow_lock);
- sha1_array_clear(&extra);
+ oid_array_clear(&extra);
return -1;
}
* not lose these new roots..
*/
for (i = 0; i < extra.nr; i++)
- register_shallow(extra.sha1[i]);
+ register_shallow(extra.oid[i].hash);
si->shallow_ref[cmd->index] = 0;
- sha1_array_clear(&extra);
+ oid_array_clear(&extra);
return 0;
}
const char *name = cmd->ref_name;
struct strbuf namespaced_name_buf = STRBUF_INIT;
const char *namespaced_name, *ret;
- unsigned char *old_sha1 = cmd->old_sha1;
- unsigned char *new_sha1 = cmd->new_sha1;
+ struct object_id *old_oid = &cmd->old_oid;
+ struct object_id *new_oid = &cmd->new_oid;
/* only refs/... are allowed */
if (!starts_with(name, "refs/") || check_refname_format(name + 5, 0)) {
refuse_unconfigured_deny();
return "branch is currently checked out";
case DENY_UPDATE_INSTEAD:
- ret = update_worktree(new_sha1);
+ ret = update_worktree(new_oid->hash);
if (ret)
return ret;
break;
}
}
- if (!is_null_sha1(new_sha1) && !has_sha1_file(new_sha1)) {
+ if (!is_null_oid(new_oid) && !has_object_file(new_oid)) {
error("unpack should have generated %s, "
- "but I can't find it!", sha1_to_hex(new_sha1));
+ "but I can't find it!", oid_to_hex(new_oid));
return "bad pack";
}
- if (!is_null_sha1(old_sha1) && is_null_sha1(new_sha1)) {
+ if (!is_null_oid(old_oid) && is_null_oid(new_oid)) {
if (deny_deletes && starts_with(name, "refs/heads/")) {
rp_error("denying ref deletion for %s", name);
return "deletion prohibited";
}
}
- if (deny_non_fast_forwards && !is_null_sha1(new_sha1) &&
- !is_null_sha1(old_sha1) &&
+ if (deny_non_fast_forwards && !is_null_oid(new_oid) &&
+ !is_null_oid(old_oid) &&
starts_with(name, "refs/heads/")) {
struct object *old_object, *new_object;
struct commit *old_commit, *new_commit;
- old_object = parse_object(old_sha1);
- new_object = parse_object(new_sha1);
+ old_object = parse_object(old_oid->hash);
+ new_object = parse_object(new_oid->hash);
if (!old_object || !new_object ||
old_object->type != OBJ_COMMIT ||
return "hook declined";
}
- if (is_null_sha1(new_sha1)) {
+ if (is_null_oid(new_oid)) {
struct strbuf err = STRBUF_INIT;
- if (!parse_object(old_sha1)) {
- old_sha1 = NULL;
+ if (!parse_object(old_oid->hash)) {
+ old_oid = NULL;
if (ref_exists(name)) {
rp_warning("Allowing deletion of corrupt ref.");
} else {
}
if (ref_transaction_delete(transaction,
namespaced_name,
- old_sha1,
+ old_oid ? old_oid->hash : NULL,
0, "push", &err)) {
rp_error("%s", err.buf);
strbuf_release(&err);
if (ref_transaction_update(transaction,
namespaced_name,
- new_sha1, old_sha1,
+ new_oid->hash, old_oid->hash,
0, "push",
&err)) {
rp_error("%s", err.buf);
const char *dst_name;
struct string_list_item *item;
struct command *dst_cmd;
- unsigned char sha1[GIT_SHA1_RAWSZ];
+ unsigned char sha1[GIT_MAX_RAWSZ];
int flag;
strbuf_addf(&buf, "%s%s", get_git_namespace(), cmd->ref_name);
dst_cmd = (struct command *) item->util;
- if (!hashcmp(cmd->old_sha1, dst_cmd->old_sha1) &&
- !hashcmp(cmd->new_sha1, dst_cmd->new_sha1))
+ if (!oidcmp(&cmd->old_oid, &dst_cmd->old_oid) &&
+ !oidcmp(&cmd->new_oid, &dst_cmd->new_oid))
return;
dst_cmd->skip_update = 1;
rp_error("refusing inconsistent update between symref '%s' (%s..%s) and"
" its target '%s' (%s..%s)",
cmd->ref_name,
- find_unique_abbrev(cmd->old_sha1, DEFAULT_ABBREV),
- find_unique_abbrev(cmd->new_sha1, DEFAULT_ABBREV),
+ find_unique_abbrev(cmd->old_oid.hash, DEFAULT_ABBREV),
+ find_unique_abbrev(cmd->new_oid.hash, DEFAULT_ABBREV),
dst_cmd->ref_name,
- find_unique_abbrev(dst_cmd->old_sha1, DEFAULT_ABBREV),
- find_unique_abbrev(dst_cmd->new_sha1, DEFAULT_ABBREV));
+ find_unique_abbrev(dst_cmd->old_oid.hash, DEFAULT_ABBREV),
+ find_unique_abbrev(dst_cmd->new_oid.hash, DEFAULT_ABBREV));
cmd->error_string = dst_cmd->error_string =
"inconsistent aliased update";
struct command **cmd_list = cb_data;
struct command *cmd = *cmd_list;
- if (!cmd || is_null_sha1(cmd->new_sha1))
+ if (!cmd || is_null_oid(&cmd->new_oid))
return -1; /* end of list */
*cmd_list = NULL; /* this returns only one */
- hashcpy(sha1, cmd->new_sha1);
+ hashcpy(sha1, cmd->new_oid.hash);
return 0;
}
if (shallow_update && data->si->shallow_ref[cmd->index])
/* to be checked in update_shallow_ref() */
continue;
- if (!is_null_sha1(cmd->new_sha1) && !cmd->skip_update) {
- hashcpy(sha1, cmd->new_sha1);
+ if (!is_null_oid(&cmd->new_oid) && !cmd->skip_update) {
+ hashcpy(sha1, cmd->new_oid.hash);
*cmd_list = cmd->next;
return 0;
}
if (!ref_is_hidden(cmd->ref_name, refname_full.buf))
continue;
- if (is_null_sha1(cmd->new_sha1))
+ if (is_null_oid(&cmd->new_oid))
cmd->error_string = "deny deleting a hidden ref";
else
cmd->error_string = "deny updating a hidden ref";
const char *line,
int linelen)
{
- unsigned char old_sha1[20], new_sha1[20];
+ struct object_id old_oid, new_oid;
struct command *cmd;
const char *refname;
int reflen;
+ const char *p;
- if (linelen < 83 ||
- line[40] != ' ' ||
- line[81] != ' ' ||
- get_sha1_hex(line, old_sha1) ||
- get_sha1_hex(line + 41, new_sha1))
+ if (parse_oid_hex(line, &old_oid, &p) ||
+ *p++ != ' ' ||
+ parse_oid_hex(p, &new_oid, &p) ||
+ *p++ != ' ')
die("protocol error: expected old/new/ref, got '%s'", line);
- refname = line + 82;
- reflen = linelen - 82;
+ refname = p;
+ reflen = linelen - (p - line);
FLEX_ALLOC_MEM(cmd, ref_name, refname, reflen);
- hashcpy(cmd->old_sha1, old_sha1);
- hashcpy(cmd->new_sha1, new_sha1);
+ oidcpy(&cmd->old_oid, &old_oid);
+ oidcpy(&cmd->new_oid, &new_oid);
*tail = cmd;
return &cmd->next;
}
}
}
-static struct command *read_head_info(struct sha1_array *shallow)
+static struct command *read_head_info(struct oid_array *shallow)
{
struct command *commands = NULL;
struct command **p = &commands;
if (!line)
break;
- if (len == 48 && starts_with(line, "shallow ")) {
- unsigned char sha1[20];
- if (get_sha1_hex(line + 8, sha1))
+ if (len > 8 && starts_with(line, "shallow ")) {
+ struct object_id oid;
+ if (get_oid_hex(line + 8, &oid))
die("protocol error: expected shallow sha, got '%s'",
line + 8);
- sha1_array_append(shallow, sha1);
+ oid_array_append(shallow, &oid);
continue;
}
static const char *pack_lockfile;
+static void push_header_arg(struct argv_array *args, struct pack_header *hdr)
+{
+ argv_array_pushf(args, "--pack_header=%"PRIu32",%"PRIu32,
+ ntohl(hdr->hdr_version), ntohl(hdr->hdr_entries));
+}
+
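
A minimal sketch of the new helper in use, with hypothetical header values (the argv_array and pack_header setup below is illustrative, not taken from the patch):

    struct argv_array args = ARGV_ARRAY_INIT;
    struct pack_header hdr;
    hdr.hdr_version = htonl(2);     /* hypothetical: pack stream version 2 */
    hdr.hdr_entries = htonl(150);   /* hypothetical: 150 objects in the pack */
    push_header_arg(&args, &hdr);   /* appends the single argument "--pack_header=2,150" */
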
static const char *unpack(int err_fd, struct shallow_info *si)
{
struct pack_header hdr;
const char *hdr_err;
int status;
- char hdr_arg[38];
struct child_process child = CHILD_PROCESS_INIT;
int fsck_objects = (receive_fsck_objects >= 0
? receive_fsck_objects
close(err_fd);
return hdr_err;
}
- snprintf(hdr_arg, sizeof(hdr_arg),
- "--pack_header=%"PRIu32",%"PRIu32,
- ntohl(hdr.hdr_version), ntohl(hdr.hdr_entries));
if (si->nr_ours || si->nr_theirs) {
alt_shallow_file = setup_temporary_shallow(si->shallow);
tmp_objdir_add_as_alternate(tmp_objdir);
if (ntohl(hdr.hdr_entries) < unpack_limit) {
- argv_array_pushl(&child.args, "unpack-objects", hdr_arg, NULL);
+ argv_array_push(&child.args, "unpack-objects");
+ push_header_arg(&child.args, &hdr);
if (quiet)
argv_array_push(&child.args, "-q");
if (fsck_objects)
if (status)
return "unpack-objects abnormal exit";
} else {
- char hostname[256];
+ char hostname[HOST_NAME_MAX + 1];
- argv_array_pushl(&child.args, "index-pack",
- "--stdin", hdr_arg, NULL);
+ argv_array_pushl(&child.args, "index-pack", "--stdin", NULL);
+ push_header_arg(&child.args, &hdr);
- if (gethostname(hostname, sizeof(hostname)))
+ if (xgethostname(hostname, sizeof(hostname)))
xsnprintf(hostname, sizeof(hostname), "localhost");
argv_array_pushf(&child.args,
"--keep=receive-pack %"PRIuMAX" on %s",
static void update_shallow_info(struct command *commands,
struct shallow_info *si,
- struct sha1_array *ref)
+ struct oid_array *ref)
{
struct command *cmd;
int *ref_status;
}
for (cmd = commands; cmd; cmd = cmd->next) {
- if (is_null_sha1(cmd->new_sha1))
+ if (is_null_oid(&cmd->new_oid))
continue;
- sha1_array_append(ref, cmd->new_sha1);
+ oid_array_append(ref, &cmd->new_oid);
cmd->index = ref->nr - 1;
}
si->ref = ref;
ALLOC_ARRAY(ref_status, ref->nr);
assign_shallow_commits_to_refs(si, NULL, ref_status);
for (cmd = commands; cmd; cmd = cmd->next) {
- if (is_null_sha1(cmd->new_sha1))
+ if (is_null_oid(&cmd->new_oid))
continue;
if (ref_status[cmd->index]) {
cmd->error_string = "shallow update not allowed";
{
struct command *cmd;
for (cmd = commands; cmd; cmd = cmd->next) {
- if (!is_null_sha1(cmd->new_sha1))
+ if (!is_null_oid(&cmd->new_oid))
return 0;
}
return 1;
{
int advertise_refs = 0;
struct command *commands;
- struct sha1_array shallow = SHA1_ARRAY_INIT;
- struct sha1_array ref = SHA1_ARRAY_INIT;
+ struct oid_array shallow = OID_ARRAY_INIT;
+ struct oid_array ref = OID_ARRAY_INIT;
struct shallow_info si;
struct option options[] = {
}
if (use_sideband)
packet_flush(1);
- sha1_array_clear(&shallow);
- sha1_array_clear(&ref);
+ oid_array_clear(&shallow);
+ oid_array_clear(&ref);
free((void *)push_cert_nonce);
return 0;
}
static int for_each_replace_name(const char **argv, each_replace_name_fn fn)
{
const char **p, *full_hex;
- char ref[PATH_MAX];
+ struct strbuf ref = STRBUF_INIT;
+ size_t base_len;
int had_error = 0;
struct object_id oid;
+ strbuf_addstr(&ref, git_replace_ref_base);
+ base_len = ref.len;
+
for (p = argv; *p; p++) {
if (get_oid(*p, &oid)) {
error("Failed to resolve '%s' as a valid ref.", *p);
had_error = 1;
continue;
}
- full_hex = oid_to_hex(&oid);
- snprintf(ref, sizeof(ref), "%s%s", git_replace_ref_base, full_hex);
- /* read_ref() may reuse the buffer */
- full_hex = ref + strlen(git_replace_ref_base);
- if (read_ref(ref, oid.hash)) {
+
+ strbuf_setlen(&ref, base_len);
+ strbuf_addstr(&ref, oid_to_hex(&oid));
+ full_hex = ref.buf + base_len;
+
+ if (read_ref(ref.buf, oid.hash)) {
error("replace ref '%s' not found.", full_hex);
had_error = 1;
continue;
}
- if (fn(full_hex, ref, &oid))
+ if (fn(full_hex, ref.buf, &oid))
had_error = 1;
}
+ strbuf_release(&ref);
return had_error;
}
static void check_ref_valid(struct object_id *object,
struct object_id *prev,
- char *ref,
- int ref_size,
+ struct strbuf *ref,
int force)
{
- if (snprintf(ref, ref_size,
- "%s%s", git_replace_ref_base,
- oid_to_hex(object)) > ref_size - 1)
- die("replace ref name too long: %.*s...", 50, ref);
- if (check_refname_format(ref, 0))
- die("'%s' is not a valid ref name.", ref);
-
- if (read_ref(ref, prev->hash))
+ strbuf_reset(ref);
+ strbuf_addf(ref, "%s%s", git_replace_ref_base, oid_to_hex(object));
+ if (check_refname_format(ref->buf, 0))
+ die("'%s' is not a valid ref name.", ref->buf);
+
+ if (read_ref(ref->buf, prev->hash))
oidclr(prev);
else if (!force)
- die("replace ref '%s' already exists", ref);
+ die("replace ref '%s' already exists", ref->buf);
}
static int replace_object_oid(const char *object_ref,
{
struct object_id prev;
enum object_type obj_type, repl_type;
- char ref[PATH_MAX];
+ struct strbuf ref = STRBUF_INIT;
struct ref_transaction *transaction;
struct strbuf err = STRBUF_INIT;
object_ref, typename(obj_type),
replace_ref, typename(repl_type));
- check_ref_valid(object, &prev, ref, sizeof(ref), force);
+ check_ref_valid(object, &prev, &ref, force);
transaction = ref_transaction_begin(&err);
if (!transaction ||
- ref_transaction_update(transaction, ref, repl->hash, prev.hash,
+ ref_transaction_update(transaction, ref.buf, repl->hash, prev.hash,
0, NULL, &err) ||
ref_transaction_commit(transaction, &err))
die("%s", err.buf);
ref_transaction_free(transaction);
+ strbuf_release(&ref);
return 0;
}
char *tmpfile = git_pathdup("REPLACE_EDITOBJ");
enum object_type type;
struct object_id old, new, prev;
- char ref[PATH_MAX];
+ struct strbuf ref = STRBUF_INIT;
if (get_oid(object_ref, &old) < 0)
die("Not a valid object name: '%s'", object_ref);
if (type < 0)
die("unable to get object type for %s", oid_to_hex(&old));
- check_ref_valid(&old, &prev, ref, sizeof(ref), force);
+ check_ref_valid(&old, &prev, &ref, force);
+ strbuf_release(&ref);
export_object(&old, type, raw, tmpfile);
if (launch_editor(tmpfile, NULL, NULL) < 0)
static int show_bisect_vars(struct rev_list_info *info, int reaches, int all)
{
int cnt, flags = info->flags;
- char hex[GIT_SHA1_HEXSZ + 1] = "";
+ char hex[GIT_MAX_HEXSZ + 1] = "";
struct commit_list *tried;
struct rev_info *revs = info->revs;
return 0;
}
-static int show_abbrev(const unsigned char *sha1, void *cb_data)
+static int show_abbrev(const struct object_id *oid, void *cb_data)
{
- show_rev(NORMAL, sha1, NULL);
+ show_rev(NORMAL, oid->hash, NULL);
return 0;
}
static void show_datestring(const char *flag, const char *datestr)
{
- static char buffer[100];
+ char *buffer;
/* date handling requires both flags and revs */
if ((filter & (DO_FLAGS | DO_REVS)) != (DO_FLAGS | DO_REVS))
return;
- snprintf(buffer, sizeof(buffer), "%s%lu", flag, approxidate(datestr));
+ buffer = xstrfmt("%s%lu", flag, approxidate(datestr));
show(buffer);
+ free(buffer);
}
static int show_file(const char *arg, int output_prefix)
const char *dest = NULL;
int fd[2];
struct child_process *conn;
- struct sha1_array extra_have = SHA1_ARRAY_INIT;
- struct sha1_array shallow = SHA1_ARRAY_INIT;
+ struct oid_array extra_have = OID_ARRAY_INIT;
+ struct oid_array shallow = OID_ARRAY_INIT;
struct ref *remote_refs, *local_refs;
int ret;
int helper_status = 0;
return 0;
}
+static int push_check(int argc, const char **argv, const char *prefix)
+{
+ struct remote *remote;
+
+ if (argc < 2)
+ die("submodule--helper push-check requires at least 1 argument");
+
+ /*
+ * The remote must be configured.
+ * This is to avoid pushing to the exact same URL as the parent.
+ */
+ remote = pushremote_get(argv[1]);
+ if (!remote || remote->origin == REMOTE_UNCONFIGURED)
+ die("remote '%s' not configured", argv[1]);
+
+ /* Check the refspec */
+ if (argc > 2) {
+ int i, refspec_nr = argc - 2;
+ struct ref *local_refs = get_local_heads();
+ struct refspec *refspec = parse_push_refspec(refspec_nr,
+ argv + 2);
+
+ for (i = 0; i < refspec_nr; i++) {
+ struct refspec *rs = refspec + i;
+
+ if (rs->pattern || rs->matching)
+ continue;
+
+ /*
+ * LHS must match a single ref
+ * NEEDSWORK: add logic to special case 'HEAD' once
+ * working with submodules in a detached head state
+ * ceases to be the norm.
+ */
+ if (count_refspec_match(rs->src, local_refs, NULL) != 1)
+ die("src refspec '%s' must name a ref",
+ rs->src);
+ }
+ free_refspec(refspec_nr, refspec);
+ }
+
+ return 0;
+}
+
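
As a usage sketch (the remote and refspec names are hypothetical), the superproject's push path would run this helper inside the submodule before recursing, along the lines of:

    git submodule--helper push-check origin refs/heads/topic

which succeeds only if "origin" is a configured remote and every non-pattern source refspec names exactly one local ref.
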
static int absorb_git_dirs(int argc, const char **argv, const char *prefix)
{
int i;
static int is_active(int argc, const char **argv, const char *prefix)
{
if (argc != 2)
- die("submodule--helper is-active takes exactly 1 arguments");
+ die("submodule--helper is-active takes exactly 1 argument");
gitmodules_config();
{"resolve-relative-url-test", resolve_relative_url_test, 0},
{"init", module_init, SUPPORT_SUPER_PREFIX},
{"remote-branch", resolve_remote_submodule_branch, 0},
+ {"push-check", push_check, 0},
{"absorb-git-dirs", absorb_git_dirs, SUPPORT_SUPER_PREFIX},
{"is-active", is_active, 0},
};
const void *cb_data)
{
const char **p;
- char ref[PATH_MAX];
+ struct strbuf ref = STRBUF_INIT;
int had_error = 0;
unsigned char sha1[20];
for (p = argv; *p; p++) {
- if (snprintf(ref, sizeof(ref), "refs/tags/%s", *p)
- >= sizeof(ref)) {
- error(_("tag name too long: %.*s..."), 50, *p);
- had_error = 1;
- continue;
- }
- if (read_ref(ref, sha1)) {
+ strbuf_reset(&ref);
+ strbuf_addf(&ref, "refs/tags/%s", *p);
+ if (read_ref(ref.buf, sha1)) {
error(_("tag '%s' not found."), *p);
had_error = 1;
continue;
}
- if (fn(*p, ref, sha1, cb_data))
+ if (fn(*p, ref.buf, sha1, cb_data))
had_error = 1;
}
+ strbuf_release(&ref);
return had_error;
}
unsigned char *prev, unsigned char *result)
{
enum object_type type;
- char header_buf[1024];
- int header_len;
+ struct strbuf header = STRBUF_INIT;
char *path = NULL;
type = sha1_object_info(object, NULL);
if (type <= OBJ_NONE)
die(_("bad object type."));
- header_len = snprintf(header_buf, sizeof(header_buf),
- "object %s\n"
- "type %s\n"
- "tag %s\n"
- "tagger %s\n\n",
- sha1_to_hex(object),
- typename(type),
- tag,
- git_committer_info(IDENT_STRICT));
-
- if (header_len > sizeof(header_buf) - 1)
- die(_("tag header too big."));
+ strbuf_addf(&header,
+ "object %s\n"
+ "type %s\n"
+ "tag %s\n"
+ "tagger %s\n\n",
+ sha1_to_hex(object),
+ typename(type),
+ tag,
+ git_committer_info(IDENT_STRICT));
if (!opt->message_given) {
int fd;
if (!opt->message_given && !buf->len)
die(_("no tag message?"));
- strbuf_insert(buf, 0, header_buf, header_len);
+ strbuf_insert(buf, 0, header.buf, header.len);
+ strbuf_release(&header);
if (build_tag_object(buf, opt->sign, result) < 0) {
if (path)
printf("%s\n", reason.buf);
if (show_only)
continue;
- strbuf_reset(&path);
- strbuf_addstr(&path, git_path("worktrees/%s", d->d_name));
+ git_path_buf(&path, "worktrees/%s", d->d_name);
ret = remove_dir_recursively(&path, 0);
if (ret < 0 && errno == ENOTDIR)
ret = unlink(path.buf);
}
name = worktree_basename(path, &len);
- strbuf_addstr(&sb_repo,
- git_path("worktrees/%.*s", (int)(path + len - name), name));
+ git_path_buf(&sb_repo, "worktrees/%.*s", (int)(path + len - name), name);
len = sb_repo.len;
if (safe_create_leading_directories_const(sb_repo.buf))
die_errno(_("could not create leading directories of '%s'"),
#define GIT_SHA1_RAWSZ 20
#define GIT_SHA1_HEXSZ (2 * GIT_SHA1_RAWSZ)
+/* The length in bytes and in hex digits of the largest possible hash value. */
+#define GIT_MAX_RAWSZ GIT_SHA1_RAWSZ
+#define GIT_MAX_HEXSZ GIT_SHA1_HEXSZ
+
struct object_id {
- unsigned char hash[GIT_SHA1_RAWSZ];
+ unsigned char hash[GIT_MAX_RAWSZ];
};
#if defined(DT_UNKNOWN) && !defined(NO_D_TYPE_IN_DIRENT)
extern int discard_index(struct index_state *);
extern int unmerged_index(const struct index_state *);
extern int verify_path(const char *path);
+extern int strcmp_offset(const char *s1, const char *s2, size_t *first_change);
extern int index_dir_exists(struct index_state *istate, const char *name, int namelen);
extern void adjust_dirname_case(struct index_state *istate, char *name);
extern struct cache_entry *index_file_exists(struct index_state *istate, const char *name, int namelen, int igncase);
extern int hold_locked_index(struct lock_file *, int);
extern void set_alternate_index_output(const char *);
+extern int verify_index_checksum;
+
/* Environment bits from configuration mechanism */
extern int trust_executable_bit;
extern int trust_ctime;
extern const char *find_unique_abbrev(const unsigned char *sha1, int len);
extern int find_unique_abbrev_r(char *hex, const unsigned char *sha1, int len);
-extern const unsigned char null_sha1[GIT_SHA1_RAWSZ];
+extern const unsigned char null_sha1[GIT_MAX_RAWSZ];
extern const struct object_id null_oid;
static inline int hashcmp(const unsigned char *sha1, const unsigned char *sha2)
int raceproof_create_file(const char *path, create_file_fn fn, void *cb);
int mkdir_in_gitdir(const char *path);
-extern char *expand_user_path(const char *path);
+extern char *expand_user_path(const char *path, int real_home);
const char *enter_repo(const char *path, int strict);
static inline int is_absolute_path(const char *path)
{
extern int get_oid(const char *str, struct object_id *oid);
-typedef int each_abbrev_fn(const unsigned char *sha1, void *);
+typedef int each_abbrev_fn(const struct object_id *oid, void *);
extern int for_each_abbrev(const char *prefix, each_abbrev_fn, void *);
extern int set_disambiguate_hint_config(const char *var, const char *value);
extern void pack_report(void);
/*
- * Create a temporary file rooted in the object database directory.
+ * Create a temporary file rooted in the object database directory, or
+ * die on failure. The filename is taken from "pattern", which should have the
+ * usual "XXXXXX" trailer, and the resulting filename is written into the
+ * "template" buffer. Returns the open descriptor.
*/
-extern int odb_mkstemp(char *template, size_t limit, const char *pattern);
+extern int odb_mkstemp(struct strbuf *template, const char *pattern);
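
A minimal caller sketch under the new signature; the strbuf receives the generated path, so the caller owns it and releases it when done (this mirrors the pack-writing callers elsewhere in this series):

    struct strbuf tmp = STRBUF_INIT;
    int fd = odb_mkstemp(&tmp, "pack/tmp_pack_XXXXXX");
    /* write data to fd; tmp.buf now holds the temporary path */
    strbuf_release(&tmp);
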
/*
* Generate the filename to be used for a pack file with checksum "sha1" and
CONFIG_ORIGIN_CMDLINE
};
+struct config_options {
+ unsigned int respect_includes : 1;
+ const char *git_dir;
+};
+
typedef int (*config_fn_t)(const char *, const char *, void *);
extern int git_default_config(const char *, const char *, void *);
extern int git_config_from_file(config_fn_t fn, const char *, void *);
extern void git_config(config_fn_t fn, void *);
extern int git_config_with_options(config_fn_t fn, void *,
struct git_config_source *config_source,
- int respect_includes);
+ const struct config_options *opts);
extern int git_parse_ulong(const char *, unsigned long *);
extern int git_parse_maybe_bool(const char *);
extern int git_config_int(const char *, const char *);
extern int64_t git_config_int64(const char *, const char *);
extern unsigned long git_config_ulong(const char *, const char *);
+extern ssize_t git_config_ssize_t(const char *, const char *);
extern int git_config_bool_or_int(const char *, const char *, int *);
extern int git_config_bool(const char *, const char *);
extern int git_config_maybe_bool(const char *, const char *);
int depth;
config_fn_t fn;
void *data;
+ const struct config_options *opts;
};
#define CONFIG_INCLUDE_INIT { 0 }
extern int git_config_include(const char *name, const char *value, void *data);
enum object_type type;
if (S_ISGITLINK(mode)) {
- blob = xmalloc(100);
- *size = snprintf(blob, 100,
- "Subproject commit %s\n", oid_to_hex(oid));
+ struct strbuf buf = STRBUF_INIT;
+ strbuf_addf(&buf, "Subproject commit %s\n", oid_to_hex(oid));
+ *size = buf.len;
+ blob = strbuf_detach(&buf, NULL);
} else if (is_null_oid(oid)) {
/* deleted blob */
*size = 0;
/* find set of paths that every parent touches */
static struct combine_diff_path *find_paths_generic(const unsigned char *sha1,
- const struct sha1_array *parents, struct diff_options *opt)
+ const struct oid_array *parents, struct diff_options *opt)
{
struct combine_diff_path *paths = NULL;
int i, num_parent = parents->nr;
opt->output_format = stat_opt;
else
opt->output_format = DIFF_FORMAT_NO_OUTPUT;
- diff_tree_sha1(parents->sha1[i], sha1, "", opt);
+ diff_tree_sha1(parents->oid[i].hash, sha1, "", opt);
diffcore_std(opt);
paths = intersect_paths(paths, i, num_parent);
* rename/copy detection, etc, comparing all trees simultaneously (= faster).
*/
static struct combine_diff_path *find_paths_multitree(
- const unsigned char *sha1, const struct sha1_array *parents,
+ const unsigned char *sha1, const struct oid_array *parents,
struct diff_options *opt)
{
int i, nparent = parents->nr;
ALLOC_ARRAY(parents_sha1, nparent);
for (i = 0; i < nparent; i++)
- parents_sha1[i] = parents->sha1[i];
+ parents_sha1[i] = parents->oid[i].hash;
/* fake list head, so worker can assume it is non-NULL */
paths_head.next = NULL;
void diff_tree_combined(const unsigned char *sha1,
- const struct sha1_array *parents,
+ const struct oid_array *parents,
int dense,
struct rev_info *rev)
{
if (stat_opt) {
diffopts.output_format = stat_opt;
- diff_tree_sha1(parents->sha1[0], sha1, "", &diffopts);
+ diff_tree_sha1(parents->oid[0].hash, sha1, "", &diffopts);
diffcore_std(&diffopts);
if (opt->orderfile)
diffcore_order(opt->orderfile);
struct rev_info *rev)
{
struct commit_list *parent = get_saved_parents(rev, commit);
- struct sha1_array parents = SHA1_ARRAY_INIT;
+ struct oid_array parents = OID_ARRAY_INIT;
while (parent) {
- sha1_array_append(&parents, parent->item->object.oid.hash);
+ oid_array_append(&parents, &parent->item->object.oid);
parent = parent->next;
}
diff_tree_combined(commit->object.oid.hash, &parents, dense, rev);
- sha1_array_clear(&parents);
+ oid_array_clear(&parents);
}
/* largest positive number a signed 32-bit integer can contain */
#define INFINITE_DEPTH 0x7fffffff
-struct sha1_array;
+struct oid_array;
struct ref;
extern int register_shallow(const unsigned char *sha1);
extern int unregister_shallow(const unsigned char *sha1);
int ac, const char **av, int shallow_flag, int not_shallow_flag);
extern void set_alternate_shallow_file(const char *path, int override);
extern int write_shallow_commits(struct strbuf *out, int use_pack_protocol,
- const struct sha1_array *extra);
+ const struct oid_array *extra);
extern void setup_alternate_shallow(struct lock_file *shallow_lock,
const char **alternate_shallow_file,
- const struct sha1_array *extra);
-extern const char *setup_temporary_shallow(const struct sha1_array *extra);
+ const struct oid_array *extra);
+extern const char *setup_temporary_shallow(const struct oid_array *extra);
extern void advertise_shallow_grafts(int);
struct shallow_info {
- struct sha1_array *shallow;
+ struct oid_array *shallow;
int *ours, nr_ours;
int *theirs, nr_theirs;
- struct sha1_array *ref;
+ struct oid_array *ref;
/* for receive-pack */
uint32_t **used_shallow;
int nr_commits;
};
-extern void prepare_shallow_info(struct shallow_info *, struct sha1_array *);
+extern void prepare_shallow_info(struct shallow_info *, struct oid_array *);
extern void clear_shallow_info(struct shallow_info *);
extern void remove_nonexistent_theirs_shallow(struct shallow_info *);
extern void assign_shallow_commits_to_refs(struct shallow_info *info,
if (!path)
return config_error_nonbool("include.path");
- expanded = expand_user_path(path);
+ expanded = expand_user_path(path, 0);
if (!expanded)
return error("could not expand include path '%s'", path);
path = expanded;
char *expanded;
int prefix = 0;
- expanded = expand_user_path(pat->buf);
+ expanded = expand_user_path(pat->buf, 1);
if (expanded) {
strbuf_reset(pat);
strbuf_addstr(pat, expanded);
return error(_("relative config include "
"conditionals must come from files"));
- strbuf_add_absolute_path(&path, cf->path);
+ strbuf_realpath(&path, cf->path, 1);
slash = find_last_dir_sep(path.buf);
if (!slash)
die("BUG: how is this possible?");
return prefix;
}
-static int include_by_gitdir(const char *cond, size_t cond_len, int icase)
+static int include_by_gitdir(const struct config_options *opts,
+ const char *cond, size_t cond_len, int icase)
{
struct strbuf text = STRBUF_INIT;
struct strbuf pattern = STRBUF_INIT;
int ret = 0, prefix;
+ const char *git_dir;
- strbuf_add_absolute_path(&text, get_git_dir());
+ if (opts->git_dir)
+ git_dir = opts->git_dir;
+ else if (have_git_dir())
+ git_dir = get_git_dir();
+ else
+ goto done;
+
+ strbuf_realpath(&text, git_dir, 1);
strbuf_add(&pattern, cond, cond_len);
prefix = prepare_include_condition_pattern(&pattern);
return ret;
}
-static int include_condition_is_true(const char *cond, size_t cond_len)
+static int include_condition_is_true(const struct config_options *opts,
+ const char *cond, size_t cond_len)
{
if (skip_prefix_mem(cond, cond_len, "gitdir:", &cond, &cond_len))
- return include_by_gitdir(cond, cond_len, 0);
+ return include_by_gitdir(opts, cond, cond_len, 0);
else if (skip_prefix_mem(cond, cond_len, "gitdir/i:", &cond, &cond_len))
- return include_by_gitdir(cond, cond_len, 1);
+ return include_by_gitdir(opts, cond, cond_len, 1);
/* unknown conditionals are always false */
return 0;
ret = handle_path_include(value, inc);
if (!parse_config_key(var, "includeif", &cond, &cond_len, &key) &&
- (cond && include_condition_is_true(cond, cond_len)) &&
+ (cond && include_condition_is_true(inc->opts, cond, cond_len)) &&
!strcmp(key, "path"))
ret = handle_path_include(value, inc);
return 1;
}
+static int git_parse_ssize_t(const char *value, ssize_t *ret)
+{
+ intmax_t tmp;
+ if (!git_parse_signed(value, &tmp, maximum_signed_value_of_type(ssize_t)))
+ return 0;
+ *ret = tmp;
+ return 1;
+}
+
NORETURN
static void die_bad_number(const char *name, const char *value)
{
return ret;
}
+ssize_t git_config_ssize_t(const char *name, const char *value)
+{
+ ssize_t ret;
+ if (!git_parse_ssize_t(value, &ret))
+ die_bad_number(name, value);
+ return ret;
+}
+
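
A sketch of a config callback consuming the new helper (the variable name is hypothetical; git_config_ssize_t() dies on malformed input, so no extra error handling is shown):

    static size_t example_buffer_size;

    static int example_config(const char *var, const char *value, void *cb)
    {
        if (!strcmp(var, "example.buffersize")) {
            ssize_t v = git_config_ssize_t(var, value);
            if (v >= 0)
                example_buffer_size = v;
        }
        return 0;
    }
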
int git_parse_maybe_bool(const char *value)
{
if (!value)
{
if (!value)
return config_error_nonbool(var);
- *dest = expand_user_path(value);
+ *dest = expand_user_path(value, 0);
if (!*dest)
die(_("failed to expand user dir in: '%s'"), value);
return 0;
return !git_env_bool("GIT_CONFIG_NOSYSTEM", 0);
}
-static int do_git_config_sequence(config_fn_t fn, void *data)
+static int do_git_config_sequence(const struct config_options *opts,
+ config_fn_t fn, void *data)
{
int ret = 0;
char *xdg_config = xdg_config_home("config");
- char *user_config = expand_user_path("~/.gitconfig");
- char *repo_config = have_git_dir() ? git_pathdup("config") : NULL;
+ char *user_config = expand_user_path("~/.gitconfig", 0);
+ char *repo_config;
+
+ if (opts->git_dir)
+ repo_config = mkpathdup("%s/config", opts->git_dir);
+ else if (have_git_dir())
+ repo_config = git_pathdup("config");
+ else
+ repo_config = NULL;
current_parsing_scope = CONFIG_SCOPE_SYSTEM;
if (git_config_system() && !access_or_die(git_etc_gitconfig(), R_OK, 0))
int git_config_with_options(config_fn_t fn, void *data,
struct git_config_source *config_source,
- int respect_includes)
+ const struct config_options *opts)
{
struct config_include_data inc = CONFIG_INCLUDE_INIT;
- if (respect_includes) {
+ if (opts->respect_includes) {
inc.fn = fn;
inc.data = data;
+ inc.opts = opts;
fn = git_config_include;
data = &inc;
}
else if (config_source && config_source->blob)
return git_config_from_blob_ref(fn, config_source->blob, data);
- return do_git_config_sequence(fn, data);
+ return do_git_config_sequence(opts, fn, data);
}
static void git_config_raw(config_fn_t fn, void *data)
{
- if (git_config_with_options(fn, data, NULL, 1) < 0)
+ struct config_options opts = {0};
+
+ opts.respect_includes = 1;
+ if (git_config_with_options(fn, data, NULL, &opts) < 0)
/*
* git_config_with_options() normally returns only
* zero, as most errors are fatal, and
void read_early_config(config_fn_t cb, void *data)
{
+ struct config_options opts = {0};
struct strbuf buf = STRBUF_INIT;
- git_config_with_options(cb, data, NULL, 1);
+ opts.respect_includes = 1;
+ if (have_git_dir())
+ opts.git_dir = get_git_dir();
/*
* When setup_git_directory() was not yet asked to discover the
* GIT_DIR, we ask discover_git_directory() to figure out whether there
* notably, the current working directory is still the same after the
* call).
*/
- if (!have_git_dir() && discover_git_directory(&buf)) {
- struct git_config_source repo_config;
+ else if (discover_git_directory(&buf))
+ opts.git_dir = buf.buf;
+
+ git_config_with_options(cb, data, NULL, &opts);
- memset(&repo_config, 0, sizeof(repo_config));
- strbuf_addstr(&buf, "/config");
- repo_config.file = buf.buf;
- git_config_with_options(cb, data, &repo_config, 1);
- }
strbuf_release(&buf);
}
*/
struct ref **get_remote_heads(int in, char *src_buf, size_t src_len,
struct ref **list, unsigned int flags,
- struct sha1_array *extra_have,
- struct sha1_array *shallow_points)
+ struct oid_array *extra_have,
+ struct oid_array *shallow_points)
{
struct ref **orig_list = list;
die("protocol error: expected shallow sha-1, got '%s'", arg);
if (!shallow_points)
die("repository on the other end cannot be shallow");
- sha1_array_append(shallow_points, old_oid.hash);
+ oid_array_append(shallow_points, &old_oid);
continue;
}
}
if (extra_have && !strcmp(name, ".have")) {
- sha1_array_append(extra_have, old_oid.hash);
+ oid_array_append(extra_have, &old_oid);
continue;
}
const char **ssh_argv;
p = xstrdup(ssh_command);
- if (split_cmdline(p, &ssh_argv)) {
+ if (split_cmdline(p, &ssh_argv) > 0) {
variant = basename((char *)ssh_argv[0]);
/*
* At this point, variant points into the buffer
* any longer.
*/
free(ssh_argv);
- } else
+ } else {
+ free(p);
return;
+ }
}
if (!strcasecmp(variant, "plink") ||
i="${words[c]}"
case "$i" in
--mirror) [ "$cmd" = "push" ] && no_complete_refspec=1 ;;
+ -d|--delete) [ "$cmd" = "push" ] && lhs=0 ;;
--all)
case "$cmd" in
push) no_complete_refspec=1 ;;
. git-sh-setup
search_reflog () {
- sed -ne 's~^\([^ ]*\) .*\tcheckout: moving from '"$1"' .*~\1~p' \
+ sed -ne 's~^\([^ ]*\) .* checkout: moving from '"$1"' .*~\1~p' \
< "$GIT_DIR"/logs/HEAD
}
search_reflog_merges () {
git rev-parse $(
- sed -ne 's~^[^ ]* \([^ ]*\) .*\tmerge '"$1"':.*~\1^2~p' \
+ sed -ne 's~^[^ ]* \([^ ]*\) .* merge '"$1"':.*~\1^2~p' \
< "$GIT_DIR"/logs/HEAD
)
}
{
struct stat sb;
char *old_dir, *socket;
- old_dir = expand_user_path("~/.git-credential-cache");
+ old_dir = expand_user_path("~/.git-credential-cache", 0);
if (old_dir && !stat(old_dir, &sb) && S_ISDIR(sb.st_mode))
socket = xstrfmt("%s/socket", old_dir);
else
if (file) {
string_list_append(&fns, file);
} else {
- if ((file = expand_user_path("~/.git-credentials")))
+ if ((file = expand_user_path("~/.git-credentials", 0)))
string_list_append_nodup(&fns, file);
file = xdg_config_home("credentials");
if (file)
#include "strbuf.h"
#include "string-list.h"
-#ifndef HOST_NAME_MAX
-#define HOST_NAME_MAX 256
-#endif
-
#ifdef NO_INITGROUPS
#define initgroups(x, y) (0) /* nothing */
#endif
fclose(fp);
}
-static int run_service_command(const char **argv)
+static int run_service_command(struct child_process *cld)
{
- struct child_process cld = CHILD_PROCESS_INIT;
-
- cld.argv = argv;
- cld.git_cmd = 1;
- cld.err = -1;
- if (start_command(&cld))
+ argv_array_push(&cld->args, ".");
+ cld->git_cmd = 1;
+ cld->err = -1;
+ if (start_command(cld))
return -1;
close(0);
close(1);
- copy_to_log(cld.err);
+ copy_to_log(cld->err);
- return finish_command(&cld);
+ return finish_command(cld);
}
static int upload_pack(void)
{
- /* Timeout as string */
- char timeout_buf[64];
- const char *argv[] = { "upload-pack", "--strict", NULL, ".", NULL };
-
- argv[2] = timeout_buf;
-
- snprintf(timeout_buf, sizeof timeout_buf, "--timeout=%u", timeout);
- return run_service_command(argv);
+ struct child_process cld = CHILD_PROCESS_INIT;
+ argv_array_pushl(&cld.args, "upload-pack", "--strict", NULL);
+ argv_array_pushf(&cld.args, "--timeout=%u", timeout);
+ return run_service_command(&cld);
}
static int upload_archive(void)
{
- static const char *argv[] = { "upload-archive", ".", NULL };
- return run_service_command(argv);
+ struct child_process cld = CHILD_PROCESS_INIT;
+ argv_array_push(&cld.args, "upload-archive");
+ return run_service_command(&cld);
}
static int receive_pack(void)
{
- static const char *argv[] = { "receive-pack", ".", NULL };
- return run_service_command(argv);
+ struct child_process cld = CHILD_PROCESS_INIT;
+ argv_array_push(&cld.args, "receive-pack");
+ return run_service_command(&cld);
}
static struct daemon_service daemon_service[] = {
*/
const char *name;
- char hex[GIT_SHA1_HEXSZ + 1];
+ char hex[GIT_MAX_HEXSZ + 1];
char mode[10];
/*
* uniqueness across all objects (statistically speaking).
*/
if (abblen < GIT_SHA1_HEXSZ - 3) {
- static char hex[GIT_SHA1_HEXSZ + 1];
+ static char hex[GIT_MAX_HEXSZ + 1];
if (len < abblen && abblen <= len + 2)
xsnprintf(hex, sizeof(hex), "%s%.*s", abbrev, len+3-abblen, "..");
else
data->patchlen += new_len;
}
+static void patch_id_add_string(git_SHA_CTX *ctx, const char *str)
+{
+ git_SHA1_Update(ctx, str, strlen(str));
+}
+
+static void patch_id_add_mode(git_SHA_CTX *ctx, unsigned mode)
+{
+ /* large enough for 2^32 in octal */
+ char buf[12];
+ int len = xsnprintf(buf, sizeof(buf), "%06o", mode);
+ git_SHA1_Update(ctx, buf, len);
+}
+
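
For illustration (values hypothetical), the helpers feed the hash exactly the bytes the old preformatted buffer used to hold:

    patch_id_add_string(&ctx, "newfilemode");
    patch_id_add_mode(&ctx, 0100644);   /* hashes the six octal digits "100644" */
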
/* returns 0 upon success, and writes result into sha1 */
static int diff_get_patch_id(struct diff_options *options, unsigned char *sha1, int diff_header_only)
{
int i;
git_SHA_CTX ctx;
struct patch_id_t data;
- char buffer[PATH_MAX * 4 + 20];
git_SHA1_Init(&ctx);
memset(&data, 0, sizeof(struct patch_id_t));
len1 = remove_space(p->one->path, strlen(p->one->path));
len2 = remove_space(p->two->path, strlen(p->two->path));
- if (p->one->mode == 0)
- len1 = snprintf(buffer, sizeof(buffer),
- "diff--gita/%.*sb/%.*s"
- "newfilemode%06o"
- "---/dev/null"
- "+++b/%.*s",
- len1, p->one->path,
- len2, p->two->path,
- p->two->mode,
- len2, p->two->path);
- else if (p->two->mode == 0)
- len1 = snprintf(buffer, sizeof(buffer),
- "diff--gita/%.*sb/%.*s"
- "deletedfilemode%06o"
- "---a/%.*s"
- "+++/dev/null",
- len1, p->one->path,
- len2, p->two->path,
- p->one->mode,
- len1, p->one->path);
- else
- len1 = snprintf(buffer, sizeof(buffer),
- "diff--gita/%.*sb/%.*s"
- "---a/%.*s"
- "+++b/%.*s",
- len1, p->one->path,
- len2, p->two->path,
- len1, p->one->path,
- len2, p->two->path);
- git_SHA1_Update(&ctx, buffer, len1);
+ patch_id_add_string(&ctx, "diff--git");
+ patch_id_add_string(&ctx, "a/");
+ git_SHA1_Update(&ctx, p->one->path, len1);
+ patch_id_add_string(&ctx, "b/");
+ git_SHA1_Update(&ctx, p->two->path, len2);
+
+ if (p->one->mode == 0) {
+ patch_id_add_string(&ctx, "newfilemode");
+ patch_id_add_mode(&ctx, p->two->mode);
+ patch_id_add_string(&ctx, "---/dev/null");
+ patch_id_add_string(&ctx, "+++b/");
+ git_SHA1_Update(&ctx, p->two->path, len2);
+ } else if (p->two->mode == 0) {
+ patch_id_add_string(&ctx, "deletedfilemode");
+ patch_id_add_mode(&ctx, p->one->mode);
+ patch_id_add_string(&ctx, "---a/");
+ git_SHA1_Update(&ctx, p->one->path, len1);
+ patch_id_add_string(&ctx, "+++/dev/null");
+ } else {
+ patch_id_add_string(&ctx, "---a/");
+ git_SHA1_Update(&ctx, p->one->path, len1);
+ patch_id_add_string(&ctx, "+++b/");
+ git_SHA1_Update(&ctx, p->two->path, len2);
+ }
if (diff_header_only)
continue;
struct strbuf;
struct diff_filespec;
struct userdiff_driver;
-struct sha1_array;
+struct oid_array;
struct commit;
struct combine_diff_path;
extern void show_combined_diff(struct combine_diff_path *elem, int num_parent,
int dense, struct rev_info *);
-extern void diff_tree_combined(const unsigned char *sha1, const struct sha1_array *parents, int dense, struct rev_info *rev);
+extern void diff_tree_combined(const unsigned char *sha1, const struct oid_array *parents, int dense, struct rev_info *rev);
extern void diff_tree_combined_merge(const struct commit *commit, int dense, struct rev_info *rev);
return git_object_dir;
}
-int odb_mkstemp(char *template, size_t limit, const char *pattern)
+int odb_mkstemp(struct strbuf *template, const char *pattern)
{
int fd;
/*
* restrictive except to remove write permission.
*/
int mode = 0444;
- snprintf(template, limit, "%s/%s",
- get_object_directory(), pattern);
- fd = git_mkstemp_mode(template, mode);
+ git_path_buf(template, "objects/%s", pattern);
+ fd = git_mkstemp_mode(template->buf, mode);
if (0 <= fd)
return fd;
/* slow path */
/* some mkstemp implementations erase template on failure */
- snprintf(template, limit, "%s/%s",
- get_object_directory(), pattern);
- safe_create_leading_directories(template);
- return xmkstemp_mode(template, mode);
+ git_path_buf(template, "objects/%s", pattern);
+ safe_create_leading_directories(template->buf);
+ return xmkstemp_mode(template->buf, mode);
}
int odb_pack_keep(const char *name)
static void start_packfile(void)
{
- static char tmp_file[PATH_MAX];
+ struct strbuf tmp_file = STRBUF_INIT;
struct packed_git *p;
struct pack_header hdr;
int pack_fd;
- pack_fd = odb_mkstemp(tmp_file, sizeof(tmp_file),
- "pack/tmp_pack_XXXXXX");
- FLEX_ALLOC_STR(p, pack_name, tmp_file);
+ pack_fd = odb_mkstemp(&tmp_file, "pack/tmp_pack_XXXXXX");
+ FLEX_ALLOC_STR(p, pack_name, tmp_file.buf);
+ strbuf_release(&tmp_file);
+
p->pack_fd = pack_fd;
p->do_not_close = 1;
pack_file = sha1fd(pack_fd, p->pack_name);
{
if (!relative_marks_paths || is_absolute_path(path))
return xstrdup(path);
- return xstrdup(git_path("info/fast-import/%s", path));
+ return git_pathdup("info/fast-import/%s", path);
}
static void option_import_marks(const char *marks,
return ACK;
}
}
+ if (skip_prefix(line, "ERR ", &arg))
+ die(_("remote error: %s"), arg);
die(_("git fetch-pack: expected ACK/NAK, got '%s'"), line);
}
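
As a behavior sketch (the message text is hypothetical), a server that answers the negotiation with a line such as

    ERR access denied

now makes fetch-pack die with "remote error: access denied" rather than the generic "expected ACK/NAK" complaint.
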
if (args->use_thin_pack)
argv_array_push(&cmd.args, "--fix-thin");
if (args->lock_pack || unpack_limit) {
- char hostname[256];
- if (gethostname(hostname, sizeof(hostname)))
+ char hostname[HOST_NAME_MAX + 1];
+ if (xgethostname(hostname, sizeof(hostname)))
xsnprintf(hostname, sizeof(hostname), "localhost");
argv_array_pushf(&cmd.args,
"--keep=fetch-pack %"PRIuMAX " on %s",
struct ref **sought, int nr_sought,
struct shallow_info *si)
{
- struct sha1_array ref = SHA1_ARRAY_INIT;
+ struct oid_array ref = OID_ARRAY_INIT;
int *status;
int i;
* shallow points that exist in the pack (iow in repo
* after get_pack() and reprepare_packed_git())
*/
- struct sha1_array extra = SHA1_ARRAY_INIT;
- unsigned char (*sha1)[20] = si->shallow->sha1;
+ struct oid_array extra = OID_ARRAY_INIT;
+ struct object_id *oid = si->shallow->oid;
for (i = 0; i < si->shallow->nr; i++)
- if (has_sha1_file(sha1[i]))
- sha1_array_append(&extra, sha1[i]);
+ if (has_object_file(&oid[i]))
+ oid_array_append(&extra, &oid[i]);
if (extra.nr) {
setup_alternate_shallow(&shallow_lock,
&alternate_shallow_file,
&extra);
commit_lock_file(&shallow_lock);
}
- sha1_array_clear(&extra);
+ oid_array_clear(&extra);
return;
}
if (!si->nr_ours && !si->nr_theirs)
return;
for (i = 0; i < nr_sought; i++)
- sha1_array_append(&ref, sought[i]->old_oid.hash);
+ oid_array_append(&ref, &sought[i]->old_oid);
si->ref = &ref;
if (args->update_shallow) {
* shallow roots that are actually reachable from new
* refs.
*/
- struct sha1_array extra = SHA1_ARRAY_INIT;
- unsigned char (*sha1)[20] = si->shallow->sha1;
+ struct oid_array extra = OID_ARRAY_INIT;
+ struct object_id *oid = si->shallow->oid;
assign_shallow_commits_to_refs(si, NULL, NULL);
if (!si->nr_ours && !si->nr_theirs) {
- sha1_array_clear(&ref);
+ oid_array_clear(&ref);
return;
}
for (i = 0; i < si->nr_ours; i++)
- sha1_array_append(&extra, sha1[si->ours[i]]);
+ oid_array_append(&extra, &oid[si->ours[i]]);
for (i = 0; i < si->nr_theirs; i++)
- sha1_array_append(&extra, sha1[si->theirs[i]]);
+ oid_array_append(&extra, &oid[si->theirs[i]]);
setup_alternate_shallow(&shallow_lock,
&alternate_shallow_file,
&extra);
commit_lock_file(&shallow_lock);
- sha1_array_clear(&extra);
- sha1_array_clear(&ref);
+ oid_array_clear(&extra);
+ oid_array_clear(&ref);
return;
}
sought[i]->status = REF_STATUS_REJECT_SHALLOW;
}
free(status);
- sha1_array_clear(&ref);
+ oid_array_clear(&ref);
}
struct ref *fetch_pack(struct fetch_pack_args *args,
const struct ref *ref,
const char *dest,
struct ref **sought, int nr_sought,
- struct sha1_array *shallow,
+ struct oid_array *shallow,
char **pack_lockfile)
{
struct ref *ref_cpy;
#include "string-list.h"
#include "run-command.h"
-struct sha1_array;
+struct oid_array;
struct fetch_pack_args {
const char *uploadpack;
const char *dest,
struct ref **sought,
int nr_sought,
- struct sha1_array *shallow,
+ struct oid_array *shallow,
char **pack_lockfile);
/*
static void init_skiplist(struct fsck_options *options, const char *path)
{
- static struct sha1_array skiplist = SHA1_ARRAY_INIT;
+ static struct oid_array skiplist = OID_ARRAY_INIT;
int sorted, fd;
- char buffer[41];
- unsigned char sha1[20];
+ char buffer[GIT_MAX_HEXSZ + 1];
+ struct object_id oid;
if (options->skiplist)
sorted = options->skiplist->sorted;
if (fd < 0)
die("Could not open skip list: %s", path);
for (;;) {
+ const char *p;
int result = read_in_full(fd, buffer, sizeof(buffer));
if (result < 0)
die_errno("Could not read '%s'", path);
if (!result)
break;
- if (get_sha1_hex(buffer, sha1) || buffer[40] != '\n')
+ if (parse_oid_hex(buffer, &oid, &p) || *p != '\n')
die("Invalid SHA-1: %s", buffer);
- sha1_array_append(&skiplist, sha1);
+ oid_array_append(&skiplist, &oid);
if (sorted && skiplist.nr > 1 &&
- hashcmp(skiplist.sha1[skiplist.nr - 2],
- sha1) > 0)
+ oidcmp(&skiplist.oid[skiplist.nr - 2],
+ &oid) > 0)
sorted = 0;
}
close(fd);
return 0;
if (options->skiplist && object &&
- sha1_array_lookup(options->skiplist, object->oid.hash) >= 0)
+ oid_array_lookup(options->skiplist, &object->oid) >= 0)
return 0;
if (msg_type == FSCK_FATAL)
fsck_error error_func;
unsigned strict:1;
int *msg_type;
- struct sha1_array *skiplist;
+ struct oid_array *skiplist;
struct decoration *object_names;
};
marked for applying."),
checkout_index => N__(
"If the patch applies cleanly, the edited hunk will immediately be
-marked for discarding"),
+marked for discarding."),
checkout_head => N__(
"If the patch applies cleanly, the edited hunk will immediately be
marked for discarding."),
__attribute__((format (printf, 3, 4)))
extern int xsnprintf(char *dst, size_t max, const char *fmt, ...);
+#ifndef HOST_NAME_MAX
+#define HOST_NAME_MAX 256
+#endif
+
+extern int xgethostname(char *buf, size_t len);
+
/* in ctype.c, for kwset users */
extern const unsigned char tolower_trans_tbl[256];
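A minimal sketch of what the xgethostname() wrapper declared above might look like; the NUL-termination detail is my assumption (POSIX does not guarantee it when the name does not fit), and git-compat-util.h is assumed to be in scope for gethostname():

    int xgethostname(char *buf, size_t len)
    {
            /*
             * gethostname() may leave the buffer unterminated when the
             * host name does not fit, so terminate it ourselves before
             * callers such as fetch-pack or ident use it.
             */
            int ret = gethostname(buf, len);

            if (!ret)
                    buf[len - 1] = '\0';
            return ret;
    }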
real_cmd = p4_build_cmd(c)
return write_pipe(real_cmd, stdin)
-def read_pipe(c, ignore_error=False):
+def read_pipe_full(c):
+ """ Read output from command. Returns a tuple
+ of the return status, stdout text and stderr
+ text.
+ """
if verbose:
sys.stderr.write('Reading pipe: %s\n' % str(c))
expand = isinstance(c,basestring)
p = subprocess.Popen(c, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=expand)
(out, err) = p.communicate()
- if p.returncode != 0 and not ignore_error:
- die('Command failed: %s\nError: %s' % (str(c), err))
+ return (p.returncode, out, err)
+
+def read_pipe(c, ignore_error=False):
+ """ Read output from command. Returns the output text on
+ success. On failure, terminates execution, unless
+ ignore_error is True, when it returns an empty string.
+ """
+ (retcode, out, err) = read_pipe_full(c)
+ if retcode != 0:
+ if ignore_error:
+ out = ""
+ else:
+ die('Command failed: %s\nError: %s' % (str(c), err))
return out
+def read_pipe_text(c):
+ """ Read output from a command with trailing whitespace stripped.
+ On error, returns None.
+ """
+ (retcode, out, err) = read_pipe_full(c)
+ if retcode != 0:
+ return None
+ else:
+ return out.rstrip()
+
def p4_read_pipe(c, ignore_error=False):
real_cmd = p4_build_cmd(c)
return read_pipe(real_cmd, ignore_error)
return clientPath
def currentGitBranch():
- retcode = system(["git", "symbolic-ref", "-q", "HEAD"], ignore_error=True)
- if retcode != 0:
- # on a detached head
- return None
- else:
- return read_pipe(["git", "name-rev", "HEAD"]).split(" ")[1].strip()
+ return read_pipe_text(["git", "symbolic-ref", "--short", "-q", "HEAD"])
def isValidGitDir(path):
return git_dir(path) != None
autosquash move commits that begin with squash!/fixup! under -i
committer-date-is-author-date! passed to 'git am'
ignore-date! passed to 'git am'
+signoff passed to 'git am'
whitespace=! passed to 'git apply'
ignore-whitespace! passed to 'git apply'
C=! passed to 'git apply'
--ignore-whitespace)
git_am_opt="$git_am_opt $1"
;;
- --committer-date-is-author-date|--ignore-date)
+ --committer-date-is-author-date|--ignore-date|--signoff|--no-signoff)
git_am_opt="$git_am_opt $1"
force_rebase=t
;;
git submodule--helper list --prefix "$wt_prefix" ||
echo "#unmatched" $?
} |
- while read mode sha1 stage sm_path
+ while read -r mode sha1 stage sm_path
do
die_if_unmatched "$mode" "$sha1"
if test -e "$sm_path"/.git
git submodule--helper list --prefix "$wt_prefix" "$@" ||
echo "#unmatched" $?
} |
- while read mode sha1 stage sm_path
+ while read -r mode sha1 stage sm_path
do
die_if_unmatched "$mode" "$sha1"
name=$(git submodule--helper name "$sm_path") || exit
"$@" || echo "#unmatched" $?
} | {
err=
- while read mode sha1 stage just_cloned sm_path
+ while read -r mode sha1 stage just_cloned sm_path
do
die_if_unmatched "$mode" "$sha1"
# Get modified modules cared by user
modules=$(git $diff_cmd $cached --ignore-submodules=dirty --raw $head -- "$@" |
sane_egrep '^:([0-7]* )?160000' |
- while read mod_src mod_dst sha1_src sha1_dst status sm_path
+ while read -r mod_src mod_dst sha1_src sha1_dst status sm_path
do
# Always show modules deleted or type-changed (blob<->module)
if test "$status" = D || test "$status" = T
git $diff_cmd $cached --ignore-submodules=dirty --raw $head -- $modules |
sane_egrep '^:([0-7]* )?160000' |
cut -c2- |
- while read mod_src mod_dst sha1_src sha1_dst status name
+ while read -r mod_src mod_dst sha1_src sha1_dst status name
do
if test -z "$cached" &&
test $sha1_dst = 0000000000000000000000000000000000000000
git submodule--helper list --prefix "$wt_prefix" "$@" ||
echo "#unmatched" $?
} |
- while read mode sha1 stage sm_path
+ while read -r mode sha1 stage sm_path
do
die_if_unmatched "$mode" "$sha1"
name=$(git submodule--helper name "$sm_path") || exit
git submodule--helper list --prefix "$wt_prefix" "$@" ||
echo "#unmatched" $?
} |
- while read mode sha1 stage sm_path
+ while read -r mode sha1 stage sm_path
do
die_if_unmatched "$mode" "$sha1"
}
if (opt->linenum) {
char buf[32];
- snprintf(buf, sizeof(buf), "%d", lno);
+ xsnprintf(buf, sizeof(buf), "%d", lno);
output_color(opt, buf, strlen(buf), opt->color_lineno);
output_sep(opt, sign);
}
opt->color_filename);
output_sep(opt, ':');
}
- snprintf(buf, sizeof(buf), "%u\n", count);
+ xsnprintf(buf, sizeof(buf), "%u\n", count);
opt->output(opt, buf, strlen(buf));
return 1;
}
char *sha1_to_hex(const unsigned char *sha1)
{
static int bufno;
- static char hexbuffer[4][GIT_SHA1_HEXSZ + 1];
+ static char hexbuffer[4][GIT_MAX_HEXSZ + 1];
bufno = (bufno + 1) % ARRAY_SIZE(hexbuffer);
return sha1_to_hex_r(hexbuffer[bufno], sha1);
}
#endif
int active_requests;
int http_is_verbose;
-size_t http_post_buffer = 16 * LARGE_PACKET_MAX;
+ssize_t http_post_buffer = 16 * LARGE_PACKET_MAX;
#if LIBCURL_VERSION_NUM >= 0x070a06
#define LIBCURL_CAN_HANDLE_AUTH_ANY
}
if (!strcmp("http.postbuffer", var)) {
- http_post_buffer = git_config_int(var, value);
+ http_post_buffer = git_config_ssize_t(var, value);
+ if (http_post_buffer < 0)
+ warning(_("negative value for http.postbuffer; defaulting to %d"), LARGE_PACKET_MAX);
if (http_post_buffer < LARGE_PACKET_MAX)
http_post_buffer = LARGE_PACKET_MAX;
return 0;
}
}
- if (curl_http_proxy) {
- curl_easy_setopt(result, CURLOPT_PROXY, curl_http_proxy);
+ if (curl_http_proxy && curl_http_proxy[0] == '\0') {
+ /*
+ * Handle the case of an empty http.proxy value here, to keep
+ * the common proxy-handling code below clean.
+ * NB: an empty value disables proxying entirely.
+ */
+ curl_easy_setopt(result, CURLOPT_PROXY, "");
+ } else if (curl_http_proxy) {
#if LIBCURL_VERSION_NUM >= 0x071800
if (starts_with(curl_http_proxy, "socks5h"))
curl_easy_setopt(result,
strbuf_release(&url);
}
+ if (!proxy_auth.host)
+ die("Invalid proxy URL '%s'", curl_http_proxy);
+
curl_easy_setopt(result, CURLOPT_PROXY, proxy_auth.host);
#if LIBCURL_VERSION_NUM >= 0x071304
var_override(&curl_no_proxy, getenv("NO_PROXY"));
* FAILONERROR it is lost, so we can give only the numeric
* status code.
*/
- snprintf(curl_errorstr, sizeof(curl_errorstr),
- "The requested URL returned error: %ld",
- results->http_code);
+ xsnprintf(curl_errorstr, sizeof(curl_errorstr),
+ "The requested URL returned error: %ld",
+ results->http_code);
}
if (results->curl_result == CURLE_OK) {
{
slot->results = results;
if (!start_active_slot(slot)) {
- snprintf(curl_errorstr, sizeof(curl_errorstr),
- "failed to start HTTP request");
+ xsnprintf(curl_errorstr, sizeof(curl_errorstr),
+ "failed to start HTTP request");
return HTTP_START_FAILED;
}
extern long int git_curl_ipresolve;
extern int active_requests;
extern int http_is_verbose;
-extern size_t http_post_buffer;
+extern ssize_t http_post_buffer;
extern struct credential http_auth;
extern char curl_errorstr[CURL_ERROR_SIZE];
static void add_domainname(struct strbuf *out, int *is_bogus)
{
- char buf[1024];
+ char buf[HOST_NAME_MAX + 1];
- if (gethostname(buf, sizeof(buf))) {
+ if (xgethostname(buf, sizeof(buf))) {
warning_errno("cannot get host name");
strbuf_addstr(out, "(none)");
*is_bogus = 1;
int gai;
char portstr[6];
- snprintf(portstr, sizeof(portstr), "%d", srvc->port);
+ xsnprintf(portstr, sizeof(portstr), "%d", srvc->port);
memset(&hints, 0, sizeof(hints));
hints.ai_socktype = SOCK_STREAM;
assert(!mi->filter_stage);
if (mi->header_stage) {
- if (!line->len || (line->len == 1 && line->buf[0] == '\n'))
+ if (!line->len || (line->len == 1 && line->buf[0] == '\n')) {
+ if (mi->inbody_header_accum.len) {
+ flush_inbody_header_accum(mi);
+ mi->header_stage = 0;
+ }
return 0;
+ }
}
if (mi->use_inbody_headers && mi->header_stage) {
* Scan forward in the index array for index entries having the same
* path prefix (that are also in this directory).
*/
- if (strncmp(istate->cache[k_start + 1]->name, prefix->buf, prefix->len) > 0)
+ if (k_start + 1 >= k_end)
+ k = k_end;
+ else if (strncmp(istate->cache[k_start + 1]->name, prefix->buf, prefix->len) > 0)
k = k_start + 1;
else if (strncmp(istate->cache[k_end - 1]->name, prefix->buf, prefix->len) == 0)
k = k_end;
const char *msg = strstr(buffer, "\n\n");
int baselen;
- strbuf_addstr(&path, git_path(NOTES_MERGE_WORKTREE));
+ git_path_buf(&path, NOTES_MERGE_WORKTREE);
if (o->verbosity >= 3)
printf("Committing notes in notes merge worktree at %s\n",
path.buf);
struct strbuf buf = STRBUF_INIT;
int ret;
- strbuf_addstr(&buf, git_path(NOTES_MERGE_WORKTREE));
+ git_path_buf(&buf, NOTES_MERGE_WORKTREE);
if (o->verbosity >= 3)
printf("Removing notes merge worktree at %s/*\n", buf.buf);
ret = remove_dir_recursively(&buf, REMOVE_DIR_KEEP_TOPLEVEL);
const char *filename,
uint16_t options)
{
- static char tmp_file[PATH_MAX];
static uint16_t default_version = 1;
static uint16_t flags = BITMAP_OPT_FULL_DAG;
+ struct strbuf tmp_file = STRBUF_INIT;
struct sha1file *f;
struct bitmap_disk_header header;
- int fd = odb_mkstemp(tmp_file, sizeof(tmp_file), "pack/tmp_bitmap_XXXXXX");
+ int fd = odb_mkstemp(&tmp_file, "pack/tmp_bitmap_XXXXXX");
- if (fd < 0)
- die_errno("unable to create '%s'", tmp_file);
- f = sha1fd(fd, tmp_file);
+ f = sha1fd(fd, tmp_file.buf);
memcpy(header.magic, BITMAP_IDX_SIGNATURE, sizeof(BITMAP_IDX_SIGNATURE));
header.version = htons(default_version);
sha1close(f, NULL, CSUM_FSYNC);
- if (adjust_shared_perm(tmp_file))
+ if (adjust_shared_perm(tmp_file.buf))
die_errno("unable to make temporary bitmap file readable");
- if (rename(tmp_file, filename))
+ if (rename(tmp_file.buf, filename))
die_errno("unable to rename temporary bitmap file to '%s'", filename);
+
+ strbuf_release(&tmp_file);
}
f = sha1fd_check(index_name);
} else {
if (!index_name) {
- static char tmp_file[PATH_MAX];
- fd = odb_mkstemp(tmp_file, sizeof(tmp_file), "pack/tmp_idx_XXXXXX");
- index_name = xstrdup(tmp_file);
+ struct strbuf tmp_file = STRBUF_INIT;
+ fd = odb_mkstemp(&tmp_file, "pack/tmp_idx_XXXXXX");
+ index_name = strbuf_detach(&tmp_file, NULL);
} else {
unlink(index_name);
fd = open(index_name, O_CREAT|O_EXCL|O_WRONLY, 0600);
+ if (fd < 0)
+ die_errno("unable to create '%s'", index_name);
}
- if (fd < 0)
- die_errno("unable to create '%s'", index_name);
f = sha1fd(fd, index_name);
}
struct sha1file *create_tmp_packfile(char **pack_tmp_name)
{
- char tmpname[PATH_MAX];
+ struct strbuf tmpname = STRBUF_INIT;
int fd;
- fd = odb_mkstemp(tmpname, sizeof(tmpname), "pack/tmp_pack_XXXXXX");
- *pack_tmp_name = xstrdup(tmpname);
+ fd = odb_mkstemp(&tmpname, "pack/tmp_pack_XXXXXX");
+ *pack_tmp_name = strbuf_detach(&tmpname, NULL);
return sha1fd(fd, *pack_tmp_name);
}
int parse_opt_object_name(const struct option *opt, const char *arg, int unset)
{
- unsigned char sha1[20];
+ struct object_id oid;
if (unset) {
- sha1_array_clear(opt->value);
+ oid_array_clear(opt->value);
return 0;
}
if (!arg)
return -1;
- if (get_sha1(arg, sha1))
+ if (get_oid(arg, &oid))
return error(_("malformed object name '%s'"), arg);
- sha1_array_append(opt->value, sha1);
+ oid_array_append(opt->value, &oid);
return 0;
}
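For context, a hedged sketch of how a command might wire the converted parse_opt_object_name() callback to an oid_array via parse-options; the option name and help strings are illustrative, not taken from the patch, and the sha1-array.h declarations introduced by this series are assumed:

    static struct oid_array points_at = OID_ARRAY_INIT;

    struct option opts[] = {
            OPT_CALLBACK(0, "points-at", &points_at, N_("object"),
                         N_("collect object names to match against"),
                         parse_opt_object_name),
            OPT_END()
    };
    /* after parse_options(), points_at holds one object_id per --points-at */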
struct commit *commit,
struct patch_ids *ids)
{
- unsigned char header_only_patch_id[GIT_SHA1_RAWSZ];
+ unsigned char header_only_patch_id[GIT_MAX_RAWSZ];
patch->commit = commit;
if (commit_patch_id(commit, &ids->diffopts, header_only_patch_id, 1))
struct patch_id {
struct hashmap_entry ent;
- unsigned char patch_id[GIT_SHA1_RAWSZ];
+ unsigned char patch_id[GIT_MAX_RAWSZ];
struct commit *commit;
};
}
/* Returns 0 on success, negative on failure. */
-#define SUBMODULE_PATH_ERR_NOT_CONFIGURED -1
static int do_submodule_path(struct strbuf *buf, const char *path,
const char *fmt, va_list args)
{
- const char *git_dir;
struct strbuf git_submodule_common_dir = STRBUF_INIT;
struct strbuf git_submodule_dir = STRBUF_INIT;
- const struct submodule *sub;
- int err = 0;
+ int ret;
- strbuf_addstr(buf, path);
- strbuf_complete(buf, '/');
- strbuf_addstr(buf, ".git");
-
- git_dir = read_gitfile(buf->buf);
- if (git_dir) {
- strbuf_reset(buf);
- strbuf_addstr(buf, git_dir);
- }
- if (!is_git_directory(buf->buf)) {
- gitmodules_config();
- sub = submodule_from_path(null_sha1, path);
- if (!sub) {
- err = SUBMODULE_PATH_ERR_NOT_CONFIGURED;
- goto cleanup;
- }
- strbuf_reset(buf);
- strbuf_git_path(buf, "%s/%s", "modules", sub->name);
- }
-
- strbuf_addch(buf, '/');
- strbuf_addbuf(&git_submodule_dir, buf);
+ ret = submodule_to_gitdir(&git_submodule_dir, path);
+ if (ret)
+ goto cleanup;
+ strbuf_complete(&git_submodule_dir, '/');
+ strbuf_addbuf(buf, &git_submodule_dir);
strbuf_vaddf(buf, fmt, args);
if (get_common_dir_noenv(&git_submodule_common_dir, git_submodule_dir.buf))
cleanup:
strbuf_release(&git_submodule_dir);
strbuf_release(&git_submodule_common_dir);
-
- return err;
+ return ret;
}
char *git_pathdup_submodule(const char *path, const char *fmt, ...)
* Return a string with ~ and ~user expanded via getpw*. If buf != NULL,
* then it is a newly allocated string. Returns NULL on getpw failure or
* if path is NULL.
+ *
+ * If real_home is true, real_path($HOME) is used in the expansion.
*/
-char *expand_user_path(const char *path)
+char *expand_user_path(const char *path, int real_home)
{
struct strbuf user_path = STRBUF_INIT;
const char *to_copy = path;
const char *home = getenv("HOME");
if (!home)
goto return_null;
- strbuf_addstr(&user_path, home);
+ if (real_home)
+ strbuf_addstr(&user_path, real_path(home));
+ else
+ strbuf_addstr(&user_path, home);
#ifdef GIT_WINDOWS_NATIVE
convert_slashes(user_path.buf);
#endif
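A small illustration of the new real_home flag, assuming a caller that wants symlinks in $HOME resolved; the variable names are mine and NULL checks are omitted:

    /* literal $HOME, as before */
    char *cfg = expand_user_path("~/.gitconfig", 0);
    /* real_path($HOME), for paths that must stay valid across chdir() */
    char *real = expand_user_path("~/.gitconfig", 1);

    free(cfg);
    free(real);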
strbuf_add(&validated_path, path, len);
if (used_path.buf[0] == '~') {
- char *newpath = expand_user_path(used_path.buf);
+ char *newpath = expand_user_path(used_path.buf, 0);
if (!newpath)
return NULL;
strbuf_attach(&used_path, newpath, strlen(newpath),
* original. Useful for passing to another command.
*/
if ((flags & PATHSPEC_PREFIX_ORIGIN) &&
- prefixlen && !get_literal_global()) {
+ !get_literal_global()) {
struct strbuf sb = STRBUF_INIT;
/* Preserve the actual prefix length of each pattern */
free(pathspec->items[i].match);
free(pathspec->items[i].original);
- for (j = 0; j < pathspec->items[j].attr_match_nr; j++)
+ for (j = 0; j < pathspec->items[i].attr_match_nr; j++)
free(pathspec->items[i].attr_match[j].value);
free(pathspec->items[i].attr_match);
return retval;
}
+
+/*
+ * Like strcmp(), but also return the offset of the first change.
+ * If strings are equal, return the length.
+ */
+int strcmp_offset(const char *s1, const char *s2, size_t *first_change)
+{
+ size_t k;
+
+ if (!first_change)
+ return strcmp(s1, s2);
+
+ for (k = 0; s1[k] == s2[k]; k++)
+ if (s1[k] == '\0')
+ break;
+
+ *first_change = k;
+ return (unsigned char)s1[k] - (unsigned char)s2[k];
+}
+
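A quick worked example of the helper added above (values computed by hand, not taken from a test in the patch):

    size_t off;
    int cmp = strcmp_offset("foo/bar", "foo/baz", &off);
    /* cmp < 0 ('r' < 'z'), off == 6: the strings first differ at index 6 */

    cmp = strcmp_offset("abc", "abc", &off);
    /* cmp == 0, off == 3: equal strings report their length */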
/*
* Do we have another file with a pathname that is a proper
* subset of the name we're trying to add?
+ *
+ * That is, is there another file in the index with a path
+ * that matches a sub-directory in the given entry?
*/
static int has_dir_name(struct index_state *istate,
const struct cache_entry *ce, int pos, int ok_to_replace)
int stage = ce_stage(ce);
const char *name = ce->name;
const char *slash = name + ce_namelen(ce);
+ size_t len_eq_last;
+ int cmp_last = 0;
+
+ /*
+ * We are frequently called during an iteration on a sorted
+ * list of pathnames and while building a new index. Therefore,
+ * there is a high probability that this entry will eventually
+ * be appended to the index, rather than inserted in the middle.
+ * If we can confirm that, we can avoid binary searches on the
+ * components of the pathname.
+ *
+ * Compare the entry's full path with the last path in the index.
+ */
+ if (istate->cache_nr > 0) {
+ cmp_last = strcmp_offset(name,
+ istate->cache[istate->cache_nr - 1]->name,
+ &len_eq_last);
+ if (cmp_last > 0) {
+ if (len_eq_last == 0) {
+ /*
+ * The entry sorts AFTER the last one in the
+ * index and their paths have no common prefix,
+ * so there cannot be an F/D conflict.
+ */
+ return retval;
+ } else {
+ /*
+ * The entry sorts AFTER the last one in the
+ * index, but has a common prefix. Fall through
+ * to the loop below to dissect the entry's path
+ * and see where the difference is.
+ */
+ }
+ } else if (cmp_last == 0) {
+ /*
+ * The entry exactly matches the last one in the
+ * index, but because of multiple stage and CE_REMOVE
+ * items, we fall through and let the regular search
+ * code handle it.
+ */
+ }
+ }
for (;;) {
- int len;
+ size_t len;
for (;;) {
if (*--slash == '/')
}
len = slash - name;
+ if (cmp_last > 0) {
+ /*
+ * (len + 1) is a directory boundary (including
+ * the trailing slash). And since the loop is
+ * decrementing "slash", the first iteration is
+ * the longest directory prefix; subsequent
+ * iterations consider parent directories.
+ */
+
+ if (len + 1 <= len_eq_last) {
+ /*
+ * The directory prefix (including the trailing
+ * slash) also appears as a prefix in the last
+ * entry, so the remainder cannot collide (because
+ * strcmp said the whole path was greater).
+ *
+ * EQ: last: xxx/A
+ * this: xxx/B
+ *
+ * LT: last: xxx/file_A
+ * this: xxx/file_B
+ */
+ return retval;
+ }
+
+ if (len > len_eq_last) {
+ /*
+ * This part of the directory prefix (excluding
+ * the trailing slash) is longer than the known
+ * equal portions, so this sub-directory cannot
+ * collide with a file.
+ *
+ * GT: last: xxxA
+ * this: xxxB/file
+ */
+ return retval;
+ }
+
+ if (istate->cache_nr > 0 &&
+ ce_namelen(istate->cache[istate->cache_nr - 1]) > len) {
+ /*
+ * The directory prefix lines up with part of
+ * a longer file or directory name, but sorts
+ * after it, so this sub-directory cannot
+ * collide with a file.
+ *
+ * last: xxx/yy-file (because '-' sorts before '/')
+ * this: xxx/yy/abc
+ */
+ return retval;
+ }
+
+ /*
+ * This is a possible collision. Fall through and
+ * let the regular search code handle it.
+ *
+ * last: xxx
+ * this: xxx/file
+ */
+ }
+
pos = index_name_stage_pos(istate, name, len, stage);
if (pos >= 0) {
/*
if (!(option & ADD_CACHE_KEEP_CACHE_TREE))
cache_tree_invalidate_path(istate, ce->name);
- pos = index_name_stage_pos(istate, ce->name, ce_namelen(ce), ce_stage(ce));
+
+ /*
+ * If this entry's path sorts after the last entry in the index,
+ * we can avoid searching for it.
+ */
+ if (istate->cache_nr > 0 &&
+ strcmp(ce->name, istate->cache[istate->cache_nr - 1]->name) > 0)
+ pos = -istate->cache_nr - 1;
+ else
+ pos = index_name_stage_pos(istate, ce->name, ce_namelen(ce), ce_stage(ce));
/* existing match? Just replace it. */
if (pos >= 0) {
ondisk_cache_entry_extended_size(ce_namelen(ce)) : \
ondisk_cache_entry_size(ce_namelen(ce)))
+/* Allow fsck to force verification of the index checksum. */
+int verify_index_checksum;
+
static int verify_hdr(struct cache_header *hdr, unsigned long size)
{
git_SHA_CTX c;
hdr_version = ntohl(hdr->hdr_version);
if (hdr_version < INDEX_FORMAT_LB || INDEX_FORMAT_UB < hdr_version)
return error("bad index version %d", hdr_version);
+
+ if (!verify_index_checksum)
+ return 0;
+
git_SHA1_Init(&c);
git_SHA1_Update(&c, hdr, size - 20);
git_SHA1_Final(sha1, &c);
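The new verify_index_checksum flag defaults to 0, so ordinary index reads now skip the trailing SHA-1 check. A hedged sketch of how a checker such as "git fsck" could opt back in, assuming the flag is exported (e.g. via cache.h) as this change implies:

    /* before reading the index in a full-verification codepath */
    verify_index_checksum = 1;
    read_cache();   /* verify_hdr() now recomputes and compares the checksum */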
*/
static void freshen_shared_index(char *base_sha1_hex, int warn)
{
- const char *shared_index = git_path("sharedindex.%s", base_sha1_hex);
+ char *shared_index = git_pathdup("sharedindex.%s", base_sha1_hex);
if (!check_and_freshen_file(shared_index, 1) && warn)
warning("could not freshen shared index '%s'", shared_index);
+ free(shared_index);
}
int read_index_from(struct index_state *istate, const char *path)
* the need to parse the object via parse_object(). peel_ref() might be a
* more efficient alternative to obtain the pointee.
*/
-static const unsigned char *match_points_at(struct sha1_array *points_at,
- const unsigned char *sha1,
- const char *refname)
+static const struct object_id *match_points_at(struct oid_array *points_at,
+ const struct object_id *oid,
+ const char *refname)
{
- const unsigned char *tagged_sha1 = NULL;
+ const struct object_id *tagged_oid = NULL;
struct object *obj;
- if (sha1_array_lookup(points_at, sha1) >= 0)
- return sha1;
- obj = parse_object(sha1);
+ if (oid_array_lookup(points_at, oid) >= 0)
+ return oid;
+ obj = parse_object(oid->hash);
if (!obj)
die(_("malformed object at '%s'"), refname);
if (obj->type == OBJ_TAG)
- tagged_sha1 = ((struct tag *)obj)->tagged->oid.hash;
- if (tagged_sha1 && sha1_array_lookup(points_at, tagged_sha1) >= 0)
- return tagged_sha1;
+ tagged_oid = &((struct tag *)obj)->tagged->oid;
+ if (tagged_oid && oid_array_lookup(points_at, tagged_oid) >= 0)
+ return tagged_oid;
return NULL;
}
if (!filter_pattern_match(filter, refname))
return 0;
- if (filter->points_at.nr && !match_points_at(&filter->points_at, oid->hash, refname))
+ if (filter->points_at.nr && !match_points_at(&filter->points_at, oid, refname))
return 0;
/*
struct ref_filter {
const char **name_patterns;
- struct sha1_array points_at;
+ struct oid_array points_at;
struct commit_list *with_commit;
struct commit_list *no_commit;
#include "refs/refs-internal.h"
#include "object.h"
#include "tag.h"
+#include "submodule.h"
/*
* List of all available backends
return 1;
}
+char *refs_resolve_refdup(struct ref_store *refs,
+ const char *refname, int resolve_flags,
+ unsigned char *sha1, int *flags)
+{
+ const char *result;
+
+ result = refs_resolve_ref_unsafe(refs, refname, resolve_flags,
+ sha1, flags);
+ return xstrdup_or_null(result);
+}
+
char *resolve_refdup(const char *refname, int resolve_flags,
unsigned char *sha1, int *flags)
{
- return xstrdup_or_null(resolve_ref_unsafe(refname, resolve_flags,
- sha1, flags));
+ return refs_resolve_refdup(get_main_ref_store(),
+ refname, resolve_flags,
+ sha1, flags);
}
/* The argument to filter_refs */
void *cb_data;
};
-int read_ref_full(const char *refname, int resolve_flags, unsigned char *sha1, int *flags)
+int refs_read_ref_full(struct ref_store *refs, const char *refname,
+ int resolve_flags, unsigned char *sha1, int *flags)
{
- if (resolve_ref_unsafe(refname, resolve_flags, sha1, flags))
+ if (refs_resolve_ref_unsafe(refs, refname, resolve_flags, sha1, flags))
return 0;
return -1;
}
+int read_ref_full(const char *refname, int resolve_flags, unsigned char *sha1, int *flags)
+{
+ return refs_read_ref_full(get_main_ref_store(), refname,
+ resolve_flags, sha1, flags);
+}
+
int read_ref(const char *refname, unsigned char *sha1)
{
return read_ref_full(refname, RESOLVE_REF_READING, sha1, NULL);
for_each_rawref(warn_if_dangling_symref, &data);
}
+int refs_for_each_tag_ref(struct ref_store *refs, each_ref_fn fn, void *cb_data)
+{
+ return refs_for_each_ref_in(refs, "refs/tags/", fn, cb_data);
+}
+
int for_each_tag_ref(each_ref_fn fn, void *cb_data)
{
- return for_each_ref_in("refs/tags/", fn, cb_data);
+ return refs_for_each_tag_ref(get_main_ref_store(), fn, cb_data);
}
int for_each_tag_ref_submodule(const char *submodule, each_ref_fn fn, void *cb_data)
{
- return for_each_ref_in_submodule(submodule, "refs/tags/", fn, cb_data);
+ return refs_for_each_tag_ref(get_submodule_ref_store(submodule),
+ fn, cb_data);
+}
+
+int refs_for_each_branch_ref(struct ref_store *refs, each_ref_fn fn, void *cb_data)
+{
+ return refs_for_each_ref_in(refs, "refs/heads/", fn, cb_data);
}
int for_each_branch_ref(each_ref_fn fn, void *cb_data)
{
- return for_each_ref_in("refs/heads/", fn, cb_data);
+ return refs_for_each_branch_ref(get_main_ref_store(), fn, cb_data);
}
int for_each_branch_ref_submodule(const char *submodule, each_ref_fn fn, void *cb_data)
{
- return for_each_ref_in_submodule(submodule, "refs/heads/", fn, cb_data);
+ return refs_for_each_branch_ref(get_submodule_ref_store(submodule),
+ fn, cb_data);
+}
+
+int refs_for_each_remote_ref(struct ref_store *refs, each_ref_fn fn, void *cb_data)
+{
+ return refs_for_each_ref_in(refs, "refs/remotes/", fn, cb_data);
}
int for_each_remote_ref(each_ref_fn fn, void *cb_data)
{
- return for_each_ref_in("refs/remotes/", fn, cb_data);
+ return refs_for_each_remote_ref(get_main_ref_store(), fn, cb_data);
}
int for_each_remote_ref_submodule(const char *submodule, each_ref_fn fn, void *cb_data)
{
- return for_each_ref_in_submodule(submodule, "refs/remotes/", fn, cb_data);
+ return refs_for_each_remote_ref(get_submodule_ref_store(submodule),
+ fn, cb_data);
}
int head_ref_namespaced(each_ref_fn fn, void *cb_data)
{
const char **p, *r;
int refs_found = 0;
+ struct strbuf fullref = STRBUF_INIT;
*ref = NULL;
for (p = ref_rev_parse_rules; *p; p++) {
- char fullref[PATH_MAX];
unsigned char sha1_from_ref[20];
unsigned char *this_result;
int flag;
this_result = refs_found ? sha1_from_ref : sha1;
- mksnpath(fullref, sizeof(fullref), *p, len, str);
- r = resolve_ref_unsafe(fullref, RESOLVE_REF_READING,
+ strbuf_reset(&fullref);
+ strbuf_addf(&fullref, *p, len, str);
+ r = resolve_ref_unsafe(fullref.buf, RESOLVE_REF_READING,
this_result, &flag);
if (r) {
if (!refs_found++)
*ref = xstrdup(r);
if (!warn_ambiguous_refs)
break;
- } else if ((flag & REF_ISSYMREF) && strcmp(fullref, "HEAD")) {
- warning("ignoring dangling symref %s.", fullref);
- } else if ((flag & REF_ISBROKEN) && strchr(fullref, '/')) {
- warning("ignoring broken ref %s.", fullref);
+ } else if ((flag & REF_ISSYMREF) && strcmp(fullref.buf, "HEAD")) {
+ warning("ignoring dangling symref %s.", fullref.buf);
+ } else if ((flag & REF_ISBROKEN) && strchr(fullref.buf, '/')) {
+ warning("ignoring broken ref %s.", fullref.buf);
}
}
+ strbuf_release(&fullref);
return refs_found;
}
char *last_branch = substitute_branch_name(&str, &len);
const char **p;
int logs_found = 0;
+ struct strbuf path = STRBUF_INIT;
*log = NULL;
for (p = ref_rev_parse_rules; *p; p++) {
unsigned char hash[20];
- char path[PATH_MAX];
const char *ref, *it;
- mksnpath(path, sizeof(path), *p, len, str);
- ref = resolve_ref_unsafe(path, RESOLVE_REF_READING,
+ strbuf_reset(&path);
+ strbuf_addf(&path, *p, len, str);
+ ref = resolve_ref_unsafe(path.buf, RESOLVE_REF_READING,
hash, NULL);
if (!ref)
continue;
- if (reflog_exists(path))
- it = path;
- else if (strcmp(ref, path) && reflog_exists(ref))
+ if (reflog_exists(path.buf))
+ it = path.buf;
+ else if (strcmp(ref, path.buf) && reflog_exists(ref))
it = ref;
else
continue;
if (!warn_ambiguous_refs)
break;
}
+ strbuf_release(&path);
free(last_branch);
return logs_found;
}
return 0;
}
-int delete_ref(const char *msg, const char *refname,
- const unsigned char *old_sha1, unsigned int flags)
+int refs_delete_ref(struct ref_store *refs, const char *msg,
+ const char *refname,
+ const unsigned char *old_sha1,
+ unsigned int flags)
{
struct ref_transaction *transaction;
struct strbuf err = STRBUF_INIT;
- if (ref_type(refname) == REF_TYPE_PSEUDOREF)
+ if (ref_type(refname) == REF_TYPE_PSEUDOREF) {
+ assert(refs == get_main_ref_store());
return delete_pseudoref(refname, old_sha1);
+ }
- transaction = ref_transaction_begin(&err);
+ transaction = ref_store_transaction_begin(refs, &err);
if (!transaction ||
ref_transaction_delete(transaction, refname, old_sha1,
flags, msg, &err) ||
return 0;
}
+int delete_ref(const char *msg, const char *refname,
+ const unsigned char *old_sha1, unsigned int flags)
+{
+ return refs_delete_ref(get_main_ref_store(), msg, refname,
+ old_sha1, flags);
+}
+
int copy_reflog_msg(char *buf, const char *msg)
{
char *cp = buf;
return 1;
}
-struct ref_transaction *ref_transaction_begin(struct strbuf *err)
+struct ref_transaction *ref_store_transaction_begin(struct ref_store *refs,
+ struct strbuf *err)
{
+ struct ref_transaction *tr;
assert(err);
- return xcalloc(1, sizeof(struct ref_transaction));
+ tr = xcalloc(1, sizeof(struct ref_transaction));
+ tr->ref_store = refs;
+ return tr;
+}
+
+struct ref_transaction *ref_transaction_begin(struct strbuf *err)
+{
+ return ref_store_transaction_begin(get_main_ref_store(), err);
}
void ref_transaction_free(struct ref_transaction *transaction)
old_oid ? old_oid->hash : NULL, flags, onerr);
}
-int update_ref(const char *msg, const char *refname,
- const unsigned char *new_sha1, const unsigned char *old_sha1,
- unsigned int flags, enum action_on_err onerr)
+int refs_update_ref(struct ref_store *refs, const char *msg,
+ const char *refname, const unsigned char *new_sha1,
+ const unsigned char *old_sha1, unsigned int flags,
+ enum action_on_err onerr)
{
struct ref_transaction *t = NULL;
struct strbuf err = STRBUF_INIT;
int ret = 0;
if (ref_type(refname) == REF_TYPE_PSEUDOREF) {
+ assert(refs == get_main_ref_store());
ret = write_pseudoref(refname, new_sha1, old_sha1, &err);
} else {
- t = ref_transaction_begin(&err);
+ t = ref_store_transaction_begin(refs, &err);
if (!t ||
ref_transaction_update(t, refname, new_sha1, old_sha1,
flags, msg, &err) ||
return 0;
}
+int update_ref(const char *msg, const char *refname,
+ const unsigned char *new_sha1,
+ const unsigned char *old_sha1,
+ unsigned int flags, enum action_on_err onerr)
+{
+ return refs_update_ref(get_main_ref_store(), msg, refname, new_sha1,
+ old_sha1, flags, onerr);
+}
+
char *shorten_unambiguous_ref(const char *refname, int strict)
{
int i;
static char **scanf_fmts;
static int nr_rules;
char *short_name;
+ struct strbuf resolved_buf = STRBUF_INIT;
if (!nr_rules) {
/*
*/
for (j = 0; j < rules_to_fail; j++) {
const char *rule = ref_rev_parse_rules[j];
- char refname[PATH_MAX];
/* skip matched rule */
if (i == j)
* (with this previous rule) to a valid ref
* read_ref() returns 0 on success
*/
- mksnpath(refname, sizeof(refname),
- rule, short_name_len, short_name);
- if (ref_exists(refname))
+ strbuf_reset(&resolved_buf);
+ strbuf_addf(&resolved_buf, rule,
+ short_name_len, short_name);
+ if (ref_exists(resolved_buf.buf))
break;
}
* short name is non-ambiguous if all previous rules
* haven't resolved to a valid ref
*/
- if (j == rules_to_fail)
+ if (j == rules_to_fail) {
+ strbuf_release(&resolved_buf);
return short_name;
+ }
}
+ strbuf_release(&resolved_buf);
free(short_name);
return xstrdup(refname);
}
return NULL;
}
-int rename_ref_available(const char *old_refname, const char *new_refname)
+int refs_rename_ref_available(struct ref_store *refs,
+ const char *old_refname,
+ const char *new_refname)
{
struct string_list skip = STRING_LIST_INIT_NODUP;
struct strbuf err = STRBUF_INIT;
int ok;
string_list_insert(&skip, old_refname);
- ok = !verify_refname_available(new_refname, NULL, &skip, &err);
+ ok = !refs_verify_refname_available(refs, new_refname,
+ NULL, &skip, &err);
if (!ok)
error("%s", err.buf);
* non-zero value, stop the iteration and return that value;
* otherwise, return 0.
*/
-static int do_for_each_ref(const char *submodule, const char *prefix,
+static int do_for_each_ref(struct ref_store *refs, const char *prefix,
each_ref_fn fn, int trim, int flags, void *cb_data)
{
- struct ref_store *refs = get_ref_store(submodule);
struct ref_iterator *iter;
if (!refs)
return do_for_each_ref_iterator(iter, fn, cb_data);
}
+int refs_for_each_ref(struct ref_store *refs, each_ref_fn fn, void *cb_data)
+{
+ return do_for_each_ref(refs, "", fn, 0, 0, cb_data);
+}
+
int for_each_ref(each_ref_fn fn, void *cb_data)
{
- return do_for_each_ref(NULL, "", fn, 0, 0, cb_data);
+ return refs_for_each_ref(get_main_ref_store(), fn, cb_data);
}
int for_each_ref_submodule(const char *submodule, each_ref_fn fn, void *cb_data)
{
- return do_for_each_ref(submodule, "", fn, 0, 0, cb_data);
+ return refs_for_each_ref(get_submodule_ref_store(submodule), fn, cb_data);
+}
+
+int refs_for_each_ref_in(struct ref_store *refs, const char *prefix,
+ each_ref_fn fn, void *cb_data)
+{
+ return do_for_each_ref(refs, prefix, fn, strlen(prefix), 0, cb_data);
}
int for_each_ref_in(const char *prefix, each_ref_fn fn, void *cb_data)
{
- return do_for_each_ref(NULL, prefix, fn, strlen(prefix), 0, cb_data);
+ return refs_for_each_ref_in(get_main_ref_store(), prefix, fn, cb_data);
}
int for_each_fullref_in(const char *prefix, each_ref_fn fn, void *cb_data, unsigned int broken)
if (broken)
flag = DO_FOR_EACH_INCLUDE_BROKEN;
- return do_for_each_ref(NULL, prefix, fn, 0, flag, cb_data);
+ return do_for_each_ref(get_main_ref_store(),
+ prefix, fn, 0, flag, cb_data);
}
int for_each_ref_in_submodule(const char *submodule, const char *prefix,
- each_ref_fn fn, void *cb_data)
+ each_ref_fn fn, void *cb_data)
{
- return do_for_each_ref(submodule, prefix, fn, strlen(prefix), 0, cb_data);
+ return refs_for_each_ref_in(get_submodule_ref_store(submodule),
+ prefix, fn, cb_data);
}
int for_each_replace_ref(each_ref_fn fn, void *cb_data)
{
- return do_for_each_ref(NULL, git_replace_ref_base, fn,
- strlen(git_replace_ref_base), 0, cb_data);
+ return do_for_each_ref(get_main_ref_store(),
+ git_replace_ref_base, fn,
+ strlen(git_replace_ref_base),
+ 0, cb_data);
}
int for_each_namespaced_ref(each_ref_fn fn, void *cb_data)
struct strbuf buf = STRBUF_INIT;
int ret;
strbuf_addf(&buf, "%srefs/", get_git_namespace());
- ret = do_for_each_ref(NULL, buf.buf, fn, 0, 0, cb_data);
+ ret = do_for_each_ref(get_main_ref_store(),
+ buf.buf, fn, 0, 0, cb_data);
strbuf_release(&buf);
return ret;
}
-int for_each_rawref(each_ref_fn fn, void *cb_data)
+int refs_for_each_rawref(struct ref_store *refs, each_ref_fn fn, void *cb_data)
{
- return do_for_each_ref(NULL, "", fn, 0,
+ return do_for_each_ref(refs, "", fn, 0,
DO_FOR_EACH_INCLUDE_BROKEN, cb_data);
}
+int for_each_rawref(each_ref_fn fn, void *cb_data)
+{
+ return refs_for_each_rawref(get_main_ref_store(), fn, cb_data);
+}
+
/* This function needs to return a meaningful errno on failure */
-const char *resolve_ref_recursively(struct ref_store *refs,
+const char *refs_resolve_ref_unsafe(struct ref_store *refs,
const char *refname,
int resolve_flags,
unsigned char *sha1, int *flags)
/* backend functions */
int refs_init_db(struct strbuf *err)
{
- struct ref_store *refs = get_ref_store(NULL);
+ struct ref_store *refs = get_main_ref_store();
return refs->be->init_db(refs, err);
}
const char *resolve_ref_unsafe(const char *refname, int resolve_flags,
unsigned char *sha1, int *flags)
{
- return resolve_ref_recursively(get_ref_store(NULL), refname,
+ return refs_resolve_ref_unsafe(get_main_ref_store(), refname,
resolve_flags, sha1, flags);
}
/* We need to strip off one or more trailing slashes */
char *stripped = xmemdupz(submodule, len);
- refs = get_ref_store(stripped);
+ refs = get_submodule_ref_store(stripped);
free(stripped);
} else {
- refs = get_ref_store(submodule);
+ refs = get_submodule_ref_store(submodule);
}
if (!refs)
return -1;
- if (!resolve_ref_recursively(refs, refname, 0, sha1, &flags) ||
+ if (!refs_resolve_ref_unsafe(refs, refname, 0, sha1, &flags) ||
is_null_sha1(sha1))
return -1;
return 0;
static struct hashmap submodule_ref_stores;
/*
- * Return the ref_store instance for the specified submodule (or the
- * main repository if submodule is NULL). If that ref_store hasn't
- * been initialized yet, return NULL.
+ * Return the ref_store instance for the specified submodule. If that
+ * ref_store hasn't been initialized yet, return NULL.
*/
-static struct ref_store *lookup_ref_store(const char *submodule)
+static struct ref_store *lookup_submodule_ref_store(const char *submodule)
{
struct submodule_hash_entry *entry;
- if (!submodule)
- return main_ref_store;
-
if (!submodule_ref_stores.tablesize)
/* It's initialized on demand in register_ref_store(). */
return NULL;
return entry ? entry->refs : NULL;
}
-/*
- * Register the specified ref_store to be the one that should be used
- * for submodule (or the main repository if submodule is NULL). It is
- * a fatal error to call this function twice for the same submodule.
- */
-static void register_ref_store(struct ref_store *refs, const char *submodule)
-{
- if (!submodule) {
- if (main_ref_store)
- die("BUG: main_ref_store initialized twice");
-
- main_ref_store = refs;
- } else {
- if (!submodule_ref_stores.tablesize)
- hashmap_init(&submodule_ref_stores, submodule_hash_cmp, 0);
-
- if (hashmap_put(&submodule_ref_stores,
- alloc_submodule_hash_entry(submodule, refs)))
- die("BUG: ref_store for submodule '%s' initialized twice",
- submodule);
- }
-}
-
/*
* Create, record, and return a ref_store instance for the specified
- * submodule (or the main repository if submodule is NULL).
+ * gitdir.
*/
-static struct ref_store *ref_store_init(const char *submodule)
+static struct ref_store *ref_store_init(const char *gitdir,
+ unsigned int flags)
{
const char *be_name = "files";
struct ref_storage_be *be = find_ref_storage_backend(be_name);
if (!be)
die("BUG: reference backend %s is unknown", be_name);
- refs = be->init(submodule);
- register_ref_store(refs, submodule);
+ refs = be->init(gitdir, flags);
return refs;
}
-struct ref_store *get_ref_store(const char *submodule)
+struct ref_store *get_main_ref_store(void)
+{
+ if (main_ref_store)
+ return main_ref_store;
+
+ main_ref_store = ref_store_init(get_git_dir(),
+ (REF_STORE_READ |
+ REF_STORE_WRITE |
+ REF_STORE_ODB |
+ REF_STORE_MAIN));
+ return main_ref_store;
+}
+
+/*
+ * Register the specified ref_store to be the one that should be used
+ * for submodule. It is a fatal error to call this function twice for
+ * the same submodule.
+ */
+static void register_submodule_ref_store(struct ref_store *refs,
+ const char *submodule)
+{
+ if (!submodule_ref_stores.tablesize)
+ hashmap_init(&submodule_ref_stores, submodule_hash_cmp, 0);
+
+ if (hashmap_put(&submodule_ref_stores,
+ alloc_submodule_hash_entry(submodule, refs)))
+ die("BUG: ref_store for submodule '%s' initialized twice",
+ submodule);
+}
+
+struct ref_store *get_submodule_ref_store(const char *submodule)
{
+ struct strbuf submodule_sb = STRBUF_INIT;
struct ref_store *refs;
+ int ret;
if (!submodule || !*submodule) {
- refs = lookup_ref_store(NULL);
+ /*
+ * FIXME: This case is ideally not allowed. But that
+ * can't happen until we clean up all the callers.
+ */
+ return get_main_ref_store();
+ }
- if (!refs)
- refs = ref_store_init(NULL);
- } else {
- refs = lookup_ref_store(submodule);
+ refs = lookup_submodule_ref_store(submodule);
+ if (refs)
+ return refs;
- if (!refs) {
- struct strbuf submodule_sb = STRBUF_INIT;
+ strbuf_addstr(&submodule_sb, submodule);
+ ret = is_nonbare_repository_dir(&submodule_sb);
+ strbuf_release(&submodule_sb);
+ if (!ret)
+ return NULL;
- strbuf_addstr(&submodule_sb, submodule);
- if (is_nonbare_repository_dir(&submodule_sb))
- refs = ref_store_init(submodule);
- strbuf_release(&submodule_sb);
- }
+ ret = submodule_to_gitdir(&submodule_sb, submodule);
+ if (ret) {
+ strbuf_release(&submodule_sb);
+ return NULL;
}
+ /* assume that add_submodule_odb() has been called */
+ refs = ref_store_init(submodule_sb.buf,
+ REF_STORE_READ | REF_STORE_ODB);
+ register_submodule_ref_store(refs, submodule);
+
+ strbuf_release(&submodule_sb);
return refs;
}
}
/* backend functions */
-int pack_refs(unsigned int flags)
+int refs_pack_refs(struct ref_store *refs, unsigned int flags)
{
- struct ref_store *refs = get_ref_store(NULL);
-
return refs->be->pack_refs(refs, flags);
}
+int refs_peel_ref(struct ref_store *refs, const char *refname,
+ unsigned char *sha1)
+{
+ return refs->be->peel_ref(refs, refname, sha1);
+}
+
int peel_ref(const char *refname, unsigned char *sha1)
{
- struct ref_store *refs = get_ref_store(NULL);
+ return refs_peel_ref(get_main_ref_store(), refname, sha1);
+}
- return refs->be->peel_ref(refs, refname, sha1);
+int refs_create_symref(struct ref_store *refs,
+ const char *ref_target,
+ const char *refs_heads_master,
+ const char *logmsg)
+{
+ return refs->be->create_symref(refs, ref_target,
+ refs_heads_master,
+ logmsg);
}
int create_symref(const char *ref_target, const char *refs_heads_master,
const char *logmsg)
{
- struct ref_store *refs = get_ref_store(NULL);
-
- return refs->be->create_symref(refs, ref_target, refs_heads_master,
- logmsg);
+ return refs_create_symref(get_main_ref_store(), ref_target,
+ refs_heads_master, logmsg);
}
int ref_transaction_commit(struct ref_transaction *transaction,
struct strbuf *err)
{
- struct ref_store *refs = get_ref_store(NULL);
+ struct ref_store *refs = transaction->ref_store;
+
+ if (getenv(GIT_QUARANTINE_ENVIRONMENT)) {
+ strbuf_addstr(err,
+ _("ref updates forbidden inside quarantine environment"));
+ return -1;
+ }
return refs->be->transaction_commit(refs, transaction, err);
}
-int verify_refname_available(const char *refname,
- const struct string_list *extra,
- const struct string_list *skip,
- struct strbuf *err)
+int refs_verify_refname_available(struct ref_store *refs,
+ const char *refname,
+ const struct string_list *extra,
+ const struct string_list *skip,
+ struct strbuf *err)
{
- struct ref_store *refs = get_ref_store(NULL);
-
return refs->be->verify_refname_available(refs, refname, extra, skip, err);
}
-int for_each_reflog(each_ref_fn fn, void *cb_data)
+int refs_for_each_reflog(struct ref_store *refs, each_ref_fn fn, void *cb_data)
{
- struct ref_store *refs = get_ref_store(NULL);
struct ref_iterator *iter;
iter = refs->be->reflog_iterator_begin(refs);
return do_for_each_ref_iterator(iter, fn, cb_data);
}
-int for_each_reflog_ent_reverse(const char *refname, each_reflog_ent_fn fn,
- void *cb_data)
+int for_each_reflog(each_ref_fn fn, void *cb_data)
{
- struct ref_store *refs = get_ref_store(NULL);
+ return refs_for_each_reflog(get_main_ref_store(), fn, cb_data);
+}
+int refs_for_each_reflog_ent_reverse(struct ref_store *refs,
+ const char *refname,
+ each_reflog_ent_fn fn,
+ void *cb_data)
+{
return refs->be->for_each_reflog_ent_reverse(refs, refname,
fn, cb_data);
}
+int for_each_reflog_ent_reverse(const char *refname, each_reflog_ent_fn fn,
+ void *cb_data)
+{
+ return refs_for_each_reflog_ent_reverse(get_main_ref_store(),
+ refname, fn, cb_data);
+}
+
+int refs_for_each_reflog_ent(struct ref_store *refs, const char *refname,
+ each_reflog_ent_fn fn, void *cb_data)
+{
+ return refs->be->for_each_reflog_ent(refs, refname, fn, cb_data);
+}
+
int for_each_reflog_ent(const char *refname, each_reflog_ent_fn fn,
void *cb_data)
{
- struct ref_store *refs = get_ref_store(NULL);
+ return refs_for_each_reflog_ent(get_main_ref_store(), refname,
+ fn, cb_data);
+}
- return refs->be->for_each_reflog_ent(refs, refname, fn, cb_data);
+int refs_reflog_exists(struct ref_store *refs, const char *refname)
+{
+ return refs->be->reflog_exists(refs, refname);
}
int reflog_exists(const char *refname)
{
- struct ref_store *refs = get_ref_store(NULL);
+ return refs_reflog_exists(get_main_ref_store(), refname);
+}
- return refs->be->reflog_exists(refs, refname);
+int refs_create_reflog(struct ref_store *refs, const char *refname,
+ int force_create, struct strbuf *err)
+{
+ return refs->be->create_reflog(refs, refname, force_create, err);
}
int safe_create_reflog(const char *refname, int force_create,
struct strbuf *err)
{
- struct ref_store *refs = get_ref_store(NULL);
+ return refs_create_reflog(get_main_ref_store(), refname,
+ force_create, err);
+}
- return refs->be->create_reflog(refs, refname, force_create, err);
+int refs_delete_reflog(struct ref_store *refs, const char *refname)
+{
+ return refs->be->delete_reflog(refs, refname);
}
int delete_reflog(const char *refname)
{
- struct ref_store *refs = get_ref_store(NULL);
+ return refs_delete_reflog(get_main_ref_store(), refname);
+}
- return refs->be->delete_reflog(refs, refname);
+int refs_reflog_expire(struct ref_store *refs,
+ const char *refname, const unsigned char *sha1,
+ unsigned int flags,
+ reflog_expiry_prepare_fn prepare_fn,
+ reflog_expiry_should_prune_fn should_prune_fn,
+ reflog_expiry_cleanup_fn cleanup_fn,
+ void *policy_cb_data)
+{
+ return refs->be->reflog_expire(refs, refname, sha1, flags,
+ prepare_fn, should_prune_fn,
+ cleanup_fn, policy_cb_data);
}
int reflog_expire(const char *refname, const unsigned char *sha1,
reflog_expiry_cleanup_fn cleanup_fn,
void *policy_cb_data)
{
- struct ref_store *refs = get_ref_store(NULL);
-
- return refs->be->reflog_expire(refs, refname, sha1, flags,
- prepare_fn, should_prune_fn,
- cleanup_fn, policy_cb_data);
+ return refs_reflog_expire(get_main_ref_store(),
+ refname, sha1, flags,
+ prepare_fn, should_prune_fn,
+ cleanup_fn, policy_cb_data);
}
int initial_ref_transaction_commit(struct ref_transaction *transaction,
struct strbuf *err)
{
- struct ref_store *refs = get_ref_store(NULL);
+ struct ref_store *refs = transaction->ref_store;
return refs->be->initial_transaction_commit(refs, transaction, err);
}
-int delete_refs(struct string_list *refnames, unsigned int flags)
+int refs_delete_refs(struct ref_store *refs, struct string_list *refnames,
+ unsigned int flags)
{
- struct ref_store *refs = get_ref_store(NULL);
-
return refs->be->delete_refs(refs, refnames, flags);
}
-int rename_ref(const char *oldref, const char *newref, const char *logmsg)
+int delete_refs(struct string_list *refnames, unsigned int flags)
{
- struct ref_store *refs = get_ref_store(NULL);
+ return refs_delete_refs(get_main_ref_store(), refnames, flags);
+}
+int refs_rename_ref(struct ref_store *refs, const char *oldref,
+ const char *newref, const char *logmsg)
+{
return refs->be->rename_ref(refs, oldref, newref, logmsg);
}
+
+int rename_ref(const char *oldref, const char *newref, const char *logmsg)
+{
+ return refs_rename_ref(get_main_ref_store(), oldref, newref, logmsg);
+}
#ifndef REFS_H
#define REFS_H
+struct object_id;
+struct ref_store;
+struct strbuf;
+struct string_list;
+
/*
* Resolve a reference, recursively following symbolic refererences.
*
#define RESOLVE_REF_NO_RECURSE 0x02
#define RESOLVE_REF_ALLOW_BAD_NAME 0x04
+const char *refs_resolve_ref_unsafe(struct ref_store *refs,
+ const char *refname,
+ int resolve_flags,
+ unsigned char *sha1,
+ int *flags);
const char *resolve_ref_unsafe(const char *refname, int resolve_flags,
unsigned char *sha1, int *flags);
+char *refs_resolve_refdup(struct ref_store *refs,
+ const char *refname, int resolve_flags,
+ unsigned char *sha1, int *flags);
char *resolve_refdup(const char *refname, int resolve_flags,
unsigned char *sha1, int *flags);
+int refs_read_ref_full(struct ref_store *refs, const char *refname,
+ int resolve_flags, unsigned char *sha1, int *flags);
int read_ref_full(const char *refname, int resolve_flags,
unsigned char *sha1, int *flags);
int read_ref(const char *refname, unsigned char *sha1);
+/*
+ * Return 0 if a reference named refname could be created without
+ * conflicting with the name of an existing reference. Otherwise,
+ * return a negative value and write an explanation to err. If extras
+ * is non-NULL, it is a list of additional refnames with which refname
+ * is not allowed to conflict. If skip is non-NULL, ignore potential
+ * conflicts with refs in skip (e.g., because they are scheduled for
+ * deletion in the same operation). Behavior is undefined if the same
+ * name is listed in both extras and skip.
+ *
+ * Two reference names conflict if one of them exactly matches the
+ * leading components of the other; e.g., "foo/bar" conflicts with
+ * both "foo" and with "foo/bar/baz" but not with "foo/bar" or
+ * "foo/barbados".
+ *
+ * extras and skip must be sorted.
+ */
+
+int refs_verify_refname_available(struct ref_store *refs,
+ const char *refname,
+ const struct string_list *extra,
+ const struct string_list *skip,
+ struct strbuf *err);
+
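A minimal sketch of checking name availability against the main ref store before creating a ref, using the declaration above; the refname is made up for illustration:

    struct strbuf err = STRBUF_INIT;

    if (refs_verify_refname_available(get_main_ref_store(),
                                      "refs/heads/topic/a",
                                      NULL, NULL, &err))
            die("%s", err.buf);     /* e.g. "refs/heads/topic" already exists */
    strbuf_release(&err);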
int ref_exists(const char *refname);
int should_autocreate_reflog(const char *refname);
* Symbolic references are considered unpeelable, even if they
* ultimately resolve to a peelable tag.
*/
+int refs_peel_ref(struct ref_store *refs, const char *refname,
+ unsigned char *sha1);
int peel_ref(const char *refname, unsigned char *sha1);
/**
* it is not safe to modify references while an iteration is in
* progress, unless the same callback function invocation that
* modifies the reference also returns a nonzero value to immediately
- * stop the iteration.
+ * stop the iteration. Returned references are sorted.
*/
+int refs_for_each_ref(struct ref_store *refs,
+ each_ref_fn fn, void *cb_data);
+int refs_for_each_ref_in(struct ref_store *refs, const char *prefix,
+ each_ref_fn fn, void *cb_data);
+int refs_for_each_tag_ref(struct ref_store *refs,
+ each_ref_fn fn, void *cb_data);
+int refs_for_each_branch_ref(struct ref_store *refs,
+ each_ref_fn fn, void *cb_data);
+int refs_for_each_remote_ref(struct ref_store *refs,
+ each_ref_fn fn, void *cb_data);
+
int head_ref(each_ref_fn fn, void *cb_data);
int for_each_ref(each_ref_fn fn, void *cb_data);
int for_each_ref_in(const char *prefix, each_ref_fn fn, void *cb_data);
int for_each_namespaced_ref(each_ref_fn fn, void *cb_data);
/* can be used to learn about broken ref and symref */
+int refs_for_each_rawref(struct ref_store *refs, each_ref_fn fn, void *cb_data);
int for_each_rawref(each_ref_fn fn, void *cb_data);
static inline const char *has_glob_specials(const char *pattern)
* Write a packed-refs file for the current repository.
* flags: Combination of the above PACK_REFS_* flags.
*/
-int pack_refs(unsigned int flags);
+int refs_pack_refs(struct ref_store *refs, unsigned int flags);
/*
* Flags controlling ref_transaction_update(), ref_transaction_create(), etc.
/*
* Setup reflog before using. Fill in err and return -1 on failure.
*/
+int refs_create_reflog(struct ref_store *refs, const char *refname,
+ int force_create, struct strbuf *err);
int safe_create_reflog(const char *refname, int force_create, struct strbuf *err);
/** Reads log for the value of ref during at_time. **/
unsigned long *cutoff_time, int *cutoff_tz, int *cutoff_cnt);
/** Check if a particular reflog exists */
+int refs_reflog_exists(struct ref_store *refs, const char *refname);
int reflog_exists(const char *refname);
/*
* exists, regardless of its old value. It is an error for old_sha1 to
* be NULL_SHA1. flags is passed through to ref_transaction_delete().
*/
+int refs_delete_ref(struct ref_store *refs, const char *msg,
+ const char *refname,
+ const unsigned char *old_sha1,
+ unsigned int flags);
int delete_ref(const char *msg, const char *refname,
const unsigned char *old_sha1, unsigned int flags);
* an all-or-nothing transaction). flags is passed through to
* ref_transaction_delete().
*/
+int refs_delete_refs(struct ref_store *refs, struct string_list *refnames,
+ unsigned int flags);
int delete_refs(struct string_list *refnames, unsigned int flags);
/** Delete a reflog */
+int refs_delete_reflog(struct ref_store *refs, const char *refname);
int delete_reflog(const char *refname);
/* iterate over reflog entries */
const char *committer, unsigned long timestamp,
int tz, const char *msg, void *cb_data);
+int refs_for_each_reflog_ent(struct ref_store *refs, const char *refname,
+ each_reflog_ent_fn fn, void *cb_data);
+int refs_for_each_reflog_ent_reverse(struct ref_store *refs,
+ const char *refname,
+ each_reflog_ent_fn fn,
+ void *cb_data);
int for_each_reflog_ent(const char *refname, each_reflog_ent_fn fn, void *cb_data);
int for_each_reflog_ent_reverse(const char *refname, each_reflog_ent_fn fn, void *cb_data);
/*
* Calls the specified function for each reflog file until it returns nonzero,
- * and returns the value
+ * and returns the value. Reflog file order is unspecified.
*/
+int refs_for_each_reflog(struct ref_store *refs, each_ref_fn fn, void *cb_data);
int for_each_reflog(each_ref_fn fn, void *cb_data);
#define REFNAME_ALLOW_ONELEVEL 1
char *shorten_unambiguous_ref(const char *refname, int strict);
/** rename ref, return 0 on success **/
+int refs_rename_ref(struct ref_store *refs, const char *oldref,
+ const char *newref, const char *logmsg);
int rename_ref(const char *oldref, const char *newref, const char *logmsg);
+int refs_create_symref(struct ref_store *refs, const char *refname,
+ const char *target, const char *logmsg);
int create_symref(const char *refname, const char *target, const char *logmsg);
/*
* Begin a reference transaction. The reference transaction must
* be freed by calling ref_transaction_free().
*/
+struct ref_transaction *ref_store_transaction_begin(struct ref_store *refs,
+ struct strbuf *err);
struct ref_transaction *ref_transaction_begin(struct strbuf *err);
/*
* ref_transaction_update(). Handle errors as requested by the `onerr`
* argument.
*/
+int refs_update_ref(struct ref_store *refs, const char *msg, const char *refname,
+ const unsigned char *new_sha1, const unsigned char *old_sha1,
+ unsigned int flags, enum action_on_err onerr);
int update_ref(const char *msg, const char *refname,
const unsigned char *new_sha1, const unsigned char *old_sha1,
unsigned int flags, enum action_on_err onerr);
* enum expire_reflog_flags. The three function pointers are described
* above. On success, return zero.
*/
+int refs_reflog_expire(struct ref_store *refs,
+ const char *refname,
+ const unsigned char *sha1,
+ unsigned int flags,
+ reflog_expiry_prepare_fn prepare_fn,
+ reflog_expiry_should_prune_fn should_prune_fn,
+ reflog_expiry_cleanup_fn cleanup_fn,
+ void *policy_cb_data);
int reflog_expire(const char *refname, const unsigned char *sha1,
unsigned int flags,
reflog_expiry_prepare_fn prepare_fn,
int ref_storage_backend_exists(const char *name);
+struct ref_store *get_main_ref_store(void);
+/*
+ * Return the ref_store instance for the specified submodule. For the
+ * main repository, use submodule==NULL; such a call cannot fail. For
+ * a submodule, the submodule must exist and be a nonbare repository,
+ * otherwise return NULL. If the requested reference store has not yet
+ * been initialized, initialize it first.
+ *
+ * For backwards compatibility, submodule=="" is treated the same as
+ * submodule==NULL.
+ */
+struct ref_store *get_submodule_ref_store(const char *submodule);
+
#endif /* REFS_H */
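Putting the new ref-store entry points together, a hedged sketch of listing a submodule's refs through the store-aware iteration API; the callback name and submodule path are illustrative, the each_ref_fn signature is the one refs.h already uses, and the usual cache.h includes are assumed:

    static int show_one(const char *refname, const struct object_id *oid,
                        int flags, void *cb_data)
    {
            printf("%s %s\n", oid_to_hex(oid), refname);
            return 0;       /* keep iterating */
    }

    static void list_submodule_refs(const char *path)
    {
            struct ref_store *refs = get_submodule_ref_store(path);

            if (!refs)
                    return; /* not a checked-out, non-bare submodule */
            refs_for_each_ref(refs, show_one, NULL);
    }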
const char *dirname, size_t len,
int incomplete);
static void add_entry_to_dir(struct ref_dir *dir, struct ref_entry *entry);
+static int files_log_ref_write(struct files_ref_store *refs,
+ const char *refname, const unsigned char *old_sha1,
+ const unsigned char *new_sha1, const char *msg,
+ int flags, struct strbuf *err);
static struct ref_dir *get_ref_dir(struct ref_entry *entry)
{
*/
struct files_ref_store {
struct ref_store base;
+ unsigned int store_flags;
- /*
- * The name of the submodule represented by this object, or
- * NULL if it represents the main repository's reference
- * store:
- */
- const char *submodule;
+ char *gitdir;
+ char *gitcommondir;
+ char *packed_refs_path;
struct ref_entry *loose;
struct packed_ref_cache *packed;
* Create a new submodule ref cache and add it to the internal
* set of caches.
*/
-static struct ref_store *files_ref_store_create(const char *submodule)
+static struct ref_store *files_ref_store_create(const char *gitdir,
+ unsigned int flags)
{
struct files_ref_store *refs = xcalloc(1, sizeof(*refs));
struct ref_store *ref_store = (struct ref_store *)refs;
+ struct strbuf sb = STRBUF_INIT;
base_ref_store_init(ref_store, &refs_be_files);
+ refs->store_flags = flags;
- refs->submodule = xstrdup_or_null(submodule);
+ refs->gitdir = xstrdup(gitdir);
+ get_common_dir_noenv(&sb, gitdir);
+ refs->gitcommondir = strbuf_detach(&sb, NULL);
+ strbuf_addf(&sb, "%s/packed-refs", refs->gitcommondir);
+ refs->packed_refs_path = strbuf_detach(&sb, NULL);
return ref_store;
}
/*
- * Die if refs is for a submodule (i.e., not for the main repository).
- * caller is used in any necessary error messages.
+ * Die if refs is not the main ref store. caller is used in any
+ * necessary error messages.
*/
static void files_assert_main_repository(struct files_ref_store *refs,
const char *caller)
{
- if (refs->submodule)
- die("BUG: %s called for a submodule", caller);
+ if (refs->store_flags & REF_STORE_MAIN)
+ return;
+
+ die("BUG: operation %s only allowed for main ref store", caller);
}
/*
* Downcast ref_store to files_ref_store. Die if ref_store is not a
- * files_ref_store. If submodule_allowed is not true, then also die if
- * files_ref_store is for a submodule (i.e., not for the main
- * repository). caller is used in any necessary error messages.
+ * files_ref_store. required_flags is compared with ref_store's
+ * store_flags to ensure the ref_store has all required capabilities.
+ * "caller" is used in any necessary error messages.
*/
-static struct files_ref_store *files_downcast(
- struct ref_store *ref_store, int submodule_allowed,
- const char *caller)
+static struct files_ref_store *files_downcast(struct ref_store *ref_store,
+ unsigned int required_flags,
+ const char *caller)
{
struct files_ref_store *refs;
refs = (struct files_ref_store *)ref_store;
- if (!submodule_allowed)
- files_assert_main_repository(refs, caller);
+ if ((refs->store_flags & required_flags) != required_flags)
+ die("BUG: operation %s requires abilities 0x%x, but only have 0x%x",
+ caller, required_flags, refs->store_flags);
return refs;
}
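In other words, a store's capabilities are recorded once at creation time in store_flags, and every backend method asserts the subset it needs when it downcasts, so a mismatch surfaces as an immediate BUG rather than a silent wrong-store access. A hedged sketch of the call pattern a new method would follow; the method name is hypothetical:

static int files_frobnicate(struct ref_store *ref_store)
{
	/* This method needs read access plus the object database. */
	struct files_ref_store *refs =
		files_downcast(ref_store, REF_STORE_READ | REF_STORE_ODB,
			       "frobnicate");

	/* A store created without either flag has already died above. */
	return refs ? 0 : -1;
}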
strbuf_release(&line);
}
+static const char *files_packed_refs_path(struct files_ref_store *refs)
+{
+ return refs->packed_refs_path;
+}
+
+static void files_reflog_path(struct files_ref_store *refs,
+ struct strbuf *sb,
+ const char *refname)
+{
+ if (!refname) {
+ /*
+ * FIXME: this is of course wrong in a multi-worktree
+ * setting. To be fixed real soon.
+ */
+ strbuf_addf(sb, "%s/logs", refs->gitcommondir);
+ return;
+ }
+
+ switch (ref_type(refname)) {
+ case REF_TYPE_PER_WORKTREE:
+ case REF_TYPE_PSEUDOREF:
+ strbuf_addf(sb, "%s/logs/%s", refs->gitdir, refname);
+ break;
+ case REF_TYPE_NORMAL:
+ strbuf_addf(sb, "%s/logs/%s", refs->gitcommondir, refname);
+ break;
+ default:
+ die("BUG: unknown ref type %d of ref %s",
+ ref_type(refname), refname);
+ }
+}
+
+static void files_ref_path(struct files_ref_store *refs,
+ struct strbuf *sb,
+ const char *refname)
+{
+ switch (ref_type(refname)) {
+ case REF_TYPE_PER_WORKTREE:
+ case REF_TYPE_PSEUDOREF:
+ strbuf_addf(sb, "%s/%s", refs->gitdir, refname);
+ break;
+ case REF_TYPE_NORMAL:
+ strbuf_addf(sb, "%s/%s", refs->gitcommondir, refname);
+ break;
+ default:
+ die("BUG: unknown ref type %d of ref %s",
+ ref_type(refname), refname);
+ }
+}
+
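These two helpers are where the worktree split is decided: per-worktree refs (HEAD, pseudorefs and friends) resolve under the store's own gitdir, while shared refs resolve under the common dir. A small sketch of how one might print the chosen locations while debugging; debug_ref_location is a hypothetical helper, not part of the patch:

static void debug_ref_location(struct files_ref_store *refs,
			       const char *refname)
{
	struct strbuf loose = STRBUF_INIT, log = STRBUF_INIT;

	files_ref_path(refs, &loose, refname);   /* where the loose ref lives */
	files_reflog_path(refs, &log, refname);  /* where its reflog lives */
	fprintf(stderr, "%s -> %s (log: %s)\n", refname, loose.buf, log.buf);

	strbuf_release(&loose);
	strbuf_release(&log);
}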
/*
* Get the packed_ref_cache for the specified files_ref_store,
* creating it if necessary.
*/
static struct packed_ref_cache *get_packed_ref_cache(struct files_ref_store *refs)
{
- char *packed_refs_file;
-
- if (refs->submodule)
- packed_refs_file = git_pathdup_submodule(refs->submodule,
- "packed-refs");
- else
- packed_refs_file = git_pathdup("packed-refs");
+ const char *packed_refs_file = files_packed_refs_path(refs);
if (refs->packed &&
!stat_validity_check(&refs->packed->validity, packed_refs_file))
fclose(f);
}
}
- free(packed_refs_file);
return refs->packed;
}
struct strbuf refname;
struct strbuf path = STRBUF_INIT;
size_t path_baselen;
- int err = 0;
- if (refs->submodule)
- err = strbuf_git_path_submodule(&path, refs->submodule, "%s", dirname);
- else
- strbuf_git_path(&path, "%s", dirname);
+ files_ref_path(refs, &path, dirname);
path_baselen = path.len;
- if (err) {
- strbuf_release(&path);
- return;
- }
-
d = opendir(path.buf);
if (!d) {
strbuf_release(&path);
create_dir_entry(refs, refname.buf,
refname.len, 1));
} else {
- if (!resolve_ref_recursively(&refs->base,
+ if (!refs_resolve_ref_unsafe(&refs->base,
refname.buf,
RESOLVE_REF_READING,
sha1, &flag)) {
struct strbuf *referent, unsigned int *type)
{
struct files_ref_store *refs =
- files_downcast(ref_store, 1, "read_raw_ref");
+ files_downcast(ref_store, REF_STORE_READ, "read_raw_ref");
struct strbuf sb_contents = STRBUF_INIT;
struct strbuf sb_path = STRBUF_INIT;
const char *path;
*type = 0;
strbuf_reset(&sb_path);
- if (refs->submodule)
- strbuf_git_path_submodule(&sb_path, refs->submodule, "%s", refname);
- else
- strbuf_git_path(&sb_path, "%s", refname);
+ files_ref_path(refs, &sb_path, refname);
path = sb_path.buf;
*lock_p = lock = xcalloc(1, sizeof(*lock));
lock->ref_name = xstrdup(refname);
- strbuf_git_path(&ref_file, "%s", refname);
+ files_ref_path(refs, &ref_file, refname);
retry:
switch (safe_create_leading_directories(ref_file.buf)) {
* another reference such as "refs/foo". There is no
* reason to expect this error to be transitory.
*/
- if (verify_refname_available(refname, extras, skip, err)) {
+ if (refs_verify_refname_available(&refs->base, refname,
+ extras, skip, err)) {
if (mustexist) {
/*
* To the user the relevant error is
static int files_peel_ref(struct ref_store *ref_store,
const char *refname, unsigned char *sha1)
{
- struct files_ref_store *refs = files_downcast(ref_store, 0, "peel_ref");
+ struct files_ref_store *refs =
+ files_downcast(ref_store, REF_STORE_READ | REF_STORE_ODB,
+ "peel_ref");
int flag;
unsigned char base[20];
return 0;
}
- if (read_ref_full(refname, RESOLVE_REF_READING, base, &flag))
+ if (refs_read_ref_full(ref_store, refname,
+ RESOLVE_REF_READING, base, &flag))
return -1;
/*
struct ref_store *ref_store,
const char *prefix, unsigned int flags)
{
- struct files_ref_store *refs =
- files_downcast(ref_store, 1, "ref_iterator_begin");
+ struct files_ref_store *refs;
struct ref_dir *loose_dir, *packed_dir;
struct ref_iterator *loose_iter, *packed_iter;
struct files_ref_iterator *iter;
struct ref_iterator *ref_iterator;
- if (!refs)
- return empty_ref_iterator_begin();
-
if (ref_paranoia < 0)
ref_paranoia = git_env_bool("GIT_REF_PARANOIA", 0);
if (ref_paranoia)
flags |= DO_FOR_EACH_INCLUDE_BROKEN;
+ refs = files_downcast(ref_store,
+ REF_STORE_READ | (ref_paranoia ? 0 : REF_STORE_ODB),
+ "ref_iterator_begin");
+
iter = xcalloc(1, sizeof(*iter));
ref_iterator = &iter->base;
base_ref_iterator_init(ref_iterator, &files_ref_iterator_vtable);
* on success. On error, write an error message to err, set errno, and
* return a negative value.
*/
-static int verify_lock(struct ref_lock *lock,
+static int verify_lock(struct ref_store *ref_store, struct ref_lock *lock,
const unsigned char *old_sha1, int mustexist,
struct strbuf *err)
{
assert(err);
- if (read_ref_full(lock->ref_name,
- mustexist ? RESOLVE_REF_READING : 0,
- lock->old_oid.hash, NULL)) {
+ if (refs_read_ref_full(ref_store, lock->ref_name,
+ mustexist ? RESOLVE_REF_READING : 0,
+ lock->old_oid.hash, NULL)) {
if (old_sha1) {
int save_errno = errno;
strbuf_addf(err, "can't verify ref '%s'", lock->ref_name);
if (flags & REF_DELETING)
resolve_flags |= RESOLVE_REF_ALLOW_BAD_NAME;
- strbuf_git_path(&ref_file, "%s", refname);
- resolved = !!resolve_ref_unsafe(refname, resolve_flags,
- lock->old_oid.hash, type);
+ files_ref_path(refs, &ref_file, refname);
+ resolved = !!refs_resolve_ref_unsafe(&refs->base,
+ refname, resolve_flags,
+ lock->old_oid.hash, type);
if (!resolved && errno == EISDIR) {
/*
* we are trying to lock foo but we used to
refname);
goto error_return;
}
- resolved = !!resolve_ref_unsafe(refname, resolve_flags,
- lock->old_oid.hash, type);
+ resolved = !!refs_resolve_ref_unsafe(&refs->base,
+ refname, resolve_flags,
+ lock->old_oid.hash, type);
}
if (!resolved) {
last_errno = errno;
goto error_return;
}
- if (verify_lock(lock, old_sha1, mustexist, err)) {
+ if (verify_lock(&refs->base, lock, old_sha1, mustexist, err)) {
last_errno = errno;
goto error_return;
}
}
if (hold_lock_file_for_update_timeout(
- &packlock, git_path("packed-refs"),
+ &packlock, files_packed_refs_path(refs),
flags, timeout_value) < 0)
return -1;
/*
* subdirs. flags is a combination of REMOVE_EMPTY_PARENTS_REF and/or
* REMOVE_EMPTY_PARENTS_REFLOG.
*/
-static void try_remove_empty_parents(const char *refname, unsigned int flags)
+static void try_remove_empty_parents(struct files_ref_store *refs,
+ const char *refname,
+ unsigned int flags)
{
struct strbuf buf = STRBUF_INIT;
+ struct strbuf sb = STRBUF_INIT;
char *p, *q;
int i;
if (q == p)
break;
strbuf_setlen(&buf, q - buf.buf);
- if ((flags & REMOVE_EMPTY_PARENTS_REF) &&
- rmdir(git_path("%s", buf.buf)))
+
+ strbuf_reset(&sb);
+ files_ref_path(refs, &sb, buf.buf);
+ if ((flags & REMOVE_EMPTY_PARENTS_REF) && rmdir(sb.buf))
flags &= ~REMOVE_EMPTY_PARENTS_REF;
- if ((flags & REMOVE_EMPTY_PARENTS_REFLOG) &&
- rmdir(git_path("logs/%s", buf.buf)))
+
+ strbuf_reset(&sb);
+ files_reflog_path(refs, &sb, buf.buf);
+ if ((flags & REMOVE_EMPTY_PARENTS_REFLOG) && rmdir(sb.buf))
flags &= ~REMOVE_EMPTY_PARENTS_REFLOG;
}
strbuf_release(&buf);
+ strbuf_release(&sb);
}
/* make sure nobody touched the ref, and unlink */
-static void prune_ref(struct ref_to_prune *r)
+static void prune_ref(struct files_ref_store *refs, struct ref_to_prune *r)
{
struct ref_transaction *transaction;
struct strbuf err = STRBUF_INIT;
if (check_refname_format(r->name, 0))
return;
- transaction = ref_transaction_begin(&err);
+ transaction = ref_store_transaction_begin(&refs->base, &err);
if (!transaction ||
ref_transaction_delete(transaction, r->name, r->sha1,
REF_ISPRUNING | REF_NODEREF, NULL, &err) ||
strbuf_release(&err);
}
-static void prune_refs(struct ref_to_prune *r)
+static void prune_refs(struct files_ref_store *refs, struct ref_to_prune *r)
{
while (r) {
- prune_ref(r);
+ prune_ref(refs, r);
r = r->next;
}
}
static int files_pack_refs(struct ref_store *ref_store, unsigned int flags)
{
struct files_ref_store *refs =
- files_downcast(ref_store, 0, "pack_refs");
+ files_downcast(ref_store, REF_STORE_WRITE | REF_STORE_ODB,
+ "pack_refs");
struct pack_refs_cb_data cbdata;
memset(&cbdata, 0, sizeof(cbdata));
if (commit_packed_refs(refs))
die_errno("unable to overwrite old ref-pack file");
- prune_refs(cbdata.ref_to_prune);
+ prune_refs(refs, cbdata.ref_to_prune);
return 0;
}
return 0; /* no refname exists in packed refs */
if (lock_packed_refs(refs, 0)) {
- unable_to_lock_message(git_path("packed-refs"), errno, err);
+ unable_to_lock_message(files_packed_refs_path(refs), errno, err);
return -1;
}
packed = get_packed_refs(refs);
struct string_list *refnames, unsigned int flags)
{
struct files_ref_store *refs =
- files_downcast(ref_store, 0, "delete_refs");
+ files_downcast(ref_store, REF_STORE_WRITE, "delete_refs");
struct strbuf err = STRBUF_INIT;
int i, result = 0;
for (i = 0; i < refnames->nr; i++) {
const char *refname = refnames->items[i].string;
- if (delete_ref(NULL, refname, NULL, flags))
+ if (refs_delete_ref(&refs->base, NULL, refname, NULL, flags))
result |= error(_("could not remove reference %s"), refname);
}
* IOW, to avoid cross-device rename errors, the temporary renamed log must
* live within logs/refs.
*/
-#define TMP_RENAMED_LOG "logs/refs/.tmp-renamed-log"
+#define TMP_RENAMED_LOG "refs/.tmp-renamed-log"
-static int rename_tmp_log_callback(const char *path, void *cb)
+struct rename_cb {
+ const char *tmp_renamed_log;
+ int true_errno;
+};
+
+static int rename_tmp_log_callback(const char *path, void *cb_data)
{
- int *true_errno = cb;
+ struct rename_cb *cb = cb_data;
- if (rename(git_path(TMP_RENAMED_LOG), path)) {
+ if (rename(cb->tmp_renamed_log, path)) {
/*
* rename(a, b) when b is an existing directory ought
* to result in ISDIR, but Solaris 5.8 gives ENOTDIR.
* but report EISDIR to raceproof_create_file() so
* that it knows to retry.
*/
- *true_errno = errno;
+ cb->true_errno = errno;
if (errno == ENOTDIR)
errno = EISDIR;
return -1;
}
}
-static int rename_tmp_log(const char *newrefname)
+static int rename_tmp_log(struct files_ref_store *refs, const char *newrefname)
{
- char *path = git_pathdup("logs/%s", newrefname);
- int ret, true_errno;
+ struct strbuf path = STRBUF_INIT;
+ struct strbuf tmp = STRBUF_INIT;
+ struct rename_cb cb;
+ int ret;
- ret = raceproof_create_file(path, rename_tmp_log_callback, &true_errno);
+ files_reflog_path(refs, &path, newrefname);
+ files_reflog_path(refs, &tmp, TMP_RENAMED_LOG);
+ cb.tmp_renamed_log = tmp.buf;
+ ret = raceproof_create_file(path.buf, rename_tmp_log_callback, &cb);
if (ret) {
if (errno == EISDIR)
- error("directory not empty: %s", path);
+ error("directory not empty: %s", path.buf);
else
error("unable to move logfile %s to %s: %s",
- git_path(TMP_RENAMED_LOG), path,
- strerror(true_errno));
+ tmp.buf, path.buf,
+ strerror(cb.true_errno));
}
- free(path);
+ strbuf_release(&path);
+ strbuf_release(&tmp);
return ret;
}
struct strbuf *err)
{
struct files_ref_store *refs =
- files_downcast(ref_store, 1, "verify_refname_available");
+ files_downcast(ref_store, REF_STORE_READ, "verify_refname_available");
struct ref_dir *packed_refs = get_packed_refs(refs);
struct ref_dir *loose_refs = get_loose_refs(refs);
const char *logmsg)
{
struct files_ref_store *refs =
- files_downcast(ref_store, 0, "rename_ref");
+ files_downcast(ref_store, REF_STORE_WRITE, "rename_ref");
unsigned char sha1[20], orig_sha1[20];
int flag = 0, logmoved = 0;
struct ref_lock *lock;
struct stat loginfo;
- int log = !lstat(git_path("logs/%s", oldrefname), &loginfo);
+ struct strbuf sb_oldref = STRBUF_INIT;
+ struct strbuf sb_newref = STRBUF_INIT;
+ struct strbuf tmp_renamed_log = STRBUF_INIT;
+ int log, ret;
struct strbuf err = STRBUF_INIT;
- if (log && S_ISLNK(loginfo.st_mode))
- return error("reflog for %s is a symlink", oldrefname);
+ files_reflog_path(refs, &sb_oldref, oldrefname);
+ files_reflog_path(refs, &sb_newref, newrefname);
+ files_reflog_path(refs, &tmp_renamed_log, TMP_RENAMED_LOG);
- if (!resolve_ref_unsafe(oldrefname, RESOLVE_REF_READING | RESOLVE_REF_NO_RECURSE,
- orig_sha1, &flag))
- return error("refname %s not found", oldrefname);
+ log = !lstat(sb_oldref.buf, &loginfo);
+ if (log && S_ISLNK(loginfo.st_mode)) {
+ ret = error("reflog for %s is a symlink", oldrefname);
+ goto out;
+ }
- if (flag & REF_ISSYMREF)
- return error("refname %s is a symbolic ref, renaming it is not supported",
- oldrefname);
- if (!rename_ref_available(oldrefname, newrefname))
- return 1;
+ if (!refs_resolve_ref_unsafe(&refs->base, oldrefname,
+ RESOLVE_REF_READING | RESOLVE_REF_NO_RECURSE,
+ orig_sha1, &flag)) {
+ ret = error("refname %s not found", oldrefname);
+ goto out;
+ }
- if (log && rename(git_path("logs/%s", oldrefname), git_path(TMP_RENAMED_LOG)))
- return error("unable to move logfile logs/%s to "TMP_RENAMED_LOG": %s",
- oldrefname, strerror(errno));
+ if (flag & REF_ISSYMREF) {
+ ret = error("refname %s is a symbolic ref, renaming it is not supported",
+ oldrefname);
+ goto out;
+ }
+ if (!refs_rename_ref_available(&refs->base, oldrefname, newrefname)) {
+ ret = 1;
+ goto out;
+ }
- if (delete_ref(logmsg, oldrefname, orig_sha1, REF_NODEREF)) {
+ if (log && rename(sb_oldref.buf, tmp_renamed_log.buf)) {
+ ret = error("unable to move logfile logs/%s to logs/"TMP_RENAMED_LOG": %s",
+ oldrefname, strerror(errno));
+ goto out;
+ }
+
+ if (refs_delete_ref(&refs->base, logmsg, oldrefname,
+ orig_sha1, REF_NODEREF)) {
error("unable to delete old %s", oldrefname);
goto rollback;
}
* the safety anyway; we want to delete the reference whatever
* its current value.
*/
- if (!read_ref_full(newrefname, RESOLVE_REF_READING | RESOLVE_REF_NO_RECURSE,
- sha1, NULL) &&
- delete_ref(NULL, newrefname, NULL, REF_NODEREF)) {
+ if (!refs_read_ref_full(&refs->base, newrefname,
+ RESOLVE_REF_READING | RESOLVE_REF_NO_RECURSE,
+ sha1, NULL) &&
+ refs_delete_ref(&refs->base, NULL, newrefname,
+ NULL, REF_NODEREF)) {
if (errno == EISDIR) {
struct strbuf path = STRBUF_INIT;
int result;
- strbuf_git_path(&path, "%s", newrefname);
+ files_ref_path(refs, &path, newrefname);
result = remove_empty_directories(&path);
strbuf_release(&path);
}
}
- if (log && rename_tmp_log(newrefname))
+ if (log && rename_tmp_log(refs, newrefname))
goto rollback;
logmoved = log;
goto rollback;
}
- return 0;
+ ret = 0;
+ goto out;
rollback:
lock = lock_ref_sha1_basic(refs, oldrefname, NULL, NULL, NULL,
log_all_ref_updates = flag;
rollbacklog:
- if (logmoved && rename(git_path("logs/%s", newrefname), git_path("logs/%s", oldrefname)))
+ if (logmoved && rename(sb_newref.buf, sb_oldref.buf))
error("unable to restore logfile %s from %s: %s",
oldrefname, newrefname, strerror(errno));
if (!logmoved && log &&
- rename(git_path(TMP_RENAMED_LOG), git_path("logs/%s", oldrefname)))
- error("unable to restore logfile %s from "TMP_RENAMED_LOG": %s",
+ rename(tmp_renamed_log.buf, sb_oldref.buf))
+ error("unable to restore logfile %s from logs/"TMP_RENAMED_LOG": %s",
oldrefname, strerror(errno));
+ ret = 1;
+ out:
+ strbuf_release(&sb_newref);
+ strbuf_release(&sb_oldref);
+ strbuf_release(&tmp_renamed_log);
- return 1;
+ return ret;
}
static int close_ref(struct ref_lock *lock)
* set *logfd to -1. On failure, fill in *err, set *logfd to -1, and
* return -1.
*/
-static int log_ref_setup(const char *refname, int force_create,
+static int log_ref_setup(struct files_ref_store *refs,
+ const char *refname, int force_create,
int *logfd, struct strbuf *err)
{
- char *logfile = git_pathdup("logs/%s", refname);
+ struct strbuf logfile_sb = STRBUF_INIT;
+ char *logfile;
+
+ files_reflog_path(refs, &logfile_sb, refname);
+ logfile = strbuf_detach(&logfile_sb, NULL);
if (force_create || should_autocreate_reflog(refname)) {
if (raceproof_create_file(logfile, open_or_create_logfile, logfd)) {
const char *refname, int force_create,
struct strbuf *err)
{
+ struct files_ref_store *refs =
+ files_downcast(ref_store, REF_STORE_WRITE, "create_reflog");
int fd;
- /* Check validity (but we don't need the result): */
- files_downcast(ref_store, 0, "create_reflog");
-
- if (log_ref_setup(refname, force_create, &fd, err))
+ if (log_ref_setup(refs, refname, force_create, &fd, err))
return -1;
if (fd >= 0)
return 0;
}
-int files_log_ref_write(const char *refname, const unsigned char *old_sha1,
- const unsigned char *new_sha1, const char *msg,
- int flags, struct strbuf *err)
+static int files_log_ref_write(struct files_ref_store *refs,
+ const char *refname, const unsigned char *old_sha1,
+ const unsigned char *new_sha1, const char *msg,
+ int flags, struct strbuf *err)
{
int logfd, result;
if (log_all_ref_updates == LOG_REFS_UNSET)
log_all_ref_updates = is_bare_repository() ? LOG_REFS_NONE : LOG_REFS_NORMAL;
- result = log_ref_setup(refname, flags & REF_FORCE_CREATE_REFLOG,
+ result = log_ref_setup(refs, refname,
+ flags & REF_FORCE_CREATE_REFLOG,
&logfd, err);
if (result)
result = log_ref_write_fd(logfd, old_sha1, new_sha1,
git_committer_info(0), msg);
if (result) {
+ struct strbuf sb = STRBUF_INIT;
int save_errno = errno;
+ files_reflog_path(refs, &sb, refname);
strbuf_addf(err, "unable to append to '%s': %s",
- git_path("logs/%s", refname), strerror(save_errno));
+ sb.buf, strerror(save_errno));
+ strbuf_release(&sb);
close(logfd);
return -1;
}
if (close(logfd)) {
+ struct strbuf sb = STRBUF_INIT;
int save_errno = errno;
+ files_reflog_path(refs, &sb, refname);
strbuf_addf(err, "unable to append to '%s': %s",
- git_path("logs/%s", refname), strerror(save_errno));
+ sb.buf, strerror(save_errno));
+ strbuf_release(&sb);
return -1;
}
return 0;
files_assert_main_repository(refs, "commit_ref_update");
clear_loose_ref_cache(refs);
- if (files_log_ref_write(lock->ref_name, lock->old_oid.hash, sha1,
+ if (files_log_ref_write(refs, lock->ref_name,
+ lock->old_oid.hash, sha1,
logmsg, 0, err)) {
char *old_msg = strbuf_detach(err, NULL);
strbuf_addf(err, "cannot update the ref '%s': %s",
int head_flag;
const char *head_ref;
- head_ref = resolve_ref_unsafe("HEAD", RESOLVE_REF_READING,
- head_sha1, &head_flag);
+ head_ref = refs_resolve_ref_unsafe(&refs->base, "HEAD",
+ RESOLVE_REF_READING,
+ head_sha1, &head_flag);
if (head_ref && (head_flag & REF_ISSYMREF) &&
!strcmp(head_ref, lock->ref_name)) {
struct strbuf log_err = STRBUF_INIT;
- if (files_log_ref_write("HEAD", lock->old_oid.hash, sha1,
- logmsg, 0, &log_err)) {
+ if (files_log_ref_write(refs, "HEAD",
+ lock->old_oid.hash, sha1,
+ logmsg, 0, &log_err)) {
error("%s", log_err.buf);
strbuf_release(&log_err);
}
return ret;
}
-static void update_symref_reflog(struct ref_lock *lock, const char *refname,
+static void update_symref_reflog(struct files_ref_store *refs,
+ struct ref_lock *lock, const char *refname,
const char *target, const char *logmsg)
{
struct strbuf err = STRBUF_INIT;
unsigned char new_sha1[20];
- if (logmsg && !read_ref(target, new_sha1) &&
- files_log_ref_write(refname, lock->old_oid.hash, new_sha1,
- logmsg, 0, &err)) {
+ if (logmsg &&
+ !refs_read_ref_full(&refs->base, target,
+ RESOLVE_REF_READING, new_sha1, NULL) &&
+ files_log_ref_write(refs, refname, lock->old_oid.hash,
+ new_sha1, logmsg, 0, &err)) {
error("%s", err.buf);
strbuf_release(&err);
}
}
-static int create_symref_locked(struct ref_lock *lock, const char *refname,
+static int create_symref_locked(struct files_ref_store *refs,
+ struct ref_lock *lock, const char *refname,
const char *target, const char *logmsg)
{
if (prefer_symlink_refs && !create_ref_symlink(lock, target)) {
- update_symref_reflog(lock, refname, target, logmsg);
+ update_symref_reflog(refs, lock, refname, target, logmsg);
return 0;
}
return error("unable to fdopen %s: %s",
lock->lk->tempfile.filename.buf, strerror(errno));
- update_symref_reflog(lock, refname, target, logmsg);
+ update_symref_reflog(refs, lock, refname, target, logmsg);
/* no error check; commit_ref will check ferror */
fprintf(lock->lk->tempfile.fp, "ref: %s\n", target);
const char *logmsg)
{
struct files_ref_store *refs =
- files_downcast(ref_store, 0, "create_symref");
+ files_downcast(ref_store, REF_STORE_WRITE, "create_symref");
struct strbuf err = STRBUF_INIT;
struct ref_lock *lock;
int ret;
return -1;
}
- ret = create_symref_locked(lock, refname, target, logmsg);
+ ret = create_symref_locked(refs, lock, refname, target, logmsg);
unlock_ref(lock);
return ret;
}
int set_worktree_head_symref(const char *gitdir, const char *target, const char *logmsg)
{
+ /*
+ * FIXME: this obviously will not work well for future refs
+ * backends. This function needs to die.
+ */
+ struct files_ref_store *refs =
+ files_downcast(get_main_ref_store(),
+ REF_STORE_WRITE,
+ "set_head_symref");
+
static struct lock_file head_lock;
struct ref_lock *lock;
struct strbuf head_path = STRBUF_INIT;
lock->lk = &head_lock;
lock->ref_name = xstrdup(head_rel);
- ret = create_symref_locked(lock, head_rel, target, logmsg);
+ ret = create_symref_locked(refs, lock, head_rel, target, logmsg);
unlock_ref(lock); /* will free lock */
strbuf_release(&head_path);
static int files_reflog_exists(struct ref_store *ref_store,
const char *refname)
{
+ struct files_ref_store *refs =
+ files_downcast(ref_store, REF_STORE_READ, "reflog_exists");
+ struct strbuf sb = STRBUF_INIT;
struct stat st;
+ int ret;
- /* Check validity (but we don't need the result): */
- files_downcast(ref_store, 0, "reflog_exists");
-
- return !lstat(git_path("logs/%s", refname), &st) &&
- S_ISREG(st.st_mode);
+ files_reflog_path(refs, &sb, refname);
+ ret = !lstat(sb.buf, &st) && S_ISREG(st.st_mode);
+ strbuf_release(&sb);
+ return ret;
}
static int files_delete_reflog(struct ref_store *ref_store,
const char *refname)
{
- /* Check validity (but we don't need the result): */
- files_downcast(ref_store, 0, "delete_reflog");
+ struct files_ref_store *refs =
+ files_downcast(ref_store, REF_STORE_WRITE, "delete_reflog");
+ struct strbuf sb = STRBUF_INIT;
+ int ret;
- return remove_path(git_path("logs/%s", refname));
+ files_reflog_path(refs, &sb, refname);
+ ret = remove_path(sb.buf);
+ strbuf_release(&sb);
+ return ret;
}
static int show_one_reflog_ent(struct strbuf *sb, each_reflog_ent_fn fn, void *cb_data)
each_reflog_ent_fn fn,
void *cb_data)
{
+ struct files_ref_store *refs =
+ files_downcast(ref_store, REF_STORE_READ,
+ "for_each_reflog_ent_reverse");
struct strbuf sb = STRBUF_INIT;
FILE *logfp;
long pos;
int ret = 0, at_tail = 1;
- /* Check validity (but we don't need the result): */
- files_downcast(ref_store, 0, "for_each_reflog_ent_reverse");
-
- logfp = fopen(git_path("logs/%s", refname), "r");
+ files_reflog_path(refs, &sb, refname);
+ logfp = fopen(sb.buf, "r");
+ strbuf_release(&sb);
if (!logfp)
return -1;
/* Jump to the end */
if (fseek(logfp, 0, SEEK_END) < 0)
- return error("cannot seek back reflog for %s: %s",
- refname, strerror(errno));
+ ret = error("cannot seek back reflog for %s: %s",
+ refname, strerror(errno));
pos = ftell(logfp);
while (!ret && 0 < pos) {
int cnt;
/* Fill next block from the end */
cnt = (sizeof(buf) < pos) ? sizeof(buf) : pos;
- if (fseek(logfp, pos - cnt, SEEK_SET))
- return error("cannot seek back reflog for %s: %s",
- refname, strerror(errno));
+ if (fseek(logfp, pos - cnt, SEEK_SET)) {
+ ret = error("cannot seek back reflog for %s: %s",
+ refname, strerror(errno));
+ break;
+ }
nread = fread(buf, cnt, 1, logfp);
- if (nread != 1)
- return error("cannot read %d bytes from reflog for %s: %s",
- cnt, refname, strerror(errno));
+ if (nread != 1) {
+ ret = error("cannot read %d bytes from reflog for %s: %s",
+ cnt, refname, strerror(errno));
+ break;
+ }
pos -= cnt;
scanp = endp = buf + cnt;
const char *refname,
each_reflog_ent_fn fn, void *cb_data)
{
+ struct files_ref_store *refs =
+ files_downcast(ref_store, REF_STORE_READ,
+ "for_each_reflog_ent");
FILE *logfp;
struct strbuf sb = STRBUF_INIT;
int ret = 0;
- /* Check validity (but we don't need the result): */
- files_downcast(ref_store, 0, "for_each_reflog_ent");
-
- logfp = fopen(git_path("logs/%s", refname), "r");
+ files_reflog_path(refs, &sb, refname);
+ logfp = fopen(sb.buf, "r");
+ strbuf_release(&sb);
if (!logfp)
return -1;
struct files_reflog_iterator {
struct ref_iterator base;
+ struct ref_store *ref_store;
struct dir_iterator *dir_iterator;
struct object_id oid;
};
if (ends_with(diter->basename, ".lock"))
continue;
- if (read_ref_full(diter->relative_path, 0,
- iter->oid.hash, &flags)) {
+ if (refs_read_ref_full(iter->ref_store,
+ diter->relative_path, 0,
+ iter->oid.hash, &flags)) {
error("bad ref for %s", diter->path.buf);
continue;
}
static struct ref_iterator *files_reflog_iterator_begin(struct ref_store *ref_store)
{
+ struct files_ref_store *refs =
+ files_downcast(ref_store, REF_STORE_READ,
+ "reflog_iterator_begin");
struct files_reflog_iterator *iter = xcalloc(1, sizeof(*iter));
struct ref_iterator *ref_iterator = &iter->base;
-
- /* Check validity (but we don't need the result): */
- files_downcast(ref_store, 0, "reflog_iterator_begin");
+ struct strbuf sb = STRBUF_INIT;
base_ref_iterator_init(ref_iterator, &files_reflog_iterator_vtable);
- iter->dir_iterator = dir_iterator_begin(git_path("logs"));
+ files_reflog_path(refs, &sb, NULL);
+ iter->dir_iterator = dir_iterator_begin(sb.buf);
+ iter->ref_store = ref_store;
+ strbuf_release(&sb);
return ref_iterator;
}
* the transaction, so we have to read it here
* to record and possibly check old_sha1:
*/
- if (read_ref_full(referent.buf, 0,
- lock->old_oid.hash, NULL)) {
+ if (refs_read_ref_full(&refs->base,
+ referent.buf, 0,
+ lock->old_oid.hash, NULL)) {
if (update->flags & REF_HAVE_OLD) {
strbuf_addf(err, "cannot lock ref '%s': "
"error reading reference",
struct strbuf *err)
{
struct files_ref_store *refs =
- files_downcast(ref_store, 0, "ref_transaction_commit");
+ files_downcast(ref_store, REF_STORE_WRITE,
+ "ref_transaction_commit");
int ret = 0, i;
struct string_list refs_to_delete = STRING_LIST_INIT_NODUP;
struct string_list_item *ref_to_delete;
char *head_ref = NULL;
int head_type;
struct object_id head_oid;
+ struct strbuf sb = STRBUF_INIT;
assert(err);
* head_ref within the transaction, then split_head_update()
* arranges for the reflog of HEAD to be updated, too.
*/
- head_ref = resolve_refdup("HEAD", RESOLVE_REF_NO_RECURSE,
- head_oid.hash, &head_type);
+ head_ref = refs_resolve_refdup(ref_store, "HEAD",
+ RESOLVE_REF_NO_RECURSE,
+ head_oid.hash, &head_type);
if (head_ref && !(head_type & REF_ISSYMREF)) {
free(head_ref);
if (update->flags & REF_NEEDS_COMMIT ||
update->flags & REF_LOG_ONLY) {
- if (files_log_ref_write(lock->ref_name,
+ if (files_log_ref_write(refs,
+ lock->ref_name,
lock->old_oid.hash,
update->new_sha1,
update->msg, update->flags,
if (!(update->type & REF_ISPACKED) ||
update->type & REF_ISSYMREF) {
/* It is a loose reference. */
- if (unlink_or_msg(git_path("%s", lock->ref_name), err)) {
+ strbuf_reset(&sb);
+ files_ref_path(refs, &sb, lock->ref_name);
+ if (unlink_or_msg(sb.buf, err)) {
ret = TRANSACTION_GENERIC_ERROR;
goto cleanup;
}
/* Delete the reflogs of any references that were deleted: */
for_each_string_list_item(ref_to_delete, &refs_to_delete) {
- if (!unlink_or_warn(git_path("logs/%s", ref_to_delete->string)))
- try_remove_empty_parents(ref_to_delete->string,
+ strbuf_reset(&sb);
+ files_reflog_path(refs, &sb, ref_to_delete->string);
+ if (!unlink_or_warn(sb.buf))
+ try_remove_empty_parents(refs, ref_to_delete->string,
REMOVE_EMPTY_PARENTS_REFLOG);
}
clear_loose_ref_cache(refs);
cleanup:
+ strbuf_release(&sb);
transaction->state = REF_TRANSACTION_CLOSED;
for (i = 0; i < transaction->nr; i++) {
* can only work because we have already
* removed the lockfile.)
*/
- try_remove_empty_parents(update->refname,
+ try_remove_empty_parents(refs, update->refname,
REMOVE_EMPTY_PARENTS_REF);
}
}
struct strbuf *err)
{
struct files_ref_store *refs =
- files_downcast(ref_store, 0, "initial_ref_transaction_commit");
+ files_downcast(ref_store, REF_STORE_WRITE,
+ "initial_ref_transaction_commit");
int ret = 0, i;
struct string_list affected_refnames = STRING_LIST_INIT_NODUP;
* so here we really only check that none of the references
* that we are creating already exists.
*/
- if (for_each_rawref(ref_present, &affected_refnames))
+ if (refs_for_each_rawref(&refs->base, ref_present,
+ &affected_refnames))
die("BUG: initial ref transaction called with existing refs");
for (i = 0; i < transaction->nr; i++) {
if ((update->flags & REF_HAVE_OLD) &&
!is_null_sha1(update->old_sha1))
die("BUG: initial ref transaction with old_sha1 set");
- if (verify_refname_available(update->refname,
- &affected_refnames, NULL,
- err)) {
+ if (refs_verify_refname_available(&refs->base, update->refname,
+ &affected_refnames, NULL,
+ err)) {
ret = TRANSACTION_NAME_CONFLICT;
goto cleanup;
}
void *policy_cb_data)
{
struct files_ref_store *refs =
- files_downcast(ref_store, 0, "reflog_expire");
+ files_downcast(ref_store, REF_STORE_WRITE, "reflog_expire");
static struct lock_file reflog_lock;
struct expire_reflog_cb cb;
struct ref_lock *lock;
+ struct strbuf log_file_sb = STRBUF_INIT;
char *log_file;
int status = 0;
int type;
strbuf_release(&err);
return -1;
}
- if (!reflog_exists(refname)) {
+ if (!refs_reflog_exists(ref_store, refname)) {
unlock_ref(lock);
return 0;
}
- log_file = git_pathdup("logs/%s", refname);
+ files_reflog_path(refs, &log_file_sb, refname);
+ log_file = strbuf_detach(&log_file_sb, NULL);
if (!(flags & EXPIRE_REFLOGS_DRY_RUN)) {
/*
* Even though holding $GIT_DIR/logs/$reflog.lock has
}
(*prepare_fn)(refname, sha1, cb.policy_cb);
- for_each_reflog_ent(refname, expire_reflog_ent, &cb);
+ refs_for_each_reflog_ent(ref_store, refname, expire_reflog_ent, &cb);
(*cleanup_fn)(cb.policy_cb);
if (!(flags & EXPIRE_REFLOGS_DRY_RUN)) {
static int files_init_db(struct ref_store *ref_store, struct strbuf *err)
{
- /* Check validity (but we don't need the result): */
- files_downcast(ref_store, 0, "init_db");
+ struct files_ref_store *refs =
+ files_downcast(ref_store, REF_STORE_WRITE, "init_db");
+ struct strbuf sb = STRBUF_INIT;
/*
* Create .git/refs/{heads,tags}
*/
- safe_create_dir(git_path("refs/heads"), 1);
- safe_create_dir(git_path("refs/tags"), 1);
- if (get_shared_repository()) {
- adjust_shared_perm(git_path("refs/heads"));
- adjust_shared_perm(git_path("refs/tags"));
- }
+ files_ref_path(refs, &sb, "refs/heads");
+ safe_create_dir(sb.buf, 1);
+
+ strbuf_reset(&sb);
+ files_ref_path(refs, &sb, "refs/tags");
+ safe_create_dir(sb.buf, 1);
+
+ strbuf_release(&sb);
return 0;
}
*/
enum peel_status peel_object(const unsigned char *name, unsigned char *sha1);
-/*
- * Return 0 if a reference named refname could be created without
- * conflicting with the name of an existing reference. Otherwise,
- * return a negative value and write an explanation to err. If extras
- * is non-NULL, it is a list of additional refnames with which refname
- * is not allowed to conflict. If skip is non-NULL, ignore potential
- * conflicts with refs in skip (e.g., because they are scheduled for
- * deletion in the same operation). Behavior is undefined if the same
- * name is listed in both extras and skip.
- *
- * Two reference names conflict if one of them exactly matches the
- * leading components of the other; e.g., "foo/bar" conflicts with
- * both "foo" and with "foo/bar/baz" but not with "foo/bar" or
- * "foo/barbados".
- *
- * extras and skip must be sorted.
- */
-int verify_refname_available(const char *newname,
- const struct string_list *extras,
- const struct string_list *skip,
- struct strbuf *err);
-
/*
* Copy the reflog message msg to buf, which has been allocated sufficiently
* large, while cleaning up the whitespaces. Especially, convert LF to space,
* as atomically as possible. This structure is opaque to callers.
*/
struct ref_transaction {
+ struct ref_store *ref_store;
struct ref_update **updates;
size_t alloc;
size_t nr;
enum ref_transaction_state state;
};
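With the owning ref_store recorded in the transaction, updates flow through the store-aware API such as ref_store_transaction_begin() seen in prune_ref() above. A hedged sketch of a complete update through an explicit store; the helper name and the reflog message are illustrative:

static int update_one_ref(struct ref_store *refs, const char *refname,
			  const unsigned char *new_sha1,
			  const unsigned char *old_sha1)
{
	struct strbuf err = STRBUF_INIT;
	struct ref_transaction *tr = ref_store_transaction_begin(refs, &err);
	int ret = -1;

	if (tr &&
	    !ref_transaction_update(tr, refname, new_sha1, old_sha1,
				    0, "example: update", &err) &&
	    !ref_transaction_commit(tr, &err))
		ret = 0;
	else
		error("%s", err.buf);

	ref_transaction_free(tr);
	strbuf_release(&err);
	return ret;
}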
-int files_log_ref_write(const char *refname, const unsigned char *old_sha1,
- const unsigned char *new_sha1, const char *msg,
- int flags, struct strbuf *err);
-
/*
* Check for entries in extras that are within the specified
* directory, where dirname is a reference directory name including
* processes (though rename_ref() catches some races that might get by
* this check).
*/
-int rename_ref_available(const char *old_refname, const char *new_refname);
+int refs_rename_ref_available(struct ref_store *refs,
+ const char *old_refname,
+ const char *new_refname);
/* We allow "recursive" symbolic refs. Only within reason, though */
#define SYMREF_MAXDEPTH 5
/* refs backends */
+/* ref_store_init flags */
+#define REF_STORE_READ (1 << 0)
+#define REF_STORE_WRITE (1 << 1) /* can perform update operations */
+#define REF_STORE_ODB (1 << 2) /* has access to object database */
+#define REF_STORE_MAIN (1 << 3)
+
/*
- * Initialize the ref_store for the specified submodule, or for the
- * main repository if submodule == NULL. These functions should call
- * base_ref_store_init() to initialize the shared part of the
- * ref_store and to record the ref_store for later lookup.
+ * Initialize the ref_store for the specified gitdir. These functions
+ * should call base_ref_store_init() to initialize the shared part of
+ * the ref_store and to record the ref_store for later lookup.
*/
-typedef struct ref_store *ref_store_init_fn(const char *submodule);
+typedef struct ref_store *ref_store_init_fn(const char *gitdir,
+ unsigned int flags);
typedef int ref_init_db_fn(struct ref_store *refs, struct strbuf *err);
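For a backend author, the new init signature means the store is handed its gitdir and its capability flags up front. A hedged sketch of what an implementation might look like; the "example" backend, its struct and its refs_be_example vtable are hypothetical:

struct example_ref_store {
	struct ref_store base;
	unsigned int store_flags;
	char *gitdir;
};

static struct ref_store *example_ref_store_create(const char *gitdir,
						  unsigned int flags)
{
	struct example_ref_store *refs = xcalloc(1, sizeof(*refs));

	base_ref_store_init(&refs->base, &refs_be_example);
	refs->store_flags = flags;
	refs->gitdir = xstrdup(gitdir);
	return &refs->base;
}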
void base_ref_store_init(struct ref_store *refs,
const struct ref_storage_be *be);
-/*
- * Return the ref_store instance for the specified submodule. For the
- * main repository, use submodule==NULL; such a call cannot fail. For
- * a submodule, the submodule must exist and be a nonbare repository,
- * otherwise return NULL. If the requested reference store has not yet
- * been initialized, initialize it first.
- *
- * For backwards compatibility, submodule=="" is treated the same as
- * submodule==NULL.
- */
-struct ref_store *get_ref_store(const char *submodule);
-
-const char *resolve_ref_recursively(struct ref_store *refs,
- const char *refname,
- int resolve_flags,
- unsigned char *sha1, int *flags);
-
#endif /* REFS_REFS_INTERNAL_H */
char *buf;
size_t len;
struct ref *refs;
- struct sha1_array shallow;
+ struct oid_array shallow;
unsigned proto_git : 1;
};
static struct discovery *last_discovery;
if (d) {
if (d == last_discovery)
last_discovery = NULL;
- free(d->shallow.sha1);
+ free(d->shallow.oid);
free(d->buf_alloc);
free_refs(d->refs);
free(d);
return err;
}
+static curl_off_t xcurl_off_t(ssize_t len)
+{
+ if (len > maximum_signed_value_of_type(curl_off_t))
+ die("cannot handle pushes this big");
+ return (curl_off_t) len;
+}
+
static int post_rpc(struct rpc_state *rpc)
{
struct active_request_slot *slot;
* and we just need to send it.
*/
curl_easy_setopt(slot->curl, CURLOPT_POSTFIELDS, gzip_body);
- curl_easy_setopt(slot->curl, CURLOPT_POSTFIELDSIZE, gzip_size);
+ curl_easy_setopt(slot->curl, CURLOPT_POSTFIELDSIZE_LARGE, xcurl_off_t(gzip_size));
} else if (use_gzip && 1024 < rpc->len) {
/* The client backend isn't giving us compressed data so
headers = curl_slist_append(headers, "Content-Encoding: gzip");
curl_easy_setopt(slot->curl, CURLOPT_POSTFIELDS, gzip_body);
- curl_easy_setopt(slot->curl, CURLOPT_POSTFIELDSIZE, gzip_size);
+ curl_easy_setopt(slot->curl, CURLOPT_POSTFIELDSIZE_LARGE, xcurl_off_t(gzip_size));
if (options.verbosity > 1) {
fprintf(stderr, "POST %s (gzip %lu to %lu bytes)\n",
* more normal Content-Length approach.
*/
curl_easy_setopt(slot->curl, CURLOPT_POSTFIELDS, rpc->buf);
- curl_easy_setopt(slot->curl, CURLOPT_POSTFIELDSIZE, rpc->len);
+ curl_easy_setopt(slot->curl, CURLOPT_POSTFIELDSIZE_LARGE, xcurl_off_t(rpc->len));
if (options.verbosity > 1) {
fprintf(stderr, "POST %s (%lu bytes)\n",
rpc->service_name, (unsigned long)rpc->len);
return parse_refspec_internal(nr_refspec, refspec, 1, 0);
}
-static struct refspec *parse_push_refspec(int nr_refspec, const char **refspec)
+struct refspec *parse_push_refspec(int nr_refspec, const char **refspec)
{
return parse_refspec_internal(nr_refspec, refspec, 0, 0);
}
return entry;
}
-int parse_push_cas_option(struct push_cas_option *cas, const char *arg, int unset)
+static int parse_push_cas_option(struct push_cas_option *cas, const char *arg, int unset)
{
const char *colon;
struct push_cas *entry;
*/
void free_refs(struct ref *ref);
-struct sha1_array;
+struct oid_array;
extern struct ref **get_remote_heads(int in, char *src_buf, size_t src_len,
struct ref **list, unsigned int flags,
- struct sha1_array *extra_have,
- struct sha1_array *shallow);
+ struct oid_array *extra_have,
+ struct oid_array *shallow);
int resolve_remote_symref(struct ref *ref, struct ref *list);
int ref_newer(const struct object_id *new_oid, const struct object_id *old_oid);
int valid_fetch_refspec(const char *refspec);
struct refspec *parse_fetch_refspec(int nr_refspec, const char **refspec);
+extern struct refspec *parse_push_refspec(int nr_refspec, const char **refspec);
void free_refspec(int nr_refspec, struct refspec *refspec);
};
extern int parseopt_push_cas_option(const struct option *, const char *arg, int unset);
-extern int parse_push_cas_option(struct push_cas_option *, const char *arg, int unset);
extern int is_empty_cas(const struct push_cas_option *);
void apply_push_cas(struct push_cas_option *, struct remote *, struct ref *);
/*
* Make a pack stream and spit it out into file descriptor fd
*/
-static int pack_objects(int fd, struct ref *refs, struct sha1_array *extra, struct send_pack_args *args)
+static int pack_objects(int fd, struct ref *refs, struct oid_array *extra, struct send_pack_args *args)
{
/*
* The child becomes pack-objects --revs; we feed
*/
po_in = xfdopen(po.in, "w");
for (i = 0; i < extra->nr; i++)
- feed_object(extra->sha1[i], po_in, 1);
+ feed_object(extra->oid[i].hash, po_in, 1);
while (refs) {
if (!is_null_oid(&refs->old_oid))
int send_pack(struct send_pack_args *args,
int fd[], struct child_process *conn,
struct ref *remote_refs,
- struct sha1_array *extra_have)
+ struct oid_array *extra_have)
{
int in = fd[0];
int out = fd[1];
int send_pack(struct send_pack_args *args,
int fd[], struct child_process *conn,
- struct ref *remote_refs, struct sha1_array *extra_have);
+ struct ref *remote_refs, struct oid_array *extra_have);
#endif
flags |= CLEANUP_MSG;
msg_file = rebase_path_fixup_msg();
} else {
- const char *dest = git_path("SQUASH_MSG");
+ const char *dest = git_path_squash_msg();
unlink(dest);
if (copy_file(dest, rebase_path_squash_msg(), 0666))
return error(_("could not rename '%s' to '%s'"),
rebase_path_squash_msg(), dest);
- unlink(git_path("MERGE_MSG"));
+ unlink(git_path_merge_msg());
msg_file = dest;
flags |= EDIT_MSG;
}
return error(_("could not rename '%s' to '%s'"),
rebase_path_squash_msg(), rebase_path_message());
unlink(rebase_path_fixup_msg());
- unlink(git_path("MERGE_MSG"));
- if (copy_file(git_path("MERGE_MSG"), rebase_path_message(), 0666))
+ unlink(git_path_merge_msg());
+ if (copy_file(git_path_merge_msg(), rebase_path_message(), 0666))
return error(_("could not copy '%s' to '%s'"),
- rebase_path_message(), git_path("MERGE_MSG"));
+ rebase_path_message(), git_path_merge_msg());
return error_with_patch(commit, subject, subject_len, opts, 1, 0);
}
if (has_unstaged_changes(1))
return error(_("cannot rebase: You have unstaged changes."));
if (!has_uncommitted_changes(0)) {
- const char *cherry_pick_head = git_path("CHERRY_PICK_HEAD");
+ const char *cherry_pick_head = git_path_cherry_pick_head();
if (file_exists(cherry_pick_head) && unlink(cherry_pick_head))
return error(_("could not remove CHERRY_PICK_HEAD"));
char *tmp = mkpathdup("%s_XXXXXX", path);
int ret = -1;
int fd = -1;
- FILE *fp = NULL;
+ FILE *fp = NULL, *to_close;
safe_create_leading_directories(path);
fd = git_mkstemp_mode(tmp, 0666);
if (fd < 0)
goto out;
- fp = fdopen(fd, "w");
+ to_close = fp = fdopen(fd, "w");
if (!fp)
goto out;
+ fd = -1;
ret = generate(fp);
if (ret)
goto out;
- if (fclose(fp))
+ fp = NULL;
+ if (fclose(to_close))
goto out;
if (adjust_shared_perm(tmp) < 0)
goto out;
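The to_close dance above enforces a simple ownership rule: once fdopen() succeeds, the FILE* owns the descriptor, so the cleanup path must close exactly one of the two handles, exactly once, and must never touch the stream again after fclose(). A hedged sketch of the same rule in a stand-alone helper; write_tempfile and fill are illustrative names:

static int write_tempfile(char *tmp, int (*fill)(FILE *))
{
	int fd = git_mkstemp_mode(tmp, 0666);
	FILE *fp;

	if (fd < 0)
		return -1;
	fp = fdopen(fd, "w");
	if (!fp) {
		close(fd);		/* fdopen failed: fd is still ours */
		return -1;
	}
	if (fill(fp)) {			/* fill() returns nonzero on error */
		fclose(fp);		/* closes the underlying fd too */
		return -1;
	}
	return fclose(fp) ? -1 : 0;	/* never touch fp after this */
}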
#include "sha1-array.h"
#include "sha1-lookup.h"
-void sha1_array_append(struct sha1_array *array, const unsigned char *sha1)
+void oid_array_append(struct oid_array *array, const struct object_id *oid)
{
- ALLOC_GROW(array->sha1, array->nr + 1, array->alloc);
- hashcpy(array->sha1[array->nr++], sha1);
+ ALLOC_GROW(array->oid, array->nr + 1, array->alloc);
+ oidcpy(&array->oid[array->nr++], oid);
array->sorted = 0;
}
static int void_hashcmp(const void *a, const void *b)
{
- return hashcmp(a, b);
+ return oidcmp(a, b);
}
-static void sha1_array_sort(struct sha1_array *array)
+static void oid_array_sort(struct oid_array *array)
{
- QSORT(array->sha1, array->nr, void_hashcmp);
+ QSORT(array->oid, array->nr, void_hashcmp);
array->sorted = 1;
}
static const unsigned char *sha1_access(size_t index, void *table)
{
- unsigned char (*array)[20] = table;
- return array[index];
+ struct object_id *array = table;
+ return array[index].hash;
}
-int sha1_array_lookup(struct sha1_array *array, const unsigned char *sha1)
+int oid_array_lookup(struct oid_array *array, const struct object_id *oid)
{
if (!array->sorted)
- sha1_array_sort(array);
- return sha1_pos(sha1, array->sha1, array->nr, sha1_access);
+ oid_array_sort(array);
+ return sha1_pos(oid->hash, array->oid, array->nr, sha1_access);
}
-void sha1_array_clear(struct sha1_array *array)
+void oid_array_clear(struct oid_array *array)
{
- free(array->sha1);
- array->sha1 = NULL;
+ free(array->oid);
+ array->oid = NULL;
array->nr = 0;
array->alloc = 0;
array->sorted = 0;
}
-int sha1_array_for_each_unique(struct sha1_array *array,
- for_each_sha1_fn fn,
+int oid_array_for_each_unique(struct oid_array *array,
+ for_each_oid_fn fn,
void *data)
{
int i;
if (!array->sorted)
- sha1_array_sort(array);
+ oid_array_sort(array);
for (i = 0; i < array->nr; i++) {
int ret;
- if (i > 0 && !hashcmp(array->sha1[i], array->sha1[i-1]))
+ if (i > 0 && !oidcmp(array->oid + i, array->oid + i - 1))
continue;
- ret = fn(array->sha1[i], data);
+ ret = fn(array->oid + i, data);
if (ret)
return ret;
}
#ifndef SHA1_ARRAY_H
#define SHA1_ARRAY_H
-struct sha1_array {
- unsigned char (*sha1)[20];
+struct oid_array {
+ struct object_id *oid;
int nr;
int alloc;
int sorted;
};
-#define SHA1_ARRAY_INIT { NULL, 0, 0, 0 }
+#define OID_ARRAY_INIT { NULL, 0, 0, 0 }
-void sha1_array_append(struct sha1_array *array, const unsigned char *sha1);
-int sha1_array_lookup(struct sha1_array *array, const unsigned char *sha1);
-void sha1_array_clear(struct sha1_array *array);
+void oid_array_append(struct oid_array *array, const struct object_id *oid);
+int oid_array_lookup(struct oid_array *array, const struct object_id *oid);
+void oid_array_clear(struct oid_array *array);
-typedef int (*for_each_sha1_fn)(const unsigned char sha1[20],
- void *data);
-int sha1_array_for_each_unique(struct sha1_array *array,
- for_each_sha1_fn fn,
+typedef int (*for_each_oid_fn)(const struct object_id *oid,
+ void *data);
+int oid_array_for_each_unique(struct oid_array *array,
+ for_each_oid_fn fn,
void *data);
#endif /* SHA1_ARRAY_H */
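For reference, a minimal sketch of the renamed API in use, mirroring how sha1_array call sites are converted throughout the series; the callback and the demo function are illustrative:

static int print_oid(const struct object_id *oid, void *data)
{
	printf("%s\n", oid_to_hex(oid));
	return 0;
}

static void demo_oid_array(const struct object_id *a, const struct object_id *b)
{
	struct oid_array arr = OID_ARRAY_INIT;

	oid_array_append(&arr, a);
	oid_array_append(&arr, b);
	oid_array_append(&arr, a);		/* duplicates are fine */

	if (oid_array_lookup(&arr, b) >= 0)	/* sorts lazily on first lookup */
		oid_array_for_each_unique(&arr, print_oid, NULL);

	oid_array_clear(&arr);
}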
if (!hashcmp(sha1, p->bad_object_sha1 + GIT_SHA1_RAWSZ * i))
return;
p->bad_object_sha1 = xrealloc(p->bad_object_sha1,
- st_mult(GIT_SHA1_RAWSZ,
+ st_mult(GIT_MAX_RAWSZ,
st_add(p->num_bad_objects, 1)));
hashcpy(p->bad_object_sha1 + GIT_SHA1_RAWSZ * p->num_bad_objects, sha1);
p->num_bad_objects++;
if (status && oi->typep)
*oi->typep = status;
strbuf_release(&hdrbuf);
- return 0;
+ return (status < 0) ? status : 0;
}
int sha1_object_info_extended(const unsigned char *sha1, struct object_info *oi, unsigned flags)
{
struct pack_entry e;
+ if (!startup_info->have_repository)
+ return 0;
if (find_pack_entry(sha1, &e))
return 1;
if (has_loose_object(sha1))
strbuf_addf(path, "/%s", de->d_name);
if (strlen(de->d_name) == GIT_SHA1_HEXSZ - 2) {
- char hex[GIT_SHA1_HEXSZ+1];
+ char hex[GIT_MAX_HEXSZ+1];
struct object_id oid;
- snprintf(hex, sizeof(hex), "%02x%s",
- subdir_nr, de->d_name);
+ xsnprintf(hex, sizeof(hex), "%02x%s",
+ subdir_nr, de->d_name);
if (!get_oid_hex(hex, &oid)) {
if (obj_cb) {
r = obj_cb(&oid, path->buf, data);
const unsigned char *expected_sha1)
{
git_SHA_CTX c;
- unsigned char real_sha1[GIT_SHA1_RAWSZ];
+ unsigned char real_sha1[GIT_MAX_RAWSZ];
unsigned char buf[4096];
unsigned long total_read;
int status = Z_OK;
void **contents)
{
int ret = -1;
- int fd = -1;
void *map = NULL;
unsigned long mapsize;
git_zstream stream;
out:
if (map)
munmap(map, mapsize);
- if (fd >= 0)
- close(fd);
return ret;
}
static int get_sha1_oneline(const char *, unsigned char *, struct commit_list *);
-typedef int (*disambiguate_hint_fn)(const unsigned char *, void *);
+typedef int (*disambiguate_hint_fn)(const struct object_id *, void *);
struct disambiguate_state {
int len; /* length of prefix in hex chars */
- char hex_pfx[GIT_SHA1_HEXSZ + 1];
- unsigned char bin_pfx[GIT_SHA1_RAWSZ];
+ char hex_pfx[GIT_MAX_HEXSZ + 1];
+ struct object_id bin_pfx;
disambiguate_hint_fn fn;
void *cb_data;
- unsigned char candidate[GIT_SHA1_RAWSZ];
+ struct object_id candidate;
unsigned candidate_exists:1;
unsigned candidate_checked:1;
unsigned candidate_ok:1;
unsigned always_call_fn:1;
};
-static void update_candidates(struct disambiguate_state *ds, const unsigned char *current)
+static void update_candidates(struct disambiguate_state *ds, const struct object_id *current)
{
if (ds->always_call_fn) {
ds->ambiguous = ds->fn(current, ds->cb_data) ? 1 : 0;
}
if (!ds->candidate_exists) {
/* this is the first candidate */
- hashcpy(ds->candidate, current);
+ oidcpy(&ds->candidate, current);
ds->candidate_exists = 1;
return;
- } else if (!hashcmp(ds->candidate, current)) {
+ } else if (!oidcmp(&ds->candidate, current)) {
/* the same as what we already have seen */
return;
}
}
if (!ds->candidate_checked) {
- ds->candidate_ok = ds->fn(ds->candidate, ds->cb_data);
+ ds->candidate_ok = ds->fn(&ds->candidate, ds->cb_data);
ds->disambiguate_fn_used = 1;
ds->candidate_checked = 1;
}
if (!ds->candidate_ok) {
/* discard the candidate; we know it does not satisfy fn */
- hashcpy(ds->candidate, current);
+ oidcpy(&ds->candidate, current);
ds->candidate_checked = 0;
return;
}
static void find_short_object_filename(struct disambiguate_state *ds)
{
struct alternate_object_database *alt;
- char hex[GIT_SHA1_HEXSZ];
+ char hex[GIT_MAX_HEXSZ];
static struct alternate_object_database *fakeent;
if (!fakeent) {
continue;
while (!ds->ambiguous && (de = readdir(dir)) != NULL) {
- unsigned char sha1[20];
+ struct object_id oid;
- if (strlen(de->d_name) != 38)
+ if (strlen(de->d_name) != GIT_SHA1_HEXSZ - 2)
continue;
if (memcmp(de->d_name, ds->hex_pfx + 2, ds->len - 2))
continue;
- memcpy(hex + 2, de->d_name, 38);
- if (!get_sha1_hex(hex, sha1))
- update_candidates(ds, sha1);
+ memcpy(hex + 2, de->d_name, GIT_SHA1_HEXSZ - 2);
+ if (!get_oid_hex(hex, &oid))
+ update_candidates(ds, &oid);
}
closedir(dir);
}
struct disambiguate_state *ds)
{
uint32_t num, last, i, first = 0;
- const unsigned char *current = NULL;
+ const struct object_id *current = NULL;
open_pack_index(p);
num = p->num_objects;
int cmp;
current = nth_packed_object_sha1(p, mid);
- cmp = hashcmp(ds->bin_pfx, current);
+ cmp = hashcmp(ds->bin_pfx.hash, current);
if (!cmp) {
first = mid;
break;
* 0, 1 or more objects that actually match(es).
*/
for (i = first; i < num && !ds->ambiguous; i++) {
- current = nth_packed_object_sha1(p, i);
- if (!match_sha(ds->len, ds->bin_pfx, current))
+ struct object_id oid;
+ current = nth_packed_object_oid(&oid, p, i);
+ if (!match_sha(ds->len, ds->bin_pfx.hash, current->hash))
break;
update_candidates(ds, current);
}
* same repository!
*/
ds->candidate_ok = (!ds->disambiguate_fn_used ||
- ds->fn(ds->candidate, ds->cb_data));
+ ds->fn(&ds->candidate, ds->cb_data));
if (!ds->candidate_ok)
return SHORT_NAME_AMBIGUOUS;
- hashcpy(sha1, ds->candidate);
+ hashcpy(sha1, ds->candidate.hash);
return 0;
}
-static int disambiguate_commit_only(const unsigned char *sha1, void *cb_data_unused)
+static int disambiguate_commit_only(const struct object_id *oid, void *cb_data_unused)
{
- int kind = sha1_object_info(sha1, NULL);
+ int kind = sha1_object_info(oid->hash, NULL);
return kind == OBJ_COMMIT;
}
-static int disambiguate_committish_only(const unsigned char *sha1, void *cb_data_unused)
+static int disambiguate_committish_only(const struct object_id *oid, void *cb_data_unused)
{
struct object *obj;
int kind;
- kind = sha1_object_info(sha1, NULL);
+ kind = sha1_object_info(oid->hash, NULL);
if (kind == OBJ_COMMIT)
return 1;
if (kind != OBJ_TAG)
return 0;
/* We need to do this the hard way... */
- obj = deref_tag(parse_object(sha1), NULL, 0);
+ obj = deref_tag(parse_object(oid->hash), NULL, 0);
if (obj && obj->type == OBJ_COMMIT)
return 1;
return 0;
}
-static int disambiguate_tree_only(const unsigned char *sha1, void *cb_data_unused)
+static int disambiguate_tree_only(const struct object_id *oid, void *cb_data_unused)
{
- int kind = sha1_object_info(sha1, NULL);
+ int kind = sha1_object_info(oid->hash, NULL);
return kind == OBJ_TREE;
}
-static int disambiguate_treeish_only(const unsigned char *sha1, void *cb_data_unused)
+static int disambiguate_treeish_only(const struct object_id *oid, void *cb_data_unused)
{
struct object *obj;
int kind;
- kind = sha1_object_info(sha1, NULL);
+ kind = sha1_object_info(oid->hash, NULL);
if (kind == OBJ_TREE || kind == OBJ_COMMIT)
return 1;
if (kind != OBJ_TAG)
return 0;
/* We need to do this the hard way... */
- obj = deref_tag(parse_object(sha1), NULL, 0);
+ obj = deref_tag(parse_object(oid->hash), NULL, 0);
if (obj && (obj->type == OBJ_TREE || obj->type == OBJ_COMMIT))
return 1;
return 0;
}
-static int disambiguate_blob_only(const unsigned char *sha1, void *cb_data_unused)
+static int disambiguate_blob_only(const struct object_id *oid, void *cb_data_unused)
{
- int kind = sha1_object_info(sha1, NULL);
+ int kind = sha1_object_info(oid->hash, NULL);
return kind == OBJ_BLOB;
}
ds->hex_pfx[i] = c;
if (!(i & 1))
val <<= 4;
- ds->bin_pfx[i >> 1] |= val;
+ ds->bin_pfx.hash[i >> 1] |= val;
}
ds->len = len;
return 0;
}
-static int show_ambiguous_object(const unsigned char *sha1, void *data)
+static int show_ambiguous_object(const struct object_id *oid, void *data)
{
const struct disambiguate_state *ds = data;
struct strbuf desc = STRBUF_INIT;
int type;
- if (ds->fn && !ds->fn(sha1, ds->cb_data))
+
+ if (ds->fn && !ds->fn(oid, ds->cb_data))
return 0;
- type = sha1_object_info(sha1, NULL);
+ type = sha1_object_info(oid->hash, NULL);
if (type == OBJ_COMMIT) {
- struct commit *commit = lookup_commit(sha1);
+ struct commit *commit = lookup_commit(oid->hash);
if (commit) {
struct pretty_print_context pp = {0};
pp.date_mode.type = DATE_SHORT;
format_commit_message(commit, " %ad - %s", &desc, &pp);
}
} else if (type == OBJ_TAG) {
- struct tag *tag = lookup_tag(sha1);
+ struct tag *tag = lookup_tag(oid->hash);
if (!parse_tag(tag) && tag->tag)
strbuf_addf(&desc, " %s", tag->tag);
}
advise(" %s %s%s",
- find_unique_abbrev(sha1, DEFAULT_ABBREV),
+ find_unique_abbrev(oid->hash, DEFAULT_ABBREV),
typename(type) ? typename(type) : "unknown type",
desc.buf);
return status;
}
-static int collect_ambiguous(const unsigned char *sha1, void *data)
+static int collect_ambiguous(const struct object_id *oid, void *data)
{
- sha1_array_append(data, sha1);
+ oid_array_append(data, oid);
return 0;
}
int for_each_abbrev(const char *prefix, each_abbrev_fn fn, void *cb_data)
{
- struct sha1_array collect = SHA1_ARRAY_INIT;
+ struct oid_array collect = OID_ARRAY_INIT;
struct disambiguate_state ds;
int ret;
find_short_object_filename(&ds);
find_short_packed_object(&ds);
- ret = sha1_array_for_each_unique(&collect, fn, cb_data);
- sha1_array_clear(&collect);
+ ret = oid_array_for_each_unique(&collect, fn, cb_data);
+ oid_array_clear(&collect);
return ret;
}
const char *find_unique_abbrev(const unsigned char *sha1, int len)
{
static int bufno;
- static char hexbuffer[4][GIT_SHA1_HEXSZ + 1];
+ static char hexbuffer[4][GIT_MAX_HEXSZ + 1];
char *hex = hexbuffer[bufno];
bufno = (bufno + 1) % ARRAY_SIZE(hexbuffer);
find_unique_abbrev_r(hex, sha1, len);
}
static int write_shallow_commits_1(struct strbuf *out, int use_pack_protocol,
- const struct sha1_array *extra,
+ const struct oid_array *extra,
unsigned flags)
{
struct write_shallow_data data;
if (!extra)
return data.count;
for (i = 0; i < extra->nr; i++) {
- strbuf_addstr(out, sha1_to_hex(extra->sha1[i]));
+ strbuf_addstr(out, oid_to_hex(extra->oid + i));
strbuf_addch(out, '\n');
data.count++;
}
}
int write_shallow_commits(struct strbuf *out, int use_pack_protocol,
- const struct sha1_array *extra)
+ const struct oid_array *extra)
{
return write_shallow_commits_1(out, use_pack_protocol, extra, 0);
}
static struct tempfile temporary_shallow;
-const char *setup_temporary_shallow(const struct sha1_array *extra)
+const char *setup_temporary_shallow(const struct oid_array *extra)
{
struct strbuf sb = STRBUF_INIT;
int fd;
void setup_alternate_shallow(struct lock_file *shallow_lock,
const char **alternate_shallow_file,
- const struct sha1_array *extra)
+ const struct oid_array *extra)
{
struct strbuf sb = STRBUF_INIT;
int fd;
* Step 1, split sender shallow commits into "ours" and "theirs"
* Step 2, clean "ours" based on .git/shallow
*/
-void prepare_shallow_info(struct shallow_info *info, struct sha1_array *sa)
+void prepare_shallow_info(struct shallow_info *info, struct oid_array *sa)
{
int i;
trace_printf_key(&trace_shallow, "shallow: prepare_shallow_info\n");
ALLOC_ARRAY(info->ours, sa->nr);
ALLOC_ARRAY(info->theirs, sa->nr);
for (i = 0; i < sa->nr; i++) {
- if (has_sha1_file(sa->sha1[i])) {
+ if (has_object_file(sa->oid + i)) {
struct commit_graft *graft;
- graft = lookup_commit_graft(sa->sha1[i]);
+ graft = lookup_commit_graft(sa->oid[i].hash);
if (graft && graft->nr_parent < 0)
continue;
info->ours[info->nr_ours++] = i;
void remove_nonexistent_theirs_shallow(struct shallow_info *info)
{
- unsigned char (*sha1)[20] = info->shallow->sha1;
+ struct object_id *oid = info->shallow->oid;
int i, dst;
trace_printf_key(&trace_shallow, "shallow: remove_nonexistent_theirs_shallow\n");
for (i = dst = 0; i < info->nr_theirs; i++) {
if (i != dst)
info->theirs[dst] = info->theirs[i];
- if (has_sha1_file(sha1[info->theirs[i]]))
+ if (has_object_file(oid + info->theirs[i]))
dst++;
}
info->nr_theirs = dst;
void assign_shallow_commits_to_refs(struct shallow_info *info,
uint32_t **used, int *ref_status)
{
- unsigned char (*sha1)[20] = info->shallow->sha1;
- struct sha1_array *ref = info->ref;
+ struct object_id *oid = info->shallow->oid;
+ struct oid_array *ref = info->ref;
unsigned int i, nr;
int *shallow, nr_shallow = 0;
struct paint_info pi;
/* Mark potential bottoms so we won't go out of bound */
for (i = 0; i < nr_shallow; i++) {
- struct commit *c = lookup_commit(sha1[shallow[i]]);
+ struct commit *c = lookup_commit(oid[shallow[i]].hash);
c->object.flags |= BOTTOM;
}
for (i = 0; i < ref->nr; i++)
- paint_down(&pi, ref->sha1[i], i);
+ paint_down(&pi, ref->oid[i].hash, i);
if (used) {
int bitmap_size = ((pi.nr_bits + 31) / 32) * sizeof(uint32_t);
memset(used, 0, sizeof(*used) * info->shallow->nr);
for (i = 0; i < nr_shallow; i++) {
- const struct commit *c = lookup_commit(sha1[shallow[i]]);
+ const struct commit *c = lookup_commit(oid[shallow[i]].hash);
uint32_t **map = ref_bitmap_at(&pi.ref_bitmap, c);
if (*map)
used[shallow[i]] = xmemdupz(*map, bitmap_size);
struct ref_bitmap *ref_bitmap,
int *ref_status)
{
- unsigned char (*sha1)[20] = info->shallow->sha1;
+ struct object_id *oid = info->shallow->oid;
struct commit *c;
uint32_t **bitmap;
int dst, i, j;
for (i = dst = 0; i < info->nr_theirs; i++) {
if (i != dst)
info->theirs[dst] = info->theirs[i];
- c = lookup_commit(sha1[info->theirs[i]]);
+ c = lookup_commit(oid[info->theirs[i]].hash);
bitmap = ref_bitmap_at(ref_bitmap, c);
if (!*bitmap)
continue;
for (i = dst = 0; i < info->nr_ours; i++) {
if (i != dst)
info->ours[dst] = info->ours[i];
- c = lookup_commit(sha1[info->ours[i]]);
+ c = lookup_commit(oid[info->ours[i]].hash);
bitmap = ref_bitmap_at(ref_bitmap, c);
if (!*bitmap)
continue;
int delayed_reachability_test(struct shallow_info *si, int c)
{
if (si->need_reachability_test[c]) {
- struct commit *commit = lookup_commit(si->shallow->sha1[c]);
+ struct commit *commit = lookup_commit(si->shallow->oid[c].hash);
if (!si->commits) {
struct commit_array ca;
if (exact_match)
return -1 - index;
- if (list->nr + 1 >= list->alloc) {
- list->alloc += 32;
- REALLOC_ARRAY(list->items, list->alloc);
- }
+ ALLOC_GROW(list->items, list->nr+1, list->alloc);
if (index < list->nr)
memmove(list->items + index + 1, list->items + index,
(list->nr - index)
#include "blob.h"
#include "thread-utils.h"
#include "quote.h"
+#include "remote.h"
#include "worktree.h"
static int config_fetch_recurse_submodules = RECURSE_SUBMODULES_ON_DEMAND;
static int parallel_jobs = 1;
static struct string_list changed_submodule_paths = STRING_LIST_INIT_NODUP;
static int initialized_fetch_ref_tips;
-static struct sha1_array ref_tips_before_fetch;
-static struct sha1_array ref_tips_after_fetch;
+static struct oid_array ref_tips_before_fetch;
+static struct oid_array ref_tips_after_fetch;
/*
* The following flag is set if the .gitmodules file is unmerged. We then
if (!(dirty_submodule & DIRTY_SUBMODULE_MODIFIED))
argv_array_push(&cp.args, oid_to_hex(new));
+ prepare_submodule_repo_env(&cp.env_array);
if (run_command(&cp))
fprintf(f, "(diff failed)\n");
return 1;
}
-static int append_sha1_to_argv(const unsigned char sha1[20], void *data)
+static int append_oid_to_argv(const struct object_id *oid, void *data)
{
struct argv_array *argv = data;
- argv_array_push(argv, sha1_to_hex(sha1));
+ argv_array_push(argv, oid_to_hex(oid));
return 0;
}
-static int check_has_commit(const unsigned char sha1[20], void *data)
+static int check_has_commit(const struct object_id *oid, void *data)
{
int *has_commit = data;
- if (!lookup_commit_reference(sha1))
+ if (!lookup_commit_reference(oid->hash))
*has_commit = 0;
return 0;
}
-static int submodule_has_commits(const char *path, struct sha1_array *commits)
+static int submodule_has_commits(const char *path, struct oid_array *commits)
{
int has_commit = 1;
if (add_submodule_odb(path))
return 0;
- sha1_array_for_each_unique(commits, check_has_commit, &has_commit);
+ oid_array_for_each_unique(commits, check_has_commit, &has_commit);
return has_commit;
}
-static int submodule_needs_pushing(const char *path, struct sha1_array *commits)
+static int submodule_needs_pushing(const char *path, struct oid_array *commits)
{
if (!submodule_has_commits(path, commits))
/*
int needs_pushing = 0;
argv_array_push(&cp.args, "rev-list");
- sha1_array_for_each_unique(commits, append_sha1_to_argv, &cp.args);
+ oid_array_for_each_unique(commits, append_oid_to_argv, &cp.args);
argv_array_pushl(&cp.args, "--not", "--remotes", "-n", "1" , NULL);
prepare_submodule_repo_env(&cp.env_array);
return 0;
}
-static struct sha1_array *submodule_commits(struct string_list *submodules,
+static struct oid_array *submodule_commits(struct string_list *submodules,
const char *path)
{
struct string_list_item *item;
item = string_list_insert(submodules, path);
if (item->util)
- return (struct sha1_array *) item->util;
+ return (struct oid_array *) item->util;
- /* NEEDSWORK: should we have sha1_array_init()? */
- item->util = xcalloc(1, sizeof(struct sha1_array));
- return (struct sha1_array *) item->util;
+ /* NEEDSWORK: should we have oid_array_init()? */
+ item->util = xcalloc(1, sizeof(struct oid_array));
+ return (struct oid_array *) item->util;
}
static void collect_submodules_from_diff(struct diff_queue_struct *q,
for (i = 0; i < q->nr; i++) {
struct diff_filepair *p = q->queue[i];
- struct sha1_array *commits;
+ struct oid_array *commits;
if (!S_ISGITLINK(p->two->mode))
continue;
commits = submodule_commits(submodules, p->two->path);
- sha1_array_append(commits, p->two->oid.hash);
+ oid_array_append(commits, &p->two->oid);
}
}
{
struct string_list_item *item;
for_each_string_list_item(item, submodules)
- sha1_array_clear((struct sha1_array *) item->util);
+ oid_array_clear((struct oid_array *) item->util);
string_list_clear(submodules, 1);
}
-int find_unpushed_submodules(struct sha1_array *commits,
+int find_unpushed_submodules(struct oid_array *commits,
const char *remotes_name, struct string_list *needs_pushing)
{
struct rev_info rev;
/* argv.argv[0] will be ignored by setup_revisions */
argv_array_push(&argv, "find_unpushed_submodules");
- sha1_array_for_each_unique(commits, append_sha1_to_argv, &argv);
+ oid_array_for_each_unique(commits, append_oid_to_argv, &argv);
argv_array_push(&argv, "--not");
argv_array_pushf(&argv, "--remotes=%s", remotes_name);
argv_array_clear(&argv);
for_each_string_list_item(submodule, &submodules) {
- struct sha1_array *commits = (struct sha1_array *) submodule->util;
+ struct oid_array *commits = (struct oid_array *) submodule->util;
if (submodule_needs_pushing(submodule->string, commits))
string_list_insert(needs_pushing, submodule->string);
return needs_pushing->nr;
}
-static int push_submodule(const char *path, int dry_run)
+static int push_submodule(const char *path,
+ const struct remote *remote,
+ const char **refspec, int refspec_nr,
+ const struct string_list *push_options,
+ int dry_run)
{
if (add_submodule_odb(path))
return 1;
if (dry_run)
argv_array_push(&cp.args, "--dry-run");
+ if (push_options && push_options->nr) {
+ const struct string_list_item *item;
+ for_each_string_list_item(item, push_options)
+ argv_array_pushf(&cp.args, "--push-option=%s",
+ item->string);
+ }
+
+ if (remote->origin != REMOTE_UNCONFIGURED) {
+ int i;
+ argv_array_push(&cp.args, remote->name);
+ for (i = 0; i < refspec_nr; i++)
+ argv_array_push(&cp.args, refspec[i]);
+ }
+
prepare_submodule_repo_env(&cp.env_array);
cp.git_cmd = 1;
cp.no_stdin = 1;
return 1;
}
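
For illustration only: with the changes above, push_submodule() ends up spawning roughly the following child process inside the submodule, assuming the superproject's remote is a configured remote (the path, remote name, push option and refspec below are placeholders, not values from this series):

    git -C path/to/sub push --push-option=ci.skip origin refs/heads/master:refs/heads/master

When the remote was given as a bare URL (REMOTE_UNCONFIGURED), neither the remote nor the refspecs are appended, and the submodule pushes to whatever its own configuration selects.
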
-int push_unpushed_submodules(struct sha1_array *commits,
- const char *remotes_name,
+/*
+ * Perform a check in the submodule to see if the remote and refspec work.
+ * Die if the submodule can't be pushed.
+ */
+static void submodule_push_check(const char *path, const struct remote *remote,
+ const char **refspec, int refspec_nr)
+{
+ struct child_process cp = CHILD_PROCESS_INIT;
+ int i;
+
+ argv_array_push(&cp.args, "submodule--helper");
+ argv_array_push(&cp.args, "push-check");
+ argv_array_push(&cp.args, remote->name);
+
+ for (i = 0; i < refspec_nr; i++)
+ argv_array_push(&cp.args, refspec[i]);
+
+ prepare_submodule_repo_env(&cp.env_array);
+ cp.git_cmd = 1;
+ cp.no_stdin = 1;
+ cp.no_stdout = 1;
+ cp.dir = path;
+
+ /*
+ * Simply indicate if 'submodule--helper push-check' failed.
+ * More detailed error information will be provided by the
+ * child process.
+ */
+ if (run_command(&cp))
+ die("process for submodule '%s' failed", path);
+}
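
In shell terms, the check above is roughly equivalent to running the following in each submodule that needs pushing (remote name, refspec and path are placeholders); a non-zero exit from the helper aborts the whole push via the die() above:

    git -C path/to/sub submodule--helper push-check origin refs/heads/master:refs/heads/master
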
+
+int push_unpushed_submodules(struct oid_array *commits,
+ const struct remote *remote,
+ const char **refspec, int refspec_nr,
+ const struct string_list *push_options,
int dry_run)
{
int i, ret = 1;
struct string_list needs_pushing = STRING_LIST_INIT_DUP;
- if (!find_unpushed_submodules(commits, remotes_name, &needs_pushing))
+ if (!find_unpushed_submodules(commits, remote->name, &needs_pushing))
return 1;
+ /*
+ * Verify that the remote and refspec can be propagated to all
+ * submodules. This check can be skipped if the remote and refspec
+ * won't be propagated due to the remote being unconfigured (e.g. a URL
+ * instead of a remote name).
+ */
+ if (remote->origin != REMOTE_UNCONFIGURED)
+ for (i = 0; i < needs_pushing.nr; i++)
+ submodule_push_check(needs_pushing.items[i].string,
+ remote, refspec, refspec_nr);
+
+ /* Actually push the submodules */
for (i = 0; i < needs_pushing.nr; i++) {
const char *path = needs_pushing.items[i].string;
fprintf(stderr, "Pushing submodule '%s'\n", path);
- if (!push_submodule(path, dry_run)) {
+ if (!push_submodule(path, remote, refspec, refspec_nr,
+ push_options, dry_run)) {
fprintf(stderr, "Unable to push submodule '%s'\n", path);
ret = 0;
}
static int add_sha1_to_array(const char *ref, const struct object_id *oid,
int flags, void *data)
{
- sha1_array_append(data, oid->hash);
+ oid_array_append(data, oid);
return 0;
}
-void check_for_new_submodule_commits(unsigned char new_sha1[20])
+void check_for_new_submodule_commits(struct object_id *oid)
{
if (!initialized_fetch_ref_tips) {
for_each_ref(add_sha1_to_array, &ref_tips_before_fetch);
initialized_fetch_ref_tips = 1;
}
- sha1_array_append(&ref_tips_after_fetch, new_sha1);
+ oid_array_append(&ref_tips_after_fetch, oid);
}
-static int add_sha1_to_argv(const unsigned char sha1[20], void *data)
+static int add_oid_to_argv(const struct object_id *oid, void *data)
{
- argv_array_push(data, sha1_to_hex(sha1));
+ argv_array_push(data, oid_to_hex(oid));
return 0;
}
init_revisions(&rev, NULL);
argv_array_push(&argv, "--"); /* argv[0] program name */
- sha1_array_for_each_unique(&ref_tips_after_fetch,
- add_sha1_to_argv, &argv);
+ oid_array_for_each_unique(&ref_tips_after_fetch,
+ add_oid_to_argv, &argv);
argv_array_push(&argv, "--not");
- sha1_array_for_each_unique(&ref_tips_before_fetch,
- add_sha1_to_argv, &argv);
+ oid_array_for_each_unique(&ref_tips_before_fetch,
+ add_oid_to_argv, &argv);
setup_revisions(argv.argc, argv.argv, &rev, NULL);
if (prepare_revision_walk(&rev))
die("revision walk setup failed");
}
argv_array_clear(&argv);
- sha1_array_clear(&ref_tips_before_fetch);
- sha1_array_clear(&ref_tips_after_fetch);
+ oid_array_clear(&ref_tips_before_fetch);
+ oid_array_clear(&ref_tips_after_fetch);
initialized_fetch_ref_tips = 0;
}
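
Conceptually, the in-process revision walk set up above is equivalent to the command below, which enumerates the superproject commits that became reachable since the ref tips were snapshotted (the tip arguments stand for the collected object ids):

    git rev-list <tips-after-fetch>... --not <tips-before-fetch>...
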
unsigned is_submodule_modified(const char *path, int ignore_untracked)
{
- ssize_t len;
struct child_process cp = CHILD_PROCESS_INIT;
- const char *argv[] = {
- "status",
- "--porcelain",
- NULL,
- NULL,
- };
struct strbuf buf = STRBUF_INIT;
+ FILE *fp;
unsigned dirty_submodule = 0;
- const char *line, *next_line;
const char *git_dir;
+ int ignore_cp_exit_code = 0;
strbuf_addf(&buf, "%s/.git", path);
git_dir = read_gitfile(buf.buf);
if (!git_dir)
git_dir = buf.buf;
- if (!is_directory(git_dir)) {
+ if (!is_git_directory(git_dir)) {
+ if (is_directory(git_dir))
+ die(_("'%s' not recognized as a git repository"), git_dir);
strbuf_release(&buf);
/* The submodule is not checked out, so it is not modified */
return 0;
-
}
strbuf_reset(&buf);
+ argv_array_pushl(&cp.args, "status", "--porcelain=2", NULL);
if (ignore_untracked)
- argv[2] = "-uno";
+ argv_array_push(&cp.args, "-uno");
- cp.argv = argv;
prepare_submodule_repo_env(&cp.env_array);
cp.git_cmd = 1;
cp.no_stdin = 1;
cp.out = -1;
cp.dir = path;
if (start_command(&cp))
- die("Could not run 'git status --porcelain' in submodule %s", path);
+ die("Could not run 'git status --porcelain=2' in submodule %s", path);
- len = strbuf_read(&buf, cp.out, 1024);
- line = buf.buf;
- while (len > 2) {
- if ((line[0] == '?') && (line[1] == '?')) {
+ fp = xfdopen(cp.out, "r");
+ while (strbuf_getwholeline(&buf, fp, '\n') != EOF) {
+ /* regular untracked files */
+ if (buf.buf[0] == '?')
dirty_submodule |= DIRTY_SUBMODULE_UNTRACKED;
- if (dirty_submodule & DIRTY_SUBMODULE_MODIFIED)
- break;
- } else {
- dirty_submodule |= DIRTY_SUBMODULE_MODIFIED;
- if (ignore_untracked ||
- (dirty_submodule & DIRTY_SUBMODULE_UNTRACKED))
- break;
+
+ if (buf.buf[0] == 'u' ||
+ buf.buf[0] == '1' ||
+ buf.buf[0] == '2') {
+ /* T = line type, XY = status, SSSS = submodule state */
+ if (buf.len < strlen("T XY SSSS"))
+ die("BUG: invalid status --porcelain=2 line %s",
+ buf.buf);
+
+ if (buf.buf[5] == 'S' && buf.buf[8] == 'U')
+ /* nested untracked file */
+ dirty_submodule |= DIRTY_SUBMODULE_UNTRACKED;
+
+ if (buf.buf[0] == 'u' ||
+ buf.buf[0] == '2' ||
+ memcmp(buf.buf + 5, "S..U", 4))
+ /* other change */
+ dirty_submodule |= DIRTY_SUBMODULE_MODIFIED;
}
- next_line = strchr(line, '\n');
- if (!next_line)
+
+ if ((dirty_submodule & DIRTY_SUBMODULE_MODIFIED) &&
+ ((dirty_submodule & DIRTY_SUBMODULE_UNTRACKED) ||
+ ignore_untracked)) {
+ /*
+ * We're not interested in any further information from
+ * the child any more, neither output nor its exit code.
+ */
+ ignore_cp_exit_code = 1;
break;
- next_line++;
- len -= (next_line - line);
- line = next_line;
+ }
}
- close(cp.out);
+ fclose(fp);
- if (finish_command(&cp))
- die("'git status --porcelain' failed in submodule %s", path);
+ if (finish_command(&cp) && !ignore_cp_exit_code)
+ die("'git status --porcelain=2' failed in submodule %s", path);
strbuf_release(&buf);
return dirty_submodule;
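
For reference, here is a hypothetical sample of 'git status --porcelain=2' output and the flags the parser above derives from it; the paths and object ids are invented, and the '1 XY SSSS ...' layout is what the buf.buf[5]/buf.buf[8] indexing relies on:

    ? untracked.txt                                        -> DIRTY_SUBMODULE_UNTRACKED
    1 .M N... 100644 100644 100644 <oid> <oid> tracked.c   -> DIRTY_SUBMODULE_MODIFIED
    1 .M S..U 160000 160000 160000 <oid> <oid> nested-sub  -> DIRTY_SUBMODULE_UNTRACKED only
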
cp.dir = path;
if (start_command(&cp)) {
if (flags & SUBMODULE_REMOVAL_DIE_ON_ERROR)
- die(_("could not start 'git status in submodule '%s'"),
+ die(_("could not start 'git status' in submodule '%s'"),
path);
ret = -1;
goto out;
if (finish_command(&cp)) {
if (flags & SUBMODULE_REMOVAL_DIE_ON_ERROR)
- die(_("could not run 'git status in submodule '%s'"),
+ die(_("could not run 'git status' in submodule '%s'"),
path);
ret = -1;
}
cp1.no_stdin = 1;
cp1.dir = path;
- argv_array_pushl(&cp1.args, "update-ref", "HEAD",
- new ? new : EMPTY_TREE_SHA1_HEX, NULL);
+ argv_array_pushl(&cp1.args, "update-ref", "HEAD", new, NULL);
if (run_command(&cp1)) {
ret = -1;
memset(&rev_opts, 0, sizeof(rev_opts));
/* get all revisions that merge commit a */
- snprintf(merged_revision, sizeof(merged_revision), "^%s",
+ xsnprintf(merged_revision, sizeof(merged_revision), "^%s",
oid_to_hex(&a->object.oid));
init_revisions(&revs, NULL);
rev_opts.submodule = path;
return ret;
}
+
+int submodule_to_gitdir(struct strbuf *buf, const char *submodule)
+{
+ const struct submodule *sub;
+ const char *git_dir;
+ int ret = 0;
+
+ strbuf_reset(buf);
+ strbuf_addstr(buf, submodule);
+ strbuf_complete(buf, '/');
+ strbuf_addstr(buf, ".git");
+
+ git_dir = read_gitfile(buf->buf);
+ if (git_dir) {
+ strbuf_reset(buf);
+ strbuf_addstr(buf, git_dir);
+ }
+ if (!is_git_directory(buf->buf)) {
+ gitmodules_config();
+ sub = submodule_from_path(null_sha1, submodule);
+ if (!sub) {
+ ret = -1;
+ goto cleanup;
+ }
+ strbuf_reset(buf);
+ strbuf_git_path(buf, "%s/%s", "modules", sub->name);
+ }
+
+cleanup:
+ return ret;
+}
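
As a rough illustration, submodule_to_gitdir() resolves the hypothetical layouts below as follows (the names are examples only):

    sub/.git is itself a git directory          -> buf ends up as "sub/.git"
    sub/.git is a gitfile pointing at
    "gitdir: ../.git/modules/sub"               -> buf ends up as the pointed-to directory
    neither of the above is valid               -> buf ends up as "$GIT_DIR/modules/<name>",
                                                   with <name> taken from .gitmodules, or -1
                                                   is returned if the path is not a
                                                   configured submodule
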
struct diff_options;
struct argv_array;
-struct sha1_array;
+struct oid_array;
+struct remote;
enum {
RECURSE_SUBMODULES_ONLY = -5,
* and it should be updated. Returns NULL otherwise.
*/
extern const struct submodule *submodule_from_ce(const struct cache_entry *ce);
-extern void check_for_new_submodule_commits(unsigned char new_sha1[20]);
+extern void check_for_new_submodule_commits(struct object_id *oid);
extern int fetch_populated_submodules(const struct argv_array *options,
const char *prefix, int command_line_option,
int quiet, int max_parallel_jobs);
const unsigned char base[20],
const unsigned char a[20],
const unsigned char b[20], int search);
-extern int find_unpushed_submodules(struct sha1_array *commits,
+extern int find_unpushed_submodules(struct oid_array *commits,
const char *remotes_name,
struct string_list *needs_pushing);
-extern int push_unpushed_submodules(struct sha1_array *commits,
- const char *remotes_name,
+extern int push_unpushed_submodules(struct oid_array *commits,
+ const struct remote *remote,
+ const char **refspec, int refspec_nr,
+ const struct string_list *push_options,
int dry_run);
extern void connect_work_tree_and_git_dir(const char *work_tree, const char *git_dir);
extern int parallel_submodules(void);
+/*
+ * Given a submodule path (as in the index), return the repository
+ * path of that submodule in 'buf'. Return -1 on error or when the
+ * submodule is not initialized.
+ */
+int submodule_to_gitdir(struct strbuf *buf, const char *submodule);
#define SUBMODULE_MOVE_HEAD_DRY_RUN (1<<0)
#define SUBMODULE_MOVE_HEAD_FORCE (1<<1)
/test-match-trees
/test-mergesort
/test-mktemp
+/test-online-cpus
/test-parse-options
/test-path-utils
/test-prio-queue
/test-read-cache
+/test-ref-store
/test-regex
/test-revision-walking
/test-run-command
/test-sha1
/test-sha1-array
/test-sigchain
+/test-strcmp-offset
/test-string-list
/test-submodule-config
/test-subprocess
--- /dev/null
+#include "git-compat-util.h"
+#include "thread-utils.h"
+
+int cmd_main(int argc, const char **argv)
+{
+ printf("%d\n", online_cpus());
+ return 0;
+}
int i, cnt = 1;
if (argc == 2)
cnt = strtol(argv[1], NULL, 0);
+ setup_git_directory();
for (i = 0; i < cnt; i++) {
read_cache();
discard_cache();
--- /dev/null
+#include "cache.h"
+#include "refs.h"
+
+static const char *notnull(const char *arg, const char *name)
+{
+ if (!arg)
+ die("%s required", name);
+ return arg;
+}
+
+static unsigned int arg_flags(const char *arg, const char *name)
+{
+ return atoi(notnull(arg, name));
+}
+
+static const char **get_store(const char **argv, struct ref_store **refs)
+{
+ const char *gitdir;
+
+ if (!argv[0]) {
+ die("ref store required");
+ } else if (!strcmp(argv[0], "main")) {
+ *refs = get_main_ref_store();
+ } else if (skip_prefix(argv[0], "submodule:", &gitdir)) {
+ struct strbuf sb = STRBUF_INIT;
+ int ret;
+
+ ret = strbuf_git_path_submodule(&sb, gitdir, "objects/");
+ if (ret)
+ die("strbuf_git_path_submodule failed: %d", ret);
+ add_to_alternates_memory(sb.buf);
+ strbuf_release(&sb);
+
+ *refs = get_submodule_ref_store(gitdir);
+ } else
+ die("unknown backend %s", argv[0]);
+
+ if (!*refs)
+ die("no ref store");
+
+ /* consume store-specific optional arguments if needed */
+
+ return argv + 1;
+}
+
+
+static int cmd_pack_refs(struct ref_store *refs, const char **argv)
+{
+ unsigned int flags = arg_flags(*argv++, "flags");
+
+ return refs_pack_refs(refs, flags);
+}
+
+static int cmd_peel_ref(struct ref_store *refs, const char **argv)
+{
+ const char *refname = notnull(*argv++, "refname");
+ unsigned char sha1[20];
+ int ret;
+
+ ret = refs_peel_ref(refs, refname, sha1);
+ if (!ret)
+ puts(sha1_to_hex(sha1));
+ return ret;
+}
+
+static int cmd_create_symref(struct ref_store *refs, const char **argv)
+{
+ const char *refname = notnull(*argv++, "refname");
+ const char *target = notnull(*argv++, "target");
+ const char *logmsg = *argv++;
+
+ return refs_create_symref(refs, refname, target, logmsg);
+}
+
+static int cmd_delete_refs(struct ref_store *refs, const char **argv)
+{
+ unsigned int flags = arg_flags(*argv++, "flags");
+ struct string_list refnames = STRING_LIST_INIT_NODUP;
+
+ while (*argv)
+ string_list_append(&refnames, *argv++);
+
+ return refs_delete_refs(refs, &refnames, flags);
+}
+
+static int cmd_rename_ref(struct ref_store *refs, const char **argv)
+{
+ const char *oldref = notnull(*argv++, "oldref");
+ const char *newref = notnull(*argv++, "newref");
+ const char *logmsg = *argv++;
+
+ return refs_rename_ref(refs, oldref, newref, logmsg);
+}
+
+static int each_ref(const char *refname, const struct object_id *oid,
+ int flags, void *cb_data)
+{
+ printf("%s %s 0x%x\n", oid_to_hex(oid), refname, flags);
+ return 0;
+}
+
+static int cmd_for_each_ref(struct ref_store *refs, const char **argv)
+{
+ const char *prefix = notnull(*argv++, "prefix");
+
+ return refs_for_each_ref_in(refs, prefix, each_ref, NULL);
+}
+
+static int cmd_resolve_ref(struct ref_store *refs, const char **argv)
+{
+ unsigned char sha1[20];
+ const char *refname = notnull(*argv++, "refname");
+ int resolve_flags = arg_flags(*argv++, "resolve-flags");
+ int flags;
+ const char *ref;
+
+ ref = refs_resolve_ref_unsafe(refs, refname, resolve_flags,
+ sha1, &flags);
+ printf("%s %s 0x%x\n", sha1_to_hex(sha1), ref, flags);
+ return ref ? 0 : 1;
+}
+
+static int cmd_verify_ref(struct ref_store *refs, const char **argv)
+{
+ const char *refname = notnull(*argv++, "refname");
+ struct strbuf err = STRBUF_INIT;
+ int ret;
+
+ ret = refs_verify_refname_available(refs, refname, NULL, NULL, &err);
+ if (err.len)
+ puts(err.buf);
+ return ret;
+}
+
+static int cmd_for_each_reflog(struct ref_store *refs, const char **argv)
+{
+ return refs_for_each_reflog(refs, each_ref, NULL);
+}
+
+static int each_reflog(struct object_id *old_oid, struct object_id *new_oid,
+ const char *committer, unsigned long timestamp,
+ int tz, const char *msg, void *cb_data)
+{
+ printf("%s %s %s %lu %d %s\n",
+ oid_to_hex(old_oid), oid_to_hex(new_oid),
+ committer, timestamp, tz, msg);
+ return 0;
+}
+
+static int cmd_for_each_reflog_ent(struct ref_store *refs, const char **argv)
+{
+ const char *refname = notnull(*argv++, "refname");
+
+ return refs_for_each_reflog_ent(refs, refname, each_reflog, refs);
+}
+
+static int cmd_for_each_reflog_ent_reverse(struct ref_store *refs, const char **argv)
+{
+ const char *refname = notnull(*argv++, "refname");
+
+ return refs_for_each_reflog_ent_reverse(refs, refname, each_reflog, refs);
+}
+
+static int cmd_reflog_exists(struct ref_store *refs, const char **argv)
+{
+ const char *refname = notnull(*argv++, "refname");
+
+ return !refs_reflog_exists(refs, refname);
+}
+
+static int cmd_create_reflog(struct ref_store *refs, const char **argv)
+{
+ const char *refname = notnull(*argv++, "refname");
+ int force_create = arg_flags(*argv++, "force-create");
+ struct strbuf err = STRBUF_INIT;
+ int ret;
+
+ ret = refs_create_reflog(refs, refname, force_create, &err);
+ if (err.len)
+ puts(err.buf);
+ return ret;
+}
+
+static int cmd_delete_reflog(struct ref_store *refs, const char **argv)
+{
+ const char *refname = notnull(*argv++, "refname");
+
+ return refs_delete_reflog(refs, refname);
+}
+
+static int cmd_reflog_expire(struct ref_store *refs, const char **argv)
+{
+ die("not supported yet");
+}
+
+static int cmd_delete_ref(struct ref_store *refs, const char **argv)
+{
+ const char *msg = notnull(*argv++, "msg");
+ const char *refname = notnull(*argv++, "refname");
+ const char *sha1_buf = notnull(*argv++, "old-sha1");
+ unsigned int flags = arg_flags(*argv++, "flags");
+ unsigned char old_sha1[20];
+
+ if (get_sha1_hex(sha1_buf, old_sha1))
+ die("not sha-1");
+
+ return refs_delete_ref(refs, msg, refname, old_sha1, flags);
+}
+
+static int cmd_update_ref(struct ref_store *refs, const char **argv)
+{
+ const char *msg = notnull(*argv++, "msg");
+ const char *refname = notnull(*argv++, "refname");
+	const char *new_sha1_buf = notnull(*argv++, "new-sha1");
+ const char *old_sha1_buf = notnull(*argv++, "old-sha1");
+ unsigned int flags = arg_flags(*argv++, "flags");
+ unsigned char old_sha1[20];
+ unsigned char new_sha1[20];
+
+ if (get_sha1_hex(old_sha1_buf, old_sha1) ||
+ get_sha1_hex(new_sha1_buf, new_sha1))
+ die("not sha-1");
+
+ return refs_update_ref(refs, msg, refname,
+ new_sha1, old_sha1,
+ flags, UPDATE_REFS_DIE_ON_ERR);
+}
+
+struct command {
+ const char *name;
+ int (*func)(struct ref_store *refs, const char **argv);
+};
+
+static struct command commands[] = {
+ { "pack-refs", cmd_pack_refs },
+ { "peel-ref", cmd_peel_ref },
+ { "create-symref", cmd_create_symref },
+ { "delete-refs", cmd_delete_refs },
+ { "rename-ref", cmd_rename_ref },
+ { "for-each-ref", cmd_for_each_ref },
+ { "resolve-ref", cmd_resolve_ref },
+ { "verify-ref", cmd_verify_ref },
+ { "for-each-reflog", cmd_for_each_reflog },
+ { "for-each-reflog-ent", cmd_for_each_reflog_ent },
+ { "for-each-reflog-ent-reverse", cmd_for_each_reflog_ent_reverse },
+ { "reflog-exists", cmd_reflog_exists },
+ { "create-reflog", cmd_create_reflog },
+ { "delete-reflog", cmd_delete_reflog },
+ { "reflog-expire", cmd_reflog_expire },
+ /*
+ * backend transaction functions can't be tested separately
+ */
+ { "delete-ref", cmd_delete_ref },
+ { "update-ref", cmd_update_ref },
+ { NULL, NULL }
+};
+
+int cmd_main(int argc, const char **argv)
+{
+ struct ref_store *refs;
+ const char *func;
+ struct command *cmd;
+
+ setup_git_directory();
+
+ argv = get_store(argv + 1, &refs);
+
+ func = *argv++;
+ if (!func)
+ die("ref function required");
+ for (cmd = commands; cmd->name; cmd++) {
+ if (!strcmp(func, cmd->name))
+ return cmd->func(refs, argv);
+ }
+ die("unknown function %s", func);
+ return 0;
+}
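
The helper expects a ref-store selector ("main" or "submodule:<path>") followed by a function name and its positional arguments. A few sample invocations, mirroring the t1405/t1406 tests added later in this series:

    test-ref-store main for-each-ref refs/heads/
    test-ref-store main delete-refs 0 FOO refs/tags/new-tag
    test-ref-store submodule:sub resolve-ref refs/heads/master 0
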
#include "cache.h"
#include "sha1-array.h"
-static int print_sha1(const unsigned char sha1[20], void *data)
+static int print_oid(const struct object_id *oid, void *data)
{
- puts(sha1_to_hex(sha1));
+ puts(oid_to_hex(oid));
return 0;
}
int cmd_main(int argc, const char **argv)
{
- struct sha1_array array = SHA1_ARRAY_INIT;
+ struct oid_array array = OID_ARRAY_INIT;
struct strbuf line = STRBUF_INIT;
while (strbuf_getline(&line, stdin) != EOF) {
const char *arg;
- unsigned char sha1[20];
+ struct object_id oid;
if (skip_prefix(line.buf, "append ", &arg)) {
- if (get_sha1_hex(arg, sha1))
+ if (get_oid_hex(arg, &oid))
die("not a hexadecimal SHA1: %s", arg);
- sha1_array_append(&array, sha1);
+ oid_array_append(&array, &oid);
} else if (skip_prefix(line.buf, "lookup ", &arg)) {
- if (get_sha1_hex(arg, sha1))
+ if (get_oid_hex(arg, &oid))
die("not a hexadecimal SHA1: %s", arg);
- printf("%d\n", sha1_array_lookup(&array, sha1));
+ printf("%d\n", oid_array_lookup(&array, &oid));
} else if (!strcmp(line.buf, "clear"))
- sha1_array_clear(&array);
+ oid_array_clear(&array);
else if (!strcmp(line.buf, "for_each_unique"))
- sha1_array_for_each_unique(&array, print_sha1, NULL);
+ oid_array_for_each_unique(&array, print_oid, NULL);
else
die("unknown command: %s", line.buf);
}
--- /dev/null
+#include "cache.h"
+
+int cmd_main(int argc, const char **argv)
+{
+ int result;
+ size_t offset;
+
+ if (!argv[1] || !argv[2])
+ die("usage: %s <string1> <string2>", argv[0]);
+
+ result = strcmp_offset(argv[1], argv[2], &offset);
+
+ /*
+	 * Because different CRTs behave differently, rely only on the
+	 * sign of the result values.
+ */
+ result = (result < 0 ? -1 :
+ result > 0 ? 1 :
+ 0);
+ printf("%d %"PRIuMAX"\n", result, (uintmax_t)offset);
+ return 0;
+}
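
Expected behaviour of the helper, using one of the cases exercised by the test script added later in this series (the sign of the result comes first, then the offset of the first difference):

    $ test-strcmp-offset abc abz
    -1 2
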
--- /dev/null
+#!/bin/sh
+#
+# This test measures the performance of various read-tree
+# and status operations. It is primarily interested in
+# the algorithmic costs of index operations and recursive
+# tree traversal -- and NOT disk I/O on thousands of files.
+
+test_description="Tests performance of read-tree"
+
+. ./perf-lib.sh
+
+test_perf_default_repo
+
+# If the test repo was generated by ./repos/many-files.sh
+# then we know something about the data shape and branches,
+# so we can isolate testing to the ballast-related commits
+# and setup sparse-checkout so we don't have to populate
+# the ballast files and directories.
+#
+# Otherwise, we make some general assumptions about the
+# repo and consider the entire history of the current
+# branch to be the ballast.
+
+test_expect_success "setup repo" '
+ if git rev-parse --verify refs/heads/p0006-ballast^{commit}
+ then
+ echo Assuming synthetic repo from many-files.sh
+ git branch br_base master
+ git branch br_ballast p0006-ballast
+ git config --local core.sparsecheckout 1
+ cat >.git/info/sparse-checkout <<-EOF
+ /*
+ !ballast/*
+ EOF
+ else
+ echo Assuming non-synthetic repo...
+ git branch br_base $(git rev-list HEAD | tail -n 1)
+ git branch br_ballast HEAD
+ fi &&
+ git checkout -q br_ballast &&
+ nr_files=$(git ls-files | wc -l)
+'
+
+test_perf "read-tree status br_ballast ($nr_files)" '
+ git read-tree HEAD &&
+ git status
+'
+
+test_done
--- /dev/null
+#!/bin/sh
+#
+# This test measures the performance of various read-tree
+# and checkout operations. It is primarily interested in
+# the algorithmic costs of index operations and recursive
+# tree traversal -- and NOT disk I/O on thousands of files.
+
+test_description="Tests performance of read-tree"
+
+. ./perf-lib.sh
+
+test_perf_default_repo
+
+# If the test repo was generated by ./repos/many-files.sh
+# then we know something about the data shape and branches,
+# so we can isolate testing to the ballast-related commits
+# and setup sparse-checkout so we don't have to populate
+# the ballast files and directories.
+#
+# Otherwise, we make some general assumptions about the
+# repo and consider the entire history of the current
+# branch to be the ballast.
+
+test_expect_success "setup repo" '
+ if git rev-parse --verify refs/heads/p0006-ballast^{commit}
+ then
+ echo Assuming synthetic repo from many-files.sh
+ git branch br_base master
+ git branch br_ballast p0006-ballast^
+ git branch br_ballast_alias p0006-ballast^
+ git branch br_ballast_plus_1 p0006-ballast
+ git config --local core.sparsecheckout 1
+ cat >.git/info/sparse-checkout <<-EOF
+ /*
+ !ballast/*
+ EOF
+ else
+ echo Assuming non-synthetic repo...
+ git branch br_base $(git rev-list HEAD | tail -n 1)
+ git branch br_ballast HEAD^ || error "no ancestor commit from current head"
+ git branch br_ballast_alias HEAD^
+ git branch br_ballast_plus_1 HEAD
+ fi &&
+ git checkout -q br_ballast &&
+ nr_files=$(git ls-files | wc -l)
+'
+
+test_perf "read-tree br_base br_ballast ($nr_files)" '
+ git read-tree -m br_base br_ballast -n
+'
+
+test_perf "switch between br_base br_ballast ($nr_files)" '
+ git checkout -q br_base &&
+ git checkout -q br_ballast
+'
+
+test_perf "switch between br_ballast br_ballast_plus_1 ($nr_files)" '
+ git checkout -q br_ballast_plus_1 &&
+ git checkout -q br_ballast
+'
+
+test_perf "switch between aliases ($nr_files)" '
+ git checkout -q br_ballast_alias &&
+ git checkout -q br_ballast
+'
+
+test_done
--- /dev/null
+#!/bin/sh
+# Inflate the size of an EXISTING repo.
+#
+# This script should be run inside the worktree of a TEST repo.
+# It will use the contents of the current HEAD to generate a
+# commit containing copies of the current worktree such that the
+# total size of the commit has at least <target_size> files.
+#
+# Usage: [-t target_size] [-b branch_name]
+
+set -e
+
+target_size=10000
+branch_name=p0006-ballast
+ballast=ballast
+
+while test "$#" -ne 0
+do
+ case "$1" in
+ -b)
+ shift;
+ test "$#" -ne 0 || { echo 'error: -b requires an argument' >&2; exit 1; }
+ branch_name=$1;
+ shift ;;
+ -t)
+ shift;
+ test "$#" -ne 0 || { echo 'error: -t requires an argument' >&2; exit 1; }
+ target_size=$1;
+ shift ;;
+ *)
+ echo "error: unknown option '$1'" >&2; exit 1 ;;
+ esac
+done
+
+git ls-tree -r HEAD >GEN_src_list
+nr_src_files=$(cat GEN_src_list | wc -l)
+
+src_branch=$(git symbolic-ref --short HEAD)
+
+echo "Branch $src_branch initially has $nr_src_files files."
+
+if test $target_size -le $nr_src_files
+then
+ echo "Repository already exceeds target size $target_size."
+ rm GEN_src_list
+ exit 1
+fi
+
+# Create a well-known branch and add one file change to start
+# it off before the ballast.
+git checkout -b $branch_name HEAD
+echo "$target_size" > inflate-repo.params
+git add inflate-repo.params
+git commit -q -m params
+
+# Create the ballast in our branch.
+copy=1
+nr_files=$nr_src_files
+while test $nr_files -lt $target_size
+do
+ sed -e "s| | $ballast/$copy/|" <GEN_src_list |
+ git update-index --index-info
+
+ nr_files=$(expr $nr_files + $nr_src_files)
+ copy=$(expr $copy + 1)
+done
+rm GEN_src_list
+git commit -q -m "ballast"
+
+# Modify 1 file and commit.
+echo "$target_size" >> inflate-repo.params
+git add inflate-repo.params
+git commit -q -m "ballast plus 1"
+
+nr_files=$(git ls-files | wc -l)
+
+# Checkout master to put the repo in a canonical state, because
+# the perf test may need to clone and enable sparse-checkout
+# before attempting to check out a commit with the ballast
+# (which may contain 100K directories and 1M files).
+git checkout $src_branch
+
+echo "Repository inflated. Branch $branch_name has $nr_files files."
+
+exit 0
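
Conceptually, each pass of the ballast loop above re-feeds the 'git ls-tree -r HEAD' listing to 'git update-index --index-info' with every path prefixed by ballast/<copy>/, staging entries such as the one below (path and blob id are placeholders; the ids come straight from the existing HEAD listing, so no new blobs are created):

    100644 blob e69de29bb2d1d6434b8b29ae775ad8c2e48c5391	ballast/1/README
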
--- /dev/null
+#!/bin/sh
+# Generate test data repository using the given parameters.
+# When omitted, we create "gen-many-files-d-w-f.git".
+#
+# Usage: [-r repo] [-d depth] [-w width] [-f files]
+#
+# -r repo: path to the new repo to be generated
+# -d depth: the depth of sub-directories
+# -w width: the number of sub-directories at each level
+# -f files: the number of files created in each directory
+#
+# Note that all files will have the same SHA-1 and each
+# directory at a level will have the same SHA-1, so we
+# will potentially have a large index, but not a large
+# ODB.
+#
+# Ballast will be created under "ballast/".
+
+EMPTY_BLOB=e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
+
+set -e
+
+# (5, 10, 9) will create 999,999 ballast files.
+# (4, 10, 9) will create 99,999 ballast files.
+depth=5
+width=10
+files=9
+
+while test "$#" -ne 0
+do
+ case "$1" in
+ -r)
+ shift;
+ test "$#" -ne 0 || { echo 'error: -r requires an argument' >&2; exit 1; }
+ repo=$1;
+ shift ;;
+ -d)
+ shift;
+ test "$#" -ne 0 || { echo 'error: -d requires an argument' >&2; exit 1; }
+ depth=$1;
+ shift ;;
+ -w)
+ shift;
+ test "$#" -ne 0 || { echo 'error: -w requires an argument' >&2; exit 1; }
+ width=$1;
+ shift ;;
+ -f)
+ shift;
+ test "$#" -ne 0 || { echo 'error: -f requires an argument' >&2; exit 1; }
+ files=$1;
+ shift ;;
+ *)
+ echo "error: unknown option '$1'" >&2; exit 1 ;;
+ esac
+done
+
+# Inflate the index with thousands of empty files.
+# usage: dir depth width files
+fill_index() {
+ awk -v arg_dir=$1 -v arg_depth=$2 -v arg_width=$3 -v arg_files=$4 '
+ function make_paths(dir, depth, width, files, f, w) {
+ for (f = 1; f <= files; f++) {
+ print dir "/file" f
+ }
+ if (depth > 0) {
+ for (w = 1; w <= width; w++) {
+ make_paths(dir "/dir" w, depth - 1, width, files)
+ }
+ }
+ }
+ END { make_paths(arg_dir, arg_depth, arg_width, arg_files) }
+ ' </dev/null |
+ sed "s/^/100644 $EMPTY_BLOB /" |
+ git update-index --index-info
+ return 0
+}
+
+[ -z "$repo" ] && repo=gen-many-files-$depth.$width.$files.git
+
+mkdir $repo
+cd $repo
+git init .
+
+# Create an initial commit just to define master.
+touch many-files.empty
+echo "$depth $width $files" >many-files.params
+git add many-files.*
+git commit -q -m params
+
+# Create ballast for p0006 based upon the given params and
+# inflate the index with thousands of empty files and commit.
+git checkout -b p0006-ballast
+fill_index "ballast" $depth $width $files
+git commit -q -m "ballast"
+
+nr_files=$(git ls-files | wc -l)
+
+# Modify 1 file and commit.
+echo "$depth $width $files" >>many-files.params
+git add many-files.params
+git commit -q -m "ballast plus 1"
+
+# Checkout master to put the repo in a canonical state, because
+# the perf test may need to clone and enable sparse-checkout
+# before attempting to check out a commit with the ballast
+# (which may contain 100K directories and 1M files).
+git checkout master
+
+echo "Repository "$repo" ($depth, $width, $files) created. Ballast $nr_files."
+exit 0
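
For example, assuming this generator lives under t/perf/repos/ and the read-tree/checkout perf script above is its consumer (that script's filename is not shown in this hunk, so the p0006 name below is only a guess), a ballast repository could be created and used like this; test_perf_default_repo honours GIT_PERF_REPO:

    ./repos/many-files.sh -d 4 -w 10 -f 9
    GIT_PERF_REPO=$PWD/gen-many-files-4.10.9.git ./p0006-read-tree-checkout.sh
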
test -z "$LFwithNULdiff"
'
+test_expect_success 'prepare unnormalized' '
+ > .gitattributes &&
+ git config core.autocrlf false &&
+ printf "LINEONE\nLINETWO\r\n" >mixed &&
+ git add mixed .gitattributes &&
+ git commit -m "Add mixed" &&
+ git ls-files --eol | egrep "i/crlf" &&
+ git ls-files --eol | egrep "i/mixed"
+'
+
+test_expect_success 'normalize unnormalized' '
+ echo "* text=auto" >.gitattributes &&
+ rm .git/index &&
+ git add . &&
+ git commit -m "Introduce end-of-line normalization" &&
+ git ls-files --eol | tr "\\t" " " | sort >act &&
+cat >exp <<EOF &&
+i/-text w/-text attr/text=auto LFwithNUL
+i/lf w/crlf attr/text=auto CRLFonly
+i/lf w/crlf attr/text=auto LFonly
+i/lf w/lf attr/text=auto .gitattributes
+i/lf w/mixed attr/text=auto mixed
+EOF
+ test_cmp exp act
+'
+
test_done
--- /dev/null
+#!/bin/sh
+
+test_description='Test strcmp_offset functionality'
+
+. ./test-lib.sh
+
+while read s1 s2 expect
+do
+ test_expect_success "strcmp_offset($s1, $s2)" '
+ echo "$expect" >expect &&
+ test-strcmp-offset "$s1" "$s2" >actual &&
+ test_cmp expect actual
+ '
+done <<-EOF
+abc abc 0 3
+abc def -1 0
+abc abz -1 2
+abc abcdef -1 3
+EOF
+
+test_done
cd bit-error &&
test_commit content &&
corrupt_byte HEAD:content.t 10
+ ) &&
+ git init no-bit-error &&
+ (
+ # distinct commit from bit-error, but containing a
+ # non-corrupted version of the same blob
+ cd no-bit-error &&
+ test_tick &&
+ test_commit content
)
'
)
'
+test_expect_success 'getting type of a corrupt blob fails' '
+ (
+ cd bit-error &&
+ test_must_fail git cat-file -s HEAD:content.t
+ )
+'
+
test_expect_success 'read-tree -u detects bit-errors in blobs' '
(
cd bit-error &&
test_must_fail git clone --local misnamed misnamed-checkout
'
+test_expect_success 'fetch into corrupted repo with index-pack' '
+ (
+ cd bit-error &&
+ test_must_fail git -c transfer.unpackLimit=1 \
+ fetch ../no-bit-error 2>stderr &&
+ test_i18ngrep ! -i collision stderr
+ )
+'
+
test_done
test_description='test config file include directives'
. ./test-lib.sh
+# Force setup_explicit_git_dir() to run until the end. This is needed
+# by some tests to make sure real_path() is called on $GIT_DIR. The
+# caller needs to make sure git commands are run from a subdirectory
+# though or real_path() will not be called.
+force_setup_explicit_git_dir() {
+ GIT_DIR="$(pwd)/.git"
+ GIT_WORK_TREE="$(pwd)"
+ export GIT_DIR GIT_WORK_TREE
+}
+
test_expect_success 'include file by absolute path' '
echo "[test]one = 1" >one &&
echo "[include]path = \"$(pwd)/one\"" >.gitconfig &&
)
'
+test_expect_success 'conditional include, early config reading' '
+ (
+ cd foo &&
+ echo "[includeIf \"gitdir:foo/\"]path=bar6" >>.git/config &&
+ echo "[test]six=6" >.git/bar6 &&
+ echo 6 >expect &&
+ test-config read_early_config test.six >actual &&
+ test_cmp expect actual
+ )
+'
+
+test_expect_success SYMLINKS 'conditional include, set up symlinked $HOME' '
+ mkdir real-home &&
+ ln -s real-home home &&
+ (
+ HOME="$TRASH_DIRECTORY/home" &&
+ export HOME &&
+ cd "$HOME" &&
+
+ git init foo &&
+ cd foo &&
+ mkdir sub
+ )
+'
+
+test_expect_success SYMLINKS 'conditional include, $HOME expansion with symlinks' '
+ (
+ HOME="$TRASH_DIRECTORY/home" &&
+ export HOME &&
+ cd "$HOME"/foo &&
+
+ echo "[includeIf \"gitdir:~/foo/\"]path=bar2" >>.git/config &&
+ echo "[test]two=2" >.git/bar2 &&
+ echo 2 >expect &&
+ force_setup_explicit_git_dir &&
+ git -C sub config test.two >actual &&
+ test_cmp expect actual
+ )
+'
+
+test_expect_success SYMLINKS 'conditional include, relative path with symlinks' '
+ echo "[includeIf \"gitdir:./foo/.git\"]path=bar4" >home/.gitconfig &&
+ echo "[test]four=4" >home/bar4 &&
+ (
+ HOME="$TRASH_DIRECTORY/home" &&
+ export HOME &&
+ cd "$HOME"/foo &&
+
+ echo 4 >expect &&
+ force_setup_explicit_git_dir &&
+ git -C sub config test.four >actual &&
+ test_cmp expect actual
+ )
+'
+
test_expect_success 'include cycles are detected' '
cat >.gitconfig <<-\EOF &&
[test]value = gitconfig
test xdg = "$(cat output)"
'
+cmdline_config="'test.source=cmdline'"
+test_expect_success 'read config file in right order' '
+ echo "[test]source = home" >>.gitconfig &&
+ git init foo &&
+ (
+ cd foo &&
+ echo "[test]source = repo" >>.git/config &&
+ GIT_CONFIG_PARAMETERS=$cmdline_config test-config \
+ read_early_config test.source >actual &&
+ cat >expected <<-\EOF &&
+ home
+ repo
+ cmdline
+ EOF
+ test_cmp expected actual
+ )
+'
+
test_with_config () {
rm -rf throwaway &&
git init throwaway &&
cd -
'
-test_expect_success \
- "create $m" \
- "git update-ref $m $A &&
- test $A"' = $(cat .git/'"$m"')'
-test_expect_success \
- "create $m with oldvalue verification" \
- "git update-ref $m $B $A &&
- test $B"' = $(cat .git/'"$m"')'
+test_expect_success "create $m" '
+ git update-ref $m $A &&
+ test $A = $(cat .git/$m)
+'
+test_expect_success "create $m with oldvalue verification" '
+ git update-ref $m $B $A &&
+ test $B = $(cat .git/$m)
+'
test_expect_success "fail to delete $m with stale ref" '
test_must_fail git update-ref -d $m $A &&
test $B = "$(cat .git/$m)"
test_must_fail git update-ref $n $A
'
-test_expect_success \
- "create $m (by HEAD)" \
- "git update-ref HEAD $A &&
- test $A"' = $(cat .git/'"$m"')'
-test_expect_success \
- "create $m (by HEAD) with oldvalue verification" \
- "git update-ref HEAD $B $A &&
- test $B"' = $(cat .git/'"$m"')'
+test_expect_success "create $m (by HEAD)" '
+ git update-ref HEAD $A &&
+ test $A = $(cat .git/$m)
+'
+test_expect_success "create $m (by HEAD) with oldvalue verification" '
+ git update-ref HEAD $B $A &&
+ test $B = $(cat .git/$m)
+'
test_expect_success "fail to delete $m (by HEAD) with stale ref" '
test_must_fail git update-ref -d HEAD $A &&
test $B = $(cat .git/$m)
test_must_fail git reflog exists $outside
'
-test_expect_success \
- "create $m (by HEAD)" \
- "git update-ref HEAD $A &&
- test $A"' = $(cat .git/'"$m"')'
-test_expect_success \
- "pack refs" \
- "git pack-refs --all"
-test_expect_success \
- "move $m (by HEAD)" \
- "git update-ref HEAD $B $A &&
- test $B"' = $(cat .git/'"$m"')'
+test_expect_success "create $m (by HEAD)" '
+ git update-ref HEAD $A &&
+ test $A = $(cat .git/$m)
+'
+test_expect_success 'pack refs' '
+ git pack-refs --all
+'
+test_expect_success "move $m (by HEAD)" '
+ git update-ref HEAD $B $A &&
+ test $B = $(cat .git/$m)
+'
test_expect_success "delete $m (by HEAD) should remove both packed and loose $m" '
test_when_finished "rm -f .git/$m" &&
git update-ref -d HEAD $B &&
'
cp -f .git/HEAD .git/HEAD.orig
-test_expect_success "delete symref without dereference" '
+test_expect_success 'delete symref without dereference' '
test_when_finished "cp -f .git/HEAD.orig .git/HEAD" &&
git update-ref --no-deref -d HEAD &&
test_path_is_missing .git/HEAD
'
-test_expect_success "delete symref without dereference when the referred ref is packed" '
+test_expect_success 'delete symref without dereference when the referred ref is packed' '
test_when_finished "cp -f .git/HEAD.orig .git/HEAD" &&
echo foo >foo.c &&
git add foo.c &&
test_path_is_missing .git/refs/heads/ref-to-bad
'
-test_expect_success '(not) create HEAD with old sha1' "
+test_expect_success '(not) create HEAD with old sha1' '
test_must_fail git update-ref HEAD $A $B
-"
+'
test_expect_success "(not) prior created .git/$m" '
test_when_finished "rm -f .git/$m" &&
test_path_is_missing .git/$m
'
-test_expect_success \
- "create HEAD" \
- "git update-ref HEAD $A"
-test_expect_success '(not) change HEAD with wrong SHA1' "
+test_expect_success 'create HEAD' '
+ git update-ref HEAD $A
+'
+test_expect_success '(not) change HEAD with wrong SHA1' '
test_must_fail git update-ref HEAD $B $Z
-"
+'
test_expect_success "(not) changed .git/$m" '
test_when_finished "rm -f .git/$m" &&
! test $B = $(cat .git/$m)
'
rm -f .git/logs/refs/heads/master
-test_expect_success \
- "create $m (logged by touch)" \
- 'test_config core.logAllRefUpdates false &&
- GIT_COMMITTER_DATE="2005-05-26 23:30" \
- git update-ref --create-reflog HEAD '"$A"' -m "Initial Creation" &&
- test '"$A"' = $(cat .git/'"$m"')'
-test_expect_success \
- "update $m (logged by touch)" \
- 'test_config core.logAllRefUpdates false &&
- GIT_COMMITTER_DATE="2005-05-26 23:31" \
- git update-ref HEAD'" $B $A "'-m "Switch" &&
- test '"$B"' = $(cat .git/'"$m"')'
-test_expect_success \
- "set $m (logged by touch)" \
- 'test_config core.logAllRefUpdates false &&
- GIT_COMMITTER_DATE="2005-05-26 23:41" \
- git update-ref HEAD'" $A &&
- test $A"' = $(cat .git/'"$m"')'
-
-test_expect_success "empty directory removal" '
+test_expect_success "create $m (logged by touch)" '
+ test_config core.logAllRefUpdates false &&
+ GIT_COMMITTER_DATE="2005-05-26 23:30" \
+ git update-ref --create-reflog HEAD $A -m "Initial Creation" &&
+ test $A = $(cat .git/$m)
+'
+test_expect_success "update $m (logged by touch)" '
+ test_config core.logAllRefUpdates false &&
+ GIT_COMMITTER_DATE="2005-05-26 23:31" \
+ git update-ref HEAD $B $A -m "Switch" &&
+ test $B = $(cat .git/$m)
+'
+test_expect_success "set $m (logged by touch)" '
+ test_config core.logAllRefUpdates false &&
+ GIT_COMMITTER_DATE="2005-05-26 23:41" \
+ git update-ref HEAD $A &&
+ test $A = $(cat .git/$m)
+'
+
+test_expect_success 'empty directory removal' '
git branch d1/d2/r1 HEAD &&
git branch d1/r2 HEAD &&
test_path_is_file .git/refs/heads/d1/d2/r1 &&
test_path_is_file .git/logs/refs/heads/d1/r2
'
-test_expect_success "symref empty directory removal" '
+test_expect_success 'symref empty directory removal' '
git branch e1/e2/r1 HEAD &&
git branch e1/r2 HEAD &&
git checkout e1/e2/r1 &&
test_cmp expect .git/logs/$m
'
-test_expect_success \
- "create $m (logged by config)" \
- 'test_config core.logAllRefUpdates true &&
- GIT_COMMITTER_DATE="2005-05-26 23:32" \
- git update-ref HEAD'" $A "'-m "Initial Creation" &&
- test '"$A"' = $(cat .git/'"$m"')'
-test_expect_success \
- "update $m (logged by config)" \
- 'test_config core.logAllRefUpdates true &&
- GIT_COMMITTER_DATE="2005-05-26 23:33" \
- git update-ref HEAD'" $B $A "'-m "Switch" &&
- test '"$B"' = $(cat .git/'"$m"')'
-test_expect_success \
- "set $m (logged by config)" \
- 'test_config core.logAllRefUpdates true &&
- GIT_COMMITTER_DATE="2005-05-26 23:43" \
- git update-ref HEAD '"$A &&
- test $A"' = $(cat .git/'"$m"')'
+test_expect_success "create $m (logged by config)" '
+ test_config core.logAllRefUpdates true &&
+ GIT_COMMITTER_DATE="2005-05-26 23:32" \
+ git update-ref HEAD $A -m "Initial Creation" &&
+ test $A = $(cat .git/$m)
+'
+test_expect_success "update $m (logged by config)" '
+ test_config core.logAllRefUpdates true &&
+ GIT_COMMITTER_DATE="2005-05-26 23:33" \
+	git update-ref HEAD $B $A -m "Switch" &&
+ test $B = $(cat .git/$m)
+'
+test_expect_success "set $m (logged by config)" '
+ test_config core.logAllRefUpdates true &&
+ GIT_COMMITTER_DATE="2005-05-26 23:43" \
+ git update-ref HEAD $A &&
+ test $A = $(cat .git/$m)
+'
cat >expect <<EOF
$Z $A $GIT_COMMITTER_NAME <$GIT_COMMITTER_EMAIL> 1117150320 +0000 Initial Creation
$A $B $GIT_COMMITTER_NAME <$GIT_COMMITTER_EMAIL> 1117150380 +0000 Switch
$B $A $GIT_COMMITTER_NAME <$GIT_COMMITTER_EMAIL> 1117150980 +0000
EOF
-test_expect_success \
- "verifying $m's log (logged by config)" \
- 'test_when_finished "rm -f .git/$m .git/logs/$m expect" &&
- test_cmp expect .git/logs/$m'
+test_expect_success "verifying $m's log (logged by config)" '
+ test_when_finished "rm -f .git/$m .git/logs/$m expect" &&
+ test_cmp expect .git/logs/$m
+'
git update-ref $m $D
cat >.git/logs/$m <<EOF
ed="Thu, 26 May 2005 18:32:00 -0500"
gd="Thu, 26 May 2005 18:33:00 -0500"
ld="Thu, 26 May 2005 18:43:00 -0500"
-test_expect_success \
- 'Query "master@{May 25 2005}" (before history)' \
- 'test_when_finished "rm -f o e" &&
- git rev-parse --verify "master@{May 25 2005}" >o 2>e &&
- test '"$C"' = $(cat o) &&
- test "warning: Log for '\'master\'' only goes back to $ed." = "$(cat e)"'
-test_expect_success \
- "Query master@{2005-05-25} (before history)" \
- 'test_when_finished "rm -f o e" &&
- git rev-parse --verify master@{2005-05-25} >o 2>e &&
- test '"$C"' = $(cat o) &&
- echo test "warning: Log for '\'master\'' only goes back to $ed." = "$(cat e)"'
-test_expect_success \
- 'Query "master@{May 26 2005 23:31:59}" (1 second before history)' \
- 'test_when_finished "rm -f o e" &&
- git rev-parse --verify "master@{May 26 2005 23:31:59}" >o 2>e &&
- test '"$C"' = $(cat o) &&
- test "warning: Log for '\''master'\'' only goes back to $ed." = "$(cat e)"'
-test_expect_success \
- 'Query "master@{May 26 2005 23:32:00}" (exactly history start)' \
- 'test_when_finished "rm -f o e" &&
- git rev-parse --verify "master@{May 26 2005 23:32:00}" >o 2>e &&
- test '"$C"' = $(cat o) &&
- test "" = "$(cat e)"'
-test_expect_success \
- 'Query "master@{May 26 2005 23:32:30}" (first non-creation change)' \
- 'test_when_finished "rm -f o e" &&
- git rev-parse --verify "master@{May 26 2005 23:32:30}" >o 2>e &&
- test '"$A"' = $(cat o) &&
- test "" = "$(cat e)"'
-test_expect_success \
- 'Query "master@{2005-05-26 23:33:01}" (middle of history with gap)' \
- 'test_when_finished "rm -f o e" &&
- git rev-parse --verify "master@{2005-05-26 23:33:01}" >o 2>e &&
- test '"$B"' = $(cat o) &&
- test "warning: Log for ref '"$m has gap after $gd"'." = "$(cat e)"'
-test_expect_success \
- 'Query "master@{2005-05-26 23:38:00}" (middle of history)' \
- 'test_when_finished "rm -f o e" &&
- git rev-parse --verify "master@{2005-05-26 23:38:00}" >o 2>e &&
- test '"$Z"' = $(cat o) &&
- test "" = "$(cat e)"'
-test_expect_success \
- 'Query "master@{2005-05-26 23:43:00}" (exact end of history)' \
- 'test_when_finished "rm -f o e" &&
- git rev-parse --verify "master@{2005-05-26 23:43:00}" >o 2>e &&
- test '"$E"' = $(cat o) &&
- test "" = "$(cat e)"'
-test_expect_success \
- 'Query "master@{2005-05-28}" (past end of history)' \
- 'test_when_finished "rm -f o e" &&
- git rev-parse --verify "master@{2005-05-28}" >o 2>e &&
- test '"$D"' = $(cat o) &&
- test "warning: Log for ref '"$m unexpectedly ended on $ld"'." = "$(cat e)"'
-
+test_expect_success 'Query "master@{May 25 2005}" (before history)' '
+ test_when_finished "rm -f o e" &&
+ git rev-parse --verify "master@{May 25 2005}" >o 2>e &&
+ test $C = $(cat o) &&
+ test "warning: Log for '\''master'\'' only goes back to $ed." = "$(cat e)"
+'
+test_expect_success 'Query master@{2005-05-25} (before history)' '
+ test_when_finished "rm -f o e" &&
+ git rev-parse --verify master@{2005-05-25} >o 2>e &&
+ test $C = $(cat o) &&
+	test "warning: Log for '\''master'\'' only goes back to $ed." = "$(cat e)"
+'
+test_expect_success 'Query "master@{May 26 2005 23:31:59}" (1 second before history)' '
+ test_when_finished "rm -f o e" &&
+ git rev-parse --verify "master@{May 26 2005 23:31:59}" >o 2>e &&
+ test $C = $(cat o) &&
+ test "warning: Log for '\''master'\'' only goes back to $ed." = "$(cat e)"
+'
+test_expect_success 'Query "master@{May 26 2005 23:32:00}" (exactly history start)' '
+ test_when_finished "rm -f o e" &&
+ git rev-parse --verify "master@{May 26 2005 23:32:00}" >o 2>e &&
+ test $C = $(cat o) &&
+ test "" = "$(cat e)"
+'
+test_expect_success 'Query "master@{May 26 2005 23:32:30}" (first non-creation change)' '
+ test_when_finished "rm -f o e" &&
+ git rev-parse --verify "master@{May 26 2005 23:32:30}" >o 2>e &&
+ test $A = $(cat o) &&
+ test "" = "$(cat e)"
+'
+test_expect_success 'Query "master@{2005-05-26 23:33:01}" (middle of history with gap)' '
+ test_when_finished "rm -f o e" &&
+ git rev-parse --verify "master@{2005-05-26 23:33:01}" >o 2>e &&
+ test $B = $(cat o) &&
+ test "warning: Log for ref $m has gap after $gd." = "$(cat e)"
+'
+test_expect_success 'Query "master@{2005-05-26 23:38:00}" (middle of history)' '
+ test_when_finished "rm -f o e" &&
+ git rev-parse --verify "master@{2005-05-26 23:38:00}" >o 2>e &&
+ test $Z = $(cat o) &&
+ test "" = "$(cat e)"
+'
+test_expect_success 'Query "master@{2005-05-26 23:43:00}" (exact end of history)' '
+ test_when_finished "rm -f o e" &&
+ git rev-parse --verify "master@{2005-05-26 23:43:00}" >o 2>e &&
+ test $E = $(cat o) &&
+ test "" = "$(cat e)"
+'
+test_expect_success 'Query "master@{2005-05-28}" (past end of history)' '
+ test_when_finished "rm -f o e" &&
+ git rev-parse --verify "master@{2005-05-28}" >o 2>e &&
+ test $D = $(cat o) &&
+ test "warning: Log for ref $m unexpectedly ended on $ld." = "$(cat e)"
+'
rm -f .git/$m .git/logs/$m expect
-test_expect_success \
- 'creating initial files' \
- 'test_when_finished rm -f M &&
- echo TEST >F &&
- git add F &&
- GIT_AUTHOR_DATE="2005-05-26 23:30" \
- GIT_COMMITTER_DATE="2005-05-26 23:30" git commit -m add -a &&
- h_TEST=$(git rev-parse --verify HEAD) &&
- echo The other day this did not work. >M &&
- echo And then Bob told me how to fix it. >>M &&
- echo OTHER >F &&
- GIT_AUTHOR_DATE="2005-05-26 23:41" \
- GIT_COMMITTER_DATE="2005-05-26 23:41" git commit -F M -a &&
- h_OTHER=$(git rev-parse --verify HEAD) &&
- GIT_AUTHOR_DATE="2005-05-26 23:44" \
- GIT_COMMITTER_DATE="2005-05-26 23:44" git commit --amend &&
- h_FIXED=$(git rev-parse --verify HEAD) &&
- echo Merged initial commit and a later commit. >M &&
- echo $h_TEST >.git/MERGE_HEAD &&
- GIT_AUTHOR_DATE="2005-05-26 23:45" \
- GIT_COMMITTER_DATE="2005-05-26 23:45" git commit -F M &&
- h_MERGED=$(git rev-parse --verify HEAD)'
+test_expect_success 'creating initial files' '
+ test_when_finished rm -f M &&
+ echo TEST >F &&
+ git add F &&
+ GIT_AUTHOR_DATE="2005-05-26 23:30" \
+ GIT_COMMITTER_DATE="2005-05-26 23:30" git commit -m add -a &&
+ h_TEST=$(git rev-parse --verify HEAD) &&
+ echo The other day this did not work. >M &&
+ echo And then Bob told me how to fix it. >>M &&
+ echo OTHER >F &&
+ GIT_AUTHOR_DATE="2005-05-26 23:41" \
+ GIT_COMMITTER_DATE="2005-05-26 23:41" git commit -F M -a &&
+ h_OTHER=$(git rev-parse --verify HEAD) &&
+ GIT_AUTHOR_DATE="2005-05-26 23:44" \
+ GIT_COMMITTER_DATE="2005-05-26 23:44" git commit --amend &&
+ h_FIXED=$(git rev-parse --verify HEAD) &&
+ echo Merged initial commit and a later commit. >M &&
+ echo $h_TEST >.git/MERGE_HEAD &&
+ GIT_AUTHOR_DATE="2005-05-26 23:45" \
+ GIT_COMMITTER_DATE="2005-05-26 23:45" git commit -F M &&
+ h_MERGED=$(git rev-parse --verify HEAD)
+'
cat >expect <<EOF
$Z $h_TEST $GIT_COMMITTER_NAME <$GIT_COMMITTER_EMAIL> 1117150200 +0000 commit (initial): add
$h_OTHER $h_FIXED $GIT_COMMITTER_NAME <$GIT_COMMITTER_EMAIL> 1117151040 +0000 commit (amend): The other day this did not work.
$h_FIXED $h_MERGED $GIT_COMMITTER_NAME <$GIT_COMMITTER_EMAIL> 1117151100 +0000 commit (merge): Merged initial commit and a later commit.
EOF
-test_expect_success \
- 'git commit logged updates' \
- "test_cmp expect .git/logs/$m"
+test_expect_success 'git commit logged updates' '
+ test_cmp expect .git/logs/$m
+'
unset h_TEST h_OTHER h_FIXED h_MERGED
-test_expect_success \
- 'git cat-file blob master:F (expect OTHER)' \
- 'test OTHER = $(git cat-file blob master:F)'
-test_expect_success \
- 'git cat-file blob master@{2005-05-26 23:30}:F (expect TEST)' \
- 'test TEST = $(git cat-file blob "master@{2005-05-26 23:30}:F")'
-test_expect_success \
- 'git cat-file blob master@{2005-05-26 23:42}:F (expect OTHER)' \
- 'test OTHER = $(git cat-file blob "master@{2005-05-26 23:42}:F")'
+test_expect_success 'git cat-file blob master:F (expect OTHER)' '
+ test OTHER = $(git cat-file blob master:F)
+'
+test_expect_success 'git cat-file blob master@{2005-05-26 23:30}:F (expect TEST)' '
+ test TEST = $(git cat-file blob "master@{2005-05-26 23:30}:F")
+'
+test_expect_success 'git cat-file blob master@{2005-05-26 23:42}:F (expect OTHER)' '
+ test OTHER = $(git cat-file blob "master@{2005-05-26 23:42}:F")
+'
a=refs/heads/a
b=refs/heads/b
--- /dev/null
+#!/bin/sh
+
+test_description='test main ref store api'
+
+. ./test-lib.sh
+
+RUN="test-ref-store main"
+
+test_expect_success 'pack_refs(PACK_REFS_ALL | PACK_REFS_PRUNE)' '
+ test_commit one &&
+ N=`find .git/refs -type f | wc -l` &&
+ test "$N" != 0 &&
+ $RUN pack-refs 3 &&
+	N=`find .git/refs -type f | wc -l` &&
+	test "$N" = 0
+'
+
+test_expect_success 'peel_ref(new-tag)' '
+ git rev-parse HEAD >expected &&
+ git tag -a -m new-tag new-tag HEAD &&
+ $RUN peel-ref refs/tags/new-tag >actual &&
+ test_cmp expected actual
+'
+
+test_expect_success 'create_symref(FOO, refs/heads/master)' '
+ $RUN create-symref FOO refs/heads/master nothing &&
+ echo refs/heads/master >expected &&
+ git symbolic-ref FOO >actual &&
+ test_cmp expected actual
+'
+
+test_expect_success 'delete_refs(FOO, refs/tags/new-tag)' '
+ git rev-parse FOO -- &&
+ git rev-parse refs/tags/new-tag -- &&
+ $RUN delete-refs 0 FOO refs/tags/new-tag &&
+ test_must_fail git rev-parse FOO -- &&
+ test_must_fail git rev-parse refs/tags/new-tag --
+'
+
+test_expect_success 'rename_refs(master, new-master)' '
+ git rev-parse master >expected &&
+ $RUN rename-ref refs/heads/master refs/heads/new-master &&
+ git rev-parse new-master >actual &&
+ test_cmp expected actual &&
+ test_commit recreate-master
+'
+
+test_expect_success 'for_each_ref(refs/heads/)' '
+ $RUN for-each-ref refs/heads/ | cut -c 42- >actual &&
+ cat >expected <<-\EOF &&
+ master 0x0
+ new-master 0x0
+ EOF
+ test_cmp expected actual
+'
+
+test_expect_success 'for_each_ref() is sorted' '
+ $RUN for-each-ref refs/heads/ | cut -c 42- >actual &&
+ sort actual > expected &&
+ test_cmp expected actual
+'
+
+test_expect_success 'resolve_ref(new-master)' '
+ SHA1=`git rev-parse new-master` &&
+ echo "$SHA1 refs/heads/new-master 0x0" >expected &&
+ $RUN resolve-ref refs/heads/new-master 0 >actual &&
+ test_cmp expected actual
+'
+
+test_expect_success 'verify_ref(new-master)' '
+ $RUN verify-ref refs/heads/new-master
+'
+
+test_expect_success 'for_each_reflog()' '
+ $RUN for-each-reflog | sort | cut -c 42- >actual &&
+ cat >expected <<-\EOF &&
+ HEAD 0x1
+ refs/heads/master 0x0
+ refs/heads/new-master 0x0
+ EOF
+ test_cmp expected actual
+'
+
+test_expect_success 'for_each_reflog_ent()' '
+ $RUN for-each-reflog-ent HEAD >actual &&
+ head -n1 actual | grep one &&
+ tail -n2 actual | head -n1 | grep recreate-master
+'
+
+test_expect_success 'for_each_reflog_ent_reverse()' '
+ $RUN for-each-reflog-ent-reverse HEAD >actual &&
+ head -n1 actual | grep recreate-master &&
+ tail -n2 actual | head -n1 | grep one
+'
+
+test_expect_success 'reflog_exists(HEAD)' '
+ $RUN reflog-exists HEAD
+'
+
+test_expect_success 'delete_reflog(HEAD)' '
+ $RUN delete-reflog HEAD &&
+ ! test -f .git/logs/HEAD
+'
+
+test_expect_success 'create-reflog(HEAD)' '
+ $RUN create-reflog HEAD 1 &&
+ test -f .git/logs/HEAD
+'
+
+test_expect_success 'update_ref(refs/heads/foo)' '
+ git checkout -b foo &&
+ FOO_SHA1=`git rev-parse foo` &&
+ git checkout --detach &&
+ test_commit bar-commit &&
+ git checkout -b bar &&
+ BAR_SHA1=`git rev-parse bar` &&
+ $RUN update-ref updating refs/heads/foo $BAR_SHA1 $FOO_SHA1 0 &&
+ echo $BAR_SHA1 >expected &&
+ git rev-parse refs/heads/foo >actual &&
+ test_cmp expected actual
+'
+
+test_expect_success 'delete_ref(refs/heads/foo)' '
+ SHA1=`git rev-parse foo` &&
+ git checkout --detach &&
+ $RUN delete-ref msg refs/heads/foo $SHA1 0 &&
+ test_must_fail git rev-parse refs/heads/foo --
+'
+
+test_done
--- /dev/null
+#!/bin/sh
+
+test_description='test submodule ref store api'
+
+. ./test-lib.sh
+
+RUN="test-ref-store submodule:sub"
+
+test_expect_success 'setup' '
+ git init sub &&
+ (
+ cd sub &&
+ test_commit first &&
+ git checkout -b new-master
+ )
+'
+
+test_expect_success 'pack_refs() not allowed' '
+ test_must_fail $RUN pack-refs 3
+'
+
+test_expect_success 'peel_ref(new-tag)' '
+ git -C sub rev-parse HEAD >expected &&
+ git -C sub tag -a -m new-tag new-tag HEAD &&
+ $RUN peel-ref refs/tags/new-tag >actual &&
+ test_cmp expected actual
+'
+
+test_expect_success 'create_symref() not allowed' '
+ test_must_fail $RUN create-symref FOO refs/heads/master nothing
+'
+
+test_expect_success 'delete_refs() not allowed' '
+ test_must_fail $RUN delete-refs 0 FOO refs/tags/new-tag
+'
+
+test_expect_success 'rename_refs() not allowed' '
+ test_must_fail $RUN rename-ref refs/heads/master refs/heads/new-master
+'
+
+test_expect_success 'for_each_ref(refs/heads/)' '
+ $RUN for-each-ref refs/heads/ | cut -c 42- >actual &&
+ cat >expected <<-\EOF &&
+ master 0x0
+ new-master 0x0
+ EOF
+ test_cmp expected actual
+'
+
+test_expect_success 'for_each_ref() is sorted' '
+ $RUN for-each-ref refs/heads/ | cut -c 42- >actual &&
+ sort actual > expected &&
+ test_cmp expected actual
+'
+
+test_expect_success 'resolve_ref(master)' '
+ SHA1=`git -C sub rev-parse master` &&
+ echo "$SHA1 refs/heads/master 0x0" >expected &&
+ $RUN resolve-ref refs/heads/master 0 >actual &&
+ test_cmp expected actual
+'
+
+test_expect_success 'verify_ref(new-master)' '
+ $RUN verify-ref refs/heads/new-master
+'
+
+test_expect_success 'for_each_reflog()' '
+ $RUN for-each-reflog | sort | cut -c 42- >actual &&
+ cat >expected <<-\EOF &&
+ HEAD 0x1
+ refs/heads/master 0x0
+ refs/heads/new-master 0x0
+ EOF
+ test_cmp expected actual
+'
+
+test_expect_success 'for_each_reflog_ent()' '
+ $RUN for-each-reflog-ent HEAD >actual && cat actual &&
+ head -n1 actual | grep first &&
+ tail -n2 actual | head -n1 | grep master.to.new
+'
+
+test_expect_success 'for_each_reflog_ent_reverse()' '
+ $RUN for-each-reflog-ent-reverse HEAD >actual &&
+ head -n1 actual | grep master.to.new &&
+ tail -n2 actual | head -n1 | grep first
+'
+
+test_expect_success 'reflog_exists(HEAD)' '
+ $RUN reflog-exists HEAD
+'
+
+test_expect_success 'delete_reflog() not allowed' '
+ test_must_fail $RUN delete-reflog HEAD
+'
+
+test_expect_success 'create-reflog() not allowed' '
+ test_must_fail $RUN create-reflog HEAD 1
+'
+
+test_done
! grep $blob out
'
+test_expect_success 'detect corrupt index file in fsck' '
+ cp .git/index .git/index.backup &&
+ test_when_finished "mv .git/index.backup .git/index" &&
+ echo zzzzzzzz >zzzzzzzz &&
+ git add zzzzzzzz &&
+ sed -e "s/zzzzzzzz/yyyyyyyy/" .git/index >.git/index.yyy &&
+ mv .git/index.yyy .git/index &&
+ # Confirm that fsck detects invalid checksum
+ test_must_fail git fsck --cache &&
+ # Confirm that status no longer complains about invalid checksum
+ git status
+'
+
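(Aside, not part of the patch: the index file ends with a hash computed over everything that precedes it; for a SHA-1 repository that trailer is the last 20 bytes. Assuming "head -c", "sha1sum" and "od" are available, the corruption exercised above can also be shown by hand:

	# recompute the checksum over everything but the 20-byte trailer
	size=$(wc -c <.git/index)
	head -c $((size - 20)) .git/index | sha1sum
	# show the trailer actually stored in the file
	tail -c 20 .git/index | od -An -tx1

With a healthy index the two values agree; after the sed-based corruption they differ, which is the mismatch "git fsck --cache" rejects.)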
test_done
test_expect_success 'rev-parse --git-path objects linked worktree' '
echo "$(git rev-parse --show-toplevel)/.git/objects" >expect &&
- test_when_finished "rm -rf linked-tree && git worktree prune" &&
+ test_when_finished "rm -rf linked-tree actual expect && git worktree prune" &&
git worktree add --detach linked-tree master &&
git -C linked-tree rev-parse --git-path objects >actual &&
test_cmp expect actual
test_expect_success '"list" all worktrees from main' '
echo "$(git rev-parse --show-toplevel) $(git rev-parse --short HEAD) [$(git symbolic-ref --short HEAD)]" >expect &&
- test_when_finished "rm -rf here && git worktree prune" &&
+ test_when_finished "rm -rf here out actual expect && git worktree prune" &&
git worktree add --detach here master &&
echo "$(git -C here rev-parse --show-toplevel) $(git rev-parse --short HEAD) (detached HEAD)" >>expect &&
- git worktree list | sed "s/ */ /g" >actual &&
+ git worktree list >out &&
+ sed "s/ */ /g" <out >actual &&
test_cmp expect actual
'
test_expect_success '"list" all worktrees from linked' '
echo "$(git rev-parse --show-toplevel) $(git rev-parse --short HEAD) [$(git symbolic-ref --short HEAD)]" >expect &&
- test_when_finished "rm -rf here && git worktree prune" &&
+ test_when_finished "rm -rf here out actual expect && git worktree prune" &&
git worktree add --detach here master &&
echo "$(git -C here rev-parse --show-toplevel) $(git rev-parse --short HEAD) (detached HEAD)" >>expect &&
- git -C here worktree list | sed "s/ */ /g" >actual &&
+ git -C here worktree list >out &&
+ sed "s/ */ /g" <out >actual &&
test_cmp expect actual
'
echo "HEAD $(git rev-parse HEAD)" >>expect &&
echo "branch $(git symbolic-ref HEAD)" >>expect &&
echo >>expect &&
- test_when_finished "rm -rf here && git worktree prune" &&
+ test_when_finished "rm -rf here actual expect && git worktree prune" &&
git worktree add --detach here master &&
echo "worktree $(git -C here rev-parse --show-toplevel)" >>expect &&
echo "HEAD $(git rev-parse HEAD)" >>expect &&
'
test_expect_success '"list" all worktrees from bare main' '
- test_when_finished "rm -rf there && git -C bare1 worktree prune" &&
+ test_when_finished "rm -rf there out actual expect && git -C bare1 worktree prune" &&
git -C bare1 worktree add --detach ../there master &&
echo "$(pwd)/bare1 (bare)" >expect &&
echo "$(git -C there rev-parse --show-toplevel) $(git -C there rev-parse --short HEAD) (detached HEAD)" >>expect &&
- git -C bare1 worktree list | sed "s/ */ /g" >actual &&
+ git -C bare1 worktree list >out &&
+ sed "s/ */ /g" <out >actual &&
test_cmp expect actual
'
test_expect_success '"list" all worktrees --porcelain from bare main' '
- test_when_finished "rm -rf there && git -C bare1 worktree prune" &&
+ test_when_finished "rm -rf there actual expect && git -C bare1 worktree prune" &&
git -C bare1 worktree add --detach ../there master &&
echo "worktree $(pwd)/bare1" >expect &&
echo "bare" >>expect &&
'
test_expect_success '"list" all worktrees from linked with a bare main' '
- test_when_finished "rm -rf there && git -C bare1 worktree prune" &&
+ test_when_finished "rm -rf there out actual expect && git -C bare1 worktree prune" &&
git -C bare1 worktree add --detach ../there master &&
echo "$(pwd)/bare1 (bare)" >expect &&
echo "$(git -C there rev-parse --show-toplevel) $(git -C there rev-parse --short HEAD) (detached HEAD)" >>expect &&
- git -C there worktree list | sed "s/ */ /g" >actual &&
+ git -C there worktree list >out &&
+ sed "s/ */ /g" <out >actual &&
test_cmp expect actual
'
cd linked &&
echo "worktree $(pwd)" >expected &&
echo "ref: .broken" >../.git/HEAD &&
- git worktree list --porcelain | head -n 3 >actual &&
+ git worktree list --porcelain >out &&
+ head -n 3 out >actual &&
test_cmp ../expected actual &&
- git worktree list | head -n 1 >actual.2 &&
+ git worktree list >out &&
+ head -n 1 out >actual.2 &&
grep -F "(error)" actual.2
)
'
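(The worktree-list hunks above all apply the same test-hygiene fix: a git command should not sit on the upstream side of a pipe, because the pipeline's exit status would hide a git failure. A minimal before/after sketch of the pattern, illustrative only:

	# fragile: a failing "git worktree list" goes unnoticed
	git worktree list | sed "s/ */ /g" >actual
	# robust: capture the output first, then post-process it
	git worktree list >out &&
	sed "s/ */ /g" <out >actual
)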
test_commit new &&
git worktree add ../first &&
git worktree add ../second &&
- git worktree list --porcelain | grep ^worktree >actual
+ git worktree list --porcelain >out &&
+ grep ^worktree out >actual
) &&
cat >expected <<-EOF &&
worktree $(pwd)/sorted/main
git -C submodule/subsub commit -m "add d" &&
git -C submodule submodule add ./subsub &&
git -C submodule commit -m "added subsub" &&
+ git submodule absorbgitdirs &&
git ls-files --recurse-submodules >actual &&
test_cmp expect actual
'
+test_expect_success 'ls-files works with GIT_DIR' '
+ cat >expect <<-\EOF &&
+ .gitmodules
+ c
+ subsub/d
+ EOF
+
+ git --git-dir=submodule/.git ls-files --recurse-submodules >actual &&
+ test_cmp expect actual
+'
+
test_expect_success '--recurse-submodules and pathspecs setup' '
echo e >submodule/subsub/e.txt &&
git -C submodule/subsub add e.txt &&
--- /dev/null
+#!/bin/sh
+
+test_description='Test the lazy init name hash with various folder structures'
+
+. ./test-lib.sh
+
+if test 1 -eq $($GIT_BUILD_DIR/t/helper/test-online-cpus)
+then
+ skip_all='skipping lazy-init tests, single cpu'
+ test_done
+fi
+
+LAZY_THREAD_COST=2000
+
+test_expect_success 'no buffer overflow in lazy_init_name_hash' '
+ (
+ test_seq $LAZY_THREAD_COST | sed "s/^/a_/"
+ echo b/b/b
+ test_seq $LAZY_THREAD_COST | sed "s/^/c_/"
+ test_seq 50 | sed "s/^/d_/" | tr "\n" "/"; echo d
+ ) |
+ sed "s/^/100644 $EMPTY_BLOB /" |
+ git update-index --index-info &&
+ test-lazy-init-name-hash -m
+'
+
+test_done
--- /dev/null
+#!/bin/sh
+
+test_description='git rebase --signoff
+
+This test runs git rebase --signoff and makes sure that it works.
+'
+
+. ./test-lib.sh
+
+# A simple file to commit
+cat >file <<EOF
+a
+EOF
+
+# Expected commit message after rebase --signoff
+cat >expected-signed <<EOF
+first
+
+Signed-off-by: $(git var GIT_COMMITTER_IDENT | sed -e "s/>.*/>/")
+EOF
+
+# Expected commit message after rebase without --signoff (or with --no-signoff)
+cat >expected-unsigned <<EOF
+first
+EOF
+
+
+# We configure an alias to do the rebase --signoff so that
+# in the next subtest we can show that --no-signoff overrides the alias.
+test_expect_success 'rebase --signoff adds a sign-off line' '
+ git commit --allow-empty -m "Initial empty commit" &&
+ git add file && git commit -m first &&
+ git config alias.rbs "rebase --signoff" &&
+ git rbs HEAD^ &&
+ git cat-file commit HEAD | sed -e "1,/^\$/d" > actual &&
+ test_cmp expected-signed actual
+'
+
+test_expect_success 'rebase --no-signoff does not add a sign-off line' '
+ git commit --amend -m "first" &&
+ git rbs --no-signoff HEAD^ &&
+ git cat-file commit HEAD | sed -e "1,/^\$/d" > actual &&
+ test_cmp expected-unsigned actual
+'
+
+test_done
M submod
EOF
+cat >expect.modified_inside <<EOF
+ m submod
+EOF
+
+cat >expect.modified_untracked <<EOF
+ ? submod
+EOF
+
cat >expect.cached <<EOF
D submod
EOF
test -d submod &&
test -f submod/.git &&
git status -s -uno --ignore-submodules=none >actual &&
- test_cmp expect.modified actual &&
+ test_cmp expect.modified_inside actual &&
git rm -f submod &&
test ! -d submod &&
git status -s -uno --ignore-submodules=none >actual &&
test -d submod &&
test -f submod/.git &&
git status -s -uno --ignore-submodules=none >actual &&
- test_cmp expect.modified actual &&
+ test_cmp expect.modified_untracked actual &&
git rm -f submod &&
test ! -d submod &&
git status -s -uno --ignore-submodules=none >actual &&
test -d submod &&
test -f submod/.git &&
git status -s -uno --ignore-submodules=none >actual &&
- test_cmp expect.modified actual &&
+ test_cmp expect.modified_inside actual &&
git rm -f submod &&
test ! -d submod &&
git status -s -uno --ignore-submodules=none >actual &&
test -d submod &&
test -f submod/.git &&
git status -s -uno --ignore-submodules=none >actual &&
- test_cmp expect.modified actual &&
+ test_cmp expect.modified_inside actual &&
git rm -f submod &&
test ! -d submod &&
git status -s -uno --ignore-submodules=none >actual &&
test -d submod &&
test -f submod/.git &&
git status -s -uno --ignore-submodules=none >actual &&
- test_cmp expect.modified actual &&
+ test_cmp expect.modified_untracked actual &&
git rm -f submod &&
test ! -d submod &&
git status -s -uno --ignore-submodules=none >actual &&
test_cmp expect actual
'
+test_expect_success 'add -p handles relative paths' '
+ git reset --hard &&
+
+ echo base >relpath.c &&
+ git add "*.c" &&
+ git commit -m relpath &&
+
+ echo change >relpath.c &&
+ mkdir -p subdir &&
+ git -C subdir add -p .. 2>error <<-\EOF &&
+ y
+ EOF
+
+ test_must_be_empty error &&
+
+ cat >expect <<-\EOF &&
+ relpath.c
+ EOF
+ git diff --cached --name-only >actual &&
+ test_cmp expect actual
+'
+
test_expect_success 'add -p does not expand argument lists' '
git reset --hard &&
test_cmp expected actual
'
+test_expect_success 'setup nested submodule' '
+ git submodule add -f ./sm2 &&
+ git commit -a -m "add sm2" &&
+ git -C sm2 submodule add ../sm2 nested &&
+ git -C sm2 commit -a -m "nested sub"
+'
+
+test_expect_success 'move nested submodule HEAD' '
+ echo "nested content" >sm2/nested/file &&
+ git -C sm2/nested add file &&
+ git -C sm2/nested commit --allow-empty -m "new HEAD"
+'
+
+test_expect_success 'diff --submodule=diff with moved nested submodule HEAD' '
+ cat >expected <<-EOF &&
+ Submodule nested a5a65c9..b55928c:
+ diff --git a/nested/file b/nested/file
+ new file mode 100644
+ index 0000000..ca281f5
+ --- /dev/null
+ +++ b/nested/file
+ @@ -0,0 +1 @@
+ +nested content
+ EOF
+ git -C sm2 diff --submodule=diff >actual 2>err &&
+ test_must_be_empty err &&
+ test_cmp expected actual
+'
+
test_done
rm -fr .git/rebase-apply &&
git checkout -f first &&
echo one >> file &&
- git commit -am "$LONG" --author="$LONG <long@example.com>" &&
+ git commit -am "$LONG
+
+ Body test" --author="$LONG <long@example.com>" &&
git format-patch --stdout -1 >patch &&
# bump from, date, and subject down to in-body header
perl -lpe "
git am msg &&
# Ensure that the author and full message are present
git cat-file commit HEAD | grep "^author.*long@example.com" &&
- git cat-file commit HEAD | grep "^$LONG"
+ git cat-file commit HEAD | grep "^$LONG$"
'
test_done
test_cmp expected_pub actual_pub
'
+test_expect_success 'push propagating the remotes name to a submodule' '
+ git -C work remote add origin ../pub.git &&
+ git -C work remote add pub ../pub.git &&
+
+ > work/gar/bage/junk10 &&
+ git -C work/gar/bage add junk10 &&
+ git -C work/gar/bage commit -m "Tenth junk" &&
+ git -C work add gar/bage &&
+ git -C work commit -m "Tenth junk added to gar/bage" &&
+
+ # Fails when submodule does not have a matching remote
+ test_must_fail git -C work push --recurse-submodules=on-demand pub master &&
+ # Succeeds when the submodule has a matching remote and refspec
+ git -C work push --recurse-submodules=on-demand origin master &&
+
+ git -C submodule.git rev-parse master >actual_submodule &&
+ git -C pub.git rev-parse master >actual_pub &&
+ git -C work/gar/bage rev-parse master >expected_submodule &&
+ git -C work rev-parse master >expected_pub &&
+ test_cmp expected_submodule actual_submodule &&
+ test_cmp expected_pub actual_pub
+'
+
+test_expect_success 'push propagating refspec to a submodule' '
+ > work/gar/bage/junk11 &&
+ git -C work/gar/bage add junk11 &&
+ git -C work/gar/bage commit -m "Eleventh junk" &&
+
+ git -C work checkout branch2 &&
+ git -C work add gar/bage &&
+ git -C work commit -m "updating gar/bage in branch2" &&
+
+ # Fails when submodule does not have a matching branch
+ test_must_fail git -C work push --recurse-submodules=on-demand origin branch2 &&
+ # Fails when refspec includes an object id
+ test_must_fail git -C work push --recurse-submodules=on-demand origin \
+ "$(git -C work rev-parse branch2):refs/heads/branch2" &&
+ # Fails when refspec includes 'HEAD' as it is unsupported at this time
+ test_must_fail git -C work push --recurse-submodules=on-demand origin \
+ HEAD:refs/heads/branch2 &&
+
+ git -C work/gar/bage branch branch2 master &&
+ git -C work push --recurse-submodules=on-demand origin branch2 &&
+
+ git -C submodule.git rev-parse branch2 >actual_submodule &&
+ git -C pub.git rev-parse branch2 >actual_pub &&
+ git -C work/gar/bage rev-parse branch2 >expected_submodule &&
+ git -C work rev-parse branch2 >expected_pub &&
+ test_cmp expected_submodule actual_submodule &&
+ test_cmp expected_pub actual_pub
+'
+
test_done
)
'
+test_expect_success 'background updates of REMOTE can be mitigated with a non-updated REMOTE-push' '
+ rm -rf src dst &&
+ git init --bare src.bare &&
+ test_when_finished "rm -rf src.bare" &&
+ git clone --no-local src.bare dst &&
+ test_when_finished "rm -rf dst" &&
+ (
+ cd dst &&
+ test_commit G &&
+ git remote add origin-push ../src.bare &&
+ git push origin-push master:master
+ ) &&
+ git clone --no-local src.bare dst2 &&
+ test_when_finished "rm -rf dst2" &&
+ (
+ cd dst2 &&
+ test_commit H &&
+ git push
+ ) &&
+ (
+ cd dst &&
+ test_commit I &&
+ git fetch origin &&
+ test_must_fail git push --force-with-lease origin-push &&
+ git fetch origin-push &&
+ git push --force-with-lease origin-push
+ )
+'
+
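(What the test documents: "--force-with-lease" without an explicit expected value uses the remote-tracking ref as its lease, so anything that fetches in the background, for example an IDE or a scheduled "git fetch", silently refreshes that lease and defeats the safety check. Keeping a second remote that is only fetched on purpose restores the protection; a sketch of the resulting workflow, assuming an "origin-push" remote set up as in the test:

	git remote add origin-push "$(git config remote.origin.url)"
	git fetch origin-push        # refresh the lease deliberately
	git push --force-with-lease origin-push
)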
test_done
test_cmp expect actual
'
+test_expect_success 'push options and submodules' '
+ test_when_finished "rm -rf parent" &&
+ test_when_finished "rm -rf parent_upstream" &&
+ mk_repo_pair &&
+ git -C upstream config receive.advertisePushOptions true &&
+ cp -r upstream parent_upstream &&
+ test_commit -C upstream one &&
+
+ test_create_repo parent &&
+ git -C parent remote add up ../parent_upstream &&
+ test_commit -C parent one &&
+ git -C parent push --mirror up &&
+
+ git -C parent submodule add ../upstream workbench &&
+ git -C parent/workbench remote add up ../../upstream &&
+ git -C parent commit -m "add submoule" &&
+
+ test_commit -C parent/workbench two &&
+ git -C parent add workbench &&
+ git -C parent commit -m "update workbench" &&
+
+ git -C parent push \
+ --push-option=asdf --push-option="more structured text" \
+ --recurse-submodules=on-demand up master &&
+
+ git -C upstream rev-parse --verify master >expect &&
+ git -C parent/workbench rev-parse --verify master >actual &&
+ test_cmp expect actual &&
+
+ git -C parent_upstream rev-parse --verify master >expect &&
+ git -C parent rev-parse --verify master >actual &&
+ test_cmp expect actual &&
+
+ printf "asdf\nmore structured text\n" >expect &&
+ test_cmp expect upstream/.git/hooks/pre-receive.push_options &&
+ test_cmp expect upstream/.git/hooks/post-receive.push_options &&
+ test_cmp expect parent_upstream/.git/hooks/pre-receive.push_options &&
+ test_cmp expect parent_upstream/.git/hooks/post-receive.push_options
+'
+
stop_httpd
test_done
git push "$(pwd)/xxx${pathsep}yyy.git" HEAD
'
+test_expect_success 'updating a ref from quarantine is forbidden' '
+ git init --bare update.git &&
+ write_script update.git/hooks/pre-receive <<-\EOF &&
+ read old new refname
+ git update-ref refs/heads/unrelated $new
+ exit 1
+ EOF
+ test_must_fail git push update.git HEAD &&
+ git -C update.git fsck
+'
+
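(Aside: while pre-receive runs, the pushed objects still sit in a quarantine directory and are discarded wholesale if the push is rejected, so a ref created from that hook could end up pointing at objects that never enter the repository; that is what the new check forbids and what the "git fsck" call above verifies. A hook that wants to mirror incoming updates can do so from post-receive instead, after the objects have been migrated; an illustrative sketch using a hypothetical refs/mirror/ namespace:

	# post-receive
	while read old new refname
	do
		git update-ref "refs/mirror/${refname#refs/heads/}" "$new"
	done
)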
test_done
expect_ssh "-batch -P 123" myhost src
'
+test_expect_success 'clean failure on broken quoting' '
+ test_must_fail \
+ env GIT_SSH_COMMAND="${SQ}plink.exe -v" \
+ git clone "[myhost:123]:src" sq-failure
+'
+
# Reset the GIT_SSH environment variable for clone tests.
setup_ssh_wrapper
test_line_count = 2 new # There is one new pack and its .idx
'
+run_and_wait_for_auto_gc () {
+ # We read stdout from gc for the side effect of waiting until the
+ # background gc process exits, closing its fd 9. Furthermore, the
+ # variable assignment from a command substitution preserves the
+ # exit status of the main gc process.
+ # Note: this fd trickery doesn't work on Windows, but there is no
+ # need for it there, because on Windows the auto gc always runs in
+ # the foreground.
+ doesnt_matter=$(git gc --auto 9>&1)
+}
+
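(Aside, not part of the patch: the same fd-inheritance trick can be observed with a plain background job. The command substitution below blocks for roughly two seconds, because the detached sleep inherits fd 9, which is the write end of the substitution's pipe, and EOF only arrives once every writer has exited:

	unused=$( { sleep 2 >/dev/null 2>&1 & } 9>&1 )
)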
test_expect_success 'background auto gc does not run if gc.log is present and recent but does if it is old' '
test_commit foo &&
test_commit bar &&
test-chmtime =-345600 .git/gc.log &&
test_must_fail git gc --auto &&
test_config gc.logexpiry 2.days &&
- git gc --auto
+ run_and_wait_for_auto_gc &&
+ ls .git/objects/pack/pack-*.pack >packs &&
+ test_line_count = 1 packs
'
+# DO NOT leave a detached auto gc process running near the end of the
+# test script: it can run long enough in the background to racily
+# interfere with the cleanup in 'test_done'.
+
test_done
test_cmp empty untracked
'
+test_expect_success 'submodule add with \\ in path' '
+ test_when_finished "rm -rf parent sub\\with\\backslash" &&
+
+ # Initialize a repo with a backslash in its name
+ git init sub\\with\\backslash &&
+ touch sub\\with\\backslash/empty.file &&
+ git -C sub\\with\\backslash add empty.file &&
+ git -C sub\\with\\backslash commit -m "Added empty.file" &&
+
+ # Add that repository as a submodule
+ git init parent &&
+ git -C parent submodule add ../sub\\with\\backslash
+'
+
test_expect_success 'submodule add in subdirectory' '
echo "refs/heads/master" >expect &&
>empty &&
)
}
+sanitize_output () {
+ sed -e "s/$_x40/HASH/" -e "s/$_x40/HASH/" output >output2 &&
+ mv output2 output
+}
+
+
test_expect_success 'setup' '
test_create_repo_with_commit sub &&
echo output > .gitignore &&
EOF
'
+test_expect_success 'status with modified file in submodule (short)' '
+ (cd sub && git reset --hard) &&
+ echo "changed" >sub/foo &&
+ git status --short >output &&
+ diff output - <<-\EOF
+ m sub
+ EOF
+'
+
test_expect_success 'status with added file in submodule' '
(cd sub && git reset --hard && echo >foo && git add foo) &&
git status >output &&
EOF
'
+test_expect_success 'status with added file in submodule (short)' '
+ (cd sub && git reset --hard && echo >foo && git add foo) &&
+ git status --short >output &&
+ diff output - <<-\EOF
+ m sub
+ EOF
+'
+
test_expect_success 'status with untracked file in submodule' '
(cd sub && git reset --hard) &&
echo "content" >sub/new-file &&
EOF
'
+test_expect_success 'status with untracked file in submodule (short)' '
+ git status --short >output &&
+ diff output - <<-\EOF
+ ? sub
+ EOF
+'
+
test_expect_success 'status with added and untracked file in submodule' '
(cd sub && git reset --hard && echo >foo && git add foo) &&
echo "content" >sub/new-file &&
test_i18ngrep "modified: sub (new commits, modified content)" output
'
+test_expect_success 'status with a lot of untracked files in the submodule' '
+ (
+ cd sub &&
+ i=0 &&
+ while test $i -lt 1024
+ do
+ >some-file-$i
+ i=$(( $i + 1 ))
+ done
+ ) &&
+ git status --porcelain sub 2>err.actual &&
+ test_must_be_empty err.actual &&
+ rm err.actual
+'
+
test_expect_success 'rm submodule contents' '
- rm -rf sub/* sub/.git
+ rm -rf sub &&
+ mkdir sub
'
test_expect_success 'status clean (empty submodule dir)' '
test_cmp diff_submodule_actual diff_submodule_expect
'
+# We'll set up different cases for further testing:
+# sub1 will contain a nested submodule,
+# sub2 will have an untracked file, and
+# sub3 will have an untracked repository.
+test_expect_success 'setup superproject with untracked file in nested submodule' '
+ (
+ cd super &&
+ git clean -dfx &&
+ rm .gitmodules &&
+ git submodule add -f ./sub1 &&
+ git submodule add -f ./sub2 &&
+ git submodule add -f ./sub1 sub3 &&
+ git commit -a -m "messy merge in superproject" &&
+ (
+ cd sub1 &&
+ git submodule add ../sub2 &&
+ git commit -a -m "add sub2 to sub1"
+ ) &&
+ git add sub1 &&
+ git commit -a -m "update sub1 to contain nested sub"
+ ) &&
+ echo content >super/sub1/sub2/file &&
+ echo content >super/sub2/file &&
+ git -C super/sub3 clone ../../sub2 untracked_repository
+'
+
+test_expect_success 'status with untracked file in nested submodule (porcelain)' '
+ git -C super status --porcelain >output &&
+ diff output - <<-\EOF
+ M sub1
+ M sub2
+ M sub3
+ EOF
+'
+
+test_expect_success 'status with untracked file in nested submodule (porcelain=2)' '
+ git -C super status --porcelain=2 >output &&
+ sanitize_output output &&
+ diff output - <<-\EOF
+ 1 .M S..U 160000 160000 160000 HASH HASH sub1
+ 1 .M S..U 160000 160000 160000 HASH HASH sub2
+ 1 .M S..U 160000 160000 160000 HASH HASH sub3
+ EOF
+'
+
+test_expect_success 'status with untracked file in nested submodule (short)' '
+ git -C super status --short >output &&
+ diff output - <<-\EOF
+ ? sub1
+ ? sub2
+ ? sub3
+ EOF
+'
+
+test_expect_success 'setup superproject with added file in nested submodule' '
+ git -C super/sub1/sub2 add file &&
+ git -C super/sub2 add file
+'
+
+test_expect_success 'status with added file in nested submodule (porcelain)' '
+ git -C super status --porcelain >output &&
+ diff output - <<-\EOF
+ M sub1
+ M sub2
+ M sub3
+ EOF
+'
+
+test_expect_success 'status with added file in nested submodule (porcelain=2)' '
+ git -C super status --porcelain=2 >output &&
+ sanitize_output output &&
+ diff output - <<-\EOF
+ 1 .M S.M. 160000 160000 160000 HASH HASH sub1
+ 1 .M S.M. 160000 160000 160000 HASH HASH sub2
+ 1 .M S..U 160000 160000 160000 HASH HASH sub3
+ EOF
+'
+
+test_expect_success 'status with added file in nested submodule (short)' '
+ git -C super status --short >output &&
+ diff output - <<-\EOF
+ m sub1
+ m sub2
+ ? sub3
+ EOF
+'
+
test_done
git commit -m "modified both"
'
+test_expect_success 'difftool -d with growing paths' '
+ a=aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa &&
+ git init growing &&
+ (
+ cd growing &&
+ echo "test -f \"\$2/b\"" | write_script .git/test-for-b.sh &&
+ one=$(printf 1 | git hash-object -w --stdin) &&
+ two=$(printf 2 | git hash-object -w --stdin) &&
+ git update-index --add \
+ --cacheinfo 100644,$one,$a --cacheinfo 100644,$two,b &&
+ tree1=$(git write-tree) &&
+ git update-index --add \
+ --cacheinfo 100644,$two,$a --cacheinfo 100644,$one,b &&
+ tree2=$(git write-tree) &&
+ git checkout -- $a &&
+ git difftool -d --extcmd .git/test-for-b.sh $tree1 $tree2
+ )
+'
+
run_dir_diff_test () {
test_expect_success "$1 --no-symlinks" "
symlinks=--no-symlinks &&
)
'
+test_expect_success 'allow submit from branch with same revision but different name' '
+ test_when_finished cleanup_git &&
+ git p4 clone --dest="$git" //depot &&
+ (
+ cd "$git" &&
+ test_commit "file8" &&
+ git checkout -b branch1 &&
+ git checkout -b branch2 &&
+ git config git-p4.skipSubmitEdit true &&
+ git config git-p4.allowSubmit "branch1" &&
+ test_must_fail git p4 submit &&
+ git checkout branch1 &&
+ git p4 submit
+ )
+'
+
#
# Basic submit tests, the five handled cases
#
test_completion "git add ~/tmp/" "~/tmp/file"
'
+test_expect_success 'setup other remote for remote reference completion' '
+ git remote add other otherrepo &&
+ git fetch other
+'
+
+for flag in -d --delete
+do
+ test_expect_success "__git_complete_remote_or_refspec - push $flag other" '
+ sed -e "s/Z$//" >expected <<-EOF &&
+ master-in-other Z
+ EOF
+ (
+ words=(git push '$flag' other ma) &&
+ cword=${#words[@]} cur=${words[cword-1]} &&
+ __git_complete_remote_or_refspec &&
+ print_comp
+ ) &&
+ test_cmp expected out
+ '
+
+ test_expect_failure "__git_complete_remote_or_refspec - push other $flag" '
+ sed -e "s/Z$//" >expected <<-EOF &&
+ master-in-other Z
+ EOF
+ (
+ words=(git push other '$flag' ma) &&
+ cword=${#words[@]} cur=${words[cword-1]} &&
+ __git_complete_remote_or_refspec &&
+ print_comp
+ ) &&
+ test_cmp expected out
+ '
+done
+
test_done
static void standard_options(struct transport *t)
{
char buf[16];
- int n;
int v = t->verbose;
set_helper_option(t, "progress", t->progress ? "true" : "false");
- n = snprintf(buf, sizeof(buf), "%d", v + 1);
- if (n >= sizeof(buf))
- die("impossibly large verbosity value");
+ xsnprintf(buf, sizeof(buf), "%d", v + 1);
set_helper_option(t, "verbosity", buf);
switch (t->family) {
struct child_process *conn;
int fd[2];
unsigned got_remote_heads : 1;
- struct sha1_array extra_have;
- struct sha1_array shallow;
+ struct oid_array extra_have;
+ struct oid_array shallow;
};
static int set_git_option(struct git_transport_options *opts,
static int measure_abbrev(const struct object_id *oid, int sofar)
{
- char hex[GIT_SHA1_HEXSZ + 1];
+ char hex[GIT_MAX_HEXSZ + 1];
int w = find_unique_abbrev_r(hex, oid->hash, DEFAULT_ABBREV);
return (w < sofar) ? sofar : w;
TRANSPORT_RECURSE_SUBMODULES_ONLY)) &&
!is_bare_repository()) {
struct ref *ref = remote_refs;
- struct sha1_array commits = SHA1_ARRAY_INIT;
+ struct oid_array commits = OID_ARRAY_INIT;
for (; ref; ref = ref->next)
if (!is_null_oid(&ref->new_oid))
- sha1_array_append(&commits, ref->new_oid.hash);
+ oid_array_append(&commits,
+ &ref->new_oid);
if (!push_unpushed_submodules(&commits,
- transport->remote->name,
+ transport->remote,
+ refspec, refspec_nr,
+ transport->push_options,
pretend)) {
- sha1_array_clear(&commits);
+ oid_array_clear(&commits);
die("Failed to push all needed submodules!");
}
- sha1_array_clear(&commits);
+ oid_array_clear(&commits);
}
if (((flags & TRANSPORT_RECURSE_SUBMODULES_CHECK) ||
!pretend)) && !is_bare_repository()) {
struct ref *ref = remote_refs;
struct string_list needs_pushing = STRING_LIST_INIT_DUP;
- struct sha1_array commits = SHA1_ARRAY_INIT;
+ struct oid_array commits = OID_ARRAY_INIT;
for (; ref; ref = ref->next)
if (!is_null_oid(&ref->new_oid))
- sha1_array_append(&commits, ref->new_oid.hash);
+ oid_array_append(&commits,
+ &ref->new_oid);
if (find_unpushed_submodules(&commits, transport->remote->name,
&needs_pushing)) {
- sha1_array_clear(&commits);
+ oid_array_clear(&commits);
die_with_unpushed_submodules(&needs_pushing);
}
string_list_clear(&needs_pushing, 0);
- sha1_array_clear(&commits);
+ oid_array_clear(&commits);
}
if (!(flags & TRANSPORT_RECURSE_SUBMODULES_ONLY))
msgs[ERROR_WOULD_LOSE_ORPHANED_REMOVED] =
_("The following working tree files would be removed by sparse checkout update:\n%s");
msgs[ERROR_WOULD_LOSE_SUBMODULE] =
- _("Submodule '%s' cannot checkout new HEAD");
+ _("Cannot update submodule:\n%s");
opts->show_all_errors = 1;
/* rejected paths may not have a static buffer */
return ret;
}
+static inline int are_same_oid(struct name_entry *name_j, struct name_entry *name_k)
+{
+ return name_j->oid && name_k->oid && !oidcmp(name_j->oid, name_k->oid);
+}
+
static int traverse_trees_recursive(int n, unsigned long dirmask,
unsigned long df_conflicts,
struct name_entry *names,
struct traverse_info *info)
{
int i, ret, bottom;
+ int nr_buf = 0;
struct tree_desc t[MAX_UNPACK_TREES];
void *buf[MAX_UNPACK_TREES];
struct traverse_info newinfo;
newinfo.pathlen += tree_entry_len(p) + 1;
newinfo.df_conflicts |= df_conflicts;
+ /*
+ * Fetch the tree from the ODB for each peer directory in the
+ * n commits.
+ *
+ * For 2- and 3-way traversals, we try to avoid hitting the
+ * ODB twice for the same OID. This should yield a nice speed
+ * up in checkouts and merges when the commits are similar.
+ *
+ * We don't bother doing the full O(n^2) search for larger n,
+ * because wider traversals don't happen that often and we
+ * avoid the search setup.
+ *
+ * When 2 peer OIDs are the same, we just copy the tree
+ * descriptor data. This implicitly borrows the buffer
+ * data from the earlier cell.
+ */
for (i = 0; i < n; i++, dirmask >>= 1) {
- const unsigned char *sha1 = NULL;
- if (dirmask & 1)
- sha1 = names[i].oid->hash;
- buf[i] = fill_tree_descriptor(t+i, sha1);
+ if (i > 0 && are_same_oid(&names[i], &names[i - 1]))
+ t[i] = t[i - 1];
+ else if (i > 1 && are_same_oid(&names[i], &names[i - 2]))
+ t[i] = t[i - 2];
+ else {
+ const unsigned char *sha1 = NULL;
+ if (dirmask & 1)
+ sha1 = names[i].oid->hash;
+ buf[nr_buf++] = fill_tree_descriptor(t+i, sha1);
+ }
}
bottom = switch_cache_bottom(&newinfo);
ret = traverse_trees(n, t, &newinfo);
restore_cache_bottom(&newinfo, bottom);
- for (i = 0; i < n; i++)
+ for (i = 0; i < nr_buf; i++)
free(buf[i]);
return ret;
{
poll(NULL, 0, millisec);
}
+
+int xgethostname(char *buf, size_t len)
+{
+ /*
+ * If the full hostname doesn't fit in buf, POSIX does not
+ * specify whether the buffer will be null-terminated, so to
+ * be safe, do it ourselves.
+ */
+ int ret = gethostname(buf, len);
+ if (!ret)
+ buf[len - 1] = 0;
+ return ret;
+}
strbuf_release(&twobuf);
}
+static char short_submodule_status(struct wt_status_change_data *d) {
+ if (d->new_submodule_commits)
+ return 'M';
+ if (d->dirty_submodule & DIRTY_SUBMODULE_MODIFIED)
+ return 'm';
+ if (d->dirty_submodule & DIRTY_SUBMODULE_UNTRACKED)
+ return '?';
+ return d->worktree_status;
+}
+
static void wt_status_collect_changed_cb(struct diff_queue_struct *q,
struct diff_options *options,
void *data)
}
if (!d->worktree_status)
d->worktree_status = p->status;
- d->dirty_submodule = p->two->dirty_submodule;
- if (S_ISGITLINK(p->two->mode))
+ if (S_ISGITLINK(p->two->mode)) {
+ d->dirty_submodule = p->two->dirty_submodule;
d->new_submodule_commits = !!oidcmp(&p->one->oid,
&p->two->oid);
+ if (s->status_format == STATUS_FORMAT_SHORT)
+ d->worktree_status = short_submodule_status(d);
+ }
switch (p->status) {
case DIFF_STATUS_ADDED:
int hints;
enum wt_status_format status_format;
- unsigned char sha1_commit[GIT_SHA1_RAWSZ]; /* when not Initial */
+ unsigned char sha1_commit[GIT_MAX_RAWSZ]; /* when not Initial */
/* These are computed during processing of the individual sections */
int commitable;