Introduce a new "oidmap" API and rewrite oidset to use it.
* jt/oidmap:
oidmap: map with OID as key
-# Defaults
+# This file is an example configuration for clang-format 5.0.
+#
+# Note that this style definition should only be understood as a hint
+# for writing new code. The rules are still work-in-progress and do
+# not yet exactly match the style we have in the existing code.
# Use tabs whenever we need to fill whitespace that spans at least from one tab
# stop to the next one.
# Penalties
# This decides what order things should be done if a line is too long
-PenaltyBreakAssignment: 100
-PenaltyBreakBeforeFirstCallParameter: 100
-PenaltyBreakComment: 100
+PenaltyBreakAssignment: 10
+PenaltyBreakBeforeFirstCallParameter: 30
+PenaltyBreakComment: 10
PenaltyBreakFirstLessLess: 0
-PenaltyBreakString: 100
-PenaltyExcessCharacter: 5
-PenaltyReturnTypeOnItsOwnLine: 0
+PenaltyBreakString: 10
+PenaltyExcessCharacter: 100
+PenaltyReturnTypeOnItsOwnLine: 5
# Don't sort #include's
SortIncludes: false
API_DOCS = $(patsubst %.txt,%,$(filter-out technical/api-index-skel.txt technical/api-index.txt, $(wildcard technical/api-*.txt)))
SP_ARTICLES += $(API_DOCS)
+TECH_DOCS += technical/hash-function-transition
TECH_DOCS += technical/http-protocol
TECH_DOCS += technical/index-format
TECH_DOCS += technical/pack-format
more explicit '.' for that instead. The hope is that existing
users will not mind this change, and eventually the warning can be
turned into a hard error, upgrading the deprecation into removal of
- this (mis)feature. That is now scheduled to happen in the upcoming
- release.
+ this (mis)feature. That is now scheduled to happen in Git v2.16,
+ the next major release after this one.
* Git now avoids blindly falling back to ".git" when the setup
sequence said we are _not_ in Git repository. A corner case that
- happens to work right now may be broken by a call to die("BUG").
+ happens to work right now may be broken by a call to BUG().
We've tried hard to locate such cases and fixed them, but there
might still be cases that need to be addressed--bug reports are
greatly appreciated.
other options to make it easier for scripts to grab existing
trailer lines from a commit log message.
+ * The "--format=%(trailers)" option "git log" and its friends take
+ learned to take the 'unfold' and 'only' modifiers to normalize its
+ output, e.g. "git log --format=%(trailers:only,unfold)".
+
* "gitweb" shows a link to visit the 'raw' contents of blbos in the
history overview page.
* "git describe --match <pattern>" has been taught to play well with
the "--all" option.
+ * "git branch" learned "-c/-C" to create a new branch by copying an
+ existing one.
+
+ * Some commands (most notably "git status") make an opportunistic
+ update when performing a read-only operation to help optimize later
+ operations in the same repository. The new "--no-optional-locks"
+ option can be passed to Git to disable them.
+
Performance, Internal Implementation, Development Support etc.
the directory, which is unnecessary. The codepath has been
optimized to avoid this overhead.
+ * The final batch of "git rebase -i" updates, which moves more code
+   from the shell script to C, has been merged.
+
+ * Operations that do not touch (the majority of) packed refs have
+   been optimized by making accesses to the packed-refs file lazy; we
+   no longer pre-parse everything, and an access to a single ref in
+   the packed-refs file does not touch the majority of irrelevant
+   refs, either.
+
+ * Add a comment to clarify that the style file is meant to be used
+   with clang-format 5 and that the rules are still a work in progress.
+
+ * Many variables that point at a region of memory that will live
+   throughout the life of the program have been marked with the
+   UNLEAK marker to help the leak checkers concentrate on real leaks.
+
+
Also contains various documentation updates and code clean-ups.
* Memory leaks in various codepaths have been plugged.
(merge 4d01a7fa65 ma/leakplugs later to maint).
+ * Recent versions of "git rev-parse --parseopt" did not correctly
+   parse option specifications that lack the optional flags (*=?!),
+   which has been corrected.
+ (merge a6304fa4c2 bc/rev-parse-parseopt-fix later to maint).
+
+ * The checkpoint command of "git fast-import" did not flush updates to
+ refs and marks unless at least one object was created since the
+ last checkpoint, which has been corrected, as these things can
+ happen without any new object getting created.
+ (merge 30e215a65c er/fast-import-dump-refs-on-checkpoint later to maint).
+
+ * Spell the name of our system as "Git" in the output from the
+   request-pull script.
+ (merge e66d7c37a5 ar/request-pull-phrasofix later to maint).
+
+ * Fixes for a handful of memory access issues identified by valgrind.
+ (merge 2944a94c6b tg/memfixes later to maint).
+
+ * Backports a moral equivalent of a 2015 fix to the poll() emulation
+   from upstream gnulib to fix occasional breakages on HPE NonStop.
+ (merge 61b2a1acaa rb/compat-poll-fix later to maint).
+
+ * Users with "color.ui = always" in their configuration were broken
+ by a recent change that made plumbing commands to pay attention to
+ them as the patch created internally by "git add -p" were colored
+ (heh) and made unusable. Fix this regression by redefining
+ 'always' to mean the same thing as 'auto'.
+ (merge 6be4595edb jk/ui-color-always-to-auto-maint later to maint).
+
+ * In the "--format=..." option of the "git for-each-ref" command (and
+ its friends, i.e. the listing mode of "git branch/tag"), "%(atom:)"
+ (e.g. "%(refname:)", "%(body:)" used to error out. Instead, treat
+ them as if the colon and an empty string that follows it were not
+ there.
+ (merge bea4dbeafd tb/ref-filter-empty-modifier later to maint).
+
* Other minor doc, test and build updates and code cleanups.
(merge f094b89a4d ma/parse-maybe-bool later to maint).
(merge 39b00fa4d4 jk/drop-sha1-entry-pos later to maint).
(merge 217bb56d4f hn/typofix later to maint).
(merge c08fd6388c jk/doc-read-tree-table-asciidoctor-fix later to maint).
(merge c3342b362e ks/doc-use-camelcase-for-config-name later to maint).
+ (merge 0bca165fdb jk/validate-headref-fix later to maint).
+ (merge 93dbefb389 mr/doc-negative-pathspec later to maint).
+ (merge 5e633326e4 ad/doc-markup-fix later to maint).
+ (merge 9ca356fa8b rs/cocci-de-paren-call-params later to maint).
+ (merge 7099153e8d rs/tag-null-pointer-arith-fix later to maint).
+ (merge 0e187d758c rs/run-command-use-alloc-array later to maint).
+ (merge e0222159fa jn/strbuf-doc-re-reuse later to maint).
+ (merge 97487ea11a rs/qsort-s later to maint).
+ (merge a9155c50bd sb/branch-avoid-repeated-strbuf-release later to maint).
+ (merge f777623514 ks/branch-tweak-error-message-for-extra-args later to maint).
+ (merge 33f3c683ec ks/verify-filename-non-option-error-message-tweak later to maint).
color.branch::
A boolean to enable/disable color in the output of
- linkgit:git-branch[1]. May be set to `always`,
- `false` (or `never`) or `auto` (or `true`), in which case colors are used
- only when the output is to a terminal. If unset, then the
- value of `color.ui` is used (`auto` by default).
+	linkgit:git-branch[1]. May be set to `false` (or `never`) to
+	disable color entirely, or to `auto` (or `true` or `always`), in
+	which case colors are used only when the output is to a terminal. If
+ unset, then the value of `color.ui` is used (`auto` by default).
color.branch.<slot>::
Use customized color for branch coloration. `<slot>` is one of
color.diff::
Whether to use ANSI escape sequences to add color to patches.
- If this is set to `always`, linkgit:git-diff[1],
+ If this is set to `true` or `auto`, linkgit:git-diff[1],
linkgit:git-log[1], and linkgit:git-show[1] will use color
- for all patches. If it is set to `true` or `auto`, those
- commands will only use color when output is to the terminal.
- If unset, then the value of `color.ui` is used (`auto` by
- default).
+ when output is to the terminal. The value `always` is a
+ historical synonym for `auto`. If unset, then the value of
+ `color.ui` is used (`auto` by default).
+
This does not affect linkgit:git-format-patch[1] or the
'git-diff-{asterisk}' plumbing commands. Can be overridden on the
--
color.interactive::
- When set to `always`, always use colors for interactive prompts
+ When set to `true` or `auto`, use colors for interactive prompts
and displays (such as those used by "git-add --interactive" and
- "git-clean --interactive"). When false (or `never`), never.
- When set to `true` or `auto`, use colors only when the output is
- to the terminal. If unset, then the value of `color.ui` is
- used (`auto` by default).
+ "git-clean --interactive") when the output is to the terminal.
+ When false (or `never`), never show colors. The value `always`
+ is a historical synonym for `auto`. If unset, then the value of
+ `color.ui` is used (`auto` by default).
color.interactive.<slot>::
Use customized color for 'git add --interactive' and 'git clean
configuration to set a default for the `--color` option. Set it
to `false` or `never` if you prefer Git commands not to use
color unless enabled explicitly with some other configuration
- or the `--color` option. Set it to `always` if you want all
- output not intended for machine consumption to use color, to
- `true` or `auto` (this is the default since Git 1.8.4) if you
- want such output to use color when written to the terminal.
+ or the `--color` option. Set it to `true` or `auto` to enable
+ color when output is written to the terminal (this is also the
+ default since Git 1.8.4). The value `always` is a historical
+ synonym for `auto`.
column.ui::
Specify whether supported commands should output in columns.
the working tree. Note that older versions of Git used
to ignore removed files; use `--no-all` option if you want
to add modified or new files but ignore removed ones.
++
+For more details about the <pathspec> syntax, see the 'pathspec' entry
+in linkgit:gitglossary[7].
-n::
--dry-run::
'git branch' (--set-upstream-to=<upstream> | -u <upstream>) [<branchname>]
'git branch' --unset-upstream [<branchname>]
'git branch' (-m | -M) [<oldbranch>] <newbranch>
+'git branch' (-c | -C) [<oldbranch>] <newbranch>
'git branch' (-d | -D) [-r] <branchname>...
'git branch' --edit-description [<branchname>]
renaming. If <newbranch> exists, -M must be used to force the rename
to happen.
+The `-c` and `-C` options have the exact same semantics as `-m` and
+`-M`, except that instead of the branch being renamed, it will be
+copied to a new name, along with its config and reflog.
+
With a `-d` or `-D` option, `<branchname>` will be deleted. You may
specify more than one branch for deletion. If the branch currently
has a reflog then the reflog will also be deleted.
In combination with `-d` (or `--delete`), allow deleting the
branch irrespective of its merged status. In combination with
`-m` (or `--move`), allow renaming the branch even if the new
- branch name already exists.
+	branch name already exists; the same applies to `-c` (or `--copy`).
-m::
--move::
-M::
Shortcut for `--move --force`.
+-c::
+--copy::
+ Copy a branch and the corresponding reflog.
+
+-C::
+ Shortcut for `--copy --force`.
+
--color[=<when>]::
Color branches to highlight current, local, and
remote-tracking branches.
`xx`; for example `%00` interpolates to `\0` (NUL),
`%09` to `\t` (TAB) and `%0a` to `\n` (LF).
+--color[=<when>]::
+ Respect any colors specified in the `--format` option. The
+ `<when>` field must be one of `always`, `never`, or `auto` (if
+ `<when>` is absent, behave as if `always` was given).
+
--shell::
--perl::
--python::
<pathspec>...::
If given, limit the search to paths matching at least one pattern.
Both leading paths match and glob(7) patterns are supported.
++
+For more details about the <pathspec> syntax, see the 'pathspec' entry
+in linkgit:gitglossary[7].
Examples
--------
Looks for a line that has `NODE` or `Unexpected` in
files that have lines that match both.
+`git grep solution -- :^Documentation`::
+ Looks for `solution`, excluding files in `Documentation`.
+
GIT
---
Part of the linkgit:git[1] suite
--autosquash::
--no-autosquash::
When the commit log message begins with "squash! ..." (or
- "fixup! ..."), and there is a commit whose title begins with
- the same ..., automatically modify the todo list of rebase -i
- so that the commit marked for squashing comes right after the
- commit to be modified, and change the action of the moved
- commit from `pick` to `squash` (or `fixup`). Ignores subsequent
- "fixup! " or "squash! " after the first, in case you referred to an
- earlier fixup/squash with `git commit --fixup/--squash`.
+ "fixup! ..."), and there is already a commit in the todo list that
+	matches the same `...`, automatically modify the todo list of
+	rebase -i so that the commit marked for squashing comes right
+	after the commit to be modified, and change the action of the moved commit
+ from `pick` to `squash` (or `fixup`). A commit matches the `...` if
+ the commit subject matches, or if the `...` refers to the commit's
+ hash. As a fall-back, partial matches of the commit subject work,
+ too. The recommended way to create fixup/squash commits is by using
+ the `--fixup`/`--squash` options of linkgit:git-commit[1].
+
This option is only valid when the `--interactive` option is used.
+
without options are equivalent to 'always' and 'never'
respectively.
+<pathspec>...::
+ See the 'pathspec' entry in linkgit:gitglossary[7].
OUTPUT
------
variable if it exists, or lexicographic order otherwise. See
linkgit:git-config[1].
+--color[=<when>]::
+ Respect any colors specified in the `--format` option. The
+ `<when>` field must be one of `always`, `never`, or `auto` (if
+ `<when>` is absent, behave as if `always` was given).
+
-i::
--ignore-case::
Sorting and filtering tags are case insensitive.
Note that omitting the `=` in `git -c foo.bar ...` is allowed and sets
`foo.bar` to the boolean true value (just like `[foo]bar` would in a
config file). Including the equals but with an empty value (like `git -c
-foo.bar= ...`) sets `foo.bar` to the empty string which ` git config
+foo.bar= ...`) sets `foo.bar` to the empty string which `git config
--bool` will convert to `false`.
--exec-path[=<path>]::
Add "icase" magic to all pathspec. This is equivalent to setting
the `GIT_ICASE_PATHSPECS` environment variable to `1`.
+--no-optional-locks::
+ Do not perform optional operations that require locks. This is
+	equivalent to setting the `GIT_OPTIONAL_LOCKS` environment
+	variable to `0`.
+
GIT COMMANDS
------------
which feed potentially-untrusted URLS to git commands. See
linkgit:git-config[1] for more details.
+`GIT_OPTIONAL_LOCKS`::
+ If set to `0`, Git will complete any requested operation without
+ performing any optional sub-operations that require taking a lock.
+ For example, this will prevent `git status` from refreshing the
+ index as a side effect. This is useful for processes running in
+ the background which do not want to cause lock contention with
+ other operations on the repository. Defaults to `1`.
+
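+As an illustration only, a background tool written in C might set the
+variable before running a read-only command so that it never contends
+for optional locks; the helper below is a hypothetical example, not
+part of Git itself:
+
+----
+#include <stdlib.h>
+
+int main(void)
+{
+	/* Ask Git to skip optional lock-taking, such as the opportunistic
+	 * index refresh that "git status" would otherwise perform. */
+	if (setenv("GIT_OPTIONAL_LOCKS", "0", 1))
+		return 1;
+	/* Any read-only command would do; "git status" is just an example. */
+	return system("git status --porcelain") != 0;
+}
+----
+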
Discussion[[Discussion]]
------------------------
exclude;;
After a path matches any non-exclude pathspec, it will be run
- through all exclude pathspec (magic signature: `!` or its
+ through all exclude pathspecs (magic signature: `!` or its
synonym `^`). If it matches, the path is ignored. When there
is no non-exclude pathspec, the exclusion is applied to the
result set as if invoked without any pathspec.
+++ /dev/null
-string-list API
-===============
-
-The string_list API offers a data structure and functions to handle
-sorted and unsorted string lists. A "sorted" list is one whose
-entries are sorted by string value in `strcmp()` order.
-
-The 'string_list' struct used to be called 'path_list', but was renamed
-because it is not specific to paths.
-
-The caller:
-
-. Allocates and clears a `struct string_list` variable.
-
-. Initializes the members. You might want to set the flag `strdup_strings`
- if the strings should be strdup()ed. For example, this is necessary
- when you add something like git_path("..."), since that function returns
- a static buffer that will change with the next call to git_path().
-+
-If you need something advanced, you can manually malloc() the `items`
-member (you need this if you add things later) and you should set the
-`nr` and `alloc` members in that case, too.
-
-. Adds new items to the list, using `string_list_append`,
- `string_list_append_nodup`, `string_list_insert`,
- `string_list_split`, and/or `string_list_split_in_place`.
-
-. Can check if a string is in the list using `string_list_has_string` or
- `unsorted_string_list_has_string` and get it from the list using
- `string_list_lookup` for sorted lists.
-
-. Can sort an unsorted list using `string_list_sort`.
-
-. Can remove duplicate items from a sorted list using
- `string_list_remove_duplicates`.
-
-. Can remove individual items of an unsorted list using
- `unsorted_string_list_delete_item`.
-
-. Can remove items not matching a criterion from a sorted or unsorted
- list using `filter_string_list`, or remove empty strings using
- `string_list_remove_empty_items`.
-
-. Finally it should free the list using `string_list_clear`.
-
-Example:
-
-----
-struct string_list list = STRING_LIST_INIT_NODUP;
-int i;
-
-string_list_append(&list, "foo");
-string_list_append(&list, "bar");
-for (i = 0; i < list.nr; i++)
- printf("%s\n", list.items[i].string)
-----
-
-NOTE: It is more efficient to build an unsorted list and sort it
-afterwards, instead of building a sorted list (`O(n log n)` instead of
-`O(n^2)`).
-+
-However, if you use the list to check if a certain string was added
-already, you should not do that (using unsorted_string_list_has_string()),
-because the complexity would be quadratic again (but with a worse factor).
-
-Functions
----------
-
-* General ones (works with sorted and unsorted lists as well)
-
-`string_list_init`::
-
- Initialize the members of the string_list, set `strdup_strings`
- member according to the value of the second parameter.
-
-`filter_string_list`::
-
- Apply a function to each item in a list, retaining only the
- items for which the function returns true. If free_util is
- true, call free() on the util members of any items that have
- to be deleted. Preserve the order of the items that are
- retained.
-
-`string_list_remove_empty_items`::
-
- Remove any empty strings from the list. If free_util is true,
- call free() on the util members of any items that have to be
- deleted. Preserve the order of the items that are retained.
-
-`print_string_list`::
-
- Dump a string_list to stdout, useful mainly for debugging purposes. It
- can take an optional header argument and it writes out the
- string-pointer pairs of the string_list, each one in its own line.
-
-`string_list_clear`::
-
- Free a string_list. The `string` pointer of the items will be freed in
- case the `strdup_strings` member of the string_list is set. The second
- parameter controls if the `util` pointer of the items should be freed
- or not.
-
-* Functions for sorted lists only
-
-`string_list_has_string`::
-
- Determine if the string_list has a given string or not.
-
-`string_list_insert`::
-
- Insert a new element to the string_list. The returned pointer can be
- handy if you want to write something to the `util` pointer of the
- string_list_item containing the just added string. If the given
- string already exists the insertion will be skipped and the
- pointer to the existing item returned.
-+
-Since this function uses xrealloc() (which die()s if it fails) if the
-list needs to grow, it is safe not to check the pointer. I.e. you may
-write `string_list_insert(...)->util = ...;`.
-
-`string_list_lookup`::
-
- Look up a given string in the string_list, returning the containing
- string_list_item. If the string is not found, NULL is returned.
-
-`string_list_remove_duplicates`::
-
- Remove all but the first of consecutive entries that have the
- same string value. If free_util is true, call free() on the
- util members of any items that have to be deleted.
-
-* Functions for unsorted lists only
-
-`string_list_append`::
-
- Append a new string to the end of the string_list. If
- `strdup_string` is set, then the string argument is copied;
- otherwise the new `string_list_entry` refers to the input
- string.
-
-`string_list_append_nodup`::
-
- Append a new string to the end of the string_list. The new
- `string_list_entry` always refers to the input string, even if
- `strdup_string` is set. This function can be used to hand
- ownership of a malloc()ed string to a `string_list` that has
- `strdup_string` set.
-
-`string_list_sort`::
-
- Sort the list's entries by string value in `strcmp()` order.
-
-`unsorted_string_list_has_string`::
-
- It's like `string_list_has_string()` but for unsorted lists.
-
-`unsorted_string_list_lookup`::
-
- It's like `string_list_lookup()` but for unsorted lists.
-+
-The above two functions need to look through all items, as opposed to their
-counterpart for sorted lists, which performs a binary search.
-
-`unsorted_string_list_delete_item`::
-
- Remove an item from a string_list. The `string` pointer of the items
- will be freed in case the `strdup_strings` member of the string_list
- is set. The third parameter controls if the `util` pointer of the
- items should be freed or not.
-
-`string_list_split`::
-`string_list_split_in_place`::
-
- Split a string into substrings on a delimiter character and
- append the substrings to a `string_list`. If `maxsplit` is
- non-negative, then split at most `maxsplit` times. Return the
- number of substrings appended to the list.
-+
-`string_list_split` requires a `string_list` that has `strdup_strings`
-set to true; it leaves the input string untouched and makes copies of
-the substrings in newly-allocated memory.
-`string_list_split_in_place` requires a `string_list` that has
-`strdup_strings` set to false; it splits the input string in place,
-overwriting the delimiter characters with NULs and creating new
-string_list_items that point into the original string (the original
-string must therefore not be modified or freed while the `string_list`
-is in use).
-
-
-Data structures
----------------
-
-* `struct string_list_item`
-
-Represents an item of the list. The `string` member is a pointer to the
-string, and you may use the `util` member for any purpose, if you want.
-
-* `struct string_list`
-
-Represents the list itself.
-
-. The array of items are available via the `items` member.
-. The `nr` member contains the number of items stored in the list.
-. The `alloc` member is used to avoid reallocating at every insertion.
- You should not tamper with it.
-. Setting the `strdup_strings` member to 1 will strdup() the strings
- before adding them, see above.
-. The `compare_strings_fn` member is used to specify a custom compare
- function, otherwise `strcmp()` is used as the default function.
--- /dev/null
+Git hash function transition
+============================
+
+Objective
+---------
+Migrate Git from SHA-1 to a stronger hash function.
+
+Background
+----------
+At its core, the Git version control system is a content addressable
+filesystem. It uses the SHA-1 hash function to name content. For
+example, files, directories, and revisions are referred to by hash
+values unlike in other traditional version control systems where files
+or versions are referred to via sequential numbers. The use of a hash
+function to address its content delivers a few advantages:
+
+* Integrity checking is easy. Bit flips, for example, are easily
+ detected, as the hash of corrupted content does not match its name.
+* Lookup of objects is fast.
+
+Using a cryptographically secure hash function brings additional
+advantages:
+
+* Object names can be signed and third parties can trust the hash to
+ address the signed object and all objects it references.
+* Communication using Git protocol and out of band communication
+  methods have a short reliable string that can be used to address
+  stored content.
+
+Over time some flaws in SHA-1 have been discovered by security
+researchers. https://shattered.io demonstrated a practical SHA-1 hash
+collision. As a result, SHA-1 cannot be considered cryptographically
+secure any more. This impacts the communication of hash values because
+we cannot trust that a given hash value represents the known good
+version of content that the speaker intended.
+
+SHA-1 still possesses the other properties such as fast object lookup
+and safe error checking, but other hash functions that are believed
+to be cryptographically secure are equally suitable.
+
+Goals
+-----
+Where NewHash is a strong 256-bit hash function to replace SHA-1 (see
+"Selection of a New Hash", below):
+
+1. The transition to NewHash can be done one local repository at a time.
+ a. Requiring no action by any other party.
+ b. A NewHash repository can communicate with SHA-1 Git servers
+ (push/fetch).
+ c. Users can use SHA-1 and NewHash identifiers for objects
+ interchangeably (see "Object names on the command line", below).
+ d. New signed objects make use of a stronger hash function than
+ SHA-1 for their security guarantees.
+2. Allow a complete transition away from SHA-1.
+ a. Local metadata for SHA-1 compatibility can be removed from a
+ repository if compatibility with SHA-1 is no longer needed.
+3. Maintainability throughout the process.
+ a. The object format is kept simple and consistent.
+ b. Creation of a generalized repository conversion tool.
+
+Non-Goals
+---------
+1. Add NewHash support to Git protocol. This is valuable and the
+ logical next step but it is out of scope for this initial design.
+2. Transparently improving the security of existing SHA-1 signed
+ objects.
+3. Intermixing objects using multiple hash functions in a single
+ repository.
+4. Taking the opportunity to fix other bugs in Git's formats and
+ protocols.
+5. Shallow clones and fetches into a NewHash repository. (This will
+ change when we add NewHash support to Git protocol.)
+6. Skip fetching some submodules of a project into a NewHash
+ repository. (This also depends on NewHash support in Git
+ protocol.)
+
+Overview
+--------
+We introduce a new repository format extension. Repositories with this
+extension enabled use NewHash instead of SHA-1 to name their objects.
+This affects both object names and object content --- both the names
+of objects and all references to other objects within an object are
+switched to the new hash function.
+
+NewHash repositories cannot be read by older versions of Git.
+
+Alongside the packfile, a NewHash repository stores a bidirectional
+mapping between NewHash and SHA-1 object names. The mapping is generated
+locally and can be verified using "git fsck". Object lookups use this
+mapping to allow naming objects using either their SHA-1 or NewHash names
+interchangeably.
+
+"git cat-file" and "git hash-object" gain options to display an object
+in its sha1 form and write an object given its sha1 form. This
+requires all objects referenced by that object to be present in the
+object database so that they can be named using the appropriate name
+(using the bidirectional hash mapping).
+
+Fetches from a SHA-1 based server convert the fetched objects into
+NewHash form and record the mapping in the bidirectional mapping table
+(see below for details). Pushes to a SHA-1 based server convert the
+objects being pushed into sha1 form so the server does not have to be
+aware of the hash function the client is using.
+
+Detailed Design
+---------------
+Repository format extension
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+A NewHash repository uses repository format version `1` (see
+Documentation/technical/repository-version.txt) with extensions
+`objectFormat` and `compatObjectFormat`:
+
+ [core]
+ repositoryFormatVersion = 1
+ [extensions]
+ objectFormat = newhash
+ compatObjectFormat = sha1
+
+Specifying a repository format extension ensures that versions of Git
+not aware of NewHash do not try to operate on these repositories,
+instead producing an error message:
+
+ $ git status
+ fatal: unknown repository extensions found:
+ objectformat
+ compatobjectformat
+
+See the "Transition plan" section below for more details on these
+repository extensions.
+
+Object names
+~~~~~~~~~~~~
+Objects can be named by their 40 hexadecimal digit sha1-name or 64
+hexadecimal digit newhash-name, plus names derived from those (see
+gitrevisions(7)).
+
+The sha1-name of an object is the SHA-1 of the concatenation of its
+type, length, a nul byte, and the object's sha1-content. This is the
+traditional <sha1> used in Git to name objects.
+
+The newhash-name of an object is the NewHash of the concatenation of its
+type, length, a nul byte, and the object's newhash-content.
+
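+For concreteness, here is a minimal sketch of that computation for a
+sha1-name, using OpenSSL purely as an illustrative stand-in (this
+document does not mandate any particular library); a newhash-name
+would be computed the same way, with NewHash in place of SHA-1:
+
+----
+/* cc object-name.c -lcrypto */
+#include <openssl/sha.h>
+#include <stdio.h>
+#include <string.h>
+
+static void print_sha1_name(const char *type, const char *content, size_t len)
+{
+	/* hash "<type> <length>" + NUL byte + content */
+	char header[64];
+	int header_len = snprintf(header, sizeof(header), "%s %zu", type, len) + 1;
+	unsigned char hash[SHA_DIGEST_LENGTH];
+	SHA_CTX ctx;
+	int i;
+
+	SHA1_Init(&ctx);
+	SHA1_Update(&ctx, header, header_len);	/* header_len includes the NUL */
+	SHA1_Update(&ctx, content, len);
+	SHA1_Final(hash, &ctx);
+
+	for (i = 0; i < SHA_DIGEST_LENGTH; i++)
+		printf("%02x", hash[i]);
+	putchar('\n');
+}
+
+int main(void)
+{
+	const char *content = "hello\n";
+
+	/* should print ce01362..., matching "git hash-object" for this blob */
+	print_sha1_name("blob", content, strlen(content));
+	return 0;
+}
+----
+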
+Object format
+~~~~~~~~~~~~~
+The content, as a byte sequence, of a tag, commit, or tree object
+named by sha1 and by newhash differs because an object named by
+newhash-name refers to other objects by their newhash-names and an
+object named by sha1-name refers to other objects by their sha1-names.
+
+The newhash-content of an object is the same as its sha1-content, except
+that objects referenced by the object are named using their newhash-names
+instead of sha1-names. Because a blob object does not refer to any
+other object, its sha1-content and newhash-content are the same.
+
+The format allows round-trip conversion between newhash-content and
+sha1-content.
+
+Object storage
+~~~~~~~~~~~~~~
+Loose objects use zlib compression and packed objects use the packed
+format described in Documentation/technical/pack-format.txt, just like
+today. The content that is compressed and stored uses newhash-content
+instead of sha1-content.
+
+Pack index
+~~~~~~~~~~
+Pack index (.idx) files use a new v3 format that supports multiple
+hash functions. They have the following format (all integers are in
+network byte order):
+
+- A header appears at the beginning and consists of the following:
+ - The 4-byte pack index signature: '\377t0c'
+ - 4-byte version number: 3
+ - 4-byte length of the header section, including the signature and
+ version number
+ - 4-byte number of objects contained in the pack
+ - 4-byte number of object formats in this pack index: 2
+ - For each object format:
+ - 4-byte format identifier (e.g., 'sha1' for SHA-1)
+ - 4-byte length in bytes of shortened object names. This is the
+ shortest possible length needed to make names in the shortened
+ object name table unambiguous.
+ - 4-byte integer, recording where tables relating to this format
+ are stored in this index file, as an offset from the beginning.
+ - 4-byte offset to the trailer from the beginning of this file.
+ - Zero or more additional key/value pairs (4-byte key, 4-byte
+ value). Only one key is supported: 'PSRC'. See the "Loose objects
+ and unreachable objects" section for supported values and how this
+ is used. All other keys are reserved. Readers must ignore
+ unrecognized keys.
+- Zero or more NUL bytes. This can optionally be used to improve the
+ alignment of the full object name table below.
+- Tables for the first object format:
+ - A sorted table of shortened object names. These are prefixes of
+ the names of all objects in this pack file, packed together
+ without offset values to reduce the cache footprint of the binary
+ search for a specific object name.
+
+ - A table of full object names in pack order. This allows resolving
+ a reference to "the nth object in the pack file" (from a
+ reachability bitmap or from the next table of another object
+ format) to its object name.
+
+ - A table of 4-byte values mapping object name order to pack order.
+ For an object in the table of sorted shortened object names, the
+ value at the corresponding index in this table is the index in the
+ previous table for that same object.
+
+ This can be used to look up the object in reachability bitmaps or
+ to look up its name in another object format.
+
+ - A table of 4-byte CRC32 values of the packed object data, in the
+ order that the objects appear in the pack file. This is to allow
+ compressed data to be copied directly from pack to pack during
+ repacking without undetected data corruption.
+
+ - A table of 4-byte offset values. For an object in the table of
+ sorted shortened object names, the value at the corresponding
+ index in this table indicates where that object can be found in
+ the pack file. These are usually 31-bit pack file offsets, but
+ large offsets are encoded as an index into the next table with the
+ most significant bit set.
+
+ - A table of 8-byte offset entries (empty for pack files less than
+ 2 GiB). Pack files are organized with heavily used objects toward
+ the front, so most object references should not need to refer to
+ this table.
+- Zero or more NUL bytes.
+- Tables for the second object format, with the same layout as above,
+ up to and not including the table of CRC32 values.
+- Zero or more NUL bytes.
+- The trailer consists of the following:
+ - A copy of the 20-byte NewHash checksum at the end of the
+ corresponding packfile.
+
+ - 20-byte NewHash checksum of all of the above.
+
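+As a reading aid, the fixed-size part of this header can be modelled
+by the C structs below.  This is only a sketch: the field names are
+invented for illustration, all integers are stored in network byte
+order (so readers must convert with ntohl()), and the trailing
+key/value pairs make the real header variable-length:
+
+----
+#include <stdint.h>
+
+struct pack_idx_v3_format_entry {
+	uint32_t format_id;		/* e.g. 'sha1' */
+	uint32_t short_name_len;	/* bytes per shortened object name */
+	uint32_t tables_offset;		/* where this format's tables start */
+};
+
+struct pack_idx_v3_header {
+	uint8_t  signature[4];		/* '\377', 't', '0', 'c' */
+	uint32_t version;		/* 3 */
+	uint32_t header_len;		/* includes signature and version */
+	uint32_t nr_objects;		/* objects contained in the pack */
+	uint32_t nr_formats;		/* 2 */
+	struct pack_idx_v3_format_entry formats[2];
+	uint32_t trailer_offset;	/* from the beginning of the file */
+	/* zero or more 4-byte key / 4-byte value pairs follow */
+};
+----
+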
+Loose object index
+~~~~~~~~~~~~~~~~~~
+A new file $GIT_OBJECT_DIR/loose-object-idx contains information about
+all loose objects. Its format is
+
+ # loose-object-idx
+ (newhash-name SP sha1-name LF)*
+
+where the object names are in hexadecimal format. The file is not
+sorted.
+
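+A sketch of a reader for this format, assuming a 256-bit NewHash (64
+hex digits) next to SHA-1's 40; the function name is invented for this
+illustration:
+
+----
+#include <stdio.h>
+#include <string.h>
+
+/* Scan loose-object-idx for the sha1-name paired with a newhash-name. */
+static int loose_idx_sha1_for_newhash(const char *path, const char *newhash,
+				      char *sha1_out /* >= 41 bytes */)
+{
+	char line_newhash[65], line_sha1[41], header[64];
+	FILE *fp = fopen(path, "r");
+
+	if (!fp)
+		return -1;
+	/* skip the "# loose-object-idx" header line */
+	if (!fgets(header, sizeof(header), fp)) {
+		fclose(fp);
+		return -1;
+	}
+	/* the file is unsorted, so every line may need to be examined */
+	while (fscanf(fp, "%64s %40s", line_newhash, line_sha1) == 2) {
+		if (!strcmp(line_newhash, newhash)) {
+			strcpy(sha1_out, line_sha1);
+			fclose(fp);
+			return 0;
+		}
+	}
+	fclose(fp);
+	return -1;
+}
+----
+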
+The loose object index is protected against concurrent writes by a
+lock file $GIT_OBJECT_DIR/loose-object-idx.lock. To add a new loose
+object:
+
+1. Write the loose object to a temporary file, like today.
+2. Open loose-object-idx.lock with O_CREAT | O_EXCL to acquire the lock.
+3. Rename the loose object into place.
+4. Open loose-object-idx with O_APPEND and write the new object
+5. Unlink loose-object-idx.lock to release the lock.
+
+To remove entries (e.g. in "git pack-refs" or "git-prune"):
+
+1. Open loose-object-idx.lock with O_CREAT | O_EXCL to acquire the
+ lock.
+2. Write the new content to loose-object-idx.lock.
+3. Unlink any loose objects being removed.
+4. Rename to replace loose-object-idx, releasing the lock.
+
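+Both procedures rely on the same primitive: creating the lock file
+with O_CREAT | O_EXCL is atomic, so only one process can hold the lock
+at a time.  A minimal sketch (function names invented, error handling
+and retrying omitted):
+
+----
+#include <fcntl.h>
+#include <unistd.h>
+
+/* Returns a file descriptor for the lock file, or -1 if it is held. */
+static int lock_loose_object_idx(const char *lock_path)
+{
+	return open(lock_path, O_CREAT | O_EXCL | O_WRONLY, 0666);
+}
+
+/* Unlinking the lock file is what releases the lock. */
+static void unlock_loose_object_idx(const char *lock_path, int fd)
+{
+	close(fd);
+	unlink(lock_path);
+}
+----
+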
+Translation table
+~~~~~~~~~~~~~~~~~
+The index files support a bidirectional mapping between sha1-names
+and newhash-names. The lookup proceeds similarly to ordinary object
+lookups. For example, to convert a sha1-name to a newhash-name:
+
+ 1. Look for the object in idx files. If a match is present in the
+ idx's sorted list of truncated sha1-names, then:
+ a. Read the corresponding entry in the sha1-name order to pack
+ name order mapping.
+ b. Read the corresponding entry in the full sha1-name table to
+       verify we found the right object. If so, then
+ c. Read the corresponding entry in the full newhash-name table.
+ That is the object's newhash-name.
+ 2. Check for a loose object. Read lines from loose-object-idx until
+ we find a match.
+
+Step (1) takes the same amount of time as an ordinary object lookup:
+O(number of packs * log(objects per pack)). Step (2) takes O(number of
+loose objects) time. To maintain good performance it will be necessary
+to keep the number of loose objects low. See the "Loose objects and
+unreachable objects" section below for more details.
+
+Since all operations that make new objects (e.g., "git commit") add
+the new objects to the corresponding index, this mapping is possible
+for all objects in the object store.
+
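+The lookup in steps (1) and (2) above can be sketched as follows.
+Every helper declared here is hypothetical; each one stands in for a
+pack-index or loose-object-idx access described above rather than
+naming an existing Git API:
+
+----
+struct object_id;	/* a binary SHA-1 or NewHash, depending on context */
+
+/* hypothetical accessors for the idx tables (step 1) */
+extern int idx_find_short_sha1(const struct object_id *sha1, int *name_pos);
+extern int idx_name_order_to_pack_order(int name_pos);
+extern const struct object_id *idx_full_sha1_at(int pack_pos);
+extern const struct object_id *idx_full_newhash_at(int pack_pos);
+extern int sha1_eq(const struct object_id *a, const struct object_id *b);
+/* hypothetical linear scan of loose-object-idx (step 2) */
+extern const struct object_id *loose_idx_newhash_for_sha1(const struct object_id *sha1);
+
+static const struct object_id *sha1_to_newhash(const struct object_id *sha1)
+{
+	int name_pos;
+
+	if (!idx_find_short_sha1(sha1, &name_pos)) {			/* 1  */
+		int pack_pos = idx_name_order_to_pack_order(name_pos);	/* 1a */
+		if (sha1_eq(idx_full_sha1_at(pack_pos), sha1))		/* 1b */
+			return idx_full_newhash_at(pack_pos);		/* 1c */
+	}
+	return loose_idx_newhash_for_sha1(sha1);			/* 2  */
+}
+----
+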
+Reading an object's sha1-content
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+The sha1-content of an object can be read by converting all newhash-names
+its newhash-content references to sha1-names using the translation table.
+
+Fetch
+~~~~~
+Fetching from a SHA-1 based server requires translating between SHA-1
+and NewHash based representations on the fly.
+
+SHA-1s named in the ref advertisement that are present on the client
+can be translated to NewHash and looked up as local objects using the
+translation table.
+
+Negotiation proceeds as today. Any "have"s generated locally are
+converted to SHA-1 before being sent to the server, and SHA-1s
+mentioned by the server are converted to NewHash when looking them up
+locally.
+
+After negotiation, the server sends a packfile containing the
+requested objects. We convert the packfile to NewHash format using
+the following steps:
+
+1. index-pack: inflate each object in the packfile and compute its
+ SHA-1. Objects can contain deltas in OBJ_REF_DELTA format against
+ objects the client has locally. These objects can be looked up
+ using the translation table and their sha1-content read as
+ described above to resolve the deltas.
+2. topological sort: starting at the "want"s from the negotiation
+ phase, walk through objects in the pack and emit a list of them,
+ excluding blobs, in reverse topologically sorted order, with each
+ object coming later in the list than all objects it references.
+ (This list only contains objects reachable from the "wants". If the
+ pack from the server contained additional extraneous objects, then
+ they will be discarded.)
+3. convert to newhash: open a new (newhash) packfile. Read the topologically
+ sorted list just generated. For each object, inflate its
+ sha1-content, convert to newhash-content, and write it to the newhash
+ pack. Record the new sha1<->newhash mapping entry for use in the idx.
+4. sort: reorder entries in the new pack to match the order of objects
+ in the pack the server generated and include blobs. Write a newhash idx
+   file.
+5. clean up: remove the SHA-1 based pack file, index, and
+ topologically sorted list obtained from the server in steps 1
+ and 2.
+
+Step 3 requires every object referenced by the new object to be in the
+translation table. This is why the topological sort step is necessary.
+
+As an optimization, step 1 could write a file describing what non-blob
+objects each object it has inflated from the packfile references. This
+makes the topological sort in step 2 possible without inflating the
+objects in the packfile for a second time. The objects need to be
+inflated again in step 3, for a total of two inflations.
+
+Step 4 is probably necessary for good read-time performance. "git
+pack-objects" on the server optimizes the pack file for good data
+locality (see Documentation/technical/pack-heuristics.txt).
+
+Details of this process are likely to change. It will take some
+experimenting to get this to perform well.
+
+Push
+~~~~
+Push is simpler than fetch because the objects referenced by the
+pushed objects are already in the translation table. The sha1-content
+of each object being pushed can be read as described in the "Reading
+an object's sha1-content" section to generate the pack written by git
+send-pack.
+
+Signed Commits
+~~~~~~~~~~~~~~
+We add a new field "gpgsig-newhash" to the commit object format to allow
+signing commits without relying on SHA-1. It is similar to the
+existing "gpgsig" field. Its signed payload is the newhash-content of the
+commit object with any "gpgsig" and "gpgsig-newhash" fields removed.
+
+This means commits can be signed
+1. using SHA-1 only, as in existing signed commit objects
+2. using both SHA-1 and NewHash, by using both gpgsig-newhash and gpgsig
+ fields.
+3. using only NewHash, by only using the gpgsig-newhash field.
+
+Old versions of "git verify-commit" can verify the gpgsig signature in
+cases (1) and (2) without modifications and view case (3) as an
+ordinary unsigned commit.
+
+Signed Tags
+~~~~~~~~~~~
+We add a new field "gpgsig-newhash" to the tag object format to allow
+signing tags without relying on SHA-1. Its signed payload is the
+newhash-content of the tag with its gpgsig-newhash field and "-----BEGIN PGP
+SIGNATURE-----" delimited in-body signature removed.
+
+This means tags can be signed
+1. using SHA-1 only, as in existing signed tag objects
+2. using both SHA-1 and NewHash, by using gpgsig-newhash and an in-body
+ signature.
+3. using only NewHash, by only using the gpgsig-newhash field.
+
+Mergetag embedding
+~~~~~~~~~~~~~~~~~~
+The mergetag field in the sha1-content of a commit contains the
+sha1-content of a tag that was merged by that commit.
+
+The mergetag field in the newhash-content of the same commit contains the
+newhash-content of the same tag.
+
+Submodules
+~~~~~~~~~~
+To convert recorded submodule pointers, you need to have the converted
+submodule repository in place. The translation table of the submodule
+can be used to look up the new hash.
+
+Loose objects and unreachable objects
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Fast lookups in the loose-object-idx require that the number of loose
+objects not grow too high.
+
+"git gc --auto" currently waits for there to be 6700 loose objects
+present before consolidating them into a packfile. We will need to
+measure to find a more appropriate threshold for it to use.
+
+"git gc --auto" currently waits for there to be 50 packs present
+before combining packfiles. Packing loose objects more aggressively
+may cause the number of pack files to grow too quickly. This can be
+mitigated by using a strategy similar to Martin Fick's exponential
+rolling garbage collection script:
+https://gerrit-review.googlesource.com/c/gerrit/+/35215
+
+"git gc" currently expels any unreachable objects it encounters in
+pack files to loose objects in an attempt to prevent a race when
+pruning them (in case another process is simultaneously writing a new
+object that refers to the about-to-be-deleted object). This leads to
+an explosion in the number of loose objects present and disk space
+usage due to the objects in delta form being replaced with independent
+loose objects. Worse, the race is still present for loose objects.
+
+Instead, "git gc" will need to move unreachable objects to a new
+packfile marked as UNREACHABLE_GARBAGE (using the PSRC field; see
+below). To avoid the race when writing new objects referring to an
+about-to-be-deleted object, code paths that write new objects will
+need to copy any objects from UNREACHABLE_GARBAGE packs that they
+refer to, to new, non-UNREACHABLE_GARBAGE packs (or loose objects).
+UNREACHABLE_GARBAGE packs are then safe to delete if their creation time (as
+indicated by the file's mtime) is long enough ago.
+
+To avoid a proliferation of UNREACHABLE_GARBAGE packs, they can be
+combined under certain circumstances. If "gc.garbageTtl" is set to
+greater than one day, then packs created within a single calendar day,
+UTC, can be coalesced together. The resulting packfile would have an
+mtime before midnight on that day, so this makes the effective maximum
+ttl the garbageTtl + 1 day. If "gc.garbageTtl" is less than one day,
+then we divide the calendar day into intervals one-third of that ttl
+in duration. Packs created within the same interval can be coalesced
+together. The resulting packfile would have an mtime before the end of
+the interval, so this makes the effective maximum ttl equal to the
+garbageTtl * 4/3.
+
+This rule comes from Thirumala Reddy Mutchukota's JGit change
+https://git.eclipse.org/r/90465.
+
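+A worked example of that arithmetic (the numbers are arbitrary and
+only illustrate the garbageTtl * 4/3 bound for the sub-day case):
+
+----
+#include <stdio.h>
+
+int main(void)
+{
+	long ttl = 6 * 60 * 60;			/* gc.garbageTtl of 6 hours */
+	long interval = ttl / 3;		/* 2-hour coalescing windows */
+	long effective_max = ttl + interval;	/* 8 hours, i.e. ttl * 4/3 */
+
+	/* a pack created at the start of a window may be coalesced into a
+	 * packfile whose mtime is up to one window later, so it can
+	 * survive for ttl + interval before it is old enough to delete */
+	printf("interval=%lds, effective max ttl=%lds\n",
+	       interval, effective_max);
+	return 0;
+}
+----
+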
+The UNREACHABLE_GARBAGE setting goes in the PSRC field of the pack
+index. More generally, that field indicates where a pack came from:
+
+ - 1 (PACK_SOURCE_RECEIVE) for a pack received over the network
+ - 2 (PACK_SOURCE_AUTO) for a pack created by a lightweight
+ "gc --auto" operation
+ - 3 (PACK_SOURCE_GC) for a pack created by a full gc
+ - 4 (PACK_SOURCE_UNREACHABLE_GARBAGE) for potential garbage
+ discovered by gc
+ - 5 (PACK_SOURCE_INSERT) for locally created objects that were
+ written directly to a pack file, e.g. from "git add ."
+
+This information can be useful for debugging and for "gc --auto" to
+make appropriate choices about which packs to coalesce.
+
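+A sketch of how these values might look as C constants; the
+identifiers mirror the names above and are not definitions taken from
+the Git source tree:
+
+----
+enum pack_source {
+	PACK_SOURCE_RECEIVE = 1,		/* received over the network */
+	PACK_SOURCE_AUTO = 2,			/* lightweight "gc --auto" */
+	PACK_SOURCE_GC = 3,			/* full gc */
+	PACK_SOURCE_UNREACHABLE_GARBAGE = 4,	/* potential garbage found by gc */
+	PACK_SOURCE_INSERT = 5			/* written directly to a pack locally */
+};
+----
+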
+Caveats
+-------
+Invalid objects
+~~~~~~~~~~~~~~~
+The conversion from sha1-content to newhash-content retains any
+brokenness in the original object (e.g., tree entry modes encoded with
+leading 0, tree objects whose paths are not sorted correctly, and
+commit objects without an author or committer). This is a deliberate
+feature of the design to allow the conversion to round-trip.
+
+More profoundly broken objects (e.g., a commit with a truncated "tree"
+header line) cannot be converted but were not usable by current Git
+anyway.
+
+Shallow clone and submodules
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Because it requires all referenced objects to be available in the
+locally generated translation table, this design does not support
+shallow clone or unfetched submodules. Protocol improvements might
+allow lifting this restriction.
+
+Alternates
+~~~~~~~~~~
+For the same reason, a newhash repository cannot borrow objects from a
+sha1 repository using objects/info/alternates or
+$GIT_ALTERNATE_OBJECT_REPOSITORIES.
+
+git notes
+~~~~~~~~~
+The "git notes" tool annotates objects using their sha1-name as key.
+This design does not describe a way to migrate notes trees to use
+newhash-names. That migration is expected to happen separately (for
+example using a file at the root of the notes tree to describe which
+hash it uses).
+
+Server-side cost
+~~~~~~~~~~~~~~~~
+Until Git protocol gains NewHash support, using NewHash based storage
+on public-facing Git servers is strongly discouraged. Once Git
+protocol gains NewHash support, NewHash based servers are likely not
+to support SHA-1 compatibility, to avoid what may be a very expensive
+hash reencode during clone and to encourage peers to modernize.
+
+The design described here allows fetches by SHA-1 clients of a
+personal NewHash repository because it's not much more difficult than
+allowing pushes from that repository. This support needs to be guarded
+by a configuration option --- servers like git.kernel.org that serve a
+large number of clients would not be expected to bear that cost.
+
+Meaning of signatures
+~~~~~~~~~~~~~~~~~~~~~
+The signed payload for signed commits and tags does not explicitly
+name the hash used to identify objects. If some day Git adopts a new
+hash function with the same length as the current SHA-1 (40
+hexadecimal digit) or NewHash (64 hexadecimal digit) objects then the
+intent behind the PGP signed payload in an object signature is
+unclear:
+
+ object e7e07d5a4fcc2a203d9873968ad3e6bd4d7419d7
+ type commit
+ tag v2.12.0
+ tagger Junio C Hamano <gitster@pobox.com> 1487962205 -0800
+
+ Git 2.12
+
+Does this mean Git v2.12.0 is the commit with sha1-name
+e7e07d5a4fcc2a203d9873968ad3e6bd4d7419d7 or the commit with
+new-40-digit-hash-name e7e07d5a4fcc2a203d9873968ad3e6bd4d7419d7?
+
+Fortunately NewHash and SHA-1 have different lengths. If Git starts
+using another hash with the same length to name objects, then it will
+need to change the format of signed payloads using that hash to
+address this issue.
+
+Object names on the command line
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+To support the transition (see Transition plan below), this design
+supports four different modes of operation:
+
+ 1. ("dark launch") Treat object names input by the user as SHA-1 and
+ convert any object names written to output to SHA-1, but store
+ objects using NewHash. This allows users to test the code with no
+    visible behavior change except for performance. This allows
+    running even tests that assume the SHA-1 hash function, to
+ sanity-check the behavior of the new mode.
+
+ 2. ("early transition") Allow both SHA-1 and NewHash object names in
+ input. Any object names written to output use SHA-1. This allows
+ users to continue to make use of SHA-1 to communicate with peers
+ (e.g. by email) that have not migrated yet and prepares for mode 3.
+
+ 3. ("late transition") Allow both SHA-1 and NewHash object names in
+ input. Any object names written to output use NewHash. In this
+ mode, users are using a more secure object naming method by
+ default. The disruption is minimal as long as most of their peers
+ are in mode 2 or mode 3.
+
+ 4. ("post-transition") Treat object names input by the user as
+ NewHash and write output using NewHash. This is safer than mode 3
+ because there is less risk that input is incorrectly interpreted
+ using the wrong hash function.
+
+The mode is specified in configuration.
+
+The user can also explicitly specify which format to use for a
+particular revision specifier and for output, overriding the mode. For
+example:
+
+git --output-format=sha1 log abac87a^{sha1}..f787cac^{newhash}
+
+Selection of a New Hash
+-----------------------
+In early 2005, around the time that Git was written, Xiaoyun Wang,
+Yiqun Lisa Yin, and Hongbo Yu announced an attack finding SHA-1
+collisions in 2^69 operations. In August they published details.
+Luckily, no practical demonstrations of a collision in full SHA-1 were
+published until 10 years later, in 2017.
+
+The hash function NewHash to replace SHA-1 should be stronger than
+SHA-1 was: we would like it to be trustworthy and useful in practice
+for at least 10 years.
+
+Some other relevant properties:
+
+1. A 256-bit hash (long enough to match common security practice; not
+ excessively long to hurt performance and disk usage).
+
+2. High quality implementations should be widely available (e.g. in
+ OpenSSL).
+
+3. The hash function's properties should match Git's needs (e.g. Git
+ requires collision and 2nd preimage resistance and does not require
+ length extension resistance).
+
+4. As a tiebreaker, the hash should be fast to compute (fortunately
+ many contenders are faster than SHA-1).
+
+Some hashes under consideration are SHA-256, SHA-512/256, SHA-256x16,
+K12, and BLAKE2bp-256.
+
+Transition plan
+---------------
+Some initial steps can be implemented independently of one another:
+- adding a hash function API (vtable)
+- teaching fsck to tolerate the gpgsig-newhash field
+- excluding gpgsig-* from the fields copied by "git commit --amend"
+- annotating tests that depend on SHA-1 values with a SHA1 test
+ prerequisite
+- using "struct object_id", GIT_MAX_RAWSZ, and GIT_MAX_HEXSZ
+ consistently instead of "unsigned char *" and the hardcoded
+ constants 20 and 40.
+- introducing index v3
+- adding support for the PSRC field and safer object pruning
+
+
+The first user-visible change is the introduction of the objectFormat
+extension (without compatObjectFormat). This requires:
+- implementing the loose-object-idx
+- teaching fsck about this mode of operation
+- using the hash function API (vtable) when computing object names
+- signing objects and verifying signatures
+- rejecting attempts to fetch from or push to an incompatible
+ repository
+
+Next comes introduction of compatObjectFormat:
+- translating object names between object formats
+- translating object content between object formats
+- generating and verifying signatures in the compat format
+- adding appropriate index entries when adding a new object to the
+ object store
+- --output-format option
+- ^{sha1} and ^{newhash} revision notation
+- configuration to specify default input and output format (see
+ "Object names on the command line" above)
+
+The next step is supporting fetches and pushes to SHA-1 repositories:
+- allow pushes to a repository using the compat format
+- generate a topologically sorted list of the SHA-1 names of fetched
+ objects
+- convert the fetched packfile to newhash format and generate an idx
+ file
+- re-sort to match the order of objects in the fetched packfile
+
+The infrastructure supporting fetch also allows converting an existing
+repository. In converted repositories and new clones, end users can
+gain support for the new hash function without any visible change in
+behavior (see "dark launch" in the "Object names on the command line"
+section). In particular this allows users to verify NewHash signatures
+on objects in the repository, and it should ensure the transition code
+is stable in production in preparation for using it more widely.
+
+Over time projects would encourage their users to adopt the "early
+transition" and then "late transition" modes to take advantage of the
+new, more futureproof NewHash object names.
+
+When objectFormat and compatObjectFormat are both set, commands
+generating signatures would generate both SHA-1 and NewHash signatures
+by default to support both new and old users.
+
+In projects using NewHash heavily, users could be encouraged to adopt
+the "post-transition" mode to avoid accidentally making implicit use
+of SHA-1 object names.
+
+Once a critical mass of users have upgraded to a version of Git that
+can verify NewHash signatures and have converted their existing
+repositories to support verifying them, we can add support for a
+setting to generate only NewHash signatures. This is expected to be at
+least a year later.
+
+That is also a good moment to advertise the ability to convert
+repositories to use NewHash only, stripping out all SHA-1 related
+metadata. This improves performance by eliminating translation
+overhead and security by avoiding the possibility of accidentally
+relying on the safety of SHA-1.
+
+Updating Git's protocols to allow a server to specify which hash
+functions it supports is also an important part of this transition. It
+is not discussed in detail in this document but this transition plan
+assumes it happens. :)
+
+Alternatives considered
+-----------------------
+Upgrading everyone working on a particular project on a flag day
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Projects like the Linux kernel are large and complex enough that
+flipping the switch for all projects based on the repository at once
+is infeasible.
+
+Not only would all developers and server operators supporting
+developers have to switch on the same flag day, but supporting tooling
+(continuous integration, code review, bug trackers, etc) would have to
+be adapted as well. This also makes it difficult to get early feedback
+from some project participants testing before it is time for mass
+adoption.
+
+Using hash functions in parallel
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+(e.g. https://public-inbox.org/git/22708.8913.864049.452252@chiark.greenend.org.uk/ )
+Objects newly created would be addressed by the new hash, but inside
+such an object (e.g. commit) it is still possible to address objects
+using the old hash function.
+* You cannot trust its history (needed for bisectability) in the
+ future without further work
+* Maintenance burden as the number of supported hash functions grows
+ (they will never go away, so they accumulate). In this proposal, by
+ comparison, converted objects lose all references to SHA-1.
+
+Signed objects with multiple hashes
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Instead of introducing the gpgsig-newhash field in commit and tag objects
+for newhash-content based signatures, an earlier version of this design
+added "hash newhash <newhash-name>" fields to strengthen the existing
+sha1-content based signatures.
+
+In other words, a single signature was used to attest to the object
+content using both hash functions. This had some advantages:
+* Using one signature instead of two speeds up the signing process.
+* Having one signed payload with both hashes allows the signer to
+ attest to the sha1-name and newhash-name referring to the same object.
+* All users consume the same signature. Broken signatures are likely
+ to be detected quickly using current versions of git.
+
+However, it also came with disadvantages:
+* Verifying a signed object requires access to the sha1-names of all
+ objects it references, even after the transition is complete and
+ translation table is no longer needed for anything else. To support
+ this, the design added fields such as "hash sha1 tree <sha1-name>"
+ and "hash sha1 parent <sha1-name>" to the newhash-content of a signed
+ commit, complicating the conversion process.
+* Allowing signed objects without a sha1 (for after the transition is
+ complete) complicated the design further, requiring a "nohash sha1"
+ field to suppress including "hash sha1" fields in the newhash-content
+ and signed payload.
+
+Lazily populated translation table
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Some of the work of building the translation table could be deferred to
+push time, but that would significantly complicate and slow down pushes.
+Calculating the sha1-name at object creation time at the same time it is
+being streamed to disk and having its newhash-name calculated should be
+an acceptable cost.
+
+Document History
+----------------
+
+2017-03-03
+bmwill@google.com, jonathantanmy@google.com, jrnieder@gmail.com,
+sbeller@google.com
+
+Initial version sent to
+http://public-inbox.org/git/20170304011251.GA26789@aiede.mtv.corp.google.com
+
+2017-03-03 jrnieder@gmail.com
+Incorporated suggestions from jonathantanmy and sbeller:
+* describe purpose of signed objects with each hash type
+* redefine signed object verification using object content under the
+ first hash function
+
+2017-03-06 jrnieder@gmail.com
+* Use SHA3-256 instead of SHA2 (thanks, Linus and brian m. carlson).[1][2]
+* Make sha3-based signatures a separate field, avoiding the need for
+ "hash" and "nohash" fields (thanks to peff[3]).
+* Add a sorting phase to fetch (thanks to Junio for noticing the need
+ for this).
+* Omit blobs from the topological sort during fetch (thanks to peff).
+* Discuss alternates, git notes, and git servers in the caveats
+ section (thanks to Junio Hamano, brian m. carlson[4], and Shawn
+ Pearce).
+* Clarify language throughout (thanks to various commenters,
+ especially Junio).
+
+2017-09-27 jrnieder@gmail.com, sbeller@google.com
+* use placeholder NewHash instead of SHA3-256
+* describe criteria for picking a hash function.
+* include a transition plan (thanks especially to Brandon Williams
+ for fleshing these ideas out)
+* define the translation table (thanks, Shawn Pearce[5], Jonathan
+ Tan, and Masaya Suzuki)
+* avoid loose object overhead by packing more aggressively in
+ "git gc --auto"
+
+[1] http://public-inbox.org/git/CA+55aFzJtejiCjV0e43+9oR3QuJK2PiFiLQemytoLpyJWe6P9w@mail.gmail.com/
+[2] http://public-inbox.org/git/CA+55aFz+gkAsDZ24zmePQuEs1XPS9BP_s8O7Q4wQ7LV7X5-oDA@mail.gmail.com/
+[3] http://public-inbox.org/git/20170306084353.nrns455dvkdsfgo5@sigill.intra.peff.net/
+[4] http://public-inbox.org/git/20170304224936.rqqtkdvfjgyezsht@genre.crustytoothpaste.net
+[5] https://public-inbox.org/git/CAJo=hJtoX9=AyLHHpUJS7fueV9ciZ_MNpnEPHUz8Whui6g9F0A@mail.gmail.com/
#!/bin/sh
GVF=GIT-VERSION-FILE
-DEF_VER=v2.14.GIT
+DEF_VER=v2.15.0-rc0
LF='
'
#
# Define NO_MMAP if you want to avoid mmap.
#
+# Define MMAP_PREVENTS_DELETE if a file that is currently mmapped cannot be
+# deleted or cannot be replaced using rename().
+#
# Define NO_SYS_POLL_H if you don't have sys/poll.h.
#
# Define NO_POLL if you do not have or don't want to use poll().
COMPAT_OBJS += compat/win32mmap.o
endif
endif
+ifdef MMAP_PREVENTS_DELETE
+ BASIC_CFLAGS += -DMMAP_PREVENTS_DELETE
+endif
ifdef OBJECT_CREATION_USES_RENAMES
COMPAT_CFLAGS += -DOBJECT_CREATION_MODE=1
endif
return retval;
}
+/*
+ * Resolve `path` into an absolute, cleaned-up path. The return value
+ * comes from a shared buffer.
+ */
const char *real_path(const char *path)
{
static struct strbuf realpath = STRBUF_INIT;
N_("git branch [<options>] [-l] [-f] <branch-name> [<start-point>]"),
N_("git branch [<options>] [-r] (-d | -D) <branch-name>..."),
N_("git branch [<options>] (-m | -M) [<old-branch>] <new-branch>"),
+ N_("git branch [<options>] (-c | -C) [<old-branch>] <new-branch>"),
N_("git branch [<options>] [-r | -a] [--points-at]"),
N_("git branch [<options>] [-r | -a] [--format]"),
NULL
if (!head_rev)
die(_("Couldn't look up commit object for HEAD"));
}
- for (i = 0; i < argc; i++, strbuf_release(&bname)) {
+ for (i = 0; i < argc; i++, strbuf_reset(&bname)) {
char *target = NULL;
int flags = 0;
}
free(name);
+ strbuf_release(&bname);
- return(ret);
+ return ret;
}
static int calc_maxwidth(struct ref_array *refs, int remote_bonus)
strbuf_addf(&obname, "%%(objectname:short=%d)", filter->abbrev);
strbuf_addf(&local, "%%(align:%d,left)%%(refname:lstrip=2)%%(end)", maxwidth);
- strbuf_addf(&local, "%s", branch_get_color(BRANCH_COLOR_RESET));
+ strbuf_addstr(&local, branch_get_color(BRANCH_COLOR_RESET));
strbuf_addf(&local, " %s ", obname.buf);
if (filter->verbose > 1)
free_worktrees(worktrees);
}
-static void rename_branch(const char *oldname, const char *newname, int force)
+static void copy_or_rename_branch(const char *oldname, const char *newname, int copy, int force)
{
struct strbuf oldref = STRBUF_INIT, newref = STRBUF_INIT, logmsg = STRBUF_INIT;
struct strbuf oldsection = STRBUF_INIT, newsection = STRBUF_INIT;
int recovery = 0;
int clobber_head_ok;
- if (!oldname)
- die(_("cannot rename the current branch while not on any."));
+ if (!oldname) {
+ if (copy)
+ die(_("cannot copy the current branch while not on any."));
+ else
+ die(_("cannot rename the current branch while not on any."));
+ }
if (strbuf_check_branch_ref(&oldref, oldname)) {
/*
reject_rebase_or_bisect_branch(oldref.buf);
- strbuf_addf(&logmsg, "Branch: renamed %s to %s",
- oldref.buf, newref.buf);
+ if (copy)
+ strbuf_addf(&logmsg, "Branch: copied %s to %s",
+ oldref.buf, newref.buf);
+ else
+ strbuf_addf(&logmsg, "Branch: renamed %s to %s",
+ oldref.buf, newref.buf);
- if (rename_ref(oldref.buf, newref.buf, logmsg.buf))
+ if (!copy && rename_ref(oldref.buf, newref.buf, logmsg.buf))
die(_("Branch rename failed"));
+ if (copy && copy_existing_ref(oldref.buf, newref.buf, logmsg.buf))
+ die(_("Branch copy failed"));
- if (recovery)
- warning(_("Renamed a misnamed branch '%s' away"), oldref.buf + 11);
+ if (recovery) {
+ if (copy)
+ warning(_("Copied a misnamed branch '%s' away"),
+ oldref.buf + 11);
+ else
+ warning(_("Renamed a misnamed branch '%s' away"),
+ oldref.buf + 11);
+ }
- if (replace_each_worktree_head_symref(oldref.buf, newref.buf, logmsg.buf))
+ if (!copy &&
+ replace_each_worktree_head_symref(oldref.buf, newref.buf, logmsg.buf))
die(_("Branch renamed to %s, but HEAD is not updated!"), newname);
strbuf_release(&logmsg);
strbuf_release(&oldref);
strbuf_addf(&newsection, "branch.%s", newref.buf + 11);
strbuf_release(&newref);
- if (git_config_rename_section(oldsection.buf, newsection.buf) < 0)
+ if (!copy && git_config_rename_section(oldsection.buf, newsection.buf) < 0)
die(_("Branch is renamed, but update of config-file failed"));
+ if (copy && strcmp(oldname, newname) && git_config_copy_section(oldsection.buf, newsection.buf) < 0)
+ die(_("Branch is copied, but update of config-file failed"));
strbuf_release(&oldsection);
strbuf_release(&newsection);
}
int cmd_branch(int argc, const char **argv, const char *prefix)
{
- int delete = 0, rename = 0, force = 0, list = 0;
+ int delete = 0, rename = 0, copy = 0, force = 0, list = 0;
int reflog = 0, edit_description = 0;
int quiet = 0, unset_upstream = 0;
const char *new_upstream = NULL;
OPT_BIT('D', NULL, &delete, N_("delete branch (even if not merged)"), 2),
OPT_BIT('m', "move", &rename, N_("move/rename a branch and its reflog"), 1),
OPT_BIT('M', NULL, &rename, N_("move/rename a branch, even if target exists"), 2),
+	OPT_BIT('c', "copy", &copy, N_("copy a branch and its reflog"), 1),
+	OPT_BIT('C', NULL, &copy, N_("copy a branch, even if target exists"), 2),
OPT_BOOL(0, "list", &list, N_("list branch names")),
OPT_BOOL('l', "create-reflog", &reflog, N_("create the branch's reflog")),
OPT_BOOL(0, "edit-description", &edit_description,
argc = parse_options(argc, argv, prefix, options, builtin_branch_usage,
0);
- if (!delete && !rename && !edit_description && !new_upstream && !unset_upstream && argc == 0)
+ if (!delete && !rename && !copy && !edit_description && !new_upstream && !unset_upstream && argc == 0)
list = 1;
if (filter.with_commit || filter.merge != REF_FILTER_MERGED_NONE || filter.points_at.nr ||
filter.no_commit)
list = 1;
- if (!!delete + !!rename + !!new_upstream +
+ if (!!delete + !!rename + !!copy + !!new_upstream +
list + unset_upstream > 1)
usage_with_options(builtin_branch_usage, options);
if (force) {
delete *= 2;
rename *= 2;
+ copy *= 2;
}
if (delete) {
if (edit_branch_description(branch_name))
return 1;
+ } else if (copy) {
+ if (!argc)
+ die(_("branch name required"));
+ else if (argc == 1)
+ copy_or_rename_branch(head, argv[0], 1, copy > 1);
+ else if (argc == 2)
+ copy_or_rename_branch(argv[0], argv[1], 1, copy > 1);
+ else
+ die(_("too many branches for a copy operation"));
} else if (rename) {
if (!argc)
die(_("branch name required"));
else if (argc == 1)
- rename_branch(head, argv[0], rename > 1);
+ copy_or_rename_branch(head, argv[0], 0, rename > 1);
else if (argc == 2)
- rename_branch(argv[0], argv[1], rename > 1);
+ copy_or_rename_branch(argv[0], argv[1], 0, rename > 1);
else
- die(_("too many branches for a rename operation"));
+ die(_("too many arguments for a rename operation"));
} else if (new_upstream) {
struct branch *branch = branch_get(argv[0]);
if (argc > 1)
- die(_("too many branches to set new upstream"));
+ die(_("too many arguments to set new upstream"));
if (!branch) {
if (!argc || !strcmp(argv[0], "HEAD"))
struct strbuf buf = STRBUF_INIT;
if (argc > 1)
- die(_("too many branches to unset upstream"));
+ die(_("too many arguments to unset upstream"));
if (!branch) {
if (!argc || !strcmp(argv[0], "HEAD"))
if (new->path && !opts->force_detach && !opts->new_branch &&
!opts->ignore_other_worktrees) {
- struct object_id oid;
int flag;
- char *head_ref = resolve_refdup("HEAD", 0, oid.hash, &flag);
+ char *head_ref = resolve_refdup("HEAD", 0, NULL, &flag);
if (head_ref &&
(!(flag & REF_ISSYMREF) || strcmp(head_ref, new->path)))
die_if_checked_out(new->path, 1);
strbuf_release(&buf);
}
+ UNLEAK(opts);
if (opts.patch_mode || opts.pathspec.nr)
return checkout_paths(&opts, new.name);
else
read_cache_preload(&s.pathspec);
refresh_index(&the_index, REFRESH_QUIET|REFRESH_UNMERGED, &s.pathspec, NULL, NULL);
- fd = hold_locked_index(&index_lock, 0);
+ if (use_optional_locks())
+ fd = hold_locked_index(&index_lock, 0);
+ else
+ fd = -1;
s.is_initial = get_oid(s.reference, &oid) ? 1 : 0;
if (!s.is_initial)
return -1;
}
result = run_diff_index(&rev, cached);
+ UNLEAK(rev);
return diff_result_code(&rev.diffopt, result);
}
result = diff_result_code(&rev.diffopt, result);
if (1 < rev.diffopt.skip_stat_unmatch)
refresh_index_quietly();
+ UNLEAK(rev);
+ UNLEAK(ent);
+ UNLEAK(blob);
return result;
}
OPT_GROUP(""),
OPT_INTEGER( 0 , "count", &maxcount, N_("show only <n> matched refs")),
OPT_STRING( 0 , "format", &format.format, N_("format"), N_("format to use for the output")),
+ OPT__COLOR(&format.use_color, N_("respect format colors")),
OPT_CALLBACK(0 , "sort", sorting_tail, N_("key"),
N_("field name to sort on"), &parse_opt_ref_sorting),
OPT_CALLBACK(0, "points-at", &filter.points_at,
usage(builtin_get_tar_commit_id_usage);
n = read_in_full(0, buffer, HEADERSIZE);
- if (n < HEADERSIZE)
- die("git get-tar-commit-id: read error");
+ if (n < 0)
+ die_errno("git get-tar-commit-id: read error");
+ if (n != HEADERSIZE)
+		die("git get-tar-commit-id: EOF before reading tar header");
if (header->typeflag[0] != 'g')
return 1;
if (!skip_prefix(content, "52 comment=", &comment))
always, allow_undefined, data.name_only);
}
+ UNLEAK(revs);
return 0;
}
int cmd_rebase__helper(int argc, const char **argv, const char *prefix)
{
struct replay_opts opts = REPLAY_OPTS_INIT;
+ int keep_empty = 0;
enum {
- CONTINUE = 1, ABORT
+ CONTINUE = 1, ABORT, MAKE_SCRIPT, SHORTEN_SHA1S, EXPAND_SHA1S,
+ CHECK_TODO_LIST, SKIP_UNNECESSARY_PICKS, REARRANGE_SQUASH
} command = 0;
struct option options[] = {
OPT_BOOL(0, "ff", &opts.allow_ff, N_("allow fast-forward")),
+ OPT_BOOL(0, "keep-empty", &keep_empty, N_("keep empty commits")),
OPT_CMDMODE(0, "continue", &command, N_("continue rebase"),
CONTINUE),
OPT_CMDMODE(0, "abort", &command, N_("abort rebase"),
ABORT),
+ OPT_CMDMODE(0, "make-script", &command,
+ N_("make rebase script"), MAKE_SCRIPT),
+ OPT_CMDMODE(0, "shorten-ids", &command,
+ N_("shorten SHA-1s in the todo list"), SHORTEN_SHA1S),
+ OPT_CMDMODE(0, "expand-ids", &command,
+ N_("expand SHA-1s in the todo list"), EXPAND_SHA1S),
+ OPT_CMDMODE(0, "check-todo-list", &command,
+ N_("check the todo list"), CHECK_TODO_LIST),
+ OPT_CMDMODE(0, "skip-unnecessary-picks", &command,
+ N_("skip unnecessary picks"), SKIP_UNNECESSARY_PICKS),
+ OPT_CMDMODE(0, "rearrange-squash", &command,
+ N_("rearrange fixup/squash lines"), REARRANGE_SQUASH),
OPT_END()
};
return !!sequencer_continue(&opts);
if (command == ABORT && argc == 1)
return !!sequencer_remove_state(&opts);
+ if (command == MAKE_SCRIPT && argc > 1)
+ return !!sequencer_make_script(keep_empty, stdout, argc, argv);
+ if (command == SHORTEN_SHA1S && argc == 1)
+ return !!transform_todo_ids(1);
+ if (command == EXPAND_SHA1S && argc == 1)
+ return !!transform_todo_ids(0);
+ if (command == CHECK_TODO_LIST && argc == 1)
+ return !!check_todo_list();
+ if (command == SKIP_UNNECESSARY_PICKS && argc == 1)
+ return !!skip_unnecessary_picks();
+ if (command == REARRANGE_SQUASH && argc == 1)
+ return !!rearrange_squash();
usage_with_options(builtin_rebase_helper_usage, options);
}
{
struct check_connected_options opt = CHECK_CONNECTED_INIT;
struct command *cmd;
- struct object_id oid;
struct iterate_data data;
struct async muxer;
int err_fd = 0;
check_aliased_updates(commands);
free(head_name_to_free);
- head_name = head_name_to_free = resolve_refdup("HEAD", 0, oid.hash, NULL);
+ head_name = head_name_to_free = resolve_refdup("HEAD", 0, NULL, NULL);
if (use_atomic)
execute_commands_atomic(commands, si);
return s;
}
+static char *findspace(const char *s)
+{
+ for (; *s; s++)
+ if (isspace(*s))
+ return (char*)s;
+ return NULL;
+}
+
static int cmd_parseopt(int argc, const char **argv, const char *prefix)
{
static int keep_dashdash = 0, stop_at_non_option = 0;
/* parse: (<short>|<short>,<long>|<long>)[*=?!]*<arghint>? SP+ <help> */
while (strbuf_getline(&sb, stdin) != EOF) {
const char *s;
- const char *help;
+ char *help;
struct option *o;
if (!sb.len)
memset(opts + onb, 0, sizeof(opts[onb]));
o = &opts[onb++];
- help = strchr(sb.buf, ' ');
- if (!help || *sb.buf == ' ') {
+ help = findspace(sb.buf);
+ if (!help || sb.buf == help) {
o->type = OPTION_GROUP;
o->help = xstrdup(skipspaces(sb.buf));
continue;
}
+ *help = '\0';
+
o->type = OPTION_CALLBACK;
- o->help = xstrdup(skipspaces(help));
+ o->help = xstrdup(skipspaces(help+1));
o->value = &parsed;
o->flags = PARSE_OPT_NOARG;
o->callback = &parseopt_dump;
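
The spec/help split above now happens at the first whitespace character
rather than the first space. A standalone sketch (assuming only the standard
C library; the real code additionally runs the help text through skipspaces()
and xstrdup()) of how findspace() splits a --parseopt input line whose
separator is a tab:

#include <ctype.h>
#include <stdio.h>

/* Same helper as in the rev-parse hunk above. */
static char *findspace(const char *s)
{
	for (; *s; s++)
		if (isspace(*s))
			return (char *)s;
	return NULL;
}

int main(void)
{
	/* A parseopt spec line; note the separator is a TAB, not a space. */
	char line[] = "q,quiet\tbe quiet";
	char *help = findspace(line);

	if (help && help != line) {
		*help = '\0';
		printf("spec = '%s', help = '%s'\n", line, help + 1);
	}
	return 0;
}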
},
OPT_STRING( 0 , "format", &format.format, N_("format"),
N_("format to use for the output")),
+ OPT__COLOR(&format.use_color, N_("respect format colors")),
OPT_BOOL('i', "ignore-case", &icase, N_("sorting and filtering are case insensitive")),
OPT_END()
};
if (force && !is_null_oid(&prev) && oidcmp(&prev, &object))
printf(_("Updated tag '%s' (was %s)\n"), tag, find_unique_abbrev(prev.hash, DEFAULT_ABBREV));
- strbuf_release(&err);
- strbuf_release(&buf);
- strbuf_release(&ref);
- strbuf_release(&reflog_msg);
+ UNLEAK(buf);
+ UNLEAK(ref);
+ UNLEAK(reflog_msg);
+ UNLEAK(msg);
+ UNLEAK(err);
return 0;
}
{
struct stat st;
char *path;
- int fd, len;
+ int fd;
+ size_t len;
+ ssize_t read_result;
if (!is_directory(git_path("worktrees/%s", id))) {
strbuf_addf(reason, _("Removing worktrees/%s: not a valid directory"), id);
id, strerror(errno));
return 1;
}
- len = st.st_size;
+ len = xsize_t(st.st_size);
path = xmallocz(len);
- read_in_full(fd, path, len);
+
+ read_result = read_in_full(fd, path, len);
+ if (read_result < 0) {
+ strbuf_addf(reason, _("Removing worktrees/%s: unable to read gitdir file (%s)"),
+ id, strerror(errno));
+ close(fd);
+ free(path);
+ return 1;
+ }
close(fd);
+
+ if (read_result != len) {
+ strbuf_addf(reason,
+ _("Removing worktrees/%s: short read (expected %"PRIuMAX" bytes, read %"PRIuMAX")"),
+ id, (uintmax_t)len, (uintmax_t)read_result);
+ free(path);
+ return 1;
+ }
while (len && (path[len - 1] == '\n' || path[len - 1] == '\r'))
len--;
if (!len) {
if (size && !s.avail_in) {
ssize_t rsize = size < sizeof(ibuf) ? size : sizeof(ibuf);
- if (read_in_full(fd, ibuf, rsize) != rsize)
+ ssize_t read_result = read_in_full(fd, ibuf, rsize);
+ if (read_result < 0)
+ die_errno("failed to read from '%s'", path);
+ if (read_result != rsize)
die("failed to read %d bytes from '%s'",
(int)rsize, path);
offset += rsize;
#define GIT_NOGLOB_PATHSPECS_ENVIRONMENT "GIT_NOGLOB_PATHSPECS"
#define GIT_ICASE_PATHSPECS_ENVIRONMENT "GIT_ICASE_PATHSPECS"
#define GIT_QUARANTINE_ENVIRONMENT "GIT_QUARANTINE_PATH"
+#define GIT_OPTIONAL_LOCKS_ENVIRONMENT "GIT_OPTIONAL_LOCKS"
/*
* This environment variable is expected to contain a boolean indicating
*/
extern int ref_paranoia;
+/*
+ * Returns the boolean value of $GIT_OPTIONAL_LOCKS (or the default value).
+ */
+int use_optional_locks(void);
+
/*
* The character that begins a commented line in user-editable file
* that is subject to stripspace.
if (!strcasecmp(value, "never"))
return 0;
if (!strcasecmp(value, "always"))
- return 1;
+ return var ? GIT_COLOR_AUTO : 1;
if (!strcasecmp(value, "auto"))
return GIT_COLOR_AUTO;
}
pfd[i].revents = happened;
rc++;
}
+ else
+ {
+ pfd[i].revents = 0;
+ }
}
return rc;
return 4;
}
-static ssize_t write_section(int fd, const char *key)
+static struct strbuf store_create_section(const char *key)
{
const char *dot;
int i;
- ssize_t ret;
struct strbuf sb = STRBUF_INIT;
dot = memchr(key, '.', store.baselen);
strbuf_addf(&sb, "[%.*s]\n", store.baselen, key);
}
- ret = write_in_full(fd, sb.buf, sb.len);
+ return sb;
+}
+
+static ssize_t write_section(int fd, const char *key)
+{
+ struct strbuf sb = store_create_section(key);
+ ssize_t ret;
+
+	ret = write_in_full(fd, sb.buf, sb.len);
strbuf_release(&sb);
return ret;
}
/* if new_name == NULL, the section is removed instead */
-int git_config_rename_section_in_file(const char *config_filename,
- const char *old_name, const char *new_name)
+static int git_config_copy_or_rename_section_in_file(const char *config_filename,
+ const char *old_name, const char *new_name, int copy)
{
int ret = 0, remove = 0;
char *filename_buf = NULL;
char buf[1024];
FILE *config_file = NULL;
struct stat st;
+ struct strbuf copystr = STRBUF_INIT;
if (new_name && !section_name_is_ok(new_name)) {
ret = error("invalid section name: %s", new_name);
while (fgets(buf, sizeof(buf), config_file)) {
int i;
int length;
+ int is_section = 0;
char *output = buf;
for (i = 0; buf[i] && isspace(buf[i]); i++)
; /* do nothing */
if (buf[i] == '[') {
/* it's a section */
- int offset = section_name_match(&buf[i], old_name);
+ int offset;
+ is_section = 1;
+
+ /*
+ * When encountering a new section under -c we
+ * need to flush out any section we're already
+			 * copying and begin anew. There might be
+ * multiple [branch "$name"] sections.
+ */
+ if (copystr.len > 0) {
+ if (write_in_full(out_fd, copystr.buf, copystr.len) != copystr.len) {
+ ret = write_error(get_lock_file_path(lock));
+ goto out;
+ }
+				strbuf_reset(&copystr);
+ }
+
+ offset = section_name_match(&buf[i], old_name);
if (offset > 0) {
ret++;
if (new_name == NULL) {
continue;
}
store.baselen = strlen(new_name);
- if (write_section(out_fd, new_name) < 0) {
- ret = write_error(get_lock_file_path(lock));
- goto out;
- }
- /*
- * We wrote out the new section, with
- * a newline, now skip the old
- * section's length
- */
- output += offset + i;
- if (strlen(output) > 0) {
+ if (!copy) {
+ if (write_section(out_fd, new_name) < 0) {
+ ret = write_error(get_lock_file_path(lock));
+ goto out;
+ }
/*
- * More content means there's
- * a declaration to put on the
- * next line; indent with a
- * tab
+ * We wrote out the new section, with
+ * a newline, now skip the old
+ * section's length
*/
- output -= 1;
- output[0] = '\t';
+ output += offset + i;
+ if (strlen(output) > 0) {
+ /*
+ * More content means there's
+ * a declaration to put on the
+ * next line; indent with a
+ * tab
+ */
+ output -= 1;
+ output[0] = '\t';
+ }
+ } else {
+ copystr = store_create_section(new_name);
}
}
remove = 0;
if (remove)
continue;
length = strlen(output);
+
+ if (!is_section && copystr.len > 0) {
+			strbuf_add(&copystr, output, length);
+ }
+
if (write_in_full(out_fd, output, length) < 0) {
ret = write_error(get_lock_file_path(lock));
goto out;
}
}
+
+ /*
+	 * Copy a trailing section at the end of the config; it won't be
+	 * flushed by the usual "flush because we have a new section"
+	 * logic in the loop above.
+ */
+ if (copystr.len > 0) {
+ if (write_in_full(out_fd, copystr.buf, copystr.len) != copystr.len) {
+ ret = write_error(get_lock_file_path(lock));
+ goto out;
+ }
+		strbuf_reset(&copystr);
+ }
+
fclose(config_file);
config_file = NULL;
commit_and_out:
return ret;
}
+int git_config_rename_section_in_file(const char *config_filename,
+ const char *old_name, const char *new_name)
+{
+ return git_config_copy_or_rename_section_in_file(config_filename,
+ old_name, new_name, 0);
+}
+
int git_config_rename_section(const char *old_name, const char *new_name)
{
return git_config_rename_section_in_file(NULL, old_name, new_name);
}
+int git_config_copy_section_in_file(const char *config_filename,
+ const char *old_name, const char *new_name)
+{
+ return git_config_copy_or_rename_section_in_file(config_filename,
+ old_name, new_name, 1);
+}
+
+int git_config_copy_section(const char *old_name, const char *new_name)
+{
+ return git_config_copy_section_in_file(NULL, old_name, new_name);
+}
+
/*
* Call this to report error for your variable that should not
* get a boolean value (i.e. "[my] var" means "true").
extern void git_config_set_multivar_in_file(const char *, const char *, const char *, const char *, int);
extern int git_config_rename_section(const char *, const char *);
extern int git_config_rename_section_in_file(const char *, const char *, const char *);
+extern int git_config_copy_section(const char *, const char *);
+extern int git_config_copy_section_in_file(const char *, const char *, const char *);
extern const char *git_etc_gitconfig(void);
extern int git_env_bool(const char *, int);
extern unsigned long git_env_ulong(const char *, unsigned long);
UNRELIABLE_FSTAT = UnfortunatelyYes
SPARSE_FLAGS = -isystem /usr/include/w32api -Wno-one-bit-signed-bitfield
OBJECT_CREATION_USES_RENAMES = UnfortunatelyNeedsTo
+ MMAP_PREVENTS_DELETE = UnfortunatelyYes
COMPAT_OBJS += compat/cygwin.o
FREAD_READS_DIRECTORIES = UnfortunatelyYes
endif
NO_ST_BLOCKS_IN_STRUCT_STAT = YesPlease
NO_NSEC = YesPlease
USE_WIN32_MMAP = YesPlease
+ MMAP_PREVENTS_DELETE = UnfortunatelyYes
# USE_NED_ALLOCATOR = YesPlease
UNRELIABLE_FSTAT = UnfortunatelyYes
OBJECT_CREATION_USES_RENAMES = UnfortunatelyNeedsTo
NO_ST_BLOCKS_IN_STRUCT_STAT = YesPlease
NO_NSEC = YesPlease
USE_WIN32_MMAP = YesPlease
+ MMAP_PREVENTS_DELETE = UnfortunatelyYes
USE_NED_ALLOCATOR = YesPlease
UNRELIABLE_FSTAT = UnfortunatelyYes
OBJECT_CREATION_USES_RENAMES = UnfortunatelyNeedsTo
T *src;
expression n;
@@
-- memcpy(dst, src, n * sizeof(*dst));
+- memcpy(dst, src, (n) * sizeof(*dst));
+ COPY_ARRAY(dst, src, n);
@@
T *src;
expression n;
@@
-- memcpy(dst, src, n * sizeof(*src));
+- memcpy(dst, src, (n) * sizeof(*src));
+ COPY_ARRAY(dst, src, n);
@@
T *src;
expression n;
@@
-- memcpy(dst, src, n * sizeof(T));
+- memcpy(dst, src, (n) * sizeof(T));
+ COPY_ARRAY(dst, src, n);
@@
T *ptr;
expression n;
@@
-- ptr = xmalloc(n * sizeof(*ptr));
+- ptr = xmalloc((n) * sizeof(*ptr));
+ ALLOC_ARRAY(ptr, n);
@@
T *ptr;
expression n;
@@
-- ptr = xmalloc(n * sizeof(T));
+- ptr = xmalloc((n) * sizeof(T));
+ ALLOC_ARRAY(ptr, n);
if (ret < 0)
die_errno("%s: sha1 file read error", f->name);
- if (ret < count)
+ if (ret != count)
die("%s: sha1 file truncated", f->name);
if (memcmp(buf, check_buffer, count))
die("sha1 file '%s' validation error", f->name);
strbuf_addch(&sb, ' ');
quote_c_style(p->two->path, &sb, NULL, 0);
}
+ strbuf_addch(&sb, '\n');
emit_diff_symbol(opt, DIFF_SYMBOL_SUMMARY,
sb.buf, sb.len, 0);
strbuf_release(&sb);
{
need_shared_repository_from_config = 1;
}
+
+int use_optional_locks(void)
+{
+ return git_env_bool(GIT_OPTIONAL_LOCKS_ENVIRONMENT, 1);
+}
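
Tying the pieces together: "git --no-optional-locks <cmd>" (see the git.c
hunk below) exports GIT_OPTIONAL_LOCKS=0, and read-only commands consult
use_optional_locks() before taking an opportunistic lock, as in the
builtin/commit.c hunk above. A standalone sketch of that calling pattern
(env_bool() here is a deliberately simplified stand-in for git_env_bool(),
which accepts the usual Git boolean spellings):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Simplified stand-in for git_env_bool(); only "0" counts as false here. */
static int env_bool(const char *name, int def)
{
	const char *v = getenv(name);
	if (!v || !*v)
		return def;
	return strcmp(v, "0") != 0;
}

static int use_optional_locks(void)
{
	return env_bool("GIT_OPTIONAL_LOCKS", 1);
}

int main(void)
{
	if (use_optional_locks())
		printf("taking the optional index lock to refresh the index\n");
	else
		printf("skipping the optional lock (--no-optional-locks)\n");
	return 0;
}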
checkpoint_requested = 0;
if (object_count) {
cycle_packfile();
- dump_branches();
- dump_tags();
- dump_marks();
}
+ dump_branches();
+ dump_tags();
+ dump_marks();
}
static void parse_checkpoint(void)
append_todo_help () {
gettext "
Commands:
- p, pick = use commit
- r, reword = use commit, but edit the commit message
- e, edit = use commit, but stop for amending
- s, squash = use commit, but meld into previous commit
- f, fixup = like \"squash\", but discard this commit's log message
- x, exec = run command (the rest of the line) using shell
- d, drop = remove commit
+p, pick = use commit
+r, reword = use commit, but edit the commit message
+e, edit = use commit, but stop for amending
+s, squash = use commit, but meld into previous commit
+f, fixup = like \"squash\", but discard this commit's log message
+x, exec = run command (the rest of the line) using shell
+d, drop = remove commit
These lines can be re-ordered; they are executed from top to bottom.
" | git stripspace --comment-lines >>"$todo"
done
}
-# skip picking commits whose parents are unchanged
-skip_unnecessary_picks () {
- fd=3
- while read -r command rest
- do
- # fd=3 means we skip the command
- case "$fd,$command" in
- 3,pick|3,p)
- # pick a commit whose parent is current $onto -> skip
- sha1=${rest%% *}
- case "$(git rev-parse --verify --quiet "$sha1"^)" in
- "$onto"*)
- onto=$sha1
- ;;
- *)
- fd=1
- ;;
- esac
- ;;
- 3,"$comment_char"*|3,)
- # copy comments
- ;;
- *)
- fd=1
- ;;
- esac
- printf '%s\n' "$command${rest:+ }$rest" >&$fd
- done <"$todo" >"$todo.new" 3>>"$done" &&
- mv -f "$todo".new "$todo" &&
- case "$(peek_next_command)" in
- squash|s|fixup|f)
- record_in_rewritten "$onto"
- ;;
- esac ||
- die "$(gettext "Could not skip unnecessary pick commands")"
-}
-
-transform_todo_ids () {
- while read -r command rest
- do
- case "$command" in
- "$comment_char"* | exec)
- # Be careful for oddball commands like 'exec'
- # that do not have a SHA-1 at the beginning of $rest.
- ;;
- *)
- sha1=$(git rev-parse --verify --quiet "$@" ${rest%%[ ]*}) &&
- rest="$sha1 ${rest#*[ ]}"
- ;;
- esac
- printf '%s\n' "$command${rest:+ }$rest"
- done <"$todo" >"$todo.new" &&
- mv -f "$todo.new" "$todo"
-}
-
expand_todo_ids() {
- transform_todo_ids
+ git rebase--helper --expand-ids
}
collapse_todo_ids() {
- transform_todo_ids --short
-}
-
-# Rearrange the todo list that has both "pick sha1 msg" and
-# "pick sha1 fixup!/squash! msg" appears in it so that the latter
-# comes immediately after the former, and change "pick" to
-# "fixup"/"squash".
-#
-# Note that if the config has specified a custom instruction format
-# each log message will be re-retrieved in order to normalize the
-# autosquash arrangement
-rearrange_squash () {
- # extract fixup!/squash! lines and resolve any referenced sha1's
- while read -r pick sha1 message
- do
- test -z "${format}" || message=$(git log -n 1 --format="%s" ${sha1})
- case "$message" in
- "squash! "*|"fixup! "*)
- action="${message%%!*}"
- rest=$message
- prefix=
- # skip all squash! or fixup! (but save for later)
- while :
- do
- case "$rest" in
- "squash! "*|"fixup! "*)
- prefix="$prefix${rest%%!*},"
- rest="${rest#*! }"
- ;;
- *)
- break
- ;;
- esac
- done
- printf '%s %s %s %s\n' "$sha1" "$action" "$prefix" "$rest"
- # if it's a single word, try to resolve to a full sha1 and
- # emit a second copy. This allows us to match on both message
- # and on sha1 prefix
- if test "${rest#* }" = "$rest"; then
- fullsha="$(git rev-parse -q --verify "$rest" 2>/dev/null)"
- if test -n "$fullsha"; then
- # prefix the action to uniquely identify this line as
- # intended for full sha1 match
- echo "$sha1 +$action $prefix $fullsha"
- fi
- fi
- esac
- done >"$1.sq" <"$1"
- test -s "$1.sq" || return
-
- used=
- while read -r pick sha1 message
- do
- case " $used" in
- *" $sha1 "*) continue ;;
- esac
- printf '%s\n' "$pick $sha1 $message"
- test -z "${format}" || message=$(git log -n 1 --format="%s" ${sha1})
- used="$used$sha1 "
- while read -r squash action msg_prefix msg_content
- do
- case " $used" in
- *" $squash "*) continue ;;
- esac
- emit=0
- case "$action" in
- +*)
- action="${action#+}"
- # full sha1 prefix test
- case "$msg_content" in "$sha1"*) emit=1;; esac ;;
- *)
- # message prefix test
- case "$message" in "$msg_content"*) emit=1;; esac ;;
- esac
- if test $emit = 1; then
- if test -n "${format}"
- then
- msg_content=$(git log -n 1 --format="${format}" ${squash})
- else
- msg_content="$(echo "$msg_prefix" | sed "s/,/! /g")$msg_content"
- fi
- printf '%s\n' "$action $squash $msg_content"
- used="$used$squash "
- fi
- done <"$1.sq"
- done >"$1.rearranged" <"$1"
- cat "$1.rearranged" >"$1"
- rm -f "$1.sq" "$1.rearranged"
+ git rebase--helper --shorten-ids
}
# Add commands after a pick or after a squash/fixup series
mv "$1.new" "$1"
}
-# Check if the SHA-1 passed as an argument is a
-# correct one, if not then print $2 in "$todo".badsha
-# $1: the SHA-1 to test
-# $2: the line number of the input
-# $3: the input filename
-check_commit_sha () {
- badsha=0
- if test -z "$1"
- then
- badsha=1
- else
- sha1_verif="$(git rev-parse --verify --quiet $1^{commit})"
- if test -z "$sha1_verif"
- then
- badsha=1
- fi
- fi
-
- if test $badsha -ne 0
- then
- line="$(sed -n -e "${2}p" "$3")"
- warn "$(eval_gettext "\
-Warning: the SHA-1 is missing or isn't a commit in the following line:
- - \$line")"
- warn
- fi
-
- return $badsha
-}
-
-# prints the bad commits and bad commands
-# from the todolist in stdin
-check_bad_cmd_and_sha () {
- retval=0
- lineno=0
- while read -r command rest
- do
- lineno=$(( $lineno + 1 ))
- case $command in
- "$comment_char"*|''|noop|x|exec)
- # Doesn't expect a SHA-1
- ;;
- "$cr")
- # Work around CR left by "read" (e.g. with Git for
- # Windows' Bash).
- ;;
- pick|p|drop|d|reword|r|edit|e|squash|s|fixup|f)
- if ! check_commit_sha "${rest%%[ ]*}" "$lineno" "$1"
- then
- retval=1
- fi
- ;;
- *)
- line="$(sed -n -e "${lineno}p" "$1")"
- warn "$(eval_gettext "\
-Warning: the command isn't recognized in the following line:
- - \$line")"
- warn
- retval=1
- ;;
- esac
- done <"$1"
- return $retval
-}
-
-# Print the list of the SHA-1 of the commits
-# from stdin to stdout
-todo_list_to_sha_list () {
- git stripspace --strip-comments |
- while read -r command sha1 rest
- do
- case $command in
- "$comment_char"*|''|noop|x|"exec")
- ;;
- *)
- long_sha=$(git rev-list --no-walk "$sha1" 2>/dev/null)
- printf "%s\n" "$long_sha"
- ;;
- esac
- done
-}
-
-# Use warn for each line in stdin
-warn_lines () {
- while read -r line
- do
- warn " - $line"
- done
-}
-
# Switch to the branch in $into and notify it in the reflog
checkout_onto () {
GIT_REFLOG_ACTION="$GIT_REFLOG_ACTION: checkout $onto_name"
printf '%s' "$check_level" | tr 'A-Z' 'a-z'
}
-# Check if the user dropped some commits by mistake
-# Behaviour determined by rebase.missingCommitsCheck.
-# Check if there is an unrecognized command or a
-# bad SHA-1 in a command.
-check_todo_list () {
- raise_error=f
-
- check_level=$(get_missing_commit_check_level)
-
- case "$check_level" in
- warn|error)
- # Get the SHA-1 of the commits
- todo_list_to_sha_list <"$todo".backup >"$todo".oldsha1
- todo_list_to_sha_list <"$todo" >"$todo".newsha1
-
- # Sort the SHA-1 and compare them
- sort -u "$todo".oldsha1 >"$todo".oldsha1+
- mv "$todo".oldsha1+ "$todo".oldsha1
- sort -u "$todo".newsha1 >"$todo".newsha1+
- mv "$todo".newsha1+ "$todo".newsha1
- comm -2 -3 "$todo".oldsha1 "$todo".newsha1 >"$todo".miss
-
- # Warn about missing commits
- if test -s "$todo".miss
- then
- test "$check_level" = error && raise_error=t
-
- warn "$(gettext "\
-Warning: some commits may have been dropped accidentally.
-Dropped commits (newer to older):")"
-
- # Make the list user-friendly and display
- opt="--no-walk=sorted --format=oneline --abbrev-commit --stdin"
- git rev-list $opt <"$todo".miss | warn_lines
-
- warn "$(gettext "\
-To avoid this message, use \"drop\" to explicitly remove a commit.
-
-Use 'git config rebase.missingCommitsCheck' to change the level of warnings.
-The possible behaviours are: ignore, warn, error.")"
- warn
- fi
- ;;
- ignore)
- ;;
- *)
- warn "$(eval_gettext "Unrecognized setting \$check_level for option rebase.missingCommitsCheck. Ignoring.")"
- ;;
- esac
-
- if ! check_bad_cmd_and_sha "$todo"
- then
- raise_error=t
- fi
-
- if test $raise_error = t
- then
- # Checkout before the first commit of the
- # rebase: this way git rebase --continue
- # will work correctly as it expects HEAD to be
- # placed before the commit of the next action
- checkout_onto
-
- warn "$(gettext "You can fix this with 'git rebase --edit-todo' and then run 'git rebase --continue'.")"
- die "$(gettext "Or you can abort the rebase with 'git rebase --abort'.")"
- fi
-}
-
# The whole contents of this file is run by dot-sourcing it from
# inside a shell function. It used to be that "return"s we see
# below were not inside any function, and expected to return
revisions=$onto...$orig_head
shortrevisions=$shorthead
fi
-format=$(git config --get rebase.instructionFormat)
-# the 'rev-list .. | sed' requires %m to parse; the instruction requires %H to parse
-git rev-list $merges_option --format="%m%H ${format:-%s}" \
- --reverse --left-right --topo-order \
- $revisions ${restrict_revision+^$restrict_revision} | \
- sed -n "s/^>//p" |
-while read -r sha1 rest
-do
-
- if test -z "$keep_empty" && is_empty_commit $sha1 && ! is_merge_commit $sha1
- then
- comment_out="$comment_char "
- else
- comment_out=
- fi
+if test t != "$preserve_merges"
+then
+ git rebase--helper --make-script ${keep_empty:+--keep-empty} \
+ $revisions ${restrict_revision+^$restrict_revision} >"$todo"
+else
+ format=$(git config --get rebase.instructionFormat)
+ # the 'rev-list .. | sed' requires %m to parse; the instruction requires %H to parse
+ git rev-list $merges_option --format="%m%H ${format:-%s}" \
+ --reverse --left-right --topo-order \
+ $revisions ${restrict_revision+^$restrict_revision} | \
+ sed -n "s/^>//p" |
+ while read -r sha1 rest
+ do
+
+ if test -z "$keep_empty" && is_empty_commit $sha1 && ! is_merge_commit $sha1
+ then
+ comment_out="$comment_char "
+ else
+ comment_out=
+ fi
- if test t != "$preserve_merges"
- then
- printf '%s\n' "${comment_out}pick $sha1 $rest" >>"$todo"
- else
if test -z "$rebase_root"
then
preserve=t
touch "$rewritten"/$sha1
printf '%s\n' "${comment_out}pick $sha1 $rest" >>"$todo"
fi
- fi
-done
+ done
+fi
# Watch for commits that have been dropped by --cherry-pick
if test t = "$preserve_merges"
fi
test -s "$todo" || echo noop >> "$todo"
-test -n "$autosquash" && rearrange_squash "$todo"
+test -z "$autosquash" || git rebase--helper --rearrange-squash || exit
test -n "$cmd" && add_exec_commands "$todo"
todocount=$(git stripspace --strip-comments <"$todo" | wc -l)
has_action "$todo" ||
return 2
-check_todo_list
+git rebase--helper --check-todo-list || {
+ ret=$?
+ checkout_onto
+ exit $ret
+}
expand_todo_ids
-test -d "$rewritten" || test -n "$force_rebase" || skip_unnecessary_picks
+test -d "$rewritten" || test -n "$force_rebase" ||
+onto="$(git rebase--helper --skip-unnecessary-picks)" ||
+die "Could not skip unnecessary pick commands"
checkout_onto
if test -z "$rebase_root" && test ! -d "$rewritten"
shift
break
;;
+ *)
+ usage
+ ;;
esac
shift
done
%s (%ci)
-are available in the git repository at:
+are available in the Git repository at:
' $merge_base &&
echo " $url $pretty_remote" &&
git show -s --format='
setenv(GIT_ICASE_PATHSPECS_ENVIRONMENT, "1", 1);
if (envchanged)
*envchanged = 1;
+ } else if (!strcmp(cmd, "--no-optional-locks")) {
+ setenv(GIT_OPTIONAL_LOCKS_ENVIRONMENT, "0", 1);
+ if (envchanged)
+ *envchanged = 1;
} else if (!strcmp(cmd, "--shallow-file")) {
(*argv)++;
(*argc)--;
* This way, fields printed to the right of the graph will remain
* aligned for the entire commit.
*/
- int extra;
- if (chars_written >= graph->width)
- return;
-
- extra = graph->width - chars_written;
- strbuf_addf(sb, "%*s", (int) extra, "");
+ if (chars_written < graph->width)
+ strbuf_addchars(sb, ' ', graph->width - chars_written);
}
static void graph_output_padding_line(struct git_graph *graph,
if (col->commit == graph->commit) {
seen_this = 1;
strbuf_write_column(sb, col, '|');
- strbuf_addf(sb, "%*s", graph->expansion_row, "");
+ strbuf_addchars(sb, ' ', graph->expansion_row);
chars_written += 1 + graph->expansion_row;
} else if (seen_this && (graph->expansion_row == 0)) {
/*
memcpy(hex, path, 2);
path += 2;
path++; /* skip '/' */
- memcpy(hex, path, GIT_SHA1_HEXSZ - 2);
+ memcpy(hex + 2, path, GIT_SHA1_HEXSZ - 2);
return get_oid_hex(hex, oid);
}
if (errno == EPIPE)
break;
die_errno("notes-merge");
- } else if (!ret) {
- die("notes-merge: disk full?");
}
size -= ret;
buf += ret;
git_SHA_CTX old_sha1_ctx, new_sha1_ctx;
struct pack_header hdr;
char *buf;
+ ssize_t read_result;
git_SHA1_Init(&old_sha1_ctx);
git_SHA1_Init(&new_sha1_ctx);
if (lseek(pack_fd, 0, SEEK_SET) != 0)
die_errno("Failed seeking to start of '%s'", pack_name);
- if (read_in_full(pack_fd, &hdr, sizeof(hdr)) != sizeof(hdr))
+ read_result = read_in_full(pack_fd, &hdr, sizeof(hdr));
+ if (read_result < 0)
die_errno("Unable to reread header of '%s'", pack_name);
+ else if (read_result != sizeof(hdr))
+		die("Unexpected short read for header of '%s'",
+		    pack_name);
if (lseek(pack_fd, 0, SEEK_SET) != 0)
die_errno("Failed seeking to start of '%s'", pack_name);
git_SHA1_Update(&old_sha1_ctx, &hdr, sizeof(hdr));
unsigned char sha1[20];
unsigned char *idx_sha1;
long fd_flag;
+ ssize_t read_result;
if (!p->index_data && open_pack_index(p))
return error("packfile %s index unavailable", p->pack_name);
return error("cannot set FD_CLOEXEC");
/* Verify we recognize this pack file format. */
- if (read_in_full(p->pack_fd, &hdr, sizeof(hdr)) != sizeof(hdr))
+ read_result = read_in_full(p->pack_fd, &hdr, sizeof(hdr));
+ if (read_result < 0)
+ return error_errno("error reading from %s", p->pack_name);
+ if (read_result != sizeof(hdr))
return error("file %s is far too short to be a packfile", p->pack_name);
if (hdr.hdr_signature != htonl(PACK_SIGNATURE))
return error("file %s is not a GIT packfile", p->pack_name);
p->num_objects);
if (lseek(p->pack_fd, p->pack_size - sizeof(sha1), SEEK_SET) == -1)
return error("end of packfile %s is unavailable", p->pack_name);
- if (read_in_full(p->pack_fd, sha1, sizeof(sha1)) != sizeof(sha1))
+ read_result = read_in_full(p->pack_fd, sha1, sizeof(sha1));
+ if (read_result < 0)
+ return error_errno("error reading from %s", p->pack_name);
+ if (read_result != sizeof(sha1))
return error("packfile %s signature is unavailable", p->pack_name);
idx_sha1 = ((unsigned char *)p->index_data) + p->index_size - 40;
if (hashcmp(sha1, idx_sha1))
const struct option *opts, int full, int err)
{
FILE *outfile = err ? stderr : stdout;
+ int need_newline;
if (!usagestr)
return PARSE_OPT_HELP;
if (**usagestr)
fprintf_ln(outfile, _(" %s"), _(*usagestr));
else
- putchar('\n');
+ fputc('\n', outfile);
usagestr++;
}
- if (opts->type != OPTION_GROUP)
- fputc('\n', outfile);
+ need_newline = 1;
for (; opts->type != OPTION_END; opts++) {
size_t pos;
if (opts->type == OPTION_GROUP) {
fputc('\n', outfile);
+ need_newline = 0;
if (*opts->help)
fprintf(outfile, "%s\n", _(opts->help));
continue;
if (!full && (opts->flags & PARSE_OPT_HIDDEN))
continue;
+ if (need_newline) {
+ fputc('\n', outfile);
+ need_newline = 0;
+ }
+
pos = fprintf(outfile, " ");
if (opts->short_name) {
if (opts->flags & PARSE_OPT_NODASH)
return sb;
}
-static char *cleanup_path(char *path)
+static const char *cleanup_path(const char *path)
{
/* Clean it up */
- if (!memcmp(path, "./", 2)) {
- path += 2;
+ if (skip_prefix(path, "./", &path)) {
while (*path == '/')
path++;
}
static void strbuf_cleanup_path(struct strbuf *sb)
{
- char *path = cleanup_path(sb->buf);
+ const char *path = cleanup_path(sb->buf);
if (path > sb->buf)
strbuf_remove(sb, 0, path - sb->buf);
}
strlcpy(buf, bad_path, n);
return buf;
}
- return cleanup_path(buf);
+ return (char *)cleanup_path(buf);
}
static int dir_prefix(const char *buf, const char *dir)
int validate_headref(const char *path)
{
struct stat st;
- char *buf, buffer[256];
- unsigned char sha1[20];
+ char buffer[256];
+ const char *refname;
+ struct object_id oid;
int fd;
ssize_t len;
len = read_in_full(fd, buffer, sizeof(buffer)-1);
close(fd);
+ if (len < 0)
+ return -1;
+ buffer[len] = '\0';
+
/*
* Is it a symbolic ref?
*/
- if (len < 4)
- return -1;
- if (!memcmp("ref:", buffer, 4)) {
- buf = buffer + 4;
- len -= 4;
- while (len && isspace(*buf))
- buf++, len--;
- if (len >= 5 && !memcmp("refs/", buf, 5))
+ if (skip_prefix(buffer, "ref:", &refname)) {
+ while (isspace(*refname))
+ refname++;
+ if (starts_with(refname, "refs/"))
return 0;
}
/*
* Is this a detached HEAD?
*/
- if (!get_sha1_hex(buffer, sha1))
+ if (!get_oid_hex(buffer, &oid))
return 0;
return -1;
if (!home)
goto return_null;
if (real_home)
- strbuf_addstr(&user_path, real_path(home));
+ strbuf_add_real_path(&user_path, home);
else
strbuf_addstr(&user_path, home);
#ifdef GIT_WINDOWS_NATIVE
}
/* And complain if we didn't get enough bytes to satisfy the read. */
- if (ret < size) {
+ if (ret != size) {
if (options & PACKET_READ_GENTLE_ON_EOF)
return -1;
return 0;
}
+static int match_placeholder_arg(const char *to_parse, const char *candidate,
+ const char **end)
+{
+ const char *p;
+
+ if (!(skip_prefix(to_parse, candidate, &p)))
+ return 0;
+ if (*p == ',') {
+ *end = p + 1;
+ return 1;
+ }
+ if (*p == ')') {
+ *end = p;
+ return 1;
+ }
+ return 0;
+}
+
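
A standalone sketch of how the parsing loop below consumes the
comma-separated modifiers of a placeholder such as %(trailers:only,unfold);
skip_prefix() is re-implemented with strncmp() so the example compiles on
its own:

#include <stdio.h>
#include <string.h>

static int skip_prefix(const char *str, const char *prefix, const char **out)
{
	size_t len = strlen(prefix);
	if (strncmp(str, prefix, len))
		return 0;
	*out = str + len;
	return 1;
}

static int match_placeholder_arg(const char *to_parse, const char *candidate,
				 const char **end)
{
	const char *p;

	if (!skip_prefix(to_parse, candidate, &p))
		return 0;
	if (*p == ',') {
		*end = p + 1;
		return 1;
	}
	if (*p == ')') {
		*end = p;
		return 1;
	}
	return 0;
}

int main(void)
{
	/* What the caller sees once "%(trailers:" has been skipped. */
	const char *arg = "only,unfold)";
	int only = 0, unfold = 0;

	for (;;) {
		if (match_placeholder_arg(arg, "only", &arg))
			only = 1;
		else if (match_placeholder_arg(arg, "unfold", &arg))
			unfold = 1;
		else
			break;
	}
	printf("only=%d unfold=%d rest=\"%s\"\n", only, unfold, arg);
	/* prints: only=1 unfold=1 rest=")" */
	return 0;
}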
static size_t format_commit_one(struct strbuf *sb, /* in UTF-8 */
const char *placeholder,
void *context)
if (skip_prefix(placeholder, "(trailers", &arg)) {
struct process_trailer_options opts = PROCESS_TRAILER_OPTIONS_INIT;
- while (*arg == ':') {
- if (skip_prefix(arg, ":only", &arg))
- opts.only_trailers = 1;
- else if (skip_prefix(arg, ":unfold", &arg))
- opts.unfold = 1;
+ if (*arg == ':') {
+ arg++;
+ for (;;) {
+ if (match_placeholder_arg(arg, "only", &arg))
+ opts.only_trailers = 1;
+ else if (match_placeholder_arg(arg, "unfold", &arg))
+ opts.unfold = 1;
+ else
+ break;
+ }
}
if (*arg == ')') {
format_trailers_from_commit(sb, msg + c->subject_off, &opts);
static void head_atom_parser(const struct ref_format *format, struct used_atom *atom, const char *arg)
{
- struct object_id unused;
-
- atom->u.head = resolve_refdup("HEAD", RESOLVE_REF_READING, unused.hash, NULL);
+ atom->u.head = resolve_refdup("HEAD", RESOLVE_REF_READING, NULL, NULL);
}
static struct {
REALLOC_ARRAY(used_atom, used_atom_cnt);
used_atom[at].name = xmemdupz(atom, ep - atom);
used_atom[at].type = valid_atom[i].cmp_type;
- if (arg)
+ if (arg) {
arg = used_atom[at].name + (arg - atom) + 1;
+ if (!*arg) {
+ /*
+ * Treat empty sub-arguments list as NULL (i.e.,
+ * "%(atom:)" is equivalent to "%(atom)").
+ */
+ arg = NULL;
+ }
+ }
memset(&used_atom[at].u, 0, sizeof(used_atom[at].u));
if (valid_atom[i].parser)
valid_atom[i].parser(format, &used_atom[at], arg);
ref->value = xcalloc(used_atom_cnt, sizeof(struct atom_value));
if (need_symref && (ref->flag & REF_ISSYMREF) && !ref->symref) {
- struct object_id unused1;
ref->symref = resolve_refdup(ref->refname, RESOLVE_REF_READING,
- unused1.hash, NULL);
+ NULL, NULL);
if (!ref->symref)
ref->symref = "";
}
reflogs->ref = xstrdup(ref);
for_each_reflog_ent(ref, read_one_reflog, reflogs);
if (reflogs->nr == 0) {
- struct object_id oid;
const char *name;
void *name_to_free;
name = name_to_free = resolve_refdup(ref, RESOLVE_REF_READING,
- oid.hash, NULL);
+ NULL, NULL);
if (name) {
for_each_reflog_ent(name, read_one_reflog, reflogs);
free(name_to_free);
reflogs = item->util;
else {
if (*branch == '\0') {
- struct object_id oid;
free(branch);
- branch = resolve_refdup("HEAD", 0, oid.hash, NULL);
+ branch = resolve_refdup("HEAD", 0, NULL, NULL);
if (!branch)
die ("No current branch");
if (trim)
iter = prefix_ref_iterator_begin(iter, "", trim);
+ /* Sanity check for subclasses: */
+ if (!iter->ordered)
+ BUG("reference iterator is not ordered");
+
return iter;
}
int refs_peel_ref(struct ref_store *refs, const char *refname,
unsigned char *sha1)
{
- return refs->be->peel_ref(refs, refname, sha1);
+ int flag;
+ unsigned char base[20];
+
+ if (current_ref_iter && current_ref_iter->refname == refname) {
+ struct object_id peeled;
+
+ if (ref_iterator_peel(current_ref_iter, &peeled))
+ return -1;
+ hashcpy(sha1, peeled.hash);
+ return 0;
+ }
+
+ if (refs_read_ref_full(refs, refname,
+ RESOLVE_REF_READING, base, &flag))
+ return -1;
+
+ return peel_object(base, sha1);
}
int peel_ref(const char *refname, unsigned char *sha1)
{
return refs_rename_ref(get_main_ref_store(), oldref, newref, logmsg);
}
+
+int refs_copy_existing_ref(struct ref_store *refs, const char *oldref,
+ const char *newref, const char *logmsg)
+{
+ return refs->be->copy_ref(refs, oldref, newref, logmsg);
+}
+
+int copy_existing_ref(const char *oldref, const char *newref, const char *logmsg)
+{
+ return refs_copy_existing_ref(get_main_ref_store(), oldref, newref, logmsg);
+}
/** rename ref, return 0 on success **/
int refs_rename_ref(struct ref_store *refs, const char *oldref,
const char *newref, const char *logmsg);
-int rename_ref(const char *oldref, const char *newref, const char *logmsg);
+int rename_ref(const char *oldref, const char *newref,
+ const char *logmsg);
+
+/** copy ref, return 0 on success **/
+int refs_copy_existing_ref(struct ref_store *refs, const char *oldref,
+ const char *newref, const char *logmsg);
+int copy_existing_ref(const char *oldref, const char *newref,
+ const char *logmsg);
int refs_create_symref(struct ref_store *refs, const char *refname,
const char *target, const char *logmsg);
return ret;
}
-static int files_peel_ref(struct ref_store *ref_store,
- const char *refname, unsigned char *sha1)
-{
- struct files_ref_store *refs =
- files_downcast(ref_store, REF_STORE_READ | REF_STORE_ODB,
- "peel_ref");
- int flag;
- unsigned char base[20];
-
- if (current_ref_iter && current_ref_iter->refname == refname) {
- struct object_id peeled;
-
- if (ref_iterator_peel(current_ref_iter, &peeled))
- return -1;
- hashcpy(sha1, peeled.hash);
- return 0;
- }
-
- if (refs_read_ref_full(ref_store, refname,
- RESOLVE_REF_READING, base, &flag))
- return -1;
-
- /*
- * If the reference is packed, read its ref_entry from the
- * cache in the hope that we already know its peeled value.
- * We only try this optimization on packed references because
- * (a) forcing the filling of the loose reference cache could
- * be expensive and (b) loose references anyway usually do not
- * have REF_KNOWS_PEELED.
- */
- if (flag & REF_ISPACKED &&
- !refs_peel_ref(refs->packed_ref_store, refname, sha1))
- return 0;
-
- return peel_object(base, sha1);
-}
-
struct files_ref_iterator {
struct ref_iterator base;
const char *prefix, unsigned int flags)
{
struct files_ref_store *refs;
- struct ref_iterator *loose_iter, *packed_iter;
+ struct ref_iterator *loose_iter, *packed_iter, *overlay_iter;
struct files_ref_iterator *iter;
struct ref_iterator *ref_iterator;
unsigned int required_flags = REF_STORE_READ;
refs = files_downcast(ref_store, required_flags, "ref_iterator_begin");
- iter = xcalloc(1, sizeof(*iter));
- ref_iterator = &iter->base;
- base_ref_iterator_init(ref_iterator, &files_ref_iterator_vtable);
-
/*
* We must make sure that all loose refs are read before
* accessing the packed-refs file; this avoids a race
refs->packed_ref_store, prefix, 0,
DO_FOR_EACH_INCLUDE_BROKEN);
- iter->iter0 = overlay_ref_iterator_begin(loose_iter, packed_iter);
+ overlay_iter = overlay_ref_iterator_begin(loose_iter, packed_iter);
+
+ iter = xcalloc(1, sizeof(*iter));
+ ref_iterator = &iter->base;
+ base_ref_iterator_init(ref_iterator, &files_ref_iterator_vtable,
+ overlay_iter->ordered);
+ iter->iter0 = overlay_iter;
iter->flags = flags;
return ref_iterator;
const struct object_id *oid, const char *logmsg,
struct strbuf *err);
-static int files_rename_ref(struct ref_store *ref_store,
+static int files_copy_or_rename_ref(struct ref_store *ref_store,
const char *oldrefname, const char *newrefname,
- const char *logmsg)
+ const char *logmsg, int copy)
{
struct files_ref_store *refs =
files_downcast(ref_store, REF_STORE_WRITE, "rename_ref");
}
if (flag & REF_ISSYMREF) {
- ret = error("refname %s is a symbolic ref, renaming it is not supported",
- oldrefname);
+ if (copy)
+ ret = error("refname %s is a symbolic ref, copying it is not supported",
+ oldrefname);
+ else
+ ret = error("refname %s is a symbolic ref, renaming it is not supported",
+ oldrefname);
goto out;
}
if (!refs_rename_ref_available(&refs->base, oldrefname, newrefname)) {
goto out;
}
- if (log && rename(sb_oldref.buf, tmp_renamed_log.buf)) {
+ if (!copy && log && rename(sb_oldref.buf, tmp_renamed_log.buf)) {
ret = error("unable to move logfile logs/%s to logs/"TMP_RENAMED_LOG": %s",
oldrefname, strerror(errno));
goto out;
}
- if (refs_delete_ref(&refs->base, logmsg, oldrefname,
+ if (copy && log && copy_file(tmp_renamed_log.buf, sb_oldref.buf, 0644)) {
+ ret = error("unable to copy logfile logs/%s to logs/"TMP_RENAMED_LOG": %s",
+ oldrefname, strerror(errno));
+ goto out;
+ }
+
+ if (!copy && refs_delete_ref(&refs->base, logmsg, oldrefname,
orig_oid.hash, REF_NODEREF)) {
error("unable to delete old %s", oldrefname);
goto rollback;
* the safety anyway; we want to delete the reference whatever
* its current value.
*/
- if (!refs_read_ref_full(&refs->base, newrefname,
+ if (!copy && !refs_read_ref_full(&refs->base, newrefname,
RESOLVE_REF_READING | RESOLVE_REF_NO_RECURSE,
oid.hash, NULL) &&
refs_delete_ref(&refs->base, NULL, newrefname,
lock = lock_ref_sha1_basic(refs, newrefname, NULL, NULL, NULL,
REF_NODEREF, NULL, &err);
if (!lock) {
- error("unable to rename '%s' to '%s': %s", oldrefname, newrefname, err.buf);
+ if (copy)
+ error("unable to copy '%s' to '%s': %s", oldrefname, newrefname, err.buf);
+ else
+ error("unable to rename '%s' to '%s': %s", oldrefname, newrefname, err.buf);
strbuf_release(&err);
goto rollback;
}
return ret;
}
+static int files_rename_ref(struct ref_store *ref_store,
+ const char *oldrefname, const char *newrefname,
+ const char *logmsg)
+{
+ return files_copy_or_rename_ref(ref_store, oldrefname,
+ newrefname, logmsg, 0);
+}
+
+static int files_copy_ref(struct ref_store *ref_store,
+ const char *oldrefname, const char *newrefname,
+ const char *logmsg)
+{
+ return files_copy_or_rename_ref(ref_store, oldrefname,
+ newrefname, logmsg, 1);
+}
+
static int close_ref_gently(struct ref_lock *lock)
{
if (close_lock_file_gently(&lock->lk))
struct ref_iterator *ref_iterator = &iter->base;
struct strbuf sb = STRBUF_INIT;
- base_ref_iterator_init(ref_iterator, &files_reflog_iterator_vtable);
+ base_ref_iterator_init(ref_iterator, &files_reflog_iterator_vtable, 0);
strbuf_addf(&sb, "%s/logs", gitdir);
iter->dir_iterator = dir_iterator_begin(sb.buf);
iter->ref_store = ref_store;
return reflog_iterator_begin(ref_store, refs->gitcommondir);
} else {
return merge_ref_iterator_begin(
+ 0,
reflog_iterator_begin(ref_store, refs->gitdir),
reflog_iterator_begin(ref_store, refs->gitcommondir),
reflog_iterator_select, refs);
struct string_list affected_refnames = STRING_LIST_INIT_NODUP;
char *head_ref = NULL;
int head_type;
- struct object_id head_oid;
struct files_transaction_backend_data *backend_data;
struct ref_transaction *packed_transaction = NULL;
*/
head_ref = refs_resolve_refdup(ref_store, "HEAD",
RESOLVE_REF_NO_RECURSE,
- head_oid.hash, &head_type);
+ NULL, &head_type);
if (head_ref && !(head_type & REF_ISSYMREF)) {
FREE_AND_NULL(head_ref);
} else if (update &&
(write_in_full(get_lock_file_fd(&lock->lk),
oid_to_hex(&cb.last_kept_oid), GIT_SHA1_HEXSZ) < 0 ||
- write_str_in_full(get_lock_file_fd(&lock->lk), "\n") < 1 ||
+ write_str_in_full(get_lock_file_fd(&lock->lk), "\n") < 0 ||
close_ref_gently(lock) < 0)) {
status |= error("couldn't write %s",
get_lock_file_path(&lock->lk));
files_initial_transaction_commit,
files_pack_refs,
- files_peel_ref,
files_create_symref,
files_delete_refs,
files_rename_ref,
+ files_copy_ref,
files_ref_iterator_begin,
files_read_raw_ref,
}
void base_ref_iterator_init(struct ref_iterator *iter,
- struct ref_iterator_vtable *vtable)
+ struct ref_iterator_vtable *vtable,
+ int ordered)
{
iter->vtable = vtable;
+ iter->ordered = !!ordered;
iter->refname = NULL;
iter->oid = NULL;
iter->flags = 0;
struct empty_ref_iterator *iter = xcalloc(1, sizeof(*iter));
struct ref_iterator *ref_iterator = &iter->base;
- base_ref_iterator_init(ref_iterator, &empty_ref_iterator_vtable);
+ base_ref_iterator_init(ref_iterator, &empty_ref_iterator_vtable, 1);
return ref_iterator;
}
};
struct ref_iterator *merge_ref_iterator_begin(
+ int ordered,
struct ref_iterator *iter0, struct ref_iterator *iter1,
ref_iterator_select_fn *select, void *cb_data)
{
* references through only if they exist in both iterators.
*/
- base_ref_iterator_init(ref_iterator, &merge_ref_iterator_vtable);
+ base_ref_iterator_init(ref_iterator, &merge_ref_iterator_vtable, ordered);
iter->iter0 = iter0;
iter->iter1 = iter1;
iter->select = select;
} else if (is_empty_ref_iterator(back)) {
ref_iterator_abort(back);
return front;
+ } else if (!front->ordered || !back->ordered) {
+ BUG("overlay_ref_iterator requires ordered inputs");
}
- return merge_ref_iterator_begin(front, back,
+ return merge_ref_iterator_begin(1, front, back,
overlay_iterator_select, NULL);
}
int trim;
};
+/* Return -1, 0, 1 if refname is before, inside, or after the prefix. */
+static int compare_prefix(const char *refname, const char *prefix)
+{
+ while (*prefix) {
+ if (*refname != *prefix)
+ return ((unsigned char)*refname < (unsigned char)*prefix) ? -1 : +1;
+
+ refname++;
+ prefix++;
+ }
+
+ return 0;
+}
+
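
A standalone sketch of what the comparison above yields for a few refnames,
and therefore when the loop below may abort an ordered iteration early (the
example refnames are illustrative):

#include <stdio.h>

/* Copied from the prefix_ref_iterator hunk above. */
static int compare_prefix(const char *refname, const char *prefix)
{
	while (*prefix) {
		if (*refname != *prefix)
			return ((unsigned char)*refname < (unsigned char)*prefix) ? -1 : +1;

		refname++;
		prefix++;
	}

	return 0;
}

int main(void)
{
	const char *prefix = "refs/heads/";

	/* -1: sorts before the prefix -> keep scanning */
	printf("%d\n", compare_prefix("refs/bisect/bad", prefix));
	/*  0: inside the prefix -> yield it */
	printf("%d\n", compare_prefix("refs/heads/master", prefix));
	/* +1: sorts after the prefix -> an ordered iterator can stop here */
	printf("%d\n", compare_prefix("refs/tags/v2.15.0-rc0", prefix));
	return 0;
}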
static int prefix_ref_iterator_advance(struct ref_iterator *ref_iterator)
{
struct prefix_ref_iterator *iter =
int ok;
while ((ok = ref_iterator_advance(iter->iter0)) == ITER_OK) {
- if (!starts_with(iter->iter0->refname, iter->prefix))
+ int cmp = compare_prefix(iter->iter0->refname, iter->prefix);
+
+ if (cmp < 0)
continue;
+ if (cmp > 0) {
+ /*
+ * If the source iterator is ordered, then we
+ * can stop the iteration as soon as we see a
+ * refname that comes after the prefix:
+ */
+ if (iter->iter0->ordered) {
+ ok = ref_iterator_abort(iter->iter0);
+ break;
+ } else {
+ continue;
+ }
+ }
+
if (iter->trim) {
/*
* It is nonsense to trim off characters that
iter = xcalloc(1, sizeof(*iter));
ref_iterator = &iter->base;
- base_ref_iterator_init(ref_iterator, &prefix_ref_iterator_vtable);
+ base_ref_iterator_init(ref_iterator, &prefix_ref_iterator_vtable, iter0->ordered);
iter->iter0 = iter0;
iter->prefix = xstrdup(prefix);
#include "../config.h"
#include "../refs.h"
#include "refs-internal.h"
-#include "ref-cache.h"
#include "packed-backend.h"
#include "../iterator.h"
#include "../lockfile.h"
-struct packed_ref_cache {
- struct ref_cache *cache;
+enum mmap_strategy {
+ /*
+ * Don't use mmap() at all for reading `packed-refs`.
+ */
+ MMAP_NONE,
/*
- * Count of references to the data structure in this instance,
- * including the pointer from files_ref_store::packed if any.
- * The data will not be freed as long as the reference count
- * is nonzero.
+ * Can use mmap() for reading `packed-refs`, but the file must
+ * not remain mmapped. This is the usual option on Windows,
+ * where you cannot rename a new version of a file onto a file
+ * that is currently mmapped.
*/
- unsigned int referrers;
+ MMAP_TEMPORARY,
- /* The metadata from when this packed-refs cache was read */
- struct stat_validity validity;
+ /*
+ * It is OK to leave the `packed-refs` file mmapped while
+ * arbitrary other code is running.
+ */
+ MMAP_OK
};
-/*
- * Increment the reference count of *packed_refs.
- */
-static void acquire_packed_ref_cache(struct packed_ref_cache *packed_refs)
-{
- packed_refs->referrers++;
-}
+#if defined(NO_MMAP)
+static enum mmap_strategy mmap_strategy = MMAP_NONE;
+#elif defined(MMAP_PREVENTS_DELETE)
+static enum mmap_strategy mmap_strategy = MMAP_TEMPORARY;
+#else
+static enum mmap_strategy mmap_strategy = MMAP_OK;
+#endif
+
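
A rough standalone sketch of what the three strategies mean for a reader of
the file; this is not the packed-backend loader that follows, just an
illustration of the trade-off (keep the mapping, copy-and-unmap, or plain
read()):

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

enum mmap_strategy { MMAP_NONE, MMAP_TEMPORARY, MMAP_OK };

/*
 * Return the file contents (NULL on error or if the file is empty).
 * MMAP_OK keeps the mapping; MMAP_TEMPORARY copies to the heap and
 * unmaps right away; MMAP_NONE never maps and just read()s.
 */
static char *load_contents(const char *path, enum mmap_strategy strategy,
			   size_t *size, int *mmapped)
{
	int fd = open(path, O_RDONLY);
	struct stat st;
	char *buf = NULL;

	*mmapped = 0;
	if (fd < 0 || fstat(fd, &st) < 0 || !st.st_size)
		goto done;
	*size = st.st_size;

	if (strategy == MMAP_NONE) {
		buf = malloc(*size);
		if (buf && read(fd, buf, *size) != (ssize_t)*size) {
			free(buf);
			buf = NULL;
		}
	} else {
		char *map = mmap(NULL, *size, PROT_READ, MAP_PRIVATE, fd, 0);
		if (map == MAP_FAILED) {
			buf = NULL;
		} else if (strategy == MMAP_TEMPORARY) {
			buf = malloc(*size);
			if (buf)
				memcpy(buf, map, *size);
			munmap(map, *size);
		} else {	/* MMAP_OK */
			buf = map;
			*mmapped = 1;
		}
	}
done:
	if (fd >= 0)
		close(fd);
	return buf;
}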
+struct packed_ref_store;
/*
- * Decrease the reference count of *packed_refs. If it goes to zero,
- * free *packed_refs and return true; otherwise return false.
+ * A `snapshot` represents one snapshot of a `packed-refs` file.
+ *
+ * Normally, this will be a mmapped view of the contents of the
+ * `packed-refs` file at the time the snapshot was created. However,
+ * if the `packed-refs` file was not sorted, this might point at heap
+ * memory holding the contents of the `packed-refs` file with its
+ * records sorted by refname.
+ *
+ * `snapshot` instances are reference counted (via
+ * `acquire_snapshot()` and `release_snapshot()`). This is to prevent
+ * an instance from disappearing while an iterator is still iterating
+ * over it. Instances are garbage collected when their `referrers`
+ * count goes to zero.
+ *
+ * The most recent `snapshot`, if available, is referenced by the
+ * `packed_ref_store`. Its freshness is checked whenever
+ * `get_snapshot()` is called; if the existing snapshot is obsolete, a
+ * new snapshot is taken.
*/
-static int release_packed_ref_cache(struct packed_ref_cache *packed_refs)
-{
- if (!--packed_refs->referrers) {
- free_ref_cache(packed_refs->cache);
- stat_validity_clear(&packed_refs->validity);
- free(packed_refs);
- return 1;
- } else {
- return 0;
- }
-}
+struct snapshot {
+ /*
+ * A back-pointer to the packed_ref_store with which this
+ * snapshot is associated:
+ */
+ struct packed_ref_store *refs;
+
+ /* Is the `packed-refs` file currently mmapped? */
+ int mmapped;
+
+ /*
+ * The contents of the `packed-refs` file. If the file was
+ * already sorted, this points at the mmapped contents of the
+ * file. If not, this points at heap-allocated memory
+ * containing the contents, sorted. If there were no contents
+ * (e.g., because the file didn't exist), `buf` and `eof` are
+ * both NULL.
+ */
+ char *buf, *eof;
+
+ /* The size of the header line, if any; otherwise, 0: */
+ size_t header_len;
+
+ /*
+ * What is the peeled state of the `packed-refs` file that
+ * this snapshot represents? (This is usually determined from
+ * the file's header.)
+ */
+ enum { PEELED_NONE, PEELED_TAGS, PEELED_FULLY } peeled;
+
+ /*
+ * Count of references to this instance, including the pointer
+ * from `packed_ref_store::snapshot`, if any. The instance
+ * will not be freed as long as the reference count is
+ * nonzero.
+ */
+ unsigned int referrers;
+
+ /*
+ * The metadata of the `packed-refs` file from which this
+ * snapshot was created, used to tell if the file has been
+ * replaced since we read it.
+ */
+ struct stat_validity validity;
+};
/*
- * A container for `packed-refs`-related data. It is not (yet) a
- * `ref_store`.
+ * A `ref_store` representing references stored in a `packed-refs`
+ * file. It implements the `ref_store` interface, though it has some
+ * limitations:
+ *
+ * - It cannot store symbolic references.
+ *
+ * - It cannot store reflogs.
+ *
+ * - It does not support reference renaming (though it could).
+ *
+ * On the other hand, it can be locked outside of a reference
+ * transaction. In that case, it remains locked even after the
+ * transaction is done and the new `packed-refs` file is activated.
*/
struct packed_ref_store {
struct ref_store base;
char *path;
/*
- * A cache of the values read from the `packed-refs` file, if
- * it might still be current; otherwise, NULL.
+ * A snapshot of the values read from the `packed-refs` file,
+ * if it might still be current; otherwise, NULL.
*/
- struct packed_ref_cache *cache;
+ struct snapshot *snapshot;
/*
* Lock used for the "packed-refs" file. Note that this (and
struct tempfile *tempfile;
};
+/*
+ * Increment the reference count of `*snapshot`.
+ */
+static void acquire_snapshot(struct snapshot *snapshot)
+{
+ snapshot->referrers++;
+}
+
+/*
+ * If the buffer in `snapshot` is active, then either munmap the
+ * memory and close the file, or free the memory. Then set the buffer
+ * pointers to NULL.
+ */
+static void clear_snapshot_buffer(struct snapshot *snapshot)
+{
+ if (snapshot->mmapped) {
+ if (munmap(snapshot->buf, snapshot->eof - snapshot->buf))
+			die_errno("error unmapping packed-refs file %s",
+ snapshot->refs->path);
+ snapshot->mmapped = 0;
+ } else {
+ free(snapshot->buf);
+ }
+ snapshot->buf = snapshot->eof = NULL;
+ snapshot->header_len = 0;
+}
+
+/*
+ * Decrease the reference count of `*snapshot`. If it goes to zero,
+ * free `*snapshot` and return true; otherwise return false.
+ */
+static int release_snapshot(struct snapshot *snapshot)
+{
+ if (!--snapshot->referrers) {
+ stat_validity_clear(&snapshot->validity);
+ clear_snapshot_buffer(snapshot);
+ free(snapshot);
+ return 1;
+ } else {
+ return 0;
+ }
+}
+
struct ref_store *packed_ref_store_create(const char *path,
unsigned int store_flags)
{
return refs;
}
-static void clear_packed_ref_cache(struct packed_ref_store *refs)
+static void clear_snapshot(struct packed_ref_store *refs)
{
- if (refs->cache) {
- struct packed_ref_cache *cache = refs->cache;
+ if (refs->snapshot) {
+ struct snapshot *snapshot = refs->snapshot;
- refs->cache = NULL;
- release_packed_ref_cache(cache);
+ refs->snapshot = NULL;
+ release_snapshot(snapshot);
}
}
-/* The length of a peeled reference line in packed-refs, including EOL: */
-#define PEELED_LINE_LENGTH 42
+static NORETURN void die_unterminated_line(const char *path,
+ const char *p, size_t len)
+{
+ if (len < 80)
+ die("unterminated line in %s: %.*s", path, (int)len, p);
+ else
+ die("unterminated line in %s: %.75s...", path, p);
+}
+
+static NORETURN void die_invalid_line(const char *path,
+ const char *p, size_t len)
+{
+ const char *eol = memchr(p, '\n', len);
+
+ if (!eol)
+ die_unterminated_line(path, p, len);
+ else if (eol - p < 80)
+ die("unexpected line in %s: %.*s", path, (int)(eol - p), p);
+ else
+ die("unexpected line in %s: %.75s...", path, p);
+}
+
+struct snapshot_record {
+ const char *start;
+ size_t len;
+};
+
+static int cmp_packed_ref_records(const void *v1, const void *v2)
+{
+ const struct snapshot_record *e1 = v1, *e2 = v2;
+ const char *r1 = e1->start + GIT_SHA1_HEXSZ + 1;
+ const char *r2 = e2->start + GIT_SHA1_HEXSZ + 1;
+
+ while (1) {
+ if (*r1 == '\n')
+ return *r2 == '\n' ? 0 : -1;
+ if (*r1 != *r2) {
+ if (*r2 == '\n')
+ return 1;
+ else
+ return (unsigned char)*r1 < (unsigned char)*r2 ? -1 : +1;
+ }
+ r1++;
+ r2++;
+ }
+}
/*
- * Parse one line from a packed-refs file. Write the SHA1 to sha1.
- * Return a pointer to the refname within the line (null-terminated),
- * or NULL if there was a problem.
+ * Compare a snapshot record at `rec` to the specified NUL-terminated
+ * refname.
*/
-static const char *parse_ref_line(struct strbuf *line, struct object_id *oid)
+static int cmp_record_to_refname(const char *rec, const char *refname)
{
- const char *ref;
+ const char *r1 = rec + GIT_SHA1_HEXSZ + 1;
+ const char *r2 = refname;
+
+ while (1) {
+ if (*r1 == '\n')
+ return *r2 ? -1 : 0;
+ if (!*r2)
+ return 1;
+ if (*r1 != *r2)
+ return (unsigned char)*r1 < (unsigned char)*r2 ? -1 : +1;
+ r1++;
+ r2++;
+ }
+}
- if (parse_oid_hex(line->buf, oid, &ref) < 0)
- return NULL;
- if (!isspace(*ref++))
- return NULL;
+/*
+ * `snapshot->buf` is not known to be sorted. Check whether it is, and
+ * if not, sort it into new memory and munmap/free the old storage.
+ */
+static void sort_snapshot(struct snapshot *snapshot)
+{
+ struct snapshot_record *records = NULL;
+ size_t alloc = 0, nr = 0;
+ int sorted = 1;
+ const char *pos, *eof, *eol;
+ size_t len, i;
+ char *new_buffer, *dst;
- if (isspace(*ref))
- return NULL;
+ pos = snapshot->buf + snapshot->header_len;
+ eof = snapshot->eof;
+ len = eof - pos;
- if (line->buf[line->len - 1] != '\n')
- return NULL;
- line->buf[--line->len] = 0;
+ if (!len)
+ return;
+
+ /*
+ * Initialize records based on a crude estimate of the number
+ * of references in the file (we'll grow it below if needed):
+ */
+ ALLOC_GROW(records, len / 80 + 20, alloc);
+
+ while (pos < eof) {
+ eol = memchr(pos, '\n', eof - pos);
+ if (!eol)
+ /* The safety check should prevent this. */
+ BUG("unterminated line found in packed-refs");
+ if (eol - pos < GIT_SHA1_HEXSZ + 2)
+ die_invalid_line(snapshot->refs->path,
+ pos, eof - pos);
+ eol++;
+ if (eol < eof && *eol == '^') {
+ /*
+ * Keep any peeled line together with its
+ * reference:
+ */
+ const char *peeled_start = eol;
+
+ eol = memchr(peeled_start, '\n', eof - peeled_start);
+ if (!eol)
+ /* The safety check should prevent this. */
+ BUG("unterminated peeled line found in packed-refs");
+ eol++;
+ }
+
+ ALLOC_GROW(records, nr + 1, alloc);
+ records[nr].start = pos;
+ records[nr].len = eol - pos;
+ nr++;
+
+ if (sorted &&
+ nr > 1 &&
+ cmp_packed_ref_records(&records[nr - 2],
+ &records[nr - 1]) >= 0)
+ sorted = 0;
+
+ pos = eol;
+ }
+
+ if (sorted)
+ goto cleanup;
+
+ /* We need to sort the memory. First we sort the records array: */
+ QSORT(records, nr, cmp_packed_ref_records);
+
+ /*
+ * Allocate a new chunk of memory, and copy the old memory to
+ * the new in the order indicated by `records` (not bothering
+ * with the header line):
+ */
+ new_buffer = xmalloc(len);
+ for (dst = new_buffer, i = 0; i < nr; i++) {
+ memcpy(dst, records[i].start, records[i].len);
+ dst += records[i].len;
+ }
+
+ /*
+ * Now munmap the old buffer and use the sorted buffer in its
+ * place:
+ */
+ clear_snapshot_buffer(snapshot);
+ snapshot->buf = new_buffer;
+ snapshot->eof = new_buffer + len;
+ snapshot->header_len = 0;
+
+cleanup:
+ free(records);
+}
+
+/*
+ * Return a pointer to the start of the record that contains the
+ * character `*p` (which must be within the buffer). If no other
+ * record start is found, return `buf`.
+ */
+static const char *find_start_of_record(const char *buf, const char *p)
+{
+ while (p > buf && (p[-1] != '\n' || p[0] == '^'))
+ p--;
+ return p;
+}
+
+/*
+ * Return a pointer to the start of the record following the record
+ * that contains `*p`. If none is found before `end`, return `end`.
+ */
+static const char *find_end_of_record(const char *p, const char *end)
+{
+ while (++p < end && (p[-1] != '\n' || p[0] == '^'))
+ ;
+ return p;
+}
+
+/*
+ * We want to be able to compare mmapped reference records quickly,
+ * without totally parsing them. We can do so because the records are
+ * LF-terminated, and the refname should start exactly (GIT_SHA1_HEXSZ
+ * + 1) bytes past the beginning of the record.
+ *
+ * But what if the `packed-refs` file contains garbage? We're willing
+ * to tolerate not detecting the problem, as long as we don't produce
+ * totally garbled output (we can't afford to check the integrity of
+ * the whole file during every Git invocation). But we do want to be
+ * sure that we never read past the end of the buffer in memory and
+ * perform an illegal memory access.
+ *
+ * Guarantee that minimum level of safety by verifying that the last
+ * record in the file is LF-terminated, and that it has at least
+ * (GIT_SHA1_HEXSZ + 1) characters before the LF. Die if either of
+ * these checks fails.
+ */
+static void verify_buffer_safe(struct snapshot *snapshot)
+{
+ const char *buf = snapshot->buf + snapshot->header_len;
+ const char *eof = snapshot->eof;
+ const char *last_line;
+
+ if (buf == eof)
+ return;
+
+ last_line = find_start_of_record(buf, eof - 1);
+ if (*(eof - 1) != '\n' || eof - last_line < GIT_SHA1_HEXSZ + 2)
+ die_invalid_line(snapshot->refs->path,
+ last_line, eof - last_line);
+}
- return ref;
+/*
+ * Depending on `mmap_strategy`, either mmap or read the contents of
+ * the `packed-refs` file into the snapshot. Return 1 if the file
+ * existed and was read, or 0 if the file was absent. Die on errors.
+ */
+static int load_contents(struct snapshot *snapshot)
+{
+ int fd;
+ struct stat st;
+ size_t size;
+ ssize_t bytes_read;
+
+ fd = open(snapshot->refs->path, O_RDONLY);
+ if (fd < 0) {
+ if (errno == ENOENT) {
+ /*
+ * This is OK; it just means that no
+ * "packed-refs" file has been written yet,
+ * which is equivalent to it being empty,
+ * which is its state when initialized with
+ * zeros.
+ */
+ return 0;
+ } else {
+ die_errno("couldn't read %s", snapshot->refs->path);
+ }
+ }
+
+ stat_validity_update(&snapshot->validity, fd);
+
+ if (fstat(fd, &st) < 0)
+ die_errno("couldn't stat %s", snapshot->refs->path);
+ size = xsize_t(st.st_size);
+
+ switch (mmap_strategy) {
+ case MMAP_NONE:
+ snapshot->buf = xmalloc(size);
+ bytes_read = read_in_full(fd, snapshot->buf, size);
+ if (bytes_read < 0 || bytes_read != size)
+ die_errno("couldn't read %s", snapshot->refs->path);
+ snapshot->eof = snapshot->buf + size;
+ snapshot->mmapped = 0;
+ break;
+ case MMAP_TEMPORARY:
+ case MMAP_OK:
+ snapshot->buf = xmmap(NULL, size, PROT_READ, MAP_PRIVATE, fd, 0);
+ snapshot->eof = snapshot->buf + size;
+ snapshot->mmapped = 1;
+ break;
+ }
+ close(fd);
+
+ return 1;
}
/*
- * Read from `packed_refs_file` into a newly-allocated
- * `packed_ref_cache` and return it. The return value will already
- * have its reference count incremented.
+ * Find the place in `snapshot->buf` where the record for `refname`
+ * starts. If `mustexist` is true and the reference doesn't
+ * exist, then return NULL. If `mustexist` is false and the reference
+ * doesn't exist, then return the point where that reference would be
+ * inserted. In the latter mode, `refname` doesn't have to be a proper
+ * reference name; for example, one could search for "refs/replace/"
+ * to find the start of any replace references.
+ *
+ * The record is sought using a binary search, so `snapshot->buf` must
+ * be sorted.
+ */
+static const char *find_reference_location(struct snapshot *snapshot,
+ const char *refname, int mustexist)
+{
+ /*
+ * This is not *quite* a garden-variety binary search, because
+ * the data we're searching is made up of records, and we
+ * always need to find the beginning of a record to do a
+ * comparison. A "record" here is one line for the reference
+ * itself and zero or one peel lines that start with '^'. Our
+ * loop invariant is described in the next two comments.
+ */
+
+ /*
+ * A pointer to the character at the start of a record whose
+ * preceding records all have reference names that come
+ * *before* `refname`.
+ */
+ const char *lo = snapshot->buf + snapshot->header_len;
+
+ /*
+	 * A pointer to the first character of a record whose
+ * reference name comes *after* `refname`.
+ */
+ const char *hi = snapshot->eof;
+
+ while (lo < hi) {
+ const char *mid, *rec;
+ int cmp;
+
+ mid = lo + (hi - lo) / 2;
+ rec = find_start_of_record(lo, mid);
+ cmp = cmp_record_to_refname(rec, refname);
+ if (cmp < 0) {
+ lo = find_end_of_record(mid, hi);
+ } else if (cmp > 0) {
+ hi = rec;
+ } else {
+ return rec;
+ }
+ }
+
+ if (mustexist)
+ return NULL;
+ else
+ return lo;
+}
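
For illustration only, here is a minimal standalone sketch of this record-level
binary search; it is not taken from the patch. The "oid" field is shortened to
four hex digits and the driver is simplified, but the navigation over
LF-terminated records (with '^' peel lines attached to the preceding record)
works the same way:

    #include <stdio.h>
    #include <string.h>

    #define HEXSZ 4    /* stand-in for GIT_SHA1_HEXSZ */

    /* Walk backwards to the start of the record containing *p. */
    static const char *find_start_of_record(const char *buf, const char *p)
    {
        while (p > buf && (p[-1] != '\n' || p[0] == '^'))
            p--;
        return p;
    }

    /* Walk forwards to the start of the record following *p. */
    static const char *find_end_of_record(const char *p, const char *end)
    {
        while (++p < end && (p[-1] != '\n' || p[0] == '^'))
            ;
        return p;
    }

    /* Compare the refname embedded in a record to a NUL-terminated name. */
    static int cmp_record_to_refname(const char *rec, const char *refname)
    {
        const char *r = rec + HEXSZ + 1;

        while (1) {
            if (*r == '\n')
                return *refname ? -1 : 0;
            if (!*refname)
                return 1;
            if (*r != *refname)
                return (unsigned char)*r < (unsigned char)*refname ? -1 : +1;
            r++;
            refname++;
        }
    }

    /* Binary search over variable-length, LF-terminated records. */
    static const char *find_record(const char *buf, const char *eof,
                                   const char *refname, int mustexist)
    {
        const char *lo = buf, *hi = eof;

        while (lo < hi) {
            const char *mid = lo + (hi - lo) / 2;
            const char *rec = find_start_of_record(lo, mid);
            int cmp = cmp_record_to_refname(rec, refname);

            if (cmp < 0)
                lo = find_end_of_record(mid, hi);
            else if (cmp > 0)
                hi = rec;
            else
                return rec;
        }
        return mustexist ? NULL : lo;
    }

    int main(void)
    {
        /* A sorted buffer; the '^' line belongs to the record above it. */
        static const char buf[] =
            "1111 refs/heads/master\n"
            "2222 refs/tags/v1.0\n"
            "^3333\n"
            "4444 refs/tags/v2.0\n";
        const char *eof = buf + sizeof(buf) - 1;
        const char *rec = find_record(buf, eof, "refs/tags/v1.0", 1);

        if (rec)
            printf("found: %.*s\n", (int)(strchr(rec, '\n') - rec), rec);
        return 0;
    }
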
+
+/*
+ * Create a newly-allocated `snapshot` of the `packed-refs` file in
+ * its current state and return it. The return value will already have
+ * its reference count incremented.
*
* A comment line of the form "# pack-refs with: " may contain zero or
* more traits. We interpret the traits as follows:
*
- * No traits:
+ * Neither `peeled` nor `fully-peeled`:
*
* Probably no references are peeled. But if the file contains a
* peeled value for a reference, we will use it.
*
- * peeled:
+ * `peeled`:
*
* References under "refs/tags/", if they *can* be peeled, *are*
* peeled in this file. References outside of "refs/tags/" are
* probably not peeled even if they could have been, but if we find
* a peeled value for such a reference we will use it.
*
- * fully-peeled:
+ * `fully-peeled`:
*
* All references in the file that can be peeled are peeled.
* Inversely (and this is more important), any references in the
* trait should typically be written alongside "peeled" for
* compatibility with older clients, but we do not require it
* (i.e., "peeled" is a no-op if "fully-peeled" is set).
+ *
+ * `sorted`:
+ *
+ * The references in this file are known to be sorted by refname.
*/
-static struct packed_ref_cache *read_packed_refs(const char *packed_refs_file)
+static struct snapshot *create_snapshot(struct packed_ref_store *refs)
{
- FILE *f;
- struct packed_ref_cache *packed_refs = xcalloc(1, sizeof(*packed_refs));
- struct ref_entry *last = NULL;
- struct strbuf line = STRBUF_INIT;
- enum { PEELED_NONE, PEELED_TAGS, PEELED_FULLY } peeled = PEELED_NONE;
- struct ref_dir *dir;
-
- acquire_packed_ref_cache(packed_refs);
- packed_refs->cache = create_ref_cache(NULL, NULL);
- packed_refs->cache->root->flag &= ~REF_INCOMPLETE;
-
- f = fopen(packed_refs_file, "r");
- if (!f) {
- if (errno == ENOENT) {
- /*
- * This is OK; it just means that no
- * "packed-refs" file has been written yet,
- * which is equivalent to it being empty.
- */
- return packed_refs;
- } else {
- die_errno("couldn't read %s", packed_refs_file);
- }
- }
+ struct snapshot *snapshot = xcalloc(1, sizeof(*snapshot));
+ int sorted = 0;
- stat_validity_update(&packed_refs->validity, fileno(f));
+ snapshot->refs = refs;
+ acquire_snapshot(snapshot);
+ snapshot->peeled = PEELED_NONE;
- dir = get_ref_dir(packed_refs->cache->root);
- while (strbuf_getwholeline(&line, f, '\n') != EOF) {
- struct object_id oid;
- const char *refname;
- const char *traits;
+ if (!load_contents(snapshot))
+ return snapshot;
- if (!line.len || line.buf[line.len - 1] != '\n')
- die("unterminated line in %s: %s", packed_refs_file, line.buf);
+ /* If the file has a header line, process it: */
+ if (snapshot->buf < snapshot->eof && *snapshot->buf == '#') {
+ struct strbuf tmp = STRBUF_INIT;
+ char *p;
+ const char *eol;
+ struct string_list traits = STRING_LIST_INIT_NODUP;
- if (skip_prefix(line.buf, "# pack-refs with:", &traits)) {
- if (strstr(traits, " fully-peeled "))
- peeled = PEELED_FULLY;
- else if (strstr(traits, " peeled "))
- peeled = PEELED_TAGS;
- /* perhaps other traits later as well */
- continue;
- }
+ eol = memchr(snapshot->buf, '\n',
+ snapshot->eof - snapshot->buf);
+ if (!eol)
+ die_unterminated_line(refs->path,
+ snapshot->buf,
+ snapshot->eof - snapshot->buf);
- refname = parse_ref_line(&line, &oid);
- if (refname) {
- int flag = REF_ISPACKED;
+ strbuf_add(&tmp, snapshot->buf, eol - snapshot->buf);
- if (check_refname_format(refname, REFNAME_ALLOW_ONELEVEL)) {
- if (!refname_is_safe(refname))
- die("packed refname is dangerous: %s", refname);
- oidclr(&oid);
- flag |= REF_BAD_NAME | REF_ISBROKEN;
- }
- last = create_ref_entry(refname, &oid, flag);
- if (peeled == PEELED_FULLY ||
- (peeled == PEELED_TAGS && starts_with(refname, "refs/tags/")))
- last->flag |= REF_KNOWS_PEELED;
- add_ref_entry(dir, last);
- } else if (last &&
- line.buf[0] == '^' &&
- line.len == PEELED_LINE_LENGTH &&
- line.buf[PEELED_LINE_LENGTH - 1] == '\n' &&
- !get_oid_hex(line.buf + 1, &oid)) {
- oidcpy(&last->u.value.peeled, &oid);
- /*
- * Regardless of what the file header said,
- * we definitely know the value of *this*
- * reference:
- */
- last->flag |= REF_KNOWS_PEELED;
- } else {
- strbuf_setlen(&line, line.len - 1);
- die("unexpected line in %s: %s", packed_refs_file, line.buf);
- }
+ if (!skip_prefix(tmp.buf, "# pack-refs with:", (const char **)&p))
+ die_invalid_line(refs->path,
+ snapshot->buf,
+ snapshot->eof - snapshot->buf);
+
+ string_list_split_in_place(&traits, p, ' ', -1);
+
+ if (unsorted_string_list_has_string(&traits, "fully-peeled"))
+ snapshot->peeled = PEELED_FULLY;
+ else if (unsorted_string_list_has_string(&traits, "peeled"))
+ snapshot->peeled = PEELED_TAGS;
+
+ sorted = unsorted_string_list_has_string(&traits, "sorted");
+
+ /* perhaps other traits later as well */
+
+ /* The "+ 1" is for the LF character. */
+ snapshot->header_len = eol + 1 - snapshot->buf;
+
+ string_list_clear(&traits, 0);
+ strbuf_release(&tmp);
}
- fclose(f);
- strbuf_release(&line);
+ verify_buffer_safe(snapshot);
- return packed_refs;
+ if (!sorted) {
+ sort_snapshot(snapshot);
+
+ /*
+ * Reordering the records might have moved a short one
+ * to the end of the buffer, so verify the buffer's
+ * safety again:
+ */
+ verify_buffer_safe(snapshot);
+ }
+
+ if (mmap_strategy != MMAP_OK && snapshot->mmapped) {
+ /*
+ * We don't want to leave the file mmapped, so we are
+ * forced to make a copy now:
+ */
+ size_t size = snapshot->eof -
+ (snapshot->buf + snapshot->header_len);
+ char *buf_copy = xmalloc(size);
+
+ memcpy(buf_copy, snapshot->buf + snapshot->header_len, size);
+ clear_snapshot_buffer(snapshot);
+ snapshot->buf = buf_copy;
+ snapshot->eof = buf_copy + size;
+ }
+
+ return snapshot;
}
/*
- * Check that the packed refs cache (if any) still reflects the
- * contents of the file. If not, clear the cache.
+ * Check that `refs->snapshot` (if present) still reflects the
+ * contents of the `packed-refs` file. If not, clear the snapshot.
*/
-static void validate_packed_ref_cache(struct packed_ref_store *refs)
+static void validate_snapshot(struct packed_ref_store *refs)
{
- if (refs->cache &&
- !stat_validity_check(&refs->cache->validity, refs->path))
- clear_packed_ref_cache(refs);
+ if (refs->snapshot &&
+ !stat_validity_check(&refs->snapshot->validity, refs->path))
+ clear_snapshot(refs);
}
/*
- * Get the packed_ref_cache for the specified packed_ref_store,
- * creating and populating it if it hasn't been read before or if the
- * file has been changed (according to its `validity` field) since it
- * was last read. On the other hand, if we hold the lock, then assume
- * that the file hasn't been changed out from under us, so skip the
- * extra `stat()` call in `stat_validity_check()`.
+ * Get the `snapshot` for the specified packed_ref_store, creating and
+ * populating it if it hasn't been read before or if the file has been
+ * changed (according to its `validity` field) since it was last read.
+ * On the other hand, if we hold the lock, then assume that the file
+ * hasn't been changed out from under us, so skip the extra `stat()`
+ * call in `stat_validity_check()`. This function does *not* increase
+ * the snapshot's reference count on behalf of the caller.
*/
-static struct packed_ref_cache *get_packed_ref_cache(struct packed_ref_store *refs)
+static struct snapshot *get_snapshot(struct packed_ref_store *refs)
{
if (!is_lock_file_locked(&refs->lock))
- validate_packed_ref_cache(refs);
+ validate_snapshot(refs);
- if (!refs->cache)
- refs->cache = read_packed_refs(refs->path);
+ if (!refs->snapshot)
+ refs->snapshot = create_snapshot(refs);
- return refs->cache;
-}
-
-static struct ref_dir *get_packed_ref_dir(struct packed_ref_cache *packed_ref_cache)
-{
- return get_ref_dir(packed_ref_cache->cache->root);
-}
-
-static struct ref_dir *get_packed_refs(struct packed_ref_store *refs)
-{
- return get_packed_ref_dir(get_packed_ref_cache(refs));
-}
-
-/*
- * Return the ref_entry for the given refname from the packed
- * references. If it does not exist, return NULL.
- */
-static struct ref_entry *get_packed_ref(struct packed_ref_store *refs,
- const char *refname)
-{
- return find_ref_entry(get_packed_refs(refs), refname);
+ return refs->snapshot;
}
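
Because get_snapshot() hands back a borrowed pointer, a caller that needs the
snapshot to outlive the next call into this file must pin it itself. A minimal
sketch of that pattern (not part of the patch; it mirrors what
packed_ref_iterator_begin()/packed_ref_iterator_abort() do below):

    static void hold_snapshot_example(struct packed_ref_store *refs)
    {
        struct snapshot *snapshot = get_snapshot(refs); /* borrowed pointer */

        acquire_snapshot(snapshot);  /* take our own reference */
        /* ... read records out of snapshot->buf ... */
        release_snapshot(snapshot);  /* drop it; may free the snapshot */
    }
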
static int packed_read_raw_ref(struct ref_store *ref_store,
{
struct packed_ref_store *refs =
packed_downcast(ref_store, REF_STORE_READ, "read_raw_ref");
-
- struct ref_entry *entry;
+ struct snapshot *snapshot = get_snapshot(refs);
+ const char *rec;
*type = 0;
- entry = get_packed_ref(refs, refname);
- if (!entry) {
+ rec = find_reference_location(snapshot, refname, 1);
+
+ if (!rec) {
+ /* refname is not a packed reference. */
errno = ENOENT;
return -1;
}
- hashcpy(sha1, entry->u.value.oid.hash);
+ if (get_sha1_hex(rec, sha1))
+ die_invalid_line(refs->path, rec, snapshot->eof - rec);
+
*type = REF_ISPACKED;
return 0;
}
-static int packed_peel_ref(struct ref_store *ref_store,
- const char *refname, unsigned char *sha1)
-{
- struct packed_ref_store *refs =
- packed_downcast(ref_store, REF_STORE_READ | REF_STORE_ODB,
- "peel_ref");
- struct ref_entry *r = get_packed_ref(refs, refname);
-
- if (!r || peel_entry(r, 0))
- return -1;
-
- hashcpy(sha1, r->u.value.peeled.hash);
- return 0;
-}
+/*
+ * This value is set in `base.flags` if the peeled value of the
+ * current reference is known. In that case, `peeled` contains the
+ * correct peeled value for the reference, which might be `null_sha1`
+ * if the reference is not a tag or if it is broken.
+ */
+#define REF_KNOWS_PEELED 0x40
+/*
+ * An iterator over a snapshot of a `packed-refs` file.
+ */
struct packed_ref_iterator {
struct ref_iterator base;
- struct packed_ref_cache *cache;
- struct ref_iterator *iter0;
+ struct snapshot *snapshot;
+
+ /* The current position in the snapshot's buffer: */
+ const char *pos;
+
+ /* The end of the part of the buffer that will be iterated over: */
+ const char *eof;
+
+ /* Scratch space for current values: */
+ struct object_id oid, peeled;
+ struct strbuf refname_buf;
+
unsigned int flags;
};
+/*
+ * Move the iterator to the next record in the snapshot, without
+ * respect for whether the record is actually required by the current
+ * iteration. Adjust the fields in `iter` and return `ITER_OK` or
+ * `ITER_DONE`. This function does not free the iterator in the case
+ * of `ITER_DONE`.
+ */
+static int next_record(struct packed_ref_iterator *iter)
+{
+ const char *p = iter->pos, *eol;
+
+ strbuf_reset(&iter->refname_buf);
+
+ if (iter->pos == iter->eof)
+ return ITER_DONE;
+
+ iter->base.flags = REF_ISPACKED;
+
+ if (iter->eof - p < GIT_SHA1_HEXSZ + 2 ||
+ parse_oid_hex(p, &iter->oid, &p) ||
+ !isspace(*p++))
+ die_invalid_line(iter->snapshot->refs->path,
+ iter->pos, iter->eof - iter->pos);
+
+ eol = memchr(p, '\n', iter->eof - p);
+ if (!eol)
+ die_unterminated_line(iter->snapshot->refs->path,
+ iter->pos, iter->eof - iter->pos);
+
+ strbuf_add(&iter->refname_buf, p, eol - p);
+ iter->base.refname = iter->refname_buf.buf;
+
+ if (check_refname_format(iter->base.refname, REFNAME_ALLOW_ONELEVEL)) {
+ if (!refname_is_safe(iter->base.refname))
+ die("packed refname is dangerous: %s",
+ iter->base.refname);
+ oidclr(&iter->oid);
+ iter->base.flags |= REF_BAD_NAME | REF_ISBROKEN;
+ }
+ if (iter->snapshot->peeled == PEELED_FULLY ||
+ (iter->snapshot->peeled == PEELED_TAGS &&
+ starts_with(iter->base.refname, "refs/tags/")))
+ iter->base.flags |= REF_KNOWS_PEELED;
+
+ iter->pos = eol + 1;
+
+ if (iter->pos < iter->eof && *iter->pos == '^') {
+ p = iter->pos + 1;
+ if (iter->eof - p < GIT_SHA1_HEXSZ + 1 ||
+ parse_oid_hex(p, &iter->peeled, &p) ||
+ *p++ != '\n')
+ die_invalid_line(iter->snapshot->refs->path,
+ iter->pos, iter->eof - iter->pos);
+ iter->pos = p;
+
+ /*
+ * Regardless of what the file header said, we
+ * definitely know the value of *this* reference. But
+ * we suppress it if the reference is broken:
+ */
+ if ((iter->base.flags & REF_ISBROKEN)) {
+ oidclr(&iter->peeled);
+ iter->base.flags &= ~REF_KNOWS_PEELED;
+ } else {
+ iter->base.flags |= REF_KNOWS_PEELED;
+ }
+ } else {
+ oidclr(&iter->peeled);
+ }
+
+ return ITER_OK;
+}
+
static int packed_ref_iterator_advance(struct ref_iterator *ref_iterator)
{
struct packed_ref_iterator *iter =
(struct packed_ref_iterator *)ref_iterator;
int ok;
- while ((ok = ref_iterator_advance(iter->iter0)) == ITER_OK) {
+ while ((ok = next_record(iter)) == ITER_OK) {
if (iter->flags & DO_FOR_EACH_PER_WORKTREE_ONLY &&
- ref_type(iter->iter0->refname) != REF_TYPE_PER_WORKTREE)
+ ref_type(iter->base.refname) != REF_TYPE_PER_WORKTREE)
continue;
if (!(iter->flags & DO_FOR_EACH_INCLUDE_BROKEN) &&
- !ref_resolves_to_object(iter->iter0->refname,
- iter->iter0->oid,
- iter->iter0->flags))
+ !ref_resolves_to_object(iter->base.refname, &iter->oid,
+ iter->flags))
continue;
- iter->base.refname = iter->iter0->refname;
- iter->base.oid = iter->iter0->oid;
- iter->base.flags = iter->iter0->flags;
return ITER_OK;
}
- iter->iter0 = NULL;
if (ref_iterator_abort(ref_iterator) != ITER_DONE)
ok = ITER_ERROR;
struct packed_ref_iterator *iter =
(struct packed_ref_iterator *)ref_iterator;
- return ref_iterator_peel(iter->iter0, peeled);
+ if ((iter->base.flags & REF_KNOWS_PEELED)) {
+ oidcpy(peeled, &iter->peeled);
+ return is_null_oid(&iter->peeled) ? -1 : 0;
+ } else if ((iter->base.flags & (REF_ISBROKEN | REF_ISSYMREF))) {
+ return -1;
+ } else {
+ return !!peel_object(iter->oid.hash, peeled->hash);
+ }
}
static int packed_ref_iterator_abort(struct ref_iterator *ref_iterator)
(struct packed_ref_iterator *)ref_iterator;
int ok = ITER_DONE;
- if (iter->iter0)
- ok = ref_iterator_abort(iter->iter0);
-
- release_packed_ref_cache(iter->cache);
+ strbuf_release(&iter->refname_buf);
+ release_snapshot(iter->snapshot);
base_ref_iterator_free(ref_iterator);
return ok;
}
const char *prefix, unsigned int flags)
{
struct packed_ref_store *refs;
+ struct snapshot *snapshot;
+ const char *start;
struct packed_ref_iterator *iter;
struct ref_iterator *ref_iterator;
unsigned int required_flags = REF_STORE_READ;
required_flags |= REF_STORE_ODB;
refs = packed_downcast(ref_store, required_flags, "ref_iterator_begin");
+ /*
+ * Note that `get_snapshot()` internally checks whether the
+ * snapshot is up to date with what is on disk, and re-reads
+ * it if not.
+ */
+ snapshot = get_snapshot(refs);
+
+ if (!snapshot->buf)
+ return empty_ref_iterator_begin();
+
iter = xcalloc(1, sizeof(*iter));
ref_iterator = &iter->base;
- base_ref_iterator_init(ref_iterator, &packed_ref_iterator_vtable);
+ base_ref_iterator_init(ref_iterator, &packed_ref_iterator_vtable, 1);
- /*
- * Note that get_packed_ref_cache() internally checks whether
- * the packed-ref cache is up to date with what is on disk,
- * and re-reads it if not.
- */
+ iter->snapshot = snapshot;
+ acquire_snapshot(snapshot);
- iter->cache = get_packed_ref_cache(refs);
- acquire_packed_ref_cache(iter->cache);
- iter->iter0 = cache_ref_iterator_begin(iter->cache->cache, prefix, 0);
+ if (prefix && *prefix)
+ start = find_reference_location(snapshot, prefix, 0);
+ else
+ start = snapshot->buf + snapshot->header_len;
+
+ iter->pos = start;
+ iter->eof = snapshot->eof;
+ strbuf_init(&iter->refname_buf, 0);
+
+ iter->base.oid = &iter->oid;
iter->flags = flags;
+ if (prefix && *prefix)
+ /* Stop iteration after we've gone *past* prefix: */
+ ref_iterator = prefix_ref_iterator_begin(ref_iterator, prefix, 0);
+
return ref_iterator;
}
/*
* Now that we hold the `packed-refs` lock, make sure that our
- * cache matches the current version of the file. Normally
- * `get_packed_ref_cache()` does that for us, but that
- * function assumes that when the file is locked, any existing
- * cache is still valid. We've just locked the file, but it
- * might have changed the moment *before* we locked it.
+ * snapshot matches the current version of the file. Normally
+ * `get_snapshot()` does that for us, but that function
+ * assumes that when the file is locked, any existing snapshot
+ * is still valid. We've just locked the file, but it might
+ * have changed the moment *before* we locked it.
*/
- validate_packed_ref_cache(refs);
+ validate_snapshot(refs);
/*
* Now make sure that the packed-refs file as it exists in the
- * locked state is loaded into the cache:
+ * locked state is loaded into the snapshot:
*/
- get_packed_ref_cache(refs);
+ get_snapshot(refs);
return 0;
}
}
/*
- * The packed-refs header line that we write out. Perhaps other
- * traits will be added later. The trailing space is required.
+ * The packed-refs header line that we write out. Perhaps other traits
+ * will be added later.
+ *
+ * Note that earlier versions of Git used to parse these traits by
+ * looking for " trait " in the line. For this reason, the space after
+ * the colon and the trailing space are required.
*/
static const char PACKED_REFS_HEADER[] =
- "# pack-refs with: peeled fully-peeled \n";
+ "# pack-refs with: peeled fully-peeled sorted \n";
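
To illustrate why the surrounding spaces matter, here is a small standalone
sketch (not part of the patch) of the old strstr()-style trait detection that
this header line must keep satisfying:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const char *header = "# pack-refs with: peeled fully-peeled sorted \n";
        const char *traits = strchr(header, ':') + 1;

        /*
         * Older readers looked for " trait " with a space on both sides,
         * so the space after the colon and the trailing space are what
         * make the first and last traits visible to them.
         */
        printf("peeled: %s\n", strstr(traits, " peeled ") ? "yes" : "no");
        printf("fully-peeled: %s\n", strstr(traits, " fully-peeled ") ? "yes" : "no");
        printf("sorted: %s\n", strstr(traits, " sorted ") ? "yes" : "no");
        return 0;
    }
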
static int packed_init_db(struct ref_store *ref_store, struct strbuf *err)
{
}
/*
- * Write the packed-refs from the cache to the packed-refs tempfile,
- * incorporating any changes from `updates`. `updates` must be a
- * sorted string list whose keys are the refnames and whose util
+ * Write the packed refs from the current snapshot to the packed-refs
+ * tempfile, incorporating any changes from `updates`. `updates` must
+ * be a sorted string list whose keys are the refnames and whose util
* values are `struct ref_update *`. On error, rollback the tempfile,
* write an error message to `err`, and return a nonzero value.
*
}
if (ok != ITER_DONE) {
- strbuf_addf(err, "unable to write packed-refs file: "
- "error iterating over old contents");
+ strbuf_addstr(err, "unable to write packed-refs file: "
+ "error iterating over old contents");
goto error;
}
/*
* Note that we *don't* skip transactions with zero updates,
* because such a transaction might be executed for the side
- * effect of ensuring that all of the references are peeled.
- * If the caller wants to optimize away empty transactions, it
- * should do so itself.
+ * effect of ensuring that all of the references are peeled or
+ * ensuring that the `packed-refs` file is sorted. If the
+ * caller wants to optimize away empty transactions, it should
+ * do so itself.
*/
data = xcalloc(1, sizeof(*data));
int ret = TRANSACTION_GENERIC_ERROR;
char *packed_refs_path;
+ clear_snapshot(refs);
+
packed_refs_path = get_locked_file_path(&refs->lock);
if (rename_tempfile(&refs->tempfile, packed_refs_path)) {
strbuf_addf(err, "error replacing %s: %s",
goto cleanup;
}
- clear_packed_ref_cache(refs);
ret = 0;
cleanup:
die("BUG: packed reference store does not support renaming references");
}
+static int packed_copy_ref(struct ref_store *ref_store,
+ const char *oldrefname, const char *newrefname,
+ const char *logmsg)
+{
+ die("BUG: packed reference store does not support copying references");
+}
+
static struct ref_iterator *packed_reflog_iterator_begin(struct ref_store *ref_store)
{
return empty_ref_iterator_begin();
packed_initial_transaction_commit,
packed_pack_refs,
- packed_peel_ref,
packed_create_symref,
packed_delete_refs,
packed_rename_ref,
+ packed_copy_ref,
packed_ref_iterator_begin,
packed_read_raw_ref,
FLEX_ALLOC_STR(ref, name, refname);
oidcpy(&ref->u.value.oid, oid);
- oidclr(&ref->u.value.peeled);
ref->flag = flag;
return ref;
}
}
}
-enum peel_status peel_entry(struct ref_entry *entry, int repeel)
-{
- enum peel_status status;
-
- if (entry->flag & REF_KNOWS_PEELED) {
- if (repeel) {
- entry->flag &= ~REF_KNOWS_PEELED;
- oidclr(&entry->u.value.peeled);
- } else {
- return is_null_oid(&entry->u.value.peeled) ?
- PEEL_NON_TAG : PEEL_PEELED;
- }
- }
- if (entry->flag & REF_ISBROKEN)
- return PEEL_BROKEN;
- if (entry->flag & REF_ISSYMREF)
- return PEEL_IS_SYMREF;
-
- status = peel_object(entry->u.value.oid.hash, entry->u.value.peeled.hash);
- if (status == PEEL_PEELED || status == PEEL_NON_TAG)
- entry->flag |= REF_KNOWS_PEELED;
- return status;
-}
-
static int cache_ref_iterator_peel(struct ref_iterator *ref_iterator,
struct object_id *peeled)
{
- struct cache_ref_iterator *iter =
- (struct cache_ref_iterator *)ref_iterator;
- struct cache_ref_iterator_level *level;
- struct ref_entry *entry;
-
- level = &iter->levels[iter->levels_nr - 1];
-
- if (level->index == -1)
- die("BUG: peel called before advance for cache iterator");
-
- entry = level->dir->entries[level->index];
-
- if (peel_entry(entry, 0))
- return -1;
- oidcpy(peeled, &entry->u.value.peeled);
- return 0;
+ return peel_object(ref_iterator->oid->hash, peeled->hash);
}
static int cache_ref_iterator_abort(struct ref_iterator *ref_iterator)
iter = xcalloc(1, sizeof(*iter));
ref_iterator = &iter->base;
- base_ref_iterator_init(ref_iterator, &cache_ref_iterator_vtable);
+ base_ref_iterator_init(ref_iterator, &cache_ref_iterator_vtable, 1);
ALLOC_GROW(iter->levels, 10, iter->levels_alloc);
iter->levels_nr = 1;
* referred to by the last reference in the symlink chain.
*/
struct object_id oid;
-
- /*
- * If REF_KNOWS_PEELED, then this field holds the peeled value
- * of this reference, or null if the reference is known not to
- * be peelable. See the documentation for peel_ref() for an
- * exact definition of "peelable".
- */
- struct object_id peeled;
};
/*
* public values; see refs.h.
*/
-/*
- * The field ref_entry->u.value.peeled of this value entry contains
- * the correct peeled value for the reference, which might be
- * null_sha1 if the reference is not a tag or if it is broken.
- */
-#define REF_KNOWS_PEELED 0x10
-
/* ref_entry represents a directory of references */
-#define REF_DIR 0x20
+#define REF_DIR 0x10
/*
* Entry has not yet been read from disk (used only for REF_DIR
* entries representing loose references)
*/
-#define REF_INCOMPLETE 0x40
+#define REF_INCOMPLETE 0x20
/*
* A ref_entry represents either a reference or a "subdirectory" of
* Start iterating over references in `cache`. If `prefix` is
* specified, only include references whose names start with that
* prefix. If `prime_dir` is true, then fill any incomplete
- * directories before beginning the iteration.
+ * directories before beginning the iteration. The output is ordered
+ * by refname.
*/
struct ref_iterator *cache_ref_iterator_begin(struct ref_cache *cache,
const char *prefix,
int prime_dir);
-/*
- * Peel the entry (if possible) and return its new peel_status. If
- * repeel is true, re-peel the entry even if there is an old peeled
- * value that is already stored in it.
- *
- * It is OK to call this function with a packed reference entry that
- * might be stale and might even refer to an object that has since
- * been garbage-collected. In such a case, if the entry has
- * REF_KNOWS_PEELED then leave the status unchanged and return
- * PEEL_PEELED or PEEL_NON_TAG; otherwise, return PEEL_INVALID.
- */
-enum peel_status peel_entry(struct ref_entry *entry, int repeel);
-
#endif /* REFS_REF_CACHE_H */
*/
struct ref_iterator {
struct ref_iterator_vtable *vtable;
+
+ /*
+ * Does this `ref_iterator` iterate over references in order
+ * by refname?
+ */
+ unsigned int ordered : 1;
+
const char *refname;
const struct object_id *oid;
unsigned int flags;
* which the refname begins with prefix. If trim is non-zero, then
* trim that many characters off the beginning of each refname. flags
* can be DO_FOR_EACH_INCLUDE_BROKEN to include broken references in
- * the iteration.
+ * the iteration. The output is ordered by refname.
*/
struct ref_iterator *refs_ref_iterator_begin(
struct ref_store *refs,
* Iterate over the entries from iter0 and iter1, with the values
* interleaved as directed by the select function. The iterator takes
* ownership of iter0 and iter1 and frees them when the iteration is
- * over.
+ * over. A derived class should set `ordered` to 1 or 0 based on
+ * whether it generates its output in order by reference name.
*/
struct ref_iterator *merge_ref_iterator_begin(
+ int ordered,
struct ref_iterator *iter0, struct ref_iterator *iter1,
ref_iterator_select_fn *select, void *cb_data);
* As an convenience to callers, if prefix is the empty string and
* trim is zero, this function returns iter0 directly, without
* wrapping it.
+ *
+ * The resulting ref_iterator is ordered if iter0 is.
*/
struct ref_iterator *prefix_ref_iterator_begin(struct ref_iterator *iter0,
const char *prefix,
/*
* Base class constructor for ref_iterators. Initialize the
* ref_iterator part of iter, setting its vtable pointer as specified.
+ * `ordered` should be set to 1 if the iterator will iterate over
+ * references in order by refname; otherwise it should be set to 0.
* This is meant to be called only by the initializers of derived
* classes.
*/
void base_ref_iterator_init(struct ref_iterator *iter,
- struct ref_iterator_vtable *vtable);
+ struct ref_iterator_vtable *vtable,
+ int ordered);
/*
* Base class destructor for ref_iterators. Destroy the ref_iterator
struct strbuf *err);
typedef int pack_refs_fn(struct ref_store *ref_store, unsigned int flags);
-typedef int peel_ref_fn(struct ref_store *ref_store,
- const char *refname, unsigned char *sha1);
typedef int create_symref_fn(struct ref_store *ref_store,
const char *ref_target,
const char *refs_heads_master,
typedef int rename_ref_fn(struct ref_store *ref_store,
const char *oldref, const char *newref,
const char *logmsg);
+typedef int copy_ref_fn(struct ref_store *ref_store,
+ const char *oldref, const char *newref,
+ const char *logmsg);
/*
* Iterate over the references in `ref_store` whose names start with
* `prefix`. `prefix` is matched as a literal string, without regard
* for path separators. If prefix is NULL or the empty string, iterate
- * over all references in `ref_store`.
+ * over all references in `ref_store`. The output is ordered by
+ * refname.
*/
typedef struct ref_iterator *ref_iterator_begin_fn(
struct ref_store *ref_store,
ref_transaction_commit_fn *initial_transaction_commit;
pack_refs_fn *pack_refs;
- peel_ref_fn *peel_ref;
create_symref_fn *create_symref;
delete_refs_fn *delete_refs;
rename_ref_fn *rename_ref;
+ copy_ref_fn *copy_ref;
ref_iterator_begin_fn *iterator_begin;
read_raw_ref_fn *read_raw_ref;
void repo_clear(struct repository *repo)
{
- free(repo->gitdir);
- repo->gitdir = NULL;
- free(repo->commondir);
- repo->commondir = NULL;
- free(repo->objectdir);
- repo->objectdir = NULL;
- free(repo->graft_file);
- repo->graft_file = NULL;
- free(repo->index_file);
- repo->index_file = NULL;
- free(repo->worktree);
- repo->worktree = NULL;
- free(repo->submodule_prefix);
- repo->submodule_prefix = NULL;
+ FREE_AND_NULL(repo->gitdir);
+ FREE_AND_NULL(repo->commondir);
+ FREE_AND_NULL(repo->objectdir);
+ FREE_AND_NULL(repo->graft_file);
+ FREE_AND_NULL(repo->index_file);
+ FREE_AND_NULL(repo->worktree);
+ FREE_AND_NULL(repo->submodule_prefix);
if (repo->config) {
git_configset_clear(repo->config);
- free(repo->config);
- repo->config = NULL;
+ FREE_AND_NULL(repo->config);
}
if (repo->submodule_cache) {
if (repo->index) {
discard_index(repo->index);
- free(repo->index);
- repo->index = NULL;
+ FREE_AND_NULL(repo->index);
}
}
}
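
The repeated free-then-clear pairs above are replaced by the FREE_AND_NULL()
helper. For reference, it is a small macro defined elsewhere in the tree,
roughly along these lines (sketch; the exact definition may differ):

    #define FREE_AND_NULL(p) do { free(p); (p) = NULL; } while (0)
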
/* Create an array of 'char *' to be used as the childenv */
- childenv = xmalloc((env.nr + 1) * sizeof(char *));
+ ALLOC_ARRAY(childenv, env.nr + 1);
for (i = 0; i < env.nr; i++)
childenv[i] = env.items[i].util;
childenv[env.nr] = NULL;
#include "trailer.h"
#include "log-tree.h"
#include "wt-status.h"
+#include "hashmap.h"
#define GIT_REFLOG_ACTION "GIT_REFLOG_ACTION"
free(opts->xopts[i]);
free(opts->xopts);
- strbuf_addf(&dir, "%s", get_dir(opts));
+ strbuf_addstr(&dir, get_dir(opts));
remove_dir_recursively(&dir, 0);
strbuf_release(&dir);
strbuf_release(&sob);
}
+
+int sequencer_make_script(int keep_empty, FILE *out,
+ int argc, const char **argv)
+{
+ char *format = NULL;
+ struct pretty_print_context pp = {0};
+ struct strbuf buf = STRBUF_INIT;
+ struct rev_info revs;
+ struct commit *commit;
+
+ init_revisions(&revs, NULL);
+ revs.verbose_header = 1;
+ revs.max_parents = 1;
+ revs.cherry_pick = 1;
+ revs.limited = 1;
+ revs.reverse = 1;
+ revs.right_only = 1;
+ revs.sort_order = REV_SORT_IN_GRAPH_ORDER;
+ revs.topo_order = 1;
+
+ revs.pretty_given = 1;
+ git_config_get_string("rebase.instructionFormat", &format);
+ if (!format || !*format) {
+ free(format);
+ format = xstrdup("%s");
+ }
+ get_commit_format(format, &revs);
+ free(format);
+ pp.fmt = revs.commit_format;
+ pp.output_encoding = get_log_output_encoding();
+
+ if (setup_revisions(argc, argv, &revs, NULL) > 1)
+ return error(_("make_script: unhandled options"));
+
+ if (prepare_revision_walk(&revs) < 0)
+ return error(_("make_script: error preparing revisions"));
+
+ while ((commit = get_revision(&revs))) {
+ strbuf_reset(&buf);
+ if (!keep_empty && is_original_commit_empty(commit))
+ strbuf_addf(&buf, "%c ", comment_line_char);
+ strbuf_addf(&buf, "pick %s ", oid_to_hex(&commit->object.oid));
+ pretty_print_commit(&pp, commit, &buf);
+ strbuf_addch(&buf, '\n');
+ fputs(buf.buf, out);
+ }
+ strbuf_release(&buf);
+ return 0;
+}
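
As a usage illustration only: a hypothetical caller (not part of the patch,
and assumed to be compiled inside the Git source tree) could emit the todo
script for a symmetric range like this; the range and the keep_empty value
are just examples:

    #include "cache.h"
    #include "sequencer.h"

    static int print_todo_script(void)
    {
        /* argv[0] is a dummy program name for setup_revisions(). */
        const char *argv[] = { "make_script", "origin/master...HEAD", NULL };

        return sequencer_make_script(/* keep_empty = */ 0, stdout, 2, argv);
    }
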
+
+int transform_todo_ids(int shorten_ids)
+{
+ const char *todo_file = rebase_path_todo();
+ struct todo_list todo_list = TODO_LIST_INIT;
+ int fd, res, i;
+ FILE *out;
+
+ strbuf_reset(&todo_list.buf);
+ fd = open(todo_file, O_RDONLY);
+ if (fd < 0)
+ return error_errno(_("could not open '%s'"), todo_file);
+ if (strbuf_read(&todo_list.buf, fd, 0) < 0) {
+ close(fd);
+ return error(_("could not read '%s'."), todo_file);
+ }
+ close(fd);
+
+ res = parse_insn_buffer(todo_list.buf.buf, &todo_list);
+ if (res) {
+ todo_list_release(&todo_list);
+ return error(_("unusable todo list: '%s'"), todo_file);
+ }
+
+ out = fopen(todo_file, "w");
+ if (!out) {
+ todo_list_release(&todo_list);
+ return error(_("unable to open '%s' for writing"), todo_file);
+ }
+ for (i = 0; i < todo_list.nr; i++) {
+ struct todo_item *item = todo_list.items + i;
+ int bol = item->offset_in_buf;
+ const char *p = todo_list.buf.buf + bol;
+ int eol = i + 1 < todo_list.nr ?
+ todo_list.items[i + 1].offset_in_buf :
+ todo_list.buf.len;
+
+ if (item->command >= TODO_EXEC && item->command != TODO_DROP)
+ fwrite(p, eol - bol, 1, out);
+ else {
+ const char *id = shorten_ids ?
+ short_commit_name(item->commit) :
+ oid_to_hex(&item->commit->object.oid);
+ int len;
+
+ p += strspn(p, " \t"); /* left-trim command */
+ len = strcspn(p, " \t"); /* length of command */
+
+ fprintf(out, "%.*s %s %.*s\n",
+ len, p, id, item->arg_len, item->arg);
+ }
+ }
+ fclose(out);
+ todo_list_release(&todo_list);
+ return 0;
+}
+
+enum check_level {
+ CHECK_IGNORE = 0, CHECK_WARN, CHECK_ERROR
+};
+
+static enum check_level get_missing_commit_check_level(void)
+{
+ const char *value;
+
+ if (git_config_get_value("rebase.missingcommitscheck", &value) ||
+ !strcasecmp("ignore", value))
+ return CHECK_IGNORE;
+ if (!strcasecmp("warn", value))
+ return CHECK_WARN;
+ if (!strcasecmp("error", value))
+ return CHECK_ERROR;
+ warning(_("unrecognized setting %s for option "
+ "rebase.missingCommitsCheck. Ignoring."), value);
+ return CHECK_IGNORE;
+}
+
+/*
+ * Check if the user dropped some commits by mistake; the behaviour is
+ * determined by rebase.missingCommitsCheck. Also check if there is an
+ * unrecognized command or a bad SHA-1 in a command.
+ */
+int check_todo_list(void)
+{
+ enum check_level check_level = get_missing_commit_check_level();
+ struct strbuf todo_file = STRBUF_INIT;
+ struct todo_list todo_list = TODO_LIST_INIT;
+ struct strbuf missing = STRBUF_INIT;
+ int advise_to_edit_todo = 0, res = 0, fd, i;
+
+ strbuf_addstr(&todo_file, rebase_path_todo());
+ fd = open(todo_file.buf, O_RDONLY);
+ if (fd < 0) {
+ res = error_errno(_("could not open '%s'"), todo_file.buf);
+ goto leave_check;
+ }
+ if (strbuf_read(&todo_list.buf, fd, 0) < 0) {
+ close(fd);
+ res = error(_("could not read '%s'."), todo_file.buf);
+ goto leave_check;
+ }
+ close(fd);
+ advise_to_edit_todo = res =
+ parse_insn_buffer(todo_list.buf.buf, &todo_list);
+
+ if (res || check_level == CHECK_IGNORE)
+ goto leave_check;
+
+ /* Mark the commits in git-rebase-todo as seen */
+ for (i = 0; i < todo_list.nr; i++) {
+ struct commit *commit = todo_list.items[i].commit;
+ if (commit)
+ commit->util = (void *)1;
+ }
+
+ todo_list_release(&todo_list);
+ strbuf_addstr(&todo_file, ".backup");
+ fd = open(todo_file.buf, O_RDONLY);
+ if (fd < 0) {
+ res = error_errno(_("could not open '%s'"), todo_file.buf);
+ goto leave_check;
+ }
+ if (strbuf_read(&todo_list.buf, fd, 0) < 0) {
+ close(fd);
+ res = error(_("could not read '%s'."), todo_file.buf);
+ goto leave_check;
+ }
+ close(fd);
+ strbuf_release(&todo_file);
+ res = !!parse_insn_buffer(todo_list.buf.buf, &todo_list);
+
+ /* Find commits in git-rebase-todo.backup yet unseen */
+ for (i = todo_list.nr - 1; i >= 0; i--) {
+ struct todo_item *item = todo_list.items + i;
+ struct commit *commit = item->commit;
+ if (commit && !commit->util) {
+ strbuf_addf(&missing, " - %s %.*s\n",
+ short_commit_name(commit),
+ item->arg_len, item->arg);
+ commit->util = (void *)1;
+ }
+ }
+
+ /* Warn about missing commits */
+ if (!missing.len)
+ goto leave_check;
+
+ if (check_level == CHECK_ERROR)
+ advise_to_edit_todo = res = 1;
+
+ fprintf(stderr,
+ _("Warning: some commits may have been dropped accidentally.\n"
+ "Dropped commits (newer to older):\n"));
+
+ /* Make the list user-friendly and display */
+ fputs(missing.buf, stderr);
+ strbuf_release(&missing);
+
+ fprintf(stderr, _("To avoid this message, use \"drop\" to "
+ "explicitly remove a commit.\n\n"
+ "Use 'git config rebase.missingCommitsCheck' to change "
+ "the level of warnings.\n"
+ "The possible behaviours are: ignore, warn, error.\n\n"));
+
+leave_check:
+ strbuf_release(&todo_file);
+ todo_list_release(&todo_list);
+
+ if (advise_to_edit_todo)
+ fprintf(stderr,
+ _("You can fix this with 'git rebase --edit-todo' "
+ "and then run 'git rebase --continue'.\n"
+ "Or you can abort the rebase with 'git rebase"
+ " --abort'.\n"));
+
+ return res;
+}
+
+/* skip picking commits whose parents are unchanged */
+int skip_unnecessary_picks(void)
+{
+ const char *todo_file = rebase_path_todo();
+ struct strbuf buf = STRBUF_INIT;
+ struct todo_list todo_list = TODO_LIST_INIT;
+ struct object_id onto_oid, *oid = &onto_oid, *parent_oid;
+ int fd, i;
+
+ if (!read_oneliner(&buf, rebase_path_onto(), 0))
+ return error(_("could not read 'onto'"));
+ if (get_oid(buf.buf, &onto_oid)) {
+ strbuf_release(&buf);
+ return error(_("need a HEAD to fixup"));
+ }
+ strbuf_release(&buf);
+
+ fd = open(todo_file, O_RDONLY);
+ if (fd < 0) {
+ return error_errno(_("could not open '%s'"), todo_file);
+ }
+ if (strbuf_read(&todo_list.buf, fd, 0) < 0) {
+ close(fd);
+ return error(_("could not read '%s'."), todo_file);
+ }
+ close(fd);
+ if (parse_insn_buffer(todo_list.buf.buf, &todo_list) < 0) {
+ todo_list_release(&todo_list);
+ return -1;
+ }
+
+ for (i = 0; i < todo_list.nr; i++) {
+ struct todo_item *item = todo_list.items + i;
+
+ if (item->command >= TODO_NOOP)
+ continue;
+ if (item->command != TODO_PICK)
+ break;
+ if (parse_commit(item->commit)) {
+ todo_list_release(&todo_list);
+ return error(_("could not parse commit '%s'"),
+ oid_to_hex(&item->commit->object.oid));
+ }
+ if (!item->commit->parents)
+ break; /* root commit */
+ if (item->commit->parents->next)
+ break; /* merge commit */
+ parent_oid = &item->commit->parents->item->object.oid;
+ if (hashcmp(parent_oid->hash, oid->hash))
+ break;
+ oid = &item->commit->object.oid;
+ }
+ if (i > 0) {
+ int offset = i < todo_list.nr ?
+ todo_list.items[i].offset_in_buf : todo_list.buf.len;
+ const char *done_path = rebase_path_done();
+
+ fd = open(done_path, O_CREAT | O_WRONLY | O_APPEND, 0666);
+ if (fd < 0) {
+ error_errno(_("could not open '%s' for writing"),
+ done_path);
+ todo_list_release(&todo_list);
+ return -1;
+ }
+ if (write_in_full(fd, todo_list.buf.buf, offset) < 0) {
+ error_errno(_("could not write to '%s'"), done_path);
+ todo_list_release(&todo_list);
+ close(fd);
+ return -1;
+ }
+ close(fd);
+
+ fd = open(rebase_path_todo(), O_WRONLY, 0666);
+ if (fd < 0) {
+ error_errno(_("could not open '%s' for writing"),
+ rebase_path_todo());
+ todo_list_release(&todo_list);
+ return -1;
+ }
+ if (write_in_full(fd, todo_list.buf.buf + offset,
+ todo_list.buf.len - offset) < 0) {
+ error_errno(_("could not write to '%s'"),
+ rebase_path_todo());
+ close(fd);
+ todo_list_release(&todo_list);
+ return -1;
+ }
+ if (ftruncate(fd, todo_list.buf.len - offset) < 0) {
+ error_errno(_("could not truncate '%s'"),
+ rebase_path_todo());
+ todo_list_release(&todo_list);
+ close(fd);
+ return -1;
+ }
+ close(fd);
+
+ todo_list.current = i;
+ if (is_fixup(peek_command(&todo_list, 0)))
+ record_in_rewritten(oid, peek_command(&todo_list, 0));
+ }
+
+ todo_list_release(&todo_list);
+ printf("%s\n", oid_to_hex(oid));
+
+ return 0;
+}
+
+struct subject2item_entry {
+ struct hashmap_entry entry;
+ int i;
+ char subject[FLEX_ARRAY];
+};
+
+static int subject2item_cmp(const void *fndata,
+ const struct subject2item_entry *a,
+ const struct subject2item_entry *b, const void *key)
+{
+ return key ? strcmp(a->subject, key) : strcmp(a->subject, b->subject);
+}
+
+/*
+ * Rearrange the todo list that has both "pick commit-id msg" and "pick
+ * commit-id fixup!/squash! msg" in it so that the latter is put immediately
+ * after the former, and change "pick" to "fixup"/"squash".
+ *
+ * Note that if the config has specified a custom instruction format, each log
+ * message will have to be retrieved from the commit (as the oneline in the
+ * script cannot be trusted) in order to normalize the autosquash arrangement.
+ */
+int rearrange_squash(void)
+{
+ const char *todo_file = rebase_path_todo();
+ struct todo_list todo_list = TODO_LIST_INIT;
+ struct hashmap subject2item;
+ int res = 0, rearranged = 0, *next, *tail, fd, i;
+ char **subjects;
+
+ fd = open(todo_file, O_RDONLY);
+ if (fd < 0)
+ return error_errno(_("could not open '%s'"), todo_file);
+ if (strbuf_read(&todo_list.buf, fd, 0) < 0) {
+ close(fd);
+ return error(_("could not read '%s'."), todo_file);
+ }
+ close(fd);
+ if (parse_insn_buffer(todo_list.buf.buf, &todo_list) < 0) {
+ todo_list_release(&todo_list);
+ return -1;
+ }
+
+ /*
+ * The hashmap maps onelines to the respective todo list index.
+ *
+ * If any items need to be rearranged, the next[i] value will indicate
+ * which item was moved directly after the i'th.
+ *
+	 * In that case, tail[i] will indicate the index of the latest item to
+ * be moved to appear after the i'th.
+ */
+ hashmap_init(&subject2item, (hashmap_cmp_fn) subject2item_cmp,
+ NULL, todo_list.nr);
+ ALLOC_ARRAY(next, todo_list.nr);
+ ALLOC_ARRAY(tail, todo_list.nr);
+ ALLOC_ARRAY(subjects, todo_list.nr);
+ for (i = 0; i < todo_list.nr; i++) {
+ struct strbuf buf = STRBUF_INIT;
+ struct todo_item *item = todo_list.items + i;
+ const char *commit_buffer, *subject, *p;
+ size_t subject_len;
+ int i2 = -1;
+ struct subject2item_entry *entry;
+
+ next[i] = tail[i] = -1;
+ if (item->command >= TODO_EXEC) {
+ subjects[i] = NULL;
+ continue;
+ }
+
+ if (is_fixup(item->command)) {
+ todo_list_release(&todo_list);
+ return error(_("the script was already rearranged."));
+ }
+
+ item->commit->util = item;
+
+ parse_commit(item->commit);
+ commit_buffer = get_commit_buffer(item->commit, NULL);
+ find_commit_subject(commit_buffer, &subject);
+ format_subject(&buf, subject, " ");
+ subject = subjects[i] = strbuf_detach(&buf, &subject_len);
+ unuse_commit_buffer(item->commit, commit_buffer);
+ if ((skip_prefix(subject, "fixup! ", &p) ||
+ skip_prefix(subject, "squash! ", &p))) {
+ struct commit *commit2;
+
+ for (;;) {
+ while (isspace(*p))
+ p++;
+ if (!skip_prefix(p, "fixup! ", &p) &&
+ !skip_prefix(p, "squash! ", &p))
+ break;
+ }
+
+ if ((entry = hashmap_get_from_hash(&subject2item,
+ strhash(p), p)))
+ /* found by title */
+ i2 = entry->i;
+ else if (!strchr(p, ' ') &&
+ (commit2 =
+ lookup_commit_reference_by_name(p)) &&
+ commit2->util)
+ /* found by commit name */
+ i2 = (struct todo_item *)commit2->util
+ - todo_list.items;
+ else {
+ /* copy can be a prefix of the commit subject */
+ for (i2 = 0; i2 < i; i2++)
+ if (subjects[i2] &&
+ starts_with(subjects[i2], p))
+ break;
+ if (i2 == i)
+ i2 = -1;
+ }
+ }
+ if (i2 >= 0) {
+ rearranged = 1;
+ todo_list.items[i].command =
+ starts_with(subject, "fixup!") ?
+ TODO_FIXUP : TODO_SQUASH;
+ if (next[i2] < 0)
+ next[i2] = i;
+ else
+ next[tail[i2]] = i;
+ tail[i2] = i;
+ } else if (!hashmap_get_from_hash(&subject2item,
+ strhash(subject), subject)) {
+ FLEX_ALLOC_MEM(entry, subject, subject, subject_len);
+ entry->i = i;
+ hashmap_entry_init(entry, strhash(entry->subject));
+ hashmap_put(&subject2item, entry);
+ }
+ }
+
+ if (rearranged) {
+ struct strbuf buf = STRBUF_INIT;
+
+ for (i = 0; i < todo_list.nr; i++) {
+ enum todo_command command = todo_list.items[i].command;
+ int cur = i;
+
+ /*
+ * Initially, all commands are 'pick's. If it is a
+ * fixup or a squash now, we have rearranged it.
+ */
+ if (is_fixup(command))
+ continue;
+
+ while (cur >= 0) {
+ int offset = todo_list.items[cur].offset_in_buf;
+ int end_offset = cur + 1 < todo_list.nr ?
+ todo_list.items[cur + 1].offset_in_buf :
+ todo_list.buf.len;
+ char *bol = todo_list.buf.buf + offset;
+ char *eol = todo_list.buf.buf + end_offset;
+
+			/* replace 'pick' with 'fixup' or 'squash' */
+ command = todo_list.items[cur].command;
+ if (is_fixup(command)) {
+ strbuf_addstr(&buf,
+ todo_command_info[command].str);
+ bol += strcspn(bol, " \t");
+ }
+
+ strbuf_add(&buf, bol, eol - bol);
+
+ cur = next[cur];
+ }
+ }
+
+ fd = open(todo_file, O_WRONLY);
+ if (fd < 0)
+ res = error_errno(_("could not open '%s'"), todo_file);
+ else if (write(fd, buf.buf, buf.len) < 0)
+			res = error_errno(_("could not write to '%s'"), todo_file);
+ else if (ftruncate(fd, buf.len) < 0)
+ res = error_errno(_("could not finish '%s'"),
+ todo_file);
+ close(fd);
+ strbuf_release(&buf);
+ }
+
+ free(next);
+ free(tail);
+ for (i = 0; i < todo_list.nr; i++)
+ free(subjects[i]);
+ free(subjects);
+ hashmap_free(&subject2item, 1);
+ todo_list_release(&todo_list);
+
+ return res;
+}
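
The subject2item hashmap above follows Git's generic hashmap pattern. A
minimal sketch of that pattern in isolation (not part of the patch; it
assumes compilation inside the Git source tree for hashmap.h and friends):

    #include "cache.h"
    #include "hashmap.h"

    struct demo_entry {
        struct hashmap_entry entry;
        int value;
        char key[FLEX_ARRAY];
    };

    static int demo_cmp(const void *unused_cmp_data,
                        const struct demo_entry *a,
                        const struct demo_entry *b, const void *keydata)
    {
        return strcmp(a->key, keydata ? keydata : b->key);
    }

    static void demo(void)
    {
        struct hashmap map;
        struct demo_entry *e;

        hashmap_init(&map, (hashmap_cmp_fn) demo_cmp, NULL, 0);

        /* Insert one entry keyed by its subject string. */
        FLEX_ALLOC_STR(e, key, "fix flaky test");
        e->value = 42;
        hashmap_entry_init(e, strhash(e->key));
        hashmap_put(&map, e);

        /* Look it up again by the same key. */
        e = hashmap_get_from_hash(&map, strhash("fix flaky test"),
                                  "fix flaky test");
        if (e)
            printf("found entry with value %d\n", e->value);

        hashmap_free(&map, 1); /* 1 == also free the entries */
    }
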
int sequencer_rollback(struct replay_opts *opts);
int sequencer_remove_state(struct replay_opts *opts);
+int sequencer_make_script(int keep_empty, FILE *out,
+ int argc, const char **argv);
+
+int transform_todo_ids(int shorten_ids);
+int check_todo_list(void);
+int skip_unnecessary_picks(void);
+int rearrange_squash(void);
+
extern const char sign_off_header[];
void append_signoff(struct strbuf *msgbuf, int ignore_footer, unsigned flag);
int diagnose_misspelt_rev)
{
if (*arg == '-')
- die("bad flag '%s' used after filename", arg);
+ die("option '%s' must come before non-option arguments", arg);
if (looks_like_pathspec(arg) || check_filename(prefix, arg))
return;
die_verify_filename(prefix, arg, diagnose_misspelt_rev);
/*
* Try to read the location of the git directory from the .git file,
- * return path to git directory if found.
+ * return path to git directory if found. The return value comes from
+ * a shared buffer.
*
* On failure, if return_error_code is not NULL, return_error_code
* will be set to an error code and NULL will be returned. If
ret = index_mem(sha1, "", size, type, path, flags);
} else if (size <= SMALL_FILE_SIZE) {
char *buf = xmalloc(size);
- if (size == read_in_full(fd, buf, size))
- ret = index_mem(sha1, buf, size, type, path, flags);
+ ssize_t read_result = read_in_full(fd, buf, size);
+ if (read_result < 0)
+ ret = error_errno("read error while indexing %s",
+ path ? path : "<unknown>");
+ else if (read_result != size)
+ ret = error("short read while indexing %s",
+ path ? path : "<unknown>");
else
- ret = error_errno("short read");
+ ret = index_mem(sha1, buf, size, type, path, flags);
free(buf);
} else {
void *buf = xmmap(NULL, size, PROT_READ, MAP_PRIVATE, fd, 0);
extern void strbuf_init(struct strbuf *, size_t);
/**
- * Release a string buffer and the memory it used. You should not use the
- * string buffer after using this function, unless you initialize it again.
+ * Release a string buffer and the memory it used. After this call, the
+ * strbuf points to an empty string that does not need to be free()ed, as
+ * if it had been set to `STRBUF_INIT` and never modified.
+ *
+ * To clear a strbuf in preparation for further use without the overhead
+ * of free()ing and malloc()ing again, use strbuf_reset() instead.
*/
extern void strbuf_release(struct strbuf *);
* Detach the string from the strbuf and return it; you now own the
* storage the string occupies and it is your responsibility from then on
* to release it with `free(3)` when you are done with it.
+ *
+ * The strbuf that previously held the string is reset to `STRBUF_INIT` so
+ * it can be reused after calling this function.
*/
extern char *strbuf_detach(struct strbuf *, size_t *);
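
As a quick illustration of the lifetime rules documented above, here is a minimal sketch (assuming the git source tree, where "cache.h" pulls in the strbuf API) of the three ways to finish with a strbuf: reset it for reuse, detach the string to take ownership, or release it entirely:

    #include "cache.h"   /* git source tree assumed */

    static void strbuf_lifetime_sketch(void)
    {
            struct strbuf sb = STRBUF_INIT;
            size_t len;
            char *owned;

            strbuf_addstr(&sb, "first line\n");
            strbuf_reset(&sb);                 /* cheap: keeps the allocation for reuse */

            strbuf_addstr(&sb, "second line\n");
            owned = strbuf_detach(&sb, &len);  /* sb is back to its STRBUF_INIT state */
            free(owned);                       /* the caller now owns this memory */

            strbuf_addstr(&sb, "third line\n");
            strbuf_release(&sb);               /* frees everything; sb can be reused */
    }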
#ifndef STRING_LIST_H
#define STRING_LIST_H
+/**
+ * The string_list API offers a data structure and functions to handle
+ * sorted and unsorted arrays of strings. A "sorted" list is one whose
+ * entries are sorted by string value in `strcmp()` order.
+ *
+ * The caller:
+ *
+ * . Allocates and clears a `struct string_list` variable.
+ *
+ * . Initializes the members. You might want to set the flag `strdup_strings`
+ * if the strings should be strdup()ed. For example, this is necessary
+ * when you add something like git_path("..."), since that function returns
+ * a static buffer that will change with the next call to git_path().
+ *
+ * If you need something advanced, you can manually malloc() the `items`
+ * member (you need this if you add things later) and you should set the
+ * `nr` and `alloc` members in that case, too.
+ *
+ * . Adds new items to the list, using `string_list_append`,
+ * `string_list_append_nodup`, `string_list_insert`,
+ * `string_list_split`, and/or `string_list_split_in_place`.
+ *
+ * . Can check if a string is in the list using `string_list_has_string` or
+ * `unsorted_string_list_has_string` and get it from the list using
+ * `string_list_lookup` for sorted lists.
+ *
+ * . Can sort an unsorted list using `string_list_sort`.
+ *
+ * . Can remove duplicate items from a sorted list using
+ * `string_list_remove_duplicates`.
+ *
+ * . Can remove individual items of an unsorted list using
+ * `unsorted_string_list_delete_item`.
+ *
+ * . Can remove items not matching a criterion from a sorted or unsorted
+ * list using `filter_string_list`, or remove empty strings using
+ * `string_list_remove_empty_items`.
+ *
+ * . Finally it should free the list using `string_list_clear`.
+ *
+ * Example:
+ *
+ * struct string_list list = STRING_LIST_INIT_NODUP;
+ * int i;
+ *
+ * string_list_append(&list, "foo");
+ * string_list_append(&list, "bar");
+ * for (i = 0; i < list.nr; i++)
+ * printf("%s\n", list.items[i].string);
+ *
+ * NOTE: It is more efficient to build an unsorted list and sort it
+ * afterwards, instead of building a sorted list (`O(n log n)` instead of
+ * `O(n^2)`).
+ *
+ * However, if you use the list to check whether a certain string has
+ * already been added (via unsorted_string_list_has_string()), building
+ * the list unsorted does not help: the overall complexity becomes
+ * quadratic again (with a worse constant factor).
+ */
+
+/**
+ * Represents an item of the list. The `string` member is a pointer to the
+ * string, and you may use the `util` member for any purpose, if you want.
+ */
struct string_list_item {
char *string;
void *util;
typedef int (*compare_strings_fn)(const char *, const char *);
+/**
+ * Represents the list itself.
+ *
+ * . The array of items is available via the `items` member.
+ * . The `nr` member contains the number of items stored in the list.
+ * . The `alloc` member is used to avoid reallocating at every insertion.
+ * You should not tamper with it.
+ * . Setting the `strdup_strings` member to 1 will strdup() the strings
+ * before adding them, see above.
+ * . The `compare_strings_fn` member is used to specify a custom compare
+ * function, otherwise `strcmp()` is used as the default function.
+ */
struct string_list {
struct string_list_item *items;
unsigned int nr, alloc;
#define STRING_LIST_INIT_NODUP { NULL, 0, 0, 0, NULL }
#define STRING_LIST_INIT_DUP { NULL, 0, 0, 1, NULL }
+/* General functions which work with both sorted and unsorted lists. */
+
+/**
+ * Initialize the members of the string_list; set the `strdup_strings`
+ * member according to the value of the second parameter.
+ */
void string_list_init(struct string_list *list, int strdup_strings);
+/** Callback function type for for_each_string_list */
+typedef int (*string_list_each_func_t)(struct string_list_item *, void *);
+
+/**
+ * Apply `want` to each item in `list`, retaining only the ones for which
+ * the function returns true. If `free_util` is true, call free() on
+ * the util members of any items that have to be deleted. Preserve
+ * the order of the items that are retained.
+ */
+void filter_string_list(struct string_list *list, int free_util,
+ string_list_each_func_t want, void *cb_data);
+
+/**
+ * Dump a string_list to stdout, useful mainly for debugging
+ * purposes. It can take an optional header argument and it writes out
+ * the string-pointer pairs of the string_list, each one on its own
+ * line.
+ */
void print_string_list(const struct string_list *p, const char *text);
+
+/**
+ * Free a string_list. The `string` pointer of the items will be freed
+ * in case the `strdup_strings` member of the string_list is set. The
+ * second parameter controls if the `util` pointer of the items should
+ * be freed or not.
+ */
void string_list_clear(struct string_list *list, int free_util);
-/* Use this function to call a custom clear function on each util pointer */
-/* The string associated with the util pointer is passed as the second argument */
+/**
+ * Callback type for `string_list_clear_func`. The string associated
+ * with the util pointer is passed as the second argument.
+ */
typedef void (*string_list_clear_func_t)(void *p, const char *str);
+
+/** Call a custom clear function on each util pointer */
void string_list_clear_func(struct string_list *list, string_list_clear_func_t clearfunc);
-/* Use this function or the macro below to iterate over each item */
-typedef int (*string_list_each_func_t)(struct string_list_item *, void *);
+/**
+ * Apply `func` to each item. If `func` returns nonzero, the
+ * iteration aborts and the return value is propagated.
+ */
int for_each_string_list(struct string_list *list,
- string_list_each_func_t, void *cb_data);
+ string_list_each_func_t func, void *cb_data);
+
+/** Iterate over each item, as a macro. */
#define for_each_string_list_item(item,list) \
for (item = (list)->items; \
item && item < (list)->items + (list)->nr; \
++item)
-/*
- * Apply want to each item in list, retaining only the ones for which
- * the function returns true. If free_util is true, call free() on
- * the util members of any items that have to be deleted. Preserve
- * the order of the items that are retained.
- */
-void filter_string_list(struct string_list *list, int free_util,
- string_list_each_func_t want, void *cb_data);
-
-/*
+/**
* Remove any empty strings from the list. If free_util is true, call
* free() on the util members of any items that have to be deleted.
* Preserve the order of the items that are retained.
void string_list_remove_empty_items(struct string_list *list, int free_util);
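
A small sketch of the callback-based helpers documented above, assuming the git source tree; the callback names are invented for illustration:

    #include "cache.h"        /* git source tree assumed */
    #include "string-list.h"

    /* filter callback: keep only non-empty strings */
    static int keep_nonempty(struct string_list_item *item, void *cb_data)
    {
            return *item->string != '\0';
    }

    /* iteration callback: returning non-zero would abort the loop */
    static int print_item(struct string_list_item *item, void *cb_data)
    {
            printf("%s\n", item->string);
            return 0;
    }

    /* clear callback: frees one util pointer; the string is context only */
    static void free_util(void *util, const char *string)
    {
            free(util);
    }

    static void callback_sketch(struct string_list *list)
    {
            struct string_list_item *item;

            filter_string_list(list, 1, keep_nonempty, NULL);
            for_each_string_list(list, print_item, NULL);
            for_each_string_list_item(item, list)
                    item->util = xstrdup(item->string);
            string_list_clear_func(list, free_util);
    }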
/* Use these functions only on sorted lists: */
+
+/** Determine if the string_list has a given string or not. */
int string_list_has_string(const struct string_list *list, const char *string);
int string_list_find_insert_index(const struct string_list *list, const char *string,
int negative_existing_index);
-/*
- * Inserts the given string into the sorted list.
- * If the string already exists, the list is not altered.
- * Returns the string_list_item, the string is part of.
+
+/**
+ * Insert a new element into the string_list. The returned pointer can
+ * be handy if you want to write something to the `util` pointer of
+ * the string_list_item containing the just-added string. If the given
+ * string already exists, the insertion is skipped and a pointer to
+ * the existing item is returned.
+ *
+ * Since this function uses xrealloc() (which die()s if it fails) when
+ * the list needs to grow, it is safe not to check the returned pointer,
+ * i.e. you may write `string_list_insert(...)->util = ...;`.
*/
struct string_list_item *string_list_insert(struct string_list *list, const char *string);
-/*
- * Removes the given string from the sorted list.
- * If the string doesn't exist, the list is not altered.
+/**
+ * Remove the given string from the sorted list. If the string
+ * doesn't exist, the list is not altered.
*/
extern void string_list_remove(struct string_list *list, const char *string,
int free_util);
-/*
- * Checks if the given string is part of a sorted list. If it is part of the list,
+/**
+ * Check if the given string is part of a sorted list. If it is part of the list,
* return the corresponding string_list_item, NULL otherwise.
*/
struct string_list_item *string_list_lookup(struct string_list *list, const char *string);
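
The sorted-list functions above are typically used together; a brief sketch, again assuming the git source tree:

    #include "cache.h"        /* git source tree assumed */
    #include "string-list.h"

    static void sorted_list_sketch(void)
    {
            struct string_list seen = STRING_LIST_INIT_DUP;
            struct string_list_item *item;

            /* insertion keeps the list sorted; duplicates are not added twice */
            string_list_insert(&seen, "refs/heads/master");
            string_list_insert(&seen, "refs/heads/next")->util = NULL;

            if (string_list_has_string(&seen, "refs/heads/master"))
                    printf("already seen\n");

            item = string_list_lookup(&seen, "refs/heads/next");
            if (item)
                    printf("found %s\n", item->string);

            string_list_clear(&seen, 0);
    }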
/* Use these functions only on unsorted lists: */
-/*
+/**
* Add string to the end of list. If list->strdup_strings is set, then
* string is copied; otherwise the new string_list_entry refers to the
* input string.
*/
struct string_list_item *string_list_append(struct string_list *list, const char *string);
-/*
+/**
* Like string_list_append(), except string is never copied. When
* list->strdup_strings is set, this function can be used to hand
* ownership of a malloc()ed string to list without making an extra
*/
struct string_list_item *string_list_append_nodup(struct string_list *list, char *string);
+/**
+ * Sort the list's entries by string value in `strcmp()` order.
+ */
void string_list_sort(struct string_list *list);
+
+/**
+ * Like `string_list_has_string()` but for unsorted lists. Linear in
+ * size of the list.
+ */
int unsorted_string_list_has_string(struct string_list *list, const char *string);
+
+/**
+ * Like `string_list_lookup()` but for unsorted lists. Linear in size
+ * of the list.
+ */
struct string_list_item *unsorted_string_list_lookup(struct string_list *list,
const char *string);
-
+/**
+ * Remove an item from a string_list. The `string` pointer of the
+ * item will be freed if the `strdup_strings` member of the
+ * string_list is set. The third parameter controls whether the `util`
+ * pointer of the item should be freed or not.
+ */
void unsorted_string_list_delete_item(struct string_list *list, int i, int free_util);
-/*
- * Split string into substrings on character delim and append the
- * substrings to list. The input string is not modified.
+/**
+ * Split string into substrings on character `delim` and append the
+ * substrings to `list`. The input string is not modified.
* list->strdup_strings must be set, as new memory needs to be
* allocated to hold the substrings. If maxsplit is non-negative,
* then split at most maxsplit times. Return the number of substrings
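
To make the split semantics concrete, a tiny sketch (git source tree assumed) of splitting a ref name on '/'; the return value is the number of substrings appended to the list:

    #include "cache.h"        /* git source tree assumed */
    #include "string-list.h"

    static void split_sketch(void)
    {
            /* strdup_strings must be set: the substrings are newly allocated */
            struct string_list parts = STRING_LIST_INIT_DUP;
            int n = string_list_split(&parts, "refs/heads/topic", '/', -1);

            /* n == 3; parts now holds "refs", "heads", "topic" */
            printf("%d components, last one is %s\n", n, parts.items[n - 1].string);
            string_list_clear(&parts, 0);
    }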
{
int err;
struct child_process *process;
- const char *argv[] = { cmd, NULL };
entry->cmd = cmd;
process = &entry->process;
child_process_init(process);
- process->argv = argv;
+ argv_array_push(&process->args, cmd);
process->use_shell = 1;
process->in = -1;
process->out = -1;
if (add_submodule_odb(path)) {
if (!message)
- message = "(not initialized)";
+ message = "(commits not present)";
goto output_header;
}
return ret;
}
+/*
+ * Put the gitdir for a submodule (given relative to the main
+ * repository worktree) into `buf`, or return -1 on error.
+ */
int submodule_to_gitdir(struct strbuf *buf, const char *submodule)
{
const struct submodule *sub;
const char *prefix = "prefix/";
const char *usage[] = {
"test-parse-options <options>",
+ "",
+ "A helper function for the parse-options API.",
NULL
};
struct string_list expect = STRING_LIST_INIT_NODUP;
* Split by newline, but don't create a string_list item
* for the empty string after the last separator.
*/
- if (sb.buf[sb.len - 1] == '\n')
+ if (sb.len && sb.buf[sb.len - 1] == '\n')
strbuf_setlen(&sb, sb.len - 1);
string_list_split_in_place(&list, sb.buf, '\n', -1);
cat >expect <<\EOF
usage: test-parse-options <options>
+ A helper function for the parse-options API.
+
--yes get a boolean
-D, --no-doubt begins with 'no-'
-B, --no-fear be brave
|g,fluf?path short and long option optional argument
|longest=very-long-argument-hint a very long argument hint
|pair=key=value with an equals sign in the hint
+|aswitch help te=t contains? fl*g characters!`
+|bswitch=hint hint has trailing tab character
+|cswitch switch has trailing tab character
|short-hint=a with a one symbol hint
|
|Extras
EOF
'
+test_expect_success 'setup optionspec-no-switches' '
+ sed -e "s/^|//" >optionspec_no_switches <<\EOF
+|some-command [options] <args>...
+|
+|some-command does foo and bar!
+|--
+EOF
+'
+
+test_expect_success 'setup optionspec-only-hidden-switches' '
+ sed -e "s/^|//" >optionspec_only_hidden_switches <<\EOF
+|some-command [options] <args>...
+|
+|some-command does foo and bar!
+|--
+|hidden1* A hidden switch
+EOF
+'
+
test_expect_success 'test --parseopt help output' '
sed -e "s/^|//" >expect <<\END_EXPECT &&
|cat <<\EOF
| --longest <very-long-argument-hint>
| a very long argument hint
| --pair <key=value> with an equals sign in the hint
+| --aswitch help te=t contains? fl*g characters!`
+| --bswitch <hint> hint has trailing tab character
+| --cswitch switch has trailing tab character
| --short-hint <a> with a one symbol hint
|
|Extras
test_i18ncmp expect output
'
+test_expect_success 'test --parseopt help output no switches' '
+ sed -e "s/^|//" >expect <<\END_EXPECT &&
+|cat <<\EOF
+|usage: some-command [options] <args>...
+|
+| some-command does foo and bar!
+|
+|EOF
+END_EXPECT
+ test_expect_code 129 git rev-parse --parseopt -- -h > output < optionspec_no_switches &&
+ test_i18ncmp expect output
+'
+
+test_expect_success 'test --parseopt help output hidden switches' '
+ sed -e "s/^|//" >expect <<\END_EXPECT &&
+|cat <<\EOF
+|usage: some-command [options] <args>...
+|
+| some-command does foo and bar!
+|
+|EOF
+END_EXPECT
+ test_expect_code 129 git rev-parse --parseopt -- -h > output < optionspec_only_hidden_switches &&
+ test_i18ncmp expect output
+'
+
+test_expect_success 'test --parseopt help-all output hidden switches' '
+ sed -e "s/^|//" >expect <<\END_EXPECT &&
+|cat <<\EOF
+|usage: some-command [options] <args>...
+|
+| some-command does foo and bar!
+|
+| --hidden1 A hidden switch
+|
+|EOF
+END_EXPECT
+ test_expect_code 129 git rev-parse --parseopt -- --help-all > output < optionspec_only_hidden_switches &&
+ test_i18ncmp expect output
+'
+
+test_expect_success 'test --parseopt invalid switch help output' '
+ sed -e "s/^|//" >expect <<\END_EXPECT &&
+|error: unknown option `does-not-exist'\''
+|usage: some-command [options] <args>...
+|
+| some-command does foo and bar!
+|
+| -h, --help show the help
+| --foo some nifty option --foo
+| --bar ... some cool option --bar with an argument
+| -b, --baz a short and long option
+|
+|An option group Header
+| -C[...] option C with an optional argument
+| -d, --data[=...] short and long option with an optional argument
+|
+|Argument hints
+| -B <arg> short option required argument
+| --bar2 <arg> long option required argument
+| -e, --fuz <with-space>
+| short and long option required argument
+| -s[<some>] short option optional argument
+| --long[=<data>] long option optional argument
+| -g, --fluf[=<path>] short and long option optional argument
+| --longest <very-long-argument-hint>
+| a very long argument hint
+| --pair <key=value> with an equals sign in the hint
+| --aswitch help te=t contains? fl*g characters!`
+| --bswitch <hint> hint has trailing tab character
+| --cswitch switch has trailing tab character
+| --short-hint <a> with a one symbol hint
+|
+|Extras
+| --extra1 line above used to cause a segfault but no longer does
+|
+END_EXPECT
+ test_expect_code 129 git rev-parse --parseopt -- --does-not-exist 1>/dev/null 2>output < optionspec &&
+ test_i18ncmp expect output
+'
+
test_expect_success 'setup expect.1' "
cat > expect <<EOF
-set -- --foo --bar 'ham' -b -- 'arg'
+set -- --foo --bar 'ham' -b --aswitch -- 'arg'
EOF
"
test_expect_success 'test --parseopt' '
- git rev-parse --parseopt -- --foo --bar=ham --baz arg < optionspec > output &&
+ git rev-parse --parseopt -- --foo --bar=ham --baz --aswitch arg < optionspec > output &&
test_cmp expect output
'
test_expect_success 'test --parseopt with mixed options and arguments' '
- git rev-parse --parseopt -- --foo arg --bar=ham --baz < optionspec > output &&
+ git rev-parse --parseopt -- --foo arg --bar=ham --baz --aswitch < optionspec > output &&
test_cmp expect output
'
test_must_fail git config branch.s/s.dummy
'
+test_expect_success 'git branch -m correctly renames multiple config sections' '
+ test_when_finished "git checkout master" &&
+ git checkout -b source master &&
+
+ # Assert that a config file with multiple config sections has
+ # those sections preserved...
+ cat >expect <<-\EOF &&
+ branch.dest.key1=value1
+ some.gar.b=age
+ branch.dest.key2=value2
+ EOF
+ cat >config.branch <<\EOF &&
+;; Note the lack of -\EOF above & mixed indenting here. This is
+;; intentional, we are also testing that the formatting of copied
+;; sections is preserved.
+
+;; Comment for source. Tabs
+[branch "source"]
+ ;; Comment for the source value
+ key1 = value1
+;; Comment for some.gar. Spaces
+[some "gar"]
+ ;; Comment for the some.gar value
+ b = age
+;; Comment for source, again. Mixed tabs/spaces.
+[branch "source"]
+ ;; Comment for the source value, again
+ key2 = value2
+EOF
+ cat config.branch >>.git/config &&
+ git branch -m source dest &&
+ git config -f .git/config -l | grep -F -e source -e dest -e some.gar >actual &&
+ test_cmp expect actual &&
+
+ # ...and that the comments for those sections are also
+ # preserved.
+ cat config.branch | sed "s/\"source\"/\"dest\"/" >expect &&
+ sed -n -e "/Note the lack/,\$p" .git/config >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'git branch -c dumps usage' '
+ test_expect_code 128 git branch -c 2>err &&
+ test_i18ngrep "branch name required" err
+'
+
+test_expect_success 'git branch --copy dumps usage' '
+ test_expect_code 128 git branch --copy 2>err &&
+ test_i18ngrep "branch name required" err
+'
+
+test_expect_success 'git branch -c d e should work' '
+ git branch -l d &&
+ git reflog exists refs/heads/d &&
+ git config branch.d.dummy Hello &&
+ git branch -c d e &&
+ git reflog exists refs/heads/d &&
+ git reflog exists refs/heads/e &&
+ echo Hello >expect &&
+ git config branch.e.dummy >actual &&
+ test_cmp expect actual &&
+ echo Hello >expect &&
+ git config branch.d.dummy >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'git branch --copy is a synonym for -c' '
+ git branch -l copy &&
+ git reflog exists refs/heads/copy &&
+ git config branch.copy.dummy Hello &&
+ git branch --copy copy copy-to &&
+ git reflog exists refs/heads/copy &&
+ git reflog exists refs/heads/copy-to &&
+ echo Hello >expect &&
+ git config branch.copy.dummy >actual &&
+ test_cmp expect actual &&
+ echo Hello >expect &&
+ git config branch.copy-to.dummy >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'git branch -c ee ef should copy ee to create branch ef' '
+ git checkout -b ee &&
+ git reflog exists refs/heads/ee &&
+ git config branch.ee.dummy Hello &&
+ git branch -c ee ef &&
+ git reflog exists refs/heads/ee &&
+ git reflog exists refs/heads/ef &&
+ test $(git config branch.ee.dummy) = Hello &&
+ test $(git config branch.ef.dummy) = Hello &&
+ test $(git rev-parse --abbrev-ref HEAD) = ee
+'
+
+test_expect_success 'git branch -c f/f g/g should work' '
+ git branch -l f/f &&
+ git reflog exists refs/heads/f/f &&
+ git config branch.f/f.dummy Hello &&
+ git branch -c f/f g/g &&
+ git reflog exists refs/heads/f/f &&
+ git reflog exists refs/heads/g/g &&
+ test $(git config branch.f/f.dummy) = Hello &&
+ test $(git config branch.g/g.dummy) = Hello
+'
+
+test_expect_success 'git branch -c m2 m2 should work' '
+ git branch -l m2 &&
+ git reflog exists refs/heads/m2 &&
+ git config branch.m2.dummy Hello &&
+ git branch -c m2 m2 &&
+ git reflog exists refs/heads/m2 &&
+ test $(git config branch.m2.dummy) = Hello
+'
+
+test_expect_success 'git branch -c zz zz/zz should fail' '
+ git branch -l zz &&
+ git reflog exists refs/heads/zz &&
+ test_must_fail git branch -c zz zz/zz
+'
+
+test_expect_success 'git branch -c b/b b should fail' '
+ git branch -l b/b &&
+ test_must_fail git branch -c b/b b
+'
+
+test_expect_success 'git branch -C o/q o/p should work when o/p exists' '
+ git branch -l o/q &&
+ git reflog exists refs/heads/o/q &&
+ git reflog exists refs/heads/o/p &&
+ git branch -C o/q o/p
+'
+
+test_expect_success 'git branch -c -f o/q o/p should work when o/p exists' '
+ git reflog exists refs/heads/o/q &&
+ git reflog exists refs/heads/o/p &&
+ git branch -c -f o/q o/p
+'
+
+test_expect_success 'git branch -c qq rr/qq should fail when rr exists' '
+ git branch qq &&
+ git branch rr &&
+ test_must_fail git branch -c qq rr/qq
+'
+
+test_expect_success 'git branch -C b1 b2 should fail when b2 is checked out' '
+ git branch b1 &&
+ git checkout -b b2 &&
+ test_must_fail git branch -C b1 b2
+'
+
+test_expect_success 'git branch -C c1 c2 should succeed when c1 is checked out' '
+ git checkout -b c1 &&
+ git branch c2 &&
+ git branch -C c1 c2 &&
+ test $(git rev-parse --abbrev-ref HEAD) = c1
+'
+
+test_expect_success 'git branch -C c1 c2 should never touch HEAD' '
+ msg="Branch: copied refs/heads/c1 to refs/heads/c2" &&
+ ! grep "$msg$" .git/logs/HEAD
+'
+
+test_expect_success 'git branch -C master should work when master is checked out' '
+ git checkout master &&
+ git branch -C master
+'
+
+test_expect_success 'git branch -C master master should work when master is checked out' '
+ git checkout master &&
+ git branch -C master master
+'
+
+test_expect_success 'git branch -C master5 master5 should work when master is checked out' '
+ git checkout master &&
+ git branch master5 &&
+ git branch -C master5 master5
+'
+
+test_expect_success 'git branch -C ab cd should overwrite existing config for cd' '
+ git branch -l cd &&
+ git reflog exists refs/heads/cd &&
+ git config branch.cd.dummy CD &&
+ git branch -l ab &&
+ git reflog exists refs/heads/ab &&
+ git config branch.ab.dummy AB &&
+ git branch -C ab cd &&
+ git reflog exists refs/heads/ab &&
+ git reflog exists refs/heads/cd &&
+ test $(git config branch.ab.dummy) = AB &&
+ test $(git config branch.cd.dummy) = AB
+'
+
+test_expect_success 'git branch -c correctly copies multiple config sections' '
+ FOO=1 &&
+ export FOO &&
+ test_when_finished "git checkout master" &&
+ git checkout -b source2 master &&
+
+ # Assert that a config file with multiple config sections has
+ # those sections preserved...
+ cat >expect <<-\EOF &&
+ branch.source2.key1=value1
+ branch.dest2.key1=value1
+ more.gar.b=age
+ branch.source2.key2=value2
+ branch.dest2.key2=value2
+ EOF
+ cat >config.branch <<\EOF &&
+;; Note the lack of -\EOF above & mixed indenting here. This is
+;; intentional, we are also testing that the formatting of copied
+;; sections is preserved.
+
+;; Comment for source2. Tabs
+[branch "source2"]
+ ;; Comment for the source2 value
+ key1 = value1
+;; Comment for more.gar. Spaces
+[more "gar"]
+ ;; Comment for the more.gar value
+ b = age
+;; Comment for source2, again. Mixed tabs/spaces.
+[branch "source2"]
+ ;; Comment for the source2 value, again
+ key2 = value2
+EOF
+ cat config.branch >>.git/config &&
+ git branch -c source2 dest2 &&
+ git config -f .git/config -l | grep -F -e source2 -e dest2 -e more.gar >actual &&
+ test_cmp expect actual &&
+
+ # ...and that the comments and formatting for those sections
+ # are also preserved.
+ cat >expect <<\EOF &&
+;; Comment for source2. Tabs
+[branch "source2"]
+ ;; Comment for the source2 value
+ key1 = value1
+;; Comment for more.gar. Spaces
+[branch "dest2"]
+ ;; Comment for the source2 value
+ key1 = value1
+;; Comment for more.gar. Spaces
+[more "gar"]
+ ;; Comment for the more.gar value
+ b = age
+;; Comment for source2, again. Mixed tabs/spaces.
+[branch "source2"]
+ ;; Comment for the source2 value, again
+ key2 = value2
+[branch "dest2"]
+ ;; Comment for the source2 value, again
+ key2 = value2
+EOF
+ sed -n -e "/Comment for source2/,\$p" .git/config >actual &&
+ test_cmp expect actual
+'
+
test_expect_success 'deleting a symref' '
git branch target &&
git symbolic-ref refs/heads/symref refs/heads/target &&
'
test_expect_success TTY '%(color) present with tty' '
- test_terminal env TERM=vt100 git branch $color_args >actual.raw &&
- test_decode_color <actual.raw >actual &&
- test_cmp expect.color actual
-'
-
-test_expect_success 'color.branch=always overrides auto-color' '
- git -c color.branch=always branch $color_args >actual.raw &&
+ test_terminal git branch $color_args >actual.raw &&
test_decode_color <actual.raw >actual &&
test_cmp expect.color actual
'
# choose non-default colors to make sure config
# is taking effect
test_expect_success 'set up some color config' '
- git config color.branch always &&
git config color.branch.local blue &&
git config color.branch.remote yellow &&
git config color.branch.current cyan
<BLUE>other<RESET>
<YELLOW>remotes/origin/master<RESET>
EOF
- git branch -a >actual.raw &&
+ git branch --color -a >actual.raw &&
test_decode_color <actual.raw >actual &&
test_cmp expect actual
'
<BLUE>other <RESET> $oid foo
<YELLOW>remotes/origin/master<RESET> $oid foo
EOF
- git branch -v -a >actual.raw &&
+ git branch --color -v -a >actual.raw &&
test_decode_color <actual.raw >actual &&
test_cmp expect actual
'
test B = $(git cat-file commit HEAD^ | sed -ne \$p)
'
-cat >expect <<EOF
-Warning: the command isn't recognized in the following line:
- - badcmd $(git rev-list --oneline -1 master~1)
-
-You can fix this with 'git rebase --edit-todo' and then run 'git rebase --continue'.
-Or you can abort the rebase with 'git rebase --abort'.
-EOF
-
test_expect_success 'static check of bad command' '
rebase_setup_and_clean bad-cmd &&
set_fake_editor &&
test_must_fail env FAKE_LINES="1 2 3 bad 4 5" \
git rebase -i --root 2>actual &&
- test_i18ncmp expect actual &&
+ test_i18ngrep "badcmd $(git rev-list --oneline -1 master~1)" actual &&
+ test_i18ngrep "You can fix this with .git rebase --edit-todo.." actual &&
FAKE_LINES="1 2 3 drop 4 5" git rebase --edit-todo &&
git rebase --continue &&
test E = $(git cat-file commit HEAD | sed -ne \$p) &&
test E = $(git cat-file commit HEAD | sed -ne \$p)
'
-cat >expect <<EOF
-Warning: the SHA-1 is missing or isn't a commit in the following line:
- - edit XXXXXXX False commit
-
-You can fix this with 'git rebase --edit-todo' and then run 'git rebase --continue'.
-Or you can abort the rebase with 'git rebase --abort'.
-EOF
-
test_expect_success 'static check of bad SHA-1' '
rebase_setup_and_clean bad-sha &&
set_fake_editor &&
test_must_fail env FAKE_LINES="1 2 edit fakesha 3 4 5 #" \
git rebase -i --root 2>actual &&
- test_i18ncmp expect actual &&
+ test_i18ngrep "edit XXXXXXX False commit" actual &&
+ test_i18ngrep "You can fix this with .git rebase --edit-todo.." actual &&
FAKE_LINES="1 2 4 5 6" git rebase --edit-todo &&
git rebase --continue &&
test E = $(git cat-file commit HEAD | sed -ne \$p)
test 2 = $(git cat-file commit HEAD^ | grep squash | wc -l)
'
+test_expect_success 'autosquash with empty custom instructionFormat' '
+ git reset --hard base &&
+ test_commit empty-instructionFormat-test &&
+ (
+ set_cat_todo_editor &&
+ test_must_fail git -c rebase.instructionFormat= \
+ rebase --autosquash --force -i HEAD^ >actual &&
+ git log -1 --format="pick %h %s" >expect &&
+ test_cmp expect actual
+ )
+'
+
set_backup_editor () {
write_script backup-editor.sh <<-\EOF
cp "$1" .git/backup-"$(basename "$1")"
test_set_editor "$PWD/backup-editor.sh"
}
-test_expect_failure 'autosquash with multiple empty patches' '
+test_expect_success 'autosquash with multiple empty patches' '
test_tick &&
git commit --allow-empty -m "empty" &&
test_tick &&
test $base = $parent
'
+test_expect_success 'wrapped original subject' '
+ if test -d .git/rebase-merge; then git rebase --abort; fi &&
+ base=$(git rev-parse HEAD) &&
+ echo "wrapped subject" >wrapped &&
+ git add wrapped &&
+ test_tick &&
+ git commit --allow-empty -m "$(printf "To\nfixup")" &&
+ test_tick &&
+ git commit --allow-empty -m "fixup! To fixup" &&
+ git rebase -i --autosquash --keep-empty HEAD~2 &&
+ parent=$(git rev-parse HEAD^) &&
+ test $base = $parent
+'
+
test_done
test_description='add -i basic tests'
. ./test-lib.sh
+. "$TEST_DIRECTORY"/lib-terminal.sh
if ! test_have_prereq PERL
then
test_cmp expected diff
'
-test_expect_success 'diffs can be colorized' '
+test_expect_success TTY 'diffs can be colorized' '
git reset --hard &&
- # force color even though the test script has no terminal
- test_config color.ui always &&
-
echo content >test &&
- printf y | git add -p >output 2>&1 &&
+ printf y | test_terminal git add -p >output 2>&1 &&
# We do not want to depend on the exact coloring scheme
# git uses for diffs, so just check that we saw some kind of color.
git diff --exit-code
'
+test_expect_success 'add -p works even with color.ui=always' '
+ git reset --hard &&
+ echo change >>file &&
+ test_config color.ui always &&
+ echo y | git add -p &&
+ echo file >expect &&
+ git diff --cached --name-only >actual &&
+ test_cmp expect actual
+'
+
test_done
git commit -m "Rearranged lines in dir/sub" &&
git checkout master &&
+ GIT_AUTHOR_DATE="2006-06-26 00:06:00 +0000" &&
+ GIT_COMMITTER_DATE="2006-06-26 00:06:00 +0000" &&
+ export GIT_AUTHOR_DATE GIT_COMMITTER_DATE &&
+ git checkout -b mode initial &&
+ git update-index --chmod=+x file0 &&
+ git commit -m "update mode" &&
+ git checkout -f master &&
+
git config diff.renames false &&
git show-branch
diff-tree --pretty -p side
diff-tree --pretty --patch-with-stat side
+diff-tree initial mode
+diff-tree --stat initial mode
+diff-tree --summary initial mode
+
diff-tree master
diff-tree -p master
diff-tree -p -m master
--- /dev/null
+$ git diff-tree --stat initial mode
+ file0 | 0
+ 1 file changed, 0 insertions(+), 0 deletions(-)
+$
--- /dev/null
+$ git diff-tree --summary initial mode
+ mode change 100644 => 100755 file0
+$
--- /dev/null
+$ git diff-tree initial mode
+:100644 100755 01e79c32a8c99c557f0757da7cb6d65b3414466d 01e79c32a8c99c557f0757da7cb6d65b3414466d M file0
+$
$ git log --decorate=full --all
+commit b7e0bc69303b488b47deca799a7d723971dfa6cd (refs/heads/mode)
+Author: A U Thor <author@example.com>
+Date: Mon Jun 26 00:06:00 2006 +0000
+
+ update mode
+
commit cd4e72fd96faed3f0ba949dc42967430374e2290 (refs/heads/rearrange)
Author: A U Thor <author@example.com>
Date: Mon Jun 26 00:06:00 2006 +0000
$ git log --decorate --all
+commit b7e0bc69303b488b47deca799a7d723971dfa6cd (mode)
+Author: A U Thor <author@example.com>
+Date: Mon Jun 26 00:06:00 2006 +0000
+
+ update mode
+
commit cd4e72fd96faed3f0ba949dc42967430374e2290 (rearrange)
Author: A U Thor <author@example.com>
Date: Mon Jun 26 00:06:00 2006 +0000
# Start testing the colored format for whitespace checks
test_expect_success 'setup diff colors' '
- git config color.diff always &&
git config color.diff.plain normal &&
git config color.diff.meta bold &&
git config color.diff.frag cyan &&
echo "test" >x &&
git commit -m "initial" x &&
echo "{NTN}" | tr "NT" "\n\t" >>x &&
- git -c color.diff=always diff | test_decode_color >current &&
+ git diff --color | test_decode_color >current &&
cat >expected <<-\EOF &&
<BOLD>diff --git a/x b/x<RESET>
echo "2. and a new line "
} >x &&
- git -c color.diff=always diff |
+ git diff --color |
test_decode_color >current &&
cat >expected <<-\EOF &&
test_expect_success 'test --ws-error-highlight option' '
- git -c color.diff=always diff --ws-error-highlight=default,old |
+ git diff --color --ws-error-highlight=default,old |
test_decode_color >current &&
test_cmp expect.default-old current &&
- git -c color.diff=always diff --ws-error-highlight=all |
+ git diff --color --ws-error-highlight=all |
test_decode_color >current &&
test_cmp expect.all current &&
- git -c color.diff=always diff --ws-error-highlight=none |
+ git diff --color --ws-error-highlight=none |
test_decode_color >current &&
test_cmp expect.none current
test_expect_success 'test diff.wsErrorHighlight config' '
- git -c color.diff=always -c diff.wsErrorHighlight=default,old diff |
+ git -c diff.wsErrorHighlight=default,old diff --color |
test_decode_color >current &&
test_cmp expect.default-old current &&
- git -c color.diff=always -c diff.wsErrorHighlight=all diff |
+ git -c diff.wsErrorHighlight=all diff --color |
test_decode_color >current &&
test_cmp expect.all current &&
- git -c color.diff=always -c diff.wsErrorHighlight=none diff |
+ git -c diff.wsErrorHighlight=none diff --color |
test_decode_color >current &&
test_cmp expect.none current
test_expect_success 'option overrides diff.wsErrorHighlight' '
- git -c color.diff=always -c diff.wsErrorHighlight=none \
- diff --ws-error-highlight=default,old |
+ git -c diff.wsErrorHighlight=none \
+ diff --color --ws-error-highlight=default,old |
test_decode_color >current &&
test_cmp expect.default-old current &&
- git -c color.diff=always -c diff.wsErrorHighlight=default \
- diff --ws-error-highlight=all |
+ git -c diff.wsErrorHighlight=default \
+ diff --color --ws-error-highlight=all |
test_decode_color >current &&
test_cmp expect.all current &&
- git -c color.diff=always -c diff.wsErrorHighlight=all \
- diff --ws-error-highlight=none |
+ git -c diff.wsErrorHighlight=all \
+ diff --color --ws-error-highlight=none |
test_decode_color >current &&
test_cmp expect.none current
git mv test.c main.c &&
test_config color.diff.oldMoved "normal red" &&
test_config color.diff.newMoved "normal green" &&
- git diff HEAD --color-moved=zebra --no-renames | test_decode_color >actual &&
+ git diff HEAD --color-moved=zebra --color --no-renames | test_decode_color >actual &&
cat >expected <<-\EOF &&
<BOLD>diff --git a/main.c b/main.c<RESET>
<BOLD>new file mode 100644<RESET>
bar();
}
EOF
- git diff HEAD --no-renames --color-moved=zebra| test_decode_color >actual &&
+ git diff HEAD --no-renames --color-moved=zebra --color | test_decode_color >actual &&
cat <<-\EOF >expected &&
<BOLD>diff --git a/main.c b/main.c<RESET>
<BOLD>index 27a619c..7cf9336 100644<RESET>
test_config color.diff.oldMovedAlternative "blue" &&
test_config color.diff.newMovedAlternative "yellow" &&
# needs previous test as setup
- git diff HEAD --no-renames --color-moved=plain| test_decode_color >actual &&
+ git diff HEAD --no-renames --color-moved=plain --color | test_decode_color >actual &&
cat <<-\EOF >expected &&
<BOLD>diff --git a/main.c b/main.c<RESET>
<BOLD>index 27a619c..7cf9336 100644<RESET>
test_config color.diff.newMovedDimmed "normal cyan" &&
test_config color.diff.oldMovedAlternativeDimmed "normal blue" &&
test_config color.diff.newMovedAlternativeDimmed "normal yellow" &&
- git diff HEAD --no-renames --color-moved=dimmed_zebra |
+ git diff HEAD --no-renames --color-moved=dimmed_zebra --color |
grep -v "index" |
test_decode_color >actual &&
cat <<-\EOF >expected &&
test_config color.diff.oldMovedAlternativeDimmed "normal blue" &&
test_config color.diff.newMovedAlternativeDimmed "normal yellow" &&
test_config diff.colorMoved zebra &&
- git diff HEAD --no-renames --color-moved |
+ git diff HEAD --no-renames --color-moved --color |
grep -v "index" |
test_decode_color >actual &&
cat <<-\EOF >expected &&
EOF
test_config color.diff.oldMoved "magenta" &&
test_config color.diff.newMoved "cyan" &&
- git diff HEAD --no-renames --color-moved |
+ git diff HEAD --no-renames --color-moved --color |
grep -v "index" |
test_decode_color >actual &&
cat <<-\EOF >expected &&
EOF
test_cmp expected actual &&
- git diff HEAD --no-renames -w --color-moved |
+ git diff HEAD --no-renames -w --color-moved --color |
grep -v "index" |
test_decode_color >actual &&
cat <<-\EOF >expected &&
irrelevant_line
EOF
- git diff HEAD --color-moved=zebra --no-renames |
+ git diff HEAD --color-moved=zebra --color --no-renames |
grep -v "index" |
test_decode_color >actual &&
cat >expected <<-\EOF &&
nineteen chars 456789
EOF
- git diff HEAD --color-moved=zebra --no-renames |
+ git diff HEAD --color-moved=zebra --color --no-renames |
grep -v "index" |
test_decode_color >actual &&
cat >expected <<-\EOF &&
7charsA
EOF
- git diff HEAD --color-moved=zebra --no-renames | grep -v "index" | test_decode_color >actual &&
+ git diff HEAD --color-moved=zebra --color --no-renames | grep -v "index" | test_decode_color >actual &&
cat >expected <<-\EOF &&
<BOLD>diff --git a/bar b/bar<RESET>
<BOLD>--- a/bar<RESET>
echo foul >bananas/recipe &&
echo ripe >fruit.t &&
- git diff --submodule=diff --color-moved >actual &&
+ git diff --submodule=diff --color-moved --color >actual &&
# no move detection as the moved line is across repository boundaries.
test_decode_color <actual >decoded_actual &&
! grep BRED decoded_actual &&
# nor did we mess with it another way
- git diff --submodule=diff | test_decode_color >expect &&
+ git diff --submodule=diff --color | test_decode_color >expect &&
test_cmp expect decoded_actual
'
git clone . sm3 &&
git -C sm3 diff-tree -p --no-commit-id --submodule=log HEAD >actual &&
cat >expected <<-EOF &&
- Submodule sm1 $smhead1...$smhead2 (not initialized)
+ Submodule sm1 $smhead1...$smhead2 (commits not present)
EOF
test_cmp expected actual
'
'
test_expect_success TTY 'log output on a TTY' '
- git log --oneline --decorate >expect.short &&
+ git log --color --oneline --decorate >expect.short &&
test_terminal git log --oneline >actual &&
test_cmp expect.short actual
'
test_expect_success ':only and :unfold work together' '
- git log --no-walk --pretty="%(trailers:only:unfold)" >actual &&
- git log --no-walk --pretty="%(trailers:unfold:only)" >reverse &&
+ git log --no-walk --pretty="%(trailers:only,unfold)" >actual &&
+ git log --no-walk --pretty="%(trailers:unfold,only)" >reverse &&
test_cmp actual reverse &&
{
grep -v patch.description <trailers | unfold &&
cat <<-\EOT >read-request.sed &&
#!/bin/sed -nf
# Note that a request could ask for "tag $tagname"
- / in the git repository at:$/!d
+ / in the Git repository at:$/!d
n
/^$/ n
s/ tag \([^ ]*\)$/ tag--\1/
SUBJECT (DATE)
- are available in the git repository at:
+ are available in the Git repository at:
URL BRANCH
has_no_color actual
'
- test_expect_success "$desc enables colors for color.diff" '
- git -c color.diff=always log --format=$color -1 >actual &&
- has_color actual
- '
-
- test_expect_success "$desc enables colors for color.ui" '
- git -c color.ui=always log --format=$color -1 >actual &&
- has_color actual
- '
-
test_expect_success "$desc respects --color" '
git log --format=$color -1 --color >actual &&
has_color actual
'
- test_expect_success "$desc respects --no-color" '
- git -c color.ui=always log --format=$color -1 --no-color >actual &&
- has_no_color actual
- '
-
test_expect_success TTY "$desc respects --color=auto (stdout is tty)" '
- test_terminal env TERM=vt100 \
- git log --format=$color -1 --color=auto >actual &&
+ test_terminal git log --format=$color -1 --color=auto >actual &&
has_color actual
'
has_no_color actual
)
'
+
+ test_expect_success TTY "$desc respects --no-color" '
+ test_terminal git log --format=$color -1 --no-color >actual &&
+ has_no_color actual
+ '
done
test_expect_success '%C(always,...) enables color even without tty' '
test_cmp expect actual
'
-test_expect_success 'exclude only no longer errors out' '
+test_expect_success 'exclude only pathspec uses default implicit pathspec' '
git log --oneline --format=%s -- . ":(exclude)sub" >expect &&
git log --oneline --format=%s -- ":(exclude)sub" >actual &&
test_cmp expect actual
test_cmp expect actual
'
+test_expect_success 'multiple exclusions' '
+ git ls-files -- ":^*/file2" ":^sub2" >actual &&
+ cat <<-\EOF >expect &&
+ file
+ sub/file
+ sub/sub/file
+ sub/sub/sub/file
+ EOF
+ test_cmp expect actual
+'
+
test_done
}
test_atom head refname refs/heads/master
+test_atom head refname: refs/heads/master
test_atom head refname:short master
test_atom head refname:lstrip=1 heads/master
test_atom head refname:lstrip=2 master
'
test_expect_success TTY '%(color) shows color with a tty' '
- test_terminal env TERM=vt100 \
- git for-each-ref --format="$color_format" >actual.raw &&
+ test_terminal git for-each-ref --format="$color_format" >actual.raw &&
test_decode_color <actual.raw >actual &&
test_cmp expected.color actual
'
test_cmp expected.bare actual
'
-test_expect_success 'color.ui=always can override tty check' '
- git -c color.ui=always for-each-ref --format="$color_format" >actual.raw &&
+test_expect_success '--color can override tty check' '
+ git for-each-ref --color --format="$color_format" >actual.raw &&
test_decode_color <actual.raw >actual &&
test_cmp expected.color actual
'
'
test_expect_success TTY '%(color) present with tty' '
- test_terminal env TERM=vt100 git tag $color_args >actual.raw &&
+ test_terminal git tag $color_args >actual.raw &&
test_decode_color <actual.raw >actual &&
test_cmp expect.color actual
'
-test_expect_success 'color.ui=always overrides auto-color' '
- git -c color.ui=always tag $color_args >actual.raw &&
+test_expect_success '--color overrides auto-color' '
+ git tag --color $color_args >actual.raw &&
test_decode_color <actual.raw >actual &&
test_cmp expect.color actual
'
test_expect_success TTY 'color when writing to a pager' '
rm -f paginated.out &&
test_config color.ui auto &&
- test_terminal env TERM=vt100 git log &&
+ test_terminal git log &&
colorful paginated.out
'
rm -f paginated.out &&
test_config color.ui auto &&
test_config color.pager false &&
- test_terminal env TERM=vt100 git log &&
+ test_terminal git log &&
! colorful paginated.out
'
test_expect_success TTY 'colors are sent to pager for external commands' '
test_config alias.externallog "!git log" &&
test_config color.ui auto &&
- test_terminal env TERM=vt100 git -p externallog &&
+ test_terminal git -p externallog &&
colorful paginated.out
'
test_description='git clean -i basic tests'
. ./test-lib.sh
+. "$TEST_DIRECTORY"/lib-terminal.sh
test_expect_success 'setup' '
'
-test_expect_success 'git clean -i paints the header in HEADER color' '
+test_expect_success TTY 'git clean -i paints the header in HEADER color' '
>a.out &&
echo q |
- git -c color.ui=always clean -i |
+ test_terminal git clean -i |
test_decode_color |
head -n 1 >header &&
# not i18ngrep
)
'
+test_expect_success 'submodule update - command in .gitmodules is ignored' '
+ test_when_finished "git -C super reset --hard HEAD^" &&
+ git -C super config -f .gitmodules submodule.submodule.update "!false" &&
+ git -C super commit -a -m "add command to .gitmodules file" &&
+ git -C super/submodule reset --hard $submodulesha1^ &&
+ git -C super submodule update submodule
+'
+
cat << EOF >expect
Execution of 'false $submodulesha1' failed in submodule path 'submodule'
EOF
test_expect_success 'verbose respects diff config' '
- test_config color.diff always &&
+ test_config diff.noprefix true &&
git status -v >actual &&
- grep "\[1mdiff --git" actual
+ grep "diff --git negative negative" actual
'
mesg_with_comment_and_newlines='
test_description='git status'
. ./test-lib.sh
+. "$TEST_DIRECTORY"/lib-terminal.sh
test_expect_success 'status -h in broken repository' '
git config --global advice.statusuoption false &&
'
-test_expect_success 'status with color.ui' '
+test_expect_success TTY 'status with color.ui' '
cat >expect <<\EOF &&
On branch <GREEN>master<RESET>
Your branch and '\''upstream'\'' have diverged,
<BLUE>untracked<RESET>
EOF
- test_config color.ui always &&
- git status | test_decode_color >output &&
+ test_config color.ui auto &&
+ test_terminal git status | test_decode_color >output &&
test_i18ncmp expect output
'
-test_expect_success 'status with color.status' '
- test_config color.status always &&
- git status | test_decode_color >output &&
+test_expect_success TTY 'status with color.status' '
+ test_config color.status auto &&
+ test_terminal git status | test_decode_color >output &&
test_i18ncmp expect output
'
<BLUE>??<RESET> untracked
EOF
-test_expect_success 'status -s with color.ui' '
+test_expect_success TTY 'status -s with color.ui' '
- git config color.ui always &&
- git status -s | test_decode_color >output &&
+ git config color.ui auto &&
+ test_terminal git status -s | test_decode_color >output &&
test_cmp expect output
'
-test_expect_success 'status -s with color.status' '
+test_expect_success TTY 'status -s with color.status' '
git config --unset color.ui &&
- git config color.status always &&
- git status -s | test_decode_color >output &&
+ git config color.status auto &&
+ test_terminal git status -s | test_decode_color >output &&
test_cmp expect output
'
<BLUE>??<RESET> untracked
EOF
-test_expect_success 'status -s -b with color.status' '
+test_expect_success TTY 'status -s -b with color.status' '
- git status -s -b | test_decode_color >output &&
+ test_terminal git status -s -b | test_decode_color >output &&
test_i18ncmp expect output
'
?? untracked
EOF
-test_expect_success 'status --porcelain ignores color.ui' '
+test_expect_success TTY 'status --porcelain ignores color.ui' '
git config --unset color.status &&
- git config color.ui always &&
- git status --porcelain | test_decode_color >output &&
+ git config color.ui auto &&
+ test_terminal git status --porcelain | test_decode_color >output &&
test_cmp expect output
'
-test_expect_success 'status --porcelain ignores color.status' '
+test_expect_success TTY 'status --porcelain ignores color.status' '
git config --unset color.ui &&
- git config color.status always &&
- git status --porcelain | test_decode_color >output &&
+ git config color.status auto &&
+ test_terminal git status --porcelain | test_decode_color >output &&
test_cmp expect output
'
test_i18ngrep ! "Initial commit" output
'
+test_expect_success '--no-optional-locks prevents index update' '
+ test-chmtime =1234567890 .git/index &&
+ git --no-optional-locks status &&
+ test-chmtime -v +0 .git/index >out &&
+ grep ^1234567890 out &&
+ git status &&
+ test-chmtime -v +0 .git/index >out &&
+ ! grep ^1234567890 out
+'
+
test_done
compare_diff_raw expect actual
'
+###
+### series V (checkpoint)
+###
+
+# The commands in input_file should not produce any output on the file
+# descriptor set with --cat-blob-fd (or stdout if unspecified).
+#
+# To make sure you're observing the side effects of checkpoint *before*
+# fast-import terminates (and thus writes out its state), check that the
+# fast-import process is still running using background_import_still_running
+# *after* evaluating the test conditions.
+background_import_then_checkpoint () {
+ options=$1
+ input_file=$2
+
+ mkfifo V.input
+ exec 8<>V.input
+ rm V.input
+
+ mkfifo V.output
+ exec 9<>V.output
+ rm V.output
+
+ git fast-import $options <&8 >&9 &
+ echo $! >V.pid
+ # We don't mind if fast-import has already died by the time the test
+ # ends.
+ test_when_finished "exec 8>&-; exec 9>&-; kill $(cat V.pid) || true"
+
+ # Start the writer in the background so that we respect the ordering
+ # imposed by the (blocking) pipes: the write below could block, e.g.
+ # if fast-import blocks writing its own output to &9 because there is
+ # no reader on &9 yet.
+ (
+ cat "$input_file"
+ echo "checkpoint"
+ echo "progress checkpoint"
+ ) >&8 &
+
+ error=1 ;# assume the worst
+ while read output <&9
+ do
+ if test "$output" = "progress checkpoint"
+ then
+ error=0
+ break
+ fi
+ # otherwise ignore cruft
+ echo >&2 "cruft: $output"
+ done
+
+ if test $error -eq 1
+ then
+ false
+ fi
+}
+
+background_import_still_running () {
+ if ! kill -0 "$(cat V.pid)"
+ then
+ echo >&2 "background fast-import terminated too early"
+ false
+ fi
+}
+
+test_expect_success PIPE 'V: checkpoint helper does not get stuck with extra output' '
+ cat >input <<-INPUT_END &&
+ progress foo
+ progress bar
+
+ INPUT_END
+
+ background_import_then_checkpoint "" input &&
+ background_import_still_running
+'
+
+test_expect_success PIPE 'V: checkpoint updates refs after reset' '
+ cat >input <<-\INPUT_END &&
+ reset refs/heads/V
+ from refs/heads/U
+
+ INPUT_END
+
+ background_import_then_checkpoint "" input &&
+ test "$(git rev-parse --verify V)" = "$(git rev-parse --verify U)" &&
+ background_import_still_running
+'
+
+test_expect_success PIPE 'V: checkpoint updates refs and marks after commit' '
+ cat >input <<-INPUT_END &&
+ commit refs/heads/V
+ mark :1
+ committer $GIT_COMMITTER_NAME <$GIT_COMMITTER_EMAIL> $GIT_COMMITTER_DATE
+ data 0
+ from refs/heads/U
+
+ INPUT_END
+
+ background_import_then_checkpoint "--export-marks=marks.actual" input &&
+
+ echo ":1 $(git rev-parse --verify V)" >marks.expected &&
+
+ test "$(git rev-parse --verify V^)" = "$(git rev-parse --verify U)" &&
+ test_cmp marks.expected marks.actual &&
+ background_import_still_running
+'
+
+# Re-create the exact same commit, but on a different branch: no new object is
+# created in the database, but the refs and marks still need to be updated.
+test_expect_success PIPE 'V: checkpoint updates refs and marks after commit (no new objects)' '
+ cat >input <<-INPUT_END &&
+ commit refs/heads/V2
+ mark :2
+ committer $GIT_COMMITTER_NAME <$GIT_COMMITTER_EMAIL> $GIT_COMMITTER_DATE
+ data 0
+ from refs/heads/U
+
+ INPUT_END
+
+ background_import_then_checkpoint "--export-marks=marks.actual" input &&
+
+ echo ":2 $(git rev-parse --verify V2)" >marks.expected &&
+
+ test "$(git rev-parse --verify V2)" = "$(git rev-parse --verify V)" &&
+ test_cmp marks.expected marks.actual &&
+ background_import_still_running
+'
+
+test_expect_success PIPE 'V: checkpoint updates tags after tag' '
+ cat >input <<-INPUT_END &&
+ tag Vtag
+ from refs/heads/V
+ tagger $GIT_COMMITTER_NAME <$GIT_COMMITTER_EMAIL> $GIT_COMMITTER_DATE
+ data 0
+
+ INPUT_END
+
+ background_import_then_checkpoint "" input &&
+ git show-ref -d Vtag &&
+ background_import_still_running
+'
+
test_done
if ($#ARGV < 1) {
die "usage: test-terminal program args";
}
+$ENV{TERM} = 'vt100';
my $master_in = new IO::Pty;
my $master_out = new IO::Pty;
my $master_err = new IO::Pty;
bufptr = nl + 1;
if (!strcmp(type, blob_type)) {
- item->tagged = &lookup_blob(&oid)->object;
+ item->tagged = (struct object *)lookup_blob(&oid);
} else if (!strcmp(type, tree_type)) {
- item->tagged = &lookup_tree(&oid)->object;
+ item->tagged = (struct object *)lookup_tree(&oid);
} else if (!strcmp(type, commit_type)) {
- item->tagged = &lookup_commit(&oid)->object;
+ item->tagged = (struct object *)lookup_commit(&oid);
} else if (!strcmp(type, tag_type)) {
- item->tagged = &lookup_tag(&oid)->object;
+ item->tagged = (struct object *)lookup_tag(&oid);
} else {
error("Unknown type %s", type);
item->tagged = NULL;
{
struct ref *ref;
int n = 0;
- struct object_id head_oid;
char *head;
int summary_width = transport_summary_width(refs);
- head = resolve_refdup("HEAD", RESOLVE_REF_READING, head_oid.hash, NULL);
+ head = resolve_refdup("HEAD", RESOLVE_REF_READING, NULL, NULL);
if (verbose) {
for (ref = refs; ref; ref = ref->next)
void wt_status_prepare(struct wt_status *s)
{
- struct object_id oid;
-
memset(s, 0, sizeof(*s));
memcpy(s->color_palette, default_wt_status_colors,
sizeof(default_wt_status_colors));
s->show_untracked_files = SHOW_NORMAL_UNTRACKED_FILES;
s->use_color = -1;
s->relative_paths = 1;
- s->branch = resolve_refdup("HEAD", 0, oid.hash, NULL);
+ s->branch = resolve_refdup("HEAD", 0, NULL, NULL);
s->reference = "HEAD";
s->fp = stdout;
s->index_file = get_index_file();