/git-mergetool--lib
/git-mktag
/git-mktree
- /git-name-rev
+ /git-multi-pack-index
/git-mv
+ /git-name-rev
/git-notes
/git-p4
/git-pack-redundant
/git-pull
/git-push
/git-quiltimport
+/git-range-diff
/git-read-tree
/git-rebase
/git-rebase--am
/git-rebase--helper
/git-rebase--interactive
/git-rebase--merge
+/git-rebase--preserve-merges
/git-receive-pack
/git-reflog
/git-remote
/config.mak.autogen
/config.mak.append
/configure
+/.vscode/
/tags
/TAGS
/cscope*
Advice shown when you used linkgit:git-checkout[1] to
move to the detached HEAD state, to instruct how to create
a local branch after the fact.
+ checkoutAmbiguousRemoteBranchName::
+ Advice shown when the argument to
+ linkgit:git-checkout[1] ambiguously resolves to a
+ remote-tracking branch on more than one remote in
+ situations where an unambiguous argument would have
+ otherwise caused a remote-tracking branch to be
+ checked out. See the `checkout.defaultRemote`
+ configuration variable for how to set a given remote
+ to be used by default in some situations where this
+ advice would be printed.
amWorkDir::
Advice that shows the location of the patch file when
linkgit:git-am[1] fails to apply it.
Advice on what to do when you've accidentally added one
git repo inside of another.
ignoredHook::
- Advice shown if an hook is ignored because the hook is not
+ Advice shown if a hook is ignored because the hook is not
set as executable.
waitingForEditor::
Print a message to the terminal whenever Git is waiting for
default mode is 'dotGitOnly'.
core.ignoreCase::
- If true, this option enables various workarounds to enable
+ Internal variable which enables various workarounds to enable
Git to work better on filesystems that are not case sensitive,
- like FAT. For example, if a directory listing finds
- "makefile" when Git expects "Makefile", Git will assume
+ like APFS, HFS+, FAT, NTFS, etc. For example, if a directory listing
+ finds "makefile" when Git expects "Makefile", Git will assume
it is really the same file, and continue to remember it as
"Makefile".
+
The default is false, except linkgit:git-clone[1] or linkgit:git-init[1]
will probe and set core.ignoreCase true if appropriate when the repository
is created.
++
+Git relies on the proper configuration of this variable for your operating
+system and file system. Modifying this value may result in unexpected behavior.
core.precomposeUnicode::
This option is only used by Mac OS implementation of Git.
This setting defaults to "refs/notes/commits", and it can be overridden by
the `GIT_NOTES_REF` environment variable. See linkgit:git-notes[1].
-core.commitGraph::
- Enable git commit graph feature. Allows reading from the
- commit-graph file.
+gc.commitGraph::
+ If true, then gc will rewrite the commit-graph file when
+ linkgit:git-gc[1] is run. When using linkgit:git-gc[1]
+ '--auto' the commit-graph will be updated if housekeeping is
+ required. Default is false. See linkgit:git-commit-graph[1]
+ for details.
+
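For example, to let routine garbage collection keep the commit-graph current
(the file is written to `.git/objects/info/commit-graph`):

	git config gc.commitGraph true
	git gc
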
+core.useReplaceRefs::
+ If set to `false`, behave as if the `--no-replace-objects`
+ option was given on the command line. See linkgit:git[1] and
+ linkgit:git-replace[1] for more information.
+ core.multiPackIndex::
+ Use the multi-pack-index file to track multiple packfiles using a
+ single index. See link:technical/multi-pack-index.html[the
+ multi-pack-index design document].
+
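A minimal usage sketch (the file is written to
`.git/objects/pack/multi-pack-index` by the `git multi-pack-index` builtin
added in this series):

	git config core.multiPackIndex true
	git multi-pack-index write
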
core.sparseCheckout::
Enable "sparse checkout" feature. See section "Sparse checkout" in
linkgit:git-read-tree[1] for more information.
Tells 'git apply' how to handle whitespace, in the same way
as the `--whitespace` option. See linkgit:git-apply[1].
-blame.showRoot::
- Do not treat root commits as boundaries in linkgit:git-blame[1].
- This option defaults to false.
-
blame.blankBoundary::
Show blank commit object name for boundary commits in
linkgit:git-blame[1]. This option defaults to false.
-blame.showEmail::
- Show the author email instead of author name in linkgit:git-blame[1].
- This option defaults to false.
+blame.coloring::
+ This determines the coloring scheme to be applied to blame
+ output. It can be 'repeatedLines', 'highlightRecent',
+ or 'none' which is the default.
blame.date::
Specifies the format used to output dates in linkgit:git-blame[1].
If unset the iso format is used. For supported values,
see the discussion of the `--date` option at linkgit:git-log[1].
+blame.showEmail::
+ Show the author email instead of author name in linkgit:git-blame[1].
+ This option defaults to false.
+
+blame.showRoot::
+ Do not treat root commits as boundaries in linkgit:git-blame[1].
+ This option defaults to false.
+
branch.autoSetupMerge::
Tells 'git branch' and 'git checkout' to set up new branches
so that linkgit:git-pull[1] will appropriately merge from the
browse HTML help (see `-w` option in linkgit:git-help[1]) or a
working repository in gitweb (see linkgit:git-instaweb[1]).
+checkout.defaultRemote::
+ When you run 'git checkout <something>' and only have one
+ remote, it may implicitly fall back on checking out and
+ tracking e.g. 'origin/<something>'. This stops working as soon
+ as you have more than one remote with a '<something>'
+ reference. This setting allows for setting the name of a
+ preferred remote that should always win when it comes to
+ disambiguation. The typical use-case is to set this to
+ `origin`.
++
+Currently this is used by linkgit:git-checkout[1] when 'git checkout
+<something>' will check out the '<something>' branch on another remote,
+and by linkgit:git-worktree[1] when 'git worktree add' refers to a
+remote branch. This setting might be used for other checkout-like
+commands or functionality in the future.
+
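For example, to always prefer 'origin' when more than one remote has the
requested branch ('topic' is just a placeholder name):

	git config checkout.defaultRemote origin
	git checkout topic      # creates a local 'topic' tracking origin/topic
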
clean.requireForce::
A boolean to make git-clean do nothing unless given -f,
-i or -n. Defaults to true.
color.advice.hint::
Use customized color for hints.
+color.blame.highlightRecent::
+ This can be used to color the metadata of a blame line depending
+ on age of the line.
++
+This setting should be set to a comma-separated list of color and date settings,
+starting and ending with a color; the dates should be set from oldest to newest.
+The metadata will be colored with the given colors if the line was introduced
+before the given timestamp, overwriting older timestamped colors.
++
+Instead of an absolute timestamp, relative timestamps work as well, e.g.
+`2.weeks.ago` is valid to address anything older than 2 weeks.
++
+It defaults to 'blue,12 month ago,white,1 month ago,red', which colors
+everything older than one year blue, keeps recent changes between one month
+and one year old white, and colors lines introduced within the last month red.
+
+color.blame.repeatedLines::
+ Use the customized color for the part of git-blame output that
+ is repeated meta information per line (such as commit id,
+ author name, date and timezone). Defaults to cyan.
+
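A sketch of putting these settings to use (the cutoffs and path are
arbitrary; the coloring is assumed to take effect via `git blame
--color-by-age`, or by default when `blame.coloring` is set to
'highlightRecent'):

	git config blame.coloring highlightRecent
	git config color.blame.highlightRecent "blue,18 month ago,white,3 month ago,red"
	git blame Makefile
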
color.branch::
A boolean to enable/disable color in the output of
linkgit:git-branch[1]. May be set to `always`,
true the default color mode will be used. When set to false,
moved lines are not colored.
+diff.colorMovedWS::
+ When moved lines are colored using e.g. the `diff.colorMoved` setting,
+ this option controls the `<mode>` for how whitespace is treated;
+ for details of valid modes see '--color-moved-ws' in linkgit:git-diff[1].
+
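For instance (assuming the 'zebra' and 'allow-indentation-change' modes
described in linkgit:git-diff[1]):

	git config diff.colorMoved zebra
	git config diff.colorMovedWS allow-indentation-change
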
color.diff.<slot>::
Use customized color for diff colorization. `<slot>` specifies
which part of the patch to use the specified color, and is one
(highlighting whitespace errors), `oldMoved` (deleted lines),
`newMoved` (added lines), `oldMovedDimmed`, `oldMovedAlternative`,
`oldMovedAlternativeDimmed`, `newMovedDimmed`, `newMovedAlternative`,
- and `newMovedAlternativeDimmed` (See the '<mode>'
- setting of '--color-moved' in linkgit:git-diff[1] for details).
+ `newMovedAlternativeDimmed` (See the '<mode>'
+ setting of '--color-moved' in linkgit:git-diff[1] for details),
+ `contextDimmed`, `oldDimmed`, `newDimmed`, `contextBold`,
+ `oldBold`, and `newBold` (see linkgit:git-range-diff[1] for details).
color.decorate.<slot>::
Use customized color for 'git log --decorate' output. `<slot>` is one
of `branch`, `remoteBranch`, `tag`, `stash` or `HEAD` for local
- branches, remote-tracking branches, tags, stash and HEAD, respectively.
+ branches, remote-tracking branches, tags, stash and HEAD, respectively,
+ and `grafted` for grafted commits.
color.grep::
When set to `always`, always highlight matches. When `false` (or
filename prefix (when not using `-h`)
`function`;;
function name lines (when using `-p`)
-`linenumber`;;
+`lineNumber`;;
line number prefix (when using `-n`)
+`column`;;
+ column number prefix (when using `--column`)
`match`;;
matching text (same as setting `matchContext` and `matchSelected`)
`matchContext`;;
color.push.error::
Use customized color for push errors.
+color.remote::
+ If set, keywords at the start of the line are highlighted. The
+ keywords are "error", "warning", "hint" and "success", and are
+ matched case-insensitively. May be set to `always`, `false` (or
+ `never`) or `auto` (or `true`). If unset, then the value of
+ `color.ui` is used (`auto` by default).
+
+color.remote.<slot>::
+ Use customized color for each remote keyword. `<slot>` may be
+ `hint`, `warning`, `success` or `error` which match the
+ corresponding keyword.
+
color.showBranch::
A boolean to enable/disable color in the output of
linkgit:git-show-branch[1]. May be set to `always`,
status short-format), or
`unmerged` (files which have unmerged changes).
-color.blame.repeatedLines::
- Use the customized color for the part of git-blame output that
- is repeated meta information per line (such as commit id,
- author name, date and timezone). Defaults to cyan.
-
-color.blame.highlightRecent::
- This can be used to color the metadata of a blame line depending
- on age of the line.
-+
-This setting should be set to a comma-separated list of color and date settings,
-starting and ending with a color, the dates should be set from oldest to newest.
-The metadata will be colored given the colors if the the line was introduced
-before the given timestamp, overwriting older timestamped colors.
-+
-Instead of an absolute timestamp relative timestamps work as well, e.g.
-2.weeks.ago is valid to address anything older than 2 weeks.
-+
-It defaults to 'blue,12 month ago,white,1 month ago,red', which colors
-everything older than one year blue, recent changes between one month and
-one year old are kept white, and lines introduced within the last month are
-colored red.
-
-blame.coloring::
- This determines the coloring scheme to be applied to blame
- output. It can be 'repeatedLines', 'highlightRecent',
- or 'none' which is the default.
-
color.transport::
A boolean to enable/disable color when pushes are rejected. May be
set to `always`, `false` (or `never`) or `auto` (or `true`), in which
fetch.fsckObjects::
If it is set to true, git-fetch-pack will check all fetched
- objects. It will abort in the case of a malformed object or a
- broken link. The result of an abort are only dangling objects.
- Defaults to false. If not set, the value of `transfer.fsckObjects`
- is used instead.
+ objects. See `transfer.fsckObjects` for what's
+ checked. Defaults to false. If not set, the value of
+ `transfer.fsckObjects` is used instead.
+
+fetch.fsck.<msg-id>::
+ Acts like `fsck.<msg-id>`, but is used by
+ linkgit:git-fetch-pack[1] instead of linkgit:git-fsck[1]. See
+ the `fsck.<msg-id>` documentation for details.
+
+fetch.fsck.skipList::
+ Acts like `fsck.skipList`, but is used by
+ linkgit:git-fetch-pack[1] instead of linkgit:git-fsck[1]. See
+ the `fsck.skipList` documentation for details.
fetch.unpackLimit::
If the number of objects fetched over the Git native
`full` and `compact`. Default value is `full`. See section
OUTPUT in linkgit:git-fetch[1] for details.
+fetch.negotiationAlgorithm::
+ Control how information about the commits in the local repository is
+ sent when negotiating the contents of the packfile to be sent by the
+ server. Set to "skipping" to use an algorithm that skips commits in an
+ effort to converge faster, but may result in a larger-than-necessary
+ packfile; the default is "default", which instructs Git to use the default
+ algorithm that never skips commits (unless the server has acknowledged it
+ or one of its descendants).
+ Unknown values will cause 'git fetch' to error out.
++
+See also the `--negotiation-tip` option for linkgit:git-fetch[1].
+
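A sketch of combining this with a negotiation tip (the ref name is only an
example):

	git config fetch.negotiationAlgorithm skipping
	git fetch --negotiation-tip=refs/heads/master origin
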
format.attach::
Enable multipart/mixed attachments as the default for
'format-patch'. The value can also be a double quoted string
linkgit:gitattributes[5] for details.
fsck.<msg-id>::
- Allows overriding the message type (error, warn or ignore) of a
- specific message ID such as `missingEmail`.
-+
-For convenience, fsck prefixes the error/warning with the message ID,
-e.g. "missingEmail: invalid author/committer line - missing email" means
-that setting `fsck.missingEmail = ignore` will hide that issue.
-+
-This feature is intended to support working with legacy repositories
-which cannot be repaired without disruptive changes.
+ During fsck git may find issues with legacy data which
+ wouldn't be generated by current versions of git, and which
+ wouldn't be sent over the wire if `transfer.fsckObjects` was
+ set. This feature is intended to support working with legacy
+ repositories containing such data.
++
+Setting `fsck.<msg-id>` will be picked up by linkgit:git-fsck[1], but
+to accept pushes of such data set `receive.fsck.<msg-id>` instead, or
+to clone or fetch it set `fetch.fsck.<msg-id>`.
++
+The rest of the documentation discusses `fsck.*` for brevity, but the
+same applies for the corresponding `receive.fsck.*` and
+`fetch.fsck.*` variables.
++
+Unlike variables like `color.ui` and `core.editor` the
+`receive.fsck.<msg-id>` and `fetch.fsck.<msg-id>` variables will not
+fall back on the `fsck.<msg-id>` configuration if they aren't set. To
+uniformly configure the same fsck settings in different circumstances
+all three of them must be set to the same values.
++
+When `fsck.<msg-id>` is set, errors can be switched to warnings and
+vice versa by configuring the `fsck.<msg-id>` setting where the
+`<msg-id>` is the fsck message ID and the value is one of `error`,
+`warn` or `ignore`. For convenience, fsck prefixes the error/warning
+with the message ID, e.g. "missingEmail: invalid author/committer line
+- missing email" means that setting `fsck.missingEmail = ignore` will
+hide that issue.
++
+In general, it is better to enumerate existing objects with problems
+with `fsck.skipList`, instead of listing the kind of breakages these
+problematic objects share to be ignored, as doing the latter will
+allow new instances of the same breakages to go unnoticed.
++
+Setting an unknown `fsck.<msg-id>` value will cause fsck to die, but
+doing the same for `receive.fsck.<msg-id>` and `fetch.fsck.<msg-id>`
+will only cause git to warn.
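
For example, to tolerate the `missingEmail` issue during local fsck, fetches,
and pushes alike, all three variables have to be set:

	git config fsck.missingEmail ignore
	git config fetch.fsck.missingEmail ignore
	git config receive.fsck.missingEmail ignore
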
fsck.skipList::
The path to a sorted list of object names (i.e. one SHA-1 per
should be accepted despite early commits containing errors that
can be safely ignored such as invalid committer email addresses.
Note: corrupt objects cannot be skipped with this setting.
++
+Like `fsck.<msg-id>` this variable has corresponding
+`receive.fsck.skipList` and `fetch.fsck.skipList` variants.
++
+Unlike variables like `color.ui` and `core.editor` the
+`receive.fsck.skipList` and `fetch.fsck.skipList` variables will not
+fall back on the `fsck.skipList` configuration if they aren't set. To
+uniformly configure the same fsck settings in different circumstances
+all three of them must be set to the same values.
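
Likewise for the skip list (the path is a placeholder; the file holds one
object name per line):

	git config fsck.skipList .git/known-broken
	git config fetch.fsck.skipList .git/known-broken
	git config receive.fsck.skipList .git/known-broken
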
gc.aggressiveDepth::
The depth parameter used in the delta compression
grep.lineNumber::
If set to true, enable `-n` option by default.
+grep.column::
+ If set to true, enable the `--column` option by default.
+
grep.patternType::
Set the default matching behavior. Using a value of 'basic', 'extended',
'fixed', or 'perl' will enable the `--basic-regexp`, `--extended-regexp`,
signed, and the program is expected to send the result to its
standard output.
+gpg.format::
+ Specifies which key format to use when signing with `--gpg-sign`.
+ Default is "openpgp" and another possible value is "x509".
+
+gpg.<format>.program::
+ Use this to customize the program used for the signing format you
+ chose (see `gpg.program` and `gpg.format`). `gpg.program` can still
+ be used as a legacy synonym for `gpg.openpgp.program`. The default
+ value for `gpg.x509.program` is "gpgsm".
+
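For example, to sign commits with an X.509 key through gpgsm (the default
program for the "x509" format):

	git config gpg.format x509
	git commit -S -m "x509-signed commit"
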
gui.commitMsgWidth::
Defines how wide the commit message window is in the
linkgit:git-gui[1]. "75" is the default.
receive.fsckObjects::
If it is set to true, git-receive-pack will check all received
- objects. It will abort in the case of a malformed object or a
- broken link. The result of an abort are only dangling objects.
- Defaults to false. If not set, the value of `transfer.fsckObjects`
- is used instead.
+ objects. See `transfer.fsckObjects` for what's checked.
+ Defaults to false. If not set, the value of
+ `transfer.fsckObjects` is used instead.
receive.fsck.<msg-id>::
- When `receive.fsckObjects` is set to true, errors can be switched
- to warnings and vice versa by configuring the `receive.fsck.<msg-id>`
- setting where the `<msg-id>` is the fsck message ID and the value
- is one of `error`, `warn` or `ignore`. For convenience, fsck prefixes
- the error/warning with the message ID, e.g. "missingEmail: invalid
- author/committer line - missing email" means that setting
- `receive.fsck.missingEmail = ignore` will hide that issue.
-+
-This feature is intended to support working with legacy repositories
-which would not pass pushing when `receive.fsckObjects = true`, allowing
-the host to accept repositories with certain known issues but still catch
-other issues.
+ Acts like `fsck.<msg-id>`, but is used by
+ linkgit:git-receive-pack[1] instead of
+ linkgit:git-fsck[1]. See the `fsck.<msg-id>` documentation for
+ details.
receive.fsck.skipList::
- The path to a sorted list of object names (i.e. one SHA-1 per
- line) that are known to be broken in a non-fatal way and should
- be ignored. This feature is useful when an established project
- should be accepted despite early commits containing errors that
- can be safely ignored such as invalid committer email addresses.
- Note: corrupt objects cannot be skipped with this setting.
+ Acts like `fsck.skipList`, but is used by
+ linkgit:git-receive-pack[1] instead of
+ linkgit:git-fsck[1]. See the `fsck.skipList` documentation for
+ details.
receive.keepAlive::
After receiving the pack from the client, `receive-pack` may
submodule.<name>.active::
Boolean value indicating if the submodule is of interest to git
commands. This config option takes precedence over the
- submodule.active config option.
+ submodule.active config option. See linkgit:gitsubmodules[7] for
+ details.
submodule.active::
A repeated field which contains a pathspec used to match against a
submodule's path to determine if the submodule is of interest to git
- commands.
+ commands. See linkgit:gitsubmodules[7] for details.
submodule.recurse::
Specifies if commands recurse into submodules by default. This
When `fetch.fsckObjects` or `receive.fsckObjects` are
not set, the value of this variable is used instead.
Defaults to false.
++
+When set, the fetch or receive will abort in the case of a malformed
+object or a link to a nonexistent object. In addition, various other
+issues are checked for, including legacy issues (see `fsck.<msg-id>`),
+and potential security issues like the existence of a `.GIT` directory
+or a malicious `.gitmodules` file (see the release notes for v2.2.1
+and v2.17.1 for details). Other sanity and security checks may be
+added in future releases.
++
+On the receiving side, failing fsckObjects will make those objects
+unreachable, see "QUARANTINE ENVIRONMENT" in
+linkgit:git-receive-pack[1]. On the fetch side, malformed objects will
+instead be left unreferenced in the repository.
++
+Due to the non-quarantine nature of the `fetch.fsckObjects`
+implementation, it cannot be relied upon to leave the object store
+clean like `receive.fsckObjects` can.
++
+As objects are unpacked they're written to the object store, so there
+can be cases where malicious objects get introduced even though the
+"fetch" failed, only to have a subsequent "fetch" succeed because only
+new incoming objects are checked, not those that have already been
+written to the object store. That difference in behavior should not be
+relied upon. In the future, such objects may be quarantined for
+"fetch" as well.
++
+For now, the paranoid need to find some way to emulate the quarantine
+environment if they'd like the same protection as "push". E.g. in the
+case of an internal mirror do the mirroring in two steps, one to fetch
+the untrusted objects, and then do a second "push" (which will use the
+quarantine) to another internal repo, and have internal clients
+consume this pushed-to repository, or embargo internal fetches and
+only allow them once a full "fsck" has run (and no new fetches have
+happened in the meantime).
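
One way to emulate the quarantine as suggested above (all repository names
and URLs are placeholders; `checked.git` is assumed to have
`receive.fsckObjects` enabled):

	git clone --mirror https://example.com/untrusted.git staging.git
	git -C staging.git push --mirror /srv/git/checked.git
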
transfer.hideRefs::
String(s) `receive-pack` and `upload-pack` use to decide which
repository-level config (this is a safety measure against fetching from
untrusted repositories).
+uploadpack.allowRefInWant::
+ If this option is set, `upload-pack` will support the `ref-in-want`
+ feature of the protocol version 2 `fetch` command. This feature
+ is intended for the benefit of load-balanced servers which may
+ not have the same view of what OIDs their refs point to due to
+ replication delay.
+
url.<base>.insteadOf::
Any URL that starts with this value will be rewritten to
start, instead, with <base>. In cases where some site serves a
# The DEVELOPER mode enables -Wextra with a few exceptions. By
# setting this flag the exceptions are removed, and all of
# -Wextra is used.
+#
+# pedantic:
+#
+# Enable -pedantic compilation. This also disables
+# USE_PARENS_AROUND_GETTEXT_N to produce only relevant warnings.
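
For example (assuming these knobs are the `DEVOPTS` values used together with
the DEVELOPER switch):

	make DEVELOPER=1 DEVOPTS=pedantic
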
GIT-VERSION-FILE: FORCE
@$(SHELL_PATH) ./GIT-VERSION-GEN
export TCL_PATH TCLTK_PATH
SPARSE_FLAGS =
-SPATCH_FLAGS = --all-includes
+SPATCH_FLAGS = --all-includes --patch .
SCRIPT_LIB += git-parse-remote
SCRIPT_LIB += git-rebase--am
SCRIPT_LIB += git-rebase--interactive
+SCRIPT_LIB += git-rebase--preserve-merges
SCRIPT_LIB += git-rebase--merge
SCRIPT_LIB += git-sh-setup
SCRIPT_LIB += git-sh-i18n
PROGRAM_OBJS += imap-send.o
PROGRAM_OBJS += sh-i18n--envsubst.o
PROGRAM_OBJS += shell.o
-PROGRAM_OBJS += show-index.o
PROGRAM_OBJS += remote-testsvn.o
# Binary suffix, set to .exe for Windows builds
TEST_BUILTINS_OBJS += test-genrandom.o
TEST_BUILTINS_OBJS += test-hashmap.o
TEST_BUILTINS_OBJS += test-index-version.o
+TEST_BUILTINS_OBJS += test-json-writer.o
TEST_BUILTINS_OBJS += test-lazy-init-name-hash.o
TEST_BUILTINS_OBJS += test-match-trees.o
TEST_BUILTINS_OBJS += test-mergesort.o
TEST_BUILTINS_OBJS += test-path-utils.o
TEST_BUILTINS_OBJS += test-prio-queue.o
TEST_BUILTINS_OBJS += test-read-cache.o
+ TEST_BUILTINS_OBJS += test-read-midx.o
TEST_BUILTINS_OBJS += test-ref-store.o
TEST_BUILTINS_OBJS += test-regex.o
+TEST_BUILTINS_OBJS += test-repository.o
TEST_BUILTINS_OBJS += test-revision-walking.o
TEST_BUILTINS_OBJS += test-run-command.o
TEST_BUILTINS_OBJS += test-scrap-cache-tree.o
LIB_OBJS += ewah/ewah_io.o
LIB_OBJS += ewah/ewah_rlw.o
LIB_OBJS += exec-cmd.o
+LIB_OBJS += fetch-negotiator.o
LIB_OBJS += fetch-object.o
LIB_OBJS += fetch-pack.o
LIB_OBJS += fsck.o
LIB_OBJS += graph.o
LIB_OBJS += grep.o
LIB_OBJS += hashmap.o
+LIB_OBJS += linear-assignment.o
LIB_OBJS += help.o
LIB_OBJS += hex.o
LIB_OBJS += ident.o
+LIB_OBJS += json-writer.o
LIB_OBJS += kwset.o
LIB_OBJS += levenshtein.o
LIB_OBJS += line-log.o
LIB_OBJS += merge-blobs.o
LIB_OBJS += merge-recursive.o
LIB_OBJS += mergesort.o
+ LIB_OBJS += midx.o
LIB_OBJS += name-hash.o
+LIB_OBJS += negotiator/default.o
+LIB_OBJS += negotiator/skipping.o
LIB_OBJS += notes.o
LIB_OBJS += notes-cache.o
LIB_OBJS += notes-merge.o
LIB_OBJS += prompt.o
LIB_OBJS += protocol.o
LIB_OBJS += quote.o
+LIB_OBJS += range-diff.o
LIB_OBJS += reachable.o
LIB_OBJS += read-cache.o
LIB_OBJS += reflog-walk.o
BUILTIN_OBJS += builtin/merge-tree.o
BUILTIN_OBJS += builtin/mktag.o
BUILTIN_OBJS += builtin/mktree.o
+ BUILTIN_OBJS += builtin/multi-pack-index.o
BUILTIN_OBJS += builtin/mv.o
BUILTIN_OBJS += builtin/name-rev.o
BUILTIN_OBJS += builtin/notes.o
BUILTIN_OBJS += builtin/prune.o
BUILTIN_OBJS += builtin/pull.o
BUILTIN_OBJS += builtin/push.o
+BUILTIN_OBJS += builtin/range-diff.o
BUILTIN_OBJS += builtin/read-tree.o
BUILTIN_OBJS += builtin/rebase--helper.o
BUILTIN_OBJS += builtin/receive-pack.o
BUILTIN_OBJS += builtin/serve.o
BUILTIN_OBJS += builtin/shortlog.o
BUILTIN_OBJS += builtin/show-branch.o
+BUILTIN_OBJS += builtin/show-index.o
BUILTIN_OBJS += builtin/show-ref.o
BUILTIN_OBJS += builtin/stripspace.o
BUILTIN_OBJS += builtin/submodule--helper.o
version.sp version.s version.o: EXTRA_CPPFLAGS = \
'-DGIT_VERSION="$(GIT_VERSION)"' \
'-DGIT_USER_AGENT=$(GIT_USER_AGENT_CQ_SQ)' \
- '-DGIT_BUILT_FROM_COMMIT="$(shell GIT_CEILING_DIRECTORIES=\"$(CURDIR)/..\" \
- git rev-parse -q --verify HEAD || :)"'
+ '-DGIT_BUILT_FROM_COMMIT="$(shell \
+ GIT_CEILING_DIRECTORIES="$(CURDIR)/.." \
+ git rev-parse -q --verify HEAD 2>/dev/null)"'
$(BUILT_INS): git$X
$(QUIET_BUILT_IN)$(RM) $@ && \
command-list.h: generate-cmdlist.sh command-list.txt
-command-list.h: $(wildcard Documentation/git*.txt)
+command-list.h: $(wildcard Documentation/git*.txt) Documentation/config.txt
$(QUIET_GEN)$(SHELL_PATH) ./generate-cmdlist.sh command-list.txt >$@+ && mv $@+ $@
SCRIPT_DEFINES = $(SHELL_PATH_SQ):$(DIFF_SQ):$(GIT_VERSION):\
$(QUIET_GEN)$(RM) $@ $@+ && \
sed -e '1{' \
-e ' s|#!.*perl|#!$(PERL_PATH_SQ)|' \
- -e ' rGIT-PERL-HEADER' \
+ -e ' r GIT-PERL-HEADER' \
-e ' G' \
-e '}' \
-e 's/@@GIT_VERSION@@/$(GIT_VERSION)/g' \
LOCALIZED_SH = $(SCRIPT_SH)
LOCALIZED_SH += git-parse-remote.sh
LOCALIZED_SH += git-rebase--interactive.sh
+LOCALIZED_SH += git-rebase--preserve-merges.sh
LOCALIZED_SH += git-sh-setup.sh
LOCALIZED_PERL = $(SCRIPT_PERL)
fi
C_SOURCES = $(patsubst %.o,%.c,$(C_OBJ))
-%.cocci.patch: %.cocci $(C_SOURCES)
+ifdef DC_SHA1_SUBMODULE
+COCCI_SOURCES = $(filter-out sha1collisiondetection/%,$(C_SOURCES))
+else
+COCCI_SOURCES = $(filter-out sha1dc/%,$(C_SOURCES))
+endif
+
+%.cocci.patch: %.cocci $(COCCI_SOURCES)
@echo ' ' SPATCH $<; \
ret=0; \
- for f in $(C_SOURCES); do \
+ for f in $(COCCI_SOURCES); do \
$(SPATCH) --sp-file $< $$f $(SPATCH_FLAGS) || \
{ ret=$$?; break; }; \
done >$@+ 2>$@.log; \
then \
echo ' ' SPATCH result: $@; \
fi
-coccicheck: $(patsubst %.cocci,%.cocci.patch,$(wildcard contrib/coccinelle/*.cocci))
+coccicheck: $(addsuffix .patch,$(wildcard contrib/coccinelle/*.cocci))
+
+.PHONY: coccicheck
### Installation rules
$(RM) $(addsuffix *.gcda,$(addprefix $(PROFILE_DIR)/, $(object_dirs)))
$(RM) $(addsuffix *.gcno,$(addprefix $(PROFILE_DIR)/, $(object_dirs)))
-clean: profile-clean coverage-clean
+cocciclean:
+ $(RM) contrib/coccinelle/*.cocci.patch*
+
+clean: profile-clean coverage-clean cocciclean
$(RM) *.res
$(RM) $(OBJECTS)
$(RM) $(LIB_FILE) $(XDIFF_LIB) $(VCSSVN_LIB)
$(RM) -r $(GIT_TARNAME) .doc-tmp-dir
$(RM) $(GIT_TARNAME).tar.gz git-core_$(GIT_VERSION)-*.tar.gz
$(RM) $(htmldocs).tar.gz $(manpages).tar.gz
- $(RM) contrib/coccinelle/*.cocci.patch*
$(MAKE) -C Documentation/ clean
ifndef NO_PERL
$(MAKE) -C gitweb clean
$(RM) GIT-USER-AGENT GIT-PREFIX
$(RM) GIT-SCRIPT-DEFINES GIT-PERL-DEFINES GIT-PERL-HEADER GIT-PYTHON-VARS
-.PHONY: all install profile-clean clean strip
+.PHONY: all install profile-clean cocciclean clean strip
.PHONY: shell_compatibility_test please_set_SHELL_PATH_to_a_more_modern_shell
.PHONY: FORCE cscope
extern int cmd_merge_tree(int argc, const char **argv, const char *prefix);
extern int cmd_mktag(int argc, const char **argv, const char *prefix);
extern int cmd_mktree(int argc, const char **argv, const char *prefix);
+ extern int cmd_multi_pack_index(int argc, const char **argv, const char *prefix);
extern int cmd_mv(int argc, const char **argv, const char *prefix);
extern int cmd_name_rev(int argc, const char **argv, const char *prefix);
extern int cmd_notes(int argc, const char **argv, const char *prefix);
extern int cmd_prune_packed(int argc, const char **argv, const char *prefix);
extern int cmd_pull(int argc, const char **argv, const char *prefix);
extern int cmd_push(int argc, const char **argv, const char *prefix);
+extern int cmd_range_diff(int argc, const char **argv, const char *prefix);
extern int cmd_read_tree(int argc, const char **argv, const char *prefix);
extern int cmd_rebase__helper(int argc, const char **argv, const char *prefix);
extern int cmd_receive_pack(int argc, const char **argv, const char *prefix);
extern int cmd_shortlog(int argc, const char **argv, const char *prefix);
extern int cmd_show(int argc, const char **argv, const char *prefix);
extern int cmd_show_branch(int argc, const char **argv, const char *prefix);
+extern int cmd_show_index(int argc, const char **argv, const char *prefix);
extern int cmd_status(int argc, const char **argv, const char *prefix);
extern int cmd_stripspace(int argc, const char **argv, const char *prefix);
extern int cmd_submodule__helper(int argc, const char **argv, const char *prefix);
#include "strbuf.h"
#include "string-list.h"
#include "argv-array.h"
+ #include "midx.h"
+#include "packfile.h"
+#include "object-store.h"
static int delta_base_offset = 1;
static int pack_kept_objects = -1;
/*
* Adds all packs hex strings to the fname list, which do not
- * have a corresponding .keep or .promisor file. These packs are not to
+ * have a corresponding .keep file. These packs are not to
* be kept if we are going to pack everything into one file.
*/
static void get_non_kept_pack_filenames(struct string_list *fname_list,
fname = xmemdupz(e->d_name, len);
- if (!file_exists(mkpath("%s/%s.keep", packdir, fname)) &&
- !file_exists(mkpath("%s/%s.promisor", packdir, fname)))
+ if (!file_exists(mkpath("%s/%s.keep", packdir, fname)))
string_list_append_nodup(fname_list, fname);
else
free(fname);
static void remove_redundant_pack(const char *dir_name, const char *base_name)
{
- const char *exts[] = {".pack", ".idx", ".keep", ".bitmap"};
+ const char *exts[] = {".pack", ".idx", ".keep", ".bitmap", ".promisor"};
int i;
struct strbuf buf = STRBUF_INIT;
size_t plen;
strbuf_release(&buf);
}
+struct pack_objects_args {
+ const char *window;
+ const char *window_memory;
+ const char *depth;
+ const char *threads;
+ const char *max_pack_size;
+ int no_reuse_delta;
+ int no_reuse_object;
+ int quiet;
+ int local;
+};
+
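+/*
+ * Set up "cmd" to run "git pack-objects" with the given command-line
+ * options, writing packs to the "packtmp" prefix; the child's stdout is
+ * set up as a pipe so callers can read the resulting pack names.
+ */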
+static void prepare_pack_objects(struct child_process *cmd,
+ const struct pack_objects_args *args)
+{
+ argv_array_push(&cmd->args, "pack-objects");
+ if (args->window)
+ argv_array_pushf(&cmd->args, "--window=%s", args->window);
+ if (args->window_memory)
+ argv_array_pushf(&cmd->args, "--window-memory=%s", args->window_memory);
+ if (args->depth)
+ argv_array_pushf(&cmd->args, "--depth=%s", args->depth);
+ if (args->threads)
+ argv_array_pushf(&cmd->args, "--threads=%s", args->threads);
+ if (args->max_pack_size)
+ argv_array_pushf(&cmd->args, "--max-pack-size=%s", args->max_pack_size);
+ if (args->no_reuse_delta)
+ argv_array_pushf(&cmd->args, "--no-reuse-delta");
+ if (args->no_reuse_object)
+ argv_array_pushf(&cmd->args, "--no-reuse-object");
+ if (args->local)
+ argv_array_push(&cmd->args, "--local");
+ if (args->quiet)
+ argv_array_push(&cmd->args, "--quiet");
+ if (delta_base_offset)
+ argv_array_push(&cmd->args, "--delta-base-offset");
+ argv_array_push(&cmd->args, packtmp);
+ cmd->git_cmd = 1;
+ cmd->out = -1;
+}
+
+/*
+ * Write oid to the given struct child_process's stdin, starting it first if
+ * necessary.
+ */
+static int write_oid(const struct object_id *oid, struct packed_git *pack,
+ uint32_t pos, void *data)
+{
+ struct child_process *cmd = data;
+
+ if (cmd->in == -1) {
+ if (start_command(cmd))
+ die("Could not start pack-objects to repack promisor objects");
+ }
+
+ xwrite(cmd->in, oid_to_hex(oid), GIT_SHA1_HEXSZ);
+ xwrite(cmd->in, "\n", 1);
+ return 0;
+}
+
+static void repack_promisor_objects(const struct pack_objects_args *args,
+ struct string_list *names)
+{
+ struct child_process cmd = CHILD_PROCESS_INIT;
+ FILE *out;
+ struct strbuf line = STRBUF_INIT;
+
+ prepare_pack_objects(&cmd, args);
+ cmd.in = -1;
+
+ /*
+ * NEEDSWORK: Giving pack-objects only the OIDs without any ordering
+ * hints may result in suboptimal deltas in the resulting pack. See if
+ * the OIDs can be sent with fake paths such that pack-objects can use a
+ * {type -> existing pack order} ordering when computing deltas instead
+ * of a {type -> size} ordering, which may produce better deltas.
+ */
+ for_each_packed_object(write_oid, &cmd,
+ FOR_EACH_OBJECT_PROMISOR_ONLY);
+
+ if (cmd.in == -1)
+ /* No packed objects; cmd was never started */
+ return;
+
+ close(cmd.in);
+
+ out = xfdopen(cmd.out, "r");
+ while (strbuf_getline_lf(&line, out) != EOF) {
+ char *promisor_name;
+ int fd;
+ if (line.len != 40)
+ die("repack: Expecting 40 character sha1 lines only from pack-objects.");
+ string_list_append(names, line.buf);
+
+ /*
+ * pack-objects creates the .pack and .idx files, but not the
+ * .promisor file. Create the .promisor file, which is empty.
+ */
+ promisor_name = mkpathdup("%s-%s.promisor", packtmp,
+ line.buf);
+ fd = open(promisor_name, O_CREAT|O_EXCL|O_WRONLY, 0600);
+ if (fd < 0)
+ die_errno("unable to create '%s'", promisor_name);
+ close(fd);
+ free(promisor_name);
+ }
+ fclose(out);
+ if (finish_command(&cmd))
+ die("Could not finish pack-objects to repack promisor objects");
+}
+
#define ALL_INTO_ONE 1
#define LOOSEN_UNREACHABLE 2
{".pack"},
{".idx"},
{".bitmap", 1},
+ {".promisor", 1},
};
struct child_process cmd = CHILD_PROCESS_INIT;
struct string_list_item *item;
int delete_redundant = 0;
const char *unpack_unreachable = NULL;
int keep_unreachable = 0;
- const char *window = NULL, *window_memory = NULL;
- const char *depth = NULL;
- const char *threads = NULL;
- const char *max_pack_size = NULL;
struct string_list keep_pack_list = STRING_LIST_INIT_NODUP;
- int no_reuse_delta = 0, no_reuse_object = 0;
int no_update_server_info = 0;
- int quiet = 0;
- int local = 0;
+ int midx_cleared = 0;
+ struct pack_objects_args po_args = {NULL};
struct option builtin_repack_options[] = {
OPT_BIT('a', NULL, &pack_everything,
LOOSEN_UNREACHABLE | ALL_INTO_ONE),
OPT_BOOL('d', NULL, &delete_redundant,
N_("remove redundant packs, and run git-prune-packed")),
- OPT_BOOL('f', NULL, &no_reuse_delta,
+ OPT_BOOL('f', NULL, &po_args.no_reuse_delta,
N_("pass --no-reuse-delta to git-pack-objects")),
- OPT_BOOL('F', NULL, &no_reuse_object,
+ OPT_BOOL('F', NULL, &po_args.no_reuse_object,
N_("pass --no-reuse-object to git-pack-objects")),
OPT_BOOL('n', NULL, &no_update_server_info,
N_("do not run git-update-server-info")),
- OPT__QUIET(&quiet, N_("be quiet")),
- OPT_BOOL('l', "local", &local,
+ OPT__QUIET(&po_args.quiet, N_("be quiet")),
+ OPT_BOOL('l', "local", &po_args.local,
N_("pass --local to git-pack-objects")),
OPT_BOOL('b', "write-bitmap-index", &write_bitmaps,
N_("write bitmap index")),
N_("with -A, do not loosen objects older than this")),
OPT_BOOL('k', "keep-unreachable", &keep_unreachable,
N_("with -a, repack unreachable objects")),
- OPT_STRING(0, "window", &window, N_("n"),
+ OPT_STRING(0, "window", &po_args.window, N_("n"),
N_("size of the window used for delta compression")),
- OPT_STRING(0, "window-memory", &window_memory, N_("bytes"),
+ OPT_STRING(0, "window-memory", &po_args.window_memory, N_("bytes"),
N_("same as the above, but limit memory size instead of entries count")),
- OPT_STRING(0, "depth", &depth, N_("n"),
+ OPT_STRING(0, "depth", &po_args.depth, N_("n"),
N_("limits the maximum delta depth")),
- OPT_STRING(0, "threads", &threads, N_("n"),
+ OPT_STRING(0, "threads", &po_args.threads, N_("n"),
N_("limits the maximum number of threads")),
- OPT_STRING(0, "max-pack-size", &max_pack_size, N_("bytes"),
+ OPT_STRING(0, "max-pack-size", &po_args.max_pack_size, N_("bytes"),
N_("maximum size of each packfile")),
OPT_BOOL(0, "pack-kept-objects", &pack_kept_objects,
N_("repack objects in packs marked with .keep")),
sigchain_push_common(remove_pack_on_signal);
- argv_array_push(&cmd.args, "pack-objects");
+ prepare_pack_objects(&cmd, &po_args);
+
argv_array_push(&cmd.args, "--keep-true-parents");
if (!pack_kept_objects)
argv_array_push(&cmd.args, "--honor-pack-keep");
argv_array_push(&cmd.args, "--indexed-objects");
if (repository_format_partial_clone)
argv_array_push(&cmd.args, "--exclude-promisor-objects");
- if (window)
- argv_array_pushf(&cmd.args, "--window=%s", window);
- if (window_memory)
- argv_array_pushf(&cmd.args, "--window-memory=%s", window_memory);
- if (depth)
- argv_array_pushf(&cmd.args, "--depth=%s", depth);
- if (threads)
- argv_array_pushf(&cmd.args, "--threads=%s", threads);
- if (max_pack_size)
- argv_array_pushf(&cmd.args, "--max-pack-size=%s", max_pack_size);
- if (no_reuse_delta)
- argv_array_pushf(&cmd.args, "--no-reuse-delta");
- if (no_reuse_object)
- argv_array_pushf(&cmd.args, "--no-reuse-object");
if (write_bitmaps)
argv_array_push(&cmd.args, "--write-bitmap-index");
if (pack_everything & ALL_INTO_ONE) {
get_non_kept_pack_filenames(&existing_packs, &keep_pack_list);
+ repack_promisor_objects(&po_args, &names);
+
if (existing_packs.nr && delete_redundant) {
if (unpack_unreachable) {
argv_array_pushf(&cmd.args,
argv_array_push(&cmd.args, "--incremental");
}
- if (local)
- argv_array_push(&cmd.args, "--local");
- if (quiet)
- argv_array_push(&cmd.args, "--quiet");
- if (delta_base_offset)
- argv_array_push(&cmd.args, "--delta-base-offset");
-
- argv_array_push(&cmd.args, packtmp);
-
- cmd.git_cmd = 1;
- cmd.out = -1;
cmd.no_stdin = 1;
ret = start_command(&cmd);
if (ret)
return ret;
- if (!names.nr && !quiet)
+ if (!names.nr && !po_args.quiet)
printf("Nothing new to pack.\n");
/*
for_each_string_list_item(item, &names) {
for (ext = 0; ext < ARRAY_SIZE(exts); ext++) {
char *fname, *fname_old;
+
+ if (!midx_cleared) {
+ /* if we move a packfile, it will invalidate the midx */
+ clear_midx_file(get_object_directory());
+ midx_cleared = 1;
+ }
+
fname = mkpathdup("%s/pack-%s%s", packdir,
item->string, exts[ext].name);
if (!file_exists(fname)) {
/* End of pack replacement. */
+ reprepare_packed_git(the_repository);
+
if (delete_redundant) {
int opts = 0;
string_list_sort(&names);
if (!string_list_has_string(&names, sha1))
remove_redundant_pack(packdir, item->string);
}
- if (!quiet && isatty(2))
+ if (!po_args.quiet && isatty(2))
opts |= PRUNE_PACKED_VERBOSE;
prune_packed_objects(opts);
}
git-merge-one-file purehelpers
git-mergetool ancillarymanipulators complete
git-merge-tree ancillaryinterrogators
+ git-multi-pack-index plumbingmanipulators
git-mktag plumbingmanipulators
git-mktree plumbingmanipulators
git-mv mainporcelain worktree
git-pull mainporcelain remote
git-push mainporcelain remote
git-quiltimport foreignscminterface
+git-range-diff mainporcelain
git-read-tree plumbingmanipulators
git-rebase mainporcelain history
git-receive-pack synchelpers
if (envchanged)
*envchanged = 1;
} else if (!strcmp(cmd, "--no-replace-objects")) {
- check_replace_refs = 0;
+ read_replace_refs = 0;
setenv(NO_REPLACE_OBJECTS_ENVIRONMENT, "1", 1);
if (envchanged)
*envchanged = 1;
} else if (!strcmp(cmd, "--shallow-file")) {
(*argv)++;
(*argc)--;
- set_alternate_shallow_file((*argv)[0], 1);
+ set_alternate_shallow_file(the_repository, (*argv)[0], 1);
if (envchanged)
*envchanged = 1;
} else if (!strcmp(cmd, "-C")) {
trace_argv_printf(argv, "trace: built-in: git");
+ validate_cache_entries(&the_index);
status = p->fn(argc, argv, prefix);
+ validate_cache_entries(&the_index);
+
if (status)
return status;
{ "merge-tree", cmd_merge_tree, RUN_SETUP | NO_PARSEOPT },
{ "mktag", cmd_mktag, RUN_SETUP | NO_PARSEOPT },
{ "mktree", cmd_mktree, RUN_SETUP },
+ { "multi-pack-index", cmd_multi_pack_index, RUN_SETUP_GENTLY },
{ "mv", cmd_mv, RUN_SETUP | NEED_WORK_TREE },
{ "name-rev", cmd_name_rev, RUN_SETUP },
{ "notes", cmd_notes, RUN_SETUP },
{ "prune-packed", cmd_prune_packed, RUN_SETUP },
{ "pull", cmd_pull, RUN_SETUP | NEED_WORK_TREE },
{ "push", cmd_push, RUN_SETUP },
+ { "range-diff", cmd_range_diff, RUN_SETUP | USE_PAGER },
{ "read-tree", cmd_read_tree, RUN_SETUP | SUPPORT_SUPER_PREFIX},
{ "rebase--helper", cmd_rebase__helper, RUN_SETUP | NEED_WORK_TREE },
{ "receive-pack", cmd_receive_pack },
{ "shortlog", cmd_shortlog, RUN_SETUP_GENTLY | USE_PAGER },
{ "show", cmd_show, RUN_SETUP },
{ "show-branch", cmd_show_branch, RUN_SETUP },
+ { "show-index", cmd_show_index },
{ "show-ref", cmd_show_ref, RUN_SETUP },
{ "stage", cmd_add, RUN_SETUP | NEED_WORK_TREE },
{ "status", cmd_status, RUN_SETUP | NEED_WORK_TREE },
#ifndef OBJECT_STORE_H
#define OBJECT_STORE_H
+#include "cache.h"
#include "oidmap.h"
+#include "list.h"
+#include "sha1-array.h"
+#include "strbuf.h"
struct alternate_object_database {
struct alternate_object_database *next;
char pack_name[FLEX_ARRAY]; /* more */
};
+ struct multi_pack_index;
+
struct raw_object_store {
/*
* Path to the repository's object store.
*/
struct oidmap *replace_map;
+ struct commit_graph *commit_graph;
+ unsigned commit_graph_attempted : 1; /* if loading has been attempted */
+
+ /*
+ * private data
+ *
+ * should only be accessed directly by packfile.c and midx.c
+ */
+ struct multi_pack_index *multi_pack_index;
+
/*
* private data
*
void *map_sha1_file(struct repository *r, const unsigned char *sha1, unsigned long *size);
+extern void *read_object_file_extended(const struct object_id *oid,
+ enum object_type *type,
+ unsigned long *size, int lookup_replace);
+static inline void *read_object_file(const struct object_id *oid, enum object_type *type, unsigned long *size)
+{
+ return read_object_file_extended(oid, type, size, 1);
+}
+
+/* Read and unpack an object file into memory, write memory to an object file */
+int oid_object_info(struct repository *r, const struct object_id *, unsigned long *);
+
+extern int hash_object_file(const void *buf, unsigned long len,
+ const char *type, struct object_id *oid);
+
+extern int write_object_file(const void *buf, unsigned long len,
+ const char *type, struct object_id *oid);
+
+extern int hash_object_file_literally(const void *buf, unsigned long len,
+ const char *type, struct object_id *oid,
+ unsigned flags);
+
+extern int pretend_object_file(void *, unsigned long, enum object_type,
+ struct object_id *oid);
+
+extern int force_object_loose(const struct object_id *oid, time_t mtime);
+
+/*
+ * Open the loose object at path, check its hash, and return the contents,
+ * type, and size. If the object is a blob, then "contents" may return NULL,
+ * to allow streaming of large blobs.
+ *
+ * Returns 0 on success, negative on error (details may be written to stderr).
+ */
+int read_loose_object(const char *path,
+ const struct object_id *expected_oid,
+ enum object_type *type,
+ unsigned long *size,
+ void **contents);
+
+/*
+ * Convenience for oid_object_info_extended() with a NULL struct
+ * object_info. OBJECT_INFO_SKIP_CACHED is automatically set; pass
+ * nonzero flags to also set other flags.
+ */
+extern int has_sha1_file_with_flags(const unsigned char *sha1, int flags);
+static inline int has_sha1_file(const unsigned char *sha1)
+{
+ return has_sha1_file_with_flags(sha1, 0);
+}
+
+/* Same as the above, except for struct object_id. */
+extern int has_object_file(const struct object_id *oid);
+extern int has_object_file_with_flags(const struct object_id *oid, int flags);
+
+/*
+ * Return true iff an alternate object database has a loose object
+ * with the specified name. This function does not respect replace
+ * references.
+ */
+extern int has_loose_object_nonlocal(const struct object_id *);
+
+extern void assert_oid_type(const struct object_id *oid, enum object_type expect);
+
+struct object_info {
+ /* Request */
+ enum object_type *typep;
+ unsigned long *sizep;
+ off_t *disk_sizep;
+ unsigned char *delta_base_sha1;
+ struct strbuf *type_name;
+ void **contentp;
+
+ /* Response */
+ enum {
+ OI_CACHED,
+ OI_LOOSE,
+ OI_PACKED,
+ OI_DBCACHED
+ } whence;
+ union {
+ /*
+ * struct {
+ * ... Nothing to expose in this case
+ * } cached;
+ * struct {
+ * ... Nothing to expose in this case
+ * } loose;
+ */
+ struct {
+ struct packed_git *pack;
+ off_t offset;
+ unsigned int is_delta;
+ } packed;
+ } u;
+};
+
+/*
+ * Initializer for a "struct object_info" that wants no items. You may
+ * also memset() the memory to all-zeroes.
+ */
+#define OBJECT_INFO_INIT {NULL}
+
+/* Invoke lookup_replace_object() on the given hash */
+#define OBJECT_INFO_LOOKUP_REPLACE 1
+/* Allow reading from a loose object file of unknown/bogus type */
+#define OBJECT_INFO_ALLOW_UNKNOWN_TYPE 2
+/* Do not check cached storage */
+#define OBJECT_INFO_SKIP_CACHED 4
+/* Do not retry packed storage after checking packed and loose storage */
+#define OBJECT_INFO_QUICK 8
+/* Do not check loose object */
+#define OBJECT_INFO_IGNORE_LOOSE 16
+
+int oid_object_info_extended(struct repository *r,
+ const struct object_id *,
+ struct object_info *, unsigned flags);
+
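As a minimal sketch of the request/response usage above (assuming `oid`
holds an object id obtained elsewhere):

	struct object_info oi = OBJECT_INFO_INIT;
	enum object_type type;
	unsigned long size;

	oi.typep = &type;
	oi.sizep = &size;
	if (oid_object_info_extended(the_repository, &oid, &oi,
				     OBJECT_INFO_LOOKUP_REPLACE) < 0)
		die("unable to read object %s", oid_to_hex(&oid));
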
+/*
+ * Iterate over the files in the loose-object parts of the object
+ * directory "path", triggering the following callbacks:
+ *
+ * - loose_object is called for each loose object we find.
+ *
+ * - loose_cruft is called for any files that do not appear to be
+ * loose objects. Note that we only look in the loose object
+ * directories "objects/[0-9a-f]{2}/", so we will not report
+ * "objects/foobar" as cruft.
+ *
+ * - loose_subdir is called for each top-level hashed subdirectory
+ * of the object directory (e.g., "$OBJDIR/f0"). It is called
+ * after the objects in the directory are processed.
+ *
+ * Any callback that is NULL will be ignored. Callbacks returning non-zero
+ * will end the iteration.
+ *
+ * In the "buf" variant, "path" is a strbuf which will also be used as a
+ * scratch buffer, but restored to its original contents before
+ * the function returns.
+ */
+typedef int each_loose_object_fn(const struct object_id *oid,
+ const char *path,
+ void *data);
+typedef int each_loose_cruft_fn(const char *basename,
+ const char *path,
+ void *data);
+typedef int each_loose_subdir_fn(unsigned int nr,
+ const char *path,
+ void *data);
+int for_each_file_in_obj_subdir(unsigned int subdir_nr,
+ struct strbuf *path,
+ each_loose_object_fn obj_cb,
+ each_loose_cruft_fn cruft_cb,
+ each_loose_subdir_fn subdir_cb,
+ void *data);
+int for_each_loose_file_in_objdir(const char *path,
+ each_loose_object_fn obj_cb,
+ each_loose_cruft_fn cruft_cb,
+ each_loose_subdir_fn subdir_cb,
+ void *data);
+int for_each_loose_file_in_objdir_buf(struct strbuf *path,
+ each_loose_object_fn obj_cb,
+ each_loose_cruft_fn cruft_cb,
+ each_loose_subdir_fn subdir_cb,
+ void *data);
+
+/* Flags for for_each_*_object() below. */
+enum for_each_object_flags {
+ /* Iterate only over local objects, not alternates. */
+ FOR_EACH_OBJECT_LOCAL_ONLY = (1<<0),
+
+ /* Only iterate over packs obtained from the promisor remote. */
+ FOR_EACH_OBJECT_PROMISOR_ONLY = (1<<1),
+
+ /*
+ * Visit objects within a pack in packfile order rather than .idx order
+ */
+ FOR_EACH_OBJECT_PACK_ORDER = (1<<2),
+};
+
+/*
+ * Iterate over all accessible loose objects without respect to
+ * reachability. By default, this includes both local and alternate objects.
+ * The order in which objects are visited is unspecified.
+ *
+ * Any flags specific to packs are ignored.
+ */
+int for_each_loose_object(each_loose_object_fn, void *,
+ enum for_each_object_flags flags);
+
+/*
+ * Iterate over all accessible packed objects without respect to reachability.
+ * By default, this includes both local and alternate packs.
+ *
+ * Note that some objects may appear twice if they are found in multiple packs.
+ * Each pack is visited in an unspecified order. By default, objects within a
+ * pack are visited in pack-idx order (i.e., sorted by oid).
+ */
+typedef int each_packed_object_fn(const struct object_id *oid,
+ struct packed_git *pack,
+ uint32_t pos,
+ void *data);
+int for_each_object_in_pack(struct packed_git *p,
+ each_packed_object_fn, void *data,
+ enum for_each_object_flags flags);
+int for_each_packed_object(each_packed_object_fn, void *,
+ enum for_each_object_flags flags);
+
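A minimal caller sketch for the iteration API above (the `count_object`
callback is illustrative, not part of this header):

	static int count_object(const struct object_id *oid,
				struct packed_git *pack, uint32_t pos,
				void *data)
	{
		unsigned long *count = data;
		(*count)++;
		return 0;	/* non-zero would stop the iteration */
	}

	/* e.g. count only objects that live in promisor packs */
	unsigned long count = 0;
	for_each_packed_object(count_object, &count,
			       FOR_EACH_OBJECT_PROMISOR_ONLY);
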
#endif /* OBJECT_STORE_H */
#include "tree-walk.h"
#include "tree.h"
#include "object-store.h"
+ #include "midx.h"
char *odb_pack_name(struct strbuf *buf,
const unsigned char *sha1,
return ret;
}
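+ /*
+  * Return the number of objects in the pack index whose leading object
+  * name byte is less than or equal to "value", as recorded in the
+  * index's fanout table (or 0 if the index cannot be opened).
+  */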
+ uint32_t get_pack_fanout(struct packed_git *p, uint32_t value)
+ {
+ const uint32_t *level1_ofs = p->index_data;
+
+ if (!level1_ofs) {
+ if (open_pack_index(p))
+ return 0;
+ level1_ofs = p->index_data;
+ }
+
+ if (p->index_version > 1) {
+ level1_ofs += 2;
+ }
+
+ return ntohl(level1_ofs[value]);
+ }
+
static struct packed_git *alloc_packed_git(int extra)
{
struct packed_git *p = xmalloc(st_add(sizeof(*p), extra));
ssize_t read_result;
const unsigned hashsz = the_hash_algo->rawsz;
- if (!p->index_data && open_pack_index(p))
- return error("packfile %s index unavailable", p->pack_name);
+ if (!p->index_data) {
+ struct multi_pack_index *m;
+ const char *pack_name = strrchr(p->pack_name, '/');
+
+ for (m = the_repository->objects->multi_pack_index;
+ m; m = m->next) {
+ if (midx_contains_pack(m, pack_name))
+ break;
+ }
+
+ if (!m && open_pack_index(p))
+ return error("packfile %s index unavailable", p->pack_name);
+ }
if (!pack_max_fds) {
unsigned int max_fds = get_max_fd_limit();
" supported (try upgrading GIT to a newer version)",
p->pack_name, ntohl(hdr.hdr_version));
+ /* Skip index checking if in multi-pack-index */
+ if (!p->index_data)
+ return 0;
+
/* Verify the pack matches its index. */
if (p->num_objects != ntohl(hdr.hdr_entries))
return error("packfile %s claims to have %"PRIu32" objects"
report_helper(list, seen_bits, first, list->nr);
}
- static void prepare_packed_git_one(struct repository *r, char *objdir, int local)
+ void for_each_file_in_pack_dir(const char *objdir,
+ each_file_in_pack_dir_fn fn,
+ void *data)
{
struct strbuf path = STRBUF_INIT;
size_t dirnamelen;
DIR *dir;
struct dirent *de;
- struct string_list garbage = STRING_LIST_INIT_DUP;
strbuf_addstr(&path, objdir);
strbuf_addstr(&path, "/pack");
strbuf_addch(&path, '/');
dirnamelen = path.len;
while ((de = readdir(dir)) != NULL) {
- struct packed_git *p;
- size_t base_len;
-
if (is_dot_or_dotdot(de->d_name))
continue;
strbuf_setlen(&path, dirnamelen);
strbuf_addstr(&path, de->d_name);
- base_len = path.len;
- if (strip_suffix_mem(path.buf, &base_len, ".idx")) {
- /* Don't reopen a pack we already have. */
- for (p = r->objects->packed_git; p;
- p = p->next) {
- size_t len;
- if (strip_suffix(p->pack_name, ".pack", &len) &&
- len == base_len &&
- !memcmp(p->pack_name, path.buf, len))
- break;
- }
- if (p == NULL &&
- /*
- * See if it really is a valid .idx file with
- * corresponding .pack file that we can map.
- */
- (p = add_packed_git(path.buf, path.len, local)) != NULL)
- install_packed_git(r, p);
- }
-
- if (!report_garbage)
- continue;
-
- if (ends_with(de->d_name, ".idx") ||
- ends_with(de->d_name, ".pack") ||
- ends_with(de->d_name, ".bitmap") ||
- ends_with(de->d_name, ".keep") ||
- ends_with(de->d_name, ".promisor"))
- string_list_append(&garbage, path.buf);
- else
- report_garbage(PACKDIR_FILE_GARBAGE, path.buf);
+ fn(path.buf, path.len, de->d_name, data);
}
+
closedir(dir);
- report_pack_garbage(&garbage);
- string_list_clear(&garbage, 0);
strbuf_release(&path);
}
+ struct prepare_pack_data {
+ struct repository *r;
+ struct string_list *garbage;
+ int local;
+ struct multi_pack_index *m;
+ };
+
+ static void prepare_pack(const char *full_name, size_t full_name_len,
+ const char *file_name, void *_data)
+ {
+ struct prepare_pack_data *data = (struct prepare_pack_data *)_data;
+ struct packed_git *p;
+ size_t base_len = full_name_len;
+
+ if (strip_suffix_mem(full_name, &base_len, ".idx")) {
+ if (data->m && midx_contains_pack(data->m, file_name))
+ return;
+ /* Don't reopen a pack we already have. */
+ for (p = data->r->objects->packed_git; p; p = p->next) {
+ size_t len;
+ if (strip_suffix(p->pack_name, ".pack", &len) &&
+ len == base_len &&
+ !memcmp(p->pack_name, full_name, len))
+ break;
+ }
+
+ if (!p) {
+ p = add_packed_git(full_name, full_name_len, data->local);
+ if (p)
+ install_packed_git(data->r, p);
+ }
+ }
+
+ if (!report_garbage)
+ return;
+
+ if (ends_with(file_name, ".idx") ||
+ ends_with(file_name, ".pack") ||
+ ends_with(file_name, ".bitmap") ||
+ ends_with(file_name, ".keep") ||
+ ends_with(file_name, ".promisor"))
+ string_list_append(data->garbage, full_name);
+ else
+ report_garbage(PACKDIR_FILE_GARBAGE, full_name);
+ }
+
+ static void prepare_packed_git_one(struct repository *r, char *objdir, int local)
+ {
+ struct prepare_pack_data data;
+ struct string_list garbage = STRING_LIST_INIT_DUP;
+
+ data.m = r->objects->multi_pack_index;
+
+ /* look for the multi-pack-index for this object directory */
+ while (data.m && strcmp(data.m->object_dir, objdir))
+ data.m = data.m->next;
+
+ data.r = r;
+ data.garbage = &garbage;
+ data.local = local;
+
+ for_each_file_in_pack_dir(objdir, prepare_pack, &data);
+
+ report_pack_garbage(data.garbage);
+ string_list_clear(data.garbage, 0);
+ }
+
static void prepare_packed_git(struct repository *r);
/*
* Give a fast, rough count of the number of objects in the repository. This
{
if (!the_repository->objects->approximate_object_count_valid) {
unsigned long count;
+ struct multi_pack_index *m;
struct packed_git *p;
prepare_packed_git(the_repository);
count = 0;
+ for (m = get_multi_pack_index(the_repository); m; m = m->next)
+ count += m->num_objects;
for (p = the_repository->objects->packed_git; p; p = p->next) {
if (open_pack_index(p))
continue;
if (r->objects->packed_git_initialized)
return;
+ prepare_multi_pack_index_one(r, r->objects->objectdir);
prepare_packed_git_one(r, r->objects->objectdir, 1);
prepare_alt_odb(r);
- for (alt = r->objects->alt_odb_list; alt; alt = alt->next)
+ for (alt = r->objects->alt_odb_list; alt; alt = alt->next) {
+ prepare_multi_pack_index_one(r, alt->path);
prepare_packed_git_one(r, alt->path, 0);
+ }
rearrange_packed_git(r);
prepare_packed_git_mru(r);
r->objects->packed_git_initialized = 1;
return r->objects->packed_git;
}
+ struct multi_pack_index *get_multi_pack_index(struct repository *r)
+ {
+ prepare_packed_git(r);
+ return r->objects->multi_pack_index;
+ }
+
struct list_head *get_packed_git_mru(struct repository *r)
{
prepare_packed_git(r);
int find_pack_entry(struct repository *r, const struct object_id *oid, struct pack_entry *e)
{
struct list_head *pos;
+ struct multi_pack_index *m;
prepare_packed_git(r);
- if (!r->objects->packed_git)
+ if (!r->objects->packed_git && !r->objects->multi_pack_index)
return 0;
+ for (m = r->objects->multi_pack_index; m; m = m->next) {
+ if (fill_midx_entry(oid, e, m))
+ return 1;
+ }
+
list_for_each(pos, &r->objects->packed_git_mru) {
struct packed_git *p = list_entry(pos, struct packed_git, mru);
if (fill_pack_entry(oid, e, p)) {
return 1;
}
-int for_each_object_in_pack(struct packed_git *p, each_packed_object_fn cb, void *data)
+int for_each_object_in_pack(struct packed_git *p,
+ each_packed_object_fn cb, void *data,
+ enum for_each_object_flags flags)
{
uint32_t i;
int r = 0;
+ if (flags & FOR_EACH_OBJECT_PACK_ORDER)
+ load_pack_revindex(p);
+
for (i = 0; i < p->num_objects; i++) {
+ uint32_t pos;
struct object_id oid;
- if (!nth_packed_object_oid(&oid, p, i))
+ if (flags & FOR_EACH_OBJECT_PACK_ORDER)
+ pos = p->revindex[i].nr;
+ else
+ pos = i;
+
+ if (!nth_packed_object_oid(&oid, p, pos))
return error("unable to get sha1 of object %u in %s",
- i, p->pack_name);
+ pos, p->pack_name);
- r = cb(&oid, p, i, data);
+ r = cb(&oid, p, pos, data);
if (r)
break;
}
return r;
}
-int for_each_packed_object(each_packed_object_fn cb, void *data, unsigned flags)
+int for_each_packed_object(each_packed_object_fn cb, void *data,
+ enum for_each_object_flags flags)
{
struct packed_git *p;
int r = 0;
pack_errors = 1;
continue;
}
- r = for_each_object_in_pack(p, cb, data);
+ r = for_each_object_in_pack(p, cb, data, flags);
if (r)
break;
}
void *set_)
{
struct oidset *set = set_;
- struct object *obj = parse_object(oid);
+ struct object *obj = parse_object(the_repository, oid);
if (!obj)
return 1;
#ifndef PACKFILE_H
#define PACKFILE_H
+#include "cache.h"
#include "oidset.h"
+/* in object-store.h */
+struct packed_git;
+struct object_info;
+
/*
* Generate the filename to be used for a pack file with checksum "sha1" and
* extension "ext". The result is written into the strbuf "buf", overwriting
extern struct packed_git *parse_pack_index(unsigned char *sha1, const char *idx_path);
+ typedef void each_file_in_pack_dir_fn(const char *full_path, size_t full_path_len,
+ const char *file_name, void *data);
+ void for_each_file_in_pack_dir(const char *objdir,
+ each_file_in_pack_dir_fn fn,
+ void *data);
+
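A sketch of the newly exported pack-directory iterator, assuming the usual string-list helpers; the callback collects the full paths of the .idx files it sees. All names here are illustrative, not part of the series.

/* Hypothetical callback: remember the full path of every .idx file. */
static void collect_idx_files(const char *full_path, size_t full_path_len,
			      const char *file_name, void *data)
{
	struct string_list *list = data;

	if (ends_with(file_name, ".idx"))
		string_list_append(list, full_path);
}

/* usage */
struct string_list idx_files = STRING_LIST_INIT_DUP;
for_each_file_in_pack_dir(r->objects->objectdir, collect_idx_files, &idx_files);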
/* A hook to report invalid files in pack directory */
#define PACKDIR_FILE_PACK 1
#define PACKDIR_FILE_IDX 2
struct packed_git *get_packed_git(struct repository *r);
struct list_head *get_packed_git_mru(struct repository *r);
+ struct multi_pack_index *get_multi_pack_index(struct repository *r);
/*
* Give a rough count of objects in the repository. This sacrifices accuracy
*/
extern void close_pack_index(struct packed_git *);
+ extern uint32_t get_pack_fanout(struct packed_git *p, uint32_t value);
+
extern unsigned char *use_pack(struct packed_git *, struct pack_window **, off_t, unsigned long *);
extern void close_pack_windows(struct packed_git *);
extern void close_pack(struct packed_git *);
extern int has_pack_index(const unsigned char *sha1);
-/*
- * Only iterate over packs obtained from the promisor remote.
- */
-#define FOR_EACH_OBJECT_PROMISOR_ONLY 2
-
-/*
- * Iterate over packed objects in both the local
- * repository and any alternates repositories (unless the
- * FOR_EACH_OBJECT_LOCAL_ONLY flag, defined in cache.h, is set).
- */
-typedef int each_packed_object_fn(const struct object_id *oid,
- struct packed_git *pack,
- uint32_t pos,
- void *data);
-extern int for_each_object_in_pack(struct packed_git *p, each_packed_object_fn, void *data);
-extern int for_each_packed_object(each_packed_object_fn, void *, unsigned flags);
-
/*
* Return 1 if an object in a promisor packfile is or refers to the given
* object, 0 otherwise.
#include "packfile.h"
#include "object-store.h"
#include "repository.h"
+ #include "midx.h"
static int get_oid_oneline(const char *, struct object_id *, struct commit_list *);
return 1;
}
+ static void unique_in_midx(struct multi_pack_index *m,
+ struct disambiguate_state *ds)
+ {
+ uint32_t num, i, first = 0;
+ const struct object_id *current = NULL;
+ num = m->num_objects;
+
+ if (!num)
+ return;
+
+ bsearch_midx(&ds->bin_pfx, m, &first);
+
+ /*
+ * At this point, "first" is the location of the lowest object
+ * with an object name that could match "bin_pfx". See if we have
+ * 0, 1 or more objects that actually match.
+ */
+ for (i = first; i < num && !ds->ambiguous; i++) {
+ struct object_id oid;
+ current = nth_midxed_object_oid(&oid, m, i);
+ if (!match_sha(ds->len, ds->bin_pfx.hash, current->hash))
+ break;
+ update_candidates(ds, current);
+ }
+ }
+
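For illustration, the same bsearch_midx()/nth_midxed_object_oid() walk can list every multi-pack-indexed object matching a hex prefix. A real caller goes through get_short_oid(), so this is purely a sketch; it assumes the caller already validated the prefix and padded it into bin_pfx, as the disambiguation code does.

/* Hypothetical helper: print midx objects whose names start with hex_pfx. */
static void list_midx_prefix_matches(struct multi_pack_index *m,
				     const struct object_id *bin_pfx,
				     const char *hex_pfx)
{
	uint32_t i, first = 0;

	if (!m->num_objects)
		return;

	bsearch_midx(bin_pfx, m, &first);
	for (i = first; i < m->num_objects; i++) {
		struct object_id oid;

		nth_midxed_object_oid(&oid, m, i);
		if (!starts_with(oid_to_hex(&oid), hex_pfx))
			break;
		printf("%s\n", oid_to_hex(&oid));
	}
}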
static void unique_in_pack(struct packed_git *p,
struct disambiguate_state *ds)
{
static void find_short_packed_object(struct disambiguate_state *ds)
{
+ struct multi_pack_index *m;
struct packed_git *p;
+ for (m = get_multi_pack_index(the_repository); m && !ds->ambiguous;
+ m = m->next)
+ unique_in_midx(m, ds);
for (p = get_packed_git(the_repository); p && !ds->ambiguous;
p = p->next)
unique_in_pack(p, ds);
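From the caller's perspective nothing changes: an abbreviated name passed to the existing get_oid() entry point is now disambiguated against multi-pack-indexed objects as well. A minimal fragment, where the prefix string is only a placeholder:

struct object_id oid;

if (!get_oid("1234abc", &oid))	/* "1234abc" stands in for any short prefix */
	printf("resolved to %s\n", oid_to_hex(&oid));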
return 0;
/* We need to do this the hard way... */
- obj = deref_tag(parse_object(oid), NULL, 0);
+ obj = deref_tag(the_repository, parse_object(the_repository, oid),
+ NULL, 0);
if (obj && obj->type == OBJ_COMMIT)
return 1;
return 0;
return 0;
/* We need to do this the hard way... */
- obj = deref_tag(parse_object(oid), NULL, 0);
+ obj = deref_tag(the_repository, parse_object(the_repository, oid),
+ NULL, 0);
if (obj && (obj->type == OBJ_TREE || obj->type == OBJ_COMMIT))
return 1;
return 0;
{
int i;
- if (len < MINIMUM_ABBREV || len > GIT_SHA1_HEXSZ)
+ if (len < MINIMUM_ABBREV || len > the_hash_algo->hexsz)
return -1;
memset(ds, 0, sizeof(*ds));
type = oid_object_info(the_repository, oid, NULL);
if (type == OBJ_COMMIT) {
- struct commit *commit = lookup_commit(oid);
+ struct commit *commit = lookup_commit(the_repository, oid);
if (commit) {
struct pretty_print_context pp = {0};
pp.date_mode.type = DATE_SHORT;
format_commit_message(commit, " %ad - %s", &desc, &pp);
}
} else if (type == OBJ_TAG) {
- struct tag *tag = lookup_tag(oid);
+ struct tag *tag = lookup_tag(the_repository, oid);
if (!parse_tag(tag) && tag->tag)
strbuf_addf(&desc, " %s", tag->tag);
}
return 0;
}
+ static void find_abbrev_len_for_midx(struct multi_pack_index *m,
+ struct min_abbrev_data *mad)
+ {
+ int match = 0;
+ uint32_t num, first = 0;
+ struct object_id oid;
+ const struct object_id *mad_oid;
+
+ if (!m->num_objects)
+ return;
+
+ num = m->num_objects;
+ mad_oid = mad->oid;
+ match = bsearch_midx(mad_oid, m, &first);
+
+ /*
+ * first is now the position in the multi-pack-index where we would
+ * insert mad->oid if it does not exist (or the position of mad->oid
+ * if it does exist). Hence, we consider a maximum of two objects
+ * nearby for the abbreviation length.
+ */
+ mad->init_len = 0;
+ if (!match) {
+ if (nth_midxed_object_oid(&oid, m, first))
+ extend_abbrev_len(&oid, mad);
+ } else if (first < num - 1) {
+ if (nth_midxed_object_oid(&oid, m, first + 1))
+ extend_abbrev_len(&oid, mad);
+ }
+ if (first > 0) {
+ if (nth_midxed_object_oid(&oid, m, first - 1))
+ extend_abbrev_len(&oid, mad);
+ }
+ mad->init_len = mad->cur_len;
+ }
+
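These hunks feed the existing public abbreviation API rather than adding a new one. A minimal usage fragment, assuming `oid` was already filled in by the caller:

struct object_id oid;		/* assume this was resolved earlier */
const char *abbrev;

abbrev = find_unique_abbrev(&oid, DEFAULT_ABBREV);
printf("shortest unique prefix: %s\n", abbrev);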
static void find_abbrev_len_for_pack(struct packed_git *p,
struct min_abbrev_data *mad)
{
static void find_abbrev_len_packed(struct min_abbrev_data *mad)
{
+ struct multi_pack_index *m;
struct packed_git *p;
+ for (m = get_multi_pack_index(the_repository); m; m = m->next)
+ find_abbrev_len_for_midx(m, mad);
for (p = get_packed_git(the_repository); p; p = p->next)
find_abbrev_len_for_pack(p, mad);
}
struct disambiguate_state ds;
struct min_abbrev_data mad;
struct object_id oid_ret;
+ const unsigned hexsz = the_hash_algo->hexsz;
+
if (len < 0) {
unsigned long count = approximate_object_count();
/*
}
oid_to_hex_r(hex, oid);
- if (len == GIT_SHA1_HEXSZ || !len)
- return GIT_SHA1_HEXSZ;
+ if (len == hexsz || !len)
+ return hexsz;
mad.init_len = len;
mad.cur_len = len;
int refs_found = 0;
int at, reflog_len, nth_prior = 0;
- if (len == GIT_SHA1_HEXSZ && !get_oid_hex(str, oid)) {
+ if (len == the_hash_algo->hexsz && !get_oid_hex(str, oid)) {
if (warn_ambiguous_refs && warn_on_object_refname_ambiguity) {
refs_found = dwim_ref(str, len, &tmp_oid, &real_ref);
if (refs_found > 0) {
int detached;
if (interpret_nth_prior_checkout(str, len, &buf) > 0) {
- detached = (buf.len == GIT_SHA1_HEXSZ && !get_oid_hex(buf.buf, oid));
+ detached = (buf.len == the_hash_algo->hexsz && !get_oid_hex(buf.buf, oid));
strbuf_release(&buf);
if (detached)
return 0;
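A hypothetical helper showing the pattern behind these conversions: length checks go through the_hash_algo rather than the hard-coded GIT_SHA1_HEXSZ constant, so the same code can serve other hash sizes later.

/* Hypothetical helper: does `arg` spell out a full object name? */
static int is_full_hex_oid(const char *arg, struct object_id *oid)
{
	return strlen(arg) == the_hash_algo->hexsz &&
	       !get_oid_hex(arg, oid);
}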
if (ret)
return ret;
- commit = lookup_commit_reference(&oid);
+ commit = lookup_commit_reference(the_repository, &oid);
if (parse_commit(commit))
return -1;
if (!idx) {
ret = get_oid_1(name, len, &oid, GET_OID_COMMITTISH);
if (ret)
return ret;
- commit = lookup_commit_reference(&oid);
+ commit = lookup_commit_reference(the_repository, &oid);
if (!commit)
return -1;
if (name && !namelen)
namelen = strlen(name);
while (1) {
- if (!o || (!o->parsed && !parse_object(&o->oid)))
+ if (!o || (!o->parsed && !parse_object(the_repository, &o->oid)))
return NULL;
if (expected_type == OBJ_ANY || o->type == expected_type)
return o;
if (get_oid_1(name, sp - name - 2, &outer, lookup_flags))
return -1;
- o = parse_object(&outer);
+ o = parse_object(the_repository, &outer);
if (!o)
return -1;
if (!expected_type) {
- o = deref_tag(o, name, sp - name - 2);
- if (!o || (!o->parsed && !parse_object(&o->oid)))
+ o = deref_tag(the_repository, o, name, sp - name - 2);
+ if (!o || (!o->parsed && !parse_object(the_repository, &o->oid)))
return -1;
oidcpy(oid, &o->oid);
return 0;
int flag, void *cb_data)
{
struct commit_list **list = cb_data;
- struct object *object = parse_object(oid);
+ struct object *object = parse_object(the_repository, oid);
if (!object)
return 0;
if (object->type == OBJ_TAG) {
- object = deref_tag(object, path, strlen(path));
+ object = deref_tag(the_repository, object, path,
+ strlen(path));
if (!object)
return 0;
}
int matches;
commit = pop_most_recent_commit(&list, ONELINE_SEEN);
- if (!parse_object(&commit->object.oid))
+ if (!parse_object(the_repository, &commit->object.oid))
continue;
buf = get_commit_buffer(commit, NULL);
p = strstr(buf, "\n\n");
}
if (st)
return st;
- one = lookup_commit_reference_gently(&oid_tmp, 0);
+ one = lookup_commit_reference_gently(the_repository, &oid_tmp, 0);
if (!one)
return -1;
if (get_oid_committish(dots[3] ? (dots + 3) : "HEAD", &oid_tmp))
return -1;
- two = lookup_commit_reference_gently(&oid_tmp, 0);
+ two = lookup_commit_reference_gently(the_repository, &oid_tmp, 0);
if (!two)
return -1;
mbs = get_merge_bases(one, two);
struct commit_list *list = NULL;
for_each_ref(handle_one_ref, &list);
+ head_ref(handle_one_ref, &list);
commit_list_sort_by_date(&list);
return get_oid_oneline(name + 2, oid, list);
}
{ "genrandom", cmd__genrandom },
{ "hashmap", cmd__hashmap },
{ "index-version", cmd__index_version },
+ { "json-writer", cmd__json_writer },
{ "lazy-init-name-hash", cmd__lazy_init_name_hash },
{ "match-trees", cmd__match_trees },
{ "mergesort", cmd__mergesort },
{ "path-utils", cmd__path_utils },
{ "prio-queue", cmd__prio_queue },
{ "read-cache", cmd__read_cache },
+ { "read-midx", cmd__read_midx },
{ "ref-store", cmd__ref_store },
{ "regex", cmd__regex },
+ { "repository", cmd__repository },
{ "revision-walking", cmd__revision_walking },
{ "run-command", cmd__run_command },
{ "scrap-cache-tree", cmd__scrap_cache_tree },
int cmd__genrandom(int argc, const char **argv);
int cmd__hashmap(int argc, const char **argv);
int cmd__index_version(int argc, const char **argv);
+int cmd__json_writer(int argc, const char **argv);
int cmd__lazy_init_name_hash(int argc, const char **argv);
int cmd__match_trees(int argc, const char **argv);
int cmd__mergesort(int argc, const char **argv);
int cmd__path_utils(int argc, const char **argv);
int cmd__prio_queue(int argc, const char **argv);
int cmd__read_cache(int argc, const char **argv);
+ int cmd__read_midx(int argc, const char **argv);
int cmd__ref_store(int argc, const char **argv);
int cmd__regex(int argc, const char **argv);
+int cmd__repository(int argc, const char **argv);
int cmd__revision_walking(int argc, const char **argv);
int cmd__run_command(int argc, const char **argv);
int cmd__scrap_cache_tree(int argc, const char **argv);
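For completeness, a heavily hedged sketch of what the new read-midx test helper might look like. load_multi_pack_index() and its exact signature are assumptions about midx.h, which is not shown in this excerpt; only the num_objects field appears elsewhere in the series.

#include "test-tool.h"
#include "cache.h"
#include "midx.h"

int cmd__read_midx(int argc, const char **argv)
{
	struct multi_pack_index *m;

	if (argc != 2)
		usage("read-midx <object-dir>");

	/* loader assumed to be exposed by midx.h */
	m = load_multi_pack_index(argv[1]);
	if (!m)
		return 1;

	printf("num_objects: %"PRIu32"\n", m->num_objects);
	return 0;
}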