From: Junio C Hamano Date: Wed, 28 Mar 2018 06:08:51 +0000 (-0700) Subject: Merge branch 'tg/stash-doc-typofix' into next X-Git-Url: https://git.lorimer.id.au/gitweb.git/diff_plain/144a25f09cf6a36ffc7eee6a3009cd36d0aa09e6?hp=0a790f09c638b2d759854d70daf527a9b136e7fc Merge branch 'tg/stash-doc-typofix' into next * tg/stash-doc-typofix: git-stash.txt: remove extra square bracket --- diff --git a/.clang-format b/.clang-format index 611ab4750b..12a89f95f9 100644 --- a/.clang-format +++ b/.clang-format @@ -163,7 +163,7 @@ PenaltyBreakComment: 10 PenaltyBreakFirstLessLess: 0 PenaltyBreakString: 10 PenaltyExcessCharacter: 100 -PenaltyReturnTypeOnItsOwnLine: 5 +PenaltyReturnTypeOnItsOwnLine: 60 # Don't sort #include's SortIncludes: false diff --git a/Documentation/CodingGuidelines b/Documentation/CodingGuidelines index c4cb5ff0d4..48aa4edfbd 100644 --- a/Documentation/CodingGuidelines +++ b/Documentation/CodingGuidelines @@ -386,6 +386,11 @@ For C programs: - Use Git's gettext wrappers to make the user interface translatable. See "Marking strings for translation" in po/README. + - Variables and functions local to a given source file should be marked + with "static". Variables that are visible to other source files + must be declared with "extern" in header files. However, function + declarations should not use "extern", as that is already the default. + For Perl programs: - Most of the C guidelines above apply. diff --git a/Documentation/Makefile b/Documentation/Makefile index 4ae9ba5c86..6232143cb9 100644 --- a/Documentation/Makefile +++ b/Documentation/Makefile @@ -72,6 +72,7 @@ TECH_DOCS += SubmittingPatches TECH_DOCS += technical/hash-function-transition TECH_DOCS += technical/http-protocol TECH_DOCS += technical/index-format +TECH_DOCS += technical/long-running-process-protocol TECH_DOCS += technical/pack-format TECH_DOCS += technical/pack-heuristics TECH_DOCS += technical/pack-protocol diff --git a/Documentation/RelNotes/2.17.0.txt b/Documentation/RelNotes/2.17.0.txt new file mode 100644 index 0000000000..d6db0e19cf --- /dev/null +++ b/Documentation/RelNotes/2.17.0.txt @@ -0,0 +1,398 @@ +Git 2.17 Release Notes +====================== + +Updates since v2.16 +------------------- + +UI, Workflows & Features + + * "diff" family of commands learned "--find-object=" option + to limit the findings to changes that involve the named object. + + * "git format-patch" learned to give 72-cols to diffstat, which is + consistent with other line length limits the subcommand uses for + its output meant for e-mails. + + * The log from "git daemon" can be redirected with a new option; one + relevant use case is to send the log to standard error (instead of + syslog) when running it from inetd. + + * "git rebase" learned to take "--allow-empty-message" option. + + * "git am" has learned the "--quit" option, in addition to the + existing "--abort" option; having the pair mirrors a few other + commands like "rebase" and "cherry-pick". + + * "git worktree add" learned to run the post-checkout hook, just like + "git clone" runs it upon the initial checkout. + + * "git tag" learned an explicit "--edit" option that allows the + message given via "-m" and "-F" to be further edited. + + * "git fetch --prune-tags" may be used as a handy short-hand for + getting rid of stale tags that are locally held. + + * The new "--show-current-patch" option gives an end-user facing way + to get the diff being applied when "git rebase" (and "git am") + stops with a conflict. 
+ + * "git add -p" used to offer "/" (look for a matching hunk) as a + choice, even there was only one hunk, which has been corrected. + Also the single-key help is now given only for keys that are + enabled (e.g. help for '/' won't be shown when there is only one + hunk). + + * Since Git 1.7.9, "git merge" defaulted to --no-ff (i.e. even when + the side branch being merged is a descendant of the current commit, + create a merge commit instead of fast-forwarding) when merging a + tag object. This was appropriate default for integrators who pull + signed tags from their downstream contributors, but caused an + unnecessary merges when used by downstream contributors who + habitually "catch up" their topic branches with tagged releases + from the upstream. Update "git merge" to default to --no-ff only + when merging a tag object that does *not* sit at its usual place in + refs/tags/ hierarchy, and allow fast-forwarding otherwise, to + mitigate the problem. + + * "git status" can spend a lot of cycles to compute the relation + between the current branch and its upstream, which can now be + disabled with "--no-ahead-behind" option. + + * "git diff" and friends learned funcname patterns for Go language + source files. + + * "git send-email" learned "--reply-to=
" option. + + * Funcname pattern used for C# now recognizes "async" keyword. + + * In a way similar to how "git tag" learned to honor the pager + setting only in the list mode, "git config" learned to ignore the + pager setting when it is used for setting values (i.e. when the + purpose of the operation is not to "show"). + + +Performance, Internal Implementation, Development Support etc. + + * More perf tests for threaded grep + + * "perf" test output can be sent to codespeed server. + + * The build procedure for perl/ part has been greatly simplified by + weaning ourselves off of MakeMaker. + + * Perl 5.8 or greater has been required since Git 1.7.4 released in + 2010, but we continued to assume some core modules may not exist and + used a conditional "eval { require <> }"; we no longer do + this. Some platforms (Fedora/RedHat/CentOS, for example) ship Perl + without all core modules by default (e.g. Digest::MD5, File::Temp, + File::Spec, Net::Domain, Net::SMTP). Users on such platforms may + need to install these additional modules. + + * As a convenience, we install copies of Perl modules we require which + are not part of the core Perl distribution (e.g. Error and + Mail::Address). Users and packagers whose operating system provides + these modules can set NO_PERL_CPAN_FALLBACKS to avoid installing the + bundled modules. + + * In preparation for implementing narrow/partial clone, the machinery + for checking object connectivity used by gc and fsck has been + taught that a missing object is OK when it is referenced by a + packfile specially marked as coming from trusted repository that + promises to make them available on-demand and lazily. + + * The machinery to clone & fetch, which in turn involves packing and + unpacking objects, has been told how to omit certain objects using + the filtering mechanism introduced by another topic. It now knows + to mark the resulting pack as a promisor pack to tolerate missing + objects, laying foundation for "narrow" clones. + + * The first step to getting rid of mru API and using the + doubly-linked list API directly instead. + + * Retire mru API as it does not give enough abstraction over + underlying list API to be worth it. + + * Rewrite two more "git submodule" subcommands in C. + + * The tracing machinery learned to report tweaking of environment + variables as well. + + * Update Coccinelle rules to catch and optimize strbuf_addf(&buf, "%s", str) + + * Prevent "clang-format" from breaking line after function return type. + + * The sequencer infrastructure is shared across "git cherry-pick", + "git rebase -i", etc., and has always spawned "git commit" when it + needs to create a commit. It has been taught to do so internally, + when able, by reusing the codepath "git commit" itself uses, which + gives performance boost for a few tens of percents in some sample + scenarios. + + * Push the submodule version of collision-detecting SHA-1 hash + implementation a bit harder on builders. + + * Avoid mmapping small files while using packed refs (especially ones + with zero size, which would cause later munmap() to fail). + + * Conversion from uchar[20] to struct object_id continues. + + * More tests for wildmatch functions. + + * The code to binary search starting from a fan-out table (which is + how the packfile is indexed with object names) has been refactored + into a reusable helper. + + * We now avoid using identifiers that clash with C++ keywords. 
Even + though it is not a goal to compile Git with C++ compilers, changes + like this help use of code analysis tools that targets C++ on our + codebase. + + * The executable is now built in 'script' phase in Travis CI integration, + to follow the established practice, rather than during 'before_script' + phase. This allows the CI categorize the failures better ('failed' + is project's fault, 'errored' is build environment's). + (merge 3c93b82920 sg/travis-build-during-script-phase later to maint). + + * Writing out the index file when the only thing that changed in it + is the untracked cache information is often wasteful, and this has + been optimized out. + + * Various pieces of Perl code we have have been cleaned up. + + * Internal API clean-up to allow write_locked_index() optionally skip + writing the in-core index when it is not modified. + + +Also contains various documentation updates and code clean-ups. + + +Fixes since v2.16 +----------------- + + * An old regression in "git describe --all $annotated_tag^0" has been + fixed. + + * "git status" after moving a path in the working tree (hence making + it appear "removed") and then adding with the -N option (hence + making that appear "added") detected it as a rename, but did not + report the old and new pathnames correctly. + + * "git svn dcommit" did not take into account the fact that a + svn+ssh:// URL with a username@ (typically used for pushing) refers + to the same SVN repository without the username@ and failed when + svn.pushmergeinfo option is set. + + * API clean-up around revision traversal. + + * "git merge -Xours/-Xtheirs" learned to use our/their version when + resolving a conflicting updates to a symbolic link. + + * "git clone $there $here" is allowed even when here directory exists + as long as it is an empty directory, but the command incorrectly + removed it upon a failure of the operation. + + * "git commit --fixup" did not allow "-m" option to be used + at the same time; allow it to annotate resulting commit with more + text. + + * When resetting the working tree files recursively, the working tree + of submodules are now also reset to match. + + * "git stash -- " incorrectly blew away untracked files in + the directory that matched the pathspec, which has been corrected. + + * Instead of maintaining home-grown email address parsing code, ship + a copy of reasonably recent Mail::Address to be used as a fallback + in 'git send-email' when the platform lacks it. + (merge d60be8acab mm/send-email-fallback-to-local-mail-address later to maint). + + * "git add -p" was taught to ignore local changes to submodules as + they do not interfere with the partial addition of regular changes + anyway. + + * Avoid showing a warning message in the middle of a line of "git + diff" output. + (merge 4e056c989f nd/diff-flush-before-warning later to maint). + + * The http tracing code, often used to debug connection issues, + learned to redact potentially sensitive information from its output + so that it can be more safely sharable. + (merge 8ba18e6fa4 jt/http-redact-cookies later to maint). + + * Crash fix for a corner case where an error codepath tried to unlock + what it did not acquire lock on. + (merge 81fcb698e0 mr/packed-ref-store-fix later to maint). + + * The split-index mode had a few corner case bugs fixed. + (merge ae59a4e44f tg/split-index-fixes later to maint). + + * Assorted fixes to "git daemon". + (merge ed15e58efe jk/daemon-fixes later to maint). 
+ + * Completion of "git merge -s" (in contrib/) did not work + well in non-C locale. + (merge 7cc763aaa3 nd/list-merge-strategy later to maint). + + * Workaround for segfault with more recent versions of SVN. + (merge 7f6f75e97a ew/svn-branch-segfault-fix later to maint). + + * Plug recently introduced leaks in fsck. + (merge ba3a08ca0e jt/fsck-code-cleanup later to maint). + + * "git pull --rebase" did not pass verbosity setting down when + recursing into a submodule. + (merge a56771a668 sb/pull-rebase-submodule later to maint). + + * The way "git reset --hard" reports the commit the updated HEAD + points at is made consistent with the way how the commit title is + generated by the other parts of the system. This matters when the + title is spread across physically multiple lines. + (merge 1cf823fb68 tg/reset-hard-show-head-with-pretty later to maint). + + * Test fixes. + (merge 63b1a175ee sg/test-i18ngrep later to maint). + + * Some bugs around "untracked cache" feature have been fixed. This + will notice corrupt data in the untracked cache left by old and + buggy code and issue a warning---the index can be fixed by clearing + the untracked cache from it. + (merge 0cacebf099 nd/fix-untracked-cache-invalidation later to maint). + (merge 7bf0be7501 ab/untracked-cache-invalidation-docs later to maint). + + * "git blame HEAD COPYING" in a bare repository failed to run, while + "git blame HEAD -- COPYING" run just fine. This has been corrected. + + * "git add" files in the same directory, but spelling the directory + path in different cases on case insensitive filesystem, corrupted + the name hash data structure and led to unexpected results. This + has been corrected. + (merge c95525e90d bp/name-hash-dirname-fix later to maint). + + * "git rebase -p" mangled log messages of a merge commit, which is + now fixed. + (merge ed5144d7eb js/fix-merge-arg-quoting-in-rebase-p later to maint). + + * Some low level protocol codepath could crash when they get an + unexpected flush packet, which is now fixed. + (merge bb1356dc64 js/packet-read-line-check-null later to maint). + + * "git check-ignore" with multiple paths got confused when one is a + file and the other is a directory, which has been fixed. + (merge d60771e930 rs/check-ignore-multi later to maint). + + * "git describe $garbage" stopped giving any errors when the garbage + happens to be a string with 40 hexadecimal letters. + (merge a8e7a2bf0f sb/describe-blob later to maint). + + * Code to unquote single-quoted string (used in the parser for + configuration files, etc.) did not diagnose bogus input correctly + and produced bogus results instead. + (merge ddbbf8eb25 jk/sq-dequote-on-bogus-input later to maint). + + * Many places in "git apply" knew that "/dev/null" that signals + "there is no such file on this side of the diff" can be followed by + whitespace and garbage when parsing a patch, except for one, which + made an otherwise valid patch (e.g. ones from subversion) rejected. + (merge e454ad4bec tk/apply-dev-null-verify-name-fix later to maint). + + * We no longer create any *.spec file, so "make clean" should not + remove it. + (merge 4321bdcabb tz/do-not-clean-spec-file later to maint). + + * "git push" over http transport did not unquote the push-options + correctly. + (merge 90dce21eb0 jk/push-options-via-transport-fix later to maint). + + * "git send-email" learned to complain when the batch-size option is + not defined when the relogin-delay option is, since these two are + mutually required. 
+ (merge 9caa70697b xz/send-email-batch-size later to maint). + + * Y2k20 fix ;-) for our perl scripts. + (merge a40e06ee33 bw/perl-timegm-timelocal-fix later to maint). + + * Threaded "git grep" has been optimized to avoid allocation in code + section that is covered under a mutex. + (merge 38ef24dccf rv/grep-cleanup later to maint). + + * "git subtree" script (in contrib/) scripted around "git log", whose + output got affected by end-user configuration like log.showsignature + (merge 8841b5222c sg/subtree-signed-commits later to maint). + + * While finding unique object name abbreviation, the code may + accidentally have read beyond the end of the array of object names + in a pack. + (merge 21abed500c ds/find-unique-abbrev-optim later to maint). + + * Micro optimization in revision traversal code. + (merge ebbed3ba04 ds/mark-parents-uninteresting-optim later to maint). + + * "git commit" used to run "gc --auto" near the end, which was lost + when the command was reimplemented in C by mistake. + (merge 095c741edd ab/gc-auto-in-commit later to maint). + + * Allow running a couple of tests with "sh -x". + (merge c20bf94abc sg/cvs-tests-with-x later to maint). + + * The codepath to replace an existing entry in the index had a bug in + updating the name hash structure, which has been fixed. + (merge 0e267b7a24 bp/refresh-cache-ent-rehash-fix later to maint). + + * The transfer.fsckobjects configuration tells "git fetch" to + validate the data and connected-ness of objects in the received + pack; the code to perform this check has been taught about the + narrow clone's convention that missing objects that are reachable + from objects in a pack that came from a promissor remote is OK. + + * There was an unused file-scope static variable left in http.c when + building for versions of libCURL that is older than 7.19.4, which + has been fixed. + (merge b8fd6008ec rj/http-code-cleanup later to maint). + + * Shell script portability fix. + (merge 206a6ae013 ml/filter-branch-portability-fix later to maint). + + * Other minor doc, test and build updates and code cleanups. + (merge e2a5a028c7 bw/oidmap-autoinit later to maint). + (merge ec3b4b06f8 cl/t9001-cleanup later to maint). + (merge e1b3f3dd38 ks/submodule-doc-updates later to maint). + (merge fbac558a9b rs/describe-unique-abbrev later to maint). + (merge 8462ff43e4 tb/crlf-conv-flags later to maint). + (merge 7d68bb0766 rb/hashmap-h-compilation-fix later to maint). + (merge 3449847168 cc/sha1-file-name later to maint). + (merge ad622a256f ds/use-get-be64 later to maint). + (merge f919ffebed sg/cocci-move-array later to maint). + (merge 4e801463c7 jc/mailinfo-cleanup-fix later to maint). + (merge ef5b3a6c5e nd/shared-index-fix later to maint). + (merge 9f5258cbb8 tz/doc-show-defaults-to-head later to maint). + (merge b780e4407d jc/worktree-add-short-help later to maint). + (merge ae239fc8e5 rs/cocci-strbuf-addf-to-addstr later to maint). + (merge 2e22a85e5c nd/ignore-glob-doc-update later to maint). + (merge 3738031581 jk/gettext-poison later to maint). + (merge 54360a1956 rj/sparse-updates later to maint). + (merge 12e31a6b12 sg/doc-test-must-fail-args later to maint). + (merge 760f1ad101 bc/doc-interpret-trailers-grammofix later to maint). + (merge 4ccf461f56 bp/fsmonitor later to maint). + (merge a6119f82b1 jk/test-hashmap-updates later to maint). + (merge 5aea9fe6cc rd/typofix later to maint). + (merge e4e5da2796 sb/status-doc-fix later to maint). + (merge 7976e901c8 gs/test-unset-xdg-cache-home later to maint). 
+ (merge d023df1ee6 tg/worktree-create-tracking later to maint). + (merge 4cbe92fd41 sm/mv-dry-run-update later to maint). + (merge 75e5e9c3f7 sb/color-h-cleanup later to maint). + (merge 2708ef4af6 sg/t6300-modernize later to maint). + (merge d88e92d4e0 bw/doc-submodule-recurse-config-with-clone later to maint). + (merge f74bbc8dd2 jk/cached-commit-buffer later to maint). + (merge 1316416903 ms/non-ascii-ticks later to maint). + (merge 878056005e rs/strbuf-read-file-or-whine later to maint). + (merge 79f0ba1547 jk/strbuf-read-file-close-error later to maint). + (merge edfb8ba068 ot/ref-filter-cleanup later to maint). + (merge 11395a3b4b jc/test-must-be-empty later to maint). + (merge 768b9d6db7 mk/doc-pretty-fill later to maint). + (merge 2caa7b8d27 ab/man-sec-list later to maint). + (merge 40c17eb184 ks/t3200-typofix later to maint). + (merge bd9958c358 dp/merge-strategy-doc-fix later to maint). + (merge 9ee0540a40 js/ming-strftime later to maint). + (merge 1775e990f7 tz/complete-tag-delete-tagname later to maint). + (merge 00a4b03501 rj/warning-uninitialized-fix later to maint). + (merge b635ed97a0 jk/attributes-path-doc later to maint). diff --git a/Documentation/config.txt b/Documentation/config.txt index 0e25b2c92b..ce9102cea8 100644 --- a/Documentation/config.txt +++ b/Documentation/config.txt @@ -1398,7 +1398,16 @@ fetch.unpackLimit:: fetch.prune:: If true, fetch will automatically behave as if the `--prune` - option was given on the command line. See also `remote..prune`. + option was given on the command line. See also `remote..prune` + and the PRUNING section of linkgit:git-fetch[1]. + +fetch.pruneTags:: + If true, fetch will automatically behave as if the + `refs/tags/*:refs/tags/*` refspec was provided when pruning, + if not set already. This allows for setting both this option + and `fetch.prune` to maintain a 1=1 mapping to upstream + refs. See also `remote..pruneTags` and the PRUNING + section of linkgit:git-fetch[1]. fetch.output:: Control how ref update status is printed. Valid values are @@ -2945,6 +2954,15 @@ remote..prune:: remote (as if the `--prune` option was given on the command line). Overrides `fetch.prune` settings, if any. +remote..pruneTags:: + When set to true, fetching from this remote by default will also + remove any local tags that no longer exist on the remote if pruning + is activated in general via `remote..prune`, `fetch.prune` or + `--prune`. Overrides `fetch.pruneTags` settings, if any. ++ +See also `remote..prune` and the PRUNING section of +linkgit:git-fetch[1]. + remotes.:: The list of remotes which are fetched by "git remote update ". See linkgit:git-remote[1]. @@ -3210,7 +3228,8 @@ submodule.active:: submodule.recurse:: Specifies if commands recurse into submodules by default. This - applies to all commands that have a `--recurse-submodules` option. + applies to all commands that have a `--recurse-submodules` option, + except `clone`. Defaults to false. submodule.fetchJobs:: @@ -3343,6 +3362,10 @@ uploadpack.packObjectsHook:: was run. I.e., `upload-pack` will feed input intended for `pack-objects` to the hook, and expects a completed packfile on stdout. + +uploadpack.allowFilter:: + If this option is set, `upload-pack` will advertise partial + clone and partial fetch object filtering. 
+ Note that this configuration variable is ignored if it is seen in the repository-level config (this is a safety measure against fetching from diff --git a/Documentation/diff-options.txt b/Documentation/diff-options.txt index 743af97b06..e3a44f03cd 100644 --- a/Documentation/diff-options.txt +++ b/Documentation/diff-options.txt @@ -128,6 +128,14 @@ have to use `--diff-algorithm=default` option. These parameters can also be set individually with `--stat-width=`, `--stat-name-width=` and `--stat-count=`. +--compact-summary:: + Output a condensed summary of extended header information such + as file creations or deletions ("new" or "gone", optionally "+l" + if it's a symlink) and mode changes ("+x" or "-x" for adding + or removing executable bit respectively) in diffstat. The + information is put betwen the filename part and the graph + part. Implies `--stat`. + --numstat:: Similar to `--stat`, but shows number of added and deleted lines in decimal notation and pathname without @@ -508,6 +516,15 @@ occurrences of that string did not change). See the 'pickaxe' entry in linkgit:gitdiffcore[7] for more information. +--find-object=:: + Look for differences that change the number of occurrences of + the specified object. Similar to `-S`, just the argument is different + in that it doesn't search for a specific string but for a specific + object id. ++ +The object can be a blob or a submodule commit. It implies the `-t` option in +`git-log` to also find trees. + --pickaxe-all:: When `-S` or `-G` finds a change, show all the changes in that changeset, not just the files that contain the change @@ -516,6 +533,7 @@ information. --pickaxe-regex:: Treat the given to `-S` as an extended POSIX regular expression to match. + endif::git-format-patch[] -O:: diff --git a/Documentation/fetch-options.txt b/Documentation/fetch-options.txt index fb6bebbc61..8631e365f4 100644 --- a/Documentation/fetch-options.txt +++ b/Documentation/fetch-options.txt @@ -73,7 +73,22 @@ ifndef::git-pull[] are fetched due to an explicit refspec (either on the command line or in the remote configuration, for example if the remote was cloned with the --mirror option), then they are also - subject to pruning. + subject to pruning. Supplying `--prune-tags` is a shorthand for + providing the tag refspec. ++ +See the PRUNING section below for more details. + +-P:: +--prune-tags:: + Before fetching, remove any local tags that no longer exist on + the remote if `--prune` is enabled. This option should be used + more carefully, unlike `--prune` it will remove any local + references (local tags) that have been created. This option is + a shorthand for providing the explicit tag refspec along with + `--prune`, see the discussion about that in its documentation. ++ +See the PRUNING section below for more details. + endif::git-pull[] ifndef::git-pull[] diff --git a/Documentation/git-am.txt b/Documentation/git-am.txt index 12879e4029..6f6c34b0f4 100644 --- a/Documentation/git-am.txt +++ b/Documentation/git-am.txt @@ -16,7 +16,7 @@ SYNOPSIS [--exclude=] [--include=] [--reject] [-q | --quiet] [--[no-]scissors] [-S[]] [--patch-format=] [( | )...] -'git am' (--continue | --skip | --abort) +'git am' (--continue | --skip | --abort | --quit | --show-current-patch) DESCRIPTION ----------- @@ -167,6 +167,14 @@ default. You can use `--no-utf8` to override this. --abort:: Restore the original branch and abort the patching operation. +--quit:: + Abort the patching operation but keep HEAD and the index + untouched. 
+ +--show-current-patch:: + Show the patch being applied when "git am" is stopped because + of conflicts. + DISCUSSION ---------- diff --git a/Documentation/git-config.txt b/Documentation/git-config.txt index 14da5fc157..e09ed5d7d5 100644 --- a/Documentation/git-config.txt +++ b/Documentation/git-config.txt @@ -233,6 +233,12 @@ See also <>. using `--file`, `--global`, etc) and `on` when searching all config files. +CONFIGURATION +------------- +`pager.config` is only respected when listing configuration, i.e., when +using `--list` or any of the `--get-*` which may return multiple results. +The default is to use a pager. + [[FILES]] FILES ----- diff --git a/Documentation/git-daemon.txt b/Documentation/git-daemon.txt index 3c91db7bed..56d54a4898 100644 --- a/Documentation/git-daemon.txt +++ b/Documentation/git-daemon.txt @@ -20,6 +20,7 @@ SYNOPSIS [--inetd | [--listen=] [--port=] [--user= [--group=]]] + [--log-destination=(stderr|syslog|none)] [...] DESCRIPTION @@ -80,7 +81,8 @@ OPTIONS do not have the 'git-daemon-export-ok' file. --inetd:: - Have the server run as an inetd service. Implies --syslog. + Have the server run as an inetd service. Implies --syslog (may be + overridden with `--log-destination=`). Incompatible with --detach, --port, --listen, --user and --group options. @@ -110,8 +112,28 @@ OPTIONS zero for no limit. --syslog:: - Log to syslog instead of stderr. Note that this option does not imply - --verbose, thus by default only error conditions will be logged. + Short for `--log-destination=syslog`. + +--log-destination=:: + Send log messages to the specified destination. + Note that this option does not imply --verbose, + thus by default only error conditions will be logged. + The must be one of: ++ +-- +stderr:: + Write to standard error. + Note that if `--detach` is specified, + the process disconnects from the real standard error, + making this destination effectively equivalent to `none`. +syslog:: + Write to syslog, using the `git-daemon` identifier. +none:: + Disable all logging. +-- ++ +The default destination is `syslog` if `--inetd` or `--detach` is specified, +otherwise `stderr`. --user-path:: --user-path=:: diff --git a/Documentation/git-fetch.txt b/Documentation/git-fetch.txt index b153aefa68..e319935597 100644 --- a/Documentation/git-fetch.txt +++ b/Documentation/git-fetch.txt @@ -99,6 +99,93 @@ The latter use of the `remote..fetch` values can be overridden by giving the `--refmap=` parameter(s) on the command line. +PRUNING +------- + +Git has a default disposition of keeping data unless it's explicitly +thrown away; this extends to holding onto local references to branches +on remotes that have themselves deleted those branches. + +If left to accumulate, these stale references might make performance +worse on big and busy repos that have a lot of branch churn, and +e.g. make the output of commands like `git branch -a --contains +` needlessly verbose, as well as impacting anything else +that'll work with the complete set of known references. + +These remote-tracking references can be deleted as a one-off with +either of: + +------------------------------------------------ +# While fetching +$ git fetch --prune + +# Only prune, don't fetch +$ git remote prune +------------------------------------------------ + +To prune references as part of your normal workflow without needing to +remember to run that, set `fetch.prune` globally, or +`remote..prune` per-remote in the config. See +linkgit:git-config[1]. + +Here's where things get tricky and more specific. 
The pruning feature +doesn't actually care about branches, instead it'll prune local <-> +remote-references as a function of the refspec of the remote (see +`` and <> above). + +Therefore if the refspec for the remote includes +e.g. `refs/tags/*:refs/tags/*`, or you manually run e.g. `git fetch +--prune "refs/tags/*:refs/tags/*"` it won't be stale remote +tracking branches that are deleted, but any local tag that doesn't +exist on the remote. + +This might not be what you expect, i.e. you want to prune remote +``, but also explicitly fetch tags from it, so when you fetch +from it you delete all your local tags, most of which may not have +come from the `` remote in the first place. + +So be careful when using this with a refspec like +`refs/tags/*:refs/tags/*`, or any other refspec which might map +references from multiple remotes to the same local namespace. + +Since keeping up-to-date with both branches and tags on the remote is +a common use-case the `--prune-tags` option can be supplied along with +`--prune` to prune local tags that don't exist on the remote, and +force-update those tags that differ. Tag pruning can also be enabled +with `fetch.pruneTags` or `remote..pruneTags` in the config. See +linkgit:git-config[1]. + +The `--prune-tags` option is equivalent to having +`refs/tags/*:refs/tags/*` declared in the refspecs of the remote. This +can lead to some seemingly strange interactions: + +------------------------------------------------ +# These both fetch tags +$ git fetch --no-tags origin 'refs/tags/*:refs/tags/*' +$ git fetch --no-tags --prune-tags origin +------------------------------------------------ + +The reason it doesn't error out when provided without `--prune` or its +config versions is for flexibility of the configured versions, and to +maintain a 1=1 mapping between what the command line flags do, and +what the configuration versions do. + +It's reasonable to e.g. configure `fetch.pruneTags=true` in +`~/.gitconfig` to have tags pruned whenever `git fetch --prune` is +run, without making every invocation of `git fetch` without `--prune` +an error. + +Pruning tags with `--prune-tags` also works when fetching a URL +instead of a named remote. These will all prune tags not found on +origin: + +------------------------------------------------ +$ git fetch origin --prune --prune-tags +$ git fetch origin --prune 'refs/tags/*:refs/tags/*' +$ git fetch --prune --prune-tags +$ git fetch --prune 'refs/tags/*:refs/tags/*' +------------------------------------------------ + OUTPUT ------ diff --git a/Documentation/git-filter-branch.txt b/Documentation/git-filter-branch.txt index 3a52e4dce3..b634043183 100644 --- a/Documentation/git-filter-branch.txt +++ b/Documentation/git-filter-branch.txt @@ -222,6 +222,14 @@ this purpose, they are instead rewritten to point at the nearest ancestor that was not excluded. +EXIT STATUS +----------- + +On success, the exit status is `0`. If the filter can't find any commits to +rewrite, the exit status is `2`. On any other error, the exit status may be +any other non-zero value. 
+ + Examples -------- diff --git a/Documentation/git-gc.txt b/Documentation/git-gc.txt index 571b5a7e3c..3126e0dd00 100644 --- a/Documentation/git-gc.txt +++ b/Documentation/git-gc.txt @@ -15,8 +15,9 @@ DESCRIPTION ----------- Runs a number of housekeeping tasks within the current repository, such as compressing file revisions (to reduce disk space and increase -performance) and removing unreachable objects which may have been -created from prior invocations of 'git add'. +performance), removing unreachable objects which may have been +created from prior invocations of 'git add', packing refs, pruning +reflog, rerere metadata or stale working trees. Users are encouraged to run this task on a regular basis within each repository to maintain good disk space utilization and good @@ -45,20 +46,25 @@ OPTIONS With this option, 'git gc' checks whether any housekeeping is required; if not, it exits without performing any work. Some git commands run `git gc --auto` after performing - operations that could create many loose objects. + operations that could create many loose objects. Housekeeping + is required if there are too many loose objects or too many + packs in the repository. + -Housekeeping is required if there are too many loose objects or -too many packs in the repository. If the number of loose objects -exceeds the value of the `gc.auto` configuration variable, then -all loose objects are combined into a single pack using -`git repack -d -l`. Setting the value of `gc.auto` to 0 -disables automatic packing of loose objects. +If the number of loose objects exceeds the value of the `gc.auto` +configuration variable, then all loose objects are combined into a +single pack using `git repack -d -l`. Setting the value of `gc.auto` +to 0 disables automatic packing of loose objects. + If the number of packs exceeds the value of `gc.autoPackLimit`, then existing packs (except those marked with a `.keep` file) are consolidated into a single pack by using the `-A` option of 'git repack'. Setting `gc.autoPackLimit` to 0 disables automatic consolidation of packs. ++ +If houskeeping is required due to many loose objects or packs, all +other housekeeping tasks (e.g. rerere, working trees, reflog...) will +be performed as well. + --prune=:: Prune loose objects older than date (default is 2 weeks ago, @@ -133,6 +139,10 @@ The optional configuration variable `gc.pruneExpire` controls how old the unreferenced loose objects have to be before they are pruned. The default is "2 weeks ago". +Optional configuration variable `gc.worktreePruneExpire` controls how +old a stale working tree should be before `git worktree prune` deletes +it. Default is "3 months ago". + Notes ----- diff --git a/Documentation/git-index-pack.txt b/Documentation/git-index-pack.txt index 1b4b65d665..138edb47b6 100644 --- a/Documentation/git-index-pack.txt +++ b/Documentation/git-index-pack.txt @@ -77,6 +77,9 @@ OPTIONS --check-self-contained-and-connected:: Die if the pack contains broken links. For internal use only. +--fsck-objects:: + Die if the pack contains broken objects. For internal use only. + --threads=:: Specifies the number of threads to spawn when resolving deltas. This requires that index-pack be compiled with diff --git a/Documentation/git-pack-objects.txt b/Documentation/git-pack-objects.txt index aa403d02f3..81bc490ac5 100644 --- a/Documentation/git-pack-objects.txt +++ b/Documentation/git-pack-objects.txt @@ -255,6 +255,17 @@ a missing object is encountered. This is the default action. 
The form '--missing=allow-any' will allow object traversal to continue if a missing object is encountered. Missing objects will silently be omitted from the results. ++ +The form '--missing=allow-promisor' is like 'allow-any', but will only +allow object traversal to continue for EXPECTED promisor missing objects. +Unexpected missing object will raise an error. + +--exclude-promisor-objects:: + Omit objects that are known to be in the promisor remote. (This + option has the purpose of operating only on locally created objects, + so that when we repack, we still maintain a distinction between + locally created objects [without .promisor] and objects from the + promisor remote [with .promisor].) This is used with partial clone. SEE ALSO -------- diff --git a/Documentation/git-rebase.txt b/Documentation/git-rebase.txt index 8a861c1e0d..3277ca1432 100644 --- a/Documentation/git-rebase.txt +++ b/Documentation/git-rebase.txt @@ -12,7 +12,7 @@ SYNOPSIS [ []] 'git rebase' [-i | --interactive] [options] [--exec ] [--onto ] --root [] -'git rebase' --continue | --skip | --abort | --quit | --edit-todo +'git rebase' --continue | --skip | --abort | --quit | --edit-todo | --show-current-patch DESCRIPTION ----------- @@ -244,12 +244,22 @@ leave out at most one of A and B, in which case it defaults to HEAD. Keep the commits that do not change anything from its parents in the result. +--allow-empty-message:: + By default, rebasing commits with an empty message will fail. + This option overrides that behavior, allowing commits with empty + messages to be rebased. + --skip:: Restart the rebasing process by skipping the current patch. --edit-todo:: Edit the todo list during an interactive rebase. +--show-current-patch:: + Show the current patch in an interactive rebase or when rebase + is stopped because of conflicts. This is the equivalent of + `git show REBASE_HEAD`. + -m:: --merge:: Use merging strategies to rebase. When the recursive (default) merge diff --git a/Documentation/git-remote.txt b/Documentation/git-remote.txt index 577b969c1b..4feddc0293 100644 --- a/Documentation/git-remote.txt +++ b/Documentation/git-remote.txt @@ -172,10 +172,14 @@ With `-n` option, the remote heads are not queried first with 'prune':: -Deletes all stale remote-tracking branches under . -These stale branches have already been removed from the remote repository -referenced by , but are still locally available in -"remotes/". +Deletes stale references associated with . By default, stale +remote-tracking branches under are deleted, but depending on +global configuration and the configuration of the remote we might even +prune local tags that haven't been pushed there. Equivalent to `git +fetch --prune `, except that no new references will be fetched. ++ +See the PRUNING section of linkgit:git-fetch[1] for what it'll prune +depending on various configuration. + With `--dry-run` option, report what branches will be pruned, but do not actually prune them. @@ -189,7 +193,7 @@ remotes.default is not defined, all remotes which do not have the configuration parameter remote..skipDefaultUpdate set to true will be updated. (See linkgit:git-config[1]). + -With `--prune` option, prune all the remotes that are updated. +With `--prune` option, run pruning against all the remotes that are updated. 
DISCUSSION diff --git a/Documentation/git-send-email.txt b/Documentation/git-send-email.txt index 8060ea35c5..71ef97ba9b 100644 --- a/Documentation/git-send-email.txt +++ b/Documentation/git-send-email.txt @@ -84,6 +84,11 @@ See the CONFIGURATION section for `sendemail.multiEdit`. the value of GIT_AUTHOR_IDENT, or GIT_COMMITTER_IDENT if that is not set, as returned by "git var -l". +--reply-to=
:: + Specify the address where replies from recipients should go to. + Use this if replies to messages should go to another address than what + is specified with the --from parameter. + --in-reply-to=:: Make the first mail (or all the mails with `--no-thread`) appear as a reply to the given Message-Id, which avoids breaking threads to diff --git a/Documentation/git-shortlog.txt b/Documentation/git-shortlog.txt index ee6c5476c1..5e35ea18ac 100644 --- a/Documentation/git-shortlog.txt +++ b/Documentation/git-shortlog.txt @@ -8,8 +8,8 @@ git-shortlog - Summarize 'git log' output SYNOPSIS -------- [verse] -git log --pretty=short | 'git shortlog' [] 'git shortlog' [] [] [[\--] ...] +git log --pretty=short | 'git shortlog' [] DESCRIPTION ----------- diff --git a/Documentation/git-status.txt b/Documentation/git-status.txt index f9c91c721e..6c230c0c72 100644 --- a/Documentation/git-status.txt +++ b/Documentation/git-status.txt @@ -130,6 +130,11 @@ ignored, then the directory is not shown, but all contents are shown. without options are equivalent to 'always' and 'never' respectively. +--ahead-behind:: +--no-ahead-behind:: + Display or do not display detailed ahead/behind counts for the + branch relative to its upstream branch. Defaults to true. + ...:: See the 'pathspec' entry in linkgit:gitglossary[7]. diff --git a/Documentation/git-tag.txt b/Documentation/git-tag.txt index 956fc019f9..1d17101bac 100644 --- a/Documentation/git-tag.txt +++ b/Documentation/git-tag.txt @@ -9,7 +9,7 @@ git-tag - Create, list, delete or verify a tag object signed with GPG SYNOPSIS -------- [verse] -'git tag' [-a | -s | -u ] [-f] [-m | -F ] +'git tag' [-a | -s | -u ] [-f] [-m | -F ] [-e] [ | ] 'git tag' -d ... 'git tag' [-n[]] -l [--contains ] [--no-contains ] @@ -167,6 +167,12 @@ This option is only applicable when listing tags without annotation lines. Implies `-a` if none of `-a`, `-s`, or `-u ` is given. +-e:: +--edit:: + The message taken from file with `-F` and command line with + `-m` are usually used as the tag message unmodified. + This option lets you further edit the message taken from these sources. + --cleanup=:: This option sets how the tag message is cleaned up. The '' can be one of 'verbatim', 'whitespace' and 'strip'. The diff --git a/Documentation/git-update-index.txt b/Documentation/git-update-index.txt index ad2383d7ed..3897a59ee9 100644 --- a/Documentation/git-update-index.txt +++ b/Documentation/git-update-index.txt @@ -464,6 +464,32 @@ command reads the index; while when `--[no-|force-]untracked-cache` are used, the untracked cache is immediately added to or removed from the index. +Before 2.17, the untracked cache had a bug where replacing a directory +with a symlink to another directory could cause it to incorrectly show +files tracked by git as untracked. See the "status: add a failing test +showing a core.untrackedCache bug" commit to git.git. A workaround for +that is (and this might work for other undiscovered bugs in the +future): + +---------------- +$ git -c core.untrackedCache=false status +---------------- + +This bug has also been shown to affect non-symlink cases of replacing +a directory with a file when it comes to the internal structures of +the untracked cache, but no case has been reported where this resulted in +wrong "git status" output. + +There are also cases where existing indexes written by git versions +before 2.17 will reference directories that don't exist anymore, +potentially causing many "could not open directory" warnings to be +printed on "git status". 
These are new warnings for existing issues +that were previously silently discarded. + +As with the bug described above the solution is to one-off do a "git +status" run with `core.untrackedCache=false` to flush out the leftover +bad data. + File System Monitor ------------------- diff --git a/Documentation/git-worktree.txt b/Documentation/git-worktree.txt index 5ac3f68ab5..e7eb24ab85 100644 --- a/Documentation/git-worktree.txt +++ b/Documentation/git-worktree.txt @@ -12,7 +12,9 @@ SYNOPSIS 'git worktree add' [-f] [--detach] [--checkout] [--lock] [-b ] [] 'git worktree list' [--porcelain] 'git worktree lock' [--reason ] +'git worktree move' 'git worktree prune' [-n] [-v] [--expire ] +'git worktree remove' [--force] 'git worktree unlock' DESCRIPTION @@ -34,10 +36,6 @@ The working tree's administrative files in the repository (see `git worktree prune` in the main or any linked working tree to clean up any stale administrative files. -If you move a linked working tree, you need to manually update the -administrative files so that they do not get pruned automatically. See -section "DETAILS" for more information. - If a linked working tree is stored on a portable device or network share which is not always mounted, you can prevent its administrative files from being pruned by issuing the `git worktree lock` command, optionally @@ -80,10 +78,22 @@ files from being pruned automatically. This also prevents it from being moved or deleted. Optionally, specify a reason for the lock with `--reason`. +move:: + +Move a working tree to a new location. Note that the main working tree +or linked working trees containing submodules cannot be moved. + prune:: Prune working tree information in $GIT_DIR/worktrees. +remove:: + +Remove a working tree. Only clean working trees (no untracked files +and no modification in tracked files) can be removed. Unclean working +trees or ones with submodules can be removed with `--force`. The main +working tree cannot be removed. + unlock:: Unlock a working tree, allowing it to be pruned, moved or deleted. @@ -93,9 +103,10 @@ OPTIONS -f:: --force:: - By default, `add` refuses to create a new working tree when `` is a branch name and - is already checked out by another working tree. This option overrides - that safeguard. + By default, `add` refuses to create a new working tree when + `` is a branch name and is already checked out by + another working tree and `remove` refuses to remove an unclean + working tree. This option overrides that safeguard. -b :: -B :: @@ -197,7 +208,7 @@ thumb is do not make any assumption about whether a path belongs to $GIT_DIR or $GIT_COMMON_DIR when you need to directly access something inside $GIT_DIR. Use `git rev-parse --git-path` to get the final path. -If you move a linked working tree, you need to update the 'gitdir' file +If you manually move a linked working tree, you need to update the 'gitdir' file in the entry's directory. For example, if a linked working tree is moved to `/newpath/test-next` and its `.git` file points to `/path/main/.git/worktrees/test-next`, then update @@ -277,13 +288,6 @@ Multiple checkout in general is still experimental, and the support for submodules is incomplete. It is NOT recommended to make multiple checkouts of a superproject. 
-git-worktree could provide more automation for tasks currently -performed manually, such as: - -- `remove` to remove a linked working tree and its administrative files (and - warn if the working tree is dirty) -- `mv` to move or rename a working tree and update its administrative files - GIT --- Part of the linkgit:git[1] suite diff --git a/Documentation/git.txt b/Documentation/git.txt index 8163b5796b..4767860e72 100644 --- a/Documentation/git.txt +++ b/Documentation/git.txt @@ -849,6 +849,9 @@ Report bugs to the Git mailing list where the development and maintenance is primarily done. You do not have to be subscribed to the list to send a message there. +Issues which are security relevant should be disclosed privately to +the Git Security mailing list . + SEE ALSO -------- linkgit:gittutorial[7], linkgit:gittutorial-2[7], diff --git a/Documentation/gitattributes.txt b/Documentation/gitattributes.txt index 30687de81a..1094fe2b5b 100644 --- a/Documentation/gitattributes.txt +++ b/Documentation/gitattributes.txt @@ -56,9 +56,16 @@ Unspecified:: When more than one pattern matches the path, a later line overrides an earlier line. This overriding is done per -attribute. The rules how the pattern matches paths are the -same as in `.gitignore` files; see linkgit:gitignore[5]. -Unlike `.gitignore`, negative patterns are forbidden. +attribute. + +The rules by which the pattern matches paths are the same as in +`.gitignore` files (see linkgit:gitignore[5]), with a few exceptions: + + - negative patterns are forbidden + + - patterns that match a directory do not recursively match paths + inside that directory (so using the trailing-slash `path/` syntax is + pointless in an attributes file; use `path/**` instead) When deciding what attributes are assigned to a path, Git consults `$GIT_DIR/info/attributes` file (which has the highest @@ -392,46 +399,14 @@ Long Running Filter Process If the filter command (a string value) is defined via `filter..process` then Git can process all blobs with a single filter invocation for the entire life of a single Git -command. This is achieved by using a packet format (pkt-line, -see technical/protocol-common.txt) based protocol over standard -input and standard output as follows. All packets, except for the -"*CONTENT" packets and the "0000" flush packet, are considered -text and therefore are terminated by a LF. - -Git starts the filter when it encounters the first file -that needs to be cleaned or smudged. After the filter started -Git sends a welcome message ("git-filter-client"), a list of supported -protocol version numbers, and a flush packet. Git expects to read a welcome -response message ("git-filter-server"), exactly one protocol version number -from the previously sent list, and a flush packet. All further -communication will be based on the selected version. The remaining -protocol description below documents "version=2". Please note that -"version=42" in the example below does not exist and is only there -to illustrate how the protocol would look like with more than one -version. - -After the version negotiation Git sends a list of all capabilities that -it supports and a flush packet. 
Git expects to read a list of desired -capabilities, which must be a subset of the supported capabilities list, -and a flush packet as response: ------------------------- -packet: git> git-filter-client -packet: git> version=2 -packet: git> version=42 -packet: git> 0000 -packet: git< git-filter-server -packet: git< version=2 -packet: git< 0000 -packet: git> capability=clean -packet: git> capability=smudge -packet: git> capability=not-yet-invented -packet: git> 0000 -packet: git< capability=clean -packet: git< capability=smudge -packet: git< 0000 ------------------------- -Supported filter capabilities in version 2 are "clean", "smudge", -and "delay". +command. This is achieved by using the long-running process protocol +(described in technical/long-running-process-protocol.txt). + +When Git encounters the first file that needs to be cleaned or smudged, +it starts the filter and performs the handshake. In the handshake, the +welcome message sent by Git is "git-filter-client", only version 2 is +suppported, and the supported capabilities are "clean", "smudge", and +"delay". Afterwards Git sends a list of "key=value" pairs terminated with a flush packet. The list will contain at least the filter command @@ -517,12 +492,6 @@ the protocol then Git will stop the filter process and restart it with the next file that needs to be processed. Depending on the `filter..required` flag Git will interpret that as error. -After the filter has processed a command it is expected to wait for -a "key=value" list containing the next command. Git will close -the command pipe on exit. The filter is expected to detect EOF -and exit gracefully on its own. Git will wait until the filter -process has stopped. - Delay ^^^^^ @@ -752,6 +721,8 @@ patterns are available: - `fountain` suitable for Fountain documents. +- `golang` suitable for source code in the Go language. + - `html` suitable for HTML/XHTML documents. - `java` suitable for source code in the Java language. diff --git a/Documentation/gitremote-helpers.txt b/Documentation/gitremote-helpers.txt index 4a584f3c5d..4b8c93ec59 100644 --- a/Documentation/gitremote-helpers.txt +++ b/Documentation/gitremote-helpers.txt @@ -466,6 +466,13 @@ set by Git if the remote helper has the 'option' capability. Transmit as a push option. As the push option must not contain LF or NUL characters, the string is not encoded. +'option from-promisor' {'true'|'false'}:: + Indicate that these objects are being fetched from a promisor. + +'option no-dependents' {'true'|'false'}:: + Indicate that only the objects wanted need to be fetched, not + their dependents. + SEE ALSO -------- linkgit:git-remote[1] diff --git a/Documentation/gitrepository-layout.txt b/Documentation/gitrepository-layout.txt index c60bcad44a..e85148f05e 100644 --- a/Documentation/gitrepository-layout.txt +++ b/Documentation/gitrepository-layout.txt @@ -275,11 +275,6 @@ worktrees//locked:: or manually by `git worktree prune`. The file may contain a string explaining why the repository is locked. -worktrees//link:: - If this file exists, it is a hard link to the linked .git - file. It is used to detect if the linked repository is - manually removed. - SEE ALSO -------- linkgit:git-init[1], diff --git a/Documentation/merge-options.txt b/Documentation/merge-options.txt index 3888c3ff85..63a3fc0954 100644 --- a/Documentation/merge-options.txt +++ b/Documentation/merge-options.txt @@ -35,7 +35,8 @@ set to `no` at the beginning of them. --no-ff:: Create a merge commit even when the merge resolves as a fast-forward. 
This is the default behaviour when merging an - annotated (and possibly signed) tag. + annotated (and possibly signed) tag that is not stored in + its natural place in 'refs/tags/' hierarchy. --ff-only:: Refuse to merge and exit with a non-zero status unless the diff --git a/Documentation/merge-strategies.txt b/Documentation/merge-strategies.txt index fd5d748d1b..4a58aad4b8 100644 --- a/Documentation/merge-strategies.txt +++ b/Documentation/merge-strategies.txt @@ -40,7 +40,7 @@ the other tree did, declaring 'our' history contains all that happened in it. theirs;; This is the opposite of 'ours'; note that, unlike 'ours', there is - no 'theirs' merge stragegy to confuse this merge option with. + no 'theirs' merge strategy to confuse this merge option with. patience;; With this option, 'merge-recursive' spends a little extra time diff --git a/Documentation/pretty-formats.txt b/Documentation/pretty-formats.txt index e664c088a5..6109ef09aa 100644 --- a/Documentation/pretty-formats.txt +++ b/Documentation/pretty-formats.txt @@ -202,7 +202,7 @@ endif::git-rev-list[] - '%>>()', '%>>|()': similar to '%>()', '%>|()' respectively, except that if the next placeholder takes more spaces than given and there are spaces on its left, use those spaces -- '%><()', '%><|()': similar to '% <()', '%<|()' +- '%><()', '%><|()': similar to '%<()', '%<|()' respectively, but padding both sides (i.e. the text is centered) - %(trailers[:options]): display the trailers of the body as interpreted by linkgit:git-interpret-trailers[1]. The `trailers` string may be diff --git a/Documentation/rev-list-options.txt b/Documentation/rev-list-options.txt index 22f5c9b43d..7b273635de 100644 --- a/Documentation/rev-list-options.txt +++ b/Documentation/rev-list-options.txt @@ -750,10 +750,21 @@ The form '--missing=allow-any' will allow object traversal to continue if a missing object is encountered. Missing objects will silently be omitted from the results. + +The form '--missing=allow-promisor' is like 'allow-any', but will only +allow object traversal to continue for EXPECTED promisor missing objects. +Unexpected missing objects will raise an error. ++ The form '--missing=print' is like 'allow-any', but will also print a list of the missing objects. Object IDs are prefixed with a ``?'' character. endif::git-rev-list[] +--exclude-promisor-objects:: + (For internal use only.) Prefilter object traversal at + promisor boundary. This is used with partial clone. This is + stronger than `--missing=allow-promisor` because it limits the + traversal, rather than just silencing errors about missing + objects. + --no-walk[=(sorted|unsorted)]:: Only show the given commits, but do not traverse their ancestors. This has no effect if a range is specified. 
If the argument diff --git a/Documentation/technical/api-object-access.txt b/Documentation/technical/api-object-access.txt index 03bb0e950d..a1162e5bcd 100644 --- a/Documentation/technical/api-object-access.txt +++ b/Documentation/technical/api-object-access.txt @@ -7,7 +7,7 @@ Talk about and family, things like * read_object_with_reference() * has_sha1_file() * write_sha1_file() -* pretend_sha1_file() +* pretend_object_file() * lookup_{object,commit,tag,blob,tree} * parse_{object,commit,tag,blob,tree} * Use of object flags diff --git a/Documentation/technical/http-protocol.txt b/Documentation/technical/http-protocol.txt index a0e45f2889..64f49d0bbb 100644 --- a/Documentation/technical/http-protocol.txt +++ b/Documentation/technical/http-protocol.txt @@ -214,10 +214,12 @@ smart server reply: S: Cache-Control: no-cache S: S: 001e# service=git-upload-pack\n + S: 0000 S: 004895dcfa3633004da0049d3d0fa03f80589cbcaf31 refs/heads/maint\0multi_ack\n S: 0042d049f6c27a2244e12041955e262a404c7faba355 refs/heads/master\n S: 003c2cb58b79488a98d2721cea644875a8dd0026b115 refs/tags/v1.0\n S: 003fa3c2e2402b99163d1d59756e5f207ae21cccba4c refs/tags/v1.0^{}\n + S: 0000 The client may send Extra Parameters (see Documentation/technical/pack-protocol.txt) as a colon-separated string @@ -277,6 +279,7 @@ The returned response contains "version 1" if "version=1" was sent as an Extra Parameter. smart_reply = PKT-LINE("# service=$servicename" LF) + "0000" *1("version 1") ref_list "0000" diff --git a/Documentation/technical/long-running-process-protocol.txt b/Documentation/technical/long-running-process-protocol.txt new file mode 100644 index 0000000000..aa0aa9af1c --- /dev/null +++ b/Documentation/technical/long-running-process-protocol.txt @@ -0,0 +1,50 @@ +Long-running process protocol +============================= + +This protocol is used when Git needs to communicate with an external +process throughout the entire life of a single Git command. All +communication is in pkt-line format (see technical/protocol-common.txt) +over standard input and standard output. + +Handshake +--------- + +Git starts by sending a welcome message (for example, +"git-filter-client"), a list of supported protocol version numbers, and +a flush packet. Git expects to read the welcome message with "server" +instead of "client" (for example, "git-filter-server"), exactly one +protocol version number from the previously sent list, and a flush +packet. All further communication will be based on the selected version. +The remaining protocol description below documents "version=2". Please +note that "version=42" in the example below does not exist and is only +there to illustrate how the protocol would look like with more than one +version. + +After the version negotiation Git sends a list of all capabilities that +it supports and a flush packet. Git expects to read a list of desired +capabilities, which must be a subset of the supported capabilities list, +and a flush packet as response: +------------------------ +packet: git> git-filter-client +packet: git> version=2 +packet: git> version=42 +packet: git> 0000 +packet: git< git-filter-server +packet: git< version=2 +packet: git< 0000 +packet: git> capability=clean +packet: git> capability=smudge +packet: git> capability=not-yet-invented +packet: git> 0000 +packet: git< capability=clean +packet: git< capability=smudge +packet: git< 0000 +------------------------ + +Shutdown +-------- + +Git will close +the command pipe on exit. 
The filter is expected to detect EOF +and exit gracefully on its own. Git will wait until the filter +process has stopped. diff --git a/Documentation/technical/pack-protocol.txt b/Documentation/technical/pack-protocol.txt index cd31edc91e..7fee6b780a 100644 --- a/Documentation/technical/pack-protocol.txt +++ b/Documentation/technical/pack-protocol.txt @@ -241,6 +241,7 @@ out of what the server said it could do with the first 'want' line. upload-request = want-list *shallow-line *1depth-request + [filter-request] flush-pkt want-list = first-want @@ -256,6 +257,8 @@ out of what the server said it could do with the first 'want' line. additional-want = PKT-LINE("want" SP obj-id) depth = 1*DIGIT + + filter-request = PKT-LINE("filter" SP filter-spec) ---- Clients MUST send all the obj-ids it wants from the reference @@ -278,6 +281,11 @@ complete those commits. Commits whose parents are not received as a result are defined as shallow and marked as such in the server. This information is sent back to the client in the next step. +The client can optionally request that pack-objects omit various +objects from the packfile using one of several filtering techniques. +These are intended for use with partial clone and partial fetch +operations. See `rev-list` for possible "filter-spec" values. + Once all the 'want's and 'shallow's (and optional 'deepen') are transferred, clients MUST send a flush-pkt, to tell the server side that it is done sending the list. diff --git a/Documentation/technical/protocol-capabilities.txt b/Documentation/technical/protocol-capabilities.txt index 26dcc6f502..332d209b58 100644 --- a/Documentation/technical/protocol-capabilities.txt +++ b/Documentation/technical/protocol-capabilities.txt @@ -309,3 +309,11 @@ to accept a signed push certificate, and asks the to be included in the push certificate. A send-pack client MUST NOT send a push-cert packet unless the receive-pack server advertises this capability. + +filter +------ + +If the upload-pack server advertises the 'filter' capability, +fetch-pack may send "filter" commands to request a partial clone +or partial fetch and request that the server omit various objects +from the packfile. diff --git a/Documentation/technical/repository-version.txt b/Documentation/technical/repository-version.txt index 00ad37986e..e03eaccebc 100644 --- a/Documentation/technical/repository-version.txt +++ b/Documentation/technical/repository-version.txt @@ -86,3 +86,15 @@ for testing format-1 compatibility. When the config key `extensions.preciousObjects` is set to `true`, objects in the repository MUST NOT be deleted (e.g., by `git-prune` or `git repack -d`). + +`partialclone` +~~~~~~~~~~~~~~ + +When the config key `extensions.partialclone` is set, it indicates +that the repo was created with a partial clone (or later performed +a partial fetch) and that the remote may have omitted sending +certain unwanted objects. Such a remote is called a "promisor remote" +and it promises that all such omitted objects can be fetched from it +in the future. + +The value of this key is the name of the promisor remote. 
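(Illustrative aside, not part of the patch above.) As a minimal sketch of the `partialclone` extension just described: a repository that was created by a partial clone might carry a configuration along the following lines. The remote name "origin" and the use of repository format version 1 (which the extensions mechanism generally requires) are assumptions made for this example, not values taken from the diff:

	[core]
		repositoryformatversion = 1
	[extensions]
		partialclone = origin

With `extensions.partialclone` naming "origin" as the promisor remote, objects the server deliberately omitted are expected to remain fetchable from that remote later, which is what the `filter` capability and filter-request added to the pack protocol above make possible.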
diff --git a/GIT-VERSION-GEN b/GIT-VERSION-GEN index 8945e05f52..b4fb7d9a39 100755 --- a/GIT-VERSION-GEN +++ b/GIT-VERSION-GEN @@ -1,7 +1,7 @@ #!/bin/sh GVF=GIT-VERSION-FILE -DEF_VER=v2.16.3 +DEF_VER=v2.17.0-rc1 LF=' ' diff --git a/INSTALL b/INSTALL index ffb071e9f0..c39006e8e7 100644 --- a/INSTALL +++ b/INSTALL @@ -84,9 +84,29 @@ Issues of note: GIT_EXEC_PATH=`pwd` PATH=`pwd`:$PATH - GITPERLLIB=`pwd`/perl/blib/lib + GITPERLLIB=`pwd`/perl/build/lib export GIT_EXEC_PATH PATH GITPERLLIB + - By default (unless NO_PERL is provided) Git will ship various perl + scripts. However, for simplicity it doesn't use the + ExtUtils::MakeMaker toolchain to decide where to place the perl + libraries. Depending on the system this can result in the perl + libraries not being where you'd like them if they're expected to be + used by things other than Git itself. + + Manually supplying a perllibdir prefix should fix this, if this is + a problem you care about, e.g.: + + prefix=/usr perllibdir=/usr/$(/usr/bin/perl -MConfig -wle 'print substr $Config{installsitelib}, 1 + length $Config{siteprefixexp}') + + Will result in e.g. perllibdir=/usr/share/perl/5.26.1 on Debian, + perllibdir=/usr/share/perl5 (which we'd use by default) on CentOS. + + - Unless NO_PERL is provided Git will ship various perl libraries it + needs. Distributors of Git will usually want to set + NO_PERL_CPAN_FALLBACKS if NO_PERL is not provided to use their own + copies of the CPAN modules Git needs. + - Git is reasonably self-sufficient, but does depend on a few external programs and libraries. Git can be used without most of them by adding the approriate "NO_=YesPlease" to the make command line or @@ -106,7 +126,8 @@ Issues of note: Redhat/Fedora are reported to ship Perl binary package with some core modules stripped away (see http://lwn.net/Articles/477234/), so you might need to install additional packages other than Perl - itself, e.g. Time::HiRes. + itself, e.g. Digest::MD5, File::Spec, File::Temp, Net::Domain, + Net::SMTP, and Time::HiRes. - git-imap-send needs the OpenSSL library to talk IMAP over SSL if you are using libcurl older than 7.34.0. Otherwise you can use diff --git a/Makefile b/Makefile index 7f40f76739..96f6138f63 100644 --- a/Makefile +++ b/Makefile @@ -29,10 +29,10 @@ all:: # Perl-compatible regular expressions instead of standard or extended # POSIX regular expressions. # -# Currently USE_LIBPCRE is a synonym for USE_LIBPCRE1, define -# USE_LIBPCRE2 instead if you'd like to use version 2 of the PCRE -# library. The USE_LIBPCRE flag will likely be changed to mean v2 by -# default in future releases. +# USE_LIBPCRE is a synonym for USE_LIBPCRE2, define USE_LIBPCRE1 +# instead if you'd like to use the legacy version 1 of the PCRE +# library. Support for version 1 will likely be removed in some future +# release of Git, as upstream has all but abandoned it. # # When using USE_LIBPCRE1, define NO_LIBPCRE1_JIT if the PCRE v1 # library is compiled without --enable-jit. We will auto-detect @@ -294,11 +294,14 @@ all:: # # Define PERL_PATH to the path of your Perl binary (usually /usr/bin/perl). # -# Define NO_PERL_MAKEMAKER if you cannot use Makefiles generated by perl's -# MakeMaker (e.g. using ActiveState under Cygwin). -# # Define NO_PERL if you do not want Perl scripts or libraries at all. # +# Define NO_PERL_CPAN_FALLBACKS if you do not want to install bundled +# copies of CPAN modules that serve as a fallback in case the modules +# are not available on the system. 
This option is intended for +# distributions that want to use their packaged versions of Perl +# modules, instead of the fallbacks shipped with Git. +# # Define PYTHON_PATH to the path of your Python binary (often /usr/bin/python # but /usr/bin/python2.7 on some platforms). # @@ -332,6 +335,13 @@ all:: # when hardlinking a file to another name and unlinking the original file right # away (some NTFS drivers seem to zero the contents in that scenario). # +# Define INSTALL_SYMLINKS if you prefer to have everything that can be +# symlinked between bin/ and libexec/ to use relative symlinks between +# the two. This option overrides NO_CROSS_DIRECTORY_HARDLINKS and +# NO_INSTALL_HARDLINKS which will also use symlinking by indirection +# within the same directory in some cases, INSTALL_SYMLINKS will +# always symlink to the final target directly. +# # Define NO_CROSS_DIRECTORY_HARDLINKS if you plan to distribute the installed # programs as a tar, where bin/ and libexec/ might be on different file systems. # @@ -471,14 +481,14 @@ ARFLAGS = rcs # This can help installing the suite in a relocatable way. prefix = $(HOME) -bindir_relative = bin -bindir = $(prefix)/$(bindir_relative) +bindir = $(prefix)/bin mandir = $(prefix)/share/man infodir = $(prefix)/share/info gitexecdir = libexec/git-core mergetoolsdir = $(gitexecdir)/mergetools sharedir = $(prefix)/share gitwebdir = $(sharedir)/gitweb +perllibdir = $(sharedir)/perl5 localedir = $(sharedir)/locale template_dir = share/git-core/templates htmldir = $(prefix)/share/doc/git-doc @@ -488,11 +498,13 @@ lib = lib # DESTDIR = pathsep = : +bindir_relative = $(patsubst $(prefix)/%,%,$(bindir)) mandir_relative = $(patsubst $(prefix)/%,%,$(mandir)) infodir_relative = $(patsubst $(prefix)/%,%,$(infodir)) +gitexecdir_relative = $(patsubst $(prefix)/%,%,$(gitexecdir)) htmldir_relative = $(patsubst $(prefix)/%,%,$(htmldir)) -export prefix bindir sharedir sysconfdir gitwebdir localedir +export prefix bindir sharedir sysconfdir gitwebdir perllibdir localedir CC = cc AR = ar @@ -804,6 +816,7 @@ LIB_OBJS += ewah/ewah_bitmap.o LIB_OBJS += ewah/ewah_io.o LIB_OBJS += ewah/ewah_rlw.o LIB_OBJS += exec_cmd.o +LIB_OBJS += fetch-object.o LIB_OBJS += fetch-pack.o LIB_OBJS += fsck.o LIB_OBJS += fsmonitor.o @@ -832,7 +845,6 @@ LIB_OBJS += merge.o LIB_OBJS += merge-blobs.o LIB_OBJS += merge-recursive.o LIB_OBJS += mergesort.o -LIB_OBJS += mru.o LIB_OBJS += name-hash.o LIB_OBJS += notes.o LIB_OBJS += notes-cache.o @@ -1166,13 +1178,18 @@ ifdef NO_LIBGEN_H COMPAT_OBJS += compat/basename.o endif -USE_LIBPCRE1 ?= $(USE_LIBPCRE) +USE_LIBPCRE2 ?= $(USE_LIBPCRE) -ifneq (,$(USE_LIBPCRE1)) - ifdef USE_LIBPCRE2 -$(error Only set USE_LIBPCRE1 (or its alias USE_LIBPCRE) or USE_LIBPCRE2, not both!) +ifneq (,$(USE_LIBPCRE2)) + ifdef USE_LIBPCRE1 +$(error Only set USE_LIBPCRE2 (or its alias USE_LIBPCRE) or USE_LIBPCRE1, not both!) 
endif + BASIC_CFLAGS += -DUSE_LIBPCRE2 + EXTLIBS += -lpcre2-8 +endif + +ifdef USE_LIBPCRE1 BASIC_CFLAGS += -DUSE_LIBPCRE1 EXTLIBS += -lpcre @@ -1181,11 +1198,6 @@ ifdef NO_LIBPCRE1_JIT endif endif -ifdef USE_LIBPCRE2 - BASIC_CFLAGS += -DUSE_LIBPCRE2 - EXTLIBS += -lpcre2-8 -endif - ifdef LIBPCREDIR BASIC_CFLAGS += -I$(LIBPCREDIR)/include EXTLIBS += -L$(LIBPCREDIR)/$(lib) $(CC_LD_DYNPATH)$(LIBPCREDIR)/$(lib) @@ -1515,7 +1527,9 @@ else LIB_OBJS += sha1dc_git.o ifdef DC_SHA1_EXTERNAL ifdef DC_SHA1_SUBMODULE + ifneq ($(DC_SHA1_SUBMODULE),auto) $(error Only set DC_SHA1_EXTERNAL or DC_SHA1_SUBMODULE, not both) + endif endif BASIC_CFLAGS += -DDC_SHA1_EXTERNAL EXTLIBS += -lsha1detectcoll @@ -1543,9 +1557,6 @@ ifdef SHA1_MAX_BLOCK_SIZE LIB_OBJS += compat/sha1-chunked.o BASIC_CFLAGS += -DSHA1_MAX_BLOCK_SIZE="$(SHA1_MAX_BLOCK_SIZE)" endif -ifdef NO_PERL_MAKEMAKER - export NO_PERL_MAKEMAKER -endif ifdef NO_HSTRERROR COMPAT_CFLAGS += -DNO_HSTRERROR COMPAT_OBJS += compat/hstrerror.o @@ -1732,10 +1743,13 @@ ETC_GITATTRIBUTES_SQ = $(subst ','\'',$(ETC_GITATTRIBUTES)) DESTDIR_SQ = $(subst ','\'',$(DESTDIR)) bindir_SQ = $(subst ','\'',$(bindir)) bindir_relative_SQ = $(subst ','\'',$(bindir_relative)) +mandir_SQ = $(subst ','\'',$(mandir)) mandir_relative_SQ = $(subst ','\'',$(mandir_relative)) infodir_relative_SQ = $(subst ','\'',$(infodir_relative)) +perllibdir_SQ = $(subst ','\'',$(perllibdir)) localedir_SQ = $(subst ','\'',$(localedir)) gitexecdir_SQ = $(subst ','\'',$(gitexecdir)) +gitexecdir_relative_SQ = $(subst ','\'',$(gitexecdir_relative)) template_dir_SQ = $(subst ','\'',$(template_dir)) htmldir_relative_SQ = $(subst ','\'',$(htmldir_relative)) prefix_SQ = $(subst ','\'',$(prefix)) @@ -1843,9 +1857,6 @@ all:: ifndef NO_TCLTK $(QUIET_SUBDIR0)git-gui $(QUIET_SUBDIR1) gitexecdir='$(gitexec_instdir_SQ)' all $(QUIET_SUBDIR0)gitk-git $(QUIET_SUBDIR1) all -endif -ifndef NO_PERL - $(QUIET_SUBDIR0)perl $(QUIET_SUBDIR1) PERL_PATH='$(PERL_PATH_SQ)' prefix='$(prefix_SQ)' localedir='$(localedir_SQ)' all endif $(QUIET_SUBDIR0)templates $(QUIET_SUBDIR1) SHELL_PATH='$(SHELL_PATH_SQ)' PERL_PATH='$(PERL_PATH_SQ)' @@ -1928,7 +1939,8 @@ common-cmds.h: $(wildcard Documentation/git-*.txt) SCRIPT_DEFINES = $(SHELL_PATH_SQ):$(DIFF_SQ):$(GIT_VERSION):\ $(localedir_SQ):$(NO_CURL):$(USE_GETTEXT_SCHEME):$(SANE_TOOL_PATH_SQ):\ - $(gitwebdir_SQ):$(PERL_PATH_SQ):$(SANE_TEXT_GREP):$(PAGER_ENV) + $(gitwebdir_SQ):$(PERL_PATH_SQ):$(SANE_TEXT_GREP):$(PAGER_ENV):\ + $(perllibdir_SQ) define cmd_munge_script $(RM) $@ $@+ && \ sed -e '1s|#!.*/sh|#!$(SHELL_PATH_SQ)|' \ @@ -1972,23 +1984,12 @@ git.res: git.rc GIT-VERSION-FILE $(SCRIPT_PERL_GEN): GIT-BUILD-OPTIONS ifndef NO_PERL -$(SCRIPT_PERL_GEN): perl/perl.mak +$(SCRIPT_PERL_GEN): -perl/perl.mak: perl/PM.stamp - -perl/PM.stamp: FORCE - @$(FIND) perl -type f -name '*.pm' | sort >$@+ && \ - $(PERL_PATH) -V >>$@+ && \ - { cmp $@+ $@ >/dev/null 2>/dev/null || mv $@+ $@; } && \ - $(RM) $@+ - -perl/perl.mak: GIT-CFLAGS GIT-PREFIX perl/Makefile perl/Makefile.PL - $(QUIET_SUBDIR0)perl $(QUIET_SUBDIR1) PERL_PATH='$(PERL_PATH_SQ)' prefix='$(prefix_SQ)' $(@F) - -PERL_DEFINES = $(PERL_PATH_SQ):$(PERLLIB_EXTRA_SQ) -$(SCRIPT_PERL_GEN): % : %.perl perl/perl.mak GIT-PERL-DEFINES GIT-VERSION-FILE +PERL_DEFINES = $(PERL_PATH_SQ):$(PERLLIB_EXTRA_SQ):$(perllibdir_SQ) +$(SCRIPT_PERL_GEN): % : %.perl GIT-PERL-DEFINES GIT-VERSION-FILE $(QUIET_GEN)$(RM) $@ $@+ && \ - INSTLIBDIR=`MAKEFLAGS= $(MAKE) -C perl -s --no-print-directory instlibdir` && \ + INSTLIBDIR='$(perllibdir_SQ)' && \ 
INSTLIBDIR_EXTRA='$(PERLLIB_EXTRA_SQ)' && \ INSTLIBDIR="$$INSTLIBDIR$${INSTLIBDIR_EXTRA:+:$$INSTLIBDIR_EXTRA}" && \ sed -e '1{' \ @@ -2232,13 +2233,15 @@ $(VCSSVN_LIB): $(VCSSVN_OBJS) export DEFAULT_EDITOR DEFAULT_PAGER -.PHONY: doc man html info pdf -doc: +.PHONY: doc man man-perl html info pdf +doc: man-perl $(MAKE) -C Documentation all -man: +man: man-perl $(MAKE) -C Documentation man +man-perl: perl/build/man/man3/Git.3pm + html: $(MAKE) -C Documentation html @@ -2314,6 +2317,29 @@ endif po/build/locale/%/LC_MESSAGES/git.mo: po/%.po $(QUIET_MSGFMT)mkdir -p $(dir $@) && $(MSGFMT) -o $@ $< +LIB_PERL := $(wildcard perl/Git.pm perl/Git/*.pm perl/Git/*/*.pm perl/Git/*/*/*.pm) +LIB_PERL_GEN := $(patsubst perl/%.pm,perl/build/lib/%.pm,$(LIB_PERL)) +LIB_CPAN := $(wildcard perl/FromCPAN/*.pm perl/FromCPAN/*/*.pm) +LIB_CPAN_GEN := $(patsubst perl/%.pm,perl/build/lib/%.pm,$(LIB_CPAN)) + +ifndef NO_PERL +all:: $(LIB_PERL_GEN) +ifndef NO_PERL_CPAN_FALLBACKS +all:: $(LIB_CPAN_GEN) +endif +NO_PERL_CPAN_FALLBACKS_SQ = $(subst ','\'',$(NO_PERL_CPAN_FALLBACKS)) +endif + +perl/build/lib/%.pm: perl/%.pm + $(QUIET_GEN)mkdir -p $(dir $@) && \ + sed -e 's|@@LOCALEDIR@@|$(localedir_SQ)|g' \ + -e 's|@@NO_PERL_CPAN_FALLBACKS@@|$(NO_PERL_CPAN_FALLBACKS_SQ)|g' \ + < $< > $@ + +perl/build/man/man3/Git.3pm: perl/Git.pm + $(QUIET_GEN)mkdir -p $(dir $@) && \ + pod2man $< $@ + FIND_SOURCE_FILES = ( \ git ls-files \ '*.[hcS]' \ @@ -2574,7 +2600,9 @@ ifndef NO_GETTEXT (cd '$(DESTDIR_SQ)$(localedir_SQ)' && umask 022 && $(TAR) xof -) endif ifndef NO_PERL - $(MAKE) -C perl prefix='$(prefix_SQ)' DESTDIR='$(DESTDIR_SQ)' install + $(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(perllibdir_SQ)' + (cd perl/build/lib && $(TAR) cf - .) | \ + (cd '$(DESTDIR_SQ)$(perllibdir_SQ)' && umask 022 && $(TAR) xof -) $(MAKE) -C gitweb install endif ifndef NO_TCLTK @@ -2587,49 +2615,63 @@ endif bindir=$$(cd '$(DESTDIR_SQ)$(bindir_SQ)' && pwd) && \ execdir=$$(cd '$(DESTDIR_SQ)$(gitexec_instdir_SQ)' && pwd) && \ + destdir_from_execdir_SQ=$$(echo '$(gitexecdir_relative_SQ)' | sed -e 's|[^/][^/]*|..|g') && \ { test "$$bindir/" = "$$execdir/" || \ for p in git$X $(filter $(install_bindir_programs),$(ALL_PROGRAMS)); do \ $(RM) "$$execdir/$$p" && \ - test -z "$(NO_INSTALL_HARDLINKS)$(NO_CROSS_DIRECTORY_HARDLINKS)" && \ - ln "$$bindir/$$p" "$$execdir/$$p" 2>/dev/null || \ - cp "$$bindir/$$p" "$$execdir/$$p" || exit; \ + test -n "$(INSTALL_SYMLINKS)" && \ + ln -s "$$destdir_from_execdir_SQ/$(bindir_relative_SQ)/$$p" "$$execdir/$$p" || \ + { test -z "$(NO_INSTALL_HARDLINKS)$(NO_CROSS_DIRECTORY_HARDLINKS)" && \ + ln "$$bindir/$$p" "$$execdir/$$p" 2>/dev/null || \ + cp "$$bindir/$$p" "$$execdir/$$p" || exit; } \ done; \ } && \ for p in $(filter $(install_bindir_programs),$(BUILT_INS)); do \ $(RM) "$$bindir/$$p" && \ - test -z "$(NO_INSTALL_HARDLINKS)" && \ - ln "$$bindir/git$X" "$$bindir/$$p" 2>/dev/null || \ - ln -s "git$X" "$$bindir/$$p" 2>/dev/null || \ - cp "$$bindir/git$X" "$$bindir/$$p" || exit; \ + test -n "$(INSTALL_SYMLINKS)" && \ + ln -s "git$X" "$$bindir/$$p" || \ + { test -z "$(NO_INSTALL_HARDLINKS)" && \ + ln "$$bindir/git$X" "$$bindir/$$p" 2>/dev/null || \ + ln -s "git$X" "$$bindir/$$p" 2>/dev/null || \ + cp "$$bindir/git$X" "$$bindir/$$p" || exit; } \ done && \ for p in $(BUILT_INS); do \ $(RM) "$$execdir/$$p" && \ - test -z "$(NO_INSTALL_HARDLINKS)" && \ - ln "$$execdir/git$X" "$$execdir/$$p" 2>/dev/null || \ - ln -s "git$X" "$$execdir/$$p" 2>/dev/null || \ - cp "$$execdir/git$X" "$$execdir/$$p" || exit; \ + test -n "$(INSTALL_SYMLINKS)" && \ + ln 
-s "$$destdir_from_execdir_SQ/$(bindir_relative_SQ)/git$X" "$$execdir/$$p" || \ + { test -z "$(NO_INSTALL_HARDLINKS)" && \ + ln "$$execdir/git$X" "$$execdir/$$p" 2>/dev/null || \ + ln -s "git$X" "$$execdir/$$p" 2>/dev/null || \ + cp "$$execdir/git$X" "$$execdir/$$p" || exit; } \ done && \ remote_curl_aliases="$(REMOTE_CURL_ALIASES)" && \ for p in $$remote_curl_aliases; do \ $(RM) "$$execdir/$$p" && \ - test -z "$(NO_INSTALL_HARDLINKS)" && \ - ln "$$execdir/git-remote-http$X" "$$execdir/$$p" 2>/dev/null || \ - ln -s "git-remote-http$X" "$$execdir/$$p" 2>/dev/null || \ - cp "$$execdir/git-remote-http$X" "$$execdir/$$p" || exit; \ + test -n "$(INSTALL_SYMLINKS)" && \ + ln -s "git-remote-http$X" "$$execdir/$$p" || \ + { test -z "$(NO_INSTALL_HARDLINKS)" && \ + ln "$$execdir/git-remote-http$X" "$$execdir/$$p" 2>/dev/null || \ + ln -s "git-remote-http$X" "$$execdir/$$p" 2>/dev/null || \ + cp "$$execdir/git-remote-http$X" "$$execdir/$$p" || exit; } \ done && \ ./check_bindir "z$$bindir" "z$$execdir" "$$bindir/git-add$X" -.PHONY: install-gitweb install-doc install-man install-html install-info install-pdf +.PHONY: install-gitweb install-doc install-man install-man-perl install-html install-info install-pdf .PHONY: quick-install-doc quick-install-man quick-install-html install-gitweb: $(MAKE) -C gitweb install -install-doc: +install-doc: install-man-perl $(MAKE) -C Documentation install -install-man: +install-man: install-man-perl $(MAKE) -C Documentation install-man +install-man-perl: man-perl + $(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(mandir_SQ)/man3' + (cd perl/build/man/man3 && $(TAR) cf - .) | \ + (cd '$(DESTDIR_SQ)$(mandir_SQ)/man3' && umask 022 && $(TAR) xof -) + install-html: $(MAKE) -C Documentation install-html @@ -2664,6 +2706,21 @@ dist: git-archive$(X) configure $(GIT_TARNAME)/configure \ $(GIT_TARNAME)/version \ $(GIT_TARNAME)/git-gui/version +ifdef DC_SHA1_SUBMODULE + @mkdir -p $(GIT_TARNAME)/sha1collisiondetection/lib + @cp sha1collisiondetection/LICENSE.txt \ + $(GIT_TARNAME)/sha1collisiondetection/ + @cp sha1collisiondetection/LICENSE.txt \ + $(GIT_TARNAME)/sha1collisiondetection/ + @cp sha1collisiondetection/lib/sha1.[ch] \ + $(GIT_TARNAME)/sha1collisiondetection/lib/ + @cp sha1collisiondetection/lib/ubc_check.[ch] \ + $(GIT_TARNAME)/sha1collisiondetection/lib/ + $(TAR) rf $(GIT_TARNAME).tar \ + $(GIT_TARNAME)/sha1collisiondetection/LICENSE.txt \ + $(GIT_TARNAME)/sha1collisiondetection/lib/sha1.[ch] \ + $(GIT_TARNAME)/sha1collisiondetection/lib/ubc_check.[ch] +endif @$(RM) -r $(GIT_TARNAME) gzip -f -9 $(GIT_TARNAME).tar @@ -2713,7 +2770,7 @@ clean: profile-clean coverage-clean $(RM) $(TEST_PROGRAMS) $(NO_INSTALL) $(RM) -r bin-wrappers $(dep_dirs) $(RM) -r po/build/ - $(RM) *.spec *.pyc *.pyo */*.pyc */*.pyo common-cmds.h $(ETAGS_TARGET) tags cscope* + $(RM) *.pyc *.pyo */*.pyc */*.pyo common-cmds.h $(ETAGS_TARGET) tags cscope* $(RM) -r $(GIT_TARNAME) .doc-tmp-dir $(RM) $(GIT_TARNAME).tar.gz git-core_$(GIT_VERSION)-*.tar.gz $(RM) $(htmldocs).tar.gz $(manpages).tar.gz @@ -2721,7 +2778,7 @@ clean: profile-clean coverage-clean $(MAKE) -C Documentation/ clean ifndef NO_PERL $(MAKE) -C gitweb clean - $(MAKE) -C perl clean + $(RM) -r perl/build/ endif $(MAKE) -C templates/ clean $(MAKE) -C t/ clean diff --git a/RelNotes b/RelNotes index fdd7e420b1..7a6dc0603b 120000 --- a/RelNotes +++ b/RelNotes @@ -1 +1 @@ -Documentation/RelNotes/2.16.3.txt \ No newline at end of file +Documentation/RelNotes/2.17.0.txt \ No newline at end of file diff --git a/apply.c b/apply.c index 321a9fa68d..7e5792c996 
100644 --- a/apply.c +++ b/apply.c @@ -950,7 +950,7 @@ static int gitdiff_verify_name(struct apply_state *state, } free(another); } else { - if (!starts_with(line, "/dev/null\n")) + if (!is_dev_null(line)) return error(_("git apply: bad git-diff - expected /dev/null on line %d"), state->linenr); } @@ -2263,8 +2263,8 @@ static void show_stats(struct apply_state *state, struct patch *patch) static int read_old_data(struct stat *st, struct patch *patch, const char *path, struct strbuf *buf) { - enum safe_crlf safe_crlf = patch->crlf_in_old ? - SAFE_CRLF_KEEP_CRLF : SAFE_CRLF_RENORMALIZE; + int conv_flags = patch->crlf_in_old ? + CONV_EOL_KEEP_CRLF : CONV_EOL_RENORMALIZE; switch (st->st_mode & S_IFMT) { case S_IFLNK: if (strbuf_readlink(buf, path, st->st_size) < 0) @@ -2281,7 +2281,7 @@ static int read_old_data(struct stat *st, struct patch *patch, * should never look at the index when explicit crlf option * is given. */ - convert_to_git(NULL, path, buf->buf, buf->len, buf, safe_crlf); + convert_to_git(NULL, path, buf->buf, buf->len, buf, conv_flags); return 0; default: return -1; @@ -2301,7 +2301,7 @@ static void update_pre_post_images(struct image *preimage, size_t len, size_t postlen) { int i, ctx, reduced; - char *new, *old, *fixed; + char *new_buf, *old_buf, *fixed; struct image fixed_preimage; /* @@ -2327,25 +2327,25 @@ static void update_pre_post_images(struct image *preimage, * We trust the caller to tell us if the update can be done * in place (postlen==0) or not. */ - old = postimage->buf; + old_buf = postimage->buf; if (postlen) - new = postimage->buf = xmalloc(postlen); + new_buf = postimage->buf = xmalloc(postlen); else - new = old; + new_buf = old_buf; fixed = preimage->buf; for (i = reduced = ctx = 0; i < postimage->nr; i++) { size_t l_len = postimage->line[i].len; if (!(postimage->line[i].flag & LINE_COMMON)) { /* an added line -- no counterparts in preimage */ - memmove(new, old, l_len); - old += l_len; - new += l_len; + memmove(new_buf, old_buf, l_len); + old_buf += l_len; + new_buf += l_len; continue; } /* a common context -- skip it in the original postimage */ - old += l_len; + old_buf += l_len; /* and find the corresponding one in the fixed preimage */ while (ctx < preimage->nr && @@ -2365,29 +2365,29 @@ static void update_pre_post_images(struct image *preimage, /* and copy it in, while fixing the line length */ l_len = preimage->line[ctx].len; - memcpy(new, fixed, l_len); - new += l_len; + memcpy(new_buf, fixed, l_len); + new_buf += l_len; fixed += l_len; postimage->line[i].len = l_len; ctx++; } if (postlen - ? postlen < new - postimage->buf - : postimage->len < new - postimage->buf) + ? 
postlen < new_buf - postimage->buf + : postimage->len < new_buf - postimage->buf) die("BUG: caller miscounted postlen: asked %d, orig = %d, used = %d", - (int)postlen, (int) postimage->len, (int)(new - postimage->buf)); + (int)postlen, (int) postimage->len, (int)(new_buf - postimage->buf)); /* Fix the length of the whole thing */ - postimage->len = new - postimage->buf; + postimage->len = new_buf - postimage->buf; postimage->nr -= reduced; } static int line_by_line_fuzzy_match(struct image *img, struct image *preimage, struct image *postimage, - unsigned long try, - int try_lno, + unsigned long current, + int current_lno, int preimage_limit) { int i; @@ -2404,9 +2404,9 @@ static int line_by_line_fuzzy_match(struct image *img, for (i = 0; i < preimage_limit; i++) { size_t prelen = preimage->line[i].len; - size_t imglen = img->line[try_lno+i].len; + size_t imglen = img->line[current_lno+i].len; - if (!fuzzy_matchlines(img->buf + try + imgoff, imglen, + if (!fuzzy_matchlines(img->buf + current + imgoff, imglen, preimage->buf + preoff, prelen)) return 0; if (preimage->line[i].flag & LINE_COMMON) @@ -2443,7 +2443,7 @@ static int line_by_line_fuzzy_match(struct image *img, */ extra_chars = preimage_end - preimage_eof; strbuf_init(&fixed, imgoff + extra_chars); - strbuf_add(&fixed, img->buf + try, imgoff); + strbuf_add(&fixed, img->buf + current, imgoff); strbuf_add(&fixed, preimage_eof, extra_chars); fixed_buf = strbuf_detach(&fixed, &fixed_len); update_pre_post_images(preimage, postimage, @@ -2455,8 +2455,8 @@ static int match_fragment(struct apply_state *state, struct image *img, struct image *preimage, struct image *postimage, - unsigned long try, - int try_lno, + unsigned long current, + int current_lno, unsigned ws_rule, int match_beginning, int match_end) { @@ -2466,12 +2466,12 @@ static int match_fragment(struct apply_state *state, size_t fixed_len, postlen; int preimage_limit; - if (preimage->nr + try_lno <= img->nr) { + if (preimage->nr + current_lno <= img->nr) { /* * The hunk falls within the boundaries of img. */ preimage_limit = preimage->nr; - if (match_end && (preimage->nr + try_lno != img->nr)) + if (match_end && (preimage->nr + current_lno != img->nr)) return 0; } else if (state->ws_error_action == correct_ws_error && (ws_rule & WS_BLANK_AT_EOF)) { @@ -2482,7 +2482,7 @@ static int match_fragment(struct apply_state *state, * match with img, and the remainder of the preimage * must be blank. */ - preimage_limit = img->nr - try_lno; + preimage_limit = img->nr - current_lno; } else { /* * The hunk extends beyond the end of the img and @@ -2492,27 +2492,27 @@ static int match_fragment(struct apply_state *state, return 0; } - if (match_beginning && try_lno) + if (match_beginning && current_lno) return 0; /* Quick hash check */ for (i = 0; i < preimage_limit; i++) - if ((img->line[try_lno + i].flag & LINE_PATCHED) || - (preimage->line[i].hash != img->line[try_lno + i].hash)) + if ((img->line[current_lno + i].flag & LINE_PATCHED) || + (preimage->line[i].hash != img->line[current_lno + i].hash)) return 0; if (preimage_limit == preimage->nr) { /* * Do we have an exact match? If we were told to match - * at the end, size must be exactly at try+fragsize, - * otherwise try+fragsize must be still within the preimage, + * at the end, size must be exactly at current+fragsize, + * otherwise current+fragsize must be still within the preimage, * and either case, the old piece should match the preimage * exactly. */ if ((match_end - ? 
(try + preimage->len == img->len) - : (try + preimage->len <= img->len)) && - !memcmp(img->buf + try, preimage->buf, preimage->len)) + ? (current + preimage->len == img->len) + : (current + preimage->len <= img->len)) && + !memcmp(img->buf + current, preimage->buf, preimage->len)) return 1; } else { /* @@ -2543,7 +2543,7 @@ static int match_fragment(struct apply_state *state, */ if (state->ws_ignore_action == ignore_ws_change) return line_by_line_fuzzy_match(img, preimage, postimage, - try, try_lno, preimage_limit); + current, current_lno, preimage_limit); if (state->ws_error_action != correct_ws_error) return 0; @@ -2577,10 +2577,10 @@ static int match_fragment(struct apply_state *state, */ strbuf_init(&fixed, preimage->len + 1); orig = preimage->buf; - target = img->buf + try; + target = img->buf + current; for (i = 0; i < preimage_limit; i++) { size_t oldlen = preimage->line[i].len; - size_t tgtlen = img->line[try_lno + i].len; + size_t tgtlen = img->line[current_lno + i].len; size_t fixstart = fixed.len; struct strbuf tgtfix; int match; @@ -2666,8 +2666,8 @@ static int find_pos(struct apply_state *state, int match_beginning, int match_end) { int i; - unsigned long backwards, forwards, try; - int backwards_lno, forwards_lno, try_lno; + unsigned long backwards, forwards, current; + int backwards_lno, forwards_lno, current_lno; /* * If match_beginning or match_end is specified, there is no @@ -2687,25 +2687,25 @@ static int find_pos(struct apply_state *state, if ((size_t) line > img->nr) line = img->nr; - try = 0; + current = 0; for (i = 0; i < line; i++) - try += img->line[i].len; + current += img->line[i].len; /* * There's probably some smart way to do this, but I'll leave * that to the smart and beautiful people. I'm simple and stupid. */ - backwards = try; + backwards = current; backwards_lno = line; - forwards = try; + forwards = current; forwards_lno = line; - try_lno = line; + current_lno = line; for (i = 0; ; i++) { if (match_fragment(state, img, preimage, postimage, - try, try_lno, ws_rule, + current, current_lno, ws_rule, match_beginning, match_end)) - return try_lno; + return current_lno; again: if (backwards_lno == 0 && forwards_lno == img->nr) @@ -2718,8 +2718,8 @@ static int find_pos(struct apply_state *state, } backwards_lno--; backwards -= img->line[backwards_lno].len; - try = backwards; - try_lno = backwards_lno; + current = backwards; + current_lno = backwards_lno; } else { if (forwards_lno == img->nr) { i++; @@ -2727,8 +2727,8 @@ static int find_pos(struct apply_state *state, } forwards += img->line[forwards_lno].len; forwards_lno++; - try = forwards; - try_lno = forwards_lno; + current = forwards; + current_lno = forwards_lno; } } @@ -3154,7 +3154,7 @@ static int apply_binary(struct apply_state *state, * See if the old one matches what the patch * applies to. 
*/ - hash_sha1_file(img->buf, img->len, blob_type, oid.hash); + hash_object_file(img->buf, img->len, blob_type, &oid); if (strcmp(oid_to_hex(&oid), patch->old_sha1_prefix)) return error(_("the patch applies to '%s' (%s), " "which does not match the " @@ -3180,7 +3180,7 @@ static int apply_binary(struct apply_state *state, unsigned long size; char *result; - result = read_sha1_file(oid.hash, &type, &size); + result = read_object_file(&oid, &type, &size); if (!result) return error(_("the necessary postimage %s for " "'%s' cannot be read"), @@ -3199,7 +3199,7 @@ static int apply_binary(struct apply_state *state, name); /* verify that the result matches */ - hash_sha1_file(img->buf, img->len, blob_type, oid.hash); + hash_object_file(img->buf, img->len, blob_type, &oid); if (strcmp(oid_to_hex(&oid), patch->new_sha1_prefix)) return error(_("binary patch to '%s' creates incorrect result (expecting %s, got %s)"), name, patch->new_sha1_prefix, oid_to_hex(&oid)); @@ -3242,7 +3242,7 @@ static int read_blob_object(struct strbuf *buf, const struct object_id *oid, uns unsigned long sz; char *result; - result = read_sha1_file(oid->hash, &type, &sz); + result = read_object_file(oid, &type, &sz); if (!result) return -1; /* XXX read_sha1_file NUL-terminates */ @@ -3554,7 +3554,7 @@ static int try_threeway(struct apply_state *state, /* Preimage the patch was prepared for */ if (patch->is_new) - write_sha1_file("", 0, blob_type, pre_oid.hash); + write_object_file("", 0, blob_type, &pre_oid); else if (get_oid(patch->old_sha1_prefix, &pre_oid) || read_blob_object(&buf, &pre_oid, patch->old_mode)) return error(_("repository lacks the necessary blob to fall back on 3-way merge.")); @@ -3570,7 +3570,7 @@ static int try_threeway(struct apply_state *state, return -1; } /* post_oid is theirs */ - write_sha1_file(tmp_image.buf, tmp_image.len, blob_type, post_oid.hash); + write_object_file(tmp_image.buf, tmp_image.len, blob_type, &post_oid); clear_image(&tmp_image); /* our_oid is ours */ @@ -3583,7 +3583,7 @@ static int try_threeway(struct apply_state *state, return error(_("cannot read the current contents of '%s'"), patch->old_name); } - write_sha1_file(tmp_image.buf, tmp_image.len, blob_type, our_oid.hash); + write_object_file(tmp_image.buf, tmp_image.len, blob_type, &our_oid); clear_image(&tmp_image); /* in-core three-way merge between post and our using pre as base */ @@ -4163,30 +4163,30 @@ static void show_mode_change(struct patch *p, int show_name) static void show_rename_copy(struct patch *p) { const char *renamecopy = p->is_rename ? 
"rename" : "copy"; - const char *old, *new; + const char *old_name, *new_name; /* Find common prefix */ - old = p->old_name; - new = p->new_name; + old_name = p->old_name; + new_name = p->new_name; while (1) { const char *slash_old, *slash_new; - slash_old = strchr(old, '/'); - slash_new = strchr(new, '/'); + slash_old = strchr(old_name, '/'); + slash_new = strchr(new_name, '/'); if (!slash_old || !slash_new || - slash_old - old != slash_new - new || - memcmp(old, new, slash_new - new)) + slash_old - old_name != slash_new - new_name || + memcmp(old_name, new_name, slash_new - new_name)) break; - old = slash_old + 1; - new = slash_new + 1; + old_name = slash_old + 1; + new_name = slash_new + 1; } - /* p->old_name thru old is the common prefix, and old and new + /* p->old_name thru old_name is the common prefix, and old_name and new_name * through the end of names are renames */ - if (old != p->old_name) + if (old_name != p->old_name) printf(" %s %.*s{%s => %s} (%d%%)\n", renamecopy, - (int)(old - p->old_name), p->old_name, - old, new, p->score); + (int)(old_name - p->old_name), p->old_name, + old_name, new_name, p->score); else printf(" %s %s => %s (%d%%)\n", renamecopy, p->old_name, p->new_name, p->score); @@ -4291,7 +4291,7 @@ static int add_index_file(struct apply_state *state, } fill_stat_cache_info(ce, &st); } - if (write_sha1_file(buf, size, blob_type, ce->oid.hash) < 0) { + if (write_object_file(buf, size, blob_type, &ce->oid) < 0) { free(ce); return error(_("unable to create backing store " "for newly created file %s"), path); @@ -4943,8 +4943,9 @@ int apply_parse_options(int argc, const char **argv, N_("make sure the patch is applicable to the current index")), OPT_BOOL(0, "cached", &state->cached, N_("apply a patch without touching the working tree")), - OPT_BOOL(0, "unsafe-paths", &state->unsafe_paths, - N_("accept a patch that touches outside the working area")), + OPT_BOOL_F(0, "unsafe-paths", &state->unsafe_paths, + N_("accept a patch that touches outside the working area"), + PARSE_OPT_NOCOMPLETE), OPT_BOOL(0, "apply", force_apply, N_("also apply the patch (use with --stat/--summary/--check)")), OPT_BOOL('3', "3way", &state->threeway, diff --git a/archive-tar.c b/archive-tar.c index c6ed96ee74..3563bcb9f2 100644 --- a/archive-tar.c +++ b/archive-tar.c @@ -111,7 +111,7 @@ static void write_trailer(void) * queues up writes, so that all our write(2) calls write exactly one * full block; pads writes to RECORDSIZE */ -static int stream_blocked(const unsigned char *sha1) +static int stream_blocked(const struct object_id *oid) { struct git_istream *st; enum object_type type; @@ -119,9 +119,9 @@ static int stream_blocked(const unsigned char *sha1) char buf[BLOCKSIZE]; ssize_t readlen; - st = open_istream(sha1, &type, &sz, NULL); + st = open_istream(oid, &type, &sz, NULL); if (!st) - return error("cannot stream blob %s", sha1_to_hex(sha1)); + return error("cannot stream blob %s", oid_to_hex(oid)); for (;;) { readlen = read_istream(st, buf, sizeof(buf)); if (readlen <= 0) @@ -218,7 +218,7 @@ static void prepare_header(struct archiver_args *args, } static void write_extended_header(struct archiver_args *args, - const unsigned char *sha1, + const struct object_id *oid, const void *buffer, unsigned long size) { struct ustar_header header; @@ -226,14 +226,14 @@ static void write_extended_header(struct archiver_args *args, memset(&header, 0, sizeof(header)); *header.typeflag = TYPEFLAG_EXT_HEADER; mode = 0100666; - xsnprintf(header.name, sizeof(header.name), "%s.paxheader", 
sha1_to_hex(sha1)); + xsnprintf(header.name, sizeof(header.name), "%s.paxheader", oid_to_hex(oid)); prepare_header(args, &header, mode, size); write_blocked(&header, sizeof(header)); write_blocked(buffer, size); } static int write_tar_entry(struct archiver_args *args, - const unsigned char *sha1, + const struct object_id *oid, const char *path, size_t pathlen, unsigned int mode) { @@ -257,7 +257,7 @@ static int write_tar_entry(struct archiver_args *args, mode = (mode | ((mode & 0100) ? 0777 : 0666)) & ~tar_umask; } else { return error("unsupported file mode: 0%o (SHA1: %s)", - mode, sha1_to_hex(sha1)); + mode, oid_to_hex(oid)); } if (pathlen > sizeof(header.name)) { size_t plen = get_path_prefix(path, pathlen, @@ -268,7 +268,7 @@ static int write_tar_entry(struct archiver_args *args, memcpy(header.name, path + plen + 1, rest); } else { xsnprintf(header.name, sizeof(header.name), "%s.data", - sha1_to_hex(sha1)); + oid_to_hex(oid)); strbuf_append_ext_header(&ext_header, "path", path, pathlen); } @@ -276,14 +276,14 @@ static int write_tar_entry(struct archiver_args *args, memcpy(header.name, path, pathlen); if (S_ISREG(mode) && !args->convert && - sha1_object_info(sha1, &size) == OBJ_BLOB && + oid_object_info(oid, &size) == OBJ_BLOB && size > big_file_threshold) buffer = NULL; else if (S_ISLNK(mode) || S_ISREG(mode)) { enum object_type type; - buffer = sha1_file_to_archive(args, path, sha1, old_mode, &type, &size); + buffer = object_file_to_archive(args, path, oid, old_mode, &type, &size); if (!buffer) - return error("cannot read %s", sha1_to_hex(sha1)); + return error("cannot read %s", oid_to_hex(oid)); } else { buffer = NULL; size = 0; @@ -292,7 +292,7 @@ static int write_tar_entry(struct archiver_args *args, if (S_ISLNK(mode)) { if (size > sizeof(header.linkname)) { xsnprintf(header.linkname, sizeof(header.linkname), - "see %s.paxheader", sha1_to_hex(sha1)); + "see %s.paxheader", oid_to_hex(oid)); strbuf_append_ext_header(&ext_header, "linkpath", buffer, size); } else @@ -308,7 +308,7 @@ static int write_tar_entry(struct archiver_args *args, prepare_header(args, &header, mode, size_in_header); if (ext_header.len > 0) { - write_extended_header(args, sha1, ext_header.buf, + write_extended_header(args, oid, ext_header.buf, ext_header.len); } strbuf_release(&ext_header); @@ -317,7 +317,7 @@ static int write_tar_entry(struct archiver_args *args, if (buffer) write_blocked(buffer, size); else - err = stream_blocked(sha1); + err = stream_blocked(oid); } free(buffer); return err; diff --git a/archive-zip.c b/archive-zip.c index e8913e5a26..6b20bce4d1 100644 --- a/archive-zip.c +++ b/archive-zip.c @@ -276,7 +276,7 @@ static int entry_is_binary(const char *path, const void *buffer, size_t size) #define STREAM_BUFFER_SIZE (1024 * 16) static int write_zip_entry(struct archiver_args *args, - const unsigned char *sha1, + const struct object_id *oid, const char *path, size_t pathlen, unsigned int mode) { @@ -314,7 +314,7 @@ static int write_zip_entry(struct archiver_args *args, if (pathlen > 0xffff) { return error("path too long (%d chars, SHA1: %s): %s", - (int)pathlen, sha1_to_hex(sha1), path); + (int)pathlen, oid_to_hex(oid), path); } if (S_ISDIR(mode) || S_ISGITLINK(mode)) { @@ -325,7 +325,7 @@ static int write_zip_entry(struct archiver_args *args, compressed_size = 0; buffer = NULL; } else if (S_ISREG(mode) || S_ISLNK(mode)) { - enum object_type type = sha1_object_info(sha1, &size); + enum object_type type = oid_object_info(oid, &size); method = 0; attr2 = S_ISLNK(mode) ? 
((mode | 0777) << 16) : @@ -337,18 +337,18 @@ static int write_zip_entry(struct archiver_args *args, if (S_ISREG(mode) && type == OBJ_BLOB && !args->convert && size > big_file_threshold) { - stream = open_istream(sha1, &type, &size, NULL); + stream = open_istream(oid, &type, &size, NULL); if (!stream) return error("cannot stream blob %s", - sha1_to_hex(sha1)); + oid_to_hex(oid)); flags |= ZIP_STREAM; out = buffer = NULL; } else { - buffer = sha1_file_to_archive(args, path, sha1, mode, - &type, &size); + buffer = object_file_to_archive(args, path, oid, mode, + &type, &size); if (!buffer) return error("cannot read %s", - sha1_to_hex(sha1)); + oid_to_hex(oid)); crc = crc32(crc, buffer, size); is_binary = entry_is_binary(path_without_prefix, buffer, size); @@ -357,7 +357,7 @@ static int write_zip_entry(struct archiver_args *args, compressed_size = (method == 0) ? size : 0; } else { return error("unsupported file mode: 0%o (SHA1: %s)", mode, - sha1_to_hex(sha1)); + oid_to_hex(oid)); } if (creator_version > max_creator_version) diff --git a/archive.c b/archive.c index 0b7b62af0c..93ab175b0b 100644 --- a/archive.c +++ b/archive.c @@ -63,16 +63,16 @@ static void format_subst(const struct commit *commit, free(to_free); } -void *sha1_file_to_archive(const struct archiver_args *args, - const char *path, const unsigned char *sha1, - unsigned int mode, enum object_type *type, - unsigned long *sizep) +void *object_file_to_archive(const struct archiver_args *args, + const char *path, const struct object_id *oid, + unsigned int mode, enum object_type *type, + unsigned long *sizep) { void *buffer; const struct commit *commit = args->convert ? args->commit : NULL; path += args->baselen; - buffer = read_sha1_file(sha1, type, sizep); + buffer = read_object_file(oid, type, sizep); if (buffer && S_ISREG(mode)) { struct strbuf buf = STRBUF_INIT; size_t size = 0; @@ -121,7 +121,7 @@ static int check_attr_export_subst(const struct attr_check *check) return check && ATTR_TRUE(check->items[1].value); } -static int write_archive_entry(const unsigned char *sha1, const char *base, +static int write_archive_entry(const struct object_id *oid, const char *base, int baselen, const char *filename, unsigned mode, int stage, void *context) { @@ -153,7 +153,7 @@ static int write_archive_entry(const unsigned char *sha1, const char *base, if (S_ISDIR(mode) || S_ISGITLINK(mode)) { if (args->verbose) fprintf(stderr, "%.*s\n", (int)path.len, path.buf); - err = write_entry(args, sha1, path.buf, path.len, mode); + err = write_entry(args, oid, path.buf, path.len, mode); if (err) return err; return (S_ISDIR(mode) ? READ_TREE_RECURSIVE : 0); @@ -161,7 +161,7 @@ static int write_archive_entry(const unsigned char *sha1, const char *base, if (args->verbose) fprintf(stderr, "%.*s\n", (int)path.len, path.buf); - return write_entry(args, sha1, path.buf, path.len, mode); + return write_entry(args, oid, path.buf, path.len, mode); } static void queue_directory(const unsigned char *sha1, @@ -191,14 +191,14 @@ static int write_directory(struct archiver_context *c) d->path[d->len - 1] = '\0'; /* no trailing slash */ ret = write_directory(c) || - write_archive_entry(d->oid.hash, d->path, d->baselen, + write_archive_entry(&d->oid, d->path, d->baselen, d->path + d->baselen, d->mode, d->stage, c) != READ_TREE_RECURSIVE; free(d); return ret ? 
-1 : 0; } -static int queue_or_write_archive_entry(const unsigned char *sha1, +static int queue_or_write_archive_entry(const struct object_id *oid, struct strbuf *base, const char *filename, unsigned mode, int stage, void *context) { @@ -224,14 +224,14 @@ static int queue_or_write_archive_entry(const unsigned char *sha1, if (check_attr_export_ignore(check)) return 0; - queue_directory(sha1, base, filename, + queue_directory(oid->hash, base, filename, mode, stage, c); return READ_TREE_RECURSIVE; } if (write_directory(c)) return -1; - return write_archive_entry(sha1, base->buf, base->len, filename, mode, + return write_archive_entry(oid, base->buf, base->len, filename, mode, stage, context); } @@ -250,7 +250,7 @@ int write_archive_entries(struct archiver_args *args, len--; if (args->verbose) fprintf(stderr, "%.*s\n", (int)len, args->base); - err = write_entry(args, args->tree->object.oid.hash, args->base, + err = write_entry(args, &args->tree->object.oid, args->base, len, 040777); if (err) return err; @@ -303,7 +303,7 @@ static const struct archiver *lookup_archiver(const char *name) return NULL; } -static int reject_entry(const unsigned char *sha1, struct strbuf *base, +static int reject_entry(const struct object_id *oid, struct strbuf *base, const char *filename, unsigned mode, int stage, void *context) { @@ -397,8 +397,8 @@ static void parse_treeish_arg(const char **argv, unsigned int mode; int err; - err = get_tree_entry(tree->object.oid.hash, prefix, - tree_oid.hash, &mode); + err = get_tree_entry(&tree->object.oid, prefix, &tree_oid, + &mode); if (err || !S_ISDIR(mode)) die("current working directory is untracked"); diff --git a/archive.h b/archive.h index 62d1d82c1a..1f9954f7cd 100644 --- a/archive.h +++ b/archive.h @@ -31,7 +31,7 @@ extern void init_tar_archiver(void); extern void init_zip_archiver(void); typedef int (*write_archive_entry_fn_t)(struct archiver_args *args, - const unsigned char *sha1, + const struct object_id *oid, const char *path, size_t pathlen, unsigned int mode); @@ -39,9 +39,9 @@ extern int write_archive_entries(struct archiver_args *args, write_archive_entry extern int write_archive(int argc, const char **argv, const char *prefix, const char *name_hint, int remote); const char *archive_format_from_filename(const char *filename); -extern void *sha1_file_to_archive(const struct archiver_args *args, - const char *path, const unsigned char *sha1, - unsigned int mode, enum object_type *type, - unsigned long *sizep); +extern void *object_file_to_archive(const struct archiver_args *args, + const char *path, const struct object_id *oid, + unsigned int mode, enum object_type *type, + unsigned long *sizep); #endif /* ARCHIVE_H */ diff --git a/bisect.c b/bisect.c index f6d05bd66f..ad395bb2b8 100644 --- a/bisect.c +++ b/bisect.c @@ -132,7 +132,8 @@ static void show_list(const char *debug, int counted, int nr, unsigned flags = commit->object.flags; enum object_type type; unsigned long size; - char *buf = read_sha1_file(commit->object.oid.hash, &type, &size); + char *buf = read_object_file(&commit->object.oid, &type, + &size); const char *subject_start; int subject_len; diff --git a/blame.c b/blame.c index 2893f3c103..78c9808bd1 100644 --- a/blame.c +++ b/blame.c @@ -80,8 +80,8 @@ static void verify_working_tree_path(struct commit *work_tree, const char *path) struct object_id blob_oid; unsigned mode; - if (!get_tree_entry(commit_oid->hash, path, blob_oid.hash, &mode) && - sha1_object_info(blob_oid.hash, NULL) == OBJ_BLOB) + if (!get_tree_entry(commit_oid, path, &blob_oid, 
&mode) && + oid_object_info(&blob_oid, NULL) == OBJ_BLOB) return; } @@ -232,7 +232,7 @@ static struct commit *fake_working_tree_commit(struct diff_options *opt, convert_to_git(&the_index, path, buf.buf, buf.len, &buf, 0); origin->file.ptr = buf.buf; origin->file.size = buf.len; - pretend_sha1_file(buf.buf, buf.len, OBJ_BLOB, origin->blob_oid.hash); + pretend_object_file(buf.buf, buf.len, OBJ_BLOB, &origin->blob_oid); /* * Read the current index, replace the path entry with @@ -297,8 +297,8 @@ static void fill_origin_blob(struct diff_options *opt, textconv_object(o->path, o->mode, &o->blob_oid, 1, &file->ptr, &file_size)) ; else - file->ptr = read_sha1_file(o->blob_oid.hash, &type, - &file_size); + file->ptr = read_object_file(&o->blob_oid, &type, + &file_size); file->size = file_size; if (!file->ptr) @@ -502,11 +502,9 @@ static int fill_blob_sha1_and_mode(struct blame_origin *origin) { if (!is_null_oid(&origin->blob_oid)) return 0; - if (get_tree_entry(origin->commit->object.oid.hash, - origin->path, - origin->blob_oid.hash, &origin->mode)) + if (get_tree_entry(&origin->commit->object.oid, origin->path, &origin->blob_oid, &origin->mode)) goto error_out; - if (sha1_object_info(origin->blob_oid.hash, NULL) != OBJ_BLOB) + if (oid_object_info(&origin->blob_oid, NULL) != OBJ_BLOB) goto error_out; return 0; error_out: @@ -998,28 +996,29 @@ unsigned blame_entry_score(struct blame_scoreboard *sb, struct blame_entry *e) } /* - * best_so_far[] and this[] are both a split of an existing blame_entry - * that passes blame to the parent. Maintain best_so_far the best split - * so far, by comparing this and best_so_far and copying this into + * best_so_far[] and potential[] are both a split of an existing blame_entry + * that passes blame to the parent. Maintain best_so_far the best split so + * far, by comparing potential and best_so_far and copying potential into * bst_so_far as needed. 
*/ static void copy_split_if_better(struct blame_scoreboard *sb, struct blame_entry *best_so_far, - struct blame_entry *this) + struct blame_entry *potential) { int i; - if (!this[1].suspect) + if (!potential[1].suspect) return; if (best_so_far[1].suspect) { - if (blame_entry_score(sb, &this[1]) < blame_entry_score(sb, &best_so_far[1])) + if (blame_entry_score(sb, &potential[1]) < + blame_entry_score(sb, &best_so_far[1])) return; } for (i = 0; i < 3; i++) - blame_origin_incref(this[i].suspect); + blame_origin_incref(potential[i].suspect); decref_split(best_so_far); - memcpy(best_so_far, this, sizeof(struct blame_entry [3])); + memcpy(best_so_far, potential, sizeof(struct blame_entry[3])); } /* @@ -1046,12 +1045,12 @@ static void handle_split(struct blame_scoreboard *sb, if (ent->num_lines <= tlno) return; if (tlno < same) { - struct blame_entry this[3]; + struct blame_entry potential[3]; tlno += ent->s_lno; same += ent->s_lno; - split_overlap(this, ent, tlno, plno, same, parent); - copy_split_if_better(sb, split, this); - decref_split(this); + split_overlap(potential, ent, tlno, plno, same, parent); + copy_split_if_better(sb, split, potential); + decref_split(potential); } } @@ -1273,7 +1272,7 @@ static void find_copy_in_parent(struct blame_scoreboard *sb, struct diff_filepair *p = diff_queued_diff.queue[i]; struct blame_origin *norigin; mmfile_t file_p; - struct blame_entry this[3]; + struct blame_entry potential[3]; if (!DIFF_FILE_VALID(p->one)) continue; /* does not exist in parent */ @@ -1292,10 +1291,10 @@ static void find_copy_in_parent(struct blame_scoreboard *sb, for (j = 0; j < num_ents; j++) { find_copy_in_blob(sb, blame_list[j].ent, - norigin, this, &file_p); + norigin, potential, &file_p); copy_split_if_better(sb, blame_list[j].split, - this); - decref_split(this); + potential); + decref_split(potential); } blame_origin_decref(norigin); } @@ -1830,8 +1829,8 @@ void setup_scoreboard(struct blame_scoreboard *sb, const char *path, struct blam &sb->final_buf_size)) ; else - sb->final_buf = read_sha1_file(o->blob_oid.hash, &type, - &sb->final_buf_size); + sb->final_buf = read_object_file(&o->blob_oid, &type, + &sb->final_buf_size); if (!sb->final_buf) die(_("cannot read blob %s for path %s"), diff --git a/builtin/add.c b/builtin/add.c index bf01d89e28..9ef7fb02d5 100644 --- a/builtin/add.c +++ b/builtin/add.c @@ -294,7 +294,7 @@ static struct option builtin_add_options[] = { OPT_BOOL('i', "interactive", &add_interactive, N_("interactive picking")), OPT_BOOL('p', "patch", &patch_interactive, N_("select hunks interactively")), OPT_BOOL('e', "edit", &edit_interactive, N_("edit current diff and apply")), - OPT__FORCE(&ignored_too, N_("allow adding otherwise ignored files")), + OPT__FORCE(&ignored_too, N_("allow adding otherwise ignored files"), 0), OPT_BOOL('u', "update", &take_worktree_changes, N_("update tracked files")), OPT_BOOL(0, "renormalize", &add_renormalize, N_("renormalize EOL of tracked files (implies -u)")), OPT_BOOL('N', "intent-to-add", &intent_to_add, N_("record only the fact that the path will be added later")), @@ -534,10 +534,9 @@ int cmd_add(int argc, const char **argv, const char *prefix) unplug_bulk_checkin(); finish: - if (active_cache_changed) { - if (write_locked_index(&the_index, &lock_file, COMMIT_LOCK)) - die(_("Unable to write new index file")); - } + if (write_locked_index(&the_index, &lock_file, + COMMIT_LOCK | SKIP_IF_UNCHANGED)) + die(_("Unable to write new index file")); UNLEAK(pathspec); UNLEAK(dir); diff --git a/builtin/am.c b/builtin/am.c index 
acfe9d3c8c..1bcc3606c5 100644 --- a/builtin/am.c +++ b/builtin/am.c @@ -1011,6 +1011,7 @@ static void am_setup(struct am_state *state, enum patch_format patch_format, if (mkdir(state->dir, 0777) < 0 && errno != EEXIST) die_errno(_("failed to create directory '%s'"), state->dir); + delete_ref(NULL, "REBASE_HEAD", NULL, REF_NO_DEREF); if (split_mail(state, patch_format, paths, keep_cr) < 0) { am_destroy(state); @@ -1061,7 +1062,7 @@ static void am_setup(struct am_state *state, enum patch_format patch_format, } write_state_text(state, "scissors", str); - sq_quote_argv(&sb, state->git_apply_opts.argv, 0); + sq_quote_argv(&sb, state->git_apply_opts.argv); write_state_text(state, "apply-opt", sb.buf); if (state->rebasing) @@ -1110,6 +1111,7 @@ static void am_next(struct am_state *state) oidclr(&state->orig_commit); unlink(am_path(state, "original-commit")); + delete_ref(NULL, "REBASE_HEAD", NULL, REF_NO_DEREF); if (!get_oid("HEAD", &head)) write_state_text(state, "abort-safety", oid_to_hex(&head)); @@ -1441,6 +1443,8 @@ static int parse_mail_rebase(struct am_state *state, const char *mail) oidcpy(&state->orig_commit, &commit_oid); write_state_text(state, "original-commit", oid_to_hex(&commit_oid)); + update_ref("am", "REBASE_HEAD", &commit_oid, + NULL, REF_NO_DEREF, UPDATE_REFS_DIE_ON_ERR); return 0; } @@ -1546,7 +1550,7 @@ static int fall_back_threeway(const struct am_state *state, const char *index_pa discard_cache(); read_cache_from(index_path); - if (write_index_as_tree(orig_tree.hash, &the_index, index_path, 0, NULL)) + if (write_index_as_tree(&orig_tree, &the_index, index_path, 0, NULL)) return error(_("Repository lacks necessary blobs to fall back on 3-way merge.")); say(state, stdout, _("Using index info to reconstruct a base tree...")); @@ -1571,7 +1575,7 @@ static int fall_back_threeway(const struct am_state *state, const char *index_pa return error(_("Did you hand edit your patch?\n" "It does not apply to blobs recorded in its index.")); - if (write_index_as_tree(their_tree.hash, &the_index, index_path, 0, NULL)) + if (write_index_as_tree(&their_tree, &the_index, index_path, 0, NULL)) return error("could not write tree"); say(state, stdout, _("Falling back to patching base and 3-way merge...")); @@ -1622,7 +1626,7 @@ static void do_commit(const struct am_state *state) if (run_hook_le(NULL, "pre-applypatch", NULL)) exit(1); - if (write_cache_as_tree(tree.hash, 0, NULL)) + if (write_cache_as_tree(&tree, 0, NULL)) die(_("git write-tree failed to write a tree")); if (!get_oid_commit("HEAD", &parent)) { @@ -1641,8 +1645,8 @@ static void do_commit(const struct am_state *state) setenv("GIT_COMMITTER_DATE", state->ignore_date ? 
"" : state->author_date, 1); - if (commit_tree(state->msg, state->msg_len, tree.hash, parents, commit.hash, - author, state->sign_commit)) + if (commit_tree(state->msg, state->msg_len, &tree, parents, &commit, + author, state->sign_commit)) die(_("failed to write commit object")); reflog_msg = getenv("GIT_REFLOG_ACTION"); @@ -1831,8 +1835,7 @@ static void am_run(struct am_state *state, int resume) git_config_get_bool("advice.amworkdir", &advice_amworkdir); if (advice_amworkdir) - printf_ln(_("The copy of the patch that failed is found in: %s"), - am_path(state, "patch")); + printf_ln(_("Use 'git am --show-current-patch' to see the failed patch")); die_user_resolve(state); } @@ -2001,7 +2004,7 @@ static int clean_index(const struct object_id *head, const struct object_id *rem if (fast_forward_to(head_tree, head_tree, 1)) return -1; - if (write_cache_as_tree(index.hash, 0, NULL)) + if (write_cache_as_tree(&index, 0, NULL)) return -1; index_tree = parse_tree_indirect(&index); @@ -2121,6 +2124,34 @@ static void am_abort(struct am_state *state) am_destroy(state); } +static int show_patch(struct am_state *state) +{ + struct strbuf sb = STRBUF_INIT; + const char *patch_path; + int len; + + if (!is_null_oid(&state->orig_commit)) { + const char *av[4] = { "show", NULL, "--", NULL }; + char *new_oid_str; + int ret; + + av[1] = new_oid_str = xstrdup(oid_to_hex(&state->orig_commit)); + ret = run_command_v_opt(av, RUN_GIT_CMD); + free(new_oid_str); + return ret; + } + + patch_path = am_path(state, msgnum(state)); + len = strbuf_read_file(&sb, patch_path, 0); + if (len < 0) + die_errno(_("failed to read '%s'"), patch_path); + + setup_pager(); + write_in_full(1, sb.buf, sb.len); + strbuf_release(&sb); + return 0; +} + /** * parse_options() callback that validates and sets opt->value to the * PATCH_FORMAT_* enum value corresponding to `arg`. @@ -2149,7 +2180,9 @@ enum resume_mode { RESUME_APPLY, RESUME_RESOLVED, RESUME_SKIP, - RESUME_ABORT + RESUME_ABORT, + RESUME_QUIT, + RESUME_SHOW_PATCH }; static int git_am_config(const char *k, const char *v, void *cb) @@ -2171,6 +2204,7 @@ int cmd_am(int argc, const char **argv, const char *prefix) int patch_format = PATCH_FORMAT_UNKNOWN; enum resume_mode resume = RESUME_FALSE; int in_progress; + int ret = 0; const char * const usage[] = { N_("git am [] [( | )...]"), @@ -2249,6 +2283,12 @@ int cmd_am(int argc, const char **argv, const char *prefix) OPT_CMDMODE(0, "abort", &resume, N_("restore the original branch and abort the patching operation."), RESUME_ABORT), + OPT_CMDMODE(0, "quit", &resume, + N_("abort the patching operation but keep HEAD where it is."), + RESUME_QUIT), + OPT_CMDMODE(0, "show-current-patch", &resume, + N_("show the patch being applied."), + RESUME_SHOW_PATCH), OPT_BOOL(0, "committer-date-is-author-date", &state.committer_date_is_author_date, N_("lie about committer date")), @@ -2317,7 +2357,7 @@ int cmd_am(int argc, const char **argv, const char *prefix) * stray directories. 
*/ if (file_exists(state.dir) && !state.rebasing) { - if (resume == RESUME_ABORT) { + if (resume == RESUME_ABORT || resume == RESUME_QUIT) { am_destroy(&state); am_state_release(&state); return 0; @@ -2359,11 +2399,18 @@ int cmd_am(int argc, const char **argv, const char *prefix) case RESUME_ABORT: am_abort(&state); break; + case RESUME_QUIT: + am_rerere_clear(); + am_destroy(&state); + break; + case RESUME_SHOW_PATCH: + ret = show_patch(&state); + break; default: die("BUG: invalid resume value"); } am_state_release(&state); - return 0; + return ret; } diff --git a/builtin/archive.c b/builtin/archive.c index f863465a0f..73971d0dd2 100644 --- a/builtin/archive.c +++ b/builtin/archive.c @@ -55,7 +55,7 @@ static int run_remote_archiver(int argc, const char **argv, buf = packet_read_line(fd[0], NULL); if (!buf) - die(_("git archive: expected ACK/NAK, got EOF")); + die(_("git archive: expected ACK/NAK, got a flush packet")); if (strcmp(buf, "ACK")) { if (starts_with(buf, "NACK ")) die(_("git archive: NACK %s"), buf + 5); diff --git a/builtin/blame.c b/builtin/blame.c index 005f55aaa2..db38c0b307 100644 --- a/builtin/blame.c +++ b/builtin/blame.c @@ -499,7 +499,7 @@ static int read_ancestry(const char *graft_file) static int update_auto_abbrev(int auto_abbrev, struct blame_origin *suspect) { - const char *uniq = find_unique_abbrev(suspect->commit->object.oid.hash, + const char *uniq = find_unique_abbrev(&suspect->commit->object.oid, auto_abbrev); int len = strlen(uniq); if (auto_abbrev < len) @@ -649,6 +649,15 @@ static int blame_move_callback(const struct option *option, const char *arg, int return 0; } +static int is_a_rev(const char *name) +{ + struct object_id oid; + + if (get_oid(name, &oid)) + return 0; + return OBJ_NONE < oid_object_info(&oid, NULL); +} + int cmd_blame(int argc, const char **argv, const char *prefix) { struct rev_info revs; @@ -720,6 +729,7 @@ int cmd_blame(int argc, const char **argv, const char *prefix) for (;;) { switch (parse_options_step(&ctx, options, blame_opt_usage)) { case PARSE_OPT_HELP: + case PARSE_OPT_ERROR: exit(129); case PARSE_OPT_DONE: if (ctx.argv[0]) @@ -845,16 +855,15 @@ int cmd_blame(int argc, const char **argv, const char *prefix) } else { if (argc < 2) usage_with_options(blame_opt_usage, options); - path = add_prefix(prefix, argv[argc - 1]); - if (argc == 3 && !file_exists(path)) { /* (2b) */ + if (argc == 3 && is_a_rev(argv[argc - 1])) { /* (2b) */ path = add_prefix(prefix, argv[1]); argv[1] = argv[2]; + } else { /* (2a) */ + if (argc == 2 && is_a_rev(argv[1]) && !get_git_work_tree()) + die("missing to blame"); + path = add_prefix(prefix, argv[argc - 1]); } argv[argc - 1] = "--"; - - setup_work_tree(); - if (!file_exists(path)) - die_errno("cannot stat path '%s'", path); } revs.disable_stdin = 1; diff --git a/builtin/branch.c b/builtin/branch.c index 8dcc2ed058..5bd2a0dd48 100644 --- a/builtin/branch.c +++ b/builtin/branch.c @@ -273,7 +273,7 @@ static int delete_branches(int argc, const char **argv, int force, int kinds, bname.buf, (flags & REF_ISBROKEN) ? "broken" : (flags & REF_ISSYMREF) ? 
target - : find_unique_abbrev(oid.hash, DEFAULT_ABBREV)); + : find_unique_abbrev(&oid, DEFAULT_ABBREV)); } delete_branch_config(bname.buf); @@ -615,7 +615,7 @@ int cmd_branch(int argc, const char **argv, const char *prefix) OPT_BOOL('l', "create-reflog", &reflog, N_("create the branch's reflog")), OPT_BOOL(0, "edit-description", &edit_description, N_("edit the description for the branch")), - OPT__FORCE(&force, N_("force creation, move/rename, deletion")), + OPT__FORCE(&force, N_("force creation, move/rename, deletion"), PARSE_OPT_NOCOMPLETE), OPT_MERGED(&filter, N_("print only branches that are merged")), OPT_NO_MERGED(&filter, N_("print only branches that are not merged")), OPT_COLUMN(0, "column", &colopts, N_("list branches in columns")), diff --git a/builtin/cat-file.c b/builtin/cat-file.c index f5fa4fd75a..2c46d257cd 100644 --- a/builtin/cat-file.c +++ b/builtin/cat-file.c @@ -32,7 +32,7 @@ static int filter_object(const char *path, unsigned mode, { enum object_type type; - *buf = read_sha1_file(oid->hash, &type, size); + *buf = read_object_file(oid, &type, size); if (!*buf) return error(_("cannot read object %s '%s'"), oid_to_hex(oid), path); @@ -76,8 +76,8 @@ static int cat_one_file(int opt, const char *exp_type, const char *obj_name, buf = NULL; switch (opt) { case 't': - oi.typename = &sb; - if (sha1_object_info_extended(oid.hash, &oi, flags) < 0) + oi.type_name = &sb; + if (oid_object_info_extended(&oid, &oi, flags) < 0) die("git cat-file: could not get object info"); if (sb.len) { printf("%s\n", sb.buf); @@ -88,7 +88,7 @@ static int cat_one_file(int opt, const char *exp_type, const char *obj_name, case 's': oi.sizep = &size; - if (sha1_object_info_extended(oid.hash, &oi, flags) < 0) + if (oid_object_info_extended(&oid, &oi, flags) < 0) die("git cat-file: could not get object info"); printf("%lu\n", size); return 0; @@ -116,7 +116,7 @@ static int cat_one_file(int opt, const char *exp_type, const char *obj_name, /* else fallthrough */ case 'p': - type = sha1_object_info(oid.hash, NULL); + type = oid_object_info(&oid, NULL); if (type < 0) die("Not a valid object name %s", obj_name); @@ -130,7 +130,7 @@ static int cat_one_file(int opt, const char *exp_type, const char *obj_name, if (type == OBJ_BLOB) return stream_blob_to_fd(1, &oid, NULL, 0); - buf = read_sha1_file(oid.hash, &type, &size); + buf = read_object_file(&oid, &type, &size); if (!buf) die("Cannot read object %s", obj_name); @@ -140,8 +140,9 @@ static int cat_one_file(int opt, const char *exp_type, const char *obj_name, case 0: if (type_from_string(exp_type) == OBJ_BLOB) { struct object_id blob_oid; - if (sha1_object_info(oid.hash, NULL) == OBJ_TAG) { - char *buffer = read_sha1_file(oid.hash, &type, &size); + if (oid_object_info(&oid, NULL) == OBJ_TAG) { + char *buffer = read_object_file(&oid, &type, + &size); const char *target; if (!skip_prefix(buffer, "object ", &target) || get_oid_hex(target, &blob_oid)) @@ -150,7 +151,7 @@ static int cat_one_file(int opt, const char *exp_type, const char *obj_name, } else oidcpy(&blob_oid, &oid); - if (sha1_object_info(blob_oid.hash, NULL) == OBJ_BLOB) + if (oid_object_info(&blob_oid, NULL) == OBJ_BLOB) return stream_blob_to_fd(1, &blob_oid, NULL, 0); /* * we attempted to dereference a tag to a blob @@ -159,7 +160,7 @@ static int cat_one_file(int opt, const char *exp_type, const char *obj_name, * fall-back to the usual case. 
*/ } - buf = read_object_with_reference(oid.hash, exp_type, &size, NULL); + buf = read_object_with_reference(&oid, exp_type, &size, NULL); break; default: @@ -229,7 +230,7 @@ static void expand_atom(struct strbuf *sb, const char *atom, int len, if (data->mark_query) data->info.typep = &data->type; else - strbuf_addstr(sb, typename(data->type)); + strbuf_addstr(sb, type_name(data->type)); } else if (is_atom("objectsize", atom, len)) { if (data->mark_query) data->info.sizep = &data->size; @@ -304,8 +305,9 @@ static void print_object_or_die(struct batch_options *opt, struct expand_data *d enum object_type type; if (!textconv_object(data->rest, 0100644, oid, 1, &contents, &size)) - contents = read_sha1_file(oid->hash, &type, - &size); + contents = read_object_file(oid, + &type, + &size); if (!contents) die("could not convert '%s' %s", oid_to_hex(oid), data->rest); @@ -321,7 +323,7 @@ static void print_object_or_die(struct batch_options *opt, struct expand_data *d unsigned long size; void *contents; - contents = read_sha1_file(oid->hash, &type, &size); + contents = read_object_file(oid, &type, &size); if (!contents) die("object %s disappeared", oid_to_hex(oid)); if (type != data->type) @@ -340,8 +342,8 @@ static void batch_object_write(const char *obj_name, struct batch_options *opt, struct strbuf buf = STRBUF_INIT; if (!data->skip_object_info && - sha1_object_info_extended(data->oid.hash, &data->info, - OBJECT_INFO_LOOKUP_REPLACE) < 0) { + oid_object_info_extended(&data->oid, &data->info, + OBJECT_INFO_LOOKUP_REPLACE) < 0) { printf("%s missing\n", obj_name ? obj_name : oid_to_hex(&data->oid)); fflush(stdout); @@ -475,6 +477,8 @@ static int batch_objects(struct batch_options *opt) for_each_loose_object(batch_loose_object, &sa, 0); for_each_packed_object(batch_packed_object, &sa, 0); + if (repository_format_partial_clone) + warning("This repository has extensions.partialClone set. 
Some objects may not be loaded."); cb.opt = opt; cb.expand = &data; diff --git a/builtin/check-ignore.c b/builtin/check-ignore.c index 3e280b9c7a..ec9a959e08 100644 --- a/builtin/check-ignore.c +++ b/builtin/check-ignore.c @@ -72,7 +72,7 @@ static int check_ignore(struct dir_struct *dir, { const char *full_path; char *seen; - int num_ignored = 0, dtype = DT_UNKNOWN, i; + int num_ignored = 0, i; struct exclude *exclude; struct pathspec pathspec; @@ -104,6 +104,7 @@ static int check_ignore(struct dir_struct *dir, full_path = pathspec.items[i].match; exclude = NULL; if (!seen[i]) { + int dtype = DT_UNKNOWN; exclude = last_exclude_matching(dir, &the_index, full_path, &dtype); } diff --git a/builtin/checkout-index.c b/builtin/checkout-index.c index b0e78b819d..a730f6a1aa 100644 --- a/builtin/checkout-index.c +++ b/builtin/checkout-index.c @@ -157,7 +157,7 @@ int cmd_checkout_index(int argc, const char **argv, const char *prefix) struct option builtin_checkout_index_options[] = { OPT_BOOL('a', "all", &all, N_("check out all files in the index")), - OPT__FORCE(&force, N_("force overwrite of existing files")), + OPT__FORCE(&force, N_("force overwrite of existing files"), 0), OPT__QUIET(&quiet, N_("no warning for existing files and files not in index")), OPT_BOOL('n', "no-create", ¬_new, diff --git a/builtin/checkout.c b/builtin/checkout.c index c54c78df54..b49b582071 100644 --- a/builtin/checkout.c +++ b/builtin/checkout.c @@ -54,19 +54,19 @@ struct checkout_opts { struct tree *source_tree; }; -static int post_checkout_hook(struct commit *old, struct commit *new, +static int post_checkout_hook(struct commit *old_commit, struct commit *new_commit, int changed) { return run_hook_le(NULL, "post-checkout", - oid_to_hex(old ? &old->object.oid : &null_oid), - oid_to_hex(new ? &new->object.oid : &null_oid), + oid_to_hex(old_commit ? &old_commit->object.oid : &null_oid), + oid_to_hex(new_commit ? &new_commit->object.oid : &null_oid), changed ? "1" : "0", NULL); - /* "new" can be NULL when checking out from the index before + /* "new_commit" can be NULL when checking out from the index before a commit exists. */ } -static int update_some(const unsigned char *sha1, struct strbuf *base, +static int update_some(const struct object_id *oid, struct strbuf *base, const char *pathname, unsigned mode, int stage, void *context) { int len; @@ -78,7 +78,7 @@ static int update_some(const unsigned char *sha1, struct strbuf *base, len = base->len + strlen(pathname); ce = xcalloc(1, cache_entry_size(len)); - hashcpy(ce->oid.hash, sha1); + oidcpy(&ce->oid, oid); memcpy(ce->name, base->buf, base->len); memcpy(ce->name + base->len, pathname, len - base->len); ce->ce_flags = create_ce_flags(0) | CE_UPDATE; @@ -227,8 +227,7 @@ static int checkout_merged(int pos, const struct checkout *state) * (it also writes the merge result to the object database even * when it may contain conflicts). */ - if (write_sha1_file(result_buf.ptr, result_buf.size, - blob_type, oid.hash)) + if (write_object_file(result_buf.ptr, result_buf.size, blob_type, &oid)) die(_("Unable to add merge result for '%s'"), path); free(result_buf.ptr); ce = make_cache_entry(mode, oid.hash, path, 2, 0); @@ -406,10 +405,10 @@ static void describe_detached_head(const char *msg, struct commit *commit) pp_commit_easy(CMIT_FMT_ONELINE, commit, &sb); if (print_sha1_ellipsis()) { fprintf(stderr, "%s %s... 
%s\n", msg, - find_unique_abbrev(commit->object.oid.hash, DEFAULT_ABBREV), sb.buf); + find_unique_abbrev(&commit->object.oid, DEFAULT_ABBREV), sb.buf); } else { fprintf(stderr, "%s %s %s\n", msg, - find_unique_abbrev(commit->object.oid.hash, DEFAULT_ABBREV), sb.buf); + find_unique_abbrev(&commit->object.oid, DEFAULT_ABBREV), sb.buf); } strbuf_release(&sb); } @@ -472,8 +471,8 @@ static void setup_branch_path(struct branch_info *branch) } static int merge_working_tree(const struct checkout_opts *opts, - struct branch_info *old, - struct branch_info *new, + struct branch_info *old_branch_info, + struct branch_info *new_branch_info, int *writeout_error) { int ret; @@ -485,7 +484,7 @@ static int merge_working_tree(const struct checkout_opts *opts, resolve_undo_clear(); if (opts->force) { - ret = reset_tree(new->commit->tree, opts, 1, writeout_error); + ret = reset_tree(new_branch_info->commit->tree, opts, 1, writeout_error); if (ret) return ret; } else { @@ -511,7 +510,7 @@ static int merge_working_tree(const struct checkout_opts *opts, topts.initial_checkout = is_cache_unborn(); topts.update = 1; topts.merge = 1; - topts.gently = opts->merge && old->commit; + topts.gently = opts->merge && old_branch_info->commit; topts.verbose_update = opts->show_progress; topts.fn = twoway_merge; if (opts->overwrite_ignore) { @@ -519,11 +518,11 @@ static int merge_working_tree(const struct checkout_opts *opts, topts.dir->flags |= DIR_SHOW_IGNORED; setup_standard_excludes(topts.dir); } - tree = parse_tree_indirect(old->commit ? - &old->commit->object.oid : + tree = parse_tree_indirect(old_branch_info->commit ? + &old_branch_info->commit->object.oid : the_hash_algo->empty_tree); init_tree_desc(&trees[0], tree->buffer, tree->size); - tree = parse_tree_indirect(&new->commit->object.oid); + tree = parse_tree_indirect(&new_branch_info->commit->object.oid); init_tree_desc(&trees[1], tree->buffer, tree->size); ret = unpack_trees(2, trees, &topts); @@ -540,10 +539,10 @@ static int merge_working_tree(const struct checkout_opts *opts, return 1; /* - * Without old->commit, the below is the same as + * Without old_branch_info->commit, the below is the same as * the two-tree unpack we already tried and failed. 
*/ - if (!old->commit) + if (!old_branch_info->commit) return 1; /* Do more real merge */ @@ -571,18 +570,18 @@ static int merge_working_tree(const struct checkout_opts *opts, o.verbosity = 0; work = write_tree_from_memory(&o); - ret = reset_tree(new->commit->tree, opts, 1, + ret = reset_tree(new_branch_info->commit->tree, opts, 1, writeout_error); if (ret) return ret; - o.ancestor = old->name; - o.branch1 = new->name; + o.ancestor = old_branch_info->name; + o.branch1 = new_branch_info->name; o.branch2 = "local"; - ret = merge_trees(&o, new->commit->tree, work, - old->commit->tree, &result); + ret = merge_trees(&o, new_branch_info->commit->tree, work, + old_branch_info->commit->tree, &result); if (ret < 0) exit(128); - ret = reset_tree(new->commit->tree, opts, 0, + ret = reset_tree(new_branch_info->commit->tree, opts, 0, writeout_error); strbuf_release(&o.obuf); if (ret) @@ -600,25 +599,25 @@ static int merge_working_tree(const struct checkout_opts *opts, die(_("unable to write new index file")); if (!opts->force && !opts->quiet) - show_local_changes(&new->commit->object, &opts->diff_options); + show_local_changes(&new_branch_info->commit->object, &opts->diff_options); return 0; } -static void report_tracking(struct branch_info *new) +static void report_tracking(struct branch_info *new_branch_info) { struct strbuf sb = STRBUF_INIT; - struct branch *branch = branch_get(new->name); + struct branch *branch = branch_get(new_branch_info->name); - if (!format_tracking_info(branch, &sb)) + if (!format_tracking_info(branch, &sb, AHEAD_BEHIND_FULL)) return; fputs(sb.buf, stdout); strbuf_release(&sb); } static void update_refs_for_switch(const struct checkout_opts *opts, - struct branch_info *old, - struct branch_info *new) + struct branch_info *old_branch_info, + struct branch_info *new_branch_info) { struct strbuf msg = STRBUF_INIT; const char *old_desc, *reflog_msg; @@ -645,69 +644,69 @@ static void update_refs_for_switch(const struct checkout_opts *opts, free(refname); } else - create_branch(opts->new_branch, new->name, + create_branch(opts->new_branch, new_branch_info->name, opts->new_branch_force ? 1 : 0, opts->new_branch_force ? 1 : 0, opts->new_branch_log, opts->quiet, opts->track); - new->name = opts->new_branch; - setup_branch_path(new); + new_branch_info->name = opts->new_branch; + setup_branch_path(new_branch_info); } - old_desc = old->name; - if (!old_desc && old->commit) - old_desc = oid_to_hex(&old->commit->object.oid); + old_desc = old_branch_info->name; + if (!old_desc && old_branch_info->commit) + old_desc = oid_to_hex(&old_branch_info->commit->object.oid); reflog_msg = getenv("GIT_REFLOG_ACTION"); if (!reflog_msg) strbuf_addf(&msg, "checkout: moving from %s to %s", - old_desc ? old_desc : "(invalid)", new->name); + old_desc ? old_desc : "(invalid)", new_branch_info->name); else strbuf_insert(&msg, 0, reflog_msg, strlen(reflog_msg)); - if (!strcmp(new->name, "HEAD") && !new->path && !opts->force_detach) { + if (!strcmp(new_branch_info->name, "HEAD") && !new_branch_info->path && !opts->force_detach) { /* Nothing to do. */ - } else if (opts->force_detach || !new->path) { /* No longer on any branch. */ - update_ref(msg.buf, "HEAD", &new->commit->object.oid, NULL, + } else if (opts->force_detach || !new_branch_info->path) { /* No longer on any branch. 
*/ + update_ref(msg.buf, "HEAD", &new_branch_info->commit->object.oid, NULL, REF_NO_DEREF, UPDATE_REFS_DIE_ON_ERR); if (!opts->quiet) { - if (old->path && + if (old_branch_info->path && advice_detached_head && !opts->force_detach) - detach_advice(new->name); - describe_detached_head(_("HEAD is now at"), new->commit); + detach_advice(new_branch_info->name); + describe_detached_head(_("HEAD is now at"), new_branch_info->commit); } - } else if (new->path) { /* Switch branches. */ - if (create_symref("HEAD", new->path, msg.buf) < 0) + } else if (new_branch_info->path) { /* Switch branches. */ + if (create_symref("HEAD", new_branch_info->path, msg.buf) < 0) die(_("unable to update HEAD")); if (!opts->quiet) { - if (old->path && !strcmp(new->path, old->path)) { + if (old_branch_info->path && !strcmp(new_branch_info->path, old_branch_info->path)) { if (opts->new_branch_force) fprintf(stderr, _("Reset branch '%s'\n"), - new->name); + new_branch_info->name); else fprintf(stderr, _("Already on '%s'\n"), - new->name); + new_branch_info->name); } else if (opts->new_branch) { if (opts->branch_exists) - fprintf(stderr, _("Switched to and reset branch '%s'\n"), new->name); + fprintf(stderr, _("Switched to and reset branch '%s'\n"), new_branch_info->name); else - fprintf(stderr, _("Switched to a new branch '%s'\n"), new->name); + fprintf(stderr, _("Switched to a new branch '%s'\n"), new_branch_info->name); } else { fprintf(stderr, _("Switched to branch '%s'\n"), - new->name); + new_branch_info->name); } } - if (old->path && old->name) { - if (!ref_exists(old->path) && reflog_exists(old->path)) - delete_reflog(old->path); + if (old_branch_info->path && old_branch_info->name) { + if (!ref_exists(old_branch_info->path) && reflog_exists(old_branch_info->path)) + delete_reflog(old_branch_info->path); } } remove_branch_state(); strbuf_release(&msg); if (!opts->quiet && - (new->path || (!opts->force_detach && !strcmp(new->name, "HEAD")))) - report_tracking(new); + (new_branch_info->path || (!opts->force_detach && !strcmp(new_branch_info->name, "HEAD")))) + report_tracking(new_branch_info); } static int add_pending_uninteresting_ref(const char *refname, @@ -721,7 +720,7 @@ static int add_pending_uninteresting_ref(const char *refname, static void describe_one_orphan(struct strbuf *sb, struct commit *commit) { strbuf_addstr(sb, " "); - strbuf_add_unique_abbrev(sb, commit->object.oid.hash, DEFAULT_ABBREV); + strbuf_add_unique_abbrev(sb, &commit->object.oid, DEFAULT_ABBREV); strbuf_addch(sb, ' '); if (!parse_commit(commit)) pp_commit_easy(CMIT_FMT_ONELINE, commit, sb); @@ -779,7 +778,7 @@ static void suggest_reattach(struct commit *commit, struct rev_info *revs) " git branch %s\n\n", /* Give ngettext() the count */ lost), - find_unique_abbrev(commit->object.oid.hash, DEFAULT_ABBREV)); + find_unique_abbrev(&commit->object.oid, DEFAULT_ABBREV)); } /* @@ -787,10 +786,10 @@ static void suggest_reattach(struct commit *commit, struct rev_info *revs) * HEAD. If it is not reachable from any ref, this is the last chance * for the user to do so without resorting to reflog. 
*/ -static void orphaned_commit_warning(struct commit *old, struct commit *new) +static void orphaned_commit_warning(struct commit *old_commit, struct commit *new_commit) { struct rev_info revs; - struct object *object = &old->object; + struct object *object = &old_commit->object; init_revisions(&revs, NULL); setup_revisions(0, NULL, &revs, NULL); @@ -799,57 +798,57 @@ static void orphaned_commit_warning(struct commit *old, struct commit *new) add_pending_object(&revs, object, oid_to_hex(&object->oid)); for_each_ref(add_pending_uninteresting_ref, &revs); - add_pending_oid(&revs, "HEAD", &new->object.oid, UNINTERESTING); + add_pending_oid(&revs, "HEAD", &new_commit->object.oid, UNINTERESTING); if (prepare_revision_walk(&revs)) die(_("internal error in revision walk")); - if (!(old->object.flags & UNINTERESTING)) - suggest_reattach(old, &revs); + if (!(old_commit->object.flags & UNINTERESTING)) + suggest_reattach(old_commit, &revs); else - describe_detached_head(_("Previous HEAD position was"), old); + describe_detached_head(_("Previous HEAD position was"), old_commit); /* Clean up objects used, as they will be reused. */ clear_commit_marks_all(ALL_REV_FLAGS); } static int switch_branches(const struct checkout_opts *opts, - struct branch_info *new) + struct branch_info *new_branch_info) { int ret = 0; - struct branch_info old; + struct branch_info old_branch_info; void *path_to_free; struct object_id rev; int flag, writeout_error = 0; - memset(&old, 0, sizeof(old)); - old.path = path_to_free = resolve_refdup("HEAD", 0, &rev, &flag); - if (old.path) - old.commit = lookup_commit_reference_gently(&rev, 1); + memset(&old_branch_info, 0, sizeof(old_branch_info)); + old_branch_info.path = path_to_free = resolve_refdup("HEAD", 0, &rev, &flag); + if (old_branch_info.path) + old_branch_info.commit = lookup_commit_reference_gently(&rev, 1); if (!(flag & REF_ISSYMREF)) - old.path = NULL; + old_branch_info.path = NULL; - if (old.path) - skip_prefix(old.path, "refs/heads/", &old.name); + if (old_branch_info.path) + skip_prefix(old_branch_info.path, "refs/heads/", &old_branch_info.name); - if (!new->name) { - new->name = "HEAD"; - new->commit = old.commit; - if (!new->commit) + if (!new_branch_info->name) { + new_branch_info->name = "HEAD"; + new_branch_info->commit = old_branch_info.commit; + if (!new_branch_info->commit) die(_("You are on a branch yet to be born")); - parse_commit_or_die(new->commit); + parse_commit_or_die(new_branch_info->commit); } - ret = merge_working_tree(opts, &old, new, &writeout_error); + ret = merge_working_tree(opts, &old_branch_info, new_branch_info, &writeout_error); if (ret) { free(path_to_free); return ret; } - if (!opts->quiet && !old.path && old.commit && new->commit != old.commit) - orphaned_commit_warning(old.commit, new->commit); + if (!opts->quiet && !old_branch_info.path && old_branch_info.commit && new_branch_info->commit != old_branch_info.commit) + orphaned_commit_warning(old_branch_info.commit, new_branch_info->commit); - update_refs_for_switch(opts, &old, new); + update_refs_for_switch(opts, &old_branch_info, new_branch_info); - ret = post_checkout_hook(old.commit, new->commit, 1); + ret = post_checkout_hook(old_branch_info.commit, new_branch_info->commit, 1); free(path_to_free); return ret || writeout_error; } @@ -870,7 +869,7 @@ static int git_checkout_config(const char *var, const char *value, void *cb) static int parse_branchname_arg(int argc, const char **argv, int dwim_new_local_branch_ok, - struct branch_info *new, + struct branch_info *new_branch_info, 
struct checkout_opts *opts, struct object_id *rev) { @@ -988,22 +987,22 @@ static int parse_branchname_arg(int argc, const char **argv, argv++; argc--; - new->name = arg; - setup_branch_path(new); + new_branch_info->name = arg; + setup_branch_path(new_branch_info); - if (!check_refname_format(new->path, 0) && - !read_ref(new->path, &branch_rev)) + if (!check_refname_format(new_branch_info->path, 0) && + !read_ref(new_branch_info->path, &branch_rev)) oidcpy(rev, &branch_rev); else - new->path = NULL; /* not an existing branch */ + new_branch_info->path = NULL; /* not an existing branch */ - new->commit = lookup_commit_reference_gently(rev, 1); - if (!new->commit) { + new_branch_info->commit = lookup_commit_reference_gently(rev, 1); + if (!new_branch_info->commit) { /* not a commit */ *source_tree = parse_tree_indirect(rev); } else { - parse_commit_or_die(new->commit); - *source_tree = new->commit->tree; + parse_commit_or_die(new_branch_info->commit); + *source_tree = new_branch_info->commit->tree; } if (!*source_tree) /* case (1): want a tree */ @@ -1043,7 +1042,7 @@ static int switch_unborn_to_new_branch(const struct checkout_opts *opts) } static int checkout_branch(struct checkout_opts *opts, - struct branch_info *new) + struct branch_info *new_branch_info) { if (opts->pathspec.nr) die(_("paths cannot be used with switching branches")); @@ -1072,21 +1071,21 @@ static int checkout_branch(struct checkout_opts *opts, } else if (opts->track == BRANCH_TRACK_UNSPECIFIED) opts->track = git_branch_track; - if (new->name && !new->commit) + if (new_branch_info->name && !new_branch_info->commit) die(_("Cannot switch branch to a non-commit '%s'"), - new->name); + new_branch_info->name); - if (new->path && !opts->force_detach && !opts->new_branch && + if (new_branch_info->path && !opts->force_detach && !opts->new_branch && !opts->ignore_other_worktrees) { int flag; char *head_ref = resolve_refdup("HEAD", 0, NULL, &flag); if (head_ref && - (!(flag & REF_ISSYMREF) || strcmp(head_ref, new->path))) - die_if_checked_out(new->path, 1); + (!(flag & REF_ISSYMREF) || strcmp(head_ref, new_branch_info->path))) + die_if_checked_out(new_branch_info->path, 1); free(head_ref); } - if (!new->commit && opts->new_branch) { + if (!new_branch_info->commit && opts->new_branch) { struct object_id rev; int flag; @@ -1094,13 +1093,13 @@ static int checkout_branch(struct checkout_opts *opts, (flag & REF_ISSYMREF) && is_null_oid(&rev)) return switch_unborn_to_new_branch(opts); } - return switch_branches(opts, new); + return switch_branches(opts, new_branch_info); } int cmd_checkout(int argc, const char **argv, const char *prefix) { struct checkout_opts opts; - struct branch_info new; + struct branch_info new_branch_info; char *conflict_style = NULL; int dwim_new_local_branch = 1; struct option options[] = { @@ -1118,9 +1117,12 @@ int cmd_checkout(int argc, const char **argv, const char *prefix) 2), OPT_SET_INT('3', "theirs", &opts.writeout_stage, N_("checkout their version for unmerged files"), 3), - OPT__FORCE(&opts.force, N_("force checkout (throw away local modifications)")), + OPT__FORCE(&opts.force, N_("force checkout (throw away local modifications)"), + PARSE_OPT_NOCOMPLETE), OPT_BOOL('m', "merge", &opts.merge, N_("perform a 3-way merge with the new branch")), - OPT_BOOL(0, "overwrite-ignore", &opts.overwrite_ignore, N_("update ignored files (default)")), + OPT_BOOL_F(0, "overwrite-ignore", &opts.overwrite_ignore, + N_("update ignored files (default)"), + PARSE_OPT_NOCOMPLETE), OPT_STRING(0, "conflict", &conflict_style, 
N_("style"), N_("conflict style (merge or diff3)")), OPT_BOOL('p', "patch", &opts.patch_mode, N_("select hunks interactively")), @@ -1138,7 +1140,7 @@ int cmd_checkout(int argc, const char **argv, const char *prefix) }; memset(&opts, 0, sizeof(opts)); - memset(&new, 0, sizeof(new)); + memset(&new_branch_info, 0, sizeof(new_branch_info)); opts.overwrite_ignore = 1; opts.prefix = prefix; opts.show_progress = -1; @@ -1210,7 +1212,7 @@ int cmd_checkout(int argc, const char **argv, const char *prefix) opts.track == BRANCH_TRACK_UNSPECIFIED && !opts.new_branch; int n = parse_branchname_arg(argc, argv, dwim_ok, - &new, &opts, &rev); + &new_branch_info, &opts, &rev); argv += n; argc -= n; } @@ -1253,7 +1255,7 @@ int cmd_checkout(int argc, const char **argv, const char *prefix) UNLEAK(opts); if (opts.patch_mode || opts.pathspec.nr) - return checkout_paths(&opts, new.name); + return checkout_paths(&opts, new_branch_info.name); else - return checkout_branch(&opts, &new); + return checkout_branch(&opts, &new_branch_info); } diff --git a/builtin/clean.c b/builtin/clean.c index 189e20628c..fad533a0a7 100644 --- a/builtin/clean.c +++ b/builtin/clean.c @@ -909,7 +909,7 @@ int cmd_clean(int argc, const char **argv, const char *prefix) struct option options[] = { OPT__QUIET(&quiet, N_("do not print names of files removed")), OPT__DRY_RUN(&dry_run, N_("dry run")), - OPT__FORCE(&force, N_("force")), + OPT__FORCE(&force, N_("force"), PARSE_OPT_NOCOMPLETE), OPT_BOOL('i', "interactive", &interactive, N_("interactive cleaning")), OPT_BOOL('d', NULL, &remove_directories, N_("remove whole directories")), diff --git a/builtin/clone.c b/builtin/clone.c index 284651797e..101c27a593 100644 --- a/builtin/clone.c +++ b/builtin/clone.c @@ -26,6 +26,7 @@ #include "run-command.h" #include "connected.h" #include "packfile.h" +#include "list-objects-filter-options.h" /* * Overall FIXMEs: @@ -60,6 +61,7 @@ static struct string_list option_optional_reference = STRING_LIST_INIT_NODUP; static int option_dissociate; static int max_jobs = -1; static struct string_list option_recurse_submodules = STRING_LIST_INIT_NODUP; +static struct list_objects_filter_options filter_options; static int recurse_submodules_cb(const struct option *opt, const char *arg, int unset) @@ -135,6 +137,7 @@ static struct option builtin_clone_options[] = { TRANSPORT_FAMILY_IPV4), OPT_SET_INT('6', "ipv6", &family, N_("use IPv6 addresses only"), TRANSPORT_FAMILY_IPV6), + OPT_PARSE_LIST_OBJECTS_FILTER(&filter_options), OPT_END() }; @@ -893,6 +896,8 @@ int cmd_clone(int argc, const char **argv, const char *prefix) struct refspec *refspec; const char *fetch_pattern; + fetch_if_missing = 0; + packet_trace_identity("clone"); argc = parse_options(argc, argv, prefix, builtin_clone_options, builtin_clone_usage, 0); @@ -1090,6 +1095,8 @@ int cmd_clone(int argc, const char **argv, const char *prefix) warning(_("--shallow-since is ignored in local clones; use file:// instead.")); if (option_not.nr) warning(_("--shallow-exclude is ignored in local clones; use file:// instead.")); + if (filter_options.choice) + warning(_("--filter is ignored in local clones; use file:// instead.")); if (!access(mkpath("%s/shallow", path), F_OK)) { if (option_local > 0) warning(_("source repository is shallow, ignoring --local")); @@ -1118,7 +1125,13 @@ int cmd_clone(int argc, const char **argv, const char *prefix) transport_set_option(transport, TRANS_OPT_UPLOADPACK, option_upload_pack); - if (transport->smart_options && !deepen) + if (filter_options.choice) { + 
transport_set_option(transport, TRANS_OPT_LIST_OBJECTS_FILTER, + filter_options.filter_spec); + transport_set_option(transport, TRANS_OPT_FROM_PROMISOR, "1"); + } + + if (transport->smart_options && !deepen && !filter_options.choice) transport->smart_options->check_self_contained_and_connected = 1; refs = transport_get_remote_refs(transport); @@ -1178,13 +1191,17 @@ int cmd_clone(int argc, const char **argv, const char *prefix) write_refspec_config(src_ref_prefix, our_head_points_at, remote_head_points_at, &branch_top); + if (filter_options.choice) + partial_clone_register("origin", &filter_options); + if (is_local) clone_local(path, git_dir); else if (refs && complete_refs_before_fetch) transport_fetch_refs(transport, mapped_refs); update_remote_refs(refs, mapped_refs, remote_head_points_at, - branch_top.buf, reflog_msg.buf, transport, !is_local); + branch_top.buf, reflog_msg.buf, transport, + !is_local && !filter_options.choice); update_head(our_head_points_at, remote_head, reflog_msg.buf); @@ -1205,6 +1222,7 @@ int cmd_clone(int argc, const char **argv, const char *prefix) } junk_mode = JUNK_LEAVE_REPO; + fetch_if_missing = 1; err = checkout(submodule_progress); strbuf_release(&reflog_msg); diff --git a/builtin/commit-tree.c b/builtin/commit-tree.c index 2177251e24..ecf42191da 100644 --- a/builtin/commit-tree.c +++ b/builtin/commit-tree.c @@ -58,7 +58,7 @@ int cmd_commit_tree(int argc, const char **argv, const char *prefix) usage(commit_tree_usage); if (get_oid_commit(argv[i], &oid)) die("Not a valid object name %s", argv[i]); - assert_sha1_type(oid.hash, OBJ_COMMIT); + assert_oid_type(&oid, OBJ_COMMIT); new_parent(lookup_commit(&oid), &parents); continue; } @@ -117,8 +117,8 @@ int cmd_commit_tree(int argc, const char **argv, const char *prefix) die_errno("git commit-tree: failed to read"); } - if (commit_tree(buffer.buf, buffer.len, tree_oid.hash, parents, - commit_oid.hash, NULL, sign_commit)) { + if (commit_tree(buffer.buf, buffer.len, &tree_oid, parents, &commit_oid, + NULL, sign_commit)) { strbuf_release(&buffer); return 1; } diff --git a/builtin/commit.c b/builtin/commit.c index 4610e3d8e3..37fcb55ab0 100644 --- a/builtin/commit.c +++ b/builtin/commit.c @@ -31,9 +31,7 @@ #include "gpg-interface.h" #include "column.h" #include "sequencer.h" -#include "notes-utils.h" #include "mailmap.h" -#include "sigchain.h" static const char * const builtin_commit_usage[] = { N_("git commit [] [--] ..."), @@ -45,31 +43,6 @@ static const char * const builtin_status_usage[] = { NULL }; -static const char implicit_ident_advice_noconfig[] = -N_("Your name and email address were configured automatically based\n" -"on your username and hostname. Please check that they are accurate.\n" -"You can suppress this message by setting them explicitly. Run the\n" -"following command and follow the instructions in your editor to edit\n" -"your configuration file:\n" -"\n" -" git config --global --edit\n" -"\n" -"After doing this, you may fix the identity used for this commit with:\n" -"\n" -" git commit --amend --reset-author\n"); - -static const char implicit_ident_advice_config[] = -N_("Your name and email address were configured automatically based\n" -"on your username and hostname. 
Please check that they are accurate.\n" -"You can suppress this message by setting them explicitly:\n" -"\n" -" git config --global user.name \"Your Name\"\n" -" git config --global user.email you@example.com\n" -"\n" -"After doing this, you may fix the identity used for this commit with:\n" -"\n" -" git commit --amend --reset-author\n"); - static const char empty_amend_advice[] = N_("You asked to amend the most recent commit, but doing so would make\n" "it empty. You can repeat your command with --allow-empty, or you can\n" @@ -93,8 +66,6 @@ N_("If you wish to skip this commit, use:\n" "Then \"git cherry-pick --continue\" will resume cherry-picking\n" "the remaining commits.\n"); -static GIT_PATH_FUNC(git_path_commit_editmsg, "COMMIT_EDITMSG") - static const char *use_message_buffer; static struct lock_file index_lock; /* real index */ static struct lock_file false_lock; /* used only for partial commits */ @@ -128,12 +99,7 @@ static char *sign_commit; * if editor is used, and only the whitespaces if the message * is specified explicitly. */ -static enum { - CLEANUP_SPACE, - CLEANUP_NONE, - CLEANUP_SCISSORS, - CLEANUP_ALL -} cleanup_mode; +static enum commit_msg_cleanup_mode cleanup_mode; static const char *cleanup_arg; static enum commit_whence whence; @@ -423,13 +389,9 @@ static const char *prepare_index(int argc, const char **argv, const char *prefix if (active_cache_changed || !cache_tree_fully_valid(active_cache_tree)) update_main_cache_tree(WRITE_TREE_SILENT); - if (active_cache_changed) { - if (write_locked_index(&the_index, &index_lock, - COMMIT_LOCK)) - die(_("unable to write new_index file")); - } else { - rollback_lock_file(&index_lock); - } + if (write_locked_index(&the_index, &index_lock, + COMMIT_LOCK | SKIP_IF_UNCHANGED)) + die(_("unable to write new_index file")); commit_style = COMMIT_AS_IS; ret = get_index_file(); goto out; @@ -673,7 +635,7 @@ static int prepare_to_commit(const char *index_file, const char *prefix, struct strbuf sb = STRBUF_INIT; const char *hook_arg1 = NULL; const char *hook_arg2 = NULL; - int clean_message_contents = (cleanup_mode != CLEANUP_NONE); + int clean_message_contents = (cleanup_mode != COMMIT_MSG_CLEANUP_NONE); int old_display_comment_prefix; /* This checks and barfs if author is badly specified */ @@ -814,7 +776,7 @@ static int prepare_to_commit(const char *index_file, const char *prefix, struct ident_split ci, ai; if (whence != FROM_COMMIT) { - if (cleanup_mode == CLEANUP_SCISSORS) + if (cleanup_mode == COMMIT_MSG_CLEANUP_SCISSORS) wt_status_add_cut_line(s->fp); status_printf_ln(s, GIT_COLOR_NORMAL, whence == FROM_MERGE @@ -834,14 +796,15 @@ static int prepare_to_commit(const char *index_file, const char *prefix, } fprintf(s->fp, "\n"); - if (cleanup_mode == CLEANUP_ALL) + if (cleanup_mode == COMMIT_MSG_CLEANUP_ALL) status_printf(s, GIT_COLOR_NORMAL, _("Please enter the commit message for your changes." " Lines starting\nwith '%c' will be ignored, and an empty" " message aborts the commit.\n"), comment_line_char); - else if (cleanup_mode == CLEANUP_SCISSORS && whence == FROM_COMMIT) + else if (cleanup_mode == COMMIT_MSG_CLEANUP_SCISSORS && + whence == FROM_COMMIT) wt_status_add_cut_line(s->fp); - else /* CLEANUP_SPACE, that is. */ + else /* COMMIT_MSG_CLEANUP_SPACE, that is. */ status_printf(s, GIT_COLOR_NORMAL, _("Please enter the commit message for your changes." 
" Lines starting\n" @@ -986,65 +949,6 @@ static int prepare_to_commit(const char *index_file, const char *prefix, return 1; } -static int rest_is_empty(struct strbuf *sb, int start) -{ - int i, eol; - const char *nl; - - /* Check if the rest is just whitespace and Signed-off-by's. */ - for (i = start; i < sb->len; i++) { - nl = memchr(sb->buf + i, '\n', sb->len - i); - if (nl) - eol = nl - sb->buf; - else - eol = sb->len; - - if (strlen(sign_off_header) <= eol - i && - starts_with(sb->buf + i, sign_off_header)) { - i = eol; - continue; - } - while (i < eol) - if (!isspace(sb->buf[i++])) - return 0; - } - - return 1; -} - -/* - * Find out if the message in the strbuf contains only whitespace and - * Signed-off-by lines. - */ -static int message_is_empty(struct strbuf *sb) -{ - if (cleanup_mode == CLEANUP_NONE && sb->len) - return 0; - return rest_is_empty(sb, 0); -} - -/* - * See if the user edited the message in the editor or left what - * was in the template intact - */ -static int template_untouched(struct strbuf *sb) -{ - struct strbuf tmpl = STRBUF_INIT; - const char *start; - - if (cleanup_mode == CLEANUP_NONE && sb->len) - return 0; - - if (!template_file || strbuf_read_file(&tmpl, template_file, 0) <= 0) - return 0; - - strbuf_stripspace(&tmpl, cleanup_mode == CLEANUP_ALL); - if (!skip_prefix(sb->buf, tmpl.buf, &start)) - start = sb->buf; - strbuf_release(&tmpl); - return rest_is_empty(sb, start - sb->buf); -} - static const char *find_author_by_nickname(const char *name) { struct rev_info revs; @@ -1153,6 +1057,9 @@ static void finalize_deferred_config(struct wt_status *s) s->show_branch = status_deferred_config.show_branch; if (s->show_branch < 0) s->show_branch = 0; + + if (s->ahead_behind_flags == AHEAD_BEHIND_UNSPECIFIED) + s->ahead_behind_flags = AHEAD_BEHIND_FULL; } static int parse_and_validate_options(int argc, const char *argv[], @@ -1229,15 +1136,17 @@ static int parse_and_validate_options(int argc, const char *argv[], if (argc == 0 && (also || (only && !amend && !allow_empty))) die(_("No paths with --include/--only does not make sense.")); if (!cleanup_arg || !strcmp(cleanup_arg, "default")) - cleanup_mode = use_editor ? CLEANUP_ALL : CLEANUP_SPACE; + cleanup_mode = use_editor ? COMMIT_MSG_CLEANUP_ALL : + COMMIT_MSG_CLEANUP_SPACE; else if (!strcmp(cleanup_arg, "verbatim")) - cleanup_mode = CLEANUP_NONE; + cleanup_mode = COMMIT_MSG_CLEANUP_NONE; else if (!strcmp(cleanup_arg, "whitespace")) - cleanup_mode = CLEANUP_SPACE; + cleanup_mode = COMMIT_MSG_CLEANUP_SPACE; else if (!strcmp(cleanup_arg, "strip")) - cleanup_mode = CLEANUP_ALL; + cleanup_mode = COMMIT_MSG_CLEANUP_ALL; else if (!strcmp(cleanup_arg, "scissors")) - cleanup_mode = use_editor ? CLEANUP_SCISSORS : CLEANUP_SPACE; + cleanup_mode = use_editor ? 
COMMIT_MSG_CLEANUP_SCISSORS : + COMMIT_MSG_CLEANUP_SPACE; else die(_("Invalid cleanup mode %s"), cleanup_arg); @@ -1367,6 +1276,8 @@ int cmd_status(int argc, const char **argv, const char *prefix) N_("show branch information")), OPT_BOOL(0, "show-stash", &s.show_stash, N_("show stash information")), + OPT_BOOL(0, "ahead-behind", &s.ahead_behind_flags, + N_("compute full ahead/behind values")), { OPTION_CALLBACK, 0, "porcelain", &status_format, N_("version"), N_("machine-readable output"), PARSE_OPT_OPTARG, opt_parse_porcelain }, @@ -1439,98 +1350,6 @@ int cmd_status(int argc, const char **argv, const char *prefix) return 0; } -static const char *implicit_ident_advice(void) -{ - char *user_config = expand_user_path("~/.gitconfig", 0); - char *xdg_config = xdg_config_home("config"); - int config_exists = file_exists(user_config) || file_exists(xdg_config); - - free(user_config); - free(xdg_config); - - if (config_exists) - return _(implicit_ident_advice_config); - else - return _(implicit_ident_advice_noconfig); - -} - -static void print_summary(const char *prefix, const struct object_id *oid, - int initial_commit) -{ - struct rev_info rev; - struct commit *commit; - struct strbuf format = STRBUF_INIT; - const char *head; - struct pretty_print_context pctx = {0}; - struct strbuf author_ident = STRBUF_INIT; - struct strbuf committer_ident = STRBUF_INIT; - - commit = lookup_commit(oid); - if (!commit) - die(_("couldn't look up newly created commit")); - if (parse_commit(commit)) - die(_("could not parse newly created commit")); - - strbuf_addstr(&format, "format:%h] %s"); - - format_commit_message(commit, "%an <%ae>", &author_ident, &pctx); - format_commit_message(commit, "%cn <%ce>", &committer_ident, &pctx); - if (strbuf_cmp(&author_ident, &committer_ident)) { - strbuf_addstr(&format, "\n Author: "); - strbuf_addbuf_percentquote(&format, &author_ident); - } - if (author_date_is_interesting()) { - struct strbuf date = STRBUF_INIT; - format_commit_message(commit, "%ad", &date, &pctx); - strbuf_addstr(&format, "\n Date: "); - strbuf_addbuf_percentquote(&format, &date); - strbuf_release(&date); - } - if (!committer_ident_sufficiently_given()) { - strbuf_addstr(&format, "\n Committer: "); - strbuf_addbuf_percentquote(&format, &committer_ident); - if (advice_implicit_identity) { - strbuf_addch(&format, '\n'); - strbuf_addstr(&format, implicit_ident_advice()); - } - } - strbuf_release(&author_ident); - strbuf_release(&committer_ident); - - init_revisions(&rev, prefix); - setup_revisions(0, NULL, &rev, NULL); - - rev.diff = 1; - rev.diffopt.output_format = - DIFF_FORMAT_SHORTSTAT | DIFF_FORMAT_SUMMARY; - - rev.verbose_header = 1; - rev.show_root_diff = 1; - get_commit_format(format.buf, &rev); - rev.always_show_header = 0; - rev.diffopt.detect_rename = DIFF_DETECT_RENAME; - rev.diffopt.break_opt = 0; - diff_setup_done(&rev.diffopt); - - head = resolve_ref_unsafe("HEAD", 0, NULL, NULL); - if (!head) - die_errno(_("unable to resolve HEAD after creating commit")); - if (!strcmp(head, "HEAD")) - head = _("detached HEAD"); - else - skip_prefix(head, "refs/heads/", &head); - printf("[%s%s ", head, initial_commit ? 
_(" (root-commit)") : ""); - - if (!log_tree_commit(&rev, commit)) { - rev.always_show_header = 1; - rev.use_terminator = 1; - log_tree_commit(&rev, commit); - } - - strbuf_release(&format); -} - static int git_commit_config(const char *k, const char *v, void *cb) { struct wt_status *s = cb; @@ -1560,37 +1379,6 @@ static int git_commit_config(const char *k, const char *v, void *cb) return git_status_config(k, v, s); } -static int run_rewrite_hook(const struct object_id *oldoid, - const struct object_id *newoid) -{ - struct child_process proc = CHILD_PROCESS_INIT; - const char *argv[3]; - int code; - struct strbuf sb = STRBUF_INIT; - - argv[0] = find_hook("post-rewrite"); - if (!argv[0]) - return 0; - - argv[1] = "amend"; - argv[2] = NULL; - - proc.argv = argv; - proc.in = -1; - proc.stdout_to_stderr = 1; - - code = start_command(&proc); - if (code) - return code; - strbuf_addf(&sb, "%s %s\n", oid_to_hex(oldoid), oid_to_hex(newoid)); - sigchain_push(SIGPIPE, SIG_IGN); - write_in_full(proc.in, sb.buf, sb.len); - close(proc.in); - strbuf_release(&sb); - sigchain_pop(SIGPIPE); - return finish_command(&proc); -} - int run_commit_hook(int editor_is_used, const char *index_file, const char *name, ...) { struct argv_array hook_env = ARGV_ARRAY_INIT; @@ -1615,6 +1403,7 @@ int run_commit_hook(int editor_is_used, const char *index_file, const char *name int cmd_commit(int argc, const char **argv, const char *prefix) { + const char *argv_gc_auto[] = {"gc", "--auto", NULL}; static struct wt_status s; static struct option builtin_commit_options[] = { OPT__QUIET(&quiet, N_("suppress summary after successful commit")), @@ -1650,6 +1439,8 @@ int cmd_commit(int argc, const char **argv, const char *prefix) OPT_SET_INT(0, "short", &status_format, N_("show status concisely"), STATUS_FORMAT_SHORT), OPT_BOOL(0, "branch", &s.show_branch, N_("show branch information")), + OPT_BOOL(0, "ahead-behind", &s.ahead_behind_flags, + N_("compute full ahead/behind values")), OPT_SET_INT(0, "porcelain", &status_format, N_("machine-readable output"), STATUS_FORMAT_PORCELAIN), OPT_SET_INT(0, "long", &status_format, @@ -1673,13 +1464,11 @@ int cmd_commit(int argc, const char **argv, const char *prefix) struct strbuf sb = STRBUF_INIT; struct strbuf author_ident = STRBUF_INIT; const char *index_file, *reflog_msg; - char *nl; struct object_id oid; struct commit_list *parents = NULL; struct stat statbuf; struct commit *current_head = NULL; struct commit_extra_header *extra = NULL; - struct ref_transaction *transaction; struct strbuf err = STRBUF_INIT; if (argc == 2 && !strcmp(argv[1], "-h")) @@ -1770,17 +1559,17 @@ int cmd_commit(int argc, const char **argv, const char *prefix) } if (verbose || /* Truncate the message just before the diff, if any. 
*/ - cleanup_mode == CLEANUP_SCISSORS) + cleanup_mode == COMMIT_MSG_CLEANUP_SCISSORS) strbuf_setlen(&sb, wt_status_locate_end(sb.buf, sb.len)); - if (cleanup_mode != CLEANUP_NONE) - strbuf_stripspace(&sb, cleanup_mode == CLEANUP_ALL); + if (cleanup_mode != COMMIT_MSG_CLEANUP_NONE) + strbuf_stripspace(&sb, cleanup_mode == COMMIT_MSG_CLEANUP_ALL); - if (message_is_empty(&sb) && !allow_empty_message) { + if (message_is_empty(&sb, cleanup_mode) && !allow_empty_message) { rollback_index_files(); fprintf(stderr, _("Aborting commit due to empty commit message.\n")); exit(1); } - if (template_untouched(&sb) && !allow_empty_message) { + if (template_untouched(&sb, template_file, cleanup_mode) && !allow_empty_message) { rollback_index_files(); fprintf(stderr, _("Aborting commit; you did not edit the message.\n")); exit(1); @@ -1794,33 +1583,20 @@ int cmd_commit(int argc, const char **argv, const char *prefix) append_merge_tag_headers(parents, &tail); } - if (commit_tree_extended(sb.buf, sb.len, active_cache_tree->oid.hash, - parents, oid.hash, author_ident.buf, sign_commit, extra)) { + if (commit_tree_extended(sb.buf, sb.len, &active_cache_tree->oid, + parents, &oid, author_ident.buf, sign_commit, + extra)) { rollback_index_files(); die(_("failed to write commit object")); } strbuf_release(&author_ident); free_commit_extra_headers(extra); - nl = strchr(sb.buf, '\n'); - if (nl) - strbuf_setlen(&sb, nl + 1 - sb.buf); - else - strbuf_addch(&sb, '\n'); - strbuf_insert(&sb, 0, reflog_msg, strlen(reflog_msg)); - strbuf_insert(&sb, strlen(reflog_msg), ": ", 2); - - transaction = ref_transaction_begin(&err); - if (!transaction || - ref_transaction_update(transaction, "HEAD", &oid, - current_head - ? ¤t_head->object.oid : &null_oid, - 0, sb.buf, &err) || - ref_transaction_commit(transaction, &err)) { + if (update_head_with_reflog(current_head, &oid, reflog_msg, &sb, + &err)) { rollback_index_files(); die("%s", err.buf); } - ref_transaction_free(transaction); unlink(git_path_cherry_pick_head()); unlink(git_path_revert_head()); @@ -1835,19 +1611,20 @@ int cmd_commit(int argc, const char **argv, const char *prefix) "not exceeded, and then \"git reset HEAD\" to recover.")); rerere(0); + run_command_v_opt(argv_gc_auto, RUN_GIT_CMD); run_commit_hook(use_editor, get_index_file(), "post-commit", NULL); if (amend && !no_post_rewrite) { - struct notes_rewrite_cfg *cfg; - cfg = init_copy_notes_for_rewrite("amend"); - if (cfg) { - /* we are amending, so current_head is not NULL */ - copy_note_for_rewrite(cfg, ¤t_head->object.oid, &oid); - finish_copy_notes_for_rewrite(cfg, "Notes added by 'git commit --amend'"); - } - run_rewrite_hook(¤t_head->object.oid, &oid); + commit_post_rewrite(current_head, &oid); + } + if (!quiet) { + unsigned int flags = 0; + + if (!current_head) + flags |= SUMMARY_INITIAL_COMMIT; + if (author_date_is_interesting()) + flags |= SUMMARY_SHOW_AUTHOR_DATE; + print_commit_summary(prefix, &oid, flags); } - if (!quiet) - print_summary(prefix, &oid, !current_head); UNLEAK(err); UNLEAK(sb); diff --git a/builtin/config.c b/builtin/config.c index ab5f95476e..01169dd628 100644 --- a/builtin/config.c +++ b/builtin/config.c @@ -48,6 +48,13 @@ static int show_origin; #define ACTION_GET_COLORBOOL (1<<14) #define ACTION_GET_URLMATCH (1<<15) +/* + * The actions "ACTION_LIST | ACTION_GET_*" which may produce more than + * one line of output and which should therefore be paged. 
+ */ +#define PAGING_ACTIONS (ACTION_LIST | ACTION_GET_ALL | \ + ACTION_GET_REGEXP | ACTION_GET_URLMATCH) + #define TYPE_BOOL (1<<0) #define TYPE_INT (1<<1) #define TYPE_BOOL_OR_INT (1<<2) @@ -594,6 +601,9 @@ int cmd_config(int argc, const char **argv, const char *prefix) usage_with_options(builtin_config_usage, builtin_config_options); } + if (actions & PAGING_ACTIONS) + setup_auto_pager("config", 1); + if (actions == ACTION_LIST) { check_argc(argc, 0, 0); if (config_with_options(show_all_config, NULL, diff --git a/builtin/describe.c b/builtin/describe.c index c428984706..de840f96a4 100644 --- a/builtin/describe.c +++ b/builtin/describe.c @@ -285,7 +285,7 @@ static void append_name(struct commit_name *n, struct strbuf *dst) static void append_suffix(int depth, const struct object_id *oid, struct strbuf *dst) { - strbuf_addf(dst, "-%d-g%s", depth, find_unique_abbrev(oid->hash, abbrev)); + strbuf_addf(dst, "-%d-g%s", depth, find_unique_abbrev(oid, abbrev)); } static void describe_commit(struct object_id *oid, struct strbuf *dst) @@ -383,7 +383,7 @@ static void describe_commit(struct object_id *oid, struct strbuf *dst) if (!match_cnt) { struct object_id *cmit_oid = &cmit->object.oid; if (always) { - strbuf_add_unique_abbrev(dst, cmit_oid->hash, abbrev); + strbuf_add_unique_abbrev(dst, cmit_oid, abbrev); if (suffix) strbuf_addstr(dst, suffix); return; @@ -502,7 +502,7 @@ static void describe(const char *arg, int last_one) if (cmit) describe_commit(&oid, &sb); - else if (lookup_blob(&oid)) + else if (oid_object_info(&oid, NULL) == OBJ_BLOB) describe_blob(oid, &sb); else die(_("%s is neither a commit nor blob"), arg); diff --git a/builtin/diff-tree.c b/builtin/diff-tree.c index b775a75647..473615117e 100644 --- a/builtin/diff-tree.c +++ b/builtin/diff-tree.c @@ -76,7 +76,7 @@ static int diff_tree_stdin(char *line) if (obj->type == OBJ_TREE) return stdin_diff_trees((struct tree *)obj, p); error("Object %s is a %s, not a commit or tree", - oid_to_hex(&oid), typename(obj->type)); + oid_to_hex(&oid), type_name(obj->type)); return -1; } diff --git a/builtin/difftool.c b/builtin/difftool.c index bcc79d1888..ee8dce019e 100644 --- a/builtin/difftool.c +++ b/builtin/difftool.c @@ -306,7 +306,7 @@ static char *get_symlink(const struct object_id *oid, const char *path) } else { enum object_type type; unsigned long size; - data = read_sha1_file(oid->hash, &type, &size); + data = read_object_file(oid, &type, &size); if (!data) die(_("could not read object %s for symlink %s"), oid_to_hex(oid), path); diff --git a/builtin/fast-export.c b/builtin/fast-export.c index 796d0cd66c..a15898d641 100644 --- a/builtin/fast-export.c +++ b/builtin/fast-export.c @@ -237,10 +237,10 @@ static void export_blob(const struct object_id *oid) object = (struct object *)lookup_blob(oid); eaten = 0; } else { - buf = read_sha1_file(oid->hash, &type, &size); + buf = read_object_file(oid, &type, &size); if (!buf) die ("Could not read blob %s", oid_to_hex(oid)); - if (check_sha1_signature(oid->hash, buf, size, typename(type)) < 0) + if (check_object_signature(oid, buf, size, type_name(type)) < 0) die("sha1 mismatch in blob %s", oid_to_hex(oid)); object = parse_object_buffer(oid, type, size, buf, &eaten); } @@ -682,7 +682,7 @@ static void handle_tag(const char *name, struct tag *tag) return; } - buf = read_sha1_file(tag->object.oid.hash, &type, &size); + buf = read_object_file(&tag->object.oid, &type, &size); if (!buf) die ("Could not read tag %s", oid_to_hex(&tag->object.oid)); message = memmem(buf, size, "\n\n", 2); @@ -757,7 +757,7 
@@ static void handle_tag(const char *name, struct tag *tag) if (tagged->type != OBJ_COMMIT) { die ("Tag %s tags unexported %s!", oid_to_hex(&tag->object.oid), - typename(tagged->type)); + type_name(tagged->type)); } p = (struct commit *)tagged; for (;;) { @@ -839,7 +839,7 @@ static void get_tags_and_duplicates(struct rev_cmdline_info *info) if (!commit) { warning("%s: Unexpected object of type %s, skipping.", e->name, - typename(e->item->type)); + type_name(e->item->type)); continue; } @@ -851,7 +851,7 @@ static void get_tags_and_duplicates(struct rev_cmdline_info *info) continue; default: /* OBJ_TAG (nested tags) is already handled */ warning("Tag points to object of unexpected type %s, skipping.", - typename(commit->object.type)); + type_name(commit->object.type)); continue; } @@ -947,7 +947,7 @@ static void import_marks(char *input_file) if (last_idnum < mark) last_idnum = mark; - type = sha1_object_info(oid.hash, NULL); + type = oid_object_info(&oid, NULL); if (type < 0) die("object not found: %s", oid_to_hex(&oid)); diff --git a/builtin/fetch-pack.c b/builtin/fetch-pack.c index 366b9d13f9..a7bc1366ab 100644 --- a/builtin/fetch-pack.c +++ b/builtin/fetch-pack.c @@ -53,6 +53,8 @@ int cmd_fetch_pack(int argc, const char **argv, const char *prefix) struct oid_array shallow = OID_ARRAY_INIT; struct string_list deepen_not = STRING_LIST_INIT_DUP; + fetch_if_missing = 0; + packet_trace_identity("fetch-pack"); memset(&args, 0, sizeof(args)); @@ -143,6 +145,22 @@ int cmd_fetch_pack(int argc, const char **argv, const char *prefix) args.update_shallow = 1; continue; } + if (!strcmp("--from-promisor", arg)) { + args.from_promisor = 1; + continue; + } + if (!strcmp("--no-dependents", arg)) { + args.no_dependents = 1; + continue; + } + if (skip_prefix(arg, ("--" CL_ARG__FILTER "="), &arg)) { + parse_list_objects_filter(&args.filter_options, arg); + continue; + } + if (!strcmp(arg, ("--no-" CL_ARG__FILTER))) { + list_objects_filter_set_no_filter(&args.filter_options); + continue; + } usage(fetch_pack_usage); } if (deepen_not.nr) diff --git a/builtin/fetch.c b/builtin/fetch.c index 7bbcd26faf..8295f92b3e 100644 --- a/builtin/fetch.c +++ b/builtin/fetch.c @@ -19,6 +19,7 @@ #include "argv-array.h" #include "utf8.h" #include "packfile.h" +#include "list-objects-filter-options.h" static const char * const builtin_fetch_usage[] = { N_("git fetch [] [ [...]]"), @@ -38,6 +39,10 @@ static int fetch_prune_config = -1; /* unspecified */ static int prune = -1; /* unspecified */ #define PRUNE_BY_DEFAULT 0 /* do we prune by default? */ +static int fetch_prune_tags_config = -1; /* unspecified */ +static int prune_tags = -1; /* unspecified */ +#define PRUNE_TAGS_BY_DEFAULT 0 /* do we prune tags by default? */ + static int all, append, dry_run, force, keep, multiple, update_head_ok, verbosity, deepen_relative; static int progress = -1; static int tags = TAGS_DEFAULT, unshallow, update_shallow, deepen; @@ -56,6 +61,7 @@ static int recurse_submodules_default = RECURSE_SUBMODULES_ON_DEMAND; static int shown_url = 0; static int refmap_alloc, refmap_nr; static const char **refmap_array; +static struct list_objects_filter_options filter_options; static int git_fetch_config(const char *k, const char *v, void *cb) { @@ -64,6 +70,11 @@ static int git_fetch_config(const char *k, const char *v, void *cb) return 0; } + if (!strcmp(k, "fetch.prunetags")) { + fetch_prune_tags_config = git_config_bool(k, v); + return 0; + } + if (!strcmp(k, "submodule.recurse")) { int r = git_config_bool(k, v) ? 
RECURSE_SUBMODULES_ON : RECURSE_SUBMODULES_OFF; @@ -115,7 +126,7 @@ static struct option builtin_fetch_options[] = { N_("append to .git/FETCH_HEAD instead of overwriting")), OPT_STRING(0, "upload-pack", &upload_pack, N_("path"), N_("path to upload pack on remote end")), - OPT__FORCE(&force, N_("force overwrite of local branch")), + OPT__FORCE(&force, N_("force overwrite of local branch"), 0), OPT_BOOL('m', "multiple", &multiple, N_("fetch from multiple remotes")), OPT_SET_INT('t', "tags", &tags, @@ -126,6 +137,8 @@ static struct option builtin_fetch_options[] = { N_("number of submodules fetched in parallel")), OPT_BOOL('p', "prune", &prune, N_("prune remote-tracking branches no longer on remote")), + OPT_BOOL('P', "prune-tags", &prune_tags, + N_("prune local tags no longer on remote and clobber changed tags")), { OPTION_CALLBACK, 0, "recurse-submodules", &recurse_submodules, N_("on-demand"), N_("control recursive fetching of submodules"), PARSE_OPT_OPTARG, option_fetch_parse_recurse_submodules }, @@ -161,6 +174,7 @@ static struct option builtin_fetch_options[] = { TRANSPORT_FAMILY_IPV4), OPT_SET_INT('6', "ipv6", &family, N_("use IPv6 addresses only"), TRANSPORT_FAMILY_IPV6), + OPT_PARSE_LIST_OBJECTS_FILTER(&filter_options), OPT_END() }; @@ -623,7 +637,7 @@ static int update_local_ref(struct ref *ref, struct branch *current_branch = branch_get(NULL); const char *pretty_ref = prettify_refname(ref->name); - type = sha1_object_info(ref->new_oid.hash, NULL); + type = oid_object_info(&ref->new_oid, NULL); if (type < 0) die(_("object %s not found"), oid_to_hex(&ref->new_oid)); @@ -694,9 +708,9 @@ static int update_local_ref(struct ref *ref, if (in_merge_bases(current, updated)) { struct strbuf quickref = STRBUF_INIT; int r; - strbuf_add_unique_abbrev(&quickref, current->object.oid.hash, DEFAULT_ABBREV); + strbuf_add_unique_abbrev(&quickref, ¤t->object.oid, DEFAULT_ABBREV); strbuf_addstr(&quickref, ".."); - strbuf_add_unique_abbrev(&quickref, ref->new_oid.hash, DEFAULT_ABBREV); + strbuf_add_unique_abbrev(&quickref, &ref->new_oid, DEFAULT_ABBREV); if ((recurse_submodules != RECURSE_SUBMODULES_OFF) && (recurse_submodules != RECURSE_SUBMODULES_ON)) check_for_new_submodule_commits(&ref->new_oid); @@ -709,9 +723,9 @@ static int update_local_ref(struct ref *ref, } else if (force || ref->force) { struct strbuf quickref = STRBUF_INIT; int r; - strbuf_add_unique_abbrev(&quickref, current->object.oid.hash, DEFAULT_ABBREV); + strbuf_add_unique_abbrev(&quickref, ¤t->object.oid, DEFAULT_ABBREV); strbuf_addstr(&quickref, "..."); - strbuf_add_unique_abbrev(&quickref, ref->new_oid.hash, DEFAULT_ABBREV); + strbuf_add_unique_abbrev(&quickref, &ref->new_oid, DEFAULT_ABBREV); if ((recurse_submodules != RECURSE_SUBMODULES_OFF) && (recurse_submodules != RECURSE_SUBMODULES_ON)) check_for_new_submodule_commits(&ref->new_oid); @@ -1045,6 +1059,11 @@ static struct transport *prepare_transport(struct remote *remote, int deepen) set_option(transport, TRANS_OPT_DEEPEN_RELATIVE, "yes"); if (update_shallow) set_option(transport, TRANS_OPT_UPDATE_SHALLOW, "yes"); + if (filter_options.choice) { + set_option(transport, TRANS_OPT_LIST_OBJECTS_FILTER, + filter_options.filter_spec); + set_option(transport, TRANS_OPT_FROM_PROMISOR, "1"); + } return transport; } @@ -1212,6 +1231,8 @@ static void add_options_to_argv(struct argv_array *argv) argv_array_push(argv, "--dry-run"); if (prune != -1) argv_array_push(argv, prune ? "--prune" : "--no-prune"); + if (prune_tags != -1) + argv_array_push(argv, prune_tags ? 
"--prune-tags" : "--no-prune-tags"); if (update_head_ok) argv_array_push(argv, "--update-head-ok"); if (force) @@ -1265,12 +1286,65 @@ static int fetch_multiple(struct string_list *list) return result; } -static int fetch_one(struct remote *remote, int argc, const char **argv) +/* + * Fetching from the promisor remote should use the given filter-spec + * or inherit the default filter-spec from the config. + */ +static inline void fetch_one_setup_partial(struct remote *remote) +{ + /* + * Explicit --no-filter argument overrides everything, regardless + * of any prior partial clones and fetches. + */ + if (filter_options.no_filter) + return; + + /* + * If no prior partial clone/fetch and the current fetch DID NOT + * request a partial-fetch, do a normal fetch. + */ + if (!repository_format_partial_clone && !filter_options.choice) + return; + + /* + * If this is the FIRST partial-fetch request, we enable partial + * on this repo and remember the given filter-spec as the default + * for subsequent fetches to this remote. + */ + if (!repository_format_partial_clone && filter_options.choice) { + partial_clone_register(remote->name, &filter_options); + return; + } + + /* + * We are currently limited to only ONE promisor remote and only + * allow partial-fetches from the promisor remote. + */ + if (strcmp(remote->name, repository_format_partial_clone)) { + if (filter_options.choice) + die(_("--filter can only be used with the remote configured in core.partialClone")); + return; + } + + /* + * Do a partial-fetch from the promisor remote using either the + * explicitly given filter-spec or inherit the filter-spec from + * the config. + */ + if (!filter_options.choice) + partial_clone_get_default_filter_spec(&filter_options); + return; +} + +static int fetch_one(struct remote *remote, int argc, const char **argv, int prune_tags_ok) { static const char **refs = NULL; struct refspec *refspec; int ref_nr = 0; + int j = 0; int exit_code; + int maybe_prune_tags; + int remote_via_config = remote_is_configured(remote, 0); if (!remote) die(_("No remote repository specified. 
Please, specify either a URL or a\n" @@ -1280,18 +1354,39 @@ static int fetch_one(struct remote *remote, int argc, const char **argv) if (prune < 0) { /* no command line request */ - if (0 <= gtransport->remote->prune) - prune = gtransport->remote->prune; + if (0 <= remote->prune) + prune = remote->prune; else if (0 <= fetch_prune_config) prune = fetch_prune_config; else prune = PRUNE_BY_DEFAULT; } + if (prune_tags < 0) { + /* no command line request */ + if (0 <= remote->prune_tags) + prune_tags = remote->prune_tags; + else if (0 <= fetch_prune_tags_config) + prune_tags = fetch_prune_tags_config; + else + prune_tags = PRUNE_TAGS_BY_DEFAULT; + } + + maybe_prune_tags = prune_tags_ok && prune_tags; + if (maybe_prune_tags && remote_via_config) + add_prune_tags_to_fetch_refspec(remote); + + if (argc > 0 || (maybe_prune_tags && !remote_via_config)) { + size_t nr_alloc = st_add3(argc, maybe_prune_tags, 1); + refs = xcalloc(nr_alloc, sizeof(const char *)); + if (maybe_prune_tags) { + refs[j++] = xstrdup("refs/tags/*:refs/tags/*"); + ref_nr++; + } + } + if (argc > 0) { - int j = 0; int i; - refs = xcalloc(st_add(argc, 1), sizeof(const char *)); for (i = 0; i < argc; i++) { if (!strcmp(argv[i], "tag")) { i++; @@ -1301,9 +1396,8 @@ static int fetch_one(struct remote *remote, int argc, const char **argv) argv[i], argv[i]); } else refs[j++] = argv[i]; + ref_nr++; } - refs[j] = NULL; - ref_nr = j; } sigchain_push_common(unlock_pack_on_signal); @@ -1320,12 +1414,15 @@ int cmd_fetch(int argc, const char **argv, const char *prefix) { int i; struct string_list list = STRING_LIST_INIT_DUP; - struct remote *remote; + struct remote *remote = NULL; int result = 0; + int prune_tags_ok = 1; struct argv_array argv_gc_auto = ARGV_ARRAY_INIT; packet_trace_identity("fetch"); + fetch_if_missing = 0; + /* Record the command line for the reflog */ strbuf_addstr(&default_rla, "fetch"); for (i = 1; i < argc; i++) @@ -1359,23 +1456,23 @@ int cmd_fetch(int argc, const char **argv, const char *prefix) if (depth || deepen_since || deepen_not.nr) deepen = 1; + if (filter_options.choice && !repository_format_partial_clone) + die("--filter can only be used when extensions.partialClone is set"); + if (all) { if (argc == 1) die(_("fetch --all does not take a repository argument")); else if (argc > 1) die(_("fetch --all does not make sense with refspecs")); (void) for_each_remote(get_one_remote_for_fetch, &list); - result = fetch_multiple(&list); } else if (argc == 0) { /* No arguments -- use default remote */ remote = remote_get(NULL); - result = fetch_one(remote, argc, argv); } else if (multiple) { /* All arguments are assumed to be remotes or groups */ for (i = 0; i < argc; i++) if (!add_remote_or_group(argv[i], &list)) die(_("No such remote or remote group: %s"), argv[i]); - result = fetch_multiple(&list); } else { /* Single remote or group */ (void) add_remote_or_group(argv[0], &list); @@ -1383,14 +1480,26 @@ int cmd_fetch(int argc, const char **argv, const char *prefix) /* More than one remote */ if (argc > 1) die(_("Fetching a group and specifying refspecs does not make sense")); - result = fetch_multiple(&list); } else { /* Zero or one remotes */ remote = remote_get(argv[0]); - result = fetch_one(remote, argc-1, argv+1); + prune_tags_ok = (argc == 1); + argc--; + argv++; } } + if (remote) { + if (filter_options.choice || repository_format_partial_clone) + fetch_one_setup_partial(remote); + result = fetch_one(remote, argc, argv, prune_tags_ok); + } else { + if (filter_options.choice) + die(_("--filter can only be used with 
the remote configured in core.partialClone")); + /* TODO should this also die if we have a previous partial-clone? */ + result = fetch_multiple(&list); + } + if (!result && (recurse_submodules != RECURSE_SUBMODULES_OFF)) { struct argv_array options = ARGV_ARRAY_INIT; diff --git a/builtin/fmt-merge-msg.c b/builtin/fmt-merge-msg.c index 8e8a15ea4a..bd680be687 100644 --- a/builtin/fmt-merge-msg.c +++ b/builtin/fmt-merge-msg.c @@ -485,10 +485,10 @@ static void fmt_merge_msg_sigs(struct strbuf *out) struct strbuf tagbuf = STRBUF_INIT; for (i = 0; i < origins.nr; i++) { - unsigned char *sha1 = origins.items[i].util; + struct object_id *oid = origins.items[i].util; enum object_type type; unsigned long size, len; - char *buf = read_sha1_file(sha1, &type, &size); + char *buf = read_object_file(oid, &type, &size); struct strbuf sig = STRBUF_INIT; if (!buf || type != OBJ_TAG) diff --git a/builtin/fsck.c b/builtin/fsck.c index 92ce775a74..0922558683 100644 --- a/builtin/fsck.c +++ b/builtin/fsck.c @@ -65,12 +65,12 @@ static const char *printable_type(struct object *obj) const char *ret; if (obj->type == OBJ_NONE) { - enum object_type type = sha1_object_info(obj->oid.hash, NULL); + enum object_type type = oid_object_info(&obj->oid, NULL); if (type > 0) object_as_type(obj, type, 0); } - ret = typename(obj->type); + ret = type_name(obj->type); if (!ret) ret = "unknown"; @@ -137,7 +137,7 @@ static int mark_object(struct object *obj, int type, void *data, struct fsck_opt printf("broken link from %7s %s\n", printable_type(parent), describe_object(parent)); printf("broken link from %7s %s\n", - (type == OBJ_ANY ? "unknown" : typename(type)), "unknown"); + (type == OBJ_ANY ? "unknown" : type_name(type)), "unknown"); errors_found |= ERROR_REACHABLE; return 1; } @@ -149,6 +149,15 @@ static int mark_object(struct object *obj, int type, void *data, struct fsck_opt if (obj->flags & REACHABLE) return 0; obj->flags |= REACHABLE; + + if (is_promisor_object(&obj->oid)) + /* + * Further recursion does not need to be performed on this + * object since it is a promisor object (so it does not need to + * be added to "pending"). + */ + return 0; + if (!(obj->flags & HAS_OBJ)) { if (parent && !has_object_file(&obj->oid)) { printf("broken link from %7s %s\n", @@ -214,6 +223,8 @@ static void check_reachable_object(struct object *obj) * do a full fsck */ if (!(obj->flags & HAS_OBJ)) { + if (is_promisor_object(&obj->oid)) + return; if (has_sha1_pack(obj->oid.hash)) return; /* it is in pack - forget about it */ printf("missing %s %s\n", printable_type(obj), @@ -404,7 +415,7 @@ static void fsck_handle_reflog_oid(const char *refname, struct object_id *oid, xstrfmt("%s@{%"PRItime"}", refname, timestamp)); obj->flags |= USED; mark_object_reachable(obj); - } else { + } else if (!is_promisor_object(oid)) { error("%s: invalid reflog entry %s", refname, oid_to_hex(oid)); errors_found |= ERROR_REACHABLE; } @@ -440,6 +451,14 @@ static int fsck_handle_ref(const char *refname, const struct object_id *oid, obj = parse_object(oid); if (!obj) { + if (is_promisor_object(oid)) { + /* + * Increment default_refs anyway, because this is a + * valid ref. + */ + default_refs++; + return 0; + } error("%s: invalid sha1 pointer %s", refname, oid_to_hex(oid)); errors_found |= ERROR_REACHABLE; /* We'll continue with the rest despite the error.. 
*/ @@ -494,7 +513,7 @@ static struct object *parse_loose_object(const struct object_id *oid, unsigned long size; int eaten; - if (read_loose_object(path, oid->hash, &type, &size, &contents) < 0) + if (read_loose_object(path, oid, &type, &size, &contents) < 0) return NULL; if (!contents && type != OBJ_BLOB) @@ -665,6 +684,9 @@ int cmd_fsck(int argc, const char **argv, const char *prefix) int i; struct alternate_object_database *alt; + /* fsck knows how to handle missing promisor objects */ + fetch_if_missing = 0; + errors_found = 0; check_replace_refs = 0; @@ -737,6 +759,8 @@ int cmd_fsck(int argc, const char **argv, const char *prefix) struct object *obj = lookup_object(oid.hash); if (!obj || !(obj->flags & HAS_OBJ)) { + if (is_promisor_object(&oid)) + continue; error("%s: object missing", oid_to_hex(&oid)); errors_found |= ERROR_OBJECT; continue; diff --git a/builtin/gc.c b/builtin/gc.c index 3c5eae0edf..f51e5a6500 100644 --- a/builtin/gc.c +++ b/builtin/gc.c @@ -360,8 +360,11 @@ int cmd_gc(int argc, const char **argv, const char *prefix) N_("prune unreferenced objects"), PARSE_OPT_OPTARG, NULL, (intptr_t)prune_expire }, OPT_BOOL(0, "aggressive", &aggressive, N_("be more thorough (increased runtime)")), - OPT_BOOL(0, "auto", &auto_gc, N_("enable auto-gc mode")), - OPT_BOOL(0, "force", &force, N_("force running gc even if there may be another gc running")), + OPT_BOOL_F(0, "auto", &auto_gc, N_("enable auto-gc mode"), + PARSE_OPT_NOCOMPLETE), + OPT_BOOL_F(0, "force", &force, + N_("force running gc even if there may be another gc running"), + PARSE_OPT_NOCOMPLETE), OPT_END() }; @@ -458,6 +461,9 @@ int cmd_gc(int argc, const char **argv, const char *prefix) argv_array_push(&prune, prune_expire); if (quiet) argv_array_push(&prune, "--no-progress"); + if (repository_format_partial_clone) + argv_array_push(&prune, + "--exclude-promisor-objects"); if (run_command_v_opt(prune.argv, RUN_GIT_CMD)) return error(FAILED_RUN, prune.argv[0]); } diff --git a/builtin/grep.c b/builtin/grep.c index 3ca4ac80d8..668cb8050a 100644 --- a/builtin/grep.c +++ b/builtin/grep.c @@ -92,8 +92,7 @@ static pthread_cond_t cond_result; static int skip_first_line; -static void add_work(struct grep_opt *opt, enum grep_source_type type, - const char *name, const char *path, const void *id) +static void add_work(struct grep_opt *opt, const struct grep_source *gs) { grep_lock(); @@ -101,7 +100,7 @@ static void add_work(struct grep_opt *opt, enum grep_source_type type, pthread_cond_wait(&cond_write, &grep_mutex); } - grep_source_init(&todo[todo_end].source, type, name, path, id); + todo[todo_end].source = *gs; if (opt->binary != GREP_BINARY_TEXT) grep_source_load_driver(&todo[todo_end].source); todo[todo_end].done = 0; @@ -307,7 +306,7 @@ static void *lock_and_read_oid_file(const struct object_id *oid, enum object_typ void *data; grep_read_lock(); - data = read_sha1_file(oid->hash, type, size); + data = read_object_file(oid, type, size); grep_read_unlock(); return data; } @@ -317,6 +316,7 @@ static int grep_oid(struct grep_opt *opt, const struct object_id *oid, const char *path) { struct strbuf pathbuf = STRBUF_INIT; + struct grep_source gs; if (opt->relative && opt->prefix_length) { quote_path_relative(filename + tree_name_len, opt->prefix, &pathbuf); @@ -325,19 +325,22 @@ static int grep_oid(struct grep_opt *opt, const struct object_id *oid, strbuf_addstr(&pathbuf, filename); } + grep_source_init(&gs, GREP_SOURCE_OID, pathbuf.buf, path, oid); + strbuf_release(&pathbuf); + #ifndef NO_PTHREADS if (num_threads) { - add_work(opt, 
GREP_SOURCE_OID, pathbuf.buf, path, oid); - strbuf_release(&pathbuf); + /* + * add_work() copies gs and thus assumes ownership of + * its fields, so do not call grep_source_clear() + */ + add_work(opt, &gs); return 0; } else #endif { - struct grep_source gs; int hit; - grep_source_init(&gs, GREP_SOURCE_OID, pathbuf.buf, path, oid); - strbuf_release(&pathbuf); hit = grep_source(opt, &gs); grep_source_clear(&gs); @@ -348,25 +351,29 @@ static int grep_oid(struct grep_opt *opt, const struct object_id *oid, static int grep_file(struct grep_opt *opt, const char *filename) { struct strbuf buf = STRBUF_INIT; + struct grep_source gs; if (opt->relative && opt->prefix_length) quote_path_relative(filename, opt->prefix, &buf); else strbuf_addstr(&buf, filename); + grep_source_init(&gs, GREP_SOURCE_FILE, buf.buf, filename, filename); + strbuf_release(&buf); + #ifndef NO_PTHREADS if (num_threads) { - add_work(opt, GREP_SOURCE_FILE, buf.buf, filename, filename); - strbuf_release(&buf); + /* + * add_work() copies gs and thus assumes ownership of + * its fields, so do not call grep_source_clear() + */ + add_work(opt, &gs); return 0; } else #endif { - struct grep_source gs; int hit; - grep_source_init(&gs, GREP_SOURCE_FILE, buf.buf, filename, filename); - strbuf_release(&buf); hit = grep_source(opt, &gs); grep_source_clear(&gs); @@ -445,7 +452,7 @@ static int grep_submodule(struct grep_opt *opt, struct repository *superproject, object = parse_object_or_die(oid, oid_to_hex(oid)); grep_read_lock(); - data = read_object_with_reference(object->oid.hash, tree_type, + data = read_object_with_reference(&object->oid, tree_type, &size, NULL); grep_read_unlock(); @@ -607,7 +614,7 @@ static int grep_object(struct grep_opt *opt, const struct pathspec *pathspec, int hit, len; grep_read_lock(); - data = read_object_with_reference(obj->oid.hash, tree_type, + data = read_object_with_reference(&obj->oid, tree_type, &size, NULL); grep_read_unlock(); @@ -627,7 +634,7 @@ static int grep_object(struct grep_opt *opt, const struct pathspec *pathspec, free(data); return hit; } - die(_("unable to grep from object of type %s"), typename(obj->type)); + die(_("unable to grep from object of type %s"), type_name(obj->type)); } static int grep_objects(struct grep_opt *opt, const struct pathspec *pathspec, @@ -832,8 +839,9 @@ int cmd_grep(int argc, const char **argv, const char *prefix) OPT_BOOL('L', "files-without-match", &opt.unmatch_name_only, N_("show only the names of files without match")), - OPT_BOOL('z', "null", &opt.null_following_name, - N_("print NUL after filenames")), + OPT_BOOL_F('z', "null", &opt.null_following_name, + N_("print NUL after filenames"), + PARSE_OPT_NOCOMPLETE), OPT_BOOL('c', "count", &opt.count, N_("show the number of matches instead of matching lines")), OPT__COLOR(&opt.color, N_("highlight matches")), @@ -884,9 +892,11 @@ int cmd_grep(int argc, const char **argv, const char *prefix) OPT_GROUP(""), { OPTION_STRING, 'O', "open-files-in-pager", &show_in_pager, N_("pager"), N_("show matching files in the pager"), - PARSE_OPT_OPTARG, NULL, (intptr_t)default_pager }, - OPT_BOOL(0, "ext-grep", &external_grep_allowed__ignored, - N_("allow calling of grep(1) (ignored by this build)")), + PARSE_OPT_OPTARG | PARSE_OPT_NOCOMPLETE, + NULL, (intptr_t)default_pager }, + OPT_BOOL_F(0, "ext-grep", &external_grep_allowed__ignored, + N_("allow calling of grep(1) (ignored by this build)"), + PARSE_OPT_NOCOMPLETE), OPT_END() }; diff --git a/builtin/hash-object.c b/builtin/hash-object.c index c532ff9320..526da5c185 100644 --- 
a/builtin/hash-object.c +++ b/builtin/hash-object.c @@ -24,7 +24,8 @@ static int hash_literally(struct object_id *oid, int fd, const char *type, unsig if (strbuf_read(&buf, fd, 4096) < 0) ret = -1; else - ret = hash_sha1_file_literally(buf.buf, buf.len, type, oid, flags); + ret = hash_object_file_literally(buf.buf, buf.len, type, oid, + flags); strbuf_release(&buf); return ret; } diff --git a/builtin/help.c b/builtin/help.c index d3c8fc4082..598867cfea 100644 --- a/builtin/help.c +++ b/builtin/help.c @@ -194,11 +194,11 @@ static void do_add_man_viewer_info(const char *name, size_t len, const char *value) { - struct man_viewer_info_list *new; - FLEX_ALLOC_MEM(new, name, name, len); - new->info = xstrdup(value); - new->next = man_viewer_info_list; - man_viewer_info_list = new; + struct man_viewer_info_list *new_man_viewer; + FLEX_ALLOC_MEM(new_man_viewer, name, name, len); + new_man_viewer->info = xstrdup(value); + new_man_viewer->next = man_viewer_info_list; + man_viewer_info_list = new_man_viewer; } static int add_man_viewer_path(const char *name, diff --git a/builtin/index-pack.c b/builtin/index-pack.c index 4c51aec81f..657a5dda06 100644 --- a/builtin/index-pack.c +++ b/builtin/index-pack.c @@ -49,6 +49,7 @@ struct thread_local { int pack_fd; }; +/* Remember to update object flag allocation in object.h */ #define FLAG_LINK (1u<<20) #define FLAG_CHECKED (1u<<21) @@ -58,7 +59,7 @@ struct ofs_delta_entry { }; struct ref_delta_entry { - unsigned char sha1[20]; + struct object_id oid; int obj_no; }; @@ -91,7 +92,7 @@ static unsigned int input_offset, input_len; static off_t consumed_bytes; static off_t max_input_size; static unsigned deepest_delta; -static git_SHA_CTX input_ctx; +static git_hash_ctx input_ctx; static uint32_t input_crc32; static int input_fd, output_fd; static const char *curr_pack; @@ -221,14 +222,14 @@ static unsigned check_object(struct object *obj) if (!(obj->flags & FLAG_CHECKED)) { unsigned long size; - int type = sha1_object_info(obj->oid.hash, &size); + int type = oid_object_info(&obj->oid, &size); if (type <= 0) die(_("did not receive expected object %s"), oid_to_hex(&obj->oid)); if (type != obj->type) die(_("object %s: expected type %s, found %s"), oid_to_hex(&obj->oid), - typename(obj->type), typename(type)); + type_name(obj->type), type_name(type)); obj->flags |= FLAG_CHECKED; return 1; } @@ -253,7 +254,7 @@ static void flush(void) if (input_offset) { if (output_fd >= 0) write_or_die(output_fd, input_buffer, input_offset); - git_SHA1_Update(&input_ctx, input_buffer, input_offset); + the_hash_algo->update_fn(&input_ctx, input_buffer, input_offset); memmove(input_buffer, input_buffer + input_offset, input_len); input_offset = 0; } @@ -326,7 +327,7 @@ static const char *open_pack_file(const char *pack_name) output_fd = -1; nothread_data.pack_fd = input_fd; } - git_SHA1_Init(&input_ctx); + the_hash_algo->init_fn(&input_ctx); return pack_name; } @@ -437,22 +438,22 @@ static int is_delta_type(enum object_type type) } static void *unpack_entry_data(off_t offset, unsigned long size, - enum object_type type, unsigned char *sha1) + enum object_type type, struct object_id *oid) { static char fixed_buf[8192]; int status; git_zstream stream; void *buf; - git_SHA_CTX c; + git_hash_ctx c; char hdr[32]; int hdrlen; if (!is_delta_type(type)) { - hdrlen = xsnprintf(hdr, sizeof(hdr), "%s %lu", typename(type), size) + 1; - git_SHA1_Init(&c); - git_SHA1_Update(&c, hdr, hdrlen); + hdrlen = xsnprintf(hdr, sizeof(hdr), "%s %lu", type_name(type), size) + 1; + the_hash_algo->init_fn(&c); + 
the_hash_algo->update_fn(&c, hdr, hdrlen); } else - sha1 = NULL; + oid = NULL; if (type == OBJ_BLOB && size > big_file_threshold) buf = fixed_buf; else @@ -469,8 +470,8 @@ static void *unpack_entry_data(off_t offset, unsigned long size, stream.avail_in = input_len; status = git_inflate(&stream, 0); use(input_len - stream.avail_in); - if (sha1) - git_SHA1_Update(&c, last_out, stream.next_out - last_out); + if (oid) + the_hash_algo->update_fn(&c, last_out, stream.next_out - last_out); if (buf == fixed_buf) { stream.next_out = buf; stream.avail_out = sizeof(fixed_buf); @@ -479,15 +480,15 @@ static void *unpack_entry_data(off_t offset, unsigned long size, if (stream.total_out != size || status != Z_STREAM_END) bad_object(offset, _("inflate returned %d"), status); git_inflate_end(&stream); - if (sha1) - git_SHA1_Final(sha1, &c); + if (oid) + the_hash_algo->final_fn(oid->hash, &c); return buf == fixed_buf ? NULL : buf; } static void *unpack_raw_entry(struct object_entry *obj, off_t *ofs_offset, - unsigned char *ref_sha1, - unsigned char *sha1) + struct object_id *ref_oid, + struct object_id *oid) { unsigned char *p; unsigned long size, c; @@ -515,8 +516,8 @@ static void *unpack_raw_entry(struct object_entry *obj, switch (obj->type) { case OBJ_REF_DELTA: - hashcpy(ref_sha1, fill(20)); - use(20); + hashcpy(ref_oid->hash, fill(the_hash_algo->rawsz)); + use(the_hash_algo->rawsz); break; case OBJ_OFS_DELTA: p = fill(1); @@ -546,7 +547,7 @@ static void *unpack_raw_entry(struct object_entry *obj, } obj->hdr_size = consumed_bytes - obj->idx.offset; - data = unpack_entry_data(obj->idx.offset, obj->size, obj->type, sha1); + data = unpack_entry_data(obj->idx.offset, obj->size, obj->type, oid); obj->idx.crc32 = input_crc32; return data; } @@ -671,18 +672,18 @@ static void find_ofs_delta_children(off_t offset, *last_index = last; } -static int compare_ref_delta_bases(const unsigned char *sha1, - const unsigned char *sha2, +static int compare_ref_delta_bases(const struct object_id *oid1, + const struct object_id *oid2, enum object_type type1, enum object_type type2) { int cmp = type1 - type2; if (cmp) return cmp; - return hashcmp(sha1, sha2); + return oidcmp(oid1, oid2); } -static int find_ref_delta(const unsigned char *sha1, enum object_type type) +static int find_ref_delta(const struct object_id *oid, enum object_type type) { int first = 0, last = nr_ref_deltas; @@ -691,7 +692,7 @@ static int find_ref_delta(const unsigned char *sha1, enum object_type type) struct ref_delta_entry *delta = &ref_deltas[next]; int cmp; - cmp = compare_ref_delta_bases(sha1, delta->sha1, + cmp = compare_ref_delta_bases(oid, &delta->oid, type, objects[delta->obj_no].type); if (!cmp) return next; @@ -704,11 +705,11 @@ static int find_ref_delta(const unsigned char *sha1, enum object_type type) return -first-1; } -static void find_ref_delta_children(const unsigned char *sha1, +static void find_ref_delta_children(const struct object_id *oid, int *first_index, int *last_index, enum object_type type) { - int first = find_ref_delta(sha1, type); + int first = find_ref_delta(oid, type); int last = first; int end = nr_ref_deltas - 1; @@ -717,9 +718,9 @@ static void find_ref_delta_children(const unsigned char *sha1, *last_index = -1; return; } - while (first > 0 && !hashcmp(ref_deltas[first - 1].sha1, sha1)) + while (first > 0 && !oidcmp(&ref_deltas[first - 1].oid, oid)) --first; - while (last < end && !hashcmp(ref_deltas[last + 1].sha1, sha1)) + while (last < end && !oidcmp(&ref_deltas[last + 1].oid, oid)) ++last; *first_index = first; 
*last_index = last; @@ -771,7 +772,7 @@ static int check_collison(struct object_entry *entry) memset(&data, 0, sizeof(data)); data.entry = entry; - data.st = open_istream(entry->idx.oid.hash, &type, &size, NULL); + data.st = open_istream(&entry->idx.oid, &type, &size, NULL); if (!data.st) return -1; if (size != entry->size || type != entry->type) @@ -810,12 +811,12 @@ static void sha1_object(const void *data, struct object_entry *obj_entry, enum object_type has_type; unsigned long has_size; read_lock(); - has_type = sha1_object_info(oid->hash, &has_size); + has_type = oid_object_info(oid, &has_size); if (has_type < 0) die(_("cannot read existing object info %s"), oid_to_hex(oid)); if (has_type != type || has_size != size) die(_("SHA1 COLLISION FOUND WITH %s !"), oid_to_hex(oid)); - has_data = read_sha1_file(oid->hash, &has_type, &has_size); + has_data = read_object_file(oid, &has_type, &has_size); read_unlock(); if (!data) data = new_data = get_data_from_pack(obj_entry); @@ -827,7 +828,7 @@ static void sha1_object(const void *data, struct object_entry *obj_entry, free(has_data); } - if (strict) { + if (strict || do_fsck_object) { read_lock(); if (type == OBJ_BLOB) { struct blob *blob = lookup_blob(oid); @@ -849,11 +850,11 @@ static void sha1_object(const void *data, struct object_entry *obj_entry, obj = parse_object_buffer(oid, type, size, buf, &eaten); if (!obj) - die(_("invalid %s"), typename(type)); + die(_("invalid %s"), type_name(type)); if (do_fsck_object && fsck_object(obj, buf, size, &fsck_options)) die(_("Error in object")); - if (fsck_walk(obj, NULL, &fsck_options)) + if (strict && fsck_walk(obj, NULL, &fsck_options)) die(_("Not all child objects of %s are reachable"), oid_to_hex(&obj->oid)); if (obj->type == OBJ_TREE) { @@ -958,9 +959,8 @@ static void resolve_delta(struct object_entry *delta_obj, free(delta_data); if (!result->data) bad_object(delta_obj->idx.offset, _("failed to apply delta")); - hash_sha1_file(result->data, result->size, - typename(delta_obj->real_type), - delta_obj->idx.oid.hash); + hash_object_file(result->data, result->size, + type_name(delta_obj->real_type), &delta_obj->idx.oid); sha1_object(result->data, NULL, result->size, delta_obj->real_type, &delta_obj->idx.oid); counter_lock(); @@ -992,7 +992,7 @@ static struct base_data *find_unresolved_deltas_1(struct base_data *base, struct base_data *prev_base) { if (base->ref_last == -1 && base->ofs_last == -1) { - find_ref_delta_children(base->obj->idx.oid.hash, + find_ref_delta_children(&base->obj->idx.oid, &base->ref_first, &base->ref_last, OBJ_REF_DELTA); @@ -1076,7 +1076,7 @@ static int compare_ref_delta_entry(const void *a, const void *b) const struct ref_delta_entry *delta_a = a; const struct ref_delta_entry *delta_b = b; - return hashcmp(delta_a->sha1, delta_b->sha1); + return oidcmp(&delta_a->oid, &delta_b->oid); } static void resolve_base(struct object_entry *obj) @@ -1119,11 +1119,11 @@ static void *threaded_second_pass(void *data) * - calculate SHA1 of all non-delta objects; * - remember base (SHA1 or offset) for all deltas. 
*/ -static void parse_pack_objects(unsigned char *sha1) +static void parse_pack_objects(unsigned char *hash) { int i, nr_delays = 0; struct ofs_delta_entry *ofs_delta = ofs_deltas; - unsigned char ref_delta_sha1[20]; + struct object_id ref_delta_oid; struct stat st; if (verbose) @@ -1133,8 +1133,8 @@ static void parse_pack_objects(unsigned char *sha1) for (i = 0; i < nr_objects; i++) { struct object_entry *obj = &objects[i]; void *data = unpack_raw_entry(obj, &ofs_delta->offset, - ref_delta_sha1, - obj->idx.oid.hash); + &ref_delta_oid, + &obj->idx.oid); obj->real_type = obj->type; if (obj->type == OBJ_OFS_DELTA) { nr_ofs_deltas++; @@ -1142,7 +1142,7 @@ static void parse_pack_objects(unsigned char *sha1) ofs_delta++; } else if (obj->type == OBJ_REF_DELTA) { ALLOC_GROW(ref_deltas, nr_ref_deltas + 1, ref_deltas_alloc); - hashcpy(ref_deltas[nr_ref_deltas].sha1, ref_delta_sha1); + oidcpy(&ref_deltas[nr_ref_deltas].oid, &ref_delta_oid); ref_deltas[nr_ref_deltas].obj_no = i; nr_ref_deltas++; } else if (!data) { @@ -1160,10 +1160,10 @@ static void parse_pack_objects(unsigned char *sha1) /* Check pack integrity */ flush(); - git_SHA1_Final(sha1, &input_ctx); - if (hashcmp(fill(20), sha1)) + the_hash_algo->final_fn(hash, &input_ctx); + if (hashcmp(fill(the_hash_algo->rawsz), hash)) die(_("pack is corrupted (SHA1 mismatch)")); - use(20); + use(the_hash_algo->rawsz); /* If input_fd is a file, we should have reached its end now. */ if (fstat(input_fd, &st)) @@ -1239,21 +1239,21 @@ static void resolve_deltas(void) /* * Third pass: * - append objects to convert thin pack to full pack if required - * - write the final 20-byte SHA-1 + * - write the final pack hash */ -static void fix_unresolved_deltas(struct sha1file *f); -static void conclude_pack(int fix_thin_pack, const char *curr_pack, unsigned char *pack_sha1) +static void fix_unresolved_deltas(struct hashfile *f); +static void conclude_pack(int fix_thin_pack, const char *curr_pack, unsigned char *pack_hash) { if (nr_ref_deltas + nr_ofs_deltas == nr_resolved_deltas) { stop_progress(&progress); - /* Flush remaining pack final 20-byte SHA1. */ + /* Flush remaining pack final hash. 
*/ flush(); return; } if (fix_thin_pack) { - struct sha1file *f; - unsigned char read_sha1[20], tail_sha1[20]; + struct hashfile *f; + unsigned char read_hash[GIT_MAX_RAWSZ], tail_hash[GIT_MAX_RAWSZ]; struct strbuf msg = STRBUF_INIT; int nr_unresolved = nr_ofs_deltas + nr_ref_deltas - nr_resolved_deltas; int nr_objects_initial = nr_objects; @@ -1262,7 +1262,7 @@ static void conclude_pack(int fix_thin_pack, const char *curr_pack, unsigned cha REALLOC_ARRAY(objects, nr_objects + nr_unresolved + 1); memset(objects + nr_objects + 1, 0, nr_unresolved * sizeof(*objects)); - f = sha1fd(output_fd, curr_pack); + f = hashfd(output_fd, curr_pack); fix_unresolved_deltas(f); strbuf_addf(&msg, Q_("completed with %d local object", "completed with %d local objects", @@ -1270,12 +1270,12 @@ static void conclude_pack(int fix_thin_pack, const char *curr_pack, unsigned cha nr_objects - nr_objects_initial); stop_progress_msg(&progress, msg.buf); strbuf_release(&msg); - sha1close(f, tail_sha1, 0); - hashcpy(read_sha1, pack_sha1); - fixup_pack_header_footer(output_fd, pack_sha1, + hashclose(f, tail_hash, 0); + hashcpy(read_hash, pack_hash); + fixup_pack_header_footer(output_fd, pack_hash, curr_pack, nr_objects, - read_sha1, consumed_bytes-20); - if (hashcmp(read_sha1, tail_sha1) != 0) + read_hash, consumed_bytes-the_hash_algo->rawsz); + if (hashcmp(read_hash, tail_hash) != 0) die(_("Unexpected tail checksum for %s " "(disk corruption?)"), curr_pack); } @@ -1286,7 +1286,7 @@ static void conclude_pack(int fix_thin_pack, const char *curr_pack, unsigned cha nr_ofs_deltas + nr_ref_deltas - nr_resolved_deltas); } -static int write_compressed(struct sha1file *f, void *in, unsigned int size) +static int write_compressed(struct hashfile *f, void *in, unsigned int size) { git_zstream stream; int status; @@ -1300,7 +1300,7 @@ static int write_compressed(struct sha1file *f, void *in, unsigned int size) stream.next_out = outbuf; stream.avail_out = sizeof(outbuf); status = git_deflate(&stream, Z_FINISH); - sha1write(f, outbuf, sizeof(outbuf) - stream.avail_out); + hashwrite(f, outbuf, sizeof(outbuf) - stream.avail_out); } while (status == Z_OK); if (status != Z_STREAM_END) @@ -1310,7 +1310,7 @@ static int write_compressed(struct sha1file *f, void *in, unsigned int size) return size; } -static struct object_entry *append_obj_to_pack(struct sha1file *f, +static struct object_entry *append_obj_to_pack(struct hashfile *f, const unsigned char *sha1, void *buf, unsigned long size, enum object_type type) { @@ -1327,7 +1327,7 @@ static struct object_entry *append_obj_to_pack(struct sha1file *f, } header[n++] = c; crc32_begin(f); - sha1write(f, header, n); + hashwrite(f, header, n); obj[0].size = size; obj[0].hdr_size = n; obj[0].type = type; @@ -1335,7 +1335,7 @@ static struct object_entry *append_obj_to_pack(struct sha1file *f, obj[1].idx.offset = obj[0].idx.offset + n; obj[1].idx.offset += write_compressed(f, buf, size); obj[0].idx.crc32 = crc32_end(f); - sha1flush(f); + hashflush(f); hashcpy(obj->idx.oid.hash, sha1); return obj; } @@ -1347,7 +1347,7 @@ static int delta_pos_compare(const void *_a, const void *_b) return a->obj_no - b->obj_no; } -static void fix_unresolved_deltas(struct sha1file *f) +static void fix_unresolved_deltas(struct hashfile *f) { struct ref_delta_entry **sorted_by_pos; int i; @@ -1374,14 +1374,15 @@ static void fix_unresolved_deltas(struct sha1file *f) if (objects[d->obj_no].real_type != OBJ_REF_DELTA) continue; - base_obj->data = read_sha1_file(d->sha1, &type, &base_obj->size); + base_obj->data = 
read_object_file(&d->oid, &type, + &base_obj->size); if (!base_obj->data) continue; - if (check_sha1_signature(d->sha1, base_obj->data, - base_obj->size, typename(type))) - die(_("local object %s is corrupt"), sha1_to_hex(d->sha1)); - base_obj->obj = append_obj_to_pack(f, d->sha1, + if (check_object_signature(&d->oid, base_obj->data, + base_obj->size, type_name(type))) + die(_("local object %s is corrupt"), oid_to_hex(&d->oid)); + base_obj->obj = append_obj_to_pack(f, d->oid.hash, base_obj->data, base_obj->size, type); find_unresolved_deltas(base_obj); display_progress(progress, nr_resolved_deltas); @@ -1389,15 +1390,60 @@ static void fix_unresolved_deltas(struct sha1file *f) free(sorted_by_pos); } +static const char *derive_filename(const char *pack_name, const char *suffix, + struct strbuf *buf) +{ + size_t len; + if (!strip_suffix(pack_name, ".pack", &len)) + die(_("packfile name '%s' does not end with '.pack'"), + pack_name); + strbuf_add(buf, pack_name, len); + strbuf_addch(buf, '.'); + strbuf_addstr(buf, suffix); + return buf->buf; +} + +static void write_special_file(const char *suffix, const char *msg, + const char *pack_name, const unsigned char *hash, + const char **report) +{ + struct strbuf name_buf = STRBUF_INIT; + const char *filename; + int fd; + int msg_len = strlen(msg); + + if (pack_name) + filename = derive_filename(pack_name, suffix, &name_buf); + else + filename = odb_pack_name(&name_buf, hash, suffix); + + fd = odb_pack_keep(filename); + if (fd < 0) { + if (errno != EEXIST) + die_errno(_("cannot write %s file '%s'"), + suffix, filename); + } else { + if (msg_len > 0) { + write_or_die(fd, msg, msg_len); + write_or_die(fd, "\n", 1); + } + if (close(fd) != 0) + die_errno(_("cannot close written %s file '%s'"), + suffix, filename); + if (report) + *report = suffix; + } + strbuf_release(&name_buf); +} + static void final(const char *final_pack_name, const char *curr_pack_name, const char *final_index_name, const char *curr_index_name, - const char *keep_name, const char *keep_msg, - unsigned char *sha1) + const char *keep_msg, const char *promisor_msg, + unsigned char *hash) { const char *report = "pack"; struct strbuf pack_name = STRBUF_INIT; struct strbuf index_name = STRBUF_INIT; - struct strbuf keep_name_buf = STRBUF_INIT; int err; if (!from_stdin) { @@ -1409,32 +1455,16 @@ static void final(const char *final_pack_name, const char *curr_pack_name, die_errno(_("error while closing pack file")); } - if (keep_msg) { - int keep_fd, keep_msg_len = strlen(keep_msg); - - if (!keep_name) - keep_name = odb_pack_name(&keep_name_buf, sha1, "keep"); - - keep_fd = odb_pack_keep(keep_name); - if (keep_fd < 0) { - if (errno != EEXIST) - die_errno(_("cannot write keep file '%s'"), - keep_name); - } else { - if (keep_msg_len > 0) { - write_or_die(keep_fd, keep_msg, keep_msg_len); - write_or_die(keep_fd, "\n", 1); - } - if (close(keep_fd) != 0) - die_errno(_("cannot close written keep file '%s'"), - keep_name); - report = "keep"; - } - } + if (keep_msg) + write_special_file("keep", keep_msg, final_pack_name, hash, + &report); + if (promisor_msg) + write_special_file("promisor", promisor_msg, final_pack_name, + hash, NULL); if (final_pack_name != curr_pack_name) { if (!final_pack_name) - final_pack_name = odb_pack_name(&pack_name, sha1, "pack"); + final_pack_name = odb_pack_name(&pack_name, hash, "pack"); if (finalize_object_file(curr_pack_name, final_pack_name)) die(_("cannot store pack file")); } else if (from_stdin) @@ -1442,18 +1472,18 @@ static void final(const char 
*final_pack_name, const char *curr_pack_name, if (final_index_name != curr_index_name) { if (!final_index_name) - final_index_name = odb_pack_name(&index_name, sha1, "idx"); + final_index_name = odb_pack_name(&index_name, hash, "idx"); if (finalize_object_file(curr_index_name, final_index_name)) die(_("cannot store index file")); } else chmod(final_index_name, 0444); if (!from_stdin) { - printf("%s\n", sha1_to_hex(sha1)); + printf("%s\n", sha1_to_hex(hash)); } else { struct strbuf buf = STRBUF_INIT; - strbuf_addf(&buf, "%s\t%s\n", report, sha1_to_hex(sha1)); + strbuf_addf(&buf, "%s\t%s\n", report, sha1_to_hex(hash)); write_or_die(1, buf.buf, buf.len); strbuf_release(&buf); @@ -1472,7 +1502,6 @@ static void final(const char *final_pack_name, const char *curr_pack_name, strbuf_release(&index_name); strbuf_release(&pack_name); - strbuf_release(&keep_name_buf); } static int git_index_pack_config(const char *k, const char *v, void *cb) @@ -1588,7 +1617,7 @@ static void show_pack_info(int stat_only) continue; printf("%s %-6s %lu %lu %"PRIuMAX, oid_to_hex(&obj->idx.oid), - typename(obj->real_type), obj->size, + type_name(obj->real_type), obj->size, (unsigned long)(obj[1].idx.offset - obj->idx.offset), (uintmax_t)obj->idx.offset); if (is_delta_type(obj->type)) { @@ -1615,32 +1644,26 @@ static void show_pack_info(int stat_only) } } -static const char *derive_filename(const char *pack_name, const char *suffix, - struct strbuf *buf) -{ - size_t len; - if (!strip_suffix(pack_name, ".pack", &len)) - die(_("packfile name '%s' does not end with '.pack'"), - pack_name); - strbuf_add(buf, pack_name, len); - strbuf_addstr(buf, suffix); - return buf->buf; -} - int cmd_index_pack(int argc, const char **argv, const char *prefix) { int i, fix_thin_pack = 0, verify = 0, stat_only = 0; const char *curr_index; const char *index_name = NULL, *pack_name = NULL; - const char *keep_name = NULL, *keep_msg = NULL; - struct strbuf index_name_buf = STRBUF_INIT, - keep_name_buf = STRBUF_INIT; + const char *keep_msg = NULL; + const char *promisor_msg = NULL; + struct strbuf index_name_buf = STRBUF_INIT; struct pack_idx_entry **idx_objects; struct pack_idx_option opts; - unsigned char pack_sha1[20]; + unsigned char pack_hash[GIT_MAX_RAWSZ]; unsigned foreign_nr = 1; /* zero is a "good" value, assume bad */ int report_end_of_input = 0; + /* + * index-pack never needs to fetch missing objects, since it only + * accesses the repo to do hash collision checks + */ + fetch_if_missing = 0; + if (argc == 2 && !strcmp(argv[1], "-h")) usage(index_pack_usage); @@ -1667,6 +1690,8 @@ int cmd_index_pack(int argc, const char **argv, const char *prefix) } else if (!strcmp(arg, "--check-self-contained-and-connected")) { strict = 1; check_self_contained_and_connected = 1; + } else if (!strcmp(arg, "--fsck-objects")) { + do_fsck_object = 1; } else if (!strcmp(arg, "--verify")) { verify = 1; } else if (!strcmp(arg, "--verify-stat")) { @@ -1678,6 +1703,8 @@ int cmd_index_pack(int argc, const char **argv, const char *prefix) stat_only = 1; } else if (skip_to_optional_arg(arg, "--keep", &keep_msg)) { ; /* nothing to do */ + } else if (skip_to_optional_arg(arg, "--promisor", &promisor_msg)) { + ; /* already parsed */ } else if (starts_with(arg, "--threads=")) { char *end; nr_threads = strtoul(arg+10, &end, 0); @@ -1740,9 +1767,7 @@ int cmd_index_pack(int argc, const char **argv, const char *prefix) if (from_stdin && !startup_info->have_repository) die(_("--stdin requires a git repository")); if (!index_name && pack_name) - index_name = 
derive_filename(pack_name, ".idx", &index_name_buf); - if (keep_msg && !keep_name && pack_name) - keep_name = derive_filename(pack_name, ".keep", &keep_name_buf); + index_name = derive_filename(pack_name, "idx", &index_name_buf); if (verify) { if (!index_name) @@ -1768,11 +1793,11 @@ int cmd_index_pack(int argc, const char **argv, const char *prefix) if (show_stat) obj_stat = xcalloc(st_add(nr_objects, 1), sizeof(struct object_stat)); ofs_deltas = xcalloc(nr_objects, sizeof(struct ofs_delta_entry)); - parse_pack_objects(pack_sha1); + parse_pack_objects(pack_hash); if (report_end_of_input) write_in_full(2, "\0", 1); resolve_deltas(); - conclude_pack(fix_thin_pack, curr_pack, pack_sha1); + conclude_pack(fix_thin_pack, curr_pack, pack_hash); free(ofs_deltas); free(ref_deltas); if (strict) @@ -1784,19 +1809,18 @@ int cmd_index_pack(int argc, const char **argv, const char *prefix) ALLOC_ARRAY(idx_objects, nr_objects); for (i = 0; i < nr_objects; i++) idx_objects[i] = &objects[i].idx; - curr_index = write_idx_file(index_name, idx_objects, nr_objects, &opts, pack_sha1); + curr_index = write_idx_file(index_name, idx_objects, nr_objects, &opts, pack_hash); free(idx_objects); if (!verify) final(pack_name, curr_pack, index_name, curr_index, - keep_name, keep_msg, - pack_sha1); + keep_msg, promisor_msg, + pack_hash); else close(input_fd); free(objects); strbuf_release(&index_name_buf); - strbuf_release(&keep_name_buf); if (pack_name == NULL) free((void *) curr_pack); if (index_name == NULL) diff --git a/builtin/init-db.c b/builtin/init-db.c index c9b7946bad..68ff4ad75a 100644 --- a/builtin/init-db.c +++ b/builtin/init-db.c @@ -24,11 +24,11 @@ static int init_is_bare_repository = 0; static int init_shared_repository = -1; static const char *init_db_template_dir; -static void copy_templates_1(struct strbuf *path, struct strbuf *template, +static void copy_templates_1(struct strbuf *path, struct strbuf *template_path, DIR *dir) { size_t path_baselen = path->len; - size_t template_baselen = template->len; + size_t template_baselen = template_path->len; struct dirent *de; /* Note: if ".git/hooks" file exists in the repository being @@ -44,12 +44,12 @@ static void copy_templates_1(struct strbuf *path, struct strbuf *template, int exists = 0; strbuf_setlen(path, path_baselen); - strbuf_setlen(template, template_baselen); + strbuf_setlen(template_path, template_baselen); if (de->d_name[0] == '.') continue; strbuf_addstr(path, de->d_name); - strbuf_addstr(template, de->d_name); + strbuf_addstr(template_path, de->d_name); if (lstat(path->buf, &st_git)) { if (errno != ENOENT) die_errno(_("cannot stat '%s'"), path->buf); @@ -57,36 +57,36 @@ static void copy_templates_1(struct strbuf *path, struct strbuf *template, else exists = 1; - if (lstat(template->buf, &st_template)) - die_errno(_("cannot stat template '%s'"), template->buf); + if (lstat(template_path->buf, &st_template)) + die_errno(_("cannot stat template '%s'"), template_path->buf); if (S_ISDIR(st_template.st_mode)) { - DIR *subdir = opendir(template->buf); + DIR *subdir = opendir(template_path->buf); if (!subdir) - die_errno(_("cannot opendir '%s'"), template->buf); + die_errno(_("cannot opendir '%s'"), template_path->buf); strbuf_addch(path, '/'); - strbuf_addch(template, '/'); - copy_templates_1(path, template, subdir); + strbuf_addch(template_path, '/'); + copy_templates_1(path, template_path, subdir); closedir(subdir); } else if (exists) continue; else if (S_ISLNK(st_template.st_mode)) { struct strbuf lnk = STRBUF_INIT; - if (strbuf_readlink(&lnk, 
template->buf, 0) < 0) - die_errno(_("cannot readlink '%s'"), template->buf); + if (strbuf_readlink(&lnk, template_path->buf, 0) < 0) + die_errno(_("cannot readlink '%s'"), template_path->buf); if (symlink(lnk.buf, path->buf)) die_errno(_("cannot symlink '%s' '%s'"), lnk.buf, path->buf); strbuf_release(&lnk); } else if (S_ISREG(st_template.st_mode)) { - if (copy_file(path->buf, template->buf, st_template.st_mode)) + if (copy_file(path->buf, template_path->buf, st_template.st_mode)) die_errno(_("cannot copy '%s' to '%s'"), - template->buf, path->buf); + template_path->buf, path->buf); } else - error(_("ignoring template %s"), template->buf); + error(_("ignoring template %s"), template_path->buf); } } diff --git a/builtin/log.c b/builtin/log.c index 14fdf39165..71f68a3e4f 100644 --- a/builtin/log.c +++ b/builtin/log.c @@ -29,6 +29,8 @@ #include "gpg-interface.h" #include "progress.h" +#define MAIL_DEFAULT_WRAP 72 + /* Set a default date-time format for git log ("log.date" config variable) */ static const char *default_date_mode = NULL; @@ -188,8 +190,8 @@ static void cmd_log_init_finish(int argc, const char **argv, const char *prefix, if (rev->show_notes) init_display_notes(&rev->notes_opt); - if (rev->diffopt.pickaxe || rev->diffopt.filter || - rev->diffopt.flags.follow_renames) + if ((rev->diffopt.pickaxe_opts & DIFF_PICKAXE_KINDS_MASK) || + rev->diffopt.filter || rev->diffopt.flags.follow_renames) rev->always_show_header = 0; if (source) @@ -516,7 +518,7 @@ static int show_tag_object(const struct object_id *oid, struct rev_info *rev) { unsigned long size; enum object_type type; - char *buf = read_sha1_file(oid->hash, &type, &size); + char *buf = read_object_file(oid, &type, &size); int offset = 0; if (!buf) @@ -539,7 +541,7 @@ static int show_tag_object(const struct object_id *oid, struct rev_info *rev) return 0; } -static int show_tree_object(const unsigned char *sha1, +static int show_tree_object(const struct object_id *oid, struct strbuf *base, const char *pathname, unsigned mode, int stage, void *context) { @@ -1044,7 +1046,7 @@ static void make_cover_letter(struct rev_info *rev, int use_stdout, shortlog_init(&log); log.wrap_lines = 1; - log.wrap = 72; + log.wrap = MAIL_DEFAULT_WRAP; log.in1 = 2; log.in2 = 4; log.file = rev->diffopt.file; @@ -1061,6 +1063,7 @@ static void make_cover_letter(struct rev_info *rev, int use_stdout, memcpy(&opts, &rev->diffopt, sizeof(opts)); opts.output_format = DIFF_FORMAT_SUMMARY | DIFF_FORMAT_DIFFSTAT; + opts.stat_width = MAIL_DEFAULT_WRAP; diff_setup_done(&opts); @@ -1614,6 +1617,8 @@ int cmd_format_patch(int argc, const char **argv, const char *prefix) (!rev.diffopt.output_format || rev.diffopt.output_format == DIFF_FORMAT_PATCH)) rev.diffopt.output_format = DIFF_FORMAT_DIFFSTAT | DIFF_FORMAT_SUMMARY; + if (!rev.diffopt.stat_width) + rev.diffopt.stat_width = MAIL_DEFAULT_WRAP; /* Always generate a patch */ rev.diffopt.output_format |= DIFF_FORMAT_PATCH; @@ -1868,12 +1873,12 @@ static void print_commit(char sign, struct commit *commit, int verbose, { if (!verbose) { fprintf(file, "%c %s\n", sign, - find_unique_abbrev(commit->object.oid.hash, abbrev)); + find_unique_abbrev(&commit->object.oid, abbrev)); } else { struct strbuf buf = STRBUF_INIT; pp_commit_easy(CMIT_FMT_ONELINE, commit, &buf); fprintf(file, "%c %s %s\n", sign, - find_unique_abbrev(commit->object.oid.hash, abbrev), + find_unique_abbrev(&commit->object.oid, abbrev), buf.buf); strbuf_release(&buf); } diff --git a/builtin/ls-files.c b/builtin/ls-files.c index 2fc836e330..a71f6bd088 100644 --- 
a/builtin/ls-files.c +++ b/builtin/ls-files.c @@ -240,7 +240,7 @@ static void show_ce(struct repository *repo, struct dir_struct *dir, printf("%s%06o %s %d\t", tag, ce->ce_mode, - find_unique_abbrev(ce->oid.hash, abbrev), + find_unique_abbrev(&ce->oid, abbrev), ce_stage(ce)); } write_eolinfo(repo->index, ce, fullname); @@ -271,7 +271,7 @@ static void show_ru_info(const struct index_state *istate) if (!ui->mode[i]) continue; printf("%s%06o %s %d\t", tag_resolve_undo, ui->mode[i], - find_unique_abbrev(ui->sha1[i], abbrev), + find_unique_abbrev(&ui->oid[i], abbrev), i + 1); write_name(path); } diff --git a/builtin/ls-remote.c b/builtin/ls-remote.c index c4be98ab9e..540d56429f 100644 --- a/builtin/ls-remote.c +++ b/builtin/ls-remote.c @@ -60,8 +60,9 @@ int cmd_ls_remote(int argc, const char **argv, const char *prefix) OPT_BIT(0, "refs", &flags, N_("do not show peeled tags"), REF_NORMAL), OPT_BOOL(0, "get-url", &get_url, N_("take url..insteadOf into account")), - OPT_SET_INT(0, "exit-code", &status, - N_("exit with exit code 2 if no matching refs are found"), 2), + OPT_SET_INT_F(0, "exit-code", &status, + N_("exit with exit code 2 if no matching refs are found"), + 2, PARSE_OPT_NOCOMPLETE), OPT_BOOL(0, "symref", &show_symref_target, N_("show underlying ref in addition to the object pointed by it")), OPT_END() diff --git a/builtin/ls-tree.c b/builtin/ls-tree.c index ef965408e8..d44b4f9c27 100644 --- a/builtin/ls-tree.c +++ b/builtin/ls-tree.c @@ -60,7 +60,7 @@ static int show_recursive(const char *base, int baselen, const char *pathname) return 0; } -static int show_tree(const unsigned char *sha1, struct strbuf *base, +static int show_tree(const struct object_id *oid, struct strbuf *base, const char *pathname, unsigned mode, int stage, void *context) { int retval = 0; @@ -94,7 +94,7 @@ static int show_tree(const unsigned char *sha1, struct strbuf *base, char size_text[24]; if (!strcmp(type, blob_type)) { unsigned long size; - if (sha1_object_info(sha1, &size) == OBJ_BAD) + if (oid_object_info(oid, &size) == OBJ_BAD) xsnprintf(size_text, sizeof(size_text), "BAD"); else @@ -103,11 +103,11 @@ static int show_tree(const unsigned char *sha1, struct strbuf *base, } else xsnprintf(size_text, sizeof(size_text), "-"); printf("%06o %s %s %7s\t", mode, type, - find_unique_abbrev(sha1, abbrev), + find_unique_abbrev(oid, abbrev), size_text); } else printf("%06o %s %s\t", mode, type, - find_unique_abbrev(sha1, abbrev)); + find_unique_abbrev(oid, abbrev)); } baselen = base->len; strbuf_addstr(base, pathname); diff --git a/builtin/merge-tree.c b/builtin/merge-tree.c index d01ddecf66..32736e0b10 100644 --- a/builtin/merge-tree.c +++ b/builtin/merge-tree.c @@ -60,7 +60,7 @@ static void *result(struct merge_list *entry, unsigned long *size) const char *path = entry->path; if (!entry->stage) - return read_sha1_file(entry->blob->object.oid.hash, &type, size); + return read_object_file(&entry->blob->object.oid, &type, size); base = NULL; if (entry->stage == 1) { base = entry->blob; @@ -82,7 +82,8 @@ static void *origin(struct merge_list *entry, unsigned long *size) enum object_type type; while (entry) { if (entry->stage == 2) - return read_sha1_file(entry->blob->object.oid.hash, &type, size); + return read_object_file(&entry->blob->object.oid, + &type, size); entry = entry->link; } return NULL; diff --git a/builtin/merge.c b/builtin/merge.c index 30264cfd7c..8746c5e3e8 100644 --- a/builtin/merge.c +++ b/builtin/merge.c @@ -33,6 +33,7 @@ #include "sequencer.h" #include "string-list.h" #include "packfile.h" +#include 
"tag.h" #define DEFAULT_TWOHEAD (1<<0) #define DEFAULT_OCTOPUS (1<<1) @@ -520,7 +521,7 @@ static void merge_name(const char *remote, struct strbuf *msg) if (desc && desc->obj && desc->obj->type == OBJ_TAG) { strbuf_addf(msg, "%s\t\t%s '%s'\n", oid_to_hex(&desc->obj->oid), - typename(desc->obj->type), + type_name(desc->obj->type), remote); goto cleanup; } @@ -638,7 +639,7 @@ static int read_tree_trivial(struct object_id *common, struct object_id *head, static void write_tree_trivial(struct object_id *oid) { - if (write_cache_as_tree(oid->hash, 0, NULL)) + if (write_cache_as_tree(oid, 0, NULL)) die(_("git write-tree failed to write a tree")); } @@ -651,10 +652,9 @@ static int try_merge_strategy(const char *strategy, struct commit_list *common, hold_locked_index(&lock, LOCK_DIE_ON_ERROR); refresh_cache(REFRESH_QUIET); - if (active_cache_changed && - write_locked_index(&the_index, &lock, COMMIT_LOCK)) + if (write_locked_index(&the_index, &lock, + COMMIT_LOCK | SKIP_IF_UNCHANGED)) return error(_("Unable to write index.")); - rollback_lock_file(&lock); if (!strcmp(strategy, "recursive") || !strcmp(strategy, "subtree")) { int clean, x; @@ -691,10 +691,9 @@ static int try_merge_strategy(const char *strategy, struct commit_list *common, remoteheads->item, reversed, &result); if (clean < 0) exit(128); - if (active_cache_changed && - write_locked_index(&the_index, &lock, COMMIT_LOCK)) + if (write_locked_index(&the_index, &lock, + COMMIT_LOCK | SKIP_IF_UNCHANGED)) die (_("unable to write %s"), get_index_file()); - rollback_lock_file(&lock); return clean ? 0 : 1; } else { return try_merge_command(strategy, xopts_nr, xopts, @@ -810,18 +809,17 @@ static int merge_trivial(struct commit *head, struct commit_list *remoteheads) hold_locked_index(&lock, LOCK_DIE_ON_ERROR); refresh_cache(REFRESH_QUIET); - if (active_cache_changed && - write_locked_index(&the_index, &lock, COMMIT_LOCK)) + if (write_locked_index(&the_index, &lock, + COMMIT_LOCK | SKIP_IF_UNCHANGED)) return error(_("Unable to write index.")); - rollback_lock_file(&lock); write_tree_trivial(&result_tree); printf(_("Wonderful.\n")); pptr = commit_list_append(head, pptr); pptr = commit_list_append(remoteheads->item, pptr); prepare_to_commit(remoteheads); - if (commit_tree(merge_msg.buf, merge_msg.len, result_tree.hash, parents, - result_commit.hash, NULL, sign_commit)) + if (commit_tree(merge_msg.buf, merge_msg.len, &result_tree, parents, + &result_commit, NULL, sign_commit)) die(_("failed to write commit object")); finish(head, remoteheads, &result_commit, "In-index merge"); drop_save(); @@ -845,8 +843,8 @@ static int finish_automerge(struct commit *head, commit_list_insert(head, &parents); strbuf_addch(&merge_msg, '\n'); prepare_to_commit(remoteheads); - if (commit_tree(merge_msg.buf, merge_msg.len, result_tree->hash, parents, - result_commit.hash, NULL, sign_commit)) + if (commit_tree(merge_msg.buf, merge_msg.len, result_tree, parents, + &result_commit, NULL, sign_commit)) die(_("failed to write commit object")); strbuf_addf(&buf, "Merge made by the '%s' strategy.", wt_strategy); finish(head, remoteheads, &result_commit, buf.buf); @@ -1125,6 +1123,43 @@ static struct commit_list *collect_parents(struct commit *head_commit, return remoteheads; } +static int merging_a_throwaway_tag(struct commit *commit) +{ + char *tag_ref; + struct object_id oid; + int is_throwaway_tag = 0; + + /* Are we merging a tag? 
*/ + if (!merge_remote_util(commit) || + !merge_remote_util(commit)->obj || + merge_remote_util(commit)->obj->type != OBJ_TAG) + return is_throwaway_tag; + + /* + * Now we know we are merging a tag object. Are we downstream + * and following the tags from upstream? If so, we must have + * the tag object pointed at by "refs/tags/$T" where $T is the + * tagname recorded in the tag object. We want to allow such + * a "just to catch up" merge to fast-forward. + * + * Otherwise, we are playing an integrator's role, making a + * merge with a throw-away tag from a contributor with + * something like "git pull $contributor $signed_tag". + * We want to forbid such a merge from fast-forwarding + * by default; otherwise we would not keep the signature + * anywhere. + */ + tag_ref = xstrfmt("refs/tags/%s", + ((struct tag *)merge_remote_util(commit)->obj)->tag); + if (!read_ref(tag_ref, &oid) && + !oidcmp(&oid, &merge_remote_util(commit)->obj->oid)) + is_throwaway_tag = 0; + else + is_throwaway_tag = 1; + free(tag_ref); + return is_throwaway_tag; +} + int cmd_merge(int argc, const char **argv, const char *prefix) { struct object_id result_tree, stash, head_oid; @@ -1289,7 +1324,7 @@ int cmd_merge(int argc, const char **argv, const char *prefix) check_commit_signature(commit, &signature_check); - find_unique_abbrev_r(hex, commit->object.oid.hash, DEFAULT_ABBREV); + find_unique_abbrev_r(hex, &commit->object.oid, DEFAULT_ABBREV); switch (signature_check.result) { case 'G': break; @@ -1322,10 +1357,7 @@ int cmd_merge(int argc, const char **argv, const char *prefix) oid_to_hex(&commit->object.oid)); setenv(buf.buf, merge_remote_util(commit)->name, 1); strbuf_reset(&buf); - if (fast_forward != FF_ONLY && - merge_remote_util(commit) && - merge_remote_util(commit)->obj && - merge_remote_util(commit)->obj->type == OBJ_TAG) + if (fast_forward != FF_ONLY && merging_a_throwaway_tag(commit)) fast_forward = FF_NO; } @@ -1385,9 +1417,9 @@ int cmd_merge(int argc, const char **argv, const char *prefix) if (verbosity >= 0) { printf(_("Updating %s..%s\n"), - find_unique_abbrev(head_commit->object.oid.hash, + find_unique_abbrev(&head_commit->object.oid, DEFAULT_ABBREV), - find_unique_abbrev(remoteheads->item->object.oid.hash, + find_unique_abbrev(&remoteheads->item->object.oid, DEFAULT_ABBREV)); } strbuf_addstr(&msg, "Fast-forward"); diff --git a/builtin/mktag.c b/builtin/mktag.c index 031b750f06..9f5a50a8fd 100644 --- a/builtin/mktag.c +++ b/builtin/mktag.c @@ -18,17 +18,17 @@ /* * We refuse to tag something we can't verify. Just because. 
*/ -static int verify_object(const unsigned char *sha1, const char *expected_type) +static int verify_object(const struct object_id *oid, const char *expected_type) { int ret = -1; enum object_type type; unsigned long size; - void *buffer = read_sha1_file(sha1, &type, &size); - const unsigned char *repl = lookup_replace_object(sha1); + void *buffer = read_object_file(oid, &type, &size); + const struct object_id *repl = lookup_replace_object(oid); if (buffer) { if (type == type_from_string(expected_type)) - ret = check_sha1_signature(repl, buffer, size, expected_type); + ret = check_object_signature(repl, buffer, size, expected_type); free(buffer); } return ret; @@ -38,8 +38,8 @@ static int verify_tag(char *buffer, unsigned long size) { int typelen; char type[20]; - unsigned char sha1[20]; - const char *object, *type_line, *tag_line, *tagger_line, *lb, *rb; + struct object_id oid; + const char *object, *type_line, *tag_line, *tagger_line, *lb, *rb, *p; size_t len; if (size < 84) @@ -52,11 +52,11 @@ static int verify_tag(char *buffer, unsigned long size) if (memcmp(object, "object ", 7)) return error("char%d: does not start with \"object \"", 0); - if (get_sha1_hex(object + 7, sha1)) + if (parse_oid_hex(object + 7, &oid, &p)) return error("char%d: could not get SHA1 hash", 7); /* Verify type line */ - type_line = object + 48; + type_line = p + 1; if (memcmp(type_line - 1, "\ntype ", 6)) return error("char%d: could not find \"\\ntype \"", 47); @@ -80,8 +80,8 @@ static int verify_tag(char *buffer, unsigned long size) type[typelen] = 0; /* Verify that the object matches */ - if (verify_object(sha1, type)) - return error("char%d: could not verify object %s", 7, sha1_to_hex(sha1)); + if (verify_object(&oid, type)) + return error("char%d: could not verify object %s", 7, oid_to_hex(&oid)); /* Verify the tag-name: we don't allow control characters or spaces in it */ tag_line += 4; @@ -151,7 +151,7 @@ static int verify_tag(char *buffer, unsigned long size) int cmd_mktag(int argc, const char **argv, const char *prefix) { struct strbuf buf = STRBUF_INIT; - unsigned char result_sha1[20]; + struct object_id result; if (argc != 1) usage("git mktag"); @@ -165,10 +165,10 @@ int cmd_mktag(int argc, const char **argv, const char *prefix) if (verify_tag(buf.buf, buf.len) < 0) die("invalid tag signature file"); - if (write_sha1_file(buf.buf, buf.len, tag_type, result_sha1) < 0) + if (write_object_file(buf.buf, buf.len, tag_type, &result) < 0) die("unable to write tag file"); strbuf_release(&buf); - printf("%s\n", sha1_to_hex(result_sha1)); + printf("%s\n", oid_to_hex(&result)); return 0; } diff --git a/builtin/mktree.c b/builtin/mktree.c index da0fd8cd70..263c530315 100644 --- a/builtin/mktree.c +++ b/builtin/mktree.c @@ -10,13 +10,13 @@ static struct treeent { unsigned mode; - unsigned char sha1[20]; + struct object_id oid; int len; char name[FLEX_ARRAY]; } **entries; static int alloc, used; -static void append_to_tree(unsigned mode, unsigned char *sha1, char *path) +static void append_to_tree(unsigned mode, struct object_id *oid, char *path) { struct treeent *ent; size_t len = strlen(path); @@ -26,7 +26,7 @@ static void append_to_tree(unsigned mode, unsigned char *sha1, char *path) FLEX_ALLOC_MEM(ent, name, path, len); ent->mode = mode; ent->len = len; - hashcpy(ent->sha1, sha1); + oidcpy(&ent->oid, oid); ALLOC_GROW(entries, used + 1, alloc); entries[used++] = ent; @@ -40,7 +40,7 @@ static int ent_compare(const void *a_, const void *b_) b->name, b->len, b->mode); } -static void write_tree(unsigned char *sha1) 
+static void write_tree(struct object_id *oid) { struct strbuf buf; size_t size; @@ -54,10 +54,10 @@ static void write_tree(unsigned char *sha1) for (i = 0; i < used; i++) { struct treeent *ent = entries[i]; strbuf_addf(&buf, "%o %s%c", ent->mode, ent->name, '\0'); - strbuf_add(&buf, ent->sha1, 20); + strbuf_add(&buf, ent->oid.hash, the_hash_algo->rawsz); } - write_sha1_file(buf.buf, buf.len, tree_type, sha1); + write_object_file(buf.buf, buf.len, tree_type, oid); strbuf_release(&buf); } @@ -69,11 +69,12 @@ static const char *mktree_usage[] = { static void mktree_line(char *buf, size_t len, int nul_term_line, int allow_missing) { char *ptr, *ntr; + const char *p; unsigned mode; enum object_type mode_type; /* object type derived from mode */ enum object_type obj_type; /* object type derived from sha */ char *path, *to_free = NULL; - unsigned char sha1[20]; + struct object_id oid; ptr = buf; /* @@ -85,9 +86,8 @@ static void mktree_line(char *buf, size_t len, int nul_term_line, int allow_miss die("input format error: %s", buf); ptr = ntr + 1; /* type */ ntr = strchr(ptr, ' '); - if (!ntr || buf + len <= ntr + 40 || - ntr[41] != '\t' || - get_sha1_hex(ntr + 1, sha1)) + if (!ntr || parse_oid_hex(ntr + 1, &oid, &p) || + *p != '\t') die("input format error: %s", buf); /* It is perfectly normal if we do not have a commit from a submodule */ @@ -112,16 +112,16 @@ static void mktree_line(char *buf, size_t len, int nul_term_line, int allow_miss mode_type = object_type(mode); if (mode_type != type_from_string(ptr)) { die("entry '%s' object type (%s) doesn't match mode type (%s)", - path, ptr, typename(mode_type)); + path, ptr, type_name(mode_type)); } /* Check the type of object identified by sha1 */ - obj_type = sha1_object_info(sha1, NULL); + obj_type = oid_object_info(&oid, NULL); if (obj_type < 0) { if (allow_missing) { ; /* no problem - missing objects are presumed to be of the right type */ } else { - die("entry '%s' object %s is unavailable", path, sha1_to_hex(sha1)); + die("entry '%s' object %s is unavailable", path, oid_to_hex(&oid)); } } else { if (obj_type != mode_type) { @@ -131,18 +131,18 @@ static void mktree_line(char *buf, size_t len, int nul_term_line, int allow_miss * because the new tree entry will never be correct. 
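The write_tree() and mktree_line() hunks show the same conversion in builtin/mktree.c, and additionally stop hard-coding the hash width: the raw bytes of the entry's object id are appended with the_hash_algo->rawsz instead of the literal 20. A rough sketch of writing a tree object this way, under the same in-tree assumptions (the mode and file name are arbitrary):

#include "cache.h"

/* Illustrative only: write a tree that contains a single blob entry. */
static void write_one_entry_tree(const struct object_id *blob_oid,
                                 struct object_id *tree_oid)
{
        struct strbuf buf = STRBUF_INIT;

        /* tree entry format: "<octal mode> <name>\0" followed by raw hash bytes */
        strbuf_addf(&buf, "%o %s%c", 0100644, "file.txt", '\0');
        strbuf_add(&buf, blob_oid->hash, the_hash_algo->rawsz);

        if (write_object_file(buf.buf, buf.len, tree_type, tree_oid) < 0)
                die("unable to write tree object");
        strbuf_release(&buf);
}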
*/ die("entry '%s' object %s is a %s but specified type was (%s)", - path, sha1_to_hex(sha1), typename(obj_type), typename(mode_type)); + path, oid_to_hex(&oid), type_name(obj_type), type_name(mode_type)); } } - append_to_tree(mode, sha1, path); + append_to_tree(mode, &oid, path); free(to_free); } int cmd_mktree(int ac, const char **av, const char *prefix) { struct strbuf sb = STRBUF_INIT; - unsigned char sha1[20]; + struct object_id oid; int nul_term_line = 0; int allow_missing = 0; int is_batch_mode = 0; @@ -181,8 +181,8 @@ int cmd_mktree(int ac, const char **av, const char *prefix) */ ; /* skip creating an empty tree */ } else { - write_tree(sha1); - puts(sha1_to_hex(sha1)); + write_tree(&oid); + puts(oid_to_hex(&oid)); fflush(stdout); } used=0; /* reset tree entry buffer for re-use in batch mode */ diff --git a/builtin/mv.c b/builtin/mv.c index 8ce6a2ddd4..6d141f7a53 100644 --- a/builtin/mv.c +++ b/builtin/mv.c @@ -122,7 +122,8 @@ int cmd_mv(int argc, const char **argv, const char *prefix) struct option builtin_mv_options[] = { OPT__VERBOSE(&verbose, N_("be verbose")), OPT__DRY_RUN(&show_only, N_("dry run")), - OPT__FORCE(&force, N_("force move/rename even if target exists")), + OPT__FORCE(&force, N_("force move/rename even if target exists"), + PARSE_OPT_NOCOMPLETE), OPT_BOOL('k', NULL, &ignore_errors, N_("skip move/rename errors")), OPT_END(), }; @@ -292,8 +293,8 @@ int cmd_mv(int argc, const char **argv, const char *prefix) if (gitmodules_modified) stage_updated_gitmodules(&the_index); - if (active_cache_changed && - write_locked_index(&the_index, &lock_file, COMMIT_LOCK)) + if (write_locked_index(&the_index, &lock_file, + COMMIT_LOCK | SKIP_IF_UNCHANGED)) die(_("Unable to write new index file")); return 0; diff --git a/builtin/name-rev.c b/builtin/name-rev.c index 9e088ebd11..387ddf85d2 100644 --- a/builtin/name-rev.c +++ b/builtin/name-rev.c @@ -328,7 +328,7 @@ static void show_name(const struct object *obj, else if (allow_undefined) printf("undefined\n"); else if (always) - printf("%s\n", find_unique_abbrev(oid->hash, DEFAULT_ABBREV)); + printf("%s\n", find_unique_abbrev(oid, DEFAULT_ABBREV)); else die("cannot describe '%s'", oid_to_hex(oid)); strbuf_release(&buf); diff --git a/builtin/notes.c b/builtin/notes.c index 7c81761645..921e08d5bf 100644 --- a/builtin/notes.c +++ b/builtin/notes.c @@ -118,11 +118,11 @@ static int list_each_note(const struct object_id *object_oid, return 0; } -static void copy_obj_to_fd(int fd, const unsigned char *sha1) +static void copy_obj_to_fd(int fd, const struct object_id *oid) { unsigned long size; enum object_type type; - char *buf = read_sha1_file(sha1, &type, &size); + char *buf = read_object_file(oid, &type, &size); if (buf) { if (size) write_or_die(fd, buf, size); @@ -162,7 +162,7 @@ static void write_commented_object(int fd, const struct object_id *object) } static void prepare_note_data(const struct object_id *object, struct note_data *d, - const unsigned char *old_note) + const struct object_id *old_note) { if (d->use_editor || !d->given) { int fd; @@ -198,9 +198,9 @@ static void prepare_note_data(const struct object_id *object, struct note_data * } } -static void write_note_data(struct note_data *d, unsigned char *sha1) +static void write_note_data(struct note_data *d, struct object_id *oid) { - if (write_sha1_file(d->buf.buf, d->buf.len, blob_type, sha1)) { + if (write_object_file(d->buf.buf, d->buf.len, blob_type, oid)) { error(_("unable to write note object")); if (d->edit_path) error(_("the note contents have been left in %s"), @@ 
-253,7 +253,7 @@ static int parse_reuse_arg(const struct option *opt, const char *arg, int unset) if (get_oid(arg, &object)) die(_("failed to resolve '%s' as a valid ref."), arg); - if (!(buf = read_sha1_file(object.hash, &type, &len))) { + if (!(buf = read_object_file(&object, &type, &len))) { free(buf); die(_("failed to read object '%s'."), arg); } @@ -413,7 +413,7 @@ static int add(int argc, const char **argv, const char *prefix) parse_reuse_arg}, OPT_BOOL(0, "allow-empty", &allow_empty, N_("allow storing empty note")), - OPT__FORCE(&force, N_("replace existing notes")), + OPT__FORCE(&force, N_("replace existing notes"), PARSE_OPT_NOCOMPLETE), OPT_END() }; @@ -457,9 +457,9 @@ static int add(int argc, const char **argv, const char *prefix) oid_to_hex(&object)); } - prepare_note_data(&object, &d, note ? note->hash : NULL); + prepare_note_data(&object, &d, note); if (d.buf.len || allow_empty) { - write_note_data(&d, new_note.hash); + write_note_data(&d, &new_note); if (add_note(t, &object, &new_note, combine_notes_overwrite)) die("BUG: combine_notes_overwrite failed"); commit_notes(t, "Notes added by 'git notes add'"); @@ -484,7 +484,7 @@ static int copy(int argc, const char **argv, const char *prefix) struct notes_tree *t; const char *rewrite_cmd = NULL; struct option options[] = { - OPT__FORCE(&force, N_("replace existing notes")), + OPT__FORCE(&force, N_("replace existing notes"), PARSE_OPT_NOCOMPLETE), OPT_BOOL(0, "stdin", &from_stdin, N_("read objects from stdin")), OPT_STRING(0, "for-rewrite", &rewrite_cmd, N_("command"), N_("load rewriting config for (implies " @@ -602,13 +602,13 @@ static int append_edit(int argc, const char **argv, const char *prefix) t = init_notes_check(argv[0], NOTES_INIT_WRITABLE); note = get_note(t, &object); - prepare_note_data(&object, &d, edit && note ? note->hash : NULL); + prepare_note_data(&object, &d, edit && note ? 
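Several hunks above ("git mv", "git notes add", "git notes copy") pass a third argument to OPT__FORCE(). The extra flag, PARSE_OPT_NOCOMPLETE, appears to exist so that destructive --force options are not offered by shell completion. A sketch of an option table using it; the variables and descriptions are placeholders:

#include "cache.h"
#include "parse-options.h"

static int force;
static int verbose;

/* Illustrative only: an option table whose --force is hidden from completion. */
static struct option example_options[] = {
        OPT__VERBOSE(&verbose, N_("be verbose")),
        OPT__FORCE(&force, N_("force the operation"), PARSE_OPT_NOCOMPLETE),
        OPT_END()
};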
note : NULL); if (note && !edit) { /* Append buf to previous note contents */ unsigned long size; enum object_type type; - char *prev_buf = read_sha1_file(note->hash, &type, &size); + char *prev_buf = read_object_file(note, &type, &size); strbuf_grow(&d.buf, size + 1); if (d.buf.len && prev_buf && size) @@ -619,7 +619,7 @@ static int append_edit(int argc, const char **argv, const char *prefix) } if (d.buf.len || allow_empty) { - write_note_data(&d, new_note.hash); + write_note_data(&d, &new_note); if (add_note(t, &object, &new_note, combine_notes_overwrite)) die("BUG: combine_notes_overwrite failed"); logmsg = xstrfmt("Notes added by 'git notes %s'", argv[0]); diff --git a/builtin/pack-objects.c b/builtin/pack-objects.c index 6b9cfc289d..e7e673266e 100644 --- a/builtin/pack-objects.c +++ b/builtin/pack-objects.c @@ -26,7 +26,7 @@ #include "reachable.h" #include "sha1-array.h" #include "argv-array.h" -#include "mru.h" +#include "list.h" #include "packfile.h" static const char *pack_usage[] = { @@ -75,6 +75,8 @@ static int use_bitmap_index = -1; static int write_bitmap_index; static uint16_t write_bitmap_options; +static int exclude_promisor_objects; + static unsigned long delta_cache_size = 0; static unsigned long max_delta_cache_size = 256 * 1024 * 1024; static unsigned long cache_max_small_delta_size = 1000; @@ -84,8 +86,9 @@ static unsigned long window_memory_limit = 0; static struct list_objects_filter_options filter_options; enum missing_action { - MA_ERROR = 0, /* fail if any missing objects are encountered */ - MA_ALLOW_ANY, /* silently allow ALL missing objects */ + MA_ERROR = 0, /* fail if any missing objects are encountered */ + MA_ALLOW_ANY, /* silently allow ALL missing objects */ + MA_ALLOW_PROMISOR, /* silently allow all missing PROMISOR objects */ }; static enum missing_action arg_missing_action; static show_object_fn fn_show_object; @@ -119,11 +122,10 @@ static void *get_delta(struct object_entry *entry) void *buf, *base_buf, *delta_buf; enum object_type type; - buf = read_sha1_file(entry->idx.oid.hash, &type, &size); + buf = read_object_file(&entry->idx.oid, &type, &size); if (!buf) die("unable to read %s", oid_to_hex(&entry->idx.oid)); - base_buf = read_sha1_file(entry->delta->idx.oid.hash, &type, - &base_size); + base_buf = read_object_file(&entry->delta->idx.oid, &type, &base_size); if (!base_buf) die("unable to read %s", oid_to_hex(&entry->delta->idx.oid)); @@ -161,7 +163,7 @@ static unsigned long do_compress(void **pptr, unsigned long size) return stream.total_out; } -static unsigned long write_large_blob_data(struct git_istream *st, struct sha1file *f, +static unsigned long write_large_blob_data(struct git_istream *st, struct hashfile *f, const struct object_id *oid) { git_zstream stream; @@ -185,7 +187,7 @@ static unsigned long write_large_blob_data(struct git_istream *st, struct sha1fi stream.next_out = obuf; stream.avail_out = sizeof(obuf); zret = git_deflate(&stream, readlen ? 0 : Z_FINISH); - sha1write(f, obuf, stream.next_out - obuf); + hashwrite(f, obuf, stream.next_out - obuf); olen += stream.next_out - obuf; } if (stream.avail_in) @@ -230,7 +232,7 @@ static int check_pack_inflate(struct packed_git *p, stream.total_in == len) ? 
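Starting with builtin/pack-objects.c, the checksummed-file API loses its SHA-1-specific spelling: struct sha1file, sha1write() and sha1close() become struct hashfile, hashwrite() and hashclose(). A small sketch of the renamed API, assuming the csum-file.h of this era; the file name and payload are placeholders:

#include "cache.h"
#include "csum-file.h"

/* Illustrative only: write a file with a trailing checksum via the renamed API. */
static void write_checksummed_file(int fd, struct object_id *result)
{
        struct hashfile *f = hashfd(fd, "example.tmp");   /* was sha1fd() */
        static const char payload[] = "hello, hashfile\n";

        hashwrite(f, payload, sizeof(payload) - 1);       /* was sha1write() */

        /* was sha1close(); CSUM_FSYNC also fsyncs before closing */
        hashclose(f, result->hash, CSUM_FSYNC);
}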
0 : -1; } -static void copy_pack_data(struct sha1file *f, +static void copy_pack_data(struct hashfile *f, struct packed_git *p, struct pack_window **w_curs, off_t offset, @@ -243,14 +245,14 @@ static void copy_pack_data(struct sha1file *f, in = use_pack(p, w_curs, offset, &avail); if (avail > len) avail = (unsigned long)len; - sha1write(f, in, avail); + hashwrite(f, in, avail); offset += avail; len -= avail; } } /* Return 0 if we will bust the pack-size limit */ -static unsigned long write_no_reuse_object(struct sha1file *f, struct object_entry *entry, +static unsigned long write_no_reuse_object(struct hashfile *f, struct object_entry *entry, unsigned long limit, int usable_delta) { unsigned long size, datalen; @@ -264,11 +266,10 @@ static unsigned long write_no_reuse_object(struct sha1file *f, struct object_ent if (!usable_delta) { if (entry->type == OBJ_BLOB && entry->size > big_file_threshold && - (st = open_istream(entry->idx.oid.hash, &type, &size, NULL)) != NULL) + (st = open_istream(&entry->idx.oid, &type, &size, NULL)) != NULL) buf = NULL; else { - buf = read_sha1_file(entry->idx.oid.hash, &type, - &size); + buf = read_object_file(&entry->idx.oid, &type, &size); if (!buf) die(_("unable to read %s"), oid_to_hex(&entry->idx.oid)); @@ -323,8 +324,8 @@ static unsigned long write_no_reuse_object(struct sha1file *f, struct object_ent free(buf); return 0; } - sha1write(f, header, hdrlen); - sha1write(f, dheader + pos, sizeof(dheader) - pos); + hashwrite(f, header, hdrlen); + hashwrite(f, dheader + pos, sizeof(dheader) - pos); hdrlen += sizeof(dheader) - pos; } else if (type == OBJ_REF_DELTA) { /* @@ -337,8 +338,8 @@ static unsigned long write_no_reuse_object(struct sha1file *f, struct object_ent free(buf); return 0; } - sha1write(f, header, hdrlen); - sha1write(f, entry->delta->idx.oid.hash, 20); + hashwrite(f, header, hdrlen); + hashwrite(f, entry->delta->idx.oid.hash, 20); hdrlen += 20; } else { if (limit && hdrlen + datalen + 20 >= limit) { @@ -347,13 +348,13 @@ static unsigned long write_no_reuse_object(struct sha1file *f, struct object_ent free(buf); return 0; } - sha1write(f, header, hdrlen); + hashwrite(f, header, hdrlen); } if (st) { datalen = write_large_blob_data(st, f, &entry->idx.oid); close_istream(st); } else { - sha1write(f, buf, datalen); + hashwrite(f, buf, datalen); free(buf); } @@ -361,7 +362,7 @@ static unsigned long write_no_reuse_object(struct sha1file *f, struct object_ent } /* Return 0 if we will bust the pack-size limit */ -static off_t write_reuse_object(struct sha1file *f, struct object_entry *entry, +static off_t write_reuse_object(struct hashfile *f, struct object_entry *entry, unsigned long limit, int usable_delta) { struct packed_git *p = entry->in_pack; @@ -412,8 +413,8 @@ static off_t write_reuse_object(struct sha1file *f, struct object_entry *entry, unuse_pack(&w_curs); return 0; } - sha1write(f, header, hdrlen); - sha1write(f, dheader + pos, sizeof(dheader) - pos); + hashwrite(f, header, hdrlen); + hashwrite(f, dheader + pos, sizeof(dheader) - pos); hdrlen += sizeof(dheader) - pos; reused_delta++; } else if (type == OBJ_REF_DELTA) { @@ -421,8 +422,8 @@ static off_t write_reuse_object(struct sha1file *f, struct object_entry *entry, unuse_pack(&w_curs); return 0; } - sha1write(f, header, hdrlen); - sha1write(f, entry->delta->idx.oid.hash, 20); + hashwrite(f, header, hdrlen); + hashwrite(f, entry->delta->idx.oid.hash, 20); hdrlen += 20; reused_delta++; } else { @@ -430,7 +431,7 @@ static off_t write_reuse_object(struct sha1file *f, struct object_entry 
*entry, unuse_pack(&w_curs); return 0; } - sha1write(f, header, hdrlen); + hashwrite(f, header, hdrlen); } copy_pack_data(f, p, &w_curs, offset, datalen); unuse_pack(&w_curs); @@ -439,7 +440,7 @@ static off_t write_reuse_object(struct sha1file *f, struct object_entry *entry, } /* Return 0 if we will bust the pack-size limit */ -static off_t write_object(struct sha1file *f, +static off_t write_object(struct hashfile *f, struct object_entry *entry, off_t write_offset) { @@ -512,7 +513,7 @@ enum write_one_status { WRITE_ONE_RECURSIVE = 2 /* already scheduled to be written */ }; -static enum write_one_status write_one(struct sha1file *f, +static enum write_one_status write_one(struct hashfile *f, struct object_entry *e, off_t *offset) { @@ -731,7 +732,7 @@ static struct object_entry **compute_write_order(void) return wo; } -static off_t write_reused_pack(struct sha1file *f) +static off_t write_reused_pack(struct hashfile *f) { unsigned char buffer[8192]; off_t to_write, total; @@ -762,7 +763,7 @@ static off_t write_reused_pack(struct sha1file *f) if (read_pack > to_write) read_pack = to_write; - sha1write(f, buffer, read_pack); + hashwrite(f, buffer, read_pack); to_write -= read_pack; /* @@ -791,7 +792,7 @@ static const char no_split_warning[] = N_( static void write_pack_file(void) { uint32_t i = 0, j; - struct sha1file *f; + struct hashfile *f; off_t offset; uint32_t nr_remaining = nr_result; time_t last_mtime = 0; @@ -807,7 +808,7 @@ static void write_pack_file(void) char *pack_tmp_name = NULL; if (pack_to_stdout) - f = sha1fd_throughput(1, "", progress_state); + f = hashfd_throughput(1, "", progress_state); else f = create_tmp_packfile(&pack_tmp_name); @@ -834,11 +835,11 @@ static void write_pack_file(void) * If so, rewrite it like in fast-import */ if (pack_to_stdout) { - sha1close(f, oid.hash, CSUM_CLOSE); + hashclose(f, oid.hash, CSUM_CLOSE); } else if (nr_written == nr_remaining) { - sha1close(f, oid.hash, CSUM_FSYNC); + hashclose(f, oid.hash, CSUM_FSYNC); } else { - int fd = sha1close(f, oid.hash, 0); + int fd = hashclose(f, oid.hash, 0); fixup_pack_header_footer(fd, oid.hash, pack_tmp_name, nr_written, oid.hash, offset); close(fd); @@ -1006,8 +1007,8 @@ static int want_object_in_pack(const struct object_id *oid, struct packed_git **found_pack, off_t *found_offset) { - struct mru_entry *entry; int want; + struct list_head *pos; if (!exclude && local && has_loose_object_nonlocal(oid->hash)) return 0; @@ -1023,8 +1024,8 @@ static int want_object_in_pack(const struct object_id *oid, return want; } - for (entry = packed_git_mru.head; entry; entry = entry->next) { - struct packed_git *p = entry->item; + list_for_each(pos, &packed_git_mru) { + struct packed_git *p = list_entry(pos, struct packed_git, mru); off_t offset; if (p == *found_pack) @@ -1041,7 +1042,7 @@ static int want_object_in_pack(const struct object_id *oid, } want = want_found_object(exclude, p); if (!exclude && want > 0) - mru_mark(&packed_git_mru, entry); + list_move(&p->mru, &packed_git_mru); if (want != -1) return want; } @@ -1187,7 +1188,7 @@ static struct pbase_tree_cache *pbase_tree_get(const struct object_id *oid) /* Did not find one. Either we got a bogus request or * we need to read and perhaps cache. 
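The want_object_in_pack() hunk replaces the bespoke struct mru with the generic list.h doubly linked list: struct packed_git now embeds a struct list_head member named mru, iteration uses list_for_each() plus list_entry(), and "mark as most recently used" becomes list_move() to the head of packed_git_mru. A sketch of that idiom; find_pack_entry_one() is used here only as a stand-in for whatever per-pack lookup a caller performs, and the header locations are assumed:

#include "cache.h"
#include "list.h"
#include "packfile.h"

/* Illustrative only: walk the pack MRU list and promote the pack that hits. */
static struct packed_git *find_pack_for(const struct object_id *oid)
{
        struct list_head *pos;

        list_for_each(pos, &packed_git_mru) {
                struct packed_git *p = list_entry(pos, struct packed_git, mru);

                if (find_pack_entry_one(oid->hash, p)) {
                        /* move the hit to the front so hot packs stay hot */
                        list_move(&p->mru, &packed_git_mru);
                        return p;
                }
        }
        return NULL;
}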
*/ - data = read_sha1_file(oid->hash, &type, &size); + data = read_object_file(oid, &type, &size); if (!data) return NULL; if (type != OBJ_TREE) { @@ -1348,7 +1349,7 @@ static void add_preferred_base(struct object_id *oid) if (window <= num_preferred_base++) return; - data = read_object_with_reference(oid->hash, tree_type, &size, tree_oid.hash); + data = read_object_with_reference(oid, tree_type, &size, &tree_oid); if (!data) return; @@ -1376,10 +1377,10 @@ static void cleanup_preferred_base(void) it = pbase_tree; pbase_tree = NULL; while (it) { - struct pbase_tree *this = it; - it = this->next; - free(this->pcache.tree_data); - free(this); + struct pbase_tree *tmp = it; + it = tmp->next; + free(tmp->pcache.tree_data); + free(tmp); } for (i = 0; i < ARRAY_SIZE(pbase_tree_cache); i++) { @@ -1513,7 +1514,7 @@ static void check_object(struct object_entry *entry) unuse_pack(&w_curs); } - entry->type = sha1_object_info(entry->idx.oid.hash, &entry->size); + entry->type = oid_object_info(&entry->idx.oid, &entry->size); /* * The error condition is checked in prepare_pack(). This is * to permit a missing preferred base object to be ignored @@ -1575,8 +1576,7 @@ static void drop_reused_delta(struct object_entry *entry) * And if that fails, the error will be recorded in entry->type * and dealt with in prepare_pack(). */ - entry->type = sha1_object_info(entry->idx.oid.hash, - &entry->size); + entry->type = oid_object_info(&entry->idx.oid, &entry->size); } } @@ -1868,8 +1868,7 @@ static int try_delta(struct unpacked *trg, struct unpacked *src, /* Load data if not already done */ if (!trg->data) { read_lock(); - trg->data = read_sha1_file(trg_entry->idx.oid.hash, &type, - &sz); + trg->data = read_object_file(&trg_entry->idx.oid, &type, &sz); read_unlock(); if (!trg->data) die("object %s cannot be read", @@ -1882,8 +1881,7 @@ static int try_delta(struct unpacked *trg, struct unpacked *src, } if (!src->data) { read_lock(); - src->data = read_sha1_file(src_entry->idx.oid.hash, &type, - &sz); + src->data = read_object_file(&src_entry->idx.oid, &type, &sz); read_unlock(); if (!src->data) { if (src_entry->preferred_base) { @@ -2546,6 +2544,7 @@ static void read_object_list_from_stdin(void) } } +/* Remember to update object flag allocation in object.h */ #define OBJECT_ADDED (1u<<20) static void show_commit(struct commit *commit, void *data) @@ -2578,6 +2577,20 @@ static void show_object__ma_allow_any(struct object *obj, const char *name, void show_object(obj, name, data); } +static void show_object__ma_allow_promisor(struct object *obj, const char *name, void *data) +{ + assert(arg_missing_action == MA_ALLOW_PROMISOR); + + /* + * Quietly ignore EXPECTED missing objects. This avoids problems with + * staging them now and getting an odd error later. 
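add_preferred_base() above also shows the converted read_object_with_reference(), which now takes and hands back struct object_id: it peels the named object until it reaches one of the requested type and returns both the buffer and the id of the object actually read. A short sketch, assuming the post-conversion prototype visible in the hunk:

#include "cache.h"

/* Illustrative only: resolve a commit-ish down to its tree contents. */
static void *read_tree_of(const struct object_id *oid,
                          unsigned long *sizep,
                          struct object_id *tree_oid)
{
        void *data = read_object_with_reference(oid, tree_type, sizep, tree_oid);

        if (!data)
                warning("unable to peel %s to a tree", oid_to_hex(oid));
        return data;
}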
+ */ + if (!has_object_file(&obj->oid) && is_promisor_object(&obj->oid)) + return; + + show_object(obj, name, data); +} + static int option_parse_missing_action(const struct option *opt, const char *arg, int unset) { @@ -2592,10 +2605,18 @@ static int option_parse_missing_action(const struct option *opt, if (!strcmp(arg, "allow-any")) { arg_missing_action = MA_ALLOW_ANY; + fetch_if_missing = 0; fn_show_object = show_object__ma_allow_any; return 0; } + if (!strcmp(arg, "allow-promisor")) { + arg_missing_action = MA_ALLOW_PROMISOR; + fetch_if_missing = 0; + fn_show_object = show_object__ma_allow_promisor; + return 0; + } + die(_("invalid value for --missing")); return 0; } @@ -2683,7 +2704,7 @@ static void add_objects_in_unpacked_packs(struct rev_info *revs) static int add_loose_object(const struct object_id *oid, const char *path, void *data) { - enum object_type type = sha1_object_info(oid->hash, NULL); + enum object_type type = oid_object_info(oid, NULL); if (type < 0) { warning("loose object at %s could not be examined", path); @@ -2768,7 +2789,7 @@ static void loosen_unused_packed_objects(struct rev_info *revs) if (!packlist_find(&to_pack, oid.hash, NULL) && !has_sha1_pack_kept_or_nonlocal(&oid) && !loosened_object_can_be_discarded(&oid, p->mtime)) - if (force_object_loose(oid.hash, p->mtime)) + if (force_object_loose(&oid, p->mtime)) die("unable to force loose object"); } } @@ -3009,6 +3030,8 @@ int cmd_pack_objects(int argc, const char **argv, const char *prefix) { OPTION_CALLBACK, 0, "missing", NULL, N_("action"), N_("handling for missing objects"), PARSE_OPT_NONEG, option_parse_missing_action }, + OPT_BOOL(0, "exclude-promisor-objects", &exclude_promisor_objects, + N_("do not pack objects in promisor packfiles")), OPT_END(), }; @@ -3054,6 +3077,12 @@ int cmd_pack_objects(int argc, const char **argv, const char *prefix) argv_array_push(&rp, "--unpacked"); } + if (exclude_promisor_objects) { + use_internal_rev_list = 1; + fetch_if_missing = 0; + argv_array_push(&rp, "--exclude-promisor-objects"); + } + if (!reuse_object) reuse_delta = 0; if (pack_compression_level == -1) diff --git a/builtin/pack-redundant.c b/builtin/pack-redundant.c index aaa8136322..991e1bb76f 100644 --- a/builtin/pack-redundant.c +++ b/builtin/pack-redundant.c @@ -48,17 +48,17 @@ static inline void llist_item_put(struct llist_item *item) static inline struct llist_item *llist_item_get(void) { - struct llist_item *new; + struct llist_item *new_item; if ( free_nodes ) { - new = free_nodes; + new_item = free_nodes; free_nodes = free_nodes->next; } else { int i = 1; - ALLOC_ARRAY(new, BLKSIZE); + ALLOC_ARRAY(new_item, BLKSIZE); for (; i < BLKSIZE; i++) - llist_item_put(&new[i]); + llist_item_put(&new_item[i]); } - return new; + return new_item; } static void llist_free(struct llist *list) @@ -80,26 +80,26 @@ static inline void llist_init(struct llist **list) static struct llist * llist_copy(struct llist *list) { struct llist *ret; - struct llist_item *new, *old, *prev; + struct llist_item *new_item, *old_item, *prev; llist_init(&ret); if ((ret->size = list->size) == 0) return ret; - new = ret->front = llist_item_get(); - new->sha1 = list->front->sha1; + new_item = ret->front = llist_item_get(); + new_item->sha1 = list->front->sha1; - old = list->front->next; - while (old) { - prev = new; - new = llist_item_get(); - prev->next = new; - new->sha1 = old->sha1; - old = old->next; + old_item = list->front->next; + while (old_item) { + prev = new_item; + new_item = llist_item_get(); + prev->next = new_item; + new_item->sha1 
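These pack-objects hunks wire up the partial-clone ("promisor") machinery: --missing=allow-promisor quietly tolerates missing objects that a promisor remote has promised to supply, --exclude-promisor-objects keeps such objects out of the pack altogether, and both switch off fetch_if_missing so that object enumeration does not trigger on-demand fetches. A sketch of the test the allow-promisor callback performs; the wrapper name is illustrative and the header locations are assumed:

#include "cache.h"
#include "object.h"
#include "packfile.h"

/* Illustrative only: is a missing object acceptable under allow-promisor? */
static int missing_object_is_expected(const struct object *obj)
{
        /*
         * An object we do not have locally is tolerated only when a
         * promisor pack vouches that some remote can supply it later.
         */
        return !has_object_file(&obj->oid) && is_promisor_object(&obj->oid);
}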
= old_item->sha1; + old_item = old_item->next; } - new->next = NULL; - ret->back = new; + new_item->next = NULL; + ret->back = new_item; return ret; } @@ -108,24 +108,24 @@ static inline struct llist_item *llist_insert(struct llist *list, struct llist_item *after, const unsigned char *sha1) { - struct llist_item *new = llist_item_get(); - new->sha1 = sha1; - new->next = NULL; + struct llist_item *new_item = llist_item_get(); + new_item->sha1 = sha1; + new_item->next = NULL; if (after != NULL) { - new->next = after->next; - after->next = new; + new_item->next = after->next; + after->next = new_item; if (after == list->back) - list->back = new; + list->back = new_item; } else {/* insert in front */ if (list->size == 0) - list->back = new; + list->back = new_item; else - new->next = list->front; - list->front = new; + new_item->next = list->front; + list->front = new_item; } list->size++; - return new; + return new_item; } static inline struct llist_item *llist_insert_back(struct llist *list, diff --git a/builtin/prune.c b/builtin/prune.c index d2fdae680a..38ced18dad 100644 --- a/builtin/prune.c +++ b/builtin/prune.c @@ -50,9 +50,9 @@ static int prune_object(const struct object_id *oid, const char *fullpath, if (st.st_mtime > expire) return 0; if (show_only || verbose) { - enum object_type type = sha1_object_info(oid->hash, NULL); + enum object_type type = oid_object_info(oid, NULL); printf("%s %s\n", oid_to_hex(oid), - (type > 0) ? typename(type) : "unknown"); + (type > 0) ? type_name(type) : "unknown"); } if (!show_only) unlink_or_warn(fullpath); @@ -101,12 +101,15 @@ int cmd_prune(int argc, const char **argv, const char *prefix) { struct rev_info revs; struct progress *progress = NULL; + int exclude_promisor_objects = 0; const struct option options[] = { OPT__DRY_RUN(&show_only, N_("do not remove, show only")), OPT__VERBOSE(&verbose, N_("report pruned objects")), OPT_BOOL(0, "progress", &show_progress, N_("show progress")), OPT_EXPIRY_DATE(0, "expire", &expire, N_("expire objects older than