* whitespace=!indent,trail,space
-*.[ch] whitespace=indent,trail,space
+*.[ch] whitespace=indent,trail,space diff=cpp
*.sh whitespace=indent,trail,space
Chris Shoemaker <c.shoemaker@cox.net>
Chris Wright <chrisw@sous-sol.org> <chrisw@osdl.org>
Cord Seele <cowose@gmail.com> <cowose@googlemail.com>
+Christian Couder <chriscool@tuxfamily.org> <christian.couder@gmail.com>
Christian Stimming <stimming@tuhh.de> <chs@ckiste.goetheallee>
Csaba Henk <csaba@gluster.com> <csaba@lowlife.hu>
Dan Johnson <computerdruid@gmail.com>
David Kågedal <davidk@lysator.liu.se>
David Reiss <dreiss@facebook.com> <dreiss@dreiss-vmware.(none)>
David S. Miller <davem@davemloft.net>
+David Turner <novalis@novalis.org> <dturner@twopensource.com>
Deskin Miller <deskinm@umich.edu>
Dirk Süsserott <newsletter@dirk.my1.cc>
Eric Blake <eblake@redhat.com> <ebb9@byu.net>
packages:
- language-pack-is
- git-svn
+ - apache2
env:
global:
- DEFAULT_TEST_TARGET=prove
- GIT_PROVE_OPTS="--timer --jobs 3 --state=failed,slow,save"
- GIT_TEST_OPTS="--verbose --tee"
+ - GIT_TEST_HTTPD=true
- GIT_TEST_CLONE_2GB=YesPlease
# t9810 occasionally fails on Travis CI OS X
# t9816 occasionally fails with "TAP out of sequence errors" on Travis CI OS X
brew tap homebrew/binary --quiet
brew_force_set_latest_binary_hash perforce
brew_force_set_latest_binary_hash perforce-server
+ # Uncomment this if you want to run perf tests:
+ # brew install gnu-time
brew install git-lfs perforce-server perforce gettext
brew link --force gettext
;;
modifying paragraphs or option/command explanations that contain options
or commands:
- Literal examples (e.g. use of command-line options, command names, and
- configuration variables) are typeset in monospace, and if you can use
- `backticks around word phrases`, do so.
+ Literal examples (e.g. use of command-line options, command names,
+ branch names, configuration and environment variables) must be
+ typeset in monospace (i.e. wrapped with backticks):
`--pretty=oneline`
`git rev-list`
`remote.pushDefault`
+ `GIT_DIR`
+ `HEAD`
+
+ An environment variable must be prefixed with "$" only when referring to its
+ value and not when referring to the variable itself; in this case there is
+ nothing to add except the backticks:
+ `GIT_DIR` is specified
+ `$GIT_DIR/hooks/pre-receive`
Word phrases enclosed in `backtick characters` are rendered literally
and will not be further expanded. The use of `backticks` to achieve the
TECH_DOCS += technical/racy-git
TECH_DOCS += technical/send-pack-pipeline
TECH_DOCS += technical/shallow
+TECH_DOCS += technical/signature-format
TECH_DOCS += technical/trivial-merge
SP_ARTICLES += $(TECH_DOCS)
SP_ARTICLES += technical/api-index
--- /dev/null
+Git 2.10 Release Notes
+======================
+
+Backward compatibility notes
+----------------------------
+
+Updates since v2.9
+------------------
+
+UI, Workflows & Features
+
+ * "git pull --rebase --verify-signature" learned to warn the user
+ that "--verify-signature" is a no-op when rebasing.
+
+ * An upstream project can now recommend, in the .gitmodules file it
+   ships, that some of its submodules be cloned shallowly.
+
+ * "git worktree add" learned that '-' can be used as a short-hand for
+ "@{-1}", the previous branch.
+
+ * Update the funcname definition to support css files.
+
+ * The completion script (in contrib/) learned to complete "git
+ status" options.
+
+ * Messages that are generated by auto gc during "git push" on the
+ receiving end are now passed back to the sending end in such a way
+ that they are shown with "remote: " prefix to avoid confusing the
+ users.
+
+ * "git add -i/-p" learned to honor diff.compactionHeuristic
+ experimental knob, so that the user can work on the same hunk split
+ as "git diff" output.
+
+ * "upload-pack" allows a custom "git pack-objects" replacement when
+ responding to "fetch/clone" via the uploadpack.packObjectsHook.
+ (merge b738396 jk/upload-pack-hook later to maint).
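+   For example, on the serving side (the hook path is merely
+   illustrative; the variable is ignored in repository-level config, so
+   set it in per-user or system-wide configuration):
+   $ git config --global uploadpack.packObjectsHook /usr/local/bin/alt-pack-objects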
+
+ * Teach format-patch and mailsplit (hence "am") how a line that
+ happens to begin with "From " in the e-mail message is quoted with
+ ">", so that these lines can be restored to their original shape.
+ (merge d9925d1 ew/mboxrd-format-am later to maint).
+
+ * "git repack" learned the "--keep-unreachable" option, which sends
+ loose unreachable objects to a pack instead of leaving them loose.
+ This helps heuristics based on the number of loose objects
+ (e.g. "gc --auto").
+ (merge e26a8c4 jk/repack-keep-unreachable later to maint).
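+   A possible invocation, assuming a full repack is also wanted:
+   $ git repack -a -d --keep-unreachable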
+
+ * "log --graph --format=" learned that "%>|(N)" specifies the width
+ relative to the terminal's left edge, not relative to the area to
+ draw text that is to the right of the ancestry-graph section. It
+ also now accepts negative N that means the column limit is relative
+ to the right border.
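+   For example, to pad the next placeholder relative to the terminal's
+   right border:
+   $ git log --graph --format='%h %>|(-30)%an'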
+
+ * A careless invocation of "git send-email directory/" after editing
+ 0001-change.patch with an editor often ends up sending both
+ 0001-change.patch and its backup file, 0001-change.patch~, causing
+ embarrassment and a minor confusion. Detect such an input and
+ offer to skip the backup files when sending the patches out.
+ (merge 531220b jc/send-email-skip-backup later to maint).
+
+ * "git submodule update" that drives many "git clone" could
+ eventually hit flaky servers/network conditions on one of the
+ submodules; the command learned to retry the attempt.
+
+ * The output coloring scheme learned two new attributes, italic and
+ strike, in addition to existing bold, reverse, etc.
+
+ * "git log" learns log.showSignature configuration variable, and a
+ command line option "--no-show-signature" to countermand it.
+ (merge fce04c3 mj/log-show-signature-conf later to maint).
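+   For example:
+   $ git config log.showSignature true
+   $ git log --no-show-signature      # countermand it for one run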
+
+ * More markings of messages for i18n, with updates to various tests
+ to pass GETTEXT_POISON tests.
+
+ * "git archive" learned to handle files that are larger than 8GB and
+   commits farther in the future than expressible by the traditional US-TAR
+ format.
+ (merge 5caeeb8 jk/big-and-future-archive-tar later to maint).
+
+ * A new configuration variable core.sshCommand has been added to
+ specify what value for GIT_SSH_COMMAND to use per repository.
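+   For example (the key path is merely illustrative):
+   $ git config core.sshCommand "ssh -i ~/.ssh/deploy_key"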
+
+ * "git worktree prune" protected worktrees that are marked as
+ "locked" by creating a file in a known location. "git worktree"
+ command learned a dedicated command pair to create and remove such
+   a file, so that the users do not have to do this with an editor.
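+   For example (the worktree path and reason are illustrative):
+   $ git worktree lock --reason "on a portable drive" ../wt
+   $ git worktree unlock ../wt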
+
+ * A handful of "git svn" updates.
+
+ * "git push" learned to accept and pass extra options to the
+ receiving end so that hooks can read and react to them.
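+   For example (assuming the receiving end advertises push options and
+   its hooks understand the string):
+   $ git push --push-option=ci.skip origin topic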
+
+ * "git status" learned to suggest "merge --abort" during a conflicted
+ merge, just like it already suggests "rebase --abort" during a
+ conflicted rebase.
+
+ * "git jump" script (in contrib/) has been updated a bit.
+ (merge a91e692 jk/git-jump later to maint).
+
+ * "git push" and "git clone" learned to give better progress meters
+ to the end user who is waiting on the terminal.
+
+ * An entry "git log --decorate" for the tip of the current branch is
+ shown as "HEAD -> name" (where "name" is the name of the branch);
+ the arrow is now painted in the same color as "HEAD", not in the
+ color for commits.
+
+ * "git format-patch" learned format.from configuration variable to
+ specify the default settings for its "--from" option.
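+   For example:
+   $ git config format.from true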
+
+ * "git am -3" calls "git merge-recursive" when it needs to fall back
+ to a three-way merge; this call has been turned into an internal
+ subroutine call instead of spawning a separate subprocess.
+
+ * The command line completion scripts (in contrib/) now know about
+ "git branch --delete/--move [--remote]".
+ (merge 2703c22 vs/completion-branch-fully-spelled-d-m-r later to maint).
+
+ * "git rev-parse --git-path hooks/<hook>" learned to take
+ core.hooksPath configuration variable (introduced during 2.9 cycle)
+ into account.
+ (merge 9445b49 ab/hooks later to maint).
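+   For example (with a hypothetical core.hooksPath of /etc/git/hooks,
+   this now reports a path under that directory):
+   $ git rev-parse --git-path hooks/pre-receive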
+
+ * "git log --show-signature" and other commands that display the
+ verification status of PGP signature now shows the longer key-id,
+ as 32-bit key-id is so last century.
+
+
+Performance, Internal Implementation, Development Support etc.
+
+ * "git fast-import" learned the same performance trick to avoid
+ creating too small a packfile as "git fetch" and "git push" have,
+ using *.unpackLimit configuration.
+
+ * When "git daemon" is run without --[init-]timeout specified, a
+ connection from a client that silently goes offline can hang around
+ for a long time, wasting resources. The socket-level KEEPALIVE has
+ been enabled to allow the OS to notice such failed connections.
+
+ * "git upload-pack" command has been updated to use the parse-options
+ API.
+
+ * The "git apply" standalone program is being libified; the first
+ step to move many state variables into a structure that can be
+ explicitly (re)initialized to make the machinery callable more
+ than once has been merged.
+
+ * HTTP transport gained an option to produce more detailed debugging
+ trace.
+ (merge 73e57aa ep/http-curl-trace later to maint).
+
+ * Instead of taking advantage of the fact that a struct string_list
+ that is allocated with all NULs happens to be the INIT_NODUP kind,
+ the users of string_list structures are taught to initialize them
+ explicitly as such, to document their behaviour better.
+ (merge 2721ce2 jk/string-list-static-init later to maint).
+
+ * HTTPd tests learned to show the server error log to help diagnose
+   failing tests.
+ (merge 44f243d nd/test-lib-httpd-show-error-log-in-verbose later to maint).
+
+ * The ownership rule for the piece of memory that holds references to
+ be fetched in "git fetch" was screwy, which has been cleaned up.
+
+ * "git bisect" makes an internal call to "git diff-tree" when
+ bisection finds the culprit, but this call did not initialize the
+ data structure to pass to the diff-tree API correctly.
+
+ * Further preparatory clean-up for "worktree" feature continues.
+ (merge 0409e0b nd/worktree-cleanup-post-head-protection later to maint).
+
+ * Formats of the various data in which we use GPG signatures (and how
+   to validate them) have been documented.
+
+ * A new run-command API function pipe_command() is introduced to
+ sanely feed data to the standard input while capturing data from
+ the standard output and the standard error of an external process,
+ which is cumbersome to hand-roll correctly without deadlocking.
+
+ * The codepath to sign data in a prepared buffer with GPG has been
+ updated to use this API to read from the status-fd to check for
+ errors (instead of relying on GPG's exit status).
+ (merge efee955 jk/gpg-interface-cleanup later to maint).
+
+ * Allow t/perf framework to use the features from the most recent
+ version of Git even when testing an older installed version.
+
+ * The commands in the "log/diff" family have had an FILE* pointer in the
+ data structure they pass around for a long time, but some codepaths
+ used to always write to the standard output. As a preparatory step
+ to make "git format-patch" available to the internal callers, these
+ codepaths have been updated to consistently write into that FILE*
+ instead.
+
+ * Conversion from unsigned char sha1[20] to struct object_id
+ continues.
+
+ * Improve the look of the way "git fetch" reports what happened to
+ each ref that was fetched.
+
+ * The .c/.h sources are marked as such in our .gitattributes file so
+ that "git diff -W" and friends would work better.
+
+ * Code clean-up to avoid using a variable string that compilers may
+   find untrustworthy as a printf-style format given to the write_file()
+   helper function.
+
+ * "git p4" used a location outside $GIT_DIR/refs/ to place its
+ temporary branches, which has been moved to refs/git-p4-tmp/.
+
+ * Existing autoconf generated test for the need to link with pthread
+ library did not check all the functions from pthread libraries;
+ recent FreeBSD has some functions in libc but not others, and we
+ mistakenly thought linking with libc is enough when it is not.
+
+ * When "git fsck" reports a broken link (e.g. a tree object contains
+   a blob that does not exist), both the containing object and the object
+ that is referred to were reported with their 40-hex object names.
+ The command learned the "--name-objects" option to show the path to
+ the containing object from existing refs (e.g. "HEAD~24^2:file.txt").
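+   For example:
+   $ git fsck --name-objects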
+
+ * Allow http daemon tests in Travis CI tests.
+
+ * Makefile assumed that -lrt is always available on platforms that
+   want to use clock_gettime() and CLOCK_MONOTONIC, which is not the
+ case for recent Mac OS X. The necessary symbols are often found in
+ libc on many modern systems and having -lrt on the command line, as
+ long as the library exists, had no effect, but when the platform
+ removes librt.a that is a different matter--having -lrt will break
+ the linkage.
+
+ This change could be seen as a regression for those who do need to
+ specify -lrt, as they now specifically ask for NEEDS_LIBRT when
+ building. Hopefully they are in the minority these days.
+
+ * Further preparatory work on the refs API before the pluggable
+ backend series can land.
+
+ * Error handling in the codepaths that updates refs has been
+ improved.
+
+ * The API to iterate over all the refs (i.e. for_each_ref(), etc.)
+ has been revamped.
+
+ * The handling of the "text=auto" attribute has been corrected.
+ $ echo "* text=auto eol=crlf" >.gitattributes
+ used to have the same effect as
+ $ echo "* text eol=crlf" >.gitattributes
+ i.e. declaring all files are text (ignoring "auto"). The
+ combination has been fixed to be equivalent to doing
+ $ git config core.autocrlf true
+
+ * Documentation has been updated to show better example usage
+ of the updated "text=auto" attribute.
+
+ * A few tests that specifically target "git rebase -i" have been
+ added.
+
+ * Dumb http transport on the client side has been optimized.
+ (merge ecba195 ew/http-walker later to maint).
+
+ * Users of the parse_options_concat() API function need to allocate
+ extra slots in advance and fill them with OPT_END() when they want
+ to decide the set of supported options dynamically, which makes the
+ code error-prone and hard to read. This has been corrected by tweaking
+ the API to allocate and return a new copy of "struct option" array.
+
+ * "git fetch" exchanges batched have/ack messages between the sender
+ and the receiver, initially doubling every time and then falling
+ back to enlarge the window size linearly. The "smart http"
+   transport, being a half-duplex protocol, outgrows the preset limit
+ too quickly and becomes inefficient when interacting with a large
+ repository. The internal mechanism learned to grow the window size
+ more aggressively when working with the "smart http" transport.
+
+ * Tests for "git svn" have been taught to reuse the lib-httpd test
+ infrastructure when testing the subversion integration that
+ interacts with subversion repositories served over the http://
+ protocol.
+ (merge a8a5d25 ew/git-svn-http-tests later to maint).
+
+ * "git pack-objects" has a few options that tell it not to pack
+ objects found in certain packfiles, which require it to scan .idx
+ files of all available packs. The codepaths involved in these
+ operations have been optimized for a common case of not having any
+ non-local pack and/or any .kept pack.
+
+ * The t3700 test about "add --chmod=-x" has been made a bit more
+ robust and generally cleaned up.
+ (merge 766cdc4 ib/t3700-add-chmod-x-updates later to maint).
+
+ * The build procedure learned a PAGER_ENV knob that lists what default
+ environment variable settings to export for popular pagers. This
+ mechanism is used to tweak the default settings to MORE on FreeBSD.
+ (merge 995bc22 ew/build-time-pager-tweaks later to maint).
+
+ * The http-backend (the server-side component of smart-http
+   transport) used to trickle the HTTP headers one at a time. Now
+ these write(2)s are batched.
+ (merge b36045c ew/http-backend-batch-headers later to maint).
+
+ * When "git rebase" tries to compare set of changes on the updated
+ upstream and our own branch, it computes patch-id for all of these
+ changes and attempts to find matches. This has been optimized by
+ lazily computing the full patch-id (which is expensive) to be
+ compared only for changes that touch the same set of paths.
+ (merge b3dfeeb kw/patch-ids-optim later to maint).
+
+ * A handful of tests that were broken under gettext-poison build have
+ been fixed.
+
+ * The recent i18n patch we added during this cycle did a bit too much
+ refactoring of the messages to avoid word-legos; the repetition has
+ been reduced to help translators.
+
+
+Also contains various documentation updates and code clean-ups.
+
+
+Fixes since v2.9
+----------------
+
+Unless otherwise noted, all the fixes since v2.8 in the maintenance
+track are contained in this release (see the maintenance releases'
+notes for details).
+
+ * The commands in `git log` family take %C(auto) in a custom format
+   string. This unconditionally turned the color on, ignoring
+   --no-color or --color=auto when the output is not connected to
+   a tty; this was corrected to make the format truly behave as
+ "auto".
+
+ * "git rev-list --count" whose walk-length is limited with "-n"
+ option did not work well with the counting optimized to look at the
+ bitmap index.
+
+ * "git show -W" (extend hunks to cover the entire function, delimited
+ by lines that match the "funcname" pattern) used to show the entire
+ file when a change added an entire function at the end of the file,
+ which has been fixed.
+
+ * The documentation set has been updated so that literal commands,
+ configuration variables and environment variables are consistently
+ typeset in fixed-width font and bold in manpages.
+
+ * "git svn propset" subcommand that was added in 2.3 days is
+ documented now.
+
+ * The documentation tries to consistently spell "GPG"; when
+ referring to the specific program name, "gpg" is used.
+
+ * "git reflog" stopped upon seeing an entry that denotes a branch
+ creation event (aka "unborn"), which made it appear as if the
+ reflog was truncated.
+
+ * The git-prompt scriptlet (in contrib/) was not friendly with those
+ who uses "set -u", which has been fixed.
+
+ * compat/regex code did not cleanly compile.
+
+ * A codepath that used alloca(3) to place an unbounded amount of data
+ on the stack has been updated to avoid doing so.
+
+ * "git update-index --add --chmod=+x file" may be usable as an escape
+ hatch, but not a friendly thing to force for people who do need to
+ use it regularly. "git add --chmod=+x file" can be used instead.
+
+ * Build improvements for gnome-keyring (in contrib/)
+
+ * "git status" used to say "working directory" when it meant "working
+ tree".
+
+ * Comments about misbehaving FreeBSD shells have been clarified with
+ the version number (9.x and before are broken, newer ones are OK).
+
+ * "git cherry-pick A" worked on an unborn branch, but "git
+ cherry-pick A..B" didn't.
+
+ * Fix an unintended regression in v2.9 that breaks "clone --depth"
+ that recurses down to submodules by forcing the submodules to also
+ be cloned shallowly, which many server instances that host upstream
+ of the submodules are not prepared for.
+
+ * Fix an unnecessary waste in the idiomatic use of ': ${VAR=default}'
+ to set the default value, without enclosing it in double quotes.
+
+ * Some platform-specific code had non-ANSI strict declarations of C
+ functions that do not take any parameters, which has been
+ corrected.
+
+ * The internal code used to show local timezone offset is not
+ prepared to handle timestamps beyond year 2100, and gave a
+ bogus offset value to the caller. Use a more benign looking
+   +0000 instead and let "git log" keep going in such a case, instead
+ of aborting.
+
+ * One among four invocations of readlink(1) in our test suite has
+ been rewritten so that the test can run on systems without the
+ command (others are in valgrind test framework and t9802).
+
+ * t/perf needs /usr/bin/time with GNU extension; the invocation of it
+ is updated to "gtime" on Darwin.
+
+ * A bug, which caused "git p4" while running under verbose mode to
+ report paths that are omitted due to branch prefix incorrectly, has
+ been fixed; the command said "Ignoring file outside of prefix" for
+ paths that are _inside_.
+
+ * The top level documentation "git help git" still pointed at the
+ documentation set hosted at now-defunct google-code repository.
+ Update it to point to https://git.github.io/htmldocs/git.html
+ instead.
+
+ * A helper function that takes the contents of a commit object and
+ finds its subject line did not ignore leading blank lines, as is
+ commonly done by other codepaths. Make it ignore leading blank
+ lines to match.
+
+ * For a long time, we carried an in-code comment that said our
+ colored output would work only when we use fprintf/fputs on
+   Windows, which has not been the case for the past few years.
+
+ * "gc.autoPackLimit" when set to 1 should not trigger a repacking
+ when there is only one pack, but the code counted poorly and did
+ so.
+
+ * Add a test to specify the desired behaviour that currently is not
+ available in "git rebase -Xsubtree=...".
+
+ * More mark-up updates to typeset strings that are expected to be
+   typed literally by the end user in fixed-width font.
+
+ * "git commit --amend --allow-empty-message -S" for a commit without
+ any message body could have misidentified where the header of the
+ commit object ends.
+
+ * "git rebase -i --autostash" did not restore the auto-stashed change
+ when the operation was aborted.
+
+ * Git does not know what the contents in the index should be for a
+ path added with "git add -N" yet, so "git grep --cached" should not
+ show hits (or show lack of hits, with -L) in such a path, but that
+ logic does not apply to "git grep", i.e. searching in the working
+ tree files. But we did so by mistake, which has been corrected.
+
+ * "git blame -M" missed a single line that was moved within the file.
+
+ * Fix recently introduced codepaths that are involved in parallel
+ submodule operations, which gave up on reading too early, and
+ could have wasted CPU while attempting to write under a corner
+ case condition.
+
+ * "git grep -i" has been taught to fold case in non-ascii locales
+ correctly.
+
+ * A test that unconditionally used "mktemp" learned that the command
+ is not necessarily available everywhere.
+
+ * There are certain house-keeping tasks that need to be performed at
+ the very beginning of any Git program, and programs that are not
+ built-in commands had to do them exactly the same way as "git"
+ potty does. It was easy to make mistakes in one-off standalone
+ programs (like test helpers). A common "main()" function that
+ calls cmd_main() of individual program has been introduced to
+ make it harder to make mistakes.
+ (merge de61ceb jk/common-main later to maint).
+
+ * The test framework learned a new helper test_match_signal to
+ check an exit code from getting killed by an expected signal.
+
+ * General code clean-up around a helper function to write a
+ single-liner to a file.
+ (merge 7eb6e10 jk/write-file later to maint).
+
+ * One part of "git am" had an oddball helper function that called
+ stuff from outside "his" as opposed to calling what we have "ours",
+ which was not gender-neutral and also inconsistent with the rest of
+   the system where outside stuff is usually called "theirs" in
+ contrast to "ours".
+
+ * "git blame file" allowed the lineage of lines in the uncommitted,
+ unadded contents of "file" to be inspected, but it refused when
+ "file" did not appear in the current commit. When "file" was
+ created by renaming an existing file (but the change has not been
+ committed), this restriction was unnecessarily tight.
+
+ * "git add -N dir/file && git write-tree" produced an incorrect tree
+   when there are other paths in the same directory that sort after
+ "file".
+
+ * "git fetch http://user:pass@host/repo..." scrubbed the userinfo
+ part, but "git push" didn't.
+
+ * "git merge" with renormalization did not work well with
+ merge-recursive, due to "safer crlf" conversion kicking in when it
+ shouldn't.
+ (merge 1335d76 jc/renormalize-merge-kill-safer-crlf later to maint).
+
+ * The use of strbuf in "git rm" to build filename to remove was a bit
+ suboptimal, which has been fixed.
+
+ * An age-old bug that caused "git diff --ignore-space-at-eol" to
+   misbehave has been fixed.
+
+ * "git notes merge" had a code to see if a path exists (and fails if
+ it does) and then open the path for writing (when it doesn't).
+ Replace it with open with O_EXCL.
+
+ * "git pack-objects" and "git index-pack" mostly operate with off_t
+ when talking about the offset of objects in a packfile, but there
+ were a handful of places that used "unsigned long" to hold that
+ value, leading to an unintended truncation.
+
+ * Recent update to "git daemon" tries to enable the socket-level
+ KEEPALIVE, but when it is spawned via inetd, the standard input
+ file descriptor may not necessarily be connected to a socket.
+ Suppress an ENOTSOCK error from setsockopt().
+
+ * Recent FreeBSD stopped making perl available at /usr/bin/perl;
+   switch the built-in default path to /usr/local/bin/perl on not
+ too ancient FreeBSD releases.
+
+ * "git commit --help" said "--no-verify" is only about skipping the
+ pre-commit hook, and failed to say that it also skipped the
+ commit-msg hook.
+
+ * "git merge" in Git v2.9 was taught to forbid merging an unrelated
+ lines of history by default, but that is exactly the kind of thing
+ the "--rejoin" mode of "git subtree" (in contrib/) wants to do.
+ "git subtree" has been taught to use the "--allow-unrelated-histories"
+ option to override the default.
+
+ * The build procedure for "git persistent-https" helper (in contrib/)
+ has been updated so that it can be built with more recent versions
+ of Go.
+
+ * There is an optimization used in "git diff $treeA $treeB" to borrow
+ an already checked-out copy in the working tree when it is known to
+ be the same as the blob being compared, expecting that open/mmap of
+ such a file is faster than reading it from the object store, which
+ involves inflating and applying delta. This however kicked in even
+ when the checked-out copy needs to go through the convert-to-git
+ conversion (including the clean filter), which defeats the whole
+ point of the optimization. The optimization has been disabled when
+ the conversion is necessary.
+
+ * "git -c grep.patternType=extended log --basic-regexp" misbehaved
+ because the internal API to access the grep machinery was not
+ designed well.
+
+ * Windows port was failing some tests in t4130, due to the lack of
+ inum in the returned values by its lstat(2) emulation.
+
+ * The reflog output format is documented better, and a new format
+ --date=unix to report the seconds-since-epoch (without timezone)
+ has been added.
+ (merge 442f6fd jk/reflog-date later to maint).
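+   For example:
+   $ git reflog --date=unix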
+
+ * "git difftool <paths>..." started in a subdirectory failed to
+ interpret the paths relative to that directory, which has been
+ fixed.
+
+ * The characters in the label shown for tags/refs for commits in
+ "gitweb" output are now properly escaped for proper HTML output.
+
+ * FreeBSD can lie when asked mtime of a directory, which made the
+   untracked cache code fall back to a slow-path, which in turn
+ caused tests in t7063 to fail because it wanted to verify the
+ behaviour of the fast-path.
+
+ * Squelch compiler warnings for nedmalloc (in compat/) library.
+
+ * A small memory leak in the command line parsing of "git blame"
+ has been plugged.
+
+ * The API documentation for hashmap was unclear if hashmap_entry
+ can be safely discarded without any other consideration. State
+ that it is safe to do so.
+
+ * Not-so-recent rewrite of "git am" that started making internal
+ calls into the commit machinery had an unintended regression, in
+ that no matter how many seconds it took to apply many patches, the
+   committer timestamps of the resulting commits were all
+ the same.
+
+ * "git push --force-with-lease" already had enough logic to allow
+ ensuring that such a push results in creation of a ref (i.e. the
+ receiving end did not have another push from sideways that would be
+ discarded by our force-pushing), but didn't expose this possibility
+ to the users. It does so now.
+ (merge 9eed4f3 jk/push-force-with-lease-creation later to maint).
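+   For example, to require that "topic" does not yet exist on the
+   remote (note the empty value after the colon):
+   $ git push --force-with-lease=refs/heads/topic: origin topic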
+
+ * The mechanism to limit the pack window memory size, when packing is
+ done using multiple threads (which is the default), is per-thread,
+ but this was not documented clearly.
+ (merge 954176c ms/document-pack-window-memory-is-per-thread later to maint).
+
+ * "import-tars" fast-import script (in contrib/) used to ignore a
+ hardlink target and replaced it with an empty file, which has been
+ corrected to record the same blob as the other file the hardlink is
+ shared with.
+ (merge 04e0869 js/import-tars-hardlinks later to maint).
+
+ * "git mv dir non-existing-dir/" did not work in some environments
+ the same way as existing mainstream platforms. The code now moves
+ "dir" to "non-existing-dir", without relying on rename("A", "B/")
+   stripping the trailing slash itself.
+ (merge 189d035 js/mv-dir-to-new-directory later to maint).
+
+ * The "t/" hierarchy is prone to get an unusual pathname; "make test"
+ has been taught to make sure they do not contain paths that cannot
+ be checked out on Windows (and the mechanism can be reusable to
+ catch pathnames that are not portable to other platforms as need
+ arises).
+ (merge c2cafd3 js/test-lint-pathname later to maint).
+
+ * When "git merge-recursive" works on history with many criss-cross
+ merges in "verbose" mode, the names the command assigns to the
+ virtual merge bases could have overwritten each other by unintended
+ reuse of the same piece of memory.
+ (merge 5447a76 rs/pull-signed-tag later to maint).
+
+ * "git checkout --detach <branch>" used to give the same advice
+ message as that is issued when "git checkout <tag>" (or anything
+ that is not a branch name) is given, but asking with "--detach" is
+ an explicit enough sign that the user knows what is going on. The
+ advice message has been squelched in this case.
+ (merge 779b88a sb/checkout-explit-detach-no-advice later to maint).
+
+ * "git difftool" by default ignores the error exit from the backend
+ commands it spawns, because often they signal that they found
+ differences by exiting with a non-zero status code just like "diff"
+ does; the exit status codes 126 and above however are special in
+ that they are used to signal that the command is not executable,
+ does not exist, or killed by a signal. "git difftool" has been
+ taught to notice these exit status codes.
+ (merge 45a4f5d jk/difftool-command-not-found later to maint).
+
+ * On Windows, help.browser configuration variable used to be ignored,
+ which has been corrected.
+ (merge 6db5967 js/no-html-bypass-on-windows later to maint).
+
+ * The "git -c var[=val] cmd" facility to append a configuration
+ variable definition at the end of the search order was described in
+   git(1) manual page, but not in git-config(1), which was a more likely
+   place for people to look when they ask "can I make a one-shot
+ override, and if so how?"
+ (merge ae1f709 dg/document-git-c-in-git-config-doc later to maint).
+
+ * The tempfile (hence its user lockfile) API lets the caller open
+ a file descriptor to a temporary file, write into it and then
+ finalize it by first closing the filehandle and then either
+ removing or renaming the temporary file. When the process spawns a
+ subprocess after obtaining the file descriptor, and if the
+ subprocess has not exited when the attempt to remove or rename is
+ made, the last step fails on Windows, because the subprocess has
+ the file descriptor still open. Open tempfile with O_CLOEXEC flag
+ to avoid this (on Windows, this is mapped to O_NOINHERIT).
+ (merge 05d1ed6 bw/mingw-avoid-inheriting-fd-to-lockfile later to maint).
+
+ * Other minor clean-ups and documentation updates
+ (merge 02a8cfa rs/merge-add-strategies-simplification later to maint).
+ (merge af4941d rs/merge-recursive-string-list-init later to maint).
+ (merge 1eb47f1 rs/use-strbuf-add-unique-abbrev later to maint).
+ (merge ddd0bfa jk/tighten-alloc later to maint).
+ (merge ecf30b2 rs/mailinfo-lib later to maint).
+ (merge 0eb75ce sg/reflog-past-root later to maint).
+ (merge 175d38c hv/doc-commit-reference-style later to maint).
* xdiff code we use to generate diffs is not prepared to handle
extremely large files. It uses "int" in many places, which can
overflow if we have a very large number of lines or even bytes in
- our input files, for example. Cap the input size to soemwhere
+ our input files, for example. Cap the input size to somewhere
around 1GB for now.
* Some protocols (like git-remote-ext) can execute arbitrary code
--- /dev/null
+Git v2.9.1 Release Notes
+========================
+
+Fixes since v2.9
+----------------
+
+ * When "git daemon" is run without --[init-]timeout specified, a
+ connection from a client that silently goes offline can hang around
+ for a long time, wasting resources. The socket-level KEEPALIVE has
+ been enabled to allow the OS to notice such failed connections.
+
+ * The commands in `git log` family take %C(auto) in a custom format
+   string. This unconditionally turned the color on, ignoring
+   --no-color or --color=auto when the output is not connected to
+   a tty; this was corrected to make the format truly behave as
+ "auto".
+
+ * "git rev-list --count" whose walk-length is limited with "-n"
+ option did not work well with the counting optimized to look at the
+ bitmap index.
+
+ * "git show -W" (extend hunks to cover the entire function, delimited
+ by lines that match the "funcname" pattern) used to show the entire
+ file when a change added an entire function at the end of the file,
+ which has been fixed.
+
+ * The documentation set has been updated so that literal commands,
+ configuration variables and environment variables are consistently
+ typeset in fixed-width font and bold in manpages.
+
+ * "git svn propset" subcommand that was added in 2.3 days is
+ documented now.
+
+ * The documentation tries to consistently spell "GPG"; when
+ referring to the specific program name, "gpg" is used.
+
+ * "git reflog" stopped upon seeing an entry that denotes a branch
+ creation event (aka "unborn"), which made it appear as if the
+ reflog was truncated.
+
+ * The git-prompt scriptlet (in contrib/) was not friendly with those
+ who uses "set -u", which has been fixed.
+
+ * A codepath that used alloca(3) to place an unbounded amount of data
+ on the stack has been updated to avoid doing so.
+
+ * "git update-index --add --chmod=+x file" may be usable as an escape
+ hatch, but not a friendly thing to force for people who do need to
+ use it regularly. "git add --chmod=+x file" can be used instead.
+
+ * Build improvements for gnome-keyring (in contrib/)
+
+ * "git status" used to say "working directory" when it meant "working
+ tree".
+
+ * Comments about misbehaving FreeBSD shells have been clarified with
+ the version number (9.x and before are broken, newer ones are OK).
+
+ * "git cherry-pick A" worked on an unborn branch, but "git
+ cherry-pick A..B" didn't.
+
+ * "git add -i/-p" learned to honor diff.compactionHeuristic
+ experimental knob, so that the user can work on the same hunk split
+ as "git diff" output.
+
+ * "log --graph --format=" learned that "%>|(N)" specifies the width
+ relative to the terminal's left edge, not relative to the area to
+ draw text that is to the right of the ancestry-graph section. It
+ also now accepts negative N that means the column limit is relative
+ to the right border.
+
+ * The ownership rule for the piece of memory that holds references to
+ be fetched in "git fetch" was screwy, which has been cleaned up.
+
+ * "git bisect" makes an internal call to "git diff-tree" when
+ bisection finds the culprit, but this call did not initialize the
+ data structure to pass to the diff-tree API correctly.
+
+ * Formats of the various data in which we use GPG signatures (and how
+   to validate them) have been documented.
+
+ * Fix an unintended regression in v2.9 that breaks "clone --depth"
+ that recurses down to submodules by forcing the submodules to also
+ be cloned shallowly, which many server instances that host upstream
+ of the submodules are not prepared for.
+
+ * Fix an unnecessary waste in the idiomatic use of ': ${VAR=default}'
+ to set the default value, without enclosing it in double quotes.
+
+ * Some platform-specific code had non-ANSI strict declarations of C
+ functions that do not take any parameters, which has been
+ corrected.
+
+ * The internal code used to show local timezone offset is not
+ prepared to handle timestamps beyond year 2100, and gave a
+ bogus offset value to the caller. Use a more benign looking
+   +0000 instead and let "git log" keep going in such a case, instead
+ of aborting.
+
+ * One among four invocations of readlink(1) in our test suite has
+ been rewritten so that the test can run on systems without the
+ command (others are in valgrind test framework and t9802).
+
+ * t/perf needs /usr/bin/time with GNU extension; the invocation of it
+ is updated to "gtime" on Darwin.
+
+ * A bug, which caused "git p4" while running under verbose mode to
+ report paths that are omitted due to branch prefix incorrectly, has
+ been fixed; the command said "Ignoring file outside of prefix" for
+ paths that are _inside_.
+
+ * The top level documentation "git help git" still pointed at the
+ documentation set hosted at now-defunct google-code repository.
+ Update it to point to https://git.github.io/htmldocs/git.html
+ instead.
+
+Also contains minor documentation updates and code clean-ups.
--- /dev/null
+Git v2.9.2 Release Notes
+========================
+
+Fixes since v2.9.1
+------------------
+
+ * A fix merged to v2.9.1 had a few tests that are not meant to be
+ run on platforms without 64-bit long, which caused unnecessary
+ test failures on them because we didn't detect the platform and
+ skip them. These tests are now skipped on platforms that they
+ are not applicable to.
+
+No other change is included in this update.
--- /dev/null
+Git v2.9.3 Release Notes
+========================
+
+Fixes since v2.9.2
+------------------
+
+ * A helper function that takes the contents of a commit object and
+ finds its subject line did not ignore leading blank lines, as is
+ commonly done by other codepaths. Make it ignore leading blank
+ lines to match.
+
+ * Git does not know what the contents in the index should be for a
+ path added with "git add -N" yet, so "git grep --cached" should not
+ show hits (or show lack of hits, with -L) in such a path, but that
+ logic does not apply to "git grep", i.e. searching in the working
+ tree files. But we did so by mistake, which has been corrected.
+
+ * "git rebase -i --autostash" did not restore the auto-stashed change
+ when the operation was aborted.
+
+ * "git commit --amend --allow-empty-message -S" for a commit without
+ any message body could have misidentified where the header of the
+ commit object ends.
+
+ * More mark-up updates to typeset strings that are expected to be
+   typed literally by the end user in fixed-width font.
+
+ * For a long time, we carried an in-code comment that said our
+ colored output would work only when we use fprintf/fputs on
+   Windows, which has not been the case for the past few years.
+
+ * "gc.autoPackLimit" when set to 1 should not trigger a repacking
+ when there is only one pack, but the code counted poorly and did
+ so.
+
+ * One part of "git am" had an oddball helper function that called
+ stuff from outside "his" as opposed to calling what we have "ours",
+ which was not gender-neutral and also inconsistent with the rest of
+   the system where outside stuff is usually called "theirs" in
+ contrast to "ours".
+
+ * The test framework learned a new helper test_match_signal to
+ check an exit code from getting killed by an expected signal.
+
+ * "git blame -M" missed a single line that was moved within the file.
+
+ * Fix recently introduced codepaths that are involved in parallel
+ submodule operations, which gave up on reading too early, and
+ could have wasted CPU while attempting to write under a corner
+ case condition.
+
+ * "git grep -i" has been taught to fold case in non-ascii locales
+ correctly.
+
+ * A test that unconditionally used "mktemp" learned that the command
+ is not necessarily available everywhere.
+
+ * "git blame file" allowed the lineage of lines in the uncommitted,
+ unadded contents of "file" to be inspected, but it refused when
+ "file" did not appear in the current commit. When "file" was
+ created by renaming an existing file (but the change has not been
+ committed), this restriction was unnecessarily tight.
+
+ * "git add -N dir/file && git write-tree" produced an incorrect tree
+   when there are other paths in the same directory that sort after
+ "file".
+
+ * "git fetch http://user:pass@host/repo..." scrubbed the userinfo
+ part, but "git push" didn't.
+
+ * An age-old bug that caused "git diff --ignore-space-at-eol" to
+   misbehave has been fixed.
+
+ * "git notes merge" had a code to see if a path exists (and fails if
+ it does) and then open the path for writing (when it doesn't).
+ Replace it with open with O_EXCL.
+
+ * "git pack-objects" and "git index-pack" mostly operate with off_t
+ when talking about the offset of objects in a packfile, but there
+ were a handful of places that used "unsigned long" to hold that
+ value, leading to an unintended truncation.
+
+ * Recent update to "git daemon" tries to enable the socket-level
+ KEEPALIVE, but when it is spawned via inetd, the standard input
+ file descriptor may not necessarily be connected to a socket.
+ Suppress an ENOTSOCK error from setsockopt().
+
+ * Recent FreeBSD stopped making perl available at /usr/bin/perl;
+   switch the built-in default path to /usr/local/bin/perl on not
+ too ancient FreeBSD releases.
+
+ * "git status" learned to suggest "merge --abort" during a conflicted
+ merge, just like it already suggests "rebase --abort" during a
+ conflicted rebase.
+
+ * The .c/.h sources are marked as such in our .gitattributes file so
+ that "git diff -W" and friends would work better.
+
+ * Existing autoconf generated test for the need to link with pthread
+ library did not check all the functions from pthread libraries;
+ recent FreeBSD has some functions in libc but not others, and we
+ mistakenly thought linking with libc is enough when it is not.
+
+ * Allow http daemon tests in Travis CI tests.
+
+ * Users of the parse_options_concat() API function need to allocate
+ extra slots in advance and fill them with OPT_END() when they want
+ to decide the set of supported options dynamically, which makes the
+ code error-prone and hard to read. This has been corrected by tweaking
+ the API to allocate and return a new copy of "struct option" array.
+
+ * The use of strbuf in "git rm" to build filename to remove was a bit
+ suboptimal, which has been fixed.
+
+ * "git commit --help" said "--no-verify" is only about skipping the
+ pre-commit hook, and failed to say that it also skipped the
+ commit-msg hook.
+
+ * "git merge" in Git v2.9 was taught to forbid merging an unrelated
+ lines of history by default, but that is exactly the kind of thing
+ the "--rejoin" mode of "git subtree" (in contrib/) wants to do.
+ "git subtree" has been taught to use the "--allow-unrelated-histories"
+ option to override the default.
+
+ * The build procedure for "git persistent-https" helper (in contrib/)
+ has been updated so that it can be built with more recent versions
+ of Go.
+
+ * There is an optimization used in "git diff $treeA $treeB" to borrow
+ an already checked-out copy in the working tree when it is known to
+ be the same as the blob being compared, expecting that open/mmap of
+ such a file is faster than reading it from the object store, which
+ involves inflating and applying delta. This however kicked in even
+ when the checked-out copy needs to go through the convert-to-git
+ conversion (including the clean filter), which defeats the whole
+ point of the optimization. The optimization has been disabled when
+ the conversion is necessary.
+
+ * "git -c grep.patternType=extended log --basic-regexp" misbehaved
+ because the internal API to access the grep machinery was not
+ designed well.
+
+ * Windows port was failing some tests in t4130, due to the lack of
+ inum in the returned values by its lstat(2) emulation.
+
+ * The characters in the label shown for tags/refs for commits in
+ "gitweb" output are now properly escaped for proper HTML output.
+
+ * FreeBSD can lie when asked mtime of a directory, which made the
+   untracked cache code fall back to a slow-path, which in turn
+ caused tests in t7063 to fail because it wanted to verify the
+ behaviour of the fast-path.
+
+ * Squelch compiler warnings for nedmalloc (in compat/) library.
+
+ * The API documentation for hashmap was unclear if hashmap_entry
+ can be safely discarded without any other consideration. State
+ that it is safe to do so.
+
+ * Not-so-recent rewrite of "git am" that started making internal
+ calls into the commit machinery had an unintended regression, in
+ that no matter how many seconds it took to apply many patches, the
+   committer timestamps of the resulting commits were all
+ the same.
+
+ * "git difftool <paths>..." started in a subdirectory failed to
+ interpret the paths relative to that directory, which has been
+ fixed.
+
+Also contains minor documentation updates and code clean-ups.
without external resources. Instead of giving a URL to a mailing list
archive, summarize the relevant points of the discussion.
+If you want to reference a previous commit in the history of a stable
+branch, use the format "abbreviated sha1 (subject, date)". So for example
+like this: "Commit f86a374 (pack-bitmap.c: fix a memleak, 2015-03-30)
+noticed [...]".
+
(3) Generate your patch using Git tools out of your commits.
false;; Boolean false can be spelled as `no`, `off`,
`false`, or `0`.
+
-When converting value to the canonical form using '--bool' type
+When converting value to the canonical form using `--bool` type
specifier; 'git config' will ensure that the output is "true" or
"false" (spelled in lowercase).
1024", "by 1024x1024", etc.
color::
- The value for a variables that takes a color is a list of
- colors (at most two) and attributes (at most one), separated
- by spaces. The colors accepted are `normal`, `black`,
- `red`, `green`, `yellow`, `blue`, `magenta`, `cyan` and
- `white`; the attributes are `bold`, `dim`, `ul`, `blink` and
- `reverse`. The first color given is the foreground; the
- second is the background. The position of the attribute, if
- any, doesn't matter. Attributes may be turned off specifically
- by prefixing them with `no` (e.g., `noreverse`, `noul`, etc).
-+
-Colors (foreground and background) may also be given as numbers between
-0 and 255; these use ANSI 256-color mode (but note that not all
-terminals may support this). If your terminal supports it, you may also
-specify 24-bit RGB values as hex, like `#ff0ab3`.
-+
-The attributes are meant to be reset at the beginning of each item
-in the colored output, so setting color.decorate.branch to `black`
-will paint that branch name in a plain `black`, even if the previous
-thing on the same output line (e.g. opening parenthesis before the
-list of branch names in `log --decorate` output) is set to be
-painted with `bold` or some other attribute.
+ The value for a variable that takes a color is a list of
+ colors (at most two, one for foreground and one for background)
+ and attributes (as many as you want), separated by spaces.
++
+The basic colors accepted are `normal`, `black`, `red`, `green`, `yellow`,
+`blue`, `magenta`, `cyan` and `white`. The first color given is the
+foreground; the second is the background.
++
+Colors may also be given as numbers between 0 and 255; these use ANSI
+256-color mode (but note that not all terminals may support this). If
+your terminal supports it, you may also specify 24-bit RGB values as
+hex, like `#ff0ab3`.
++
+The accepted attributes are `bold`, `dim`, `ul`, `blink`, `reverse`,
+`italic`, and `strike` (for crossed-out or "strikethrough" letters).
+The position of any attributes with respect to the colors
+(before, after, or in between) doesn't matter. Specific attributes may
+be turned off by prefixing them with `no` or `no-` (e.g., `noreverse`,
+`no-ul`, etc).
++
+For git's pre-defined color slots, the attributes are meant to be reset
+at the beginning of each item in the colored output. So setting
+`color.decorate.branch` to `black` will paint that branch name in a
+plain `black`, even if the previous thing on the same output line (e.g.
+opening parenthesis before the list of branch names in `log --decorate`
+output) is set to be painted with `bold` or some other attribute.
+However, custom log formats may do more complicated and layered
+coloring, and the negated forms may be useful there.
pathname::
A variable that takes a pathname value can be given a
mechanism.
core.autocrlf::
- Setting this variable to "true" is almost the same as setting
- the `text` attribute to "auto" on all files except that text
- files are not guaranteed to be normalized: files that contain
- `CRLF` in the repository will not be touched. Use this
- setting if you want to have `CRLF` line endings in your
- working directory even though the repository does not have
- normalized line endings. This variable can be set to 'input',
+ Setting this variable to "true" is the same as setting
+ the `text` attribute to "auto" on all files and core.eol to "crlf".
+ Set to true if you want to have `CRLF` line endings in your
+ working directory and the repository has LF line endings.
+ This variable can be set to 'input',
in which case no output conversion is performed.
core.symlinks::
may be set multiple times and is matched in the given order;
the first match wins.
+
-Can be overridden by the 'GIT_PROXY_COMMAND' environment variable
+Can be overridden by the `GIT_PROXY_COMMAND` environment variable
(which always applies universally, without the special "for"
handling).
+
This is useful for excluding servers inside a firewall from
proxy use, while defaulting to a common proxy for external domains.
+core.sshCommand::
+ If this variable is set, `git fetch` and `git push` will
+ use the specified command instead of `ssh` when they need to
+ connect to a remote system. The command is in the same form as
+ the `GIT_SSH_COMMAND` environment variable and is overridden
+ when the environment variable is set.
+
core.ignoreStat::
If true, Git will avoid using lstat() calls to detect if files have
changed by setting the "assume-unchanged" bit for those tracked files
core.worktree::
Set the path to the root of the working tree.
- If GIT_COMMON_DIR environment variable is set, core.worktree
+ If `GIT_COMMON_DIR` environment variable is set, core.worktree
is ignored and not used for determining the root of working tree.
- This can be overridden by the GIT_WORK_TREE environment
- variable and the '--work-tree' command-line option.
+ This can be overridden by the `GIT_WORK_TREE` environment
+ variable and the `--work-tree` command-line option.
The value can be an absolute path or relative to the path to
the .git directory, which is either specified by --git-dir
or GIT_DIR, or automatically discovered.
-1 is the zlib default. 0 means no compression,
and 1..9 are various speed/size tradeoffs, 9 being slowest.
If set, this provides a default to other compression variables,
- such as 'core.looseCompression' and 'pack.compression'.
+ such as `core.looseCompression` and `pack.compression`.
core.looseCompression::
An integer -1..9, indicating the compression level for objects that
core.askPass::
Some commands (e.g. svn and http interfaces) that interactively
ask for a password can be told to use an external program given
- via the value of this variable. Can be overridden by the 'GIT_ASKPASS'
+ via the value of this variable. Can be overridden by the `GIT_ASKPASS`
environment variable. If not set, fall back to the value of the
- 'SSH_ASKPASS' environment variable or, failing that, a simple password
+ `SSH_ASKPASS` environment variable or, failing that, a simple password
prompt. The external program shall be given a suitable prompt as
command-line argument and write the password on its STDOUT.
notes should be printed.
+
This setting defaults to "refs/notes/commits", and it can be overridden by
-the 'GIT_NOTES_REF' environment variable. See linkgit:git-notes[1].
+the `GIT_NOTES_REF` environment variable. See linkgit:git-notes[1].
core.sparseCheckout::
Enable "sparse checkout" feature. See section "Sparse checkout" in
add.ignoreErrors::
add.ignore-errors (deprecated)::
Tells 'git add' to continue adding files when some files cannot be
- added due to indexing errors. Equivalent to the '--ignore-errors'
+ added due to indexing errors. Equivalent to the `--ignore-errors`
option of linkgit:git-add[1]. `add.ignore-errors` is deprecated,
as it does not follow the usual naming convention for configuration
variables.
"gitk --all --not ORIG_HEAD". Note that shell commands will be
executed from the top-level directory of a repository, which may
not necessarily be the current directory.
-'GIT_PREFIX' is set as returned by running 'git rev-parse --show-prefix'
+`GIT_PREFIX` is set as returned by running 'git rev-parse --show-prefix'
from the original current directory. See linkgit:git-rev-parse[1].
am.keepcr::
If true, git-am will call git-mailsplit for patches in mbox format
- with parameter '--keep-cr'. In this case git-mailsplit will
+ with parameter `--keep-cr`. In this case git-mailsplit will
not remove `\r` from lines ending with `\r\n`. Can be overridden
- by giving '--no-keep-cr' from the command line.
+ by giving `--no-keep-cr` from the command line.
See linkgit:git-am[1], linkgit:git-mailsplit[1].
am.threeWay::
apply.ignoreWhitespace::
When set to 'change', tells 'git apply' to ignore changes in
- whitespace, in the same way as the '--ignore-space-change'
+ whitespace, in the same way as the `--ignore-space-change`
option.
When set to one of: no, none, never, false tells 'git apply' to
respect all whitespace differences.
apply.whitespace::
Tells 'git apply' how to handle whitespaces, in the same way
- as the '--whitespace' option. See linkgit:git-apply[1].
+ as the `--whitespace` option. See linkgit:git-apply[1].
branch.autoSetupMerge::
Tells 'git branch' and 'git checkout' to set up new branches
browser.<tool>.path::
Override the path for the given tool that may be used to
- browse HTML help (see '-w' option in linkgit:git-help[1]) or a
+ browse HTML help (see `-w` option in linkgit:git-help[1]) or a
working repository in gitweb (see linkgit:git-instaweb[1]).
clean.requireForce::
difftool.prompt::
Prompt before each invocation of the diff tool.
+fastimport.unpackLimit::
+ If the number of objects imported by linkgit:git-fast-import[1]
+ is below this limit, then the objects will be unpacked into
+ loose object files. However if the number of imported objects
+ equals or exceeds this limit then the pack will be stored as a
+ pack. Storing the pack from a fast-import can make the import
+ operation complete faster, especially on slow filesystems. If
+ not set, the value of `transfer.unpackLimit` is used instead.
+
fetch.recurseSubmodules::
This option can be either set to a boolean value or to 'on-demand'.
Setting it to a boolean changes the behavior of fetch and pull to
If true, fetch will automatically behave as if the `--prune`
option was given on the command line. See also `remote.<name>.prune`.
+fetch.output::
+ Control how ref update status is printed. Valid values are
+ `full` and `compact`. Default value is `full`. See section
+ OUTPUT in linkgit:git-fetch[1] for detail.
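++
+For example, one way to switch the current repository to the compact
+format is:
++
+------------
+$ git config fetch.output compact
+------------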
+
format.attach::
Enable multipart/mixed attachments as the default for
'format-patch'. The value can also be a double quoted string
value as the boundary. See the --attach option in
linkgit:git-format-patch[1].
+format.from::
+ Provides the default value for the `--from` option to format-patch.
+ Accepts a boolean value, or a name and email address. If false,
+ format-patch defaults to `--no-from`, using commit authors directly in
+ the "From:" field of patch mails. If true, format-patch defaults to
+ `--from`, using your committer identity in the "From:" field of patch
+ mails and including a "From:" field in the body of the patch mail if
+ different. If set to a non-boolean value, format-patch uses that
+ value instead of your committer identity. Defaults to false.
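++
+For example, a configuration file entry along these lines (the name and
+address are placeholders) makes format-patch always use a fixed "From:"
+identity:
++
+------------
+[format]
+	from = "Jane Doe <jane@example.com>"
+------------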
+
format.numbered::
A boolean which can enable or disable sequence numbers in patch
subjects. It defaults to "auto" which enables it only if there
gitcvs.usecrlfattr::
If true, the server will look up the end-of-line conversion
- attributes for files to determine the '-k' modes to use. If
+ attributes for files to determine the `-k` modes to use. If
the attributes force Git to treat a file as text,
- the '-k' mode will be left blank so CVS clients will
+ the `-k` mode will be left blank so CVS clients will
treat it as text. If they suppress text conversion, the file
will be set with '-kb' mode, which suppresses any newline munging
the client might otherwise do. If the attributes do not allow
- the file type to be determined, then 'gitcvs.allBinary' is
+ the file type to be determined, then `gitcvs.allBinary` is
used. See linkgit:gitattributes[5].
gitcvs.allBinary::
- This is used if 'gitcvs.usecrlfattr' does not resolve
+ This is used if `gitcvs.usecrlfattr` does not resolve
the correct '-kb' mode to use. If true, all
unresolved files are sent to the client in
mode '-kb'. This causes the client to treat them
as binary files, which suppresses any newline munging it
otherwise might do. Alternatively, if it is set to "guess",
then the contents of the file are examined to decide if
- it is binary, similar to 'core.autocrlf'.
+ it is binary, similar to `core.autocrlf`.
gitcvs.dbName::
Database used by git-cvsserver to cache revision information
See linkgit:git-cvsserver[1].
gitcvs.dbUser, gitcvs.dbPass::
- Database user and password. Only useful if setting 'gitcvs.dbDriver',
+ Database user and password. Only useful if setting `gitcvs.dbDriver`,
since SQLite has no concept of database users and/or passwords.
'gitcvs.dbUser' supports variable substitution (see
linkgit:git-cvsserver[1] for details).
linkgit:git-cvsserver[1] for details). Any non-alphabetic
characters will be replaced with underscores.
-All gitcvs variables except for 'gitcvs.usecrlfattr' and
-'gitcvs.allBinary' can also be specified as
+All gitcvs variables except for `gitcvs.usecrlfattr` and
+`gitcvs.allBinary` can also be specified as
'gitcvs.<access_method>.<varname>' (where 'access_method'
is one of "ext" and "pserver") to make them apply only for the given
access method.
See linkgit:gitweb.conf[5] for description.
grep.lineNumber::
- If set to true, enable '-n' option by default.
+ If set to true, enable `-n` option by default.
grep.patternType::
Set the default matching behavior. Using a value of 'basic', 'extended',
- 'fixed', or 'perl' will enable the '--basic-regexp', '--extended-regexp',
- '--fixed-strings', or '--perl-regexp' option accordingly, while the
+ 'fixed', or 'perl' will enable the `--basic-regexp`, `--extended-regexp`,
+ `--fixed-strings`, or `--perl-regexp` option accordingly, while the
value 'default' will return to the default matching behavior.
grep.extendedRegexp::
- If set to true, enable '--extended-regexp' option by default. This
- option is ignored when the 'grep.patternType' option is set to a value
+ If set to true, enable `--extended-regexp` option by default. This
+ option is ignored when the `grep.patternType` option is set to a value
other than 'default'.
grep.threads::
of the linkgit:git-gui[1] `Tools` menu is invoked. This option is
mandatory for every tool. The command is executed from the root of
the working directory, and in the environment it receives the name of
- the tool as 'GIT_GUITOOL', the name of the currently selected file as
+ the tool as `GIT_GUITOOL`, the name of the currently selected file as
'FILENAME', and the name of the current branch as 'CUR_BRANCH' (if
the head is detached, 'CUR_BRANCH' is empty).
guitool.<name>.argPrompt::
Request a string argument from the user, and pass it to the tool
- through the 'ARGS' environment variable. Since requesting an
+ through the `ARGS` environment variable. Since requesting an
argument implies confirmation, the 'confirm' option has no effect
if this is enabled. If the option is set to 'true', 'yes', or '1',
the dialog uses a built-in generic prompt; otherwise the exact
guitool.<name>.revPrompt::
Request a single valid revision from the user, and set the
- 'REVISION' environment variable. In other aspects this option
+ `REVISION` environment variable. In other aspects this option
is similar to 'argPrompt', and can be used together with it.
guitool.<name>.revUnmerged::
only takes effect if the configured proxy string contains a user name part
(i.e. is of the form 'user@host' or 'user@host:port'). This can be
overridden on a per-remote basis; see `remote.<name>.proxyAuthMethod`.
- Both can be overridden by the 'GIT_HTTP_PROXY_AUTHMETHOD' environment
+ Both can be overridden by the `GIT_HTTP_PROXY_AUTHMETHOD` environment
variable. Possible values are:
+
--
- tlsv1.2
+
-Can be overridden by the 'GIT_SSL_VERSION' environment variable.
+Can be overridden by the `GIT_SSL_VERSION` environment variable.
To force git to use libcurl's default ssl version and ignore any
-explicit http.sslversion option, set 'GIT_SSL_VERSION' to the
+explicit http.sslversion option, set `GIT_SSL_VERSION` to the
empty string.
http.sslCipherList::
option; see the libcurl documentation for more details on the format
of this list.
+
-Can be overridden by the 'GIT_SSL_CIPHER_LIST' environment variable.
+Can be overridden by the `GIT_SSL_CIPHER_LIST` environment variable.
To force git to use libcurl's default cipher list and ignore any
-explicit http.sslCipherList option, set 'GIT_SSL_CIPHER_LIST' to the
+explicit http.sslCipherList option, set `GIT_SSL_CIPHER_LIST` to the
empty string.
http.sslVerify::
Whether to verify the SSL certificate when fetching or pushing
- over HTTPS. Can be overridden by the 'GIT_SSL_NO_VERIFY' environment
+ over HTTPS. Can be overridden by the `GIT_SSL_NO_VERIFY` environment
variable.
http.sslCert::
File containing the SSL certificate when fetching or pushing
- over HTTPS. Can be overridden by the 'GIT_SSL_CERT' environment
+ over HTTPS. Can be overridden by the `GIT_SSL_CERT` environment
variable.
http.sslKey::
File containing the SSL private key when fetching or pushing
- over HTTPS. Can be overridden by the 'GIT_SSL_KEY' environment
+ over HTTPS. Can be overridden by the `GIT_SSL_KEY` environment
variable.
http.sslCertPasswordProtected::
Enable Git's password prompt for the SSL certificate. Otherwise
OpenSSL will prompt the user, possibly many times, if the
certificate or private key is encrypted. Can be overridden by the
- 'GIT_SSL_CERT_PASSWORD_PROTECTED' environment variable.
+ `GIT_SSL_CERT_PASSWORD_PROTECTED` environment variable.
http.sslCAInfo::
File containing the certificates to verify the peer with when
fetching or pushing over HTTPS. Can be overridden by the
- 'GIT_SSL_CAINFO' environment variable.
+ `GIT_SSL_CAINFO` environment variable.
http.sslCAPath::
Path containing files with the CA certificates to verify the peer
with when fetching or pushing over HTTPS. Can be overridden
- by the 'GIT_SSL_CAPATH' environment variable.
+ by the `GIT_SSL_CAPATH` environment variable.
http.pinnedpubkey::
Public key of the https service. It may either be the filename of
http.maxRequests::
How many HTTP requests to launch in parallel. Can be overridden
- by the 'GIT_HTTP_MAX_REQUESTS' environment variable. Default is 5.
+ by the `GIT_HTTP_MAX_REQUESTS` environment variable. Default is 5.
http.minSessions::
The number of curl sessions (counted across slots) to be kept across
http.lowSpeedLimit, http.lowSpeedTime::
If the HTTP transfer speed is less than 'http.lowSpeedLimit'
for longer than 'http.lowSpeedTime' seconds, the transfer is aborted.
- Can be overridden by the 'GIT_HTTP_LOW_SPEED_LIMIT' and
- 'GIT_HTTP_LOW_SPEED_TIME' environment variables.
+ Can be overridden by the `GIT_HTTP_LOW_SPEED_LIMIT` and
+ `GIT_HTTP_LOW_SPEED_TIME` environment variables.
http.noEPSV::
A boolean which disables the use of the EPSV ftp command by curl.
This can be helpful with some "poor" ftp servers which don't
- support EPSV mode. Can be overridden by the 'GIT_CURL_FTP_NO_EPSV'
+ support EPSV mode. Can be overridden by the `GIT_CURL_FTP_NO_EPSV`
environment variable. Default is false (curl will use EPSV).
http.userAgent::
such as Mozilla/4.0. This may be necessary, for instance, if
connecting through a firewall that restricts HTTP connections to a set
of common USER_AGENT strings (but not including those like git/1.7.1).
- Can be overridden by the 'GIT_HTTP_USER_AGENT' environment variable.
+ Can be overridden by the `GIT_HTTP_USER_AGENT` environment variable.
http.<url>.*::
Any of the http.* options above can be applied selectively to some URLs.
specified, the full ref name (including prefix) will be printed.
If 'auto' is specified, then if the output is going to a terminal,
the ref names are shown as if 'short' were given, otherwise no ref
- names are shown. This is the same as the '--decorate' option
+ names are shown. This is the same as the `--decorate` option
of `git log`.
log.follow::
--
push.followTags::
- If set to true enable '--follow-tags' option by default. You
+ If set to true enable `--follow-tags` option by default. You
may override this configuration at time of push by specifying
- '--no-follow-tags'.
+ `--no-follow-tags`.
push.gpgSign::
May be set to a boolean value, or the string 'if-asked'. A true
- value causes all pushes to be GPG signed, as if '--signed' is
+ value causes all pushes to be GPG signed, as if `--signed` is
passed to linkgit:git-push[1]. The string 'if-asked' causes
pushes to be signed if the server supports it, as if
- '--signed=if-asked' is passed to 'git push'. A false value may
+ `--signed=if-asked` is passed to 'git push'. A false value may
override a value from a lower-priority config file. An explicit
command-line flag always overrides this config option.
rebase. False by default.
rebase.autoSquash::
- If set to true enable '--autosquash' option by default.
+ If set to true enable `--autosquash` option by default.
rebase.autoStash::
When set to true, automatically create a temporary stash
receive.advertiseAtomic::
By default, git-receive-pack will advertise the atomic push
- capability to its clients. If you don't want to this capability
- to be advertised, set this variable to false.
+ capability to its clients. If you don't want to advertise this
+ capability, set this variable to false.
+
+receive.advertisePushOptions::
+ By default, git-receive-pack will advertise the push options
+ capability to its clients. If you don't want to advertise this
+ capability, set this variable to false.
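++
+For example, one way to stop advertising push options for a receiving
+repository is:
++
+------------
+$ git config receive.advertisePushOptions false
+------------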
receive.autogc::
By default, git-receive-pack will run "git-gc --auto" after
can be safely ignored such as invalid committer email addresses.
Note: corrupt objects cannot be skipped with this setting.
+receive.keepAlive::
+ After receiving the pack from the client, `receive-pack` may
+ produce no output (if `--quiet` was specified) while processing
+ the pack, causing some networks to drop the TCP connection.
+ With this option set, if `receive-pack` does not transmit
+ any data in this phase for `receive.keepAlive` seconds, it will
+ send a short keepalive packet. The default is 5 seconds; set
+ to 0 to disable keepalives entirely.
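++
+For example, to send a keepalive packet after 10 seconds of silence
+(10 is only an illustrative value):
++
+------------
+$ git config receive.keepAlive 10
+------------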
+
receive.unpackLimit::
If the number of objects received in a push is below this
limit then the objects will be unpacked into loose object
A configuration identity. When given, causes values in the
'sendemail.<identity>' subsection to take precedence over
values in the 'sendemail' section. The default identity is
- the value of 'sendemail.identity'.
+ the value of `sendemail.identity`.
sendemail.smtpEncryption::
See linkgit:git-send-email[1] for description. Note that this
Identity-specific versions of the 'sendemail.*' parameters
found below, taking precedence over those when this
identity is selected, through command-line or
- 'sendemail.identity'.
+ `sendemail.identity`.
sendemail.aliasesFile::
sendemail.aliasFileType::
See linkgit:git-send-email[1] for description.
sendemail.signedoffcc (deprecated)::
- Deprecated alias for 'sendemail.signedoffbycc'.
+ Deprecated alias for `sendemail.signedoffbycc`.
showbranch.default::
The default set of branches for linkgit:git-show-branch[1].
`uploadpack.keepAlive` seconds. Setting this option to 0
disables keepalive packets entirely. The default is 5 seconds.
+uploadpack.packObjectsHook::
+ If this option is set, when `upload-pack` would run
+ `git pack-objects` to create a packfile for a client, it will
+ run this shell command instead. The `pack-objects` command and
+ arguments it _would_ have run (including the `git pack-objects`
+ at the beginning) are appended to the shell command. The stdin
+ and stdout of the hook are treated as if `pack-objects` itself
+ was run. I.e., `upload-pack` will feed input intended for
+ `pack-objects` to the hook, and expects a completed packfile on
+ stdout.
++
+Note that this configuration variable is ignored if it is seen in the
+repository-level config (this is a safety measure against fetching from
+untrusted repositories).
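++
+A minimal sketch, assuming a wrapper script of your own at the
+hypothetical path `/usr/local/bin/pack-objects-wrapper`, placed in the
+per-user configuration of the serving account rather than in the
+repository-level config (which, as noted above, is ignored):
++
+------------
+[uploadpack]
+	packObjectsHook = /usr/local/bin/pack-objects-wrapper
+------------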
+
url.<base>.insteadOf::
Any URL that starts with this value will be rewritten to
start, instead, with <base>. In cases where some site serves a
user.email::
Your email address to be recorded in any newly created commits.
- Can be overridden by the 'GIT_AUTHOR_EMAIL', 'GIT_COMMITTER_EMAIL', and
- 'EMAIL' environment variables. See linkgit:git-commit-tree[1].
+ Can be overridden by the `GIT_AUTHOR_EMAIL`, `GIT_COMMITTER_EMAIL`, and
+ `EMAIL` environment variables. See linkgit:git-commit-tree[1].
user.name::
Your full name to be recorded in any newly created commits.
- Can be overridden by the 'GIT_AUTHOR_NAME' and 'GIT_COMMITTER_NAME'
+ Can be overridden by the `GIT_AUTHOR_NAME` and `GIT_COMMITTER_NAME`
environment variables. See linkgit:git-commit-tree[1].
user.useConfigOnly::
- Instruct Git to avoid trying to guess defaults for 'user.email'
- and 'user.name', and instead retrieve the values only from the
+ Instruct Git to avoid trying to guess defaults for `user.email`
+ and `user.name`, and instead retrieve the values only from the
configuration. For example, if you have multiple email addresses
and would like to use a different one for each repository, then
with this configuration option set to `true` in the global config
DATE FORMATS
------------
-The GIT_AUTHOR_DATE, GIT_COMMITTER_DATE environment variables
+The `GIT_AUTHOR_DATE`, `GIT_COMMITTER_DATE` environment variables
ifdef::git-commit[]
and the `--date` option
endif::git-commit[]
commands such as 'git diff-files'. 'git checkout' also honors
this setting when reporting uncommitted changes. Setting it to
'all' disables the submodule summary normally shown by 'git commit'
- and 'git status' when 'status.submoduleSummary' is set unless it is
+ and 'git status' when `status.submoduleSummary` is set unless it is
overridden by using the --ignore-submodules command-line option.
The 'git submodule' commands are not affected by this setting.
diff.renameLimit::
The number of files to consider when performing the copy/rename
- detection; equivalent to the 'git diff' option '-l'.
+ detection; equivalent to the 'git diff' option `-l`.
diff.renames::
Whether and how Git detects renames. If set to "false",
. sha1 for "dst"; 0\{40\} if creation, unmerged or "look at work tree".
. a space.
. status, followed by optional "score" number.
-. a tab or a NUL when '-z' option is used.
+. a tab or a NUL when `-z` option is used.
. path for "src"
-. a tab or a NUL when '-z' option is used; only exists for C or R.
+. a tab or a NUL when `-z` option is used; only exists for C or R.
. path for "dst"; only exists for C or R.
-. an LF or a NUL when '-z' option is used, to terminate the record.
+. an LF or a NUL when `-z` option is used, to terminate the record.
Possible status letters are:
----------------------
"git-diff-tree", "git-diff-files" and "git-diff --raw"
-can take '-c' or '--cc' option
+can take `-c` or `--cc` option
to generate diff output also for merge commits. The output differs
from the format described above in the following way:
--------------------------
When "git-diff-index", "git-diff-tree", or "git-diff-files" are run
-with a '-p' option, "git diff" without the '--raw' option, or
+with a `-p` option, "git diff" without the `--raw` option, or
"git log" with the "-p" option, they
do not produce the output described above; instead they produce a
patch file. You can customize the creation of such patches via the
-GIT_EXTERNAL_DIFF and the GIT_DIFF_OPTS environment variables.
+`GIT_EXTERNAL_DIFF` and the `GIT_DIFF_OPTS` environment variables.
What the -p option produces is slightly different from the traditional
diff format:
------------
1. It is preceded with a "git diff" header, that looks like
- this (when '-c' option is used):
+ this (when `-c` option is used):
diff --combined file
+
-or like this (when '--cc' option is used):
+or like this (when `--cc` option is used):
diff --cc file
paths are selected if there is any file that matches
other criteria in the comparison; if there is no file
that matches other criteria, nothing is selected.
++
+Also, these upper-case letters can be downcased to exclude. E.g.
+`--diff-filter=ad` excludes added and deleted paths.
-S<string>::
Look for differences that change the number of occurrences of
-p::
--prune::
- After fetching, remove any remote-tracking references that no
+ Before fetching, remove any remote-tracking references that no
longer exist on the remote. Tags are not subject to pruning
if they are fetched only because of the default tag
auto-following or due to a --tags option. However, if tags
to whatever else would otherwise be fetched. Using this
option alone does not subject tags to pruning, even if --prune
is used (though tags may be pruned anyway if they are also the
- destination of an explicit refspec; see '--prune').
+ destination of an explicit refspec; see `--prune`).
--recurse-submodules[=yes|on-demand|no]::
This option controls if and under what conditions new commits of
--no-recurse-submodules::
Disable recursive fetching of submodules (this has the same effect as
- using the '--recurse-submodules=no' option).
+ using the `--recurse-submodules=no` option).
--submodule-prefix=<path>::
Prepend <path> to paths printed in informative messages
--upload-pack <upload-pack>::
When given, and the repository to fetch from is handled
- by 'git fetch-pack', '--exec=<upload-pack>' is passed to
+ by 'git fetch-pack', `--exec=<upload-pack>` is passed to
the command to specify non-default path for the command
run on the other end.
By default the command will try to detect the patch format
automatically. This option allows the user to bypass the automatic
detection and specify the patch format that the patch(es) should be
- interpreted as. Valid formats are mbox, stgit, stgit-series and hg.
+ interpreted as. Valid formats are mbox, mboxrd,
+ stgit, stgit-series and hg.
-i::
--interactive::
to process. Upon seeing the first patch that does not apply, it
aborts in the middle. You can recover from this in one of two ways:
-. skip the current patch by re-running the command with the '--skip'
+. skip the current patch by re-running the command with the `--skip`
option.
. hand resolve the conflict in the working directory, and update
the index file to bring it into a state that the patch should
- have produced. Then run the command with the '--continue' option.
+ have produced. Then run the command with the `--continue` option.
The command refuses to process new mailboxes until the current
operation is finished, so if you decide to start over from scratch,
Or if you want more control, you can inspect the current state using
for example "git bisect visualize". It will launch gitk (or "git log"
-if the DISPLAY environment variable is not set) to help you find a
+if the `DISPLAY` environment variable is not set) to help you find a
better bisection point.
Either way, if you have a string of untestable commits, it might
`view` may also be used as a synonym for `visualize`.
-If the 'DISPLAY' environment variable is not set, 'git log' is used
+If the `DISPLAY` environment variable is not set, 'git log' is used
instead. You can also give command-line options such as `-p` and
`--stat`.
--no-checkout::
+
Do not checkout the new working tree at each iteration of the bisection
-process. Instead just update a special reference named 'BISECT_HEAD' to make
+process. Instead just update a special reference named `BISECT_HEAD` to make
it point to the commit that should be tested.
+
This option may be useful when the test you would perform in each step
commit (i.e. the branches whose tip commits are reachable from the named
commit) will be listed. With `--no-merged` only branches not merged into
the named commit will be listed. If the <commit> argument is missing it
-defaults to 'HEAD' (i.e. the tip of the current branch).
+defaults to `HEAD` (i.e. the tip of the current branch).
The command's second form creates a new branch head named <branchname>
-which points to the current 'HEAD', or <start-point> if given.
+which points to the current `HEAD`, or <start-point> if given.
Note that this will create the new branch, but it will not switch the
working tree to it; use "git checkout <newbranch>" to switch to the
+
This behavior is the default when the start point is a remote-tracking branch.
Set the branch.autoSetupMerge configuration variable to `false` if you
-want `git checkout` and `git branch` to always behave as if '--no-track'
+want `git checkout` and `git branch` to always behave as if `--no-track`
were given. Set it to `always` if you want this behavior when the
start-point is either a local or remote-tracking branch.
DESCRIPTION
-----------
In its first form, the command provides the content or the type of an object in
-the repository. The type is required unless '-t' or '-p' is used to find the
-object type, or '-s' is used to find the object size, or '--textconv' is used
+the repository. The type is required unless `-t` or `-p` is used to find the
+object type, or `-s` is used to find the object size, or `--textconv` is used
(which implies type "blob").
In the second form, a list of objects (separated by linefeeds) is provided on
OUTPUT
------
-If '-t' is specified, one of the <type>.
+If `-t` is specified, one of the <type>.
-If '-s' is specified, the size of the <object> in bytes.
+If `-s` is specified, the size of the <object> in bytes.
-If '-e' is specified, no output.
+If `-e` is specified, no output.
-If '-p' is specified, the contents of <object> are pretty-printed.
+If `-p` is specified, the contents of <object> are pretty-printed.
If <type> is specified, the raw (though uncompressed) contents of the <object>
will be returned.
When creating a new branch, set up "upstream" configuration. See
"--track" in linkgit:git-branch[1] for details.
+
-If no '-b' option is given, the name of the new branch will be
+If no `-b` option is given, the name of the new branch will be
derived from the remote-tracking branch, by looking at the local part of
the refspec configured for the corresponding remote, and then stripping
the initial part up to the "*".
off of "origin/hack" (or "remotes/origin/hack", or even
"refs/remotes/origin/hack"). If the given name has no slash, or the above
guessing results in an empty name, the guessing is aborted. You can
-explicitly give a name with '-b' in such a case.
+explicitly give a name with `-b` in such a case.
--no-track::
Do not set up "upstream" configuration, even if the
For a more complete list of ways to spell commits, see
linkgit:gitrevisions[7].
Sets of commits can be passed but no traversal is done by
- default, as if the '--no-walk' option was specified, see
+ default, as if the `--no-walk` option was specified, see
linkgit:git-rev-list[1]. Note that specifying a range will
feed all <commit>... arguments to a single revision walk
(see a later example that uses 'maint master..next').
Cleans the working tree by recursively removing files that are not
under version control, starting from the current directory.
-Normally, only files unknown to Git are removed, but if the '-x'
+Normally, only files unknown to Git are removed, but if the `-x`
option is specified, ignored files are also removed. This can, for
example, be useful to remove all build products.
Create a 'shallow' clone with a history truncated to the
specified number of commits. Implies `--single-branch` unless
`--no-single-branch` is given to fetch the histories near the
- tips of all branches. This implies `--shallow-submodules`. If
- you want to have a shallow superproject clone, but full submodules,
- also pass `--no-shallow-submodules`.
+ tips of all branches. If you want to clone submodules shallowly,
+ also pass `--shallow-submodules`.
--[no-]single-branch::
Clone only the history leading to the tip of a single branch,
An existing tree object
-p <parent>::
- Each '-p' indicates the id of a parent commit object.
+ Each `-p` indicates the id of a parent commit object.
-m <message>::
A paragraph in the commit log message. This can be given more than
-c <commit>::
--reedit-message=<commit>::
- Like '-C', but with '-c' the editor is invoked, so that
+ Like `-C`, but with `-c` the editor is invoked, so that
the user can further edit the commit message.
--fixup=<commit>::
Otherwise `whitespace`.
--
+
-The default can be changed by the 'commit.cleanup' configuration
+The default can be changed by the `commit.cleanup` configuration
variable (see linkgit:git-config[1]).
-e::
staged for other paths. This is the default mode of operation of
'git commit' if any paths are given on the command line,
in which case this option can be omitted.
- If this option is specified together with '--amend', then
+ If this option is specified together with `--amend`, then
no paths need to be specified, which can be used to amend
the last commit without committing changes that have
already been staged.
ENVIRONMENT AND CONFIGURATION VARIABLES
---------------------------------------
The editor used to edit the commit log message will be chosen from the
-GIT_EDITOR environment variable, the core.editor configuration variable, the
-VISUAL environment variable, or the EDITOR environment variable (in that
+`GIT_EDITOR` environment variable, the `core.editor` configuration variable, the
+`VISUAL` environment variable, or the `EDITOR` environment variable (in that
order). See linkgit:git-var[1] for details.
HOOKS
actually the section and the key separated by a dot, and the value will be
escaped.
-Multiple lines can be added to an option by using the '--add' option.
+Multiple lines can be added to an option by using the `--add` option.
If you want to update or unset an option which can occur on multiple
lines, a POSIX regexp `value_regex` needs to be given. Only the
existing values that match the regexp are updated or unset. If
you want to handle the lines that do *not* match the regex, just
prepend a single exclamation mark in front (see also <<EXAMPLES>>).
-The type specifier can be either '--int' or '--bool', to make
+The type specifier can be either `--int` or `--bool`, to make
'git config' ensure that the variable(s) are of the given type and
convert the value to the canonical form (simple decimal number for int,
-a "true" or "false" string for bool), or '--path', which does some
-path expansion (see '--path' below). If no type specifier is passed, no
+a "true" or "false" string for bool), or `--path`, which does some
+path expansion (see `--path` below). If no type specifier is passed, no
checks or transformations are performed on the value.
When reading, the values are read from the system, global and
repository local configuration files by default, and options
-'--system', '--global', '--local' and '--file <filename>' can be
+`--system`, `--global`, `--local` and `--file <filename>` can be
used to tell the command to read from only that location (see <<FILES>>).
When writing, the new value is written to the repository local
-configuration file by default, and options '--system', '--global',
-'--file <filename>' can be used to tell the command to write to
-that location (you can say '--local' but that is the default).
+configuration file by default, and options `--system`, `--global`,
+`--file <filename>` can be used to tell the command to write to
+that location (you can say `--local` but that is the default).
This command will fail with non-zero status upon error. Some exit
codes are:
Use the given config file instead of the one specified by GIT_CONFIG.
--blob blob::
- Similar to '--file' but use the given blob instead of a file. E.g.
+ Similar to `--file` but use the given blob instead of a file. E.g.
you can use 'master:.gitmodules' to read values from the file
'.gitmodules' in the master branch. See "SPECIFYING REVISIONS"
section in linkgit:gitrevisions[7] for a more complete list of
-e::
--edit::
Opens an editor to modify the specified config file; either
- '--system', '--global', or repository (default).
+ `--system`, `--global`, or repository (default).
--[no-]includes::
Respect `include.*` directives in config files when looking up
FILES
-----
-If not set explicitly with '--file', there are four files where
+If not set explicitly with `--file`, there are four files where
'git config' will search for configuration options:
$(prefix)/etc/gitconfig::
precedence over values read earlier. When multiple values are taken then all
values of a key from all files will be used.
+You may override individual configuration parameters when running any git
+command by using the `-c` option. See linkgit:git[1] for details.
+
All writing options will per default write to the repository specific
-configuration file. Note that this also affects options like '--replace-all'
-and '--unset'. *'git config' will only ever change one file at a time*.
+configuration file. Note that this also affects options like `--replace-all`
+and `--unset`. *'git config' will only ever change one file at a time*.
You can override these rules either by command-line options or by environment
-variables. The '--global' and the '--system' options will limit the file used
-to the global or system-wide file respectively. The GIT_CONFIG environment
+variables. The `--global` and the `--system` options will limit the file used
+to the global or system-wide file respectively. The `GIT_CONFIG` environment
variable has a similar effect, but you can specify any filename you want.
FILES
-----
-If not set explicitly with '--file', there are two files where
+If not set explicitly with `--file`, there are two files where
git-credential-store will search for credentials in order of precedence:
~/.git-credentials::
akin to the way 'git clone' uses 'origin' by default.
-o <branch-for-HEAD>::
- When no remote is specified (via -r) the 'HEAD' branch
+ When no remote is specified (via -r) the `HEAD` branch
from CVS is imported to the 'origin' branch within the Git
- repository, as 'HEAD' already has a special meaning for Git.
- When a remote is specified the 'HEAD' branch is named
+ repository, as `HEAD` already has a special meaning for Git.
+ When a remote is specified the `HEAD` branch is named
remotes/<remote>/master mirroring 'git clone' behaviour.
Use this option if you want to import into a different
branch.
-p <options-for-cvsps>::
Additional options for cvsps.
- The options '-u' and '-A' are implicit and should not be used here.
+ The options `-u` and `-A` are implicit and should not be used here.
+
If you need to pass multiple options, separate them with a comma.
-M <regex>::
Attempt to detect merges based on the commit message with a custom
- regex. It can be used with '-m' to enable the default regexes
+ regex. It can be used with `-m` to enable the default regexes
as well. You must escape forward slashes.
+
The regex must capture the source branch name in $1.
OUTPUT
------
-If '-v' is specified, the script reports what it is doing.
+If `-v` is specified, the script reports what it is doing.
Otherwise, success is indicated the Unix way, i.e. by simply exiting with
a zero exit status.
You can specify a list of allowed directories. If no directories
are given, all are allowed. This is an additional restriction; gitcvs
access still needs to be enabled by the `gitcvs.enabled` config option
-unless '--export-all' was given, too.
+unless `--export-all` was given, too.
DESCRIPTION
3. Browse the 'modules' available. It will give you a list of the heads in
the repository. You will not be able to browse the tree from there. Only
the heads.
-4. Pick 'HEAD' when it asks what branch/tag to check out. Untick the
+4. Pick `HEAD` when it asks what branch/tag to check out. Untick the
"launch commit wizard" to avoid committing the .project file.
Protocol notes: If you are using anonymous access via pserver, just select that.
CRLF Line Ending Conversions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-By default the server leaves the '-k' mode blank for all files,
+By default the server leaves the `-k` mode blank for all files,
which causes the CVS client to treat them as text files, subject
to end-of-line conversion on some platforms.
You can make the server use the end-of-line conversion attributes to
-set the '-k' modes for files by setting the `gitcvs.usecrlfattr`
+set the `-k` modes for files by setting the `gitcvs.usecrlfattr`
config variable. See linkgit:gitattributes[5] for more information
about end-of-line conversion.
or the attributes do not allow automatic detection for a filename, then
the server uses the `gitcvs.allBinary` config for the default setting.
If `gitcvs.allBinary` is set, then files not otherwise
-specified will default to '-kb' mode. Otherwise the '-k' mode
+specified will default to '-kb' mode. Otherwise the `-k` mode
is left blank. But if `gitcvs.allBinary` is set to "guess", then
-the correct '-k' mode will be guessed based on the contents of
+the correct `-k` mode will be guessed based on the contents of
the file.
For best consistency with 'cvs', it is probably best to override the
It verifies that the directory has the magic file "git-daemon-export-ok", and
it will refuse to export any Git directory that hasn't explicitly been marked
-for export this way (unless the '--export-all' parameter is specified). If you
+for export this way (unless the `--export-all` parameter is specified). If you
pass some directory paths as 'git daemon' arguments, you can further restrict
the offers to a whitelist comprising those.
is not supported, then --listen=hostname is also not supported and
--listen must be given an IPv4 address.
Can be given more than once.
- Incompatible with '--inetd' option.
+ Incompatible with `--inetd` option.
--port=<n>::
- Listen on an alternative port. Incompatible with '--inetd' option.
+ Listen on an alternative port. Incompatible with `--inetd` option.
--init-timeout=<n>::
Timeout (in seconds) between the moment the connection is established
arguments. The external command can decide to decline the
service by exiting with a non-zero status (or to allow it by
exiting with a zero status). It can also look at the `$REMOTE_ADDR`
- and $REMOTE_PORT environment variables to learn about the
+ and `$REMOTE_PORT` environment variables to learn about the
requestor when making this decision.
+
The external command can optionally write a single line to its
selectively enable/disable services per repository::
To enable 'git archive --remote' and disable 'git fetch' against
a repository, have the following in the configuration file in the
- repository (that is the file 'config' next to 'HEAD', 'refs' and
+ repository (that is the file 'config' next to `HEAD`, 'refs' and
'objects').
+
----------------------------------------------------------------
If an exact match was not found, 'git describe' will walk back
through the commit history to locate an ancestor commit which
has been tagged. The ancestor's tag will be output along with an
-abbreviation of the input commit-ish's SHA-1. If '--first-parent' was
+abbreviation of the input commit-ish's SHA-1. If `--first-parent` was
specified then the walk will only consider the first parent of each
commit.
Operating Modes
---------------
You can choose whether you want to trust the index file entirely
-(using the '--cached' flag) or ask the diff logic to show any files
+(using the `--cached` flag) or ask the diff logic to show any files
that don't match the stat state as being "tentatively changed". Both
of these operations are very useful indeed.
Cached Mode
-----------
-If '--cached' is specified, it allows you to ask:
+If `--cached` is specified, it allows you to ask:
show me the differences between HEAD and the current index
contents (the ones I'd write using 'git write-tree')
show tree entry itself as well as subtrees. Implies -r.
--root::
- When '--root' is specified the initial commit will be shown as a big
+ When `--root` is specified the initial commit will be shown as a big
creation event. This is equivalent to a diff against the NULL tree.
--stdin::
- When '--stdin' is specified, the command does not take
+ When `--stdin` is specified, the command does not take
<tree-ish> arguments from the command line. Instead, it
reads lines containing either two <tree>, one <commit>, or a
list of <commit> from its standard input. (Use a single space
By default, 'git diff-tree --stdin' does not show
differences for merge commits. With this flag, it shows
differences to that commit from all of its parents. See
- also '-c'.
+ also `-c`.
-s::
By default, 'git diff-tree --stdin' shows differences,
- either in machine-readable form (without '-p') or in patch
- form (with '-p'). This output can be suppressed. It is
- only useful with '-v' flag.
+ either in machine-readable form (without `-p`) or in patch
+ form (with `-p`). This output can be suppressed. It is
+ only useful with `-v` flag.
-v::
This flag causes 'git diff-tree --stdin' to also show
-c::
This flag changes the way a merge commit is displayed
(which means it is useful only when the command is given
- one <tree-ish>, or '--stdin'). It shows the differences
+ one <tree-ish>, or `--stdin`). It shows the differences
from each of the parents to the merge result simultaneously
instead of showing pairwise diff between a parent and the
- result one at a time (which is what the '-m' option does).
+ result one at a time (which is what the `-m` option does).
Furthermore, it lists only files which were modified
from all parents.
--cc::
This flag changes the way a merge commit patch is displayed,
- in a similar way to the '-c' option. It implies the '-c'
- and '-p' options and further compresses the patch output
+ in a similar way to the `-c` option. It implies the `-c`
+ and `-p` options and further compresses the patch output
by omitting uninteresting hunks whose contents in the parents
have only two variants and the merge result picks one of them
without modification. When all hunks are uninteresting, the commit
invoked diff tool returns a non-zero exit code.
+
'git-difftool' will forward the exit code of the invoked tool when
-'--trust-exit-code' is used.
+`--trust-exit-code` is used.
See linkgit:git-diff[1] for the full list of supported options.
Maximum size of each output packfile.
The default is unlimited.
+fastimport.unpackLimit::
+ See linkgit:git-config[1].
Performance
-----------
no-relative-marks::
force::
Act as though the corresponding command-line option with
- a leading '--' was passed on the command line
+ a leading `--` was passed on the command line
(see OPTIONS, above).
import-marks::
The `<option>` part of the command may contain any of the options
listed in the OPTIONS section that do not change import semantics,
-without the leading '--' and is treated in the same way.
+without the leading `--` and is treated in the same way.
Option commands must be the first commands on the input (not counting
feature commands), to give an option command after any non-option
option, then the refs from stdin are processed after those
on the command line.
+
-If '--stateless-rpc' is specified together with this option then
+If `--stateless-rpc` is specified together with this option then
the list of refs must be in packet format (pkt-line). Each ref must
be in a separate packet, and the list must end with a flush packet.
-q::
--quiet::
- Pass '-q' flag to 'git unpack-objects'; this makes the
+ Pass `-q` flag to 'git unpack-objects'; this makes the
cloning process less verbose.
-k::
overridden by giving the `--refmap=<refspec>` parameter(s) on the
command line.
+OUTPUT
+------
+
+The output of "git fetch" depends on the transport method used; this
+section describes the output when fetching over the Git protocol
+(either locally or via ssh) and Smart HTTP protocol.
+
+The status of the fetch is output in tabular form, with each line
+representing the status of a single ref. Each line is of the form:
+
+-------------------------------
+ <flag> <summary> <from> -> <to> [<reason>]
+-------------------------------
+
+The status of up-to-date refs is shown only if the `--verbose` option is
+used.
+
+In compact output mode, specified with the configuration variable
+`fetch.output`, if the entire `<from>` or `<to>` string is found in the
+other string, it will be substituted with `*` in the other string. For
+example, `master -> origin/master` becomes `master -> origin/*`.
+
+flag::
+ A single character indicating the status of the ref:
+(space);; for a successfully fetched fast-forward;
+`+`;; for a successful forced update;
+`-`;; for a successfully pruned ref;
+`t`;; for a successful tag update;
+`*`;; for a successfully fetched new ref;
+`!`;; for a ref that was rejected or failed to update; and
+`=`;; for a ref that was up to date and did not need fetching.
+
+summary::
+ For a successfully fetched ref, the summary shows the old and new
+ values of the ref in a form suitable for using as an argument to
+ `git log` (this is `<old>..<new>` in most cases, and
+ `<old>...<new>` for forced non-fast-forward updates).
+
+from::
+ The name of the remote ref being fetched from, minus its
+ `refs/<type>/` prefix. In the case of deletion, the name of
+ the remote ref is "(none)".
+
+to::
+ The name of the local ref being updated, minus its
+ `refs/<type>/` prefix.
+
+reason::
+ A human-readable explanation. In the case of successfully fetched
+ refs, no explanation is needed. For a failed ref, the reason for
+ failure is described.
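+
+For instance, a successfully fast-forwarded branch might be reported
+with a line like this (the object names are made up for illustration):
+
+------------
+   d0f1a23..9b8c7d6  master     -> origin/master
+------------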
EXAMPLES
--------
Note that since this operation is very I/O expensive, it might
be a good idea to redirect the temporary directory off-disk with the
-'-d' option, e.g. on tmpfs. Reportedly the speedup is very noticeable.
+`-d` option, e.g. on tmpfs. Reportedly the speedup is very noticeable.
Filters
The filters are applied in the order as listed below. The <command>
argument is always evaluated in the shell context using the 'eval' command
(with the notable exception of the commit filter, for technical reasons).
-Prior to that, the $GIT_COMMIT environment variable will be set to contain
+Prior to that, the `$GIT_COMMIT` environment variable will be set to contain
the id of the commit being rewritten. Also, GIT_AUTHOR_NAME,
GIT_AUTHOR_EMAIL, GIT_AUTHOR_DATE, GIT_COMMITTER_NAME, GIT_COMMITTER_EMAIL,
and GIT_COMMITTER_DATE are taken from the current commit and exported to
untouched. This switch allows git-filter-branch to ignore such
commits. Though, this switch only applies for commits that have one
and only one parent, it will hence keep merge points. Also, this
- option is not compatible with the use of '--commit-filter'. Though you
+ option is not compatible with the use of `--commit-filter`. Though you
just need to use the function 'git_commit_non_empty_tree "$@"' instead
of the `git commit-tree "$@"` idiom in your commit filter to make that
happen.
<rev-list options>...::
Arguments for 'git rev-list'. All positive refs included by
these options are rewritten. You may also specify options
- such as '--all', but you must use '--' to separate them from
+ such as `--all`, but you must use `--` to separate them from
the 'git filter-branch' options. Implies <<Remap_to_ancestor>>.
<width> and <position> used instead. For instance,
`%(align:<width>,<position>)`. If the contents length is more
than the width then no alignment is performed. If used with
- '--quote' everything in between %(align:...) and %(end) is
+ `--quote` everything in between %(align:...) and %(end) is
quoted, but if nested then only the topmost level performs
quoting.
If `-o` is specified, output files are created in <dir>. Otherwise
they are created in the current working directory. The default path
-can be set with the 'format.outputDirectory' configuration option.
+can be set with the `format.outputDirectory` configuration option.
The `-o` option takes precedence over `format.outputDirectory`.
To store patches in the current working directory even when
`format.outputDirectory` points elsewhere, use `-o .`.
`--in-reply-to`, and the first patch mail, in this order. 'deep'
threading makes every mail a reply to the previous one.
+
-The default is `--no-thread`, unless the 'format.thread' configuration
+The default is `--no-thread`, unless the `format.thread` configuration
is set. If `--thread` is specified without a style, it defaults to the
-style specified by 'format.thread' if any, or else `shallow`.
+style specified by `format.thread` if any, or else `shallow`.
+
Beware that the default for 'git send-email' is to thread emails
itself. If you want `git format-patch` to take care of threading, you
[verse]
'git fsck' [--tags] [--root] [--unreachable] [--cache] [--no-reflogs]
[--[no-]full] [--strict] [--verbose] [--lost-found]
- [--[no-]dangling] [--[no-]progress] [--connectivity-only] [<object>*]
+ [--[no-]dangling] [--[no-]progress] [--connectivity-only]
+ [--[no-]name-objects] [<object>*]
DESCRIPTION
-----------
a blob, the contents are written into the file, rather than
its object name.
+--name-objects::
+ When displaying names of reachable objects, in addition to the
+ SHA-1 also display a name that describes *how* they are reachable,
+ compatible with linkgit:git-rev-parse[1], e.g.
+ `HEAD@{1234567890}~25^2:src/`.
+
--[no-]progress::
Progress status is reported on the standard error stream by
default when it is attached to a terminal, unless
git-fsck tests SHA-1 and general object sanity, and it does full tracking
of the resulting reachability and everything else. It prints out any
corruption it finds (missing or bad objects), and if you use the
-'--unreachable' flag it will also print out objects that exist but that
+`--unreachable` flag it will also print out objects that exist but that
aren't reachable from any of the specified head nodes (or the default
set, as mentioned above).
Configuration
-------------
-The optional configuration variable 'gc.reflogExpire' can be
+The optional configuration variable `gc.reflogExpire` can be
set to indicate how long historical entries within each branch's
reflog should remain available in this repository. The setting is
expressed as a length of time, for example '90 days' or '3 months'.
It defaults to '90 days'.
-The optional configuration variable 'gc.reflogExpireUnreachable'
+The optional configuration variable `gc.reflogExpireUnreachable`
can be set to indicate how long historical reflog entries which
are not part of the current branch should remain available in
this repository. These types of entries are generally created as
reflogExpireUnreachable = 3 days
------------
-The optional configuration variable 'gc.rerereResolved' indicates
+The optional configuration variable `gc.rerereResolved` indicates
how long records of conflicted merge you resolved earlier are
kept. This defaults to 60 days.
-The optional configuration variable 'gc.rerereUnresolved' indicates
+The optional configuration variable `gc.rerereUnresolved` indicates
how long records of conflicted merge you have not resolved are
kept. This defaults to 15 days.
-The optional configuration variable 'gc.packRefs' determines if
+The optional configuration variable `gc.packRefs` determines if
'git gc' runs 'git pack-refs'. This can be set to "notbare" to enable
it within all non-bare repos or it can be set to a boolean value.
This defaults to true.
-The optional configuration variable 'gc.aggressiveWindow' controls how
+The optional configuration variable `gc.aggressiveWindow` controls how
much time is spent optimizing the delta compression of the objects in
the repository when the --aggressive option is specified. The larger
the value, the more time is spent optimizing the delta compression. See
the documentation for the `--window` option in linkgit:git-repack[1] for
more details. This defaults to 250.
-Similarly, the optional configuration variable 'gc.aggressiveDepth'
+Similarly, the optional configuration variable `gc.aggressiveDepth`
controls the `--depth` option in linkgit:git-repack[1]. This defaults to 250.
-The optional configuration variable 'gc.pruneExpire' controls how old
+The optional configuration variable `gc.pruneExpire` controls how old
the unreferenced loose objects have to be before they are pruned. The
default is "2 weeks ago".
-------------
grep.lineNumber::
- If set to true, enable '-n' option by default.
+ If set to true, enable `-n` option by default.
grep.patternType::
Set the default matching behavior. Using a value of 'basic', 'extended',
- 'fixed', or 'perl' will enable the '--basic-regexp', '--extended-regexp',
- '--fixed-strings', or '--perl-regexp' option accordingly, while the
+ 'fixed', or 'perl' will enable the `--basic-regexp`, `--extended-regexp`,
+ `--fixed-strings`, or `--perl-regexp` option accordingly, while the
value 'default' will return to the default matching behavior.
grep.extendedRegexp::
- If set to true, enable '--extended-regexp' option by default. This
- option is ignored when the 'grep.patternType' option is set to a value
+ If set to true, enable `--extended-regexp` option by default. This
+ option is ignored when the `grep.patternType` option is set to a value
other than 'default'.
grep.threads::
8 threads are used by default (for now).
grep.fullName::
- If set to true, enable '--full-name' option by default.
+ If set to true, enable `--full-name` option by default.
grep.fallbackToNoIndex::
If set to true, fall back to git grep --no-index if git grep
browser::
Start a tree browser showing all files in the specified
- commit (or 'HEAD' by default). Files selected through the
+ commit (or `HEAD` by default). Files selected through the
browser are opened in the blame viewer.
citool::
command and a list of the most commonly used Git commands are printed
on the standard output.
-If the option '--all' or '-a' is given, all available commands are
+If the option `--all` or `-a` is given, all available commands are
printed on the standard output.
-If the option '--guide' or '-g' is given, a list of the useful
+If the option `--guide` or `-g` is given, a list of the useful
Git guides is also printed on the standard output.
If a command, or a guide, is given, a manual page for that command or
--man::
Display manual page for the command in the 'man' format. This
option may be used to override a value set in the
- 'help.format' configuration variable.
+ `help.format` configuration variable.
+
By default the 'man' program will be used to display the manual page,
-but the 'man.viewer' configuration variable may be used to choose
+but the `man.viewer` configuration variable may be used to choose
other display programs (see below).
-w::
format. A web browser will be used for that purpose.
+
The web browser can be specified using the configuration variable
-'help.browser', or 'web.browser' if the former is not set. If none of
+`help.browser`, or `web.browser` if the former is not set. If none of
these config variables is set, the 'git web{litdd}browse' helper script
(called by 'git help') will pick a suitable default. See
linkgit:git-web{litdd}browse[1] for more information about this.
help.format
~~~~~~~~~~~
-If no command-line option is passed, the 'help.format' configuration
+If no command-line option is passed, the `help.format` configuration
variable will be checked. The following values are supported for this
variable; they make 'git help' behave as their corresponding command-
line option:
help.browser, web.browser and browser.<tool>.path
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-The 'help.browser', 'web.browser' and 'browser.<tool>.path' will also
+The `help.browser`, `web.browser` and `browser.<tool>.path` will also
be checked if the 'web' format is chosen (either by command-line
option or configuration variable). See '-w|--web' in the OPTIONS
section above and linkgit:git-web{litdd}browse[1].
man.viewer
~~~~~~~~~~
-The 'man.viewer' configuration variable will be checked if the 'man'
+The `man.viewer` configuration variable will be checked if the 'man'
format is chosen. The following values are currently supported:
* "man": use the 'man' program as usual,
tab (see 'Note about konqueror' below).
Values for other tools can be used if there is a corresponding
-'man.<tool>.cmd' configuration entry (see below).
+`man.<tool>.cmd` configuration entry (see below).
-Multiple values may be given to the 'man.viewer' configuration
+Multiple values may be given to the `man.viewer` configuration
variable. Their corresponding programs will be tried in the order
listed in the configuration file.
DISPLAY is not set) and in that case emacs' woman mode will be tried.
If everything fails, or if no viewer is configured, the viewer specified
-in the GIT_MAN_VIEWER environment variable will be tried. If that
+in the `GIT_MAN_VIEWER` environment variable will be tried. If that
fails too, the 'man' program will be tried anyway.
man.<tool>.path
~~~~~~~~~~~~~~~
You can explicitly provide a full path to your preferred man viewer by
-setting the configuration variable 'man.<tool>.path'. For example, you
+setting the configuration variable `man.<tool>.path`. For example, you
can configure the absolute path to konqueror by setting
'man.konqueror.path'. Otherwise, 'git help' assumes the tool is
available in PATH.
man.<tool>.cmd
~~~~~~~~~~~~~~
-When the man viewer, specified by the 'man.viewer' configuration
+When the man viewer, specified by the `man.viewer` configuration
variables, is not among the supported ones, then the corresponding
-'man.<tool>.cmd' configuration variable will be looked up. If this
+`man.<tool>.cmd` configuration variable will be looked up. If this
variable exists then the specified tool will be treated as a custom
command and a shell eval will be used to run the command with the man
page passed as arguments.
Note about konqueror
~~~~~~~~~~~~~~~~~~~~
-When 'konqueror' is specified in the 'man.viewer' configuration
+When 'konqueror' is specified in the `man.viewer` configuration
variable, we launch 'kfmclient' to try to open the man page on an
already opened konqueror in a new tab if possible.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Note that all these configuration variables should probably be set
-using the '--global' flag, for example like this:
+using the `--global` flag, for example like this:
------------------------------------------------
$ git config --global help.format web
It verifies that the directory has the magic file
"git-daemon-export-ok", and it will refuse to export any Git directory
that hasn't explicitly been marked for export this way (unless the
-GIT_HTTP_EXPORT_ALL environmental variable is set).
+`GIT_HTTP_EXPORT_ALL` environmental variable is set).
By default, only the `upload-pack` service is enabled, which serves
'git fetch-pack' and 'git ls-remote' clients, which are invoked from
ENVIRONMENT
-----------
-'git http-backend' relies upon the CGI environment variables set
+'git http-backend' relies upon the CGI environment variables set
by the invoking web server, including:
* PATH_INFO (if GIT_PROJECT_ROOT is set, otherwise PATH_TRANSLATED)
* QUERY_STRING
* REQUEST_METHOD
-The GIT_HTTP_EXPORT_ALL environmental variable may be passed to
+The `GIT_HTTP_EXPORT_ALL` environmental variable may be passed to
'git-http-backend' to bypass the check for the "git-daemon-export-ok"
file in each repository before allowing export of that repository.
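For example, in an Apache configuration the variable could be set for the
backend like this (paths are illustrative):
------------
SetEnv GIT_PROJECT_ROOT /var/www/git
SetEnv GIT_HTTP_EXPORT_ALL
ScriptAlias /git/ /usr/libexec/git-core/git-http-backend/
------------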
ensuring that any reflogs created by 'git-receive-pack' contain some
identifying information of the remote user who performed the push.
-All CGI environment variables are available to each of the hooks
+All CGI environment variables are available to each of the hooks
invoked by the 'git-receive-pack'.
GIT
exist in the set of remote refs; the ref matched <src>
locally is used as the name of the destination.
-Without '--force', the <src> ref is stored at the remote only if
+Without `--force`, the <src> ref is stored at the remote only if
<dst> does not exist, or <dst> is a proper subset (i.e. an
ancestor) of <src>. This check, known as "fast-forward check",
is performed in order to avoid accidentally overwriting the
remote ref and losing other people's commits from there.
-With '--force', the fast-forward check is disabled for all refs.
+With `--force`, the fast-forward check is disabled for all refs.
Optionally, a <ref> parameter can be prefixed with a plus '+' sign
to disable the fast-forward check only on that ref.
--bare::
-Create a bare repository. If GIT_DIR environment is not set, it is set to the
+Create a bare repository. If the `GIT_DIR` environment variable is not set, it is set to the
current working directory.
--template=<template_directory>::
-----------------------------------------------------------------------
-If the configuration variable 'instaweb.browser' is not set,
-'web.browser' will be used instead if it is defined. See
+If the configuration variable `instaweb.browser` is not set,
+`web.browser` will be used instead if it is defined. See
linkgit:git-web{litdd}browse[1] for more information about this.
SEE ALSO
Signed-off-by: Bob <bob@example.com>
------------
-* Use the '--in-place' option to edit a message file in place:
+* Use the `--in-place` option to edit a message file in place:
+
------------
$ cat msg.txt
`git log -p` output would be shown without a diff attached.
The default is `true`.
+log.showSignature::
+ If `true`, `git log` and related commands will act as if the
+ `--show-signature` option was passed to them.
+
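For example, signature display could be enabled for the whole `git log`
family with something like:
------------
$ git config --global log.showSignature true
------------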
mailmap.*::
See linkgit:git-shortlog[1].
notes.displayRef::
Which refs, in addition to the default set by `core.notesRef`
- or 'GIT_NOTES_REF', to read notes from when showing commit
+ or `GIT_NOTES_REF`, to read notes from when showing commit
messages with the `log` family of commands. See
linkgit:git-notes[1].
+
but a glob that does not match any refs is silently ignored.
+
This setting can be disabled by the `--no-notes` option,
-overridden by the 'GIT_NOTES_DISPLAY_REF' environment variable,
+overridden by the `GIT_NOTES_DISPLAY_REF` environment variable,
and overridden by the `--notes=<ref>` option.
GIT
+
<eolattr> is the attribute that is used when checking out or committing,
it is either "", "-text", "text", "text=auto", "text eol=lf", "text eol=crlf".
-Note: Currently Git does not support "text=auto eol=lf" or "text=auto eol=crlf",
-that may change in the future.
+Since Git 2.10, "text=auto eol=lf" and "text=auto eol=crlf" are supported.
+
Both the <eolinfo> in the index ("i/<eolinfo>")
and in the working tree ("w/<eolinfo>") are shown for regular files,
Output
------
-'git ls-files' just outputs the filenames unless '--stage' is specified in
+'git ls-files' just outputs the filenames unless `--stage` is specified in
which case it outputs:
[<tag> ]<mode> <object> <stage> <file>
- the behaviour is slightly different from that of "/bin/ls" in that the
'<path>' denotes just a list of patterns to match, e.g. so specifying
- directory name (without '-r') will behave differently, and order of the
+ directory name (without `-r`) will behave differently, and order of the
arguments does not matter.
- the behaviour is similar to that of "/bin/ls" in that the '<path>' is
taken as relative to the current working directory. E.g. when you are
in a directory 'sub' that has a directory 'dir', you can run 'git
ls-tree -r HEAD dir' to list the contents of the tree (that is
- 'sub/dir' in 'HEAD'). You don't want to give a tree that is not at the
+ 'sub/dir' in `HEAD`). You don't want to give a tree that is not at the
root level (e.g. `git ls-tree -r HEAD:sub dir`) in this case, as that
- would result in asking for 'sub/sub/dir' in the 'HEAD' commit.
+ would result in asking for 'sub/sub/dir' in the `HEAD` commit.
However, the current working directory can be ignored by passing
--full-tree option.
-t::
Show tree entries even when going to recurse them. Has no effect
- if '-r' was not passed. '-d' implies '-t'.
+ if `-r` was not passed. `-d` implies `-t`.
-l::
--long::
SYNOPSIS
--------
[verse]
-'git mailsplit' [-b] [-f<nn>] [-d<prec>] [--keep-cr] -o<directory> [--] [(<mbox>|<Maildir>)...]
+'git mailsplit' [-b] [-f<nn>] [-d<prec>] [--keep-cr] [--mboxrd]
+ -o<directory> [--] [(<mbox>|<Maildir>)...]
DESCRIPTION
-----------
--keep-cr::
Do not remove `\r` from lines ending with `\r\n`.
+--mboxrd::
+ Input is of the "mboxrd" format and "^>+From " line escaping is
+ reversed.
+
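For example, an mboxrd archive could be split into individual message
files with something like this (the output directory must already exist
and the file name is illustrative):
------------
$ git mailsplit --mboxrd -omsgs/ archive.mboxrd
------------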
GIT
---
Part of the linkgit:git[1] suite
--batch::
Allow building of more than one tree object before exiting. Each
tree is separated by a single blank line. The final new-line is
- optional. Note - if the '-z' option is used, lines are terminated
+ optional. Note - if the `-z` option is used, lines are terminated
with NUL.
GIT
--force::
Force renaming or moving of a file even if the target exists
-k::
- Skip move or rename actions which would lead to an error
+ Skip move or rename actions which would lead to an error
condition. An error happens when a source is neither existing nor
controlled by Git, or when it would overwrite an existing
- file unless '-f' is given.
+ file unless `-f` is given.
-n::
--dry-run::
Do nothing; only show what would happen
-c <object>::
--reedit-message=<object>::
- Like '-C', but with '-c' the editor is invoked, so that
+	Like `-C`, but with `-c` the editor is invoked, so that
the user can further edit the note message.
--allow-empty::
--ref <ref>::
Manipulate the notes tree in <ref>. This overrides
- 'GIT_NOTES_REF' and the "core.notesRef" configuration. The ref
+ `GIT_NOTES_REF` and the "core.notesRef" configuration. The ref
specifies the full refname when it begins with `refs/notes/`; when it
begins with `notes/`, `refs/` and otherwise `refs/notes/` is prefixed
to form a full name of the ref.
notes.displayRef::
Which ref (or refs, if a glob or specified more than once), in
addition to the default set by `core.notesRef` or
- 'GIT_NOTES_REF', to read notes from when showing commit
+ `GIT_NOTES_REF`, to read notes from when showing commit
messages with the 'git log' family of commands.
This setting can be overridden on the command line or by the
- 'GIT_NOTES_DISPLAY_REF' environment variable.
+ `GIT_NOTES_DISPLAY_REF` environment variable.
See linkgit:git-log[1].
notes.rewrite.<command>::
notes from the original to the rewritten commit. Defaults to
`true`. See also "`notes.rewriteRef`" below.
+
-This setting can be overridden by the 'GIT_NOTES_REWRITE_REF'
+This setting can be overridden by the `GIT_NOTES_REWRITE_REF`
environment variable.
notes.rewriteMode::
Does not have a default value; you must configure this variable to
enable note rewriting.
+
-Can be overridden with the 'GIT_NOTES_REWRITE_REF' environment variable.
+Can be overridden with the `GIT_NOTES_REWRITE_REF` environment variable.
ENVIRONMENT
-----------
-'GIT_NOTES_REF'::
+`GIT_NOTES_REF`::
Which ref to manipulate notes from, instead of `refs/notes/commits`.
This overrides the `core.notesRef` setting.
-'GIT_NOTES_DISPLAY_REF'::
+`GIT_NOTES_DISPLAY_REF`::
Colon-delimited list of refs or globs indicating which refs,
in addition to the default from `core.notesRef` or
- 'GIT_NOTES_REF', to read notes from when showing commit
+ `GIT_NOTES_REF`, to read notes from when showing commit
messages.
This overrides the `notes.displayRef` setting.
+
A warning will be issued for refs that do not exist, but a glob that
does not match any refs is silently ignored.
-'GIT_NOTES_REWRITE_MODE'::
+`GIT_NOTES_REWRITE_MODE`::
When copying notes during a rewrite, what to do if the target
commit already has a note.
Must be one of `overwrite`, `concatenate`, `cat_sort_uniq`, or `ignore`.
This overrides the `core.rewriteMode` setting.
-'GIT_NOTES_REWRITE_REF'::
+`GIT_NOTES_REWRITE_REF`::
When rewriting commits, which notes to copy from the original
to the rewritten commit. Must be a colon-delimited list of
refs or globs.
------------
This imports the specified depot into
'refs/remotes/p4/master' in an existing Git repository. The
-'--branch' option can be used to specify a different branch to
+`--branch` option can be used to specify a different branch to
be used for the p4 content.
If a Git repository includes branches 'refs/remotes/origin/p4', these
If there are multiple branches, doing 'git p4 sync' will automatically
use the "BRANCH DETECTION" algorithm to try to partition new changes
-into the right branch. This can be overridden with the '--branch'
+into the right branch. This can be overridden with the `--branch`
option to specify just a single branch to update.
~~~~~~
Submitting changes from a Git repository back to the p4 repository
requires a separate p4 client workspace. This should be specified
-using the 'P4CLIENT' environment variable or the Git configuration
+using the `P4CLIENT` environment variable or the Git configuration
variable 'git-p4.client'. The p4 client must exist, but the client root
will be created and populated if it does not already exist.
------------
The upstream reference is generally 'refs/remotes/p4/master', but can
-be overridden using the '--origin=' command-line option.
+be overridden using the `--origin=` command-line option.
The p4 changes will be created as the user invoking 'git p4 submit'. The
-'--preserve-user' option will cause ownership to be modified
+`--preserve-user` option will cause ownership to be modified
according to the author of the Git commit. This option requires admin
privileges in p4, which can be granted using 'p4 protect'.
All commands except clone accept these options.
--git-dir <dir>::
- Set the 'GIT_DIR' environment variable. See linkgit:git[1].
+ Set the `GIT_DIR` environment variable. See linkgit:git[1].
-v::
--verbose::
where they will be treated as remote-tracking branches by
linkgit:git-branch[1] and other commands. This option instead
puts p4 branches in 'refs/heads/p4/'. Note that future
- sync operations must specify '--import-local' as well so that
+ sync operations must specify `--import-local` as well so that
they can find the p4 branches in refs/heads.
--max-changes <n>::
default, involves removing the entire depot path. With this
option, the full p4 depot path is retained in Git. For example,
path '//depot/main/foo/bar.c', when imported from
- '//depot/main/', becomes 'foo/bar.c'. With '--keep-path', the
+ '//depot/main/', becomes 'foo/bar.c'. With `--keep-path`, the
Git path is instead 'depot/main/foo/bar.c'.
--use-client-spec::
--origin <commit>::
Upstream location from which commits are identified to submit to
p4. By default, this is the most recent p4 commit reachable
- from 'HEAD'.
+ from `HEAD`.
-M::
Detect renames. See linkgit:git-diff[1]. Renames will be
Import all changes from both named depot paths into a single
repository. Only files below these directories are included.
There is not a subdirectory in Git for each "proj1" and "proj2".
- You must use the '--destination' option when specifying more
+ You must use the `--destination` option when specifying more
than one depot path. The revision specifier must be specified
identically on each depot path. If there are files in the
depot paths with the same name, the path with the most recently
The p4 client specification is maintained with the 'p4 client' command
and contains among other fields, a View that specifies how the depot
is mapped into the client repository. The 'clone' and 'sync' commands
-can consult the client spec when given the '--use-client-spec' option or
+can consult the client spec when given the `--use-client-spec` option or
when the useClientSpec variable is true. After 'git p4 clone', the
useClientSpec variable is automatically set in the repository
configuration file. This allows future 'git p4 submit' commands to
can use these mappings to determine branch relationships.
If you have a repository where all the branches of interest exist as
-subdirectories of a single depot path, you can use '--detect-branches'
+subdirectories of a single depot path, you can use `--detect-branches`
when cloning or syncing to have 'git p4' automatically find
subdirectories in p4, and to generate these as branches in Git.
git-p4.useClientSpec::
Specify that the p4 client spec should be used to identify p4
depot paths of interest. This is equivalent to specifying the
- option '--use-client-spec'. See the "CLIENT SPEC" section above.
+ option `--use-client-spec`. See the "CLIENT SPEC" section above.
This variable is a boolean, not the name of a p4 client.
git-p4.pathEncoding::
out of memory with a large window, but still be able to take
advantage of the large window for the smaller objects. The
size can be suffixed with "k", "m", or "g".
- `--window-memory=0` makes memory usage unlimited, which is the
- default.
+ `--window-memory=0` makes memory usage unlimited. The default
+ is taken from the `pack.windowMemory` configuration variable.
--max-pack-size=<n>::
Maximum size of each output pack file. The size can be suffixed with
[verse]
'git push' [--all | --mirror | --tags] [--follow-tags] [--atomic] [-n | --dry-run] [--receive-pack=<git-receive-pack>]
[--repo=<repository>] [-f | --force] [-d | --delete] [--prune] [-v | --verbose]
- [-u | --set-upstream]
+ [-u | --set-upstream] [--push-option=<string>]
[--[no-]signed|--sign=(true|false|if-asked)]
[--force-with-lease[=<refname>[:<expect>]]]
[--no-verify] [<repository> [<refspec>...]]
and also push annotated tags in `refs/tags` that are missing
from the remote but are pointing at commit-ish that are
reachable from the refs being pushed. This can also be specified
- with configuration variable 'push.followTags'. For more
- information, see 'push.followTags' in linkgit:git-config[1].
+ with configuration variable `push.followTags`. For more
+ information, see `push.followTags` in linkgit:git-config[1].
--[no-]signed::
--sign=(true|false|if-asked)::
Either all refs are updated, or on error, no refs are updated.
If the server does not support atomic pushes the push will fail.
+-o::
+--push-option::
+	Transmit the given string to the server, which passes it to
+	the pre-receive as well as the post-receive hook. The given string
+	must not contain a NUL or LF character.
+
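For example, a server-defined string (purely illustrative here) could be
passed to the hooks like this:
------------
$ git push -o ci.skip origin topic
------------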
--receive-pack=<git-receive-pack>::
--exec=<git-receive-pack>::
Path to the 'git-receive-pack' program on the remote
+
`--force-with-lease=<refname>:<expect>` will protect the named ref (alone),
if it is going to be updated, by requiring its current value to be
-the same as the specified value <expect> (which is allowed to be
+the same as the specified value `<expect>` (which is allowed to be
different from the remote-tracking branch we have for the refname,
or we do not even have to have such a remote-tracking branch when
-this form is used).
+this form is used). If `<expect>` is the empty string, then the named ref
+must not already exist.
+
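For example, to push a new branch only if it does not yet exist on the
remote (branch name illustrative):
------------
$ git push --force-with-lease=topic: origin topic
------------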
Note that all forms other than `--force-with-lease=<refname>:<expect>`
that specifies the expected current value of the ref explicitly are
For every branch that is up to date or successfully pushed, add
upstream (tracking) reference, used by argument-less
linkgit:git-pull[1] and other commands. For more information,
- see 'branch.<name>.merge' in linkgit:git-config[1].
+ see `branch.<name>.merge` in linkgit:git-config[1].
--[no-]thin::
These options are passed to linkgit:git-send-pack[1]. A thin transfer
all submodules that changed in the revisions to be pushed will be
pushed. If on-demand was not able to push all necessary revisions
it will also be aborted and exit with non-zero status. A value of
- 'no' or using '--no-recurse-submodules' can be used to override the
+ 'no' or using `--no-recurse-submodules` can be used to override the
push.recurseSubmodules configuration variable when no submodule
recursion is required.
The directory to find the quilt patches.
+
The default for the patch directory is patches
-or the value of the $QUILT_PATCHES environment
+or the value of the `$QUILT_PATCHES` environment
variable.
--series <file>::
The quilt series file.
+
The default for the series file is <patches>/series
-or the value of the $QUILT_SERIES environment
+or the value of the `$QUILT_SERIES` environment
variable.
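For example, a quilt series kept in a non-default directory could be
imported with something like this (the path is illustrative):
------------
$ QUILT_PATCHES=debian/patches git quiltimport
------------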
GIT
rebase. False by default.
rebase.autoSquash::
- If set to true enable '--autosquash' option by default.
+	If set to true, enable the `--autosquash` option by default.
rebase.autoStash::
- If set to true enable '--autostash' option by default.
+	If set to true, enable the `--autostash` option by default.
rebase.missingCommitsCheck::
If set to "warn", print warnings about removed commits in
done. "ignore" by default.
rebase.instructionFormat::
- Custom commit list format to use during an '--interactive' rebase.
+ Custom commit list format to use during an `--interactive` rebase.
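For example, the auto-squash and auto-stash behaviours described above
could be enabled by default with:
------------
$ git config --global rebase.autoSquash true
$ git config --global rebase.autoStash true
------------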
OPTIONS
-------
"fixup! " or "squash! " after the first, in case you referred to an
earlier fixup/squash with `git commit --fixup/--squash`.
+
-This option is only valid when the '--interactive' option is used.
+This option is only valid when the `--interactive` option is used.
+
-If the '--autosquash' option is enabled by default using the
+If the `--autosquash` option is enabled by default using the
configuration variable `rebase.autoSquash`, this option can be
used to override and disable this setting.
If only <infd> is given, it is assumed to be a bidirectional socket connected
to remote Git server (git-upload-pack, git-receive-pack or
-git-upload-achive). If both <infd> and <outfd> are given, they are assumed
+git-upload-archive). If both <infd> and <outfd> are given, they are assumed
to be pipes connected to a remote Git server (<infd> being the inbound pipe
and <outfd> being the outbound pipe).
Retrieves the URLs for a remote. Configurations for `insteadOf` and
`pushInsteadOf` are expanded here. By default, only the first URL is listed.
+
-With '--push', push URLs are queried rather than fetch URLs.
+With `--push`, push URLs are queried rather than fetch URLs.
+
-With '--all', all URLs for the remote will be listed.
+With `--all`, all URLs for the remote will be listed.
'set-url'::
regex <oldurl> (first URL if no <oldurl> is given) to <newurl>. If
<oldurl> doesn't match any URL, an error occurs and nothing is changed.
+
-With '--push', push URLs are manipulated instead of fetch URLs.
+With `--push`, push URLs are manipulated instead of fetch URLs.
+
-With '--add', instead of changing existing URLs, new URL is added.
+With `--add`, instead of changing existing URLs, a new URL is added.
+
-With '--delete', instead of changing existing URLs, all URLs matching
+With `--delete`, instead of changing existing URLs, all URLs matching
regex <url> are deleted for remote <name>. Trying to delete all
non-push URLs is an error.
+
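For example, a separate push URL could be added to an existing remote
like this (the URL is illustrative):
------------
$ git remote set-url --add --push origin git@example.com:user/repo.git
$ git remote get-url --all --push origin
------------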
pack everything referenced into a single pack.
Especially useful when packing a repository that is used
for private development. Use
- with '-d'. This will clean up the objects that `git prune`
+ with `-d`. This will clean up the objects that `git prune`
leaves behind, but `git fsck --full --dangling` shows as
dangling.
+
other objects in that pack they already have locally.
-A::
- Same as `-a`, unless '-d' is used. Then any unreachable
+ Same as `-a`, unless `-d` is used. Then any unreachable
objects in a previous pack become loose, unpacked objects,
instead of being left in the old pack. Unreachable objects
are never intentionally added to a pack, even when repacking.
out of memory with a large window, but still be able to take
advantage of the large window for the smaller objects. The
size can be suffixed with "k", "m", or "g".
- `--window-memory=0` makes memory usage unlimited, which is the
- default.
+ `--window-memory=0` makes memory usage unlimited. The default
+ is taken from the `pack.windowMemory` configuration variable.
+ Note that the actual memory usage will be the limit multiplied
+ by the number of threads used by linkgit:git-pack-objects[1].
--max-pack-size=<n>::
Maximum size of each output pack file. The size can be suffixed with
with `-b` or `repack.writeBitmaps`, as it ensures that the
bitmapped packfile has the necessary objects.
+--unpack-unreachable=<when>::
+ When loosening unreachable objects, do not bother loosening any
+ objects older than `<when>`. This can be used to optimize out
+ the write of any objects that would be immediately pruned by
+ a follow-up `git prune`.
+
+-k::
+--keep-unreachable::
+ When used with `-ad`, any unreachable objects from existing
+ packs will be appended to the end of the packfile instead of
+ being removed. In addition, any unreachable loose objects will
+ be packed (and their loose counterparts removed).
+
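For example (illustrative invocations), unreachable objects could either
be loosened and then aged out, or kept in the new pack:
------------
$ git repack -A -d --unpack-unreachable=2.weeks.ago
$ git repack -a -d -k
------------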
Configuration
-------------
shows information about commit 'bar'.
-The 'GIT_NO_REPLACE_OBJECTS' environment variable can be set to
+The `GIT_NO_REPLACE_OBJECTS` environment variable can be set to
achieve the same effect as the `--no-replace-objects` option.
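For example, replacement refs could be ignored for a single invocation
like this:
------------
$ GIT_NO_REPLACE_OBJECTS=1 git log
------------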
OPTIONS
Note: 'git revert' is used to record some new commits to reverse the
effect of some earlier commits (often only a faulty one). If you want to
throw away all uncommitted changes in your working directory, you
-should see linkgit:git-reset[1], particularly the '--hard' option. If
+should see linkgit:git-reset[1], particularly the `--hard` option. If
you want to extract specific files as they were in another commit, you
should see linkgit:git-checkout[1], specifically the `git checkout
<commit> -- <filename>` syntax. Take care with these alternatives as
For a more complete list of ways to spell commit names, see
linkgit:gitrevisions[7].
Sets of commits can also be given but no traversal is done by
- default, see linkgit:git-rev-list[1] and its '--no-walk'
+ default, see linkgit:git-rev-list[1] and its `--no-walk`
option.
-e::
--annotate::
Review and edit each patch you're about to send. Default is the value
- of 'sendemail.annotate'. See the CONFIGURATION section for
- 'sendemail.multiEdit'.
+ of `sendemail.annotate`. See the CONFIGURATION section for
+ `sendemail.multiEdit`.
--bcc=<address>,...::
Specify a "Bcc:" value for each email. Default is the value of
- 'sendemail.bcc'.
+ `sendemail.bcc`.
+
This option may be specified multiple times.
--cc=<address>,...::
Specify a starting "Cc:" value for each email.
- Default is the value of 'sendemail.cc'.
+ Default is the value of `sendemail.cc`.
+
This option may be specified multiple times.
Invoke a text editor (see GIT_EDITOR in linkgit:git-var[1])
to edit an introductory message for the patch series.
+
-When '--compose' is used, git send-email will use the From, Subject, and
+When `--compose` is used, git send-email will use the From, Subject, and
In-Reply-To headers specified in the message. If the body of the message
(what you type after the headers and a blank line) only contains blank
(or Git: prefixed) lines, the summary won't be sent, but From, Subject,
+
Missing From or In-Reply-To headers will be prompted for.
+
-See the CONFIGURATION section for 'sendemail.multiEdit'.
+See the CONFIGURATION section for `sendemail.multiEdit`.
--from=<address>::
Specify the sender of the emails. If not specified on the command line,
- the value of the 'sendemail.from' configuration option is used. If
- neither the command-line option nor 'sendemail.from' are set, then the
+ the value of the `sendemail.from` configuration option is used. If
+ neither the command-line option nor `sendemail.from` are set, then the
user will be prompted for the value. The default for the prompt will be
the value of GIT_AUTHOR_IDENT, or GIT_COMMITTER_IDENT if that is not
set, as returned by "git var -l".
--to=<address>,...::
Specify the primary recipient of the emails generated. Generally, this
will be the upstream maintainer of the project involved. Default is the
- value of the 'sendemail.to' configuration value; if that is unspecified,
+ value of the `sendemail.to` configuration value; if that is unspecified,
and --to-cmd is not specified, this will be prompted for.
+
This option may be specified multiple times.
can be useful when the repository contains files that contain carriage
returns, but makes the raw patch email file (as saved from a MUA) much
harder to inspect manually. base64 is even more fool proof, but also
- even more opaque. Default is the value of the 'sendemail.transferEncoding'
+ even more opaque. Default is the value of the `sendemail.transferEncoding`
configuration value; if that is unspecified, git will use 8bit and not
add a Content-Transfer-Encoding header.
subscribed to a list. In order to use the 'From' address, set the
value to "auto". If you use the sendmail binary, you must have
suitable privileges for the -f parameter. Default is the value of the
- 'sendemail.envelopeSender' configuration variable; if that is
+ `sendemail.envelopeSender` configuration variable; if that is
unspecified, choosing the envelope sender is left to your MTA.
--smtp-encryption=<encryption>::
Specify the encryption to use, either 'ssl' or 'tls'. Any other
value reverts to plain SMTP. Default is the value of
- 'sendemail.smtpEncryption'.
+ `sendemail.smtpEncryption`.
--smtp-domain=<FQDN>::
Specifies the Fully Qualified Domain Name (FQDN) used in the
HELO/EHLO command to the SMTP server. Some servers require the
FQDN to match your IP address. If not set, git send-email attempts
to determine your FQDN automatically. Default is the value of
- 'sendemail.smtpDomain'.
+ `sendemail.smtpDomain`.
--smtp-auth=<mechanisms>::
Whitespace-separated list of allowed SMTP-AUTH mechanisms. This setting
+
If at least one of the specified mechanisms matches the ones advertised by the
SMTP server and if it is supported by the utilized SASL library, the mechanism
-is used for authentication. If neither 'sendemail.smtpAuth' nor '--smtp-auth'
+is used for authentication. If neither `sendemail.smtpAuth` nor `--smtp-auth`
is specified, all mechanisms supported by the SASL library can be used.
--smtp-pass[=<password>]::
Password for SMTP-AUTH. The argument is optional: If no
argument is specified, then the empty string is used as
- the password. Default is the value of 'sendemail.smtpPass',
- however '--smtp-pass' always overrides this value.
+ the password. Default is the value of `sendemail.smtpPass`,
+ however `--smtp-pass` always overrides this value.
+
Furthermore, passwords need not be specified in configuration files
or on the command line. If a username has been specified (with
-'--smtp-user' or a 'sendemail.smtpUser'), but no password has been
-specified (with '--smtp-pass' or 'sendemail.smtpPass'), then
+`--smtp-user` or `sendemail.smtpUser`), but no password has been
+specified (with `--smtp-pass` or `sendemail.smtpPass`), then
a password is obtained using 'git-credential'.
--smtp-server=<host>::
`smtp.example.com` or a raw IP address). Alternatively it can
specify a full pathname of a sendmail-like program instead;
the program must support the `-i` option. Default value can
- be specified by the 'sendemail.smtpServer' configuration
+ be specified by the `sendemail.smtpServer` configuration
option; the built-in default is `/usr/sbin/sendmail` or
`/usr/lib/sendmail` if such program is available, or
`localhost` otherwise.
submission port 587, or the common SSL smtp port 465);
symbolic port names (e.g. "submission" instead of 587)
are also accepted. The port can also be set with the
- 'sendemail.smtpServerPort' configuration variable.
+ `sendemail.smtpServerPort` configuration variable.
--smtp-server-option=<option>::
If set, specifies the outgoing SMTP server option to use.
- Default value can be specified by the 'sendemail.smtpServerOption'
+ Default value can be specified by the `sendemail.smtpServerOption`
configuration option.
+
The --smtp-server-option option must be repeated for each option you want
certificates concatenated together: see verify(1) -CAfile and
-CApath for more information on these). Set it to an empty string
to disable certificate verification. Defaults to the value of the
- 'sendemail.smtpsslcertpath' configuration variable, if set, or the
+ `sendemail.smtpsslcertpath` configuration variable, if set, or the
backing SSL library's compiled-in default otherwise (which should
be the best choice on most platforms).
--smtp-user=<user>::
- Username for SMTP-AUTH. Default is the value of 'sendemail.smtpUser';
- if a username is not specified (with '--smtp-user' or 'sendemail.smtpUser'),
+ Username for SMTP-AUTH. Default is the value of `sendemail.smtpUser`;
+ if a username is not specified (with `--smtp-user` or `sendemail.smtpUser`),
then authentication is not attempted.
--smtp-debug=0|1::
Specify a command to execute once per patch file which
should generate patch file specific "Cc:" entries.
Output of this command must be single email address per line.
- Default is the value of 'sendemail.ccCmd' configuration value.
+ Default is the value of `sendemail.ccCmd` configuration value.
--[no-]chain-reply-to::
If this is set, each email will be sent as a reply to the previous
email sent. If disabled with "--no-chain-reply-to", all emails after
the first will be sent as replies to the first email sent. When using
this, it is recommended that the first file given be an overview of the
- entire patch series. Disabled by default, but the 'sendemail.chainReplyTo'
+ entire patch series. Disabled by default, but the `sendemail.chainReplyTo`
configuration variable can be used to enable it.
--identity=<identity>::
A configuration identity. When given, causes values in the
'sendemail.<identity>' subsection to take precedence over
values in the 'sendemail' section. The default identity is
- the value of 'sendemail.identity'.
+ the value of `sendemail.identity`.
--[no-]signed-off-by-cc::
If this is set, add emails found in Signed-off-by: or Cc: lines to the
- cc list. Default is the value of 'sendemail.signedoffbycc' configuration
+ cc list. Default is the value of `sendemail.signedoffbycc` configuration
value; if that is unspecified, default to --signed-off-by-cc.
--[no-]cc-cover::
- 'all' will suppress all auto cc values.
--
+
-Default is the value of 'sendemail.suppresscc' configuration value; if
+Default is the value of `sendemail.suppresscc` configuration value; if
that is unspecified, default to 'self' if --suppress-from is
specified, as well as 'body' if --no-signed-off-cc is specified.
--[no-]suppress-from::
If this is set, do not add the From: address to the cc: list.
- Default is the value of 'sendemail.suppressFrom' configuration
+ Default is the value of `sendemail.suppressFrom` configuration
value; if that is unspecified, default to --no-suppress-from.
--[no-]thread::
+
If disabled with "--no-thread", those headers will not be added
(unless specified with --in-reply-to). Default is the value of the
-'sendemail.thread' configuration value; if that is unspecified,
+`sendemail.thread` configuration value; if that is unspecified,
default to --thread.
+
It is up to the user to ensure that no In-Reply-To header already
- 'auto' is equivalent to 'cc' + 'compose'
--
+
-Default is the value of 'sendemail.confirm' configuration value; if that
+Default is the value of `sendemail.confirm` configuration value; if that
is unspecified, default to 'auto' unless any of the suppress options
have been specified, in which case default to 'compose'.
--[no-]format-patch::
When an argument may be understood either as a reference or as a file name,
- choose to understand it as a format-patch argument ('--format-patch')
- or as a file name ('--no-format-patch'). By default, when such a conflict
+ choose to understand it as a format-patch argument (`--format-patch`)
+ or as a file name (`--no-format-patch`). By default, when such a conflict
occurs, git send-email will fail.
--quiet::
is due to SMTP limits as described by http://www.ietf.org/rfc/rfc2821.txt.
--
+
-Default is the value of 'sendemail.validate'; if this is not set,
-default to '--validate'.
+Default is the value of `sendemail.validate`; if this is not set,
+default to `--validate`.
--force::
Send emails even if safety checks would prevent it.
sendemail.aliasesFile::
To avoid typing long email addresses, point this to one or more
- email aliases files. You must also supply 'sendemail.aliasFileType'.
+ email aliases files. You must also supply `sendemail.aliasFileType`.
sendemail.aliasFileType::
Format of the file(s) specified in sendemail.aliasesFile. Must be
sendemail.multiEdit::
If true (default), a single editor instance will be spawned to edit
- files you have to edit (patches when '--annotate' is used, and the
- summary when '--compose' is used). If false, files will be edited one
+ files you have to edit (patches when `--annotate` is used, and the
+ summary when `--compose` is used). If false, files will be edited one
after the other, spawning a new editor each time.
sendemail.confirm::
Sets the default for whether to confirm before sending. Must be
- one of 'always', 'never', 'cc', 'compose', or 'auto'. See '--confirm'
+ one of 'always', 'never', 'cc', 'compose', or 'auto'. See `--confirm`
in the previous section for the meaning of these values.
EXAMPLE
option, then the refs from stdin are processed after those
on the command line.
+
-If '--stateless-rpc' is specified together with this option then
+If `--stateless-rpc` is specified together with this option then
the list of refs must be in packet format (pkt-line). Each ref must
be in a separate packet, and the list must end with a flush packet.
There are three ways to specify which refs to update on the
remote end.
-With '--all' flag, all refs that exist locally are transferred to
+With the `--all` flag, all refs that exist locally are transferred to
the remote side. You cannot specify any '<ref>' if you use
this flag.
-Without '--all' and without any '<ref>', the heads that exist
+Without `--all` and without any '<ref>', the heads that exist
both on the local side and on the remote side are updated.
When one or more '<ref>' are specified explicitly (whether on the
exist in the set of remote refs; the ref matched <src>
locally is used as the name of the destination.
-Without '--force', the <src> ref is stored at the remote only if
+Without `--force`, the <src> ref is stored at the remote only if
<dst> does not exist, or <dst> is a proper subset (i.e. an
ancestor) of <src>. This check, known as "fast-forward check",
is performed in order to avoid accidentally overwriting the
remote ref and losing other people's commits from there.
-With '--force', the fast-forward check is disabled for all refs.
+With `--force`, the fast-forward check is disabled for all refs.
Optionally, a <ref> parameter can be prefixed with a plus '+' sign
to disable the fast-forward check only on that ref.
die with the usage message.
set_reflog_action::
- Set GIT_REFLOG_ACTION environment to a given string (typically
+	Set the `GIT_REFLOG_ACTION` environment variable to a given string (typically
the name of the program) unless it is already set. Whenever
the script runs a `git` command that updates refs, a reflog
entry is created using the value of this string to leave the
COMMANDS
--------
-'git shell' accepts the following commands after the '-c' option:
+'git shell' accepts the following commands after the `-c` option:
'git receive-pack <argument>'::
'git upload-pack <argument>'::
INTERACTIVE USE
---------------
-By default, the commands above can be executed only with the '-c'
+By default, the commands above can be executed only with the `-c`
option; the shell is not interactive.
If a `~/git-shell-commands` directory is present, 'git shell'
are shown before their parents).
--date-order::
- This option is similar to '--topo-order' in the sense that no
+ This option is similar to `--topo-order` in the sense that no
parent comes before all of its children, but otherwise commits
are ordered according to their commit date.
Enable stricter reference checking by requiring an exact ref path.
Aside from returning an error code of 1, it will also print an error
- message if '--quiet' was not specified.
+ message if `--quiet` was not specified.
--abbrev[=<n>]::
-q::
--quiet::
- Do not print any results to stdout. When combined with '--verify' this
+ Do not print any results to stdout. When combined with `--verify` this
can be used to silently check if a reference exists.
--exclude-existing[=<pattern>]::
This will show "refs/heads/master" but also "refs/remote/other-repo/master",
if such references exist.
-When using the '--verify' flag, the command requires an exact path:
+When using the `--verify` flag, the command requires an exact path:
-----------------------------------------------------------------------------
git show-ref --verify refs/heads/master
'git submodule' [--quiet] init [--] [<path>...]
'git submodule' [--quiet] deinit [-f|--force] (--all|[--] <path>...)
'git submodule' [--quiet] update [--init] [--remote] [-N|--no-fetch]
- [-f|--force] [--rebase|--merge] [--reference <repository>]
- [--depth <depth>] [--recursive] [--jobs <n>] [--] [<path>...]
+ [--[no-]recommend-shallow] [-f|--force] [--rebase|--merge]
+ [--reference <repository>] [--depth <depth>] [--recursive]
+ [--jobs <n>] [--] [<path>...]
'git submodule' [--quiet] summary [--cached|--files] [(-n|--summary-limit) <n>]
[commit] [--] [<path>...]
'git submodule' [--quiet] foreach [--recursive] <command>
clone with a history truncated to the specified number of revisions.
See linkgit:git-clone[1]
+--[no-]recommend-shallow::
+ This option is only valid for the update command.
+ The initial clone of a submodule will use the recommended
+ `submodule.<name>.shallow` as provided by the .gitmodules file
+ by default. To ignore the suggestions use `--no-recommend-shallow`.
+
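A `.gitmodules` entry recommending a shallow clone might look like this
(the submodule name, path and URL are illustrative):
------------
[submodule "lib/foo"]
	path = lib/foo
	url = https://example.com/foo.git
	shallow = true
------------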
-j <n>::
--jobs <n>::
This option is only valid for the update command.
--ignore-paths=<regex>;;
When passed to 'init' or 'clone' this regular expression will
be preserved as a config key. See 'fetch' for a description
- of '--ignore-paths'.
+ of `--ignore-paths`.
--include-paths=<regex>;;
When passed to 'init' or 'clone' this regular expression will
be preserved as a config key. See 'fetch' for a description
- of '--include-paths'.
+ of `--include-paths`.
--no-minimize-url;;
When tracking multiple directories (using --stdlayout,
--branches, or --tags options), git svn will attempt to connect
repository. This default allows better tracking of history if
entire projects are moved within a repository, but may cause
issues on repositories where read access restrictions are in
- place. Passing '--no-minimize-url' will allow git svn to
+ place. Passing `--no-minimize-url` will allow git svn to
accept URLs as-is without attempting to connect to a higher
level directory. This option is off by default when only
one URL/branch is tracked (it would do little good).
--ignore-paths=<regex>;;
This allows one to specify a Perl regular expression that will
cause skipping of all matching paths from checkout from SVN.
- The '--ignore-paths' option should match for every 'fetch'
+ The `--ignore-paths` option should match for every 'fetch'
(including automatic fetches due to 'clone', 'dcommit',
'rebase', etc) on a given repository.
+
--include-paths=<regex>;;
This allows one to specify a Perl regular expression that will
cause the inclusion of only matching paths from checkout from SVN.
- The '--include-paths' option should match for every 'fetch'
+ The `--include-paths` option should match for every 'fetch'
(including automatic fetches due to 'clone', 'dcommit',
- 'rebase', etc) on a given repository. '--ignore-paths' takes
- precedence over '--include-paths'.
+ 'rebase', etc) on a given repository. `--ignore-paths` takes
+ precedence over `--include-paths`.
+
[verse]
config key: svn-remote.<name>.include-paths
or if a second argument is passed; it will create a directory
and work within that. It accepts all arguments that the
'init' and 'fetch' commands accept; with the exception of
- '--fetch-all' and '--parent'. After a repository is cloned,
+ `--fetch-all` and `--parent`. After a repository is cloned,
the 'fetch' command will be able to update revisions without
affecting the working tree; and the 'rebase' command will be
able to update the working tree with the latest changes.
'git merge' for ease of dcommitting with 'git svn'.
+
This accepts all options that 'git svn fetch' and 'git rebase'
-accept. However, '--fetch-all' only fetches from the current
+accept. However, `--fetch-all` only fetches from the current
[svn-remote], and not all [svn-remote] definitions.
+
Like 'git rebase'; this requires that the working tree be clean
Gets the Subversion property given as the first argument, for a
file. A specific revision can be specified with -r/--revision.
+'propset'::
+ Sets the Subversion property given as the first argument, to the
+ value given as the second argument for the file given as the
+ third argument.
++
+Example:
++
+------------------------------------------------------------------------
+git svn propset svn:keywords "FreeBSD=%H" devel/py-tipper/Makefile
+------------------------------------------------------------------------
++
+This will set the property 'svn:keywords' to 'FreeBSD=%H' for the file
+'devel/py-tipper/Makefile'.
+
'show-externals'::
Shows the Subversion externals. Use -r/--revision to specify a
specific revision.
with the committer name as the first argument. The program is
expected to return a single line of the form "Name <email>",
which will be treated as if included in the authors file.
++
+[verse]
+config key: svn.authorsProg
-q::
--quiet::
svn-remote.<name>.pushurl::
- Similar to Git's 'remote.<name>.pushurl', this key is designed
+ Similar to Git's `remote.<name>.pushurl`, this key is designed
to be used in cases where 'url' points to an SVN repository
via a read-only transport, to provide an alternate read/write
transport. It is assumed that both keys point to the same
Git commit to serve as parent. This will happen, among other reasons,
if the SVN branch is a copy of a revision that was not fetched by 'git
svn' (e.g. because it is an old revision that was skipped with
-'--revision'), or if in SVN a directory was copied that is not tracked
+`--revision`), or if in SVN a directory was copied that is not tracked
by 'git svn' (such as a branch that is not tracked at all, or a
subdirectory of a tracked branch). In these cases, 'git svn' will still
create a Git branch, but instead of using an existing Git commit as the
copy of a complete repository, for projects with many branches it will
lead to a working copy many times larger than just the trunk. Thus for
projects using the standard directory structure (trunk/branches/tags),
-it is recommended to clone with option '--stdlayout'. If the project
+it is recommended to clone with option `--stdlayout`. If the project
uses a non-standard structure, and/or if branches and tags are not
required, it is easiest to only clone one directory (typically trunk),
without giving any repository layout options. If the full history with
-branches and tags is required, the options '--trunk' / '--branches' /
-'--tags' must be used.
+branches and tags is required, the options `--trunk` / `--branches` /
+`--tags` must be used.
When using multiple --branches or --tags, 'git svn' does not automatically
handle name collisions (for example, if two branches from different paths have
-v::
--verify::
- Verify the gpg signature of the given tag names.
+ Verify the GPG signature of the given tag names.
-n<num>::
<num> specifies how many lines from the annotation, if any,
order can also be affected by the
"versionsort.prereleaseSuffix" configuration variable.
The keys supported are the same as those in `git for-each-ref`.
- Sort order defaults to the value configured for the 'tag.sort'
+ Sort order defaults to the value configured for the `tag.sort`
variable if it exists, or lexicographic order otherwise. See
linkgit:git-config[1].
--[no-]merged [<commit>]::
Only list tags whose tips are reachable, or not reachable
- if '--no-merged' is used, from the specified commit ('HEAD'
+ if `--no-merged` is used, from the specified commit (`HEAD`
if not specified).
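For example, tags whose tips are not reachable from a given release tag
could be listed with (tag name illustrative):
------------
$ git tag --no-merged v2.9.0
------------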
CONFIGURATION
you will need to handle the situation manually.
--really-refresh::
- Like '--refresh', but checks stat information unconditionally,
+ Like `--refresh`, but checks stat information unconditionally,
without regard to the "assume unchanged" setting.
--[no-]skip-worktree::
Using --refresh
---------------
-'--refresh' does not calculate a new sha1 file or bring the index
+`--refresh` does not calculate a new sha1 file or bring the index
up-to-date for mode/content changes. But what it *does* do is to
"re-match" the stat information of a file with the index, so that you
can refresh the index for a file that hasn't been changed but where
Using --cacheinfo or --info-only
--------------------------------
-'--cacheinfo' is used to register a file that is not in the
+`--cacheinfo` is used to register a file that is not in the
current working directory. This is useful for minimum-checkout
merging.
$ git update-index --cacheinfo <mode>,<sha1>,<path>
----------------
-'--info-only' is used to register files without placing them in the object
+`--info-only` is used to register files without placing them in the object
database. This is useful for status-only repositories.
-Both '--cacheinfo' and '--info-only' behave similarly: the index is updated
-but the object database isn't. '--cacheinfo' is useful when the object is
-in the database but the file isn't available locally. '--info-only' is
+Both `--cacheinfo` and `--info-only` behave similarly: the index is updated
+but the object database isn't. `--cacheinfo` is useful when the object is
+in the database but the file isn't available locally. `--info-only` is
useful when the file is available, but you do not wish to update the
object database.
SYNOPSIS
--------
[verse]
-'git-upload-pack' [--strict] [--timeout=<n>] <directory>
-
+'git-upload-pack' [--[no-]strict] [--timeout=<n>] [--stateless-rpc]
+ [--advertise-refs] <directory>
DESCRIPTION
-----------
Invoked by 'git fetch-pack', learns what
OPTIONS
-------
---strict::
+--[no-]strict::
Do not try <directory>/.git/ if <directory> is no Git directory.
--timeout=<n>::
Interrupt transfer after <n> seconds of inactivity.
+--stateless-rpc::
+ Perform only a single read-write cycle with stdin and stdout.
+ This fits with the HTTP POST request processing model where
+ a program may read the request, write a response, and must exit.
+
+--advertise-refs::
+ Only the initial ref advertisement is output, and the program exits
+ immediately. This fits with the HTTP GET request model, where
+ no request content is received but a response must be produced.
+
<directory>::
The repository to sync from.
DESCRIPTION
-----------
-Validates the gpg signature created by 'git commit -S'.
+Validates the GPG signature created by 'git commit -S'.
OPTIONS
-------
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The web browser can be specified using a configuration variable passed
-with the -c (or --config) command-line option, or the 'web.browser'
+with the -c (or --config) command-line option, or the `web.browser`
configuration variable if the former is not used.
browser.<tool>.path
~~~~~~~~~~~~~~~~~~~
You can explicitly provide a full path to your preferred browser by
-setting the configuration variable 'browser.<tool>.path'. For example,
+setting the configuration variable `browser.<tool>.path`. For example,
you can configure the absolute path to firefox by setting
'browser.firefox.path'. Otherwise, 'git web{litdd}browse' assumes the tool
is available in PATH.
When the browser, specified by options or configuration variables, is
not among the supported ones, then the corresponding
-'browser.<tool>.cmd' configuration variable will be looked up. If this
+`browser.<tool>.cmd` configuration variable will be looked up. If this
variable exists then 'git web{litdd}browse' will treat the specified tool
as a custom command and will use a shell eval to run the command with
the URLs passed as arguments.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Note that these configuration variables should probably be set using
-the '--global' flag, for example like this:
+the `--global` flag, for example like this:
------------------------------------------------
$ git config --global web.browser firefox
--------
[verse]
'git worktree add' [-f] [--detach] [--checkout] [-b <new-branch>] <path> [<branch>]
-'git worktree prune' [-n] [-v] [--expire <expire>]
'git worktree list' [--porcelain]
+'git worktree lock' [--reason <string>] <worktree>
+'git worktree prune' [-n] [-v] [--expire <expire>]
+'git worktree unlock' <worktree>
DESCRIPTION
-----------
If a linked working tree is stored on a portable device or network share
which is not always mounted, you can prevent its administrative files from
-being pruned by creating a file named 'locked' alongside the other
-administrative files, optionally containing a plain text reason that
-pruning should be suppressed. See section "DETAILS" for more information.
+being pruned by issuing the `git worktree lock` command, optionally
+specifying `--reason` to explain why the working tree is locked.
COMMANDS
--------
Create `<path>` and checkout `<branch>` into it. The new working directory
is linked to the current repository, sharing everything except working
-directory specific files such as HEAD, index, etc.
+directory specific files such as HEAD, index, etc. `-` may also be
+specified as `<branch>`; it is synonymous with `@{-1}`.
+
If `<branch>` is omitted and neither `-b` nor `-B` nor `--detach` is used,
then, as a convenience, a new branch based at HEAD is created automatically,
as if `-b $(basename <path>)` was specified.
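For example, the previously checked-out branch could be placed in a new
working tree with (path illustrative):
------------
$ git worktree add ../prev -
------------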
-prune::
-
-Prune working tree information in $GIT_DIR/worktrees.
-
list::
List details of each worktree. The main worktree is listed first, followed by
bare, the revision currently checked out, and the branch currently checked out
(or 'detached HEAD' if none).
+lock::
+
+If a working tree is on a portable device or network share which
+is not always mounted, lock it to prevent its administrative
+files from being pruned automatically. This also prevents it from
+being moved or deleted. Optionally, specify a reason for the lock
+with `--reason`.
+
+prune::
+
+Prune working tree information in $GIT_DIR/worktrees.
+
+unlock::
+
+Unlock a working tree, allowing it to be pruned, moved or deleted.
+
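For example, a working tree kept on a removable drive could be protected
and later released like this (path and reason are illustrative):
------------
$ git worktree lock --reason "on my USB drive" ../portable
$ git worktree unlock ../portable
------------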
OPTIONS
-------
--expire <time>::
With `prune`, only expire unused working trees older than <time>.
+--reason <string>::
+ With `lock`, an explanation why the working tree is locked.
+
+<worktree>::
+ Working trees can be identified by path, either relative or
+ absolute.
++
+If the last path component in the working tree's path is unique among
+working trees, it can be used to identify a working tree. For example, if
+you only have two working trees, at "/abc/def/ghi" and "/abc/def/ggg",
+then "ghi" or "def/ghi" is enough to point to the former working tree.
+
DETAILS
-------
Each linked working tree has a private sub-directory in the repository's
To prevent a $GIT_DIR/worktrees entry from being pruned (which
can be useful in some situations, such as when the
-entry's working tree is stored on a portable device), add a file named
+entry's working tree is stored on a portable device), use the
+`git worktree lock` command, which adds a file named
'locked' to the entry's directory. The file contains the reason in
plain text. For example, if a linked working tree's `.git` file points
to `/path/main/.git/worktrees/test-next` then a file named
- `remove` to remove a linked working tree and its administrative files (and
warn if the working tree is dirty)
- `mv` to move or rename a working tree and update its administrative files
-- `lock` to prevent automatic pruning of administrative files (for instance,
- for a working tree on a portable device)
GIT
---
individual Git commands with "git help command". linkgit:gitcli[7]
manual page gives you an overview of the command-line command syntax.
-Formatted and hyperlinked version of the latest Git documentation
-can be viewed at `http://git-htmldocs.googlecode.com/git/git.html`.
+A formatted and hyperlinked copy of the latest Git documentation
+can be viewed at `https://git.github.io/htmldocs/git.html`.
ifdef::stalenotes[]
[NOTE]
branch of the `git.git` repository.
Documentation for older releases is available here:
-* link:v2.9.0/git.html[documentation for release 2.9]
+* link:v2.9.3/git.html[documentation for release 2.9.3]
* release notes for
+ link:RelNotes/2.9.3.txt[2.9.3],
+ link:RelNotes/2.9.2.txt[2.9.2],
+ link:RelNotes/2.9.1.txt[2.9.1],
link:RelNotes/2.9.0.txt[2.9].
* link:v2.8.4/git.html[documentation for release 2.8.4]
--help::
Prints the synopsis and a list of the most commonly used
- commands. If the option '--all' or '-a' is given then all
+ commands. If the option `--all` or `-a` is given then all
available commands are printed. If a Git command is named this
option will bring up the manual page for that command.
+
--git-dir=<path>::
Set the path to the repository. This can also be controlled by
- setting the GIT_DIR environment variable. It can be an absolute
+ setting the `GIT_DIR` environment variable. It can be an absolute
path or relative path to current working directory.
--work-tree=<path>::
is worth noting that they may be used/overridden by SCMS sitting above
Git so take care if using a foreign front-end.
-'GIT_INDEX_FILE'::
+`GIT_INDEX_FILE`::
This environment allows the specification of an alternate
index file. If not specified, the default of `$GIT_DIR/index`
is used.
-'GIT_INDEX_VERSION'::
+`GIT_INDEX_VERSION`::
This environment variable allows the specification of an index
version for new repositories. It won't affect existing index
files. By default index file version 2 or 3 is used. See
linkgit:git-update-index[1] for more information.
-'GIT_OBJECT_DIRECTORY'::
+`GIT_OBJECT_DIRECTORY`::
If the object storage directory is specified via this
environment variable then the sha1 directories are created
underneath - otherwise the default `$GIT_DIR/objects`
directory is used.
-'GIT_ALTERNATE_OBJECT_DIRECTORIES'::
+`GIT_ALTERNATE_OBJECT_DIRECTORIES`::
Due to the immutable nature of Git objects, old objects can be
archived into shared, read-only directories. This variable
specifies a ":" separated (on Windows ";" separated) list
of Git object directories which can be used to search for Git
objects. New objects will not be written to these directories.
-'GIT_DIR'::
- If the 'GIT_DIR' environment variable is set then it
+`GIT_DIR`::
+ If the `GIT_DIR` environment variable is set then it
specifies a path to use instead of the default `.git`
for the base of the repository.
- The '--git-dir' command-line option also sets this value.
+ The `--git-dir` command-line option also sets this value.
-'GIT_WORK_TREE'::
+`GIT_WORK_TREE`::
Set the path to the root of the working tree.
- This can also be controlled by the '--work-tree' command-line
+ This can also be controlled by the `--work-tree` command-line
option and the core.worktree configuration variable.
-'GIT_NAMESPACE'::
+`GIT_NAMESPACE`::
Set the Git namespace; see linkgit:gitnamespaces[7] for details.
- The '--namespace' command-line option also sets this value.
+ The `--namespace` command-line option also sets this value.
-'GIT_CEILING_DIRECTORIES'::
+`GIT_CEILING_DIRECTORIES`::
This should be a colon-separated list of absolute paths. If
set, it is a list of directories that Git should not chdir up
into while looking for a repository directory (useful for
can add an empty entry to the list to tell Git that the
subsequent entries are not symlinks and needn't be resolved;
e.g.,
- 'GIT_CEILING_DIRECTORIES=/maybe/symlink::/very/slow/non/symlink'.
+ `GIT_CEILING_DIRECTORIES=/maybe/symlink::/very/slow/non/symlink`.
-'GIT_DISCOVERY_ACROSS_FILESYSTEM'::
+`GIT_DISCOVERY_ACROSS_FILESYSTEM`::
When run in a directory that does not have ".git" repository
directory, Git tries to find such a directory in the parent
directories to find the top of the working tree, but by default it
does not cross filesystem boundaries. This environment variable
can be set to true to tell Git not to stop at filesystem
- boundaries. Like 'GIT_CEILING_DIRECTORIES', this will not affect
- an explicit repository directory set via 'GIT_DIR' or on the
+ boundaries. Like `GIT_CEILING_DIRECTORIES`, this will not affect
+ an explicit repository directory set via `GIT_DIR` or on the
command line.
-'GIT_COMMON_DIR'::
+`GIT_COMMON_DIR`::
If this variable is set to a path, non-worktree files that are
normally in $GIT_DIR will be taken from this path
instead. Worktree-specific files such as HEAD or index are
Git Commits
~~~~~~~~~~~
-'GIT_AUTHOR_NAME'::
-'GIT_AUTHOR_EMAIL'::
-'GIT_AUTHOR_DATE'::
-'GIT_COMMITTER_NAME'::
-'GIT_COMMITTER_EMAIL'::
-'GIT_COMMITTER_DATE'::
+`GIT_AUTHOR_NAME`::
+`GIT_AUTHOR_EMAIL`::
+`GIT_AUTHOR_DATE`::
+`GIT_COMMITTER_NAME`::
+`GIT_COMMITTER_EMAIL`::
+`GIT_COMMITTER_DATE`::
'EMAIL'::
see linkgit:git-commit-tree[1]
Git Diffs
~~~~~~~~~
-'GIT_DIFF_OPTS'::
+`GIT_DIFF_OPTS`::
	The only valid setting is "--unified=??" or "-u??" to set the
number of context lines shown when a unified diff is created.
This takes precedence over any "-U" or "--unified" option
value passed on the Git diff command line.
-'GIT_EXTERNAL_DIFF'::
- When the environment variable 'GIT_EXTERNAL_DIFF' is set, the
+`GIT_EXTERNAL_DIFF`::
+ When the environment variable `GIT_EXTERNAL_DIFF` is set, the
program named by it is called, instead of the diff invocation
described above. For a path that is added, removed, or modified,
- 'GIT_EXTERNAL_DIFF' is called with 7 parameters:
+ `GIT_EXTERNAL_DIFF` is called with 7 parameters:
path old-file old-hex old-mode new-file new-hex new-mode
+
The file parameters can point at the user's working file
(e.g. `new-file` in "git-diff-files"), `/dev/null` (e.g. `old-file`
when a new file is added), or a temporary file (e.g. `old-file` in the
-index). 'GIT_EXTERNAL_DIFF' should not worry about unlinking the
-temporary file --- it is removed when 'GIT_EXTERNAL_DIFF' exits.
+index). `GIT_EXTERNAL_DIFF` should not worry about unlinking the
+temporary file --- it is removed when `GIT_EXTERNAL_DIFF` exits.
+
-For a path that is unmerged, 'GIT_EXTERNAL_DIFF' is called with 1
+For a path that is unmerged, `GIT_EXTERNAL_DIFF` is called with 1
parameter, <path>.
+
-For each path 'GIT_EXTERNAL_DIFF' is called, two environment variables,
-'GIT_DIFF_PATH_COUNTER' and 'GIT_DIFF_PATH_TOTAL' are set.
+For each path for which `GIT_EXTERNAL_DIFF` is called, two environment variables,
+`GIT_DIFF_PATH_COUNTER` and `GIT_DIFF_PATH_TOTAL`, are set.
-'GIT_DIFF_PATH_COUNTER'::
+`GIT_DIFF_PATH_COUNTER`::
A 1-based counter incremented by one for every path.
-'GIT_DIFF_PATH_TOTAL'::
+`GIT_DIFF_PATH_TOTAL`::
The total number of paths.
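As an illustration (not part of the upstream text; the script name and the use
of `diff -u` are assumptions), an external diff driver is simply a program that
receives the seven parameters above and writes its report to standard output:

----
#!/bin/sh
# Hypothetical GIT_EXTERNAL_DIFF driver; Git invokes it as:
#   <driver> <path> <old-file> <old-hex> <old-mode> <new-file> <new-hex> <new-mode>
path=$1 old_file=$2 new_file=$5
echo "== $path ($GIT_DIFF_PATH_COUNTER of $GIT_DIFF_PATH_TOTAL) =="
diff -u -- "$old_file" "$new_file"
:	# exit 0 even when the files differ, so Git does not treat that as an error
----

It could then be used for a single run with something like
`GIT_EXTERNAL_DIFF=./show-diff.sh git diff` (the file name is made up).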
other
~~~~~
-'GIT_MERGE_VERBOSITY'::
+`GIT_MERGE_VERBOSITY`::
A number controlling the amount of output shown by
the recursive merge strategy. Overrides merge.verbosity.
See linkgit:git-merge[1]
-'GIT_PAGER'::
+`GIT_PAGER`::
This environment variable overrides `$PAGER`. If it is set
to an empty string or to the value "cat", Git will not launch
a pager. See also the `core.pager` option in
linkgit:git-config[1].
-'GIT_EDITOR'::
+`GIT_EDITOR`::
This environment variable overrides `$EDITOR` and `$VISUAL`.
	It is used by several Git commands when, in interactive mode,
an editor is to be launched. See also linkgit:git-var[1]
and the `core.editor` option in linkgit:git-config[1].
-'GIT_SSH'::
-'GIT_SSH_COMMAND'::
+`GIT_SSH`::
+`GIT_SSH_COMMAND`::
If either of these environment variables is set then 'git fetch'
and 'git push' will use the specified command instead of 'ssh'
when they need to connect to a remote system.
The command will be given exactly two or four arguments: the
'username@host' (or just 'host') from the URL and the shell
command to execute on that remote system, optionally preceded by
- '-p' (literally) and the 'port' from the URL when it specifies
+ `-p` (literally) and the 'port' from the URL when it specifies
something other than the default SSH port.
+
`$GIT_SSH_COMMAND` takes precedence over `$GIT_SSH`, and is interpreted
personal `.ssh/config` file. Please consult your ssh documentation
for further details.
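For example (a sketch only; the key path is made up), a one-off fetch over a
specific identity could be run as:

----
GIT_SSH_COMMAND='ssh -i ~/.ssh/deploy_key -o IdentitiesOnly=yes' git fetch origin
----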
-'GIT_ASKPASS'::
+`GIT_ASKPASS`::
If this environment variable is set, then Git commands which need to
acquire passwords or passphrases (e.g. for HTTP or IMAP authentication)
	will call this program with a suitable prompt as a command-line argument
- and read the password from its STDOUT. See also the 'core.askPass'
+ and read the password from its STDOUT. See also the `core.askPass`
option in linkgit:git-config[1].
-'GIT_TERMINAL_PROMPT'::
+`GIT_TERMINAL_PROMPT`::
If this environment variable is set to `0`, git will not prompt
on the terminal (e.g., when asking for HTTP authentication).
-'GIT_CONFIG_NOSYSTEM'::
+`GIT_CONFIG_NOSYSTEM`::
Whether to skip reading settings from the system-wide
`$(prefix)/etc/gitconfig` file. This environment variable can
be used along with `$HOME` and `$XDG_CONFIG_HOME` to create a
temporarily to avoid using a buggy `/etc/gitconfig` file while
waiting for someone with sufficient permissions to fix it.
-'GIT_FLUSH'::
+`GIT_FLUSH`::
If this environment variable is set to "1", then commands such
as 'git blame' (in incremental mode), 'git rev-list', 'git log',
'git check-attr' and 'git check-ignore' will
not set, Git will choose buffered or record-oriented flushing
based on whether stdout appears to be redirected to a file or not.
-'GIT_TRACE'::
+`GIT_TRACE`::
Enables general trace messages, e.g. alias expansion, built-in
command execution and external command execution.
+
Unsetting the variable, or setting it to empty, "0" or
"false" (case insensitive) disables trace messages.
-'GIT_TRACE_PACK_ACCESS'::
+`GIT_TRACE_PACK_ACCESS`::
Enables trace messages for all accesses to any packs. For each
	access, the pack file name and an offset in the pack are
recorded. This may be helpful for troubleshooting some
pack-related performance problems.
- See 'GIT_TRACE' for available trace output options.
+ See `GIT_TRACE` for available trace output options.
-'GIT_TRACE_PACKET'::
+`GIT_TRACE_PACKET`::
Enables trace messages for all packets coming in or out of a
given program. This can help with debugging object negotiation
or other protocol issues. Tracing is turned off at a packet
- starting with "PACK" (but see 'GIT_TRACE_PACKFILE' below).
- See 'GIT_TRACE' for available trace output options.
+ starting with "PACK" (but see `GIT_TRACE_PACKFILE` below).
+ See `GIT_TRACE` for available trace output options.
-'GIT_TRACE_PACKFILE'::
+`GIT_TRACE_PACKFILE`::
Enables tracing of packfiles sent or received by a
given program. Unlike other trace output, this trace is
verbatim: no headers, and no quoting of binary data. You almost
Note that this is currently only implemented for the client side
of clones and fetches.
-'GIT_TRACE_PERFORMANCE'::
+`GIT_TRACE_PERFORMANCE`::
Enables performance related trace messages, e.g. total execution
time of each Git command.
- See 'GIT_TRACE' for available trace output options.
+ See `GIT_TRACE` for available trace output options.
-'GIT_TRACE_SETUP'::
+`GIT_TRACE_SETUP`::
Enables trace messages printing the .git, working tree and current
working directory after Git has completed its setup phase.
- See 'GIT_TRACE' for available trace output options.
+ See `GIT_TRACE` for available trace output options.
-'GIT_TRACE_SHALLOW'::
+`GIT_TRACE_SHALLOW`::
Enables trace messages that can help debugging fetching /
cloning of shallow repositories.
- See 'GIT_TRACE' for available trace output options.
+ See `GIT_TRACE` for available trace output options.
-'GIT_LITERAL_PATHSPECS'::
+`GIT_TRACE_CURL`::
+	Enables a full trace dump from curl of all incoming and outgoing data,
+	including descriptive information, of the Git transport protocol.
+	This is similar to passing `--trace-ascii` to curl on the command line.
+ This option overrides setting the `GIT_CURL_VERBOSE` environment
+ variable.
+ See `GIT_TRACE` for available trace output options.
+
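For instance (an illustrative invocation; the URL and log path are placeholders),
the dump can be written to a file by pointing the variable at an absolute path:

----
GIT_TRACE_CURL=/tmp/curl-trace.log git ls-remote https://example.com/repo.git
----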
+`GIT_LITERAL_PATHSPECS`::
Setting this variable to `1` will cause Git to treat all
pathspecs literally, rather than as glob patterns. For example,
running `GIT_LITERAL_PATHSPECS=1 git log -- '*.c'` will search
literal paths to Git (e.g., paths previously given to you by
`git ls-tree`, `--raw` diff output, etc).
-'GIT_GLOB_PATHSPECS'::
+`GIT_GLOB_PATHSPECS`::
Setting this variable to `1` will cause Git to treat all
pathspecs as glob patterns (aka "glob" magic).
-'GIT_NOGLOB_PATHSPECS'::
+`GIT_NOGLOB_PATHSPECS`::
Setting this variable to `1` will cause Git to treat all
pathspecs as literal (aka "literal" magic).
-'GIT_ICASE_PATHSPECS'::
+`GIT_ICASE_PATHSPECS`::
Setting this variable to `1` will cause Git to treat all
pathspecs as case-insensitive.
-'GIT_REFLOG_ACTION'::
+`GIT_REFLOG_ACTION`::
When a ref is updated, reflog entries are created to keep
track of the reason why the ref was updated (which is
typically the name of the high-level command that updated
variable when it is invoked as the top level command by the
end user, to be recorded in the body of the reflog.
-'GIT_REF_PARANOIA'::
+`GIT_REF_PARANOIA`::
If set to `1`, include broken or badly named refs when iterating
over lists of refs. In a normal, non-corrupted repository, this
does nothing. However, enabling it may help git to detect and
an operation has touched every ref (e.g., because you are
cloning a repository to make a backup).
-'GIT_ALLOW_PROTOCOL'::
+`GIT_ALLOW_PROTOCOL`::
If set, provide a colon-separated list of protocols which are
allowed to be used with fetch/push/clone. This is useful to
restrict recursive submodule initialization from an untrusted
repository. To control what line ending style is used in the working
directory, use the `eol` attribute for a single file and the
`core.eol` configuration variable for all text files.
+Note that `core.autocrlf` overrides `core.eol`.
Set::
Set to string value "auto"::
When `text` is set to "auto", the path is marked for automatic
- end-of-line normalization. If Git decides that the content is
- text, its line endings are normalized to LF on checkin.
+ end-of-line conversion. If Git decides that the content is
+ text, its line endings are converted to LF on checkin.
+	If the file has already been committed with CRLF, no conversion is done.
Unspecified::
^^^^^
This attribute sets a specific line-ending style to be used in the
-working directory. It enables end-of-line normalization without any
+working directory. It enables end-of-line conversion without any
content checks, effectively setting the `text` attribute.
Set to string value "crlf"::
normalize line endings to LF in the repository and, optionally, to
convert them to CRLF when files are checked out.
-Here is an example that will make Git normalize .txt, .vcproj and .sh
-files, ensure that .vcproj files have CRLF and .sh files have LF in
-the working directory, and prevent .jpg files from being normalized
-regardless of their content.
-
-------------------------
-*.txt text
-*.vcproj eol=crlf
-*.sh eol=lf
-*.jpg -text
-------------------------
-
-Other source code management systems normalize all text files in their
-repositories, and there are two ways to enable similar automatic
-normalization in Git.
-
If you simply want to have CRLF line endings in your working directory
regardless of the repository you are working with, you can set the
-config variable "core.autocrlf" without changing any attributes.
+config variable "core.autocrlf" without using any attributes.
------------------------
[core]
autocrlf = true
------------------------
-This does not force normalization of all text files, but does ensure
+This does not force normalization of text files, but does ensure
that text files that you introduce to the repository have their line
endings normalized to LF when they are added, and that files that are
already normalized in the repository stay normalized.
-If you want to interoperate with a source code management system that
-enforces end-of-line normalization, or you simply want all text files
-in your repository to be normalized, you should instead set the `text`
-attribute to "auto" for _all_ files.
+If you want to ensure that text files that any contributor introduces to
+the repository have their line endings normalized, you can set the
+`text` attribute to "auto" for _all_ files.
------------------------
* text=auto
------------------------
-This ensures that all files that Git considers to be text will have
-normalized (LF) line endings in the repository. The `core.eol`
-configuration variable controls which line endings Git will use for
-normalized files in your working directory; the default is to use the
-native line ending for your platform, or CRLF if `core.autocrlf` is
-set.
+The attributes allow fine-grained control over how the line endings
+are converted.
+Here is an example that will make Git normalize .txt, .vcproj and .sh
+files, ensure that .vcproj files have CRLF and .sh files have LF in
+the working directory, and prevent .jpg files from being normalized
+regardless of their content.
-NOTE: When `text=auto` normalization is enabled in an existing
-repository, any text files containing CRLFs should be normalized. If
-they are not they will be normalized the next time someone tries to
-change them, causing unfortunate misattribution. From a clean working
-directory:
+------------------------
+* text=auto
+*.txt text
+*.vcproj text eol=crlf
+*.sh text eol=lf
+*.jpg -text
+------------------------
+
+NOTE: When `text=auto` conversion is enabled in a cross-platform
+project using push and pull to a central repository, the text files
+containing CRLFs should be normalized.
+
+From a clean working directory:
-------------------------------------------------
-$ echo "* text=auto" >>.gitattributes
+$ echo "* text=auto" >.gitattributes
$ rm .git/index # Remove the index to force Git to
$ git reset # re-scan the working directory
$ git status # Show files that will be normalized
smudge = git-p4-filter --smudge %f
------------------------
+Note that "%f" is the name of the path that is being worked on. Depending
+on the version that is being filtered, the corresponding file on disk may
+not exist, or may have different contents. So, smudge and clean commands
+should not try to access the file on disk, but only act as filters on the
+content provided to them on standard input.
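As a sketch of that constraint (the filter itself is made up), a well-behaved
clean filter reads the blob content from standard input and writes the result
to standard output, using "%f" only for messages:

------------------------
#!/bin/sh
# Hypothetical clean filter: strip CRs; "$1" (filled in from %f) is only
# mentioned in diagnostics, never opened.
echo "cleaning ${1:-unknown path}" >&2
tr -d '\r'
------------------------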
Interaction between checkin/checkout attributes
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
- `csharp` suitable for source code in the C# language.
+- `css` suitable for cascading style sheets.
+
- `fortran` suitable for source code in the Fortran language.
- `fountain` suitable for Fountain documents.
[NOTE]
If there were more commits on the 'master' branch after the merge, the
merge commit itself would not be shown by 'git show-branch' by
-default. You would need to provide '--sparse' option to make the
+default. You would need to provide the `--sparse` option to make the
merge commit visible in this case.
Now, let's pretend you are the one who did all the work in
files:
- 'git diff-index' compares contents of a "tree" object and the
- working directory (when '--cached' flag is not used) or a
- "tree" object and the index file (when '--cached' flag is
+ working directory (when `--cached` flag is not used) or a
+ "tree" object and the index file (when `--cached` flag is
used);
- 'git diff-files' compares contents of the index file and the
'git send-pack' on the other end, so you can simply `echo` messages
for the user.
+The number of push options given on the command line of
+`git push --push-option=...` can be read from the environment
+variable `GIT_PUSH_OPTION_COUNT`, and the options themselves are
+found in `GIT_PUSH_OPTION_0`, `GIT_PUSH_OPTION_1`,...
+If it is negotiated to not use the push options phase, the
+environment variables will not be set. If the client selects
+to use push options, but doesn't transmit any, the count variable
+will be set to zero, `GIT_PUSH_OPTION_COUNT=0`.
+
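A minimal sketch of a hook consuming these variables (the hook body is
illustrative, not part of Git itself) could look like:

----
#!/bin/sh
# Hypothetical pre-receive hook: report any push options that were sent.
if test -n "$GIT_PUSH_OPTION_COUNT"
then
	i=0
	while test "$i" -lt "$GIT_PUSH_OPTION_COUNT"
	do
		eval "value=\$GIT_PUSH_OPTION_$i"
		echo "push option $i: $value" >&2
		i=$((i + 1))
	done
fi
----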
[[update]]
update
~~~~~~
directory in Git distribution, which implements sending commit
emails.
+The number of push options given on the command line of
+`git push --push-option=...` can be read from the environment
+variable `GIT_PUSH_OPTION_COUNT`, and the options themselves are
+found in `GIT_PUSH_OPTION_0`, `GIT_PUSH_OPTION_1`,...
+If it is negotiated to not use the push options phase, the
+environment variables will not be set. If the client selects
+to use push options, but doesn't transmit any, the count variable
+will be set to zero, `GIT_PUSH_OPTION_COUNT=0`.
+
[[post-update]]
post-update
~~~~~~~~~~~
* Patterns read from `$GIT_DIR/info/exclude`.
* Patterns read from the file specified by the configuration
- variable 'core.excludesFile'.
+ variable `core.excludesFile`.
Which file to place a pattern in depends on how the pattern is meant to
be used.
--simplify-merges::
- Additional option to '--full-history' to remove some needless
+ Additional option to `--full-history` to remove some needless
merges from the resulting history, as there are no selected
commits contributing to this merge. (See "History
simplification" in linkgit:git-log[1] for a more detailed
The file contains one subsection per submodule, and the subsection value
is the name of the submodule. The name is set to the path where the
-submodule has been added unless it was customized with the '--name'
+submodule has been added unless it was customized with the `--name`
option of 'git submodule add'. Each submodule section also contains the
following required keys:
"--ignore-submodule" option. The 'git submodule' commands are not
affected by this setting.
+submodule.<name>.shallow::
+ When set to true, a clone of this submodule will be performed as a
+ shallow clone unless the user explicitly asks for a non-shallow
+ clone.
+
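For illustration (the submodule name `lib/demo` is made up), an upstream
project could record this recommendation with:

----
git config -f .gitmodules submodule.lib/demo.shallow true
----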
EXAMPLES
--------
it is either the name of a configured remote or a URL. The second
argument specifies a URL; it is usually of the form
'<transport>://<address>', but any arbitrary string is possible.
-The 'GIT_DIR' environment variable is set up for the remote helper
+The `GIT_DIR` environment variable is set up for the remote helper
and can be used to determine where to store additional data or from
which directory to invoke auxiliary Git commands.
the first argument is '<address>', and if it is encountered in a
configured remote, the first argument is the name of that remote.
-Additionally, when a configured remote has 'remote.<name>.vcs' set to
+Additionally, when a configured remote has `remote.<name>.vcs` set to
'<transport>', Git explicitly invokes 'git remote-<transport>' with
'<name>' as the first argument. If set, the second argument is
-'remote.<name>.url'; otherwise, the second argument is omitted.
+`remote.<name>.url`; otherwise, the second argument is omitted.
INPUT FORMAT
------------
'export-marks' <file>::
This modifies the 'export' capability, instructing Git to dump the
internal marks table to <file> when complete. For details,
- read up on '--export-marks=<file>' in linkgit:git-fast-export[1].
+ read up on `--export-marks=<file>` in linkgit:git-fast-export[1].
'import-marks' <file>::
This modifies the 'export' capability, instructing Git to load the
marks specified in <file> before processing any input. For details,
- read up on '--import-marks=<file>' in linkgit:git-fast-export[1].
+ read up on `--import-marks=<file>` in linkgit:git-fast-export[1].
'signed-tags'::
This modifies the 'export' capability, instructing Git to pass
- '--signed-tags=verbatim' to linkgit:git-fast-export[1]. In the
- absence of this capability, Git will use '--signed-tags=warn-strip'.
+ `--signed-tags=verbatim` to linkgit:git-fast-export[1]. In the
+ absence of this capability, Git will use `--signed-tags=warn-strip`.
is followed by a blank line). For example, the following would
be two batches of 'push', the first asking the remote-helper
to push the local ref 'master' to the remote ref 'master' and
- the local 'HEAD' to the remote 'branch', and the second
+ the local `HEAD` to the remote 'branch', and the second
asking to push ref 'foo' to ref 'bar' (forced update requested
by the '+').
+
Name of your site or organization, to appear in page titles. Set it
to something descriptive for clearer bookmarks etc. If this variable
	is not set or is empty, then gitweb uses the value of the `SERVER_NAME`
- CGI environment variable, setting site name to "$SERVER_NAME Git",
+ `CGI` environment variable, setting site name to "$SERVER_NAME Git",
or "Untitled Git" if this variable is not set (e.g. if running gitweb
as standalone script).
+
Per-repository gitweb configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
You can configure individual repositories shown in gitweb by creating a file
-in the 'GIT_DIR' of Git repository, or by setting some repo configuration
-variable (in 'GIT_DIR/config', see linkgit:git-config[1]).
+in the `GIT_DIR` of the Git repository, or by setting some repository configuration
+variable (in `GIT_DIR/config`, see linkgit:git-config[1]).
You can use the following files in repository:
6. There is a file command-list.txt in the distribution main directory
that categorizes commands by type, so they can be listed in appropriate
subsections in the documentation's summary command list. Add an entry
-for yours. To understand the categories, look at git-commands.txt
+for yours. To understand the categories, look at command-list.txt
in the main directory. If the new command is part of the typical Git
workflow and you believe it common enough to be mentioned in 'git help',
map this command to a common group in the column [common].
message if conflicts were detected. Level 1 outputs only
conflicts, 2 outputs conflicts and file changes. Level 5 and
above outputs debugging information. The default is level 2.
- Can be overridden by the 'GIT_MERGE_VERBOSITY' environment variable.
+ Can be overridden by the `GIT_MERGE_VERBOSITY` environment variable.
merge.<driver>.name::
Defines a human-readable name for a custom low-level
"U" for a good signature with unknown validity and "N" for no signature
- '%GS': show the name of the signer for a signed commit
- '%GK': show the key used to sign a signed commit
-- '%gD': reflog selector, e.g., `refs/stash@{1}`
-- '%gd': shortened reflog selector, e.g., `stash@{1}`
+- '%gD': reflog selector, e.g., `refs/stash@{1}` or
+  `refs/stash@{2 minutes ago}`; the format follows the rules described
+ for the `-g` option. The portion before the `@` is the refname as
+ given on the command line (so `git log -g refs/heads/master` would
+ yield `refs/heads/master@{0}`).
+- '%gd': shortened reflog selector; same as `%gD`, but the refname
+ portion is shortened for human readability (so `refs/heads/master`
+ becomes just `master`).
- '%gn': reflog identity name
- '%gN': reflog identity name (respecting .mailmap, see
linkgit:git-shortlog[1] or linkgit:git-blame[1])
--no-abbrev-commit::
Show the full 40-byte hexadecimal commit object name. This negates
`--abbrev-commit` and those options which imply it such as
- "--oneline". It also overrides the 'log.abbrevCommit' variable.
+ "--oneline". It also overrides the `log.abbrevCommit` variable.
--oneline::
This is a shorthand for "--pretty=oneline --abbrev-commit"
on the command line.
+
By default, the notes shown are from the notes refs listed in the
-'core.notesRef' and 'notes.displayRef' variables (or corresponding
+`core.notesRef` and `notes.displayRef` variables (or corresponding
environment overrides). See linkgit:git-config[1] for more details.
+
With an optional '<treeish>' argument, use the treeish to find the notes
--stdin::
In addition to the '<commit>' listed on the command
- line, read them from the standard input. If a '--' separator is
+ line, read them from the standard input. If a `--` separator is
seen, stop reading commits and start reading paths to limit the
result.
+
With `--pretty` format other than `oneline` (for obvious reasons),
this causes the output to have two extra lines of information
-taken from the reflog. By default, 'commit@\{Nth}' notation is
-used in the output. When the starting commit is specified as
-'commit@\{now}', output also uses 'commit@\{timestamp}' notation
-instead. Under `--pretty=oneline`, the commit message is
+taken from the reflog. The reflog designator in the output may be shown
+as `ref@{Nth}` (where `Nth` is the reverse-chronological index in the
+reflog) or as `ref@{timestamp}` (with the timestamp for that entry),
+depending on a few rules:
++
+--
+1. If the starting point is specified as `ref@{Nth}`, show the index
+format.
++
+2. If the starting point was specified as `ref@{now}`, show the
+timestamp format.
++
+3. If neither was used, but `--date` was given on the command line, show
+the timestamp in the format requested by `--date`.
++
+4. Otherwise, show the index format.
+--
++
+Under `--pretty=oneline`, the commit message is
prefixed with this information on the same line.
This option cannot be combined with `--reverse`.
See also linkgit:git-reflog[1].
Try to speed up the traversal using the pack bitmap index (if
one is available). Note that when traversing with `--objects`,
trees and blobs will not have their associated path printed.
+
+--progress=<header>::
+ Show progress reports on stderr as objects are considered. The
+ `<header>` text will be printed with each progress update.
endif::git-rev-list[]
--
`iso-local`), the user's local time zone is used instead.
+
`--date=relative` shows dates relative to the current time,
-e.g. ``2 hours ago''. The `-local` option cannot be used with
-`--raw` or `--relative`.
+e.g. ``2 hours ago''. The `-local` option has no effect for
+`--date=relative`.
+
`--date=local` is an alias for `--date=default-local`.
+
+
`--date=short` shows only the date, but not the time, in `YYYY-MM-DD` format.
+
-`--date=raw` shows the date in the internal raw Git format `%s %z` format.
+`--date=raw` shows the date as seconds since the epoch (1970-01-01
+00:00:00 UTC), followed by a space, and then the timezone as an offset
+from UTC (a `+` or `-` with four digits; the first two are hours, and
+the second two are minutes). I.e., as if the timestamp were formatted
+with `strftime("%s %z")`.
+Note that the `-local` option does not affect the seconds-since-epoch
+value (which is always measured in UTC), but does switch the accompanying
+timezone value.
++
+`--date=unix` shows the date as a Unix epoch timestamp (seconds since
+1970). As with `--raw`, this is always in UTC and therefore `-local`
+has no effect.
+
`--date=format:...` feeds the format `...` to your system `strftime`.
Use `--date=format:%c` to show the date in your system locale's
first match in the following rules:
. If '$GIT_DIR/<refname>' exists, that is what you mean (this is usually
- useful only for 'HEAD', 'FETCH_HEAD', 'ORIG_HEAD', 'MERGE_HEAD'
- and 'CHERRY_PICK_HEAD');
+ useful only for `HEAD`, `FETCH_HEAD`, `ORIG_HEAD`, `MERGE_HEAD`
+ and `CHERRY_PICK_HEAD`);
. otherwise, 'refs/<refname>' if it exists;
. otherwise, 'refs/remotes/<refname>/HEAD' if it exists.
+
-'HEAD' names the commit on which you based the changes in the working tree.
-'FETCH_HEAD' records the branch which you fetched from a remote repository
+`HEAD` names the commit on which you based the changes in the working tree.
+`FETCH_HEAD` records the branch which you fetched from a remote repository
with your last `git fetch` invocation.
-'ORIG_HEAD' is created by commands that move your 'HEAD' in a drastic
-way, to record the position of the 'HEAD' before their operation, so that
+`ORIG_HEAD` is created by commands that move your `HEAD` in a drastic
+way, to record the position of the `HEAD` before their operation, so that
you can easily change the tip of the branch back to the state before you ran
them.
-'MERGE_HEAD' records the commit(s) which you are merging into your branch
+`MERGE_HEAD` records the commit(s) which you are merging into your branch
when you run `git merge`.
-'CHERRY_PICK_HEAD' records the commit which you are cherry-picking
+`CHERRY_PICK_HEAD` records the commit which you are cherry-picking
when you run `git cherry-pick`.
+
Note that any of the 'refs/*' cases above may come either from
some output processing may assume ref names in UTF-8.
'@'::
- '@' alone is a shortcut for 'HEAD'.
+ '@' alone is a shortcut for `HEAD`.
'<refname>@{<date>}', e.g. 'master@\{yesterday\}', 'HEAD@{5 minutes ago}'::
A ref followed by the suffix '@' with a date specification
existing log ('$GIT_DIR/logs/<ref>'). Note that this looks up the state
of your *local* ref at a given time; e.g., what was in your local
'master' branch last week. If you want to look at commits made during
- certain times, see '--since' and '--until'.
+ certain times, see `--since` and `--until`.
'<refname>@{<n>}', e.g. 'master@\{1\}'::
A ref followed by the suffix '@' with an ordinal specification
'<branchname>@\{push\}', e.g. 'master@\{push\}', '@\{push\}'::
The suffix '@\{push}' reports the branch "where we would push to" if
`git push` were run while `branchname` was checked out (or the current
- 'HEAD' if no branchname is specified). Since our push destination is
+ `HEAD` if no branchname is specified). Since our push destination is
in a remote repository, of course, we report the local tracking branch
that corresponds to that branch (i.e., something in 'refs/remotes/').
+
'<rev1>..<rev2>'::
Include commits that are reachable from <rev2> but exclude
those that are reachable from <rev1>. When either <rev1> or
- <rev2> is omitted, it defaults to 'HEAD'.
+ <rev2> is omitted, it defaults to `HEAD`.
'<rev1>\...<rev2>'::
Include commits that are reachable from either <rev1> or
<rev2> but exclude those that are reachable from both. When
- either <rev1> or <rev2> is omitted, it defaults to 'HEAD'.
+ either <rev1> or <rev2> is omitted, it defaults to `HEAD`.
'<rev>{caret}@', e.g. 'HEAD{caret}@'::
A suffix '{caret}' followed by an at sign is the same as listing
`entry` points to the entry to initialize.
+
`hash` is the hash code of the entry.
++
+The hashmap_entry structure does not hold references to external resources,
+and it is safe to just discard it once you are done with it (i.e. if
+your structure was allocated with xmalloc(), you can just free(3) it,
+and if it is on the stack, you can just let it go out of scope).
`void *hashmap_get(const struct hashmap *map, const void *key, const void *keydata)`::
ready to make a packfile, it will blindly ACK all 'have' obj-ids
back to the client.
- * the server will then send a 'NACK' and then wait for another response
+ * the server will then send a 'NAK' and then wait for another response
from the client - either a 'done' or another list of 'have' lines.
In multi_ack_detailed mode:
fetching protocol. Each reference obj-id and name on the server is sent
in packet-line format to the client, followed by a flush-pkt. The only
real difference is that the capability listing is different - the only
-possible values are 'report-status', 'delete-refs' and 'ofs-delta'.
+possible values are 'report-status', 'delete-refs', 'ofs-delta' and
+'push-options'.
Reference Update Request and Packfile Transfer
----------------------------------------------
the server, the obj-id the client would like to update it to and the name
of the reference.
-This list is followed by a flush-pkt and then the packfile that should
-contain all the objects that the server will need to complete the new
-references.
+This list is followed by a flush-pkt. Then the push options are transmitted
+one per packet, followed by another flush-pkt. After that, the packfile that
+should contain all the objects that the server will need to complete the new
+references is sent.
----
update-request = *shallow ( command-list | push-cert ) [packfile]
will update the refs in one atomic transaction. Either all refs are
updated or none.
+push-options
+------------
+
+If the server sends the 'push-options' capability, it is able to accept
+push options after the update commands have been sent, but before the
+packfile is streamed. If the pushing client requests this capability,
+the server will pass the options to the pre- and post-receive hooks
+that process this push request.
+
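On the client side this corresponds to (a usage sketch; the option strings are
arbitrary examples):

----
git push --push-option=ci.skip --push-option=notify=none origin master
----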
allow-tip-sha1-in-want
----------------------
--- /dev/null
+Git signature format
+====================
+
+== Overview
+
+Git uses cryptographic signatures in various places, currently objects (tags,
+commits, mergetags) and transactions (pushes). In every case, the command which
+is about to create an object or transaction determines a payload from that,
+calls gpg to obtain a detached signature for the payload (`gpg -bsa`) and
+embeds the signature into the object or transaction.
+
+Signatures always begin with `-----BEGIN PGP SIGNATURE-----`
+and end with `-----END PGP SIGNATURE-----`, unless gpg is told to
+produce RFC1991 signatures which use `MESSAGE` instead of `SIGNATURE`.
+
+The signed payload and the way the signature is embedded depend
+on the type of the object or transaction.
+
+== Tag signatures
+
+- created by: `git tag -s`
+- payload: annotated tag object
+- embedding: append the signature to the unsigned tag object
+- example: tag `signedtag` with subject `signed tag`
+
+----
+object 04b871796dc0420f8e7561a895b52484b701d51a
+type commit
+tag signedtag
+tagger C O Mitter <committer@example.com> 1465981006 +0000
+
+signed tag
+
+signed tag message body
+-----BEGIN PGP SIGNATURE-----
+Version: GnuPG v1
+
+iQEcBAABAgAGBQJXYRhOAAoJEGEJLoW3InGJklkIAIcnhL7RwEb/+QeX9enkXhxn
+rxfdqrvWd1K80sl2TOt8Bg/NYwrUBw/RWJ+sg/hhHp4WtvE1HDGHlkEz3y11Lkuh
+8tSxS3qKTxXUGozyPGuE90sJfExhZlW4knIQ1wt/yWqM+33E9pN4hzPqLwyrdods
+q8FWEqPPUbSJXoMbRPw04S5jrLtZSsUWbRYjmJCHzlhSfFWW4eFd37uquIaLUBS0
+rkC3Jrx7420jkIpgFcTI2s60uhSQLzgcCwdA2ukSYIRnjg/zDkj8+3h/GaROJ72x
+lZyI6HWixKJkWw8lE9aAOD9TmTW9sFJwcVAzmAuFX2kUreDUKMZduGcoRYGpD7E=
+=jpXa
+-----END PGP SIGNATURE-----
+----
+
+- verify with: `git verify-tag [-v]` or `git tag -v`
+
+----
+gpg: Signature made Wed Jun 15 10:56:46 2016 CEST using RSA key ID B7227189
+gpg: Good signature from "Eris Discordia <discord@example.net>"
+gpg: WARNING: This key is not certified with a trusted signature!
+gpg: There is no indication that the signature belongs to the owner.
+Primary key fingerprint: D4BE 2231 1AD3 131E 5EDA 29A4 6109 2E85 B722 7189
+object 04b871796dc0420f8e7561a895b52484b701d51a
+type commit
+tag signedtag
+tagger C O Mitter <committer@example.com> 1465981006 +0000
+
+signed tag
+
+signed tag message body
+----
+
+== Commit signatures
+
+- created by: `git commit -S`
+- payload: commit object
+- embedding: header entry `gpgsig`
+ (content is preceded by a space)
+- example: commit with subject `signed commit`
+
+----
+tree eebfed94e75e7760540d1485c740902590a00332
+parent 04b871796dc0420f8e7561a895b52484b701d51a
+author A U Thor <author@example.com> 1465981137 +0000
+committer C O Mitter <committer@example.com> 1465981137 +0000
+gpgsig -----BEGIN PGP SIGNATURE-----
+ Version: GnuPG v1
+
+ iQEcBAABAgAGBQJXYRjRAAoJEGEJLoW3InGJ3IwIAIY4SA6GxY3BjL60YyvsJPh/
+ HRCJwH+w7wt3Yc/9/bW2F+gF72kdHOOs2jfv+OZhq0q4OAN6fvVSczISY/82LpS7
+ DVdMQj2/YcHDT4xrDNBnXnviDO9G7am/9OE77kEbXrp7QPxvhjkicHNwy2rEflAA
+ zn075rtEERDHr8nRYiDh8eVrefSO7D+bdQ7gv+7GsYMsd2auJWi1dHOSfTr9HIF4
+ HJhWXT9d2f8W+diRYXGh4X0wYiGg6na/soXc+vdtDYBzIxanRqjg8jCAeo1eOTk1
+ EdTwhcTZlI0x5pvJ3H0+4hA2jtldVtmPM4OTB0cTrEWBad7XV6YgiyuII73Ve3I=
+ =jKHM
+ -----END PGP SIGNATURE-----
+
+signed commit
+
+signed commit message body
+----
+
+- verify with: `git verify-commit [-v]` (or `git show --show-signature`)
+
+----
+gpg: Signature made Wed Jun 15 10:58:57 2016 CEST using RSA key ID B7227189
+gpg: Good signature from "Eris Discordia <discord@example.net>"
+gpg: WARNING: This key is not certified with a trusted signature!
+gpg: There is no indication that the signature belongs to the owner.
+Primary key fingerprint: D4BE 2231 1AD3 131E 5EDA 29A4 6109 2E85 B722 7189
+tree eebfed94e75e7760540d1485c740902590a00332
+parent 04b871796dc0420f8e7561a895b52484b701d51a
+author A U Thor <author@example.com> 1465981137 +0000
+committer C O Mitter <committer@example.com> 1465981137 +0000
+
+signed commit
+
+signed commit message body
+----
+
+== Mergetag signatures
+
+- created by: `git merge` on signed tag
+- payload/embedding: the whole signed tag object is embedded into
+ the (merge) commit object as header entry `mergetag`
+- example: merge of the signed tag `signedtag` as above
+
+----
+tree c7b1cff039a93f3600a1d18b82d26688668c7dea
+parent c33429be94b5f2d3ee9b0adad223f877f174b05d
+parent 04b871796dc0420f8e7561a895b52484b701d51a
+author A U Thor <author@example.com> 1465982009 +0000
+committer C O Mitter <committer@example.com> 1465982009 +0000
+mergetag object 04b871796dc0420f8e7561a895b52484b701d51a
+ type commit
+ tag signedtag
+ tagger C O Mitter <committer@example.com> 1465981006 +0000
+
+ signed tag
+
+ signed tag message body
+ -----BEGIN PGP SIGNATURE-----
+ Version: GnuPG v1
+
+ iQEcBAABAgAGBQJXYRhOAAoJEGEJLoW3InGJklkIAIcnhL7RwEb/+QeX9enkXhxn
+ rxfdqrvWd1K80sl2TOt8Bg/NYwrUBw/RWJ+sg/hhHp4WtvE1HDGHlkEz3y11Lkuh
+ 8tSxS3qKTxXUGozyPGuE90sJfExhZlW4knIQ1wt/yWqM+33E9pN4hzPqLwyrdods
+ q8FWEqPPUbSJXoMbRPw04S5jrLtZSsUWbRYjmJCHzlhSfFWW4eFd37uquIaLUBS0
+ rkC3Jrx7420jkIpgFcTI2s60uhSQLzgcCwdA2ukSYIRnjg/zDkj8+3h/GaROJ72x
+ lZyI6HWixKJkWw8lE9aAOD9TmTW9sFJwcVAzmAuFX2kUreDUKMZduGcoRYGpD7E=
+ =jpXa
+ -----END PGP SIGNATURE-----
+
+Merge tag 'signedtag' into downstream
+
+signed tag
+
+signed tag message body
+
+# gpg: Signature made Wed Jun 15 08:56:46 2016 UTC using RSA key ID B7227189
+# gpg: Good signature from "Eris Discordia <discord@example.net>"
+# gpg: WARNING: This key is not certified with a trusted signature!
+# gpg: There is no indication that the signature belongs to the owner.
+# Primary key fingerprint: D4BE 2231 1AD3 131E 5EDA 29A4 6109 2E85 B722 7189
+----
+
+- verify with: verification is embedded in the merge commit message by default;
+  alternatively, use `git show --show-signature`:
+
+----
+commit 9863f0c76ff78712b6800e199a46aa56afbcbd49
+merged tag 'signedtag'
+gpg: Signature made Wed Jun 15 10:56:46 2016 CEST using RSA key ID B7227189
+gpg: Good signature from "Eris Discordia <discord@example.net>"
+gpg: WARNING: This key is not certified with a trusted signature!
+gpg: There is no indication that the signature belongs to the owner.
+Primary key fingerprint: D4BE 2231 1AD3 131E 5EDA 29A4 6109 2E85 B722 7189
+Merge: c33429b 04b8717
+Author: A U Thor <author@example.com>
+Date: Wed Jun 15 09:13:29 2016 +0000
+
+ Merge tag 'signedtag' into downstream
+
+ signed tag
+
+ signed tag message body
+
+ # gpg: Signature made Wed Jun 15 08:56:46 2016 UTC using RSA key ID B7227189
+ # gpg: Good signature from "Eris Discordia <discord@example.net>"
+ # gpg: WARNING: This key is not certified with a trusted signature!
+ # gpg: There is no indication that the signature belongs to the owner.
+ # Primary key fingerprint: D4BE 2231 1AD3 131E 5EDA 29A4 6109 2E85 B722 7189
+----
#!/bin/sh
GVF=GIT-VERSION-FILE
-DEF_VER=v2.9.0
+DEF_VER=v2.10.0-rc2
LF='
'
# Define GMTIME_UNRELIABLE_ERRORS if your gmtime() function does not
# return NULL when it receives a bogus time_t.
#
-# Define HAVE_CLOCK_GETTIME if your platform has clock_gettime in librt.
+# Define HAVE_CLOCK_GETTIME if your platform has clock_gettime.
#
-# Define HAVE_CLOCK_MONOTONIC if your platform has CLOCK_MONOTONIC in librt.
+# Define HAVE_CLOCK_MONOTONIC if your platform has CLOCK_MONOTONIC.
+#
+# Define NEEDS_LIBRT if your platform requires linking with librt (glibc version
+# before 2.17) for clock_gettime and CLOCK_MONOTONIC.
#
# Define USE_PARENS_AROUND_GETTEXT_N to "yes" if your compiler happily
# compiles the following initialization:
# Define HAVE_BSD_SYSCTL if your platform has a BSD-compatible sysctl function.
#
# Define HAVE_GETDELIM if your system has the getdelim() function.
+#
+# Define PAGER_ENV to a SP-separated list of VAR=VAL pairs to define
+# default environment variables to be passed when a pager is spawned, e.g.
+#
+# PAGER_ENV = LESS=FRX LV=-c
+#
+# to say "export LESS=FRX (and LV=-c) if the environment variable
+# LESS (and LV) is not set, respectively".
GIT-VERSION-FILE: FORCE
@$(SHELL_PATH) ./GIT-VERSION-GEN
LIB_OBJS += diff-no-index.o
LIB_OBJS += diff.o
LIB_OBJS += dir.o
+LIB_OBJS += dir-iterator.o
LIB_OBJS += editor.o
LIB_OBJS += entry.o
LIB_OBJS += environment.o
LIB_OBJS += merge-blobs.o
LIB_OBJS += merge-recursive.o
LIB_OBJS += mergesort.o
+LIB_OBJS += mru.o
LIB_OBJS += name-hash.o
LIB_OBJS += notes.o
LIB_OBJS += notes-cache.o
LIB_OBJS += reflog-walk.o
LIB_OBJS += refs.o
LIB_OBJS += refs/files-backend.o
+LIB_OBJS += refs/iterator.o
LIB_OBJS += ref-filter.o
LIB_OBJS += remote.o
LIB_OBJS += replace_object.o
BUILTIN_OBJS += builtin/worktree.o
BUILTIN_OBJS += builtin/write-tree.o
-GITLIBS = $(LIB_FILE) $(XDIFF_LIB)
+GITLIBS = common-main.o $(LIB_FILE) $(XDIFF_LIB)
EXTLIBS =
GIT_USER_AGENT = git/$(GIT_VERSION)
ifdef HAVE_CLOCK_GETTIME
BASIC_CFLAGS += -DHAVE_CLOCK_GETTIME
- EXTLIBS += -lrt
endif
ifdef HAVE_CLOCK_MONOTONIC
BASIC_CFLAGS += -DHAVE_CLOCK_MONOTONIC
endif
+ifdef NEEDS_LIBRT
+ EXTLIBS += -lrt
+endif
+
ifdef HAVE_BSD_SYSCTL
BASIC_CFLAGS += -DHAVE_BSD_SYSCTL
endif
NO_PYTHON = NoThanks
endif
+ifndef PAGER_ENV
+PAGER_ENV = LESS=FRX LV=-c
+endif
+
QUIET_SUBDIR0 = +$(MAKE) -C # space to separate -C and subdir
QUIET_SUBDIR1 =
DIFF_SQ = $(subst ','\'',$(DIFF))
PERLLIB_EXTRA_SQ = $(subst ','\'',$(PERLLIB_EXTRA))
-LIBS = $(GITLIBS) $(EXTLIBS)
+# We must filter out any object files from $(GITLIBS),
+# as it is typically used like:
+#
+# foo: foo.o $(GITLIBS)
+# $(CC) $(filter %.o,$^) $(LIBS)
+#
+# where we use it as a dependency. Since we also pull object files
+# from the dependency list, that would make each entry appear twice.
+LIBS = $(filter-out %.o, $(GITLIBS)) $(EXTLIBS)
BASIC_CFLAGS += -DSHA1_HEADER='$(SHA1_HEADER_SQ)' \
$(COMPAT_CFLAGS)
BASIC_CFLAGS += -DDEFAULT_HELP_FORMAT='"$(DEFAULT_HELP_FORMAT)"'
endif
+PAGER_ENV_SQ = $(subst ','\'',$(PAGER_ENV))
+PAGER_ENV_CQ = "$(subst ",\",$(subst \,\\,$(PAGER_ENV)))"
+PAGER_ENV_CQ_SQ = $(subst ','\'',$(PAGER_ENV_CQ))
+BASIC_CFLAGS += -DPAGER_ENV='$(PAGER_ENV_CQ_SQ)'
+
ALL_CFLAGS += $(BASIC_CFLAGS)
ALL_LDFLAGS += $(BASIC_LDFLAGS)
'-DGIT_INFO_PATH="$(infodir_relative_SQ)"'
git$X: git.o GIT-LDFLAGS $(BUILTIN_OBJS) $(GITLIBS)
- $(QUIET_LINK)$(CC) $(ALL_CFLAGS) -o $@ $(ALL_LDFLAGS) git.o \
- $(BUILTIN_OBJS) $(LIBS)
+ $(QUIET_LINK)$(CC) $(ALL_CFLAGS) -o $@ $(ALL_LDFLAGS) \
+ $(filter %.o,$^) $(LIBS)
help.sp help.s help.o: common-cmds.h
SCRIPT_DEFINES = $(SHELL_PATH_SQ):$(DIFF_SQ):$(GIT_VERSION):\
$(localedir_SQ):$(NO_CURL):$(USE_GETTEXT_SCHEME):$(SANE_TOOL_PATH_SQ):\
- $(gitwebdir_SQ):$(PERL_PATH_SQ):$(SANE_TEXT_GREP)
+ $(gitwebdir_SQ):$(PERL_PATH_SQ):$(SANE_TEXT_GREP):$(PAGER_ENV)
define cmd_munge_script
$(RM) $@ $@+ && \
sed -e '1s|#!.*/sh|#!$(SHELL_PATH_SQ)|' \
-e 's|@@GITWEBDIR@@|$(gitwebdir_SQ)|g' \
-e 's|@@PERL@@|$(PERL_PATH_SQ)|g' \
-e 's|@@SANE_TEXT_GREP@@|$(SANE_TEXT_GREP)|g' \
+ -e 's|@@PAGER_ENV@@|$(PAGER_ENV_SQ)|g' \
$@.sh >$@+
endef
OBJECTS := $(LIB_OBJS) $(BUILTIN_OBJS) $(PROGRAM_OBJS) $(TEST_OBJS) \
$(XDIFF_OBJS) \
$(VCSSVN_OBJS) \
+ common-main.o \
git.o
ifndef NO_CURL
OBJECTS += http.o http-walker.o remote-curl.o
--keyword=gettextln --keyword=eval_gettextln
XGETTEXT_FLAGS_PERL = $(XGETTEXT_FLAGS) --keyword=__ --language=Perl
LOCALIZED_C = $(C_OBJ:o=c) $(LIB_H) $(GENERATED_H)
-LOCALIZED_SH = $(SCRIPT_SH) git-parse-remote.sh
+LOCALIZED_SH = $(SCRIPT_SH)
+LOCALIZED_SH += git-parse-remote.sh
+LOCALIZED_SH += git-rebase--interactive.sh
+LOCALIZED_SH += git-sh-setup.sh
LOCALIZED_PERL = $(SCRIPT_PERL)
ifdef XGETTEXT_INCLUDE_TESTS
@echo NO_PERL=\''$(subst ','\'',$(subst ','\'',$(NO_PERL)))'\' >>$@+
@echo NO_PYTHON=\''$(subst ','\'',$(subst ','\'',$(NO_PYTHON)))'\' >>$@+
@echo NO_UNIX_SOCKETS=\''$(subst ','\'',$(subst ','\'',$(NO_UNIX_SOCKETS)))'\' >>$@+
+ @echo PAGER_ENV=\''$(subst ','\'',$(subst ','\'',$(PAGER_ENV)))'\' >>$@+
ifdef TEST_OUTPUT_DIRECTORY
@echo TEST_OUTPUT_DIRECTORY=\''$(subst ','\'',$(subst ','\'',$(TEST_OUTPUT_DIRECTORY)))'\' >>$@+
endif
.PHONY: test perf
-t/helper/test-ctype$X: ctype.o
-
-t/helper/test-date$X: date.o ctype.o
-
-t/helper/test-delta$X: diff-delta.o patch-delta.o
-
-t/helper/test-line-buffer$X: vcs-svn/lib.a
-
-t/helper/test-parse-options$X: parse-options.o parse-options-cb.o
+t/helper/test-line-buffer$X: $(VCSSVN_LIB)
-t/helper/test-svn-fe$X: vcs-svn/lib.a
+t/helper/test-svn-fe$X: $(VCSSVN_LIB)
.PRECIOUS: $(TEST_OBJS)
-Documentation/RelNotes/2.9.0.txt
\ No newline at end of file
+Documentation/RelNotes/2.10.0.txt
\ No newline at end of file
int error_resolve_conflict(const char *me)
{
- error("%s is not possible because you have unmerged files.", me);
+ if (!strcmp(me, "cherry-pick"))
+ error(_("Cherry-picking is not possible because you have unmerged files."));
+ else if (!strcmp(me, "commit"))
+ error(_("Committing is not possible because you have unmerged files."));
+ else if (!strcmp(me, "merge"))
+ error(_("Merging is not possible because you have unmerged files."));
+ else if (!strcmp(me, "pull"))
+ error(_("Pulling is not possible because you have unmerged files."));
+ else if (!strcmp(me, "revert"))
+ error(_("Reverting is not possible because you have unmerged files."));
+ else
+ error(_("It is not possible to %s because you have unmerged files."),
+ me);
+
if (advice_resolve_conflict)
/*
* Message used both when 'git commit' fails and when
void NORETURN die_resolve_conflict(const char *me)
{
error_resolve_conflict(me);
- die("Exiting because of an unresolved conflict.");
+ die(_("Exiting because of an unresolved conflict."));
}
void NORETURN die_conclude_merge(void)
void detach_advice(const char *new_name)
{
- const char fmt[] =
- "Note: checking out '%s'.\n\n"
+ const char *fmt =
+ _("Note: checking out '%s'.\n\n"
"You are in 'detached HEAD' state. You can look around, make experimental\n"
"changes and commit them, and you can discard any commits you make in this\n"
"state without impacting any branches by performing another checkout.\n\n"
"If you want to create a new branch to retain commits you create, you may\n"
"do so (now or later) by using -b with the checkout command again. Example:\n\n"
- " git checkout -b <new-branch-name>\n\n";
+ " git checkout -b <new-branch-name>\n\n");
fprintf(stderr, fmt, new_name);
}
static int write_tar_filter_archive(const struct archiver *ar,
struct archiver_args *args);
+/*
+ * This is the max value that a ustar size header can specify, as it is fixed
+ * at 11 octal digits. POSIX specifies that we switch to extended headers at
+ * this size.
+ *
+ * Likewise for the mtime (which happens to use a buffer of the same size).
+ */
+#if ULONG_MAX == 0xFFFFFFFF
+#define USTAR_MAX_SIZE ULONG_MAX
+#define USTAR_MAX_MTIME ULONG_MAX
+#else
+#define USTAR_MAX_SIZE 077777777777UL
+#define USTAR_MAX_MTIME 077777777777UL
+#endif
+
/* writes out the whole block, but only if it is full */
static void write_if_needed(void)
{
strbuf_addch(sb, '\n');
}
+/*
+ * Like strbuf_append_ext_header, but for numeric values.
+ */
+static void strbuf_append_ext_header_uint(struct strbuf *sb,
+ const char *keyword,
+ uintmax_t value)
+{
+ char buf[40]; /* big enough for 2^128 in decimal, plus NUL */
+ int len;
+
+ len = xsnprintf(buf, sizeof(buf), "%"PRIuMAX, value);
+ strbuf_append_ext_header(sb, keyword, buf, len);
+}
+
static unsigned int ustar_header_chksum(const struct ustar_header *header)
{
const unsigned char *p = (const unsigned char *)header;
xsnprintf(header->chksum, sizeof(header->chksum), "%07o", ustar_header_chksum(header));
}
-static int write_extended_header(struct archiver_args *args,
- const unsigned char *sha1,
- const void *buffer, unsigned long size)
+static void write_extended_header(struct archiver_args *args,
+ const unsigned char *sha1,
+ const void *buffer, unsigned long size)
{
struct ustar_header header;
unsigned int mode;
prepare_header(args, &header, mode, size);
write_blocked(&header, sizeof(header));
write_blocked(buffer, size);
- return 0;
}
static int write_tar_entry(struct archiver_args *args,
struct ustar_header header;
struct strbuf ext_header = STRBUF_INIT;
unsigned int old_mode = mode;
- unsigned long size;
+ unsigned long size, size_in_header;
void *buffer;
int err = 0;
memcpy(header.linkname, buffer, size);
}
- prepare_header(args, &header, mode, size);
+ size_in_header = size;
+ if (S_ISREG(mode) && size > USTAR_MAX_SIZE) {
+ size_in_header = 0;
+ strbuf_append_ext_header_uint(&ext_header, "size", size);
+ }
+
+ prepare_header(args, &header, mode, size_in_header);
if (ext_header.len > 0) {
- err = write_extended_header(args, sha1, ext_header.buf,
- ext_header.len);
- if (err) {
- free(buffer);
- return err;
- }
+ write_extended_header(args, sha1, ext_header.buf,
+ ext_header.len);
}
strbuf_release(&ext_header);
write_blocked(&header, sizeof(header));
return err;
}
-static int write_global_extended_header(struct archiver_args *args)
+static void write_global_extended_header(struct archiver_args *args)
{
const unsigned char *sha1 = args->commit_sha1;
struct strbuf ext_header = STRBUF_INIT;
struct ustar_header header;
unsigned int mode;
- int err = 0;
- strbuf_append_ext_header(&ext_header, "comment", sha1_to_hex(sha1), 40);
+ if (sha1)
+ strbuf_append_ext_header(&ext_header, "comment",
+ sha1_to_hex(sha1), 40);
+ if (args->time > USTAR_MAX_MTIME) {
+ strbuf_append_ext_header_uint(&ext_header, "mtime",
+ args->time);
+ args->time = USTAR_MAX_MTIME;
+ }
+
+ if (!ext_header.len)
+ return;
+
memset(&header, 0, sizeof(header));
*header.typeflag = TYPEFLAG_GLOBAL_HEADER;
mode = 0100666;
write_blocked(&header, sizeof(header));
write_blocked(ext_header.buf, ext_header.len);
strbuf_release(&ext_header);
- return err;
}
static struct archiver **tar_filters;
{
int err = 0;
- if (args->commit_sha1)
- err = write_global_extended_header(args);
- if (!err)
- err = write_archive_entries(args, write_tar_entry);
+ write_global_extended_header(args);
+ err = write_archive_entries(args, write_tar_entry);
if (!err)
write_trailer();
return err;
pathspec.recursive = 1;
ret = read_tree_recursive(tree, "", 0, 0, &pathspec,
reject_entry, &pathspec);
- free_pathspec(&pathspec);
+ clear_pathspec(&pathspec);
return ret != 0;
}
argc = parse_options(argc, argv, NULL, opts, archive_usage, 0);
if (remote)
- die("Unexpected option --remote");
+ die(_("Unexpected option --remote"));
if (exec)
- die("Option --exec can only be used together with --remote");
+ die(_("Option --exec can only be used together with --remote"));
if (output)
- die("Unexpected option --output");
+ die(_("Unexpected option --output"));
if (!base)
base = "";
usage_with_options(archive_usage, opts);
*ar = lookup_archiver(format);
if (!*ar || (is_remote && !((*ar)->flags & ARCHIVER_REMOTE)))
- die("Unknown archive format '%s'", format);
+ die(_("Unknown archive format '%s'"), format);
args->compression_level = Z_DEFAULT_COMPRESSION;
if (compression_level != -1) {
if ((*ar)->flags & ARCHIVER_WANT_COMPRESSION_LEVELS)
args->compression_level = compression_level;
else {
- die("Argument not supported for format '%s': -%d",
+ die(_("Argument not supported for format '%s': -%d"),
format, compression_level);
}
}
FILE *fp = fopen(filename, "r");
if (!fp)
- die_errno("Could not open file '%s'", filename);
+ die_errno(_("Could not open file '%s'"), filename);
while (strbuf_getline_lf(&str, fp) != EOF) {
strbuf_trim(&str);
if (sq_dequote_to_argv_array(str.buf, array))
- die("Badly quoted content in file '%s': %s",
+ die(_("Badly quoted content in file '%s': %s"),
filename, str.buf);
}
printf("There are only 'skip'ped commits left to test.\n"
"The first %s commit could be any of:\n", term_bad);
- print_commit_list(tried, "%s\n", "%s\n");
+
+ for ( ; tried; tried = tried->next)
+ printf("%s\n", oid_to_hex(&tried->item->object.oid));
+
if (bad)
printf("%s\n", oid_to_hex(bad));
- printf("We cannot bisect more!\n");
+ printf(_("We cannot bisect more!\n"));
exit(2);
}
{
struct commit *r = lookup_commit_reference(sha1);
if (!r)
- die("Not a valid commit name %s", sha1_to_hex(sha1));
+ die(_("Not a valid commit name %s"), sha1_to_hex(sha1));
return r;
}
char *bad_hex = oid_to_hex(current_bad_oid);
char *good_hex = join_sha1_array_hex(&good_revs, ' ');
if (!strcmp(term_bad, "bad") && !strcmp(term_good, "good")) {
- fprintf(stderr, "The merge base %s is bad.\n"
+ fprintf(stderr, _("The merge base %s is bad.\n"
"This means the bug has been fixed "
- "between %s and [%s].\n",
+ "between %s and [%s].\n"),
bad_hex, bad_hex, good_hex);
} else if (!strcmp(term_bad, "new") && !strcmp(term_good, "old")) {
- fprintf(stderr, "The merge base %s is new.\n"
+ fprintf(stderr, _("The merge base %s is new.\n"
"The property has changed "
- "between %s and [%s].\n",
+ "between %s and [%s].\n"),
bad_hex, bad_hex, good_hex);
} else {
- fprintf(stderr, "The merge base %s is %s.\n"
+ fprintf(stderr, _("The merge base %s is %s.\n"
"This means the first '%s' commit is "
- "between %s and [%s].\n",
+ "between %s and [%s].\n"),
bad_hex, term_bad, term_good, bad_hex, good_hex);
}
exit(3);
}
- fprintf(stderr, "Some %s revs are not ancestor of the %s rev.\n"
+ fprintf(stderr, _("Some %s revs are not ancestor of the %s rev.\n"
"git bisect cannot work properly in this case.\n"
- "Maybe you mistook %s and %s revs?\n",
+ "Maybe you mistook %s and %s revs?\n"),
term_good, term_bad, term_good, term_bad);
exit(1);
}
static void handle_skipped_merge_base(const unsigned char *mb)
{
char *mb_hex = sha1_to_hex(mb);
- char *bad_hex = sha1_to_hex(current_bad_oid->hash);
+ char *bad_hex = oid_to_hex(current_bad_oid);
char *good_hex = join_sha1_array_hex(&good_revs, ' ');
- warning("the merge base between %s and [%s] "
+ warning(_("the merge base between %s and [%s] "
"must be skipped.\n"
"So we cannot be sure the first %s commit is "
"between %s and %s.\n"
- "We continue anyway.",
+ "We continue anyway."),
bad_hex, good_hex, term_bad, mb_hex, bad_hex);
free(good_hex);
}
} else if (0 <= sha1_array_lookup(&skipped_revs, mb)) {
handle_skipped_merge_base(mb);
} else {
- printf("Bisecting: a merge base must be tested\n");
+ printf(_("Bisecting: a merge base must be tested\n"));
exit(bisect_checkout(mb, no_checkout));
}
}
int fd;
if (!current_bad_oid)
- die("a %s revision is needed", term_bad);
+ die(_("a %s revision is needed"), term_bad);
/* Check if file BISECT_ANCESTORS_OK exists. */
if (!stat(filename, &st) && S_ISREG(st.st_mode))
/* Create file BISECT_ANCESTORS_OK. */
fd = open(filename, O_CREAT | O_TRUNC | O_WRONLY, 0600);
if (fd < 0)
- warning_errno("could not create file '%s'",
+ warning_errno(_("could not create file '%s'"),
filename);
else
close(fd);
if (!opt.diffopt.output_format)
opt.diffopt.output_format = DIFF_FORMAT_RAW;
+ setup_revisions(0, NULL, &opt, NULL);
log_tree_commit(&opt, commit);
}
*read_good = "good";
return;
} else {
- die_errno("could not read file '%s'", filename);
+ die_errno(_("could not read file '%s'"), filename);
}
} else {
strbuf_getline_lf(&str, fp);
struct commit_list *tried;
int reaches = 0, all = 0, nr, steps;
const unsigned char *bisect_rev;
+ char steps_msg[32];
read_bisect_terms(&term_bad, &term_good);
if (read_bisect_refs())
- die("reading bisect refs failed");
+ die(_("reading bisect refs failed"));
check_good_are_ancestors_of_bad(prefix, no_checkout);
*/
exit_if_skipped_commits(tried, NULL);
- printf("%s was both %s and %s\n",
+ printf(_("%s was both %s and %s\n"),
oid_to_hex(current_bad_oid),
term_good,
term_bad);
}
if (!all) {
- fprintf(stderr, "No testable commit found.\n"
- "Maybe you started with bad path parameters?\n");
+ fprintf(stderr, _("No testable commit found.\n"
+ "Maybe you started with bad path parameters?\n"));
exit(4);
}
nr = all - reaches - 1;
steps = estimate_bisect_steps(all);
- printf("Bisecting: %d revision%s left to test after this "
- "(roughly %d step%s)\n", nr, (nr == 1 ? "" : "s"),
- steps, (steps == 1 ? "" : "s"));
+ xsnprintf(steps_msg, sizeof(steps_msg),
+ Q_("(roughly %d step)", "(roughly %d steps)", steps),
+ steps);
+ /* TRANSLATORS: the last %s will be replaced with
+ "(roughly %d steps)" translation */
+ printf(Q_("Bisecting: %d revision left to test after this %s\n",
+ "Bisecting: %d revisions left to test after this %s\n",
+ nr), nr, steps_msg);
return bisect_checkout(bisect_rev, no_checkout);
}
static int take_worktree_changes;
struct update_callback_data {
- int flags;
+ int flags, force_mode;
int add_errors;
};
die(_("unexpected diff status %c"), p->status);
case DIFF_STATUS_MODIFIED:
case DIFF_STATUS_TYPE_CHANGED:
- if (add_file_to_index(&the_index, path, data->flags)) {
+ if (add_file_to_index(&the_index, path,
+ data->flags, data->force_mode)) {
if (!(data->flags & ADD_CACHE_IGNORE_ERRORS))
die(_("updating files failed"));
data->add_errors++;
}
}
-int add_files_to_cache(const char *prefix,
- const struct pathspec *pathspec, int flags)
+int add_files_to_cache(const char *prefix, const struct pathspec *pathspec,
+ int flags, int force_mode)
{
struct update_callback_data data;
struct rev_info rev;
memset(&data, 0, sizeof(data));
data.flags = flags;
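+	/*
+	 * force_mode is 0 to keep the mode detected in the working tree,
+	 * or 0666/0777 when "git add --chmod=-x/+x" overrides it.
+	 */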
+ data.force_mode = force_mode;
init_revisions(&rev, prefix);
setup_revisions(0, NULL, &rev, NULL);
static int addremove = ADDREMOVE_DEFAULT;
static int addremove_explicit = -1; /* unspecified */
+static char *chmod_arg;
+
static int ignore_removal_cb(const struct option *opt, const char *arg, int unset)
{
/* if we are told to ignore, we are not adding removals */
OPT_BOOL( 0 , "refresh", &refresh_only, N_("don't add, only refresh the index")),
OPT_BOOL( 0 , "ignore-errors", &ignore_add_errors, N_("just skip files which cannot be added because of errors")),
OPT_BOOL( 0 , "ignore-missing", &ignore_missing, N_("check if - even missing - files are ignored in dry run")),
+ OPT_STRING( 0 , "chmod", &chmod_arg, N_("(+/-)x"), N_("override the executable bit of the listed files")),
OPT_END(),
};
return git_default_config(var, value, cb);
}
-static int add_files(struct dir_struct *dir, int flags)
+static int add_files(struct dir_struct *dir, int flags, int force_mode)
{
int i, exit_status = 0;
}
for (i = 0; i < dir->nr; i++)
- if (add_file_to_cache(dir->entries[i]->name, flags)) {
+ if (add_file_to_index(&the_index, dir->entries[i]->name,
+ flags, force_mode)) {
if (!ignore_add_errors)
die(_("adding files failed"));
exit_status = 1;
int exit_status = 0;
struct pathspec pathspec;
struct dir_struct dir;
- int flags;
+ int flags, force_mode;
int add_new_files;
int require_pathspec;
char *seen = NULL;
if (!show_only && ignore_missing)
die(_("Option --ignore-missing can only be used together with --dry-run"));
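+	/*
+	 * Map the --chmod argument onto a whole-file mode: "+x" forces
+	 * 0777 (executable), "-x" forces 0666 (not executable), and no
+	 * option at all keeps whatever mode is found in the working tree.
+	 * For example: "git add --chmod=+x <file>".
+	 */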
+ if (!chmod_arg)
+ force_mode = 0;
+ else if (!strcmp(chmod_arg, "-x"))
+ force_mode = 0666;
+ else if (!strcmp(chmod_arg, "+x"))
+ force_mode = 0777;
+ else
+ die(_("--chmod param '%s' must be either -x or +x"), chmod_arg);
+
add_new_files = !take_worktree_changes && !refresh_only;
require_pathspec = !(take_worktree_changes || (0 < addremove_explicit));
plug_bulk_checkin();
- exit_status |= add_files_to_cache(prefix, &pathspec, flags);
+ exit_status |= add_files_to_cache(prefix, &pathspec, flags, force_mode);
if (add_new_files)
- exit_status |= add_files(&dir, flags);
+ exit_status |= add_files(&dir, flags, force_mode);
unplug_bulk_checkin();
PATCH_FORMAT_MBOX,
PATCH_FORMAT_STGIT,
PATCH_FORMAT_STGIT_SERIES,
- PATCH_FORMAT_HG
+ PATCH_FORMAT_HG,
+ PATCH_FORMAT_MBOXRD
};
enum keep_type {
/**
* For convenience to call write_file()
*/
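+/* write_file() dies on failure, so there is nothing left to report: void is enough. */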
-static int write_state_text(const struct am_state *state,
- const char *name, const char *string)
+static void write_state_text(const struct am_state *state,
+ const char *name, const char *string)
{
- return write_file(am_path(state, name), "%s", string);
+ write_file(am_path(state, name), "%s", string);
}
-static int write_state_count(const struct am_state *state,
- const char *name, int value)
+static void write_state_count(const struct am_state *state,
+ const char *name, int value)
{
- return write_file(am_path(state, name), "%d", value);
+ write_file(am_path(state, name), "%d", value);
}
-static int write_state_bool(const struct am_state *state,
- const char *name, int value)
+static void write_state_bool(const struct am_state *state,
+ const char *name, int value)
{
- return write_state_text(state, name, value ? "t" : "f");
+ write_state_text(state, name, value ? "t" : "f");
}
/**
*/
static void write_commit_msg(const struct am_state *state)
{
- int fd;
const char *filename = am_path(state, "final-commit");
-
- fd = xopen(filename, O_WRONLY | O_CREAT, 0666);
- if (write_in_full(fd, state->msg, state->msg_len) < 0)
- die_errno(_("could not write to %s"), filename);
- close(fd);
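+	/* write_file_buf() dies on error, so no explicit error handling is needed here. */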
+ write_file_buf(filename, state->msg, state->msg_len);
}
/**
* Splits out individual email patches from `paths`, where each path is either
* a mbox file or a Maildir. Returns 0 on success, -1 on failure.
*/
-static int split_mail_mbox(struct am_state *state, const char **paths, int keep_cr)
+static int split_mail_mbox(struct am_state *state, const char **paths,
+ int keep_cr, int mboxrd)
{
struct child_process cp = CHILD_PROCESS_INIT;
struct strbuf last = STRBUF_INIT;
argv_array_push(&cp.args, "-b");
if (keep_cr)
argv_array_push(&cp.args, "--keep-cr");
+ if (mboxrd)
+ argv_array_push(&cp.args, "--mboxrd");
argv_array_push(&cp.args, "--");
argv_array_pushv(&cp.args, paths);
switch (patch_format) {
case PATCH_FORMAT_MBOX:
- return split_mail_mbox(state, paths, keep_cr);
+ return split_mail_mbox(state, paths, keep_cr, 0);
case PATCH_FORMAT_STGIT:
return split_mail_conv(stgit_patch_to_mail, state, paths, keep_cr);
case PATCH_FORMAT_STGIT_SERIES:
return split_mail_stgit_series(state, paths, keep_cr);
case PATCH_FORMAT_HG:
return split_mail_conv(hg_patch_to_mail, state, paths, keep_cr);
+ case PATCH_FORMAT_MBOXRD:
+ return split_mail_mbox(state, paths, keep_cr, 1);
default:
die("BUG: invalid patch_format");
}
return 0;
}
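+/*
+ * "mboxrd" is an mbox variant in which body lines matching ">*From " are
+ * quoted with an extra ">" so that "From " never has to be mangled; the
+ * --mboxrd flag tells "git mailsplit" to treat its input accordingly.
+ * It is selected with "git am --patch-format=mboxrd".
+ */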
-/**
- * Do the three-way merge using fake ancestor, his tree constructed
- * from the fake ancestor and the postimage of the patch, and our
- * state.
- */
-static int run_fallback_merge_recursive(const struct am_state *state,
- unsigned char *orig_tree,
- unsigned char *our_tree,
- unsigned char *his_tree)
-{
- struct child_process cp = CHILD_PROCESS_INIT;
- int status;
-
- cp.git_cmd = 1;
-
- argv_array_pushf(&cp.env_array, "GITHEAD_%s=%.*s",
- sha1_to_hex(his_tree), linelen(state->msg), state->msg);
- if (state->quiet)
- argv_array_push(&cp.env_array, "GIT_MERGE_VERBOSITY=0");
-
- argv_array_push(&cp.args, "merge-recursive");
- argv_array_push(&cp.args, sha1_to_hex(orig_tree));
- argv_array_push(&cp.args, "--");
- argv_array_push(&cp.args, sha1_to_hex(our_tree));
- argv_array_push(&cp.args, sha1_to_hex(his_tree));
-
- status = run_command(&cp) ? (-1) : 0;
- discard_cache();
- read_cache();
- return status;
-}
-
/**
* Attempt a threeway merge, using index_path as the temporary index.
*/
static int fall_back_threeway(const struct am_state *state, const char *index_path)
{
- unsigned char orig_tree[GIT_SHA1_RAWSZ], his_tree[GIT_SHA1_RAWSZ],
- our_tree[GIT_SHA1_RAWSZ];
+ struct object_id orig_tree, their_tree, our_tree;
+ const struct object_id *bases[1] = { &orig_tree };
+ struct merge_options o;
+ struct commit *result;
+ char *their_tree_name;
- if (get_sha1("HEAD", our_tree) < 0)
- hashcpy(our_tree, EMPTY_TREE_SHA1_BIN);
+ if (get_oid("HEAD", &our_tree) < 0)
+ hashcpy(our_tree.hash, EMPTY_TREE_SHA1_BIN);
if (build_fake_ancestor(state, index_path))
return error("could not build fake ancestor");
discard_cache();
read_cache_from(index_path);
- if (write_index_as_tree(orig_tree, &the_index, index_path, 0, NULL))
+ if (write_index_as_tree(orig_tree.hash, &the_index, index_path, 0, NULL))
return error(_("Repository lacks necessary blobs to fall back on 3-way merge."));
say(state, stdout, _("Using index info to reconstruct a base tree..."));
init_revisions(&rev_info, NULL);
rev_info.diffopt.output_format = DIFF_FORMAT_NAME_STATUS;
diff_opt_parse(&rev_info.diffopt, &diff_filter_str, 1, rev_info.prefix);
- add_pending_sha1(&rev_info, "HEAD", our_tree, 0);
+ add_pending_sha1(&rev_info, "HEAD", our_tree.hash, 0);
diff_setup_done(&rev_info.diffopt);
run_diff_index(&rev_info, 1);
}
return error(_("Did you hand edit your patch?\n"
"It does not apply to blobs recorded in its index."));
- if (write_index_as_tree(his_tree, &the_index, index_path, 0, NULL))
+ if (write_index_as_tree(their_tree.hash, &the_index, index_path, 0, NULL))
return error("could not write tree");
say(state, stdout, _("Falling back to patching base and 3-way merge..."));
/*
* This is not so wrong. Depending on which base we picked, orig_tree
- * may be wildly different from ours, but his_tree has the same set of
+ * may be wildly different from ours, but their_tree has the same set of
* wildly different changes in parts the patch did not touch, so
* recursive ends up canceling them, saying that we reverted all those
* changes.
*/
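+	/*
+	 * Run the recursive merge in-process with merge_recursive_generic()
+	 * instead of spawning "git merge-recursive": the fake ancestor tree
+	 * is the single merge base, HEAD is branch1 and the tree built from
+	 * the patch is branch2.
+	 */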
- if (run_fallback_merge_recursive(state, orig_tree, our_tree, his_tree)) {
+ init_merge_options(&o);
+
+ o.branch1 = "HEAD";
+ their_tree_name = xstrfmt("%.*s", linelen(state->msg), state->msg);
+ o.branch2 = their_tree_name;
+
+ if (state->quiet)
+ o.verbosity = 0;
+
+ if (merge_recursive_generic(&o, &our_tree, &their_tree, 1, bases, &result)) {
rerere(state->allow_rerere_autoupdate);
+ free(their_tree_name);
return error(_("Failed to merge in the changes."));
}
+ free(their_tree_name);
return 0;
}
const char *mail = am_path(state, msgnum(state));
int apply_status;
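+	/*
+	 * Clear the cached default date so the commit created for this
+	 * patch gets a fresh timestamp instead of reusing an earlier one.
+	 */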
+ reset_ident_date();
+
if (!file_exists(mail))
goto next;
*opt_value = PATCH_FORMAT_STGIT_SERIES;
else if (!strcmp(arg, "hg"))
*opt_value = PATCH_FORMAT_HG;
+ else if (!strcmp(arg, "mboxrd"))
+ *opt_value = PATCH_FORMAT_MBOXRD;
else
return error(_("Invalid value for --patch-format: %s"), arg);
return 0;
#include "ll-merge.h"
#include "rerere.h"
-/*
- * --check turns on checking that the working tree matches the
- * files that are being modified, but doesn't apply the patch
- * --stat does just a diffstat, and doesn't actually apply
- * --numstat does numeric diffstat, and doesn't actually apply
- * --index-info shows the old and new index info for paths if available.
- * --index updates the cache as well.
- * --cached updates only the cache without ever touching the working tree.
- */
-static const char *prefix;
-static int prefix_length = -1;
-static int newfd = -1;
-
-static int unidiff_zero;
-static int p_value = 1;
-static int p_value_known;
-static int check_index;
-static int update_index;
-static int cached;
-static int diffstat;
-static int numstat;
-static int summary;
-static int check;
-static int apply = 1;
-static int apply_in_reverse;
-static int apply_with_reject;
-static int apply_verbosely;
-static int allow_overlap;
-static int no_add;
-static int threeway;
-static int unsafe_paths;
-static const char *fake_ancestor;
-static int line_termination = '\n';
-static unsigned int p_context = UINT_MAX;
-static const char * const apply_usage[] = {
- N_("git apply [<options>] [<patch>...]"),
- NULL
-};
-
-static enum ws_error_action {
+enum ws_error_action {
nowarn_ws_error,
warn_on_ws_error,
die_on_ws_error,
correct_ws_error
-} ws_error_action = warn_on_ws_error;
-static int whitespace_error;
-static int squelch_whitespace_errors = 5;
-static int applied_after_fixing_ws;
+};
+
-static enum ws_ignore {
+enum ws_ignore {
ignore_ws_none,
ignore_ws_change
-} ws_ignore_action = ignore_ws_none;
+};
+
+/*
+ * We need to keep track of how symlinks in the preimage are
+ * manipulated by the patches. A patch to add a/b/c where a/b
+ * is a symlink should not be allowed to affect the directory
+ * the symlink points at, but if the same patch removes a/b,
+ * it is perfectly fine, as the patch removes a/b to make room
+ * to create a directory a/b so that a/b/c can be created.
+ *
+ * See also "struct string_list symlink_changes" in "struct
+ * apply_state".
+ */
+#define SYMLINK_GOES_AWAY 01
+#define SYMLINK_IN_RESULT 02
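+/*
+ * These bits are OR'ed into the util field of the symlink_changes
+ * entries, one string_list entry per path.
+ */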
+struct apply_state {
+ const char *prefix;
+ int prefix_length;
+
+ /* These are lock_file related */
+ struct lock_file *lock_file;
+ int newfd;
+
+ /* These control what gets looked at and modified */
+ int apply; /* this is not a dry-run */
+ int cached; /* apply to the index only */
+ int check; /* preimage must match working tree, don't actually apply */
+ int check_index; /* preimage must match the indexed version */
+ int update_index; /* check_index && apply */
+
+ /* These control cosmetic aspect of the output */
+ int diffstat; /* just show a diffstat, and don't actually apply */
+ int numstat; /* just show a numeric diffstat, and don't actually apply */
+ int summary; /* just report creation, deletion, etc, and don't actually apply */
+
+ /* These boolean parameters control how the apply is done */
+ int allow_overlap;
+ int apply_in_reverse;
+ int apply_with_reject;
+ int apply_verbosely;
+ int no_add;
+ int threeway;
+ int unidiff_zero;
+ int unsafe_paths;
+
+ /* Other non boolean parameters */
+ const char *fake_ancestor;
+ const char *patch_input_file;
+ int line_termination;
+ struct strbuf root;
+ int p_value;
+ int p_value_known;
+ unsigned int p_context;
+
+ /* Exclude and include path parameters */
+ struct string_list limit_by_name;
+ int has_include;
+
+ /* Various "current state" */
+ int linenr; /* current line number */
+ struct string_list symlink_changes; /* we have to track symlinks */
-static const char *patch_input_file;
-static struct strbuf root = STRBUF_INIT;
-static int read_stdin = 1;
-static int options;
+ /*
+ * For "diff-stat" like behaviour, we keep track of the biggest change
+ * we've seen, and the longest filename. That allows us to do simple
+ * scaling.
+ */
+ int max_change;
+ int max_len;
-static void parse_whitespace_option(const char *option)
+ /*
+ * Records filenames that have been touched, in order to handle
+	 * the case where more than one patch touches the same file.
+ */
+ struct string_list fn_table;
+
+ /* These control whitespace errors */
+ enum ws_error_action ws_error_action;
+ enum ws_ignore ws_ignore_action;
+ const char *whitespace_option;
+ int whitespace_error;
+ int squelch_whitespace_errors;
+ int applied_after_fixing_ws;
+};
+
+static const char * const apply_usage[] = {
+ N_("git apply [<options>] [<patch>...]"),
+ NULL
+};
+
+static void parse_whitespace_option(struct apply_state *state, const char *option)
{
if (!option) {
- ws_error_action = warn_on_ws_error;
+ state->ws_error_action = warn_on_ws_error;
return;
}
if (!strcmp(option, "warn")) {
- ws_error_action = warn_on_ws_error;
+ state->ws_error_action = warn_on_ws_error;
return;
}
if (!strcmp(option, "nowarn")) {
- ws_error_action = nowarn_ws_error;
+ state->ws_error_action = nowarn_ws_error;
return;
}
if (!strcmp(option, "error")) {
- ws_error_action = die_on_ws_error;
+ state->ws_error_action = die_on_ws_error;
return;
}
if (!strcmp(option, "error-all")) {
- ws_error_action = die_on_ws_error;
- squelch_whitespace_errors = 0;
+ state->ws_error_action = die_on_ws_error;
+ state->squelch_whitespace_errors = 0;
return;
}
if (!strcmp(option, "strip") || !strcmp(option, "fix")) {
- ws_error_action = correct_ws_error;
+ state->ws_error_action = correct_ws_error;
return;
}
die(_("unrecognized whitespace option '%s'"), option);
}
-static void parse_ignorewhitespace_option(const char *option)
+static void parse_ignorewhitespace_option(struct apply_state *state,
+ const char *option)
{
if (!option || !strcmp(option, "no") ||
!strcmp(option, "false") || !strcmp(option, "never") ||
!strcmp(option, "none")) {
- ws_ignore_action = ignore_ws_none;
+ state->ws_ignore_action = ignore_ws_none;
return;
}
if (!strcmp(option, "change")) {
- ws_ignore_action = ignore_ws_change;
+ state->ws_ignore_action = ignore_ws_change;
return;
}
die(_("unrecognized whitespace ignore option '%s'"), option);
}
-static void set_default_whitespace_mode(const char *whitespace_option)
+static void set_default_whitespace_mode(struct apply_state *state)
{
- if (!whitespace_option && !apply_default_whitespace)
- ws_error_action = (apply ? warn_on_ws_error : nowarn_ws_error);
+ if (!state->whitespace_option && !apply_default_whitespace)
+ state->ws_error_action = (state->apply ? warn_on_ws_error : nowarn_ws_error);
}
-/*
- * For "diff-stat" like behaviour, we keep track of the biggest change
- * we've seen, and the longest filename. That allows us to do simple
- * scaling.
- */
-static int max_change, max_len;
-
-/*
- * Various "current state", notably line numbers and what
- * file (and how) we're patching right now.. The "is_xxxx"
- * things are flags, where -1 means "don't know yet".
- */
-static int linenr = 1;
-
/*
* This represents one "hunk" from a patch, starting with
* "@@ -oldpos,oldlines +newpos,newlines @@" marker. The
struct line *line;
};
-/*
- * Records filenames that have been touched, in order to handle
- * the case where more than one patches touch the same file.
- */
-
-static struct string_list fn_table;
-
static uint32_t hash_line(const char *cp, size_t len)
{
size_t i;
return name;
}
-static char *find_name_gnu(const char *line, const char *def, int p_value)
+static char *find_name_gnu(struct apply_state *state,
+ const char *line,
+ const char *def,
+ int p_value)
{
struct strbuf name = STRBUF_INIT;
char *cp;
}
strbuf_remove(&name, 0, cp - name.buf);
- if (root.len)
- strbuf_insert(&name, 0, root.buf, root.len);
+ if (state->root.len)
+ strbuf_insert(&name, 0, state->root.buf, state->root.len);
return squash_slash(strbuf_detach(&name, NULL));
}
return line + len - end;
}
-static char *find_name_common(const char *line, const char *def,
- int p_value, const char *end, int terminate)
+static char *find_name_common(struct apply_state *state,
+ const char *line,
+ const char *def,
+ int p_value,
+ const char *end,
+ int terminate)
{
int len;
const char *start = NULL;
return squash_slash(xstrdup(def));
}
- if (root.len) {
- char *ret = xstrfmt("%s%.*s", root.buf, len, start);
+ if (state->root.len) {
+ char *ret = xstrfmt("%s%.*s", state->root.buf, len, start);
return squash_slash(ret);
}
return squash_slash(xmemdupz(start, len));
}
-static char *find_name(const char *line, char *def, int p_value, int terminate)
+static char *find_name(struct apply_state *state,
+ const char *line,
+ char *def,
+ int p_value,
+ int terminate)
{
if (*line == '"') {
- char *name = find_name_gnu(line, def, p_value);
+ char *name = find_name_gnu(state, line, def, p_value);
if (name)
return name;
}
- return find_name_common(line, def, p_value, NULL, terminate);
+ return find_name_common(state, line, def, p_value, NULL, terminate);
}
-static char *find_name_traditional(const char *line, char *def, int p_value)
+static char *find_name_traditional(struct apply_state *state,
+ const char *line,
+ char *def,
+ int p_value)
{
size_t len;
size_t date_len;
if (*line == '"') {
- char *name = find_name_gnu(line, def, p_value);
+ char *name = find_name_gnu(state, line, def, p_value);
if (name)
return name;
}
len = strchrnul(line, '\n') - line;
date_len = diff_timestamp_len(line, len);
if (!date_len)
- return find_name_common(line, def, p_value, NULL, TERM_TAB);
+ return find_name_common(state, line, def, p_value, NULL, TERM_TAB);
len -= date_len;
- return find_name_common(line, def, p_value, line + len, 0);
+ return find_name_common(state, line, def, p_value, line + len, 0);
}
static int count_slashes(const char *cp)
* Given the string after "--- " or "+++ ", guess the appropriate
* p_value for the given patch.
*/
-static int guess_p_value(const char *nameline)
+static int guess_p_value(struct apply_state *state, const char *nameline)
{
char *name, *cp;
int val = -1;
if (is_dev_null(nameline))
return -1;
- name = find_name_traditional(nameline, NULL, 0);
+ name = find_name_traditional(state, nameline, NULL, 0);
if (!name)
return -1;
cp = strchr(name, '/');
if (!cp)
val = 0;
- else if (prefix) {
+ else if (state->prefix) {
/*
* Does it begin with "a/$our-prefix" and such? Then this is
* very likely to apply to our directory.
*/
- if (!strncmp(name, prefix, prefix_length))
- val = count_slashes(prefix);
+ if (!strncmp(name, state->prefix, state->prefix_length))
+ val = count_slashes(state->prefix);
else {
cp++;
- if (!strncmp(cp, prefix, prefix_length))
- val = count_slashes(prefix) + 1;
+ if (!strncmp(cp, state->prefix, state->prefix_length))
+ val = count_slashes(state->prefix) + 1;
}
}
free(name);
* files, we can happily check the index for a match, but for creating a
* new file we should try to match whatever "patch" does. I have no idea.
*/
-static void parse_traditional_patch(const char *first, const char *second, struct patch *patch)
+static void parse_traditional_patch(struct apply_state *state,
+ const char *first,
+ const char *second,
+ struct patch *patch)
{
char *name;
first += 4; /* skip "--- " */
second += 4; /* skip "+++ " */
- if (!p_value_known) {
+ if (!state->p_value_known) {
int p, q;
- p = guess_p_value(first);
- q = guess_p_value(second);
+ p = guess_p_value(state, first);
+ q = guess_p_value(state, second);
if (p < 0) p = q;
if (0 <= p && p == q) {
- p_value = p;
- p_value_known = 1;
+ state->p_value = p;
+ state->p_value_known = 1;
}
}
if (is_dev_null(first)) {
patch->is_new = 1;
patch->is_delete = 0;
- name = find_name_traditional(second, NULL, p_value);
+ name = find_name_traditional(state, second, NULL, state->p_value);
patch->new_name = name;
} else if (is_dev_null(second)) {
patch->is_new = 0;
patch->is_delete = 1;
- name = find_name_traditional(first, NULL, p_value);
+ name = find_name_traditional(state, first, NULL, state->p_value);
patch->old_name = name;
} else {
char *first_name;
- first_name = find_name_traditional(first, NULL, p_value);
- name = find_name_traditional(second, first_name, p_value);
+ first_name = find_name_traditional(state, first, NULL, state->p_value);
+ name = find_name_traditional(state, second, first_name, state->p_value);
free(first_name);
if (has_epoch_timestamp(first)) {
patch->is_new = 1;
}
}
if (!name)
- die(_("unable to find filename in patch at line %d"), linenr);
+ die(_("unable to find filename in patch at line %d"), state->linenr);
}
-static int gitdiff_hdrend(const char *line, struct patch *patch)
+static int gitdiff_hdrend(struct apply_state *state,
+ const char *line,
+ struct patch *patch)
{
return -1;
}
#define DIFF_OLD_NAME 0
#define DIFF_NEW_NAME 1
-static char *gitdiff_verify_name(const char *line, int isnull, char *orig_name, int side)
+static void gitdiff_verify_name(struct apply_state *state,
+ const char *line,
+ int isnull,
+ char **name,
+ int side)
{
- if (!orig_name && !isnull)
- return find_name(line, NULL, p_value, TERM_TAB);
+ if (!*name && !isnull) {
+ *name = find_name(state, line, NULL, state->p_value, TERM_TAB);
+ return;
+ }
- if (orig_name) {
- int len = strlen(orig_name);
+ if (*name) {
+ int len = strlen(*name);
char *another;
if (isnull)
die(_("git apply: bad git-diff - expected /dev/null, got %s on line %d"),
- orig_name, linenr);
- another = find_name(line, NULL, p_value, TERM_TAB);
- if (!another || memcmp(another, orig_name, len + 1))
+ *name, state->linenr);
+ another = find_name(state, line, NULL, state->p_value, TERM_TAB);
+ if (!another || memcmp(another, *name, len + 1))
die((side == DIFF_NEW_NAME) ?
_("git apply: bad git-diff - inconsistent new filename on line %d") :
- _("git apply: bad git-diff - inconsistent old filename on line %d"), linenr);
+ _("git apply: bad git-diff - inconsistent old filename on line %d"), state->linenr);
free(another);
- return orig_name;
} else {
/* expect "/dev/null" */
if (memcmp("/dev/null", line, 9) || line[9] != '\n')
- die(_("git apply: bad git-diff - expected /dev/null on line %d"), linenr);
- return NULL;
+ die(_("git apply: bad git-diff - expected /dev/null on line %d"), state->linenr);
}
}
-static int gitdiff_oldname(const char *line, struct patch *patch)
+static int gitdiff_oldname(struct apply_state *state,
+ const char *line,
+ struct patch *patch)
{
- patch->old_name = gitdiff_verify_name(line, patch->is_new, patch->old_name,
- DIFF_OLD_NAME);
+ gitdiff_verify_name(state, line,
+ patch->is_new, &patch->old_name,
+ DIFF_OLD_NAME);
return 0;
}
-static int gitdiff_newname(const char *line, struct patch *patch)
+static int gitdiff_newname(struct apply_state *state,
+ const char *line,
+ struct patch *patch)
{
- patch->new_name = gitdiff_verify_name(line, patch->is_delete, patch->new_name,
- DIFF_NEW_NAME);
+ gitdiff_verify_name(state, line,
+ patch->is_delete, &patch->new_name,
+ DIFF_NEW_NAME);
return 0;
}
-static int gitdiff_oldmode(const char *line, struct patch *patch)
+static int gitdiff_oldmode(struct apply_state *state,
+ const char *line,
+ struct patch *patch)
{
patch->old_mode = strtoul(line, NULL, 8);
return 0;
}
-static int gitdiff_newmode(const char *line, struct patch *patch)
+static int gitdiff_newmode(struct apply_state *state,
+ const char *line,
+ struct patch *patch)
{
patch->new_mode = strtoul(line, NULL, 8);
return 0;
}
-static int gitdiff_delete(const char *line, struct patch *patch)
+static int gitdiff_delete(struct apply_state *state,
+ const char *line,
+ struct patch *patch)
{
patch->is_delete = 1;
free(patch->old_name);
patch->old_name = xstrdup_or_null(patch->def_name);
- return gitdiff_oldmode(line, patch);
+ return gitdiff_oldmode(state, line, patch);
}
-static int gitdiff_newfile(const char *line, struct patch *patch)
+static int gitdiff_newfile(struct apply_state *state,
+ const char *line,
+ struct patch *patch)
{
patch->is_new = 1;
free(patch->new_name);
patch->new_name = xstrdup_or_null(patch->def_name);
- return gitdiff_newmode(line, patch);
+ return gitdiff_newmode(state, line, patch);
}
-static int gitdiff_copysrc(const char *line, struct patch *patch)
+static int gitdiff_copysrc(struct apply_state *state,
+ const char *line,
+ struct patch *patch)
{
patch->is_copy = 1;
free(patch->old_name);
- patch->old_name = find_name(line, NULL, p_value ? p_value - 1 : 0, 0);
+ patch->old_name = find_name(state, line, NULL, state->p_value ? state->p_value - 1 : 0, 0);
return 0;
}
-static int gitdiff_copydst(const char *line, struct patch *patch)
+static int gitdiff_copydst(struct apply_state *state,
+ const char *line,
+ struct patch *patch)
{
patch->is_copy = 1;
free(patch->new_name);
- patch->new_name = find_name(line, NULL, p_value ? p_value - 1 : 0, 0);
+ patch->new_name = find_name(state, line, NULL, state->p_value ? state->p_value - 1 : 0, 0);
return 0;
}
-static int gitdiff_renamesrc(const char *line, struct patch *patch)
+static int gitdiff_renamesrc(struct apply_state *state,
+ const char *line,
+ struct patch *patch)
{
patch->is_rename = 1;
free(patch->old_name);
- patch->old_name = find_name(line, NULL, p_value ? p_value - 1 : 0, 0);
+ patch->old_name = find_name(state, line, NULL, state->p_value ? state->p_value - 1 : 0, 0);
return 0;
}
-static int gitdiff_renamedst(const char *line, struct patch *patch)
+static int gitdiff_renamedst(struct apply_state *state,
+ const char *line,
+ struct patch *patch)
{
patch->is_rename = 1;
free(patch->new_name);
- patch->new_name = find_name(line, NULL, p_value ? p_value - 1 : 0, 0);
+ patch->new_name = find_name(state, line, NULL, state->p_value ? state->p_value - 1 : 0, 0);
return 0;
}
-static int gitdiff_similarity(const char *line, struct patch *patch)
+static int gitdiff_similarity(struct apply_state *state,
+ const char *line,
+ struct patch *patch)
{
unsigned long val = strtoul(line, NULL, 10);
if (val <= 100)
return 0;
}
-static int gitdiff_dissimilarity(const char *line, struct patch *patch)
+static int gitdiff_dissimilarity(struct apply_state *state,
+ const char *line,
+ struct patch *patch)
{
unsigned long val = strtoul(line, NULL, 10);
if (val <= 100)
return 0;
}
-static int gitdiff_index(const char *line, struct patch *patch)
+static int gitdiff_index(struct apply_state *state,
+ const char *line,
+ struct patch *patch)
{
/*
* index line is N hexadecimal, "..", N hexadecimal,
* This is normal for a diff that doesn't change anything: we'll fall through
* into the next diff. Tell the parser to break out.
*/
-static int gitdiff_unrecognized(const char *line, struct patch *patch)
+static int gitdiff_unrecognized(struct apply_state *state,
+ const char *line,
+ struct patch *patch)
{
return -1;
}
* Skip p_value leading components from "line"; as we do not accept
* absolute paths, return NULL in that case.
*/
-static const char *skip_tree_prefix(const char *line, int llen)
+static const char *skip_tree_prefix(struct apply_state *state,
+ const char *line,
+ int llen)
{
int nslash;
int i;
- if (!p_value)
+ if (!state->p_value)
return (llen && line[0] == '/') ? NULL : line;
- nslash = p_value;
+ nslash = state->p_value;
for (i = 0; i < llen; i++) {
int ch = line[i];
if (ch == '/' && --nslash <= 0)
* creation or deletion of an empty file. In any of these cases,
* both sides are the same name under a/ and b/ respectively.
*/
-static char *git_header_name(const char *line, int llen)
+static char *git_header_name(struct apply_state *state,
+ const char *line,
+ int llen)
{
const char *name;
const char *second = NULL;
goto free_and_fail1;
/* strip the a/b prefix including trailing slash */
- cp = skip_tree_prefix(first.buf, first.len);
+ cp = skip_tree_prefix(state, first.buf, first.len);
if (!cp)
goto free_and_fail1;
strbuf_remove(&first, 0, cp - first.buf);
if (*second == '"') {
if (unquote_c_style(&sp, second, NULL))
goto free_and_fail1;
- cp = skip_tree_prefix(sp.buf, sp.len);
+ cp = skip_tree_prefix(state, sp.buf, sp.len);
if (!cp)
goto free_and_fail1;
/* They must match, otherwise ignore */
}
/* unquoted second */
- cp = skip_tree_prefix(second, line + llen - second);
+ cp = skip_tree_prefix(state, second, line + llen - second);
if (!cp)
goto free_and_fail1;
if (line + llen - cp != first.len ||
}
/* unquoted first name */
- name = skip_tree_prefix(line, llen);
+ name = skip_tree_prefix(state, line, llen);
if (!name)
return NULL;
if (unquote_c_style(&sp, second, NULL))
goto free_and_fail2;
- np = skip_tree_prefix(sp.buf, sp.len);
+ np = skip_tree_prefix(state, sp.buf, sp.len);
if (!np)
goto free_and_fail2;
*/
if (!name[len + 1])
return NULL; /* no postimage name */
- second = skip_tree_prefix(name + len + 1,
+ second = skip_tree_prefix(state, name + len + 1,
line_len - (len + 1));
if (!second)
return NULL;
}
/* Verify that we recognize the lines following a git header */
-static int parse_git_header(const char *line, int len, unsigned int size, struct patch *patch)
+static int parse_git_header(struct apply_state *state,
+ const char *line,
+ int len,
+ unsigned int size,
+ struct patch *patch)
{
unsigned long offset;
* or removing or adding empty files), so we get
* the default name from the header.
*/
- patch->def_name = git_header_name(line, len);
- if (patch->def_name && root.len) {
- char *s = xstrfmt("%s%s", root.buf, patch->def_name);
+ patch->def_name = git_header_name(state, line, len);
+ if (patch->def_name && state->root.len) {
+ char *s = xstrfmt("%s%s", state->root.buf, patch->def_name);
free(patch->def_name);
patch->def_name = s;
}
line += len;
size -= len;
- linenr++;
- for (offset = len ; size > 0 ; offset += len, size -= len, line += len, linenr++) {
+ state->linenr++;
+ for (offset = len ; size > 0 ; offset += len, size -= len, line += len, state->linenr++) {
static const struct opentry {
const char *str;
- int (*fn)(const char *, struct patch *);
+ int (*fn)(struct apply_state *, const char *, struct patch *);
} optable[] = {
{ "@@ -", gitdiff_hdrend },
{ "--- ", gitdiff_oldname },
int oplen = strlen(p->str);
if (len < oplen || memcmp(p->str, line, oplen))
continue;
- if (p->fn(line + oplen, patch) < 0)
+ if (p->fn(state, line + oplen, patch) < 0)
return offset;
break;
}
return offset;
}
-static int find_header(const char *line, unsigned long size, int *hdrsize, struct patch *patch)
+static int find_header(struct apply_state *state,
+ const char *line,
+ unsigned long size,
+ int *hdrsize,
+ struct patch *patch)
{
unsigned long offset, len;
patch->is_new = patch->is_delete = -1;
patch->old_mode = patch->new_mode = 0;
patch->old_name = patch->new_name = NULL;
- for (offset = 0; size > 0; offset += len, size -= len, line += len, linenr++) {
+ for (offset = 0; size > 0; offset += len, size -= len, line += len, state->linenr++) {
unsigned long nextlen;
len = linelen(line, size);
if (parse_fragment_header(line, len, &dummy) < 0)
continue;
die(_("patch fragment without header at line %d: %.*s"),
- linenr, (int)len-1, line);
+ state->linenr, (int)len-1, line);
}
if (size < len + 6)
* or mode change, so we handle that specially
*/
if (!memcmp("diff --git ", line, 11)) {
- int git_hdr_len = parse_git_header(line, len, size, patch);
+ int git_hdr_len = parse_git_header(state, line, len, size, patch);
if (git_hdr_len <= len)
continue;
if (!patch->old_name && !patch->new_name) {
"%d leading pathname component (line %d)",
"git diff header lacks filename information when removing "
"%d leading pathname components (line %d)",
- p_value),
- p_value, linenr);
+ state->p_value),
+ state->p_value, state->linenr);
patch->old_name = xstrdup(patch->def_name);
patch->new_name = xstrdup(patch->def_name);
}
if (!patch->is_delete && !patch->new_name)
die("git diff header lacks filename information "
- "(line %d)", linenr);
+ "(line %d)", state->linenr);
patch->is_toplevel_relative = 1;
*hdrsize = git_hdr_len;
return offset;
continue;
/* Ok, we'll consider it a patch */
- parse_traditional_patch(line, line+len, patch);
+ parse_traditional_patch(state, line, line+len, patch);
*hdrsize = len + nextlen;
- linenr += 2;
+ state->linenr += 2;
return offset;
}
return -1;
}
-static void record_ws_error(unsigned result, const char *line, int len, int linenr)
+static void record_ws_error(struct apply_state *state,
+ unsigned result,
+ const char *line,
+ int len,
+ int linenr)
{
char *err;
if (!result)
return;
- whitespace_error++;
- if (squelch_whitespace_errors &&
- squelch_whitespace_errors < whitespace_error)
+ state->whitespace_error++;
+ if (state->squelch_whitespace_errors &&
+ state->squelch_whitespace_errors < state->whitespace_error)
return;
err = whitespace_error_string(result);
fprintf(stderr, "%s:%d: %s.\n%.*s\n",
- patch_input_file, linenr, err, len, line);
+ state->patch_input_file, linenr, err, len, line);
free(err);
}
-static void check_whitespace(const char *line, int len, unsigned ws_rule)
+static void check_whitespace(struct apply_state *state,
+ const char *line,
+ int len,
+ unsigned ws_rule)
{
unsigned result = ws_check(line + 1, len - 1, ws_rule);
- record_ws_error(result, line + 1, len - 2, linenr);
+ record_ws_error(state, result, line + 1, len - 2, state->linenr);
}
/*
* between a "---" that is part of a patch, and a "---" that starts
* the next patch is to look at the line counts..
*/
-static int parse_fragment(const char *line, unsigned long size,
- struct patch *patch, struct fragment *fragment)
+static int parse_fragment(struct apply_state *state,
+ const char *line,
+ unsigned long size,
+ struct patch *patch,
+ struct fragment *fragment)
{
int added, deleted;
int len = linelen(line, size), offset;
/* Parse the thing.. */
line += len;
size -= len;
- linenr++;
+ state->linenr++;
added = deleted = 0;
for (offset = len;
0 < size;
- offset += len, size -= len, line += len, linenr++) {
+ offset += len, size -= len, line += len, state->linenr++) {
if (!oldlines && !newlines)
break;
len = linelen(line, size);
if (!deleted && !added)
leading++;
trailing++;
- if (!apply_in_reverse &&
- ws_error_action == correct_ws_error)
- check_whitespace(line, len, patch->ws_rule);
+ if (!state->apply_in_reverse &&
+ state->ws_error_action == correct_ws_error)
+ check_whitespace(state, line, len, patch->ws_rule);
break;
case '-':
- if (apply_in_reverse &&
- ws_error_action != nowarn_ws_error)
- check_whitespace(line, len, patch->ws_rule);
+ if (state->apply_in_reverse &&
+ state->ws_error_action != nowarn_ws_error)
+ check_whitespace(state, line, len, patch->ws_rule);
deleted++;
oldlines--;
trailing = 0;
break;
case '+':
- if (!apply_in_reverse &&
- ws_error_action != nowarn_ws_error)
- check_whitespace(line, len, patch->ws_rule);
+ if (!state->apply_in_reverse &&
+ state->ws_error_action != nowarn_ws_error)
+ check_whitespace(state, line, len, patch->ws_rule);
added++;
newlines--;
trailing = 0;
* The (fragment->patch, fragment->size) pair points into the memory given
* by the caller, not a copy, when we return.
*/
-static int parse_single_patch(const char *line, unsigned long size, struct patch *patch)
+static int parse_single_patch(struct apply_state *state,
+ const char *line,
+ unsigned long size,
+ struct patch *patch)
{
unsigned long offset = 0;
unsigned long oldlines = 0, newlines = 0, context = 0;
int len;
fragment = xcalloc(1, sizeof(*fragment));
- fragment->linenr = linenr;
- len = parse_fragment(line, size, patch, fragment);
+ fragment->linenr = state->linenr;
+ len = parse_fragment(state, line, size, patch, fragment);
if (len <= 0)
- die(_("corrupt patch at line %d"), linenr);
+ die(_("corrupt patch at line %d"), state->linenr);
fragment->patch = line;
fragment->size = len;
oldlines += fragment->oldlines;
* points at an allocated memory that the caller must free, so
* it is marked as "->free_patch = 1".
*/
-static struct fragment *parse_binary_hunk(char **buf_p,
+static struct fragment *parse_binary_hunk(struct apply_state *state,
+ char **buf_p,
unsigned long *sz_p,
int *status_p,
int *used_p)
else
return NULL;
- linenr++;
+ state->linenr++;
buffer += llen;
while (1) {
int byte_length, max_byte_length, newsize;
llen = linelen(buffer, size);
used += llen;
- linenr++;
+ state->linenr++;
if (llen == 1) {
/* consume the blank line */
buffer++;
free(data);
*status_p = -1;
error(_("corrupt binary patch at line %d: %.*s"),
- linenr-1, llen-1, buffer);
+ state->linenr-1, llen-1, buffer);
return NULL;
}
* -1 in case of error,
* the length of the parsed binary patch otherwise
*/
-static int parse_binary(char *buffer, unsigned long size, struct patch *patch)
+static int parse_binary(struct apply_state *state,
+ char *buffer,
+ unsigned long size,
+ struct patch *patch)
{
/*
* We have read "GIT binary patch\n"; what follows is a line
int status;
int used, used_1;
- forward = parse_binary_hunk(&buffer, &size, &status, &used);
+ forward = parse_binary_hunk(state, &buffer, &size, &status, &used);
if (!forward && !status)
/* there has to be one hunk (forward hunk) */
- return error(_("unrecognized binary patch at line %d"), linenr-1);
+ return error(_("unrecognized binary patch at line %d"), state->linenr-1);
if (status)
/* otherwise we already gave an error message */
return status;
- reverse = parse_binary_hunk(&buffer, &size, &status, &used_1);
+ reverse = parse_binary_hunk(state, &buffer, &size, &status, &used_1);
if (reverse)
used += used_1;
else if (status) {
return used;
}
-static void prefix_one(char **name)
+static void prefix_one(struct apply_state *state, char **name)
{
char *old_name = *name;
if (!old_name)
return;
- *name = xstrdup(prefix_filename(prefix, prefix_length, *name));
+ *name = xstrdup(prefix_filename(state->prefix, state->prefix_length, *name));
free(old_name);
}
-static void prefix_patch(struct patch *p)
+static void prefix_patch(struct apply_state *state, struct patch *p)
{
- if (!prefix || p->is_toplevel_relative)
+ if (!state->prefix || p->is_toplevel_relative)
return;
- prefix_one(&p->new_name);
- prefix_one(&p->old_name);
+ prefix_one(state, &p->new_name);
+ prefix_one(state, &p->old_name);
}
/*
* include/exclude
*/
-static struct string_list limit_by_name;
-static int has_include;
-static void add_name_limit(const char *name, int exclude)
+static void add_name_limit(struct apply_state *state,
+ const char *name,
+ int exclude)
{
struct string_list_item *it;
- it = string_list_append(&limit_by_name, name);
+ it = string_list_append(&state->limit_by_name, name);
it->util = exclude ? NULL : (void *) 1;
}
-static int use_patch(struct patch *p)
+static int use_patch(struct apply_state *state, struct patch *p)
{
const char *pathname = p->new_name ? p->new_name : p->old_name;
int i;
/* Paths outside are not touched regardless of "--include" */
- if (0 < prefix_length) {
+ if (0 < state->prefix_length) {
int pathlen = strlen(pathname);
- if (pathlen <= prefix_length ||
- memcmp(prefix, pathname, prefix_length))
+ if (pathlen <= state->prefix_length ||
+ memcmp(state->prefix, pathname, state->prefix_length))
return 0;
}
/* See if it matches any of exclude/include rule */
- for (i = 0; i < limit_by_name.nr; i++) {
- struct string_list_item *it = &limit_by_name.items[i];
+ for (i = 0; i < state->limit_by_name.nr; i++) {
+ struct string_list_item *it = &state->limit_by_name.items[i];
if (!wildmatch(it->string, pathname, 0, NULL))
return (it->util != NULL);
}
 * not used. Otherwise, we saw a bunch of exclude rules (or none)
* and such a path is used.
*/
- return !has_include;
+ return !state->has_include;
}
* Return the number of bytes consumed, so that the caller can call us
* again for the next patch.
*/
-static int parse_chunk(char *buffer, unsigned long size, struct patch *patch)
+static int parse_chunk(struct apply_state *state, char *buffer, unsigned long size, struct patch *patch)
{
int hdrsize, patchsize;
- int offset = find_header(buffer, size, &hdrsize, patch);
+ int offset = find_header(state, buffer, size, &hdrsize, patch);
if (offset < 0)
return offset;
- prefix_patch(patch);
+ prefix_patch(state, patch);
- if (!use_patch(patch))
+ if (!use_patch(state, patch))
patch->ws_rule = 0;
else
patch->ws_rule = whitespace_rule(patch->new_name
? patch->new_name
: patch->old_name);
- patchsize = parse_single_patch(buffer + offset + hdrsize,
- size - offset - hdrsize, patch);
+ patchsize = parse_single_patch(state,
+ buffer + offset + hdrsize,
+ size - offset - hdrsize,
+ patch);
if (!patchsize) {
static const char git_binary[] = "GIT binary patch\n";
if (llen == sizeof(git_binary) - 1 &&
!memcmp(git_binary, buffer + hd, llen)) {
int used;
- linenr++;
- used = parse_binary(buffer + hd + llen,
+ state->linenr++;
+ used = parse_binary(state, buffer + hd + llen,
size - hd - llen, patch);
if (used < 0)
return -1;
int len = strlen(binhdr[i]);
if (len < size - hd &&
!memcmp(binhdr[i], buffer + hd, len)) {
- linenr++;
+ state->linenr++;
patch->is_binary = 1;
patchsize = llen;
break;
* without metadata change. A binary patch appears
* empty to us here.
*/
- if ((apply || check) &&
+ if ((state->apply || state->check) &&
(!patch->is_binary && !metadata_changes(patch)))
- die(_("patch with only garbage at line %d"), linenr);
+ die(_("patch with only garbage at line %d"), state->linenr);
}
return offset + hdrsize + patchsize;
static const char minuses[]=
"----------------------------------------------------------------------";
-static void show_stats(struct patch *patch)
+static void show_stats(struct apply_state *state, struct patch *patch)
{
struct strbuf qname = STRBUF_INIT;
char *cp = patch->new_name ? patch->new_name : patch->old_name;
/*
* "scale" the filename
*/
- max = max_len;
+ max = state->max_len;
if (max > 50)
max = 50;
/*
* scale the add/delete
*/
- max = max + max_change > 70 ? 70 - max : max_change;
+ max = max + state->max_change > 70 ? 70 - max : state->max_change;
add = patch->lines_added;
del = patch->lines_deleted;
- if (max_change > 0) {
- int total = ((add + del) * max + max_change / 2) / max_change;
- add = (add * max + max_change / 2) / max_change;
+ if (state->max_change > 0) {
+ int total = ((add + del) * max + state->max_change / 2) / state->max_change;
+ add = (add * max + state->max_change / 2) / state->max_change;
del = total - add;
}
printf("%5d %.*s%.*s\n", patch->lines_added + patch->lines_deleted,
fixed = preimage->buf;
for (i = reduced = ctx = 0; i < postimage->nr; i++) {
- size_t len = postimage->line[i].len;
+ size_t l_len = postimage->line[i].len;
if (!(postimage->line[i].flag & LINE_COMMON)) {
/* an added line -- no counterparts in preimage */
- memmove(new, old, len);
- old += len;
- new += len;
+ memmove(new, old, l_len);
+ old += l_len;
+ new += l_len;
continue;
}
/* a common context -- skip it in the original postimage */
- old += len;
+ old += l_len;
/* and find the corresponding one in the fixed preimage */
while (ctx < preimage->nr &&
}
/* and copy it in, while fixing the line length */
- len = preimage->line[ctx].len;
- memcpy(new, fixed, len);
- new += len;
- fixed += len;
- postimage->line[i].len = len;
+ l_len = preimage->line[ctx].len;
+ memcpy(new, fixed, l_len);
+ new += l_len;
+ fixed += l_len;
+ postimage->line[i].len = l_len;
ctx++;
}
postimage->nr -= reduced;
}
-static int match_fragment(struct image *img,
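+/*
+ * Compare the preimage against img at try/try_lno line by line with
+ * whitespace differences ignored; on a match, rewrite the pre- and
+ * postimage to use the target's whitespace and return 1, else return 0.
+ */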
+static int line_by_line_fuzzy_match(struct image *img,
+ struct image *preimage,
+ struct image *postimage,
+ unsigned long try,
+ int try_lno,
+ int preimage_limit)
+{
+ int i;
+ size_t imgoff = 0;
+ size_t preoff = 0;
+ size_t postlen = postimage->len;
+ size_t extra_chars;
+ char *buf;
+ char *preimage_eof;
+ char *preimage_end;
+ struct strbuf fixed;
+ char *fixed_buf;
+ size_t fixed_len;
+
+ for (i = 0; i < preimage_limit; i++) {
+ size_t prelen = preimage->line[i].len;
+ size_t imglen = img->line[try_lno+i].len;
+
+ if (!fuzzy_matchlines(img->buf + try + imgoff, imglen,
+ preimage->buf + preoff, prelen))
+ return 0;
+ if (preimage->line[i].flag & LINE_COMMON)
+ postlen += imglen - prelen;
+ imgoff += imglen;
+ preoff += prelen;
+ }
+
+ /*
+ * Ok, the preimage matches with whitespace fuzz.
+ *
+ * imgoff now holds the true length of the target that
+ * matches the preimage before the end of the file.
+ *
+ * Count the number of characters in the preimage that fall
+ * beyond the end of the file and make sure that all of them
+ * are whitespace characters. (This can only happen if
+ * we are removing blank lines at the end of the file.)
+ */
+ buf = preimage_eof = preimage->buf + preoff;
+ for ( ; i < preimage->nr; i++)
+ preoff += preimage->line[i].len;
+ preimage_end = preimage->buf + preoff;
+ for ( ; buf < preimage_end; buf++)
+ if (!isspace(*buf))
+ return 0;
+
+ /*
+ * Update the preimage and the common postimage context
+ * lines to use the same whitespace as the target.
+ * If whitespace is missing in the target (i.e.
+ * if the preimage extends beyond the end of the file),
+ * use the whitespace from the preimage.
+ */
+ extra_chars = preimage_end - preimage_eof;
+ strbuf_init(&fixed, imgoff + extra_chars);
+ strbuf_add(&fixed, img->buf + try, imgoff);
+ strbuf_add(&fixed, preimage_eof, extra_chars);
+ fixed_buf = strbuf_detach(&fixed, &fixed_len);
+ update_pre_post_images(preimage, postimage,
+ fixed_buf, fixed_len, postlen);
+ return 1;
+}
+
+static int match_fragment(struct apply_state *state,
+ struct image *img,
struct image *preimage,
struct image *postimage,
unsigned long try,
preimage_limit = preimage->nr;
if (match_end && (preimage->nr + try_lno != img->nr))
return 0;
- } else if (ws_error_action == correct_ws_error &&
+ } else if (state->ws_error_action == correct_ws_error &&
(ws_rule & WS_BLANK_AT_EOF)) {
/*
* This hunk extends beyond the end of img, and we are
* fuzzy matching. We collect all the line length information because
* we need it to adjust whitespace if we match.
*/
- if (ws_ignore_action == ignore_ws_change) {
- size_t imgoff = 0;
- size_t preoff = 0;
- size_t postlen = postimage->len;
- size_t extra_chars;
- char *preimage_eof;
- char *preimage_end;
- for (i = 0; i < preimage_limit; i++) {
- size_t prelen = preimage->line[i].len;
- size_t imglen = img->line[try_lno+i].len;
-
- if (!fuzzy_matchlines(img->buf + try + imgoff, imglen,
- preimage->buf + preoff, prelen))
- return 0;
- if (preimage->line[i].flag & LINE_COMMON)
- postlen += imglen - prelen;
- imgoff += imglen;
- preoff += prelen;
- }
+ if (state->ws_ignore_action == ignore_ws_change)
+ return line_by_line_fuzzy_match(img, preimage, postimage,
+ try, try_lno, preimage_limit);
- /*
- * Ok, the preimage matches with whitespace fuzz.
- *
- * imgoff now holds the true length of the target that
- * matches the preimage before the end of the file.
- *
- * Count the number of characters in the preimage that fall
- * beyond the end of the file and make sure that all of them
- * are whitespace characters. (This can only happen if
- * we are removing blank lines at the end of the file.)
- */
- buf = preimage_eof = preimage->buf + preoff;
- for ( ; i < preimage->nr; i++)
- preoff += preimage->line[i].len;
- preimage_end = preimage->buf + preoff;
- for ( ; buf < preimage_end; buf++)
- if (!isspace(*buf))
- return 0;
-
- /*
- * Update the preimage and the common postimage context
- * lines to use the same whitespace as the target.
- * If whitespace is missing in the target (i.e.
- * if the preimage extends beyond the end of the file),
- * use the whitespace from the preimage.
- */
- extra_chars = preimage_end - preimage_eof;
- strbuf_init(&fixed, imgoff + extra_chars);
- strbuf_add(&fixed, img->buf + try, imgoff);
- strbuf_add(&fixed, preimage_eof, extra_chars);
- fixed_buf = strbuf_detach(&fixed, &fixed_len);
- update_pre_post_images(preimage, postimage,
- fixed_buf, fixed_len, postlen);
- return 1;
- }
-
- if (ws_error_action != correct_ws_error)
+ if (state->ws_error_action != correct_ws_error)
return 0;
/*
return 0;
}
-static int find_pos(struct image *img,
+static int find_pos(struct apply_state *state,
+ struct image *img,
struct image *preimage,
struct image *postimage,
int line,
try_lno = line;
for (i = 0; ; i++) {
- if (match_fragment(img, preimage, postimage,
+ if (match_fragment(state, img, preimage, postimage,
try, try_lno, ws_rule,
match_beginning, match_end))
return try_lno;
* apply at applied_pos (counts in line numbers) in "img".
* Update "img" to remove "preimage" and replace it with "postimage".
*/
-static void update_image(struct image *img,
+static void update_image(struct apply_state *state,
+ struct image *img,
int applied_pos,
struct image *preimage,
struct image *postimage)
memcpy(img->line + applied_pos,
postimage->line,
postimage->nr * sizeof(*img->line));
- if (!allow_overlap)
+ if (!state->allow_overlap)
for (i = 0; i < postimage->nr; i++)
img->line[applied_pos + i].flag |= LINE_PATCHED;
img->nr = nr;
* postimage) for the hunk. Find lines that match "preimage" in "img" and
* replace the part of "img" with "postimage" text.
*/
-static int apply_one_fragment(struct image *img, struct fragment *frag,
+static int apply_one_fragment(struct apply_state *state,
+ struct image *img, struct fragment *frag,
int inaccurate_eof, unsigned ws_rule,
int nth_fragment)
{
if (len < size && patch[len] == '\\')
plen--;
first = *patch;
- if (apply_in_reverse) {
+ if (state->apply_in_reverse) {
if (first == '-')
first = '+';
else if (first == '+')
/* Fall-through for ' ' */
case '+':
/* --no-add does not add new lines */
- if (first == '+' && no_add)
+ if (first == '+' && state->no_add)
break;
start = newlines.len;
if (first != '+' ||
- !whitespace_error ||
- ws_error_action != correct_ws_error) {
+ !state->whitespace_error ||
+ state->ws_error_action != correct_ws_error) {
strbuf_add(&newlines, patch + 1, plen);
}
else {
- ws_fix_copy(&newlines, patch + 1, plen, ws_rule, &applied_after_fixing_ws);
+ ws_fix_copy(&newlines, patch + 1, plen, ws_rule, &state->applied_after_fixing_ws);
}
add_line_info(&postimage, newlines.buf + start, newlines.len - start,
(first == '+' ? 0 : LINE_COMMON));
/* Ignore it, we already handled it */
break;
default:
- if (apply_verbosely)
+ if (state->apply_verbosely)
error(_("invalid start of line: '%c'"), first);
applied_pos = -1;
goto out;
* without leading context must match at the beginning.
*/
match_beginning = (!frag->oldpos ||
- (frag->oldpos == 1 && !unidiff_zero));
+ (frag->oldpos == 1 && !state->unidiff_zero));
/*
* A hunk without trailing lines must match at the end.
* from the lack of trailing lines if the patch was generated
* with unidiff without any context.
*/
- match_end = !unidiff_zero && !trailing;
+ match_end = !state->unidiff_zero && !trailing;
pos = frag->newpos ? (frag->newpos - 1) : 0;
preimage.buf = oldlines;
for (;;) {
- applied_pos = find_pos(img, &preimage, &postimage, pos,
+ applied_pos = find_pos(state, img, &preimage, &postimage, pos,
ws_rule, match_beginning, match_end);
if (applied_pos >= 0)
break;
/* Am I at my context limits? */
- if ((leading <= p_context) && (trailing <= p_context))
+ if ((leading <= state->p_context) && (trailing <= state->p_context))
break;
if (match_beginning || match_end) {
match_beginning = match_end = 0;
if (new_blank_lines_at_end &&
preimage.nr + applied_pos >= img->nr &&
(ws_rule & WS_BLANK_AT_EOF) &&
- ws_error_action != nowarn_ws_error) {
- record_ws_error(WS_BLANK_AT_EOF, "+", 1,
+ state->ws_error_action != nowarn_ws_error) {
+ record_ws_error(state, WS_BLANK_AT_EOF, "+", 1,
found_new_blank_lines_at_end);
- if (ws_error_action == correct_ws_error) {
+ if (state->ws_error_action == correct_ws_error) {
while (new_blank_lines_at_end--)
remove_last_line(&postimage);
}
* apply_patch->check_patch_list->check_patch->
* apply_data->apply_fragments->apply_one_fragment
*/
- if (ws_error_action == die_on_ws_error)
- apply = 0;
+ if (state->ws_error_action == die_on_ws_error)
+ state->apply = 0;
}
- if (apply_verbosely && applied_pos != pos) {
+ if (state->apply_verbosely && applied_pos != pos) {
int offset = applied_pos - pos;
- if (apply_in_reverse)
+ if (state->apply_in_reverse)
offset = 0 - offset;
fprintf_ln(stderr,
Q_("Hunk #%d succeeded at %d (offset %d line).",
fprintf_ln(stderr, _("Context reduced to (%ld/%ld)"
" to apply fragment at %d"),
leading, trailing, applied_pos+1);
- update_image(img, applied_pos, &preimage, &postimage);
+ update_image(state, img, applied_pos, &preimage, &postimage);
} else {
- if (apply_verbosely)
+ if (state->apply_verbosely)
error(_("while searching for:\n%.*s"),
(int)(old - oldlines), oldlines);
}
return (applied_pos < 0);
}
-static int apply_binary_fragment(struct image *img, struct patch *patch)
+static int apply_binary_fragment(struct apply_state *state,
+ struct image *img,
+ struct patch *patch)
{
struct fragment *fragment = patch->fragments;
unsigned long len;
patch->old_name);
/* Binary patch is irreversible without the optional second hunk */
- if (apply_in_reverse) {
+ if (state->apply_in_reverse) {
if (!fragment->next)
return error("cannot reverse-apply a binary patch "
"without the reverse hunk to '%s'",
* but the preimage prepared by the caller in "img" is freed here
* or in the helper function apply_binary_fragment() this calls.
*/
-static int apply_binary(struct image *img, struct patch *patch)
+static int apply_binary(struct apply_state *state,
+ struct image *img,
+ struct patch *patch)
{
const char *name = patch->old_name ? patch->old_name : patch->new_name;
unsigned char sha1[20];
* apply the patch data to it, which is stored
* in the patch->fragments->{patch,size}.
*/
- if (apply_binary_fragment(img, patch))
+ if (apply_binary_fragment(state, img, patch))
return error(_("binary patch does not apply to '%s'"),
name);
return 0;
}
-static int apply_fragments(struct image *img, struct patch *patch)
+static int apply_fragments(struct apply_state *state, struct image *img, struct patch *patch)
{
struct fragment *frag = patch->fragments;
const char *name = patch->old_name ? patch->old_name : patch->new_name;
int nth = 0;
if (patch->is_binary)
- return apply_binary(img, patch);
+ return apply_binary(state, img, patch);
while (frag) {
nth++;
- if (apply_one_fragment(img, frag, inaccurate_eof, ws_rule, nth)) {
+ if (apply_one_fragment(state, img, frag, inaccurate_eof, ws_rule, nth)) {
error(_("patch failed: %s:%ld"), name, frag->oldpos);
- if (!apply_with_reject)
+ if (!state->apply_with_reject)
return -1;
frag->rejected = 1;
}
return read_blob_object(buf, ce->sha1, ce->ce_mode);
}
-static struct patch *in_fn_table(const char *name)
+static struct patch *in_fn_table(struct apply_state *state, const char *name)
{
struct string_list_item *item;
if (name == NULL)
return NULL;
- item = string_list_lookup(&fn_table, name);
+ item = string_list_lookup(&state->fn_table, name);
if (item != NULL)
return (struct patch *)item->util;
return patch == PATH_WAS_DELETED;
}
-static void add_to_fn_table(struct patch *patch)
+static void add_to_fn_table(struct apply_state *state, struct patch *patch)
{
struct string_list_item *item;
* file creations and copies
*/
if (patch->new_name != NULL) {
- item = string_list_insert(&fn_table, patch->new_name);
+ item = string_list_insert(&state->fn_table, patch->new_name);
item->util = patch;
}
* later chunks shouldn't patch old names
*/
if ((patch->new_name == NULL) || (patch->is_rename)) {
- item = string_list_insert(&fn_table, patch->old_name);
+ item = string_list_insert(&state->fn_table, patch->old_name);
item->util = PATH_WAS_DELETED;
}
}
-static void prepare_fn_table(struct patch *patch)
+static void prepare_fn_table(struct apply_state *state, struct patch *patch)
{
/*
* store information about incoming file deletion
while (patch) {
if ((patch->new_name == NULL) || (patch->is_rename)) {
struct string_list_item *item;
- item = string_list_insert(&fn_table, patch->old_name);
+ item = string_list_insert(&state->fn_table, patch->old_name);
item->util = PATH_TO_BE_DELETED;
}
patch = patch->next;
return 0;
}
-static struct patch *previous_patch(struct patch *patch, int *gone)
+static struct patch *previous_patch(struct apply_state *state,
+ struct patch *patch,
+ int *gone)
{
struct patch *previous;
if (patch->is_copy || patch->is_rename)
return NULL; /* "git" patches do not depend on the order */
- previous = in_fn_table(patch->old_name);
+ previous = in_fn_table(state, patch->old_name);
if (!previous)
return NULL;
#define SUBMODULE_PATCH_WITHOUT_INDEX 1
-static int load_patch_target(struct strbuf *buf,
+static int load_patch_target(struct apply_state *state,
+ struct strbuf *buf,
const struct cache_entry *ce,
struct stat *st,
const char *name,
unsigned expected_mode)
{
- if (cached || check_index) {
+ if (state->cached || state->check_index) {
if (read_file_or_gitlink(ce, buf))
- return error(_("read of %s failed"), name);
+ return error(_("failed to read %s"), name);
} else if (name) {
if (S_ISGITLINK(expected_mode)) {
if (ce)
return error(_("reading from '%s' beyond a symbolic link"), name);
} else {
if (read_old_data(st, name, buf))
- return error(_("read of %s failed"), name);
+ return error(_("failed to read %s"), name);
}
}
return 0;
* applying a non-git patch that incrementally updates the tree,
* we read from the result of a previous diff.
*/
-static int load_preimage(struct image *image,
+static int load_preimage(struct apply_state *state,
+ struct image *image,
struct patch *patch, struct stat *st,
const struct cache_entry *ce)
{
struct patch *previous;
int status;
- previous = previous_patch(patch, &status);
+ previous = previous_patch(state, patch, &status);
if (status)
return error(_("path %s has been renamed/deleted"),
patch->old_name);
/* We have a patched copy in memory; use that. */
strbuf_add(&buf, previous->result, previous->resultsize);
} else {
- status = load_patch_target(&buf, ce, st,
+ status = load_patch_target(state, &buf, ce, st,
patch->old_name, patch->old_mode);
if (status < 0)
return status;
free_fragment_list(patch->fragments);
patch->fragments = NULL;
} else if (status) {
- return error(_("read of %s failed"), patch->old_name);
+ return error(_("failed to read %s"), patch->old_name);
}
}
 * the current contents of the new_name. This function will not be
 * called in any other case.
*/
-static int load_current(struct image *image, struct patch *patch)
+static int load_current(struct apply_state *state,
+ struct image *image,
+ struct patch *patch)
{
struct strbuf buf = STRBUF_INIT;
int status, pos;
if (verify_index_match(ce, &st))
return error(_("%s: does not match index"), name);
- status = load_patch_target(&buf, ce, &st, name, mode);
+ status = load_patch_target(state, &buf, ce, &st, name, mode);
if (status < 0)
return status;
else if (status)
return 0;
}
-static int try_threeway(struct image *image, struct patch *patch,
- struct stat *st, const struct cache_entry *ce)
+static int try_threeway(struct apply_state *state,
+ struct image *image,
+ struct patch *patch,
+ struct stat *st,
+ const struct cache_entry *ce)
{
unsigned char pre_sha1[20], post_sha1[20], our_sha1[20];
struct strbuf buf = STRBUF_INIT;
img = strbuf_detach(&buf, &len);
prepare_image(&tmp_image, img, len, 1);
/* Apply the patch to get the post image */
- if (apply_fragments(&tmp_image, patch) < 0) {
+ if (apply_fragments(state, &tmp_image, patch) < 0) {
clear_image(&tmp_image);
return -1;
}
/* our_sha1[] is ours */
if (patch->is_new) {
- if (load_current(&tmp_image, patch))
+ if (load_current(state, &tmp_image, patch))
return error("cannot read the current contents of '%s'",
patch->new_name);
} else {
- if (load_preimage(&tmp_image, patch, st, ce))
+ if (load_preimage(state, &tmp_image, patch, st, ce))
return error("cannot read the current contents of '%s'",
patch->old_name);
}
return 0;
}
-static int apply_data(struct patch *patch, struct stat *st, const struct cache_entry *ce)
+static int apply_data(struct apply_state *state, struct patch *patch,
+ struct stat *st, const struct cache_entry *ce)
{
struct image image;
- if (load_preimage(&image, patch, st, ce) < 0)
+ if (load_preimage(state, &image, patch, st, ce) < 0)
return -1;
if (patch->direct_to_threeway ||
- apply_fragments(&image, patch) < 0) {
+ apply_fragments(state, &image, patch) < 0) {
/* Note: with --reject, apply_fragments() returns 0 */
- if (!threeway || try_threeway(&image, patch, st, ce) < 0)
+ if (!state->threeway || try_threeway(state, &image, patch, st, ce) < 0)
return -1;
}
patch->result = image.buf;
patch->resultsize = image.len;
- add_to_fn_table(patch);
+ add_to_fn_table(state, patch);
free(image.line_allocated);
if (0 < patch->is_delete && patch->resultsize)
* check_patch() separately makes sure (and errors out otherwise) that
* the path the patch creates does not exist in the current tree.
*/
-static int check_preimage(struct patch *patch, struct cache_entry **ce, struct stat *st)
+static int check_preimage(struct apply_state *state,
+ struct patch *patch,
+ struct cache_entry **ce,
+ struct stat *st)
{
const char *old_name = patch->old_name;
struct patch *previous = NULL;
return 0;
assert(patch->is_new <= 0);
- previous = previous_patch(patch, &status);
+ previous = previous_patch(state, patch, &status);
if (status)
return error(_("path %s has been renamed/deleted"), old_name);
if (previous) {
st_mode = previous->new_mode;
- } else if (!cached) {
+ } else if (!state->cached) {
stat_ret = lstat(old_name, st);
if (stat_ret && errno != ENOENT)
return error(_("%s: %s"), old_name, strerror(errno));
}
- if (check_index && !previous) {
+ if (state->check_index && !previous) {
int pos = cache_name_pos(old_name, strlen(old_name));
if (pos < 0) {
if (patch->is_new < 0)
if (checkout_target(&the_index, *ce, st))
return -1;
}
- if (!cached && verify_index_match(*ce, st))
+ if (!state->cached && verify_index_match(*ce, st))
return error(_("%s: does not match index"), old_name);
- if (cached)
+ if (state->cached)
st_mode = (*ce)->ce_mode;
} else if (stat_ret < 0) {
if (patch->is_new < 0)
return error(_("%s: %s"), old_name, strerror(errno));
}
- if (!cached && !previous)
+ if (!state->cached && !previous)
st_mode = ce_mode_from_stat(*ce, st->st_mode);
if (patch->is_new < 0)
#define EXISTS_IN_INDEX 1
#define EXISTS_IN_WORKTREE 2
-static int check_to_create(const char *new_name, int ok_if_exists)
+static int check_to_create(struct apply_state *state,
+ const char *new_name,
+ int ok_if_exists)
{
struct stat nst;
- if (check_index &&
+ if (state->check_index &&
cache_name_pos(new_name, strlen(new_name)) >= 0 &&
!ok_if_exists)
return EXISTS_IN_INDEX;
- if (cached)
+ if (state->cached)
return 0;
if (!lstat(new_name, &nst)) {
return 0;
}
-/*
- * We need to keep track of how symlinks in the preimage are
- * manipulated by the patches. A patch to add a/b/c where a/b
- * is a symlink should not be allowed to affect the directory
- * the symlink points at, but if the same patch removes a/b,
- * it is perfectly fine, as the patch removes a/b to make room
- * to create a directory a/b so that a/b/c can be created.
- */
-static struct string_list symlink_changes;
-#define SYMLINK_GOES_AWAY 01
-#define SYMLINK_IN_RESULT 02
-
-static uintptr_t register_symlink_changes(const char *path, uintptr_t what)
+static uintptr_t register_symlink_changes(struct apply_state *state,
+ const char *path,
+ uintptr_t what)
{
struct string_list_item *ent;
- ent = string_list_lookup(&symlink_changes, path);
+ ent = string_list_lookup(&state->symlink_changes, path);
if (!ent) {
- ent = string_list_insert(&symlink_changes, path);
+ ent = string_list_insert(&state->symlink_changes, path);
ent->util = (void *)0;
}
ent->util = (void *)(what | ((uintptr_t)ent->util));
return (uintptr_t)ent->util;
}
-static uintptr_t check_symlink_changes(const char *path)
+static uintptr_t check_symlink_changes(struct apply_state *state, const char *path)
{
struct string_list_item *ent;
- ent = string_list_lookup(&symlink_changes, path);
+ ent = string_list_lookup(&state->symlink_changes, path);
if (!ent)
return 0;
return (uintptr_t)ent->util;
}
-static void prepare_symlink_changes(struct patch *patch)
+static void prepare_symlink_changes(struct apply_state *state, struct patch *patch)
{
for ( ; patch; patch = patch->next) {
if ((patch->old_name && S_ISLNK(patch->old_mode)) &&
(patch->is_rename || patch->is_delete))
/* the symlink at patch->old_name is removed */
- register_symlink_changes(patch->old_name, SYMLINK_GOES_AWAY);
+ register_symlink_changes(state, patch->old_name, SYMLINK_GOES_AWAY);
if (patch->new_name && S_ISLNK(patch->new_mode))
/* the symlink at patch->new_name is created or remains */
- register_symlink_changes(patch->new_name, SYMLINK_IN_RESULT);
+ register_symlink_changes(state, patch->new_name, SYMLINK_IN_RESULT);
}
}
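
The block comment removed above spells out the policy these helpers implement: a patch must not deposit a file beyond a path that is (or becomes) a symlink, while removing a symlink to make room for a directory is fine. With the libification, that bookkeeping moves into state->symlink_changes, a string_list whose per-item util pointer accumulates the SYMLINK_GOES_AWAY / SYMLINK_IN_RESULT bits for each path. The following is only an illustrative, standalone sketch of the same "bit flags keyed by path" idea; it uses none of git's real API and every name in it is made up.

/*
 * Sketch only: per-path flag bits kept in a tiny array, mimicking how
 * apply.c stores SYMLINK_* bits in a string_list item's util pointer.
 * No bounds checking; purely for illustration.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define SYMLINK_GOES_AWAY 01
#define SYMLINK_IN_RESULT 02

struct path_flag { char *path; uintptr_t flags; };
static struct path_flag table[64];
static int table_nr;

static uintptr_t register_change(const char *path, uintptr_t what)
{
	int i;
	for (i = 0; i < table_nr; i++)
		if (!strcmp(table[i].path, path)) {
			table[i].flags |= what;	/* accumulate bits for this path */
			return table[i].flags;
		}
	table[table_nr].path = strdup(path);
	table[table_nr].flags = what;
	return table[table_nr++].flags;
}

static uintptr_t check_change(const char *path)
{
	int i;
	for (i = 0; i < table_nr; i++)
		if (!strcmp(table[i].path, path))
			return table[i].flags;
	return 0;
}

int main(void)
{
	register_change("a/b", SYMLINK_GOES_AWAY);	/* a hunk deletes symlink a/b */
	register_change("a/b", SYMLINK_IN_RESULT);	/* another hunk re-creates it */
	printf("%lu\n", (unsigned long)check_change("a/b"));	/* prints 3 */
	return 0;
}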
-static int path_is_beyond_symlink_1(struct strbuf *name)
+static int path_is_beyond_symlink_1(struct apply_state *state, struct strbuf *name)
{
do {
unsigned int change;
if (!name->len)
break;
name->buf[name->len] = '\0';
- change = check_symlink_changes(name->buf);
+ change = check_symlink_changes(state, name->buf);
if (change & SYMLINK_IN_RESULT)
return 1;
if (change & SYMLINK_GOES_AWAY)
continue;
/* otherwise, check the preimage */
- if (check_index) {
+ if (state->check_index) {
struct cache_entry *ce;
ce = cache_file_exists(name->buf, name->len, ignore_case);
return 0;
}
-static int path_is_beyond_symlink(const char *name_)
+static int path_is_beyond_symlink(struct apply_state *state, const char *name_)
{
int ret;
struct strbuf name = STRBUF_INIT;
assert(*name_ != '\0');
strbuf_addstr(&name, name_);
- ret = path_is_beyond_symlink_1(&name);
+ ret = path_is_beyond_symlink_1(state, &name);
strbuf_release(&name);
return ret;
* Check and apply the patch in-core; leave the result in patch->result
* for the caller to write it out to the final destination.
*/
-static int check_patch(struct patch *patch)
+static int check_patch(struct apply_state *state, struct patch *patch)
{
struct stat st;
const char *old_name = patch->old_name;
patch->rejected = 1; /* we will drop this after we succeed */
- status = check_preimage(patch, &ce, &st);
+ status = check_preimage(state, patch, &ce, &st);
if (status)
return status;
old_name = patch->old_name;
* B and rename from A to B is handled the same way by asking
* was_deleted().
*/
- if ((tpatch = in_fn_table(new_name)) &&
+ if ((tpatch = in_fn_table(state, new_name)) &&
(was_deleted(tpatch) || to_be_deleted(tpatch)))
ok_if_exists = 1;
else
if (new_name &&
((0 < patch->is_new) || patch->is_rename || patch->is_copy)) {
- int err = check_to_create(new_name, ok_if_exists);
+ int err = check_to_create(state, new_name, ok_if_exists);
- if (err && threeway) {
+ if (err && state->threeway) {
patch->direct_to_threeway = 1;
} else switch (err) {
case 0:
}
}
- if (!unsafe_paths)
+ if (!state->unsafe_paths)
die_on_unsafe_path(patch);
/*
* is not deposited to a path that is beyond a symbolic link
* here.
*/
- if (!patch->is_delete && path_is_beyond_symlink(patch->new_name))
+ if (!patch->is_delete && path_is_beyond_symlink(state, patch->new_name))
return error(_("affected file '%s' is beyond a symbolic link"),
patch->new_name);
- if (apply_data(patch, &st, ce) < 0)
+ if (apply_data(state, patch, &st, ce) < 0)
return error(_("%s: patch does not apply"), name);
patch->rejected = 0;
return 0;
}
-static int check_patch_list(struct patch *patch)
+static int check_patch_list(struct apply_state *state, struct patch *patch)
{
int err = 0;
- prepare_symlink_changes(patch);
- prepare_fn_table(patch);
+ prepare_symlink_changes(state, patch);
+ prepare_fn_table(state, patch);
while (patch) {
- if (apply_verbosely)
+ if (state->apply_verbosely)
say_patch_name(stderr,
_("Checking patch %s..."), patch);
- err |= check_patch(patch);
+ err |= check_patch(state, patch);
patch = patch->next;
}
return err;
discard_index(&result);
}
-static void stat_patch_list(struct patch *patch)
+static void stat_patch_list(struct apply_state *state, struct patch *patch)
{
int files, adds, dels;
files++;
adds += patch->lines_added;
dels += patch->lines_deleted;
- show_stats(patch);
+ show_stats(state, patch);
}
print_stat_summary(stdout, files, adds, dels);
}
-static void numstat_patch_list(struct patch *patch)
+static void numstat_patch_list(struct apply_state *state,
+ struct patch *patch)
{
for ( ; patch; patch = patch->next) {
const char *name;
printf("-\t-\t");
else
printf("%d\t%d\t", patch->lines_added, patch->lines_deleted);
- write_name_quoted(name, stdout, line_termination);
+ write_name_quoted(name, stdout, state->line_termination);
}
}
}
}
-static void patch_stats(struct patch *patch)
+static void patch_stats(struct apply_state *state, struct patch *patch)
{
int lines = patch->lines_added + patch->lines_deleted;
- if (lines > max_change)
- max_change = lines;
+ if (lines > state->max_change)
+ state->max_change = lines;
if (patch->old_name) {
int len = quote_c_style(patch->old_name, NULL, NULL, 0);
if (!len)
len = strlen(patch->old_name);
- if (len > max_len)
- max_len = len;
+ if (len > state->max_len)
+ state->max_len = len;
}
if (patch->new_name) {
int len = quote_c_style(patch->new_name, NULL, NULL, 0);
if (!len)
len = strlen(patch->new_name);
- if (len > max_len)
- max_len = len;
+ if (len > state->max_len)
+ state->max_len = len;
}
}
-static void remove_file(struct patch *patch, int rmdir_empty)
+static void remove_file(struct apply_state *state, struct patch *patch, int rmdir_empty)
{
- if (update_index) {
+ if (state->update_index) {
if (remove_file_from_cache(patch->old_name) < 0)
die(_("unable to remove %s from index"), patch->old_name);
}
- if (!cached) {
+ if (!state->cached) {
if (!remove_or_warn(patch->old_mode, patch->old_name) && rmdir_empty) {
remove_path(patch->old_name);
}
}
}
-static void add_index_file(const char *path, unsigned mode, void *buf, unsigned long size)
+static void add_index_file(struct apply_state *state,
+ const char *path,
+ unsigned mode,
+ void *buf,
+ unsigned long size)
{
struct stat st;
struct cache_entry *ce;
int namelen = strlen(path);
unsigned ce_size = cache_entry_size(namelen);
- if (!update_index)
+ if (!state->update_index)
return;
ce = xcalloc(1, ce_size);
get_sha1_hex(s, ce->sha1))
die(_("corrupt patch for submodule %s"), path);
} else {
- if (!cached) {
+ if (!state->cached) {
if (lstat(path, &st) < 0)
die_errno(_("unable to stat newly created file '%s'"),
path);
* which is true 99% of the time anyway. If they don't,
* we create them and try again.
*/
-static void create_one_file(char *path, unsigned mode, const char *buf, unsigned long size)
+static void create_one_file(struct apply_state *state,
+ char *path,
+ unsigned mode,
+ const char *buf,
+ unsigned long size)
{
- if (cached)
+ if (state->cached)
return;
if (!try_create_file(path, mode, buf, size))
return;
die_errno(_("unable to write file '%s' mode %o"), path, mode);
}
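
The comment shown above create_one_file() describes the retry strategy: optimistically assume the leading directories exist (true almost all the time), and only if creation fails, build them and try once more. Below is a rough, self-contained sketch of that idea in plain C; the helper names are hypothetical, and the real code uses try_create_file() and git's own directory helpers instead.

/*
 * Sketch of "optimistically create, then make leading directories and
 * retry". Hypothetical names; simplified error handling.
 */
#include <errno.h>
#include <fcntl.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

static int try_create(const char *path, const char *buf, size_t len)
{
	int fd = open(path, O_CREAT | O_EXCL | O_WRONLY, 0666);
	if (fd < 0)
		return -1;
	if (write(fd, buf, len) != (ssize_t)len) {
		close(fd);
		return -1;
	}
	return close(fd);
}

static void make_leading_dirs(char *path)
{
	char *slash = path;
	while ((slash = strchr(slash + 1, '/'))) {
		*slash = '\0';
		if (mkdir(path, 0777) && errno != EEXIST) {
			*slash = '/';
			break;	/* give up; the retry will report the error */
		}
		*slash = '/';
	}
}

static int create_one_file(char *path, const char *buf, size_t len)
{
	if (!try_create(path, buf, len))
		return 0;		/* common case: parents already existed */
	make_leading_dirs(path);	/* rare case: build them, then retry once */
	return try_create(path, buf, len);
}

int main(void)
{
	char path[] = "demo/dir/file.txt";
	return create_one_file(path, "hello\n", 6);
}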
-static void add_conflicted_stages_file(struct patch *patch)
+static void add_conflicted_stages_file(struct apply_state *state,
+ struct patch *patch)
{
int stage, namelen;
unsigned ce_size, mode;
struct cache_entry *ce;
- if (!update_index)
+ if (!state->update_index)
return;
namelen = strlen(patch->new_name);
ce_size = cache_entry_size(namelen);
}
}
-static void create_file(struct patch *patch)
+static void create_file(struct apply_state *state, struct patch *patch)
{
char *path = patch->new_name;
unsigned mode = patch->new_mode;
if (!mode)
mode = S_IFREG | 0644;
- create_one_file(path, mode, buf, size);
+ create_one_file(state, path, mode, buf, size);
if (patch->conflicted_threeway)
- add_conflicted_stages_file(patch);
+ add_conflicted_stages_file(state, patch);
else
- add_index_file(path, mode, buf, size);
+ add_index_file(state, path, mode, buf, size);
}
/* phase zero is to remove, phase one is to create */
-static void write_out_one_result(struct patch *patch, int phase)
+static void write_out_one_result(struct apply_state *state,
+ struct patch *patch,
+ int phase)
{
if (patch->is_delete > 0) {
if (phase == 0)
- remove_file(patch, 1);
+ remove_file(state, patch, 1);
return;
}
if (patch->is_new > 0 || patch->is_copy) {
if (phase == 1)
- create_file(patch);
+ create_file(state, patch);
return;
}
/*
* thing: remove the old, write the new
*/
if (phase == 0)
- remove_file(patch, patch->is_rename);
+ remove_file(state, patch, patch->is_rename);
if (phase == 1)
- create_file(patch);
+ create_file(state, patch);
}
-static int write_out_one_reject(struct patch *patch)
+static int write_out_one_reject(struct apply_state *state, struct patch *patch)
{
FILE *rej;
char namebuf[PATH_MAX];
}
if (!cnt) {
- if (apply_verbosely)
+ if (state->apply_verbosely)
say_patch_name(stderr,
_("Applied patch %s cleanly."), patch);
return 0;
return -1;
}
-static int write_out_results(struct patch *list)
+static int write_out_results(struct apply_state *state, struct patch *list)
{
int phase;
int errs = 0;
if (l->rejected)
errs = 1;
else {
- write_out_one_result(l, phase);
+ write_out_one_result(state, l, phase);
if (phase == 1) {
- if (write_out_one_reject(l))
+ if (write_out_one_reject(state, l))
errs = 1;
if (l->conflicted_threeway) {
string_list_append(&cpath, l->new_name);
#define INACCURATE_EOF (1<<0)
#define RECOUNT (1<<1)
-static int apply_patch(int fd, const char *filename, int options)
+static int apply_patch(struct apply_state *state,
+ int fd,
+ const char *filename,
+ int options)
{
size_t offset;
struct strbuf buf = STRBUF_INIT; /* owns the patch text */
struct patch *list = NULL, **listp = &list;
int skipped_patch = 0;
- patch_input_file = filename;
+ state->patch_input_file = filename;
read_patch_file(&buf, fd);
offset = 0;
while (offset < buf.len) {
patch = xcalloc(1, sizeof(*patch));
patch->inaccurate_eof = !!(options & INACCURATE_EOF);
patch->recount = !!(options & RECOUNT);
- nr = parse_chunk(buf.buf + offset, buf.len - offset, patch);
+ nr = parse_chunk(state, buf.buf + offset, buf.len - offset, patch);
if (nr < 0) {
free_patch(patch);
break;
}
- if (apply_in_reverse)
+ if (state->apply_in_reverse)
reverse_patches(patch);
- if (use_patch(patch)) {
- patch_stats(patch);
+ if (use_patch(state, patch)) {
+ patch_stats(state, patch);
*listp = patch;
listp = &patch->next;
}
else {
- if (apply_verbosely)
+ if (state->apply_verbosely)
say_patch_name(stderr, _("Skipped patch '%s'."), patch);
free_patch(patch);
skipped_patch++;
if (!list && !skipped_patch)
die(_("unrecognized input"));
- if (whitespace_error && (ws_error_action == die_on_ws_error))
- apply = 0;
+ if (state->whitespace_error && (state->ws_error_action == die_on_ws_error))
+ state->apply = 0;
- update_index = check_index && apply;
- if (update_index && newfd < 0)
- newfd = hold_locked_index(&lock_file, 1);
+ state->update_index = state->check_index && state->apply;
+ if (state->update_index && state->newfd < 0)
+ state->newfd = hold_locked_index(state->lock_file, 1);
- if (check_index) {
+ if (state->check_index) {
if (read_cache() < 0)
die(_("unable to read index file"));
}
- if ((check || apply) &&
- check_patch_list(list) < 0 &&
- !apply_with_reject)
+ if ((state->check || state->apply) &&
+ check_patch_list(state, list) < 0 &&
+ !state->apply_with_reject)
exit(1);
- if (apply && write_out_results(list)) {
- if (apply_with_reject)
+ if (state->apply && write_out_results(state, list)) {
+ if (state->apply_with_reject)
exit(1);
/* with --3way, we still need to write the index out */
return 1;
}
- if (fake_ancestor)
- build_fake_ancestor(list, fake_ancestor);
+ if (state->fake_ancestor)
+ build_fake_ancestor(list, state->fake_ancestor);
- if (diffstat)
- stat_patch_list(list);
+ if (state->diffstat)
+ stat_patch_list(state, list);
- if (numstat)
- numstat_patch_list(list);
+ if (state->numstat)
+ numstat_patch_list(state, list);
- if (summary)
+ if (state->summary)
summary_patch_list(list);
free_patch_list(list);
strbuf_release(&buf);
- string_list_clear(&fn_table, 0);
+ string_list_clear(&state->fn_table, 0);
return 0;
}
static int option_parse_exclude(const struct option *opt,
const char *arg, int unset)
{
- add_name_limit(arg, 1);
+ struct apply_state *state = opt->value;
+ add_name_limit(state, arg, 1);
return 0;
}
static int option_parse_include(const struct option *opt,
const char *arg, int unset)
{
- add_name_limit(arg, 0);
- has_include = 1;
+ struct apply_state *state = opt->value;
+ add_name_limit(state, arg, 0);
+ state->has_include = 1;
return 0;
}
static int option_parse_p(const struct option *opt,
- const char *arg, int unset)
+ const char *arg,
+ int unset)
{
- p_value = atoi(arg);
- p_value_known = 1;
+ struct apply_state *state = opt->value;
+ state->p_value = atoi(arg);
+ state->p_value_known = 1;
return 0;
}
static int option_parse_space_change(const struct option *opt,
- const char *arg, int unset)
+ const char *arg, int unset)
{
+ struct apply_state *state = opt->value;
if (unset)
- ws_ignore_action = ignore_ws_none;
+ state->ws_ignore_action = ignore_ws_none;
else
- ws_ignore_action = ignore_ws_change;
+ state->ws_ignore_action = ignore_ws_change;
return 0;
}
static int option_parse_whitespace(const struct option *opt,
const char *arg, int unset)
{
- const char **whitespace_option = opt->value;
-
- *whitespace_option = arg;
- parse_whitespace_option(arg);
+ struct apply_state *state = opt->value;
+ state->whitespace_option = arg;
+ parse_whitespace_option(state, arg);
return 0;
}
static int option_parse_directory(const struct option *opt,
const char *arg, int unset)
{
- strbuf_reset(&root);
- strbuf_addstr(&root, arg);
- strbuf_complete(&root, '/');
+ struct apply_state *state = opt->value;
+ strbuf_reset(&state->root);
+ strbuf_addstr(&state->root, arg);
+ strbuf_complete(&state->root, '/');
return 0;
}
-int cmd_apply(int argc, const char **argv, const char *prefix_)
+static void init_apply_state(struct apply_state *state,
+ const char *prefix,
+ struct lock_file *lock_file)
+{
+ memset(state, 0, sizeof(*state));
+ state->prefix = prefix;
+ state->prefix_length = state->prefix ? strlen(state->prefix) : 0;
+ state->lock_file = lock_file;
+ state->newfd = -1;
+ state->apply = 1;
+ state->line_termination = '\n';
+ state->p_value = 1;
+ state->p_context = UINT_MAX;
+ state->squelch_whitespace_errors = 5;
+ state->ws_error_action = warn_on_ws_error;
+ state->ws_ignore_action = ignore_ws_none;
+ state->linenr = 1;
+ string_list_init(&state->fn_table, 0);
+ string_list_init(&state->limit_by_name, 0);
+ string_list_init(&state->symlink_changes, 0);
+ strbuf_init(&state->root, 0);
+
+ git_apply_config();
+ if (apply_default_whitespace)
+ parse_whitespace_option(state, apply_default_whitespace);
+ if (apply_default_ignorewhitespace)
+ parse_ignorewhitespace_option(state, apply_default_ignorewhitespace);
+}
+
+static void clear_apply_state(struct apply_state *state)
+{
+ string_list_clear(&state->limit_by_name, 0);
+ string_list_clear(&state->symlink_changes, 0);
+ strbuf_release(&state->root);
+
+ /* &state->fn_table is cleared at the end of apply_patch() */
+}
+
+static void check_apply_state(struct apply_state *state, int force_apply)
+{
+ int is_not_gitdir = !startup_info->have_repository;
+
+ if (state->apply_with_reject && state->threeway)
+ die("--reject and --3way cannot be used together.");
+ if (state->cached && state->threeway)
+ die("--cached and --3way cannot be used together.");
+ if (state->threeway) {
+ if (is_not_gitdir)
+ die(_("--3way outside a repository"));
+ state->check_index = 1;
+ }
+ if (state->apply_with_reject)
+ state->apply = state->apply_verbosely = 1;
+ if (!force_apply && (state->diffstat || state->numstat || state->summary || state->check || state->fake_ancestor))
+ state->apply = 0;
+ if (state->check_index && is_not_gitdir)
+ die(_("--index outside a repository"));
+ if (state->cached) {
+ if (is_not_gitdir)
+ die(_("--cached outside a repository"));
+ state->check_index = 1;
+ }
+ if (state->check_index)
+ state->unsafe_paths = 0;
+ if (!state->lock_file)
+ die("BUG: state->lock_file should not be NULL");
+}
+
+static int apply_all_patches(struct apply_state *state,
+ int argc,
+ const char **argv,
+ int options)
{
int i;
int errs = 0;
- int is_not_gitdir = !startup_info->have_repository;
- int force_apply = 0;
+ int read_stdin = 1;
+
+ for (i = 0; i < argc; i++) {
+ const char *arg = argv[i];
+ int fd;
+
+ if (!strcmp(arg, "-")) {
+ errs |= apply_patch(state, 0, "<stdin>", options);
+ read_stdin = 0;
+ continue;
+ } else if (0 < state->prefix_length)
+ arg = prefix_filename(state->prefix,
+ state->prefix_length,
+ arg);
+
+ fd = open(arg, O_RDONLY);
+ if (fd < 0)
+ die_errno(_("can't open patch '%s'"), arg);
+ read_stdin = 0;
+ set_default_whitespace_mode(state);
+ errs |= apply_patch(state, fd, arg, options);
+ close(fd);
+ }
+ set_default_whitespace_mode(state);
+ if (read_stdin)
+ errs |= apply_patch(state, 0, "<stdin>", options);
+
+ if (state->whitespace_error) {
+ if (state->squelch_whitespace_errors &&
+ state->squelch_whitespace_errors < state->whitespace_error) {
+ int squelched =
+ state->whitespace_error - state->squelch_whitespace_errors;
+ warning(Q_("squelched %d whitespace error",
+ "squelched %d whitespace errors",
+ squelched),
+ squelched);
+ }
+ if (state->ws_error_action == die_on_ws_error)
+ die(Q_("%d line adds whitespace errors.",
+ "%d lines add whitespace errors.",
+ state->whitespace_error),
+ state->whitespace_error);
+ if (state->applied_after_fixing_ws && state->apply)
+ warning("%d line%s applied after"
+ " fixing whitespace errors.",
+ state->applied_after_fixing_ws,
+ state->applied_after_fixing_ws == 1 ? "" : "s");
+ else if (state->whitespace_error)
+ warning(Q_("%d line adds whitespace errors.",
+ "%d lines add whitespace errors.",
+ state->whitespace_error),
+ state->whitespace_error);
+ }
+
+ if (state->update_index) {
+ if (write_locked_index(&the_index, state->lock_file, COMMIT_LOCK))
+ die(_("Unable to write new index file"));
+ state->newfd = -1;
+ }
+
+ return !!errs;
+}
- const char *whitespace_option = NULL;
+int cmd_apply(int argc, const char **argv, const char *prefix)
+{
+ int force_apply = 0;
+ int options = 0;
+ int ret;
+ struct apply_state state;
struct option builtin_apply_options[] = {
- { OPTION_CALLBACK, 0, "exclude", NULL, N_("path"),
+ { OPTION_CALLBACK, 0, "exclude", &state, N_("path"),
N_("don't apply changes matching the given path"),
0, option_parse_exclude },
- { OPTION_CALLBACK, 0, "include", NULL, N_("path"),
+ { OPTION_CALLBACK, 0, "include", &state, N_("path"),
N_("apply changes matching the given path"),
0, option_parse_include },
- { OPTION_CALLBACK, 'p', NULL, NULL, N_("num"),
+ { OPTION_CALLBACK, 'p', NULL, &state, N_("num"),
N_("remove <num> leading slashes from traditional diff paths"),
0, option_parse_p },
- OPT_BOOL(0, "no-add", &no_add,
+ OPT_BOOL(0, "no-add", &state.no_add,
N_("ignore additions made by the patch")),
- OPT_BOOL(0, "stat", &diffstat,
+ OPT_BOOL(0, "stat", &state.diffstat,
N_("instead of applying the patch, output diffstat for the input")),
OPT_NOOP_NOARG(0, "allow-binary-replacement"),
OPT_NOOP_NOARG(0, "binary"),
- OPT_BOOL(0, "numstat", &numstat,
+ OPT_BOOL(0, "numstat", &state.numstat,
N_("show number of added and deleted lines in decimal notation")),
- OPT_BOOL(0, "summary", &summary,
+ OPT_BOOL(0, "summary", &state.summary,
N_("instead of applying the patch, output a summary for the input")),
- OPT_BOOL(0, "check", &check,
+ OPT_BOOL(0, "check", &state.check,
N_("instead of applying the patch, see if the patch is applicable")),
- OPT_BOOL(0, "index", &check_index,
+ OPT_BOOL(0, "index", &state.check_index,
N_("make sure the patch is applicable to the current index")),
- OPT_BOOL(0, "cached", &cached,
+ OPT_BOOL(0, "cached", &state.cached,
N_("apply a patch without touching the working tree")),
- OPT_BOOL(0, "unsafe-paths", &unsafe_paths,
+ OPT_BOOL(0, "unsafe-paths", &state.unsafe_paths,
N_("accept a patch that touches outside the working area")),
OPT_BOOL(0, "apply", &force_apply,
N_("also apply the patch (use with --stat/--summary/--check)")),
- OPT_BOOL('3', "3way", &threeway,
+ OPT_BOOL('3', "3way", &state.threeway,
N_( "attempt three-way merge if a patch does not apply")),
- OPT_FILENAME(0, "build-fake-ancestor", &fake_ancestor,
+ OPT_FILENAME(0, "build-fake-ancestor", &state.fake_ancestor,
N_("build a temporary index based on embedded index information")),
/* Think twice before adding "--nul" synonym to this */
- OPT_SET_INT('z', NULL, &line_termination,
+ OPT_SET_INT('z', NULL, &state.line_termination,
N_("paths are separated with NUL character"), '\0'),
- OPT_INTEGER('C', NULL, &p_context,
+ OPT_INTEGER('C', NULL, &state.p_context,
N_("ensure at least <n> lines of context match")),
- { OPTION_CALLBACK, 0, "whitespace", &whitespace_option, N_("action"),
+ { OPTION_CALLBACK, 0, "whitespace", &state, N_("action"),
N_("detect new or modified lines that have whitespace errors"),
0, option_parse_whitespace },
- { OPTION_CALLBACK, 0, "ignore-space-change", NULL, NULL,
+ { OPTION_CALLBACK, 0, "ignore-space-change", &state, NULL,
N_("ignore changes in whitespace when finding context"),
PARSE_OPT_NOARG, option_parse_space_change },
- { OPTION_CALLBACK, 0, "ignore-whitespace", NULL, NULL,
+ { OPTION_CALLBACK, 0, "ignore-whitespace", &state, NULL,
N_("ignore changes in whitespace when finding context"),
PARSE_OPT_NOARG, option_parse_space_change },
- OPT_BOOL('R', "reverse", &apply_in_reverse,
+ OPT_BOOL('R', "reverse", &state.apply_in_reverse,
N_("apply the patch in reverse")),
- OPT_BOOL(0, "unidiff-zero", &unidiff_zero,
+ OPT_BOOL(0, "unidiff-zero", &state.unidiff_zero,
N_("don't expect at least one line of context")),
- OPT_BOOL(0, "reject", &apply_with_reject,
+ OPT_BOOL(0, "reject", &state.apply_with_reject,
N_("leave the rejected hunks in corresponding *.rej files")),
- OPT_BOOL(0, "allow-overlap", &allow_overlap,
+ OPT_BOOL(0, "allow-overlap", &state.allow_overlap,
N_("allow overlapping hunks")),
- OPT__VERBOSE(&apply_verbosely, N_("be verbose")),
+ OPT__VERBOSE(&state.apply_verbosely, N_("be verbose")),
OPT_BIT(0, "inaccurate-eof", &options,
N_("tolerate incorrectly detected missing new-line at the end of file"),
INACCURATE_EOF),
OPT_BIT(0, "recount", &options,
N_("do not trust the line counts in the hunk headers"),
RECOUNT),
- { OPTION_CALLBACK, 0, "directory", NULL, N_("root"),
+ { OPTION_CALLBACK, 0, "directory", &state, N_("root"),
N_("prepend <root> to all filenames"),
0, option_parse_directory },
OPT_END()
};
- prefix = prefix_;
- prefix_length = prefix ? strlen(prefix) : 0;
- git_apply_config();
- if (apply_default_whitespace)
- parse_whitespace_option(apply_default_whitespace);
- if (apply_default_ignorewhitespace)
- parse_ignorewhitespace_option(apply_default_ignorewhitespace);
+ init_apply_state(&state, prefix, &lock_file);
- argc = parse_options(argc, argv, prefix, builtin_apply_options,
+ argc = parse_options(argc, argv, state.prefix, builtin_apply_options,
apply_usage, 0);
- if (apply_with_reject && threeway)
- die("--reject and --3way cannot be used together.");
- if (cached && threeway)
- die("--cached and --3way cannot be used together.");
- if (threeway) {
- if (is_not_gitdir)
- die(_("--3way outside a repository"));
- check_index = 1;
- }
- if (apply_with_reject)
- apply = apply_verbosely = 1;
- if (!force_apply && (diffstat || numstat || summary || check || fake_ancestor))
- apply = 0;
- if (check_index && is_not_gitdir)
- die(_("--index outside a repository"));
- if (cached) {
- if (is_not_gitdir)
- die(_("--cached outside a repository"));
- check_index = 1;
- }
- if (check_index)
- unsafe_paths = 0;
+ check_apply_state(&state, force_apply);
- for (i = 0; i < argc; i++) {
- const char *arg = argv[i];
- int fd;
+ ret = apply_all_patches(&state, argc, argv, options);
- if (!strcmp(arg, "-")) {
- errs |= apply_patch(0, "<stdin>", options);
- read_stdin = 0;
- continue;
- } else if (0 < prefix_length)
- arg = prefix_filename(prefix, prefix_length, arg);
+ clear_apply_state(&state);
- fd = open(arg, O_RDONLY);
- if (fd < 0)
- die_errno(_("can't open patch '%s'"), arg);
- read_stdin = 0;
- set_default_whitespace_mode(whitespace_option);
- errs |= apply_patch(fd, arg, options);
- close(fd);
- }
- set_default_whitespace_mode(whitespace_option);
- if (read_stdin)
- errs |= apply_patch(0, "<stdin>", options);
- if (whitespace_error) {
- if (squelch_whitespace_errors &&
- squelch_whitespace_errors < whitespace_error) {
- int squelched =
- whitespace_error - squelch_whitespace_errors;
- warning(Q_("squelched %d whitespace error",
- "squelched %d whitespace errors",
- squelched),
- squelched);
- }
- if (ws_error_action == die_on_ws_error)
- die(Q_("%d line adds whitespace errors.",
- "%d lines add whitespace errors.",
- whitespace_error),
- whitespace_error);
- if (applied_after_fixing_ws && apply)
- warning("%d line%s applied after"
- " fixing whitespace errors.",
- applied_after_fixing_ws,
- applied_after_fixing_ws == 1 ? "" : "s");
- else if (whitespace_error)
- warning(Q_("%d line adds whitespace errors.",
- "%d lines add whitespace errors.",
- whitespace_error),
- whitespace_error);
- }
-
- if (update_index) {
- if (write_locked_index(&the_index, &lock_file, COMMIT_LOCK))
- die(_("Unable to write new index file"));
- }
-
- return !!errs;
+ return ret;
}
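
Taken together, these hunks convert builtin/apply.c from file-scope globals to a caller-owned struct apply_state that is initialized, sanity-checked, threaded through every helper, and handed to the parse-options callbacks through opt->value. The snippet below is only a minimal sketch of that general pattern under invented names; it is not git's API and deliberately avoids any of its types.

/*
 * Minimal sketch of the "state struct instead of globals" pattern,
 * assuming a parse-options-like callback that receives a void *.
 * All names here are hypothetical.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct demo_state {
	int verbose;
	int p_value;
	const char *prefix;
};

static void init_demo_state(struct demo_state *state, const char *prefix)
{
	memset(state, 0, sizeof(*state));
	state->p_value = 1;		/* same defaults the globals used to carry */
	state->prefix = prefix;
}

/* option callback: the state arrives via the opaque value pointer */
static int opt_p(void *value, const char *arg)
{
	struct demo_state *state = value;
	state->p_value = atoi(arg);
	return 0;
}

static int run(struct demo_state *state)
{
	if (state->verbose)
		fprintf(stderr, "running with -p%d\n", state->p_value);
	return 0;
}

int main(void)
{
	struct demo_state state;
	init_demo_state(&state, NULL);
	opt_p(&state, "2");		/* what option parsing would do for -p2 */
	state.verbose = 1;
	return run(&state);
}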
static struct date_mode blame_date_mode = { DATE_ISO8601 };
static size_t blame_date_width;
-static struct string_list mailmap;
+static struct string_list mailmap = STRING_LIST_INIT_NODUP;
#ifndef DEBUG
#define DEBUG 0
int blamed_lines;
};
-static int diff_hunks(mmfile_t *file_a, mmfile_t *file_b, long ctxlen,
+static int diff_hunks(mmfile_t *file_a, mmfile_t *file_b,
xdl_emit_hunk_consume_func_t hunk_func, void *cb_data)
{
xpparam_t xpp = {0};
xdemitcb_t ecb = {NULL};
xpp.flags = xdl_opts;
- xecfg.ctxlen = ctxlen;
xecfg.hunk_func = hunk_func;
ecb.priv = cb_data;
return xdi_diff(file_a, file_b, &xpp, &xecfg, &ecb);
p->status);
case 'M':
porigin = get_origin(sb, parent, origin->path);
- hashcpy(porigin->blob_sha1, p->one->sha1);
+ hashcpy(porigin->blob_sha1, p->one->oid.hash);
porigin->mode = p->one->mode;
break;
case 'A':
}
}
diff_flush(&diff_opts);
- free_pathspec(&diff_opts.pathspec);
+ clear_pathspec(&diff_opts.pathspec);
return porigin;
}
if ((p->status == 'R' || p->status == 'C') &&
!strcmp(p->two->path, origin->path)) {
porigin = get_origin(sb, parent, p->one->path);
- hashcpy(porigin->blob_sha1, p->one->sha1);
+ hashcpy(porigin->blob_sha1, p->one->oid.hash);
porigin->mode = p->one->mode;
break;
}
}
diff_flush(&diff_opts);
- free_pathspec(&diff_opts.pathspec);
+ clear_pathspec(&diff_opts.pathspec);
return porigin;
}
fill_origin_blob(&sb->revs->diffopt, target, &file_o);
num_get_patch++;
- if (diff_hunks(&file_p, &file_o, 0, blame_chunk_cb, &d))
+ if (diff_hunks(&file_p, &file_o, blame_chunk_cb, &d))
die("unable to generate diff (%s -> %s)",
oid_to_hex(&parent->commit->object.oid),
oid_to_hex(&target->commit->object.oid));
* file_p partially may match that image.
*/
memset(split, 0, sizeof(struct blame_entry [3]));
- if (diff_hunks(file_p, &file_o, 1, handle_split_cb, &d))
+ if (diff_hunks(file_p, &file_o, handle_split_cb, &d))
die("unable to generate diff (%s)",
oid_to_hex(&parent->commit->object.oid));
/* remainder, if any, all match the preimage */
continue;
norigin = get_origin(sb, parent, p->one->path);
- hashcpy(norigin->blob_sha1, p->one->sha1);
+ hashcpy(norigin->blob_sha1, p->one->oid.hash);
norigin->mode = p->one->mode;
fill_origin_blob(&sb->revs->diffopt, norigin, &file_p);
if (!file_p.ptr)
} while (unblamed);
target->suspects = reverse_blame(leftover, NULL);
diff_flush(&diff_opts);
- free_pathspec(&diff_opts.pathspec);
+ clear_pathspec(&diff_opts.pathspec);
}
/*
static void verify_working_tree_path(struct commit *work_tree, const char *path)
{
struct commit_list *parents;
+ int pos;
for (parents = work_tree->parents; parents; parents = parents->next) {
const unsigned char *commit_sha1 = parents->item->object.oid.hash;
sha1_object_info(blob_sha1, NULL) == OBJ_BLOB)
return;
}
- die("no such path '%s' in HEAD", path);
+
+ pos = cache_name_pos(path, strlen(path));
+ if (pos >= 0)
+ ; /* path is in the index */
+ else if (!strcmp(active_cache[-1 - pos]->name, path))
+ ; /* path is in the index, unmerged */
+ else
+ die("no such path '%s' in HEAD", path);
}
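
The new fallback above relies on the index-lookup convention that a hit returns a non-negative position while a miss (including an unmerged entry with the same name at a higher stage) returns the insertion point encoded as a negative number, so -1 - pos recovers the slot to inspect. A small standalone sketch of that encoding follows; the "index" here is just a sorted string array and the names are made up.

/*
 * Sketch of the "negative return encodes the insertion slot"
 * convention used by cache_name_pos(); illustration only.
 */
#include <stdio.h>
#include <string.h>

static const char *entries[] = { "Makefile", "blame.c", "fetch.c" };
static const int nr = 3;

static int lookup(const char *name)
{
	int lo = 0, hi = nr;
	while (lo < hi) {
		int mid = (lo + hi) / 2;
		int cmp = strcmp(entries[mid], name);
		if (!cmp)
			return mid;	/* exact match */
		if (cmp < 0)
			lo = mid + 1;
		else
			hi = mid;
	}
	return -1 - lo;			/* miss: encode insertion point */
}

int main(void)
{
	int pos = lookup("commit.c");
	if (pos >= 0)
		puts("found");
	else
		printf("missing; would be inserted at %d\n", -1 - pos);
	return 0;
}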
static struct commit_list **append_parent(struct commit_list **tail, const unsigned char *sha1)
struct object *obj = revs->pending.objects[i].item;
if (obj->flags & UNINTERESTING)
continue;
- while (obj->type == OBJ_TAG)
- obj = deref_tag(obj, NULL, 0);
+ obj = deref_tag(obj, NULL, 0);
if (obj->type != OBJ_COMMIT)
die("Non commit %s?", revs->pending.objects[i].name);
if (found)
struct object *obj = revs->pending.objects[i].item;
if (!(obj->flags & UNINTERESTING))
continue;
- while (obj->type == OBJ_TAG)
- obj = deref_tag(obj, NULL, 0);
+ obj = deref_tag(obj, NULL, 0);
if (obj->type != OBJ_COMMIT)
die("Non commit %s?", revs->pending.objects[i].name);
if (sb->final)
enum object_type type;
struct commit *final_commit = NULL;
- static struct string_list range_list;
- static int output_option = 0, opt = 0;
- static int show_stats = 0;
- static const char *revs_file = NULL;
- static const char *contents_from = NULL;
- static const struct option options[] = {
+ struct string_list range_list = STRING_LIST_INIT_NODUP;
+ int output_option = 0, opt = 0;
+ int show_stats = 0;
+ const char *revs_file = NULL;
+ const char *contents_from = NULL;
+ const struct option options[] = {
OPT_BOOL(0, "incremental", &incremental, N_("Show blame entries as we find them, incrementally")),
OPT_BOOL('b', NULL, &blank_boundary, N_("Show blank SHA-1 for boundary commits (Default: off)")),
OPT_BOOL(0, "root", &show_root, N_("Do not treat root commits as boundaries (Default: off)")),
case DATE_RAW:
blame_date_width = sizeof("1161298804 -0700");
break;
+ case DATE_UNIX:
+ blame_date_width = sizeof("1161298804");
+ break;
case DATE_SHORT:
blame_date_width = sizeof("2006-10-19");
break;
lno = prepare_lines(&sb);
if (lno && !range_list.nr)
- string_list_append(&range_list, xstrdup("1"));
+ string_list_append(&range_list, "1");
anchor = 1;
range_set_init(&ranges, range_list.nr);
die(_("Couldn't look up commit object for HEAD"));
}
for (i = 0; i < argc; i++, strbuf_release(&bname)) {
- const char *target;
+ char *target = NULL;
int flags = 0;
strbuf_branchname(&bname, argv[i]);
}
}
- target = resolve_ref_unsafe(name,
- RESOLVE_REF_READING
- | RESOLVE_REF_NO_RECURSE
- | RESOLVE_REF_ALLOW_BAD_NAME,
- sha1, &flags);
+ target = resolve_refdup(name,
+ RESOLVE_REF_READING
+ | RESOLVE_REF_NO_RECURSE
+ | RESOLVE_REF_ALLOW_BAD_NAME,
+ sha1, &flags);
if (!target) {
error(remote_branch
? _("remote-tracking branch '%s' not found.")
check_branch_commit(bname.buf, name, sha1, head_rev, kinds,
force)) {
ret = 1;
- continue;
+ goto next;
}
if (delete_ref(name, is_null_sha1(sha1) ? NULL : sha1,
: _("Error deleting branch '%s'"),
bname.buf);
ret = 1;
- continue;
+ goto next;
}
if (!quiet) {
printf(remote_branch
: find_unique_abbrev(sha1, DEFAULT_ABBREV));
}
delete_branch_config(bname.buf);
+
+ next:
+ free(target);
}
free(name);
if (!buf.len || buf.buf[buf.len-1] != '\n')
strbuf_addch(&buf, '\n');
strbuf_commented_addf(&buf,
- "Please edit the description for the branch\n"
- " %s\n"
- "Lines starting with '%c' will be stripped.\n",
+ _("Please edit the description for the branch\n"
+ " %s\n"
+ "Lines starting with '%c' will be stripped.\n"),
branch_name, comment_line_char);
- if (write_file_gently(git_path(edit_description), "%s", buf.buf)) {
- strbuf_release(&buf);
- return error_errno(_("could not write branch description template"));
- }
+ write_file_buf(git_path(edit_description), buf.buf, buf.len);
strbuf_reset(&buf);
if (launch_editor(git_path(edit_description), &buf, NULL)) {
strbuf_release(&buf);
unsigned char sha1[20];
enum object_type type;
unsigned long size;
- unsigned long disk_size;
+ off_t disk_size;
const char *rest;
unsigned char delta_base_sha1[20];
if (data->mark_query)
data->info.disk_sizep = &data->disk_size;
else
- strbuf_addf(sb, "%lu", data->disk_size);
+ strbuf_addf(sb, "%"PRIuMAX, (uintmax_t)data->disk_size);
} else if (is_atom("rest", atom, len)) {
if (data->mark_query)
data->split_on_whitespace = 1;
hold_locked_index(lock_file, 1);
if (read_cache_preload(&opts->pathspec) < 0)
- return error(_("corrupt index file"));
+ return error(_("index file corrupt"));
if (opts->source_tree)
read_tree_some(opts->source_tree, &opts->pathspec);
hold_locked_index(lock_file, 1);
if (read_cache_preload(NULL) < 0)
- return error(_("corrupt index file"));
+ return error(_("index file corrupt"));
resolve_undo_clear();
if (opts->force) {
* entries in the index.
*/
- add_files_to_cache(NULL, NULL, 0);
+ add_files_to_cache(NULL, NULL, 0, 0);
/*
* NEEDSWORK: carrying over local changes
* when branches have different end-of-line
o.ancestor = old->name;
o.branch1 = new->name;
o.branch2 = "local";
- merge_trees(&o, new->commit->tree, work,
+ ret = merge_trees(&o, new->commit->tree, work,
old->commit->tree, &result);
+ if (ret < 0)
+ exit(128);
ret = reset_tree(new->commit->tree, opts, 0,
writeout_error);
+ strbuf_release(&o.obuf);
if (ret)
return ret;
}
update_ref(msg.buf, "HEAD", new->commit->object.oid.hash, NULL,
REF_NODEREF, UPDATE_REFS_DIE_ON_ERR);
if (!opts->quiet) {
- if (old->path && advice_detached_head)
+ if (old->path &&
+ advice_detached_head && !opts->force_detach)
detach_advice(new->name);
describe_detached_head(_("HEAD is now at"), new->commit);
}
static void describe_one_orphan(struct strbuf *sb, struct commit *commit)
{
strbuf_addstr(sb, " ");
- strbuf_addstr(sb,
- find_unique_abbrev(commit->object.oid.hash, DEFAULT_ABBREV));
+ strbuf_add_unique_abbrev(sb, commit->object.oid.hash, DEFAULT_ABBREV);
strbuf_addch(sb, ' ');
if (!parse_commit(commit))
pp_commit_easy(CMIT_FMT_ONELINE, commit, sb);
OPT_STRING('B', NULL, &opts.new_branch_force, N_("branch"),
N_("create/reset and checkout a branch")),
OPT_BOOL('l', NULL, &opts.new_branch_log, N_("create reflog for new branch")),
- OPT_BOOL(0, "detach", &opts.force_detach, N_("detach the HEAD at named commit")),
+ OPT_BOOL(0, "detach", &opts.force_detach, N_("detach HEAD at named commit")),
OPT_SET_INT('t', "track", &opts.track, N_("set upstream info for new branch"),
BRANCH_TRACK_EXPLICIT),
OPT_STRING(0, "orphan", &opts.new_orphan_branch, N_("new-branch"), N_("new unparented branch")),
static int option_no_checkout, option_bare, option_mirror, option_single_branch = -1;
static int option_local = -1, option_no_hardlinks, option_shared, option_recursive;
-static int option_shallow_submodules = -1;
+static int option_shallow_submodules;
static char *option_template, *option_depth;
static char *option_origin = NULL;
static char *option_branch = NULL;
static int option_verbosity;
static int option_progress = -1;
static enum transport_family family;
-static struct string_list option_config;
-static struct string_list option_reference;
+static struct string_list option_config = STRING_LIST_INIT_NODUP;
+static struct string_list option_reference = STRING_LIST_INIT_NODUP;
static int option_dissociate;
static int max_jobs = -1;
const struct ref *rm = mapped_refs;
if (check_connectivity) {
- if (transport->progress)
- fprintf(stderr, _("Checking connectivity... "));
- if (check_everything_connected_with_transport(iterate_ref_map,
- 0, &rm, transport))
+ struct check_connected_options opt = CHECK_CONNECTED_INIT;
+
+ opt.transport = transport;
+ opt.progress = transport->progress;
+
+ if (check_connected(iterate_ref_map, &rm, &opt))
die(_("remote did not send all necessary objects"));
- if (transport->progress)
- fprintf(stderr, _("done.\n"));
}
if (refs) {
struct argv_array args = ARGV_ARRAY_INIT;
argv_array_pushl(&args, "submodule", "update", "--init", "--recursive", NULL);
- if (option_shallow_submodules == 1
- || (option_shallow_submodules == -1 && option_depth))
+ if (option_shallow_submodules == 1)
argv_array_push(&args, "--depth=1");
if (max_jobs != -1)
"Then \"git cherry-pick --continue\" will resume cherry-picking\n"
"the remaining commits.\n");
+static GIT_PATH_FUNC(git_path_commit_editmsg, "COMMIT_EDITMSG")
+
static const char *use_message_buffer;
-static const char commit_editmsg[] = "COMMIT_EDITMSG";
static struct lock_file index_lock; /* real index */
static struct lock_file false_lock; /* used only for partial commits */
static enum {
*/
if (all || (also && pathspec.nr)) {
hold_locked_index(&index_lock, 1);
- add_files_to_cache(also ? prefix : NULL, &pathspec, 0);
+ add_files_to_cache(also ? prefix : NULL, &pathspec, 0, 0);
refresh_cache_or_die(refresh_flags);
update_main_cache_tree(WRITE_TREE_SILENT);
if (write_locked_index(&the_index, &index_lock, CLOSE_LOCK))
char *buffer;
buffer = strstr(use_message_buffer, "\n\n");
if (buffer)
- strbuf_addstr(&sb, buffer + 2);
+ strbuf_addstr(&sb, skip_blank_lines(buffer + 2));
hook_arg1 = "commit";
hook_arg2 = use_message;
} else if (fixup_message) {
hook_arg2 = "";
}
- s->fp = fopen_for_writing(git_path(commit_editmsg));
+ s->fp = fopen_for_writing(git_path_commit_editmsg());
if (s->fp == NULL)
- die_errno(_("could not open '%s'"), git_path(commit_editmsg));
+ die_errno(_("could not open '%s'"), git_path_commit_editmsg());
/* Ignore status.displayCommentPrefix: we do need comments in COMMIT_EDITMSG. */
old_display_comment_prefix = s->display_comment_prefix;
}
if (run_commit_hook(use_editor, index_file, "prepare-commit-msg",
- git_path(commit_editmsg), hook_arg1, hook_arg2, NULL))
+ git_path_commit_editmsg(), hook_arg1, hook_arg2, NULL))
return 0;
if (use_editor) {
const char *env[2] = { NULL };
env[0] = index;
snprintf(index, sizeof(index), "GIT_INDEX_FILE=%s", index_file);
- if (launch_editor(git_path(commit_editmsg), NULL, env)) {
+ if (launch_editor(git_path_commit_editmsg(), NULL, env)) {
fprintf(stderr,
_("Please supply the message using either -m or -F option.\n"));
exit(1);
}
if (!no_verify &&
- run_commit_hook(use_editor, index_file, "commit-msg", git_path(commit_editmsg), NULL)) {
+ run_commit_hook(use_editor, index_file, "commit-msg", git_path_commit_editmsg(), NULL)) {
return 0;
}
OPT_BOOL(0, "interactive", &interactive, N_("interactively add files")),
OPT_BOOL('p', "patch", &patch_interactive, N_("interactively add changes")),
OPT_BOOL('o', "only", &only, N_("commit only specified files")),
- OPT_BOOL('n', "no-verify", &no_verify, N_("bypass pre-commit hook")),
+ OPT_BOOL('n', "no-verify", &no_verify, N_("bypass pre-commit and commit-msg hooks")),
OPT_BOOL(0, "dry-run", &dry_run, N_("show what would be committed")),
OPT_SET_INT(0, "short", &status_format, N_("show status concisely"),
STATUS_FORMAT_SHORT),
/* Finally, get the commit message */
strbuf_reset(&sb);
- if (strbuf_read_file(&sb, git_path(commit_editmsg), 0) < 0) {
+ if (strbuf_read_file(&sb, git_path_commit_editmsg(), 0) < 0) {
int saved_errno = errno;
rollback_index_files();
die(_("could not read commit message: %s"), strerror(saved_errno));
static int use_global_config, use_system_config, use_local_config;
static struct git_config_source given_config_source;
static int actions, types;
-static const char *get_color_slot, *get_colorbool_slot;
static int end_null;
static int respect_includes = -1;
static int show_origin;
given_config_source.file : git_path("config"));
if (use_global_config) {
int fd = open(config_file, O_CREAT | O_EXCL | O_WRONLY, 0666);
- if (fd) {
+ if (fd >= 0) {
char *content = default_user_config();
write_str_in_full(fd, content);
free(content);
print_path(spec->path);
putchar('\n');
- if (!hashcmp(ospec->sha1, spec->sha1) &&
+ if (!oidcmp(&ospec->oid, &spec->oid) &&
ospec->mode == spec->mode)
break;
/* fallthrough */
if (no_data || S_ISGITLINK(spec->mode))
printf("M %06o %s ", spec->mode,
sha1_to_hex(anonymize ?
- anonymize_sha1(spec->sha1) :
- spec->sha1));
+ anonymize_sha1(spec->oid.hash) :
+ spec->oid.hash));
else {
- struct object *object = lookup_object(spec->sha1);
+ struct object *object = lookup_object(spec->oid.hash);
printf("M %06o :%d ", spec->mode,
get_object_mark(object));
}
/* Export the referenced blobs, and remember the marks. */
for (i = 0; i < diff_queued_diff.nr; i++)
if (!S_ISGITLINK(diff_queued_diff.queue[i]->two->mode))
- export_blob(diff_queued_diff.queue[i]->two->sha1);
+ export_blob(diff_queued_diff.queue[i]->two->oid.hash);
refname = commit->util;
if (anonymize) {
#include "submodule.h"
#include "connected.h"
#include "argv-array.h"
+#include "utf8.h"
static const char * const builtin_fetch_usage[] = {
N_("git fetch [<options>] [<repository> [<refspec>...]]"),
: STORE_REF_ERROR_OTHER;
}
-#define REFCOL_WIDTH 10
+static int refcol_width = 10;
+static int compact_format;
+
+static void adjust_refcol_width(const struct ref *ref)
+{
+ int max, rlen, llen, len;
+
+ /* uptodate lines are only shown on high verbosity level */
+ if (!verbosity && !oidcmp(&ref->peer_ref->old_oid, &ref->old_oid))
+ return;
+
+ max = term_columns();
+ rlen = utf8_strwidth(prettify_refname(ref->name));
+
+ llen = utf8_strwidth(prettify_refname(ref->peer_ref->name));
+
+ /*
+ * rough estimation to see if the output line is too long and
+ * should not be counted (we can't do precise calculation
+ * anyway because we don't know if the error explanation part
+ * will be printed in update_local_ref)
+ */
+ if (compact_format) {
+ llen = 0;
+ max = max * 2 / 3;
+ }
+ len = 21 /* flag and summary */ + rlen + 4 /* -> */ + llen;
+ if (len >= max)
+ return;
+
+ /*
+ * The calculation is not precise in compact mode because '*' can
+ * appear on the left hand side of '->' and shrink the column
+ * back.
+ */
+ if (refcol_width < rlen)
+ refcol_width = rlen;
+}
+
+static void prepare_format_display(struct ref *ref_map)
+{
+ struct ref *rm;
+ const char *format = "full";
+
+ git_config_get_string_const("fetch.output", &format);
+ if (!strcasecmp(format, "full"))
+ compact_format = 0;
+ else if (!strcasecmp(format, "compact"))
+ compact_format = 1;
+ else
+ die(_("configuration fetch.output contains invalid value %s"),
+ format);
+
+ for (rm = ref_map; rm; rm = rm->next) {
+ if (rm->status == REF_STATUS_REJECT_SHALLOW ||
+ !rm->peer_ref ||
+ !strcmp(rm->name, "HEAD"))
+ continue;
+
+ adjust_refcol_width(rm);
+ }
+}
+
+static void print_remote_to_local(struct strbuf *display,
+ const char *remote, const char *local)
+{
+ strbuf_addf(display, "%-*s -> %s", refcol_width, remote, local);
+}
+
+static int find_and_replace(struct strbuf *haystack,
+ const char *needle,
+ const char *placeholder)
+{
+ const char *p = strstr(haystack->buf, needle);
+ int plen, nlen;
+
+ if (!p)
+ return 0;
+
+ if (p > haystack->buf && p[-1] != '/')
+ return 0;
+
+ plen = strlen(p);
+ nlen = strlen(needle);
+ if (plen > nlen && p[nlen] != '/')
+ return 0;
+
+ strbuf_splice(haystack, p - haystack->buf, nlen,
+ placeholder, strlen(placeholder));
+ return 1;
+}
+
+static void print_compact(struct strbuf *display,
+ const char *remote, const char *local)
+{
+ struct strbuf r = STRBUF_INIT;
+ struct strbuf l = STRBUF_INIT;
+
+ if (!strcmp(remote, local)) {
+ strbuf_addf(display, "%-*s -> *", refcol_width, remote);
+ return;
+ }
+
+ strbuf_addstr(&r, remote);
+ strbuf_addstr(&l, local);
+
+ if (!find_and_replace(&r, local, "*"))
+ find_and_replace(&l, remote, "*");
+ print_remote_to_local(display, r.buf, l.buf);
+
+ strbuf_release(&r);
+ strbuf_release(&l);
+}
+
+static void format_display(struct strbuf *display, char code,
+ const char *summary, const char *error,
+ const char *remote, const char *local)
+{
+ strbuf_addf(display, "%c %-*s ", code, TRANSPORT_SUMMARY(summary));
+ if (!compact_format)
+ print_remote_to_local(display, remote, local);
+ else
+ print_compact(display, remote, local);
+ if (error)
+ strbuf_addf(display, " (%s)", error);
+}
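
prepare_format_display() and format_display() implement the new fetch.output=compact mode: the column width is sized up front from the refs that will actually be printed, and when one side of the "remote -> local" pair contains the other as a path component, that component is shown as '*'. Below is a rough standalone sketch of the substitution step only, under the boundary rule find_and_replace() enforces; the helper name and buffer handling are simplified and hypothetical.

/*
 * Sketch of the compact "remote -> local" substitution: the needle is
 * replaced by '*' only when it sits on a '/' component boundary.
 */
#include <stdio.h>
#include <string.h>

static int compact_one(char *dst, size_t dstlen, const char *hay, const char *needle)
{
	const char *p = strstr(hay, needle);
	size_t nlen = strlen(needle);

	if (!p)
		return 0;
	if (p > hay && p[-1] != '/')
		return 0;			/* must start a component */
	if (p[nlen] && p[nlen] != '/')
		return 0;			/* must end a component */
	snprintf(dst, dstlen, "%.*s*%s", (int)(p - hay), hay, p + nlen);
	return 1;
}

int main(void)
{
	char buf[256];
	const char *remote = "master", *local = "origin/master";

	if (compact_one(buf, sizeof(buf), remote, local))
		printf("%s -> %s\n", buf, local);
	else if (compact_one(buf, sizeof(buf), local, remote))
		printf("%s -> %s\n", remote, buf);	/* master -> origin/* */
	else
		printf("%s -> %s\n", remote, local);
	return 0;
}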
static int update_local_ref(struct ref *ref,
const char *remote,
if (!oidcmp(&ref->old_oid, &ref->new_oid)) {
if (verbosity > 0)
- strbuf_addf(display, "= %-*s %-*s -> %s",
- TRANSPORT_SUMMARY(_("[up to date]")),
- REFCOL_WIDTH, remote, pretty_ref);
+ format_display(display, '=', _("[up to date]"), NULL,
+ remote, pretty_ref);
return 0;
}
* If this is the head, and it's not okay to update
* the head, and the old value of the head isn't empty...
*/
- strbuf_addf(display,
- _("! %-*s %-*s -> %s (can't fetch in current branch)"),
- TRANSPORT_SUMMARY(_("[rejected]")),
- REFCOL_WIDTH, remote, pretty_ref);
+ format_display(display, '!', _("[rejected]"),
+ _("can't fetch in current branch"),
+ remote, pretty_ref);
return 1;
}
starts_with(ref->name, "refs/tags/")) {
int r;
r = s_update_ref("updating tag", ref, 0);
- strbuf_addf(display, "%c %-*s %-*s -> %s%s",
- r ? '!' : '-',
- TRANSPORT_SUMMARY(_("[tag update]")),
- REFCOL_WIDTH, remote, pretty_ref,
- r ? _(" (unable to update local ref)") : "");
+ format_display(display, r ? '!' : 't', _("[tag update]"),
+ r ? _("unable to update local ref") : NULL,
+ remote, pretty_ref);
return r;
}
(recurse_submodules != RECURSE_SUBMODULES_ON))
check_for_new_submodule_commits(ref->new_oid.hash);
r = s_update_ref(msg, ref, 0);
- strbuf_addf(display, "%c %-*s %-*s -> %s%s",
- r ? '!' : '*',
- TRANSPORT_SUMMARY(what),
- REFCOL_WIDTH, remote, pretty_ref,
- r ? _(" (unable to update local ref)") : "");
+ format_display(display, r ? '!' : '*', what,
+ r ? _("unable to update local ref") : NULL,
+ remote, pretty_ref);
return r;
}
(recurse_submodules != RECURSE_SUBMODULES_ON))
check_for_new_submodule_commits(ref->new_oid.hash);
r = s_update_ref("fast-forward", ref, 1);
- strbuf_addf(display, "%c %-*s %-*s -> %s%s",
- r ? '!' : ' ',
- TRANSPORT_SUMMARY_WIDTH, quickref.buf,
- REFCOL_WIDTH, remote, pretty_ref,
- r ? _(" (unable to update local ref)") : "");
+ format_display(display, r ? '!' : ' ', quickref.buf,
+ r ? _("unable to update local ref") : NULL,
+ remote, pretty_ref);
strbuf_release(&quickref);
return r;
} else if (force || ref->force) {
(recurse_submodules != RECURSE_SUBMODULES_ON))
check_for_new_submodule_commits(ref->new_oid.hash);
r = s_update_ref("forced-update", ref, 1);
- strbuf_addf(display, "%c %-*s %-*s -> %s (%s)",
- r ? '!' : '+',
- TRANSPORT_SUMMARY_WIDTH, quickref.buf,
- REFCOL_WIDTH, remote, pretty_ref,
- r ? _("unable to update local ref") : _("forced update"));
+ format_display(display, r ? '!' : '+', quickref.buf,
+ r ? _("unable to update local ref") : _("forced update"),
+ remote, pretty_ref);
strbuf_release(&quickref);
return r;
} else {
- strbuf_addf(display, "! %-*s %-*s -> %s %s",
- TRANSPORT_SUMMARY(_("[rejected]")),
- REFCOL_WIDTH, remote, pretty_ref,
- _("(non-fast-forward)"));
+ format_display(display, '!', _("[rejected]"), _("non-fast-forward"),
+ remote, pretty_ref);
return 1;
}
}
url = xstrdup("foreign");
rm = ref_map;
- if (check_everything_connected(iterate_ref_map, 0, &rm)) {
+ if (check_connected(iterate_ref_map, &rm, NULL)) {
rc = error(_("%s did not send all necessary objects\n"), url);
goto abort;
}
+ prepare_format_display(ref_map);
+
/*
* We do a pass for each fetch_head_status type in their enum order, so
* merged entries are written before not-for-merge. That lets readers
rc |= update_local_ref(ref, what, rm, ¬e);
free(ref);
} else
- strbuf_addf(¬e, "* %-*s %-*s -> FETCH_HEAD",
- TRANSPORT_SUMMARY_WIDTH,
- *kind ? kind : "branch",
- REFCOL_WIDTH,
- *what ? what : "HEAD");
+ format_display(¬e, '*',
+ *kind ? kind : "branch", NULL,
+ *what ? what : "HEAD",
+ "FETCH_HEAD");
if (note.len) {
if (verbosity >= 0 && !shown_url) {
fprintf(stderr, _("From %.*s\n"),
static int quickfetch(struct ref *ref_map)
{
struct ref *rm = ref_map;
+ struct check_connected_options opt = CHECK_CONNECTED_INIT;
/*
* If we are deepening a shallow clone we already have these
*/
if (depth)
return -1;
- return check_everything_connected(iterate_ref_map, 1, &rm);
+ opt.quiet = 1;
+ return check_connected(iterate_ref_map, &rm, &opt);
}
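
These call sites move from a fixed argument list to check_connected() taking an options struct initialized with CHECK_CONNECTED_INIT, so callers like quickfetch() can flip a single knob (opt.quiet = 1) while clone can pass progress and transport, and NULL keeps the defaults. The following is only a sketch of that "options struct + INIT macro, NULL means defaults" shape, with invented names.

/*
 * Sketch of the options-struct pattern the check_connected()
 * conversion uses; names are made up.
 */
#include <stdio.h>

struct walk_options {
	int quiet;
	int progress;
};
#define WALK_OPTIONS_INIT { 0, 0 }

static int walk(const char *what, const struct walk_options *opt)
{
	static const struct walk_options defaults = WALK_OPTIONS_INIT;
	if (!opt)
		opt = &defaults;	/* NULL picks the defaults */
	if (!opt->quiet)
		printf("walking %s%s\n", what, opt->progress ? " (progress)" : "");
	return 0;
}

int main(void)
{
	struct walk_options opt = WALK_OPTIONS_INIT;
	opt.quiet = 1;			/* like quickfetch() setting opt.quiet = 1 */
	walk("refs", &opt);
	return walk("refs", NULL);	/* like callers that accept the defaults */
}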
static int fetch_refs(struct transport *transport, struct ref *ref_map)
for (ref = stale_refs; ref; ref = ref->next)
string_list_append(&refnames, ref->name);
- result = delete_refs(&refnames);
+ result = delete_refs(&refnames, 0);
string_list_clear(&refnames, 0);
}
if (verbosity >= 0) {
for (ref = stale_refs; ref; ref = ref->next) {
+ struct strbuf sb = STRBUF_INIT;
if (!shown_url) {
fprintf(stderr, _("From %.*s\n"), url_len, url);
shown_url = 1;
}
- fprintf(stderr, " x %-*s %-*s -> %s\n",
- TRANSPORT_SUMMARY(_("[deleted]")),
- REFCOL_WIDTH, _("(none)"), prettify_refname(ref->name));
+ format_display(&sb, '-', _("[deleted]"), NULL,
+ _("(none)"), prettify_refname(ref->name));
+ fprintf(stderr, " %s\n", sb.buf);
+ strbuf_release(&sb);
warn_dangling_symref(stderr, dangling_msg, ref->name);
}
}
size_t wordlen = strcspn(value, " \t\n");
if (wordlen >= 1)
- string_list_append(g->list,
+ string_list_append_nodup(g->list,
xstrndup(value, wordlen));
value += wordlen + (value[wordlen] != '\0');
}
int cmd_fetch(int argc, const char **argv, const char *prefix)
{
int i;
- struct string_list list = STRING_LIST_INIT_NODUP;
+ struct string_list list = STRING_LIST_INIT_DUP;
struct remote *remote;
int result = 0;
struct argv_array argv_gc_auto = ARGV_ARRAY_INIT;
argv_array_clear(&options);
}
- /* All names were strdup()ed or strndup()ed */
- list.strdup_strings = 1;
string_list_clear(&list, 0);
close_all_packs();
static void add_people_count(struct strbuf *out, struct string_list *people)
{
if (people->nr == 1)
- strbuf_addf(out, "%s", people->items[0].string);
+ strbuf_addstr(out, people->items[0].string);
else if (people->nr == 2)
strbuf_addf(out, "%s (%d) and %s (%d)",
people->items[0].string,
#include "dir.h"
#include "progress.h"
#include "streaming.h"
+#include "decorate.h"
#define REACHABLE 0x0001
#define SEEN 0x0002
static int verbose;
static int show_progress = -1;
static int show_dangling = 1;
+static int name_objects;
#define ERROR_OBJECT 01
#define ERROR_REACHABLE 02
#define ERROR_PACK 04
#define ERROR_REFS 010
+static const char *describe_object(struct object *obj)
+{
+ static struct strbuf buf = STRBUF_INIT;
+ char *name = name_objects ?
+ lookup_decoration(fsck_walk_options.object_names, obj) : NULL;
+
+ strbuf_reset(&buf);
+ strbuf_addstr(&buf, oid_to_hex(&obj->oid));
+ if (name)
+ strbuf_addf(&buf, " (%s)", name);
+
+ return buf.buf;
+}
+
static int fsck_config(const char *var, const char *value, void *cb)
{
if (strcmp(var, "fsck.skiplist") == 0) {
const char *err)
{
fprintf(stderr, "%s in %s %s: %s\n",
- msg_type, typename(obj->type), oid_to_hex(&obj->oid), err);
+ msg_type, typename(obj->type), describe_object(obj), err);
}
static int objerror(struct object *obj, const char *err)
return -1;
}
-static int fsck_error_func(struct object *obj, int type, const char *message)
+static int fsck_error_func(struct fsck_options *o,
+ struct object *obj, int type, const char *message)
{
objreport(obj, (type == FSCK_WARN) ? "warning" : "error", message);
return (type == FSCK_WARN) ? 0 : 1;
if (!obj) {
/* ... these references to parent->fld are safe here */
printf("broken link from %7s %s\n",
- typename(parent->type), oid_to_hex(&parent->oid));
+ typename(parent->type), describe_object(parent));
printf("broken link from %7s %s\n",
(type == OBJ_ANY ? "unknown" : typename(type)), "unknown");
errors_found |= ERROR_REACHABLE;
if (!(obj->flags & HAS_OBJ)) {
if (parent && !has_object_file(&obj->oid)) {
printf("broken link from %7s %s\n",
- typename(parent->type), oid_to_hex(&parent->oid));
+ typename(parent->type), describe_object(parent));
printf(" to %7s %s\n",
- typename(obj->type), oid_to_hex(&obj->oid));
+ typename(obj->type), describe_object(obj));
errors_found |= ERROR_REACHABLE;
}
return 1;
return; /* it is in pack - forget about it */
if (connectivity_only && has_object_file(&obj->oid))
return;
- printf("missing %s %s\n", typename(obj->type), oid_to_hex(&obj->oid));
+ printf("missing %s %s\n", typename(obj->type),
+ describe_object(obj));
errors_found |= ERROR_REACHABLE;
return;
}
* since this is something that is prunable.
*/
if (show_unreachable) {
- printf("unreachable %s %s\n", typename(obj->type), oid_to_hex(&obj->oid));
+ printf("unreachable %s %s\n", typename(obj->type),
+ describe_object(obj));
return;
}
if (!obj->used) {
if (show_dangling)
printf("dangling %s %s\n", typename(obj->type),
- oid_to_hex(&obj->oid));
+ describe_object(obj));
if (write_lost_and_found) {
char *filename = git_pathdup("lost-found/%s/%s",
obj->type == OBJ_COMMIT ? "commit" : "other",
- oid_to_hex(&obj->oid));
+ describe_object(obj));
FILE *f;
if (safe_create_leading_directories_const(filename)) {
if (stream_blob_to_fd(fileno(f), obj->oid.hash, NULL, 1))
die_errno("Could not write '%s'", filename);
} else
- fprintf(f, "%s\n", oid_to_hex(&obj->oid));
+ fprintf(f, "%s\n", describe_object(obj));
if (fclose(f))
die_errno("Could not finish '%s'",
filename);
static void check_object(struct object *obj)
{
if (verbose)
- fprintf(stderr, "Checking %s\n", oid_to_hex(&obj->oid));
+ fprintf(stderr, "Checking %s\n", describe_object(obj));
if (obj->flags & REACHABLE)
check_reachable_object(obj);
if (verbose)
fprintf(stderr, "Checking %s %s\n",
- typename(obj->type), oid_to_hex(&obj->oid));
+ typename(obj->type), describe_object(obj));
if (fsck_walk(obj, NULL, &fsck_obj_options))
objerror(obj, "broken links");
free_commit_buffer(commit);
if (!commit->parents && show_root)
- printf("root %s\n", oid_to_hex(&commit->object.oid));
+ printf("root %s\n", describe_object(&commit->object));
}
if (obj->type == OBJ_TAG) {
struct tag *tag = (struct tag *) obj;
if (show_tags && tag->tagged) {
- printf("tagged %s %s", typename(tag->tagged->type), oid_to_hex(&tag->tagged->oid));
- printf(" (%s) in %s\n", tag->tag, oid_to_hex(&tag->object.oid));
+ printf("tagged %s %s", typename(tag->tagged->type),
+ describe_object(tag->tagged));
+ printf(" (%s) in %s\n", tag->tag,
+ describe_object(&tag->object));
}
}
static int fsck_obj_buffer(const unsigned char *sha1, enum object_type type,
unsigned long size, void *buffer, int *eaten)
{
+ /*
+ * Note, buffer may be NULL if type is OBJ_BLOB. See
+ * verify_packfile(), data_valid variable for details.
+ */
struct object *obj;
obj = parse_object_buffer(sha1, type, size, buffer, eaten);
if (!obj) {
static int default_refs;
-static void fsck_handle_reflog_sha1(const char *refname, unsigned char *sha1)
+static void fsck_handle_reflog_sha1(const char *refname, unsigned char *sha1,
+ unsigned long timestamp)
{
struct object *obj;
if (!is_null_sha1(sha1)) {
obj = lookup_object(sha1);
if (obj) {
+ if (timestamp && name_objects)
+ add_decoration(fsck_walk_options.object_names,
+ obj,
+ xstrfmt("%s@{%ld}", refname, timestamp));
obj->used = 1;
mark_object_reachable(obj);
} else {
fprintf(stderr, "Checking reflog %s->%s\n",
sha1_to_hex(osha1), sha1_to_hex(nsha1));
- fsck_handle_reflog_sha1(refname, osha1);
- fsck_handle_reflog_sha1(refname, nsha1);
+ fsck_handle_reflog_sha1(refname, osha1, 0);
+ fsck_handle_reflog_sha1(refname, nsha1, timestamp);
return 0;
}
}
default_refs++;
obj->used = 1;
+ if (name_objects)
+ add_decoration(fsck_walk_options.object_names,
+ obj, xstrdup(refname));
mark_object_reachable(obj);
return 0;
return 1;
}
obj->used = 1;
+ if (name_objects)
+ add_decoration(fsck_walk_options.object_names,
+ obj, xstrdup(":"));
mark_object_reachable(obj);
if (obj->type != OBJ_TREE)
err |= objerror(obj, "non-tree in cache-tree");
OPT_BOOL(0, "lost-found", &write_lost_and_found,
N_("write dangling objects in .git/lost-found")),
OPT_BOOL(0, "progress", &show_progress, N_("show progress")),
+ OPT_BOOL(0, "name-objects", &name_objects, N_("show verbose names for reachable objects")),
OPT_END(),
};
include_reflogs = 0;
}
+ if (name_objects)
+ fsck_walk_options.object_names =
+ xcalloc(1, sizeof(struct decoration));
+
git_config(fsck_config, NULL);
fsck_head_link();
continue;
obj->used = 1;
+ if (name_objects)
+ add_decoration(fsck_walk_options.object_names,
+ obj, xstrdup(arg));
mark_object_reachable(obj);
heads++;
continue;
continue;
obj = &blob->object;
obj->used = 1;
+ if (name_objects)
+ add_decoration(fsck_walk_options.object_names,
+ obj,
+ xstrfmt(":%s", active_cache[i]->name));
mark_object_reachable(obj);
}
if (active_cache_tree)
*/
cnt++;
}
- return gc_auto_pack_limit <= cnt;
+ return gc_auto_pack_limit < cnt;
}
static void add_repack_all_option(void)
for (nr = 0; nr < active_nr; nr++) {
const struct cache_entry *ce = active_cache[nr];
- if (!S_ISREG(ce->ce_mode) || ce_intent_to_add(ce))
+ if (!S_ISREG(ce->ce_mode))
continue;
if (!ce_path_match(ce, pathspec, NULL))
continue;
* cache version instead
*/
if (cached || (ce->ce_flags & CE_VALID) || ce_skip_worktree(ce)) {
- if (ce_stage(ce))
+ if (ce_stage(ce) || ce_intent_to_add(ce))
continue;
hit |= grep_sha1(opt, ce->sha1, ce->name, 0, ce->name);
}
free(to_free);
}
-/*
- * If open_html is not defined in a platform-specific way (see for
- * example compat/mingw.h), we use the script web--browse to display
- * HTML.
- */
-#ifndef open_html
static void open_html(const char *path)
{
execl_git_cmd("web--browse", "-c", "help.browser", path, (char *)NULL);
}
-#endif
static void show_html_page(const char *git_cmd)
{
static int do_fsck_object;
static struct fsck_options fsck_options = FSCK_OPTIONS_STRICT;
static int verbose;
+static int show_resolving_progress;
static int show_stat;
static int check_self_contained_and_connected;
use(sizeof(struct pack_header));
}
-static NORETURN void bad_object(unsigned long offset, const char *format,
+static NORETURN void bad_object(off_t offset, const char *format,
...) __attribute__((format (printf, 2, 3)));
-static NORETURN void bad_object(unsigned long offset, const char *format, ...)
+static NORETURN void bad_object(off_t offset, const char *format, ...)
{
va_list params;
char buf[1024];
va_start(params, format);
vsnprintf(buf, sizeof(buf), format, params);
va_end(params);
- die(_("pack has bad object at offset %lu: %s"), offset, buf);
+ die(_("pack has bad object at offset %"PRIuMAX": %s"),
+ (uintmax_t)offset, buf);
}
static inline struct thread_local *get_thread_data(void)
return (type == OBJ_REF_DELTA || type == OBJ_OFS_DELTA);
}
-static void *unpack_entry_data(unsigned long offset, unsigned long size,
+static void *unpack_entry_data(off_t offset, unsigned long size,
enum object_type type, unsigned char *sha1)
{
static char fixed_buf[8192];
void *cb_data)
{
off_t from = obj[0].idx.offset + obj[0].hdr_size;
- unsigned long len = obj[1].idx.offset - from;
+ off_t len = obj[1].idx.offset - from;
unsigned char *data, *inbuf;
git_zstream stream;
int status;
data = xmallocz(consume ? 64*1024 : obj->size);
- inbuf = xmalloc((len < 64*1024) ? len : 64*1024);
+ inbuf = xmalloc((len < 64*1024) ? (int)len : 64*1024);
memset(&stream, 0, sizeof(stream));
git_inflate_init(&stream);
stream.avail_out = consume ? 64*1024 : obj->size;
do {
- ssize_t n = (len < 64*1024) ? len : 64*1024;
+ ssize_t n = (len < 64*1024) ? (ssize_t)len : 64*1024;
n = xpread(get_thread_data()->pack_fd, inbuf, n, from);
if (n < 0)
die_errno(_("cannot pread pack file"));
if (!n)
- die(Q_("premature end of pack file, %lu byte missing",
- "premature end of pack file, %lu bytes missing",
- len),
- len);
+ die(Q_("premature end of pack file, %"PRIuMAX" byte missing",
+ "premature end of pack file, %"PRIuMAX" bytes missing",
+ (unsigned int)len),
+ (uintmax_t)len);
from += n;
len -= n;
stream.next_in = inbuf;
qsort(ref_deltas, nr_ref_deltas, sizeof(struct ref_delta_entry),
compare_ref_delta_entry);
- if (verbose)
+ if (verbose || show_resolving_progress)
progress = start_progress(_("Resolving deltas"),
nr_ref_deltas + nr_ofs_deltas);
struct pack_idx_option opts;
unsigned char pack_sha1[20];
unsigned foreign_nr = 1; /* zero is a "good" value, assume bad */
+ int report_end_of_input = 0;
if (argc == 2 && !strcmp(argv[1], "-h"))
usage(index_pack_usage);
input_len = sizeof(*hdr);
} else if (!strcmp(arg, "-v")) {
verbose = 1;
+ } else if (!strcmp(arg, "--show-resolving-progress")) {
+ show_resolving_progress = 1;
+ } else if (!strcmp(arg, "--report-end-of-input")) {
+ report_end_of_input = 1;
} else if (!strcmp(arg, "-o")) {
if (index_name || (i+1) >= argc)
usage(index_pack_usage);
obj_stat = xcalloc(st_add(nr_objects, 1), sizeof(struct object_stat));
ofs_deltas = xcalloc(nr_objects, sizeof(struct ofs_delta_entry));
parse_pack_objects(pack_sha1);
+ if (report_end_of_input)
+ write_in_full(2, "\0", 1);
resolve_deltas();
conclude_pack(fix_thin_pack, curr_pack, pack_sha1);
free(ofs_deltas);
if (!(flags & INIT_DB_QUIET)) {
int len = strlen(git_dir);
- /* TRANSLATORS: The first '%s' is either "Reinitialized
- existing" or "Initialized empty", the second " shared" or
- "", and the last '%s%s' is the verbatim directory name. */
- printf(_("%s%s Git repository in %s%s\n"),
- reinit ? _("Reinitialized existing") : _("Initialized empty"),
- get_shared_repository() ? _(" shared") : "",
- git_dir, len && git_dir[len-1] != '/' ? "/" : "");
+ if (reinit)
+ printf(get_shared_repository()
+ ? _("Reinitialized existing shared Git repository in %s%s\n")
+ : _("Reinitialized existing Git repository in %s%s\n"),
+ git_dir, len && git_dir[len-1] != '/' ? "/" : "");
+ else
+ printf(get_shared_repository()
+ ? _("Initialized empty shared Git repository in %s%s\n")
+ : _("Initialized empty Git repository in %s%s\n"),
+ git_dir, len && git_dir[len-1] != '/' ? "/" : "");
}
return 0;
{
int in_place = 0;
int trim_empty = 0;
- struct string_list trailers = STRING_LIST_INIT_DUP;
+ struct string_list trailers = STRING_LIST_INIT_NODUP;
struct option options[] = {
OPT_BOOL(0, "in-place", &in_place, N_("edit files in place")),
static int default_abbrev_commit;
static int default_show_root = 1;
static int default_follow;
+static int default_show_signature;
static int decoration_style;
static int decoration_given;
static int use_mailmap_config;
rev->abbrev_commit = default_abbrev_commit;
rev->show_root_diff = default_show_root;
rev->subject_prefix = fmt_patch_subject_prefix;
+ rev->show_signature = default_show_signature;
DIFF_OPT_SET(&rev->diffopt, ALLOW_TEXTCONV);
if (default_date_mode)
if (rev->commit_format != CMIT_FMT_ONELINE)
putchar(rev->diffopt.line_termination);
}
- printf(_("Final output: %d %s\n"), nr, stage);
+ fprintf(rev->diffopt.file, _("Final output: %d %s\n"), nr, stage);
}
static struct itimerval early_output_timer;
static void log_show_early(struct rev_info *revs, struct commit_list *list)
{
- int i = revs->early_output;
+ int i = revs->early_output, close_file = revs->diffopt.close_file;
int show_header = 1;
+ revs->diffopt.close_file = 0;
sort_in_topological_order(&list, revs->sort_order);
while (list && i) {
struct commit *commit = list->item;
case commit_ignore:
break;
case commit_error:
+ if (close_file)
+ fclose(revs->diffopt.file);
return;
}
list = list->next;
}
/* Did we already get enough commits for the early output? */
- if (!i)
+ if (!i) {
+ if (close_file)
+ fclose(revs->diffopt.file);
return;
+ }
/*
* ..if no, then repeat it twice a second until we
{
struct commit *commit;
int saved_nrl = 0;
- int saved_dcctc = 0;
+ int saved_dcctc = 0, close_file = rev->diffopt.close_file;
if (rev->early_output)
setup_early_output(rev);
* and HAS_CHANGES being accumulated in rev->diffopt, so be careful to
* retain that state information if replacing rev->diffopt in this loop
*/
+ rev->diffopt.close_file = 0;
while ((commit = get_revision(rev)) != NULL) {
if (!log_tree_commit(rev, commit) && rev->max_count >= 0)
/*
}
rev->diffopt.degraded_cc_to_c = saved_dcctc;
rev->diffopt.needed_rename_limit = saved_nrl;
+ if (close_file)
+ fclose(rev->diffopt.file);
if (rev->diffopt.output_format & DIFF_FORMAT_CHECKDIFF &&
DIFF_OPT_TST(&rev->diffopt, CHECK_FAILED)) {
use_mailmap_config = git_config_bool(var, value);
return 0;
}
+ if (!strcmp(var, "log.showsignature")) {
+ default_show_signature = git_config_bool(var, value);
+ return 0;
+ }
if (grep_config(var, value, cb) < 0)
return -1;
pp.fmt = rev->commit_format;
pp.date_mode = rev->date_mode;
pp_user_info(&pp, "Tagger", &out, buf, get_log_output_encoding());
- printf("%s", out.buf);
+ fprintf(rev->diffopt.file, "%s", out.buf);
strbuf_release(&out);
}
char *buf;
unsigned long size;
- fflush(stdout);
+ fflush(rev->diffopt.file);
if (!DIFF_OPT_TOUCHED(&rev->diffopt, ALLOW_TEXTCONV) ||
!DIFF_OPT_TST(&rev->diffopt, ALLOW_TEXTCONV))
return stream_blob_to_fd(1, sha1, NULL, 0);
}
if (offset < size)
- fwrite(buf + offset, size - offset, 1, stdout);
+ fwrite(buf + offset, size - offset, 1, rev->diffopt.file);
free(buf);
return 0;
}
struct strbuf *base,
const char *pathname, unsigned mode, int stage, void *context)
{
- printf("%s%s\n", pathname, S_ISDIR(mode) ? "/" : "");
+ FILE *file = context;
+ fprintf(file, "%s%s\n", pathname, S_ISDIR(mode) ? "/" : "");
return 0;
}
if (rev.shown_one)
putchar('\n');
- printf("%stag %s%s\n",
+ fprintf(rev.diffopt.file, "%stag %s%s\n",
diff_get_color_opt(&rev.diffopt, DIFF_COMMIT),
t->tag,
diff_get_color_opt(&rev.diffopt, DIFF_RESET));
case OBJ_TREE:
if (rev.shown_one)
putchar('\n');
- printf("%stree %s%s\n\n",
+ fprintf(rev.diffopt.file, "%stree %s%s\n\n",
diff_get_color_opt(&rev.diffopt, DIFF_COMMIT),
name,
diff_get_color_opt(&rev.diffopt, DIFF_RESET));
read_tree_recursive((struct tree *)o, "", 0, 0, &match_all,
- show_tree_object, NULL);
+ show_tree_object, rev.diffopt.file);
rev.shown_one = 1;
break;
case OBJ_COMMIT:
static char *default_attach = NULL;
-static struct string_list extra_hdr;
-static struct string_list extra_to;
-static struct string_list extra_cc;
+static struct string_list extra_hdr = STRING_LIST_INIT_NODUP;
+static struct string_list extra_to = STRING_LIST_INIT_NODUP;
+static struct string_list extra_cc = STRING_LIST_INIT_NODUP;
static void add_header(const char *value)
{
static int thread;
static int do_signoff;
static int base_auto;
+static char *from;
static const char *signature = git_version_string;
static const char *signature_file;
static int config_cover_letter;
base_auto = git_config_bool(var, value);
return 0;
}
+ if (!strcmp(var, "format.from")) {
+ int b = git_config_maybe_bool(var, value);
+ free(from);
+ if (b < 0)
+ from = xstrdup(value);
+ else if (b)
+ from = xstrdup(git_committer_info(IDENT_NO_DATE));
+ else
+ from = NULL;
+ return 0;
+ }
return git_log_config(var, value, cb);
}
-static FILE *realstdout = NULL;
static const char *output_directory = NULL;
static int outdir_offset;
-static int reopen_stdout(struct commit *commit, const char *subject,
+static int open_next_file(struct commit *commit, const char *subject,
struct rev_info *rev, int quiet)
{
struct strbuf filename = STRBUF_INIT;
fmt_output_subject(&filename, subject, rev);
if (!quiet)
- fprintf(realstdout, "%s\n", filename.buf + outdir_offset);
+ printf("%s\n", filename.buf + outdir_offset);
- if (freopen(filename.buf, "w", stdout) == NULL)
+ if ((rev->diffopt.file = fopen(filename.buf, "w")) == NULL)
return error(_("Cannot open patch file %s"), filename.buf);
strbuf_release(&filename);
info->message_id = strbuf_detach(&buf, NULL);
}
-static void print_signature(void)
+static void print_signature(FILE *file)
{
if (!signature || !*signature)
return;
- printf("-- \n%s", signature);
+ fprintf(file, "-- \n%s", signature);
if (signature[strlen(signature)-1] != '\n')
- putchar('\n');
- putchar('\n');
+ putc('\n', file);
+ putc('\n', file);
}
static void add_branch_description(struct strbuf *buf, const char *branch_name)
struct pretty_print_context pp = {0};
struct commit *head = list[0];
- if (rev->commit_format != CMIT_FMT_EMAIL)
+ if (!cmit_fmt_is_mail(rev->commit_format))
die(_("Cover letter needs email format"));
committer = git_committer_info(0);
if (!use_stdout &&
- reopen_stdout(NULL, rev->numbered_files ? NULL : "cover-letter", rev, quiet))
+ open_next_file(NULL, rev->numbered_files ? NULL : "cover-letter", rev, quiet))
return;
log_write_email_headers(rev, head, &pp.subject, &pp.after_subject,
pp_title_line(&pp, &msg, &sb, encoding, need_8bit_cte);
pp_remainder(&pp, &msg, &sb, 0);
add_branch_description(&sb, branch_name);
- printf("%s\n", sb.buf);
+ fprintf(rev->diffopt.file, "%s\n", sb.buf);
strbuf_release(&sb);
log.wrap = 72;
log.in1 = 2;
log.in2 = 4;
+ log.file = rev->diffopt.file;
for (i = 0; i < nr; i++)
shortlog_add_commit(&log, list[i]);
diffcore_std(&opts);
diff_flush(&opts);
- printf("\n");
- print_signature();
+ fprintf(rev->diffopt.file, "\n");
+ print_signature(rev->diffopt.file);
}
static const char *clean_message_id(const char *msg_id)
}
}
-static void print_bases(struct base_tree_info *bases)
+static void print_bases(struct base_tree_info *bases, FILE *file)
{
int i;
return;
/* Show the base commit */
- printf("base-commit: %s\n", oid_to_hex(&bases->base_commit));
+ fprintf(file, "base-commit: %s\n", oid_to_hex(&bases->base_commit));
/* Show the prerequisite patches */
for (i = bases->nr_patch_id - 1; i >= 0; i--)
- printf("prerequisite-patch-id: %s\n", oid_to_hex(&bases->patch_id[i]));
+ fprintf(file, "prerequisite-patch-id: %s\n", oid_to_hex(&bases->patch_id[i]));
free(bases->patch_id);
bases->nr_patch_id = 0;
int quiet = 0;
int reroll_count = -1;
char *branch_name = NULL;
- char *from = NULL;
char *base_commit = NULL;
struct base_tree_info bases;
setup_pager();
if (output_directory) {
+ if (rev.diffopt.use_color != GIT_COLOR_ALWAYS)
+ rev.diffopt.use_color = GIT_COLOR_NEVER;
if (use_stdout)
die(_("standard output, or directory, which one?"));
if (mkdir(output_directory, 0777) < 0 && errno != EEXIST)
get_patch_ids(&rev, &ids);
}
- if (!use_stdout)
- realstdout = xfdopen(xdup(1), "w");
-
if (prepare_revision_walk(&rev))
die(_("revision walk setup failed"));
rev.boundary = 1;
gen_message_id(&rev, "cover");
make_cover_letter(&rev, use_stdout,
origin, nr, list, branch_name, quiet);
- print_bases(&bases);
+ print_bases(&bases, rev.diffopt.file);
total++;
start_number--;
}
}
if (!use_stdout &&
- reopen_stdout(rev.numbered_files ? NULL : commit, NULL, &rev, quiet))
+ open_next_file(rev.numbered_files ? NULL : commit, NULL, &rev, quiet))
die(_("Failed to create output files"));
shown = log_tree_commit(&rev, commit);
free_commit_buffer(commit);
rev.shown_one = 0;
if (shown) {
if (rev.mime_boundary)
- printf("\n--%s%s--\n\n\n",
+ fprintf(rev.diffopt.file, "\n--%s%s--\n\n\n",
mime_boundary_leader,
rev.mime_boundary);
else
- print_signature();
- print_bases(&bases);
+ print_signature(rev.diffopt.file);
+ print_bases(&bases, rev.diffopt.file);
}
if (!use_stdout)
- fclose(stdout);
+ fclose(rev.diffopt.file);
}
free(list);
free(branch_name);
};
static void print_commit(char sign, struct commit *commit, int verbose,
- int abbrev)
+ int abbrev, FILE *file)
{
if (!verbose) {
- printf("%c %s\n", sign,
+ fprintf(file, "%c %s\n", sign,
find_unique_abbrev(commit->object.oid.hash, abbrev));
} else {
struct strbuf buf = STRBUF_INIT;
pp_commit_easy(CMIT_FMT_ONELINE, commit, &buf);
- printf("%c %s %s\n", sign,
+ fprintf(file, "%c %s %s\n", sign,
find_unique_abbrev(commit->object.oid.hash, abbrev),
buf.buf);
strbuf_release(&buf);
commit = list->item;
if (has_commit_patch_id(commit, &ids))
sign = '-';
- print_commit(sign, commit, verbose, abbrev);
+ print_commit(sign, commit, verbose, abbrev, revs.diffopt.file);
list = list->next;
}
*/
pos = cache_name_pos(ent->name, ent->len);
if (0 <= pos)
- die("bug in show-killed-files");
+ die("BUG: killed-file %.*s not found",
+ ent->len, ent->name);
pos = -pos - 1;
while (pos < active_nr &&
ce_stage(active_cache[pos]))
static struct strbuf buf = STRBUF_INIT;
static int keep_cr;
+static int mboxrd;
+
+static int is_gtfrom(const struct strbuf *buf)
+{
+ size_t min = strlen(">From ");
+ size_t ngt;
+
+ if (buf->len < min)
+ return 0;
+
+ ngt = strspn(buf->buf, ">");
+ return ngt && starts_with(buf->buf + ngt, "From ");
+}
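/*
 * Illustrative sketch, not part of the patch: mboxrd writers quote a body
 * line by prepending one '>' whenever the rest of the line (after any
 * leading '>' run) begins with "From ".  The reading side above, i.e.
 * is_gtfrom() plus the strbuf_remove() call further down, strips exactly
 * one '>' back off such lines.  Assuming the usual strbuf helpers and
 * starts_with(), the writer side would roughly be:
 */
static void mboxrd_quote_line(struct strbuf *out, const char *line)
{
	size_t ngt = strspn(line, ">");

	if (starts_with(line + ngt, "From "))
		strbuf_addch(out, '>');	/* "From " and ">From " each gain one '>' */
	strbuf_addstr(out, line);
}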
/* Called with the first line (potentially partial)
* already in buf[] -- normally that should begin with
strbuf_addch(&buf, '\n');
}
+ if (mboxrd && is_gtfrom(&buf))
+ strbuf_remove(&buf, 0, 1);
+
if (fwrite(buf.buf, 1, buf.len, output) != buf.len)
die_errno("cannot write output");
keep_cr = 1;
} else if ( arg[1] == 'o' && arg[2] ) {
dir = arg+2;
+ } else if (!strcmp(arg, "--mboxrd")) {
+ mboxrd = 1;
} else if ( arg[1] == '-' && !arg[2] ) {
argp++; /* -- marks end of options */
break;
static const char *better_branch_name(const char *branch)
{
- static char githead_env[8 + 40 + 1];
+ static char githead_env[8 + GIT_SHA1_HEXSZ + 1];
char *name;
- if (strlen(branch) != 40)
+ if (strlen(branch) != GIT_SHA1_HEXSZ)
return branch;
xsnprintf(githead_env, sizeof(githead_env), "GITHEAD_%s", branch);
name = getenv(githead_env);
int cmd_merge_recursive(int argc, const char **argv, const char *prefix)
{
- const unsigned char *bases[21];
+ const struct object_id *bases[21];
unsigned bases_count = 0;
int i, failed;
- unsigned char h1[20], h2[20];
+ struct object_id h1, h2;
struct merge_options o;
struct commit *result;
continue;
}
if (bases_count < ARRAY_SIZE(bases)-1) {
- unsigned char *sha = xmalloc(20);
- if (get_sha1(argv[i], sha))
+ struct object_id *oid = xmalloc(sizeof(struct object_id));
+ if (get_oid(argv[i], oid))
die("Could not parse object '%s'", argv[i]);
- bases[bases_count++] = sha;
+ bases[bases_count++] = oid;
}
else
warning("Cannot handle more than %d bases. "
o.branch1 = argv[++i];
o.branch2 = argv[++i];
- if (get_sha1(o.branch1, h1))
+ if (get_oid(o.branch1, &h1))
die("Could not resolve ref '%s'", o.branch1);
- if (get_sha1(o.branch2, h2))
+ if (get_oid(o.branch2, &h2))
die("Could not resolve ref '%s'", o.branch2);
o.branch1 = better_branch_name(o.branch1);
if (o.verbosity >= 3)
printf("Merging %s with %s\n", o.branch1, o.branch2);
- failed = merge_recursive_generic(&o, h1, h2, bases_count, bases, &result);
+ failed = merge_recursive_generic(&o, &h1, &h2, bases_count, bases, &result);
if (failed < 0)
return 128; /* die() error code */
return failed;
#include "fmt-merge-msg.h"
#include "gpg-interface.h"
#include "sequencer.h"
+#include "string-list.h"
#define DEFAULT_TWOHEAD (1<<0)
#define DEFAULT_OCTOPUS (1<<1)
PARSE_OPT_NOARG | PARSE_OPT_NONEG, NULL, FF_ONLY },
OPT_RERERE_AUTOUPDATE(&allow_rerere_auto),
OPT_BOOL(0, "verify-signatures", &verify_signatures,
- N_("Verify that the named commit has a valid GPG signature")),
+ N_("verify that the named commit has a valid GPG signature")),
OPT_CALLBACK('s', "strategy", &use_strategies, N_("strategy"),
N_("merge strategy to use"), option_parse_strategy),
OPT_CALLBACK('X', "strategy-option", &xopts, N_("option=value"),
struct rev_info rev;
struct strbuf out = STRBUF_INIT;
struct commit_list *j;
- const char *filename;
- int fd;
struct pretty_print_context ctx = {0};
printf(_("Squash commit -- not updating HEAD\n"));
- filename = git_path_squash_msg();
- fd = open(filename, O_WRONLY | O_CREAT, 0666);
- if (fd < 0)
- die_errno(_("Could not write to '%s'"), filename);
init_revisions(&rev, NULL);
rev.ignore_merges = 1;
oid_to_hex(&commit->object.oid));
pretty_print_commit(&ctx, commit, &out);
}
- if (write_in_full(fd, out.buf, out.len) != out.len)
- die_errno(_("Writing SQUASH_MSG"));
- if (close(fd))
- die_errno(_("Finishing SQUASH_MSG"));
+ write_file_buf(git_path_squash_msg(), out.buf, out.len);
strbuf_release(&out);
}
if (ref_exists(truname.buf)) {
strbuf_addf(msg,
"%s\t\tbranch '%s'%s of .\n",
- sha1_to_hex(remote_head->object.oid.hash),
+ oid_to_hex(&remote_head->object.oid),
truname.buf + 11,
(early ? " (early part)" : ""));
strbuf_release(&truname);
desc = merge_remote_util(remote_head);
if (desc && desc->obj && desc->obj->type == OBJ_TAG) {
strbuf_addf(msg, "%s\t\t%s '%s'\n",
- sha1_to_hex(desc->obj->oid.hash),
+ oid_to_hex(&desc->obj->oid),
typename(desc->obj->type),
remote);
goto cleanup;
}
strbuf_addf(msg, "%s\t\tcommit '%s'\n",
- sha1_to_hex(remote_head->object.oid.hash), remote);
+ oid_to_hex(&remote_head->object.oid), remote);
cleanup:
strbuf_release(&buf);
strbuf_release(&bname);
hold_locked_index(&lock, 1);
clean = merge_recursive(&o, head,
remoteheads->item, reversed, &result);
+ if (clean < 0)
+ exit(128);
if (active_cache_changed &&
write_locked_index(&the_index, &lock, COMMIT_LOCK))
die (_("unable to write %s"), get_index_file());
return ret;
}
-static void split_merge_strategies(const char *string, struct strategy **list,
- int *nr, int *alloc)
-{
- char *p, *q, *buf;
-
- if (!string)
- return;
-
- buf = xstrdup(string);
- q = buf;
- for (;;) {
- p = strchr(q, ' ');
- if (!p) {
- ALLOC_GROW(*list, *nr + 1, *alloc);
- (*list)[(*nr)++].name = xstrdup(q);
- free(buf);
- return;
- } else {
- *p = '\0';
- ALLOC_GROW(*list, *nr + 1, *alloc);
- (*list)[(*nr)++].name = xstrdup(q);
- q = ++p;
- }
- }
-}
-
static void add_strategies(const char *string, unsigned attr)
{
- struct strategy *list = NULL;
- int list_alloc = 0, list_nr = 0, i;
-
- memset(&list, 0, sizeof(list));
- split_merge_strategies(string, &list, &list_nr, &list_alloc);
- if (list) {
- for (i = 0; i < list_nr; i++)
- append_strategy(get_strategy(list[i].name));
+ int i;
+
+ if (string) {
+ struct string_list list = STRING_LIST_INIT_DUP;
+ struct string_list_item *item;
+ string_list_split(&list, string, ' ', -1);
+ for_each_string_list_item(item, &list)
+ append_strategy(get_strategy(item->string));
+ string_list_clear(&list, 0);
return;
}
for (i = 0; i < ARRAY_SIZE(all_strategy); i++)
}
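/*
 * Illustrative sketch, not part of the patch: string_list_split() as used
 * above duplicates the input (the list must be STRING_LIST_INIT_DUP), so a
 * value like "recursive ours" splits into two items that are released with
 * string_list_clear().  A minimal standalone use of the same API:
 */
static int count_words(const char *s)
{
	struct string_list list = STRING_LIST_INIT_DUP;
	int n;

	/* note: consecutive delimiters produce empty items */
	string_list_split(&list, s, ' ', -1);
	n = list.nr;
	string_list_clear(&list, 0);
	return n;
}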
-static void write_merge_msg(struct strbuf *msg)
-{
- const char *filename = git_path_merge_msg();
- int fd = open(filename, O_WRONLY | O_CREAT, 0666);
- if (fd < 0)
- die_errno(_("Could not open '%s' for writing"),
- filename);
- if (write_in_full(fd, msg->buf, msg->len) != msg->len)
- die_errno(_("Could not write to '%s'"), filename);
- close(fd);
-}
-
static void read_merge_msg(struct strbuf *msg)
{
const char *filename = git_path_merge_msg();
strbuf_addch(&msg, '\n');
if (0 < option_edit)
strbuf_commented_addf(&msg, _(merge_editor_comment), comment_line_char);
- write_merge_msg(&msg);
+ write_file_buf(git_path_merge_msg(), msg.buf, msg.len);
if (run_commit_hook(0 < option_edit, get_index_file(), "prepare-commit-msg",
git_path_merge_msg(), "merge", NULL))
abort_commit(remoteheads, NULL);
static void write_merge_state(struct commit_list *remoteheads)
{
- const char *filename;
- int fd;
struct commit_list *j;
struct strbuf buf = STRBUF_INIT;
}
strbuf_addf(&buf, "%s\n", oid_to_hex(oid));
}
- filename = git_path_merge_head();
- fd = open(filename, O_WRONLY | O_CREAT, 0666);
- if (fd < 0)
- die_errno(_("Could not open '%s' for writing"), filename);
- if (write_in_full(fd, buf.buf, buf.len) != buf.len)
- die_errno(_("Could not write to '%s'"), filename);
- close(fd);
+ write_file_buf(git_path_merge_head(), buf.buf, buf.len);
strbuf_addch(&merge_msg, '\n');
- write_merge_msg(&merge_msg);
+ write_file_buf(git_path_merge_msg(), merge_msg.buf, merge_msg.len);
- filename = git_path_merge_mode();
- fd = open(filename, O_WRONLY | O_CREAT | O_TRUNC, 0666);
- if (fd < 0)
- die_errno(_("Could not open '%s' for writing"), filename);
strbuf_reset(&buf);
if (fast_forward == FF_NO)
strbuf_addf(&buf, "no-ff");
- if (write_in_full(fd, buf.buf, buf.len) != buf.len)
- die_errno(_("Could not write to '%s'"), filename);
- close(fd);
+ write_file_buf(git_path_merge_mode(), buf.buf, buf.len);
}
static int default_edit_option(void)
if (e) {
int v = git_config_maybe_bool(name, e);
if (v < 0)
- die("Bad value '%s' in environment '%s'", e, name);
+ die(_("Bad value '%s' in environment '%s'"), e, name);
return v;
}
if (!commit) {
if (ptr)
*ptr = '\0';
- die("not something we can merge in %s: %s",
+ die(_("not something we can merge in %s: %s"),
filename, merge_names->buf + pos);
}
remotes = &commit_list_insert(commit, remotes)->next;
struct commit *commit = get_merge_parent(argv[i]);
if (!commit)
help_unknown_ref(argv[i], "merge",
- "not something we can merge");
+ _("not something we can merge"));
remotes = &commit_list_insert(commit, remotes)->next;
}
remoteheads = reduce_parents(head_commit, head_subsumed, remoteheads);
for (p = remoteheads; p; p = p->next) {
struct commit *commit = p->item;
strbuf_addf(&buf, "GITHEAD_%s",
- sha1_to_hex(commit->object.oid.hash));
+ oid_to_hex(&commit->object.oid));
setenv(buf.buf, merge_remote_util(commit)->name, 1);
strbuf_reset(&buf);
if (fast_forward != FF_ONLY &&
* If head can reach all the merge then we are up to date.
* but first the most common case of merging one remote.
*/
- finish_up_to_date("Already up-to-date.");
+ finish_up_to_date(_("Already up-to-date."));
goto done;
} else if (fast_forward != FF_NO && !remoteheads->next &&
!common->next &&
- !hashcmp(common->item->object.oid.hash, head_commit->object.oid.hash)) {
+ !oidcmp(&common->item->object.oid, &head_commit->object.oid)) {
/* Again the most common case of merging one remote. */
struct strbuf msg = STRBUF_INIT;
struct commit *commit;
* HEAD^^" would be missed.
*/
common_one = get_merge_bases(head_commit, j->item);
- if (hashcmp(common_one->item->object.oid.hash,
- j->item->object.oid.hash)) {
+ if (oidcmp(&common_one->item->object.oid, &j->item->object.oid)) {
up_to_date = 0;
break;
}
}
if (up_to_date) {
- finish_up_to_date("Already up-to-date. Yeeah!");
+ finish_up_to_date(_("Already up-to-date. Yeeah!"));
goto done;
}
}
* Stash away the local changes so that we can try more than one.
*/
save_state(stash))
- hashcpy(stash, null_sha1);
+ hashclr(stash);
for (i = 0; i < use_strategies_nr; i++) {
int ret;
int cmd_mv(int argc, const char **argv, const char *prefix)
{
- int i, gitmodules_modified = 0;
+ int i, flags, gitmodules_modified = 0;
int verbose = 0, show_only = 0, force = 0, ignore_errors = 0;
struct option builtin_mv_options[] = {
OPT__VERBOSE(&verbose, N_("be verbose")),
modes = xcalloc(argc, sizeof(enum update_mode));
/*
* Keep trailing slash, needed to let
- * "git mv file no-such-dir/" error out.
+ * "git mv file no-such-dir/" error out, except in the case
+ * "git mv directory no-such-dir/".
*/
- dest_path = internal_copy_pathspec(prefix, argv + argc, 1,
- KEEP_TRAILING_SLASH);
+ flags = KEEP_TRAILING_SLASH;
+ if (argc == 1 && is_directory(argv[0]) && !is_directory(argv[1]))
+ flags = 0;
+ dest_path = internal_copy_pathspec(prefix, argv + argc, 1, flags);
submodule_gitfile = xcalloc(argc, sizeof(char *));
if (dest_path[0][0] == '\0')
};
static const char note_template[] =
- "\nWrite/edit the notes for the following object:\n";
+ N_("Write/edit the notes for the following object:");
struct note_data {
int given;
copy_obj_to_fd(fd, old_note);
strbuf_addch(&buf, '\n');
- strbuf_add_commented_lines(&buf, note_template, strlen(note_template));
+ strbuf_add_commented_lines(&buf, "\n", strlen("\n"));
+ strbuf_add_commented_lines(&buf, _(note_template), strlen(_(note_template)));
strbuf_addch(&buf, '\n');
write_or_die(fd, buf.buf, buf.len);
if (git_config_get_string(key, &value))
return 1;
if (parse_notes_merge_strategy(value, strategy))
- git_die_config(key, "unknown notes merge strategy %s", value);
+ git_die_config(key, _("unknown notes merge strategy %s"), value);
free(value);
return 0;
if (strategy || do_commit + do_abort == 0)
do_merge = 1;
if (do_merge + do_commit + do_abort != 1) {
- error("cannot mix --commit, --abort or -s/--strategy");
+ error(_("cannot mix --commit, --abort or -s/--strategy"));
usage_with_options(git_notes_merge_usage, options);
}
if (do_merge && argc != 1) {
- error("Must specify a notes ref to merge");
+ error(_("Must specify a notes ref to merge"));
usage_with_options(git_notes_merge_usage, options);
} else if (!do_merge && argc) {
- error("too many parameters");
+ error(_("too many parameters"));
usage_with_options(git_notes_merge_usage, options);
}
if (strategy) {
if (parse_notes_merge_strategy(strategy, &o.strategy)) {
- error("Unknown -s/--strategy: %s", strategy);
+ error(_("Unknown -s/--strategy: %s"), strategy);
usage_with_options(git_notes_merge_usage, options);
}
} else {
die(_("A notes merge into %s is already in-progress at %s"),
default_notes_ref(), wt->path);
if (create_symref("NOTES_MERGE_REF", default_notes_ref(), NULL))
- die("Failed to store link to current notes ref (%s)",
+ die(_("Failed to store link to current notes ref (%s)"),
default_notes_ref());
- printf("Automatic notes merge failed. Fix conflicts in %s and "
- "commit the result with 'git notes merge --commit', or "
- "abort the merge with 'git notes merge --abort'.\n",
+ printf(_("Automatic notes merge failed. Fix conflicts in %s and "
+ "commit the result with 'git notes merge --commit', or "
+ "abort the merge with 'git notes merge --abort'.\n"),
git_path(NOTES_MERGE_WORKTREE));
}
struct notes_tree *t;
int show_only = 0, verbose = 0;
struct option options[] = {
- OPT__DRY_RUN(&show_only, "do not remove, show only"),
- OPT__VERBOSE(&verbose, "report pruned notes"),
+ OPT__DRY_RUN(&show_only, N_("do not remove, show only")),
+ OPT__VERBOSE(&verbose, N_("report pruned notes")),
OPT_END()
};
git_notes_get_ref_usage, 0);
if (argc) {
- error("too many parameters");
+ error(_("too many parameters"));
usage_with_options(git_notes_get_ref_usage, options);
}
static int reuse_delta = 1, reuse_object = 1;
static int keep_unreachable, unpack_unreachable, include_tag;
static unsigned long unpack_unreachable_expiration;
+static int pack_loose_unreachable;
static int local;
+static int have_non_local_packs;
static int incremental;
static int ignore_packed_keep;
static int allow_ofs_delta;
}
/* Return 0 if we will bust the pack-size limit */
-static unsigned long write_reuse_object(struct sha1file *f, struct object_entry *entry,
- unsigned long limit, int usable_delta)
+static off_t write_reuse_object(struct sha1file *f, struct object_entry *entry,
+ unsigned long limit, int usable_delta)
{
struct packed_git *p = entry->in_pack;
struct pack_window *w_curs = NULL;
struct revindex_entry *revidx;
off_t offset;
enum object_type type = entry->type;
- unsigned long datalen;
+ off_t datalen;
unsigned char header[10], dheader[10];
unsigned hdrlen;
}
/* Return 0 if we will bust the pack-size limit */
-static unsigned long write_object(struct sha1file *f,
- struct object_entry *entry,
- off_t write_offset)
+static off_t write_object(struct sha1file *f,
+ struct object_entry *entry,
+ off_t write_offset)
{
- unsigned long limit, len;
+ unsigned long limit;
+ off_t len;
int usable_delta, to_reuse;
if (!pack_to_stdout)
struct object_entry *e,
off_t *offset)
{
- unsigned long size;
+ off_t size;
int recursing;
/*
return 1;
if (incremental)
return 0;
+
+ /*
+ * When asked to do --local (do not include an
+ * object that appears in a pack we borrow
+ * from elsewhere) or --honor-pack-keep (do not
+ * include an object that appears in a pack marked
+ * with .keep), we need to make sure no copy of this
+ * object come from in _any_ pack that causes us to
+ * omit it, and need to complete this loop. When
+ * neither option is in effect, we know the object
+ * we just found is going to be packed, so break
+ * out of the loop to return 1 now.
+ */
+ if (!ignore_packed_keep &&
+ (!local || !have_non_local_packs))
+ break;
+
if (local && !p->pack_local)
return 0;
if (ignore_packed_keep && p->pack_local && p->pack_keep)
free(in_pack.array);
}
+static int add_loose_object(const unsigned char *sha1, const char *path,
+ void *data)
+{
+ enum object_type type = sha1_object_info(sha1, NULL);
+
+ if (type < 0) {
+ warning("loose object at %s could not be examined", path);
+ return 0;
+ }
+
+ add_object_entry(sha1, type, "", 0);
+ return 0;
+}
+
+/*
+ * We actually don't even have to worry about reachability here.
+ * add_object_entry will weed out duplicates, so we just add every
+ * loose object we find.
+ */
+static void add_unreachable_loose_objects(void)
+{
+ for_each_loose_file_in_objdir(get_object_directory(),
+ add_loose_object,
+ NULL, NULL, NULL);
+}
+
static int has_sha1_pack_kept_or_nonlocal(const unsigned char *sha1)
{
static struct packed_git *last_found = (void *)1;
if (keep_unreachable)
add_objects_in_unpacked_packs(&revs);
+ if (pack_loose_unreachable)
+ add_unreachable_loose_objects();
if (unpack_unreachable)
loosen_unused_packed_objects(&revs);
N_("include tag objects that refer to objects to be packed")),
OPT_BOOL(0, "keep-unreachable", &keep_unreachable,
N_("keep unreachable objects")),
+ OPT_BOOL(0, "pack-loose-unreachable", &pack_loose_unreachable,
+ N_("pack loose unreachable objects")),
{ OPTION_CALLBACK, 0, "unpack-unreachable", NULL, N_("time"),
N_("unpack unreachable objects newer than <time>"),
PARSE_OPT_OPTARG, option_parse_unpack_unreachable },
progress = 2;
prepare_packed_git();
+ if (ignore_packed_keep) {
+ struct packed_git *p;
+ for (p = packed_git; p; p = p->next)
+ if (p->pack_local && p->pack_keep)
+ break;
+ if (!p) /* no keep-able packs found */
+ ignore_packed_keep = 0;
+ }
+ if (local) {
+ /*
+ * unlike ignore_packed_keep above, we do not want to
+ * unset "local" based on looking at packs, as it
+ * also covers non-local objects
+ */
+ struct packed_git *p;
+ for (p = packed_git; p; p = p->next) {
+ if (!p->pack_local) {
+ have_non_local_packs = 1;
+ break;
+ }
+ }
+ }
if (progress)
progress_state = start_progress(_("Counting objects"), 0);
argv_array_push(&args, "--no-autostash");
else if (opt_autostash == 1)
argv_array_push(&args, "--autostash");
+ if (opt_verify_signatures &&
+ !strcmp(opt_verify_signatures, "--verify-signatures"))
+ warning(_("ignoring --verify-signatures for rebase"));
argv_array_push(&args, "--onto");
argv_array_push(&args, sha1_to_hex(merge_head));
git_config(git_pull_config, NULL);
if (read_cache_unmerged())
- die_resolve_conflict("Pull");
+ die_resolve_conflict("pull");
if (file_exists(git_path("MERGE_HEAD")))
die_conclude_merge();
return 1;
}
-static int do_push(const char *repo, int flags)
+static int do_push(const char *repo, int flags,
+ const struct string_list *push_options)
{
int i, errs;
struct remote *remote = pushremote_get(repo);
if (remote->mirror)
flags |= (TRANSPORT_PUSH_MIRROR|TRANSPORT_PUSH_FORCE);
+ if (push_options->nr)
+ flags |= TRANSPORT_PUSH_OPTIONS;
+
if ((flags & TRANSPORT_PUSH_ALL) && refspec) {
if (!strcmp(*refspec, "refs/tags/*"))
return error(_("--all and --tags are incompatible"));
for (i = 0; i < url_nr; i++) {
struct transport *transport =
transport_get(remote, url[i]);
+ if (flags & TRANSPORT_PUSH_OPTIONS)
+ transport->push_options = push_options;
if (push_with_options(transport, flags))
errs++;
}
} else {
struct transport *transport =
transport_get(remote, NULL);
-
+ if (flags & TRANSPORT_PUSH_OPTIONS)
+ transport->push_options = push_options;
if (push_with_options(transport, flags))
errs++;
}
int push_cert = -1;
int rc;
const char *repo = NULL; /* default repository */
+ static struct string_list push_options = STRING_LIST_INIT_DUP;
+ static struct string_list_item *item;
+
struct option options[] = {
OPT__VERBOSITY(&verbosity),
OPT_STRING( 0 , "repo", &repo, N_("repository"), N_("repository")),
0, "signed", &push_cert, "yes|no|if-asked", N_("GPG sign the push"),
PARSE_OPT_OPTARG, option_parse_push_signed },
OPT_BIT(0, "atomic", &flags, N_("request atomic transaction on remote side"), TRANSPORT_PUSH_ATOMIC),
+ OPT_STRING_LIST('o', "push-option", &push_options, N_("server-specific"), N_("option to transmit")),
OPT_SET_INT('4', "ipv4", &family, N_("use IPv4 addresses only"),
TRANSPORT_FAMILY_IPV4),
OPT_SET_INT('6', "ipv6", &family, N_("use IPv6 addresses only"),
set_refspecs(argv + 1, argc - 1, repo);
}
- rc = do_push(repo, flags);
+ for_each_string_list_item(item, &push_options)
+ if (strchr(item->string, '\n'))
+ die(_("push options must not have new line characters"));
+
+ rc = do_push(repo, flags, &push_options);
if (rc == -1)
usage_with_options(push_usage, options);
else
static int receive_unpack_limit = -1;
static int transfer_unpack_limit = -1;
static int advertise_atomic_push = 1;
+static int advertise_push_options;
static int unpack_limit = 100;
static int report_status;
static int use_sideband;
static int use_atomic;
+static int use_push_options;
static int quiet;
static int prefer_ofs_delta = 1;
static int auto_update_server_info;
static unsigned long nonce_stamp_slop_limit;
static struct ref_transaction *transaction;
+static enum {
+ KEEPALIVE_NEVER = 0,
+ KEEPALIVE_AFTER_NUL,
+ KEEPALIVE_ALWAYS
+} use_keepalive;
+static int keepalive_in_sec = 5;
+
static enum deny_action parse_deny_action(const char *var, const char *value)
{
if (value) {
return 0;
}
+ if (strcmp(var, "receive.advertisepushoptions") == 0) {
+ advertise_push_options = git_config_bool(var, value);
+ return 0;
+ }
+
+ if (strcmp(var, "receive.keepalive") == 0) {
+ keepalive_in_sec = git_config_int(var, value);
+ return 0;
+ }
+
return git_default_config(var, value, cb);
}
strbuf_addstr(&cap, " ofs-delta");
if (push_cert_nonce)
strbuf_addf(&cap, " push-cert=%s", push_cert_nonce);
+ if (advertise_push_options)
+ strbuf_addstr(&cap, " push-options");
strbuf_addf(&cap, " agent=%s", git_user_agent_sanitized());
packet_write(1, "%s %s%c%s\n",
sha1_to_hex(sha1), path, 0, cap.buf);
static int copy_to_sideband(int in, int out, void *arg)
{
char data[128];
+ int keepalive_active = 0;
+
+ if (keepalive_in_sec <= 0)
+ use_keepalive = KEEPALIVE_NEVER;
+ if (use_keepalive == KEEPALIVE_ALWAYS)
+ keepalive_active = 1;
+
while (1) {
- ssize_t sz = xread(in, data, sizeof(data));
+ ssize_t sz;
+
+ if (keepalive_active) {
+ struct pollfd pfd;
+ int ret;
+
+ pfd.fd = in;
+ pfd.events = POLLIN;
+ ret = poll(&pfd, 1, 1000 * keepalive_in_sec);
+
+ if (ret < 0) {
+ if (errno == EINTR)
+ continue;
+ else
+ break;
+ } else if (ret == 0) {
+ /* no data; send a keepalive packet */
+ static const char buf[] = "0005\1";
+ write_or_die(1, buf, sizeof(buf) - 1);
+ continue;
+ } /* else there is actual data to read */
+ }
+
+ sz = xread(in, data, sizeof(data));
if (sz <= 0)
break;
+
+ if (use_keepalive == KEEPALIVE_AFTER_NUL && !keepalive_active) {
+ const char *p = memchr(data, '\0', sz);
+ if (p) {
+ /*
+ * The NUL tells us to start sending keepalives. Make
+ * sure we send any other data we read along
+ * with it.
+ */
+ keepalive_active = 1;
+ send_sideband(1, 2, data, p - data, use_sideband);
+ send_sideband(1, 2, p + 1, sz - (p - data + 1), use_sideband);
+ continue;
+ }
+ }
+
+ /*
+ * Either we're not looking for a NUL signal, or we didn't see
+ * it yet; just pass along the data.
+ */
send_sideband(1, 2, data, sz, use_sideband);
}
close(in);
}
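/*
 * Illustrative sketch, not part of the patch: the "0005\1" keepalive sent
 * above is an empty band-#1 pkt-line.  The 4-hex-digit length counts
 * itself, so 4 (length header) + 1 (band byte) = 0x0005, with no payload
 * after the band byte.  Built by hand, assuming xsnprintf() and
 * write_or_die() from git-compat-util.h, it would look like this:
 */
static void send_empty_keepalive(int fd)
{
	char pkt[5];

	xsnprintf(pkt, sizeof(pkt), "%04x", 4 + 1);	/* "0005" */
	pkt[4] = '\1';					/* sideband channel 1 */
	write_or_die(fd, pkt, sizeof(pkt));
}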
}
+struct receive_hook_feed_state {
+ struct command *cmd;
+ int skip_broken;
+ struct strbuf buf;
+ const struct string_list *push_options;
+};
+
typedef int (*feed_fn)(void *, const char **, size_t *);
-static int run_and_feed_hook(const char *hook_name, feed_fn feed, void *feed_state)
+static int run_and_feed_hook(const char *hook_name, feed_fn feed,
+ struct receive_hook_feed_state *feed_state)
{
struct child_process proc = CHILD_PROCESS_INIT;
struct async muxer;
proc.argv = argv;
proc.in = -1;
proc.stdout_to_stderr = 1;
+ if (feed_state->push_options) {
+ int i;
+ for (i = 0; i < feed_state->push_options->nr; i++)
+ argv_array_pushf(&proc.env_array,
+ "GIT_PUSH_OPTION_%d=%s", i,
+ feed_state->push_options->items[i].string);
+ argv_array_pushf(&proc.env_array, "GIT_PUSH_OPTION_COUNT=%d",
+ feed_state->push_options->nr);
+ } else
+ argv_array_pushf(&proc.env_array, "GIT_PUSH_OPTION_COUNT");
if (use_sideband) {
memset(&muxer, 0, sizeof(muxer));
return finish_command(&proc);
}
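/*
 * Illustrative sketch, not part of the patch: a hook run by the code above
 * sees GIT_PUSH_OPTION_COUNT plus one GIT_PUSH_OPTION_<n> per option in its
 * environment.  A hypothetical helper that collects them back into a
 * string_list (assuming xstrfmt() and string_list_append()) could be:
 */
static void collect_push_options_from_env(struct string_list *out)
{
	const char *count = getenv("GIT_PUSH_OPTION_COUNT");
	int i, n = count ? atoi(count) : 0;

	for (i = 0; i < n; i++) {
		char *key = xstrfmt("GIT_PUSH_OPTION_%d", i);
		const char *value = getenv(key);

		if (value)
			string_list_append(out, value);
		free(key);
	}
}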
-struct receive_hook_feed_state {
- struct command *cmd;
- int skip_broken;
- struct strbuf buf;
-};
-
static int feed_receive_hook(void *state_, const char **bufp, size_t *sizep)
{
struct receive_hook_feed_state *state = state_;
return 0;
}
-static int run_receive_hook(struct command *commands, const char *hook_name,
- int skip_broken)
+static int run_receive_hook(struct command *commands,
+ const char *hook_name,
+ int skip_broken,
+ const struct string_list *push_options)
{
struct receive_hook_feed_state state;
int status;
if (feed_receive_hook(&state, NULL, NULL))
return 0;
state.cmd = commands;
+ state.push_options = push_options;
status = run_and_feed_hook(hook_name, feed_receive_hook, &state);
strbuf_release(&state.buf);
return status;
{
static struct lock_file shallow_lock;
struct sha1_array extra = SHA1_ARRAY_INIT;
- const char *alt_file;
+ struct check_connected_options opt = CHECK_CONNECTED_INIT;
uint32_t mask = 1 << (cmd->index % 32);
int i;
!delayed_reachability_test(si, i))
sha1_array_append(&extra, si->shallow->sha1[i]);
- setup_alternate_shallow(&shallow_lock, &alt_file, &extra);
- if (check_shallow_connected(command_singleton_iterator,
- 0, cmd, alt_file)) {
+ setup_alternate_shallow(&shallow_lock, &opt.shallow_file, &extra);
+ if (check_connected(command_singleton_iterator, cmd, &opt)) {
rollback_lock_file(&shallow_lock);
sha1_array_clear(&extra);
return -1;
if (shallow_update && si->shallow_ref[cmd->index])
/* to be checked in update_shallow_ref() */
continue;
- if (!check_everything_connected(command_singleton_iterator,
- 0, &singleton))
+ if (!check_connected(command_singleton_iterator, &singleton,
+ NULL))
continue;
cmd->error_string = "missing necessary objects";
}
static void execute_commands(struct command *commands,
const char *unpacker_error,
- struct shallow_info *si)
+ struct shallow_info *si,
+ const struct string_list *push_options)
{
+ struct check_connected_options opt = CHECK_CONNECTED_INIT;
struct command *cmd;
unsigned char sha1[20];
struct iterate_data data;
+ struct async muxer;
+ int err_fd = 0;
if (unpacker_error) {
for (cmd = commands; cmd; cmd = cmd->next)
return;
}
+ if (use_sideband) {
+ memset(&muxer, 0, sizeof(muxer));
+ muxer.proc = copy_to_sideband;
+ muxer.in = -1;
+ if (!start_async(&muxer))
+ err_fd = muxer.in;
+ /* ...else, continue without relaying sideband */
+ }
+
data.cmds = commands;
data.si = si;
- if (check_everything_connected(iterate_receive_command_list, 0, &data))
+ opt.err_fd = err_fd;
+ opt.progress = err_fd && !quiet;
+ if (check_connected(iterate_receive_command_list, &data, &opt))
set_connectivity_errors(commands, si);
+ if (use_sideband)
+ finish_async(&muxer);
+
reject_updates_to_hidden(commands);
- if (run_receive_hook(commands, "pre-receive", 0)) {
+ if (run_receive_hook(commands, "pre-receive", 0, push_options)) {
for (cmd = commands; cmd; cmd = cmd->next) {
if (!cmd->error_string)
cmd->error_string = "pre-receive hook declined";
refname = line + 82;
reflen = linelen - 82;
- cmd = xcalloc(1, st_add3(sizeof(struct command), reflen, 1));
+ FLEX_ALLOC_MEM(cmd, ref_name, refname, reflen);
hashcpy(cmd->old_sha1, old_sha1);
hashcpy(cmd->new_sha1, new_sha1);
- memcpy(cmd->ref_name, refname, reflen);
- cmd->ref_name[reflen] = '\0';
*tail = cmd;
return &cmd->next;
}
if (advertise_atomic_push
&& parse_feature_request(feature_list, "atomic"))
use_atomic = 1;
+ if (advertise_push_options
+ && parse_feature_request(feature_list, "push-options"))
+ use_push_options = 1;
}
if (!strcmp(line, "push-cert")) {
return commands;
}
+static void read_push_options(struct string_list *options)
+{
+ while (1) {
+ char *line;
+ int len;
+
+ line = packet_read_line(0, &len);
+
+ if (!line)
+ break;
+
+ string_list_append(options, line);
+ }
+}
+
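/*
 * Illustrative sketch, not part of the patch: read_push_options() above
 * consumes pkt-lines until packet_read_line() returns NULL at the flush
 * packet.  The sending side is therefore expected to emit one pkt-line per
 * option followed by a flush, roughly like this (assuming packet_write()
 * and packet_flush() from pkt-line.h):
 */
static void send_push_options(int fd, const struct string_list *options)
{
	const struct string_list_item *item;

	for_each_string_list_item(item, options)
		packet_write(fd, "%s", item->string);
	packet_flush(fd);
}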
static const char *parse_pack_header(struct pack_header *hdr)
{
switch (read_pack_header(0, hdr)) {
(uintmax_t)getpid(),
hostname);
+ if (!quiet && err_fd)
+ argv_array_push(&child.args, "--show-resolving-progress");
+ if (use_sideband)
+ argv_array_push(&child.args, "--report-end-of-input");
if (fsck_objects)
argv_array_pushf(&child.args, "--strict%s",
fsck_msg_types.buf);
if (!use_sideband)
return unpack(0, si);
+ use_keepalive = KEEPALIVE_AFTER_NUL;
memset(&muxer, 0, sizeof(muxer));
muxer.proc = copy_to_sideband;
muxer.in = -1;
if ((commands = read_head_info(&shallow)) != NULL) {
const char *unpack_status = NULL;
+ struct string_list push_options = STRING_LIST_INIT_DUP;
+
+ if (use_push_options)
+ read_push_options(&push_options);
prepare_shallow_info(&si, &shallow);
if (!si.nr_ours && !si.nr_theirs)
unpack_status = unpack_with_sideband(&si);
update_shallow_info(commands, &si, &ref);
}
- execute_commands(commands, unpack_status, &si);
+ use_keepalive = KEEPALIVE_ALWAYS;
+ execute_commands(commands, unpack_status, &si,
+ &push_options);
if (pack_lockfile)
unlink_or_warn(pack_lockfile);
if (report_status)
report(commands, unpack_status);
- run_receive_hook(commands, "post-receive", 1);
+ run_receive_hook(commands, "post-receive", 1,
+ &push_options);
run_update_post_hook(commands);
+ if (push_options.nr)
+ string_list_clear(&push_options, 0);
if (auto_gc) {
const char *argv_gc_auto[] = {
"gc", "--auto", "--quiet", NULL,
};
- int opt = RUN_GIT_CMD | RUN_COMMAND_STDOUT_TO_STDERR;
+ struct child_process proc = CHILD_PROCESS_INIT;
+
+ proc.no_stdin = 1;
+ proc.stdout_to_stderr = 1;
+ proc.err = use_sideband ? -1 : 0;
+ proc.git_cmd = 1;
+ proc.argv = argv_gc_auto;
+
close_all_packs();
- run_command_v_opt(argv_gc_auto, opt);
+ if (!start_command(&proc)) {
+ if (use_sideband)
+ copy_to_sideband(proc.err, -1, NULL);
+ finish_command(&proc);
+ }
}
if (auto_update_server_info)
update_server_info(0);
enum { NO_REBASE, NORMAL_REBASE, INTERACTIVE_REBASE } rebase;
};
-static struct string_list branch_list;
+static struct string_list branch_list = STRING_LIST_INIT_NODUP;
static const char *abbrev_ref(const char *name, const char *prefix)
{
return 0;
}
- /* make sure that symrefs are deleted */
- if (flags & REF_ISSYMREF)
- return unlink(git_path("%s", refname));
-
string_list_append(branches->branches, refname);
return 0;
strbuf_release(&buf);
if (!result)
- result = delete_refs(&branches);
+ result = delete_refs(&branches, REF_NODEREF);
string_list_clear(&branches, 0);
if (skipped.nr) {
struct show_info *show_info = cb_data;
struct branch_info *branch_info = item->util;
struct string_list *merge = &branch_info->merge;
- const char *also;
+ int width = show_info->width + 4;
int i;
if (branch_info->rebase && branch_info->merge.nr > 1) {
printf(" %-*s ", show_info->width, item->string);
if (branch_info->rebase) {
- printf_ln(_(branch_info->rebase == INTERACTIVE_REBASE ?
- "rebases interactively onto remote %s" :
- "rebases onto remote %s"), merge->items[0].string);
+ printf_ln(branch_info->rebase == INTERACTIVE_REBASE
+ ? _("rebases interactively onto remote %s")
+ : _("rebases onto remote %s"), merge->items[0].string);
return 0;
} else if (show_info->any_rebase) {
printf_ln(_(" merges with remote %s"), merge->items[0].string);
- also = _(" and with remote");
+ width++;
} else {
printf_ln(_("merges with remote %s"), merge->items[0].string);
- also = _(" and with remote");
}
for (i = 1; i < merge->nr; i++)
- printf(" %-*s %s %s\n", show_info->width, "", also,
+ printf(_("%-*s and with remote %s\n"), width, "",
merge->items[i].string);
return 0;
the one in " Fetch URL: %s" translation */
printf_ln(_(" Push URL: %s"), url[i]);
if (!i)
- printf_ln(_(" Push URL: %s"), "(no URL)");
+ printf_ln(_(" Push URL: %s"), _("(no URL)"));
if (no_query)
- printf_ln(_(" HEAD branch: %s"), "(not queried)");
+ printf_ln(_(" HEAD branch: %s"), _("(not queried)"));
else if (!states.heads.nr)
- printf_ln(_(" HEAD branch: %s"), "(unknown)");
+ printf_ln(_(" HEAD branch: %s"), _("(unknown)"));
else if (states.heads.nr == 1)
printf_ln(_(" HEAD branch: %s"), states.heads.items[0].string);
else {
string_list_sort(&refs_to_prune);
if (!dry_run)
- result |= delete_refs(&refs_to_prune);
+ result |= delete_refs(&refs_to_prune, 0);
for_each_string_list_item(item, &states.stale) {
const char *refname = item->util;
int pack_everything = 0;
int delete_redundant = 0;
const char *unpack_unreachable = NULL;
+ int keep_unreachable = 0;
const char *window = NULL, *window_memory = NULL;
const char *depth = NULL;
const char *max_pack_size = NULL;
N_("write bitmap index")),
OPT_STRING(0, "unpack-unreachable", &unpack_unreachable, N_("approxidate"),
N_("with -A, do not loosen objects older than this")),
+ OPT_BOOL('k', "keep-unreachable", &keep_unreachable,
+ N_("with -a, repack unreachable objects")),
OPT_STRING(0, "window", &window, N_("n"),
N_("size of the window used for delta compression")),
OPT_STRING(0, "window-memory", &window_memory, N_("bytes"),
if (delete_redundant && repository_format_precious_objects)
die(_("cannot delete packs in a precious-objects repo"));
+ if (keep_unreachable &&
+ (unpack_unreachable || (pack_everything & LOOSEN_UNREACHABLE)))
+ die(_("--keep-unreachable and -A are incompatible"));
+
if (pack_kept_objects < 0)
pack_kept_objects = write_bitmaps;
} else if (pack_everything & LOOSEN_UNREACHABLE) {
argv_array_push(&cmd.args,
"--unpack-unreachable");
+ } else if (keep_unreachable) {
+ argv_array_push(&cmd.args, "--keep-unreachable");
+ argv_array_push(&cmd.args, "--pack-loose-unreachable");
} else {
argv_array_push(&cmd.env_array, "GIT_REF_PARANOIA=1");
}
item->string,
exts[ext].name);
if (remove_path(fname))
- warning(_("removing '%s' failed"), fname);
+ warning(_("failed to remove '%s'"), fname);
free(fname);
}
}
if (body) {
const char *eol;
size_t len;
- body += 2;
+ body = skip_blank_lines(body + 2);
eol = strchr(body, '\n');
len = eol ? eol - body : strlen(body);
printf(" %.*s\n", (int) len, body);
for (i = 0; i < q->nr; i++) {
struct diff_filespec *one = q->queue[i]->one;
- int is_missing = !(one->mode && !is_null_sha1(one->sha1));
+ int is_missing = !(one->mode && !is_null_oid(&one->oid));
struct cache_entry *ce;
if (is_missing && !intent_to_add) {
continue;
}
- ce = make_cache_entry(one->mode, one->sha1, one->path,
+ ce = make_cache_entry(one->mode, one->oid.hash, one->path,
0, 0);
if (!ce)
die(_("make_cache_entry failed for path '%s'"),
return 1;
diffcore_std(&opt);
diff_flush(&opt);
- free_pathspec(&opt.pathspec);
+ clear_pathspec(&opt.pathspec);
return 0;
}
#include "log-tree.h"
#include "graph.h"
#include "bisect.h"
+#include "progress.h"
static const char rev_list_usage[] =
"git rev-list [OPTION] <commit-id>... [ -- paths... ]\n"
" --bisect-all"
;
+static struct progress *progress;
+static unsigned progress_counter;
+
static void finish_commit(struct commit *commit, void *data);
static void show_commit(struct commit *commit, void *data)
{
struct rev_list_info *info = data;
struct rev_info *revs = info->revs;
+ display_progress(progress, ++progress_counter);
+
if (info->flags & REV_LIST_QUIET) {
finish_commit(commit, data);
return;
{
struct rev_list_info *info = cb_data;
finish_object(obj, name, cb_data);
+ display_progress(progress, ++progress_counter);
if (info->flags & REV_LIST_QUIET)
return;
show_object_with_name(stdout, obj, name);
int bisect_show_vars = 0;
int bisect_find_all = 0;
int use_bitmap_index = 0;
+ const char *show_progress = NULL;
git_config(git_default_config, NULL);
init_revisions(&revs, prefix);
test_bitmap_walk(&revs);
return 0;
}
+ if (skip_prefix(arg, "--progress=", &arg)) {
+ show_progress = arg;
+ continue;
+ }
usage(rev_list_usage);
}
if (bisect_list)
revs.limited = 1;
+ if (show_progress)
+ progress = start_progress_delay(show_progress, 0, 0, 2);
+
if (use_bitmap_index && !revs.prune) {
if (revs.count && !revs.left_right && !revs.cherry_mark) {
uint32_t commit_count;
+ int max_count = revs.max_count;
if (!prepare_bitmap_walk(&revs)) {
count_bitmap_commit_list(&commit_count, NULL, NULL, NULL);
+ if (max_count >= 0 && max_count < commit_count)
+ commit_count = max_count;
printf("%d\n", commit_count);
return 0;
}
- } else if (revs.tag_objects && revs.tree_objects && revs.blob_objects) {
+ } else if (revs.max_count < 0 &&
+ revs.tag_objects && revs.tree_objects && revs.blob_objects) {
if (!prepare_bitmap_walk(&revs)) {
traverse_bitmap_commit_list(&show_object_fast);
return 0;
traverse_commit_list(&revs, show_commit, show_object, &info);
+ stop_progress(&progress);
+
if (revs.count) {
if (revs.left_right && revs.cherry_mark)
printf("%d\t%d\t%d\n", revs.count_left, revs.count_right, revs.count_same);
(stop_at_non_option ? PARSE_OPT_STOP_AT_NON_OPTION : 0) |
PARSE_OPT_SHELL_EVAL);
- strbuf_addf(&parsed, " --");
+ strbuf_addstr(&parsed, " --");
sq_quote_argv(&parsed, argv, 0);
puts(parsed.buf);
return 0;
const char * const * usage_str = revert_or_cherry_pick_usage(opts);
const char *me = action_name(opts);
int cmd = 0;
- struct option options[] = {
+ struct option base_options[] = {
OPT_CMDMODE(0, "quit", &cmd, N_("end revert or cherry-pick sequence"), 'q'),
OPT_CMDMODE(0, "continue", &cmd, N_("resume revert or cherry-pick sequence"), 'c'),
OPT_CMDMODE(0, "abort", &cmd, N_("cancel revert or cherry-pick sequence"), 'a'),
N_("option for merge strategy"), option_parse_x),
{ OPTION_STRING, 'S', "gpg-sign", &opts->gpg_sign, N_("key-id"),
N_("GPG sign commit"), PARSE_OPT_OPTARG, NULL, (intptr_t) "" },
- OPT_END(),
- OPT_END(),
- OPT_END(),
- OPT_END(),
- OPT_END(),
- OPT_END(),
+ OPT_END()
};
+ struct option *options = base_options;
if (opts->action == REPLAY_PICK) {
struct option cp_extra[] = {
OPT_BOOL(0, "keep-redundant-commits", &opts->keep_redundant_commits, N_("keep redundant, empty commits")),
OPT_END(),
};
- if (parse_options_concat(options, ARRAY_SIZE(options), cp_extra))
- die(_("program error"));
+ options = parse_options_concat(options, cp_extra);
}
argc = parse_options(argc, argv, NULL, options, usage_str,
*/
if (!index_only) {
int removed = 0, gitmodules_modified = 0;
+ struct strbuf buf = STRBUF_INIT;
for (i = 0; i < list.nr; i++) {
const char *path = list.entry[i].name;
if (list.entry[i].is_submodule) {
continue;
}
} else {
- struct strbuf buf = STRBUF_INIT;
+ strbuf_reset(&buf);
strbuf_addstr(&buf, path);
if (!remove_dir_recursively(&buf, 0)) {
removed = 1;
/* Submodule was removed by user */
if (!remove_path_from_gitmodules(path))
gitmodules_modified = 1;
- strbuf_release(&buf);
/* Fallthrough and let remove_path() fail. */
}
}
if (!removed)
die_errno("git rm: '%s'", path);
}
+ strbuf_release(&buf);
if (gitmodules_modified)
stage_updated_gitmodules();
}
int cmd_shortlog(int argc, const char **argv, const char *prefix)
{
- static struct shortlog log;
- static struct rev_info rev;
+ struct shortlog log = { STRING_LIST_INIT_NODUP };
+ struct rev_info rev;
int nongit = !startup_info->have_repository;
- static const struct option options[] = {
+ const struct option options[] = {
OPT_BOOL('n', "numbered", &log.sort_by_number,
N_("sort output according to the number of commits per author")),
OPT_BOOL('s', "summary", &log.summary,
log.user_format = rev.commit_format == CMIT_FMT_USERFORMAT;
log.abbrev = rev.abbrev;
+ log.file = rev.diffopt.file;
/* assume HEAD if from a tty */
if (!nongit && !rev.pending.nr && isatty(0))
get_from_rev(&rev, &log);
shortlog_output(&log);
+ if (log.file != stdout)
+ fclose(log.file);
return 0;
}
for (i = 0; i < log->list.nr; i++) {
const struct string_list_item *item = &log->list.items[i];
if (log->summary) {
- printf("%6d\t%s\n", (int)UTIL_TO_INT(item), item->string);
+ fprintf(log->file, "%6d\t%s\n",
+ (int)UTIL_TO_INT(item), item->string);
} else {
struct string_list *onelines = item->util;
- printf("%s (%d):\n", item->string, onelines->nr);
+ fprintf(log->file, "%s (%d):\n",
+ item->string, onelines->nr);
for (j = onelines->nr - 1; j >= 0; j--) {
const char *msg = onelines->items[j].string;
if (log->wrap_lines) {
strbuf_reset(&sb);
add_wrapped_shortlog_msg(&sb, msg, log);
- fwrite(sb.buf, sb.len, 1, stdout);
+ fwrite(sb.buf, sb.len, 1, log->file);
}
else
- printf(" %s\n", msg);
+ fprintf(log->file, " %s\n", msg);
}
- putchar('\n');
+ putc('\n', log->file);
onelines->strdup_strings = 1;
string_list_clear(onelines, 0);
free(onelines);
static int clone_submodule(const char *path, const char *gitdir, const char *url,
const char *depth, const char *reference, int quiet)
{
- struct child_process cp;
- child_process_init(&cp);
+ struct child_process cp = CHILD_PROCESS_INIT;
argv_array_push(&cp.args, "clone");
argv_array_push(&cp.args, "--no-checkout");
/* configuration parameters which are passed on to the children */
int quiet;
+ int recommend_shallow;
const char *reference;
const char *depth;
const char *recursive_prefix;
/* If we want to stop as fast as possible and return an error */
unsigned quickstop : 1;
+
+ /* failed clones to be retried again */
+ const struct cache_entry **failed_clones;
+ int failed_clones_nr, failed_clones_alloc;
};
#define SUBMODULE_UPDATE_CLONE_INIT {0, MODULE_LIST_INIT, 0, \
- SUBMODULE_UPDATE_STRATEGY_INIT, 0, NULL, NULL, NULL, NULL, \
- STRING_LIST_INIT_DUP, 0}
+ SUBMODULE_UPDATE_STRATEGY_INIT, 0, -1, NULL, NULL, NULL, NULL, \
+ STRING_LIST_INIT_DUP, 0, NULL, 0, 0}
static void next_submodule_warn_missing(struct submodule_update_clone *suc,
argv_array_push(&child->args, "--quiet");
if (suc->prefix)
argv_array_pushl(&child->args, "--prefix", suc->prefix, NULL);
+ if (suc->recommend_shallow && sub->recommend_shallow == 1)
+ argv_array_push(&child->args, "--depth=1");
argv_array_pushl(&child->args, "--path", sub->path, NULL);
argv_array_pushl(&child->args, "--name", sub->name, NULL);
argv_array_pushl(&child->args, "--url", url, NULL);
static int update_clone_get_next_task(struct child_process *child,
struct strbuf *err,
void *suc_cb,
- void **void_task_cb)
+ void **idx_task_cb)
{
struct submodule_update_clone *suc = suc_cb;
+ const struct cache_entry *ce;
+ int index;
for (; suc->current < suc->list.nr; suc->current++) {
- const struct cache_entry *ce = suc->list.entries[suc->current];
+ ce = suc->list.entries[suc->current];
if (prepare_to_clone_next_submodule(ce, child, suc, err)) {
+ int *p = xmalloc(sizeof(*p));
+ *p = suc->current;
+ *idx_task_cb = p;
suc->current++;
return 1;
}
}
+
+ /*
+ * The loop above tried cloning each submodule once, now try the
+ * stragglers again, which we can imagine as an extension of the
+ * entry list.
+ */
+ index = suc->current - suc->list.nr;
+ if (index < suc->failed_clones_nr) {
+ int *p;
+ ce = suc->failed_clones[index];
+ if (!prepare_to_clone_next_submodule(ce, child, suc, err)) {
+ suc->current++;
+ strbuf_addf(err, "BUG: submodule considered for cloning, "
+ "doesn't need cloning any more?\n");
+ return 0;
+ }
+ p = xmalloc(sizeof(*p));
+ *p = suc->current;
+ *idx_task_cb = p;
+ suc->current++;
+ return 1;
+ }
+
return 0;
}
static int update_clone_start_failure(struct strbuf *err,
void *suc_cb,
- void *void_task_cb)
+ void *idx_task_cb)
{
struct submodule_update_clone *suc = suc_cb;
suc->quickstop = 1;
static int update_clone_task_finished(int result,
struct strbuf *err,
void *suc_cb,
- void *void_task_cb)
+ void *idx_task_cb)
{
+ const struct cache_entry *ce;
struct submodule_update_clone *suc = suc_cb;
+ int *idxP = *(int**)idx_task_cb;
+ int idx = *idxP;
+ free(idxP);
+
if (!result)
return 0;
- suc->quickstop = 1;
- return 1;
+ if (idx < suc->list.nr) {
+ ce = suc->list.entries[idx];
+ strbuf_addf(err, _("Failed to clone '%s'. Retry scheduled"),
+ ce->name);
+ strbuf_addch(err, '\n');
+ ALLOC_GROW(suc->failed_clones,
+ suc->failed_clones_nr + 1,
+ suc->failed_clones_alloc);
+ suc->failed_clones[suc->failed_clones_nr++] = ce;
+ return 0;
+ } else {
+ idx -= suc->list.nr;
+ ce = suc->failed_clones[idx];
+ strbuf_addf(err, _("Failed to clone '%s' a second time, aborting"),
+ ce->name);
+ strbuf_addch(err, '\n');
+ suc->quickstop = 1;
+ return 1;
+ }
+
+ return 0;
}
static int update_clone(int argc, const char **argv, const char *prefix)
"specified number of revisions")),
OPT_INTEGER('j', "jobs", &max_jobs,
N_("parallel jobs")),
+ OPT_BOOL(0, "recommend-shallow", &suc.recommend_shallow,
+ N_("whether the initial clone should follow the shallow recommendation")),
OPT__QUIET(&suc.quiet, N_("don't print cloning progress")),
OPT_END()
};
{
struct strbuf sb = STRBUF_INIT;
if (argc != 3)
- die("submodule--helper relative_path takes exactly 2 arguments, got %d", argc);
+ die("submodule--helper relative-path takes exactly 2 arguments, got %d", argc);
printf("%s", relative_path(argv[1], argv[2], &sb));
strbuf_release(&sb);
return 0;
}
+static const char *remote_submodule_branch(const char *path)
+{
+ const struct submodule *sub;
+ gitmodules_config();
+ git_config(submodule_config, NULL);
+
+ sub = submodule_from_path(null_sha1, path);
+ if (!sub)
+ return NULL;
+
+ if (!sub->branch)
+ return "master";
+
+ if (!strcmp(sub->branch, ".")) {
+ unsigned char sha1[20];
+ const char *refname = resolve_ref_unsafe("HEAD", 0, sha1, NULL);
+
+ if (!refname)
+ die(_("No such ref: %s"), "HEAD");
+
+ /* detached HEAD */
+ if (!strcmp(refname, "HEAD"))
+ die(_("Submodule (%s) branch configured to inherit "
+ "branch from superproject, but the superproject "
+ "is not on any branch"), sub->name);
+
+ if (!skip_prefix(refname, "refs/heads/", &refname))
+ die(_("Expecting a full ref name, got %s"), refname);
+ return refname;
+ }
+
+ return sub->branch;
+}
+
+static int resolve_remote_submodule_branch(int argc, const char **argv,
+ const char *prefix)
+{
+ const char *ret;
+ struct strbuf sb = STRBUF_INIT;
+ if (argc != 2)
+ die("submodule--helper remote-branch takes exactly one argument, got %d", argc);
+
+ ret = remote_submodule_branch(argv[1]);
+ if (!ret)
+ die("submodule %s doesn't exist", argv[1]);
+
+ printf("%s", ret);
+ strbuf_release(&sb);
+ return 0;
+}
+
struct cmd_struct {
const char *cmd;
int (*fn)(int, const char **, const char *);
{"relative-path", resolve_relative_path},
{"resolve-relative-url", resolve_relative_url},
{"resolve-relative-url-test", resolve_relative_url_test},
- {"init", module_init}
+ {"init", module_init},
+ {"remote-branch", resolve_remote_submodule_branch}
};
int cmd_submodule__helper(int argc, const char **argv, const char *prefix)
return; /* we are done */
else {
/* cannot resolve yet --- queue it */
- hashcpy(obj_list[nr].sha1, null_sha1);
+ hashclr(obj_list[nr].sha1);
add_delta_to_list(nr, base_sha1, 0, delta_data, delta_size);
return;
}
* The delta base object is itself a delta that
* has not been resolved yet.
*/
- hashcpy(obj_list[nr].sha1, null_sha1);
+ hashclr(obj_list[nr].sha1);
add_delta_to_list(nr, null_sha1, base_offset, delta_data, delta_size);
return;
}
if (save_nr != active_nr)
goto redo;
}
- free_pathspec(&pathspec);
+ clear_pathspec(&pathspec);
return 0;
}
report(_("Untracked cache enabled for '%s'"), get_git_work_tree());
break;
default:
- die("Bug: bad untracked_cache value: %d", untracked_cache);
+ die("BUG: bad untracked_cache value: %d", untracked_cache);
}
if (active_cache_changed) {
static const char * const worktree_usage[] = {
N_("git worktree add [<options>] <path> [<branch>]"),
- N_("git worktree prune [<options>]"),
N_("git worktree list [<options>]"),
+ N_("git worktree lock [<options>] <path>"),
+ N_("git worktree prune [<options>]"),
+ N_("git worktree unlock <path>"),
NULL
};
if (!dir)
return;
while ((d = readdir(dir)) != NULL) {
- if (!strcmp(d->d_name, ".") || !strcmp(d->d_name, ".."))
+ if (is_dot_or_dotdot(d->d_name))
continue;
strbuf_reset(&reason);
if (!prune_worktree(d->d_name, &reason))
struct strbuf sb = STRBUF_INIT;
const char *name;
struct stat st;
- struct child_process cp;
+ struct child_process cp = CHILD_PROCESS_INIT;
struct argv_array child_env = ARGV_ARRAY_INIT;
int counter = 0, len, ret;
struct strbuf symref = STRBUF_INIT;
*/
strbuf_reset(&sb);
strbuf_addf(&sb, "%s/HEAD", sb_repo.buf);
- write_file(sb.buf, "0000000000000000000000000000000000000000");
+ write_file(sb.buf, "%s", sha1_to_hex(null_sha1));
strbuf_reset(&sb);
strbuf_addf(&sb, "%s/commondir", sb_repo.buf);
write_file(sb.buf, "../..");
argv_array_pushf(&child_env, "%s=%s", GIT_DIR_ENVIRONMENT, sb_git.buf);
argv_array_pushf(&child_env, "%s=%s", GIT_WORK_TREE_ENVIRONMENT, path);
- memset(&cp, 0, sizeof(cp));
cp.git_cmd = 1;
if (commit)
if (ac < 1 || ac > 2)
usage_with_options(worktree_usage, options);
- path = prefix ? prefix_filename(prefix, strlen(prefix), av[0]) : av[0];
+ path = prefix_filename(prefix, strlen(prefix), av[0]);
branch = ac < 2 ? "HEAD" : av[1];
+ if (!strcmp(branch, "-"))
+ branch = "@{-1}";
+
opts.force_new_branch = !!new_branch_force;
if (opts.force_new_branch) {
struct strbuf symref = STRBUF_INIT;
}
if (opts.new_branch) {
- struct child_process cp;
- memset(&cp, 0, sizeof(cp));
+ struct child_process cp = CHILD_PROCESS_INIT;
cp.git_cmd = 1;
argv_array_push(&cp.args, "branch");
if (opts.force_new_branch)
return 0;
}
+static int lock_worktree(int ac, const char **av, const char *prefix)
+{
+ const char *reason = "", *old_reason;
+ struct option options[] = {
+ OPT_STRING(0, "reason", &reason, N_("string"),
+ N_("reason for locking")),
+ OPT_END()
+ };
+ struct worktree **worktrees, *wt;
+
+ ac = parse_options(ac, av, prefix, options, worktree_usage, 0);
+ if (ac != 1)
+ usage_with_options(worktree_usage, options);
+
+ worktrees = get_worktrees();
+ wt = find_worktree(worktrees, prefix, av[0]);
+ if (!wt)
+ die(_("'%s' is not a working tree"), av[0]);
+ if (is_main_worktree(wt))
+ die(_("The main working tree cannot be locked or unlocked"));
+
+ old_reason = is_worktree_locked(wt);
+ if (old_reason) {
+ if (*old_reason)
+ die(_("'%s' is already locked, reason: %s"),
+ av[0], old_reason);
+ die(_("'%s' is already locked"), av[0]);
+ }
+
+ write_file(git_common_path("worktrees/%s/locked", wt->id),
+ "%s", reason);
+ free_worktrees(worktrees);
+ return 0;
+}
+
+static int unlock_worktree(int ac, const char **av, const char *prefix)
+{
+ struct option options[] = {
+ OPT_END()
+ };
+ struct worktree **worktrees, *wt;
+ int ret;
+
+ ac = parse_options(ac, av, prefix, options, worktree_usage, 0);
+ if (ac != 1)
+ usage_with_options(worktree_usage, options);
+
+ worktrees = get_worktrees();
+ wt = find_worktree(worktrees, prefix, av[0]);
+ if (!wt)
+ die(_("'%s' is not a working tree"), av[0]);
+ if (is_main_worktree(wt))
+ die(_("The main working tree cannot be locked or unlocked"));
+ if (!is_worktree_locked(wt))
+ die(_("'%s' is not locked"), av[0]);
+ ret = unlink_or_warn(git_common_path("worktrees/%s/locked", wt->id));
+ free_worktrees(worktrees);
+ return ret;
+}
+
int cmd_worktree(int ac, const char **av, const char *prefix)
{
struct option options[] = {
if (ac < 2)
usage_with_options(worktree_usage, options);
+ if (!prefix)
+ prefix = "";
if (!strcmp(av[1], "add"))
return add(ac - 1, av + 1, prefix);
if (!strcmp(av[1], "prune"))
return prune(ac - 1, av + 1, prefix);
if (!strcmp(av[1], "list"))
return list(ac - 1, av + 1, prefix);
+ if (!strcmp(av[1], "lock"))
+ return lock_worktree(ac - 1, av + 1, prefix);
+ if (!strcmp(av[1], "unlock"))
+ return unlock_worktree(ac - 1, av + 1, prefix);
usage_with_options(worktree_usage, options);
}
i = 0;
while (i < entries) {
const struct cache_entry *ce = cache[i];
- struct cache_tree_sub *sub;
+ struct cache_tree_sub *sub = NULL;
const char *path, *slash;
int pathlen, entlen;
const unsigned char *sha1;
unsigned mode;
int expected_missing = 0;
+ int contains_ita = 0;
path = ce->name;
pathlen = ce_namelen(ce);
i += sub->count;
sha1 = sub->cache_tree->sha1;
mode = S_IFDIR;
- if (sub->cache_tree->entry_count < 0) {
+ contains_ita = sub->cache_tree->entry_count < 0;
+ if (contains_ita) {
to_invalidate = 1;
expected_missing = 1;
}
* they are not part of generated trees. Invalidate up
* to root to force cache-tree users to read elsewhere.
*/
- if (ce_intent_to_add(ce)) {
+ if (!sub && ce_intent_to_add(ce)) {
to_invalidate = 1;
continue;
}
+ /*
+ * "sub" can be an empty tree if all subentries are i-t-a.
+ */
+ if (contains_ita && !hashcmp(sha1, EMPTY_TREE_SHA1_BIN))
+ continue;
+
strbuf_grow(&buffer, entlen + 100);
strbuf_addf(&buffer, "%o %.*s%c", mode, entlen, path + baselen, '\0');
strbuf_add(&buffer, sha1, 20);
#define rename_cache_entry_at(pos, new_name) rename_index_entry_at(&the_index, (pos), (new_name))
#define remove_cache_entry_at(pos) remove_index_entry_at(&the_index, (pos))
#define remove_file_from_cache(path) remove_file_from_index(&the_index, (path))
-#define add_to_cache(path, st, flags) add_to_index(&the_index, (path), (st), (flags))
-#define add_file_to_cache(path, flags) add_file_to_index(&the_index, (path), (flags))
+#define add_to_cache(path, st, flags) add_to_index(&the_index, (path), (st), (flags), 0)
+#define add_file_to_cache(path, flags) add_file_to_index(&the_index, (path), (flags), 0)
#define refresh_cache(flags) refresh_index(&the_index, (flags), NULL, NULL, NULL)
#define ce_match_stat(ce, st, options) ie_match_stat(&the_index, (ce), (st), (options))
#define ce_modified(ce, st, options) ie_modified(&the_index, (ce), (st), (options))
#define ADD_CACHE_IGNORE_ERRORS 4
#define ADD_CACHE_IGNORE_REMOVAL 8
#define ADD_CACHE_INTENT 16
-extern int add_to_index(struct index_state *, const char *path, struct stat *, int flags);
-extern int add_file_to_index(struct index_state *, const char *path, int flags);
+extern int add_to_index(struct index_state *, const char *path, struct stat *, int flags, int force_mode);
+extern int add_file_to_index(struct index_state *, const char *path, int flags, int force_mode);
extern struct cache_entry *make_cache_entry(unsigned int mode, const unsigned char *sha1, const char *path, int stage, unsigned int refresh_options);
extern int ce_same_name(const struct cache_entry *a, const struct cache_entry *b);
extern void set_object_name_for_intent_to_add_entry(struct cache_entry *ce);
#define REFRESH_IGNORE_SUBMODULES 0x0010 /* ignore submodules */
#define REFRESH_IN_PORCELAIN 0x0020 /* user friendly output, not "needs update" */
extern int refresh_index(struct index_state *, unsigned int flags, const struct pathspec *pathspec, char *seen, const char *header_msg);
+extern struct cache_entry *refresh_cache_entry(struct cache_entry *, unsigned int);
extern void update_index_if_able(struct index_state *, struct lock_file *);
* directory while we were working. To be robust against this kind of
* race, callers might want to try invoking the function again when it
* returns SCLD_VANISHED.
+ *
+ * safe_create_leading_directories() temporarily changes path while it
+ * is working but restores it before returning.
+ * safe_create_leading_directories_const() doesn't modify path, even
+ * temporarily.
*/
enum scld_error {
SCLD_OK = 0,
* printf("%s -> %s", sha1_to_hex(one), sha1_to_hex(two));
*/
extern char *sha1_to_hex_r(char *out, const unsigned char *sha1);
+extern char *oid_to_hex_r(char *out, const struct object_id *oid);
extern char *sha1_to_hex(const unsigned char *sha1); /* static buffer result! */
extern char *oid_to_hex(const struct object_id *oid); /* same static buffer as sha1_to_hex */
DATE_ISO8601_STRICT,
DATE_RFC2822,
DATE_STRFTIME,
- DATE_RAW
+ DATE_RAW,
+ DATE_UNIX
} type;
const char *strftime_fmt;
int local;
extern const char *git_editor(void);
extern const char *git_pager(int stdout_is_tty);
extern int git_ident_config(const char *, const char *, void *);
+extern void reset_ident_date(void);
struct ident_split {
const char *name_begin;
char pack_name[FLEX_ARRAY]; /* more */
} *packed_git;
+/*
+ * A most-recently-used ordered version of the packed_git list, which can
+ * be iterated instead of packed_git (and marked via mru_mark).
+ */
+struct mru;
+extern struct mru *packed_git_mru;
+
struct pack_entry {
off_t offset;
unsigned char sha1[20];
extern void close_pack_windows(struct packed_git *);
extern void close_all_packs(void);
extern void unuse_pack(struct pack_window **);
-extern void free_pack_by_name(const char *);
extern void clear_delta_base_cache(void);
extern struct packed_git *add_packed_git(const char *path, size_t path_len, int local);
/* Request */
enum object_type *typep;
unsigned long *sizep;
- unsigned long *disk_sizep;
+ off_t *disk_sizep;
unsigned char *delta_base_sha1;
struct strbuf *typename;
const char *blob;
};
+enum config_origin_type {
+ CONFIG_ORIGIN_BLOB,
+ CONFIG_ORIGIN_FILE,
+ CONFIG_ORIGIN_STDIN,
+ CONFIG_ORIGIN_SUBMODULE_BLOB,
+ CONFIG_ORIGIN_CMDLINE
+};
+
typedef int (*config_fn_t)(const char *, const char *, void *);
extern int git_default_config(const char *, const char *, void *);
extern int git_config_from_file(config_fn_t fn, const char *, void *);
-extern int git_config_from_mem(config_fn_t fn, const char *origin_type,
+extern int git_config_from_mem(config_fn_t fn, const enum config_origin_type,
const char *name, const char *buf, size_t len, void *data);
extern void git_config_push_parameter(const char *text);
extern int git_config_from_parameters(config_fn_t fn, void *data);
extern const char *get_commit_output_encoding(void);
extern int git_config_parse_parameter(const char *, config_fn_t fn, void *data);
+
+enum config_scope {
+ CONFIG_SCOPE_UNKNOWN = 0,
+ CONFIG_SCOPE_SYSTEM,
+ CONFIG_SCOPE_GLOBAL,
+ CONFIG_SCOPE_REPO,
+ CONFIG_SCOPE_CMDLINE,
+};
+
+extern enum config_scope current_config_scope(void);
extern const char *current_config_origin_type(void);
extern const char *current_config_name(void);
struct key_value_info {
const char *filename;
int linenr;
+ enum config_origin_type origin_type;
+ enum config_scope scope;
};
extern NORETURN void git_die_config(const char *key, const char *err, ...) __attribute__((format(printf, 2, 3)));
extern int copy_file_with_time(const char *dst, const char *src, int mode);
extern void write_or_die(int fd, const void *buf, size_t count);
-extern int write_or_whine(int fd, const void *buf, size_t count, const char *msg);
-extern int write_or_whine_pipe(int fd, const void *buf, size_t count, const char *msg);
extern void fsync_or_die(int fd, const char *);
extern ssize_t read_in_full(int fd, void *buf, size_t count);
return write_in_full(fd, str, strlen(str));
}
-extern int write_file(const char *path, const char *fmt, ...);
-extern int write_file_gently(const char *path, const char *fmt, ...);
+/**
+ * Open (and truncate) the file at path, write the contents of buf to it,
+ * and close it. Dies if any errors are encountered.
+ */
+extern void write_file_buf(const char *path, const char *buf, size_t len);
+
+/**
+ * Like write_file_buf(), but format the contents into a buffer first.
+ * Additionally, write_file() will append a newline if one is not already
+ * present, making it convenient to write text files:
+ *
+ * write_file(path, "counter: %d", ctr);
+ */
+__attribute__((format (printf, 2, 3)))
+extern void write_file(const char *path, const char *fmt, ...);
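For illustration, the two helpers declared above might be used roughly as follows; the file names and the `ctr` variable are hypothetical, not part of the patch:

	/* format a short text file; a trailing newline is added if missing */
	write_file("counter.txt", "counter: %d", ctr);

	/* write an exact byte range and die on any error */
	write_file_buf("notes.bin", buf.buf, buf.len);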
/* pager.c */
extern void setup_pager(void);
* return 0 if success, 1 - if addition of a file failed and
* ADD_FILES_IGNORE_ERRORS was specified in flags
*/
-int add_files_to_cache(const char *prefix, const struct pathspec *pathspec, int flags);
+int add_files_to_cache(const char *prefix, const struct pathspec *pathspec, int flags, int force_mode);
/* diff.c */
extern int diff_auto_refresh_index;
return -1;
}
-static int parse_attr(const char *name, int len)
+static int parse_attr(const char *name, size_t len)
{
- static const int attr_values[] = { 1, 2, 4, 5, 7,
- 22, 22, 24, 25, 27 };
- static const char * const attr_names[] = {
- "bold", "dim", "ul", "blink", "reverse",
- "nobold", "nodim", "noul", "noblink", "noreverse"
+ static const struct {
+ const char *name;
+ size_t len;
+ int val, neg;
+ } attrs[] = {
+#define ATTR(x, val, neg) { (x), sizeof(x)-1, (val), (neg) }
+ ATTR("bold", 1, 22),
+ ATTR("dim", 2, 22),
+ ATTR("italic", 3, 23),
+ ATTR("ul", 4, 24),
+ ATTR("blink", 5, 25),
+ ATTR("reverse", 7, 27),
+ ATTR("strike", 9, 29)
+#undef ATTR
};
+ int negate = 0;
int i;
- for (i = 0; i < ARRAY_SIZE(attr_names); i++) {
- const char *str = attr_names[i];
- if (!strncasecmp(name, str, len) && !str[len])
- return attr_values[i];
+
+ if (skip_prefix_mem(name, len, "no", &name, &len)) {
+ skip_prefix_mem(name, len, "-", &name, &len);
+ negate = 1;
+ }
+
+ for (i = 0; i < ARRAY_SIZE(attrs); i++) {
+ if (attrs[i].len == len && !memcmp(attrs[i].name, name, len))
+ return negate ? attrs[i].neg : attrs[i].val;
}
return -1;
}
struct strbuf;
-/* 2 + (2 * num_attrs) + 8 + 1 + 8 + 'm' + NUL */
-/* "\033[1;2;4;5;7;38;5;2xx;48;5;2xxm\0" */
/*
* The maximum length of ANSI color sequence we would generate:
* - leading ESC '[' 2
- * - attr + ';' 3 * 10 (e.g. "1;")
+ * - attr + ';' 2 * num_attr (e.g. "1;")
+ * - no-attr + ';' 3 * num_attr (e.g. "22;")
* - fg color + ';' 17 (e.g. "38;2;255;255;255;")
* - bg color + ';' 17 (e.g. "48;2;255;255;255;")
* - terminating 'm' NUL 2
*
- * The above overcounts attr (we only use 5 not 8) and one semicolon
- * but it is close enough.
+ * The above overcounts by one semicolon but it is close enough.
+ *
+ * The space for attributes is also slightly overallocated, as
+ * the negation for some attributes is the same (e.g., nobold and nodim).
+ *
+ * We allocate space for 7 attributes.
*/
-#define COLOR_MAXLEN 70
+#define COLOR_MAXLEN 75
-/*
- * IMPORTANT: Due to the way these color codes are emulated on Windows,
- * write them only using printf(), fprintf(), and fputs(). In particular,
- * do not use puts() or write().
- */
#define GIT_COLOR_NORMAL ""
#define GIT_COLOR_RESET "\033[m"
#define GIT_COLOR_BOLD "\033[1m"
memset(p->parent, 0,
sizeof(p->parent[0]) * num_parent);
- hashcpy(p->oid.hash, q->queue[i]->two->sha1);
+ oidcpy(&p->oid, &q->queue[i]->two->oid);
p->mode = q->queue[i]->two->mode;
- hashcpy(p->parent[n].oid.hash, q->queue[i]->one->sha1);
+ oidcpy(&p->parent[n].oid, &q->queue[i]->one->oid);
p->parent[n].mode = q->queue[i]->one->mode;
p->parent[n].status = q->queue[i]->status;
*tail = p;
continue;
}
- hashcpy(p->parent[n].oid.hash, q->queue[i]->one->sha1);
+ oidcpy(&p->parent[n].oid, &q->queue[i]->one->oid);
p->parent[n].mode = q->queue[i]->one->mode;
p->parent[n].status = q->queue[i]->status;
for (i = 0; i < num_parent; i++) {
pair->one[i].path = p->path;
pair->one[i].mode = p->parent[i].mode;
- hashcpy(pair->one[i].sha1, p->parent[i].oid.hash);
- pair->one[i].sha1_valid = !is_null_oid(&p->parent[i].oid);
+ oidcpy(&pair->one[i].oid, &p->parent[i].oid);
+ pair->one[i].oid_valid = !is_null_oid(&p->parent[i].oid);
pair->one[i].has_more_entries = 1;
}
pair->one[num_parent - 1].has_more_entries = 0;
pair->two->path = p->path;
pair->two->mode = p->mode;
- hashcpy(pair->two->sha1, p->oid.hash);
- pair->two->sha1_valid = !is_null_oid(&p->oid);
+ oidcpy(&pair->two->oid, &p->oid);
+ pair->two->oid_valid = !is_null_oid(&p->oid);
return pair;
}
free(tmp);
}
- free_pathspec(&diffopts.pathspec);
+ clear_pathspec(&diffopts.pathspec);
}
void diff_tree_combined_merge(const struct commit *commit, int dense,
*
* After including this header file, using:
*
- * define_commit_slab(indegee, int);
+ * define_commit_slab(indegree, int);
*
* will let you call the following functions:
*
return slabname##_at_peek(s, c, 0); \
} \
\
-static int stat_ ##slabname## realloc
+struct slabname
/*
- * Note that this seemingly redundant second declaration is required
+ * Note that this redundant forward declaration is required
* to allow a terminating semicolon, which makes instantiations look
* like function declarations. I.e., the expansion of
*
* define_commit_slab(indegree, int);
*
- * ends in 'static int stat_indegreerealloc;'. This would otherwise
+ * ends in 'struct indegree;'. This would otherwise
* be a syntax error according (at least) to ISO C. It's hard to
* catch because GCC silently parses it by default.
*/
while (*p && (*p != '\n' || p[1] != '\n'))
p++;
if (*p) {
- p += 2;
+ p = skip_blank_lines(p + 2);
for (eol = p; *eol && *eol != '\n'; eol++)
; /* do nothing */
} else
{
struct strbuf sig = STRBUF_INIT;
int inspos, copypos;
+ const char *eoh;
/* find the end of the header */
- inspos = strstr(buf->buf, "\n\n") - buf->buf + 1;
+ eoh = strstr(buf->buf, "\n\n");
+ if (!eoh)
+ inspos = buf->len;
+ else
+ inspos = eoh - buf->buf + 1;
if (!keyid || !*keyid)
keyid = get_signing_key();
return result;
}
+void set_merge_remote_desc(struct commit *commit,
+ const char *name, struct object *obj)
+{
+ struct merge_remote_desc *desc;
+ FLEX_ALLOC_STR(desc, name, name);
+ desc->obj = obj;
+ commit->util = desc;
+}
+
struct commit *get_merge_parent(const char *name)
{
struct object *obj;
return NULL;
obj = parse_object(oid.hash);
commit = (struct commit *)peel_to_type(name, 0, obj, OBJ_COMMIT);
- if (commit && !commit->util) {
- struct merge_remote_desc *desc;
- desc = xmalloc(sizeof(*desc));
- desc->obj = obj;
- desc->name = strdup(name);
- commit->util = desc;
- }
+ if (commit && !commit->util)
+ set_merge_remote_desc(commit, name, obj);
return commit;
}
return &new->next;
}
-void print_commit_list(struct commit_list *list,
- const char *format_cur,
- const char *format_last)
-{
- for ( ; list; list = list->next) {
- const char *format = list->next ? format_cur : format_last;
- printf(format, oid_to_hex(&list->item->object.oid));
- }
-}
-
const char *find_commit_header(const char *msg, const char *key, size_t *out_len)
{
int key_len = strlen(key);
CMIT_FMT_FULLER,
CMIT_FMT_ONELINE,
CMIT_FMT_EMAIL,
+ CMIT_FMT_MBOXRD,
CMIT_FMT_USERFORMAT,
CMIT_FMT_UNSPECIFIED
};
+static inline int cmit_fmt_is_mail(enum cmit_fmt fmt)
+{
+ return (fmt == CMIT_FMT_EMAIL || fmt == CMIT_FMT_MBOXRD);
+}
+
struct pretty_print_context {
/*
* Callers should tweak these to change the behavior of pp_* functions.
* should not be counted on by callers.
*/
struct string_list in_body_headers;
+ int graph_width;
};
struct userformat_want {
const char *line_separator);
extern void userformat_find_requirements(const char *fmt, struct userformat_want *w);
extern int commit_format_is_empty(enum cmit_fmt);
+extern const char *skip_blank_lines(const char *msg);
extern void format_commit_message(const struct commit *commit,
const char *format, struct strbuf *sb,
const struct pretty_print_context *context);
struct merge_remote_desc {
struct object *obj; /* the named object, could be a tag */
- const char *name;
+ char name[FLEX_ARRAY];
};
#define merge_remote_util(commit) ((struct merge_remote_desc *)((commit)->util))
+extern void set_merge_remote_desc(struct commit *commit,
+ const char *name, struct object *obj);
/*
* Given "name" from the command line to merge, find the commit object
struct strbuf *message, struct strbuf *signature);
extern int remove_signature(struct strbuf *buf);
-extern void print_commit_list(struct commit_list *list,
- const char *format_cur,
- const char *format_last);
-
/*
* Check the signature of the given commit. The result of the check is stored
* in sig->check_result, 'G' for a good signature, 'U' for a good signature
--- /dev/null
+#include "cache.h"
+#include "exec_cmd.h"
+
+/*
+ * Many parts of Git have subprograms that communicate via a pipe and
+ * expect the upstream of the pipe to die with SIGPIPE when the
+ * downstream of the pipe does not need to read all that is written.
+ * Some third-party programs that ignore or block SIGPIPE for their own
+ * reasons forget to restore SIGPIPE handling to the default before
+ * spawning Git and break this carefully orchestrated machinery.
+ *
+ * Restore the way SIGPIPE is handled to default, which is what we
+ * expect.
+ */
+static void restore_sigpipe_to_default(void)
+{
+ sigset_t unblock;
+
+ sigemptyset(&unblock);
+ sigaddset(&unblock, SIGPIPE);
+ sigprocmask(SIG_UNBLOCK, &unblock, NULL);
+ signal(SIGPIPE, SIG_DFL);
+}
+
+int main(int argc, const char **argv)
+{
+ /*
+ * Always open file descriptors 0/1/2 to avoid clobbering files
+ * in die(). It also avoids messing up when the pipes are dup'ed
+ * onto stdin/stdout/stderr in the child processes we spawn.
+ */
+ sanitize_stdfds();
+
+ git_setup_gettext();
+
+ argv[0] = git_extract_argv0_path(argv[0]);
+
+ restore_sigpipe_to_default();
+
+ return cmd_main(argc, argv);
+}
}
}
-
-static const char *make_backslash_path(const char *path)
-{
- static char buf[PATH_MAX + 1];
- char *c;
-
- if (strlcpy(buf, path, PATH_MAX) >= PATH_MAX)
- die("Too long path: %.*s", 60, path);
-
- for (c = buf; *c; c++) {
- if (*c == '/')
- *c = '\\';
- }
- return buf;
-}
-
-void mingw_open_html(const char *unixpath)
-{
- const char *htmlpath = make_backslash_path(unixpath);
- typedef HINSTANCE (WINAPI *T)(HWND, const char *,
- const char *, const char *, const char *, INT);
- T ShellExecute;
- HMODULE shell32;
- int r;
-
- shell32 = LoadLibrary("shell32.dll");
- if (!shell32)
- die("cannot load shell32.dll");
- ShellExecute = (T)GetProcAddress(shell32, "ShellExecuteA");
- if (!ShellExecute)
- die("cannot run browser");
-
- printf("Launching default browser to display HTML ...\n");
- r = HCAST(int, ShellExecute(NULL, "open", htmlpath,
- NULL, "\\", SW_SHOWNORMAL));
- FreeLibrary(shell32);
- /* see the MSDN documentation referring to the result codes here */
- if (r <= 32) {
- die("failed to launch browser for %.*s", MAX_PATH, unixpath);
- }
-}
-
int link(const char *oldpath, const char *newpath)
{
typedef BOOL (WINAPI *T)(LPCWSTR, LPCWSTR, LPSECURITY_ATTRIBUTES);
return -1;
}
-static void setup_windows_environment()
+static void setup_windows_environment(void)
{
char *tmp = getenv("TMPDIR");
extern int __wgetmainargs(int *argc, wchar_t ***argv, wchar_t ***env, int glob,
_startupinfo *si);
-static NORETURN void die_startup()
+static NORETURN void die_startup(void)
{
fputs("fatal: not enough memory for initialization", stderr);
exit(128);
return memcpy(malloc_startup(len), buffer, len);
}
-void mingw_startup()
+void mingw_startup(void)
{
int i, maxlen, argc;
char *buffer;
#define F_SETFD 2
#define FD_CLOEXEC 0x1
+#if !defined O_CLOEXEC && defined O_NOINHERIT
+#define O_CLOEXEC O_NOINHERIT
+#endif
+
#ifndef EAFNOSUPPORT
#define EAFNOSUPPORT WSAEAFNOSUPPORT
#endif
#ifndef ECONNABORTED
#define ECONNABORTED WSAECONNABORTED
#endif
+#ifndef ENOTSOCK
+#define ENOTSOCK WSAENOTSOCK
+#endif
struct passwd {
char *pw_name;
#include <inttypes.h>
#endif
-void mingw_open_html(const char *path);
-#define open_html mingw_open_html
-
/**
* Converts UTF-8 encoded string to UTF-16LE.
*
* A replacement of main() that adds win32 specific initialization.
*/
-void mingw_startup();
-#define main(c,v) dummy_decl_mingw_main(); \
+void mingw_startup(void);
+#define main(c,v) dummy_decl_mingw_main(void); \
static int mingw_main(c,v); \
-int main(int argc, char **argv) \
+int main(int argc, const char **argv) \
{ \
mingw_startup(); \
return mingw_main(__argc, (void *)__argv); \
void **ret;
threadcache *tc;
int mymspace;
- size_t i, *adjustedsizes=(size_t *) alloca(elems*sizeof(size_t));
- if(!adjustedsizes) return 0;
- for(i=0; i<elems; i++)
- adjustedsizes[i]=sizes[i]<sizeof(threadcacheblk) ? sizeof(threadcacheblk) : sizes[i];
+ size_t i, *adjustedsizes=(size_t *) alloca(elems*sizeof(size_t));
+ if(!adjustedsizes) return 0;
+ for(i=0; i<elems; i++)
+ adjustedsizes[i]=sizes[i]<sizeof(threadcacheblk) ? sizeof(threadcacheblk) : sizes[i];
GetThreadCache(&p, &tc, &mymspace, 0);
GETMSPACE(m, p, tc, mymspace, 0,
ret=mspace_independent_comalloc(m, elems, adjustedsizes, chunks));
*/
char *strdup(const char *s1)
{
- char *s2 = 0;
- if (s1) {
- size_t len = strlen(s1) + 1;
- s2 = malloc(len);
+ size_t len = strlen(s1) + 1;
+ char *s2 = malloc(len);
+
+ if (s2)
memcpy(s2, s1, len);
- }
return s2;
}
#endif
Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
02110-1301 USA. */
-#include <stdint.h>
-
static reg_errcode_t re_compile_internal (regex_t *preg, const char * pattern,
size_t length, reg_syntax_t syntax);
static void re_compile_fastmap_iter (regex_t *bufp,
GNU regex allows. Include it before <regex.h>, which correctly
#undefs RE_DUP_MAX and sets it to the right value. */
#include <limits.h>
+#include <stdint.h>
#ifdef GAWK
#undef alloca
(fd & (IOINFO_ARRAY_ELTS - 1)) * sizeof_ioinfo);
}
-static int init_sizeof_ioinfo()
+static int init_sizeof_ioinfo(void)
{
int istty, wastty;
/* don't init twice */
size_t pos;
} buf;
} u;
- const char *origin_type;
+ enum config_origin_type origin_type;
const char *name;
const char *path;
int die_on_error;
long (*do_ftell)(struct config_source *c);
};
+/*
+ * These variables record the "current" config source, which
+ * can be accessed by parsing callbacks.
+ *
+ * The "cf" variable will be non-NULL only when we are actually parsing a real
+ * config source (file, blob, cmdline, etc).
+ *
+ * The "current_config_kvi" variable will be non-NULL only when we are feeding
+ * cached config from a configset into a callback.
+ *
+ * They should generally never be non-NULL at the same time. If they are both
+ * NULL, then we aren't parsing anything (and depending on the function looking
+ * at the variables, it's either a bug for it to be called in the first place,
+ * or it's a function which can be reused for non-config purposes, and should
+ * fall back to some sane behavior).
+ */
static struct config_source *cf;
+static struct key_value_info *current_config_kvi;
+
+/*
+ * Similar to the variables above, this gives access to the "scope" of the
+ * current value (repo, global, etc). For cached values, it can be found via
+ * the current_config_kvi as above. During parsing, the current value can be
+ * found in this variable. It's not part of "cf" because it transcends a single
+ * file (i.e., a file included from .git/config is still in "repo" scope).
+ */
+static enum config_scope current_parsing_scope;
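For illustration, a config callback might query this state through the public accessors; the callback name and the message are hypothetical, not part of the patch:

	static int example_cb(const char *key, const char *value, void *data)
	{
		/* illustrative only: report values that came from the command line */
		if (current_config_scope() == CONFIG_SCOPE_CMDLINE)
			warning("'%s' comes from the command line", key);
		return 0;
	}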
static int zlib_compression_seen;
if (!access_or_die(path, R_OK, 0)) {
if (++inc->depth > MAX_INCLUDE_DEPTH)
die(include_depth_advice, MAX_INCLUDE_DEPTH, path,
- cf && cf->name ? cf->name : "the command line");
+ !cf ? "<unknown>" :
+ cf->name ? cf->name :
+ "the command line");
ret = git_config_from_file(git_config_include, path, inc);
inc->depth--;
}
int git_config_from_parameters(config_fn_t fn, void *data)
{
const char *env = getenv(CONFIG_DATA_ENVIRONMENT);
+ int ret = 0;
char *envw;
const char **argv = NULL;
int nr = 0, alloc = 0;
int i;
+ struct config_source source;
if (!env)
return 0;
+
+ memset(&source, 0, sizeof(source));
+ source.prev = cf;
+ source.origin_type = CONFIG_ORIGIN_CMDLINE;
+ cf = &source;
+
/* sq_dequote will write over it */
envw = xstrdup(env);
if (sq_dequote_to_argv(envw, &argv, &nr, &alloc) < 0) {
- free(envw);
- return error("bogus format in " CONFIG_DATA_ENVIRONMENT);
+ ret = error("bogus format in " CONFIG_DATA_ENVIRONMENT);
+ goto out;
}
for (i = 0; i < nr; i++) {
if (git_config_parse_parameter(argv[i], fn, data) < 0) {
- free(argv);
- free(envw);
- return -1;
+ ret = -1;
+ goto out;
}
}
+out:
free(argv);
free(envw);
- return nr > 0;
+ cf = source.prev;
+ return ret;
}
static int get_next_char(void)
int comment = 0;
int baselen = 0;
struct strbuf *var = &cf->var;
+ int error_return = 0;
+ char *error_msg = NULL;
/* U+FEFF Byte Order Mark in UTF8 */
const char *bomptr = utf8_bom;
if (get_value(fn, data, var) < 0)
break;
}
+
+ switch (cf->origin_type) {
+ case CONFIG_ORIGIN_BLOB:
+ error_msg = xstrfmt(_("bad config line %d in blob %s"),
+ cf->linenr, cf->name);
+ break;
+ case CONFIG_ORIGIN_FILE:
+ error_msg = xstrfmt(_("bad config line %d in file %s"),
+ cf->linenr, cf->name);
+ break;
+ case CONFIG_ORIGIN_STDIN:
+ error_msg = xstrfmt(_("bad config line %d in standard input"),
+ cf->linenr);
+ break;
+ case CONFIG_ORIGIN_SUBMODULE_BLOB:
+ error_msg = xstrfmt(_("bad config line %d in submodule-blob %s"),
+ cf->linenr, cf->name);
+ break;
+ case CONFIG_ORIGIN_CMDLINE:
+ error_msg = xstrfmt(_("bad config line %d in command line %s"),
+ cf->linenr, cf->name);
+ break;
+ default:
+ error_msg = xstrfmt(_("bad config line %d in %s"),
+ cf->linenr, cf->name);
+ }
+
if (cf->die_on_error)
- die(_("bad config line %d in %s %s"), cf->linenr, cf->origin_type, cf->name);
+ die("%s", error_msg);
else
- return error(_("bad config line %d in %s %s"), cf->linenr, cf->origin_type, cf->name);
+ error_return = error("%s", error_msg);
+
+ free(error_msg);
+ return error_return;
}
static int parse_unit_factor(const char *end, uintmax_t *val)
NORETURN
static void die_bad_number(const char *name, const char *value)
{
- const char *reason = errno == ERANGE ?
- "out of range" :
- "invalid unit";
+ const char *error_type = (errno == ERANGE) ? _("out of range") : _("invalid unit");
+
if (!value)
value = "";
- if (cf && cf->origin_type && cf->name)
- die(_("bad numeric config value '%s' for '%s' in %s %s: %s"),
- value, name, cf->origin_type, cf->name, reason);
- die(_("bad numeric config value '%s' for '%s': %s"), value, name, reason);
+ if (!(cf && cf->name))
+ die(_("bad numeric config value '%s' for '%s': %s"),
+ value, name, error_type);
+
+ switch (cf->origin_type) {
+ case CONFIG_ORIGIN_BLOB:
+ die(_("bad numeric config value '%s' for '%s' in blob %s: %s"),
+ value, name, cf->name, error_type);
+ case CONFIG_ORIGIN_FILE:
+ die(_("bad numeric config value '%s' for '%s' in file %s: %s"),
+ value, name, cf->name, error_type);
+ case CONFIG_ORIGIN_STDIN:
+ die(_("bad numeric config value '%s' for '%s' in standard input: %s"),
+ value, name, error_type);
+ case CONFIG_ORIGIN_SUBMODULE_BLOB:
+ die(_("bad numeric config value '%s' for '%s' in submodule-blob %s: %s"),
+ value, name, cf->name, error_type);
+ case CONFIG_ORIGIN_CMDLINE:
+ die(_("bad numeric config value '%s' for '%s' in command line %s: %s"),
+ value, name, cf->name, error_type);
+ default:
+ die(_("bad numeric config value '%s' for '%s' in %s: %s"),
+ value, name, cf->name, error_type);
+ }
}
int git_config_int(const char *name, const char *value)
}
static int do_config_from_file(config_fn_t fn,
- const char *origin_type, const char *name, const char *path, FILE *f,
+ const enum config_origin_type origin_type,
+ const char *name, const char *path, FILE *f,
void *data)
{
struct config_source top;
static int git_config_from_stdin(config_fn_t fn, void *data)
{
- return do_config_from_file(fn, "standard input", "", NULL, stdin, data);
+ return do_config_from_file(fn, CONFIG_ORIGIN_STDIN, "", NULL, stdin, data);
}
int git_config_from_file(config_fn_t fn, const char *filename, void *data)
f = fopen(filename, "r");
if (f) {
flockfile(f);
- ret = do_config_from_file(fn, "file", filename, filename, f, data);
+ ret = do_config_from_file(fn, CONFIG_ORIGIN_FILE, filename, filename, f, data);
funlockfile(f);
fclose(f);
}
return ret;
}
-int git_config_from_mem(config_fn_t fn, const char *origin_type,
+int git_config_from_mem(config_fn_t fn, const enum config_origin_type origin_type,
const char *name, const char *buf, size_t len, void *data)
{
struct config_source top;
return error("reference '%s' does not point to a blob", name);
}
- ret = git_config_from_mem(fn, "blob", name, buf, size, data);
+ ret = git_config_from_mem(fn, CONFIG_ORIGIN_BLOB, name, buf, size, data);
free(buf);
return ret;
static int do_git_config_sequence(config_fn_t fn, void *data)
{
- int ret = 0, found = 0;
+ int ret = 0;
char *xdg_config = xdg_config_home("config");
char *user_config = expand_user_path("~/.gitconfig");
char *repo_config = git_pathdup("config");
- if (git_config_system() && !access_or_die(git_etc_gitconfig(), R_OK, 0)) {
+ current_parsing_scope = CONFIG_SCOPE_SYSTEM;
+ if (git_config_system() && !access_or_die(git_etc_gitconfig(), R_OK, 0))
ret += git_config_from_file(fn, git_etc_gitconfig(),
data);
- found += 1;
- }
- if (xdg_config && !access_or_die(xdg_config, R_OK, ACCESS_EACCES_OK)) {
+ current_parsing_scope = CONFIG_SCOPE_GLOBAL;
+ if (xdg_config && !access_or_die(xdg_config, R_OK, ACCESS_EACCES_OK))
ret += git_config_from_file(fn, xdg_config, data);
- found += 1;
- }
- if (user_config && !access_or_die(user_config, R_OK, ACCESS_EACCES_OK)) {
+ if (user_config && !access_or_die(user_config, R_OK, ACCESS_EACCES_OK))
ret += git_config_from_file(fn, user_config, data);
- found += 1;
- }
- if (repo_config && !access_or_die(repo_config, R_OK, 0)) {
+ current_parsing_scope = CONFIG_SCOPE_REPO;
+ if (repo_config && !access_or_die(repo_config, R_OK, 0))
ret += git_config_from_file(fn, repo_config, data);
- found += 1;
- }
- switch (git_config_from_parameters(fn, data)) {
- case -1: /* error */
+ current_parsing_scope = CONFIG_SCOPE_CMDLINE;
+ if (git_config_from_parameters(fn, data) < 0)
die(_("unable to parse command-line config"));
- break;
- case 0: /* found nothing */
- break;
- default: /* found at least one item */
- found++;
- break;
- }
+ current_parsing_scope = CONFIG_SCOPE_UNKNOWN;
free(xdg_config);
free(user_config);
free(repo_config);
- return ret == 0 ? found : ret;
+ return ret;
}
int git_config_with_options(config_fn_t fn, void *data,
if (git_config_with_options(fn, data, NULL, 1) < 0)
/*
* git_config_with_options() normally returns only
- * positive values, as most errors are fatal, and
+ * zero, as most errors are fatal, and
* non-fatal potential errors are guarded by "if"
* statements that are entered only when no error is
* possible.
* something went really wrong and we should stop
* immediately.
*/
- die(_("unknown error occured while reading the configuration files"));
+ die(_("unknown error occurred while reading the configuration files"));
}
static void configset_iter(struct config_set *cs, config_fn_t fn, void *data)
struct string_list *values;
struct config_set_element *entry;
struct configset_list *list = &cs->list;
- struct key_value_info *kv_info;
for (i = 0; i < list->nr; i++) {
entry = list->items[i].e;
value_index = list->items[i].value_index;
values = &entry->value_list;
- if (fn(entry->key, values->items[value_index].string, data) < 0) {
- kv_info = values->items[value_index].util;
- git_die_config_linenr(entry->key, kv_info->filename, kv_info->linenr);
- }
+
+ current_config_kvi = values->items[value_index].util;
+
+ if (fn(entry->key, values->items[value_index].string, data) < 0)
+ git_die_config_linenr(entry->key,
+ current_config_kvi->filename,
+ current_config_kvi->linenr);
+
+ current_config_kvi = NULL;
}
}
l_item->e = e;
l_item->value_index = e->value_list.nr - 1;
- if (cf) {
+ if (!cf)
+ die("BUG: configset_add_value has no source");
+ if (cf->name) {
kv_info->filename = strintern(cf->name);
kv_info->linenr = cf->linenr;
+ kv_info->origin_type = cf->origin_type;
} else {
/* for values read from `git_config_from_parameters()` */
kv_info->filename = NULL;
kv_info->linenr = -1;
+ kv_info->origin_type = CONFIG_ORIGIN_CMDLINE;
}
+ kv_info->scope = current_parsing_scope;
si->util = kv_info;
return 0;
const char *current_config_origin_type(void)
{
- return cf && cf->origin_type ? cf->origin_type : "command line";
+ int type;
+ if (current_config_kvi)
+ type = current_config_kvi->origin_type;
+ else if (cf)
+ type = cf->origin_type;
+ else
+ die("BUG: current_config_origin_type called outside config callback");
+
+ switch (type) {
+ case CONFIG_ORIGIN_BLOB:
+ return "blob";
+ case CONFIG_ORIGIN_FILE:
+ return "file";
+ case CONFIG_ORIGIN_STDIN:
+ return "standard input";
+ case CONFIG_ORIGIN_SUBMODULE_BLOB:
+ return "submodule-blob";
+ case CONFIG_ORIGIN_CMDLINE:
+ return "command line";
+ default:
+ die("BUG: unknown config origin type");
+ }
}
const char *current_config_name(void)
{
- return cf && cf->name ? cf->name : "";
+ const char *name;
+ if (current_config_kvi)
+ name = current_config_kvi->filename;
+ else if (cf)
+ name = cf->name;
+ else
+ die("BUG: current_config_name called outside config callback");
+ return name ? name : "";
+}
+
+enum config_scope current_config_scope(void)
+{
+ if (current_config_kvi)
+ return current_config_kvi->scope;
+ else
+ return current_parsing_scope;
}
HAVE_DEV_TTY = YesPlease
HAVE_CLOCK_GETTIME = YesPlease
HAVE_CLOCK_MONOTONIC = YesPlease
+ # -lrt is needed for clock_gettime on glibc <= 2.16
+ NEEDS_LIBRT = YesPlease
HAVE_GETDELIM = YesPlease
SANE_TEXT_GREP=-a
endif
NO_STRTOUMAX = YesPlease
endif
PYTHON_PATH = /usr/local/bin/python
+ PERL_PATH = /usr/local/bin/perl
HAVE_PATHS_H = YesPlease
GMTIME_UNRELIABLE_ERRORS = UnfortunatelyYes
HAVE_BSD_SYSCTL = YesPlease
+ PAGER_ENV = LESS=FRX LV=-c MORE=FRX
endif
ifeq ($(uname_S),OpenBSD)
NO_STRCASESTR = YesPlease
AC_DEFUN([PTHREADTEST_SRC], [
AC_LANG_PROGRAM([[
#include <pthread.h>
+static void *noop(void *ignore) { return ignore; }
]], [[
pthread_mutex_t test_mutex;
pthread_key_t test_key;
+ pthread_t th;
int retcode = 0;
+ void *ret = (void *)0;
retcode |= pthread_key_create(&test_key, (void *)0);
retcode |= pthread_mutex_init(&test_mutex,(void *)0);
retcode |= pthread_mutex_lock(&test_mutex);
retcode |= pthread_mutex_unlock(&test_mutex);
+ retcode |= pthread_create(&th, ret, noop, ret);
+ retcode |= pthread_join(th, &ret);
return retcode;
]])])
static struct child_process no_fork = CHILD_PROCESS_INIT;
+static const char *get_ssh_command(void)
+{
+ const char *ssh;
+
+ if ((ssh = getenv("GIT_SSH_COMMAND")))
+ return ssh;
+
+ if (!git_config_get_string_const("core.sshcommand", &ssh))
+ return ssh;
+
+ return NULL;
+}
+
/*
* This returns a dummy child_process if the transport protocol does not
* need fork(2), or a struct child_process object if it does. Once done,
return NULL;
}
- ssh = getenv("GIT_SSH_COMMAND");
+ ssh = get_ssh_command();
if (!ssh) {
const char *base;
char *ssh_dup;
#include "connected.h"
#include "transport.h"
-int check_everything_connected(sha1_iterate_fn fn, int quiet, void *cb_data)
-{
- return check_everything_connected_with_transport(fn, quiet, cb_data, NULL);
-}
/*
* If we feed all the commits we want to verify to this command
*
*
* Returns 0 if everything is connected, non-zero otherwise.
*/
-static int check_everything_connected_real(sha1_iterate_fn fn,
- int quiet,
- void *cb_data,
- struct transport *transport,
- const char *shallow_file)
+int check_connected(sha1_iterate_fn fn, void *cb_data,
+ struct check_connected_options *opt)
{
struct child_process rev_list = CHILD_PROCESS_INIT;
- const char *argv[9];
+ struct check_connected_options defaults = CHECK_CONNECTED_INIT;
char commit[41];
unsigned char sha1[20];
- int err = 0, ac = 0;
+ int err = 0;
struct packed_git *new_pack = NULL;
+ struct transport *transport;
size_t base_len;
- if (fn(cb_data, sha1))
+ if (!opt)
+ opt = &defaults;
+ transport = opt->transport;
+
+ if (fn(cb_data, sha1)) {
+ if (opt->err_fd)
+ close(opt->err_fd);
return err;
+ }
if (transport && transport->smart_options &&
transport->smart_options->self_contained_and_connected &&
strbuf_release(&idx_file);
}
- if (shallow_file) {
- argv[ac++] = "--shallow-file";
- argv[ac++] = shallow_file;
+ if (opt->shallow_file) {
+ argv_array_push(&rev_list.args, "--shallow-file");
+ argv_array_push(&rev_list.args, opt->shallow_file);
}
- argv[ac++] = "rev-list";
- argv[ac++] = "--objects";
- argv[ac++] = "--stdin";
- argv[ac++] = "--not";
- argv[ac++] = "--all";
- if (quiet)
- argv[ac++] = "--quiet";
- argv[ac] = NULL;
+ argv_array_push(&rev_list.args, "rev-list");
+ argv_array_push(&rev_list.args, "--objects");
+ argv_array_push(&rev_list.args, "--stdin");
+ argv_array_push(&rev_list.args, "--not");
+ argv_array_push(&rev_list.args, "--all");
+ argv_array_push(&rev_list.args, "--quiet");
+ if (opt->progress)
+ argv_array_pushf(&rev_list.args, "--progress=%s",
+ _("Checking connectivity"));
- rev_list.argv = argv;
rev_list.git_cmd = 1;
rev_list.in = -1;
rev_list.no_stdout = 1;
- rev_list.no_stderr = quiet;
+ if (opt->err_fd)
+ rev_list.err = opt->err_fd;
+ else
+ rev_list.no_stderr = opt->quiet;
+
if (start_command(&rev_list))
return error(_("Could not run 'git rev-list'"));
sigchain_pop(SIGPIPE);
return finish_command(&rev_list) || err;
}
-
-int check_everything_connected_with_transport(sha1_iterate_fn fn,
- int quiet,
- void *cb_data,
- struct transport *transport)
-{
- return check_everything_connected_real(fn, quiet, cb_data,
- transport, NULL);
-}
-
-int check_shallow_connected(sha1_iterate_fn fn, int quiet, void *cb_data,
- const char *shallow_file)
-{
- return check_everything_connected_real(fn, quiet, cb_data,
- NULL, shallow_file);
-}
*/
typedef int (*sha1_iterate_fn)(void *, unsigned char [20]);
+/*
+ * Named-arguments struct for check_connected. All arguments are
+ * optional, and can be left to defaults as set by CHECK_CONNECTED_INIT.
+ */
+struct check_connected_options {
+ /* Avoid printing any errors to stderr. */
+ int quiet;
+
+ /* --shallow-file to pass to rev-list sub-process */
+ const char *shallow_file;
+
+ /* Transport whose objects we are checking, if available. */
+ struct transport *transport;
+
+ /*
+ * If non-zero, send error messages to this descriptor rather
+ * than stderr. The descriptor is closed before check_connected
+ * returns.
+ */
+ int err_fd;
+
+ /* If non-zero, show progress as we traverse the objects. */
+ int progress;
+};
+
+#define CHECK_CONNECTED_INIT { 0 }
+
/*
* Make sure that our object store has all the commits necessary to
* connect the ancestry chain to some of our existing refs, and all
* the trees and blobs that these commits use.
*
 * Return 0 if OK, non-zero otherwise (i.e. some missing objects)
+ *
+ * If "opt" is NULL, behaves as if CHECK_CONNECTED_INIT was passed.
*/
-extern int check_everything_connected(sha1_iterate_fn, int quiet, void *cb_data);
-extern int check_shallow_connected(sha1_iterate_fn, int quiet, void *cb_data,
- const char *shallow_file);
-extern int check_everything_connected_with_transport(sha1_iterate_fn, int quiet,
- void *cb_data,
- struct transport *transport);
+int check_connected(sha1_iterate_fn fn, void *cb_data,
+ struct check_connected_options *opt);
#endif /* CONNECTED_H */
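For illustration, a caller that wants progress output and redirected errors might fill the options struct like this; `iterate_shas`, `cb_data` and `err_fd` are hypothetical names, not part of the patch:

	/* illustrative sketch of a check_connected() caller */
	struct check_connected_options opt = CHECK_CONNECTED_INIT;

	opt.progress = 1;     /* show "Checking connectivity" progress */
	opt.err_fd = err_fd;  /* rev-list errors go here; closed on return */
	if (check_connected(iterate_shas, cb_data, &opt))
		die("broken ancestry chain");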
--- /dev/null
+This directory provides examples of Coccinelle (http://coccinelle.lip6.fr/)
+semantic patches that might be useful to developers.
--- /dev/null
+@@
+expression E1;
+@@
+- is_null_sha1(E1.hash)
++ is_null_oid(&E1)
+
+@@
+expression E1;
+@@
+- is_null_sha1(E1->hash)
++ is_null_oid(E1)
+
+@@
+expression E1;
+@@
+- sha1_to_hex(E1.hash)
++ oid_to_hex(&E1)
+
+@@
+expression E1;
+@@
+- sha1_to_hex(E1->hash)
++ oid_to_hex(E1)
+
+@@
+expression E1;
+@@
+- sha1_to_hex_r(E1.hash)
++ oid_to_hex_r(&E1)
+
+@@
+expression E1;
+@@
+- sha1_to_hex_r(E1->hash)
++ oid_to_hex_r(E1)
+
+@@
+expression E1;
+@@
+- hashclr(E1.hash)
++ oidclr(&E1)
+
+@@
+expression E1;
+@@
+- hashclr(E1->hash)
++ oidclr(E1)
+
+@@
+expression E1, E2;
+@@
+- hashcmp(E1.hash, E2.hash)
++ oidcmp(&E1, &E2)
+
+@@
+expression E1, E2;
+@@
+- hashcmp(E1->hash, E2->hash)
++ oidcmp(E1, E2)
+
+@@
+expression E1, E2;
+@@
+- hashcmp(E1->hash, E2.hash)
++ oidcmp(E1, &E2)
+
+@@
+expression E1, E2;
+@@
+- hashcmp(E1.hash, E2->hash)
++ oidcmp(&E1, E2)
+
+@@
+expression E1, E2;
+@@
+- hashcpy(E1.hash, E2.hash)
++ oidcpy(&E1, &E2)
+
+@@
+expression E1, E2;
+@@
+- hashcpy(E1->hash, E2->hash)
++ oidcpy(E1, E2)
+
+@@
+expression E1, E2;
+@@
+- hashcpy(E1->hash, E2.hash)
++ oidcpy(E1, &E2)
+
+@@
+expression E1, E2;
+@@
+- hashcpy(E1.hash, E2->hash)
++ oidcpy(&E1, E2)
done
}
+# Echo the value of an option set on the command line or config
+#
+# $1: short option name
+# $2: long option name including =
+# $3: list of possible values
+# $4: config string (optional)
+#
+# example:
+# result="$(__git_get_option_value "-d" "--do-something=" \
+# "yes no" "core.doSomething")"
+#
+# result is then either empty (no option set) or "yes" or "no"
+#
+# __git_get_option_value requires 3 arguments
+__git_get_option_value ()
+{
+ local c short_opt long_opt val
+ local result= values config_key word
+
+ short_opt="$1"
+ long_opt="$2"
+ values="$3"
+ config_key="$4"
+
+ ((c = $cword - 1))
+ while [ $c -ge 0 ]; do
+ word="${words[c]}"
+ for val in $values; do
+ if [ "$short_opt$val" = "$word" ] ||
+ [ "$long_opt$val" = "$word" ]; then
+ result="$val"
+ break 2
+ fi
+ done
+ ((c--))
+ done
+
+ if [ -n "$config_key" ] && [ -z "$result" ]; then
+ result="$(git --git-dir="$(__gitdir)" config "$config_key")"
+ fi
+
+ echo "$result"
+}
+
__git_has_doubledash ()
{
local c=1
while [ $c -lt $cword ]; do
i="${words[c]}"
case "$i" in
- -d|-m) only_local_ref="y" ;;
- -r) has_r="y" ;;
+ -d|--delete|-m|--move) only_local_ref="y" ;;
+ -r|--remotes) has_r="y" ;;
esac
((c++))
done
--color --no-color --verbose --abbrev= --no-abbrev
--track --no-track --contains --merged --no-merged
--set-upstream-to= --edit-description --list
- --unset-upstream
+ --unset-upstream --delete --move --remotes
"
;;
*)
--depth
--single-branch
--branch
+ --recurse-submodules
"
return
;;
esac
}
+__git_untracked_file_modes="all no normal"
+
_git_commit ()
{
case "$prev" in
return
;;
--untracked-files=*)
- __gitcomp "all no normal" "" "${cur##--untracked-files=}"
+ __gitcomp "$__git_untracked_file_modes" "" "${cur##--untracked-files=}"
return
;;
--*)
__git_diff_algorithms="myers minimal patience histogram"
+__git_diff_submodule_formats="log short"
+
__git_diff_common_options="--stat --numstat --shortstat --summary
--patch-with-stat --name-only --name-status --color
--no-color --color-words --no-renames --check
--dirstat --dirstat= --dirstat-by-file
--dirstat-by-file= --cumulative
--diff-algorithm=
+ --submodule --submodule=
"
_git_diff ()
__gitcomp "$__git_diff_algorithms" "" "${cur##--diff-algorithm=}"
return
;;
+ --submodule=*)
+ __gitcomp "$__git_diff_submodule_formats" "" "${cur##--submodule=}"
+ return
+ ;;
--*)
__gitcomp "--cached --staged --pickaxe-all --pickaxe-regex
--base --ours --theirs --no-index
__gitcomp "full short no" "" "${cur##--decorate=}"
return
;;
+ --diff-algorithm=*)
+ __gitcomp "$__git_diff_algorithms" "" "${cur##--diff-algorithm=}"
+ return
+ ;;
+ --submodule=*)
+ __gitcomp "$__git_diff_submodule_formats" "" "${cur##--submodule=}"
+ return
+ ;;
--*)
__gitcomp "
$__git_log_common_options
_git_add
}
+_git_status ()
+{
+ local complete_opt
+ local untracked_state
+
+ case "$cur" in
+ --ignore-submodules=*)
+ __gitcomp "none untracked dirty all" "" "${cur##--ignore-submodules=}"
+ return
+ ;;
+ --untracked-files=*)
+ __gitcomp "$__git_untracked_file_modes" "" "${cur##--untracked-files=}"
+ return
+ ;;
+ --column=*)
+ __gitcomp "
+ always never auto column row plain dense nodense
+ " "" "${cur##--column=}"
+ return
+ ;;
+ --*)
+ __gitcomp "
+ --short --branch --porcelain --long --verbose
+ --untracked-files= --ignore-submodules= --ignored
+ --column= --no-column
+ "
+ return
+ ;;
+ esac
+
+ untracked_state="$(__git_get_option_value "-u" "--untracked-files=" \
+ "$__git_untracked_file_modes" "status.showUntrackedFiles")"
+
+ case "$untracked_state" in
+ no)
+ # --ignored option does not matter
+ complete_opt=
+ ;;
+ all|normal|*)
+ complete_opt="--cached --directory --no-empty-directory --others"
+
+ if [ -n "$(__git_find_on_cmdline "--ignored")" ]; then
+ complete_opt="$complete_opt --ignored --exclude=*"
+ fi
+ ;;
+ esac
+
+ __git_complete_index_file "$complete_opt"
+}
+
__git_config_get_set_variables ()
{
local prevword word config_file= c=$cword
format.attach
format.cc
format.coverLetter
+ format.from
format.headers
format.numbered
format.pretty
__gitcomp "$__git_diff_algorithms" "" "${cur##--diff-algorithm=}"
return
;;
+ --submodule=*)
+ __gitcomp "$__git_diff_submodule_formats" "" "${cur##--submodule=}"
+ return
+ ;;
--*)
__gitcomp "--pretty= --format= --abbrev-commit --oneline
--show-signature
_git_log
}
+_git_worktree ()
+{
+ local subcommands="add list lock prune unlock"
+ local subcommand="$(__git_find_on_cmdline "$subcommands")"
+ if [ -z "$subcommand" ]; then
+ __gitcomp "$subcommands"
+ else
+ case "$subcommand,$cur" in
+ add,--*)
+ __gitcomp "--detach"
+ ;;
+ list,--*)
+ __gitcomp "--porcelain"
+ ;;
+ lock,--*)
+ __gitcomp "--reason"
+ ;;
+ prune,--*)
+ __gitcomp "--dry-run --expire --verbose"
+ ;;
+ *)
+ ;;
+ esac
+ fi
+}
+
__git_main ()
{
local i c=1 command __git_dir
# incorrect.)
#
local ps1_expanded=yes
- [ -z "$ZSH_VERSION" ] || [[ -o PROMPT_SUBST ]] || ps1_expanded=no
- [ -z "$BASH_VERSION" ] || shopt -q promptvars || ps1_expanded=no
+ [ -z "${ZSH_VERSION-}" ] || [[ -o PROMPT_SUBST ]] || ps1_expanded=no
+ [ -z "${BASH_VERSION-}" ] || shopt -q promptvars || ps1_expanded=no
local repo_info rev_parse_exit_code
repo_info="$(git rev-parse --git-dir --is-inside-git-dir \
return $exit
fi
- local short_sha
+ local short_sha=""
if [ "$rev_parse_exit_code" = "0" ]; then
short_sha="${repo_info##*$'\n'}"
repo_info="${repo_info%$'\n'*}"
CC = gcc
RM = rm -f
CFLAGS = -g -O2 -Wall
+PKG_CONFIG = pkg-config
-include ../../../config.mak.autogen
-include ../../../config.mak
-INCS:=$(shell pkg-config --cflags gnome-keyring-1 glib-2.0)
-LIBS:=$(shell pkg-config --libs gnome-keyring-1 glib-2.0)
+INCS:=$(shell $(PKG_CONFIG) --cflags gnome-keyring-1 glib-2.0)
+LIBS:=$(shell $(PKG_CONFIG) --libs gnome-keyring-1 glib-2.0)
SRCS:=$(MAIN).c
OBJS:=$(SRCS:.c=.o)
$mtime = oct $mtime;
next if $typeflag == 5; # directory
- print FI "blob\n", "mark :$next_mark\n";
- if ($typeflag == 2) { # symbolic link
- print FI "data ", length($linkname), "\n", $linkname;
- $mode = 0120000;
- } else {
- print FI "data $size\n";
- while ($size > 0 && read(I, $_, 512) == 512) {
- print FI substr($_, 0, $size);
- $size -= 512;
+ if ($typeflag != 1) { # handle hard links later
+ print FI "blob\n", "mark :$next_mark\n";
+ if ($typeflag == 2) { # symbolic link
+ print FI "data ", length($linkname), "\n",
+ $linkname;
+ $mode = 0120000;
+ } else {
+ print FI "data $size\n";
+ while ($size > 0 && read(I, $_, 512) == 512) {
+ print FI substr($_, 0, $size);
+ $size -= 512;
+ }
}
+ print FI "\n";
}
- print FI "\n";
my $path;
if ($prefix) {
} else {
$path = "$name";
}
- $files{$path} = [$next_mark++, $mode];
+
+ if ($typeflag == 1) { # hard link
+ $linkname = "$prefix/$linkname" if $prefix;
+ $files{$path} = [ $files{$linkname}->[0], $mode ];
+ } else {
+ $files{$path} = [$next_mark++, $mode];
+ }
$author_time = $mtime if $mtime > $author_time;
$path =~ m,^([^/]+)/,;
`foo.c` yourself. But when you have many changes scattered across a
project, you can use the editor's support to "jump" from point to point.
-Git-jump can generate three types of interesting lists:
+Git-jump can generate four types of interesting lists:
1. The beginning of any diff hunks.
3. Any grep matches.
+ 4. Any whitespace errors detected by `git diff --check`.
+
Using git-jump
--------------
Limitations
-----------
-This scripts was written and tested with vim. Given that the quickfix
+This script was written and tested with vim. Given that the quickfix
format is the same as what gcc produces, I expect emacs users have a
similar feature for iterating through the list, but I know nothing about
how to activate it.
merge: elements are merge conflicts. Arguments are ignored.
grep: elements are grep hits. Arguments are given to grep.
+
+ws: elements are whitespace errors. Arguments are given to diff --check.
EOF
}
perl -ne '
if (m{^\+\+\+ (.*)}) { $file = $1; next }
defined($file) or next;
- if (m/^@@ .*\+(\d+)/) { $line = $1; next }
+ if (m/^@@ .*?\+(\d+)/) { $line = $1; next }
defined($line) or next;
if (/^ /) { $line++; next }
if (/^[-+]\s*(.*)/) {
'
}
+mode_ws() {
+ git diff --check "$@"
+}
+
if test $# -lt 1; then
usage >&2
exit 1
+Release 1.4.0
+=============
+
+New features to troubleshoot a git-multimail installation
+---------------------------------------------------------
+
+* One can now perform a basic check of git-multimail's setup by
+ running the hook with the environment variable
+ GIT_MULTIMAIL_CHECK_SETUP set to a non-empty string. See
+ doc/troubleshooting.rst for details.
+
+* A new logging system was added. See the multimailhook.logFile,
+ multimailhook.errorLogFile and multimailhook.debugLogFile variables.
+
+* git_multimail.py can now be made more verbose using
+ multimailhook.verbose.
+
+* A new option --check-ref-filter is now available to help debug
+ the refFilter* options.
+
+Formatting emails
+-----------------
+
+* Formatting of emails was made slightly more compact, to reduce the
+  odds of having long subject lines truncated or wrapped in the short
+  list of commits.
+
+* multimailhook.emailPrefix may now use the '%(repo_shortname)s'
+ placeholder for the repository's short name.
+
+* A new option multimailhook.subjectMaxLength is available to truncate
+ overly long subject lines.
+
+Bug fixes and minor changes
+---------------------------
+
+* Options refFilterDoSendRegex and refFilterDontSendRegex were
+ essentially broken. They should work now.
+
+* The behavior when both refFilter{Do,Dont}SendRegex and
+  refFilter{Exclusion,Inclusion}Regex are set has been slightly
+ changed. Exclusion/Inclusion is now strictly stronger than
+ DoSend/DontSend.
+
+* The management of precedence when a setting can be computed in
+ multiple ways has been considerably refactored and modified.
+ multimailhook.from and multimailhook.reponame now have precedence
+ over the environment-specific settings ($GL_REPO/$GL_USER for
+ gitolite, --stash-user/repo for Stash, --submitter/--project for
+ Gerrit).
+
+* The coverage of the testsuite has been considerably improved. All
+ configuration variables now appear at least once in the testsuite.
+
+This version was tested with Python 2.6 to 3.5. It also mostly works
+with Python 2.4, but there is one known breakage in the testsuite
+related to non-ASCII characters. It was tested with Git
+1.7.10.406.gdc801, 1.8.5.6, 2.1.4, and 2.10.0.rc0.1.g07c9292.
+
Release 1.3.1 (bugfix-only release)
===================================
git-multimail is an open-source project, built by volunteers. We would
welcome your help!
-The current maintainers are Michael Haggerty <mhagger@alum.mit.edu>
-and Matthieu Moy <matthieu.moy@grenoble-inp.fr>.
+The current maintainers are Matthieu Moy
+<matthieu.moy@grenoble-inp.fr> and Michael Haggerty
+<mhagger@alum.mit.edu>.
Please note that although a copy of git-multimail is distributed in
the "contrib" section of the main Git project, development takes place
project practice
<https://github.com/git/git/blob/master/Documentation/SubmittingPatches#L234>`__.
+Please vote for the issues you would like to see addressed first
+(click "add your reaction" and then the "+1" thumbs-up button on the
+GitHub issue).
+
General discussion of git-multimail can take place on the main `Git
mailing list`_.
-git-multimail 1.3.1
-===================
+git-multimail version 1.4.0
+===========================
.. image:: https://travis-ci.org/git-multimail/git-multimail.svg?branch=master
:target: https://travis-ci.org/git-multimail/git-multimail
git-multimail is a tool for sending notification emails on pushes to a
-Git repository. It includes a Python module called git_multimail.py,
+Git repository. It includes a Python module called ``git_multimail.py``,
which can either be used as a hook script directly or can be imported
as a Python module into another script.
Invocation
----------
-git_multimail.py is designed to be used as a ``post-receive`` hook in a
+``git_multimail.py`` is designed to be used as a ``post-receive`` hook in a
Git repository (see githooks(5)). Link or copy it to
$GIT_DIR/hooks/post-receive within the repository for which email
notifications are desired. Usually it should be installed on the
central repository for a project, to which all commits are eventually
pushed.
-For use on pre-v1.5.1 Git servers, git_multimail.py can also work as
+For use on pre-v1.5.1 Git servers, ``git_multimail.py`` can also work as
an ``update`` hook, taking its arguments on the command line. To use
this script in this manner, link or copy it to $GIT_DIR/hooks/update.
Please note that the script is not completely reliable in this mode
-[2]_.
+[1]_.
-Alternatively, git_multimail.py can be imported as a Python module
+Alternatively, ``git_multimail.py`` can be imported as a Python module
into your own Python post-receive script. This method is a bit more
work, but allows the behavior of the hook to be customized using
arbitrary Python code. For example, you can use a custom environment
Or you can change how emails are sent by writing your own Mailer
class. The ``post-receive`` script in this directory demonstrates how
-to use git_multimail.py as a Python module. (If you make interesting
+to use ``git_multimail.py`` as a Python module. (If you make interesting
changes of this type, please consider sharing them with the
community.)
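
In outline, such a script can look like the sketch below (the helper
names follow ``git_multimail.py`` and its shipped ``post-receive``
example, which remains the authoritative, complete version; double-check
the names against your copy)::

    #! /usr/bin/env python
    import sys

    import git_multimail

    # Read settings from the [multimailhook] section of the repository
    # configuration, just as git_multimail.py does when run directly.
    config = git_multimail.Config('multimailhook')

    try:
        # GenericEnvironment is the stand-alone default; use or subclass
        # another Environment class to customize behavior.
        environment = git_multimail.GenericEnvironment(config=config)
    except git_multimail.ConfigurationException:
        sys.exit(sys.exc_info()[1])

    mailer = git_multimail.choose_mailer(config, environment)
    git_multimail.run_as_post_receive_hook(environment, mailer)
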
the repository name is derived from the repository's path.
gitolite
- the username of the pusher is read from $GL_USER, the repository
+	Environment to use when ``git-multimail`` is run as a gitolite_
+ hook.
+
+ The username of the pusher is read from $GL_USER, the repository
name is read from $GL_REPO, and the From: header value is
optionally read from gitolite.conf (see multimailhook.from).
like ``<a href="foo">link</a>``, the reader will see the HTML
source code and not a proper link.
- Set ``multimailhook.htmlInIntro`` to true to allow writting HTML
+ Set ``multimailhook.htmlInIntro`` to true to allow writing HTML
formatting in introduction templates. Similarly, set
``multimailhook.htmlInFooter`` for HTML in the footer.
email filtering (though filtering based on the X-Git-* email
headers is probably more robust). Default is the short name of
the repository in square brackets; e.g., ``[myrepo]``. Set this
- value to the empty string to suppress the email prefix.
+ value to the empty string to suppress the email prefix. You may
+ use the placeholder ``%(repo_shortname)s`` for the short name of
+ the repository.
multimailhook.emailMaxLines
The maximum number of lines that should be included in the body of
lines, the diffs are probably unreadable anyway. To disable line
truncation, set this option to 0.
+multimailhook.subjectMaxLength
+ The maximum length of the subject line (i.e. the ``oneline`` field
+ in templates, not including the prefix). Lines longer than this
+ limit are truncated to this length with a trailing ``[...]`` added
+  to indicate the missing text. The default is to use
+ ``multimailhook.emailMaxLineLength``. This option avoids sending
+ emails with overly long subject lines, but should not be needed if
+ the commit messages follow the Git convention (one short subject
+ line, then a blank line, then the message body). To disable line
+ truncation, set this option to 0.
+
multimailhook.maxCommitEmails
The maximum number of commit emails to send for a given change.
When the number of patches is larger that this value, only the
not valid UTF-8 are converted to the Unicode replacement
character, U+FFFD. The default is `true`.
+ This option is ineffective with Python 3, where non-UTF-8
+ characters are unconditionally replaced.
+
multimailhook.diffOpts
Options passed to ``git diff-tree`` when generating the summary
information for ReferenceChange emails. Default is ``--stat
--summary --find-copies-harder``. Add -p to those options to
include a unified diff of changes in addition to the usual summary
- output. Shell quoting is allowed; see multimailhook.logOpts for
+ output. Shell quoting is allowed; see ``multimailhook.logOpts`` for
details.
multimailhook.graphOpts
multimailhook.dateSubstitute
String to use as a substitute for ``Date:`` in the output of ``git
- log`` while formatting commit messages. This is usefull to avoid
+ log`` while formatting commit messages. This is useful to avoid
emitting a line that can be interpreted by mailers as the start of
a cited message (Zimbra webmail in particular). Defaults to
``CommitDate:``. Set to an empty string or ``none`` to deactivate
the user-interface is not stable yet (in particular, the option
names may change). If you want to participate in stabilizing the
feature, please contact the maintainers and/or send pull-requests.
+ If you are happy with the current shape of the feature, please
+ report it too.
Regular expressions that can be used to limit refs for which email
updates will be sent. It is an error to specify both an inclusion
[multimailhook]
refFilterExclusionRegex = ^refs/tags/|^refs/heads/master$
+ ``refFilterInclusionRegex`` and ``refFilterExclusionRegex`` are
+ strictly stronger than ``refFilterDoSendRegex`` and
+ ``refFilterDontSendRegex``. In other words, adding a ref to a
+  DoSend/DontSend regex has no effect if it is already excluded by an
+ Exclusion/Inclusion regex.
+
+multimailhook.logFile, multimailhook.errorLogFile, multimailhook.debugLogFile
+
+  When set, these variables designate paths to files where
+ git-multimail will log some messages. Normal messages and error
+ messages are sent to ``logFile``, and error messages are also sent
+ to ``errorLogFile``. Debug messages and all other messages are
+ sent to ``debugLogFile``. The recommended way is to set only one
+ of these variables, but it is also possible to set several of them
+ (part of the information is then duplicated in several log files,
+ for example errors are duplicated to all log files).
+
+  Relative paths are relative to the Git repository where the push is
+ done.
+
+multimailhook.verbose
+
+ Verbosity level of git-multimail on its standard output. By
+  default, only error and info messages are shown. If set to true,
+  debug messages are also shown.
+
Email filtering aids
--------------------
git-multimail mostly generates emails by expanding templates. The
templates can be customized. To avoid the need to edit
-git_multimail.py directly, the preferred way to change the templates
-is to write a separate Python script that imports git_multimail.py as
+``git_multimail.py`` directly, the preferred way to change the templates
+is to write a separate Python script that imports ``git_multimail.py`` as
a module, then replaces the templates in place. See the provided
post-receive script for an example of how this is done.
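
For instance, a wrapper script can override one of the module-level
template strings before setting up the environment and mailer
(``NEW_REVISIONS_TEMPLATE`` is used here; any of the ``*_TEMPLATE``
constants in ``git_multimail.py`` can be replaced the same way)::

    import git_multimail

    # Replace a template in place; the %(...)s placeholders available to
    # the custom text are the same ones used by the original definition.
    git_multimail.NEW_REVISIONS_TEMPLATE = """\
    The %(tot)s revisions listed above as "new" are new to this repository.
    """

    # ... then build config/environment/mailer and call
    # git_multimail.run_as_post_receive_hook() as usual.
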
a stand-alone Git repository.
GitoliteEnvironment
- a Git repository that is managed by gitolite
- [3]_. For such repositories, the identity of the pusher is read from
+ a Git repository that is managed by gitolite_. For such
+ repositories, the identity of the pusher is read from
environment variable $GL_USER, the name of the repository is read
from $GL_REPO (if it is not overridden by multimailhook.reponame),
and the From: header value is optionally read from gitolite.conf
If you need to customize the script in ways that are not supported by
the existing environments, you can define your own environment class
class using arbitrary Python code. To do so, you need to import
-git_multimail.py as a Python module, as demonstrated by the example
+``git_multimail.py`` as a Python module, as demonstrated by the example
post-receive script. Then implement your environment class; it should
usually inherit from one of the existing Environment classes and
possibly one or more of the EnvironmentMixin classes. Then set the
Footnotes
---------
-.. [1] http://www.python.org/dev/peps/pep-0394/
-
-.. [2] Because of the way information is passed to update hooks, the
+.. [1] Because of the way information is passed to update hooks, the
script's method of determining whether a commit has already
been seen does not work when it is used as an ``update`` script.
In particular, no notification email will be generated for a
push. A workaround is to use --force-send to force sending the
emails.
-.. [3] https://github.com/sitaramc/gitolite
+.. _gitolite: https://github.com/sitaramc/gitolite
https://github.com/git-multimail/git-multimail
The version in this directory was obtained from the upstream project
-on May 13 2016 and consists of the "git-multimail" subdirectory from
+on August 17 2016 and consists of the "git-multimail" subdirectory from
revision
- 3ce5470d4abf7251604cbf64e73a962e1b617f5e refs/tags/1.3.1
+ 07b1cb6bfd7be156c62e1afa17cae13b850a869f refs/tags/1.4.0
Please see the README file in this directory for information about how
to report bugs or contribute to git-multimail.
Troubleshooting issues with git-multimail: a FAQ
================================================
+How to check that git-multimail is properly set up?
+---------------------------------------------------
+
+Since version 1.4.0, git-multimail allows a simple self-check of
+its configuration: run it with the environment variable
+``GIT_MULTIMAIL_CHECK_SETUP`` set to a non-empty string. You should
+get something like this::
+
+ $ GIT_MULTIMAIL_CHECK_SETUP=true /home/moy/dev/git-multimail/git-multimail/git_multimail.py
+ Environment values:
+ administrator : 'the administrator of this repository'
+ charset : 'utf-8'
+ emailprefix : '[git-multimail] '
+ fqdn : 'anie'
+ projectdesc : 'UNNAMED PROJECT'
+ pusher : 'moy'
+ repo_path : '/home/moy/dev/git-multimail'
+ repo_shortname : 'git-multimail'
+
+ Now, checking that git-multimail's standard input is properly set ...
+ Please type some text and then press Return
+ foo
+ You have just entered:
+ foo
+ git-multimail seems properly set up.
+
+If you forgot to set an important variable, you may get instead::
+
+ $ GIT_MULTIMAIL_CHECK_SETUP=true /home/moy/dev/git-multimail/git-multimail/git_multimail.py
+ No email recipients configured!
+
+Do not set ``GIT_MULTIMAIL_CHECK_SETUP`` other than for testing your
+configuration: it would disable the hook completely.
+
Git is not using the right address in the From/To/Reply-To field
----------------------------------------------------------------
#! /usr/bin/env python
-__version__ = '1.3.1'
+__version__ = '1.4.0'
-# Copyright (c) 2015 Matthieu Moy and others
+# Copyright (c) 2015-2016 Matthieu Moy and others
# Copyright (c) 2012-2014 Michael Haggerty and others
# Derived from contrib/hooks/post-receive-email, which is
# Copyright (c) 2007 Andy Parkins
import subprocess
import shlex
import optparse
+import logging
import smtplib
try:
import ssl
def str_to_bytes(s):
return s.encode(ENCODING)
- def bytes_to_str(s):
- return s.decode(ENCODING)
+ def bytes_to_str(s, errors='strict'):
+ return s.decode(ENCODING, errors)
unicode = str
f.buffer.write(msg.encode(sys.getdefaultencoding()))
except UnicodeEncodeError:
f.buffer.write(msg.encode(ENCODING))
+
+ def read_line(f):
+ # Try reading with the default encoding. If it fails,
+ # try UTF-8.
+ out = f.buffer.readline()
+ try:
+ return out.decode(sys.getdefaultencoding())
+        except UnicodeDecodeError:
+ return out.decode(ENCODING)
else:
def is_string(s):
try:
def str_to_bytes(s):
return s
- def bytes_to_str(s):
+ def bytes_to_str(s, errors='strict'):
return s
def write_str(f, msg):
f.write(msg)
+ def read_line(f):
+ return f.readline()
+
def next(it):
return it.next()
\\
O -- O -- O (%(oldrev_short)s)
-Any revisions marked "omits" are not gone; other references still
-refer to them. Any revisions marked "discards" are gone forever.
+Any revisions marked "omit" are not gone; other references still
+refer to them. Any revisions marked "discard" are gone forever.
"""
revisions, and so the following emails describe only the N revisions
from the common base, B.
-Any revisions marked "omits" are not gone; other references still
-refer to them. Any revisions marked "discards" are gone forever.
+Any revisions marked "omit" are not gone; other references still
+refer to them. Any revisions marked "discard" are gone forever.
"""
NEW_REVISIONS_TEMPLATE = """\
The %(tot)s revisions listed above as "new" are entirely new to this
repository and will be described in separate emails. The revisions
-listed as "adds" were already present in the repository and have only
+listed as "add" were already present in the repository and have only
been added to this reference.
"""
TAG_CREATED_TEMPLATE = """\
- at %(newrev_short)-9s (%(newrev_type)s)
+ at %(newrev_short)-8s (%(newrev_type)s)
"""
TAG_UPDATED_TEMPLATE = """\
*** WARNING: tag %(short_refname)s was modified! ***
- from %(oldrev_short)-9s (%(oldrev_type)s)
- to %(newrev_short)-9s (%(newrev_type)s)
+ from %(oldrev_short)-8s (%(oldrev_type)s)
+ to %(newrev_short)-8s (%(newrev_type)s)
"""
# The template used in summary tables. It looks best if this uses the
# same alignment as TAG_CREATED_TEMPLATE and TAG_UPDATED_TEMPLATE.
BRIEF_SUMMARY_TEMPLATE = """\
-%(action)10s %(rev_short)-9s %(text)s
+%(action)8s %(rev_short)-8s %(text)s
"""
input = str_to_bytes(input)
else:
stdin = None
+ errors = 'strict'
+ if 'errors' in kw:
+ errors = kw['errors']
+ del kw['errors']
p = subprocess.Popen(
- cmd, stdin=stdin, stdout=subprocess.PIPE, stderr=subprocess.PIPE, **kw
+ tuple(str_to_bytes(w) for w in cmd),
+ stdin=stdin, stdout=subprocess.PIPE, stderr=subprocess.PIPE, **kw
)
(out, err) = p.communicate(input)
- out = bytes_to_str(out)
+ out = bytes_to_str(out, errors=errors)
retcode = p.wait()
if retcode:
raise CommandError(cmd, retcode)
for line in footer:
yield line
- def get_alt_fromaddr(self):
+ def get_specific_fromaddr(self):
+ """For kinds of Changes which specify it, return the kind-specific
+ From address to use."""
return None
self.cc_recipients = ', '.join(to.strip() for to in self._cc_recipients())
if self.cc_recipients:
self.environment.log_msg(
- 'Add %s to CC for %s\n' % (self.cc_recipients, self.rev.sha1))
+ 'Add %s to CC for %s' % (self.cc_recipients, self.rev.sha1))
def _cc_recipients(self):
cc_recipients = []
['log', '--format=%s', '--no-walk', self.rev.sha1]
)
+ max_subject_length = self.environment.get_max_subject_length()
+ if max_subject_length > 0 and len(oneline) > max_subject_length:
+ oneline = oneline[:max_subject_length - 6] + ' [...]'
+
values['rev'] = self.rev.sha1
values['rev_short'] = self.rev.short
values['change_type'] = self.change_type
for line in read_git_lines(
['log'] + self.environment.commitlogopts + ['-1', self.rev.sha1],
keepends=True,
- ):
+ errors='replace'):
if line.startswith('Date: ') and self.environment.date_substitute:
yield self.environment.date_substitute + line[len('Date: '):]
else:
self._contains_diff()
return Change.generate_email(self, push, body_filter, extra_header_values)
- def get_alt_fromaddr(self):
+ def get_specific_fromaddr(self):
return self.environment.from_commit
# Tracking branch:
environment.log_warning(
'*** Push-update of tracking branch %r\n'
- '*** - incomplete email generated.\n'
+ '*** - incomplete email generated.'
% (refname,)
)
klass = OtherReferenceChange
# Some other reference namespace:
environment.log_warning(
'*** Push-update of strange reference %r\n'
- '*** - incomplete email generated.\n'
+ '*** - incomplete email generated.'
% (refname,)
)
klass = OtherReferenceChange
# Anything else (is there anything else?)
environment.log_warning(
'*** Unknown type of update to %r (%s)\n'
- '*** - incomplete email generated.\n'
+ '*** - incomplete email generated.'
% (refname, rev.type,)
)
klass = OtherReferenceChange
if discards and adds:
for (sha1, subject) in discards:
if sha1 in discarded_commits:
- action = 'discards'
+ action = 'discard'
else:
- action = 'omits'
+ action = 'omit'
yield self.expand(
BRIEF_SUMMARY_TEMPLATE, action=action,
rev_short=sha1, text=subject,
if sha1 in new_commits:
action = 'new'
else:
- action = 'adds'
+ action = 'add'
yield self.expand(
BRIEF_SUMMARY_TEMPLATE, action=action,
rev_short=sha1, text=subject,
elif discards:
for (sha1, subject) in discards:
if sha1 in discarded_commits:
- action = 'discards'
+ action = 'discard'
else:
- action = 'omits'
+ action = 'omit'
yield self.expand(
BRIEF_SUMMARY_TEMPLATE, action=action,
rev_short=sha1, text=subject,
if sha1 in new_commits:
action = 'new'
else:
- action = 'adds'
+ action = 'add'
yield self.expand(
BRIEF_SUMMARY_TEMPLATE, action=action,
rev_short=sha1, text=subject,
for r in discarded_revisions:
(sha1, subject) = r.rev.get_summary()
yield r.expand(
- BRIEF_SUMMARY_TEMPLATE, action='discards', text=subject,
+ BRIEF_SUMMARY_TEMPLATE, action='discard', text=subject,
)
for line in self.generate_revision_change_graph(push):
yield line
)
yield '\n'
- def get_alt_fromaddr(self):
+ def get_specific_fromaddr(self):
return self.environment.from_refchange
except CommandError:
prevtag = None
if prevtag:
- yield ' replaces %s\n' % (prevtag,)
+ yield ' replaces %s\n' % (prevtag,)
else:
prevtag = None
- yield ' length %s bytes\n' % (read_git_output(['cat-file', '-s', tagobject]),)
+ yield ' length %s bytes\n' % (read_git_output(['cat-file', '-s', tagobject]),)
- yield ' tagged by %s\n' % (tagger,)
- yield ' on %s\n' % (tagged,)
+ yield ' by %s\n' % (tagger,)
+ yield ' on %s\n' % (tagged,)
yield '\n'
# Show the content of the tag message; this might contain a
class Mailer(object):
"""An object that can send emails."""
+ def __init__(self, environment):
+ self.environment = environment
+
def send(self, lines, to_addrs):
"""Send an email consisting of lines.
'Try setting multimailhook.sendmailCommand.'
)
- def __init__(self, command=None, envelopesender=None):
+ def __init__(self, environment, command=None, envelopesender=None):
"""Construct a SendMailer instance.
command should be the command and arguments used to invoke
sendmail, as a list of strings. If an envelopesender is
provided, it will also be passed to the command, via '-f
envelopesender'."""
-
+ super(SendMailer, self).__init__(environment)
if command:
self.command = command[:]
else:
try:
p = subprocess.Popen(self.command, stdin=subprocess.PIPE)
except OSError:
- sys.stderr.write(
+ self.environment.get_logger().error(
'*** Cannot execute command: %s\n' % ' '.join(self.command) +
'*** %s\n' % sys.exc_info()[1] +
'*** Try setting multimailhook.mailer to "smtp"\n' +
lines = (str_to_bytes(line) for line in lines)
p.stdin.writelines(lines)
except Exception:
- sys.stderr.write(
+ self.environment.get_logger().error(
'*** Error while generating commit email\n'
'*** - mail sending aborted.\n'
)
- try:
+ if hasattr(p, 'terminate'):
# subprocess.terminate() is not available in Python 2.4
p.terminate()
- except AttributeError:
- pass
+ else:
+ import signal
+ os.kill(p.pid, signal.SIGTERM)
raise
else:
p.stdin.close()
class SMTPMailer(Mailer):
"""Send emails using Python's smtplib."""
- def __init__(self, envelopesender, smtpserver,
+ def __init__(self, environment,
+ envelopesender, smtpserver,
smtpservertimeout=10.0, smtpserverdebuglevel=0,
smtpencryption='none',
smtpuser='', smtppass='',
smtpcacerts=''
):
+ super(SMTPMailer, self).__init__(environment)
if not envelopesender:
- sys.stderr.write(
+ self.environment.get_logger().error(
'fatal: git_multimail: cannot use SMTPMailer without a sender address.\n'
'please set either multimailhook.envelopeSender or user.email\n'
)
self.smtp = call(smtplib.SMTP_SSL, self.smtpserver, timeout=self.smtpservertimeout)
elif self.security == 'tls':
if 'ssl' not in sys.modules:
- sys.stderr.write(
+ self.environment.get_logger().error(
'*** Your Python version does not have the ssl library installed\n'
'*** smtpEncryption=tls is not available.\n'
'*** Either upgrade Python to 2.6 or later\n'
self.smtp.sock,
cert_reqs=ssl.CERT_NONE
)
- sys.stderr.write(
+ self.environment.get_logger().error(
'*** Warning, the server certificat is not verified (smtp) ***\n'
'*** set the option smtpCACerts ***\n'
)
% self.smtpserverdebuglevel)
self.smtp.set_debuglevel(self.smtpserverdebuglevel)
except Exception:
- sys.stderr.write(
+ self.environment.get_logger().error(
'*** Error establishing SMTP connection to %s ***\n'
- % self.smtpserver)
- sys.stderr.write('*** %s\n' % sys.exc_info()[1])
+ '*** %s\n'
+ % (self.smtpserver, sys.exc_info()[1]))
sys.exit(1)
def __del__(self):
to_addrs = [email for (name, email) in getaddresses([to_addrs])]
self.smtp.sendmail(self.envelopesender, to_addrs, msg)
except smtplib.SMTPResponseException:
- sys.stderr.write('*** Error sending email ***\n')
err = sys.exc_info()[1]
- sys.stderr.write('*** Error %d: %s\n' % (err.smtp_code,
- bytes_to_str(err.smtp_error)))
+ self.environment.get_logger().error(
+ '*** Error sending email ***\n'
+ '*** Error %d: %s\n'
+ % (err.smtp_code, bytes_to_str(err.smtp_error)))
try:
smtp = self.smtp
# delete the field before quit() so that in case of
del self.smtp
smtp.quit()
except:
- sys.stderr.write('*** Error closing the SMTP connection ***\n')
- sys.stderr.write('*** Exiting anyway ... ***\n')
- sys.stderr.write('*** %s\n' % sys.exc_info()[1])
+ self.environment.get_logger().error(
+ '*** Error closing the SMTP connection ***\n'
+ '*** Exiting anyway ... ***\n'
+ '*** %s\n' % sys.exc_info()[1])
sys.exit(1)
to send and when computing what commits are considered new
to the repository. Default is "^refs/notes/".
+ get_max_subject_length()
+
+ Return an int giving the maximal length for the subject
+ (git log --oneline).
+
They should also define the following attributes:
announce_show_shortlog (bool)
multimailhook.fromRefchange and multimailhook.fromCommit
by ConfigEnvironmentMixin.
+ log_file, error_log_file, debug_log_file (string)
+
+ Name of a file to which logs should be sent.
+
+ verbose (int)
+
+ How verbose the system should be.
+ - 0 (default): show info, errors, ...
+ - 1 : show basic debug info
"""
REPO_NAME_RE = re.compile(r'^(?P<name>.+?)(?:\.git)$')
self.quiet = False
self.stdout = False
self.combine_when_single_commit = True
+ self.logger = None
self.COMPUTED_KEYS = [
'administrator',
self._values = None
+ def get_logger(self):
+        """Return the logger for this environment, creating it if needed."""
+ if self.logger is None:
+ self.logger = Logger(self)
+ return self.logger
+
def get_repo_shortname(self):
"""Use the last part of the repo path, with ".git" stripped off if present."""
# which we simply do not have right now.
return "^refs/notes/"
+ def get_max_subject_length(self):
+ """Return the maximal subject line (git log --oneline) length.
+ Longer subject lines will be truncated."""
+ raise NotImplementedError()
+
def filter_body(self, lines):
"""Filter the lines intended for an email body.
"""Write the string msg on a log file or on stderr.
Sends the text to stderr by default, override to change the behavior."""
- write_str(sys.stderr, msg)
+ self.get_logger().info(msg)
def log_warning(self, msg):
"""Write the string msg on a log file or on stderr.
Sends the text to stderr by default, override to change the behavior."""
- write_str(sys.stderr, msg)
+ self.get_logger().warning(msg)
def log_error(self, msg):
"""Write the string msg on a log file or on stderr.
Sends the text to stderr by default, override to change the behavior."""
- write_str(sys.stderr, msg)
+ self.get_logger().error(msg)
+
+ def check(self):
+ pass
class ConfigEnvironmentMixin(Environment):
if combine is not None:
self.combine_when_single_commit = combine
+ self.log_file = config.get('logFile', default=None)
+ self.error_log_file = config.get('errorLogFile', default=None)
+ self.debug_log_file = config.get('debugLogFile', default=None)
+ if config.get_bool('Verbose', default=False):
+ self.verbose = 1
+ else:
+ self.verbose = 0
+
def get_administrator(self):
return (
self.config.get('administrator') or
if emailprefix is not None:
emailprefix = emailprefix.strip()
if emailprefix:
- return emailprefix + ' '
- else:
- return ''
+ emailprefix += ' '
else:
- return '[%s] ' % (self.get_repo_shortname(),)
+ emailprefix = '[%(repo_shortname)s] '
+ short_name = self.get_repo_shortname()
+ try:
+ return emailprefix % {'repo_shortname': short_name}
+ except:
+ self.get_logger().error(
+ '*** Invalid multimailhook.emailPrefix: %s\n' % emailprefix +
+ '*** %s\n' % sys.exc_info()[1] +
+ "*** Only the '%(repo_shortname)s' placeholder is allowed\n"
+ )
+ raise ConfigurationException(
+ '"%s" is not an allowed setting for emailPrefix' % emailprefix
+ )
def get_sender(self):
return self.config.get('envelopesender')
def get_fromaddr(self, change=None):
fromaddr = self.config.get('from')
if change:
- alt_fromaddr = change.get_alt_fromaddr()
- if alt_fromaddr:
- fromaddr = alt_fromaddr
+ specific_fromaddr = change.get_specific_fromaddr()
+ if specific_fromaddr:
+ fromaddr = specific_fromaddr
if fromaddr:
fromaddr = self.process_addr(fromaddr, change)
if fromaddr:
class FilterLinesEnvironmentMixin(Environment):
"""Handle encoding and maximum line length of body lines.
- emailmaxlinelength (int or None)
+ email_max_line_length (int or None)
The maximum length of any single line in the email body.
Longer lines are truncated at that length with ' [...]'
"""
- def __init__(self, strict_utf8=True, emailmaxlinelength=500, **kw):
+ def __init__(self, strict_utf8=True,
+ email_max_line_length=500, max_subject_length=500,
+ **kw):
super(FilterLinesEnvironmentMixin, self).__init__(**kw)
self.__strict_utf8 = strict_utf8
- self.__emailmaxlinelength = emailmaxlinelength
+ self.__email_max_line_length = email_max_line_length
+ self.__max_subject_length = max_subject_length
def filter_body(self, lines):
lines = super(FilterLinesEnvironmentMixin, self).filter_body(lines)
lines = (line.decode(ENCODING, 'replace') for line in lines)
# Limit the line length in Unicode-space to avoid
# splitting characters:
- if self.__emailmaxlinelength:
- lines = limit_linelength(lines, self.__emailmaxlinelength)
+ if self.__email_max_line_length > 0:
+ lines = limit_linelength(lines, self.__email_max_line_length)
if not PYTHON3:
lines = (line.encode(ENCODING, 'replace') for line in lines)
- elif self.__emailmaxlinelength:
- lines = limit_linelength(lines, self.__emailmaxlinelength)
+ elif self.__email_max_line_length:
+ lines = limit_linelength(lines, self.__email_max_line_length)
return lines
+ def get_max_subject_length(self):
+ return self.__max_subject_length
+
class ConfigFilterLinesEnvironmentMixin(
ConfigEnvironmentMixin,
if strict_utf8 is not None:
kw['strict_utf8'] = strict_utf8
- emailmaxlinelength = config.get('emailmaxlinelength')
- if emailmaxlinelength is not None:
- kw['emailmaxlinelength'] = int(emailmaxlinelength)
+ email_max_line_length = config.get('emailmaxlinelength')
+ if email_max_line_length is not None:
+ kw['email_max_line_length'] = int(email_max_line_length)
+
+ max_subject_length = config.get('subjectMaxLength', default=email_max_line_length)
+ if max_subject_length is not None:
+ kw['max_subject_length'] = int(max_subject_length)
super(ConfigFilterLinesEnvironmentMixin, self).__init__(
config=config, **kw
def filter_body(self, lines):
lines = super(MaxlinesEnvironmentMixin, self).filter_body(lines)
- if self.__emailmaxlines:
+ if self.__emailmaxlines > 0:
lines = limit_lines(lines, self.__emailmaxlines)
return lines
# actual *contents* of the change being reported, we only
# choose based on the *type* of the change. Therefore we can
# compute them once and for all:
- if not (refchange_recipients or
- announce_recipients or
- revision_recipients or
- scancommitforcc):
- raise ConfigurationException('No email recipients configured!')
self.__refchange_recipients = refchange_recipients
self.__announce_recipients = announce_recipients
self.__revision_recipients = revision_recipients
+ def check(self):
+ if not (self.get_refchange_recipients(None) or
+ self.get_announce_recipients(None) or
+ self.get_revision_recipients(None) or
+ self.get_scancommitforcc()):
+ raise ConfigurationException('No email recipients configured!')
+ super(StaticRecipientsEnvironmentMixin, self).check()
+
def get_refchange_recipients(self, refchange):
+ if self.__refchange_recipients is None:
+ return super(StaticRecipientsEnvironmentMixin,
+ self).get_refchange_recipients(refchange)
return self.__refchange_recipients
def get_announce_recipients(self, annotated_tag_change):
+ if self.__announce_recipients is None:
+ return super(StaticRecipientsEnvironmentMixin,
+ self).get_refchange_recipients(annotated_tag_change)
return self.__announce_recipients
def get_revision_recipients(self, revision):
+ if self.__revision_recipients is None:
+ return super(StaticRecipientsEnvironmentMixin,
+ self).get_refchange_recipients(revision)
return self.__revision_recipients
+class CLIRecipientsEnvironmentMixin(Environment):
+    """Mixin storing recipients information coming from the
+    command line."""
+
+ def __init__(self, cli_recipients=None, **kw):
+ super(CLIRecipientsEnvironmentMixin, self).__init__(**kw)
+ self.__cli_recipients = cli_recipients
+
+ def get_refchange_recipients(self, refchange):
+ if self.__cli_recipients is None:
+ return super(CLIRecipientsEnvironmentMixin,
+ self).get_refchange_recipients(refchange)
+ return self.__cli_recipients
+
+ def get_announce_recipients(self, annotated_tag_change):
+ if self.__cli_recipients is None:
+ return super(CLIRecipientsEnvironmentMixin,
+ self).get_announce_recipients(annotated_tag_change)
+ return self.__cli_recipients
+
+ def get_revision_recipients(self, revision):
+ if self.__cli_recipients is None:
+ return super(CLIRecipientsEnvironmentMixin,
+ self).get_revision_recipients(revision)
+ return self.__cli_recipients
+
+
class ConfigRecipientsEnvironmentMixin(
ConfigEnvironmentMixin,
StaticRecipientsEnvironmentMixin
if ref_filter_do_send_regex and ref_filter_dont_send_regex:
raise ConfigurationException(
"Cannot specify both a ref doSend and dontSend regex.")
- if ref_filter_do_send_regex or ref_filter_dont_send_regex:
- self.__is_do_send_filter = bool(ref_filter_do_send_regex)
- if ref_filter_incl_regex:
- ref_filter_send_regex = ref_filter_incl_regex
- elif ref_filter_excl_regex:
- ref_filter_send_regex = ref_filter_excl_regex
- else:
- ref_filter_send_regex = '.*'
- self.__is_do_send_filter = True
- try:
- self.__send_compiled_regex = re.compile(ref_filter_send_regex)
- except Exception:
- raise ConfigurationException(
- 'Invalid Ref Filter Regex "%s": %s' %
- (ref_filter_send_regex, sys.exc_info()[1]))
+ self.__is_do_send_filter = bool(ref_filter_do_send_regex)
+ if ref_filter_do_send_regex:
+ ref_filter_send_regex = ref_filter_do_send_regex
+ elif ref_filter_dont_send_regex:
+ ref_filter_send_regex = ref_filter_dont_send_regex
else:
- self.__send_compiled_regex = self.__compiled_regex
- self.__is_do_send_filter = self.__is_inclusion_filter
+ ref_filter_send_regex = '.*'
+ self.__is_do_send_filter = True
+ try:
+ self.__send_compiled_regex = re.compile(ref_filter_send_regex)
+ except Exception:
+ raise ConfigurationException(
+ 'Invalid Ref Filter Regex "%s": %s' %
+ (ref_filter_send_regex, sys.exc_info()[1]))
def get_ref_filter_regex(self, send_filter=False):
if send_filter:
return self.osenv.get('USER', self.osenv.get('USERNAME', 'unknown user'))
-class GenericEnvironment(
- ProjectdescEnvironmentMixin,
- ConfigMaxlinesEnvironmentMixin,
- ComputeFQDNEnvironmentMixin,
- ConfigFilterLinesEnvironmentMixin,
- ConfigRecipientsEnvironmentMixin,
- ConfigRefFilterEnvironmentMixin,
- PusherDomainEnvironmentMixin,
- ConfigOptionsEnvironmentMixin,
- GenericEnvironmentMixin,
- Environment,
- ):
- pass
+class GitoliteEnvironmentHighPrecMixin(Environment):
+ def get_pusher(self):
+ return self.osenv.get('GL_USER', 'unknown user')
-class GitoliteEnvironmentMixin(Environment):
+class GitoliteEnvironmentLowPrecMixin(Environment):
def get_repo_shortname(self):
# The gitolite environment variable $GL_REPO is a pretty good
# repo_shortname (though it's probably not as good as a value
# the user might have explicitly put in his config).
return (
self.osenv.get('GL_REPO', None) or
- super(GitoliteEnvironmentMixin, self).get_repo_shortname()
+ super(GitoliteEnvironmentLowPrecMixin, self).get_repo_shortname()
)
- def get_pusher(self):
- return self.osenv.get('GL_USER', 'unknown user')
-
def get_fromaddr(self, change=None):
GL_USER = self.osenv.get('GL_USER')
if GL_USER is not None:
return m.group(1)
finally:
f.close()
- return super(GitoliteEnvironmentMixin, self).get_fromaddr(change)
+ return super(GitoliteEnvironmentLowPrecMixin, self).get_fromaddr(change)
class IncrementalDateTime(object):
return formatted
-class GitoliteEnvironment(
- ProjectdescEnvironmentMixin,
- ConfigMaxlinesEnvironmentMixin,
- ComputeFQDNEnvironmentMixin,
- ConfigFilterLinesEnvironmentMixin,
- ConfigRecipientsEnvironmentMixin,
- ConfigRefFilterEnvironmentMixin,
- PusherDomainEnvironmentMixin,
- ConfigOptionsEnvironmentMixin,
- GitoliteEnvironmentMixin,
- Environment,
- ):
- pass
-
-
-class StashEnvironmentMixin(Environment):
+class StashEnvironmentHighPrecMixin(Environment):
def __init__(self, user=None, repo=None, **kw):
- super(StashEnvironmentMixin, self).__init__(**kw)
+ super(StashEnvironmentHighPrecMixin,
+ self).__init__(user=user, repo=repo, **kw)
self.__user = user
self.__repo = repo
- def get_repo_shortname(self):
- return self.__repo
-
def get_pusher(self):
return re.match('(.*?)\s*<', self.__user).group(1)
def get_pusher_email(self):
return self.__user
- def get_fromaddr(self, change=None):
- return self.__user
+class StashEnvironmentLowPrecMixin(Environment):
+ def __init__(self, user=None, repo=None, **kw):
+ super(StashEnvironmentLowPrecMixin, self).__init__(**kw)
+ self.__repo = repo
+ self.__user = user
-class StashEnvironment(
- StashEnvironmentMixin,
- ProjectdescEnvironmentMixin,
- ConfigMaxlinesEnvironmentMixin,
- ComputeFQDNEnvironmentMixin,
- ConfigFilterLinesEnvironmentMixin,
- ConfigRecipientsEnvironmentMixin,
- ConfigRefFilterEnvironmentMixin,
- PusherDomainEnvironmentMixin,
- ConfigOptionsEnvironmentMixin,
- Environment,
- ):
- pass
+ def get_repo_shortname(self):
+ return self.__repo
+
+ def get_fromaddr(self, change=None):
+ return self.__user
-class GerritEnvironmentMixin(Environment):
+class GerritEnvironmentHighPrecMixin(Environment):
def __init__(self, project=None, submitter=None, update_method=None, **kw):
- super(GerritEnvironmentMixin, self).__init__(**kw)
+ super(GerritEnvironmentHighPrecMixin,
+ self).__init__(submitter=submitter, project=project, **kw)
self.__project = project
self.__submitter = submitter
self.__update_method = update_method
"Make an 'update_method' value available for templates."
self.COMPUTED_KEYS += ['update_method']
- def get_repo_shortname(self):
- return self.__project
-
def get_pusher(self):
if self.__submitter:
if self.__submitter.find('<') != -1:
if self.__submitter:
return self.__submitter
else:
- return super(GerritEnvironmentMixin, self).get_pusher_email()
-
- def get_fromaddr(self, change=None):
- if self.__submitter and self.__submitter.find('<') != -1:
- return self.__submitter
- else:
- return super(GerritEnvironmentMixin, self).get_fromaddr(change)
+ return super(GerritEnvironmentHighPrecMixin, self).get_pusher_email()
def get_default_ref_ignore_regex(self):
- default = super(GerritEnvironmentMixin, self).get_default_ref_ignore_regex()
+ default = super(GerritEnvironmentHighPrecMixin, self).get_default_ref_ignore_regex()
return default + '|^refs/changes/|^refs/cache-automerge/|^refs/meta/'
def get_revision_recipients(self, revision):
if committer == 'Gerrit Code Review':
return []
else:
- return super(GerritEnvironmentMixin, self).get_revision_recipients(revision)
+ return super(GerritEnvironmentHighPrecMixin, self).get_revision_recipients(revision)
def get_update_method(self):
return self.__update_method
-class GerritEnvironment(
- GerritEnvironmentMixin,
- ProjectdescEnvironmentMixin,
- ConfigMaxlinesEnvironmentMixin,
- ComputeFQDNEnvironmentMixin,
- ConfigFilterLinesEnvironmentMixin,
- ConfigRecipientsEnvironmentMixin,
- ConfigRefFilterEnvironmentMixin,
- PusherDomainEnvironmentMixin,
- ConfigOptionsEnvironmentMixin,
- Environment,
- ):
- pass
+class GerritEnvironmentLowPrecMixin(Environment):
+ def __init__(self, project=None, submitter=None, **kw):
+ super(GerritEnvironmentLowPrecMixin, self).__init__(**kw)
+ self.__project = project
+ self.__submitter = submitter
+
+ def get_repo_shortname(self):
+ return self.__project
+
+ def get_fromaddr(self, change=None):
+ if self.__submitter and self.__submitter.find('<') != -1:
+ return self.__submitter
+ else:
+ return super(GerritEnvironmentLowPrecMixin, self).get_fromaddr(change)
class Push(object):
if not change.recipients:
change.environment.log_warning(
'*** no recipients configured so no email will be sent\n'
- '*** for %r update %s->%s\n'
+ '*** for %r update %s->%s'
% (change.refname, change.old.sha1, change.new.sha1,)
)
else:
if not change.environment.quiet:
change.environment.log_msg(
- 'Sending notification emails to: %s\n' % (change.recipients,))
+ 'Sending notification emails to: %s' % (change.recipients,))
extra_values = {'send_date': next(send_date)}
rev = change.send_single_combined_email(sha1s)
change.environment.log_warning(
'*** Too many new commits (%d), not sending commit emails.\n' % len(sha1s) +
'*** Try setting multimailhook.maxCommitEmails to a greater value\n' +
- '*** Currently, multimailhook.maxCommitEmails=%d\n' % max_emails
+ '*** Currently, multimailhook.maxCommitEmails=%d' % max_emails
)
return
for (num, sha1) in enumerate(sha1s):
rev = Revision(change, GitObject(sha1), num=num + 1, tot=len(sha1s))
if not rev.recipients and rev.cc_recipients:
- change.environment.log_msg('*** Replacing Cc: with To:\n')
+ change.environment.log_msg('*** Replacing Cc: with To:')
rev.recipients = rev.cc_recipients
rev.cc_recipients = None
if rev.recipients:
if unhandled_sha1s:
change.environment.log_error(
'ERROR: No emails were sent for the following new commits:\n'
- ' %s\n'
+ ' %s'
% ('\n '.join(sorted(unhandled_sha1s)),)
)
def run_as_post_receive_hook(environment, mailer):
- ref_filter_regex, is_inclusion_filter = environment.get_ref_filter_regex(True)
+ environment.check()
+ send_filter_regex, send_is_inclusion_filter = environment.get_ref_filter_regex(True)
+ ref_filter_regex, is_inclusion_filter = environment.get_ref_filter_regex(False)
changes = []
- for line in sys.stdin:
+ while True:
+ line = read_line(sys.stdin)
+ if line == '':
+ break
(oldrev, newrev, refname) = line.strip().split(' ', 2)
+ environment.get_logger().debug(
+ "run_as_post_receive_hook: oldrev=%s, newrev=%s, refname=%s" %
+ (oldrev, newrev, refname))
+
if not include_ref(refname, ref_filter_regex, is_inclusion_filter):
continue
+ if not include_ref(refname, send_filter_regex, send_is_inclusion_filter):
+ continue
changes.append(
ReferenceChange.create(environment, oldrev, newrev, refname)
)
def run_as_update_hook(environment, mailer, refname, oldrev, newrev, force_send=False):
- ref_filter_regex, is_inclusion_filter = environment.get_ref_filter_regex(True)
+ environment.check()
+ send_filter_regex, send_is_inclusion_filter = environment.get_ref_filter_regex(True)
+ ref_filter_regex, is_inclusion_filter = environment.get_ref_filter_regex(False)
if not include_ref(refname, ref_filter_regex, is_inclusion_filter):
return
+ if not include_ref(refname, send_filter_regex, send_is_inclusion_filter):
+ return
changes = [
ReferenceChange.create(
environment,
mailer.__del__()
+def check_ref_filter(environment):
+ send_filter_regex, send_is_inclusion = environment.get_ref_filter_regex(True)
+ ref_filter_regex, ref_is_inclusion = environment.get_ref_filter_regex(False)
+
+ def inc_exc_lusion(b):
+ if b:
+ return 'inclusion'
+ else:
+ return 'exclusion'
+
+ if send_filter_regex:
+ sys.stdout.write("DoSend/DontSend filter regex (" +
+ (inc_exc_lusion(send_is_inclusion)) +
+ '): ' + send_filter_regex.pattern +
+ '\n')
+    if ref_filter_regex:
+ sys.stdout.write("Include/Exclude filter regex (" +
+ (inc_exc_lusion(ref_is_inclusion)) +
+ '): ' + ref_filter_regex.pattern +
+ '\n')
+ sys.stdout.write(os.linesep)
+
+ sys.stdout.write(
+ "Refs marked as EXCLUDE are excluded by either refFilterInclusionRegex\n"
+ "or refFilterExclusionRegex. No emails will be sent for commits included\n"
+ "in these refs.\n"
+ "Refs marked as DONT-SEND are excluded by either refFilterDoSendRegex or\n"
+ "refFilterDontSendRegex, but not by either refFilterInclusionRegex or\n"
+ "refFilterExclusionRegex. Emails will be sent for commits included in these\n"
+ "refs only when the commit reaches a ref which isn't excluded.\n"
+ "Refs marked as DO-SEND are not excluded by any filter. Emails will\n"
+ "be sent normally for commits included in these refs.\n")
+
+ sys.stdout.write(os.linesep)
+
+ for refname in read_git_lines(['for-each-ref', '--format', '%(refname)']):
+ sys.stdout.write(refname)
+ if not include_ref(refname, ref_filter_regex, ref_is_inclusion):
+ sys.stdout.write(' EXCLUDE')
+ elif not include_ref(refname, send_filter_regex, send_is_inclusion):
+ sys.stdout.write(' DONT-SEND')
+ else:
+ sys.stdout.write(' DO-SEND')
+
+ sys.stdout.write(os.linesep)
+
+
+def show_env(environment, out):
+ out.write('Environment values:\n')
+ for (k, v) in sorted(environment.get_values().items()):
+ if k: # Don't show the {'' : ''} pair.
+ out.write(' %s : %r\n' % (k, v))
+ out.write('\n')
+ # Flush to avoid interleaving with further log output
+ out.flush()
+
+
+def check_setup(environment):
+ environment.check()
+ show_env(environment, sys.stdout)
+ sys.stdout.write("Now, checking that git-multimail's standard input "
+ "is properly set ..." + os.linesep)
+ sys.stdout.write("Please type some text and then press Return" + os.linesep)
+ stdin = sys.stdin.readline()
+ sys.stdout.write("You have just entered:" + os.linesep)
+ sys.stdout.write(stdin)
+ sys.stdout.write("git-multimail seems properly set up." + os.linesep)
+
+
def choose_mailer(config, environment):
mailer = config.get('mailer', default='sendmail')
smtppass = config.get('smtppass', default='')
smtpcacerts = config.get('smtpcacerts', default='')
mailer = SMTPMailer(
+ environment,
envelopesender=(environment.get_sender() or environment.get_fromaddr()),
smtpserver=smtpserver, smtpservertimeout=smtpservertimeout,
smtpserverdebuglevel=smtpserverdebuglevel,
command = config.get('sendmailcommand')
if command:
command = shlex.split(command)
- mailer = SendMailer(command=command, envelopesender=environment.get_sender())
+ mailer = SendMailer(environment,
+ command=command, envelopesender=environment.get_sender())
else:
environment.log_error(
'fatal: multimailhook.mailer is set to an incorrect value: "%s"\n' % mailer +
- 'please use one of "smtp" or "sendmail".\n'
+ 'please use one of "smtp" or "sendmail".'
)
sys.exit(1)
return mailer
KNOWN_ENVIRONMENTS = {
- 'generic': GenericEnvironmentMixin,
- 'gitolite': GitoliteEnvironmentMixin,
- 'stash': StashEnvironmentMixin,
- 'gerrit': GerritEnvironmentMixin,
+ 'generic': {'highprec': GenericEnvironmentMixin},
+ 'gitolite': {'highprec': GitoliteEnvironmentHighPrecMixin,
+ 'lowprec': GitoliteEnvironmentLowPrecMixin},
+ 'stash': {'highprec': StashEnvironmentHighPrecMixin,
+ 'lowprec': StashEnvironmentLowPrecMixin},
+ 'gerrit': {'highprec': GerritEnvironmentHighPrecMixin,
+ 'lowprec': GerritEnvironmentLowPrecMixin},
}
def choose_environment(config, osenv=None, env=None, recipients=None,
hook_info=None):
+ env_name = choose_environment_name(config, env, osenv)
+ environment_klass = build_environment_klass(env_name)
+ env = build_environment(environment_klass, env_name, config,
+ osenv, recipients, hook_info)
+ return env
+
+
+def choose_environment_name(config, env, osenv):
if not osenv:
osenv = os.environ
- environment_mixins = [
- ConfigRefFilterEnvironmentMixin,
- ProjectdescEnvironmentMixin,
- ConfigMaxlinesEnvironmentMixin,
- ComputeFQDNEnvironmentMixin,
- ConfigFilterLinesEnvironmentMixin,
- PusherDomainEnvironmentMixin,
- ConfigOptionsEnvironmentMixin,
- ]
- environment_kw = {
- 'osenv': osenv,
- 'config': config,
- }
-
if not env:
env = config.get('environment')
env = 'gitolite'
else:
env = 'generic'
+ return env
+
+
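+# Mixins shared by all environments.  build_environment_klass() places the
+# environment-specific "highprec" mixin before this list and the "lowprec"
+# mixin after it, so high-precedence overrides win over these defaults,
+# which in turn win over the low-precedence fallbacks.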
+COMMON_ENVIRONMENT_MIXINS = [
+ ConfigRecipientsEnvironmentMixin,
+ CLIRecipientsEnvironmentMixin,
+ ConfigRefFilterEnvironmentMixin,
+ ProjectdescEnvironmentMixin,
+ ConfigMaxlinesEnvironmentMixin,
+ ComputeFQDNEnvironmentMixin,
+ ConfigFilterLinesEnvironmentMixin,
+ PusherDomainEnvironmentMixin,
+ ConfigOptionsEnvironmentMixin,
+ ]
+
+
+def build_environment_klass(env_name):
+ if 'class' in KNOWN_ENVIRONMENTS[env_name]:
+ return KNOWN_ENVIRONMENTS[env_name]['class']
+
+ environment_mixins = []
+ known_env = KNOWN_ENVIRONMENTS[env_name]
+ if 'highprec' in known_env:
+ high_prec_mixin = known_env['highprec']
+ environment_mixins.append(high_prec_mixin)
+ environment_mixins = environment_mixins + COMMON_ENVIRONMENT_MIXINS
+ if 'lowprec' in known_env:
+ low_prec_mixin = known_env['lowprec']
+ environment_mixins.append(low_prec_mixin)
+ environment_mixins.append(Environment)
+    klass_name = env_name.capitalize() + 'Environment'
+ environment_klass = type(
+ klass_name,
+ tuple(environment_mixins),
+ {},
+ )
+ KNOWN_ENVIRONMENTS[env_name]['class'] = environment_klass
+ return environment_klass
+
- environment_mixins.insert(0, KNOWN_ENVIRONMENTS[env])
+GerritEnvironment = build_environment_klass('gerrit')
+StashEnvironment = build_environment_klass('stash')
+GitoliteEnvironment = build_environment_klass('gitolite')
+GenericEnvironment = build_environment_klass('generic')
+
+
+def build_environment(environment_klass, env, config,
+ osenv, recipients, hook_info):
+ environment_kw = {
+ 'osenv': osenv,
+ 'config': config,
+ }
if env == 'stash':
environment_kw['user'] = hook_info['stash_user']
environment_kw['submitter'] = hook_info['submitter']
environment_kw['update_method'] = hook_info['update_method']
- if recipients:
- environment_mixins.insert(0, StaticRecipientsEnvironmentMixin)
- environment_kw['refchange_recipients'] = recipients
- environment_kw['announce_recipients'] = recipients
- environment_kw['revision_recipients'] = recipients
- environment_kw['scancommitforcc'] = config.get('scancommitforcc')
- else:
- environment_mixins.insert(0, ConfigRecipientsEnvironmentMixin)
+ environment_kw['cli_recipients'] = recipients
- environment_klass = type(
- 'EffectiveEnvironment',
- tuple(environment_mixins) + (Environment,),
- {},
- )
return environment_klass(**environment_kw)
return __version__
-def compute_gerrit_options(options, args, required_gerrit_options):
+def compute_gerrit_options(options, args, required_gerrit_options,
+ raw_refname):
if None in required_gerrit_options:
raise SystemExit("Error: Specify all of --oldrev, --newrev, --refname, "
"and --project; or none of them.")
# Gerrit oddly omits 'refs/heads/' in the refname when calling
# ref-updated hook; put it back.
git_dir = get_git_dir()
- if (not os.path.exists(os.path.join(git_dir, options.refname)) and
+ if (not os.path.exists(os.path.join(git_dir, raw_refname)) and
os.path.exists(os.path.join(git_dir, 'refs', 'heads',
- options.refname))):
+ raw_refname))):
options.refname = 'refs/heads/' + options.refname
- # Convert each string option unicode for Python3.
- if PYTHON3:
- opts = ['environment', 'recipients', 'oldrev', 'newrev', 'refname',
- 'project', 'submitter', 'stash-user', 'stash-repo']
- for opt in opts:
- if not hasattr(options, opt):
- continue
- obj = getattr(options, opt)
- if obj:
- enc = obj.encode('utf-8', 'surrogateescape')
- dec = enc.decode('utf-8', 'replace')
- setattr(options, opt, dec)
-
# New revisions can appear in a gerrit repository either due to someone
# pushing directly (in which case options.submitter will be set), or they
# can press "Submit this patchset" in the web UI for some CR (in which
def check_hook_specific_args(options, args):
+ raw_refname = options.refname
+ # Convert each string option to unicode for Python 3.
+ if PYTHON3:
+ opts = ['environment', 'recipients', 'oldrev', 'newrev', 'refname',
+ 'project', 'submitter', 'stash_user', 'stash_repo']
+ for opt in opts:
+ if not hasattr(options, opt):
+ continue
+ obj = getattr(options, opt)
+ if obj:
+ enc = obj.encode('utf-8', 'surrogateescape')
+ dec = enc.decode('utf-8', 'replace')
+ setattr(options, opt, dec)
+
# First check for stash arguments
if (options.stash_user is None) != (options.stash_repo is None):
raise SystemExit("Error: Specify both of --stash-user and "
required_gerrit_options = (options.oldrev, options.newrev, options.refname,
options.project)
if required_gerrit_options != (None,) * 4:
- return compute_gerrit_options(options, args, required_gerrit_options)
+ return compute_gerrit_options(options, args, required_gerrit_options,
+ raw_refname)
# No special options in use, just return what we started with
return options, args, {}
+class Logger(object):
+ def parse_verbose(self, verbose):
+ if verbose > 0:
+ return logging.DEBUG
+ else:
+ return logging.INFO
+
+ def create_log_file(self, environment, name, path, verbosity):
+ log_file = logging.getLogger(name)
+ file_handler = logging.FileHandler(path)
+ log_fmt = logging.Formatter("%(asctime)s [%(levelname)-5.5s] %(message)s")
+ file_handler.setFormatter(log_fmt)
+ log_file.addHandler(file_handler)
+ log_file.setLevel(verbosity)
+ return log_file
+
+ def __init__(self, environment):
+ self.environment = environment
+ self.loggers = []
+ stderr_log = logging.getLogger('git_multimail.stderr')
+
+ class EncodedStderr(object):
+ def write(self, x):
+ write_str(sys.stderr, x)
+
+ def flush(self):
+ sys.stderr.flush()
+
+ stderr_handler = logging.StreamHandler(EncodedStderr())
+ stderr_log.addHandler(stderr_handler)
+ stderr_log.setLevel(self.parse_verbose(environment.verbose))
+ self.loggers.append(stderr_log)
+
+ if environment.debug_log_file is not None:
+ debug_log_file = self.create_log_file(
+ environment, 'git_multimail.debug', environment.debug_log_file, logging.DEBUG)
+ self.loggers.append(debug_log_file)
+
+ if environment.log_file is not None:
+ log_file = self.create_log_file(
+ environment, 'git_multimail.file', environment.log_file, logging.INFO)
+ self.loggers.append(log_file)
+
+ if environment.error_log_file is not None:
+ error_log_file = self.create_log_file(
+ environment, 'git_multimail.error', environment.error_log_file, logging.ERROR)
+ self.loggers.append(error_log_file)
+
+ def info(self, msg):
+ for l in self.loggers:
+ l.info(msg)
+
+ def debug(self, msg):
+ for l in self.loggers:
+ l.debug(msg)
+
+ def warning(self, msg):
+ for l in self.loggers:
+ l.warning(msg)
+
+ def error(self, msg):
+ for l in self.loggers:
+ l.error(msg)
+
+
def main(args):
parser = optparse.OptionParser(
description=__doc__,
'--show-env', action='store_true', default=False,
help=(
'Write to stderr the values determined for the environment '
- '(intended for debugging purposes).'
+ '(intended for debugging purposes), then proceed normally.'
),
)
parser.add_option(
"Display git-multimail's version"
),
)
+
+ parser.add_option(
+ '--python-version', action='store_true', default=False,
+ help=(
+ "Display the version of Python used by git-multimail"
+ ),
+ )
+
+ parser.add_option(
+ '--check-ref-filter', action='store_true', default=False,
+ help=(
+ 'List refs and show information on how git-multimail '
+ 'will process them.'
+ )
+ )
+
# The following options permit this script to be run as a gerrit
# ref-updated hook. See e.g.
# code.google.com/p/gerrit/source/browse/Documentation/config-hooks.txt
sys.stdout.write('git-multimail version ' + get_version() + '\n')
return
+ if options.python_version:
+ sys.stdout.write('Python version ' + sys.version + '\n')
+ return
+
if options.c:
Config.add_config_parameters(options.c)
config = Config('multimailhook')
+ environment = None
try:
environment = choose_environment(
config, osenv=os.environ,
)
if options.show_env:
- sys.stderr.write('Environment values:\n')
- for (k, v) in sorted(environment.get_values().items()):
- sys.stderr.write(' %s : %r\n' % (k, v))
- sys.stderr.write('\n')
+ show_env(environment, sys.stderr)
if options.stdout or environment.stdout:
mailer = OutputMailer(sys.stdout)
else:
mailer = choose_mailer(config, environment)
+ must_check_setup = os.environ.get('GIT_MULTIMAIL_CHECK_SETUP')
+ if must_check_setup == '':
+ must_check_setup = False
+ if options.check_ref_filter:
+ check_ref_filter(environment)
+ elif must_check_setup:
+ check_setup(environment)
# Dual mode: if arguments were specified on the command line, run
# like an update hook; otherwise, run as a post-receive hook.
- if args:
+ elif args:
if len(args) != 3:
parser.error('Need zero or three non-option arguments')
(refname, oldrev, newrev) = args
+ environment.get_logger().debug(
+ "run_as_update_hook: refname=%s, oldrev=%s, newrev=%s, force_send=%s" %
+ (refname, oldrev, newrev, options.force_send))
run_as_update_hook(environment, mailer, refname, oldrev, newrev, options.force_send)
else:
run_as_post_receive_hook(environment, mailer)
except ConfigurationException:
sys.exit(sys.exc_info()[1])
+ except SystemExit:
+ raise
except Exception:
t, e, tb = sys.exc_info()
import traceback
- sys.stdout.write('\n')
- sys.stdout.write('Exception \'' + t.__name__ +
- '\' raised. Please report this as a bug to\n')
- sys.stdout.write('https://github.com/git-multimail/git-multimail/issues\n')
- sys.stdout.write('with the information below:\n\n')
- sys.stdout.write('git-multimail version ' + get_version() + '\n')
- sys.stdout.write('Python version ' + sys.version + '\n')
- traceback.print_exc(file=sys.stdout)
+ sys.stderr.write('\n') # Avoid mixing message with previous output
+ msg = (
+ 'Exception \'' + t.__name__ +
+ '\' raised. Please report this as a bug to\n'
+ 'https://github.com/git-multimail/git-multimail/issues\n'
+ 'with the information below:\n\n'
+ 'git-multimail version ' + get_version() + '\n'
+ 'Python version ' + sys.version + '\n' +
+ traceback.format_exc())
+ try:
+ environment.get_logger().error(msg)
+ except:
+ sys.stderr.write(msg)
sys.exit(1)
if __name__ == '__main__':
[InputOutput::RequireCheckedSyscalls]
functions = open say close
-# This rules demands to add a dependancy for the Readonly module. This is not
+# This rule demands adding a dependency on the Readonly module. This is not
# wanted.
[-ValuesAndExpressions::ProhibitConstantPragma]
print {*STDERR} "Check the configuration of file uploads in your mediawiki.\n";
return $newrevid;
}
- # Deleting and uploading a file requires a priviledged user
+ # Deleting and uploading a file requires a privileged user
if ($file_deleted) {
$mediawiki = connect_maybe($mediawiki, $remotename, $url);
my $query = {
# See the License for the specific language governing permissions and
# limitations under the License.
-BUILD_LABEL=$(shell date +"%s")
+BUILD_LABEL=$(shell cut -d" " -f3 ../../GIT-VERSION-FILE)
TAR_OUT=$(shell go env GOOS)_$(shell go env GOARCH).tar.gz
all: git-remote-persistent-https git-remote-persistent-https--proxy \
ln -f -s git-remote-persistent-https git-remote-persistent-http
git-remote-persistent-https:
+ case $$(go version) in \
+ "go version go"1.[0-5].*) EQ=" " ;; *) EQ="=" ;; esac && \
go build -o git-remote-persistent-https \
- -ldflags "-X main._BUILD_EMBED_LABEL $(BUILD_LABEL)"
+ -ldflags "-X main._BUILD_EMBED_LABEL$${EQ}$(BUILD_LABEL)"
clean:
rm -f git-remote-persistent-http* *.tar.gz
#
# Copyright (C) 2009 Avery Pennarun <apenwarr@gmail.com>
#
-if [ $# -eq 0 ]; then
- set -- -h
+if test $# -eq 0
+then
+ set -- -h
fi
OPTS_SPEC="\
git subtree add --prefix=<prefix> <commit>
message=
prefix=
-debug()
-{
- if [ -n "$debug" ]; then
+debug () {
+ if test -n "$debug"
+ then
printf "%s\n" "$*" >&2
fi
}
-say()
-{
- if [ -z "$quiet" ]; then
+say () {
+ if test -z "$quiet"
+ then
printf "%s\n" "$*" >&2
fi
}
-progress()
-{
- if [ -z "$quiet" ]; then
+progress () {
+ if test -z "$quiet"
+ then
printf "%s\r" "$*" >&2
fi
}
-assert()
-{
- if "$@"; then
- :
- else
+assert () {
+ if ! "$@"
+ then
die "assertion failed: " "$@"
fi
}
-#echo "Options: $*"
-
-while [ $# -gt 0 ]; do
+while test $# -gt 0
+do
opt="$1"
shift
+
case "$opt" in
- -q) quiet=1 ;;
- -d) debug=1 ;;
- --annotate) annotate="$1"; shift ;;
- --no-annotate) annotate= ;;
- -b) branch="$1"; shift ;;
- -P) prefix="${1%/}"; shift ;;
- -m) message="$1"; shift ;;
- --no-prefix) prefix= ;;
- --onto) onto="$1"; shift ;;
- --no-onto) onto= ;;
- --rejoin) rejoin=1 ;;
- --no-rejoin) rejoin= ;;
- --ignore-joins) ignore_joins=1 ;;
- --no-ignore-joins) ignore_joins= ;;
- --squash) squash=1 ;;
- --no-squash) squash= ;;
- --) break ;;
- *) die "Unexpected option: $opt" ;;
+ -q)
+ quiet=1
+ ;;
+ -d)
+ debug=1
+ ;;
+ --annotate)
+ annotate="$1"
+ shift
+ ;;
+ --no-annotate)
+ annotate=
+ ;;
+ -b)
+ branch="$1"
+ shift
+ ;;
+ -P)
+ prefix="${1%/}"
+ shift
+ ;;
+ -m)
+ message="$1"
+ shift
+ ;;
+ --no-prefix)
+ prefix=
+ ;;
+ --onto)
+ onto="$1"
+ shift
+ ;;
+ --no-onto)
+ onto=
+ ;;
+ --rejoin)
+ rejoin=1
+ ;;
+ --no-rejoin)
+ rejoin=
+ ;;
+ --ignore-joins)
+ ignore_joins=1
+ ;;
+ --no-ignore-joins)
+ ignore_joins=
+ ;;
+ --squash)
+ squash=1
+ ;;
+ --no-squash)
+ squash=
+ ;;
+ --)
+ break
+ ;;
+ *)
+ die "Unexpected option: $opt"
+ ;;
esac
done
command="$1"
shift
+
case "$command" in
- add|merge|pull) default= ;;
- split|push) default="--default HEAD" ;;
- *) die "Unknown command '$command'" ;;
+add|merge|pull)
+ default=
+ ;;
+split|push)
+ default="--default HEAD"
+ ;;
+*)
+ die "Unknown command '$command'"
+ ;;
esac
-if [ -z "$prefix" ]; then
+if test -z "$prefix"
+then
die "You must provide the --prefix option."
fi
case "$command" in
- add) [ -e "$prefix" ] &&
- die "prefix '$prefix' already exists." ;;
- *) [ -e "$prefix" ] ||
- die "'$prefix' does not exist; use 'git subtree add'" ;;
+add)
+ test -e "$prefix" &&
+ die "prefix '$prefix' already exists."
+ ;;
+*)
+ test -e "$prefix" ||
+ die "'$prefix' does not exist; use 'git subtree add'"
+ ;;
esac
dir="$(dirname "$prefix/.")"
-if [ "$command" != "pull" -a "$command" != "add" -a "$command" != "push" ]; then
+if test "$command" != "pull" &&
+ test "$command" != "add" &&
+ test "$command" != "push"
+then
revs=$(git rev-parse $default --revs-only "$@") || exit $?
- dirs="$(git rev-parse --no-revs --no-flags "$@")" || exit $?
- if [ -n "$dirs" ]; then
+ dirs=$(git rev-parse --no-revs --no-flags "$@") || exit $?
+ if test -n "$dirs"
+ then
die "Error: Use --prefix instead of bare filenames."
fi
fi
debug "opts: {$*}"
debug
-cache_setup()
-{
+cache_setup () {
cachedir="$GIT_DIR/subtree-cache/$$"
- rm -rf "$cachedir" || die "Can't delete old cachedir: $cachedir"
- mkdir -p "$cachedir" || die "Can't create new cachedir: $cachedir"
- mkdir -p "$cachedir/notree" || die "Can't create new cachedir: $cachedir/notree"
+ rm -rf "$cachedir" ||
+ die "Can't delete old cachedir: $cachedir"
+ mkdir -p "$cachedir" ||
+ die "Can't create new cachedir: $cachedir"
+ mkdir -p "$cachedir/notree" ||
+ die "Can't create new cachedir: $cachedir/notree"
debug "Using cachedir: $cachedir" >&2
}
-cache_get()
-{
- for oldrev in $*; do
- if [ -r "$cachedir/$oldrev" ]; then
+cache_get () {
+ for oldrev in "$@"
+ do
+ if test -r "$cachedir/$oldrev"
+ then
read newrev <"$cachedir/$oldrev"
echo $newrev
fi
done
}
-cache_miss()
-{
- for oldrev in $*; do
- if [ ! -r "$cachedir/$oldrev" ]; then
+cache_miss () {
+ for oldrev in "$@"
+ do
+ if ! test -r "$cachedir/$oldrev"
+ then
echo $oldrev
fi
done
}
-check_parents()
-{
- missed=$(cache_miss $*)
- for miss in $missed; do
- if [ ! -r "$cachedir/notree/$miss" ]; then
+check_parents () {
+ missed=$(cache_miss "$@")
+ for miss in $missed
+ do
+ if ! test -r "$cachedir/notree/$miss"
+ then
debug " incorrect order: $miss"
fi
done
}
-set_notree()
-{
+set_notree () {
echo "1" > "$cachedir/notree/$1"
}
-cache_set()
-{
+cache_set () {
oldrev="$1"
newrev="$2"
- if [ "$oldrev" != "latest_old" \
- -a "$oldrev" != "latest_new" \
- -a -e "$cachedir/$oldrev" ]; then
+ if test "$oldrev" != "latest_old" &&
+ test "$oldrev" != "latest_new" &&
+ test -e "$cachedir/$oldrev"
+ then
die "cache for $oldrev already exists!"
fi
echo "$newrev" >"$cachedir/$oldrev"
}
-rev_exists()
-{
- if git rev-parse "$1" >/dev/null 2>&1; then
+rev_exists () {
+ if git rev-parse "$1" >/dev/null 2>&1
+ then
return 0
else
return 1
fi
}
-rev_is_descendant_of_branch()
-{
+rev_is_descendant_of_branch () {
newrev="$1"
branch="$2"
- branch_hash=$(git rev-parse $branch)
- match=$(git rev-list -1 $branch_hash ^$newrev)
+ branch_hash=$(git rev-parse "$branch")
+ match=$(git rev-list -1 "$branch_hash" "^$newrev")
- if [ -z "$match" ]; then
+ if test -z "$match"
+ then
return 0
else
return 1
# if a commit doesn't have a parent, this might not work. But we only want
# to remove the parent from the rev-list, and since it doesn't exist, it won't
# be there anyway, so do nothing in that case.
-try_remove_previous()
-{
- if rev_exists "$1^"; then
+try_remove_previous () {
+ if rev_exists "$1^"
+ then
echo "^$1^"
fi
}
-find_latest_squash()
-{
+find_latest_squash () {
debug "Looking for latest squash ($dir)..."
dir="$1"
sq=
sub=
git log --grep="^git-subtree-dir: $dir/*\$" \
--pretty=format:'START %H%n%s%n%n%b%nEND%n' HEAD |
- while read a b junk; do
+ while read a b junk
+ do
debug "$a $b $junk"
debug "{{$sq/$main/$sub}}"
case "$a" in
- START) sq="$b" ;;
- git-subtree-mainline:) main="$b" ;;
- git-subtree-split:)
- sub="$(git rev-parse "$b^0")" ||
- die "could not rev-parse split hash $b from commit $sq"
- ;;
- END)
- if [ -n "$sub" ]; then
- if [ -n "$main" ]; then
- # a rejoin commit?
- # Pretend its sub was a squash.
- sq="$sub"
- fi
- debug "Squash found: $sq $sub"
- echo "$sq" "$sub"
- break
+ START)
+ sq="$b"
+ ;;
+ git-subtree-mainline:)
+ main="$b"
+ ;;
+ git-subtree-split:)
+ sub="$(git rev-parse "$b^0")" ||
+ die "could not rev-parse split hash $b from commit $sq"
+ ;;
+ END)
+ if test -n "$sub"
+ then
+ if test -n "$main"
+ then
+ # a rejoin commit?
+ # Pretend its sub was a squash.
+ sq="$sub"
fi
- sq=
- main=
- sub=
- ;;
+ debug "Squash found: $sq $sub"
+ echo "$sq" "$sub"
+ break
+ fi
+ sq=
+ main=
+ sub=
+ ;;
esac
done
}
-find_existing_splits()
-{
+find_existing_splits () {
debug "Looking for prior splits..."
dir="$1"
revs="$2"
sub=
git log --grep="^git-subtree-dir: $dir/*\$" \
--pretty=format:'START %H%n%s%n%n%b%nEND%n' $revs |
- while read a b junk; do
+ while read a b junk
+ do
case "$a" in
- START) sq="$b" ;;
- git-subtree-mainline:) main="$b" ;;
- git-subtree-split:)
- sub="$(git rev-parse "$b^0")" ||
- die "could not rev-parse split hash $b from commit $sq"
- ;;
- END)
- debug " Main is: '$main'"
- if [ -z "$main" -a -n "$sub" ]; then
- # squash commits refer to a subtree
- debug " Squash: $sq from $sub"
- cache_set "$sq" "$sub"
- fi
- if [ -n "$main" -a -n "$sub" ]; then
- debug " Prior: $main -> $sub"
- cache_set $main $sub
- cache_set $sub $sub
- try_remove_previous "$main"
- try_remove_previous "$sub"
- fi
- main=
- sub=
- ;;
+ START)
+ sq="$b"
+ ;;
+ git-subtree-mainline:)
+ main="$b"
+ ;;
+ git-subtree-split:)
+ sub="$(git rev-parse "$b^0")" ||
+ die "could not rev-parse split hash $b from commit $sq"
+ ;;
+ END)
+ debug " Main is: '$main'"
+ if test -z "$main" -a -n "$sub"
+ then
+ # squash commits refer to a subtree
+ debug " Squash: $sq from $sub"
+ cache_set "$sq" "$sub"
+ fi
+ if test -n "$main" -a -n "$sub"
+ then
+ debug " Prior: $main -> $sub"
+ cache_set $main $sub
+ cache_set $sub $sub
+ try_remove_previous "$main"
+ try_remove_previous "$sub"
+ fi
+ main=
+ sub=
+ ;;
esac
done
}
-copy_commit()
-{
+copy_commit () {
# We're going to set some environment vars here, so
# do it in a subshell to get rid of them safely later
debug copy_commit "{$1}" "{$2}" "{$3}"
GIT_COMMITTER_NAME \
GIT_COMMITTER_EMAIL \
GIT_COMMITTER_DATE
- (printf "%s" "$annotate"; cat ) |
+ (
+ printf "%s" "$annotate"
+ cat
+ ) |
git commit-tree "$2" $3 # reads the rest of stdin
) || die "Can't copy commit $1"
}
-add_msg()
-{
+add_msg () {
dir="$1"
latest_old="$2"
latest_new="$3"
- if [ -n "$message" ]; then
+ if test -n "$message"
+ then
commit_message="$message"
else
commit_message="Add '$dir/' from commit '$latest_new'"
fi
cat <<-EOF
$commit_message
-
+
git-subtree-dir: $dir
git-subtree-mainline: $latest_old
git-subtree-split: $latest_new
EOF
}
-add_squashed_msg()
-{
- if [ -n "$message" ]; then
+add_squashed_msg () {
+ if test -n "$message"
+ then
echo "$message"
else
echo "Merge commit '$1' as '$2'"
fi
}
-rejoin_msg()
-{
+rejoin_msg () {
dir="$1"
latest_old="$2"
latest_new="$3"
- if [ -n "$message" ]; then
+ if test -n "$message"
+ then
commit_message="$message"
else
commit_message="Split '$dir/' into commit '$latest_new'"
fi
cat <<-EOF
$commit_message
-
+
git-subtree-dir: $dir
git-subtree-mainline: $latest_old
git-subtree-split: $latest_new
EOF
}
-squash_msg()
-{
+squash_msg () {
dir="$1"
oldsub="$2"
newsub="$3"
newsub_short=$(git rev-parse --short "$newsub")
-
- if [ -n "$oldsub" ]; then
+
+ if test -n "$oldsub"
+ then
oldsub_short=$(git rev-parse --short "$oldsub")
echo "Squashed '$dir/' changes from $oldsub_short..$newsub_short"
echo
else
echo "Squashed '$dir/' content from commit $newsub_short"
fi
-
+
echo
echo "git-subtree-dir: $dir"
echo "git-subtree-split: $newsub"
}
-toptree_for_commit()
-{
+toptree_for_commit () {
commit="$1"
git log -1 --pretty=format:'%T' "$commit" -- || exit $?
}
-subtree_for_commit()
-{
+subtree_for_commit () {
commit="$1"
dir="$2"
git ls-tree "$commit" -- "$dir" |
- while read mode type tree name; do
- assert [ "$name" = "$dir" ]
- assert [ "$type" = "tree" -o "$type" = "commit" ]
- [ "$type" = "commit" ] && continue # ignore submodules
+ while read mode type tree name
+ do
+ assert test "$name" = "$dir"
+ assert test "$type" = "tree" -o "$type" = "commit"
+ test "$type" = "commit" && continue # ignore submodules
echo $tree
break
done
}
-tree_changed()
-{
+tree_changed () {
tree=$1
shift
- if [ $# -ne 1 ]; then
+ if test $# -ne 1
+ then
return 0 # weird parents, consider it changed
else
ptree=$(toptree_for_commit $1)
- if [ "$ptree" != "$tree" ]; then
+ if test "$ptree" != "$tree"
+ then
return 0 # changed
else
return 1 # not changed
fi
}
-new_squash_commit()
-{
+new_squash_commit () {
old="$1"
oldsub="$2"
newsub="$3"
tree=$(toptree_for_commit $newsub) || exit $?
- if [ -n "$old" ]; then
- squash_msg "$dir" "$oldsub" "$newsub" |
- git commit-tree "$tree" -p "$old" || exit $?
+ if test -n "$old"
+ then
+ squash_msg "$dir" "$oldsub" "$newsub" |
+ git commit-tree "$tree" -p "$old" || exit $?
else
squash_msg "$dir" "" "$newsub" |
- git commit-tree "$tree" || exit $?
+ git commit-tree "$tree" || exit $?
fi
}
-copy_or_skip()
-{
+copy_or_skip () {
rev="$1"
tree="$2"
newparents="$3"
- assert [ -n "$tree" ]
+ assert test -n "$tree"
identical=
nonidentical=
p=
gotparents=
- for parent in $newparents; do
+ for parent in $newparents
+ do
ptree=$(toptree_for_commit $parent) || exit $?
- [ -z "$ptree" ] && continue
- if [ "$ptree" = "$tree" ]; then
+ test -z "$ptree" && continue
+ if test "$ptree" = "$tree"
+ then
# an identical parent could be used in place of this rev.
identical="$parent"
else
nonidentical="$parent"
fi
-
+
# sometimes both old parents map to the same newparent;
# eliminate duplicates
is_new=1
- for gp in $gotparents; do
- if [ "$gp" = "$parent" ]; then
+ for gp in $gotparents
+ do
+ if test "$gp" = "$parent"
+ then
is_new=
break
fi
done
- if [ -n "$is_new" ]; then
+ if test -n "$is_new"
+ then
gotparents="$gotparents $parent"
p="$p -p $parent"
fi
done
copycommit=
- if [ -n "$identical" ] && [ -n "$nonidentical" ]; then
+ if test -n "$identical" && test -n "$nonidentical"
+ then
extras=$(git rev-list --count $identical..$nonidentical)
- if [ "$extras" -ne 0 ]; then
+ if test "$extras" -ne 0
+ then
# we need to preserve history along the other branch
copycommit=1
fi
fi
- if [ -n "$identical" ] && [ -z "$copycommit" ]; then
+ if test -n "$identical" && test -z "$copycommit"
+ then
echo $identical
else
- copy_commit $rev $tree "$p" || exit $?
+ copy_commit "$rev" "$tree" "$p" || exit $?
fi
}
-ensure_clean()
-{
- if ! git diff-index HEAD --exit-code --quiet 2>&1; then
+ensure_clean () {
+ if ! git diff-index HEAD --exit-code --quiet 2>&1
+ then
die "Working tree has modifications. Cannot add."
fi
- if ! git diff-index --cached HEAD --exit-code --quiet 2>&1; then
+ if ! git diff-index --cached HEAD --exit-code --quiet 2>&1
+ then
die "Index has modifications. Cannot add."
fi
}
-ensure_valid_ref_format()
-{
+ensure_valid_ref_format () {
git check-ref-format "refs/heads/$1" ||
- die "'$1' does not look like a ref"
+ die "'$1' does not look like a ref"
}
-cmd_add()
-{
- if [ -e "$dir" ]; then
+cmd_add () {
+ if test -e "$dir"
+ then
die "'$dir' already exists. Cannot add."
fi
ensure_clean
-
- if [ $# -eq 1 ]; then
- git rev-parse -q --verify "$1^{commit}" >/dev/null ||
- die "'$1' does not refer to a commit"
-
- "cmd_add_commit" "$@"
- elif [ $# -eq 2 ]; then
- # Technically we could accept a refspec here but we're
- # just going to turn around and add FETCH_HEAD under the
- # specified directory. Allowing a refspec might be
- # misleading because we won't do anything with any other
- # branches fetched via the refspec.
- ensure_valid_ref_format "$2"
-
- "cmd_add_repository" "$@"
+
+ if test $# -eq 1
+ then
+ git rev-parse -q --verify "$1^{commit}" >/dev/null ||
+ die "'$1' does not refer to a commit"
+
+ cmd_add_commit "$@"
+
+ elif test $# -eq 2
+ then
+ # Technically we could accept a refspec here but we're
+ # just going to turn around and add FETCH_HEAD under the
+ # specified directory. Allowing a refspec might be
+ # misleading because we won't do anything with any other
+ # branches fetched via the refspec.
+ ensure_valid_ref_format "$2"
+
+ cmd_add_repository "$@"
else
- say "error: parameters were '$@'"
- die "Provide either a commit or a repository and commit."
+ say "error: parameters were '$@'"
+ die "Provide either a commit or a repository and commit."
fi
}
-cmd_add_repository()
-{
+cmd_add_repository () {
echo "git fetch" "$@"
repository=$1
refspec=$2
cmd_add_commit "$@"
}
-cmd_add_commit()
-{
+cmd_add_commit () {
revs=$(git rev-parse $default --revs-only "$@") || exit $?
set -- $revs
rev="$1"
-
+
debug "Adding $dir as '$rev'..."
git read-tree --prefix="$dir" $rev || exit $?
git checkout -- "$dir" || exit $?
tree=$(git write-tree) || exit $?
-
+
headrev=$(git rev-parse HEAD) || exit $?
- if [ -n "$headrev" -a "$headrev" != "$rev" ]; then
+ if test -n "$headrev" && test "$headrev" != "$rev"
+ then
headp="-p $headrev"
else
headp=
fi
-
- if [ -n "$squash" ]; then
+
+ if test -n "$squash"
+ then
rev=$(new_squash_commit "" "" "$rev") || exit $?
commit=$(add_squashed_msg "$rev" "$dir" |
- git commit-tree $tree $headp -p "$rev") || exit $?
+ git commit-tree "$tree" $headp -p "$rev") || exit $?
else
revp=$(peel_committish "$rev") &&
- commit=$(add_msg "$dir" "$headrev" "$rev" |
- git commit-tree $tree $headp -p "$revp") || exit $?
+ commit=$(add_msg "$dir" $headrev "$rev" |
+ git commit-tree "$tree" $headp -p "$revp") || exit $?
fi
git reset "$commit" || exit $?
-
+
say "Added dir '$dir'"
}
-cmd_split()
-{
+cmd_split () {
debug "Splitting $dir..."
cache_setup || exit $?
-
- if [ -n "$onto" ]; then
+
+ if test -n "$onto"
+ then
debug "Reading history for --onto=$onto..."
git rev-list $onto |
- while read rev; do
+ while read rev
+ do
# the 'onto' history is already just the subdir, so
# any parent we find there can be used verbatim
debug " cache: $rev"
- cache_set $rev $rev
+ cache_set "$rev" "$rev"
done
fi
-
- if [ -n "$ignore_joins" ]; then
+
+ if test -n "$ignore_joins"
+ then
unrevs=
else
unrevs="$(find_existing_splits "$dir" "$revs")"
fi
-
+
# We can't restrict rev-list to only $dir here, because some of our
# parents have the $dir contents at the root, and those won't match.
# (and rev-list --follow doesn't seem to solve this)
revcount=0
createcount=0
eval "$grl" |
- while read rev parents; do
+ while read rev parents
+ do
revcount=$(($revcount + 1))
progress "$revcount/$revmax ($createcount)"
debug "Processing commit: $rev"
- exists=$(cache_get $rev)
- if [ -n "$exists" ]; then
+ exists=$(cache_get "$rev")
+ if test -n "$exists"
+ then
debug " prior: $exists"
continue
fi
debug " parents: $parents"
newparents=$(cache_get $parents)
debug " newparents: $newparents"
-
- tree=$(subtree_for_commit $rev "$dir")
+
+ tree=$(subtree_for_commit "$rev" "$dir")
debug " tree is: $tree"
check_parents $parents
-
+
# ugly. is there no better way to tell if this is a subtree
# vs. a mainline commit? Does it matter?
- if [ -z $tree ]; then
- set_notree $rev
- if [ -n "$newparents" ]; then
- cache_set $rev $rev
+ if test -z "$tree"
+ then
+ set_notree "$rev"
+ if test -n "$newparents"
+ then
+ cache_set "$rev" "$rev"
fi
continue
fi
newrev=$(copy_or_skip "$rev" "$tree" "$newparents") || exit $?
debug " newrev is: $newrev"
- cache_set $rev $newrev
- cache_set latest_new $newrev
- cache_set latest_old $rev
+ cache_set "$rev" "$newrev"
+ cache_set latest_new "$newrev"
+ cache_set latest_old "$rev"
done || exit $?
+
latest_new=$(cache_get latest_new)
- if [ -z "$latest_new" ]; then
+ if test -z "$latest_new"
+ then
die "No new revisions were found"
fi
-
- if [ -n "$rejoin" ]; then
+
+ if test -n "$rejoin"
+ then
debug "Merging split branch into HEAD..."
latest_old=$(cache_get latest_old)
git merge -s ours \
- -m "$(rejoin_msg "$dir" $latest_old $latest_new)" \
- $latest_new >&2 || exit $?
- fi
- if [ -n "$branch" ]; then
- if rev_exists "refs/heads/$branch"; then
- if ! rev_is_descendant_of_branch $latest_new $branch; then
+ --allow-unrelated-histories \
+ -m "$(rejoin_msg "$dir" "$latest_old" "$latest_new")" \
+ "$latest_new" >&2 || exit $?
+ fi
+ if test -n "$branch"
+ then
+ if rev_exists "refs/heads/$branch"
+ then
+ if ! rev_is_descendant_of_branch "$latest_new" "$branch"
+ then
die "Branch '$branch' is not an ancestor of commit '$latest_new'."
fi
action='Updated'
else
action='Created'
fi
- git update-ref -m 'subtree split' "refs/heads/$branch" $latest_new || exit $?
+ git update-ref -m 'subtree split' \
+ "refs/heads/$branch" "$latest_new" || exit $?
say "$action branch '$branch'"
fi
- echo $latest_new
+ echo "$latest_new"
exit 0
}
-cmd_merge()
-{
+cmd_merge () {
revs=$(git rev-parse $default --revs-only "$@") || exit $?
ensure_clean
-
+
set -- $revs
- if [ $# -ne 1 ]; then
+ if test $# -ne 1
+ then
die "You must provide exactly one revision. Got: '$revs'"
fi
rev="$1"
-
- if [ -n "$squash" ]; then
+
+ if test -n "$squash"
+ then
first_split="$(find_latest_squash "$dir")"
- if [ -z "$first_split" ]; then
+ if test -z "$first_split"
+ then
die "Can't squash-merge: '$dir' was never added."
fi
set $first_split
old=$1
sub=$2
- if [ "$sub" = "$rev" ]; then
+ if test "$sub" = "$rev"
+ then
say "Subtree is already at commit $rev."
exit 0
fi
fi
version=$(git version)
- if [ "$version" \< "git version 1.7" ]; then
- if [ -n "$message" ]; then
- git merge -s subtree --message="$message" $rev
+ if test "$version" \< "git version 1.7"
+ then
+ if test -n "$message"
+ then
+ git merge -s subtree --message="$message" "$rev"
else
- git merge -s subtree $rev
+ git merge -s subtree "$rev"
fi
else
- if [ -n "$message" ]; then
- git merge -Xsubtree="$prefix" --message="$message" $rev
+ if test -n "$message"
+ then
+ git merge -Xsubtree="$prefix" \
+ --message="$message" "$rev"
else
git merge -Xsubtree="$prefix" $rev
fi
fi
}
-cmd_pull()
-{
- if [ $# -ne 2 ]; then
- die "You must provide <repository> <ref>"
+cmd_pull () {
+ if test $# -ne 2
+ then
+ die "You must provide <repository> <ref>"
fi
ensure_clean
ensure_valid_ref_format "$2"
cmd_merge "$@"
}
-cmd_push()
-{
- if [ $# -ne 2 ]; then
- die "You must provide <repository> <ref>"
+cmd_push () {
+ if test $# -ne 2
+ then
+ die "You must provide <repository> <ref>"
fi
ensure_valid_ref_format "$2"
- if [ -e "$dir" ]; then
- repository=$1
- refspec=$2
- echo "git push using: " $repository $refspec
- localrev=$(git subtree split --prefix="$prefix") || die
- git push "$repository" $localrev:refs/heads/$refspec
+ if test -e "$dir"
+ then
+ repository=$1
+ refspec=$2
+ echo "git push using: " "$repository" "$refspec"
+ localrev=$(git subtree split --prefix="$prefix") || die
+ git push "$repository" "$localrev":"refs/heads/$refspec"
else
- die "'$dir' must already exist. Try 'git subtree add'."
+ die "'$dir' must already exist. Try 'git subtree add'."
fi
}
subtree_test_create_repo()
{
- test_create_repo "$1"
+ test_create_repo "$1" &&
(
- cd $1
+ cd "$1" &&
git config log.date relative
)
}
create()
{
- echo "$1" >"$1"
+ echo "$1" >"$1" &&
git add "$1"
}
}
test_create_commit() (
- repo=$1
- commit=$2
- cd "$repo"
- mkdir -p $(dirname "$commit") \
+ repo=$1 &&
+ commit=$2 &&
+ cd "$repo" &&
+ mkdir -p "$(dirname "$commit")" \
|| error "Could not create directory for commit"
- echo "$commit" >"$commit"
+ echo "$commit" >"$commit" &&
git add "$commit" || error "Could not add commit"
git commit -m "$commit" || error "Could not commit"
)
)
'
+next_test
+test_expect_success 'split sub dir/ with --rejoin from scratch' '
+ subtree_test_create_repo "$subtree_test_count" &&
+ test_create_commit "$subtree_test_count" main1 &&
+ (
+ cd "$subtree_test_count" &&
+ mkdir "sub dir" &&
+ echo file >"sub dir"/file &&
+ git add "sub dir/file" &&
+ git commit -m"sub dir file" &&
+ split_hash=$(git subtree split --prefix="sub dir" --rejoin) &&
+ git subtree split --prefix="sub dir" --rejoin &&
+ check_equal "$(last_commit_message)" "Split '\''sub dir/'\'' into commit '\''$split_hash'\''"
+ )
+ '
+
next_test
test_expect_success 'split sub dir/ with --rejoin and --message' '
subtree_test_create_repo "$subtree_test_count" &&
# also test that we still can split out an entirely new subtree
# if the parent of the first commit in the tree is not empty,
- # then the new subtree has accidently been attached to something
+ # then the new subtree has accidentally been attached to something
git subtree split --prefix="sub dir2" --branch subproj2-br &&
check_equal "$(git log --pretty=format:%P -1 subproj2-br)" ""
)
return EOL_LF;
case CRLF_UNDEFINED:
case CRLF_AUTO_CRLF:
+ return EOL_CRLF;
case CRLF_AUTO_INPUT:
+ return EOL_LF;
case CRLF_TEXT:
case CRLF_AUTO:
/* fall through */
}
static void check_safe_crlf(const char *path, enum crlf_action crlf_action,
- struct text_stat *stats, enum safe_crlf checksafe)
+ struct text_stat *old_stats, struct text_stat *new_stats,
+ enum safe_crlf checksafe)
{
- if (!checksafe)
- return;
-
- if (output_eol(crlf_action) == EOL_LF) {
+ if (old_stats->crlf && !new_stats->crlf ) {
/*
- * CRLFs would not be restored by checkout:
- * check if we'd remove CRLFs
+ * CRLFs would not be restored by checkout
*/
- if (stats->crlf) {
- if (checksafe == SAFE_CRLF_WARN)
- warning("CRLF will be replaced by LF in %s.\nThe file will have its original line endings in your working directory.", path);
- else /* i.e. SAFE_CRLF_FAIL */
- die("CRLF would be replaced by LF in %s.", path);
- }
- } else if (output_eol(crlf_action) == EOL_CRLF) {
+ if (checksafe == SAFE_CRLF_WARN)
+ warning("CRLF will be replaced by LF in %s.\nThe file will have its original line endings in your working directory.", path);
+ else /* i.e. SAFE_CRLF_FAIL */
+ die("CRLF would be replaced by LF in %s.", path);
+ } else if (old_stats->lonelf && !new_stats->lonelf ) {
/*
- * CRLFs would be added by checkout:
- * check if we have "naked" LFs
+ * CRLFs would be added by checkout
*/
- if (stats->lonelf) {
- if (checksafe == SAFE_CRLF_WARN)
- warning("LF will be replaced by CRLF in %s.\nThe file will have its original line endings in your working directory.", path);
- else /* i.e. SAFE_CRLF_FAIL */
- die("LF would be replaced by CRLF in %s", path);
- }
+ if (checksafe == SAFE_CRLF_WARN)
+ warning("LF will be replaced by CRLF in %s.\nThe file will have its original line endings in your working directory.", path);
+ else /* i.e. SAFE_CRLF_FAIL */
+ die("LF would be replaced by CRLF in %s", path);
}
}
return has_cr;
}
+static int will_convert_lf_to_crlf(size_t len, struct text_stat *stats,
+ enum crlf_action crlf_action)
+{
+ if (output_eol(crlf_action) != EOL_CRLF)
+ return 0;
+ /* No "naked" LF? Nothing to convert, regardless. */
+ if (!stats->lonelf)
+ return 0;
+
+ if (crlf_action == CRLF_AUTO || crlf_action == CRLF_AUTO_INPUT || crlf_action == CRLF_AUTO_CRLF) {
+ /* If we have any CR or CRLF line endings, we do not touch it */
+ /* This is the new safer autocrlf-handling */
+ if (stats->lonecr || stats->crlf)
+ return 0;
+
+ if (convert_is_binary(len, stats))
+ return 0;
+ }
+ return 1;
+
+}
+
static int crlf_to_git(const char *path, const char *src, size_t len,
struct strbuf *buf,
enum crlf_action crlf_action, enum safe_crlf checksafe)
{
struct text_stat stats;
char *dst;
+ int convert_crlf_into_lf;
if (crlf_action == CRLF_BINARY ||
(src && !len))
return 1;
gather_stats(src, len, &stats);
+ /* Optimization: No CRLF? Nothing to convert, regardless. */
+ convert_crlf_into_lf = !!stats.crlf;
if (crlf_action == CRLF_AUTO || crlf_action == CRLF_AUTO_INPUT || crlf_action == CRLF_AUTO_CRLF) {
if (convert_is_binary(len, &stats))
return 0;
-
- if (crlf_action == CRLF_AUTO_INPUT || crlf_action == CRLF_AUTO_CRLF) {
- /*
- * If the file in the index has any CR in it, do not convert.
- * This is the new safer autocrlf handling.
- */
- if (has_cr_in_index(path))
- return 0;
+ /*
+ * If the file in the index has any CR in it, do not convert.
+ * This is the new safer autocrlf handling.
+ */
+ if (checksafe == SAFE_CRLF_RENORMALIZE)
+ checksafe = SAFE_CRLF_FALSE;
+ else if (has_cr_in_index(path))
+ convert_crlf_into_lf = 0;
+ }
+ if (checksafe && len) {
+ struct text_stat new_stats;
+ memcpy(&new_stats, &stats, sizeof(new_stats));
+ /* simulate "git add" */
+ if (convert_crlf_into_lf) {
+ new_stats.lonelf += new_stats.crlf;
+ new_stats.crlf = 0;
+ }
+ /* simulate "git checkout" */
+ if (will_convert_lf_to_crlf(len, &new_stats, crlf_action)) {
+ new_stats.crlf += new_stats.lonelf;
+ new_stats.lonelf = 0;
}
+ check_safe_crlf(path, crlf_action, &stats, &new_stats, checksafe);
}
-
- check_safe_crlf(path, crlf_action, &stats, checksafe);
-
- /* Optimization: No CRLF? Nothing to convert, regardless. */
- if (!stats.crlf)
+ if (!convert_crlf_into_lf)
return 0;
/*
return 0;
gather_stats(src, len, &stats);
-
- /* No "naked" LF? Nothing to convert, regardless. */
- if (!stats.lonelf)
+ if (!will_convert_lf_to_crlf(len, &stats, crlf_action))
return 0;
- if (crlf_action == CRLF_AUTO || crlf_action == CRLF_AUTO_INPUT || crlf_action == CRLF_AUTO_CRLF) {
- if (crlf_action == CRLF_AUTO_INPUT || crlf_action == CRLF_AUTO_CRLF) {
- /* If we have any CR or CRLF line endings, we do not touch it */
- /* This is the new safer autocrlf-handling */
- if (stats.lonecr || stats.crlf )
- return 0;
- }
-
- if (convert_is_binary(len, &stats))
- return 0;
- }
-
/* are we "faking" in place editing ? */
if (src == buf->buf)
to_free = strbuf_detach(buf, NULL);
ca->drv = git_path_check_convert(ccheck + 2);
if (ca->crlf_action != CRLF_BINARY) {
enum eol eol_attr = git_path_check_eol(ccheck + 3);
- if (eol_attr == EOL_LF)
+ if (ca->crlf_action == CRLF_AUTO && eol_attr == EOL_LF)
+ ca->crlf_action = CRLF_AUTO_INPUT;
+ else if (ca->crlf_action == CRLF_AUTO && eol_attr == EOL_CRLF)
+ ca->crlf_action = CRLF_AUTO_CRLF;
+ else if (eol_attr == EOL_LF)
ca->crlf_action = CRLF_TEXT_INPUT;
else if (eol_attr == EOL_CRLF)
ca->crlf_action = CRLF_TEXT_CRLF;
case CRLF_AUTO:
return "text=auto";
case CRLF_AUTO_CRLF:
- return "text=auto eol=crlf"; /* This is not supported yet */
+ return "text=auto eol=crlf";
case CRLF_AUTO_INPUT:
- return "text=auto eol=lf"; /* This is not supported yet */
+ return "text=auto eol=lf";
}
return "";
}
src = dst->buf;
len = dst->len;
}
- return ret | convert_to_git(path, src, len, dst, SAFE_CRLF_FALSE);
+ return ret | convert_to_git(path, src, len, dst, SAFE_CRLF_RENORMALIZE);
}
/*****************************************************************
enum safe_crlf {
SAFE_CRLF_FALSE = 0,
SAFE_CRLF_FAIL = 1,
- SAFE_CRLF_WARN = 2
+ SAFE_CRLF_WARN = 2,
+ SAFE_CRLF_RENORMALIZE = 3
};
extern enum safe_crlf safe_crlf;
free(path_copy);
}
-int main(int argc, const char **argv)
+int cmd_main(int argc, const char **argv)
{
const char *socket_path;
int ignore_sighup = 0;
strbuf_release(&buf);
}
-int main(int argc, const char **argv)
+int cmd_main(int argc, const char **argv)
{
char *socket_path = NULL;
int timeout = 900;
return; /* Found credential */
}
-int main(int argc, char **argv)
+int cmd_main(int argc, const char **argv)
{
const char * const usage[] = {
"git credential-store [<options>] <action>",
#include "cache.h"
#include "pkt-line.h"
-#include "exec_cmd.h"
#include "run-command.h"
#include "strbuf.h"
#include "string-list.h"
" [<directory>...]";
/* List of acceptable pathname prefixes */
-static char **ok_paths;
+static const char **ok_paths;
static int strict_paths;
/* If this is set, git-daemon-export-ok is not required */
}
if ( ok_paths && *ok_paths ) {
- char **pp;
+ const char **pp;
int pathlen = strlen(path);
/* The validation is done on the paths after enter_repo
strbuf_release(&hi->tcp_port);
}
+static void set_keep_alive(int sockfd)
+{
+ int ka = 1;
+
+ if (setsockopt(sockfd, SOL_SOCKET, SO_KEEPALIVE, &ka, sizeof(ka)) < 0) {
+ if (errno != ENOTSOCK)
+ logerror("unable to set SO_KEEPALIVE on socket: %s",
+ strerror(errno));
+ }
+}
+
static int execute(void)
{
char *line = packet_buffer;
if (addr)
loginfo("Connection from %s:%s", addr, port);
+ set_keep_alive(0);
alarm(init_timeout ? init_timeout : timeout);
pktlen = packet_read(0, NULL, NULL, packet_buffer, sizeof(packet_buffer), 0);
alarm(0);
continue;
}
+ set_keep_alive(sockfd);
+
if (bind(sockfd, ai->ai_addr, ai->ai_addrlen) < 0) {
logerror("Could not bind to %s: %s",
ip2str(ai->ai_family, ai->ai_addr, ai->ai_addrlen),
return 0;
}
+ set_keep_alive(sockfd);
+
if ( bind(sockfd, (struct sockaddr *)&sin, sizeof sin) < 0 ) {
logerror("Could not bind to %s: %s",
ip2str(AF_INET, (struct sockaddr *)&sin, sizeof(sin)),
return service_loop(&socklist);
}
-int main(int argc, char **argv)
+int cmd_main(int argc, const char **argv)
{
int listen_port = 0;
struct string_list listen_addr = STRING_LIST_INIT_NODUP;
struct credentials *cred = NULL;
int i;
- git_setup_gettext();
-
- git_extract_argv0_path(argv[0]);
-
for (i = 1; i < argc; i++) {
- char *arg = argv[i];
+ const char *arg = argv[i];
const char *v;
if (skip_prefix(arg, "--listen=", &v)) {
if (detach) {
if (daemonize())
die("--detach not supported on this platform");
- } else
- sanitize_stdfds();
+ }
if (pid_file)
write_file(pid_file, "%"PRIuMAX, (uintmax_t) getpid());
localtime_r(&t, &tm);
t_local = tm_to_time_t(&tm);
+ if (t_local == -1)
+ return 0; /* error; just use +0000 */
if (t_local < t) {
eastwest = -1;
offset = t - t_local;
struct tm *tm;
static struct strbuf timebuf = STRBUF_INIT;
+ if (mode->type == DATE_UNIX) {
+ strbuf_reset(&timebuf);
+ strbuf_addf(&timebuf, "%lu", time);
+ return timebuf.buf;
+ }
+
if (mode->local)
tz = local_tzoffset(time);
return DATE_NORMAL;
if (skip_prefix(format, "raw", end))
return DATE_RAW;
+ if (skip_prefix(format, "unix", end))
+ return DATE_UNIX;
if (skip_prefix(format, "format", end))
return DATE_STRFTIME;
name = p->two->path ? p->two->path : p->one->path;
- if (p->one->sha1_valid && p->two->sha1_valid)
- content_changed = hashcmp(p->one->sha1, p->two->sha1);
+ if (p->one->oid_valid && p->two->oid_valid)
+ content_changed = oidcmp(&p->one->oid, &p->two->oid);
else
content_changed = 1;
const char *add = diff_get_color_opt(o, DIFF_FILE_NEW);
show_submodule_summary(o->file, one->path ? one->path : two->path,
line_prefix,
- one->sha1, two->sha1, two->dirty_submodule,
+ one->oid.hash, two->oid.hash,
+ two->dirty_submodule,
meta, del, add, reset);
return;
}
if (!one->data && !two->data &&
S_ISREG(one->mode) && S_ISREG(two->mode) &&
!DIFF_OPT_TST(o, BINARY)) {
- if (!hashcmp(one->sha1, two->sha1)) {
+ if (!oidcmp(&one->oid, &two->oid)) {
if (must_show_header)
fprintf(o->file, "%s", header.buf);
goto free_ab_and_return;
return;
}
- same_contents = !hashcmp(one->sha1, two->sha1);
+ same_contents = !oidcmp(&one->oid, &two->oid);
if (diff_filespec_is_binary(one) || diff_filespec_is_binary(two)) {
data->is_binary = 1;
{
if (mode) {
spec->mode = canon_mode(mode);
- hashcpy(spec->sha1, sha1);
- spec->sha1_valid = sha1_valid;
+ hashcpy(spec->oid.hash, sha1);
+ spec->oid_valid = sha1_valid;
}
}
if (!FAST_WORKING_DIRECTORY && !want_file && has_sha1_pack(sha1))
return 0;
+ /*
+ * Similarly, if we'd have to convert the file contents anyway, that
+ * makes the optimization not worthwhile.
+ */
+ if (!want_file && would_convert_to_git(name))
+ return 0;
+
len = strlen(name);
pos = cache_name_pos(name, len);
if (pos < 0)
if (s->dirty_submodule)
dirty = "-dirty";
- strbuf_addf(&buf, "Subproject commit %s%s\n", sha1_to_hex(s->sha1), dirty);
+ strbuf_addf(&buf, "Subproject commit %s%s\n",
+ oid_to_hex(&s->oid), dirty);
s->size = buf.len;
if (size_only) {
s->data = NULL;
if (S_ISGITLINK(s->mode))
return diff_populate_gitlink(s, size_only);
- if (!s->sha1_valid ||
- reuse_worktree_file(s->path, s->sha1, 0)) {
+ if (!s->oid_valid ||
+ reuse_worktree_file(s->path, s->oid.hash, 0)) {
struct strbuf buf = STRBUF_INIT;
struct stat st;
int fd;
else {
enum object_type type;
if (size_only || (flags & CHECK_BINARY)) {
- type = sha1_object_info(s->sha1, &s->size);
+ type = sha1_object_info(s->oid.hash, &s->size);
if (type < 0)
- die("unable to read %s", sha1_to_hex(s->sha1));
+ die("unable to read %s",
+ oid_to_hex(&s->oid));
if (size_only)
return 0;
if (s->size > big_file_threshold && s->is_binary == -1) {
return 0;
}
}
- s->data = read_sha1_file(s->sha1, &type, &s->size);
+ s->data = read_sha1_file(s->oid.hash, &type, &s->size);
if (!s->data)
- die("unable to read %s", sha1_to_hex(s->sha1));
+ die("unable to read %s", oid_to_hex(&s->oid));
s->should_free = 1;
}
return 0;
static void prep_temp_blob(const char *path, struct diff_tempfile *temp,
void *blob,
unsigned long size,
- const unsigned char *sha1,
+ const struct object_id *oid,
int mode)
{
int fd;
die_errno("unable to write temp-file");
close_tempfile(&temp->tempfile);
temp->name = get_tempfile_path(&temp->tempfile);
- sha1_to_hex_r(temp->hex, sha1);
+ oid_to_hex_r(temp->hex, oid);
xsnprintf(temp->mode, sizeof(temp->mode), "%06o", mode);
strbuf_release(&buf);
strbuf_release(&template);
}
if (!S_ISGITLINK(one->mode) &&
- (!one->sha1_valid ||
- reuse_worktree_file(name, one->sha1, 1))) {
+ (!one->oid_valid ||
+ reuse_worktree_file(name, one->oid.hash, 1))) {
struct stat st;
if (lstat(name, &st) < 0) {
if (errno == ENOENT)
if (strbuf_readlink(&sb, name, st.st_size) < 0)
die_errno("readlink(%s)", name);
prep_temp_blob(name, temp, sb.buf, sb.len,
- (one->sha1_valid ?
- one->sha1 : null_sha1),
- (one->sha1_valid ?
+ (one->oid_valid ?
+ &one->oid : &null_oid),
+ (one->oid_valid ?
one->mode : S_IFLNK));
strbuf_release(&sb);
}
else {
/* we can borrow from the file in the work tree */
temp->name = name;
- if (!one->sha1_valid)
+ if (!one->oid_valid)
sha1_to_hex_r(temp->hex, null_sha1);
else
- sha1_to_hex_r(temp->hex, one->sha1);
+ sha1_to_hex_r(temp->hex, one->oid.hash);
/* Even though we may sometimes borrow the
* contents from the work tree, we always want
* one->mode. mode is trustworthy even when
if (diff_populate_filespec(one, 0))
die("cannot read data blob for %s", one->path);
prep_temp_blob(name, temp, one->data, one->size,
- one->sha1, one->mode);
+ &one->oid, one->mode);
}
return temp;
}
default:
*must_show_header = 0;
}
- if (one && two && hashcmp(one->sha1, two->sha1)) {
+ if (one && two && oidcmp(&one->oid, &two->oid)) {
int abbrev = DIFF_OPT_TST(o, FULL_INDEX) ? 40 : DEFAULT_ABBREV;
if (DIFF_OPT_TST(o, BINARY)) {
abbrev = 40;
}
strbuf_addf(msg, "%s%sindex %s..", line_prefix, set,
- find_unique_abbrev(one->sha1, abbrev));
- strbuf_addstr(msg, find_unique_abbrev(two->sha1, abbrev));
+ find_unique_abbrev(one->oid.hash, abbrev));
+ strbuf_addstr(msg, find_unique_abbrev(two->oid.hash, abbrev));
if (one->mode == two->mode)
strbuf_addf(msg, " %06o", one->mode);
strbuf_addf(msg, "%s\n", reset);
static void diff_fill_sha1_info(struct diff_filespec *one)
{
if (DIFF_FILE_VALID(one)) {
- if (!one->sha1_valid) {
+ if (!one->oid_valid) {
struct stat st;
if (one->is_stdin) {
- hashcpy(one->sha1, null_sha1);
+ oidclr(&one->oid);
return;
}
if (lstat(one->path, &st) < 0)
die_errno("stat '%s'", one->path);
- if (index_path(one->sha1, one->path, &st, 0))
+ if (index_path(one->oid.hash, one->path, &st, 0))
die("cannot hash %s", one->path);
}
}
else
- hashclr(one->sha1);
+ oidclr(&one->oid);
}
static void strip_prefix(int prefix_length, const char **namep, const char **otherp)
if (!options->file)
die_errno("Could not open '%s'", path);
options->close_file = 1;
+ if (options->use_color != GIT_COLOR_ALWAYS)
+ options->use_color = GIT_COLOR_NEVER;
return argcount;
} else
return 0;
fprintf(opt->file, "%s", diff_line_prefix(opt));
if (!(opt->output_format & DIFF_FORMAT_NAME_STATUS)) {
fprintf(opt->file, ":%06o %06o %s ", p->one->mode, p->two->mode,
- diff_unique_abbrev(p->one->sha1, opt->abbrev));
- fprintf(opt->file, "%s ", diff_unique_abbrev(p->two->sha1, opt->abbrev));
+ diff_unique_abbrev(p->one->oid.hash, opt->abbrev));
+ fprintf(opt->file, "%s ",
+ diff_unique_abbrev(p->two->oid.hash, opt->abbrev));
}
if (p->score) {
fprintf(opt->file, "%c%03d%c", p->status, similarity_index(p),
/* both are valid and point at the same path. that is, we are
* dealing with a change.
*/
- if (one->sha1_valid && two->sha1_valid &&
- !hashcmp(one->sha1, two->sha1) &&
+ if (one->oid_valid && two->oid_valid &&
+ !oidcmp(&one->oid, &two->oid) &&
!one->dirty_submodule && !two->dirty_submodule)
return 1; /* no change */
- if (!one->sha1_valid && !two->sha1_valid)
+ if (!one->oid_valid && !two->oid_valid)
return 1; /* both look at the same file on the filesystem. */
return 0;
}
s->path,
DIFF_FILE_VALID(s) ? "valid" : "invalid",
s->mode,
- s->sha1_valid ? sha1_to_hex(s->sha1) : "");
+ s->oid_valid ? oid_to_hex(&s->oid) : "");
fprintf(stderr, "queue[%d] %s size %lu\n",
x, one ? one : "",
s->size);
else
p->status = DIFF_STATUS_RENAMED;
}
- else if (hashcmp(p->one->sha1, p->two->sha1) ||
+ else if (oidcmp(&p->one->oid, &p->two->oid) ||
p->one->mode != p->two->mode ||
p->one->dirty_submodule ||
p->two->dirty_submodule ||
- is_null_sha1(p->one->sha1))
+ is_null_oid(&p->one->oid))
p->status = DIFF_STATUS_MODIFIED;
else {
/* This is a "no-change" entry and should not
if (diff_filespec_is_binary(p->one) ||
diff_filespec_is_binary(p->two)) {
- git_SHA1_Update(&ctx, sha1_to_hex(p->one->sha1), 40);
- git_SHA1_Update(&ctx, sha1_to_hex(p->two->sha1), 40);
+ git_SHA1_Update(&ctx, oid_to_hex(&p->one->oid),
+ 40);
+ git_SHA1_Update(&ctx, oid_to_hex(&p->two->oid),
+ 40);
continue;
}
*/
if (!DIFF_FILE_VALID(p->one) || /* (1) */
!DIFF_FILE_VALID(p->two) ||
- (p->one->sha1_valid && p->two->sha1_valid) ||
+ (p->one->oid_valid && p->two->oid_valid) ||
(p->one->mode != p->two->mode) ||
diff_populate_filespec(p->one, CHECK_SIZE_ONLY) ||
diff_populate_filespec(p->two, CHECK_SIZE_ONLY) ||
if (!driver->textconv)
die("BUG: fill_textconv called with non-textconv driver");
- if (driver->textconv_cache && df->sha1_valid) {
- *outbuf = notes_cache_get(driver->textconv_cache, df->sha1,
+ if (driver->textconv_cache && df->oid_valid) {
+ *outbuf = notes_cache_get(driver->textconv_cache,
+ df->oid.hash,
&size);
if (*outbuf)
return size;
if (!*outbuf)
die("unable to read files to diff");
- if (driver->textconv_cache && df->sha1_valid) {
+ if (driver->textconv_cache && df->oid_valid) {
/* ignore errors, as we might be in a readonly repository */
- notes_cache_put(driver->textconv_cache, df->sha1, *outbuf,
+ notes_cache_put(driver->textconv_cache, df->oid.hash, *outbuf,
size);
/*
* we could save up changes and flush them all at the end,
return 1; /* even their types are different */
}
- if (src->sha1_valid && dst->sha1_valid &&
- !hashcmp(src->sha1, dst->sha1))
+ if (src->oid_valid && dst->oid_valid &&
+ !oidcmp(&src->oid, &dst->oid))
return 0; /* they are the same */
if (diff_populate_filespec(src, 0) || diff_populate_filespec(dst, 0))
#include "diffcore.h"
#include "xdiff-interface.h"
#include "kwset.h"
+#include "commit.h"
+#include "quote.h"
typedef int (*pickaxe_fn)(mmfile_t *one, mmfile_t *two,
struct diff_options *o,
*q = outq;
}
+static void regcomp_or_die(regex_t *regex, const char *needle, int cflags)
+{
+ int err = regcomp(regex, needle, cflags);
+ if (err) {
+ /* The POSIX.2 people are surely sick */
+ char errbuf[1024];
+ regerror(err, regex, errbuf, 1024);
+ regfree(regex);
+ die("invalid regex: %s", errbuf);
+ }
+}
+
void diffcore_pickaxe(struct diff_options *o)
{
const char *needle = o->pickaxe;
kwset_t kws = NULL;
if (opts & (DIFF_PICKAXE_REGEX | DIFF_PICKAXE_KIND_G)) {
- int err;
int cflags = REG_EXTENDED | REG_NEWLINE;
if (DIFF_OPT_TST(o, PICKAXE_IGNORE_CASE))
cflags |= REG_ICASE;
- err = regcomp(&regex, needle, cflags);
- if (err) {
- /* The POSIX.2 people are surely sick */
- char errbuf[1024];
- regerror(err, &regex, errbuf, 1024);
- regfree(&regex);
- die("invalid regex: %s", errbuf);
- }
+ regcomp_or_die(&regex, needle, cflags);
+ regexp = &regex;
+ } else if (DIFF_OPT_TST(o, PICKAXE_IGNORE_CASE) &&
+ has_non_ascii(needle)) {
+ struct strbuf sb = STRBUF_INIT;
+ int cflags = REG_NEWLINE | REG_ICASE;
+
+ basic_regex_quote_buf(&sb, needle);
+ regcomp_or_die(&regex, sb.buf, cflags);
+ strbuf_release(&sb);
regexp = &regex;
} else {
kws = kwsalloc(DIFF_OPT_TST(o, PICKAXE_IGNORE_CASE)
memmove(rename_dst + first + 1, rename_dst + first,
(rename_dst_nr - first - 1) * sizeof(*rename_dst));
rename_dst[first].two = alloc_filespec(two->path);
- fill_filespec(rename_dst[first].two, two->sha1, two->sha1_valid, two->mode);
+ fill_filespec(rename_dst[first].two, two->oid.hash, two->oid_valid,
+ two->mode);
rename_dst[first].pair = NULL;
return 0;
}
static unsigned int hash_filespec(struct diff_filespec *filespec)
{
- if (!filespec->sha1_valid) {
+ if (!filespec->oid_valid) {
if (diff_populate_filespec(filespec, 0))
return 0;
- hash_sha1_file(filespec->data, filespec->size, "blob", filespec->sha1);
+ hash_sha1_file(filespec->data, filespec->size, "blob",
+ filespec->oid.hash);
}
- return sha1hash(filespec->sha1);
+ return sha1hash(filespec->oid.hash);
}
static int find_identical_files(struct hashmap *srcs,
struct diff_filespec *source = p->filespec;
/* False hash collision? */
- if (hashcmp(source->sha1, target->sha1))
+ if (oidcmp(&source->oid, &target->oid))
continue;
/* Non-regular files? If so, the modes must match! */
if (!S_ISREG(source->mode) || !S_ISREG(target->mode)) {
strcmp(options->single_follow, p->two->path))
continue; /* not interested */
else if (!DIFF_OPT_TST(options, RENAME_EMPTY) &&
- is_empty_blob_sha1(p->two->sha1))
+ is_empty_blob_sha1(p->two->oid.hash))
continue;
else if (add_rename_dst(p->two) < 0) {
warning("skipping rename detection, detected"
}
}
else if (!DIFF_OPT_TST(options, RENAME_EMPTY) &&
- is_empty_blob_sha1(p->one->sha1))
+ is_empty_blob_sha1(p->one->oid.hash))
continue;
else if (!DIFF_PAIR_UNMERGED(p) && !DIFF_FILE_VALID(p->two)) {
/*
rename_dst_nr * rename_src_nr, 50, 1);
}
- mx = xcalloc(st_mult(num_create, NUM_CANDIDATE_PER_DST), sizeof(*mx));
+ mx = xcalloc(st_mult(NUM_CANDIDATE_PER_DST, num_create), sizeof(*mx));
for (dst_cnt = i = 0; i < rename_dst_nr; i++) {
struct diff_filespec *two = rename_dst[i].two;
struct diff_score *m;
struct userdiff_driver;
struct diff_filespec {
- unsigned char sha1[20];
+ struct object_id oid;
char *path;
void *data;
void *cnt_data;
int count; /* Reference count */
int rename_used; /* Count of rename users */
unsigned short mode; /* file mode */
- unsigned sha1_valid : 1; /* if true, use sha1 and trust mode;
+ unsigned oid_valid : 1; /* if true, use oid and trust mode;
* if false, use the name and read from
* the filesystem.
*/
--- /dev/null
+#include "cache.h"
+#include "dir.h"
+#include "iterator.h"
+#include "dir-iterator.h"
+
+struct dir_iterator_level {
+ int initialized;
+
+ DIR *dir;
+
+ /*
+ * The length of the directory part of path at this level
+ * (including a trailing '/'):
+ */
+ size_t prefix_len;
+
+ /*
+ * The last action that has been taken with the current entry
+ * (needed for directories, which have to be included in the
+ * iteration and also iterated into):
+ */
+ enum {
+ DIR_STATE_ITER,
+ DIR_STATE_RECURSE
+ } dir_state;
+};
+
+/*
+ * The full data structure used to manage the internal directory
+ * iteration state. It includes members that are not part of the
+ * public interface.
+ */
+struct dir_iterator_int {
+ struct dir_iterator base;
+
+ /*
+ * The number of levels currently on the stack. This is always
+ * at least 1, because when it becomes zero the iteration is
+ * ended and this struct is freed.
+ */
+ size_t levels_nr;
+
+ /* The number of levels that have been allocated on the stack */
+ size_t levels_alloc;
+
+ /*
+ * A stack of levels. levels[0] is the uppermost directory
+ * that will be included in this iteration.
+ */
+ struct dir_iterator_level *levels;
+};
+
+int dir_iterator_advance(struct dir_iterator *dir_iterator)
+{
+ struct dir_iterator_int *iter =
+ (struct dir_iterator_int *)dir_iterator;
+
+ while (1) {
+ struct dir_iterator_level *level =
+ &iter->levels[iter->levels_nr - 1];
+ struct dirent *de;
+
+ if (!level->initialized) {
+ /*
+ * Note: dir_iterator_begin() ensures that
+ * path is not the empty string.
+ */
+ if (!is_dir_sep(iter->base.path.buf[iter->base.path.len - 1]))
+ strbuf_addch(&iter->base.path, '/');
+ level->prefix_len = iter->base.path.len;
+
+ level->dir = opendir(iter->base.path.buf);
+ if (!level->dir && errno != ENOENT) {
+ warning("error opening directory %s: %s",
+ iter->base.path.buf, strerror(errno));
+ /* Popping the level is handled below */
+ }
+
+ level->initialized = 1;
+ } else if (S_ISDIR(iter->base.st.st_mode)) {
+ if (level->dir_state == DIR_STATE_ITER) {
+ /*
+ * The directory was just iterated
+ * over; now prepare to iterate into
+ * it.
+ */
+ level->dir_state = DIR_STATE_RECURSE;
+ ALLOC_GROW(iter->levels, iter->levels_nr + 1,
+ iter->levels_alloc);
+ level = &iter->levels[iter->levels_nr++];
+ level->initialized = 0;
+ continue;
+ } else {
+ /*
+ * The directory has already been
+ * iterated over and iterated into;
+ * we're done with it.
+ */
+ }
+ }
+
+ if (!level->dir) {
+ /*
+ * This level is exhausted (or wasn't opened
+ * successfully); pop up a level.
+ */
+ if (--iter->levels_nr == 0)
+ return dir_iterator_abort(dir_iterator);
+
+ continue;
+ }
+
+ /*
+ * Loop until we find an entry that we can give back
+ * to the caller:
+ */
+ while (1) {
+ strbuf_setlen(&iter->base.path, level->prefix_len);
+ errno = 0;
+ de = readdir(level->dir);
+
+ if (!de) {
+ /* This level is exhausted; pop up a level. */
+ if (errno) {
+ warning("error reading directory %s: %s",
+ iter->base.path.buf, strerror(errno));
+ } else if (closedir(level->dir))
+ warning("error closing directory %s: %s",
+ iter->base.path.buf, strerror(errno));
+
+ level->dir = NULL;
+ if (--iter->levels_nr == 0)
+ return dir_iterator_abort(dir_iterator);
+ break;
+ }
+
+ if (is_dot_or_dotdot(de->d_name))
+ continue;
+
+ strbuf_addstr(&iter->base.path, de->d_name);
+ if (lstat(iter->base.path.buf, &iter->base.st) < 0) {
+ if (errno != ENOENT)
+ warning("error reading path '%s': %s",
+ iter->base.path.buf,
+ strerror(errno));
+ continue;
+ }
+
+ /*
+ * We have to set these each time because
+ * the path strbuf might have been realloc()ed.
+ */
+ iter->base.relative_path =
+ iter->base.path.buf + iter->levels[0].prefix_len;
+ iter->base.basename =
+ iter->base.path.buf + level->prefix_len;
+ level->dir_state = DIR_STATE_ITER;
+
+ return ITER_OK;
+ }
+ }
+}
+
+int dir_iterator_abort(struct dir_iterator *dir_iterator)
+{
+ struct dir_iterator_int *iter = (struct dir_iterator_int *)dir_iterator;
+
+ for (; iter->levels_nr; iter->levels_nr--) {
+ struct dir_iterator_level *level =
+ &iter->levels[iter->levels_nr - 1];
+
+ if (level->dir && closedir(level->dir)) {
+ strbuf_setlen(&iter->base.path, level->prefix_len);
+ warning("error closing directory %s: %s",
+ iter->base.path.buf, strerror(errno));
+ }
+ }
+
+ free(iter->levels);
+ strbuf_release(&iter->base.path);
+ free(iter);
+ return ITER_DONE;
+}
+
+struct dir_iterator *dir_iterator_begin(const char *path)
+{
+ struct dir_iterator_int *iter = xcalloc(1, sizeof(*iter));
+ struct dir_iterator *dir_iterator = &iter->base;
+
+ if (!path || !*path)
+ die("BUG: empty path passed to dir_iterator_begin()");
+
+ strbuf_init(&iter->base.path, PATH_MAX);
+ strbuf_addstr(&iter->base.path, path);
+
+ ALLOC_GROW(iter->levels, 10, iter->levels_alloc);
+
+ iter->levels_nr = 1;
+ iter->levels[0].initialized = 0;
+
+ return dir_iterator;
+}
--- /dev/null
+#ifndef DIR_ITERATOR_H
+#define DIR_ITERATOR_H
+
+/*
+ * Iterate over a directory tree.
+ *
+ * Iterate over a directory tree, recursively, including paths of all
+ * types and hidden paths. Skip "." and ".." entries and don't follow
+ * symlinks except for the original path.
+ *
+ * Every time dir_iterator_advance() is called, update the members of
+ * the dir_iterator structure to reflect the next path in the
+ * iteration. The order that paths are iterated over within a
+ * directory is undefined, but directory paths are always iterated
+ * over before the subdirectory contents.
+ *
+ * A typical iteration looks like this:
+ *
+ * int ok;
+ * struct dir_iterator *iter = dir_iterator_begin(path);
+ *
+ * while ((ok = dir_iterator_advance(iter)) == ITER_OK) {
+ * if (want_to_stop_iteration()) {
+ * ok = dir_iterator_abort(iter);
+ * break;
+ * }
+ *
+ * // Access information about the current path:
+ * if (S_ISDIR(iter->st.st_mode))
+ * printf("%s is a directory\n", iter->relative_path);
+ * }
+ *
+ * if (ok != ITER_DONE)
+ * handle_error();
+ *
+ * Callers are allowed to modify iter->path while they are working,
+ * but they must restore it to its original contents before calling
+ * dir_iterator_advance() again.
+ */
+
+struct dir_iterator {
+ /* The current path: */
+ struct strbuf path;
+
+ /*
+ * The current path relative to the starting path. This part
+ * of the path always uses "/" characters to separate path
+ * components:
+ */
+ const char *relative_path;
+
+ /* The current basename: */
+ const char *basename;
+
+ /* The result of calling lstat() on path: */
+ struct stat st;
+};
+
+/*
+ * Start a directory iteration over path. Return a dir_iterator that
+ * holds the internal state of the iteration.
+ *
+ * The iteration includes all paths under path, not including path
+ * itself and not including "." or ".." entries.
+ *
+ * path is the starting directory. An internal copy will be made.
+ */
+struct dir_iterator *dir_iterator_begin(const char *path);
+
+/*
+ * Advance the iterator to the first or next item and return ITER_OK.
+ * If the iteration is exhausted, free the dir_iterator and any
+ * resources associated with it and return ITER_DONE. On error, free
+ * dir_iterator and associated resources and return ITER_ERROR. It is
+ * a bug to use iterator or call this function again after it has
+ * returned ITER_DONE or ITER_ERROR.
+ */
+int dir_iterator_advance(struct dir_iterator *iterator);
+
+/*
+ * End the iteration before it has been exhausted. Free the
+ * dir_iterator and any associated resources and return ITER_DONE. On
+ * error, free the dir_iterator and return ITER_ERROR.
+ */
+int dir_iterator_abort(struct dir_iterator *iterator);
+
+#endif
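
Illustration of the struct members above, with made-up values: after dir_iterator_begin("foo"), when the iteration reaches the file "foo/bar/baz.c", iter->path holds "foo/bar/baz.c", iter->relative_path points at "bar/baz.c", iter->basename points at "baz.c", and iter->st contains the lstat() result for that path.
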
varint_len = encode_varint(untracked->ident.len, varbuf);
strbuf_add(out, varbuf, varint_len);
- strbuf_add(out, untracked->ident.buf, untracked->ident.len);
+ strbuf_addbuf(out, &untracked->ident);
strbuf_add(out, ouc, ouc_size(len));
free(ouc);
extern void setup_standard_excludes(struct dir_struct *dir);
+
+/* Constants for remove_dir_recursively: */
+
+/*
+ * If a non-directory is found within path, stop and return an error.
+ * (In this case some empty directories might already have been
+ * removed.)
+ */
#define REMOVE_DIR_EMPTY_ONLY 01
+
+/*
+ * If any Git work trees are found within path, skip them without
+ * considering it an error.
+ */
#define REMOVE_DIR_KEEP_NESTED_GIT 02
+
+/* Remove the contents of path, but leave path itself. */
#define REMOVE_DIR_KEEP_TOPLEVEL 04
+
+/*
+ * Remove path and its contents, recursively. flags is a combination
+ * of the above REMOVE_DIR_* constants. Return 0 on success.
+ *
+ * This function uses path as temporary scratch space, but restores it
+ * before returning.
+ */
extern int remove_dir_recursively(struct strbuf *path, int flag);
/* tries to remove the path with empty directories along it, ignores ENOENT */
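
As an illustration of the constants documented above, a caller that wants to empty a directory while keeping the directory itself might look like this (a minimal sketch, not part of the patch; the directory name is made up):

	struct strbuf path = STRBUF_INIT;

	strbuf_addstr(&path, "objects/tmp-import");	/* hypothetical path */
	/* Remove everything under the directory, but keep the directory. */
	if (remove_dir_recursively(&path, REMOVE_DIR_KEEP_TOPLEVEL))
		warning("could not empty '%s'", path.buf);
	strbuf_release(&path);
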
#include "refs.h"
#include "csum-file.h"
#include "quote.h"
-#include "exec_cmd.h"
#include "dir.h"
+#include "run-command.h"
#define PACK_ID_BITS 16
#define MAX_PACK_ID ((1<<PACK_ID_BITS)-1)
/* Configured limits on output */
static unsigned long max_depth = 10;
static off_t max_packsize;
+static int unpack_limit = 100;
static int force_update;
static int pack_compression_level = Z_DEFAULT_COMPRESSION;
static int pack_compression_seen;
static FILE *pack_edges;
static unsigned int show_stats = 1;
static int global_argc;
-static char **global_argv;
+static const char **global_argv;
/* Memory pools */
static size_t mem_pool_alloc = 2*1024*1024 - sizeof(struct mem_pool);
return e;
}
+static void invalidate_pack_id(unsigned int id)
+{
+ unsigned int h;
+ unsigned long lu;
+ struct tag *t;
+
+ for (h = 0; h < ARRAY_SIZE(object_table); h++) {
+ struct object_entry *e;
+
+ for (e = object_table[h]; e; e = e->next)
+ if (e->pack_id == id)
+ e->pack_id = MAX_PACK_ID;
+ }
+
+ for (lu = 0; lu < branch_table_sz; lu++) {
+ struct branch *b;
+
+ for (b = branch_table[lu]; b; b = b->table_next_branch)
+ if (b->pack_id == id)
+ b->pack_id = MAX_PACK_ID;
+ }
+
+ for (t = first_tag; t; t = t->next_tag)
+ if (t->pack_id == id)
+ t->pack_id = MAX_PACK_ID;
+}
+
static unsigned int hc_str(const char *s, size_t len)
{
unsigned int r = 0;
}
}
+static int loosen_small_pack(const struct packed_git *p)
+{
+ struct child_process unpack = CHILD_PROCESS_INIT;
+
+ if (lseek(p->pack_fd, 0, SEEK_SET) < 0)
+ die_errno("Failed seeking to start of '%s'", p->pack_name);
+
+ unpack.in = p->pack_fd;
+ unpack.git_cmd = 1;
+ unpack.stdout_to_stderr = 1;
+ argv_array_push(&unpack.args, "unpack-objects");
+ if (!show_stats)
+ argv_array_push(&unpack.args, "-q");
+
+ return run_command(&unpack);
+}
+
static void end_packfile(void)
{
static int running;
fixup_pack_header_footer(pack_data->pack_fd, pack_data->sha1,
pack_data->pack_name, object_count,
cur_pack_sha1, pack_size);
+
+ if (object_count <= unpack_limit) {
+ if (!loosen_small_pack(pack_data)) {
+ invalidate_pack_id(pack_id);
+ goto discard_pack;
+ }
+ }
+
close(pack_data->pack_fd);
idx_name = keep_pack(create_index());
pack_id++;
}
else {
+discard_pack:
close(pack_data->pack_fd);
unlink_or_warn(pack_data->pack_name);
}
static void git_pack_config(void)
{
int indexversion_value;
+ int limit;
unsigned long packsizelimit_value;
if (!git_config_get_ulong("pack.depth", &max_depth)) {
if (!git_config_get_ulong("pack.packsizelimit", &packsizelimit_value))
max_packsize = packsizelimit_value;
+ if (!git_config_get_int("fastimport.unpacklimit", &limit))
+ unpack_limit = limit;
+ else if (!git_config_get_int("transfer.unpacklimit", &limit))
+ unpack_limit = limit;
+
git_config(git_default_config, NULL);
}
read_marks();
}
-int main(int argc, char **argv)
+int cmd_main(int argc, const char **argv)
{
unsigned int i;
- git_extract_argv0_path(argv[0]);
-
- git_setup_gettext();
-
if (argc == 2 && !strcmp(argv[1], "-h"))
usage(fast_import_usage);
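
A note on the unpack_limit handling added above: when fast-import finishes a pack containing no more than unpack_limit objects (100 by default), it now tries to explode that pack with "git unpack-objects" so the objects are stored loose and, on success, discards the pack file. The threshold can be set with the fastimport.unpacklimit configuration variable, falling back to transfer.unpacklimit.
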
#define INITIAL_FLUSH 16
#define PIPESAFE_FLUSH 32
-#define LARGE_FLUSH 1024
+#define LARGE_FLUSH 16384
static int next_flush(struct fetch_pack_args *args, int count)
{
- int flush_limit = args->stateless_rpc ? LARGE_FLUSH : PIPESAFE_FLUSH;
-
- if (count < flush_limit)
- count <<= 1;
- else
- count += flush_limit;
+ if (args->stateless_rpc) {
+ if (count < LARGE_FLUSH)
+ count <<= 1;
+ else
+ count = count * 11 / 10;
+ } else {
+ if (count < PIPESAFE_FLUSH)
+ count <<= 1;
+ else
+ count += PIPESAFE_FLUSH;
+ }
return count;
}
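
A worked example of the revised next_flush() schedule: starting from INITIAL_FLUSH (16), the stateless-RPC case (e.g. smart HTTP) now doubles the window up to LARGE_FLUSH (16384) and then grows it by roughly 10% per round (16384, 18022, 19824, ...), whereas it previously doubled only up to 1024 and then grew linearly by 1024 per round. The non-stateless case keeps the old behaviour: double up to PIPESAFE_FLUSH (32), then add 32 per round.
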
#include "refs.h"
#include "utf8.h"
#include "sha1-array.h"
+#include "decorate.h"
#define FSCK_FATAL -1
#define FSCK_INFO -2
va_start(ap, fmt);
strbuf_vaddf(&sb, fmt, ap);
- result = options->error_func(object, msg_type, sb.buf);
+ result = options->error_func(options, object, msg_type, sb.buf);
strbuf_release(&sb);
va_end(ap);
return result;
}
+static char *get_object_name(struct fsck_options *options, struct object *obj)
+{
+ if (!options->object_names)
+ return NULL;
+ return lookup_decoration(options->object_names, obj);
+}
+
+static void put_object_name(struct fsck_options *options, struct object *obj,
+ const char *fmt, ...)
+{
+ va_list ap;
+ struct strbuf buf = STRBUF_INIT;
+ char *existing;
+
+ if (!options->object_names)
+ return;
+ existing = lookup_decoration(options->object_names, obj);
+ if (existing)
+ return;
+ va_start(ap, fmt);
+ strbuf_vaddf(&buf, fmt, ap);
+ add_decoration(options->object_names, obj, strbuf_detach(&buf, NULL));
+ va_end(ap);
+}
+
+static const char *describe_object(struct fsck_options *o, struct object *obj)
+{
+ static struct strbuf buf = STRBUF_INIT;
+ char *name;
+
+ strbuf_reset(&buf);
+ strbuf_addstr(&buf, oid_to_hex(&obj->oid));
+ if (o->object_names && (name = lookup_decoration(o->object_names, obj)))
+ strbuf_addf(&buf, " (%s)", name);
+
+ return buf.buf;
+}
+
static int fsck_walk_tree(struct tree *tree, void *data, struct fsck_options *options)
{
struct tree_desc desc;
struct name_entry entry;
int res = 0;
+ const char *name;
if (parse_tree(tree))
return -1;
+ name = get_object_name(options, &tree->object);
init_tree_desc(&desc, tree->buffer, tree->size);
while (tree_entry(&desc, &entry)) {
+ struct object *obj;
int result;
if (S_ISGITLINK(entry.mode))
continue;
- if (S_ISDIR(entry.mode))
- result = options->walk(&lookup_tree(entry.oid->hash)->object, OBJ_TREE, data, options);
- else if (S_ISREG(entry.mode) || S_ISLNK(entry.mode))
- result = options->walk(&lookup_blob(entry.oid->hash)->object, OBJ_BLOB, data, options);
+
+ if (S_ISDIR(entry.mode)) {
+ obj = &lookup_tree(entry.oid->hash)->object;
+ if (name)
+ put_object_name(options, obj, "%s%s/", name,
+ entry.path);
+ result = options->walk(obj, OBJ_TREE, data, options);
+ }
+ else if (S_ISREG(entry.mode) || S_ISLNK(entry.mode)) {
+ obj = &lookup_blob(entry.oid->hash)->object;
+ if (name)
+ put_object_name(options, obj, "%s%s", name,
+ entry.path);
+ result = options->walk(obj, OBJ_BLOB, data, options);
+ }
else {
result = error("in tree %s: entry %s has bad mode %.6o",
- oid_to_hex(&tree->object.oid), entry.path, entry.mode);
+ describe_object(options, &tree->object), entry.path, entry.mode);
}
if (result < 0)
return result;
static int fsck_walk_commit(struct commit *commit, void *data, struct fsck_options *options)
{
+ int counter = 0, generation = 0, name_prefix_len = 0;
struct commit_list *parents;
int res;
int result;
+ const char *name;
if (parse_commit(commit))
return -1;
+ name = get_object_name(options, &commit->object);
+ if (name)
+ put_object_name(options, &commit->tree->object, "%s:", name);
+
result = options->walk((struct object *)commit->tree, OBJ_TREE, data, options);
if (result < 0)
return result;
res = result;
parents = commit->parents;
+ if (name && parents) {
+ int len = strlen(name), power;
+
+ if (len && name[len - 1] == '^') {
+ generation = 1;
+ name_prefix_len = len - 1;
+ }
+ else { /* parse ~<generation> suffix */
+ for (generation = 0, power = 1;
+ len && isdigit(name[len - 1]);
+ power *= 10)
+ generation += power * (name[--len] - '0');
+ if (power > 1 && len && name[len - 1] == '~')
+ name_prefix_len = len - 1;
+ }
+ }
+
while (parents) {
+ if (name) {
+ struct object *obj = &parents->item->object;
+
+ if (++counter > 1)
+ put_object_name(options, obj, "%s^%d",
+ name, counter);
+ else if (generation > 0)
+ put_object_name(options, obj, "%.*s~%d",
+ name_prefix_len, name, generation + 1);
+ else
+ put_object_name(options, obj, "%s^", name);
+ }
result = options->walk((struct object *)parents->item, OBJ_COMMIT, data, options);
if (result < 0)
return result;
static int fsck_walk_tag(struct tag *tag, void *data, struct fsck_options *options)
{
+ char *name = get_object_name(options, &tag->object);
+
if (parse_tag(tag))
return -1;
+ if (name)
+ put_object_name(options, tag->tagged, "%s", name);
return options->walk(tag->tagged, OBJ_ANY, data, options);
}
case OBJ_TAG:
return fsck_walk_tag((struct tag *)obj, data, options);
default:
- error("Unknown object type for %s", oid_to_hex(&obj->oid));
+ error("Unknown object type for %s", describe_object(options, obj));
return -1;
}
}
obj->type);
}
-int fsck_error_function(struct object *obj, int msg_type, const char *message)
+int fsck_error_function(struct fsck_options *o,
+ struct object *obj, int msg_type, const char *message)
{
if (msg_type == FSCK_WARN) {
- warning("object %s: %s", oid_to_hex(&obj->oid), message);
+ warning("object %s: %s", describe_object(o, obj), message);
return 0;
}
- error("object %s: %s", oid_to_hex(&obj->oid), message);
+ error("object %s: %s", describe_object(o, obj), message);
return 1;
}
typedef int (*fsck_walk_func)(struct object *obj, int type, void *data, struct fsck_options *options);
/* callback for fsck_object, type is FSCK_ERROR or FSCK_WARN */
-typedef int (*fsck_error)(struct object *obj, int type, const char *message);
+typedef int (*fsck_error)(struct fsck_options *o,
+ struct object *obj, int type, const char *message);
-int fsck_error_function(struct object *obj, int type, const char *message);
+int fsck_error_function(struct fsck_options *o,
+ struct object *obj, int type, const char *message);
struct fsck_options {
fsck_walk_func walk;
unsigned strict:1;
int *msg_type;
struct sha1_array *skiplist;
+ struct decoration *object_names;
};
#define FSCK_OPTIONS_DEFAULT { NULL, fsck_error_function, 0, NULL }
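
With the object_names decoration introduced above, fsck can report objects by name as well as by hash: describe_object() appends " (<name>)" after the object ID, and put_object_name() builds rev-parse style names while walking ("^", "~<n>", ":<path>"). Assuming the walk is seeded with a ref name such as refs/heads/master (which happens in the fsck builtin, outside these hunks), a broken blob could then be reported as, say, refs/heads/master~2:src/foo.c instead of only its SHA-1.
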
# endif
#endif
+static const char *charset;
+
/*
* Guess the user's preferred languages from the value in LANGUAGE environment
* variable and LC_MESSAGES locale category if NO_GETTEXT is not defined.
return ret;
}
-static const char *charset;
static void init_gettext_charset(const char *domain)
{
/*
{
static int is_utf8 = -1;
if (is_utf8 == -1)
- is_utf8 = !strcmp(charset, "UTF-8");
+ is_utf8 = is_utf8_locale();
return is_utf8 ? utf8_strwidth(s) : strlen(s);
}
#endif
+
+int is_utf8_locale(void)
+{
+#ifdef NO_GETTEXT
+ if (!charset) {
+ const char *env = getenv("LC_ALL");
+ if (!env || !*env)
+ env = getenv("LC_CTYPE");
+ if (!env || !*env)
+ env = getenv("LANG");
+ if (!env)
+ env = "";
+ if (strchr(env, '.'))
+ env = strchr(env, '.') + 1;
+ charset = xstrdup(env);
+ }
+#endif
+ return is_encoding_utf8(charset);
+}
#endif
const char *get_preferred_languages(void);
+extern int is_utf8_locale(void);
#endif
my $normal_color = $repo->get_color("", "reset");
my $diff_algorithm = $repo->config('diff.algorithm');
+my $diff_compaction_heuristic = $repo->config_bool('diff.compactionheuristic');
my $diff_filter = $repo->config('interactive.difffilter');
my $use_readkey = 0;
if (defined $diff_algorithm) {
splice @diff_cmd, 1, 0, "--diff-algorithm=${diff_algorithm}";
}
+ if ($diff_compaction_heuristic) {
+ splice @diff_cmd, 1, 0, "--compaction-heuristic";
+ }
if (defined $patch_mode_revision) {
push @diff_cmd, get_diff_reference($patch_mode_revision);
}
OPTIONS_SPEC=
. git-sh-setup
-. git-sh-i18n
_x40='[0-9a-f][0-9a-f][0-9a-f][0-9a-f][0-9a-f]'
_x40="$_x40$_x40$_x40$_x40$_x40$_x40$_x40$_x40"
check_and_set_terms $state
case "$#,$state" in
0,*)
- die "$(gettext "Please call 'bisect_state' with at least one argument.")" ;;
+ die "Please call 'bisect_state' with at least one argument." ;;
1,"$TERM_BAD"|1,"$TERM_GOOD"|1,skip)
- rev=$(git rev-parse --verify $(bisect_head)) ||
- die "$(gettext "Bad rev input: $(bisect_head)")"
+ bisected_head=$(bisect_head)
+ rev=$(git rev-parse --verify "$bisected_head") ||
+ die "$(eval_gettext "Bad rev input: \$bisected_head")"
bisect_write "$state" "$rev"
check_expected_revs "$rev" ;;
2,"$TERM_BAD"|*,"$TERM_GOOD"|*,skip)
return 0;
}
+/*
+ * Like skip_prefix, but promises never to read past "len" bytes of the input
+ * buffer, and returns the remaining number of bytes in "out" via "outlen".
+ */
+static inline int skip_prefix_mem(const char *buf, size_t len,
+ const char *prefix,
+ const char **out, size_t *outlen)
+{
+ size_t prefix_len = strlen(prefix);
+ if (prefix_len <= len && !memcmp(buf, prefix, prefix_len)) {
+ *out = buf + prefix_len;
+ *outlen = len - prefix_len;
+ return 1;
+ }
+ return 0;
+}
+
/*
* If buf ends with suffix, return 1 and subtract the length of the suffix
* from *len. Otherwise, return 0 and leave *len untouched.
#define getpagesize() sysconf(_SC_PAGESIZE)
#endif
+#ifndef O_CLOEXEC
+#define O_CLOEXEC 0
+#endif
+
#ifdef FREAD_READS_DIRECTORIES
#ifdef fopen
#undef fopen
* you can do:
*
* struct foo *f;
- * FLEX_ALLOC_STR(f, name, src);
+ * FLEXPTR_ALLOC_STR(f, name, src);
*
* and "name" will point to a block of memory after the struct, which will be
* freed along with the struct (but the pointer can be repointed anywhere).
#endif
#endif
+
+extern int cmd_main(int, const char **);
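
The cmd_main() declaration above, together with the start-up calls dropped from individual programs elsewhere in this diff (git_extract_argv0_path(), git_setup_gettext(), and the restore_sigpipe_to_default() helper removed near the end of this section), suggests a single shared entry point that performs those chores once and then dispatches. A minimal sketch of such a shared main(), assuming it lives in a common file (e.g. common-main.c, which is not shown here):

	int main(int argc, const char **argv)
	{
		/*
		 * Start-up chores formerly repeated in each program:
		 * find the exec path, initialize gettext, and restore
		 * the default SIGPIPE disposition, then hand control
		 * to the program-specific cmd_main().
		 */
		git_extract_argv0_path(argv[0]);
		git_setup_gettext();
		restore_sigpipe_to_default();

		return cmd_main(argc, argv);
	}
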
do
launch_merge_tool "$1" "$2" "$5"
status=$?
+ if test $status -ge 126
+ then
+ # Command not found (127), not executable (126) or
+ # exited via a signal (>= 128).
+ exit $status
+ fi
+
if test "$status" != 0 &&
test "$GIT_DIFFTOOL_TRUST_EXIT_CODE" = true
then
exit($exitcode);
}
-sub find_worktree
-{
- # Git->repository->wc_path() does not honor changes to the working
- # tree location made by $ENV{GIT_WORK_TREE} or the 'core.worktree'
- # config variable.
- return Git::command_oneline('rev-parse', '--show-toplevel');
-}
-
sub print_tool_help
{
# See the comment at the bottom of file_diff() for the reason behind
sub use_wt_file
{
- my ($repo, $workdir, $file, $sha1) = @_;
+ my ($workdir, $file, $sha1) = @_;
my $null_sha1 = '0' x 40;
if (-l "$workdir/$file" || ! -e _) {
return (0, $null_sha1);
}
- my $wt_sha1 = $repo->command_oneline('hash-object', "$workdir/$file");
+ my $wt_sha1 = Git::command_oneline('hash-object', "$workdir/$file");
my $use = ($sha1 eq $null_sha1) || ($sha1 eq $wt_sha1);
return ($use, $wt_sha1);
}
{
my ($repo_path, $index, $worktree) = @_;
$ENV{GIT_INDEX_FILE} = $index;
- $ENV{GIT_WORK_TREE} = $worktree;
- my $must_unset_git_dir = 0;
- if (not defined($ENV{GIT_DIR})) {
- $must_unset_git_dir = 1;
- $ENV{GIT_DIR} = $repo_path;
- }
- my @refreshargs = qw/update-index --really-refresh -q --unmerged/;
- my @gitargs = qw/diff-files --name-only -z/;
+ my @gitargs = ('--git-dir', $repo_path, '--work-tree', $worktree);
+ my @refreshargs = (
+ @gitargs, 'update-index',
+ '--really-refresh', '-q', '--unmerged');
try {
Git::command_oneline(@refreshargs);
} catch Git::Error::Command with {};
- my $line = Git::command_oneline(@gitargs);
+ my @diffargs = (@gitargs, 'diff-files', '--name-only', '-z');
+ my $line = Git::command_oneline(@diffargs);
my @files;
if (defined $line) {
@files = split('\0', $line);
}
delete($ENV{GIT_INDEX_FILE});
- delete($ENV{GIT_WORK_TREE});
- delete($ENV{GIT_DIR}) if ($must_unset_git_dir);
return map { $_ => 1 } @files;
}
sub setup_dir_diff
{
- my ($repo, $workdir, $symlinks) = @_;
-
- # Run the diff; exit immediately if no diff found
- # 'Repository' and 'WorkingCopy' must be explicitly set to insure that
- # if $GIT_DIR and $GIT_WORK_TREE are set in ENV, they are actually used
- # by Git->repository->command*.
- my $repo_path = $repo->repo_path();
- my %repo_args = (Repository => $repo_path, WorkingCopy => $workdir);
- my $diffrepo = Git->repository(%repo_args);
-
+ my ($workdir, $symlinks) = @_;
my @gitargs = ('diff', '--raw', '--no-abbrev', '-z', @ARGV);
- my $diffrtn = $diffrepo->command_oneline(@gitargs);
+ my $diffrtn = Git::command_oneline(@gitargs);
exit(0) unless defined($diffrtn);
# Build index info for left and right sides of the diff
if ($lmode eq $symlink_mode) {
$symlink{$src_path}{left} =
- $diffrepo->command_oneline('show', "$lsha1");
+ Git::command_oneline('show', $lsha1);
}
if ($rmode eq $symlink_mode) {
$symlink{$dst_path}{right} =
- $diffrepo->command_oneline('show', "$rsha1");
+ Git::command_oneline('show', $rsha1);
}
if ($lmode ne $null_mode and $status !~ /^C/) {
if ($working_tree_dups{$dst_path}++) {
next;
}
- my ($use, $wt_sha1) = use_wt_file($repo, $workdir,
- $dst_path, $rsha1);
+ my ($use, $wt_sha1) =
+ use_wt_file($workdir, $dst_path, $rsha1);
if ($use) {
push @working_tree, $dst_path;
$wtindex .= "$rmode $wt_sha1\t$dst_path\0";
mkpath($ldir) or exit_cleanup($tmpdir, 1);
mkpath($rdir) or exit_cleanup($tmpdir, 1);
- # If $GIT_DIR is not set prior to calling 'git update-index' and
- # 'git checkout-index', then those commands will fail if difftool
- # is called from a directory other than the repo root.
- my $must_unset_git_dir = 0;
- if (not defined($ENV{GIT_DIR})) {
- $must_unset_git_dir = 1;
- $ENV{GIT_DIR} = $repo_path;
- }
-
# Populate the left and right directories based on each index file
my ($inpipe, $ctx);
$ENV{GIT_INDEX_FILE} = "$tmpdir/lindex";
($inpipe, $ctx) =
- $repo->command_input_pipe(qw(update-index -z --index-info));
+ Git::command_input_pipe('update-index', '-z', '--index-info');
print($inpipe $lindex);
- $repo->command_close_pipe($inpipe, $ctx);
+ Git::command_close_pipe($inpipe, $ctx);
my $rc = system('git', 'checkout-index', '--all', "--prefix=$ldir/");
exit_cleanup($tmpdir, $rc) if $rc != 0;
$ENV{GIT_INDEX_FILE} = "$tmpdir/rindex";
($inpipe, $ctx) =
- $repo->command_input_pipe(qw(update-index -z --index-info));
+ Git::command_input_pipe('update-index', '-z', '--index-info');
print($inpipe $rindex);
- $repo->command_close_pipe($inpipe, $ctx);
+ Git::command_close_pipe($inpipe, $ctx);
$rc = system('git', 'checkout-index', '--all', "--prefix=$rdir/");
exit_cleanup($tmpdir, $rc) if $rc != 0;
$ENV{GIT_INDEX_FILE} = "$tmpdir/wtindex";
($inpipe, $ctx) =
- $repo->command_input_pipe(qw(update-index --info-only -z --index-info));
+ Git::command_input_pipe('update-index', '--info-only', '-z', '--index-info');
print($inpipe $wtindex);
- $repo->command_close_pipe($inpipe, $ctx);
+ Git::command_close_pipe($inpipe, $ctx);
# If $GIT_DIR was explicitly set just for the update/checkout
# commands, then it should be unset before continuing.
- delete($ENV{GIT_DIR}) if ($must_unset_git_dir);
delete($ENV{GIT_INDEX_FILE});
# Changes in the working tree need special treatment since they are
my $rc;
my $error = 0;
my $repo = Git->repository();
- my $workdir = find_worktree();
- my ($a, $b, $tmpdir, @worktree) =
- setup_dir_diff($repo, $workdir, $symlinks);
+ my $repo_path = $repo->repo_path();
+ my $workdir = $repo->wc_path();
+ my ($a, $b, $tmpdir, @worktree) = setup_dir_diff($workdir, $symlinks);
if (defined($extcmd)) {
$rc = system($extcmd, $a, $b);
next if ! -f "$b/$file";
if (!$indices_loaded) {
- %wt_modified = changed_files($repo->repo_path(),
- "$tmpdir/wtindex", "$workdir");
- %tmp_modified = changed_files($repo->repo_path(),
- "$tmpdir/wtindex", "$b");
+ %wt_modified = changed_files(
+ $repo_path, "$tmpdir/wtindex", $workdir);
+ %tmp_modified = changed_files(
+ $repo_path, "$tmpdir/wtindex", $b);
$indices_loaded = 1;
}
# Resolve two or more trees.
#
+. git-sh-setup
+
LF='
'
-die () {
- echo >&2 "$*"
- exit 1
-}
-
# The first parameters up to -- are merge bases; the rest are heads.
bases= head= remotes= sep_seen=
for arg
if ! git diff-index --quiet --cached HEAD --
then
- echo "Error: Your local changes to the following files would be overwritten by merge"
+ gettextln "Error: Your local changes to the following files would be overwritten by merge"
git diff-index --cached --name-only HEAD -- | sed -e 's/^/ /'
exit 2
fi
# We allow only last one to have a hand-resolvable
# conflicts. Last round failed and we still had
# a head to merge.
- echo "Automated merge did not work."
- echo "Should not be doing an Octopus."
+ gettextln "Automated merge did not work."
+ gettextln "Should not be doing an Octopus."
exit 2
esac
eval pretty_name=\${GITHEAD_$SHA1_UP:-$pretty_name}
fi
common=$(git merge-base --all $SHA1 $MRC) ||
- die "Unable to find common commit with $pretty_name"
+ die "$(eval_gettext "Unable to find common commit with \$pretty_name")"
case "$LF$common$LF" in
*"$LF$SHA1$LF"*)
- echo "Already up-to-date with $pretty_name"
+ eval_gettextln "Already up-to-date with \$pretty_name"
continue
;;
esac
# tree as the intermediate result of the merge.
# We still need to count this as part of the parent set.
- echo "Fast-forwarding to: $pretty_name"
+ eval_gettextln "Fast-forwarding to: \$pretty_name"
git read-tree -u -m $head $SHA1 || exit
MRC=$SHA1 MRT=$(git write-tree)
continue
NON_FF_MERGE=1
- echo "Trying simple merge with $pretty_name"
+ eval_gettextln "Trying simple merge with \$pretty_name"
git read-tree -u -m --aggressive $common $MRT $SHA1 || exit 2
next=$(git write-tree 2>/dev/null)
if test $? -ne 0
then
- echo "Simple merge did not work, trying automatic merge."
+ gettextln "Simple merge did not work, trying automatic merge."
git-merge-index -o git-merge-one-file -a ||
OCTOPUS_FAILURE=1
next=$(git write-tree 2>/dev/null)
if self.useClientSpec:
self.clientSpecDirs = getClientSpec()
- # Check for the existance of P4 branches
+ # Check for the existence of P4 branches
branchesDetected = (len(p4BranchesInGit().keys()) > 1)
if self.useClientSpec and not branchesDetected:
self.useClientSpec_from_options = False
self.clientSpecDirs = None
self.tempBranches = []
- self.tempBranchLocation = "git-p4-tmp"
+ self.tempBranchLocation = "refs/git-p4-tmp"
self.largeFileSystem = None
if gitConfig('git-p4.largeFileSystem'):
return True
hasPrefix = [p for p in self.branchPrefixes
if p4PathStartsWith(path, p)]
- if hasPrefix and self.verbose:
+ if not hasPrefix and self.verbose:
print('Ignoring file outside of prefix: {0}'.format(path))
return hasPrefix
# below were not inside any function, and expected to return
# to the function that dot-sourced us.
#
-# However, FreeBSD /bin/sh misbehaves on such a construct and
-# continues to run the statements that follow such a "return".
+# However, older (9.x) versions of FreeBSD /bin/sh misbehave on such a
+# construct and continue to run the statements that follow such a "return".
# As a work-around, we introduce an extra layer of a function
# here, and immediately call it after defining it.
git_rebase__am () {
sed -e 1q < "$todo" >> "$done"
sed -e 1d < "$todo" >> "$todo".new
mv -f "$todo".new "$todo"
- new_count=$(git stripspace --strip-comments <"$done" | wc -l)
+ new_count=$(( $(git stripspace --strip-comments <"$done" | wc -l) ))
echo $new_count >"$msgnum"
total=$(($new_count + $(git stripspace --strip-comments <"$todo" | wc -l)))
echo $total >"$end"
if test "$last_count" != "$new_count"
then
last_count=$new_count
- printf "Rebasing (%d/%d)\r" $new_count $total
+ eval_gettext "Rebasing (\$new_count/\$total)"; printf "\r"
test -z "$verbose" || echo
fi
}
}
append_todo_help () {
- git stripspace --comment-lines >>"$todo" <<\EOF
-
+ gettext "
Commands:
p, pick = use commit
r, reword = use commit, but edit the commit message
e, edit = use commit, but stop for amending
s, squash = use commit, but meld into previous commit
- f, fixup = like "squash", but discard this commit's log message
+ f, fixup = like \"squash\", but discard this commit's log message
x, exec = run command (the rest of the line) using shell
d, drop = remove commit
These lines can be re-ordered; they are executed from top to bottom.
+" | git stripspace --comment-lines >>"$todo"
-EOF
if test $(get_missing_commit_check_level) = error
then
- git stripspace --comment-lines >>"$todo" <<\EOF
+ gettext "
Do not remove any line. Use 'drop' explicitly to remove a commit.
-EOF
+" | git stripspace --comment-lines >>"$todo"
else
- git stripspace --comment-lines >>"$todo" <<\EOF
+ gettext "
If you remove a line here THAT COMMIT WILL BE LOST.
-EOF
+" | git stripspace --comment-lines >>"$todo"
fi
}
make_patch $1
git rev-parse --verify HEAD > "$amend"
gpg_sign_opt_quoted=${gpg_sign_opt:+$(git rev-parse --sq-quote "$gpg_sign_opt")}
- warn "You can amend the commit now, with"
- warn
- warn " git commit --amend $gpg_sign_opt_quoted"
- warn
- warn "Once you are satisfied with your changes, run"
- warn
- warn " git rebase --continue"
+ warn "$(eval_gettext "\
+You can amend the commit now, with
+
+ git commit --amend \$gpg_sign_opt_quoted
+
+Once you are satisfied with your changes, run
+
+ git rebase --continue")"
warn
exit $2
}
die_abort () {
+ apply_autostash
rm -rf "$state_dir"
die "$1"
}
}
is_empty_commit() {
- tree=$(git rev-parse -q --verify "$1"^{tree} 2>/dev/null ||
- die "$1: not a commit that can be picked")
- ptree=$(git rev-parse -q --verify "$1"^^{tree} 2>/dev/null ||
- ptree=4b825dc642cb6eb9a060e54bf8d69288fbee4904)
+ tree=$(git rev-parse -q --verify "$1"^{tree} 2>/dev/null) || {
+ sha1=$1
+ die "$(eval_gettext "\$sha1: not a commit that can be picked")"
+ }
+ ptree=$(git rev-parse -q --verify "$1"^^{tree} 2>/dev/null) ||
+ ptree=4b825dc642cb6eb9a060e54bf8d69288fbee4904
test "$tree" = "$ptree"
}
case "$1" in -n) sha1=$2; ff= ;; *) sha1=$1 ;; esac
case "$force_rebase" in '') ;; ?*) ff= ;; esac
- output git rev-parse --verify $sha1 || die "Invalid commit name: $sha1"
+ output git rev-parse --verify $sha1 || die "$(eval_gettext "Invalid commit name: \$sha1")"
if is_empty_commit "$sha1"
then
git rev-parse HEAD > "$rewritten"/$current_commit
done <"$state_dir"/current-commit
rm "$state_dir"/current-commit ||
- die "Cannot write current commit's replacement sha1"
+ die "$(gettext "Cannot write current commit's replacement sha1")"
fi
fi
done
case $fast_forward in
t)
- output warn "Fast-forward to $sha1"
+ output warn "$(eval_gettext "Fast-forward to \$sha1")"
output git reset --hard $sha1 ||
- die "Cannot fast-forward to $sha1"
+ die "$(eval_gettext "Cannot fast-forward to \$sha1")"
;;
f)
first_parent=$(expr "$new_parents" : ' \([^ ]*\)')
then
# detach HEAD to current parent
output git checkout $first_parent 2> /dev/null ||
- die "Cannot move HEAD to $first_parent"
+ die "$(eval_gettext "Cannot move HEAD to \$first_parent")"
fi
case "$new_parents" in
' '*' '*)
- test "a$1" = a-n && die "Refusing to squash a merge: $sha1"
+ test "a$1" = a-n && die "$(eval_gettext "Refusing to squash a merge: \$sha1")"
# redo merge
author_script_content=$(get_author_ident_from_commit $sha1)
$merge_args $strategy_args -m "$msg_content" $new_parents'
then
printf "%s\n" "$msg_content" > "$GIT_DIR"/MERGE_MSG
- die_with_patch $sha1 "Error redoing merge $sha1"
+ die_with_patch $sha1 "$(eval_gettext "Error redoing merge \$sha1")"
fi
echo "$sha1 $(git rev-parse HEAD^0)" >> "$rewritten_list"
;;
output eval git cherry-pick \
${gpg_sign_opt:+$(git rev-parse --sq-quote "$gpg_sign_opt")} \
"$strategy_args" "$@" ||
- die_with_patch $sha1 "Could not pick $sha1"
+ die_with_patch $sha1 "$(eval_gettext "Could not pick \$sha1")"
;;
esac
;;
esac
}
-nth_string () {
- case "$1" in
- *1[0-9]|*[04-9]) echo "$1"th;;
- *1) echo "$1"st;;
- *2) echo "$1"nd;;
- *3) echo "$1"rd;;
- esac
+this_nth_commit_message () {
+ n=$1
+ eval_gettext "This is the commit message #\${n}:"
+}
+
+skip_nth_commit_message () {
+ n=$1
+ eval_gettext "The commit message #\${n} will be skipped:"
}
update_squash_messages () {
if test -f "$squash_msg"; then
mv "$squash_msg" "$squash_msg".bak || exit
count=$(($(sed -n \
- -e "1s/^. This is a combination of \(.*\) commits\./\1/p" \
+ -e "1s/^$comment_char.*\([0-9][0-9]*\).*/\1/p" \
-e "q" < "$squash_msg".bak)+1))
{
- printf '%s\n' "$comment_char This is a combination of $count commits."
+ printf '%s\n' "$comment_char $(eval_ngettext \
+ "This is a combination of \$count commit." \
+ "This is a combination of \$count commits." \
+ $count)"
sed -e 1d -e '2,/^./{
/^$/d
}' <"$squash_msg".bak
} >"$squash_msg"
else
- commit_message HEAD > "$fixup_msg" || die "Cannot write $fixup_msg"
+ commit_message HEAD > "$fixup_msg" || die "$(gettext "Cannot write \$fixup_msg")"
count=2
{
- printf '%s\n' "$comment_char This is a combination of 2 commits."
- printf '%s\n' "$comment_char The first commit's message is:"
+ printf '%s\n' "$comment_char $(gettext "This is a combination of 2 commits.")"
+ printf '%s\n' "$comment_char $(gettext "This is the 1st commit message:")"
echo
cat "$fixup_msg"
} >"$squash_msg"
squash)
rm -f "$fixup_msg"
echo
- printf '%s\n' "$comment_char This is the $(nth_string $count) commit message:"
+ printf '%s\n' "$comment_char $(this_nth_commit_message $count)"
echo
commit_message $2
;;
fixup)
echo
- printf '%s\n' "$comment_char The $(nth_string $count) commit message will be skipped:"
+ printf '%s\n' "$comment_char $(skip_nth_commit_message $count)"
echo
# Change the space after the comment character to TAB:
commit_message $2 | git stripspace --comment-lines | sed -e 's/ / /'
# messages, effectively causing the combined commit to be used as the
# new basis for any further squash/fixups. Args: sha1 rest
die_failed_squash() {
+ sha1=$1
+ rest=$2
mv "$squash_msg" "$msg" || exit
rm -f "$fixup_msg"
cp "$msg" "$GIT_DIR"/MERGE_MSG || exit
warn
- warn "Could not apply $1... $2"
- die_with_patch $1 ""
+ warn "$(eval_gettext "Could not apply \$sha1... \$rest")"
+ die_with_patch $sha1 ""
}
flush_rewritten_pending() {
}
do_pick () {
+ sha1=$1
+ rest=$2
if test "$(git rev-parse HEAD)" = "$squash_onto"
then
# Set the correct commit message and author info on the
# resolve before manually running git commit --amend then git
# rebase --continue.
git commit --allow-empty --allow-empty-message --amend \
- --no-post-rewrite -n -q -C $1 &&
- pick_one -n $1 &&
+ --no-post-rewrite -n -q -C $sha1 &&
+ pick_one -n $sha1 &&
git commit --allow-empty --allow-empty-message \
- --amend --no-post-rewrite -n -q -C $1 \
+ --amend --no-post-rewrite -n -q -C $sha1 \
${gpg_sign_opt:+"$gpg_sign_opt"} ||
- die_with_patch $1 "Could not apply $1... $2"
+ die_with_patch $sha1 "$(eval_gettext "Could not apply \$sha1... \$rest")"
else
- pick_one $1 ||
- die_with_patch $1 "Could not apply $1... $2"
+ pick_one $sha1 ||
+ die_with_patch $sha1 "$(eval_gettext "Could not apply \$sha1... \$rest")"
fi
}
mark_action_done
do_pick $sha1 "$rest"
git commit --amend --no-post-rewrite ${gpg_sign_opt:+"$gpg_sign_opt"} || {
- warn "Could not amend commit after successfully picking $sha1... $rest"
- warn "This is most likely due to an empty commit message, or the pre-commit hook"
- warn "failed. If the pre-commit hook failed, you may need to resolve the issue before"
- warn "you are able to reword the commit."
+ warn "$(eval_gettext "\
+Could not amend commit after successfully picking \$sha1... \$rest
+This is most likely due to an empty commit message, or the pre-commit hook
+failed. If the pre-commit hook failed, you may need to resolve the issue before
+you are able to reword the commit.")"
exit_with_patch $sha1 1
}
record_in_rewritten $sha1
mark_action_done
do_pick $sha1 "$rest"
sha1_abbrev=$(git rev-parse --short $sha1)
- warn "Stopped at $sha1_abbrev... $rest"
+ warn "$(eval_gettext "Stopped at \$sha1_abbrev... \$rest")"
exit_with_patch $sha1 0
;;
squash|s|fixup|f)
comment_for_reflog $squash_style
test -f "$done" && has_action "$done" ||
- die "Cannot '$squash_style' without a previous commit"
+ die "$(eval_gettext "Cannot '\$squash_style' without a previous commit")"
mark_action_done
update_squash_messages $squash_style $sha1
x|"exec")
read -r command rest < "$todo"
mark_action_done
- printf 'Executing: %s\n' "$rest"
+ eval_gettextln "Executing: \$rest"
"${SHELL:-@SHELL_PATH@}" -c "$rest" # Actual execution
status=$?
# Run in subshell because require_clean_work_tree can die.
(require_clean_work_tree "rebase" 2>/dev/null) || dirty=t
if test "$status" -ne 0
then
- warn "Execution failed: $rest"
+ warn "$(eval_gettext "Execution failed: \$rest")"
test "$dirty" = f ||
- warn "and made changes to the index and/or the working tree"
+ warn "$(gettext "and made changes to the index and/or the working tree")"
- warn "You can fix the problem, and then run"
- warn
- warn " git rebase --continue"
+ warn "$(gettext "\
+You can fix the problem, and then run
+
+ git rebase --continue")"
warn
if test $status -eq 127 # command not found
then
exit "$status"
elif test "$dirty" = t
then
- warn "Execution succeeded: $rest"
- warn "but left changes to the index and/or the working tree"
- warn "Commit or stash your changes, and then run"
- warn
- warn " git rebase --continue"
+ # TRANSLATORS: after these lines is a command to be issued by the user
+ warn "$(eval_gettext "\
+Execution succeeded: \$rest
+but left changes to the index and/or the working tree
+Commit or stash your changes, and then run
+
+ git rebase --continue")"
warn
exit 1
fi
;;
*)
- warn "Unknown command: $command $sha1 $rest"
- fixtodo="Please fix this using 'git rebase --edit-todo'."
+ warn "$(eval_gettext "Unknown command: \$command \$sha1 \$rest")"
+ fixtodo="$(gettext "Please fix this using 'git rebase --edit-todo'.")"
if git rev-parse --verify -q "$sha1" >/dev/null
then
die_with_patch $sha1 "$fixtodo"
"$hook" rebase < "$rewritten_list"
true # we don't care if this hook failed
fi &&
- warn "Successfully rebased and updated $head_name."
+ warn "$(eval_gettext "Successfully rebased and updated \$head_name.")"
return 1 # not failure; just to break the do_rest loop
}
record_in_rewritten "$onto"
;;
esac ||
- die "Could not skip unnecessary pick commands"
+ die "$(gettext "Could not skip unnecessary pick commands")"
}
transform_todo_ids () {
if test $badsha -ne 0
then
line="$(sed -n -e "${2}p" "$3")"
- warn "Warning: the SHA-1 is missing or isn't" \
- "a commit in the following line:"
- warn " - $line"
+ warn "$(eval_gettext "\
+Warning: the SHA-1 is missing or isn't a commit in the following line:
+ - \$line")"
warn
fi
;;
*)
line="$(sed -n -e "${lineno}p" "$1")"
- warn "Warning: the command isn't recognized" \
- "in the following line:"
- warn " - $line"
+ warn "$(eval_gettext "\
+Warning: the command isn't recognized in the following line:
+ - \$line")"
warn
retval=1
;;
# Switch to the branch in $into and notify it in the reflog
checkout_onto () {
GIT_REFLOG_ACTION="$GIT_REFLOG_ACTION: checkout $onto_name"
- output git checkout $onto || die_abort "could not detach HEAD"
+ output git checkout $onto || die_abort "$(gettext "could not detach HEAD")"
git update-ref ORIG_HEAD $orig_head
}
then
test "$check_level" = error && raise_error=t
- warn "Warning: some commits may have been dropped" \
- "accidentally."
- warn "Dropped commits (newer to older):"
+ warn "$(gettext "\
+Warning: some commits may have been dropped accidentally.
+Dropped commits (newer to older):")"
# Make the list user-friendly and display
opt="--no-walk=sorted --format=oneline --abbrev-commit --stdin"
git rev-list $opt <"$todo".miss | warn_lines
- warn "To avoid this message, use \"drop\" to" \
- "explicitly remove a commit."
- warn
- warn "Use 'git config rebase.missingCommitsCheck' to change" \
- "the level of warnings."
- warn "The possible behaviours are: ignore, warn, error."
+ warn "$(gettext "\
+To avoid this message, use \"drop\" to explicitly remove a commit.
+
+Use 'git config rebase.missingCommitsCheck' to change the level of warnings.
+The possible behaviours are: ignore, warn, error.")"
warn
fi
;;
ignore)
;;
*)
- warn "Unrecognized setting $check_level for option" \
- "rebase.missingCommitsCheck. Ignoring."
+ warn "$(eval_gettext "Unrecognized setting \$check_level for option rebase.missingCommitsCheck. Ignoring.")"
;;
esac
# placed before the commit of the next action
checkout_onto
- warn "You can fix this with 'git rebase --edit-todo'."
- die "Or you can abort the rebase with 'git rebase --abort'."
+ warn "$(gettext "You can fix this with 'git rebase --edit-todo'.")"
+ die "$(gettext "Or you can abort the rebase with 'git rebase --abort'.")"
fi
}
# below were not inside any function, and expected to return
# to the function that dot-sourced us.
#
-# However, FreeBSD /bin/sh misbehaves on such a construct and
-# continues to run the statements that follow such a "return".
+# However, older (9.x) versions of FreeBSD /bin/sh misbehave on such a
+# construct and continue to run the statements that follow such a "return".
# As a work-around, we introduce an extra layer of a function
# here, and immediately call it after defining it.
git_rebase__interactive () {
test ! -f "$GIT_DIR"/CHERRY_PICK_HEAD ||
rm "$GIT_DIR"/CHERRY_PICK_HEAD ||
- die "Could not remove CHERRY_PICK_HEAD"
+ die "$(gettext "Could not remove CHERRY_PICK_HEAD")"
else
if ! test -f "$author_script"
then
gpg_sign_opt_quoted=${gpg_sign_opt:+$(git rev-parse --sq-quote "$gpg_sign_opt")}
- die "You have staged changes in your working tree. If these changes are meant to be
+ die "$(eval_gettext "\
+You have staged changes in your working tree.
+If these changes are meant to be
squashed into the previous commit, run:
- git commit --amend $gpg_sign_opt_quoted
+ git commit --amend \$gpg_sign_opt_quoted
If they are meant to go into a new commit, run:
- git commit $gpg_sign_opt_quoted
+ git commit \$gpg_sign_opt_quoted
In both cases, once you're done, continue with:
git rebase --continue
-"
+")"
fi
. "$author_script" ||
- die "Error trying to find the author identity to amend commit"
+ die "$(gettext "Error trying to find the author identity to amend commit")"
if test -f "$amend"
then
current_head=$(git rev-parse --verify HEAD)
test "$current_head" = $(cat "$amend") ||
- die "\
-You have uncommitted changes in your working tree. Please, commit them
-first and then run 'git rebase --continue' again."
+ die "$(gettext "\
+You have uncommitted changes in your working tree. Please commit them
+first and then run 'git rebase --continue' again.")"
do_with_author git commit --amend --no-verify -F "$msg" -e \
${gpg_sign_opt:+"$gpg_sign_opt"} ||
- die "Could not commit staged changes."
+ die "$(gettext "Could not commit staged changes.")"
else
do_with_author git commit --no-verify -F "$msg" -e \
${gpg_sign_opt:+"$gpg_sign_opt"} ||
- die "Could not commit staged changes."
+ die "$(gettext "Could not commit staged changes.")"
fi
fi
mv -f "$todo".new "$todo"
collapse_todo_ids
append_todo_help
- git stripspace --comment-lines >>"$todo" <<\EOF
-
+ gettext "
You are editing the todo file of an ongoing interactive rebase.
To continue rebase after editing, run:
git rebase --continue
-EOF
+" | git stripspace --comment-lines >>"$todo"
git_sequence_editor "$todo" ||
- die "Could not execute editor"
+ die "$(gettext "Could not execute editor")"
expand_todo_ids
exit
esac
git var GIT_COMMITTER_IDENT >/dev/null ||
- die "You need to set your committer info first"
+ die "$(gettext "You need to set your committer info first")"
comment_for_reflog start
then
GIT_REFLOG_ACTION="$GIT_REFLOG_ACTION: checkout $switch_to"
output git checkout "$switch_to" -- ||
- die "Could not checkout $switch_to"
+ die "$(eval_gettext "Could not checkout \$switch_to")"
comment_for_reflog start
fi
-orig_head=$(git rev-parse --verify HEAD) || die "No HEAD?"
-mkdir -p "$state_dir" || die "Could not create temporary $state_dir"
+orig_head=$(git rev-parse --verify HEAD) || die "$(gettext "No HEAD?")"
+mkdir -p "$state_dir" || die "$(eval_gettext "Could not create temporary \$state_dir")"
-: > "$state_dir"/interactive || die "Could not mark as interactive"
+: > "$state_dir"/interactive || die "$(gettext "Could not mark as interactive")"
write_basic_state
if test t = "$preserve_merges"
then
for c in $(git merge-base --all $orig_head $upstream)
do
echo $onto > "$rewritten"/$c ||
- die "Could not init rewritten commits"
+ die "$(gettext "Could not init rewritten commits")"
done
else
mkdir "$rewritten" &&
echo $onto > "$rewritten"/root ||
- die "Could not init rewritten commits"
+ die "$(gettext "Could not init rewritten commits")"
fi
# No cherry-pick because our first pass is to determine
# parents to rewrite and skipping dropped commits would
cat >>"$todo" <<EOF
-$comment_char Rebase $shortrevisions onto $shortonto ($todocount command(s))
+$comment_char $(eval_ngettext \
+ "Rebase \$shortrevisions onto \$shortonto (\$todocount command)" \
+ "Rebase \$shortrevisions onto \$shortonto (\$todocount commands)" \
+ "$todocount")
EOF
append_todo_help
-git stripspace --comment-lines >>"$todo" <<\EOF
-
+gettext "
However, if you remove everything, the rebase will be aborted.
-EOF
+" | git stripspace --comment-lines >>"$todo"
if test -z "$keep_empty"
then
- printf '%s\n' "$comment_char Note that empty commits are commented out" >>"$todo"
+ printf '%s\n' "$comment_char $(gettext "Note that empty commits are commented out")" >>"$todo"
fi
cp "$todo" "$todo".backup
collapse_todo_ids
git_sequence_editor "$todo" ||
- die_abort "Could not execute editor"
+ die_abort "$(gettext "Could not execute editor")"
has_action "$todo" ||
return 2
# below were not inside any function, and expected to return
# to the function that dot-sourced us.
#
-# However, FreeBSD /bin/sh misbehaves on such a construct and
-# continues to run the statements that follow such a "return".
+# However, older (9.x) versions of FreeBSD /bin/sh misbehave on such a
+# construct and continue to run the statements that follow such a "return".
# As a work-around, we introduce an extra layer of a function
# here, and immediately call it after defining it.
git_rebase__merge () {
edit-todo! edit the todo list during an interactive rebase
"
. git-sh-setup
-. git-sh-i18n
set_reflog_action rebase
require_work_tree_exists
cd_to_toplevel
git symbolic-ref \
-m "rebase finished: returning to $head_name" \
HEAD $head_name ||
- die "$(gettext "Could not move back to $head_name")"
+ die "$(eval_gettext "Could not move back to \$head_name")"
;;
esac
}
then
. git-parse-remote
error_on_missing_default_upstream "rebase" "rebase" \
- "against" "git rebase <branch>"
+ "against" "git rebase $(gettext '<branch>')"
fi
test "$fork_point" = auto && fork_point=t
push @files, $repo->command('format-patch', '-o', tempdir(CLEANUP => 1), @rev_list_opts);
}
+@files = handle_backup_files(@files);
+
if ($validate) {
foreach my $f (@files) {
unless (-p $f) {
return;
}
+sub handle_backup {
+ my ($last, $lastlen, $file, $known_suffix) = @_;
+ my ($suffix, $skip);
+
+ $skip = 0;
+ if (defined $last &&
+ ($lastlen < length($file)) &&
+ (substr($file, 0, $lastlen) eq $last) &&
+ ($suffix = substr($file, $lastlen)) !~ /^[a-z0-9]/i) {
+ if (defined $known_suffix && $suffix eq $known_suffix) {
+ print "Skipping $file with backup suffix '$known_suffix'.\n";
+ $skip = 1;
+ } else {
+ my $answer = ask("Do you really want to send $file? (y|N): ",
+ valid_re => qr/^(?:y|n)/i,
+ default => 'n');
+ $skip = ($answer ne 'y');
+ if ($skip) {
+ $known_suffix = $suffix;
+ }
+ }
+ }
+ return ($skip, $known_suffix);
+}
+
+sub handle_backup_files {
+ my @file = @_;
+ my ($last, $lastlen, $known_suffix, $skip, @result);
+ for my $file (@file) {
+ ($skip, $known_suffix) = handle_backup($last, $lastlen,
+ $file, $known_suffix);
+ push @result, $file unless $skip;
+ $last = $file;
+ $lastlen = length($file);
+ }
+ return @result;
+}
+
sub file_has_nonascii {
my $fn = shift;
open(my $fh, '<', $fn)
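
The backup handling above targets editor or backup copies sitting next to the real patch files: if the list contains, say, 0001-foo.patch and 0001-foo.patch.orig (names made up for illustration), the second name extends the first and its suffix does not begin with an alphanumeric character, so send-email asks whether it should really be sent; if the user declines, that suffix is remembered and any further file ending in it is skipped with a message instead of another prompt.
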
git sh-i18n--envsubst "$1"
)
}
+
+ eval_ngettext () {
+ ngettext "$1" "$2" "$3" | (
+ export PATH $(git sh-i18n--envsubst --variables "$2");
+ git sh-i18n--envsubst "$2"
+ )
+ }
;;
poison)
# Emit garbage so that tests that incorrectly rely on translatable
eval_gettext () {
printf "%s" "# GETTEXT POISON #"
}
+
+ eval_ngettext () {
+ printf "%s" "# GETTEXT POISON #"
+ }
;;
*)
gettext () {
git sh-i18n--envsubst "$1"
)
}
+
+ eval_ngettext () {
+ (test "$3" = 1 && printf "%s" "$1" || printf "%s" "$2") | (
+ export PATH $(git sh-i18n--envsubst --variables "$2");
+ git sh-i18n--envsubst "$2"
+ )
+ }
;;
esac
# to set up some variables pointing at the normal git directories and
# a few helper shell functions.
+# Source git-sh-i18n for gettext support.
+. git-sh-i18n
+
# Having this variable in your environment would break scripts because
# you would cause "cd" to be taken to unexpected places. If you
# like CDPATH, define it for your interactive shell sessions without
else
dashless=$(basename -- "$0" | sed -e 's/-/ /')
usage() {
- die "usage: $dashless $USAGE"
+ die "$(eval_gettext "usage: \$dashless \$USAGE")"
}
if [ -z "$LONG_USAGE" ]
then
- LONG_USAGE="usage: $dashless $USAGE"
+ LONG_USAGE="$(eval_gettext "usage: \$dashless \$USAGE")"
else
- LONG_USAGE="usage: $dashless $USAGE
+ LONG_USAGE="$(eval_gettext "usage: \$dashless \$USAGE
-$LONG_USAGE"
+$LONG_USAGE")"
fi
case "$1" in
else
GIT_PAGER=cat
fi
- : ${LESS=-FRX}
- : ${LV=-c}
- export LESS LV
+ for vardef in @@PAGER_ENV@@
+ do
+ var=${vardef%%=*}
+ eval ": \"\${$vardef}\" && export $var"
+ done
eval "$GIT_PAGER" '"$@"'
}
cd_to_toplevel () {
cdup=$(git rev-parse --show-toplevel) &&
cd "$cdup" || {
- echo >&2 "Cannot chdir to $cdup, the toplevel of the working tree"
+ gettextln "Cannot chdir to \$cdup, the toplevel of the working tree" >&2
exit 1
}
}
require_work_tree_exists () {
if test "z$(git rev-parse --is-bare-repository)" != zfalse
then
- die "fatal: $0 cannot be used without a working tree."
+ program_name=$0
+ die "$(gettext "fatal: \$program_name cannot be used without a working tree.")"
fi
}
require_work_tree () {
- test "$(git rev-parse --is-inside-work-tree 2>/dev/null)" = true ||
- die "fatal: $0 cannot be used without a working tree."
+ test "$(git rev-parse --is-inside-work-tree 2>/dev/null)" = true || {
+ program_name=$0
+ die "$(gettext "fatal: \$program_name cannot be used without a working tree.")"
+ }
}
require_clean_work_tree () {
if ! git diff-files --quiet --ignore-submodules
then
- echo >&2 "Cannot $1: You have unstaged changes."
+ action=$1
+ case "$action" in
+ rebase)
+ gettextln "Cannot rebase: You have unstaged changes." >&2
+ ;;
+ "rewrite branches")
+ gettextln "Cannot rewrite branches: You have unstaged changes." >&2
+ ;;
+ "pull with rebase")
+ gettextln "Cannot pull with rebase: You have unstaged changes." >&2
+ ;;
+ *)
+ eval_gettextln "Cannot \$action: You have unstaged changes." >&2
+ ;;
+ esac
err=1
fi
if ! git diff-index --cached --quiet --ignore-submodules HEAD --
then
- if [ $err = 0 ]
+ if test $err = 0
then
- echo >&2 "Cannot $1: Your index contains uncommitted changes."
+ action=$1
+ case "$action" in
+ rebase)
+ gettextln "Cannot rebase: Your index contains uncommitted changes." >&2
+ ;;
+ "pull with rebase")
+ gettextln "Cannot pull with rebase: Your index contains uncommitted changes." >&2
+ ;;
+ *)
+ eval_gettextln "Cannot \$action: Your index contains uncommitted changes." >&2
+ ;;
+ esac
else
- echo >&2 "Additionally, your index contains uncommitted changes."
+ gettextln "Additionally, your index contains uncommitted changes." >&2
fi
err=1
fi
- if [ $err = 1 ]
+ if test $err = 1
then
- test -n "$2" && echo >&2 "$2"
+ test -n "$2" && echo "$2" >&2
exit 1
fi
}
then
test -z "$(git rev-parse --show-cdup)" || {
exit=$?
- echo >&2 "You need to run this command from the toplevel of the working tree."
+ gettextln "You need to run this command from the toplevel of the working tree." >&2
exit $exit
}
fi
test -n "$GIT_DIR" && GIT_DIR=$(cd "$GIT_DIR" && pwd) || {
- echo >&2 "Unable to determine absolute path of git directory"
+ gettextln "Unable to determine absolute path of git directory" >&2
exit 1
}
- : ${GIT_OBJECT_DIRECTORY="$(git rev-parse --git-path objects)"}
+ : "${GIT_OBJECT_DIRECTORY="$(git rev-parse --git-path objects)"}"
}
if test -z "$NONGIT_OK"
OPTIONS_SPEC=
START_DIR=$(pwd)
. git-sh-setup
-. git-sh-i18n
require_work_tree
cd_to_toplevel
create_stash "$stash_msg" $untracked
store_stash -m "$stash_msg" -q $w_commit ||
die "$(gettext "Cannot save the current status")"
- say Saved working directory and index state "$stash_msg"
+ say "$(eval_gettext "Saved working directory and index state \$stash_msg")"
if test -z "$patch_mode"
then
drop_stash "$@"
else
status=$?
- say "The stash is kept in case you need it again."
+ say "$(gettext "The stash is kept in case you need it again.")"
exit $status
fi
}
or: $dashless [--quiet] status [--cached] [--recursive] [--] [<path>...]
or: $dashless [--quiet] init [--] [<path>...]
or: $dashless [--quiet] deinit [-f|--force] (--all| [--] <path>...)
- or: $dashless [--quiet] update [--init] [--remote] [-N|--no-fetch] [-f|--force] [--checkout|--merge|--rebase] [--reference <repository>] [--recursive] [--] [<path>...]
+ or: $dashless [--quiet] update [--init] [--remote] [-N|--no-fetch] [-f|--force] [--checkout|--merge|--rebase] [--[no-]recommend-shallow] [--reference <repository>] [--recursive] [--] [<path>...]
or: $dashless [--quiet] summary [--cached|--files] [--summary-limit <n>] [commit] [--] [<path>...]
or: $dashless [--quiet] foreach [--recursive] <command>
or: $dashless [--quiet] sync [--recursive] [--] [<path>...]"
OPTIONS_SPEC=
SUBDIRECTORY_OK=Yes
. git-sh-setup
-. git-sh-i18n
. git-parse-remote
require_work_tree
wt_prefix=$(git rev-parse --show-prefix)
{
if test "$1" = "#unmatched"
then
- exit 1
+ exit ${2:-1}
fi
}
then
if test -z "$force"
then
- echo >&2 "$(eval_gettext "A git directory for '\$sm_name' is found locally with remote(s):")"
+ eval_gettextln >&2 "A git directory for '\$sm_name' is found locally with remote(s):"
GIT_DIR=".git/modules/$sm_name" GIT_WORK_TREE=. git remote -v | grep '(fetch)' | sed -e s,^," ", -e s,' (fetch)',, >&2
- echo >&2 "$(eval_gettext "If you want to reuse this local git directory instead of cloning again from")"
- echo >&2 " $realrepo"
- echo >&2 "$(eval_gettext "use the '--force' option. If the local git directory is not the correct repo")"
- die "$(eval_gettext "or you are unsure what this means choose another name with the '--name' option.")"
+ die "$(eval_gettextln "\
+If you want to reuse this local git directory instead of cloning again from
+ \$realrepo
+use the '--force' option. If the local git directory is not the correct repo
+or you are unsure what this means choose another name with the '--name' option.")"
else
- echo "$(eval_gettext "Reactivating local git directory for submodule '\$sm_name'.")"
+ eval_gettextln "Reactivating local git directory for submodule '\$sm_name'."
fi
fi
git submodule--helper clone ${GIT_QUIET:+--quiet} --prefix "$wt_prefix" --path "$sm_path" --name "$sm_name" --url "$realrepo" ${reference:+"$reference"} ${depth:+"$depth"} || exit
{
git submodule--helper list --prefix "$wt_prefix" ||
- echo "#unmatched"
+ echo "#unmatched" $?
} |
while read mode sha1 stage sm_path
do
- die_if_unmatched "$mode"
+ die_if_unmatched "$mode" "$sha1"
if test -e "$sm_path"/.git
then
displaypath=$(git submodule--helper relative-path "$prefix$sm_path" "$wt_prefix")
#
# Unregister submodules from .git/config and remove their work tree
#
-# $@ = requested paths (use '.' to deinit all submodules)
-#
cmd_deinit()
{
# parse $args after "submodule ... deinit".
{
git submodule--helper list --prefix "$wt_prefix" "$@" ||
- echo "#unmatched"
+ echo "#unmatched" $?
} |
while read mode sha1 stage sm_path
do
- die_if_unmatched "$mode"
+ die_if_unmatched "$mode" "$sha1"
name=$(git submodule--helper name "$sm_path") || exit
displaypath=$(git submodule--helper relative-path "$sm_path" "$wt_prefix")
# Protect submodules containing a .git directory
if test -d "$sm_path/.git"
then
- echo >&2 "$(eval_gettext "Submodule work tree '\$displaypath' contains a .git directory")"
- die "$(eval_gettext "(use 'rm -rf' if you really want to remove it including all of its history)")"
+ die "$(eval_gettext "\
+Submodule work tree '\$displaypath' contains a .git directory
+(use 'rm -rf' if you really want to remove it including all of its history)")"
fi
if test -z "$force"
'')
git fetch ;;
*)
- git fetch $(get_default_remote) "$2" ;;
+ shift
+ git fetch $(get_default_remote) "$@" ;;
esac
)
--checkout)
update="checkout"
;;
+ --recommend-shallow)
+ recommend_shallow="--recommend-shallow"
+ ;;
+ --no-recommend-shallow)
+ recommend_shallow="--no-recommend-shallow"
+ ;;
--depth)
case "$2" in '') usage ;; esac
depth="--depth=$2"
${update:+--update "$update"} \
${reference:+--reference "$reference"} \
${depth:+--depth "$depth"} \
+ ${recommend_shallow:+"$recommend_shallow"} \
${jobs:+$jobs} \
- "$@" || echo "#unmatched"
+ "$@" || echo "#unmatched" $?
} | {
err=
while read mode sha1 stage just_cloned sm_path
do
- die_if_unmatched "$mode"
+ die_if_unmatched "$mode" "$sha1"
name=$(git submodule--helper name "$sm_path") || exit
url=$(git config submodule."$name".url)
- branch=$(get_submodule_config "$name" branch master)
if ! test -z "$update"
then
update_module=$update
if test -n "$remote"
then
+ branch=$(git submodule--helper remote-branch "$sm_path")
if test -z "$nofetch"
then
# Fetch remote before determining tracking $sha1
- (sanitize_submodule_env; cd "$sm_path" && git-fetch) ||
+ fetch_in_submodule "$sm_path" $depth ||
die "$(eval_gettext "Unable to fetch in submodule path '\$sm_path'")"
fi
remote_name=$(sanitize_submodule_env; cd "$sm_path" && get_default_remote)
sha1=$(sanitize_submodule_env; cd "$sm_path" &&
git rev-parse --verify "${remote_name}/${branch}") ||
- die "$(eval_gettext "Unable to find current ${remote_name}/${branch} revision in submodule path '\$sm_path'")"
+ die "$(eval_gettext "Unable to find current \${remote_name}/\${branch} revision in submodule path '\$sm_path'")"
fi
if test "$subsha1" != "$sha1" || test -n "$force"
# Run fetch only if $sha1 isn't present or it
# is not reachable from a ref.
is_tip_reachable "$sm_path" "$sha1" ||
- fetch_in_submodule "$sm_path" ||
+ fetch_in_submodule "$sm_path" $depth ||
die "$(eval_gettext "Unable to fetch in submodule path '\$displaypath'")"
# Now we tried the usual fetch, but $sha1 may
# not be reachable from any of the refs
is_tip_reachable "$sm_path" "$sha1" ||
- fetch_in_submodule "$sm_path" "$sha1" ||
- die "$(eval_gettext "Fetched in submodule path '\$displaypath', but it did not contain $sha1. Direct fetching of that commit failed.")"
+ fetch_in_submodule "$sm_path" $depth "$sha1" ||
+ die "$(eval_gettext "Fetched in submodule path '\$displaypath', but it did not contain \$sha1. Direct fetching of that commit failed.")"
fi
must_die_on_failure=
if test $res -gt 0
then
die_msg="$(eval_gettext "Failed to recurse into submodule path '\$displaypath'")"
- if test $res -eq 1
+ if test $res -ne 2
then
err="${err};$die_msg"
continue
{
git submodule--helper list --prefix "$wt_prefix" "$@" ||
- echo "#unmatched"
+ echo "#unmatched" $?
} |
while read mode sha1 stage sm_path
do
- die_if_unmatched "$mode"
+ die_if_unmatched "$mode" "$sha1"
name=$(git submodule--helper name "$sm_path") || exit
url=$(git config submodule."$name".url)
displaypath=$(git submodule--helper relative-path "$prefix$sm_path" "$wt_prefix")
cd_to_toplevel
{
git submodule--helper list --prefix "$wt_prefix" "$@" ||
- echo "#unmatched"
+ echo "#unmatched" $?
} |
while read mode sha1 stage sm_path
do
- die_if_unmatched "$mode"
+ die_if_unmatched "$mode" "$sha1"
name=$(git submodule--helper name "$sm_path")
url=$(git config -f .gitmodules --get submodule."$name".url)
die "failed to open $ENV{GIT_DIR}: $!\n";
$ENV{GIT_DIR} = $1 if <$fh> =~ /^gitdir: (.+)$/;
}
-} else {
+} elsif ($cmd) {
my ($git_dir, $cdup);
git_cmd_try {
$git_dir = command_oneline([qw/rev-parse --git-dir/]);
my %opts = %{$cmd{$cmd}->[2]} if (defined $cmd);
-read_git_config(\%opts);
+read_git_config(\%opts) if $ENV{GIT_DIR};
if ($cmd && ($cmd eq 'log' || $cmd eq 'blame')) {
Getopt::Long::Configure('pass_through');
}
sub cmd_clone {
my ($url, $path) = @_;
- if (!defined $path &&
+ if (!$url) {
+ die "SVN repository location required ",
+ "as a command-line argument\n";
+ } elsif (!defined $path &&
(defined $_trunk || @_branches || @_tags ||
defined $_stdlayout) &&
$url !~ m#^[a-z\+]+://#) {
return done_alias;
}
-/*
- * Many parts of Git have subprograms communicate via pipe, expect the
- * upstream of a pipe to die with SIGPIPE when the downstream of a
- * pipe does not need to read all that is written. Some third-party
- * programs that ignore or block SIGPIPE for their own reason forget
- * to restore SIGPIPE handling to the default before spawning Git and
- * break this carefully orchestrated machinery.
- *
- * Restore the way SIGPIPE is handled to default, which is what we
- * expect.
- */
-static void restore_sigpipe_to_default(void)
-{
- sigset_t unblock;
-
- sigemptyset(&unblock);
- sigaddset(&unblock, SIGPIPE);
- sigprocmask(SIG_UNBLOCK, &unblock, NULL);
- signal(SIGPIPE, SIG_DFL);
-}
-
-int main(int argc, char **av)
+int cmd_main(int argc, const char **argv)
{
- const char **argv = (const char **) av;
const char *cmd;
int done_help = 0;
- cmd = git_extract_argv0_path(argv[0]);
+ cmd = argv[0];
if (!cmd)
cmd = "git-help";
- /*
- * Always open file descriptors 0/1/2 to avoid clobbering files
- * in die(). It also avoids messing up when the pipes are dup'ed
- * onto stdin/stdout/stderr in the child processes we spawn.
- */
- sanitize_stdfds();
-
- restore_sigpipe_to_default();
-
- git_setup_gettext();
-
trace_command_performance(argv);
/*
-href => href(
action=>$dest_action,
hash=>$dest
- )}, $name);
+ )}, esc_html($name));
$markers .= " <span class=\"".esc_attr($class)."\" title=\"".esc_attr($ref)."\">" .
$link . "</span>";
#include "strbuf.h"
#include "gpg-interface.h"
#include "sigchain.h"
+#include "tempfile.h"
static char *configured_signing_key;
static const char *gpg_program = "gpg";
int sign_buffer(struct strbuf *buffer, struct strbuf *signature, const char *signing_key)
{
struct child_process gpg = CHILD_PROCESS_INIT;
- const char *args[4];
- ssize_t len;
+ int ret;
size_t i, j, bottom;
+ struct strbuf gpg_status = STRBUF_INIT;
- gpg.argv = args;
- gpg.in = -1;
- gpg.out = -1;
- args[0] = gpg_program;
- args[1] = "-bsau";
- args[2] = signing_key;
- args[3] = NULL;
+ argv_array_pushl(&gpg.args,
+ gpg_program,
+ "--status-fd=2",
+ "-bsau", signing_key,
+ NULL);
- if (start_command(&gpg))
- return error(_("could not run gpg."));
+ bottom = signature->len;
/*
* When the username signingkey is bad, program could be terminated
* because gpg exits without reading and then write gets SIGPIPE.
*/
sigchain_push(SIGPIPE, SIG_IGN);
-
- if (write_in_full(gpg.in, buffer->buf, buffer->len) != buffer->len) {
- close(gpg.in);
- close(gpg.out);
- finish_command(&gpg);
- return error(_("gpg did not accept the data"));
- }
- close(gpg.in);
-
- bottom = signature->len;
- len = strbuf_read(signature, gpg.out, 1024);
- close(gpg.out);
-
+ ret = pipe_command(&gpg, buffer->buf, buffer->len,
+ signature, 1024, &gpg_status, 0);
sigchain_pop(SIGPIPE);
- if (finish_command(&gpg) || !len || len < 0)
+ ret |= !strstr(gpg_status.buf, "\n[GNUPG:] SIG_CREATED ");
+ strbuf_release(&gpg_status);
+ if (ret)
return error(_("gpg failed to sign the data"));
/* Strip CR from the line endings, in case we are on Windows. */
struct strbuf *gpg_output, struct strbuf *gpg_status)
{
struct child_process gpg = CHILD_PROCESS_INIT;
- const char *args_gpg[] = {NULL, "--status-fd=1", "--verify", "FILE", "-", NULL};
- char path[PATH_MAX];
+ static struct tempfile temp;
int fd, ret;
struct strbuf buf = STRBUF_INIT;
- struct strbuf *pbuf = &buf;
- args_gpg[0] = gpg_program;
- fd = git_mkstemp(path, PATH_MAX, ".git_vtag_tmpXXXXXX");
+ fd = mks_tempfile_t(&temp, ".git_vtag_tmpXXXXXX");
if (fd < 0)
- return error_errno(_("could not create temporary file '%s'"), path);
- if (write_in_full(fd, signature, signature_size) < 0)
- return error_errno(_("failed writing detached signature to '%s'"), path);
+ return error_errno(_("could not create temporary file"));
+ if (write_in_full(fd, signature, signature_size) < 0) {
+ error_errno(_("failed writing detached signature to '%s'"),
+ temp.filename.buf);
+ delete_tempfile(&temp);
+ return -1;
+ }
close(fd);
- gpg.argv = args_gpg;
- gpg.in = -1;
- gpg.out = -1;
- if (gpg_output)
- gpg.err = -1;
- args_gpg[3] = path;
- if (start_command(&gpg)) {
- unlink(path);
- return error(_("could not run gpg."));
- }
+ argv_array_pushl(&gpg.args,
+ gpg_program,
+ "--status-fd=1",
+ "--keyid-format=long",
+ "--verify", temp.filename.buf, "-",
+ NULL);
- sigchain_push(SIGPIPE, SIG_IGN);
- write_in_full(gpg.in, payload, payload_size);
- close(gpg.in);
+ if (!gpg_status)
+ gpg_status = &buf;
- if (gpg_output) {
- strbuf_read(gpg_output, gpg.err, 0);
- close(gpg.err);
- }
- if (gpg_status)
- pbuf = gpg_status;
- strbuf_read(pbuf, gpg.out, 0);
- close(gpg.out);
-
- ret = finish_command(&gpg);
+ sigchain_push(SIGPIPE, SIG_IGN);
+ ret = pipe_command(&gpg, payload, payload_size,
+ gpg_status, 0, gpg_output, 0);
sigchain_pop(SIGPIPE);
- unlink_or_warn(path);
+ delete_tempfile(&temp);
- ret |= !strstr(pbuf->buf, "\n[GNUPG:] GOODSIG ");
+ ret |= !strstr(gpg_status->buf, "\n[GNUPG:] GOODSIG ");
strbuf_release(&buf); /* no matter it was used or not */
return ret;
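The sign/verify rewrites above replace the hand-rolled start_command()/write_in_full()/strbuf_read()/finish_command() sequence with a single pipe_command() call that feeds the payload to the child and captures its stdout and stderr. Below is a minimal sketch of that calling pattern, assuming it sits in a file that already includes git's headers; demo_pipe_command() and the choice of `git rev-parse --git-dir` as the child are illustrative only and not part of the patch.

#include "cache.h"
#include "run-command.h"
#include "argv-array.h"

/* Illustrative only: run a child and collect stdout/stderr via pipe_command(). */
static int demo_pipe_command(void)
{
	struct child_process child = CHILD_PROCESS_INIT;
	struct strbuf out = STRBUF_INIT, err = STRBUF_INIT;
	int ret;

	argv_array_pushl(&child.args, "git", "rev-parse", "--git-dir", NULL);
	ret = pipe_command(&child, NULL, 0, &out, 0, &err, 0);
	strbuf_release(&out);
	strbuf_release(&err);
	return ret;
}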
static void graph_padding_line(struct git_graph *graph, struct strbuf *sb);
/*
- * Print a strbuf to stdout. If the graph is non-NULL, all lines but the
- * first will be prefixed with the graph output.
+ * Print a strbuf. If the graph is non-NULL, all lines but the first will be
+ * prefixed with the graph output.
*
* If the strbuf ends with a newline, the output will end after this
* newline. A new graph line will not be printed after the final newline.
graph_pad_horizontally(graph, sb, graph->num_new_columns * 2);
}
+
+int graph_width(struct git_graph *graph)
+{
+ return graph->width;
+}
+
+
static void graph_output_skip_line(struct git_graph *graph, struct strbuf *sb)
{
/*
while (!shown_commit_line && !graph_is_commit_finished(graph)) {
shown_commit_line = graph_next_line(graph, &msgbuf);
- fwrite(msgbuf.buf, sizeof(char), msgbuf.len, stdout);
+ fwrite(msgbuf.buf, sizeof(char), msgbuf.len,
+ graph->revs->diffopt.file);
if (!shown_commit_line)
- putchar('\n');
+ putc('\n', graph->revs->diffopt.file);
strbuf_setlen(&msgbuf, 0);
}
return;
graph_next_line(graph, &msgbuf);
- fwrite(msgbuf.buf, sizeof(char), msgbuf.len, stdout);
+ fwrite(msgbuf.buf, sizeof(char), msgbuf.len, graph->revs->diffopt.file);
strbuf_release(&msgbuf);
}
return;
graph_padding_line(graph, &msgbuf);
- fwrite(msgbuf.buf, sizeof(char), msgbuf.len, stdout);
+ fwrite(msgbuf.buf, sizeof(char), msgbuf.len, graph->revs->diffopt.file);
strbuf_release(&msgbuf);
}
for (;;) {
graph_next_line(graph, &msgbuf);
- fwrite(msgbuf.buf, sizeof(char), msgbuf.len, stdout);
+ fwrite(msgbuf.buf, sizeof(char), msgbuf.len,
+ graph->revs->diffopt.file);
strbuf_setlen(&msgbuf, 0);
shown = 1;
if (!graph_is_commit_finished(graph))
- putchar('\n');
+ putc('\n', graph->revs->diffopt.file);
else
break;
}
char *p;
if (!graph) {
- fwrite(sb->buf, sizeof(char), sb->len, stdout);
+ fwrite(sb->buf, sizeof(char), sb->len,
+ graph->revs->diffopt.file);
return;
}
} else {
len = (sb->buf + sb->len) - p;
}
- fwrite(p, sizeof(char), len, stdout);
+ fwrite(p, sizeof(char), len, graph->revs->diffopt.file);
if (next_p && *next_p != '\0')
graph_show_oneline(graph);
p = next_p;
* CMIT_FMT_USERFORMAT are already missing a terminating
* newline. All of the other formats should have it.
*/
- fwrite(sb->buf, sizeof(char), sb->len, stdout);
+ fwrite(sb->buf, sizeof(char), sb->len,
+ graph->revs->diffopt.file);
return;
}
* new line.
*/
if (!newline_terminated)
- putchar('\n');
+ putc('\n', graph->revs->diffopt.file);
graph_show_remainder(graph);
* If sb ends with a newline, our output should too.
*/
if (newline_terminated)
- putchar('\n');
+ putc('\n', graph->revs->diffopt.file);
}
}
int graph_next_line(struct git_graph *graph, struct strbuf *sb);
+/*
+ * Return current width of the graph in on-screen characters.
+ */
+int graph_width(struct git_graph *graph);
+
/*
* graph_show_*: helper functions for printing to stdout
*/
#include "xdiff-interface.h"
#include "diff.h"
#include "diffcore.h"
+#include "commit.h"
+#include "quote.h"
static int grep_source_load(struct grep_source *gs);
static int grep_source_is_binary(struct grep_source *gs);
color_set(opt->color_sep, def->color_sep);
}
-void grep_commit_pattern_type(enum grep_pattern_type pattern_type, struct grep_opt *opt)
-{
- if (pattern_type != GREP_PATTERN_TYPE_UNSPECIFIED)
- grep_set_pattern_type_option(pattern_type, opt);
- else if (opt->pattern_type_option != GREP_PATTERN_TYPE_UNSPECIFIED)
- grep_set_pattern_type_option(opt->pattern_type_option, opt);
- else if (opt->extended_regexp_option)
- grep_set_pattern_type_option(GREP_PATTERN_TYPE_ERE, opt);
-}
-
-void grep_set_pattern_type_option(enum grep_pattern_type pattern_type, struct grep_opt *opt)
+static void grep_set_pattern_type_option(enum grep_pattern_type pattern_type, struct grep_opt *opt)
{
switch (pattern_type) {
case GREP_PATTERN_TYPE_UNSPECIFIED:
}
}
+void grep_commit_pattern_type(enum grep_pattern_type pattern_type, struct grep_opt *opt)
+{
+ if (pattern_type != GREP_PATTERN_TYPE_UNSPECIFIED)
+ grep_set_pattern_type_option(pattern_type, opt);
+ else if (opt->pattern_type_option != GREP_PATTERN_TYPE_UNSPECIFIED)
+ grep_set_pattern_type_option(opt->pattern_type_option, opt);
+ else if (opt->extended_regexp_option)
+ grep_set_pattern_type_option(GREP_PATTERN_TYPE_ERE, opt);
+}
+
static struct grep_pat *create_grep_pat(const char *pat, size_t patlen,
const char *origin, int no,
enum grep_pat_token t,
int erroffset;
int options = PCRE_MULTILINE;
- if (opt->ignore_case)
+ if (opt->ignore_case) {
+ if (has_non_ascii(p->pattern))
+ p->pcre_tables = pcre_maketables();
options |= PCRE_CASELESS;
+ }
+ if (is_utf8_locale() && has_non_ascii(p->pattern))
+ options |= PCRE_UTF8;
p->pcre_regexp = pcre_compile(p->pattern, options, &error, &erroffset,
- NULL);
+ p->pcre_tables);
if (!p->pcre_regexp)
compile_regexp_failed(p, error);
{
pcre_free(p->pcre_regexp);
pcre_free(p->pcre_extra_info);
+ pcre_free((void *)p->pcre_tables);
}
#else /* !USE_LIBPCRE */
static void compile_pcre_regexp(struct grep_pat *p, const struct grep_opt *opt)
return 1;
}
+static void compile_fixed_regexp(struct grep_pat *p, struct grep_opt *opt)
+{
+ struct strbuf sb = STRBUF_INIT;
+ int err;
+ int regflags;
+
+ basic_regex_quote_buf(&sb, p->pattern);
+ regflags = opt->regflags & ~REG_EXTENDED;
+ if (opt->ignore_case)
+ regflags |= REG_ICASE;
+ err = regcomp(&p->regexp, sb.buf, regflags);
+ if (opt->debug)
+ fprintf(stderr, "fixed %s\n", sb.buf);
+ strbuf_release(&sb);
+ if (err) {
+ char errbuf[1024];
+ regerror(err, &p->regexp, errbuf, sizeof(errbuf));
+ regfree(&p->regexp);
+ compile_regexp_failed(p, errbuf);
+ }
+}
+
static void compile_regexp(struct grep_pat *p, struct grep_opt *opt)
{
+ int icase, ascii_only;
int err;
p->word_regexp = opt->word_regexp;
p->ignore_case = opt->ignore_case;
+ icase = opt->regflags & REG_ICASE || p->ignore_case;
+ ascii_only = !has_non_ascii(p->pattern);
+ /*
+ * Even when -F (fixed) asks us to do a non-regexp search, we
+ * may not be able to correctly case-fold when -i
+ * (ignore-case) is asked (in which case, we'll synthesize a
+ * regexp that matches the pattern's regexp special characters
+ * literally, while ignoring case differences). On
+ * the other hand, even without -F, if the pattern does not
+ * have any regexp special characters and there is no need for
+ * case-folding search, we can internally turn it into a
+ * simple string match using kws. p->fixed tells us if we
+ * want to use kws.
+ */
if (opt->fixed || is_fixed(p->pattern, p->patternlen))
- p->fixed = 1;
+ p->fixed = !icase || ascii_only;
else
p->fixed = 0;
if (p->fixed) {
- if (opt->regflags & REG_ICASE || p->ignore_case)
- p->kws = kwsalloc(tolower_trans_tbl);
- else
- p->kws = kwsalloc(NULL);
+ p->kws = kwsalloc(icase ? tolower_trans_tbl : NULL);
kwsincr(p->kws, p->pattern, p->patternlen);
kwsprep(p->kws);
return;
+ } else if (opt->fixed) {
+ /*
+ * We come here when the pattern has the non-ascii
+ * characters we cannot case-fold, and asked to
+ * ignore-case.
+ */
+ compile_fixed_regexp(p, opt);
+ return;
}
if (opt->pcre) {
for (p = opt->header_list; p; p = p->next) {
if (p->token != GREP_PATTERN_HEAD)
- die("bug: a non-header pattern in grep header list.");
+ die("BUG: a non-header pattern in grep header list.");
if (p->field < GREP_HEADER_FIELD_MIN ||
GREP_HEADER_FIELD_MAX <= p->field)
- die("bug: unknown header field %d", p->field);
+ die("BUG: unknown header field %d", p->field);
compile_regexp(p, opt);
}
h = compile_pattern_atom(&pp);
if (!h || pp != p->next)
- die("bug: malformed header expr");
+ die("BUG: malformed header expr");
if (!header_group[p->field]) {
header_group[p->field] = h;
continue;
return 0;
}
+static int is_empty_line(const char *bol, const char *eol)
+{
+ while (bol < eol && isspace(*bol))
+ bol++;
+ return bol == eol;
+}
+
static int grep_source_1(struct grep_opt *opt, struct grep_source *gs, int collect_hits)
{
char *bol;
+ char *peek_bol = NULL;
unsigned long left;
unsigned lno = 1;
unsigned last_hit = 0;
case GREP_BINARY_TEXT:
break;
default:
- die("bug: unknown binary handling mode");
+ die("BUG: unknown binary handling mode");
}
}
show_function = 1;
goto next_line;
}
- if (show_function && match_funcname(opt, gs, bol, eol))
- show_function = 0;
+ if (show_function && (!peek_bol || peek_bol < bol)) {
+ unsigned long peek_left = left;
+ char *peek_eol = eol;
+
+ /*
+ * Trailing empty lines are not interesting.
+ * Peek past them to see if they belong to the
+ * body of the current function.
+ */
+ peek_bol = bol;
+ while (is_empty_line(peek_bol, peek_eol)) {
+ peek_bol = peek_eol + 1;
+ peek_eol = end_of_line(peek_bol, &peek_left);
+ }
+
+ if (match_funcname(opt, gs, peek_bol, peek_eol))
+ show_function = 0;
+ }
if (show_function ||
(last_hit && lno <= last_hit + opt->post_context)) {
/* If the last hit is within the post context,
regex_t regexp;
pcre *pcre_regexp;
pcre_extra *pcre_extra_info;
+ const unsigned char *pcre_tables;
kwset_t kws;
unsigned fixed:1;
unsigned ignore_case:1;
extern void init_grep_defaults(void);
extern int grep_config(const char *var, const char *value, void *);
extern void grep_init(struct grep_opt *, const char *prefix);
-void grep_set_pattern_type_option(enum grep_pattern_type, struct grep_opt *opt);
void grep_commit_pattern_type(enum grep_pattern_type, struct grep_opt *opt);
extern void append_grep_pat(struct grep_opt *opt, const char *pat, size_t patlen, const char *origin, int no, enum grep_pat_token t);
* with external projects that rely on the output of "git version".
*/
printf("git version %s\n", git_version_string);
+ while (*++argv) {
+ if (!strcmp(*argv, "--build-options")) {
+ printf("sizeof-long: %d\n", (int)sizeof(long));
+ /* NEEDSWORK: also save and output GIT-BUILD_OPTIONS? */
+ }
+ }
return 0;
}
return buffer;
}
+char *oid_to_hex_r(char *buffer, const struct object_id *oid)
+{
+ return sha1_to_hex_r(buffer, oid->hash);
+}
+
char *sha1_to_hex(const unsigned char *sha1)
{
static int bufno;
write_or_die(fd, buffer, n);
}
-static void http_status(unsigned code, const char *msg)
+static void http_status(struct strbuf *hdr, unsigned code, const char *msg)
{
- format_write(1, "Status: %u %s\r\n", code, msg);
+ strbuf_addf(hdr, "Status: %u %s\r\n", code, msg);
}
-static void hdr_str(const char *name, const char *value)
+static void hdr_str(struct strbuf *hdr, const char *name, const char *value)
{
- format_write(1, "%s: %s\r\n", name, value);
+ strbuf_addf(hdr, "%s: %s\r\n", name, value);
}
-static void hdr_int(const char *name, uintmax_t value)
+static void hdr_int(struct strbuf *hdr, const char *name, uintmax_t value)
{
- format_write(1, "%s: %" PRIuMAX "\r\n", name, value);
+ strbuf_addf(hdr, "%s: %" PRIuMAX "\r\n", name, value);
}
-static void hdr_date(const char *name, unsigned long when)
+static void hdr_date(struct strbuf *hdr, const char *name, unsigned long when)
{
const char *value = show_date(when, 0, DATE_MODE(RFC2822));
- hdr_str(name, value);
+ hdr_str(hdr, name, value);
}
-static void hdr_nocache(void)
+static void hdr_nocache(struct strbuf *hdr)
{
- hdr_str("Expires", "Fri, 01 Jan 1980 00:00:00 GMT");
- hdr_str("Pragma", "no-cache");
- hdr_str("Cache-Control", "no-cache, max-age=0, must-revalidate");
+ hdr_str(hdr, "Expires", "Fri, 01 Jan 1980 00:00:00 GMT");
+ hdr_str(hdr, "Pragma", "no-cache");
+ hdr_str(hdr, "Cache-Control", "no-cache, max-age=0, must-revalidate");
}
-static void hdr_cache_forever(void)
+static void hdr_cache_forever(struct strbuf *hdr)
{
unsigned long now = time(NULL);
- hdr_date("Date", now);
- hdr_date("Expires", now + 31536000);
- hdr_str("Cache-Control", "public, max-age=31536000");
+ hdr_date(hdr, "Date", now);
+ hdr_date(hdr, "Expires", now + 31536000);
+ hdr_str(hdr, "Cache-Control", "public, max-age=31536000");
}
-static void end_headers(void)
+static void end_headers(struct strbuf *hdr)
{
- write_or_die(1, "\r\n", 2);
+ strbuf_add(hdr, "\r\n", 2);
+ write_or_die(1, hdr->buf, hdr->len);
+ strbuf_release(hdr);
}
-__attribute__((format (printf, 1, 2)))
-static NORETURN void not_found(const char *err, ...)
+__attribute__((format (printf, 2, 3)))
+static NORETURN void not_found(struct strbuf *hdr, const char *err, ...)
{
va_list params;
- http_status(404, "Not Found");
- hdr_nocache();
- end_headers();
+ http_status(hdr, 404, "Not Found");
+ hdr_nocache(hdr);
+ end_headers(hdr);
va_start(params, err);
if (err && *err)
exit(0);
}
-__attribute__((format (printf, 1, 2)))
-static NORETURN void forbidden(const char *err, ...)
+__attribute__((format (printf, 2, 3)))
+static NORETURN void forbidden(struct strbuf *hdr, const char *err, ...)
{
va_list params;
- http_status(403, "Forbidden");
- hdr_nocache();
- end_headers();
+ http_status(hdr, 403, "Forbidden");
+ hdr_nocache(hdr);
+ end_headers(hdr);
va_start(params, err);
if (err && *err)
exit(0);
}
-static void select_getanyfile(void)
+static void select_getanyfile(struct strbuf *hdr)
{
if (!getanyfile)
- forbidden("Unsupported service: getanyfile");
+ forbidden(hdr, "Unsupported service: getanyfile");
}
-static void send_strbuf(const char *type, struct strbuf *buf)
+static void send_strbuf(struct strbuf *hdr,
+ const char *type, struct strbuf *buf)
{
- hdr_int(content_length, buf->len);
- hdr_str(content_type, type);
- end_headers();
+ hdr_int(hdr, content_length, buf->len);
+ hdr_str(hdr, content_type, type);
+ end_headers(hdr);
write_or_die(1, buf->buf, buf->len);
}
-static void send_local_file(const char *the_type, const char *name)
+static void send_local_file(struct strbuf *hdr, const char *the_type,
+ const char *name)
{
char *p = git_pathdup("%s", name);
size_t buf_alloc = 8192;
fd = open(p, O_RDONLY);
if (fd < 0)
- not_found("Cannot open '%s': %s", p, strerror(errno));
+ not_found(hdr, "Cannot open '%s': %s", p, strerror(errno));
if (fstat(fd, &sb) < 0)
die_errno("Cannot stat '%s'", p);
- hdr_int(content_length, sb.st_size);
- hdr_str(content_type, the_type);
- hdr_date(last_modified, sb.st_mtime);
- end_headers();
+ hdr_int(hdr, content_length, sb.st_size);
+ hdr_str(hdr, content_type, the_type);
+ hdr_date(hdr, last_modified, sb.st_mtime);
+ end_headers(hdr);
for (;;) {
ssize_t n = xread(fd, buf, buf_alloc);
free(p);
}
-static void get_text_file(char *name)
+static void get_text_file(struct strbuf *hdr, char *name)
{
- select_getanyfile();
- hdr_nocache();
- send_local_file("text/plain", name);
+ select_getanyfile(hdr);
+ hdr_nocache(hdr);
+ send_local_file(hdr, "text/plain", name);
}
-static void get_loose_object(char *name)
+static void get_loose_object(struct strbuf *hdr, char *name)
{
- select_getanyfile();
- hdr_cache_forever();
- send_local_file("application/x-git-loose-object", name);
+ select_getanyfile(hdr);
+ hdr_cache_forever(hdr);
+ send_local_file(hdr, "application/x-git-loose-object", name);
}
-static void get_pack_file(char *name)
+static void get_pack_file(struct strbuf *hdr, char *name)
{
- select_getanyfile();
- hdr_cache_forever();
- send_local_file("application/x-git-packed-objects", name);
+ select_getanyfile(hdr);
+ hdr_cache_forever(hdr);
+ send_local_file(hdr, "application/x-git-packed-objects", name);
}
-static void get_idx_file(char *name)
+static void get_idx_file(struct strbuf *hdr, char *name)
{
- select_getanyfile();
- hdr_cache_forever();
- send_local_file("application/x-git-packed-objects-toc", name);
+ select_getanyfile(hdr);
+ hdr_cache_forever(hdr);
+ send_local_file(hdr, "application/x-git-packed-objects-toc", name);
}
static void http_config(void)
strbuf_release(&var);
}
-static struct rpc_service *select_service(const char *name)
+static struct rpc_service *select_service(struct strbuf *hdr, const char *name)
{
const char *svc_name;
struct rpc_service *svc = NULL;
int i;
if (!skip_prefix(name, "git-", &svc_name))
- forbidden("Unsupported service: '%s'", name);
+ forbidden(hdr, "Unsupported service: '%s'", name);
for (i = 0; i < ARRAY_SIZE(rpc_service); i++) {
struct rpc_service *s = &rpc_service[i];
}
if (!svc)
- forbidden("Unsupported service: '%s'", name);
+ forbidden(hdr, "Unsupported service: '%s'", name);
if (svc->enabled < 0) {
const char *user = getenv("REMOTE_USER");
svc->enabled = (user && *user) ? 1 : 0;
}
if (!svc->enabled)
- forbidden("Service not enabled: '%s'", svc->name);
+ forbidden(hdr, "Service not enabled: '%s'", svc->name);
return svc;
}
return 0;
}
-static void get_info_refs(char *arg)
+static void get_info_refs(struct strbuf *hdr, char *arg)
{
const char *service_name = get_parameter("service");
struct strbuf buf = STRBUF_INIT;
- hdr_nocache();
+ hdr_nocache(hdr);
if (service_name) {
const char *argv[] = {NULL /* service name */,
"--stateless-rpc", "--advertise-refs",
".", NULL};
- struct rpc_service *svc = select_service(service_name);
+ struct rpc_service *svc = select_service(hdr, service_name);
strbuf_addf(&buf, "application/x-git-%s-advertisement",
svc->name);
- hdr_str(content_type, buf.buf);
- end_headers();
+ hdr_str(hdr, content_type, buf.buf);
+ end_headers(hdr);
packet_write(1, "# service=git-%s\n", svc->name);
packet_flush(1);
run_service(argv, 0);
} else {
- select_getanyfile();
+ select_getanyfile(hdr);
for_each_namespaced_ref(show_text_ref, &buf);
- send_strbuf("text/plain", &buf);
+ send_strbuf(hdr, "text/plain", &buf);
}
strbuf_release(&buf);
}
return 0;
}
-static void get_head(char *arg)
+static void get_head(struct strbuf *hdr, char *arg)
{
struct strbuf buf = STRBUF_INIT;
- select_getanyfile();
+ select_getanyfile(hdr);
head_ref_namespaced(show_head_ref, &buf);
- send_strbuf("text/plain", &buf);
+ send_strbuf(hdr, "text/plain", &buf);
strbuf_release(&buf);
}
-static void get_info_packs(char *arg)
+static void get_info_packs(struct strbuf *hdr, char *arg)
{
size_t objdirlen = strlen(get_object_directory());
struct strbuf buf = STRBUF_INIT;
struct packed_git *p;
size_t cnt = 0;
- select_getanyfile();
+ select_getanyfile(hdr);
prepare_packed_git();
for (p = packed_git; p; p = p->next) {
if (p->pack_local)
}
strbuf_addch(&buf, '\n');
- hdr_nocache();
- send_strbuf("text/plain; charset=utf-8", &buf);
+ hdr_nocache(hdr);
+ send_strbuf(hdr, "text/plain; charset=utf-8", &buf);
strbuf_release(&buf);
}
-static void check_content_type(const char *accepted_type)
+static void check_content_type(struct strbuf *hdr, const char *accepted_type)
{
const char *actual_type = getenv("CONTENT_TYPE");
actual_type = "";
if (strcmp(actual_type, accepted_type)) {
- http_status(415, "Unsupported Media Type");
- hdr_nocache();
- end_headers();
+ http_status(hdr, 415, "Unsupported Media Type");
+ hdr_nocache(hdr);
+ end_headers(hdr);
format_write(1,
"Expected POST with Content-Type '%s',"
" but received '%s' instead.\n",
}
}
-static void service_rpc(char *service_name)
+static void service_rpc(struct strbuf *hdr, char *service_name)
{
const char *argv[] = {NULL, "--stateless-rpc", ".", NULL};
- struct rpc_service *svc = select_service(service_name);
+ struct rpc_service *svc = select_service(hdr, service_name);
struct strbuf buf = STRBUF_INIT;
strbuf_reset(&buf);
strbuf_addf(&buf, "application/x-git-%s-request", svc->name);
- check_content_type(buf.buf);
+ check_content_type(hdr, buf.buf);
- hdr_nocache();
+ hdr_nocache(hdr);
strbuf_reset(&buf);
strbuf_addf(&buf, "application/x-git-%s-result", svc->name);
- hdr_str(content_type, buf.buf);
+ hdr_str(hdr, content_type, buf.buf);
- end_headers();
+ end_headers(hdr);
argv[0] = svc->name;
run_service(argv, svc->buffer_input);
static NORETURN void die_webcgi(const char *err, va_list params)
{
if (dead <= 1) {
+ struct strbuf hdr = STRBUF_INIT;
+
vreportf("fatal: ", err, params);
- http_status(500, "Internal Server Error");
- hdr_nocache();
- end_headers();
+ http_status(&hdr, 500, "Internal Server Error");
+ hdr_nocache(&hdr);
+ end_headers(&hdr);
}
exit(0); /* we successfully reported a failure ;-) */
}
static struct service_cmd {
const char *method;
const char *pattern;
- void (*imp)(char *);
+ void (*imp)(struct strbuf *, char *);
} services[] = {
{"GET", "/HEAD$", get_head},
{"GET", "/info/refs$", get_info_refs},
{"POST", "/git-receive-pack$", service_rpc}
};
-int main(int argc, char **argv)
+static int bad_request(struct strbuf *hdr, const struct service_cmd *c)
+{
+ const char *proto = getenv("SERVER_PROTOCOL");
+
+ if (proto && !strcmp(proto, "HTTP/1.1")) {
+ http_status(hdr, 405, "Method Not Allowed");
+ hdr_str(hdr, "Allow",
+ !strcmp(c->method, "GET") ? "GET, HEAD" : c->method);
+ } else
+ http_status(hdr, 400, "Bad Request");
+ hdr_nocache(hdr);
+ end_headers(hdr);
+ return 0;
+}
+
+int cmd_main(int argc, const char **argv)
{
char *method = getenv("REQUEST_METHOD");
char *dir;
struct service_cmd *cmd = NULL;
char *cmd_arg = NULL;
int i;
+ struct strbuf hdr = STRBUF_INIT;
- git_setup_gettext();
-
- git_extract_argv0_path(argv[0]);
set_die_routine(die_webcgi);
set_die_is_recursing_routine(die_webcgi_recursing);
if (!regexec(&re, dir, 1, out, 0)) {
size_t n;
- if (strcmp(method, c->method)) {
- const char *proto = getenv("SERVER_PROTOCOL");
- if (proto && !strcmp(proto, "HTTP/1.1")) {
- http_status(405, "Method Not Allowed");
- hdr_str("Allow", !strcmp(c->method, "GET") ?
- "GET, HEAD" : c->method);
- } else
- http_status(400, "Bad Request");
- hdr_nocache();
- end_headers();
- return 0;
- }
+ if (strcmp(method, c->method))
+ return bad_request(&hdr, c);
cmd = c;
n = out[0].rm_eo - out[0].rm_so;
}
if (!cmd)
- not_found("Request not supported: '%s'", dir);
+ not_found(&hdr, "Request not supported: '%s'", dir);
setup_path();
if (!enter_repo(dir, 0))
- not_found("Not a git repository: '%s'", dir);
+ not_found(&hdr, "Not a git repository: '%s'", dir);
if (!getenv("GIT_HTTP_EXPORT_ALL") &&
access("git-daemon-export-ok", F_OK) )
- not_found("Repository not exported: '%s'", dir);
+ not_found(&hdr, "Repository not exported: '%s'", dir);
http_config();
max_request_buffer = git_env_ulong("GIT_HTTP_MAX_REQUEST_BUFFER",
max_request_buffer);
- cmd->imp(cmd_arg);
+ cmd->imp(&hdr, cmd_arg);
return 0;
}
static const char http_fetch_usage[] = "git http-fetch "
"[-c] [-t] [-a] [-v] [--recover] [-w ref] [--stdin] commit-id url";
-int main(int argc, const char **argv)
+int cmd_main(int argc, const char **argv)
{
struct walker *walker;
int commits_on_stdin = 0;
int get_verbosely = 0;
int get_recover = 0;
- git_setup_gettext();
-
- git_extract_argv0_path(argv[0]);
-
while (arg < argc && argv[arg][0] == '-') {
if (argv[arg][1] == 't') {
get_tree = 1;
ls.userData = userData;
ls.userFunc = userFunc;
- strbuf_addf(&out_buffer.buf, PROPFIND_ALL_REQUEST);
+ strbuf_addstr(&out_buffer.buf, PROPFIND_ALL_REQUEST);
dav_headers = curl_slist_append(dav_headers, "Depth: 1");
dav_headers = curl_slist_append(dav_headers, "Content-Type: text/xml");
#endif
}
-int main(int argc, char **argv)
+int cmd_main(int argc, const char **argv)
{
struct transfer_request *request;
struct transfer_request *next_request;
int nr_refspec = 0;
- char **refspec = NULL;
+ const char **refspec = NULL;
struct remote_lock *ref_lock = NULL;
struct remote_lock *info_ref_lock = NULL;
struct rev_info revs;
int new_refs;
struct ref *ref, *local_refs;
- git_setup_gettext();
-
- git_extract_argv0_path(argv[0]);
-
repo = xcalloc(1, sizeof(*repo));
argv++;
for (i = 1; i < argc; i++, argv++) {
- char *arg = *argv;
+ const char *arg = *argv;
if (*arg == '-') {
if (!strcmp(arg, "--all")) {
#include "commit.h"
#include "walker.h"
#include "http.h"
+#include "list.h"
struct alt_base {
char *base;
struct alt_base *repo;
enum object_request_state state;
struct http_object_request *req;
- struct object_request *next;
+ struct list_head node;
};
struct alternates_request {
struct alt_base *alt;
};
-static struct object_request *object_queue_head;
+static LIST_HEAD(object_queue_head);
static void fetch_alternates(struct walker *walker, const char *base);
static void release_object_request(struct object_request *obj_req)
{
- struct object_request *entry = object_queue_head;
-
if (obj_req->req !=NULL && obj_req->req->localfile != -1)
error("fd leakage in release: %d", obj_req->req->localfile);
- if (obj_req == object_queue_head) {
- object_queue_head = obj_req->next;
- } else {
- while (entry->next != NULL && entry->next != obj_req)
- entry = entry->next;
- if (entry->next == obj_req)
- entry->next = entry->next->next;
- }
+ list_del(&obj_req->node);
free(obj_req);
}
static int fill_active_slot(struct walker *walker)
{
struct object_request *obj_req;
+ struct list_head *pos, *tmp, *head = &object_queue_head;
- for (obj_req = object_queue_head; obj_req; obj_req = obj_req->next) {
+ list_for_each_safe(pos, tmp, head) {
+ obj_req = list_entry(pos, struct object_request, node);
if (obj_req->state == WAITING) {
if (has_sha1_file(obj_req->sha1))
obj_req->state = COMPLETE;
static void prefetch(struct walker *walker, unsigned char *sha1)
{
struct object_request *newreq;
- struct object_request *tail;
struct walker_data *data = walker->data;
newreq = xmalloc(sizeof(*newreq));
newreq->repo = data->alt;
newreq->state = WAITING;
newreq->req = NULL;
- newreq->next = NULL;
http_is_verbose = walker->get_verbosely;
-
- if (object_queue_head == NULL) {
- object_queue_head = newreq;
- } else {
- tail = object_queue_head;
- while (tail->next != NULL)
- tail = tail->next;
- tail->next = newreq;
- }
+ list_add_tail(&newreq->node, &object_queue_head);
#ifdef USE_CURL_MULTI
fill_active_slots();
release_object_request(obj_req);
}
-static int fetch_object(struct walker *walker, struct alt_base *repo, unsigned char *sha1)
+static int fetch_object(struct walker *walker, unsigned char *sha1)
{
char *hex = sha1_to_hex(sha1);
int ret = 0;
- struct object_request *obj_req = object_queue_head;
+ struct object_request *obj_req = NULL;
struct http_object_request *req;
+ struct list_head *pos, *head = &object_queue_head;
- while (obj_req != NULL && hashcmp(obj_req->sha1, sha1))
- obj_req = obj_req->next;
+ list_for_each(pos, head) {
+ obj_req = list_entry(pos, struct object_request, node);
+ if (!hashcmp(obj_req->sha1, sha1))
+ break;
+ }
if (obj_req == NULL)
return error("Couldn't find request for %s in the queue", hex);
req->localfile = -1;
}
+ /*
+ * We turned off CURLOPT_FAILONERROR to avoid losing a persistent
+ * connection, so a 404 arrives here with CURLE_OK; turn it back
+ * into an error.
+ */
+ if (req->http_code == 404 && req->curl_result == CURLE_OK &&
+ (starts_with(req->url, "http://") ||
+ starts_with(req->url, "https://")))
+ req->curl_result = CURLE_HTTP_RETURNED_ERROR;
+
if (obj_req->state == ABORTED) {
ret = error("Request for %s aborted", hex);
} else if (req->curl_result != CURLE_OK &&
struct walker_data *data = walker->data;
struct alt_base *altbase = data->alt;
- if (!fetch_object(walker, altbase, sha1))
+ if (!fetch_object(walker, sha1))
return 0;
while (altbase) {
if (!http_fetch_pack(walker, altbase, sha1))
#include "gettext.h"
#include "transport.h"
+static struct trace_key trace_curl = TRACE_KEY_INIT(CURL);
#if LIBCURL_VERSION_NUM >= 0x070a08
long int git_curl_ipresolve = CURL_IPRESOLVE_WHATEVER;
#else
}
#endif
+static void redact_sensitive_header(struct strbuf *header)
+{
+ const char *sensitive_header;
+
+ if (skip_prefix(header->buf, "Authorization:", &sensitive_header) ||
+ skip_prefix(header->buf, "Proxy-Authorization:", &sensitive_header)) {
+ /* The first token is the type, which is OK to log */
+ while (isspace(*sensitive_header))
+ sensitive_header++;
+ while (*sensitive_header && !isspace(*sensitive_header))
+ sensitive_header++;
+ /* Everything else is opaque and possibly sensitive */
+ strbuf_setlen(header, sensitive_header - header->buf);
+ strbuf_addstr(header, " <redacted>");
+ }
+}
+
+static void curl_dump_header(const char *text, unsigned char *ptr, size_t size, int hide_sensitive_header)
+{
+ struct strbuf out = STRBUF_INIT;
+ struct strbuf **headers, **header;
+
+ strbuf_addf(&out, "%s, %10.10ld bytes (0x%8.8lx)\n",
+ text, (long)size, (long)size);
+ trace_strbuf(&trace_curl, &out);
+ strbuf_reset(&out);
+ strbuf_add(&out, ptr, size);
+ headers = strbuf_split_max(&out, '\n', 0);
+
+ for (header = headers; *header; header++) {
+ if (hide_sensitive_header)
+ redact_sensitive_header(*header);
+ strbuf_insert((*header), 0, text, strlen(text));
+ strbuf_insert((*header), strlen(text), ": ", 2);
+ strbuf_rtrim((*header));
+ strbuf_addch((*header), '\n');
+ trace_strbuf(&trace_curl, (*header));
+ }
+ strbuf_list_free(headers);
+ strbuf_release(&out);
+}
+
+static void curl_dump_data(const char *text, unsigned char *ptr, size_t size)
+{
+ size_t i;
+ struct strbuf out = STRBUF_INIT;
+ unsigned int width = 60;
+
+ strbuf_addf(&out, "%s, %10.10ld bytes (0x%8.8lx)\n",
+ text, (long)size, (long)size);
+ trace_strbuf(&trace_curl, &out);
+
+ for (i = 0; i < size; i += width) {
+ size_t w;
+
+ strbuf_reset(&out);
+ strbuf_addf(&out, "%s: ", text);
+ for (w = 0; (w < width) && (i + w < size); w++) {
+ unsigned char ch = ptr[i + w];
+
+ strbuf_addch(&out,
+ (ch >= 0x20) && (ch < 0x80)
+ ? ch : '.');
+ }
+ strbuf_addch(&out, '\n');
+ trace_strbuf(&trace_curl, &out);
+ }
+ strbuf_release(&out);
+}
+
+static int curl_trace(CURL *handle, curl_infotype type, char *data, size_t size, void *userp)
+{
+ const char *text;
+ enum { NO_FILTER = 0, DO_FILTER = 1 };
+
+ switch (type) {
+ case CURLINFO_TEXT:
+ trace_printf_key(&trace_curl, "== Info: %s", data);
+ default: /* we ignore unknown types by default */
+ return 0;
+
+ case CURLINFO_HEADER_OUT:
+ text = "=> Send header";
+ curl_dump_header(text, (unsigned char *)data, size, DO_FILTER);
+ break;
+ case CURLINFO_DATA_OUT:
+ text = "=> Send data";
+ curl_dump_data(text, (unsigned char *)data, size);
+ break;
+ case CURLINFO_SSL_DATA_OUT:
+ text = "=> Send SSL data";
+ curl_dump_data(text, (unsigned char *)data, size);
+ break;
+ case CURLINFO_HEADER_IN:
+ text = "<= Recv header";
+ curl_dump_header(text, (unsigned char *)data, size, NO_FILTER);
+ break;
+ case CURLINFO_DATA_IN:
+ text = "<= Recv data";
+ curl_dump_data(text, (unsigned char *)data, size);
+ break;
+ case CURLINFO_SSL_DATA_IN:
+ text = "<= Recv SSL data";
+ curl_dump_data(text, (unsigned char *)data, size);
+ break;
+ }
+ return 0;
+}
+
+void setup_curl_trace(CURL *handle)
+{
+ if (!trace_want(&trace_curl))
+ return;
+ curl_easy_setopt(handle, CURLOPT_VERBOSE, 1L);
+ curl_easy_setopt(handle, CURLOPT_DEBUGFUNCTION, curl_trace);
+ curl_easy_setopt(handle, CURLOPT_DEBUGDATA, NULL);
+}
+
+
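setup_curl_trace() above only takes effect when the GIT_TRACE_CURL trace key is enabled in the environment; libcurl then calls back into curl_trace(), and outgoing Authorization/Proxy-Authorization headers are scrubbed by redact_sensitive_header() before they reach the trace output. A small sketch of that scrubbing follows, assuming it lives inside http.c (the helper is static); demo_redaction() is hypothetical.

/* Hypothetical, for illustration; must be compiled inside http.c. */
static void demo_redaction(void)
{
	struct strbuf h = STRBUF_INIT;

	strbuf_addstr(&h, "Authorization: Basic dXNlcjpwYXNz");
	redact_sensitive_header(&h);
	/* h.buf now reads "Authorization: Basic <redacted>" */
	strbuf_release(&h);
}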
static CURL *get_curl_handle(void)
{
CURL *result = curl_easy_init();
warning("protocol restrictions not applied to curl redirects because\n"
"your curl version is too old (>= 7.19.4)");
#endif
-
if (getenv("GIT_CURL_VERBOSE"))
- curl_easy_setopt(result, CURLOPT_VERBOSE, 1);
+ curl_easy_setopt(result, CURLOPT_VERBOSE, 1L);
+ setup_curl_trace(result);
curl_easy_setopt(result, CURLOPT_USERAGENT,
user_agent ? user_agent : git_user_agent());
strbuf_addf(buf, "objects/%.*s/", 2, hex);
if (!only_two_digit_prefix)
- strbuf_addf(buf, "%s", hex+2);
+ strbuf_addstr(buf, hex + 2);
}
char *get_remote_object_url(const char *url, const char *hex,
unsigned char expn[4096];
size_t size = eltsize * nmemb;
int posn = 0;
- struct http_object_request *freq =
- (struct http_object_request *)data;
+ struct http_object_request *freq = data;
+ struct active_request_slot *slot = freq->slot;
+
+ if (slot) {
+ CURLcode c = curl_easy_getinfo(slot->curl, CURLINFO_HTTP_CODE,
+ &slot->http_code);
+ if (c != CURLE_OK)
+ die("BUG: curl_easy_getinfo for HTTP code failed: %s",
+ curl_easy_strerror(c));
+ if (slot->http_code >= 400)
+ return size;
+ }
+
do {
ssize_t retval = xwrite(freq->localfile,
(char *) ptr + posn, size - posn);
freq->slot = get_active_slot();
curl_easy_setopt(freq->slot->curl, CURLOPT_FILE, freq);
+ curl_easy_setopt(freq->slot->curl, CURLOPT_FAILONERROR, 0);
curl_easy_setopt(freq->slot->curl, CURLOPT_WRITEFUNCTION, fwrite_sha1_file);
curl_easy_setopt(freq->slot->curl, CURLOPT_ERRORBUFFER, freq->errorstr);
curl_easy_setopt(freq->slot->curl, CURLOPT_URL, freq->url);
extern void abort_http_object_request(struct http_object_request *freq);
extern void release_http_object_request(struct http_object_request *freq);
+/* setup routine for curl_easy_setopt CURLOPT_DEBUGFUNCTION */
+void setup_curl_trace(CURL *handle);
#endif /* HTTP_H */
return git_default_date.buf;
}
+void reset_ident_date(void)
+{
+ strbuf_reset(&git_default_date);
+}
+
static int crud(unsigned char c)
{
return c <= 32 ||
va_start(va, fmt);
if (blen <= 0 || (unsigned)(ret = vsnprintf(buf, blen, fmt, va)) >= (unsigned)blen)
- die("Fatal: buffer too small. Please report a bug.");
+ die("BUG: buffer too small. Please report a bug.");
va_end(va);
return ret;
}
if (0 < verbosity || getenv("GIT_CURL_VERBOSE"))
curl_easy_setopt(curl, CURLOPT_VERBOSE, 1L);
+ setup_curl_trace(curl);
return curl;
}
}
#endif
-int main(int argc, char **argv)
+int cmd_main(int argc, const char **argv)
{
struct strbuf all_msgs = STRBUF_INIT;
int total;
int nongit_ok;
- git_extract_argv0_path(argv[0]);
-
- git_setup_gettext();
-
setup_git_directory_gently(&nongit_ok);
git_imap_config();
--- /dev/null
+#ifndef ITERATOR_H
+#define ITERATOR_H
+
+/*
+ * Generic constants related to iterators.
+ */
+
+/*
+ * The attempt to advance the iterator was successful; the iterator
+ * reflects the new current entry.
+ */
+#define ITER_OK 0
+
+/*
+ * The iterator is exhausted and has been freed.
+ */
+#define ITER_DONE -1
+
+/*
+ * The iterator experienced an error. The iteration has been aborted
+ * and the iterator has been freed.
+ */
+#define ITER_ERROR -2
+
+/*
+ * Return values for selector functions for merge iterators. The
+ * numerical values of these constants are important and must be
+ * compatible with ITER_DONE and ITER_ERROR.
+ */
+enum iterator_selection {
+ /* End the iteration without an error: */
+ ITER_SELECT_DONE = ITER_DONE,
+
+ /* Report an error and abort the iteration: */
+ ITER_SELECT_ERROR = ITER_ERROR,
+
+ /*
+ * The next group of constants are masks that are useful
+ * mainly internally.
+ */
+
+ /* The LSB selects whether iter0/iter1 is the "current" iterator: */
+ ITER_CURRENT_SELECTION_MASK = 0x01,
+
+ /* iter0 is the "current" iterator this round: */
+ ITER_CURRENT_SELECTION_0 = 0x00,
+
+ /* iter1 is the "current" iterator this round: */
+ ITER_CURRENT_SELECTION_1 = 0x01,
+
+ /* Yield the value from the current iterator? */
+ ITER_YIELD_CURRENT = 0x02,
+
+ /* Discard the value from the secondary iterator? */
+ ITER_SKIP_SECONDARY = 0x04,
+
+ /*
+ * The constants that a selector function should usually
+ * return.
+ */
+
+ /* Yield the value from iter0: */
+ ITER_SELECT_0 = ITER_CURRENT_SELECTION_0 | ITER_YIELD_CURRENT,
+
+ /* Yield the value from iter0 and discard the one from iter1: */
+ ITER_SELECT_0_SKIP_1 = ITER_SELECT_0 | ITER_SKIP_SECONDARY,
+
+ /* Discard the value from iter0 without yielding anything this round: */
+ ITER_SKIP_0 = ITER_CURRENT_SELECTION_1 | ITER_SKIP_SECONDARY,
+
+ /* Yield the value from iter1: */
+ ITER_SELECT_1 = ITER_CURRENT_SELECTION_1 | ITER_YIELD_CURRENT,
+
+ /* Yield the value from iter1 and discard the one from iter0: */
+ ITER_SELECT_1_SKIP_0 = ITER_SELECT_1 | ITER_SKIP_SECONDARY,
+
+ /* Discard the value from iter1 without yielding anything this round: */
+ ITER_SKIP_1 = ITER_CURRENT_SELECTION_0 | ITER_SKIP_SECONDARY
+};
+
+#endif /* ITERATOR_H */
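The ITER_SELECT_* values are intended to be returned from a selector callback consulted by a merge iterator on each round; the low bit names the "current" sub-iterator and the remaining bits say whether to yield or skip its entry. A hedged sketch of such a selector follows; prefer_first() and its argument list are invented for illustration and do not correspond to any callback signature defined in this patch.

/*
 * Hypothetical selector: when both sub-iterators have an entry with the
 * same key, yield iter0's entry and discard iter1's; otherwise yield
 * from whichever iterator still has entries.
 */
static enum iterator_selection prefer_first(int have0, int have1, int keys_equal)
{
	if (!have0 && !have1)
		return ITER_SELECT_DONE;
	if (!have0)
		return ITER_SELECT_1;
	if (!have1)
		return ITER_SELECT_0;
	return keys_equal ? ITER_SELECT_0_SKIP_1 : ITER_SELECT_0;
}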
struct object *obj = revs->pending.objects[i].item;
if (obj->flags & UNINTERESTING)
continue;
- while (obj->type == OBJ_TAG)
- obj = deref_tag(obj, NULL, 0);
+ obj = deref_tag(obj, NULL, 0);
if (obj->type != OBJ_COMMIT)
die("Non commit %s?", revs->pending.objects[i].name);
if (commit)
char *data = NULL;
if (diff_populate_filespec(spec, 0))
- die("Cannot read blob %s", sha1_to_hex(spec->sha1));
+ die("Cannot read blob %s", oid_to_hex(&spec->oid));
ALLOC_ARRAY(ends, size);
ends[cur++] = 0;
static void print_line(const char *prefix, char first,
long line, unsigned long *ends, void *data,
- const char *color, const char *reset)
+ const char *color, const char *reset, FILE *file)
{
char *begin = get_nth_line(line, ends, data);
char *end = get_nth_line(line+1, ends, data);
had_nl = 1;
}
- fputs(prefix, stdout);
- fputs(color, stdout);
- putchar(first);
- fwrite(begin, 1, end-begin, stdout);
- fputs(reset, stdout);
- putchar('\n');
+ fputs(prefix, file);
+ fputs(color, file);
+ putc(first, file);
+ fwrite(begin, 1, end-begin, file);
+ fputs(reset, file);
+ putc('\n', file);
if (!had_nl)
- fputs("\\ No newline at end of file\n", stdout);
+ fputs("\\ No newline at end of file\n", file);
}
static char *output_prefix(struct diff_options *opt)
if (!pair || !diff)
return;
- if (pair->one->sha1_valid)
+ if (pair->one->oid_valid)
fill_line_ends(pair->one, &p_lines, &p_ends);
fill_line_ends(pair->two, &t_lines, &t_ends);
- printf("%s%sdiff --git a/%s b/%s%s\n", prefix, c_meta, pair->one->path, pair->two->path, c_reset);
- printf("%s%s--- %s%s%s\n", prefix, c_meta,
- pair->one->sha1_valid ? "a/" : "",
- pair->one->sha1_valid ? pair->one->path : "/dev/null",
+ fprintf(opt->file, "%s%sdiff --git a/%s b/%s%s\n", prefix, c_meta, pair->one->path, pair->two->path, c_reset);
+ fprintf(opt->file, "%s%s--- %s%s%s\n", prefix, c_meta,
+ pair->one->oid_valid ? "a/" : "",
+ pair->one->oid_valid ? pair->one->path : "/dev/null",
c_reset);
- printf("%s%s+++ b/%s%s\n", prefix, c_meta, pair->two->path, c_reset);
+ fprintf(opt->file, "%s%s+++ b/%s%s\n", prefix, c_meta, pair->two->path, c_reset);
for (i = 0; i < range->ranges.nr; i++) {
long p_start, p_end;
long t_start = range->ranges.ranges[i].start;
}
/* Now output a diff hunk for this range */
- printf("%s%s@@ -%ld,%ld +%ld,%ld @@%s\n",
+ fprintf(opt->file, "%s%s@@ -%ld,%ld +%ld,%ld @@%s\n",
prefix, c_frag,
p_start+1, p_end-p_start, t_start+1, t_end-t_start,
c_reset);
int k;
for (; t_cur < diff->target.ranges[j].start; t_cur++)
print_line(prefix, ' ', t_cur, t_ends, pair->two->data,
- c_context, c_reset);
+ c_context, c_reset, opt->file);
for (k = diff->parent.ranges[j].start; k < diff->parent.ranges[j].end; k++)
print_line(prefix, '-', k, p_ends, pair->one->data,
- c_old, c_reset);
+ c_old, c_reset, opt->file);
for (; t_cur < diff->target.ranges[j].end && t_cur < t_end; t_cur++)
print_line(prefix, '+', t_cur, t_ends, pair->two->data,
- c_new, c_reset);
+ c_new, c_reset, opt->file);
j++;
}
for (; t_cur < t_end; t_cur++)
print_line(prefix, ' ', t_cur, t_ends, pair->two->data,
- c_context, c_reset);
+ c_context, c_reset, opt->file);
}
free(p_ends);
*/
static void dump_diff_hacky(struct rev_info *rev, struct line_log_data *range)
{
- puts(output_prefix(&rev->diffopt));
+ fprintf(rev->diffopt.file, "%s\n", output_prefix(&rev->diffopt));
while (range) {
dump_diff_hacky_one(rev, range);
range = range->next;
if (rg->ranges.nr == 0)
return 0;
- assert(pair->two->sha1_valid);
+ assert(pair->two->oid_valid);
diff_populate_filespec(pair->two, 0);
file_target.ptr = pair->two->data;
file_target.size = pair->two->size;
- if (pair->one->sha1_valid) {
+ if (pair->one->oid_valid) {
diff_populate_filespec(pair->one, 0);
file_parent.ptr = pair->one->data;
file_parent.size = pair->one->size;
--- /dev/null
+/*
+ * Copyright (C) 2002 Free Software Foundation, Inc.
+ * (originally part of the GNU C Library and Userspace RCU)
+ * Contributed by Ulrich Drepper <drepper@redhat.com>, 2002.
+ *
+ * Copyright (C) 2009 Pierre-Marc Fournier
+ * Conversion to RCU list.
+ * Copyright (C) 2010 Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, see
+ * <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef LIST_H
+#define LIST_H 1
+
+/*
+ * The definitions of this file are adopted from those which can be
+ * found in the Linux kernel headers to enable people familiar with the
+ * latter find their way in these sources as well.
+ */
+
+/* Basic type for the double-link list. */
+struct list_head {
+ struct list_head *next, *prev;
+};
+
+/* avoid conflicts with BSD-only sys/queue.h */
+#undef LIST_HEAD
+/* Define a variable with the head and tail of the list. */
+#define LIST_HEAD(name) \
+ struct list_head name = { &(name), &(name) }
+
+/* Initialize a new list head. */
+#define INIT_LIST_HEAD(ptr) \
+ (ptr)->next = (ptr)->prev = (ptr)
+
+#define LIST_HEAD_INIT(name) { &(name), &(name) }
+
+/* Add new element at the head of the list. */
+static inline void list_add(struct list_head *newp, struct list_head *head)
+{
+ head->next->prev = newp;
+ newp->next = head->next;
+ newp->prev = head;
+ head->next = newp;
+}
+
+/* Add new element at the tail of the list. */
+static inline void list_add_tail(struct list_head *newp, struct list_head *head)
+{
+ head->prev->next = newp;
+ newp->next = head;
+ newp->prev = head->prev;
+ head->prev = newp;
+}
+
+/* Remove element from list. */
+static inline void __list_del(struct list_head *prev, struct list_head *next)
+{
+ next->prev = prev;
+ prev->next = next;
+}
+
+/* Remove element from list. */
+static inline void list_del(struct list_head *elem)
+{
+ __list_del(elem->prev, elem->next);
+}
+
+/* Remove element from list, initializing the element's list pointers. */
+static inline void list_del_init(struct list_head *elem)
+{
+ list_del(elem);
+ INIT_LIST_HEAD(elem);
+}
+
+/* Delete from list, add to another list as head. */
+static inline void list_move(struct list_head *elem, struct list_head *head)
+{
+ __list_del(elem->prev, elem->next);
+ list_add(elem, head);
+}
+
+/* Replace an old entry. */
+static inline void list_replace(struct list_head *old, struct list_head *newp)
+{
+ newp->next = old->next;
+ newp->prev = old->prev;
+ newp->prev->next = newp;
+ newp->next->prev = newp;
+}
+
+/* Join two lists. */
+static inline void list_splice(struct list_head *add, struct list_head *head)
+{
+ /* Do nothing if the list which gets added is empty. */
+ if (add != add->next) {
+ add->next->prev = head;
+ add->prev->next = head->next;
+ head->next->prev = add->prev;
+ head->next = add->next;
+ }
+}
+
+/* Get typed element from list at a given position. */
+#define list_entry(ptr, type, member) \
+ ((type *) ((char *) (ptr) - offsetof(type, member)))
+
+/* Get first entry from a list. */
+#define list_first_entry(ptr, type, member) \
+ list_entry((ptr)->next, type, member)
+
+/* Iterate forward over the elements of the list. */
+#define list_for_each(pos, head) \
+ for (pos = (head)->next; pos != (head); pos = pos->next)
+
+/*
+ * Iterate forward over the elements of the list. The list elements can be
+ * removed from the list while doing this.
+ */
+#define list_for_each_safe(pos, p, head) \
+ for (pos = (head)->next, p = pos->next; \
+ pos != (head); \
+ pos = p, p = pos->next)
+
+/* Iterate backward over the elements of the list. */
+#define list_for_each_prev(pos, head) \
+ for (pos = (head)->prev; pos != (head); pos = pos->prev)
+
+/*
+ * Iterate backward over the elements of the list. The list elements can be
+ * removed from the list while doing this.
+ */
+#define list_for_each_prev_safe(pos, p, head) \
+ for (pos = (head)->prev, p = pos->prev; \
+ pos != (head); \
+ pos = p, p = pos->prev)
+
+static inline int list_empty(struct list_head *head)
+{
+ return head == head->next;
+}
+
+static inline void list_replace_init(struct list_head *old,
+ struct list_head *newp)
+{
+ struct list_head *head = old->next;
+
+ list_del(old);
+ list_add_tail(newp, head);
+ INIT_LIST_HEAD(old);
+}
+
+#endif /* LIST_H */
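The http-walker.c conversion above shows the intended use of this header: embed a struct list_head in the containing object, keep a LIST_HEAD() as the anchor, and recover the object with list_entry(). Here is a self-contained sketch along the same lines; struct item and demo_list() are made up for illustration.

#include <stddef.h>
#include <stdio.h>
#include "list.h"

struct item {
	int value;
	struct list_head node;	/* embedded link, as in struct object_request */
};

static LIST_HEAD(items);

static void demo_list(void)
{
	struct item a = { 1 }, b = { 2 };
	struct list_head *pos;

	list_add_tail(&a.node, &items);
	list_add_tail(&b.node, &items);
	list_for_each(pos, &items) {
		struct item *it = list_entry(pos, struct item, node);
		printf("%d\n", it->value);	/* prints 1 then 2 */
	}
	list_del(&b.node);
	list_del(&a.node);
}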
* * calling `fdopen_lock_file()` to get a `FILE` pointer for the
* open file and writing to the file using stdio.
*
+ * Note that the file descriptor returned by hold_lock_file_for_update()
+ * is marked O_CLOEXEC, so the new contents must be written by the
+ * current process, not a spawned one.
+ *
* When finished writing, the caller can:
*
* * Close the file descriptor and rename the lockfile to its final
}
}
-static void show_parents(struct commit *commit, int abbrev)
+static void show_parents(struct commit *commit, int abbrev, FILE *file)
{
struct commit_list *p;
for (p = commit->parents; p ; p = p->next) {
struct commit *parent = p->item;
- printf(" %s", find_unique_abbrev(parent->object.oid.hash, abbrev));
+ fprintf(file, " %s", find_unique_abbrev(parent->object.oid.hash, abbrev));
}
}
{
struct commit_list *p = lookup_decoration(&opt->children, &commit->object);
for ( ; p; p = p->next) {
- printf(" %s", find_unique_abbrev(p->item->object.oid.hash, abbrev));
+ fprintf(opt->diffopt.file, " %s", find_unique_abbrev(p->item->object.oid.hash, abbrev));
}
}
if (current_and_HEAD &&
decoration->type == DECORATION_REF_HEAD) {
- strbuf_addstr(sb, color_reset);
- strbuf_addstr(sb, color_commit);
strbuf_addstr(sb, " -> ");
strbuf_addstr(sb, color_reset);
strbuf_addstr(sb, decorate_get_color(use_color, current_and_HEAD->type));
struct strbuf sb = STRBUF_INIT;
if (opt->show_source && commit->util)
- printf("\t%s", (char *) commit->util);
+ fprintf(opt->diffopt.file, "\t%s", (char *) commit->util);
if (!opt->show_decorations)
return;
format_decorations(&sb, commit, opt->diffopt.use_color);
- fputs(sb.buf, stdout);
+ fputs(sb.buf, opt->diffopt.file);
strbuf_release(&sb);
}
subject = "Subject: ";
}
- printf("From %s Mon Sep 17 00:00:00 2001\n", name);
+ fprintf(opt->diffopt.file, "From %s Mon Sep 17 00:00:00 2001\n", name);
graph_show_oneline(opt->graph);
if (opt->message_id) {
- printf("Message-Id: <%s>\n", opt->message_id);
+ fprintf(opt->diffopt.file, "Message-Id: <%s>\n", opt->message_id);
graph_show_oneline(opt->graph);
}
if (opt->ref_message_ids && opt->ref_message_ids->nr > 0) {
int i, n;
n = opt->ref_message_ids->nr;
- printf("In-Reply-To: <%s>\n", opt->ref_message_ids->items[n-1].string);
+ fprintf(opt->diffopt.file, "In-Reply-To: <%s>\n", opt->ref_message_ids->items[n-1].string);
for (i = 0; i < n; i++)
- printf("%s<%s>\n", (i > 0 ? "\t" : "References: "),
+ fprintf(opt->diffopt.file, "%s<%s>\n", (i > 0 ? "\t" : "References: "),
opt->ref_message_ids->items[i].string);
graph_show_oneline(opt->graph);
}
reset = diff_get_color_opt(&opt->diffopt, DIFF_RESET);
while (*bol) {
eol = strchrnul(bol, '\n');
- printf("%s%.*s%s%s", color, (int)(eol - bol), bol, reset,
+ fprintf(opt->diffopt.file, "%s%.*s%s%s", color, (int)(eol - bol), bol, reset,
*eol ? "\n" : "");
graph_show_oneline(opt->graph);
bol = (*eol) ? (eol + 1) : eol;
if (!opt->graph)
put_revision_mark(opt, commit);
- fputs(find_unique_abbrev(commit->object.oid.hash, abbrev_commit), stdout);
+ fputs(find_unique_abbrev(commit->object.oid.hash, abbrev_commit), opt->diffopt.file);
if (opt->print_parents)
- show_parents(commit, abbrev_commit);
+ show_parents(commit, abbrev_commit, opt->diffopt.file);
if (opt->children.name)
show_children(opt, commit, abbrev_commit);
show_decorations(opt, commit);
if (opt->graph && !graph_is_commit_finished(opt->graph)) {
- putchar('\n');
+ putc('\n', opt->diffopt.file);
graph_show_remainder(opt->graph);
}
- putchar(opt->diffopt.line_termination);
+ putc(opt->diffopt.line_termination, opt->diffopt.file);
return;
}
if (opt->diffopt.line_termination == '\n' &&
!opt->missing_newline)
graph_show_padding(opt->graph);
- putchar(opt->diffopt.line_termination);
+ putc(opt->diffopt.line_termination, opt->diffopt.file);
}
opt->shown_one = 1;
* Print the header line of the header.
*/
- if (opt->commit_format == CMIT_FMT_EMAIL) {
+ if (cmit_fmt_is_mail(opt->commit_format)) {
log_write_email_headers(opt, commit, &ctx.subject, &extra_headers,
&ctx.need_8bit_cte);
} else if (opt->commit_format != CMIT_FMT_USERFORMAT) {
- fputs(diff_get_color_opt(&opt->diffopt, DIFF_COMMIT), stdout);
+ fputs(diff_get_color_opt(&opt->diffopt, DIFF_COMMIT), opt->diffopt.file);
if (opt->commit_format != CMIT_FMT_ONELINE)
- fputs("commit ", stdout);
+ fputs("commit ", opt->diffopt.file);
if (!opt->graph)
put_revision_mark(opt, commit);
fputs(find_unique_abbrev(commit->object.oid.hash, abbrev_commit),
- stdout);
+ opt->diffopt.file);
if (opt->print_parents)
- show_parents(commit, abbrev_commit);
+ show_parents(commit, abbrev_commit, opt->diffopt.file);
if (opt->children.name)
show_children(opt, commit, abbrev_commit);
if (parent)
- printf(" (from %s)",
+ fprintf(opt->diffopt.file, " (from %s)",
find_unique_abbrev(parent->object.oid.hash,
abbrev_commit));
- fputs(diff_get_color_opt(&opt->diffopt, DIFF_RESET), stdout);
+ fputs(diff_get_color_opt(&opt->diffopt, DIFF_RESET), opt->diffopt.file);
show_decorations(opt, commit);
if (opt->commit_format == CMIT_FMT_ONELINE) {
- putchar(' ');
+ putc(' ', opt->diffopt.file);
} else {
- putchar('\n');
+ putc('\n', opt->diffopt.file);
graph_show_oneline(opt->graph);
}
if (opt->reflog_info) {
ctx.output_encoding = get_log_output_encoding();
if (opt->from_ident.mail_begin && opt->from_ident.name_begin)
ctx.from_ident = &opt->from_ident;
+ if (opt->graph)
+ ctx.graph_width = graph_width(opt->graph);
pretty_print_commit(&ctx, commit, &msgbuf);
if (opt->add_signoff)
if ((ctx.fmt != CMIT_FMT_USERFORMAT) &&
ctx.notes_message && *ctx.notes_message) {
- if (ctx.fmt == CMIT_FMT_EMAIL) {
+ if (cmit_fmt_is_mail(ctx.fmt)) {
strbuf_addstr(&msgbuf, "---\n");
opt->shown_dashes = 1;
}
}
if (opt->show_log_size) {
- printf("log size %i\n", (int)msgbuf.len);
+ fprintf(opt->diffopt.file, "log size %i\n", (int)msgbuf.len);
graph_show_oneline(opt->graph);
}
if (opt->graph)
graph_show_commit_msg(opt->graph, &msgbuf);
else
- fwrite(msgbuf.buf, sizeof(char), msgbuf.len, stdout);
+ fwrite(msgbuf.buf, sizeof(char), msgbuf.len, opt->diffopt.file);
if (opt->use_terminator && !commit_format_is_empty(opt->commit_format)) {
if (!opt->missing_newline)
graph_show_padding(opt->graph);
- putchar(opt->diffopt.line_termination);
+ putc(opt->diffopt.line_termination, opt->diffopt.file);
}
strbuf_release(&msgbuf);
struct strbuf *msg = NULL;
msg = opt->diffopt.output_prefix(&opt->diffopt,
opt->diffopt.output_prefix_data);
- fwrite(msg->buf, msg->len, 1, stdout);
+ fwrite(msg->buf, msg->len, 1, opt->diffopt.file);
}
/*
*/
if (!opt->shown_dashes &&
(pch & opt->diffopt.output_format) == pch)
- printf("---");
- putchar('\n');
+ fprintf(opt->diffopt.file, "---");
+ putc('\n', opt->diffopt.file);
}
}
diff_flush(&opt->diffopt);
int log_tree_commit(struct rev_info *opt, struct commit *commit)
{
struct log_info log;
- int shown;
+ int shown, close_file = opt->diffopt.close_file;
log.commit = commit;
log.parent = NULL;
opt->loginfo = &log;
+ opt->diffopt.close_file = 0;
if (opt->line_level_traverse)
return line_log_print(opt, commit);
if (opt->track_linear && !opt->linear && !opt->reverse_output_stage)
- printf("\n%s\n", opt->break_bar);
+ fprintf(opt->diffopt.file, "\n%s\n", opt->break_bar);
shown = log_tree_diff(opt, commit, &log);
if (!shown && opt->loginfo && opt->always_show_header) {
log.parent = NULL;
shown = 1;
}
if (opt->track_linear && !opt->linear && opt->reverse_output_stage)
- printf("\n%s\n", opt->break_bar);
+ fprintf(opt->diffopt.file, "\n%s\n", opt->break_bar);
opt->loginfo = NULL;
- maybe_flush_or_die(stdout, "stdout");
+ maybe_flush_or_die(opt->diffopt.file, "stdout");
+ if (close_file)
+ fclose(opt->diffopt.file);
return shown;
}
}
}
-static void handle_message_id(struct mailinfo *mi, const struct strbuf *line)
-{
- if (mi->add_message_id)
- mi->message_id = strdup(line->buf);
-}
-
static void handle_content_transfer_encoding(struct mailinfo *mi,
const struct strbuf *line)
{
len = strlen("Message-Id: ");
strbuf_add(&sb, line->buf + len, line->len - len);
decode_header(mi, &sb);
- handle_message_id(mi, &sb);
+ if (mi->add_message_id)
+ mi->message_id = strbuf_detach(&sb, NULL);
ret = 1;
goto check_header_out;
}
#include "dir.h"
#include "submodule.h"
+static void flush_output(struct merge_options *o)
+{
+ if (o->buffer_output < 2 && o->obuf.len) {
+ fputs(o->obuf.buf, stdout);
+ strbuf_reset(&o->obuf);
+ }
+}
+
+static int err(struct merge_options *o, const char *err, ...)
+{
+ va_list params;
+
+ if (o->buffer_output < 2)
+ flush_output(o);
+ else {
+ strbuf_complete(&o->obuf, '\n');
+ strbuf_addstr(&o->obuf, "error: ");
+ }
+ va_start(params, err);
+ strbuf_vaddf(&o->obuf, err, params);
+ va_end(params);
+ if (o->buffer_output > 1)
+ strbuf_addch(&o->obuf, '\n');
+ else {
+ error("%s", o->obuf.buf);
+ strbuf_reset(&o->obuf);
+ }
+
+ return -1;
+}
+
static struct tree *shift_tree_object(struct tree *one, struct tree *two,
const char *subtree_shift)
{
static struct commit *make_virtual_commit(struct tree *tree, const char *comment)
{
struct commit *commit = alloc_commit_node();
- struct merge_remote_desc *desc = xmalloc(sizeof(*desc));
- desc->name = comment;
- desc->obj = (struct object *)commit;
+ set_merge_remote_desc(commit, comment, (struct object *)commit);
commit->tree = tree;
- commit->util = desc;
commit->object.parsed = 1;
return commit;
}
* Since we use get_tree_entry(), which does not put the read object into
* the object pool, we cannot rely on a == b.
*/
-static int sha_eq(const unsigned char *a, const unsigned char *b)
+static int oid_eq(const struct object_id *a, const struct object_id *b)
{
if (!a && !b)
return 2;
- return a && b && hashcmp(a, b) == 0;
+ return a && b && oidcmp(a, b) == 0;
}
enum rename_type {
struct stage_data {
struct {
unsigned mode;
- unsigned char sha[20];
+ struct object_id oid;
} stages[4];
struct rename_conflict_info *rename_conflict_info;
unsigned processed:1;
int ostage2 = ostage1 ^ 1;
ci->ren1_other.path = pair1->one->path;
- hashcpy(ci->ren1_other.sha1, src_entry1->stages[ostage1].sha);
+ oidcpy(&ci->ren1_other.oid, &src_entry1->stages[ostage1].oid);
ci->ren1_other.mode = src_entry1->stages[ostage1].mode;
ci->ren2_other.path = pair2->one->path;
- hashcpy(ci->ren2_other.sha1, src_entry2->stages[ostage2].sha);
+ oidcpy(&ci->ren2_other.oid, &src_entry2->stages[ostage2].oid);
ci->ren2_other.mode = src_entry2->stages[ostage2].mode;
}
}
return (!o->call_depth && o->verbosity >= v) || o->verbosity >= 5;
}
-static void flush_output(struct merge_options *o)
-{
- if (o->obuf.len) {
- fputs(o->obuf.buf, stdout);
- strbuf_reset(&o->obuf);
- }
-}
-
__attribute__((format (printf, 3, 4)))
static void output(struct merge_options *o, int v, const char *fmt, ...)
{
static void output_commit_title(struct merge_options *o, struct commit *commit)
{
- int i;
- flush_output(o);
- for (i = o->call_depth; i--;)
- fputs(" ", stdout);
+ strbuf_addchars(&o->obuf, ' ', o->call_depth * 2);
if (commit->util)
- printf("virtual %s\n", merge_remote_util(commit)->name);
+ strbuf_addf(&o->obuf, "virtual %s\n",
+ merge_remote_util(commit)->name);
else {
- printf("%s ", find_unique_abbrev(commit->object.oid.hash, DEFAULT_ABBREV));
+ strbuf_addf(&o->obuf, "%s ",
+ find_unique_abbrev(commit->object.oid.hash,
+ DEFAULT_ABBREV));
if (parse_commit(commit) != 0)
- printf(_("(bad commit)\n"));
+ strbuf_addf(&o->obuf, _("(bad commit)\n"));
else {
const char *title;
const char *msg = get_commit_buffer(commit, NULL);
int len = find_commit_subject(msg, &title);
if (len)
- printf("%.*s\n", len, title);
+ strbuf_addf(&o->obuf, "%.*s\n", len, title);
unuse_commit_buffer(commit, msg);
}
}
+ flush_output(o);
}
-static int add_cacheinfo(unsigned int mode, const unsigned char *sha1,
+static int add_cacheinfo(struct merge_options *o,
+ unsigned int mode, const struct object_id *oid,
const char *path, int stage, int refresh, int options)
{
struct cache_entry *ce;
- ce = make_cache_entry(mode, sha1 ? sha1 : null_sha1, path, stage,
- (refresh ? (CE_MATCH_REFRESH |
- CE_MATCH_IGNORE_MISSING) : 0 ));
+ int ret;
+
+ ce = make_cache_entry(mode, oid ? oid->hash : null_sha1, path, stage, 0);
if (!ce)
- return error(_("addinfo_cache failed for path '%s'"), path);
- return add_cache_entry(ce, options);
+ return err(o, _("addinfo_cache failed for path '%s'"), path);
+
+ ret = add_cache_entry(ce, options);
+ if (refresh) {
+ struct cache_entry *nce;
+
+ nce = refresh_cache_entry(ce, CE_MATCH_REFRESH | CE_MATCH_IGNORE_MISSING);
+ if (nce != ce)
+ ret = add_cache_entry(nce, options);
+ }
+ return ret;
}
static void init_tree_desc_from_tree(struct tree_desc *desc, struct tree *tree)
fprintf(stderr, "BUG: %d %.*s\n", ce_stage(ce),
(int)ce_namelen(ce), ce->name);
}
- die("Bug in merge-recursive.c");
+ die("BUG: unmerged index entries in merge-recursive.c");
}
if (!active_cache_tree)
active_cache_tree = cache_tree();
if (!cache_tree_fully_valid(active_cache_tree) &&
- cache_tree_update(&the_index, 0) < 0)
- die(_("error building trees"));
+ cache_tree_update(&the_index, 0) < 0) {
+ err(o, _("error building trees"));
+ return NULL;
+ }
result = lookup_tree(active_cache_tree->sha1);
struct string_list_item *item;
struct stage_data *e = xcalloc(1, sizeof(struct stage_data));
get_tree_entry(o->object.oid.hash, path,
- e->stages[1].sha, &e->stages[1].mode);
+ e->stages[1].oid.hash, &e->stages[1].mode);
get_tree_entry(a->object.oid.hash, path,
- e->stages[2].sha, &e->stages[2].mode);
+ e->stages[2].oid.hash, &e->stages[2].mode);
get_tree_entry(b->object.oid.hash, path,
- e->stages[3].sha, &e->stages[3].mode);
+ e->stages[3].oid.hash, &e->stages[3].mode);
item = string_list_insert(entries, path);
item->util = e;
return e;
}
e = item->util;
e->stages[ce_stage(ce)].mode = ce->ce_mode;
- hashcpy(e->stages[ce_stage(ce)].sha, ce->sha1);
+ hashcpy(e->stages[ce_stage(ce)].oid.hash, ce->sha1);
}
return unmerged;
 * and the file needs to be present, then the D/F file will be
* reinstated with a new unique name at the time it is processed.
*/
- struct string_list df_sorted_entries;
+ struct string_list df_sorted_entries = STRING_LIST_INIT_NODUP;
const char *last_file = NULL;
int last_len = 0;
int i;
return;
/* Ensure D/F conflicts are adjacent in the entries list. */
- memset(&df_sorted_entries, 0, sizeof(struct string_list));
for (i = 0; i < entries->nr; i++) {
struct string_list_item *next = &entries->items[i];
string_list_append(&df_sorted_entries, next->string)->util =
return renames;
}
-static int update_stages(const char *path, const struct diff_filespec *o,
+static int update_stages(struct merge_options *opt, const char *path,
+ const struct diff_filespec *o,
const struct diff_filespec *a,
const struct diff_filespec *b)
{
if (remove_file_from_cache(path))
return -1;
if (o)
- if (add_cacheinfo(o->mode, o->sha1, path, 1, 0, options))
+ if (add_cacheinfo(opt, o->mode, &o->oid, path, 1, 0, options))
return -1;
if (a)
- if (add_cacheinfo(a->mode, a->sha1, path, 2, 0, options))
+ if (add_cacheinfo(opt, a->mode, &a->oid, path, 2, 0, options))
return -1;
if (b)
- if (add_cacheinfo(b->mode, b->sha1, path, 3, 0, options))
+ if (add_cacheinfo(opt, b->mode, &b->oid, path, 3, 0, options))
return -1;
return 0;
}
entry->stages[1].mode = o->mode;
entry->stages[2].mode = a->mode;
entry->stages[3].mode = b->mode;
- hashcpy(entry->stages[1].sha, o->sha1);
- hashcpy(entry->stages[2].sha, a->sha1);
- hashcpy(entry->stages[3].sha, b->sha1);
+ oidcpy(&entry->stages[1].oid, &o->oid);
+ oidcpy(&entry->stages[2].oid, &a->oid);
+ oidcpy(&entry->stages[3].oid, &b->oid);
}
static int remove_file(struct merge_options *o, int clean,
{
int pos = cache_name_pos(path, strlen(path));
- if (pos < 0)
- pos = -1 - pos;
- while (pos < active_nr &&
- !strcmp(path, active_cache[pos]->name)) {
- /*
- * If stage #0, it is definitely tracked.
- * If it has stage #2 then it was tracked
- * before this merge started. All other
- * cases the path was not tracked.
- */
- switch (ce_stage(active_cache[pos])) {
- case 0:
- case 2:
+ if (0 <= pos)
+ /* we have been tracking this path */
+ return 1;
+
+ /*
+ * Look for an unmerged entry for the path,
+ * specifically stage #2, which would indicate
+ * that "our" side before the merge started
+ * had the path tracked (and resulted in a conflict).
+ */
+ for (pos = -1 - pos;
+ pos < active_nr && !strcmp(path, active_cache[pos]->name);
+ pos++)
+ if (ce_stage(active_cache[pos]) == 2)
return 1;
- }
- pos++;
- }
return 0;
}
/* Make sure leading directories are created */
status = safe_create_leading_directories_const(path);
if (status) {
- if (status == SCLD_EXISTS) {
+ if (status == SCLD_EXISTS)
/* something else exists */
- error(msg, path, _(": perhaps a D/F conflict?"));
- return -1;
- }
- die(msg, path, "");
+ return err(o, msg, path, _(": perhaps a D/F conflict?"));
+ return err(o, msg, path, "");
}
/*
* tracking it.
*/
if (would_lose_untracked(path))
- return error(_("refusing to lose untracked file at '%s'"),
+ return err(o, _("refusing to lose untracked file at '%s'"),
path);
/* Successful unlink is good.. */
if (errno == ENOENT)
return 0;
/* .. but not some other error (who really cares what?) */
- return error(msg, path, _(": perhaps a D/F conflict?"));
+ return err(o, msg, path, _(": perhaps a D/F conflict?"));
}
-static void update_file_flags(struct merge_options *o,
- const unsigned char *sha,
- unsigned mode,
- const char *path,
- int update_cache,
- int update_wd)
+static int update_file_flags(struct merge_options *o,
+ const struct object_id *oid,
+ unsigned mode,
+ const char *path,
+ int update_cache,
+ int update_wd)
{
+ int ret = 0;
+
if (o->call_depth)
update_wd = 0;
goto update_index;
}
- buf = read_sha1_file(sha, &type, &size);
+ buf = read_sha1_file(oid->hash, &type, &size);
if (!buf)
- die(_("cannot read object %s '%s'"), sha1_to_hex(sha), path);
- if (type != OBJ_BLOB)
- die(_("blob expected for %s '%s'"), sha1_to_hex(sha), path);
+ return err(o, _("cannot read object %s '%s'"), oid_to_hex(oid), path);
+ if (type != OBJ_BLOB) {
+ ret = err(o, _("blob expected for %s '%s'"), oid_to_hex(oid), path);
+ goto free_buf;
+ }
if (S_ISREG(mode)) {
struct strbuf strbuf = STRBUF_INIT;
if (convert_to_working_tree(path, buf, size, &strbuf)) {
if (make_room_for_path(o, path) < 0) {
update_wd = 0;
- free(buf);
- goto update_index;
+ goto free_buf;
}
if (S_ISREG(mode) || (!has_symlinks && S_ISLNK(mode))) {
int fd;
else
mode = 0666;
fd = open(path, O_WRONLY | O_TRUNC | O_CREAT, mode);
- if (fd < 0)
- die_errno(_("failed to open '%s'"), path);
+ if (fd < 0) {
+ ret = err(o, _("failed to open '%s': %s"),
+ path, strerror(errno));
+ goto free_buf;
+ }
write_in_full(fd, buf, size);
close(fd);
} else if (S_ISLNK(mode)) {
safe_create_leading_directories_const(path);
unlink(path);
if (symlink(lnk, path))
- die_errno(_("failed to symlink '%s'"), path);
+ ret = err(o, _("failed to symlink '%s': %s"),
+ path, strerror(errno));
free(lnk);
} else
- die(_("do not know what to do with %06o %s '%s'"),
- mode, sha1_to_hex(sha), path);
+ ret = err(o,
+ _("do not know what to do with %06o %s '%s'"),
+ mode, oid_to_hex(oid), path);
+ free_buf:
free(buf);
}
update_index:
- if (update_cache)
- add_cacheinfo(mode, sha, path, 0, update_wd, ADD_CACHE_OK_TO_ADD);
+ if (!ret && update_cache)
+ add_cacheinfo(o, mode, oid, path, 0, update_wd, ADD_CACHE_OK_TO_ADD);
+ return ret;
}
-static void update_file(struct merge_options *o,
- int clean,
- const unsigned char *sha,
- unsigned mode,
- const char *path)
+static int update_file(struct merge_options *o,
+ int clean,
+ const struct object_id *oid,
+ unsigned mode,
+ const char *path)
{
- update_file_flags(o, sha, mode, path, o->call_depth || clean, !o->call_depth);
+ return update_file_flags(o, oid, mode, path, o->call_depth || clean, !o->call_depth);
}
/* Low level file merging, update and removal */
struct merge_file_info {
- unsigned char sha[20];
+ struct object_id oid;
unsigned mode;
unsigned clean:1,
merge:1;
name2 = mkpathdup("%s", branch2);
}
- read_mmblob(&orig, one->sha1);
- read_mmblob(&src1, a->sha1);
- read_mmblob(&src2, b->sha1);
+ read_mmblob(&orig, one->oid.hash);
+ read_mmblob(&src1, a->oid.hash);
+ read_mmblob(&src2, b->oid.hash);
merge_status = ll_merge(result_buf, a->path, &orig, base_name,
&src1, name1, &src2, name2, &ll_opts);
return merge_status;
}
-static struct merge_file_info merge_file_1(struct merge_options *o,
+static int merge_file_1(struct merge_options *o,
const struct diff_filespec *one,
const struct diff_filespec *a,
const struct diff_filespec *b,
const char *branch1,
- const char *branch2)
+ const char *branch2,
+ struct merge_file_info *result)
{
- struct merge_file_info result;
- result.merge = 0;
- result.clean = 1;
+ result->merge = 0;
+ result->clean = 1;
if ((S_IFMT & a->mode) != (S_IFMT & b->mode)) {
- result.clean = 0;
+ result->clean = 0;
if (S_ISREG(a->mode)) {
- result.mode = a->mode;
- hashcpy(result.sha, a->sha1);
+ result->mode = a->mode;
+ oidcpy(&result->oid, &a->oid);
} else {
- result.mode = b->mode;
- hashcpy(result.sha, b->sha1);
+ result->mode = b->mode;
+ oidcpy(&result->oid, &b->oid);
}
} else {
- if (!sha_eq(a->sha1, one->sha1) && !sha_eq(b->sha1, one->sha1))
- result.merge = 1;
+ if (!oid_eq(&a->oid, &one->oid) && !oid_eq(&b->oid, &one->oid))
+ result->merge = 1;
/*
* Merge modes
*/
if (a->mode == b->mode || a->mode == one->mode)
- result.mode = b->mode;
+ result->mode = b->mode;
else {
- result.mode = a->mode;
+ result->mode = a->mode;
if (b->mode != one->mode) {
- result.clean = 0;
- result.merge = 1;
+ result->clean = 0;
+ result->merge = 1;
}
}
- if (sha_eq(a->sha1, b->sha1) || sha_eq(a->sha1, one->sha1))
- hashcpy(result.sha, b->sha1);
- else if (sha_eq(b->sha1, one->sha1))
- hashcpy(result.sha, a->sha1);
+ if (oid_eq(&a->oid, &b->oid) || oid_eq(&a->oid, &one->oid))
+ oidcpy(&result->oid, &b->oid);
+ else if (oid_eq(&b->oid, &one->oid))
+ oidcpy(&result->oid, &a->oid);
else if (S_ISREG(a->mode)) {
mmbuffer_t result_buf;
- int merge_status;
+ int ret = 0, merge_status;
merge_status = merge_3way(o, &result_buf, one, a, b,
branch1, branch2);
if ((merge_status < 0) || !result_buf.ptr)
- die(_("Failed to execute internal merge"));
+ ret = err(o, _("Failed to execute internal merge"));
- if (write_sha1_file(result_buf.ptr, result_buf.size,
- blob_type, result.sha))
- die(_("Unable to add %s to database"),
- a->path);
+ if (!ret && write_sha1_file(result_buf.ptr, result_buf.size,
+ blob_type, result->oid.hash))
+ ret = err(o, _("Unable to add %s to database"),
+ a->path);
free(result_buf.ptr);
- result.clean = (merge_status == 0);
+ if (ret)
+ return ret;
+ result->clean = (merge_status == 0);
} else if (S_ISGITLINK(a->mode)) {
- result.clean = merge_submodule(result.sha,
- one->path, one->sha1,
- a->sha1, b->sha1,
+ result->clean = merge_submodule(result->oid.hash,
+ one->path,
+ one->oid.hash,
+ a->oid.hash,
+ b->oid.hash,
!o->call_depth);
} else if (S_ISLNK(a->mode)) {
- hashcpy(result.sha, a->sha1);
+ oidcpy(&result->oid, &a->oid);
- if (!sha_eq(a->sha1, b->sha1))
- result.clean = 0;
- } else {
- die(_("unsupported object type in the tree"));
- }
+ if (!oid_eq(&a->oid, &b->oid))
+ result->clean = 0;
+ } else
+ die("BUG: unsupported object type in the tree");
}
- return result;
+ return 0;
}
-static struct merge_file_info
-merge_file_special_markers(struct merge_options *o,
+static int merge_file_special_markers(struct merge_options *o,
const struct diff_filespec *one,
const struct diff_filespec *a,
const struct diff_filespec *b,
const char *branch1,
const char *filename1,
const char *branch2,
- const char *filename2)
+ const char *filename2,
+ struct merge_file_info *mfi)
{
char *side1 = NULL;
char *side2 = NULL;
- struct merge_file_info mfi;
+ int ret;
if (filename1)
side1 = xstrfmt("%s:%s", branch1, filename1);
if (filename2)
side2 = xstrfmt("%s:%s", branch2, filename2);
- mfi = merge_file_1(o, one, a, b,
- side1 ? side1 : branch1, side2 ? side2 : branch2);
+ ret = merge_file_1(o, one, a, b,
+ side1 ? side1 : branch1,
+ side2 ? side2 : branch2, mfi);
free(side1);
free(side2);
- return mfi;
+ return ret;
}
-static struct merge_file_info merge_file_one(struct merge_options *o,
+static int merge_file_one(struct merge_options *o,
const char *path,
- const unsigned char *o_sha, int o_mode,
- const unsigned char *a_sha, int a_mode,
- const unsigned char *b_sha, int b_mode,
+ const struct object_id *o_oid, int o_mode,
+ const struct object_id *a_oid, int a_mode,
+ const struct object_id *b_oid, int b_mode,
const char *branch1,
- const char *branch2)
+ const char *branch2,
+ struct merge_file_info *mfi)
{
struct diff_filespec one, a, b;
one.path = a.path = b.path = (char *)path;
- hashcpy(one.sha1, o_sha);
+ oidcpy(&one.oid, o_oid);
one.mode = o_mode;
- hashcpy(a.sha1, a_sha);
+ oidcpy(&a.oid, a_oid);
a.mode = a_mode;
- hashcpy(b.sha1, b_sha);
+ oidcpy(&b.oid, b_oid);
b.mode = b_mode;
- return merge_file_1(o, &one, &a, &b, branch1, branch2);
+ return merge_file_1(o, &one, &a, &b, branch1, branch2, mfi);
}
-static void handle_change_delete(struct merge_options *o,
+static int handle_change_delete(struct merge_options *o,
const char *path,
- const unsigned char *o_sha, int o_mode,
- const unsigned char *a_sha, int a_mode,
- const unsigned char *b_sha, int b_mode,
+ const struct object_id *o_oid, int o_mode,
+ const struct object_id *a_oid, int a_mode,
+ const struct object_id *b_oid, int b_mode,
const char *change, const char *change_past)
{
char *renamed = NULL;
+ int ret = 0;
if (dir_in_way(path, !o->call_depth)) {
- renamed = unique_path(o, path, a_sha ? o->branch1 : o->branch2);
+ renamed = unique_path(o, path, a_oid ? o->branch1 : o->branch2);
}
if (o->call_depth) {
* correct; since there is no true "middle point" between
* them, simply reuse the base version for virtual merge base.
*/
- remove_file_from_cache(path);
- update_file(o, 0, o_sha, o_mode, renamed ? renamed : path);
- } else if (!a_sha) {
+ ret = remove_file_from_cache(path);
+ if (!ret)
+ ret = update_file(o, 0, o_oid, o_mode,
+ renamed ? renamed : path);
+ } else if (!a_oid) {
if (!renamed) {
output(o, 1, _("CONFLICT (%s/delete): %s deleted in %s "
"and %s in %s. Version %s of %s left in tree."),
change, path, o->branch1, change_past,
o->branch2, o->branch2, path);
- update_file(o, 0, b_sha, b_mode, path);
+ ret = update_file(o, 0, b_oid, b_mode, path);
} else {
output(o, 1, _("CONFLICT (%s/delete): %s deleted in %s "
"and %s in %s. Version %s of %s left in tree at %s."),
change, path, o->branch1, change_past,
o->branch2, o->branch2, path, renamed);
- update_file(o, 0, b_sha, b_mode, renamed);
+ ret = update_file(o, 0, b_oid, b_mode, renamed);
}
} else {
if (!renamed) {
"and %s in %s. Version %s of %s left in tree at %s."),
change, path, o->branch2, change_past,
o->branch1, o->branch1, path, renamed);
- update_file(o, 0, a_sha, a_mode, renamed);
+ ret = update_file(o, 0, a_oid, a_mode, renamed);
}
/*
* No need to call update_file() on path when !renamed, since
*/
}
free(renamed);
+
+ return ret;
}
-static void conflict_rename_delete(struct merge_options *o,
+static int conflict_rename_delete(struct merge_options *o,
struct diff_filepair *pair,
const char *rename_branch,
const char *other_branch)
{
const struct diff_filespec *orig = pair->one;
const struct diff_filespec *dest = pair->two;
- const unsigned char *a_sha = NULL;
- const unsigned char *b_sha = NULL;
+ const struct object_id *a_oid = NULL;
+ const struct object_id *b_oid = NULL;
int a_mode = 0;
int b_mode = 0;
if (rename_branch == o->branch1) {
- a_sha = dest->sha1;
+ a_oid = &dest->oid;
a_mode = dest->mode;
} else {
- b_sha = dest->sha1;
+ b_oid = &dest->oid;
b_mode = dest->mode;
}
- handle_change_delete(o,
- o->call_depth ? orig->path : dest->path,
- orig->sha1, orig->mode,
- a_sha, a_mode,
- b_sha, b_mode,
- _("rename"), _("renamed"));
-
- if (o->call_depth) {
- remove_file_from_cache(dest->path);
- } else {
- update_stages(dest->path, NULL,
- rename_branch == o->branch1 ? dest : NULL,
- rename_branch == o->branch1 ? NULL : dest);
- }
+ if (handle_change_delete(o,
+ o->call_depth ? orig->path : dest->path,
+ &orig->oid, orig->mode,
+ a_oid, a_mode,
+ b_oid, b_mode,
+ _("rename"), _("renamed")))
+ return -1;
+ if (o->call_depth)
+ return remove_file_from_cache(dest->path);
+ else
+ return update_stages(o, dest->path, NULL,
+ rename_branch == o->branch1 ? dest : NULL,
+ rename_branch == o->branch1 ? NULL : dest);
}
static struct diff_filespec *filespec_from_entry(struct diff_filespec *target,
struct stage_data *entry,
int stage)
{
- unsigned char *sha = entry->stages[stage].sha;
+ struct object_id *oid = &entry->stages[stage].oid;
unsigned mode = entry->stages[stage].mode;
- if (mode == 0 || is_null_sha1(sha))
+ if (mode == 0 || is_null_oid(oid))
return NULL;
- hashcpy(target->sha1, sha);
+ oidcpy(&target->oid, oid);
target->mode = mode;
return target;
}
-static void handle_file(struct merge_options *o,
+static int handle_file(struct merge_options *o,
struct diff_filespec *rename,
int stage,
struct rename_conflict_info *ci)
const char *cur_branch, *other_branch;
struct diff_filespec other;
struct diff_filespec *add;
+ int ret;
if (stage == 2) {
dst_entry = ci->dst_entry1;
add = filespec_from_entry(&other, dst_entry, stage ^ 1);
if (add) {
char *add_name = unique_path(o, rename->path, other_branch);
- update_file(o, 0, add->sha1, add->mode, add_name);
+ if (update_file(o, 0, &add->oid, add->mode, add_name))
+ return -1;
remove_file(o, 0, rename->path, 0);
dst_name = unique_path(o, rename->path, cur_branch);
rename->path, other_branch, dst_name);
}
}
- update_file(o, 0, rename->sha1, rename->mode, dst_name);
- if (stage == 2)
- update_stages(rename->path, NULL, rename, add);
+ if ((ret = update_file(o, 0, &rename->oid, rename->mode, dst_name)))
+ ; /* fall through, do allow dst_name to be released */
+ else if (stage == 2)
+ ret = update_stages(o, rename->path, NULL, rename, add);
else
- update_stages(rename->path, NULL, add, rename);
+ ret = update_stages(o, rename->path, NULL, add, rename);
if (dst_name != rename->path)
free(dst_name);
+
+ return ret;
}
-static void conflict_rename_rename_1to2(struct merge_options *o,
+static int conflict_rename_rename_1to2(struct merge_options *o,
struct rename_conflict_info *ci)
{
/* One file was renamed in both branches, but to different names. */
struct merge_file_info mfi;
struct diff_filespec other;
struct diff_filespec *add;
- mfi = merge_file_one(o, one->path,
- one->sha1, one->mode,
- a->sha1, a->mode,
- b->sha1, b->mode,
- ci->branch1, ci->branch2);
+ if (merge_file_one(o, one->path,
+ &one->oid, one->mode,
+ &a->oid, a->mode,
+ &b->oid, b->mode,
+ ci->branch1, ci->branch2, &mfi))
+ return -1;
+
/*
* FIXME: For rename/add-source conflicts (if we could detect
* such), this is wrong. We should instead find a unique
* pathname and then either rename the add-source file to that
* unique path, or use that unique path instead of src here.
*/
- update_file(o, 0, mfi.sha, mfi.mode, one->path);
+ if (update_file(o, 0, &mfi.oid, mfi.mode, one->path))
+ return -1;
/*
* Above, we put the merged content at the merge-base's
* resolving the conflict at that path in its favor.
*/
add = filespec_from_entry(&other, ci->dst_entry1, 2 ^ 1);
- if (add)
- update_file(o, 0, add->sha1, add->mode, a->path);
+ if (add) {
+ if (update_file(o, 0, &add->oid, add->mode, a->path))
+ return -1;
+ }
else
remove_file_from_cache(a->path);
add = filespec_from_entry(&other, ci->dst_entry2, 3 ^ 1);
- if (add)
- update_file(o, 0, add->sha1, add->mode, b->path);
+ if (add) {
+ if (update_file(o, 0, &add->oid, add->mode, b->path))
+ return -1;
+ }
else
remove_file_from_cache(b->path);
- } else {
- handle_file(o, a, 2, ci);
- handle_file(o, b, 3, ci);
- }
+ } else if (handle_file(o, a, 2, ci) || handle_file(o, b, 3, ci))
+ return -1;
+
+ return 0;
}
-static void conflict_rename_rename_2to1(struct merge_options *o,
+static int conflict_rename_rename_2to1(struct merge_options *o,
struct rename_conflict_info *ci)
{
/* Two files, a & b, were renamed to the same thing, c. */
char *path = c1->path; /* == c2->path */
struct merge_file_info mfi_c1;
struct merge_file_info mfi_c2;
+ int ret;
output(o, 1, _("CONFLICT (rename/rename): "
"Rename %s->%s in %s. "
remove_file(o, 1, a->path, o->call_depth || would_lose_untracked(a->path));
remove_file(o, 1, b->path, o->call_depth || would_lose_untracked(b->path));
- mfi_c1 = merge_file_special_markers(o, a, c1, &ci->ren1_other,
- o->branch1, c1->path,
- o->branch2, ci->ren1_other.path);
- mfi_c2 = merge_file_special_markers(o, b, &ci->ren2_other, c2,
- o->branch1, ci->ren2_other.path,
- o->branch2, c2->path);
+ if (merge_file_special_markers(o, a, c1, &ci->ren1_other,
+ o->branch1, c1->path,
+ o->branch2, ci->ren1_other.path, &mfi_c1) ||
+ merge_file_special_markers(o, b, &ci->ren2_other, c2,
+ o->branch1, ci->ren2_other.path,
+ o->branch2, c2->path, &mfi_c2))
+ return -1;
if (o->call_depth) {
/*
* again later for the non-recursive merge.
*/
remove_file(o, 0, path, 0);
- update_file(o, 0, mfi_c1.sha, mfi_c1.mode, a->path);
- update_file(o, 0, mfi_c2.sha, mfi_c2.mode, b->path);
+ ret = update_file(o, 0, &mfi_c1.oid, mfi_c1.mode, a->path);
+ if (!ret)
+ ret = update_file(o, 0, &mfi_c2.oid, mfi_c2.mode,
+ b->path);
} else {
char *new_path1 = unique_path(o, path, ci->branch1);
char *new_path2 = unique_path(o, path, ci->branch2);
output(o, 1, _("Renaming %s to %s and %s to %s instead"),
a->path, new_path1, b->path, new_path2);
remove_file(o, 0, path, 0);
- update_file(o, 0, mfi_c1.sha, mfi_c1.mode, new_path1);
- update_file(o, 0, mfi_c2.sha, mfi_c2.mode, new_path2);
+ ret = update_file(o, 0, &mfi_c1.oid, mfi_c1.mode, new_path1);
+ if (!ret)
+ ret = update_file(o, 0, &mfi_c2.oid, mfi_c2.mode,
+ new_path2);
free(new_path2);
free(new_path1);
}
+
+ return ret;
}
static int process_renames(struct merge_options *o,
const char *ren2_dst = ren2->pair->two->path;
enum rename_type rename_type;
if (strcmp(ren1_src, ren2_src) != 0)
- die("ren1_src != ren2_src");
+ die("BUG: ren1_src != ren2_src");
ren2->dst_entry->processed = 1;
ren2->processed = 1;
if (strcmp(ren1_dst, ren2_dst) != 0) {
ren2 = lookup->util;
ren2_dst = ren2->pair->two->path;
if (strcmp(ren1_dst, ren2_dst) != 0)
- die("ren1_dst != ren2_dst");
+ die("BUG: ren1_dst != ren2_dst");
clean_merge = 0;
ren2->processed = 1;
remove_file(o, 1, ren1_src,
renamed_stage == 2 || !was_tracked(ren1_src));
- hashcpy(src_other.sha1, ren1->src_entry->stages[other_stage].sha);
+ oidcpy(&src_other.oid,
+ &ren1->src_entry->stages[other_stage].oid);
src_other.mode = ren1->src_entry->stages[other_stage].mode;
- hashcpy(dst_other.sha1, ren1->dst_entry->stages[other_stage].sha);
+ oidcpy(&dst_other.oid,
+ &ren1->dst_entry->stages[other_stage].oid);
dst_other.mode = ren1->dst_entry->stages[other_stage].mode;
try_merge = 0;
- if (sha_eq(src_other.sha1, null_sha1)) {
+ if (oid_eq(&src_other.oid, &null_oid)) {
setup_rename_conflict_info(RENAME_DELETE,
ren1->pair,
NULL,
NULL,
NULL);
} else if ((dst_other.mode == ren1->pair->two->mode) &&
- sha_eq(dst_other.sha1, ren1->pair->two->sha1)) {
+ oid_eq(&dst_other.oid, &ren1->pair->two->oid)) {
/*
* Added file on the other side identical to
* the file being renamed: clean merge.
* update_file_flags() instead of
* update_file().
*/
- update_file_flags(o,
- ren1->pair->two->sha1,
- ren1->pair->two->mode,
- ren1_dst,
- 1, /* update_cache */
- 0 /* update_wd */);
- } else if (!sha_eq(dst_other.sha1, null_sha1)) {
+ if (update_file_flags(o,
+ &ren1->pair->two->oid,
+ ren1->pair->two->mode,
+ ren1_dst,
+ 1, /* update_cache */
+ 0 /* update_wd */))
+ clean_merge = -1;
+ } else if (!oid_eq(&dst_other.oid, &null_oid)) {
clean_merge = 0;
try_merge = 1;
output(o, 1, _("CONFLICT (rename/add): Rename %s->%s in %s. "
ren1_dst, branch2);
if (o->call_depth) {
struct merge_file_info mfi;
- mfi = merge_file_one(o, ren1_dst, null_sha1, 0,
- ren1->pair->two->sha1, ren1->pair->two->mode,
- dst_other.sha1, dst_other.mode,
- branch1, branch2);
+ if (merge_file_one(o, ren1_dst, &null_oid, 0,
+ &ren1->pair->two->oid,
+ ren1->pair->two->mode,
+ &dst_other.oid,
+ dst_other.mode,
+ branch1, branch2, &mfi)) {
+ clean_merge = -1;
+ goto cleanup_and_return;
+ }
output(o, 1, _("Adding merged %s"), ren1_dst);
- update_file(o, 0, mfi.sha, mfi.mode, ren1_dst);
+ if (update_file(o, 0, &mfi.oid,
+ mfi.mode, ren1_dst))
+ clean_merge = -1;
try_merge = 0;
} else {
char *new_path = unique_path(o, ren1_dst, branch2);
output(o, 1, _("Adding as %s instead"), new_path);
- update_file(o, 0, dst_other.sha1, dst_other.mode, new_path);
+ if (update_file(o, 0, &dst_other.oid,
+ dst_other.mode, new_path))
+ clean_merge = -1;
free(new_path);
}
} else
try_merge = 1;
+ if (clean_merge < 0)
+ goto cleanup_and_return;
if (try_merge) {
struct diff_filespec *one, *a, *b;
src_other.path = (char *)ren1_src;
}
}
}
+cleanup_and_return:
string_list_clear(&a_by_dst, 0);
string_list_clear(&b_by_dst, 0);
return clean_merge;
}
-static unsigned char *stage_sha(const unsigned char *sha, unsigned mode)
+static struct object_id *stage_oid(const struct object_id *oid, unsigned mode)
{
- return (is_null_sha1(sha) || mode == 0) ? NULL: (unsigned char *)sha;
+ return (is_null_oid(oid) || mode == 0) ? NULL: (struct object_id *)oid;
}
-static int read_sha1_strbuf(const unsigned char *sha1, struct strbuf *dst)
+static int read_oid_strbuf(struct merge_options *o,
+ const struct object_id *oid, struct strbuf *dst)
{
void *buf;
enum object_type type;
unsigned long size;
- buf = read_sha1_file(sha1, &type, &size);
+ buf = read_sha1_file(oid->hash, &type, &size);
if (!buf)
- return error(_("cannot read object %s"), sha1_to_hex(sha1));
+ return err(o, _("cannot read object %s"), oid_to_hex(oid));
if (type != OBJ_BLOB) {
free(buf);
- return error(_("object %s is not a blob"), sha1_to_hex(sha1));
+ return err(o, _("object %s is not a blob"), oid_to_hex(oid));
}
strbuf_attach(dst, buf, size, size + 1);
return 0;
}
-static int blob_unchanged(const unsigned char *o_sha,
+static int blob_unchanged(struct merge_options *opt,
+ const struct object_id *o_oid,
unsigned o_mode,
- const unsigned char *a_sha,
+ const struct object_id *a_oid,
unsigned a_mode,
int renormalize, const char *path)
{
if (a_mode != o_mode)
return 0;
- if (sha_eq(o_sha, a_sha))
+ if (oid_eq(o_oid, a_oid))
return 1;
if (!renormalize)
return 0;
- assert(o_sha && a_sha);
- if (read_sha1_strbuf(o_sha, &o) || read_sha1_strbuf(a_sha, &a))
+ assert(o_oid && a_oid);
+ if (read_oid_strbuf(opt, o_oid, &o) || read_oid_strbuf(opt, a_oid, &a))
goto error_return;
/*
* Note: binary | is used so that both renormalizations are
return ret;
}
-static void handle_modify_delete(struct merge_options *o,
+static int handle_modify_delete(struct merge_options *o,
const char *path,
- unsigned char *o_sha, int o_mode,
- unsigned char *a_sha, int a_mode,
- unsigned char *b_sha, int b_mode)
+ struct object_id *o_oid, int o_mode,
+ struct object_id *a_oid, int a_mode,
+ struct object_id *b_oid, int b_mode)
{
- handle_change_delete(o,
- path,
- o_sha, o_mode,
- a_sha, a_mode,
- b_sha, b_mode,
- _("modify"), _("modified"));
+ return handle_change_delete(o,
+ path,
+ o_oid, o_mode,
+ a_oid, a_mode,
+ b_oid, b_mode,
+ _("modify"), _("modified"));
}
static int merge_content(struct merge_options *o,
const char *path,
- unsigned char *o_sha, int o_mode,
- unsigned char *a_sha, int a_mode,
- unsigned char *b_sha, int b_mode,
+ struct object_id *o_oid, int o_mode,
+ struct object_id *a_oid, int a_mode,
+ struct object_id *b_oid, int b_mode,
struct rename_conflict_info *rename_conflict_info)
{
const char *reason = _("content");
struct diff_filespec one, a, b;
unsigned df_conflict_remains = 0;
- if (!o_sha) {
+ if (!o_oid) {
reason = _("add/add");
- o_sha = (unsigned char *)null_sha1;
+ o_oid = (struct object_id *)&null_oid;
}
one.path = a.path = b.path = (char *)path;
- hashcpy(one.sha1, o_sha);
+ oidcpy(&one.oid, o_oid);
one.mode = o_mode;
- hashcpy(a.sha1, a_sha);
+ oidcpy(&a.oid, a_oid);
a.mode = a_mode;
- hashcpy(b.sha1, b_sha);
+ oidcpy(&b.oid, b_oid);
b.mode = b_mode;
if (rename_conflict_info) {
if (dir_in_way(path, !o->call_depth))
df_conflict_remains = 1;
}
- mfi = merge_file_special_markers(o, &one, &a, &b,
- o->branch1, path1,
- o->branch2, path2);
+ if (merge_file_special_markers(o, &one, &a, &b,
+ o->branch1, path1,
+ o->branch2, path2, &mfi))
+ return -1;
if (mfi.clean && !df_conflict_remains &&
- sha_eq(mfi.sha, a_sha) && mfi.mode == a_mode) {
+ oid_eq(&mfi.oid, a_oid) && mfi.mode == a_mode) {
int path_renamed_outside_HEAD;
output(o, 3, _("Skipped %s (merged same as existing)"), path);
/*
*/
path_renamed_outside_HEAD = !path2 || !strcmp(path, path2);
if (!path_renamed_outside_HEAD) {
- add_cacheinfo(mfi.mode, mfi.sha, path,
+ add_cacheinfo(o, mfi.mode, &mfi.oid, path,
0, (!o->call_depth), 0);
return mfi.clean;
}
output(o, 1, _("CONFLICT (%s): Merge conflict in %s"),
reason, path);
if (rename_conflict_info && !df_conflict_remains)
- update_stages(path, &one, &a, &b);
+ if (update_stages(o, path, &one, &a, &b))
+ return -1;
}
if (df_conflict_remains) {
if (o->call_depth) {
remove_file_from_cache(path);
} else {
- if (!mfi.clean)
- update_stages(path, &one, &a, &b);
- else {
+ if (!mfi.clean) {
+ if (update_stages(o, path, &one, &a, &b))
+ return -1;
+ } else {
int file_from_stage2 = was_tracked(path);
struct diff_filespec merged;
- hashcpy(merged.sha1, mfi.sha);
+ oidcpy(&merged.oid, &mfi.oid);
merged.mode = mfi.mode;
- update_stages(path, NULL,
- file_from_stage2 ? &merged : NULL,
- file_from_stage2 ? NULL : &merged);
+ if (update_stages(o, path, NULL,
+ file_from_stage2 ? &merged : NULL,
+ file_from_stage2 ? NULL : &merged))
+ return -1;
}
}
new_path = unique_path(o, path, rename_conflict_info->branch1);
output(o, 1, _("Adding as %s instead"), new_path);
- update_file(o, 0, mfi.sha, mfi.mode, new_path);
+ if (update_file(o, 0, &mfi.oid, mfi.mode, new_path)) {
+ free(new_path);
+ return -1;
+ }
free(new_path);
mfi.clean = 0;
- } else {
- update_file(o, mfi.clean, mfi.sha, mfi.mode, path);
- }
+ } else if (update_file(o, mfi.clean, &mfi.oid, mfi.mode, path))
+ return -1;
return mfi.clean;
-
}
/* Per entry merge function */
unsigned o_mode = entry->stages[1].mode;
unsigned a_mode = entry->stages[2].mode;
unsigned b_mode = entry->stages[3].mode;
- unsigned char *o_sha = stage_sha(entry->stages[1].sha, o_mode);
- unsigned char *a_sha = stage_sha(entry->stages[2].sha, a_mode);
- unsigned char *b_sha = stage_sha(entry->stages[3].sha, b_mode);
+ struct object_id *o_oid = stage_oid(&entry->stages[1].oid, o_mode);
+ struct object_id *a_oid = stage_oid(&entry->stages[2].oid, a_mode);
+ struct object_id *b_oid = stage_oid(&entry->stages[3].oid, b_mode);
entry->processed = 1;
if (entry->rename_conflict_info) {
case RENAME_NORMAL:
case RENAME_ONE_FILE_TO_ONE:
clean_merge = merge_content(o, path,
- o_sha, o_mode, a_sha, a_mode, b_sha, b_mode,
+ o_oid, o_mode, a_oid, a_mode, b_oid, b_mode,
conflict_info);
break;
case RENAME_DELETE:
clean_merge = 0;
- conflict_rename_delete(o, conflict_info->pair1,
- conflict_info->branch1,
- conflict_info->branch2);
+ if (conflict_rename_delete(o,
+ conflict_info->pair1,
+ conflict_info->branch1,
+ conflict_info->branch2))
+ clean_merge = -1;
break;
case RENAME_ONE_FILE_TO_TWO:
clean_merge = 0;
- conflict_rename_rename_1to2(o, conflict_info);
+ if (conflict_rename_rename_1to2(o, conflict_info))
+ clean_merge = -1;
break;
case RENAME_TWO_FILES_TO_ONE:
clean_merge = 0;
- conflict_rename_rename_2to1(o, conflict_info);
+ if (conflict_rename_rename_2to1(o, conflict_info))
+ clean_merge = -1;
break;
default:
entry->processed = 0;
break;
}
- } else if (o_sha && (!a_sha || !b_sha)) {
+ } else if (o_oid && (!a_oid || !b_oid)) {
/* Case A: Deleted in one */
- if ((!a_sha && !b_sha) ||
- (!b_sha && blob_unchanged(o_sha, o_mode, a_sha, a_mode, normalize, path)) ||
- (!a_sha && blob_unchanged(o_sha, o_mode, b_sha, b_mode, normalize, path))) {
+ if ((!a_oid && !b_oid) ||
+ (!b_oid && blob_unchanged(o, o_oid, o_mode, a_oid, a_mode, normalize, path)) ||
+ (!a_oid && blob_unchanged(o, o_oid, o_mode, b_oid, b_mode, normalize, path))) {
/* Deleted in both or deleted in one and
* unchanged in the other */
- if (a_sha)
+ if (a_oid)
output(o, 2, _("Removing %s"), path);
/* do not touch working file if it did not exist */
- remove_file(o, 1, path, !a_sha);
+ remove_file(o, 1, path, !a_oid);
} else {
/* Modify/delete; deleted side may have put a directory in the way */
clean_merge = 0;
- handle_modify_delete(o, path, o_sha, o_mode,
- a_sha, a_mode, b_sha, b_mode);
+ if (handle_modify_delete(o, path, o_oid, o_mode,
+ a_oid, a_mode, b_oid, b_mode))
+ clean_merge = -1;
}
- } else if ((!o_sha && a_sha && !b_sha) ||
- (!o_sha && !a_sha && b_sha)) {
+ } else if ((!o_oid && a_oid && !b_oid) ||
+ (!o_oid && !a_oid && b_oid)) {
/* Case B: Added in one. */
/* [nothing|directory] -> ([nothing|directory], file) */
const char *add_branch;
const char *other_branch;
unsigned mode;
- const unsigned char *sha;
+ const struct object_id *oid;
const char *conf;
- if (a_sha) {
+ if (a_oid) {
add_branch = o->branch1;
other_branch = o->branch2;
mode = a_mode;
- sha = a_sha;
+ oid = a_oid;
conf = _("file/directory");
} else {
add_branch = o->branch2;
other_branch = o->branch1;
mode = b_mode;
- sha = b_sha;
+ oid = b_oid;
conf = _("directory/file");
}
if (dir_in_way(path, !o->call_depth)) {
output(o, 1, _("CONFLICT (%s): There is a directory with name %s in %s. "
"Adding %s as %s"),
conf, path, other_branch, path, new_path);
- update_file(o, 0, sha, mode, new_path);
- if (o->call_depth)
+ if (update_file(o, 0, oid, mode, new_path))
+ clean_merge = -1;
+ else if (o->call_depth)
remove_file_from_cache(path);
free(new_path);
} else {
output(o, 2, _("Adding %s"), path);
/* do not overwrite file if already present */
- update_file_flags(o, sha, mode, path, 1, !a_sha);
+ if (update_file_flags(o, oid, mode, path, 1, !a_oid))
+ clean_merge = -1;
}
- } else if (a_sha && b_sha) {
+ } else if (a_oid && b_oid) {
/* Case C: Added in both (check for same permissions) and */
/* case D: Modified in both, but differently. */
clean_merge = merge_content(o, path,
- o_sha, o_mode, a_sha, a_mode, b_sha, b_mode,
+ o_oid, o_mode, a_oid, a_mode, b_oid, b_mode,
NULL);
- } else if (!o_sha && !a_sha && !b_sha) {
+ } else if (!o_oid && !a_oid && !b_oid) {
/*
* this entry was deleted altogether. a_mode == 0 means
* we had that path and want to actively remove it.
*/
remove_file(o, 1, path, !a_mode);
} else
- die(_("Fatal merge failure, shouldn't happen."));
+ die("BUG: fatal merge failure, shouldn't happen.");
return clean_merge;
}
common = shift_tree_object(head, common, o->subtree_shift);
}
- if (sha_eq(common->object.oid.hash, merge->object.oid.hash)) {
+ if (oid_eq(&common->object.oid, &merge->object.oid)) {
output(o, 0, _("Already up-to-date!"));
*result = head;
return 1;
if (code != 0) {
if (show(o, 4) || o->call_depth)
- die(_("merging of trees %s and %s failed"),
+ err(o, _("merging of trees %s and %s failed"),
oid_to_hex(&head->object.oid),
oid_to_hex(&merge->object.oid));
- else
- exit(128);
+ return -1;
}
if (unmerged_cache()) {
re_head = get_renames(o, head, common, head, merge, entries);
re_merge = get_renames(o, merge, common, head, merge, entries);
clean = process_renames(o, re_head, re_merge);
+ if (clean < 0)
+ return clean;
for (i = entries->nr-1; 0 <= i; i--) {
const char *path = entries->items[i].string;
struct stage_data *e = entries->items[i].util;
- if (!e->processed
- && !process_entry(o, path, e))
- clean = 0;
+ if (!e->processed) {
+ int ret = process_entry(o, path, e);
+ if (!ret)
+ clean = 0;
+ else if (ret < 0)
+ return ret;
+ }
}
for (i = 0; i < entries->nr; i++) {
struct stage_data *e = entries->items[i].util;
if (!e->processed)
- die(_("Unprocessed path??? %s"),
+ die("BUG: unprocessed path??? %s",
entries->items[i].string);
}
else
clean = 1;
- if (o->call_depth)
- *result = write_tree_from_memory(o);
+ if (o->call_depth && !(*result = write_tree_from_memory(o)))
+ return -1;
return clean;
}
/*
* When the merge fails, the result contains files
* with conflict markers. The cleanness flag is
- * ignored, it was never actually used, as result of
- * merge_trees has always overwritten it: the committed
- * "conflicts" were already resolved.
+	 * ignored (unless it indicates an error); it was never
+	 * actually used, as the result of merge_trees has always
+	 * overwritten it: the committed "conflicts" were
+	 * already resolved.
*/
discard_cache();
saved_b1 = o->branch1;
saved_b2 = o->branch2;
o->branch1 = "Temporary merge branch 1";
o->branch2 = "Temporary merge branch 2";
- merge_recursive(o, merged_common_ancestors, iter->item,
- NULL, &merged_common_ancestors);
+ if (merge_recursive(o, merged_common_ancestors, iter->item,
+ NULL, &merged_common_ancestors) < 0)
+ return -1;
o->branch1 = saved_b1;
o->branch2 = saved_b2;
o->call_depth--;
if (!merged_common_ancestors)
- die(_("merge returned no commit"));
+ return err(o, _("merge returned no commit"));
}
discard_cache();
o->ancestor = "merged common ancestors";
clean = merge_trees(o, h1->tree, h2->tree, merged_common_ancestors->tree,
&mrtree);
+ if (clean < 0) {
+ flush_output(o);
+ return clean;
+ }
if (o->call_depth) {
*result = make_virtual_commit(mrtree, "merged tree");
commit_list_insert(h2, &(*result)->parents->next);
}
flush_output(o);
+ if (!o->call_depth && o->buffer_output < 2)
+ strbuf_release(&o->obuf);
if (show(o, 2))
diff_warn_rename_limit("merge.renamelimit",
o->needed_rename_limit, 0);
return clean;
}
-static struct commit *get_ref(const unsigned char *sha1, const char *name)
+static struct commit *get_ref(const struct object_id *oid, const char *name)
{
struct object *object;
- object = deref_tag(parse_object(sha1), name, strlen(name));
+ object = deref_tag(parse_object(oid->hash), name, strlen(name));
if (!object)
return NULL;
if (object->type == OBJ_TREE)
}
int merge_recursive_generic(struct merge_options *o,
- const unsigned char *head,
- const unsigned char *merge,
+ const struct object_id *head,
+ const struct object_id *merge,
int num_base_list,
- const unsigned char **base_list,
+ const struct object_id **base_list,
struct commit **result)
{
int clean;
int i;
for (i = 0; i < num_base_list; ++i) {
struct commit *base;
- if (!(base = get_ref(base_list[i], sha1_to_hex(base_list[i]))))
- return error(_("Could not parse object '%s'"),
- sha1_to_hex(base_list[i]));
+ if (!(base = get_ref(base_list[i], oid_to_hex(base_list[i]))))
+ return err(o, _("Could not parse object '%s'"),
+ oid_to_hex(base_list[i]));
commit_list_insert(base, &ca);
}
}
hold_locked_index(lock, 1);
clean = merge_recursive(o, head_commit, next_commit, ca,
result);
+ if (clean < 0)
+ return clean;
+
if (active_cache_changed &&
write_locked_index(&the_index, lock, COMMIT_LOCK))
- return error(_("Unable to write index."));
+ return err(o, _("Unable to write index."));
return clean ? 0 : 1;
}
MERGE_RECURSIVE_THEIRS
} recursive_variant;
const char *subtree_shift;
- unsigned buffer_output : 1;
+ unsigned buffer_output; /* 1: output at end, 2: keep buffered */
unsigned renormalize : 1;
long xdl_opts;
int verbosity;
* virtual commits and call merge_recursive() proper.
*/
int merge_recursive_generic(struct merge_options *o,
- const unsigned char *head,
- const unsigned char *merge,
+ const struct object_id *head,
+ const struct object_id *merge,
int num_ca,
- const unsigned char **ca,
+ const struct object_id **ca,
struct commit **result);
void init_merge_options(struct merge_options *o);
--- /dev/null
+#include "cache.h"
+#include "mru.h"
+
+void mru_append(struct mru *mru, void *item)
+{
+ struct mru_entry *cur = xmalloc(sizeof(*cur));
+ cur->item = item;
+ cur->prev = mru->tail;
+ cur->next = NULL;
+
+ if (mru->tail)
+ mru->tail->next = cur;
+ else
+ mru->head = cur;
+ mru->tail = cur;
+}
+
+void mru_mark(struct mru *mru, struct mru_entry *entry)
+{
+ /* If we're already at the front of the list, nothing to do */
+ if (mru->head == entry)
+ return;
+
+ /* Otherwise, remove us from our current slot... */
+ if (entry->prev)
+ entry->prev->next = entry->next;
+ if (entry->next)
+ entry->next->prev = entry->prev;
+ else
+ mru->tail = entry->prev;
+
+ /* And insert us at the beginning. */
+ entry->prev = NULL;
+ entry->next = mru->head;
+ if (mru->head)
+ mru->head->prev = entry;
+ mru->head = entry;
+}
+
+void mru_clear(struct mru *mru)
+{
+ struct mru_entry *p = mru->head;
+
+ while (p) {
+ struct mru_entry *to_free = p;
+ p = p->next;
+ free(to_free);
+ }
+ mru->head = mru->tail = NULL;
+}
--- /dev/null
+#ifndef MRU_H
+#define MRU_H
+
+/**
+ * A simple most-recently-used cache, backed by a doubly-linked list.
+ *
+ * Usage is roughly:
+ *
+ * // Create a list. Zero-initialization is required.
+ * static struct mru cache;
+ * mru_append(&cache, item);
+ * ...
+ *
+ * // Iterate in MRU order.
+ * struct mru_entry *p;
+ * for (p = cache.head; p; p = p->next) {
+ * if (matches(p->item))
+ * break;
+ * }
+ *
+ * // Mark an item as used, moving it to the front of the list.
+ * mru_mark(&cache, p);
+ *
+ * // Reset the list to empty, cleaning up all resources.
+ * mru_clear(&cache);
+ *
+ * Note that you SHOULD NOT call mru_mark() and then continue traversing the
+ * list; it reorders the marked item to the front of the list, and therefore
+ * you will begin traversing the whole list again.
+ */
+
+struct mru_entry {
+ void *item;
+ struct mru_entry *prev, *next;
+};
+
+struct mru {
+ struct mru_entry *head, *tail;
+};
+
+void mru_append(struct mru *mru, void *item);
+void mru_mark(struct mru *mru, struct mru_entry *entry);
+void mru_clear(struct mru *mru);
+
+#endif /* MRU_H */
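
A minimal usage sketch of the MRU API above, assuming string items and a
caller-owned list called `recently_used` (both are illustrative and not part
of this patch):

	#include "cache.h"
	#include "mru.h"

	static struct mru recently_used;	/* zero-initialization is required */

	/* Find a previously appended string, promoting any hit to the front. */
	static const char *lookup(const char *key)
	{
		struct mru_entry *p;

		for (p = recently_used.head; p; p = p->next) {
			const char *item = p->item;

			if (strcmp(item, key))
				continue;
			/* Re-order the hit; do not keep walking the list after this. */
			mru_mark(&recently_used, p);
			return item;
		}
		return NULL;
	}

Entries would be added up front with `mru_append(&recently_used, item)`, and
`mru_clear(&recently_used)` releases the list nodes (but not the items).
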
switch (p->status) {
case DIFF_STATUS_MODIFIED:
assert(p->one->mode == p->two->mode);
- assert(!is_null_sha1(p->one->sha1));
- assert(!is_null_sha1(p->two->sha1));
+ assert(!is_null_oid(&p->one->oid));
+ assert(!is_null_oid(&p->two->oid));
break;
case DIFF_STATUS_ADDED:
- assert(is_null_sha1(p->one->sha1));
+ assert(is_null_oid(&p->one->oid));
break;
case DIFF_STATUS_DELETED:
- assert(is_null_sha1(p->two->sha1));
+ assert(is_null_oid(&p->two->oid));
break;
default:
return -1;
if (verify_notes_filepair(p, obj)) {
trace_printf("\t\tCannot merge entry '%s' (%c): "
"%.7s -> %.7s. Skipping!\n", p->one->path,
- p->status, sha1_to_hex(p->one->sha1),
- sha1_to_hex(p->two->sha1));
+ p->status, oid_to_hex(&p->one->oid),
+ oid_to_hex(&p->two->oid));
continue;
}
mp = find_notes_merge_pair_pos(changes, len, obj, 1, &occupied);
if (occupied) {
/* We've found an addition/deletion pair */
assert(!hashcmp(mp->obj, obj));
- if (is_null_sha1(p->one->sha1)) { /* addition */
+ if (is_null_oid(&p->one->oid)) { /* addition */
assert(is_null_sha1(mp->remote));
- hashcpy(mp->remote, p->two->sha1);
- } else if (is_null_sha1(p->two->sha1)) { /* deletion */
+ hashcpy(mp->remote, p->two->oid.hash);
+ } else if (is_null_oid(&p->two->oid)) { /* deletion */
assert(is_null_sha1(mp->base));
- hashcpy(mp->base, p->one->sha1);
+ hashcpy(mp->base, p->one->oid.hash);
} else
assert(!"Invalid existing change recorded");
} else {
hashcpy(mp->obj, obj);
- hashcpy(mp->base, p->one->sha1);
+ hashcpy(mp->base, p->one->oid.hash);
hashcpy(mp->local, uninitialized);
- hashcpy(mp->remote, p->two->sha1);
+ hashcpy(mp->remote, p->two->oid.hash);
len++;
}
trace_printf("\t\tStored remote change for %s: %.7s -> %.7s\n",
sha1_to_hex(mp->remote));
}
diff_flush(&opt);
- free_pathspec(&opt.pathspec);
+ clear_pathspec(&opt.pathspec);
*num_changes = len;
return changes;
if (verify_notes_filepair(p, obj)) {
trace_printf("\t\tCannot merge entry '%s' (%c): "
"%.7s -> %.7s. Skipping!\n", p->one->path,
- p->status, sha1_to_hex(p->one->sha1),
- sha1_to_hex(p->two->sha1));
+ p->status, oid_to_hex(&p->one->oid),
+ oid_to_hex(&p->two->oid));
continue;
}
mp = find_notes_merge_pair_pos(changes, len, obj, 0, &match);
if (!match) {
trace_printf("\t\tIgnoring local-only change for %s: "
"%.7s -> %.7s\n", sha1_to_hex(obj),
- sha1_to_hex(p->one->sha1),
- sha1_to_hex(p->two->sha1));
+ oid_to_hex(&p->one->oid),
+ oid_to_hex(&p->two->oid));
continue;
}
assert(!hashcmp(mp->obj, obj));
- if (is_null_sha1(p->two->sha1)) { /* deletion */
+ if (is_null_oid(&p->two->oid)) { /* deletion */
/*
* Either this is a true deletion (1), or it is part
* of an A/D pair (2), or D/A pair (3):
*/
if (!hashcmp(mp->local, uninitialized))
hashclr(mp->local);
- } else if (is_null_sha1(p->one->sha1)) { /* addition */
+ } else if (is_null_oid(&p->one->oid)) { /* addition */
/*
* Either this is a true addition (1), or it is part
* of an A/D pair (2), or D/A pair (3):
*/
assert(is_null_sha1(mp->local) ||
!hashcmp(mp->local, uninitialized));
- hashcpy(mp->local, p->two->sha1);
+ hashcpy(mp->local, p->two->oid.hash);
} else { /* modification */
/*
* This is a true modification. p->one->sha1 shall
* match mp->base, and mp->local shall be uninitialized.
* Set mp->local to p->two->sha1.
*/
- assert(!hashcmp(p->one->sha1, mp->base));
+ assert(!hashcmp(p->one->oid.hash, mp->base));
assert(!hashcmp(mp->local, uninitialized));
- hashcpy(mp->local, p->two->sha1);
+ hashcpy(mp->local, p->two->oid.hash);
}
trace_printf("\t\tStored local change for %s: %.7s -> %.7s\n",
sha1_to_hex(mp->obj), sha1_to_hex(mp->base),
sha1_to_hex(mp->local));
}
diff_flush(&opt);
- free_pathspec(&opt.pathspec);
+ clear_pathspec(&opt.pathspec);
}
static void check_notes_merge_worktree(struct notes_merge_options *o)
char *path = git_pathdup(NOTES_MERGE_WORKTREE "/%s", sha1_to_hex(obj));
if (safe_create_leading_directories_const(path))
die_errno("unable to create directory for '%s'", path);
- if (file_exists(path))
- die("found existing file at '%s'", path);
- fd = open(path, O_WRONLY | O_TRUNC | O_CREAT, 0666);
- if (fd < 0)
- die_errno("failed to open '%s'", path);
+ fd = xopen(path, O_WRONLY | O_EXCL | O_CREAT, 0666);
while (size > 0) {
long ret = write_in_full(fd, buf, size);
struct notes_tree default_notes_tree;
-static struct string_list display_notes_refs;
+static struct string_list display_notes_refs = STRING_LIST_INIT_NODUP;
static struct notes_tree **display_notes_trees;
static void load_subtree(struct notes_tree *t, struct leaf_node *subtree,
void *data;
enum object_type type;
unsigned long size;
+ off_t curpos;
+ int data_valid;
if (p->index_version > 1) {
off_t offset = entries[i].offset;
sha1_to_hex(entries[i].sha1),
p->pack_name, (uintmax_t)offset);
}
- data = unpack_entry(p, entries[i].offset, &type, &size);
- if (!data)
+
+ curpos = entries[i].offset;
+ type = unpack_object_header(p, w_curs, &curpos, &size);
+ unuse_pack(w_curs);
+
+ if (type == OBJ_BLOB && big_file_threshold <= size) {
+ /*
+ * Let check_sha1_signature() check it with
+ * the streaming interface; no point slurping
+ * the data in-core only to discard.
+ */
+ data = NULL;
+ data_valid = 0;
+ } else {
+ data = unpack_entry(p, entries[i].offset, &type, &size);
+ data_valid = 1;
+ }
+
+ if (data_valid && !data)
err = error("cannot unpack %s from %s at offset %"PRIuMAX"",
sha1_to_hex(entries[i].sha1), p->pack_name,
(uintmax_t)entries[i].offset);
die_errno("unable to make temporary index file readable");
strbuf_addf(name_buffer, "%s.pack", sha1_to_hex(sha1));
- free_pack_by_name(name_buffer->buf);
if (rename(pack_tmp_name, name_buffer->buf))
die_errno("unable to rename temporary pack file");
struct progress;
+/* Note: the data argument may be NULL when the object type is a blob. */
typedef int (*verify_fn)(const unsigned char*, enum object_type, unsigned long, void*, int*);
extern const char *write_idx_file(const char *index_name, struct pack_idx_entry **objects, int nr_objects, const struct pack_idx_option *, const unsigned char *sha1);
return pager;
}
+static void setup_pager_env(struct argv_array *env)
+{
+ const char **argv;
+ int i;
+ char *pager_env = xstrdup(PAGER_ENV);
+ int n = split_cmdline(pager_env, &argv);
+
+ if (n < 0)
+ die("malformed build-time PAGER_ENV: %s",
+ split_cmdline_strerror(n));
+
+ for (i = 0; i < n; i++) {
+ char *cp = strchr(argv[i], '=');
+
+ if (!cp)
+ die("malformed build-time PAGER_ENV");
+
+ *cp = '\0';
+ if (!getenv(argv[i])) {
+ *cp = '=';
+ argv_array_push(env, argv[i]);
+ }
+ }
+ free(pager_env);
+ free(argv);
+}
+
void prepare_pager_args(struct child_process *pager_process, const char *pager)
{
argv_array_push(&pager_process->args, pager);
pager_process->use_shell = 1;
- if (!getenv("LESS"))
- argv_array_push(&pager_process->env_array, "LESS=FRX");
- if (!getenv("LV"))
- argv_array_push(&pager_process->env_array, "LV=-c");
+ setup_pager_env(&pager_process->env_array);
}
void setup_pager(void)
return 0;
}
-int parse_options_concat(struct option *dst, size_t dst_size, struct option *src)
+struct option *parse_options_concat(struct option *a, struct option *b)
{
- int i, j;
-
- for (i = 0; i < dst_size; i++)
- if (dst[i].type == OPTION_END)
- break;
- for (j = 0; i < dst_size; i++, j++) {
- dst[i] = src[j];
- if (src[j].type == OPTION_END)
- return 0;
- }
- return -1;
+ struct option *ret;
+ size_t i, a_len = 0, b_len = 0;
+
+ for (i = 0; a[i].type != OPTION_END; i++)
+ a_len++;
+ for (i = 0; b[i].type != OPTION_END; i++)
+ b_len++;
+
+ ALLOC_ARRAY(ret, st_add3(a_len, b_len, 1));
+ for (i = 0; i < a_len; i++)
+ ret[i] = a[i];
+ for (i = 0; i < b_len; i++)
+ ret[a_len + i] = b[i];
+ ret[a_len + b_len] = b[b_len]; /* final OPTION_END */
+
+ return ret;
}
int parse_opt_string_list(const struct option *opt, const char *arg, int unset)
if (!arg)
return -1;
- string_list_append(v, xstrdup(arg));
+ string_list_append(v, arg);
return 0;
}
extern int parse_options_end(struct parse_opt_ctx_t *ctx);
-extern int parse_options_concat(struct option *dst, size_t, struct option *src);
+extern struct option *parse_options_concat(struct option *a, struct option *b);
/*----- some often used options -----*/
extern int parse_opt_abbrev_cb(const struct option *, const char *, int);
get_index_file(), strlen(get_index_file()));
else if (git_db_env && dir_prefix(base, "objects"))
replace_dir(buf, git_dir_len + 7, get_object_directory());
+ else if (git_hooks_path && dir_prefix(base, "hooks"))
+ replace_dir(buf, git_dir_len + 5, git_hooks_path);
else if (git_common_dir_env)
update_common_dir(buf, git_dir_len, NULL);
}
strbuf_addstr(buf, git_dir);
}
strbuf_addch(buf, '/');
- strbuf_addstr(&git_submodule_dir, buf->buf);
+ strbuf_addbuf(&git_submodule_dir, buf);
strbuf_vaddf(buf, fmt, args);
sizeof(struct pathspec_item) * dst->nr);
}
-void free_pathspec(struct pathspec *pathspec)
+void clear_pathspec(struct pathspec *pathspec)
{
free(pathspec->items);
pathspec->items = NULL;
#define PATHSPEC_ONESTAR 1 /* the pathspec pattern satisfies GFNM_ONESTAR */
struct pathspec {
- const char **_raw; /* get_pathspec() result, not freed by free_pathspec() */
+ const char **_raw; /* get_pathspec() result, not freed by clear_pathspec() */
int nr;
unsigned int has_wildcard:1;
unsigned int recursive:1;
const char *prefix,
const char **args);
extern void copy_pathspec(struct pathspec *dst, const struct pathspec *src);
-extern void free_pathspec(struct pathspec *);
+extern void clear_pathspec(struct pathspec *);
static inline int ps_strncmp(const struct pathspec_item *item,
const char *s1, const char *s2, size_t n)
"existing: $existing\n",
" globbed: $refname\n";
}
- my $u = (::cmt_metadata("$refname"))[0] or die
- "$refname: no associated commit metadata\n";
+ my $u = (::cmt_metadata("$refname"))[0];
+ if (!defined($u)) {
+ warn
+"W: $refname: no associated commit metadata from SVN, skipping\n";
+ next;
+ }
$u =~ s!^\Q$url\E(/|$)!! or die
"$refname: '$url' not found in '$u'\n";
if ($pathname ne $u) {
my @parents = @$parents;
my $props = $ed->{dir_prop}{$self->path};
- if ( $props->{"svk:merge"} ) {
- $self->find_extra_svk_parents($props->{"svk:merge"}, \@parents);
- }
- if ( $props->{"svn:mergeinfo"} ) {
- my $mi_changes = $self->mergeinfo_changes
- ($parent_path, $parent_rev,
- $self->path, $rev,
- $props->{"svn:mergeinfo"});
- $self->find_extra_svn_parents($mi_changes, \@parents);
+ if ($self->follow_parent) {
+ my $tickets = $props->{"svk:merge"};
+ if ($tickets) {
+ $self->find_extra_svk_parents($tickets, \@parents);
+ }
+
+ my $mergeinfo_prop = $props->{"svn:mergeinfo"};
+ if ($mergeinfo_prop) {
+ my $mi_changes = $self->mergeinfo_changes(
+ $parent_path,
+ $parent_rev,
+ $self->path,
+ $rev,
+ $mergeinfo_prop);
+ $self->find_extra_svn_parents($mi_changes, \@parents);
+ }
}
open my $un, '>>', "$self->{dir}/unhandled.log" or croak $!;
msgstr ""
"Project-Id-Version: Git\n"
"Report-Msgid-Bugs-To: Git Mailing List <git@vger.kernel.org>\n"
-"POT-Creation-Date: 2010-09-20 14:44+0000\n"
-"PO-Revision-Date: 2010-06-05 19:06 +0000\n"
-"Last-Translator: Ævar Arnfjörð Bjarmason <avarab@gmail.com>\n"
+"POT-Creation-Date: 2016-06-17 18:55+0000\n"
+"PO-Revision-Date: 2016-06-17 19:17+0000\n"
+"Last-Translator: Vasco Almeida <vascomalmeida@sapo.pt>\n"
"Language-Team: Git Mailing List <git@vger.kernel.org>\n"
"Language: is\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
+"X-Generator: Poedit 1.8.5\n"
#. TRANSLATORS: This is a test. You don't need to translate it.
#: t/t0200/test.c:5
msgid "TEST: A Perl test variable %s"
msgstr "TILRAUN: Perl tilraunastrengur með breytunni %s"
-#. TRANSLATORS: The first '%s' is either "Reinitialized
-#. existing" or "Initialized empty", the second " shared" or
-#. "", and the last '%s%s' is the verbatim directory name.
-#: builtin/init-db.c:355
+#: builtin/init-db.c:402
#, c-format
-msgid "%s%s Git repository in %s%s\n"
-msgstr "%s%s Git lind à %s%s\n"
+msgid "Reinitialized existing shared Git repository in %s%s\n"
+msgstr "Endurgerði Git lind à %s%s\n"
-#: builtin/init-db.c:356
-msgid "Reinitialized existing"
-msgstr "Endurgerði"
+#: builtin/init-db.c:403
+#, c-format
+msgid "Reinitialized existing Git repository in %s%s\n"
+msgstr "Endurgerði Git lind à %s%s\n"
+
+#: builtin/init-db.c:407
+#, c-format
+msgid "Initialized empty shared Git repository in %s%s\n"
+msgstr "Bjó til tóma sameiginlega Git lind à %s%s\n"
+
+#: builtin/init-db.c:408
+#, c-format
+msgid "Initialized empty Git repository in %s%s\n"
+msgstr "Bjó til tóma Git lind à %s%s\n"
+
+#~ msgid "Reinitialized existing"
+#~ msgstr "Endurgerði"
-#: builtin/init-db.c:356
-msgid "Initialized empty"
-msgstr "Bjó til tóma"
+#~ msgid "Initialized empty"
+#~ msgstr "Bjó til tóma"
-#: builtin/init-db.c:357
-msgid " shared"
-msgstr " sameiginlega"
+#~ msgid " shared"
+#~ msgstr " sameiginlega"
{ "medium", CMIT_FMT_MEDIUM, 0, 8 },
{ "short", CMIT_FMT_SHORT, 0, 0 },
{ "email", CMIT_FMT_EMAIL, 0, 0 },
+ { "mboxrd", CMIT_FMT_MBOXRD, 0, 0 },
{ "fuller", CMIT_FMT_FULLER, 0, 8 },
{ "full", CMIT_FMT_FULL, 0, 8 },
{ "oneline", CMIT_FMT_ONELINE, 1, 0 }
if (pp->mailmap)
map_user(pp->mailmap, &mailbuf, &maillen, &namebuf, &namelen);
- if (pp->fmt == CMIT_FMT_EMAIL) {
+ if (cmit_fmt_is_mail(pp->fmt)) {
if (pp->from_ident && ident_cmp(pp->from_ident, &ident)) {
struct strbuf buf = STRBUF_INIT;
show_ident_date(&ident, &pp->date_mode));
break;
case CMIT_FMT_EMAIL:
+ case CMIT_FMT_MBOXRD:
strbuf_addf(sb, "Date: %s\n",
show_ident_date(&ident, DATE_MODE(RFC2822)));
break;
}
}
-static int is_empty_line(const char *line, int *len_p)
+static int is_blank_line(const char *line, int *len_p)
{
int len = *len_p;
while (len && isspace(line[len - 1]))
return !len;
}
-static const char *skip_empty_lines(const char *msg)
+const char *skip_blank_lines(const char *msg)
{
for (;;) {
int linelen = get_one_line(msg);
int ll = linelen;
if (!linelen)
break;
- if (!is_empty_line(msg, &ll))
+ if (!is_blank_line(msg, &ll))
break;
msg += linelen;
}
{
struct commit_list *parent = commit->parents;
- if ((pp->fmt == CMIT_FMT_ONELINE) || (pp->fmt == CMIT_FMT_EMAIL) ||
+ if ((pp->fmt == CMIT_FMT_ONELINE) || (cmit_fmt_is_mail(pp->fmt)) ||
!parent || !parent->next)
return;
int linelen = get_one_line(line);
msg += linelen;
- if (!linelen || is_empty_line(line, &linelen))
+ if (!linelen || is_blank_line(line, &linelen))
break;
if (!sb)
const char *msg = c->message + c->message_off;
const char *start = c->message;
- msg = skip_empty_lines(msg);
+ msg = skip_blank_lines(msg);
c->subject_off = msg - start;
msg = format_subject(NULL, msg, NULL);
- msg = skip_empty_lines(msg);
+ msg = skip_blank_lines(msg);
c->body_off = msg - start;
c->commit_message_parsed = 1;
int width;
if (!end || end == start)
return 0;
- width = strtoul(start, &next, 10);
+ width = strtol(start, &next, 10);
if (next == start || width == 0)
return 0;
+ if (width < 0) {
+ if (to_column)
+ width += term_columns();
+ if (width < 0)
+ return 0;
+ }
c->padding = to_column ? -width : width;
c->flush_type = flush_type;
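Reading the new branch above: a negative width is only meaningful for the column-aligning placeholders (to_column), where it is taken relative to term_columns(); for example, on an 80-column terminal a format such as %>|(-10) (illustrative) behaves like %>|(70), and a value that would still be negative after the adjustment is not accepted.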
switch (placeholder[0]) {
case 'C':
if (starts_with(placeholder + 1, "(auto)")) {
- c->auto_color = 1;
+ c->auto_color = want_color(c->pretty_ctx->color);
return 7; /* consumed 7 bytes, "C(auto)" */
} else {
int ret = parse_color(sb, placeholder, c);
strbuf_addstr(sb, diff_get_color(c->auto_color, DIFF_RESET));
return 1;
}
- strbuf_addstr(sb, find_unique_abbrev(commit->object.oid.hash,
- c->pretty_ctx->abbrev));
+ strbuf_add_unique_abbrev(sb, commit->object.oid.hash,
+ c->pretty_ctx->abbrev);
strbuf_addstr(sb, diff_get_color(c->auto_color, DIFF_RESET));
c->abbrev_commit_hash.len = sb->len - c->abbrev_commit_hash.off;
return 1;
case 't': /* abbreviated tree hash */
if (add_again(sb, &c->abbrev_tree_hash))
return 1;
- strbuf_addstr(sb, find_unique_abbrev(commit->tree->object.oid.hash,
- c->pretty_ctx->abbrev));
+ strbuf_add_unique_abbrev(sb, commit->tree->object.oid.hash,
+ c->pretty_ctx->abbrev);
c->abbrev_tree_hash.len = sb->len - c->abbrev_tree_hash.off;
return 1;
case 'P': /* parent hashes */
for (p = commit->parents; p; p = p->next) {
if (p != commit->parents)
strbuf_addch(sb, ' ');
- strbuf_addstr(sb, find_unique_abbrev(
- p->item->object.oid.hash,
- c->pretty_ctx->abbrev));
+ strbuf_add_unique_abbrev(sb, p->item->object.oid.hash,
+ c->pretty_ctx->abbrev);
}
c->abbrev_parent_hashes.len = sb->len -
c->abbrev_parent_hashes.off;
if (!start)
start = sb->buf;
occupied = utf8_strnwidth(start, -1, 1);
+ occupied += c->pretty_ctx->graph_width;
padding = (-padding) - occupied;
}
while (1) {
if (pp->after_subject) {
strbuf_addstr(sb, pp->after_subject);
}
- if (pp->fmt == CMIT_FMT_EMAIL) {
+ if (cmit_fmt_is_mail(pp->fmt)) {
strbuf_addch(sb, '\n');
}
strbuf_add(sb, line, linelen);
}
+static int is_mboxrd_from(const char *line, int len)
+{
+ /*
+ * a line matching /^From $/ here would only have len == 4
+	 * at this point because is_blank_line() would've trimmed all
+ * trailing space
+ */
+ return len > 4 && starts_with(line + strspn(line, ">"), "From ");
+}
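With the new "mboxrd" entry in the format table above, body lines that already look like mbox message separators gain one more level of '>' quoting, so a later consumer can strip exactly one '>' from /^>+From / lines and recover the original text; a small worked example:

	From here on ...     is emitted as   >From here on ...
	>From here on ...    is emitted as   >>From here on ...
	From:                is left alone (no space after "From")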
+
void pp_remainder(struct pretty_print_context *pp,
const char **msg_p,
struct strbuf *sb,
if (!linelen)
break;
- if (is_empty_line(line, &linelen)) {
+ if (is_blank_line(line, &linelen)) {
if (first)
continue;
if (pp->fmt == CMIT_FMT_SHORT)
else if (pp->expand_tabs_in_log)
strbuf_add_tabexpand(sb, pp->expand_tabs_in_log,
line, linelen);
- else
+ else {
+ if (pp->fmt == CMIT_FMT_MBOXRD &&
+ is_mboxrd_from(line, linelen))
+ strbuf_addch(sb, '>');
+
strbuf_add(sb, line, linelen);
+ }
strbuf_addch(sb, '\n');
}
}
encoding = get_log_output_encoding();
msg = reencoded = logmsg_reencode(commit, NULL, encoding);
- if (pp->fmt == CMIT_FMT_ONELINE || pp->fmt == CMIT_FMT_EMAIL)
+ if (pp->fmt == CMIT_FMT_ONELINE || cmit_fmt_is_mail(pp->fmt))
indent = 0;
/*
* We need to check and emit Content-type: to mark it
* as 8-bit if we haven't done so.
*/
- if (pp->fmt == CMIT_FMT_EMAIL && need_8bit_cte == 0) {
+ if (cmit_fmt_is_mail(pp->fmt) && need_8bit_cte == 0) {
int i, ch, in_body;
for (in_body = i = 0; (ch = msg[i]); i++) {
}
/* Skip excess blank lines at the beginning of body, if any... */
- msg = skip_empty_lines(msg);
+ msg = skip_blank_lines(msg);
/* These formats treat the title line specially. */
- if (pp->fmt == CMIT_FMT_ONELINE || pp->fmt == CMIT_FMT_EMAIL)
+ if (pp->fmt == CMIT_FMT_ONELINE || cmit_fmt_is_mail(pp->fmt))
pp_title_line(pp, &msg, sb, encoding, need_8bit_cte);
beginning_of_body = sb->len;
* format. Make sure we did not strip the blank line
* between the header and the body.
*/
- if (pp->fmt == CMIT_FMT_EMAIL && sb->len <= beginning_of_body)
+ if (cmit_fmt_is_mail(pp->fmt) && sb->len <= beginning_of_body)
strbuf_addch(sb, '\n');
unuse_commit_buffer(commit, reencoded);
}
strbuf_addch(sb, '"');
}
+
+void basic_regex_quote_buf(struct strbuf *sb, const char *src)
+{
+ char c;
+
+ if (*src == '^') {
+ /* only beginning '^' is special and needs quoting */
+ strbuf_addch(sb, '\\');
+ strbuf_addch(sb, *src++);
+ }
+ if (*src == '*')
+ /* beginning '*' is not special, no quoting */
+ strbuf_addch(sb, *src++);
+
+ while ((c = *src++)) {
+ switch (c) {
+ case '[':
+ case '.':
+ case '\\':
+ case '*':
+ strbuf_addch(sb, '\\');
+ strbuf_addch(sb, c);
+ break;
+
+ case '$':
+ /* only the end '$' is special and needs quoting */
+ if (*src == '\0')
+ strbuf_addch(sb, '\\');
+ strbuf_addch(sb, c);
+ break;
+
+ default:
+ strbuf_addch(sb, c);
+ break;
+ }
+ }
+}
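A few worked examples of the quoting rules above (in POSIX basic regular expressions a leading '*' and a non-final '$' are literal, hence the special cases):

	"*.c"     ->  "*\.c"       (leading '*' kept as-is, '.' escaped)
	"^TODO$"  ->  "\^TODO\$"   (both anchors are in special positions)
	"a$b[0]"  ->  "a$b\[0]"    (inner '$' literal; '[' escaped, ']' needs none)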
extern void perl_quote_buf(struct strbuf *sb, const char *src);
extern void python_quote_buf(struct strbuf *sb, const char *src);
extern void tcl_quote_buf(struct strbuf *sb, const char *src);
+extern void basic_regex_quote_buf(struct strbuf *sb, const char *src);
#endif
#include "split-index.h"
#include "utf8.h"
-static struct cache_entry *refresh_cache_entry(struct cache_entry *ce,
- unsigned int options);
-
/* Mask for the name length in ce_flags in the on-disk index */
#define CE_NAMEMASK (0x0fff)
hashcpy(ce->sha1, sha1);
}
-int add_to_index(struct index_state *istate, const char *path, struct stat *st, int flags)
+int add_to_index(struct index_state *istate, const char *path, struct stat *st, int flags, int force_mode)
{
int size, namelen, was_same;
mode_t st_mode = st->st_mode;
else
ce->ce_flags |= CE_INTENT_TO_ADD;
- if (trust_executable_bit && has_symlinks)
+ if (S_ISREG(st_mode) && force_mode)
+ ce->ce_mode = create_ce_mode(force_mode);
+ else if (trust_executable_bit && has_symlinks)
ce->ce_mode = create_ce_mode(st_mode);
else {
/* If there is an existing entry, pick the mode bits and type
return 0;
}
-int add_file_to_index(struct index_state *istate, const char *path, int flags)
+int add_file_to_index(struct index_state *istate, const char *path,
+ int flags, int force_mode)
{
struct stat st;
if (lstat(path, &st))
die_errno("unable to stat '%s'", path);
- return add_to_index(istate, path, &st, flags);
+ return add_to_index(istate, path, &st, flags, force_mode);
}
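A hedged sketch of how the extra force_mode argument is meant to be used (path and flags stand for whatever the caller already had): passing 0 keeps the old behaviour of deriving the mode from the working tree, while a non-zero mode pins regular files to a fixed mode via create_ce_mode(), which collapses permissions to 100644 or 100755:

	/* old behaviour: mode comes from lstat() / existing rules */
	add_file_to_index(&the_index, path, flags, 0);

	/* record the file as a non-executable blob regardless of st_mode */
	add_file_to_index(&the_index, path, flags, 0644);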
struct cache_entry *make_cache_entry(unsigned int mode,
return has_errors;
}
-static struct cache_entry *refresh_cache_entry(struct cache_entry *ce,
+struct cache_entry *refresh_cache_entry(struct cache_entry *ce,
unsigned int options)
{
return refresh_cache_ent(&the_index, ce, options, NULL, NULL);
logobj = parse_object(reflog->osha1);
} while (commit_reflog->recno && (logobj && logobj->type != OBJ_COMMIT));
+ if (!logobj && commit_reflog->recno >= 0 && is_null_sha1(reflog->osha1)) {
+ /* a root commit, but there are still more entries to show */
+ reflog = &commit_reflog->reflogs->items[commit_reflog->recno];
+ logobj = parse_object(reflog->nsha1);
+ }
+
if (!logobj || logobj->type != OBJ_COMMIT) {
commit_info->commit = NULL;
commit->parents = NULL;
int refname_is_safe(const char *refname)
{
- if (starts_with(refname, "refs/")) {
+ const char *rest;
+
+ if (skip_prefix(refname, "refs/", &rest)) {
char *buf;
int result;
+ size_t restlen = strlen(rest);
+
+ /* rest must not be empty, or start or end with "/" */
+ if (!restlen || *rest == '/' || rest[restlen - 1] == '/')
+ return 0;
- buf = xmallocz(strlen(refname));
/*
* Does the refname try to escape refs/?
* For example: refs/foo/../bar is safe but refs/foo/../../bar
* is not.
*/
- result = !normalize_path_copy(buf, refname + strlen("refs/"));
+ buf = xmallocz(restlen);
+ result = !normalize_path_copy(buf, rest) && !strcmp(buf, rest);
free(buf);
return result;
}
- while (*refname) {
+
+ do {
if (!isupper(*refname) && *refname != '_')
return 0;
refname++;
- }
+ } while (*refname);
return 1;
}
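Worked examples of what the rewritten check now accepts and rejects:

	refname_is_safe("refs/heads/master")       -> 1
	refname_is_safe("HEAD")                    -> 1   (all-caps/underscore pseudoref)
	refname_is_safe("refs/")                   -> 0   (empty remainder after "refs/")
	refname_is_safe("refs//foo")               -> 0   (leading '/' in the remainder)
	refname_is_safe("refs/heads/../../config") -> 0   (normalization escapes refs/)
	refname_is_safe("config")                  -> 0   (lowercase outside refs/)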
filename = git_path("%s", pseudoref);
fd = hold_lock_file_for_update(&lock, filename, LOCK_DIE_ON_ERROR);
if (fd < 0) {
- strbuf_addf(err, "Could not open '%s' for writing: %s",
+ strbuf_addf(err, "could not open '%s' for writing: %s",
filename, strerror(errno));
return -1;
}
if (read_ref(pseudoref, actual_old_sha1))
die("could not read ref '%s'", pseudoref);
if (hashcmp(actual_old_sha1, old_sha1)) {
- strbuf_addf(err, "Unexpected sha1 when writing %s", pseudoref);
+ strbuf_addf(err, "unexpected sha1 when writing '%s'", pseudoref);
rollback_lock_file(&lock);
goto done;
}
}
if (write_in_full(fd, buf.buf, buf.len) != buf.len) {
- strbuf_addf(err, "Could not write to '%s'", filename);
+ strbuf_addf(err, "could not write to '%s'", filename);
rollback_lock_file(&lock);
goto done;
}
free(transaction);
}
-static struct ref_update *add_update(struct ref_transaction *transaction,
- const char *refname)
+struct ref_update *ref_transaction_add_update(
+ struct ref_transaction *transaction,
+ const char *refname, unsigned int flags,
+ const unsigned char *new_sha1,
+ const unsigned char *old_sha1,
+ const char *msg)
{
struct ref_update *update;
+
+ if (transaction->state != REF_TRANSACTION_OPEN)
+ die("BUG: update called for transaction that is not open");
+
+ if ((flags & REF_ISPRUNING) && !(flags & REF_NODEREF))
+ die("BUG: REF_ISPRUNING set without REF_NODEREF");
+
FLEX_ALLOC_STR(update, refname, refname);
ALLOC_GROW(transaction->updates, transaction->nr + 1, transaction->alloc);
transaction->updates[transaction->nr++] = update;
+
+ update->flags = flags;
+
+ if (flags & REF_HAVE_NEW)
+ hashcpy(update->new_sha1, new_sha1);
+ if (flags & REF_HAVE_OLD)
+ hashcpy(update->old_sha1, old_sha1);
+ if (msg)
+ update->msg = xstrdup(msg);
return update;
}
unsigned int flags, const char *msg,
struct strbuf *err)
{
- struct ref_update *update;
-
assert(err);
- if (transaction->state != REF_TRANSACTION_OPEN)
- die("BUG: update called for transaction that is not open");
-
- if (new_sha1 && !is_null_sha1(new_sha1) &&
- check_refname_format(refname, REFNAME_ALLOW_ONELEVEL)) {
- strbuf_addf(err, "refusing to update ref with bad name %s",
+ if ((new_sha1 && !is_null_sha1(new_sha1)) ?
+ check_refname_format(refname, REFNAME_ALLOW_ONELEVEL) :
+ !refname_is_safe(refname)) {
+ strbuf_addf(err, "refusing to update ref with bad name '%s'",
refname);
return -1;
}
- update = add_update(transaction, refname);
- if (new_sha1) {
- hashcpy(update->new_sha1, new_sha1);
- flags |= REF_HAVE_NEW;
- }
- if (old_sha1) {
- hashcpy(update->old_sha1, old_sha1);
- flags |= REF_HAVE_OLD;
- }
- update->flags = flags;
- if (msg)
- update->msg = xstrdup(msg);
+ flags |= (new_sha1 ? REF_HAVE_NEW : 0) | (old_sha1 ? REF_HAVE_OLD : 0);
+
+ ref_transaction_add_update(transaction, refname, flags,
+ new_sha1, old_sha1, msg);
return 0;
}
/* -2 for strlen("%.*s") - strlen("%s"); +1 for NUL */
total_len += strlen(ref_rev_parse_rules[nr_rules]) - 2 + 1;
- scanf_fmts = xmalloc(st_add(st_mult(nr_rules, sizeof(char *)), total_len));
+ scanf_fmts = xmalloc(st_add(st_mult(sizeof(char *), nr_rules), total_len));
offset = 0;
for (i = 0; i < nr_rules; i++) {
return head_ref_submodule(NULL, fn, cb_data);
}
+/*
+ * Call fn for each reference in the specified submodule for which the
+ * refname begins with prefix. If trim is non-zero, then trim that
+ * many characters off the beginning of each refname before passing
+ * the refname to fn. flags can be DO_FOR_EACH_INCLUDE_BROKEN to
+ * include broken references in the iteration. If fn ever returns a
+ * non-zero value, stop the iteration and return that value;
+ * otherwise, return 0.
+ */
+static int do_for_each_ref(const char *submodule, const char *prefix,
+ each_ref_fn fn, int trim, int flags, void *cb_data)
+{
+ struct ref_iterator *iter;
+
+ iter = files_ref_iterator_begin(submodule, prefix, flags);
+ iter = prefix_ref_iterator_begin(iter, prefix, trim);
+
+ return do_for_each_ref_iterator(iter, fn, cb_data);
+}
+
int for_each_ref(each_ref_fn fn, void *cb_data)
{
return do_for_each_ref(NULL, "", fn, 0, 0, cb_data);
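For orientation, the callback type that this machinery ultimately drives is each_ref_fn; a minimal sketch of a user (show_one_ref is an illustrative name):

	static int show_one_ref(const char *refname, const struct object_id *oid,
				int flags, void *cb_data)
	{
		printf("%s %s\n", oid_to_hex(oid), refname);
		return 0;	/* a non-zero return stops the iteration */
	}

	/* somewhere in a caller */
	for_each_ref(show_one_ref, NULL);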
#define RESOLVE_REF_NO_RECURSE 0x02
#define RESOLVE_REF_ALLOW_BAD_NAME 0x04
-extern const char *resolve_ref_unsafe(const char *refname, int resolve_flags,
- unsigned char *sha1, int *flags);
+const char *resolve_ref_unsafe(const char *refname, int resolve_flags,
+ unsigned char *sha1, int *flags);
-extern char *resolve_refdup(const char *refname, int resolve_flags,
- unsigned char *sha1, int *flags);
+char *resolve_refdup(const char *refname, int resolve_flags,
+ unsigned char *sha1, int *flags);
-extern int read_ref_full(const char *refname, int resolve_flags,
- unsigned char *sha1, int *flags);
-extern int read_ref(const char *refname, unsigned char *sha1);
+int read_ref_full(const char *refname, int resolve_flags,
+ unsigned char *sha1, int *flags);
+int read_ref(const char *refname, unsigned char *sha1);
-extern int ref_exists(const char *refname);
+int ref_exists(const char *refname);
-extern int is_branch(const char *refname);
+int is_branch(const char *refname);
/*
* If refname is a non-symbolic reference that refers to a tag object,
* Symbolic references are considered unpeelable, even if they
* ultimately resolve to a peelable tag.
*/
-extern int peel_ref(const char *refname, unsigned char *sha1);
+int peel_ref(const char *refname, unsigned char *sha1);
/**
* Resolve refname in the nested "gitlink" repository that is located
* at path. If the resolution is successful, return 0 and set sha1 to
* the name of the object; otherwise, return a non-zero value.
*/
-extern int resolve_gitlink_ref(const char *path, const char *refname, unsigned char *sha1);
+int resolve_gitlink_ref(const char *path, const char *refname,
+ unsigned char *sha1);
/*
* Return true iff abbrev_name is a possible abbreviation for
* full_name according to the rules defined by ref_rev_parse_rules in
* refs.c.
*/
-extern int refname_match(const char *abbrev_name, const char *full_name);
+int refname_match(const char *abbrev_name, const char *full_name);
-extern int dwim_ref(const char *str, int len, unsigned char *sha1, char **ref);
-extern int dwim_log(const char *str, int len, unsigned char *sha1, char **ref);
+int dwim_ref(const char *str, int len, unsigned char *sha1, char **ref);
+int dwim_log(const char *str, int len, unsigned char *sha1, char **ref);
/*
* A ref_transaction represents a collection of ref updates
struct ref_transaction;
/*
- * Bit values set in the flags argument passed to each_ref_fn():
+ * Bit values set in the flags argument passed to each_ref_fn() and
+ * stored in ref_iterator::flags. Other bits are for internal use
+ * only:
*/
/* Reference is a symbolic reference. */
* modifies the reference also returns a nonzero value to immediately
* stop the iteration.
*/
-extern int head_ref(each_ref_fn fn, void *cb_data);
-extern int for_each_ref(each_ref_fn fn, void *cb_data);
-extern int for_each_ref_in(const char *prefix, each_ref_fn fn, void *cb_data);
-extern int for_each_fullref_in(const char *prefix, each_ref_fn fn, void *cb_data, unsigned int broken);
-extern int for_each_tag_ref(each_ref_fn fn, void *cb_data);
-extern int for_each_branch_ref(each_ref_fn fn, void *cb_data);
-extern int for_each_remote_ref(each_ref_fn fn, void *cb_data);
-extern int for_each_replace_ref(each_ref_fn fn, void *cb_data);
-extern int for_each_glob_ref(each_ref_fn fn, const char *pattern, void *cb_data);
-extern int for_each_glob_ref_in(each_ref_fn fn, const char *pattern, const char *prefix, void *cb_data);
-
-extern int head_ref_submodule(const char *submodule, each_ref_fn fn, void *cb_data);
-extern int for_each_ref_submodule(const char *submodule, each_ref_fn fn, void *cb_data);
-extern int for_each_ref_in_submodule(const char *submodule, const char *prefix,
+int head_ref(each_ref_fn fn, void *cb_data);
+int for_each_ref(each_ref_fn fn, void *cb_data);
+int for_each_ref_in(const char *prefix, each_ref_fn fn, void *cb_data);
+int for_each_fullref_in(const char *prefix, each_ref_fn fn, void *cb_data,
+ unsigned int broken);
+int for_each_tag_ref(each_ref_fn fn, void *cb_data);
+int for_each_branch_ref(each_ref_fn fn, void *cb_data);
+int for_each_remote_ref(each_ref_fn fn, void *cb_data);
+int for_each_replace_ref(each_ref_fn fn, void *cb_data);
+int for_each_glob_ref(each_ref_fn fn, const char *pattern, void *cb_data);
+int for_each_glob_ref_in(each_ref_fn fn, const char *pattern,
+ const char *prefix, void *cb_data);
+
+int head_ref_submodule(const char *submodule, each_ref_fn fn, void *cb_data);
+int for_each_ref_submodule(const char *submodule,
+ each_ref_fn fn, void *cb_data);
+int for_each_ref_in_submodule(const char *submodule, const char *prefix,
each_ref_fn fn, void *cb_data);
-extern int for_each_tag_ref_submodule(const char *submodule, each_ref_fn fn, void *cb_data);
-extern int for_each_branch_ref_submodule(const char *submodule, each_ref_fn fn, void *cb_data);
-extern int for_each_remote_ref_submodule(const char *submodule, each_ref_fn fn, void *cb_data);
+int for_each_tag_ref_submodule(const char *submodule,
+ each_ref_fn fn, void *cb_data);
+int for_each_branch_ref_submodule(const char *submodule,
+ each_ref_fn fn, void *cb_data);
+int for_each_remote_ref_submodule(const char *submodule,
+ each_ref_fn fn, void *cb_data);
-extern int head_ref_namespaced(each_ref_fn fn, void *cb_data);
-extern int for_each_namespaced_ref(each_ref_fn fn, void *cb_data);
+int head_ref_namespaced(each_ref_fn fn, void *cb_data);
+int for_each_namespaced_ref(each_ref_fn fn, void *cb_data);
/* can be used to learn about broken ref and symref */
-extern int for_each_rawref(each_ref_fn fn, void *cb_data);
+int for_each_rawref(each_ref_fn fn, void *cb_data);
static inline const char *has_glob_specials(const char *pattern)
{
return strpbrk(pattern, "?*[");
}
-extern void warn_dangling_symref(FILE *fp, const char *msg_fmt, const char *refname);
-extern void warn_dangling_symrefs(FILE *fp, const char *msg_fmt, const struct string_list *refnames);
+void warn_dangling_symref(FILE *fp, const char *msg_fmt, const char *refname);
+void warn_dangling_symrefs(FILE *fp, const char *msg_fmt,
+ const struct string_list *refnames);
/*
* Flags for controlling behaviour of pack_refs()
int safe_create_reflog(const char *refname, int force_create, struct strbuf *err);
/** Reads log for the value of ref during at_time. **/
-extern int read_ref_at(const char *refname, unsigned int flags,
- unsigned long at_time, int cnt,
- unsigned char *sha1, char **msg,
- unsigned long *cutoff_time, int *cutoff_tz, int *cutoff_cnt);
+int read_ref_at(const char *refname, unsigned int flags,
+ unsigned long at_time, int cnt,
+ unsigned char *sha1, char **msg,
+ unsigned long *cutoff_time, int *cutoff_tz, int *cutoff_cnt);
/** Check if a particular reflog exists */
-extern int reflog_exists(const char *refname);
+int reflog_exists(const char *refname);
/*
* Delete the specified reference. If old_sha1 is non-NULL, then
* exists, regardless of its old value. It is an error for old_sha1 to
* be NULL_SHA1. flags is passed through to ref_transaction_delete().
*/
-extern int delete_ref(const char *refname, const unsigned char *old_sha1,
- unsigned int flags);
+int delete_ref(const char *refname, const unsigned char *old_sha1,
+ unsigned int flags);
/*
* Delete the specified references. If there are any problems, emit
* errors but attempt to keep going (i.e., the deletes are not done in
- * an all-or-nothing transaction).
+ * an all-or-nothing transaction). flags is passed through to
+ * ref_transaction_delete().
*/
-extern int delete_refs(struct string_list *refnames);
+int delete_refs(struct string_list *refnames, unsigned int flags);
/** Delete a reflog */
-extern int delete_reflog(const char *refname);
+int delete_reflog(const char *refname);
/* iterate over reflog entries */
-typedef int each_reflog_ent_fn(unsigned char *osha1, unsigned char *nsha1, const char *, unsigned long, int, const char *, void *);
+typedef int each_reflog_ent_fn(
+ unsigned char *old_sha1, unsigned char *new_sha1,
+ const char *committer, unsigned long timestamp,
+ int tz, const char *msg, void *cb_data);
+
int for_each_reflog_ent(const char *refname, each_reflog_ent_fn fn, void *cb_data);
int for_each_reflog_ent_reverse(const char *refname, each_reflog_ent_fn fn, void *cb_data);
* Calls the specified function for each reflog file until it returns nonzero,
* and returns the value
*/
-extern int for_each_reflog(each_ref_fn, void *);
+int for_each_reflog(each_ref_fn fn, void *cb_data);
#define REFNAME_ALLOW_ONELEVEL 1
#define REFNAME_REFSPEC_PATTERN 2
* allow a single "*" wildcard character in the refspec. No leading or
* repeated slashes are accepted.
*/
-extern int check_refname_format(const char *refname, int flags);
+int check_refname_format(const char *refname, int flags);
-extern const char *prettify_refname(const char *refname);
+const char *prettify_refname(const char *refname);
-extern char *shorten_unambiguous_ref(const char *refname, int strict);
+char *shorten_unambiguous_ref(const char *refname, int strict);
/** rename ref, return 0 on success **/
-extern int rename_ref(const char *oldref, const char *newref, const char *logmsg);
+int rename_ref(const char *oldref, const char *newref, const char *logmsg);
-extern int create_symref(const char *refname, const char *target, const char *logmsg);
+int create_symref(const char *refname, const char *target, const char *logmsg);
/*
* Update HEAD of the specified gitdir.
* $GIT_DIR points to.
* Return 0 if successful, non-zero otherwise.
* */
-extern int set_worktree_head_symref(const char *gitdir, const char *target);
+int set_worktree_head_symref(const char *gitdir, const char *target);
enum action_on_err {
UPDATE_REFS_MSG_ON_ERR,
* msg -- a message describing the change (for the reflog).
*
* err -- a strbuf for receiving a description of any error that
- * might have occured.
+ * might have occurred.
*
* The functions make internal copies of refname and msg, so the
* caller retains ownership of these parameters.
const unsigned char *new_sha1, const unsigned char *old_sha1,
unsigned int flags, enum action_on_err onerr);
-extern int parse_hide_refs_config(const char *var, const char *value, const char *);
+int parse_hide_refs_config(const char *var, const char *value, const char *);
/*
* Check whether a ref is hidden. If no namespace is set, both the first and
* the ref is outside that namespace, the first parameter is NULL. The second
* parameter always points to the full ref name.
*/
-extern int ref_is_hidden(const char *, const char *);
+int ref_is_hidden(const char *, const char *);
enum ref_type {
REF_TYPE_PER_WORKTREE,
* enum expire_reflog_flags. The three function pointers are described
* above. On success, return zero.
*/
-extern int reflog_expire(const char *refname, const unsigned char *sha1,
- unsigned int flags,
- reflog_expiry_prepare_fn prepare_fn,
- reflog_expiry_should_prune_fn should_prune_fn,
- reflog_expiry_cleanup_fn cleanup_fn,
- void *policy_cb_data);
+int reflog_expire(const char *refname, const unsigned char *sha1,
+ unsigned int flags,
+ reflog_expiry_prepare_fn prepare_fn,
+ reflog_expiry_should_prune_fn should_prune_fn,
+ reflog_expiry_cleanup_fn cleanup_fn,
+ void *policy_cb_data);
#endif /* REFS_H */
#include "../cache.h"
#include "../refs.h"
#include "refs-internal.h"
+#include "../iterator.h"
+#include "../dir-iterator.h"
#include "../lockfile.h"
#include "../object.h"
#include "../dir.h"
struct ref_lock {
char *ref_name;
- char *orig_ref_name;
struct lock_file *lk;
struct object_id old_oid;
};
}
/*
- * Return true iff the reference described by entry can be resolved to
- * an object in the database. Emit a warning if the referred-to
- * object does not exist.
+ * Return true if refname, which has the specified oid and flags, can
+ * be resolved to an object in the database. If the referred-to object
+ * does not exist, emit a warning and return false.
*/
-static int ref_resolves_to_object(struct ref_entry *entry)
+static int ref_resolves_to_object(const char *refname,
+ const struct object_id *oid,
+ unsigned int flags)
{
- if (entry->flag & REF_ISBROKEN)
+ if (flags & REF_ISBROKEN)
return 0;
- if (!has_sha1_file(entry->u.value.oid.hash)) {
- error("%s does not point to a valid object!", entry->name);
+ if (!has_sha1_file(oid->hash)) {
+ error("%s does not point to a valid object!", refname);
return 0;
}
return 1;
}
/*
- * current_ref is a performance hack: when iterating over references
- * using the for_each_ref*() functions, current_ref is set to the
- * current reference's entry before calling the callback function. If
- * the callback function calls peel_ref(), then peel_ref() first
- * checks whether the reference to be peeled is the current reference
- * (it usually is) and if so, returns that reference's peeled version
- * if it is available. This avoids a refname lookup in a common case.
- */
-static struct ref_entry *current_ref;
-
-typedef int each_ref_entry_fn(struct ref_entry *entry, void *cb_data);
-
-struct ref_entry_cb {
- const char *base;
- int trim;
- int flags;
- each_ref_fn *fn;
- void *cb_data;
-};
-
-/*
- * Handle one reference in a do_for_each_ref*()-style iteration,
- * calling an each_ref_fn for each entry.
+ * Return true if the reference described by entry can be resolved to
+ * an object in the database; otherwise, emit a warning and return
+ * false.
*/
-static int do_one_ref(struct ref_entry *entry, void *cb_data)
+static int entry_resolves_to_object(struct ref_entry *entry)
{
- struct ref_entry_cb *data = cb_data;
- struct ref_entry *old_current_ref;
- int retval;
-
- if (!starts_with(entry->name, data->base))
- return 0;
-
- if (!(data->flags & DO_FOR_EACH_INCLUDE_BROKEN) &&
- !ref_resolves_to_object(entry))
- return 0;
-
- /* Store the old value, in case this is a recursive call: */
- old_current_ref = current_ref;
- current_ref = entry;
- retval = data->fn(entry->name + data->trim, &entry->u.value.oid,
- entry->flag, data->cb_data);
- current_ref = old_current_ref;
- return retval;
+ return ref_resolves_to_object(entry->name,
+ &entry->u.value.oid, entry->flag);
}
+typedef int each_ref_entry_fn(struct ref_entry *entry, void *cb_data);
+
/*
* Call fn for each reference in dir that has index in the range
* offset <= index < dir->nr. Recurse into subdirectories that are in
return 0;
}
-/*
- * Call fn for each reference in the union of dir1 and dir2, in order
- * by refname. Recurse into subdirectories. If a value entry appears
- * in both dir1 and dir2, then only process the version that is in
- * dir2. The input dirs must already be sorted, but subdirs will be
- * sorted as needed. fn is called for all references, including
- * broken ones.
- */
-static int do_for_each_entry_in_dirs(struct ref_dir *dir1,
- struct ref_dir *dir2,
- each_ref_entry_fn fn, void *cb_data)
-{
- int retval;
- int i1 = 0, i2 = 0;
-
- assert(dir1->sorted == dir1->nr);
- assert(dir2->sorted == dir2->nr);
- while (1) {
- struct ref_entry *e1, *e2;
- int cmp;
- if (i1 == dir1->nr) {
- return do_for_each_entry_in_dir(dir2, i2, fn, cb_data);
- }
- if (i2 == dir2->nr) {
- return do_for_each_entry_in_dir(dir1, i1, fn, cb_data);
- }
- e1 = dir1->entries[i1];
- e2 = dir2->entries[i2];
- cmp = strcmp(e1->name, e2->name);
- if (cmp == 0) {
- if ((e1->flag & REF_DIR) && (e2->flag & REF_DIR)) {
- /* Both are directories; descend them in parallel. */
- struct ref_dir *subdir1 = get_ref_dir(e1);
- struct ref_dir *subdir2 = get_ref_dir(e2);
- sort_ref_dir(subdir1);
- sort_ref_dir(subdir2);
- retval = do_for_each_entry_in_dirs(
- subdir1, subdir2, fn, cb_data);
- i1++;
- i2++;
- } else if (!(e1->flag & REF_DIR) && !(e2->flag & REF_DIR)) {
- /* Both are references; ignore the one from dir1. */
- retval = fn(e2, cb_data);
- i1++;
- i2++;
- } else {
- die("conflict between reference and directory: %s",
- e1->name);
- }
- } else {
- struct ref_entry *e;
- if (cmp < 0) {
- e = e1;
- i1++;
- } else {
- e = e2;
- i2++;
- }
- if (e->flag & REF_DIR) {
- struct ref_dir *subdir = get_ref_dir(e);
- sort_ref_dir(subdir);
- retval = do_for_each_entry_in_dir(
- subdir, 0, fn, cb_data);
- } else {
- retval = fn(e, cb_data);
- }
- }
- if (retval)
- return retval;
- }
-}
-
/*
* Load all of the refs from the dir into our in-memory cache. The hard work
* of loading loose refs is done by get_ref_dir(), so we just need to recurse
}
}
+/*
+ * A level in the reference hierarchy that is currently being iterated
+ * through.
+ */
+struct cache_ref_iterator_level {
+ /*
+ * The ref_dir being iterated over at this level. The ref_dir
+ * is sorted before being stored here.
+ */
+ struct ref_dir *dir;
+
+ /*
+ * The index of the current entry within dir (which might
+ * itself be a directory). If index == -1, then the iteration
+ * hasn't yet begun. If index == dir->nr, then the iteration
+ * through this level is over.
+ */
+ int index;
+};
+
+/*
+ * Represent an iteration through a ref_dir in the memory cache. The
+ * iteration recurses through subdirectories.
+ */
+struct cache_ref_iterator {
+ struct ref_iterator base;
+
+ /*
+ * The number of levels currently on the stack. This is always
+ * at least 1, because when it becomes zero the iteration is
+ * ended and this struct is freed.
+ */
+ size_t levels_nr;
+
+ /* The number of levels that have been allocated on the stack */
+ size_t levels_alloc;
+
+ /*
+ * A stack of levels. levels[0] is the uppermost level that is
+ * being iterated over in this iteration. (This is not
+	 * necessarily the top level in the references hierarchy. If we
+ * are iterating through a subtree, then levels[0] will hold
+ * the ref_dir for that subtree, and subsequent levels will go
+ * on from there.)
+ */
+ struct cache_ref_iterator_level *levels;
+};
+
+static int cache_ref_iterator_advance(struct ref_iterator *ref_iterator)
+{
+ struct cache_ref_iterator *iter =
+ (struct cache_ref_iterator *)ref_iterator;
+
+ while (1) {
+ struct cache_ref_iterator_level *level =
+ &iter->levels[iter->levels_nr - 1];
+ struct ref_dir *dir = level->dir;
+ struct ref_entry *entry;
+
+ if (level->index == -1)
+ sort_ref_dir(dir);
+
+ if (++level->index == level->dir->nr) {
+ /* This level is exhausted; pop up a level */
+ if (--iter->levels_nr == 0)
+ return ref_iterator_abort(ref_iterator);
+
+ continue;
+ }
+
+ entry = dir->entries[level->index];
+
+ if (entry->flag & REF_DIR) {
+ /* push down a level */
+ ALLOC_GROW(iter->levels, iter->levels_nr + 1,
+ iter->levels_alloc);
+
+ level = &iter->levels[iter->levels_nr++];
+ level->dir = get_ref_dir(entry);
+ level->index = -1;
+ } else {
+ iter->base.refname = entry->name;
+ iter->base.oid = &entry->u.value.oid;
+ iter->base.flags = entry->flag;
+ return ITER_OK;
+ }
+ }
+}
+
+static enum peel_status peel_entry(struct ref_entry *entry, int repeel);
+
+static int cache_ref_iterator_peel(struct ref_iterator *ref_iterator,
+ struct object_id *peeled)
+{
+ struct cache_ref_iterator *iter =
+ (struct cache_ref_iterator *)ref_iterator;
+ struct cache_ref_iterator_level *level;
+ struct ref_entry *entry;
+
+ level = &iter->levels[iter->levels_nr - 1];
+
+ if (level->index == -1)
+ die("BUG: peel called before advance for cache iterator");
+
+ entry = level->dir->entries[level->index];
+
+ if (peel_entry(entry, 0))
+ return -1;
+ hashcpy(peeled->hash, entry->u.value.peeled.hash);
+ return 0;
+}
+
+static int cache_ref_iterator_abort(struct ref_iterator *ref_iterator)
+{
+ struct cache_ref_iterator *iter =
+ (struct cache_ref_iterator *)ref_iterator;
+
+ free(iter->levels);
+ base_ref_iterator_free(ref_iterator);
+ return ITER_DONE;
+}
+
+static struct ref_iterator_vtable cache_ref_iterator_vtable = {
+ cache_ref_iterator_advance,
+ cache_ref_iterator_peel,
+ cache_ref_iterator_abort
+};
+
+static struct ref_iterator *cache_ref_iterator_begin(struct ref_dir *dir)
+{
+ struct cache_ref_iterator *iter;
+ struct ref_iterator *ref_iterator;
+ struct cache_ref_iterator_level *level;
+
+ iter = xcalloc(1, sizeof(*iter));
+ ref_iterator = &iter->base;
+ base_ref_iterator_init(ref_iterator, &cache_ref_iterator_vtable);
+ ALLOC_GROW(iter->levels, 10, iter->levels_alloc);
+
+ iter->levels_nr = 1;
+ level = &iter->levels[0];
+ level->index = -1;
+ level->dir = dir;
+
+ return ref_iterator;
+}
+
struct nonmatching_ref_data {
const struct string_list *skip;
const char *conflicting_refname;
/*
* Return a pointer to a ref_cache for the specified submodule. For
- * the main repository, use submodule==NULL. The returned structure
- * will be allocated and initialized but not necessarily populated; it
- * should not be freed.
+ * the main repository, use submodule==NULL; such a call cannot fail.
+ * For a submodule, the submodule must exist and be a nonbare
+ * repository; otherwise, return NULL.
+ *
+ * The returned structure will be allocated and initialized but not
+ * necessarily populated; it should not be freed.
*/
static struct ref_cache *get_ref_cache(const char *submodule)
{
struct ref_cache *refs = lookup_ref_cache(submodule);
- if (!refs)
- refs = create_ref_cache(submodule);
+
+ if (!refs) {
+ struct strbuf submodule_sb = STRBUF_INIT;
+
+ strbuf_addstr(&submodule_sb, submodule);
+ if (is_nonbare_repository_dir(&submodule_sb))
+ refs = create_ref_cache(submodule);
+ strbuf_release(&submodule_sb);
+ }
+
return refs;
}
return -1;
strbuf_add(&submodule, path, len);
- refs = lookup_ref_cache(submodule.buf);
+ refs = get_ref_cache(submodule.buf);
if (!refs) {
- if (!is_nonbare_repository_dir(&submodule)) {
- strbuf_release(&submodule);
- return -1;
- }
- refs = create_ref_cache(submodule.buf);
+ strbuf_release(&submodule);
+ return -1;
}
strbuf_release(&submodule);
return -1;
}
-/*
- * Read a raw ref from the filesystem or packed refs file.
- *
- * If the ref is a sha1, fill in sha1 and return 0.
- *
- * If the ref is symbolic, fill in *symref with the referrent
- * (e.g. "refs/heads/master") and return 0. The caller is responsible
- * for validating the referrent. Set REF_ISSYMREF in flags.
- *
- * If the ref doesn't exist, set errno to ENOENT and return -1.
- *
- * If the ref exists but is neither a symbolic ref nor a sha1, it is
- * broken. Set REF_ISBROKEN in flags, set errno to EINVAL, and return
- * -1.
- *
- * If there is another error reading the ref, set errno appropriately and
- * return -1.
- *
- * Backend-specific flags might be set in flags as well, regardless of
- * outcome.
- *
- * sb_path is workspace: the caller should allocate and free it.
- *
- * It is OK for refname to point into symref. In this case:
- * - if the function succeeds with REF_ISSYMREF, symref will be
- * overwritten and the memory pointed to by refname might be changed
- * or even freed.
- * - in all other cases, symref will be untouched, and therefore
- * refname will still be valid and unchanged.
- */
int read_raw_ref(const char *refname, unsigned char *sha1,
- struct strbuf *symref, unsigned int *flags)
+ struct strbuf *referent, unsigned int *type)
{
struct strbuf sb_contents = STRBUF_INIT;
struct strbuf sb_path = STRBUF_INIT;
int ret = -1;
int save_errno;
+ *type = 0;
strbuf_reset(&sb_path);
strbuf_git_path(&sb_path, "%s", refname);
path = sb_path.buf;
if (lstat(path, &st) < 0) {
if (errno != ENOENT)
goto out;
- if (resolve_missing_loose_ref(refname, sha1, flags)) {
+ if (resolve_missing_loose_ref(refname, sha1, type)) {
errno = ENOENT;
goto out;
}
}
if (starts_with(sb_contents.buf, "refs/") &&
!check_refname_format(sb_contents.buf, 0)) {
- strbuf_swap(&sb_contents, symref);
- *flags |= REF_ISSYMREF;
+ strbuf_swap(&sb_contents, referent);
+ *type |= REF_ISSYMREF;
ret = 0;
goto out;
}
/* Is it a directory? */
if (S_ISDIR(st.st_mode)) {
- errno = EISDIR;
+ /*
+ * Even though there is a directory where the loose
+ * ref is supposed to be, there could still be a
+ * packed ref:
+ */
+ if (resolve_missing_loose_ref(refname, sha1, type)) {
+ errno = EISDIR;
+ goto out;
+ }
+ ret = 0;
goto out;
}
while (isspace(*buf))
buf++;
- strbuf_reset(symref);
- strbuf_addstr(symref, buf);
- *flags |= REF_ISSYMREF;
+ strbuf_reset(referent);
+ strbuf_addstr(referent, buf);
+ *type |= REF_ISSYMREF;
ret = 0;
goto out;
}
*/
if (get_sha1_hex(buf, sha1) ||
(buf[40] != '\0' && !isspace(buf[40]))) {
- *flags |= REF_ISBROKEN;
+ *type |= REF_ISBROKEN;
errno = EINVAL;
goto out;
}
return ret;
}
+static void unlock_ref(struct ref_lock *lock)
+{
+ /* Do not free lock->lk -- atexit() still looks at them */
+ if (lock->lk)
+ rollback_lock_file(lock->lk);
+ free(lock->ref_name);
+ free(lock);
+}
+
+/*
+ * Lock refname, without following symrefs, and set *lock_p to point
+ * at a newly-allocated lock object. Fill in lock->old_oid, referent,
+ * and type similarly to read_raw_ref().
+ *
+ * The caller must verify that refname is a "safe" reference name (in
+ * the sense of refname_is_safe()) before calling this function.
+ *
+ * If the reference doesn't already exist, verify that refname doesn't
+ * have a D/F conflict with any existing references. extras and skip
+ * are passed to verify_refname_available_dir() for this check.
+ *
+ * If mustexist is not set and the reference is not found or is
+ * broken, lock the reference anyway but clear sha1.
+ *
+ * Return 0 on success. On failure, write an error message to err and
+ * return TRANSACTION_NAME_CONFLICT or TRANSACTION_GENERIC_ERROR.
+ *
+ * Implementation note: This function is basically
+ *
+ * lock reference
+ * read_raw_ref()
+ *
+ * but it includes a lot more code to
+ * - Deal with possible races with other processes
+ * - Avoid calling verify_refname_available_dir() when it can be
+ * avoided, namely if we were successfully able to read the ref
+ * - Generate informative error messages in the case of failure
+ */
+static int lock_raw_ref(const char *refname, int mustexist,
+ const struct string_list *extras,
+ const struct string_list *skip,
+ struct ref_lock **lock_p,
+ struct strbuf *referent,
+ unsigned int *type,
+ struct strbuf *err)
+{
+ struct ref_lock *lock;
+ struct strbuf ref_file = STRBUF_INIT;
+ int attempts_remaining = 3;
+ int ret = TRANSACTION_GENERIC_ERROR;
+
+ assert(err);
+ *type = 0;
+
+ /* First lock the file so it can't change out from under us. */
+
+ *lock_p = lock = xcalloc(1, sizeof(*lock));
+
+ lock->ref_name = xstrdup(refname);
+ strbuf_git_path(&ref_file, "%s", refname);
+
+retry:
+ switch (safe_create_leading_directories(ref_file.buf)) {
+ case SCLD_OK:
+ break; /* success */
+ case SCLD_EXISTS:
+ /*
+ * Suppose refname is "refs/foo/bar". We just failed
+ * to create the containing directory, "refs/foo",
+ * because there was a non-directory in the way. This
+ * indicates a D/F conflict, probably because of
+ * another reference such as "refs/foo". There is no
+ * reason to expect this error to be transitory.
+ */
+ if (verify_refname_available(refname, extras, skip, err)) {
+ if (mustexist) {
+ /*
+ * To the user the relevant error is
+ * that the "mustexist" reference is
+ * missing:
+ */
+ strbuf_reset(err);
+ strbuf_addf(err, "unable to resolve reference '%s'",
+ refname);
+ } else {
+ /*
+ * The error message set by
+ * verify_refname_available_dir() is OK.
+ */
+ ret = TRANSACTION_NAME_CONFLICT;
+ }
+ } else {
+ /*
+ * The file that is in the way isn't a loose
+ * reference. Report it as a low-level
+ * failure.
+ */
+ strbuf_addf(err, "unable to create lock file %s.lock; "
+ "non-directory in the way",
+ ref_file.buf);
+ }
+ goto error_return;
+ case SCLD_VANISHED:
+ /* Maybe another process was tidying up. Try again. */
+ if (--attempts_remaining > 0)
+ goto retry;
+ /* fall through */
+ default:
+ strbuf_addf(err, "unable to create directory for %s",
+ ref_file.buf);
+ goto error_return;
+ }
+
+ if (!lock->lk)
+ lock->lk = xcalloc(1, sizeof(struct lock_file));
+
+ if (hold_lock_file_for_update(lock->lk, ref_file.buf, LOCK_NO_DEREF) < 0) {
+ if (errno == ENOENT && --attempts_remaining > 0) {
+ /*
+ * Maybe somebody just deleted one of the
+ * directories leading to ref_file. Try
+ * again:
+ */
+ goto retry;
+ } else {
+ unable_to_lock_message(ref_file.buf, errno, err);
+ goto error_return;
+ }
+ }
+
+ /*
+ * Now we hold the lock and can read the reference without
+ * fear that its value will change.
+ */
+
+ if (read_raw_ref(refname, lock->old_oid.hash, referent, type)) {
+ if (errno == ENOENT) {
+ if (mustexist) {
+ /* Garden variety missing reference. */
+ strbuf_addf(err, "unable to resolve reference '%s'",
+ refname);
+ goto error_return;
+ } else {
+ /*
+ * Reference is missing, but that's OK. We
+ * know that there is not a conflict with
+ * another loose reference because
+ * (supposing that we are trying to lock
+ * reference "refs/foo/bar"):
+ *
+ * - We were successfully able to create
+ * the lockfile refs/foo/bar.lock, so we
+ * know there cannot be a loose reference
+ * named "refs/foo".
+ *
+ * - We got ENOENT and not EISDIR, so we
+ * know that there cannot be a loose
+ * reference named "refs/foo/bar/baz".
+ */
+ }
+ } else if (errno == EISDIR) {
+ /*
+ * There is a directory in the way. It might have
+ * contained references that have been deleted. If
+ * we don't require that the reference already
+ * exists, try to remove the directory so that it
+ * doesn't cause trouble when we want to rename the
+ * lockfile into place later.
+ */
+ if (mustexist) {
+ /* Garden variety missing reference. */
+ strbuf_addf(err, "unable to resolve reference '%s'",
+ refname);
+ goto error_return;
+ } else if (remove_dir_recursively(&ref_file,
+ REMOVE_DIR_EMPTY_ONLY)) {
+ if (verify_refname_available_dir(
+ refname, extras, skip,
+ get_loose_refs(&ref_cache),
+ err)) {
+ /*
+ * The error message set by
+ * verify_refname_available() is OK.
+ */
+ ret = TRANSACTION_NAME_CONFLICT;
+ goto error_return;
+ } else {
+ /*
+ * We can't delete the directory,
+ * but we also don't know of any
+ * references that it should
+ * contain.
+ */
+ strbuf_addf(err, "there is a non-empty directory '%s' "
+ "blocking reference '%s'",
+ ref_file.buf, refname);
+ goto error_return;
+ }
+ }
+ } else if (errno == EINVAL && (*type & REF_ISBROKEN)) {
+ strbuf_addf(err, "unable to resolve reference '%s': "
+ "reference broken", refname);
+ goto error_return;
+ } else {
+ strbuf_addf(err, "unable to resolve reference '%s': %s",
+ refname, strerror(errno));
+ goto error_return;
+ }
+
+ /*
+ * If the ref did not exist and we are creating it,
+ * make sure there is no existing packed ref whose
+ * name begins with our refname, nor a packed ref
+ * whose name is a proper prefix of our refname.
+ */
+ if (verify_refname_available_dir(
+ refname, extras, skip,
+ get_packed_refs(&ref_cache),
+ err)) {
+ goto error_return;
+ }
+ }
+
+ ret = 0;
+ goto out;
+
+error_return:
+ unlock_ref(lock);
+ *lock_p = NULL;
+
+out:
+ strbuf_release(&ref_file);
+ return ret;
+}
+
/*
* Peel the entry (if possible) and return its new peel_status. If
* repeel is true, re-peel the entry even if there is an old peeled
int flag;
unsigned char base[20];
- if (current_ref && (current_ref->name == refname
- || !strcmp(current_ref->name, refname))) {
- if (peel_entry(current_ref, 0))
+ if (current_ref_iter && current_ref_iter->refname == refname) {
+ struct object_id peeled;
+
+ if (ref_iterator_peel(current_ref_iter, &peeled))
return -1;
- hashcpy(sha1, current_ref->u.value.peeled.hash);
+ hashcpy(sha1, peeled.hash);
return 0;
}
return peel_object(base, sha1);
}
-/*
- * Call fn for each reference in the specified ref_cache, omitting
- * references not in the containing_dir of base. fn is called for all
- * references, including broken ones. If fn ever returns a non-zero
- * value, stop the iteration and return that value; otherwise, return
- * 0.
- */
-static int do_for_each_entry(struct ref_cache *refs, const char *base,
- each_ref_entry_fn fn, void *cb_data)
-{
+struct files_ref_iterator {
+ struct ref_iterator base;
+
struct packed_ref_cache *packed_ref_cache;
- struct ref_dir *loose_dir;
- struct ref_dir *packed_dir;
- int retval = 0;
+ struct ref_iterator *iter0;
+ unsigned int flags;
+};
- /*
- * We must make sure that all loose refs are read before accessing the
- * packed-refs file; this avoids a race condition in which loose refs
- * are migrated to the packed-refs file by a simultaneous process, but
- * our in-memory view is from before the migration. get_packed_ref_cache()
- * takes care of making sure our view is up to date with what is on
- * disk.
- */
- loose_dir = get_loose_refs(refs);
- if (base && *base) {
- loose_dir = find_containing_dir(loose_dir, base, 0);
- }
- if (loose_dir)
- prime_ref_dir(loose_dir);
+static int files_ref_iterator_advance(struct ref_iterator *ref_iterator)
+{
+ struct files_ref_iterator *iter =
+ (struct files_ref_iterator *)ref_iterator;
+ int ok;
- packed_ref_cache = get_packed_ref_cache(refs);
- acquire_packed_ref_cache(packed_ref_cache);
- packed_dir = get_packed_ref_dir(packed_ref_cache);
- if (base && *base) {
- packed_dir = find_containing_dir(packed_dir, base, 0);
- }
-
- if (packed_dir && loose_dir) {
- sort_ref_dir(packed_dir);
- sort_ref_dir(loose_dir);
- retval = do_for_each_entry_in_dirs(
- packed_dir, loose_dir, fn, cb_data);
- } else if (packed_dir) {
- sort_ref_dir(packed_dir);
- retval = do_for_each_entry_in_dir(
- packed_dir, 0, fn, cb_data);
- } else if (loose_dir) {
- sort_ref_dir(loose_dir);
- retval = do_for_each_entry_in_dir(
- loose_dir, 0, fn, cb_data);
+ while ((ok = ref_iterator_advance(iter->iter0)) == ITER_OK) {
+ if (!(iter->flags & DO_FOR_EACH_INCLUDE_BROKEN) &&
+ !ref_resolves_to_object(iter->iter0->refname,
+ iter->iter0->oid,
+ iter->iter0->flags))
+ continue;
+
+ iter->base.refname = iter->iter0->refname;
+ iter->base.oid = iter->iter0->oid;
+ iter->base.flags = iter->iter0->flags;
+ return ITER_OK;
}
- release_packed_ref_cache(packed_ref_cache);
- return retval;
+ iter->iter0 = NULL;
+ if (ref_iterator_abort(ref_iterator) != ITER_DONE)
+ ok = ITER_ERROR;
+
+ return ok;
}
-/*
- * Call fn for each reference in the specified ref_cache for which the
- * refname begins with base. If trim is non-zero, then trim that many
- * characters off the beginning of each refname before passing the
- * refname to fn. flags can be DO_FOR_EACH_INCLUDE_BROKEN to include
- * broken references in the iteration. If fn ever returns a non-zero
- * value, stop the iteration and return that value; otherwise, return
- * 0.
- */
-int do_for_each_ref(const char *submodule, const char *base,
- each_ref_fn fn, int trim, int flags, void *cb_data)
+static int files_ref_iterator_peel(struct ref_iterator *ref_iterator,
+ struct object_id *peeled)
{
- struct ref_entry_cb data;
- struct ref_cache *refs;
+ struct files_ref_iterator *iter =
+ (struct files_ref_iterator *)ref_iterator;
+
+ return ref_iterator_peel(iter->iter0, peeled);
+}
+
+static int files_ref_iterator_abort(struct ref_iterator *ref_iterator)
+{
+ struct files_ref_iterator *iter =
+ (struct files_ref_iterator *)ref_iterator;
+ int ok = ITER_DONE;
+
+ if (iter->iter0)
+ ok = ref_iterator_abort(iter->iter0);
- refs = get_ref_cache(submodule);
- data.base = base;
- data.trim = trim;
- data.flags = flags;
- data.fn = fn;
- data.cb_data = cb_data;
+ release_packed_ref_cache(iter->packed_ref_cache);
+ base_ref_iterator_free(ref_iterator);
+ return ok;
+}
+
+static struct ref_iterator_vtable files_ref_iterator_vtable = {
+ files_ref_iterator_advance,
+ files_ref_iterator_peel,
+ files_ref_iterator_abort
+};
+
+struct ref_iterator *files_ref_iterator_begin(
+ const char *submodule,
+ const char *prefix, unsigned int flags)
+{
+ struct ref_cache *refs = get_ref_cache(submodule);
+ struct ref_dir *loose_dir, *packed_dir;
+ struct ref_iterator *loose_iter, *packed_iter;
+ struct files_ref_iterator *iter;
+ struct ref_iterator *ref_iterator;
+
+ if (!refs)
+ return empty_ref_iterator_begin();
if (ref_paranoia < 0)
ref_paranoia = git_env_bool("GIT_REF_PARANOIA", 0);
if (ref_paranoia)
- data.flags |= DO_FOR_EACH_INCLUDE_BROKEN;
+ flags |= DO_FOR_EACH_INCLUDE_BROKEN;
- return do_for_each_entry(refs, base, do_one_ref, &data);
-}
+ iter = xcalloc(1, sizeof(*iter));
+ ref_iterator = &iter->base;
+ base_ref_iterator_init(ref_iterator, &files_ref_iterator_vtable);
-static void unlock_ref(struct ref_lock *lock)
-{
- /* Do not free lock->lk -- atexit() still looks at them */
- if (lock->lk)
- rollback_lock_file(lock->lk);
- free(lock->ref_name);
- free(lock->orig_ref_name);
- free(lock);
+ /*
+ * We must make sure that all loose refs are read before
+ * accessing the packed-refs file; this avoids a race
+	 * condition in which loose refs are migrated to the packed-refs
+	 * file by a simultaneous process while our in-memory view is
+	 * still from before the migration. We ensure this as follows:
+ * First, we call prime_ref_dir(), which pre-reads the loose
+ * references for the subtree into the cache. (If they've
+ * already been read, that's OK; we only need to guarantee
+ * that they're read before the packed refs, not *how much*
+ * before.) After that, we call get_packed_ref_cache(), which
+ * internally checks whether the packed-ref cache is up to
+ * date with what is on disk, and re-reads it if not.
+ */
+
+ loose_dir = get_loose_refs(refs);
+
+ if (prefix && *prefix)
+ loose_dir = find_containing_dir(loose_dir, prefix, 0);
+
+ if (loose_dir) {
+ prime_ref_dir(loose_dir);
+ loose_iter = cache_ref_iterator_begin(loose_dir);
+ } else {
+ /* There's nothing to iterate over. */
+ loose_iter = empty_ref_iterator_begin();
+ }
+
+ iter->packed_ref_cache = get_packed_ref_cache(refs);
+ acquire_packed_ref_cache(iter->packed_ref_cache);
+ packed_dir = get_packed_ref_dir(iter->packed_ref_cache);
+
+ if (prefix && *prefix)
+ packed_dir = find_containing_dir(packed_dir, prefix, 0);
+
+ if (packed_dir) {
+ packed_iter = cache_ref_iterator_begin(packed_dir);
+ } else {
+ /* There's nothing to iterate over. */
+ packed_iter = empty_ref_iterator_begin();
+ }
+
+ iter->iter0 = overlay_ref_iterator_begin(loose_iter, packed_iter);
+ iter->flags = flags;
+
+ return ref_iterator;
}
/*
lock->old_oid.hash, NULL)) {
if (old_sha1) {
int save_errno = errno;
- strbuf_addf(err, "can't verify ref %s", lock->ref_name);
+ strbuf_addf(err, "can't verify ref '%s'", lock->ref_name);
errno = save_errno;
return -1;
} else {
- hashclr(lock->old_oid.hash);
+ oidclr(&lock->old_oid);
return 0;
}
}
if (old_sha1 && hashcmp(lock->old_oid.hash, old_sha1)) {
- strbuf_addf(err, "ref %s is at %s but expected %s",
+ strbuf_addf(err, "ref '%s' is at %s but expected %s",
lock->ref_name,
- sha1_to_hex(lock->old_oid.hash),
+ oid_to_hex(&lock->old_oid),
sha1_to_hex(old_sha1));
errno = EBUSY;
return -1;
const unsigned char *old_sha1,
const struct string_list *extras,
const struct string_list *skip,
- unsigned int flags, int *type_p,
+ unsigned int flags, int *type,
struct strbuf *err)
{
struct strbuf ref_file = STRBUF_INIT;
- struct strbuf orig_ref_file = STRBUF_INIT;
- const char *orig_refname = refname;
struct ref_lock *lock;
int last_errno = 0;
- int type;
- int lflags = 0;
+ int lflags = LOCK_NO_DEREF;
int mustexist = (old_sha1 && !is_null_sha1(old_sha1));
- int resolve_flags = 0;
+ int resolve_flags = RESOLVE_REF_NO_RECURSE;
int attempts_remaining = 3;
+ int resolved;
assert(err);
resolve_flags |= RESOLVE_REF_READING;
if (flags & REF_DELETING)
resolve_flags |= RESOLVE_REF_ALLOW_BAD_NAME;
- if (flags & REF_NODEREF) {
- resolve_flags |= RESOLVE_REF_NO_RECURSE;
- lflags |= LOCK_NO_DEREF;
- }
- refname = resolve_ref_unsafe(refname, resolve_flags,
- lock->old_oid.hash, &type);
- if (!refname && errno == EISDIR) {
+ strbuf_git_path(&ref_file, "%s", refname);
+ resolved = !!resolve_ref_unsafe(refname, resolve_flags,
+ lock->old_oid.hash, type);
+ if (!resolved && errno == EISDIR) {
/*
* we are trying to lock foo but we used to
* have foo/bar which now does not exist;
* it is normal for the empty directory 'foo'
* to remain.
*/
- strbuf_git_path(&orig_ref_file, "%s", orig_refname);
- if (remove_empty_directories(&orig_ref_file)) {
+ if (remove_empty_directories(&ref_file)) {
last_errno = errno;
- if (!verify_refname_available_dir(orig_refname, extras, skip,
+ if (!verify_refname_available_dir(refname, extras, skip,
get_loose_refs(&ref_cache), err))
strbuf_addf(err, "there are still refs under '%s'",
- orig_refname);
+ refname);
goto error_return;
}
- refname = resolve_ref_unsafe(orig_refname, resolve_flags,
- lock->old_oid.hash, &type);
+ resolved = !!resolve_ref_unsafe(refname, resolve_flags,
+ lock->old_oid.hash, type);
}
- if (type_p)
- *type_p = type;
- if (!refname) {
+ if (!resolved) {
last_errno = errno;
if (last_errno != ENOTDIR ||
- !verify_refname_available_dir(orig_refname, extras, skip,
+ !verify_refname_available_dir(refname, extras, skip,
get_loose_refs(&ref_cache), err))
- strbuf_addf(err, "unable to resolve reference %s: %s",
- orig_refname, strerror(last_errno));
+ strbuf_addf(err, "unable to resolve reference '%s': %s",
+ refname, strerror(last_errno));
goto error_return;
}
- if (flags & REF_NODEREF)
- refname = orig_refname;
-
/*
* If the ref did not exist and we are creating it, make sure
* there is no existing packed ref whose name begins with our
lock->lk = xcalloc(1, sizeof(struct lock_file));
lock->ref_name = xstrdup(refname);
- lock->orig_ref_name = xstrdup(orig_refname);
- strbuf_git_path(&ref_file, "%s", refname);
retry:
switch (safe_create_leading_directories_const(ref_file.buf)) {
/* fall through */
default:
last_errno = errno;
- strbuf_addf(err, "unable to create directory for %s",
+ strbuf_addf(err, "unable to create directory for '%s'",
ref_file.buf);
goto error_return;
}
out:
strbuf_release(&ref_file);
- strbuf_release(&orig_ref_file);
errno = last_errno;
return lock;
}
return 0;
/* Do not pack symbolic or broken refs: */
- if ((entry->flag & REF_ISSYMREF) || !ref_resolves_to_object(entry))
+ if ((entry->flag & REF_ISSYMREF) || !entry_resolves_to_object(entry))
return 0;
/* Add a packed ref cache entry equivalent to the loose entry. */
transaction = ref_transaction_begin(&err);
if (!transaction ||
ref_transaction_delete(transaction, r->name, r->sha1,
- REF_ISPRUNING, NULL, &err) ||
+ REF_ISPRUNING | REF_NODEREF, NULL, &err) ||
ref_transaction_commit(transaction, &err)) {
ref_transaction_free(transaction);
error("%s", err.buf);
return 0;
}
-int delete_refs(struct string_list *refnames)
+int delete_refs(struct string_list *refnames, unsigned int flags)
{
struct strbuf err = STRBUF_INIT;
int i, result = 0;
for (i = 0; i < refnames->nr; i++) {
const char *refname = refnames->items[i].string;
- if (delete_ref(refname, NULL, 0))
+ if (delete_ref(refname, NULL, flags))
result |= error(_("could not remove reference %s"), refname);
}
}
int verify_refname_available(const char *newname,
- struct string_list *extras,
- struct string_list *skip,
+ const struct string_list *extras,
+ const struct string_list *skip,
struct strbuf *err)
{
struct ref_dir *packed_refs = get_packed_refs(&ref_cache);
const unsigned char *sha1, struct strbuf *err);
static int commit_ref_update(struct ref_lock *lock,
const unsigned char *sha1, const char *logmsg,
- int flags, struct strbuf *err);
+ struct strbuf *err);
int rename_ref(const char *oldrefname, const char *newrefname, const char *logmsg)
{
struct ref_lock *lock;
struct stat loginfo;
int log = !lstat(git_path("logs/%s", oldrefname), &loginfo);
- const char *symref = NULL;
struct strbuf err = STRBUF_INIT;
if (log && S_ISLNK(loginfo.st_mode))
return error("reflog for %s is a symlink", oldrefname);
- symref = resolve_ref_unsafe(oldrefname, RESOLVE_REF_READING,
- orig_sha1, &flag);
+ if (!resolve_ref_unsafe(oldrefname, RESOLVE_REF_READING | RESOLVE_REF_NO_RECURSE,
+ orig_sha1, &flag))
+ return error("refname %s not found", oldrefname);
+
if (flag & REF_ISSYMREF)
return error("refname %s is a symbolic ref, renaming it is not supported",
oldrefname);
- if (!symref)
- return error("refname %s not found", oldrefname);
-
if (!rename_ref_available(oldrefname, newrefname))
return 1;
goto rollback;
}
- if (!read_ref_full(newrefname, RESOLVE_REF_READING, sha1, NULL) &&
- delete_ref(newrefname, sha1, REF_NODEREF)) {
+ /*
+ * Since we are doing a shallow lookup, sha1 is not the
+ * correct value to pass to delete_ref as old_sha1. But that
+ * doesn't matter, because an old_sha1 check wouldn't add to
+ * the safety anyway; we want to delete the reference whatever
+ * its current value.
+ */
+ if (!read_ref_full(newrefname, RESOLVE_REF_READING | RESOLVE_REF_NO_RECURSE,
+ sha1, NULL) &&
+ delete_ref(newrefname, NULL, REF_NODEREF)) {
if (errno==EISDIR) {
struct strbuf path = STRBUF_INIT;
int result;
logmoved = log;
- lock = lock_ref_sha1_basic(newrefname, NULL, NULL, NULL, 0, NULL, &err);
+ lock = lock_ref_sha1_basic(newrefname, NULL, NULL, NULL, REF_NODEREF,
+ NULL, &err);
if (!lock) {
error("unable to rename '%s' to '%s': %s", oldrefname, newrefname, err.buf);
strbuf_release(&err);
hashcpy(lock->old_oid.hash, orig_sha1);
if (write_ref_to_lockfile(lock, orig_sha1, &err) ||
- commit_ref_update(lock, orig_sha1, logmsg, 0, &err)) {
+ commit_ref_update(lock, orig_sha1, logmsg, &err)) {
error("unable to write current sha1 into %s: %s", newrefname, err.buf);
strbuf_release(&err);
goto rollback;
return 0;
rollback:
- lock = lock_ref_sha1_basic(oldrefname, NULL, NULL, NULL, 0, NULL, &err);
+ lock = lock_ref_sha1_basic(oldrefname, NULL, NULL, NULL, REF_NODEREF,
+ NULL, &err);
if (!lock) {
error("unable to lock %s for rollback: %s", oldrefname, err.buf);
strbuf_release(&err);
flag = log_all_ref_updates;
log_all_ref_updates = 0;
if (write_ref_to_lockfile(lock, orig_sha1, &err) ||
- commit_ref_update(lock, orig_sha1, NULL, 0, &err)) {
+ commit_ref_update(lock, orig_sha1, NULL, &err)) {
error("unable to write current sha1 into %s: %s", oldrefname, err.buf);
strbuf_release(&err);
}
static int commit_ref(struct ref_lock *lock)
{
+ char *path = get_locked_file_path(lock->lk);
+ struct stat st;
+
+ if (!lstat(path, &st) && S_ISDIR(st.st_mode)) {
+ /*
+ * There is a directory at the path we want to rename
+ * the lockfile to. Hopefully it is empty; try to
+ * delete it.
+ */
+ size_t len = strlen(path);
+ struct strbuf sb_path = STRBUF_INIT;
+
+ strbuf_attach(&sb_path, path, len, len);
+
+ /*
+ * If this fails, commit_lock_file() will also fail
+ * and will report the problem.
+ */
+ remove_empty_directories(&sb_path);
+ strbuf_release(&sb_path);
+ } else {
+ free(path);
+ }
+
if (commit_lock_file(lock->lk))
return -1;
return 0;
strbuf_git_path(logfile, "logs/%s", refname);
if (force_create || should_autocreate_reflog(refname)) {
if (safe_create_leading_directories(logfile->buf) < 0) {
- strbuf_addf(err, "unable to create directory for %s: "
+ strbuf_addf(err, "unable to create directory for '%s': "
"%s", logfile->buf, strerror(errno));
return -1;
}
if (errno == EISDIR) {
if (remove_empty_directories(logfile)) {
- strbuf_addf(err, "There are still logs under "
+ strbuf_addf(err, "there are still logs under "
"'%s'", logfile->buf);
return -1;
}
}
if (logfd < 0) {
- strbuf_addf(err, "unable to append to %s: %s",
+ strbuf_addf(err, "unable to append to '%s': %s",
logfile->buf, strerror(errno));
return -1;
}
result = log_ref_write_fd(logfd, old_sha1, new_sha1,
git_committer_info(0), msg);
if (result) {
- strbuf_addf(err, "unable to append to %s: %s", logfile->buf,
+ strbuf_addf(err, "unable to append to '%s': %s", logfile->buf,
strerror(errno));
close(logfd);
return -1;
}
if (close(logfd)) {
- strbuf_addf(err, "unable to append to %s: %s", logfile->buf,
+ strbuf_addf(err, "unable to append to '%s': %s", logfile->buf,
strerror(errno));
return -1;
}
o = parse_object(sha1);
if (!o) {
strbuf_addf(err,
- "Trying to write ref %s with nonexistent object %s",
+ "trying to write ref '%s' with nonexistent object %s",
lock->ref_name, sha1_to_hex(sha1));
unlock_ref(lock);
return -1;
}
if (o->type != OBJ_COMMIT && is_branch(lock->ref_name)) {
strbuf_addf(err,
- "Trying to write non-commit object %s to branch %s",
+ "trying to write non-commit object %s to branch '%s'",
sha1_to_hex(sha1), lock->ref_name);
unlock_ref(lock);
return -1;
write_in_full(fd, &term, 1) != 1 ||
close_ref(lock) < 0) {
strbuf_addf(err,
- "Couldn't write %s", get_lock_file_path(lock->lk));
+ "couldn't write '%s'", get_lock_file_path(lock->lk));
unlock_ref(lock);
return -1;
}
*/
static int commit_ref_update(struct ref_lock *lock,
const unsigned char *sha1, const char *logmsg,
- int flags, struct strbuf *err)
+ struct strbuf *err)
{
clear_loose_ref_cache(&ref_cache);
- if (log_ref_write(lock->ref_name, lock->old_oid.hash, sha1, logmsg, flags, err) < 0 ||
- (strcmp(lock->ref_name, lock->orig_ref_name) &&
- log_ref_write(lock->orig_ref_name, lock->old_oid.hash, sha1, logmsg, flags, err) < 0)) {
+ if (log_ref_write(lock->ref_name, lock->old_oid.hash, sha1, logmsg, 0, err)) {
char *old_msg = strbuf_detach(err, NULL);
- strbuf_addf(err, "Cannot update the ref '%s': %s",
+ strbuf_addf(err, "cannot update the ref '%s': %s",
lock->ref_name, old_msg);
free(old_msg);
unlock_ref(lock);
return -1;
}
- if (strcmp(lock->orig_ref_name, "HEAD") != 0) {
+
+ if (strcmp(lock->ref_name, "HEAD") != 0) {
/*
* Special hack: If a branch is updated directly and HEAD
* points to it (may happen on the remote side of a push
unsigned char head_sha1[20];
int head_flag;
const char *head_ref;
+
head_ref = resolve_ref_unsafe("HEAD", RESOLVE_REF_READING,
head_sha1, &head_flag);
if (head_ref && (head_flag & REF_ISSYMREF) &&
}
}
}
+
if (commit_ref(lock)) {
- error("Couldn't set %s", lock->ref_name);
+ strbuf_addf(err, "couldn't set '%s'", lock->ref_name);
unlock_ref(lock);
return -1;
}
lock = xcalloc(1, sizeof(struct ref_lock));
lock->lk = &head_lock;
lock->ref_name = xstrdup(head_rel);
- lock->orig_ref_name = xstrdup(head_rel);
ret = create_symref_locked(lock, head_rel, target, NULL);
strbuf_release(&sb);
return ret;
}
-/*
- * Call fn for each reflog in the namespace indicated by name. name
- * must be empty or end with '/'. Name will be used as a scratch
- * space, but its contents will be restored before return.
- */
-static int do_for_each_reflog(struct strbuf *name, each_ref_fn fn, void *cb_data)
-{
- DIR *d = opendir(git_path("logs/%s", name->buf));
- int retval = 0;
- struct dirent *de;
- int oldlen = name->len;
- if (!d)
- return name->len ? errno : 0;
+struct files_reflog_iterator {
+ struct ref_iterator base;
- while ((de = readdir(d)) != NULL) {
- struct stat st;
+ struct dir_iterator *dir_iterator;
+ struct object_id oid;
+};
- if (de->d_name[0] == '.')
+static int files_reflog_iterator_advance(struct ref_iterator *ref_iterator)
+{
+ struct files_reflog_iterator *iter =
+ (struct files_reflog_iterator *)ref_iterator;
+ struct dir_iterator *diter = iter->dir_iterator;
+ int ok;
+
+ while ((ok = dir_iterator_advance(diter)) == ITER_OK) {
+ int flags;
+
+ if (!S_ISREG(diter->st.st_mode))
continue;
- if (ends_with(de->d_name, ".lock"))
+ if (diter->basename[0] == '.')
+ continue;
+ if (ends_with(diter->basename, ".lock"))
continue;
- strbuf_addstr(name, de->d_name);
- if (stat(git_path("logs/%s", name->buf), &st) < 0) {
- ; /* silently ignore */
- } else {
- if (S_ISDIR(st.st_mode)) {
- strbuf_addch(name, '/');
- retval = do_for_each_reflog(name, fn, cb_data);
- } else {
- struct object_id oid;
- if (read_ref_full(name->buf, 0, oid.hash, NULL))
- retval = error("bad ref for %s", name->buf);
- else
- retval = fn(name->buf, &oid, 0, cb_data);
- }
- if (retval)
- break;
+ if (read_ref_full(diter->relative_path, 0,
+ iter->oid.hash, &flags)) {
+ error("bad ref for %s", diter->path.buf);
+ continue;
}
- strbuf_setlen(name, oldlen);
+
+ iter->base.refname = diter->relative_path;
+ iter->base.oid = &iter->oid;
+ iter->base.flags = flags;
+ return ITER_OK;
}
- closedir(d);
- return retval;
+
+ iter->dir_iterator = NULL;
+ if (ref_iterator_abort(ref_iterator) == ITER_ERROR)
+ ok = ITER_ERROR;
+ return ok;
+}
+
+static int files_reflog_iterator_peel(struct ref_iterator *ref_iterator,
+ struct object_id *peeled)
+{
+ die("BUG: ref_iterator_peel() called for reflog_iterator");
+}
+
+static int files_reflog_iterator_abort(struct ref_iterator *ref_iterator)
+{
+ struct files_reflog_iterator *iter =
+ (struct files_reflog_iterator *)ref_iterator;
+ int ok = ITER_DONE;
+
+ if (iter->dir_iterator)
+ ok = dir_iterator_abort(iter->dir_iterator);
+
+ base_ref_iterator_free(ref_iterator);
+ return ok;
+}
+
+static struct ref_iterator_vtable files_reflog_iterator_vtable = {
+ files_reflog_iterator_advance,
+ files_reflog_iterator_peel,
+ files_reflog_iterator_abort
+};
+
+struct ref_iterator *files_reflog_iterator_begin(void)
+{
+ struct files_reflog_iterator *iter = xcalloc(1, sizeof(*iter));
+ struct ref_iterator *ref_iterator = &iter->base;
+
+ base_ref_iterator_init(ref_iterator, &files_reflog_iterator_vtable);
+ iter->dir_iterator = dir_iterator_begin(git_path("logs"));
+ return ref_iterator;
}
int for_each_reflog(each_ref_fn fn, void *cb_data)
{
- int retval;
- struct strbuf name;
- strbuf_init(&name, PATH_MAX);
- retval = do_for_each_reflog(&name, fn, cb_data);
- strbuf_release(&name);
- return retval;
+ return do_for_each_ref_iterator(files_reflog_iterator_begin(),
+ fn, cb_data);
}
static int ref_update_reject_duplicates(struct string_list *refnames,
for (i = 1; i < n; i++)
if (!strcmp(refnames->items[i - 1].string, refnames->items[i].string)) {
strbuf_addf(err,
- "Multiple updates for ref '%s' not allowed.",
+ "multiple updates for ref '%s' not allowed.",
refnames->items[i].string);
return 1;
}
return 0;
}
+/*
+ * If update is a direct update of head_ref (the reference pointed to
+ * by HEAD), then add an extra REF_LOG_ONLY update for HEAD.
+ */
+static int split_head_update(struct ref_update *update,
+ struct ref_transaction *transaction,
+ const char *head_ref,
+ struct string_list *affected_refnames,
+ struct strbuf *err)
+{
+ struct string_list_item *item;
+ struct ref_update *new_update;
+
+ if ((update->flags & REF_LOG_ONLY) ||
+ (update->flags & REF_ISPRUNING) ||
+ (update->flags & REF_UPDATE_VIA_HEAD))
+ return 0;
+
+ if (strcmp(update->refname, head_ref))
+ return 0;
+
+ /*
+ * First make sure that HEAD is not already in the
+ * transaction. This insertion is O(N) in the transaction
+ * size, but it happens at most once per transaction.
+ */
+ item = string_list_insert(affected_refnames, "HEAD");
+ if (item->util) {
+ /* An entry already existed */
+ strbuf_addf(err,
+ "multiple updates for 'HEAD' (including one "
+ "via its referent '%s') are not allowed",
+ update->refname);
+ return TRANSACTION_NAME_CONFLICT;
+ }
+
+ new_update = ref_transaction_add_update(
+ transaction, "HEAD",
+ update->flags | REF_LOG_ONLY | REF_NODEREF,
+ update->new_sha1, update->old_sha1,
+ update->msg);
+
+ item->util = new_update;
+
+ return 0;
+}
+
+/*
+ * update is for a symref that points at referent and doesn't have
+ * REF_NODEREF set. Split it into two updates:
+ * - The original update, but with REF_LOG_ONLY and REF_NODEREF set
+ * - A new, separate update for the referent reference
+ * Note that the new update will itself be subject to splitting when
+ * the iteration gets to it.
+ */
+static int split_symref_update(struct ref_update *update,
+ const char *referent,
+ struct ref_transaction *transaction,
+ struct string_list *affected_refnames,
+ struct strbuf *err)
+{
+ struct string_list_item *item;
+ struct ref_update *new_update;
+ unsigned int new_flags;
+
+ /*
+ * First make sure that referent is not already in the
+ * transaction. This insertion is O(N) in the transaction
+ * size, but it happens at most once per symref in a
+ * transaction.
+ */
+ item = string_list_insert(affected_refnames, referent);
+ if (item->util) {
+ /* An entry already existed */
+ strbuf_addf(err,
+ "multiple updates for '%s' (including one "
+ "via symref '%s') are not allowed",
+ referent, update->refname);
+ return TRANSACTION_NAME_CONFLICT;
+ }
+
+ new_flags = update->flags;
+ if (!strcmp(update->refname, "HEAD")) {
+ /*
+ * Record that the new update came via HEAD, so that
+ * when we process it, split_head_update() doesn't try
+ * to add another reflog update for HEAD. Note that
+ * this bit will be propagated if the new_update
+ * itself needs to be split.
+ */
+ new_flags |= REF_UPDATE_VIA_HEAD;
+ }
+
+ new_update = ref_transaction_add_update(
+ transaction, referent, new_flags,
+ update->new_sha1, update->old_sha1,
+ update->msg);
+
+ new_update->parent_update = update;
+
+ /*
+ * Change the symbolic ref update to log only. Also, it
+ * doesn't need to check its old SHA-1 value, as that will be
+ * done when new_update is processed.
+ */
+ update->flags |= REF_LOG_ONLY | REF_NODEREF;
+ update->flags &= ~REF_HAVE_OLD;
+
+ item->util = new_update;
+
+ return 0;
+}
+
+/*
+ * Return the refname under which update was originally requested.
+ */
+static const char *original_update_refname(struct ref_update *update)
+{
+ while (update->parent_update)
+ update = update->parent_update;
+
+ return update->refname;
+}
+
+/*
+ * Check whether the REF_HAVE_OLD and old_oid values stored in update
+ * are consistent with oid, which is the reference's current value. If
+ * everything is OK, return 0; otherwise, write an error message to
+ * err and return -1.
+ */
+static int check_old_oid(struct ref_update *update, struct object_id *oid,
+ struct strbuf *err)
+{
+ if (!(update->flags & REF_HAVE_OLD) ||
+ !hashcmp(oid->hash, update->old_sha1))
+ return 0;
+
+ if (is_null_sha1(update->old_sha1))
+ strbuf_addf(err, "cannot lock ref '%s': "
+ "reference already exists",
+ original_update_refname(update));
+ else if (is_null_oid(oid))
+ strbuf_addf(err, "cannot lock ref '%s': "
+ "reference is missing but expected %s",
+ original_update_refname(update),
+ sha1_to_hex(update->old_sha1));
+ else
+ strbuf_addf(err, "cannot lock ref '%s': "
+ "is at %s but expected %s",
+ original_update_refname(update),
+ oid_to_hex(oid),
+ sha1_to_hex(update->old_sha1));
+
+ return -1;
+}
+
+/*
+ * Prepare for carrying out update:
+ * - Lock the reference referred to by update.
+ * - Read the reference under lock.
+ * - Check that its old SHA-1 value (if specified) is correct, and in
+ * any case record it in update->lock->old_oid for later use when
+ * writing the reflog.
+ * - If it is a symref update without REF_NODEREF, split it up into a
+ * REF_LOG_ONLY update of the symref and add a separate update for
+ * the referent to transaction.
+ * - If it is an update of head_ref, add a corresponding REF_LOG_ONLY
+ * update of HEAD.
+ */
+static int lock_ref_for_update(struct ref_update *update,
+ struct ref_transaction *transaction,
+ const char *head_ref,
+ struct string_list *affected_refnames,
+ struct strbuf *err)
+{
+ struct strbuf referent = STRBUF_INIT;
+ int mustexist = (update->flags & REF_HAVE_OLD) &&
+ !is_null_sha1(update->old_sha1);
+ int ret;
+ struct ref_lock *lock;
+
+ if ((update->flags & REF_HAVE_NEW) && is_null_sha1(update->new_sha1))
+ update->flags |= REF_DELETING;
+
+ if (head_ref) {
+ ret = split_head_update(update, transaction, head_ref,
+ affected_refnames, err);
+ if (ret)
+ return ret;
+ }
+
+ ret = lock_raw_ref(update->refname, mustexist,
+ affected_refnames, NULL,
+ &update->lock, &referent,
+ &update->type, err);
+
+ if (ret) {
+ char *reason;
+
+ reason = strbuf_detach(err, NULL);
+ strbuf_addf(err, "cannot lock ref '%s': %s",
+ original_update_refname(update), reason);
+ free(reason);
+ return ret;
+ }
+
+ lock = update->lock;
+
+ if (update->type & REF_ISSYMREF) {
+ if (update->flags & REF_NODEREF) {
+ /*
+ * We won't be reading the referent as part of
+ * the transaction, so we have to read it here
+ * to record and possibly check old_sha1:
+ */
+ if (read_ref_full(referent.buf, 0,
+ lock->old_oid.hash, NULL)) {
+ if (update->flags & REF_HAVE_OLD) {
+ strbuf_addf(err, "cannot lock ref '%s': "
+ "error reading reference",
+ original_update_refname(update));
+ return -1;
+ }
+ } else if (check_old_oid(update, &lock->old_oid, err)) {
+ return TRANSACTION_GENERIC_ERROR;
+ }
+ } else {
+ /*
+ * Create a new update for the reference this
+ * symref is pointing at. Also, we will record
+ * and verify old_sha1 for this update as part
+ * of processing the split-off update, so we
+ * don't have to do it here.
+ */
+ ret = split_symref_update(update, referent.buf, transaction,
+ affected_refnames, err);
+ if (ret)
+ return ret;
+ }
+ } else {
+ struct ref_update *parent_update;
+
+ if (check_old_oid(update, &lock->old_oid, err))
+ return TRANSACTION_GENERIC_ERROR;
+
+ /*
+ * If this update is happening indirectly because of a
+ * symref update, record the old SHA-1 in the parent
+ * update:
+ */
+ for (parent_update = update->parent_update;
+ parent_update;
+ parent_update = parent_update->parent_update) {
+ oidcpy(&parent_update->lock->old_oid, &lock->old_oid);
+ }
+ }
+
+ if ((update->flags & REF_HAVE_NEW) &&
+ !(update->flags & REF_DELETING) &&
+ !(update->flags & REF_LOG_ONLY)) {
+ if (!(update->type & REF_ISSYMREF) &&
+ !hashcmp(lock->old_oid.hash, update->new_sha1)) {
+ /*
+ * The reference already has the desired
+ * value, so we don't need to write it.
+ */
+ } else if (write_ref_to_lockfile(lock, update->new_sha1,
+ err)) {
+ char *write_err = strbuf_detach(err, NULL);
+
+ /*
+ * The lock was freed upon failure of
+ * write_ref_to_lockfile():
+ */
+ update->lock = NULL;
+ strbuf_addf(err,
+ "cannot update ref '%s': %s",
+ update->refname, write_err);
+ free(write_err);
+ return TRANSACTION_GENERIC_ERROR;
+ } else {
+ update->flags |= REF_NEEDS_COMMIT;
+ }
+ }
+ if (!(update->flags & REF_NEEDS_COMMIT)) {
+ /*
+ * We didn't call write_ref_to_lockfile(), so
+ * the lockfile is still open. Close it to
+ * free up the file descriptor:
+ */
+ if (close_ref(lock)) {
+ strbuf_addf(err, "couldn't close '%s.lock'",
+ update->refname);
+ return TRANSACTION_GENERIC_ERROR;
+ }
+ }
+ return 0;
+}
+
int ref_transaction_commit(struct ref_transaction *transaction,
struct strbuf *err)
{
int ret = 0, i;
- int n = transaction->nr;
- struct ref_update **updates = transaction->updates;
struct string_list refs_to_delete = STRING_LIST_INIT_NODUP;
struct string_list_item *ref_to_delete;
struct string_list affected_refnames = STRING_LIST_INIT_NODUP;
+ char *head_ref = NULL;
+ int head_type;
+ struct object_id head_oid;
assert(err);
if (transaction->state != REF_TRANSACTION_OPEN)
die("BUG: commit called for transaction that is not open");
- if (!n) {
+ if (!transaction->nr) {
transaction->state = REF_TRANSACTION_CLOSED;
return 0;
}
- /* Fail if a refname appears more than once in the transaction: */
- for (i = 0; i < n; i++)
- string_list_append(&affected_refnames, updates[i]->refname);
+ /*
+ * Fail if a refname appears more than once in the
+ * transaction. (If we end up splitting up any updates using
+ * split_symref_update() or split_head_update(), those
+ * functions will check that the new updates don't have the
+ * same refname as any existing ones.)
+ */
+ for (i = 0; i < transaction->nr; i++) {
+ struct ref_update *update = transaction->updates[i];
+ struct string_list_item *item =
+ string_list_append(&affected_refnames, update->refname);
+
+ /*
+ * We store a pointer to update in item->util, but at
+ * the moment we never use the value of this field
+ * except to check whether it is non-NULL.
+ */
+ item->util = update;
+ }
string_list_sort(&affected_refnames);
if (ref_update_reject_duplicates(&affected_refnames, err)) {
ret = TRANSACTION_GENERIC_ERROR;
goto cleanup;
}
+ /*
+ * Special hack: If a branch is updated directly and HEAD
+ * points to it (may happen on the remote side of a push
+ * for example) then logically the HEAD reflog should be
+ * updated too.
+ *
+ * A generic solution would require reverse symref lookups,
+ * but finding all symrefs pointing to a given branch would be
+ * rather costly for this rare event (the direct update of a
+ * branch) to be worth it. So let's cheat and check with HEAD
+ * only, which should cover 99% of all usage scenarios (even
+ * 100% of the default ones).
+ *
+ * So if HEAD is a symbolic reference, then record the name of
+ * the reference that it points to. If we see an update of
+ * head_ref within the transaction, then split_head_update()
+ * arranges for the reflog of HEAD to be updated, too.
+ */
+ head_ref = resolve_refdup("HEAD", RESOLVE_REF_NO_RECURSE,
+ head_oid.hash, &head_type);
+
+ if (head_ref && !(head_type & REF_ISSYMREF)) {
+ free(head_ref);
+ head_ref = NULL;
+ }
+
/*
* Acquire all locks, verify old values if provided, check
* that new values are valid, and write new values to the
* lockfiles, ready to be activated. Only keep one lockfile
* open at a time to avoid running out of file descriptors.
*/
- for (i = 0; i < n; i++) {
- struct ref_update *update = updates[i];
+ for (i = 0; i < transaction->nr; i++) {
+ struct ref_update *update = transaction->updates[i];
- if ((update->flags & REF_HAVE_NEW) &&
- is_null_sha1(update->new_sha1))
- update->flags |= REF_DELETING;
- update->lock = lock_ref_sha1_basic(
- update->refname,
- ((update->flags & REF_HAVE_OLD) ?
- update->old_sha1 : NULL),
- &affected_refnames, NULL,
- update->flags,
- &update->type,
- err);
- if (!update->lock) {
- char *reason;
-
- ret = (errno == ENOTDIR)
- ? TRANSACTION_NAME_CONFLICT
- : TRANSACTION_GENERIC_ERROR;
- reason = strbuf_detach(err, NULL);
- strbuf_addf(err, "cannot lock ref '%s': %s",
- update->refname, reason);
- free(reason);
+ ret = lock_ref_for_update(update, transaction, head_ref,
+ &affected_refnames, err);
+ if (ret)
goto cleanup;
- }
- if ((update->flags & REF_HAVE_NEW) &&
- !(update->flags & REF_DELETING)) {
- int overwriting_symref = ((update->type & REF_ISSYMREF) &&
- (update->flags & REF_NODEREF));
-
- if (!overwriting_symref &&
- !hashcmp(update->lock->old_oid.hash, update->new_sha1)) {
- /*
- * The reference already has the desired
- * value, so we don't need to write it.
- */
- } else if (write_ref_to_lockfile(update->lock,
- update->new_sha1,
- err)) {
- char *write_err = strbuf_detach(err, NULL);
+ }
- /*
- * The lock was freed upon failure of
- * write_ref_to_lockfile():
- */
+ /* Perform updates first so live commits remain referenced */
+ for (i = 0; i < transaction->nr; i++) {
+ struct ref_update *update = transaction->updates[i];
+ struct ref_lock *lock = update->lock;
+
+ if (update->flags & REF_NEEDS_COMMIT ||
+ update->flags & REF_LOG_ONLY) {
+ if (log_ref_write(lock->ref_name, lock->old_oid.hash,
+ update->new_sha1,
+ update->msg, update->flags, err)) {
+ char *old_msg = strbuf_detach(err, NULL);
+
+ strbuf_addf(err, "cannot update the ref '%s': %s",
+ lock->ref_name, old_msg);
+ free(old_msg);
+ unlock_ref(lock);
update->lock = NULL;
- strbuf_addf(err,
- "cannot update the ref '%s': %s",
- update->refname, write_err);
- free(write_err);
ret = TRANSACTION_GENERIC_ERROR;
goto cleanup;
- } else {
- update->flags |= REF_NEEDS_COMMIT;
- }
- }
- if (!(update->flags & REF_NEEDS_COMMIT)) {
- /*
- * We didn't have to write anything to the lockfile.
- * Close it to free up the file descriptor:
- */
- if (close_ref(update->lock)) {
- strbuf_addf(err, "Couldn't close %s.lock",
- update->refname);
- goto cleanup;
}
}
- }
-
- /* Perform updates first so live commits remain referenced */
- for (i = 0; i < n; i++) {
- struct ref_update *update = updates[i];
-
if (update->flags & REF_NEEDS_COMMIT) {
- if (commit_ref_update(update->lock,
- update->new_sha1, update->msg,
- update->flags, err)) {
- /* freed by commit_ref_update(): */
+ clear_loose_ref_cache(&ref_cache);
+ if (commit_ref(lock)) {
+ strbuf_addf(err, "couldn't set '%s'", lock->ref_name);
+ unlock_ref(lock);
update->lock = NULL;
ret = TRANSACTION_GENERIC_ERROR;
goto cleanup;
- } else {
- /* freed by commit_ref_update(): */
- update->lock = NULL;
}
}
}
-
/* Perform deletes now that updates are safely completed */
- for (i = 0; i < n; i++) {
- struct ref_update *update = updates[i];
+ for (i = 0; i < transaction->nr; i++) {
+ struct ref_update *update = transaction->updates[i];
- if (update->flags & REF_DELETING) {
+ if (update->flags & REF_DELETING &&
+ !(update->flags & REF_LOG_ONLY)) {
if (delete_ref_loose(update->lock, update->type, err)) {
ret = TRANSACTION_GENERIC_ERROR;
goto cleanup;
cleanup:
transaction->state = REF_TRANSACTION_CLOSED;
- for (i = 0; i < n; i++)
- if (updates[i]->lock)
- unlock_ref(updates[i]->lock);
+ for (i = 0; i < transaction->nr; i++)
+ if (transaction->updates[i]->lock)
+ unlock_ref(transaction->updates[i]->lock);
string_list_clear(&refs_to_delete, 0);
+ free(head_ref);
string_list_clear(&affected_refnames, 0);
+
return ret;
}
struct strbuf *err)
{
int ret = 0, i;
- int n = transaction->nr;
- struct ref_update **updates = transaction->updates;
struct string_list affected_refnames = STRING_LIST_INIT_NODUP;
assert(err);
die("BUG: commit called for transaction that is not open");
/* Fail if a refname appears more than once in the transaction: */
- for (i = 0; i < n; i++)
- string_list_append(&affected_refnames, updates[i]->refname);
+ for (i = 0; i < transaction->nr; i++)
+ string_list_append(&affected_refnames,
+ transaction->updates[i]->refname);
string_list_sort(&affected_refnames);
if (ref_update_reject_duplicates(&affected_refnames, err)) {
ret = TRANSACTION_GENERIC_ERROR;
if (for_each_rawref(ref_present, &affected_refnames))
die("BUG: initial ref transaction called with existing refs");
- for (i = 0; i < n; i++) {
- struct ref_update *update = updates[i];
+ for (i = 0; i < transaction->nr; i++) {
+ struct ref_update *update = transaction->updates[i];
if ((update->flags & REF_HAVE_OLD) &&
!is_null_sha1(update->old_sha1))
goto cleanup;
}
- for (i = 0; i < n; i++) {
- struct ref_update *update = updates[i];
+ for (i = 0; i < transaction->nr; i++) {
+ struct ref_update *update = transaction->updates[i];
if ((update->flags & REF_HAVE_NEW) &&
!is_null_sha1(update->new_sha1))
--- /dev/null
+/*
+ * Generic reference iterator infrastructure. See refs-internal.h for
+ * documentation about the design and use of reference iterators.
+ */
+
+#include "cache.h"
+#include "refs.h"
+#include "refs/refs-internal.h"
+#include "iterator.h"
+
+int ref_iterator_advance(struct ref_iterator *ref_iterator)
+{
+ return ref_iterator->vtable->advance(ref_iterator);
+}
+
+int ref_iterator_peel(struct ref_iterator *ref_iterator,
+ struct object_id *peeled)
+{
+ return ref_iterator->vtable->peel(ref_iterator, peeled);
+}
+
+int ref_iterator_abort(struct ref_iterator *ref_iterator)
+{
+ return ref_iterator->vtable->abort(ref_iterator);
+}
+
+void base_ref_iterator_init(struct ref_iterator *iter,
+ struct ref_iterator_vtable *vtable)
+{
+ iter->vtable = vtable;
+ iter->refname = NULL;
+ iter->oid = NULL;
+ iter->flags = 0;
+}
+
+void base_ref_iterator_free(struct ref_iterator *iter)
+{
+ /* Help make use-after-free bugs fail quickly: */
+ iter->vtable = NULL;
+ free(iter);
+}
+
+struct empty_ref_iterator {
+ struct ref_iterator base;
+};
+
+static int empty_ref_iterator_advance(struct ref_iterator *ref_iterator)
+{
+ return ref_iterator_abort(ref_iterator);
+}
+
+static int empty_ref_iterator_peel(struct ref_iterator *ref_iterator,
+ struct object_id *peeled)
+{
+ die("BUG: peel called for empty iterator");
+}
+
+static int empty_ref_iterator_abort(struct ref_iterator *ref_iterator)
+{
+ base_ref_iterator_free(ref_iterator);
+ return ITER_DONE;
+}
+
+static struct ref_iterator_vtable empty_ref_iterator_vtable = {
+ empty_ref_iterator_advance,
+ empty_ref_iterator_peel,
+ empty_ref_iterator_abort
+};
+
+struct ref_iterator *empty_ref_iterator_begin(void)
+{
+ struct empty_ref_iterator *iter = xcalloc(1, sizeof(*iter));
+ struct ref_iterator *ref_iterator = &iter->base;
+
+ base_ref_iterator_init(ref_iterator, &empty_ref_iterator_vtable);
+ return ref_iterator;
+}
+
+int is_empty_ref_iterator(struct ref_iterator *ref_iterator)
+{
+ return ref_iterator->vtable == &empty_ref_iterator_vtable;
+}
+
+struct merge_ref_iterator {
+ struct ref_iterator base;
+
+ struct ref_iterator *iter0, *iter1;
+
+ ref_iterator_select_fn *select;
+ void *cb_data;
+
+ /*
+ * A pointer to iter0 or iter1 (whichever is supplying the
+ * current value), or NULL if advance has not yet been called.
+ */
+ struct ref_iterator **current;
+};
+
+static int merge_ref_iterator_advance(struct ref_iterator *ref_iterator)
+{
+ struct merge_ref_iterator *iter =
+ (struct merge_ref_iterator *)ref_iterator;
+ int ok;
+
+ if (!iter->current) {
+ /* Initialize: advance both iterators to their first entries */
+ if ((ok = ref_iterator_advance(iter->iter0)) != ITER_OK) {
+ iter->iter0 = NULL;
+ if (ok == ITER_ERROR)
+ goto error;
+ }
+ if ((ok = ref_iterator_advance(iter->iter1)) != ITER_OK) {
+ iter->iter1 = NULL;
+ if (ok == ITER_ERROR)
+ goto error;
+ }
+ } else {
+ /*
+ * Advance the current iterator past the just-used
+ * entry:
+ */
+ if ((ok = ref_iterator_advance(*iter->current)) != ITER_OK) {
+ *iter->current = NULL;
+ if (ok == ITER_ERROR)
+ goto error;
+ }
+ }
+
+ /* Loop until we find an entry that we can yield. */
+ while (1) {
+ struct ref_iterator **secondary;
+ enum iterator_selection selection =
+ iter->select(iter->iter0, iter->iter1, iter->cb_data);
+
+ if (selection == ITER_SELECT_DONE) {
+ return ref_iterator_abort(ref_iterator);
+ } else if (selection == ITER_SELECT_ERROR) {
+ ref_iterator_abort(ref_iterator);
+ return ITER_ERROR;
+ }
+
+ if ((selection & ITER_CURRENT_SELECTION_MASK) == 0) {
+ iter->current = &iter->iter0;
+ secondary = &iter->iter1;
+ } else {
+ iter->current = &iter->iter1;
+ secondary = &iter->iter0;
+ }
+
+ if (selection & ITER_SKIP_SECONDARY) {
+ if ((ok = ref_iterator_advance(*secondary)) != ITER_OK) {
+ *secondary = NULL;
+ if (ok == ITER_ERROR)
+ goto error;
+ }
+ }
+
+ if (selection & ITER_YIELD_CURRENT) {
+ iter->base.refname = (*iter->current)->refname;
+ iter->base.oid = (*iter->current)->oid;
+ iter->base.flags = (*iter->current)->flags;
+ return ITER_OK;
+ }
+ }
+
+error:
+ ref_iterator_abort(ref_iterator);
+ return ITER_ERROR;
+}
+
+static int merge_ref_iterator_peel(struct ref_iterator *ref_iterator,
+ struct object_id *peeled)
+{
+ struct merge_ref_iterator *iter =
+ (struct merge_ref_iterator *)ref_iterator;
+
+ if (!iter->current) {
+ die("BUG: peel called before advance for merge iterator");
+ }
+ return ref_iterator_peel(*iter->current, peeled);
+}
+
+static int merge_ref_iterator_abort(struct ref_iterator *ref_iterator)
+{
+ struct merge_ref_iterator *iter =
+ (struct merge_ref_iterator *)ref_iterator;
+ int ok = ITER_DONE;
+
+ if (iter->iter0) {
+ if (ref_iterator_abort(iter->iter0) != ITER_DONE)
+ ok = ITER_ERROR;
+ }
+ if (iter->iter1) {
+ if (ref_iterator_abort(iter->iter1) != ITER_DONE)
+ ok = ITER_ERROR;
+ }
+ base_ref_iterator_free(ref_iterator);
+ return ok;
+}
+
+static struct ref_iterator_vtable merge_ref_iterator_vtable = {
+ merge_ref_iterator_advance,
+ merge_ref_iterator_peel,
+ merge_ref_iterator_abort
+};
+
+struct ref_iterator *merge_ref_iterator_begin(
+ struct ref_iterator *iter0, struct ref_iterator *iter1,
+ ref_iterator_select_fn *select, void *cb_data)
+{
+ struct merge_ref_iterator *iter = xcalloc(1, sizeof(*iter));
+ struct ref_iterator *ref_iterator = &iter->base;
+
+ /*
+ * We can't do the same kind of is_empty_ref_iterator()-style
+ * optimization here as overlay_ref_iterator_begin() does,
+ * because we don't know the semantics of the select function.
+ * It might, for example, implement "intersect" by passing
+ * references through only if they exist in both iterators.
+ */
+
+ base_ref_iterator_init(ref_iterator, &merge_ref_iterator_vtable);
+ iter->iter0 = iter0;
+ iter->iter1 = iter1;
+ iter->select = select;
+ iter->cb_data = cb_data;
+ iter->current = NULL;
+ return ref_iterator;
+}
+
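
The comment above notes that a select function could, for example, implement an intersection. As an editorial sketch (not part of this patch), the function below would make merge_ref_iterator_begin() yield only the references present in both sub-iterators. It is built from the mask and flag constants that merge_ref_iterator_advance() interprets above, and it assumes the usual iterator.h and refs/refs-internal.h declarations are in scope; the function name is invented.

    #include "cache.h"
    #include "refs.h"
    #include "refs/refs-internal.h"
    #include "iterator.h"

    static enum iterator_selection intersect_iterator_select(
            struct ref_iterator *iter0, struct ref_iterator *iter1,
            void *cb_data)
    {
            int cmp;

            if (!iter0 || !iter1)
                    /* One side is exhausted, so the intersection is, too. */
                    return ITER_SELECT_DONE;

            cmp = strcmp(iter0->refname, iter1->refname);

            if (cmp < 0)
                    /* Only in iter0: advance iter0 (the secondary here), yield nothing. */
                    return ITER_CURRENT_SELECTION_MASK | ITER_SKIP_SECONDARY;
            else if (cmp > 0)
                    /* Only in iter1: advance iter1, yield nothing. */
                    return ITER_SKIP_SECONDARY;
            else
                    /* Present in both: yield iter0's entry, skip iter1's copy. */
                    return ITER_SELECT_0_SKIP_1;
    }

Passing this as merge_ref_iterator_begin(a, b, intersect_iterator_select, NULL) then iterates over only the names common to a and b; as with the overlay iterator below, both sub-iterators must produce their entries in strcmp() order for this to work.
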
+/*
+ * A ref_iterator_select_fn that overlays the items from front on top
+ * of those from back (like loose refs over packed refs). See
+ * overlay_ref_iterator_begin().
+ */
+static enum iterator_selection overlay_iterator_select(
+ struct ref_iterator *front, struct ref_iterator *back,
+ void *cb_data)
+{
+ int cmp;
+
+ if (!back)
+ return front ? ITER_SELECT_0 : ITER_SELECT_DONE;
+ else if (!front)
+ return ITER_SELECT_1;
+
+ cmp = strcmp(front->refname, back->refname);
+
+ if (cmp < 0)
+ return ITER_SELECT_0;
+ else if (cmp > 0)
+ return ITER_SELECT_1;
+ else
+ return ITER_SELECT_0_SKIP_1;
+}
+
+struct ref_iterator *overlay_ref_iterator_begin(
+ struct ref_iterator *front, struct ref_iterator *back)
+{
+ /*
+ * Optimization: if one of the iterators is empty, return the
+ * other one rather than incurring the overhead of wrapping
+ * them.
+ */
+ if (is_empty_ref_iterator(front)) {
+ ref_iterator_abort(front);
+ return back;
+ } else if (is_empty_ref_iterator(back)) {
+ ref_iterator_abort(back);
+ return front;
+ }
+
+ return merge_ref_iterator_begin(front, back,
+ overlay_iterator_select, NULL);
+}
+
+struct prefix_ref_iterator {
+ struct ref_iterator base;
+
+ struct ref_iterator *iter0;
+ char *prefix;
+ int trim;
+};
+
+static int prefix_ref_iterator_advance(struct ref_iterator *ref_iterator)
+{
+ struct prefix_ref_iterator *iter =
+ (struct prefix_ref_iterator *)ref_iterator;
+ int ok;
+
+ while ((ok = ref_iterator_advance(iter->iter0)) == ITER_OK) {
+ if (!starts_with(iter->iter0->refname, iter->prefix))
+ continue;
+
+ iter->base.refname = iter->iter0->refname + iter->trim;
+ iter->base.oid = iter->iter0->oid;
+ iter->base.flags = iter->iter0->flags;
+ return ITER_OK;
+ }
+
+ iter->iter0 = NULL;
+ if (ref_iterator_abort(ref_iterator) != ITER_DONE)
+ return ITER_ERROR;
+ return ok;
+}
+
+static int prefix_ref_iterator_peel(struct ref_iterator *ref_iterator,
+ struct object_id *peeled)
+{
+ struct prefix_ref_iterator *iter =
+ (struct prefix_ref_iterator *)ref_iterator;
+
+ return ref_iterator_peel(iter->iter0, peeled);
+}
+
+static int prefix_ref_iterator_abort(struct ref_iterator *ref_iterator)
+{
+ struct prefix_ref_iterator *iter =
+ (struct prefix_ref_iterator *)ref_iterator;
+ int ok = ITER_DONE;
+
+ if (iter->iter0)
+ ok = ref_iterator_abort(iter->iter0);
+ free(iter->prefix);
+ base_ref_iterator_free(ref_iterator);
+ return ok;
+}
+
+static struct ref_iterator_vtable prefix_ref_iterator_vtable = {
+ prefix_ref_iterator_advance,
+ prefix_ref_iterator_peel,
+ prefix_ref_iterator_abort
+};
+
+struct ref_iterator *prefix_ref_iterator_begin(struct ref_iterator *iter0,
+ const char *prefix,
+ int trim)
+{
+ struct prefix_ref_iterator *iter;
+ struct ref_iterator *ref_iterator;
+
+ if (!*prefix && !trim)
+ return iter0; /* optimization: no need to wrap iterator */
+
+ iter = xcalloc(1, sizeof(*iter));
+ ref_iterator = &iter->base;
+
+ base_ref_iterator_init(ref_iterator, &prefix_ref_iterator_vtable);
+
+ iter->iter0 = iter0;
+ iter->prefix = xstrdup(prefix);
+ iter->trim = trim;
+
+ return ref_iterator;
+}
+
+struct ref_iterator *current_ref_iter = NULL;
+
+int do_for_each_ref_iterator(struct ref_iterator *iter,
+ each_ref_fn fn, void *cb_data)
+{
+ int retval = 0, ok;
+ struct ref_iterator *old_ref_iter = current_ref_iter;
+
+ current_ref_iter = iter;
+ while ((ok = ref_iterator_advance(iter)) == ITER_OK) {
+ retval = fn(iter->refname, iter->oid, iter->flags, cb_data);
+ if (retval) {
+ /*
+ * If ref_iterator_abort() returns ITER_ERROR,
+ * we ignore that error in deference to the
+ * callback function's return value.
+ */
+ ref_iterator_abort(iter);
+ goto out;
+ }
+ }
+
+out:
+ current_ref_iter = old_ref_iter;
+ if (ok == ITER_ERROR)
+ return -1;
+ return retval;
+}
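
To show how these adapters are meant to be combined, here is a minimal, hypothetical caller (an editorial sketch, not part of the patch): it composes files_ref_iterator_begin() and prefix_ref_iterator_begin(), both declared in refs/refs-internal.h below, and drains the result through do_for_each_ref_iterator(). The helper names are invented, and a NULL submodule is assumed to mean the main repository.

    #include "cache.h"
    #include "refs.h"
    #include "refs/refs-internal.h"

    static int count_one_ref(const char *refname, const struct object_id *oid,
                             int flags, void *cb_data)
    {
            int *count = cb_data;

            (*count)++;
            return 0;  /* returning non-zero would stop the iteration early */
    }

    static int count_tag_refs(void)
    {
            int count = 0;
            struct ref_iterator *iter =
                    prefix_ref_iterator_begin(
                            files_ref_iterator_begin(NULL, "refs/tags/", 0),
                            "refs/tags/", 0);

            /* do_for_each_ref_iterator() frees the iterator in every case. */
            if (do_for_each_ref_iterator(iter, count_one_ref, &count) < 0)
                    return -1;
            return count;
    }
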
/*
* Used as a flag in ref_update::flags when a loose ref is being
- * pruned.
+ * pruned. This flag must only be used when REF_NODEREF is set.
*/
#define REF_ISPRUNING 0x04
* value to ref_update::flags
*/
+/*
+ * Used as a flag in ref_update::flags when we want to log a ref
+ * update but not actually perform it. This is used when a symbolic
+ * ref update is split up.
+ */
+#define REF_LOG_ONLY 0x80
+
+/*
+ * Internal flag, meaning that the containing ref_update was via an
+ * update to HEAD.
+ */
+#define REF_UPDATE_VIA_HEAD 0x100
+
/*
* Return true iff refname is minimally safe. "Safe" here means that
* deleting a loose reference by this name will not do any damage, for
* extras and skip must be sorted.
*/
int verify_refname_available(const char *newname,
- struct string_list *extras,
- struct string_list *skip,
+ const struct string_list *extras,
+ const struct string_list *skip,
struct strbuf *err);
/*
* not exist before update.
*/
struct ref_update {
+
/*
* If (flags & REF_HAVE_NEW), set the reference to this value:
*/
unsigned char new_sha1[20];
+
/*
* If (flags & REF_HAVE_OLD), check that the reference
* previously had this value:
*/
unsigned char old_sha1[20];
+
/*
* One or more of REF_HAVE_NEW, REF_HAVE_OLD, REF_NODEREF,
- * REF_DELETING, and REF_ISPRUNING:
+ * REF_DELETING, REF_ISPRUNING, REF_LOG_ONLY, and
+ * REF_UPDATE_VIA_HEAD:
*/
unsigned int flags;
+
struct ref_lock *lock;
- int type;
+ unsigned int type;
char *msg;
+
+ /*
+ * If this ref_update was split off of a symref update via
+ * split_symref_update(), then this member points at that
+ * update. This is used for two purposes:
+ * 1. When reporting errors, we report the refname under which
+ * the update was originally requested.
+ * 2. When we read the old value of this reference, we
+ * propagate it back to its parent update for recording in
+ * the latter's reflog.
+ */
+ struct ref_update *parent_update;
+
const char refname[FLEX_ARRAY];
};
+/*
+ * Add a ref_update with the specified properties to transaction, and
+ * return a pointer to the new object. This function does not verify
+ * that refname is well-formed. new_sha1 and old_sha1 are only
+ * dereferenced if the REF_HAVE_NEW and REF_HAVE_OLD bits,
+ * respectively, are set in flags.
+ */
+struct ref_update *ref_transaction_add_update(
+ struct ref_transaction *transaction,
+ const char *refname, unsigned int flags,
+ const unsigned char *new_sha1,
+ const unsigned char *old_sha1,
+ const char *msg);
+
/*
* Transaction states.
* OPEN: The transaction is in a valid state and can accept new updates.
#define DO_FOR_EACH_INCLUDE_BROKEN 0x01
/*
- * The common backend for the for_each_*ref* functions
+ * Reference iterators
+ *
+ * A reference iterator encapsulates the state of an in-progress
+ * iteration over references. Create an instance of `struct
+ * ref_iterator` via one of the functions in this module.
+ *
+ * A freshly-created ref_iterator doesn't yet point at a reference. To
+ * advance the iterator, call ref_iterator_advance(). If successful,
+ * this sets the iterator's refname, oid, and flags fields to describe
+ * the next reference and returns ITER_OK. The data pointed at by
+ * refname and oid belong to the iterator; if you want to retain them
+ * after calling ref_iterator_advance() again or calling
+ * ref_iterator_abort(), you must make a copy. When the iteration has
+ * been exhausted, ref_iterator_advance() releases any resources
+ * associated with the iteration, frees the ref_iterator object, and
+ * returns ITER_DONE. If you want to abort the iteration early, call
+ * ref_iterator_abort(), which also frees the ref_iterator object and
+ * any associated resources. If there was an internal error advancing
+ * to the next entry, ref_iterator_advance() aborts the iteration,
+ * frees the ref_iterator, and returns ITER_ERROR.
+ *
+ * The reference currently being looked at can be peeled by calling
+ * ref_iterator_peel(). This function is often faster than peel_ref(),
+ * so it should be preferred when iterating over references.
+ *
+ * Putting it all together, a typical iteration looks like this:
+ *
+ * int ok;
+ * struct ref_iterator *iter = ...;
+ *
+ * while ((ok = ref_iterator_advance(iter)) == ITER_OK) {
+ * if (want_to_stop_iteration()) {
+ * ok = ref_iterator_abort(iter);
+ * break;
+ * }
+ *
+ * // Access information about the current reference:
+ * if (!(iter->flags & REF_ISSYMREF))
+ * printf("%s is %s\n", iter->refname, oid_to_hex(iter->oid));
+ *
+ * // If you need to peel the reference:
+ * ref_iterator_peel(iter, &oid);
+ * }
+ *
+ * if (ok != ITER_DONE)
+ * handle_error();
+ */
+struct ref_iterator {
+ struct ref_iterator_vtable *vtable;
+ const char *refname;
+ const struct object_id *oid;
+ unsigned int flags;
+};
+
+/*
+ * Advance the iterator to the first or next item and return ITER_OK.
+ * If the iteration is exhausted, free the resources associated with
+ * the ref_iterator and return ITER_DONE. On errors, free the iterator
+ * resources and return ITER_ERROR. It is a bug to use ref_iterator or
+ * call this function again after it has returned ITER_DONE or
+ * ITER_ERROR.
+ */
+int ref_iterator_advance(struct ref_iterator *ref_iterator);
+
+/*
+ * If possible, peel the reference currently being viewed by the
+ * iterator. Return 0 on success.
+ */
+int ref_iterator_peel(struct ref_iterator *ref_iterator,
+ struct object_id *peeled);
+
+/*
+ * End the iteration before it has been exhausted, freeing the
+ * reference iterator and any associated resources and returning
+ * ITER_DONE. If the abort itself failed, return ITER_ERROR.
+ */
+int ref_iterator_abort(struct ref_iterator *ref_iterator);
+
+/*
+ * An iterator over nothing (its first ref_iterator_advance() call
+ * returns ITER_DONE).
+ */
+struct ref_iterator *empty_ref_iterator_begin(void);
+
+/*
+ * Return true iff ref_iterator is an empty_ref_iterator.
+ */
+int is_empty_ref_iterator(struct ref_iterator *ref_iterator);
+
+/*
+ * A callback function used to instruct merge_ref_iterator how to
+ * interleave the entries from iter0 and iter1. The function should
+ * return one of the constants defined in enum iterator_selection. It
+ * must not advance either of the iterators itself.
+ *
+ * The function must be prepared to handle the case that iter0 and/or
+ * iter1 is NULL, which indicates that the corresponding sub-iterator
+ * has been exhausted. Its return value must be consistent with the
+ * current states of the iterators; e.g., it must not return
+ * ITER_SKIP_1 if iter1 has already been exhausted.
*/
-int do_for_each_ref(const char *submodule, const char *base,
- each_ref_fn fn, int trim, int flags, void *cb_data);
+typedef enum iterator_selection ref_iterator_select_fn(
+ struct ref_iterator *iter0, struct ref_iterator *iter1,
+ void *cb_data);
+/*
+ * Iterate over the entries from iter0 and iter1, with the values
+ * interleaved as directed by the select function. The iterator takes
+ * ownership of iter0 and iter1 and frees them when the iteration is
+ * over.
+ */
+struct ref_iterator *merge_ref_iterator_begin(
+ struct ref_iterator *iter0, struct ref_iterator *iter1,
+ ref_iterator_select_fn *select, void *cb_data);
+
+/*
+ * An iterator consisting of the union of the entries from front and
+ * back. If there are entries common to the two sub-iterators, use the
+ * one from front. Each iterator must iterate over its entries in
+ * strcmp() order by refname for this to work.
+ *
+ * The new iterator takes ownership of its arguments and frees them
+ * when the iteration is over. As a convenience to callers, if front
+ * or back is an empty_ref_iterator, then abort that one immediately
+ * and return the other iterator directly, without wrapping it.
+ */
+struct ref_iterator *overlay_ref_iterator_begin(
+ struct ref_iterator *front, struct ref_iterator *back);
+
+/*
+ * Wrap iter0, only letting through the references whose names start
+ * with prefix. If trim is set, set iter->refname to the name of the
+ * reference with that many characters trimmed off the front;
+ * otherwise set it to the full refname. The new iterator takes over
+ * ownership of iter0 and frees it when iteration is over. It makes
+ * its own copy of prefix.
+ *
+ * As a convenience to callers, if prefix is the empty string and
+ * trim is zero, this function returns iter0 directly, without
+ * wrapping it.
+ */
+struct ref_iterator *prefix_ref_iterator_begin(struct ref_iterator *iter0,
+ const char *prefix,
+ int trim);
+
+/*
+ * Iterate over the packed and loose references in the specified
+ * submodule that are within find_containing_dir(prefix). If prefix is
+ * NULL or the empty string, iterate over all references in the
+ * submodule.
+ */
+struct ref_iterator *files_ref_iterator_begin(const char *submodule,
+ const char *prefix,
+ unsigned int flags);
+
+/*
+ * Iterate over the references in the main ref_store that have a
+ * reflog. The paths within a directory are iterated over in arbitrary
+ * order.
+ */
+struct ref_iterator *files_reflog_iterator_begin(void);
+
+/* Internal implementation of reference iteration: */
+
+/*
+ * Base class constructor for ref_iterators. Initialize the
+ * ref_iterator part of iter, setting its vtable pointer as specified.
+ * This is meant to be called only by the initializers of derived
+ * classes.
+ */
+void base_ref_iterator_init(struct ref_iterator *iter,
+ struct ref_iterator_vtable *vtable);
+
+/*
+ * Base class destructor for ref_iterators. Destroy the ref_iterator
+ * part of iter and shallow-free the object. This is meant to be
+ * called only by the destructors of derived classes.
+ */
+void base_ref_iterator_free(struct ref_iterator *iter);
+
+/* Virtual function declarations for ref_iterators: */
+
+typedef int ref_iterator_advance_fn(struct ref_iterator *ref_iterator);
+
+typedef int ref_iterator_peel_fn(struct ref_iterator *ref_iterator,
+ struct object_id *peeled);
+
+/*
+ * Implementations of this function should free any resources specific
+ * to the derived class, then call base_ref_iterator_free() to clean
+ * up and free the ref_iterator object.
+ */
+typedef int ref_iterator_abort_fn(struct ref_iterator *ref_iterator);
+
+struct ref_iterator_vtable {
+ ref_iterator_advance_fn *advance;
+ ref_iterator_peel_fn *peel;
+ ref_iterator_abort_fn *abort;
+};
+
+/*
+ * current_ref_iter is a performance hack: when iterating over
+ * references using the for_each_ref*() functions, current_ref_iter is
+ * set to the reference iterator before calling the callback function.
+ * If the callback function calls peel_ref(), then peel_ref() first
+ * checks whether the reference to be peeled is the one referred to by
+ * the iterator (it usually is) and if so, asks the iterator for the
+ * peeled version of the reference if it is available. This avoids a
+ * refname lookup in a common case. current_ref_iter is set to NULL
+ * when the iteration is over.
+ */
+extern struct ref_iterator *current_ref_iter;
+
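
For illustration only (not part of the patch), a callback registered through the ordinary for_each_ref*() API benefits from this hack implicitly: peeling the reference currently being visited is cheap, because peel_ref() asks the live iterator for the peeled value instead of looking the name up again. The helper below is hypothetical and assumes peel_ref()'s current unsigned char[20] signature.

    #include "cache.h"
    #include "refs.h"

    static int show_peeled(const char *refname, const struct object_id *oid,
                           int flags, void *cb_data)
    {
            struct object_id peeled;

            /* Cheap here: peel_ref() consults current_ref_iter first. */
            if (!peel_ref(refname, peeled.hash))
                    printf("%s^{} -> %s\n", refname, oid_to_hex(&peeled));
            return 0;
    }

    static void show_all_peeled_tags(void)
    {
            for_each_tag_ref(show_peeled, NULL);
    }
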
+/*
+ * The common backend for the for_each_*ref* functions. Call fn for
+ * each reference in iter. If the iterator itself ever returns
+ * ITER_ERROR, return -1. If fn ever returns a non-zero value, stop
+ * the iteration and return that value. Otherwise, return 0. In any
+ * case, free the iterator when done. This function is basically an
+ * adapter between the callback style of reference iteration and the
+ * iterator style.
+ */
+int do_for_each_ref_iterator(struct ref_iterator *iter,
+ each_ref_fn fn, void *cb_data);
+
+/*
+ * Read the specified reference from the filesystem or packed refs
+ * file, non-recursively. Set type to describe the reference, and:
+ *
+ * - If refname is the name of a normal reference, fill in sha1
+ * (leaving referent unchanged).
+ *
+ * - If refname is the name of a symbolic reference, write the full
+ * name of the reference to which it refers (e.g.
+ * "refs/heads/master") to referent and set the REF_ISSYMREF bit in
+ * type (leaving sha1 unchanged). The caller is responsible for
+ * validating that referent is a valid reference name.
+ *
+ * WARNING: refname might be used as part of a filename, so it is
+ * important from a security standpoint that it be safe in the sense
+ * of refname_is_safe(). Moreover, for symrefs this function sets
+ * referent to whatever the repository says, which might not be a
+ * properly-formatted or even safe reference name. NEITHER INPUT NOR
+ * OUTPUT REFERENCE NAMES ARE VALIDATED WITHIN THIS FUNCTION.
+ *
+ * Return 0 on success. If the ref doesn't exist, set errno to ENOENT
+ * and return -1. If the ref exists but is neither a symbolic ref nor
+ * a sha1, it is broken; set REF_ISBROKEN in type, set errno to
+ * EINVAL, and return -1. If there is another error reading the ref,
+ * set errno appropriately and return -1.
+ *
+ * Backend-specific flags might be set in type as well, regardless of
+ * outcome.
+ *
+ * It is OK for refname to point into referent. If so:
+ *
+ * - if the function succeeds with REF_ISSYMREF, referent will be
+ * overwritten and the memory formerly pointed to by it might be
+ * changed or even freed.
+ *
+ * - in all other cases, referent will be untouched, and therefore
+ * refname will still be valid and unchanged.
+ */
int read_raw_ref(const char *refname, unsigned char *sha1,
- struct strbuf *symref, unsigned int *flags);
+ struct strbuf *referent, unsigned int *type);
#endif /* REFS_REFS_INTERNAL_H */
free(specs);
}
-int main(int argc, const char **argv)
+int cmd_main(int argc, const char **argv)
{
struct strbuf buf = STRBUF_INIT;
int nongit;
- git_setup_gettext();
-
- git_extract_argv0_path(argv[0]);
setup_git_directory_gently(&nongit);
if (argc < 2) {
error("remote-curl: usage: git remote-curl <remote> [<url>]");
return 0;
}
-int main(int argc, char **argv)
+int cmd_main(int argc, const char **argv)
{
struct strbuf buf = STRBUF_INIT, url_sb = STRBUF_INIT,
private_ref_sb = STRBUF_INIT, marksfilename_sb = STRBUF_INIT,
static struct remote *remote;
const char *url_in;
- git_extract_argv0_path(argv[0]);
setup_git_directory();
if (argc < 2 || argc > 3) {
usage("git-remote-svn <remote-name> [<url>]");
* branch.
*/
if (ref->expect_old_sha1) {
- if (ref->expect_old_no_trackback ||
- oidcmp(&ref->old_oid, &ref->old_oid_expect))
+ if (oidcmp(&ref->old_oid, &ref->old_oid_expect))
reject_reason = REF_STATUS_REJECT_STALE;
else
/* If the ref isn't stale then force the update. */
entry = add_cas_entry(cas, arg, colon - arg);
if (!*colon)
entry->use_tracking = 1;
+ else if (!colon[1])
+ hashclr(entry->expect);
else if (get_sha1(colon + 1, entry->expect))
return error("cannot parse expected object name '%s'", colon + 1);
return 0;
if (!entry->use_tracking)
hashcpy(ref->old_oid_expect.hash, cas->entry[i].expect);
else if (remote_tracking(remote, ref->name, &ref->old_oid_expect))
- ref->expect_old_no_trackback = 1;
+ oidclr(&ref->old_oid_expect);
return;
}
ref->expect_old_sha1 = 1;
if (remote_tracking(remote, ref->name, &ref->old_oid_expect))
- ref->expect_old_no_trackback = 1;
+ oidclr(&ref->old_oid_expect);
}
void apply_push_cas(struct push_cas_option *cas,
force:1,
forced_update:1,
expect_old_sha1:1,
- expect_old_no_trackback:1,
deletion:1,
matched:1;
ce_same_name(ce, active_cache[i+1]))
i++;
}
- free_pathspec(&revs->prune_data);
+ clear_pathspec(&revs->prune_data);
parse_pathspec(&revs->prune_data, PATHSPEC_ALL_MAGIC & ~PATHSPEC_LITERAL,
PATHSPEC_PREFER_FULL | PATHSPEC_LITERAL_PATH, "", prune);
revs->limited = 1;
revs->notes_opt.use_default_notes = 1;
} else if (!strcmp(arg, "--show-signature")) {
revs->show_signature = 1;
+ } else if (!strcmp(arg, "--no-show-signature")) {
+ revs->show_signature = 0;
} else if (!strcmp(arg, "--show-linear-break") ||
starts_with(arg, "--show-linear-break=")) {
if (starts_with(arg, "--show-linear-break="))
} else if (!strcmp(arg, "--grep-debug")) {
revs->grep_filter.debug = 1;
} else if (!strcmp(arg, "--basic-regexp")) {
- grep_set_pattern_type_option(GREP_PATTERN_TYPE_BRE, &revs->grep_filter);
+ revs->grep_filter.pattern_type_option = GREP_PATTERN_TYPE_BRE;
} else if (!strcmp(arg, "--extended-regexp") || !strcmp(arg, "-E")) {
- grep_set_pattern_type_option(GREP_PATTERN_TYPE_ERE, &revs->grep_filter);
+ revs->grep_filter.pattern_type_option = GREP_PATTERN_TYPE_ERE;
} else if (!strcmp(arg, "--regexp-ignore-case") || !strcmp(arg, "-i")) {
revs->grep_filter.regflags |= REG_ICASE;
DIFF_OPT_SET(&revs->diffopt, PICKAXE_IGNORE_CASE);
} else if (!strcmp(arg, "--fixed-strings") || !strcmp(arg, "-F")) {
- grep_set_pattern_type_option(GREP_PATTERN_TYPE_FIXED, &revs->grep_filter);
+ revs->grep_filter.pattern_type_option = GREP_PATTERN_TYPE_FIXED;
} else if (!strcmp(arg, "--perl-regexp")) {
- grep_set_pattern_type_option(GREP_PATTERN_TYPE_PCRE, &revs->grep_filter);
+ revs->grep_filter.pattern_type_option = GREP_PATTERN_TYPE_PCRE;
} else if (!strcmp(arg, "--all-match")) {
revs->grep_filter.all_match = 1;
} else if (!strcmp(arg, "--invert-grep")) {
static struct strbuf path = STRBUF_INIT;
strbuf_reset(&path);
- if (git_hooks_path)
- strbuf_addf(&path, "%s/%s", git_hooks_path, name);
- else
- strbuf_git_path(&path, "hooks/%s", name);
+ strbuf_git_path(&path, "hooks/%s", name);
if (access(path.buf, X_OK) < 0)
return NULL;
return path.buf;
return ret;
}
-int capture_command(struct child_process *cmd, struct strbuf *buf, size_t hint)
+struct io_pump {
+ /* initialized by caller */
+ int fd;
+ int type; /* POLLOUT or POLLIN */
+ union {
+ struct {
+ const char *buf;
+ size_t len;
+ } out;
+ struct {
+ struct strbuf *buf;
+ size_t hint;
+ } in;
+ } u;
+
+ /* returned by pump_io */
+ int error; /* 0 for success, otherwise errno */
+
+ /* internal use */
+ struct pollfd *pfd;
+};
+
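+/*
+ * Run one poll() round over the slots: write what we can to each ready
+ * POLLOUT slot and read whatever is available from each ready POLLIN slot,
+ * closing a slot's fd once it is finished or hits an error. Returns 0 when
+ * no active slots remain, non-zero when another round is needed.
+ */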
+static int pump_io_round(struct io_pump *slots, int nr, struct pollfd *pfd)
{
- cmd->out = -1;
+ int pollsize = 0;
+ int i;
+
+ for (i = 0; i < nr; i++) {
+ struct io_pump *io = &slots[i];
+ if (io->fd < 0)
+ continue;
+ pfd[pollsize].fd = io->fd;
+ pfd[pollsize].events = io->type;
+ io->pfd = &pfd[pollsize++];
+ }
+
+ if (!pollsize)
+ return 0;
+
+ if (poll(pfd, pollsize, -1) < 0) {
+ if (errno == EINTR)
+ return 1;
+ die_errno("poll failed");
+ }
+
+ for (i = 0; i < nr; i++) {
+ struct io_pump *io = &slots[i];
+
+ if (io->fd < 0)
+ continue;
+
+ if (!(io->pfd->revents & (POLLOUT|POLLIN|POLLHUP|POLLERR|POLLNVAL)))
+ continue;
+
+ if (io->type == POLLOUT) {
+ ssize_t len = xwrite(io->fd,
+ io->u.out.buf, io->u.out.len);
+ if (len < 0) {
+ io->error = errno;
+ close(io->fd);
+ io->fd = -1;
+ } else {
+ io->u.out.buf += len;
+ io->u.out.len -= len;
+ if (!io->u.out.len) {
+ close(io->fd);
+ io->fd = -1;
+ }
+ }
+ }
+
+ if (io->type == POLLIN) {
+ ssize_t len = strbuf_read_once(io->u.in.buf,
+ io->fd, io->u.in.hint);
+ if (len < 0)
+ io->error = errno;
+ if (len <= 0) {
+ close(io->fd);
+ io->fd = -1;
+ }
+ }
+ }
+
+ return 1;
+}
+
+static int pump_io(struct io_pump *slots, int nr)
+{
+ struct pollfd *pfd;
+ int i;
+
+ for (i = 0; i < nr; i++)
+ slots[i].error = 0;
+
+ ALLOC_ARRAY(pfd, nr);
+ while (pump_io_round(slots, nr, pfd))
+ ; /* nothing */
+ free(pfd);
+
+ /* There may be multiple errno values, so just pick the first. */
+ for (i = 0; i < nr; i++) {
+ if (slots[i].error) {
+ errno = slots[i].error;
+ return -1;
+ }
+ }
+ return 0;
+}
+
+
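+/*
+ * Feed "in" to the command while draining its stdout and stderr in
+ * parallel via pump_io(), so that neither side can block forever on a
+ * full pipe buffer.
+ */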
+int pipe_command(struct child_process *cmd,
+ const char *in, size_t in_len,
+ struct strbuf *out, size_t out_hint,
+ struct strbuf *err, size_t err_hint)
+{
+ struct io_pump io[3];
+ int nr = 0;
+
+ if (in)
+ cmd->in = -1;
+ if (out)
+ cmd->out = -1;
+ if (err)
+ cmd->err = -1;
+
if (start_command(cmd) < 0)
return -1;
- if (strbuf_read(buf, cmd->out, hint) < 0) {
- close(cmd->out);
+ if (in) {
+ io[nr].fd = cmd->in;
+ io[nr].type = POLLOUT;
+ io[nr].u.out.buf = in;
+ io[nr].u.out.len = in_len;
+ nr++;
+ }
+ if (out) {
+ io[nr].fd = cmd->out;
+ io[nr].type = POLLIN;
+ io[nr].u.in.buf = out;
+ io[nr].u.in.hint = out_hint;
+ nr++;
+ }
+ if (err) {
+ io[nr].fd = cmd->err;
+ io[nr].type = POLLIN;
+ io[nr].u.in.buf = err;
+ io[nr].u.in.hint = err_hint;
+ nr++;
+ }
+
+ if (pump_io(io, nr) < 0) {
finish_command(cmd); /* throw away exit code */
return -1;
}
- close(cmd->out);
return finish_command(cmd);
}
int run_command_v_opt_cd_env(const char **argv, int opt, const char *dir, const char *const *env);
/**
- * Execute the given command, capturing its stdout in the given strbuf.
+ * Execute the given command, sending "in" to its stdin, and capturing its
+ * stdout and stderr in the "out" and "err" strbufs. Any of the three may
+ * be NULL to skip processing.
+ *
* Returns -1 if starting the command fails or reading fails, and otherwise
- * returns the exit code of the command. The output collected in the
- * buffer is kept even if the command returns a non-zero exit. The hint field
- * gives a starting size for the strbuf allocation.
+ * returns the exit code of the command. Any output collected in the
+ * buffers is kept even if the command returns a non-zero exit. The hint fields
+ give starting sizes for the strbuf allocations.
*
* The fields of "cmd" should be set up as they would for a normal run_command
- * invocation. But note that there is no need to set cmd->out; the function
- * sets it up for the caller.
+ * invocation. But note that there is no need to set the in, out, or err
+ * fields; pipe_command handles that automatically.
+ */
+int pipe_command(struct child_process *cmd,
+ const char *in, size_t in_len,
+ struct strbuf *out, size_t out_hint,
+ struct strbuf *err, size_t err_hint);
+
+/**
+ * Convenience wrapper around pipe_command for the common case
+ * of capturing only stdout.
*/
-int capture_command(struct child_process *cmd, struct strbuf *buf, size_t hint);
+static inline int capture_command(struct child_process *cmd,
+ struct strbuf *out,
+ size_t hint)
+{
+ return pipe_command(cmd, NULL, 0, out, hint, NULL, 0);
+}
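A minimal usage sketch, illustrative only and not part of the patch; "hash_stdin_demo" is a hypothetical helper and error handling is trimmed:

	/* assumes cache.h, run-command.h and argv-array.h are included */
	static void hash_stdin_demo(void)
	{
		struct child_process cp = CHILD_PROCESS_INIT;
		struct strbuf out = STRBUF_INIT, err = STRBUF_INIT;
		const char *input = "some data\n";

		cp.git_cmd = 1;
		argv_array_pushl(&cp.args, "hash-object", "-w", "--stdin", NULL);
		if (pipe_command(&cp, input, strlen(input), &out, 0, &err, 0))
			die("hash-object failed: %s", err.buf);
		/* out.buf holds the new object name; err.buf holds any diagnostics */
		strbuf_release(&out);
		strbuf_release(&err);
	}
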
/*
* The purpose of the following functions is to feed a pipe by running
die("bad %s argument: %s", opt->long_name, arg);
}
-static int feed_object(const unsigned char *sha1, int fd, int negative)
+static void feed_object(const unsigned char *sha1, FILE *fh, int negative)
{
- char buf[42];
-
if (negative && !has_sha1_file(sha1))
- return 1;
+ return;
- memcpy(buf + negative, sha1_to_hex(sha1), 40);
if (negative)
- buf[0] = '^';
- buf[40 + negative] = '\n';
- return write_or_whine(fd, buf, 41 + negative, "send-pack: send refs");
+ putc('^', fh);
+ fputs(sha1_to_hex(sha1), fh);
+ putc('\n', fh);
}
/*
NULL,
};
struct child_process po = CHILD_PROCESS_INIT;
+ FILE *po_in;
int i;
i = 4;
* We feed the pack-objects we just spawned with revision
* parameters by writing to the pipe.
*/
+ po_in = xfdopen(po.in, "w");
for (i = 0; i < extra->nr; i++)
- if (!feed_object(extra->sha1[i], po.in, 1))
- break;
+ feed_object(extra->sha1[i], po_in, 1);
while (refs) {
- if (!is_null_oid(&refs->old_oid) &&
- !feed_object(refs->old_oid.hash, po.in, 1))
- break;
- if (!is_null_oid(&refs->new_oid) &&
- !feed_object(refs->new_oid.hash, po.in, 0))
- break;
+ if (!is_null_oid(&refs->old_oid))
+ feed_object(refs->old_oid.hash, po_in, 1);
+ if (!is_null_oid(&refs->new_oid))
+ feed_object(refs->new_oid.hash, po_in, 0);
refs = refs->next;
}
- close(po.in);
+ fflush(po_in);
+ if (ferror(po_in))
+ die_errno("error writing to pack-objects");
+ fclose(po_in);
if (args->stateless_rpc) {
char *buf = xmalloc(LARGE_PACKET_MAX);
const char *push_cert_nonce)
{
const struct ref *ref;
+ struct string_list_item *item;
char *signing_key = xstrdup(get_signing_key());
const char *cp, *np;
struct strbuf cert = STRBUF_INIT;
int update_seen = 0;
- strbuf_addf(&cert, "certificate version 0.1\n");
+ strbuf_addstr(&cert, "certificate version 0.1\n");
strbuf_addf(&cert, "pusher %s ", signing_key);
datestamp(&cert);
strbuf_addch(&cert, '\n');
}
if (push_cert_nonce[0])
strbuf_addf(&cert, "nonce %s\n", push_cert_nonce);
+ if (args->push_options)
+ for_each_string_list_item(item, args->push_options)
+ strbuf_addf(&cert, "push-option %s\n", item->string);
strbuf_addstr(&cert, "\n");
for (ref = remote_refs; ref; ref = ref->next) {
int agent_supported = 0;
int use_atomic = 0;
int atomic_supported = 0;
+ int use_push_options = 0;
+ int push_options_supported = 0;
unsigned cmds_sent = 0;
int ret;
struct async demux;
args->use_thin_pack = 0;
if (server_supports("atomic"))
atomic_supported = 1;
+ if (server_supports("push-options"))
+ push_options_supported = 1;
if (args->push_cert != SEND_PACK_PUSH_CERT_NEVER) {
int len;
use_atomic = atomic_supported && args->atomic;
+ if (args->push_options && !push_options_supported)
+ die(_("the receiving end does not support push options"));
+
+ use_push_options = push_options_supported && args->push_options;
+
if (status_report)
strbuf_addstr(&cap_buf, " report-status");
if (use_sideband)
strbuf_addstr(&cap_buf, " quiet");
if (use_atomic)
strbuf_addstr(&cap_buf, " atomic");
+ if (use_push_options)
+ strbuf_addstr(&cap_buf, " push-options");
if (agent_supported)
strbuf_addf(&cap_buf, " agent=%s", git_user_agent_sanitized());
strbuf_release(&req_buf);
strbuf_release(&cap_buf);
+ if (use_push_options) {
+ struct string_list_item *item;
+ struct strbuf sb = STRBUF_INIT;
+
+ for_each_string_list_item(item, args->push_options)
+ packet_buf_write(&sb, "%s", item->string);
+
+ write_or_die(out, sb.buf, sb.len);
+ packet_flush(out);
+ strbuf_release(&sb);
+ }
+
if (use_sideband && cmds_sent) {
memset(&demux, 0, sizeof(demux));
demux.proc = sideband_demux;
#ifndef SEND_PACK_H
#define SEND_PACK_H
+#include "string-list.h"
+
/* Possible values for push_cert field in send_pack_args. */
#define SEND_PACK_PUSH_CERT_NEVER 0
#define SEND_PACK_PUSH_CERT_IF_ASKED 1
push_cert:2,
stateless_rpc:1,
atomic:1;
+ const struct string_list *push_options;
};
struct option;
{
struct strbuf seq_dir = STRBUF_INIT;
- strbuf_addf(&seq_dir, "%s", git_path(SEQ_DIR));
+ strbuf_addstr(&seq_dir, git_path(SEQ_DIR));
remove_dir_recursively(&seq_dir, 0);
strbuf_release(&seq_dir);
}
die_errno(_("Could not write to %s"), filename);
strbuf_release(msgbuf);
if (commit_lock_file(&msg_file) < 0)
- die(_("Error wrapping up %s"), filename);
+ die(_("Error wrapping up %s."), filename);
}
static struct tree *empty_tree(void)
if (checkout_fast_forward(from, to, 1))
exit(128); /* the callee should have complained already */
- strbuf_addf(&sb, "%s: fast-forward", action_name(opts));
+ strbuf_addf(&sb, _("%s: fast-forward"), action_name(opts));
transaction = ref_transaction_begin(&err);
if (!transaction ||
clean = merge_trees(&o,
head_tree,
next_tree, base_tree, &result);
+ strbuf_release(&o.obuf);
+ if (clean < 0)
+ return clean;
if (active_cache_changed &&
write_locked_index(&the_index, &index_lock, COMMIT_LOCK))
* information followed by "\n\n".
*/
p = strstr(msg.message, "\n\n");
- if (p) {
- p += 2;
- strbuf_addstr(&msgbuf, p);
- }
+ if (p)
+ strbuf_addstr(&msgbuf, skip_blank_lines(p + 2));
if (opts->record_origin) {
if (!has_conforming_footer(&msgbuf, NULL, 0))
if (!opts->strategy || !strcmp(opts->strategy, "recursive") || opts->action == REPLAY_REVERT) {
res = do_recursive_merge(base, next, base_label, next_label,
head, &msgbuf, opts);
+ if (res < 0)
+ return res;
write_message(&msgbuf, git_path_merge_msg());
} else {
struct commit_list *common = NULL;
* opts; we don't support arbitrary instructions
*/
if (action != opts->action) {
- const char *action_str;
- action_str = action == REPLAY_REVERT ? "revert" : "cherry-pick";
- error(_("Cannot %s during a %s"), action_str, action_name(opts));
+ if (action == REPLAY_REVERT)
+ error((opts->action == REPLAY_REVERT)
+ ? _("Cannot revert during another revert.")
+ : _("Cannot revert during a cherry-pick."));
+ else
+ error((opts->action == REPLAY_REVERT)
+ ? _("Cannot cherry-pick during a revert.")
+ : _("Cannot cherry-pick during another cherry-pick."));
return NULL;
}
git_path_head_file());
goto fail;
}
+ if (is_null_sha1(sha1)) {
+ error(_("cannot abort from a branch yet to be born"));
+ goto fail;
+ }
if (reset_for_rollback(sha1))
goto fail;
remove_sequencer_state();
walk_revs_populate_todo(&todo_list, opts);
if (create_seq_dir() < 0)
return -1;
- if (get_sha1("HEAD", sha1)) {
- if (opts->action == REPLAY_REVERT)
- return error(_("Can't revert as initial commit"));
- return error(_("Can't cherry-pick into empty head"));
- }
+ if (get_sha1("HEAD", sha1) && (opts->action == REPLAY_REVERT))
+ return error(_("Can't revert as initial commit"));
save_head(sha1_to_hex(sha1));
save_opts(opts);
return pick_commits(todo_list, opts);
int diagnose_misspelt_rev)
{
if (!diagnose_misspelt_rev)
- die("%s: no such path in the working tree.\n"
- "Use 'git <command> -- <path>...' to specify paths that do not exist locally.",
+ die(_("%s: no such path in the working tree.\n"
+ "Use 'git <command> -- <path>...' to specify paths that do not exist locally."),
arg);
/*
* Saying "'(icase)foo' does not exist in the index" when the
maybe_die_on_misspelt_object_name(arg, prefix);
/* ... or fall back the most general message. */
- die("ambiguous argument '%s': unknown revision or path not in the working tree.\n"
- "Use '--' to separate paths from revisions, like this:\n"
- "'git <command> [<revision>...] -- [<file>...]'", arg);
+ die(_("ambiguous argument '%s': unknown revision or path not in the working tree.\n"
+ "Use '--' to separate paths from revisions, like this:\n"
+ "'git <command> [<revision>...] -- [<file>...]'"), arg);
}
return; /* flag */
if (!check_filename(prefix, arg))
return;
- die("ambiguous argument '%s': both revision and filename\n"
- "Use '--' to separate paths from revisions, like this:\n"
- "'git <command> [<revision>...] -- [<file>...]'", arg);
+ die(_("ambiguous argument '%s': both revision and filename\n"
+ "Use '--' to separate paths from revisions, like this:\n"
+ "'git <command> [<revision>...] -- [<file>...]'"), arg);
}
int get_common_dir(struct strbuf *sb, const char *gitdir)
static const char *setup_nongit(const char *cwd, int *nongit_ok)
{
if (!nongit_ok)
- die("Not a git repository (or any of the parent directories): %s", DEFAULT_GIT_DIR_ENVIRONMENT);
+ die(_("Not a git repository (or any of the parent directories): %s"), DEFAULT_GIT_DIR_ENVIRONMENT);
if (chdir(cwd))
- die_errno("Cannot come back to cwd");
+ die_errno(_("Cannot come back to cwd"));
*nongit_ok = 1;
return NULL;
}
*nongit_ok = 0;
if (strbuf_getcwd(&cwd))
- die_errno("Unable to read current working directory");
+ die_errno(_("Unable to read current working directory"));
offset = cwd.len;
/*
if (parent_device != current_device) {
if (nongit_ok) {
if (chdir(cwd.buf))
- die_errno("Cannot come back to cwd");
+ die_errno(_("Cannot come back to cwd"));
*nongit_ok = 1;
return NULL;
}
strbuf_setlen(&cwd, offset);
- die("Not a git repository (or any parent up to mount point %s)\n"
- "Stopping at filesystem boundary (GIT_DISCOVERY_ACROSS_FILESYSTEM not set).",
+ die(_("Not a git repository (or any parent up to mount point %s)\n"
+ "Stopping at filesystem boundary (GIT_DISCOVERY_ACROSS_FILESYSTEM not set)."),
cwd.buf);
}
}
if (chdir("..")) {
strbuf_setlen(&cwd, offset);
- die_errno("Cannot change to '%s/..'", cwd.buf);
+ die_errno(_("Cannot change to '%s/..'"), cwd.buf);
}
offset = offset_parent;
}
/* A filemode value was given: 0xxx */
if ((i & 0600) != 0600)
- die("Problem with core.sharedRepository filemode value "
+ die(_("Problem with core.sharedRepository filemode value "
"(0%.3o).\nThe owner of files must always have "
- "read and write permissions.", i);
+ "read and write permissions."), i);
/*
* Mask filemode value. Others can not get write permission.
static void subst_from_stdin (void);
int
-main (int argc, char *argv[])
+cmd_main (int argc, const char *argv[])
{
/* Default values for command line options. */
/* unsigned short int show_variables = 0; */
#include "bulk-checkin.h"
#include "streaming.h"
#include "dir.h"
+#include "mru.h"
#ifndef O_NOATIME
#if defined(__linux__) && (defined(__i386__) || defined(__PPC__))
0
};
-/*
- * A pointer to the last packed_git in which an object was found.
- * When an object is sought, we look in this packfile first, because
- * objects that are looked up at similar times are often in the same
- * packfile as one another.
- */
-static struct packed_git *last_found_pack;
-
static struct cached_object *find_cached_object(const unsigned char *sha1)
{
int i;
static size_t pack_mapped;
struct packed_git *packed_git;
+static struct mru packed_git_mru_storage;
+struct mru *packed_git_mru = &packed_git_mru_storage;
+
void pack_report(void)
{
fprintf(stderr,
for (p = packed_git; p; p = p->next)
if (p->do_not_close)
- die("BUG! Want to close pack marked 'do-not-close'");
+ die("BUG: want to close pack marked 'do-not-close'");
else
close_pack(p);
}
}
}
-/*
- * This is used by git-repack in case a newly created pack happens to
- * contain the same set of objects as an existing one. In that case
- * the resulting file might be different even if its name would be the
- * same. It is best to close any reference to the old pack before it is
- * replaced on disk. Of course no index pointers or windows for given pack
- * must subsist at this point. If ever objects from this pack are requested
- * again, the new version of the pack will be reinitialized through
- * reprepare_packed_git().
- */
-void free_pack_by_name(const char *pack_name)
-{
- struct packed_git *p, **pp = &packed_git;
-
- while (*pp) {
- p = *pp;
- if (strcmp(pack_name, p->pack_name) == 0) {
- clear_delta_base_cache();
- close_pack(p);
- free(p->bad_object_sha1);
- *pp = p->next;
- if (last_found_pack == p)
- last_found_pack = NULL;
- free(p);
- return;
- }
- pp = &p->next;
- }
-}
-
static unsigned int get_max_fd_limit(void)
{
#ifdef RLIMIT_NOFILE
free(ary);
}
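+/*
+ * Seed the most-recently-used list with every known pack, in the order
+ * they appear on the packed_git list; find_pack_entry() then promotes
+ * whichever pack satisfied the last lookup to the front.
+ */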
+static void prepare_packed_git_mru(void)
+{
+ struct packed_git *p;
+
+ mru_clear(packed_git_mru);
+ for (p = packed_git; p; p = p->next)
+ mru_append(packed_git_mru, p);
+}
+
static int prepare_packed_git_run_once = 0;
void prepare_packed_git(void)
{
alt->name[-1] = '/';
}
rearrange_packed_git();
+ prepare_packed_git_mru();
prepare_packed_git_run_once = 1;
}
strbuf_add(oi->typename, type_buf, type_len);
/*
* Set type to 0 if its an unknown object and
- * we're obtaining the type using '--allow-unkown-type'
+ * we're obtaining the type using '--allow-unknown-type'
* option.
*/
if ((flags & LOOKUP_UNKNOWN_OBJECT) && (type < 0))
if (do_check_packed_object_crc && p->index_version > 1) {
struct revindex_entry *revidx = find_pack_revindex(p, obj_offset);
- unsigned long len = revidx[1].offset - obj_offset;
+ off_t len = revidx[1].offset - obj_offset;
if (check_pack_crc(p, &w_curs, obj_offset, len, revidx->nr)) {
const unsigned char *sha1 =
nth_packed_object_sha1(p, revidx->nr);
case OBJ_OFS_DELTA:
case OBJ_REF_DELTA:
if (data)
- die("BUG in unpack_entry: left loop at a valid delta");
+ die("BUG: unpack_entry: left loop at a valid delta");
break;
case OBJ_COMMIT:
case OBJ_TREE:
*/
static int find_pack_entry(const unsigned char *sha1, struct pack_entry *e)
{
- struct packed_git *p;
+ struct mru_entry *p;
prepare_packed_git();
if (!packed_git)
return 0;
- if (last_found_pack && fill_pack_entry(sha1, e, last_found_pack))
- return 1;
-
- for (p = packed_git; p; p = p->next) {
- if (p == last_found_pack)
- continue; /* we already checked this one */
-
- if (fill_pack_entry(sha1, e, p)) {
- last_found_pack = p;
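+ /* try the most recently used packs first */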
+ for (p = packed_git_mru->head; p; p = p->next) {
+ if (fill_pack_entry(sha1, e, p->item)) {
+ mru_mark(packed_git_mru, p);
return 1;
}
}
unsigned int i, nr;
struct commit_list *head = NULL;
int bitmap_nr = (info->nr_bits + 31) / 32;
- size_t bitmap_size = st_mult(bitmap_nr, sizeof(uint32_t));
+ size_t bitmap_size = st_mult(sizeof(uint32_t), bitmap_nr);
uint32_t *tmp = xmalloc(bitmap_size); /* to be freed before return */
uint32_t *bitmap = paint_alloc(info);
struct commit *c = lookup_commit_reference_gently(sha1, 1);
{ NULL },
};
-int main(int argc, char **argv)
+int cmd_main(int argc, const char **argv)
{
char *prog;
const char **user_argv;
struct commands *cmd;
int count;
- git_setup_gettext();
-
- git_extract_argv0_path(argv[0]);
-
- /*
- * Always open file descriptors 0/1/2 to avoid clobbering files
- * in die(). It also avoids messing up when the pipes are dup'ed
- * onto stdin/stdout/stderr in the child processes we spawn.
- */
- sanitize_stdfds();
-
/*
* Special hack to pretend to be a CVS server
*/
char *common_repo_prefix;
int email;
struct string_list mailmap;
+ FILE *file;
};
void shortlog_init(struct shortlog *log);
static const char show_index_usage[] =
"git show-index";
-int main(int argc, char **argv)
+int cmd_main(int argc, const char **argv)
{
int i;
unsigned nr;
unsigned int version;
static unsigned int top_index[256];
- git_setup_gettext();
-
if (argc != 1)
usage(show_index_usage);
if (fread(top_index, 2 * 4, 1, stdin) != 1)
* the remote died unexpectedly. A flush() concludes the stream.
*/
-#define PREFIX "remote:"
+#define PREFIX "remote: "
#define ANSI_SUFFIX "\033[K"
#define DUMB_SUFFIX " "
-#define FIX_SIZE 10 /* large enough for any of the above */
-
int recv_sideband(const char *me, int in_stream, int out)
{
- unsigned pf = strlen(PREFIX);
- unsigned sf;
- char buf[LARGE_PACKET_MAX + 2*FIX_SIZE];
- char *suffix, *term;
- int skip_pf = 0;
+ const char *term, *suffix;
+ char buf[LARGE_PACKET_MAX + 1];
+ struct strbuf outbuf = STRBUF_INIT;
+ int retval = 0;
- memcpy(buf, PREFIX, pf);
term = getenv("TERM");
if (isatty(2) && term && strcmp(term, "dumb"))
suffix = ANSI_SUFFIX;
else
suffix = DUMB_SUFFIX;
- sf = strlen(suffix);
- while (1) {
+ while (!retval) {
+ const char *b, *brk;
int band, len;
- len = packet_read(in_stream, NULL, NULL, buf + pf, LARGE_PACKET_MAX, 0);
+ len = packet_read(in_stream, NULL, NULL, buf, LARGE_PACKET_MAX, 0);
if (len == 0)
break;
if (len < 1) {
- fprintf(stderr, "%s: protocol error: no band designator\n", me);
- return SIDEBAND_PROTOCOL_ERROR;
+ strbuf_addf(&outbuf,
+ "%s%s: protocol error: no band designator",
+ outbuf.len ? "\n" : "", me);
+ retval = SIDEBAND_PROTOCOL_ERROR;
+ break;
}
- band = buf[pf] & 0xff;
+ band = buf[0] & 0xff;
+ buf[len] = '\0';
len--;
switch (band) {
case 3:
- buf[pf] = ' ';
- buf[pf+1+len] = '\0';
- fprintf(stderr, "%s\n", buf);
- return SIDEBAND_REMOTE_ERROR;
+ strbuf_addf(&outbuf, "%s%s%s", outbuf.len ? "\n" : "",
+ PREFIX, buf + 1);
+ retval = SIDEBAND_REMOTE_ERROR;
+ break;
case 2:
- buf[pf] = ' ';
- do {
- char *b = buf;
- int brk = 0;
+ b = buf + 1;
- /*
- * If the last buffer didn't end with a line
- * break then we should not print a prefix
- * this time around.
- */
- if (skip_pf) {
- b += pf+1;
- } else {
- len += pf+1;
- brk += pf+1;
- }
-
- /* Look for a line break. */
- for (;;) {
- brk++;
- if (brk > len) {
- brk = 0;
- break;
- }
- if (b[brk-1] == '\n' ||
- b[brk-1] == '\r')
- break;
- }
+ /*
+ * Append a suffix to each nonempty line to clear the
+ * end of the screen line.
+ *
+ * The output is accumulated in a buffer and
+ * each line is printed to stderr using
+ * write(2) to ensure inter-process atomicity.
+ */
+ while ((brk = strpbrk(b, "\n\r"))) {
+ int linelen = brk - b;
- /*
- * Let's insert a suffix to clear the end
- * of the screen line if a line break was
- * found. Also, if we don't skip the
- * prefix, then a non-empty string must be
- * present too.
- */
- if (brk > (skip_pf ? 0 : (pf+1 + 1))) {
- char save[FIX_SIZE];
- memcpy(save, b + brk, sf);
- b[brk + sf - 1] = b[brk - 1];
- memcpy(b + brk - 1, suffix, sf);
- fprintf(stderr, "%.*s", brk + sf, b);
- memcpy(b + brk, save, sf);
- len -= brk;
+ if (!outbuf.len)
+ strbuf_addstr(&outbuf, PREFIX);
+ if (linelen > 0) {
+ strbuf_addf(&outbuf, "%.*s%s%c",
+ linelen, b, suffix, *brk);
} else {
- int l = brk ? brk : len;
- fprintf(stderr, "%.*s", l, b);
- len -= l;
+ strbuf_addch(&outbuf, *brk);
}
+ xwrite(2, outbuf.buf, outbuf.len);
+ strbuf_reset(&outbuf);
- skip_pf = !brk;
- memmove(buf + pf+1, b + brk, len);
- } while (len);
- continue;
+ b = brk + 1;
+ }
+
+ if (*b)
+ strbuf_addf(&outbuf, "%s%s",
+ outbuf.len ? "" : PREFIX, b);
+ break;
case 1:
- write_or_die(out, buf + pf+1, len);
- continue;
+ write_or_die(out, buf + 1, len);
+ break;
default:
- fprintf(stderr, "%s: protocol error: bad band #%d\n",
- me, band);
- return SIDEBAND_PROTOCOL_ERROR;
+ strbuf_addf(&outbuf, "%s%s: protocol error: bad band #%d",
+ outbuf.len ? "\n" : "", me, band);
+ retval = SIDEBAND_PROTOCOL_ERROR;
+ break;
}
}
- return 0;
+
+ if (outbuf.len) {
+ strbuf_addch(&outbuf, '\n');
+ xwrite(2, outbuf.buf, outbuf.len);
+ }
+ strbuf_release(&outbuf);
+ return retval;
}
/*
* fd is connected to the remote side; send the sideband data
* over multiplexed packet stream.
*/
-ssize_t send_sideband(int fd, int band, const char *data, ssize_t sz, int packet_max)
+void send_sideband(int fd, int band, const char *data, ssize_t sz, int packet_max)
{
- ssize_t ssz = sz;
const char *p = data;
while (sz) {
p += n;
sz -= n;
}
- return ssz;
}
#define SIDEBAND_REMOTE_ERROR -1
int recv_sideband(const char *me, int in_stream, int out);
-ssize_t send_sideband(int fd, int band, const char *data, ssize_t sz, int packet_max);
+void send_sideband(int fd, int band, const char *data, ssize_t sz, int packet_max);
#endif
strbuf_setlen(sb, sb->len + len);
}
+void strbuf_addbuf(struct strbuf *sb, const struct strbuf *sb2)
+{
+ strbuf_grow(sb, sb2->len);
+ memcpy(sb->buf + sb->len, sb2->buf, sb2->len);
+ strbuf_setlen(sb, sb->len + sb2->len);
+}
+
void strbuf_adddup(struct strbuf *sb, size_t pos, size_t len)
{
strbuf_grow(sb, len);
/**
* Copy the contents of another buffer at the end of the current one.
*/
-static inline void strbuf_addbuf(struct strbuf *sb, const struct strbuf *sb2)
-{
- strbuf_grow(sb, sb2->len);
- strbuf_add(sb, sb2->buf, sb2->len);
-}
+extern void strbuf_addbuf(struct strbuf *sb, const struct strbuf *sb2);
/**
* Copy part of the buffer from a given position till a given length to the
/**
* Read the contents of a file, specified by its path. The third argument
* can be used to give a hint about the file size, to avoid reallocs.
+ * Return the number of bytes read or a negative value if some error
+ * occurred while opening or reading the file.
*/
extern ssize_t strbuf_read_file(struct strbuf *sb, const char *path, size_t hint);
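For illustration only (not part of the patch), a caller might check the newly documented return value like this; "read_file_demo" is a hypothetical helper:

	/* assumes cache.h (for die_errno) and strbuf.h are included */
	static void read_file_demo(const char *path)
	{
		struct strbuf sb = STRBUF_INIT;

		if (strbuf_read_file(&sb, path, 0) < 0)
			die_errno("could not read '%s'", path);
		/* sb.buf / sb.len now hold the whole file */
		strbuf_release(&sb);
	}
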
{
free((void *) entry->config->path);
free((void *) entry->config->name);
+ free((void *) entry->config->branch);
free((void *) entry->config->update_strategy.command);
free(entry->config);
}
submodule->update_strategy.command = NULL;
submodule->fetch_recurse = RECURSE_SUBMODULES_NONE;
submodule->ignore = NULL;
+ submodule->branch = NULL;
+ submodule->recommend_shallow = -1;
hashcpy(submodule->gitmodules_sha1, gitmodules_sha1);
else if (parse_submodule_update_strategy(value,
&submodule->update_strategy) < 0)
die(_("invalid value for %s"), var);
+ } else if (!strcmp(item.buf, "shallow")) {
+ if (!me->overwrite && submodule->recommend_shallow != -1)
+ warn_multiple_config(me->commit_sha1, submodule->name,
+ "shallow");
+ else
+ submodule->recommend_shallow =
+ git_config_bool(var, value);
+ } else if (!strcmp(item.buf, "branch")) {
+ if (!me->overwrite && submodule->branch)
+ warn_multiple_config(me->commit_sha1, submodule->name,
+ "branch");
+ else {
+ free((void *)submodule->branch);
+ submodule->branch = xstrdup(value);
+ }
}
strbuf_release(&name);
}
static int gitmodule_sha1_from_commit(const unsigned char *commit_sha1,
- unsigned char *gitmodules_sha1)
+ unsigned char *gitmodules_sha1,
+ struct strbuf *rev)
{
- struct strbuf rev = STRBUF_INIT;
int ret = 0;
if (is_null_sha1(commit_sha1)) {
- hashcpy(gitmodules_sha1, null_sha1);
+ hashclr(gitmodules_sha1);
return 1;
}
- strbuf_addf(&rev, "%s:.gitmodules", sha1_to_hex(commit_sha1));
- if (get_sha1(rev.buf, gitmodules_sha1) >= 0)
+ strbuf_addf(rev, "%s:.gitmodules", sha1_to_hex(commit_sha1));
+ if (get_sha1(rev->buf, gitmodules_sha1) >= 0)
ret = 1;
- strbuf_release(&rev);
return ret;
}
{
struct strbuf rev = STRBUF_INIT;
unsigned long config_size;
- char *config;
+ char *config = NULL;
unsigned char sha1[20];
enum object_type type;
const struct submodule *submodule = NULL;
return entry->config;
}
- if (!gitmodule_sha1_from_commit(commit_sha1, sha1))
- return NULL;
+ if (!gitmodule_sha1_from_commit(commit_sha1, sha1, &rev))
+ goto out;
switch (lookup_type) {
case lookup_name:
break;
}
if (submodule)
- return submodule;
+ goto out;
config = read_sha1_file(sha1, &type, &config_size);
- if (!config)
- return NULL;
-
- if (type != OBJ_BLOB) {
- free(config);
- return NULL;
- }
+ if (!config || type != OBJ_BLOB)
+ goto out;
/* fill the submodule config into the cache */
parameter.cache = cache;
parameter.commit_sha1 = commit_sha1;
parameter.gitmodules_sha1 = sha1;
parameter.overwrite = 0;
- git_config_from_mem(parse_config, "submodule-blob", rev.buf,
+ git_config_from_mem(parse_config, CONFIG_ORIGIN_SUBMODULE_BLOB, rev.buf,
config, config_size, &parameter);
+ strbuf_release(&rev);
free(config);
switch (lookup_type) {
default:
return NULL;
}
+
+out:
+ strbuf_release(&rev);
+ free(config);
+ return submodule;
}
static const struct submodule *config_from_path(struct submodule_cache *cache,
const char *url;
int fetch_recurse;
const char *ignore;
+ const char *branch;
struct submodule_update_strategy update_strategy;
/* the sha1 blob id of the responsible .gitmodules file */
unsigned char gitmodules_sha1[20];
+ int recommend_shallow;
};
int parse_fetch_recurse_submodules_arg(const char *opt, const char *arg);
static int config_fetch_recurse_submodules = RECURSE_SUBMODULES_ON_DEMAND;
static int parallel_jobs = 1;
-static struct string_list changed_submodule_paths;
+static struct string_list changed_submodule_paths = STRING_LIST_INIT_NODUP;
static int initialized_fetch_ref_tips;
static struct sha1_array ref_tips_before_fetch;
static struct sha1_array ref_tips_after_fetch;
struct diff_filepair *p = q->queue[i];
if (!S_ISGITLINK(p->two->mode))
continue;
- if (submodule_needs_pushing(p->two->path, p->two->sha1))
+ if (submodule_needs_pushing(p->two->path, p->two->oid.hash))
string_list_insert(needs_pushing, p->two->path);
}
}
* being moved around. */
struct string_list_item *path;
path = unsorted_string_list_lookup(&changed_submodule_paths, p->two->path);
- if (!path && !is_submodule_commit_present(p->two->path, p->two->sha1))
+ if (!path && !is_submodule_commit_present(p->two->path, p->two->oid.hash))
string_list_append(&changed_submodule_paths, xstrdup(p->two->path));
} else {
/* Submodule is new or was moved here */
clean: clean-except-prove-cache
$(RM) .prove
-test-lint: test-lint-duplicates test-lint-executable test-lint-shell-syntax
+test-lint: test-lint-duplicates test-lint-executable test-lint-shell-syntax \
+ test-lint-filenames
test-lint-duplicates:
@dups=`echo $(T) | tr ' ' '\n' | sed 's/-.*//' | sort | uniq -d` && \
test-lint-shell-syntax:
@'$(PERL_PATH_SQ)' check-non-portable-shell.pl $(T) $(THELPERS)
+test-lint-filenames:
+ @# We do *not* pass a glob to ls-files but use grep instead, to catch
+ @# non-ASCII characters (which are quoted within double-quotes)
+ @bad="$$(git -c core.quotepath=true ls-files 2>/dev/null | \
+ grep '["*:<>?\\|]')"; \
+ test -z "$$bad" || { \
+ echo >&2 "non-portable file name(s): $$bad"; exit 1; }
+
aggregate-results-and-cleanup: $(T)
$(MAKE) aggregate-results
$(MAKE) clean
$ sh ./t9200-git-cvsexport-commit.sh --run='1-4 !3'
will run tests 1, 2, and 4. Items that come later have higher
-precendence. It means that this:
+precedence. It means that this:
$ sh ./t9200-git-cvsexport-commit.sh --run='!3 1-4'
return 1;
}
-int main(int argc, char *argv[])
+int cmd_main(int argc, const char **argv)
{
static int verbose;
* ascending order of priority from a config_set
* constructed from files entered as arguments.
*
+ * iterate -> iterate over all values using git_config(), and print some
+ * data for each
+ *
* Examples:
*
* To print the value with highest priority for key "foo.bAr Baz.rock":
*
*/
+static const char *scope_name(enum config_scope scope)
+{
+ switch (scope) {
+ case CONFIG_SCOPE_SYSTEM:
+ return "system";
+ case CONFIG_SCOPE_GLOBAL:
+ return "global";
+ case CONFIG_SCOPE_REPO:
+ return "repo";
+ case CONFIG_SCOPE_CMDLINE:
+ return "cmdline";
+ default:
+ return "unknown";
+ }
+}
+static int iterate_cb(const char *var, const char *value, void *data)
+{
+ static int nr;
+
+ if (nr++)
+ putchar('\n');
+
+ printf("key=%s\n", var);
+ printf("value=%s\n", value ? value : "(null)");
+ printf("origin=%s\n", current_config_origin_type());
+ printf("name=%s\n", current_config_name());
+ printf("scope=%s\n", scope_name(current_config_scope()));
+
+ return 0;
+}
-int main(int argc, char **argv)
+int cmd_main(int argc, const char **argv)
{
int i, val;
const char *v;
printf("Value not found for \"%s\"\n", argv[2]);
goto exit1;
}
+ } else if (!strcmp(argv[1], "iterate")) {
+ git_config(iterate_cb, NULL);
+ goto exit0;
}
die("%s: Please check the syntax and the function name", argv[0]);
#define LOWER "abcdefghijklmnopqrstuvwxyz"
#define UPPER "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
-int main(int argc, char **argv)
+int cmd_main(int argc, const char **argv)
{
TEST_CLASS(isdigit, DIGIT);
TEST_CLASS(isspace, " \n\r\t");
#include "cache.h"
static const char *usage_msg = "\n"
-" test-date show [time_t]...\n"
+" test-date relative [time_t]...\n"
+" test-date show:<format> [time_t]...\n"
" test-date parse [date]...\n"
" test-date approxidate [date]...\n";
-static void show_dates(char **argv, struct timeval *now)
+static void show_relative_dates(const char **argv, struct timeval *now)
{
struct strbuf buf = STRBUF_INIT;
strbuf_release(&buf);
}
-static void parse_dates(char **argv, struct timeval *now)
+static void show_dates(const char **argv, const char *format)
+{
+ struct date_mode mode;
+
+ parse_date_format(format, &mode);
+ for (; *argv; argv++) {
+ char *arg;
+ time_t t;
+ int tz;
+
+ /*
+ * Do not use our normal timestamp parsing here, as the point
+ * is to test the formatting code in isolation.
+ */
+ t = strtol(*argv, &arg, 10);
+ while (*arg == ' ')
+ arg++;
+ tz = atoi(arg);
+
+ printf("%s -> %s\n", *argv, show_date(t, tz, &mode));
+ }
+}
+
+static void parse_dates(const char **argv, struct timeval *now)
{
struct strbuf result = STRBUF_INIT;
strbuf_release(&result);
}
-static void parse_approxidate(char **argv, struct timeval *now)
+static void parse_approxidate(const char **argv, struct timeval *now)
{
for (; *argv; argv++) {
time_t t;
}
}
-int main(int argc, char **argv)
+int cmd_main(int argc, const char **argv)
{
struct timeval now;
const char *x;
argv++;
if (!*argv)
usage(usage_msg);
- if (!strcmp(*argv, "show"))
- show_dates(argv+1, &now);
+ if (!strcmp(*argv, "relative"))
+ show_relative_dates(argv+1, &now);
+ else if (skip_prefix(*argv, "show:", &x))
+ show_dates(argv+1, x);
else if (!strcmp(*argv, "parse"))
parse_dates(argv+1, &now);
else if (!strcmp(*argv, "approxidate"))
static const char usage_str[] =
"test-delta (-d|-p) <from_file> <data_file> <out_file>";
-int main(int argc, char *argv[])
+int cmd_main(int argc, const char **argv)
{
int fd;
struct stat st;
return errs;
}
-int main(int ac, char **av)
+int cmd_main(int ac, const char **av)
{
struct index_state istate;
struct cache_tree *another = cache_tree();
printf(" %d", (int)pos);
}
-int main(int ac, char **av)
+int cmd_main(int ac, const char **av)
{
struct split_index *si;
int i;
strbuf_setlen(base, len);
}
-int main(int ac, char **av)
+int cmd_main(int ac, const char **av)
{
struct untracked_cache *uc;
struct strbuf base = STRBUF_INIT;
#include "run-command.h"
#include "strbuf.h"
-int main(int argc, char **argv)
+int cmd_main(int argc, const char **argv)
{
const char *trash_directory = getenv("TRASH_DIRECTORY");
struct strbuf buf = STRBUF_INIT;
#include "git-compat-util.h"
-int main(int argc, char *argv[])
+int cmd_main(int argc, const char **argv)
{
unsigned long count, next = 0;
unsigned char *c;
*
* perfhashmap method rounds -> test hashmap.[ch] performance
*/
-int main(int argc, char *argv[])
+int cmd_main(int argc, const char **argv)
{
char line[1024];
struct hashmap map;
#include "cache.h"
-int main(int argc, char **argv)
+int cmd_main(int argc, const char **argv)
{
struct cache_header hdr;
int version;
handle_command(line, arg + 1, stdin_buf);
}
-int main(int argc, char *argv[])
+int cmd_main(int argc, const char **argv)
{
struct line_buffer stdin_buf = LINE_BUFFER_INIT;
struct line_buffer file_buf = LINE_BUFFER_INIT;
#include "cache.h"
#include "tree.h"
-int main(int ac, char **av)
+int cmd_main(int ac, const char **av)
{
struct object_id hash1, hash2, shifted;
struct tree *one, *two;
return strcmp(x->text, y->text);
}
-int main(int argc, char **argv)
+int cmd_main(int argc, const char **argv)
{
struct line *line, *p = NULL, *lines = NULL;
struct strbuf sb = STRBUF_INIT;
*/
#include "git-compat-util.h"
-int main(int argc, char *argv[])
+int cmd_main(int argc, const char **argv)
{
if (argc != 2)
usage("Expected 1 parameter defining the temporary file template");
static char *string = NULL;
static char *file = NULL;
static int ambiguous;
-static struct string_list list;
+static struct string_list list = STRING_LIST_INIT_NODUP;
static struct {
int called;
strbuf_release(&buf);
}
-int main(int argc, char **argv)
+int cmd_main(int argc, const char **argv)
{
const char *prefix = "prefix/";
const char *usage[] = {
{ NULL, NULL }
};
-int main(int argc, char **argv)
+int cmd_main(int argc, const char **argv)
{
if (argc == 3 && !strcmp(argv[1], "normalize_path_copy")) {
char *buf = xmallocz(strlen(argv[2]));
}
if (argc >= 4 && !strcmp(argv[1], "prefix_path")) {
- char *prefix = argv[2];
+ const char *prefix = argv[2];
int prefix_len = strlen(prefix);
int nongit_ok;
setup_git_directory_gently(&nongit_ok);
free(v);
}
-int main(int argc, char **argv)
+int cmd_main(int argc, const char **argv)
{
struct prio_queue pq = { intcmp };
#include "cache.h"
-int main (int argc, char **argv)
+int cmd_main(int argc, const char **argv)
{
int i, cnt = 1;
if (argc == 2)
#include "git-compat-util.h"
+#include "gettext.h"
-int main(int argc, char **argv)
+struct reg_flag {
+ const char *name;
+ int flag;
+};
+
+static struct reg_flag reg_flags[] = {
+ { "EXTENDED", REG_EXTENDED },
+ { "NEWLINE", REG_NEWLINE },
+ { "ICASE", REG_ICASE },
+ { "NOTBOL", REG_NOTBOL },
+#ifdef REG_STARTEND
+ { "STARTEND", REG_STARTEND },
+#endif
+ { NULL, 0 }
+};
+
+static int test_regex_bug(void)
{
char *pat = "[^={} \t]+";
char *str = "={}\nfred";
if (m[0].rm_so == 3) /* matches '\n' when it should not */
die("regex bug confirmed: re-build git with NO_REGEX=1");
- exit(0);
+ return 0;
+}
+
+int cmd_main(int argc, const char **argv)
+{
+ const char *pat;
+ const char *str;
+ int flags = 0;
+ regex_t r;
+ regmatch_t m[1];
+
+ if (argc == 2 && !strcmp(argv[1], "--bug"))
+ return test_regex_bug();
+ else if (argc < 3)
+ usage("test-regex --bug\n"
+ "test-regex <pattern> <string> [<options>]");
+
+ argv++;
+ pat = *argv++;
+ str = *argv++;
+ while (*argv) {
+ struct reg_flag *rf;
+ for (rf = reg_flags; rf->name; rf++)
+ if (!strcmp(*argv, rf->name)) {
+ flags |= rf->flag;
+ break;
+ }
+ if (!rf->name)
+ die("do not recognize %s", *argv);
+ argv++;
+ }
+ git_setup_gettext();
+
+ if (regcomp(&r, pat, flags))
+ die("failed regcomp() for pattern '%s'", pat);
+ if (regexec(&r, str, 1, m, 0))
+ return 1;
+
+ return 0;
}
return got_revision;
}
-int main(int argc, char **argv)
+int cmd_main(int argc, const char **argv)
{
if (argc < 2)
return 1;
return 0;
argv_array_pushv(&cp->args, d->argv);
- strbuf_addf(err, "preloaded output of a child\n");
+ strbuf_addstr(err, "preloaded output of a child\n");
number_callbacks++;
return 1;
}
void *cb,
void **task_cb)
{
- strbuf_addf(err, "no further jobs available\n");
+ strbuf_addstr(err, "no further jobs available\n");
return 0;
}
void *pp_cb,
void *pp_task_cb)
{
- strbuf_addf(err, "asking for a quick stop\n");
+ strbuf_addstr(err, "asking for a quick stop\n");
return 1;
}
-int main(int argc, char **argv)
+int cmd_main(int argc, const char **argv)
{
struct child_process proc = CHILD_PROCESS_INIT;
int jobs;
static struct lock_file index_lock;
-int main(int ac, char **av)
+int cmd_main(int ac, const char **av)
{
hold_locked_index(&index_lock, 1);
if (read_cache() < 0)
puts(sha1_to_hex(sha1));
}
-int main(int argc, char **argv)
+int cmd_main(int argc, const char **argv)
{
struct sha1_array array = SHA1_ARRAY_INIT;
struct strbuf line = STRBUF_INIT;
#include "cache.h"
-int main(int ac, char **av)
+int cmd_main(int ac, const char **av)
{
git_SHA_CTX ctx;
unsigned char sha1[20];
X(three)
#undef X
-int main(int argc, char **argv) {
+int cmd_main(int argc, const char **argv) {
sigchain_push(SIGTERM, one);
sigchain_push(SIGTERM, two);
sigchain_push(SIGTERM, three);
return starts_with(item->string, prefix);
}
-int main(int argc, char **argv)
+int cmd_main(int argc, const char **argv)
{
if (argc == 5 && !strcmp(argv[1], "split")) {
struct string_list list = STRING_LIST_INIT_DUP;
#include "submodule-config.h"
#include "submodule.h"
-static void die_usage(int argc, char **argv, const char *msg)
+static void die_usage(int argc, const char **argv, const char *msg)
{
fprintf(stderr, "%s\n", msg);
fprintf(stderr, "Usage: %s [<commit> <submodulepath>] ...\n", argv[0]);
return parse_submodule_config_option(var, value);
}
-int main(int argc, char **argv)
+int cmd_main(int argc, const char **argv)
{
- char **arg = argv;
+ const char **arg = argv;
int my_argc = argc;
int output_url = 0;
int lookup_name = 0;
arg++;
my_argc--;
- while (starts_with(arg[0], "--")) {
+ while (arg[0] && starts_with(arg[0], "--")) {
if (!strcmp(arg[0], "--url"))
output_url = 1;
if (!strcmp(arg[0], "--name"))
path_or_name = arg[1];
if (commit[0] == '\0')
- hashcpy(commit_sha1, null_sha1);
+ hashclr(commit_sha1);
else if (get_sha1(commit, commit_sha1) < 0)
die_usage(argc, argv, "Commit not found.");
#include "cache.h"
#include "run-command.h"
-int main(int argc, char **argv)
+int cmd_main(int argc, const char **argv)
{
struct child_process cp = CHILD_PROCESS_INIT;
int nogit = 0;
static const char test_svnfe_usage[] =
"test-svn-fe (<dumpfile> | [-d] <preimage> <delta> <len>)";
-static int apply_delta(int argc, char *argv[])
+static int apply_delta(int argc, const char **argv)
{
struct line_buffer preimage = LINE_BUFFER_INIT;
struct line_buffer delta = LINE_BUFFER_INIT;
return 0;
}
-int main(int argc, char *argv[])
+int cmd_main(int argc, const char **argv)
{
if (argc == 2) {
if (svndump_init(argv[1]))
#include "git-compat-util.h"
#include "urlmatch.h"
-int main(int argc, char **argv)
+int cmd_main(int argc, const char **argv)
{
const char usage[] = "test-urlmatch-normalization [-p | -l] <url1> | <url1> <url2>";
char *url1, *url2;
#include "cache.h"
-int main(int argc, char **argv)
+int cmd_main(int argc, const char **argv)
{
int i;
for (i = 2; i < argc; i++) {
kill "$GIT_DAEMON_PID"
wait "$GIT_DAEMON_PID" >&3 2>&4
ret=$?
- # expect exit with status 143 = 128+15 for signal TERM=15
- if test $ret -ne 143
+ if ! test_match_signal 15 $ret
then
error "git daemon exited with status: $ret"
fi
svn "$orig_svncmd" --config-dir "$svnconf" "$@"
}
-prepare_httpd () {
- for d in \
- "$SVN_HTTPD_PATH" \
- /usr/sbin/apache2 \
- /usr/sbin/httpd \
- ; do
- if test -f "$d"
- then
- SVN_HTTPD_PATH="$d"
- break
- fi
- done
- if test -z "$SVN_HTTPD_PATH"
- then
- echo >&2 '*** error: Apache not found'
- return 1
- fi
- for d in \
- "$SVN_HTTPD_MODULE_PATH" \
- /usr/lib/apache2/modules \
- /usr/libexec/apache2 \
- ; do
- if test -d "$d"
- then
- SVN_HTTPD_MODULE_PATH="$d"
- break
- fi
- done
- if test -z "$SVN_HTTPD_MODULE_PATH"
- then
- echo >&2 '*** error: Apache module dir not found'
- return 1
- fi
- if test ! -f "$SVN_HTTPD_MODULE_PATH/mod_dav_svn.so"
- then
- echo >&2 '*** error: Apache module "mod_dav_svn" not found'
- return 1
- fi
-
- repo_base_path="${1-svn}"
- mkdir "$GIT_DIR"/logs
-
- cat > "$GIT_DIR/httpd.conf" <<EOF
-ServerName "git svn test"
-ServerRoot "$GIT_DIR"
-DocumentRoot "$GIT_DIR"
-PidFile "$GIT_DIR/httpd.pid"
-LockFile logs/accept.lock
-Listen 127.0.0.1:$SVN_HTTPD_PORT
-LoadModule dav_module $SVN_HTTPD_MODULE_PATH/mod_dav.so
-LoadModule dav_svn_module $SVN_HTTPD_MODULE_PATH/mod_dav_svn.so
-<Location /$repo_base_path>
- DAV svn
- SVNPath "$rawsvnrepo"
-</Location>
-EOF
-}
-
-start_httpd () {
- if test -z "$SVN_HTTPD_PORT"
- then
- echo >&2 'SVN_HTTPD_PORT is not defined!'
- return
- fi
-
- prepare_httpd "$1" || return 1
-
- "$SVN_HTTPD_PATH" -f "$GIT_DIR"/httpd.conf -k start
- svnrepo="http://127.0.0.1:$SVN_HTTPD_PORT/$repo_base_path"
-}
-
-stop_httpd () {
- test -z "$SVN_HTTPD_PORT" && return
- test ! -f "$GIT_DIR/httpd.conf" && return
- "$SVN_HTTPD_PATH" -f "$GIT_DIR"/httpd.conf -k stop
+maybe_start_httpd () {
+ loc=${1-svn}
+
+ test_tristate GIT_SVN_TEST_HTTPD
+ case $GIT_SVN_TEST_HTTPD in
+ true)
+ . "$TEST_DIRECTORY"/lib-httpd.sh
+ LIB_HTTPD_SVN="$loc"
+ start_httpd
+ ;;
+ *)
+ stop_httpd () {
+ : noop
+ }
+ ;;
+ esac
}
convert_to_rev_db () {
# LIB_HTTPD_MODULE_PATH web server modules path
# LIB_HTTPD_PORT listening port
# LIB_HTTPD_DAV enable DAV
-# LIB_HTTPD_SVN enable SVN
+# LIB_HTTPD_SVN enable SVN at given location (e.g. "svn")
# LIB_HTTPD_SSL enable SSL
#
# Copyright (c) 2008 Clemens Buchacher <drizzd@aon.at>
if test -n "$LIB_HTTPD_SVN"
then
HTTPD_PARA="$HTTPD_PARA -DSVN"
- rawsvnrepo="$HTTPD_ROOT_PATH/svnrepo"
- svnrepo="http://127.0.0.1:$LIB_HTTPD_PORT/svn"
+ LIB_HTTPD_SVNPATH="$rawsvnrepo"
+ svnrepo="http://127.0.0.1:$LIB_HTTPD_PORT/"
+ svnrepo="$svnrepo$LIB_HTTPD_SVN"
+ export LIB_HTTPD_SVN LIB_HTTPD_SVNPATH
fi
fi
}
if test $? -ne 0
then
trap 'die' EXIT
+ cat "$HTTPD_ROOT_PATH"/error.log >&4 2>/dev/null
test_skip_or_die $GIT_TEST_HTTPD "web server setup failed"
fi
}
<IfDefine SVN>
LoadModule dav_svn_module modules/mod_dav_svn.so
- <Location /svn>
+ <Location /${LIB_HTTPD_SVN}>
DAV svn
- SVNPath svnrepo
+ SVNPath "${LIB_HTTPD_SVNPATH}"
</Location>
</IfDefine>
*/COMMIT_EDITMSG)
test -z "$EXPECT_HEADER_COUNT" ||
test "$EXPECT_HEADER_COUNT" = "$(sed -n '1s/^# This is a combination of \(.*\) commits\./\1/p' < "$1")" ||
+ test "# # GETTEXT POISON #" = "$(sed -n '1p' < "$1")" ||
exit
test -z "$FAKE_COMMIT_MESSAGE" || echo "$FAKE_COMMIT_MESSAGE" > "$1"
test -z "$FAKE_COMMIT_AMEND" || echo "$FAKE_COMMIT_AMEND" >> "$1"
At least one of the first two is required!
-You can use test_expect_success as usual. For actual performance
-tests, use
+You can use test_expect_success as usual. In both test_expect_success
+and test_perf, running "git" points to the version that is being
+perf-tested. The $MODERN_GIT variable points to the git wrapper for the
+currently checked-out version (i.e., the one that matches the t/perf
+scripts you are running). This is useful if your setup uses commands
+that only work with newer versions of git than what you might want to
+test (but obviously your new commands must still create a state that can
+be used by the older version of git you are testing).
+
+For actual performance tests, use
test_perf 'descriptive string' '
command1 &&
git log --oneline --follow -- "$file" >/dev/null
'
-test_perf 'git log -L' '
- git log -L 1:"$file" >/dev/null
+test_perf 'git log -L (renames off)' '
+ git log --no-renames -L 1:"$file" >/dev/null
'
-test_perf 'git log -M -L' '
+test_perf 'git log -L (renames on)' '
git log -M -L 1:"$file" >/dev/null
'
--- /dev/null
+#!/bin/sh
+
+test_description='performance with large numbers of packs'
+. ./perf-lib.sh
+
+test_perf_large_repo
+
+# A real many-pack situation would probably come from having a lot of pushes
+# over time. We don't know how big each push would be, but we can fake it by
+# just walking the first-parent chain and having every 5 commits be their own
+# "push". This isn't _entirely_ accurate, as real pushes would have some
+# duplicate objects due to thin-pack fixing, but it's a reasonable
+# approximation.
+#
+# And then all of the rest of the objects can go in a single packfile that
+# represents the state before any of those pushes (actually, we'll generate
+# that first because in such a setup it would be the oldest pack, and we sort
+# the packs by reverse mtime inside git).
+repack_into_n () {
+ rm -rf staging &&
+ mkdir staging &&
+
+ git rev-list --first-parent HEAD |
+ sed -n '1~5p' |
+ head -n "$1" |
+ perl -e 'print reverse <>' \
+ >pushes
+
+ # create base packfile
+ head -n 1 pushes |
+ git pack-objects --delta-base-offset --revs staging/pack
+
+ # and then incrementals between each pair of commits
+ last= &&
+ while read rev
+ do
+ if test -n "$last"; then
+ {
+ echo "$rev" &&
+ echo "^$last"
+ } |
+ git pack-objects --delta-base-offset --revs \
+ staging/pack || return 1
+ fi
+ last=$rev
+ done <pushes &&
+
+ # and install the whole thing
+ rm -f .git/objects/pack/* &&
+ mv staging/* .git/objects/pack/
+}
+
+# Pretend we just have a single branch and no reflogs, and that everything is
+# in objects/pack; that makes our fake pack-building via repack_into_n()
+# much simpler.
+test_expect_success 'simplify reachability' '
+ tip=$(git rev-parse --verify HEAD) &&
+ git for-each-ref --format="option no-deref%0adelete %(refname)" |
+ git update-ref --stdin &&
+ rm -rf .git/logs &&
+ git update-ref refs/heads/master $tip &&
+ git symbolic-ref HEAD refs/heads/master &&
+ git repack -ad
+'
+
+for nr_packs in 1 50 1000
+do
+ test_expect_success "create $nr_packs-pack scenario" '
+ repack_into_n $nr_packs
+ '
+
+ test_perf "rev-list ($nr_packs)" '
+ git rev-list --objects --all >/dev/null
+ '
+
+ # This simulates the interesting part of the repack, which is the
+ # actual pack generation, without smudging the on-disk setup
+ # between trials.
+ test_perf "repack ($nr_packs)" '
+ git pack-objects --keep-true-parents \
+ --honor-pack-keep --non-empty --all \
+ --reflog --indexed-objects --delta-base-offset \
+ --stdout </dev/null >/dev/null
+ '
+done
+
+test_done
# need to export them for test_perf subshells
export TEST_DIRECTORY TRASH_DIRECTORY GIT_BUILD_DIR GIT_TEST_CMP
+MODERN_GIT=$GIT_BUILD_DIR/bin-wrappers/git
+export MODERN_GIT
+
perf_results_dir=$TEST_OUTPUT_DIRECTORY/test-results
mkdir -p "$perf_results_dir"
rm -f "$perf_results_dir"/$(basename "$0" .sh).subtests
repo="$1"
source="$2"
source_git="$(git -C "$source" rev-parse --git-dir)"
- objects_dir="$(git -C "$source" rev-parse --git-path objects)"
+ objects_dir="$("$MODERN_GIT" -C "$source" rev-parse --git-path objects)"
mkdir -p "$repo/.git"
(
cd "$source" &&
# Performance tests should never fail. If they do, stop immediately
immediate=t
+# Perf tests require GNU time
+case "$(uname -s)" in Darwin) GTIME="${GTIME:-gtime}";; esac
+GTIME="${GTIME:-/usr/bin/time}"
+
test_run_perf_ () {
test_cleanup=:
test_export_="test_cleanup"
export test_cleanup test_export_
- /usr/bin/time -f "%E %U %S" -o test_time.$i "$SHELL" -c '
+ "$GTIME" -f "%E %U %S" -o test_time.$i "$SHELL" -c '
. '"$TEST_DIRECTORY"/test-lib-functions.sh'
test_export () {
[ $# != 0 ] || return 0
'
test_expect_success 'validate object ID of a known tree' '
- test "$tree" = 4b825dc642cb6eb9a060e54bf8d69288fbee4904
+ test "$tree" = $EMPTY_TREE
'
# Various types of objects
test_expect_success 'sigchain works' '
{ test-sigchain >actual; ret=$?; } &&
- case "$ret" in
- 143) true ;; # POSIX w/ SIGTERM=15
- 271) true ;; # ksh w/ SIGTERM=15
- 3) true ;; # Windows
- *) false ;;
- esac &&
+ {
+ # Signal death by raise() on Windows acts like exit(3),
+ # regardless of the signal number. So we must allow that
+ # as well as the normal signal check.
+ test_match_signal 15 "$ret" ||
+ test "$ret" = 3
+ } &&
test_cmp expect actual
'
test_expect_success !MINGW 'a constipated git dies with SIGPIPE' '
OUT=$( ((large_git; echo $? 1>&3) | :) 3>&1 ) &&
- test "$OUT" -eq 141
+ test_match_signal 13 "$OUT"
'
test_expect_success !MINGW 'a constipated git dies with SIGPIPE even if parent ignores it' '
OUT=$( ((trap "" PIPE; large_git; echo $? 1>&3) | :) 3>&1 ) &&
- test "$OUT" -eq 141
+ test_match_signal 13 "$OUT"
'
test_done
# arbitrary reference time: 2009-08-30 19:20:00
TEST_DATE_NOW=1251660000; export TEST_DATE_NOW
-check_show() {
+check_relative() {
t=$(($TEST_DATE_NOW - $1))
echo "$t -> $2" >expect
test_expect_${3:-success} "relative date ($2)" "
- test-date show $t >actual &&
+ test-date relative $t >actual &&
test_i18ncmp expect actual
"
}
-check_show 5 '5 seconds ago'
-check_show 300 '5 minutes ago'
-check_show 18000 '5 hours ago'
-check_show 432000 '5 days ago'
-check_show 1728000 '3 weeks ago'
-check_show 13000000 '5 months ago'
-check_show 37500000 '1 year, 2 months ago'
-check_show 55188000 '1 year, 9 months ago'
-check_show 630000000 '20 years ago'
-check_show 31449600 '12 months ago'
-check_show 62985600 '2 years ago'
+check_relative 5 '5 seconds ago'
+check_relative 300 '5 minutes ago'
+check_relative 18000 '5 hours ago'
+check_relative 432000 '5 days ago'
+check_relative 1728000 '3 weeks ago'
+check_relative 13000000 '5 months ago'
+check_relative 37500000 '1 year, 2 months ago'
+check_relative 55188000 '1 year, 9 months ago'
+check_relative 630000000 '20 years ago'
+check_relative 31449600 '12 months ago'
+check_relative 62985600 '2 years ago'
+
+check_show () {
+ format=$1
+ time=$2
+ expect=$3
+ test_expect_success $4 "show date ($format:$time)" '
+ echo "$time -> $expect" >expect &&
+ test-date show:$format "$time" >actual &&
+ test_cmp expect actual
+ '
+}
+
+# arbitrary but sensible time for examples
+TIME='1466000000 +0200'
+check_show iso8601 "$TIME" '2016-06-15 16:13:20 +0200'
+check_show iso8601-strict "$TIME" '2016-06-15T16:13:20+02:00'
+check_show rfc2822 "$TIME" 'Wed, 15 Jun 2016 16:13:20 +0200'
+check_show short "$TIME" '2016-06-15'
+check_show default "$TIME" 'Wed Jun 15 16:13:20 2016 +0200'
+check_show raw "$TIME" '1466000000 +0200'
+check_show unix "$TIME" '1466000000'
+check_show iso-local "$TIME" '2016-06-15 14:13:20 +0000'
+check_show raw-local "$TIME" '1466000000 +0000'
+check_show unix-local "$TIME" '1466000000'
+
+# arbitrary time absurdly far in the future
+FUTURE="5758122296 -0400"
+check_show iso "$FUTURE" "2152-06-19 18:24:56 -0400" LONG_IS_64BIT
+check_show iso-local "$FUTURE" "2152-06-19 22:24:56 +0000" LONG_IS_64BIT
check_parse() {
echo "$1 -> $2" >expect
test_stderr () {
expected="$1"
expect_in stderr "$1" &&
- test_cmp "$HOME/expected-stderr" "$HOME/stderr"
+ test_i18ncmp "$HOME/expected-stderr" "$HOME/stderr"
}
broken_c_unquote () {
stderr_contains () {
regexp="$1"
- if grep "$regexp" "$HOME/stderr"
+ if test_i18ngrep "$regexp" "$HOME/stderr"
then
return 0
else
test_must_be_empty err
'
+test_expect_success 'diff does not reuse worktree files that need cleaning' '
+ test_config filter.counter.clean "echo . >>count; sed s/^/clean:/" &&
+ echo "file filter=counter" >.gitattributes &&
+ test_commit one file &&
+ test_commit two file &&
+
+ >count &&
+ git diff-tree -p HEAD &&
+ test_line_count = 0 count
+'
+
test_done
test -z "$LFonlydiff" -a -z "$CRLFonlydiff" -a -z "$LFwithNULdiff"
'
-test_expect_success 'text=auto, autocrlf=true _does_ normalize CRLF files' '
+test_expect_success 'text=auto, autocrlf=true does not normalize CRLF files' '
rm -f .gitattributes tmp LFonly CRLFonly LFwithNUL &&
git config core.autocrlf true &&
LFonlydiff=$(git diff LFonly) &&
CRLFonlydiff=$(git diff CRLFonly) &&
LFwithNULdiff=$(git diff LFwithNUL) &&
- test -z "$LFonlydiff" -a -n "$CRLFonlydiff" -a -z "$LFwithNULdiff"
+ test -z "$LFonlydiff" -a -z "$CRLFonlydiff" -a -z "$LFwithNULdiff"
'
test_expect_success 'text=auto, autocrlf=true does not normalize binary files' '
fname=${pfx}_$f.txt &&
cp $f $fname &&
printf Z >>"$fname" &&
- git -c core.autocrlf=$crlf add $fname 2>/dev/null &&
- git -c core.autocrlf=$crlf commit -m "commit_$fname" $fname >"${pfx}_$f.err" 2>&1
+ git -c core.autocrlf=$crlf add $fname 2>"${pfx}_$f.err"
done
test_expect_success "commit NNO files crlf=$crlf attr=$attr LF" '
text,lf) echo "text eol=lf" ;;
text,crlf) echo "text eol=crlf" ;;
auto,) echo "text=auto" ;;
- auto,lf) echo "text eol=lf" ;;
- auto,crlf) echo "text eol=crlf" ;;
+ auto,lf) echo "text=auto eol=lf" ;;
+ auto,crlf) echo "text=auto eol=crlf" ;;
lf,) echo "text eol=lf" ;;
crlf,) echo "text eol=crlf" ;;
,) echo "" ;;
commit_chk_wrnNNO "" "" true LF_CRLF "" "" "" ""
commit_chk_wrnNNO "" "" input "" "" "" "" ""
-commit_chk_wrnNNO "auto" "" false "$WILC" "$WICL" "$WAMIX" "" ""
-commit_chk_wrnNNO "auto" "" true LF_CRLF "" LF_CRLF "" ""
-commit_chk_wrnNNO "auto" "" input "" CRLF_LF CRLF_LF "" ""
-
+commit_chk_wrnNNO "auto" "" false "$WILC" "" "" "" ""
+commit_chk_wrnNNO "auto" "" true LF_CRLF "" "" "" ""
+commit_chk_wrnNNO "auto" "" input "" "" "" "" ""
for crlf in true false input
do
commit_chk_wrnNNO -text "" $crlf "" "" "" "" ""
commit_chk_wrnNNO -text crlf $crlf "" "" "" "" ""
commit_chk_wrnNNO "" lf $crlf "" CRLF_LF CRLF_LF "" CRLF_LF
commit_chk_wrnNNO "" crlf $crlf LF_CRLF "" LF_CRLF LF_CRLF ""
- commit_chk_wrnNNO auto lf $crlf "" CRLF_LF CRLF_LF "" CRLF_LF
- commit_chk_wrnNNO auto crlf $crlf LF_CRLF "" LF_CRLF LF_CRLF ""
+ commit_chk_wrnNNO auto lf $crlf "" "" "" "" ""
+ commit_chk_wrnNNO auto crlf $crlf LF_CRLF "" "" "" ""
commit_chk_wrnNNO text lf $crlf "" CRLF_LF CRLF_LF "" CRLF_LF
commit_chk_wrnNNO text crlf $crlf LF_CRLF "" LF_CRLF LF_CRLF ""
done
commit_chk_wrnNNO "text" "" true LF_CRLF "" LF_CRLF LF_CRLF ""
commit_chk_wrnNNO "text" "" input "" CRLF_LF CRLF_LF "" CRLF_LF
-test_expect_success 'create files cleanup' '
+test_expect_success 'commit NNO and cleanup' '
+ git commit -m "commit files on top of NNO" &&
rm -f *.txt &&
git -c core.autocrlf=false reset --hard
'
check_in_repo_NNO -text "" $crlf LF CRLF CRLF_mix_LF LF_mix_CR CRLF_nul
check_in_repo_NNO -text lf $crlf LF CRLF CRLF_mix_LF LF_mix_CR CRLF_nul
check_in_repo_NNO -text crlf $crlf LF CRLF CRLF_mix_LF LF_mix_CR CRLF_nul
- check_in_repo_NNO auto "" $crlf LF LF LF LF_mix_CR CRLF_nul
- check_in_repo_NNO auto lf $crlf LF LF LF LF_mix_CR LF_nul
- check_in_repo_NNO auto crlf $crlf LF LF LF LF_mix_CR LF_nul
+ check_in_repo_NNO auto "" $crlf LF CRLF CRLF_mix_LF LF_mix_CR CRLF_nul
+ check_in_repo_NNO auto lf $crlf LF CRLF CRLF_mix_LF LF_mix_CR CRLF_nul
+ check_in_repo_NNO auto crlf $crlf LF CRLF CRLF_mix_LF LF_mix_CR CRLF_nul
check_in_repo_NNO text "" $crlf LF LF LF LF_mix_CR LF_nul
check_in_repo_NNO text lf $crlf LF LF LF LF_mix_CR LF_nul
check_in_repo_NNO text crlf $crlf LF LF LF LF_mix_CR LF_nul
checkout_files text "$id" "crlf" "$crlf" "$ceol" CRLF CRLF CRLF CRLF_mix_CR CRLF_nul
# currently the same as text, eol=XXX
checkout_files auto "$id" "lf" "$crlf" "$ceol" LF CRLF CRLF_mix_LF LF_mix_CR LF_nul
- checkout_files auto "$id" "crlf" "$crlf" "$ceol" CRLF CRLF CRLF CRLF_mix_CR CRLF_nul
+ checkout_files auto "$id" "crlf" "$crlf" "$ceol" CRLF CRLF CRLF_mix_LF LF_mix_CR LF_nul
done
# core.autocrlf false, different core.eol
# core.autocrlf true
checkout_files "" "$id" "" true "$ceol" CRLF CRLF CRLF_mix_LF LF_mix_CR LF_nul
# text: core.autocrlf = true overrides core.eol
- checkout_files auto "$id" "" true "$ceol" CRLF CRLF CRLF LF_mix_CR LF_nul
+ checkout_files auto "$id" "" true "$ceol" CRLF CRLF CRLF_mix_LF LF_mix_CR LF_nul
checkout_files text "$id" "" true "$ceol" CRLF CRLF CRLF CRLF_mix_CR CRLF_nul
# text: core.autocrlf = input overrides core.eol
checkout_files text "$id" "" input "$ceol" LF CRLF CRLF_mix_LF LF_mix_CR LF_nul
checkout_files text "$id" "" false "" $NL CRLF $MIX_CRLF_LF $MIX_LF_CR $LFNUL
checkout_files text "$id" "" false native $NL CRLF $MIX_CRLF_LF $MIX_LF_CR $LFNUL
# auto: core.autocrlf=false and core.eol unset(or native) uses native eol
- checkout_files auto "$id" "" false "" $NL CRLF $MIX_CRLF_LF LF_mix_CR LF_nul
- checkout_files auto "$id" "" false native $NL CRLF $MIX_CRLF_LF LF_mix_CR LF_nul
+ checkout_files auto "$id" "" false "" $NL CRLF CRLF_mix_LF LF_mix_CR LF_nul
+ checkout_files auto "$id" "" false native $NL CRLF CRLF_mix_LF LF_mix_CR LF_nul
done
# Should be the last test case: remove some files from the worktree
test_expect_success 'check for a bug in the regex routines' '
# if this test fails, re-build git with NO_REGEX=1
- test-regex
+ test-regex --bug
'
test_done
| git cat-file --batch)"
'
-test_expect_success "--batch-check for an emtpy line" '
+test_expect_success "--batch-check for an empty line" '
test " missing" = "$(echo | git cat-file --batch-check)"
'
. "$TEST_DIRECTORY"/lib-read-tree.sh
test_expect_success 'setup' '
- cat >expected <<-\EOF &&
+ cat >expected <<-EOF &&
100644 77f0ba1734ed79d12881f81b36ee134de6a3327b 0 init.t
- 100644 e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 0 sub/added
- 100644 e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 0 sub/addedtoo
- 100644 e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 0 subsub/added
+ 100644 $EMPTY_BLOB 0 sub/added
+ 100644 $EMPTY_BLOB 0 sub/addedtoo
+ 100644 $EMPTY_BLOB 0 subsub/added
EOF
cat >expected.swt <<-\EOF &&
H init.t
error: The following untracked working tree files would be overwritten by checkout:
sub/added
sub/addedtoo
-Please move or remove them before you can switch branches.
+Please move or remove them before you switch branches.
Aborting
EOF
- test_cmp expected actual
+ test_i18ncmp expected actual
'
test_expect_success 'checkout without --ignore-skip-worktree-bits' '
git archive --format=zip HEAD >/dev/null
'
-test_expect_success 'fsck' '
- test_must_fail git fsck 2>err &&
- n=$(grep "error: attempting to allocate .* over limit" err | wc -l) &&
- test "$n" -gt 1
+test_expect_success 'fsck large blobs' '
+ git fsck 2>err &&
+ test_must_be_empty err
'
test_done
. ./test-lib.sh
cat >expected <<EOF
-tree 4b825dc642cb6eb9a060e54bf8d69288fbee4904
+tree $EMPTY_TREE
author Author Name <author@email> 1117148400 +0000
committer Committer Name <committer@email> 1117150200 +0000
git config --get --path path.normal >>result &&
git config --get --path path.trailingtilde >>result
) &&
- grep "[Ff]ailed to expand.*~/" msg &&
+ test_i18ngrep "[Ff]ailed to expand.*~/" msg &&
test_cmp expect result
'
key garbage
EOF
test_must_fail git config --get section.key >actual 2>error &&
- grep " line 3 " error
+ test_i18ngrep " line 3 " error
'
test_expect_success 'barf on incomplete section header' '
key = value
EOF
test_must_fail git config --get section.key >actual 2>error &&
- grep " line 2 " error
+ test_i18ngrep " line 2 " error
'
test_expect_success 'barf on incomplete string' '
key = "value string
EOF
test_must_fail git config --get section.key >actual 2>error &&
- grep " line 3 " error
+ test_i18ngrep " line 3 " error
'
test_expect_success 'urlmatch' '
git commit -m broken &&
test_must_fail git config --blob=HEAD:config some.value 2>err &&
-
- # just grep for our token as the exact error message is likely to
- # change or be internationalized
- grep "HEAD:config" err
+ test_i18ngrep "HEAD:config" err
'
test_expect_success 'can parse blob ending with CR' '
echo "[" >>.git/config &&
echo "fatal: bad config line 34 in file .git/config" >expect &&
test_expect_code 128 test-config get_value foo.bar 2>actual &&
- test_cmp expect actual
+ test_i18ncmp expect actual
'
test_expect_success 'proper error on error in custom config files' '
echo "[" >>syntax-error &&
echo "fatal: bad config line 1 in file syntax-error" >expect &&
test_expect_code 128 test-config configset_get_value foo.bar syntax-error 2>actual &&
- test_cmp expect actual
+ test_i18ncmp expect actual
'
test_expect_success 'check line errors for malformed values' '
)
'
+cmdline_config="'foo.bar=from-cmdline'"
+test_expect_success 'iteration shows correct origins' '
+ echo "[foo]bar = from-repo" >.git/config &&
+ echo "[foo]bar = from-home" >.gitconfig &&
+ if test_have_prereq MINGW
+ then
+ # Use Windows path (i.e. *not* $HOME)
+ HOME_GITCONFIG=$(pwd)/.gitconfig
+ else
+ # Do not get fooled by symbolic links, i.e. $HOME != $(pwd)
+ HOME_GITCONFIG=$HOME/.gitconfig
+ fi &&
+ cat >expect <<-EOF &&
+ key=foo.bar
+ value=from-home
+ origin=file
+ name=$HOME_GITCONFIG
+ scope=global
+
+ key=foo.bar
+ value=from-repo
+ origin=file
+ name=.git/config
+ scope=repo
+
+ key=foo.bar
+ value=from-cmdline
+ origin=command line
+ name=
+ scope=cmdline
+ EOF
+ GIT_CONFIG_PARAMETERS=$cmdline_config test-config iterate >actual &&
+ test_cmp expect actual
+'
+
test_done
test_cmp expect actual
'
+test_expect_success 'git rev-parse --git-path hooks' '
+ git config core.hooksPath .git/custom-hooks &&
+ git rev-parse --git-path hooks/abc >actual &&
+ test .git/custom-hooks/abc = "$(cat actual)"
+'
+
test_done
m=refs/heads/master
n_dir=refs/heads/gu
n=$n_dir/fixes
-outside=foo
+outside=refs/foo
test_expect_success \
"create $m" \
test_expect_success '-z fails without --stdin' '
test_must_fail git update-ref -z $m $m $m 2>err &&
- grep "usage: git update-ref" err
+ test_i18ngrep "usage: git update-ref" err
'
test_expect_success 'stdin works with no input' '
create $a $m
EOF
test_must_fail git update-ref --stdin <stdin 2>err &&
- grep "fatal: Multiple updates for ref '"'"'$a'"'"' not allowed." err
+ grep "fatal: multiple updates for ref '"'"'$a'"'"' not allowed." err
'
test_expect_success 'stdin create ref works' '
test_expect_success 'stdin -z fails with duplicate refs' '
printf $F "create $a" "$m" "create $b" "$m" "create $a" "$m" >stdin &&
test_must_fail git update-ref -z --stdin <stdin 2>err &&
- grep "fatal: Multiple updates for ref '"'"'$a'"'"' not allowed." err
+ grep "fatal: multiple updates for ref '"'"'$a'"'"' not allowed." err
'
test_expect_success 'stdin -z create ref works' '
test_must_fail git rev-parse --verify -q $c
'
+test_expect_success 'fails with duplicate HEAD update' '
+ git branch target1 $A &&
+ git checkout target1 &&
+ cat >stdin <<-EOF &&
+ update refs/heads/target1 $C
+ option no-deref
+ update HEAD $B
+ EOF
+ test_must_fail git update-ref --stdin <stdin 2>err &&
+ grep "fatal: multiple updates for '\''HEAD'\'' (including one via its referent .refs/heads/target1.) are not allowed" err &&
+ echo "refs/heads/target1" >expect &&
+ git symbolic-ref HEAD >actual &&
+ test_cmp expect actual &&
+ echo "$A" >expect &&
+ git rev-parse refs/heads/target1 >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'fails with duplicate ref update via symref' '
+ git branch target2 $A &&
+ git symbolic-ref refs/heads/symref2 refs/heads/target2 &&
+ cat >stdin <<-EOF &&
+ update refs/heads/target2 $C
+ update refs/heads/symref2 $B
+ EOF
+ test_must_fail git update-ref --stdin <stdin 2>err &&
+ grep "fatal: multiple updates for '\''refs/heads/target2'\'' (including one via symref .refs/heads/symref2.) are not allowed" err &&
+ echo "refs/heads/target2" >expect &&
+ git symbolic-ref refs/heads/symref2 >actual &&
+ test_cmp expect actual &&
+ echo "$A" >expect &&
+ git rev-parse refs/heads/target2 >actual &&
+ test_cmp expect actual
+'
+
run_with_limited_open_files () {
(ulimit -n 32 && "$@")
}
update
create
EOF
- git log --format=%gs -g >actual &&
+ git log --format=%gs -g -2 >actual &&
test_cmp expect actual
'
+++ /dev/null
-#!/bin/sh
-
-test_description='Test git update-ref with D/F conflicts'
-. ./test-lib.sh
-
-test_update_rejected () {
- prefix="$1" &&
- before="$2" &&
- pack="$3" &&
- create="$4" &&
- error="$5" &&
- printf "create $prefix/%s $C\n" $before |
- git update-ref --stdin &&
- git for-each-ref $prefix >unchanged &&
- if $pack
- then
- git pack-refs --all
- fi &&
- printf "create $prefix/%s $C\n" $create >input &&
- test_must_fail git update-ref --stdin <input 2>output.err &&
- grep -F "$error" output.err &&
- git for-each-ref $prefix >actual &&
- test_cmp unchanged actual
-}
-
-Q="'"
-
-test_expect_success 'setup' '
-
- git commit --allow-empty -m Initial &&
- C=$(git rev-parse HEAD)
-
-'
-
-test_expect_success 'existing loose ref is a simple prefix of new' '
-
- prefix=refs/1l &&
- test_update_rejected $prefix "a c e" false "b c/x d" \
- "$Q$prefix/c$Q exists; cannot create $Q$prefix/c/x$Q"
-
-'
-
-test_expect_success 'existing packed ref is a simple prefix of new' '
-
- prefix=refs/1p &&
- test_update_rejected $prefix "a c e" true "b c/x d" \
- "$Q$prefix/c$Q exists; cannot create $Q$prefix/c/x$Q"
-
-'
-
-test_expect_success 'existing loose ref is a deeper prefix of new' '
-
- prefix=refs/2l &&
- test_update_rejected $prefix "a c e" false "b c/x/y d" \
- "$Q$prefix/c$Q exists; cannot create $Q$prefix/c/x/y$Q"
-
-'
-
-test_expect_success 'existing packed ref is a deeper prefix of new' '
-
- prefix=refs/2p &&
- test_update_rejected $prefix "a c e" true "b c/x/y d" \
- "$Q$prefix/c$Q exists; cannot create $Q$prefix/c/x/y$Q"
-
-'
-
-test_expect_success 'new ref is a simple prefix of existing loose' '
-
- prefix=refs/3l &&
- test_update_rejected $prefix "a c/x e" false "b c d" \
- "$Q$prefix/c/x$Q exists; cannot create $Q$prefix/c$Q"
-
-'
-
-test_expect_success 'new ref is a simple prefix of existing packed' '
-
- prefix=refs/3p &&
- test_update_rejected $prefix "a c/x e" true "b c d" \
- "$Q$prefix/c/x$Q exists; cannot create $Q$prefix/c$Q"
-
-'
-
-test_expect_success 'new ref is a deeper prefix of existing loose' '
-
- prefix=refs/4l &&
- test_update_rejected $prefix "a c/x/y e" false "b c d" \
- "$Q$prefix/c/x/y$Q exists; cannot create $Q$prefix/c$Q"
-
-'
-
-test_expect_success 'new ref is a deeper prefix of existing packed' '
-
- prefix=refs/4p &&
- test_update_rejected $prefix "a c/x/y e" true "b c d" \
- "$Q$prefix/c/x/y$Q exists; cannot create $Q$prefix/c$Q"
-
-'
-
-test_expect_success 'one new ref is a simple prefix of another' '
-
- prefix=refs/5 &&
- test_update_rejected $prefix "a e" false "b c c/x d" \
- "cannot process $Q$prefix/c$Q and $Q$prefix/c/x$Q at the same time"
-
-'
-
-test_done
--- /dev/null
+#!/bin/sh
+
+test_description='Test git update-ref error handling'
+. ./test-lib.sh
+
+# Create some references, perhaps run pack-refs --all, then try to
+# create some more references. Ensure that the second creation fails
+# with the correct error message.
+# Usage: test_update_rejected <before> <pack> <create> <error>
+# <before> is a ws-separated list of refs to create before the test
+# <pack> (true or false) tells whether to pack the refs before the test
+# <create> is a list of refs to attempt creating
+# <error> is a string to look for in the stderr of update-ref.
+# All references are created in the namespace specified by the current
+# value of $prefix.
+test_update_rejected () {
+ before="$1" &&
+ pack="$2" &&
+ create="$3" &&
+ error="$4" &&
+ printf "create $prefix/%s $C\n" $before |
+ git update-ref --stdin &&
+ git for-each-ref $prefix >unchanged &&
+ if $pack
+ then
+ git pack-refs --all
+ fi &&
+ printf "create $prefix/%s $C\n" $create >input &&
+ test_must_fail git update-ref --stdin <input 2>output.err &&
+ grep -F "$error" output.err &&
+ git for-each-ref $prefix >actual &&
+ test_cmp unchanged actual
+}
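+
+# A purely illustrative call (not one of the cases below): with
+# prefix=refs/example,
+#   test_update_rejected "a c" false "b c/x" "$Q$prefix/c$Q exists"
+# would first create refs/example/a and refs/example/c, then expect the
+# batch that creates refs/example/b and refs/example/c/x to be rejected
+# with the quoted D/F-conflict message, leaving the existing refs intact.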
+
+Q="'"
+
+test_expect_success 'setup' '
+
+ git commit --allow-empty -m Initial &&
+ C=$(git rev-parse HEAD) &&
+ git commit --allow-empty -m Second &&
+ D=$(git rev-parse HEAD) &&
+ git commit --allow-empty -m Third &&
+ E=$(git rev-parse HEAD)
+'
+
+test_expect_success 'existing loose ref is a simple prefix of new' '
+
+ prefix=refs/1l &&
+ test_update_rejected "a c e" false "b c/x d" \
+ "$Q$prefix/c$Q exists; cannot create $Q$prefix/c/x$Q"
+
+'
+
+test_expect_success 'existing packed ref is a simple prefix of new' '
+
+ prefix=refs/1p &&
+ test_update_rejected "a c e" true "b c/x d" \
+ "$Q$prefix/c$Q exists; cannot create $Q$prefix/c/x$Q"
+
+'
+
+test_expect_success 'existing loose ref is a deeper prefix of new' '
+
+ prefix=refs/2l &&
+ test_update_rejected "a c e" false "b c/x/y d" \
+ "$Q$prefix/c$Q exists; cannot create $Q$prefix/c/x/y$Q"
+
+'
+
+test_expect_success 'existing packed ref is a deeper prefix of new' '
+
+ prefix=refs/2p &&
+ test_update_rejected "a c e" true "b c/x/y d" \
+ "$Q$prefix/c$Q exists; cannot create $Q$prefix/c/x/y$Q"
+
+'
+
+test_expect_success 'new ref is a simple prefix of existing loose' '
+
+ prefix=refs/3l &&
+ test_update_rejected "a c/x e" false "b c d" \
+ "$Q$prefix/c/x$Q exists; cannot create $Q$prefix/c$Q"
+
+'
+
+test_expect_success 'new ref is a simple prefix of existing packed' '
+
+ prefix=refs/3p &&
+ test_update_rejected "a c/x e" true "b c d" \
+ "$Q$prefix/c/x$Q exists; cannot create $Q$prefix/c$Q"
+
+'
+
+test_expect_success 'new ref is a deeper prefix of existing loose' '
+
+ prefix=refs/4l &&
+ test_update_rejected "a c/x/y e" false "b c d" \
+ "$Q$prefix/c/x/y$Q exists; cannot create $Q$prefix/c$Q"
+
+'
+
+test_expect_success 'new ref is a deeper prefix of existing packed' '
+
+ prefix=refs/4p &&
+ test_update_rejected "a c/x/y e" true "b c d" \
+ "$Q$prefix/c/x/y$Q exists; cannot create $Q$prefix/c$Q"
+
+'
+
+test_expect_success 'one new ref is a simple prefix of another' '
+
+ prefix=refs/5 &&
+ test_update_rejected "a e" false "b c c/x d" \
+ "cannot process $Q$prefix/c$Q and $Q$prefix/c/x$Q at the same time"
+
+'
+
+test_expect_success 'empty directory should not fool rev-parse' '
+ prefix=refs/e-rev-parse &&
+ git update-ref $prefix/foo $C &&
+ git pack-refs --all &&
+ mkdir -p .git/$prefix/foo/bar/baz &&
+ echo "$C" >expected &&
+ git rev-parse $prefix/foo >actual &&
+ test_cmp expected actual
+'
+
+test_expect_success 'empty directory should not fool for-each-ref' '
+ prefix=refs/e-for-each-ref &&
+ git update-ref $prefix/foo $C &&
+ git for-each-ref $prefix >expected &&
+ git pack-refs --all &&
+ mkdir -p .git/$prefix/foo/bar/baz &&
+ git for-each-ref $prefix >actual &&
+ test_cmp expected actual
+'
+
+test_expect_success 'empty directory should not fool create' '
+ prefix=refs/e-create &&
+ mkdir -p .git/$prefix/foo/bar/baz &&
+ printf "create %s $C\n" $prefix/foo |
+ git update-ref --stdin
+'
+
+test_expect_success 'empty directory should not fool verify' '
+ prefix=refs/e-verify &&
+ git update-ref $prefix/foo $C &&
+ git pack-refs --all &&
+ mkdir -p .git/$prefix/foo/bar/baz &&
+ printf "verify %s $C\n" $prefix/foo |
+ git update-ref --stdin
+'
+
+test_expect_success 'empty directory should not fool 1-arg update' '
+ prefix=refs/e-update-1 &&
+ git update-ref $prefix/foo $C &&
+ git pack-refs --all &&
+ mkdir -p .git/$prefix/foo/bar/baz &&
+ printf "update %s $D\n" $prefix/foo |
+ git update-ref --stdin
+'
+
+test_expect_success 'empty directory should not fool 2-arg update' '
+ prefix=refs/e-update-2 &&
+ git update-ref $prefix/foo $C &&
+ git pack-refs --all &&
+ mkdir -p .git/$prefix/foo/bar/baz &&
+ printf "update %s $D $C\n" $prefix/foo |
+ git update-ref --stdin
+'
+
+test_expect_success 'empty directory should not fool 0-arg delete' '
+ prefix=refs/e-delete-0 &&
+ git update-ref $prefix/foo $C &&
+ git pack-refs --all &&
+ mkdir -p .git/$prefix/foo/bar/baz &&
+ printf "delete %s\n" $prefix/foo |
+ git update-ref --stdin
+'
+
+test_expect_success 'empty directory should not fool 1-arg delete' '
+ prefix=refs/e-delete-1 &&
+ git update-ref $prefix/foo $C &&
+ git pack-refs --all &&
+ mkdir -p .git/$prefix/foo/bar/baz &&
+ printf "delete %s $C\n" $prefix/foo |
+ git update-ref --stdin
+'
+
+# Test various errors when reading the old values of references...
+
+test_expect_success 'missing old value blocks update' '
+ prefix=refs/missing-update &&
+ cat >expected <<-EOF &&
+ fatal: cannot lock ref $Q$prefix/foo$Q: unable to resolve reference $Q$prefix/foo$Q
+ EOF
+ printf "%s\n" "update $prefix/foo $E $D" |
+ test_must_fail git update-ref --stdin 2>output.err &&
+ test_cmp expected output.err
+'
+
+test_expect_success 'incorrect old value blocks update' '
+ prefix=refs/incorrect-update &&
+ git update-ref $prefix/foo $C &&
+ cat >expected <<-EOF &&
+ fatal: cannot lock ref $Q$prefix/foo$Q: is at $C but expected $D
+ EOF
+ printf "%s\n" "update $prefix/foo $E $D" |
+ test_must_fail git update-ref --stdin 2>output.err &&
+ test_cmp expected output.err
+'
+
+test_expect_success 'existing old value blocks create' '
+ prefix=refs/existing-create &&
+ git update-ref $prefix/foo $C &&
+ cat >expected <<-EOF &&
+ fatal: cannot lock ref $Q$prefix/foo$Q: reference already exists
+ EOF
+ printf "%s\n" "create $prefix/foo $E" |
+ test_must_fail git update-ref --stdin 2>output.err &&
+ test_cmp expected output.err
+'
+
+test_expect_success 'incorrect old value blocks delete' '
+ prefix=refs/incorrect-delete &&
+ git update-ref $prefix/foo $C &&
+ cat >expected <<-EOF &&
+ fatal: cannot lock ref $Q$prefix/foo$Q: is at $C but expected $D
+ EOF
+ printf "%s\n" "delete $prefix/foo $D" |
+ test_must_fail git update-ref --stdin 2>output.err &&
+ test_cmp expected output.err
+'
+
+test_expect_success 'missing old value blocks indirect update' '
+ prefix=refs/missing-indirect-update &&
+ git symbolic-ref $prefix/symref $prefix/foo &&
+ cat >expected <<-EOF &&
+ fatal: cannot lock ref $Q$prefix/symref$Q: unable to resolve reference $Q$prefix/foo$Q
+ EOF
+ printf "%s\n" "update $prefix/symref $E $D" |
+ test_must_fail git update-ref --stdin 2>output.err &&
+ test_cmp expected output.err
+'
+
+test_expect_success 'incorrect old value blocks indirect update' '
+ prefix=refs/incorrect-indirect-update &&
+ git symbolic-ref $prefix/symref $prefix/foo &&
+ git update-ref $prefix/foo $C &&
+ cat >expected <<-EOF &&
+ fatal: cannot lock ref $Q$prefix/symref$Q: is at $C but expected $D
+ EOF
+ printf "%s\n" "update $prefix/symref $E $D" |
+ test_must_fail git update-ref --stdin 2>output.err &&
+ test_cmp expected output.err
+'
+
+test_expect_success 'existing old value blocks indirect create' '
+ prefix=refs/existing-indirect-create &&
+ git symbolic-ref $prefix/symref $prefix/foo &&
+ git update-ref $prefix/foo $C &&
+ cat >expected <<-EOF &&
+ fatal: cannot lock ref $Q$prefix/symref$Q: reference already exists
+ EOF
+ printf "%s\n" "create $prefix/symref $E" |
+ test_must_fail git update-ref --stdin 2>output.err &&
+ test_cmp expected output.err
+'
+
+test_expect_success 'incorrect old value blocks indirect delete' '
+ prefix=refs/incorrect-indirect-delete &&
+ git symbolic-ref $prefix/symref $prefix/foo &&
+ git update-ref $prefix/foo $C &&
+ cat >expected <<-EOF &&
+ fatal: cannot lock ref $Q$prefix/symref$Q: is at $C but expected $D
+ EOF
+ printf "%s\n" "delete $prefix/symref $D" |
+ test_must_fail git update-ref --stdin 2>output.err &&
+ test_cmp expected output.err
+'
+
+test_expect_success 'missing old value blocks indirect no-deref update' '
+ prefix=refs/missing-noderef-update &&
+ git symbolic-ref $prefix/symref $prefix/foo &&
+ cat >expected <<-EOF &&
+ fatal: cannot lock ref $Q$prefix/symref$Q: reference is missing but expected $D
+ EOF
+ printf "%s\n" "option no-deref" "update $prefix/symref $E $D" |
+ test_must_fail git update-ref --stdin 2>output.err &&
+ test_cmp expected output.err
+'
+
+test_expect_success 'incorrect old value blocks indirect no-deref update' '
+ prefix=refs/incorrect-noderef-update &&
+ git symbolic-ref $prefix/symref $prefix/foo &&
+ git update-ref $prefix/foo $C &&
+ cat >expected <<-EOF &&
+ fatal: cannot lock ref $Q$prefix/symref$Q: is at $C but expected $D
+ EOF
+ printf "%s\n" "option no-deref" "update $prefix/symref $E $D" |
+ test_must_fail git update-ref --stdin 2>output.err &&
+ test_cmp expected output.err
+'
+
+test_expect_success 'existing old value blocks indirect no-deref create' '
+ prefix=refs/existing-noderef-create &&
+ git symbolic-ref $prefix/symref $prefix/foo &&
+ git update-ref $prefix/foo $C &&
+ cat >expected <<-EOF &&
+ fatal: cannot lock ref $Q$prefix/symref$Q: reference already exists
+ EOF
+ printf "%s\n" "option no-deref" "create $prefix/symref $E" |
+ test_must_fail git update-ref --stdin 2>output.err &&
+ test_cmp expected output.err
+'
+
+test_expect_success 'incorrect old value blocks indirect no-deref delete' '
+ prefix=refs/incorrect-noderef-delete &&
+ git symbolic-ref $prefix/symref $prefix/foo &&
+ git update-ref $prefix/foo $C &&
+ cat >expected <<-EOF &&
+ fatal: cannot lock ref $Q$prefix/symref$Q: is at $C but expected $D
+ EOF
+ printf "%s\n" "option no-deref" "delete $prefix/symref $D" |
+ test_must_fail git update-ref --stdin 2>output.err &&
+ test_cmp expected output.err
+'
+
+test_expect_success 'non-empty directory blocks create' '
+ prefix=refs/ne-create &&
+ mkdir -p .git/$prefix/foo/bar &&
+ : >.git/$prefix/foo/bar/baz.lock &&
+ test_when_finished "rm -f .git/$prefix/foo/bar/baz.lock" &&
+ cat >expected <<-EOF &&
+ fatal: cannot lock ref $Q$prefix/foo$Q: there is a non-empty directory $Q.git/$prefix/foo$Q blocking reference $Q$prefix/foo$Q
+ EOF
+ printf "%s\n" "update $prefix/foo $C" |
+ test_must_fail git update-ref --stdin 2>output.err &&
+ test_cmp expected output.err &&
+ cat >expected <<-EOF &&
+ fatal: cannot lock ref $Q$prefix/foo$Q: unable to resolve reference $Q$prefix/foo$Q
+ EOF
+ printf "%s\n" "update $prefix/foo $D $C" |
+ test_must_fail git update-ref --stdin 2>output.err &&
+ test_cmp expected output.err
+'
+
+test_expect_success 'broken reference blocks create' '
+ prefix=refs/broken-create &&
+ mkdir -p .git/$prefix &&
+ echo "gobbledigook" >.git/$prefix/foo &&
+ test_when_finished "rm -f .git/$prefix/foo" &&
+ cat >expected <<-EOF &&
+ fatal: cannot lock ref $Q$prefix/foo$Q: unable to resolve reference $Q$prefix/foo$Q: reference broken
+ EOF
+ printf "%s\n" "update $prefix/foo $C" |
+ test_must_fail git update-ref --stdin 2>output.err &&
+ test_cmp expected output.err &&
+ cat >expected <<-EOF &&
+ fatal: cannot lock ref $Q$prefix/foo$Q: unable to resolve reference $Q$prefix/foo$Q: reference broken
+ EOF
+ printf "%s\n" "update $prefix/foo $D $C" |
+ test_must_fail git update-ref --stdin 2>output.err &&
+ test_cmp expected output.err
+'
+
+test_expect_success 'non-empty directory blocks indirect create' '
+ prefix=refs/ne-indirect-create &&
+ git symbolic-ref $prefix/symref $prefix/foo &&
+ mkdir -p .git/$prefix/foo/bar &&
+ : >.git/$prefix/foo/bar/baz.lock &&
+ test_when_finished "rm -f .git/$prefix/foo/bar/baz.lock" &&
+ cat >expected <<-EOF &&
+ fatal: cannot lock ref $Q$prefix/symref$Q: there is a non-empty directory $Q.git/$prefix/foo$Q blocking reference $Q$prefix/foo$Q
+ EOF
+ printf "%s\n" "update $prefix/symref $C" |
+ test_must_fail git update-ref --stdin 2>output.err &&
+ test_cmp expected output.err &&
+ cat >expected <<-EOF &&
+ fatal: cannot lock ref $Q$prefix/symref$Q: unable to resolve reference $Q$prefix/foo$Q
+ EOF
+ printf "%s\n" "update $prefix/symref $D $C" |
+ test_must_fail git update-ref --stdin 2>output.err &&
+ test_cmp expected output.err
+'
+
+test_expect_success 'broken reference blocks indirect create' '
+ prefix=refs/broken-indirect-create &&
+ git symbolic-ref $prefix/symref $prefix/foo &&
+ echo "gobbledigook" >.git/$prefix/foo &&
+ test_when_finished "rm -f .git/$prefix/foo" &&
+ cat >expected <<-EOF &&
+ fatal: cannot lock ref $Q$prefix/symref$Q: unable to resolve reference $Q$prefix/foo$Q: reference broken
+ EOF
+ printf "%s\n" "update $prefix/symref $C" |
+ test_must_fail git update-ref --stdin 2>output.err &&
+ test_cmp expected output.err &&
+ cat >expected <<-EOF &&
+ fatal: cannot lock ref $Q$prefix/symref$Q: unable to resolve reference $Q$prefix/foo$Q: reference broken
+ EOF
+ printf "%s\n" "update $prefix/symref $D $C" |
+ test_must_fail git update-ref --stdin 2>output.err &&
+ test_cmp expected output.err
+'
+
+test_done
git reflog expire --expire=all the_symref
'
+test_expect_success 'continue walking past root commits' '
+ git init orphanage &&
+ (
+ cd orphanage &&
+ cat >expect <<-\EOF &&
+ HEAD@{0} commit (initial): orphan2-1
+ HEAD@{1} commit: orphan1-2
+ HEAD@{2} commit (initial): orphan1-1
+ HEAD@{3} commit (initial): initial
+ EOF
+ test_commit initial &&
+ git checkout --orphan orphan1 &&
+ test_commit orphan1-1 &&
+ test_commit orphan1-2 &&
+ git checkout --orphan orphan2 &&
+ test_commit orphan2-1 &&
+ git log -g --format="%gd %gs" >actual &&
+ test_cmp expect actual
+ )
+'
+
test_done
echo precious >expect &&
test_must_fail git update-ref -d my-private-file >output 2>error &&
test_must_be_empty output &&
- test_i18ngrep -e "cannot lock .*: unable to resolve reference" error &&
+ test_i18ngrep -e "refusing to update ref with bad name" error &&
test_cmp expect .git/my-private-file
'
)
'
+remove_loose_object () {
+ sha1="$(git rev-parse "$1")" &&
+ remainder=${sha1#??} &&
+ firsttwo=${sha1%$remainder} &&
+ rm .git/objects/$firsttwo/$remainder
+}
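+
+# Illustration (assumed object name): for a blob whose id starts with
+# "d3ad...", the helper splits off the first two hex digits and deletes
+# .git/objects/d3/<remaining 38 digits>, deliberately corrupting the
+# repository so that fsck has something to report.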
+
+test_expect_success 'fsck --name-objects' '
+ rm -rf name-objects &&
+ git init name-objects &&
+ (
+ cd name-objects &&
+ test_commit julius caesar.t &&
+ test_commit augustus &&
+ test_commit caesar &&
+ remove_loose_object $(git rev-parse julius:caesar.t) &&
+ test_must_fail git fsck --name-objects >out &&
+ tree=$(git rev-parse --verify julius:) &&
+ grep "$tree (\(refs/heads/master\|HEAD\)@{[0-9]*}:" out
+ )
+'
+
test_done
test_must_fail git rev-parse foobar:file.txt 2>error &&
grep "Invalid object name '"'"'foobar'"'"'." error &&
test_must_fail git rev-parse foobar 2> error &&
- grep "unknown revision or path not in the working tree." error
+ test_i18ngrep "unknown revision or path not in the working tree." error
'
test_expect_success 'incorrect file in sha1:path' '
git update-index --add one &&
git ls-files --stage >ls-files.actual &&
cat >ls-files.expect <<EOF &&
-100644 e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 0 one
+100644 $EMPTY_BLOB 0 one
EOF
test_cmp ls-files.expect ls-files.actual &&
test-dump-split-index .git/index | sed "/^own/d" >actual &&
cat >expect <<EOF &&
base $base
-100644 e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 0 one
+100644 $EMPTY_BLOB 0 one
replacements:
deletions:
EOF
git update-index --no-split-index &&
git ls-files --stage >ls-files.actual &&
cat >ls-files.expect <<EOF &&
-100644 e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 0 one
+100644 $EMPTY_BLOB 0 one
EOF
test_cmp ls-files.expect ls-files.actual &&
git update-index --split-index &&
git ls-files --stage >ls-files.actual &&
cat >ls-files.expect <<EOF &&
-100644 e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 0 one
+100644 $EMPTY_BLOB 0 one
EOF
test_cmp ls-files.expect ls-files.actual &&
git ls-files --stage >ls-files.actual &&
cat >ls-files.expect <<EOF &&
100644 2e0996000b7e9019eabcad29391bf0f5c7702f0b 0 one
-100644 e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 0 two
+100644 $EMPTY_BLOB 0 two
EOF
test_cmp ls-files.expect ls-files.actual &&
q_to_tab >expect <<EOF &&
$BASE
100644 2e0996000b7e9019eabcad29391bf0f5c7702f0b 0Q
-100644 e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 0 two
+100644 $EMPTY_BLOB 0 two
replacements: 0
deletions:
EOF
git update-index --add one &&
git ls-files --stage >ls-files.actual &&
cat >ls-files.expect <<EOF &&
-100644 e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 0 one
+100644 $EMPTY_BLOB 0 one
EOF
test_cmp ls-files.expect ls-files.actual &&
test-dump-split-index .git/index | sed "/^own/d" >actual &&
cat >expect <<EOF &&
$BASE
-100644 e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 0 one
+100644 $EMPTY_BLOB 0 one
replacements:
deletions: 0
EOF
git update-index --add two &&
git ls-files --stage >actual &&
cat >expect <<EOF &&
-100644 e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 0 one
-100644 e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 0 two
+100644 $EMPTY_BLOB 0 one
+100644 $EMPTY_BLOB 0 two
EOF
test_cmp expect actual
'
git update-index --no-split-index &&
git ls-files --stage >ls-files.actual &&
cat >ls-files.expect <<EOF &&
-100644 e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 0 one
-100644 e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 0 two
+100644 $EMPTY_BLOB 0 one
+100644 $EMPTY_BLOB 0 two
EOF
test_cmp ls-files.expect ls-files.actual &&
test_expect_success 'accurate error message with more than one ref' '
test_must_fail git checkout HEAD master -- 2>actual &&
- grep 2 actual &&
+ test_i18ngrep 2 actual &&
test_i18ngrep "one reference expected, 2 given" actual
'
git checkout branch2 &&
echo >expect "fatal: A branch named '\''branch1'\'' already exists." &&
test_must_fail git checkout -b @{-1} 2>actual &&
- test_cmp expect actual
+ test_i18ncmp expect actual
'
test_expect_success 'checkout -B to an existing branch resets branch to HEAD' '
test_i18ncmp expect stdout
'
+test_expect_success 'no advice given for explicit detached head state' '
+ # baseline
+ test_config advice.detachedHead true &&
+ git checkout child && git checkout HEAD^0 >expect.advice 2>&1 &&
+ test_config advice.detachedHead false &&
+ git checkout child && git checkout HEAD^0 >expect.no-advice 2>&1 &&
+ test_unconfig advice.detachedHead &&
+ # without configuration, the advice.* variables default to true
+ git checkout child && git checkout HEAD^0 >actual 2>&1 &&
+ test_cmp expect.advice actual &&
+
+ # with explicit --detach
+ # no configuration
+ test_unconfig advice.detachedHead &&
+ git checkout child && git checkout --detach HEAD^0 >actual 2>&1 &&
+ test_cmp expect.no-advice actual &&
+
+ # explicitly decline advice
+ test_config advice.detachedHead false &&
+ git checkout child && git checkout --detach HEAD^0 >actual 2>&1 &&
+ test_cmp expect.no-advice actual
+'
+
test_done
git worktree add --detach existing_empty master
'
+test_expect_success '"add" using shorthand - fails when no previous branch' '
+ test_must_fail git worktree add existing_short -
+'
+
+test_expect_success '"add" using - shorthand' '
+ git checkout -b newbranch &&
+ echo hello >myworld &&
+ git add myworld &&
+ git commit -m myworld &&
+ git checkout master &&
+ git worktree add short-hand - &&
+ echo refs/heads/newbranch >expect &&
+ git -C short-hand rev-parse --symbolic-full-name HEAD >actual &&
+ test_cmp expect actual
+'
+
test_expect_success '"add" refuses to checkout locked branch' '
test_must_fail git worktree add zere master &&
! test -d zere &&
--- /dev/null
+#!/bin/sh
+
+test_description='test git worktree move, remove, lock and unlock'
+
+. ./test-lib.sh
+
+test_expect_success 'setup' '
+ test_commit init &&
+ git worktree add source &&
+ git worktree list --porcelain | grep "^worktree" >actual &&
+ cat <<-EOF >expected &&
+ worktree $(pwd)
+ worktree $(pwd)/source
+ EOF
+ test_cmp expected actual
+'
+
+test_expect_success 'lock main worktree' '
+ test_must_fail git worktree lock .
+'
+
+test_expect_success 'lock linked worktree' '
+ git worktree lock --reason hahaha source &&
+ echo hahaha >expected &&
+ test_cmp expected .git/worktrees/source/locked
+'
+
+test_expect_success 'lock linked worktree from another worktree' '
+ rm .git/worktrees/source/locked &&
+ git worktree add elsewhere &&
+ git -C elsewhere worktree lock --reason hahaha ../source &&
+ echo hahaha >expected &&
+ test_cmp expected .git/worktrees/source/locked
+'
+
+test_expect_success 'lock worktree twice' '
+ test_must_fail git worktree lock source &&
+ echo hahaha >expected &&
+ test_cmp expected .git/worktrees/source/locked
+'
+
+test_expect_success 'lock worktree twice (from the locked worktree)' '
+ test_must_fail git -C source worktree lock . &&
+ echo hahaha >expected &&
+ test_cmp expected .git/worktrees/source/locked
+'
+
+test_expect_success 'unlock main worktree' '
+ test_must_fail git worktree unlock .
+'
+
+test_expect_success 'unlock linked worktree' '
+ git worktree unlock source &&
+ test_path_is_missing .git/worktrees/source/locked
+'
+
+test_expect_success 'unlock worktree twice' '
+ test_must_fail git worktree unlock source &&
+ test_path_is_missing .git/worktrees/source/locked
+'
+
+test_done
test_cmp expect actual
'
+test_expect_success 'cache-tree does not ignore dir that has i-t-a entries' '
+ git init ita-in-dir &&
+ (
+ cd ita-in-dir &&
+ mkdir 2 &&
+ for f in 1 2/1 2/2 3
+ do
+ echo "$f" >"$f"
+ done &&
+ git add 1 2/2 3 &&
+ git add -N 2/1 &&
+ git commit -m committed &&
+ git ls-tree -r HEAD >actual &&
+ grep 2/2 actual
+ )
+'
+
+test_expect_success 'cache-tree does skip dir that becomes empty' '
+ rm -fr ita-in-dir &&
+ git init ita-in-dir &&
+ (
+ cd ita-in-dir &&
+ mkdir -p 1/2/3 &&
+ echo 4 >1/2/3/4 &&
+ git add -N 1/2/3/4 &&
+ git write-tree >actual &&
+ echo $EMPTY_TREE >expected &&
+ test_cmp expected actual
+ )
+'
+
test_done
. ./test-lib.sh
+EXEC_PATH="$(git --exec-path)"
+test_have_prereq !MINGW ||
+case "$EXEC_PATH" in
+[A-Za-z]:/*)
+ EXEC_PATH="/${EXEC_PATH%%:*}${EXEC_PATH#?:}"
+ ;;
+esac
+
test_cd_to_toplevel () {
test_expect_success $3 "$2" '
(
cd '"'$1'"' &&
- PATH="$(git --exec-path):$PATH" &&
+ PATH="$EXEC_PATH:$PATH" &&
. git-sh-setup &&
cd_to_toplevel &&
[ "$(pwd -P)" = "$TOPLEVEL" ]
git merge other
'
+test_expect_success 'merge-recursive remembers the names of all base trees' '
+ git reset --hard HEAD &&
+
+ # more trees than static slots used by oid_to_hex()
+ for commit in $c0 $c2 $c4 $c5 $c6 $c7
+ do
+ git rev-parse "$commit^{tree}"
+ done >trees &&
+
+ # ignore the return code -- it only fails because the input is weird
+ test_must_fail git -c merge.verbosity=5 merge-recursive $(cat trees) -- $c1 $c3 >out &&
+
+ # merge-recursive prints in reverse order, but we do not care
+ sort <trees >expect &&
+ sed -n "s/^virtual //p" out | sort >actual &&
+ test_cmp expect actual
+'
+
test_done
path3/1.txt - a file in a directory
path3/2.txt - a file in a directory
-Test the handling of mulitple directories which have matching file
+Test the handling of multiple directories which have matching file
entries. Also test odd filename and missing entries handling.
'
. ./test-lib.sh
'
test_expect_success 'ls-tree a[a] matches literally' '
- cat >expect <<-\EOF &&
- 100644 blob e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 a[a]/three
+ cat >expect <<-EOF &&
+ 100644 blob $EMPTY_BLOB a[a]/three
EOF
git ls-tree -r HEAD "a[a]" >actual &&
test_cmp expect actual
'
test_expect_success 'ls-tree outside prefix' '
- cat >expect <<-\EOF &&
- 100644 blob e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 ../a[a]/three
+ cat >expect <<-EOF &&
+ 100644 blob $EMPTY_BLOB ../a[a]/three
EOF
( cd aa && git ls-tree -r HEAD "../a[a]"; ) >actual &&
test_cmp expect actual
test_i18ngrep "branch name required" err
'
+test_expect_success 'git branch -m m broken_symref should work' '
+ test_when_finished "git branch -D broken_symref" &&
+ git branch -l m &&
+ git symbolic-ref refs/heads/broken_symref refs/heads/i_am_broken &&
+ git branch -m m broken_symref &&
+ git reflog exists refs/heads/broken_symref &&
+ test_must_fail git reflog exists refs/heads/i_am_broken
+'
+
test_expect_success 'git branch -m m m/m should work' '
git branch -l m &&
git branch -m m m/m &&
git branch -d origin/master
git branch --set-upstream-to origin/master
EOF
- test_cmp expected actual
+ test_i18ncmp expected actual
'
test_expect_success '--set-upstream with two args only shows the deprecation message' '
cat >expected <<EOF &&
The --set-upstream flag is deprecated and will be removed. Consider using --track or --set-upstream-to
EOF
- test_cmp expected actual
+ test_i18ncmp expected actual
'
test_expect_success '--set-upstream with one arg only shows the deprecation message if the branch existed' '
cat >expected <<EOF &&
The --set-upstream flag is deprecated and will be removed. Consider using --track or --set-upstream-to
EOF
- test_cmp expected actual
+ test_i18ncmp expected actual
'
test_expect_success '--set-upstream-to notices an error to set branch as own upstream' '
* topic 2c939f4 [ahead 1] foo
zzz c77a0a9 second on master
EOF
- test_cmp expect actual
+ test_i18ncmp expect actual
'
test_done
git config core.notesRef refs/notes/m &&
test_must_fail git notes merge z >output &&
# Output should point to where to resolve conflicts
- grep -q "\\.git/NOTES_MERGE_WORKTREE" output &&
+ test_i18ngrep "\\.git/NOTES_MERGE_WORKTREE" output &&
# Inspect merge conflicts
ls .git/NOTES_MERGE_WORKTREE >output_conflicts &&
test_cmp expect_conflicts output_conflicts &&
git config core.notesRef refs/notes/m &&
test_must_fail git notes merge z >output &&
# Output should point to where to resolve conflicts
- grep -q "\\.git/NOTES_MERGE_WORKTREE" output &&
+ test_i18ngrep "\\.git/NOTES_MERGE_WORKTREE" output &&
# Inspect merge conflicts
ls .git/NOTES_MERGE_WORKTREE >output_conflicts &&
test_cmp expect_conflicts output_conflicts &&
test_expect_success 'redo merge of z into m (== y) with default ("manual") resolver => Conflicting 3-way merge' '
test_must_fail git notes merge z >output &&
# Output should point to where to resolve conflicts
- grep -q "\\.git/NOTES_MERGE_WORKTREE" output &&
+ test_i18ngrep "\\.git/NOTES_MERGE_WORKTREE" output &&
# Inspect merge conflicts
ls .git/NOTES_MERGE_WORKTREE >output_conflicts &&
test_cmp expect_conflicts output_conflicts &&
git update-ref refs/notes/m refs/notes/y &&
test_must_fail git notes merge z >output &&
# Output should point to where to resolve conflicts
- grep -q "\\.git/NOTES_MERGE_WORKTREE" output &&
+ test_i18ngrep "\\.git/NOTES_MERGE_WORKTREE" output &&
# Inspect merge conflicts
ls .git/NOTES_MERGE_WORKTREE >output_conflicts &&
test_cmp expect_conflicts output_conflicts &&
cd worktree &&
git config core.notesRef refs/notes/y &&
test_must_fail git notes merge z 2>err &&
- grep "A notes merge into refs/notes/y is already in-progress at" err
+ test_i18ngrep "A notes merge into refs/notes/y is already in-progress at" err
) &&
test_path_is_missing .git/worktrees/worktree/NOTES_MERGE_REF
'
cd worktree2 &&
git config core.notesRef refs/notes/x &&
test_must_fail git notes merge z 2>&1 >out &&
- grep "Automatic notes merge failed" out &&
+ test_i18ngrep "Automatic notes merge failed" out &&
grep -v "A notes merge into refs/notes/x is already in-progress in" out
) &&
echo "ref: refs/notes/x" >expect &&
test_expect_success 'Show verbose error when HEAD could not be detached' '
>B &&
test_must_fail git rebase topic 2>output.err >output.out &&
- grep "The following untracked working tree files would be overwritten by checkout:" output.err &&
- grep B output.err
+ test_i18ngrep "The following untracked working tree files would be overwritten by checkout:" output.err &&
+ test_i18ngrep B output.err
'
rm -f B
test_commit P fileP
'
-# "exec" commands are ran with the user shell by default, but this may
+# "exec" commands are run with the user shell by default, but this may
# be non-POSIX. For example, if SHELL=zsh then ">file" doesn't work
# to create a file. Unsetting SHELL avoids such non-portable behavior
# in tests. It must be exported for it to take effect where needed.
git commit -m "remove file in base" &&
set_fake_editor &&
test_must_fail git rebase -i master > output 2>&1 &&
- grep "The following untracked working tree files would be overwritten by checkout:" \
+ test_i18ngrep "The following untracked working tree files would be overwritten by checkout:" \
output &&
- grep "file1" output &&
+ test_i18ngrep "file1" output &&
test_path_is_missing .git/rebase-merge &&
git reset --hard HEAD^
'
echo "edited again" > file7 &&
git add file7 &&
test_must_fail git rebase --continue 2>error &&
- grep "You have staged changes in your working tree." error
+ test_i18ngrep "You have staged changes in your working tree." error
'
test_expect_success 'rebase a detached HEAD' '
EOF
test_set_editor "$(pwd)/dump-raw.sh" &&
git rebase -i HEAD~4 >actual &&
- grep "^# Rebase ..* onto ..* ([0-9]" actual
+ test_i18ngrep "^# Rebase ..* onto ..* ([0-9]" actual
'
test_expect_success 'rebase -i commits that overwrite untracked files (pick)' '
FAKE_LINES="1 2 3 4" \
git rebase -i --root 2>actual &&
test D = $(git cat-file commit HEAD | sed -ne \$p) &&
- test_cmp expect actual
+ test_i18ncmp expect actual
'
cat >expect <<EOF
set_fake_editor &&
FAKE_LINES="1 2 3 4" \
git rebase -i --root 2>actual &&
- test_cmp expect actual &&
+ test_i18ncmp expect actual &&
test D = $(git cat-file commit HEAD | sed -ne \$p)
'
set_fake_editor &&
test_must_fail env FAKE_LINES="1 2 4" \
git rebase -i --root 2>actual &&
- test_cmp expect actual &&
+ test_i18ncmp expect actual &&
cp .git/rebase-merge/git-rebase-todo.backup \
.git/rebase-merge/git-rebase-todo &&
FAKE_LINES="1 2 drop 3 4 drop 5" \
set_fake_editor &&
test_must_fail env FAKE_LINES="1 2 3 bad 4 5" \
git rebase -i --root 2>actual &&
- test_cmp expect actual &&
+ test_i18ncmp expect actual &&
FAKE_LINES="1 2 3 drop 4 5" git rebase --edit-todo &&
git rebase --continue &&
test E = $(git cat-file commit HEAD | sed -ne \$p) &&
set_fake_editor &&
test_must_fail env FAKE_LINES="1 2 edit fakesha 3 4 5 #" \
git rebase -i --root 2>actual &&
- test_cmp expect actual &&
+ test_i18ncmp expect actual &&
FAKE_LINES="1 2 4 5 6" git rebase --edit-todo &&
git rebase --continue &&
test E = $(git cat-file commit HEAD | sed -ne \$p)
)
'
+SQ="'"
+test_expect_success 'rebase -i --gpg-sign=<key-id>' '
+ set_fake_editor &&
+ FAKE_LINES="edit 1" git rebase -i --gpg-sign="\"S I Gner\"" HEAD^ \
+ >out 2>err &&
+ test_i18ngrep "$SQ-S\"S I Gner\"$SQ" err
+'
+
test_done
test 2 = $(git cat-file commit HEAD^ | grep squash | wc -l)
'
+set_backup_editor () {
+ write_script backup-editor.sh <<-\EOF
+ cp "$1" .git/backup-"$(basename "$1")"
+ EOF
+ test_set_editor "$PWD/backup-editor.sh"
+}
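+
+# Note (descriptive only): whenever rebase -i opens its todo list with this
+# editor, an unmodified copy is saved as .git/backup-git-rebase-todo, which
+# the test below greps to check whether the second empty commit survived
+# the autosquash rearrangement.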
+
+test_expect_failure 'autosquash with multiple empty patches' '
+ test_tick &&
+ git commit --allow-empty -m "empty" &&
+ test_tick &&
+ git commit --allow-empty -m "empty2" &&
+ test_tick &&
+ >fixup &&
+ git add fixup &&
+ git commit --fixup HEAD^^ &&
+ (
+ set_backup_editor &&
+ GIT_USE_REBASE_HELPER=false \
+ git rebase -i --force-rebase --autosquash HEAD~4 &&
+ grep empty2 .git/backup-git-rebase-todo
+ )
+'
+
+test_expect_success 'extra spaces after fixup!' '
+ base=$(git rev-parse HEAD) &&
+ test_commit to-fixup &&
+ git commit --allow-empty -m "fixup! to-fixup" &&
+ git rebase -i --autosquash --keep-empty HEAD~2 &&
+ parent=$(git rev-parse HEAD^) &&
+ test $base = $parent
+'
+
test_done
test_expect_success 'abort rebase -i with --autostash' '
test_when_finished "git reset --hard" &&
- echo uncommited-content >file0 &&
+ echo uncommitted-content >file0 &&
(
write_script abort-editor.sh <<-\EOF &&
echo >"$1"
test_must_fail git rebase -i --autostash HEAD^ &&
rm -f abort-editor.sh
) &&
- echo uncommited-content >expected &&
+ echo uncommitted-content >expected &&
+ test_cmp expected file0
+'
+
+test_expect_success 'restore autostash on editor failure' '
+ test_when_finished "git reset --hard" &&
+ echo uncommitted-content >file0 &&
+ (
+ test_set_editor "false" &&
+ test_must_fail git rebase -i --autostash HEAD^
+ ) &&
+ echo uncommitted-content >expected &&
+ test_cmp expected file0
+'
+
+test_expect_success 'autostash is saved on editor failure with conflict' '
+ test_when_finished "git reset --hard" &&
+ echo uncommitted-content >file0 &&
+ (
+ write_script abort-editor.sh <<-\EOF &&
+ echo conflicting-content >file0
+ exit 1
+ EOF
+ test_set_editor "$(pwd)/abort-editor.sh" &&
+ test_must_fail git rebase -i --autostash HEAD^ &&
+ rm -f abort-editor.sh
+ ) &&
+ echo conflicting-content >expected &&
+ test_cmp expected file0 &&
+ git checkout file0 &&
+ git stash pop &&
+ echo uncommitted-content >expected &&
test_cmp expected file0
'
--- /dev/null
+#!/bin/sh
+
+test_description='git rebase tests for -Xsubtree
+
+This test runs git rebase and tests the subtree strategy.
+'
+. ./test-lib.sh
+. "$TEST_DIRECTORY"/lib-rebase.sh
+
+commit_message() {
+ git log --pretty=format:%s -1 "$1"
+}
+
+test_expect_success 'setup' '
+ test_commit README &&
+ mkdir files &&
+ (
+ cd files &&
+ git init &&
+ test_commit master1 &&
+ test_commit master2 &&
+ test_commit master3
+ ) &&
+ git fetch files master &&
+ git branch files-master FETCH_HEAD &&
+ git read-tree --prefix=files_subtree files-master &&
+ git checkout -- files_subtree &&
+ tree=$(git write-tree) &&
+ head=$(git rev-parse HEAD) &&
+ rev=$(git rev-parse --verify files-master^0) &&
+ commit=$(git commit-tree -p $head -p $rev -m "Add subproject master" $tree) &&
+ git update-ref HEAD $commit &&
+ (
+ cd files_subtree &&
+ test_commit master4
+ ) &&
+ test_commit files_subtree/master5
+'
+
+# FAILURE: Does not preserve master4.
+test_expect_failure 'Rebase -Xsubtree --preserve-merges --onto commit 4' '
+ reset_rebase &&
+ git checkout -b rebase-preserve-merges-4 master &&
+ git filter-branch --prune-empty -f --subdirectory-filter files_subtree &&
+ git commit -m "Empty commit" --allow-empty &&
+ git rebase -Xsubtree=files_subtree --preserve-merges --onto files-master master &&
+ verbose test "$(commit_message HEAD~)" = "files_subtree/master4"
+'
+
+# FAILURE: Does not preserve master5.
+test_expect_failure 'Rebase -Xsubtree --preserve-merges --onto commit 5' '
+ reset_rebase &&
+ git checkout -b rebase-preserve-merges-5 master &&
+ git filter-branch --prune-empty -f --subdirectory-filter files_subtree &&
+ git commit -m "Empty commit" --allow-empty &&
+ git rebase -Xsubtree=files_subtree --preserve-merges --onto files-master master &&
+ verbose test "$(commit_message HEAD)" = "files_subtree/master5"
+'
+
+# FAILURE: Does not preserve master4.
+test_expect_failure 'Rebase -Xsubtree --keep-empty --preserve-merges --onto commit 4' '
+ reset_rebase &&
+ git checkout -b rebase-keep-empty-4 master &&
+ git filter-branch --prune-empty -f --subdirectory-filter files_subtree &&
+ git commit -m "Empty commit" --allow-empty &&
+ git rebase -Xsubtree=files_subtree --keep-empty --preserve-merges --onto files-master master &&
+ verbose test "$(commit_message HEAD~2)" = "files_subtree/master4"
+'
+
+# FAILURE: Does not preserve master5.
+test_expect_failure 'Rebase -Xsubtree --keep-empty --preserve-merges --onto commit 5' '
+ reset_rebase &&
+ git checkout -b rebase-keep-empty-5 master &&
+ git filter-branch --prune-empty -f --subdirectory-filter files_subtree &&
+ git commit -m "Empty commit" --allow-empty &&
+ git rebase -Xsubtree=files_subtree --keep-empty --preserve-merges --onto files-master master &&
+ verbose test "$(commit_message HEAD~)" = "files_subtree/master5"
+'
+
+# FAILURE: Does not preserve Empty.
+test_expect_failure 'Rebase -Xsubtree --keep-empty --preserve-merges --onto empty commit' '
+ reset_rebase &&
+ git checkout -b rebase-keep-empty-empty master &&
+ git filter-branch --prune-empty -f --subdirectory-filter files_subtree &&
+ git commit -m "Empty commit" --allow-empty &&
+ git rebase -Xsubtree=files_subtree --keep-empty --preserve-merges --onto files-master master &&
+ verbose test "$(commit_message HEAD)" = "Empty commit"
+'
+
+# FAILURE: fatal: Could not parse object
+test_expect_failure 'Rebase -Xsubtree --onto commit 4' '
+ reset_rebase &&
+ git checkout -b rebase-onto-4 master &&
+ git filter-branch --prune-empty -f --subdirectory-filter files_subtree &&
+ git commit -m "Empty commit" --allow-empty &&
+ git rebase -Xsubtree=files_subtree --onto files-master master &&
+ verbose test "$(commit_message HEAD~2)" = "files_subtree/master4"
+'
+
+# FAILURE: fatal: Could not parse object
+test_expect_failure 'Rebase -Xsubtree --onto commit 5' '
+ reset_rebase &&
+ git checkout -b rebase-onto-5 master &&
+ git filter-branch --prune-empty -f --subdirectory-filter files_subtree &&
+ git commit -m "Empty commit" --allow-empty &&
+ git rebase -Xsubtree=files_subtree --onto files-master master &&
+ verbose test "$(commit_message HEAD~)" = "files_subtree/master5"
+'
+# FAILURE: fatal: Could not parse object
+test_expect_failure 'Rebase -Xsubtree --onto empty commit' '
+ reset_rebase &&
+ git checkout -b rebase-onto-empty master &&
+ git filter-branch --prune-empty -f --subdirectory-filter files_subtree &&
+ git commit -m "Empty commit" --allow-empty &&
+ git rebase -Xsubtree=files_subtree --onto files-master master &&
+ verbose test "$(commit_message HEAD)" = "Empty commit"
+'
+
+test_done
. ./test-lib.sh
+# Test the file mode "$1" of the file "$2" in the index.
+test_mode_in_index () {
+ case "$(git ls-files -s "$2")" in
+ "$1 "*" $2")
+ echo pass
+ ;;
+ *)
+ echo fail
+ git ls-files -s "$2"
+ return 1
+ ;;
+ esac
+}
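+
+# Example (illustrative): "test_mode_in_index 100644 xfoo1" succeeds only if
+# "git ls-files -s xfoo1" reports a regular non-executable file; any other
+# mode makes the helper print the actual index entry and return failure.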
+
test_expect_success \
'Test of git add' \
'touch foo && git add foo'
echo foo >xfoo1 &&
chmod 755 xfoo1 &&
git add xfoo1 &&
- case "$(git ls-files --stage xfoo1)" in
- 100644" "*xfoo1) echo pass;;
- *) echo fail; git ls-files --stage xfoo1; (exit 1);;
- esac'
+ test_mode_in_index 100644 xfoo1'
test_expect_success 'git add: filemode=0 should not get confused by symlink' '
rm -f xfoo1 &&
test_ln_s_add foo xfoo1 &&
- case "$(git ls-files --stage xfoo1)" in
- 120000" "*xfoo1) echo pass;;
- *) echo fail; git ls-files --stage xfoo1; (exit 1);;
- esac
+ test_mode_in_index 120000 xfoo1
'
test_expect_success \
echo foo >xfoo2 &&
chmod 755 xfoo2 &&
git update-index --add xfoo2 &&
- case "$(git ls-files --stage xfoo2)" in
- 100644" "*xfoo2) echo pass;;
- *) echo fail; git ls-files --stage xfoo2; (exit 1);;
- esac'
+ test_mode_in_index 100644 xfoo2'
test_expect_success 'git add: filemode=0 should not get confused by symlink' '
rm -f xfoo2 &&
test_ln_s_add foo xfoo2 &&
- case "$(git ls-files --stage xfoo2)" in
- 120000" "*xfoo2) echo pass;;
- *) echo fail; git ls-files --stage xfoo2; (exit 1);;
- esac
+ test_mode_in_index 120000 xfoo2
'
test_expect_success \
'git update-index --add: Test that executable bit is not used...' \
'git config core.filemode 0 &&
test_ln_s_add xfoo2 xfoo3 && # runs git update-index --add
- case "$(git ls-files --stage xfoo3)" in
- 120000" "*xfoo3) echo pass;;
- *) echo fail; git ls-files --stage xfoo3; (exit 1);;
- esac'
+ test_mode_in_index 120000 xfoo3'
test_expect_success '.gitignore test setup' '
echo "*.ig" >.gitignore &&
test_i18ncmp expect.err actual.err
'
+test_expect_success 'git add --chmod=[+-]x stages correctly' '
+ rm -f foo1 &&
+ echo foo >foo1 &&
+ git add --chmod=+x foo1 &&
+ test_mode_in_index 100755 foo1 &&
+ git add --chmod=-x foo1 &&
+ test_mode_in_index 100644 foo1
+'
+
+test_expect_success POSIXPERM,SYMLINKS 'git add --chmod=+x with symlinks' '
+ git config core.filemode 1 &&
+ git config core.symlinks 1 &&
+ rm -f foo2 &&
+ echo foo >foo2 &&
+ git add --chmod=+x foo2 &&
+ test_mode_in_index 100755 foo2
+'
+
test_done
test_cmp expected current
'
-EMPTY_TREE=4b825dc642cb6eb9a060e54bf8d69288fbee4904
-
test_expect_success 'diff-tree with wildcard shows dir also matches' '
git diff-tree --name-only $EMPTY_TREE $tree -- "f*" >result &&
echo file0 >expected &&
grep -e "^Subject:" "$1"
}
+test_expect_success 'format.from=false' '
+
+ git -c format.from=false format-patch --stdout master..side |
+ sed -e "/^\$/q" >patch &&
+ check_patch patch &&
+ ! grep "^From: C O Mitter <committer@example.com>\$" patch
+'
+
+test_expect_success 'format.from=true' '
+
+ git -c format.from=true format-patch --stdout master..side |
+ sed -e "/^\$/q" >patch &&
+ check_patch patch &&
+ grep "^From: C O Mitter <committer@example.com>\$" patch
+'
+
+test_expect_success 'format.from with address' '
+
+ git -c format.from="F R Om <from@example.com>" format-patch --stdout master..side |
+ sed -e "/^\$/q" >patch &&
+ check_patch patch &&
+ grep "^From: F R Om <from@example.com>\$" patch
+'
+
+test_expect_success '--no-from overrides format.from' '
+
+ git -c format.from="F R Om <from@example.com>" format-patch --no-from --stdout master..side |
+ sed -e "/^\$/q" >patch &&
+ check_patch patch &&
+ ! grep "^From: F R Om <from@example.com>\$" patch
+'
+
+test_expect_success '--from overrides format.from' '
+
+ git -c format.from="F R Om <from@example.com>" format-patch --from --stdout master..side |
+ sed -e "/^\$/q" >patch &&
+ check_patch patch &&
+ ! grep "^From: F R Om <from@example.com>\$" patch
+'
+
test_expect_success '--no-to overrides config.to' '
git config --replace-all format.to \
test_cmp expected actual
'
+test_expect_success 'format-patch --pretty=mboxrd' '
+ sp=" " &&
+ cat >msg <<-INPUT_END &&
+ mboxrd should escape the body
+
+ From could trip up a loose mbox parser
+ >From extra escape for reversibility
+ >>From extra escape for reversibility 2
+ from lower case not escaped
+ Fromm bad speling not escaped
+ From with leading space not escaped
+
+ F
+ From
+ From$sp
+ From $sp
+ From $sp
+ INPUT_END
+
+ cat >expect <<-INPUT_END &&
+ >From could trip up a loose mbox parser
+ >>From extra escape for reversibility
+ >>>From extra escape for reversibility 2
+ from lower case not escaped
+ Fromm bad speling not escaped
+ From with leading space not escaped
+
+ F
+ From
+ From
+ From
+ From
+ INPUT_END
+
+ C=$(git commit-tree HEAD^^{tree} -p HEAD <msg) &&
+ git format-patch --pretty=mboxrd --stdout -1 $C~1..$C >patch &&
+ git grep -h --no-index -A11 \
+ "^>From could trip up a loose mbox parser" patch >actual &&
+ test_cmp expect actual
+'
+
test_done
bibtex
cpp
csharp
+ css
fortran
fountain
html
--- /dev/null
+RIGHT label.control-label
+{
+ margin-top: 10px!important;
+ border : 10px ChangeMe #C6C6C6;
+}
--- /dev/null
+RIGHT h1 {
+color:
+ChangeMe;
+}
--- /dev/null
+RIGHT a:hover {
+ margin-top:
+ 10px!important;
+ border : 10px ChangeMe #C6C6C6;
+}
--- /dev/null
+RIGHT label.control-label {
+ margin-top: 10px!important;
+ border : 10px ChangeMe #C6C6C6;
+}
--- /dev/null
+p.header,
+label.control-label,
+div ul#RIGHT {
+ margin-top: 10px!important;
+ border : 10px ChangeMe #C6C6C6;
+}
--- /dev/null
+RIGHT, label.control-label {
+margin-top: 10px!important;
+padding: 0;
+border : 10px ChangeMe #C6C6C6;
+}
--- /dev/null
+label.control, div ul#RIGHT {
+ margin-top: 10px!important;
+ border : 10px ChangeMe #C6C6C6;
+}
--- /dev/null
+RIGHT label.control-label {
+ margin:10px;
+ padding:10px;
+ border : 10px ChangeMe #C6C6C6;
+}
color "nobold nodim noul noblink noreverse" "[22;24;25;27m"
'
+test_expect_success '"no-" variant of negation' '
+ color "no-bold no-blink" "[22;25m"
+'
+
test_expect_success 'long color specification' '
color "254 255 bold dim ul blink reverse" "[1;2;4;5;7;38;5;254;48;5;255m"
'
test_expect_success 'absurdly long color specification' '
color \
- "#ffffff #ffffff bold nobold dim nodim ul noul blink noblink reverse noreverse" \
- "[1;2;4;5;7;22;24;25;27;38;2;255;255;255;48;2;255;255;255m"
+ "#ffffff #ffffff bold nobold dim nodim italic noitalic
+ ul noul blink noblink reverse noreverse strike nostrike" \
+ "[1;2;3;4;5;7;9;22;23;24;25;27;29;38;2;255;255;255;48;2;255;255;255m"
'
test_expect_success '0-7 are aliases for basic ANSI color names' '
. ./test-lib.sh
. "$TEST_DIRECTORY"/lib-diff-alternative.sh
+test_expect_success '--ignore-space-at-eol with a single appended character' '
+ printf "a\nb\nc\n" >pre &&
+ printf "a\nbX\nc\n" >post &&
+ test_must_fail git diff --no-index \
+ --patience --ignore-space-at-eol pre post >diff &&
+ grep "^+.*X" diff
+'
+
test_diff_frobnitz "patience"
test_diff_unique "patience"
test_language_driver bibtex
test_language_driver cpp
test_language_driver csharp
+test_language_driver css
test_language_driver fortran
test_language_driver html
test_language_driver java
--- /dev/null
+<BOLD>diff --git a/pre b/post<RESET>
+<BOLD>index b8ae0bb..fe500b7 100644<RESET>
+<BOLD>--- a/pre<RESET>
+<BOLD>+++ b/post<RESET>
+<CYAN>@@ -1,10 +1,10 @@<RESET>
+.<RED>class-form<RESET><GREEN>other-form<RESET> label.control-label {
+ margin-top: <RED>10<RESET><GREEN>15<RESET>px!important;
+ border : 10px <RED>dashed<RESET><GREEN>dotted<RESET> #C6C6C6;
+}<RESET>
+<RED>#CCCCCC<RESET><GREEN>#CCCCCB<RESET>
+10em<RESET>
+<RED>padding-bottom<RESET><GREEN>margin-left<RESET>
+150<RED>px<RESET><GREEN>em<RESET>
+10px
+<RED>!important<RESET>
+<RED>div<RESET><GREEN>li<RESET>.class#id
--- /dev/null
+.other-form label.control-label {
+ margin-top: 15px!important;
+ border : 10px dotted #C6C6C6;
+}
+#CCCCCB
+10em
+margin-left
+150em
+10px
+li.class#id
--- /dev/null
+.class-form label.control-label {
+ margin-top: 10px!important;
+ border : 10px dashed #C6C6C6;
+}
+#CCCCCC
+10em
+padding-bottom
+150px
+10px!important
+div.class#id
test_description='diff function context'
. ./test-lib.sh
-. "$TEST_DIRECTORY"/diff-lib.sh
+dir="$TEST_DIRECTORY/t4051"
-cat <<\EOF >hello.c
-#include <stdio.h>
-
-static int a(void)
-{
- /*
- * Dummy.
- */
+commit_and_tag () {
+ tag=$1 &&
+ shift &&
+ git add "$@" &&
+ test_tick &&
+ git commit -m "$tag" &&
+ git tag "$tag"
}
-static int hello_world(void)
-{
- /* Classic. */
- printf("Hello world.\n");
-
- /* Success! */
- return 0;
+first_context_line () {
+ awk '
+ found {print; exit}
+ /^@@/ {found = 1}
+ '
}
-static int b(void)
-{
- /*
- * Dummy, too.
- */
+
+last_context_line () {
+ sed -ne \$p
}
-int main(int argc, char **argv)
-{
- a();
- b();
- return hello_world();
+check_diff () {
+ name=$1
+ desc=$2
+ options="-W $3"
+
+ test_expect_success "$desc" '
+ git diff $options "$name^" "$name" >"$name.diff"
+ '
+
+ test_expect_success ' diff applies' '
+ test_when_finished "git reset --hard" &&
+ git checkout --detach "$name^" &&
+ git apply --index "$name.diff" &&
+ git diff --exit-code "$name"
+ '
}
-EOF
test_expect_success 'setup' '
- git add hello.c &&
- test_tick &&
- git commit -m initial &&
-
- grep -v Classic <hello.c >hello.c.new &&
- mv hello.c.new hello.c
-'
-
-cat <<\EOF >expected
-diff --git a/hello.c b/hello.c
---- a/hello.c
-+++ b/hello.c
-@@ -10,8 +10,7 @@ static int a(void)
- static int hello_world(void)
- {
-- /* Classic. */
- printf("Hello world.\n");
-
- /* Success! */
- return 0;
- }
-EOF
-
-test_expect_success 'diff -U0 -W' '
- git diff -U0 -W >actual &&
- compare_diff_patch actual expected
-'
-
-cat <<\EOF >expected
-diff --git a/hello.c b/hello.c
---- a/hello.c
-+++ b/hello.c
-@@ -9,9 +9,8 @@ static int a(void)
-
- static int hello_world(void)
- {
-- /* Classic. */
- printf("Hello world.\n");
-
- /* Success! */
- return 0;
- }
-EOF
-
-test_expect_success 'diff -W' '
- git diff -W >actual &&
- compare_diff_patch actual expected
+ cat "$dir/includes.c" "$dir/dummy.c" "$dir/dummy.c" "$dir/hello.c" \
+ "$dir/dummy.c" "$dir/dummy.c" >file.c &&
+ commit_and_tag initial file.c &&
+
+ grep -v "delete me from hello" <file.c >file.c.new &&
+ mv file.c.new file.c &&
+ commit_and_tag changed_hello file.c &&
+
+ grep -v "delete me from includes" <file.c >file.c.new &&
+ mv file.c.new file.c &&
+ commit_and_tag changed_includes file.c &&
+
+ cat "$dir/appended1.c" >>file.c &&
+ commit_and_tag appended file.c &&
+
+ cat "$dir/appended2.c" >>file.c &&
+ commit_and_tag extended file.c &&
+
+ grep -v "Begin of second part" <file.c >file.c.new &&
+ mv file.c.new file.c &&
+ commit_and_tag long_common_tail file.c &&
+
+ git checkout initial &&
+ grep -v "delete me from hello" <file.c >file.c.new &&
+ mv file.c.new file.c &&
+ cat "$dir/appended1.c" >>file.c &&
+ commit_and_tag changed_hello_appended file.c
+'
+
+check_diff changed_hello 'changed function'
+
+test_expect_success ' context includes begin' '
+ grep "^ .*Begin of hello" changed_hello.diff
+'
+
+test_expect_success ' context includes end' '
+ grep "^ .*End of hello" changed_hello.diff
+'
+
+test_expect_success ' context does not include other functions' '
+ test $(grep -c "^[ +-].*Begin" changed_hello.diff) -le 1
+'
+
+test_expect_success ' context does not include preceding empty lines' '
+ test "$(first_context_line <changed_hello.diff)" != " "
+'
+
+test_expect_success ' context does not include trailing empty lines' '
+ test "$(last_context_line <changed_hello.diff)" != " "
+'
+
+check_diff changed_includes 'changed includes'
+
+test_expect_success ' context includes begin' '
+ grep "^ .*Begin.h" changed_includes.diff
+'
+
+test_expect_success ' context includes end' '
+ grep "^ .*End.h" changed_includes.diff
+'
+
+test_expect_success ' context does not include other functions' '
+ test $(grep -c "^[ +-].*Begin" changed_includes.diff) -le 1
+'
+
+test_expect_success ' context does not include trailing empty lines' '
+ test "$(last_context_line <changed_includes.diff)" != " "
+'
+
+check_diff appended 'appended function'
+
+test_expect_success ' context includes begin' '
+ grep "^[+].*Begin of first part" appended.diff
+'
+
+test_expect_success ' context includes end' '
+ grep "^[+].*End of first part" appended.diff
+'
+
+test_expect_success ' context does not include other functions' '
+ test $(grep -c "^[ +-].*Begin" appended.diff) -le 1
+'
+
+check_diff extended 'appended function part'
+
+test_expect_success ' context includes begin' '
+ grep "^ .*Begin of first part" extended.diff
+'
+
+test_expect_success ' context includes end' '
+ grep "^[+].*End of second part" extended.diff
+'
+
+test_expect_success ' context does not include other functions' '
+ test $(grep -c "^[ +-].*Begin" extended.diff) -le 2
+'
+
+test_expect_success ' context does not include preceding empty lines' '
+ test "$(first_context_line <extended.diff)" != " "
+'
+
+check_diff long_common_tail 'change with long common tail and no context' -U0
+
+test_expect_success ' context includes begin' '
+ grep "^ .*Begin of first part" long_common_tail.diff
+'
+
+test_expect_success ' context includes end' '
+ grep "^ .*End of second part" long_common_tail.diff
+'
+
+test_expect_success ' context does not include other functions' '
+ test $(grep -c "^[ +-].*Begin" long_common_tail.diff) -le 2
+'
+
+test_expect_success ' context does not include preceding empty lines' '
+	test "$(first_context_line <long_common_tail.diff)" != " "
+'
+
+check_diff changed_hello_appended 'changed function plus appended function'
+
+test_expect_success ' context includes begin' '
+ grep "^ .*Begin of hello" changed_hello_appended.diff &&
+ grep "^[+].*Begin of first part" changed_hello_appended.diff
+'
+
+test_expect_success ' context includes end' '
+ grep "^ .*End of hello" changed_hello_appended.diff &&
+ grep "^[+].*End of first part" changed_hello_appended.diff
+'
+
+test_expect_success ' context does not include other functions' '
+ test $(grep -c "^[ +-].*Begin" changed_hello_appended.diff) -le 2
'
test_done
--- /dev/null
+
+int appended(void) // Begin of first part
+{
+ int i;
+ char *s = "a string";
+
+ printf("%s\n", s);
+
+ for (i = 99;
+ i >= 0;
+ i--) {
+ printf("%d bottles of beer on the wall\n", i);
+ }
+
+ printf("End of first part\n");
--- /dev/null
+ printf("Begin of second part\n");
+
+ /*
+ * Lorem ipsum dolor sit amet, consectetuer sadipscing elitr,
+ * sed diam nonumy eirmod tempor invidunt ut labore et dolore
+ * magna aliquyam erat, sed diam voluptua. At vero eos et
+ * accusam et justo duo dolores et ea rebum. Stet clita kasd
+ * gubergren, no sea takimata sanctus est Lorem ipsum dolor
+ * sit amet.
+ *
+ * Lorem ipsum dolor sit amet, consectetuer sadipscing elitr,
+ * sed diam nonumy eirmod tempor invidunt ut labore et dolore
+ * magna aliquyam erat, sed diam voluptua. At vero eos et
+ * accusam et justo duo dolores et ea rebum. Stet clita kasd
+ * gubergren, no sea takimata sanctus est Lorem ipsum dolor
+ * sit amet.
+ *
+ * Lorem ipsum dolor sit amet, consectetuer sadipscing elitr,
+ * sed diam nonumy eirmod tempor invidunt ut labore et dolore
+ * magna aliquyam erat, sed diam voluptua. At vero eos et
+ * accusam et justo duo dolores et ea rebum. Stet clita kasd
+ * gubergren, no sea takimata sanctus est Lorem ipsum dolor
+ * sit amet.
+ *
+ * Lorem ipsum dolor sit amet, consectetuer sadipscing elitr,
+ * sed diam nonumy eirmod tempor invidunt ut labore et dolore
+ * magna aliquyam erat, sed diam voluptua. At vero eos et
+ * accusam et justo duo dolores et ea rebum. Stet clita kasd
+ * gubergren, no sea takimata sanctus est Lorem ipsum dolor
+ * sit amet.
+ *
+ */
+
+ return 0;
+} // End of second part
--- /dev/null
+
+static int dummy(void) // Begin of dummy
+{
+ int rc = 0;
+
+ return rc;
+} // End of dummy
--- /dev/null
+
+static void hello(void) // Begin of hello
+{
+ /*
+ * Classic.
+ */
+ putchar('H');
+ putchar('e');
+ putchar('l');
+ putchar('l');
+ putchar('o');
+ putchar(' ');
+ /* delete me from hello */
+ putchar('w');
+ putchar('o');
+ putchar('r');
+ putchar('l');
+ putchar('d');
+ putchar('.');
+ putchar('\n');
+} // End of hello
--- /dev/null
+#include <Begin.h>
+#include <unistd.h>
+#include <stdio.h>
+#include <sys/stat.h>
+#include <fcntl.h>
+#include <stddef.h>
+#include <stdlib.h>
+#include <stdarg.h>
+/* delete me from includes */
+#include <string.h>
+#include <sys/types.h>
+#include <dirent.h>
+#include <sys/time.h>
+#include <time.h>
+#include <signal.h>
+#include <assert.h>
+#include <regex.h>
+#include <utime.h>
+#include <syslog.h>
+#include <End.h>
test_description='test diff with a bogus tree containing the null sha1'
. ./test-lib.sh
-empty_tree=4b825dc642cb6eb9a060e54bf8d69288fbee4904
-
test_expect_success 'create bogus tree' '
bogus_tree=$(
printf "100644 fooQQQQQQQQQQQQQQQQQQQQQ" |
test_expect_success 'raw diff shows null sha1 (addition)' '
echo ":000000 100644 $_z40 $_z40 A foo" >expect &&
- git diff-tree $empty_tree $bogus_tree >actual &&
+ git diff-tree $EMPTY_TREE $bogus_tree >actual &&
test_cmp expect actual
'
test_expect_success 'raw diff shows null sha1 (removal)' '
echo ":100644 000000 $_z40 $_z40 D foo" >expect &&
- git diff-tree $bogus_tree $empty_tree >actual &&
+ git diff-tree $bogus_tree $EMPTY_TREE >actual &&
test_cmp expect actual
'
'
test_expect_success 'patch fails due to bogus sha1 (addition)' '
- test_must_fail git diff-tree -p $empty_tree $bogus_tree
+ test_must_fail git diff-tree -p $EMPTY_TREE $bogus_tree
'
test_expect_success 'patch fails due to bogus sha1 (removal)' '
- test_must_fail git diff-tree -p $bogus_tree $empty_tree
+ test_must_fail git diff-tree -p $bogus_tree $EMPTY_TREE
'
test_expect_success 'patch fails due to bogus sha1 (modification)' '
}
test_expect_success 'setup' '
- create_file file1 "File1 contents" &&
- create_file file2 "File2 contents" &&
- create_file file3 "File3 contents" &&
+ # Ensure that file sizes are different, because on Windows
+ # lstat() does not discover inode numbers, and we need
+ # other properties to discover swapped files
+ # (mtime is not always different, either).
+ create_file file1 "some content" &&
+ create_file file2 "some other content" &&
+ create_file file3 "again something else" &&
git add file1 file2 file3 &&
git commit -m 1
'
test_cmp expect actual
'
+test_expect_success 'am --patch-format=mboxrd handles mboxrd' '
+ rm -fr .git/rebase-apply &&
+ git checkout -f first &&
+ echo mboxrd >>file &&
+ git add file &&
+ cat >msg <<-\INPUT_END &&
+ mboxrd should escape the body
+
+ From could trip up a loose mbox parser
+ >From extra escape for reversibility
+ INPUT_END
+ git commit -F msg &&
+ git format-patch --pretty=mboxrd --stdout -1 >mboxrd1 &&
+ grep "^>From could trip up a loose mbox parser" mboxrd1 &&
+ git checkout -f first &&
+ git am --patch-format=mboxrd mboxrd1 &&
+ git cat-file commit HEAD | tail -n4 >out &&
+ test_cmp msg out
+'
+
test_done
# Applying side1 will be quiet.
test_must_fail git am --quiet side[123].eml >out &&
test_path_is_dir .git/rebase-apply &&
- ! test_i18ngrep "^Applying: " out &&
+ test_i18ngrep ! "^Applying: " out &&
echo side1 >file &&
git add file &&
git shortlog --exclude=refs/heads/m* --all
'
+test_expect_success 'shortlog with --output=<file>' '
+ git shortlog --output=shortlog -1 master >output &&
+ test ! -s output &&
+ test_line_count = 3 shortlog
+'
+
test_done
test_cmp expect actual
'
+test_expect_success 'log with grep.patternType configuration' '
+ >expect &&
+ git -c grep.patterntype=fixed \
+ log -1 --pretty=tformat:%s --grep=s.c.nd >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'log with grep.patternType configuration and command line' '
+ echo second >expect &&
+ git -c grep.patterntype=fixed \
+ log -1 --pretty=tformat:%s --basic-regexp --grep=s.c.nd >actual &&
+ test_cmp expect actual
+'
+
cat > expect <<EOF
* Second
* sixth
test_cmp expect actual
'
-test_expect_success GPG 'log --graph --show-signature' '
+test_expect_success GPG 'setup signed branch' '
test_when_finished "git reset --hard && git checkout master" &&
git checkout -b signed master &&
echo foo >foo &&
git add foo &&
- git commit -S -m signed_commit &&
+ git commit -S -m signed_commit
+'
+
+test_expect_success GPG 'log --graph --show-signature' '
git log --graph --show-signature -n1 signed >actual &&
grep "^| gpg: Signature made" actual &&
grep "^| gpg: Good signature" actual
grep "^| | gpg: Good signature" actual
'
+test_expect_success GPG '--no-show-signature overrides --show-signature' '
+ git log -1 --show-signature --no-show-signature signed >actual &&
+ ! grep "^gpg:" actual
+'
+
+test_expect_success GPG 'log.showsignature=true behaves like --show-signature' '
+ test_config log.showsignature true &&
+ git log -1 signed >actual &&
+ grep "gpg: Signature made" actual &&
+ grep "gpg: Good signature" actual
+'
+
+test_expect_success GPG '--no-show-signature overrides log.showsignature=true' '
+ test_config log.showsignature true &&
+ git log -1 --no-show-signature signed >actual &&
+ ! grep "^gpg:" actual
+'
+
+test_expect_success GPG '--show-signature overrides log.showsignature=false' '
+ test_config log.showsignature false &&
+ git log -1 --show-signature signed >actual &&
+ grep "gpg: Signature made" actual &&
+ grep "gpg: Good signature" actual
+'
+
test_expect_success 'log --graph --no-walk is forbidden' '
test_must_fail git log --graph --no-walk
'
test_expect_success 'left alignment formatting' '
git log --pretty="tformat:%<(40)%s" >actual &&
- qz_to_tab_space <<EOF >expected &&
-message two Z
-message one Z
-add bar Z
-$(commit_msg) Z
-EOF
+ qz_to_tab_space <<-EOF >expected &&
+ message two Z
+ message one Z
+ add bar Z
+ $(commit_msg) Z
+ EOF
test_cmp expected actual
'
test_expect_success 'left alignment formatting. i18n.logOutputEncoding' '
git -c i18n.logOutputEncoding=$test_encoding log --pretty="tformat:%<(40)%s" >actual &&
- qz_to_tab_space <<EOF | iconv -f utf-8 -t $test_encoding >expected &&
-message two Z
-message one Z
-add bar Z
-$(commit_msg) Z
-EOF
+ qz_to_tab_space <<-EOF | iconv -f utf-8 -t $test_encoding >expected &&
+ message two Z
+ message one Z
+ add bar Z
+ $(commit_msg) Z
+ EOF
test_cmp expected actual
'
test_expect_success 'left alignment formatting at the nth column' '
git log --pretty="tformat:%h %<|(40)%s" >actual &&
- qz_to_tab_space <<EOF >expected &&
-$head1 message two Z
-$head2 message one Z
-$head3 add bar Z
-$head4 $(commit_msg) Z
-EOF
+ qz_to_tab_space <<-EOF >expected &&
+ $head1 message two Z
+ $head2 message one Z
+ $head3 add bar Z
+ $head4 $(commit_msg) Z
+ EOF
+ test_cmp expected actual
+'
+
+test_expect_success 'left alignment formatting at the nth column' '
+ COLUMNS=50 git log --pretty="tformat:%h %<|(-10)%s" >actual &&
+ qz_to_tab_space <<-EOF >expected &&
+ $head1 message two Z
+ $head2 message one Z
+ $head3 add bar Z
+ $head4 $(commit_msg) Z
+ EOF
test_cmp expected actual
'
test_expect_success 'left alignment formatting at the nth column. i18n.logOutputEncoding' '
git -c i18n.logOutputEncoding=$test_encoding log --pretty="tformat:%h %<|(40)%s" >actual &&
- qz_to_tab_space <<EOF | iconv -f utf-8 -t $test_encoding >expected &&
-$head1 message two Z
-$head2 message one Z
-$head3 add bar Z
-$head4 $(commit_msg) Z
-EOF
+ qz_to_tab_space <<-EOF | iconv -f utf-8 -t $test_encoding >expected &&
+ $head1 message two Z
+ $head2 message one Z
+ $head3 add bar Z
+ $head4 $(commit_msg) Z
+ EOF
test_cmp expected actual
'
test_expect_success 'left alignment formatting with no padding' '
git log --pretty="tformat:%<(1)%s" >actual &&
- cat <<EOF >expected &&
-message two
-message one
-add bar
-$(commit_msg)
-EOF
+ cat <<-EOF >expected &&
+ message two
+ message one
+ add bar
+ $(commit_msg)
+ EOF
test_cmp expected actual
'
test_expect_success 'left alignment formatting with no padding. i18n.logOutputEncoding' '
git -c i18n.logOutputEncoding=$test_encoding log --pretty="tformat:%<(1)%s" >actual &&
- cat <<EOF | iconv -f utf-8 -t $test_encoding >expected &&
-message two
-message one
-add bar
-$(commit_msg)
-EOF
+ cat <<-EOF | iconv -f utf-8 -t $test_encoding >expected &&
+ message two
+ message one
+ add bar
+ $(commit_msg)
+ EOF
test_cmp expected actual
'
test_expect_success 'left alignment formatting with trunc' '
git log --pretty="tformat:%<(10,trunc)%s" >actual &&
- qz_to_tab_space <<EOF >expected &&
-message ..
-message ..
-add bar Z
-initial...
-EOF
+ qz_to_tab_space <<-\EOF >expected &&
+ message ..
+ message ..
+ add bar Z
+ initial...
+ EOF
test_cmp expected actual
'
test_expect_success 'left alignment formatting with trunc. i18n.logOutputEncoding' '
git -c i18n.logOutputEncoding=$test_encoding log --pretty="tformat:%<(10,trunc)%s" >actual &&
- qz_to_tab_space <<EOF | iconv -f utf-8 -t $test_encoding >expected &&
-message ..
-message ..
-add bar Z
-initial...
-EOF
+ qz_to_tab_space <<-\EOF | iconv -f utf-8 -t $test_encoding >expected &&
+ message ..
+ message ..
+ add bar Z
+ initial...
+ EOF
test_cmp expected actual
'
test_expect_success 'left alignment formatting with ltrunc' '
git log --pretty="tformat:%<(10,ltrunc)%s" >actual &&
- qz_to_tab_space <<EOF >expected &&
-..sage two
-..sage one
-add bar Z
-..${sample_utf8_part}lich
-EOF
+ qz_to_tab_space <<-EOF >expected &&
+ ..sage two
+ ..sage one
+ add bar Z
+ ..${sample_utf8_part}lich
+ EOF
test_cmp expected actual
'
test_expect_success 'left alignment formatting with ltrunc. i18n.logOutputEncoding' '
git -c i18n.logOutputEncoding=$test_encoding log --pretty="tformat:%<(10,ltrunc)%s" >actual &&
- qz_to_tab_space <<EOF | iconv -f utf-8 -t $test_encoding >expected &&
-..sage two
-..sage one
-add bar Z
-..${sample_utf8_part}lich
-EOF
+ qz_to_tab_space <<-EOF | iconv -f utf-8 -t $test_encoding >expected &&
+ ..sage two
+ ..sage one
+ add bar Z
+ ..${sample_utf8_part}lich
+ EOF
test_cmp expected actual
'
test_expect_success 'left alignment formatting with mtrunc' '
git log --pretty="tformat:%<(10,mtrunc)%s" >actual &&
- qz_to_tab_space <<EOF >expected &&
-mess.. two
-mess.. one
-add bar Z
-init..lich
-EOF
+ qz_to_tab_space <<-\EOF >expected &&
+ mess.. two
+ mess.. one
+ add bar Z
+ init..lich
+ EOF
test_cmp expected actual
'
test_expect_success 'left alignment formatting with mtrunc. i18n.logOutputEncoding' '
git -c i18n.logOutputEncoding=$test_encoding log --pretty="tformat:%<(10,mtrunc)%s" >actual &&
- qz_to_tab_space <<EOF | iconv -f utf-8 -t $test_encoding >expected &&
-mess.. two
-mess.. one
-add bar Z
-init..lich
-EOF
+ qz_to_tab_space <<-\EOF | iconv -f utf-8 -t $test_encoding >expected &&
+ mess.. two
+ mess.. one
+ add bar Z
+ init..lich
+ EOF
test_cmp expected actual
'
test_expect_success 'right alignment formatting' '
git log --pretty="tformat:%>(40)%s" >actual &&
- qz_to_tab_space <<EOF >expected &&
-Z message two
-Z message one
-Z add bar
-Z $(commit_msg)
-EOF
+ qz_to_tab_space <<-EOF >expected &&
+ Z message two
+ Z message one
+ Z add bar
+ Z $(commit_msg)
+ EOF
test_cmp expected actual
'
test_expect_success 'right alignment formatting. i18n.logOutputEncoding' '
git -c i18n.logOutputEncoding=$test_encoding log --pretty="tformat:%>(40)%s" >actual &&
- qz_to_tab_space <<EOF | iconv -f utf-8 -t $test_encoding >expected &&
-Z message two
-Z message one
-Z add bar
-Z $(commit_msg)
-EOF
+ qz_to_tab_space <<-EOF | iconv -f utf-8 -t $test_encoding >expected &&
+ Z message two
+ Z message one
+ Z add bar
+ Z $(commit_msg)
+ EOF
test_cmp expected actual
'
test_expect_success 'right alignment formatting at the nth column' '
git log --pretty="tformat:%h %>|(40)%s" >actual &&
- qz_to_tab_space <<EOF >expected &&
-$head1 message two
-$head2 message one
-$head3 add bar
-$head4 $(commit_msg)
-EOF
+ qz_to_tab_space <<-EOF >expected &&
+ $head1 message two
+ $head2 message one
+ $head3 add bar
+ $head4 $(commit_msg)
+ EOF
+ test_cmp expected actual
+'
+
+test_expect_success 'right alignment formatting at the nth column' '
+ COLUMNS=50 git log --pretty="tformat:%h %>|(-10)%s" >actual &&
+ qz_to_tab_space <<-EOF >expected &&
+ $head1 message two
+ $head2 message one
+ $head3 add bar
+ $head4 $(commit_msg)
+ EOF
test_cmp expected actual
'
test_expect_success 'right alignment formatting at the nth column. i18n.logOutputEncoding' '
git -c i18n.logOutputEncoding=$test_encoding log --pretty="tformat:%h %>|(40)%s" >actual &&
- qz_to_tab_space <<EOF | iconv -f utf-8 -t $test_encoding >expected &&
-$head1 message two
-$head2 message one
-$head3 add bar
-$head4 $(commit_msg)
-EOF
+ qz_to_tab_space <<-EOF | iconv -f utf-8 -t $test_encoding >expected &&
+ $head1 message two
+ $head2 message one
+ $head3 add bar
+ $head4 $(commit_msg)
+ EOF
+ test_cmp expected actual
+'
+
+# Note: Space between 'message' and 'two' should be in the same column
+# as in the previous test.
+test_expect_success 'right alignment formatting at the nth column with --graph. i18n.logOutputEncoding' '
+ git -c i18n.logOutputEncoding=$test_encoding log --graph --pretty="tformat:%h %>|(40)%s" >actual &&
+ iconv -f utf-8 -t $test_encoding >expected <<-EOF &&
+ * $head1 message two
+ * $head2 message one
+ * $head3 add bar
+ * $head4 $(commit_msg)
+ EOF
test_cmp expected actual
'
test_expect_success 'right alignment formatting with no padding' '
git log --pretty="tformat:%>(1)%s" >actual &&
- cat <<EOF >expected &&
-message two
-message one
-add bar
-$(commit_msg)
-EOF
+ cat <<-EOF >expected &&
+ message two
+ message one
+ add bar
+ $(commit_msg)
+ EOF
+ test_cmp expected actual
+'
+
+test_expect_success 'right alignment formatting with no padding and with --graph' '
+ git log --graph --pretty="tformat:%>(1)%s" >actual &&
+ cat <<-EOF >expected &&
+ * message two
+ * message one
+ * add bar
+ * $(commit_msg)
+ EOF
test_cmp expected actual
'
test_expect_success 'right alignment formatting with no padding. i18n.logOutputEncoding' '
git -c i18n.logOutputEncoding=$test_encoding log --pretty="tformat:%>(1)%s" >actual &&
- cat <<EOF | iconv -f utf-8 -t $test_encoding >expected &&
-message two
-message one
-add bar
-$(commit_msg)
-EOF
+ cat <<-EOF | iconv -f utf-8 -t $test_encoding >expected &&
+ message two
+ message one
+ add bar
+ $(commit_msg)
+ EOF
test_cmp expected actual
'
test_expect_success 'center alignment formatting' '
git log --pretty="tformat:%><(40)%s" >actual &&
- qz_to_tab_space <<EOF >expected &&
-Z message two Z
-Z message one Z
-Z add bar Z
-Z $(commit_msg) Z
-EOF
+ qz_to_tab_space <<-EOF >expected &&
+ Z message two Z
+ Z message one Z
+ Z add bar Z
+ Z $(commit_msg) Z
+ EOF
test_cmp expected actual
'
test_expect_success 'center alignment formatting. i18n.logOutputEncoding' '
git -c i18n.logOutputEncoding=$test_encoding log --pretty="tformat:%><(40)%s" >actual &&
- qz_to_tab_space <<EOF | iconv -f utf-8 -t $test_encoding >expected &&
-Z message two Z
-Z message one Z
-Z add bar Z
-Z $(commit_msg) Z
-EOF
+ qz_to_tab_space <<-EOF | iconv -f utf-8 -t $test_encoding >expected &&
+ Z message two Z
+ Z message one Z
+ Z add bar Z
+ Z $(commit_msg) Z
+ EOF
test_cmp expected actual
'
test_expect_success 'center alignment formatting at the nth column' '
git log --pretty="tformat:%h %><|(40)%s" >actual &&
- qz_to_tab_space <<EOF >expected &&
-$head1 message two Z
-$head2 message one Z
-$head3 add bar Z
-$head4 $(commit_msg) Z
-EOF
+ qz_to_tab_space <<-EOF >expected &&
+ $head1 message two Z
+ $head2 message one Z
+ $head3 add bar Z
+ $head4 $(commit_msg) Z
+ EOF
+ test_cmp expected actual
+'
+
+test_expect_success 'center alignment formatting at the nth column' '
+ COLUMNS=70 git log --pretty="tformat:%h %><|(-30)%s" >actual &&
+ qz_to_tab_space <<-EOF >expected &&
+ $head1 message two Z
+ $head2 message one Z
+ $head3 add bar Z
+ $head4 $(commit_msg) Z
+ EOF
test_cmp expected actual
'
test_expect_success 'center alignment formatting at the nth column. i18n.logOutputEncoding' '
git -c i18n.logOutputEncoding=$test_encoding log --pretty="tformat:%h %><|(40)%s" >actual &&
- qz_to_tab_space <<EOF | iconv -f utf-8 -t $test_encoding >expected &&
-$head1 message two Z
-$head2 message one Z
-$head3 add bar Z
-$head4 $(commit_msg) Z
-EOF
+ qz_to_tab_space <<-EOF | iconv -f utf-8 -t $test_encoding >expected &&
+ $head1 message two Z
+ $head2 message one Z
+ $head3 add bar Z
+ $head4 $(commit_msg) Z
+ EOF
test_cmp expected actual
'
test_expect_success 'center alignment formatting with no padding' '
git log --pretty="tformat:%><(1)%s" >actual &&
- cat <<EOF >expected &&
-message two
-message one
-add bar
-$(commit_msg)
-EOF
+ cat <<-EOF >expected &&
+ message two
+ message one
+ add bar
+ $(commit_msg)
+ EOF
test_cmp expected actual
'
old_head1=$(git rev-parse --verify HEAD~0)
test_expect_success 'center alignment formatting with no padding. i18n.logOutputEncoding' '
git -c i18n.logOutputEncoding=$test_encoding log --pretty="tformat:%><(1)%s" >actual &&
- cat <<EOF | iconv -f utf-8 -t $test_encoding >expected &&
-message two
-message one
-add bar
-$(commit_msg)
-EOF
+ cat <<-EOF | iconv -f utf-8 -t $test_encoding >expected &&
+ message two
+ message one
+ add bar
+ $(commit_msg)
+ EOF
test_cmp expected actual
'
test_expect_success 'left/right alignment formatting with stealing' '
git commit --amend -m short --author "long long long <long@me.com>" &&
git log --pretty="tformat:%<(10,trunc)%s%>>(10,ltrunc)% an" >actual &&
- cat <<EOF >expected &&
-short long long long
-message .. A U Thor
-add bar A U Thor
-initial... A U Thor
-EOF
+ cat <<-\EOF >expected &&
+ short long long long
+ message .. A U Thor
+ add bar A U Thor
+ initial... A U Thor
+ EOF
test_cmp expected actual
'
test_expect_success 'left/right alignment formatting with stealing. i18n.logOutputEncoding' '
git -c i18n.logOutputEncoding=$test_encoding log --pretty="tformat:%<(10,trunc)%s%>>(10,ltrunc)% an" >actual &&
- cat <<EOF | iconv -f utf-8 -t $test_encoding >expected &&
-short long long long
-message .. A U Thor
-add bar A U Thor
-initial... A U Thor
-EOF
+ cat <<-\EOF | iconv -f utf-8 -t $test_encoding >expected &&
+ short long long long
+ message .. A U Thor
+ add bar A U Thor
+ initial... A U Thor
+ EOF
test_cmp expected actual
'
'
# get new digests (with no abbreviations)
-head1=$(git rev-parse --verify HEAD~0) &&
-head2=$(git rev-parse --verify HEAD~1) &&
+test_expect_success 'set up log decoration tests' '
+ head1=$(git rev-parse --verify HEAD~0) &&
+ head2=$(git rev-parse --verify HEAD~1)
+'
test_expect_success 'log decoration properly follows tag chain' '
git tag -a tag1 -m tag1 &&
git tag -d tag1 &&
git commit --amend -m shorter &&
git log --no-walk --tags --pretty="%H %d" --decorate=full >actual &&
- cat <<EOF >expected &&
-$head1 (tag: refs/tags/tag2)
-$head2 (tag: refs/tags/message-one)
-$old_head1 (tag: refs/tags/message-two)
-EOF
+ cat <<-EOF >expected &&
+ $head1 (tag: refs/tags/tag2)
+ $head2 (tag: refs/tags/message-one)
+ $old_head1 (tag: refs/tags/message-two)
+ EOF
sort actual >actual1 &&
test_cmp expected actual1
'
test_expect_success 'clean log decoration' '
git log --no-walk --tags --pretty="%H %D" --decorate=full >actual &&
- cat >expected <<EOF &&
-$head1 tag: refs/tags/tag2
-$head2 tag: refs/tags/message-one
-$old_head1 tag: refs/tags/message-two
-EOF
+ cat >expected <<-EOF &&
+ $head1 tag: refs/tags/tag2
+ $head2 tag: refs/tags/message-one
+ $old_head1 tag: refs/tags/message-two
+ EOF
sort actual >actual1 &&
test_cmp expected actual1
'
'
cat >expected <<EOF
-${c_commit}COMMIT_ID${c_reset}${c_commit} (${c_reset}${c_HEAD}HEAD${c_reset}${c_commit} ->\
+${c_commit}COMMIT_ID${c_reset}${c_commit} (${c_reset}${c_HEAD}HEAD ->\
${c_reset}${c_branch}master${c_reset}${c_commit},\
${c_reset}${c_tag}tag: v1.0${c_reset}${c_commit},\
${c_reset}${c_tag}tag: B${c_reset}${c_commit})${c_reset} B
test_expect_success '"git log :/a" should be ambiguous (applied both rev and worktree)' '
: >a &&
test_must_fail git log :/a 2>error &&
- grep ambiguous error
+ test_i18ngrep ambiguous error
'
test_expect_success '"git log :/a -- " should not be ambiguous' '
test_expect_success '"git log :" should be ambiguous' '
test_must_fail git log : 2>error &&
- grep ambiguous error
+ test_i18ngrep ambiguous error
'
test_expect_success 'git log -- :' '
git log --first-parent -L 1,1:b.c
'
+test_expect_success '-L with --output' '
+ git checkout parallel-change &&
+ git log --output=log -L :main:b.c >output &&
+ test ! -s output &&
+ test_line_count = 70 log
+'
+
test_done
test_must_fail git archive -v HEAD -- "*.abc" >/dev/null
'
+# Pull the size and date of each entry in a tarfile using the system tar.
+#
+# We'll pull out only the year from the date; that avoids any question of
+# timezones impacting the result (as long as we keep our test times away from a
+# year boundary; our reference times are all in August).
+#
+# The output of tar_info is expected to be "<size> <year>", both in decimal. It
+# ignores the return value of tar. We have to do this because some of our test
+# input is only partial (the real data is 64GB in some cases).
+tar_info () {
+ "$TAR" tvf "$1" |
+ awk '{
+ split($4, date, "-")
+ print $3 " " date[1]
+ }'
+}
+
+# See if our system tar can handle a tar file with huge sizes and dates far in
+# the future, and that we can actually parse its output.
+#
+# The reference file was generated by GNU tar, and the magic time and size are
+# both octal 01000000000001, which overflows normal ustar fields.
+test_lazy_prereq TAR_HUGE '
+ echo "68719476737 4147" >expect &&
+ tar_info "$TEST_DIRECTORY"/t5000/huge-and-future.tar >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success LONG_IS_64BIT 'set up repository with huge blob' '
+ obj_d=19 &&
+ obj_f=f9c8273ec45a8938e6999cb59b3ff66739902a &&
+ obj=${obj_d}${obj_f} &&
+ mkdir -p .git/objects/$obj_d &&
+ cp "$TEST_DIRECTORY"/t5000/$obj .git/objects/$obj_d/$obj_f &&
+ rm -f .git/index &&
+ git update-index --add --cacheinfo 100644,$obj,huge &&
+ git commit -m huge
+'
+
+# We expect git to die with SIGPIPE here (otherwise we
+# would generate the whole 64GB).
+test_expect_success LONG_IS_64BIT 'generate tar with huge size' '
+ {
+ git archive HEAD
+ echo $? >exit-code
+ } | test_copy_bytes 4096 >huge.tar &&
+ echo 141 >expect &&
+ test_cmp expect exit-code
+'
+
+test_expect_success TAR_HUGE,LONG_IS_64BIT 'system tar can read our huge size' '
+ echo 68719476737 >expect &&
+ tar_info huge.tar | cut -d" " -f1 >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success LONG_IS_64BIT 'set up repository with far-future commit' '
+ rm -f .git/index &&
+ echo content >file &&
+ git add file &&
+ GIT_COMMITTER_DATE="@68719476737 +0000" \
+ git commit -m "tempori parendum"
+'
+
+test_expect_success LONG_IS_64BIT 'generate tar with future mtime' '
+ git archive HEAD >future.tar
+'
+
+test_expect_success TAR_HUGE,LONG_IS_64BIT 'system tar can read our future mtime' '
+ echo 4147 >expect &&
+ tar_info future.tar | cut -d" " -f2 >actual &&
+ test_cmp expect actual
+'
+
test_done
test_cmp "$TEST_DIRECTORY"/t5100/quoted-from.expect quoted-from/msg
'
+test_expect_success 'mailinfo unescapes with --mboxrd' '
+ mkdir mboxrd &&
+ git mailsplit -omboxrd --mboxrd \
+ "$TEST_DIRECTORY"/t5100/sample.mboxrd >last &&
+ test x"$(cat last)" = x2 &&
+ for i in 0001 0002
+ do
+ git mailinfo mboxrd/msg mboxrd/patch \
+ <mboxrd/$i >mboxrd/out &&
+ test_cmp "$TEST_DIRECTORY"/t5100/${i}mboxrd mboxrd/msg
+ done &&
+ sp=" " &&
+ echo "From " >expect &&
+ echo "From " >>expect &&
+ echo >> expect &&
+ cat >sp <<-INPUT_END &&
+ From mboxrd Mon Sep 17 00:00:00 2001
+ From: trailing spacer <sp@example.com>
+ Subject: [PATCH] a commit with trailing space
+
+ From$sp
+ >From$sp
+
+ INPUT_END
+
+ git mailsplit -f2 -omboxrd --mboxrd <sp >last &&
+ test x"$(cat last)" = x1 &&
+ git mailinfo mboxrd/msg mboxrd/patch <mboxrd/0003 &&
+ test_cmp expect mboxrd/msg
+'
+
test_done
--- /dev/null
+From the beginning, mbox should have been mboxrd
+>From escaped
+From not mangled but this line should have been escaped
+
--- /dev/null
+ >From unchanged
+ From also unchanged
+no trailing space, no escaping necessary and '>' was intended:
+>From
+
--- /dev/null
+From mboxrd Mon Sep 17 00:00:00 2001
+From: mboxrd writer <mboxrd@example.com>
+Date: Fri, 9 Jun 2006 00:44:16 -0700
+Subject: [PATCH] a commit with escaped From lines
+
+>From the beginning, mbox should have been mboxrd
+>>From escaped
+From not mangled but this line should have been escaped
+
+From mboxrd Mon Sep 17 00:00:00 2001
+From: mboxrd writer <mboxrd@example.com>
+Date: Fri, 9 Jun 2006 00:44:16 -0700
+Subject: [PATCH 2/2] another with fake From lines
+
+ >From unchanged
+ From also unchanged
+no trailing space, no escaping necessary and '>' was intended:
+>From
+
test_cmp expect actual
'
+ test_expect_success "counting commits with limit ($state)" '
+ git rev-list --count -n 1 HEAD >expect &&
+ git rev-list --use-bitmap-index --count -n 1 HEAD >actual &&
+ test_cmp expect actual
+ '
+
test_expect_success "counting non-linear history ($state)" '
git rev-list --count other...master >expect &&
git rev-list --use-bitmap-index --count other...master >actual &&
test_cmp exp act
'
-cat >bogus-commit <<\EOF
-tree 4b825dc642cb6eb9a060e54bf8d69288fbee4904
+cat >bogus-commit <<EOF
+tree $EMPTY_TREE
author Bugs Bunny 1234567890 +0000
committer Bugs Bunny <bugs@bun.ni> 1234567890 +0000
test_extra_arg () {
test_expect_success "extra args: $*" "
test_must_fail git remote $* bogus_extra_arg 2>actual &&
- grep '^usage:' actual
+ test_i18ngrep '^usage:' actual
"
}
git fetch --prune origin 2>&1 | head -n1 >../actual
) &&
echo "From ${D}/." >expect &&
- test_cmp expect actual
+ test_i18ncmp expect actual
'
test_expect_success 'branchname D/F conflict resolved by --prune' '
)
'
+test_expect_success C_LOCALE_OUTPUT 'fetch aligned output' '
+ git clone . full-output &&
+ test_commit looooooooooooong-tag &&
+ (
+ cd full-output &&
+ git -c fetch.output=full fetch origin 2>&1 | \
+ grep -e "->" | cut -c 22- >../actual
+ ) &&
+ cat >expect <<-\EOF &&
+ master -> origin/master
+ looooooooooooong-tag -> looooooooooooong-tag
+ EOF
+ test_cmp expect actual
+'
+
+test_expect_success C_LOCALE_OUTPUT 'fetch compact output' '
+ git clone . compact &&
+ test_commit extraaa &&
+ (
+ cd compact &&
+ git -c fetch.output=compact fetch origin 2>&1 | \
+ grep -e "->" | cut -c 22- >../actual
+ ) &&
+ cat >expect <<-\EOF &&
+ master -> origin/*
+ extraaa -> *
+ EOF
+ test_cmp expect actual
+'
+
test_done
test -n "$(git ls-files -u)" &&
cp file expected &&
test_must_fail git pull . second 2>err &&
- test_i18ngrep "Pull is not possible because you have unmerged files" err &&
+ test_i18ngrep "Pulling is not possible because you have unmerged files." err &&
test_cmp expected file &&
git add file &&
test -z "$(git ls-files -u)" &&
test new = "$(git show HEAD:file2)"
'
+test_expect_success '--rebase with conflicts shows advice' '
+ test_when_finished "git rebase --abort; git checkout -f to-rebase" &&
+ git checkout -b seq &&
+ test_seq 5 >seq.txt &&
+ git add seq.txt &&
+ test_tick &&
+ git commit -m "Add seq.txt" &&
+ echo 6 >>seq.txt &&
+ test_tick &&
+ git commit -m "Append to seq.txt" seq.txt &&
+ git checkout -b with-conflicts HEAD^ &&
+ echo conflicting >>seq.txt &&
+ test_tick &&
+ git commit -m "Create conflict" seq.txt &&
+ test_must_fail git pull --rebase . seq 2>err >out &&
+ test_i18ngrep "When you have resolved this problem" out
+'
+
+test_expect_success 'failed --rebase shows advice' '
+ test_when_finished "git rebase --abort; git checkout -f to-rebase" &&
+ git checkout -b diverging &&
+ test_commit attributes .gitattributes "* text=auto" attrs &&
+ sha1="$(printf "1\\r\\n" | git hash-object -w --stdin)" &&
+ git update-index --cacheinfo 0644 $sha1 file &&
+ git commit -m v1-with-cr &&
+ # force checkout because `git reset --hard` will not leave clean `file`
+ git checkout -f -b fails-to-rebase HEAD^ &&
+ test_commit v2-without-cr file "2" file2-lf &&
+ test_must_fail git pull --rebase . diverging 2>err >out &&
+ test_i18ngrep "When you have resolved this problem" out
+'
+
test_expect_success '--rebase fails with multiple branches' '
git reset --hard before-rebase &&
test_must_fail git pull --rebase . copy master 2>err &&
test new = "$(git show HEAD:file2)"
'
+test_expect_success "pull --rebase warns on --verify-signatures" '
+ git reset --hard before-rebase &&
+ git pull --rebase --verify-signatures . copy 2>err &&
+ test "$(git rev-parse HEAD^)" = "$(git rev-parse copy)" &&
+ test new = "$(git show HEAD:file2)" &&
+ test_i18ngrep "ignoring --verify-signatures for rebase" err
+'
+
+test_expect_success "pull --rebase does not warn on --no-verify-signatures" '
+ git reset --hard before-rebase &&
+ git pull --rebase --no-verify-signatures . copy 2>err &&
+ test "$(git rev-parse HEAD^)" = "$(git rev-parse copy)" &&
+ test new = "$(git show HEAD:file2)" &&
+ test_i18ngrep ! "verify-signatures" err
+'
+
# add a feature branch, keep-merge, that is merged into master, so the
# test can try preserving the merge commit (or not) with various
# --rebase flags/pull.rebase settings.
ensure_fresh_upstream &&
test_terminal git push -u upstream master >out 2>err &&
- grep "Writing objects" err
+ test_i18ngrep "Writing objects" err
'
test_expect_success 'progress messages do not go to non-tty' '
# skip progress messages, since stderr is non-tty
git push -u upstream master >out 2>err &&
- ! grep "Writing objects" err
+ test_i18ngrep ! "Writing objects" err
'
test_expect_success 'progress messages go to non-tty (forced)' '
# force progress messages to stderr, even though it is non-tty
git push -u --progress upstream master >out 2>err &&
- grep "Writing objects" err
+ test_i18ngrep "Writing objects" err
'
test_expect_success TTY 'push -q suppresses progress' '
ensure_fresh_upstream &&
test_terminal git push -u -q upstream master >out 2>err &&
- ! grep "Writing objects" err
+ test_i18ngrep ! "Writing objects" err
'
test_expect_success TTY 'push --no-progress suppresses progress' '
ensure_fresh_upstream &&
test_terminal git push -u --no-progress upstream master >out 2>err &&
- ! grep "Unpacking objects" err &&
- ! grep "Writing objects" err
+ test_i18ngrep ! "Unpacking objects" err &&
+ test_i18ngrep ! "Writing objects" err
'
test_expect_success TTY 'quiet push' '
test_cmp expect actual
'
+test_expect_success 'new branch covered by force-with-lease' '
+ setup_srcdst_basic &&
+ (
+ cd dst &&
+ git branch branch master &&
+ git push --force-with-lease=branch origin branch
+ ) &&
+ git ls-remote dst refs/heads/branch >expect &&
+ git ls-remote src refs/heads/branch >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'new branch covered by force-with-lease (explicit)' '
+ setup_srcdst_basic &&
+ (
+ cd dst &&
+ git branch branch master &&
+ git push --force-with-lease=branch: origin branch
+ ) &&
+ git ls-remote dst refs/heads/branch >expect &&
+ git ls-remote src refs/heads/branch >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'new branch already exists' '
+ setup_srcdst_basic &&
+ (
+ cd src &&
+ git checkout -b branch master &&
+ test_commit F
+ ) &&
+ (
+ cd dst &&
+ git branch branch master &&
+ test_must_fail git push --force-with-lease=branch: origin branch
+ )
+'
+
test_done
cat >expected &&
# We're not interested in the error
# "fatal: The remote end hung up unexpectedly":
- grep -E '^(fatal|warning):' <error | grep -v 'hung up' >actual | sort &&
- test_cmp expected actual
+ test_i18ngrep -E '^(fatal|warning):' <error | grep -v 'hung up' >actual | sort &&
+ test_i18ncmp expected actual
}
test_expect_success 'setup' '
git commit -m dev2 &&
test_must_fail git push origin dev2 2>act &&
sed -e "/^remote: /s/ *$//" <act >cmp &&
- test_cmp exp cmp
+ test_i18ncmp exp cmp
'
rm -f "$HTTPD_DOCUMENT_ROOT_PATH/test_repo.git/hooks/update"
cd "$ROOT_PATH"/test_repo_clone &&
test_commit noisy &&
test_terminal git push >output 2>&1 &&
- grep "^Writing objects" output
+ test_i18ngrep "^Writing objects" output
'
test_expect_success TTY 'push --quiet silences status and progress' '
cd "$ROOT_PATH"/test_repo_clone &&
test_commit no-progress &&
test_terminal git push --no-progress >output 2>&1 &&
- grep "^To http" output &&
- ! grep "^Writing objects"
+ test_i18ngrep "^To http" output &&
+ test_i18ngrep ! "^Writing objects"
'
test_expect_success 'push --progress shows progress to non-tty' '
cd "$ROOT_PATH"/test_repo_clone &&
test_commit progress &&
git push --progress >output 2>&1 &&
- grep "^To http" output &&
- grep "^Writing objects" output
+ test_i18ngrep "^To http" output &&
+ test_i18ngrep "^Writing objects" output
'
test_expect_success 'http push gives sane defaults to reflog' '
test_cmp expect "$HTTPD_DOCUMENT_ROOT_PATH/push-cert-status"
'
+test_expect_success 'push status output scrubs password' '
+ cd "$ROOT_PATH/test_repo_clone" &&
+ git push --porcelain \
+ "$HTTPD_URL_USER_PASS/smart/test_repo.git" \
+ +HEAD:scrub >status &&
+ # should have been scrubbed down to vanilla URL
+ grep "^To $HTTPD_URL/smart/test_repo.git" status
+'
+
stop_httpd
test_done
--- /dev/null
+#!/bin/sh
+
+test_description='test custom script in place of pack-objects'
+. ./test-lib.sh
+
+test_expect_success 'create some history to fetch' '
+ test_commit one &&
+ test_commit two
+'
+
+test_expect_success 'create debugging hook script' '
+ write_script .git/hook <<-\EOF
+ echo >&2 "hook running"
+ echo "$*" >hook.args
+ cat >hook.stdin
+ "$@" <hook.stdin >hook.stdout
+ cat hook.stdout
+ EOF
+'
+
+clear_hook_results () {
+ rm -rf .git/hook.* dst.git
+}
+
+test_expect_success 'hook runs via global config' '
+ clear_hook_results &&
+ test_config_global uploadpack.packObjectsHook ./hook &&
+ git clone --no-local . dst.git 2>stderr &&
+ grep "hook running" stderr
+'
+
+test_expect_success 'hook outputs are sane' '
+ # check that we recorded a usable pack
+ git index-pack --stdin <.git/hook.stdout &&
+
+ # check that we recorded args and stdin. We do not check
+ # the full argument list or the exact pack contents, as it would make
+ # the test brittle. So just sanity check that we could replay
+ # the packing procedure.
+ grep "^git" .git/hook.args &&
+ $(cat .git/hook.args) <.git/hook.stdin >replay
+'
+
+test_expect_success 'hook runs from -c config' '
+ clear_hook_results &&
+ git clone --no-local \
+ -u "git -c uploadpack.packObjectsHook=./hook upload-pack" \
+ . dst.git 2>stderr &&
+ grep "hook running" stderr
+'
+
+test_expect_success 'hook does not run from repo config' '
+ clear_hook_results &&
+ test_config uploadpack.packObjectsHook "./hook" &&
+ git clone --no-local . dst.git 2>stderr &&
+ ! grep "hook running" stderr &&
+ test_path_is_missing .git/hook.args &&
+ test_path_is_missing .git/hook.stdin &&
+ test_path_is_missing .git/hook.stdout
+'
+
+test_done
--- /dev/null
+#!/bin/sh
+
+test_description='pushing to a repository using push options'
+
+. ./test-lib.sh
+
+mk_repo_pair () {
+ rm -rf workbench upstream &&
+ test_create_repo upstream &&
+ test_create_repo workbench &&
+ (
+ cd upstream &&
+ git config receive.denyCurrentBranch warn &&
+ mkdir -p .git/hooks &&
+ cat >.git/hooks/pre-receive <<-'EOF' &&
+ #!/bin/sh
+ if test -n "$GIT_PUSH_OPTION_COUNT"; then
+ i=0
+ >hooks/pre-receive.push_options
+ while test "$i" -lt "$GIT_PUSH_OPTION_COUNT"; do
+ eval "value=\$GIT_PUSH_OPTION_$i"
+ echo $value >>hooks/pre-receive.push_options
+ i=$((i + 1))
+ done
+ fi
+ EOF
+ chmod u+x .git/hooks/pre-receive
+
+ cat >.git/hooks/post-receive <<-'EOF' &&
+ #!/bin/sh
+ if test -n "$GIT_PUSH_OPTION_COUNT"; then
+ i=0
+ >hooks/post-receive.push_options
+ while test "$i" -lt "$GIT_PUSH_OPTION_COUNT"; do
+ eval "value=\$GIT_PUSH_OPTION_$i"
+ echo $value >>hooks/post-receive.push_options
+ i=$((i + 1))
+ done
+ fi
+ EOF
+ chmod u+x .git/hooks/post-receive
+ ) &&
+ (
+ cd workbench &&
+ git remote add up ../upstream
+ )
+}
+
+# Compare the ref ($1) in upstream with a ref value from workbench ($2)
+# e.g. test_refs second HEAD@{2}
+test_refs () {
+ test $# = 2 &&
+ git -C upstream rev-parse --verify "$1" >expect &&
+ git -C workbench rev-parse --verify "$2" >actual &&
+ test_cmp expect actual
+}
+
+test_expect_success 'one push option works for a single branch' '
+ mk_repo_pair &&
+ git -C upstream config receive.advertisePushOptions true &&
+ (
+ cd workbench &&
+ test_commit one &&
+ git push --mirror up &&
+ test_commit two &&
+ git push --push-option=asdf up master
+ ) &&
+ test_refs master master &&
+ echo "asdf" >expect &&
+ test_cmp expect upstream/.git/hooks/pre-receive.push_options &&
+ test_cmp expect upstream/.git/hooks/post-receive.push_options
+'
+
+test_expect_success 'push option denied by remote' '
+ mk_repo_pair &&
+ git -C upstream config receive.advertisePushOptions false &&
+ (
+ cd workbench &&
+ test_commit one &&
+ git push --mirror up &&
+ test_commit two &&
+ test_must_fail git push --push-option=asdf up master
+ ) &&
+ test_refs master HEAD@{1}
+'
+
+test_expect_success 'two push options work' '
+ mk_repo_pair &&
+ git -C upstream config receive.advertisePushOptions true &&
+ (
+ cd workbench &&
+ test_commit one &&
+ git push --mirror up &&
+ test_commit two &&
+ git push --push-option=asdf --push-option="more structured text" up master
+ ) &&
+ test_refs master master &&
+ printf "asdf\nmore structured text\n" >expect &&
+ test_cmp expect upstream/.git/hooks/pre-receive.push_options &&
+ test_cmp expect upstream/.git/hooks/post-receive.push_options
+'
+
+test_done
test_expect_success 'nonshallow clone implies nonshallow submodule' '
test_when_finished "rm -rf super_clone" &&
git clone --recurse-submodules "file://$pwd/." super_clone &&
- (
- cd super_clone &&
- git log --oneline >lines &&
- test_line_count = 3 lines
- ) &&
- (
- cd super_clone/sub &&
- git log --oneline >lines &&
- test_line_count = 3 lines
- )
+ git -C super_clone log --oneline >lines &&
+ test_line_count = 3 lines &&
+ git -C super_clone/sub log --oneline >lines &&
+ test_line_count = 3 lines
+'
+
+test_expect_success 'shallow clone with shallow submodule' '
+ test_when_finished "rm -rf super_clone" &&
+ git clone --recurse-submodules --depth 2 --shallow-submodules "file://$pwd/." super_clone &&
+ git -C super_clone log --oneline >lines &&
+ test_line_count = 2 lines &&
+ git -C super_clone/sub log --oneline >lines &&
+ test_line_count = 1 lines
'
-test_expect_success 'shallow clone implies shallow submodule' '
+test_expect_success 'shallow clone does not imply shallow submodule' '
test_when_finished "rm -rf super_clone" &&
git clone --recurse-submodules --depth 2 "file://$pwd/." super_clone &&
+ git -C super_clone log --oneline >lines &&
+ test_line_count = 2 lines &&
+ git -C super_clone/sub log --oneline >lines &&
+ test_line_count = 3 lines
+'
+
+test_expect_success 'shallow clone with non shallow submodule' '
+ test_when_finished "rm -rf super_clone" &&
+ git clone --recurse-submodules --depth 2 --no-shallow-submodules "file://$pwd/." super_clone &&
+ git -C super_clone log --oneline >lines &&
+ test_line_count = 2 lines &&
+ git -C super_clone/sub log --oneline >lines &&
+ test_line_count = 3 lines
+'
+
+test_expect_success 'non shallow clone with shallow submodule' '
+ test_when_finished "rm -rf super_clone" &&
+ git clone --recurse-submodules --no-local --shallow-submodules "file://$pwd/." super_clone &&
+ git -C super_clone log --oneline >lines &&
+ test_line_count = 3 lines &&
+ git -C super_clone/sub log --oneline >lines &&
+ test_line_count = 1 lines
+'
+
+test_expect_success 'clone follows shallow recommendation' '
+ test_when_finished "rm -rf super_clone" &&
+ git config -f .gitmodules submodule.sub.shallow true &&
+ git add .gitmodules &&
+	git commit -m "recommend shallow for sub" &&
+ git clone --recurse-submodules --no-local "file://$pwd/." super_clone &&
(
cd super_clone &&
git log --oneline >lines &&
- test_line_count = 2 lines
+ test_line_count = 4 lines
) &&
(
cd super_clone/sub &&
)
'
-test_expect_success 'shallow clone with non shallow submodule' '
+test_expect_success 'get unshallow recommended shallow submodule' '
test_when_finished "rm -rf super_clone" &&
- git clone --recurse-submodules --depth 2 --no-shallow-submodules "file://$pwd/." super_clone &&
+ git clone --no-local "file://$pwd/." super_clone &&
(
cd super_clone &&
+ git submodule update --init --no-recommend-shallow &&
git log --oneline >lines &&
- test_line_count = 2 lines
+ test_line_count = 4 lines
) &&
(
cd super_clone/sub &&
)
'
-test_expect_success 'non shallow clone with shallow submodule' '
+test_expect_success 'clone follows non shallow recommendation' '
test_when_finished "rm -rf super_clone" &&
- git clone --recurse-submodules --no-local --shallow-submodules "file://$pwd/." super_clone &&
+ git config -f .gitmodules submodule.sub.shallow false &&
+ git add .gitmodules &&
+	git commit -m "recommend non shallow for sub" &&
+ git clone --recurse-submodules --no-local "file://$pwd/." super_clone &&
(
cd super_clone &&
git log --oneline >lines &&
- test_line_count = 3 lines
+ test_line_count = 5 lines
) &&
(
cd super_clone/sub &&
git log --oneline >lines &&
- test_line_count = 1 lines
+ test_line_count = 3 lines
)
'
\e[1;31;43mfoo\e[m
EOF
-test_expect_success '%C(auto) does not enable color by default' '
+test_expect_success '%C(auto,...) does not enable color by default' '
git log --format=$AUTO_COLOR -1 >actual &&
has_no_color actual
'
-test_expect_success '%C(auto) enables colors for color.diff' '
+test_expect_success '%C(auto,...) enables colors for color.diff' '
git -c color.diff=always log --format=$AUTO_COLOR -1 >actual &&
has_color actual
'
-test_expect_success '%C(auto) enables colors for color.ui' '
+test_expect_success '%C(auto,...) enables colors for color.ui' '
git -c color.ui=always log --format=$AUTO_COLOR -1 >actual &&
has_color actual
'
-test_expect_success '%C(auto) respects --color' '
+test_expect_success '%C(auto,...) respects --color' '
git log --format=$AUTO_COLOR -1 --color >actual &&
has_color actual
'
-test_expect_success '%C(auto) respects --no-color' '
+test_expect_success '%C(auto,...) respects --no-color' '
git -c color.ui=always log --format=$AUTO_COLOR -1 --no-color >actual &&
has_no_color actual
'
-test_expect_success TTY '%C(auto) respects --color=auto (stdout is tty)' '
+test_expect_success TTY '%C(auto,...) respects --color=auto (stdout is tty)' '
test_terminal env TERM=vt100 \
git log --format=$AUTO_COLOR -1 --color=auto >actual &&
has_color actual
'
-test_expect_success '%C(auto) respects --color=auto (stdout not tty)' '
+test_expect_success '%C(auto,...) respects --color=auto (stdout not tty)' '
(
TERM=vt100 && export TERM &&
git log --format=$AUTO_COLOR -1 --color=auto >actual &&
)
'
+test_expect_success '%C(auto) respects --color' '
+ git log --color --format="%C(auto)%H" -1 >actual &&
+ printf "\\033[33m%s\\033[m\\n" $(git rev-parse HEAD) >expect &&
+ test_cmp expect actual
+'
+
+test_expect_success '%C(auto) respects --no-color' '
+ git log --no-color --format="%C(auto)%H" -1 >actual &&
+ git rev-parse HEAD >expect &&
+ test_cmp expect actual
+'
+
iconv -f utf-8 -t $test_encoding > commit-msg <<EOF
Test printing of complex bodies
# "git rev-list<ENTER>" is likely to be a bug in the calling script and may
-# deserve an error message, but do cases where set of refs programatically
+# deserve an error message, but do cases where set of refs programmatically
# given using globbing and/or --stdin need to fail with the same error, or
# are we better off reporting a success with no output? The following few
# tests document the current behaviour to remind us that we might want to
)
'
+test_expect_success 'custom merge does not lock index' '
+ git reset --hard anchor &&
+ write_script sleep-one-second.sh <<-\EOF &&
+ sleep 1 &
+ EOF
+
+ test_write_lines >.gitattributes \
+ "* merge=ours" "text merge=sleep-one-second" &&
+ test_config merge.ours.driver true &&
+ test_config merge.sleep-one-second.driver ./sleep-one-second.sh &&
+ git merge master
+'
+
test_done
test_expect_success 'bisect errors out if bad and good are mistaken' '
git bisect reset &&
test_must_fail git bisect start $HASH2 $HASH4 2> rev_list_error &&
- grep "mistook good and bad" rev_list_error &&
+ test_i18ngrep "mistook good and bad" rev_list_error &&
git bisect reset
'
test_expect_success 'good merge base when good and bad are siblings' '
git bisect start "$HASH7" "$SIDE_HASH7" > my_bisect_log.txt &&
- grep "merge base must be tested" my_bisect_log.txt &&
+ test_i18ngrep "merge base must be tested" my_bisect_log.txt &&
grep $HASH4 my_bisect_log.txt &&
git bisect good > my_bisect_log.txt &&
test_must_fail grep "merge base must be tested" my_bisect_log.txt &&
'
test_expect_success 'skipped merge base when good and bad are siblings' '
git bisect start "$SIDE_HASH7" "$HASH7" > my_bisect_log.txt &&
- grep "merge base must be tested" my_bisect_log.txt &&
+ test_i18ngrep "merge base must be tested" my_bisect_log.txt &&
grep $HASH4 my_bisect_log.txt &&
git bisect skip > my_bisect_log.txt 2>&1 &&
grep "warning" my_bisect_log.txt &&
test_expect_success 'bad merge base when good and bad are siblings' '
git bisect start "$HASH7" HEAD > my_bisect_log.txt &&
- grep "merge base must be tested" my_bisect_log.txt &&
+ test_i18ngrep "merge base must be tested" my_bisect_log.txt &&
grep $HASH4 my_bisect_log.txt &&
test_must_fail git bisect bad > my_bisect_log.txt 2>&1 &&
- grep "merge base $HASH4 is bad" my_bisect_log.txt &&
- grep "fixed between $HASH4 and \[$SIDE_HASH7\]" my_bisect_log.txt &&
+ test_i18ngrep "merge base $HASH4 is bad" my_bisect_log.txt &&
+ test_i18ngrep "fixed between $HASH4 and \[$SIDE_HASH7\]" my_bisect_log.txt &&
git bisect reset
'
test_expect_success 'good merge bases when good and bad are siblings' '
git bisect start "$B_HASH" "$A_HASH" > my_bisect_log.txt &&
- grep "merge base must be tested" my_bisect_log.txt &&
+ test_i18ngrep "merge base must be tested" my_bisect_log.txt &&
git bisect good > my_bisect_log2.txt &&
- grep "merge base must be tested" my_bisect_log2.txt &&
+ test_i18ngrep "merge base must be tested" my_bisect_log2.txt &&
{
{
grep "$SIDE_HASH5" my_bisect_log.txt &&
test_expect_success 'optimized merge base checks' '
git bisect start "$HASH7" "$SIDE_HASH7" > my_bisect_log.txt &&
- grep "merge base must be tested" my_bisect_log.txt &&
+ test_i18ngrep "merge base must be tested" my_bisect_log.txt &&
grep "$HASH4" my_bisect_log.txt &&
git bisect good > my_bisect_log2.txt &&
test -f ".git/BISECT_ANCESTORS_OK" &&
test "$HASH6" = $(git rev-parse --verify HEAD) &&
git bisect bad > my_bisect_log3.txt &&
git bisect good "$A_HASH" > my_bisect_log4.txt &&
- grep "merge base must be tested" my_bisect_log4.txt &&
+ test_i18ngrep "merge base must be tested" my_bisect_log4.txt &&
test_must_fail test -f ".git/BISECT_ANCESTORS_OK"
'
test_expect_success 'erroring out when using bad path parameters' '
test_must_fail git bisect start $PARA_HASH7 $HASH1 -- foobar 2> error.txt &&
- grep "bad path parameters" error.txt
+ test_i18ngrep "bad path parameters" error.txt
'
test_expect_success 'test bisection on bare repo - --no-checkout specified' '
# first bad commit: [32a594a3fdac2d57cf6d02987e30eec68511498c] Add <4: Ciao for now> into <hello>.
EOF
-test_expect_success 'bisect log: successfull result' '
+test_expect_success 'bisect log: successful result' '
git bisect reset &&
git bisect start $HASH4 $HASH2 &&
git bisect good &&
test_must_fail git bisect terms 1 2 &&
test_must_fail git bisect terms 2>actual &&
echo "no terms defined" >expected &&
- test_cmp expected actual
+ test_i18ncmp expected actual
'
test_expect_success 'bisect terms shows good/bad after start' '
Your current terms are two for the old state
and one for the new state.
EOF
- test_cmp expected actual &&
+ test_i18ncmp expected actual &&
git bisect terms --term-bad >actual &&
echo one >expected &&
test_cmp expected actual &&
test_have_prereq SED_STRIPS_CR && SED_OPTIONS=-b
+compare_files () {
+ tr '\015\000' QN <"$1" >"$1".expect &&
+ tr '\015\000' QN <"$2" >"$2".actual &&
+ test_cmp "$1".expect "$2".actual &&
+ rm "$1".expect "$2".actual
+}
+
test_expect_success setup '
git config core.autocrlf false &&
git branch side &&
echo "* text=auto" >.gitattributes &&
- touch file &&
+ echo first line >file &&
git add .gitattributes file &&
test_tick &&
git commit -m "normalize file" &&
rm -f .gitattributes &&
git reset --hard a &&
git merge b &&
- test_cmp expected file
+ compare_files expected file
'
-test_expect_success 'Merge addition of text=auto' '
+test_expect_success 'Merge addition of text=auto eol=LF' '
+ git config core.eol lf &&
cat <<-\EOF >expected &&
first line
same line
EOF
- if test_have_prereq NATIVE_CRLF; then
- append_cr <expected >expected.temp &&
- mv expected.temp expected
- fi &&
git config merge.renormalize true &&
git rm -fr . &&
rm -f .gitattributes &&
git reset --hard b &&
git merge a &&
- test_cmp expected file
+ compare_files expected file
+'
+
+test_expect_success 'Merge addition of text=auto eol=CRLF' '
+ git config core.eol crlf &&
+ cat <<-\EOF >expected &&
+ first line
+ same line
+ EOF
+
+ append_cr <expected >expected.temp &&
+ mv expected.temp expected &&
+ git config merge.renormalize true &&
+ git rm -fr . &&
+ rm -f .gitattributes &&
+ git reset --hard b &&
+ echo >&2 "After git reset --hard b" &&
+ git ls-files -s --eol >&2 &&
+ git merge a &&
+ compare_files expected file
'
test_expect_success 'Detect CRLF/LF conflict after setting text=auto' '
+ git config core.eol native &&
echo "<<<<<<<" >expected &&
- if test_have_prereq NATIVE_CRLF; then
- echo first line | append_cr >>expected &&
- echo same line | append_cr >>expected &&
- echo ======= | append_cr >>expected
- else
- echo first line >>expected &&
- echo same line >>expected &&
- echo ======= >>expected
- fi &&
+ echo first line >>expected &&
+ echo same line >>expected &&
+ echo ======= >>expected &&
echo first line | append_cr >>expected &&
echo same line | append_cr >>expected &&
echo ">>>>>>>" >>expected &&
git reset --hard a &&
test_must_fail git merge b &&
fuzz_conflict file >file.fuzzy &&
- test_cmp expected file.fuzzy
+ compare_files expected file.fuzzy
'
test_expect_success 'Detect LF/CRLF conflict from addition of text=auto' '
echo "<<<<<<<" >expected &&
echo first line | append_cr >>expected &&
echo same line | append_cr >>expected &&
- if test_have_prereq NATIVE_CRLF; then
- echo ======= | append_cr >>expected &&
- echo first line | append_cr >>expected &&
- echo same line | append_cr >>expected
- else
- echo ======= >>expected &&
- echo first line >>expected &&
- echo same line >>expected
- fi &&
+ echo ======= >>expected &&
+ echo first line >>expected &&
+ echo same line >>expected &&
echo ">>>>>>>" >>expected &&
git config merge.renormalize false &&
rm -f .gitattributes &&
git reset --hard b &&
test_must_fail git merge a &&
fuzz_conflict file >file.fuzzy &&
- test_cmp expected file.fuzzy
+ compare_files expected file.fuzzy
'
test_expect_failure 'checkout -m after setting text=auto' '
git reset --hard initial &&
git checkout a -- . &&
git checkout -m b &&
- test_cmp expected file
+ compare_files expected file
'
test_expect_failure 'checkout -m addition of text=auto' '
git reset --hard initial &&
git checkout b -- . &&
git checkout -m a &&
- test_cmp expected file
+ compare_files expected file
'
test_expect_failure 'cherry-pick patch from after text=auto was added' '
git reset --hard b &&
test_must_fail git cherry-pick a >err 2>&1 &&
grep "[Nn]othing added" err &&
- test_cmp expected file
+ compare_files expected file
'
test_expect_success 'Test delete/normalize conflict' '
test_when_finished "rm -f .git/$r" &&
echo "warning: ignoring broken ref $r" >broken-err &&
git for-each-ref >out 2>err &&
- test_cmp full-list out &&
- test_cmp broken-err err
+ test_i18ncmp full-list out &&
+ test_i18ncmp broken-err err
'
test_expect_success 'NULL_SHA1 refs are reported correctly' '
echo "warning: ignoring broken ref $r" >zeros-err &&
git for-each-ref >out 2>err &&
test_cmp full-list out &&
- test_cmp zeros-err err &&
+ test_i18ncmp zeros-err err &&
git for-each-ref --format="%(objectname) %(refname)" >brief-out 2>brief-err &&
test_cmp brief-list brief-out &&
- test_cmp zeros-err brief-err
+ test_i18ncmp zeros-err brief-err
'
test_expect_success 'Missing objects are reported correctly' '
test_when_finished "rm -f .git/$r" &&
echo "fatal: missing object $MISSING for $r" >missing-err &&
test_must_fail git for-each-ref 2>err &&
- test_cmp missing-err err &&
+ test_i18ncmp missing-err err &&
(
cat brief-list &&
echo "$MISSING $r"
echo content >file &&
git add file &&
git commit -m "added sub and file" &&
- mkdir -p deep/directory/hierachy &&
- git submodule add ./. deep/directory/hierachy/sub &&
+ mkdir -p deep/directory/hierarchy &&
+ git submodule add ./. deep/directory/hierarchy/sub &&
git commit -m "added another submodule" &&
git branch submodule
'
# git status would fail if the update of linking git dir to
# work dir of the submodule failed.
git status &&
- git config -f ../.gitmodules submodule.deep/directory/hierachy/sub.path >../actual &&
- echo "directory/hierachy/sub" >../expect
+ git config -f ../.gitmodules submodule.deep/directory/hierarchy/sub.path >../actual &&
+ echo "directory/hierarchy/sub" >../expect
) &&
test_cmp actual expect
'
# try to sign with bad user.signingkey
git config user.signingkey BobTheMouse
test_expect_success GPG \
- 'git tag -s fails if gpg is misconfigured' \
+ 'git tag -s fails if gpg is misconfigured (bad key)' \
'test_must_fail git tag -s -m tail tag-gpg-failure'
git config --unset user.signingkey
+# try to produce invalid signature
+test_expect_success GPG \
+ 'git tag -s fails if gpg is misconfigured (bad signature format)' \
+ 'test_config gpg.program echo &&
+ test_must_fail git tag -s -m tail tag-gpg-failure'
+
+
# try to verify without gpg:
rm -rf gpghome
grep ^LV= pager-env.out
'
+test_expect_success !MINGW,TTY 'LESS and LV envvars set by git-sh-setup' '
+ (
+ sane_unset LESS LV &&
+ PAGER="env >pager-env.out; wc" &&
+ export PAGER &&
+ PATH="$(git --exec-path):$PATH" &&
+ export PATH &&
+ test_terminal sh -c ". git-sh-setup && git_pager"
+ ) &&
+ grep ^LESS= pager-env.out &&
+ grep ^LV= pager-env.out
+'
+
test_expect_success TTY 'some commands do not use a pager' '
rm -f paginated.out &&
test_terminal git rev-list HEAD &&
H sub/2
EOF
-NULL_SHA1=e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
-
setup_absent() {
test -f 1 && rm 1
git update-index --remove 1 &&
- git update-index --add --cacheinfo 100644 $NULL_SHA1 1 &&
+ git update-index --add --cacheinfo 100644 $EMPTY_BLOB 1 &&
git update-index --skip-worktree 1
}
test_absent() {
- echo "100644 $NULL_SHA1 0 1" > expected &&
+ echo "100644 $EMPTY_BLOB 0 1" > expected &&
git ls-files --stage 1 > result &&
test_cmp expected result &&
test ! -f 1
setup_dirty() {
git update-index --force-remove 1 &&
echo dirty > 1 &&
- git update-index --add --cacheinfo 100644 $NULL_SHA1 1 &&
+ git update-index --add --cacheinfo 100644 $EMPTY_BLOB 1 &&
git update-index --skip-worktree 1
}
test_dirty() {
- echo "100644 $NULL_SHA1 0 1" > expected &&
+ echo "100644 $EMPTY_BLOB 0 1" > expected &&
git ls-files --stage 1 > result &&
test_cmp expected result &&
echo dirty > expected
test "$(git grep --no-ext-grep test)" = "1:test"
'
-echo ":000000 100644 $_z40 $NULL_SHA1 A 1" > expected
+echo ":000000 100644 $_z40 $EMPTY_BLOB A 1" > expected
test_expect_success 'diff-index does not examine skip-worktree absent entries' '
setup_absent &&
git diff-index HEAD -- 1 > result &&
git update-index --no-skip-worktree added
'
-NULL_SHA1=e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
-
setup_absent() {
test -f 1 && rm 1
git update-index --remove 1 &&
- git update-index --add --cacheinfo 100644 $NULL_SHA1 1 &&
+ git update-index --add --cacheinfo 100644 $EMPTY_BLOB 1 &&
git update-index --skip-worktree 1
}
test_absent() {
- echo "100644 $NULL_SHA1 0 1" > expected &&
+ echo "100644 $EMPTY_BLOB 0 1" > expected &&
git ls-files --stage 1 > result &&
test_cmp expected result &&
test ! -f 1
setup_dirty() {
git update-index --force-remove 1 &&
echo dirty > 1 &&
- git update-index --add --cacheinfo 100644 $NULL_SHA1 1 &&
+ git update-index --add --cacheinfo 100644 $EMPTY_BLOB 1 &&
git update-index --skip-worktree 1
}
test_dirty() {
- echo "100644 $NULL_SHA1 0 1" > expected &&
+ echo "100644 $EMPTY_BLOB 0 1" > expected &&
git ls-files --stage 1 > result &&
test_cmp expected result &&
echo dirty > expected
On branch side
You have unmerged paths.
(fix conflicts and run "git commit")
+ (use "git merge --abort" to abort the merge)
Unmerged paths:
(use "git add/rm <file>..." as appropriate to mark resolution)
On branch master
You have unmerged paths.
(fix conflicts and run "git commit")
+ (use "git merge --abort" to abort the merge)
Unmerged paths:
(use "git add/rm <file>..." as appropriate to mark resolution)
On branch conflict_second
You have unmerged paths.
(fix conflicts and run "git commit")
+ (use "git merge --abort" to abort the merge)
Unmerged paths:
(use "git add/rm <file>..." as appropriate to mark resolution)
On branch conflict_second
You have unmerged paths.
(fix conflicts and run "git commit")
+ (use "git merge --abort" to abort the merge)
Changes to be committed:
. ./test-lib.sh
+# On some filesystems (e.g. FreeBSD's ext2 and ufs), directory mtime
+# is updated lazily after the contents of the directory change, which
+# forces the untracked cache code to take the slow path. A test
+# that wants to make sure that the fast path works correctly should
+# call this helper to bring the mtime of the containing directory in
+# sync with reality before checking the fast path behaviour.
+#
+# See <20160803174522.5571-1-pclouds@gmail.com> if you want to know
+# more.
+
+sync_mtime () {
+ find . -type d -ls >/dev/null
+}
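
For illustration, a minimal sketch of how a test might use this helper (the
file name and test title here are hypothetical):

	test_expect_success 'untracked cache takes the fast path' '
		: >newly-created-file &&
		sync_mtime &&
		git status --porcelain >/dev/null
	'
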
+
avoid_racy() {
sleep 1
}
EOF
cat >../dump.expect <<EOF &&
-info/exclude e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
+info/exclude $EMPTY_BLOB
core.excludesfile 0000000000000000000000000000000000000000
exclude_per_dir .gitignore
flags 00000006
test_expect_success 'verify untracked cache dump' '
test-dump-untracked-cache >../actual &&
cat >../expect <<EOF &&
-info/exclude e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
+info/exclude $EMPTY_BLOB
core.excludesfile 0000000000000000000000000000000000000000
exclude_per_dir .gitignore
flags 00000006
test_expect_success 'verify untracked cache dump' '
test-dump-untracked-cache >../actual &&
cat >../expect <<EOF &&
-info/exclude e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
+info/exclude $EMPTY_BLOB
core.excludesfile 0000000000000000000000000000000000000000
exclude_per_dir .gitignore
flags 00000006
echo four >done/four && # four is gitignored at a higher level
echo five >done/five && # five is not gitignored
echo test >base && #we need to ensure that the root dir is touched
- rm base
+ rm base &&
+ sync_mtime
'
test_expect_success 'test sparse status with untracked cache' '
cp -R done dthree dtwo four three ../other_worktree &&
GIT_WORK_TREE=../other_worktree git status 2>../err &&
echo "warning: Untracked cache is disabled on this system or location." >../expect &&
- test_cmp ../expect ../err
+ test_i18ncmp ../expect ../err
'
test_done
hex=$(git log -1 --format="%h") &&
git reset --hard > .actual &&
echo HEAD is now at $hex $(commit_msg) > .expected &&
- test_cmp .expected .actual
+ test_i18ncmp .expected .actual
'
test_expect_success 'reset --hard message (ISO8859-1 logoutputencoding)' '
hex=$(git log -1 --format="%h") &&
git -c "i18n.logOutputEncoding=$test_encoding" reset --hard > .actual &&
echo HEAD is now at $hex $(commit_msg $test_encoding) > .expected &&
- test_cmp .expected .actual
+ test_i18ncmp .expected .actual
'
>.diff_expect
git checkout -f renamer && git clean -f &&
git checkout renamer^ 2>messages &&
test_i18ngrep "HEAD is now at 7329388" messages &&
- test_line_count -gt 1 messages &&
+ (test_line_count -gt 1 messages || test -n "$GETTEXT_POISON") &&
H=$(git rev-parse --verify HEAD) &&
M=$(git show-ref -s --verify refs/heads/master) &&
test "z$H" = "z$M" &&
cd sub &&
git submodule deinit ../init >../output
) &&
- grep "\\.\\./init" output &&
+ test_i18ngrep "\\.\\./init" output &&
test -z "$(git config --get-regexp "submodule\.example\.")" &&
test -n "$(git config --get-regexp "submodule\.example2\.")" &&
test -f example2/.git &&
cd sub &&
git submodule sync >../../output
) &&
- grep "\\.\\./submodule" output &&
+ test_i18ngrep "\\.\\./submodule" output &&
test -d "$(
cd super-clone/submodule &&
git config remote.origin.url
cd sub &&
git submodule sync --recursive >../../output
) &&
- grep "\\.\\./submodule/sub-submodule" output &&
+ test_i18ngrep "\\.\\./submodule/sub-submodule" output &&
test -d "$(
cd super-clone/submodule &&
git config remote.origin.url
cd tmp &&
git submodule update --init --recursive ../super >../../actual 2>../../actual2
) &&
- test_cmp expect actual &&
- test_cmp expect2 actual2
+ test_i18ncmp expect actual &&
+ test_i18ncmp expect2 actual2
'
apos="'";
)
'
+test_expect_success 'submodule update --remote should fetch upstream changes with .' '
+ (
+ cd super &&
+ git config -f .gitmodules submodule."submodule".branch "." &&
+ git add .gitmodules &&
+ git commit -m "submodules: update from the respective superproject branch"
+ ) &&
+ (
+ cd submodule &&
+ echo line4a >> file &&
+ git add file &&
+ test_tick &&
+ git commit -m "upstream line4a" &&
+ git checkout -b test-branch &&
+ test_commit on-test-branch
+ ) &&
+ (
+ cd super &&
+ git submodule update --remote --force submodule &&
+ git -C submodule log -1 --oneline >actual &&
+ git -C ../submodule log -1 --oneline master >expect &&
+ test_cmp expect actual &&
+ git checkout -b test-branch &&
+ git submodule update --remote --force submodule &&
+ git -C submodule log -1 --oneline >actual &&
+ git -C ../submodule log -1 --oneline test-branch >expect &&
+ test_cmp expect actual &&
+ git checkout master &&
+ git branch -d test-branch &&
+ git reset --hard HEAD^
+ )
+'
+
test_expect_success 'local config should override .gitmodules branch' '
(cd submodule &&
- git checkout -b test-branch &&
+ git checkout test-branch &&
echo line5 >> file &&
git add file &&
test_tick &&
(cd super &&
test_must_fail git submodule update submodule 2>../actual
) &&
- test_cmp actual expect
+ test_i18ncmp actual expect
'
cat << EOF >expect
mkdir tmp && cd tmp &&
test_must_fail git submodule update ../submodule 2>../../actual
) &&
- test_cmp actual expect
+ test_i18ncmp actual expect
'
cat << EOF >expect
mkdir -p tmp && cd tmp &&
test_must_fail git submodule update --recursive ../super 2>../../actual
) &&
- test_cmp actual expect
+ test_i18ncmp actual expect
'
test_expect_success 'submodule init does not copy command into .git/config' '
'
test_expect_success 'submodule update clone shallow submodule' '
+ test_when_finished "rm -rf super3" &&
+ first=$(git -C cloned submodule status submodule |cut -c2-41) &&
+ second=$(git -C submodule rev-parse HEAD) &&
+ commit_count=$(git -C submodule rev-list --count $first^..$second) &&
git clone cloned super3 &&
pwd=$(pwd) &&
- (cd super3 &&
- sed -e "s#url = ../#url = file://$pwd/#" <.gitmodules >.gitmodules.tmp &&
- mv -f .gitmodules.tmp .gitmodules &&
- git submodule update --init --depth=3
- (cd submodule &&
- test 1 = $(git log --oneline | wc -l)
- )
-)
+ (
+ cd super3 &&
+ sed -e "s#url = ../#url = file://$pwd/#" <.gitmodules >.gitmodules.tmp &&
+ mv -f .gitmodules.tmp .gitmodules &&
+ git submodule update --init --depth=$commit_count &&
+ test 1 = $(git -C submodule log --oneline | wc -l)
+ )
+'
+
+test_expect_success 'submodule update clone shallow submodule outside of depth' '
+ test_when_finished "rm -rf super3" &&
+ git clone cloned super3 &&
+ pwd=$(pwd) &&
+ (
+ cd super3 &&
+ sed -e "s#url = ../#url = file://$pwd/#" <.gitmodules >.gitmodules.tmp &&
+ mv -f .gitmodules.tmp .gitmodules &&
+ test_must_fail git submodule update --init --depth=1 2>actual &&
+ test_i18ngrep "Direct fetching of that commit failed." actual &&
+ git -C ../submodule config uploadpack.allowReachableSHA1InWant true &&
+ git submodule update --init --depth=1 >actual &&
+ test 1 = $(git -C submodule log --oneline | wc -l)
+ )
'
test_expect_success 'submodule update --recursive drops module name before recursing' '
)
'
+test_expect_success 'error message contains blob reference' '
+ (cd super &&
+ sha1=$(git rev-parse HEAD) &&
+ test-submodule-config \
+ HEAD b \
+ HEAD submodule \
+ 2>actual_err &&
+ test_i18ngrep "submodule-blob $sha1:.gitmodules" actual_err >/dev/null
+ )
+'
+
cat >super/expect_url <<EOF
Submodule url: 'git@somewhere.else.net:a.git' for path 'b'
Submodule url: 'git@somewhere.else.net:submodule.git' for path 'submodule'
'
cat >expect <<EOF
-:100644 100644 e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 0000000000000000000000000000000000000000 M dir1/modified
+:100644 100644 $EMPTY_BLOB 0000000000000000000000000000000000000000 M dir1/modified
EOF
test_expect_success 'status refreshes the index' '
touch dir2/added &&
git config --add -f .gitmodules submodule.subname.ignore all &&
git config --add -f .gitmodules submodule.subname.path sm &&
git status > output &&
- test_cmp expect output &&
+ test_i18ncmp expect output &&
git config -f .gitmodules --remove-section submodule.subname
'
git config --add submodule.subname.ignore all &&
git config --add submodule.subname.path sm &&
git status > output &&
- test_cmp expect output &&
+ test_i18ncmp expect output &&
git config --remove-section submodule.subname &&
git config -f .gitmodules --remove-section submodule.subname
'
test_cmp expect actual
'
+test_expect_success GPG 'log.showsignature behaves like --show-signature' '
+ test_config log.showsignature true &&
+ git show initial >actual &&
+ grep "gpg: Signature made" actual &&
+ grep "gpg: Good signature" actual
+'
+
test_done
On branch conflicts
You have unmerged paths.
(fix conflicts and run "git commit")
+ (use "git merge --abort" to abort the merge)
Unmerged paths:
(use "git add <file>..." to mark resolution)
error: The following untracked working tree files would be overwritten by merge:
sub
sub2
-Please move or remove them before you can merge.
+Please move or remove them before you merge.
Aborting
EOF
cp important sub &&
cp important sub2 &&
test_must_fail git merge sub 2>out &&
- test_cmp out expect &&
+ test_i18ncmp out expect &&
test_path_is_missing .git/MERGE_HEAD &&
test_cmp important sub &&
test_cmp important sub2 &&
four
three
two
-Please move or remove them before you can merge.
+Please move or remove them before you merge.
Aborting
EOF
four
three
two
-Please commit your changes or stash them before you can merge.
+Please commit your changes or stash them before you merge.
error: The following untracked working tree files would be overwritten by merge:
five
-Please move or remove them before you can merge.
+Please move or remove them before you merge.
Aborting
EOF
error: Your local changes to the following files would be overwritten by checkout:
rep/one
rep/two
-Please commit your changes or stash them before you can switch branches.
+Please commit your changes or stash them before you switch branches.
Aborting
EOF
error: Your local changes to the following files would be overwritten by checkout:
rep/one
rep/two
-Please commit your changes or stash them before you can switch branches.
+Please commit your changes or stash them before you switch branches.
Aborting
EOF
git reset --hard master >/dev/null 2>&1
'
-test_expect_success 'temporary filenames are used with mergetool.writeToTemp' '
+test_lazy_prereq MKTEMP '
+ tempdir=$(mktemp -d -t foo.XXXXXX) &&
+ test -d "$tempdir"
+'
+
+test_expect_success MKTEMP 'temporary filenames are used with mergetool.writeToTemp' '
git checkout -b test16 branch1 &&
test_config mergetool.writeToTemp true &&
test_config mergetool.myecho.cmd "echo \"\$LOCAL\"" &&
git cat-file blob :file
'
+test_expect_success 'repack -k keeps unreachable packed objects' '
+ # create packed-but-unreachable object
+ sha1=$(echo unreachable-packed | git hash-object -w --stdin) &&
+ pack=$(echo $sha1 | git pack-objects .git/objects/pack/pack) &&
+ git prune-packed &&
+
+ # -k should keep it
+ git repack -adk &&
+ git cat-file -p $sha1 &&
+
+ # and double check that without -k it would have been removed
+ git repack -ad &&
+ test_must_fail git cat-file -p $sha1
+'
+
+test_expect_success 'repack -k packs unreachable loose objects' '
+ # create loose unreachable object
+ sha1=$(echo would-be-deleted-loose | git hash-object -w --stdin) &&
+ objpath=.git/objects/$(echo $sha1 | sed "s,..,&/,") &&
+ test_path_is_file $objpath &&
+
+ # and confirm that the loose object goes away, but we can
+ # still access it (ergo, it is packed)
+ git repack -adk &&
+ test_path_is_missing $objpath &&
+ git cat-file -p $sha1
+'
+
test_done
test_cmp expect actual
'
+test_expect_success PERL 'difftool honors exit status if command not found' '
+ test_config difftool.nonexistent.cmd i-dont-exist &&
+ test_config difftool.trustExitCode false &&
+ test_must_fail git difftool -y -t nonexistent branch
+'
+
test_expect_success PERL 'difftool honors --gui' '
difftool_test_setup &&
test_config merge.tool bogus-tool &&
)
'
+run_dir_diff_test 'difftool --dir-diff from subdirectory with GIT_DIR set' '
+ (
+ GIT_DIR=$(pwd)/.git &&
+ export GIT_DIR &&
+ GIT_WORK_TREE=$(pwd) &&
+ export GIT_WORK_TREE &&
+ cd sub &&
+ git difftool --dir-diff $symlinks --extcmd ls \
+ branch -- sub >output &&
+ grep sub output &&
+ ! grep file output
+ )
+'
+
run_dir_diff_test 'difftool --dir-diff when worktree file is missing' '
test_when_finished git reset --hard &&
rm file2 &&
for f in file file2 sub/sub
do
echo "$f"
- readlink "$2/$f"
+ ls -ld "$2/$f" | sed -e 's/.* -> //'
done >actual
EOF
. ./test-lib.sh
cat >hello.c <<EOF
+#include <assert.h>
#include <stdio.h>
+
int main(int argc, const char **argv)
{
printf("Hello world.\n");
test_expect_success "grep -c $L (no /dev/null)" '
! git grep -c test $H | grep /dev/null
- '
+ '
test_expect_success "grep --max-depth -1 $L" '
{
cat >expected <<EOF
file:5
EOF
-test_expect_success 'grep -l -C' '
+test_expect_success 'grep -c -C' '
git grep -c -C1 foo >actual &&
test_cmp expected actual
'
'
test_expect_success 'log grep (9)' '
- git log -g --grep-reflog="commit: third" --author="non-existant" --pretty=tformat:%s >actual &&
+ git log -g --grep-reflog="commit: third" --author="non-existent" --pretty=tformat:%s >actual &&
: >expect &&
test_cmp expect actual
'
cat >expected <<EOF
hello.c-#include <stdio.h>
+hello.c-
hello.c=int main(int argc, const char **argv)
hello.c-{
hello.c- printf("Hello world.\n");
test_cmp expected actual
'
+cat >expected <<EOF
+hello.c-#include <assert.h>
+hello.c:#include <stdio.h>
+EOF
+
+test_expect_success 'grep -W shows no trailing empty lines' '
+ git grep -W stdio >actual &&
+ test_cmp expected actual
+'
+
cat >expected <<EOF
hello.c= printf("Hello world.\n");
hello.c: return 0;
cat >expected <<EOF
<BOLD;GREEN>hello.c<RESET>
-2:int main(int argc, const <BLACK;BYELLOW>char<RESET> **argv)
-6: /* <BLACK;BYELLOW>char<RESET> ?? */
+4:int main(int argc, const <BLACK;BYELLOW>char<RESET> **argv)
+8: /* <BLACK;BYELLOW>char<RESET> ?? */
<BOLD;GREEN>hello_world<RESET>
3:Hel<BLACK;BYELLOW>lo_w<RESET>orld
'
cat >expected <<EOF
-hello.c-#include <stdio.h>
+hello.c-
hello.c=int main(int argc, const char **argv)
hello.c-{
hello.c: pr<RED>int<RESET>f("<RED>Hello<RESET> world.\n");
test_cmp expected actual
'
+test_expect_success 'grep can find things only in the work tree' '
+ : >work-tree-only &&
+ git add work-tree-only &&
+ test_when_finished "git rm -f work-tree-only" &&
+ echo "find in work tree" >work-tree-only &&
+ git grep --quiet "find in work tree" &&
+ test_must_fail git grep --quiet --cached "find in work tree" &&
+ test_must_fail git grep --quiet "find in work tree" HEAD
+'
+
+test_expect_success 'grep can find things only in the work tree (i-t-a)' '
+ echo "intend to add this" >intend-to-add &&
+ git add -N intend-to-add &&
+ test_when_finished "git rm -f intend-to-add" &&
+ git grep --quiet "intend to add this" &&
+ test_must_fail git grep --quiet --cached "intend to add this" &&
+ test_must_fail git grep --quiet "intend to add this" HEAD
+'
+
+test_expect_success 'grep does not search work tree with assume unchanged' '
+ echo "intend to add this" >intend-to-add &&
+ git add -N intend-to-add &&
+ git update-index --assume-unchanged intend-to-add &&
+ test_when_finished "git rm -f intend-to-add" &&
+ test_must_fail git grep --quiet "intend to add this" &&
+ test_must_fail git grep --quiet --cached "intend to add this" &&
+ test_must_fail git grep --quiet "intend to add this" HEAD
+'
+
+test_expect_success 'grep can find things only in the index' '
+ echo "only in the index" >cache-this &&
+ git add cache-this &&
+ rm cache-this &&
+ test_when_finished "git rm --cached cache-this" &&
+ test_must_fail git grep --quiet "only in the index" &&
+ git grep --quiet --cached "only in the index" &&
+ test_must_fail git grep --quiet "only in the index" HEAD
+'
+
+test_expect_success 'grep does not report i-t-a with -L --cached' '
+ echo "intend to add this" >intend-to-add &&
+ git add -N intend-to-add &&
+ test_when_finished "git rm -f intend-to-add" &&
+ git ls-files | grep -v "^intend-to-add\$" >expected &&
+ git grep -L --cached "nonexistent_string" >actual &&
+ test_cmp expected actual
+'
+
+test_expect_success 'grep does not report i-t-a and assume unchanged with -L' '
+ echo "intend to add this" >intend-to-add-assume-unchanged &&
+ git add -N intend-to-add-assume-unchanged &&
+ test_when_finished "git rm -f intend-to-add-assume-unchanged" &&
+ git update-index --assume-unchanged intend-to-add-assume-unchanged &&
+ git ls-files | grep -v "^intend-to-add-assume-unchanged\$" >expected &&
+ git grep -L "nonexistent_string" >actual &&
+ test_cmp expected actual
+'
+
test_done
--- /dev/null
+#!/bin/sh
+
+test_description='grep icase on non-English locales'
+
+. ./lib-gettext.sh
+
+test_expect_success GETTEXT_LOCALE 'setup' '
+ test_write_lines "TILRAUN: Halló Heimur!" >file &&
+ git add file &&
+ LC_ALL="$is_IS_locale" &&
+ export LC_ALL
+'
+
+test_have_prereq GETTEXT_LOCALE &&
+test-regex "HALLÓ" "Halló" ICASE &&
+test_set_prereq REGEX_LOCALE
+
+test_expect_success REGEX_LOCALE 'grep literal string, no -F' '
+ git grep -i "TILRAUN: Halló Heimur!" &&
+ git grep -i "TILRAUN: HALLÓ HEIMUR!"
+'
+
+test_expect_success GETTEXT_LOCALE,LIBPCRE 'grep pcre utf-8 icase' '
+ git grep --perl-regexp "TILRAUN: H.lló Heimur!" &&
+ git grep --perl-regexp -i "TILRAUN: H.lló Heimur!" &&
+ git grep --perl-regexp -i "TILRAUN: H.LLÓ HEIMUR!"
+'
+
+test_expect_success GETTEXT_LOCALE,LIBPCRE 'grep pcre utf-8 string with "+"' '
+ test_write_lines "TILRAUN: Hallóó Heimur!" >file2 &&
+ git add file2 &&
+ git grep -l --perl-regexp "TILRAUN: H.lló+ Heimur!" >actual &&
+ echo file >expected &&
+ echo file2 >>expected &&
+ test_cmp expected actual
+'
+
+test_expect_success REGEX_LOCALE 'grep literal string, with -F' '
+ git grep --debug -i -F "TILRAUN: Halló Heimur!" 2>&1 >/dev/null |
+ grep fixed >debug1 &&
+ test_write_lines "fixed TILRAUN: Halló Heimur!" >expect1 &&
+ test_cmp expect1 debug1 &&
+
+ git grep --debug -i -F "TILRAUN: HALLÓ HEIMUR!" 2>&1 >/dev/null |
+ grep fixed >debug2 &&
+ test_write_lines "fixed TILRAUN: HALLÓ HEIMUR!" >expect2 &&
+ test_cmp expect2 debug2
+'
+
+test_expect_success REGEX_LOCALE 'grep string with regex, with -F' '
+ test_write_lines "^*TILR^AUN:.* \\Halló \$He[]imur!\$" >file &&
+
+ git grep --debug -i -F "^*TILR^AUN:.* \\Halló \$He[]imur!\$" 2>&1 >/dev/null |
+ grep fixed >debug1 &&
+ test_write_lines "fixed \\^*TILR^AUN:\\.\\* \\\\Halló \$He\\[]imur!\\\$" >expect1 &&
+ test_cmp expect1 debug1 &&
+
+ git grep --debug -i -F "^*TILR^AUN:.* \\HALLÓ \$HE[]IMUR!\$" 2>&1 >/dev/null |
+ grep fixed >debug2 &&
+ test_write_lines "fixed \\^*TILR^AUN:\\.\\* \\\\HALLÓ \$HE\\[]IMUR!\\\$" >expect2 &&
+ test_cmp expect2 debug2
+'
+
+test_expect_success REGEX_LOCALE 'pickaxe -i on non-ascii' '
+ git commit -m first &&
+ git log --format=%f -i -S"TILRAUN: HALLÓ HEIMUR!" >actual &&
+ echo first >expected &&
+ test_cmp expected actual
+'
+
+test_done
--- /dev/null
+#!/bin/sh
+
+test_description='grep icase on non-English locales'
+
+. ./lib-gettext.sh
+
+test_expect_success GETTEXT_ISO_LOCALE 'setup' '
+ printf "TILRAUN: Halló Heimur!" >file &&
+ git add file &&
+ LC_ALL="$is_IS_iso_locale" &&
+ export LC_ALL
+'
+
+test_expect_success GETTEXT_ISO_LOCALE,LIBPCRE 'grep pcre string' '
+ git grep --perl-regexp -i "TILRAUN: H.lló Heimur!" &&
+ git grep --perl-regexp -i "TILRAUN: H.LLÓ HEIMUR!"
+'
+
+test_done
test_tick &&
GIT_AUTHOR_NAME=Fourth git commit -m Fourth &&
- {
- echo ABC
- echo DEF
- echo XXXX
- echo GHIJK
- } >cow &&
+ cat >cow <<-\EOF &&
+ ABC
+ DEF
+ XXXX
+ GHIJK
+ EOF
git add cow &&
test_tick &&
GIT_AUTHOR_NAME=Fifth git commit -m Fifth
test_expect_success 'blame wholesale copy' '
git blame -f -C -C1 HEAD^ -- cow | sed -e "$pick_fc" >current &&
- {
- echo mouse-Initial
- echo mouse-Second
- echo mouse-Third
- } >expected &&
+ cat >expected <<-\EOF &&
+ mouse-Initial
+ mouse-Second
+ mouse-Third
+ EOF
test_cmp expected current
'
test_expect_success 'blame wholesale copy and more' '
git blame -f -C -C1 HEAD -- cow | sed -e "$pick_fc" >current &&
- {
- echo mouse-Initial
- echo mouse-Second
- echo cow-Fifth
- echo mouse-Third
- } >expected &&
+ cat >expected <<-\EOF &&
+ mouse-Initial
+ mouse-Second
+ cow-Fifth
+ mouse-Third
+ EOF
test_cmp expected current
'
+test_expect_success 'blame wholesale copy and more in the index' '
+
+ cat >horse <<-\EOF &&
+ ABC
+ DEF
+ XXXX
+ YYYY
+ GHIJK
+ EOF
+ git add horse &&
+ test_when_finished "git rm -f horse" &&
+ git blame -f -C -C1 -- horse | sed -e "$pick_fc" >current &&
+ cat >expected <<-\EOF &&
+ mouse-Initial
+ mouse-Second
+ cow-Fifth
+ horse-Not
+ mouse-Third
+ EOF
+ test_cmp expected current
+
+'
+
+test_expect_success 'blame during cherry-pick with file rename conflict' '
+
+ test_when_finished "git reset --hard && git checkout master" &&
+ git checkout HEAD~3 &&
+ echo MOUSE >> mouse &&
+ git mv mouse rodent &&
+ git add rodent &&
+ GIT_AUTHOR_NAME=Rodent git commit -m "rodent" &&
+ git checkout --detach master &&
+ (git cherry-pick HEAD@{1} || test $? -eq 1) &&
+ git show HEAD@{1}:rodent > rodent &&
+ git add rodent &&
+ git blame -f -C -C1 rodent | sed -e "$pick_fc" >current &&
+ cat current &&
+ cat >expected <<-\EOF &&
+ mouse-Initial
+ mouse-Second
+ rodent-Not
+ EOF
+ test_cmp expected current
+'
+
test_expect_success 'blame path that used to be a directory' '
mkdir path &&
echo A A A A A >path/file &&
test_cmp expect actual
'
+test_expect_success '--porcelain detects first non-blank line as subject' '
+ (
+ GIT_INDEX_FILE=.git/tmp-index &&
+ export GIT_INDEX_FILE &&
+ echo "This is it" >single-file &&
+ git add single-file &&
+ tree=$(git write-tree) &&
+ commit=$(printf "%s\n%s\n%s\n\n\n \noneline\n\nbody\n" \
+ "tree $tree" \
+ "author A <a@b.c> 123456789 +0000" \
+ "committer C <c@d.e> 123456789 +0000" |
+ git hash-object -w -t commit --stdin) &&
+ git blame --porcelain $commit -- single-file >output &&
+ grep "^summary oneline$" output
+ )
+'
+
test_done
git config help.autocorrect 0 &&
test_must_fail git lfg 2>actual &&
- sed -e "1,/^Did you mean this/d" actual | grep lgf &&
+ grep "^ lgf" actual &&
test_must_fail git distimdist 2>actual &&
- sed -e "1,/^Did you mean this/d" actual | grep distimdistim
+ grep "^ distimdistim" actual
'
test_expect_success 'autocorrect running commands' '
. ./lib-git-svn.sh
-say 'define NO_SVN_TESTS to skip git svn tests'
-
case "$GIT_SVN_LC_ALL" in
*.UTF-8)
test_set_prereq UTF8
;;
esac
+deepdir=nothing-above
+ceiling=$PWD
+
+test_expect_success 'git svn --version works anywhere' '
+ mkdir -p "$deepdir" && (
+ GIT_CEILING_DIRECTORIES="$ceiling" &&
+ export GIT_CEILING_DIRECTORIES &&
+ cd "$deepdir" &&
+ git svn --version
+ )
+'
+
+test_expect_success 'git svn help works anywhere' '
+ mkdir -p "$deepdir" && (
+ GIT_CEILING_DIRECTORIES="$ceiling" &&
+ export GIT_CEILING_DIRECTORIES &&
+ cd "$deepdir" &&
+ git svn help
+ )
+'
+
test_expect_success \
'initialize git svn' '
mkdir import &&
. ./lib-git-svn.sh
test_expect_success 'load repository with strange names' '
- svnadmin load -q "$rawsvnrepo" < "$TEST_DIRECTORY"/t9115/funky-names.dump &&
- start_httpd gtk+
- '
+ svnadmin load -q "$rawsvnrepo" <"$TEST_DIRECTORY"/t9115/funky-names.dump
+'
+
+maybe_start_httpd gtk+
test_expect_success 'init and fetch repository' '
git svn init "$svnrepo" &&
"$svnrepo/pr ject/branches/trailing_dotlock.lock" &&
svn_cmd cp -m "reflog" "$svnrepo/pr ject/trunk" \
"$svnrepo/pr ject/branches/not-a@{0}reflog@" &&
- start_httpd
+ maybe_start_httpd
'
# SVN 1.7 will truncate "not-a%40{0]" to just "not-a".
svn_cmd cp -m "tag" "$svnrepo/pr ject/trunk" \
"$svnrepo/pr ject/tags/v1" &&
rm -rf project &&
- start_httpd
+ maybe_start_httpd
'
test_expect_success 'test clone with percent escapes' '
svn_cmd add foo &&
svn_cmd commit -m "add foo"
) &&
- start_httpd
+ maybe_start_httpd
'
test_expect_success 'clone trunk with "-r HEAD"' '
. ./lib-git-svn.sh
-say 'define NO_SVN_TESTS to skip git svn tests'
-
test_expect_success 'initialize source svn repo' '
svn_cmd mkdir -m x "$svnrepo"/trunk &&
svn_cmd co "$svnrepo"/trunk "$SVN_TREE" &&
. ./lib-git-svn.sh
-say 'define NO_SVN_TESTS to skip git svn tests'
GIT_REPO=git-svn-repo
test_expect_success 'initialize source svn repo containing empty dirs' '
. ./test-lib.sh
. "$TEST_DIRECTORY"/diff-lib.sh ;# test-lib chdir's into trash
-# Print $1 bytes from stdin to stdout.
-#
-# This could be written as "head -c $1", but IRIX "head" does not
-# support the -c option.
-head_c () {
- perl -e '
- my $len = $ARGV[1];
- while ($len > 0) {
- my $s;
- my $nread = sysread(STDIN, $s, $len);
- die "cannot read: $!" unless defined($nread);
- print $s;
- $len -= $nread;
- }
- ' - "$1"
-}
-
verify_packs () {
for p in .git/objects/pack/*.pack
do
###
test_expect_success 'empty stream succeeds' '
+ git config fastimport.unpackLimit 0 &&
git fast-import </dev/null
'
read blob_id type size <&3 &&
echo "$blob_id $type $size" >response &&
- head_c $size >blob <&3 &&
+ test_copy_bytes $size >blob <&3 &&
read newline <&3 &&
cat <<-EOF &&
EOF
read blob_id type size <&3 &&
- head_c $size >actual <&3 &&
+ test_copy_bytes $size >actual <&3 &&
read newline <&3 &&
echo
echo "cat-blob $to_get" &&
read blob_id type size <&3 &&
- head_c $size >actual <&3 &&
+ test_copy_bytes $size >actual <&3 &&
read newline <&3 &&
echo deleteall
echo >>input &&
test_create_repo R &&
+ git --git-dir=R/.git config fastimport.unpackLimit 0 &&
git --git-dir=R/.git fast-import --big-file-threshold=1 <input
'
--- /dev/null
+#!/bin/sh
+test_description='test git fast-import unpack limit'
+. ./test-lib.sh
+
+test_expect_success 'create loose objects on import' '
+ test_tick &&
+ cat >input <<-INPUT_END &&
+ commit refs/heads/master
+ committer $GIT_COMMITTER_NAME <$GIT_COMMITTER_EMAIL> $GIT_COMMITTER_DATE
+ data <<COMMIT
+ initial
+ COMMIT
+
+ done
+ INPUT_END
+
+ git -c fastimport.unpackLimit=2 fast-import --done <input &&
+ git fsck --no-progress &&
+ test $(find .git/objects/?? -type f | wc -l) -eq 2 &&
+ test $(find .git/objects/pack -type f | wc -l) -eq 0
+'
+
+test_expect_success 'bigger packs are preserved' '
+ test_tick &&
+ cat >input <<-INPUT_END &&
+ commit refs/heads/master
+ committer $GIT_COMMITTER_NAME <$GIT_COMMITTER_EMAIL> $GIT_COMMITTER_DATE
+ data <<COMMIT
+ incremental should create a pack
+ COMMIT
+ from refs/heads/master^0
+
+ commit refs/heads/branch
+ committer $GIT_COMMITTER_NAME <$GIT_COMMITTER_EMAIL> $GIT_COMMITTER_DATE
+ data <<COMMIT
+ branch
+ COMMIT
+
+ done
+ INPUT_END
+
+ git -c fastimport.unpackLimit=2 fast-import --done <input &&
+ git fsck --no-progress &&
+ test $(find .git/objects/?? -type f | wc -l) -eq 2 &&
+ test $(find .git/objects/pack -type f | wc -l) -eq 2
+'
+
+test_expect_success 'lookups after checkpoint works' '
+ hello_id=$(echo hello | git hash-object --stdin -t blob) &&
+ id="$GIT_COMMITTER_NAME <$GIT_COMMITTER_EMAIL> $GIT_COMMITTER_DATE" &&
+ before=$(git rev-parse refs/heads/master^0) &&
+ (
+ cat <<-INPUT_END &&
+ blob
+ mark :1
+ data 6
+ hello
+
+ commit refs/heads/master
+ mark :2
+ committer $id
+ data <<COMMIT
+ checkpoint after this
+ COMMIT
+ from refs/heads/master^0
+ M 100644 :1 hello
+
+ # pre-checkpoint
+ cat-blob :1
+ cat-blob $hello_id
+ checkpoint
+ # post-checkpoint
+ cat-blob :1
+ cat-blob $hello_id
+ INPUT_END
+
+ n=0 &&
+ from=$before &&
+ while test x"$from" = x"$before"
+ do
+ if test $n -gt 30
+ then
+ echo >&2 "checkpoint did not update branch"
+ exit 1
+ else
+ n=$(($n + 1))
+ fi &&
+ sleep 1 &&
+ from=$(git rev-parse refs/heads/master^0)
+ done &&
+ cat <<-INPUT_END &&
+ commit refs/heads/master
+ committer $id
+ data <<COMMIT
+ make sure from "unpacked sha1 reference" works, too
+ COMMIT
+ from $from
+ INPUT_END
+ echo done
+ ) | git -c fastimport.unpackLimit=100 fast-import --done &&
+ test $(find .git/objects/?? -type f | wc -l) -eq 6 &&
+ test $(find .git/objects/pack -type f | wc -l) -eq 2
+'
+
+test_done
echo "more text" > src.c &&
GIT_CONFIG="$git_config" cvs -Q add src.c >cvs.log 2>&1 &&
marked_as . src.c "" &&
- echo "psuedo-binary" > temp.bin
+ echo "pseudo-binary" > temp.bin
) &&
GIT_CONFIG="$git_config" cvs -Q add subdir/temp.bin >cvs.log 2>&1 &&
marked_as subdir temp.bin "-kb" &&
test_path_is_file file2 &&
test_path_is_file file3 &&
! grep update file2 &&
- test_path_is_missing .git/git-p4-tmp
+ test_must_fail git show-ref --verify refs/git-p4-tmp
)
'
test_path_is_file file2 &&
test_path_is_file file3 &&
! grep update file2 &&
- test_path_is_missing .git/git-p4-tmp
+ test_must_fail git show-ref --verify refs/git-p4-tmp
)
'
then
echo >&2 "test_must_fail: command succeeded: $*"
return 1
- elif test $exit_code -eq 141 && list_contains "$_test_ok" sigpipe
+ elif test_match_signal 13 $exit_code && list_contains "$_test_ok" sigpipe
then
return 0
elif test $exit_code -gt 129 && test $exit_code -le 192
done
)
}
+
+# Returns true if the numeric exit code in "$2" represents the expected signal
+# in "$1". Signals should be given numerically.
+test_match_signal () {
+ if test "$2" = "$((128 + $1))"
+ then
+ # POSIX
+ return 0
+ elif test "$2" = "$((256 + $1))"
+ then
+ # ksh
+ return 0
+ fi
+ return 1
+}
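
As a worked example of the arithmetic above (hypothetical snippet): SIGPIPE is
signal 13, so a POSIX shell reports exit status 128+13=141 while ksh reports
256+13=269, and test_match_signal accepts either form:

	test_match_signal 13 141 && echo "POSIX-style signal exit accepted"
	test_match_signal 13 269 && echo "ksh-style signal exit accepted"
	test_match_signal 13 1 || echo "a plain failure is not a signal death"
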
+
+# Read up to "$1" bytes (or to EOF) from stdin and write them to stdout.
+test_copy_bytes () {
+ perl -e '
+ my $len = $ARGV[1];
+ while ($len > 0) {
+ my $s;
+ my $nread = sysread(STDIN, $s, $len);
+ die "cannot read: $!" unless defined($nread);
+ print $s;
+ $len -= $nread;
+ }
+ ' - "$1"
+}
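
A minimal usage sketch (hypothetical input, not taken from any test here):

	printf "hello world" | test_copy_bytes 5    # emits "hello" and stops
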
# Zero SHA-1
_z40=0000000000000000000000000000000000000000
+EMPTY_TREE=4b825dc642cb6eb9a060e54bf8d69288fbee4904
+EMPTY_BLOB=e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
+
# Line feed
LF='
'
# when case-folding filenames
u200c=$(printf '\342\200\214')
-export _x05 _x40 _z40 LF u200c
+export _x05 _x40 _z40 LF u200c EMPTY_TREE EMPTY_BLOB
# Each test should start with something like this, after copyright notices:
#
# override all git executables in TEST_DIRECTORY/..
GIT_VALGRIND=$TEST_DIRECTORY/valgrind
mkdir -p "$GIT_VALGRIND"/bin
- for file in $GIT_BUILD_DIR/git* $GIT_BUILD_DIR/test-*
+ for file in $GIT_BUILD_DIR/git* $GIT_BUILD_DIR/t/helper/test-*
do
make_valgrind_symlink $file
done
}
test_lazy_prereq CMDLINE_LIMIT 'run_with_limited_cmdline true'
+
+build_option () {
+ git version --build-options |
+ sed -ne "s/^$1: //p"
+}
+
+test_lazy_prereq LONG_IS_64BIT '
+ test 8 -le "$(build_option sizeof-long)"
+'
prepare_tempfile_object(tempfile);
strbuf_add_absolute_path(&tempfile->filename, path);
- tempfile->fd = open(tempfile->filename.buf, O_RDWR | O_CREAT | O_EXCL, 0666);
+ tempfile->fd = open(tempfile->filename.buf,
+ O_RDWR | O_CREAT | O_EXCL | O_CLOEXEC, 0666);
+ if (O_CLOEXEC && tempfile->fd < 0 && errno == EINVAL)
+ /* Try again w/o O_CLOEXEC: the kernel might not support it */
+ tempfile->fd = open(tempfile->filename.buf,
+ O_RDWR | O_CREAT | O_EXCL, 0666);
if (tempfile->fd < 0) {
strbuf_reset(&tempfile->filename);
return -1;
* * calling `fdopen_tempfile()` to get a `FILE` pointer for the
* open file and writing to the file using stdio.
*
+ * Note that the file descriptor returned by create_tempfile()
+ * is marked O_CLOEXEC, so the new contents must be written by
+ * the current process, not any spawned one.
+ *
* When finished writing, the caller can:
*
* * Close the file descriptor and remove the temporary file by
--- /dev/null
+#!/bin/sh
+#
+# An example hook script to make use of push options.
+# The example simply echoes all push options that start with 'echoback='
+# and rejects all pushes when the "reject" push option is used.
+#
+# To enable this hook, rename this file to "pre-receive".
+
+if test -n "$GIT_PUSH_OPTION_COUNT"
+then
+ i=0
+ while test "$i" -lt "$GIT_PUSH_OPTION_COUNT"
+ do
+ eval "value=\$GIT_PUSH_OPTION_$i"
+ case "$value" in
+ echoback=*)
+ echo "echo from the pre-receive-hook: ${value#*=}" >&2
+ ;;
+ reject)
+ exit 1
+ esac
+ i=$((i + 1))
+ done
+fi
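
For illustration, a client could exercise this hook roughly as follows
(hypothetical remote and option values, assuming the receiving repository has
the hook installed and advertises the push-options capability):

	git push --push-option=echoback=hello origin master   # hook echoes "hello" back
	git push --push-option=reject origin master           # hook exits 1, push is rejected
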
#include "cache.h"
#include "quote.h"
+/*
+ * "Normalize" a key argument by converting NULL to our trace_default,
+ * and otherwise passing through the value. All caller-facing functions
+ * should normalize their inputs in this way, though most get it
+ * for free by calling get_trace_fd() (directly or indirectly).
+ */
+static void normalize_trace_key(struct trace_key **key)
+{
+ static struct trace_key trace_default = { "GIT_TRACE" };
+ if (!*key)
+ *key = &trace_default;
+}
+
/* Get a trace file descriptor from "key" env variable. */
static int get_trace_fd(struct trace_key *key)
{
- static struct trace_key trace_default = { "GIT_TRACE" };
const char *trace;
- /* use default "GIT_TRACE" if NULL */
- if (!key)
- key = &trace_default;
+ normalize_trace_key(&key);
/* don't open twice */
if (key->initialized)
else if (is_absolute_path(trace)) {
int fd = open(trace, O_WRONLY | O_APPEND | O_CREAT, 0666);
if (fd == -1) {
- fprintf(stderr,
- "Could not open '%s' for tracing: %s\n"
- "Defaulting to tracing on stderr...\n",
+ warning("could not open '%s' for tracing: %s",
trace, strerror(errno));
- key->fd = STDERR_FILENO;
+ trace_disable(key);
} else {
key->fd = fd;
key->need_close = 1;
}
} else {
- fprintf(stderr, "What does '%s' for %s mean?\n"
- "If you want to trace into a file, then please set "
- "%s to an absolute pathname (starting with /).\n"
- "Defaulting to tracing on stderr...\n",
- trace, key->key, key->key);
- key->fd = STDERR_FILENO;
+ warning("unknown trace value for '%s': %s\n"
+ " If you want to trace into a file, then please set %s\n"
+ " to an absolute pathname (starting with /)",
+ key->key, trace, key->key);
+ trace_disable(key);
}
key->initialized = 1;
void trace_disable(struct trace_key *key)
{
+ normalize_trace_key(&key);
+
if (key->need_close)
close(key->fd);
key->fd = 0;
key->need_close = 0;
}
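
For illustration, with this change an unusable GIT_TRACE target now disables
tracing with a warning instead of falling back to stderr (hypothetical paths):

	GIT_TRACE=1 git status                  # trace still goes to stderr
	GIT_TRACE="$PWD/trace.log" git status   # absolute path: appended to that file
	GIT_TRACE=trace.log git status          # relative path: warning, tracing disabled
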
-static const char err_msg[] = "Could not trace into fd given by "
- "GIT_TRACE environment variable";
-
static int prepare_trace_line(const char *file, int line,
struct trace_key *key, struct strbuf *buf)
{
return 1;
}
+static void trace_write(struct trace_key *key, const void *buf, unsigned len)
+{
+ if (write_in_full(get_trace_fd(key), buf, len) < 0) {
+ normalize_trace_key(&key);
+ warning("unable to write trace for %s: %s",
+ key->key, strerror(errno));
+ trace_disable(key);
+ }
+}
+
void trace_verbatim(struct trace_key *key, const void *buf, unsigned len)
{
if (!trace_want(key))
return;
- write_or_whine_pipe(get_trace_fd(key), buf, len, err_msg);
+ trace_write(key, buf, len);
}
static void print_trace_line(struct trace_key *key, struct strbuf *buf)
{
strbuf_complete_line(buf);
-
- write_or_whine_pipe(get_trace_fd(key), buf->buf, buf->len, err_msg);
+ trace_write(key, buf->buf, buf->len);
strbuf_release(buf);
}
warning(_("unknown value '%s' for key '%s'"), value, conf_key);
break;
default:
- die("internal bug in trailer.c");
+ die("BUG: trailer.c: unhandled type %d", type);
}
return 0;
}
(*tail)->status |= REF_STATUS_UPTODATE;
if (read_ref((*tail)->name,
(*tail)->old_oid.hash) < 0)
- die(N_("Could not read ref %s"),
+ die(_("Could not read ref %s"),
(*tail)->name);
}
}
}
/* Stream state: More data may be coming in this direction. */
-#define SSTATE_TRANSFERING 0
+#define SSTATE_TRANSFERRING 0
/*
* Stream state: No more data coming in this direction, flushing rest of
* data.
/* Stream state: Transfer in this direction finished. */
#define SSTATE_FINISHED 2
-#define STATE_NEEDS_READING(state) ((state) <= SSTATE_TRANSFERING)
+#define STATE_NEEDS_READING(state) ((state) <= SSTATE_TRANSFERRING)
#define STATE_NEEDS_WRITING(state) ((state) <= SSTATE_FLUSHING)
#define STATE_NEEDS_CLOSING(state) ((state) == SSTATE_FLUSHING)
state.ptg.dest = 1;
state.ptg.src_is_sock = (input == output);
state.ptg.dest_is_sock = 0;
- state.ptg.state = SSTATE_TRANSFERING;
+ state.ptg.state = SSTATE_TRANSFERRING;
state.ptg.bufuse = 0;
state.ptg.src_name = "remote input";
state.ptg.dest_name = "stdout";
state.gtp.dest = output;
state.gtp.src_is_sock = 0;
state.gtp.dest_is_sock = (input == output);
- state.gtp.state = SSTATE_TRANSFERING;
+ state.gtp.state = SSTATE_TRANSFERRING;
state.gtp.bufuse = 0;
state.gtp.src_name = "stdin";
state.gtp.dest_name = "remote output";
localname + 11, transport->remote->name,
remotename);
else
- printf("Would set upstream of '%s' to '%s' of '%s'\n",
+ printf(_("Would set upstream of '%s' to '%s' of '%s'\n"),
localname + 11, remotename + 11,
transport->remote->name);
}
char *end;
opts->depth = strtol(value, &end, 0);
if (*end)
- die("transport: invalid depth option '%s'", value);
+ die(_("transport: invalid depth option '%s'"), value);
}
return 0;
}
}
}
-static const char *status_abbrev(unsigned char sha1[20])
-{
- return find_unique_abbrev(sha1, DEFAULT_ABBREV);
-}
-
static void print_ok_ref_status(struct ref *ref, int porcelain)
{
if (ref->deletion)
char type;
const char *msg;
- strbuf_addstr(&quickref, status_abbrev(ref->old_oid.hash));
+ strbuf_add_unique_abbrev(&quickref, ref->old_oid.hash,
+ DEFAULT_ABBREV);
if (ref->forced_update) {
strbuf_addstr(&quickref, "...");
type = '+';
type = ' ';
msg = NULL;
}
- strbuf_addstr(&quickref, status_abbrev(ref->new_oid.hash));
+ strbuf_add_unique_abbrev(&quickref, ref->new_oid.hash,
+ DEFAULT_ABBREV);
print_ref_status(type, quickref.buf, ref, ref->peer_ref, msg, porcelain);
strbuf_release(&quickref);
static int print_one_push_status(struct ref *ref, const char *dest, int count, int porcelain)
{
- if (!count)
- fprintf(porcelain ? stdout : stderr, "To %s\n", dest);
+ if (!count) {
+ char *url = transport_anonymize_url(dest);
+ fprintf(porcelain ? stdout : stderr, "To %s\n", url);
+ free(url);
+ }
switch(ref->status) {
case REF_STATUS_NONE:
args.dry_run = !!(flags & TRANSPORT_PUSH_DRY_RUN);
args.porcelain = !!(flags & TRANSPORT_PUSH_PORCELAIN);
args.atomic = !!(flags & TRANSPORT_PUSH_ATOMIC);
+ args.push_options = transport->push_options;
args.url = transport->url;
if (flags & TRANSPORT_PUSH_CERT_ALWAYS)
struct git_transport_data *data;
if (!transport->smart_options)
- die("Bug detected: Taking over transport requires non-NULL "
+ die("BUG: taking over transport requires non-NULL "
"smart_options field.");
data = xcalloc(1, sizeof(*data));
{
int i;
- fprintf(stderr, "The following submodule paths contain changes that can\n"
- "not be found on any remote:\n");
+ fprintf(stderr, _("The following submodule paths contain changes that can\n"
+ "not be found on any remote:\n"));
for (i = 0; i < needs_pushing->nr; i++)
printf(" %s\n", needs_pushing->items[i].string);
- fprintf(stderr, "\nPlease try\n\n"
- " git push --recurse-submodules=on-demand\n\n"
- "or cd to the path and use\n\n"
- " git push\n\n"
- "to push them to a remote.\n\n");
+ fprintf(stderr, _("\nPlease try\n\n"
+ " git push --recurse-submodules=on-demand\n\n"
+ "or cd to the path and use\n\n"
+ " git push\n\n"
+ "to push them to a remote.\n\n"));
string_list_clear(needs_pushing, 0);
- die("Aborting.");
+ die(_("Aborting."));
}
static int run_pre_push_hook(struct transport *transport,
*/
unsigned cloning : 1;
+ /*
+ * These strings will be passed to the {pre, post}-receive hook,
+ * on the remote side, if both sides support the push options capability.
+ */
+ const struct string_list *push_options;
+
/**
* Returns 0 if successful, positive if the option is not
* recognized or is inapplicable, and negative if the option
#define TRANSPORT_PUSH_CERT_ALWAYS 2048
#define TRANSPORT_PUSH_CERT_IF_ASKED 4096
#define TRANSPORT_PUSH_ATOMIC 8192
+#define TRANSPORT_PUSH_OPTIONS 16384
#define TRANSPORT_SUMMARY_WIDTH (2 * DEFAULT_ABBREV + 3)
#define TRANSPORT_SUMMARY(x) (int)(TRANSPORT_SUMMARY_WIDTH + strlen(x) - gettext_width(x)), (x)
*/
#define S_IFXMIN_NEQ S_DIFFTREE_IFXMIN_NEQ
+#define FAST_ARRAY_ALLOC(x, nr) do { \
+ if ((nr) <= 2) \
+ (x) = xalloca((nr) * sizeof(*(x))); \
+ else \
+ ALLOC_ARRAY((x), nr); \
+} while(0)
+#define FAST_ARRAY_FREE(x, nr) do { \
+ if ((nr) > 2) \
+ free((x)); \
+} while(0)
static struct combine_diff_path *ll_diff_tree_paths(
struct combine_diff_path *p, const unsigned char *sha1,
if (recurse) {
const unsigned char **parents_sha1;
- parents_sha1 = xalloca(nparent * sizeof(parents_sha1[0]));
+ FAST_ARRAY_ALLOC(parents_sha1, nparent);
for (i = 0; i < nparent; ++i) {
/* same rule as in emitthis */
int tpi_valid = tp && !(tp[i].entry.mode & S_IFXMIN_NEQ);
strbuf_add(base, path, pathlen);
strbuf_addch(base, '/');
p = ll_diff_tree_paths(p, sha1, parents_sha1, nparent, base, opt);
- xalloca_free(parents_sha1);
+ FAST_ARRAY_FREE(parents_sha1, nparent);
}
strbuf_setlen(base, old_baselen);
void *ttree, **tptree;
int i;
- tp = xalloca(nparent * sizeof(tp[0]));
- tptree = xalloca(nparent * sizeof(tptree[0]));
+ FAST_ARRAY_ALLOC(tp, nparent);
+ FAST_ARRAY_ALLOC(tptree, nparent);
/*
* load parents first, as they are probably already cached.
free(ttree);
for (i = nparent-1; i >= 0; i--)
free(tptree[i]);
- xalloca_free(tptree);
- xalloca_free(tp);
+ FAST_ARRAY_FREE(tptree, nparent);
+ FAST_ARRAY_FREE(tp, nparent);
return p;
}
diff_setup_done(&diff_opts);
ll_diff_tree_sha1(old, new, base, &diff_opts);
diffcore_std(&diff_opts);
- free_pathspec(&diff_opts.pathspec);
+ clear_pathspec(&diff_opts.pathspec);
/* Go through the new set of filepairing, and see if we find a more interesting one */
opt->found_follow = 0;
/* Update the path we use from now on.. */
path[0] = p->one->path;
path[1] = NULL;
- free_pathspec(&opt->pathspec);
+ clear_pathspec(&opt->pathspec);
parse_pathspec(&opt->pathspec,
PATHSPEC_ALL_MAGIC & ~PATHSPEC_LITERAL,
PATHSPEC_LITERAL_PATH, "", path);
if (!strcmp(cmd, "checkout"))
msg = advice_commit_before_merge
? _("Your local changes to the following files would be overwritten by checkout:\n%%s"
- "Please commit your changes or stash them before you can switch branches.")
+ "Please commit your changes or stash them before you switch branches.")
: _("Your local changes to the following files would be overwritten by checkout:\n%%s");
else if (!strcmp(cmd, "merge"))
msg = advice_commit_before_merge
? _("Your local changes to the following files would be overwritten by merge:\n%%s"
- "Please commit your changes or stash them before you can merge.")
+ "Please commit your changes or stash them before you merge.")
: _("Your local changes to the following files would be overwritten by merge:\n%%s");
else
msg = advice_commit_before_merge
? _("Your local changes to the following files would be overwritten by %s:\n%%s"
- "Please commit your changes or stash them before you can %s.")
+ "Please commit your changes or stash them before you %s.")
: _("Your local changes to the following files would be overwritten by %s:\n%%s");
msgs[ERROR_WOULD_OVERWRITE] = msgs[ERROR_NOT_UPTODATE_FILE] =
xstrfmt(msg, cmd, cmd);
if (!strcmp(cmd, "checkout"))
msg = advice_commit_before_merge
? _("The following untracked working tree files would be removed by checkout:\n%%s"
- "Please move or remove them before you can switch branches.")
+ "Please move or remove them before you switch branches.")
: _("The following untracked working tree files would be removed by checkout:\n%%s");
else if (!strcmp(cmd, "merge"))
msg = advice_commit_before_merge
? _("The following untracked working tree files would be removed by merge:\n%%s"
- "Please move or remove them before you can merge.")
+ "Please move or remove them before you merge.")
: _("The following untracked working tree files would be removed by merge:\n%%s");
else
msg = advice_commit_before_merge
? _("The following untracked working tree files would be removed by %s:\n%%s"
- "Please move or remove them before you can %s.")
+ "Please move or remove them before you %s.")
: _("The following untracked working tree files would be removed by %s:\n%%s");
msgs[ERROR_WOULD_LOSE_UNTRACKED_REMOVED] = xstrfmt(msg, cmd, cmd);
if (!strcmp(cmd, "checkout"))
msg = advice_commit_before_merge
? _("The following untracked working tree files would be overwritten by checkout:\n%%s"
- "Please move or remove them before you can switch branches.")
+ "Please move or remove them before you switch branches.")
: _("The following untracked working tree files would be overwritten by checkout:\n%%s");
else if (!strcmp(cmd, "merge"))
msg = advice_commit_before_merge
? _("The following untracked working tree files would be overwritten by merge:\n%%s"
- "Please move or remove them before you can merge.")
+ "Please move or remove them before you merge.")
: _("The following untracked working tree files would be overwritten by merge:\n%%s");
else
msg = advice_commit_before_merge
? _("The following untracked working tree files would be overwritten by %s:\n%%s"
- "Please move or remove them before you can %s.")
+ "Please move or remove them before you %s.")
: _("The following untracked working tree files would be overwritten by %s:\n%%s");
msgs[ERROR_WOULD_LOSE_UNTRACKED_OVERWRITTEN] = xstrfmt(msg, cmd, cmd);
#include "sigchain.h"
#include "version.h"
#include "string-list.h"
+#include "parse-options.h"
-static const char upload_pack_usage[] = "git upload-pack [--strict] [--timeout=<n>] <dir>";
+static const char * const upload_pack_usage[] = {
+ N_("git upload-pack [<options>] <dir>"),
+ NULL
+};
/* Remember to update object flag allocation in object.h */
#define THEY_HAVE (1u << 11)
static int use_sideband;
static int advertise_refs;
static int stateless_rpc;
+static const char *pack_objects_hook;
static void reset_timeout(void)
{
alarm(timeout);
}
-static ssize_t send_client_data(int fd, const char *data, ssize_t sz)
+static void send_client_data(int fd, const char *data, ssize_t sz)
{
- if (use_sideband)
- return send_sideband(1, fd, data, sz, use_sideband);
+ if (use_sideband) {
+ send_sideband(1, fd, data, sz, use_sideband);
+ return;
+ }
if (fd == 3)
/* emergency quit */
fd = 2;
if (fd == 2) {
/* XXX: are we happy to lose stuff here? */
xwrite(fd, data, sz);
- return sz;
+ return;
}
write_or_die(fd, data, sz);
- return sz;
}
static int write_one_shallow(const struct commit_graft *graft, void *cb_data)
int i;
FILE *pipe_fd;
+ if (!pack_objects_hook)
+ pack_objects.git_cmd = 1;
+ else {
+ argv_array_push(&pack_objects.args, pack_objects_hook);
+ argv_array_push(&pack_objects.args, "git");
+ pack_objects.use_shell = 1;
+ }
+
if (shallow_nr) {
argv_array_push(&pack_objects.args, "--shallow-file");
argv_array_push(&pack_objects.args, "");
pack_objects.in = -1;
pack_objects.out = -1;
pack_objects.err = -1;
- pack_objects.git_cmd = 1;
if (start_command(&pack_objects))
die("git upload-pack: unable to fork git-pack-objects");
}
else
buffered = -1;
- sz = send_client_data(1, data, sz);
- if (sz < 0)
- goto fail;
+ send_client_data(1, data, sz);
}
/*
/* flush the data */
if (0 <= buffered) {
data[0] = buffered;
- sz = send_client_data(1, data, 1);
- if (sz < 0)
- goto fail;
+ send_client_data(1, data, 1);
fprintf(stderr, "flushed.\n");
}
if (use_sideband)
keepalive = git_config_int(var, value);
if (!keepalive)
keepalive = -1;
+ } else if (current_config_scope() != CONFIG_SCOPE_REPO) {
+ if (!strcmp("uploadpack.packobjectshook", var))
+ return git_config_string(&pack_objects_hook, var, value);
}
return parse_hide_refs_config(var, value, "uploadpack");
}
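
For illustration, a sketch of how such a hook might be wired up (the wrapper
path and log file are hypothetical; the scope check above means the variable
is honoured only from non-repository configuration, e.g. user or system config):

	git config --global uploadpack.packObjectsHook /usr/local/bin/pack-objects-log

where /usr/local/bin/pack-objects-log might look like:

	#!/bin/sh
	# Receives the full "git pack-objects ..." command line as arguments;
	# log the invocation, then run it unchanged.
	echo "$(date): $*" >>/var/log/git-pack-objects.log
	exec "$@"
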
-int main(int argc, char **argv)
+int cmd_main(int argc, const char **argv)
{
- char *dir;
- int i;
+ const char *dir;
int strict = 0;
-
- git_setup_gettext();
+ struct option options[] = {
+ OPT_BOOL(0, "stateless-rpc", &stateless_rpc,
+ N_("quit after a single request/response exchange")),
+ OPT_BOOL(0, "advertise-refs", &advertise_refs,
+ N_("exit immediately after initial ref advertisement")),
+ OPT_BOOL(0, "strict", &strict,
+ N_("do not try <directory>/.git/ if <directory> is no Git directory")),
+ OPT_INTEGER(0, "timeout", &timeout,
+ N_("interrupt transfer after <n> seconds of inactivity")),
+ OPT_END()
+ };
packet_trace_identity("upload-pack");
- git_extract_argv0_path(argv[0]);
check_replace_refs = 0;
- for (i = 1; i < argc; i++) {
- char *arg = argv[i];
+ argc = parse_options(argc, argv, NULL, options, upload_pack_usage, 0);
- if (arg[0] != '-')
- break;
- if (!strcmp(arg, "--advertise-refs")) {
- advertise_refs = 1;
- continue;
- }
- if (!strcmp(arg, "--stateless-rpc")) {
- stateless_rpc = 1;
- continue;
- }
- if (!strcmp(arg, "--strict")) {
- strict = 1;
- continue;
- }
- if (starts_with(arg, "--timeout=")) {
- timeout = atoi(arg+10);
- daemon_mode = 1;
- continue;
- }
- if (!strcmp(arg, "--")) {
- i++;
- break;
- }
- }
+ if (argc != 1)
+ usage_with_options(upload_pack_usage, options);
- if (i != argc-1)
- usage(upload_pack_usage);
+ if (timeout)
+ daemon_mode = 1;
setup_path();
- dir = argv[i];
+ dir = argv[0];
if (!enter_repo(dir, strict))
die("'%s' does not appear to be a git repository", dir);
"[a-zA-Z_][a-zA-Z0-9_]*"
"|[-+0-9.e]+[fFlL]?|0[xXbB]?[0-9a-fA-F]+[lL]?"
"|[-+*/<>%&^|=!]=|--|\\+\\+|<<=?|>>=?|&&|\\|\\||::|->"),
+IPATTERN("css",
+ "![:;][[:space:]]*$\n"
+ "^[_a-z0-9].*$",
+ /* -- */
+ /*
+ * This regex comes from W3C CSS specs. Should theoretically also
+ * allow ISO 10646 characters U+00A0 and higher,
+ * but they are not handled in this regex.
+ */
+ "-?[_a-zA-Z][-_a-zA-Z0-9]*" /* identifiers */
+ "|-?[0-9]+|\\#[0-9a-fA-F]+" /* numbers */
+),
{ "default", NULL, -1, { NULL, 0 } },
};
#undef PATTERNS
static unsigned char current_commit_sha1[20];
-void walker_say(struct walker *walker, const char *fmt, const char *hex)
+void walker_say(struct walker *walker, const char *fmt, ...)
{
- if (walker->get_verbosely)
- fprintf(stderr, fmt, hex);
+ if (walker->get_verbosely) {
+ va_list ap;
+ va_start(ap, fmt);
+ vfprintf(stderr, fmt, ap);
+ va_end(ap);
+ }
}
static void report_missing(const struct object *obj)
};
/* Report what we got under get_verbosely */
-void walker_say(struct walker *walker, const char *, const char *);
+__attribute__((format (printf, 2, 3)))
+void walker_say(struct walker *walker, const char *fmt, ...);
/* Load pull targets from stdin */
int walker_targets_stdin(char ***target, const char ***write_ref);
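The walker_say() conversion above follows the common C pattern of a variadic wrapper around vfprintf() whose prototype carries the printf format attribute, letting the compiler type-check callers' arguments. A self-contained sketch of that pattern (generic, not Git code):

/*
 * Generic sketch: a variadic logger whose prototype carries the printf
 * format attribute so the compiler can check callers' format arguments.
 */
#include <stdarg.h>
#include <stdio.h>

__attribute__((format(printf, 1, 2)))
static void say(const char *fmt, ...)
{
	va_list ap;

	va_start(ap, fmt);
	vfprintf(stderr, fmt, ap);
	va_end(ap);
}

int main(void)
{
	say("got %s (%d bytes)\n", "refs/heads/master", 41);
	return 0;
}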
free(worktrees[i]->path);
free(worktrees[i]->id);
free(worktrees[i]->head_ref);
+ free(worktrees[i]->lock_reason);
free(worktrees[i]);
}
free (worktrees);
int is_bare = 0;
int is_detached = 0;
- strbuf_addstr(&worktree_path, absolute_path(get_git_common_dir()));
+ strbuf_add_absolute_path(&worktree_path, get_git_common_dir());
is_bare = !strbuf_strip_suffix(&worktree_path, "/.git");
if (is_bare)
strbuf_strip_suffix(&worktree_path, "/.");
worktree->is_detached = is_detached;
worktree->is_current = 0;
add_head_info(&head_ref, worktree);
+ worktree->lock_reason = NULL;
+ worktree->lock_reason_valid = 0;
done:
strbuf_release(&path);
strbuf_rtrim(&worktree_path);
if (!strbuf_strip_suffix(&worktree_path, "/.git")) {
strbuf_reset(&worktree_path);
- strbuf_addstr(&worktree_path, absolute_path("."));
+ strbuf_add_absolute_path(&worktree_path, ".");
strbuf_strip_suffix(&worktree_path, "/.");
}
worktree->is_detached = is_detached;
worktree->is_current = 0;
add_head_info(&head_ref, worktree);
+ worktree->lock_reason = NULL;
+ worktree->lock_reason_valid = 0;
done:
strbuf_release(&path);
static void mark_current_worktree(struct worktree **worktrees)
{
- struct strbuf git_dir = STRBUF_INIT;
- struct strbuf path = STRBUF_INIT;
+ char *git_dir = xstrdup(absolute_path(get_git_dir()));
int i;
- strbuf_addstr(&git_dir, absolute_path(get_git_dir()));
for (i = 0; worktrees[i]; i++) {
struct worktree *wt = worktrees[i];
- strbuf_addstr(&path, absolute_path(get_worktree_git_dir(wt)));
- wt->is_current = !fspathcmp(git_dir.buf, path.buf);
- strbuf_reset(&path);
- if (wt->is_current)
+ const char *wt_git_dir = get_worktree_git_dir(wt);
+
+ if (!fspathcmp(git_dir, absolute_path(wt_git_dir))) {
+ wt->is_current = 1;
break;
+ }
}
- strbuf_release(&git_dir);
- strbuf_release(&path);
+ free(git_dir);
}
struct worktree **get_worktrees(void)
if (dir) {
while ((d = readdir(dir)) != NULL) {
struct worktree *linked = NULL;
- if (!strcmp(d->d_name, ".") || !strcmp(d->d_name, ".."))
+ if (is_dot_or_dotdot(d->d_name))
continue;
if ((linked = get_linked_worktree(d->d_name))) {
return git_common_path("worktrees/%s", wt->id);
}
+static struct worktree *find_worktree_by_suffix(struct worktree **list,
+ const char *suffix)
+{
+ struct worktree *found = NULL;
+ int nr_found = 0, suffixlen;
+
+ suffixlen = strlen(suffix);
+ if (!suffixlen)
+ return NULL;
+
+ for (; *list && nr_found < 2; list++) {
+ const char *path = (*list)->path;
+ int pathlen = strlen(path);
+ int start = pathlen - suffixlen;
+
+ /* suffix must start at directory boundary */
+ if ((!start || (start > 0 && is_dir_sep(path[start - 1]))) &&
+ !fspathcmp(suffix, path + start)) {
+ found = *list;
+ nr_found++;
+ }
+ }
+ return nr_found == 1 ? found : NULL;
+}
+
+struct worktree *find_worktree(struct worktree **list,
+ const char *prefix,
+ const char *arg)
+{
+ struct worktree *wt;
+ char *path;
+
+ if ((wt = find_worktree_by_suffix(list, arg)))
+ return wt;
+
+ arg = prefix_filename(prefix, strlen(prefix), arg);
+ path = xstrdup(real_path(arg));
+ for (; *list; list++)
+ if (!fspathcmp(path, real_path((*list)->path)))
+ break;
+ free(path);
+ return *list;
+}
+
+int is_main_worktree(const struct worktree *wt)
+{
+ return !wt->id;
+}
+
+const char *is_worktree_locked(struct worktree *wt)
+{
+ assert(!is_main_worktree(wt));
+
+ if (!wt->lock_reason_valid) {
+ struct strbuf path = STRBUF_INIT;
+
+ strbuf_addstr(&path, worktree_git_path(wt, "locked"));
+ if (file_exists(path.buf)) {
+ struct strbuf lock_reason = STRBUF_INIT;
+ if (strbuf_read_file(&lock_reason, path.buf, 0) < 0)
+ die_errno(_("failed to read '%s'"), path.buf);
+ strbuf_trim(&lock_reason);
+ wt->lock_reason = strbuf_detach(&lock_reason, NULL);
+ } else
+ wt->lock_reason = NULL;
+ wt->lock_reason_valid = 1;
+ strbuf_release(&path);
+ }
+
+ return wt->lock_reason;
+}
+
int is_worktree_being_rebased(const struct worktree *wt,
const char *target)
{
char *path;
char *id;
char *head_ref;
+ char *lock_reason; /* internal use */
unsigned char head_sha1[20];
int is_detached;
int is_bare;
int is_current;
+ int lock_reason_valid;
};
/* Functions for acting on the information about worktrees. */
*/
extern const char *get_worktree_git_dir(const struct worktree *wt);
+/*
+ * Search for a worktree that can be unambiguously identified by
+ * "arg". "prefix" must not be NULL.
+ */
+extern struct worktree *find_worktree(struct worktree **list,
+ const char *prefix,
+ const char *arg);
+
+/*
+ * Return true if the given worktree is the main one.
+ */
+extern int is_main_worktree(const struct worktree *wt);
+
+/*
+ * Return the reason string if the given worktree is locked or NULL
+ * otherwise.
+ */
+extern const char *is_worktree_locked(struct worktree *wt);
+
/*
* Free up the memory for worktree(s)
*/
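The new worktree helpers declared above are meant to be used together: resolve a user-supplied argument to a worktree, then query the lock reason (only linked worktrees can be locked, hence the is_main_worktree() guard). A usage sketch that assumes Git's internal headers and builds only inside the Git source tree; report_worktree() and its arguments are hypothetical:

/*
 * Usage sketch only; assumes Git's internal headers and builds only
 * inside the Git source tree.  report_worktree() is hypothetical.
 */
#include "cache.h"
#include "worktree.h"

static void report_worktree(const char *prefix, const char *arg)
{
	struct worktree **list = get_worktrees();
	struct worktree *wt = find_worktree(list, prefix, arg);
	const char *reason;

	if (!wt)
		die("no worktree matches '%s'", arg);
	if (!is_main_worktree(wt) && (reason = is_worktree_locked(wt)))
		printf("%s is locked: %s\n", wt->path,
		       *reason ? reason : "(no reason given)");
	else
		printf("%s is not locked\n", wt->path);
	free_worktrees(list);
}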
}
}
+static int handle_nonblock(int fd, short poll_events, int err)
+{
+ struct pollfd pfd;
+
+ if (err != EAGAIN && err != EWOULDBLOCK)
+ return 0;
+
+ pfd.fd = fd;
+ pfd.events = poll_events;
+
+ /*
+ * no need to check for errors, here;
+ * a subsequent read/write will detect unrecoverable errors
+ */
+ poll(&pfd, 1, -1);
+ return 1;
+}
+
/*
* xread() is the same a read(), but it automatically restarts read()
* operations with a recoverable error (EAGAIN and EINTR). xread()
if (nr < 0) {
if (errno == EINTR)
continue;
- if (errno == EAGAIN || errno == EWOULDBLOCK) {
- struct pollfd pfd;
- pfd.events = POLLIN;
- pfd.fd = fd;
- /*
- * it is OK if this poll() failed; we
- * want to leave this infinite loop
- * only when read() returns with
- * success, or an expected failure,
- * which would be checked by the next
- * call to read(2).
- */
- poll(&pfd, 1, -1);
- }
+ if (handle_nonblock(fd, POLLIN, errno))
+ continue;
}
return nr;
}
len = MAX_IO_SIZE;
while (1) {
nr = write(fd, buf, len);
- if ((nr < 0) && (errno == EAGAIN || errno == EINTR))
- continue;
+ if (nr < 0) {
+ if (errno == EINTR)
+ continue;
+ if (handle_nonblock(fd, POLLOUT, errno))
+ continue;
+ }
+
return nr;
}
}
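With the handle_nonblock() helper above, xread() and xwrite() restart on EINTR and, for non-blocking descriptors, park in poll() on EAGAIN/EWOULDBLOCK before retrying. A self-contained sketch of the same retry pattern (not Git code):

/*
 * Sketch of the retry pattern: restart on EINTR; on EAGAIN/EWOULDBLOCK
 * wait in poll() and retry, so callers of a non-blocking descriptor
 * never see a spurious failure.
 */
#include <errno.h>
#include <poll.h>
#include <unistd.h>

ssize_t read_retry(int fd, void *buf, size_t len)
{
	for (;;) {
		ssize_t nr = read(fd, buf, len);
		if (nr >= 0)
			return nr;
		if (errno == EINTR)
			continue;
		if (errno == EAGAIN || errno == EWOULDBLOCK) {
			struct pollfd pfd;
			pfd.fd = fd;
			pfd.events = POLLIN;
			/* errors will surface on the next read() */
			poll(&pfd, 1, -1);
			continue;
		}
		return nr;
	}
}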
return len;
}
-static int write_file_v(const char *path, int fatal,
- const char *fmt, va_list params)
+void write_file_buf(const char *path, const char *buf, size_t len)
{
- struct strbuf sb = STRBUF_INIT;
- int fd = open(path, O_RDWR | O_CREAT | O_TRUNC, 0666);
- if (fd < 0) {
- if (fatal)
- die_errno(_("could not open %s for writing"), path);
- return -1;
- }
- strbuf_vaddf(&sb, fmt, params);
- strbuf_complete_line(&sb);
- if (write_in_full(fd, sb.buf, sb.len) != sb.len) {
- int err = errno;
- close(fd);
- strbuf_release(&sb);
- errno = err;
- if (fatal)
- die_errno(_("could not write to %s"), path);
- return -1;
- }
- strbuf_release(&sb);
- if (close(fd)) {
- if (fatal)
- die_errno(_("could not close %s"), path);
- return -1;
- }
- return 0;
+ int fd = xopen(path, O_WRONLY | O_CREAT | O_TRUNC, 0666);
+ if (write_in_full(fd, buf, len) != len)
+ die_errno(_("could not write to %s"), path);
+ if (close(fd))
+ die_errno(_("could not close %s"), path);
}
-int write_file(const char *path, const char *fmt, ...)
+void write_file(const char *path, const char *fmt, ...)
{
- int status;
va_list params;
+ struct strbuf sb = STRBUF_INIT;
va_start(params, fmt);
- status = write_file_v(path, 1, fmt, params);
+ strbuf_vaddf(&sb, fmt, params);
va_end(params);
- return status;
-}
-int write_file_gently(const char *path, const char *fmt, ...)
-{
- int status;
- va_list params;
+ strbuf_complete_line(&sb);
- va_start(params, fmt);
- status = write_file_v(path, 0, fmt, params);
- va_end(params);
- return status;
+ write_file_buf(path, sb.buf, sb.len);
+ strbuf_release(&sb);
}
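After the rewrite above, write_file() formats its arguments, ensures the result ends in a newline via strbuf_complete_line(), and dies on any error, while write_file_buf() writes a buffer verbatim. Illustrative call sites with hypothetical paths and buffers (assumes Git's internal declarations, not a standalone program):

/* newline appended if the formatted content lacks one */
write_file("heads.txt", "%s", "refs/heads/topic");
/* buffer written verbatim, no newline added */
write_file_buf("blob.bin", data, datalen);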
void sleep_millisec(int millisec)
die_errno("write error");
}
}
-
-int write_or_whine_pipe(int fd, const void *buf, size_t count, const char *msg)
-{
- if (write_in_full(fd, buf, count) < 0) {
- check_pipe(errno);
- fprintf(stderr, "%s: write error (%s)\n",
- msg, strerror(errno));
- return 0;
- }
-
- return 1;
-}
-
-int write_or_whine(int fd, const void *buf, size_t count, const char *msg)
-{
- if (write_in_full(fd, buf, count) < 0) {
- fprintf(stderr, "%s: write error (%s)\n",
- msg, strerror(errno));
- return 0;
- }
-
- return 1;
-}
case 7:
return _("both modified:");
default:
- die(_("bug: unhandled unmerged status %x"), stagemask);
+ die("BUG: unhandled unmerged status %x", stagemask);
}
}
status_printf(s, color(WT_STATUS_HEADER, s), "\t");
what = wt_status_diff_status_string(status);
if (!what)
- die(_("bug: unhandled diff status %c"), status);
+ die("BUG: unhandled diff status %c", status);
len = label_width - utf8_strwidth(what);
assert(len >= 0);
if (status == DIFF_STATUS_COPIED || status == DIFF_STATUS_RENAMED)
d->worktree_status = p->status;
d->dirty_submodule = p->two->dirty_submodule;
if (S_ISGITLINK(p->two->mode))
- d->new_submodule_commits = !!hashcmp(p->one->sha1, p->two->sha1);
+ d->new_submodule_commits = !!oidcmp(&p->one->oid,
+ &p->two->oid);
}
}
{
if (has_unmerged(s)) {
status_printf_ln(s, color, _("You have unmerged paths."));
- if (s->hints)
+ if (s->hints) {
status_printf_ln(s, color,
- _(" (fix conflicts and run \"git commit\")"));
+ _(" (fix conflicts and run \"git commit\")"));
+ status_printf_ln(s, color,
+ _(" (use \"git merge --abort\" to abort the merge)"));
+ }
} else {
s->commitable = 1;
status_printf_ln(s, color,
strbuf_addf(split[1], "%s ", abbrev);
strbuf_reset(line);
for (i = 0; split[i]; i++)
- strbuf_addf(line, "%s", split[i]->buf);
+ strbuf_addbuf(line, split[i]);
}
}
strbuf_list_free(split);
else
printf(_("nothing to commit\n"));
} else
- printf(_("nothing to commit, working directory clean\n"));
+ printf(_("nothing to commit, working tree clean\n"));
}
}
/*
* Trim down common substring at the end of the buffers,
- * but leave at least ctx lines at the end.
+ * but end on a complete line.
*/
-static void trim_common_tail(mmfile_t *a, mmfile_t *b, long ctx)
+static void trim_common_tail(mmfile_t *a, mmfile_t *b)
{
const int blk = 1024;
long trimmed = 0, recovered = 0;
char *bp = b->ptr + b->size;
long smaller = (a->size < b->size) ? a->size : b->size;
- if (ctx)
- return;
-
while (blk + trimmed <= smaller && !memcmp(ap - blk, bp - blk, blk)) {
trimmed += blk;
ap -= blk;
if (mf1->size > MAX_XDIFF_SIZE || mf2->size > MAX_XDIFF_SIZE)
return -1;
- trim_common_tail(&a, &b, xecfg->ctxlen);
+ if (!xecfg->ctxlen && !(xecfg->flags & XDL_EMIT_FUNCCONTEXT))
+ trim_common_tail(&a, &b);
return xdl_diff(&a, &b, xpp, xecfg, xecb);
}
return -1;
}
+static long match_func_rec(xdfile_t *xdf, xdemitconf_t const *xecfg, long ri,
+ char *buf, long sz)
+{
+ const char *rec;
+ long len = xdl_get_rec(xdf, ri, &rec);
+ if (!xecfg->find_func)
+ return def_ff(rec, len, buf, sz, xecfg->find_func_priv);
+ return xecfg->find_func(rec, len, buf, sz, xecfg->find_func_priv);
+}
+
struct func_line {
long len;
char buf[80];
static long get_func_line(xdfenv_t *xe, xdemitconf_t const *xecfg,
struct func_line *func_line, long start, long limit)
{
- find_func_t ff = xecfg->find_func ? xecfg->find_func : def_ff;
long l, size, step = (start > limit) ? -1 : 1;
char *buf, dummy[1];
size = func_line ? sizeof(func_line->buf) : sizeof(dummy);
for (l = start; l != limit && 0 <= l && l < xe->xdf1.nrec; l += step) {
- const char *rec;
- long reclen = xdl_get_rec(&xe->xdf1, l, &rec);
- long len = ff(rec, reclen, buf, size, xecfg->find_func_priv);
+ long len = match_func_rec(&xe->xdf1, xecfg, l, buf, size);
if (len >= 0) {
if (func_line)
func_line->len = len;
return -1;
}
+static int is_empty_rec(xdfile_t *xdf, long ri)
+{
+ const char *rec;
+ long len = xdl_get_rec(xdf, ri, &rec);
+
+ while (len > 0 && XDL_ISSPACE(*rec)) {
+ rec++;
+ len--;
+ }
+ return !len;
+}
+
int xdl_emit_diff(xdfenv_t *xe, xdchange_t *xscr, xdemitcb_t *ecb,
xdemitconf_t const *xecfg) {
long s1, s2, e1, e2, lctx;
s2 = XDL_MAX(xch->i2 - xecfg->ctxlen, 0);
if (xecfg->flags & XDL_EMIT_FUNCCONTEXT) {
- long fs1 = get_func_line(xe, xecfg, NULL, xch->i1, -1);
+ long fs1, i1 = xch->i1;
+
+ /* Appended chunk? */
+ if (i1 >= xe->xdf1.nrec) {
+ char dummy[1];
+ long i2 = xch->i2;
+
+ /*
+ * We don't need additional context if
+ * a whole function was added, possibly
+ * starting with empty lines.
+ */
+ while (i2 < xe->xdf2.nrec &&
+ is_empty_rec(&xe->xdf2, i2))
+ i2++;
+ if (i2 < xe->xdf2.nrec &&
+ match_func_rec(&xe->xdf2, xecfg, i2,
+ dummy, sizeof(dummy)) >= 0)
+ goto post_context_calculation;
+
+ /*
+ * Otherwise get more context from the
+ * pre-image.
+ */
+ i1 = xe->xdf1.nrec - 1;
+ }
+
+ fs1 = get_func_line(xe, xecfg, NULL, i1, -1);
if (fs1 < 0)
fs1 = 0;
if (fs1 < s1) {
}
}
- again:
+ post_context_calculation:
lctx = xecfg->ctxlen;
lctx = XDL_MIN(lctx, xe->xdf1.nrec - (xche->i1 + xche->chg1));
lctx = XDL_MIN(lctx, xe->xdf2.nrec - (xche->i2 + xche->chg2));
long fe1 = get_func_line(xe, xecfg, NULL,
xche->i1 + xche->chg1,
xe->xdf1.nrec);
+ while (fe1 > 0 && is_empty_rec(&xe->xdf1, fe1 - 1))
+ fe1--;
if (fe1 < 0)
fe1 = xe->xdf1.nrec;
if (fe1 > e1) {
* its new end.
*/
if (xche->next) {
- long l = xche->next->i1;
+ long l = XDL_MIN(xche->next->i1,
+ xe->xdf1.nrec - 1);
if (l <= e1 ||
get_func_line(xe, xecfg, NULL, l, e1) < 0) {
xche = xche->next;
- goto again;
+ goto post_context_calculation;
}
}
}
/*
* LibXDiff by Davide Libenzi ( File Differential Library )
- * Copyright (C) 2003-2009 Davide Libenzi, Johannes E. Schindelin
+ * Copyright (C) 2003-2016 Davide Libenzi, Johannes E. Schindelin
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
return 0;
}
} else if (flags & XDF_IGNORE_WHITESPACE_AT_EOL) {
- while (i1 < s1 && i2 < s2 && l1[i1++] == l2[i2++])
- ; /* keep going */
+ while (i1 < s1 && i2 < s2 && l1[i1] == l2[i2]) {
+ i1++;
+ i2++;
+ }
}
/*