--- /dev/null
+Git 2.11 Release Notes
+======================
+
+Backward compatibility notes.
+
+ * An empty string used as a pathspec element has always meant
+ 'everything matches', but it is too easy to write a script that
+ finds a path to remove in $path and runs 'git rm "$paht"' (note
+ the misspelled variable, which expands to the empty string), which
+ ends up removing everything. This release starts warning about the
+ use of an empty string to mean 'everything matches' and asks users
+ to use the more explicit '.' instead.
+
+ The hope is that existing users will not mind this change, and
+ eventually the warning can be turned into a hard error, upgrading
+ the deprecation into removal of this (mis)feature.
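+
+ As an illustration only (the path is made up and the $paht
+ misspelling is the whole point), the hazard and the suggested
+ explicit spelling:
+
+     path=some/file.txt
+     git rm -r -- "$paht"   # typo expands to "", which used to match everything
+     git rm -r -- .         # the explicit way to say "everything matches"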
+
+ * The historical argument order "git merge <msg> HEAD <commit>..."
+ has been deprecated for quite some time, and will be removed in the
+ next release (not this one).
+
+ * The default abbreviation length, which has historically been 7, now
+ scales as the repository grows, using the approximate number of
+ objects in the repository and a bit of math around the birthday
+ paradox. The logic suggests using 12 hexdigits for the Linux
+ kernel, and 9 to 10 for Git itself.
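+
+ For example, to ask for or pin a specific abbreviation length
+ yourself (the value 12 below is only an illustration):
+
+     git rev-parse --short=12 HEAD
+     git config core.abbrev 12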
+
+
+Updates since v2.10
+-------------------
+
+UI, Workflows & Features
+
+ * Comes with new version of git-gui, now at its 0.21.0 tag.
+
+ * "git format-patch --cover-letter HEAD^" to format a single patch
+ with a separate cover letter now numbers the output as [PATCH 0/1]
+ and [PATCH 1/1] by default.
+
+ * An incoming "git push" that attempts to push too many bytes can now
+ be rejected by setting a new configuration variable at the receiving
+ end.
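+
+ For example, using the receive.maxInputSize variable documented in
+ this release (the limit below, in bytes, is only an illustration):
+
+     git config receive.maxInputSize 104857600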
+
+ * "git nosuchcommand --help" said "No manual entry for gitnosuchcommand",
+ which was not intuitive, given that "git nosuchcommand" said "git:
+ 'nosuchcommand' is not a git command".
+
+ * "git clone --resurse-submodules --reference $path $URL" is a way to
+ reduce network transfer cost by borrowing objects in an existing
+ $path repository when cloning the superproject from $URL; it
+ learned to also peek into $path for presense of corresponding
+ repositories of submodules and borrow objects from there when able.
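+
+ A sketch of the invocation (the local mirror path and the URL are
+ placeholders):
+
+     git clone --recurse-submodules --reference /srv/mirrors/project \
+         https://example.com/project.git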
+
+ * The "git diff --submodule={short,log}" mechanism has been enhanced
+ to allow "--submodule=diff" to show the patch between the submodule
+ commits bound to the superproject.
+
+ * Even though "git hash-object", which is a tool to take an
+ on-filesystem data stream and put it into the Git object store,
+ could perform the "outside-world-to-Git" conversions (e.g.
+ end-of-line conversions and application of the clean filter), and
+ it had the feature on by default from very early days, its reverse
+ operation "git cat-file", which takes an object from the Git object
+ store and externalizes it for consumption by the outside world,
+ lacked an equivalent mechanism to run the "Git-to-outside-world"
+ conversion. The command learned the "--filters" option to do so.
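+
+ For example (HEAD:README is just an illustrative blob name):
+
+     git cat-file --filters HEAD:README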
+
+ * Output from "git diff" can be made easier to read by intelligently
+ selecting which lines are treated as common and which as
+ added/deleted when the lines before and after the changed section
+ are the same. A command line option has been added to help
+ experiment with finding a good heuristic.
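+
+ The experimental knobs can be tried as follows (both are off by
+ default; see the diff documentation updated in this release):
+
+     git diff --compaction-heuristic
+     git diff --indent-heuristic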
+
+ * In some projects, it is common to use "[RFC PATCH]" as the subject
+ prefix for a patch meant for discussion rather than application. A
+ new option "--rfc" is a short-hand for "--subject-prefix=RFC PATCH"
+ to help the participants of such projects.
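+
+ For example, to format the latest commit for discussion:
+
+     git format-patch --rfc -1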
+
+ * "git add --chmod=+x <pathspec>" added recently only toggled the
+ executable bit for paths that are either new or modified. This has
+ been corrected to flip the executable bit for all paths that match
+ the given pathspec.
+
+ * When "git format-patch --stdout" output is placed as an in-body
+ header and it uses the RFC2822 header folding, "git am" failed to
+ put the header line back into a single logical line. The
+ underlying "git mailinfo" was taught to handle this properly.
+
+ * "gitweb" can spawn "highlight" to show blob contents with
+ (programming) language-specific syntax highlighting, but only
+ when the language is known. "highlight" can however be told
+ to make the guess itself by giving it the "--force" option, which
+ has been enabled.
+
+ * "git gui" l10n to Portuguese.
+
+ * When given an abbreviated object name that is not (or more
+ realistically, "no longer") unique, we gave a fatal error
+ "ambiguous argument". This error is now accompanied by hints that
+ list the objects that begin with the given prefix. During the
+ course of development of this new feature, numerous minor bugs were
+ uncovered and corrected, the most notable one of which is that we
+ gave "short SHA1 xxxx is ambiguous." twice without good reason.
+
+ * "git log rev^..rev" is an often-used revision range specification
+ to show what was done on a side branch merged at rev. This has
+ gained a short-hand "rev^-1". In general "rev^-$n" is the same as
+ "^rev^$n rev", i.e. what has happened on other branches while the
+ history leading to nth parent was looking the other way.
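+
+ For illustration, with M standing for any merge commit:
+
+     git log M^-1      # same as: git log M^1..M
+     git log M^-       # the "1" may be omitted for the first parent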
+
+ * In recent versions of cURL, GSSAPI credential delegation is
+ disabled by default due to CVE-2011-2192; introduce a configuration
+ to selectively allow enabling this.
+ (merge 26a7b23429 ps/http-gssapi-cred-delegation later to maint).
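+
+ For example, using the http.delegation variable documented in this
+ release (possible values are none, policy and always):
+
+     git config http.delegation policy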
+
+ * "git mergetool" learned to honor "-O<orderfile>" to control the
+ order of paths to present to the end user.
+
+ * "git diff/log --ws-error-highlight=<kind>" lacked the corresponding
+ configuration variable to set it by default.
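+
+ The new variable can be set like this, mirroring the command line
+ option:
+
+     git config diff.wsErrorHighlight new,old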
+
+ * "git ls-files" learned "--recurse-submodules" option that can be
+ used to get a listing of tracked files across submodules (i.e. this
+ only works with "--cached" option, not for listing untracked or
+ ignored files). This would be a useful tool to sit on the upstream
+ side of a pipe that is read with xargs to work on all working tree
+ files from the top-level superproject.
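+
+ A sketch of the pipe usage described above (the grep pattern is
+ only an illustration):
+
+     git ls-files --recurse-submodules -z |
+         xargs -0 grep -l "TODO"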
+
+ * A new credential helper that talks via "libsecret" with
+ implementations of XDG Secret Service API has been added to
+ contrib/credential/.
+
+ * The GPG verification status shown in "%G?" pretty format specifier
+ was not rich enough to differentiate a signature made by an expired
+ key, a signature made by a revoked key, etc. New output letters
+ have been assigned to express them.
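+
+ For example, to see the verification letter next to each commit:
+
+     git log --pretty='format:%G? %h %s'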
+
+ * In addition to purely abbreviated commit object names, "gitweb"
+ learned to turn "git describe" output (e.g. v2.9.3-599-g2376d31787)
+ into clickable links in its output.
+
+ * When new paths were added by "git add -N" to the index, that alone
+ was enough to defeat the check by "git commit" that refrains from
+ making an empty commit without "--allow-empty". The same logic
+ prevented "git status" from showing such a path as "new file" in
+ the "Changes not staged for commit" section.
+
+
+Performance, Internal Implementation, Development Support etc.
+
+ * The delta-base-cache mechanism has been a key to the performance in
+ a repository with a tightly packed packfile, but it did not scale
+ well even with a larger value of core.deltaBaseCacheLimit.
+
+ * Enhance "git status --porcelain" output by collecting more data on
+ the state of the index and the working tree files, which may
+ further be used to teach git-prompt (in contrib/) to make fewer
+ calls to git.
+
+ * Extract a small helper out of the function that reads the authors
+ script file "git am" internally uses.
+ (merge a77598e jc/am-read-author-file later to maint).
+
+ * Lift calls to exit(2) and die() higher in the callchain in
+ sequencer.c so that more helper functions in it can be used
+ by callers that want to handle error conditions themselves.
+
+ * "git am" has been taught to make an internal call to "git apply"'s
+ innards without spawning the latter as a separate process.
+
+ * The ref-store abstraction was introduced to the refs API so that we
+ can plug in different backends to store references.
+
+ * The "unsigned char sha1[20]" to "struct object_id" conversion
+ continues. Notable changes in this round include that ce->sha1,
+ i.e. the object name recorded in the cache_entry, turns into an
+ object_id.
+
+ * JGit can show a fake ref "capabilities^{}" to "git fetch" when it
+ does not advertise any refs, but "git fetch" was not prepared to
+ see such an advertisement. When the other side disconnects without
+ giving any ref advertisement, we used to say "there may not be a
+ repository at that URL", but we may have seen other advertisement
+ like "shallow" and ".have" in which case we definitely know that a
+ repository is there. The code to detect this case has also been
+ updated.
+
+ * Some codepaths in "git pack-objects" were not ready to use an
+ existing pack bitmap; now they are and as the result they have
+ become faster.
+
+ * The codepath in "git fsck" to detect malformed tree objects has
+ been updated not to die but keep going after detecting them.
+
+ * We call "qsort(array, nelem, sizeof(array[0]), fn)", and most of
+ the time third parameter is redundant. A new QSORT() macro lets us
+ omit it.
+
+ * "git pack-objects" in a repository with many packfiles used to
+ spend a lot of time looking for/at objects in them; the accesses to
+ the packfiles are now optimized by checking the most-recently-used
+ packfile first.
+ (merge c9af708b1a jk/pack-objects-optim-mru later to maint).
+
+ * Codepaths involved in interacting with the alternate object store
+ have been cleaned up.
+
+ * In order for the receiving end of "git push" to inspect the
+ received history and decide to reject the push, the objects sent
+ from the sending end need to be made available to the hook and
+ the mechanism for the connectivity check, and this was done
+ traditionally by storing the objects in the receiving repository
+ and letting "git gc" expire them. Instead, store the newly
+ received objects in a temporary area, and make them available by
+ reusing the alternate object store mechanism only while we decide
+ whether to accept the push; once we decide, either migrate them
+ to the repository or purge them immediately.
+
+ * The require_clean_work_tree() helper was recreated in C when "git
+ pull" was rewritten from shell; the helper is now made available to
+ other callers in preparation for upcoming "rebase -i" work.
+
+ * "git upload-pack" had its code cleaned-up and performance improved
+ by reducing use of timestamp-ordered commit-list, which was
+ replaced with a priority queue.
+
+ * "git diff --no-index" codepath has been updated not to try to peek
+ into .git/ directory that happens to be under the current
+ directory, when we know we are operating outside any repository.
+
+ * Update of the sequencer codebase to make it reusable to reimplement
+ "rebase -i" continues.
+
+
+Also contains various documentation updates and code clean-ups.
+
+
+Fixes since v2.10
+-----------------
+
+Unless otherwise noted, all the fixes since v2.10 in the maintenance
+track are contained in this release (see the maintenance releases'
+notes for details).
+
+ * Clarify various ways to specify the "revision ranges" in the
+ documentation.
+
+ * "diff-highlight" script (in contrib/) learned to work better with
+ "git log -p --graph" output.
+
+ * The test framework left the number of tests and success/failure
+ count in the t/test-results directory, keyed by the name of the
+ test script plus the process ID. The latter however turned out not
+ to serve any useful purpose. The process ID part of the filename
+ has been removed.
+
+ * Having a submodule whose ".git" repository is somehow corrupt
+ caused a few commands that recurse into submodules to loop forever.
+
+ * "git symbolic-ref -d HEAD" happily removes the symbolic ref, but
+ the resulting repository becomes an invalid one. Teach the command
+ to forbid removal of HEAD.
+
+ * A test spawned a short-lived background process, which sometimes
+ prevented the test directory from getting removed at the end of the
+ script on some platforms.
+
+ * Update a few tests that used to use GIT_CURL_VERBOSE to use the
+ newer GIT_TRACE_CURL.
+
+ * "git pack-objects --include-tag" was taught that when we know that
+ we are sending an object C, we want a tag B that directly points at
+ C but also a tag A that points at the tag B. We used to miss the
+ intermediate tag B in some cases.
+
+ * Update Japanese translation for "git-gui".
+
+ * "git fetch http::/site/path" did not die correctly and segfaulted
+ instead.
+
+ * "git commit-tree" stopped reading commit.gpgsign configuration
+ variable that was meant for Porcelain "git commit" in Git 2.9; we
+ forgot to update "git gui" to look at the configuration to match
+ this change.
+
+ * "git add --chmod=+x" added recently lacked documentation, which has
+ been corrected.
+
+ * "git log --cherry-pick" used to include merge commits as candidates
+ to be matched up with other commits, resulting in a lot of wasted time.
+ The patch-id generation logic has been updated to ignore merges to
+ avoid the wastage.
+
+ * The http transport (with curl-multi option, which is the default
+ these days) failed to remove curl-easy handle from a curlm session,
+ which led to unnecessary API failures.
+
+ * There were numerous corner cases in which the configuration files
+ are read and used or not read at all depending on the directory in
+ which a Git command was run, leading to inconsistent behaviour. The code
+ to set-up repository access at the beginning of a Git process has
+ been updated to fix them.
+ (merge 4d0efa1 jk/setup-sequence-update later to maint).
+
+ * "git diff -W" output needs to extend the context backward to
+ include the header line of the current function and also forward to
+ include the body of the entire current function up to the header
+ line of the next one. This process may have to merge two adjacent
+ hunks, but the code forgot to do so in some cases.
+
+ * Performance tests done via "t/perf" did not use the same set of
+ build configuration when the user relied on autoconf-generated
+ configuration.
+
+ * "git format-patch --base=..." feature that was recently added
+ showed the base commit information after "-- " e-mail signature
+ line, which turned out to be inconvenient. The base information
+ has been moved above the signature line.
+
+ * More i18n.
+
+ * Even when "git pull --rebase=preserve" (and the underlying "git
+ rebase --preserve") can complete without creating any new commit
+ (i.e. fast-forwards), it still insisted on having a usable ident
+ information (read: user.email is set correctly), which was less
+ than nice. As the underlying commands used inside "git rebase"
+ would fail with a more meaningful error message and advice text
+ when the bogus ident matters, this extra check was removed.
+
+ * "git gc --aggressive" used to limit the delta-chain length to 250,
+ which is way too deep for gaining additional space savings and is
+ detrimental for runtime performance. The limit has been reduced to
+ 50.
+
+ * Documentation for individual configuration variables to control use
+ of color (like `color.grep`) said that their default value is
+ 'false', instead of saying their default is taken from `color.ui`.
+ When we updated the default value for color.ui from 'false' to
+ 'auto' quite a while ago, all of them broke. This has been
+ corrected.
+
+ * The pretty-format specifier "%C(auto)" used by the "log" family of
+ commands to enable coloring of the output is taught to also issue a
+ color-reset sequence to the output.
+ (merge 82b83da8d3 rs/c-auto-resets-attributes later to maint).
+
+ * A shell script example in check-ref-format documentation has been
+ fixed.
+
+ * "git checkout <word>" does not follow the usual disambiguation
+ rules when the <word> can be both a rev and a path, to allow
+ checking out a branch 'foo' in a project that happens to have a
+ file 'foo' in the working tree without having to disambiguate.
+ This was poorly documented and the check was incorrect when the
+ command was run from a subdirectory.
+
+ * Some codepaths in "git diff" used regexec(3) on a buffer that was
+ mmap(2)ed, which may not have a terminating NUL, leading to a read
+ beyond the end of the mapped region. This was fixed by introducing
+ a regexec_buf() helper that takes a <ptr,len> pair with REG_STARTEND
+ extension.
+ (merge 842a516cb0 js/regexec-buf later to maint).
+
+ * The procedure to build Git on Mac OS X for Travis CI hardcoded the
+ internal directory structure we assumed HomeBrew uses, which was a
+ no-no. The procedure has been updated to ask HomeBrew things we
+ need to know to fix this.
+
+ * When "git rebase -i" is given a broken instruction, it told the
+ user to fix it with "--edit-todo", but didn't say what the step
+ after that was (i.e. "--continue").
+
+ * Documentation around tools to import from CVS was fairly outdated.
+
+ * "git clone --recurse-submodules" lost the progress eye-candy in
+ recent update, which has been corrected.
+
+ * A low-level function verify_packfile() was meant to show errors
+ that were detected without dying itself, but under some conditions
+ it didn't and died instead, which has been fixed.
+
+ * When "git fetch" tries to find where the history of the repository
+ it runs in has diverged from what the other side has, it has a
+ mechanism to avoid digging too deep into irrelevant side branches.
+ This however did not work well over the "smart-http" transport due
+ to a design bug, which has been fixed.
+ (merge 06b3d386e0 jt/fetch-pack-in-vain-count-with-stateless later to maint).
+
+ * In the codepath that comes up with the hostname to be used in an
+ e-mail when the user didn't tell us, we looked at ai_canonname
+ field in struct addrinfo without making sure it is not NULL first.
+
+ * "git worktree", even though it used the default_abbrev setting that
+ ought to be affected by core.abbrev configuration variable, ignored
+ the variable setting. The command has been taught to read the
+ default set of configuration variables to correct this.
+
+ * "git init" tried to record core.worktree in the repository's
+ 'config' file when GIT_WORK_TREE environment variable was set and
+ it was different from where GIT_DIR appears as ".git" at its top,
+ but the logic was faulty when .git is a "gitdir:" file that points
+ at the real place, causing trouble in working trees that are
+ managed by "git worktree". This has been corrected.
+
+ * Codepaths that read from an on-disk loose object were too loose in
+ validating that they are reading a proper object file, and sometimes
+ read past the data they read from the disk, which has been
+ corrected. H/t to Gustavo Grieco for reporting.
+
+ * The original command line syntax for "git merge", which was "git
+ merge <msg> HEAD <parent>...", has been deprecated for quite some
+ time, and "git gui" was the last in-tree user of the syntax. This
+ is finally fixed, so that we can move forward with the deprecation.
+
+ * An author name, that spelled a backslash-quoted double quote in the
+ human readable part "My \"double quoted\" name", was not unquoted
+ correctly while applying a patch from a piece of e-mail.
+
+ * Doc update to clarify what "log -3 --reverse" does.
+
+ * Almost everybody uses DEFAULT_ABBREV to refer to the default
+ setting for the abbreviation, but "git blame" peeked into the
+ underlying variable, bypassing the macro for no good reason.
+
+ * The "graph" API used in "git log --graph" miscounted the number of
+ output columns consumed so far when drawing a padding line, which
+ has been fixed; this did not affect any existing code as nobody
+ tried to write anything after the padding on such a line, though.
+
+ * The code that parses the format parameter of for-each-ref command
+ has seen a micro-optimization.
+
+ * When we started using cURL to talk to an IMAP server (when a new
+ enough version of the cURL library is available), we forgot to
+ explicitly add imap(s):// before the destination. To some folks,
+ that didn't work and the library tried to make HTTP(s) requests
+ instead.
+ (merge d2d07ab861 ak/curl-imap-send-explicit-scheme later to maint).
+
+ * The ./configure script generated from configure.ac was taught how
+ to detect support of SSL by libcurl better.
+ (merge 924b7eb1c9 dp/autoconf-curl-ssl later to maint).
+
+ * The command-line completion script (in contrib/) learned to
+ complete "git cmd ^mas<HT>" into "git cmd ^master", i.e. to
+ complete the negative end of a reference.
+ (merge 49416ad22a cp/completion-negative-refs later to maint).
+
+ * The existing "git fetch --depth=<n>" option was hard to use
+ correctly when making the history of an existing shallow clone
+ deeper. A new option, "--deepen=<n>", has been added to make this
+ easier to use. "git clone" also learned "--shallow-since=<date>"
+ and "--shallow-exclude=<tag>" options to make it easier to specify
+ "I am interested only in the recent N months worth of history" and
+ "Give me only the history since that version".
+ (merge cccf74e2da nd/shallow-deepen later to maint).
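+
+ A sketch of the new options (the count, date, tag and URL are
+ placeholders):
+
+     git fetch --deepen=100
+     git clone --shallow-since=2016-01-01 https://example.com/repo.git
+     git clone --shallow-exclude=v2.8.0 https://example.com/repo.git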
+
+ * It is a common mistake to say "git blame --reverse OLD path",
+ expecting that the command line is dwimmed as if asking how lines
+ in path in an old revision OLD have survived up to the current
+ commit.
+ (merge e1d09701a4 jc/blame-reverse later to maint).
+
+ * http.emptyauth configuration is a way to allow an empty username to
+ pass when attempting to authenticate using mechanisms like
+ Kerberos. We took an unspecified (NULL) username and sent ":"
+ (i.e. no username, no password) to CURLOPT_USERPWD, but did not do
+ the same when the username is explicitly set to an empty string.
+ (merge 5275c3081c dt/http-empty-auth later to maint).
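+
+ For example, to opt into sending the empty credential:
+
+     git config http.emptyauth true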
+
+ * "git clone" of a local repository can be done at the filesystem
+ level, but the codepath did not check errors while copying and
+ adjusting the file that lists alternate object stores.
+ (merge 22d3b8de1b jk/clone-copy-alternates-fix later to maint).
+
+ * Documentation for "git commit" was updated to clarify that "commit
+ -p <paths>" adds to the current contents of the index to come up
+ with what to commit.
+ (merge 7431596ab1 nd/commit-p-doc later to maint).
+
+ * A stray symbolic link in $GIT_DIR/refs/ directory could make name
+ resolution loop forever, which has been corrected.
+ (merge e8c42cb9ce jk/ref-symlink-loop later to maint).
+
+ * The "submodule.<name>.path" stored in .gitmodules is never copied
+ to .git/config and such a key in .git/config has no meaning, but
+ the documentation described it and submodule.<name>.url next to
+ each other as if both belong to .git/config. This has been fixed.
+ (merge 72710165c9 sb/submodule-config-doc-drop-path later to maint).
+
+ * In a worktree connected to a repository elsewhere, created via "git
+ worktree", "git checkout" attempts to protect users from confusion
+ by refusing to check out a branch that is already checked out in
+ another worktree. However, this also prevented checking out a
+ branch, which is designated as the primary branch of a bare
+ repository, in a worktree that is connected to the bare
+ repository. The check has been corrected to allow it.
+ (merge 171c646f8c dk/worktree-dup-checkout-with-bare-is-ok later to maint).
+
+ * "git rebase" immediately after "git clone" failed to find the fork
+ point from the upstream.
+ (merge 4f21454b55 jk/merge-base-fork-point-without-reflog later to maint).
+
+ * When fetching from a remote that has many tags that are irrelevant
+ to the branches we are following, we used to waste cycles by
+ checking too carefully whether the object pointed at by a tag (that
+ we are not going to fetch!) exists in our repository.
+ (merge 5827a03545 jk/fetch-quick-tag-following later to maint).
+
+ * Protect our code from over-eager compilers.
+ (merge 0ac52a38e8 jk/tighten-alloc later to maint).
+
+ * Recent git allows submodule.<name>.branch to use a special token
+ "." instead of the branch name; the documentation has been updated
+ to describe it.
+ (merge 15ef78008a bw/submodule-branch-dot-doc later to maint).
+
+ * A hot-fix for a test added by a recent topic that went to both
+ 'master' and 'maint' already.
+ (merge 76e368c378 tg/add-chmod+x-fix later to maint).
+
+ * "git send-email" attempts to pick up valid e-mails from the
+ trailers, but people in the real world write non-addresses there, like
+ "Cc: Stable <add@re.ss> # 4.8+", which broke the output depending
+ on the availability and vintage of Mail::Address perl module.
+ (merge dcfafc5214 mm/send-email-cc-cruft-after-address later to maint).
+
+ * The Travis CI configuration we ship ran the tests with the
+ --verbose option, but this risked non-TAP output that happens to
+ say "ok" being misinterpreted as TAP signalling a test that
+ passed. This resulted in unnecessary failure. This has been
+ corrected by introducing a new mode in the test harness that sends
+ the verbose output separately to the log file.
+ (merge 614fe01521 jk/tap-verbose-fix later to maint).
+
+ * Some AsciiDoc formatters mishandle a displayed illustration with
+ tabs in it. Adjust a few of them in the merge-base documentation to
+ work around this.
+ (merge 6750f62699 po/fix-doc-merge-base-illustration later to maint).
+
+ * A minor regression fix for "git submodule" that was introduced
+ when more helper functions were reimplemented in C.
+ (merge 77b63ac31e sb/submodule-ignore-trailing-slash later to maint).
+
+ * The code that we have used for the past 10+ years to cycle
+ 4-element ring buffers turns out to be not quite portable in
+ theoretical world.
+ (merge bb84735c80 rs/ring-buffer-wraparound later to maint).
+
+ * "git daemon" used fixed-length buffers to turn URL to the
+ repository the client asked for into the server side directory
+ path, using snprintf() to avoid overflowing these buffers, but
+ allowed possibly truncated paths to the directory. This has been
+ tightened to reject such a request that causes overlong path to be
+ required to serve.
+ (merge 6bdb0083be jk/daemon-path-ok-check-truncation later to maint).
+
+ * Other minor doc, test and build updates and code cleanups.
+ (merge a94bb68397 rs/cocci later to maint).
+ (merge 641c900b2c js/reset-usage later to maint).
+ (merge 30cfe72d37 rs/pretty-format-color-doc-fix later to maint).
+ (merge d709f1fb9d jc/diff-unique-abbrev-comments later to maint).
+ (merge 13092a916d jc/cocci-xstrdup-or-null later to maint).
+ (merge 86009f32bb pb/test-parse-options-expect later to maint).
+ (merge 749a2279a4 yk/git-tag-remove-mention-of-old-layout-in-doc later to maint).
-S <revs-file>::
Use revisions from revs-file instead of calling linkgit:git-rev-list[1].
---reverse::
+--reverse <rev>..<rev>::
Walk history forward instead of backward. Instead of showing
the revision in which a line appeared, this shows the last
revision in which a line has existed. This requires a range of
revision like START..END where the path to blame exists in
- START.
+ START. `git blame --reverse START` is taken as `git blame
+ --reverse START..HEAD` for convenience.
-p::
--porcelain::
a username in the URL, as libcurl normally requires a username for
authentication.
+http.delegation::
+ Control GSSAPI credential delegation. The delegation is disabled
+ by default in libcurl since version 7.21.7. Set parameter to tell
+ the server what it is allowed to delegate when it comes to user
+ credentials. Used with GSS/kerberos. Possible values are:
++
+--
+* `none` - Don't allow any delegation.
+* `policy` - Delegates if and only if the OK-AS-DELEGATE flag is set in the
+ Kerberos service ticket, which is a matter of realm policy.
+* `always` - Unconditionally allow the server to delegate.
+--
+
+
http.extraHeader::
Pass an additional HTTP header when communicating with a server. If
more than one such entry exists, all of them are added as extra
command in the todo-list.
Defaults to "ignore".
-rebase.instructionFormat
+rebase.instructionFormat::
A format string, as specified in linkgit:git-log[1], to be used for
the instruction list during an interactive rebase. The format will automatically
have the long commit hash prepended to the format.
especially on slow filesystems. If not set, the value of
`transfer.unpackLimit` is used instead.
+receive.maxInputSize::
+ If the size of the incoming pack stream is larger than this
+ limit, then git-receive-pack will error out, instead of
+ accepting the pack file. If not set or set to 0, then the size
+ is unlimited.
+
receive.denyDeletes::
If set to true, git-receive-pack will deny a ref update that deletes
the ref. Use this to prevent such a ref deletion via a push.
in parallel. A value of 0 will give some reasonable default.
If unset, it defaults to 1.
+submodule.alternateLocation::
+ Specifies how the submodules obtain alternates when submodules are
+ cloned. Possible values are `no`, `superproject`.
+ By default `no` is assumed, which doesn't add references. When the
+ value is set to `superproject` the submodule to be cloned computes
+ its alternates location relative to the superproject's alternate.
+
+submodule.alternateErrorStrategy::
+ Specifies how to treat errors with the alternates for a submodule
+ as computed via `submodule.alternateLocation`. Possible values are
+ `ignore`, `info`, `die`. Default is `die`.
+
tag.forceSignAnnotated::
A boolean to specify whether annotated tags created should be GPG signed.
If `--annotate` is specified on the command line, it takes
diff.submodule::
Specify the format in which differences in submodules are
- shown. The "log" format lists the commits in the range like
- linkgit:git-submodule[1] `summary` does. The "short" format
- format just shows the names of the commits at the beginning
- and end of the range. Defaults to short.
+ shown. The "short" format just shows the names of the commits
+ at the beginning and end of the range. The "log" format lists
+ the commits in the range like linkgit:git-submodule[1] `summary`
+ does. The "diff" format shows an inline diff of the changed
+ contents of the submodule. Defaults to "short".
diff.wordRegex::
A POSIX Extended Regular Expression used to determine what is a "word"
include::mergetools-diff.txt[]
+diff.indentHeuristic::
diff.compactionHeuristic::
- Set this option to `true` to enable an experimental heuristic that
- shifts the hunk boundary in an attempt to make the resulting
- patch easier to read.
+ Set one of these options to `true` to enable one of two
+ experimental heuristics that shift diff hunk boundaries to
+ make patches easier to read.
diff.algorithm::
Choose a diff algorithm. The variants are as follows:
low-occurrence common elements".
--
+
+
+diff.wsErrorHighlight::
+ A comma separated list of `old`, `new`, `context`, that
+ specifies how whitespace errors on lines are highlighted
+ with `color.diff.whitespace`. Can be overridden by the
+ command line option `--ws-error-highlight=<kind>`.
--- /dev/null
+--indent-heuristic::
+--no-indent-heuristic::
+--compaction-heuristic::
+--no-compaction-heuristic::
+ These are to help debugging and tuning experimental heuristics
+ (which are off by default) that shift diff hunk boundaries to
+ make patches easier to read.
Synonym for `-p --raw`.
endif::git-format-patch[]
---compaction-heuristic::
---no-compaction-heuristic::
- These are to help debugging and tuning an experimental
- heuristic (which is off by default) that shifts the hunk
- boundary in an attempt to make the resulting patch easier
- to read.
+include::diff-heuristic-options.txt[]
--minimal::
Spend extra time to make sure the smallest possible
of the `--diff-filter` option on what the status letters mean.
--submodule[=<format>]::
- Specify how differences in submodules are shown. When `--submodule`
- or `--submodule=log` is given, the 'log' format is used. This format lists
- the commits in the range like linkgit:git-submodule[1] `summary` does.
- Omitting the `--submodule` option or specifying `--submodule=short`,
- uses the 'short' format. This format just shows the names of the commits
- at the beginning and end of the range. Can be tweaked via the
- `diff.submodule` configuration variable.
+ Specify how differences in submodules are shown. When specifying
+ `--submodule=short` the 'short' format is used. This format just
+ shows the names of the commits at the beginning and end of the range.
+ When `--submodule` or `--submodule=log` is specified, the 'log'
+ format is used. This format lists the commits in the range like
+ linkgit:git-submodule[1] `summary` does. When `--submodule=diff`
+ is specified, the 'diff' format is used. This format shows an
+ inline diff of the changes in the submodule contents between the
+ commit range. Defaults to `diff.submodule` or the 'short' format
+ if the config option is unset.
--color[=<when>]::
Show colored diff.
lines are highlighted. E.g. `--ws-error-highlight=new,old`
highlights whitespace errors on both deleted and added lines.
`all` can be used as a short-hand for `old,new,context`.
+ The `diff.wsErrorHighlight` configuration variable can be
+ used to specify the default behaviour.
endif::git-format-patch[]
--no-prefix::
Do not show any source or destination prefix.
+--line-prefix=<prefix>::
+ Prepend an additional prefix to every line of output.
+
+--ita-invisible-in-index::
+ By default entries added by "git add -N" appear as an existing
+ empty file in "git diff" and a new file in "git diff --cached".
+ This option makes the entry appear as a new file in "git diff"
+ and non-existent in "git diff --cached". This option could be
+ reverted with `--ita-visible-in-index`. Both options are
+ experimental and could be removed in future.
+
For more detailed explanation on these common options, see also
linkgit:gitdiffcore[7].
linkgit:git-clone[1]), deepen or shorten the history to the specified
number of commits. Tags for the deepened commits are not fetched.
+--deepen=<depth>::
+ Similar to --depth, except it specifies the number of commits
+ from the current shallow boundary instead of from the tip of
+ each remote branch history.
+
+--shallow-since=<date>::
+ Deepen or shorten the history of a shallow repository to
+ include all reachable commits after <date>.
+
+--shallow-exclude=<revision>::
+ Deepen or shorten the history of a shallow repository to
+ exclude commits reachable from a specified remote branch or tag.
+ This option can be specified multiple times.
+
--unshallow::
If the source repository is complete, convert a shallow
repository to a complete one, removing all the limitations
OPTIONS
-------
include::blame-options.txt[]
+include::diff-heuristic-options.txt[]
SEE ALSO
--------
[verse]
'git blame' [-c] [-b] [-l] [--root] [-t] [-f] [-n] [-s] [-e] [-p] [-w] [--incremental]
[-L <range>] [-S <revs-file>] [-M] [-C] [-C] [-C] [--since=<date>]
- [--progress] [--abbrev=<n>] [<rev> | --contents <file> | --reverse <rev>]
+ [--progress] [--abbrev=<n>] [<rev> | --contents <file> | --reverse <rev>..<rev>]
[--] <file>
DESCRIPTION
abbreviated object name, use <n>+1 digits. Note that 1 column
is used for a caret to mark the boundary commit.
+include::diff-heuristic-options.txt[]
+
THE PORCELAIN FORMAT
--------------------
SYNOPSIS
--------
[verse]
-'git cat-file' (-t [--allow-unknown-type]| -s [--allow-unknown-type]| -e | -p | <type> | --textconv ) <object>
-'git cat-file' (--batch | --batch-check) [--follow-symlinks]
+'git cat-file' (-t [--allow-unknown-type]| -s [--allow-unknown-type]| -e | -p | <type> | --textconv | --filters ) [--path=<path>] <object>
+'git cat-file' (--batch | --batch-check) [ --textconv | --filters ] [--follow-symlinks]
DESCRIPTION
-----------
In its first form, the command provides the content or the type of an object in
the repository. The type is required unless `-t` or `-p` is used to find the
-object type, or `-s` is used to find the object size, or `--textconv` is used
-(which implies type "blob").
+object type, or `-s` is used to find the object size, or `--textconv` or
+`--filters` is used (which imply type "blob").
In the second form, a list of objects (separated by linefeeds) is provided on
-stdin, and the SHA-1, type, and size of each object is printed on stdout.
+stdin, and the SHA-1, type, and size of each object is printed on stdout. The
+output format can be overridden using the optional `<format>` argument. If
+either `--textconv` or `--filters` was specified, the input is expected to
+list the object names followed by the path name, separated by a single white
+space, so that the appropriate drivers can be determined.
OPTIONS
-------
--textconv::
Show the content as transformed by a textconv filter. In this case,
- <object> has be of the form <tree-ish>:<path>, or :<path> in order
- to apply the filter to the content recorded in the index at <path>.
+ <object> has to be of the form <tree-ish>:<path>, or :<path> in
+ order to apply the filter to the content recorded in the index at
+ <path>.
+
+--filters::
+ Show the content as converted by the filters configured in
+ the current working tree for the given <path> (i.e. smudge filters,
+ end-of-line conversion, etc). In this case, <object> has to be of
+ the form <tree-ish>:<path>, or :<path>.
+
+--path=<path>::
+ For use with --textconv or --filters, to allow specifying an object
+ name and a path separately, e.g. when it is difficult to figure out
+ the revision from which the blob came.
--batch::
--batch=<format>::
Print object information and contents for each object provided
- on stdin. May not be combined with any other options or arguments.
- See the section `BATCH OUTPUT` below for details.
+ on stdin. May not be combined with any other options or arguments
+ except `--textconv` or `--filters`, in which case the input lines
+ also need to specify the path, separated by white space. See the
+ section `BATCH OUTPUT` below for details.
--batch-check::
--batch-check=<format>::
Print object information for each object provided on stdin. May
- not be combined with any other options or arguments. See the
+ not be combined with any other options or arguments except
+ `--textconv` or `--filters`, in which case the input lines also
+ need to specify the path, separated by white space. See the
section `BATCH OUTPUT` below for details.
--batch-all-objects::
its source repository, you can simply run `git repack -a` to copy all
objects from the source repository into a pack in the cloned repository.
---reference <repository>::
+--reference[-if-able] <repository>::
If the reference repository is on the local machine,
automatically setup `.git/objects/info/alternates` to
obtain objects from the reference repository. Using
an already existing repository as an alternate will
require fewer objects to be copied from the repository
being cloned, reducing network and local storage costs.
+ When using `--reference-if-able`, a non-existing
+ directory is skipped with a warning instead of aborting
+ the clone.
+
*NOTE*: see the NOTE for the `--shared` option, and also the
`--dissociate` option.
tips of all branches. If you want to clone submodules shallowly,
also pass `--shallow-submodules`.
+--shallow-since=<date>::
+ Create a shallow clone with a history after the specified time.
+
+--shallow-exclude=<revision>::
+ Create a shallow clone with a history, excluding commits
+ reachable from a specified remote branch or tag. This option
+ can be specified multiple times.
+
--[no-]single-branch::
Clone only the history leading to the tip of a single branch,
either specified by the `--branch` option or the primary
+
size-garbage: disk space consumed by garbage files, in KiB (unless -H is
specified)
++
+alternate: absolute path of alternate object databases; may appear
+multiple times, one line per path. Note that if the path contains
+non-printable characters, it may be surrounded by double-quotes and
+contain C-style backslashed escape sequences.
-H::
--human-readable::
'git-upload-pack' treats the special depth 2147483647 as
infinite even if there is an ancestor-chain that long.
+--shallow-since=<date>::
+ Deepen or shorten the history of a shallow repository to
+ include all reachable commits after <date>.
+
+--shallow-exclude=<revision>::
+ Deepen or shorten the history of a shallow repository to
+ exclude commits reachable from a specified remote branch or tag.
+ This option can be specified multiple times.
+
+--deepen-relative::
+ Argument --depth specifies the number of commits from the
+ current shallow boundary instead of from the tip of each
+ remote branch history.
+
--no-progress::
Do not show the progress.
[--start-number <n>] [--numbered-files]
[--in-reply-to=Message-Id] [--suffix=.<sfx>]
[--ignore-if-in-upstream]
- [--subject-prefix=Subject-Prefix] [(--reroll-count|-v) <n>]
+ [--rfc] [--subject-prefix=Subject-Prefix]
+ [(--reroll-count|-v) <n>]
[--to=<email>] [--cc=<email>]
[--[no-]cover-letter] [--quiet] [--notes[=<ref>]]
[<common diff options>]
allows for useful naming of a patch series, and can be
combined with the `--numbered` option.
+--rfc::
+ Alias for `--subject-prefix="RFC PATCH"`. RFC means "Request For
+ Comments"; use this when sending an experimental patch for
+ discussion rather than application.
+
-v <n>::
--reroll-count=<n>::
Mark the series as the <n>-th iteration of the topic. The
Specifying 0 will cause Git to auto-detect the number of CPU's
and use maximum 3 threads.
+--max-input-size=<size>::
+ Die if the pack is larger than <size>.
Note
----
will be added before the new trailer.
Existing trailers are extracted from the input message by looking for
-a group of one or more lines that contain a colon (by default), where
-the group is preceded by one or more empty (or whitespace-only) lines.
+a group of one or more lines that (i) is all trailers, or (ii) contains at
+least one Git-generated trailer and consists of at least 25% trailers.
+The group must be preceded by one or more empty (or whitespace-only) lines.
The group must either be at the end of the message or be the last
non-whitespace lines before a line that starts with '---'. Such three
minus signs start the patch part of the message.
-When reading trailers, there can be whitespaces before and after the
+When reading trailers, there can be whitespaces after the
token, the separator and the value. There can also be whitespaces
-inside the token and the value.
+inside the token and the value. The value may be split over multiple lines with
+each subsequent line starting with whitespace, like the "folding" in RFC 822.
Note that 'trailers' do not follow and are not intended to follow many
-rules for RFC 822 headers. For example they do not follow the line
-folding rules, the encoding rules and probably many other rules.
+rules for RFC 822 headers. For example they do not follow
+the encoding rules and probably many other rules.
OPTIONS
-------
[--exclude-per-directory=<file>]
[--exclude-standard]
[--error-unmatch] [--with-tree=<tree-ish>]
- [--full-name] [--abbrev] [--] [<file>...]
+ [--full-name] [--recurse-submodules]
+ [--abbrev] [--] [<file>...]
DESCRIPTION
-----------
option forces paths to be output relative to the project
top directory.
+--recurse-submodules::
+ Recursively calls ls-files on each submodule in the repository.
+ Currently there is only support for the --cached mode.
+
--abbrev[=<n>]::
Instead of showing the full 40-byte hexadecimal object
lines, show only a partial prefix.
Prompt before each invocation of the merge resolution program
to give the user a chance to skip the path.
+-O<orderfile>::
+ Process files in the order specified in the
+ <orderfile>, which has one shell glob pattern per line.
+ This overrides the `diff.orderFile` configuration variable
+ (see linkgit:git-config[1]). To cancel `diff.orderFile`,
+ use `-O/dev/null`.
+
TEMPORARY FILES
---------------
`git mergetool` creates `*.orig` backup files while resolving merges.
option, which tells it if updates to a ref should be denied if they
are not fast-forwards.
+A number of other receive.* config options are available to tweak
+its behavior, see linkgit:git-config[1].
+
OPTIONS
-------
<directory>::
stashes are found in the reflog of this reference and can be named using
the usual reflog syntax (e.g. `stash@{0}` is the most recently
created stash, `stash@{1}` is the one before it, `stash@{2.hours.ago}`
-is also possible).
+is also possible). Stashes may also be referenced by specifying just the
+stash index (e.g. the integer `n` is equivalent to `stash@{n}`).
OPTIONS
-------
--branch::
Show the branch and tracking info even in short-format.
---porcelain::
+--porcelain[=<version>]::
Give the output in an easy-to-parse format for scripts.
This is similar to the short output, but will remain stable
across Git versions and regardless of user configuration. See
below for details.
++
+The version parameter is used to specify the format version.
+This is optional and defaults to the original version 'v1' format.
--long::
Give the output in the long-format. This is the default.
-z::
Terminate entries with NUL, instead of LF. This implies
- the `--porcelain` output format if no other format is given.
+ the `--porcelain=v1` output format if no other format is given.
--column[=<options>]::
--no-column::
If -b is used the short-format status is preceded by a line
-## branchname tracking info
+ ## branchname tracking info
-Porcelain Format
-~~~~~~~~~~~~~~~~
+Porcelain Format Version 1
+~~~~~~~~~~~~~~~~~~~~~~~~~~
-The porcelain format is similar to the short format, but is guaranteed
+Version 1 porcelain format is similar to the short format, but is guaranteed
not to change in a backwards-incompatible way between Git versions or
based on user configuration. This makes it ideal for parsing by scripts.
The description of the short format above also describes the porcelain
characters are not specially formatted; no quoting or
backslash-escaping is performed.
+Porcelain Format Version 2
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Version 2 format adds more detailed information about the state of
+the worktree and changed items. Version 2 also defines an extensible
+set of easy to parse optional headers.
+
+Header lines start with "#" and are added in response to specific
+command line arguments. Parsers should ignore headers they
+don't recognize.
+
+### Branch Headers
+
+If `--branch` is given, a series of header lines are printed with
+information about the current branch.
+
+ Line Notes
+ ------------------------------------------------------------
+ # branch.oid <commit> | (initial) Current commit.
+ # branch.head <branch> | (detached) Current branch.
+ # branch.upstream <upstream_branch> If upstream is set.
+ # branch.ab +<ahead> -<behind> If upstream is set and
+ the commit is present.
+ ------------------------------------------------------------
+
+### Changed Tracked Entries
+
+Following the headers, a series of lines are printed for tracked
+entries. One of three different line formats may be used to describe
+an entry depending on the type of change. Tracked entries are printed
+in an undefined order; parsers should allow for a mixture of the 3
+line types in any order.
+
+Ordinary changed entries have the following format:
+
+ 1 <XY> <sub> <mH> <mI> <mW> <hH> <hI> <path>
+
+Renamed or copied entries have the following format:
+
+ 2 <XY> <sub> <mH> <mI> <mW> <hH> <hI> <X><score> <path><sep><origPath>
+
+ Field Meaning
+ --------------------------------------------------------
+ <XY> A 2 character field containing the staged and
+ unstaged XY values described in the short format,
+ with unchanged indicated by a "." rather than
+ a space.
+ <sub> A 4 character field describing the submodule state.
+ "N..." when the entry is not a submodule.
+ "S<c><m><u>" when the entry is a submodule.
+ <c> is "C" if the commit changed; otherwise ".".
+ <m> is "M" if it has tracked changes; otherwise ".".
+ <u> is "U" if there are untracked changes; otherwise ".".
+ <mH> The octal file mode in HEAD.
+ <mI> The octal file mode in the index.
+ <mW> The octal file mode in the worktree.
+ <hH> The object name in HEAD.
+ <hI> The object name in the index.
+ <X><score> The rename or copy score (denoting the percentage
+ of similarity between the source and target of the
+ move or copy). For example "R100" or "C75".
+ <path> The pathname. In a renamed/copied entry, this
+ is the path in the index and in the working tree.
+ <sep> When the `-z` option is used, the 2 pathnames are separated
+ with a NUL (ASCII 0x00) byte; otherwise, a tab (ASCII 0x09)
+ byte separates them.
+ <origPath> The pathname in the commit at HEAD. This is only
+ present in a renamed/copied entry, and tells
+ where the renamed/copied contents came from.
+ --------------------------------------------------------
+
+Unmerged entries have the following format; the first character is
+a "u" to distinguish from ordinary changed entries.
+
+ u <xy> <sub> <m1> <m2> <m3> <mW> <h1> <h2> <h3> <path>
+
+ Field Meaning
+ --------------------------------------------------------
+ <XY> A 2 character field describing the conflict type
+ as described in the short format.
+ <sub> A 4 character field describing the submodule state
+ as described above.
+ <m1> The octal file mode in stage 1.
+ <m2> The octal file mode in stage 2.
+ <m3> The octal file mode in stage 3.
+ <mW> The octal file mode in the worktree.
+ <h1> The object name in stage 1.
+ <h2> The object name in stage 2.
+ <h3> The object name in stage 3.
+ <path> The pathname.
+ --------------------------------------------------------
+
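+For illustration, a hypothetical ordinary-change entry for a file
+that is modified in the working tree but not yet staged might look
+like this (the two object names are shortened placeholders; real
+output shows full names):
+
+ 1 .M N... 100644 100644 100644 e69de29... e69de29... README.md
+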
+### Other Items
+
+Following the tracked entries (and if requested), a series of
+lines will be printed for untracked and then ignored items
+found in the worktree.
+
+Untracked items have the following format:
+
+ ? <path>
+
+Ignored items have the following format:
+
+ ! <path>
+
+### Pathname Format Notes and -z
+
+When the `-z` option is given, pathnames are printed as is and
+without any quoting and lines are terminated with a NUL (ASCII 0x00)
+byte.
+
+Otherwise, all pathnames will be "C-quoted" if they contain any tab,
+linefeed, double quote, or backslash characters. In C-quoting, these
+characters will be replaced with the corresponding C-style escape
+sequences and the resulting pathname will be double quoted.
+
+
CONFIGURATION
-------------
--strict::
Don't write objects with broken content or links.
+--max-input-size=<size>::
+ Die if the pack is larger than <size>.
+
GIT
---
Part of the linkgit:git[1] suite
[--exec-path[=<path>]] [--html-path] [--man-path] [--info-path]
[-p|--paginate|--no-pager] [--no-replace-objects] [--bare]
[--git-dir=<path>] [--work-tree=<path>] [--namespace=<name>]
+ [--super-prefix=<path>]
<command> [<args>]
DESCRIPTION
details. Equivalent to setting the `GIT_NAMESPACE` environment
variable.
+--super-prefix=<path>::
+ Currently for internal use only. Set a prefix which gives a path from
+ above a repository down to its root. One use is to give submodules
+ context about the superproject that invoked it.
+
--bare::
Treat the repository as a bare repository. If GIT_DIR
environment is not set, it is set to the current working
fed the blob object from its standard input, and its standard
output is used to update the worktree file. Similarly, the
`clean` command is used to convert the contents of worktree file
-upon checkin.
+upon checkin. By default these commands process only a single
+blob and terminate. If a long running `process` filter is used
+in place of `clean` and/or `smudge` filters, then Git can process
+all blobs with a single filter command invocation for the entire
+life of a single Git command, for example `git add --all`. If a
+long running `process` filter is configured then it always takes
+precedence over a configured single blob filter. See section
+below for the description of the protocol used to communicate with
+a `process` filter.
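+
+For illustration, a hypothetical long running driver named "example"
+could be configured like this (the command name is a placeholder; it
+must speak the protocol described in the "Long Running Filter
+Process" section below):
+
+------------------------
+[filter "example"]
+    process = example-filter-process
+    required = true
+------------------------
+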
One use of the content filtering is to massage the content into a shape
that is more convenient for the platform, filesystem, and the user to use.
should not try to access the file on disk, but only act as filters on the
content provided to them on standard input.
+Long Running Filter Process
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+If the filter command (a string value) is defined via
+`filter.<driver>.process` then Git can process all blobs with a
+single filter invocation for the entire life of a single Git
+command. This is achieved by using a packet format (pkt-line,
+see technical/protocol-common.txt) based protocol over standard
+input and standard output as follows. All packets, except for the
+"*CONTENT" packets and the "0000" flush packet, are considered
+text and therefore are terminated by a LF.
+
+Git starts the filter when it encounters the first file
+that needs to be cleaned or smudged. After the filter has started,
+Git sends a welcome message ("git-filter-client"), a list of supported
+protocol version numbers, and a flush packet. Git expects to read a welcome
+response message ("git-filter-server"), exactly one protocol version number
+from the previously sent list, and a flush packet. All further
+communication will be based on the selected version. The remaining
+protocol description below documents "version=2". Please note that
+"version=42" in the example below does not exist and is only there
+to illustrate how the protocol would look with more than one
+version.
+
+After the version negotiation Git sends a list of all capabilities that
+it supports and a flush packet. Git expects to read a list of desired
+capabilities, which must be a subset of the supported capabilities list,
+and a flush packet as response:
+------------------------
+packet: git> git-filter-client
+packet: git> version=2
+packet: git> version=42
+packet: git> 0000
+packet: git< git-filter-server
+packet: git< version=2
+packet: git< 0000
+packet: git> capability=clean
+packet: git> capability=smudge
+packet: git> capability=not-yet-invented
+packet: git> 0000
+packet: git< capability=clean
+packet: git< capability=smudge
+packet: git< 0000
+------------------------
+Supported filter capabilities in version 2 are "clean" and
+"smudge".
+
+Afterwards Git sends a list of "key=value" pairs terminated with
+a flush packet. The list will contain at least the filter command
+(based on the supported capabilities) and the pathname of the file
+to filter relative to the repository root. Right after the flush packet
+Git sends the content split in zero or more pkt-line packets and a
+flush packet to terminate content. Please note that the filter
+must not send any response before it has received the content and
+the final flush packet.
+------------------------
+packet: git> command=smudge
+packet: git> pathname=path/testfile.dat
+packet: git> 0000
+packet: git> CONTENT
+packet: git> 0000
+------------------------
+
+The filter is expected to respond with a list of "key=value" pairs
+terminated with a flush packet. If the filter does not experience
+problems then the list must contain a "success" status. Right after
+these packets the filter is expected to send the content in zero
+or more pkt-line packets and a flush packet at the end. Finally, a
+second list of "key=value" pairs terminated with a flush packet
+is expected. The filter can change the status in the second list
+or keep the status as is with an empty list. Please note that the
+empty list must be terminated with a flush packet regardless.
+
+------------------------
+packet: git< status=success
+packet: git< 0000
+packet: git< SMUDGED_CONTENT
+packet: git< 0000
+packet: git< 0000 # empty list, keep "status=success" unchanged!
+------------------------
+
+If the result content is empty then the filter is expected to respond
+with a "success" status and a flush packet to signal the empty content.
+------------------------
+packet: git< status=success
+packet: git< 0000
+packet: git< 0000 # empty content!
+packet: git< 0000 # empty list, keep "status=success" unchanged!
+------------------------
+
+In case the filter cannot or does not want to process the content,
+it is expected to respond with an "error" status.
+------------------------
+packet: git< status=error
+packet: git< 0000
+------------------------
+
+If the filter experiences an error during processing, then it can
+send the status "error" after the content was (partially or
+completely) sent.
+------------------------
+packet: git< status=success
+packet: git< 0000
+packet: git< HALF_WRITTEN_ERRONEOUS_CONTENT
+packet: git< 0000
+packet: git< status=error
+packet: git< 0000
+------------------------
+
+In case the filter cannot or does not want to process the content
+as well as any future content for the lifetime of the Git process,
+then it is expected to respond with an "abort" status at any point
+in the protocol.
+------------------------
+packet: git< status=abort
+packet: git< 0000
+------------------------
+
+Git neither stops nor restarts the filter process in case the
+"error"/"abort" status is set. However, Git sets its exit code
+according to the `filter.<driver>.required` flag, mimicking the
+behavior of the `filter.<driver>.clean` / `filter.<driver>.smudge`
+mechanism.
+
+If the filter dies during the communication or does not adhere to
+the protocol, then Git will stop the filter process and restart it
+with the next file that needs to be processed. Depending on the
+`filter.<driver>.required` flag, Git will interpret that as an error.
+
+After the filter has processed a blob it is expected to wait for
+the next "key=value" list containing a command. Git will close
+the command pipe on exit. The filter is expected to detect EOF
+and exit gracefully on its own. Git will wait until the filter
+process has stopped.
+
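+For illustration, here is a compact pass-through filter that speaks
+the protocol described above. This is only a sketch: the language
+(Python) and the helper names are arbitrary choices for this example,
+and a real filter will want stricter error handling (the complete
+demo that ships with Git is referenced below):
+------------------------
+#!/usr/bin/env python3
+import sys
+
+MAX_PAYLOAD = 65516              # maximum pkt-line payload size
+
+def read_pkt():
+    """Return a packet payload, b'' for a flush packet, None on EOF."""
+    size = sys.stdin.buffer.read(4)
+    if len(size) < 4:
+        return None              # Git closed the pipe; exit gracefully
+    n = int(size, 16)
+    if n == 0:
+        return b''               # flush packet ("0000")
+    return sys.stdin.buffer.read(n - 4)
+
+def write_pkt(payload):
+    sys.stdout.buffer.write(b'%04x' % (len(payload) + 4) + payload)
+
+def flush():
+    sys.stdout.buffer.write(b'0000')
+    sys.stdout.buffer.flush()
+
+def read_list():
+    """Read packets up to a flush packet; return them, or None on EOF."""
+    lines = []
+    while True:
+        pkt = read_pkt()
+        if pkt is None:
+            return None
+        if pkt == b'':
+            return lines
+        lines.append(pkt)
+
+# handshake: welcome and version selection, then capability negotiation
+if b'version=2\n' not in (read_list() or []):
+    sys.exit(1)                  # no common protocol version
+write_pkt(b'git-filter-server\n')
+write_pkt(b'version=2\n')
+flush()
+read_list()                      # capabilities announced by Git
+write_pkt(b'capability=clean\n')
+write_pkt(b'capability=smudge\n')
+flush()
+
+# per-blob loop: read command and pathname, read content, respond
+while True:
+    header = read_list()         # command=..., pathname=... (unused here)
+    if header is None:
+        break                    # EOF: Git has exited
+    content = b''.join(read_list() or [])
+
+    write_pkt(b'status=success\n')
+    flush()
+    for i in range(0, len(content), MAX_PAYLOAD):
+        write_pkt(content[i:i + MAX_PAYLOAD])
+    flush()                      # end of content
+    flush()                      # empty list: keep "status=success"
+------------------------
+The sketch buffers the whole blob before answering, as the protocol
+above requires that no response is sent before the final flush packet
+of the content has been received.
+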
+A long running filter demo implementation can be found in
+`contrib/long-running-filter/example.pl` located in the Git
+core repository. If you develop your own long running filter
+process then the `GIT_TRACE_PACKET` environment variable can be
+very helpful for debugging (see linkgit:git[1]).
+
+Please note that you cannot use an existing `filter.<driver>.clean`
+or `filter.<driver>.smudge` command with `filter.<driver>.process`
+because the former two use a different inter-process communication
+protocol than the latter one.
+
+
Interaction between checkin/checkout attributes
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
'option depth' <depth>::
Deepens the history of a shallow repository.
+'option deepen-since' <timestamp>::
+ Deepens the history of a shallow repository based on time.
+
+'option deepen-not' <ref>::
+ Deepens the history of a shallow repository excluding ref.
+ Multiple options add up.
+
+'option deepen-relative' {'true'|'false'}::
+ Deepens the history of a shallow repository relative to the
+ current boundary. Only valid when used with "option depth".
+
'option followtags' {'true'|'false'}::
If enabled the helper should automatically fetch annotated
tag objects if the object the tag points at was transferred
Note that 'highlight' feature must be set for gitweb to actually
use syntax highlighting.
+
-*NOTE*: if you want to add support for new file type (supported by
-"highlight" but not used by gitweb), you need to modify `%highlight_ext`
-or `%highlight_basename`, depending on whether you detect type of file
-based on extension (for example "sh") or on its basename (for example
-"Makefile"). The keys of these hashes are extension and basename,
-respectively, and value for given key is name of syntax to be passed via
-`--syntax <syntax>` to highlighter.
+*NOTE*: for a file to be highlighted, its syntax type must be detected
+and that syntax must be supported by "highlight". The default syntax
+detection is minimal, and there are many supported syntax types with no
+detection by default. There are three options for adding syntax
+detection. The first and second priorities are `%highlight_basename` and
+`%highlight_ext`, which detect based on basename (the full filename, for
+example "Makefile") and extension (for example "sh"). The keys of these
+hashes are the basename and extension, respectively, and the value for a
+given key is the name of the syntax to be passed via `--syntax <syntax>`
+to "highlight". The last priority is the "highlight" configuration of
+`Shebang` regular expressions to detect the language based on the first
+line in the file (for example, matching the line "#!/bin/bash"). See
+the highlight documentation and the default config at
+/etc/highlight/filetypes.conf for more details.
+
For example if repositories you are hosting use "phtml" extension for
PHP files, and you want to have correct syntax-highlighting for those
- '%N': commit notes
endif::git-rev-list[]
- '%GG': raw verification message from GPG for a signed commit
-- '%G?': show "G" for a good (valid) signature, "B" for a bad signature,
- "U" for a good signature with unknown validity and "N" for no signature
+- '%G?': show "G" for a good (valid) signature,
+ "B" for a bad signature,
+ "U" for a good signature with unknown validity,
+ "X" for a good signature that has expired,
+ "Y" for a good signature made by an expired key,
+ "R" for a good signature made by a revoked key,
+ "E" if the signature cannot be checked (e.g. missing key)
+ and "N" for no signature
- '%GS': show the name of the signer for a signed commit
- '%GK': show the key used to sign a signed commit
- '%gD': reflog selector, e.g., `refs/stash@{1}` or
Other <rev>{caret} Parent Shorthand Notations
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-Two other shorthands exist, particularly useful for merge commits,
+Three other shorthands exist, particularly useful for merge commits,
for naming a set that is formed by a commit and its parent commits.
The 'r1{caret}@' notation means all parents of 'r1'.
The 'r1{caret}!' notation includes commit 'r1' but excludes all of its parents.
By itself, this notation denotes the single commit 'r1'.
+The '<rev>{caret}-{<n>}' notation includes '<rev>' but excludes the <n>th
+parent (i.e. a shorthand for '<rev>{caret}<n>..<rev>'), with '<n>' = 1 if
+not given. This is typically useful for merge commits where you
+can just pass '<commit>{caret}-' to get all the commits in the branch
+that was merged in merge commit '<commit>' (including '<commit>'
+itself).
+
While '<rev>{caret}<n>' was about specifying a single commit parent, these
-two notations consider all its parents. For example you can say
+three notations also consider its parents. For example you can say
'HEAD{caret}2{caret}@', however you cannot say 'HEAD{caret}@{caret}2'.
Revision Range Summary
as giving commit '<rev>' and then all its parents prefixed with
'{caret}' to exclude them (and their ancestors).
+'<rev>{caret}-{<n>}', e.g. 'HEAD{caret}-, HEAD{caret}-2'::
+ Equivalent to '<rev>{caret}<n>..<rev>', with '<n>' = 1 if not
+ given.
+
Here are a handful of examples using the Loeliger illustration above,
with each step in the notation's expansion and selection carefully
spelt out:
C I J F C
B..C = ^B C C
B...C = B ^F C G H D E B C
+ B^- = B^..B
+ = ^B^1 B E I J F B
C^@ = C^1
= F I J F
B^@ = B^1 B^2 B^3
`sha1_array_for_each_unique`::
Efficiently iterate over each unique element of the list,
executing the callback function for each one. If the array is
- not sorted, this function has the side effect of sorting it.
+ not sorted, this function has the side effect of sorting it. If
+ the callback returns a non-zero value, the iteration ends
+ immediately and the callback's return value is propagated; otherwise,
+ 0 is returned.
Examples
--------
-----------------------------------------
-void print_callback(const unsigned char sha1[20],
+int print_callback(const unsigned char sha1[20],
void *data)
{
printf("%s\n", sha1_to_hex(sha1));
+ return 0; /* always continue */
}
void some_func(void)
shallow-line = PKT-LINE("shallow" SP obj-id)
- depth-request = PKT-LINE("deepen" SP depth)
+ depth-request = PKT-LINE("deepen" SP depth) /
+ PKT-LINE("deepen-since" SP timestamp) /
+ PKT-LINE("deepen-not" SP ref)
first-want = PKT-LINE("want" SP obj-id SP capability-list)
additional-want = PKT-LINE("want" SP obj-id)
the fetch-pack/upload-pack protocol so clients can request shallow
clones.
+deepen-since
+------------
+
+This capability adds "deepen-since" command to fetch-pack/upload-pack
+protocol so the client can request shallow clones that are cut at a
+specific time, instead of depth. Internally it's equivalent of doing
+"rev-list --max-age=<timestamp>" on the server side. "deepen-since"
+cannot be used with "deepen".
+
+deepen-not
+----------
+
+This capability adds "deepen-not" command to fetch-pack/upload-pack
+protocol so the client can request shallow clones that are cut at a
+specific revision, instead of depth. Internally it's equivalent of
+doing "rev-list --not <rev>" on the server side. "deepen-not"
+cannot be used with "deepen", but can be used with "deepen-since".
+
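+For illustration, a shallow fetch request that combines "deepen-since"
+and "deepen-not" could contain lines like the following (<obj-id> and
+<capability-list> stand for the usual want-line contents; the timestamp
+and ref are only example values). The exact ordering is given by the
+"upload-request" grammar in pack-protocol.txt:
+------------------------
+want <obj-id> <capability-list>
+deepen-since 1461081600
+deepen-not refs/heads/maint
+0000
+------------------------
+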
+deepen-relative
+---------------
+
+If this capability is requested by the client, the semantics of the
+"deepen" command change. The "depth" argument is the depth from
+the current shallow boundary, instead of the depth from remote refs.
+
no-progress
-----------
#!/bin/sh
GVF=GIT-VERSION-FILE
-DEF_VER=v2.10.2
+DEF_VER=v2.10.0.GIT
LF='
'
export TCL_PATH TCLTK_PATH
SPARSE_FLAGS =
+SPATCH_FLAGS = --all-includes
LIB_OBJS += advice.o
LIB_OBJS += alias.o
LIB_OBJS += alloc.o
+LIB_OBJS += apply.o
LIB_OBJS += archive.o
LIB_OBJS += archive-tar.o
LIB_OBJS += archive-zip.o
LIB_OBJS += symlinks.o
LIB_OBJS += tag.o
LIB_OBJS += tempfile.o
+LIB_OBJS += tmp-objdir.o
LIB_OBJS += trace.o
LIB_OBJS += trailer.o
LIB_OBJS += transport.o
%.cocci.patch: %.cocci $(C_SOURCES)
@echo ' ' SPATCH $<; \
for f in $(C_SOURCES); do \
- $(SPATCH) --sp-file $< $$f; \
+ $(SPATCH) --sp-file $< $$f $(SPATCH_FLAGS); \
done >$@ 2>$@.log; \
if test -s $@; \
then \
-Documentation/RelNotes/2.10.2.txt
\ No newline at end of file
+Documentation/RelNotes/2.11.0.txt
\ No newline at end of file
--- /dev/null
+/*
+ * apply.c
+ *
+ * Copyright (C) Linus Torvalds, 2005
+ *
+ * This applies patches on top of some (arbitrary) version of the SCM.
+ *
+ */
+
+#include "cache.h"
+#include "blob.h"
+#include "delta.h"
+#include "diff.h"
+#include "dir.h"
+#include "xdiff-interface.h"
+#include "ll-merge.h"
+#include "lockfile.h"
+#include "parse-options.h"
+#include "quote.h"
+#include "rerere.h"
+#include "apply.h"
+
+static void git_apply_config(void)
+{
+ git_config_get_string_const("apply.whitespace", &apply_default_whitespace);
+ git_config_get_string_const("apply.ignorewhitespace", &apply_default_ignorewhitespace);
+ git_config(git_default_config, NULL);
+}
+
+static int parse_whitespace_option(struct apply_state *state, const char *option)
+{
+ if (!option) {
+ state->ws_error_action = warn_on_ws_error;
+ return 0;
+ }
+ if (!strcmp(option, "warn")) {
+ state->ws_error_action = warn_on_ws_error;
+ return 0;
+ }
+ if (!strcmp(option, "nowarn")) {
+ state->ws_error_action = nowarn_ws_error;
+ return 0;
+ }
+ if (!strcmp(option, "error")) {
+ state->ws_error_action = die_on_ws_error;
+ return 0;
+ }
+ if (!strcmp(option, "error-all")) {
+ state->ws_error_action = die_on_ws_error;
+ state->squelch_whitespace_errors = 0;
+ return 0;
+ }
+ if (!strcmp(option, "strip") || !strcmp(option, "fix")) {
+ state->ws_error_action = correct_ws_error;
+ return 0;
+ }
+ return error(_("unrecognized whitespace option '%s'"), option);
+}
+
+static int parse_ignorewhitespace_option(struct apply_state *state,
+ const char *option)
+{
+ if (!option || !strcmp(option, "no") ||
+ !strcmp(option, "false") || !strcmp(option, "never") ||
+ !strcmp(option, "none")) {
+ state->ws_ignore_action = ignore_ws_none;
+ return 0;
+ }
+ if (!strcmp(option, "change")) {
+ state->ws_ignore_action = ignore_ws_change;
+ return 0;
+ }
+ return error(_("unrecognized whitespace ignore option '%s'"), option);
+}
+
+int init_apply_state(struct apply_state *state,
+ const char *prefix,
+ struct lock_file *lock_file)
+{
+ memset(state, 0, sizeof(*state));
+ state->prefix = prefix;
+ state->prefix_length = state->prefix ? strlen(state->prefix) : 0;
+ state->lock_file = lock_file;
+ state->newfd = -1;
+ state->apply = 1;
+ state->line_termination = '\n';
+ state->p_value = 1;
+ state->p_context = UINT_MAX;
+ state->squelch_whitespace_errors = 5;
+ state->ws_error_action = warn_on_ws_error;
+ state->ws_ignore_action = ignore_ws_none;
+ state->linenr = 1;
+ string_list_init(&state->fn_table, 0);
+ string_list_init(&state->limit_by_name, 0);
+ string_list_init(&state->symlink_changes, 0);
+ strbuf_init(&state->root, 0);
+
+ git_apply_config();
+ if (apply_default_whitespace && parse_whitespace_option(state, apply_default_whitespace))
+ return -1;
+ if (apply_default_ignorewhitespace && parse_ignorewhitespace_option(state, apply_default_ignorewhitespace))
+ return -1;
+ return 0;
+}
+
+void clear_apply_state(struct apply_state *state)
+{
+ string_list_clear(&state->limit_by_name, 0);
+ string_list_clear(&state->symlink_changes, 0);
+ strbuf_release(&state->root);
+
+ /* &state->fn_table is cleared at the end of apply_patch() */
+}
+
+static void mute_routine(const char *msg, va_list params)
+{
+ /* do nothing */
+}
+
+int check_apply_state(struct apply_state *state, int force_apply)
+{
+ int is_not_gitdir = !startup_info->have_repository;
+
+ if (state->apply_with_reject && state->threeway)
+ return error(_("--reject and --3way cannot be used together."));
+ if (state->cached && state->threeway)
+ return error(_("--cached and --3way cannot be used together."));
+ if (state->threeway) {
+ if (is_not_gitdir)
+ return error(_("--3way outside a repository"));
+ state->check_index = 1;
+ }
+ if (state->apply_with_reject) {
+ state->apply = 1;
+ if (state->apply_verbosity == verbosity_normal)
+ state->apply_verbosity = verbosity_verbose;
+ }
+ if (!force_apply && (state->diffstat || state->numstat || state->summary || state->check || state->fake_ancestor))
+ state->apply = 0;
+ if (state->check_index && is_not_gitdir)
+ return error(_("--index outside a repository"));
+ if (state->cached) {
+ if (is_not_gitdir)
+ return error(_("--cached outside a repository"));
+ state->check_index = 1;
+ }
+ if (state->check_index)
+ state->unsafe_paths = 0;
+ if (!state->lock_file)
+ return error("BUG: state->lock_file should not be NULL");
+
+ if (state->apply_verbosity <= verbosity_silent) {
+ state->saved_error_routine = get_error_routine();
+ state->saved_warn_routine = get_warn_routine();
+ set_error_routine(mute_routine);
+ set_warn_routine(mute_routine);
+ }
+
+ return 0;
+}
+
+static void set_default_whitespace_mode(struct apply_state *state)
+{
+ if (!state->whitespace_option && !apply_default_whitespace)
+ state->ws_error_action = (state->apply ? warn_on_ws_error : nowarn_ws_error);
+}
+
+/*
+ * This represents one "hunk" from a patch, starting with
+ * "@@ -oldpos,oldlines +newpos,newlines @@" marker. The
+ * patch text is pointed at by patch, and its byte length
+ * is stored in size. leading and trailing are the number
+ * of context lines.
+ */
+struct fragment {
+ unsigned long leading, trailing;
+ unsigned long oldpos, oldlines;
+ unsigned long newpos, newlines;
+ /*
+ * 'patch' is usually borrowed from buf in apply_patch(),
+ * but some codepaths store an allocated buffer.
+ */
+ const char *patch;
+ unsigned free_patch:1,
+ rejected:1;
+ int size;
+ int linenr;
+ struct fragment *next;
+};
+
+/*
+ * When dealing with a binary patch, we reuse "leading" field
+ * to store the type of the binary hunk, either deflated "delta"
+ * or deflated "literal".
+ */
+#define binary_patch_method leading
+#define BINARY_DELTA_DEFLATED 1
+#define BINARY_LITERAL_DEFLATED 2
+
+/*
+ * This represents a "patch" to a file, both metainfo changes
+ * such as creation/deletion, filemode and content changes represented
+ * as a series of fragments.
+ */
+struct patch {
+ char *new_name, *old_name, *def_name;
+ unsigned int old_mode, new_mode;
+ int is_new, is_delete; /* -1 = unknown, 0 = false, 1 = true */
+ int rejected;
+ unsigned ws_rule;
+ int lines_added, lines_deleted;
+ int score;
+ unsigned int is_toplevel_relative:1;
+ unsigned int inaccurate_eof:1;
+ unsigned int is_binary:1;
+ unsigned int is_copy:1;
+ unsigned int is_rename:1;
+ unsigned int recount:1;
+ unsigned int conflicted_threeway:1;
+ unsigned int direct_to_threeway:1;
+ struct fragment *fragments;
+ char *result;
+ size_t resultsize;
+ char old_sha1_prefix[41];
+ char new_sha1_prefix[41];
+ struct patch *next;
+
+ /* three-way fallback result */
+ struct object_id threeway_stage[3];
+};
+
+static void free_fragment_list(struct fragment *list)
+{
+ while (list) {
+ struct fragment *next = list->next;
+ if (list->free_patch)
+ free((char *)list->patch);
+ free(list);
+ list = next;
+ }
+}
+
+static void free_patch(struct patch *patch)
+{
+ free_fragment_list(patch->fragments);
+ free(patch->def_name);
+ free(patch->old_name);
+ free(patch->new_name);
+ free(patch->result);
+ free(patch);
+}
+
+static void free_patch_list(struct patch *list)
+{
+ while (list) {
+ struct patch *next = list->next;
+ free_patch(list);
+ list = next;
+ }
+}
+
+/*
+ * A line in a file, len-bytes long (includes the terminating LF,
+ * except for an incomplete line at the end if the file ends with
+ * one), and its contents hashes to 'hash'.
+ */
+struct line {
+ size_t len;
+ unsigned hash : 24;
+ unsigned flag : 8;
+#define LINE_COMMON 1
+#define LINE_PATCHED 2
+};
+
+/*
+ * This represents a "file", which is an array of "lines".
+ */
+struct image {
+ char *buf;
+ size_t len;
+ size_t nr;
+ size_t alloc;
+ struct line *line_allocated;
+ struct line *line;
+};
+
+static uint32_t hash_line(const char *cp, size_t len)
+{
+ size_t i;
+ uint32_t h;
+ for (i = 0, h = 0; i < len; i++) {
+ if (!isspace(cp[i])) {
+ h = h * 3 + (cp[i] & 0xff);
+ }
+ }
+ return h;
+}
+
+/*
+ * Compare lines s1 of length n1 and s2 of length n2, ignoring
+ * whitespace difference. Returns 1 if they match, 0 otherwise
+ */
+static int fuzzy_matchlines(const char *s1, size_t n1,
+ const char *s2, size_t n2)
+{
+ const char *last1 = s1 + n1 - 1;
+ const char *last2 = s2 + n2 - 1;
+ int result = 0;
+
+ /* ignore line endings */
+ while ((*last1 == '\r') || (*last1 == '\n'))
+ last1--;
+ while ((*last2 == '\r') || (*last2 == '\n'))
+ last2--;
+
+ /* skip leading whitespaces, if both begin with whitespace */
+ if (s1 <= last1 && s2 <= last2 && isspace(*s1) && isspace(*s2)) {
+ while (isspace(*s1) && (s1 <= last1))
+ s1++;
+ while (isspace(*s2) && (s2 <= last2))
+ s2++;
+ }
+ /* early return if both lines are empty */
+ if ((s1 > last1) && (s2 > last2))
+ return 1;
+ while (!result) {
+ result = *s1++ - *s2++;
+ /*
+ * Skip whitespace inside. We check for whitespace on
+ * both buffers because we don't want "a b" to match
+ * "ab"
+ */
+ if (isspace(*s1) && isspace(*s2)) {
+ while (isspace(*s1) && s1 <= last1)
+ s1++;
+ while (isspace(*s2) && s2 <= last2)
+ s2++;
+ }
+ /*
+ * If we reached the end on one side only,
+ * lines don't match
+ */
+ if (
+ ((s2 > last2) && (s1 <= last1)) ||
+ ((s1 > last1) && (s2 <= last2)))
+ return 0;
+ if ((s1 > last1) && (s2 > last2))
+ break;
+ }
+
+ return !result;
+}
+
+static void add_line_info(struct image *img, const char *bol, size_t len, unsigned flag)
+{
+ ALLOC_GROW(img->line_allocated, img->nr + 1, img->alloc);
+ img->line_allocated[img->nr].len = len;
+ img->line_allocated[img->nr].hash = hash_line(bol, len);
+ img->line_allocated[img->nr].flag = flag;
+ img->nr++;
+}
+
+/*
+ * "buf" has the file contents to be patched (read from various sources).
+ * attach it to "image" and add line-based index to it.
+ * "image" now owns the "buf".
+ */
+static void prepare_image(struct image *image, char *buf, size_t len,
+ int prepare_linetable)
+{
+ const char *cp, *ep;
+
+ memset(image, 0, sizeof(*image));
+ image->buf = buf;
+ image->len = len;
+
+ if (!prepare_linetable)
+ return;
+
+ ep = image->buf + image->len;
+ cp = image->buf;
+ while (cp < ep) {
+ const char *next;
+ for (next = cp; next < ep && *next != '\n'; next++)
+ ;
+ if (next < ep)
+ next++;
+ add_line_info(image, cp, next - cp, 0);
+ cp = next;
+ }
+ image->line = image->line_allocated;
+}
+
+static void clear_image(struct image *image)
+{
+ free(image->buf);
+ free(image->line_allocated);
+ memset(image, 0, sizeof(*image));
+}
+
+/* fmt must contain _one_ %s and no other substitution */
+static void say_patch_name(FILE *output, const char *fmt, struct patch *patch)
+{
+ struct strbuf sb = STRBUF_INIT;
+
+ if (patch->old_name && patch->new_name &&
+ strcmp(patch->old_name, patch->new_name)) {
+ quote_c_style(patch->old_name, &sb, NULL, 0);
+ strbuf_addstr(&sb, " => ");
+ quote_c_style(patch->new_name, &sb, NULL, 0);
+ } else {
+ const char *n = patch->new_name;
+ if (!n)
+ n = patch->old_name;
+ quote_c_style(n, &sb, NULL, 0);
+ }
+ fprintf(output, fmt, sb.buf);
+ fputc('\n', output);
+ strbuf_release(&sb);
+}
+
+#define SLOP (16)
+
+static int read_patch_file(struct strbuf *sb, int fd)
+{
+ if (strbuf_read(sb, fd, 0) < 0)
+ return error_errno("git apply: failed to read");
+
+ /*
+ * Make sure that we have some slop in the buffer
+ * so that we can do speculative "memcmp" etc, and
+ * see to it that it is NUL-filled.
+ */
+ strbuf_grow(sb, SLOP);
+ memset(sb->buf + sb->len, 0, SLOP);
+ return 0;
+}
+
+static unsigned long linelen(const char *buffer, unsigned long size)
+{
+ unsigned long len = 0;
+ while (size--) {
+ len++;
+ if (*buffer++ == '\n')
+ break;
+ }
+ return len;
+}
+
+static int is_dev_null(const char *str)
+{
+ return skip_prefix(str, "/dev/null", &str) && isspace(*str);
+}
+
+#define TERM_SPACE 1
+#define TERM_TAB 2
+
+static int name_terminate(int c, int terminate)
+{
+ if (c == ' ' && !(terminate & TERM_SPACE))
+ return 0;
+ if (c == '\t' && !(terminate & TERM_TAB))
+ return 0;
+
+ return 1;
+}
+
+/* remove double slashes to make --index work with such filenames */
+static char *squash_slash(char *name)
+{
+ int i = 0, j = 0;
+
+ if (!name)
+ return NULL;
+
+ while (name[i]) {
+ if ((name[j++] = name[i++]) == '/')
+ while (name[i] == '/')
+ i++;
+ }
+ name[j] = '\0';
+ return name;
+}
+
+static char *find_name_gnu(struct apply_state *state,
+ const char *line,
+ const char *def,
+ int p_value)
+{
+ struct strbuf name = STRBUF_INIT;
+ char *cp;
+
+ /*
+ * Proposed "new-style" GNU patch/diff format; see
+ * http://marc.info/?l=git&m=112927316408690&w=2
+ */
+ if (unquote_c_style(&name, line, NULL)) {
+ strbuf_release(&name);
+ return NULL;
+ }
+
+ for (cp = name.buf; p_value; p_value--) {
+ cp = strchr(cp, '/');
+ if (!cp) {
+ strbuf_release(&name);
+ return NULL;
+ }
+ cp++;
+ }
+
+ strbuf_remove(&name, 0, cp - name.buf);
+ if (state->root.len)
+ strbuf_insert(&name, 0, state->root.buf, state->root.len);
+ return squash_slash(strbuf_detach(&name, NULL));
+}
+
+static size_t sane_tz_len(const char *line, size_t len)
+{
+ const char *tz, *p;
+
+ if (len < strlen(" +0500") || line[len-strlen(" +0500")] != ' ')
+ return 0;
+ tz = line + len - strlen(" +0500");
+
+ if (tz[1] != '+' && tz[1] != '-')
+ return 0;
+
+ for (p = tz + 2; p != line + len; p++)
+ if (!isdigit(*p))
+ return 0;
+
+ return line + len - tz;
+}
+
+static size_t tz_with_colon_len(const char *line, size_t len)
+{
+ const char *tz, *p;
+
+ if (len < strlen(" +08:00") || line[len - strlen(":00")] != ':')
+ return 0;
+ tz = line + len - strlen(" +08:00");
+
+ if (tz[0] != ' ' || (tz[1] != '+' && tz[1] != '-'))
+ return 0;
+ p = tz + 2;
+ if (!isdigit(*p++) || !isdigit(*p++) || *p++ != ':' ||
+ !isdigit(*p++) || !isdigit(*p++))
+ return 0;
+
+ return line + len - tz;
+}
+
+static size_t date_len(const char *line, size_t len)
+{
+ const char *date, *p;
+
+ if (len < strlen("72-02-05") || line[len-strlen("-05")] != '-')
+ return 0;
+ p = date = line + len - strlen("72-02-05");
+
+ if (!isdigit(*p++) || !isdigit(*p++) || *p++ != '-' ||
+ !isdigit(*p++) || !isdigit(*p++) || *p++ != '-' ||
+ !isdigit(*p++) || !isdigit(*p++)) /* Not a date. */
+ return 0;
+
+ if (date - line >= strlen("19") &&
+ isdigit(date[-1]) && isdigit(date[-2])) /* 4-digit year */
+ date -= strlen("19");
+
+ return line + len - date;
+}
+
+static size_t short_time_len(const char *line, size_t len)
+{
+ const char *time, *p;
+
+ if (len < strlen(" 07:01:32") || line[len-strlen(":32")] != ':')
+ return 0;
+ p = time = line + len - strlen(" 07:01:32");
+
+ /* Permit 1-digit hours? */
+ if (*p++ != ' ' ||
+ !isdigit(*p++) || !isdigit(*p++) || *p++ != ':' ||
+ !isdigit(*p++) || !isdigit(*p++) || *p++ != ':' ||
+ !isdigit(*p++) || !isdigit(*p++)) /* Not a time. */
+ return 0;
+
+ return line + len - time;
+}
+
+static size_t fractional_time_len(const char *line, size_t len)
+{
+ const char *p;
+ size_t n;
+
+ /* Expected format: 19:41:17.620000023 */
+ if (!len || !isdigit(line[len - 1]))
+ return 0;
+ p = line + len - 1;
+
+ /* Fractional seconds. */
+ while (p > line && isdigit(*p))
+ p--;
+ if (*p != '.')
+ return 0;
+
+ /* Hours, minutes, and whole seconds. */
+ n = short_time_len(line, p - line);
+ if (!n)
+ return 0;
+
+ return line + len - p + n;
+}
+
+static size_t trailing_spaces_len(const char *line, size_t len)
+{
+ const char *p;
+
+ /* Expected format: ' ' x (1 or more) */
+ if (!len || line[len - 1] != ' ')
+ return 0;
+
+ p = line + len;
+ while (p != line) {
+ p--;
+ if (*p != ' ')
+ return line + len - (p + 1);
+ }
+
+ /* All spaces! */
+ return len;
+}
+
+static size_t diff_timestamp_len(const char *line, size_t len)
+{
+ const char *end = line + len;
+ size_t n;
+
+ /*
+ * Posix: 2010-07-05 19:41:17
+ * GNU: 2010-07-05 19:41:17.620000023 -0500
+ */
+
+ if (!isdigit(end[-1]))
+ return 0;
+
+ n = sane_tz_len(line, end - line);
+ if (!n)
+ n = tz_with_colon_len(line, end - line);
+ end -= n;
+
+ n = short_time_len(line, end - line);
+ if (!n)
+ n = fractional_time_len(line, end - line);
+ end -= n;
+
+ n = date_len(line, end - line);
+ if (!n) /* No date. Too bad. */
+ return 0;
+ end -= n;
+
+ if (end == line) /* No space before date. */
+ return 0;
+ if (end[-1] == '\t') { /* Success! */
+ end--;
+ return line + len - end;
+ }
+ if (end[-1] != ' ') /* No space before date. */
+ return 0;
+
+ /* Whitespace damage. */
+ end -= trailing_spaces_len(line, end - line);
+ return line + len - end;
+}
+
+static char *find_name_common(struct apply_state *state,
+ const char *line,
+ const char *def,
+ int p_value,
+ const char *end,
+ int terminate)
+{
+ int len;
+ const char *start = NULL;
+
+ if (p_value == 0)
+ start = line;
+ while (line != end) {
+ char c = *line;
+
+ if (!end && isspace(c)) {
+ if (c == '\n')
+ break;
+ if (name_terminate(c, terminate))
+ break;
+ }
+ line++;
+ if (c == '/' && !--p_value)
+ start = line;
+ }
+ if (!start)
+ return squash_slash(xstrdup_or_null(def));
+ len = line - start;
+ if (!len)
+ return squash_slash(xstrdup_or_null(def));
+
+ /*
+ * Generally we prefer the shorter name, especially
+ * if the other one is just a variation of that with
+ * something else tacked on to the end (ie "file.orig"
+ * or "file~").
+ */
+ if (def) {
+ int deflen = strlen(def);
+ if (deflen < len && !strncmp(start, def, deflen))
+ return squash_slash(xstrdup(def));
+ }
+
+ if (state->root.len) {
+ char *ret = xstrfmt("%s%.*s", state->root.buf, len, start);
+ return squash_slash(ret);
+ }
+
+ return squash_slash(xmemdupz(start, len));
+}
+
+static char *find_name(struct apply_state *state,
+ const char *line,
+ char *def,
+ int p_value,
+ int terminate)
+{
+ if (*line == '"') {
+ char *name = find_name_gnu(state, line, def, p_value);
+ if (name)
+ return name;
+ }
+
+ return find_name_common(state, line, def, p_value, NULL, terminate);
+}
+
+static char *find_name_traditional(struct apply_state *state,
+ const char *line,
+ char *def,
+ int p_value)
+{
+ size_t len;
+ size_t date_len;
+
+ if (*line == '"') {
+ char *name = find_name_gnu(state, line, def, p_value);
+ if (name)
+ return name;
+ }
+
+ len = strchrnul(line, '\n') - line;
+ date_len = diff_timestamp_len(line, len);
+ if (!date_len)
+ return find_name_common(state, line, def, p_value, NULL, TERM_TAB);
+ len -= date_len;
+
+ return find_name_common(state, line, def, p_value, line + len, 0);
+}
+
+static int count_slashes(const char *cp)
+{
+ int cnt = 0;
+ char ch;
+
+ while ((ch = *cp++))
+ if (ch == '/')
+ cnt++;
+ return cnt;
+}
+
+/*
+ * Given the string after "--- " or "+++ ", guess the appropriate
+ * p_value for the given patch.
+ */
+static int guess_p_value(struct apply_state *state, const char *nameline)
+{
+ char *name, *cp;
+ int val = -1;
+
+ if (is_dev_null(nameline))
+ return -1;
+ name = find_name_traditional(state, nameline, NULL, 0);
+ if (!name)
+ return -1;
+ cp = strchr(name, '/');
+ if (!cp)
+ val = 0;
+ else if (state->prefix) {
+ /*
+ * Does it begin with "a/$our-prefix" and such? Then this is
+ * very likely to apply to our directory.
+ */
+ if (!strncmp(name, state->prefix, state->prefix_length))
+ val = count_slashes(state->prefix);
+ else {
+ cp++;
+ if (!strncmp(cp, state->prefix, state->prefix_length))
+ val = count_slashes(state->prefix) + 1;
+ }
+ }
+ free(name);
+ return val;
+}
+
+/*
+ * Does the ---/+++ line have the POSIX timestamp after the last HT?
+ * GNU diff puts epoch there to signal a creation/deletion event. Is
+ * this such a timestamp?
+ */
+static int has_epoch_timestamp(const char *nameline)
+{
+ /*
+ * We are only interested in epoch timestamp; any non-zero
+ * fraction cannot be one, hence "(\.0+)?" in the regexp below.
+ * For the same reason, the date must be either 1969-12-31 or
+ * 1970-01-01, and the seconds part must be "00".
+ */
+ const char stamp_regexp[] =
+ "^(1969-12-31|1970-01-01)"
+ " "
+ "[0-2][0-9]:[0-5][0-9]:00(\\.0+)?"
+ " "
+ "([-+][0-2][0-9]:?[0-5][0-9])\n";
+ const char *timestamp = NULL, *cp, *colon;
+ static regex_t *stamp;
+ regmatch_t m[10];
+ int zoneoffset;
+ int hourminute;
+ int status;
+
+ for (cp = nameline; *cp != '\n'; cp++) {
+ if (*cp == '\t')
+ timestamp = cp + 1;
+ }
+ if (!timestamp)
+ return 0;
+ if (!stamp) {
+ stamp = xmalloc(sizeof(*stamp));
+ if (regcomp(stamp, stamp_regexp, REG_EXTENDED)) {
+ warning(_("Cannot prepare timestamp regexp %s"),
+ stamp_regexp);
+ return 0;
+ }
+ }
+
+ status = regexec(stamp, timestamp, ARRAY_SIZE(m), m, 0);
+ if (status) {
+ if (status != REG_NOMATCH)
+ warning(_("regexec returned %d for input: %s"),
+ status, timestamp);
+ return 0;
+ }
+
+ zoneoffset = strtol(timestamp + m[3].rm_so + 1, (char **) &colon, 10);
+ if (*colon == ':')
+ zoneoffset = zoneoffset * 60 + strtol(colon + 1, NULL, 10);
+ else
+ zoneoffset = (zoneoffset / 100) * 60 + (zoneoffset % 100);
+ if (timestamp[m[3].rm_so] == '-')
+ zoneoffset = -zoneoffset;
+
+ /*
+ * YYYY-MM-DD hh:mm:ss must be from either 1969-12-31
+ * (west of GMT) or 1970-01-01 (east of GMT)
+ */
+ if ((zoneoffset < 0 && memcmp(timestamp, "1969-12-31", 10)) ||
+ (0 <= zoneoffset && memcmp(timestamp, "1970-01-01", 10)))
+ return 0;
+
+ hourminute = (strtol(timestamp + 11, NULL, 10) * 60 +
+ strtol(timestamp + 14, NULL, 10) -
+ zoneoffset);
+
+ return ((zoneoffset < 0 && hourminute == 1440) ||
+ (0 <= zoneoffset && !hourminute));
+}
+
+/*
+ * Get the name etc info from the ---/+++ lines of a traditional patch header
+ *
+ * FIXME! The end-of-filename heuristics are kind of screwy. For existing
+ * files, we can happily check the index for a match, but for creating a
+ * new file we should try to match whatever "patch" does. I have no idea.
+ */
+static int parse_traditional_patch(struct apply_state *state,
+ const char *first,
+ const char *second,
+ struct patch *patch)
+{
+ char *name;
+
+ first += 4; /* skip "--- " */
+ second += 4; /* skip "+++ " */
+ if (!state->p_value_known) {
+ int p, q;
+ p = guess_p_value(state, first);
+ q = guess_p_value(state, second);
+ if (p < 0) p = q;
+ if (0 <= p && p == q) {
+ state->p_value = p;
+ state->p_value_known = 1;
+ }
+ }
+ if (is_dev_null(first)) {
+ patch->is_new = 1;
+ patch->is_delete = 0;
+ name = find_name_traditional(state, second, NULL, state->p_value);
+ patch->new_name = name;
+ } else if (is_dev_null(second)) {
+ patch->is_new = 0;
+ patch->is_delete = 1;
+ name = find_name_traditional(state, first, NULL, state->p_value);
+ patch->old_name = name;
+ } else {
+ char *first_name;
+ first_name = find_name_traditional(state, first, NULL, state->p_value);
+ name = find_name_traditional(state, second, first_name, state->p_value);
+ free(first_name);
+ if (has_epoch_timestamp(first)) {
+ patch->is_new = 1;
+ patch->is_delete = 0;
+ patch->new_name = name;
+ } else if (has_epoch_timestamp(second)) {
+ patch->is_new = 0;
+ patch->is_delete = 1;
+ patch->old_name = name;
+ } else {
+ patch->old_name = name;
+ patch->new_name = xstrdup_or_null(name);
+ }
+ }
+ if (!name)
+ return error(_("unable to find filename in patch at line %d"), state->linenr);
+
+ return 0;
+}
+
+static int gitdiff_hdrend(struct apply_state *state,
+ const char *line,
+ struct patch *patch)
+{
+ return 1;
+}
+
+/*
+ * We're anal about diff header consistency, to make
+ * sure that we don't end up having strange ambiguous
+ * patches floating around.
+ *
+ * As a result, gitdiff_{old|new}name() will check
+ * their names against any previous information, just
+ * to make sure..
+ */
+#define DIFF_OLD_NAME 0
+#define DIFF_NEW_NAME 1
+
+static int gitdiff_verify_name(struct apply_state *state,
+ const char *line,
+ int isnull,
+ char **name,
+ int side)
+{
+ if (!*name && !isnull) {
+ *name = find_name(state, line, NULL, state->p_value, TERM_TAB);
+ return 0;
+ }
+
+ if (*name) {
+ int len = strlen(*name);
+ char *another;
+ if (isnull)
+ return error(_("git apply: bad git-diff - expected /dev/null, got %s on line %d"),
+ *name, state->linenr);
+ another = find_name(state, line, NULL, state->p_value, TERM_TAB);
+ if (!another || memcmp(another, *name, len + 1)) {
+ free(another);
+ return error((side == DIFF_NEW_NAME) ?
+ _("git apply: bad git-diff - inconsistent new filename on line %d") :
+ _("git apply: bad git-diff - inconsistent old filename on line %d"), state->linenr);
+ }
+ free(another);
+ } else {
+ /* expect "/dev/null" */
+ if (memcmp("/dev/null", line, 9) || line[9] != '\n')
+ return error(_("git apply: bad git-diff - expected /dev/null on line %d"), state->linenr);
+ }
+
+ return 0;
+}
+
+static int gitdiff_oldname(struct apply_state *state,
+ const char *line,
+ struct patch *patch)
+{
+ return gitdiff_verify_name(state, line,
+ patch->is_new, &patch->old_name,
+ DIFF_OLD_NAME);
+}
+
+static int gitdiff_newname(struct apply_state *state,
+ const char *line,
+ struct patch *patch)
+{
+ return gitdiff_verify_name(state, line,
+ patch->is_delete, &patch->new_name,
+ DIFF_NEW_NAME);
+}
+
+static int gitdiff_oldmode(struct apply_state *state,
+ const char *line,
+ struct patch *patch)
+{
+ patch->old_mode = strtoul(line, NULL, 8);
+ return 0;
+}
+
+static int gitdiff_newmode(struct apply_state *state,
+ const char *line,
+ struct patch *patch)
+{
+ patch->new_mode = strtoul(line, NULL, 8);
+ return 0;
+}
+
+static int gitdiff_delete(struct apply_state *state,
+ const char *line,
+ struct patch *patch)
+{
+ patch->is_delete = 1;
+ free(patch->old_name);
+ patch->old_name = xstrdup_or_null(patch->def_name);
+ return gitdiff_oldmode(state, line, patch);
+}
+
+static int gitdiff_newfile(struct apply_state *state,
+ const char *line,
+ struct patch *patch)
+{
+ patch->is_new = 1;
+ free(patch->new_name);
+ patch->new_name = xstrdup_or_null(patch->def_name);
+ return gitdiff_newmode(state, line, patch);
+}
+
+static int gitdiff_copysrc(struct apply_state *state,
+ const char *line,
+ struct patch *patch)
+{
+ patch->is_copy = 1;
+ free(patch->old_name);
+ patch->old_name = find_name(state, line, NULL, state->p_value ? state->p_value - 1 : 0, 0);
+ return 0;
+}
+
+static int gitdiff_copydst(struct apply_state *state,
+ const char *line,
+ struct patch *patch)
+{
+ patch->is_copy = 1;
+ free(patch->new_name);
+ patch->new_name = find_name(state, line, NULL, state->p_value ? state->p_value - 1 : 0, 0);
+ return 0;
+}
+
+static int gitdiff_renamesrc(struct apply_state *state,
+ const char *line,
+ struct patch *patch)
+{
+ patch->is_rename = 1;
+ free(patch->old_name);
+ patch->old_name = find_name(state, line, NULL, state->p_value ? state->p_value - 1 : 0, 0);
+ return 0;
+}
+
+static int gitdiff_renamedst(struct apply_state *state,
+ const char *line,
+ struct patch *patch)
+{
+ patch->is_rename = 1;
+ free(patch->new_name);
+ patch->new_name = find_name(state, line, NULL, state->p_value ? state->p_value - 1 : 0, 0);
+ return 0;
+}
+
+static int gitdiff_similarity(struct apply_state *state,
+ const char *line,
+ struct patch *patch)
+{
+ unsigned long val = strtoul(line, NULL, 10);
+ if (val <= 100)
+ patch->score = val;
+ return 0;
+}
+
+static int gitdiff_dissimilarity(struct apply_state *state,
+ const char *line,
+ struct patch *patch)
+{
+ unsigned long val = strtoul(line, NULL, 10);
+ if (val <= 100)
+ patch->score = val;
+ return 0;
+}
+
+static int gitdiff_index(struct apply_state *state,
+ const char *line,
+ struct patch *patch)
+{
+ /*
+ * index line is N hexadecimal, "..", N hexadecimal,
+ * and optional space with octal mode.
+ */
+ const char *ptr, *eol;
+ int len;
+
+ ptr = strchr(line, '.');
+ if (!ptr || ptr[1] != '.' || 40 < ptr - line)
+ return 0;
+ len = ptr - line;
+ memcpy(patch->old_sha1_prefix, line, len);
+ patch->old_sha1_prefix[len] = 0;
+
+ line = ptr + 2;
+ ptr = strchr(line, ' ');
+ eol = strchrnul(line, '\n');
+
+ if (!ptr || eol < ptr)
+ ptr = eol;
+ len = ptr - line;
+
+ if (40 < len)
+ return 0;
+ memcpy(patch->new_sha1_prefix, line, len);
+ patch->new_sha1_prefix[len] = 0;
+ if (*ptr == ' ')
+ patch->old_mode = strtoul(ptr+1, NULL, 8);
+ return 0;
+}
+
+/*
+ * This is normal for a diff that doesn't change anything: we'll fall through
+ * into the next diff. Tell the parser to break out.
+ */
+static int gitdiff_unrecognized(struct apply_state *state,
+ const char *line,
+ struct patch *patch)
+{
+ return 1;
+}
+
+/*
+ * Skip p_value leading components from "line"; as we do not accept
+ * absolute paths, return NULL in that case.
+ */
+static const char *skip_tree_prefix(struct apply_state *state,
+ const char *line,
+ int llen)
+{
+ int nslash;
+ int i;
+
+ if (!state->p_value)
+ return (llen && line[0] == '/') ? NULL : line;
+
+ nslash = state->p_value;
+ for (i = 0; i < llen; i++) {
+ int ch = line[i];
+ if (ch == '/' && --nslash <= 0)
+ return (i == 0) ? NULL : &line[i + 1];
+ }
+ return NULL;
+}
+
+/*
+ * This is to extract the same name that appears on "diff --git"
+ * line. We do not find and return anything if it is a rename
+ * patch, and it is OK because we will find the name elsewhere.
+ * We need to reliably find name only when it is mode-change only,
+ * creation or deletion of an empty file. In any of these cases,
+ * both sides are the same name under a/ and b/ respectively.
+ */
+static char *git_header_name(struct apply_state *state,
+ const char *line,
+ int llen)
+{
+ const char *name;
+ const char *second = NULL;
+ size_t len, line_len;
+
+ line += strlen("diff --git ");
+ llen -= strlen("diff --git ");
+
+ if (*line == '"') {
+ const char *cp;
+ struct strbuf first = STRBUF_INIT;
+ struct strbuf sp = STRBUF_INIT;
+
+ if (unquote_c_style(&first, line, &second))
+ goto free_and_fail1;
+
+ /* strip the a/b prefix including trailing slash */
+ cp = skip_tree_prefix(state, first.buf, first.len);
+ if (!cp)
+ goto free_and_fail1;
+ strbuf_remove(&first, 0, cp - first.buf);
+
+ /*
+ * second points at one past closing dq of name.
+ * find the second name.
+ */
+ while ((second < line + llen) && isspace(*second))
+ second++;
+
+ if (line + llen <= second)
+ goto free_and_fail1;
+ if (*second == '"') {
+ if (unquote_c_style(&sp, second, NULL))
+ goto free_and_fail1;
+ cp = skip_tree_prefix(state, sp.buf, sp.len);
+ if (!cp)
+ goto free_and_fail1;
+ /* They must match, otherwise ignore */
+ if (strcmp(cp, first.buf))
+ goto free_and_fail1;
+ strbuf_release(&sp);
+ return strbuf_detach(&first, NULL);
+ }
+
+ /* unquoted second */
+ cp = skip_tree_prefix(state, second, line + llen - second);
+ if (!cp)
+ goto free_and_fail1;
+ if (line + llen - cp != first.len ||
+ memcmp(first.buf, cp, first.len))
+ goto free_and_fail1;
+ return strbuf_detach(&first, NULL);
+
+ free_and_fail1:
+ strbuf_release(&first);
+ strbuf_release(&sp);
+ return NULL;
+ }
+
+ /* unquoted first name */
+ name = skip_tree_prefix(state, line, llen);
+ if (!name)
+ return NULL;
+
+ /*
+ * since the first name is unquoted, a dq if exists must be
+ * the beginning of the second name.
+ */
+ for (second = name; second < line + llen; second++) {
+ if (*second == '"') {
+ struct strbuf sp = STRBUF_INIT;
+ const char *np;
+
+ if (unquote_c_style(&sp, second, NULL))
+ goto free_and_fail2;
+
+ np = skip_tree_prefix(state, sp.buf, sp.len);
+ if (!np)
+ goto free_and_fail2;
+
+ len = sp.buf + sp.len - np;
+ if (len < second - name &&
+ !strncmp(np, name, len) &&
+ isspace(name[len])) {
+ /* Good */
+ strbuf_remove(&sp, 0, np - sp.buf);
+ return strbuf_detach(&sp, NULL);
+ }
+
+ free_and_fail2:
+ strbuf_release(&sp);
+ return NULL;
+ }
+ }
+
+ /*
+ * Accept a name only if it shows up twice, exactly the same
+ * form.
+ */
+ second = strchr(name, '\n');
+ if (!second)
+ return NULL;
+ line_len = second - name;
+ for (len = 0 ; ; len++) {
+ switch (name[len]) {
+ default:
+ continue;
+ case '\n':
+ return NULL;
+ case '\t': case ' ':
+ /*
+ * Is this the separator between the preimage
+ * and the postimage pathname? Again, we are
+ * only interested in the case where there is
+ * no rename, as this is only to set def_name
+ * and a rename patch has the names elsewhere
+ * in an unambiguous form.
+ */
+ if (!name[len + 1])
+ return NULL; /* no postimage name */
+ second = skip_tree_prefix(state, name + len + 1,
+ line_len - (len + 1));
+ if (!second)
+ return NULL;
+ /*
+ * Does len bytes starting at "name" and "second"
+ * (that are separated by one HT or SP we just
+ * found) exactly match?
+ */
+ if (second[len] == '\n' && !strncmp(name, second, len))
+ return xmemdupz(name, len);
+ }
+ }
+}
+
+/* Verify that we recognize the lines following a git header */
+static int parse_git_header(struct apply_state *state,
+ const char *line,
+ int len,
+ unsigned int size,
+ struct patch *patch)
+{
+ unsigned long offset;
+
+ /* A git diff has explicit new/delete information, so we don't guess */
+ patch->is_new = 0;
+ patch->is_delete = 0;
+
+ /*
+ * Some things may not have the old name in the
+ * rest of the headers anywhere (pure mode changes,
+ * or removing or adding empty files), so we get
+ * the default name from the header.
+ */
+ patch->def_name = git_header_name(state, line, len);
+ if (patch->def_name && state->root.len) {
+ char *s = xstrfmt("%s%s", state->root.buf, patch->def_name);
+ free(patch->def_name);
+ patch->def_name = s;
+ }
+
+ line += len;
+ size -= len;
+ state->linenr++;
+ for (offset = len ; size > 0 ; offset += len, size -= len, line += len, state->linenr++) {
+ static const struct opentry {
+ const char *str;
+ int (*fn)(struct apply_state *, const char *, struct patch *);
+ } optable[] = {
+ { "@@ -", gitdiff_hdrend },
+ { "--- ", gitdiff_oldname },
+ { "+++ ", gitdiff_newname },
+ { "old mode ", gitdiff_oldmode },
+ { "new mode ", gitdiff_newmode },
+ { "deleted file mode ", gitdiff_delete },
+ { "new file mode ", gitdiff_newfile },
+ { "copy from ", gitdiff_copysrc },
+ { "copy to ", gitdiff_copydst },
+ { "rename old ", gitdiff_renamesrc },
+ { "rename new ", gitdiff_renamedst },
+ { "rename from ", gitdiff_renamesrc },
+ { "rename to ", gitdiff_renamedst },
+ { "similarity index ", gitdiff_similarity },
+ { "dissimilarity index ", gitdiff_dissimilarity },
+ { "index ", gitdiff_index },
+ { "", gitdiff_unrecognized },
+ };
+ int i;
+
+ len = linelen(line, size);
+ if (!len || line[len-1] != '\n')
+ break;
+ for (i = 0; i < ARRAY_SIZE(optable); i++) {
+ const struct opentry *p = optable + i;
+ int oplen = strlen(p->str);
+ int res;
+ if (len < oplen || memcmp(p->str, line, oplen))
+ continue;
+ res = p->fn(state, line + oplen, patch);
+ if (res < 0)
+ return -1;
+ if (res > 0)
+ return offset;
+ break;
+ }
+ }
+
+ return offset;
+}
+
+static int parse_num(const char *line, unsigned long *p)
+{
+ char *ptr;
+
+ if (!isdigit(*line))
+ return 0;
+ *p = strtoul(line, &ptr, 10);
+ return ptr - line;
+}
+
+static int parse_range(const char *line, int len, int offset, const char *expect,
+ unsigned long *p1, unsigned long *p2)
+{
+ int digits, ex;
+
+ if (offset < 0 || offset >= len)
+ return -1;
+ line += offset;
+ len -= offset;
+
+ digits = parse_num(line, p1);
+ if (!digits)
+ return -1;
+
+ offset += digits;
+ line += digits;
+ len -= digits;
+
+ *p2 = 1;
+ if (*line == ',') {
+ digits = parse_num(line+1, p2);
+ if (!digits)
+ return -1;
+
+ offset += digits+1;
+ line += digits+1;
+ len -= digits+1;
+ }
+
+ ex = strlen(expect);
+ if (ex > len)
+ return -1;
+ if (memcmp(line, expect, ex))
+ return -1;
+
+ return offset + ex;
+}
+
+static void recount_diff(const char *line, int size, struct fragment *fragment)
+{
+ int oldlines = 0, newlines = 0, ret = 0;
+
+ if (size < 1) {
+ warning("recount: ignore empty hunk");
+ return;
+ }
+
+ for (;;) {
+ int len = linelen(line, size);
+ size -= len;
+ line += len;
+
+ if (size < 1)
+ break;
+
+ switch (*line) {
+ case ' ': case '\n':
+ newlines++;
+ /* fall through */
+ case '-':
+ oldlines++;
+ continue;
+ case '+':
+ newlines++;
+ continue;
+ case '\\':
+ continue;
+ case '@':
+ ret = size < 3 || !starts_with(line, "@@ ");
+ break;
+ case 'd':
+ ret = size < 5 || !starts_with(line, "diff ");
+ break;
+ default:
+ ret = -1;
+ break;
+ }
+ if (ret) {
+ warning(_("recount: unexpected line: %.*s"),
+ (int)linelen(line, size), line);
+ return;
+ }
+ break;
+ }
+ fragment->oldlines = oldlines;
+ fragment->newlines = newlines;
+}
+
+/*
+ * Parse a unified diff fragment header of the
+ * form "@@ -a,b +c,d @@"
+ */
+static int parse_fragment_header(const char *line, int len, struct fragment *fragment)
+{
+ int offset;
+
+ if (!len || line[len-1] != '\n')
+ return -1;
+
+ /* Figure out the number of lines in a fragment */
+ offset = parse_range(line, len, 4, " +", &fragment->oldpos, &fragment->oldlines);
+ offset = parse_range(line, len, offset, " @@", &fragment->newpos, &fragment->newlines);
+
+ return offset;
+}
+
+/*
+ * Find file diff header
+ *
+ * Returns:
+ * -1 if no header was found
+ * -128 in case of error
+ * the size of the header in bytes (called "offset") otherwise
+ */
+static int find_header(struct apply_state *state,
+ const char *line,
+ unsigned long size,
+ int *hdrsize,
+ struct patch *patch)
+{
+ unsigned long offset, len;
+
+ patch->is_toplevel_relative = 0;
+ patch->is_rename = patch->is_copy = 0;
+ patch->is_new = patch->is_delete = -1;
+ patch->old_mode = patch->new_mode = 0;
+ patch->old_name = patch->new_name = NULL;
+ for (offset = 0; size > 0; offset += len, size -= len, line += len, state->linenr++) {
+ unsigned long nextlen;
+
+ len = linelen(line, size);
+ if (!len)
+ break;
+
+ /* Testing this early allows us to take a few shortcuts.. */
+ if (len < 6)
+ continue;
+
+ /*
+ * Make sure we don't find any unconnected patch fragments.
+ * That's a sign that we didn't find a header, and that a
+ * patch has become corrupted/broken up.
+ */
+ if (!memcmp("@@ -", line, 4)) {
+ struct fragment dummy;
+ if (parse_fragment_header(line, len, &dummy) < 0)
+ continue;
+ error(_("patch fragment without header at line %d: %.*s"),
+ state->linenr, (int)len-1, line);
+ return -128;
+ }
+
+ if (size < len + 6)
+ break;
+
+ /*
+ * Git patch? It might not have a real patch, just a rename
+ * or mode change, so we handle that specially
+ */
+ if (!memcmp("diff --git ", line, 11)) {
+ int git_hdr_len = parse_git_header(state, line, len, size, patch);
+ if (git_hdr_len < 0)
+ return -128;
+ if (git_hdr_len <= len)
+ continue;
+ if (!patch->old_name && !patch->new_name) {
+ if (!patch->def_name) {
+ error(Q_("git diff header lacks filename information when removing "
+ "%d leading pathname component (line %d)",
+ "git diff header lacks filename information when removing "
+ "%d leading pathname components (line %d)",
+ state->p_value),
+ state->p_value, state->linenr);
+ return -128;
+ }
+ patch->old_name = xstrdup(patch->def_name);
+ patch->new_name = xstrdup(patch->def_name);
+ }
+ if (!patch->is_delete && !patch->new_name) {
+ error(_("git diff header lacks filename information "
+ "(line %d)"), state->linenr);
+ return -128;
+ }
+ patch->is_toplevel_relative = 1;
+ *hdrsize = git_hdr_len;
+ return offset;
+ }
+
+ /* --- followed by +++ ? */
+ if (memcmp("--- ", line, 4) || memcmp("+++ ", line + len, 4))
+ continue;
+
+ /*
+ * We only accept unified patches, so we want it to
+ * at least have "@@ -a,b +c,d @@\n", which is 14 chars
+ * minimum ("@@ -0,0 +1 @@\n" is the shortest).
+ */
+ nextlen = linelen(line + len, size - len);
+ if (size < nextlen + 14 || memcmp("@@ -", line + len + nextlen, 4))
+ continue;
+
+ /* Ok, we'll consider it a patch */
+ if (parse_traditional_patch(state, line, line+len, patch))
+ return -128;
+ *hdrsize = len + nextlen;
+ state->linenr += 2;
+ return offset;
+ }
+ return -1;
+}
+
+static void record_ws_error(struct apply_state *state,
+ unsigned result,
+ const char *line,
+ int len,
+ int linenr)
+{
+ char *err;
+
+ if (!result)
+ return;
+
+ state->whitespace_error++;
+ if (state->squelch_whitespace_errors &&
+ state->squelch_whitespace_errors < state->whitespace_error)
+ return;
+
+ err = whitespace_error_string(result);
+ if (state->apply_verbosity > verbosity_silent)
+ fprintf(stderr, "%s:%d: %s.\n%.*s\n",
+ state->patch_input_file, linenr, err, len, line);
+ free(err);
+}
+
+static void check_whitespace(struct apply_state *state,
+ const char *line,
+ int len,
+ unsigned ws_rule)
+{
+ unsigned result = ws_check(line + 1, len - 1, ws_rule);
+
+ record_ws_error(state, result, line + 1, len - 2, state->linenr);
+}
+
+/*
+ * Parse a unified diff. Note that this really needs to parse each
+ * fragment separately, since the only way to know the difference
+ * between a "---" that is part of a patch, and a "---" that starts
+ * the next patch is to look at the line counts..
+ */
+static int parse_fragment(struct apply_state *state,
+ const char *line,
+ unsigned long size,
+ struct patch *patch,
+ struct fragment *fragment)
+{
+ int added, deleted;
+ int len = linelen(line, size), offset;
+ unsigned long oldlines, newlines;
+ unsigned long leading, trailing;
+
+ offset = parse_fragment_header(line, len, fragment);
+ if (offset < 0)
+ return -1;
+ if (offset > 0 && patch->recount)
+ recount_diff(line + offset, size - offset, fragment);
+ oldlines = fragment->oldlines;
+ newlines = fragment->newlines;
+ leading = 0;
+ trailing = 0;
+
+ /* Parse the thing.. */
+ line += len;
+ size -= len;
+ state->linenr++;
+ added = deleted = 0;
+ for (offset = len;
+ 0 < size;
+ offset += len, size -= len, line += len, state->linenr++) {
+ if (!oldlines && !newlines)
+ break;
+ len = linelen(line, size);
+ if (!len || line[len-1] != '\n')
+ return -1;
+ switch (*line) {
+ default:
+ return -1;
+ case '\n': /* newer GNU diff, an empty context line */
+ case ' ':
+ oldlines--;
+ newlines--;
+ if (!deleted && !added)
+ leading++;
+ trailing++;
+ if (!state->apply_in_reverse &&
+ state->ws_error_action == correct_ws_error)
+ check_whitespace(state, line, len, patch->ws_rule);
+ break;
+ case '-':
+ if (state->apply_in_reverse &&
+ state->ws_error_action != nowarn_ws_error)
+ check_whitespace(state, line, len, patch->ws_rule);
+ deleted++;
+ oldlines--;
+ trailing = 0;
+ break;
+ case '+':
+ if (!state->apply_in_reverse &&
+ state->ws_error_action != nowarn_ws_error)
+ check_whitespace(state, line, len, patch->ws_rule);
+ added++;
+ newlines--;
+ trailing = 0;
+ break;
+
+ /*
+ * We allow "\ No newline at end of file". Depending
+ * on locale settings when the patch was produced we
+ * don't know what this line looks like. The only
+ * thing we do know is that it begins with "\ ".
+ * Checking for 12 is just for sanity check -- any
+ * l10n of "\ No newline..." is at least that long.
+ */
+ case '\\':
+ if (len < 12 || memcmp(line, "\\ ", 2))
+ return -1;
+ break;
+ }
+ }
+ if (oldlines || newlines)
+ return -1;
+ if (!deleted && !added)
+ return -1;
+
+ fragment->leading = leading;
+ fragment->trailing = trailing;
+
+ /*
+ * If a fragment ends with an incomplete line, we failed to include
+ * it in the above loop because we hit oldlines == newlines == 0
+ * before seeing it.
+ */
+ if (12 < size && !memcmp(line, "\\ ", 2))
+ offset += linelen(line, size);
+
+ patch->lines_added += added;
+ patch->lines_deleted += deleted;
+
+ if (0 < patch->is_new && oldlines)
+ return error(_("new file depends on old contents"));
+ if (0 < patch->is_delete && newlines)
+ return error(_("deleted file still has contents"));
+ return offset;
+}
+
+/*
+ * We have seen "diff --git a/... b/..." header (or a traditional patch
+ * header). Read hunks that belong to this patch into fragments and hang
+ * them to the given patch structure.
+ *
+ * The (fragment->patch, fragment->size) pair points into the memory given
+ * by the caller, not a copy, when we return.
+ *
+ * Returns:
+ * -1 in case of error,
+ * the number of bytes in the patch otherwise.
+ */
+static int parse_single_patch(struct apply_state *state,
+ const char *line,
+ unsigned long size,
+ struct patch *patch)
+{
+ unsigned long offset = 0;
+ unsigned long oldlines = 0, newlines = 0, context = 0;
+ struct fragment **fragp = &patch->fragments;
+
+ while (size > 4 && !memcmp(line, "@@ -", 4)) {
+ struct fragment *fragment;
+ int len;
+
+ fragment = xcalloc(1, sizeof(*fragment));
+ fragment->linenr = state->linenr;
+ len = parse_fragment(state, line, size, patch, fragment);
+ if (len <= 0) {
+ free(fragment);
+ return error(_("corrupt patch at line %d"), state->linenr);
+ }
+ fragment->patch = line;
+ fragment->size = len;
+ oldlines += fragment->oldlines;
+ newlines += fragment->newlines;
+ context += fragment->leading + fragment->trailing;
+
+ *fragp = fragment;
+ fragp = &fragment->next;
+
+ offset += len;
+ line += len;
+ size -= len;
+ }
+
+ /*
+ * If something was removed (i.e. we have old-lines) it cannot
+ * be creation, and if something was added it cannot be
+ * deletion. However, the reverse is not true; --unified=0
+ * patches that only add are not necessarily creation even
+ * though they do not have any old lines, and ones that only
+ * delete are not necessarily deletion.
+ *
+ * Unfortunately, a real creation/deletion patch do _not_ have
+ * any context line by definition, so we cannot safely tell it
+ * apart with --unified=0 insanity. At least if the patch has
+ * more than one hunk it is not creation or deletion.
+ */
+ if (patch->is_new < 0 &&
+ (oldlines || (patch->fragments && patch->fragments->next)))
+ patch->is_new = 0;
+ if (patch->is_delete < 0 &&
+ (newlines || (patch->fragments && patch->fragments->next)))
+ patch->is_delete = 0;
+
+ if (0 < patch->is_new && oldlines)
+ return error(_("new file %s depends on old contents"), patch->new_name);
+ if (0 < patch->is_delete && newlines)
+ return error(_("deleted file %s still has contents"), patch->old_name);
+ if (!patch->is_delete && !newlines && context && state->apply_verbosity > verbosity_silent)
+ fprintf_ln(stderr,
+ _("** warning: "
+ "file %s becomes empty but is not deleted"),
+ patch->new_name);
+
+ return offset;
+}
+
+static inline int metadata_changes(struct patch *patch)
+{
+ return patch->is_rename > 0 ||
+ patch->is_copy > 0 ||
+ patch->is_new > 0 ||
+ patch->is_delete ||
+ (patch->old_mode && patch->new_mode &&
+ patch->old_mode != patch->new_mode);
+}
+
+static char *inflate_it(const void *data, unsigned long size,
+ unsigned long inflated_size)
+{
+ git_zstream stream;
+ void *out;
+ int st;
+
+ memset(&stream, 0, sizeof(stream));
+
+ stream.next_in = (unsigned char *)data;
+ stream.avail_in = size;
+ stream.next_out = out = xmalloc(inflated_size);
+ stream.avail_out = inflated_size;
+ git_inflate_init(&stream);
+ st = git_inflate(&stream, Z_FINISH);
+ git_inflate_end(&stream);
+ if ((st != Z_STREAM_END) || stream.total_out != inflated_size) {
+ free(out);
+ return NULL;
+ }
+ return out;
+}
+
+/*
+ * Read a binary hunk and return a new fragment; fragment->patch
+ * points at an allocated memory that the caller must free, so
+ * it is marked as "->free_patch = 1".
+ */
+static struct fragment *parse_binary_hunk(struct apply_state *state,
+ char **buf_p,
+ unsigned long *sz_p,
+ int *status_p,
+ int *used_p)
+{
+ /*
+ * Expect a line that begins with binary patch method ("literal"
+ * or "delta"), followed by the length of data before deflating.
+ * a sequence of 'length-byte' followed by base-85 encoded data
+ * should follow, terminated by a newline.
+ *
+ * Each 5-byte sequence of base-85 encodes up to 4 bytes,
+ * and we would limit the patch line to 66 characters,
+ * so one line can fit up to 13 groups that would decode
+ * to 52 bytes max. The length byte 'A'-'Z' corresponds
+ * to 1-26 bytes, and 'a'-'z' corresponds to 27-52 bytes.
+ */
+ int llen, used;
+ unsigned long size = *sz_p;
+ char *buffer = *buf_p;
+ int patch_method;
+ unsigned long origlen;
+ char *data = NULL;
+ int hunk_size = 0;
+ struct fragment *frag;
+
+ llen = linelen(buffer, size);
+ used = llen;
+
+ *status_p = 0;
+
+ if (starts_with(buffer, "delta ")) {
+ patch_method = BINARY_DELTA_DEFLATED;
+ origlen = strtoul(buffer + 6, NULL, 10);
+ }
+ else if (starts_with(buffer, "literal ")) {
+ patch_method = BINARY_LITERAL_DEFLATED;
+ origlen = strtoul(buffer + 8, NULL, 10);
+ }
+ else
+ return NULL;
+
+ state->linenr++;
+ buffer += llen;
+ while (1) {
+ int byte_length, max_byte_length, newsize;
+ llen = linelen(buffer, size);
+ used += llen;
+ state->linenr++;
+ if (llen == 1) {
+ /* consume the blank line */
+ buffer++;
+ size--;
+ break;
+ }
+ /*
+ * The minimum line is "A00000\n", which is 7 bytes long,
+ * and the line length must be a multiple of 5 plus 2.
+ */
+ if ((llen < 7) || (llen-2) % 5)
+ goto corrupt;
+ max_byte_length = (llen - 2) / 5 * 4;
+ byte_length = *buffer;
+ if ('A' <= byte_length && byte_length <= 'Z')
+ byte_length = byte_length - 'A' + 1;
+ else if ('a' <= byte_length && byte_length <= 'z')
+ byte_length = byte_length - 'a' + 27;
+ else
+ goto corrupt;
+ /* if the input length was not a multiple of 4, we would
+ * have filler at the end, but the filler should never
+ * exceed 3 bytes
+ */
+ if (max_byte_length < byte_length ||
+ byte_length <= max_byte_length - 4)
+ goto corrupt;
+ newsize = hunk_size + byte_length;
+ data = xrealloc(data, newsize);
+ if (decode_85(data + hunk_size, buffer + 1, byte_length))
+ goto corrupt;
+ hunk_size = newsize;
+ buffer += llen;
+ size -= llen;
+ }
+
+ frag = xcalloc(1, sizeof(*frag));
+ frag->patch = inflate_it(data, hunk_size, origlen);
+ frag->free_patch = 1;
+ if (!frag->patch)
+ goto corrupt;
+ free(data);
+ frag->size = origlen;
+ *buf_p = buffer;
+ *sz_p = size;
+ *used_p = used;
+ frag->binary_patch_method = patch_method;
+ return frag;
+
+ corrupt:
+ free(data);
+ *status_p = -1;
+ error(_("corrupt binary patch at line %d: %.*s"),
+ state->linenr-1, llen-1, buffer);
+ return NULL;
+}
+
+/*
+ * Returns:
+ * -1 in case of error,
+ * the length of the parsed binary patch otherwise
+ */
+static int parse_binary(struct apply_state *state,
+ char *buffer,
+ unsigned long size,
+ struct patch *patch)
+{
+ /*
+ * We have read "GIT binary patch\n"; what follows is a line
+ * that says the patch method (currently, either "literal" or
+ * "delta") and the length of data before deflating; a
+ * sequence of 'length-byte' followed by base-85 encoded data
+ * follows.
+ *
+ * When a binary patch is reversible, there is another binary
+ * hunk in the same format, starting with patch method (either
+ * "literal" or "delta") with the length of data, and a sequence
+ * of length-byte + base-85 encoded data, terminated with another
+ * empty line. This data, when applied to the postimage, produces
+ * the preimage.
+ */
+ struct fragment *forward;
+ struct fragment *reverse;
+ int status;
+ int used, used_1;
+
+ forward = parse_binary_hunk(state, &buffer, &size, &status, &used);
+ if (!forward && !status)
+ /* there has to be one hunk (forward hunk) */
+ return error(_("unrecognized binary patch at line %d"), state->linenr-1);
+ if (status)
+ /* otherwise we already gave an error message */
+ return status;
+
+ reverse = parse_binary_hunk(state, &buffer, &size, &status, &used_1);
+ if (reverse)
+ used += used_1;
+ else if (status) {
+ /*
+ * Not having a reverse hunk is not an error, but having
+ * a corrupt reverse hunk is.
+ */
+ free((void*) forward->patch);
+ free(forward);
+ return status;
+ }
+ forward->next = reverse;
+ patch->fragments = forward;
+ patch->is_binary = 1;
+ return used;
+}
+
+static void prefix_one(struct apply_state *state, char **name)
+{
+ char *old_name = *name;
+ if (!old_name)
+ return;
+ *name = xstrdup(prefix_filename(state->prefix, state->prefix_length, *name));
+ free(old_name);
+}
+
+static void prefix_patch(struct apply_state *state, struct patch *p)
+{
+ if (!state->prefix || p->is_toplevel_relative)
+ return;
+ prefix_one(state, &p->new_name);
+ prefix_one(state, &p->old_name);
+}
+
+/*
+ * include/exclude
+ */
+
+static void add_name_limit(struct apply_state *state,
+ const char *name,
+ int exclude)
+{
+ struct string_list_item *it;
+
+ it = string_list_append(&state->limit_by_name, name);
+ it->util = exclude ? NULL : (void *) 1;
+}
+
+static int use_patch(struct apply_state *state, struct patch *p)
+{
+ const char *pathname = p->new_name ? p->new_name : p->old_name;
+ int i;
+
+ /* Paths outside are not touched regardless of "--include" */
+ if (0 < state->prefix_length) {
+ int pathlen = strlen(pathname);
+ if (pathlen <= state->prefix_length ||
+ memcmp(state->prefix, pathname, state->prefix_length))
+ return 0;
+ }
+
+ /* See if it matches any of exclude/include rule */
+ for (i = 0; i < state->limit_by_name.nr; i++) {
+ struct string_list_item *it = &state->limit_by_name.items[i];
+ if (!wildmatch(it->string, pathname, 0, NULL))
+ return (it->util != NULL);
+ }
+
+ /*
+ * If we had any include rules, a path that does not match any rule
+ * is not used. Otherwise, we saw a bunch of exclude rules (or none)
+ * and such a path is used.
+ */
+ return !state->has_include;
+}
+
+/*
+ * Read the patch text in "buffer" that extends for "size" bytes; stop
+ * reading after seeing a single patch (i.e. changes to a single file).
+ * Create fragments (i.e. patch hunks) and attach them to the given patch.
+ *
+ * Returns:
+ * -1 if no header was found or parse_binary() failed,
+ * -128 on another error,
+ * the number of bytes consumed otherwise,
+ * so that the caller can call us again for the next patch.
+ */
+static int parse_chunk(struct apply_state *state, char *buffer, unsigned long size, struct patch *patch)
+{
+ int hdrsize, patchsize;
+ int offset = find_header(state, buffer, size, &hdrsize, patch);
+
+ if (offset < 0)
+ return offset;
+
+ prefix_patch(state, patch);
+
+ if (!use_patch(state, patch))
+ patch->ws_rule = 0;
+ else
+ patch->ws_rule = whitespace_rule(patch->new_name
+ ? patch->new_name
+ : patch->old_name);
+
+ patchsize = parse_single_patch(state,
+ buffer + offset + hdrsize,
+ size - offset - hdrsize,
+ patch);
+
+ if (patchsize < 0)
+ return -128;
+
+ if (!patchsize) {
+ static const char git_binary[] = "GIT binary patch\n";
+ int hd = hdrsize + offset;
+ unsigned long llen = linelen(buffer + hd, size - hd);
+
+ if (llen == sizeof(git_binary) - 1 &&
+ !memcmp(git_binary, buffer + hd, llen)) {
+ int used;
+ state->linenr++;
+ used = parse_binary(state, buffer + hd + llen,
+ size - hd - llen, patch);
+ if (used < 0)
+ return -1;
+ if (used)
+ patchsize = used + llen;
+ else
+ patchsize = 0;
+ }
+ else if (!memcmp(" differ\n", buffer + hd + llen - 8, 8)) {
+ static const char *binhdr[] = {
+ "Binary files ",
+ "Files ",
+ NULL,
+ };
+ int i;
+ for (i = 0; binhdr[i]; i++) {
+ int len = strlen(binhdr[i]);
+ if (len < size - hd &&
+ !memcmp(binhdr[i], buffer + hd, len)) {
+ state->linenr++;
+ patch->is_binary = 1;
+ patchsize = llen;
+ break;
+ }
+ }
+ }
+
+ /* An empty patch cannot be applied if it is a text patch
+ * without a metadata change. A binary patch appears
+ * empty to us here.
+ */
+ if ((state->apply || state->check) &&
+ (!patch->is_binary && !metadata_changes(patch))) {
+ error(_("patch with only garbage at line %d"), state->linenr);
+ return -128;
+ }
+ }
+
+ return offset + hdrsize + patchsize;
+}
+
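+/*
+ * Swap the byte contents of two lvalues of the same type, e.g.
+ * swap(p->new_name, p->old_name), by copying through a temporary
+ * buffer; used below to turn a patch into its reverse.
+ */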
+#define swap(a,b) myswap((a),(b),sizeof(a))
+
+#define myswap(a, b, size) do { \
+ unsigned char mytmp[size]; \
+ memcpy(mytmp, &a, size); \
+ memcpy(&a, &b, size); \
+ memcpy(&b, mytmp, size); \
+} while (0)
+
+static void reverse_patches(struct patch *p)
+{
+ for (; p; p = p->next) {
+ struct fragment *frag = p->fragments;
+
+ swap(p->new_name, p->old_name);
+ swap(p->new_mode, p->old_mode);
+ swap(p->is_new, p->is_delete);
+ swap(p->lines_added, p->lines_deleted);
+ swap(p->old_sha1_prefix, p->new_sha1_prefix);
+
+ for (; frag; frag = frag->next) {
+ swap(frag->newpos, frag->oldpos);
+ swap(frag->newlines, frag->oldlines);
+ }
+ }
+}
+
+static const char pluses[] =
+"++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++";
+static const char minuses[]=
+"----------------------------------------------------------------------";
+
+static void show_stats(struct apply_state *state, struct patch *patch)
+{
+ struct strbuf qname = STRBUF_INIT;
+ char *cp = patch->new_name ? patch->new_name : patch->old_name;
+ int max, add, del;
+
+ quote_c_style(cp, &qname, NULL, 0);
+
+ /*
+ * "scale" the filename
+ */
+ max = state->max_len;
+ if (max > 50)
+ max = 50;
+
+ if (qname.len > max) {
+ cp = strchr(qname.buf + qname.len + 3 - max, '/');
+ if (!cp)
+ cp = qname.buf + qname.len + 3 - max;
+ strbuf_splice(&qname, 0, cp - qname.buf, "...", 3);
+ }
+
+ if (patch->is_binary) {
+ printf(" %-*s | Bin\n", max, qname.buf);
+ strbuf_release(&qname);
+ return;
+ }
+
+ printf(" %-*s |", max, qname.buf);
+ strbuf_release(&qname);
+
+ /*
+ * scale the add/delete
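+ *
+ * Scale the number of '+'/'-' columns against the largest change
+ * in the patch (state->max_change), with rounding, so that the
+ * name and the graph together stay within about 70 columns.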
+ */
+ max = max + state->max_change > 70 ? 70 - max : state->max_change;
+ add = patch->lines_added;
+ del = patch->lines_deleted;
+
+ if (state->max_change > 0) {
+ int total = ((add + del) * max + state->max_change / 2) / state->max_change;
+ add = (add * max + state->max_change / 2) / state->max_change;
+ del = total - add;
+ }
+ printf("%5d %.*s%.*s\n", patch->lines_added + patch->lines_deleted,
+ add, pluses, del, minuses);
+}
+
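+/*
+ * Read the current on-disk contents of "path" into "buf": the link
+ * target for a symbolic link, or the file contents (run through
+ * convert_to_git() for the usual outside-world-to-Git conversions)
+ * for a regular file. Other file types are not supported.
+ */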
+static int read_old_data(struct stat *st, const char *path, struct strbuf *buf)
+{
+ switch (st->st_mode & S_IFMT) {
+ case S_IFLNK:
+ if (strbuf_readlink(buf, path, st->st_size) < 0)
+ return error(_("unable to read symlink %s"), path);
+ return 0;
+ case S_IFREG:
+ if (strbuf_read_file(buf, path, st->st_size) != st->st_size)
+ return error(_("unable to open or read %s"), path);
+ convert_to_git(path, buf->buf, buf->len, buf, 0);
+ return 0;
+ default:
+ return -1;
+ }
+}
+
+/*
+ * Update the preimage, and the common lines in postimage,
+ * from buffer buf of length len. If postlen is 0, the postimage
+ * is updated in place; otherwise it is updated in a new buffer
+ * of length postlen.
+ */
+
+static void update_pre_post_images(struct image *preimage,
+ struct image *postimage,
+ char *buf,
+ size_t len, size_t postlen)
+{
+ int i, ctx, reduced;
+ char *new, *old, *fixed;
+ struct image fixed_preimage;
+
+ /*
+ * Update the preimage with whitespace fixes. Note that we
+ * are not losing preimage->buf -- apply_one_fragment() will
+ * free "oldlines".
+ */
+ prepare_image(&fixed_preimage, buf, len, 1);
+ assert(postlen
+ ? fixed_preimage.nr == preimage->nr
+ : fixed_preimage.nr <= preimage->nr);
+ for (i = 0; i < fixed_preimage.nr; i++)
+ fixed_preimage.line[i].flag = preimage->line[i].flag;
+ free(preimage->line_allocated);
+ *preimage = fixed_preimage;
+
+ /*
+ * Adjust the common context lines in postimage. This can be
+ * done in-place when we are shrinking it with whitespace
+ * fixing, but needs a new buffer when ignoring whitespace or
+ * expanding leading tabs to spaces.
+ *
+ * We trust the caller to tell us if the update can be done
+ * in place (postlen==0) or not.
+ */
+ old = postimage->buf;
+ if (postlen)
+ new = postimage->buf = xmalloc(postlen);
+ else
+ new = old;
+ fixed = preimage->buf;
+
+ for (i = reduced = ctx = 0; i < postimage->nr; i++) {
+ size_t l_len = postimage->line[i].len;
+ if (!(postimage->line[i].flag & LINE_COMMON)) {
+ /* an added line -- no counterparts in preimage */
+ memmove(new, old, l_len);
+ old += l_len;
+ new += l_len;
+ continue;
+ }
+
+ /* a common context -- skip it in the original postimage */
+ old += l_len;
+
+ /* and find the corresponding one in the fixed preimage */
+ while (ctx < preimage->nr &&
+ !(preimage->line[ctx].flag & LINE_COMMON)) {
+ fixed += preimage->line[ctx].len;
+ ctx++;
+ }
+
+ /*
+ * The preimage is expected to run out if the caller
+ * fixed the addition of trailing blank lines.
+ */
+ if (preimage->nr <= ctx) {
+ reduced++;
+ continue;
+ }
+
+ /* and copy it in, while fixing the line length */
+ l_len = preimage->line[ctx].len;
+ memcpy(new, fixed, l_len);
+ new += l_len;
+ fixed += l_len;
+ postimage->line[i].len = l_len;
+ ctx++;
+ }
+
+ if (postlen
+ ? postlen < new - postimage->buf
+ : postimage->len < new - postimage->buf)
+ die("BUG: caller miscounted postlen: asked %d, orig = %d, used = %d",
+ (int)postlen, (int) postimage->len, (int)(new - postimage->buf));
+
+ /* Fix the length of the whole thing */
+ postimage->len = new - postimage->buf;
+ postimage->nr -= reduced;
+}
+
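+/*
+ * See if the preimage matches the target at "try"/"try_lno" when
+ * whitespace differences are ignored, comparing line by line with
+ * fuzzy_matchlines(). On a match, rewrite the preimage and the
+ * common context in the postimage to use the whitespace found in
+ * the target (falling back to the preimage's own whitespace for
+ * lines beyond the end of the file), and return 1; return 0
+ * otherwise.
+ */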
+static int line_by_line_fuzzy_match(struct image *img,
+ struct image *preimage,
+ struct image *postimage,
+ unsigned long try,
+ int try_lno,
+ int preimage_limit)
+{
+ int i;
+ size_t imgoff = 0;
+ size_t preoff = 0;
+ size_t postlen = postimage->len;
+ size_t extra_chars;
+ char *buf;
+ char *preimage_eof;
+ char *preimage_end;
+ struct strbuf fixed;
+ char *fixed_buf;
+ size_t fixed_len;
+
+ for (i = 0; i < preimage_limit; i++) {
+ size_t prelen = preimage->line[i].len;
+ size_t imglen = img->line[try_lno+i].len;
+
+ if (!fuzzy_matchlines(img->buf + try + imgoff, imglen,
+ preimage->buf + preoff, prelen))
+ return 0;
+ if (preimage->line[i].flag & LINE_COMMON)
+ postlen += imglen - prelen;
+ imgoff += imglen;
+ preoff += prelen;
+ }
+
+ /*
+ * Ok, the preimage matches with whitespace fuzz.
+ *
+ * imgoff now holds the true length of the target that
+ * matches the preimage before the end of the file.
+ *
+ * Count the number of characters in the preimage that fall
+ * beyond the end of the file and make sure that all of them
+ * are whitespace characters. (This can only happen if
+ * we are removing blank lines at the end of the file.)
+ */
+ buf = preimage_eof = preimage->buf + preoff;
+ for ( ; i < preimage->nr; i++)
+ preoff += preimage->line[i].len;
+ preimage_end = preimage->buf + preoff;
+ for ( ; buf < preimage_end; buf++)
+ if (!isspace(*buf))
+ return 0;
+
+ /*
+ * Update the preimage and the common postimage context
+ * lines to use the same whitespace as the target.
+ * If whitespace is missing in the target (i.e.
+ * if the preimage extends beyond the end of the file),
+ * use the whitespace from the preimage.
+ */
+ extra_chars = preimage_end - preimage_eof;
+ strbuf_init(&fixed, imgoff + extra_chars);
+ strbuf_add(&fixed, img->buf + try, imgoff);
+ strbuf_add(&fixed, preimage_eof, extra_chars);
+ fixed_buf = strbuf_detach(&fixed, &fixed_len);
+ update_pre_post_images(preimage, postimage,
+ fixed_buf, fixed_len, postlen);
+ return 1;
+}
+
+static int match_fragment(struct apply_state *state,
+ struct image *img,
+ struct image *preimage,
+ struct image *postimage,
+ unsigned long try,
+ int try_lno,
+ unsigned ws_rule,
+ int match_beginning, int match_end)
+{
+ int i;
+ char *fixed_buf, *buf, *orig, *target;
+ struct strbuf fixed;
+ size_t fixed_len, postlen;
+ int preimage_limit;
+
+ if (preimage->nr + try_lno <= img->nr) {
+ /*
+ * The hunk falls within the boundaries of img.
+ */
+ preimage_limit = preimage->nr;
+ if (match_end && (preimage->nr + try_lno != img->nr))
+ return 0;
+ } else if (state->ws_error_action == correct_ws_error &&
+ (ws_rule & WS_BLANK_AT_EOF)) {
+ /*
+ * This hunk extends beyond the end of img, and we are
+ * removing blank lines at the end of the file. This
+ * many lines from the beginning of the preimage must
+ * match with img, and the remainder of the preimage
+ * must be blank.
+ */
+ preimage_limit = img->nr - try_lno;
+ } else {
+ /*
+ * The hunk extends beyond the end of the img and
+ * we are not removing blanks at the end, so we
+ * should reject the hunk at this position.
+ */
+ return 0;
+ }
+
+ if (match_beginning && try_lno)
+ return 0;
+
+ /* Quick hash check */
+ for (i = 0; i < preimage_limit; i++)
+ if ((img->line[try_lno + i].flag & LINE_PATCHED) ||
+ (preimage->line[i].hash != img->line[try_lno + i].hash))
+ return 0;
+
+ if (preimage_limit == preimage->nr) {
+ /*
+ * Do we have an exact match? If we were told to match
+ * at the end, size must be exactly at try+fragsize,
+ * otherwise try+fragsize must be still within the preimage,
+ * and in either case, the old piece should match the preimage
+ * exactly.
+ */
+ if ((match_end
+ ? (try + preimage->len == img->len)
+ : (try + preimage->len <= img->len)) &&
+ !memcmp(img->buf + try, preimage->buf, preimage->len))
+ return 1;
+ } else {
+ /*
+ * The preimage extends beyond the end of img, so
+ * there cannot be an exact match.
+ *
+ * There must be one non-blank context line that matches
+ * a line before the end of img.
+ */
+ char *buf_end;
+
+ buf = preimage->buf;
+ buf_end = buf;
+ for (i = 0; i < preimage_limit; i++)
+ buf_end += preimage->line[i].len;
+
+ for ( ; buf < buf_end; buf++)
+ if (!isspace(*buf))
+ break;
+ if (buf == buf_end)
+ return 0;
+ }
+
+ /*
+ * No exact match. If we are ignoring whitespace, run a line-by-line
+ * fuzzy matching. We collect all the line length information because
+ * we need it to adjust whitespace if we match.
+ */
+ if (state->ws_ignore_action == ignore_ws_change)
+ return line_by_line_fuzzy_match(img, preimage, postimage,
+ try, try_lno, preimage_limit);
+
+ if (state->ws_error_action != correct_ws_error)
+ return 0;
+
+ /*
+ * The hunk does not apply byte-by-byte, but the hash says
+ * it might with whitespace fuzz. We weren't asked to
+ * ignore whitespace, we were asked to correct whitespace
+ * errors, so let's try matching after whitespace correction.
+ *
+ * While checking the preimage against the target with whitespace
+ * errors in both fixed, we count how large the corresponding
+ * postimage needs to be. The postimage prepared by
+ * apply_one_fragment() has whitespace errors fixed on added
+ * lines already, but the common lines were propagated as-is,
+ * which may become longer when their whitespace errors are
+ * fixed.
+ */
+
+ /* First count added lines in postimage */
+ postlen = 0;
+ for (i = 0; i < postimage->nr; i++) {
+ if (!(postimage->line[i].flag & LINE_COMMON))
+ postlen += postimage->line[i].len;
+ }
+
+ /*
+ * The preimage may extend beyond the end of the file,
+ * but in this loop we will only handle the part of the
+ * preimage that falls within the file.
+ */
+ strbuf_init(&fixed, preimage->len + 1);
+ orig = preimage->buf;
+ target = img->buf + try;
+ for (i = 0; i < preimage_limit; i++) {
+ size_t oldlen = preimage->line[i].len;
+ size_t tgtlen = img->line[try_lno + i].len;
+ size_t fixstart = fixed.len;
+ struct strbuf tgtfix;
+ int match;
+
+ /* Try fixing the line in the preimage */
+ ws_fix_copy(&fixed, orig, oldlen, ws_rule, NULL);
+
+ /* Try fixing the line in the target */
+ strbuf_init(&tgtfix, tgtlen);
+ ws_fix_copy(&tgtfix, target, tgtlen, ws_rule, NULL);
+
+ /*
+ * If they match, either the preimage was based on
+ * a version before our tree fixed whitespace breakage,
+ * or we are lacking a whitespace-fix patch that the tree
+ * the preimage was based on already had (i.e. the target
+ * has whitespace breakage, the preimage doesn't).
+ * In either case, we are fixing the whitespace breakages
+ * so we might as well take the fix together with their
+ * real change.
+ */
+ match = (tgtfix.len == fixed.len - fixstart &&
+ !memcmp(tgtfix.buf, fixed.buf + fixstart,
+ fixed.len - fixstart));
+
+ /* Add the length if this is common with the postimage */
+ if (preimage->line[i].flag & LINE_COMMON)
+ postlen += tgtfix.len;
+
+ strbuf_release(&tgtfix);
+ if (!match)
+ goto unmatch_exit;
+
+ orig += oldlen;
+ target += tgtlen;
+ }
+
+
+ /*
+ * Now handle the lines in the preimage that fall beyond the
+ * end of the file (if any). They will only match if they are
+ * empty or only contain whitespace (if WS_BLANK_AT_EOL is
+ * false).
+ */
+ for ( ; i < preimage->nr; i++) {
+ size_t fixstart = fixed.len; /* start of the fixed preimage */
+ size_t oldlen = preimage->line[i].len;
+ int j;
+
+ /* Try fixing the line in the preimage */
+ ws_fix_copy(&fixed, orig, oldlen, ws_rule, NULL);
+
+ for (j = fixstart; j < fixed.len; j++)
+ if (!isspace(fixed.buf[j]))
+ goto unmatch_exit;
+
+ orig += oldlen;
+ }
+
+ /*
+ * Yes, the preimage is based on an older version that still
+ * has whitespace breakages unfixed, and fixing them makes the
+ * hunk match. Update the context lines in the postimage.
+ */
+ fixed_buf = strbuf_detach(&fixed, &fixed_len);
+ if (postlen < postimage->len)
+ postlen = 0;
+ update_pre_post_images(preimage, postimage,
+ fixed_buf, fixed_len, postlen);
+ return 1;
+
+ unmatch_exit:
+ strbuf_release(&fixed);
+ return 0;
+}
+
+static int find_pos(struct apply_state *state,
+ struct image *img,
+ struct image *preimage,
+ struct image *postimage,
+ int line,
+ unsigned ws_rule,
+ int match_beginning, int match_end)
+{
+ int i;
+ unsigned long backwards, forwards, try;
+ int backwards_lno, forwards_lno, try_lno;
+
+ /*
+ * If match_beginning or match_end is specified, there is no
+ * point starting from a wrong line that will never match,
+ * wandering around and waiting for a match at the specified end.
+ */
+ if (match_beginning)
+ line = 0;
+ else if (match_end)
+ line = img->nr - preimage->nr;
+
+ /*
+ * Because the comparison is unsigned, the following test
+ * will also take care of a negative line number that can
+ * result when match_end is set and the preimage is larger than the target.
+ */
+ if ((size_t) line > img->nr)
+ line = img->nr;
+
+ try = 0;
+ for (i = 0; i < line; i++)
+ try += img->line[i].len;
+
+ /*
+ * There's probably some smart way to do this, but I'll leave
+ * that to the smart and beautiful people. I'm simple and stupid.
+ */
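+ /*
+ * Probe the hinted position first, then alternately try
+ * positions one more line forwards and one more line backwards
+ * from it, until a match is found or both ends of the image
+ * have been reached.
+ */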
+ backwards = try;
+ backwards_lno = line;
+ forwards = try;
+ forwards_lno = line;
+ try_lno = line;
+
+ for (i = 0; ; i++) {
+ if (match_fragment(state, img, preimage, postimage,
+ try, try_lno, ws_rule,
+ match_beginning, match_end))
+ return try_lno;
+
+ again:
+ if (backwards_lno == 0 && forwards_lno == img->nr)
+ break;
+
+ if (i & 1) {
+ if (backwards_lno == 0) {
+ i++;
+ goto again;
+ }
+ backwards_lno--;
+ backwards -= img->line[backwards_lno].len;
+ try = backwards;
+ try_lno = backwards_lno;
+ } else {
+ if (forwards_lno == img->nr) {
+ i++;
+ goto again;
+ }
+ forwards += img->line[forwards_lno].len;
+ forwards_lno++;
+ try = forwards;
+ try_lno = forwards_lno;
+ }
+
+ }
+ return -1;
+}
+
+static void remove_first_line(struct image *img)
+{
+ img->buf += img->line[0].len;
+ img->len -= img->line[0].len;
+ img->line++;
+ img->nr--;
+}
+
+static void remove_last_line(struct image *img)
+{
+ img->len -= img->line[--img->nr].len;
+}
+
+/*
+ * The change from "preimage" and "postimage" has been found to
+ * apply at applied_pos (counts in line numbers) in "img".
+ * Update "img" to remove "preimage" and replace it with "postimage".
+ */
+static void update_image(struct apply_state *state,
+ struct image *img,
+ int applied_pos,
+ struct image *preimage,
+ struct image *postimage)
+{
+ /*
+ * remove the copy of preimage at offset in img
+ * and replace it with postimage
+ */
+ int i, nr;
+ size_t remove_count, insert_count, applied_at = 0;
+ char *result;
+ int preimage_limit;
+
+ /*
+ * If we are removing blank lines at the end of img,
+ * the preimage may extend beyond the end.
+ * If that is the case, we must be careful only to
+ * remove the part of the preimage that falls within
+ * the boundaries of img. Initialize preimage_limit
+ * to the number of lines in the preimage that fall
+ * within the boundaries.
+ */
+ preimage_limit = preimage->nr;
+ if (preimage_limit > img->nr - applied_pos)
+ preimage_limit = img->nr - applied_pos;
+
+ for (i = 0; i < applied_pos; i++)
+ applied_at += img->line[i].len;
+
+ remove_count = 0;
+ for (i = 0; i < preimage_limit; i++)
+ remove_count += img->line[applied_pos + i].len;
+ insert_count = postimage->len;
+
+ /* Adjust the contents */
+ result = xmalloc(st_add3(st_sub(img->len, remove_count), insert_count, 1));
+ memcpy(result, img->buf, applied_at);
+ memcpy(result + applied_at, postimage->buf, postimage->len);
+ memcpy(result + applied_at + postimage->len,
+ img->buf + (applied_at + remove_count),
+ img->len - (applied_at + remove_count));
+ free(img->buf);
+ img->buf = result;
+ img->len += insert_count - remove_count;
+ result[img->len] = '\0';
+
+ /* Adjust the line table */
+ nr = img->nr + postimage->nr - preimage_limit;
+ if (preimage_limit < postimage->nr) {
+ /*
+ * NOTE: this knows that we never call remove_first_line()
+ * on anything other than pre/post image.
+ */
+ REALLOC_ARRAY(img->line, nr);
+ img->line_allocated = img->line;
+ }
+ if (preimage_limit != postimage->nr)
+ memmove(img->line + applied_pos + postimage->nr,
+ img->line + applied_pos + preimage_limit,
+ (img->nr - (applied_pos + preimage_limit)) *
+ sizeof(*img->line));
+ memcpy(img->line + applied_pos,
+ postimage->line,
+ postimage->nr * sizeof(*img->line));
+ if (!state->allow_overlap)
+ for (i = 0; i < postimage->nr; i++)
+ img->line[applied_pos + i].flag |= LINE_PATCHED;
+ img->nr = nr;
+}
+
+/*
+ * Use the patch-hunk text in "frag" to prepare two images (preimage and
+ * postimage) for the hunk. Find lines that match "preimage" in "img" and
+ * replace the part of "img" with "postimage" text.
+ */
+static int apply_one_fragment(struct apply_state *state,
+ struct image *img, struct fragment *frag,
+ int inaccurate_eof, unsigned ws_rule,
+ int nth_fragment)
+{
+ int match_beginning, match_end;
+ const char *patch = frag->patch;
+ int size = frag->size;
+ char *old, *oldlines;
+ struct strbuf newlines;
+ int new_blank_lines_at_end = 0;
+ int found_new_blank_lines_at_end = 0;
+ int hunk_linenr = frag->linenr;
+ unsigned long leading, trailing;
+ int pos, applied_pos;
+ struct image preimage;
+ struct image postimage;
+
+ memset(&preimage, 0, sizeof(preimage));
+ memset(&postimage, 0, sizeof(postimage));
+ oldlines = xmalloc(size);
+ strbuf_init(&newlines, size);
+
+ old = oldlines;
+ while (size > 0) {
+ char first;
+ int len = linelen(patch, size);
+ int plen;
+ int added_blank_line = 0;
+ int is_blank_context = 0;
+ size_t start;
+
+ if (!len)
+ break;
+
+ /*
+ * "plen" is how much of the line we should use for
+ * the actual patch data. Normally we just remove the
+ * first character on the line, but if the line is
+ * followed by "\ No newline", then we also remove the
+ * last one (which is the newline, of course).
+ */
+ plen = len - 1;
+ if (len < size && patch[len] == '\\')
+ plen--;
+ first = *patch;
+ if (state->apply_in_reverse) {
+ if (first == '-')
+ first = '+';
+ else if (first == '+')
+ first = '-';
+ }
+
+ switch (first) {
+ case '\n':
+ /* Newer GNU diff, empty context line */
+ if (plen < 0)
+ /* ... followed by '\ No newline'; nothing */
+ break;
+ *old++ = '\n';
+ strbuf_addch(&newlines, '\n');
+ add_line_info(&preimage, "\n", 1, LINE_COMMON);
+ add_line_info(&postimage, "\n", 1, LINE_COMMON);
+ is_blank_context = 1;
+ break;
+ case ' ':
+ if (plen && (ws_rule & WS_BLANK_AT_EOF) &&
+ ws_blank_line(patch + 1, plen, ws_rule))
+ is_blank_context = 1;
+ case '-':
+ memcpy(old, patch + 1, plen);
+ add_line_info(&preimage, old, plen,
+ (first == ' ' ? LINE_COMMON : 0));
+ old += plen;
+ if (first == '-')
+ break;
+ /* Fall-through for ' ' */
+ case '+':
+ /* --no-add does not add new lines */
+ if (first == '+' && state->no_add)
+ break;
+
+ start = newlines.len;
+ if (first != '+' ||
+ !state->whitespace_error ||
+ state->ws_error_action != correct_ws_error) {
+ strbuf_add(&newlines, patch + 1, plen);
+ }
+ else {
+ ws_fix_copy(&newlines, patch + 1, plen, ws_rule, &state->applied_after_fixing_ws);
+ }
+ add_line_info(&postimage, newlines.buf + start, newlines.len - start,
+ (first == '+' ? 0 : LINE_COMMON));
+ if (first == '+' &&
+ (ws_rule & WS_BLANK_AT_EOF) &&
+ ws_blank_line(patch + 1, plen, ws_rule))
+ added_blank_line = 1;
+ break;
+ case '@': case '\\':
+ /* Ignore it, we already handled it */
+ break;
+ default:
+ if (state->apply_verbosity > verbosity_normal)
+ error(_("invalid start of line: '%c'"), first);
+ applied_pos = -1;
+ goto out;
+ }
+ if (added_blank_line) {
+ if (!new_blank_lines_at_end)
+ found_new_blank_lines_at_end = hunk_linenr;
+ new_blank_lines_at_end++;
+ }
+ else if (is_blank_context)
+ ;
+ else
+ new_blank_lines_at_end = 0;
+ patch += len;
+ size -= len;
+ hunk_linenr++;
+ }
+ if (inaccurate_eof &&
+ old > oldlines && old[-1] == '\n' &&
+ newlines.len > 0 && newlines.buf[newlines.len - 1] == '\n') {
+ old--;
+ strbuf_setlen(&newlines, newlines.len - 1);
+ }
+
+ leading = frag->leading;
+ trailing = frag->trailing;
+
+ /*
+ * A hunk to change lines at the beginning would begin with
+ * @@ -1,L +N,M @@
+ * but we need to be careful: a -U0 hunk that inserts before the
+ * second line also has this pattern.
+ *
+ * And a hunk to add to an empty file would begin with
+ * @@ -0,0 +N,M @@
+ *
+ * In other words, a hunk that is (frag->oldpos <= 1) with or
+ * without leading context must match at the beginning.
+ */
+ match_beginning = (!frag->oldpos ||
+ (frag->oldpos == 1 && !state->unidiff_zero));
+
+ /*
+ * A hunk without trailing lines must match at the end.
+ * However, we simply cannot tell if a hunk must match the end
+ * from the lack of trailing lines if the patch was generated
+ * with unidiff without any context.
+ */
+ match_end = !state->unidiff_zero && !trailing;
+
+ pos = frag->newpos ? (frag->newpos - 1) : 0;
+ preimage.buf = oldlines;
+ preimage.len = old - oldlines;
+ postimage.buf = newlines.buf;
+ postimage.len = newlines.len;
+ preimage.line = preimage.line_allocated;
+ postimage.line = postimage.line_allocated;
+
+ for (;;) {
+
+ applied_pos = find_pos(state, img, &preimage, &postimage, pos,
+ ws_rule, match_beginning, match_end);
+
+ if (applied_pos >= 0)
+ break;
+
+ /* Am I at my context limits? */
+ if ((leading <= state->p_context) && (trailing <= state->p_context))
+ break;
+ if (match_beginning || match_end) {
+ match_beginning = match_end = 0;
+ continue;
+ }
+
+ /*
+ * Reduce the number of context lines; reduce both
+ * leading and trailing if they are equal, otherwise
+ * just reduce the larger context.
+ */
+ if (leading >= trailing) {
+ remove_first_line(&preimage);
+ remove_first_line(&postimage);
+ pos--;
+ leading--;
+ }
+ if (trailing > leading) {
+ remove_last_line(&preimage);
+ remove_last_line(&postimage);
+ trailing--;
+ }
+ }
+
+ if (applied_pos >= 0) {
+ if (new_blank_lines_at_end &&
+ preimage.nr + applied_pos >= img->nr &&
+ (ws_rule & WS_BLANK_AT_EOF) &&
+ state->ws_error_action != nowarn_ws_error) {
+ record_ws_error(state, WS_BLANK_AT_EOF, "+", 1,
+ found_new_blank_lines_at_end);
+ if (state->ws_error_action == correct_ws_error) {
+ while (new_blank_lines_at_end--)
+ remove_last_line(&postimage);
+ }
+ /*
+ * We want to prevent write_out_results()
+ * from taking place in apply_patch(); the callchain
+ * that led us here is:
+ * apply_patch->check_patch_list->check_patch->
+ * apply_data->apply_fragments->apply_one_fragment
+ */
+ if (state->ws_error_action == die_on_ws_error)
+ state->apply = 0;
+ }
+
+ if (state->apply_verbosity > verbosity_normal && applied_pos != pos) {
+ int offset = applied_pos - pos;
+ if (state->apply_in_reverse)
+ offset = 0 - offset;
+ fprintf_ln(stderr,
+ Q_("Hunk #%d succeeded at %d (offset %d line).",
+ "Hunk #%d succeeded at %d (offset %d lines).",
+ offset),
+ nth_fragment, applied_pos + 1, offset);
+ }
+
+ /*
+ * Warn if it was necessary to reduce the number
+ * of context lines.
+ */
+ if ((leading != frag->leading ||
+ trailing != frag->trailing) && state->apply_verbosity > verbosity_silent)
+ fprintf_ln(stderr, _("Context reduced to (%ld/%ld)"
+ " to apply fragment at %d"),
+ leading, trailing, applied_pos+1);
+ update_image(state, img, applied_pos, &preimage, &postimage);
+ } else {
+ if (state->apply_verbosity > verbosity_normal)
+ error(_("while searching for:\n%.*s"),
+ (int)(old - oldlines), oldlines);
+ }
+
+out:
+ free(oldlines);
+ strbuf_release(&newlines);
+ free(preimage.line_allocated);
+ free(postimage.line_allocated);
+
+ return (applied_pos < 0);
+}
+
+static int apply_binary_fragment(struct apply_state *state,
+ struct image *img,
+ struct patch *patch)
+{
+ struct fragment *fragment = patch->fragments;
+ unsigned long len;
+ void *dst;
+
+ if (!fragment)
+ return error(_("missing binary patch data for '%s'"),
+ patch->new_name ?
+ patch->new_name :
+ patch->old_name);
+
+ /* Binary patch is irreversible without the optional second hunk */
+ if (state->apply_in_reverse) {
+ if (!fragment->next)
+ return error(_("cannot reverse-apply a binary patch "
+ "without the reverse hunk to '%s'"),
+ patch->new_name
+ ? patch->new_name : patch->old_name);
+ fragment = fragment->next;
+ }
+ switch (fragment->binary_patch_method) {
+ case BINARY_DELTA_DEFLATED:
+ dst = patch_delta(img->buf, img->len, fragment->patch,
+ fragment->size, &len);
+ if (!dst)
+ return -1;
+ clear_image(img);
+ img->buf = dst;
+ img->len = len;
+ return 0;
+ case BINARY_LITERAL_DEFLATED:
+ clear_image(img);
+ img->len = fragment->size;
+ img->buf = xmemdupz(fragment->patch, img->len);
+ return 0;
+ }
+ return -1;
+}
+
+/*
+ * Replace "img" with the result of applying the binary patch.
+ * The binary patch data itself in patch->fragment is still kept
+ * but the preimage prepared by the caller in "img" is freed here
+ * or in the helper function apply_binary_fragment() this calls.
+ */
+static int apply_binary(struct apply_state *state,
+ struct image *img,
+ struct patch *patch)
+{
+ const char *name = patch->old_name ? patch->old_name : patch->new_name;
+ struct object_id oid;
+
+ /*
+ * For safety, we require the patch index line to contain the
+ * full 40-byte textual SHA-1 for old and new, at least for now.
+ */
+ if (strlen(patch->old_sha1_prefix) != 40 ||
+ strlen(patch->new_sha1_prefix) != 40 ||
+ get_oid_hex(patch->old_sha1_prefix, &oid) ||
+ get_oid_hex(patch->new_sha1_prefix, &oid))
+ return error(_("cannot apply binary patch to '%s' "
+ "without full index line"), name);
+
+ if (patch->old_name) {
+ /*
+ * See if the old one matches what the patch
+ * applies to.
+ */
+ hash_sha1_file(img->buf, img->len, blob_type, oid.hash);
+ if (strcmp(oid_to_hex(&oid), patch->old_sha1_prefix))
+ return error(_("the patch applies to '%s' (%s), "
+ "which does not match the "
+ "current contents."),
+ name, oid_to_hex(&oid));
+ }
+ else {
+ /* Otherwise, the old one must be empty. */
+ if (img->len)
+ return error(_("the patch applies to an empty "
+ "'%s' but it is not empty"), name);
+ }
+
+ get_oid_hex(patch->new_sha1_prefix, &oid);
+ if (is_null_oid(&oid)) {
+ clear_image(img);
+ return 0; /* deletion patch */
+ }
+
+ if (has_sha1_file(oid.hash)) {
+ /* We already have the postimage */
+ enum object_type type;
+ unsigned long size;
+ char *result;
+
+ result = read_sha1_file(oid.hash, &type, &size);
+ if (!result)
+ return error(_("the necessary postimage %s for "
+ "'%s' cannot be read"),
+ patch->new_sha1_prefix, name);
+ clear_image(img);
+ img->buf = result;
+ img->len = size;
+ } else {
+ /*
+ * We have verified buf matches the preimage;
+ * apply the patch data to it, which is stored
+ * in the patch->fragments->{patch,size}.
+ */
+ if (apply_binary_fragment(state, img, patch))
+ return error(_("binary patch does not apply to '%s'"),
+ name);
+
+ /* verify that the result matches */
+ hash_sha1_file(img->buf, img->len, blob_type, oid.hash);
+ if (strcmp(oid_to_hex(&oid), patch->new_sha1_prefix))
+ return error(_("binary patch to '%s' creates incorrect result (expecting %s, got %s)"),
+ name, patch->new_sha1_prefix, oid_to_hex(&oid));
+ }
+
+ return 0;
+}
+
+static int apply_fragments(struct apply_state *state, struct image *img, struct patch *patch)
+{
+ struct fragment *frag = patch->fragments;
+ const char *name = patch->old_name ? patch->old_name : patch->new_name;
+ unsigned ws_rule = patch->ws_rule;
+ unsigned inaccurate_eof = patch->inaccurate_eof;
+ int nth = 0;
+
+ if (patch->is_binary)
+ return apply_binary(state, img, patch);
+
+ while (frag) {
+ nth++;
+ if (apply_one_fragment(state, img, frag, inaccurate_eof, ws_rule, nth)) {
+ error(_("patch failed: %s:%ld"), name, frag->oldpos);
+ if (!state->apply_with_reject)
+ return -1;
+ frag->rejected = 1;
+ }
+ frag = frag->next;
+ }
+ return 0;
+}
+
+static int read_blob_object(struct strbuf *buf, const struct object_id *oid, unsigned mode)
+{
+ if (S_ISGITLINK(mode)) {
+ strbuf_grow(buf, 100);
+ strbuf_addf(buf, "Subproject commit %s\n", oid_to_hex(oid));
+ } else {
+ enum object_type type;
+ unsigned long sz;
+ char *result;
+
+ result = read_sha1_file(oid->hash, &type, &sz);
+ if (!result)
+ return -1;
+ /* XXX read_sha1_file NUL-terminates */
+ strbuf_attach(buf, result, sz, sz + 1);
+ }
+ return 0;
+}
+
+static int read_file_or_gitlink(const struct cache_entry *ce, struct strbuf *buf)
+{
+ if (!ce)
+ return 0;
+ return read_blob_object(buf, &ce->oid, ce->ce_mode);
+}
+
+static struct patch *in_fn_table(struct apply_state *state, const char *name)
+{
+ struct string_list_item *item;
+
+ if (name == NULL)
+ return NULL;
+
+ item = string_list_lookup(&state->fn_table, name);
+ if (item != NULL)
+ return (struct patch *)item->util;
+
+ return NULL;
+}
+
+/*
+ * item->util in the filename table records the status of the path.
+ * Usually it points at a patch (whose result records the contents
+ * of it after applying it), but it could be PATH_WAS_DELETED for a
+ * path that a previously applied patch has already removed, or
+ * PATH_TO_BE_DELETED for a path that a later patch would remove.
+ *
+ * The latter is needed to deal with a case where two paths A and B
+ * are swapped by first renaming A to B and then renaming B to A;
+ * moving A to B should not be prevented due to presence of B as we
+ * will remove it in a later patch.
+ */
+#define PATH_TO_BE_DELETED ((struct patch *) -2)
+#define PATH_WAS_DELETED ((struct patch *) -1)
+
+static int to_be_deleted(struct patch *patch)
+{
+ return patch == PATH_TO_BE_DELETED;
+}
+
+static int was_deleted(struct patch *patch)
+{
+ return patch == PATH_WAS_DELETED;
+}
+
+static void add_to_fn_table(struct apply_state *state, struct patch *patch)
+{
+ struct string_list_item *item;
+
+ /*
+ * Always add new_name unless the patch is a deletion.
+ * This should cover the cases for normal diffs,
+ * file creations and copies.
+ */
+ if (patch->new_name != NULL) {
+ item = string_list_insert(&state->fn_table, patch->new_name);
+ item->util = patch;
+ }
+
+ /*
+ * store a failure on rename/deletion cases because
+ * later chunks shouldn't patch old names
+ */
+ if ((patch->new_name == NULL) || (patch->is_rename)) {
+ item = string_list_insert(&state->fn_table, patch->old_name);
+ item->util = PATH_WAS_DELETED;
+ }
+}
+
+static void prepare_fn_table(struct apply_state *state, struct patch *patch)
+{
+ /*
+ * store information about incoming file deletion
+ */
+ while (patch) {
+ if ((patch->new_name == NULL) || (patch->is_rename)) {
+ struct string_list_item *item;
+ item = string_list_insert(&state->fn_table, patch->old_name);
+ item->util = PATH_TO_BE_DELETED;
+ }
+ patch = patch->next;
+ }
+}
+
+static int checkout_target(struct index_state *istate,
+ struct cache_entry *ce, struct stat *st)
+{
+ struct checkout costate = CHECKOUT_INIT;
+
+ costate.refresh_cache = 1;
+ costate.istate = istate;
+ if (checkout_entry(ce, &costate, NULL) || lstat(ce->name, st))
+ return error(_("cannot checkout %s"), ce->name);
+ return 0;
+}
+
+static struct patch *previous_patch(struct apply_state *state,
+ struct patch *patch,
+ int *gone)
+{
+ struct patch *previous;
+
+ *gone = 0;
+ if (patch->is_copy || patch->is_rename)
+ return NULL; /* "git" patches do not depend on the order */
+
+ previous = in_fn_table(state, patch->old_name);
+ if (!previous)
+ return NULL;
+
+ if (to_be_deleted(previous))
+ return NULL; /* the deletion hasn't happened yet */
+
+ if (was_deleted(previous))
+ *gone = 1;
+
+ return previous;
+}
+
+static int verify_index_match(const struct cache_entry *ce, struct stat *st)
+{
+ if (S_ISGITLINK(ce->ce_mode)) {
+ if (!S_ISDIR(st->st_mode))
+ return -1;
+ return 0;
+ }
+ return ce_match_stat(ce, st, CE_MATCH_IGNORE_VALID|CE_MATCH_IGNORE_SKIP_WORKTREE);
+}
+
+#define SUBMODULE_PATCH_WITHOUT_INDEX 1
+
+static int load_patch_target(struct apply_state *state,
+ struct strbuf *buf,
+ const struct cache_entry *ce,
+ struct stat *st,
+ const char *name,
+ unsigned expected_mode)
+{
+ if (state->cached || state->check_index) {
+ if (read_file_or_gitlink(ce, buf))
+ return error(_("failed to read %s"), name);
+ } else if (name) {
+ if (S_ISGITLINK(expected_mode)) {
+ if (ce)
+ return read_file_or_gitlink(ce, buf);
+ else
+ return SUBMODULE_PATCH_WITHOUT_INDEX;
+ } else if (has_symlink_leading_path(name, strlen(name))) {
+ return error(_("reading from '%s' beyond a symbolic link"), name);
+ } else {
+ if (read_old_data(st, name, buf))
+ return error(_("failed to read %s"), name);
+ }
+ }
+ return 0;
+}
+
+/*
+ * We are about to apply "patch"; populate the "image" with the
+ * current version we have, from the working tree or from the index,
+ * depending on the situation e.g. --cached/--index. If we are
+ * applying a non-git patch that incrementally updates the tree,
+ * we read from the result of a previous diff.
+ */
+static int load_preimage(struct apply_state *state,
+ struct image *image,
+ struct patch *patch, struct stat *st,
+ const struct cache_entry *ce)
+{
+ struct strbuf buf = STRBUF_INIT;
+ size_t len;
+ char *img;
+ struct patch *previous;
+ int status;
+
+ previous = previous_patch(state, patch, &status);
+ if (status)
+ return error(_("path %s has been renamed/deleted"),
+ patch->old_name);
+ if (previous) {
+ /* We have a patched copy in memory; use that. */
+ strbuf_add(&buf, previous->result, previous->resultsize);
+ } else {
+ status = load_patch_target(state, &buf, ce, st,
+ patch->old_name, patch->old_mode);
+ if (status < 0)
+ return status;
+ else if (status == SUBMODULE_PATCH_WITHOUT_INDEX) {
+ /*
+ * There is no way to apply a subproject
+ * patch without looking at the index.
+ * NEEDSWORK: shouldn't this be flagged
+ * as an error???
+ */
+ free_fragment_list(patch->fragments);
+ patch->fragments = NULL;
+ } else if (status) {
+ return error(_("failed to read %s"), patch->old_name);
+ }
+ }
+
+ img = strbuf_detach(&buf, &len);
+ prepare_image(image, img, len, !patch->is_binary);
+ return 0;
+}
+
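+/*
+ * Run an in-core three-way content merge of the blobs named by
+ * "base", "ours" and "theirs" (using ll_merge()), and replace
+ * "image" with the merge result. Returns a positive value when
+ * the merge leaves conflicts, 0 on a clean merge, or -1 on failure.
+ */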
+static int three_way_merge(struct image *image,
+ char *path,
+ const struct object_id *base,
+ const struct object_id *ours,
+ const struct object_id *theirs)
+{
+ mmfile_t base_file, our_file, their_file;
+ mmbuffer_t result = { NULL };
+ int status;
+
+ read_mmblob(&base_file, base);
+ read_mmblob(&our_file, ours);
+ read_mmblob(&their_file, theirs);
+ status = ll_merge(&result, path,
+ &base_file, "base",
+ &our_file, "ours",
+ &their_file, "theirs", NULL);
+ free(base_file.ptr);
+ free(our_file.ptr);
+ free(their_file.ptr);
+ if (status < 0 || !result.ptr) {
+ free(result.ptr);
+ return -1;
+ }
+ clear_image(image);
+ image->buf = result.ptr;
+ image->len = result.size;
+
+ return status;
+}
+
+/*
+ * When directly falling back to an add/add three-way merge, we read
+ * from the current contents of the new_name. This function is not
+ * called in any other case.
+ */
+static int load_current(struct apply_state *state,
+ struct image *image,
+ struct patch *patch)
+{
+ struct strbuf buf = STRBUF_INIT;
+ int status, pos;
+ size_t len;
+ char *img;
+ struct stat st;
+ struct cache_entry *ce;
+ char *name = patch->new_name;
+ unsigned mode = patch->new_mode;
+
+ if (!patch->is_new)
+ die("BUG: patch to %s is not a creation", patch->old_name);
+
+ pos = cache_name_pos(name, strlen(name));
+ if (pos < 0)
+ return error(_("%s: does not exist in index"), name);
+ ce = active_cache[pos];
+ if (lstat(name, &st)) {
+ if (errno != ENOENT)
+ return error_errno("%s", name);
+ if (checkout_target(&the_index, ce, &st))
+ return -1;
+ }
+ if (verify_index_match(ce, &st))
+ return error(_("%s: does not match index"), name);
+
+ status = load_patch_target(state, &buf, ce, &st, name, mode);
+ if (status < 0)
+ return status;
+ else if (status)
+ return -1;
+ img = strbuf_detach(&buf, &len);
+ prepare_image(image, img, len, !patch->is_binary);
+ return 0;
+}
+
+static int try_threeway(struct apply_state *state,
+ struct image *image,
+ struct patch *patch,
+ struct stat *st,
+ const struct cache_entry *ce)
+{
+ struct object_id pre_oid, post_oid, our_oid;
+ struct strbuf buf = STRBUF_INIT;
+ size_t len;
+ int status;
+ char *img;
+ struct image tmp_image;
+
+ /* No point falling back to 3-way merge in these cases */
+ if (patch->is_delete ||
+ S_ISGITLINK(patch->old_mode) || S_ISGITLINK(patch->new_mode))
+ return -1;
+
+ /* Preimage the patch was prepared for */
+ if (patch->is_new)
+ write_sha1_file("", 0, blob_type, pre_oid.hash);
+ else if (get_sha1(patch->old_sha1_prefix, pre_oid.hash) ||
+ read_blob_object(&buf, &pre_oid, patch->old_mode))
+ return error(_("repository lacks the necessary blob to fall back on 3-way merge."));
+
+ if (state->apply_verbosity > verbosity_silent)
+ fprintf(stderr, _("Falling back to three-way merge...\n"));
+
+ img = strbuf_detach(&buf, &len);
+ prepare_image(&tmp_image, img, len, 1);
+ /* Apply the patch to get the post image */
+ if (apply_fragments(state, &tmp_image, patch) < 0) {
+ clear_image(&tmp_image);
+ return -1;
+ }
+ /* post_oid is theirs */
+ write_sha1_file(tmp_image.buf, tmp_image.len, blob_type, post_oid.hash);
+ clear_image(&tmp_image);
+
+ /* our_oid is ours */
+ if (patch->is_new) {
+ if (load_current(state, &tmp_image, patch))
+ return error(_("cannot read the current contents of '%s'"),
+ patch->new_name);
+ } else {
+ if (load_preimage(state, &tmp_image, patch, st, ce))
+ return error(_("cannot read the current contents of '%s'"),
+ patch->old_name);
+ }
+ write_sha1_file(tmp_image.buf, tmp_image.len, blob_type, our_oid.hash);
+ clear_image(&tmp_image);
+
+ /* in-core three-way merge between post and our using pre as base */
+ status = three_way_merge(image, patch->new_name,
+ &pre_oid, &our_oid, &post_oid);
+ if (status < 0) {
+ if (state->apply_verbosity > verbosity_silent)
+ fprintf(stderr,
+ _("Failed to fall back on three-way merge...\n"));
+ return status;
+ }
+
+ if (status) {
+ patch->conflicted_threeway = 1;
+ if (patch->is_new)
+ oidclr(&patch->threeway_stage[0]);
+ else
+ oidcpy(&patch->threeway_stage[0], &pre_oid);
+ oidcpy(&patch->threeway_stage[1], &our_oid);
+ oidcpy(&patch->threeway_stage[2], &post_oid);
+ if (state->apply_verbosity > verbosity_silent)
+ fprintf(stderr,
+ _("Applied patch to '%s' with conflicts.\n"),
+ patch->new_name);
+ } else {
+ if (state->apply_verbosity > verbosity_silent)
+ fprintf(stderr,
+ _("Applied patch to '%s' cleanly.\n"),
+ patch->new_name);
+ }
+ return 0;
+}
+
+static int apply_data(struct apply_state *state, struct patch *patch,
+ struct stat *st, const struct cache_entry *ce)
+{
+ struct image image;
+
+ if (load_preimage(state, &image, patch, st, ce) < 0)
+ return -1;
+
+ if (patch->direct_to_threeway ||
+ apply_fragments(state, &image, patch) < 0) {
+ /* Note: with --reject, apply_fragments() returns 0 */
+ if (!state->threeway || try_threeway(state, &image, patch, st, ce) < 0)
+ return -1;
+ }
+ patch->result = image.buf;
+ patch->resultsize = image.len;
+ add_to_fn_table(state, patch);
+ free(image.line_allocated);
+
+ if (0 < patch->is_delete && patch->resultsize)
+ return error(_("removal patch leaves file contents"));
+
+ return 0;
+}
+
+/*
+ * If "patch" that we are looking at modifies or deletes what we have,
+ * we would want it not to lose any local modification we have, either
+ * in the working tree or in the index.
+ *
+ * This also decides if a non-git patch is a creation patch or a
+ * modification to an existing empty file. We do not check the state
+ * of the current tree for a creation patch in this function; the caller
+ * check_patch() separately makes sure (and errors out otherwise) that
+ * the path the patch creates does not exist in the current tree.
+ */
+static int check_preimage(struct apply_state *state,
+ struct patch *patch,
+ struct cache_entry **ce,
+ struct stat *st)
+{
+ const char *old_name = patch->old_name;
+ struct patch *previous = NULL;
+ int stat_ret = 0, status;
+ unsigned st_mode = 0;
+
+ if (!old_name)
+ return 0;
+
+ assert(patch->is_new <= 0);
+ previous = previous_patch(state, patch, &status);
+
+ if (status)
+ return error(_("path %s has been renamed/deleted"), old_name);
+ if (previous) {
+ st_mode = previous->new_mode;
+ } else if (!state->cached) {
+ stat_ret = lstat(old_name, st);
+ if (stat_ret && errno != ENOENT)
+ return error_errno("%s", old_name);
+ }
+
+ if (state->check_index && !previous) {
+ int pos = cache_name_pos(old_name, strlen(old_name));
+ if (pos < 0) {
+ if (patch->is_new < 0)
+ goto is_new;
+ return error(_("%s: does not exist in index"), old_name);
+ }
+ *ce = active_cache[pos];
+ if (stat_ret < 0) {
+ if (checkout_target(&the_index, *ce, st))
+ return -1;
+ }
+ if (!state->cached && verify_index_match(*ce, st))
+ return error(_("%s: does not match index"), old_name);
+ if (state->cached)
+ st_mode = (*ce)->ce_mode;
+ } else if (stat_ret < 0) {
+ if (patch->is_new < 0)
+ goto is_new;
+ return error_errno("%s", old_name);
+ }
+
+ if (!state->cached && !previous)
+ st_mode = ce_mode_from_stat(*ce, st->st_mode);
+
+ if (patch->is_new < 0)
+ patch->is_new = 0;
+ if (!patch->old_mode)
+ patch->old_mode = st_mode;
+ if ((st_mode ^ patch->old_mode) & S_IFMT)
+ return error(_("%s: wrong type"), old_name);
+ if (st_mode != patch->old_mode)
+ warning(_("%s has type %o, expected %o"),
+ old_name, st_mode, patch->old_mode);
+ if (!patch->new_mode && !patch->is_delete)
+ patch->new_mode = st_mode;
+ return 0;
+
+ is_new:
+ patch->is_new = 1;
+ patch->is_delete = 0;
+ free(patch->old_name);
+ patch->old_name = NULL;
+ return 0;
+}
+
+
+#define EXISTS_IN_INDEX 1
+#define EXISTS_IN_WORKTREE 2
+
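+/*
+ * Decide if it is OK for the patch to create "new_name". Returns 0
+ * when the path may be created (or when we only work with the index),
+ * EXISTS_IN_INDEX or EXISTS_IN_WORKTREE when something is already in
+ * the way, or a negative value on an unexpected lstat() failure.
+ */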
+static int check_to_create(struct apply_state *state,
+ const char *new_name,
+ int ok_if_exists)
+{
+ struct stat nst;
+
+ if (state->check_index &&
+ cache_name_pos(new_name, strlen(new_name)) >= 0 &&
+ !ok_if_exists)
+ return EXISTS_IN_INDEX;
+ if (state->cached)
+ return 0;
+
+ if (!lstat(new_name, &nst)) {
+ if (S_ISDIR(nst.st_mode) || ok_if_exists)
+ return 0;
+ /*
+ * A leading component of new_name might be a symlink
+ * that is going to be removed with this patch but that
+ * still points at a location where the path exists.
+ * In such a case, path "new_name" does not exist as
+ * far as git is concerned.
+ */
+ if (has_symlink_leading_path(new_name, strlen(new_name)))
+ return 0;
+
+ return EXISTS_IN_WORKTREE;
+ } else if ((errno != ENOENT) && (errno != ENOTDIR)) {
+ return error_errno("%s", new_name);
+ }
+ return 0;
+}
+
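+/*
+ * Symlink changes are recorded per path in state->symlink_changes as
+ * a bitmask (APPLY_SYMLINK_GOES_AWAY, APPLY_SYMLINK_IN_RESULT) stored
+ * directly in the string_list item's util pointer; OR the given bits
+ * into the entry for "path" and return the accumulated mask.
+ */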
+static uintptr_t register_symlink_changes(struct apply_state *state,
+ const char *path,
+ uintptr_t what)
+{
+ struct string_list_item *ent;
+
+ ent = string_list_lookup(&state->symlink_changes, path);
+ if (!ent) {
+ ent = string_list_insert(&state->symlink_changes, path);
+ ent->util = (void *)0;
+ }
+ ent->util = (void *)(what | ((uintptr_t)ent->util));
+ return (uintptr_t)ent->util;
+}
+
+static uintptr_t check_symlink_changes(struct apply_state *state, const char *path)
+{
+ struct string_list_item *ent;
+
+ ent = string_list_lookup(&state->symlink_changes, path);
+ if (!ent)
+ return 0;
+ return (uintptr_t)ent->util;
+}
+
+static void prepare_symlink_changes(struct apply_state *state, struct patch *patch)
+{
+ for ( ; patch; patch = patch->next) {
+ if ((patch->old_name && S_ISLNK(patch->old_mode)) &&
+ (patch->is_rename || patch->is_delete))
+ /* the symlink at patch->old_name is removed */
+ register_symlink_changes(state, patch->old_name, APPLY_SYMLINK_GOES_AWAY);
+
+ if (patch->new_name && S_ISLNK(patch->new_mode))
+ /* the symlink at patch->new_name is created or remains */
+ register_symlink_changes(state, patch->new_name, APPLY_SYMLINK_IN_RESULT);
+ }
+}
+
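+/*
+ * Check if any leading directory of "name" is (or becomes) a symbolic
+ * link: truncate "name" at each '/' from the right, and for each
+ * leading path consult the recorded symlink changes first, then the
+ * index or the working tree, returning 1 as soon as a symlink that
+ * remains in the result is found.
+ */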
+static int path_is_beyond_symlink_1(struct apply_state *state, struct strbuf *name)
+{
+ do {
+ unsigned int change;
+
+ while (--name->len && name->buf[name->len] != '/')
+ ; /* scan backwards */
+ if (!name->len)
+ break;
+ name->buf[name->len] = '\0';
+ change = check_symlink_changes(state, name->buf);
+ if (change & APPLY_SYMLINK_IN_RESULT)
+ return 1;
+ if (change & APPLY_SYMLINK_GOES_AWAY)
+ /*
+ * This cannot be "return 0", because we may
+ * see a new one created at a higher level.
+ */
+ continue;
+
+ /* otherwise, check the preimage */
+ if (state->check_index) {
+ struct cache_entry *ce;
+
+ ce = cache_file_exists(name->buf, name->len, ignore_case);
+ if (ce && S_ISLNK(ce->ce_mode))
+ return 1;
+ } else {
+ struct stat st;
+ if (!lstat(name->buf, &st) && S_ISLNK(st.st_mode))
+ return 1;
+ }
+ } while (1);
+ return 0;
+}
+
+static int path_is_beyond_symlink(struct apply_state *state, const char *name_)
+{
+ int ret;
+ struct strbuf name = STRBUF_INIT;
+
+ assert(*name_ != '\0');
+ strbuf_addstr(&name, name_);
+ ret = path_is_beyond_symlink_1(state, &name);
+ strbuf_release(&name);
+
+ return ret;
+}
+
+static int check_unsafe_path(struct patch *patch)
+{
+ const char *old_name = NULL;
+ const char *new_name = NULL;
+ if (patch->is_delete)
+ old_name = patch->old_name;
+ else if (!patch->is_new && !patch->is_copy)
+ old_name = patch->old_name;
+ if (!patch->is_delete)
+ new_name = patch->new_name;
+
+ if (old_name && !verify_path(old_name))
+ return error(_("invalid path '%s'"), old_name);
+ if (new_name && !verify_path(new_name))
+ return error(_("invalid path '%s'"), new_name);
+ return 0;
+}
+
+/*
+ * Check and apply the patch in-core; leave the result in patch->result
+ * for the caller to write it out to the final destination.
+ */
+static int check_patch(struct apply_state *state, struct patch *patch)
+{
+ struct stat st;
+ const char *old_name = patch->old_name;
+ const char *new_name = patch->new_name;
+ const char *name = old_name ? old_name : new_name;
+ struct cache_entry *ce = NULL;
+ struct patch *tpatch;
+ int ok_if_exists;
+ int status;
+
+ patch->rejected = 1; /* we will drop this after we succeed */
+
+ status = check_preimage(state, patch, &ce, &st);
+ if (status)
+ return status;
+ old_name = patch->old_name;
+
+ /*
+ * A type-change diff is always split into a patch to delete
+ * old, immediately followed by a patch to create new (see
+ * diff.c::run_diff()); in such a case it is OK that the entry
+ * to be deleted by the previous patch is still in the working
+ * tree and in the index.
+ *
+ * A patch to swap-rename between A and B would first rename A
+ * to B and then rename B to A. While applying the first one,
+ * the presence of B should not stop A from getting renamed to
+ * B; ask to_be_deleted() about the later rename. Removal of
+ * B and rename from A to B is handled the same way by asking
+ * was_deleted().
+ */
+ if ((tpatch = in_fn_table(state, new_name)) &&
+ (was_deleted(tpatch) || to_be_deleted(tpatch)))
+ ok_if_exists = 1;
+ else
+ ok_if_exists = 0;
+
+ if (new_name &&
+ ((0 < patch->is_new) || patch->is_rename || patch->is_copy)) {
+ int err = check_to_create(state, new_name, ok_if_exists);
+
+ if (err && state->threeway) {
+ patch->direct_to_threeway = 1;
+ } else switch (err) {
+ case 0:
+ break; /* happy */
+ case EXISTS_IN_INDEX:
+ return error(_("%s: already exists in index"), new_name);
+ break;
+ case EXISTS_IN_WORKTREE:
+ return error(_("%s: already exists in working directory"),
+ new_name);
+ default:
+ return err;
+ }
+
+ if (!patch->new_mode) {
+ if (0 < patch->is_new)
+ patch->new_mode = S_IFREG | 0644;
+ else
+ patch->new_mode = patch->old_mode;
+ }
+ }
+
+ if (new_name && old_name) {
+ int same = !strcmp(old_name, new_name);
+ if (!patch->new_mode)
+ patch->new_mode = patch->old_mode;
+ if ((patch->old_mode ^ patch->new_mode) & S_IFMT) {
+ if (same)
+ return error(_("new mode (%o) of %s does not "
+ "match old mode (%o)"),
+ patch->new_mode, new_name,
+ patch->old_mode);
+ else
+ return error(_("new mode (%o) of %s does not "
+ "match old mode (%o) of %s"),
+ patch->new_mode, new_name,
+ patch->old_mode, old_name);
+ }
+ }
+
+ if (!state->unsafe_paths && check_unsafe_path(patch))
+ return -128;
+
+ /*
+ * An attempt to read from or delete a path that is beyond a
+ * symbolic link will be prevented by load_patch_target(), which
+ * is called at the beginning of apply_data(), so we do not
+ * have to worry about a patch marked with the "is_delete" bit
+ * here. We do, however, need to make sure that the patch result
+ * is not deposited at a path that is beyond a symbolic link
+ * here.
+ */
+ if (!patch->is_delete && path_is_beyond_symlink(state, patch->new_name))
+ return error(_("affected file '%s' is beyond a symbolic link"),
+ patch->new_name);
+
+ if (apply_data(state, patch, &st, ce) < 0)
+ return error(_("%s: patch does not apply"), name);
+ patch->rejected = 0;
+ return 0;
+}
+
+static int check_patch_list(struct apply_state *state, struct patch *patch)
+{
+ int err = 0;
+
+ prepare_symlink_changes(state, patch);
+ prepare_fn_table(state, patch);
+ while (patch) {
+ int res;
+ if (state->apply_verbosity > verbosity_normal)
+ say_patch_name(stderr,
+ _("Checking patch %s..."), patch);
+ res = check_patch(state, patch);
+ if (res == -128)
+ return -128;
+ err |= res;
+ patch = patch->next;
+ }
+ return err;
+}
+
+static int read_apply_cache(struct apply_state *state)
+{
+ if (state->index_file)
+ return read_cache_from(state->index_file);
+ else
+ return read_cache();
+}
+
+/* This function tries to read the object name from the current index */
+static int get_current_oid(struct apply_state *state, const char *path,
+ struct object_id *oid)
+{
+ int pos;
+
+ if (read_apply_cache(state) < 0)
+ return -1;
+ pos = cache_name_pos(path, strlen(path));
+ if (pos < 0)
+ return -1;
+ oidcpy(oid, &active_cache[pos]->oid);
+ return 0;
+}
+
+static int preimage_oid_in_gitlink_patch(struct patch *p, struct object_id *oid)
+{
+ /*
+ * A usable gitlink patch has only one fragment (hunk) that looks like:
+ * @@ -1 +1 @@
+ * -Subproject commit <old sha1>
+ * +Subproject commit <new sha1>
+ * or
+ * @@ -1 +0,0 @@
+ * -Subproject commit <old sha1>
+ * for a removal patch.
+ */
+ struct fragment *hunk = p->fragments;
+ static const char heading[] = "-Subproject commit ";
+ char *preimage;
+
+ if (/* does the patch have only one hunk? */
+ hunk && !hunk->next &&
+ /* is its preimage one line? */
+ hunk->oldpos == 1 && hunk->oldlines == 1 &&
+ /* does preimage begin with the heading? */
+ (preimage = memchr(hunk->patch, '\n', hunk->size)) != NULL &&
+ starts_with(++preimage, heading) &&
+ /* does it record full SHA-1? */
+ !get_oid_hex(preimage + sizeof(heading) - 1, oid) &&
+ preimage[sizeof(heading) + GIT_SHA1_HEXSZ - 1] == '\n' &&
+ /* does the abbreviated name on the index line agree with it? */
+ starts_with(preimage + sizeof(heading) - 1, p->old_sha1_prefix))
+ return 0; /* it all looks fine */
+
+ /* we may have full object name on the index line */
+ return get_oid_hex(p->old_sha1_prefix, oid);
+}
+
+/* Build an index that contains just the files needed for a 3way merge */
+static int build_fake_ancestor(struct apply_state *state, struct patch *list)
+{
+ struct patch *patch;
+ struct index_state result = { NULL };
+ static struct lock_file lock;
+ int res;
+
+ /* Once we start supporting the reverse patch, it may be
+ * worth showing the new sha1 prefix, but until then...
+ */
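+ /*
+ * For each patch, the preimage object name comes from the textual
+ * gitlink hunk for a submodule, from the blob recorded on the
+ * "index" line, or, for a mode-only change, from the current index;
+ * it is then added to the temporary index written out below.
+ */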
+ for (patch = list; patch; patch = patch->next) {
+ struct object_id oid;
+ struct cache_entry *ce;
+ const char *name;
+
+ name = patch->old_name ? patch->old_name : patch->new_name;
+ if (0 < patch->is_new)
+ continue;
+
+ if (S_ISGITLINK(patch->old_mode)) {
+ if (!preimage_oid_in_gitlink_patch(patch, &oid))
+ ; /* ok, the textual part looks sane */
+ else
+ return error(_("sha1 information is lacking or "
+ "useless for submodule %s"), name);
+ } else if (!get_sha1_blob(patch->old_sha1_prefix, oid.hash)) {
+ ; /* ok */
+ } else if (!patch->lines_added && !patch->lines_deleted) {
+ /* mode-only change: update the current */
+ if (get_current_oid(state, patch->old_name, &oid))
+ return error(_("mode change for %s, which is not "
+ "in current HEAD"), name);
+ } else
+ return error(_("sha1 information is lacking or useless "
+ "(%s)."), name);
+
+ ce = make_cache_entry(patch->old_mode, oid.hash, name, 0, 0);
+ if (!ce)
+ return error(_("make_cache_entry failed for path '%s'"),
+ name);
+ if (add_index_entry(&result, ce, ADD_CACHE_OK_TO_ADD)) {
+ free(ce);
+ return error(_("could not add %s to temporary index"),
+ name);
+ }
+ }
+
+ hold_lock_file_for_update(&lock, state->fake_ancestor, LOCK_DIE_ON_ERROR);
+ res = write_locked_index(&result, &lock, COMMIT_LOCK);
+ discard_index(&result);
+
+ if (res)
+ return error(_("could not write temporary index to %s"),
+ state->fake_ancestor);
+
+ return 0;
+ }
+
+ static void stat_patch_list(struct apply_state *state, struct patch *patch)
+ {
+ int files, adds, dels;
+
+ for (files = adds = dels = 0 ; patch ; patch = patch->next) {
+ files++;
+ adds += patch->lines_added;
+ dels += patch->lines_deleted;
+ show_stats(state, patch);
+ }
+
+ print_stat_summary(stdout, files, adds, dels);
+ }
+
+ static void numstat_patch_list(struct apply_state *state,
+ struct patch *patch)
+ {
+ for ( ; patch; patch = patch->next) {
+ const char *name;
+ name = patch->new_name ? patch->new_name : patch->old_name;
+ if (patch->is_binary)
+ printf("-\t-\t");
+ else
+ printf("%d\t%d\t", patch->lines_added, patch->lines_deleted);
+ write_name_quoted(name, stdout, state->line_termination);
+ }
+ }
+
+ static void show_file_mode_name(const char *newdelete, unsigned int mode, const char *name)
+ {
+ if (mode)
+ printf(" %s mode %06o %s\n", newdelete, mode, name);
+ else
+ printf(" %s %s\n", newdelete, name);
+ }
+
+ static void show_mode_change(struct patch *p, int show_name)
+ {
+ if (p->old_mode && p->new_mode && p->old_mode != p->new_mode) {
+ if (show_name)
+ printf(" mode change %06o => %06o %s\n",
+ p->old_mode, p->new_mode, p->new_name);
+ else
+ printf(" mode change %06o => %06o\n",
+ p->old_mode, p->new_mode);
+ }
+ }
+
+ static void show_rename_copy(struct patch *p)
+ {
+ const char *renamecopy = p->is_rename ? "rename" : "copy";
+ const char *old, *new;
+
+ /* Find common prefix */
+ old = p->old_name;
+ new = p->new_name;
+ while (1) {
+ const char *slash_old, *slash_new;
+ slash_old = strchr(old, '/');
+ slash_new = strchr(new, '/');
+ if (!slash_old ||
+ !slash_new ||
+ slash_old - old != slash_new - new ||
+ memcmp(old, new, slash_new - new))
+ break;
+ old = slash_old + 1;
+ new = slash_new + 1;
+ }
+ /* p->old_name thru old is the common prefix, and old and new
+ * through the end of names are renames
+ */
+ if (old != p->old_name)
+ printf(" %s %.*s{%s => %s} (%d%%)\n", renamecopy,
+ (int)(old - p->old_name), p->old_name,
+ old, new, p->score);
+ else
+ printf(" %s %s => %s (%d%%)\n", renamecopy,
+ p->old_name, p->new_name, p->score);
+ show_mode_change(p, 0);
+ }
+
+ static void summary_patch_list(struct patch *patch)
+ {
+ struct patch *p;
+
+ for (p = patch; p; p = p->next) {
+ if (p->is_new)
+ show_file_mode_name("create", p->new_mode, p->new_name);
+ else if (p->is_delete)
+ show_file_mode_name("delete", p->old_mode, p->old_name);
+ else {
+ if (p->is_rename || p->is_copy)
+ show_rename_copy(p);
+ else {
+ if (p->score) {
+ printf(" rewrite %s (%d%%)\n",
+ p->new_name, p->score);
+ show_mode_change(p, 0);
+ }
+ else
+ show_mode_change(p, 1);
+ }
+ }
+ }
+ }
+
+ static void patch_stats(struct apply_state *state, struct patch *patch)
+ {
+ int lines = patch->lines_added + patch->lines_deleted;
+
+ if (lines > state->max_change)
+ state->max_change = lines;
+ if (patch->old_name) {
+ int len = quote_c_style(patch->old_name, NULL, NULL, 0);
+ if (!len)
+ len = strlen(patch->old_name);
+ if (len > state->max_len)
+ state->max_len = len;
+ }
+ if (patch->new_name) {
+ int len = quote_c_style(patch->new_name, NULL, NULL, 0);
+ if (!len)
+ len = strlen(patch->new_name);
+ if (len > state->max_len)
+ state->max_len = len;
+ }
+ }
+
+ static int remove_file(struct apply_state *state, struct patch *patch, int rmdir_empty)
+ {
+ if (state->update_index) {
+ if (remove_file_from_cache(patch->old_name) < 0)
+ return error(_("unable to remove %s from index"), patch->old_name);
+ }
+ if (!state->cached) {
+ if (!remove_or_warn(patch->old_mode, patch->old_name) && rmdir_empty) {
+ remove_path(patch->old_name);
+ }
+ }
+ return 0;
+ }
+
+ static int add_index_file(struct apply_state *state,
+ const char *path,
+ unsigned mode,
+ void *buf,
+ unsigned long size)
+ {
+ struct stat st;
+ struct cache_entry *ce;
+ int namelen = strlen(path);
+ unsigned ce_size = cache_entry_size(namelen);
+
+ if (!state->update_index)
+ return 0;
+
+ ce = xcalloc(1, ce_size);
+ memcpy(ce->name, path, namelen);
+ ce->ce_mode = create_ce_mode(mode);
+ ce->ce_flags = create_ce_flags(0);
+ ce->ce_namelen = namelen;
+ if (S_ISGITLINK(mode)) {
+ const char *s;
+
+ if (!skip_prefix(buf, "Subproject commit ", &s) ||
+ get_oid_hex(s, &ce->oid)) {
+ free(ce);
+ return error(_("corrupt patch for submodule %s"), path);
+ }
+ } else {
+ if (!state->cached) {
+ if (lstat(path, &st) < 0) {
+ free(ce);
+ return error_errno(_("unable to stat newly "
+ "created file '%s'"),
+ path);
+ }
+ fill_stat_cache_info(ce, &st);
+ }
+ if (write_sha1_file(buf, size, blob_type, ce->oid.hash) < 0) {
+ free(ce);
+ return error(_("unable to create backing store "
+ "for newly created file %s"), path);
+ }
+ }
+ if (add_cache_entry(ce, ADD_CACHE_OK_TO_ADD) < 0) {
+ free(ce);
+ return error(_("unable to add cache entry for %s"), path);
+ }
+
+ return 0;
+}
+
+/*
+ * Returns:
+ * -1 if an unrecoverable error happened
+ * 0 if everything went well
+ * 1 if a recoverable error happened
+ */
+static int try_create_file(const char *path, unsigned int mode, const char *buf, unsigned long size)
+{
+ int fd, res;
+ struct strbuf nbuf = STRBUF_INIT;
+
+ if (S_ISGITLINK(mode)) {
+ struct stat st;
+ if (!lstat(path, &st) && S_ISDIR(st.st_mode))
+ return 0;
+ return !!mkdir(path, 0777);
+ }
+
+ if (has_symlinks && S_ISLNK(mode))
+ /* Although buf:size is a counted string, it is also NUL
+ * terminated.
+ */
+ return !!symlink(buf, path);
+
+ fd = open(path, O_CREAT | O_EXCL | O_WRONLY, (mode & 0100) ? 0777 : 0666);
+ if (fd < 0)
+ return 1;
+
+ if (convert_to_working_tree(path, buf, size, &nbuf)) {
+ size = nbuf.len;
+ buf = nbuf.buf;
+ }
+
+ res = write_in_full(fd, buf, size) < 0;
+ if (res)
+ error_errno(_("failed to write to '%s'"), path);
+ strbuf_release(&nbuf);
+
+ if (close(fd) < 0 && !res)
+ return error_errno(_("closing file '%s'"), path);
+
+ return res ? -1 : 0;
+}
+
+/*
+ * We optimistically assume that the directories exist,
+ * which is true 99% of the time anyway. If they don't,
+ * we create them and try again.
+ *
+ * Returns:
+ * -1 on error
+ * 0 otherwise
+ */
+static int create_one_file(struct apply_state *state,
+ char *path,
+ unsigned mode,
+ const char *buf,
+ unsigned long size)
+{
+ int res;
+
+ if (state->cached)
+ return 0;
+
+ res = try_create_file(path, mode, buf, size);
+ if (res < 0)
+ return -1;
+ if (!res)
+ return 0;
+
+ if (errno == ENOENT) {
+ if (safe_create_leading_directories(path))
+ return 0;
+ res = try_create_file(path, mode, buf, size);
+ if (res < 0)
+ return -1;
+ if (!res)
+ return 0;
+ }
+
+ if (errno == EEXIST || errno == EACCES) {
+ /* We may be trying to create a file where a directory
+ * used to be.
+ */
+ struct stat st;
+ if (!lstat(path, &st) && (!S_ISDIR(st.st_mode) || !rmdir(path)))
+ errno = EEXIST;
+ }
+
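+ /*
+ * Last resort: write the result to a temporary "<path>~<pid>" file
+ * next to the target and then rename it over whatever is occupying
+ * the path.
+ */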
+ if (errno == EEXIST) {
+ unsigned int nr = getpid();
+
+ for (;;) {
+ char newpath[PATH_MAX];
+ mksnpath(newpath, sizeof(newpath), "%s~%u", path, nr);
+ res = try_create_file(newpath, mode, buf, size);
+ if (res < 0)
+ return -1;
+ if (!res) {
+ if (!rename(newpath, path))
+ return 0;
+ unlink_or_warn(newpath);
+ break;
+ }
+ if (errno != EEXIST)
+ break;
+ ++nr;
+ }
+ }
+ return error_errno(_("unable to write file '%s' mode %o"),
+ path, mode);
+}
+
+static int add_conflicted_stages_file(struct apply_state *state,
+ struct patch *patch)
+{
+ int stage, namelen;
+ unsigned ce_size, mode;
+ struct cache_entry *ce;
+
+ if (!state->update_index)
+ return 0;
+ namelen = strlen(patch->new_name);
+ ce_size = cache_entry_size(namelen);
+ mode = patch->new_mode ? patch->new_mode : (S_IFREG | 0644);
+
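+ /*
+ * Replace any existing entry for the path with the higher-order
+ * stages (1 = common ancestor, 2 = ours, 3 = theirs) recorded by
+ * the three-way fallback in threeway_stage[], so the conflict can
+ * then be resolved like any other unmerged path.
+ */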
+ remove_file_from_cache(patch->new_name);
+ for (stage = 1; stage < 4; stage++) {
+ if (is_null_oid(&patch->threeway_stage[stage - 1]))
+ continue;
+ ce = xcalloc(1, ce_size);
+ memcpy(ce->name, patch->new_name, namelen);
+ ce->ce_mode = create_ce_mode(mode);
+ ce->ce_flags = create_ce_flags(stage);
+ ce->ce_namelen = namelen;
+ oidcpy(&ce->oid, &patch->threeway_stage[stage - 1]);
+ if (add_cache_entry(ce, ADD_CACHE_OK_TO_ADD) < 0) {
+ free(ce);
+ return error(_("unable to add cache entry for %s"),
+ patch->new_name);
+ }
+ }
+
+ return 0;
+}
+
+static int create_file(struct apply_state *state, struct patch *patch)
+{
+ char *path = patch->new_name;
+ unsigned mode = patch->new_mode;
+ unsigned long size = patch->resultsize;
+ char *buf = patch->result;
+
+ if (!mode)
+ mode = S_IFREG | 0644;
+ if (create_one_file(state, path, mode, buf, size))
+ return -1;
+
+ if (patch->conflicted_threeway)
+ return add_conflicted_stages_file(state, patch);
+ else
+ return add_index_file(state, path, mode, buf, size);
+}
+
+/* phase zero is to remove, phase one is to create */
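+/*
+ * write_out_results() runs phase 0 across the whole patch list before
+ * running phase 1, so all removals (including the removal half of a
+ * rename) happen before any creation that may reuse one of the paths.
+ */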
+static int write_out_one_result(struct apply_state *state,
+ struct patch *patch,
+ int phase)
+{
+ if (patch->is_delete > 0) {
+ if (phase == 0)
+ return remove_file(state, patch, 1);
+ return 0;
+ }
+ if (patch->is_new > 0 || patch->is_copy) {
+ if (phase == 1)
+ return create_file(state, patch);
+ return 0;
+ }
+ /*
+ * Rename or modification boils down to the same
+ * thing: remove the old, write the new
+ */
+ if (phase == 0)
+ return remove_file(state, patch, patch->is_rename);
+ if (phase == 1)
+ return create_file(state, patch);
+ return 0;
+}
+
+static int write_out_one_reject(struct apply_state *state, struct patch *patch)
+{
+ FILE *rej;
+ char namebuf[PATH_MAX];
+ struct fragment *frag;
+ int cnt = 0;
+ struct strbuf sb = STRBUF_INIT;
+
+ for (cnt = 0, frag = patch->fragments; frag; frag = frag->next) {
+ if (!frag->rejected)
+ continue;
+ cnt++;
+ }
+
+ if (!cnt) {
+ if (state->apply_verbosity > verbosity_normal)
+ say_patch_name(stderr,
+ _("Applied patch %s cleanly."), patch);
+ return 0;
+ }
+
+ /* This should not happen, because a removal patch that leaves
+ * contents is marked "rejected" at the patch level.
+ */
+ if (!patch->new_name)
+ die(_("internal error"));
+
+ /* Say this even without --verbose */
+ strbuf_addf(&sb, Q_("Applying patch %%s with %d reject...",
+ "Applying patch %%s with %d rejects...",
+ cnt),
+ cnt);
+ if (state->apply_verbosity > verbosity_silent)
+ say_patch_name(stderr, sb.buf, patch);
+ strbuf_release(&sb);
+
+ cnt = strlen(patch->new_name);
+ if (ARRAY_SIZE(namebuf) <= cnt + 5) {
+ cnt = ARRAY_SIZE(namebuf) - 5;
+ warning(_("truncating .rej filename to %.*s.rej"),
+ cnt - 1, patch->new_name);
+ }
+ memcpy(namebuf, patch->new_name, cnt);
+ memcpy(namebuf + cnt, ".rej", 5);
+
+ rej = fopen(namebuf, "w");
+ if (!rej)
+ return error_errno(_("cannot open %s"), namebuf);
+
+ /* Normal git tools never deal with .rej, so do not pretend
+ * this is a git patch by saying --git or giving extended
+ * headers. While at it, maybe please "kompare" that wants
+ * the trailing TAB and some garbage at the end of line ;-).
+ */
+ fprintf(rej, "diff a/%s b/%s\t(rejected hunks)\n",
+ patch->new_name, patch->new_name);
+ for (cnt = 1, frag = patch->fragments;
+ frag;
+ cnt++, frag = frag->next) {
+ if (!frag->rejected) {
+ if (state->apply_verbosity > verbosity_silent)
+ fprintf_ln(stderr, _("Hunk #%d applied cleanly."), cnt);
+ continue;
+ }
+ if (state->apply_verbosity > verbosity_silent)
+ fprintf_ln(stderr, _("Rejected hunk #%d."), cnt);
+ fprintf(rej, "%.*s", frag->size, frag->patch);
+ if (frag->patch[frag->size-1] != '\n')
+ fputc('\n', rej);
+ }
+ fclose(rej);
+ return -1;
+}
+
+/*
+ * Returns:
+ * -1 if an error happened
+ * 0 if the patch applied cleanly
+ * 1 if the patch did not apply cleanly
+ */
+static int write_out_results(struct apply_state *state, struct patch *list)
+{
+ int phase;
+ int errs = 0;
+ struct patch *l;
+ struct string_list cpath = STRING_LIST_INIT_DUP;
+
+ for (phase = 0; phase < 2; phase++) {
+ l = list;
+ while (l) {
+ if (l->rejected)
+ errs = 1;
+ else {
+ if (write_out_one_result(state, l, phase)) {
+ string_list_clear(&cpath, 0);
+ return -1;
+ }
+ if (phase == 1) {
+ if (write_out_one_reject(state, l))
+ errs = 1;
+ if (l->conflicted_threeway) {
+ string_list_append(&cpath, l->new_name);
+ errs = 1;
+ }
+ }
+ }
+ l = l->next;
+ }
+ }
+
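+ /*
+ * Paths left with three-way conflict markers are listed
+ * "git status"-style ("U <path>"), and rerere() is invoked so that
+ * previously recorded conflict resolutions can be reused.
+ */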
+ if (cpath.nr) {
+ struct string_list_item *item;
+
+ string_list_sort(&cpath);
+ if (state->apply_verbosity > verbosity_silent) {
+ for_each_string_list_item(item, &cpath)
+ fprintf(stderr, "U %s\n", item->string);
+ }
+ string_list_clear(&cpath, 0);
+
+ rerere(0);
+ }
+
+ return errs;
+}
+
+/*
+ * Try to apply a patch.
+ *
+ * Returns:
+ * -128 if a bad error happened (like patch unreadable)
+ * -1 if patch did not apply and user cannot deal with it
+ * 0 if the patch applied
+ * 1 if the patch did not apply but user might fix it
+ */
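+/*
+ * In outline: read the whole patch text, parse it into a list of
+ * "struct patch" (optionally reversing each one), check the list
+ * against the index and/or working tree, and then either write out
+ * the results or just the requested statistics.
+ */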
+static int apply_patch(struct apply_state *state,
+ int fd,
+ const char *filename,
+ int options)
+{
+ size_t offset;
+ struct strbuf buf = STRBUF_INIT; /* owns the patch text */
+ struct patch *list = NULL, **listp = &list;
+ int skipped_patch = 0;
+ int res = 0;
+
+ state->patch_input_file = filename;
+ if (read_patch_file(&buf, fd) < 0)
+ return -128;
+ offset = 0;
+ while (offset < buf.len) {
+ struct patch *patch;
+ int nr;
+
+ patch = xcalloc(1, sizeof(*patch));
+ patch->inaccurate_eof = !!(options & APPLY_OPT_INACCURATE_EOF);
+ patch->recount = !!(options & APPLY_OPT_RECOUNT);
+ nr = parse_chunk(state, buf.buf + offset, buf.len - offset, patch);
+ if (nr < 0) {
+ free_patch(patch);
+ if (nr == -128) {
+ res = -128;
+ goto end;
+ }
+ break;
+ }
+ if (state->apply_in_reverse)
+ reverse_patches(patch);
+ if (use_patch(state, patch)) {
+ patch_stats(state, patch);
+ *listp = patch;
+ listp = &patch->next;
+ }
+ else {
+ if (state->apply_verbosity > verbosity_normal)
+ say_patch_name(stderr, _("Skipped patch '%s'."), patch);
+ free_patch(patch);
+ skipped_patch++;
+ }
+ offset += nr;
+ }
+
+ if (!list && !skipped_patch) {
+ error(_("unrecognized input"));
+ res = -128;
+ goto end;
+ }
+
+ if (state->whitespace_error && (state->ws_error_action == die_on_ws_error))
+ state->apply = 0;
+
+ state->update_index = state->check_index && state->apply;
+ if (state->update_index && state->newfd < 0) {
+ if (state->index_file)
+ state->newfd = hold_lock_file_for_update(state->lock_file,
+ state->index_file,
+ LOCK_DIE_ON_ERROR);
+ else
+ state->newfd = hold_locked_index(state->lock_file, 1);
+ }
+
+ if (state->check_index && read_apply_cache(state) < 0) {
+ error(_("unable to read index file"));
+ res = -128;
+ goto end;
+ }
+
+ if (state->check || state->apply) {
+ int r = check_patch_list(state, list);
+ if (r == -128) {
+ res = -128;
+ goto end;
+ }
+ if (r < 0 && !state->apply_with_reject) {
+ res = -1;
+ goto end;
+ }
+ }
+
+ if (state->apply) {
+ int write_res = write_out_results(state, list);
+ if (write_res < 0) {
+ res = -128;
+ goto end;
+ }
+ if (write_res > 0) {
+ /* with --3way, we still need to write the index out */
+ res = state->apply_with_reject ? -1 : 1;
+ goto end;
+ }
+ }
+
+ if (state->fake_ancestor &&
+ build_fake_ancestor(state, list)) {
+ res = -128;
+ goto end;
+ }
+
+ if (state->diffstat && state->apply_verbosity > verbosity_silent)
+ stat_patch_list(state, list);
+
+ if (state->numstat && state->apply_verbosity > verbosity_silent)
+ numstat_patch_list(state, list);
+
+ if (state->summary && state->apply_verbosity > verbosity_silent)
+ summary_patch_list(list);
+
+end:
+ free_patch_list(list);
+ strbuf_release(&buf);
+ string_list_clear(&state->fn_table, 0);
+ return res;
+}
+
+static int apply_option_parse_exclude(const struct option *opt,
+ const char *arg, int unset)
+{
+ struct apply_state *state = opt->value;
+ add_name_limit(state, arg, 1);
+ return 0;
+}
+
+static int apply_option_parse_include(const struct option *opt,
+ const char *arg, int unset)
+{
+ struct apply_state *state = opt->value;
+ add_name_limit(state, arg, 0);
+ state->has_include = 1;
+ return 0;
+}
+
+static int apply_option_parse_p(const struct option *opt,
+ const char *arg,
+ int unset)
+{
+ struct apply_state *state = opt->value;
+ state->p_value = atoi(arg);
+ state->p_value_known = 1;
+ return 0;
+}
+
+static int apply_option_parse_space_change(const struct option *opt,
+ const char *arg, int unset)
+{
+ struct apply_state *state = opt->value;
+ if (unset)
+ state->ws_ignore_action = ignore_ws_none;
+ else
+ state->ws_ignore_action = ignore_ws_change;
+ return 0;
+}
+
+static int apply_option_parse_whitespace(const struct option *opt,
+ const char *arg, int unset)
+{
+ struct apply_state *state = opt->value;
+ state->whitespace_option = arg;
+ if (parse_whitespace_option(state, arg))
+ exit(1);
+ return 0;
+}
+
+static int apply_option_parse_directory(const struct option *opt,
+ const char *arg, int unset)
+{
+ struct apply_state *state = opt->value;
+ strbuf_reset(&state->root);
+ strbuf_addstr(&state->root, arg);
+ strbuf_complete(&state->root, '/');
+ return 0;
+}
+
+int apply_all_patches(struct apply_state *state,
+ int argc,
+ const char **argv,
+ int options)
+{
+ int i;
+ int res;
+ int errs = 0;
+ int read_stdin = 1;
+
+ for (i = 0; i < argc; i++) {
+ const char *arg = argv[i];
+ int fd;
+
+ if (!strcmp(arg, "-")) {
+ res = apply_patch(state, 0, "<stdin>", options);
+ if (res < 0)
+ goto end;
+ errs |= res;
+ read_stdin = 0;
+ continue;
+ } else if (0 < state->prefix_length)
+ arg = prefix_filename(state->prefix,
+ state->prefix_length,
+ arg);
+
+ fd = open(arg, O_RDONLY);
+ if (fd < 0) {
+ error(_("can't open patch '%s': %s"), arg, strerror(errno));
+ res = -128;
+ goto end;
+ }
+ read_stdin = 0;
+ set_default_whitespace_mode(state);
+ res = apply_patch(state, fd, arg, options);
+ close(fd);
+ if (res < 0)
+ goto end;
+ errs |= res;
+ }
+ set_default_whitespace_mode(state);
+ if (read_stdin) {
+ res = apply_patch(state, 0, "<stdin>", options);
+ if (res < 0)
+ goto end;
+ errs |= res;
+ }
+
+ if (state->whitespace_error) {
+ if (state->squelch_whitespace_errors &&
+ state->squelch_whitespace_errors < state->whitespace_error) {
+ int squelched =
+ state->whitespace_error - state->squelch_whitespace_errors;
+ warning(Q_("squelched %d whitespace error",
+ "squelched %d whitespace errors",
+ squelched),
+ squelched);
+ }
+ if (state->ws_error_action == die_on_ws_error) {
+ error(Q_("%d line adds whitespace errors.",
+ "%d lines add whitespace errors.",
+ state->whitespace_error),
+ state->whitespace_error);
+ res = -128;
+ goto end;
+ }
+ if (state->applied_after_fixing_ws && state->apply)
+ warning(Q_("%d line applied after"
+ " fixing whitespace errors.",
+ "%d lines applied after"
+ " fixing whitespace errors.",
+ state->applied_after_fixing_ws),
+ state->applied_after_fixing_ws);
+ else if (state->whitespace_error)
+ warning(Q_("%d line adds whitespace errors.",
+ "%d lines add whitespace errors.",
+ state->whitespace_error),
+ state->whitespace_error);
+ }
+
+ if (state->update_index) {
+ res = write_locked_index(&the_index, state->lock_file, COMMIT_LOCK);
+ if (res) {
+ error(_("Unable to write new index file"));
+ res = -128;
+ goto end;
+ }
+ state->newfd = -1;
+ }
+
+ res = !!errs;
+
+end:
+ if (state->newfd >= 0) {
+ rollback_lock_file(state->lock_file);
+ state->newfd = -1;
+ }
+
+ if (state->apply_verbosity <= verbosity_silent) {
+ set_error_routine(state->saved_error_routine);
+ set_warn_routine(state->saved_warn_routine);
+ }
+
+ if (res > -1)
+ return res;
+ return (res == -1 ? 1 : 128);
+}
+
+int apply_parse_options(int argc, const char **argv,
+ struct apply_state *state,
+ int *force_apply, int *options,
+ const char * const *apply_usage)
+{
+ struct option builtin_apply_options[] = {
+ { OPTION_CALLBACK, 0, "exclude", state, N_("path"),
+ N_("don't apply changes matching the given path"),
+ 0, apply_option_parse_exclude },
+ { OPTION_CALLBACK, 0, "include", state, N_("path"),
+ N_("apply changes matching the given path"),
+ 0, apply_option_parse_include },
+ { OPTION_CALLBACK, 'p', NULL, state, N_("num"),
+ N_("remove <num> leading slashes from traditional diff paths"),
+ 0, apply_option_parse_p },
+ OPT_BOOL(0, "no-add", &state->no_add,
+ N_("ignore additions made by the patch")),
+ OPT_BOOL(0, "stat", &state->diffstat,
+ N_("instead of applying the patch, output diffstat for the input")),
+ OPT_NOOP_NOARG(0, "allow-binary-replacement"),
+ OPT_NOOP_NOARG(0, "binary"),
+ OPT_BOOL(0, "numstat", &state->numstat,
+ N_("show number of added and deleted lines in decimal notation")),
+ OPT_BOOL(0, "summary", &state->summary,
+ N_("instead of applying the patch, output a summary for the input")),
+ OPT_BOOL(0, "check", &state->check,
+ N_("instead of applying the patch, see if the patch is applicable")),
+ OPT_BOOL(0, "index", &state->check_index,
+ N_("make sure the patch is applicable to the current index")),
+ OPT_BOOL(0, "cached", &state->cached,
+ N_("apply a patch without touching the working tree")),
+ OPT_BOOL(0, "unsafe-paths", &state->unsafe_paths,
+ N_("accept a patch that touches outside the working area")),
+ OPT_BOOL(0, "apply", force_apply,
+ N_("also apply the patch (use with --stat/--summary/--check)")),
+ OPT_BOOL('3', "3way", &state->threeway,
+ N_("attempt three-way merge if a patch does not apply")),
+ OPT_FILENAME(0, "build-fake-ancestor", &state->fake_ancestor,
+ N_("build a temporary index based on embedded index information")),
+ /* Think twice before adding "--nul" synonym to this */
+ OPT_SET_INT('z', NULL, &state->line_termination,
+ N_("paths are separated with NUL character"), '\0'),
+ OPT_INTEGER('C', NULL, &state->p_context,
+ N_("ensure at least <n> lines of context match")),
+ { OPTION_CALLBACK, 0, "whitespace", state, N_("action"),
+ N_("detect new or modified lines that have whitespace errors"),
+ 0, apply_option_parse_whitespace },
+ { OPTION_CALLBACK, 0, "ignore-space-change", state, NULL,
+ N_("ignore changes in whitespace when finding context"),
+ PARSE_OPT_NOARG, apply_option_parse_space_change },
+ { OPTION_CALLBACK, 0, "ignore-whitespace", state, NULL,
+ N_("ignore changes in whitespace when finding context"),
+ PARSE_OPT_NOARG, apply_option_parse_space_change },
+ OPT_BOOL('R', "reverse", &state->apply_in_reverse,
+ N_("apply the patch in reverse")),
+ OPT_BOOL(0, "unidiff-zero", &state->unidiff_zero,
+ N_("don't expect at least one line of context")),
+ OPT_BOOL(0, "reject", &state->apply_with_reject,
+ N_("leave the rejected hunks in corresponding *.rej files")),
+ OPT_BOOL(0, "allow-overlap", &state->allow_overlap,
+ N_("allow overlapping hunks")),
+ OPT__VERBOSE(&state->apply_verbosity, N_("be verbose")),
+ OPT_BIT(0, "inaccurate-eof", options,
+ N_("tolerate incorrectly detected missing new-line at the end of file"),
+ APPLY_OPT_INACCURATE_EOF),
+ OPT_BIT(0, "recount", options,
+ N_("do not trust the line counts in the hunk headers"),
+ APPLY_OPT_RECOUNT),
+ { OPTION_CALLBACK, 0, "directory", state, N_("root"),
+ N_("prepend <root> to all filenames"),
+ 0, apply_option_parse_directory },
+ OPT_END()
+ };
+
+ return parse_options(argc, argv, state->prefix, builtin_apply_options, apply_usage, 0);
+}
--- /dev/null
+#ifndef APPLY_H
+#define APPLY_H
+
+enum apply_ws_error_action {
+ nowarn_ws_error,
+ warn_on_ws_error,
+ die_on_ws_error,
+ correct_ws_error
+};
+
+enum apply_ws_ignore {
+ ignore_ws_none,
+ ignore_ws_change
+};
+
+enum apply_verbosity {
+ verbosity_silent = -1,
+ verbosity_normal = 0,
+ verbosity_verbose = 1
+};
+
+/*
+ * We need to keep track of how symlinks in the preimage are
+ * manipulated by the patches. A patch to add a/b/c where a/b
+ * is a symlink should not be allowed to affect the directory
+ * the symlink points at, but if the same patch removes a/b,
+ * it is perfectly fine, as the patch removes a/b to make room
+ * to create a directory a/b so that a/b/c can be created.
+ *
+ * See also "struct string_list symlink_changes" in "struct
+ * apply_state".
+ */
+#define APPLY_SYMLINK_GOES_AWAY 01
+#define APPLY_SYMLINK_IN_RESULT 02
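+
+/*
+ * These bits are kept per path in the "symlink_changes" string list
+ * below; prepare_symlink_changes() populates it, and helpers such as
+ * path_is_beyond_symlink() consult it when check_patch() makes sure a
+ * patch result is not deposited beyond a symbolic link.
+ */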
+
+struct apply_state {
+ const char *prefix;
+ int prefix_length;
+
+ /* These are lock_file related */
+ struct lock_file *lock_file;
+ int newfd;
+
+ /* These control what gets looked at and modified */
+ int apply; /* this is not a dry-run */
+ int cached; /* apply to the index only */
+ int check; /* preimage must match working tree, don't actually apply */
+ int check_index; /* preimage must match the indexed version */
+ int update_index; /* check_index && apply */
+
+ /* These control cosmetic aspect of the output */
+ int diffstat; /* just show a diffstat, and don't actually apply */
+ int numstat; /* just show a numeric diffstat, and don't actually apply */
+ int summary; /* just report creation, deletion, etc, and don't actually apply */
+
+ /* These boolean parameters control how the apply is done */
+ int allow_overlap;
+ int apply_in_reverse;
+ int apply_with_reject;
+ int no_add;
+ int threeway;
+ int unidiff_zero;
+ int unsafe_paths;
+
+ /* Other non boolean parameters */
+ const char *index_file;
+ enum apply_verbosity apply_verbosity;
+ const char *fake_ancestor;
+ const char *patch_input_file;
+ int line_termination;
+ struct strbuf root;
+ int p_value;
+ int p_value_known;
+ unsigned int p_context;
+
+ /* Exclude and include path parameters */
+ struct string_list limit_by_name;
+ int has_include;
+
+ /* Various "current state" */
+ int linenr; /* current line number */
+ struct string_list symlink_changes; /* we have to track symlinks */
+
+ /*
+ * For "diff-stat" like behaviour, we keep track of the biggest change
+ * we've seen, and the longest filename. That allows us to do simple
+ * scaling.
+ */
+ int max_change;
+ int max_len;
+
+ /*
+ * Records filenames that have been touched, in order to handle
+ * the case where more than one patch touches the same file.
+ */
+ struct string_list fn_table;
+
+ /*
+ * This is to save reporting routines before using
+ * set_error_routine() or set_warn_routine() to install muting
+ * routines when in verbosity_silent mode.
+ */
+ void (*saved_error_routine)(const char *err, va_list params);
+ void (*saved_warn_routine)(const char *warn, va_list params);
+
+ /* These control whitespace errors */
+ enum apply_ws_error_action ws_error_action;
+ enum apply_ws_ignore ws_ignore_action;
+ const char *whitespace_option;
+ int whitespace_error;
+ int squelch_whitespace_errors;
+ int applied_after_fixing_ws;
+};
+
+extern int apply_parse_options(int argc, const char **argv,
+ struct apply_state *state,
+ int *force_apply, int *options,
+ const char * const *apply_usage);
+extern int init_apply_state(struct apply_state *state,
+ const char *prefix,
+ struct lock_file *lock_file);
+extern void clear_apply_state(struct apply_state *state);
+extern int check_apply_state(struct apply_state *state, int force_apply);
+
+/*
+ * Some aspects of the apply behavior are controlled by the following
+ * bits in the "options" parameter passed to apply_all_patches().
+ */
+#define APPLY_OPT_INACCURATE_EOF (1<<0) /* accept inaccurate eof */
+#define APPLY_OPT_RECOUNT (1<<1) /* accept inaccurate line count */
+
+extern int apply_all_patches(struct apply_state *state,
+ int argc,
+ const char **argv,
+ int options);
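+
+/*
+ * A sketch of a typical call sequence (modeled on run_apply() in
+ * builtin/am.c; error handling omitted, variable names illustrative).
+ * apply_parse_options() returns the number of arguments it did not
+ * consume; the paths of the patches to apply are then passed to
+ * apply_all_patches():
+ *
+ * struct apply_state state;
+ * static struct lock_file lock;
+ * int force_apply = 0, options = 0;
+ *
+ * init_apply_state(&state, prefix, &lock);
+ * apply_parse_options(opts_argc, opts_argv, &state, &force_apply, &options, NULL);
+ * check_apply_state(&state, force_apply);
+ * apply_all_patches(&state, npaths, paths, options);
+ * clear_apply_state(&state);
+ */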
+
+#endif
debug_push(elem);
}
- elem = read_attr_from_file(git_path_info_attributes(), 1);
+ if (startup_info->have_repository)
+ elem = read_attr_from_file(git_path_info_attributes(), 1);
+ else
+ elem = NULL;
+
if (!elem)
elem = xcalloc(1, sizeof(*elem));
elem->origin = NULL;
array[cnt].distance = distance;
cnt++;
}
- qsort(array, cnt, sizeof(*array), compare_commit_dist);
+ QSORT(array, cnt, compare_commit_dist);
for (p = list, i = 0; i < cnt; i++) {
char buf[100]; /* enough for dist=%d */
struct object *obj = &(array[i].commit->object);
extern int fmt_merge_msg(struct strbuf *in, struct strbuf *out,
struct fmt_merge_msg_opts *);
-extern int textconv_object(const char *path, unsigned mode, const unsigned char *sha1, int sha1_valid, char **buf, unsigned long *buf_size);
+extern int textconv_object(const char *path, unsigned mode, const struct object_id *oid, int oid_valid, char **buf, unsigned long *buf_size);
extern int is_builtin(const char *s);
#include "rerere.h"
#include "prompt.h"
#include "mailinfo.h"
+#include "apply.h"
+#include "string-list.h"
/**
* Returns 1 if the file is empty or does not exist, 0 otherwise.
size_t msg_len;
/* when --rebasing, records the original commit the patch came from */
- unsigned char orig_commit[GIT_SHA1_RAWSZ];
+ struct object_id orig_commit;
/* number of digits in patch filename */
int prec;
}
/**
- * Reads a KEY=VALUE shell variable assignment from `fp`, returning the VALUE
- * as a newly-allocated string. VALUE must be a quoted string, and the KEY must
- * match `key`. Returns NULL on failure.
- *
- * This is used by read_author_script() to read the GIT_AUTHOR_* variables from
- * the author-script.
+ * Take a series of KEY='VALUE' lines where the VALUE part is
+ * sq-quoted, and append <KEY, VALUE> pairs at the end of the string list
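+ * (e.g. the author-script file read by read_author_script() below
+ * contains lines like GIT_AUTHOR_NAME='A U Thor'; the value shown is
+ * only illustrative).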
*/
-static char *read_shell_var(FILE *fp, const char *key)
+static int parse_key_value_squoted(char *buf, struct string_list *list)
{
- struct strbuf sb = STRBUF_INIT;
- const char *str;
-
- if (strbuf_getline_lf(&sb, fp))
- goto fail;
-
- if (!skip_prefix(sb.buf, key, &str))
- goto fail;
-
- if (!skip_prefix(str, "=", &str))
- goto fail;
-
- strbuf_remove(&sb, 0, str - sb.buf);
-
- str = sq_dequote(sb.buf);
- if (!str)
- goto fail;
-
- return strbuf_detach(&sb, NULL);
-
-fail:
- strbuf_release(&sb);
- return NULL;
+ while (*buf) {
+ struct string_list_item *item;
+ char *np;
+ char *cp = strchr(buf, '=');
+ if (!cp)
+ return -1;
+ np = strchrnul(cp, '\n');
+ *cp++ = '\0';
+ item = string_list_append(list, buf);
+
+ buf = np + (*np == '\n');
+ *np = '\0';
+ cp = sq_dequote(cp);
+ if (!cp)
+ return -1;
+ item->util = xstrdup(cp);
+ }
+ return 0;
}
/**
static int read_author_script(struct am_state *state)
{
const char *filename = am_path(state, "author-script");
- FILE *fp;
+ struct strbuf buf = STRBUF_INIT;
+ struct string_list kv = STRING_LIST_INIT_DUP;
+ int retval = -1; /* assume failure */
+ int fd;
assert(!state->author_name);
assert(!state->author_email);
assert(!state->author_date);
- fp = fopen(filename, "r");
- if (!fp) {
+ fd = open(filename, O_RDONLY);
+ if (fd < 0) {
if (errno == ENOENT)
return 0;
die_errno(_("could not open '%s' for reading"), filename);
}
+ strbuf_read(&buf, fd, 0);
+ close(fd);
+ if (parse_key_value_squoted(buf.buf, &kv))
+ goto finish;
- state->author_name = read_shell_var(fp, "GIT_AUTHOR_NAME");
- if (!state->author_name) {
- fclose(fp);
- return -1;
- }
-
- state->author_email = read_shell_var(fp, "GIT_AUTHOR_EMAIL");
- if (!state->author_email) {
- fclose(fp);
- return -1;
- }
-
- state->author_date = read_shell_var(fp, "GIT_AUTHOR_DATE");
- if (!state->author_date) {
- fclose(fp);
- return -1;
- }
-
- if (fgetc(fp) != EOF) {
- fclose(fp);
- return -1;
- }
-
- fclose(fp);
- return 0;
+ if (kv.nr != 3 ||
+ strcmp(kv.items[0].string, "GIT_AUTHOR_NAME") ||
+ strcmp(kv.items[1].string, "GIT_AUTHOR_EMAIL") ||
+ strcmp(kv.items[2].string, "GIT_AUTHOR_DATE"))
+ goto finish;
+ state->author_name = kv.items[0].util;
+ state->author_email = kv.items[1].util;
+ state->author_date = kv.items[2].util;
+ retval = 0;
+finish:
+ string_list_clear(&kv, !!retval);
+ strbuf_release(&buf);
+ return retval;
}
/**
read_commit_msg(state);
if (read_state_file(&sb, state, "original-commit", 1) < 0)
- hashclr(state->orig_commit);
- else if (get_sha1_hex(sb.buf, state->orig_commit) < 0)
+ oidclr(&state->orig_commit);
+ else if (get_oid_hex(sb.buf, &state->orig_commit) < 0)
die(_("could not parse %s"), am_path(state, "original-commit"));
read_state_file(&sb, state, "threeway", 1);
fp = xfopen(am_path(state, "rewritten"), "r");
while (!strbuf_getline_lf(&sb, fp)) {
- unsigned char from_obj[GIT_SHA1_RAWSZ], to_obj[GIT_SHA1_RAWSZ];
+ struct object_id from_obj, to_obj;
if (sb.len != GIT_SHA1_HEXSZ * 2 + 1) {
ret = error(invalid_line, sb.buf);
goto finish;
}
- if (get_sha1_hex(sb.buf, from_obj)) {
+ if (get_oid_hex(sb.buf, &from_obj)) {
ret = error(invalid_line, sb.buf);
goto finish;
}
goto finish;
}
- if (get_sha1_hex(sb.buf + GIT_SHA1_HEXSZ + 1, to_obj)) {
+ if (get_oid_hex(sb.buf + GIT_SHA1_HEXSZ + 1, &to_obj)) {
ret = error(invalid_line, sb.buf);
goto finish;
}
- if (copy_note_for_rewrite(c, from_obj, to_obj))
+ if (copy_note_for_rewrite(c, from_obj.hash, to_obj.hash))
ret = error(_("Failed to copy notes from '%s' to '%s'"),
- sha1_to_hex(from_obj), sha1_to_hex(to_obj));
+ oid_to_hex(&from_obj), oid_to_hex(&to_obj));
}
finish:
static void am_setup(struct am_state *state, enum patch_format patch_format,
const char **paths, int keep_cr)
{
- unsigned char curr_head[GIT_SHA1_RAWSZ];
+ struct object_id curr_head;
const char *str;
struct strbuf sb = STRBUF_INIT;
else
write_state_text(state, "applying", "");
- if (!get_sha1("HEAD", curr_head)) {
- write_state_text(state, "abort-safety", sha1_to_hex(curr_head));
+ if (!get_oid("HEAD", &curr_head)) {
+ write_state_text(state, "abort-safety", oid_to_hex(&curr_head));
if (!state->rebasing)
- update_ref("am", "ORIG_HEAD", curr_head, NULL, 0,
+ update_ref_oid("am", "ORIG_HEAD", &curr_head, NULL, 0,
UPDATE_REFS_DIE_ON_ERR);
} else {
write_state_text(state, "abort-safety", "");
*/
static void am_next(struct am_state *state)
{
- unsigned char head[GIT_SHA1_RAWSZ];
+ struct object_id head;
free(state->author_name);
state->author_name = NULL;
unlink(am_path(state, "author-script"));
unlink(am_path(state, "final-commit"));
- hashclr(state->orig_commit);
+ oidclr(&state->orig_commit);
unlink(am_path(state, "original-commit"));
- if (!get_sha1("HEAD", head))
- write_state_text(state, "abort-safety", sha1_to_hex(head));
+ if (!get_oid("HEAD", &head))
+ write_state_text(state, "abort-safety", oid_to_hex(&head));
else
write_state_text(state, "abort-safety", "");
*/
static int index_has_changes(struct strbuf *sb)
{
- unsigned char head[GIT_SHA1_RAWSZ];
+ struct object_id head;
int i;
- if (!get_sha1_tree("HEAD", head)) {
+ if (!get_sha1_tree("HEAD", head.hash)) {
struct diff_options opt;
diff_setup(&opt);
DIFF_OPT_SET(&opt, EXIT_WITH_STATUS);
if (!sb)
DIFF_OPT_SET(&opt, QUICK);
- do_diff_cache(head, &opt);
+ do_diff_cache(head.hash, &opt);
diffcore_std(&opt);
for (i = 0; sb && i < diff_queued_diff.nr; i++) {
if (i)
* Sets commit_id to the commit hash where the mail was generated from.
* Returns 0 on success, -1 on failure.
*/
-static int get_mail_commit_sha1(unsigned char *commit_id, const char *mail)
+static int get_mail_commit_oid(struct object_id *commit_id, const char *mail)
{
struct strbuf sb = STRBUF_INIT;
FILE *fp = xfopen(mail, "r");
if (!skip_prefix(sb.buf, "From ", &x))
return -1;
- if (get_sha1_hex(x, commit_id) < 0)
+ if (get_oid_hex(x, commit_id) < 0)
return -1;
strbuf_release(&sb);
static void write_index_patch(const struct am_state *state)
{
struct tree *tree;
- unsigned char head[GIT_SHA1_RAWSZ];
+ struct object_id head;
struct rev_info rev_info;
FILE *fp;
- if (!get_sha1_tree("HEAD", head))
- tree = lookup_tree(head);
+ if (!get_sha1_tree("HEAD", head.hash))
+ tree = lookup_tree(head.hash);
else
tree = lookup_tree(EMPTY_TREE_SHA1_BIN);
static int parse_mail_rebase(struct am_state *state, const char *mail)
{
struct commit *commit;
- unsigned char commit_sha1[GIT_SHA1_RAWSZ];
+ struct object_id commit_oid;
- if (get_mail_commit_sha1(commit_sha1, mail) < 0)
+ if (get_mail_commit_oid(&commit_oid, mail) < 0)
die(_("could not parse %s"), mail);
- commit = lookup_commit_or_die(commit_sha1, mail);
+ commit = lookup_commit_or_die(commit_oid.hash, mail);
get_commit_info(state, commit);
write_commit_patch(state, commit);
- hashcpy(state->orig_commit, commit_sha1);
- write_state_text(state, "original-commit", sha1_to_hex(commit_sha1));
+ oidcpy(&state->orig_commit, &commit_oid);
+ write_state_text(state, "original-commit", oid_to_hex(&commit_oid));
return 0;
}
*/
static int run_apply(const struct am_state *state, const char *index_file)
{
- struct child_process cp = CHILD_PROCESS_INIT;
-
- cp.git_cmd = 1;
-
- if (index_file)
- argv_array_pushf(&cp.env_array, "GIT_INDEX_FILE=%s", index_file);
+ struct argv_array apply_paths = ARGV_ARRAY_INIT;
+ struct argv_array apply_opts = ARGV_ARRAY_INIT;
+ struct apply_state apply_state;
+ int res, opts_left;
+ static struct lock_file lock_file;
+ int force_apply = 0;
+ int options = 0;
+
+ if (init_apply_state(&apply_state, NULL, &lock_file))
+ die("BUG: init_apply_state() failed");
+
+ argv_array_push(&apply_opts, "apply");
+ argv_array_pushv(&apply_opts, state->git_apply_opts.argv);
+
+ opts_left = apply_parse_options(apply_opts.argc, apply_opts.argv,
+ &apply_state, &force_apply, &options,
+ NULL);
+
+ if (opts_left != 0)
+ die("unknown option passed through to git apply");
+
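+ /*
+ * These assignments replace the "--cached" / "--index" options that
+ * the old code passed to the external "git apply" process (see the
+ * removed argv_array_push() calls further down).
+ */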
+ if (index_file) {
+ apply_state.index_file = index_file;
+ apply_state.cached = 1;
+ } else
+ apply_state.check_index = 1;
/*
* If we are allowed to fall back on 3-way merge, don't give false
* errors during the initial attempt.
*/
- if (state->threeway && !index_file) {
- cp.no_stdout = 1;
- cp.no_stderr = 1;
- }
+ if (state->threeway && !index_file)
+ apply_state.apply_verbosity = verbosity_silent;
- argv_array_push(&cp.args, "apply");
+ if (check_apply_state(&apply_state, force_apply))
+ die("BUG: check_apply_state() failed");
- argv_array_pushv(&cp.args, state->git_apply_opts.argv);
+ argv_array_push(&apply_paths, am_path(state, "patch"));
- if (index_file)
- argv_array_push(&cp.args, "--cached");
- else
- argv_array_push(&cp.args, "--index");
+ res = apply_all_patches(&apply_state, apply_paths.argc, apply_paths.argv, options);
- argv_array_push(&cp.args, am_path(state, "patch"));
+ argv_array_clear(&apply_paths);
+ argv_array_clear(&apply_opts);
+ clear_apply_state(&apply_state);
- if (run_command(&cp))
- return -1;
+ if (res)
+ return res;
- /* Reload index as git-apply will have modified it. */
- discard_cache();
- read_cache_from(index_file ? index_file : get_index_file());
+ if (index_file) {
+ /* Reload index as apply_all_patches() will have modified it. */
+ discard_cache();
+ read_cache_from(index_file);
+ }
return 0;
}
*/
static void do_commit(const struct am_state *state)
{
- unsigned char tree[GIT_SHA1_RAWSZ], parent[GIT_SHA1_RAWSZ],
- commit[GIT_SHA1_RAWSZ];
- unsigned char *ptr;
+ struct object_id tree, parent, commit;
+ const struct object_id *old_oid;
struct commit_list *parents = NULL;
const char *reflog_msg, *author;
struct strbuf sb = STRBUF_INIT;
if (run_hook_le(NULL, "pre-applypatch", NULL))
exit(1);
- if (write_cache_as_tree(tree, 0, NULL))
+ if (write_cache_as_tree(tree.hash, 0, NULL))
die(_("git write-tree failed to write a tree"));
- if (!get_sha1_commit("HEAD", parent)) {
- ptr = parent;
- commit_list_insert(lookup_commit(parent), &parents);
+ if (!get_sha1_commit("HEAD", parent.hash)) {
+ old_oid = &parent;
+ commit_list_insert(lookup_commit(parent.hash), &parents);
} else {
- ptr = NULL;
+ old_oid = NULL;
say(state, stderr, _("applying to an empty history"));
}
setenv("GIT_COMMITTER_DATE",
state->ignore_date ? "" : state->author_date, 1);
- if (commit_tree(state->msg, state->msg_len, tree, parents, commit,
+ if (commit_tree(state->msg, state->msg_len, tree.hash, parents, commit.hash,
author, state->sign_commit))
die(_("failed to write commit object"));
strbuf_addf(&sb, "%s: %.*s", reflog_msg, linelen(state->msg),
state->msg);
- update_ref(sb.buf, "HEAD", commit, ptr, 0, UPDATE_REFS_DIE_ON_ERR);
+ update_ref_oid(sb.buf, "HEAD", &commit, old_oid, 0,
+ UPDATE_REFS_DIE_ON_ERR);
if (state->rebasing) {
FILE *fp = xfopen(am_path(state, "rewritten"), "a");
- assert(!is_null_sha1(state->orig_commit));
- fprintf(fp, "%s ", sha1_to_hex(state->orig_commit));
- fprintf(fp, "%s\n", sha1_to_hex(commit));
+ assert(!is_null_oid(&state->orig_commit));
+ fprintf(fp, "%s ", oid_to_hex(&state->orig_commit));
+ fprintf(fp, "%s\n", oid_to_hex(&commit));
fclose(fp);
}
* Clean the index without touching entries that are not modified between
* `head` and `remote`.
*/
-static int clean_index(const unsigned char *head, const unsigned char *remote)
+static int clean_index(const struct object_id *head, const struct object_id *remote)
{
struct tree *head_tree, *remote_tree, *index_tree;
- unsigned char index[GIT_SHA1_RAWSZ];
+ struct object_id index;
- head_tree = parse_tree_indirect(head);
+ head_tree = parse_tree_indirect(head->hash);
if (!head_tree)
- return error(_("Could not parse object '%s'."), sha1_to_hex(head));
+ return error(_("Could not parse object '%s'."), oid_to_hex(head));
- remote_tree = parse_tree_indirect(remote);
+ remote_tree = parse_tree_indirect(remote->hash);
if (!remote_tree)
- return error(_("Could not parse object '%s'."), sha1_to_hex(remote));
+ return error(_("Could not parse object '%s'."), oid_to_hex(remote));
read_cache_unmerged();
if (fast_forward_to(head_tree, head_tree, 1))
return -1;
- if (write_cache_as_tree(index, 0, NULL))
+ if (write_cache_as_tree(index.hash, 0, NULL))
return -1;
- index_tree = parse_tree_indirect(index);
+ index_tree = parse_tree_indirect(index.hash);
if (!index_tree)
- return error(_("Could not parse object '%s'."), sha1_to_hex(index));
+ return error(_("Could not parse object '%s'."), oid_to_hex(&index));
if (fast_forward_to(index_tree, remote_tree, 0))
return -1;
*/
static void am_skip(struct am_state *state)
{
- unsigned char head[GIT_SHA1_RAWSZ];
+ struct object_id head;
am_rerere_clear();
- if (get_sha1("HEAD", head))
- hashcpy(head, EMPTY_TREE_SHA1_BIN);
+ if (get_oid("HEAD", &head))
+ hashcpy(head.hash, EMPTY_TREE_SHA1_BIN);
- if (clean_index(head, head))
+ if (clean_index(&head, &head))
die(_("failed to clean index"));
am_next(state);
static int safe_to_abort(const struct am_state *state)
{
struct strbuf sb = STRBUF_INIT;
- unsigned char abort_safety[GIT_SHA1_RAWSZ], head[GIT_SHA1_RAWSZ];
+ struct object_id abort_safety, head;
if (file_exists(am_path(state, "dirtyindex")))
return 0;
if (read_state_file(&sb, state, "abort-safety", 1) > 0) {
- if (get_sha1_hex(sb.buf, abort_safety))
+ if (get_oid_hex(sb.buf, &abort_safety))
die(_("could not parse %s"), am_path(state, "abort_safety"));
} else
- hashclr(abort_safety);
+ oidclr(&abort_safety);
- if (get_sha1("HEAD", head))
- hashclr(head);
+ if (get_oid("HEAD", &head))
+ oidclr(&head);
- if (!hashcmp(head, abort_safety))
+ if (!oidcmp(&head, &abort_safety))
return 1;
error(_("You seem to have moved HEAD since the last 'am' failure.\n"
*/
static void am_abort(struct am_state *state)
{
- unsigned char curr_head[GIT_SHA1_RAWSZ], orig_head[GIT_SHA1_RAWSZ];
+ struct object_id curr_head, orig_head;
int has_curr_head, has_orig_head;
char *curr_branch;
am_rerere_clear();
- curr_branch = resolve_refdup("HEAD", 0, curr_head, NULL);
- has_curr_head = !is_null_sha1(curr_head);
+ curr_branch = resolve_refdup("HEAD", 0, curr_head.hash, NULL);
+ has_curr_head = !is_null_oid(&curr_head);
if (!has_curr_head)
- hashcpy(curr_head, EMPTY_TREE_SHA1_BIN);
+ hashcpy(curr_head.hash, EMPTY_TREE_SHA1_BIN);
- has_orig_head = !get_sha1("ORIG_HEAD", orig_head);
+ has_orig_head = !get_oid("ORIG_HEAD", &orig_head);
if (!has_orig_head)
- hashcpy(orig_head, EMPTY_TREE_SHA1_BIN);
+ hashcpy(orig_head.hash, EMPTY_TREE_SHA1_BIN);
- clean_index(curr_head, orig_head);
+ clean_index(&curr_head, &orig_head);
if (has_orig_head)
- update_ref("am --abort", "HEAD", orig_head,
- has_curr_head ? curr_head : NULL, 0,
+ update_ref_oid("am --abort", "HEAD", &orig_head,
+ has_curr_head ? &curr_head : NULL, 0,
UPDATE_REFS_DIE_ON_ERR);
else if (curr_branch)
delete_ref(curr_branch, NULL, REF_NODEREF);
-/*
- * apply.c
- *
- * Copyright (C) Linus Torvalds, 2005
- *
- * This applies patches on top of some (arbitrary) version of the SCM.
- *
- */
#include "cache.h"
-#include "lockfile.h"
-#include "cache-tree.h"
-#include "quote.h"
-#include "blob.h"
-#include "delta.h"
-#include "builtin.h"
-#include "string-list.h"
-#include "dir.h"
-#include "diff.h"
-#include "parse-options.h"
-#include "xdiff-interface.h"
-#include "ll-merge.h"
-#include "rerere.h"
-
-enum ws_error_action {
- nowarn_ws_error,
- warn_on_ws_error,
- die_on_ws_error,
- correct_ws_error
-};
-
-
-enum ws_ignore {
- ignore_ws_none,
- ignore_ws_change
-};
-
-/*
- * We need to keep track of how symlinks in the preimage are
- * manipulated by the patches. A patch to add a/b/c where a/b
- * is a symlink should not be allowed to affect the directory
- * the symlink points at, but if the same patch removes a/b,
- * it is perfectly fine, as the patch removes a/b to make room
- * to create a directory a/b so that a/b/c can be created.
- *
- * See also "struct string_list symlink_changes" in "struct
- * apply_state".
- */
-#define SYMLINK_GOES_AWAY 01
-#define SYMLINK_IN_RESULT 02
-
-struct apply_state {
- const char *prefix;
- int prefix_length;
-
- /* These are lock_file related */
- struct lock_file *lock_file;
- int newfd;
-
- /* These control what gets looked at and modified */
- int apply; /* this is not a dry-run */
- int cached; /* apply to the index only */
- int check; /* preimage must match working tree, don't actually apply */
- int check_index; /* preimage must match the indexed version */
- int update_index; /* check_index && apply */
-
- /* These control cosmetic aspect of the output */
- int diffstat; /* just show a diffstat, and don't actually apply */
- int numstat; /* just show a numeric diffstat, and don't actually apply */
- int summary; /* just report creation, deletion, etc, and don't actually apply */
-
- /* These boolean parameters control how the apply is done */
- int allow_overlap;
- int apply_in_reverse;
- int apply_with_reject;
- int apply_verbosely;
- int no_add;
- int threeway;
- int unidiff_zero;
- int unsafe_paths;
-
- /* Other non boolean parameters */
- const char *fake_ancestor;
- const char *patch_input_file;
- int line_termination;
- struct strbuf root;
- int p_value;
- int p_value_known;
- unsigned int p_context;
-
- /* Exclude and include path parameters */
- struct string_list limit_by_name;
- int has_include;
-
- /* Various "current state" */
- int linenr; /* current line number */
- struct string_list symlink_changes; /* we have to track symlinks */
-
- /*
- * For "diff-stat" like behaviour, we keep track of the biggest change
- * we've seen, and the longest filename. That allows us to do simple
- * scaling.
- */
- int max_change;
- int max_len;
-
- /*
- * Records filenames that have been touched, in order to handle
- * the case where more than one patches touch the same file.
- */
- struct string_list fn_table;
-
- /* These control whitespace errors */
- enum ws_error_action ws_error_action;
- enum ws_ignore ws_ignore_action;
- const char *whitespace_option;
- int whitespace_error;
- int squelch_whitespace_errors;
- int applied_after_fixing_ws;
-};
-
-static const char * const apply_usage[] = {
- N_("git apply [<options>] [<patch>...]"),
- NULL
-};
-
-static void parse_whitespace_option(struct apply_state *state, const char *option)
-{
- if (!option) {
- state->ws_error_action = warn_on_ws_error;
- return;
- }
- if (!strcmp(option, "warn")) {
- state->ws_error_action = warn_on_ws_error;
- return;
- }
- if (!strcmp(option, "nowarn")) {
- state->ws_error_action = nowarn_ws_error;
- return;
- }
- if (!strcmp(option, "error")) {
- state->ws_error_action = die_on_ws_error;
- return;
- }
- if (!strcmp(option, "error-all")) {
- state->ws_error_action = die_on_ws_error;
- state->squelch_whitespace_errors = 0;
- return;
- }
- if (!strcmp(option, "strip") || !strcmp(option, "fix")) {
- state->ws_error_action = correct_ws_error;
- return;
- }
- die(_("unrecognized whitespace option '%s'"), option);
-}
-
-static void parse_ignorewhitespace_option(struct apply_state *state,
- const char *option)
-{
- if (!option || !strcmp(option, "no") ||
- !strcmp(option, "false") || !strcmp(option, "never") ||
- !strcmp(option, "none")) {
- state->ws_ignore_action = ignore_ws_none;
- return;
- }
- if (!strcmp(option, "change")) {
- state->ws_ignore_action = ignore_ws_change;
- return;
- }
- die(_("unrecognized whitespace ignore option '%s'"), option);
-}
-
-static void set_default_whitespace_mode(struct apply_state *state)
-{
- if (!state->whitespace_option && !apply_default_whitespace)
- state->ws_error_action = (state->apply ? warn_on_ws_error : nowarn_ws_error);
-}
-
-/*
- * This represents one "hunk" from a patch, starting with
- * "@@ -oldpos,oldlines +newpos,newlines @@" marker. The
- * patch text is pointed at by patch, and its byte length
- * is stored in size. leading and trailing are the number
- * of context lines.
- */
-struct fragment {
- unsigned long leading, trailing;
- unsigned long oldpos, oldlines;
- unsigned long newpos, newlines;
- /*
- * 'patch' is usually borrowed from buf in apply_patch(),
- * but some codepaths store an allocated buffer.
- */
- const char *patch;
- unsigned free_patch:1,
- rejected:1;
- int size;
- int linenr;
- struct fragment *next;
-};
-
-/*
- * When dealing with a binary patch, we reuse "leading" field
- * to store the type of the binary hunk, either deflated "delta"
- * or deflated "literal".
- */
-#define binary_patch_method leading
-#define BINARY_DELTA_DEFLATED 1
-#define BINARY_LITERAL_DEFLATED 2
-
-/*
- * This represents a "patch" to a file, both metainfo changes
- * such as creation/deletion, filemode and content changes represented
- * as a series of fragments.
- */
-struct patch {
- char *new_name, *old_name, *def_name;
- unsigned int old_mode, new_mode;
- int is_new, is_delete; /* -1 = unknown, 0 = false, 1 = true */
- int rejected;
- unsigned ws_rule;
- int lines_added, lines_deleted;
- int score;
- unsigned int is_toplevel_relative:1;
- unsigned int inaccurate_eof:1;
- unsigned int is_binary:1;
- unsigned int is_copy:1;
- unsigned int is_rename:1;
- unsigned int recount:1;
- unsigned int conflicted_threeway:1;
- unsigned int direct_to_threeway:1;
- struct fragment *fragments;
- char *result;
- size_t resultsize;
- char old_sha1_prefix[41];
- char new_sha1_prefix[41];
- struct patch *next;
-
- /* three-way fallback result */
- struct object_id threeway_stage[3];
-};
-
-static void free_fragment_list(struct fragment *list)
-{
- while (list) {
- struct fragment *next = list->next;
- if (list->free_patch)
- free((char *)list->patch);
- free(list);
- list = next;
- }
-}
-
-static void free_patch(struct patch *patch)
-{
- free_fragment_list(patch->fragments);
- free(patch->def_name);
- free(patch->old_name);
- free(patch->new_name);
- free(patch->result);
- free(patch);
-}
-
-static void free_patch_list(struct patch *list)
-{
- while (list) {
- struct patch *next = list->next;
- free_patch(list);
- list = next;
- }
-}
-
-/*
- * A line in a file, len-bytes long (includes the terminating LF,
- * except for an incomplete line at the end if the file ends with
- * one), and its contents hashes to 'hash'.
- */
-struct line {
- size_t len;
- unsigned hash : 24;
- unsigned flag : 8;
-#define LINE_COMMON 1
-#define LINE_PATCHED 2
-};
-
-/*
- * This represents a "file", which is an array of "lines".
- */
-struct image {
- char *buf;
- size_t len;
- size_t nr;
- size_t alloc;
- struct line *line_allocated;
- struct line *line;
-};
-
-static uint32_t hash_line(const char *cp, size_t len)
-{
- size_t i;
- uint32_t h;
- for (i = 0, h = 0; i < len; i++) {
- if (!isspace(cp[i])) {
- h = h * 3 + (cp[i] & 0xff);
- }
- }
- return h;
-}
-
-/*
- * Compare lines s1 of length n1 and s2 of length n2, ignoring
- * whitespace difference. Returns 1 if they match, 0 otherwise
- */
-static int fuzzy_matchlines(const char *s1, size_t n1,
- const char *s2, size_t n2)
-{
- const char *last1 = s1 + n1 - 1;
- const char *last2 = s2 + n2 - 1;
- int result = 0;
-
- /* ignore line endings */
- while ((*last1 == '\r') || (*last1 == '\n'))
- last1--;
- while ((*last2 == '\r') || (*last2 == '\n'))
- last2--;
-
- /* skip leading whitespaces, if both begin with whitespace */
- if (s1 <= last1 && s2 <= last2 && isspace(*s1) && isspace(*s2)) {
- while (isspace(*s1) && (s1 <= last1))
- s1++;
- while (isspace(*s2) && (s2 <= last2))
- s2++;
- }
- /* early return if both lines are empty */
- if ((s1 > last1) && (s2 > last2))
- return 1;
- while (!result) {
- result = *s1++ - *s2++;
- /*
- * Skip whitespace inside. We check for whitespace on
- * both buffers because we don't want "a b" to match
- * "ab"
- */
- if (isspace(*s1) && isspace(*s2)) {
- while (isspace(*s1) && s1 <= last1)
- s1++;
- while (isspace(*s2) && s2 <= last2)
- s2++;
- }
- /*
- * If we reached the end on one side only,
- * lines don't match
- */
- if (
- ((s2 > last2) && (s1 <= last1)) ||
- ((s1 > last1) && (s2 <= last2)))
- return 0;
- if ((s1 > last1) && (s2 > last2))
- break;
- }
-
- return !result;
-}
-
-static void add_line_info(struct image *img, const char *bol, size_t len, unsigned flag)
-{
- ALLOC_GROW(img->line_allocated, img->nr + 1, img->alloc);
- img->line_allocated[img->nr].len = len;
- img->line_allocated[img->nr].hash = hash_line(bol, len);
- img->line_allocated[img->nr].flag = flag;
- img->nr++;
-}
-
-/*
- * "buf" has the file contents to be patched (read from various sources).
- * attach it to "image" and add line-based index to it.
- * "image" now owns the "buf".
- */
-static void prepare_image(struct image *image, char *buf, size_t len,
- int prepare_linetable)
-{
- const char *cp, *ep;
-
- memset(image, 0, sizeof(*image));
- image->buf = buf;
- image->len = len;
-
- if (!prepare_linetable)
- return;
-
- ep = image->buf + image->len;
- cp = image->buf;
- while (cp < ep) {
- const char *next;
- for (next = cp; next < ep && *next != '\n'; next++)
- ;
- if (next < ep)
- next++;
- add_line_info(image, cp, next - cp, 0);
- cp = next;
- }
- image->line = image->line_allocated;
-}
-
-static void clear_image(struct image *image)
-{
- free(image->buf);
- free(image->line_allocated);
- memset(image, 0, sizeof(*image));
-}
-
-/* fmt must contain _one_ %s and no other substitution */
-static void say_patch_name(FILE *output, const char *fmt, struct patch *patch)
-{
- struct strbuf sb = STRBUF_INIT;
-
- if (patch->old_name && patch->new_name &&
- strcmp(patch->old_name, patch->new_name)) {
- quote_c_style(patch->old_name, &sb, NULL, 0);
- strbuf_addstr(&sb, " => ");
- quote_c_style(patch->new_name, &sb, NULL, 0);
- } else {
- const char *n = patch->new_name;
- if (!n)
- n = patch->old_name;
- quote_c_style(n, &sb, NULL, 0);
- }
- fprintf(output, fmt, sb.buf);
- fputc('\n', output);
- strbuf_release(&sb);
-}
-
-#define SLOP (16)
-
-static void read_patch_file(struct strbuf *sb, int fd)
-{
- if (strbuf_read(sb, fd, 0) < 0)
- die_errno("git apply: failed to read");
-
- /*
- * Make sure that we have some slop in the buffer
- * so that we can do speculative "memcmp" etc, and
- * see to it that it is NUL-filled.
- */
- strbuf_grow(sb, SLOP);
- memset(sb->buf + sb->len, 0, SLOP);
-}
-
-static unsigned long linelen(const char *buffer, unsigned long size)
-{
- unsigned long len = 0;
- while (size--) {
- len++;
- if (*buffer++ == '\n')
- break;
- }
- return len;
-}
-
-static int is_dev_null(const char *str)
-{
- return skip_prefix(str, "/dev/null", &str) && isspace(*str);
-}
-
-#define TERM_SPACE 1
-#define TERM_TAB 2
-
-static int name_terminate(int c, int terminate)
-{
- if (c == ' ' && !(terminate & TERM_SPACE))
- return 0;
- if (c == '\t' && !(terminate & TERM_TAB))
- return 0;
-
- return 1;
-}
-
-/* remove double slashes to make --index work with such filenames */
-static char *squash_slash(char *name)
-{
- int i = 0, j = 0;
-
- if (!name)
- return NULL;
-
- while (name[i]) {
- if ((name[j++] = name[i++]) == '/')
- while (name[i] == '/')
- i++;
- }
- name[j] = '\0';
- return name;
-}
-
-static char *find_name_gnu(struct apply_state *state,
- const char *line,
- const char *def,
- int p_value)
-{
- struct strbuf name = STRBUF_INIT;
- char *cp;
-
- /*
- * Proposed "new-style" GNU patch/diff format; see
- * http://marc.info/?l=git&m=112927316408690&w=2
- */
- if (unquote_c_style(&name, line, NULL)) {
- strbuf_release(&name);
- return NULL;
- }
-
- for (cp = name.buf; p_value; p_value--) {
- cp = strchr(cp, '/');
- if (!cp) {
- strbuf_release(&name);
- return NULL;
- }
- cp++;
- }
-
- strbuf_remove(&name, 0, cp - name.buf);
- if (state->root.len)
- strbuf_insert(&name, 0, state->root.buf, state->root.len);
- return squash_slash(strbuf_detach(&name, NULL));
-}
-
-static size_t sane_tz_len(const char *line, size_t len)
-{
- const char *tz, *p;
-
- if (len < strlen(" +0500") || line[len-strlen(" +0500")] != ' ')
- return 0;
- tz = line + len - strlen(" +0500");
-
- if (tz[1] != '+' && tz[1] != '-')
- return 0;
-
- for (p = tz + 2; p != line + len; p++)
- if (!isdigit(*p))
- return 0;
-
- return line + len - tz;
-}
-
-static size_t tz_with_colon_len(const char *line, size_t len)
-{
- const char *tz, *p;
-
- if (len < strlen(" +08:00") || line[len - strlen(":00")] != ':')
- return 0;
- tz = line + len - strlen(" +08:00");
-
- if (tz[0] != ' ' || (tz[1] != '+' && tz[1] != '-'))
- return 0;
- p = tz + 2;
- if (!isdigit(*p++) || !isdigit(*p++) || *p++ != ':' ||
- !isdigit(*p++) || !isdigit(*p++))
- return 0;
-
- return line + len - tz;
-}
-
-static size_t date_len(const char *line, size_t len)
-{
- const char *date, *p;
-
- if (len < strlen("72-02-05") || line[len-strlen("-05")] != '-')
- return 0;
- p = date = line + len - strlen("72-02-05");
-
- if (!isdigit(*p++) || !isdigit(*p++) || *p++ != '-' ||
- !isdigit(*p++) || !isdigit(*p++) || *p++ != '-' ||
- !isdigit(*p++) || !isdigit(*p++)) /* Not a date. */
- return 0;
-
- if (date - line >= strlen("19") &&
- isdigit(date[-1]) && isdigit(date[-2])) /* 4-digit year */
- date -= strlen("19");
-
- return line + len - date;
-}
-
-static size_t short_time_len(const char *line, size_t len)
-{
- const char *time, *p;
-
- if (len < strlen(" 07:01:32") || line[len-strlen(":32")] != ':')
- return 0;
- p = time = line + len - strlen(" 07:01:32");
-
- /* Permit 1-digit hours? */
- if (*p++ != ' ' ||
- !isdigit(*p++) || !isdigit(*p++) || *p++ != ':' ||
- !isdigit(*p++) || !isdigit(*p++) || *p++ != ':' ||
- !isdigit(*p++) || !isdigit(*p++)) /* Not a time. */
- return 0;
-
- return line + len - time;
-}
-
-static size_t fractional_time_len(const char *line, size_t len)
-{
- const char *p;
- size_t n;
-
- /* Expected format: 19:41:17.620000023 */
- if (!len || !isdigit(line[len - 1]))
- return 0;
- p = line + len - 1;
-
- /* Fractional seconds. */
- while (p > line && isdigit(*p))
- p--;
- if (*p != '.')
- return 0;
-
- /* Hours, minutes, and whole seconds. */
- n = short_time_len(line, p - line);
- if (!n)
- return 0;
-
- return line + len - p + n;
-}
-
-static size_t trailing_spaces_len(const char *line, size_t len)
-{
- const char *p;
-
- /* Expected format: ' ' x (1 or more) */
- if (!len || line[len - 1] != ' ')
- return 0;
-
- p = line + len;
- while (p != line) {
- p--;
- if (*p != ' ')
- return line + len - (p + 1);
- }
-
- /* All spaces! */
- return len;
-}
-
-static size_t diff_timestamp_len(const char *line, size_t len)
-{
- const char *end = line + len;
- size_t n;
-
- /*
- * Posix: 2010-07-05 19:41:17
- * GNU: 2010-07-05 19:41:17.620000023 -0500
- */
-
- if (!isdigit(end[-1]))
- return 0;
-
- n = sane_tz_len(line, end - line);
- if (!n)
- n = tz_with_colon_len(line, end - line);
- end -= n;
-
- n = short_time_len(line, end - line);
- if (!n)
- n = fractional_time_len(line, end - line);
- end -= n;
-
- n = date_len(line, end - line);
- if (!n) /* No date. Too bad. */
- return 0;
- end -= n;
-
- if (end == line) /* No space before date. */
- return 0;
- if (end[-1] == '\t') { /* Success! */
- end--;
- return line + len - end;
- }
- if (end[-1] != ' ') /* No space before date. */
- return 0;
-
- /* Whitespace damage. */
- end -= trailing_spaces_len(line, end - line);
- return line + len - end;
-}
-
-static char *find_name_common(struct apply_state *state,
- const char *line,
- const char *def,
- int p_value,
- const char *end,
- int terminate)
-{
- int len;
- const char *start = NULL;
-
- if (p_value == 0)
- start = line;
- while (line != end) {
- char c = *line;
-
- if (!end && isspace(c)) {
- if (c == '\n')
- break;
- if (name_terminate(c, terminate))
- break;
- }
- line++;
- if (c == '/' && !--p_value)
- start = line;
- }
- if (!start)
- return squash_slash(xstrdup_or_null(def));
- len = line - start;
- if (!len)
- return squash_slash(xstrdup_or_null(def));
-
- /*
- * Generally we prefer the shorter name, especially
- * if the other one is just a variation of that with
- * something else tacked on to the end (ie "file.orig"
- * or "file~").
- */
- if (def) {
- int deflen = strlen(def);
- if (deflen < len && !strncmp(start, def, deflen))
- return squash_slash(xstrdup(def));
- }
-
- if (state->root.len) {
- char *ret = xstrfmt("%s%.*s", state->root.buf, len, start);
- return squash_slash(ret);
- }
-
- return squash_slash(xmemdupz(start, len));
-}
-
-static char *find_name(struct apply_state *state,
- const char *line,
- char *def,
- int p_value,
- int terminate)
-{
- if (*line == '"') {
- char *name = find_name_gnu(state, line, def, p_value);
- if (name)
- return name;
- }
-
- return find_name_common(state, line, def, p_value, NULL, terminate);
-}
-
-static char *find_name_traditional(struct apply_state *state,
- const char *line,
- char *def,
- int p_value)
-{
- size_t len;
- size_t date_len;
-
- if (*line == '"') {
- char *name = find_name_gnu(state, line, def, p_value);
- if (name)
- return name;
- }
-
- len = strchrnul(line, '\n') - line;
- date_len = diff_timestamp_len(line, len);
- if (!date_len)
- return find_name_common(state, line, def, p_value, NULL, TERM_TAB);
- len -= date_len;
-
- return find_name_common(state, line, def, p_value, line + len, 0);
-}
-
-static int count_slashes(const char *cp)
-{
- int cnt = 0;
- char ch;
-
- while ((ch = *cp++))
- if (ch == '/')
- cnt++;
- return cnt;
-}
-
-/*
- * Given the string after "--- " or "+++ ", guess the appropriate
- * p_value for the given patch.
- */
-static int guess_p_value(struct apply_state *state, const char *nameline)
-{
- char *name, *cp;
- int val = -1;
-
- if (is_dev_null(nameline))
- return -1;
- name = find_name_traditional(state, nameline, NULL, 0);
- if (!name)
- return -1;
- cp = strchr(name, '/');
- if (!cp)
- val = 0;
- else if (state->prefix) {
- /*
- * Does it begin with "a/$our-prefix" and such? Then this is
- * very likely to apply to our directory.
- */
- if (!strncmp(name, state->prefix, state->prefix_length))
- val = count_slashes(state->prefix);
- else {
- cp++;
- if (!strncmp(cp, state->prefix, state->prefix_length))
- val = count_slashes(state->prefix) + 1;
- }
- }
- free(name);
- return val;
-}
-
-/*
- * Does the ---/+++ line have the POSIX timestamp after the last HT?
- * GNU diff puts epoch there to signal a creation/deletion event. Is
- * this such a timestamp?
- */
-static int has_epoch_timestamp(const char *nameline)
-{
- /*
- * We are only interested in epoch timestamp; any non-zero
- * fraction cannot be one, hence "(\.0+)?" in the regexp below.
- * For the same reason, the date must be either 1969-12-31 or
- * 1970-01-01, and the seconds part must be "00".
- */
- const char stamp_regexp[] =
- "^(1969-12-31|1970-01-01)"
- " "
- "[0-2][0-9]:[0-5][0-9]:00(\\.0+)?"
- " "
- "([-+][0-2][0-9]:?[0-5][0-9])\n";
- const char *timestamp = NULL, *cp, *colon;
- static regex_t *stamp;
- regmatch_t m[10];
- int zoneoffset;
- int hourminute;
- int status;
-
- for (cp = nameline; *cp != '\n'; cp++) {
- if (*cp == '\t')
- timestamp = cp + 1;
- }
- if (!timestamp)
- return 0;
- if (!stamp) {
- stamp = xmalloc(sizeof(*stamp));
- if (regcomp(stamp, stamp_regexp, REG_EXTENDED)) {
- warning(_("Cannot prepare timestamp regexp %s"),
- stamp_regexp);
- return 0;
- }
- }
-
- status = regexec(stamp, timestamp, ARRAY_SIZE(m), m, 0);
- if (status) {
- if (status != REG_NOMATCH)
- warning(_("regexec returned %d for input: %s"),
- status, timestamp);
- return 0;
- }
-
- zoneoffset = strtol(timestamp + m[3].rm_so + 1, (char **) &colon, 10);
- if (*colon == ':')
- zoneoffset = zoneoffset * 60 + strtol(colon + 1, NULL, 10);
- else
- zoneoffset = (zoneoffset / 100) * 60 + (zoneoffset % 100);
- if (timestamp[m[3].rm_so] == '-')
- zoneoffset = -zoneoffset;
-
- /*
- * YYYY-MM-DD hh:mm:ss must be from either 1969-12-31
- * (west of GMT) or 1970-01-01 (east of GMT)
- */
- if ((zoneoffset < 0 && memcmp(timestamp, "1969-12-31", 10)) ||
- (0 <= zoneoffset && memcmp(timestamp, "1970-01-01", 10)))
- return 0;
-
- hourminute = (strtol(timestamp + 11, NULL, 10) * 60 +
- strtol(timestamp + 14, NULL, 10) -
- zoneoffset);
-
- return ((zoneoffset < 0 && hourminute == 1440) ||
- (0 <= zoneoffset && !hourminute));
-}
-
-/*
- * Get the name etc info from the ---/+++ lines of a traditional patch header
- *
- * FIXME! The end-of-filename heuristics are kind of screwy. For existing
- * files, we can happily check the index for a match, but for creating a
- * new file we should try to match whatever "patch" does. I have no idea.
- */
-static void parse_traditional_patch(struct apply_state *state,
- const char *first,
- const char *second,
- struct patch *patch)
-{
- char *name;
-
- first += 4; /* skip "--- " */
- second += 4; /* skip "+++ " */
- if (!state->p_value_known) {
- int p, q;
- p = guess_p_value(state, first);
- q = guess_p_value(state, second);
- if (p < 0) p = q;
- if (0 <= p && p == q) {
- state->p_value = p;
- state->p_value_known = 1;
- }
- }
- if (is_dev_null(first)) {
- patch->is_new = 1;
- patch->is_delete = 0;
- name = find_name_traditional(state, second, NULL, state->p_value);
- patch->new_name = name;
- } else if (is_dev_null(second)) {
- patch->is_new = 0;
- patch->is_delete = 1;
- name = find_name_traditional(state, first, NULL, state->p_value);
- patch->old_name = name;
- } else {
- char *first_name;
- first_name = find_name_traditional(state, first, NULL, state->p_value);
- name = find_name_traditional(state, second, first_name, state->p_value);
- free(first_name);
- if (has_epoch_timestamp(first)) {
- patch->is_new = 1;
- patch->is_delete = 0;
- patch->new_name = name;
- } else if (has_epoch_timestamp(second)) {
- patch->is_new = 0;
- patch->is_delete = 1;
- patch->old_name = name;
- } else {
- patch->old_name = name;
- patch->new_name = xstrdup_or_null(name);
- }
- }
- if (!name)
- die(_("unable to find filename in patch at line %d"), state->linenr);
-}
-
-static int gitdiff_hdrend(struct apply_state *state,
- const char *line,
- struct patch *patch)
-{
- return -1;
-}
-
-/*
- * We're anal about diff header consistency, to make
- * sure that we don't end up having strange ambiguous
- * patches floating around.
- *
- * As a result, gitdiff_{old|new}name() will check
- * their names against any previous information, just
- * to make sure..
- */
-#define DIFF_OLD_NAME 0
-#define DIFF_NEW_NAME 1
-
-static void gitdiff_verify_name(struct apply_state *state,
- const char *line,
- int isnull,
- char **name,
- int side)
-{
- if (!*name && !isnull) {
- *name = find_name(state, line, NULL, state->p_value, TERM_TAB);
- return;
- }
-
- if (*name) {
- int len = strlen(*name);
- char *another;
- if (isnull)
- die(_("git apply: bad git-diff - expected /dev/null, got %s on line %d"),
- *name, state->linenr);
- another = find_name(state, line, NULL, state->p_value, TERM_TAB);
- if (!another || memcmp(another, *name, len + 1))
- die((side == DIFF_NEW_NAME) ?
- _("git apply: bad git-diff - inconsistent new filename on line %d") :
- _("git apply: bad git-diff - inconsistent old filename on line %d"), state->linenr);
- free(another);
- } else {
- /* expect "/dev/null" */
- if (memcmp("/dev/null", line, 9) || line[9] != '\n')
- die(_("git apply: bad git-diff - expected /dev/null on line %d"), state->linenr);
- }
-}
-
-static int gitdiff_oldname(struct apply_state *state,
- const char *line,
- struct patch *patch)
-{
- gitdiff_verify_name(state, line,
- patch->is_new, &patch->old_name,
- DIFF_OLD_NAME);
- return 0;
-}
-
-static int gitdiff_newname(struct apply_state *state,
- const char *line,
- struct patch *patch)
-{
- gitdiff_verify_name(state, line,
- patch->is_delete, &patch->new_name,
- DIFF_NEW_NAME);
- return 0;
-}
-
-static int gitdiff_oldmode(struct apply_state *state,
- const char *line,
- struct patch *patch)
-{
- patch->old_mode = strtoul(line, NULL, 8);
- return 0;
-}
-
-static int gitdiff_newmode(struct apply_state *state,
- const char *line,
- struct patch *patch)
-{
- patch->new_mode = strtoul(line, NULL, 8);
- return 0;
-}
-
-static int gitdiff_delete(struct apply_state *state,
- const char *line,
- struct patch *patch)
-{
- patch->is_delete = 1;
- free(patch->old_name);
- patch->old_name = xstrdup_or_null(patch->def_name);
- return gitdiff_oldmode(state, line, patch);
-}
-
-static int gitdiff_newfile(struct apply_state *state,
- const char *line,
- struct patch *patch)
-{
- patch->is_new = 1;
- free(patch->new_name);
- patch->new_name = xstrdup_or_null(patch->def_name);
- return gitdiff_newmode(state, line, patch);
-}
-
-static int gitdiff_copysrc(struct apply_state *state,
- const char *line,
- struct patch *patch)
-{
- patch->is_copy = 1;
- free(patch->old_name);
- patch->old_name = find_name(state, line, NULL, state->p_value ? state->p_value - 1 : 0, 0);
- return 0;
-}
-
-static int gitdiff_copydst(struct apply_state *state,
- const char *line,
- struct patch *patch)
-{
- patch->is_copy = 1;
- free(patch->new_name);
- patch->new_name = find_name(state, line, NULL, state->p_value ? state->p_value - 1 : 0, 0);
- return 0;
-}
-
-static int gitdiff_renamesrc(struct apply_state *state,
- const char *line,
- struct patch *patch)
-{
- patch->is_rename = 1;
- free(patch->old_name);
- patch->old_name = find_name(state, line, NULL, state->p_value ? state->p_value - 1 : 0, 0);
- return 0;
-}
-
-static int gitdiff_renamedst(struct apply_state *state,
- const char *line,
- struct patch *patch)
-{
- patch->is_rename = 1;
- free(patch->new_name);
- patch->new_name = find_name(state, line, NULL, state->p_value ? state->p_value - 1 : 0, 0);
- return 0;
-}
-
-static int gitdiff_similarity(struct apply_state *state,
- const char *line,
- struct patch *patch)
-{
- unsigned long val = strtoul(line, NULL, 10);
- if (val <= 100)
- patch->score = val;
- return 0;
-}
-
-static int gitdiff_dissimilarity(struct apply_state *state,
- const char *line,
- struct patch *patch)
-{
- unsigned long val = strtoul(line, NULL, 10);
- if (val <= 100)
- patch->score = val;
- return 0;
-}
-
-static int gitdiff_index(struct apply_state *state,
- const char *line,
- struct patch *patch)
-{
- /*
- * index line is N hexadecimal, "..", N hexadecimal,
- * and optional space with octal mode.
- */
- const char *ptr, *eol;
- int len;
-
- ptr = strchr(line, '.');
- if (!ptr || ptr[1] != '.' || 40 < ptr - line)
- return 0;
- len = ptr - line;
- memcpy(patch->old_sha1_prefix, line, len);
- patch->old_sha1_prefix[len] = 0;
-
- line = ptr + 2;
- ptr = strchr(line, ' ');
- eol = strchrnul(line, '\n');
-
- if (!ptr || eol < ptr)
- ptr = eol;
- len = ptr - line;
-
- if (40 < len)
- return 0;
- memcpy(patch->new_sha1_prefix, line, len);
- patch->new_sha1_prefix[len] = 0;
- if (*ptr == ' ')
- patch->old_mode = strtoul(ptr+1, NULL, 8);
- return 0;
-}
-
-/*
- * This is normal for a diff that doesn't change anything: we'll fall through
- * into the next diff. Tell the parser to break out.
- */
-static int gitdiff_unrecognized(struct apply_state *state,
- const char *line,
- struct patch *patch)
-{
- return -1;
-}
-
-/*
- * Skip p_value leading components from "line"; as we do not accept
- * absolute paths, return NULL in that case.
- */
-static const char *skip_tree_prefix(struct apply_state *state,
- const char *line,
- int llen)
-{
- int nslash;
- int i;
-
- if (!state->p_value)
- return (llen && line[0] == '/') ? NULL : line;
-
- nslash = state->p_value;
- for (i = 0; i < llen; i++) {
- int ch = line[i];
- if (ch == '/' && --nslash <= 0)
- return (i == 0) ? NULL : &line[i + 1];
- }
- return NULL;
-}
-
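As a rough standalone illustration (not part of the patch above), the sketch
below mirrors what skip_tree_prefix() does, but on a NUL-terminated path:
drop p_value leading components and reject absolute paths. strip_components()
is a hypothetical helper name used only for this example.

#include <stdio.h>

/* Drop p_value leading path components; reject absolute paths. */
static const char *strip_components(const char *path, int p_value)
{
	int i;

	if (!p_value)
		return (*path == '/') ? NULL : path;
	for (i = 0; path[i]; i++)
		if (path[i] == '/' && --p_value <= 0)
			return (i == 0) ? NULL : path + i + 1;
	return NULL;	/* fewer components than p_value */
}

int main(void)
{
	/* "a/builtin/apply.c" with -p1 becomes "builtin/apply.c" */
	printf("%s\n", strip_components("a/builtin/apply.c", 1));
	/* absolute paths are rejected */
	printf("%s\n", strip_components("/etc/passwd", 1) ? "kept" : "rejected");
	return 0;
}
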
-/*
- * This is to extract the same name that appears on "diff --git"
- * line. We do not find and return anything if it is a rename
- * patch, which is OK because we will find the name elsewhere.
- * We need to reliably find the name only when the patch is a
- * mode change only, or the creation or deletion of an empty
- * file. In any of these cases,
- * both sides are the same name under a/ and b/ respectively.
- */
-static char *git_header_name(struct apply_state *state,
- const char *line,
- int llen)
-{
- const char *name;
- const char *second = NULL;
- size_t len, line_len;
-
- line += strlen("diff --git ");
- llen -= strlen("diff --git ");
-
- if (*line == '"') {
- const char *cp;
- struct strbuf first = STRBUF_INIT;
- struct strbuf sp = STRBUF_INIT;
-
- if (unquote_c_style(&first, line, &second))
- goto free_and_fail1;
-
- /* strip the a/b prefix including trailing slash */
- cp = skip_tree_prefix(state, first.buf, first.len);
- if (!cp)
- goto free_and_fail1;
- strbuf_remove(&first, 0, cp - first.buf);
-
- /*
- * second points at one past closing dq of name.
- * find the second name.
- */
- while ((second < line + llen) && isspace(*second))
- second++;
-
- if (line + llen <= second)
- goto free_and_fail1;
- if (*second == '"') {
- if (unquote_c_style(&sp, second, NULL))
- goto free_and_fail1;
- cp = skip_tree_prefix(state, sp.buf, sp.len);
- if (!cp)
- goto free_and_fail1;
- /* They must match, otherwise ignore */
- if (strcmp(cp, first.buf))
- goto free_and_fail1;
- strbuf_release(&sp);
- return strbuf_detach(&first, NULL);
- }
-
- /* unquoted second */
- cp = skip_tree_prefix(state, second, line + llen - second);
- if (!cp)
- goto free_and_fail1;
- if (line + llen - cp != first.len ||
- memcmp(first.buf, cp, first.len))
- goto free_and_fail1;
- return strbuf_detach(&first, NULL);
-
- free_and_fail1:
- strbuf_release(&first);
- strbuf_release(&sp);
- return NULL;
- }
-
- /* unquoted first name */
- name = skip_tree_prefix(state, line, llen);
- if (!name)
- return NULL;
-
- /*
- * since the first name is unquoted, a double quote, if one
- * exists, must mark the beginning of the second name.
- */
- for (second = name; second < line + llen; second++) {
- if (*second == '"') {
- struct strbuf sp = STRBUF_INIT;
- const char *np;
-
- if (unquote_c_style(&sp, second, NULL))
- goto free_and_fail2;
-
- np = skip_tree_prefix(state, sp.buf, sp.len);
- if (!np)
- goto free_and_fail2;
-
- len = sp.buf + sp.len - np;
- if (len < second - name &&
- !strncmp(np, name, len) &&
- isspace(name[len])) {
- /* Good */
- strbuf_remove(&sp, 0, np - sp.buf);
- return strbuf_detach(&sp, NULL);
- }
-
- free_and_fail2:
- strbuf_release(&sp);
- return NULL;
- }
- }
-
- /*
- * Accept a name only if it shows up twice, in exactly the
- * same form.
- */
- second = strchr(name, '\n');
- if (!second)
- return NULL;
- line_len = second - name;
- for (len = 0 ; ; len++) {
- switch (name[len]) {
- default:
- continue;
- case '\n':
- return NULL;
- case '\t': case ' ':
- /*
- * Is this the separator between the preimage
- * and the postimage pathname? Again, we are
- * only interested in the case where there is
- * no rename, as this is only to set def_name
- * and a rename patch has the names elsewhere
- * in an unambiguous form.
- */
- if (!name[len + 1])
- return NULL; /* no postimage name */
- second = skip_tree_prefix(state, name + len + 1,
- line_len - (len + 1));
- if (!second)
- return NULL;
- /*
- * Does len bytes starting at "name" and "second"
- * (that are separated by one HT or SP we just
- * found) exactly match?
- */
- if (second[len] == '\n' && !strncmp(name, second, len))
- return xmemdupz(name, len);
- }
- }
-}
-
-/* Verify that we recognize the lines following a git header */
-static int parse_git_header(struct apply_state *state,
- const char *line,
- int len,
- unsigned int size,
- struct patch *patch)
-{
- unsigned long offset;
-
- /* A git diff has explicit new/delete information, so we don't guess */
- patch->is_new = 0;
- patch->is_delete = 0;
-
- /*
- * Some things may not have the old name in the
- * rest of the headers anywhere (pure mode changes,
- * or removing or adding empty files), so we get
- * the default name from the header.
- */
- patch->def_name = git_header_name(state, line, len);
- if (patch->def_name && state->root.len) {
- char *s = xstrfmt("%s%s", state->root.buf, patch->def_name);
- free(patch->def_name);
- patch->def_name = s;
- }
-
- line += len;
- size -= len;
- state->linenr++;
- for (offset = len ; size > 0 ; offset += len, size -= len, line += len, state->linenr++) {
- static const struct opentry {
- const char *str;
- int (*fn)(struct apply_state *, const char *, struct patch *);
- } optable[] = {
- { "@@ -", gitdiff_hdrend },
- { "--- ", gitdiff_oldname },
- { "+++ ", gitdiff_newname },
- { "old mode ", gitdiff_oldmode },
- { "new mode ", gitdiff_newmode },
- { "deleted file mode ", gitdiff_delete },
- { "new file mode ", gitdiff_newfile },
- { "copy from ", gitdiff_copysrc },
- { "copy to ", gitdiff_copydst },
- { "rename old ", gitdiff_renamesrc },
- { "rename new ", gitdiff_renamedst },
- { "rename from ", gitdiff_renamesrc },
- { "rename to ", gitdiff_renamedst },
- { "similarity index ", gitdiff_similarity },
- { "dissimilarity index ", gitdiff_dissimilarity },
- { "index ", gitdiff_index },
- { "", gitdiff_unrecognized },
- };
- int i;
-
- len = linelen(line, size);
- if (!len || line[len-1] != '\n')
- break;
- for (i = 0; i < ARRAY_SIZE(optable); i++) {
- const struct opentry *p = optable + i;
- int oplen = strlen(p->str);
- if (len < oplen || memcmp(p->str, line, oplen))
- continue;
- if (p->fn(state, line + oplen, patch) < 0)
- return offset;
- break;
- }
- }
-
- return offset;
-}
-
-static int parse_num(const char *line, unsigned long *p)
-{
- char *ptr;
-
- if (!isdigit(*line))
- return 0;
- *p = strtoul(line, &ptr, 10);
- return ptr - line;
-}
-
-static int parse_range(const char *line, int len, int offset, const char *expect,
- unsigned long *p1, unsigned long *p2)
-{
- int digits, ex;
-
- if (offset < 0 || offset >= len)
- return -1;
- line += offset;
- len -= offset;
-
- digits = parse_num(line, p1);
- if (!digits)
- return -1;
-
- offset += digits;
- line += digits;
- len -= digits;
-
- *p2 = 1;
- if (*line == ',') {
- digits = parse_num(line+1, p2);
- if (!digits)
- return -1;
-
- offset += digits+1;
- line += digits+1;
- len -= digits+1;
- }
-
- ex = strlen(expect);
- if (ex > len)
- return -1;
- if (memcmp(line, expect, ex))
- return -1;
-
- return offset + ex;
-}
-
-static void recount_diff(const char *line, int size, struct fragment *fragment)
-{
- int oldlines = 0, newlines = 0, ret = 0;
-
- if (size < 1) {
- warning("recount: ignore empty hunk");
- return;
- }
-
- for (;;) {
- int len = linelen(line, size);
- size -= len;
- line += len;
-
- if (size < 1)
- break;
-
- switch (*line) {
- case ' ': case '\n':
- newlines++;
- /* fall through */
- case '-':
- oldlines++;
- continue;
- case '+':
- newlines++;
- continue;
- case '\\':
- continue;
- case '@':
- ret = size < 3 || !starts_with(line, "@@ ");
- break;
- case 'd':
- ret = size < 5 || !starts_with(line, "diff ");
- break;
- default:
- ret = -1;
- break;
- }
- if (ret) {
- warning(_("recount: unexpected line: %.*s"),
- (int)linelen(line, size), line);
- return;
- }
- break;
- }
- fragment->oldlines = oldlines;
- fragment->newlines = newlines;
-}
-
-/*
- * Parse a unified diff fragment header of the
- * form "@@ -a,b +c,d @@"
- */
-static int parse_fragment_header(const char *line, int len, struct fragment *fragment)
-{
- int offset;
-
- if (!len || line[len-1] != '\n')
- return -1;
-
- /* Figure out the number of lines in a fragment */
- offset = parse_range(line, len, 4, " +", &fragment->oldpos, &fragment->oldlines);
- offset = parse_range(line, len, offset, " @@", &fragment->newpos, &fragment->newlines);
-
- return offset;
-}
-
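The "@@ -a,b +c,d @@" format handled by parse_num(), parse_range() and
parse_fragment_header() above can be shown with a small standalone program.
This is only a sketch of the format, not the parser being removed here;
parse_hunk_header() is a hypothetical name, and the ",b" and ",d" counts
default to 1 when omitted (as in "@@ -0,0 +1 @@").

#include <stdio.h>
#include <stdlib.h>

/* Parse "@@ -<oldpos>[,<oldlines>] +<newpos>[,<newlines>] @@". */
static int parse_hunk_header(const char *line,
			     unsigned long *oldpos, unsigned long *oldlines,
			     unsigned long *newpos, unsigned long *newlines)
{
	char *end;

	if (line[0] != '@' || line[1] != '@' || line[2] != ' ' || line[3] != '-')
		return -1;
	*oldpos = strtoul(line + 4, &end, 10);
	*oldlines = (*end == ',') ? strtoul(end + 1, &end, 10) : 1;
	if (end[0] != ' ' || end[1] != '+')
		return -1;
	*newpos = strtoul(end + 2, &end, 10);
	*newlines = (*end == ',') ? strtoul(end + 1, &end, 10) : 1;
	return (end[0] == ' ' && end[1] == '@' && end[2] == '@') ? 0 : -1;
}

int main(void)
{
	unsigned long op, ol, np, nl;

	if (!parse_hunk_header("@@ -10,6 +10,8 @@ context\n", &op, &ol, &np, &nl))
		printf("old %lu,%lu new %lu,%lu\n", op, ol, np, nl);	/* old 10,6 new 10,8 */
	return 0;
}
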
-static int find_header(struct apply_state *state,
- const char *line,
- unsigned long size,
- int *hdrsize,
- struct patch *patch)
-{
- unsigned long offset, len;
-
- patch->is_toplevel_relative = 0;
- patch->is_rename = patch->is_copy = 0;
- patch->is_new = patch->is_delete = -1;
- patch->old_mode = patch->new_mode = 0;
- patch->old_name = patch->new_name = NULL;
- for (offset = 0; size > 0; offset += len, size -= len, line += len, state->linenr++) {
- unsigned long nextlen;
-
- len = linelen(line, size);
- if (!len)
- break;
-
- /* Testing this early allows us to take a few shortcuts.. */
- if (len < 6)
- continue;
-
- /*
- * Make sure we don't find any unconnected patch fragments.
- * That's a sign that we didn't find a header, and that a
- * patch has become corrupted/broken up.
- */
- if (!memcmp("@@ -", line, 4)) {
- struct fragment dummy;
- if (parse_fragment_header(line, len, &dummy) < 0)
- continue;
- die(_("patch fragment without header at line %d: %.*s"),
- state->linenr, (int)len-1, line);
- }
-
- if (size < len + 6)
- break;
-
- /*
- * Git patch? It might not have a real patch, just a rename
- * or mode change, so we handle that specially
- */
- if (!memcmp("diff --git ", line, 11)) {
- int git_hdr_len = parse_git_header(state, line, len, size, patch);
- if (git_hdr_len <= len)
- continue;
- if (!patch->old_name && !patch->new_name) {
- if (!patch->def_name)
- die(Q_("git diff header lacks filename information when removing "
- "%d leading pathname component (line %d)",
- "git diff header lacks filename information when removing "
- "%d leading pathname components (line %d)",
- state->p_value),
- state->p_value, state->linenr);
- patch->old_name = xstrdup(patch->def_name);
- patch->new_name = xstrdup(patch->def_name);
- }
- if (!patch->is_delete && !patch->new_name)
- die("git diff header lacks filename information "
- "(line %d)", state->linenr);
- patch->is_toplevel_relative = 1;
- *hdrsize = git_hdr_len;
- return offset;
- }
-
- /* --- followed by +++ ? */
- if (memcmp("--- ", line, 4) || memcmp("+++ ", line + len, 4))
- continue;
-
- /*
- * We only accept unified patches, so we want it to
- * at least have "@@ -a,b +c,d @@\n", which is 14 chars
- * minimum ("@@ -0,0 +1 @@\n" is the shortest).
- */
- nextlen = linelen(line + len, size - len);
- if (size < nextlen + 14 || memcmp("@@ -", line + len + nextlen, 4))
- continue;
-
- /* Ok, we'll consider it a patch */
- parse_traditional_patch(state, line, line+len, patch);
- *hdrsize = len + nextlen;
- state->linenr += 2;
- return offset;
- }
- return -1;
-}
-
-static void record_ws_error(struct apply_state *state,
- unsigned result,
- const char *line,
- int len,
- int linenr)
-{
- char *err;
-
- if (!result)
- return;
-
- state->whitespace_error++;
- if (state->squelch_whitespace_errors &&
- state->squelch_whitespace_errors < state->whitespace_error)
- return;
-
- err = whitespace_error_string(result);
- fprintf(stderr, "%s:%d: %s.\n%.*s\n",
- state->patch_input_file, linenr, err, len, line);
- free(err);
-}
-
-static void check_whitespace(struct apply_state *state,
- const char *line,
- int len,
- unsigned ws_rule)
-{
- unsigned result = ws_check(line + 1, len - 1, ws_rule);
-
- record_ws_error(state, result, line + 1, len - 2, state->linenr);
-}
-
-/*
- * Parse a unified diff. Note that this really needs to parse each
- * fragment separately, since the only way to know the difference
- * between a "---" that is part of a patch, and a "---" that starts
- * the next patch is to look at the line counts..
- */
-static int parse_fragment(struct apply_state *state,
- const char *line,
- unsigned long size,
- struct patch *patch,
- struct fragment *fragment)
-{
- int added, deleted;
- int len = linelen(line, size), offset;
- unsigned long oldlines, newlines;
- unsigned long leading, trailing;
-
- offset = parse_fragment_header(line, len, fragment);
- if (offset < 0)
- return -1;
- if (offset > 0 && patch->recount)
- recount_diff(line + offset, size - offset, fragment);
- oldlines = fragment->oldlines;
- newlines = fragment->newlines;
- leading = 0;
- trailing = 0;
-
- /* Parse the thing.. */
- line += len;
- size -= len;
- state->linenr++;
- added = deleted = 0;
- for (offset = len;
- 0 < size;
- offset += len, size -= len, line += len, state->linenr++) {
- if (!oldlines && !newlines)
- break;
- len = linelen(line, size);
- if (!len || line[len-1] != '\n')
- return -1;
- switch (*line) {
- default:
- return -1;
- case '\n': /* newer GNU diff, an empty context line */
- case ' ':
- oldlines--;
- newlines--;
- if (!deleted && !added)
- leading++;
- trailing++;
- if (!state->apply_in_reverse &&
- state->ws_error_action == correct_ws_error)
- check_whitespace(state, line, len, patch->ws_rule);
- break;
- case '-':
- if (state->apply_in_reverse &&
- state->ws_error_action != nowarn_ws_error)
- check_whitespace(state, line, len, patch->ws_rule);
- deleted++;
- oldlines--;
- trailing = 0;
- break;
- case '+':
- if (!state->apply_in_reverse &&
- state->ws_error_action != nowarn_ws_error)
- check_whitespace(state, line, len, patch->ws_rule);
- added++;
- newlines--;
- trailing = 0;
- break;
-
- /*
- * We allow "\ No newline at end of file". Depending
- * on locale settings when the patch was produced we
- * don't know what this line looks like. The only
- * thing we do know is that it begins with "\ ".
- * Checking for 12 is just a sanity check -- any
- * l10n of "\ No newline..." is at least that long.
- */
- case '\\':
- if (len < 12 || memcmp(line, "\\ ", 2))
- return -1;
- break;
- }
- }
- if (oldlines || newlines)
- return -1;
- if (!deleted && !added)
- return -1;
-
- fragment->leading = leading;
- fragment->trailing = trailing;
-
- /*
- * If a fragment ends with an incomplete line, we failed to include
- * it in the above loop because we hit oldlines == newlines == 0
- * before seeing it.
- */
- if (12 < size && !memcmp(line, "\\ ", 2))
- offset += linelen(line, size);
-
- patch->lines_added += added;
- patch->lines_deleted += deleted;
-
- if (0 < patch->is_new && oldlines)
- return error(_("new file depends on old contents"));
- if (0 < patch->is_delete && newlines)
- return error(_("deleted file still has contents"));
- return offset;
-}
-
-/*
- * We have seen "diff --git a/... b/..." header (or a traditional patch
- * header). Read hunks that belong to this patch into fragments and hang
- * them to the given patch structure.
- *
- * The (fragment->patch, fragment->size) pair points into the memory given
- * by the caller, not a copy, when we return.
- */
-static int parse_single_patch(struct apply_state *state,
- const char *line,
- unsigned long size,
- struct patch *patch)
-{
- unsigned long offset = 0;
- unsigned long oldlines = 0, newlines = 0, context = 0;
- struct fragment **fragp = &patch->fragments;
-
- while (size > 4 && !memcmp(line, "@@ -", 4)) {
- struct fragment *fragment;
- int len;
-
- fragment = xcalloc(1, sizeof(*fragment));
- fragment->linenr = state->linenr;
- len = parse_fragment(state, line, size, patch, fragment);
- if (len <= 0)
- die(_("corrupt patch at line %d"), state->linenr);
- fragment->patch = line;
- fragment->size = len;
- oldlines += fragment->oldlines;
- newlines += fragment->newlines;
- context += fragment->leading + fragment->trailing;
-
- *fragp = fragment;
- fragp = &fragment->next;
-
- offset += len;
- line += len;
- size -= len;
- }
-
- /*
- * If something was removed (i.e. we have old-lines) it cannot
- * be creation, and if something was added it cannot be
- * deletion. However, the reverse is not true; --unified=0
- * patches that only add are not necessarily creation even
- * though they do not have any old lines, and ones that only
- * delete are not necessarily deletion.
- *
- * Unfortunately, a real creation/deletion patch does _not_ have
- * any context line by definition, so we cannot safely tell it
- * apart with --unified=0 insanity. At least if the patch has
- * more than one hunk it is not creation or deletion.
- */
- if (patch->is_new < 0 &&
- (oldlines || (patch->fragments && patch->fragments->next)))
- patch->is_new = 0;
- if (patch->is_delete < 0 &&
- (newlines || (patch->fragments && patch->fragments->next)))
- patch->is_delete = 0;
-
- if (0 < patch->is_new && oldlines)
- die(_("new file %s depends on old contents"), patch->new_name);
- if (0 < patch->is_delete && newlines)
- die(_("deleted file %s still has contents"), patch->old_name);
- if (!patch->is_delete && !newlines && context)
- fprintf_ln(stderr,
- _("** warning: "
- "file %s becomes empty but is not deleted"),
- patch->new_name);
-
- return offset;
-}
-
-static inline int metadata_changes(struct patch *patch)
-{
- return patch->is_rename > 0 ||
- patch->is_copy > 0 ||
- patch->is_new > 0 ||
- patch->is_delete ||
- (patch->old_mode && patch->new_mode &&
- patch->old_mode != patch->new_mode);
-}
-
-static char *inflate_it(const void *data, unsigned long size,
- unsigned long inflated_size)
-{
- git_zstream stream;
- void *out;
- int st;
-
- memset(&stream, 0, sizeof(stream));
-
- stream.next_in = (unsigned char *)data;
- stream.avail_in = size;
- stream.next_out = out = xmalloc(inflated_size);
- stream.avail_out = inflated_size;
- git_inflate_init(&stream);
- st = git_inflate(&stream, Z_FINISH);
- git_inflate_end(&stream);
- if ((st != Z_STREAM_END) || stream.total_out != inflated_size) {
- free(out);
- return NULL;
- }
- return out;
-}
-
-/*
- * Read a binary hunk and return a new fragment; fragment->patch
- * points at an allocated memory that the caller must free, so
- * it is marked as "->free_patch = 1".
- */
-static struct fragment *parse_binary_hunk(struct apply_state *state,
- char **buf_p,
- unsigned long *sz_p,
- int *status_p,
- int *used_p)
-{
- /*
- * Expect a line that begins with binary patch method ("literal"
- * or "delta"), followed by the length of data before deflating.
- * A sequence of 'length-byte' followed by base-85 encoded data
- * should follow, terminated by a newline.
- *
- * Each 5-byte sequence of base-85 encodes up to 4 bytes,
- * and we would limit the patch line to 66 characters,
- * so one line can fit up to 13 groups that would decode
- * to 52 bytes max. The length byte 'A'-'Z' corresponds
- * to 1-26 bytes, and 'a'-'z' corresponds to 27-52 bytes.
- */
- int llen, used;
- unsigned long size = *sz_p;
- char *buffer = *buf_p;
- int patch_method;
- unsigned long origlen;
- char *data = NULL;
- int hunk_size = 0;
- struct fragment *frag;
-
- llen = linelen(buffer, size);
- used = llen;
-
- *status_p = 0;
-
- if (starts_with(buffer, "delta ")) {
- patch_method = BINARY_DELTA_DEFLATED;
- origlen = strtoul(buffer + 6, NULL, 10);
- }
- else if (starts_with(buffer, "literal ")) {
- patch_method = BINARY_LITERAL_DEFLATED;
- origlen = strtoul(buffer + 8, NULL, 10);
- }
- else
- return NULL;
-
- state->linenr++;
- buffer += llen;
- while (1) {
- int byte_length, max_byte_length, newsize;
- llen = linelen(buffer, size);
- used += llen;
- state->linenr++;
- if (llen == 1) {
- /* consume the blank line */
- buffer++;
- size--;
- break;
- }
- /*
- * Minimum line is "A00000\n", which is 7 bytes long,
- * and the line length must be a multiple of 5 plus 2.
- */
- if ((llen < 7) || (llen-2) % 5)
- goto corrupt;
- max_byte_length = (llen - 2) / 5 * 4;
- byte_length = *buffer;
- if ('A' <= byte_length && byte_length <= 'Z')
- byte_length = byte_length - 'A' + 1;
- else if ('a' <= byte_length && byte_length <= 'z')
- byte_length = byte_length - 'a' + 27;
- else
- goto corrupt;
- /* if the input length was not a multiple of 4, we would
- * have filler at the end but the filler should never
- * exceed 3 bytes
- */
- if (max_byte_length < byte_length ||
- byte_length <= max_byte_length - 4)
- goto corrupt;
- newsize = hunk_size + byte_length;
- data = xrealloc(data, newsize);
- if (decode_85(data + hunk_size, buffer + 1, byte_length))
- goto corrupt;
- hunk_size = newsize;
- buffer += llen;
- size -= llen;
- }
-
- frag = xcalloc(1, sizeof(*frag));
- frag->patch = inflate_it(data, hunk_size, origlen);
- frag->free_patch = 1;
- if (!frag->patch)
- goto corrupt;
- free(data);
- frag->size = origlen;
- *buf_p = buffer;
- *sz_p = size;
- *used_p = used;
- frag->binary_patch_method = patch_method;
- return frag;
-
- corrupt:
- free(data);
- *status_p = -1;
- error(_("corrupt binary patch at line %d: %.*s"),
- state->linenr-1, llen-1, buffer);
- return NULL;
-}
-
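The length-byte encoding described in the comment above ('A'-'Z' for 1-26
payload bytes, 'a'-'z' for 27-52) is small enough to illustrate standalone.
The snippet below only shows that mapping, not git's decoder;
decode_length_byte() is a hypothetical name.

#include <stdio.h>

/* Map the leading length byte of a binary-patch line to a byte count. */
static int decode_length_byte(char c)
{
	if ('A' <= c && c <= 'Z')
		return c - 'A' + 1;	/* 1..26 bytes */
	if ('a' <= c && c <= 'z')
		return c - 'a' + 27;	/* 27..52 bytes */
	return -1;			/* anything else is corrupt */
}

int main(void)
{
	/* A full 66-column line, "z" plus 13 groups of 5, carries 52 bytes. */
	printf("%d %d %d\n",
	       decode_length_byte('A'),		/* 1 */
	       decode_length_byte('M'),		/* 13 */
	       decode_length_byte('z'));	/* 52 */
	return 0;
}
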
-/*
- * Returns:
- * -1 in case of error,
- * the length of the parsed binary patch otherwise
- */
-static int parse_binary(struct apply_state *state,
- char *buffer,
- unsigned long size,
- struct patch *patch)
-{
- /*
- * We have read "GIT binary patch\n"; what follows is a line
- * that says the patch method (currently, either "literal" or
- * "delta") and the length of data before deflating; a
- * sequence of 'length-byte' followed by base-85 encoded data
- * follows.
- *
- * When a binary patch is reversible, there is another binary
- * hunk in the same format, starting with patch method (either
- * "literal" or "delta") with the length of data, and a sequence
- * of length-byte + base-85 encoded data, terminated with another
- * empty line. This data, when applied to the postimage, produces
- * the preimage.
- */
- struct fragment *forward;
- struct fragment *reverse;
- int status;
- int used, used_1;
-
- forward = parse_binary_hunk(state, &buffer, &size, &status, &used);
- if (!forward && !status)
- /* there has to be one hunk (forward hunk) */
- return error(_("unrecognized binary patch at line %d"), state->linenr-1);
- if (status)
- /* otherwise we already gave an error message */
- return status;
-
- reverse = parse_binary_hunk(state, &buffer, &size, &status, &used_1);
- if (reverse)
- used += used_1;
- else if (status) {
- /*
- * Not having a reverse hunk is not an error, but having
- * a corrupt reverse hunk is.
- */
- free((void*) forward->patch);
- free(forward);
- return status;
- }
- forward->next = reverse;
- patch->fragments = forward;
- patch->is_binary = 1;
- return used;
-}
-
-static void prefix_one(struct apply_state *state, char **name)
-{
- char *old_name = *name;
- if (!old_name)
- return;
- *name = xstrdup(prefix_filename(state->prefix, state->prefix_length, *name));
- free(old_name);
-}
-
-static void prefix_patch(struct apply_state *state, struct patch *p)
-{
- if (!state->prefix || p->is_toplevel_relative)
- return;
- prefix_one(state, &p->new_name);
- prefix_one(state, &p->old_name);
-}
-
-/*
- * include/exclude
- */
-
-static void add_name_limit(struct apply_state *state,
- const char *name,
- int exclude)
-{
- struct string_list_item *it;
-
- it = string_list_append(&state->limit_by_name, name);
- it->util = exclude ? NULL : (void *) 1;
-}
-
-static int use_patch(struct apply_state *state, struct patch *p)
-{
- const char *pathname = p->new_name ? p->new_name : p->old_name;
- int i;
-
- /* Paths outside are not touched regardless of "--include" */
- if (0 < state->prefix_length) {
- int pathlen = strlen(pathname);
- if (pathlen <= state->prefix_length ||
- memcmp(state->prefix, pathname, state->prefix_length))
- return 0;
- }
-
- /* See if it matches any of exclude/include rule */
- for (i = 0; i < state->limit_by_name.nr; i++) {
- struct string_list_item *it = &state->limit_by_name.items[i];
- if (!wildmatch(it->string, pathname, 0, NULL))
- return (it->util != NULL);
- }
-
- /*
- * If we had any include, a path that does not match any rule is
- * not used. Otherwise, we saw a bunch of exclude rules (or none)
- * and such a path is used.
- */
- return !state->has_include;
-}
-
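The include/exclude semantics of use_patch() above (the first matching rule
wins, and a path that matches no rule is kept only if no --include was given)
can be sketched standalone. The example below substitutes POSIX fnmatch(3)
for git's wildmatch(), which is an assumption of the illustration;
path_is_used() and struct rule are hypothetical names.

#include <fnmatch.h>
#include <stdio.h>

struct rule {
	const char *pattern;
	int include;	/* 1 = --include, 0 = --exclude */
};

/* First matching rule decides; otherwise keep the path only if no
 * --include pattern was given at all. */
static int path_is_used(const char *path, const struct rule *rules,
			int nr, int has_include)
{
	int i;

	for (i = 0; i < nr; i++)
		if (!fnmatch(rules[i].pattern, path, 0))
			return rules[i].include;
	return !has_include;
}

int main(void)
{
	struct rule rules[] = { { "*.c", 1 }, { "t/*", 0 } };

	printf("%d\n", path_is_used("apply.c", rules, 2, 1));		/* 1: matches an include */
	printf("%d\n", path_is_used("t/t4100.sh", rules, 2, 1));	/* 0: matches an exclude */
	printf("%d\n", path_is_used("README", rules, 2, 1));		/* 0: an --include was given */
	return 0;
}
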
-
-/*
- * Read the patch text in "buffer" that extends for "size" bytes; stop
- * reading after seeing a single patch (i.e. changes to a single file).
- * Create fragments (i.e. patch hunks) and hang them to the given patch.
- * Return the number of bytes consumed, so that the caller can call us
- * again for the next patch.
- */
-static int parse_chunk(struct apply_state *state, char *buffer, unsigned long size, struct patch *patch)
-{
- int hdrsize, patchsize;
- int offset = find_header(state, buffer, size, &hdrsize, patch);
-
- if (offset < 0)
- return offset;
-
- prefix_patch(state, patch);
-
- if (!use_patch(state, patch))
- patch->ws_rule = 0;
- else
- patch->ws_rule = whitespace_rule(patch->new_name
- ? patch->new_name
- : patch->old_name);
-
- patchsize = parse_single_patch(state,
- buffer + offset + hdrsize,
- size - offset - hdrsize,
- patch);
-
- if (!patchsize) {
- static const char git_binary[] = "GIT binary patch\n";
- int hd = hdrsize + offset;
- unsigned long llen = linelen(buffer + hd, size - hd);
-
- if (llen == sizeof(git_binary) - 1 &&
- !memcmp(git_binary, buffer + hd, llen)) {
- int used;
- state->linenr++;
- used = parse_binary(state, buffer + hd + llen,
- size - hd - llen, patch);
- if (used < 0)
- return -1;
- if (used)
- patchsize = used + llen;
- else
- patchsize = 0;
- }
- else if (!memcmp(" differ\n", buffer + hd + llen - 8, 8)) {
- static const char *binhdr[] = {
- "Binary files ",
- "Files ",
- NULL,
- };
- int i;
- for (i = 0; binhdr[i]; i++) {
- int len = strlen(binhdr[i]);
- if (len < size - hd &&
- !memcmp(binhdr[i], buffer + hd, len)) {
- state->linenr++;
- patch->is_binary = 1;
- patchsize = llen;
- break;
- }
- }
- }
-
- /* Empty patch cannot be applied if it is a text patch
- * without metadata change. A binary patch appears
- * empty to us here.
- */
- if ((state->apply || state->check) &&
- (!patch->is_binary && !metadata_changes(patch)))
- die(_("patch with only garbage at line %d"), state->linenr);
- }
-
- return offset + hdrsize + patchsize;
-}
-
-#define swap(a,b) myswap((a),(b),sizeof(a))
-
-#define myswap(a, b, size) do { \
- unsigned char mytmp[size]; \
- memcpy(mytmp, &a, size); \
- memcpy(&a, &b, size); \
- memcpy(&b, mytmp, size); \
-} while (0)
-
-static void reverse_patches(struct patch *p)
-{
- for (; p; p = p->next) {
- struct fragment *frag = p->fragments;
-
- swap(p->new_name, p->old_name);
- swap(p->new_mode, p->old_mode);
- swap(p->is_new, p->is_delete);
- swap(p->lines_added, p->lines_deleted);
- swap(p->old_sha1_prefix, p->new_sha1_prefix);
-
- for (; frag; frag = frag->next) {
- swap(frag->newpos, frag->oldpos);
- swap(frag->newlines, frag->oldlines);
- }
- }
-}
-
-static const char pluses[] =
-"++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++";
-static const char minuses[] =
-"----------------------------------------------------------------------";
-
-static void show_stats(struct apply_state *state, struct patch *patch)
-{
- struct strbuf qname = STRBUF_INIT;
- char *cp = patch->new_name ? patch->new_name : patch->old_name;
- int max, add, del;
-
- quote_c_style(cp, &qname, NULL, 0);
-
- /*
- * "scale" the filename
- */
- max = state->max_len;
- if (max > 50)
- max = 50;
-
- if (qname.len > max) {
- cp = strchr(qname.buf + qname.len + 3 - max, '/');
- if (!cp)
- cp = qname.buf + qname.len + 3 - max;
- strbuf_splice(&qname, 0, cp - qname.buf, "...", 3);
- }
-
- if (patch->is_binary) {
- printf(" %-*s | Bin\n", max, qname.buf);
- strbuf_release(&qname);
- return;
- }
-
- printf(" %-*s |", max, qname.buf);
- strbuf_release(&qname);
-
- /*
- * scale the add/delete
- */
- max = max + state->max_change > 70 ? 70 - max : state->max_change;
- add = patch->lines_added;
- del = patch->lines_deleted;
-
- if (state->max_change > 0) {
- int total = ((add + del) * max + state->max_change / 2) / state->max_change;
- add = (add * max + state->max_change / 2) / state->max_change;
- del = total - add;
- }
- printf("%5d %.*s%.*s\n", patch->lines_added + patch->lines_deleted,
- add, pluses, del, minuses);
-}
-
-static int read_old_data(struct stat *st, const char *path, struct strbuf *buf)
-{
- switch (st->st_mode & S_IFMT) {
- case S_IFLNK:
- if (strbuf_readlink(buf, path, st->st_size) < 0)
- return error(_("unable to read symlink %s"), path);
- return 0;
- case S_IFREG:
- if (strbuf_read_file(buf, path, st->st_size) != st->st_size)
- return error(_("unable to open or read %s"), path);
- convert_to_git(path, buf->buf, buf->len, buf, 0);
- return 0;
- default:
- return -1;
- }
-}
-
-/*
- * Update the preimage, and the common lines in postimage,
- * from buffer buf of length len. If postlen is 0 the postimage
- * is updated in place, otherwise it's updated in a new buffer
- * of length postlen.
- */
-
-static void update_pre_post_images(struct image *preimage,
- struct image *postimage,
- char *buf,
- size_t len, size_t postlen)
-{
- int i, ctx, reduced;
- char *new, *old, *fixed;
- struct image fixed_preimage;
-
- /*
- * Update the preimage with whitespace fixes. Note that we
- * are not losing preimage->buf -- apply_one_fragment() will
- * free "oldlines".
- */
- prepare_image(&fixed_preimage, buf, len, 1);
- assert(postlen
- ? fixed_preimage.nr == preimage->nr
- : fixed_preimage.nr <= preimage->nr);
- for (i = 0; i < fixed_preimage.nr; i++)
- fixed_preimage.line[i].flag = preimage->line[i].flag;
- free(preimage->line_allocated);
- *preimage = fixed_preimage;
-
- /*
- * Adjust the common context lines in postimage. This can be
- * done in-place when we are shrinking it with whitespace
- * fixing, but needs a new buffer when ignoring whitespace or
- * expanding leading tabs to spaces.
- *
- * We trust the caller to tell us if the update can be done
- * in place (postlen==0) or not.
- */
- old = postimage->buf;
- if (postlen)
- new = postimage->buf = xmalloc(postlen);
- else
- new = old;
- fixed = preimage->buf;
-
- for (i = reduced = ctx = 0; i < postimage->nr; i++) {
- size_t l_len = postimage->line[i].len;
- if (!(postimage->line[i].flag & LINE_COMMON)) {
- /* an added line -- no counterparts in preimage */
- memmove(new, old, l_len);
- old += l_len;
- new += l_len;
- continue;
- }
-
- /* a common context -- skip it in the original postimage */
- old += l_len;
-
- /* and find the corresponding one in the fixed preimage */
- while (ctx < preimage->nr &&
- !(preimage->line[ctx].flag & LINE_COMMON)) {
- fixed += preimage->line[ctx].len;
- ctx++;
- }
-
- /*
- * The preimage is expected to run out if the caller
- * fixed the addition of trailing blank lines.
- */
- if (preimage->nr <= ctx) {
- reduced++;
- continue;
- }
-
- /* and copy it in, while fixing the line length */
- l_len = preimage->line[ctx].len;
- memcpy(new, fixed, l_len);
- new += l_len;
- fixed += l_len;
- postimage->line[i].len = l_len;
- ctx++;
- }
-
- if (postlen
- ? postlen < new - postimage->buf
- : postimage->len < new - postimage->buf)
- die("BUG: caller miscounted postlen: asked %d, orig = %d, used = %d",
- (int)postlen, (int) postimage->len, (int)(new - postimage->buf));
-
- /* Fix the length of the whole thing */
- postimage->len = new - postimage->buf;
- postimage->nr -= reduced;
-}
-
-static int line_by_line_fuzzy_match(struct image *img,
- struct image *preimage,
- struct image *postimage,
- unsigned long try,
- int try_lno,
- int preimage_limit)
-{
- int i;
- size_t imgoff = 0;
- size_t preoff = 0;
- size_t postlen = postimage->len;
- size_t extra_chars;
- char *buf;
- char *preimage_eof;
- char *preimage_end;
- struct strbuf fixed;
- char *fixed_buf;
- size_t fixed_len;
-
- for (i = 0; i < preimage_limit; i++) {
- size_t prelen = preimage->line[i].len;
- size_t imglen = img->line[try_lno+i].len;
-
- if (!fuzzy_matchlines(img->buf + try + imgoff, imglen,
- preimage->buf + preoff, prelen))
- return 0;
- if (preimage->line[i].flag & LINE_COMMON)
- postlen += imglen - prelen;
- imgoff += imglen;
- preoff += prelen;
- }
-
- /*
- * Ok, the preimage matches with whitespace fuzz.
- *
- * imgoff now holds the true length of the target that
- * matches the preimage before the end of the file.
- *
- * Count the number of characters in the preimage that fall
- * beyond the end of the file and make sure that all of them
- * are whitespace characters. (This can only happen if
- * we are removing blank lines at the end of the file.)
- */
- buf = preimage_eof = preimage->buf + preoff;
- for ( ; i < preimage->nr; i++)
- preoff += preimage->line[i].len;
- preimage_end = preimage->buf + preoff;
- for ( ; buf < preimage_end; buf++)
- if (!isspace(*buf))
- return 0;
-
- /*
- * Update the preimage and the common postimage context
- * lines to use the same whitespace as the target.
- * If whitespace is missing in the target (i.e.
- * if the preimage extends beyond the end of the file),
- * use the whitespace from the preimage.
- */
- extra_chars = preimage_end - preimage_eof;
- strbuf_init(&fixed, imgoff + extra_chars);
- strbuf_add(&fixed, img->buf + try, imgoff);
- strbuf_add(&fixed, preimage_eof, extra_chars);
- fixed_buf = strbuf_detach(&fixed, &fixed_len);
- update_pre_post_images(preimage, postimage,
- fixed_buf, fixed_len, postlen);
- return 1;
-}
-
-static int match_fragment(struct apply_state *state,
- struct image *img,
- struct image *preimage,
- struct image *postimage,
- unsigned long try,
- int try_lno,
- unsigned ws_rule,
- int match_beginning, int match_end)
-{
- int i;
- char *fixed_buf, *buf, *orig, *target;
- struct strbuf fixed;
- size_t fixed_len, postlen;
- int preimage_limit;
-
- if (preimage->nr + try_lno <= img->nr) {
- /*
- * The hunk falls within the boundaries of img.
- */
- preimage_limit = preimage->nr;
- if (match_end && (preimage->nr + try_lno != img->nr))
- return 0;
- } else if (state->ws_error_action == correct_ws_error &&
- (ws_rule & WS_BLANK_AT_EOF)) {
- /*
- * This hunk extends beyond the end of img, and we are
- * removing blank lines at the end of the file. This
- * many lines from the beginning of the preimage must
- * match with img, and the remainder of the preimage
- * must be blank.
- */
- preimage_limit = img->nr - try_lno;
- } else {
- /*
- * The hunk extends beyond the end of the img and
- * we are not removing blanks at the end, so we
- * should reject the hunk at this position.
- */
- return 0;
- }
-
- if (match_beginning && try_lno)
- return 0;
-
- /* Quick hash check */
- for (i = 0; i < preimage_limit; i++)
- if ((img->line[try_lno + i].flag & LINE_PATCHED) ||
- (preimage->line[i].hash != img->line[try_lno + i].hash))
- return 0;
-
- if (preimage_limit == preimage->nr) {
- /*
- * Do we have an exact match? If we were told to match
- * at the end, size must be exactly at try+fragsize,
- * otherwise try+fragsize must still fall within img,
- * and in either case, the old piece should match the preimage
- * exactly.
- */
- if ((match_end
- ? (try + preimage->len == img->len)
- : (try + preimage->len <= img->len)) &&
- !memcmp(img->buf + try, preimage->buf, preimage->len))
- return 1;
- } else {
- /*
- * The preimage extends beyond the end of img, so
- * there cannot be an exact match.
- *
- * There must be one non-blank context line that matches
- * a line before the end of img.
- */
- char *buf_end;
-
- buf = preimage->buf;
- buf_end = buf;
- for (i = 0; i < preimage_limit; i++)
- buf_end += preimage->line[i].len;
-
- for ( ; buf < buf_end; buf++)
- if (!isspace(*buf))
- break;
- if (buf == buf_end)
- return 0;
- }
-
- /*
- * No exact match. If we are ignoring whitespace, run a line-by-line
- * fuzzy matching. We collect all the line length information because
- * we need it to adjust whitespace if we match.
- */
- if (state->ws_ignore_action == ignore_ws_change)
- return line_by_line_fuzzy_match(img, preimage, postimage,
- try, try_lno, preimage_limit);
-
- if (state->ws_error_action != correct_ws_error)
- return 0;
-
- /*
- * The hunk does not apply byte-by-byte, but the hash says
- * it might with whitespace fuzz. We weren't asked to
- * ignore whitespace, we were asked to correct whitespace
- * errors, so let's try matching after whitespace correction.
- *
- * While checking the preimage against the target with whitespace
- * errors in both fixed, we count how large the corresponding
- * postimage needs to be. The postimage prepared by
- * apply_one_fragment() has whitespace errors fixed on added
- * lines already, but the common lines were propagated as-is,
- * which may become longer when their whitespace errors are
- * fixed.
- */
-
- /* First count added lines in postimage */
- postlen = 0;
- for (i = 0; i < postimage->nr; i++) {
- if (!(postimage->line[i].flag & LINE_COMMON))
- postlen += postimage->line[i].len;
- }
-
- /*
- * The preimage may extend beyond the end of the file,
- * but in this loop we will only handle the part of the
- * preimage that falls within the file.
- */
- strbuf_init(&fixed, preimage->len + 1);
- orig = preimage->buf;
- target = img->buf + try;
- for (i = 0; i < preimage_limit; i++) {
- size_t oldlen = preimage->line[i].len;
- size_t tgtlen = img->line[try_lno + i].len;
- size_t fixstart = fixed.len;
- struct strbuf tgtfix;
- int match;
-
- /* Try fixing the line in the preimage */
- ws_fix_copy(&fixed, orig, oldlen, ws_rule, NULL);
-
- /* Try fixing the line in the target */
- strbuf_init(&tgtfix, tgtlen);
- ws_fix_copy(&tgtfix, target, tgtlen, ws_rule, NULL);
-
- /*
- * If they match, either the preimage was based on
- * a version before our tree fixed whitespace breakage,
- * or we are lacking a whitespace-fix patch the tree
- * the preimage was based on already had (i.e. target
- * has whitespace breakage, the preimage doesn't).
- * In either case, we are fixing the whitespace breakages
- * so we might as well take the fix together with their
- * real change.
- */
- match = (tgtfix.len == fixed.len - fixstart &&
- !memcmp(tgtfix.buf, fixed.buf + fixstart,
- fixed.len - fixstart));
-
- /* Add the length if this is common with the postimage */
- if (preimage->line[i].flag & LINE_COMMON)
- postlen += tgtfix.len;
-
- strbuf_release(&tgtfix);
- if (!match)
- goto unmatch_exit;
-
- orig += oldlen;
- target += tgtlen;
- }
-
-
- /*
- * Now handle the lines in the preimage that fall beyond the
- * end of the file (if any). They will only match if they are
- * empty or only contain whitespace (if WS_BLANK_AT_EOL is
- * false).
- */
- for ( ; i < preimage->nr; i++) {
- size_t fixstart = fixed.len; /* start of the fixed preimage */
- size_t oldlen = preimage->line[i].len;
- int j;
-
- /* Try fixing the line in the preimage */
- ws_fix_copy(&fixed, orig, oldlen, ws_rule, NULL);
-
- for (j = fixstart; j < fixed.len; j++)
- if (!isspace(fixed.buf[j]))
- goto unmatch_exit;
-
- orig += oldlen;
- }
-
- /*
- * Yes, the preimage is based on an older version that still
- * has whitespace breakages unfixed, and fixing them makes the
- * hunk match. Update the context lines in the postimage.
- */
- fixed_buf = strbuf_detach(&fixed, &fixed_len);
- if (postlen < postimage->len)
- postlen = 0;
- update_pre_post_images(preimage, postimage,
- fixed_buf, fixed_len, postlen);
- return 1;
-
- unmatch_exit:
- strbuf_release(&fixed);
- return 0;
-}
-
-static int find_pos(struct apply_state *state,
- struct image *img,
- struct image *preimage,
- struct image *postimage,
- int line,
- unsigned ws_rule,
- int match_beginning, int match_end)
-{
- int i;
- unsigned long backwards, forwards, try;
- int backwards_lno, forwards_lno, try_lno;
-
- /*
- * If match_beginning or match_end is specified, there is no
- * point starting from a wrong line that will never match and
- * wander around and wait for a match at the specified end.
- */
- if (match_beginning)
- line = 0;
- else if (match_end)
- line = img->nr - preimage->nr;
-
- /*
- * Because the comparison is unsigned, the following test
- * will also take care of a negative line number that can
-	 * result when match_end is set and the preimage is larger than the target.
- */
- if ((size_t) line > img->nr)
- line = img->nr;
-
- try = 0;
- for (i = 0; i < line; i++)
- try += img->line[i].len;
-
- /*
- * There's probably some smart way to do this, but I'll leave
- * that to the smart and beautiful people. I'm simple and stupid.
- */
- backwards = try;
- backwards_lno = line;
- forwards = try;
- forwards_lno = line;
- try_lno = line;
-
- for (i = 0; ; i++) {
- if (match_fragment(state, img, preimage, postimage,
- try, try_lno, ws_rule,
- match_beginning, match_end))
- return try_lno;
-
- again:
- if (backwards_lno == 0 && forwards_lno == img->nr)
- break;
-
- if (i & 1) {
- if (backwards_lno == 0) {
- i++;
- goto again;
- }
- backwards_lno--;
- backwards -= img->line[backwards_lno].len;
- try = backwards;
- try_lno = backwards_lno;
- } else {
- if (forwards_lno == img->nr) {
- i++;
- goto again;
- }
- forwards += img->line[forwards_lno].len;
- forwards_lno++;
- try = forwards;
- try_lno = forwards_lno;
- }
-
- }
- return -1;
-}
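
find_pos() starts at the line suggested by the hunk header and then probes alternately forwards and backwards, one line further each time, until a match is found or both ends of the file have been reached. A self-contained sketch of the same expanding search over an integer array, where an equality test stands in for match_fragment():

    #include <stdio.h>

    /*
     * Probe index "start" first, then start+1, start-1, start+2, start-2, ...
     * Returns the first index whose value equals "needle", or -1.
     * Assumes 0 <= start < nr.
     */
    static int find_pos_sketch(const int *line, int nr, int start, int needle)
    {
        int backwards = start, forwards = start, try = start;

        for (int i = 0; ; i++) {
            if (line[try] == needle)
                return try;
            if (backwards == 0 && forwards == nr - 1)
                return -1;                 /* both ends reached, no match */
            if ((i & 1) && backwards > 0)
                try = --backwards;         /* step one line backwards */
            else if (forwards < nr - 1)
                try = ++forwards;          /* step one line forwards */
            else
                try = --backwards;         /* only the backward side is left */
        }
    }

    int main(void)
    {
        int img[] = { 10, 20, 30, 40, 50 };
        printf("%d\n", find_pos_sketch(img, 5, 2, 50));  /* 4 */
        printf("%d\n", find_pos_sketch(img, 5, 2, 99));  /* -1 */
        return 0;
    }
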
-
-static void remove_first_line(struct image *img)
-{
- img->buf += img->line[0].len;
- img->len -= img->line[0].len;
- img->line++;
- img->nr--;
-}
-
-static void remove_last_line(struct image *img)
-{
- img->len -= img->line[--img->nr].len;
-}
-
-/*
- * The change from "preimage" and "postimage" has been found to
- * apply at applied_pos (counts in line numbers) in "img".
- * Update "img" to remove "preimage" and replace it with "postimage".
- */
-static void update_image(struct apply_state *state,
- struct image *img,
- int applied_pos,
- struct image *preimage,
- struct image *postimage)
-{
- /*
- * remove the copy of preimage at offset in img
- * and replace it with postimage
- */
- int i, nr;
- size_t remove_count, insert_count, applied_at = 0;
- char *result;
- int preimage_limit;
-
- /*
- * If we are removing blank lines at the end of img,
- * the preimage may extend beyond the end.
- * If that is the case, we must be careful only to
- * remove the part of the preimage that falls within
- * the boundaries of img. Initialize preimage_limit
-	 * to the number of lines in the preimage that fall
- * within the boundaries.
- */
- preimage_limit = preimage->nr;
- if (preimage_limit > img->nr - applied_pos)
- preimage_limit = img->nr - applied_pos;
-
- for (i = 0; i < applied_pos; i++)
- applied_at += img->line[i].len;
-
- remove_count = 0;
- for (i = 0; i < preimage_limit; i++)
- remove_count += img->line[applied_pos + i].len;
- insert_count = postimage->len;
-
- /* Adjust the contents */
- result = xmalloc(st_add3(st_sub(img->len, remove_count), insert_count, 1));
- memcpy(result, img->buf, applied_at);
- memcpy(result + applied_at, postimage->buf, postimage->len);
- memcpy(result + applied_at + postimage->len,
- img->buf + (applied_at + remove_count),
- img->len - (applied_at + remove_count));
- free(img->buf);
- img->buf = result;
- img->len += insert_count - remove_count;
- result[img->len] = '\0';
-
- /* Adjust the line table */
- nr = img->nr + postimage->nr - preimage_limit;
- if (preimage_limit < postimage->nr) {
- /*
- * NOTE: this knows that we never call remove_first_line()
- * on anything other than pre/post image.
- */
- REALLOC_ARRAY(img->line, nr);
- img->line_allocated = img->line;
- }
- if (preimage_limit != postimage->nr)
- memmove(img->line + applied_pos + postimage->nr,
- img->line + applied_pos + preimage_limit,
- (img->nr - (applied_pos + preimage_limit)) *
- sizeof(*img->line));
- memcpy(img->line + applied_pos,
- postimage->line,
- postimage->nr * sizeof(*img->line));
- if (!state->allow_overlap)
- for (i = 0; i < postimage->nr; i++)
- img->line[applied_pos + i].flag |= LINE_PATCHED;
- img->nr = nr;
-}
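
update_image() is essentially a byte-level splice: keep everything before the matched region, drop the preimage bytes, insert the postimage bytes, and shift the tail, then fix up the line table the same way. A minimal sketch of the buffer half of that operation (the line-table bookkeeping is omitted):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Replace buf[at .. at+remove_count) with "insert"; returns a new buffer. */
    static char *splice(const char *buf, size_t len,
                        size_t at, size_t remove_count,
                        const char *insert, size_t insert_count,
                        size_t *newlen)
    {
        char *result = malloc(len - remove_count + insert_count + 1);
        if (!result)
            return NULL;
        memcpy(result, buf, at);                            /* head */
        memcpy(result + at, insert, insert_count);          /* postimage */
        memcpy(result + at + insert_count,
               buf + at + remove_count,
               len - at - remove_count);                    /* tail */
        *newlen = len - remove_count + insert_count;
        result[*newlen] = '\0';
        return result;
    }

    int main(void)
    {
        const char *img = "aaa\nOLD\nccc\n";
        size_t newlen;
        char *out = splice(img, strlen(img), 4, 4, "NEW!\n", 5, &newlen);
        printf("%s", out);   /* aaa NEW! ccc, each on its own line */
        free(out);
        return 0;
    }
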
-
-/*
- * Use the patch-hunk text in "frag" to prepare two images (preimage and
- * postimage) for the hunk. Find lines that match "preimage" in "img" and
- * replace the part of "img" with "postimage" text.
- */
-static int apply_one_fragment(struct apply_state *state,
- struct image *img, struct fragment *frag,
- int inaccurate_eof, unsigned ws_rule,
- int nth_fragment)
-{
- int match_beginning, match_end;
- const char *patch = frag->patch;
- int size = frag->size;
- char *old, *oldlines;
- struct strbuf newlines;
- int new_blank_lines_at_end = 0;
- int found_new_blank_lines_at_end = 0;
- int hunk_linenr = frag->linenr;
- unsigned long leading, trailing;
- int pos, applied_pos;
- struct image preimage;
- struct image postimage;
-
- memset(&preimage, 0, sizeof(preimage));
- memset(&postimage, 0, sizeof(postimage));
- oldlines = xmalloc(size);
- strbuf_init(&newlines, size);
-
- old = oldlines;
- while (size > 0) {
- char first;
- int len = linelen(patch, size);
- int plen;
- int added_blank_line = 0;
- int is_blank_context = 0;
- size_t start;
-
- if (!len)
- break;
-
- /*
- * "plen" is how much of the line we should use for
- * the actual patch data. Normally we just remove the
- * first character on the line, but if the line is
- * followed by "\ No newline", then we also remove the
- * last one (which is the newline, of course).
- */
- plen = len - 1;
- if (len < size && patch[len] == '\\')
- plen--;
- first = *patch;
- if (state->apply_in_reverse) {
- if (first == '-')
- first = '+';
- else if (first == '+')
- first = '-';
- }
-
- switch (first) {
- case '\n':
- /* Newer GNU diff, empty context line */
- if (plen < 0)
- /* ... followed by '\No newline'; nothing */
- break;
- *old++ = '\n';
- strbuf_addch(&newlines, '\n');
- add_line_info(&preimage, "\n", 1, LINE_COMMON);
- add_line_info(&postimage, "\n", 1, LINE_COMMON);
- is_blank_context = 1;
- break;
- case ' ':
- if (plen && (ws_rule & WS_BLANK_AT_EOF) &&
- ws_blank_line(patch + 1, plen, ws_rule))
- is_blank_context = 1;
- case '-':
- memcpy(old, patch + 1, plen);
- add_line_info(&preimage, old, plen,
- (first == ' ' ? LINE_COMMON : 0));
- old += plen;
- if (first == '-')
- break;
- /* Fall-through for ' ' */
- case '+':
- /* --no-add does not add new lines */
- if (first == '+' && state->no_add)
- break;
-
- start = newlines.len;
- if (first != '+' ||
- !state->whitespace_error ||
- state->ws_error_action != correct_ws_error) {
- strbuf_add(&newlines, patch + 1, plen);
- }
- else {
- ws_fix_copy(&newlines, patch + 1, plen, ws_rule, &state->applied_after_fixing_ws);
- }
- add_line_info(&postimage, newlines.buf + start, newlines.len - start,
- (first == '+' ? 0 : LINE_COMMON));
- if (first == '+' &&
- (ws_rule & WS_BLANK_AT_EOF) &&
- ws_blank_line(patch + 1, plen, ws_rule))
- added_blank_line = 1;
- break;
- case '@': case '\\':
- /* Ignore it, we already handled it */
- break;
- default:
- if (state->apply_verbosely)
- error(_("invalid start of line: '%c'"), first);
- applied_pos = -1;
- goto out;
- }
- if (added_blank_line) {
- if (!new_blank_lines_at_end)
- found_new_blank_lines_at_end = hunk_linenr;
- new_blank_lines_at_end++;
- }
- else if (is_blank_context)
- ;
- else
- new_blank_lines_at_end = 0;
- patch += len;
- size -= len;
- hunk_linenr++;
- }
- if (inaccurate_eof &&
- old > oldlines && old[-1] == '\n' &&
- newlines.len > 0 && newlines.buf[newlines.len - 1] == '\n') {
- old--;
- strbuf_setlen(&newlines, newlines.len - 1);
- }
-
- leading = frag->leading;
- trailing = frag->trailing;
-
- /*
- * A hunk to change lines at the beginning would begin with
- * @@ -1,L +N,M @@
- * but we need to be careful. -U0 that inserts before the second
- * line also has this pattern.
- *
- * And a hunk to add to an empty file would begin with
- * @@ -0,0 +N,M @@
- *
- * In other words, a hunk that is (frag->oldpos <= 1) with or
- * without leading context must match at the beginning.
- */
- match_beginning = (!frag->oldpos ||
- (frag->oldpos == 1 && !state->unidiff_zero));
-
- /*
- * A hunk without trailing lines must match at the end.
-	 * However, we simply cannot tell if a hunk must match at the end
- * from the lack of trailing lines if the patch was generated
- * with unidiff without any context.
- */
- match_end = !state->unidiff_zero && !trailing;
-
- pos = frag->newpos ? (frag->newpos - 1) : 0;
- preimage.buf = oldlines;
- preimage.len = old - oldlines;
- postimage.buf = newlines.buf;
- postimage.len = newlines.len;
- preimage.line = preimage.line_allocated;
- postimage.line = postimage.line_allocated;
-
- for (;;) {
-
- applied_pos = find_pos(state, img, &preimage, &postimage, pos,
- ws_rule, match_beginning, match_end);
-
- if (applied_pos >= 0)
- break;
-
- /* Am I at my context limits? */
- if ((leading <= state->p_context) && (trailing <= state->p_context))
- break;
- if (match_beginning || match_end) {
- match_beginning = match_end = 0;
- continue;
- }
-
- /*
- * Reduce the number of context lines; reduce both
- * leading and trailing if they are equal otherwise
- * just reduce the larger context.
- */
- if (leading >= trailing) {
- remove_first_line(&preimage);
- remove_first_line(&postimage);
- pos--;
- leading--;
- }
- if (trailing > leading) {
- remove_last_line(&preimage);
- remove_last_line(&postimage);
- trailing--;
- }
- }
-
- if (applied_pos >= 0) {
- if (new_blank_lines_at_end &&
- preimage.nr + applied_pos >= img->nr &&
- (ws_rule & WS_BLANK_AT_EOF) &&
- state->ws_error_action != nowarn_ws_error) {
- record_ws_error(state, WS_BLANK_AT_EOF, "+", 1,
- found_new_blank_lines_at_end);
- if (state->ws_error_action == correct_ws_error) {
- while (new_blank_lines_at_end--)
- remove_last_line(&postimage);
- }
- /*
-			 * We want to prevent write_out_results()
-			 * from taking place in apply_patch(), which follows
-			 * the call chain that led us here:
- * apply_patch->check_patch_list->check_patch->
- * apply_data->apply_fragments->apply_one_fragment
- */
- if (state->ws_error_action == die_on_ws_error)
- state->apply = 0;
- }
-
- if (state->apply_verbosely && applied_pos != pos) {
- int offset = applied_pos - pos;
- if (state->apply_in_reverse)
- offset = 0 - offset;
- fprintf_ln(stderr,
- Q_("Hunk #%d succeeded at %d (offset %d line).",
- "Hunk #%d succeeded at %d (offset %d lines).",
- offset),
- nth_fragment, applied_pos + 1, offset);
- }
-
- /*
- * Warn if it was necessary to reduce the number
- * of context lines.
- */
- if ((leading != frag->leading) ||
- (trailing != frag->trailing))
- fprintf_ln(stderr, _("Context reduced to (%ld/%ld)"
- " to apply fragment at %d"),
- leading, trailing, applied_pos+1);
- update_image(state, img, applied_pos, &preimage, &postimage);
- } else {
- if (state->apply_verbosely)
- error(_("while searching for:\n%.*s"),
- (int)(old - oldlines), oldlines);
- }
-
-out:
- free(oldlines);
- strbuf_release(&newlines);
- free(preimage.line_allocated);
- free(postimage.line_allocated);
-
- return (applied_pos < 0);
-}
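
Two decisions made above are easy to miss: a hunk whose old start line is 0, or 1 with usable context, is pinned to the top of the file, and a hunk without trailing context is pinned to the bottom, except when the patch was produced with -U0 and anchoring would be guesswork. A small sketch of just that decision:

    #include <stdio.h>

    struct hunk_anchor { int match_beginning, match_end; };

    /*
     * oldpos:       old start line from "@@ -oldpos,oldlines ..."
     * trailing:     number of trailing context lines in the hunk
     * unidiff_zero: patch was produced with -U0, so anchoring is unreliable
     */
    static struct hunk_anchor anchor_for(int oldpos, int trailing, int unidiff_zero)
    {
        struct hunk_anchor a;
        a.match_beginning = (oldpos == 0 || (oldpos == 1 && !unidiff_zero));
        a.match_end = (!unidiff_zero && !trailing);
        return a;
    }

    int main(void)
    {
        struct hunk_anchor a = anchor_for(1, 3, 0);  /* @@ -1,... with context */
        printf("begin=%d end=%d\n", a.match_beginning, a.match_end); /* 1 0 */
        a = anchor_for(0, 0, 0);                     /* @@ -0,0 ... new file */
        printf("begin=%d end=%d\n", a.match_beginning, a.match_end); /* 1 1 */
        return 0;
    }
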
-
-static int apply_binary_fragment(struct apply_state *state,
- struct image *img,
- struct patch *patch)
-{
- struct fragment *fragment = patch->fragments;
- unsigned long len;
- void *dst;
-
- if (!fragment)
- return error(_("missing binary patch data for '%s'"),
- patch->new_name ?
- patch->new_name :
- patch->old_name);
-
- /* Binary patch is irreversible without the optional second hunk */
- if (state->apply_in_reverse) {
- if (!fragment->next)
- return error("cannot reverse-apply a binary patch "
- "without the reverse hunk to '%s'",
- patch->new_name
- ? patch->new_name : patch->old_name);
- fragment = fragment->next;
- }
- switch (fragment->binary_patch_method) {
- case BINARY_DELTA_DEFLATED:
- dst = patch_delta(img->buf, img->len, fragment->patch,
- fragment->size, &len);
- if (!dst)
- return -1;
- clear_image(img);
- img->buf = dst;
- img->len = len;
- return 0;
- case BINARY_LITERAL_DEFLATED:
- clear_image(img);
- img->len = fragment->size;
- img->buf = xmemdupz(fragment->patch, img->len);
- return 0;
- }
- return -1;
-}
-
-/*
- * Replace "img" with the result of applying the binary patch.
- * The binary patch data itself in patch->fragment is still kept
- * but the preimage prepared by the caller in "img" is freed here
- * or in the helper function apply_binary_fragment() that it calls.
- */
-static int apply_binary(struct apply_state *state,
- struct image *img,
- struct patch *patch)
-{
- const char *name = patch->old_name ? patch->old_name : patch->new_name;
- unsigned char sha1[20];
-
- /*
- * For safety, we require patch index line to contain
- * full 40-byte textual SHA1 for old and new, at least for now.
- */
- if (strlen(patch->old_sha1_prefix) != 40 ||
- strlen(patch->new_sha1_prefix) != 40 ||
- get_sha1_hex(patch->old_sha1_prefix, sha1) ||
- get_sha1_hex(patch->new_sha1_prefix, sha1))
- return error("cannot apply binary patch to '%s' "
- "without full index line", name);
-
- if (patch->old_name) {
- /*
- * See if the old one matches what the patch
- * applies to.
- */
- hash_sha1_file(img->buf, img->len, blob_type, sha1);
- if (strcmp(sha1_to_hex(sha1), patch->old_sha1_prefix))
- return error("the patch applies to '%s' (%s), "
- "which does not match the "
- "current contents.",
- name, sha1_to_hex(sha1));
- }
- else {
- /* Otherwise, the old one must be empty. */
- if (img->len)
- return error("the patch applies to an empty "
- "'%s' but it is not empty", name);
- }
-
- get_sha1_hex(patch->new_sha1_prefix, sha1);
- if (is_null_sha1(sha1)) {
- clear_image(img);
- return 0; /* deletion patch */
- }
-
- if (has_sha1_file(sha1)) {
- /* We already have the postimage */
- enum object_type type;
- unsigned long size;
- char *result;
-
- result = read_sha1_file(sha1, &type, &size);
- if (!result)
- return error("the necessary postimage %s for "
- "'%s' cannot be read",
- patch->new_sha1_prefix, name);
- clear_image(img);
- img->buf = result;
- img->len = size;
- } else {
- /*
- * We have verified buf matches the preimage;
- * apply the patch data to it, which is stored
- * in the patch->fragments->{patch,size}.
- */
- if (apply_binary_fragment(state, img, patch))
- return error(_("binary patch does not apply to '%s'"),
- name);
-
- /* verify that the result matches */
- hash_sha1_file(img->buf, img->len, blob_type, sha1);
- if (strcmp(sha1_to_hex(sha1), patch->new_sha1_prefix))
- return error(_("binary patch to '%s' creates incorrect result (expecting %s, got %s)"),
- name, patch->new_sha1_prefix, sha1_to_hex(sha1));
- }
-
- return 0;
-}
-
-static int apply_fragments(struct apply_state *state, struct image *img, struct patch *patch)
-{
- struct fragment *frag = patch->fragments;
- const char *name = patch->old_name ? patch->old_name : patch->new_name;
- unsigned ws_rule = patch->ws_rule;
- unsigned inaccurate_eof = patch->inaccurate_eof;
- int nth = 0;
-
- if (patch->is_binary)
- return apply_binary(state, img, patch);
-
- while (frag) {
- nth++;
- if (apply_one_fragment(state, img, frag, inaccurate_eof, ws_rule, nth)) {
- error(_("patch failed: %s:%ld"), name, frag->oldpos);
- if (!state->apply_with_reject)
- return -1;
- frag->rejected = 1;
- }
- frag = frag->next;
- }
- return 0;
-}
-
-static int read_blob_object(struct strbuf *buf, const unsigned char *sha1, unsigned mode)
-{
- if (S_ISGITLINK(mode)) {
- strbuf_grow(buf, 100);
- strbuf_addf(buf, "Subproject commit %s\n", sha1_to_hex(sha1));
- } else {
- enum object_type type;
- unsigned long sz;
- char *result;
-
- result = read_sha1_file(sha1, &type, &sz);
- if (!result)
- return -1;
- /* XXX read_sha1_file NUL-terminates */
- strbuf_attach(buf, result, sz, sz + 1);
- }
- return 0;
-}
-
-static int read_file_or_gitlink(const struct cache_entry *ce, struct strbuf *buf)
-{
- if (!ce)
- return 0;
- return read_blob_object(buf, ce->sha1, ce->ce_mode);
-}
-
-static struct patch *in_fn_table(struct apply_state *state, const char *name)
-{
- struct string_list_item *item;
-
- if (name == NULL)
- return NULL;
-
- item = string_list_lookup(&state->fn_table, name);
- if (item != NULL)
- return (struct patch *)item->util;
-
- return NULL;
-}
-
-/*
- * item->util in the filename table records the status of the path.
- * Usually it points at a patch (whose result records the contents
- * of the path after the patch is applied), but it could be PATH_WAS_DELETED for a
- * path that a previously applied patch has already removed, or
- * PATH_TO_BE_DELETED for a path that a later patch would remove.
- *
- * The latter is needed to deal with a case where two paths A and B
- * are swapped by first renaming A to B and then renaming B to A;
- * moving A to B should not be prevented due to presence of B as we
- * will remove it in a later patch.
- */
-#define PATH_TO_BE_DELETED ((struct patch *) -2)
-#define PATH_WAS_DELETED ((struct patch *) -1)
-
-static int to_be_deleted(struct patch *patch)
-{
- return patch == PATH_TO_BE_DELETED;
-}
-
-static int was_deleted(struct patch *patch)
-{
- return patch == PATH_WAS_DELETED;
-}
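
in_fn_table() normally yields a pointer to a real struct patch, but the two sentinel values above let the same util slot also mean "an earlier patch already removed this path" or "a later patch will remove it", which is what makes the A/B swap-rename case work. A self-contained sketch of the same trick using a plain array instead of git's string_list:

    #include <stdio.h>
    #include <string.h>

    struct patch_sketch { const char *name; };

    #define PATH_TO_BE_DELETED ((struct patch_sketch *) -2)
    #define PATH_WAS_DELETED   ((struct patch_sketch *) -1)

    struct fn_entry { const char *path; struct patch_sketch *util; };

    static const char *describe(struct fn_entry *tbl, int nr, const char *path)
    {
        for (int i = 0; i < nr; i++) {
            if (strcmp(tbl[i].path, path))
                continue;
            if (tbl[i].util == PATH_WAS_DELETED)
                return "already deleted by an earlier patch";
            if (tbl[i].util == PATH_TO_BE_DELETED)
                return "will be deleted by a later patch";
            return "has a patched result in memory";
        }
        return "not seen yet";
    }

    int main(void)
    {
        struct patch_sketch p = { "B" };
        struct fn_entry table[] = {
            { "A", PATH_WAS_DELETED },    /* A was renamed away already */
            { "B", &p },                  /* B carries a real result    */
            { "C", PATH_TO_BE_DELETED },  /* C goes away later          */
        };
        printf("A: %s\n", describe(table, 3, "A"));
        printf("C: %s\n", describe(table, 3, "C"));
        return 0;
    }
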
-
-static void add_to_fn_table(struct apply_state *state, struct patch *patch)
-{
- struct string_list_item *item;
-
- /*
- * Always add new_name unless patch is a deletion
- * This should cover the cases for normal diffs,
- * file creations and copies
- */
- if (patch->new_name != NULL) {
- item = string_list_insert(&state->fn_table, patch->new_name);
- item->util = patch;
- }
-
- /*
- * store a failure on rename/deletion cases because
- * later chunks shouldn't patch old names
- */
- if ((patch->new_name == NULL) || (patch->is_rename)) {
- item = string_list_insert(&state->fn_table, patch->old_name);
- item->util = PATH_WAS_DELETED;
- }
-}
-
-static void prepare_fn_table(struct apply_state *state, struct patch *patch)
-{
- /*
- * store information about incoming file deletion
- */
- while (patch) {
- if ((patch->new_name == NULL) || (patch->is_rename)) {
- struct string_list_item *item;
- item = string_list_insert(&state->fn_table, patch->old_name);
- item->util = PATH_TO_BE_DELETED;
- }
- patch = patch->next;
- }
-}
-
-static int checkout_target(struct index_state *istate,
- struct cache_entry *ce, struct stat *st)
-{
- struct checkout costate;
-
- memset(&costate, 0, sizeof(costate));
- costate.base_dir = "";
- costate.refresh_cache = 1;
- costate.istate = istate;
- if (checkout_entry(ce, &costate, NULL) || lstat(ce->name, st))
- return error(_("cannot checkout %s"), ce->name);
- return 0;
-}
-
-static struct patch *previous_patch(struct apply_state *state,
- struct patch *patch,
- int *gone)
-{
- struct patch *previous;
-
- *gone = 0;
- if (patch->is_copy || patch->is_rename)
- return NULL; /* "git" patches do not depend on the order */
-
- previous = in_fn_table(state, patch->old_name);
- if (!previous)
- return NULL;
-
- if (to_be_deleted(previous))
- return NULL; /* the deletion hasn't happened yet */
-
- if (was_deleted(previous))
- *gone = 1;
-
- return previous;
-}
-
-static int verify_index_match(const struct cache_entry *ce, struct stat *st)
-{
- if (S_ISGITLINK(ce->ce_mode)) {
- if (!S_ISDIR(st->st_mode))
- return -1;
- return 0;
- }
- return ce_match_stat(ce, st, CE_MATCH_IGNORE_VALID|CE_MATCH_IGNORE_SKIP_WORKTREE);
-}
-
-#define SUBMODULE_PATCH_WITHOUT_INDEX 1
-
-static int load_patch_target(struct apply_state *state,
- struct strbuf *buf,
- const struct cache_entry *ce,
- struct stat *st,
- const char *name,
- unsigned expected_mode)
-{
- if (state->cached || state->check_index) {
- if (read_file_or_gitlink(ce, buf))
- return error(_("failed to read %s"), name);
- } else if (name) {
- if (S_ISGITLINK(expected_mode)) {
- if (ce)
- return read_file_or_gitlink(ce, buf);
- else
- return SUBMODULE_PATCH_WITHOUT_INDEX;
- } else if (has_symlink_leading_path(name, strlen(name))) {
- return error(_("reading from '%s' beyond a symbolic link"), name);
- } else {
- if (read_old_data(st, name, buf))
- return error(_("failed to read %s"), name);
- }
- }
- return 0;
-}
-
-/*
- * We are about to apply "patch"; populate the "image" with the
- * current version we have, from the working tree or from the index,
- * depending on the situation e.g. --cached/--index. If we are
- * applying a non-git patch that incrementally updates the tree,
- * we read from the result of a previous diff.
- */
-static int load_preimage(struct apply_state *state,
- struct image *image,
- struct patch *patch, struct stat *st,
- const struct cache_entry *ce)
-{
- struct strbuf buf = STRBUF_INIT;
- size_t len;
- char *img;
- struct patch *previous;
- int status;
-
- previous = previous_patch(state, patch, &status);
- if (status)
- return error(_("path %s has been renamed/deleted"),
- patch->old_name);
- if (previous) {
- /* We have a patched copy in memory; use that. */
- strbuf_add(&buf, previous->result, previous->resultsize);
- } else {
- status = load_patch_target(state, &buf, ce, st,
- patch->old_name, patch->old_mode);
- if (status < 0)
- return status;
- else if (status == SUBMODULE_PATCH_WITHOUT_INDEX) {
- /*
-			 * There is no way to apply a subproject
-			 * patch without looking at the index.
- * NEEDSWORK: shouldn't this be flagged
- * as an error???
- */
- free_fragment_list(patch->fragments);
- patch->fragments = NULL;
- } else if (status) {
- return error(_("failed to read %s"), patch->old_name);
- }
- }
-
- img = strbuf_detach(&buf, &len);
- prepare_image(image, img, len, !patch->is_binary);
- return 0;
-}
-
-static int three_way_merge(struct image *image,
- char *path,
- const unsigned char *base,
- const unsigned char *ours,
- const unsigned char *theirs)
-{
- mmfile_t base_file, our_file, their_file;
- mmbuffer_t result = { NULL };
- int status;
-
- read_mmblob(&base_file, base);
- read_mmblob(&our_file, ours);
- read_mmblob(&their_file, theirs);
- status = ll_merge(&result, path,
- &base_file, "base",
- &our_file, "ours",
- &their_file, "theirs", NULL);
- free(base_file.ptr);
- free(our_file.ptr);
- free(their_file.ptr);
- if (status < 0 || !result.ptr) {
- free(result.ptr);
- return -1;
- }
- clear_image(image);
- image->buf = result.ptr;
- image->len = result.size;
-
- return status;
-}
-
-/*
- * When directly falling back to add/add three-way merge, we read from
- * the current contents of the new_name. This function is called
- * only in that case.
- */
-static int load_current(struct apply_state *state,
- struct image *image,
- struct patch *patch)
-{
- struct strbuf buf = STRBUF_INIT;
- int status, pos;
- size_t len;
- char *img;
- struct stat st;
- struct cache_entry *ce;
- char *name = patch->new_name;
- unsigned mode = patch->new_mode;
-
- if (!patch->is_new)
- die("BUG: patch to %s is not a creation", patch->old_name);
-
- pos = cache_name_pos(name, strlen(name));
- if (pos < 0)
- return error(_("%s: does not exist in index"), name);
- ce = active_cache[pos];
- if (lstat(name, &st)) {
- if (errno != ENOENT)
- return error(_("%s: %s"), name, strerror(errno));
- if (checkout_target(&the_index, ce, &st))
- return -1;
- }
- if (verify_index_match(ce, &st))
- return error(_("%s: does not match index"), name);
-
- status = load_patch_target(state, &buf, ce, &st, name, mode);
- if (status < 0)
- return status;
- else if (status)
- return -1;
- img = strbuf_detach(&buf, &len);
- prepare_image(image, img, len, !patch->is_binary);
- return 0;
-}
-
-static int try_threeway(struct apply_state *state,
- struct image *image,
- struct patch *patch,
- struct stat *st,
- const struct cache_entry *ce)
-{
- unsigned char pre_sha1[20], post_sha1[20], our_sha1[20];
- struct strbuf buf = STRBUF_INIT;
- size_t len;
- int status;
- char *img;
- struct image tmp_image;
-
- /* No point falling back to 3-way merge in these cases */
- if (patch->is_delete ||
- S_ISGITLINK(patch->old_mode) || S_ISGITLINK(patch->new_mode))
- return -1;
-
- /* Preimage the patch was prepared for */
- if (patch->is_new)
- write_sha1_file("", 0, blob_type, pre_sha1);
- else if (get_sha1(patch->old_sha1_prefix, pre_sha1) ||
- read_blob_object(&buf, pre_sha1, patch->old_mode))
- return error("repository lacks the necessary blob to fall back on 3-way merge.");
-
- fprintf(stderr, "Falling back to three-way merge...\n");
-
- img = strbuf_detach(&buf, &len);
- prepare_image(&tmp_image, img, len, 1);
- /* Apply the patch to get the post image */
- if (apply_fragments(state, &tmp_image, patch) < 0) {
- clear_image(&tmp_image);
- return -1;
- }
- /* post_sha1[] is theirs */
- write_sha1_file(tmp_image.buf, tmp_image.len, blob_type, post_sha1);
- clear_image(&tmp_image);
-
- /* our_sha1[] is ours */
- if (patch->is_new) {
- if (load_current(state, &tmp_image, patch))
- return error("cannot read the current contents of '%s'",
- patch->new_name);
- } else {
- if (load_preimage(state, &tmp_image, patch, st, ce))
- return error("cannot read the current contents of '%s'",
- patch->old_name);
- }
- write_sha1_file(tmp_image.buf, tmp_image.len, blob_type, our_sha1);
- clear_image(&tmp_image);
-
- /* in-core three-way merge between post and our using pre as base */
- status = three_way_merge(image, patch->new_name,
- pre_sha1, our_sha1, post_sha1);
- if (status < 0) {
- fprintf(stderr, "Failed to fall back on three-way merge...\n");
- return status;
- }
-
- if (status) {
- patch->conflicted_threeway = 1;
- if (patch->is_new)
- oidclr(&patch->threeway_stage[0]);
- else
- hashcpy(patch->threeway_stage[0].hash, pre_sha1);
- hashcpy(patch->threeway_stage[1].hash, our_sha1);
- hashcpy(patch->threeway_stage[2].hash, post_sha1);
- fprintf(stderr, "Applied patch to '%s' with conflicts.\n", patch->new_name);
- } else {
- fprintf(stderr, "Applied patch to '%s' cleanly.\n", patch->new_name);
- }
- return 0;
-}
-
-static int apply_data(struct apply_state *state, struct patch *patch,
- struct stat *st, const struct cache_entry *ce)
-{
- struct image image;
-
- if (load_preimage(state, &image, patch, st, ce) < 0)
- return -1;
-
- if (patch->direct_to_threeway ||
- apply_fragments(state, &image, patch) < 0) {
- /* Note: with --reject, apply_fragments() returns 0 */
- if (!state->threeway || try_threeway(state, &image, patch, st, ce) < 0)
- return -1;
- }
- patch->result = image.buf;
- patch->resultsize = image.len;
- add_to_fn_table(state, patch);
- free(image.line_allocated);
-
- if (0 < patch->is_delete && patch->resultsize)
- return error(_("removal patch leaves file contents"));
-
- return 0;
-}
-
-/*
- * If "patch" that we are looking at modifies or deletes what we have,
- * we would want it not to lose any local modification we have, either
- * in the working tree or in the index.
- *
- * This also decides if a non-git patch is a creation patch or a
- * modification to an existing empty file. We do not check the state
- * of the current tree for a creation patch in this function; the caller
- * check_patch() separately makes sure (and errors out otherwise) that
- * the path the patch creates does not exist in the current tree.
- */
-static int check_preimage(struct apply_state *state,
- struct patch *patch,
- struct cache_entry **ce,
- struct stat *st)
-{
- const char *old_name = patch->old_name;
- struct patch *previous = NULL;
- int stat_ret = 0, status;
- unsigned st_mode = 0;
-
- if (!old_name)
- return 0;
-
- assert(patch->is_new <= 0);
- previous = previous_patch(state, patch, &status);
-
- if (status)
- return error(_("path %s has been renamed/deleted"), old_name);
- if (previous) {
- st_mode = previous->new_mode;
- } else if (!state->cached) {
- stat_ret = lstat(old_name, st);
- if (stat_ret && errno != ENOENT)
- return error(_("%s: %s"), old_name, strerror(errno));
- }
-
- if (state->check_index && !previous) {
- int pos = cache_name_pos(old_name, strlen(old_name));
- if (pos < 0) {
- if (patch->is_new < 0)
- goto is_new;
- return error(_("%s: does not exist in index"), old_name);
- }
- *ce = active_cache[pos];
- if (stat_ret < 0) {
- if (checkout_target(&the_index, *ce, st))
- return -1;
- }
- if (!state->cached && verify_index_match(*ce, st))
- return error(_("%s: does not match index"), old_name);
- if (state->cached)
- st_mode = (*ce)->ce_mode;
- } else if (stat_ret < 0) {
- if (patch->is_new < 0)
- goto is_new;
- return error(_("%s: %s"), old_name, strerror(errno));
- }
-
- if (!state->cached && !previous)
- st_mode = ce_mode_from_stat(*ce, st->st_mode);
-
- if (patch->is_new < 0)
- patch->is_new = 0;
- if (!patch->old_mode)
- patch->old_mode = st_mode;
- if ((st_mode ^ patch->old_mode) & S_IFMT)
- return error(_("%s: wrong type"), old_name);
- if (st_mode != patch->old_mode)
- warning(_("%s has type %o, expected %o"),
- old_name, st_mode, patch->old_mode);
- if (!patch->new_mode && !patch->is_delete)
- patch->new_mode = st_mode;
- return 0;
-
- is_new:
- patch->is_new = 1;
- patch->is_delete = 0;
- free(patch->old_name);
- patch->old_name = NULL;
- return 0;
-}
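
The type check above, (st_mode ^ patch->old_mode) & S_IFMT, compares only the file-type bits, so a permissions-only difference produces just a warning while a regular-file-versus-symlink mismatch is a hard error. A tiny illustration (POSIX <sys/stat.h>):

    #include <stdio.h>
    #include <sys/stat.h>

    int main(void)
    {
        unsigned int on_disk  = S_IFREG | 0755;   /* executable regular file */
        unsigned int in_patch = S_IFREG | 0644;   /* same type, other perms  */
        unsigned int as_link  = S_IFLNK | 0777;   /* a symbolic link         */

        printf("type differs (reg vs reg)?  %d\n",
               !!((on_disk ^ in_patch) & S_IFMT));   /* 0: just warn */
        printf("type differs (reg vs lnk)?  %d\n",
               !!((on_disk ^ as_link) & S_IFMT));    /* 1: hard error */
        return 0;
    }
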
-
-
-#define EXISTS_IN_INDEX 1
-#define EXISTS_IN_WORKTREE 2
-
-static int check_to_create(struct apply_state *state,
- const char *new_name,
- int ok_if_exists)
-{
- struct stat nst;
-
- if (state->check_index &&
- cache_name_pos(new_name, strlen(new_name)) >= 0 &&
- !ok_if_exists)
- return EXISTS_IN_INDEX;
- if (state->cached)
- return 0;
-
- if (!lstat(new_name, &nst)) {
- if (S_ISDIR(nst.st_mode) || ok_if_exists)
- return 0;
- /*
- * A leading component of new_name might be a symlink
- * that is going to be removed with this patch, but
- * still pointing at somewhere that has the path.
- * In such a case, path "new_name" does not exist as
- * far as git is concerned.
- */
- if (has_symlink_leading_path(new_name, strlen(new_name)))
- return 0;
-
- return EXISTS_IN_WORKTREE;
- } else if ((errno != ENOENT) && (errno != ENOTDIR)) {
- return error("%s: %s", new_name, strerror(errno));
- }
- return 0;
-}
-
-static uintptr_t register_symlink_changes(struct apply_state *state,
- const char *path,
- uintptr_t what)
-{
- struct string_list_item *ent;
-
- ent = string_list_lookup(&state->symlink_changes, path);
- if (!ent) {
- ent = string_list_insert(&state->symlink_changes, path);
- ent->util = (void *)0;
- }
- ent->util = (void *)(what | ((uintptr_t)ent->util));
- return (uintptr_t)ent->util;
-}
-
-static uintptr_t check_symlink_changes(struct apply_state *state, const char *path)
-{
- struct string_list_item *ent;
-
- ent = string_list_lookup(&state->symlink_changes, path);
- if (!ent)
- return 0;
- return (uintptr_t)ent->util;
-}
-
-static void prepare_symlink_changes(struct apply_state *state, struct patch *patch)
-{
- for ( ; patch; patch = patch->next) {
- if ((patch->old_name && S_ISLNK(patch->old_mode)) &&
- (patch->is_rename || patch->is_delete))
- /* the symlink at patch->old_name is removed */
- register_symlink_changes(state, patch->old_name, SYMLINK_GOES_AWAY);
-
- if (patch->new_name && S_ISLNK(patch->new_mode))
- /* the symlink at patch->new_name is created or remains */
- register_symlink_changes(state, patch->new_name, SYMLINK_IN_RESULT);
- }
-}
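
register_symlink_changes() packs the SYMLINK_GOES_AWAY and SYMLINK_IN_RESULT bits into the string list's void *util slot by round-tripping through uintptr_t, so one path can carry both bits. A minimal stand-alone sketch of that packing; the flag values here are illustrative stand-ins, not the ones defined in apply.h:

    #include <stdint.h>
    #include <stdio.h>

    #define GOES_AWAY  (1u << 0)   /* stand-ins for the apply.h flag bits */
    #define IN_RESULT  (1u << 1)

    /* OR the new bits into the flags already stored in *util. */
    static uintptr_t add_flags(void **util, uintptr_t what)
    {
        uintptr_t cur = (uintptr_t)*util;
        cur |= what;
        *util = (void *)cur;
        return cur;
    }

    int main(void)
    {
        void *util = NULL;                 /* as stored in string_list_item.util */
        add_flags(&util, GOES_AWAY);       /* the old symlink is removed ...     */
        add_flags(&util, IN_RESULT);       /* ... and a new one is created       */
        uintptr_t flags = (uintptr_t)util;
        printf("goes away: %d, in result: %d\n",
               !!(flags & GOES_AWAY), !!(flags & IN_RESULT));
        return 0;
    }
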
-
-static int path_is_beyond_symlink_1(struct apply_state *state, struct strbuf *name)
-{
- do {
- unsigned int change;
-
- while (--name->len && name->buf[name->len] != '/')
- ; /* scan backwards */
- if (!name->len)
- break;
- name->buf[name->len] = '\0';
- change = check_symlink_changes(state, name->buf);
- if (change & SYMLINK_IN_RESULT)
- return 1;
- if (change & SYMLINK_GOES_AWAY)
- /*
- * This cannot be "return 0", because we may
- * see a new one created at a higher level.
- */
- continue;
-
- /* otherwise, check the preimage */
- if (state->check_index) {
- struct cache_entry *ce;
-
- ce = cache_file_exists(name->buf, name->len, ignore_case);
- if (ce && S_ISLNK(ce->ce_mode))
- return 1;
- } else {
- struct stat st;
- if (!lstat(name->buf, &st) && S_ISLNK(st.st_mode))
- return 1;
- }
- } while (1);
- return 0;
-}
-
-static int path_is_beyond_symlink(struct apply_state *state, const char *name_)
-{
- int ret;
- struct strbuf name = STRBUF_INIT;
-
- assert(*name_ != '\0');
- strbuf_addstr(&name, name_);
- ret = path_is_beyond_symlink_1(state, &name);
- strbuf_release(&name);
-
- return ret;
-}
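
path_is_beyond_symlink_1() examines every leading directory of the path by repeatedly truncating the buffer at the last '/', innermost directory first. The same backward walk in isolation, just printing the prefixes it would check:

    #include <stdio.h>
    #include <string.h>

    /* "path" must be a non-empty, writable string; it is truncated in place. */
    static void walk_leading_dirs(char *path)
    {
        size_t len = strlen(path);

        for (;;) {
            while (--len && path[len] != '/')
                ;                      /* scan backwards for the previous '/' */
            if (!len)
                break;                 /* no directory component left */
            path[len] = '\0';
            printf("would check: %s\n", path);
        }
    }

    int main(void)
    {
        char path[] = "a/b/c/file.txt";
        walk_leading_dirs(path);       /* a/b/c, then a/b, then a */
        return 0;
    }
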
-
-static void die_on_unsafe_path(struct patch *patch)
-{
- const char *old_name = NULL;
- const char *new_name = NULL;
- if (patch->is_delete)
- old_name = patch->old_name;
- else if (!patch->is_new && !patch->is_copy)
- old_name = patch->old_name;
- if (!patch->is_delete)
- new_name = patch->new_name;
-
- if (old_name && !verify_path(old_name))
- die(_("invalid path '%s'"), old_name);
- if (new_name && !verify_path(new_name))
- die(_("invalid path '%s'"), new_name);
-}
-
-/*
- * Check and apply the patch in-core; leave the result in patch->result
- * for the caller to write it out to the final destination.
- */
-static int check_patch(struct apply_state *state, struct patch *patch)
-{
- struct stat st;
- const char *old_name = patch->old_name;
- const char *new_name = patch->new_name;
- const char *name = old_name ? old_name : new_name;
- struct cache_entry *ce = NULL;
- struct patch *tpatch;
- int ok_if_exists;
- int status;
-
- patch->rejected = 1; /* we will drop this after we succeed */
-
- status = check_preimage(state, patch, &ce, &st);
- if (status)
- return status;
- old_name = patch->old_name;
-
- /*
- * A type-change diff is always split into a patch to delete
- * old, immediately followed by a patch to create new (see
- * diff.c::run_diff()); in such a case it is Ok that the entry
- * to be deleted by the previous patch is still in the working
- * tree and in the index.
- *
- * A patch to swap-rename between A and B would first rename A
- * to B and then rename B to A. While applying the first one,
- * the presence of B should not stop A from getting renamed to
- * B; ask to_be_deleted() about the later rename. Removal of
- * B and rename from A to B is handled the same way by asking
- * was_deleted().
- */
- if ((tpatch = in_fn_table(state, new_name)) &&
- (was_deleted(tpatch) || to_be_deleted(tpatch)))
- ok_if_exists = 1;
- else
- ok_if_exists = 0;
-
- if (new_name &&
- ((0 < patch->is_new) || patch->is_rename || patch->is_copy)) {
- int err = check_to_create(state, new_name, ok_if_exists);
-
- if (err && state->threeway) {
- patch->direct_to_threeway = 1;
- } else switch (err) {
- case 0:
- break; /* happy */
- case EXISTS_IN_INDEX:
- return error(_("%s: already exists in index"), new_name);
- break;
- case EXISTS_IN_WORKTREE:
- return error(_("%s: already exists in working directory"),
- new_name);
- default:
- return err;
- }
-
- if (!patch->new_mode) {
- if (0 < patch->is_new)
- patch->new_mode = S_IFREG | 0644;
- else
- patch->new_mode = patch->old_mode;
- }
- }
-
- if (new_name && old_name) {
- int same = !strcmp(old_name, new_name);
- if (!patch->new_mode)
- patch->new_mode = patch->old_mode;
- if ((patch->old_mode ^ patch->new_mode) & S_IFMT) {
- if (same)
- return error(_("new mode (%o) of %s does not "
- "match old mode (%o)"),
- patch->new_mode, new_name,
- patch->old_mode);
- else
- return error(_("new mode (%o) of %s does not "
- "match old mode (%o) of %s"),
- patch->new_mode, new_name,
- patch->old_mode, old_name);
- }
- }
-
- if (!state->unsafe_paths)
- die_on_unsafe_path(patch);
-
- /*
- * An attempt to read from or delete a path that is beyond a
- * symbolic link will be prevented by load_patch_target() that
- * is called at the beginning of apply_data() so we do not
- * have to worry about a patch marked with "is_delete" bit
- * here. We however need to make sure that the patch result
- * is not deposited to a path that is beyond a symbolic link
- * here.
- */
- if (!patch->is_delete && path_is_beyond_symlink(state, patch->new_name))
- return error(_("affected file '%s' is beyond a symbolic link"),
- patch->new_name);
-
- if (apply_data(state, patch, &st, ce) < 0)
- return error(_("%s: patch does not apply"), name);
- patch->rejected = 0;
- return 0;
-}
-
-static int check_patch_list(struct apply_state *state, struct patch *patch)
-{
- int err = 0;
-
- prepare_symlink_changes(state, patch);
- prepare_fn_table(state, patch);
- while (patch) {
- if (state->apply_verbosely)
- say_patch_name(stderr,
- _("Checking patch %s..."), patch);
- err |= check_patch(state, patch);
- patch = patch->next;
- }
- return err;
-}
-
-/* This function tries to read the sha1 from the current index */
-static int get_current_sha1(const char *path, unsigned char *sha1)
-{
- int pos;
-
- if (read_cache() < 0)
- return -1;
- pos = cache_name_pos(path, strlen(path));
- if (pos < 0)
- return -1;
- hashcpy(sha1, active_cache[pos]->sha1);
- return 0;
-}
-
-static int preimage_sha1_in_gitlink_patch(struct patch *p, unsigned char sha1[20])
-{
- /*
- * A usable gitlink patch has only one fragment (hunk) that looks like:
- * @@ -1 +1 @@
- * -Subproject commit <old sha1>
- * +Subproject commit <new sha1>
- * or
- * @@ -1 +0,0 @@
- * -Subproject commit <old sha1>
- * for a removal patch.
- */
- struct fragment *hunk = p->fragments;
- static const char heading[] = "-Subproject commit ";
- char *preimage;
-
- if (/* does the patch have only one hunk? */
- hunk && !hunk->next &&
- /* is its preimage one line? */
- hunk->oldpos == 1 && hunk->oldlines == 1 &&
- /* does preimage begin with the heading? */
- (preimage = memchr(hunk->patch, '\n', hunk->size)) != NULL &&
- starts_with(++preimage, heading) &&
- /* does it record full SHA-1? */
- !get_sha1_hex(preimage + sizeof(heading) - 1, sha1) &&
- preimage[sizeof(heading) + 40 - 1] == '\n' &&
- /* does the abbreviated name on the index line agree with it? */
- starts_with(preimage + sizeof(heading) - 1, p->old_sha1_prefix))
- return 0; /* it all looks fine */
-
- /* we may have full object name on the index line */
- return get_sha1_hex(p->old_sha1_prefix, sha1);
-}
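
preimage_sha1_in_gitlink_patch() accepts a hunk only if, after the "@@" header line, the preimage is exactly "-Subproject commit <40-hex>". A rough stand-alone sketch of that shape check; it assumes a NUL-terminated hunk and uses a simplistic hex test in place of get_sha1_hex():

    #include <stdio.h>
    #include <string.h>
    #include <ctype.h>

    static int is_hex40(const char *s)
    {
        for (int i = 0; i < 40; i++)
            if (!isxdigit((unsigned char)s[i]))
                return 0;
        return 1;
    }

    /* Returns 1 and copies the old commit name if the hunk is a gitlink preimage. */
    static int gitlink_preimage(const char *hunk, char out[41])
    {
        static const char heading[] = "-Subproject commit ";
        const char *p = strchr(hunk, '\n');          /* skip the "@@ ..." line */

        if (!p || strncmp(++p, heading, strlen(heading)))
            return 0;
        p += strlen(heading);
        if (!is_hex40(p) || p[40] != '\n')
            return 0;
        memcpy(out, p, 40);
        out[40] = '\0';
        return 1;
    }

    int main(void)
    {
        char sha1[41];
        const char *hunk =
            "@@ -1 +1 @@\n"
            "-Subproject commit 1234567890123456789012345678901234567890\n"
            "+Subproject commit aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa\n";
        if (gitlink_preimage(hunk, sha1))
            printf("old gitlink commit: %s\n", sha1);
        return 0;
    }
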
-
-/* Build an index that contains just the files needed for a 3-way merge */
-static void build_fake_ancestor(struct patch *list, const char *filename)
-{
- struct patch *patch;
- struct index_state result = { NULL };
- static struct lock_file lock;
-
- /* Once we start supporting the reverse patch, it may be
- * worth showing the new sha1 prefix, but until then...
- */
- for (patch = list; patch; patch = patch->next) {
- unsigned char sha1[20];
- struct cache_entry *ce;
- const char *name;
-
- name = patch->old_name ? patch->old_name : patch->new_name;
- if (0 < patch->is_new)
- continue;
-
- if (S_ISGITLINK(patch->old_mode)) {
- if (!preimage_sha1_in_gitlink_patch(patch, sha1))
- ; /* ok, the textual part looks sane */
- else
- die("sha1 information is lacking or useless for submodule %s",
- name);
- } else if (!get_sha1_blob(patch->old_sha1_prefix, sha1)) {
- ; /* ok */
- } else if (!patch->lines_added && !patch->lines_deleted) {
- /* mode-only change: update the current */
- if (get_current_sha1(patch->old_name, sha1))
- die("mode change for %s, which is not "
- "in current HEAD", name);
- } else
- die("sha1 information is lacking or useless "
- "(%s).", name);
-
- ce = make_cache_entry(patch->old_mode, sha1, name, 0, 0);
- if (!ce)
- die(_("make_cache_entry failed for path '%s'"), name);
- if (add_index_entry(&result, ce, ADD_CACHE_OK_TO_ADD))
- die ("Could not add %s to temporary index", name);
- }
-
- hold_lock_file_for_update(&lock, filename, LOCK_DIE_ON_ERROR);
- if (write_locked_index(&result, &lock, COMMIT_LOCK))
- die ("Could not write temporary index to %s", filename);
-
- discard_index(&result);
-}
-
-static void stat_patch_list(struct apply_state *state, struct patch *patch)
-{
- int files, adds, dels;
-
- for (files = adds = dels = 0 ; patch ; patch = patch->next) {
- files++;
- adds += patch->lines_added;
- dels += patch->lines_deleted;
- show_stats(state, patch);
- }
-
- print_stat_summary(stdout, files, adds, dels);
-}
-
-static void numstat_patch_list(struct apply_state *state,
- struct patch *patch)
-{
- for ( ; patch; patch = patch->next) {
- const char *name;
- name = patch->new_name ? patch->new_name : patch->old_name;
- if (patch->is_binary)
- printf("-\t-\t");
- else
- printf("%d\t%d\t", patch->lines_added, patch->lines_deleted);
- write_name_quoted(name, stdout, state->line_termination);
- }
-}
-
-static void show_file_mode_name(const char *newdelete, unsigned int mode, const char *name)
-{
- if (mode)
- printf(" %s mode %06o %s\n", newdelete, mode, name);
- else
- printf(" %s %s\n", newdelete, name);
-}
-
-static void show_mode_change(struct patch *p, int show_name)
-{
- if (p->old_mode && p->new_mode && p->old_mode != p->new_mode) {
- if (show_name)
- printf(" mode change %06o => %06o %s\n",
- p->old_mode, p->new_mode, p->new_name);
- else
- printf(" mode change %06o => %06o\n",
- p->old_mode, p->new_mode);
- }
-}
-
-static void show_rename_copy(struct patch *p)
-{
- const char *renamecopy = p->is_rename ? "rename" : "copy";
- const char *old, *new;
-
- /* Find common prefix */
- old = p->old_name;
- new = p->new_name;
- while (1) {
- const char *slash_old, *slash_new;
- slash_old = strchr(old, '/');
- slash_new = strchr(new, '/');
- if (!slash_old ||
- !slash_new ||
- slash_old - old != slash_new - new ||
- memcmp(old, new, slash_new - new))
- break;
- old = slash_old + 1;
- new = slash_new + 1;
- }
-	/* p->old_name through old is the common prefix, and old and new
- * through the end of names are renames
- */
- if (old != p->old_name)
- printf(" %s %.*s{%s => %s} (%d%%)\n", renamecopy,
- (int)(old - p->old_name), p->old_name,
- old, new, p->score);
- else
- printf(" %s %s => %s (%d%%)\n", renamecopy,
- p->old_name, p->new_name, p->score);
- show_mode_change(p, 0);
-}
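
show_rename_copy() trims the common leading directories of the two names so that a rename inside dir/sub/ prints as "dir/sub/{old => new}". A cut-down version of that prefix scan:

    #include <stdio.h>
    #include <string.h>

    static void print_rename(const char *old, const char *new)
    {
        const char *o = old, *n = new;

        for (;;) {
            const char *so = strchr(o, '/');
            const char *sn = strchr(n, '/');
            if (!so || !sn ||
                so - o != sn - n || memcmp(o, n, sn - n))
                break;                    /* components no longer identical */
            o = so + 1;
            n = sn + 1;
        }
        if (o != old)
            printf(" rename %.*s{%s => %s}\n", (int)(o - old), old, o, n);
        else
            printf(" rename %s => %s\n", old, new);
    }

    int main(void)
    {
        print_rename("dir/sub/old.c", "dir/sub/new.c"); /* dir/sub/{old.c => new.c} */
        print_rename("Makefile", "GNUmakefile");        /* Makefile => GNUmakefile  */
        return 0;
    }
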
-
-static void summary_patch_list(struct patch *patch)
-{
- struct patch *p;
-
- for (p = patch; p; p = p->next) {
- if (p->is_new)
- show_file_mode_name("create", p->new_mode, p->new_name);
- else if (p->is_delete)
- show_file_mode_name("delete", p->old_mode, p->old_name);
- else {
- if (p->is_rename || p->is_copy)
- show_rename_copy(p);
- else {
- if (p->score) {
- printf(" rewrite %s (%d%%)\n",
- p->new_name, p->score);
- show_mode_change(p, 0);
- }
- else
- show_mode_change(p, 1);
- }
- }
- }
-}
-
-static void patch_stats(struct apply_state *state, struct patch *patch)
-{
- int lines = patch->lines_added + patch->lines_deleted;
-
- if (lines > state->max_change)
- state->max_change = lines;
- if (patch->old_name) {
- int len = quote_c_style(patch->old_name, NULL, NULL, 0);
- if (!len)
- len = strlen(patch->old_name);
- if (len > state->max_len)
- state->max_len = len;
- }
- if (patch->new_name) {
- int len = quote_c_style(patch->new_name, NULL, NULL, 0);
- if (!len)
- len = strlen(patch->new_name);
- if (len > state->max_len)
- state->max_len = len;
- }
-}
-
-static void remove_file(struct apply_state *state, struct patch *patch, int rmdir_empty)
-{
- if (state->update_index) {
- if (remove_file_from_cache(patch->old_name) < 0)
- die(_("unable to remove %s from index"), patch->old_name);
- }
- if (!state->cached) {
- if (!remove_or_warn(patch->old_mode, patch->old_name) && rmdir_empty) {
- remove_path(patch->old_name);
- }
- }
-}
-
-static void add_index_file(struct apply_state *state,
- const char *path,
- unsigned mode,
- void *buf,
- unsigned long size)
-{
- struct stat st;
- struct cache_entry *ce;
- int namelen = strlen(path);
- unsigned ce_size = cache_entry_size(namelen);
-
- if (!state->update_index)
- return;
-
- ce = xcalloc(1, ce_size);
- memcpy(ce->name, path, namelen);
- ce->ce_mode = create_ce_mode(mode);
- ce->ce_flags = create_ce_flags(0);
- ce->ce_namelen = namelen;
- if (S_ISGITLINK(mode)) {
- const char *s;
-
- if (!skip_prefix(buf, "Subproject commit ", &s) ||
- get_sha1_hex(s, ce->sha1))
- die(_("corrupt patch for submodule %s"), path);
- } else {
- if (!state->cached) {
- if (lstat(path, &st) < 0)
- die_errno(_("unable to stat newly created file '%s'"),
- path);
- fill_stat_cache_info(ce, &st);
- }
- if (write_sha1_file(buf, size, blob_type, ce->sha1) < 0)
- die(_("unable to create backing store for newly created file %s"), path);
- }
- if (add_cache_entry(ce, ADD_CACHE_OK_TO_ADD) < 0)
- die(_("unable to add cache entry for %s"), path);
-}
-
-static int try_create_file(const char *path, unsigned int mode, const char *buf, unsigned long size)
-{
- int fd;
- struct strbuf nbuf = STRBUF_INIT;
-
- if (S_ISGITLINK(mode)) {
- struct stat st;
- if (!lstat(path, &st) && S_ISDIR(st.st_mode))
- return 0;
- return mkdir(path, 0777);
- }
-
- if (has_symlinks && S_ISLNK(mode))
-		/* Although buf:size is a counted string, it is also
-		 * NUL-terminated.
-		 */
- return symlink(buf, path);
-
- fd = open(path, O_CREAT | O_EXCL | O_WRONLY, (mode & 0100) ? 0777 : 0666);
- if (fd < 0)
- return -1;
-
- if (convert_to_working_tree(path, buf, size, &nbuf)) {
- size = nbuf.len;
- buf = nbuf.buf;
- }
- write_or_die(fd, buf, size);
- strbuf_release(&nbuf);
-
- if (close(fd) < 0)
- die_errno(_("closing file '%s'"), path);
- return 0;
-}
-
-/*
- * We optimistically assume that the directories exist,
- * which is true 99% of the time anyway. If they don't,
- * we create them and try again.
- */
-static void create_one_file(struct apply_state *state,
- char *path,
- unsigned mode,
- const char *buf,
- unsigned long size)
-{
- if (state->cached)
- return;
- if (!try_create_file(path, mode, buf, size))
- return;
-
- if (errno == ENOENT) {
- if (safe_create_leading_directories(path))
- return;
- if (!try_create_file(path, mode, buf, size))
- return;
- }
-
- if (errno == EEXIST || errno == EACCES) {
- /* We may be trying to create a file where a directory
- * used to be.
- */
- struct stat st;
- if (!lstat(path, &st) && (!S_ISDIR(st.st_mode) || !rmdir(path)))
- errno = EEXIST;
- }
-
- if (errno == EEXIST) {
- unsigned int nr = getpid();
-
- for (;;) {
- char newpath[PATH_MAX];
- mksnpath(newpath, sizeof(newpath), "%s~%u", path, nr);
- if (!try_create_file(newpath, mode, buf, size)) {
- if (!rename(newpath, path))
- return;
- unlink_or_warn(newpath);
- break;
- }
- if (errno != EEXIST)
- break;
- ++nr;
- }
- }
- die_errno(_("unable to write file '%s' mode %o"), path, mode);
-}
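
create_one_file()'s last resort when the destination already exists is to write the content under a "path~<number>" name and rename() it over the original, which replaces the file atomically on POSIX filesystems. A bare-bones sketch of that pattern, with the real content writing reduced to a single fputs():

    #include <stdio.h>
    #include <unistd.h>

    /* Write "content" to "path" by going through path~<pid> and renaming over it. */
    static int write_via_tempname(const char *path, const char *content)
    {
        char tmp[4096];
        snprintf(tmp, sizeof(tmp), "%s~%u", path, (unsigned)getpid());

        FILE *f = fopen(tmp, "w");
        if (!f)
            return -1;
        fputs(content, f);
        if (fclose(f))
            return -1;
        if (rename(tmp, path)) {     /* atomically replace the existing file */
            unlink(tmp);
            return -1;
        }
        return 0;
    }

    int main(void)
    {
        return write_via_tempname("demo.txt", "hello\n") ? 1 : 0;
    }
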
-
-static void add_conflicted_stages_file(struct apply_state *state,
- struct patch *patch)
-{
- int stage, namelen;
- unsigned ce_size, mode;
- struct cache_entry *ce;
-
- if (!state->update_index)
- return;
- namelen = strlen(patch->new_name);
- ce_size = cache_entry_size(namelen);
- mode = patch->new_mode ? patch->new_mode : (S_IFREG | 0644);
-
- remove_file_from_cache(patch->new_name);
- for (stage = 1; stage < 4; stage++) {
- if (is_null_oid(&patch->threeway_stage[stage - 1]))
- continue;
- ce = xcalloc(1, ce_size);
- memcpy(ce->name, patch->new_name, namelen);
- ce->ce_mode = create_ce_mode(mode);
- ce->ce_flags = create_ce_flags(stage);
- ce->ce_namelen = namelen;
- hashcpy(ce->sha1, patch->threeway_stage[stage - 1].hash);
- if (add_cache_entry(ce, ADD_CACHE_OK_TO_ADD) < 0)
- die(_("unable to add cache entry for %s"), patch->new_name);
- }
-}
-
-static void create_file(struct apply_state *state, struct patch *patch)
-{
- char *path = patch->new_name;
- unsigned mode = patch->new_mode;
- unsigned long size = patch->resultsize;
- char *buf = patch->result;
-
- if (!mode)
- mode = S_IFREG | 0644;
- create_one_file(state, path, mode, buf, size);
-
- if (patch->conflicted_threeway)
- add_conflicted_stages_file(state, patch);
- else
- add_index_file(state, path, mode, buf, size);
-}
-
-/* phase zero is to remove, phase one is to create */
-static void write_out_one_result(struct apply_state *state,
- struct patch *patch,
- int phase)
-{
- if (patch->is_delete > 0) {
- if (phase == 0)
- remove_file(state, patch, 1);
- return;
- }
- if (patch->is_new > 0 || patch->is_copy) {
- if (phase == 1)
- create_file(state, patch);
- return;
- }
- /*
- * Rename or modification boils down to the same
- * thing: remove the old, write the new
- */
- if (phase == 0)
- remove_file(state, patch, patch->is_rename);
- if (phase == 1)
- create_file(state, patch);
-}
-
-static int write_out_one_reject(struct apply_state *state, struct patch *patch)
-{
- FILE *rej;
- char namebuf[PATH_MAX];
- struct fragment *frag;
- int cnt = 0;
- struct strbuf sb = STRBUF_INIT;
-
- for (cnt = 0, frag = patch->fragments; frag; frag = frag->next) {
- if (!frag->rejected)
- continue;
- cnt++;
- }
-
- if (!cnt) {
- if (state->apply_verbosely)
- say_patch_name(stderr,
- _("Applied patch %s cleanly."), patch);
- return 0;
- }
-
- /* This should not happen, because a removal patch that leaves
-	 * contents is marked "rejected" at the patch level.
- */
- if (!patch->new_name)
- die(_("internal error"));
-
- /* Say this even without --verbose */
- strbuf_addf(&sb, Q_("Applying patch %%s with %d reject...",
- "Applying patch %%s with %d rejects...",
- cnt),
- cnt);
- say_patch_name(stderr, sb.buf, patch);
- strbuf_release(&sb);
-
- cnt = strlen(patch->new_name);
- if (ARRAY_SIZE(namebuf) <= cnt + 5) {
- cnt = ARRAY_SIZE(namebuf) - 5;
- warning(_("truncating .rej filename to %.*s.rej"),
- cnt - 1, patch->new_name);
- }
- memcpy(namebuf, patch->new_name, cnt);
- memcpy(namebuf + cnt, ".rej", 5);
-
- rej = fopen(namebuf, "w");
- if (!rej)
- return error(_("cannot open %s: %s"), namebuf, strerror(errno));
-
- /* Normal git tools never deal with .rej, so do not pretend
- * this is a git patch by saying --git or giving extended
- * headers. While at it, maybe please "kompare" that wants
- * the trailing TAB and some garbage at the end of line ;-).
- */
- fprintf(rej, "diff a/%s b/%s\t(rejected hunks)\n",
- patch->new_name, patch->new_name);
- for (cnt = 1, frag = patch->fragments;
- frag;
- cnt++, frag = frag->next) {
- if (!frag->rejected) {
- fprintf_ln(stderr, _("Hunk #%d applied cleanly."), cnt);
- continue;
- }
- fprintf_ln(stderr, _("Rejected hunk #%d."), cnt);
- fprintf(rej, "%.*s", frag->size, frag->patch);
- if (frag->patch[frag->size-1] != '\n')
- fputc('\n', rej);
- }
- fclose(rej);
- return -1;
-}
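
write_out_one_reject() builds the reject file name by appending ".rej" to the new path, truncating over-long names so that the suffix and terminating NUL still fit in the fixed-size buffer. The same computation in isolation (the output buffer is assumed to hold at least five bytes):

    #include <stdio.h>
    #include <string.h>

    static void rej_name(const char *new_name, char *out, size_t outsize)
    {
        size_t len = strlen(new_name);

        if (outsize <= len + 5) {           /* need room for ".rej" + NUL */
            len = outsize - 5;
            fprintf(stderr, "truncating .rej filename to %.*s.rej\n",
                    (int)len, new_name);
        }
        memcpy(out, new_name, len);
        memcpy(out + len, ".rej", 5);
    }

    int main(void)
    {
        char buf[16];
        rej_name("hello.c", buf, sizeof(buf));
        printf("%s\n", buf);                /* hello.c.rej */
        rej_name("a/very/long/path/name.c", buf, sizeof(buf));
        printf("%s\n", buf);                /* truncated name + .rej */
        return 0;
    }
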
-
-static int write_out_results(struct apply_state *state, struct patch *list)
-{
- int phase;
- int errs = 0;
- struct patch *l;
- struct string_list cpath = STRING_LIST_INIT_DUP;
-
- for (phase = 0; phase < 2; phase++) {
- l = list;
- while (l) {
- if (l->rejected)
- errs = 1;
- else {
- write_out_one_result(state, l, phase);
- if (phase == 1) {
- if (write_out_one_reject(state, l))
- errs = 1;
- if (l->conflicted_threeway) {
- string_list_append(&cpath, l->new_name);
- errs = 1;
- }
- }
- }
- l = l->next;
- }
- }
-
- if (cpath.nr) {
- struct string_list_item *item;
-
- string_list_sort(&cpath);
- for_each_string_list_item(item, &cpath)
- fprintf(stderr, "U %s\n", item->string);
- string_list_clear(&cpath, 0);
-
- rerere(0);
- }
+#include "builtin.h"
+#include "parse-options.h"
+#include "lockfile.h"
+#include "apply.h"
- return errs;
-}
+static const char * const apply_usage[] = {
+ N_("git apply [<options>] [<patch>...]"),
+ NULL
+};
static struct lock_file lock_file;
-#define INACCURATE_EOF (1<<0)
-#define RECOUNT (1<<1)
-
-static int apply_patch(struct apply_state *state,
- int fd,
- const char *filename,
- int options)
-{
- size_t offset;
- struct strbuf buf = STRBUF_INIT; /* owns the patch text */
- struct patch *list = NULL, **listp = &list;
- int skipped_patch = 0;
-
- state->patch_input_file = filename;
- read_patch_file(&buf, fd);
- offset = 0;
- while (offset < buf.len) {
- struct patch *patch;
- int nr;
-
- patch = xcalloc(1, sizeof(*patch));
- patch->inaccurate_eof = !!(options & INACCURATE_EOF);
- patch->recount = !!(options & RECOUNT);
- nr = parse_chunk(state, buf.buf + offset, buf.len - offset, patch);
- if (nr < 0) {
- free_patch(patch);
- break;
- }
- if (state->apply_in_reverse)
- reverse_patches(patch);
- if (use_patch(state, patch)) {
- patch_stats(state, patch);
- *listp = patch;
- listp = &patch->next;
- }
- else {
- if (state->apply_verbosely)
- say_patch_name(stderr, _("Skipped patch '%s'."), patch);
- free_patch(patch);
- skipped_patch++;
- }
- offset += nr;
- }
-
- if (!list && !skipped_patch)
- die(_("unrecognized input"));
-
- if (state->whitespace_error && (state->ws_error_action == die_on_ws_error))
- state->apply = 0;
-
- state->update_index = state->check_index && state->apply;
- if (state->update_index && state->newfd < 0)
- state->newfd = hold_locked_index(state->lock_file, 1);
-
- if (state->check_index) {
- if (read_cache() < 0)
- die(_("unable to read index file"));
- }
-
- if ((state->check || state->apply) &&
- check_patch_list(state, list) < 0 &&
- !state->apply_with_reject)
- exit(1);
-
- if (state->apply && write_out_results(state, list)) {
- if (state->apply_with_reject)
- exit(1);
- /* with --3way, we still need to write the index out */
- return 1;
- }
-
- if (state->fake_ancestor)
- build_fake_ancestor(list, state->fake_ancestor);
-
- if (state->diffstat)
- stat_patch_list(state, list);
-
- if (state->numstat)
- numstat_patch_list(state, list);
-
- if (state->summary)
- summary_patch_list(list);
-
- free_patch_list(list);
- strbuf_release(&buf);
- string_list_clear(&state->fn_table, 0);
- return 0;
-}
-
-static void git_apply_config(void)
-{
- git_config_get_string_const("apply.whitespace", &apply_default_whitespace);
- git_config_get_string_const("apply.ignorewhitespace", &apply_default_ignorewhitespace);
- git_config(git_default_config, NULL);
-}
-
-static int option_parse_exclude(const struct option *opt,
- const char *arg, int unset)
-{
- struct apply_state *state = opt->value;
- add_name_limit(state, arg, 1);
- return 0;
-}
-
-static int option_parse_include(const struct option *opt,
- const char *arg, int unset)
-{
- struct apply_state *state = opt->value;
- add_name_limit(state, arg, 0);
- state->has_include = 1;
- return 0;
-}
-
-static int option_parse_p(const struct option *opt,
- const char *arg,
- int unset)
-{
- struct apply_state *state = opt->value;
- state->p_value = atoi(arg);
- state->p_value_known = 1;
- return 0;
-}
-
-static int option_parse_space_change(const struct option *opt,
- const char *arg, int unset)
-{
- struct apply_state *state = opt->value;
- if (unset)
- state->ws_ignore_action = ignore_ws_none;
- else
- state->ws_ignore_action = ignore_ws_change;
- return 0;
-}
-
-static int option_parse_whitespace(const struct option *opt,
- const char *arg, int unset)
-{
- struct apply_state *state = opt->value;
- state->whitespace_option = arg;
- parse_whitespace_option(state, arg);
- return 0;
-}
-
-static int option_parse_directory(const struct option *opt,
- const char *arg, int unset)
-{
- struct apply_state *state = opt->value;
- strbuf_reset(&state->root);
- strbuf_addstr(&state->root, arg);
- strbuf_complete(&state->root, '/');
- return 0;
-}
-
-static void init_apply_state(struct apply_state *state,
- const char *prefix,
- struct lock_file *lock_file)
-{
- memset(state, 0, sizeof(*state));
- state->prefix = prefix;
- state->prefix_length = state->prefix ? strlen(state->prefix) : 0;
- state->lock_file = lock_file;
- state->newfd = -1;
- state->apply = 1;
- state->line_termination = '\n';
- state->p_value = 1;
- state->p_context = UINT_MAX;
- state->squelch_whitespace_errors = 5;
- state->ws_error_action = warn_on_ws_error;
- state->ws_ignore_action = ignore_ws_none;
- state->linenr = 1;
- string_list_init(&state->fn_table, 0);
- string_list_init(&state->limit_by_name, 0);
- string_list_init(&state->symlink_changes, 0);
- strbuf_init(&state->root, 0);
-
- git_apply_config();
- if (apply_default_whitespace)
- parse_whitespace_option(state, apply_default_whitespace);
- if (apply_default_ignorewhitespace)
- parse_ignorewhitespace_option(state, apply_default_ignorewhitespace);
-}
-
-static void clear_apply_state(struct apply_state *state)
-{
- string_list_clear(&state->limit_by_name, 0);
- string_list_clear(&state->symlink_changes, 0);
- strbuf_release(&state->root);
-
- /* &state->fn_table is cleared at the end of apply_patch() */
-}
-
-static void check_apply_state(struct apply_state *state, int force_apply)
-{
- int is_not_gitdir = !startup_info->have_repository;
-
- if (state->apply_with_reject && state->threeway)
- die("--reject and --3way cannot be used together.");
- if (state->cached && state->threeway)
- die("--cached and --3way cannot be used together.");
- if (state->threeway) {
- if (is_not_gitdir)
- die(_("--3way outside a repository"));
- state->check_index = 1;
- }
- if (state->apply_with_reject)
- state->apply = state->apply_verbosely = 1;
- if (!force_apply && (state->diffstat || state->numstat || state->summary || state->check || state->fake_ancestor))
- state->apply = 0;
- if (state->check_index && is_not_gitdir)
- die(_("--index outside a repository"));
- if (state->cached) {
- if (is_not_gitdir)
- die(_("--cached outside a repository"));
- state->check_index = 1;
- }
- if (state->check_index)
- state->unsafe_paths = 0;
- if (!state->lock_file)
- die("BUG: state->lock_file should not be NULL");
-}
-
-static int apply_all_patches(struct apply_state *state,
- int argc,
- const char **argv,
- int options)
-{
- int i;
- int errs = 0;
- int read_stdin = 1;
-
- for (i = 0; i < argc; i++) {
- const char *arg = argv[i];
- int fd;
-
- if (!strcmp(arg, "-")) {
- errs |= apply_patch(state, 0, "<stdin>", options);
- read_stdin = 0;
- continue;
- } else if (0 < state->prefix_length)
- arg = prefix_filename(state->prefix,
- state->prefix_length,
- arg);
-
- fd = open(arg, O_RDONLY);
- if (fd < 0)
- die_errno(_("can't open patch '%s'"), arg);
- read_stdin = 0;
- set_default_whitespace_mode(state);
- errs |= apply_patch(state, fd, arg, options);
- close(fd);
- }
- set_default_whitespace_mode(state);
- if (read_stdin)
- errs |= apply_patch(state, 0, "<stdin>", options);
-
- if (state->whitespace_error) {
- if (state->squelch_whitespace_errors &&
- state->squelch_whitespace_errors < state->whitespace_error) {
- int squelched =
- state->whitespace_error - state->squelch_whitespace_errors;
- warning(Q_("squelched %d whitespace error",
- "squelched %d whitespace errors",
- squelched),
- squelched);
- }
- if (state->ws_error_action == die_on_ws_error)
- die(Q_("%d line adds whitespace errors.",
- "%d lines add whitespace errors.",
- state->whitespace_error),
- state->whitespace_error);
- if (state->applied_after_fixing_ws && state->apply)
- warning("%d line%s applied after"
- " fixing whitespace errors.",
- state->applied_after_fixing_ws,
- state->applied_after_fixing_ws == 1 ? "" : "s");
- else if (state->whitespace_error)
- warning(Q_("%d line adds whitespace errors.",
- "%d lines add whitespace errors.",
- state->whitespace_error),
- state->whitespace_error);
- }
-
- if (state->update_index) {
- if (write_locked_index(&the_index, state->lock_file, COMMIT_LOCK))
- die(_("Unable to write new index file"));
- state->newfd = -1;
- }
-
- return !!errs;
-}
-
int cmd_apply(int argc, const char **argv, const char *prefix)
{
int force_apply = 0;
int ret;
struct apply_state state;
- struct option builtin_apply_options[] = {
- { OPTION_CALLBACK, 0, "exclude", &state, N_("path"),
- N_("don't apply changes matching the given path"),
- 0, option_parse_exclude },
- { OPTION_CALLBACK, 0, "include", &state, N_("path"),
- N_("apply changes matching the given path"),
- 0, option_parse_include },
- { OPTION_CALLBACK, 'p', NULL, &state, N_("num"),
- N_("remove <num> leading slashes from traditional diff paths"),
- 0, option_parse_p },
- OPT_BOOL(0, "no-add", &state.no_add,
- N_("ignore additions made by the patch")),
- OPT_BOOL(0, "stat", &state.diffstat,
- N_("instead of applying the patch, output diffstat for the input")),
- OPT_NOOP_NOARG(0, "allow-binary-replacement"),
- OPT_NOOP_NOARG(0, "binary"),
- OPT_BOOL(0, "numstat", &state.numstat,
- N_("show number of added and deleted lines in decimal notation")),
- OPT_BOOL(0, "summary", &state.summary,
- N_("instead of applying the patch, output a summary for the input")),
- OPT_BOOL(0, "check", &state.check,
- N_("instead of applying the patch, see if the patch is applicable")),
- OPT_BOOL(0, "index", &state.check_index,
- N_("make sure the patch is applicable to the current index")),
- OPT_BOOL(0, "cached", &state.cached,
- N_("apply a patch without touching the working tree")),
- OPT_BOOL(0, "unsafe-paths", &state.unsafe_paths,
- N_("accept a patch that touches outside the working area")),
- OPT_BOOL(0, "apply", &force_apply,
- N_("also apply the patch (use with --stat/--summary/--check)")),
- OPT_BOOL('3', "3way", &state.threeway,
- N_( "attempt three-way merge if a patch does not apply")),
- OPT_FILENAME(0, "build-fake-ancestor", &state.fake_ancestor,
- N_("build a temporary index based on embedded index information")),
- /* Think twice before adding "--nul" synonym to this */
- OPT_SET_INT('z', NULL, &state.line_termination,
- N_("paths are separated with NUL character"), '\0'),
- OPT_INTEGER('C', NULL, &state.p_context,
- N_("ensure at least <n> lines of context match")),
- { OPTION_CALLBACK, 0, "whitespace", &state, N_("action"),
- N_("detect new or modified lines that have whitespace errors"),
- 0, option_parse_whitespace },
- { OPTION_CALLBACK, 0, "ignore-space-change", &state, NULL,
- N_("ignore changes in whitespace when finding context"),
- PARSE_OPT_NOARG, option_parse_space_change },
- { OPTION_CALLBACK, 0, "ignore-whitespace", &state, NULL,
- N_("ignore changes in whitespace when finding context"),
- PARSE_OPT_NOARG, option_parse_space_change },
- OPT_BOOL('R', "reverse", &state.apply_in_reverse,
- N_("apply the patch in reverse")),
- OPT_BOOL(0, "unidiff-zero", &state.unidiff_zero,
- N_("don't expect at least one line of context")),
- OPT_BOOL(0, "reject", &state.apply_with_reject,
- N_("leave the rejected hunks in corresponding *.rej files")),
- OPT_BOOL(0, "allow-overlap", &state.allow_overlap,
- N_("allow overlapping hunks")),
- OPT__VERBOSE(&state.apply_verbosely, N_("be verbose")),
- OPT_BIT(0, "inaccurate-eof", &options,
- N_("tolerate incorrectly detected missing new-line at the end of file"),
- INACCURATE_EOF),
- OPT_BIT(0, "recount", &options,
- N_("do not trust the line counts in the hunk headers"),
- RECOUNT),
- { OPTION_CALLBACK, 0, "directory", &state, N_("root"),
- N_("prepend <root> to all filenames"),
- 0, option_parse_directory },
- OPT_END()
- };
-
- init_apply_state(&state, prefix, &lock_file);
+ if (init_apply_state(&state, prefix, &lock_file))
+ exit(128);
- argc = parse_options(argc, argv, state.prefix, builtin_apply_options,
- apply_usage, 0);
+ argc = apply_parse_options(argc, argv,
+ &state, &force_apply, &options,
+ apply_usage);
- check_apply_state(&state, force_apply);
+ if (check_apply_state(&state, force_apply))
+ exit(128);
ret = apply_all_patches(&state, argc, argv, options);
if (name_hint) {
const char *format = archive_format_from_filename(name_hint);
if (format)
- packet_write(fd[1], "argument --format=%s\n", format);
+ packet_write_fmt(fd[1], "argument --format=%s\n", format);
}
for (i = 1; i < argc; i++)
- packet_write(fd[1], "argument %s\n", argv[i]);
+ packet_write_fmt(fd[1], "argument %s\n", argv[i]);
packet_flush(fd[1]);
buf = packet_read_line(fd[0], NULL);
*/
struct blame_entry *suspects;
mmfile_t file;
- unsigned char blob_sha1[20];
+ struct object_id blob_oid;
unsigned mode;
/* guilty gets set when shipping any suspects to the final
* blame list instead of other commits
*/
int textconv_object(const char *path,
unsigned mode,
- const unsigned char *sha1,
- int sha1_valid,
+ const struct object_id *oid,
+ int oid_valid,
char **buf,
unsigned long *buf_size)
{
struct userdiff_driver *textconv;
df = alloc_filespec(path);
- fill_filespec(df, sha1, sha1_valid, mode);
+ fill_filespec(df, oid->hash, oid_valid, mode);
textconv = get_textconv(df);
if (!textconv) {
free_filespec(df);
num_read_blob++;
if (DIFF_OPT_TST(opt, ALLOW_TEXTCONV) &&
- textconv_object(o->path, o->mode, o->blob_sha1, 1, &file->ptr, &file_size))
+ textconv_object(o->path, o->mode, &o->blob_oid, 1, &file->ptr, &file_size))
;
else
- file->ptr = read_sha1_file(o->blob_sha1, &type, &file_size);
+ file->ptr = read_sha1_file(o->blob_oid.hash, &type,
+ &file_size);
file->size = file_size;
if (!file->ptr)
die("Cannot read blob %s for path %s",
- sha1_to_hex(o->blob_sha1),
+ oid_to_hex(&o->blob_oid),
o->path);
o->file = *file;
}
*/
static int fill_blob_sha1_and_mode(struct origin *origin)
{
- if (!is_null_sha1(origin->blob_sha1))
+ if (!is_null_oid(&origin->blob_oid))
return 0;
if (get_tree_entry(origin->commit->object.oid.hash,
origin->path,
- origin->blob_sha1, &origin->mode))
+ origin->blob_oid.hash, &origin->mode))
goto error_out;
- if (sha1_object_info(origin->blob_sha1, NULL) != OBJ_BLOB)
+ if (sha1_object_info(origin->blob_oid.hash, NULL) != OBJ_BLOB)
goto error_out;
return 0;
error_out:
- hashclr(origin->blob_sha1);
+ oidclr(&origin->blob_oid);
origin->mode = S_IFINVALID;
return -1;
}
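
The blame hunks here, like the rest of this series, mechanically swap bare "unsigned char sha1[20]" buffers for "struct object_id". A minimal sketch of the idiom, assuming git's internal cache.h; the helper below is illustrative only, but the oidcpy()/oidcmp()/is_null_oid()/oid_to_hex() calls are the same API used in the surrounding hunks.

	#include "cache.h"	/* struct object_id, oidcpy(), oidcmp(), is_null_oid(), oid_to_hex() */

	/* Illustrative helper (not part of the patch): compare two blob names
	 * using the object_id helpers that replace the old sha1[20] calls. */
	static void report_blob_change(const struct object_id *old, const struct object_id *new)
	{
		struct object_id copy;

		oidcpy(&copy, old);		/* was: hashcpy(copy, old)   */
		if (is_null_oid(&copy))		/* was: is_null_sha1(copy)   */
			return;
		if (oidcmp(&copy, new))		/* was: hashcmp(copy, new)   */
			printf("blob %s became %s\n",
			       oid_to_hex(&copy), oid_to_hex(new));
	}
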
if (!diff_queued_diff.nr) {
/* The path is the same as parent */
porigin = get_origin(sb, parent, origin->path);
- hashcpy(porigin->blob_sha1, origin->blob_sha1);
+ oidcpy(&porigin->blob_oid, &origin->blob_oid);
porigin->mode = origin->mode;
} else {
/*
p->status);
case 'M':
porigin = get_origin(sb, parent, origin->path);
- hashcpy(porigin->blob_sha1, p->one->oid.hash);
+ oidcpy(&porigin->blob_oid, &p->one->oid);
porigin->mode = p->one->mode;
break;
case 'A':
if ((p->status == 'R' || p->status == 'C') &&
!strcmp(p->two->path, origin->path)) {
porigin = get_origin(sb, parent, p->one->path);
- hashcpy(porigin->blob_sha1, p->one->oid.hash);
+ oidcpy(&porigin->blob_oid, &p->one->oid);
porigin->mode = p->one->mode;
break;
}
continue;
norigin = get_origin(sb, parent, p->one->path);
- hashcpy(norigin->blob_sha1, p->one->oid.hash);
+ oidcpy(&norigin->blob_oid, &p->one->oid);
norigin->mode = p->one->mode;
fill_origin_blob(&sb->revs->diffopt, norigin, &file_p);
if (!file_p.ptr)
porigin = find(sb, p, origin);
if (!porigin)
continue;
- if (!hashcmp(porigin->blob_sha1, origin->blob_sha1)) {
+ if (!oidcmp(&porigin->blob_oid, &origin->blob_oid)) {
pass_whole_blame(sb, origin, porigin);
origin_decref(porigin);
goto finish;
}
for (j = same = 0; j < i; j++)
if (sg_origin[j] &&
- !hashcmp(sg_origin[j]->blob_sha1,
- porigin->blob_sha1)) {
+ !oidcmp(&sg_origin[j]->blob_oid, &porigin->blob_oid)) {
same = 1;
break;
}
cp = nth_line(sb, ent->lno);
for (cnt = 0; cnt < ent->num_lines; cnt++) {
char ch;
- int length = (opt & OUTPUT_LONG_OBJECT_NAME) ? 40 : abbrev;
+ int length = (opt & OUTPUT_LONG_OBJECT_NAME) ? GIT_SHA1_HEXSZ : abbrev;
if (suspect->commit->object.flags & UNINTERESTING) {
if (blank_boundary)
return 0;
}
+ if (git_diff_heuristic_config(var, value, cb) < 0)
+ return -1;
if (userdiff_config(var, value) < 0)
return -1;
int pos;
for (parents = work_tree->parents; parents; parents = parents->next) {
- const unsigned char *commit_sha1 = parents->item->object.oid.hash;
- unsigned char blob_sha1[20];
+ const struct object_id *commit_oid = &parents->item->object.oid;
+ struct object_id blob_oid;
unsigned mode;
- if (!get_tree_entry(commit_sha1, path, blob_sha1, &mode) &&
- sha1_object_info(blob_sha1, NULL) == OBJ_BLOB)
+ if (!get_tree_entry(commit_oid->hash, path, blob_oid.hash, &mode) &&
+ sha1_object_info(blob_oid.hash, NULL) == OBJ_BLOB)
return;
}
die("no such path '%s' in HEAD", path);
}
-static struct commit_list **append_parent(struct commit_list **tail, const unsigned char *sha1)
+static struct commit_list **append_parent(struct commit_list **tail, const struct object_id *oid)
{
struct commit *parent;
- parent = lookup_commit_reference(sha1);
+ parent = lookup_commit_reference(oid->hash);
if (!parent)
- die("no such commit %s", sha1_to_hex(sha1));
+ die("no such commit %s", oid_to_hex(oid));
return &commit_list_insert(parent, tail)->next;
}
}
while (!strbuf_getwholeline_fd(&line, merge_head, '\n')) {
- unsigned char sha1[20];
- if (line.len < 40 || get_sha1_hex(line.buf, sha1))
+ struct object_id oid;
+ if (line.len < GIT_SHA1_HEXSZ || get_oid_hex(line.buf, &oid))
die("unknown line in '%s': %s", git_path_merge_head(), line.buf);
- tail = append_parent(tail, sha1);
+ tail = append_parent(tail, &oid);
}
close(merge_head);
strbuf_release(&line);
struct commit *commit;
struct origin *origin;
struct commit_list **parent_tail, *parent;
- unsigned char head_sha1[20];
+ struct object_id head_oid;
struct strbuf buf = STRBUF_INIT;
const char *ident;
time_t now;
commit->date = now;
parent_tail = &commit->parents;
- if (!resolve_ref_unsafe("HEAD", RESOLVE_REF_READING, head_sha1, NULL))
+ if (!resolve_ref_unsafe("HEAD", RESOLVE_REF_READING, head_oid.hash, NULL))
die("no such ref: HEAD");
- parent_tail = append_parent(parent_tail, head_sha1);
+ parent_tail = append_parent(parent_tail, &head_oid);
append_merge_parents(parent_tail);
verify_working_tree_path(commit, path);
switch (st.st_mode & S_IFMT) {
case S_IFREG:
if (DIFF_OPT_TST(opt, ALLOW_TEXTCONV) &&
- textconv_object(read_from, mode, null_sha1, 0, &buf_ptr, &buf_len))
+ textconv_object(read_from, mode, &null_oid, 0, &buf_ptr, &buf_len))
strbuf_attach(&buf, buf_ptr, buf_len, buf_len + 1);
else if (strbuf_read_file(&buf, read_from, st.st_size) != st.st_size)
die_errno("cannot open or read '%s'", read_from);
convert_to_git(path, buf.buf, buf.len, &buf, 0);
origin->file.ptr = buf.buf;
origin->file.size = buf.len;
- pretend_sha1_file(buf.buf, buf.len, OBJ_BLOB, origin->blob_sha1);
+ pretend_sha1_file(buf.buf, buf.len, OBJ_BLOB, origin->blob_oid.hash);
/*
* Read the current index, replace the path entry with
}
size = cache_entry_size(len);
ce = xcalloc(1, size);
- hashcpy(ce->sha1, origin->blob_sha1);
+ oidcpy(&ce->oid, &origin->blob_oid);
memcpy(ce->name, path, len);
ce->ce_flags = create_ce_flags(0);
ce->ce_namelen = len;
return xstrdup_or_null(name);
}
+static const char *dwim_reverse_initial(struct scoreboard *sb)
+{
+ /*
+ * DWIM "git blame --reverse ONE -- PATH" as
+ * "git blame --reverse ONE..HEAD -- PATH" but only do so
+ * when it makes sense.
+ */
+ struct object *obj;
+ struct commit *head_commit;
+ unsigned char head_sha1[20];
+
+ if (sb->revs->pending.nr != 1)
+ return NULL;
+
+ /* Is that sole rev a committish? */
+ obj = sb->revs->pending.objects[0].item;
+ obj = deref_tag(obj, NULL, 0);
+ if (obj->type != OBJ_COMMIT)
+ return NULL;
+
+ /* Do we have HEAD? */
+ if (!resolve_ref_unsafe("HEAD", RESOLVE_REF_READING, head_sha1, NULL))
+ return NULL;
+ head_commit = lookup_commit_reference_gently(head_sha1, 1);
+ if (!head_commit)
+ return NULL;
+
+ /* Turn "ONE" into "ONE..HEAD" then */
+ obj->flags |= UNINTERESTING;
+ add_pending_object(sb->revs, &head_commit->object, "HEAD");
+
+ sb->final = (struct commit *)obj;
+ return sb->revs->pending.objects[0].name;
+}
+
static char *prepare_initial(struct scoreboard *sb)
{
int i;
if (obj->type != OBJ_COMMIT)
die("Non commit %s?", revs->pending.objects[i].name);
if (sb->final)
- die("More than one commit to dig down to %s and %s?",
+ die("More than one commit to dig up from, %s and %s?",
revs->pending.objects[i].name,
final_commit_name);
sb->final = (struct commit *) obj;
final_commit_name = revs->pending.objects[i].name;
}
+
+ if (!final_commit_name)
+ final_commit_name = dwim_reverse_initial(sb);
if (!final_commit_name)
- die("No commit to dig down to?");
+ die("No commit to dig up from?");
return xstrdup(final_commit_name);
}
OPT_BIT('s', NULL, &output_option, N_("Suppress author name and timestamp (Default: off)"), OUTPUT_NO_AUTHOR),
OPT_BIT('e', "show-email", &output_option, N_("Show author email instead of name (Default: off)"), OUTPUT_SHOW_EMAIL),
OPT_BIT('w', NULL, &xdl_opts, N_("Ignore whitespace differences"), XDF_IGNORE_WHITESPACE),
+
+ /*
+ * The following two options are parsed by parse_revision_opt()
+ * and are only included here to get included in the "-h"
+ * output:
+ */
+ { OPTION_LOWLEVEL_CALLBACK, 0, "indent-heuristic", NULL, NULL, N_("Use an experimental indent-based heuristic to improve diffs"), PARSE_OPT_NOARG, parse_opt_unknown_cb },
+ { OPTION_LOWLEVEL_CALLBACK, 0, "compaction-heuristic", NULL, NULL, N_("Use an experimental blank-line-based heuristic to improve diffs"), PARSE_OPT_NOARG, parse_opt_unknown_cb },
+
OPT_BIT(0, "minimal", &xdl_opts, N_("Spend extra cycles to find better match"), XDF_NEED_MINIMAL),
OPT_STRING('S', NULL, &revs_file, N_("file"), N_("Use revisions from <file> instead of calling git-rev-list")),
OPT_STRING(0, "contents", &contents_from, N_("file"), N_("Use <file>'s contents as the final image")),
}
parse_done:
no_whole_file_rename = !DIFF_OPT_TST(&revs.diffopt, FOLLOW_RENAMES);
+ xdl_opts |= revs.diffopt.xdl_opts & (XDF_COMPACTION_HEURISTIC | XDF_INDENT_HEURISTIC);
DIFF_OPT_CLR(&revs.diffopt, FOLLOW_RENAMES);
argc = parse_options_end(&ctx);
if (incremental || (output_option & OUTPUT_PORCELAIN)) {
if (show_progress > 0)
- die("--progress can't be used with --incremental or porcelain formats");
+ die(_("--progress can't be used with --incremental or porcelain formats"));
show_progress = 0;
} else if (show_progress < 0)
show_progress = isatty(2);
sb.commits.compare = compare_commits_by_commit_date;
}
else if (contents_from)
- die("--contents and --reverse do not blend well.");
+ die(_("--contents and --reverse do not blend well."));
else {
final_commit_name = prepare_initial(&sb);
sb.commits.compare = compare_commits_by_reverse_commit_date;
add_pending_object(&revs, &(sb.final->object), ":");
}
else if (contents_from)
- die("Cannot use --contents with final commit object name");
+ die(_("cannot use --contents with final commit object name"));
if (reverse && revs.first_parent_only) {
final_commit = find_single_final(sb.revs, NULL);
if (!final_commit)
- die("--reverse and --first-parent together require specified latest commit");
+ die(_("--reverse and --first-parent together require specified latest commit"));
}
/*
}
if (oidcmp(&c->object.oid, &sb.final->object.oid))
- die("--reverse --first-parent together require range along first-parent chain");
+ die(_("--reverse --first-parent together require range along first-parent chain"));
}
if (is_null_oid(&sb.final->object.oid)) {
else {
o = get_origin(&sb, sb.final, path);
if (fill_blob_sha1_and_mode(o))
- die("no such path %s in %s", path, final_commit_name);
+ die(_("no such path %s in %s"), path, final_commit_name);
if (DIFF_OPT_TST(&sb.revs->diffopt, ALLOW_TEXTCONV) &&
- textconv_object(path, o->mode, o->blob_sha1, 1, (char **) &sb.final_buf,
+ textconv_object(path, o->mode, &o->blob_oid, 1, (char **) &sb.final_buf,
&sb.final_buf_size))
;
else
- sb.final_buf = read_sha1_file(o->blob_sha1, &type,
+ sb.final_buf = read_sha1_file(o->blob_oid.hash, &type,
&sb.final_buf_size);
if (!sb.final_buf)
- die("Cannot read blob %s for path %s",
- sha1_to_hex(o->blob_sha1),
+ die(_("cannot read blob %s for path %s"),
+ oid_to_hex(&o->blob_oid),
path);
}
num_read_blob++;
&bottom, &top, sb.path))
usage(blame_usage);
if (lno < top || ((lno || bottom) && lno < bottom))
- die("file %s has only %lu lines", path, lno);
+ die(Q_("file %s has only %lu line",
+ "file %s has only %lu lines",
+ lno), path, lno);
if (bottom < 1)
bottom = 1;
if (top < 1)
OPT_SET_INT( 0, "set-upstream", &track, N_("change upstream info"),
BRANCH_TRACK_OVERRIDE),
OPT_STRING('u', "set-upstream-to", &new_upstream, N_("upstream"), N_("change the upstream info")),
- OPT_BOOL(0, "unset-upstream", &unset_upstream, "Unset the upstream info"),
+ OPT_BOOL(0, "unset-upstream", &unset_upstream, N_("Unset the upstream info")),
OPT__COLOR(&branch_use_color, N_("use colored output")),
OPT_SET_INT('r', "remotes", &filter.kind, N_("act on remote-tracking branches"),
FILTER_REFS_REMOTES),
int print_contents;
int buffer_output;
int all_objects;
+ int cmdmode; /* may be 'w' or 'c' for --filters or --textconv */
const char *format;
};
+static const char *force_path;
+
+static int filter_object(const char *path, unsigned mode,
+ const struct object_id *oid,
+ char **buf, unsigned long *size)
+{
+ enum object_type type;
+
+ *buf = read_sha1_file(oid->hash, &type, size);
+ if (!*buf)
+ return error(_("cannot read object %s '%s'"),
+ oid_to_hex(oid), path);
+ if ((type == OBJ_BLOB) && S_ISREG(mode)) {
+ struct strbuf strbuf = STRBUF_INIT;
+ if (convert_to_working_tree(path, *buf, *size, &strbuf)) {
+ free(*buf);
+ *size = strbuf.len;
+ *buf = strbuf_detach(&strbuf, NULL);
+ }
+ }
+
+ return 0;
+}
+
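
The new filter_object() above is what gives "git cat-file --filters" its checkout-style view of a blob: read the object, then run convert_to_working_tree() on regular-file blobs. A minimal caller sketch, assuming git's internal cache.h API; the helper name and the "HEAD:Makefile" example are illustrative, not part of the patch.

	/* Hypothetical helper inside builtin/cat-file.c: print a blob with its
	 * worktree conversion (smudge filter, eol) applied, as --filters does. */
	static void show_filtered_blob(const char *spec, const char *path)
	{
		struct object_id oid;
		char *buf;
		unsigned long size;

		if (get_oid(spec, &oid))	/* e.g. spec = "HEAD:Makefile" */
			die("not a valid object name: %s", spec);
		if (filter_object(path, 0100644, &oid, &buf, &size))
			die("could not filter %s", spec);
		fwrite(buf, 1, size, stdout);
		free(buf);
	}
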
static int cat_one_file(int opt, const char *exp_type, const char *obj_name,
int unknown_type)
{
- unsigned char sha1[20];
+ struct object_id oid;
enum object_type type;
char *buf;
unsigned long size;
struct object_context obj_context;
- struct object_info oi = {NULL};
+ struct object_info oi = OBJECT_INFO_INIT;
struct strbuf sb = STRBUF_INIT;
unsigned flags = LOOKUP_REPLACE_OBJECT;
+ const char *path = force_path;
if (unknown_type)
flags |= LOOKUP_UNKNOWN_OBJECT;
- if (get_sha1_with_context(obj_name, 0, sha1, &obj_context))
+ if (get_sha1_with_context(obj_name, 0, oid.hash, &obj_context))
die("Not a valid object name %s", obj_name);
+ if (!path)
+ path = obj_context.path;
+ if (obj_context.mode == S_IFINVALID)
+ obj_context.mode = 0100644;
+
buf = NULL;
switch (opt) {
case 't':
oi.typename = &sb;
- if (sha1_object_info_extended(sha1, &oi, flags) < 0)
+ if (sha1_object_info_extended(oid.hash, &oi, flags) < 0)
die("git cat-file: could not get object info");
if (sb.len) {
printf("%s\n", sb.buf);
case 's':
oi.sizep = &size;
- if (sha1_object_info_extended(sha1, &oi, flags) < 0)
+ if (sha1_object_info_extended(oid.hash, &oi, flags) < 0)
die("git cat-file: could not get object info");
printf("%lu\n", size);
return 0;
case 'e':
- return !has_sha1_file(sha1);
+ return !has_object_file(&oid);
+
+ case 'w':
+ if (!path[0])
+ die("git cat-file --filters %s: <object> must be "
+ "<sha1:path>", obj_name);
+
+ if (filter_object(path, obj_context.mode,
+ &oid, &buf, &size))
+ return -1;
+ break;
case 'c':
- if (!obj_context.path[0])
+ if (!path[0])
die("git cat-file --textconv %s: <object> must be <sha1:path>",
obj_name);
- if (textconv_object(obj_context.path, obj_context.mode, sha1, 1, &buf, &size))
+ if (textconv_object(path, obj_context.mode, &oid, 1, &buf, &size))
break;
case 'p':
- type = sha1_object_info(sha1, NULL);
+ type = sha1_object_info(oid.hash, NULL);
if (type < 0)
die("Not a valid object name %s", obj_name);
}
if (type == OBJ_BLOB)
- return stream_blob_to_fd(1, sha1, NULL, 0);
- buf = read_sha1_file(sha1, &type, &size);
+ return stream_blob_to_fd(1, &oid, NULL, 0);
+ buf = read_sha1_file(oid.hash, &type, &size);
if (!buf)
die("Cannot read object %s", obj_name);
case 0:
if (type_from_string(exp_type) == OBJ_BLOB) {
- unsigned char blob_sha1[20];
- if (sha1_object_info(sha1, NULL) == OBJ_TAG) {
- char *buffer = read_sha1_file(sha1, &type, &size);
+ struct object_id blob_oid;
+ if (sha1_object_info(oid.hash, NULL) == OBJ_TAG) {
+ char *buffer = read_sha1_file(oid.hash, &type, &size);
const char *target;
if (!skip_prefix(buffer, "object ", &target) ||
- get_sha1_hex(target, blob_sha1))
- die("%s not a valid tag", sha1_to_hex(sha1));
+ get_oid_hex(target, &blob_oid))
+ die("%s not a valid tag", oid_to_hex(&oid));
free(buffer);
} else
- hashcpy(blob_sha1, sha1);
+ oidcpy(&blob_oid, &oid);
- if (sha1_object_info(blob_sha1, NULL) == OBJ_BLOB)
- return stream_blob_to_fd(1, blob_sha1, NULL, 0);
+ if (sha1_object_info(blob_oid.hash, NULL) == OBJ_BLOB)
+ return stream_blob_to_fd(1, &blob_oid, NULL, 0);
/*
* we attempted to dereference a tag to a blob
* and failed; there may be new dereference
* fall-back to the usual case.
*/
}
- buf = read_object_with_reference(sha1, exp_type, &size, NULL);
+ buf = read_object_with_reference(oid.hash, exp_type, &size, NULL);
break;
default:
}
struct expand_data {
- unsigned char sha1[20];
+ struct object_id oid;
enum object_type type;
unsigned long size;
off_t disk_size;
const char *rest;
- unsigned char delta_base_sha1[20];
+ struct object_id delta_base_oid;
/*
* If mark_query is true, we do not expand anything, but rather
if (is_atom("objectname", atom, len)) {
if (!data->mark_query)
- strbuf_addstr(sb, sha1_to_hex(data->sha1));
+ strbuf_addstr(sb, oid_to_hex(&data->oid));
} else if (is_atom("objecttype", atom, len)) {
if (data->mark_query)
data->info.typep = &data->type;
strbuf_addstr(sb, data->rest);
} else if (is_atom("deltabase", atom, len)) {
if (data->mark_query)
- data->info.delta_base_sha1 = data->delta_base_sha1;
+ data->info.delta_base_sha1 = data->delta_base_oid.hash;
else
- strbuf_addstr(sb, sha1_to_hex(data->delta_base_sha1));
+ strbuf_addstr(sb,
+ oid_to_hex(&data->delta_base_oid));
} else
die("unknown format element: %.*s", len, atom);
}
static void print_object_or_die(struct batch_options *opt, struct expand_data *data)
{
- const unsigned char *sha1 = data->sha1;
+ const struct object_id *oid = &data->oid;
assert(data->info.typep);
if (data->type == OBJ_BLOB) {
if (opt->buffer_output)
fflush(stdout);
- if (stream_blob_to_fd(1, sha1, NULL, 0) < 0)
- die("unable to stream %s to stdout", sha1_to_hex(sha1));
+ if (opt->cmdmode) {
+ char *contents;
+ unsigned long size;
+
+ if (!data->rest)
+ die("missing path for '%s'", oid_to_hex(oid));
+
+ if (opt->cmdmode == 'w') {
+ if (filter_object(data->rest, 0100644, oid,
+ &contents, &size))
+ die("could not convert '%s' %s",
+ oid_to_hex(oid), data->rest);
+ } else if (opt->cmdmode == 'c') {
+ enum object_type type;
+ if (!textconv_object(data->rest, 0100644, oid,
+ 1, &contents, &size))
+ contents = read_sha1_file(oid->hash, &type,
+ &size);
+ if (!contents)
+ die("could not convert '%s' %s",
+ oid_to_hex(oid), data->rest);
+ } else
+ die("BUG: invalid cmdmode: %c", opt->cmdmode);
+ batch_write(opt, contents, size);
+ free(contents);
+ } else if (stream_blob_to_fd(1, oid, NULL, 0) < 0)
+ die("unable to stream %s to stdout", oid_to_hex(oid));
}
else {
enum object_type type;
unsigned long size;
void *contents;
- contents = read_sha1_file(sha1, &type, &size);
+ contents = read_sha1_file(oid->hash, &type, &size);
if (!contents)
- die("object %s disappeared", sha1_to_hex(sha1));
+ die("object %s disappeared", oid_to_hex(oid));
if (type != data->type)
- die("object %s changed type!?", sha1_to_hex(sha1));
+ die("object %s changed type!?", oid_to_hex(oid));
if (data->info.sizep && size != data->size)
- die("object %s changed size!?", sha1_to_hex(sha1));
+ die("object %s changed size!?", oid_to_hex(oid));
batch_write(opt, contents, size);
free(contents);
struct strbuf buf = STRBUF_INIT;
if (!data->skip_object_info &&
- sha1_object_info_extended(data->sha1, &data->info, LOOKUP_REPLACE_OBJECT) < 0) {
- printf("%s missing\n", obj_name ? obj_name : sha1_to_hex(data->sha1));
+ sha1_object_info_extended(data->oid.hash, &data->info, LOOKUP_REPLACE_OBJECT) < 0) {
+ printf("%s missing\n",
+ obj_name ? obj_name : oid_to_hex(&data->oid));
fflush(stdout);
return;
}
int flags = opt->follow_symlinks ? GET_SHA1_FOLLOW_SYMLINKS : 0;
enum follow_symlinks_result result;
- result = get_sha1_with_context(obj_name, flags, data->sha1, &ctx);
+ result = get_sha1_with_context(obj_name, flags, data->oid.hash, &ctx);
if (result != FOUND) {
switch (result) {
case MISSING_OBJECT:
struct expand_data *expand;
};
-static void batch_object_cb(const unsigned char sha1[20], void *vdata)
+static int batch_object_cb(const unsigned char sha1[20], void *vdata)
{
struct object_cb_data *data = vdata;
- hashcpy(data->expand->sha1, sha1);
+ hashcpy(data->expand->oid.hash, sha1);
batch_object_write(NULL, data->opt, data->expand);
+ return 0;
}
static int batch_loose_object(const unsigned char *sha1,
data.mark_query = 1;
strbuf_expand(&buf, opt->format, expand_format, &data);
data.mark_query = 0;
+ if (opt->cmdmode)
+ data.split_on_whitespace = 1;
if (opt->all_objects) {
- struct object_info empty;
- memset(&empty, 0, sizeof(empty));
+ struct object_info empty = OBJECT_INFO_INIT;
if (!memcmp(&data.info, &empty, sizeof(empty)))
data.skip_object_info = 1;
}
}
static const char * const cat_file_usage[] = {
- N_("git cat-file (-t [--allow-unknown-type] | -s [--allow-unknown-type] | -e | -p | <type> | --textconv) <object>"),
- N_("git cat-file (--batch | --batch-check) [--follow-symlinks]"),
+ N_("git cat-file (-t [--allow-unknown-type] | -s [--allow-unknown-type] | -e | -p | <type> | --textconv | --filters) [--path=<path>] <object>"),
+ N_("git cat-file (--batch | --batch-check) [--follow-symlinks] [--textconv | --filters]"),
NULL
};
OPT_CMDMODE('p', NULL, &opt, N_("pretty-print object's content"), 'p'),
OPT_CMDMODE(0, "textconv", &opt,
N_("for blob objects, run textconv on object's content"), 'c'),
+ OPT_CMDMODE(0, "filters", &opt,
+ N_("for blob objects, run filters on object's content"), 'w'),
+ OPT_STRING(0, "path", &force_path, N_("blob"),
+ N_("use a specific path for --textconv/--filters")),
OPT_BOOL(0, "allow-unknown-type", &unknown_type,
N_("allow -s and -t to work with broken/corrupt objects")),
OPT_BOOL(0, "buffer", &batch.buffer_output, N_("buffer --batch output")),
argc = parse_options(argc, argv, prefix, options, cat_file_usage, 0);
if (opt) {
- if (argc == 1)
+ if (batch.enabled && (opt == 'c' || opt == 'w'))
+ batch.cmdmode = opt;
+ else if (argc == 1)
obj_name = argv[0];
else
usage_with_options(cat_file_usage, options);
} else
usage_with_options(cat_file_usage, options);
}
- if (batch.enabled && (opt || argc)) {
- usage_with_options(cat_file_usage, options);
+ if (batch.enabled) {
+ if (batch.cmdmode != opt || argc)
+ usage_with_options(cat_file_usage, options);
+ if (batch.cmdmode && batch.all_objects)
+ die("--batch-all-objects cannot be combined with "
+ "--textconv nor with --filters");
}
if ((batch.follow_symlinks || batch.all_objects) && !batch.enabled) {
usage_with_options(cat_file_usage, options);
}
+ if (force_path && opt != 'c' && opt != 'w') {
+ error("--path=<path> needs --textconv or --filters");
+ usage_with_options(cat_file_usage, options);
+ }
+
+ if (force_path && batch.enabled) {
+ error("--path=<path> incompatible with --batch");
+ usage_with_options(cat_file_usage, options);
+ }
+
if (batch.buffer_output < 0)
batch.buffer_output = batch.all_objects;
static int to_tempfile;
static char topath[4][TEMPORARY_FILENAME_LENGTH + 1];
-static struct checkout state;
+static struct checkout state = CHECKOUT_INIT;
static void write_tempfile_record(const char *name, const char *prefix)
{
len = base->len + strlen(pathname);
ce = xcalloc(1, cache_entry_size(len));
- hashcpy(ce->sha1, sha1);
+ hashcpy(ce->oid.hash, sha1);
memcpy(ce->name, base->buf, base->len);
memcpy(ce->name + base->len, pathname, len - base->len);
ce->ce_flags = create_ce_flags(0) | CE_UPDATE;
if (pos >= 0) {
struct cache_entry *old = active_cache[pos];
if (ce->ce_mode == old->ce_mode &&
- !hashcmp(ce->sha1, old->sha1)) {
+ !oidcmp(&ce->oid, &old->oid)) {
old->ce_flags |= CE_UPDATE;
free(ce);
return 0;
const char *path = ce->name;
mmfile_t ancestor, ours, theirs;
int status;
- unsigned char sha1[20];
+ struct object_id oid;
mmbuffer_t result_buf;
- unsigned char threeway[3][20];
+ struct object_id threeway[3];
unsigned mode = 0;
memset(threeway, 0, sizeof(threeway));
stage = ce_stage(ce);
if (!stage || strcmp(path, ce->name))
break;
- hashcpy(threeway[stage - 1], ce->sha1);
+ oidcpy(&threeway[stage - 1], &ce->oid);
if (stage == 2)
mode = create_ce_mode(ce->ce_mode);
pos++;
ce = active_cache[pos];
}
- if (is_null_sha1(threeway[1]) || is_null_sha1(threeway[2]))
+ if (is_null_oid(&threeway[1]) || is_null_oid(&threeway[2]))
return error(_("path '%s' does not have necessary versions"), path);
- read_mmblob(&ancestor, threeway[0]);
- read_mmblob(&ours, threeway[1]);
- read_mmblob(&theirs, threeway[2]);
+ read_mmblob(&ancestor, &threeway[0]);
+ read_mmblob(&ours, &threeway[1]);
+ read_mmblob(&theirs, &threeway[2]);
/*
* NEEDSWORK: re-create conflicts from merges with
* object database even when it may contain conflicts).
*/
if (write_sha1_file(result_buf.ptr, result_buf.size,
- blob_type, sha1))
+ blob_type, oid.hash))
die(_("Unable to add merge result for '%s'"), path);
- ce = make_cache_entry(mode, sha1, path, 2, 0);
+ ce = make_cache_entry(mode, oid.hash, path, 2, 0);
if (!ce)
die(_("make_cache_entry failed for path '%s'"), path);
status = checkout_entry(ce, state, NULL);
const char *revision)
{
int pos;
- struct checkout state;
+ struct checkout state = CHECKOUT_INIT;
static char *ps_matched;
- unsigned char rev[20];
+ struct object_id rev;
struct commit *head;
int errs = 0;
struct lock_file *lock_file;
return 1;
/* Now we are committed to check them out */
- memset(&state, 0, sizeof(state));
state.force = 1;
state.refresh_cache = 1;
state.istate = &the_index;
if (write_locked_index(&the_index, lock_file, COMMIT_LOCK))
die(_("unable to write new index file"));
- read_ref_full("HEAD", 0, rev, NULL);
- head = lookup_commit_reference_gently(rev, 1);
+ read_ref_full("HEAD", 0, rev.hash, NULL);
+ head = lookup_commit_reference_gently(rev.hash, 1);
errs |= post_checkout_hook(head, head, 0);
return errs;
int ret = 0;
struct branch_info old;
void *path_to_free;
- unsigned char rev[20];
+ struct object_id rev;
int flag, writeout_error = 0;
memset(&old, 0, sizeof(old));
- old.path = path_to_free = resolve_refdup("HEAD", 0, rev, &flag);
- old.commit = lookup_commit_reference_gently(rev, 1);
+ old.path = path_to_free = resolve_refdup("HEAD", 0, rev.hash, &flag);
+ old.commit = lookup_commit_reference_gently(rev.hash, 1);
if (!(flag & REF_ISSYMREF))
old.path = NULL;
struct tracking_name_data {
/* const */ char *src_ref;
char *dst_ref;
- unsigned char *dst_sha1;
+ struct object_id *dst_oid;
int unique;
};
memset(&query, 0, sizeof(struct refspec));
query.src = cb->src_ref;
if (remote_find_tracking(remote, &query) ||
- get_sha1(query.dst, cb->dst_sha1)) {
+ get_oid(query.dst, cb->dst_oid)) {
free(query.dst);
return 0;
}
return 0;
}
-static const char *unique_tracking_name(const char *name, unsigned char *sha1)
+static const char *unique_tracking_name(const char *name, struct object_id *oid)
{
struct tracking_name_data cb_data = { NULL, NULL, NULL, 1 };
char src_ref[PATH_MAX];
snprintf(src_ref, PATH_MAX, "refs/heads/%s", name);
cb_data.src_ref = src_ref;
- cb_data.dst_sha1 = sha1;
+ cb_data.dst_oid = oid;
for_each_remote(check_tracking_name, &cb_data);
if (cb_data.unique)
return cb_data.dst_ref;
int dwim_new_local_branch_ok,
struct branch_info *new,
struct checkout_opts *opts,
- unsigned char rev[20])
+ struct object_id *rev)
{
struct tree **source_tree = &opts->source_tree;
const char **new_branch = &opts->new_branch;
int argcount = 0;
- unsigned char branch_rev[20];
+ struct object_id branch_rev;
const char *arg;
int dash_dash_pos;
int has_dash_dash = 0;
if (!strcmp(arg, "-"))
arg = "@{-1}";
- if (get_sha1_mb(arg, rev)) {
+ if (get_oid_mb(arg, rev)) {
/*
* Either case (3) or (4), with <something> not being
* a commit, or an attempt to use case (1) with an
setup_branch_path(new);
if (!check_refname_format(new->path, 0) &&
- !read_ref(new->path, branch_rev))
- hashcpy(rev, branch_rev);
+ !read_ref(new->path, branch_rev.hash))
+ oidcpy(rev, &branch_rev);
else
new->path = NULL; /* not an existing branch */
- new->commit = lookup_commit_reference_gently(rev, 1);
+ new->commit = lookup_commit_reference_gently(rev->hash, 1);
if (!new->commit) {
/* not a commit */
- *source_tree = parse_tree_indirect(rev);
+ *source_tree = parse_tree_indirect(rev->hash);
} else {
parse_commit_or_die(new->commit);
*source_tree = new->commit->tree;
if (new->path && !opts->force_detach && !opts->new_branch &&
!opts->ignore_other_worktrees) {
- unsigned char sha1[20];
+ struct object_id oid;
int flag;
- char *head_ref = resolve_refdup("HEAD", 0, sha1, &flag);
+ char *head_ref = resolve_refdup("HEAD", 0, oid.hash, &flag);
if (head_ref &&
(!(flag & REF_ISSYMREF) || strcmp(head_ref, new->path)))
die_if_checked_out(new->path, 1);
}
if (!new->commit && opts->new_branch) {
- unsigned char rev[20];
+ struct object_id rev;
int flag;
- if (!read_ref_full("HEAD", 0, rev, &flag) &&
- (flag & REF_ISSYMREF) && is_null_sha1(rev))
+ if (!read_ref_full("HEAD", 0, rev.hash, &flag) &&
+ (flag & REF_ISSYMREF) && is_null_oid(&rev))
return switch_unborn_to_new_branch(opts);
}
return switch_branches(opts, new);
* remote branches, erroring out for invalid or ambiguous cases.
*/
if (argc) {
- unsigned char rev[20];
+ struct object_id rev;
int dwim_ok =
!opts.patch_mode &&
dwim_new_local_branch &&
opts.track == BRANCH_TRACK_UNSPECIFIED &&
!opts.new_branch;
int n = parse_branchname_arg(argc, argv, dwim_ok,
- &new, &opts, rev);
+ &new, &opts, &rev);
argv += n;
argc -= n;
}
static int option_no_checkout, option_bare, option_mirror, option_single_branch = -1;
static int option_local = -1, option_no_hardlinks, option_shared, option_recursive;
static int option_shallow_submodules;
-static char *option_template, *option_depth;
+static int deepen;
+static char *option_template, *option_depth, *option_since;
static char *option_origin = NULL;
static char *option_branch = NULL;
+static struct string_list option_not = STRING_LIST_INIT_NODUP;
static const char *real_git_dir;
static char *option_upload_pack = "git-upload-pack";
static int option_verbosity;
static int option_progress = -1;
static enum transport_family family;
static struct string_list option_config = STRING_LIST_INIT_NODUP;
-static struct string_list option_reference = STRING_LIST_INIT_NODUP;
+static struct string_list option_required_reference = STRING_LIST_INIT_NODUP;
+static struct string_list option_optional_reference = STRING_LIST_INIT_NODUP;
static int option_dissociate;
static int max_jobs = -1;
N_("number of submodules cloned in parallel")),
OPT_STRING(0, "template", &option_template, N_("template-directory"),
N_("directory from which templates will be used")),
- OPT_STRING_LIST(0, "reference", &option_reference, N_("repo"),
+ OPT_STRING_LIST(0, "reference", &option_required_reference, N_("repo"),
N_("reference repository")),
+ OPT_STRING_LIST(0, "reference-if-able", &option_optional_reference,
+ N_("repo"), N_("reference repository")),
OPT_BOOL(0, "dissociate", &option_dissociate,
N_("use --reference only while cloning")),
OPT_STRING('o', "origin", &option_origin, N_("name"),
N_("path to git-upload-pack on the remote")),
OPT_STRING(0, "depth", &option_depth, N_("depth"),
N_("create a shallow clone of that depth")),
+ OPT_STRING(0, "shallow-since", &option_since, N_("time"),
+ N_("create a shallow clone since a specific time")),
+ OPT_STRING_LIST(0, "shallow-exclude", &option_not, N_("revision"),
+ N_("deepen history of shallow clone by excluding rev")),
OPT_BOOL(0, "single-branch", &option_single_branch,
N_("clone only one branch, HEAD or --branch")),
OPT_BOOL(0, "shallow-submodules", &option_shallow_submodules,
static int add_one_reference(struct string_list_item *item, void *cb_data)
{
- char *ref_git;
- const char *repo;
- struct strbuf alternate = STRBUF_INIT;
-
- /* Beware: read_gitfile(), real_path() and mkpath() return static buffer */
- ref_git = xstrdup(real_path(item->string));
-
- repo = read_gitfile(ref_git);
- if (!repo)
- repo = read_gitfile(mkpath("%s/.git", ref_git));
- if (repo) {
- free(ref_git);
- ref_git = xstrdup(repo);
- }
+ struct strbuf err = STRBUF_INIT;
+ int *required = cb_data;
+ char *ref_git = compute_alternate_path(item->string, &err);
- if (!repo && is_directory(mkpath("%s/.git/objects", ref_git))) {
- char *ref_git_git = mkpathdup("%s/.git", ref_git);
- free(ref_git);
- ref_git = ref_git_git;
- } else if (!is_directory(mkpath("%s/objects", ref_git))) {
+ if (!ref_git) {
+ if (*required)
+ die("%s", err.buf);
+ else
+ fprintf(stderr,
+ _("info: Could not add alternate for '%s': %s\n"),
+ item->string, err.buf);
+ } else {
struct strbuf sb = STRBUF_INIT;
- if (get_common_dir(&sb, ref_git))
- die(_("reference repository '%s' as a linked checkout is not supported yet."),
- item->string);
- die(_("reference repository '%s' is not a local repository."),
- item->string);
+ strbuf_addf(&sb, "%s/objects", ref_git);
+ add_to_alternates_file(sb.buf);
+ strbuf_release(&sb);
}
- if (!access(mkpath("%s/shallow", ref_git), F_OK))
- die(_("reference repository '%s' is shallow"), item->string);
-
- if (!access(mkpath("%s/info/grafts", ref_git), F_OK))
- die(_("reference repository '%s' is grafted"), item->string);
-
- strbuf_addf(&alternate, "%s/objects", ref_git);
- add_to_alternates_file(alternate.buf);
- strbuf_release(&alternate);
+ strbuf_release(&err);
free(ref_git);
return 0;
}
static void setup_reference(void)
{
- for_each_string_list(&option_reference, add_one_reference, NULL);
+ int required = 1;
+ for_each_string_list(&option_required_reference,
+ add_one_reference, &required);
+ required = 0;
+ for_each_string_list(&option_optional_reference,
+ add_one_reference, &required);
}
static void copy_alternates(struct strbuf *src, struct strbuf *dst,
}
}
-static int checkout(void)
+static int checkout(int submodule_progress)
{
unsigned char sha1[20];
char *head;
if (max_jobs != -1)
argv_array_pushf(&args, "--jobs=%d", max_jobs);
+ if (submodule_progress)
+ argv_array_push(&args, "--progress");
+
err = run_command_v_opt(args.argv, RUN_GIT_CMD);
argv_array_clear(&args);
}
const char *src_ref_prefix = "refs/heads/";
struct remote *remote;
int err = 0, complete_refs_before_fetch = 1;
+ int submodule_progress;
struct refspec *refspec;
const char *fetch_pattern;
usage_msg_opt(_("You must specify a repository to clone."),
builtin_clone_usage, builtin_clone_options);
+ if (option_depth || option_since || option_not.nr)
+ deepen = 1;
if (option_single_branch == -1)
- option_single_branch = option_depth ? 1 : 0;
+ option_single_branch = deepen ? 1 : 0;
if (option_mirror)
option_bare = 1;
set_git_work_tree(work_tree);
}
- junk_git_dir = git_dir;
+ junk_git_dir = real_git_dir ? real_git_dir : git_dir;
if (safe_create_leading_directories_const(git_dir) < 0)
die(_("could not create leading directories of '%s'"), git_dir);
- set_git_dir_init(git_dir, real_git_dir, 0);
- if (real_git_dir) {
- git_dir = real_git_dir;
- junk_git_dir = real_git_dir;
- }
-
if (0 <= option_verbosity) {
if (option_bare)
fprintf(stderr, _("Cloning into bare repository '%s'...\n"), dir);
else
fprintf(stderr, _("Cloning into '%s'...\n"), dir);
}
- init_db(option_template, INIT_DB_QUIET);
+
+ if (option_recursive) {
+ if (option_required_reference.nr &&
+ option_optional_reference.nr)
+ die(_("clone --recursive is not compatible with "
+ "both --reference and --reference-if-able"));
+ else if (option_required_reference.nr) {
+ string_list_append(&option_config,
+ "submodule.alternateLocation=superproject");
+ string_list_append(&option_config,
+ "submodule.alternateErrorStrategy=die");
+ } else if (option_optional_reference.nr) {
+ string_list_append(&option_config,
+ "submodule.alternateLocation=superproject");
+ string_list_append(&option_config,
+ "submodule.alternateErrorStrategy=info");
+ }
+ }
+
+ init_db(git_dir, real_git_dir, option_template, INIT_DB_QUIET);
+
+ if (real_git_dir)
+ git_dir = real_git_dir;
+
write_config(&option_config);
git_config(git_default_config, NULL);
git_config_set(key.buf, repo);
strbuf_reset(&key);
- if (option_reference.nr)
+ if (option_required_reference.nr || option_optional_reference.nr)
setup_reference();
fetch_pattern = value.buf;
if (is_local) {
if (option_depth)
warning(_("--depth is ignored in local clones; use file:// instead."));
+ if (option_since)
+ warning(_("--shallow-since is ignored in local clones; use file:// instead."));
+ if (option_not.nr)
+ warning(_("--shallow-exclude is ignored in local clones; use file:// instead."));
if (!access(mkpath("%s/shallow", path), F_OK)) {
if (option_local > 0)
warning(_("source repository is shallow, ignoring --local"));
if (option_depth)
transport_set_option(transport, TRANS_OPT_DEPTH,
option_depth);
+ if (option_since)
+ transport_set_option(transport, TRANS_OPT_DEEPEN_SINCE,
+ option_since);
+ if (option_not.nr)
+ transport_set_option(transport, TRANS_OPT_DEEPEN_NOT,
+ (const char *)&option_not);
if (option_single_branch)
transport_set_option(transport, TRANS_OPT_FOLLOWTAGS, "1");
transport_set_option(transport, TRANS_OPT_UPLOADPACK,
option_upload_pack);
- if (transport->smart_options && !option_depth)
+ if (transport->smart_options && !deepen)
transport->smart_options->check_self_contained_and_connected = 1;
refs = transport_get_remote_refs(transport);
update_head(our_head_points_at, remote_head, reflog_msg.buf);
+ /*
+ * We want to show progress for recursive submodule clones iff
+ * we did so for the main clone. But only the transport knows
+ * the final decision for this flag, so we need to rescue the value
+ * before we free the transport.
+ */
+ submodule_progress = transport->progress;
+
transport_unlock_pack(transport);
transport_disconnect(transport);
}
junk_mode = JUNK_LEAVE_REPO;
- err = checkout();
+ err = checkout(submodule_progress);
strbuf_release(&reflog_msg);
strbuf_release(&branch_top);
{
int i, got_tree = 0;
struct commit_list *parents = NULL;
- unsigned char tree_sha1[20];
- unsigned char commit_sha1[20];
+ struct object_id tree_oid;
+ struct object_id commit_oid;
struct strbuf buffer = STRBUF_INIT;
git_config(commit_tree_config, NULL);
for (i = 1; i < argc; i++) {
const char *arg = argv[i];
if (!strcmp(arg, "-p")) {
- unsigned char sha1[20];
+ struct object_id oid;
if (argc <= ++i)
usage(commit_tree_usage);
- if (get_sha1_commit(argv[i], sha1))
+ if (get_sha1_commit(argv[i], oid.hash))
die("Not a valid object name %s", argv[i]);
- assert_sha1_type(sha1, OBJ_COMMIT);
- new_parent(lookup_commit(sha1), &parents);
+ assert_sha1_type(oid.hash, OBJ_COMMIT);
+ new_parent(lookup_commit(oid.hash), &parents);
continue;
}
continue;
}
- if (get_sha1_tree(arg, tree_sha1))
+ if (get_sha1_tree(arg, tree_oid.hash))
die("Not a valid object name %s", arg);
if (got_tree)
die("Cannot give more than one trees");
die_errno("git commit-tree: failed to read");
}
- if (commit_tree(buffer.buf, buffer.len, tree_sha1, parents,
- commit_sha1, NULL, sign_commit)) {
+ if (commit_tree(buffer.buf, buffer.len, tree_oid.hash, parents,
+ commit_oid.hash, NULL, sign_commit)) {
strbuf_release(&buffer);
return 1;
}
- printf("%s\n", sha1_to_hex(commit_sha1));
+ printf("%s\n", oid_to_hex(&commit_oid));
strbuf_release(&buffer);
return 0;
}
static const char *only_include_assumed;
static struct strbuf message = STRBUF_INIT;
-static enum status_format {
- STATUS_FORMAT_NONE = 0,
- STATUS_FORMAT_LONG,
- STATUS_FORMAT_SHORT,
- STATUS_FORMAT_PORCELAIN,
+static enum wt_status_format status_format = STATUS_FORMAT_UNSPECIFIED;
- STATUS_FORMAT_UNSPECIFIED
-} status_format = STATUS_FORMAT_UNSPECIFIED;
+static int opt_parse_porcelain(const struct option *opt, const char *arg, int unset)
+{
+ enum wt_status_format *value = (enum wt_status_format *)opt->value;
+ if (unset)
+ *value = STATUS_FORMAT_NONE;
+ else if (!arg)
+ *value = STATUS_FORMAT_PORCELAIN;
+ else if (!strcmp(arg, "v1") || !strcmp(arg, "1"))
+ *value = STATUS_FORMAT_PORCELAIN;
+ else if (!strcmp(arg, "v2") || !strcmp(arg, "2"))
+ *value = STATUS_FORMAT_PORCELAIN_V2;
+ else
+ die("unsupported porcelain version '%s'", arg);
+
+ return 0;
+}
static int opt_parse_m(const struct option *opt, const char *arg, int unset)
{
whence = FROM_MERGE;
else if (file_exists(git_path_cherry_pick_head())) {
whence = FROM_CHERRY_PICK;
- if (file_exists(git_path(SEQ_DIR)))
+ if (file_exists(git_path_seq_dir()))
sequencer_in_use = 1;
}
else
s->fp = fp;
s->nowarn = nowarn;
s->is_initial = get_sha1(s->reference, sha1) ? 1 : 0;
+ if (!s->is_initial)
+ hashcpy(s->sha1_commit, sha1);
+ s->status_format = status_format;
+ s->ignore_submodule_arg = ignore_submodule_arg;
wt_status_collect(s);
-
- switch (status_format) {
- case STATUS_FORMAT_SHORT:
- wt_shortstatus_print(s);
- break;
- case STATUS_FORMAT_PORCELAIN:
- wt_porcelain_print(s);
- break;
- case STATUS_FORMAT_UNSPECIFIED:
- die("BUG: finalize_deferred_config() should have been called");
- break;
- case STATUS_FORMAT_NONE:
- case STATUS_FORMAT_LONG:
- wt_status_print(s);
- break;
- }
+ wt_status_print(s);
return s->commitable;
}
if (amend)
parent = "HEAD^1";
- if (get_sha1(parent, sha1))
- commitable = !!active_nr;
- else {
+ if (get_sha1(parent, sha1)) {
+ int i, ita_nr = 0;
+
+ for (i = 0; i < active_nr; i++)
+ if (ce_intent_to_add(active_cache[i]))
+ ita_nr++;
+ commitable = active_nr - ita_nr > 0;
+ } else {
/*
* Unless the user did explicitly request a submodule
* ignore mode by passing a command line option we do
if (ignore_submodule_arg &&
!strcmp(ignore_submodule_arg, "all"))
diff_flags |= DIFF_OPT_IGNORE_SUBMODULES;
- commitable = index_differs_from(parent, diff_flags);
+ commitable = index_differs_from(parent, diff_flags, 1);
}
}
strbuf_release(&committer_ident);
* is not in effect here.
*/
static struct status_deferred_config {
- enum status_format status_format;
+ enum wt_status_format status_format;
int show_branch;
} status_deferred_config = {
STATUS_FORMAT_UNSPECIFIED,
static void finalize_deferred_config(struct wt_status *s)
{
int use_deferred_config = (status_format != STATUS_FORMAT_PORCELAIN &&
+ status_format != STATUS_FORMAT_PORCELAIN_V2 &&
!s->null_termination);
if (s->null_termination) {
N_("show status concisely"), STATUS_FORMAT_SHORT),
OPT_BOOL('b', "branch", &s.show_branch,
N_("show branch information")),
- OPT_SET_INT(0, "porcelain", &status_format,
- N_("machine-readable output"),
- STATUS_FORMAT_PORCELAIN),
+ { OPTION_CALLBACK, 0, "porcelain", &status_format,
+ N_("version"), N_("machine-readable output"),
+ PARSE_OPT_OPTARG, opt_parse_porcelain },
OPT_SET_INT(0, "long", &status_format,
N_("show status in long format (default)"),
STATUS_FORMAT_LONG),
fd = hold_locked_index(&index_lock, 0);
s.is_initial = get_sha1(s.reference, sha1) ? 1 : 0;
+ if (!s.is_initial)
+ hashcpy(s.sha1_commit, sha1);
+
s.ignore_submodule_arg = ignore_submodule_arg;
+ s.status_format = status_format;
+ s.verbose = verbose;
+
wt_status_collect(&s);
if (0 <= fd)
if (s.relative_paths)
s.prefix = prefix;
- switch (status_format) {
- case STATUS_FORMAT_SHORT:
- wt_shortstatus_print(&s);
- break;
- case STATUS_FORMAT_PORCELAIN:
- wt_porcelain_print(&s);
- break;
- case STATUS_FORMAT_UNSPECIFIED:
- die("BUG: finalize_deferred_config() should have been called");
- break;
- case STATUS_FORMAT_NONE:
- case STATUS_FORMAT_LONG:
- s.verbose = verbose;
- s.ignore_submodule_arg = ignore_submodule_arg;
- wt_status_print(&s);
- break;
- }
+ wt_status_print(&s);
return 0;
}
const char *index_file, *reflog_msg;
char *nl;
unsigned char sha1[20];
- struct commit_list *parents = NULL, **pptr = &parents;
+ struct commit_list *parents = NULL;
struct stat statbuf;
struct commit *current_head = NULL;
struct commit_extra_header *extra = NULL;
if (!reflog_msg)
reflog_msg = "commit (initial)";
} else if (amend) {
- struct commit_list *c;
-
if (!reflog_msg)
reflog_msg = "commit (amend)";
- for (c = current_head->parents; c; c = c->next)
- pptr = &commit_list_insert(c->item, pptr)->next;
+ parents = copy_commit_list(current_head->parents);
} else if (whence == FROM_MERGE) {
struct strbuf m = STRBUF_INIT;
FILE *fp;
int allow_fast_forward = 1;
+ struct commit_list **pptr = &parents;
if (!reflog_msg)
reflog_msg = "commit (merge)";
- pptr = &commit_list_insert(current_head, pptr)->next;
+ pptr = commit_list_append(current_head, pptr);
fp = fopen(git_path_merge_head(), "r");
if (fp == NULL)
die_errno(_("could not open '%s' for reading"),
parent = get_merge_parent(m.buf);
if (!parent)
die(_("Corrupt MERGE_HEAD file (%s)"), m.buf);
- pptr = &commit_list_insert(parent, pptr)->next;
+ pptr = commit_list_append(parent, pptr);
}
fclose(fp);
strbuf_release(&m);
reflog_msg = (whence == FROM_CHERRY_PICK)
? "commit (cherry-pick)"
: "commit";
- pptr = &commit_list_insert(current_head, pptr)->next;
+ commit_list_insert(current_head, &parents);
}
/* Finally, get the commit message */
value = normalize_value(argv[0], argv[1]);
ret = git_config_set_in_file_gently(given_config_source.file, argv[0], value);
if (ret == CONFIG_NOTHING_SET)
- error("cannot overwrite multiple values with a single value\n"
- " Use a regexp, --add or --replace-all to change %s.", argv[0]);
+ error(_("cannot overwrite multiple values with a single value\n"
+ " Use a regexp, --add or --replace-all to change %s."), argv[0]);
return ret;
}
else if (actions == ACTION_SET_ALL) {
#include "dir.h"
#include "builtin.h"
#include "parse-options.h"
+#include "quote.h"
static unsigned long garbage;
static off_t size_garbage;
return 0;
}
+static int print_alternate(struct alternate_object_database *alt, void *data)
+{
+ printf("alternate: ");
+ quote_c_style(alt->path, NULL, stdout, 0);
+ putchar('\n');
+ return 0;
+}
+
static char const * const count_objects_usage[] = {
N_("git count-objects [-v] [-H | --human-readable]"),
NULL
OPT_END(),
};
+ git_config(git_default_config, NULL);
+
argc = parse_options(argc, argv, prefix, opts, count_objects_usage, 0);
/* we do not take arguments other than flags for now */
if (argc)
printf("prune-packable: %lu\n", packed_loose);
printf("garbage: %lu\n", garbage);
printf("size-garbage: %s\n", garbage_buf.buf);
+ foreach_alt_odb(print_alternate, NULL);
strbuf_release(&loose_buf);
strbuf_release(&pack_buf);
strbuf_release(&garbage_buf);
oid_to_hex(oid));
}
- qsort(all_matches, match_cnt, sizeof(all_matches[0]), compare_pt);
+ QSORT(all_matches, match_cnt, compare_pt);
if (gave_up_on) {
commit_list_insert_by_date(gave_up_on, &list);
break;
}
- if (!no_index)
- prefix = setup_git_directory_gently(&nongit);
+ prefix = setup_git_directory_gently(&nongit);
- /*
- * Treat git diff with at least one path outside of the
- * repo the same as if the command would have been executed
- * outside of a git repository. In this case it behaves
- * the same way as "git diff --no-index <a> <b>", which acts
- * as a colourful "diff" replacement.
- */
- if (nongit || ((argc == i + 2) &&
- (!path_inside_repo(prefix, argv[i]) ||
- !path_inside_repo(prefix, argv[i + 1]))))
- no_index = DIFF_NO_INDEX_IMPLICIT;
+ if (!no_index) {
+ /*
+ * Treat git diff with at least one path outside of the
+ * repo the same as if the command would have been executed
+ * outside of a git repository. In this case it behaves
+ * the same way as "git diff --no-index <a> <b>", which acts
+ * as a colourful "diff" replacement.
+ */
+ if (nongit || ((argc == i + 2) &&
+ (!path_inside_repo(prefix, argv[i]) ||
+ !path_inside_repo(prefix, argv[i + 1]))))
+ no_index = DIFF_NO_INDEX_IMPLICIT;
+ }
if (!no_index)
gitmodules_config();
* Handle files below a directory first, in case they are all deleted
* and the directory changes to a file or symlink.
*/
- qsort(q->queue, q->nr, sizeof(q->queue[0]), depth_first);
+ QSORT(q->queue, q->nr, depth_first);
for (i = 0; i < q->nr; i++) {
struct diff_filespec *ospec = q->queue[i]->one;
struct child_process *conn;
struct fetch_pack_args args;
struct sha1_array shallow = SHA1_ARRAY_INIT;
+ struct string_list deepen_not = STRING_LIST_INIT_DUP;
packet_trace_identity("fetch-pack");
for (i = 1; i < argc && *argv[i] == '-'; i++) {
const char *arg = argv[i];
- if (starts_with(arg, "--upload-pack=")) {
- args.uploadpack = arg + 14;
+ if (skip_prefix(arg, "--upload-pack=", &arg)) {
+ args.uploadpack = arg;
continue;
}
- if (starts_with(arg, "--exec=")) {
- args.uploadpack = arg + 7;
+ if (skip_prefix(arg, "--exec=", &arg)) {
+ args.uploadpack = arg;
continue;
}
if (!strcmp("--quiet", arg) || !strcmp("-q", arg)) {
args.verbose = 1;
continue;
}
- if (starts_with(arg, "--depth=")) {
- args.depth = strtol(arg + 8, NULL, 0);
+ if (skip_prefix(arg, "--depth=", &arg)) {
+ args.depth = strtol(arg, NULL, 0);
+ continue;
+ }
+ if (skip_prefix(arg, "--shallow-since=", &arg)) {
+ args.deepen_since = xstrdup(arg);
+ continue;
+ }
+ if (skip_prefix(arg, "--shallow-exclude=", &arg)) {
+ string_list_append(&deepen_not, arg);
+ continue;
+ }
+ if (!strcmp(arg, "--deepen-relative")) {
+ args.deepen_relative = 1;
continue;
}
if (!strcmp("--no-progress", arg)) {
}
usage(fetch_pack_usage);
}
+ if (deepen_not.nr)
+ args.deepen_not = &deepen_not;
if (i < argc)
dest = argv[i++];
static int prune = -1; /* unspecified */
#define PRUNE_BY_DEFAULT 0 /* do we prune by default? */
-static int all, append, dry_run, force, keep, multiple, update_head_ok, verbosity;
+static int all, append, dry_run, force, keep, multiple, update_head_ok, verbosity, deepen_relative;
static int progress = -1, recurse_submodules = RECURSE_SUBMODULES_DEFAULT;
-static int tags = TAGS_DEFAULT, unshallow, update_shallow;
+static int tags = TAGS_DEFAULT, unshallow, update_shallow, deepen;
static int max_children = -1;
static enum transport_family family;
static const char *depth;
+static const char *deepen_since;
static const char *upload_pack;
+static struct string_list deepen_not = STRING_LIST_INIT_NODUP;
static struct strbuf default_rla = STRBUF_INIT;
static struct transport *gtransport;
static struct transport *gsecondary;
OPT_BOOL(0, "progress", &progress, N_("force progress reporting")),
OPT_STRING(0, "depth", &depth, N_("depth"),
N_("deepen history of shallow clone")),
+ OPT_STRING(0, "shallow-since", &deepen_since, N_("time"),
+ N_("deepen history of shallow repository based on time")),
+ OPT_STRING_LIST(0, "shallow-exclude", &deepen_not, N_("revision"),
+ N_("deepen history of shallow clone by excluding rev")),
+ OPT_INTEGER(0, "deepen", &deepen_relative,
+ N_("deepen history of shallow clone")),
{ OPTION_SET_INT, 0, "unshallow", &unshallow, NULL,
N_("convert to a complete repository"),
PARSE_OPT_NONEG | PARSE_OPT_NOARG, NULL, 1 },
static void format_display(struct strbuf *display, char code,
const char *summary, const char *error,
- const char *remote, const char *local)
+ const char *remote, const char *local,
+ int summary_width)
{
- strbuf_addf(display, "%c %-*s ", code, TRANSPORT_SUMMARY(summary));
+ int width = (summary_width + strlen(summary) - gettext_width(summary));
+
+ strbuf_addf(display, "%c %-*s ", code, width, summary);
if (!compact_format)
print_remote_to_local(display, remote, local);
else
static int update_local_ref(struct ref *ref,
const char *remote,
const struct ref *remote_ref,
- struct strbuf *display)
+ struct strbuf *display,
+ int summary_width)
{
struct commit *current = NULL, *updated;
enum object_type type;
if (!oidcmp(&ref->old_oid, &ref->new_oid)) {
if (verbosity > 0)
format_display(display, '=', _("[up to date]"), NULL,
- remote, pretty_ref);
+ remote, pretty_ref, summary_width);
return 0;
}
*/
format_display(display, '!', _("[rejected]"),
_("can't fetch in current branch"),
- remote, pretty_ref);
+ remote, pretty_ref, summary_width);
return 1;
}
r = s_update_ref("updating tag", ref, 0);
format_display(display, r ? '!' : 't', _("[tag update]"),
r ? _("unable to update local ref") : NULL,
- remote, pretty_ref);
+ remote, pretty_ref, summary_width);
return r;
}
r = s_update_ref(msg, ref, 0);
format_display(display, r ? '!' : '*', what,
r ? _("unable to update local ref") : NULL,
- remote, pretty_ref);
+ remote, pretty_ref, summary_width);
return r;
}
r = s_update_ref("fast-forward", ref, 1);
format_display(display, r ? '!' : ' ', quickref.buf,
r ? _("unable to update local ref") : NULL,
- remote, pretty_ref);
+ remote, pretty_ref, summary_width);
strbuf_release(&quickref);
return r;
} else if (force || ref->force) {
r = s_update_ref("forced-update", ref, 1);
format_display(display, r ? '!' : '+', quickref.buf,
r ? _("unable to update local ref") : _("forced update"),
- remote, pretty_ref);
+ remote, pretty_ref, summary_width);
strbuf_release(&quickref);
return r;
} else {
format_display(display, '!', _("[rejected]"), _("non-fast-forward"),
- remote, pretty_ref);
+ remote, pretty_ref, summary_width);
return 1;
}
}
char *url;
const char *filename = dry_run ? "/dev/null" : git_path_fetch_head();
int want_status;
+ int summary_width = transport_summary_width(ref_map);
fp = fopen(filename, "a");
if (!fp)
strbuf_reset(¬e);
if (ref) {
- rc |= update_local_ref(ref, what, rm, ¬e);
+ rc |= update_local_ref(ref, what, rm, ¬e,
+ summary_width);
free(ref);
} else
format_display(¬e, '*',
*kind ? kind : "branch", NULL,
*what ? what : "HEAD",
- "FETCH_HEAD");
+ "FETCH_HEAD", summary_width);
if (note.len) {
if (verbosity >= 0 && !shown_url) {
fprintf(stderr, _("From %.*s\n"),
* really need to perform. Claiming failure now will ensure
* we perform the network exchange to deepen our history.
*/
- if (depth)
+ if (deepen)
return -1;
opt.quiet = 1;
return check_connected(iterate_ref_map, &rm, &opt);
int url_len, i, result = 0;
struct ref *ref, *stale_refs = get_stale_heads(refs, ref_count, ref_map);
char *url;
+ int summary_width = transport_summary_width(stale_refs);
const char *dangling_msg = dry_run
? _(" (%s will become dangling)")
: _(" (%s has become dangling)");
shown_url = 1;
}
format_display(&sb, '-', _("[deleted]"), NULL,
- _("(none)"), prettify_refname(ref->name));
+ _("(none)"), prettify_refname(ref->name),
+ summary_width);
fprintf(stderr, " %s\n",sb.buf);
strbuf_release(&sb);
warn_dangling_symref(stderr, dangling_msg, ref->name);
name, transport->url);
}
-static struct transport *prepare_transport(struct remote *remote)
+static struct transport *prepare_transport(struct remote *remote, int deepen)
{
struct transport *transport;
transport = transport_get(remote, NULL);
set_option(transport, TRANS_OPT_KEEP, "yes");
if (depth)
set_option(transport, TRANS_OPT_DEPTH, depth);
+ if (deepen && deepen_since)
+ set_option(transport, TRANS_OPT_DEEPEN_SINCE, deepen_since);
+ if (deepen && deepen_not.nr)
+ set_option(transport, TRANS_OPT_DEEPEN_NOT,
+ (const char *)&deepen_not);
+ if (deepen_relative)
+ set_option(transport, TRANS_OPT_DEEPEN_RELATIVE, "yes");
if (update_shallow)
set_option(transport, TRANS_OPT_UPDATE_SHALLOW, "yes");
return transport;
static void backfill_tags(struct transport *transport, struct ref *ref_map)
{
- if (transport->cannot_reuse) {
- gsecondary = prepare_transport(transport->remote);
+ int cannot_reuse;
+
+ /*
+ * Once we have set TRANS_OPT_DEEPEN_SINCE, we can't unset it
+ * when a remote helper is used (setting it to an empty string
+ * does not unset it). We could extend the remote helper

+ * protocol for that, but for now, just force a new connection
+ * without deepen-since. Similar story for deepen-not.
+ */
+ cannot_reuse = transport->cannot_reuse ||
+ deepen_since || deepen_not.nr;
+ if (cannot_reuse) {
+ gsecondary = prepare_transport(transport->remote, 0);
transport = gsecondary;
}
transport_set_option(transport, TRANS_OPT_FOLLOWTAGS, NULL);
transport_set_option(transport, TRANS_OPT_DEPTH, "0");
+ transport_set_option(transport, TRANS_OPT_DEEPEN_RELATIVE, NULL);
fetch_refs(transport, ref_map);
if (gsecondary) {
die(_("No remote repository specified. Please, specify either a URL or a\n"
"remote name from which new revisions should be fetched."));
- gtransport = prepare_transport(remote);
+ gtransport = prepare_transport(remote, 1);
if (prune < 0) {
/* no command line request */
argc = parse_options(argc, argv, prefix,
builtin_fetch_options, builtin_fetch_usage, 0);
+ if (deepen_relative) {
+ if (deepen_relative < 0)
+ die(_("Negative depth in --deepen is not supported"));
+ if (depth)
+ die(_("--deepen and --depth are mutually exclusive"));
+ depth = xstrfmt("%d", deepen_relative);
+ }
if (unshallow) {
if (depth)
die(_("--depth and --unshallow cannot be used together"));
/* no need to be strict, transport_set_option() will validate it again */
if (depth && atoi(depth) < 1)
die(_("depth %s is not a positive number"), depth);
+ if (depth || deepen_since || deepen_not.nr)
+ deepen = 1;
if (recurse_submodules != RECURSE_SUBMODULES_OFF) {
if (recurse_submodules_default) {
struct string_list *authors,
struct string_list *committers)
{
- if (authors->nr)
- qsort(authors->items,
- authors->nr, sizeof(authors->items[0]),
- cmp_string_list_util_as_integral);
- if (committers->nr)
- qsort(committers->items,
- committers->nr, sizeof(committers->items[0]),
- cmp_string_list_util_as_integral);
+ QSORT(authors->items, authors->nr,
+ cmp_string_list_util_as_integral);
+ QSORT(committers->items, committers->nr,
+ cmp_string_list_util_as_integral);
credit_people(out, authors, 'a');
credit_people(out, committers, 'c');
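
The QSORT() calls above replace open-coded qsort() invocations; the macro derives the element size from the array itself, which removes the easy-to-get-wrong sizeof argument. A minimal sketch of such a wrapper (git's actual macro additionally avoids calling qsort(3) when there is nothing to sort):

    #include <stdio.h>
    #include <stdlib.h>

    /* Infer the element size from the pointed-to type of `base`. */
    #define QSORT(base, n, cmp) qsort((base), (n), sizeof(*(base)), (cmp))

    static int cmp_int(const void *a, const void *b)
    {
        int x = *(const int *)a, y = *(const int *)b;
        return (x > y) - (x < y);
    }

    int main(void)
    {
        int counts[] = { 42, 7, 19, 3 };

        QSORT(counts, 4, cmp_int);  /* no sizeof(counts[0]) to mistype */
        for (int i = 0; i < 4; i++)
            printf("%d\n", counts[i]);
        return 0;
    }
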
if (!(f = fopen(filename, "w")))
die_errno("Could not open '%s'", filename);
if (obj->type == OBJ_BLOB) {
- if (stream_blob_to_fd(fileno(f), obj->oid.hash, NULL, 1))
+ if (stream_blob_to_fd(fileno(f), &obj->oid, NULL, 1))
die_errno("Could not write '%s'", filename);
} else
fprintf(f, "%s\n", describe_object(obj));
fsck_object_dir(get_object_directory());
prepare_alt_odb();
- for (alt = alt_odb_list; alt; alt = alt->next) {
- /* directory name, minus trailing slash */
- size_t namelen = alt->name - alt->base - 1;
- struct strbuf name = STRBUF_INIT;
- strbuf_add(&name, alt->base, namelen);
- fsck_object_dir(name.buf);
- strbuf_release(&name);
- }
+ for (alt = alt_odb_list; alt; alt = alt->next)
+ fsck_object_dir(alt->path);
}
if (check_full) {
mode = active_cache[i]->ce_mode;
if (S_ISGITLINK(mode))
continue;
- blob = lookup_blob(active_cache[i]->sha1);
+ blob = lookup_blob(active_cache[i]->oid.hash);
if (!blob)
continue;
obj = &blob->object;
if (cached || (ce->ce_flags & CE_VALID) || ce_skip_worktree(ce)) {
if (ce_stage(ce) || ce_intent_to_add(ce))
continue;
- hit |= grep_sha1(opt, ce->sha1, ce->name, 0, ce->name);
+ hit |= grep_sha1(opt, ce->oid.hash, ce->name, 0,
+ ce->name);
}
else
hit |= grep_file(opt, ce->name);
int stdin_paths = 0;
int no_filters = 0;
int literally = 0;
+ int nongit = 0;
unsigned flags = HASH_FORMAT_CHECK;
const char *vpath = NULL;
const struct option hash_object_options[] = {
argc = parse_options(argc, argv, NULL, hash_object_options,
hash_object_usage, 0);
- if (flags & HASH_WRITE_OBJECT) {
+ if (flags & HASH_WRITE_OBJECT)
prefix = setup_git_directory();
- prefix_length = prefix ? strlen(prefix) : 0;
- if (vpath && prefix)
- vpath = prefix_filename(prefix, prefix_length, vpath);
- }
+ else
+ prefix = setup_git_directory_gently(&nongit);
+
+ prefix_length = prefix ? strlen(prefix) : 0;
+ if (vpath && prefix)
+ vpath = prefix_filename(prefix, prefix_length, vpath);
git_config(git_default_config, NULL);
static int show_guides = 0;
static unsigned int colopts;
static enum help_format help_format = HELP_FORMAT_NONE;
+static int exclude_guides;
static struct option builtin_help_options[] = {
OPT_BOOL('a', "all", &show_all, N_("print all available commands")),
+ OPT_HIDDEN_BOOL(0, "exclude-guides", &exclude_guides, N_("exclude guides")),
OPT_BOOL('g', "guides", &show_guides, N_("print list of useful guides")),
OPT_SET_INT('m', "man", &help_format, N_("show man page"), HELP_FORMAT_MAN),
OPT_SET_INT('w', "web", &help_format, N_("show manual in web browser"),
putchar('\n');
}
+static const char *check_git_cmd(const char* cmd)
+{
+ char *alias;
+
+ if (is_git_command(cmd))
+ return cmd;
+
+ alias = alias_lookup(cmd);
+ if (alias) {
+ printf_ln(_("`git %s' is aliased to `%s'"), cmd, alias);
+ free(alias);
+ exit(0);
+ }
+
+ if (exclude_guides)
+ return help_unknown_cmd(cmd);
+
+ return cmd;
+}
+
int cmd_help(int argc, const char **argv, const char *prefix)
{
int nongit;
- char *alias;
enum help_format parsed_help_format;
argc = parse_options(argc, argv, prefix, builtin_help_options,
if (help_format == HELP_FORMAT_NONE)
help_format = parse_help_format(DEFAULT_HELP_FORMAT);
- alias = alias_lookup(argv[0]);
- if (alias && !is_git_command(argv[0])) {
- printf_ln(_("`git %s' is aliased to `%s'"), argv[0], alias);
- free(alias);
- return 0;
- }
+ argv[0] = check_git_cmd(argv[0]);
switch (help_format) {
case HELP_FORMAT_NONE:
static unsigned char input_buffer[4096];
static unsigned int input_offset, input_len;
static off_t consumed_bytes;
+static off_t max_input_size;
static unsigned deepest_delta;
static git_SHA_CTX input_ctx;
static uint32_t input_crc32;
if (signed_add_overflows(consumed_bytes, bytes))
die(_("pack too large for current definition of off_t"));
consumed_bytes += bytes;
+ if (max_input_size && consumed_bytes > max_input_size)
+ die(_("pack exceeds maximum allowed size"));
}
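
The new --max-input-size limit is enforced as a running byte budget: each time data is consumed from the incoming pack stream, the running total is compared against the configured ceiling and the process dies as soon as it is exceeded, rather than only after the whole pack has been read. A standalone sketch of the same pattern; the limit value and the error handling here are illustrative only:

    #include <inttypes.h>
    #include <stdio.h>
    #include <stdlib.h>

    static uintmax_t consumed_bytes;
    static uintmax_t max_input_size;    /* 0 means "no limit" */

    static void use_bytes(size_t bytes)
    {
        consumed_bytes += bytes;
        if (max_input_size && consumed_bytes > max_input_size) {
            fprintf(stderr, "fatal: input exceeds maximum allowed size\n");
            exit(128);
        }
    }

    int main(void)
    {
        char buf[4096];
        size_t n;

        max_input_size = strtoumax("1048576", NULL, 10);  /* e.g. 1 MiB */
        while ((n = fread(buf, 1, sizeof(buf), stdin)) > 0)
            use_bytes(n);   /* aborts mid-stream once over budget */
        printf("read %" PRIuMAX " bytes\n", consumed_bytes);
        return 0;
    }
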
static const char *open_pack_file(const char *pack_name)
return;
/* Sort deltas by base SHA1/offset for fast searching */
- qsort(ofs_deltas, nr_ofs_deltas, sizeof(struct ofs_delta_entry),
- compare_ofs_delta_entry);
- qsort(ref_deltas, nr_ref_deltas, sizeof(struct ref_delta_entry),
- compare_ref_delta_entry);
+ QSORT(ofs_deltas, nr_ofs_deltas, compare_ofs_delta_entry);
+ QSORT(ref_deltas, nr_ref_deltas, compare_ref_delta_entry);
if (verbose || show_resolving_progress)
progress = start_progress(_("Resolving deltas"),
ALLOC_ARRAY(sorted_by_pos, nr_ref_deltas);
for (i = 0; i < nr_ref_deltas; i++)
sorted_by_pos[i] = &ref_deltas[i];
- qsort(sorted_by_pos, nr_ref_deltas, sizeof(*sorted_by_pos), delta_pos_compare);
+ QSORT(sorted_by_pos, nr_ref_deltas, delta_pos_compare);
for (i = 0; i < nr_ref_deltas; i++) {
struct ref_delta_entry *d = sorted_by_pos[i];
opts->anomaly[opts->anomaly_nr++] = ntohl(idx2[off * 2 + 1]);
}
- if (1 < opts->anomaly_nr)
- qsort(opts->anomaly, opts->anomaly_nr, sizeof(uint32_t), cmp_uint32);
+ QSORT(opts->anomaly, opts->anomaly_nr, cmp_uint32);
}
static void read_idx_option(struct pack_idx_option *opts, const char *pack_name)
opts.off32_limit = strtoul(c+1, &c, 0);
if (*c || opts.off32_limit & 0x80000000)
die(_("bad %s"), arg);
+ } else if (skip_prefix(arg, "--max-input-size=", &arg)) {
+ max_input_size = strtoumax(arg, NULL, 10);
} else
usage(index_pack_usage);
continue;
static int init_is_bare_repository = 0;
static int init_shared_repository = -1;
static const char *init_db_template_dir;
-static const char *git_link;
static void copy_templates_1(struct strbuf *path, struct strbuf *template,
DIR *dir)
goto close_free_return;
}
- strbuf_addstr(&path, get_git_dir());
+ strbuf_addstr(&path, get_git_common_dir());
strbuf_complete(&path, '/');
copy_templates_1(&path, &template_path, dir);
close_free_return:
return 1;
}
-static int create_default_files(const char *template_path)
+static int create_default_files(const char *template_path,
+ const char *original_git_dir)
{
struct stat st1;
struct strbuf buf = STRBUF_INIT;
char junk[2];
int reinit;
int filemode;
-
- /*
- * Create .git/refs/{heads,tags}
- */
- safe_create_dir(git_path_buf(&buf, "refs"), 1);
- safe_create_dir(git_path_buf(&buf, "refs/heads"), 1);
- safe_create_dir(git_path_buf(&buf, "refs/tags"), 1);
+ struct strbuf err = STRBUF_INIT;
/* Just look for `init.templatedir` */
git_config(git_init_db_config, NULL);
- /* First copy the templates -- we might have the default
+ /*
+ * First copy the templates -- we might have the default
* config file there, in which case we would want to read
* from it after installing.
+ *
+ * Before reading that config, we also need to clear out any cached
+ * values (since we've just potentially changed what's available on
+ * disk).
*/
copy_templates(template_path);
-
+ git_config_clear();
+ reset_shared_repository();
git_config(git_default_config, NULL);
- is_bare_repository_cfg = init_is_bare_repository;
- /* reading existing config may have overwrote it */
+ /*
+ * We must make sure command-line options continue to override any
+ * values we might have just re-read from the config.
+ */
+ is_bare_repository_cfg = init_is_bare_repository;
if (init_shared_repository != -1)
set_shared_repository(init_shared_repository);
*/
if (get_shared_repository()) {
adjust_shared_perm(get_git_dir());
- adjust_shared_perm(git_path_buf(&buf, "refs"));
- adjust_shared_perm(git_path_buf(&buf, "refs/heads"));
- adjust_shared_perm(git_path_buf(&buf, "refs/tags"));
}
+ /*
+ * We need to create a "refs" dir in any case so that older
+ * versions of git can tell that this is a repository.
+ */
+ safe_create_dir(git_path("refs"), 1);
+ adjust_shared_perm(git_path("refs"));
+
+ if (refs_init_db(&err))
+ die("failed to set up refs db: %s", err.buf);
+
/*
* Create the default symlink from ".git/HEAD" to the "master"
* branch, if it does not exist yet.
/* allow template config file to override the default */
if (log_all_ref_updates == -1)
git_config_set("core.logallrefupdates", "true");
- if (needs_work_tree_config(get_git_dir(), work_tree))
+ if (needs_work_tree_config(original_git_dir, work_tree))
git_config_set("core.worktree", work_tree);
}
strbuf_release(&path);
}
-int set_git_dir_init(const char *git_dir, const char *real_git_dir,
- int exist_ok)
-{
- if (real_git_dir) {
- struct stat st;
-
- if (!exist_ok && !stat(git_dir, &st))
- die(_("%s already exists"), git_dir);
-
- if (!exist_ok && !stat(real_git_dir, &st))
- die(_("%s already exists"), real_git_dir);
-
- /*
- * make sure symlinks are resolved because we'll be
- * moving the target repo later on in separate_git_dir()
- */
- git_link = xstrdup(real_path(git_dir));
- set_git_dir(real_path(real_git_dir));
- }
- else {
- set_git_dir(real_path(git_dir));
- git_link = NULL;
- }
- startup_info->have_repository = 1;
- return 0;
-}
-
-static void separate_git_dir(const char *git_dir)
+static void separate_git_dir(const char *git_dir, const char *git_link)
{
struct stat st;
write_file(git_link, "gitdir: %s", git_dir);
}
-int init_db(const char *template_dir, unsigned int flags)
+int init_db(const char *git_dir, const char *real_git_dir,
+ const char *template_dir, unsigned int flags)
{
int reinit;
- const char *git_dir = get_git_dir();
+ int exist_ok = flags & INIT_DB_EXIST_OK;
+ char *original_git_dir = xstrdup(real_path(git_dir));
- if (git_link)
- separate_git_dir(git_dir);
+ if (real_git_dir) {
+ struct stat st;
+
+ if (!exist_ok && !stat(git_dir, &st))
+ die(_("%s already exists"), git_dir);
+
+ if (!exist_ok && !stat(real_git_dir, &st))
+ die(_("%s already exists"), real_git_dir);
+
+ set_git_dir(real_path(real_git_dir));
+ git_dir = get_git_dir();
+ separate_git_dir(git_dir, original_git_dir);
+ }
+ else {
+ set_git_dir(real_path(git_dir));
+ git_dir = get_git_dir();
+ }
+ startup_info->have_repository = 1;
safe_create_dir(git_dir, 0);
*/
check_repository_format();
- reinit = create_default_files(template_dir);
+ reinit = create_default_files(template_dir, original_git_dir);
create_object_directory();
git_dir, len && git_dir[len-1] != '/' ? "/" : "");
}
+ free(original_git_dir);
return 0;
}
set_git_work_tree(work_tree);
}
- set_git_dir_init(git_dir, real_git_dir, 1);
-
- return init_db(template_dir, flags);
+ flags |= INIT_DB_EXIST_OK;
+ return init_db(git_dir, real_git_dir, template_dir, flags);
}
strbuf_release(&out);
}
-static int show_blob_object(const unsigned char *sha1, struct rev_info *rev, const char *obj_name)
+static int show_blob_object(const struct object_id *oid, struct rev_info *rev, const char *obj_name)
{
- unsigned char sha1c[20];
+ struct object_id oidc;
struct object_context obj_context;
char *buf;
unsigned long size;
fflush(rev->diffopt.file);
if (!DIFF_OPT_TOUCHED(&rev->diffopt, ALLOW_TEXTCONV) ||
!DIFF_OPT_TST(&rev->diffopt, ALLOW_TEXTCONV))
- return stream_blob_to_fd(1, sha1, NULL, 0);
+ return stream_blob_to_fd(1, oid, NULL, 0);
- if (get_sha1_with_context(obj_name, 0, sha1c, &obj_context))
+ if (get_sha1_with_context(obj_name, 0, oidc.hash, &obj_context))
die(_("Not a valid object name %s"), obj_name);
if (!obj_context.path[0] ||
- !textconv_object(obj_context.path, obj_context.mode, sha1c, 1, &buf, &size))
- return stream_blob_to_fd(1, sha1, NULL, 0);
+ !textconv_object(obj_context.path, obj_context.mode, &oidc, 1, &buf, &size))
+ return stream_blob_to_fd(1, oid, NULL, 0);
if (!buf)
die(_("git show %s: bad file"), obj_name);
return 0;
}
-static int show_tag_object(const unsigned char *sha1, struct rev_info *rev)
+static int show_tag_object(const struct object_id *oid, struct rev_info *rev)
{
unsigned long size;
enum object_type type;
- char *buf = read_sha1_file(sha1, &type, &size);
+ char *buf = read_sha1_file(oid->hash, &type, &size);
int offset = 0;
if (!buf)
- return error(_("Could not read object %s"), sha1_to_hex(sha1));
+ return error(_("Could not read object %s"), oid_to_hex(oid));
assert(type == OBJ_TAG);
while (offset < size && buf[offset] != '\n') {
const char *name = objects[i].name;
switch (o->type) {
case OBJ_BLOB:
- ret = show_blob_object(o->oid.hash, &rev, name);
+ ret = show_blob_object(&o->oid, &rev, name);
break;
case OBJ_TAG: {
struct tag *t = (struct tag *)o;
diff_get_color_opt(&rev.diffopt, DIFF_COMMIT),
t->tag,
diff_get_color_opt(&rev.diffopt, DIFF_RESET));
- ret = show_tag_object(o->oid.hash, &rev);
+ ret = show_tag_object(&o->oid, &rev);
rev.shown_one = 1;
if (ret)
break;
return 0;
}
+static int rfc_callback(const struct option *opt, const char *arg, int unset)
+{
+ return subject_prefix_callback(opt, "RFC PATCH", unset);
+}
+
static int numbered_cmdline_opt = 0;
static int numbered_callback(const struct option *opt, const char *arg,
if (upstream) {
struct commit_list *base_list;
struct commit *commit;
- unsigned char sha1[20];
+ struct object_id oid;
- if (get_sha1(upstream, sha1))
+ if (get_oid(upstream, &oid))
die(_("Failed to resolve '%s' as a valid ref."), upstream);
- commit = lookup_commit_or_die(sha1, "upstream base");
+ commit = lookup_commit_or_die(oid.hash, "upstream base");
base_list = get_merge_bases_many(commit, total, list);
/* There should be one and only one merge base. */
if (!base_list || base_list->next)
* and stuff them in bases structure.
*/
while ((commit = get_revision(&revs)) != NULL) {
- unsigned char sha1[20];
+ struct object_id oid;
struct object_id *patch_id;
if (commit->util)
continue;
- if (commit_patch_id(commit, &diffopt, sha1, 0))
+ if (commit_patch_id(commit, &diffopt, oid.hash, 0))
die(_("cannot get patch id"));
ALLOC_GROW(bases->patch_id, bases->nr_patch_id + 1, bases->alloc_patch_id);
patch_id = bases->patch_id + bases->nr_patch_id;
- hashcpy(patch_id->hash, sha1);
+ oidcpy(patch_id, &oid);
bases->nr_patch_id++;
}
}
N_("start numbering patches at <n> instead of 1")),
OPT_INTEGER('v', "reroll-count", &reroll_count,
N_("mark the series as Nth re-roll")),
+ { OPTION_CALLBACK, 0, "rfc", &rev, NULL,
+ N_("Use [RFC PATCH] instead of [PATCH]"),
+ PARSE_OPT_NOARG | PARSE_OPT_NONEG, rfc_callback },
{ OPTION_CALLBACK, 0, "subject-prefix", &rev, N_("prefix"),
N_("Use [<prefix>] instead of [PATCH]"),
PARSE_OPT_NONEG, subject_prefix_callback },
if (numbered && keep_subject)
die (_("-n and -k are mutually exclusive."));
if (keep_subject && subject_prefix)
- die (_("--subject-prefix and -k are mutually exclusive."));
+ die (_("--subject-prefix/--rfc and -k are mutually exclusive."));
rev.preserve_subject = keep_subject;
argc = setup_revisions(argc, argv, &rev, &s_r_opt);
check_head = 1;
if (check_head) {
- unsigned char sha1[20];
+ struct object_id oid;
const char *ref, *v;
ref = resolve_ref_unsafe("HEAD", RESOLVE_REF_READING,
- sha1, NULL);
+ oid.hash, NULL);
if (ref && skip_prefix(ref, "refs/heads/", &v))
branch_name = xstrdup(v);
else
/* nothing to do */
return 0;
total = nr;
- if (!keep_subject && auto_number && total > 1)
- numbered = 1;
- if (numbered)
- rev.total = total + start_number - 1;
if (cover_letter == -1) {
if (config_cover_letter == COVER_AUTO)
cover_letter = (total > 1);
else
cover_letter = (config_cover_letter == COVER_ON);
}
+ if (!keep_subject && auto_number && (total > 1 || cover_letter))
+ numbered = 1;
+ if (numbered)
+ rev.total = total + start_number - 1;
if (!signature) {
; /* --no-signature inhibits all signatures */
static int add_pending_commit(const char *arg, struct rev_info *revs, int flags)
{
- unsigned char sha1[20];
- if (get_sha1(arg, sha1) == 0) {
- struct commit *commit = lookup_commit_reference(sha1);
+ struct object_id oid;
+ if (get_oid(arg, &oid) == 0) {
+ struct commit *commit = lookup_commit_reference(oid.hash);
if (commit) {
commit->object.flags |= flags;
add_pending_object(revs, &commit->object, arg);
#include "resolve-undo.h"
#include "string-list.h"
#include "pathspec.h"
+#include "run-command.h"
static int abbrev;
static int show_deleted;
static int line_terminator = '\n';
static int debug_mode;
static int show_eol;
+static int recurse_submodules;
+static struct argv_array submodules_options = ARGV_ARRAY_INIT;
static const char *prefix;
+static const char *super_prefix;
static int max_prefix_len;
static int prefix_len;
static struct pathspec pathspec;
static void write_name(const char *name)
{
+ /*
+ * Prepend the super_prefix to name to construct the full_name to be
+ * written.
+ */
+ struct strbuf full_name = STRBUF_INIT;
+ if (super_prefix) {
+ strbuf_addstr(&full_name, super_prefix);
+ strbuf_addstr(&full_name, name);
+ name = full_name.buf;
+ }
+
/*
* With "--full-name", prefix_len=0; this caller needs to pass
* an empty string in that case (a NULL is good for "").
*/
write_name_quoted_relative(name, prefix_len ? prefix : NULL,
stdout, line_terminator);
+
+ strbuf_release(&full_name);
}
static void show_dir_entry(const char *tag, struct dir_entry *ent)
}
}
+/*
+ * Compile an argv_array with all of the options supported by --recurse-submodules
+ */
+static void compile_submodule_options(const struct dir_struct *dir, int show_tag)
+{
+ if (line_terminator == '\0')
+ argv_array_push(&submodules_options, "-z");
+ if (show_tag)
+ argv_array_push(&submodules_options, "-t");
+ if (show_valid_bit)
+ argv_array_push(&submodules_options, "-v");
+ if (show_cached)
+ argv_array_push(&submodules_options, "--cached");
+ if (show_eol)
+ argv_array_push(&submodules_options, "--eol");
+ if (debug_mode)
+ argv_array_push(&submodules_options, "--debug");
+}
+
+/**
+ * Recursively call ls-files on a submodule
+ */
+static void show_gitlink(const struct cache_entry *ce)
+{
+ struct child_process cp = CHILD_PROCESS_INIT;
+ int status;
+ int i;
+
+ argv_array_pushf(&cp.args, "--super-prefix=%s%s/",
+ super_prefix ? super_prefix : "",
+ ce->name);
+ argv_array_push(&cp.args, "ls-files");
+ argv_array_push(&cp.args, "--recurse-submodules");
+
+ /* add supported options */
+ argv_array_pushv(&cp.args, submodules_options.argv);
+
+ /*
+ * Pass in the original pathspec args. The submodule will be
+ * responsible for prepending the 'submodule_prefix' prior to comparing
+ * against the pathspec for matches.
+ */
+ argv_array_push(&cp.args, "--");
+ for (i = 0; i < pathspec.nr; i++)
+ argv_array_push(&cp.args, pathspec.items[i].original);
+
+ cp.git_cmd = 1;
+ cp.dir = ce->name;
+ status = run_command(&cp);
+ if (status)
+ exit(status);
+}
+
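
show_gitlink() handles --recurse-submodules by re-running "git ls-files --recurse-submodules" inside each submodule's working directory, passing --super-prefix=<path>/ so the child can print paths relative to the superproject. A rough standalone sketch of that fork-and-exec pattern; the argument list is illustrative, and git itself builds it with argv_array and run_command():

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* Run "git <args...>" inside `dir` and return its exit status. */
    static int run_git_in(const char *dir, char *const argv[])
    {
        pid_t pid = fork();

        if (pid < 0)
            return -1;
        if (pid == 0) {
            if (chdir(dir) < 0)
                _exit(127);
            execvp("git", argv);
            _exit(127);     /* exec failed */
        }
        int status;
        if (waitpid(pid, &status, 0) < 0)
            return -1;
        return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
    }

    int main(void)
    {
        const char *submodule = "sub";  /* hypothetical gitlink path */
        char super_prefix[256];

        snprintf(super_prefix, sizeof(super_prefix),
                 "--super-prefix=%s/", submodule);
        char *argv[] = {
            "git", super_prefix, "ls-files", "--recurse-submodules", "--", NULL,
        };
        int status = run_git_in(submodule, argv);
        if (status)
            exit(status);   /* propagate the child's failure, like show_gitlink() */
        return 0;
    }
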
static void show_ce_entry(const char *tag, const struct cache_entry *ce)
{
+ struct strbuf name = STRBUF_INIT;
int len = max_prefix_len;
+ if (super_prefix)
+ strbuf_addstr(&name, super_prefix);
+ strbuf_addstr(&name, ce->name);
if (len >= ce_namelen(ce))
die("git ls-files: internal error - cache entry not superset of prefix");
- if (!match_pathspec(&pathspec, ce->name, ce_namelen(ce),
- len, ps_matched,
- S_ISDIR(ce->ce_mode) || S_ISGITLINK(ce->ce_mode)))
- return;
+ if (recurse_submodules && S_ISGITLINK(ce->ce_mode) &&
+ submodule_path_match(&pathspec, name.buf, ps_matched)) {
+ show_gitlink(ce);
+ } else if (match_pathspec(&pathspec, name.buf, name.len,
+ len, ps_matched,
+ S_ISDIR(ce->ce_mode) ||
+ S_ISGITLINK(ce->ce_mode))) {
+ if (tag && *tag && show_valid_bit &&
+ (ce->ce_flags & CE_VALID)) {
+ static char alttag[4];
+ memcpy(alttag, tag, 3);
+ if (isalpha(tag[0]))
+ alttag[0] = tolower(tag[0]);
+ else if (tag[0] == '?')
+ alttag[0] = '!';
+ else {
+ alttag[0] = 'v';
+ alttag[1] = tag[0];
+ alttag[2] = ' ';
+ alttag[3] = 0;
+ }
+ tag = alttag;
+ }
- if (tag && *tag && show_valid_bit &&
- (ce->ce_flags & CE_VALID)) {
- static char alttag[4];
- memcpy(alttag, tag, 3);
- if (isalpha(tag[0]))
- alttag[0] = tolower(tag[0]);
- else if (tag[0] == '?')
- alttag[0] = '!';
- else {
- alttag[0] = 'v';
- alttag[1] = tag[0];
- alttag[2] = ' ';
- alttag[3] = 0;
+ if (!show_stage) {
+ fputs(tag, stdout);
+ } else {
+ printf("%s%06o %s %d\t",
+ tag,
+ ce->ce_mode,
+ find_unique_abbrev(ce->oid.hash, abbrev),
+ ce_stage(ce));
+ }
+ write_eolinfo(ce, ce->name);
+ write_name(ce->name);
+ if (debug_mode) {
+ const struct stat_data *sd = &ce->ce_stat_data;
+
+ printf(" ctime: %d:%d\n", sd->sd_ctime.sec, sd->sd_ctime.nsec);
+ printf(" mtime: %d:%d\n", sd->sd_mtime.sec, sd->sd_mtime.nsec);
+ printf(" dev: %d\tino: %d\n", sd->sd_dev, sd->sd_ino);
+ printf(" uid: %d\tgid: %d\n", sd->sd_uid, sd->sd_gid);
+ printf(" size: %d\tflags: %x\n", sd->sd_size, ce->ce_flags);
}
- tag = alttag;
}
- if (!show_stage) {
- fputs(tag, stdout);
- } else {
- printf("%s%06o %s %d\t",
- tag,
- ce->ce_mode,
- find_unique_abbrev(ce->sha1,abbrev),
- ce_stage(ce));
- }
- write_eolinfo(ce, ce->name);
- write_name(ce->name);
- if (debug_mode) {
- const struct stat_data *sd = &ce->ce_stat_data;
-
- printf(" ctime: %d:%d\n", sd->sd_ctime.sec, sd->sd_ctime.nsec);
- printf(" mtime: %d:%d\n", sd->sd_mtime.sec, sd->sd_mtime.nsec);
- printf(" dev: %d\tino: %d\n", sd->sd_dev, sd->sd_ino);
- printf(" uid: %d\tgid: %d\n", sd->sd_uid, sd->sd_gid);
- printf(" size: %d\tflags: %x\n", sd->sd_size, ce->ce_flags);
- }
+ strbuf_release(&name);
}
static void show_ru_info(void)
{ OPTION_SET_INT, 0, "full-name", &prefix_len, NULL,
N_("make the output relative to the project top directory"),
PARSE_OPT_NOARG | PARSE_OPT_NONEG, NULL },
+ OPT_BOOL(0, "recurse-submodules", &recurse_submodules,
+ N_("recurse through submodules")),
OPT_BOOL(0, "error-unmatch", &error_unmatch,
N_("if any <file> is not in the index, treat this as an error")),
OPT_STRING(0, "with-tree", &with_tree, N_("tree-ish"),
prefix = cmd_prefix;
if (prefix)
prefix_len = strlen(prefix);
+ super_prefix = get_super_prefix();
git_config(git_default_config, NULL);
if (read_cache() < 0)
if (require_work_tree && !is_inside_work_tree())
setup_work_tree();
+ if (recurse_submodules)
+ compile_submodule_options(&dir, show_tag);
+
+ if (recurse_submodules &&
+ (show_stage || show_deleted || show_others || show_unmerged ||
+ show_killed || show_modified || show_resolve_undo || with_tree))
+ die("ls-files --recurse-submodules unsupported mode");
+
+ if (recurse_submodules && error_unmatch)
+ die("ls-files --recurse-submodules does not support "
+ "--error-unmatch");
+
parse_pathspec(&pathspec, 0,
PATHSPEC_PREFER_CWD |
PATHSPEC_STRIP_SUBMODULE_SLASH_CHEAP,
prefix, argv);
- /* Find common prefix for all pathspec's */
- max_prefix = common_prefix(&pathspec);
+ /*
+ * Find the common prefix for all pathspecs. This is used as a
+ * performance optimization which unfortunately cannot be done when
+ * recursing into submodules.
+ */
+ if (recurse_submodules)
+ max_prefix = NULL;
+ else
+ max_prefix = common_prefix(&pathspec);
max_prefix_len = max_prefix ? strlen(max_prefix) : 0;
/* Treat unmatching pathspec elements as errors */
if (strcmp(ce->name, path))
break;
found++;
- sha1_to_hex_r(hexbuf[stage], ce->sha1);
+ sha1_to_hex_r(hexbuf[stage], ce->oid.hash);
xsnprintf(ownbuf[stage], sizeof(ownbuf[stage]), "%o", ce->ce_mode);
arguments[stage] = hexbuf[stage];
arguments[stage + 4] = ownbuf[stage];
if (!arg[2])
break;
if (parse_merge_opt(&o, arg + 2))
- die("Unknown option %s", arg);
+ die(_("unknown option %s"), arg);
continue;
}
if (bases_count < ARRAY_SIZE(bases)-1) {
struct object_id *oid = xmalloc(sizeof(struct object_id));
if (get_oid(argv[i], oid))
- die("Could not parse object '%s'", argv[i]);
+ die(_("could not parse object '%s'"), argv[i]);
bases[bases_count++] = oid;
}
else
- warning("Cannot handle more than %d bases. "
- "Ignoring %s.",
+ warning(Q_("cannot handle more than %d base. "
+ "Ignoring %s.",
+ "cannot handle more than %d bases. "
+ "Ignoring %s.",
+ (int)ARRAY_SIZE(bases)-1),
(int)ARRAY_SIZE(bases)-1, argv[i]);
}
if (argc - i != 3) /* "--" "<head>" "<remote>" */
- die("Not handling anything other than two heads merge.");
+ die(_("not handling anything other than two heads merge."));
o.branch1 = argv[++i];
o.branch2 = argv[++i];
if (get_oid(o.branch1, &h1))
- die("Could not resolve ref '%s'", o.branch1);
+ die(_("could not resolve ref '%s'"), o.branch1);
if (get_oid(o.branch2, &h2))
- die("Could not resolve ref '%s'", o.branch2);
+ die(_("could not resolve ref '%s'"), o.branch2);
o.branch1 = better_branch_name(o.branch1);
o.branch2 = better_branch_name(o.branch2);
if (o.verbosity >= 3)
- printf("Merging %s with %s\n", o.branch1, o.branch2);
+ printf(_("Merging %s with %s\n"), o.branch1, o.branch2);
failed = merge_recursive_generic(&o, &h1, &h2, bases_count, bases, &result);
if (failed < 0)
struct commit *commit;
if (verbosity >= 0) {
- char from[GIT_SHA1_HEXSZ + 1], to[GIT_SHA1_HEXSZ + 1];
- find_unique_abbrev_r(from, head_commit->object.oid.hash,
- DEFAULT_ABBREV);
- find_unique_abbrev_r(to, remoteheads->item->object.oid.hash,
- DEFAULT_ABBREV);
- printf(_("Updating %s..%s\n"), from, to);
+ printf(_("Updating %s..%s\n"),
+ find_unique_abbrev(head_commit->object.oid.hash,
+ DEFAULT_ABBREV),
+ find_unique_abbrev(remoteheads->item->object.oid.hash,
+ DEFAULT_ABBREV));
}
strbuf_addstr(&msg, "Fast-forward");
if (have_message)
size_t size;
int i;
- qsort(entries, used, sizeof(*entries), ent_compare);
+ QSORT(entries, used, ent_compare);
for (size = i = 0; i < used; i++)
size += 32 + entries[i]->len;
return NULL;
if (!tip_table.sorted) {
- qsort(tip_table.table, tip_table.nr, sizeof(*tip_table.table),
- tipcmp);
+ QSORT(tip_table.table, tip_table.nr, tipcmp);
tip_table.sorted = 1;
}
strbuf_reset(&d->buf);
if (launch_editor(d->edit_path, &d->buf, NULL)) {
- die(_("Please supply the note contents using either -m or -F option"));
+ die(_("please supply the note contents using either -m or -F option"));
}
strbuf_stripspace(&d->buf, 1);
}
if (write_sha1_file(d->buf.buf, d->buf.len, blob_type, sha1)) {
error(_("unable to write note object"));
if (d->edit_path)
- error(_("The note contents have been left in %s"),
+ error(_("the note contents have been left in %s"),
d->edit_path);
exit(128);
}
strbuf_addch(&d->buf, '\n');
if (get_sha1(arg, object))
- die(_("Failed to resolve '%s' as a valid ref."), arg);
+ die(_("failed to resolve '%s' as a valid ref."), arg);
if (!(buf = read_sha1_file(object, &type, &len))) {
free(buf);
- die(_("Failed to read object '%s'."), arg);
+ die(_("failed to read object '%s'."), arg);
}
if (type != OBJ_BLOB) {
free(buf);
- die(_("Cannot read note data from non-blob object '%s'."), arg);
+ die(_("cannot read note data from non-blob object '%s'."), arg);
}
strbuf_add(&d->buf, buf, len);
free(buf);
split = strbuf_split(&buf, ' ');
if (!split[0] || !split[1])
- die(_("Malformed input line: '%s'."), buf.buf);
+ die(_("malformed input line: '%s'."), buf.buf);
strbuf_rtrim(split[0]);
strbuf_rtrim(split[1]);
if (get_sha1(split[0]->buf, from_obj))
- die(_("Failed to resolve '%s' as a valid ref."), split[0]->buf);
+ die(_("failed to resolve '%s' as a valid ref."), split[0]->buf);
if (get_sha1(split[1]->buf, to_obj))
- die(_("Failed to resolve '%s' as a valid ref."), split[1]->buf);
+ die(_("failed to resolve '%s' as a valid ref."), split[1]->buf);
if (rewrite_cmd)
err = copy_note_for_rewrite(c, from_obj, to_obj);
combine_notes_overwrite);
if (err) {
- error(_("Failed to copy notes from '%s' to '%s'"),
+ error(_("failed to copy notes from '%s' to '%s'"),
split[0]->buf, split[1]->buf);
ret = 1;
}
ref = (flags & NOTES_INIT_WRITABLE) ? t->update_ref : t->ref;
if (!starts_with(ref, "refs/notes/"))
- die("Refusing to %s notes in %s (outside of refs/notes/)",
+ /* TRANSLATORS: the first %s will be replaced by a
+ git notes command: 'add', 'merge', 'remove', etc.*/
+ die(_("refusing to %s notes in %s (outside of refs/notes/)"),
subcommand, ref);
return t;
}
t = init_notes_check("list", 0);
if (argc) {
if (get_sha1(argv[0], object))
- die(_("Failed to resolve '%s' as a valid ref."), argv[0]);
+ die(_("failed to resolve '%s' as a valid ref."), argv[0]);
note = get_note(t, object);
if (note) {
puts(sha1_to_hex(note));
retval = 0;
} else
- retval = error(_("No note found for object %s."),
+ retval = error(_("no note found for object %s."),
sha1_to_hex(object));
} else
retval = for_each_note(t, 0, list_each_note, NULL);
object_ref = argc > 1 ? argv[1] : "HEAD";
if (get_sha1(object_ref, object))
- die(_("Failed to resolve '%s' as a valid ref."), object_ref);
+ die(_("failed to resolve '%s' as a valid ref."), object_ref);
t = init_notes_check("add", NOTES_INIT_WRITABLE);
note = get_note(t, object);
}
if (get_sha1(argv[0], from_obj))
- die(_("Failed to resolve '%s' as a valid ref."), argv[0]);
+ die(_("failed to resolve '%s' as a valid ref."), argv[0]);
object_ref = 1 < argc ? argv[1] : "HEAD";
if (get_sha1(object_ref, object))
- die(_("Failed to resolve '%s' as a valid ref."), object_ref);
+ die(_("failed to resolve '%s' as a valid ref."), object_ref);
t = init_notes_check("copy", NOTES_INIT_WRITABLE);
note = get_note(t, object);
from_note = get_note(t, from_obj);
if (!from_note) {
- retval = error(_("Missing notes on source object %s. Cannot "
+ retval = error(_("missing notes on source object %s. Cannot "
"copy."), sha1_to_hex(from_obj));
goto out;
}
object_ref = 1 < argc ? argv[1] : "HEAD";
if (get_sha1(object_ref, object))
- die(_("Failed to resolve '%s' as a valid ref."), object_ref);
+ die(_("failed to resolve '%s' as a valid ref."), object_ref);
t = init_notes_check(argv[0], NOTES_INIT_WRITABLE);
note = get_note(t, object);
object_ref = argc ? argv[0] : "HEAD";
if (get_sha1(object_ref, object))
- die(_("Failed to resolve '%s' as a valid ref."), object_ref);
+ die(_("failed to resolve '%s' as a valid ref."), object_ref);
t = init_notes_check("show", 0);
note = get_note(t, object);
if (!note)
- retval = error(_("No note found for object %s."),
+ retval = error(_("no note found for object %s."),
sha1_to_hex(object));
else {
const char *show_args[3] = {"show", sha1_to_hex(note), NULL};
*/
if (delete_ref("NOTES_MERGE_PARTIAL", NULL, 0))
- ret += error("Failed to delete ref NOTES_MERGE_PARTIAL");
+ ret += error(_("failed to delete ref NOTES_MERGE_PARTIAL"));
if (delete_ref("NOTES_MERGE_REF", NULL, REF_NODEREF))
- ret += error("Failed to delete ref NOTES_MERGE_REF");
+ ret += error(_("failed to delete ref NOTES_MERGE_REF"));
if (notes_merge_abort(o))
- ret += error("Failed to remove 'git notes merge' worktree");
+ ret += error(_("failed to remove 'git notes merge' worktree"));
return ret;
}
*/
if (get_sha1("NOTES_MERGE_PARTIAL", sha1))
- die("Failed to read ref NOTES_MERGE_PARTIAL");
+ die(_("failed to read ref NOTES_MERGE_PARTIAL"));
else if (!(partial = lookup_commit_reference(sha1)))
- die("Could not find commit from NOTES_MERGE_PARTIAL.");
+ die(_("could not find commit from NOTES_MERGE_PARTIAL."));
else if (parse_commit(partial))
- die("Could not parse commit from NOTES_MERGE_PARTIAL.");
+ die(_("could not parse commit from NOTES_MERGE_PARTIAL."));
if (partial->parents)
hashcpy(parent_sha1, partial->parents->item->object.oid.hash);
o->local_ref = local_ref_to_free =
resolve_refdup("NOTES_MERGE_REF", 0, sha1, NULL);
if (!o->local_ref)
- die("Failed to resolve NOTES_MERGE_REF");
+ die(_("failed to resolve NOTES_MERGE_REF"));
if (notes_merge_commit(o, t, partial, sha1))
- die("Failed to finalize notes merge");
+ die(_("failed to finalize notes merge"));
/* Reuse existing commit message in reflog message */
memset(&pretty_ctx, 0, sizeof(pretty_ctx));
}
if (do_merge && argc != 1) {
- error(_("Must specify a notes ref to merge"));
+ error(_("must specify a notes ref to merge"));
usage_with_options(git_notes_merge_usage, options);
} else if (!do_merge && argc) {
error(_("too many parameters"));
if (strategy) {
if (parse_notes_merge_strategy(strategy, &o.strategy)) {
- error(_("Unknown -s/--strategy: %s"), strategy);
+ error(_("unknown -s/--strategy: %s"), strategy);
usage_with_options(git_notes_merge_usage, options);
}
} else {
/* Store ref-to-be-updated into .git/NOTES_MERGE_REF */
wt = find_shared_symref("NOTES_MERGE_REF", default_notes_ref());
if (wt)
- die(_("A notes merge into %s is already in-progress at %s"),
+ die(_("a notes merge into %s is already in-progress at %s"),
default_notes_ref(), wt->path);
if (create_symref("NOTES_MERGE_REF", default_notes_ref(), NULL))
- die(_("Failed to store link to current notes ref (%s)"),
+ die(_("failed to store link to current notes ref (%s)"),
default_notes_ref());
printf(_("Automatic notes merge failed. Fix conflicts in %s and "
"commit the result with 'git notes merge --commit', or "
else if (!strcmp(argv[0], "get-ref"))
result = get_ref(argc, argv, prefix);
else {
- result = error(_("Unknown subcommand: %s"), argv[0]);
+ result = error(_("unknown subcommand: %s"), argv[0]);
usage_with_options(git_notes_usage, options);
}
#include "reachable.h"
#include "sha1-array.h"
#include "argv-array.h"
+#include "mru.h"
static const char *pack_usage[] = {
N_("git pack-objects --stdout [<options>...] [< <ref-list> | < <object-list>]"),
static uint32_t reuse_packfile_objects;
static off_t reuse_packfile_offset;
-static int use_bitmap_index = 1;
+static int use_bitmap_index_default = 1;
+static int use_bitmap_index = -1;
static int write_bitmap_index;
static uint16_t write_bitmap_options;
if (!is_pack_valid(reuse_packfile))
die("packfile is invalid: %s", reuse_packfile->pack_name);
- fd = git_open_noatime(reuse_packfile->pack_name);
+ fd = git_open(reuse_packfile->pack_name);
if (fd < 0)
die_errno("unable to open packfile for reuse: %s",
reuse_packfile->pack_name);
return 1;
}
+static int want_found_object(int exclude, struct packed_git *p)
+{
+ if (exclude)
+ return 1;
+ if (incremental)
+ return 0;
+
+ /*
+ * When asked to do --local (do not include an object that appears in a
+ * pack we borrow from elsewhere) or --honor-pack-keep (do not include
+ * an object that appears in a pack marked with .keep), finding a pack
+ * that matches the criteria is sufficient for us to decide to omit it.
+ * However, even if this pack does not satisfy the criteria, we need to
+ * make sure that no copy of this object appears in _any_ pack that would
+ * cause us to omit the object, so we need to check all the packs.
+ *
+ * We can, however, first check whether these options can possibly matter;
+ * if they do not matter, we know we want the object in the generated pack.
+ * Otherwise, we signal "-1" at the end to tell the caller that we do
+ * not know either way, and it needs to check more packs.
+ */
+ if (!ignore_packed_keep &&
+ (!local || !have_non_local_packs))
+ return 1;
+
+ if (local && !p->pack_local)
+ return 0;
+ if (ignore_packed_keep && p->pack_local && p->pack_keep)
+ return 0;
+
+ /* we don't know yet; keep looking for more packs */
+ return -1;
+}
+
/*
* Check whether we want the object in the pack (e.g., we do not want
* objects found in non-local stores if the "--local" option was used).
*
- * As a side effect of this check, we will find the packed version of this
- * object, if any. We therefore pass out the pack information to avoid having
- * to look it up again later.
+ * If the caller already knows an existing pack it wants to take the object
+ * from, that is passed in *found_pack and *found_offset; otherwise this
+ * function looks for any pack that has the object and, if one is found,
+ * returns the pack and the object's offset in it through these variables.
*/
static int want_object_in_pack(const unsigned char *sha1,
int exclude,
struct packed_git **found_pack,
off_t *found_offset)
{
- struct packed_git *p;
+ struct mru_entry *entry;
+ int want;
if (!exclude && local && has_loose_object_nonlocal(sha1))
return 0;
- *found_pack = NULL;
- *found_offset = 0;
+ /*
+ * If we already know the pack the object lives in, start the checks from
+ * that pack - in the usual case, when neither --local was given nor .keep
+ * files are present, we will determine the answer right now.
+ */
+ if (*found_pack) {
+ want = want_found_object(exclude, *found_pack);
+ if (want != -1)
+ return want;
+ }
+
+ for (entry = packed_git_mru->head; entry; entry = entry->next) {
+ struct packed_git *p = entry->item;
+ off_t offset;
+
+ if (p == *found_pack)
+ offset = *found_offset;
+ else
+ offset = find_pack_entry_one(sha1, p);
- for (p = packed_git; p; p = p->next) {
- off_t offset = find_pack_entry_one(sha1, p);
if (offset) {
if (!*found_pack) {
if (!is_pack_valid(p))
*found_offset = offset;
*found_pack = p;
}
- if (exclude)
- return 1;
- if (incremental)
- return 0;
-
- /*
- * When asked to do --local (do not include an
- * object that appears in a pack we borrow
- * from elsewhere) or --honor-pack-keep (do not
- * include an object that appears in a pack marked
- * with .keep), we need to make sure no copy of this
- * object come from in _any_ pack that causes us to
- * omit it, and need to complete this loop. When
- * neither option is in effect, we know the object
- * we just found is going to be packed, so break
- * out of the loop to return 1 now.
- */
- if (!ignore_packed_keep &&
- (!local || !have_non_local_packs))
- break;
-
- if (local && !p->pack_local)
- return 0;
- if (ignore_packed_keep && p->pack_local && p->pack_keep)
- return 0;
+ want = want_found_object(exclude, p);
+ if (!exclude && want > 0)
+ mru_mark(packed_git_mru, entry);
+ if (want != -1)
+ return want;
}
}
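
The loop above walks the packs in most-recently-used order (packed_git_mru) and calls mru_mark() on a hit, so repeated lookups tend to find their pack at the front of the list instead of rescanning every pack. A minimal standalone sketch of such an MRU list; this is the idea only, not git's mru.[ch] API:

    #include <stdio.h>
    #include <stdlib.h>

    struct mru_entry {
        void *item;
        struct mru_entry *prev, *next;
    };

    struct mru {
        struct mru_entry *head, *tail;
    };

    static void mru_append(struct mru *mru, void *item)
    {
        struct mru_entry *e = calloc(1, sizeof(*e));
        e->item = item;
        e->prev = mru->tail;
        if (mru->tail)
            mru->tail->next = e;
        else
            mru->head = e;
        mru->tail = e;
    }

    /* Move a just-used entry to the front so the next search hits it first. */
    static void mru_mark(struct mru *mru, struct mru_entry *e)
    {
        if (e == mru->head)
            return;
        e->prev->next = e->next;
        if (e->next)
            e->next->prev = e->prev;
        else
            mru->tail = e->prev;
        e->prev = NULL;
        e->next = mru->head;
        mru->head->prev = e;
        mru->head = e;
    }

    int main(void)
    {
        struct mru packs = { NULL, NULL };
        const char *names[] = { "pack-a", "pack-b", "pack-c" };

        for (int i = 0; i < 3; i++)
            mru_append(&packs, (void *)names[i]);

        /* Simulate a lookup that succeeds in the last pack... */
        mru_mark(&packs, packs.tail);

        /* ...so it is now checked first. */
        for (struct mru_entry *e = packs.head; e; e = e->next)
            printf("%s\n", (const char *)e->item);
        return 0;
    }
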
static int add_object_entry(const unsigned char *sha1, enum object_type type,
const char *name, int exclude)
{
- struct packed_git *found_pack;
- off_t found_offset;
+ struct packed_git *found_pack = NULL;
+ off_t found_offset = 0;
uint32_t index_pos;
if (have_duplicate_entry(sha1, exclude, &index_pos))
if (have_duplicate_entry(sha1, 0, &index_pos))
return 0;
+ if (!want_object_in_pack(sha1, 0, &pack, &offset))
+ return 0;
+
create_object_entry(sha1, type, name_hash, 0, 0, index_pos, pack, offset);
display_progress(progress_state, nr_result);
(a->in_pack_offset > b->in_pack_offset);
}
+/*
+ * Drop an on-disk delta we were planning to reuse. Naively, this would
+ * just involve blanking out the "delta" field, but we have to deal
+ * with some extra book-keeping:
+ *
+ * 1. Removing ourselves from the delta_sibling linked list.
+ *
+ * 2. Updating our size/type to the non-delta representation. These were
+ * either not recorded initially (size) or overwritten with the delta type
+ * (type) when check_object() decided to reuse the delta.
+ */
+static void drop_reused_delta(struct object_entry *entry)
+{
+ struct object_entry **p = &entry->delta->delta_child;
+ struct object_info oi = OBJECT_INFO_INIT;
+
+ while (*p) {
+ if (*p == entry)
+ *p = (*p)->delta_sibling;
+ else
+ p = &(*p)->delta_sibling;
+ }
+ entry->delta = NULL;
+
+ oi.sizep = &entry->size;
+ oi.typep = &entry->type;
+ if (packed_object_info(entry->in_pack, entry->in_pack_offset, &oi) < 0) {
+ /*
+ * We failed to get the info from this pack for some reason;
+ * fall back to sha1_object_info, which may find another copy.
+ * And if that fails, the error will be recorded in entry->type
+ * and dealt with in prepare_pack().
+ */
+ entry->type = sha1_object_info(entry->idx.sha1, &entry->size);
+ }
+}
+
+/*
+ * Follow the chain of deltas from this entry onward, throwing away any links
+ * that cause us to hit a cycle (as determined by the DFS state flags in
+ * the entries).
+ */
+static void break_delta_chains(struct object_entry *entry)
+{
+ /* If it's not a delta, it can't be part of a cycle. */
+ if (!entry->delta) {
+ entry->dfs_state = DFS_DONE;
+ return;
+ }
+
+ switch (entry->dfs_state) {
+ case DFS_NONE:
+ /*
+ * This is the first time we've seen the object. We mark it as
+ * part of the active potential cycle and recurse.
+ */
+ entry->dfs_state = DFS_ACTIVE;
+ break_delta_chains(entry->delta);
+ entry->dfs_state = DFS_DONE;
+ break;
+
+ case DFS_DONE:
+ /* object already examined, and not part of a cycle */
+ break;
+
+ case DFS_ACTIVE:
+ /*
+ * We found a cycle that needs to be broken. It would be correct to
+ * break any link in the chain, but it's convenient to
+ * break this one.
+ */
+ drop_reused_delta(entry);
+ entry->dfs_state = DFS_DONE;
+ break;
+ }
+}
+
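
break_delta_chains() uses a classic three-colour depth-first walk: NONE (unvisited), ACTIVE (on the chain currently being followed), DONE (known cycle-free). Hitting an ACTIVE entry means the chain has looped back on itself, and the link that closed the loop is dropped. The standalone sketch below applies the same idea to a toy parent-pointer graph; the struct and names are illustrative, not pack-objects' own types:

    #include <stdio.h>

    enum dfs_state { DFS_NONE = 0, DFS_ACTIVE, DFS_DONE };

    struct entry {
        const char *name;
        struct entry *delta;        /* "base" this entry depends on, or NULL */
        enum dfs_state dfs_state;
    };

    static void break_chains(struct entry *e)
    {
        if (!e->delta) {            /* not a delta: trivially cycle-free */
            e->dfs_state = DFS_DONE;
            return;
        }
        switch (e->dfs_state) {
        case DFS_NONE:
            e->dfs_state = DFS_ACTIVE;  /* on the chain we are walking */
            break_chains(e->delta);
            e->dfs_state = DFS_DONE;
            break;
        case DFS_DONE:
            break;                      /* already known to be safe */
        case DFS_ACTIVE:
            printf("breaking cycle at %s\n", e->name);
            e->delta = NULL;            /* drop the link that closes the loop */
            e->dfs_state = DFS_DONE;
            break;
        }
    }

    int main(void)
    {
        struct entry a = { "A" }, b = { "B" }, c = { "C" };

        a.delta = &b;   /* A -> B -> C -> A forms a cycle */
        b.delta = &c;
        c.delta = &a;

        break_chains(&a);
        break_chains(&b);
        break_chains(&c);
        return 0;
    }
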
static void get_object_details(void)
{
uint32_t i;
sorted_by_offset = xcalloc(to_pack.nr_objects, sizeof(struct object_entry *));
for (i = 0; i < to_pack.nr_objects; i++)
sorted_by_offset[i] = to_pack.objects + i;
- qsort(sorted_by_offset, to_pack.nr_objects, sizeof(*sorted_by_offset), pack_offset_sort);
+ QSORT(sorted_by_offset, to_pack.nr_objects, pack_offset_sort);
for (i = 0; i < to_pack.nr_objects; i++) {
struct object_entry *entry = sorted_by_offset[i];
entry->no_try_delta = 1;
}
+ /*
+ * This must happen in a second pass, since we rely on the delta
+ * information for the whole list being completed.
+ */
+ for (i = 0; i < to_pack.nr_objects; i++)
+ break_delta_chains(&to_pack.objects[i]);
+
free(sorted_by_offset);
}
if (progress)
progress_state = start_progress(_("Compressing objects"),
nr_deltas);
- qsort(delta_list, n, sizeof(*delta_list), type_size_sort);
+ QSORT(delta_list, n, type_size_sort);
ll_find_deltas(delta_list, n, window+1, depth, &nr_done);
stop_progress(&progress_state);
if (nr_done != nr_deltas)
write_bitmap_options &= ~BITMAP_OPT_HASH_CACHE;
}
if (!strcmp(k, "pack.usebitmaps")) {
- use_bitmap_index = git_config_bool(k, v);
+ use_bitmap_index_default = git_config_bool(k, v);
return 0;
}
if (!strcmp(k, "pack.threads")) {
}
if (in_pack.nr) {
- qsort(in_pack.array, in_pack.nr, sizeof(in_pack.array[0]),
- ofscmp);
+ QSORT(in_pack.array, in_pack.nr, ofscmp);
for (i = 0; i < in_pack.nr; i++) {
struct object *o = in_pack.array[i].object;
add_object_entry(o->oid.hash, o->type, "", 0);
}
/*
- * This tracks any options which a reader of the pack might
- * not understand, and which would therefore prevent blind reuse
- * of what we have on disk.
+ * This tracks any options which pack-reuse code expects to be on, or which a
+ * reader of the pack might not understand, and which would therefore prevent
+ * blind reuse of what we have on disk.
*/
static int pack_options_allow_reuse(void)
{
- return allow_ofs_delta;
+ return pack_to_stdout && allow_ofs_delta;
}
static int get_object_list_from_bitmap(struct rev_info *revs)
if (!rev_list_all || !rev_list_reflog || !rev_list_index)
unpack_unreachable_expiration = 0;
- if (!use_internal_rev_list || !pack_to_stdout || is_repository_shallow())
+ /*
+ * "soft" reasons not to use bitmaps - for on-disk repack by default we want
+ *
+ * - to produce good pack (with bitmap index not-yet-packed objects are
+ * packed in suboptimal order).
+ *
+ * - to use more robust pack-generation codepath (avoiding possible
+ * bugs in bitmap code and possible bitmap index corruption).
+ */
+ if (!pack_to_stdout)
+ use_bitmap_index_default = 0;
+
+ if (use_bitmap_index < 0)
+ use_bitmap_index = use_bitmap_index_default;
+
+ /* "hard" reasons not to use bitmaps; these just won't work at all */
+ if (!use_internal_rev_list || (!pack_to_stdout && write_bitmap_index) || is_repository_shallow())
use_bitmap_index = 0;
if (pack_to_stdout || !rev_list_all)
#include "revision.h"
#include "tempfile.h"
#include "lockfile.h"
+#include "wt-status.h"
enum rebase_type {
REBASE_INVALID = -1,
return git_default_config(var, value, cb);
}
-/**
- * Returns 1 if there are unstaged changes, 0 otherwise.
- */
-static int has_unstaged_changes(const char *prefix)
-{
- struct rev_info rev_info;
- int result;
-
- init_revisions(&rev_info, prefix);
- DIFF_OPT_SET(&rev_info.diffopt, IGNORE_SUBMODULES);
- DIFF_OPT_SET(&rev_info.diffopt, QUICK);
- diff_setup_done(&rev_info.diffopt);
- result = run_diff_files(&rev_info, 0);
- return diff_result_code(&rev_info.diffopt, result);
-}
-
-/**
- * Returns 1 if there are uncommitted changes, 0 otherwise.
- */
-static int has_uncommitted_changes(const char *prefix)
-{
- struct rev_info rev_info;
- int result;
-
- if (is_cache_unborn())
- return 0;
-
- init_revisions(&rev_info, prefix);
- DIFF_OPT_SET(&rev_info.diffopt, IGNORE_SUBMODULES);
- DIFF_OPT_SET(&rev_info.diffopt, QUICK);
- add_head_to_pending(&rev_info);
- diff_setup_done(&rev_info.diffopt);
- result = run_diff_index(&rev_info, 1);
- return diff_result_code(&rev_info.diffopt, result);
-}
-
-/**
- * If the work tree has unstaged or uncommitted changes, dies with the
- * appropriate message.
- */
-static void die_on_unclean_work_tree(const char *prefix)
-{
- struct lock_file *lock_file = xcalloc(1, sizeof(*lock_file));
- int do_die = 0;
-
- hold_locked_index(lock_file, 0);
- refresh_cache(REFRESH_QUIET);
- update_index_if_able(&the_index, lock_file);
- rollback_lock_file(lock_file);
-
- if (has_unstaged_changes(prefix)) {
- error(_("Cannot pull with rebase: You have unstaged changes."));
- do_die = 1;
- }
-
- if (has_uncommitted_changes(prefix)) {
- if (do_die)
- error(_("Additionally, your index contains uncommitted changes."));
- else
- error(_("Cannot pull with rebase: Your index contains uncommitted changes."));
- do_die = 1;
- }
-
- if (do_die)
- exit(1);
-}
-
/**
* Appends merge candidates from FETCH_HEAD that are not marked not-for-merge
* into merge_heads.
die(_("Updating an unborn branch with changes added to the index."));
if (!autostash)
- die_on_unclean_work_tree(prefix);
+ require_clean_work_tree(N_("pull with rebase"),
+ _("please commit or stash them."), 1, 0);
if (get_rebase_fork_point(rebase_fork_point, repo, *refspecs))
hashclr(rebase_fork_point);
else
printf("%06o #%d %s %.8s\n",
ce->ce_mode, ce_stage(ce), ce->name,
- sha1_to_hex(ce->sha1));
+ oid_to_hex(&ce->oid));
}
static int debug_merge(const struct cache_entry * const *stages,
#include "gpg-interface.h"
#include "sigchain.h"
#include "fsck.h"
+#include "tmp-objdir.h"
static const char * const receive_pack_usage[] = {
N_("git receive-pack <git-dir>"),
static int advertise_atomic_push = 1;
static int advertise_push_options;
static int unpack_limit = 100;
+static off_t max_input_size;
static int report_status;
static int use_sideband;
static int use_atomic;
} use_keepalive;
static int keepalive_in_sec = 5;
+static struct tmp_objdir *tmp_objdir;
+
static enum deny_action parse_deny_action(const char *var, const char *value)
{
if (value) {
return 0;
}
+ if (strcmp(var, "receive.maxinputsize") == 0) {
+ max_input_size = git_config_int64(var, value);
+ return 0;
+ }
+
return git_default_config(var, value, cb);
}
static void show_ref(const char *path, const unsigned char *sha1)
{
if (sent_capabilities) {
- packet_write(1, "%s %s\n", sha1_to_hex(sha1), path);
+ packet_write_fmt(1, "%s %s\n", sha1_to_hex(sha1), path);
} else {
struct strbuf cap = STRBUF_INIT;
if (advertise_push_options)
strbuf_addstr(&cap, " push-options");
strbuf_addf(&cap, " agent=%s", git_user_agent_sanitized());
- packet_write(1, "%s %s%c%s\n",
+ packet_write_fmt(1, "%s %s%c%s\n",
sha1_to_hex(sha1), path, 0, cap.buf);
strbuf_release(&cap);
sent_capabilities = 1;
return 0;
}
-static void show_one_alternate_sha1(const unsigned char sha1[20], void *unused)
+static int show_one_alternate_sha1(const unsigned char sha1[20], void *unused)
{
show_ref(".have", sha1);
+ return 0;
}
static void collect_one_alternate_ref(const struct ref *ref, void *data)
} else
argv_array_pushf(&proc.env_array, "GIT_PUSH_OPTION_COUNT");
+ if (tmp_objdir)
+ argv_array_pushv(&proc.env_array, tmp_objdir_env(tmp_objdir));
+
if (use_sideband) {
memset(&muxer, 0, sizeof(muxer));
muxer.proc = copy_to_sideband;
proc.stdout_to_stderr = 1;
proc.err = use_sideband ? -1 : 0;
proc.argv = argv;
+ proc.env = tmp_objdir_env(tmp_objdir);
code = start_command(&proc);
if (code)
return !strcmp(head_name, ref);
}
-static char *refuse_unconfigured_deny_msg[] = {
- "By default, updating the current branch in a non-bare repository",
- "is denied, because it will make the index and work tree inconsistent",
- "with what you pushed, and will require 'git reset --hard' to match",
- "the work tree to HEAD.",
- "",
- "You can set 'receive.denyCurrentBranch' configuration variable to",
- "'ignore' or 'warn' in the remote repository to allow pushing into",
- "its current branch; however, this is not recommended unless you",
- "arranged to update its work tree to match what you pushed in some",
- "other way.",
- "",
- "To squelch this message and still keep the default behaviour, set",
- "'receive.denyCurrentBranch' configuration variable to 'refuse'."
-};
+static char *refuse_unconfigured_deny_msg =
+ N_("By default, updating the current branch in a non-bare repository\n"
+ "is denied, because it will make the index and work tree inconsistent\n"
+ "with what you pushed, and will require 'git reset --hard' to match\n"
+ "the work tree to HEAD.\n"
+ "\n"
+ "You can set 'receive.denyCurrentBranch' configuration variable to\n"
+ "'ignore' or 'warn' in the remote repository to allow pushing into\n"
+ "its current branch; however, this is not recommended unless you\n"
+ "arranged to update its work tree to match what you pushed in some\n"
+ "other way.\n"
+ "\n"
+ "To squelch this message and still keep the default behaviour, set\n"
+ "'receive.denyCurrentBranch' configuration variable to 'refuse'.");
static void refuse_unconfigured_deny(void)
{
- int i;
- for (i = 0; i < ARRAY_SIZE(refuse_unconfigured_deny_msg); i++)
- rp_error("%s", refuse_unconfigured_deny_msg[i]);
+ rp_error("%s", _(refuse_unconfigured_deny_msg));
}
-static char *refuse_unconfigured_deny_delete_current_msg[] = {
- "By default, deleting the current branch is denied, because the next",
- "'git clone' won't result in any file checked out, causing confusion.",
- "",
- "You can set 'receive.denyDeleteCurrent' configuration variable to",
- "'warn' or 'ignore' in the remote repository to allow deleting the",
- "current branch, with or without a warning message.",
- "",
- "To squelch this message, you can set it to 'refuse'."
-};
+static char *refuse_unconfigured_deny_delete_current_msg =
+ N_("By default, deleting the current branch is denied, because the next\n"
+ "'git clone' won't result in any file checked out, causing confusion.\n"
+ "\n"
+ "You can set 'receive.denyDeleteCurrent' configuration variable to\n"
+ "'warn' or 'ignore' in the remote repository to allow deleting the\n"
+ "current branch, with or without a warning message.\n"
+ "\n"
+ "To squelch this message, you can set it to 'refuse'.");
static void refuse_unconfigured_deny_delete_current(void)
{
- int i;
- for (i = 0;
- i < ARRAY_SIZE(refuse_unconfigured_deny_delete_current_msg);
- i++)
- rp_error("%s", refuse_unconfigured_deny_delete_current_msg[i]);
+ rp_error("%s", _(refuse_unconfigured_deny_delete_current_msg));
}
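
The advice messages above are collapsed from arrays of per-line C strings into one N_()-marked string with embedded newlines, so translators see and translate the message as a whole instead of line by line. A tiny sketch of the marking/translation split, assuming the usual gettext conventions where N_() only marks a static string for extraction and _() translates at the point of use; the message text here is shortened and illustrative:

    #include <stdio.h>

    /* Typical gettext idiom: N_() marks, _() translates at run time.
     * Both are no-ops here so the sketch stays self-contained. */
    #define N_(s) (s)
    #define _(s) (s)

    static const char *deny_current_branch_msg =
        N_("By default, updating the checked-out branch is denied.\n"
           "\n"
           "Set 'receive.denyCurrentBranch' to 'ignore' or 'warn' to allow it.");

    int main(void)
    {
        /* The whole message is one translation unit with embedded newlines. */
        fprintf(stderr, "%s\n", _(deny_current_branch_msg));
        return 0;
    }
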
static int command_singleton_iterator(void *cb_data, unsigned char sha1[20]);
!delayed_reachability_test(si, i))
sha1_array_append(&extra, si->shallow->sha1[i]);
+ opt.env = tmp_objdir_env(tmp_objdir);
setup_alternate_shallow(&shallow_lock, &opt.shallow_file, &extra);
if (check_connected(command_singleton_iterator, cmd, &opt)) {
rollback_lock_file(&shallow_lock);
struct string_list_item *item;
struct command *dst_cmd;
unsigned char sha1[GIT_SHA1_RAWSZ];
- char cmd_oldh[GIT_SHA1_HEXSZ + 1],
- cmd_newh[GIT_SHA1_HEXSZ + 1],
- dst_oldh[GIT_SHA1_HEXSZ + 1],
- dst_newh[GIT_SHA1_HEXSZ + 1];
int flag;
strbuf_addf(&buf, "%s%s", get_git_namespace(), cmd->ref_name);
dst_cmd->skip_update = 1;
- find_unique_abbrev_r(cmd_oldh, cmd->old_sha1, DEFAULT_ABBREV);
- find_unique_abbrev_r(cmd_newh, cmd->new_sha1, DEFAULT_ABBREV);
- find_unique_abbrev_r(dst_oldh, dst_cmd->old_sha1, DEFAULT_ABBREV);
- find_unique_abbrev_r(dst_newh, dst_cmd->new_sha1, DEFAULT_ABBREV);
rp_error("refusing inconsistent update between symref '%s' (%s..%s) and"
" its target '%s' (%s..%s)",
- cmd->ref_name, cmd_oldh, cmd_newh,
- dst_cmd->ref_name, dst_oldh, dst_newh);
+ cmd->ref_name,
+ find_unique_abbrev(cmd->old_sha1, DEFAULT_ABBREV),
+ find_unique_abbrev(cmd->new_sha1, DEFAULT_ABBREV),
+ dst_cmd->ref_name,
+ find_unique_abbrev(dst_cmd->old_sha1, DEFAULT_ABBREV),
+ find_unique_abbrev(dst_cmd->new_sha1, DEFAULT_ABBREV));
cmd->error_string = dst_cmd->error_string =
"inconsistent aliased update";
for (cmd = commands; cmd; cmd = cmd->next) {
struct command *singleton = cmd;
+ struct check_connected_options opt = CHECK_CONNECTED_INIT;
+
if (shallow_update && si->shallow_ref[cmd->index])
/* to be checked in update_shallow_ref() */
continue;
+
+ opt.env = tmp_objdir_env(tmp_objdir);
if (!check_connected(command_singleton_iterator, &singleton,
- NULL))
+ &opt))
continue;
+
cmd->error_string = "missing necessary objects";
}
}
data.si = si;
opt.err_fd = err_fd;
opt.progress = err_fd && !quiet;
+ opt.env = tmp_objdir_env(tmp_objdir);
if (check_connected(iterate_receive_command_list, &data, &opt))
set_connectivity_errors(commands, si);
return;
}
+ /*
+ * Now we'll start writing out refs, which means the objects need
+ * to be in their final positions so that other processes can see them.
+ */
+ if (tmp_objdir_migrate(tmp_objdir) < 0) {
+ for (cmd = commands; cmd; cmd = cmd->next) {
+ if (!cmd->error_string)
+ cmd->error_string = "unable to migrate objects to permanent storage";
+ }
+ return;
+ }
+ tmp_objdir = NULL;
+
check_aliased_updates(commands);
free(head_name_to_free);
argv_array_push(&child.args, alt_shallow_file);
}
+ tmp_objdir = tmp_objdir_create();
+ if (!tmp_objdir)
+ return "unable to create temporary object directory";
+ child.env = tmp_objdir_env(tmp_objdir);
+
+ /*
+ * Normally we just pass the tmp_objdir environment to the child
+ * processes that do the heavy lifting, but we may need to see these
+ * objects ourselves to set up shallow information.
+ */
+ tmp_objdir_add_as_alternate(tmp_objdir);
+
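Taken together, the receive-pack hunks above quarantine an incoming push: a
throwaway object directory is created before the unpack-objects/index-pack
child runs, the child is pointed at it through its environment, and the
directory is also registered as an in-memory alternate so that the shallow
setup and the connectivity checks can read the new objects back. Only once
those checks pass is the directory migrated into the normal object store. A
rough sketch of the lifecycle, using only the helpers that appear in this
patch (declarations and error handling simplified):

    /* before running the unpack-objects/index-pack child */
    tmp_objdir = tmp_objdir_create();
    child.env = tmp_objdir_env(tmp_objdir);    /* the child writes only here */
    tmp_objdir_add_as_alternate(tmp_objdir);   /* but we can read it back */

    /* after the connectivity checks, before any ref is updated */
    if (tmp_objdir_migrate(tmp_objdir) < 0)
        return;                                /* push rejected, nothing published */
    tmp_objdir = NULL;                         /* objects are now permanent */
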
if (ntohl(hdr.hdr_entries) < unpack_limit) {
argv_array_pushl(&child.args, "unpack-objects", hdr_arg, NULL);
if (quiet)
if (fsck_objects)
argv_array_pushf(&child.args, "--strict%s",
fsck_msg_types.buf);
+ if (max_input_size)
+ argv_array_pushf(&child.args, "--max-input-size=%"PRIuMAX,
+ (uintmax_t)max_input_size);
child.no_stdout = 1;
child.err = err_fd;
child.git_cmd = 1;
fsck_msg_types.buf);
if (!reject_thin)
argv_array_push(&child.args, "--fix-thin");
+ if (max_input_size)
+ argv_array_pushf(&child.args, "--max-input-size=%"PRIuMAX,
+ (uintmax_t)max_input_size);
child.out = -1;
child.err = err_fd;
child.git_cmd = 1;
const char *vhost)
{
if (!vhost)
- packet_write(stdin_fd, "%s %s%c", serv, repo, 0);
+ packet_write_fmt(stdin_fd, "%s %s%c", serv, repo, 0);
else
- packet_write(stdin_fd, "%s %s%chost=%s%c", serv, repo, 0,
+ packet_write_fmt(stdin_fd, "%s %s%chost=%s%c", serv, repo, 0,
vhost, 0);
}
info.width = info.width2 = 0;
for_each_string_list(&states.push, add_push_to_show_info, &info);
- qsort(info.list->items, info.list->nr,
- sizeof(*info.list->items), cmp_string_with_push);
+ QSORT(info.list->items, info.list->nr, cmp_string_with_push);
if (info.list->nr)
printf_ln(Q_(" Local ref configured for 'git push'%s:",
" Local refs configured for 'git push'%s:",
return !access(git_path_merge_head(), F_OK);
}
-static int reset_index(const unsigned char *sha1, int reset_type, int quiet)
+static int reset_index(const struct object_id *oid, int reset_type, int quiet)
{
int nr = 1;
struct tree_desc desc[2];
read_cache_unmerged();
if (reset_type == KEEP) {
- unsigned char head_sha1[20];
- if (get_sha1("HEAD", head_sha1))
+ struct object_id head_oid;
+ if (get_oid("HEAD", &head_oid))
return error(_("You do not have a valid HEAD."));
- if (!fill_tree_descriptor(desc, head_sha1))
+ if (!fill_tree_descriptor(desc, head_oid.hash))
return error(_("Failed to find tree of HEAD."));
nr++;
opts.fn = twoway_merge;
}
- if (!fill_tree_descriptor(desc + nr - 1, sha1))
- return error(_("Failed to find tree of %s."), sha1_to_hex(sha1));
+ if (!fill_tree_descriptor(desc + nr - 1, oid->hash))
+ return error(_("Failed to find tree of %s."), oid_to_hex(oid));
if (unpack_trees(nr, desc, &opts))
return -1;
if (reset_type == MIXED || reset_type == HARD) {
- tree = parse_tree_indirect(sha1);
+ tree = parse_tree_indirect(oid->hash);
prime_cache_tree(&the_index, tree);
}
}
static int read_from_tree(const struct pathspec *pathspec,
- unsigned char *tree_sha1,
+ struct object_id *tree_oid,
int intent_to_add)
{
struct diff_options opt;
opt.format_callback = update_index_from_diff;
opt.format_callback_data = &intent_to_add;
- if (do_diff_cache(tree_sha1, &opt))
+ if (do_diff_cache(tree_oid->hash, &opt))
return 1;
diffcore_std(&opt);
diff_flush(&opt);
const char **rev_ret)
{
const char *rev = "HEAD";
- unsigned char unused[20];
+ struct object_id unused;
/*
* Possible arguments are:
*
* has to be unambiguous. If there is a single argument, it
* can not be a tree
*/
- else if ((!argv[1] && !get_sha1_committish(argv[0], unused)) ||
- (argv[1] && !get_sha1_treeish(argv[0], unused))) {
+ else if ((!argv[1] && !get_sha1_committish(argv[0], unused.hash)) ||
+ (argv[1] && !get_sha1_treeish(argv[0], unused.hash))) {
/*
* Ok, argv[0] looks like a commit/tree; it should not
* be a filename.
prefix, argv);
}
-static int reset_refs(const char *rev, const unsigned char *sha1)
+static int reset_refs(const char *rev, const struct object_id *oid)
{
int update_ref_status;
struct strbuf msg = STRBUF_INIT;
- unsigned char *orig = NULL, sha1_orig[20],
- *old_orig = NULL, sha1_old_orig[20];
+ struct object_id *orig = NULL, oid_orig,
+ *old_orig = NULL, oid_old_orig;
- if (!get_sha1("ORIG_HEAD", sha1_old_orig))
- old_orig = sha1_old_orig;
- if (!get_sha1("HEAD", sha1_orig)) {
- orig = sha1_orig;
+ if (!get_oid("ORIG_HEAD", &oid_old_orig))
+ old_orig = &oid_old_orig;
+ if (!get_oid("HEAD", &oid_orig)) {
+ orig = &oid_orig;
set_reflog_message(&msg, "updating ORIG_HEAD", NULL);
- update_ref(msg.buf, "ORIG_HEAD", orig, old_orig, 0,
+ update_ref_oid(msg.buf, "ORIG_HEAD", orig, old_orig, 0,
UPDATE_REFS_MSG_ON_ERR);
} else if (old_orig)
- delete_ref("ORIG_HEAD", old_orig, 0);
+ delete_ref("ORIG_HEAD", old_orig->hash, 0);
set_reflog_message(&msg, "updating HEAD", rev);
- update_ref_status = update_ref(msg.buf, "HEAD", sha1, orig, 0,
+ update_ref_status = update_ref_oid(msg.buf, "HEAD", oid, orig, 0,
UPDATE_REFS_MSG_ON_ERR);
strbuf_release(&msg);
return update_ref_status;
hold_locked_index(lock, 1);
if (reset_type == MIXED) {
int flags = quiet ? REFRESH_QUIET : REFRESH_IN_PORCELAIN;
- if (read_from_tree(&pathspec, oid.hash, intent_to_add))
+ if (read_from_tree(&pathspec, &oid, intent_to_add))
return 1;
if (get_git_work_tree())
refresh_index(&the_index, flags, NULL, NULL,
_("Unstaged changes after reset:"));
} else {
- int err = reset_index(oid.hash, reset_type, quiet);
+ int err = reset_index(&oid, reset_type, quiet);
if (reset_type == KEEP && !err)
- err = reset_index(oid.hash, MIXED, quiet);
+ err = reset_index(&oid, MIXED, quiet);
if (err)
die(_("Could not reset index file to revision '%s'."), rev);
}
if (!pathspec.nr && !unborn) {
/* Any resets without paths update HEAD to the head being
* switched to, saving the previous head in ORIG_HEAD before. */
- update_ref_status = reset_refs(rev, oid.hash);
+ update_ref_status = reset_refs(rev, &oid);
if (reset_type == HARD && !update_ref_status && !quiet)
print_new_head_line(lookup_commit_reference(oid.hash));
ctx.fmt = revs->commit_format;
ctx.output_encoding = get_log_output_encoding();
pretty_print_commit(&ctx, commit, &buf);
- if (revs->graph) {
- if (buf.len) {
- if (revs->commit_format != CMIT_FMT_ONELINE)
- graph_show_oneline(revs->graph);
-
- graph_show_commit_msg(revs->graph, &buf);
-
- /*
- * Add a newline after the commit message.
- *
- * Usually, this newline produces a blank
- * padding line between entries, in which case
- * we need to add graph padding on this line.
- *
- * However, the commit message may not end in a
- * newline. In this case the newline simply
- * ends the last line of the commit message,
- * and we don't need any graph output. (This
- * always happens with CMIT_FMT_ONELINE, and it
- * happens with CMIT_FMT_USERFORMAT when the
- * format doesn't explicitly end in a newline.)
- */
- if (buf.len && buf.buf[buf.len - 1] == '\n')
- graph_show_padding(revs->graph);
- putchar('\n');
- } else {
- /*
- * If the message buffer is empty, just show
- * the rest of the graph output for this
- * commit.
- */
- if (graph_show_remainder(revs->graph))
- putchar('\n');
- if (revs->commit_format == CMIT_FMT_ONELINE)
- putchar('\n');
- }
+ if (buf.len) {
+ if (revs->commit_format != CMIT_FMT_ONELINE)
+ graph_show_oneline(revs->graph);
+
+ graph_show_commit_msg(revs->graph, stdout, &buf);
+
+ /*
+ * Add a newline after the commit message.
+ *
+ * Usually, this newline produces a blank
+ * padding line between entries, in which case
+ * we need to add graph padding on this line.
+ *
+ * However, the commit message may not end in a
+ * newline. In this case the newline simply
+ * ends the last line of the commit message,
+ * and we don't need any graph output. (This
+ * always happens with CMIT_FMT_ONELINE, and it
+ * happens with CMIT_FMT_USERFORMAT when the
+ * format doesn't explicitly end in a newline.)
+ */
+ if (buf.len && buf.buf[buf.len - 1] == '\n')
+ graph_show_padding(revs->graph);
+ putchar(info->hdr_termination);
} else {
- if (revs->commit_format != CMIT_FMT_USERFORMAT ||
- buf.len) {
- fwrite(buf.buf, 1, buf.len, stdout);
- putchar(info->hdr_termination);
- }
+ /*
+ * If the message buffer is empty, just show
+ * the rest of the graph output for this
+ * commit.
+ */
+ if (graph_show_remainder(revs->graph))
+ putchar('\n');
+ if (revs->commit_format == CMIT_FMT_ONELINE)
+ putchar('\n');
}
strbuf_release(&buf);
} else {
unsigned char sha1[20];
struct commit *commit;
struct commit_list *parents;
- int parents_only;
-
- if ((dotdot = strstr(arg, "^!")))
- parents_only = 0;
- else if ((dotdot = strstr(arg, "^@")))
- parents_only = 1;
-
- if (!dotdot || dotdot[2])
+ int parent_number;
+ int include_rev = 0;
+ int include_parents = 0;
+ int exclude_parent = 0;
+
+ if ((dotdot = strstr(arg, "^!"))) {
+ include_rev = 1;
+ if (dotdot[2])
+ return 0;
+ } else if ((dotdot = strstr(arg, "^@"))) {
+ include_parents = 1;
+ if (dotdot[2])
+ return 0;
+ } else if ((dotdot = strstr(arg, "^-"))) {
+ include_rev = 1;
+ exclude_parent = 1;
+
+ if (dotdot[2]) {
+ char *end;
+ exclude_parent = strtoul(dotdot + 2, &end, 10);
+ if (*end != '\0' || !exclude_parent)
+ return 0;
+ }
+ } else
return 0;
*dotdot = 0;
return 0;
}
- if (!parents_only)
- show_rev(NORMAL, sha1, arg);
commit = lookup_commit_reference(sha1);
- for (parents = commit->parents; parents; parents = parents->next)
- show_rev(parents_only ? NORMAL : REVERSED,
- parents->item->object.oid.hash, arg);
+ if (exclude_parent &&
+ exclude_parent > commit_list_count(commit->parents)) {
+ *dotdot = '^';
+ return 0;
+ }
+
+ if (include_rev)
+ show_rev(NORMAL, sha1, arg);
+ for (parents = commit->parents, parent_number = 1;
+ parents;
+ parents = parents->next, parent_number++) {
+ if (exclude_parent && parent_number != exclude_parent)
+ continue;
+
+ show_rev(include_parents ? NORMAL : REVERSED,
+ parents->item->object.oid.hash, arg);
+ }
*dotdot = '^';
return 1;
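As a quick walk-through of the parsing above, assume M is a merge commit with
two parents P1 and P2; writing the emitted revisions symbolically (the
command itself prints object names), the four spellings behave as follows:

    git rev-parse M^!     ->  M ^P1 ^P2     (the rev, with all parents negated)
    git rev-parse M^@     ->  P1 P2         (the parents only)
    git rev-parse M^-     ->  M ^P1         (the rev, minus its first parent)
    git rev-parse M^-2    ->  M ^P2         (the rev, minus its second parent)

A parent selector that is out of range (say "M^-3" here) is not handled by
this helper; the '^' is restored and the argument falls through to the normal
processing.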
filter &= ~(DO_FLAGS|DO_NOREV);
verify = 1;
abbrev = DEFAULT_ABBREV;
- if (arg[7] == '=')
- abbrev = strtoul(arg + 8, NULL, 10);
+ if (!arg[7])
+ continue;
+ abbrev = strtoul(arg + 8, NULL, 10);
if (abbrev < MINIMUM_ABBREV)
abbrev = MINIMUM_ABBREV;
else if (40 <= abbrev)
die(_("%s: %s cannot be used with %s"), me, this_opt, base_opt);
}
-static void parse_args(int argc, const char **argv, struct replay_opts *opts)
+static int run_sequencer(int argc, const char **argv, struct replay_opts *opts)
{
const char * const * usage_str = revert_or_cherry_pick_usage(opts);
const char *me = action_name(opts);
if (opts->keep_redundant_commits)
opts->allow_empty = 1;
- /* Set the subcommand */
- if (cmd == 'q')
- opts->subcommand = REPLAY_REMOVE_STATE;
- else if (cmd == 'c')
- opts->subcommand = REPLAY_CONTINUE;
- else if (cmd == 'a')
- opts->subcommand = REPLAY_ROLLBACK;
- else
- opts->subcommand = REPLAY_NONE;
-
/* Check for incompatible command line arguments */
- if (opts->subcommand != REPLAY_NONE) {
+ if (cmd) {
char *this_operation;
- if (opts->subcommand == REPLAY_REMOVE_STATE)
+ if (cmd == 'q')
this_operation = "--quit";
- else if (opts->subcommand == REPLAY_CONTINUE)
+ else if (cmd == 'c')
this_operation = "--continue";
else {
- assert(opts->subcommand == REPLAY_ROLLBACK);
+ assert(cmd == 'a');
this_operation = "--abort";
}
"--edit", opts->edit,
NULL);
- if (opts->subcommand != REPLAY_NONE) {
+ if (cmd) {
opts->revs = NULL;
} else {
struct setup_revision_opt s_r_opt;
if (argc > 1)
usage_with_options(usage_str, options);
+
+ /* These option values will be free()d */
+ opts->gpg_sign = xstrdup_or_null(opts->gpg_sign);
+ opts->strategy = xstrdup_or_null(opts->strategy);
+
+ if (cmd == 'q')
+ return sequencer_remove_state(opts);
+ if (cmd == 'c')
+ return sequencer_continue(opts);
+ if (cmd == 'a')
+ return sequencer_rollback(opts);
+ return sequencer_pick_revisions(opts);
}
int cmd_revert(int argc, const char **argv, const char *prefix)
{
- struct replay_opts opts;
+ struct replay_opts opts = REPLAY_OPTS_INIT;
int res;
- memset(&opts, 0, sizeof(opts));
if (isatty(0))
opts.edit = 1;
opts.action = REPLAY_REVERT;
git_config(git_default_config, NULL);
- parse_args(argc, argv, &opts);
- res = sequencer_pick_revisions(&opts);
+ res = run_sequencer(argc, argv, &opts);
if (res < 0)
die(_("revert failed"));
return res;
int cmd_cherry_pick(int argc, const char **argv, const char *prefix)
{
- struct replay_opts opts;
+ struct replay_opts opts = REPLAY_OPTS_INIT;
int res;
- memset(&opts, 0, sizeof(opts));
opts.action = REPLAY_PICK;
git_config(git_default_config, NULL);
- parse_args(argc, argv, &opts);
- res = sequencer_pick_revisions(&opts);
+ res = run_sequencer(argc, argv, &opts);
if (res < 0)
die(_("cherry-pick failed"));
return res;
return errs;
}
-static int check_local_mod(unsigned char *head, int index_only)
+static int check_local_mod(struct object_id *head, int index_only)
{
/*
* Items in list are already sorted in the cache order,
struct string_list files_submodule = STRING_LIST_INIT_NODUP;
struct string_list files_local = STRING_LIST_INIT_NODUP;
- no_head = is_null_sha1(head);
+ no_head = is_null_oid(head);
for (i = 0; i < list.nr; i++) {
struct stat st;
int pos;
const struct cache_entry *ce;
const char *name = list.entry[i].name;
- unsigned char sha1[20];
+ struct object_id oid;
unsigned mode;
int local_changes = 0;
int staged_changes = 0;
* way as changed from the HEAD.
*/
if (no_head
- || get_tree_entry(head, name, sha1, &mode)
+ || get_tree_entry(head->hash, name, oid.hash, &mode)
|| ce->ce_mode != create_ce_mode(mode)
- || hashcmp(ce->sha1, sha1))
+ || oidcmp(&ce->oid, &oid))
staged_changes = 1;
/*
* report no changes unless forced.
*/
if (!force) {
- unsigned char sha1[20];
- if (get_sha1("HEAD", sha1))
- hashclr(sha1);
- if (check_local_mod(sha1, index_only))
+ struct object_id oid;
+ if (get_oid("HEAD", &oid))
+ oidclr(&oid);
+ if (check_local_mod(&oid, index_only))
exit(1);
} else if (!index_only) {
if (check_submodules_use_gitfiles())
struct strbuf sb = STRBUF_INIT;
if (log->sort_by_number)
- qsort(log->list.items, log->list.nr, sizeof(struct string_list_item),
+ QSORT(log->list.items, log->list.nr,
log->summary ? compare_by_counter : compare_by_list);
for (i = 0; i < log->list.nr; i++) {
const struct string_list_item *item = &log->list.items[i];
static void sort_ref_range(int bottom, int top)
{
- qsort(ref_name + bottom, top - bottom, sizeof(ref_name[0]),
- compare_ref_name);
+ QSORT(ref_name + bottom, top - bottom, compare_ref_name);
}
static int append_ref(const char *refname, const struct object_id *oid,
return 0;
}
if (MAX_REVS <= ref_name_cnt) {
- warning("ignoring %s; cannot handle more than %d refs",
- refname, MAX_REVS);
+ warning(Q_("ignoring %s; cannot handle more than %d ref",
+ "ignoring %s; cannot handle more than %d refs",
+ MAX_REVS), refname, MAX_REVS);
return 0;
}
ref_name[ref_name_cnt++] = xstrdup(refname);
for_each_ref(append_matching_ref, NULL);
if (saved_matches == ref_name_cnt &&
ref_name_cnt < MAX_REVS)
- error("no matching refs with %s", av);
- if (saved_matches + 1 < ref_name_cnt)
- sort_ref_range(saved_matches, ref_name_cnt);
+ error(_("no matching refs with %s"), av);
+ sort_ref_range(saved_matches, ref_name_cnt);
return;
}
die("bad sha1 reference %s", av);
*
* Also --all and --remotes do not make sense either.
*/
- die("--reflog is incompatible with --all, --remotes, "
- "--independent or --merge-base");
+ die(_("--reflog is incompatible with --all, --remotes, "
+ "--independent or --merge-base"));
}
/* If nothing is specified, show all branches by default */
av = fake_av;
ac = 1;
if (!*av)
- die("no branches given, and HEAD is not valid");
+ die(_("no branches given, and HEAD is not valid"));
}
if (ac != 1)
- die("--reflog option needs one branch name");
+ die(_("--reflog option needs one branch name"));
if (MAX_REVS < reflog)
- die("Only %d entries can be shown at one time.",
- MAX_REVS);
+ die(Q_("only %d entry can be shown at one time.",
+ "only %d entries can be shown at one time.",
+ MAX_REVS), MAX_REVS);
if (!dwim_ref(*av, strlen(*av), oid.hash, &ref))
- die("No such ref %s", *av);
+ die(_("no such ref %s"), *av);
/* Has the base been specified? */
if (reflog_base) {
unsigned int flag = 1u << (num_rev + REV_SHIFT);
if (MAX_REVS <= num_rev)
- die("cannot handle more than %d revs.", MAX_REVS);
+ die(Q_("cannot handle more than %d rev.",
+ "cannot handle more than %d revs.",
+ MAX_REVS), MAX_REVS);
if (get_sha1(ref_name[num_rev], revkey.hash))
- die("'%s' is not a valid ref.", ref_name[num_rev]);
+ die(_("'%s' is not a valid ref."), ref_name[num_rev]);
commit = lookup_commit_reference(revkey.hash);
if (!commit)
- die("cannot find commit %s (%s)",
+ die(_("cannot find commit %s (%s)"),
ref_name[num_rev], oid_to_hex(&revkey));
parse_commit(commit);
mark_seen(commit, &seen);
* NEEDSWORK: This works incorrectly on the domain and protocol part.
* remote_url url outcome expectation
* http://a.com/b ../c http://a.com/c as is
+ * http://a.com/b/ ../c http://a.com/c same as previous line, but
+ * ignore trailing slash in url
* http://a.com/b ../../c http://c error out
* http://a.com/b ../../../c http:/c error out
* http://a.com/b ../../../../c http:c error out
struct strbuf sb = STRBUF_INIT;
size_t len = strlen(remoteurl);
- if (is_dir_sep(remoteurl[len]))
- remoteurl[len] = '\0';
+ if (is_dir_sep(remoteurl[len-1]))
+ remoteurl[len-1] = '\0';
if (!url_is_local_not_ssh(remoteurl) || is_absolute_path(remoteurl))
is_relative = 0;
}
strbuf_reset(&sb);
strbuf_addf(&sb, "%s%s%s", remoteurl, colonsep ? ":" : "/", url);
+ if (ends_with(url, "/"))
+ strbuf_setlen(&sb, sb.len - 1);
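With the two trailing-slash fixes above, the resolved URL no longer depends
on a trailing '/' in either input; extending the table in the comment, a
relative path that itself ends in a slash now normalizes the same way:

    http://a.com/b        ../c/          http://a.com/c   trailing slash dropped from result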
free(remoteurl);
if (starts_with_dot_slash(sb.buf))
if (ce_stage(ce))
printf("%06o %s U\t", ce->ce_mode, sha1_to_hex(null_sha1));
else
- printf("%06o %s %d\t", ce->ce_mode, sha1_to_hex(ce->sha1), ce_stage(ce));
+ printf("%06o %s %d\t", ce->ce_mode,
+ oid_to_hex(&ce->oid), ce_stage(ce));
utf8_fprintf(stdout, "%s\n", ce->name);
}
}
static int clone_submodule(const char *path, const char *gitdir, const char *url,
- const char *depth, const char *reference, int quiet)
+ const char *depth, struct string_list *reference,
+ int quiet, int progress)
{
struct child_process cp = CHILD_PROCESS_INIT;
argv_array_push(&cp.args, "--no-checkout");
if (quiet)
argv_array_push(&cp.args, "--quiet");
+ if (progress)
+ argv_array_push(&cp.args, "--progress");
if (depth && *depth)
argv_array_pushl(&cp.args, "--depth", depth, NULL);
- if (reference && *reference)
- argv_array_pushl(&cp.args, "--reference", reference, NULL);
+ if (reference->nr) {
+ struct string_list_item *item;
+ for_each_string_list_item(item, reference)
+ argv_array_pushl(&cp.args, "--reference",
+ item->string, NULL);
+ }
if (gitdir && *gitdir)
argv_array_pushl(&cp.args, "--separate-git-dir", gitdir, NULL);
return run_command(&cp);
}
+struct submodule_alternate_setup {
+ const char *submodule_name;
+ enum SUBMODULE_ALTERNATE_ERROR_MODE {
+ SUBMODULE_ALTERNATE_ERROR_DIE,
+ SUBMODULE_ALTERNATE_ERROR_INFO,
+ SUBMODULE_ALTERNATE_ERROR_IGNORE
+ } error_mode;
+ struct string_list *reference;
+};
+#define SUBMODULE_ALTERNATE_SETUP_INIT { NULL, \
+ SUBMODULE_ALTERNATE_ERROR_IGNORE, NULL }
+
+static int add_possible_reference_from_superproject(
+ struct alternate_object_database *alt, void *sas_cb)
+{
+ struct submodule_alternate_setup *sas = sas_cb;
+
+ /*
+ * If the alternate object store is another repository, try the
+ * standard layout with .git/modules/<name>/objects
+ */
+ if (ends_with(alt->path, ".git/objects")) {
+ char *sm_alternate;
+ struct strbuf sb = STRBUF_INIT;
+ struct strbuf err = STRBUF_INIT;
+ strbuf_add(&sb, alt->path, strlen(alt->path) - strlen("objects"));
+
+ /*
+ * We need to end the new path with '/' to mark it as a directory;
+ * otherwise, for a submodule name that contains '/', the last path
+ * component of a missing submodule reference would be mistaken for
+ * a file name.
+ */
+ strbuf_addf(&sb, "modules/%s/", sas->submodule_name);
+
+ sm_alternate = compute_alternate_path(sb.buf, &err);
+ if (sm_alternate) {
+ string_list_append(sas->reference, xstrdup(sb.buf));
+ free(sm_alternate);
+ } else {
+ switch (sas->error_mode) {
+ case SUBMODULE_ALTERNATE_ERROR_DIE:
+ die(_("submodule '%s' cannot add alternate: %s"),
+ sas->submodule_name, err.buf);
+ case SUBMODULE_ALTERNATE_ERROR_INFO:
+ fprintf(stderr, _("submodule '%s' cannot add alternate: %s"),
+ sas->submodule_name, err.buf);
+ case SUBMODULE_ALTERNATE_ERROR_IGNORE:
+ ; /* nothing */
+ }
+ }
+ strbuf_release(&sb);
+ }
+
+ return 0;
+}
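As a concrete illustration (the paths are hypothetical): if the superproject
has an alternate at "/src/super/.git/objects" and the submodule being cloned
is named "gui", the callback above derives the candidate

    /src/super/.git/               (the alternate path minus "objects")
    + modules/gui/                 (standard submodule layout, trailing '/' kept)
    = /src/super/.git/modules/gui/

and, if compute_alternate_path() accepts it, that directory ends up on the
list that clone_submodule() passes on as "--reference" arguments.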
+
+static void prepare_possible_alternates(const char *sm_name,
+ struct string_list *reference)
+{
+ char *sm_alternate = NULL, *error_strategy = NULL;
+ struct submodule_alternate_setup sas = SUBMODULE_ALTERNATE_SETUP_INIT;
+
+ git_config_get_string("submodule.alternateLocation", &sm_alternate);
+ if (!sm_alternate)
+ return;
+
+ git_config_get_string("submodule.alternateErrorStrategy", &error_strategy);
+
+ if (!error_strategy)
+ error_strategy = xstrdup("die");
+
+ sas.submodule_name = sm_name;
+ sas.reference = reference;
+ if (!strcmp(error_strategy, "die"))
+ sas.error_mode = SUBMODULE_ALTERNATE_ERROR_DIE;
+ else if (!strcmp(error_strategy, "info"))
+ sas.error_mode = SUBMODULE_ALTERNATE_ERROR_INFO;
+ else if (!strcmp(error_strategy, "ignore"))
+ sas.error_mode = SUBMODULE_ALTERNATE_ERROR_IGNORE;
+ else
+ die(_("Value '%s' for submodule.alternateErrorStrategy is not recognized"), error_strategy);
+
+ if (!strcmp(sm_alternate, "superproject"))
+ foreach_alt_odb(add_possible_reference_from_superproject, &sas);
+ else if (!strcmp(sm_alternate, "no"))
+ ; /* do nothing */
+ else
+ die(_("Value '%s' for submodule.alternateLocation is not recognized"), sm_alternate);
+
+ free(sm_alternate);
+ free(error_strategy);
+}
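A minimal configuration sketch for the two settings read above (placed in the
superproject's configuration; unrecognized values die):

    [submodule]
        alternateLocation = superproject   # or "no", which, like leaving it unset, does nothing
        alternateErrorStrategy = info      # or "die" (the default) or "ignore"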
+
static int module_clone(int argc, const char **argv, const char *prefix)
{
- const char *name = NULL, *url = NULL;
- const char *reference = NULL, *depth = NULL;
+ const char *name = NULL, *url = NULL, *depth = NULL;
int quiet = 0;
+ int progress = 0;
FILE *submodule_dot_git;
char *p, *path = NULL, *sm_gitdir;
struct strbuf rel_path = STRBUF_INIT;
struct strbuf sb = STRBUF_INIT;
+ struct string_list reference = STRING_LIST_INIT_NODUP;
struct option module_clone_options[] = {
OPT_STRING(0, "prefix", &prefix,
OPT_STRING(0, "url", &url,
N_("string"),
N_("url where to clone the submodule from")),
- OPT_STRING(0, "reference", &reference,
- N_("string"),
+ OPT_STRING_LIST(0, "reference", &reference,
+ N_("repo"),
N_("reference repository")),
OPT_STRING(0, "depth", &depth,
N_("string"),
N_("depth for shallow clones")),
OPT__QUIET(&quiet, "Suppress output for cloning a submodule"),
+ OPT_BOOL(0, "progress", &progress,
+ N_("force cloning progress")),
OPT_END()
};
if (!file_exists(sm_gitdir)) {
if (safe_create_leading_directories_const(sm_gitdir) < 0)
die(_("could not create directory '%s'"), sm_gitdir);
- if (clone_submodule(path, sm_gitdir, url, depth, reference, quiet))
+
+ prepare_possible_alternates(name, &reference);
+
+ if (clone_submodule(path, sm_gitdir, url, depth, &reference,
+ quiet, progress))
die(_("clone of '%s' into submodule path '%s' failed"),
url, path);
} else {
struct submodule_update_strategy update;
/* configuration parameters which are passed on to the children */
+ int progress;
int quiet;
int recommend_shallow;
- const char *reference;
+ struct string_list references;
const char *depth;
const char *recursive_prefix;
const char *prefix;
int failed_clones_nr, failed_clones_alloc;
};
#define SUBMODULE_UPDATE_CLONE_INIT {0, MODULE_LIST_INIT, 0, \
- SUBMODULE_UPDATE_STRATEGY_INIT, 0, -1, NULL, NULL, NULL, NULL, \
+ SUBMODULE_UPDATE_STRATEGY_INIT, 0, 0, -1, STRING_LIST_INIT_DUP, \
+ NULL, NULL, NULL, \
STRING_LIST_INIT_DUP, 0, NULL, 0, 0}
strbuf_reset(&sb);
strbuf_addf(&sb, "%06o %s %d %d\t%s\n", ce->ce_mode,
- sha1_to_hex(ce->sha1), ce_stage(ce),
+ oid_to_hex(&ce->oid), ce_stage(ce),
needs_cloning, ce->name);
string_list_append(&suc->projectlines, sb.buf);
child->err = -1;
argv_array_push(&child->args, "submodule--helper");
argv_array_push(&child->args, "clone");
+ if (suc->progress)
+ argv_array_push(&child->args, "--progress");
if (suc->quiet)
argv_array_push(&child->args, "--quiet");
if (suc->prefix)
argv_array_pushl(&child->args, "--path", sub->path, NULL);
argv_array_pushl(&child->args, "--name", sub->name, NULL);
argv_array_pushl(&child->args, "--url", url, NULL);
- if (suc->reference)
- argv_array_push(&child->args, suc->reference);
+ if (suc->references.nr) {
+ struct string_list_item *item;
+ for_each_string_list_item(item, &suc->references)
+ argv_array_pushl(&child->args, "--reference", item->string, NULL);
+ }
if (suc->depth)
argv_array_push(&child->args, suc->depth);
OPT_STRING(0, "update", &update,
N_("string"),
N_("rebase, merge, checkout or none")),
- OPT_STRING(0, "reference", &suc.reference, N_("repo"),
+ OPT_STRING_LIST(0, "reference", &suc.references, N_("repo"),
N_("reference repository")),
OPT_STRING(0, "depth", &suc.depth, "<depth>",
N_("Create a shallow clone truncated to the "
OPT_BOOL(0, "recommend-shallow", &suc.recommend_shallow,
N_("whether the initial clone should follow the shallow recommendation")),
OPT__QUIET(&suc.quiet, N_("don't print cloning progress")),
+ OPT_BOOL(0, "progress", &suc.progress,
+ N_("force cloning progress")),
OPT_END()
};
static unsigned char buffer[4096];
static unsigned int offset, len;
static off_t consumed_bytes;
+static off_t max_input_size;
static git_SHA_CTX ctx;
static struct fsck_options fsck_options = FSCK_OPTIONS_STRICT;
if (signed_add_overflows(consumed_bytes, bytes))
die("pack too large for current definition of off_t");
consumed_bytes += bytes;
+ if (max_input_size && consumed_bytes > max_input_size)
+ die(_("pack exceeds maximum allowed size"));
}
static void *get_data(unsigned long size)
len = sizeof(*hdr);
continue;
}
+ if (skip_prefix(arg, "--max-input-size=", &arg)) {
+ max_input_size = strtoumax(arg, NULL, 10);
+ continue;
+ }
usage(unpack_usage);
}
fill_stat_cache_info(ce, st);
ce->ce_mode = ce_mode_from_stat(old, st->st_mode);
- if (index_path(ce->sha1, path, st,
+ if (index_path(ce->oid.hash, path, st,
info_only ? 0 : HASH_WRITE_OBJECT)) {
free(ce);
return -1;
*/
static int process_directory(const char *path, int len, struct stat *st)
{
- unsigned char sha1[20];
+ struct object_id oid;
int pos = cache_name_pos(path, len);
/* Exact match: file or existing gitlink */
if (S_ISGITLINK(ce->ce_mode)) {
/* Do nothing to the index if there is no HEAD! */
- if (resolve_gitlink_ref(path, "HEAD", sha1) < 0)
+ if (resolve_gitlink_ref(path, "HEAD", oid.hash) < 0)
return 0;
return add_one_path(ce, path, len, st);
}
/* No match - should we add it as a gitlink? */
- if (!resolve_gitlink_ref(path, "HEAD", sha1))
+ if (!resolve_gitlink_ref(path, "HEAD", oid.hash))
return add_one_path(NULL, path, len, st);
/* Error out. */
return add_one_path(ce, path, len, &st);
}
-static int add_cacheinfo(unsigned int mode, const unsigned char *sha1,
+static int add_cacheinfo(unsigned int mode, const struct object_id *oid,
const char *path, int stage)
{
int size, len, option;
size = cache_entry_size(len);
ce = xcalloc(1, size);
- hashcpy(ce->sha1, sha1);
+ oidcpy(&ce->oid, oid);
memcpy(ce->name, path, len);
ce->ce_flags = create_ce_flags(stage);
ce->ce_namelen = len;
while (getline_fn(&buf, stdin) != EOF) {
char *ptr, *tab;
char *path_name;
- unsigned char sha1[20];
+ struct object_id oid;
unsigned int mode;
unsigned long ul;
int stage;
mode = ul;
tab = strchr(ptr, '\t');
- if (!tab || tab - ptr < 41)
+ if (!tab || tab - ptr < GIT_SHA1_HEXSZ + 1)
goto bad_line;
if (tab[-2] == ' ' && '0' <= tab[-1] && tab[-1] <= '3') {
ptr = tab + 1; /* point at the head of path */
}
- if (get_sha1_hex(tab - 40, sha1) || tab[-41] != ' ')
+ if (get_oid_hex(tab - GIT_SHA1_HEXSZ, &oid) ||
+ tab[-(GIT_SHA1_HEXSZ + 1)] != ' ')
goto bad_line;
path_name = ptr;
* ptr[-1] points at tab,
* ptr[-41] is at the beginning of sha1
*/
- ptr[-42] = ptr[-1] = 0;
- if (add_cacheinfo(mode, sha1, path_name, stage))
+ ptr[-(GIT_SHA1_HEXSZ + 2)] = ptr[-1] = 0;
+ if (add_cacheinfo(mode, &oid, path_name, stage))
die("git update-index: unable to update %s",
path_name);
}
NULL
};
-static unsigned char head_sha1[20];
-static unsigned char merge_head_sha1[20];
+static struct object_id head_oid;
+static struct object_id merge_head_oid;
static struct cache_entry *read_one_ent(const char *which,
- unsigned char *ent, const char *path,
+ struct object_id *ent, const char *path,
int namelen, int stage)
{
unsigned mode;
- unsigned char sha1[20];
+ struct object_id oid;
int size;
struct cache_entry *ce;
- if (get_tree_entry(ent, path, sha1, &mode)) {
+ if (get_tree_entry(ent->hash, path, oid.hash, &mode)) {
if (which)
error("%s: not in %s branch.", path, which);
return NULL;
size = cache_entry_size(namelen);
ce = xcalloc(1, size);
- hashcpy(ce->sha1, sha1);
+ oidcpy(&ce->oid, &oid);
memcpy(ce->name, path, namelen);
ce->ce_flags = create_ce_flags(stage);
ce->ce_namelen = namelen;
* stuff HEAD version in stage #2,
* stuff MERGE_HEAD version in stage #3.
*/
- ce_2 = read_one_ent("our", head_sha1, path, namelen, 2);
- ce_3 = read_one_ent("their", merge_head_sha1, path, namelen, 3);
+ ce_2 = read_one_ent("our", &head_oid, path, namelen, 2);
+ ce_3 = read_one_ent("their", &merge_head_oid, path, namelen, 3);
if (!ce_2 || !ce_3) {
ret = -1;
goto free_return;
}
- if (!hashcmp(ce_2->sha1, ce_3->sha1) &&
+ if (!oidcmp(&ce_2->oid, &ce_3->oid) &&
ce_2->ce_mode == ce_3->ce_mode) {
fprintf(stderr, "%s: identical in both, skipping.\n",
path);
static void read_head_pointers(void)
{
- if (read_ref("HEAD", head_sha1))
+ if (read_ref("HEAD", head_oid.hash))
die("No HEAD -- no initial commit yet?");
- if (read_ref("MERGE_HEAD", merge_head_sha1)) {
+ if (read_ref("MERGE_HEAD", merge_head_oid.hash)) {
fprintf(stderr, "Not in the middle of a merge.\n");
exit(0);
}
PATHSPEC_PREFER_CWD,
prefix, av + 1);
- if (read_ref("HEAD", head_sha1))
+ if (read_ref("HEAD", head_oid.hash))
/* If there is no HEAD, that means it is an initial
* commit. Update everything in the index.
*/
if (ce_stage(ce) || !ce_path_match(ce, &pathspec, NULL))
continue;
if (has_head)
- old = read_one_ent(NULL, head_sha1,
+ old = read_one_ent(NULL, &head_oid,
ce->name, ce_namelen(ce), 0);
if (old && ce->ce_mode == old->ce_mode &&
- !hashcmp(ce->sha1, old->sha1)) {
+ !oidcmp(&ce->oid, &old->oid)) {
free(old);
continue; /* unchanged */
}
static int parse_new_style_cacheinfo(const char *arg,
unsigned int *mode,
- unsigned char sha1[],
+ struct object_id *oid,
const char **path)
{
unsigned long ul;
return -1; /* not a new-style cacheinfo */
*mode = ul;
endp++;
- if (get_sha1_hex(endp, sha1) || endp[40] != ',')
+ if (get_oid_hex(endp, oid) || endp[GIT_SHA1_HEXSZ] != ',')
return -1;
- *path = endp + 41;
+ *path = endp + GIT_SHA1_HEXSZ + 1;
return 0;
}
static int cacheinfo_callback(struct parse_opt_ctx_t *ctx,
const struct option *opt, int unset)
{
- unsigned char sha1[20];
+ struct object_id oid;
unsigned int mode;
const char *path;
- if (!parse_new_style_cacheinfo(ctx->argv[1], &mode, sha1, &path)) {
- if (add_cacheinfo(mode, sha1, path, 0))
+ if (!parse_new_style_cacheinfo(ctx->argv[1], &mode, &oid, &path)) {
+ if (add_cacheinfo(mode, &oid, path, 0))
die("git update-index: --cacheinfo cannot add %s", path);
ctx->argv++;
ctx->argc--;
if (ctx->argc <= 3)
return error("option 'cacheinfo' expects <mode>,<sha1>,<path>");
if (strtoul_ui(*++ctx->argv, 8, &mode) ||
- get_sha1_hex(*++ctx->argv, sha1) ||
- add_cacheinfo(mode, sha1, *++ctx->argv, 0))
+ get_oid_hex(*++ctx->argv, &oid) ||
+ add_cacheinfo(mode, &oid, *++ctx->argv, 0))
die("git update-index: --cacheinfo cannot add %s", *ctx->argv);
ctx->argc -= 3;
return 0;
break;
case UC_DISABLE:
if (git_config_get_untracked_cache() == 1)
- warning("core.untrackedCache is set to true; "
- "remove or change it, if you really want to "
- "disable the untracked cache");
+ warning(_("core.untrackedCache is set to true; "
+ "remove or change it, if you really want to "
+ "disable the untracked cache"));
remove_untracked_cache(&the_index);
report(_("Untracked cache disabled"));
break;
case UC_ENABLE:
case UC_FORCE:
if (git_config_get_untracked_cache() == 0)
- warning("core.untrackedCache is set to false; "
- "remove or change it, if you really want to "
- "enable the untracked cache");
+ warning(_("core.untrackedCache is set to false; "
+ "remove or change it, if you really want to "
+ "enable the untracked cache"));
add_untracked_cache(&the_index);
report(_("Untracked cache enabled for '%s'"), get_git_work_tree());
break;
writer.git_cmd = 1;
if (start_command(&writer)) {
int err = errno;
- packet_write(1, "NACK unable to spawn subprocess\n");
+ packet_write_fmt(1, "NACK unable to spawn subprocess\n");
die("upload-archive: %s", strerror(err));
}
- packet_write(1, "ACK\n");
+ packet_write_fmt(1, "ACK\n");
packet_flush(1);
while (1) {
break;
}
fprintf(stderr, "%s: unmerged (%s)\n",
- ce->name, sha1_to_hex(ce->sha1));
+ ce->name, oid_to_hex(&ce->oid));
}
}
if (funny)
}
}
else {
- sha1 = ce->sha1;
+ sha1 = ce->oid.hash;
mode = ce->ce_mode;
entlen = pathlen - baselen;
i++;
unsigned int ce_flags;
unsigned int ce_namelen;
unsigned int index; /* for link extension */
- unsigned char sha1[20];
+ struct object_id oid;
char name[FLEX_ARRAY]; /* more */
};
#define GIT_NAMESPACE_ENVIRONMENT "GIT_NAMESPACE"
#define GIT_WORK_TREE_ENVIRONMENT "GIT_WORK_TREE"
#define GIT_PREFIX_ENVIRONMENT "GIT_PREFIX"
+#define GIT_SUPER_PREFIX_ENVIRONMENT "GIT_INTERNAL_SUPER_PREFIX"
#define DEFAULT_GIT_DIR_ENVIRONMENT ".git"
#define DB_ENVIRONMENT "GIT_OBJECT_DIRECTORY"
#define INDEX_ENVIRONMENT "GIT_INDEX_FILE"
#define GIT_GLOB_PATHSPECS_ENVIRONMENT "GIT_GLOB_PATHSPECS"
#define GIT_NOGLOB_PATHSPECS_ENVIRONMENT "GIT_NOGLOB_PATHSPECS"
#define GIT_ICASE_PATHSPECS_ENVIRONMENT "GIT_ICASE_PATHSPECS"
+#define GIT_QUARANTINE_ENVIRONMENT "GIT_QUARANTINE_PATH"
/*
* This environment variable is expected to contain a boolean indicating
*/
extern const char * const local_repo_env[];
+/*
+ * Returns true iff we have a configured git repository (either via
+ * setup_git_directory, or in the environment via $GIT_DIR).
+ */
+int have_git_dir(void);
+
extern int is_bare_repository_cfg;
extern int is_bare_repository(void);
extern int is_inside_git_dir(void);
extern int get_common_dir(struct strbuf *sb, const char *gitdir);
extern const char *get_git_namespace(void);
extern const char *strip_namespace(const char *namespaced_ref);
+extern const char *get_super_prefix(void);
extern const char *get_git_work_tree(void);
/*
extern int path_inside_repo(const char *prefix, const char *path);
#define INIT_DB_QUIET 0x0001
+#define INIT_DB_EXIST_OK 0x0002
-extern int set_git_dir_init(const char *git_dir, const char *real_git_dir, int);
-extern int init_db(const char *template_dir, unsigned int flags);
+extern int init_db(const char *git_dir, const char *real_git_dir,
+ const char *template_dir, unsigned int flags);
extern void sanitize_stdfds(void);
extern int daemonize(void);
extern unsigned long big_file_threshold;
extern unsigned long pack_size_limit_cfg;
+/*
+ * Accessors for the core.sharedrepository config which lazy-load the value
+ * from the config (if not already set). The "reset" function can be
+ * used to unset "set" or cached value, meaning that the value will be loaded
+ * fresh from the config file on the next call to get_shared_repository().
+ */
void set_shared_repository(int value);
int get_shared_repository(void);
+void reset_shared_repository(void);
/*
* Do replace refs need to be checked this run? This variable is
__attribute__((format (printf, 2, 3)));
extern char *git_path_buf(struct strbuf *buf, const char *fmt, ...)
__attribute__((format (printf, 2, 3)));
-extern void strbuf_git_path_submodule(struct strbuf *sb, const char *path,
- const char *fmt, ...)
+extern int strbuf_git_path_submodule(struct strbuf *sb, const char *path,
+ const char *fmt, ...)
__attribute__((format (printf, 3, 4)));
extern char *git_pathdup(const char *fmt, ...)
__attribute__((format (printf, 1, 2)));
* The result will be at least `len` characters long, and will be NUL
* terminated.
*
- * The non-`_r` version returns a static buffer which will be overwritten by
- * subsequent calls.
+ * The non-`_r` version returns a static buffer which remains valid until 4
+ * more calls to find_unique_abbrev are made.
*
* The `_r` variant writes to a buffer supplied by the caller, which must be at
* least `GIT_SHA1_HEXSZ + 1` bytes. The return value is the number of bytes
#define EMPTY_TREE_SHA1_BIN_LITERAL \
"\x4b\x82\x5d\xc6\x42\xcb\x6e\xb9\xa0\x60" \
"\xe5\x4b\xf8\xd6\x92\x88\xfb\xee\x49\x04"
-#define EMPTY_TREE_SHA1_BIN \
- ((const unsigned char *) EMPTY_TREE_SHA1_BIN_LITERAL)
+extern const struct object_id empty_tree_oid;
+#define EMPTY_TREE_SHA1_BIN (empty_tree_oid.hash)
#define EMPTY_BLOB_SHA1_HEX \
"e69de29bb2d1d6434b8b29ae775ad8c2e48c5391"
#define EMPTY_BLOB_SHA1_BIN_LITERAL \
"\xe6\x9d\xe2\x9b\xb2\xd1\xd6\x43\x4b\x8b" \
"\x29\xae\x77\x5a\xd8\xc2\xe4\x8c\x53\x91"
-#define EMPTY_BLOB_SHA1_BIN \
- ((const unsigned char *) EMPTY_BLOB_SHA1_BIN_LITERAL)
+extern const struct object_id empty_blob_oid;
+#define EMPTY_BLOB_SHA1_BIN (empty_blob_oid.hash)
+
static inline int is_empty_blob_sha1(const unsigned char *sha1)
{
return !hashcmp(sha1, EMPTY_BLOB_SHA1_BIN);
}
+static inline int is_empty_blob_oid(const struct object_id *oid)
+{
+ return !hashcmp(oid->hash, EMPTY_BLOB_SHA1_BIN);
+}
+
+static inline int is_empty_tree_sha1(const unsigned char *sha1)
+{
+ return !hashcmp(sha1, EMPTY_TREE_SHA1_BIN);
+}
+
+static inline int is_empty_tree_oid(const struct object_id *oid)
+{
+ return !hashcmp(oid->hash, EMPTY_TREE_SHA1_BIN);
+}
+
+
int git_mkstemp(char *path, size_t n, const char *template);
/* set default permissions by passing mode arguments to open(2) */
extern int hash_sha1_file_literally(const void *buf, unsigned long len, const char *type, unsigned char *sha1, unsigned flags);
extern int pretend_sha1_file(void *, unsigned long, enum object_type, unsigned char *);
extern int force_object_loose(const unsigned char *sha1, time_t mtime);
-extern int git_open_noatime(const char *name);
+extern int git_open(const char *name);
extern void *map_sha1_file(const unsigned char *sha1, unsigned long *size);
extern int unpack_sha1_header(git_zstream *stream, unsigned char *map, unsigned long mapsize, void *buffer, unsigned long bufsiz);
extern int parse_sha1_header(const char *hdr, unsigned long *sizep);
#define MINIMUM_ABBREV minimum_abbrev
#define DEFAULT_ABBREV default_abbrev
+/* used when the code does not know or care what the default abbrev is */
+#define FALLBACK_DEFAULT_ABBREV 7
+
struct object_context {
unsigned char tree[20];
char path[PATH_MAX];
#define GET_SHA1_FOLLOW_SYMLINKS 0100
#define GET_SHA1_ONLY_TO_DIE 04000
+#define GET_SHA1_DISAMBIGUATORS \
+ (GET_SHA1_COMMIT | GET_SHA1_COMMITTISH | \
+ GET_SHA1_TREE | GET_SHA1_TREEISH | \
+ GET_SHA1_BLOB)
+
extern int get_sha1(const char *str, unsigned char *sha1);
extern int get_sha1_commit(const char *str, unsigned char *sha1);
extern int get_sha1_committish(const char *str, unsigned char *sha1);
typedef int each_abbrev_fn(const unsigned char *sha1, void *);
extern int for_each_abbrev(const char *prefix, each_abbrev_fn, void *);
+extern int set_disambiguate_hint_config(const char *var, const char *value);
+
/*
* Try to read a SHA1 in hexadecimal format from the 40 characters
* starting at hex. Write the 20-byte result to sha1 in binary form.
extern char *oid_to_hex(const struct object_id *oid); /* same static buffer as sha1_to_hex */
extern int interpret_branch_name(const char *str, int len, struct strbuf *);
-extern int get_sha1_mb(const char *str, unsigned char *sha1);
+extern int get_oid_mb(const char *str, struct object_id *oid);
extern int validate_headref(const char *ref);
not_new:1,
refresh_cache:1;
};
+#define CHECKOUT_INIT { NULL, "" }
#define TEMPORARY_FILENAME_LENGTH 25
extern int checkout_entry(struct cache_entry *ce, const struct checkout *state, char *topath);
extern struct alternate_object_database {
struct alternate_object_database *next;
- char *name;
- char base[FLEX_ARRAY]; /* more */
+
+ /* see alt_scratch_buf() */
+ struct strbuf scratch;
+ size_t base_len;
+
+ char path[FLEX_ARRAY];
} *alt_odb_list;
extern void prepare_alt_odb(void);
extern void read_info_alternates(const char * relative_base, int depth);
-extern void add_to_alternates_file(const char *reference);
+extern char *compute_alternate_path(const char *path, struct strbuf *err);
typedef int alt_odb_fn(struct alternate_object_database *, void *);
extern int foreach_alt_odb(alt_odb_fn, void*);
+/*
+ * Allocate a "struct alternate_object_database" but do _not_ actually
+ * add it to the list of alternates.
+ */
+struct alternate_object_database *alloc_alt_odb(const char *dir);
+
+/*
+ * Add the directory to the on-disk alternates file; the new entry will also
+ * take effect in the current process.
+ */
+extern void add_to_alternates_file(const char *dir);
+
+/*
+ * Add the directory to the in-memory list of alternates (along with any
+ * recursive alternates it points to), but do not modify the on-disk alternates
+ * file.
+ */
+extern void add_to_alternates_memory(const char *dir);
+
+/*
+ * Returns a scratch strbuf pre-filled with the alternate object directory,
+ * including a trailing slash, which can be used to access paths in the
+ * alternate. Always use this over direct access to alt->scratch, as it
+ * cleans up any previous use of the scratch buffer.
+ */
+extern struct strbuf *alt_scratch_buf(struct alternate_object_database *alt);
+
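A short usage sketch for the scratch buffer described above (illustrative
only; "pack/" is an arbitrary sub-path):

    struct strbuf *buf = alt_scratch_buf(alt);
    strbuf_addstr(buf, "pack/");
    /*
     * buf->buf now holds "<alternate object dir>/pack/"; it remains usable
     * until the next alt_scratch_buf() call on the same alternate.
     */
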
struct pack_window {
struct pack_window *next;
unsigned char *base;
extern void reprepare_packed_git(void);
extern void install_packed_git(struct packed_git *pack);
+/*
+ * Give a rough count of objects in the repository. This sacrifices accuracy
+ * for speed.
+ */
+unsigned long approximate_object_count(void);
+
extern struct packed_git *find_sha1_pack(const unsigned char *sha1,
struct packed_git *packs);
} packed;
} u;
};
+
+/*
+ * Initializer for a "struct object_info" that wants no items. You may
+ * also memset() the memory to all-zeroes.
+ */
+#define OBJECT_INFO_INIT {NULL}
+
extern int sha1_object_info_extended(const unsigned char *, struct object_info *, unsigned flags);
+extern int packed_object_info(struct packed_git *pack, off_t offset, struct object_info *);
/* Dumb servers support */
extern int update_server_info(int);
/* pager.c */
extern void setup_pager(void);
-extern const char *pager_program;
extern int pager_in_use(void);
extern int pager_use_color;
extern int term_columns(void);
/* Show sha1's */
for (i = 0; i < num_parent; i++)
- printf(" %s", diff_unique_abbrev(p->parent[i].oid.hash,
- opt->abbrev));
- printf(" %s ", diff_unique_abbrev(p->oid.hash, opt->abbrev));
+ printf(" %s", diff_aligned_abbrev(&p->parent[i].oid,
+ opt->abbrev));
+ printf(" %s ", diff_aligned_abbrev(&p->oid, opt->abbrev));
}
if (opt->output_format & (DIFF_FORMAT_RAW | DIFF_FORMAT_NAME_STATUS)) {
}
static const char commit_utf8_warn[] =
-"Warning: commit message did not conform to UTF-8.\n"
-"You may want to amend it after fixing the message, or set the config\n"
-"variable i18n.commitencoding to the encoding your project uses.\n";
+N_("Warning: commit message did not conform to UTF-8.\n"
+ "You may want to amend it after fixing the message, or set the config\n"
+ "variable i18n.commitencoding to the encoding your project uses.\n");
int commit_tree_extended(const char *msg, size_t msg_len,
const unsigned char *tree,
/* And check the encoding */
if (encoding_is_utf8 && !verify_utf8(&buffer))
- fprintf(stderr, commit_utf8_warn);
+ fprintf(stderr, _(commit_utf8_warn));
if (sign_commit && do_sign_commit(&buffer, sign_commit))
return -1;
extern int is_repository_shallow(void);
extern struct commit_list *get_shallow_commits(struct object_array *heads,
int depth, int shallow_flag, int not_shallow_flag);
+extern struct commit_list *get_shallow_commits_by_rev_list(
+ int ac, const char **av, int shallow_flag, int not_shallow_flag);
extern void set_alternate_shallow_file(const char *path, int override);
extern int write_shallow_commits(struct strbuf *out, int use_pack_protocol,
const struct sha1_array *extra);
return 0;
}
+ if (!strcmp(var, "core.disambiguate"))
+ return set_disambiguate_hint_config(var, value);
+
if (!strcmp(var, "core.loosecompression")) {
int level = git_config_int(var, value);
if (level == -1)
return 0;
}
- if (!strcmp(var, "core.pager"))
- return git_config_string(&pager_program, var, value);
-
if (!strcmp(var, "core.editor"))
return git_config_string(&editor_program, var, value);
int ret = 0;
char *xdg_config = xdg_config_home("config");
char *user_config = expand_user_path("~/.gitconfig");
- char *repo_config = git_pathdup("config");
+ char *repo_config = have_git_dir() ? git_pathdup("config") : NULL;
current_parsing_scope = CONFIG_SCOPE_SYSTEM;
if (git_config_system() && !access_or_die(git_etc_gitconfig(), R_OK, 0))
return check_ref(ref->name, flags);
}
-static void die_initial_contact(int got_at_least_one_head)
+static void die_initial_contact(int unexpected)
{
- if (got_at_least_one_head)
- die("The remote end hung up upon initial contact");
+ if (unexpected)
+ die(_("The remote end hung up upon initial contact"));
else
- die("Could not read from remote repository.\n\n"
- "Please make sure you have the correct access rights\n"
- "and the repository exists.");
+ die(_("Could not read from remote repository.\n\n"
+ "Please make sure you have the correct access rights\n"
+ "and the repository exists."));
}
static void parse_one_symref_info(struct string_list *symref, const char *val, int len)
struct sha1_array *shallow_points)
{
struct ref **orig_list = list;
- int got_at_least_one_head = 0;
+
+ /*
+ * A hang-up after we have already seen some response from the
+ * other end is unexpected, since we know the other end is willing
+ * to talk to us. A hang-up before seeing any response, however,
+ * does not necessarily mean an ACL problem.
+ */
+ int saw_response;
+ int got_dummy_ref_with_capabilities_declaration = 0;
*list = NULL;
- for (;;) {
+ for (saw_response = 0; ; saw_response = 1) {
struct ref *ref;
struct object_id old_oid;
char *name;
PACKET_READ_GENTLE_ON_EOF |
PACKET_READ_CHOMP_NEWLINE);
if (len < 0)
- die_initial_contact(got_at_least_one_head);
+ die_initial_contact(saw_response);
if (!len)
break;
continue;
}
+ if (!strcmp(name, "capabilities^{}")) {
+ if (saw_response)
+ die("protocol error: unexpected capabilities^{}");
+ if (got_dummy_ref_with_capabilities_declaration)
+ die("protocol error: multiple capabilities^{}");
+ got_dummy_ref_with_capabilities_declaration = 1;
+ continue;
+ }
+
if (!check_ref(name, flags))
continue;
+
+ if (got_dummy_ref_with_capabilities_declaration)
+ die("protocol error: unexpected ref after capabilities^{}");
+
ref = alloc_ref(buffer + GIT_SHA1_HEXSZ + 1);
oidcpy(&ref->old_oid, &old_oid);
*list = ref;
list = &ref->next;
- got_at_least_one_head = 1;
}
annotate_refs_with_symref_info(*orig_list);
* Note: Do not add any other headers here! Doing so
* will cause older git-daemon servers to crash.
*/
- packet_write(fd[1],
+ packet_write_fmt(fd[1],
"%s %s%chost=%s%c",
prog, path, 0,
target_host, 0);
_("Checking connectivity"));
rev_list.git_cmd = 1;
+ rev_list.env = opt->env;
rev_list.in = -1;
rev_list.no_stdout = 1;
if (opt->err_fd)
/* If non-zero, show progress as we traverse the objects. */
int progress;
+
+ /*
+ * Insert these variables into the environment of the child process.
+ */
+ const char **env;
};
#define CHECK_CONNECTED_INIT { 0 }
--- /dev/null
+@@
+expression base, nmemb, compar;
+@@
+- qsort(base, nmemb, sizeof(*base), compar);
++ QSORT(base, nmemb, compar);
+
+@@
+expression base, nmemb, compar;
+@@
+- qsort(base, nmemb, sizeof(base[0]), compar);
++ QSORT(base, nmemb, compar);
+
+@@
+type T;
+T *base;
+expression nmemb, compar;
+@@
+- qsort(base, nmemb, sizeof(T), compar);
++ QSORT(base, nmemb, compar);
+
+@@
+expression base, nmemb, compar;
+@@
+- if (nmemb)
+ QSORT(base, nmemb, compar);
+
+@@
+expression base, nmemb, compar;
+@@
+- if (nmemb > 0)
+ QSORT(base, nmemb, compar);
+
+@@
+expression base, nmemb, compar;
+@@
+- if (nmemb > 1)
+ QSORT(base, nmemb, compar);
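The effect of this semantic patch on a typical call site can be seen in the
builtin changes earlier in this patch, for example:

    /* before */
    qsort(info.list->items, info.list->nr,
          sizeof(*info.list->items), cmp_string_with_push);

    /* after */
    QSORT(info.list->items, info.list->nr, cmp_string_with_push);

QSORT() derives the element size from the array itself, and, as the last
three rules show, callers are also expected to drop their explicit "is the
array non-empty?" guards, the macro being assumed to handle that case.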
__git_refs ()
{
local i hash dir="$(__gitdir "${1-}")" track="${2-}"
- local format refs
+ local format refs pfx
if [ -d "$dir" ]; then
case "$cur" in
refs|refs/*)
track=""
;;
*)
+ [[ "$cur" == ^* ]] && pfx="^"
for i in HEAD FETCH_HEAD ORIG_HEAD MERGE_HEAD; do
- if [ -e "$dir/$i" ]; then echo $i; fi
+ if [ -e "$dir/$i" ]; then echo $pfx$i; fi
done
format="refname:short"
refs="refs/tags refs/heads refs/remotes"
;;
esac
- git --git-dir="$dir" for-each-ref --format="%($format)" \
+ git --git-dir="$dir" for-each-ref --format="$pfx%($format)" \
$refs
if [ -n "$track" ]; then
# employ the heuristic used by git checkout
--- /dev/null
+MAIN:=git-credential-libsecret
+all:: $(MAIN)
+
+CC = gcc
+RM = rm -f
+CFLAGS = -g -O2 -Wall
+PKG_CONFIG = pkg-config
+
+-include ../../../config.mak.autogen
+-include ../../../config.mak
+
+INCS:=$(shell $(PKG_CONFIG) --cflags libsecret-1 glib-2.0)
+LIBS:=$(shell $(PKG_CONFIG) --libs libsecret-1 glib-2.0)
+
+SRCS:=$(MAIN).c
+OBJS:=$(SRCS:.c=.o)
+
+%.o: %.c
+ $(CC) $(CFLAGS) $(CPPFLAGS) $(INCS) -o $@ -c $<
+
+$(MAIN): $(OBJS)
+ $(CC) -o $@ $(LDFLAGS) $^ $(LIBS)
+
+clean:
+ @$(RM) $(MAIN) $(OBJS)
--- /dev/null
+/*
+ * Copyright (C) 2011 John Szakmeister <john@szakmeister.net>
+ * 2012 Philipp A. Hartmann <pah@qo.cx>
+ * 2016 Mantas Mikulėnas <grawity@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+/*
+ * Credits:
+ * - GNOME Keyring API handling originally written by John Szakmeister
+ * - ported to credential helper API by Philipp A. Hartmann
+ */
+
+#include <stdio.h>
+#include <string.h>
+#include <stdlib.h>
+#include <glib.h>
+#include <libsecret/secret.h>
+
+/*
+ * This credential struct and API is simplified from git's credential.{h,c}
+ */
+struct credential {
+ char *protocol;
+ char *host;
+ unsigned short port;
+ char *path;
+ char *username;
+ char *password;
+};
+
+#define CREDENTIAL_INIT { NULL, NULL, 0, NULL, NULL, NULL }
+
+typedef int (*credential_op_cb)(struct credential *);
+
+struct credential_operation {
+ char *name;
+ credential_op_cb op;
+};
+
+#define CREDENTIAL_OP_END { NULL, NULL }
+
+/* ----------------- Secret Service functions ----------------- */
+
+static char *make_label(struct credential *c)
+{
+ if (c->port)
+ return g_strdup_printf("Git: %s://%s:%hu/%s",
+ c->protocol, c->host, c->port, c->path ? c->path : "");
+ else
+ return g_strdup_printf("Git: %s://%s/%s",
+ c->protocol, c->host, c->path ? c->path : "");
+}
+
+static GHashTable *make_attr_list(struct credential *c)
+{
+ GHashTable *al = g_hash_table_new_full(g_str_hash, g_str_equal, NULL, g_free);
+
+ if (c->username)
+ g_hash_table_insert(al, "user", g_strdup(c->username));
+ if (c->protocol)
+ g_hash_table_insert(al, "protocol", g_strdup(c->protocol));
+ if (c->host)
+ g_hash_table_insert(al, "server", g_strdup(c->host));
+ if (c->port)
+ g_hash_table_insert(al, "port", g_strdup_printf("%hu", c->port));
+ if (c->path)
+ g_hash_table_insert(al, "object", g_strdup(c->path));
+
+ return al;
+}
+
+static int keyring_get(struct credential *c)
+{
+ SecretService *service = NULL;
+ GHashTable *attributes = NULL;
+ GError *error = NULL;
+ GList *items = NULL;
+
+ if (!c->protocol || !(c->host || c->path))
+ return EXIT_FAILURE;
+
+ service = secret_service_get_sync(0, NULL, &error);
+ if (error != NULL) {
+ g_critical("could not connect to Secret Service: %s", error->message);
+ g_error_free(error);
+ return EXIT_FAILURE;
+ }
+
+ attributes = make_attr_list(c);
+ items = secret_service_search_sync(service,
+ SECRET_SCHEMA_COMPAT_NETWORK,
+ attributes,
+ SECRET_SEARCH_LOAD_SECRETS,
+ NULL,
+ &error);
+ g_hash_table_unref(attributes);
+ if (error != NULL) {
+ g_critical("lookup failed: %s", error->message);
+ g_error_free(error);
+ return EXIT_FAILURE;
+ }
+
+ if (items != NULL) {
+ SecretItem *item;
+ SecretValue *secret;
+ const char *s;
+
+ item = items->data;
+ secret = secret_item_get_secret(item);
+ attributes = secret_item_get_attributes(item);
+
+ s = g_hash_table_lookup(attributes, "user");
+ if (s) {
+ g_free(c->username);
+ c->username = g_strdup(s);
+ }
+
+ s = secret_value_get_text(secret);
+ if (s) {
+ g_free(c->password);
+ c->password = g_strdup(s);
+ }
+
+ g_hash_table_unref(attributes);
+ secret_value_unref(secret);
+ g_list_free_full(items, g_object_unref);
+ }
+
+ return EXIT_SUCCESS;
+}
+
+
+static int keyring_store(struct credential *c)
+{
+ char *label = NULL;
+ GHashTable *attributes = NULL;
+ GError *error = NULL;
+
+ /*
+ * Sanity check that what we are storing is actually sensible.
+ * In particular, we can't make a URL without a protocol field.
+ * Without either a host or pathname (depending on the scheme),
+ * we have no primary key. And without a username and password,
+ * we are not actually storing a credential.
+ */
+ if (!c->protocol || !(c->host || c->path) ||
+ !c->username || !c->password)
+ return EXIT_FAILURE;
+
+ label = make_label(c);
+ attributes = make_attr_list(c);
+ secret_password_storev_sync(SECRET_SCHEMA_COMPAT_NETWORK,
+ attributes,
+ NULL,
+ label,
+ c->password,
+ NULL,
+ &error);
+ g_free(label);
+ g_hash_table_unref(attributes);
+
+ if (error != NULL) {
+ g_critical("store failed: %s", error->message);
+ g_error_free(error);
+ return EXIT_FAILURE;
+ }
+
+ return EXIT_SUCCESS;
+}
+
+static int keyring_erase(struct credential *c)
+{
+ GHashTable *attributes = NULL;
+ GError *error = NULL;
+
+ /*
+ * Sanity check that we actually have something to match
+ * against. The input we get is a restrictive pattern,
+ * so technically a blank credential means "erase everything".
+ * But it is too easy to accidentally send this, since it is equivalent
+ * to empty input. So explicitly disallow it, and require that the
+ * pattern have some actual content to match.
+ */
+ if (!c->protocol && !c->host && !c->path && !c->username)
+ return EXIT_FAILURE;
+
+ attributes = make_attr_list(c);
+ secret_password_clearv_sync(SECRET_SCHEMA_COMPAT_NETWORK,
+ attributes,
+ NULL,
+ &error);
+ g_hash_table_unref(attributes);
+
+ if (error != NULL) {
+ g_critical("erase failed: %s", error->message);
+ g_error_free(error);
+ return EXIT_FAILURE;
+ }
+
+ return EXIT_SUCCESS;
+}
+
+/*
+ * Table with helper operation callbacks, used by generic
+ * credential helper main function.
+ */
+static struct credential_operation const credential_helper_ops[] = {
+ { "get", keyring_get },
+ { "store", keyring_store },
+ { "erase", keyring_erase },
+ CREDENTIAL_OP_END
+};
+
+/* ------------------ credential functions ------------------ */
+
+static void credential_init(struct credential *c)
+{
+ memset(c, 0, sizeof(*c));
+}
+
+static void credential_clear(struct credential *c)
+{
+ g_free(c->protocol);
+ g_free(c->host);
+ g_free(c->path);
+ g_free(c->username);
+ g_free(c->password);
+
+ credential_init(c);
+}
+
+static int credential_read(struct credential *c)
+{
+ char *buf;
+ size_t line_len;
+ char *key;
+ char *value;
+
+ key = buf = g_malloc(1024);
+
+ while (fgets(buf, 1024, stdin)) {
+ line_len = strlen(buf);
+
+ if (line_len && buf[line_len-1] == '\n')
+ buf[--line_len] = '\0';
+
+ if (!line_len)
+ break;
+
+ value = strchr(buf, '=');
+ if (!value) {
+ g_warning("invalid credential line: %s", key);
+ g_free(buf);
+ return -1;
+ }
+ *value++ = '\0';
+
+ if (!strcmp(key, "protocol")) {
+ g_free(c->protocol);
+ c->protocol = g_strdup(value);
+ } else if (!strcmp(key, "host")) {
+ g_free(c->host);
+ c->host = g_strdup(value);
+ value = strrchr(c->host, ':');
+ if (value) {
+ *value++ = '\0';
+ c->port = atoi(value);
+ }
+ } else if (!strcmp(key, "path")) {
+ g_free(c->path);
+ c->path = g_strdup(value);
+ } else if (!strcmp(key, "username")) {
+ g_free(c->username);
+ c->username = g_strdup(value);
+ } else if (!strcmp(key, "password")) {
+ g_free(c->password);
+ c->password = g_strdup(value);
+ while (*value)
+ *value++ = '\0';
+ }
+ /*
+ * Ignore other lines; we don't know what they mean, but
+ * this future-proofs us when later versions of git do
+ * learn new lines, and the helpers are updated to match.
+ */
+ }
+
+ g_free(buf);
+
+ return 0;
+}
+
+static void credential_write_item(FILE *fp, const char *key, const char *value)
+{
+ if (!value)
+ return;
+ fprintf(fp, "%s=%s\n", key, value);
+}
+
+static void credential_write(const struct credential *c)
+{
+ /* only write username/password, if set */
+ credential_write_item(stdout, "username", c->username);
+ credential_write_item(stdout, "password", c->password);
+}
+
+static void usage(const char *name)
+{
+ struct credential_operation const *try_op = credential_helper_ops;
+ const char *basename = strrchr(name, '/');
+
+ basename = (basename) ? basename + 1 : name;
+ fprintf(stderr, "usage: %s <", basename);
+ while (try_op->name) {
+ fprintf(stderr, "%s", (try_op++)->name);
+ if (try_op->name)
+ fprintf(stderr, "%s", "|");
+ }
+ fprintf(stderr, "%s", ">\n");
+}
+
+int main(int argc, char *argv[])
+{
+ int ret = EXIT_SUCCESS;
+
+ struct credential_operation const *try_op = credential_helper_ops;
+ struct credential cred = CREDENTIAL_INIT;
+
+ if (!argv[1]) {
+ usage(argv[0]);
+ exit(EXIT_FAILURE);
+ }
+
+ g_set_application_name("Git Credential Helper");
+
+ /* lookup operation callback */
+ while (try_op->name && strcmp(argv[1], try_op->name))
+ try_op++;
+
+ /* unsupported operation given -- ignore silently */
+ if (!try_op->name || !try_op->op)
+ goto out;
+
+ ret = credential_read(&cred);
+ if (ret)
+ goto out;
+
+ /* perform credential operation */
+ ret = (*try_op->op)(&cred);
+
+ credential_write(&cred);
+
+out:
+ credential_clear(&cred);
+ return ret;
+}
--- /dev/null
+#!/usr/bin/perl
+#
+# Example implementation for the Git filter protocol version 2
+# See Documentation/gitattributes.txt, section "Filter Protocol"
+#
+# Please note that this pass-thru filter is only a minimal skeleton;
+# no proper error handling is implemented.
+#
+
+use strict;
+use warnings;
+
+my $MAX_PACKET_CONTENT_SIZE = 65516;
+
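+# The helpers below implement the pkt-line framing used by the filter
+# protocol: each packet starts with a 4-character hexadecimal length
+# (which counts the 4 length characters themselves) followed by the
+# payload, and a length of "0000" is a flush packet that terminates a
+# list. For example, "000eversion=2\n" carries the 10-byte payload
+# "version=2\n".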
+sub packet_bin_read {
+ my $buffer;
+ my $bytes_read = read STDIN, $buffer, 4;
+ if ( $bytes_read == 0 ) {
+
+ # EOF - Git stopped talking to us!
+ exit();
+ }
+ elsif ( $bytes_read != 4 ) {
+ die "invalid packet: '$buffer'";
+ }
+ my $pkt_size = hex($buffer);
+ if ( $pkt_size == 0 ) {
+ return ( 1, "" );
+ }
+ elsif ( $pkt_size > 4 ) {
+ my $content_size = $pkt_size - 4;
+ $bytes_read = read STDIN, $buffer, $content_size;
+ if ( $bytes_read != $content_size ) {
+ die "invalid packet ($content_size bytes expected; $bytes_read bytes read)";
+ }
+ return ( 0, $buffer );
+ }
+ else {
+ die "invalid packet size: $pkt_size";
+ }
+}
+
+sub packet_txt_read {
+ my ( $res, $buf ) = packet_bin_read();
+ unless ( $buf =~ s/\n$// ) {
+ die "A non-binary line MUST be terminated by an LF.";
+ }
+ return ( $res, $buf );
+}
+
+sub packet_bin_write {
+ my $buf = shift;
+ print STDOUT sprintf( "%04x", length($buf) + 4 );
+ print STDOUT $buf;
+ STDOUT->flush();
+}
+
+sub packet_txt_write {
+ packet_bin_write( $_[0] . "\n" );
+}
+
+sub packet_flush {
+ print STDOUT sprintf( "%04x", 0 );
+ STDOUT->flush();
+}
+
+( packet_txt_read() eq ( 0, "git-filter-client" ) ) || die "bad initialize";
+( packet_txt_read() eq ( 0, "version=2" ) ) || die "bad version";
+( packet_bin_read() eq ( 1, "" ) ) || die "bad version end";
+
+packet_txt_write("git-filter-server");
+packet_txt_write("version=2");
+packet_flush();
+
+( packet_txt_read() eq ( 0, "capability=clean" ) ) || die "bad capability";
+( packet_txt_read() eq ( 0, "capability=smudge" ) ) || die "bad capability";
+( packet_bin_read() eq ( 1, "" ) ) || die "bad capability end";
+
+packet_txt_write("capability=clean");
+packet_txt_write("capability=smudge");
+packet_flush();
+
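+# Each iteration below handles one blob: Git sends "command=<clean|smudge>"
+# and "pathname=<path>" followed by a flush packet, then the blob content
+# in packets terminated by another flush. The filter replies with
+# "status=success" and a flush, the (possibly rewritten) content in packets
+# followed by a flush, and finally an empty list (just a flush) to signal
+# that the status is unchanged.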
+while (1) {
+ my ($command) = packet_txt_read() =~ /^command=([^=]+)$/;
+ my ($pathname) = packet_txt_read() =~ /^pathname=([^=]+)$/;
+
+ packet_bin_read();
+
+ my $input = "";
+ {
+ binmode(STDIN);
+ my $buffer;
+ my $done = 0;
+ while ( !$done ) {
+ ( $done, $buffer ) = packet_bin_read();
+ $input .= $buffer;
+ }
+ }
+
+ my $output;
+ if ( $command eq "clean" ) {
+ ### Perform clean here ###
+ $output = $input;
+ }
+ elsif ( $command eq "smudge" ) {
+ ### Perform smudge here ###
+ $output = $input;
+ }
+ else {
+ die "bad command '$command'";
+ }
+
+ packet_txt_write("status=success");
+ packet_flush();
+ while ( length($output) > 0 ) {
+ my $packet = substr( $output, 0, $MAX_PACKET_CONTENT_SIZE );
+ packet_bin_write($packet);
+ if ( length($output) > $MAX_PACKET_CONTENT_SIZE ) {
+ $output = substr( $output, $MAX_PACKET_CONTENT_SIZE );
+ }
+ else {
+ $output = "";
+ }
+ }
+ packet_flush(); # flush content!
+ packet_flush(); # empty list, keep "status=success" unchanged!
+
+}
#include "run-command.h"
#include "quote.h"
#include "sigchain.h"
+#include "pkt-line.h"
/*
* convert.c - convert a file when checking it out and checking it in.
* CRLFs would not be restored by checkout
*/
if (checksafe == SAFE_CRLF_WARN)
- warning("CRLF will be replaced by LF in %s.\nThe file will have its original line endings in your working directory.", path);
+ warning(_("CRLF will be replaced by LF in %s.\n"
+ "The file will have its original line"
+ " endings in your working directory."), path);
else /* i.e. SAFE_CRLF_FAIL */
- die("CRLF would be replaced by LF in %s.", path);
+ die(_("CRLF would be replaced by LF in %s."), path);
} else if (old_stats->lonelf && !new_stats->lonelf ) {
/*
* CRLFs would be added by checkout
*/
if (checksafe == SAFE_CRLF_WARN)
- warning("LF will be replaced by CRLF in %s.\nThe file will have its original line endings in your working directory.", path);
+ warning(_("LF will be replaced by CRLF in %s.\n"
+ "The file will have its original line"
+ " endings in your working directory."), path);
else /* i.e. SAFE_CRLF_FAIL */
- die("LF would be replaced by CRLF in %s", path);
+ die(_("LF would be replaced by CRLF in %s"), path);
}
}
child_process.out = out;
if (start_command(&child_process))
- return error("cannot fork to run external filter %s", params->cmd);
+ return error("cannot fork to run external filter '%s'", params->cmd);
sigchain_push(SIGPIPE, SIG_IGN);
if (close(child_process.in))
write_err = 1;
if (write_err)
- error("cannot feed the input to external filter %s", params->cmd);
+ error("cannot feed the input to external filter '%s'", params->cmd);
sigchain_pop(SIGPIPE);
status = finish_command(&child_process);
if (status)
- error("external filter %s failed %d", params->cmd, status);
+ error("external filter '%s' failed %d", params->cmd, status);
strbuf_release(&cmd);
return (write_err || status);
}
-static int apply_filter(const char *path, const char *src, size_t len, int fd,
+static int apply_single_file_filter(const char *path, const char *src, size_t len, int fd,
struct strbuf *dst, const char *cmd)
{
/*
*
* (child --> cmd) --> us
*/
- int ret = 1;
+ int err = 0;
struct strbuf nbuf = STRBUF_INIT;
struct async async;
struct filter_params params;
- if (!cmd || !*cmd)
- return 0;
-
- if (!dst)
- return 1;
-
memset(&async, 0, sizeof(async));
async.proc = filter_buffer_or_fd;
 async.data = &params;
return 0; /* error was already reported */
if (strbuf_read(&nbuf, async.out, len) < 0) {
- error("read from external filter %s failed", cmd);
- ret = 0;
+ err = error("read from external filter '%s' failed", cmd);
}
if (close(async.out)) {
- error("read from external filter %s failed", cmd);
- ret = 0;
+ err = error("read from external filter '%s' failed", cmd);
}
if (finish_async(&async)) {
- error("external filter %s failed", cmd);
- ret = 0;
+ err = error("external filter '%s' failed", cmd);
}
- if (ret) {
+ if (!err) {
strbuf_swap(dst, &nbuf);
}
strbuf_release(&nbuf);
- return ret;
+ return !err;
+}
+
+#define CAP_CLEAN (1u<<0)
+#define CAP_SMUDGE (1u<<1)
+
+struct cmd2process {
+ struct hashmap_entry ent; /* must be the first member! */
+ unsigned int supported_capabilities;
+ const char *cmd;
+ struct child_process process;
+};
+
+static int cmd_process_map_initialized;
+static struct hashmap cmd_process_map;
+
+static int cmd2process_cmp(const struct cmd2process *e1,
+ const struct cmd2process *e2,
+ const void *unused)
+{
+ return strcmp(e1->cmd, e2->cmd);
+}
+
+static struct cmd2process *find_multi_file_filter_entry(struct hashmap *hashmap, const char *cmd)
+{
+ struct cmd2process key;
+ hashmap_entry_init(&key, strhash(cmd));
+ key.cmd = cmd;
+ return hashmap_get(hashmap, &key, NULL);
+}
+
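+/*
+ * Write a NULL-terminated list of strings as LF-terminated pkt-lines,
+ * followed by a flush packet. Returns 0 on success and a non-zero
+ * value if any write fails; used for the handshake messages below.
+ */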
+static int packet_write_list(int fd, const char *line, ...)
+{
+ va_list args;
+ int err;
+ va_start(args, line);
+ for (;;) {
+ if (!line)
+ break;
+ if (strlen(line) > LARGE_PACKET_DATA_MAX)
+ return -1;
+ err = packet_write_fmt_gently(fd, "%s\n", line);
+ if (err)
+ return err;
+ line = va_arg(args, const char*);
+ }
+ va_end(args);
+ return packet_flush_gently(fd);
+}
+
+static void read_multi_file_filter_status(int fd, struct strbuf *status)
+{
+ struct strbuf **pair;
+ char *line;
+ for (;;) {
+ line = packet_read_line(fd, NULL);
+ if (!line)
+ break;
+ pair = strbuf_split_str(line, '=', 2);
+ if (pair[0] && pair[0]->len && pair[1]) {
+ /* the last "status=<foo>" line wins */
+ if (!strcmp(pair[0]->buf, "status=")) {
+ strbuf_reset(status);
+ strbuf_addbuf(status, pair[1]);
+ }
+ }
+ strbuf_list_free(pair);
+ }
+}
+
+static void kill_multi_file_filter(struct hashmap *hashmap, struct cmd2process *entry)
+{
+ if (!entry)
+ return;
+
+ entry->process.clean_on_exit = 0;
+ kill(entry->process.pid, SIGTERM);
+ finish_command(&entry->process);
+
+ hashmap_remove(hashmap, entry, NULL);
+ free(entry);
+}
+
+static void stop_multi_file_filter(struct child_process *process)
+{
+ sigchain_push(SIGPIPE, SIG_IGN);
+ /* Closing the pipe signals the filter to initiate a shutdown. */
+ close(process->in);
+ close(process->out);
+ sigchain_pop(SIGPIPE);
+ /* finish_command() will wait until the shutdown is complete. */
+ finish_command(process);
+}
+
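+/*
+ * Start a long-running filter process and perform the protocol
+ * version 2 handshake: announce "git-filter-client" and "version=2",
+ * expect "git-filter-server" and "version=2" back, then advertise the
+ * clean and smudge capabilities and record whichever of them the
+ * filter echoes back in entry->supported_capabilities.
+ */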
+static struct cmd2process *start_multi_file_filter(struct hashmap *hashmap, const char *cmd)
+{
+ int err;
+ struct cmd2process *entry;
+ struct child_process *process;
+ const char *argv[] = { cmd, NULL };
+ struct string_list cap_list = STRING_LIST_INIT_NODUP;
+ char *cap_buf;
+ const char *cap_name;
+
+ entry = xmalloc(sizeof(*entry));
+ entry->cmd = cmd;
+ entry->supported_capabilities = 0;
+ process = &entry->process;
+
+ child_process_init(process);
+ process->argv = argv;
+ process->use_shell = 1;
+ process->in = -1;
+ process->out = -1;
+ process->clean_on_exit = 1;
+ process->clean_on_exit_handler = stop_multi_file_filter;
+
+ if (start_command(process)) {
+ error("cannot fork to run external filter '%s'", cmd);
+ return NULL;
+ }
+
+ hashmap_entry_init(entry, strhash(cmd));
+
+ sigchain_push(SIGPIPE, SIG_IGN);
+
+ err = packet_write_list(process->in, "git-filter-client", "version=2", NULL);
+ if (err)
+ goto done;
+
+ err = strcmp(packet_read_line(process->out, NULL), "git-filter-server");
+ if (err) {
+ error("external filter '%s' does not support filter protocol version 2", cmd);
+ goto done;
+ }
+ err = strcmp(packet_read_line(process->out, NULL), "version=2");
+ if (err)
+ goto done;
+ err = packet_read_line(process->out, NULL) != NULL;
+ if (err)
+ goto done;
+
+ err = packet_write_list(process->in, "capability=clean", "capability=smudge", NULL);
+
+ for (;;) {
+ cap_buf = packet_read_line(process->out, NULL);
+ if (!cap_buf)
+ break;
+ string_list_split_in_place(&cap_list, cap_buf, '=', 1);
+
+ if (cap_list.nr != 2 || strcmp(cap_list.items[0].string, "capability"))
+ continue;
+
+ cap_name = cap_list.items[1].string;
+ if (!strcmp(cap_name, "clean")) {
+ entry->supported_capabilities |= CAP_CLEAN;
+ } else if (!strcmp(cap_name, "smudge")) {
+ entry->supported_capabilities |= CAP_SMUDGE;
+ } else {
+ warning(
+ "external filter '%s' requested unsupported filter capability '%s'",
+ cmd, cap_name
+ );
+ }
+
+ string_list_clear(&cap_list, 0);
+ }
+
+done:
+ sigchain_pop(SIGPIPE);
+
+ if (err || errno == EPIPE) {
+ error("initialization for external filter '%s' failed", cmd);
+ kill_multi_file_filter(hashmap, entry);
+ return NULL;
+ }
+
+ hashmap_add(hashmap, entry);
+ return entry;
+}
+
+static int apply_multi_file_filter(const char *path, const char *src, size_t len,
+ int fd, struct strbuf *dst, const char *cmd,
+ const unsigned int wanted_capability)
+{
+ int err;
+ struct cmd2process *entry;
+ struct child_process *process;
+ struct strbuf nbuf = STRBUF_INIT;
+ struct strbuf filter_status = STRBUF_INIT;
+ const char *filter_type;
+
+ if (!cmd_process_map_initialized) {
+ cmd_process_map_initialized = 1;
+ hashmap_init(&cmd_process_map, (hashmap_cmp_fn) cmd2process_cmp, 0);
+ entry = NULL;
+ } else {
+ entry = find_multi_file_filter_entry(&cmd_process_map, cmd);
+ }
+
+ fflush(NULL);
+
+ if (!entry) {
+ entry = start_multi_file_filter(&cmd_process_map, cmd);
+ if (!entry)
+ return 0;
+ }
+ process = &entry->process;
+
+ if (!(wanted_capability & entry->supported_capabilities))
+ return 0;
+
+ if (CAP_CLEAN & wanted_capability)
+ filter_type = "clean";
+ else if (CAP_SMUDGE & wanted_capability)
+ filter_type = "smudge";
+ else
+ die("unexpected filter type");
+
+ sigchain_push(SIGPIPE, SIG_IGN);
+
+ assert(strlen(filter_type) < LARGE_PACKET_DATA_MAX - strlen("command=\n"));
+ err = packet_write_fmt_gently(process->in, "command=%s\n", filter_type);
+ if (err)
+ goto done;
+
+ err = strlen(path) > LARGE_PACKET_DATA_MAX - strlen("pathname=\n");
+ if (err) {
+ error("path name too long for external filter");
+ goto done;
+ }
+
+ err = packet_write_fmt_gently(process->in, "pathname=%s\n", path);
+ if (err)
+ goto done;
+
+ err = packet_flush_gently(process->in);
+ if (err)
+ goto done;
+
+ if (fd >= 0)
+ err = write_packetized_from_fd(fd, process->in);
+ else
+ err = write_packetized_from_buf(src, len, process->in);
+ if (err)
+ goto done;
+
+ read_multi_file_filter_status(process->out, &filter_status);
+ err = strcmp(filter_status.buf, "success");
+ if (err)
+ goto done;
+
+ err = read_packetized_to_strbuf(process->out, &nbuf) < 0;
+ if (err)
+ goto done;
+
+ read_multi_file_filter_status(process->out, &filter_status);
+ err = strcmp(filter_status.buf, "success");
+
+done:
+ sigchain_pop(SIGPIPE);
+
+ if (err || errno == EPIPE) {
+ if (!strcmp(filter_status.buf, "error")) {
+ /* The filter signaled a problem with the file. */
+ } else if (!strcmp(filter_status.buf, "abort")) {
+ /*
+ * The filter signaled a permanent problem. Don't try to filter
+ * files with the same command for the lifetime of the current
+ * Git process.
+ */
+ entry->supported_capabilities &= ~wanted_capability;
+ } else {
+ /*
+ * Something went wrong with the protocol filter.
+ * Force shutdown and restart if another blob requires filtering.
+ */
+ error("external filter '%s' failed", cmd);
+ kill_multi_file_filter(&cmd_process_map, entry);
+ }
+ } else {
+ strbuf_swap(dst, &nbuf);
+ }
+ strbuf_release(&nbuf);
+ return !err;
}
static struct convert_driver {
struct convert_driver *next;
const char *smudge;
const char *clean;
+ const char *process;
int required;
} *user_convert, **user_convert_tail;
+static int apply_filter(const char *path, const char *src, size_t len,
+ int fd, struct strbuf *dst, struct convert_driver *drv,
+ const unsigned int wanted_capability)
+{
+ const char *cmd = NULL;
+
+ if (!drv)
+ return 0;
+
+ if (!dst)
+ return 1;
+
+ if ((CAP_CLEAN & wanted_capability) && !drv->process && drv->clean)
+ cmd = drv->clean;
+ else if ((CAP_SMUDGE & wanted_capability) && !drv->process && drv->smudge)
+ cmd = drv->smudge;
+
+ if (cmd && *cmd)
+ return apply_single_file_filter(path, src, len, fd, dst, cmd);
+ else if (drv->process && *drv->process)
+ return apply_multi_file_filter(path, src, len, fd, dst, drv->process, wanted_capability);
+
+ return 0;
+}
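+/*
+ * apply_filter() above dispatches between the classic one-shot
+ * clean/smudge commands and the long-running "process" filter; when
+ * "process" is configured it takes precedence over "clean" and
+ * "smudge". A purely illustrative configuration (driver and command
+ * names are made up) could look like:
+ *
+ *   [filter "mydriver"]
+ *       process = my-filter-daemon
+ *       required = true
+ */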
+
static int read_convert_config(const char *var, const char *value, void *cb)
{
const char *key, *name;
if (!strcmp("clean", key))
return git_config_string(&drv->clean, var, value);
+ if (!strcmp("process", key))
+ return git_config_string(&drv->process, var, value);
+
if (!strcmp("required", key)) {
drv->required = git_config_bool(var, value);
return 0;
if (!ca.drv->required)
return 0;
- return apply_filter(path, NULL, 0, -1, NULL, ca.drv->clean);
+ return apply_filter(path, NULL, 0, -1, NULL, ca.drv, CAP_CLEAN);
}
const char *get_convert_attr_ascii(const char *path)
struct strbuf *dst, enum safe_crlf checksafe)
{
int ret = 0;
- const char *filter = NULL;
- int required = 0;
struct conv_attrs ca;
convert_attrs(&ca, path);
- if (ca.drv) {
- filter = ca.drv->clean;
- required = ca.drv->required;
- }
- ret |= apply_filter(path, src, len, -1, dst, filter);
- if (!ret && required)
+ ret |= apply_filter(path, src, len, -1, dst, ca.drv, CAP_CLEAN);
+ if (!ret && ca.drv && ca.drv->required)
die("%s: clean filter '%s' failed", path, ca.drv->name);
if (ret && dst) {
convert_attrs(&ca, path);
assert(ca.drv);
- assert(ca.drv->clean);
+ assert(ca.drv->clean || ca.drv->process);
- if (!apply_filter(path, NULL, 0, fd, dst, ca.drv->clean))
+ if (!apply_filter(path, NULL, 0, fd, dst, ca.drv, CAP_CLEAN))
die("%s: clean filter '%s' failed", path, ca.drv->name);
crlf_to_git(path, dst->buf, dst->len, dst, ca.crlf_action, checksafe);
int normalizing)
{
int ret = 0, ret_filter = 0;
- const char *filter = NULL;
- int required = 0;
struct conv_attrs ca;
convert_attrs(&ca, path);
- if (ca.drv) {
- filter = ca.drv->smudge;
- required = ca.drv->required;
- }
ret |= ident_to_worktree(path, src, len, dst, ca.ident);
if (ret) {
}
/*
* CRLF conversion can be skipped if normalizing, unless there
- * is a smudge filter. The filter might expect CRLFs.
+ * is a smudge or process filter (even if the process filter doesn't
+ * support smudge). The filters might expect CRLFs.
*/
- if (filter || !normalizing) {
+ if ((ca.drv && (ca.drv->smudge || ca.drv->process)) || !normalizing) {
ret |= crlf_to_worktree(path, src, len, dst, ca.crlf_action);
if (ret) {
src = dst->buf;
}
}
- ret_filter = apply_filter(path, src, len, -1, dst, filter);
- if (!ret_filter && required)
+ ret_filter = apply_filter(path, src, len, -1, dst, ca.drv, CAP_SMUDGE);
+ if (!ret_filter && ca.drv && ca.drv->required)
die("%s: smudge filter %s failed", path, ca.drv->name);
return ret | ret_filter;
struct stream_filter *filter = NULL;
convert_attrs(&ca, path);
- if (ca.drv && (ca.drv->smudge || ca.drv->clean))
+ if (ca.drv && (ca.drv->process || ca.drv->smudge || ca.drv->clean))
return NULL;
if (ca.crlf_action == CRLF_AUTO || ca.crlf_action == CRLF_AUTO_CRLF)
close(fd);
}
-static const char permissions_advice[] =
+static const char permissions_advice[] = N_(
"The permissions on your socket directory are too loose; other\n"
"users may be able to read your cached credentials. Consider running:\n"
"\n"
-" chmod 0700 %s";
+" chmod 0700 %s");
static void init_socket_directory(const char *path)
{
struct stat st;
if (!stat(dir, &st)) {
if (st.st_mode & 077)
- die(permissions_advice, dir);
+ die(_(permissions_advice), dir);
} else {
/*
* We must be sure to create the directory with the correct mode,
{
static char rpath[PATH_MAX];
static char interp_path[PATH_MAX];
+ size_t rlen;
const char *path;
const char *dir;
namlen = slash - dir;
restlen -= namlen;
loginfo("userpath <%s>, request <%s>, namlen %d, restlen %d, slash <%s>", user_path, dir, namlen, restlen, slash);
- snprintf(rpath, PATH_MAX, "%.*s/%s%.*s",
- namlen, dir, user_path, restlen, slash);
+ rlen = snprintf(rpath, sizeof(rpath), "%.*s/%s%.*s",
+ namlen, dir, user_path, restlen, slash);
+ if (rlen >= sizeof(rpath)) {
+ logerror("user-path too large: %s", rpath);
+ return NULL;
+ }
dir = rpath;
}
}
strbuf_expand(&expanded_path, interpolated_path,
expand_path, &context);
- strlcpy(interp_path, expanded_path.buf, PATH_MAX);
+
+ rlen = strlcpy(interp_path, expanded_path.buf,
+ sizeof(interp_path));
+ if (rlen >= sizeof(interp_path)) {
+ logerror("interpolated path too large: %s",
+ interp_path);
+ return NULL;
+ }
+
strbuf_release(&expanded_path);
loginfo("Interpolated dir '%s'", interp_path);
logerror("'%s': Non-absolute path denied (base-path active)", dir);
return NULL;
}
- snprintf(rpath, PATH_MAX, "%s%s", base_path, dir);
+ rlen = snprintf(rpath, sizeof(rpath), "%s%s", base_path, dir);
+ if (rlen >= sizeof(rpath)) {
+ logerror("base-path too large: %s", rpath);
+ return NULL;
+ }
dir = rpath;
}
{
if (!informative_errors)
msg = "access denied or repository not exported";
- packet_write(1, "ERR %s: %s", msg, dir);
+ packet_write_fmt(1, "ERR %s: %s", msg, dir);
return -1;
}
if (2 <= stage) {
int mode = nce->ce_mode;
num_compare_stages++;
- hashcpy(dpath->parent[stage-2].oid.hash, nce->sha1);
+ oidcpy(&dpath->parent[stage - 2].oid,
+ &nce->oid);
dpath->parent[stage-2].mode = ce_mode_from_stat(nce, mode);
dpath->parent[stage-2].status =
DIFF_STATUS_MODIFIED;
continue;
}
diff_addremove(&revs->diffopt, '-', ce->ce_mode,
- ce->sha1, !is_null_sha1(ce->sha1),
+ ce->oid.hash,
+ !is_null_oid(&ce->oid),
+ ce->name, 0);
+ continue;
+ } else if (revs->diffopt.ita_invisible_in_index &&
+ ce_intent_to_add(ce)) {
+ diff_addremove(&revs->diffopt, '+', ce->ce_mode,
+ EMPTY_BLOB_SHA1_BIN, 0,
ce->name, 0);
continue;
}
continue;
}
oldmode = ce->ce_mode;
- old_sha1 = ce->sha1;
- new_sha1 = changed ? null_sha1 : ce->sha1;
+ old_sha1 = ce->oid.hash;
+ new_sha1 = changed ? null_sha1 : ce->oid.hash;
diff_change(&revs->diffopt, oldmode, newmode,
old_sha1, new_sha1,
!is_null_sha1(old_sha1),
int cached, int match_missing,
unsigned *dirty_submodule, struct diff_options *diffopt)
{
- const unsigned char *sha1 = ce->sha1;
+ const unsigned char *sha1 = ce->oid.hash;
unsigned int mode = ce->ce_mode;
if (!cached && !ce_uptodate(ce)) {
&dirty_submodule, &revs->diffopt) < 0) {
if (report_missing)
diff_index_show_file(revs, "-", old,
- old->sha1, 1, old->ce_mode, 0);
+ old->oid.hash, 1, old->ce_mode,
+ 0);
return -1;
}
if (revs->combine_merges && !cached &&
- (hashcmp(sha1, old->sha1) || hashcmp(old->sha1, new->sha1))) {
+ (hashcmp(sha1, old->oid.hash) || oidcmp(&old->oid, &new->oid))) {
struct combine_diff_path *p;
int pathlen = ce_namelen(new);
memset(p->parent, 0, 2 * sizeof(struct combine_diff_parent));
p->parent[0].status = DIFF_STATUS_MODIFIED;
p->parent[0].mode = new->ce_mode;
- hashcpy(p->parent[0].oid.hash, new->sha1);
+ oidcpy(&p->parent[0].oid, &new->oid);
p->parent[1].status = DIFF_STATUS_MODIFIED;
p->parent[1].mode = old->ce_mode;
- hashcpy(p->parent[1].oid.hash, old->sha1);
+ oidcpy(&p->parent[1].oid, &old->oid);
show_combined_diff(p, 2, revs->dense_combined_merges, revs);
free(p);
return 0;
}
oldmode = old->ce_mode;
- if (mode == oldmode && !hashcmp(sha1, old->sha1) && !dirty_submodule &&
+ if (mode == oldmode && !hashcmp(sha1, old->oid.hash) && !dirty_submodule &&
!DIFF_OPT_TST(&revs->diffopt, FIND_COPIES_HARDER))
return 0;
diff_change(&revs->diffopt, oldmode, mode,
- old->sha1, sha1, 1, !is_null_sha1(sha1),
+ old->oid.hash, sha1, 1, !is_null_sha1(sha1),
old->name, 0, dirty_submodule);
return 0;
}
struct rev_info *revs = o->unpack_data;
int match_missing, cached;
+ /* i-t-a entries do not actually exist in the index */
+ if (revs->diffopt.ita_invisible_in_index &&
+ idx && ce_intent_to_add(idx)) {
+ idx = NULL;
+ if (!tree)
+ return; /* nothing to diff.. */
+ }
+
/* if the entry is not checked out, don't examine work tree */
cached = o->index_only ||
(idx && ((idx->ce_flags & CE_VALID) || ce_skip_worktree(idx)));
struct diff_filepair *pair;
pair = diff_unmerge(&revs->diffopt, idx->name);
if (tree)
- fill_filespec(pair->one, tree->sha1, 1, tree->ce_mode);
+ fill_filespec(pair->one, tree->oid.hash, 1,
+ tree->ce_mode);
return;
}
* Something removed from the tree?
*/
if (!idx) {
- diff_index_show_file(revs, "-", tree, tree->sha1, 1, tree->ce_mode, 0);
+ diff_index_show_file(revs, "-", tree, tree->oid.hash, 1,
+ tree->ce_mode, 0);
return;
}
return 0;
}
-int index_differs_from(const char *def, int diff_flags)
+int index_differs_from(const char *def, int diff_flags,
+ int ita_invisible_in_index)
{
struct rev_info rev;
struct setup_revision_opt opt;
DIFF_OPT_SET(&rev.diffopt, QUICK);
DIFF_OPT_SET(&rev.diffopt, EXIT_WITH_STATUS);
rev.diffopt.flags |= diff_flags;
+ rev.diffopt.ita_invisible_in_index = ita_invisible_in_index;
run_diff_index(&rev, 1);
if (rev.pending.alloc)
free(rev.pending.objects);
DIFF_OPT_SET(&revs->diffopt, NO_INDEX);
+ DIFF_OPT_SET(&revs->diffopt, RELATIVE_NAME);
+ revs->diffopt.prefix = prefix;
+
revs->max_count = -2;
diff_setup_done(&revs->diffopt);
#include "ll-merge.h"
#include "string-list.h"
#include "argv-array.h"
+#include "graph.h"
#ifdef NO_FAST_WORKING_DIRECTORY
#define FAST_WORKING_DIRECTORY 0
#endif
static int diff_detect_rename_default;
+static int diff_indent_heuristic; /* experimental */
static int diff_compaction_heuristic; /* experimental */
static int diff_rename_limit_default = 400;
static int diff_suppress_blank_empty;
static int diff_dirstat_permille_default = 30;
static struct diff_options default_diff_options;
static long diff_algorithm;
+static unsigned ws_error_highlight_default = WSEH_NEW;
static char diff_colors[][COLOR_MAXLEN] = {
GIT_COLOR_RESET,
GIT_COLOR_NORMAL, /* FUNCINFO */
};
+static NORETURN void die_want_option(const char *option_name)
+{
+ die(_("option '%s' requires a value"), option_name);
+}
+
static int parse_diff_color_slot(const char *var)
{
if (!strcasecmp(var, "context") || !strcasecmp(var, "plain"))
static int parse_submodule_params(struct diff_options *options, const char *value)
{
if (!strcmp(value, "log"))
- DIFF_OPT_SET(options, SUBMODULE_LOG);
+ options->submodule_format = DIFF_SUBMODULE_LOG;
else if (!strcmp(value, "short"))
- DIFF_OPT_CLR(options, SUBMODULE_LOG);
+ options->submodule_format = DIFF_SUBMODULE_SHORT;
+ else if (!strcmp(value, "diff"))
+ options->submodule_format = DIFF_SUBMODULE_INLINE_DIFF;
else
return -1;
return 0;
return -1;
}
+static int parse_one_token(const char **arg, const char *token)
+{
+ const char *rest;
+ if (skip_prefix(*arg, token, &rest) && (!*rest || *rest == ',')) {
+ *arg = rest;
+ return 1;
+ }
+ return 0;
+}
+
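+/*
+ * Parse a comma-separated ws-error-highlight value ("new", "old",
+ * "context", "all", "none", "default") into a WSEH_* bitmask.
+ * Returns the bitmask on success; on an unrecognized token it returns
+ * -1 minus the offset of the bad token so that callers can point at
+ * the offending part of the argument.
+ */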
+static int parse_ws_error_highlight(const char *arg)
+{
+ const char *orig_arg = arg;
+ unsigned val = 0;
+
+ while (*arg) {
+ if (parse_one_token(&arg, "none"))
+ val = 0;
+ else if (parse_one_token(&arg, "default"))
+ val = WSEH_NEW;
+ else if (parse_one_token(&arg, "all"))
+ val = WSEH_NEW | WSEH_OLD | WSEH_CONTEXT;
+ else if (parse_one_token(&arg, "new"))
+ val |= WSEH_NEW;
+ else if (parse_one_token(&arg, "old"))
+ val |= WSEH_OLD;
+ else if (parse_one_token(&arg, "context"))
+ val |= WSEH_CONTEXT;
+ else {
+ return -1 - (int)(arg - orig_arg);
+ }
+ if (*arg)
+ arg++;
+ }
+ return val;
+}
+
/*
* These are to give UI layer defaults.
* The core-level commands such as git-diff-files should
diff_detect_rename_default = 1;
}
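+/*
+ * The indent heuristic and the older compaction heuristic are both
+ * experimental and mutually exclusive: configuring one of them clears
+ * the other, so whichever is set last in the configuration wins.
+ */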
+int git_diff_heuristic_config(const char *var, const char *value, void *cb)
+{
+ if (!strcmp(var, "diff.indentheuristic")) {
+ diff_indent_heuristic = git_config_bool(var, value);
+ if (diff_indent_heuristic)
+ diff_compaction_heuristic = 0;
+ }
+ if (!strcmp(var, "diff.compactionheuristic")) {
+ diff_compaction_heuristic = git_config_bool(var, value);
+ if (diff_compaction_heuristic)
+ diff_indent_heuristic = 0;
+ }
+ return 0;
+}
+
int git_diff_ui_config(const char *var, const char *value, void *cb)
{
if (!strcmp(var, "diff.color") || !strcmp(var, "color.diff")) {
diff_detect_rename_default = git_config_rename(var, value);
return 0;
}
- if (!strcmp(var, "diff.compactionheuristic")) {
- diff_compaction_heuristic = git_config_bool(var, value);
- return 0;
- }
if (!strcmp(var, "diff.autorefreshindex")) {
diff_auto_refresh_index = git_config_bool(var, value);
return 0;
return 0;
}
+ if (git_diff_heuristic_config(var, value, cb) < 0)
+ return -1;
+
+ if (!strcmp(var, "diff.wserrorhighlight")) {
+ int val = parse_ws_error_highlight(value);
+ if (val < 0)
+ return -1;
+ ws_error_highlight_default = val;
+ return 0;
+ }
+
if (git_color_config(var, value, cb) < 0)
return -1;
*/
if (options->stat_width == -1)
- width = term_columns() - options->output_prefix_length;
+ width = term_columns() - strlen(line_prefix);
else
width = options->stat_width ? options->stat_width : 80;
number_width = decimal_width(max_change) > number_width ?
return;
/* Show all directories with more than x% of the changes */
- qsort(dir.files, dir.nr, sizeof(dir.files[0]), dirstat_compare);
+ QSORT(dir.files, dir.nr, dirstat_compare);
gather_dirstat(options, &dir, changed, "", 0);
}
return;
/* Show all directories with more than x% of the changes */
- qsort(dir.files, dir.nr, sizeof(dir.files[0]), dirstat_compare);
+ QSORT(dir.files, dir.nr, dirstat_compare);
gather_dirstat(options, &dir, changed, "", 0);
}
struct strbuf header = STRBUF_INIT;
const char *line_prefix = diff_line_prefix(o);
- if (DIFF_OPT_TST(o, SUBMODULE_LOG) &&
- (!one->mode || S_ISGITLINK(one->mode)) &&
- (!two->mode || S_ISGITLINK(two->mode))) {
+ diff_set_mnemonic_prefix(o, "a/", "b/");
+ if (DIFF_OPT_TST(o, REVERSE_DIFF)) {
+ a_prefix = o->b_prefix;
+ b_prefix = o->a_prefix;
+ } else {
+ a_prefix = o->a_prefix;
+ b_prefix = o->b_prefix;
+ }
+
+ if (o->submodule_format == DIFF_SUBMODULE_LOG &&
+ (!one->mode || S_ISGITLINK(one->mode)) &&
+ (!two->mode || S_ISGITLINK(two->mode))) {
const char *del = diff_get_color_opt(o, DIFF_FILE_OLD);
const char *add = diff_get_color_opt(o, DIFF_FILE_NEW);
show_submodule_summary(o->file, one->path ? one->path : two->path,
line_prefix,
- one->oid.hash, two->oid.hash,
+ &one->oid, &two->oid,
two->dirty_submodule,
meta, del, add, reset);
return;
+ } else if (o->submodule_format == DIFF_SUBMODULE_INLINE_DIFF &&
+ (!one->mode || S_ISGITLINK(one->mode)) &&
+ (!two->mode || S_ISGITLINK(two->mode))) {
+ const char *del = diff_get_color_opt(o, DIFF_FILE_OLD);
+ const char *add = diff_get_color_opt(o, DIFF_FILE_NEW);
+ show_submodule_inline_diff(o->file, one->path ? one->path : two->path,
+ line_prefix,
+ &one->oid, &two->oid,
+ two->dirty_submodule,
+ meta, del, add, reset, o);
+ return;
}
if (DIFF_OPT_TST(o, ALLOW_TEXTCONV)) {
textconv_two = get_textconv(two);
}
- diff_set_mnemonic_prefix(o, "a/", "b/");
- if (DIFF_OPT_TST(o, REVERSE_DIFF)) {
- a_prefix = o->b_prefix;
- b_prefix = o->a_prefix;
- } else {
- a_prefix = o->a_prefix;
- b_prefix = o->b_prefix;
- }
-
/* Never use a non-valid filename anywhere if at all possible */
name_a = DIFF_FILE_VALID(one) ? name_a : name_b;
name_b = DIFF_FILE_VALID(two) ? name_b : name_a;
* This is not the sha1 we are looking for, or
* unreusable because it is not a regular file.
*/
- if (hashcmp(sha1, ce->sha1) || !S_ISREG(ce->ce_mode))
+ if (hashcmp(sha1, ce->oid.hash) || !S_ISREG(ce->ce_mode))
return 0;
/*
return p->score * 100 / MAX_SCORE;
}
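+/*
+ * Abbreviate an object name for diff output. Inside a repository we
+ * defer to find_unique_abbrev(); without one (e.g. "git diff
+ * --no-index" outside a repository) we fall back to plain truncation,
+ * defaulting to FALLBACK_DEFAULT_ABBREV when no length was requested.
+ */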
+static const char *diff_abbrev_oid(const struct object_id *oid, int abbrev)
+{
+ if (startup_info->have_repository)
+ return find_unique_abbrev(oid->hash, abbrev);
+ else {
+ char *hex = oid_to_hex(oid);
+ if (abbrev < 0)
+ abbrev = FALLBACK_DEFAULT_ABBREV;
+ if (abbrev > GIT_SHA1_HEXSZ)
+ die("BUG: oid abbreviation out of range: %d", abbrev);
+ hex[abbrev] = '\0';
+ return hex;
+ }
+}
+
static void fill_metainfo(struct strbuf *msg,
const char *name,
const char *other,
(!fill_mmfile(&mf, two) && diff_filespec_is_binary(two)))
abbrev = 40;
}
- strbuf_addf(msg, "%s%sindex %s..", line_prefix, set,
- find_unique_abbrev(one->oid.hash, abbrev));
- strbuf_add_unique_abbrev(msg, two->oid.hash, abbrev);
+ strbuf_addf(msg, "%s%sindex %s..%s", line_prefix, set,
+ diff_abbrev_oid(&one->oid, abbrev),
+ diff_abbrev_oid(&two->oid, abbrev));
if (one->mode == two->mode)
strbuf_addf(msg, " %06o", one->mode);
strbuf_addf(msg, "%s\n", reset);
options->rename_limit = -1;
options->dirstat_permille = diff_dirstat_permille_default;
options->context = diff_context_default;
- options->ws_error_highlight = WSEH_NEW;
+ options->ws_error_highlight = ws_error_highlight_default;
DIFF_OPT_SET(options, RENAME_EMPTY);
/* pathchange left =NULL by default */
options->use_color = diff_use_color_default;
options->detect_rename = diff_detect_rename_default;
options->xdl_opts |= diff_algorithm;
- if (diff_compaction_heuristic)
+ if (diff_indent_heuristic)
+ DIFF_XDL_SET(options, INDENT_HEURISTIC);
+ else if (diff_compaction_heuristic)
DIFF_XDL_SET(options, COMPACTION_HEURISTIC);
options->orderfile = diff_order_file_cfg;
if (options->output_format & DIFF_FORMAT_NO_OUTPUT)
count++;
if (count > 1)
- die("--name-only, --name-status, --check and -s are mutually exclusive");
+ die(_("--name-only, --name-status, --check and -s are mutually exclusive"));
/*
* Most of the time we can say "there are changes"
*/
read_cache();
}
- if (options->abbrev <= 0 || 40 < options->abbrev)
+ if (40 < options->abbrev)
options->abbrev = 40; /* full */
/*
if (*arg == '=')
width = strtoul(arg + 1, &end, 10);
else if (!*arg && !av[1])
- die("Option '--stat-width' requires a value");
+ die_want_option("--stat-width");
else if (!*arg) {
width = strtoul(av[1], &end, 10);
argcount = 2;
if (*arg == '=')
name_width = strtoul(arg + 1, &end, 10);
else if (!*arg && !av[1])
- die("Option '--stat-name-width' requires a value");
+ die_want_option("--stat-name-width");
else if (!*arg) {
name_width = strtoul(av[1], &end, 10);
argcount = 2;
if (*arg == '=')
graph_width = strtoul(arg + 1, &end, 10);
else if (!*arg && !av[1])
- die("Option '--stat-graph-width' requires a value");
+ die_want_option("--stat-graph-width");
else if (!*arg) {
graph_width = strtoul(av[1], &end, 10);
argcount = 2;
if (*arg == '=')
count = strtoul(arg + 1, &end, 10);
else if (!*arg && !av[1])
- die("Option '--stat-count' requires a value");
+ die_want_option("--stat-count");
else if (!*arg) {
count = strtoul(av[1], &end, 10);
argcount = 2;
*fmt |= DIFF_FORMAT_PATCH;
}
-static int parse_one_token(const char **arg, const char *token)
+static int parse_ws_error_highlight_opt(struct diff_options *opt, const char *arg)
{
- const char *rest;
- if (skip_prefix(*arg, token, &rest) && (!*rest || *rest == ',')) {
- *arg = rest;
- return 1;
- }
- return 0;
-}
+ int val = parse_ws_error_highlight(arg);
-static int parse_ws_error_highlight(struct diff_options *opt, const char *arg)
-{
- const char *orig_arg = arg;
- unsigned val = 0;
- while (*arg) {
- if (parse_one_token(&arg, "none"))
- val = 0;
- else if (parse_one_token(&arg, "default"))
- val = WSEH_NEW;
- else if (parse_one_token(&arg, "all"))
- val = WSEH_NEW | WSEH_OLD | WSEH_CONTEXT;
- else if (parse_one_token(&arg, "new"))
- val |= WSEH_NEW;
- else if (parse_one_token(&arg, "old"))
- val |= WSEH_OLD;
- else if (parse_one_token(&arg, "context"))
- val |= WSEH_CONTEXT;
- else {
- error("unknown value after ws-error-highlight=%.*s",
- (int)(arg - orig_arg), orig_arg);
- return 0;
- }
- if (*arg)
- arg++;
+ if (val < 0) {
+ error("unknown value after ws-error-highlight=%.*s",
+ -1 - val, arg);
+ return 0;
}
opt->ws_error_highlight = val;
return 1;
DIFF_XDL_SET(options, IGNORE_WHITESPACE_AT_EOL);
else if (!strcmp(arg, "--ignore-blank-lines"))
DIFF_XDL_SET(options, IGNORE_BLANK_LINES);
- else if (!strcmp(arg, "--compaction-heuristic"))
+ else if (!strcmp(arg, "--indent-heuristic")) {
+ DIFF_XDL_SET(options, INDENT_HEURISTIC);
+ DIFF_XDL_CLR(options, COMPACTION_HEURISTIC);
+ } else if (!strcmp(arg, "--no-indent-heuristic"))
+ DIFF_XDL_CLR(options, INDENT_HEURISTIC);
+ else if (!strcmp(arg, "--compaction-heuristic")) {
DIFF_XDL_SET(options, COMPACTION_HEURISTIC);
- else if (!strcmp(arg, "--no-compaction-heuristic"))
+ DIFF_XDL_CLR(options, INDENT_HEURISTIC);
+ } else if (!strcmp(arg, "--no-compaction-heuristic"))
DIFF_XDL_CLR(options, COMPACTION_HEURISTIC);
else if (!strcmp(arg, "--patience"))
options->xdl_opts = DIFF_WITH_ALG(options, PATIENCE_DIFF);
DIFF_OPT_SET(options, OVERRIDE_SUBMODULE_CONFIG);
handle_ignore_submodules_arg(options, arg);
} else if (!strcmp(arg, "--submodule"))
- DIFF_OPT_SET(options, SUBMODULE_LOG);
+ options->submodule_format = DIFF_SUBMODULE_LOG;
else if (skip_prefix(arg, "--submodule=", &arg))
return parse_submodule_opt(options, arg);
else if (skip_prefix(arg, "--ws-error-highlight=", &arg))
- return parse_ws_error_highlight(options, arg);
+ return parse_ws_error_highlight_opt(options, arg);
+ else if (!strcmp(arg, "--ita-invisible-in-index"))
+ options->ita_invisible_in_index = 1;
+ else if (!strcmp(arg, "--ita-visible-in-index"))
+ options->ita_invisible_in_index = 0;
/* misc options */
else if (!strcmp(arg, "-z"))
options->a_prefix = optarg;
return argcount;
}
+ else if ((argcount = parse_long_opt("line-prefix", av, &optarg))) {
+ options->line_prefix = optarg;
+ options->line_prefix_length = strlen(options->line_prefix);
+ graph_setup_line_prefix(options);
+ return argcount;
+ }
else if ((argcount = parse_long_opt("dst-prefix", av, &optarg))) {
options->b_prefix = optarg;
return argcount;
free(p);
}
-/*
- * This is different from find_unique_abbrev() in that
- * it stuffs the result with dots for alignment.
- */
-const char *diff_unique_abbrev(const unsigned char *sha1, int len)
+const char *diff_aligned_abbrev(const struct object_id *oid, int len)
{
int abblen;
const char *abbrev;
- if (len == 40)
- return sha1_to_hex(sha1);
- abbrev = find_unique_abbrev(sha1, len);
+ if (len == GIT_SHA1_HEXSZ)
+ return oid_to_hex(oid);
+
+ abbrev = diff_abbrev_oid(oid, len);
abblen = strlen(abbrev);
/*
* the automatic sizing is supposed to give abblen that ensures
* uniqueness across all objects (statistically speaking).
*/
- if (abblen < 37) {
- static char hex[41];
+ if (abblen < GIT_SHA1_HEXSZ - 3) {
+ static char hex[GIT_SHA1_HEXSZ + 1];
if (len < abblen && abblen <= len + 2)
xsnprintf(hex, sizeof(hex), "%s%.*s", abbrev, len+3-abblen, "..");
else
xsnprintf(hex, sizeof(hex), "%s...", abbrev);
return hex;
}
- return sha1_to_hex(sha1);
+
+ return oid_to_hex(oid);
}
static void diff_flush_raw(struct diff_filepair *p, struct diff_options *opt)
fprintf(opt->file, "%s", diff_line_prefix(opt));
if (!(opt->output_format & DIFF_FORMAT_NAME_STATUS)) {
fprintf(opt->file, ":%06o %06o %s ", p->one->mode, p->two->mode,
- diff_unique_abbrev(p->one->oid.hash, opt->abbrev));
+ diff_aligned_abbrev(&p->one->oid, opt->abbrev));
fprintf(opt->file, "%s ",
- diff_unique_abbrev(p->two->oid.hash, opt->abbrev));
+ diff_aligned_abbrev(&p->two->oid, opt->abbrev));
}
if (p->score) {
fprintf(opt->file, "%c%03d%c", p->status, similarity_index(p),
}
static const char rename_limit_warning[] =
-"inexact rename detection was skipped due to too many files.";
+N_("inexact rename detection was skipped due to too many files.");
static const char degrade_cc_to_c_warning[] =
-"only found copies from modified paths due to too many files.";
+N_("only found copies from modified paths due to too many files.");
static const char rename_limit_advice[] =
-"you may want to set your %s variable to at least "
-"%d and retry the command.";
+N_("you may want to set your %s variable to at least "
+ "%d and retry the command.");
void diff_warn_rename_limit(const char *varname, int needed, int degraded_cc)
{
if (degraded_cc)
- warning(degrade_cc_to_c_warning);
+ warning(_(degrade_cc_to_c_warning));
else if (needed)
- warning(rename_limit_warning);
+ warning(_(rename_limit_warning));
else
return;
if (0 < needed && needed < 32767)
- warning(rename_limit_advice, varname, needed);
+ warning(_(rename_limit_advice), varname, needed);
}
void diff_flush(struct diff_options *options)
void diffcore_fix_diff_index(struct diff_options *options)
{
struct diff_queue_struct *q = &diff_queued_diff;
- qsort(q->queue, q->nr, sizeof(q->queue[0]), diffnamecmp);
+ QSORT(q->queue, q->nr, diffnamecmp);
}
void diffcore_std(struct diff_options *options)
#define DIFF_OPT_DIRSTAT_BY_FILE (1 << 20)
#define DIFF_OPT_ALLOW_TEXTCONV (1 << 21)
#define DIFF_OPT_DIFF_FROM_CONTENTS (1 << 22)
-#define DIFF_OPT_SUBMODULE_LOG (1 << 23)
#define DIFF_OPT_DIRTY_SUBMODULES (1 << 24)
#define DIFF_OPT_IGNORE_UNTRACKED_IN_SUBMODULES (1 << 25)
#define DIFF_OPT_IGNORE_DIRTY_SUBMODULES (1 << 26)
DIFF_WORDS_COLOR
};
+enum diff_submodule_format {
+ DIFF_SUBMODULE_SHORT = 0,
+ DIFF_SUBMODULE_LOG,
+ DIFF_SUBMODULE_INLINE_DIFF
+};
+
struct diff_options {
const char *orderfile;
const char *pickaxe;
const char *single_follow;
const char *a_prefix, *b_prefix;
+ const char *line_prefix;
+ size_t line_prefix_length;
unsigned flags;
unsigned touched_flags;
int dirstat_permille;
int setup;
int abbrev;
+ int ita_invisible_in_index;
/* white-space error highlighting */
#define WSEH_NEW 1
#define WSEH_CONTEXT 2
int stat_count;
const char *word_regex;
enum diff_words_type word_diff;
+ enum diff_submodule_format submodule_format;
/* this is set by diffcore for DIFF_FORMAT_PATCH */
int found_changes;
diff_format_fn_t format_callback;
void *format_callback_data;
diff_prefix_fn_t output_prefix;
- int output_prefix_length;
void *output_prefix_data;
int diff_path_counter;
const char **optarg);
extern int git_diff_basic_config(const char *var, const char *value, void *cb);
+extern int git_diff_heuristic_config(const char *var, const char *value, void *cb);
extern void init_diff_ui_defaults(void);
extern int git_diff_ui_config(const char *var, const char *value, void *cb);
extern void diff_setup(struct diff_options *);
#define DIFF_STATUS_FILTER_AON '*'
#define DIFF_STATUS_FILTER_BROKEN 'B'
-extern const char *diff_unique_abbrev(const unsigned char *, int);
+/*
+ * This is different from find_unique_abbrev() in that
+ * it stuffs the result with dots for alignment.
+ */
+extern const char *diff_aligned_abbrev(const struct object_id *sha1, int);
/* do not report anything on removed paths */
#define DIFF_SILENT_ON_REMOVED 01
extern void diff_no_index(struct rev_info *, int, const char **);
-extern int index_differs_from(const char *def, int diff_flags);
+extern int index_differs_from(const char *def, int diff_flags, int ita_invisible_in_index);
/*
* Fill the contents of the filespec "df", respecting any textconv defined by
n = 0;
accum1 = accum2 = 0;
}
- qsort(hash->data,
- 1ul << hash->alloc_log2,
- sizeof(hash->data[0]),
- spanhash_cmp);
+ QSORT(hash->data, 1ul << hash->alloc_log2, spanhash_cmp);
return hash;
}
objs[i].orig_order = i;
objs[i].order = match_order(obj_path(objs[i].obj));
}
- qsort(objs, nr, sizeof(*objs), compare_objs_order);
+ QSORT(objs, nr, compare_objs_order);
}
static const char *pair_pathtwo(void *obj)
stop_progress(&progress);
/* cost matrix sorted by most to least similar pair */
- qsort(mx, dst_cnt * NUM_CANDIDATE_PER_DST, sizeof(*mx), score_compare);
+ QSORT(mx, dst_cnt * NUM_CANDIDATE_PER_DST, score_compare);
rename_count += find_renames(mx, dst_cnt, minimum_score, 0);
if (detect_rename == DIFF_DETECT_COPY)
return 1;
}
-#define DO_MATCH_EXCLUDE 1
-#define DO_MATCH_DIRECTORY 2
+#define DO_MATCH_EXCLUDE (1<<0)
+#define DO_MATCH_DIRECTORY (1<<1)
+#define DO_MATCH_SUBMODULE (1<<2)
/*
* Does 'match' match the given name?
item->nowildcard_len - prefix))
return MATCHED_FNMATCH;
+ /* Perform checks to see if "name" is a superset of the pathspec */
+ if (flags & DO_MATCH_SUBMODULE) {
+ /* name is a literal prefix of the pathspec */
+ if ((namelen < matchlen) &&
+ (match[namelen] == '/') &&
+ !ps_strncmp(item, match, name, namelen))
+ return MATCHED_RECURSIVELY;
+
+ /* "name" doesn't match up to the first wild character */
+ if (item->nowildcard_len < item->len &&
+ ps_strncmp(item, match, name,
+ item->nowildcard_len - prefix))
+ return 0;
+
+ /*
+ * Here is where we would perform a wildmatch to check if
+ * "name" can be matched as a directory (or a prefix) against
+ * the pathspec. Since wildmatch doesn't have this capability
+ * at present, we have to punt and say that it is a match,
+ * potentially returning a false positive.
+ * The submodules themselves will be able to perform more
+ * accurate matching to determine if the pathspec matches.
+ */
+ return MATCHED_RECURSIVELY;
+ }
+
return 0;
}
return negative ? 0 : positive;
}
+/**
+ * Check if a submodule is a superset of the pathspec
+ */
+int submodule_path_match(const struct pathspec *ps,
+ const char *submodule_name,
+ char *seen)
+{
+ int matched = do_match_pathspec(ps, submodule_name,
+ strlen(submodule_name),
+ 0, seen,
+ DO_MATCH_DIRECTORY |
+ DO_MATCH_SUBMODULE);
+ return matched;
+}
+
int report_path_error(const char *ps_matched,
const struct pathspec *pathspec,
const char *prefix)
return NULL;
if (!ce_skip_worktree(active_cache[pos]))
return NULL;
- data = read_sha1_file(active_cache[pos]->sha1, &type, &sz);
+ data = read_sha1_file(active_cache[pos]->oid.hash, &type, &sz);
if (!data || type != OBJ_BLOB) {
free(data);
return NULL;
*size = xsize_t(sz);
if (sha1_stat) {
memset(&sha1_stat->stat, 0, sizeof(sha1_stat->stat));
- hashcpy(sha1_stat->sha1, active_cache[pos]->sha1);
+ hashcpy(sha1_stat->sha1, active_cache[pos]->oid.hash);
}
return data;
}
!ce_stage(active_cache[pos]) &&
ce_uptodate(active_cache[pos]) &&
!would_convert_to_git(fname))
- hashcpy(sha1_stat->sha1, active_cache[pos]->sha1);
+ hashcpy(sha1_stat->sha1,
+ active_cache[pos]->oid.hash);
else
hash_sha1_file(buf, size, "blob", sha1_stat->sha1);
fill_stat_data(&sha1_stat->stat, &st);
if (!len || treat_leading_path(dir, path, len, simplify))
read_directory_recursive(dir, path, len, untracked, 0, simplify);
free_simplify(simplify);
- qsort(dir->entries, dir->nr, sizeof(struct dir_entry *), cmp_name);
- qsort(dir->ignored, dir->ignored_nr, sizeof(struct dir_entry *), cmp_name);
+ QSORT(dir->entries, dir->nr, cmp_name);
+ QSORT(dir->ignored, dir->ignored_nr, cmp_name);
if (dir->untracked) {
static struct trace_key trace_untracked_stats = TRACE_KEY_INIT(UNTRACKED_STATS);
trace_printf_key(&trace_untracked_stats,
void setup_standard_excludes(struct dir_struct *dir)
{
- const char *path;
-
dir->exclude_per_dir = ".gitignore";
/* core.excludefile defaulting to $XDG_HOME/git/ignore */
dir->untracked ? &dir->ss_excludes_file : NULL);
/* per repository user preference */
- path = git_path_info_exclude();
- if (!access_or_warn(path, R_OK, 0))
- add_excludes_from_file_1(dir, path,
- dir->untracked ? &dir->ss_info_exclude : NULL);
+ if (startup_info->have_repository) {
+ const char *path = git_path_info_exclude();
+ if (!access_or_warn(path, R_OK, 0))
+ add_excludes_from_file_1(dir, path,
+ dir->untracked ? &dir->ss_info_exclude : NULL);
+ }
}
int remove_path(const char *name)
const char *pattern, const char *string,
int prefix);
+extern int submodule_path_match(const struct pathspec *ps,
+ const char *submodule_name,
+ char *seen);
+
static inline int ce_path_match(const struct cache_entry *ce,
const struct pathspec *pathspec,
char *seen)
static void *read_blob_entry(const struct cache_entry *ce, unsigned long *size)
{
enum object_type type;
- void *new = read_sha1_file(ce->sha1, &type, size);
+ void *new = read_sha1_file(ce->oid.hash, &type, size);
if (new) {
if (type == OBJ_BLOB)
if (fd < 0)
return -1;
- result |= stream_blob_to_fd(fd, ce->sha1, filter, 1);
+ result |= stream_blob_to_fd(fd, &ce->oid, filter, 1);
*fstat_done = fstat_output(fd, state, statbuf);
result |= close(fd);
struct stat st;
if (ce_mode_s_ifmt == S_IFREG) {
- struct stream_filter *filter = get_stream_filter(ce->name, ce->sha1);
+ struct stream_filter *filter = get_stream_filter(ce->name,
+ ce->oid.hash);
if (filter &&
!streaming_write_entry(ce, path, filter,
state, to_tempfile,
new = read_blob_entry(ce, &size);
if (!new)
return error("unable to read sha1 file of %s (%s)",
- path, sha1_to_hex(ce->sha1));
+ path, oid_to_hex(&ce->oid));
if (ce_mode_s_ifmt == S_IFLNK && has_symlinks && !to_tempfile) {
ret = symlink(new, path);
int trust_ctime = 1;
int check_stat = 1;
int has_symlinks = 1;
-int minimum_abbrev = 4, default_abbrev = 7;
+int minimum_abbrev = 4, default_abbrev = -1;
int ignore_case;
int assume_unchanged;
int prefer_symlink_refs;
size_t packed_git_limit = DEFAULT_PACKED_GIT_LIMIT;
size_t delta_base_cache_limit = 96 * 1024 * 1024;
unsigned long big_file_threshold = 512 * 1024 * 1024;
-const char *pager_program;
int pager_use_color = 1;
const char *editor_program;
const char *askpass_program;
static const char *namespace;
static size_t namespace_len;
+static const char *super_prefix;
+
static const char *git_dir, *git_common_dir;
static char *git_object_dir, *git_index_file, *git_graft_file;
int git_db_env, git_index_env, git_graft_env, git_common_dir_env;
NO_REPLACE_OBJECTS_ENVIRONMENT,
GIT_REPLACE_REF_BASE_ENVIRONMENT,
GIT_PREFIX_ENVIRONMENT,
+ GIT_SUPER_PREFIX_ENVIRONMENT,
GIT_SHALLOW_FILE_ENVIRONMENT,
GIT_COMMON_DIR_ENVIRONMENT,
NULL
return is_bare_repository_cfg && !get_git_work_tree();
}
+int have_git_dir(void)
+{
+ return startup_info->have_repository
+ || git_dir
+ || getenv(GIT_DIR_ENVIRONMENT);
+}
+
const char *get_git_dir(void)
{
if (!git_dir)
return namespaced_ref + namespace_len;
}
+const char *get_super_prefix(void)
+{
+ static int initialized;
+ if (!initialized) {
+ super_prefix = getenv(GIT_SUPER_PREFIX_ENVIRONMENT);
+ initialized = 1;
+ }
+ return super_prefix;
+}
+
static int git_work_tree_initialized;
/*
}
return the_shared_repository;
}
+
+void reset_shared_repository(void)
+{
+ need_shared_repository_from_config = 1;
+}
unsigned int i;
if (!v)
- qsort(t->entries,t->entry_count,sizeof(t->entries[0]),tecmp0);
+ QSORT(t->entries, t->entry_count, tecmp0);
else
- qsort(t->entries,t->entry_count,sizeof(t->entries[0]),tecmp1);
+ QSORT(t->entries, t->entry_count, tecmp1);
for (i = 0; i < t->entry_count; i++) {
if (t->entries[i]->versions[v].mode)
static int unpack_limit = 100;
static int prefer_ofs_delta = 1;
static int no_done;
+static int deepen_since_ok;
+static int deepen_not_ok;
static int fetch_fsck_objects = -1;
static int transfer_fsck_objects = -1;
static int agent_supported;
#define ALLOW_REACHABLE_SHA1 02
static unsigned int allow_unadvertised_object_request;
+__attribute__((format (printf, 2, 3)))
+static inline void print_verbose(const struct fetch_pack_args *args,
+ const char *fmt, ...)
+{
+ va_list params;
+
+ if (!args->verbose)
+ return;
+
+ va_start(params, fmt);
+ vfprintf(stderr, fmt, params);
+ va_end(params);
+ fputc('\n', stderr);
+}
+
static void rev_list_push(struct commit *commit, int mark)
{
if (!(commit->object.flags & mark)) {
static void consume_shallow_list(struct fetch_pack_args *args, int fd)
{
- if (args->stateless_rpc && args->depth > 0) {
+ if (args->stateless_rpc && args->deepen) {
/* If we sent a depth we will get back "duplicate"
* shallow and unshallow commands every time there
* is a block of have lines exchanged.
continue;
if (starts_with(line, "unshallow "))
continue;
- die("git fetch-pack: expected shallow list");
+ die(_("git fetch-pack: expected shallow list"));
}
}
}
const char *arg;
if (!len)
- die("git fetch-pack: expected ACK/NAK, got EOF");
+ die(_("git fetch-pack: expected ACK/NAK, got EOF"));
if (!strcmp(line, "NAK"))
return NAK;
if (skip_prefix(line, "ACK ", &arg)) {
return ACK;
}
}
- die("git fetch_pack: expected ACK/NAK, got '%s'", line);
+ die(_("git fetch_pack: expected ACK/NAK, got '%s'"), line);
}
static void send_request(struct fetch_pack_args *args,
size_t state_len = 0;
if (args->stateless_rpc && multi_ack == 1)
- die("--stateless-rpc requires multi_ack_detailed");
+ die(_("--stateless-rpc requires multi_ack_detailed"));
if (marked)
for_each_ref(clear_marks, NULL);
marked = 1;
if (no_done) strbuf_addstr(&c, " no-done");
if (use_sideband == 2) strbuf_addstr(&c, " side-band-64k");
if (use_sideband == 1) strbuf_addstr(&c, " side-band");
+ if (args->deepen_relative) strbuf_addstr(&c, " deepen-relative");
if (args->use_thin_pack) strbuf_addstr(&c, " thin-pack");
if (args->no_progress) strbuf_addstr(&c, " no-progress");
if (args->include_tag) strbuf_addstr(&c, " include-tag");
if (prefer_ofs_delta) strbuf_addstr(&c, " ofs-delta");
+ if (deepen_since_ok) strbuf_addstr(&c, " deepen-since");
+ if (deepen_not_ok) strbuf_addstr(&c, " deepen-not");
if (agent_supported) strbuf_addf(&c, " agent=%s",
git_user_agent_sanitized());
packet_buf_write(&req_buf, "want %s%s\n", remote_hex, c.buf);
write_shallow_commits(&req_buf, 1, NULL);
if (args->depth > 0)
packet_buf_write(&req_buf, "deepen %d", args->depth);
+ if (args->deepen_since) {
+ unsigned long max_age = approxidate(args->deepen_since);
+ packet_buf_write(&req_buf, "deepen-since %lu", max_age);
+ }
+ if (args->deepen_not) {
+ int i;
+ for (i = 0; i < args->deepen_not->nr; i++) {
+ struct string_list_item *s = args->deepen_not->items + i;
+ packet_buf_write(&req_buf, "deepen-not %s", s->string);
+ }
+ }
packet_buf_flush(&req_buf);
state_len = req_buf.len;
- if (args->depth > 0) {
+ if (args->deepen) {
char *line;
const char *arg;
unsigned char sha1[20];
while ((line = packet_read_line(fd[0], NULL))) {
if (skip_prefix(line, "shallow ", &arg)) {
if (get_sha1_hex(arg, sha1))
- die("invalid shallow line: %s", line);
+ die(_("invalid shallow line: %s"), line);
register_shallow(sha1);
continue;
}
if (skip_prefix(line, "unshallow ", &arg)) {
if (get_sha1_hex(arg, sha1))
- die("invalid unshallow line: %s", line);
+ die(_("invalid unshallow line: %s"), line);
if (!lookup_object(sha1))
- die("object not found: %s", line);
+ die(_("object not found: %s"), line);
/* make sure that it is parsed as shallow */
if (!parse_object(sha1))
- die("error in object: %s", line);
+ die(_("error in object: %s"), line);
if (unregister_shallow(sha1))
- die("no shallow found: %s", line);
+ die(_("no shallow found: %s"), line);
continue;
}
- die("expected shallow/unshallow, got %s", line);
+ die(_("expected shallow/unshallow, got %s"), line);
}
} else if (!args->stateless_rpc)
send_request(args, fd[1], &req_buf);
retval = -1;
while ((sha1 = get_rev())) {
packet_buf_write(&req_buf, "have %s\n", sha1_to_hex(sha1));
- if (args->verbose)
- fprintf(stderr, "have %s\n", sha1_to_hex(sha1));
+ print_verbose(args, "have %s", sha1_to_hex(sha1));
in_vain++;
if (flush_at <= ++count) {
int ack;
consume_shallow_list(args, fd[0]);
do {
ack = get_ack(fd[0], result_sha1);
- if (args->verbose && ack)
- fprintf(stderr, "got ack %d %s\n", ack,
- sha1_to_hex(result_sha1));
+ if (ack)
+ print_verbose(args, _("got %s %d %s"), "ack",
+ ack, sha1_to_hex(result_sha1));
switch (ack) {
case ACK:
flushes = 0;
struct commit *commit =
lookup_commit(result_sha1);
if (!commit)
- die("invalid commit %s", sha1_to_hex(result_sha1));
+ die(_("invalid commit %s"), sha1_to_hex(result_sha1));
if (args->stateless_rpc
&& ack == ACK_common
&& !(commit->object.flags & COMMON)) {
} while (ack);
flushes--;
if (got_continue && MAX_IN_VAIN < in_vain) {
- if (args->verbose)
- fprintf(stderr, "giving up\n");
+ print_verbose(args, _("giving up"));
break; /* give up */
}
}
packet_buf_write(&req_buf, "done\n");
send_request(args, fd[1], &req_buf);
}
- if (args->verbose)
- fprintf(stderr, "done\n");
+ print_verbose(args, _("done"));
if (retval != 0) {
multi_ack = 0;
flushes++;
while (flushes || multi_ack) {
int ack = get_ack(fd[0], result_sha1);
if (ack) {
- if (args->verbose)
- fprintf(stderr, "got ack (%d) %s\n", ack,
- sha1_to_hex(result_sha1));
+ print_verbose(args, _("got %s (%d) %s"), "ack",
+ ack, sha1_to_hex(result_sha1));
if (ack == ACK)
return 0;
multi_ack = 1;
unsigned long cutoff)
{
while (complete && cutoff <= complete->item->date) {
- if (args->verbose)
- fprintf(stderr, "Marking %s as complete\n",
- oid_to_hex(&complete->item->object.oid));
+ print_verbose(args, _("Marking %s as complete"),
+ oid_to_hex(&complete->item->object.oid));
pop_most_recent_commit(&complete, COMPLETE);
}
}
}
if (!keep && args->fetch_all &&
- (!args->depth || !starts_with(ref->name, "refs/tags/")))
+ (!args->deepen || !starts_with(ref->name, "refs/tags/")))
keep = 1;
if (keep) {
}
}
- if (!args->depth) {
+ if (!args->deepen) {
for_each_ref(mark_complete_oid, NULL);
for_each_alternate_ref(mark_alternate_complete, NULL);
commit_list_sort_by_date(&complete);
o = lookup_object(remote);
if (!o || !(o->flags & COMPLETE)) {
retval = 0;
- if (!args->verbose)
- continue;
- fprintf(stderr,
- "want %s (%s)\n", sha1_to_hex(remote),
- ref->name);
+ print_verbose(args, "want %s (%s)", sha1_to_hex(remote),
+ ref->name);
continue;
}
- if (!args->verbose)
- continue;
- fprintf(stderr,
- "already have %s (%s)\n", sha1_to_hex(remote),
- ref->name);
+ print_verbose(args, _("already have %s (%s)"), sha1_to_hex(remote),
+ ref->name);
}
return retval;
}
demux.out = -1;
demux.isolate_sigpipe = 1;
if (start_async(&demux))
- die("fetch-pack: unable to fork off sideband"
- " demultiplexer");
+ die(_("fetch-pack: unable to fork off sideband demultiplexer"));
}
else
demux.out = xd[0];
if (!args->keep_pack && unpack_limit) {
if (read_pack_header(demux.out, &header))
- die("protocol error: bad pack header");
+ die(_("protocol error: bad pack header"));
pass_header = 1;
if (ntohl(header.hdr_entries) < unpack_limit)
do_keep = 0;
cmd.in = demux.out;
cmd.git_cmd = 1;
if (start_command(&cmd))
- die("fetch-pack: unable to fork off %s", cmd_name);
+ die(_("fetch-pack: unable to fork off %s"), cmd_name);
if (do_keep && pack_lockfile) {
*pack_lockfile = index_pack_lockfile(cmd.out);
close(cmd.out);
args->check_self_contained_and_connected &&
ret == 0;
else
- die("%s failed", cmd_name);
+ die(_("%s failed"), cmd_name);
if (use_sideband && finish_async(&demux))
- die("error in sideband demultiplexer");
+ die(_("error in sideband demultiplexer"));
return 0;
}
int agent_len;
sort_ref_list(&ref, ref_compare_name);
- qsort(sought, nr_sought, sizeof(*sought), cmp_ref_by_name);
+ QSORT(sought, nr_sought, cmp_ref_by_name);
if ((args->depth > 0 || is_repository_shallow()) && !server_supports("shallow"))
- die("Server does not support shallow clients");
+ die(_("Server does not support shallow clients"));
+ if (args->depth > 0 || args->deepen_since || args->deepen_not)
+ args->deepen = 1;
if (server_supports("multi_ack_detailed")) {
- if (args->verbose)
- fprintf(stderr, "Server supports multi_ack_detailed\n");
+ print_verbose(args, _("Server supports multi_ack_detailed"));
multi_ack = 2;
if (server_supports("no-done")) {
- if (args->verbose)
- fprintf(stderr, "Server supports no-done\n");
+ print_verbose(args, _("Server supports no-done"));
if (args->stateless_rpc)
no_done = 1;
}
}
else if (server_supports("multi_ack")) {
- if (args->verbose)
- fprintf(stderr, "Server supports multi_ack\n");
+ print_verbose(args, _("Server supports multi_ack"));
multi_ack = 1;
}
if (server_supports("side-band-64k")) {
- if (args->verbose)
- fprintf(stderr, "Server supports side-band-64k\n");
+ print_verbose(args, _("Server supports side-band-64k"));
use_sideband = 2;
}
else if (server_supports("side-band")) {
- if (args->verbose)
- fprintf(stderr, "Server supports side-band\n");
+ print_verbose(args, _("Server supports side-band"));
use_sideband = 1;
}
if (server_supports("allow-tip-sha1-in-want")) {
- if (args->verbose)
- fprintf(stderr, "Server supports allow-tip-sha1-in-want\n");
+ print_verbose(args, _("Server supports allow-tip-sha1-in-want"));
allow_unadvertised_object_request |= ALLOW_TIP_SHA1;
}
if (server_supports("allow-reachable-sha1-in-want")) {
- if (args->verbose)
- fprintf(stderr, "Server supports allow-reachable-sha1-in-want\n");
+ print_verbose(args, _("Server supports allow-reachable-sha1-in-want"));
allow_unadvertised_object_request |= ALLOW_REACHABLE_SHA1;
}
if (!server_supports("thin-pack"))
args->no_progress = 0;
if (!server_supports("include-tag"))
args->include_tag = 0;
- if (server_supports("ofs-delta")) {
- if (args->verbose)
- fprintf(stderr, "Server supports ofs-delta\n");
- } else
+ if (server_supports("ofs-delta"))
+ print_verbose(args, _("Server supports ofs-delta"));
+ else
prefer_ofs_delta = 0;
if ((agent_feature = server_feature_value("agent", &agent_len))) {
agent_supported = 1;
- if (args->verbose && agent_len)
- fprintf(stderr, "Server version is %.*s\n",
- agent_len, agent_feature);
+ if (agent_len)
+ print_verbose(args, _("Server version is %.*s"),
+ agent_len, agent_feature);
}
+ if (server_supports("deepen-since"))
+ deepen_since_ok = 1;
+ else if (args->deepen_since)
+ die(_("Server does not support --shallow-since"));
+ if (server_supports("deepen-not"))
+ deepen_not_ok = 1;
+ else if (args->deepen_not)
+ die(_("Server does not support --shallow-exclude"));
+ if (!server_supports("deepen-relative") && args->deepen_relative)
+ die(_("Server does not support --deepen"));
if (everything_local(args, &ref, sought, nr_sought)) {
packet_flush(fd[1]);
/* When cloning, it is not unusual to have
* no common commit.
*/
- warning("no common commits");
+ warning(_("no common commits"));
if (args->stateless_rpc)
packet_flush(fd[1]);
- if (args->depth > 0)
+ if (args->deepen)
setup_alternate_shallow(&shallow_lock, &alternate_shallow_file,
NULL);
else if (si->nr_ours || si->nr_theirs)
else
alternate_shallow_file = NULL;
if (get_pack(args, fd, pack_lockfile))
- die("git fetch-pack: fetch failed.");
+ die(_("git fetch-pack: fetch failed."));
all_done:
return ref;
int *status;
int i;
- if (args->depth > 0 && alternate_shallow_file) {
+ if (args->deepen && alternate_shallow_file) {
if (*alternate_shallow_file == '\0') { /* --unshallow */
unlink_or_warn(git_path_shallow());
rollback_lock_file(&shallow_lock);
if (!ref) {
packet_flush(fd[1]);
- die("no matching remote head");
+ die(_("no matching remote head"));
}
prepare_shallow_info(&si, shallow);
ref_cpy = do_fetch_pack(args, fd, ref, sought, nr_sought,
const char *uploadpack;
int unpacklimit;
int depth;
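+ /*
+ * Shallow-deepening controls (a descriptive note, inferred from the
+ * option checks later in this series): deepen_since backs
+ * --shallow-since, deepen_not backs --shallow-exclude and
+ * deepen_relative backs --deepen; the deepen bit below is set
+ * whenever any form of deepening (including a plain depth) was
+ * requested.
+ */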
+ const char *deepen_since;
+ const struct string_list *deepen_not;
+ unsigned deepen_relative:1;
unsigned quiet:1;
unsigned keep_pack:1;
unsigned lock_pack:1;
unsigned self_contained_and_connected:1;
unsigned cloning:1;
unsigned update_shallow:1;
+ unsigned deepen:1;
};
/*
return -1;
name = get_object_name(options, &tree->object);
- init_tree_desc(&desc, tree->buffer, tree->size);
- while (tree_entry(&desc, &entry)) {
+ if (init_tree_desc_gently(&desc, tree->buffer, tree->size))
+ return -1;
+ while (tree_entry_gently(&desc, &entry)) {
struct object *obj;
int result;
static int fsck_tree(struct tree *item, struct fsck_options *options)
{
- int retval;
+ int retval = 0;
int has_null_sha1 = 0;
int has_full_path = 0;
int has_empty_name = 0;
unsigned o_mode;
const char *o_name;
- init_tree_desc(&desc, item->buffer, item->size);
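+ /*
+ * With the _gently tree-walking helpers, a buffer that cannot be
+ * set up as a tree is reported as FSCK_MSG_BAD_TREE and we return,
+ * instead of letting the non-gentle helper abort fsck outright.
+ */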
+ if (init_tree_desc_gently(&desc, item->buffer, item->size)) {
+ retval += report(options, &item->object, FSCK_MSG_BAD_TREE, "cannot be parsed as a tree");
+ return retval;
+ }
o_mode = 0;
o_name = NULL;
is_hfs_dotgit(name) ||
is_ntfs_dotgit(name));
has_zero_pad |= *(char *)desc.buffer == '0';
- update_tree_entry(&desc);
+ if (update_tree_entry_gently(&desc)) {
+ retval += report(options, &item->object, FSCK_MSG_BAD_TREE, "cannot be parsed as a tree");
+ break;
+ }
switch (mode) {
/*
o_name = name;
}
- retval = 0;
if (has_null_sha1)
retval += report(options, &item->object, FSCK_MSG_NULL_SHA1, "contains entries pointing to null sha1");
if (has_full_path)
my $normal_color = $repo->get_color("", "reset");
my $diff_algorithm = $repo->config('diff.algorithm');
+my $diff_indent_heuristic = $repo->config_bool('diff.indentheuristic');
my $diff_compaction_heuristic = $repo->config_bool('diff.compactionheuristic');
my $diff_filter = $repo->config('interactive.difffilter');
if (defined $diff_algorithm) {
splice @diff_cmd, 1, 0, "--diff-algorithm=${diff_algorithm}";
}
- if ($diff_compaction_heuristic) {
+ if ($diff_indent_heuristic) {
+ splice @diff_cmd, 1, 0, "--indent-heuristic";
+ } elsif ($diff_compaction_heuristic) {
splice @diff_cmd, 1, 0, "--compaction-heuristic";
}
if (defined $patch_mode_revision) {
extern void set_die_routine(NORETURN_PTR void (*routine)(const char *err, va_list params));
extern void set_error_routine(void (*routine)(const char *err, va_list params));
+extern void (*get_error_routine(void))(const char *err, va_list params);
+extern void set_warn_routine(void (*routine)(const char *warn, va_list params));
+extern void (*get_warn_routine(void))(const char *warn, va_list params);
extern void set_die_is_recursing_routine(int (*routine)(void));
extern void set_error_handle(FILE *);
#define qsort git_qsort
#endif
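+/*
+ * QSORT(): convenience wrapper around qsort() that derives the element
+ * size from the array itself and, via sane_qsort(), skips the call for
+ * arrays with fewer than two elements, so qsort() is never invoked on
+ * an empty array whose base pointer may be NULL.
+ */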
+#define QSORT(base, n, compar) sane_qsort((base), (n), sizeof(*(base)), compar)
+static inline void sane_qsort(void *base, size_t nmemb, size_t size,
+ int(*compar)(const void *, const void *))
+{
+ if (nmemb > 1)
+ qsort(base, nmemb, size, compar);
+}
+
#ifndef REG_STARTEND
#error "Git requires REG_STARTEND support. Compile with NO_REGEX=NeedsStartEnd"
#endif
#define getc_unlocked(fh) getc(fh)
#endif
-#endif
-
extern int cmd_main(int, const char **);
+
+#endif
#!/bin/sh
GVF=GIT-VERSION-FILE
-DEF_VER=0.20.GITGUI
+DEF_VER=0.21.GITGUI
LF='
'
rm -f $@ ; \
echo '# Autogenerated by git-gui Makefile' >$@ && \
echo >>$@ && \
- $(foreach p,$(PRELOAD_FILES) $(ALL_LIBFILES),echo '$(subst lib/,,$p)' >>$@ &&) \
+ $(foreach p,$(PRELOAD_FILES) $(sort $(ALL_LIBFILES)),echo '$(subst lib/,,$p)' >>$@ &&) \
echo >>$@ ; \
fi
set _iscygwin 0
} else {
set _iscygwin 1
+ # Handle MSys2, which is Cygwin only when MSYSTEM is MSYS.
+ if {[info exists ::env(MSYSTEM)] && $::env(MSYSTEM) ne "MSYS"} {
+ set _iscygwin 0
+ }
}
} else {
set _iscygwin 0
}
proc git {args} {
- set opt [list]
-
- while {1} {
- switch -- [lindex $args 0] {
- --nice {
- _lappend_nice opt
- }
-
- default {
- break
- }
-
- }
-
- set args [lrange $args 1 end]
- }
-
- set cmdp [_git_cmd [lindex $args 0]]
- set args [lrange $args 1 end]
-
- _trace_exec [concat $opt $cmdp $args]
- set result [eval exec $opt $cmdp $args]
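+ # Read the command's entire output through git_read, decode it as
+ # UTF-8 and strip the trailing newline before returning it.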
+ set fd [eval [list git_read] $args]
+ fconfigure $fd -translation binary -encoding utf-8
+ set result [string trimright [read $fd] "\n"]
+ close $fd
if {$::_trace} {
puts stderr "< $result"
}
[list git_read config] \
$args \
[list --null --list]]
- fconfigure $fd_rc -translation binary
+ fconfigure $fd_rc -translation binary -encoding utf-8
set buf [read $fd_rc]
close $fd_rc
}
if {[file isfile [gitdir MERGE_MSG]]} {
set pcm_source "merge"
set fd_mm [open [gitdir MERGE_MSG] r]
+ fconfigure $fd_mm -encoding utf-8
puts -nonewline $fd_pcm [read $fd_mm]
close $fd_mm
} elseif {[file isfile [gitdir SQUASH_MSG]]} {
set pcm_source "squash"
set fd_sm [open [gitdir SQUASH_MSG] r]
+ fconfigure $fd_sm -encoding utf-8
puts -nonewline $fd_pcm [read $fd_sm]
close $fd_sm
} else {
set i [split [string range $buf_rdi $c [expr {$z1 - 2}]] { }]
set p [string range $buf_rdi $z1 [expr {$z2 - 1}]]
merge_state \
- [encoding convertfrom $p] \
+ [encoding convertfrom utf-8 $p] \
[lindex $i 4]? \
[list [lindex $i 0] [lindex $i 2]] \
[list]
set i [split [string range $buf_rdf $c [expr {$z1 - 2}]] { }]
set p [string range $buf_rdf $z1 [expr {$z2 - 1}]]
merge_state \
- [encoding convertfrom $p] \
+ [encoding convertfrom utf-8 $p] \
?[lindex $i 4] \
[list] \
[list [lindex $i 0] [lindex $i 2]]
set pck [split $buf_rlo "\0"]
set buf_rlo [lindex $pck end]
foreach p [lrange $pck 0 end-1] {
- set p [encoding convertfrom $p]
+ set p [encoding convertfrom utf-8 $p]
if {[string index $p end] eq {/}} {
set p [string range $p 0 end-1]
}
}
}
-proc toggle_or_diff {w x y} {
+proc toggle_or_diff {mode w args} {
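+ # mode is "click" (mouse click, args carries the x/y coordinates),
+ # "toggle" (keyboard stage/unstage of the last clicked row) or
+ # "up"/"down" (keyboard motion relative to the last clicked row).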
global file_states file_lists current_diff_path ui_index ui_workdir
global last_clicked selected_paths
- set pos [split [$w index @$x,$y] .]
- set lno [lindex $pos 0]
- set col [lindex $pos 1]
+ if {$mode eq "click"} {
+ foreach {x y} $args break
+ set pos [split [$w index @$x,$y] .]
+ foreach {lno col} $pos break
+ } else {
+ if {$last_clicked ne {}} {
+ set lno [lindex $last_clicked 1]
+ } else {
+ set lno [expr {int([lindex [$w tag ranges in_diff] 0])}]
+ }
+ if {$mode eq "toggle"} {
+ set col 0; set y 2
+ } else {
+ incr lno [expr {$mode eq "up" ? -1 : 1}]
+ set col 1
+ }
+ }
+
set path [lindex $file_lists($w) [expr {$lno - 1}]]
if {$path eq {}} {
set last_clicked {}
}
set last_clicked [list $w $lno]
+ focus $w
array unset selected_paths
$ui_index tag remove in_sel 0.0 end
$ui_workdir tag remove in_sel 0.0 end
global file_lists last_clicked selected_paths
if {[lindex $last_clicked 0] ne $w} {
- toggle_or_diff $w $x $y
+ toggle_or_diff click $w $x $y
return
}
set subcommand_args {}
proc usage {} {
- set s "usage: $::argv0 $::subcommand $::subcommand_args"
+ set s "[mc usage:] $::argv0 $::subcommand $::subcommand_args"
if {[tk windowingsystem] eq "win32"} {
wm withdraw .
tk_messageBox -icon info -message $s \
# fall through to setup UI for commits
}
default {
- set err "usage: $argv0 \[{blame|browser|citool}\]"
+ set err "[mc usage:] $argv0 \[{blame|browser|citool}\]"
if {[tk windowingsystem] eq "win32"} {
wm withdraw .
tk_messageBox -icon error -message $err \
}
pack .vpane -anchor n -side top -fill both -expand 1
+# -- Working Directory File List
+
+textframe .vpane.files.workdir -height 100 -width 200
+tlabel .vpane.files.workdir.title -text [mc "Unstaged Changes"] \
+ -background lightsalmon -foreground black
+ttext $ui_workdir -background white -foreground black \
+ -borderwidth 0 \
+ -width 20 -height 10 \
+ -wrap none \
+ -takefocus 1 -highlightthickness 1\
+ -cursor $cursor_ptr \
+ -xscrollcommand {.vpane.files.workdir.sx set} \
+ -yscrollcommand {.vpane.files.workdir.sy set} \
+ -state disabled
+${NS}::scrollbar .vpane.files.workdir.sx -orient h -command [list $ui_workdir xview]
+${NS}::scrollbar .vpane.files.workdir.sy -orient v -command [list $ui_workdir yview]
+pack .vpane.files.workdir.title -side top -fill x
+pack .vpane.files.workdir.sx -side bottom -fill x
+pack .vpane.files.workdir.sy -side right -fill y
+pack $ui_workdir -side left -fill both -expand 1
+
# -- Index File List
#
-${NS}::frame .vpane.files.index -height 100 -width 200
+textframe .vpane.files.index -height 100 -width 200
tlabel .vpane.files.index.title \
-text [mc "Staged Changes (Will Commit)"] \
-background lightgreen -foreground black
-text $ui_index -background white -foreground black \
+ttext $ui_index -background white -foreground black \
-borderwidth 0 \
-width 20 -height 10 \
-wrap none \
+ -takefocus 1 -highlightthickness 1\
-cursor $cursor_ptr \
-xscrollcommand {.vpane.files.index.sx set} \
-yscrollcommand {.vpane.files.index.sy set} \
pack .vpane.files.index.sy -side right -fill y
pack $ui_index -side left -fill both -expand 1
-# -- Working Directory File List
+# -- Insert the workdir and index into the panes
#
-${NS}::frame .vpane.files.workdir -height 100 -width 200
-tlabel .vpane.files.workdir.title -text [mc "Unstaged Changes"] \
- -background lightsalmon -foreground black
-text $ui_workdir -background white -foreground black \
- -borderwidth 0 \
- -width 20 -height 10 \
- -wrap none \
- -cursor $cursor_ptr \
- -xscrollcommand {.vpane.files.workdir.sx set} \
- -yscrollcommand {.vpane.files.workdir.sy set} \
- -state disabled
-${NS}::scrollbar .vpane.files.workdir.sx -orient h -command [list $ui_workdir xview]
-${NS}::scrollbar .vpane.files.workdir.sy -orient v -command [list $ui_workdir yview]
-pack .vpane.files.workdir.title -side top -fill x
-pack .vpane.files.workdir.sx -side bottom -fill x
-pack .vpane.files.workdir.sy -side right -fill y
-pack $ui_workdir -side left -fill both -expand 1
-
.vpane.files add .vpane.files.workdir
.vpane.files add .vpane.files.index
if {!$use_ttk} {
#
${NS}::frame .vpane.lower.commarea.buffer
${NS}::frame .vpane.lower.commarea.buffer.header
-set ui_comm .vpane.lower.commarea.buffer.t
+set ui_comm .vpane.lower.commarea.buffer.frame.t
set ui_coml .vpane.lower.commarea.buffer.header.l
if {![is_enabled nocommit]} {
pack .vpane.lower.commarea.buffer.header.new -side right
}
-text $ui_comm -background white -foreground black \
+textframe .vpane.lower.commarea.buffer.frame
+ttext $ui_comm -background white -foreground black \
-borderwidth 1 \
-undo true \
-maxundo 20 \
-autoseparators true \
+ -takefocus 1 \
+ -highlightthickness 1 \
-relief sunken \
-width $repo_config(gui.commitmsgwidth) -height 9 -wrap none \
-font font_diff \
- -yscrollcommand {.vpane.lower.commarea.buffer.sby set}
-${NS}::scrollbar .vpane.lower.commarea.buffer.sby \
+ -yscrollcommand {.vpane.lower.commarea.buffer.frame.sby set}
+${NS}::scrollbar .vpane.lower.commarea.buffer.frame.sby \
-command [list $ui_comm yview]
-pack .vpane.lower.commarea.buffer.header -side top -fill x
-pack .vpane.lower.commarea.buffer.sby -side right -fill y
+
+pack .vpane.lower.commarea.buffer.frame.sby -side right -fill y
pack $ui_comm -side left -fill y
+pack .vpane.lower.commarea.buffer.header -side top -fill x
+pack .vpane.lower.commarea.buffer.frame -side left -fill y
pack .vpane.lower.commarea.buffer -side left -fill y
# -- Commit Message Buffer Context Menu
# -- Diff Body
#
-${NS}::frame .vpane.lower.diff.body
+textframe .vpane.lower.diff.body
set ui_diff .vpane.lower.diff.body.t
-text $ui_diff -background white -foreground black \
+ttext $ui_diff -background white -foreground black \
-borderwidth 0 \
-width 80 -height 5 -wrap none \
-font font_diff \
+ -takefocus 1 -highlightthickness 1 \
-xscrollcommand {.vpane.lower.diff.body.sbx set} \
-yscrollcommand {.vpane.lower.diff.body.sby set} \
-state disabled
bind . <$M1B-Key-R> ui_do_rescan
bind . <$M1B-Key-s> do_signoff
bind . <$M1B-Key-S> do_signoff
-bind . <$M1B-Key-t> do_add_selection
-bind . <$M1B-Key-T> do_add_selection
-bind . <$M1B-Key-u> do_unstage_selection
-bind . <$M1B-Key-U> do_unstage_selection
+bind . <$M1B-Key-t> { toggle_or_diff toggle %W }
+bind . <$M1B-Key-T> { toggle_or_diff toggle %W }
+bind . <$M1B-Key-u> { toggle_or_diff toggle %W }
+bind . <$M1B-Key-U> { toggle_or_diff toggle %W }
bind . <$M1B-Key-j> do_revert_selection
bind . <$M1B-Key-J> do_revert_selection
bind . <$M1B-Key-i> do_add_all
bind . <$M1B-Key-KP_Add> {show_more_context;break}
bind . <$M1B-Key-Return> do_commit
foreach i [list $ui_index $ui_workdir] {
- bind $i <Button-1> "toggle_or_diff $i %x %y; break"
- bind $i <$M1B-Button-1> "add_one_to_selection $i %x %y; break"
- bind $i <Shift-Button-1> "add_range_to_selection $i %x %y; break"
+ bind $i <Button-1> { toggle_or_diff click %W %x %y; break }
+ bind $i <$M1B-Button-1> { add_one_to_selection %W %x %y; break }
+ bind $i <Shift-Button-1> { add_range_to_selection %W %x %y; break }
+ bind $i <Key-Up> { toggle_or_diff up %W; break }
+ bind $i <Key-Down> { toggle_or_diff down %W; break }
}
unset i
set path $i_path
make_toplevel top w
- wm title $top [append "[appname] ([reponame]): " [mc "File Viewer"]]
+ wm title $top [mc "%s (%s): File Viewer" [appname] [reponame]]
set font_w [font measure font_diff "0"]
global use_ttk NS
make_dialog top w
wm withdraw $w
- wm title $top [append "[appname] ([reponame]): " [mc "Checkout Branch"]]
+ wm title $top [mc "%s (%s): Checkout Branch" [appname] [reponame]]
if {$top ne {.}} {
wm geometry $top "+[winfo rootx .]+[winfo rooty .]"
}
make_dialog top w
wm withdraw $w
- wm title $top [append "[appname] ([reponame]): " [mc "Create Branch"]]
+ wm title $top [mc "%s (%s): Create Branch" [appname] [reponame]]
if {$top ne {.}} {
wm geometry $top "+[winfo rootx .]+[winfo rooty .]"
}
make_dialog top w
wm withdraw $w
- wm title $top [append "[appname] ([reponame]): " [mc "Delete Branch"]]
+ wm title $top [mc "%s (%s): Delete Branch" [appname] [reponame]]
if {$top ne {.}} {
wm geometry $top "+[winfo rootx .]+[winfo rooty .]"
}
set b [lindex $i 0]
set o [lindex $i 1]
if {[catch {git branch -D $b} err]} {
- append failed " - $b: $err\n"
+ append failed [mc " - %s:" $b] " $err\n"
}
}
make_dialog top w
wm withdraw $w
- wm title $top [append "[appname] ([reponame]): " [mc "Rename Branch"]]
+ wm title $top [mc "%s (%s): Rename Branch" [appname] [reponame]]
if {$top ne {.}} {
wm geometry $top "+[winfo rootx .]+[winfo rooty .]"
}
global cursor_ptr M1B use_ttk NS
make_dialog top w
wm withdraw $top
- wm title $top [append "[appname] ([reponame]): " [mc "File Browser"]]
+ wm title $top [mc "%s (%s): File Browser" [appname] [reponame]]
if {$path ne {}} {
if {[string index $path end] ne {/}} {
$w conf -state disabled
set fd [git_read ls-tree -z $tree_id]
- fconfigure $fd -blocking 0 -translation binary -encoding binary
+ fconfigure $fd -blocking 0 -translation binary -encoding utf-8
fileevent $fd readable [cb _read $fd]
}
global use_ttk NS
make_dialog top w
wm withdraw $top
- wm title $top [append "[appname] ([reponame]): " [mc "Browse Branch Files"]]
+ wm title $top [mc "%s (%s): Browse Branch Files" [appname] [reponame]]
if {$top ne {.}} {
wm geometry $top "+[winfo rootx .]+[winfo rooty .]"
wm transient $top .
# Copyright (C) 2006, 2007 Shawn Pearce
proc load_last_commit {} {
- global HEAD PARENT MERGE_HEAD commit_type ui_comm
+ global HEAD PARENT MERGE_HEAD commit_type ui_comm commit_author
global repo_config
if {[llength $PARENT] == 0} {
lappend parents [string range $line 7 end]
} elseif {[string match {encoding *} $line]} {
set enc [string tolower [string range $line 9 end]]
+ } elseif {[regexp "author (.*)\\s<(.*)>\\s(\\d.*$)" $line all name email time]} {
+ set commit_author [list name $name email $email date $time]
}
}
set msg [read $fd]
}
proc create_new_commit {} {
- global commit_type ui_comm
+ global commit_type ui_comm commit_author
set commit_type normal
+ unset -nocomplain commit_author
$ui_comm delete 0.0 end
$ui_comm edit reset
$ui_comm edit modified false
}
proc commit_committree {fd_wt curHEAD msg_p} {
- global HEAD PARENT MERGE_HEAD commit_type
+ global HEAD PARENT MERGE_HEAD commit_type commit_author
global current_branch
global ui_comm selected_commit_type
global file_states selected_paths rescan_active
global repo_config
+ global env
gets $fd_wt tree_id
if {[catch {close $fd_wt} err]} {
}
}
+ if {[info exists commit_author]} {
+ set old_author [commit_author_ident $commit_author]
+ }
# -- Create the commit.
#
set cmd [list commit-tree $tree_id]
error_popup [strcat [mc "commit-tree failed:"] "\n\n$err"]
ui_status [mc "Commit failed."]
unlock_index
+ unset -nocomplain commit_author
+ commit_author_reset $old_author
return
}
+ if {[info exists commit_author]} {
+ unset -nocomplain commit_author
+ commit_author_reset $old_author
+ }
# -- Update the HEAD ref.
#
}
fconfigure $fd_ph -blocking 0
}
+
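+# Set GIT_AUTHOR_NAME/EMAIL/DATE from the given {name email date}
+# details and return the previous GIT_AUTHOR_* values so that
+# commit_author_reset can restore (or clear) them afterwards.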
+proc commit_author_ident {details} {
+ global env
+ array set author $details
+ set old [array get env GIT_AUTHOR_*]
+ set env(GIT_AUTHOR_NAME) $author(name)
+ set env(GIT_AUTHOR_EMAIL) $author(email)
+ set env(GIT_AUTHOR_DATE) $author(date)
+ return $old
+}
+proc commit_author_reset {details} {
+ global env
+ unset env(GIT_AUTHOR_NAME) env(GIT_AUTHOR_EMAIL) env(GIT_AUTHOR_DATE)
+ if {$details ne {}} {
+ array set env $details
+ }
+}
set value "$value[lindex $s 2]"
}
- ${NS}::label $w.stat.l_$name -text "$label:" -anchor w
+ ${NS}::label $w.stat.l_$name -text [mc "%s:" $label] -anchor w
${NS}::label $w.stat.v_$name -text $value -anchor w
grid $w.stat.l_$name $w.stat.v_$name -sticky we -padx {0 5}
}
bind $w <Visibility> "grab $w; focus $w.buttons.close"
bind $w <Key-Escape> [list destroy $w]
bind $w <Key-Return> [list destroy $w]
- wm title $w [append "[appname] ([reponame]): " [mc "Database Statistics"]]
+ wm title $w [mc "%s (%s): Database Statistics" [appname] [reponame]]
wm deiconify $w
tkwait window $w
}
} else {
start_show_diff $cont_info
}
+
+ global current_diff_path selected_paths
+ set selected_paths($current_diff_path) 1
}
proc show_unmerged_diff {cont_info} {
}
$ui_diff conf -state normal
if {$type eq {submodule}} {
- $ui_diff insert end [append \
- "* " \
- [mc "Git Repository (subproject)"] \
- "\n"] d_info
+ $ui_diff insert end \
+ "* [mc "Git Repository (subproject)"]\n" \
+ d_info
} elseif {![catch {set type [exec file $path]}]} {
set n [string length $path]
if {[string equal -length $n $path $type]} {
puts -nonewline $p $current_diff_header
puts -nonewline $p [$ui_diff get $s_lno $e_lno]
close $p} err]} {
- error_popup [append $failed_msg "\n\n$err"]
+ error_popup "$failed_msg\n\n$err"
unlock_index
return
}
puts -nonewline $p $current_diff_header
puts -nonewline $p $wholepatch
close $p} err]} {
- error_popup [append $failed_msg "\n\n$err"]
+ error_popup "$failed_msg\n\n$err"
}
unlock_index
set cmd [list tk_messageBox \
-icon error \
-type ok \
- -title [append "$title: " [mc "error"]] \
+ -title [mc "%s: error" $title] \
-message $msg]
if {[winfo ismapped [_error_parent]]} {
lappend cmd -parent [_error_parent]
set cmd [list tk_messageBox \
-icon warning \
-type ok \
- -title [append "$title: " [mc "warning"]] \
+ -title [mc "%s: warning" $title] \
-message $msg]
if {[winfo ismapped [_error_parent]]} {
lappend cmd -parent [_error_parent]
wm withdraw $w
${NS}::frame $w.m
- ${NS}::label $w.m.l1 -text "$hook hook failed:" \
+ ${NS}::label $w.m.l1 -text [mc "%s hook failed:" $hook] \
-anchor w \
-justify left \
-font font_uibold
bind $w <Visibility> "grab $w; focus $w"
bind $w <Key-Return> "destroy $w"
- wm title $w [strcat "[appname] ([reponame]): " [mc "error"]]
+ wm title $w [mc "%s (%s): error" [appname] [reponame]]
wm deiconify $w
tkwait window $w
}
set info [lindex $s 2]
if {$info eq {}} continue
- puts -nonewline $fd "$info\t[encoding convertto $path]\0"
+ puts -nonewline $fd "$info\t[encoding convertto utf-8 $path]\0"
display_file $path $new
}
?M {set new M_}
?? {continue}
}
- puts -nonewline $fd "[encoding convertto $path]\0"
+ puts -nonewline $fd "[encoding convertto utf-8 $path]\0"
display_file $path $new
}
?M -
?T -
?D {
- puts -nonewline $fd "[encoding convertto $path]\0"
+ puts -nonewline $fd "[encoding convertto utf-8 $path]\0"
display_file $path ?_
}
}
if {[array size selected_paths] > 0} {
unstage_helper \
- {Unstaging selected files from commit} \
+ [mc "Unstaging selected files from commit"] \
[array names selected_paths]
} elseif {$current_diff_path ne {}} {
unstage_helper \
if {[array size selected_paths] > 0} {
add_helper \
- {Adding selected files} \
+ [mc "Adding selected files"] \
[array names selected_paths]
} elseif {$current_diff_path ne {}} {
add_helper \
set paths [concat $paths $untracked_paths]
}
}
- add_helper {Adding all changed files} $paths
+ add_helper [mc "Adding all changed files"] $paths
}
proc revert_helper {txt paths} {
close $fh
set _last_merged_branch $branch
- set cmd [list git merge --strategy=recursive FETCH_HEAD]
+ if {[git-version >= "2.5.0"]} {
+ set cmd [list git merge --strategy=recursive FETCH_HEAD]
+ } else {
+ set cmd [list git]
+ lappend cmd merge
+ lappend cmd --strategy=recursive
+ lappend cmd [git fmt-merge-msg <[gitdir FETCH_HEAD]]
+ lappend cmd HEAD
+ lappend cmd $name
+ }
ui_status [mc "Merging %s and %s..." $current_branch $stitle]
set cons [console::new [mc "Merge"] "merge $stitle"]
}
make_dialog top w
- wm title $top [append "[appname] ([reponame]): " [mc "Merge"]]
+ wm title $top [mc "%s (%s): Merge" [appname] [reponame]]
if {$top ne {.}} {
wm geometry $top "+[winfo rootx .]+[winfo rooty .]"
}
i-* {
regexp -- {-(\d+)\.\.(\d+)$} $type _junk min max
${NS}::frame $w.$f.$optid
- ${NS}::label $w.$f.$optid.l -text "$text:"
+ ${NS}::label $w.$f.$optid.l -text [mc "%s:" $text]
pack $w.$f.$optid.l -side left -anchor w -fill x
tspinbox $w.$f.$optid.v \
-textvariable ${f}_config_new($name) \
c -
t {
${NS}::frame $w.$f.$optid
- ${NS}::label $w.$f.$optid.l -text "$text:"
+ ${NS}::label $w.$f.$optid.l -text [mc "%s:" $text]
${NS}::entry $w.$f.$optid.v \
-width 20 \
-textvariable ${f}_config_new($name)
s {
set opts [eval [lindex $option 3]]
${NS}::frame $w.$f.$optid
- ${NS}::label $w.$f.$optid.l -text "$text:"
+ ${NS}::label $w.$f.$optid.l -text [mc "%s:" $text]
if {$use_ttk} {
ttk::combobox $w.$f.$optid.v \
-textvariable ${f}_config_new($name) \
[font configure $font -size]
${NS}::frame $w.global.$name
- ${NS}::label $w.global.$name.l -text "$text:"
+ ${NS}::label $w.global.$name.l -text [mc "%s:" $text]
${NS}::button $w.global.$name.b \
-text [mc "Change Font"] \
-command [list \
if {$have_remote > 1} {
make_sure_remote_submenues_exist $remote_m
if {[$fetch_m type end] eq "command" \
- && [$fetch_m entrycget end -label] ne "All"} {
+ && [$fetch_m entrycget end -label] ne [mc "All"]} {
$fetch_m insert end separator
$fetch_m insert end command \
- -label "All" \
+ -label [mc "All"] \
-command fetch_from_all
$prune_m insert end separator
$prune_m insert end command \
- -label "All" \
+ -label [mc "All"] \
-command prune_from_all
}
} else {
if {[winfo exists $fetch_m]} {
if {[$fetch_m type end] eq "command" \
- && [$fetch_m entrycget end -label] eq "All"} {
+ && [$fetch_m entrycget end -label] eq [mc "All"]} {
delete_from_menu $fetch_m end
delete_from_menu $fetch_m end
make_dialog top w
wm withdraw $top
- wm title $top [append "[appname] ([reponame]): " [mc "Add Remote"]]
+ wm title $top [mc "%s (%s): Add Remote" [appname] [reponame]]
if {$top ne {.}} {
wm geometry $top "+[winfo rootx .]+[winfo rooty .]"
}
global all_remotes M1B use_ttk NS
make_dialog top w
- wm title $top [append "[appname] ([reponame]): " [mc "Delete Branch Remotely"]]
+ wm title $top [mc "%s (%s): Delete Branch Remotely" [appname] [reponame]]
if {$top ne {.}} {
wm geometry $top "+[winfo rootx .]+[winfo rooty .]"
}
global _gitworktree
set fn [tk_getSaveFile \
-parent . \
- -title [append "[appname] ([reponame]): " [mc "Create Desktop Icon"]] \
+ -title [mc "%s (%s): Create Desktop Icon" [appname] [reponame]] \
-initialfile "Git [reponame].lnk"]
if {$fn != {}} {
if {[file extension $fn] ne {.lnk}} {
set fn ${fn}.lnk
}
+ # Use git-gui.exe if available (i.e. git-for-windows)
+ set cmdLine [auto_execok git-gui.exe]
+ if {$cmdLine eq {}} {
+ set cmdLine [list [info nameofexecutable] \
+ [file normalize $::argv0]]
+ }
if {[catch {
- win32_create_lnk $fn [list \
- [info nameofexecutable] \
- [file normalize $::argv0] \
- ] \
+ win32_create_lnk $fn $cmdLine \
[file normalize $_gitworktree]
} err]} {
error_popup [strcat [mc "Cannot write shortcut:"] "\n\n$err"]
}
set fn [tk_getSaveFile \
-parent . \
- -title [append "[appname] ([reponame]): " [mc "Create Desktop Icon"]] \
+ -title [mc "%s (%s): Create Desktop Icon" [appname] [reponame]] \
-initialdir $desktop \
-initialfile "Git [reponame].lnk"]
if {$fn != {}} {
set fn [tk_getSaveFile \
-parent . \
- -title [append "[appname] ([reponame]): " [mc "Create Desktop Icon"]] \
+ -title [mc "%s (%s): Create Desktop Icon" [appname] [reponame]] \
-initialdir [file join $env(HOME) Desktop] \
-initialfile "Git [reponame].app"]
if {$fn != {}} {
}
}
+# Define a style used for the surround of text widgets.
+proc InitEntryFrame {} {
+ ttk::style theme settings default {
+ ttk::style layout EntryFrame {
+ EntryFrame.field -sticky nswe -border 0 -children {
+ EntryFrame.fill -sticky nswe -children {
+ EntryFrame.padding -sticky nswe
+ }
+ }
+ }
+ ttk::style configure EntryFrame -padding 1 -relief sunken
+ ttk::style map EntryFrame -background {}
+ }
+ ttk::style theme settings classic {
+ ttk::style configure EntryFrame -padding 2 -relief sunken
+ ttk::style map EntryFrame -background {}
+ }
+ ttk::style theme settings alt {
+ ttk::style configure EntryFrame -padding 2
+ ttk::style map EntryFrame -background {}
+ }
+ ttk::style theme settings clam {
+ ttk::style configure EntryFrame -padding 2
+ ttk::style map EntryFrame -background {}
+ }
+
+ # Ignore errors for missing native themes
+ catch {
+ ttk::style theme settings winnative {
+ ttk::style configure EntryFrame -padding 2
+ }
+ ttk::style theme settings xpnative {
+ ttk::style configure EntryFrame -padding 1
+ ttk::style element create EntryFrame.field vsapi \
+ EDIT 1 {disabled 4 focus 3 active 2 {} 1} -padding 1
+ }
+ ttk::style theme settings vista {
+ ttk::style configure EntryFrame -padding 2
+ ttk::style element create EntryFrame.field vsapi \
+ EDIT 6 {disabled 4 focus 3 active 2 {} 1} -padding 2
+ }
+ }
+
+ bind EntryFrame <Enter> {%W instate !disabled {%W state active}}
+ bind EntryFrame <Leave> {%W state !active}
+ bind EntryFrame <<ThemeChanged>> {
+ set pad [ttk::style lookup EntryFrame -padding]
+ %W configure -padding [expr {$pad eq {} ? 1 : $pad}]
+ }
+}
+
proc gold_frame {w args} {
global use_ttk
if {$use_ttk} {
# place a themed frame over the surface.
proc Dialog {w args} {
eval [linsert $args 0 toplevel $w -class Dialog]
- catch {wm attributes $w -type dialog}
+ catch {wm attributes $w -type dialog}
pave_toplevel $w
return $w
}
}
}
+# Create a text widget with any theme specific properties.
+proc ttext {w args} {
+ global use_ttk
+ if {$use_ttk} {
+ switch -- [ttk::style theme use] {
+ "vista" - "xpnative" {
+ lappend args -highlightthickness 0 -borderwidth 0
+ }
+ }
+ }
+ set w [eval [linsert $args 0 text $w]]
+ if {$use_ttk} {
+ if {[winfo class [winfo parent $w]] eq "EntryFrame"} {
+ bind $w <FocusIn> {[winfo parent %W] state focus}
+ bind $w <FocusOut> {[winfo parent %W] state !focus}
+ }
+ }
+ return $w
+}
+
+# themed frame suitable for surrounding a text field.
+proc textframe {w args} {
+ global use_ttk
+ if {$use_ttk} {
+ if {[catch {ttk::style layout EntryFrame}]} {
+ InitEntryFrame
+ }
+ eval [linsert $args 0 ttk::frame $w -class EntryFrame -style EntryFrame]
+ } else {
+ eval [linsert $args 0 frame $w]
+ }
+ return $w
+}
+
proc tentry {w args} {
global use_ttk
if {$use_ttk} {
proc tools_exec {fullname} {
global repo_config env current_diff_path
global current_branch is_detached
+ global selected_paths
if {[is_config_true "guitool.$fullname.needsfile"]} {
if {$current_diff_path eq {}} {
set env(GIT_GUITOOL) $fullname
set env(FILENAME) $current_diff_path
+ set env(FILENAMES) [join [array names selected_paths] \n]
if {$is_detached} {
set env(CUR_BRANCH) ""
} else {
unset env(GIT_GUITOOL)
unset env(FILENAME)
+ unset env(FILENAMES)
unset env(CUR_BRANCH)
catch { unset env(ARGS) }
catch { unset env(REVISION) }
global repo_config use_ttk NS
make_dialog top w
- wm title $top [append "[appname] ([reponame]): " [mc "Add Tool"]]
+ wm title $top [mc "%s (%s): Add Tool" [appname] [reponame]]
if {$top ne {.}} {
wm geometry $top "+[winfo rootx .]+[winfo rooty .]"
wm transient $top .
load_config 1
make_dialog top w
- wm title $top [append "[appname] ([reponame]): " [mc "Remove Tool"]]
+ wm title $top [mc "%s (%s): Remove Tool" [appname] [reponame]]
if {$top ne {.}} {
wm geometry $top "+[winfo rootx .]+[winfo rooty .]"
wm transient $top .
}
make_dialog top w -autodelete 0
- wm title $top [append "[appname] ([reponame]): " $title]
+ wm title $top "[mc "%s (%s):" [appname] [reponame]] $title"
if {$top ne {.}} {
wm geometry $top "+[winfo rootx .]+[winfo rooty .]"
wm transient $top .
bind $w <Visibility> "grab $w; focus $w.buttons.create"
bind $w <Key-Escape> "destroy $w"
bind $w <Key-Return> [list start_push_anywhere_action $w]
- wm title $w [append "[appname] ([reponame]): " [mc "Push"]]
+ wm title $w [mc "%s (%s): Push" [appname] [reponame]]
wm deiconify $w
tkwait window $w
}
# Bulgarian translation of git-gui po-file.
-# Copyright (C) 2012, 2013, 2014, 2015 Alexander Shopov <ash@kambanaria.org>.
+# Copyright (C) 2012, 2013, 2014, 2015, 2016 Alexander Shopov <ash@kambanaria.org>.
# This file is distributed under the same license as the git package.
-# Alexander Shopov <ash@kambanaria.org>, 2012, 2013, 2014, 2015.
+# Alexander Shopov <ash@kambanaria.org>, 2012, 2013, 2014, 2015, 2016.
#
#
msgid ""
msgstr ""
"Project-Id-Version: git-gui master\n"
"Report-Msgid-Bugs-To: \n"
-"POT-Creation-Date: 2015-04-07 07:37+0300\n"
-"PO-Revision-Date: 2015-04-07 07:46+0300\n"
+"POT-Creation-Date: 2016-10-13 15:16+0300\n"
+"PO-Revision-Date: 2016-10-13 15:16+0300\n"
"Last-Translator: Alexander Shopov <ash@kambanaria.org>\n"
"Language-Team: Bulgarian <dict@fsa-bg.org>\n"
"Language: bg\n"
"Content-Transfer-Encoding: 8bit\n"
"Plural-Forms: nplurals=2; plural=(n != 1);\n"
-#: git-gui.sh:861
+#: git-gui.sh:865
#, tcl-format
msgid "Invalid font specified in %s:"
msgstr "Указан е неправилен шрифт в „%s“:"
-#: git-gui.sh:915
+#: git-gui.sh:919
msgid "Main Font"
msgstr "Основен шрифт"
-#: git-gui.sh:916
+#: git-gui.sh:920
msgid "Diff/Console Font"
msgstr "Шрифт за разликите/конзолата"
-#: git-gui.sh:931 git-gui.sh:945 git-gui.sh:958 git-gui.sh:1048
-#: git-gui.sh:1067 git-gui.sh:3125
+#: git-gui.sh:935 git-gui.sh:949 git-gui.sh:962 git-gui.sh:1052 git-gui.sh:1071
+#: git-gui.sh:3147
msgid "git-gui: fatal error"
msgstr "git-gui: фатална грешка"
-#: git-gui.sh:932
+#: git-gui.sh:936
msgid "Cannot find git in PATH."
msgstr "Командата git липсва в пътя (PATH)."
-#: git-gui.sh:959
+#: git-gui.sh:963
msgid "Cannot parse Git version string:"
msgstr "Низът с версията на Git не може да бъде интерпретиран:"
-#: git-gui.sh:984
+#: git-gui.sh:988
#, tcl-format
msgid ""
"Git version cannot be determined.\n"
"\n"
"Да се приеме ли, че „%s“ е версия „1.5.0“?\n"
-#: git-gui.sh:1281
+#: git-gui.sh:1285
msgid "Git directory not found:"
msgstr "Директорията на Git не е открита:"
-#: git-gui.sh:1315
+#: git-gui.sh:1319
msgid "Cannot move to top of working directory:"
msgstr "Не може да се премине към родителската директория."
-#: git-gui.sh:1323
+#: git-gui.sh:1327
msgid "Cannot use bare repository:"
msgstr "Голо хранилище не може да се използва:"
-#: git-gui.sh:1331
+#: git-gui.sh:1335
msgid "No working directory"
msgstr "Работната директория липсва"
-#: git-gui.sh:1503 lib/checkout_op.tcl:306
+#: git-gui.sh:1507 lib/checkout_op.tcl:306
msgid "Refreshing file status..."
msgstr "Обновяване на състоянието на файла…"
-#: git-gui.sh:1563
+#: git-gui.sh:1567
msgid "Scanning for modified files ..."
msgstr "Проверка за променени файлове…"
-#: git-gui.sh:1639
+#: git-gui.sh:1645
msgid "Calling prepare-commit-msg hook..."
msgstr "Куката „prepare-commit-msg“ се изпълнява в момента…"
-#: git-gui.sh:1656
+#: git-gui.sh:1662
msgid "Commit declined by prepare-commit-msg hook."
msgstr "Подаването е отхвърлено от куката „prepare-commit-msg“."
-#: git-gui.sh:1814 lib/browser.tcl:252
+#: git-gui.sh:1820 lib/browser.tcl:252
msgid "Ready."
msgstr "Готово."
-#: git-gui.sh:1978
+#: git-gui.sh:1984
#, tcl-format
msgid ""
"Display limit (gui.maxfilesdisplayed = %s) reached, not showing all %s files."
msgstr ""
-"Достигнат е максималният брой файлове за показване (gui.maxfilesdisplayed = "
-"%s). Файловете са общо %s."
+"Достигнат е максималният размер на списъка за извеждане(gui."
+"maxfilesdisplayed = %s), съответно не са показани всички %s файла."
-#: git-gui.sh:2101
+#: git-gui.sh:2107
msgid "Unmodified"
msgstr "Непроменен"
-#: git-gui.sh:2103
+#: git-gui.sh:2109
msgid "Modified, not staged"
msgstr "Променен, но не е в индекса"
-#: git-gui.sh:2104 git-gui.sh:2116
+#: git-gui.sh:2110 git-gui.sh:2122
msgid "Staged for commit"
msgstr "В индекса за подаване"
-#: git-gui.sh:2105 git-gui.sh:2117
+#: git-gui.sh:2111 git-gui.sh:2123
msgid "Portions staged for commit"
msgstr "Части са в индекса за подаване"
-#: git-gui.sh:2106 git-gui.sh:2118
+#: git-gui.sh:2112 git-gui.sh:2124
msgid "Staged for commit, missing"
msgstr "В индекса за подаване, но липсва"
-#: git-gui.sh:2108
+#: git-gui.sh:2114
msgid "File type changed, not staged"
msgstr "Видът на файла е сменен, но не е в индекса"
-#: git-gui.sh:2109 git-gui.sh:2110
+#: git-gui.sh:2115 git-gui.sh:2116
msgid "File type changed, old type staged for commit"
-msgstr "Ð\92идÑ\8aÑ\82 на Ñ\84айла е Ñ\81менен, но в индекÑ\81а е вÑ\81е оÑ\89е Ñ\81Ñ\82аÑ\80иÑ\8fÑ\82"
+msgstr "Ð\92идÑ\8aÑ\82 на Ñ\84айла е Ñ\81менен, но новиÑ\8fÑ\82 вид не е в индекÑ\81а"
-#: git-gui.sh:2111
+#: git-gui.sh:2117
msgid "File type changed, staged"
msgstr "Видът на файла е сменен и е в индекса"
-#: git-gui.sh:2112
+#: git-gui.sh:2118
msgid "File type change staged, modification not staged"
-msgstr "Видът на файла е сменен, но промяната не е в индекса"
+msgstr "Видът на файла е сменен в индекса, но не и съдържанието"
-#: git-gui.sh:2113
+#: git-gui.sh:2119
msgid "File type change staged, file missing"
-msgstr "Видът на файла е сменен, файлът липсва"
+msgstr "Видът на файла е сменен в индекса, но файлът липсва"
-#: git-gui.sh:2115
+#: git-gui.sh:2121
msgid "Untracked, not staged"
msgstr "Неследен"
-#: git-gui.sh:2120
+#: git-gui.sh:2126
msgid "Missing"
msgstr "Липсващ"
-#: git-gui.sh:2121
+#: git-gui.sh:2127
msgid "Staged for removal"
msgstr "В индекса за изтриване"
-#: git-gui.sh:2122
+#: git-gui.sh:2128
msgid "Staged for removal, still present"
msgstr "В индекса за изтриване, но още го има"
-#: git-gui.sh:2124 git-gui.sh:2125 git-gui.sh:2126 git-gui.sh:2127
-#: git-gui.sh:2128 git-gui.sh:2129
+#: git-gui.sh:2130 git-gui.sh:2131 git-gui.sh:2132 git-gui.sh:2133
+#: git-gui.sh:2134 git-gui.sh:2135
msgid "Requires merge resolution"
msgstr "Изисква коригиране при сливане"
-#: git-gui.sh:2164
+#: git-gui.sh:2170
msgid "Starting gitk... please wait..."
msgstr "Стартиране на „gitk“…, изчакайте…"
-#: git-gui.sh:2176
+#: git-gui.sh:2182
msgid "Couldn't find gitk in PATH"
msgstr "Командата „gitk“ липсва в пътищата, определени от променливата PATH."
-#: git-gui.sh:2235
+#: git-gui.sh:2241
msgid "Couldn't find git gui in PATH"
msgstr ""
"Командата „git gui“ липсва в пътищата, определени от променливата PATH."
-#: git-gui.sh:2654 lib/choose_repository.tcl:41
+#: git-gui.sh:2676 lib/choose_repository.tcl:41
msgid "Repository"
msgstr "Хранилище"
-#: git-gui.sh:2655
+#: git-gui.sh:2677
msgid "Edit"
msgstr "Редактиране"
-#: git-gui.sh:2657 lib/choose_rev.tcl:567
+#: git-gui.sh:2679 lib/choose_rev.tcl:567
msgid "Branch"
msgstr "Клон"
-#: git-gui.sh:2660 lib/choose_rev.tcl:554
+#: git-gui.sh:2682 lib/choose_rev.tcl:554
msgid "Commit@@noun"
msgstr "Подаване"
-#: git-gui.sh:2663 lib/merge.tcl:123 lib/merge.tcl:152 lib/merge.tcl:170
+#: git-gui.sh:2685 lib/merge.tcl:127 lib/merge.tcl:174
msgid "Merge"
msgstr "Сливане"
-#: git-gui.sh:2664 lib/choose_rev.tcl:563
+#: git-gui.sh:2686 lib/choose_rev.tcl:563
msgid "Remote"
msgstr "Отдалечено хранилище"
-#: git-gui.sh:2667
+#: git-gui.sh:2689
msgid "Tools"
msgstr "Команди"
-#: git-gui.sh:2676
+#: git-gui.sh:2698
msgid "Explore Working Copy"
msgstr "Разглеждане на работното копие"
-#: git-gui.sh:2682
+#: git-gui.sh:2704
msgid "Git Bash"
msgstr "Bash за Git"
-#: git-gui.sh:2692
+#: git-gui.sh:2714
msgid "Browse Current Branch's Files"
msgstr "Разглеждане на файловете в текущия клон"
-#: git-gui.sh:2696
+#: git-gui.sh:2718
msgid "Browse Branch Files..."
msgstr "Разглеждане на текущия клон…"
-#: git-gui.sh:2701
+#: git-gui.sh:2723
msgid "Visualize Current Branch's History"
msgstr "Визуализация на историята на текущия клон"
-#: git-gui.sh:2705
+#: git-gui.sh:2727
msgid "Visualize All Branch History"
msgstr "Визуализация на историята на всички клонове"
-#: git-gui.sh:2712
+#: git-gui.sh:2734
#, tcl-format
msgid "Browse %s's Files"
-msgstr "Разглеждане на файловете в %s"
+msgstr "Разглеждане на файловете в „%s“"
-#: git-gui.sh:2714
+#: git-gui.sh:2736
#, tcl-format
msgid "Visualize %s's History"
-msgstr "Визуализация на историята на %s"
+msgstr "Визуализация на историята на „%s“"
-#: git-gui.sh:2719 lib/database.tcl:40 lib/database.tcl:66
+#: git-gui.sh:2741 lib/database.tcl:40
msgid "Database Statistics"
msgstr "Статистика на базата от данни"
-#: git-gui.sh:2722 lib/database.tcl:33
+#: git-gui.sh:2744 lib/database.tcl:33
msgid "Compress Database"
msgstr "Компресиране на базата от данни"
-#: git-gui.sh:2725
+#: git-gui.sh:2747
msgid "Verify Database"
msgstr "Проверка на базата от данни"
-#: git-gui.sh:2732 git-gui.sh:2736 git-gui.sh:2740 lib/shortcut.tcl:8
-#: lib/shortcut.tcl:40 lib/shortcut.tcl:72
+#: git-gui.sh:2754 git-gui.sh:2758 git-gui.sh:2762
msgid "Create Desktop Icon"
msgstr "Добавяне на икона на работния плот"
-#: git-gui.sh:2748 lib/choose_repository.tcl:193 lib/choose_repository.tcl:201
+#: git-gui.sh:2770 lib/choose_repository.tcl:193 lib/choose_repository.tcl:201
msgid "Quit"
msgstr "Спиране на програмата"
-#: git-gui.sh:2756
+#: git-gui.sh:2778
msgid "Undo"
msgstr "Отмяна"
-#: git-gui.sh:2759
+#: git-gui.sh:2781
msgid "Redo"
msgstr "Повторение"
-#: git-gui.sh:2763 git-gui.sh:3368
+#: git-gui.sh:2785 git-gui.sh:3399
msgid "Cut"
msgstr "Отрязване"
-#: git-gui.sh:2766 git-gui.sh:3371 git-gui.sh:3445 git-gui.sh:3530
+#: git-gui.sh:2788 git-gui.sh:3402 git-gui.sh:3476 git-gui.sh:3562
#: lib/console.tcl:69
msgid "Copy"
msgstr "Копиране"
-#: git-gui.sh:2769 git-gui.sh:3374
+#: git-gui.sh:2791 git-gui.sh:3405
msgid "Paste"
msgstr "Поставяне"
-#: git-gui.sh:2772 git-gui.sh:3377 lib/remote_branch_delete.tcl:39
-#: lib/branch_delete.tcl:28
+#: git-gui.sh:2794 git-gui.sh:3408 lib/branch_delete.tcl:28
+#: lib/remote_branch_delete.tcl:39
msgid "Delete"
msgstr "Изтриване"
-#: git-gui.sh:2776 git-gui.sh:3381 git-gui.sh:3534 lib/console.tcl:71
+#: git-gui.sh:2798 git-gui.sh:3412 git-gui.sh:3566 lib/console.tcl:71
msgid "Select All"
msgstr "Избиране на всичко"
-#: git-gui.sh:2785
+#: git-gui.sh:2807
msgid "Create..."
msgstr "Създаване…"
-#: git-gui.sh:2791
+#: git-gui.sh:2813
msgid "Checkout..."
msgstr "Изтегляне…"
-#: git-gui.sh:2797
+#: git-gui.sh:2819
msgid "Rename..."
msgstr "Преименуване…"
-#: git-gui.sh:2802
+#: git-gui.sh:2824
msgid "Delete..."
msgstr "Изтриване…"
-#: git-gui.sh:2807
+#: git-gui.sh:2829
msgid "Reset..."
msgstr "Отмяна на промените…"
-#: git-gui.sh:2817
+#: git-gui.sh:2839
msgid "Done"
msgstr "Готово"
-#: git-gui.sh:2819
+#: git-gui.sh:2841
msgid "Commit@@verb"
msgstr "Подаване"
-#: git-gui.sh:2828 git-gui.sh:3309
+#: git-gui.sh:2850 git-gui.sh:3335
msgid "New Commit"
msgstr "Ново подаване"
-#: git-gui.sh:2836 git-gui.sh:3316
+#: git-gui.sh:2858 git-gui.sh:3342
msgid "Amend Last Commit"
msgstr "Поправяне на последното подаване"
-#: git-gui.sh:2846 git-gui.sh:3270 lib/remote_branch_delete.tcl:101
+#: git-gui.sh:2868 git-gui.sh:3296 lib/remote_branch_delete.tcl:101
msgid "Rescan"
msgstr "Обновяване"
-#: git-gui.sh:2852
+#: git-gui.sh:2874
msgid "Stage To Commit"
msgstr "Към индекса за подаване"
-#: git-gui.sh:2858
+#: git-gui.sh:2880
msgid "Stage Changed Files To Commit"
msgstr "Всички променени файлове към индекса за подаване"
-#: git-gui.sh:2864
+#: git-gui.sh:2886
msgid "Unstage From Commit"
msgstr "Изваждане от индекса за подаване"
-#: git-gui.sh:2870 lib/index.tcl:442
+#: git-gui.sh:2892 lib/index.tcl:442
msgid "Revert Changes"
msgstr "Връщане на оригинала"
-#: git-gui.sh:2878 git-gui.sh:3581 git-gui.sh:3612
+#: git-gui.sh:2900 git-gui.sh:3613 git-gui.sh:3644
msgid "Show Less Context"
msgstr "По-малко контекст"
-#: git-gui.sh:2882 git-gui.sh:3585 git-gui.sh:3616
+#: git-gui.sh:2904 git-gui.sh:3617 git-gui.sh:3648
msgid "Show More Context"
msgstr "Повече контекст"
-#: git-gui.sh:2889 git-gui.sh:3283 git-gui.sh:3392
+#: git-gui.sh:2911 git-gui.sh:3309 git-gui.sh:3423
msgid "Sign Off"
msgstr "Подписване"
-#: git-gui.sh:2905
+#: git-gui.sh:2927
msgid "Local Merge..."
msgstr "Локално сливане…"
-#: git-gui.sh:2910
+#: git-gui.sh:2932
msgid "Abort Merge..."
msgstr "Преустановяване на сливане…"
-#: git-gui.sh:2922 git-gui.sh:2950
+#: git-gui.sh:2944 git-gui.sh:2972
msgid "Add..."
msgstr "Добавяне…"
-#: git-gui.sh:2926
+#: git-gui.sh:2948
msgid "Push..."
-msgstr "Избутване…"
+msgstr "Изтласкване…"
-#: git-gui.sh:2930
+#: git-gui.sh:2952
msgid "Delete Branch..."
msgstr "Изтриване на клон…"
-#: git-gui.sh:2940 git-gui.sh:3563
+#: git-gui.sh:2962 git-gui.sh:3595
msgid "Options..."
msgstr "Опции…"
-#: git-gui.sh:2951
+#: git-gui.sh:2973
msgid "Remove..."
msgstr "Премахване…"
-#: git-gui.sh:2960 lib/choose_repository.tcl:55
+#: git-gui.sh:2982 lib/choose_repository.tcl:55
msgid "Help"
msgstr "Помощ"
-#: git-gui.sh:2964 git-gui.sh:2968 lib/choose_repository.tcl:49
-#: lib/choose_repository.tcl:58 lib/about.tcl:14
+#: git-gui.sh:2986 git-gui.sh:2990 lib/about.tcl:14
+#: lib/choose_repository.tcl:49 lib/choose_repository.tcl:58
#, tcl-format
msgid "About %s"
msgstr "Относно %s"
-#: git-gui.sh:2992
+#: git-gui.sh:3014
msgid "Online Documentation"
msgstr "Документация в Интернет"
-#: git-gui.sh:2995 lib/choose_repository.tcl:52 lib/choose_repository.tcl:61
+#: git-gui.sh:3017 lib/choose_repository.tcl:52 lib/choose_repository.tcl:61
msgid "Show SSH Key"
msgstr "Показване на ключа за SSH"
-#: git-gui.sh:3014 git-gui.sh:3146
+#: git-gui.sh:3032 git-gui.sh:3164
+msgid "usage:"
+msgstr "употреба:"
+
+#: git-gui.sh:3036 git-gui.sh:3168
msgid "Usage"
msgstr "Употреба"
-#: git-gui.sh:3095 lib/blame.tcl:573
+#: git-gui.sh:3117 lib/blame.tcl:573
msgid "Error"
msgstr "Грешка"
-#: git-gui.sh:3126
+#: git-gui.sh:3148
#, tcl-format
msgid "fatal: cannot stat path %s: No such file or directory"
msgstr ""
"ФАТАЛНА ГРЕШКА: пътят %s не може да бъде открит: такъв файл или директория "
"няма"
-#: git-gui.sh:3159
+#: git-gui.sh:3181
msgid "Current Branch:"
msgstr "Текущ клон:"
-#: git-gui.sh:3185
-msgid "Staged Changes (Will Commit)"
-msgstr "Промени в индекса (за подаване)"
-
-#: git-gui.sh:3205
+#: git-gui.sh:3206
msgid "Unstaged Changes"
msgstr "Промени извън индекса"
-#: git-gui.sh:3276
+#: git-gui.sh:3228
+msgid "Staged Changes (Will Commit)"
+msgstr "Промени в индекса (за подаване)"
+
+#: git-gui.sh:3302
msgid "Stage Changed"
msgstr "Индексът е променен"
-#: git-gui.sh:3295 lib/transport.tcl:137 lib/transport.tcl:229
+#: git-gui.sh:3321 lib/transport.tcl:137
msgid "Push"
msgstr "Изтласкване"
-#: git-gui.sh:3330
+#: git-gui.sh:3356
msgid "Initial Commit Message:"
msgstr "Първоначално съобщение при подаване:"
-#: git-gui.sh:3331
+#: git-gui.sh:3357
msgid "Amended Commit Message:"
msgstr "Поправено съобщение при подаване:"
-#: git-gui.sh:3332
+#: git-gui.sh:3358
msgid "Amended Initial Commit Message:"
msgstr "Поправено първоначално съобщение при подаване:"
-#: git-gui.sh:3333
+#: git-gui.sh:3359
msgid "Amended Merge Commit Message:"
msgstr "Поправено съобщение при подаване със сливане:"
-#: git-gui.sh:3334
+#: git-gui.sh:3360
msgid "Merge Commit Message:"
msgstr "Съобщение при подаване със сливане:"
-#: git-gui.sh:3335
+#: git-gui.sh:3361
msgid "Commit Message:"
msgstr "Съобщение при подаване:"
-#: git-gui.sh:3384 git-gui.sh:3538 lib/console.tcl:73
+#: git-gui.sh:3415 git-gui.sh:3570 lib/console.tcl:73
msgid "Copy All"
msgstr "Копиране на всичко"
-#: git-gui.sh:3408 lib/blame.tcl:105
+#: git-gui.sh:3439 lib/blame.tcl:105
msgid "File:"
msgstr "Файл:"
-#: git-gui.sh:3526
+#: git-gui.sh:3558
msgid "Refresh"
msgstr "Обновяване"
-#: git-gui.sh:3547
+#: git-gui.sh:3579
msgid "Decrease Font Size"
msgstr "По-едър шрифт"
-#: git-gui.sh:3551
+#: git-gui.sh:3583
msgid "Increase Font Size"
msgstr "По-дребен шрифт"
-#: git-gui.sh:3559 lib/blame.tcl:294
+#: git-gui.sh:3591 lib/blame.tcl:294
msgid "Encoding"
msgstr "Кодиране"
-#: git-gui.sh:3570
+#: git-gui.sh:3602
msgid "Apply/Reverse Hunk"
msgstr "Прилагане/връщане на парче"
-#: git-gui.sh:3575
+#: git-gui.sh:3607
msgid "Apply/Reverse Line"
msgstr "Прилагане/връщане на ред"
-#: git-gui.sh:3594
+#: git-gui.sh:3626
msgid "Run Merge Tool"
msgstr "Изпълнение на програмата за сливане"
-#: git-gui.sh:3599
+#: git-gui.sh:3631
msgid "Use Remote Version"
msgstr "Версия от отдалеченото хранилище"
-#: git-gui.sh:3603
+#: git-gui.sh:3635
msgid "Use Local Version"
msgstr "Локална версия"
-#: git-gui.sh:3607
+#: git-gui.sh:3639
msgid "Revert To Base"
msgstr "Връщане към родителската версия"
-#: git-gui.sh:3625
+#: git-gui.sh:3657
msgid "Visualize These Changes In The Submodule"
msgstr "Визуализиране на промените в подмодула"
-#: git-gui.sh:3629
+#: git-gui.sh:3661
msgid "Visualize Current Branch History In The Submodule"
msgstr "Визуализация на историята на текущия клон в историята за подмодула"
-#: git-gui.sh:3633
+#: git-gui.sh:3665
msgid "Visualize All Branch History In The Submodule"
msgstr "Визуализация на историята на всички клони в историята за подмодула"
-#: git-gui.sh:3638
+#: git-gui.sh:3670
msgid "Start git gui In The Submodule"
msgstr "Стартиране на „git gui“ за подмодула"
-#: git-gui.sh:3673
+#: git-gui.sh:3705
msgid "Unstage Hunk From Commit"
msgstr "Изваждане на парчето от подаването"
-#: git-gui.sh:3675
+#: git-gui.sh:3707
msgid "Unstage Lines From Commit"
msgstr "Изваждане на редовете от подаването"
-#: git-gui.sh:3677
+#: git-gui.sh:3709
msgid "Unstage Line From Commit"
msgstr "Изваждане на реда от подаването"
-#: git-gui.sh:3680
+#: git-gui.sh:3712
msgid "Stage Hunk For Commit"
msgstr "Добавяне на парчето за подаване"
-#: git-gui.sh:3682
+#: git-gui.sh:3714
msgid "Stage Lines For Commit"
msgstr "Добавяне на редовете за подаване"
-#: git-gui.sh:3684
+#: git-gui.sh:3716
msgid "Stage Line For Commit"
msgstr "Добавяне на реда за подаване"
-#: git-gui.sh:3709
+#: git-gui.sh:3741
msgid "Initializing..."
msgstr "Инициализиране…"
-#: git-gui.sh:3852
+#: git-gui.sh:3886
#, tcl-format
msgid ""
"Possible environment issues exist.\n"
"от %s:\n"
"\n"
-#: git-gui.sh:3881
+#: git-gui.sh:3915
msgid ""
"\n"
"This is due to a known issue with the\n"
"Това е познат проблем и се дължи на\n"
"версията на Tcl включена в Cygwin."
-#: git-gui.sh:3886
+#: git-gui.sh:3920
#, tcl-format
msgid ""
"\n"
"е да поставите настройките „user.name“ и\n"
"„user.email“ в личния си файл „~/.gitconfig“.\n"
-#: lib/spellcheck.tcl:57
-msgid "Unsupported spell checker"
-msgstr "Тази програма за проверка на правописа не се поддържа"
-
-#: lib/spellcheck.tcl:65
-msgid "Spell checking is unavailable"
-msgstr "Липсва програма за проверка на правописа"
-
-#: lib/spellcheck.tcl:68
-msgid "Invalid spell checking configuration"
-msgstr "Неправилни настройки на проверката на правописа"
+#: lib/about.tcl:26
+msgid "git-gui - a graphical user interface for Git."
+msgstr "git-gui — графичен интерфейс за Git."
-#: lib/spellcheck.tcl:70
+#: lib/blame.tcl:73
#, tcl-format
-msgid "Reverting dictionary to %s."
-msgstr "Ползване на речник за език „%s“."
-
-#: lib/spellcheck.tcl:73
-msgid "Spell checker silently failed on startup"
-msgstr "Програмата за правопис даже не стартира успешно."
-
-#: lib/spellcheck.tcl:80
-msgid "Unrecognized spell checker"
-msgstr "Непозната програма за проверка на правописа"
-
-#: lib/spellcheck.tcl:186
-msgid "No Suggestions"
-msgstr "Няма предложения"
-
-#: lib/spellcheck.tcl:388
-msgid "Unexpected EOF from spell checker"
-msgstr "Неочакван край на файл от програмата за проверка на правописа"
-
-#: lib/spellcheck.tcl:392
-msgid "Spell Checker Failed"
-msgstr "Грешка в програмата за проверка на правописа"
-
-#: lib/remote_add.tcl:20
-msgid "Add Remote"
-msgstr "Добавяне на отдалечено хранилище"
-
-#: lib/remote_add.tcl:25
-msgid "Add New Remote"
-msgstr "Добавяне на отдалечено хранилище"
-
-#: lib/remote_add.tcl:30 lib/tools_dlg.tcl:37
-msgid "Add"
-msgstr "Добавяне"
-
-#: lib/remote_add.tcl:34 lib/browser.tcl:292 lib/branch_checkout.tcl:30
-#: lib/transport.tcl:141 lib/branch_rename.tcl:32 lib/choose_font.tcl:45
-#: lib/option.tcl:127 lib/tools_dlg.tcl:41 lib/tools_dlg.tcl:202
-#: lib/tools_dlg.tcl:345 lib/remote_branch_delete.tcl:43
-#: lib/checkout_op.tcl:579 lib/branch_create.tcl:37 lib/branch_delete.tcl:34
-#: lib/merge.tcl:174
-msgid "Cancel"
-msgstr "Отказване"
-
-#: lib/remote_add.tcl:39
-msgid "Remote Details"
-msgstr "Данни за отдалеченото хранилище"
-
-#: lib/remote_add.tcl:41 lib/tools_dlg.tcl:51 lib/branch_create.tcl:44
-msgid "Name:"
-msgstr "Име:"
+msgid "%s (%s): File Viewer"
+msgstr "%s (%s): Преглед на файлове"
-#: lib/remote_add.tcl:50
-msgid "Location:"
-msgstr "Ð\9cеÑ\81Ñ\82оположение:"
+#: lib/blame.tcl:79
+msgid "Commit:"
+msgstr "Ð\9fодаване:"
-#: lib/remote_add.tcl:60
-msgid "Further Action"
-msgstr "СледваÑ\89о дейÑ\81Ñ\82вие"
+#: lib/blame.tcl:280
+msgid "Copy Commit"
+msgstr "Ð\9aопиÑ\80ане на подаване"
-#: lib/remote_add.tcl:63
-msgid "Fetch Immediately"
-msgstr "Ð\9dезабавно доÑ\81Ñ\82авÑ\8fне"
+#: lib/blame.tcl:284
+msgid "Find Text..."
+msgstr "ТÑ\8aÑ\80Ñ\81ене на Ñ\82екÑ\81Ñ\82â\80¦"
-#: lib/remote_add.tcl:69
-msgid "Initialize Remote Repository and Push"
-msgstr "Ð\98ниÑ\86иализиÑ\80ане на оÑ\82далеÑ\87еноÑ\82о Ñ\85Ñ\80анилиÑ\89е и изÑ\82лаÑ\81кване на пÑ\80омениÑ\82е"
+#: lib/blame.tcl:288
+msgid "Goto Line..."
+msgstr "Ð\9aÑ\8aм Ñ\80едâ\80¦"
-#: lib/remote_add.tcl:75
-msgid "Do Nothing Else Now"
-msgstr "Ð\94а не Ñ\81е пÑ\80ави ниÑ\89о"
+#: lib/blame.tcl:297
+msgid "Do Full Copy Detection"
+msgstr "Ð\9fÑ\8aлно Ñ\82Ñ\8aÑ\80Ñ\81ене на копиÑ\80ане"
-#: lib/remote_add.tcl:100
-msgid "Please supply a remote name."
-msgstr "Ð\97адайÑ\82е име за оÑ\82далеÑ\87еноÑ\82о Ñ\85Ñ\80анилиÑ\89е."
+#: lib/blame.tcl:301
+msgid "Show History Context"
+msgstr "Ð\9fоказване на конÑ\82екÑ\81Ñ\82а оÑ\82 иÑ\81Ñ\82оÑ\80иÑ\8fÑ\82а"
-#: lib/remote_add.tcl:113
-#, tcl-format
-msgid "'%s' is not an acceptable remote name."
-msgstr "Отдалечено хранилище не може да се казва „%s“."
+#: lib/blame.tcl:304
+msgid "Blame Parent Commit"
+msgstr "Анотиране на родителското подаване"
-#: lib/remote_add.tcl:124
+#: lib/blame.tcl:466
#, tcl-format
-msgid "Failed to add remote '%s' of location '%s'."
-msgstr "Ð\9dеÑ\83Ñ\81пеÑ\88но добавÑ\8fне на оÑ\82далеÑ\87еноÑ\82о Ñ\85Ñ\80анилиÑ\89е â\80\9e%sâ\80\9c оÑ\82 адÑ\80еÑ\81 â\80\9e%sâ\80\9c."
+msgid "Reading %s..."
+msgstr "ЧеÑ\82е Ñ\81е â\80\9e%sâ\80\9câ\80¦"
-#: lib/remote_add.tcl:132 lib/transport.tcl:6
-#, tcl-format
-msgid "fetch %s"
-msgstr "доставяне на „%s“"
+#: lib/blame.tcl:594
+msgid "Loading copy/move tracking annotations..."
+msgstr "Зареждане на анотациите за проследяване на копирането/преместването…"
-#: lib/remote_add.tcl:133
-#, tcl-format
-msgid "Fetching the %s"
-msgstr "Доставяне на „%s“"
+#: lib/blame.tcl:614
+msgid "lines annotated"
+msgstr "реда анотирани"
-#: lib/remote_add.tcl:156
-#, tcl-format
-msgid "Do not know how to initialize repository at location '%s'."
-msgstr "Хранилището с местоположение „%s“ не може да бъде инициализирано."
+#: lib/blame.tcl:806
+msgid "Loading original location annotations..."
+msgstr "Зареждане на анотациите за първоначалното местоположение…"
-#: lib/remote_add.tcl:162 lib/transport.tcl:54 lib/transport.tcl:92
-#: lib/transport.tcl:110
-#, tcl-format
-msgid "push %s"
-msgstr "изтласкване на „%s“"
+#: lib/blame.tcl:809
+msgid "Annotation complete."
+msgstr "Анотирането завърши."
-#: lib/remote_add.tcl:163
-#, tcl-format
-msgid "Setting up the %s (at %s)"
-msgstr "Добавяне на хранилище „%s“ (с адрес „%s“)"
+#: lib/blame.tcl:839
+msgid "Busy"
+msgstr "Операцията не е завършила"
-#: lib/browser.tcl:17
-msgid "Starting..."
-msgstr "СÑ\82аÑ\80Ñ\82иÑ\80анеâ\80¦"
+#: lib/blame.tcl:840
+msgid "Annotation process is already running."
+msgstr "Ð\92 моменÑ\82а Ñ\82еÑ\87е пÑ\80оÑ\86еÑ\81 на аноÑ\82иÑ\80ане."
-#: lib/browser.tcl:27
-msgid "File Browser"
-msgstr "Файлов бÑ\80аÑ\83зÑ\8aÑ\80"
+#: lib/blame.tcl:879
+msgid "Running thorough copy detection..."
+msgstr "Ð\98зпÑ\8aлнÑ\8fва Ñ\81е Ñ\86Ñ\8fлоÑ\81Ñ\82ен пÑ\80оÑ\86еÑ\81 на оÑ\82кÑ\80иване на копиÑ\80анеâ\80¦"
-#: lib/browser.tcl:132 lib/browser.tcl:149
-#, tcl-format
-msgid "Loading %s..."
-msgstr "Зареждане на „%s“…"
+#: lib/blame.tcl:947
+msgid "Loading annotation..."
+msgstr "Зареждане на анотации…"
-#: lib/browser.tcl:193
-msgid "[Up To Parent]"
-msgstr "[Към родителя]"
+#: lib/blame.tcl:1000
+msgid "Author:"
+msgstr "Автор:"
-#: lib/browser.tcl:275 lib/browser.tcl:282
-msgid "Browse Branch Files"
-msgstr "Разглеждане на Ñ\84айловеÑ\82е в клона"
+#: lib/blame.tcl:1004
+msgid "Committer:"
+msgstr "Ð\9fодал:"
-#: lib/browser.tcl:288 lib/choose_repository.tcl:422
-#: lib/choose_repository.tcl:509 lib/choose_repository.tcl:518
-#: lib/choose_repository.tcl:1074
-msgid "Browse"
-msgstr "Разглеждане"
+#: lib/blame.tcl:1009
+msgid "Original File:"
+msgstr "Първоначален файл:"
-#: lib/browser.tcl:297 lib/branch_checkout.tcl:35 lib/tools_dlg.tcl:321
-msgid "Revision"
-msgstr "Ð\92еÑ\80Ñ\81иÑ\8f"
+#: lib/blame.tcl:1057
+msgid "Cannot find HEAD commit:"
+msgstr "Ð\9fодаванеÑ\82о за вÑ\80Ñ\8aÑ\85 â\80\9eHEADâ\80\9c не може да Ñ\81е оÑ\82кÑ\80ие:"
-#: lib/tools.tcl:75
-#, tcl-format
-msgid "Running %s requires a selected file."
-msgstr "За изпълнението на „%s“ трябва да изберете файл."
+#: lib/blame.tcl:1112
+msgid "Cannot find parent commit:"
+msgstr "Родителското подаване не може да бъде открито"
-#: lib/tools.tcl:91
-#, tcl-format
-msgid "Are you sure you want to run %1$s on file \"%2$s\"?"
-msgstr "Сигурни ли сте, че искате да изпълните „%1$s“ върху файла „%2$s“?"
+#: lib/blame.tcl:1127
+msgid "Unable to display parent"
+msgstr "Родителят не може да бъде показан"
-#: lib/tools.tcl:95
-#, tcl-format
-msgid "Are you sure you want to run %s?"
-msgstr "Сигурни ли сте, че искате да изпълните „%s“?"
+#: lib/blame.tcl:1128 lib/diff.tcl:358
+msgid "Error loading diff:"
+msgstr "Грешка при зареждане на разлика:"
-#: lib/tools.tcl:116
-#, tcl-format
-msgid "Tool: %s"
-msgstr "Команда: %s"
+#: lib/blame.tcl:1269
+msgid "Originally By:"
+msgstr "Първоначално от:"
-#: lib/tools.tcl:117
-#, tcl-format
-msgid "Running: %s"
-msgstr "Изпълнение: %s"
+#: lib/blame.tcl:1275
+msgid "In File:"
+msgstr "Във файл:"
-#: lib/tools.tcl:155
-#, tcl-format
-msgid "Tool completed successfully: %s"
-msgstr "Командата завърши успешно: %s"
+#: lib/blame.tcl:1280
+msgid "Copied Or Moved Here By:"
+msgstr "Копирано или преместено тук от:"
-#: lib/tools.tcl:157
+#: lib/branch_checkout.tcl:16
#, tcl-format
-msgid "Tool failed: %s"
-msgstr "Командата върна грешка: %s"
+msgid "%s (%s): Checkout Branch"
+msgstr "%s (%s): Клон за изтегляне"
-#: lib/branch_checkout.tcl:16 lib/branch_checkout.tcl:21
+#: lib/branch_checkout.tcl:21
msgid "Checkout Branch"
msgstr "Клон за изтегляне"
msgid "Checkout"
msgstr "Изтегляне"
-#: lib/branch_checkout.tcl:39 lib/option.tcl:310 lib/branch_create.tcl:69
+#: lib/branch_checkout.tcl:30 lib/branch_create.tcl:37 lib/branch_delete.tcl:34
+#: lib/branch_rename.tcl:32 lib/browser.tcl:292 lib/checkout_op.tcl:579
+#: lib/choose_font.tcl:45 lib/merge.tcl:178 lib/option.tcl:127
+#: lib/remote_add.tcl:34 lib/remote_branch_delete.tcl:43 lib/tools_dlg.tcl:41
+#: lib/tools_dlg.tcl:202 lib/tools_dlg.tcl:345 lib/transport.tcl:141
+msgid "Cancel"
+msgstr "Отказване"
+
+#: lib/branch_checkout.tcl:35 lib/browser.tcl:297 lib/tools_dlg.tcl:321
+msgid "Revision"
+msgstr "Версия"
+
+#: lib/branch_checkout.tcl:39 lib/branch_create.tcl:69 lib/option.tcl:310
msgid "Options"
msgstr "Опции"
msgid "Detach From Local Branch"
msgstr "Изтриване от локалния клон"
-#: lib/transport.tcl:7
+#: lib/branch_create.tcl:23
#, tcl-format
-msgid "Fetching new changes from %s"
-msgstr "Доставяне на промените от „%s“"
+msgid "%s (%s): Create Branch"
+msgstr "%s (%s): Създаване на клон"
-#: lib/transport.tcl:18
-#, tcl-format
-msgid "remote prune %s"
-msgstr "окастряне на следящите клони към „%s“"
-
-#: lib/transport.tcl:19
-#, tcl-format
-msgid "Pruning tracking branches deleted from %s"
-msgstr "Окастряне на следящите клони на изтритите клони от „%s“"
-
-#: lib/transport.tcl:25
-msgid "fetch all remotes"
-msgstr "доставяне на всички отдалечени хранилища"
-
-#: lib/transport.tcl:26
-msgid "Fetching new changes from all remotes"
-msgstr "Доставяне на новите промени от всички отдалечени хранилища"
-
-#: lib/transport.tcl:40
-msgid "remote prune all remotes"
-msgstr "окастряне на всички следящи клони"
+#: lib/branch_create.tcl:28
+msgid "Create New Branch"
+msgstr "Създаване на нов клон"
-#: lib/transport.tcl:41
-msgid "Pruning tracking branches deleted from all remotes"
-msgstr ""
-"Окастряне на всички клони, които следят изтрити клони от отдалечени хранилища"
+#: lib/branch_create.tcl:33 lib/choose_repository.tcl:407
+msgid "Create"
+msgstr "Създаване"
-#: lib/transport.tcl:55
-#, tcl-format
-msgid "Pushing changes to %s"
-msgstr "Изтласкване на промените към „%s“"
+#: lib/branch_create.tcl:42
+msgid "Branch Name"
+msgstr "Име на клона"
-#: lib/transport.tcl:93
-#, tcl-format
-msgid "Mirroring to %s"
-msgstr "Изтласкване на всичко към „%s“"
+#: lib/branch_create.tcl:44 lib/remote_add.tcl:41 lib/tools_dlg.tcl:51
+msgid "Name:"
+msgstr "Име:"
-#: lib/transport.tcl:111
-#, tcl-format
-msgid "Pushing %s %s to %s"
-msgstr "Изтласкване на %s „%s“ към „%s“"
+#: lib/branch_create.tcl:57
+msgid "Match Tracking Branch Name"
+msgstr "Съвпадане по името на следения клон"
-#: lib/transport.tcl:132
-msgid "Push Branches"
-msgstr "Ð\9aлони за изÑ\82лаÑ\81кване"
+#: lib/branch_create.tcl:66
+msgid "Starting Revision"
+msgstr "Ð\9dаÑ\87ална веÑ\80Ñ\81иÑ\8f"
-#: lib/transport.tcl:147
-msgid "Source Branches"
-msgstr "Ð\9aлони-изÑ\82оÑ\87ниÑ\86и"
+#: lib/branch_create.tcl:72
+msgid "Update Existing Branch:"
+msgstr "Ð\9eбновÑ\8fване на Ñ\81Ñ\8aÑ\89еÑ\81Ñ\82вÑ\83ваÑ\89 клон:"
-#: lib/transport.tcl:162
-msgid "Destination Repository"
-msgstr "Целево Ñ\85Ñ\80анилиÑ\89е"
+#: lib/branch_create.tcl:75
+msgid "No"
+msgstr "Ð\9dе"
-#: lib/transport.tcl:165 lib/remote_branch_delete.tcl:51
-msgid "Remote:"
-msgstr "Ð\9eÑ\82далеÑ\87ено Ñ\85Ñ\80анилиÑ\89е:"
+#: lib/branch_create.tcl:80
+msgid "Fast Forward Only"
+msgstr "Само Ñ\82Ñ\80ивиално пÑ\80евÑ\8aÑ\80Ñ\82аÑ\89о Ñ\81ливане"
-#: lib/transport.tcl:187 lib/remote_branch_delete.tcl:72
-msgid "Arbitrary Location:"
-msgstr "Ð\9fÑ\80оизволно меÑ\81Ñ\82оположение:"
+#: lib/branch_create.tcl:85 lib/checkout_op.tcl:571
+msgid "Reset"
+msgstr "Ð\9eÑ\82наÑ\87ало"
-#: lib/transport.tcl:205
-msgid "Transfer Options"
-msgstr "Ð\9dаÑ\81Ñ\82Ñ\80ойки пÑ\80и пÑ\80енаÑ\81Ñ\8fнеÑ\82о"
+#: lib/branch_create.tcl:97
+msgid "Checkout After Creation"
+msgstr "Ð\9fÑ\80еминаване кÑ\8aм клона Ñ\81лед Ñ\81Ñ\8aздаванеÑ\82о мÑ\83"
-#: lib/transport.tcl:207
-msgid "Force overwrite existing branch (may discard changes)"
-msgstr ""
-"Изрично презаписване на съществуващ клон (някои промени може да бъдат "
-"загубени)"
+#: lib/branch_create.tcl:132
+msgid "Please select a tracking branch."
+msgstr "Изберете клон за следени."
-#: lib/transport.tcl:211
-msgid "Use thin pack (for slow network connections)"
-msgstr "Максимална компресия (за бавни мрежови връзки)"
+#: lib/branch_create.tcl:141
+#, tcl-format
+msgid "Tracking branch %s is not a branch in the remote repository."
+msgstr "Следящият клон — „%s“, не съществува в отдалеченото хранилище."
-#: lib/transport.tcl:215
-msgid "Include tags"
-msgstr "Ð\92клÑ\8eÑ\87ване на еÑ\82икеÑ\82иÑ\82е"
+#: lib/branch_create.tcl:154 lib/branch_rename.tcl:92
+msgid "Please supply a branch name."
+msgstr "Ð\94айÑ\82е име на клона."
-#: lib/status_bar.tcl:87
+#: lib/branch_create.tcl:165 lib/branch_rename.tcl:112
#, tcl-format
-msgid "%s ... %*i of %*i %s (%3i%%)"
-msgstr "%s… %*i от общо %*i %s (%3i%%)"
+msgid "'%s' is not an acceptable branch name."
+msgstr "„%s“ не може да се използва за име на клон."
-#: lib/remote.tcl:200
-msgid "Push to"
-msgstr "Изтласкване към"
+#: lib/branch_delete.tcl:16
+#, tcl-format
+msgid "%s (%s): Delete Branch"
+msgstr "%s (%s): Изтриване на клон"
-#: lib/remote.tcl:218
-msgid "Remove Remote"
-msgstr "Ð\9fÑ\80емаÑ\85ване на оÑ\82далеÑ\87ено Ñ\85Ñ\80анилиÑ\89е"
+#: lib/branch_delete.tcl:21
+msgid "Delete Local Branch"
+msgstr "Ð\98зÑ\82Ñ\80иване на локален клон"
-#: lib/remote.tcl:223
-msgid "Prune from"
-msgstr "Ð\9eкаÑ\81Ñ\82Ñ\80Ñ\8fне оÑ\82"
+#: lib/branch_delete.tcl:39
+msgid "Local Branches"
+msgstr "Ð\9bокални клони"
-#: lib/remote.tcl:228
-msgid "Fetch from"
-msgstr "Ð\94оÑ\81Ñ\82авÑ\8fне оÑ\82"
+#: lib/branch_delete.tcl:51
+msgid "Delete Only If Merged Into"
+msgstr "Ð\98зÑ\82Ñ\80иване, Ñ\81амо ако пÑ\80омениÑ\82е Ñ\81а Ñ\81леÑ\82и и дÑ\80Ñ\83гаде"
-#: lib/sshkey.tcl:31
-msgid "No keys found."
-msgstr "Ð\9dе Ñ\81а оÑ\82кÑ\80иÑ\82и клÑ\8eÑ\87ове."
+#: lib/branch_delete.tcl:53 lib/remote_branch_delete.tcl:120
+msgid "Always (Do not perform merge checks)"
+msgstr "Ð\92инаги (без пÑ\80овеÑ\80ка за Ñ\81ливане)"
-#: lib/sshkey.tcl:34
+#: lib/branch_delete.tcl:103
#, tcl-format
-msgid "Found a public key in: %s"
-msgstr "Открит е публичен ключ в „%s“"
-
-#: lib/sshkey.tcl:40
-msgid "Generate Key"
-msgstr "Генериране на ключ"
-
-#: lib/sshkey.tcl:55 lib/checkout_op.tcl:146 lib/console.tcl:81
-#: lib/database.tcl:30
-msgid "Close"
-msgstr "Затваряне"
-
-#: lib/sshkey.tcl:58
-msgid "Copy To Clipboard"
-msgstr "Копиране към системния буфер"
+msgid "The following branches are not completely merged into %s:"
+msgstr "Не всички промени в клоните са слети в „%s“:"
-#: lib/sshkey.tcl:72
-msgid "Your OpenSSH Public Key"
-msgstr "Публичният ви ключ за OpenSSH"
+#: lib/branch_delete.tcl:115 lib/remote_branch_delete.tcl:218
+msgid ""
+"Recovering deleted branches is difficult.\n"
+"\n"
+"Delete the selected branches?"
+msgstr ""
+"Възстановяването на изтрити клони може да е трудно.\n"
+"\n"
+"Сигурни ли сте, че искате да триете?"
-#: lib/sshkey.tcl:80
-msgid "Generating..."
-msgstr "Генериране…"
+#: lib/branch_delete.tcl:131
+#, tcl-format
+msgid " - %s:"
+msgstr " — „%s:“"
-#: lib/sshkey.tcl:86
+#: lib/branch_delete.tcl:141
#, tcl-format
msgid ""
-"Could not start ssh-keygen:\n"
-"\n"
+"Failed to delete branches:\n"
"%s"
msgstr ""
-"Програмата „ssh-keygen“ не може да бъде стартирана:\n"
-"\n"
+"Неуспешно триене на клони:\n"
"%s"
-#: lib/sshkey.tcl:113
-msgid "Generation failed."
-msgstr "Неуспешно генериране."
-
-#: lib/sshkey.tcl:120
-msgid "Generation succeeded, but no keys found."
-msgstr "Генерирането завърши успешно, а не са намерени ключове."
-
-#: lib/sshkey.tcl:123
+#: lib/branch_rename.tcl:15
#, tcl-format
-msgid "Your key is in: %s"
-msgstr "Ключът ви е в „%s“"
+msgid "%s (%s): Rename Branch"
+msgstr "%s (%s): Преименуване на клон"
-#: lib/branch_rename.tcl:15 lib/branch_rename.tcl:23
+#: lib/branch_rename.tcl:23
msgid "Rename Branch"
msgstr "Преименуване на клон"
msgid "Please select a branch to rename."
msgstr "Изберете клон за преименуване."
-#: lib/branch_rename.tcl:92 lib/branch_create.tcl:154
-msgid "Please supply a branch name."
-msgstr "Дайте име на клона."
-
#: lib/branch_rename.tcl:102 lib/checkout_op.tcl:202
#, tcl-format
msgid "Branch '%s' already exists."
msgstr "Клонът „%s“ вече съществува."
-#: lib/branch_rename.tcl:112 lib/branch_create.tcl:165
-#, tcl-format
-msgid "'%s' is not an acceptable branch name."
-msgstr "„%s“ не може да се използва за име на клон."
-
#: lib/branch_rename.tcl:123
#, tcl-format
msgid "Failed to rename '%s'."
msgstr "Неуспешно преименуване на „%s“."
-#: lib/choose_font.tcl:41
-msgid "Select"
-msgstr "Избор"
-
-#: lib/choose_font.tcl:55
-msgid "Font Family"
-msgstr "Шрифт"
+#: lib/browser.tcl:17
+msgid "Starting..."
+msgstr "Стартиране…"
-#: lib/choose_font.tcl:76
-msgid "Font Size"
-msgstr "Размер"
+#: lib/browser.tcl:27
+#, tcl-format
+msgid "%s (%s): File Browser"
+msgstr "%s (%s): Файлов браузър"
-#: lib/choose_font.tcl:93
-msgid "Font Example"
-msgstr "Мостра"
+#: lib/browser.tcl:132 lib/browser.tcl:149
+#, tcl-format
+msgid "Loading %s..."
+msgstr "Зареждане на „%s“…"
-#: lib/choose_font.tcl:105
-msgid ""
-"This is example text.\n"
-"If you like this text, it can be your font."
-msgstr ""
-"Това е примерен текст.\n"
-"Ако ви харесва как изглежда, изберете шрифта."
+#: lib/browser.tcl:193
+msgid "[Up To Parent]"
+msgstr "[Към родителя]"
-#: lib/option.tcl:11
+#: lib/browser.tcl:275
#, tcl-format
-msgid "Invalid global encoding '%s'"
-msgstr "Неправилно глобално кодиране „%s“"
+msgid "%s (%s): Browse Branch Files"
+msgstr "%s (%s): Разглеждане на файловете в клона"
-#: lib/option.tcl:19
-#, tcl-format
-msgid "Invalid repo encoding '%s'"
-msgstr "Неправилно кодиране „%s“ на хранилището"
+#: lib/browser.tcl:282
+msgid "Browse Branch Files"
+msgstr "Разглеждане на файловете в клона"
-#: lib/option.tcl:119
-msgid "Restore Defaults"
-msgstr "Стандартни настройки"
+#: lib/browser.tcl:288 lib/choose_repository.tcl:422
+#: lib/choose_repository.tcl:509 lib/choose_repository.tcl:518
+#: lib/choose_repository.tcl:1074
+msgid "Browse"
+msgstr "Разглеждане"
-#: lib/option.tcl:123
-msgid "Save"
-msgstr "Запазване"
+#: lib/checkout_op.tcl:85
+#, tcl-format
+msgid "Fetching %s from %s"
+msgstr "Доставяне на „%s“ от „%s“"
-#: lib/option.tcl:133
+#: lib/checkout_op.tcl:133
#, tcl-format
-msgid "%s Repository"
-msgstr "Хранилище „%s“"
+msgid "fatal: Cannot resolve %s"
+msgstr "фатална грешка: „%s“ не може да се открие"
-#: lib/option.tcl:134
-msgid "Global (All Repositories)"
-msgstr "Глобално (за всички хранилища)"
+#: lib/checkout_op.tcl:146 lib/console.tcl:81 lib/database.tcl:30
+#: lib/sshkey.tcl:55
+msgid "Close"
+msgstr "Затваряне"
-#: lib/option.tcl:140
-msgid "User Name"
-msgstr "Потребителско име"
+#: lib/checkout_op.tcl:175
+#, tcl-format
+msgid "Branch '%s' does not exist."
+msgstr "Клонът „%s“ не съществува."
-#: lib/option.tcl:141
-msgid "Email Address"
-msgstr "Адрес на е-поща"
+#: lib/checkout_op.tcl:194
+#, tcl-format
+msgid "Failed to configure simplified git-pull for '%s'."
+msgstr "Неуспешно настройване на опростен git-pull за „%s“."
-#: lib/option.tcl:143
-msgid "Summarize Merge Commits"
-msgstr "Обобщаване на подаванията при сливане"
+#: lib/checkout_op.tcl:229
+#, tcl-format
+msgid ""
+"Branch '%s' already exists.\n"
+"\n"
+"It cannot fast-forward to %s.\n"
+"A merge is required."
+msgstr ""
+"Клонът „%s“ съществува.\n"
+"\n"
+"Той не може да бъде тривиално слят до „%s“.\n"
+"Необходимо е сливане."
-#: lib/option.tcl:144
-msgid "Merge Verbosity"
-msgstr "Подробности при сливанията"
+#: lib/checkout_op.tcl:243
+#, tcl-format
+msgid "Merge strategy '%s' not supported."
+msgstr "Стратегия за сливане „%s“ не се поддържа."
-#: lib/option.tcl:145
-msgid "Show Diffstat After Merge"
-msgstr "Извеждане на статистика след сливанията"
+#: lib/checkout_op.tcl:262
+#, tcl-format
+msgid "Failed to update '%s'."
+msgstr "Неуспешно обновяване на „%s“."
-#: lib/option.tcl:146
-msgid "Use Merge Tool"
-msgstr "Ð\98зползване на пÑ\80огÑ\80ама за Ñ\81ливане"
+#: lib/checkout_op.tcl:274
+msgid "Staging area (index) is already locked."
+msgstr "Ð\98ндекÑ\81Ñ\8aÑ\82 веÑ\87е е заклÑ\8eÑ\87ен."
-#: lib/option.tcl:148
-msgid "Trust File Modification Timestamps"
-msgstr "Доверие във времето на промяна на файловете"
+#: lib/checkout_op.tcl:289
+msgid ""
+"Last scanned state does not match repository state.\n"
+"\n"
+"Another Git program has modified this repository since the last scan. A "
+"rescan must be performed before the current branch can be changed.\n"
+"\n"
+"The rescan will be automatically started now.\n"
+msgstr ""
+"Състоянието при последната проверка не отговаря на състоянието на "
+"хранилището.\n"
+"\n"
+"Някой друг процес за Git е променил хранилището междувременно. Състоянието "
+"трябва да бъде проверено, преди да се премине към нов клон.\n"
+"\n"
+"Автоматично ще започне нова проверка.\n"
-#: lib/option.tcl:149
-msgid "Prune Tracking Branches During Fetch"
-msgstr "Окастряне на следящите клонове при доставяне"
+#: lib/checkout_op.tcl:345
+#, tcl-format
+msgid "Updating working directory to '%s'..."
+msgstr "Работната директория се привежда към „%s“…"
-#: lib/option.tcl:150
-msgid "Match Tracking Branches"
-msgstr "Напасване на следящите клонове"
+#: lib/checkout_op.tcl:346
+msgid "files checked out"
+msgstr "файла са изтеглени"
-#: lib/option.tcl:151
-msgid "Use Textconv For Diffs and Blames"
+#: lib/checkout_op.tcl:376
+#, tcl-format
+msgid "Aborted checkout of '%s' (file level merging is required)."
msgstr ""
-"Преобразуване на текста с „textconv“ при анотиране и извеждане на разлики"
+"Преустановяване на изтеглянето на „%s“ (необходимо е пофайлово сливане)."
-#: lib/option.tcl:152
-msgid "Blame Copy Only On Changed Files"
-msgstr "Ð\90ноÑ\82иÑ\80ане на копиеÑ\82о Ñ\81амо по пÑ\80оменениÑ\82е Ñ\84айлове"
+#: lib/checkout_op.tcl:377
+msgid "File level merge required."
+msgstr "Ð\9dеобÑ\85одимо е поÑ\84айлово Ñ\81ливане."
-#: lib/option.tcl:153
-msgid "Maximum Length of Recent Repositories List"
-msgstr "Максимална дължина на списъка със скоро ползвани хранилища"
+#: lib/checkout_op.tcl:381
+#, tcl-format
+msgid "Staying on branch '%s'."
+msgstr "Оставане върху клона „%s“."
-#: lib/option.tcl:154
-msgid "Minimum Letters To Blame Copy On"
-msgstr "Минимален брой знаци за анотиране на копието"
+#: lib/checkout_op.tcl:452
+msgid ""
+"You are no longer on a local branch.\n"
+"\n"
+"If you wanted to be on a branch, create one now starting from 'This Detached "
+"Checkout'."
+msgstr ""
+"Вече не сте на локален клон.\n"
+"\n"
+"Ако искате да сте на клон, създайте базиран на „Това несвързано изтегляне“."
-#: lib/option.tcl:155
-msgid "Blame History Context Radius (days)"
-msgstr "Исторически обхват за анотиране в дни"
+#: lib/checkout_op.tcl:503 lib/checkout_op.tcl:507
+#, tcl-format
+msgid "Checked out '%s'."
+msgstr "„%s“ е изтеглен."
-#: lib/option.tcl:156
-msgid "Number of Diff Context Lines"
-msgstr "Брой редове за контекста при извеждане на разликите"
+#: lib/checkout_op.tcl:535
+#, tcl-format
+msgid "Resetting '%s' to '%s' will lose the following commits:"
+msgstr ""
+"Зануляването на „%s“ към „%s“ ще доведе до загубването на следните подавания:"
-#: lib/option.tcl:157
-msgid "Additional Diff Parameters"
-msgstr "Ð\94опÑ\8aлниÑ\82елни аÑ\80гÑ\83менÑ\82и кÑ\8aм â\80\9egit diffâ\80\9c"
+#: lib/checkout_op.tcl:557
+msgid "Recovering lost commits may not be easy."
+msgstr "Ð\92Ñ\8aзÑ\81Ñ\82ановÑ\8fванеÑ\82о на загÑ\83бениÑ\82е подаваниÑ\8f може да е Ñ\82Ñ\80Ñ\83дно."
-#: lib/option.tcl:158
-msgid "Commit Message Text Width"
-msgstr "Широчина на текста на съобщението при подаване"
+#: lib/checkout_op.tcl:562
+#, tcl-format
+msgid "Reset '%s'?"
+msgstr "Зануляване на „%s“?"
-#: lib/option.tcl:159
-msgid "New Branch Name Template"
-msgstr "Шаблон за имеÑ\82о на новиÑ\82е клони"
+#: lib/checkout_op.tcl:567 lib/merge.tcl:170 lib/tools_dlg.tcl:336
+msgid "Visualize"
+msgstr "Ð\92изÑ\83ализаÑ\86иÑ\8f"
-#: lib/option.tcl:160
-msgid "Default File Contents Encoding"
-msgstr "Стандартно кодиране на файловете"
+#: lib/checkout_op.tcl:635
+#, tcl-format
+msgid ""
+"Failed to set current branch.\n"
+"\n"
+"This working directory is only partially switched. We successfully updated "
+"your files, but failed to update an internal Git file.\n"
+"\n"
+"This should not have occurred. %s will now close and give up."
+msgstr ""
+"Неуспешно задаване на текущия клон.\n"
+"\n"
+"Работната директория е само частично обновена: файловете са обновени "
+"успешно, но някой от вътрешните, служебни файлове на Git не е бил.\n"
+"\n"
+"Това състояние е аварийно и не трябва да се случва. Програмата „%s“ ще "
+"преустанови работа."
-#: lib/option.tcl:161
-msgid "Warn before committing to a detached head"
-msgstr "Ð\9fÑ\80едÑ\83пÑ\80еждаване пÑ\80и подаванеÑ\82о пÑ\80и неÑ\81вÑ\8aÑ\80зан вÑ\80Ñ\8aÑ\85"
+#: lib/choose_font.tcl:41
+msgid "Select"
+msgstr "Ð\98збоÑ\80"
-#: lib/option.tcl:162
-msgid "Staging of untracked files"
-msgstr "Ð\92каÑ\80ване на неÑ\81ледени Ñ\84айлове в индекÑ\81а"
+#: lib/choose_font.tcl:55
+msgid "Font Family"
+msgstr "ШÑ\80иÑ\84Ñ\82"
-#: lib/option.tcl:163
-msgid "Show untracked files"
-msgstr "Ð\9fоказване на неÑ\81ледениÑ\82е Ñ\84айлове"
+#: lib/choose_font.tcl:76
+msgid "Font Size"
+msgstr "РазмеÑ\80"
-#: lib/option.tcl:164
-msgid "Tab spacing"
-msgstr "РазмеÑ\80 на Ñ\82абÑ\83лаÑ\86иÑ\8fÑ\82а в инÑ\82еÑ\80вали"
+#: lib/choose_font.tcl:93
+msgid "Font Example"
+msgstr "Ð\9cоÑ\81Ñ\82Ñ\80а"
-#: lib/option.tcl:210
-msgid "Change"
-msgstr "Смяна"
+#: lib/choose_font.tcl:105
+msgid ""
+"This is example text.\n"
+"If you like this text, it can be your font."
+msgstr ""
+"Това е примерен текст.\n"
+"Ако ви харесва как изглежда, изберете шрифта."
+
+#: lib/choose_repository.tcl:33
+msgid "Git Gui"
+msgstr "ГПИ на Git"
+
+#: lib/choose_repository.tcl:92 lib/choose_repository.tcl:412
+msgid "Create New Repository"
+msgstr "Създаване на ново хранилище"
+
+#: lib/choose_repository.tcl:98
+msgid "New..."
+msgstr "Ново…"
+
+#: lib/choose_repository.tcl:105 lib/choose_repository.tcl:496
+msgid "Clone Existing Repository"
+msgstr "Клониране на съществуващо хранилище"
+
+#: lib/choose_repository.tcl:116
+msgid "Clone..."
+msgstr "Клониране…"
+
+#: lib/choose_repository.tcl:123 lib/choose_repository.tcl:1064
+msgid "Open Existing Repository"
+msgstr "Отваряне на съществуващо хранилище"
+
+#: lib/choose_repository.tcl:129
+msgid "Open..."
+msgstr "Отваряне…"
+
+#: lib/choose_repository.tcl:142
+msgid "Recent Repositories"
+msgstr "Скоро ползвани"
+
+#: lib/choose_repository.tcl:148
+msgid "Open Recent Repository:"
+msgstr "Отваряне на хранилище ползвано наскоро:"
+
+#: lib/choose_repository.tcl:316 lib/choose_repository.tcl:323
+#: lib/choose_repository.tcl:330
+#, tcl-format
+msgid "Failed to create repository %s:"
+msgstr "Неуспешно създаване на хранилището „%s“:"
+
+#: lib/choose_repository.tcl:417
+msgid "Directory:"
+msgstr "Директория:"
+
+#: lib/choose_repository.tcl:447 lib/choose_repository.tcl:573
+#: lib/choose_repository.tcl:1098
+msgid "Git Repository"
+msgstr "Хранилище на Git"
+
+#: lib/choose_repository.tcl:472
+#, tcl-format
+msgid "Directory %s already exists."
+msgstr "Вече съществува директория „%s“."
+
+#: lib/choose_repository.tcl:476
+#, tcl-format
+msgid "File %s already exists."
+msgstr "Вече съществува файл „%s“."
+
+#: lib/choose_repository.tcl:491
+msgid "Clone"
+msgstr "Клониране"
+
+#: lib/choose_repository.tcl:504
+msgid "Source Location:"
+msgstr "Адрес на източника:"
+
+#: lib/choose_repository.tcl:513
+msgid "Target Directory:"
+msgstr "Целева директория:"
+
+#: lib/choose_repository.tcl:523
+msgid "Clone Type:"
+msgstr "Вид клониране:"
+
+#: lib/choose_repository.tcl:528
+msgid "Standard (Fast, Semi-Redundant, Hardlinks)"
+msgstr "Стандартно (бързо, частично споделяне на файлове, твърди връзки)"
+
+#: lib/choose_repository.tcl:533
+msgid "Full Copy (Slower, Redundant Backup)"
+msgstr "Пълно (бавно, пълноценно резервно копие)"
+
+#: lib/choose_repository.tcl:538
+msgid "Shared (Fastest, Not Recommended, No Backup)"
+msgstr "Споделено (най-бързо, не се препоръчва, не прави резервно копие)"
+
+#: lib/choose_repository.tcl:545
+msgid "Recursively clone submodules too"
+msgstr "Рекурсивно клониране и на подмодулите"
+
+#: lib/choose_repository.tcl:579 lib/choose_repository.tcl:626
+#: lib/choose_repository.tcl:772 lib/choose_repository.tcl:842
+#: lib/choose_repository.tcl:1104 lib/choose_repository.tcl:1112
+#, tcl-format
+msgid "Not a Git repository: %s"
+msgstr "Това не е хранилище на Git: %s"
+
+#: lib/choose_repository.tcl:615
+msgid "Standard only available for local repository."
+msgstr "Само локални хранилища могат да се клонират стандартно"
+
+#: lib/choose_repository.tcl:619
+msgid "Shared only available for local repository."
+msgstr "Само локални хранилища могат да се клонират споделено"
+
+#: lib/choose_repository.tcl:640
+#, tcl-format
+msgid "Location %s already exists."
+msgstr "Местоположението „%s“ вече съществува."
+
+#: lib/choose_repository.tcl:651
+msgid "Failed to configure origin"
+msgstr "Неуспешно настройване на хранилището-източник"
+
+#: lib/choose_repository.tcl:663
+msgid "Counting objects"
+msgstr "Преброяване на обекти"
+
+#: lib/choose_repository.tcl:664
+msgid "buckets"
+msgstr "клетки"
+
+#: lib/choose_repository.tcl:688
+#, tcl-format
+msgid "Unable to copy objects/info/alternates: %s"
+msgstr "Обектите/информацията/синонимите не могат да бъдат копирани: %s"
+
+#: lib/choose_repository.tcl:724
+#, tcl-format
+msgid "Nothing to clone from %s."
+msgstr "Няма какво да се клонира от „%s“."
+
+#: lib/choose_repository.tcl:726 lib/choose_repository.tcl:940
+#: lib/choose_repository.tcl:952
+msgid "The 'master' branch has not been initialized."
+msgstr "Основният клон — „master“ не е инициализиран."
+
+#: lib/choose_repository.tcl:739
+msgid "Hardlinks are unavailable. Falling back to copying."
+msgstr "Не се поддържат твърди връзки. Преминава се към копиране."
+
+#: lib/choose_repository.tcl:751
+#, tcl-format
+msgid "Cloning from %s"
+msgstr "Клониране на „%s“"
+
+#: lib/choose_repository.tcl:782
+msgid "Copying objects"
+msgstr "Копиране на обекти"
+
+#: lib/choose_repository.tcl:783
+msgid "KiB"
+msgstr "KiB"
+
+#: lib/choose_repository.tcl:807
+#, tcl-format
+msgid "Unable to copy object: %s"
+msgstr "Неуспешно копиране на обект: %s"
+
+#: lib/choose_repository.tcl:817
+msgid "Linking objects"
+msgstr "Създаване на връзки към обектите"
+
+#: lib/choose_repository.tcl:818
+msgid "objects"
+msgstr "обекти"
+
+#: lib/choose_repository.tcl:826
+#, tcl-format
+msgid "Unable to hardlink object: %s"
+msgstr "Неуспешно създаване на твърда връзка към обект: %s"
+
+#: lib/choose_repository.tcl:881
+msgid "Cannot fetch branches and objects. See console output for details."
+msgstr ""
+"Клоните и обектите не могат да бъдат изтеглени. За повече информация "
+"погледнете изхода на конзолата."
+
+#: lib/choose_repository.tcl:892
+msgid "Cannot fetch tags. See console output for details."
+msgstr ""
+"Етикетите не могат да бъдат изтеглени. За повече информация погледнете "
+"изхода на конзолата."
+
+#: lib/choose_repository.tcl:916
+msgid "Cannot determine HEAD. See console output for details."
+msgstr ""
+"Върхът „HEAD“ не може да бъде определен. За повече информация погледнете "
+"изхода на конзолата."
+
+#: lib/choose_repository.tcl:925
+#, tcl-format
+msgid "Unable to cleanup %s"
+msgstr "„%s“ не може да се зачисти"
+
+#: lib/choose_repository.tcl:931
+msgid "Clone failed."
+msgstr "Неуспешно клониране."
+
+#: lib/choose_repository.tcl:938
+msgid "No default branch obtained."
+msgstr "Не е получен клон по подразбиране."
+
+#: lib/choose_repository.tcl:949
+#, tcl-format
+msgid "Cannot resolve %s as a commit."
+msgstr "Няма подаване отговарящо на „%s“."
+
+#: lib/choose_repository.tcl:961
+msgid "Creating working directory"
+msgstr "Създаване на работната директория"
+
+#: lib/choose_repository.tcl:962 lib/index.tcl:70 lib/index.tcl:136
+#: lib/index.tcl:207
+msgid "files"
+msgstr "файлове"
+
+#: lib/choose_repository.tcl:981
+msgid "Cannot clone submodules."
+msgstr "Подмодулите не могат да се клонират."
+
+#: lib/choose_repository.tcl:990
+msgid "Cloning submodules"
+msgstr "Клониране на подмодули"
+
+#: lib/choose_repository.tcl:1015
+msgid "Initial file checkout failed."
+msgstr "Неуспешно първоначално изтегляне."
+
+#: lib/choose_repository.tcl:1059
+msgid "Open"
+msgstr "Отваряне"
+
+#: lib/choose_repository.tcl:1069
+msgid "Repository:"
+msgstr "Хранилище:"
+
+#: lib/choose_repository.tcl:1118
+#, tcl-format
+msgid "Failed to open repository %s:"
+msgstr "Неуспешно отваряне на хранилището „%s“:"
+
+#: lib/choose_rev.tcl:52
+msgid "This Detached Checkout"
+msgstr "Това несвързано изтегляне"
+
+#: lib/choose_rev.tcl:60
+msgid "Revision Expression:"
+msgstr "Израз за версия:"
+
+#: lib/choose_rev.tcl:72
+msgid "Local Branch"
+msgstr "Локален клон"
-#: lib/option.tcl:254
-msgid "Spelling Dictionary:"
-msgstr "Ð\9fÑ\80авопиÑ\81ен Ñ\80еÑ\87ник:"
+#: lib/choose_rev.tcl:77
+msgid "Tracking Branch"
+msgstr "СледÑ\8fÑ\89 клон"
-#: lib/option.tcl:284
-msgid "Change Font"
-msgstr "СмÑ\8fна на Ñ\88Ñ\80иÑ\84Ñ\82а"
+#: lib/choose_rev.tcl:82 lib/choose_rev.tcl:544
+msgid "Tag"
+msgstr "Ð\95Ñ\82икеÑ\82"
-#: lib/option.tcl:288
+#: lib/choose_rev.tcl:321
#, tcl-format
-msgid "Choose %s"
-msgstr "Ð\98збоÑ\80 на â\80\9e%sâ\80\9c"
+msgid "Invalid revision: %s"
+msgstr "Ð\9dепÑ\80авилна веÑ\80Ñ\81иÑ\8f: %s"
-#: lib/option.tcl:294
-msgid "pt."
-msgstr "тчк."
+#: lib/choose_rev.tcl:342
+msgid "No revision selected."
+msgstr "Не е избрана версия."
-#: lib/option.tcl:308
-msgid "Preferences"
-msgstr "Ð\9dаÑ\81Ñ\82Ñ\80ойки"
+#: lib/choose_rev.tcl:350
+msgid "Revision expression is empty."
+msgstr "Ð\98зÑ\80азÑ\8aÑ\82 за веÑ\80Ñ\81иÑ\8f е пÑ\80азен."
-#: lib/option.tcl:345
-msgid "Failed to completely save options:"
-msgstr "Ð\9dеÑ\83Ñ\81пеÑ\88но запазване на наÑ\81Ñ\82Ñ\80ойкиÑ\82е:"
+#: lib/choose_rev.tcl:537
+msgid "Updated"
+msgstr "Ð\9eбновен"
-#: lib/encoding.tcl:443
-msgid "Default"
-msgstr "СÑ\82андаÑ\80Ñ\82ноÑ\82о"
+#: lib/choose_rev.tcl:565
+msgid "URL"
+msgstr "Ð\90дÑ\80еÑ\81"
-#: lib/encoding.tcl:448
-#, tcl-format
-msgid "System (%s)"
-msgstr "Системното (%s)"
+#: lib/commit.tcl:9
+msgid ""
+"There is nothing to amend.\n"
+"\n"
+"You are about to create the initial commit. There is no commit before this "
+"to amend.\n"
+msgstr ""
+"Няма какво да се поправи.\n"
+"\n"
+"Ще създадете първоначалното подаване. Преди него няма други подавания, които "
+"да поправите.\n"
-#: lib/encoding.tcl:459 lib/encoding.tcl:465
-msgid "Other"
-msgstr "Друго"
+#: lib/commit.tcl:18
+msgid ""
+"Cannot amend while merging.\n"
+"\n"
+"You are currently in the middle of a merge that has not been fully "
+"completed. You cannot amend the prior commit unless you first abort the "
+"current merge activity.\n"
+msgstr ""
+"По време на сливане не може да поправяте.\n"
+"\n"
+"В момента все още не сте завършили операция по сливане. Не може да поправите "
+"предишното подаване, освен ако първо не преустановите текущото сливане.\n"
-#: lib/mergetool.tcl:8
-msgid "Force resolution to the base version?"
-msgstr "Ð\94а Ñ\81е използва базоваÑ\82а веÑ\80Ñ\81иÑ\8f"
+#: lib/commit.tcl:48
+msgid "Error loading commit data for amend:"
+msgstr "Ð\93Ñ\80еÑ\88ка пÑ\80и заÑ\80еждане на данниÑ\82е оÑ\82 подаване, коиÑ\82о да Ñ\81е попÑ\80авÑ\8fÑ\82:"
-#: lib/mergetool.tcl:9
-msgid "Force resolution to this branch?"
-msgstr "Ð\94а Ñ\81е използва веÑ\80Ñ\81иÑ\8fÑ\82а оÑ\82 Ñ\82ози клон"
+#: lib/commit.tcl:75
+msgid "Unable to obtain your identity:"
+msgstr "Ð\98денÑ\82иÑ\84икаÑ\86иÑ\8fÑ\82а ви не може да бÑ\8aде опÑ\80еделена:"
-#: lib/mergetool.tcl:10
-msgid "Force resolution to the other branch?"
-msgstr "Ð\94а Ñ\81е използва веÑ\80Ñ\81иÑ\8fÑ\82а оÑ\82 дÑ\80Ñ\83гиÑ\8f клон"
+#: lib/commit.tcl:80
+msgid "Invalid GIT_COMMITTER_IDENT:"
+msgstr "Ð\9dепÑ\80авилно поле â\80\9eGIT_COMMITTER_IDENTâ\80\9c:"
-#: lib/mergetool.tcl:14
+#: lib/commit.tcl:129
#, tcl-format
+msgid "warning: Tcl does not support encoding '%s'."
+msgstr "предупреждение: Tcl не поддържа кодирането „%s“."
+
+#: lib/commit.tcl:149
msgid ""
-"Note that the diff shows only conflicting changes.\n"
+"Last scanned state does not match repository state.\n"
"\n"
-"%s will be overwritten.\n"
+"Another Git program has modified this repository since the last scan. A "
+"rescan must be performed before another commit can be created.\n"
"\n"
-"This operation can be undone only by restarting the merge."
+"The rescan will be automatically started now.\n"
msgstr ""
-"Разликата показва само разликите с конфликт.\n"
+"Състоянието при последната проверка не отговаря на състоянието на "
+"хранилището.\n"
"\n"
-"Файлът „%s“ ще бъде презаписан.\n"
+"Някой друг процес за Git е променил хранилището междувременно. Състоянието "
+"трябва да бъде проверено преди ново подаване.\n"
"\n"
-"Тази опеÑ\80аÑ\86иÑ\8f може да бÑ\8aде оÑ\82менена Ñ\81амо Ñ\87Ñ\80ез запоÑ\87ване на Ñ\81ливанеÑ\82о наново."
+"Ð\90вÑ\82омаÑ\82иÑ\87но Ñ\89е запоÑ\87не нова пÑ\80овеÑ\80ка.\n"
-#: lib/mergetool.tcl:45
+#: lib/commit.tcl:173
#, tcl-format
-msgid "File %s seems to have unresolved conflicts, still stage?"
+msgid ""
+"Unmerged files cannot be committed.\n"
+"\n"
+"File %s has merge conflicts. You must resolve them and stage the file "
+"before committing.\n"
msgstr ""
-"Изглежда, че все още има некоригирани конфликти във файла „%s“. Да се добави "
-"ли файлът към индекса?"
+"Неслетите файлове не могат да бъдат подавани.\n"
+"\n"
+"Във файла „%s“ има конфликти при сливане. За да го подадете, трябва първо да "
+"коригирате конфликтите и да добавите файла към индекса за подаване.\n"
-#: lib/mergetool.tcl:60
+#: lib/commit.tcl:181
#, tcl-format
-msgid "Adding resolution for %s"
-msgstr "Добавяне на корекция на конфликтите в „%s“"
-
-#: lib/mergetool.tcl:141
-msgid "Cannot resolve deletion or link conflicts using a tool"
+msgid ""
+"Unknown file state %s detected.\n"
+"\n"
+"File %s cannot be committed by this program.\n"
msgstr ""
-"Конфликтите при символни връзки или изтриване не могат да бъдат коригирани с "
-"външна програма."
-
-#: lib/mergetool.tcl:146
-msgid "Conflict file does not exist"
-msgstr "Файлът, в който е конфликтът, не съществува"
-
-#: lib/mergetool.tcl:246
-#, tcl-format
-msgid "Not a GUI merge tool: '%s'"
-msgstr "Това не е графична програма за сливане: „%s“"
-
-#: lib/mergetool.tcl:275
-#, tcl-format
-msgid "Unsupported merge tool '%s'"
-msgstr "Неподдържана програма за сливане: „%s“"
-
-#: lib/mergetool.tcl:310
-msgid "Merge tool is already running, terminate it?"
-msgstr "Програмата за сливане вече е стартирана. Да бъде ли изключена?"
+"Непознато състояние на файл „%s“.\n"
+"\n"
+"Файлът „%s“ не може да бъде подаден чрез текущата програма.\n"
-#: lib/mergetool.tcl:330
-#, tcl-format
+#: lib/commit.tcl:189
msgid ""
-"Error retrieving versions:\n"
-"%s"
+"No changes to commit.\n"
+"\n"
+"You must stage at least 1 file before you can commit.\n"
msgstr ""
-"Грешка при изтеглянето на версии:\n"
-"%s"
+"Няма промени за подаване.\n"
+"\n"
+"Трябва да добавите поне един файл към индекса, за да подадете.\n"
-#: lib/mergetool.tcl:350
-#, tcl-format
+#: lib/commit.tcl:204
msgid ""
-"Could not start the merge tool:\n"
+"Please supply a commit message.\n"
"\n"
-"%s"
+"A good commit message has the following format:\n"
+"\n"
+"- First line: Describe in one sentence what you did.\n"
+"- Second line: Blank\n"
+"- Remaining lines: Describe why this change is good.\n"
msgstr ""
-"Ð\9fÑ\80огÑ\80амаÑ\82а за Ñ\81ливане не може да бÑ\8aде Ñ\81Ñ\82аÑ\80Ñ\82иÑ\80ана:\n"
+"Ð\97адайÑ\82е добÑ\80о Ñ\81Ñ\8aобÑ\89ение пÑ\80и подаване.\n"
"\n"
-"%s"
-
-#: lib/mergetool.tcl:354
-msgid "Running merge tool..."
-msgstr "Стартиране на програмата за сливане…"
+"Използвайте следния формат:\n"
+"\n"
+"● Първи ред: описание в едно изречение на промяната.\n"
+"● Втори ред: празен.\n"
+"● Останалите редове: опишете защо се налага тази промяна.\n"
-#: lib/mergetool.tcl:382 lib/mergetool.tcl:390
-msgid "Merge tool failed."
-msgstr "Ð\93Ñ\80еÑ\88ка в пÑ\80огÑ\80амаÑ\82а за Ñ\81ливане."
+#: lib/commit.tcl:235
+msgid "Calling pre-commit hook..."
+msgstr "Ð\98зпÑ\8aлнÑ\8fване на кÑ\83каÑ\82а пÑ\80еди подаванеâ\80¦"
-#: lib/tools_dlg.tcl:22
-msgid "Add Tool"
-msgstr "Ð\94обавÑ\8fне на команда"
+#: lib/commit.tcl:250
+msgid "Commit declined by pre-commit hook."
+msgstr "Ð\9fодаванеÑ\82о е оÑ\82Ñ\85вÑ\8aÑ\80лено оÑ\82 кÑ\83каÑ\82а пÑ\80еди подаване."
-#: lib/tools_dlg.tcl:28
-msgid "Add New Tool Command"
-msgstr "Добавяне на команда"
+#: lib/commit.tcl:269
+msgid ""
+"You are about to commit on a detached head. This is a potentially dangerous "
+"thing to do because if you switch to another branch you will lose your "
+"changes and it can be difficult to retrieve them later from the reflog. You "
+"should probably cancel this commit and create a new branch to continue.\n"
+" \n"
+" Do you really want to proceed with your Commit?"
+msgstr ""
+"Ще подадете към несвързан, отделѐн указател „HEAD“. Това е опасно, защото "
+"при преминаването към клон ще загубите промените си, като единственият начин "
+"да ги върнете ще е чрез журнала на указателите (reflog). Най-вероятно трябва "
+"да не правите това подаване, а да създадете нов клон, преди да продължите.\n"
+" \n"
+"Сигурни ли сте, че искате да извършите текущото подаване?"
-#: lib/tools_dlg.tcl:34
-msgid "Add globally"
-msgstr "Ð\93лобално добавÑ\8fне"
+#: lib/commit.tcl:290
+msgid "Calling commit-msg hook..."
+msgstr "Ð\98зпÑ\8aлнÑ\8fване на кÑ\83каÑ\82а за Ñ\81Ñ\8aобÑ\89ениеÑ\82о пÑ\80и подаванеâ\80¦"
-#: lib/tools_dlg.tcl:46
-msgid "Tool Details"
-msgstr "Подробности за командата"
+#: lib/commit.tcl:305
+msgid "Commit declined by commit-msg hook."
+msgstr "Подаването е отхвърлено от куката за съобщението при подаване."
-#: lib/tools_dlg.tcl:49
-msgid "Use '/' separators to create a submenu tree:"
-msgstr "Ð\97а Ñ\81Ñ\8aздаване на подменÑ\8eÑ\82а използвайÑ\82е знака â\80\9e/â\80\9c за Ñ\80азделиÑ\82ел:"
+#: lib/commit.tcl:318
+msgid "Committing changes..."
+msgstr "Ð\9fодаване на пÑ\80омениÑ\82еâ\80¦"
-#: lib/tools_dlg.tcl:60
-msgid "Command:"
-msgstr "Ð\9aоманда:"
+#: lib/commit.tcl:334
+msgid "write-tree failed:"
+msgstr "неÑ\83Ñ\81пеÑ\88но запазване на дÑ\8aÑ\80воÑ\82о (write-tree):"
-#: lib/tools_dlg.tcl:71
-msgid "Show a dialog before running"
-msgstr "Ð\9fÑ\80еди изпÑ\8aлнение да Ñ\81е извежда диалогов пÑ\80озоÑ\80еÑ\86"
+#: lib/commit.tcl:335 lib/commit.tcl:382 lib/commit.tcl:403
+msgid "Commit failed."
+msgstr "Ð\9dеÑ\83Ñ\81пеÑ\88но подаване."
-#: lib/tools_dlg.tcl:77
-msgid "Ask the user to select a revision (sets $REVISION)"
-msgstr "Потребителят да укаже версия (задаване на променливата $REVISION)"
+#: lib/commit.tcl:352
+#, tcl-format
+msgid "Commit %s appears to be corrupt"
+msgstr "Подаването „%s“ изглежда повредено"
-#: lib/tools_dlg.tcl:82
-msgid "Ask the user for additional arguments (sets $ARGS)"
+#: lib/commit.tcl:357
+msgid ""
+"No changes to commit.\n"
+"\n"
+"No files were modified by this commit and it was not a merge commit.\n"
+"\n"
+"A rescan will be automatically started now.\n"
msgstr ""
-"Потребителят да укаже допълнителни аргументи (задаване на променливата $ARGS)"
+"Няма промени за подаване.\n"
+"\n"
+"В това подаване не са променяни никакви файлове, а и не е подаване със "
+"сливане.\n"
+"\n"
+"Автоматично ще започне нова проверка.\n"
-#: lib/tools_dlg.tcl:89
-msgid "Don't show the command output window"
-msgstr "Ð\91ез показване на пÑ\80озоÑ\80еÑ\86 Ñ\81 изÑ\85ода оÑ\82 командаÑ\82а"
+#: lib/commit.tcl:364
+msgid "No changes to commit."
+msgstr "Ð\9dÑ\8fма пÑ\80омени за подаване."
-#: lib/tools_dlg.tcl:94
-msgid "Run only if a diff is selected ($FILENAME not empty)"
-msgstr ""
-"Стартиране само след избор на разлика (променливата $FILENAME не е празна)"
+#: lib/commit.tcl:381
+msgid "commit-tree failed:"
+msgstr "неуспешно подаване на дървото (commit-tree):"
-#: lib/tools_dlg.tcl:118
-msgid "Please supply a name for the tool."
-msgstr "Ð\97адайÑ\82е име за командаÑ\82а."
+#: lib/commit.tcl:402
+msgid "update-ref failed:"
+msgstr "неÑ\83Ñ\81пеÑ\88но обновÑ\8fване на Ñ\83казаÑ\82елиÑ\82е (update-ref):"
-#: lib/tools_dlg.tcl:126
+#: lib/commit.tcl:495
#, tcl-format
-msgid "Tool '%s' already exists."
-msgstr "Ð\9aомандаÑ\82а â\80\9e%sâ\80\9c веÑ\87е Ñ\81Ñ\8aÑ\89еÑ\81Ñ\82вÑ\83ва."
+msgid "Created commit %s: %s"
+msgstr "УÑ\81пеÑ\88но подаване %s: %s"
-#: lib/tools_dlg.tcl:148
-#, tcl-format
-msgid ""
-"Could not add tool:\n"
-"%s"
-msgstr ""
-"Командата не може да бъде добавена:\n"
-"%s"
+#: lib/console.tcl:59
+msgid "Working... please wait..."
+msgstr "В момента се извършва действие, изчакайте…"
-#: lib/tools_dlg.tcl:187
-msgid "Remove Tool"
-msgstr "Ð\9fÑ\80емаÑ\85ване на команда"
+#: lib/console.tcl:186
+msgid "Success"
+msgstr "УÑ\81пеÑ\85"
-#: lib/tools_dlg.tcl:193
-msgid "Remove Tool Commands"
-msgstr "Ð\9fÑ\80емаÑ\85ване на команди"
+#: lib/console.tcl:200
+msgid "Error: Command Failed"
+msgstr "Ð\93Ñ\80еÑ\88ка: неÑ\83Ñ\81пеÑ\88но изпÑ\8aлнение на команда"
-#: lib/tools_dlg.tcl:198
-msgid "Remove"
-msgstr "Ð\9fÑ\80емаÑ\85ване"
+#: lib/database.tcl:42
+msgid "Number of loose objects"
+msgstr "Ð\91Ñ\80ой непакеÑ\82иÑ\80ани обекÑ\82и"
-#: lib/tools_dlg.tcl:231
-msgid "(Blue denotes repository-local tools)"
-msgstr "(командите към локалното хранилище са обозначени в синьо)"
+#: lib/database.tcl:43
+msgid "Disk space used by loose objects"
+msgstr "Дисково пространство заето от непакетирани обекти"
-#: lib/tools_dlg.tcl:292
-#, tcl-format
-msgid "Run Command: %s"
-msgstr "Изпълнение на командата „%s“"
+#: lib/database.tcl:44
+msgid "Number of packed objects"
+msgstr "Брой пакетирани обекти"
-#: lib/tools_dlg.tcl:306
-msgid "Arguments"
-msgstr "Ð\90Ñ\80гÑ\83менти"
+#: lib/database.tcl:45
+msgid "Number of packs"
+msgstr "Ð\91Ñ\80ой пакети"
-#: lib/tools_dlg.tcl:336 lib/checkout_op.tcl:567 lib/merge.tcl:166
-msgid "Visualize"
-msgstr "Ð\92изÑ\83ализаÑ\86иÑ\8f"
+#: lib/database.tcl:46
+msgid "Disk space used by packed objects"
+msgstr "Ð\94иÑ\81ково пÑ\80оÑ\81Ñ\82Ñ\80анÑ\81Ñ\82во заеÑ\82о оÑ\82 пакеÑ\82иÑ\80ани обекÑ\82и"
-#: lib/tools_dlg.tcl:341
-msgid "OK"
-msgstr "Ð\94обÑ\80е"
+#: lib/database.tcl:47
+msgid "Packed objects waiting for pruning"
+msgstr "Ð\9fакеÑ\82иÑ\80ани обекÑ\82и за окаÑ\81Ñ\82Ñ\80Ñ\8fне"
-#: lib/search.tcl:48
-msgid "Find:"
-msgstr "ТÑ\8aÑ\80Ñ\81ене:"
+#: lib/database.tcl:48
+msgid "Garbage files"
+msgstr "Файлове за боклÑ\83ка"
-#: lib/search.tcl:50
-msgid "Next"
-msgstr "Следваща поява"
+#: lib/database.tcl:57 lib/option.tcl:182 lib/option.tcl:197 lib/option.tcl:220
+#: lib/option.tcl:282
+#, tcl-format
+msgid "%s:"
+msgstr "%s:"
-#: lib/search.tcl:51
-msgid "Prev"
-msgstr "Предишна поява"
+#: lib/database.tcl:66
+#, tcl-format
+msgid "%s (%s): Database Statistics"
+msgstr "%s (%s): Статистика на базата от данни"
-#: lib/search.tcl:52
-msgid "RegExp"
-msgstr "Рег. изÑ\80аз"
+#: lib/database.tcl:72
+msgid "Compressing the object database"
+msgstr "Ð\9aомпÑ\80еÑ\81иÑ\80ане на базаÑ\82а Ñ\81 данни за обекÑ\82иÑ\82е"
-#: lib/search.tcl:54
-msgid "Case"
-msgstr "РегиÑ\81Ñ\82Ñ\8aÑ\80"
+#: lib/database.tcl:83
+msgid "Verifying the object database with fsck-objects"
+msgstr "Ð\9fÑ\80овеÑ\80ка на базаÑ\82а Ñ\81 данни за обекÑ\82иÑ\82е Ñ\81 пÑ\80огÑ\80амаÑ\82а â\80\9efsck-objectsâ\80\9c"
-#: lib/shortcut.tcl:21 lib/shortcut.tcl:62
-msgid "Cannot write shortcut:"
-msgstr "Клавишната комбинация не може да бъде запазена:"
+#: lib/database.tcl:107
+#, tcl-format
+msgid ""
+"This repository currently has approximately %i loose objects.\n"
+"\n"
+"To maintain optimal performance it is strongly recommended that you compress "
+"the database.\n"
+"\n"
+"Compress the database now?"
+msgstr ""
+"В това хранилище в момента има към %i непакетирани обекти.\n"
+"\n"
+"За добра производителност се препоръчва да компресирате базата с данни за "
+"обектите.\n"
+"\n"
+"Да се започне ли компресирането?"
-#: lib/shortcut.tcl:137
-msgid "Cannot write icon:"
-msgstr "Иконата не може да бъде запазена:"
+#: lib/date.tcl:25
+#, tcl-format
+msgid "Invalid date from Git: %s"
+msgstr "Неправилни данни от Git: %s"
#: lib/diff.tcl:77
#, tcl-format
msgid "Loading diff of %s..."
msgstr "Зареждане на разликите в „%s“…"
-#: lib/diff.tcl:140
+#: lib/diff.tcl:143
msgid ""
"LOCAL: deleted\n"
"REMOTE:\n"
"ЛОКАЛНО: изтрит\n"
"ОТДАЛЕЧЕНО:\n"
-#: lib/diff.tcl:145
+#: lib/diff.tcl:148
msgid ""
"REMOTE: deleted\n"
"LOCAL:\n"
"ОТДАЛЕЧЕНО: изтрит\n"
"ЛОКАЛНО:\n"
-#: lib/diff.tcl:152
+#: lib/diff.tcl:155
msgid "LOCAL:\n"
msgstr "ЛОКАЛНО:\n"
-#: lib/diff.tcl:155
+#: lib/diff.tcl:158
msgid "REMOTE:\n"
msgstr "ОТДАЛЕЧЕНО:\n"
-#: lib/diff.tcl:217 lib/diff.tcl:355
+#: lib/diff.tcl:220 lib/diff.tcl:357
#, tcl-format
msgid "Unable to display %s"
msgstr "Файлът „%s“ не може да бъде показан"
-#: lib/diff.tcl:218
+#: lib/diff.tcl:221
msgid "Error loading file:"
msgstr "Грешка при зареждане на файл:"
-#: lib/diff.tcl:225
+#: lib/diff.tcl:227
msgid "Git Repository (subproject)"
msgstr "Хранилище на Git (подмодул)"
-#: lib/diff.tcl:237
+#: lib/diff.tcl:239
msgid "* Binary file (not showing content)."
msgstr "● Двоичен файл (съдържанието не се показва)."
-#: lib/diff.tcl:242
+#: lib/diff.tcl:244
#, tcl-format
msgid ""
"* Untracked file is %d bytes.\n"
"● Неследеният файл е %d байта.\n"
"● Показват се само първите %d байта.\n"
-#: lib/diff.tcl:248
-#, tcl-format
-msgid ""
-"\n"
-"* Untracked file clipped here by %s.\n"
-"* To see the entire file, use an external editor.\n"
-msgstr ""
-"\n"
-"● Неследеният файл е отрязан дотук от програмата „%s“.\n"
-"● Използвайте външен редактор, за да видите целия файл.\n"
-
-#: lib/diff.tcl:356 lib/blame.tcl:1128
-msgid "Error loading diff:"
-msgstr "Грешка при зареждане на разлика:"
-
-#: lib/diff.tcl:578
-msgid "Failed to unstage selected hunk."
-msgstr "Избраното парче не може да бъде извадено от индекса."
-
-#: lib/diff.tcl:585
-msgid "Failed to stage selected hunk."
-msgstr "Избраното парче не може да бъде добавено към индекса."
-
-#: lib/diff.tcl:664
-msgid "Failed to unstage selected line."
-msgstr "Избраният ред не може да бъде изваден от индекса."
-
-#: lib/diff.tcl:672
-msgid "Failed to stage selected line."
-msgstr "Избраният ред не може да бъде добавен към индекса."
-
-#: lib/remote_branch_delete.tcl:29 lib/remote_branch_delete.tcl:34
-msgid "Delete Branch Remotely"
-msgstr "Изтриване на отдалечения клон"
-
-#: lib/remote_branch_delete.tcl:48
-msgid "From Repository"
-msgstr "От хранилище"
-
-#: lib/remote_branch_delete.tcl:88
-msgid "Branches"
-msgstr "Клони"
-
-#: lib/remote_branch_delete.tcl:110
-msgid "Delete Only If"
-msgstr "Изтриване, само ако"
-
-#: lib/remote_branch_delete.tcl:112
-msgid "Merged Into:"
-msgstr "Слят в:"
-
-#: lib/remote_branch_delete.tcl:120 lib/branch_delete.tcl:53
-msgid "Always (Do not perform merge checks)"
-msgstr "Винаги (без проверка за сливане)"
-
-#: lib/remote_branch_delete.tcl:153
-msgid "A branch is required for 'Merged Into'."
-msgstr "За данните „Слят в“ е необходимо да зададете клон."
-
-#: lib/remote_branch_delete.tcl:185
-#, tcl-format
-msgid ""
-"The following branches are not completely merged into %s:\n"
-"\n"
-" - %s"
-msgstr ""
-"Следните клони не са слети напълно в „%s“:\n"
-"\n"
-" ● %s"
-
-#: lib/remote_branch_delete.tcl:190
+#: lib/diff.tcl:250
#, tcl-format
msgid ""
-"One or more of the merge tests failed because you have not fetched the "
-"necessary commits. Try fetching from %s first."
-msgstr ""
-"Поне една от пробите за сливане е неуспешна, защото не сте доставили всички "
-"необходими подавания. Пробвайте първо да доставите подаванията от „%s“."
-
-#: lib/remote_branch_delete.tcl:208
-msgid "Please select one or more branches to delete."
-msgstr "Изберете поне един клон за изтриване."
-
-#: lib/remote_branch_delete.tcl:218 lib/branch_delete.tcl:115
-msgid ""
-"Recovering deleted branches is difficult.\n"
-"\n"
-"Delete the selected branches?"
-msgstr ""
-"Възстановяването на изтрити клони може да е трудно.\n"
"\n"
-"Сигурни ли сте, че искате да триете?"
-
-#: lib/remote_branch_delete.tcl:227
-#, tcl-format
-msgid "Deleting branches from %s"
-msgstr "Изтриване на клони от „%s“"
-
-#: lib/remote_branch_delete.tcl:300
-msgid "No repository selected."
-msgstr "Не е избрано хранилище."
-
-#: lib/remote_branch_delete.tcl:305
-#, tcl-format
-msgid "Scanning %s..."
-msgstr "Претърсване на „%s“…"
-
-#: lib/choose_repository.tcl:33
-msgid "Git Gui"
-msgstr "ГПИ на Git"
-
-#: lib/choose_repository.tcl:92 lib/choose_repository.tcl:412
-msgid "Create New Repository"
-msgstr "Създаване на ново хранилище"
-
-#: lib/choose_repository.tcl:98
-msgid "New..."
-msgstr "Ново…"
-
-#: lib/choose_repository.tcl:105 lib/choose_repository.tcl:496
-msgid "Clone Existing Repository"
-msgstr "Клониране на съществуващо хранилище"
+"* Untracked file clipped here by %s.\n"
+"* To see the entire file, use an external editor.\n"
+msgstr ""
+"\n"
+"● Неследеният файл е отрязан дотук от програмата „%s“.\n"
+"● Използвайте външен редактор, за да видите целия файл.\n"
-#: lib/choose_repository.tcl:116
-msgid "Clone..."
-msgstr "Ð\9aлониÑ\80анеâ\80¦"
+#: lib/diff.tcl:580
+msgid "Failed to unstage selected hunk."
+msgstr "Ð\98збÑ\80аноÑ\82о паÑ\80Ñ\87е не може да бÑ\8aде извадено оÑ\82 индекÑ\81а."
-#: lib/choose_repository.tcl:123 lib/choose_repository.tcl:1064
-msgid "Open Existing Repository"
-msgstr "Ð\9eÑ\82ваÑ\80Ñ\8fне на Ñ\81Ñ\8aÑ\89еÑ\81Ñ\82вÑ\83ваÑ\89о Ñ\85Ñ\80анилиÑ\89е"
+#: lib/diff.tcl:587
+msgid "Failed to stage selected hunk."
+msgstr "Ð\98збÑ\80аноÑ\82о паÑ\80Ñ\87е не може да бÑ\8aде добавено кÑ\8aм индекÑ\81а."
-#: lib/choose_repository.tcl:129
-msgid "Open..."
-msgstr "Ð\9eÑ\82ваÑ\80Ñ\8fнеâ\80¦"
+#: lib/diff.tcl:666
+msgid "Failed to unstage selected line."
+msgstr "Ð\98збÑ\80аниÑ\8fÑ\82 Ñ\80ед не може да бÑ\8aде изваден оÑ\82 индекÑ\81а."
-#: lib/choose_repository.tcl:142
-msgid "Recent Repositories"
-msgstr "СкоÑ\80о ползвани"
+#: lib/diff.tcl:674
+msgid "Failed to stage selected line."
+msgstr "Ð\98збÑ\80аниÑ\8fÑ\82 Ñ\80ед не може да бÑ\8aде добавен кÑ\8aм индекÑ\81а."
-#: lib/choose_repository.tcl:148
-msgid "Open Recent Repository:"
-msgstr "Ð\9eÑ\82ваÑ\80Ñ\8fне на Ñ\85Ñ\80анилиÑ\89е ползвано наÑ\81коÑ\80о:"
+#: lib/encoding.tcl:443
+msgid "Default"
+msgstr "СÑ\82андаÑ\80Ñ\82ноÑ\82о"
-#: lib/choose_repository.tcl:316 lib/choose_repository.tcl:323
-#: lib/choose_repository.tcl:330
+#: lib/encoding.tcl:448
#, tcl-format
-msgid "Failed to create repository %s:"
-msgstr "Неуспешно създаване на хранилището „%s“:"
-
-#: lib/choose_repository.tcl:407 lib/branch_create.tcl:33
-msgid "Create"
-msgstr "Създаване"
+msgid "System (%s)"
+msgstr "Системното (%s)"
-#: lib/choose_repository.tcl:417
-msgid "Directory:"
-msgstr "Директория:"
+#: lib/encoding.tcl:459 lib/encoding.tcl:465
+msgid "Other"
+msgstr "Друго"
-#: lib/choose_repository.tcl:447 lib/choose_repository.tcl:573
-#: lib/choose_repository.tcl:1098
-msgid "Git Repository"
-msgstr "Хранилище на Git"
+#: lib/error.tcl:20
+#, tcl-format
+msgid "%s: error"
+msgstr "%s: грешка"
-#: lib/choose_repository.tcl:472
+#: lib/error.tcl:36
#, tcl-format
-msgid "Directory %s already exists."
-msgstr "Вече съществува директория „%s“."
+msgid "%s: warning"
+msgstr "%s: предупреждение"
-#: lib/choose_repository.tcl:476
+#: lib/error.tcl:80
#, tcl-format
-msgid "File %s already exists."
-msgstr "Вече съществува файл „%s“."
+msgid "%s hook failed:"
+msgstr "%s: грешка от куката"
-#: lib/choose_repository.tcl:491
-msgid "Clone"
-msgstr "Ð\9aлониÑ\80ане"
+#: lib/error.tcl:96
+msgid "You must correct the above errors before committing."
+msgstr "Ð\9fÑ\80еди да можеÑ\82е да подадеÑ\82е, коÑ\80игиÑ\80айÑ\82е гоÑ\80ниÑ\82е гÑ\80еÑ\88ки."
-#: lib/choose_repository.tcl:504
-msgid "Source Location:"
-msgstr "Адрес на източника:"
+#: lib/error.tcl:116
+#, tcl-format
+msgid "%s (%s): error"
+msgstr "%s (%s): грешка"
-#: lib/choose_repository.tcl:513
-msgid "Target Directory:"
-msgstr "Целева диÑ\80екÑ\82оÑ\80иÑ\8f:"
+#: lib/index.tcl:6
+msgid "Unable to unlock the index."
+msgstr "Ð\98ндекÑ\81Ñ\8aÑ\82 не може да бÑ\8aде оÑ\82клÑ\8eÑ\87ен."
-#: lib/choose_repository.tcl:523
-msgid "Clone Type:"
-msgstr "Ð\92ид клониÑ\80ане:"
+#: lib/index.tcl:17
+msgid "Index Error"
+msgstr "Ð\93Ñ\80еÑ\88ка в индекÑ\81а"
-#: lib/choose_repository.tcl:528
-msgid "Standard (Fast, Semi-Redundant, Hardlinks)"
-msgstr "Стандартно (бързо, частично споделяне на файлове, твърди връзки)"
+#: lib/index.tcl:19
+msgid ""
+"Updating the Git index failed. A rescan will be automatically started to "
+"resynchronize git-gui."
+msgstr ""
+"Неуспешно обновяване на индекса на Git. Автоматично ще започне нова проверка "
+"за синхронизирането на git-gui."
-#: lib/choose_repository.tcl:533
-msgid "Full Copy (Slower, Redundant Backup)"
-msgstr "Ð\9fÑ\8aлно (бавно, пÑ\8aлноÑ\86енно Ñ\80езеÑ\80вно копие)"
+#: lib/index.tcl:30
+msgid "Continue"
+msgstr "Ð\9fÑ\80одÑ\8aлжаване"
-#: lib/choose_repository.tcl:538
-msgid "Shared (Fastest, Not Recommended, No Backup)"
-msgstr "Споделено (най-бÑ\8aÑ\80зо, не Ñ\81е пÑ\80епоÑ\80Ñ\8aÑ\87ва, не пÑ\80ави Ñ\80езеÑ\80вно копие)"
+#: lib/index.tcl:33
+msgid "Unlock Index"
+msgstr "Ð\9eÑ\82клÑ\8eÑ\87ване на индекÑ\81а"
-#: lib/choose_repository.tcl:545
-msgid "Recursively clone submodules too"
-msgstr "РекÑ\83Ñ\80Ñ\81ивно клониÑ\80ане и на подмодÑ\83лиÑ\82е"
+#: lib/index.tcl:294
+msgid "Unstaging selected files from commit"
+msgstr "Ð\98зваждане на избÑ\80аниÑ\82е Ñ\84айлове оÑ\82 подаванеÑ\82о"
-#: lib/choose_repository.tcl:579 lib/choose_repository.tcl:626
-#: lib/choose_repository.tcl:772 lib/choose_repository.tcl:842
-#: lib/choose_repository.tcl:1104 lib/choose_repository.tcl:1112
+#: lib/index.tcl:298
#, tcl-format
-msgid "Not a Git repository: %s"
-msgstr "Това не е Ñ\85Ñ\80анилиÑ\89е на Git: %s"
+msgid "Unstaging %s from commit"
+msgstr "Ð\98зваждане на â\80\9e%sâ\80\9c оÑ\82 подаванеÑ\82о"
-#: lib/choose_repository.tcl:615
-msgid "Standard only available for local repository."
-msgstr "Само локални Ñ\85Ñ\80анилиÑ\89а могаÑ\82 да Ñ\81е клониÑ\80аÑ\82 Ñ\81Ñ\82андаÑ\80Ñ\82но"
+#: lib/index.tcl:337
+msgid "Ready to commit."
+msgstr "Ð\93оÑ\82овноÑ\81Ñ\82 за подаване."
-#: lib/choose_repository.tcl:619
-msgid "Shared only available for local repository."
-msgstr "Само локални Ñ\85Ñ\80анилиÑ\89а могаÑ\82 да Ñ\81е клониÑ\80аÑ\82 Ñ\81поделено"
+#: lib/index.tcl:346
+msgid "Adding selected files"
+msgstr "Ð\94обавÑ\8fне на избÑ\80аниÑ\82е Ñ\84айлове"
-#: lib/choose_repository.tcl:640
+#: lib/index.tcl:350
#, tcl-format
-msgid "Location %s already exists."
-msgstr "Местоположението „%s“ вече съществува."
-
-#: lib/choose_repository.tcl:651
-msgid "Failed to configure origin"
-msgstr "Неуспешно настройване на хранилището-източник"
+msgid "Adding %s"
+msgstr "Добавяне на „%s“"
-#: lib/choose_repository.tcl:663
-msgid "Counting objects"
-msgstr "Преброяване на обекти"
+#: lib/index.tcl:380
+#, tcl-format
+msgid "Stage %d untracked files?"
+msgstr "Да се добавят ли %d неследени файла към индекса?"
-#: lib/choose_repository.tcl:664
-msgid "buckets"
-msgstr "клеÑ\82ки"
+#: lib/index.tcl:388
+msgid "Adding all changed files"
+msgstr "Ð\94обавÑ\8fне на вÑ\81иÑ\87ки пÑ\80оменени Ñ\84айлове"
-#: lib/choose_repository.tcl:688
+#: lib/index.tcl:428
#, tcl-format
-msgid "Unable to copy objects/info/alternates: %s"
-msgstr "Ð\9eбекÑ\82иÑ\82е/инÑ\84оÑ\80маÑ\86иÑ\8fÑ\82а/Ñ\81инонимиÑ\82е не могаÑ\82 да бÑ\8aдаÑ\82 копиÑ\80ани: %s"
+msgid "Revert changes in file %s?"
+msgstr "Ð\94а Ñ\81е маÑ\85наÑ\82 ли пÑ\80омениÑ\82е вÑ\8aв Ñ\84айла â\80\9e%sâ\80\9c?"
-#: lib/choose_repository.tcl:724
+#: lib/index.tcl:430
#, tcl-format
-msgid "Nothing to clone from %s."
-msgstr "Няма какво да се клонира от „%s“."
-
-#: lib/choose_repository.tcl:726 lib/choose_repository.tcl:940
-#: lib/choose_repository.tcl:952
-msgid "The 'master' branch has not been initialized."
-msgstr "Основният клон — „master“ не е инициализиран."
-
-#: lib/choose_repository.tcl:739
-msgid "Hardlinks are unavailable. Falling back to copying."
-msgstr "Не се поддържат твърди връзки. Преминава се към копиране."
+msgid "Revert changes in these %i files?"
+msgstr "Да се махнат ли промените в тези %i файла?"
-#: lib/choose_repository.tcl:751
-#, tcl-format
-msgid "Cloning from %s"
-msgstr "Клониране на „%s“"
+#: lib/index.tcl:438
+msgid "Any unstaged changes will be permanently lost by the revert."
+msgstr ""
+"Всички промени, които не са били вкарани в индекса, ще бъдат безвъзвратно "
+"загубени."
-#: lib/choose_repository.tcl:782
-msgid "Copying objects"
-msgstr "Ð\9aопиÑ\80ане на обекÑ\82и"
+#: lib/index.tcl:441
+msgid "Do Nothing"
+msgstr "Ð\9dиÑ\89о да не Ñ\81е пÑ\80ави"
-#: lib/choose_repository.tcl:783
-msgid "KiB"
-msgstr "KiB"
+#: lib/index.tcl:459
+msgid "Reverting selected files"
+msgstr "Махане на промените в избраните файлове"
-#: lib/choose_repository.tcl:807
+#: lib/index.tcl:463
#, tcl-format
-msgid "Unable to copy object: %s"
-msgstr "Ð\9dеÑ\83Ñ\81пеÑ\88но копиÑ\80ане на обекÑ\82: %s"
+msgid "Reverting %s"
+msgstr "Ð\9cаÑ\85ане на пÑ\80омениÑ\82е в â\80\9e%sâ\80\9c"
-#: lib/choose_repository.tcl:817
-msgid "Linking objects"
-msgstr "СÑ\8aздаване на вÑ\80Ñ\8aзки кÑ\8aм обекÑ\82иÑ\82е"
+#: lib/line.tcl:17
+msgid "Goto Line:"
+msgstr "Ð\9aÑ\8aм Ñ\80ед:"
-#: lib/choose_repository.tcl:818
-msgid "objects"
-msgstr "обекÑ\82и"
+#: lib/line.tcl:23
+msgid "Go"
+msgstr "Ð\9fÑ\80идвижване"
-#: lib/choose_repository.tcl:826
-#, tcl-format
-msgid "Unable to hardlink object: %s"
-msgstr "Неуспешно създаване на твърда връзка към обект: %s"
+#: lib/merge.tcl:13
+msgid ""
+"Cannot merge while amending.\n"
+"\n"
+"You must finish amending this commit before starting any type of merge.\n"
+msgstr ""
+"По време на поправяне не може да сливане.\n"
+"\n"
+"Трябва да завършите поправянето на текущото подаване, преди да започнете "
+"сливане.\n"
-#: lib/choose_repository.tcl:881
-msgid "Cannot fetch branches and objects. See console output for details."
+#: lib/merge.tcl:27
+msgid ""
+"Last scanned state does not match repository state.\n"
+"\n"
+"Another Git program has modified this repository since the last scan. A "
+"rescan must be performed before a merge can be performed.\n"
+"\n"
+"The rescan will be automatically started now.\n"
msgstr ""
-"Клоните и обектите не могат да бъдат изтеглени. За повече информация "
-"погледнете изхода на конзолата."
+"Последно установеното състояние не отговаря на това в хранилището.\n"
+"\n"
+"Някой друг процес за Git е променил хранилището междувременно. Състоянието "
+"трябва да бъде проверено, преди да се извърши сливане.\n"
+"\n"
+"Автоматично ще започне нова проверка.\n"
+"\n"
-#: lib/choose_repository.tcl:892
-msgid "Cannot fetch tags. See console output for details."
+#: lib/merge.tcl:45
+#, tcl-format
+msgid ""
+"You are in the middle of a conflicted merge.\n"
+"\n"
+"File %s has merge conflicts.\n"
+"\n"
+"You must resolve them, stage the file, and commit to complete the current "
+"merge. Only then can you begin another merge.\n"
msgstr ""
-"Етикетите не могат да бъдат изтеглени. За повече информация погледнете "
-"изхода на конзолата."
+"В момента тече сливане, но има конфликти.\n"
+"\n"
+"Погледнете файла „%s“.\n"
+"\n"
+"Трябва да коригирате конфликтите в него, да го добавите към индекса и да "
+"завършите текущото сливане чрез подаване. Чак тогава може да започнете ново "
+"сливане.\n"
-#: lib/choose_repository.tcl:916
-msgid "Cannot determine HEAD. See console output for details."
+#: lib/merge.tcl:55
+#, tcl-format
+msgid ""
+"You are in the middle of a change.\n"
+"\n"
+"File %s is modified.\n"
+"\n"
+"You should complete the current commit before starting a merge. Doing so "
+"will help you abort a failed merge, should the need arise.\n"
msgstr ""
-"Върхът „HEAD“ не може да бъде определен. За повече информация погледнете "
-"изхода на конзолата."
+"В момента тече подаване.\n"
+"\n"
+"Файлът „%s“ е променен.\n"
+"\n"
+"Трябва да завършите текущото подаване, преди да започнете сливане. Така ще "
+"можете лесно да преустановите сливането, ако възникне нужда.\n"
-#: lib/choose_repository.tcl:925
+#: lib/merge.tcl:108
#, tcl-format
-msgid "Unable to cleanup %s"
-msgstr "„%s“ не може да се зачисти"
+msgid "%s of %s"
+msgstr "%s от общо %s"
-#: lib/choose_repository.tcl:931
-msgid "Clone failed."
-msgstr "Неуспешно клониране."
+#: lib/merge.tcl:126
+#, tcl-format
+msgid "Merging %s and %s..."
+msgstr "Сливане на „%s“ и „%s“…"
-#: lib/choose_repository.tcl:938
-msgid "No default branch obtained."
-msgstr "Ð\9dе е полÑ\83Ñ\87ен клон по подÑ\80азбиÑ\80ане."
+#: lib/merge.tcl:137
+msgid "Merge completed successfully."
+msgstr "СливанеÑ\82о завÑ\8aÑ\80Ñ\88и Ñ\83Ñ\81пеÑ\88но."
-#: lib/choose_repository.tcl:949
-#, tcl-format
-msgid "Cannot resolve %s as a commit."
-msgstr "Няма подаване отговарящо на „%s“."
+#: lib/merge.tcl:139
+msgid "Merge failed. Conflict resolution is required."
+msgstr "Неуспешно сливане — има конфликти за коригиране."
-#: lib/choose_repository.tcl:961
-msgid "Creating working directory"
-msgstr "Създаване на работната директория"
+#: lib/merge.tcl:156
+#, tcl-format
+msgid "%s (%s): Merge"
+msgstr "%s (%s): Сливане"
-#: lib/choose_repository.tcl:962 lib/index.tcl:70 lib/index.tcl:136
-#: lib/index.tcl:207
-msgid "files"
-msgstr "файлове"
+#: lib/merge.tcl:164
+#, tcl-format
+msgid "Merge Into %s"
+msgstr "Сливане в „%s“"
-#: lib/choose_repository.tcl:981
-msgid "Cannot clone submodules."
-msgstr "Ð\9fодмодÑ\83лиÑ\82е не могаÑ\82 да Ñ\81е клониÑ\80аÑ\82."
+#: lib/merge.tcl:183
+msgid "Revision To Merge"
+msgstr "Ð\92еÑ\80Ñ\81иÑ\8f за Ñ\81ливане"
-#: lib/choose_repository.tcl:990
-msgid "Cloning submodules"
-msgstr "Клониране на подмодулите"
+#: lib/merge.tcl:218
+msgid ""
+"Cannot abort while amending.\n"
+"\n"
+"You must finish amending this commit.\n"
+msgstr ""
+"Поправянето не може да бъде преустановено.\n"
+"\n"
+"Трябва да завършите поправката на това подаване.\n"
-#: lib/choose_repository.tcl:1015
-msgid "Initial file checkout failed."
-msgstr "Неуспешно първоначално изтегляне."
+#: lib/merge.tcl:228
+msgid ""
+"Abort merge?\n"
+"\n"
+"Aborting the current merge will cause *ALL* uncommitted changes to be lost.\n"
+"\n"
+"Continue with aborting the current merge?"
+msgstr ""
+"Да се преустанови ли сливането?\n"
+"\n"
+"В такъв случай ●ВСИЧКИ● неподадени промени ще бъдат безвъзвратно загубени.\n"
+"\n"
+"Наистина ли да се преустанови сливането?"
-#: lib/choose_repository.tcl:1059
-msgid "Open"
-msgstr "Отваряне"
+#: lib/merge.tcl:234
+msgid ""
+"Reset changes?\n"
+"\n"
+"Resetting the changes will cause *ALL* uncommitted changes to be lost.\n"
+"\n"
+"Continue with resetting the current changes?"
+msgstr ""
+"Да се занулят ли промените?\n"
+"\n"
+"В такъв случай ●ВСИЧКИ● неподадени промени ще бъдат безвъзвратно загубени.\n"
+"\n"
+"Наистина ли да се занулят промените?"
-#: lib/choose_repository.tcl:1069
-msgid "Repository:"
-msgstr "Ð¥Ñ\80анилиÑ\89е:"
+#: lib/merge.tcl:245
+msgid "Aborting"
+msgstr "Ð\9fÑ\80еÑ\83Ñ\81Ñ\82ановÑ\8fване"
-#: lib/choose_repository.tcl:1118
-#, tcl-format
-msgid "Failed to open repository %s:"
-msgstr "Неуспешно отваряне на хранилището „%s“:"
+#: lib/merge.tcl:245
+msgid "files reset"
+msgstr "файла със занулени промени"
-#: lib/about.tcl:26
-msgid "git-gui - a graphical user interface for Git."
-msgstr "git-gui — графичен интерфейс за Git."
+#: lib/merge.tcl:273
+msgid "Abort failed."
+msgstr "Неуспешно преустановяване."
-#: lib/checkout_op.tcl:85
-#, tcl-format
-msgid "Fetching %s from %s"
-msgstr "Доставяне на „%s“ от „%s“"
+#: lib/merge.tcl:275
+msgid "Abort completed. Ready."
+msgstr "Успешно преустановяване. Готовност за следващо действие."
-#: lib/checkout_op.tcl:133
-#, tcl-format
-msgid "fatal: Cannot resolve %s"
-msgstr "фатална грешка: „%s“ не може да се открие"
+#: lib/mergetool.tcl:8
+msgid "Force resolution to the base version?"
+msgstr "Да се използва базовата версия"
-#: lib/checkout_op.tcl:175
-#, tcl-format
-msgid "Branch '%s' does not exist."
-msgstr "Клонът „%s“ не съществува."
+#: lib/mergetool.tcl:9
+msgid "Force resolution to this branch?"
+msgstr "Да се използва версията от този клон"
-#: lib/checkout_op.tcl:194
-#, tcl-format
-msgid "Failed to configure simplified git-pull for '%s'."
-msgstr "Неуспешно настройване на опростен git-pull за „%s“."
+#: lib/mergetool.tcl:10
+msgid "Force resolution to the other branch?"
+msgstr "Да се използва версията от другия клон"
-#: lib/checkout_op.tcl:229
+#: lib/mergetool.tcl:14
#, tcl-format
msgid ""
-"Branch '%s' already exists.\n"
+"Note that the diff shows only conflicting changes.\n"
"\n"
-"It cannot fast-forward to %s.\n"
-"A merge is required."
+"%s will be overwritten.\n"
+"\n"
+"This operation can be undone only by restarting the merge."
msgstr ""
-"Ð\9aлонÑ\8aÑ\82 â\80\9e%sâ\80\9c Ñ\81Ñ\8aÑ\89еÑ\81Ñ\82вÑ\83ва.\n"
+"РазликаÑ\82а показва Ñ\81амо Ñ\80азликиÑ\82е Ñ\81 конÑ\84ликÑ\82.\n"
"\n"
-"Той не може да бъде тривиално слят до „%s“.\n"
-"Необходимо е сливане."
+"Файлът „%s“ ще бъде презаписан.\n"
+"\n"
+"Тази операция може да бъде отменена само чрез започване на сливането наново."
-#: lib/checkout_op.tcl:243
+#: lib/mergetool.tcl:45
#, tcl-format
-msgid "Merge strategy '%s' not supported."
-msgstr "Стратегия за сливане „%s“ не се поддържа."
+msgid "File %s seems to have unresolved conflicts, still stage?"
+msgstr ""
+"Изглежда, че все още има некоригирани конфликти във файла „%s“. Да се добави "
+"ли файлът към индекса?"
-#: lib/checkout_op.tcl:262
+#: lib/mergetool.tcl:60
#, tcl-format
-msgid "Failed to update '%s'."
-msgstr "Неуспешно обновяване на „%s“."
-
-#: lib/checkout_op.tcl:274
-msgid "Staging area (index) is already locked."
-msgstr "Индексът вече е заключен."
+msgid "Adding resolution for %s"
+msgstr "Добавяне на корекция на конфликтите в „%s“"
-#: lib/checkout_op.tcl:289
-msgid ""
-"Last scanned state does not match repository state.\n"
-"\n"
-"Another Git program has modified this repository since the last scan. A "
-"rescan must be performed before the current branch can be changed.\n"
-"\n"
-"The rescan will be automatically started now.\n"
+#: lib/mergetool.tcl:141
+msgid "Cannot resolve deletion or link conflicts using a tool"
msgstr ""
-"Състоянието при последната проверка не отговаря на състоянието на "
-"хранилището.\n"
-"\n"
-"Някой друг процес за Git е променил хранилището междувременно. Състоянието "
-"трябва да бъде проверено, преди да се премине към нов клон.\n"
-"\n"
-"Автоматично ще започне нова проверка.\n"
+"Конфликтите при символни връзки или изтриване не могат да бъдат коригирани с "
+"външна програма."
-#: lib/checkout_op.tcl:345
-#, tcl-format
-msgid "Updating working directory to '%s'..."
-msgstr "Работната директория се привежда към „%s“…"
+#: lib/mergetool.tcl:146
+msgid "Conflict file does not exist"
+msgstr "Файлът, в който е конфликтът, не съществува"
-#: lib/checkout_op.tcl:346
-msgid "files checked out"
-msgstr "файла са изтеглени"
+#: lib/mergetool.tcl:246
+#, tcl-format
+msgid "Not a GUI merge tool: '%s'"
+msgstr "Това не е графична програма за сливане: „%s“"
-#: lib/checkout_op.tcl:376
+#: lib/mergetool.tcl:275
#, tcl-format
-msgid "Aborted checkout of '%s' (file level merging is required)."
-msgstr ""
-"Преустановяване на изтеглянето на „%s“ (необходимо е пофайлово сливане)."
+msgid "Unsupported merge tool '%s'"
+msgstr "Неподдържана програма за сливане: „%s“"
-#: lib/checkout_op.tcl:377
-msgid "File level merge required."
-msgstr "Ð\9dеобÑ\85одимо е поÑ\84айлово Ñ\81ливане."
+#: lib/mergetool.tcl:310
+msgid "Merge tool is already running, terminate it?"
+msgstr "Ð\9fÑ\80огÑ\80амаÑ\82а за Ñ\81ливане веÑ\87е е Ñ\81Ñ\82аÑ\80Ñ\82иÑ\80ана. Ð\94а бÑ\8aде ли изклÑ\8eÑ\87ена?"
-#: lib/checkout_op.tcl:381
+#: lib/mergetool.tcl:330
#, tcl-format
-msgid "Staying on branch '%s'."
-msgstr "Оставане върху клона „%s“."
+msgid ""
+"Error retrieving versions:\n"
+"%s"
+msgstr ""
+"Грешка при изтеглянето на версии:\n"
+"%s"
-#: lib/checkout_op.tcl:452
+#: lib/mergetool.tcl:350
+#, tcl-format
msgid ""
-"You are no longer on a local branch.\n"
+"Could not start the merge tool:\n"
"\n"
-"If you wanted to be on a branch, create one now starting from 'This Detached "
-"Checkout'."
+"%s"
msgstr ""
-"Ð\92еÑ\87е не Ñ\81Ñ\82е на локален клон.\n"
+"Ð\9fÑ\80огÑ\80амаÑ\82а за Ñ\81ливане не може да бÑ\8aде Ñ\81Ñ\82аÑ\80Ñ\82иÑ\80ана:\n"
"\n"
-"Ако искате да сте на клон, създайте базиран на „Това несвързано изтегляне“."
+"%s"
-#: lib/checkout_op.tcl:503 lib/checkout_op.tcl:507
+#: lib/mergetool.tcl:354
+msgid "Running merge tool..."
+msgstr "Стартиране на програмата за сливане…"
+
+#: lib/mergetool.tcl:382 lib/mergetool.tcl:390
+msgid "Merge tool failed."
+msgstr "Грешка в програмата за сливане."
+
+#: lib/option.tcl:11
#, tcl-format
-msgid "Checked out '%s'."
-msgstr "„%s“ е изтеглен."
+msgid "Invalid global encoding '%s'"
+msgstr "Неправилно глобално кодиране „%s“"
-#: lib/checkout_op.tcl:535
+#: lib/option.tcl:19
#, tcl-format
-msgid "Resetting '%s' to '%s' will lose the following commits:"
-msgstr ""
-"Зануляването на „%s“ към „%s“ ще доведе до загубването на следните подавания:"
+msgid "Invalid repo encoding '%s'"
+msgstr "Неправилно кодиране „%s“ на хранилището"
-#: lib/checkout_op.tcl:557
-msgid "Recovering lost commits may not be easy."
-msgstr "Ð\92Ñ\8aзÑ\81Ñ\82ановÑ\8fванеÑ\82о на загÑ\83бениÑ\82е подаваниÑ\8f може да е Ñ\82Ñ\80Ñ\83дно."
+#: lib/option.tcl:119
+msgid "Restore Defaults"
+msgstr "СÑ\82андаÑ\80Ñ\82ни наÑ\81Ñ\82Ñ\80ойки"
-#: lib/checkout_op.tcl:562
+#: lib/option.tcl:123
+msgid "Save"
+msgstr "Запазване"
+
+#: lib/option.tcl:133
#, tcl-format
-msgid "Reset '%s'?"
-msgstr "Ð\97анÑ\83лÑ\8fване на â\80\9e%sâ\80\9c?"
+msgid "%s Repository"
+msgstr "Ð¥Ñ\80анилиÑ\89е â\80\9e%sâ\80\9c"
-#: lib/checkout_op.tcl:571 lib/branch_create.tcl:85
-msgid "Reset"
-msgstr "Ð\9eÑ\82наÑ\87ало"
+#: lib/option.tcl:134
+msgid "Global (All Repositories)"
+msgstr "Ð\93лобално (за вÑ\81иÑ\87ки Ñ\85Ñ\80анилиÑ\89а)"
-#: lib/checkout_op.tcl:635
-#, tcl-format
-msgid ""
-"Failed to set current branch.\n"
-"\n"
-"This working directory is only partially switched. We successfully updated "
-"your files, but failed to update an internal Git file.\n"
-"\n"
-"This should not have occurred. %s will now close and give up."
-msgstr ""
-"Неуспешно задаване на текущия клон.\n"
-"\n"
-"Работната директория е само частично обновена: файловете са обновени "
-"успешно, но някой от вътрешните, служебни файлове на Git не е бил.\n"
-"\n"
-"Това състояние е аварийно и не трябва да се случва. Програмата „%s“ ще "
-"преустанови работа."
+#: lib/option.tcl:140
+msgid "User Name"
+msgstr "Потребителско име"
-#: lib/branch_create.tcl:23
-msgid "Create Branch"
-msgstr "СÑ\8aздаване на клон"
+#: lib/option.tcl:141
+msgid "Email Address"
+msgstr "Ð\90дÑ\80еÑ\81 на е-поÑ\89а"
-#: lib/branch_create.tcl:28
-msgid "Create New Branch"
-msgstr "СÑ\8aздаване на нов клон"
+#: lib/option.tcl:143
+msgid "Summarize Merge Commits"
+msgstr "Ð\9eбобÑ\89аване на подаваниÑ\8fÑ\82а пÑ\80и Ñ\81ливане"
-#: lib/branch_create.tcl:42
-msgid "Branch Name"
-msgstr "Ð\98ме на клона"
+#: lib/option.tcl:144
+msgid "Merge Verbosity"
+msgstr "Ð\9fодÑ\80обноÑ\81Ñ\82и пÑ\80и Ñ\81ливаниÑ\8fÑ\82а"
-#: lib/branch_create.tcl:57
-msgid "Match Tracking Branch Name"
-msgstr "СÑ\8aвпадане по имеÑ\82о на Ñ\81ледениÑ\8f клон"
+#: lib/option.tcl:145
+msgid "Show Diffstat After Merge"
+msgstr "Ð\98звеждане на Ñ\81Ñ\82аÑ\82иÑ\81Ñ\82ика Ñ\81лед Ñ\81ливаниÑ\8fÑ\82а"
-#: lib/branch_create.tcl:66
-msgid "Starting Revision"
-msgstr "Ð\9dаÑ\87ална веÑ\80Ñ\81иÑ\8f"
+#: lib/option.tcl:146
+msgid "Use Merge Tool"
+msgstr "Ð\98зползване на пÑ\80огÑ\80ама за Ñ\81ливане"
-#: lib/branch_create.tcl:72
-msgid "Update Existing Branch:"
-msgstr "Ð\9eбновÑ\8fване на Ñ\81Ñ\8aÑ\89еÑ\81Ñ\82вÑ\83ваÑ\89 клон:"
+#: lib/option.tcl:148
+msgid "Trust File Modification Timestamps"
+msgstr "Ð\94овеÑ\80ие вÑ\8aв вÑ\80емеÑ\82о на пÑ\80омÑ\8fна на Ñ\84айловеÑ\82е"
-#: lib/branch_create.tcl:75
-msgid "No"
-msgstr "Ð\9dе"
+#: lib/option.tcl:149
+msgid "Prune Tracking Branches During Fetch"
+msgstr "Ð\9eкаÑ\81Ñ\82Ñ\80Ñ\8fне на Ñ\81ледÑ\8fÑ\89иÑ\82е клонове пÑ\80и доÑ\81Ñ\82авÑ\8fне"
-#: lib/branch_create.tcl:80
-msgid "Fast Forward Only"
-msgstr "Само Ñ\82Ñ\80ивиално пÑ\80евÑ\8aÑ\80Ñ\82аÑ\89о Ñ\81ливане"
+#: lib/option.tcl:150
+msgid "Match Tracking Branches"
+msgstr "Ð\9dапаÑ\81ване на Ñ\81ледÑ\8fÑ\89иÑ\82е клонове"
-#: lib/branch_create.tcl:97
-msgid "Checkout After Creation"
-msgstr "Ð\9fÑ\80еминаване кÑ\8aм клона Ñ\81лед Ñ\81Ñ\8aздаванеÑ\82о мÑ\83"
+#: lib/option.tcl:151
+msgid "Use Textconv For Diffs and Blames"
+msgstr "Ð\98зползване на â\80\9etextconvâ\80\9c за Ñ\80азликиÑ\82е и аноÑ\82иÑ\80анеÑ\82о"
-#: lib/branch_create.tcl:132
-msgid "Please select a tracking branch."
-msgstr "Ð\98збеÑ\80еÑ\82е клон за Ñ\81ледени."
+#: lib/option.tcl:152
+msgid "Blame Copy Only On Changed Files"
+msgstr "Ð\90ноÑ\82иÑ\80ане на копиеÑ\82о Ñ\81амо по пÑ\80оменениÑ\82е Ñ\84айлове"
-#: lib/branch_create.tcl:141
-#, tcl-format
-msgid "Tracking branch %s is not a branch in the remote repository."
-msgstr "Следящият клон — „%s“, не съществува в отдалеченото хранилище."
+#: lib/option.tcl:153
+msgid "Maximum Length of Recent Repositories List"
+msgstr "Максимален брой на списъка „Скоро ползвани“ хранилища"
-#: lib/console.tcl:59
-msgid "Working... please wait..."
-msgstr "Ð\92 моменÑ\82а Ñ\81е извÑ\8aÑ\80Ñ\88ва дейÑ\81Ñ\82вие, изÑ\87акайÑ\82еâ\80¦"
+#: lib/option.tcl:154
+msgid "Minimum Letters To Blame Copy On"
+msgstr "Ð\9cинимален бÑ\80ой знаÑ\86и за аноÑ\82иÑ\80ане на копиеÑ\82о"
-#: lib/console.tcl:186
-msgid "Success"
-msgstr "УÑ\81пеÑ\85"
+#: lib/option.tcl:155
+msgid "Blame History Context Radius (days)"
+msgstr "Ð\98Ñ\81Ñ\82оÑ\80иÑ\87еÑ\81ки обÑ\85ваÑ\82 за аноÑ\82иÑ\80ане в дни"
-#: lib/console.tcl:200
-msgid "Error: Command Failed"
-msgstr "Ð\93Ñ\80еÑ\88ка: неÑ\83Ñ\81пеÑ\88но изпÑ\8aлнение на команда"
+#: lib/option.tcl:156
+msgid "Number of Diff Context Lines"
+msgstr "Ð\91Ñ\80ой Ñ\80едове за конÑ\82екÑ\81Ñ\82а на Ñ\80азликиÑ\82е"
-#: lib/choose_rev.tcl:52
-msgid "This Detached Checkout"
-msgstr "Това неÑ\81вÑ\8aÑ\80зано изÑ\82еглÑ\8fне"
+#: lib/option.tcl:157
+msgid "Additional Diff Parameters"
+msgstr "Ð\90Ñ\80гÑ\83менÑ\82и кÑ\8aм командаÑ\82а за Ñ\80азликиÑ\82е"
-#: lib/choose_rev.tcl:60
-msgid "Revision Expression:"
-msgstr "Ð\98зÑ\80аз за веÑ\80Ñ\81иÑ\8f:"
+#: lib/option.tcl:158
+msgid "Commit Message Text Width"
+msgstr "ШиÑ\80оÑ\87ина на Ñ\82екÑ\81Ñ\82а на Ñ\81Ñ\8aобÑ\89ениеÑ\82о пÑ\80и подаване"
-#: lib/choose_rev.tcl:72
-msgid "Local Branch"
-msgstr "Ð\9bокален клон"
+#: lib/option.tcl:159
+msgid "New Branch Name Template"
+msgstr "Шаблон за имеÑ\82о на новиÑ\82е клони"
-#: lib/choose_rev.tcl:77
-msgid "Tracking Branch"
-msgstr "СледÑ\8fÑ\89 клон"
+#: lib/option.tcl:160
+msgid "Default File Contents Encoding"
+msgstr "Ð\9aодиÑ\80ане на Ñ\84айловеÑ\82е"
-#: lib/choose_rev.tcl:82 lib/choose_rev.tcl:544
-msgid "Tag"
-msgstr "Ð\95Ñ\82икеÑ\82"
+#: lib/option.tcl:161
+msgid "Warn before committing to a detached head"
+msgstr "Ð\9fÑ\80едÑ\83пÑ\80еждаване пÑ\80и подаване кÑ\8aм неÑ\81вÑ\8aÑ\80зан Ñ\83казаÑ\82ел"
-#: lib/choose_rev.tcl:321
-#, tcl-format
-msgid "Invalid revision: %s"
-msgstr "Неправилна версия: %s"
+#: lib/option.tcl:162
+msgid "Staging of untracked files"
+msgstr "Добавяне на неследените файлове към индекса"
-#: lib/choose_rev.tcl:342
-msgid "No revision selected."
-msgstr "Ð\9dе е избÑ\80ана веÑ\80Ñ\81иÑ\8f."
+#: lib/option.tcl:163
+msgid "Show untracked files"
+msgstr "Ð\9fоказване на неÑ\81ледениÑ\82е Ñ\84айлове"
-#: lib/choose_rev.tcl:350
-msgid "Revision expression is empty."
-msgstr "Ð\98зÑ\80азÑ\8aÑ\82 за веÑ\80Ñ\81иÑ\8f е пÑ\80азен."
+#: lib/option.tcl:164
+msgid "Tab spacing"
+msgstr "ШиÑ\80ина на Ñ\82абÑ\83лаÑ\86иÑ\8fÑ\82а"
-#: lib/choose_rev.tcl:537
-msgid "Updated"
-msgstr "Ð\9eбновен"
+#: lib/option.tcl:210
+msgid "Change"
+msgstr "СмÑ\8fна"
-#: lib/choose_rev.tcl:565
-msgid "URL"
-msgstr "Ð\90дÑ\80еÑ\81"
+#: lib/option.tcl:254
+msgid "Spelling Dictionary:"
+msgstr "Ð\9fÑ\80авопиÑ\81ен Ñ\80еÑ\87ник:"
-#: lib/line.tcl:17
-msgid "Goto Line:"
-msgstr "Ð\9aÑ\8aм Ñ\80ед:"
+#: lib/option.tcl:284
+msgid "Change Font"
+msgstr "СмÑ\8fна на Ñ\88Ñ\80иÑ\84Ñ\82а"
-#: lib/line.tcl:23
-msgid "Go"
-msgstr "Придвижване"
+#: lib/option.tcl:288
+#, tcl-format
+msgid "Choose %s"
+msgstr "Избор на „%s“"
-#: lib/commit.tcl:9
-msgid ""
-"There is nothing to amend.\n"
-"\n"
-"You are about to create the initial commit. There is no commit before this "
-"to amend.\n"
-msgstr ""
-"Няма какво да се поправи.\n"
-"\n"
-"Ще създадете първоначалното подаване. Преди него няма други подавания, които "
-"да поправите.\n"
+#: lib/option.tcl:294
+msgid "pt."
+msgstr "тчк."
-#: lib/commit.tcl:18
-msgid ""
-"Cannot amend while merging.\n"
-"\n"
-"You are currently in the middle of a merge that has not been fully "
-"completed. You cannot amend the prior commit unless you first abort the "
-"current merge activity.\n"
-msgstr ""
-"По време на сливане не може да поправяте.\n"
-"\n"
-"В момента все още не сте завършили операция по сливане. Не може да поправите "
-"предишното подаване, освен ако първо не преустановите текущото сливане.\n"
+#: lib/option.tcl:308
+msgid "Preferences"
+msgstr "Настройки"
-#: lib/commit.tcl:48
-msgid "Error loading commit data for amend:"
-msgstr "Ð\93Ñ\80еÑ\88ка пÑ\80и заÑ\80еждане на данниÑ\82е оÑ\82 подаване, коиÑ\82о да Ñ\81е попÑ\80авÑ\8fÑ\82:"
+#: lib/option.tcl:345
+msgid "Failed to completely save options:"
+msgstr "Ð\9dеÑ\83Ñ\81пеÑ\88но запазване на наÑ\81Ñ\82Ñ\80ойкиÑ\82е:"
-#: lib/commit.tcl:75
-msgid "Unable to obtain your identity:"
-msgstr "Ð\98денÑ\82иÑ\84икаÑ\86иÑ\8fÑ\82а ви не може да бÑ\8aде опÑ\80еделена:"
+#: lib/remote.tcl:200
+msgid "Push to"
+msgstr "Ð\98зÑ\82лаÑ\81кване кÑ\8aм"
-#: lib/commit.tcl:80
-msgid "Invalid GIT_COMMITTER_IDENT:"
-msgstr "Ð\9dепÑ\80авилно поле â\80\9eGIT_COMMITTER_IDENTâ\80\9c:"
+#: lib/remote.tcl:218
+msgid "Remove Remote"
+msgstr "Ð\9fÑ\80емаÑ\85ване на оÑ\82далеÑ\87ено Ñ\85Ñ\80анилиÑ\89е"
-#: lib/commit.tcl:129
-#, tcl-format
-msgid "warning: Tcl does not support encoding '%s'."
-msgstr "предупреждение: Tcl не поддържа кодирането „%s“."
+#: lib/remote.tcl:223
+msgid "Prune from"
+msgstr "Окастряне от"
+
+#: lib/remote.tcl:228
+msgid "Fetch from"
+msgstr "Доставяне от"
-#: lib/commit.tcl:149
-msgid ""
-"Last scanned state does not match repository state.\n"
-"\n"
-"Another Git program has modified this repository since the last scan. A "
-"rescan must be performed before another commit can be created.\n"
-"\n"
-"The rescan will be automatically started now.\n"
-msgstr ""
-"Състоянието при последната проверка не отговаря на състоянието на "
-"хранилището.\n"
-"\n"
-"Някой друг процес за Git е променил хранилището междувременно. Състоянието "
-"трябва да бъде проверено преди ново подаване.\n"
-"\n"
-"Автоматично ще започне нова проверка.\n"
+#: lib/remote.tcl:253 lib/remote.tcl:258
+msgid "All"
+msgstr "Всички"
-#: lib/commit.tcl:173
+#: lib/remote_add.tcl:20
#, tcl-format
-msgid ""
-"Unmerged files cannot be committed.\n"
-"\n"
-"File %s has merge conflicts. You must resolve them and stage the file "
-"before committing.\n"
-msgstr ""
-"Неслетите файлове не могат да бъдат подавани.\n"
-"\n"
-"Във файла „%s“ има конфликти при сливане. За да го подадете, трябва първо да "
-"коригирате конфликтите и да добавите файла към индекса за подаване.\n"
+msgid "%s (%s): Add Remote"
+msgstr "%s (%s): Добавяне на отдалечено хранилище"
-#: lib/commit.tcl:181
-#, tcl-format
-msgid ""
-"Unknown file state %s detected.\n"
-"\n"
-"File %s cannot be committed by this program.\n"
-msgstr ""
-"Непознато състояние на файл „%s“.\n"
-"\n"
-"Файлът „%s“ не може да бъде подаден чрез текущата програма.\n"
+#: lib/remote_add.tcl:25
+msgid "Add New Remote"
+msgstr "Добавяне на отдалечено хранилище"
-#: lib/commit.tcl:189
-msgid ""
-"No changes to commit.\n"
-"\n"
-"You must stage at least 1 file before you can commit.\n"
-msgstr ""
-"Няма промени за подаване.\n"
-"\n"
-"Трябва да добавите поне един файл към индекса, за да подадете.\n"
+#: lib/remote_add.tcl:30 lib/tools_dlg.tcl:37
+msgid "Add"
+msgstr "Добавяне"
-#: lib/commit.tcl:204
-msgid ""
-"Please supply a commit message.\n"
-"\n"
-"A good commit message has the following format:\n"
-"\n"
-"- First line: Describe in one sentence what you did.\n"
-"- Second line: Blank\n"
-"- Remaining lines: Describe why this change is good.\n"
-msgstr ""
-"Задайте добро съобщение при подаване.\n"
-"\n"
-"Използвайте следния формат:\n"
-"\n"
-"● Първи ред: описание в едно изречение на промяната.\n"
-"● Втори ред: празен.\n"
-"● Останалите редове: опишете защо се налага тази промяна.\n"
+#: lib/remote_add.tcl:39
+msgid "Remote Details"
+msgstr "Данни за отдалеченото хранилище"
-#: lib/commit.tcl:235
-msgid "Calling pre-commit hook..."
-msgstr "Ð\98зпÑ\8aлнÑ\8fване на кÑ\83каÑ\82а пÑ\80еди подаванеâ\80¦"
+#: lib/remote_add.tcl:50
+msgid "Location:"
+msgstr "Ð\9cеÑ\81Ñ\82оположение:"
-#: lib/commit.tcl:250
-msgid "Commit declined by pre-commit hook."
-msgstr "Ð\9fодаванеÑ\82о е оÑ\82Ñ\85вÑ\8aÑ\80лено оÑ\82 кÑ\83каÑ\82а пÑ\80еди подаване."
+#: lib/remote_add.tcl:60
+msgid "Further Action"
+msgstr "СледваÑ\89о дейÑ\81Ñ\82вие"
-#: lib/commit.tcl:269
-msgid ""
-"You are about to commit on a detached head. This is a potentially dangerous "
-"thing to do because if you switch to another branch you will lose your "
-"changes and it can be difficult to retrieve them later from the reflog. You "
-"should probably cancel this commit and create a new branch to continue.\n"
-" \n"
-" Do you really want to proceed with your Commit?"
-msgstr ""
-"Ще подавате към несвързан връх. Това е опасно — при изтеглянето на друг клон "
-"ще изгубите промените си. След това може да е невъзможно да ги възстановите "
-"от журнала на указателите „reflog“. Най-вероятно трябва да отмените това "
-"подаване и да създадете клон, в който да подадете.\n"
-" \n"
-"Сигурни ли сте, че искате да подадете към несвързан връх?"
+#: lib/remote_add.tcl:63
+msgid "Fetch Immediately"
+msgstr "Незабавно доставяне"
-#: lib/commit.tcl:290
-msgid "Calling commit-msg hook..."
-msgstr "Ð\98зпÑ\8aлнÑ\8fване на кÑ\83каÑ\82а за Ñ\81Ñ\8aобÑ\89ениеÑ\82о пÑ\80и подаванеâ\80¦"
+#: lib/remote_add.tcl:69
+msgid "Initialize Remote Repository and Push"
+msgstr "Ð\98ниÑ\86иализиÑ\80ане на оÑ\82далеÑ\87еноÑ\82о Ñ\85Ñ\80анилиÑ\89е и изÑ\82лаÑ\81кване на пÑ\80омениÑ\82е"
-#: lib/commit.tcl:305
-msgid "Commit declined by commit-msg hook."
-msgstr "Ð\9fодаванеÑ\82о е оÑ\82Ñ\85вÑ\8aÑ\80лено оÑ\82 кÑ\83каÑ\82а за Ñ\81Ñ\8aобÑ\89ениеÑ\82о пÑ\80и подаване."
+#: lib/remote_add.tcl:75
+msgid "Do Nothing Else Now"
+msgstr "Ð\94а не Ñ\81е пÑ\80ави ниÑ\89о"
-#: lib/commit.tcl:318
-msgid "Committing changes..."
-msgstr "Ð\9fодаване на пÑ\80омениÑ\82еâ\80¦"
+#: lib/remote_add.tcl:100
+msgid "Please supply a remote name."
+msgstr "Ð\97адайÑ\82е име за оÑ\82далеÑ\87еноÑ\82о Ñ\85Ñ\80анилиÑ\89е."
-#: lib/commit.tcl:334
-msgid "write-tree failed:"
-msgstr "неуспешно запазване на дървото (write-tree):"
+#: lib/remote_add.tcl:113
+#, tcl-format
+msgid "'%s' is not an acceptable remote name."
+msgstr "Отдалечено хранилище не може да се казва „%s“."
-#: lib/commit.tcl:335 lib/commit.tcl:379 lib/commit.tcl:400
-msgid "Commit failed."
-msgstr "Неуспешно подаване."
+#: lib/remote_add.tcl:124
+#, tcl-format
+msgid "Failed to add remote '%s' of location '%s'."
+msgstr "Неуспешно добавяне на отдалеченото хранилище „%s“ от адрес „%s“."
-#: lib/commit.tcl:352
+#: lib/remote_add.tcl:132 lib/transport.tcl:6
#, tcl-format
-msgid "Commit %s appears to be corrupt"
-msgstr "Ð\9fодаванеÑ\82о â\80\9e%sâ\80\9c изглежда повÑ\80едено"
+msgid "fetch %s"
+msgstr "доÑ\81Ñ\82авÑ\8fне на â\80\9e%sâ\80\9c"
-#: lib/commit.tcl:357
-msgid ""
-"No changes to commit.\n"
-"\n"
-"No files were modified by this commit and it was not a merge commit.\n"
-"\n"
-"A rescan will be automatically started now.\n"
-msgstr ""
-"Няма промени за подаване.\n"
-"\n"
-"В това подаване не са променяни никакви файлове, а и не е подаване със "
-"сливане.\n"
-"\n"
-"Автоматично ще започне нова проверка.\n"
+#: lib/remote_add.tcl:133
+#, tcl-format
+msgid "Fetching the %s"
+msgstr "Доставяне на „%s“"
-#: lib/commit.tcl:364
-msgid "No changes to commit."
-msgstr "Няма промени за подаване."
+#: lib/remote_add.tcl:156
+#, tcl-format
+msgid "Do not know how to initialize repository at location '%s'."
+msgstr "Хранилището с местоположение „%s“ не може да бъде инициализирано."
-#: lib/commit.tcl:378
-msgid "commit-tree failed:"
-msgstr "неуспешно подаване на дървото (commit-tree):"
+#: lib/remote_add.tcl:162 lib/transport.tcl:54 lib/transport.tcl:92
+#: lib/transport.tcl:110
+#, tcl-format
+msgid "push %s"
+msgstr "изтласкване на „%s“"
-#: lib/commit.tcl:399
-msgid "update-ref failed:"
-msgstr "неуспешно обновяване на указателите (update-ref):"
+#: lib/remote_add.tcl:163
+#, tcl-format
+msgid "Setting up the %s (at %s)"
+msgstr "Добавяне на хранилище „%s“ (с адрес „%s“)"
-#: lib/commit.tcl:492
+#: lib/remote_branch_delete.tcl:29
#, tcl-format
-msgid "Created commit %s: %s"
-msgstr "Успешно подаване %s: %s"
+msgid "%s (%s): Delete Branch Remotely"
+msgstr "%s (%s): Изтриване на отдалечения клон"
-#: lib/branch_delete.tcl:16
-msgid "Delete Branch"
-msgstr "Изтриване на клон"
+#: lib/remote_branch_delete.tcl:34
+msgid "Delete Branch Remotely"
+msgstr "Ð\98зÑ\82Ñ\80иване на оÑ\82далеÑ\87ениÑ\8f клон"
-#: lib/branch_delete.tcl:21
-msgid "Delete Local Branch"
-msgstr "Ð\98зÑ\82Ñ\80иване на локален клон"
+#: lib/remote_branch_delete.tcl:48
+msgid "From Repository"
+msgstr "Ð\9eÑ\82 Ñ\85Ñ\80анилиÑ\89е"
-#: lib/branch_delete.tcl:39
-msgid "Local Branches"
-msgstr "Ð\9bокални клони"
+#: lib/remote_branch_delete.tcl:51 lib/transport.tcl:165
+msgid "Remote:"
+msgstr "Ð\9eÑ\82далеÑ\87ено Ñ\85Ñ\80анилиÑ\89е:"
-#: lib/branch_delete.tcl:51
-msgid "Delete Only If Merged Into"
-msgstr "Ð\98зÑ\82Ñ\80иване, Ñ\81амо ако пÑ\80омениÑ\82е Ñ\81а Ñ\81леÑ\82и и дÑ\80Ñ\83гаде"
+#: lib/remote_branch_delete.tcl:72 lib/transport.tcl:187
+msgid "Arbitrary Location:"
+msgstr "Ð\9fÑ\80оизволно меÑ\81Ñ\82оположение:"
-#: lib/branch_delete.tcl:103
-#, tcl-format
-msgid "The following branches are not completely merged into %s:"
-msgstr "Не всички промени в клоните са слети в „%s“:"
+#: lib/remote_branch_delete.tcl:88
+msgid "Branches"
+msgstr "Клони"
-#: lib/branch_delete.tcl:141
+#: lib/remote_branch_delete.tcl:110
+msgid "Delete Only If"
+msgstr "Изтриване, само ако"
+
+#: lib/remote_branch_delete.tcl:112
+msgid "Merged Into:"
+msgstr "Слят в:"
+
+#: lib/remote_branch_delete.tcl:153
+msgid "A branch is required for 'Merged Into'."
+msgstr "За данните „Слят в“ е необходимо да зададете клон."
+
+#: lib/remote_branch_delete.tcl:185
#, tcl-format
msgid ""
-"Failed to delete branches:\n"
-"%s"
+"The following branches are not completely merged into %s:\n"
+"\n"
+" - %s"
msgstr ""
-"Неуспешно триене на клони:\n"
-"%s"
+"Следните клони не са слети напълно в „%s“:\n"
+"\n"
+" ● %s"
-#: lib/blame.tcl:73
-msgid "File Viewer"
-msgstr "Преглед на файлове"
+#: lib/remote_branch_delete.tcl:190
+#, tcl-format
+msgid ""
+"One or more of the merge tests failed because you have not fetched the "
+"necessary commits. Try fetching from %s first."
+msgstr ""
+"Поне една от пробите за сливане е неуспешна, защото не сте доставили всички "
+"необходими подавания. Пробвайте първо да доставите подаванията от „%s“."
-#: lib/blame.tcl:79
-msgid "Commit:"
-msgstr "Ð\9fодаване:"
+#: lib/remote_branch_delete.tcl:208
+msgid "Please select one or more branches to delete."
+msgstr "Ð\98збеÑ\80еÑ\82е поне един клон за изÑ\82Ñ\80иване."
-#: lib/blame.tcl:280
-msgid "Copy Commit"
-msgstr "Копиране на подаване"
+#: lib/remote_branch_delete.tcl:227
+#, tcl-format
+msgid "Deleting branches from %s"
+msgstr "Изтриване на клони от „%s“"
-#: lib/blame.tcl:284
-msgid "Find Text..."
-msgstr "ТÑ\8aÑ\80Ñ\81ене на Ñ\82екÑ\81Ñ\82â\80¦"
+#: lib/remote_branch_delete.tcl:300
+msgid "No repository selected."
+msgstr "Ð\9dе е избÑ\80ано Ñ\85Ñ\80анилиÑ\89е."
-#: lib/blame.tcl:288
-msgid "Goto Line..."
-msgstr "Към ред…"
+#: lib/remote_branch_delete.tcl:305
+#, tcl-format
+msgid "Scanning %s..."
+msgstr "Претърсване на „%s“…"
-#: lib/blame.tcl:297
-msgid "Do Full Copy Detection"
-msgstr "Ð\9fÑ\8aлно Ñ\82Ñ\8aÑ\80Ñ\81ене на копиÑ\80ане"
+#: lib/search.tcl:48
+msgid "Find:"
+msgstr "ТÑ\8aÑ\80Ñ\81ене:"
-#: lib/blame.tcl:301
-msgid "Show History Context"
-msgstr "Ð\9fоказване на конÑ\82екÑ\81Ñ\82а оÑ\82 иÑ\81Ñ\82оÑ\80иÑ\8fÑ\82а"
+#: lib/search.tcl:50
+msgid "Next"
+msgstr "СледваÑ\89а поÑ\8fва"
-#: lib/blame.tcl:304
-msgid "Blame Parent Commit"
-msgstr "Ð\90ноÑ\82иÑ\80ане на Ñ\80одиÑ\82елÑ\81коÑ\82о подаване"
+#: lib/search.tcl:51
+msgid "Prev"
+msgstr "Ð\9fÑ\80едиÑ\88на поÑ\8fва"
-#: lib/blame.tcl:466
-#, tcl-format
-msgid "Reading %s..."
-msgstr "Чете се „%s“…"
+#: lib/search.tcl:52
+msgid "RegExp"
+msgstr "РегИзр"
-#: lib/blame.tcl:594
-msgid "Loading copy/move tracking annotations..."
-msgstr "Ð\97аÑ\80еждане на аноÑ\82аÑ\86ииÑ\82е за пÑ\80оÑ\81ледÑ\8fване на копиÑ\80анеÑ\82о/пÑ\80емеÑ\81Ñ\82ванеÑ\82оâ\80¦"
+#: lib/search.tcl:54
+msgid "Case"
+msgstr "Ð\93лавни/малки"
-#: lib/blame.tcl:614
-msgid "lines annotated"
-msgstr "реда анотирани"
+#: lib/shortcut.tcl:8 lib/shortcut.tcl:43 lib/shortcut.tcl:75
+#, tcl-format
+msgid "%s (%s): Create Desktop Icon"
+msgstr "%s (%s): Добавяне на икона на работния плот"
-#: lib/blame.tcl:806
-msgid "Loading original location annotations..."
-msgstr "Ð\97аÑ\80еждане на аноÑ\82аÑ\86ииÑ\82е за пÑ\8aÑ\80вонаÑ\87алноÑ\82о меÑ\81Ñ\82оположениеâ\80¦"
+#: lib/shortcut.tcl:24 lib/shortcut.tcl:65
+msgid "Cannot write shortcut:"
+msgstr "Ð\9aлавиÑ\88наÑ\82а комбинаÑ\86иÑ\8f не може да бÑ\8aде запазена:"
-#: lib/blame.tcl:809
-msgid "Annotation complete."
-msgstr "Ð\90ноÑ\82иÑ\80анеÑ\82о завÑ\8aÑ\80Ñ\88и."
+#: lib/shortcut.tcl:140
+msgid "Cannot write icon:"
+msgstr "Ð\98конаÑ\82а не може да бÑ\8aде запазена:"
-#: lib/blame.tcl:839
-msgid "Busy"
-msgstr "Ð\9eпеÑ\80аÑ\86иÑ\8fÑ\82а не е завÑ\8aÑ\80Ñ\88ила"
+#: lib/spellcheck.tcl:57
+msgid "Unsupported spell checker"
+msgstr "Тази пÑ\80огÑ\80ама за пÑ\80овеÑ\80ка на пÑ\80авопиÑ\81а не Ñ\81е поддÑ\8aÑ\80жа"
-#: lib/blame.tcl:840
-msgid "Annotation process is already running."
-msgstr "Ð\92 моменÑ\82а Ñ\82еÑ\87е пÑ\80оÑ\86еÑ\81 на аноÑ\82иÑ\80ане."
+#: lib/spellcheck.tcl:65
+msgid "Spell checking is unavailable"
+msgstr "Ð\9bипÑ\81ва пÑ\80огÑ\80ама за пÑ\80овеÑ\80ка на пÑ\80авопиÑ\81а"
-#: lib/blame.tcl:879
-msgid "Running thorough copy detection..."
-msgstr "Ð\98зпÑ\8aлнÑ\8fва Ñ\81е Ñ\86Ñ\8fлоÑ\81Ñ\82ен пÑ\80оÑ\86еÑ\81 на оÑ\82кÑ\80иване на копиÑ\80анеâ\80¦"
+#: lib/spellcheck.tcl:68
+msgid "Invalid spell checking configuration"
+msgstr "Ð\9dепÑ\80авилни наÑ\81Ñ\82Ñ\80ойки на пÑ\80овеÑ\80каÑ\82а на пÑ\80авопиÑ\81а"
-#: lib/blame.tcl:947
-msgid "Loading annotation..."
-msgstr "Зареждане на анотации…"
+#: lib/spellcheck.tcl:70
+#, tcl-format
+msgid "Reverting dictionary to %s."
+msgstr "Ползване на речник за език „%s“."
-#: lib/blame.tcl:1000
-msgid "Author:"
-msgstr "Ð\90вÑ\82оÑ\80:"
+#: lib/spellcheck.tcl:73
+msgid "Spell checker silently failed on startup"
+msgstr "Ð\9fÑ\80огÑ\80амаÑ\82а за пÑ\80авопиÑ\81 даже не Ñ\81Ñ\82аÑ\80Ñ\82иÑ\80а Ñ\83Ñ\81пеÑ\88но."
-#: lib/blame.tcl:1004
-msgid "Committer:"
-msgstr "Ð\9fодал:"
+#: lib/spellcheck.tcl:80
+msgid "Unrecognized spell checker"
+msgstr "Ð\9dепознаÑ\82а пÑ\80огÑ\80ама за пÑ\80овеÑ\80ка на пÑ\80авопиÑ\81а"
-#: lib/blame.tcl:1009
-msgid "Original File:"
-msgstr "Ð\9fÑ\8aÑ\80вонаÑ\87ален Ñ\84айл:"
+#: lib/spellcheck.tcl:186
+msgid "No Suggestions"
+msgstr "Ð\9dÑ\8fма пÑ\80едложениÑ\8f"
-#: lib/blame.tcl:1057
-msgid "Cannot find HEAD commit:"
-msgstr "Ð\9fодаванеÑ\82о за вÑ\80Ñ\8aÑ\85 â\80\9eHEADâ\80\9c не може да Ñ\81е оÑ\82кÑ\80ие:"
+#: lib/spellcheck.tcl:388
+msgid "Unexpected EOF from spell checker"
+msgstr "Ð\9dеоÑ\87акван кÑ\80ай на Ñ\84айл оÑ\82 пÑ\80огÑ\80амаÑ\82а за пÑ\80овеÑ\80ка на пÑ\80авопиÑ\81а"
-#: lib/blame.tcl:1112
-msgid "Cannot find parent commit:"
-msgstr "РодиÑ\82елÑ\81коÑ\82о подаване не може да бÑ\8aде оÑ\82кÑ\80иÑ\82о"
+#: lib/spellcheck.tcl:392
+msgid "Spell Checker Failed"
+msgstr "Ð\93Ñ\80еÑ\88ка в пÑ\80огÑ\80амаÑ\82а за пÑ\80овеÑ\80ка на пÑ\80авопиÑ\81а"
-#: lib/blame.tcl:1127
-msgid "Unable to display parent"
-msgstr "РодиÑ\82елÑ\8fÑ\82 не може да бÑ\8aде показан"
+#: lib/sshkey.tcl:31
+msgid "No keys found."
+msgstr "Ð\9dе Ñ\81а оÑ\82кÑ\80иÑ\82и клÑ\8eÑ\87ове."
-#: lib/blame.tcl:1269
-msgid "Originally By:"
-msgstr "Първоначално от:"
+#: lib/sshkey.tcl:34
+#, tcl-format
+msgid "Found a public key in: %s"
+msgstr "Открит е публичен ключ в „%s“"
-#: lib/blame.tcl:1275
-msgid "In File:"
-msgstr "Ð\92Ñ\8aв Ñ\84айл:"
+#: lib/sshkey.tcl:40
+msgid "Generate Key"
+msgstr "Ð\93енеÑ\80иÑ\80ане на клÑ\8eÑ\87"
-#: lib/blame.tcl:1280
-msgid "Copied Or Moved Here By:"
-msgstr "Ð\9aопиÑ\80ано или пÑ\80емеÑ\81Ñ\82ено Ñ\82Ñ\83к оÑ\82:"
+#: lib/sshkey.tcl:58
+msgid "Copy To Clipboard"
+msgstr "Ð\9aопиÑ\80ане кÑ\8aм Ñ\81иÑ\81Ñ\82емниÑ\8f бÑ\83Ñ\84еÑ\80"
-#: lib/index.tcl:6
-msgid "Unable to unlock the index."
-msgstr "Ð\98ндекÑ\81Ñ\8aÑ\82 не може да бÑ\8aде оÑ\82клÑ\8eÑ\87ен."
+#: lib/sshkey.tcl:72
+msgid "Your OpenSSH Public Key"
+msgstr "Ð\9fÑ\83блиÑ\87ниÑ\8fÑ\82 ви клÑ\8eÑ\87 за OpenSSH"
-#: lib/index.tcl:17
-msgid "Index Error"
-msgstr "Грешка в индекса"
+#: lib/sshkey.tcl:80
+msgid "Generating..."
+msgstr "Генериране…"
-#: lib/index.tcl:19
+#: lib/sshkey.tcl:86
+#, tcl-format
msgid ""
-"Updating the Git index failed. A rescan will be automatically started to "
-"resynchronize git-gui."
+"Could not start ssh-keygen:\n"
+"\n"
+"%s"
msgstr ""
-"Неуспешно обновяване на индекса на Git. Автоматично ще започне нова проверка "
-"за синхронизирането на git-gui."
+"Програмата „ssh-keygen“ не може да бъде стартирана:\n"
+"\n"
+"%s"
-#: lib/index.tcl:30
-msgid "Continue"
-msgstr "Ð\9fÑ\80одÑ\8aлжаване"
+#: lib/sshkey.tcl:113
+msgid "Generation failed."
+msgstr "Ð\9dеÑ\83Ñ\81пеÑ\88но генеÑ\80иÑ\80ане."
-#: lib/index.tcl:33
-msgid "Unlock Index"
-msgstr "Ð\9eÑ\82клÑ\8eÑ\87ване на индекÑ\81а"
+#: lib/sshkey.tcl:120
+msgid "Generation succeeded, but no keys found."
+msgstr "Ð\93енеÑ\80иÑ\80анеÑ\82о завÑ\8aÑ\80Ñ\88и Ñ\83Ñ\81пеÑ\88но, а не Ñ\81а намеÑ\80ени клÑ\8eÑ\87ове."
-#: lib/index.tcl:298
+#: lib/sshkey.tcl:123
#, tcl-format
-msgid "Unstaging %s from commit"
-msgstr "Ð\98зваждане на â\80\9e%sâ\80\9c оÑ\82 подаванеÑ\82о"
+msgid "Your key is in: %s"
+msgstr "Ð\9aлÑ\8eÑ\87Ñ\8aÑ\82 ви е в â\80\9e%sâ\80\9c"
-#: lib/index.tcl:337
-msgid "Ready to commit."
-msgstr "Готовност за подаване."
+#: lib/status_bar.tcl:87
+#, tcl-format
+msgid "%s ... %*i of %*i %s (%3i%%)"
+msgstr "%s… %*i от общо %*i %s (%3i%%)"
-#: lib/index.tcl:350
+#: lib/tools.tcl:76
#, tcl-format
-msgid "Adding %s"
-msgstr "Ð\94обавÑ\8fне на â\80\9e%sâ\80\9c"
+msgid "Running %s requires a selected file."
+msgstr "Ð\97а изпÑ\8aлнениеÑ\82о на â\80\9e%sâ\80\9c Ñ\82Ñ\80Ñ\8fбва да избеÑ\80еÑ\82е Ñ\84айл."
-#: lib/index.tcl:380
+#: lib/tools.tcl:92
#, tcl-format
-msgid "Stage %d untracked files?"
-msgstr "Ð\94а Ñ\81е вкаÑ\80аÑ\82 ли %d неÑ\81ледени Ñ\84айла в индекÑ\81а?"
+msgid "Are you sure you want to run %1$s on file \"%2$s\"?"
+msgstr "СигÑ\83Ñ\80ни ли Ñ\81Ñ\82е, Ñ\87е иÑ\81каÑ\82е да изпÑ\8aлниÑ\82е â\80\9e%1$sâ\80\9c вÑ\8aÑ\80Ñ\85Ñ\83 Ñ\84айла â\80\9e%2$sâ\80\9c?"
-#: lib/index.tcl:428
+#: lib/tools.tcl:96
#, tcl-format
-msgid "Revert changes in file %s?"
-msgstr "Ð\94а Ñ\81е маÑ\85наÑ\82 ли пÑ\80омениÑ\82е вÑ\8aв Ñ\84айла „%s“?"
+msgid "Are you sure you want to run %s?"
+msgstr "СигÑ\83Ñ\80ни ли Ñ\81Ñ\82е, Ñ\87е иÑ\81каÑ\82е да изпÑ\8aлниÑ\82е „%s“?"
-#: lib/index.tcl:430
+#: lib/tools.tcl:118
#, tcl-format
-msgid "Revert changes in these %i files?"
-msgstr "Ð\94а Ñ\81е маÑ\85наÑ\82 ли пÑ\80омениÑ\82е в Ñ\82ези %i Ñ\84айла?"
+msgid "Tool: %s"
+msgstr "Ð\9aоманда: %s"
-#: lib/index.tcl:438
-msgid "Any unstaged changes will be permanently lost by the revert."
+#: lib/tools.tcl:119
+#, tcl-format
+msgid "Running: %s"
+msgstr "Изпълнение: %s"
+
+#: lib/tools.tcl:158
+#, tcl-format
+msgid "Tool completed successfully: %s"
+msgstr "Командата завърши успешно: %s"
+
+#: lib/tools.tcl:160
+#, tcl-format
+msgid "Tool failed: %s"
+msgstr "Командата върна грешка: %s"
+
+#: lib/tools_dlg.tcl:22
+#, tcl-format
+msgid "%s (%s): Add Tool"
+msgstr "%s (%s): Добавяне на команда"
+
+#: lib/tools_dlg.tcl:28
+msgid "Add New Tool Command"
+msgstr "Добавяне на команда"
+
+#: lib/tools_dlg.tcl:34
+msgid "Add globally"
+msgstr "Глобално добавяне"
+
+#: lib/tools_dlg.tcl:46
+msgid "Tool Details"
+msgstr "Подробности за командата"
+
+#: lib/tools_dlg.tcl:49
+msgid "Use '/' separators to create a submenu tree:"
+msgstr "За създаване на подменюта използвайте знака „/“ за разделител:"
+
+#: lib/tools_dlg.tcl:60
+msgid "Command:"
+msgstr "Команда:"
+
+#: lib/tools_dlg.tcl:71
+msgid "Show a dialog before running"
+msgstr "Преди изпълнение да се извежда диалогов прозорец"
+
+#: lib/tools_dlg.tcl:77
+msgid "Ask the user to select a revision (sets $REVISION)"
+msgstr "Потребителят да укаже версия (задаване на променливата $REVISION)"
+
+#: lib/tools_dlg.tcl:82
+msgid "Ask the user for additional arguments (sets $ARGS)"
msgstr ""
-"Всички промени, които не са били вкарани в индекса, ще бъдат безвъзвратно "
-"загубени."
+"Потребителят да укаже допълнителни аргументи (задаване на променливата $ARGS)"
-#: lib/index.tcl:441
-msgid "Do Nothing"
-msgstr "Нищо да не се прави"
+#: lib/tools_dlg.tcl:89
+msgid "Don't show the command output window"
+msgstr "Без показване на прозорец с изхода от командата"
+
+#: lib/tools_dlg.tcl:94
+msgid "Run only if a diff is selected ($FILENAME not empty)"
+msgstr ""
+"Стартиране само след избор на разлика (променливата $FILENAME не е празна)"
-#: lib/index.tcl:459
-msgid "Reverting selected files"
-msgstr "Ð\9cаÑ\85ане на пÑ\80омениÑ\82е в избÑ\80аниÑ\82е Ñ\84айлове"
+#: lib/tools_dlg.tcl:118
+msgid "Please supply a name for the tool."
+msgstr "Ð\97адайÑ\82е име за командаÑ\82а."
-#: lib/index.tcl:463
+#: lib/tools_dlg.tcl:126
#, tcl-format
-msgid "Reverting %s"
-msgstr "Ð\9cаÑ\85ане на пÑ\80омениÑ\82е в â\80\9e%sâ\80\9c"
+msgid "Tool '%s' already exists."
+msgstr "Ð\9aомандаÑ\82а â\80\9e%sâ\80\9c веÑ\87е Ñ\81Ñ\8aÑ\89еÑ\81Ñ\82вÑ\83ва."
-#: lib/date.tcl:25
+#: lib/tools_dlg.tcl:148
#, tcl-format
-msgid "Invalid date from Git: %s"
-msgstr "Неправилни данни от Git: %s"
-
-#: lib/database.tcl:42
-msgid "Number of loose objects"
-msgstr "Брой непакетирани обекти"
+msgid ""
+"Could not add tool:\n"
+"%s"
+msgstr ""
+"Командата не може да бъде добавена:\n"
+"%s"
-#: lib/database.tcl:43
-msgid "Disk space used by loose objects"
-msgstr "Дисково пространство заето от непакетирани обекти"
+#: lib/tools_dlg.tcl:187
+#, tcl-format
+msgid "%s (%s): Remove Tool"
+msgstr "%s (%s): Премахване на команда"
-#: lib/database.tcl:44
-msgid "Number of packed objects"
-msgstr "Ð\91Ñ\80ой пакеÑ\82иÑ\80ани обекÑ\82и"
+#: lib/tools_dlg.tcl:193
+msgid "Remove Tool Commands"
+msgstr "Ð\9fÑ\80емаÑ\85ване на команди"
-#: lib/database.tcl:45
-msgid "Number of packs"
-msgstr "Ð\91Ñ\80ой пакеÑ\82и"
+#: lib/tools_dlg.tcl:198
+msgid "Remove"
+msgstr "Ð\9fÑ\80емаÑ\85ване"
-#: lib/database.tcl:46
-msgid "Disk space used by packed objects"
-msgstr "Дисково пространство заето от пакетирани обекти"
+#: lib/tools_dlg.tcl:231
+msgid "(Blue denotes repository-local tools)"
+msgstr "(командите към локалното хранилище са обозначени в синьо)"
-#: lib/database.tcl:47
-msgid "Packed objects waiting for pruning"
-msgstr "Пакетирани обекти за окастряне"
+#: lib/tools_dlg.tcl:283
+#, tcl-format
+msgid "%s (%s):"
+msgstr "%s (%s):"
-#: lib/database.tcl:48
-msgid "Garbage files"
-msgstr "Файлове за боклука"
+#: lib/tools_dlg.tcl:292
+#, tcl-format
+msgid "Run Command: %s"
+msgstr "Изпълнение на командата „%s“"
-#: lib/database.tcl:72
-msgid "Compressing the object database"
-msgstr "Ð\9aомпÑ\80еÑ\81иÑ\80ане на базаÑ\82а Ñ\81 данни за обекÑ\82иÑ\82е"
+#: lib/tools_dlg.tcl:306
+msgid "Arguments"
+msgstr "Ð\90Ñ\80гÑ\83менÑ\82и"
-#: lib/database.tcl:83
-msgid "Verifying the object database with fsck-objects"
-msgstr "Ð\9fÑ\80овеÑ\80ка на базаÑ\82а Ñ\81 данни за обекÑ\82иÑ\82е Ñ\81 пÑ\80огÑ\80амаÑ\82а â\80\9efsck-objectsâ\80\9c"
+#: lib/tools_dlg.tcl:341
+msgid "OK"
+msgstr "Ð\94обÑ\80е"
-#: lib/database.tcl:107
+#: lib/transport.tcl:7
#, tcl-format
-msgid ""
-"This repository currently has approximately %i loose objects.\n"
-"\n"
-"To maintain optimal performance it is strongly recommended that you compress "
-"the database.\n"
-"\n"
-"Compress the database now?"
-msgstr ""
-"В това хранилище в момента има към %i непакетирани обекти.\n"
-"\n"
-"За добра производителност се препоръчва да компресирате базата с данни за "
-"обектите.\n"
-"\n"
-"Да се започне ли компресирането?"
+msgid "Fetching new changes from %s"
+msgstr "Доставяне на промените от „%s“"
-#: lib/error.tcl:20 lib/error.tcl:116
-msgid "error"
-msgstr "грешка"
+#: lib/transport.tcl:18
+#, tcl-format
+msgid "remote prune %s"
+msgstr "окастряне на следящите клони към „%s“"
-#: lib/error.tcl:36
-msgid "warning"
-msgstr "предупреждение"
+#: lib/transport.tcl:19
+#, tcl-format
+msgid "Pruning tracking branches deleted from %s"
+msgstr "Окастряне на следящите клони на изтритите клони от „%s“"
-#: lib/error.tcl:96
-msgid "You must correct the above errors before committing."
-msgstr "Ð\9fÑ\80еди да можеÑ\82е да подадеÑ\82е, коÑ\80игиÑ\80айÑ\82е гоÑ\80ниÑ\82е гÑ\80еÑ\88ки."
+#: lib/transport.tcl:25
+msgid "fetch all remotes"
+msgstr "доÑ\81Ñ\82авÑ\8fне оÑ\82 вÑ\81иÑ\87ки оÑ\82далеÑ\87ени"
-#: lib/merge.tcl:13
-msgid ""
-"Cannot merge while amending.\n"
-"\n"
-"You must finish amending this commit before starting any type of merge.\n"
-msgstr ""
-"По време на поправяне не може да сливане.\n"
-"\n"
-"Трябва да завършите поправянето на текущото подаване, преди да започнете "
-"сливане.\n"
+#: lib/transport.tcl:26
+msgid "Fetching new changes from all remotes"
+msgstr "Доставяне на промените от всички отдалечени хранилища"
-#: lib/merge.tcl:27
-msgid ""
-"Last scanned state does not match repository state.\n"
-"\n"
-"Another Git program has modified this repository since the last scan. A "
-"rescan must be performed before a merge can be performed.\n"
-"\n"
-"The rescan will be automatically started now.\n"
-msgstr ""
-"Последно установеното състояние не отговаря на това в хранилището.\n"
-"\n"
-"Някой друг процес за Git е променил хранилището междувременно. Състоянието "
-"трябва да бъде проверено, преди да се извърши сливане.\n"
-"\n"
-"Автоматично ще започне нова проверка.\n"
-"\n"
+#: lib/transport.tcl:40
+msgid "remote prune all remotes"
+msgstr "окастряне на следящите изтрити"
-#: lib/merge.tcl:45
-#, tcl-format
-msgid ""
-"You are in the middle of a conflicted merge.\n"
-"\n"
-"File %s has merge conflicts.\n"
-"\n"
-"You must resolve them, stage the file, and commit to complete the current "
-"merge. Only then can you begin another merge.\n"
+#: lib/transport.tcl:41
+msgid "Pruning tracking branches deleted from all remotes"
msgstr ""
-"В момента тече сливане, но има конфликти.\n"
-"\n"
-"Погледнете файла „%s“.\n"
-"\n"
-"Трябва да коригирате конфликтите в него, да го добавите към индекса и да "
-"завършите текущото сливане чрез подаване. Чак тогава може да започнете ново "
-"сливане.\n"
+"Окастряне на следящите клони на изтритите клони от всички отдалечени "
+"хранилища"
-#: lib/merge.tcl:55
+#: lib/transport.tcl:55
#, tcl-format
-msgid ""
-"You are in the middle of a change.\n"
-"\n"
-"File %s is modified.\n"
-"\n"
-"You should complete the current commit before starting a merge. Doing so "
-"will help you abort a failed merge, should the need arise.\n"
-msgstr ""
-"В момента тече подаване.\n"
-"\n"
-"Файлът „%s“ е променен.\n"
-"\n"
-"Трябва да завършите текущото подаване, преди да започнете сливане. Така ще "
-"можете лесно да преустановите сливането, ако възникне нужда.\n"
+msgid "Pushing changes to %s"
+msgstr "Изтласкване на промените към „%s“"
-#: lib/merge.tcl:108
+#: lib/transport.tcl:93
#, tcl-format
-msgid "%s of %s"
-msgstr "%s от общо %s"
+msgid "Mirroring to %s"
+msgstr "Изтласкване на всичко към „%s“"
-#: lib/merge.tcl:122
+#: lib/transport.tcl:111
#, tcl-format
-msgid "Merging %s and %s..."
-msgstr "Сливане на „%s“ и „%s“…"
-
-#: lib/merge.tcl:133
-msgid "Merge completed successfully."
-msgstr "Сливането завърши успешно."
-
-#: lib/merge.tcl:135
-msgid "Merge failed. Conflict resolution is required."
-msgstr "Неуспешно сливане — има конфликти за коригиране."
+msgid "Pushing %s %s to %s"
+msgstr "Изтласкване на %s „%s“ към „%s“"
-#: lib/merge.tcl:160
-#, tcl-format
-msgid "Merge Into %s"
-msgstr "Сливане в „%s“"
+#: lib/transport.tcl:132
+msgid "Push Branches"
+msgstr "Клони за изтласкване"
-#: lib/merge.tcl:179
-msgid "Revision To Merge"
-msgstr "Ð\92еÑ\80Ñ\81иÑ\8f за Ñ\81ливане"
+#: lib/transport.tcl:147
+msgid "Source Branches"
+msgstr "Ð\9aлони-изÑ\82оÑ\87ниÑ\86и"
-#: lib/merge.tcl:214
-msgid ""
-"Cannot abort while amending.\n"
-"\n"
-"You must finish amending this commit.\n"
-msgstr ""
-"Поправянето не може да бъде преустановено.\n"
-"\n"
-"Трябва да завършите поправката на това подаване.\n"
+#: lib/transport.tcl:162
+msgid "Destination Repository"
+msgstr "Целево хранилище"
-#: lib/merge.tcl:224
-msgid ""
-"Abort merge?\n"
-"\n"
-"Aborting the current merge will cause *ALL* uncommitted changes to be lost.\n"
-"\n"
-"Continue with aborting the current merge?"
-msgstr ""
-"Да се преустанови ли сливането?\n"
-"\n"
-"В такъв случай ●ВСИЧКИ● неподадени промени ще бъдат безвъзвратно загубени.\n"
-"\n"
-"Наистина ли да се преустанови сливането?"
+#: lib/transport.tcl:205
+msgid "Transfer Options"
+msgstr "Настройки при пренасянето"
-#: lib/merge.tcl:230
-msgid ""
-"Reset changes?\n"
-"\n"
-"Resetting the changes will cause *ALL* uncommitted changes to be lost.\n"
-"\n"
-"Continue with resetting the current changes?"
+#: lib/transport.tcl:207
+msgid "Force overwrite existing branch (may discard changes)"
msgstr ""
-"Да се занулят ли промените?\n"
-"\n"
-"В такъв случай ●ВСИЧКИ● неподадени промени ще бъдат безвъзвратно загубени.\n"
-"\n"
-"Наистина ли да се занулят промените?"
-
-#: lib/merge.tcl:241
-msgid "Aborting"
-msgstr "Преустановяване"
+"Изрично презаписване на съществуващ клон (някои промени може да бъдат "
+"загубени)"
-#: lib/merge.tcl:241
-msgid "files reset"
-msgstr "файла със занулени промени"
+#: lib/transport.tcl:211
+msgid "Use thin pack (for slow network connections)"
+msgstr "Максимална компресия (за бавни мрежови връзки)"
-#: lib/merge.tcl:269
-msgid "Abort failed."
-msgstr "Ð\9dеÑ\83Ñ\81пеÑ\88но пÑ\80еÑ\83Ñ\81Ñ\82ановÑ\8fване."
+#: lib/transport.tcl:215
+msgid "Include tags"
+msgstr "Ð\92клÑ\8eÑ\87ване на еÑ\82икеÑ\82иÑ\82е"
-#: lib/merge.tcl:271
-msgid "Abort completed. Ready."
-msgstr "Успешно преустановяване. Готовност за следващо действие."
+#: lib/transport.tcl:229
+#, tcl-format
+msgid "%s (%s): Push"
+msgstr "%s (%s): Изтласкване"
--- /dev/null
+# Portuguese translations for git-gui glossary.
+# Copyright (C) 2016 Shawn Pearce, et al.
+# This file is distributed under the same license as the git package.
+# Vasco Almeida <vascomalmeida@sapo.pt>, 2016.
+msgid ""
+msgstr ""
+"Project-Id-Version: git-gui glossary\n"
+"POT-Creation-Date: 2016-05-06 10:22+0000\n"
+"PO-Revision-Date: 2016-05-06 12:32+0000\n"
+"Last-Translator: Vasco Almeida <vascomalmeida@sapo.pt>\n"
+"Language-Team: Portuguese\n"
+"Language: pt\n"
+"MIME-Version: 1.0\n"
+"Content-Type: text/plain; charset=UTF-8\n"
+"Content-Transfer-Encoding: 8bit\n"
+"Plural-Forms: nplurals=2; plural=(n != 1);\n"
+"X-Generator: Virtaal 0.7.1\n"
+
+#. "English Definition (Dear translator: This file will never be visible to the user! It should only serve as a tool for you, the translator. Nothing more.)"
+msgid ""
+"English Term (Dear translator: This file will never be visible to the user!)"
+msgstr ""
+"Outro SCM em português:\n"
+"http://svn.code.sf.net/p/tortoisesvn/code/trunk/Languages/pt/TortoiseUI.po e "
+"\n"
+"http://svn.code.sf.net/p/tortoisesvn/code/trunk/Languages/pt/TortoiseDoc.po\n"
+" em html: https://tortoisesvn.net/docs/release/TortoiseSVN_pt/index.html\n"
+"\n"
+"https://translations.launchpad.net/tortoisehg (medíocre)"
+
+#. ""
+msgid "amend"
+msgstr "emendar"
+
+#. ""
+msgid "annotate"
+msgstr "anotar"
+
+#. "A 'branch' is an active line of development."
+msgid "branch [noun]"
+msgstr "ramo"
+
+#. ""
+msgid "branch [verb]"
+msgstr "criar ramo"
+
+#. ""
+msgid "checkout [noun]"
+msgstr "extração"
+
+#. "The action of updating the working tree to a revision which was stored in the object database."
+msgid "checkout [verb]"
+msgstr "extrair"
+
+#. ""
+msgid "clone [verb]"
+msgstr "clonar"
+
+#. "A single point in the git history."
+msgid "commit [noun]"
+msgstr "commit"
+
+#. "The action of storing a new snapshot of the project's state in the git history."
+msgid "commit [verb]"
+msgstr "submeter"
+
+#. ""
+msgid "diff [noun]"
+msgstr "diferenças"
+
+#. ""
+msgid "diff [verb]"
+msgstr "mostrar diferenças"
+
+#. "A fast-forward is a special type of merge where you have a revision and you are merging another branch's changes that happen to be a descendant of what you have."
+msgid "fast forward merge"
+msgstr "integração por avanço rápido"
+
+#. "Fetching a branch means to get the branch's head from a remote repository, to find out which objects are missing from the local object database, and to get them, too."
+msgid "fetch"
+msgstr "obter"
+
+#. "One context of consecutive lines in a whole patch, which consists of many such hunks"
+msgid "hunk"
+msgstr "excerto"
+
+#. "A collection of files. The index is a stored version of your working tree."
+msgid "index (in git-gui: staging area)"
+msgstr "índice"
+
+#. "A successful merge results in the creation of a new commit representing the result of the merge."
+msgid "merge [noun]"
+msgstr "integração"
+
+#. "To bring the contents of another branch into the current branch."
+msgid "merge [verb]"
+msgstr "integrar"
+
+#. ""
+msgid "message"
+msgstr "mensagem"
+
+#. "Deletes all stale tracking branches under <name>. These stale branches have already been removed from the remote repository referenced by <name>, but are still locally available in 'remotes/<name>'."
+msgid "prune"
+msgstr "podar"
+
+#. "Pulling a branch means to fetch it and merge it."
+msgid "pull"
+msgstr "puxar"
+
+#. "Pushing a branch means to get the branch's head ref from a remote repository, and ... (well, can someone please explain it for mere mortals?)"
+msgid "push"
+msgstr "publicar"
+
+#. ""
+msgid "redo"
+msgstr "refazer"
+
+#. "An other repository ('remote'). One might have a set of remotes whose branches one tracks."
+msgid "remote"
+msgstr "remoto"
+
+#. "A collection of refs (?) together with an object database containing all objects which are reachable from the refs... (oops, you've lost me here. Again, please an explanation for mere mortals?)"
+msgid "repository"
+msgstr "repositório"
+
+#. ""
+msgid "reset"
+msgstr "repor"
+
+#. ""
+msgid "revert"
+msgstr "reverter"
+
+#. "A particular state of files and directories which was stored in the object database."
+msgid "revision"
+msgstr "revisão"
+
+#. ""
+msgid "sign off"
+msgstr "assinar por baixo"
+
+#. ""
+msgid "staging area"
+msgstr "área de estágio"
+
+#. ""
+msgid "status"
+msgstr "estado"
+
+#. "A ref pointing to a tag or commit object"
+msgid "tag [noun]"
+msgstr "tag"
+
+#. ""
+msgid "tag [verb]"
+msgstr "criar tag"
+
+#. "A regular git branch that is used to follow changes from another repository."
+msgid "tracking branch"
+msgstr "ramo de monitorização"
+
+#. ""
+msgid "undo"
+msgstr "desfazer"
+
+#. ""
+msgid "update"
+msgstr "atualizar"
+
+#. ""
+msgid "verify"
+msgstr "verificar"
+
+#. "The tree of actual checked out files."
+msgid "working copy, working tree"
+msgstr "cópia de trabalho, árvore de trabalho"
+
+#. "a commit that succeeds the current one in git's graph of commits (not necessarily directly)"
+msgid "ancestor"
+msgstr "antecessor"
+
+#. "prematurely stop and abandon an operation"
+msgid "abort"
+msgstr "abortar"
+
+#. "a repository with only .git directory, without working directory"
+msgid "bare repository"
+msgstr "repositório nu"
+
+#. "a parent version of the current file"
+msgid "base"
+msgstr "base"
+
+#. "get the authors responsible for each line in a file"
+msgid "blame"
+msgstr "culpar"
+
+#. "to select and apply a single commit without merging"
+msgid "cherry-pick"
+msgstr "efetuar cherry-pick (escolher-a-dedo?, selecionar?)"
+
+#. "a commit that directly succeeds the current one in git's graph of commits"
+msgid "child"
+msgstr "filho"
+
+#. "clean the state of the git repository, often after manually stopped operation"
+msgid "cleanup"
+msgstr "limpar"
+
+#. "a message that gets attached with any commit"
+msgid "commit message"
+msgstr "mensagem de commit"
+
+#. "a commit that precedes the current one in git's graph of commits (not necessarily directly)"
+msgid "descendant"
+msgstr "descendente"
+
+#. "checkout of a revision rather than a some head"
+msgid "detached checkout"
+msgstr "extração destacada"
+
+#. "any merge strategy that works on a file by file basis"
+msgid "file level merging"
+msgstr "integração ao nível de ficheiros"
+
+#. "the last revision in a branch"
+msgid "head"
+msgstr "cabeça"
+
+#. "script that gets executed automatically on some event"
+msgid "hook"
+msgstr "gancho"
+
+#. "the first checkout during a clone operation"
+msgid "initial checkout"
+msgstr "extração inicial"
+
+#. "a branch that resides in the local git repository"
+msgid "local branch"
+msgstr "ramo local"
+
+#. "a Git object that is not part of any pack"
+msgid "loose object"
+msgstr "objeto solto"
+
+#. "a branch called by convention 'master' that exists in a newly created git repository"
+msgid "master branch"
+msgstr "ramo mestre"
+
+#. "a remote called by convention 'origin' that the current git repository has been cloned from"
+msgid "origin"
+msgstr "origem"
+
+#. "a file containing many git objects packed together"
+msgid "pack [noun]"
+msgstr "pacote"
+
+#. "a Git object part of some pack"
+msgid "packed object"
+msgstr "objeto compactado"
+
+#. "a commit that directly precedes the current one in git's graph of commits"
+msgid "parent"
+msgstr "pai"
+
+#. "the log file containing all states of the HEAD reference (in other words past pristine states of the working copy)"
+msgid "reflog"
+msgstr "reflog"
+
+#. "decide which changes from alternative versions of a file should persist in Git"
+msgid "resolve (a conflict)"
+msgstr "resolver (um conflito)"
+
+#. "abandon changes and go to pristine version"
+msgid "revert changes"
+msgstr "reverter alterações"
+
+#. "expression that signifies a revision in git"
+msgid "revision expression"
+msgstr "expressão de revisão"
+
+#. "add some content of files and directories to the staging area in preparation for a commit"
+msgid "stage/unstage"
+msgstr "preparar/retirar"
+
+#. "temporarily save changes in a stack without committing"
+msgid "stash"
+msgstr "empilhar"
+
+#. "file whose content is tracked/not tracked by git"
+msgid "tracked/untracked"
+msgstr "controlado/não controlado"
--- /dev/null
+# Portuguese translations for git-gui package.
+# Copyright (C) 2016 Shawn Pearce, et al.
+# This file is distributed under the same license as the git package.
+# Vasco Almeida <vascomalmeida@sapo.pt>, 2016.
+msgid ""
+msgstr ""
+"Project-Id-Version: git-gui\n"
+"Report-Msgid-Bugs-To: \n"
+"POT-Creation-Date: 2016-05-06 09:36+0000\n"
+"PO-Revision-Date: 2016-05-06 13:09+0000\n"
+"Last-Translator: Vasco Almeida <vascomalmeida@sapo.pt>\n"
+"Language-Team: Portuguese\n"
+"Language: pt\n"
+"MIME-Version: 1.0\n"
+"Content-Type: text/plain; charset=UTF-8\n"
+"Content-Transfer-Encoding: 8bit\n"
+"Plural-Forms: nplurals=2; plural=(n != 1);\n"
+"X-Generator: Virtaal 0.7.1\n"
+
+#: git-gui.sh:861
+#, tcl-format
+msgid "Invalid font specified in %s:"
+msgstr "Tipo de letra inválido especificado em %s:"
+
+#: git-gui.sh:915
+msgid "Main Font"
+msgstr "Tipo de letra principal"
+
+#: git-gui.sh:916
+msgid "Diff/Console Font"
+msgstr "Tipo de letra Diferenças/Consola"
+
+#: git-gui.sh:931 git-gui.sh:945 git-gui.sh:958 git-gui.sh:1048
+#: git-gui.sh:1067 git-gui.sh:3125
+msgid "git-gui: fatal error"
+msgstr "git-gui: erro fatal"
+
+#: git-gui.sh:932
+msgid "Cannot find git in PATH."
+msgstr "Não é possível encontrar o git em PATH."
+
+#: git-gui.sh:959
+msgid "Cannot parse Git version string:"
+msgstr "Não é possível analisar a versão do Git:"
+
+#: git-gui.sh:984
+#, tcl-format
+msgid ""
+"Git version cannot be determined.\n"
+"\n"
+"%s claims it is version '%s'.\n"
+"\n"
+"%s requires at least Git 1.5.0 or later.\n"
+"\n"
+"Assume '%s' is version 1.5.0?\n"
+msgstr ""
+"A versão do Git não pôde ser determinada.\n"
+"\n"
+"%s alega que está na versão '%s'.\n"
+"\n"
+"%s requer pelo menos Git 1.5.0 ou mais recente.\n"
+"\n"
+"Assumir que '%s' está na versão 1.5.0?\n"
+
+#: git-gui.sh:1281
+msgid "Git directory not found:"
+msgstr "Diretório Git não encontrado:"
+
+#: git-gui.sh:1315
+msgid "Cannot move to top of working directory:"
+msgstr "Não é possível mover para o topo do diretório de trabalho:"
+
+#: git-gui.sh:1323
+msgid "Cannot use bare repository:"
+msgstr "Não é possível usar repositório nu:"
+
+#: git-gui.sh:1331
+msgid "No working directory"
+msgstr "Nenhum diretório de trabalho"
+
+#: git-gui.sh:1503 lib/checkout_op.tcl:306
+msgid "Refreshing file status..."
+msgstr "A atualizar estado do ficheiro..."
+
+#: git-gui.sh:1563
+msgid "Scanning for modified files ..."
+msgstr "A procurar por ficheiros modificados..."
+
+#: git-gui.sh:1639
+msgid "Calling prepare-commit-msg hook..."
+msgstr ""
+"A invocar gancho preparar-mensagem-de-commit (prepare-commit-msg hook)..."
+
+#: git-gui.sh:1656
+msgid "Commit declined by prepare-commit-msg hook."
+msgstr ""
+"Commit recusado pelo gancho preparar-mensagem-de-commit (prepare-commit-msg "
+"hook)."
+
+#: git-gui.sh:1814 lib/browser.tcl:252
+msgid "Ready."
+msgstr "Pronto."
+
+#: git-gui.sh:1978
+#, tcl-format
+msgid ""
+"Display limit (gui.maxfilesdisplayed = %s) reached, not showing all %s files."
+msgstr ""
+"Limite de visualização (gui.maxfilesdisplayed = %s) atingido, não são "
+"mostrados todos os %s ficheiros."
+
+#: git-gui.sh:2101
+msgid "Unmodified"
+msgstr "Não modificado"
+
+#: git-gui.sh:2103
+msgid "Modified, not staged"
+msgstr "Modificado, não preparado"
+
+#: git-gui.sh:2104 git-gui.sh:2116
+msgid "Staged for commit"
+msgstr "Preparado para commit"
+
+#: git-gui.sh:2105 git-gui.sh:2117
+msgid "Portions staged for commit"
+msgstr "Porções preparadas para commit"
+
+#: git-gui.sh:2106 git-gui.sh:2118
+msgid "Staged for commit, missing"
+msgstr "Preparado para commit, em falta"
+
+#: git-gui.sh:2108
+msgid "File type changed, not staged"
+msgstr "Tipo de ficheiro modificado, não preparado"
+
+#: git-gui.sh:2109 git-gui.sh:2110
+msgid "File type changed, old type staged for commit"
+msgstr "Tipo de ficheiro modificado, tipo antigo preparado para commit"
+
+#: git-gui.sh:2111
+msgid "File type changed, staged"
+msgstr "Tipo de ficheiro modificado, preparado"
+
+#: git-gui.sh:2112
+msgid "File type change staged, modification not staged"
+msgstr "Tipo de ficheiro modificado, modificação não preparada"
+
+#: git-gui.sh:2113
+msgid "File type change staged, file missing"
+msgstr "Tipo de ficheiro modificado, ficheiro em falta"
+
+#: git-gui.sh:2115
+msgid "Untracked, not staged"
+msgstr "Não controlado, não preparado"
+
+#: git-gui.sh:2120
+msgid "Missing"
+msgstr "Em falta"
+
+#: git-gui.sh:2121
+msgid "Staged for removal"
+msgstr "Preparado para remoção"
+
+#: git-gui.sh:2122
+msgid "Staged for removal, still present"
+msgstr "Preparado para remoção, ainda presente"
+
+#: git-gui.sh:2124 git-gui.sh:2125 git-gui.sh:2126 git-gui.sh:2127
+#: git-gui.sh:2128 git-gui.sh:2129
+msgid "Requires merge resolution"
+msgstr "Requer resolução de integração"
+
+#: git-gui.sh:2164
+msgid "Starting gitk... please wait..."
+msgstr "A iniciar gitk... aguarde..."
+
+#: git-gui.sh:2176
+msgid "Couldn't find gitk in PATH"
+msgstr "Não foi possível encontrar gitk em PATH"
+
+#: git-gui.sh:2235
+msgid "Couldn't find git gui in PATH"
+msgstr "Não foi possível encontrar git gui em PATH"
+
+#: git-gui.sh:2654 lib/choose_repository.tcl:41
+msgid "Repository"
+msgstr "Repositório"
+
+#: git-gui.sh:2655
+msgid "Edit"
+msgstr "Editar"
+
+#: git-gui.sh:2657 lib/choose_rev.tcl:567
+msgid "Branch"
+msgstr "Ramo"
+
+#: git-gui.sh:2660 lib/choose_rev.tcl:554
+msgid "Commit@@noun"
+msgstr "Commit"
+
+#: git-gui.sh:2663 lib/merge.tcl:123 lib/merge.tcl:152 lib/merge.tcl:170
+msgid "Merge"
+msgstr "Integrar"
+
+#: git-gui.sh:2664 lib/choose_rev.tcl:563
+msgid "Remote"
+msgstr "Remoto"
+
+#: git-gui.sh:2667
+msgid "Tools"
+msgstr "Ferramentas"
+
+#: git-gui.sh:2676
+msgid "Explore Working Copy"
+msgstr "Explorar cópia de trabalho"
+
+#: git-gui.sh:2682
+msgid "Git Bash"
+msgstr "Git Bash"
+
+#: git-gui.sh:2692
+msgid "Browse Current Branch's Files"
+msgstr "Navegar pelos ficheiro do ramo atual"
+
+#: git-gui.sh:2696
+msgid "Browse Branch Files..."
+msgstr "Navegar pelos ficheiros do ramo..."
+
+#: git-gui.sh:2701
+msgid "Visualize Current Branch's History"
+msgstr "Visualizar histórico do ramo atual"
+
+#: git-gui.sh:2705
+msgid "Visualize All Branch History"
+msgstr "Visualizar histórico de todos os ramos"
+
+#: git-gui.sh:2712
+#, tcl-format
+msgid "Browse %s's Files"
+msgstr "Navegar pelos ficheiro de %s"
+
+#: git-gui.sh:2714
+#, tcl-format
+msgid "Visualize %s's History"
+msgstr "Visualizar histórico de %s"
+
+#: git-gui.sh:2719 lib/database.tcl:40 lib/database.tcl:66
+msgid "Database Statistics"
+msgstr "Estatísticas da base de dados"
+
+#: git-gui.sh:2722 lib/database.tcl:33
+msgid "Compress Database"
+msgstr "Comprimir base de dados"
+
+#: git-gui.sh:2725
+msgid "Verify Database"
+msgstr "Verificar base de dados"
+
+#: git-gui.sh:2732 git-gui.sh:2736 git-gui.sh:2740 lib/shortcut.tcl:8
+#: lib/shortcut.tcl:40 lib/shortcut.tcl:72
+msgid "Create Desktop Icon"
+msgstr "Criar ícone no ambiente de trabalho"
+
+#: git-gui.sh:2748 lib/choose_repository.tcl:193 lib/choose_repository.tcl:201
+msgid "Quit"
+msgstr "Sair"
+
+#: git-gui.sh:2756
+msgid "Undo"
+msgstr "Desfazer"
+
+#: git-gui.sh:2759
+msgid "Redo"
+msgstr "Refazer"
+
+#: git-gui.sh:2763 git-gui.sh:3368
+msgid "Cut"
+msgstr "Cortar"
+
+#: git-gui.sh:2766 git-gui.sh:3371 git-gui.sh:3445 git-gui.sh:3530
+#: lib/console.tcl:69
+msgid "Copy"
+msgstr "Copiar"
+
+#: git-gui.sh:2769 git-gui.sh:3374
+msgid "Paste"
+msgstr "Colar"
+
+#: git-gui.sh:2772 git-gui.sh:3377 lib/remote_branch_delete.tcl:39
+#: lib/branch_delete.tcl:28
+msgid "Delete"
+msgstr "Eliminar"
+
+#: git-gui.sh:2776 git-gui.sh:3381 git-gui.sh:3534 lib/console.tcl:71
+msgid "Select All"
+msgstr "Selecionar tudo"
+
+#: git-gui.sh:2785
+msgid "Create..."
+msgstr "Criar..."
+
+#: git-gui.sh:2791
+msgid "Checkout..."
+msgstr "Extrair..."
+
+#: git-gui.sh:2797
+msgid "Rename..."
+msgstr "Mudar nome..."
+
+#: git-gui.sh:2802
+msgid "Delete..."
+msgstr "Eliminar..."
+
+#: git-gui.sh:2807
+msgid "Reset..."
+msgstr "Repor..."
+
+#: git-gui.sh:2817
+msgid "Done"
+msgstr "Concluído"
+
+#: git-gui.sh:2819
+msgid "Commit@@verb"
+msgstr "Submeter"
+
+#: git-gui.sh:2828 git-gui.sh:3309
+msgid "New Commit"
+msgstr "Novo commit"
+
+#: git-gui.sh:2836 git-gui.sh:3316
+msgid "Amend Last Commit"
+msgstr "Emendar último commit"
+
+#: git-gui.sh:2846 git-gui.sh:3270 lib/remote_branch_delete.tcl:101
+msgid "Rescan"
+msgstr "Reanalisar"
+
+#: git-gui.sh:2852
+msgid "Stage To Commit"
+msgstr "Preparar para commit"
+
+#: git-gui.sh:2858
+msgid "Stage Changed Files To Commit"
+msgstr "Preparar ficheiros modificados para commit"
+
+#: git-gui.sh:2864
+msgid "Unstage From Commit"
+msgstr "Retirar do commit"
+
+#: git-gui.sh:2870 lib/index.tcl:442
+msgid "Revert Changes"
+msgstr "Reverter alterações"
+
+#: git-gui.sh:2878 git-gui.sh:3581 git-gui.sh:3612
+msgid "Show Less Context"
+msgstr "Mostrar menos contexto"
+
+#: git-gui.sh:2882 git-gui.sh:3585 git-gui.sh:3616
+msgid "Show More Context"
+msgstr "Mostrar mais contexto"
+
+#: git-gui.sh:2889 git-gui.sh:3283 git-gui.sh:3392
+msgid "Sign Off"
+msgstr "Assinar por baixo"
+
+#: git-gui.sh:2905
+msgid "Local Merge..."
+msgstr "Integração local..."
+
+#: git-gui.sh:2910
+msgid "Abort Merge..."
+msgstr "Abortar integração..."
+
+#: git-gui.sh:2922 git-gui.sh:2950
+msgid "Add..."
+msgstr "Adicionar..."
+
+#: git-gui.sh:2926
+msgid "Push..."
+msgstr "Publicar..."
+
+#: git-gui.sh:2930
+msgid "Delete Branch..."
+msgstr "Eliminar ramo..."
+
+#: git-gui.sh:2940 git-gui.sh:3563
+msgid "Options..."
+msgstr "Opções..."
+
+#: git-gui.sh:2951
+msgid "Remove..."
+msgstr "Remover..."
+
+#: git-gui.sh:2960 lib/choose_repository.tcl:55
+msgid "Help"
+msgstr "Ajuda"
+
+#: git-gui.sh:2964 git-gui.sh:2968 lib/choose_repository.tcl:49
+#: lib/choose_repository.tcl:58 lib/about.tcl:14
+#, tcl-format
+msgid "About %s"
+msgstr "Sobre %s"
+
+#: git-gui.sh:2992
+msgid "Online Documentation"
+msgstr "Documentação online"
+
+#: git-gui.sh:2995 lib/choose_repository.tcl:52 lib/choose_repository.tcl:61
+msgid "Show SSH Key"
+msgstr "Mostrar chave SSH"
+
+#: git-gui.sh:3014 git-gui.sh:3146
+msgid "Usage"
+msgstr "Utilização"
+
+#: git-gui.sh:3095 lib/blame.tcl:573
+msgid "Error"
+msgstr "Erro"
+
+#: git-gui.sh:3126
+#, tcl-format
+msgid "fatal: cannot stat path %s: No such file or directory"
+msgstr ""
+"fatal: não é possível obter estado do caminho %s: Ficheiro ou diretório "
+"inexistente"
+
+#: git-gui.sh:3159
+msgid "Current Branch:"
+msgstr "Ramo atual:"
+
+#: git-gui.sh:3185
+msgid "Staged Changes (Will Commit)"
+msgstr "Alterações preparadas (para commit)"
+
+#: git-gui.sh:3205
+msgid "Unstaged Changes"
+msgstr "Alterações não preparadas"
+
+#: git-gui.sh:3276
+msgid "Stage Changed"
+msgstr "Preparar modificados"
+
+#: git-gui.sh:3295 lib/transport.tcl:137 lib/transport.tcl:229
+msgid "Push"
+msgstr "Publicar"
+
+#: git-gui.sh:3330
+msgid "Initial Commit Message:"
+msgstr "Mensagem de commit inicial:"
+
+#: git-gui.sh:3331
+msgid "Amended Commit Message:"
+msgstr "Mensagem de commit emendada:"
+
+#: git-gui.sh:3332
+msgid "Amended Initial Commit Message:"
+msgstr "Mensagem de commit inicial emendada:"
+
+#: git-gui.sh:3333
+msgid "Amended Merge Commit Message:"
+msgstr "Mensagem de commit de integração emendada:"
+
+#: git-gui.sh:3334
+msgid "Merge Commit Message:"
+msgstr "Mensagem de commit de integração:"
+
+#: git-gui.sh:3335
+msgid "Commit Message:"
+msgstr "Mensagem de commit:"
+
+#: git-gui.sh:3384 git-gui.sh:3538 lib/console.tcl:73
+msgid "Copy All"
+msgstr "Copiar tudo"
+
+#: git-gui.sh:3408 lib/blame.tcl:105
+msgid "File:"
+msgstr "Ficheiro:"
+
+#: git-gui.sh:3526
+msgid "Refresh"
+msgstr "Atualizar"
+
+#: git-gui.sh:3547
+msgid "Decrease Font Size"
+msgstr "Diminuir tamanho de letra"
+
+#: git-gui.sh:3551
+msgid "Increase Font Size"
+msgstr "Aumentar tamanho de letra"
+
+#: git-gui.sh:3559 lib/blame.tcl:294
+msgid "Encoding"
+msgstr "Codificação"
+
+#: git-gui.sh:3570
+msgid "Apply/Reverse Hunk"
+msgstr "Aplicar/Reverter excerto"
+
+#: git-gui.sh:3575
+msgid "Apply/Reverse Line"
+msgstr "Aplicar/Reverter linha"
+
+#: git-gui.sh:3594
+msgid "Run Merge Tool"
+msgstr "Executar ferramenta de integração"
+
+#: git-gui.sh:3599
+msgid "Use Remote Version"
+msgstr "Usar a versão remota"
+
+#: git-gui.sh:3603
+msgid "Use Local Version"
+msgstr "Usar a versão local"
+
+#: git-gui.sh:3607
+msgid "Revert To Base"
+msgstr "Reverter para a base"
+
+#: git-gui.sh:3625
+msgid "Visualize These Changes In The Submodule"
+msgstr "Visualizar estas alterações no submódulo"
+
+#: git-gui.sh:3629
+msgid "Visualize Current Branch History In The Submodule"
+msgstr "Visualizar histórico do ramo atual no submódulo"
+
+#: git-gui.sh:3633
+msgid "Visualize All Branch History In The Submodule"
+msgstr "Visualizar histórico de todos os ramos no submódulo"
+
+#: git-gui.sh:3638
+msgid "Start git gui In The Submodule"
+msgstr "Iniciar git gui no submódulo"
+
+#: git-gui.sh:3673
+msgid "Unstage Hunk From Commit"
+msgstr "Retirar excerto do commit"
+
+#: git-gui.sh:3675
+msgid "Unstage Lines From Commit"
+msgstr "Retirar linhas do commit"
+
+#: git-gui.sh:3677
+msgid "Unstage Line From Commit"
+msgstr "Retirar linha do commit"
+
+#: git-gui.sh:3680
+msgid "Stage Hunk For Commit"
+msgstr "Preparar excerto para commit"
+
+#: git-gui.sh:3682
+msgid "Stage Lines For Commit"
+msgstr "Preparar linhas para commit"
+
+#: git-gui.sh:3684
+msgid "Stage Line For Commit"
+msgstr "Preparar linha para commit"
+
+#: git-gui.sh:3709
+msgid "Initializing..."
+msgstr "A inicializar..."
+
+#: git-gui.sh:3852
+#, tcl-format
+msgid ""
+"Possible environment issues exist.\n"
+"\n"
+"The following environment variables are probably\n"
+"going to be ignored by any Git subprocess run\n"
+"by %s:\n"
+"\n"
+msgstr ""
+"Existem possíveis erros de ambiente.\n"
+"\n"
+"As seguintes variáveis de ambiente serão provavelmente\n"
+"ignoradas pelos subprocessos do Git executados\n"
+"por %s:\n"
+"\n"
+
+#: git-gui.sh:3881
+msgid ""
+"\n"
+"This is due to a known issue with the\n"
+"Tcl binary distributed by Cygwin."
+msgstr ""
+"\n"
+"Devido a um problema conhecido com o\n"
+"binário Tcl distribuído pelo Cygwin."
+
+#: git-gui.sh:3886
+#, tcl-format
+msgid ""
+"\n"
+"\n"
+"A good replacement for %s\n"
+"is placing values for the user.name and\n"
+"user.email settings into your personal\n"
+"~/.gitconfig file.\n"
+msgstr ""
+"\n"
+"\n"
+"Um bom substituto para %s\n"
+"é colocar valores das definições user.name e\n"
+"user.email no ficheiro pessoal ~/.gitconfig.\n"
+
+#: lib/line.tcl:17
+msgid "Goto Line:"
+msgstr "Ir para a linha:"
+
+#: lib/line.tcl:23
+msgid "Go"
+msgstr "Ir"
+
+#: lib/console.tcl:59
+msgid "Working... please wait..."
+msgstr "A processar... aguarde..."
+
+#: lib/console.tcl:81 lib/checkout_op.tcl:146 lib/sshkey.tcl:55
+#: lib/database.tcl:30
+msgid "Close"
+msgstr "Fechar"
+
+#: lib/console.tcl:186
+msgid "Success"
+msgstr "Sucesso"
+
+#: lib/console.tcl:200
+msgid "Error: Command Failed"
+msgstr "Erro: falha ao executar comando"
+
+#: lib/checkout_op.tcl:85
+#, tcl-format
+msgid "Fetching %s from %s"
+msgstr "A obter %s de %s"
+
+#: lib/checkout_op.tcl:133
+#, tcl-format
+msgid "fatal: Cannot resolve %s"
+msgstr "fatal: Não é possível resolver %s"
+
+#: lib/checkout_op.tcl:175
+#, tcl-format
+msgid "Branch '%s' does not exist."
+msgstr "O ramo '%s' não existe."
+
+#: lib/checkout_op.tcl:194
+#, tcl-format
+msgid "Failed to configure simplified git-pull for '%s'."
+msgstr "Falha ao configurar git-pull simplificado de '%s'."
+
+#: lib/checkout_op.tcl:202 lib/branch_rename.tcl:102
+#, tcl-format
+msgid "Branch '%s' already exists."
+msgstr "O ramo '%s' já existe."
+
+#: lib/checkout_op.tcl:229
+#, tcl-format
+msgid ""
+"Branch '%s' already exists.\n"
+"\n"
+"It cannot fast-forward to %s.\n"
+"A merge is required."
+msgstr ""
+"O ramo '%s' já existe.\n"
+"\n"
+"Não pode ser avançado rapidamente para %s.\n"
+"Integração necessária."
+
+#: lib/checkout_op.tcl:243
+#, tcl-format
+msgid "Merge strategy '%s' not supported."
+msgstr "A estratégia de integração '%s' não é suportada."
+
+#: lib/checkout_op.tcl:262
+#, tcl-format
+msgid "Failed to update '%s'."
+msgstr "Falha ao atualizar '%s'."
+
+#: lib/checkout_op.tcl:274
+msgid "Staging area (index) is already locked."
+msgstr "A área de estágio (índice) já está bloqueada."
+
+#: lib/checkout_op.tcl:289
+msgid ""
+"Last scanned state does not match repository state.\n"
+"\n"
+"Another Git program has modified this repository since the last scan. A "
+"rescan must be performed before the current branch can be changed.\n"
+"\n"
+"The rescan will be automatically started now.\n"
+msgstr ""
+"O último estado analisado não corresponde ao estado do repositório.\n"
+"\n"
+"Outro programa Git modificou este repositório deste a última análise. Deve-"
+"se reanalisar antes do ramo atual poder ser alterado.\n"
+"\n"
+"Irá-se reanalisar automaticamente agora.\n"
+
+#: lib/checkout_op.tcl:345
+#, tcl-format
+msgid "Updating working directory to '%s'..."
+msgstr "A atualizar o diretório de trabalho para '%s'..."
+
+#: lib/checkout_op.tcl:346
+msgid "files checked out"
+msgstr "ficheiros extraídos"
+
+#: lib/checkout_op.tcl:376
+#, tcl-format
+msgid "Aborted checkout of '%s' (file level merging is required)."
+msgstr ""
+"Extração de '%s' abortada (é necessário integrar ao nível de ficheiros)."
+
+#: lib/checkout_op.tcl:377
+msgid "File level merge required."
+msgstr "Integração ao nível de ficheiros necessária."
+
+#: lib/checkout_op.tcl:381
+#, tcl-format
+msgid "Staying on branch '%s'."
+msgstr "Permanecer no ramo '%s'."
+
+#: lib/checkout_op.tcl:452
+msgid ""
+"You are no longer on a local branch.\n"
+"\n"
+"If you wanted to be on a branch, create one now starting from 'This Detached "
+"Checkout'."
+msgstr ""
+"Já não se encontra num ramo local.\n"
+"\n"
+"Se queria estar sobre um ramo, crie um a partir de 'Esta extração destacada'."
+
+#: lib/checkout_op.tcl:503 lib/checkout_op.tcl:507
+#, tcl-format
+msgid "Checked out '%s'."
+msgstr "'%s' extraído."
+
+#: lib/checkout_op.tcl:535
+#, tcl-format
+msgid "Resetting '%s' to '%s' will lose the following commits:"
+msgstr "Ao repor '%s' para '%s' perderá os seguintes commits:"
+
+#: lib/checkout_op.tcl:557
+msgid "Recovering lost commits may not be easy."
+msgstr "Recuperar commits perdidos pode não ser fácil."
+
+#: lib/checkout_op.tcl:562
+#, tcl-format
+msgid "Reset '%s'?"
+msgstr "Repor '%s'?"
+
+#: lib/checkout_op.tcl:567 lib/tools_dlg.tcl:336 lib/merge.tcl:166
+msgid "Visualize"
+msgstr "Visualizar"
+
+#: lib/checkout_op.tcl:571 lib/branch_create.tcl:85
+msgid "Reset"
+msgstr "Repor"
+
+#: lib/checkout_op.tcl:579 lib/transport.tcl:141 lib/remote_add.tcl:34
+#: lib/browser.tcl:292 lib/branch_checkout.tcl:30 lib/choose_font.tcl:45
+#: lib/option.tcl:127 lib/tools_dlg.tcl:41 lib/tools_dlg.tcl:202
+#: lib/tools_dlg.tcl:345 lib/branch_rename.tcl:32
+#: lib/remote_branch_delete.tcl:43 lib/branch_create.tcl:37
+#: lib/branch_delete.tcl:34 lib/merge.tcl:174
+msgid "Cancel"
+msgstr "Cancelar"
+
+#: lib/checkout_op.tcl:635
+#, tcl-format
+msgid ""
+"Failed to set current branch.\n"
+"\n"
+"This working directory is only partially switched. We successfully updated "
+"your files, but failed to update an internal Git file.\n"
+"\n"
+"This should not have occurred. %s will now close and give up."
+msgstr ""
+"Falha ao definir ramo atual.\n"
+"\n"
+"Apenas se mudou o diretório de trabalho parcialmente. Os ficheiros foram "
+"atualizados com sucesso, mas não foi possível atualizar o ficheiro Git "
+"interno.\n"
+"\n"
+"Não devia ter ocorrido. %s irá terminar e desistir."
+
+#: lib/transport.tcl:6 lib/remote_add.tcl:132
+#, tcl-format
+msgid "fetch %s"
+msgstr "obter %s"
+
+#: lib/transport.tcl:7
+#, tcl-format
+msgid "Fetching new changes from %s"
+msgstr "Obter novas alterações de %s"
+
+#: lib/transport.tcl:18
+#, tcl-format
+msgid "remote prune %s"
+msgstr "poda remota de %s"
+
+#: lib/transport.tcl:19
+#, tcl-format
+msgid "Pruning tracking branches deleted from %s"
+msgstr "A podar ramos de monitorização eliminados de %s"
+
+#: lib/transport.tcl:25
+msgid "fetch all remotes"
+msgstr "obter de todos os remotos"
+
+#: lib/transport.tcl:26
+msgid "Fetching new changes from all remotes"
+msgstr "A obter novas alterações de todos os remotos"
+
+#: lib/transport.tcl:40
+msgid "remote prune all remotes"
+msgstr "poda remota de todos os remotos"
+
+#: lib/transport.tcl:41
+msgid "Pruning tracking branches deleted from all remotes"
+msgstr "A podar ramos de monitorização eliminados de todos os remotos"
+
+#: lib/transport.tcl:54 lib/transport.tcl:92 lib/transport.tcl:110
+#: lib/remote_add.tcl:162
+#, tcl-format
+msgid "push %s"
+msgstr "publicar %s"
+
+#: lib/transport.tcl:55
+#, tcl-format
+msgid "Pushing changes to %s"
+msgstr "A publicar alterações em %s"
+
+#: lib/transport.tcl:93
+#, tcl-format
+msgid "Mirroring to %s"
+msgstr "A espelhar em %s"
+
+#: lib/transport.tcl:111
+#, tcl-format
+msgid "Pushing %s %s to %s"
+msgstr "A publicar %s %s em %s"
+
+#: lib/transport.tcl:132
+msgid "Push Branches"
+msgstr "Publicar ramos"
+
+#: lib/transport.tcl:147
+msgid "Source Branches"
+msgstr "Ramos de origem"
+
+#: lib/transport.tcl:162
+msgid "Destination Repository"
+msgstr "Repositório de destino"
+
+#: lib/transport.tcl:165 lib/remote_branch_delete.tcl:51
+msgid "Remote:"
+msgstr "Remoto:"
+
+#: lib/transport.tcl:187 lib/remote_branch_delete.tcl:72
+msgid "Arbitrary Location:"
+msgstr "Localização arbitrária:"
+
+#: lib/transport.tcl:205
+msgid "Transfer Options"
+msgstr "Opções de transferência"
+
+#: lib/transport.tcl:207
+msgid "Force overwrite existing branch (may discard changes)"
+msgstr "Forçar substituição de ramos existente (pode descartar alterações)"
+
+#: lib/transport.tcl:211
+msgid "Use thin pack (for slow network connections)"
+msgstr "Usar pacote fino (para conexões de rede lentas)"
+
+#: lib/transport.tcl:215
+msgid "Include tags"
+msgstr "Incluir tags"
+
+#: lib/remote_add.tcl:20
+msgid "Add Remote"
+msgstr "Adicionar remoto"
+
+#: lib/remote_add.tcl:25
+msgid "Add New Remote"
+msgstr "Adicionar novo remoto"
+
+#: lib/remote_add.tcl:30 lib/tools_dlg.tcl:37
+msgid "Add"
+msgstr "Adicionar"
+
+#: lib/remote_add.tcl:39
+msgid "Remote Details"
+msgstr "Detalhes do remoto"
+
+#: lib/remote_add.tcl:41 lib/tools_dlg.tcl:51 lib/branch_create.tcl:44
+msgid "Name:"
+msgstr "Nome:"
+
+#: lib/remote_add.tcl:50
+msgid "Location:"
+msgstr "Localização:"
+
+#: lib/remote_add.tcl:60
+msgid "Further Action"
+msgstr "Ação adicional"
+
+#: lib/remote_add.tcl:63
+msgid "Fetch Immediately"
+msgstr "Obter imediatamente"
+
+#: lib/remote_add.tcl:69
+msgid "Initialize Remote Repository and Push"
+msgstr "Inicializar repositório remoto e publicar"
+
+#: lib/remote_add.tcl:75
+msgid "Do Nothing Else Now"
+msgstr "Não fazer mais nada agora"
+
+#: lib/remote_add.tcl:100
+msgid "Please supply a remote name."
+msgstr "Forneça um nome para o remoto."
+
+#: lib/remote_add.tcl:113
+#, tcl-format
+msgid "'%s' is not an acceptable remote name."
+msgstr "'%s' não pode ser aceite como nome de remoto."
+
+#: lib/remote_add.tcl:124
+#, tcl-format
+msgid "Failed to add remote '%s' of location '%s'."
+msgstr "Falha ao adicionar remoto '%s' localizado em '%s'."
+
+#: lib/remote_add.tcl:133
+#, tcl-format
+msgid "Fetching the %s"
+msgstr "A obter de %s"
+
+#: lib/remote_add.tcl:156
+#, tcl-format
+msgid "Do not know how to initialize repository at location '%s'."
+msgstr "Não se sabe como inicializar o repositório localizado em '%s'."
+
+#: lib/remote_add.tcl:163
+#, tcl-format
+msgid "Setting up the %s (at %s)"
+msgstr "A configurar %s (em %s)"
+
+#: lib/browser.tcl:17
+msgid "Starting..."
+msgstr "A iniciar..."
+
+#: lib/browser.tcl:27
+msgid "File Browser"
+msgstr "Navegador de ficheiros"
+
+#: lib/browser.tcl:132 lib/browser.tcl:149
+#, tcl-format
+msgid "Loading %s..."
+msgstr "A carregar %s..."
+
+#: lib/browser.tcl:193
+msgid "[Up To Parent]"
+msgstr "[Subir]"
+
+#: lib/browser.tcl:275 lib/browser.tcl:282
+msgid "Browse Branch Files"
+msgstr "Navegar pelos ficheiros do ramo"
+
+#: lib/browser.tcl:288 lib/choose_repository.tcl:422
+#: lib/choose_repository.tcl:509 lib/choose_repository.tcl:518
+#: lib/choose_repository.tcl:1074
+msgid "Browse"
+msgstr "Navegar"
+
+#: lib/browser.tcl:297 lib/branch_checkout.tcl:35 lib/tools_dlg.tcl:321
+msgid "Revision"
+msgstr "Revisão"
+
+#: lib/tools.tcl:75
+#, tcl-format
+msgid "Running %s requires a selected file."
+msgstr "Deve selecionar um ficheiro para executar %s."
+
+#: lib/tools.tcl:91
+#, tcl-format
+msgid "Are you sure you want to run %1$s on file \"%2$s\"?"
+msgstr "Tem a certeza que pretende executar %1$s sobre o ficheiro \"%2$s\"?"
+
+#: lib/tools.tcl:95
+#, tcl-format
+msgid "Are you sure you want to run %s?"
+msgstr "Tem a certeza que pretende executar %s?"
+
+#: lib/tools.tcl:116
+#, tcl-format
+msgid "Tool: %s"
+msgstr "Ferramenta: %s"
+
+#: lib/tools.tcl:117
+#, tcl-format
+msgid "Running: %s"
+msgstr "A executar: %s"
+
+#: lib/tools.tcl:155
+#, tcl-format
+msgid "Tool completed successfully: %s"
+msgstr "A ferramenta concluí com sucesso: %s"
+
+#: lib/tools.tcl:157
+#, tcl-format
+msgid "Tool failed: %s"
+msgstr "A ferramenta falhou: %s"
+
+#: lib/branch_checkout.tcl:16 lib/branch_checkout.tcl:21
+msgid "Checkout Branch"
+msgstr "Extrair ramo"
+
+#: lib/branch_checkout.tcl:26
+msgid "Checkout"
+msgstr "Extrair"
+
+#: lib/branch_checkout.tcl:39 lib/option.tcl:310 lib/branch_create.tcl:69
+msgid "Options"
+msgstr "Opções"
+
+#: lib/branch_checkout.tcl:42 lib/branch_create.tcl:92
+msgid "Fetch Tracking Branch"
+msgstr "Obter ramo de monitorização"
+
+#: lib/branch_checkout.tcl:47
+msgid "Detach From Local Branch"
+msgstr "Destacar do ramo local"
+
+#: lib/spellcheck.tcl:57
+msgid "Unsupported spell checker"
+msgstr "Corretor ortográfico não suportado"
+
+#: lib/spellcheck.tcl:65
+msgid "Spell checking is unavailable"
+msgstr "Correção ortográfica indisponível"
+
+#: lib/spellcheck.tcl:68
+msgid "Invalid spell checking configuration"
+msgstr "Configuração inválida do corretor ortográfico"
+
+#: lib/spellcheck.tcl:70
+#, tcl-format
+msgid "Reverting dictionary to %s."
+msgstr "A reverter dicionário para %s."
+
+#: lib/spellcheck.tcl:73
+msgid "Spell checker silently failed on startup"
+msgstr "O corretor ortográfico falhou silenciosamente ao iniciar"
+
+#: lib/spellcheck.tcl:80
+msgid "Unrecognized spell checker"
+msgstr "Corretor ortográfico não reconhecido"
+
+#: lib/spellcheck.tcl:186
+msgid "No Suggestions"
+msgstr "Sem sugestões"
+
+#: lib/spellcheck.tcl:388
+msgid "Unexpected EOF from spell checker"
+msgstr "EOF (fim de ficheiro) inesperado do corretor ortográfico"
+
+#: lib/spellcheck.tcl:392
+msgid "Spell Checker Failed"
+msgstr "Corretor ortográfico falhou"
+
+#: lib/status_bar.tcl:87
+#, tcl-format
+msgid "%s ... %*i of %*i %s (%3i%%)"
+msgstr "%s ... %*i de %*i %s (%3i%%)"
+
+#: lib/diff.tcl:77
+#, tcl-format
+msgid ""
+"No differences detected.\n"
+"\n"
+"%s has no changes.\n"
+"\n"
+"The modification date of this file was updated by another application, but "
+"the content within the file was not changed.\n"
+"\n"
+"A rescan will be automatically started to find other files which may have "
+"the same state."
+msgstr ""
+"Nenhum diferença detetada.\n"
+"\n"
+"%s não tem alterações.\n"
+"\n"
+"A data de modificação deste ficheiro foi atualizada por outra aplicação, mas "
+"o conteúdo no interior do ficheiro não foi alterado.\n"
+"\n"
+"Irá-se reanalisar automaticamente para encontrar outros ficheiros que "
+"estejam no mesmo estado."
+
+#: lib/diff.tcl:117
+#, tcl-format
+msgid "Loading diff of %s..."
+msgstr "A carregar diferenças de %s..."
+
+#: lib/diff.tcl:140
+msgid ""
+"LOCAL: deleted\n"
+"REMOTE:\n"
+msgstr ""
+"LOCAL: eliminado\n"
+"REMOTO:\n"
+
+#: lib/diff.tcl:145
+msgid ""
+"REMOTE: deleted\n"
+"LOCAL:\n"
+msgstr ""
+"REMOTO: eliminado\n"
+"LOCAL:\n"
+
+#: lib/diff.tcl:152
+msgid "LOCAL:\n"
+msgstr "LOCAL:\n"
+
+#: lib/diff.tcl:155
+msgid "REMOTE:\n"
+msgstr "REMOTO:\n"
+
+#: lib/diff.tcl:217 lib/diff.tcl:355
+#, tcl-format
+msgid "Unable to display %s"
+msgstr "Não é possível mostrar %s"
+
+#: lib/diff.tcl:218
+msgid "Error loading file:"
+msgstr "Erro ao carregar ficheiro:"
+
+#: lib/diff.tcl:225
+msgid "Git Repository (subproject)"
+msgstr "Repositório Git (subprojeto)"
+
+#: lib/diff.tcl:237
+msgid "* Binary file (not showing content)."
+msgstr "* Ficheiro binário (conteúdo não exibido)."
+
+#: lib/diff.tcl:242
+#, tcl-format
+msgid ""
+"* Untracked file is %d bytes.\n"
+"* Showing only first %d bytes.\n"
+msgstr ""
+"* O ficheiro não controlado tem %d bytes.\n"
+"* Exibido apenas os primeiros %d bytes.\n"
+
+#: lib/diff.tcl:248
+#, tcl-format
+msgid ""
+"\n"
+"* Untracked file clipped here by %s.\n"
+"* To see the entire file, use an external editor.\n"
+msgstr ""
+"\n"
+"* Ficheiro não controlado recortado aqui por %s.\n"
+"* Para ver o ficheiro inteiro, use um editor externo.\n"
+
+#: lib/diff.tcl:356 lib/blame.tcl:1128
+msgid "Error loading diff:"
+msgstr "Erro ao carregar diferenças:"
+
+#: lib/diff.tcl:578
+msgid "Failed to unstage selected hunk."
+msgstr "Falha ao retirar excerto selecionado do índice."
+
+#: lib/diff.tcl:585
+msgid "Failed to stage selected hunk."
+msgstr "Falha ao preparar excerto selecionado."
+
+#: lib/diff.tcl:664
+msgid "Failed to unstage selected line."
+msgstr "Falha ao retirar linha selecionada do índice."
+
+#: lib/diff.tcl:672
+msgid "Failed to stage selected line."
+msgstr "Falha ao preparar linha selecionada."
+
+#: lib/remote.tcl:200
+msgid "Push to"
+msgstr "Publicar em"
+
+#: lib/remote.tcl:218
+msgid "Remove Remote"
+msgstr "Remover remoto"
+
+#: lib/remote.tcl:223
+msgid "Prune from"
+msgstr "Podar de"
+
+#: lib/remote.tcl:228
+msgid "Fetch from"
+msgstr "Obter de"
+
+#: lib/choose_font.tcl:41
+msgid "Select"
+msgstr "Selecionar"
+
+#: lib/choose_font.tcl:55
+msgid "Font Family"
+msgstr "Família de tipo de letra"
+
+#: lib/choose_font.tcl:76
+msgid "Font Size"
+msgstr "Tamanho de letra"
+
+#: lib/choose_font.tcl:93
+msgid "Font Example"
+msgstr "Exemplo do tipo de letra"
+
+#: lib/choose_font.tcl:105
+msgid ""
+"This is example text.\n"
+"If you like this text, it can be your font."
+msgstr ""
+"Este texto é um exemplo.\n"
+"Se gostar deste texto, pode defini-lo como tipo de letra."
+
+#: lib/option.tcl:11
+#, tcl-format
+msgid "Invalid global encoding '%s'"
+msgstr "Codificação global '%s' inválida"
+
+#: lib/option.tcl:19
+#, tcl-format
+msgid "Invalid repo encoding '%s'"
+msgstr "Codificação do repositório '%s' inválida"
+
+#: lib/option.tcl:119
+msgid "Restore Defaults"
+msgstr "Restaurar predefinições"
+
+#: lib/option.tcl:123
+msgid "Save"
+msgstr "Guardar"
+
+#: lib/option.tcl:133
+#, tcl-format
+msgid "%s Repository"
+msgstr "Repositório %s"
+
+#: lib/option.tcl:134
+msgid "Global (All Repositories)"
+msgstr "Global (todos os repositórios)"
+
+#: lib/option.tcl:140
+msgid "User Name"
+msgstr "Nome de utilizador"
+
+#: lib/option.tcl:141
+msgid "Email Address"
+msgstr "Endereço de e-mail"
+
+#: lib/option.tcl:143
+msgid "Summarize Merge Commits"
+msgstr "Resumir commits de integração"
+
+#: lib/option.tcl:144
+msgid "Merge Verbosity"
+msgstr "Verbosidade de integração"
+
+#: lib/option.tcl:145
+msgid "Show Diffstat After Merge"
+msgstr "Mostrar estatísticas de diferenças depois de integrar"
+
+#: lib/option.tcl:146
+msgid "Use Merge Tool"
+msgstr "Usar ferramenta de integração"
+
+#: lib/option.tcl:148
+msgid "Trust File Modification Timestamps"
+msgstr "Confiar na data de modificação dos ficheiros"
+
+#: lib/option.tcl:149
+msgid "Prune Tracking Branches During Fetch"
+msgstr "Podar ramos de monitorização ao obter"
+
+#: lib/option.tcl:150
+msgid "Match Tracking Branches"
+msgstr "Corresponder ramos de monitorização"
+
+#: lib/option.tcl:151
+msgid "Use Textconv For Diffs and Blames"
+msgstr "Usar textconv para mostrar diferenças e culpar"
+
+#: lib/option.tcl:152
+msgid "Blame Copy Only On Changed Files"
+msgstr "Detetar cópia apenas em ficheiros modificados"
+
+#: lib/option.tcl:153
+msgid "Maximum Length of Recent Repositories List"
+msgstr "Comprimento máximo da lista de repositórios recentes"
+
+#: lib/option.tcl:154
+msgid "Minimum Letters To Blame Copy On"
+msgstr "Número mínimo de letras para detetar cópia"
+
+#: lib/option.tcl:155
+msgid "Blame History Context Radius (days)"
+msgstr "Raio de contexto histórico para culpar (dias)"
+
+#: lib/option.tcl:156
+msgid "Number of Diff Context Lines"
+msgstr "Número de linhas de contexto ao mostrar diferenças"
+
+#: lib/option.tcl:157
+msgid "Additional Diff Parameters"
+msgstr "Parâmetros de diff adicionais"
+
+#: lib/option.tcl:158
+msgid "Commit Message Text Width"
+msgstr "Largura do texto da mensagem de commit"
+
+#: lib/option.tcl:159
+msgid "New Branch Name Template"
+msgstr "Modelo para nome de novo ramo"
+
+#: lib/option.tcl:160
+msgid "Default File Contents Encoding"
+msgstr "Codificação predefinida dos conteúdos de ficheiros"
+
+#: lib/option.tcl:161
+msgid "Warn before committing to a detached head"
+msgstr "Avisar antes de submeter numa cabeça destacada"
+
+#: lib/option.tcl:162
+msgid "Staging of untracked files"
+msgstr "Preparar ficheiros não controlados"
+
+#: lib/option.tcl:163
+msgid "Show untracked files"
+msgstr "Mostrar ficheiros não controlados"
+
+#: lib/option.tcl:164
+msgid "Tab spacing"
+msgstr "Espaçamento da tabulação"
+
+#: lib/option.tcl:210
+msgid "Change"
+msgstr "Alterar"
+
+#: lib/option.tcl:254
+msgid "Spelling Dictionary:"
+msgstr "Dicionário ortográfico:"
+
+#: lib/option.tcl:284
+msgid "Change Font"
+msgstr "Alterar tipo de letra"
+
+#: lib/option.tcl:288
+#, tcl-format
+msgid "Choose %s"
+msgstr "Escolher %s"
+
+#: lib/option.tcl:294
+msgid "pt."
+msgstr "pt."
+
+#: lib/option.tcl:308
+msgid "Preferences"
+msgstr "Preferências"
+
+#: lib/option.tcl:345
+msgid "Failed to completely save options:"
+msgstr "Falha ao guardar todas as opções:"
+
+#: lib/mergetool.tcl:8
+msgid "Force resolution to the base version?"
+msgstr "Forçar resolução para a versão base?"
+
+#: lib/mergetool.tcl:9
+msgid "Force resolution to this branch?"
+msgstr "Forçar resolução para este ramo?"
+
+#: lib/mergetool.tcl:10
+msgid "Force resolution to the other branch?"
+msgstr "Forçar resolução para o outro ramo?"
+
+#: lib/mergetool.tcl:14
+#, tcl-format
+msgid ""
+"Note that the diff shows only conflicting changes.\n"
+"\n"
+"%s will be overwritten.\n"
+"\n"
+"This operation can be undone only by restarting the merge."
+msgstr ""
+"Note que as diferenças mostram apenas alterações em conflito.\n"
+"\n"
+"%s será substituído.\n"
+"\n"
+"Esta operação só pode ser anulada reiniciando a integração."
+
+#: lib/mergetool.tcl:45
+#, tcl-format
+msgid "File %s seems to have unresolved conflicts, still stage?"
+msgstr ""
+"O ficheiro %s parece ter conflitos não resolvidos, prepará-lo mesmo assim?"
+
+#: lib/mergetool.tcl:60
+#, tcl-format
+msgid "Adding resolution for %s"
+msgstr "A adicionar resolução de %s"
+
+#: lib/mergetool.tcl:141
+msgid "Cannot resolve deletion or link conflicts using a tool"
+msgstr ""
+"Não é possível resolver conflitos de exclusão ou ligação usando uma "
+"ferramenta"
+
+#: lib/mergetool.tcl:146
+msgid "Conflict file does not exist"
+msgstr "O ficheiro em conflito não existe"
+
+#: lib/mergetool.tcl:246
+#, tcl-format
+msgid "Not a GUI merge tool: '%s'"
+msgstr "Não é uma ferramenta GUI de integração: '%s'"
+
+#: lib/mergetool.tcl:275
+#, tcl-format
+msgid "Unsupported merge tool '%s'"
+msgstr "Ferramenta de integração '%s' não suportada"
+
+#: lib/mergetool.tcl:310
+msgid "Merge tool is already running, terminate it?"
+msgstr "A ferramenta de integração já está a executar, terminá-la?"
+
+#: lib/mergetool.tcl:330
+#, tcl-format
+msgid ""
+"Error retrieving versions:\n"
+"%s"
+msgstr ""
+"Erro ao obter versões:\n"
+"%s"
+
+#: lib/mergetool.tcl:350
+#, tcl-format
+msgid ""
+"Could not start the merge tool:\n"
+"\n"
+"%s"
+msgstr ""
+"Não foi possível iniciar a ferramenta de integração:\n"
+"\n"
+"%s"
+
+#: lib/mergetool.tcl:354
+msgid "Running merge tool..."
+msgstr "A executar a ferramenta de integração..."
+
+#: lib/mergetool.tcl:382 lib/mergetool.tcl:390
+msgid "Merge tool failed."
+msgstr "A ferramenta de integração falhou."
+
+#: lib/tools_dlg.tcl:22
+msgid "Add Tool"
+msgstr "Adicionar ferramenta"
+
+#: lib/tools_dlg.tcl:28
+msgid "Add New Tool Command"
+msgstr "Adicionar novo comando de ferramenta"
+
+#: lib/tools_dlg.tcl:34
+msgid "Add globally"
+msgstr "Adicionar globalmente"
+
+#: lib/tools_dlg.tcl:46
+msgid "Tool Details"
+msgstr "Detalhes da ferramenta"
+
+#: lib/tools_dlg.tcl:49
+msgid "Use '/' separators to create a submenu tree:"
+msgstr "Use separadores '/' para criar uma árvore de submenus:"
+
+#: lib/tools_dlg.tcl:60
+msgid "Command:"
+msgstr "Comando:"
+
+#: lib/tools_dlg.tcl:71
+msgid "Show a dialog before running"
+msgstr "Mostrar um diálogo antes de executar"
+
+#: lib/tools_dlg.tcl:77
+msgid "Ask the user to select a revision (sets $REVISION)"
+msgstr "Pedir ao utilizador para selecionar uma revisão (define $REVISION)"
+
+#: lib/tools_dlg.tcl:82
+msgid "Ask the user for additional arguments (sets $ARGS)"
+msgstr "Pedir ao utilizador argumentos adicionais (define $ARGS)"
+
+#: lib/tools_dlg.tcl:89
+msgid "Don't show the command output window"
+msgstr "Não mostrar a janela com a saída do comando"
+
+#: lib/tools_dlg.tcl:94
+msgid "Run only if a diff is selected ($FILENAME not empty)"
+msgstr "Executar só se for selecionada um diferença ($FILENAME não vazio)"
+
+#: lib/tools_dlg.tcl:118
+msgid "Please supply a name for the tool."
+msgstr "Forneça um nome para a ferramenta."
+
+#: lib/tools_dlg.tcl:126
+#, tcl-format
+msgid "Tool '%s' already exists."
+msgstr "A ferramenta '%s' já existe."
+
+#: lib/tools_dlg.tcl:148
+#, tcl-format
+msgid ""
+"Could not add tool:\n"
+"%s"
+msgstr ""
+"Não foi possível adicionar ferramenta:\n"
+"%s"
+
+#: lib/tools_dlg.tcl:187
+msgid "Remove Tool"
+msgstr "Remover ferramenta"
+
+#: lib/tools_dlg.tcl:193
+msgid "Remove Tool Commands"
+msgstr "Remover comandos de ferramenta"
+
+#: lib/tools_dlg.tcl:198
+msgid "Remove"
+msgstr "Remover"
+
+#: lib/tools_dlg.tcl:231
+msgid "(Blue denotes repository-local tools)"
+msgstr "(Azul denota ferramentas locais do repositório)"
+
+#: lib/tools_dlg.tcl:292
+#, tcl-format
+msgid "Run Command: %s"
+msgstr "Executar comando: %s"
+
+#: lib/tools_dlg.tcl:306
+msgid "Arguments"
+msgstr "Argumentos"
+
+#: lib/tools_dlg.tcl:341
+msgid "OK"
+msgstr "OK"
+
+#: lib/search.tcl:48
+msgid "Find:"
+msgstr "Procurar:"
+
+#: lib/search.tcl:50
+msgid "Next"
+msgstr "Seguinte"
+
+#: lib/search.tcl:51
+msgid "Prev"
+msgstr "Anterior"
+
+#: lib/search.tcl:52
+msgid "RegExp"
+msgstr "ExpReg"
+
+#: lib/search.tcl:54
+msgid "Case"
+msgstr "Maiúsculas"
+
+#: lib/shortcut.tcl:21 lib/shortcut.tcl:62
+msgid "Cannot write shortcut:"
+msgstr "Não é possível escrever atalho:"
+
+#: lib/shortcut.tcl:137
+msgid "Cannot write icon:"
+msgstr "Não é possível escrever ícone:"
+
+#: lib/branch_rename.tcl:15 lib/branch_rename.tcl:23
+msgid "Rename Branch"
+msgstr "Mudar nome de ramo"
+
+#: lib/branch_rename.tcl:28
+msgid "Rename"
+msgstr "Mudar nome"
+
+#: lib/branch_rename.tcl:38
+msgid "Branch:"
+msgstr "Ramo:"
+
+#: lib/branch_rename.tcl:46
+msgid "New Name:"
+msgstr "Novo nome:"
+
+#: lib/branch_rename.tcl:81
+msgid "Please select a branch to rename."
+msgstr "Selecione um ramo para mudar de nome."
+
+#: lib/branch_rename.tcl:92 lib/branch_create.tcl:154
+msgid "Please supply a branch name."
+msgstr "Indique um nome para o ramo."
+
+#: lib/branch_rename.tcl:112 lib/branch_create.tcl:165
+#, tcl-format
+msgid "'%s' is not an acceptable branch name."
+msgstr "'%s' não pode ser aceite como nome de ramo."
+
+#: lib/branch_rename.tcl:123
+#, tcl-format
+msgid "Failed to rename '%s'."
+msgstr "Falha ao mudar o nome de '%s'."
+
+#: lib/remote_branch_delete.tcl:29 lib/remote_branch_delete.tcl:34
+msgid "Delete Branch Remotely"
+msgstr "Remover ramo remotamente"
+
+#: lib/remote_branch_delete.tcl:48
+msgid "From Repository"
+msgstr "Do repositório"
+
+#: lib/remote_branch_delete.tcl:88
+msgid "Branches"
+msgstr "Ramos"
+
+#: lib/remote_branch_delete.tcl:110
+msgid "Delete Only If"
+msgstr "Eliminar só se"
+
+#: lib/remote_branch_delete.tcl:112
+msgid "Merged Into:"
+msgstr "Integrar em:"
+
+#: lib/remote_branch_delete.tcl:120 lib/branch_delete.tcl:53
+msgid "Always (Do not perform merge checks)"
+msgstr "Sempre (não realizar verificação de integração)"
+
+#: lib/remote_branch_delete.tcl:153
+msgid "A branch is required for 'Merged Into'."
+msgstr "É necessário um ramo em 'Integrar em'."
+
+#: lib/remote_branch_delete.tcl:185
+#, tcl-format
+msgid ""
+"The following branches are not completely merged into %s:\n"
+"\n"
+" - %s"
+msgstr ""
+"Os seguintes ramos não foram completamente integrados em %s:\n"
+"\n"
+" - %s"
+
+#: lib/remote_branch_delete.tcl:190
+#, tcl-format
+msgid ""
+"One or more of the merge tests failed because you have not fetched the "
+"necessary commits. Try fetching from %s first."
+msgstr ""
+"Um ou mais testes de integração falharam porque não obteve os commits "
+"necessários. Tente primeiro obter de %s."
+
+#: lib/remote_branch_delete.tcl:208
+msgid "Please select one or more branches to delete."
+msgstr "Selecione um ou mais ramos para eliminar."
+
+#: lib/remote_branch_delete.tcl:218 lib/branch_delete.tcl:115
+msgid ""
+"Recovering deleted branches is difficult.\n"
+"\n"
+"Delete the selected branches?"
+msgstr ""
+"Recuperar ramos eliminados é difícil.\n"
+"\n"
+"Eliminar os ramos selecionado?"
+
+#: lib/remote_branch_delete.tcl:227
+#, tcl-format
+msgid "Deleting branches from %s"
+msgstr "A eliminar ramos de %s"
+
+#: lib/remote_branch_delete.tcl:300
+msgid "No repository selected."
+msgstr "Nenhum repositório selecionado."
+
+#: lib/remote_branch_delete.tcl:305
+#, tcl-format
+msgid "Scanning %s..."
+msgstr "A analisar %s..."
+
+#: lib/choose_repository.tcl:33
+msgid "Git Gui"
+msgstr "Git Gui"
+
+#: lib/choose_repository.tcl:92 lib/choose_repository.tcl:412
+msgid "Create New Repository"
+msgstr "Criar novo repositório"
+
+#: lib/choose_repository.tcl:98
+msgid "New..."
+msgstr "Novo..."
+
+#: lib/choose_repository.tcl:105 lib/choose_repository.tcl:496
+msgid "Clone Existing Repository"
+msgstr "Clonar repositório existente"
+
+#: lib/choose_repository.tcl:116
+msgid "Clone..."
+msgstr "Clonar..."
+
+#: lib/choose_repository.tcl:123 lib/choose_repository.tcl:1064
+msgid "Open Existing Repository"
+msgstr "Abrir repositório existente"
+
+#: lib/choose_repository.tcl:129
+msgid "Open..."
+msgstr "Abrir..."
+
+#: lib/choose_repository.tcl:142
+msgid "Recent Repositories"
+msgstr "Repositórios recentes"
+
+#: lib/choose_repository.tcl:148
+msgid "Open Recent Repository:"
+msgstr "Abrir repositório recente:"
+
+#: lib/choose_repository.tcl:316 lib/choose_repository.tcl:323
+#: lib/choose_repository.tcl:330
+#, tcl-format
+msgid "Failed to create repository %s:"
+msgstr "Falha ao criar o repositório %s:"
+
+#: lib/choose_repository.tcl:407 lib/branch_create.tcl:33
+msgid "Create"
+msgstr "Criar"
+
+#: lib/choose_repository.tcl:417
+msgid "Directory:"
+msgstr "Diretório:"
+
+#: lib/choose_repository.tcl:447 lib/choose_repository.tcl:573
+#: lib/choose_repository.tcl:1098
+msgid "Git Repository"
+msgstr "Repositório Git"
+
+#: lib/choose_repository.tcl:472
+#, tcl-format
+msgid "Directory %s already exists."
+msgstr "O diretório %s já existe."
+
+#: lib/choose_repository.tcl:476
+#, tcl-format
+msgid "File %s already exists."
+msgstr "O ficheiro %s já existe."
+
+#: lib/choose_repository.tcl:491
+msgid "Clone"
+msgstr "Clonar"
+
+#: lib/choose_repository.tcl:504
+msgid "Source Location:"
+msgstr "Localização de origem:"
+
+#: lib/choose_repository.tcl:513
+msgid "Target Directory:"
+msgstr "Diretório de destino:"
+
+#: lib/choose_repository.tcl:523
+msgid "Clone Type:"
+msgstr "Tipo de clone:"
+
+#: lib/choose_repository.tcl:528
+msgid "Standard (Fast, Semi-Redundant, Hardlinks)"
+msgstr "Padrão (rápido, semi-redundante, ligações fixas)"
+
+#: lib/choose_repository.tcl:533
+msgid "Full Copy (Slower, Redundant Backup)"
+msgstr "Cópia Total (lento, cópia de segurança redundante)"
+
+#: lib/choose_repository.tcl:538
+msgid "Shared (Fastest, Not Recommended, No Backup)"
+msgstr "Partilhado (mais rápido, não recomendado, sem cópia)"
+
+#: lib/choose_repository.tcl:545
+msgid "Recursively clone submodules too"
+msgstr "Clonar recursivamente submódulos também"
+
+#: lib/choose_repository.tcl:579 lib/choose_repository.tcl:626
+#: lib/choose_repository.tcl:772 lib/choose_repository.tcl:842
+#: lib/choose_repository.tcl:1104 lib/choose_repository.tcl:1112
+#, tcl-format
+msgid "Not a Git repository: %s"
+msgstr "Não é um repositório Git: %s"
+
+#: lib/choose_repository.tcl:615
+msgid "Standard only available for local repository."
+msgstr "Padrão só disponível em repositórios locais."
+
+#: lib/choose_repository.tcl:619
+msgid "Shared only available for local repository."
+msgstr "Partilhado só disponível em repositórios locais."
+
+#: lib/choose_repository.tcl:640
+#, tcl-format
+msgid "Location %s already exists."
+msgstr "A localização %s já existe."
+
+#: lib/choose_repository.tcl:651
+msgid "Failed to configure origin"
+msgstr "Falha ao configurar origem"
+
+#: lib/choose_repository.tcl:663
+msgid "Counting objects"
+msgstr "A contar objetos"
+
+#: lib/choose_repository.tcl:664
+msgid "buckets"
+msgstr "baldes"
+
+#: lib/choose_repository.tcl:688
+#, tcl-format
+msgid "Unable to copy objects/info/alternates: %s"
+msgstr "Não é possível copiar objects/info/alternates: %s"
+
+#: lib/choose_repository.tcl:724
+#, tcl-format
+msgid "Nothing to clone from %s."
+msgstr "Nada para clonar de %s."
+
+#: lib/choose_repository.tcl:726 lib/choose_repository.tcl:940
+#: lib/choose_repository.tcl:952
+msgid "The 'master' branch has not been initialized."
+msgstr "O ramo 'master' não foi inicializado."
+
+#: lib/choose_repository.tcl:739
+msgid "Hardlinks are unavailable. Falling back to copying."
+msgstr "Ligações fixas indisponíveis. A recorrer a cópia."
+
+#: lib/choose_repository.tcl:751
+#, tcl-format
+msgid "Cloning from %s"
+msgstr "A clonar de %s"
+
+#: lib/choose_repository.tcl:782
+msgid "Copying objects"
+msgstr "A copiar objetos"
+
+#: lib/choose_repository.tcl:783
+msgid "KiB"
+msgstr "KiB"
+
+#: lib/choose_repository.tcl:807
+#, tcl-format
+msgid "Unable to copy object: %s"
+msgstr "Não é possível copiar objeto: %s"
+
+#: lib/choose_repository.tcl:817
+msgid "Linking objects"
+msgstr "A ligar objetos"
+
+#: lib/choose_repository.tcl:818
+msgid "objects"
+msgstr "objetos"
+
+#: lib/choose_repository.tcl:826
+#, tcl-format
+msgid "Unable to hardlink object: %s"
+msgstr "Não é possível criar ligação fixa de objeto: %s"
+
+#: lib/choose_repository.tcl:881
+msgid "Cannot fetch branches and objects. See console output for details."
+msgstr ""
+"Não é possível obter ramos e objetos. Ver saída na consola para detalhes."
+
+#: lib/choose_repository.tcl:892
+msgid "Cannot fetch tags. See console output for details."
+msgstr "Não é possível obter tags. Ver saída na consola para detalhes."
+
+#: lib/choose_repository.tcl:916
+msgid "Cannot determine HEAD. See console output for details."
+msgstr "Não é possível determinar HEAD. Ver saída na consola para detalhes."
+
+#: lib/choose_repository.tcl:925
+#, tcl-format
+msgid "Unable to cleanup %s"
+msgstr "Não foi possível limpar %s"
+
+#: lib/choose_repository.tcl:931
+msgid "Clone failed."
+msgstr "Falha ao clonar."
+
+#: lib/choose_repository.tcl:938
+msgid "No default branch obtained."
+msgstr "Não foi obtido nenhum ramo predefinido."
+
+#: lib/choose_repository.tcl:949
+#, tcl-format
+msgid "Cannot resolve %s as a commit."
+msgstr "Não é possível resolver %s como um commit."
+
+#: lib/choose_repository.tcl:961
+msgid "Creating working directory"
+msgstr "A criar diretório de trabalho"
+
+#: lib/choose_repository.tcl:962 lib/index.tcl:70 lib/index.tcl:136
+#: lib/index.tcl:207
+msgid "files"
+msgstr "ficheiros"
+
+#: lib/choose_repository.tcl:981
+msgid "Cannot clone submodules."
+msgstr "Não é possível clonar submódulos."
+
+#: lib/choose_repository.tcl:990
+msgid "Cloning submodules"
+msgstr "A clonar submódulos"
+
+#: lib/choose_repository.tcl:1015
+msgid "Initial file checkout failed."
+msgstr "Falha de extração inicial de ficheiro."
+
+#: lib/choose_repository.tcl:1059
+msgid "Open"
+msgstr "Abrir"
+
+#: lib/choose_repository.tcl:1069
+msgid "Repository:"
+msgstr "Repositório:"
+
+#: lib/choose_repository.tcl:1118
+#, tcl-format
+msgid "Failed to open repository %s:"
+msgstr "Falha ao abrir o repositório %s:"
+
+#: lib/about.tcl:26
+msgid "git-gui - a graphical user interface for Git."
+msgstr "git-gui - uma interface gráfica do Git."
+
+#: lib/blame.tcl:73
+msgid "File Viewer"
+msgstr "Visualizador de ficheiros"
+
+#: lib/blame.tcl:79
+msgid "Commit:"
+msgstr "Commit:"
+
+#: lib/blame.tcl:280
+msgid "Copy Commit"
+msgstr "Copiar commit"
+
+#: lib/blame.tcl:284
+msgid "Find Text..."
+msgstr "Procurar texto..."
+
+#: lib/blame.tcl:288
+msgid "Goto Line..."
+msgstr "Ir para a linha..."
+
+#: lib/blame.tcl:297
+msgid "Do Full Copy Detection"
+msgstr "Efetuar deteção de cópia integral"
+
+#: lib/blame.tcl:301
+msgid "Show History Context"
+msgstr "Mostrar contexto histórico"
+
+#: lib/blame.tcl:304
+msgid "Blame Parent Commit"
+msgstr "Culpar commit pai"
+
+#: lib/blame.tcl:466
+#, tcl-format
+msgid "Reading %s..."
+msgstr "A ler %s..."
+
+#: lib/blame.tcl:594
+msgid "Loading copy/move tracking annotations..."
+msgstr "A carregar anotações de cópia/movimento..."
+
+#: lib/blame.tcl:614
+msgid "lines annotated"
+msgstr "linhas anotadas"
+
+#: lib/blame.tcl:806
+msgid "Loading original location annotations..."
+msgstr "A carregar anotações da localização original..."
+
+#: lib/blame.tcl:809
+msgid "Annotation complete."
+msgstr "Anotação concluída."
+
+#: lib/blame.tcl:839
+msgid "Busy"
+msgstr "A processar"
+
+#: lib/blame.tcl:840
+msgid "Annotation process is already running."
+msgstr "O processo de anotação já está em execução."
+
+#: lib/blame.tcl:879
+msgid "Running thorough copy detection..."
+msgstr "A executar deteção de cópia integral..."
+
+#: lib/blame.tcl:947
+msgid "Loading annotation..."
+msgstr "A carregar anotação..."
+
+#: lib/blame.tcl:1000
+msgid "Author:"
+msgstr "Autor:"
+
+#: lib/blame.tcl:1004
+msgid "Committer:"
+msgstr "Committer:"
+
+#: lib/blame.tcl:1009
+msgid "Original File:"
+msgstr "Ficheiro original:"
+
+#: lib/blame.tcl:1057
+msgid "Cannot find HEAD commit:"
+msgstr "Não é possível encontrar commit HEAD:"
+
+#: lib/blame.tcl:1112
+msgid "Cannot find parent commit:"
+msgstr "Não é possível encontrar commit pai:"
+
+#: lib/blame.tcl:1127
+msgid "Unable to display parent"
+msgstr "Não é possível mostrar pai"
+
+#: lib/blame.tcl:1269
+msgid "Originally By:"
+msgstr "Originalmente por:"
+
+#: lib/blame.tcl:1275
+msgid "In File:"
+msgstr "No ficheiro:"
+
+#: lib/blame.tcl:1280
+msgid "Copied Or Moved Here By:"
+msgstr "Copiado ou Movido para aqui por:"
+
+#: lib/sshkey.tcl:31
+msgid "No keys found."
+msgstr "Nenhum chave encontrada."
+
+#: lib/sshkey.tcl:34
+#, tcl-format
+msgid "Found a public key in: %s"
+msgstr "Chave pública encontrada em: %s"
+
+#: lib/sshkey.tcl:40
+msgid "Generate Key"
+msgstr "Gerar chave"
+
+#: lib/sshkey.tcl:58
+msgid "Copy To Clipboard"
+msgstr "Copiar para a área de transferência"
+
+#: lib/sshkey.tcl:72
+msgid "Your OpenSSH Public Key"
+msgstr "A sua chave OpenSSH pública"
+
+#: lib/sshkey.tcl:80
+msgid "Generating..."
+msgstr "A gerar..."
+
+#: lib/sshkey.tcl:86
+#, tcl-format
+msgid ""
+"Could not start ssh-keygen:\n"
+"\n"
+"%s"
+msgstr ""
+"Não foi possível iniciar ssh-keygen:\n"
+"\n"
+"%s"
+
+#: lib/sshkey.tcl:113
+msgid "Generation failed."
+msgstr "Falha ao gerar."
+
+#: lib/sshkey.tcl:120
+msgid "Generation succeeded, but no keys found."
+msgstr "Gerada com sucesso, mas não foi encontrada nenhum chave."
+
+#: lib/sshkey.tcl:123
+#, tcl-format
+msgid "Your key is in: %s"
+msgstr "A sua chave encontra-se em: %s"
+
+#: lib/branch_create.tcl:23
+msgid "Create Branch"
+msgstr "Criar ramo"
+
+#: lib/branch_create.tcl:28
+msgid "Create New Branch"
+msgstr "Cria novo ramo"
+
+#: lib/branch_create.tcl:42
+msgid "Branch Name"
+msgstr "Nome do ramo"
+
+#: lib/branch_create.tcl:57
+msgid "Match Tracking Branch Name"
+msgstr "Corresponder ao nome do ramo de monitorização"
+
+#: lib/branch_create.tcl:66
+msgid "Starting Revision"
+msgstr "Revisão inicial"
+
+#: lib/branch_create.tcl:72
+msgid "Update Existing Branch:"
+msgstr "Atualizar ramo existente:"
+
+#: lib/branch_create.tcl:75
+msgid "No"
+msgstr "Não"
+
+#: lib/branch_create.tcl:80
+msgid "Fast Forward Only"
+msgstr "Apenas avanço rápido (fast-forward)"
+
+#: lib/branch_create.tcl:97
+msgid "Checkout After Creation"
+msgstr "Extrair depois de criar"
+
+#: lib/branch_create.tcl:132
+msgid "Please select a tracking branch."
+msgstr "Selecione um ramo de monitorização."
+
+#: lib/branch_create.tcl:141
+#, tcl-format
+msgid "Tracking branch %s is not a branch in the remote repository."
+msgstr "O ramo de monitorização %s não é um ramo no repositório remoto."
+
+#: lib/commit.tcl:9
+msgid ""
+"There is nothing to amend.\n"
+"\n"
+"You are about to create the initial commit. There is no commit before this "
+"to amend.\n"
+msgstr ""
+"Não há nada para emendar.\n"
+"\n"
+"Está prestes a criar o commit inicial. Não há nenhum commit antes deste para "
+"emendar.\n"
+
+#: lib/commit.tcl:18
+msgid ""
+"Cannot amend while merging.\n"
+"\n"
+"You are currently in the middle of a merge that has not been fully "
+"completed. You cannot amend the prior commit unless you first abort the "
+"current merge activity.\n"
+msgstr ""
+"Não é possível emendar ao mesmo tempo que se integra.\n"
+"\n"
+"Há uma integração em curso que não foi concluída. Não pode emendar o commit "
+"anterior a não ser que primeiro aborte a atividade da integração atual.\n"
+
+#: lib/commit.tcl:48
+msgid "Error loading commit data for amend:"
+msgstr "Erro ao carregar dados do commit para emendar:"
+
+#: lib/commit.tcl:75
+msgid "Unable to obtain your identity:"
+msgstr "Não é possível obter a sua identidade:"
+
+#: lib/commit.tcl:80
+msgid "Invalid GIT_COMMITTER_IDENT:"
+msgstr "GIT_COMMITTER_IDENT inválido:"
+
+#: lib/commit.tcl:129
+#, tcl-format
+msgid "warning: Tcl does not support encoding '%s'."
+msgstr "aviso: Tcl não suporta a codificação '%s'."
+
+#: lib/commit.tcl:149
+msgid ""
+"Last scanned state does not match repository state.\n"
+"\n"
+"Another Git program has modified this repository since the last scan. A "
+"rescan must be performed before another commit can be created.\n"
+"\n"
+"The rescan will be automatically started now.\n"
+msgstr ""
+"O último estado analisado não corresponde ao estado do repositório.\n"
+"\n"
+"Outro programa Git modificou este repositório deste a última análise. Deve-"
+"se reanalisar antes que se possa criar outro commit.\n"
+"\n"
+"Irá-se reanalisar automaticamente agora.\n"
+
+#: lib/commit.tcl:173
+#, tcl-format
+msgid ""
+"Unmerged files cannot be committed.\n"
+"\n"
+"File %s has merge conflicts. You must resolve them and stage the file "
+"before committing.\n"
+msgstr ""
+"Não pode fazer commit de ficheiros não integrados.\n"
+"\n"
+"O ficheiro %s tem conflitos de integração. Deve resolvê-los e preparar o "
+"ficheiro antes de submeter.\n"
+
+#: lib/commit.tcl:181
+#, tcl-format
+msgid ""
+"Unknown file state %s detected.\n"
+"\n"
+"File %s cannot be committed by this program.\n"
+msgstr ""
+"Detetado estado de ficheiro %s desconhecido.\n"
+"\n"
+"Este programa não pode submeter o ficheiro %s.\n"
+
+#: lib/commit.tcl:189
+msgid ""
+"No changes to commit.\n"
+"\n"
+"You must stage at least 1 file before you can commit.\n"
+msgstr ""
+"Nenhum alteração para submeter.\n"
+"\n"
+"Deve preparar pelo menos 1 ficheiro antes de submeter.\n"
+
+#: lib/commit.tcl:204
+msgid ""
+"Please supply a commit message.\n"
+"\n"
+"A good commit message has the following format:\n"
+"\n"
+"- First line: Describe in one sentence what you did.\n"
+"- Second line: Blank\n"
+"- Remaining lines: Describe why this change is good.\n"
+msgstr ""
+"Forneça uma mensagem de commit.\n"
+"\n"
+"Um boa mensagem de commit tem o seguinte formato:\n"
+"\n"
+"- Primeira linha: descreve numa frase o que fez.\n"
+"- Segunda linha: em branco.\n"
+"- Linhas restantes: descreve porque esta alteração é vantajosa.\n"
+
+#: lib/commit.tcl:235
+msgid "Calling pre-commit hook..."
+msgstr "A invocar gancho de pré-commit (pre-commit hook)..."
+
+#: lib/commit.tcl:250
+msgid "Commit declined by pre-commit hook."
+msgstr "Commit recusado pela retina de pré-commit (pre-commit hook)."
+
+#: lib/commit.tcl:269
+msgid ""
+"You are about to commit on a detached head. This is a potentially dangerous "
+"thing to do because if you switch to another branch you will lose your "
+"changes and it can be difficult to retrieve them later from the reflog. You "
+"should probably cancel this commit and create a new branch to continue.\n"
+" \n"
+" Do you really want to proceed with your Commit?"
+msgstr ""
+"Está prestes a submeter numa cabeça destacada. Fazê-lo é potencialmente "
+"perigoso, porque, se mudar para outro ramo, perderá as suas alterações e "
+"pode ser difícil recuperá-las do reflog posteriormente. Provavelmente deve "
+"cancelar este commit e criar um novo ramo para continuar.\n"
+"\n"
+"Pretende mesmo continuar com o commit?"
+
+#: lib/commit.tcl:290
+msgid "Calling commit-msg hook..."
+msgstr "A invocar gancho de mensagem-de-commit (commit-msg hook)..."
+
+#: lib/commit.tcl:305
+msgid "Commit declined by commit-msg hook."
+msgstr "Commit recusado pelo gancho de mensagem-de-commit (commit-msg hook)."
+
+#: lib/commit.tcl:318
+msgid "Committing changes..."
+msgstr "A submeter alterações..."
+
+#: lib/commit.tcl:334
+msgid "write-tree failed:"
+msgstr "write-tree falhou:"
+
+#: lib/commit.tcl:335 lib/commit.tcl:379 lib/commit.tcl:400
+msgid "Commit failed."
+msgstr "Falha ao submeter."
+
+#: lib/commit.tcl:352
+#, tcl-format
+msgid "Commit %s appears to be corrupt"
+msgstr "O commit %s parece estar corrompido"
+
+#: lib/commit.tcl:357
+msgid ""
+"No changes to commit.\n"
+"\n"
+"No files were modified by this commit and it was not a merge commit.\n"
+"\n"
+"A rescan will be automatically started now.\n"
+msgstr ""
+"Não há alterações para submeter.\n"
+"\n"
+"Nenhum ficheiro foi modificado por este commit e não era um commit de "
+"integração.\n"
+"\n"
+"Irá-se reanalisar agora automaticamente.\n"
+
+#: lib/commit.tcl:364
+msgid "No changes to commit."
+msgstr "Não há alterações para submeter."
+
+#: lib/commit.tcl:378
+msgid "commit-tree failed:"
+msgstr "commit-tree falhou:"
+
+#: lib/commit.tcl:399
+msgid "update-ref failed:"
+msgstr "update-ref falhou:"
+
+#: lib/commit.tcl:492
+#, tcl-format
+msgid "Created commit %s: %s"
+msgstr "Commit %s criado: %s"
+
+#: lib/branch_delete.tcl:16
+msgid "Delete Branch"
+msgstr "Eliminar ramo"
+
+#: lib/branch_delete.tcl:21
+msgid "Delete Local Branch"
+msgstr "Eliminar ramo local"
+
+#: lib/branch_delete.tcl:39
+msgid "Local Branches"
+msgstr "Ramos locais"
+
+#: lib/branch_delete.tcl:51
+msgid "Delete Only If Merged Into"
+msgstr "Eliminar só se foi integrado"
+
+#: lib/branch_delete.tcl:103
+#, tcl-format
+msgid "The following branches are not completely merged into %s:"
+msgstr "Os seguintes ramos não foram completamente integrados em %s:"
+
+#: lib/branch_delete.tcl:141
+#, tcl-format
+msgid ""
+"Failed to delete branches:\n"
+"%s"
+msgstr ""
+"Falha ao eliminar ramos:\n"
+"%s"
+
+#: lib/index.tcl:6
+msgid "Unable to unlock the index."
+msgstr "Não é possível desbloquear o índice."
+
+#: lib/index.tcl:17
+msgid "Index Error"
+msgstr "Erro de Índice"
+
+#: lib/index.tcl:19
+msgid ""
+"Updating the Git index failed. A rescan will be automatically started to "
+"resynchronize git-gui."
+msgstr ""
+"Falha ao atualizar o índice do Git. Irá-se reanalisar automaticamente para "
+"ressincronizar o git-gui."
+
+#: lib/index.tcl:30
+msgid "Continue"
+msgstr "Continuar"
+
+#: lib/index.tcl:33
+msgid "Unlock Index"
+msgstr "Desbloquear índice"
+
+#: lib/index.tcl:294
+msgid "Unstaging selected files from commit"
+msgstr "A retirar ficheiros selecionados do commit"
+
+#: lib/index.tcl:298
+#, tcl-format
+msgid "Unstaging %s from commit"
+msgstr "A retirar %s do commit"
+
+#: lib/index.tcl:337
+msgid "Ready to commit."
+msgstr "Pronto para submeter."
+
+#: lib/index.tcl:346
+msgid "Adding selected files"
+msgstr "A adicionar ficheiros selecionados"
+
+#: lib/index.tcl:350
+#, tcl-format
+msgid "Adding %s"
+msgstr "A adicionar %s"
+
+#: lib/index.tcl:380
+#, tcl-format
+msgid "Stage %d untracked files?"
+msgstr "Preparar %d ficheiros não controlados?"
+
+#: lib/index.tcl:388
+msgid "Adding all changed files"
+msgstr "A adicionar todos os ficheiros controlados"
+
+#: lib/index.tcl:428
+#, tcl-format
+msgid "Revert changes in file %s?"
+msgstr "Reverter alterações no ficheiro %s?"
+
+#: lib/index.tcl:430
+#, tcl-format
+msgid "Revert changes in these %i files?"
+msgstr "Reverter alterações nestes %i ficheiros?"
+
+#: lib/index.tcl:438
+msgid "Any unstaged changes will be permanently lost by the revert."
+msgstr ""
+"Qualquer alteração não preparada será permanentemente perdida ao reverter."
+
+#: lib/index.tcl:441
+msgid "Do Nothing"
+msgstr "Não fazer nada"
+
+#: lib/index.tcl:459
+msgid "Reverting selected files"
+msgstr "A reverter ficheiros selecionados"
+
+#: lib/index.tcl:463
+#, tcl-format
+msgid "Reverting %s"
+msgstr "A reverter %s"
+
+#: lib/encoding.tcl:443
+msgid "Default"
+msgstr "Predefinição"
+
+#: lib/encoding.tcl:448
+#, tcl-format
+msgid "System (%s)"
+msgstr "Sistema (%s)"
+
+#: lib/encoding.tcl:459 lib/encoding.tcl:465
+msgid "Other"
+msgstr "Outro"
+
+#: lib/date.tcl:25
+#, tcl-format
+msgid "Invalid date from Git: %s"
+msgstr "Data do Git inválida: %s"
+
+#: lib/choose_rev.tcl:52
+msgid "This Detached Checkout"
+msgstr "Esta extração destacada"
+
+#: lib/choose_rev.tcl:60
+msgid "Revision Expression:"
+msgstr "Expressão de revisão:"
+
+#: lib/choose_rev.tcl:72
+msgid "Local Branch"
+msgstr "Ramo local"
+
+#: lib/choose_rev.tcl:77
+msgid "Tracking Branch"
+msgstr "Ramo de monitorização"
+
+#: lib/choose_rev.tcl:82 lib/choose_rev.tcl:544
+msgid "Tag"
+msgstr "Tag"
+
+#: lib/choose_rev.tcl:321
+#, tcl-format
+msgid "Invalid revision: %s"
+msgstr "Revisão inválida: %s"
+
+#: lib/choose_rev.tcl:342
+msgid "No revision selected."
+msgstr "Nenhum revisão selecionada."
+
+#: lib/choose_rev.tcl:350
+msgid "Revision expression is empty."
+msgstr "A expressão de revisão está vazia."
+
+#: lib/choose_rev.tcl:537
+msgid "Updated"
+msgstr "Atualizado"
+
+#: lib/choose_rev.tcl:565
+msgid "URL"
+msgstr "URL"
+
+#: lib/database.tcl:42
+msgid "Number of loose objects"
+msgstr "Número de objetos soltos"
+
+#: lib/database.tcl:43
+msgid "Disk space used by loose objects"
+msgstr "Espaço em disco usados por objetos soltos"
+
+#: lib/database.tcl:44
+msgid "Number of packed objects"
+msgstr "Número de objetos compactados"
+
+#: lib/database.tcl:45
+msgid "Number of packs"
+msgstr "Números de pacotes"
+
+#: lib/database.tcl:46
+msgid "Disk space used by packed objects"
+msgstr "Espaço em disco usado por objetos compactados"
+
+#: lib/database.tcl:47
+msgid "Packed objects waiting for pruning"
+msgstr "Objetos compactados à espera de poda"
+
+#: lib/database.tcl:48
+msgid "Garbage files"
+msgstr "Ficheiros de lixo"
+
+#: lib/database.tcl:72
+msgid "Compressing the object database"
+msgstr "A comprimir a base de dados de objetos"
+
+#: lib/database.tcl:83
+msgid "Verifying the object database with fsck-objects"
+msgstr "A verificar a base de dados de objetos com fsck-objects"
+
+#: lib/database.tcl:107
+#, tcl-format
+msgid ""
+"This repository currently has approximately %i loose objects.\n"
+"\n"
+"To maintain optimal performance it is strongly recommended that you compress "
+"the database.\n"
+"\n"
+"Compress the database now?"
+msgstr ""
+"Este repositório tem aproximadamente %i objetos soltos.\n"
+"\n"
+"Para manter o desempenho ótimo é veemente recomendado que comprima a base de "
+"dados.\n"
+"\n"
+"Comprimir a base de dados agora?"
+
+#: lib/error.tcl:20 lib/error.tcl:116
+msgid "error"
+msgstr "erro"
+
+#: lib/error.tcl:36
+msgid "warning"
+msgstr "aviso"
+
+#: lib/error.tcl:96
+msgid "You must correct the above errors before committing."
+msgstr "Deve corrigir os erros acima antes de submeter."
+
+#: lib/merge.tcl:13
+msgid ""
+"Cannot merge while amending.\n"
+"\n"
+"You must finish amending this commit before starting any type of merge.\n"
+msgstr ""
+"Não possível integrar ao mesmo tempo que se emenda.\n"
+"\n"
+"Deve acabar de emendar este commit antes de iniciar qualquer tipo de "
+"integração.\n"
+
+#: lib/merge.tcl:27
+msgid ""
+"Last scanned state does not match repository state.\n"
+"\n"
+"Another Git program has modified this repository since the last scan. A "
+"rescan must be performed before a merge can be performed.\n"
+"\n"
+"The rescan will be automatically started now.\n"
+msgstr ""
+"O último estado analisado não corresponde ao estado do repositório.\n"
+"\n"
+"Outro programa Git modificou este repositório deste a última análise. Deve-"
+"se reanalisar antes de se poder integrar.\n"
+"\n"
+"Irá-se reanalisar agora automaticamente.\n"
+
+#: lib/merge.tcl:45
+#, tcl-format
+msgid ""
+"You are in the middle of a conflicted merge.\n"
+"\n"
+"File %s has merge conflicts.\n"
+"\n"
+"You must resolve them, stage the file, and commit to complete the current "
+"merge. Only then can you begin another merge.\n"
+msgstr ""
+"Integração com conflitos em curso.\n"
+"\n"
+"O ficheiro %s tem conflitos de integração.\n"
+"\n"
+"Deve resolvê-los, preparar o ficheiro e submeter para concluir a integração "
+"atual. Só então pode iniciar outra integração.\n"
+
+#: lib/merge.tcl:55
+#, tcl-format
+msgid ""
+"You are in the middle of a change.\n"
+"\n"
+"File %s is modified.\n"
+"\n"
+"You should complete the current commit before starting a merge. Doing so "
+"will help you abort a failed merge, should the need arise.\n"
+msgstr ""
+"Tem alterações presentes.\n"
+"\n"
+"O ficheiro %s foi modificado.\n"
+"\n"
+"Deve concluir o commit atual antes de iniciar uma integração. Assim, ajuda-o "
+"a abortar uma integração falhada, caso necessário.\n"
+
+#: lib/merge.tcl:108
+#, tcl-format
+msgid "%s of %s"
+msgstr "%s de %s"
+
+#: lib/merge.tcl:122
+#, tcl-format
+msgid "Merging %s and %s..."
+msgstr "A integrar %s e %s..."
+
+#: lib/merge.tcl:133
+msgid "Merge completed successfully."
+msgstr "Integração concluída com sucesso."
+
+#: lib/merge.tcl:135
+msgid "Merge failed. Conflict resolution is required."
+msgstr "Integração falhada. É necessário resolver conflitos."
+
+#: lib/merge.tcl:160
+#, tcl-format
+msgid "Merge Into %s"
+msgstr "Integrar em %s"
+
+#: lib/merge.tcl:179
+msgid "Revision To Merge"
+msgstr "Revisão a integrar"
+
+#: lib/merge.tcl:214
+msgid ""
+"Cannot abort while amending.\n"
+"\n"
+"You must finish amending this commit.\n"
+msgstr ""
+"Não é possível abortar enquanto se emenda.\n"
+"\n"
+"Deve acabar de emendar este commit.\n"
+
+#: lib/merge.tcl:224
+msgid ""
+"Abort merge?\n"
+"\n"
+"Aborting the current merge will cause *ALL* uncommitted changes to be lost.\n"
+"\n"
+"Continue with aborting the current merge?"
+msgstr ""
+"Abortar integração?\n"
+"\n"
+"Ao abortar a integração atual perderá *TODAS* as alteração que não foram "
+"submetidas.\n"
+"\n"
+"Continuar a abortar a integração atual?"
+
+#: lib/merge.tcl:230
+msgid ""
+"Reset changes?\n"
+"\n"
+"Resetting the changes will cause *ALL* uncommitted changes to be lost.\n"
+"\n"
+"Continue with resetting the current changes?"
+msgstr ""
+"Repor alterações?\n"
+"\n"
+"Ao repor as alterações perderá *TODAS* as alterações não submetidas.\n"
+"\n"
+"Continuar a repor as alterações atuais?"
+
+#: lib/merge.tcl:241
+msgid "Aborting"
+msgstr "A abortar"
+
+#: lib/merge.tcl:241
+msgid "files reset"
+msgstr "ficheiros repostos"
+
+#: lib/merge.tcl:269
+msgid "Abort failed."
+msgstr "Falha ao abortar."
+
+#: lib/merge.tcl:271
+msgid "Abort completed. Ready."
+msgstr "Aborto concluído. Pronto."
+
+#~ msgid "Displaying only %s of %s files."
+#~ msgstr "A mostrar apenas %s de %s ficheiros."
+
+#~ msgid "Case-Sensitive"
+#~ msgstr "Distinguir Maiúsculas"
# Translation of git-gui to russian
# Copyright (C) 2007 Shawn Pearce
# This file is distributed under the same license as the git-gui package.
-# Irina Riesen <irina.riesen@gmail.com>, 2007.
-#
+# Translators:
+# Dimitriy Ryazantcev <DJm00n@mail.ru>, 2015-2016
+# Irina Riesen <irina.riesen@gmail.com>, 2007
msgid ""
msgstr ""
-"Project-Id-Version: git-gui\n"
+"Project-Id-Version: Git Russian Localization Project\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2010-01-26 15:47-0800\n"
-"PO-Revision-Date: 2007-10-22 22:30-0200\n"
-"Last-Translator: Alex Riesen <raa.lkml@gmail.com>\n"
-"Language-Team: Russian Translation <git@vger.kernel.org>\n"
+"PO-Revision-Date: 2016-06-30 12:39+0000\n"
+"Last-Translator: Dimitriy Ryazantcev <DJm00n@mail.ru>\n"
+"Language-Team: Russian (http://www.transifex.com/djm00n/git-po-ru/language/ru/)\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
+"Language: ru\n"
+"Plural-Forms: nplurals=4; plural=(n%10==1 && n%100!=11 ? 0 : n%10>=2 && n%10<=4 && (n%100<12 || n%100>14) ? 1 : n%10==0 || (n%10>=5 && n%10<=9) || (n%100>=11 && n%100<=14)? 2 : 3);\n"
#: git-gui.sh:41 git-gui.sh:793 git-gui.sh:807 git-gui.sh:820 git-gui.sh:903
#: git-gui.sh:922
"%s requires at least Git 1.5.0 or later.\n"
"\n"
"Assume '%s' is version 1.5.0?\n"
-msgstr ""
-"Невозможно определить версию Git\n"
-"\n"
-"%s указывает на версию '%s'.\n"
-"\n"
-"для %s требуется версия Git, начиная с 1.5.0\n"
-"\n"
-"Принять '%s' как версию 1.5.0?\n"
+msgstr "Невозможно определить версию Git\n\n%s указывает на версию «%s».\n\nдля %s требуется версия Git, начиная с 1.5.0\n\nПредположить, что «%s» и есть версия 1.5.0?\n"
#: git-gui.sh:1128
msgid "Git directory not found:"
#: git-gui.sh:1334 lib/checkout_op.tcl:306
msgid "Refreshing file status..."
-msgstr "Обновление информации о состоянии файлов..."
+msgstr "Обновление информации о состоянии файлов…"
#: git-gui.sh:1390
msgid "Scanning for modified files ..."
-msgstr "Поиск измененных файлов..."
+msgstr "Поиск измененных файлов…"
#: git-gui.sh:1454
msgid "Calling prepare-commit-msg hook..."
-msgstr "Вызов программы поддержки репозитория prepare-commit-msg..."
+msgstr "Вызов перехватчика prepare-commit-msg…"
#: git-gui.sh:1471
msgid "Commit declined by prepare-commit-msg hook."
-msgstr ""
-"Сохранение прервано программой поддержки репозитория prepare-commit-msg"
+msgstr "Коммит прерван перехватчиком prepare-commit-msg."
#: git-gui.sh:1629 lib/browser.tcl:246
msgid "Ready."
#: git-gui.sh:1915
msgid "Modified, not staged"
-msgstr "Изменено, не подготовлено"
+msgstr "Изменено, не в индексе"
#: git-gui.sh:1916 git-gui.sh:1924
msgid "Staged for commit"
-msgstr "Подготовлено для сохранения"
+msgstr "В индексе для коммита"
#: git-gui.sh:1917 git-gui.sh:1925
msgid "Portions staged for commit"
-msgstr "Части, подготовленные для сохранения"
+msgstr "Части, в индексе для коммита"
#: git-gui.sh:1918 git-gui.sh:1926
msgid "Staged for commit, missing"
-msgstr "Подготовлено для сохранения, отсутствует"
+msgstr "В индексе для коммита, отсутствует"
#: git-gui.sh:1920
msgid "File type changed, not staged"
-msgstr "Тип файла изменён, не подготовлено"
+msgstr "Тип файла изменён, не в индексе"
#: git-gui.sh:1921
msgid "File type changed, staged"
-msgstr "Тип файла изменён, подготовлено"
+msgstr "Тип файла изменён, в индексе"
#: git-gui.sh:1923
msgid "Untracked, not staged"
-msgstr "Не отслеживается, не подготовлено"
+msgstr "Не отслеживается, не в индексе"
#: git-gui.sh:1928
msgid "Missing"
#: git-gui.sh:1929
msgid "Staged for removal"
-msgstr "Подготовлено для удаления"
+msgstr "В индексе для удаления"
#: git-gui.sh:1930
msgid "Staged for removal, still present"
-msgstr "Подготовлено для удаления, еще не удалено"
+msgstr "В индексе для удаления, еще не удалено"
#: git-gui.sh:1932 git-gui.sh:1933 git-gui.sh:1934 git-gui.sh:1935
#: git-gui.sh:1936 git-gui.sh:1937
#: git-gui.sh:1972
msgid "Starting gitk... please wait..."
-msgstr "Запускается gitk... Подождите, пожалуйста..."
+msgstr "Запускается gitk… Подождите, пожалуйста…"
#: git-gui.sh:1984
msgid "Couldn't find gitk in PATH"
#: git-gui.sh:2458 lib/choose_rev.tcl:561
msgid "Branch"
-msgstr "Ветвь"
+msgstr "Ветка"
#: git-gui.sh:2461 lib/choose_rev.tcl:548
msgid "Commit@@noun"
-msgstr "Состояние"
+msgstr "Коммит"
#: git-gui.sh:2464 lib/merge.tcl:121 lib/merge.tcl:150 lib/merge.tcl:168
msgid "Merge"
#: git-gui.sh:2483
msgid "Browse Current Branch's Files"
-msgstr "Просмотреть файлы текущей ветви"
+msgstr "Просмотреть файлы текущей ветки"
#: git-gui.sh:2487
msgid "Browse Branch Files..."
-msgstr "Показать файлы ветви..."
+msgstr "Показать файлы ветки…"
#: git-gui.sh:2492
msgid "Visualize Current Branch's History"
-msgstr "Показать историю текущей ветви"
+msgstr "Показать историю текущей ветки"
#: git-gui.sh:2496
msgid "Visualize All Branch History"
-msgstr "Показать историю всех ветвей"
+msgstr "Показать историю всех веток"
#: git-gui.sh:2503
#, tcl-format
msgid "Browse %s's Files"
-msgstr "Показать файлы ветви %s"
+msgstr "Показать файлы ветки %s"
#: git-gui.sh:2505
#, tcl-format
msgid "Visualize %s's History"
-msgstr "Показать историю ветви %s"
+msgstr "Показать историю ветки %s"
#: git-gui.sh:2510 lib/database.tcl:27 lib/database.tcl:67
msgid "Database Statistics"
#: git-gui.sh:2576
msgid "Create..."
-msgstr "Создать..."
+msgstr "Создать…"
#: git-gui.sh:2582
msgid "Checkout..."
-msgstr "Перейти..."
+msgstr "Перейти…"
#: git-gui.sh:2588
msgid "Rename..."
-msgstr "Переименовать..."
+msgstr "Переименовать…"
#: git-gui.sh:2593
msgid "Delete..."
-msgstr "Удалить..."
+msgstr "Удалить…"
#: git-gui.sh:2598
msgid "Reset..."
-msgstr "Сбросить..."
+msgstr "Сбросить…"
#: git-gui.sh:2608
msgid "Done"
#: git-gui.sh:2610
msgid "Commit@@verb"
-msgstr "Сохранить"
+msgstr "Закоммитить"
#: git-gui.sh:2619 git-gui.sh:3050
msgid "New Commit"
-msgstr "Новое состояние"
+msgstr "Новый коммит"
#: git-gui.sh:2627 git-gui.sh:3057
msgid "Amend Last Commit"
-msgstr "Исправить последнее состояние"
+msgstr "Исправить последний коммит"
#: git-gui.sh:2637 git-gui.sh:3011 lib/remote_branch_delete.tcl:99
msgid "Rescan"
#: git-gui.sh:2643
msgid "Stage To Commit"
-msgstr "Подготовить для сохранения"
+msgstr "Добавить в индекс"
#: git-gui.sh:2649
msgid "Stage Changed Files To Commit"
-msgstr "Подготовить измененные файлы для сохранения"
+msgstr "Добавить изменённые файлы в индекс"
#: git-gui.sh:2655
msgid "Unstage From Commit"
-msgstr "Убрать из подготовленного"
+msgstr "Убрать из индекса"
#: git-gui.sh:2661 lib/index.tcl:412
msgid "Revert Changes"
-msgstr "Отменить изменения"
+msgstr "Обратить изменения"
#: git-gui.sh:2669 git-gui.sh:3310 git-gui.sh:3341
msgid "Show Less Context"
#: git-gui.sh:2696
msgid "Local Merge..."
-msgstr "Локальное слияние..."
+msgstr "Локальное слияние…"
#: git-gui.sh:2701
msgid "Abort Merge..."
-msgstr "Прервать слияние..."
+msgstr "Прервать слияние…"
#: git-gui.sh:2713 git-gui.sh:2741
msgid "Add..."
-msgstr "Добавить..."
+msgstr "Добавить…"
#: git-gui.sh:2717
msgid "Push..."
-msgstr "Отправить..."
+msgstr "Отправить…"
#: git-gui.sh:2721
msgid "Delete Branch..."
-msgstr "Удалить ветвь..."
+msgstr "Удалить ветку…"
#: git-gui.sh:2731 git-gui.sh:3292
msgid "Options..."
-msgstr "Настройки..."
+msgstr "Настройки…"
#: git-gui.sh:2742
msgid "Remove..."
-msgstr "Удалить..."
+msgstr "Удалить…"
#: git-gui.sh:2751 lib/choose_repository.tcl:50
msgid "Help"
#: git-gui.sh:2926
msgid "Current Branch:"
-msgstr "Текущая ветвь:"
+msgstr "Текущая ветка:"
#: git-gui.sh:2947
msgid "Staged Changes (Will Commit)"
-msgstr "Подготовлено (будет сохранено)"
+msgstr "Изменения в индексе (будут закоммичены)"
#: git-gui.sh:2967
msgid "Unstaged Changes"
#: git-gui.sh:3017
msgid "Stage Changed"
-msgstr "Подготовить все"
+msgstr "Индексировать всё"
#: git-gui.sh:3036 lib/transport.tcl:104 lib/transport.tcl:193
msgid "Push"
#: git-gui.sh:3071
msgid "Initial Commit Message:"
-msgstr "Комментарий к первому состоянию:"
+msgstr "Сообщение первого коммита:"
#: git-gui.sh:3072
msgid "Amended Commit Message:"
-msgstr "Комментарий к исправленному состоянию:"
+msgstr "Сообщение исправленного коммита:"
#: git-gui.sh:3073
msgid "Amended Initial Commit Message:"
-msgstr "Комментарий к исправленному первоначальному состоянию:"
+msgstr "Сообщение исправленного первого коммита:"
#: git-gui.sh:3074
msgid "Amended Merge Commit Message:"
-msgstr "Комментарий к исправленному слиянию:"
+msgstr "Сообщение исправленного слияния:"
#: git-gui.sh:3075
msgid "Merge Commit Message:"
-msgstr "Комментарий к слиянию:"
+msgstr "Сообщение слияния:"
#: git-gui.sh:3076
msgid "Commit Message:"
-msgstr "Комментарий к состоянию:"
+msgstr "Сообщение коммита:"
#: git-gui.sh:3125 git-gui.sh:3267 lib/console.tcl:73
msgid "Copy All"
#: git-gui.sh:3336
msgid "Revert To Base"
-msgstr "Отменить изменения"
+msgstr "Обратить изменения"
#: git-gui.sh:3354
msgid "Visualize These Changes In The Submodule"
-msgstr ""
+msgstr "Показать эти изменения подмодуля"
#: git-gui.sh:3358
msgid "Visualize Current Branch History In The Submodule"
-msgstr "Показать историю текущей ветви подмодуля"
+msgstr "Показать историю текущей ветки подмодуля"
#: git-gui.sh:3362
msgid "Visualize All Branch History In The Submodule"
-msgstr "Показать историю всех ветвей подмодуля"
+msgstr "Показать историю всех веток подмодуля"
#: git-gui.sh:3367
msgid "Start git gui In The Submodule"
-msgstr ""
+msgstr "Запустить git gui в подмодуле"
#: git-gui.sh:3389
msgid "Unstage Hunk From Commit"
-msgstr "Не сохранять часть"
+msgstr "Убрать блок из индекса"
#: git-gui.sh:3391
msgid "Unstage Lines From Commit"
-msgstr "Убрать строки из подготовленного"
+msgstr "Убрать строки из индекса"
#: git-gui.sh:3393
msgid "Unstage Line From Commit"
-msgstr "Убрать строку из подготовленного"
+msgstr "Убрать строку из индекса"
#: git-gui.sh:3396
msgid "Stage Hunk For Commit"
-msgstr "Подготовить часть для сохранения"
+msgstr "Добавить блок в индекс"
#: git-gui.sh:3398
msgid "Stage Lines For Commit"
-msgstr "Подготовить строки для сохранения"
+msgstr "Добавить строки в индекс"
#: git-gui.sh:3400
msgid "Stage Line For Commit"
-msgstr "Подготовить строку для сохранения"
+msgstr "Добавить строку в индекс"
#: git-gui.sh:3424
msgid "Initializing..."
-msgstr "Инициализация..."
+msgstr "Инициализация…"
#: git-gui.sh:3541
#, tcl-format
"going to be ignored by any Git subprocess run\n"
"by %s:\n"
"\n"
-msgstr ""
-"Возможны ошибки в переменных окружения.\n"
-"\n"
-"Переменные окружения, которые возможно\n"
-"будут проигнорированы командами Git,\n"
-"запущенными из %s\n"
-"\n"
+msgstr "Возможны ошибки в переменных окружения.\n\nПеременные окружения, которые возможно\nбудут проигнорированы командами Git,\nзапущенными из %s\n\n"
#: git-gui.sh:3570
msgid ""
"\n"
"This is due to a known issue with the\n"
"Tcl binary distributed by Cygwin."
-msgstr ""
-"\n"
-"Это известная проблема с Tcl,\n"
-"распространяемым Cygwin."
+msgstr "\nЭто известная проблема с Tcl,\nраспространяемым Cygwin."
#: git-gui.sh:3575
#, tcl-format
"is placing values for the user.name and\n"
"user.email settings into your personal\n"
"~/.gitconfig file.\n"
-msgstr ""
-"\n"
-"\n"
-"Вместо использования %s можно\n"
-"сохранить значения user.name и\n"
-"user.email в Вашем персональном\n"
-"файле ~/.gitconfig.\n"
+msgstr "\n\nВместо использования %s можно\nсохранить значения user.name и\nuser.email в Вашем персональном\nфайле ~/.gitconfig.\n"
#: lib/about.tcl:26
msgid "git-gui - a graphical user interface for Git."
#: lib/blame.tcl:78
msgid "Commit:"
-msgstr "Сохраненное состояние:"
+msgstr "Коммит:"
#: lib/blame.tcl:271
msgid "Copy Commit"
-msgstr "Скопировать SHA-1"
+msgstr "Копировать SHA-1"
#: lib/blame.tcl:275
msgid "Find Text..."
-msgstr "Найти текст..."
+msgstr "Найти текст…"
#: lib/blame.tcl:284
msgid "Do Full Copy Detection"
#: lib/blame.tcl:291
msgid "Blame Parent Commit"
-msgstr "Рассмотреть состояние предка"
+msgstr "Авторы родительского коммита"
#: lib/blame.tcl:450
#, tcl-format
msgid "Reading %s..."
-msgstr "Чтение %s..."
+msgstr "Чтение %s…"
#: lib/blame.tcl:557
msgid "Loading copy/move tracking annotations..."
-msgstr "Загрузка аннотации копирований/переименований..."
+msgstr "Загрузка аннотации копирований/переименований…"
#: lib/blame.tcl:577
msgid "lines annotated"
#: lib/blame.tcl:769
msgid "Loading original location annotations..."
-msgstr "Загрузка аннотаций первоначального положения объекта..."
+msgstr "Загрузка аннотаций первоначального положения объекта…"
#: lib/blame.tcl:772
msgid "Annotation complete."
#: lib/blame.tcl:842
msgid "Running thorough copy detection..."
-msgstr "Выполнение полного поиска копий..."
+msgstr "Выполнение полного поиска копий…"
#: lib/blame.tcl:910
msgid "Loading annotation..."
-msgstr "Загрузка аннотации..."
+msgstr "Загрузка аннотации…"
#: lib/blame.tcl:963
msgid "Author:"
#: lib/blame.tcl:967
msgid "Committer:"
-msgstr "Сохранил:"
+msgstr "Коммитер:"
#: lib/blame.tcl:972
msgid "Original File:"
#: lib/blame.tcl:1020
msgid "Cannot find HEAD commit:"
-msgstr "Невозможно найти текущее состояние:"
+msgstr "Не удалось найти текущее состояние:"
#: lib/blame.tcl:1075
msgid "Cannot find parent commit:"
-msgstr "Невозможно найти состояние предка:"
+msgstr "Не удалось найти родительское состояние:"
#: lib/blame.tcl:1090
msgid "Unable to display parent"
#: lib/branch_checkout.tcl:14 lib/branch_checkout.tcl:19
msgid "Checkout Branch"
-msgstr "Перейти на ветвь"
+msgstr "Перейти на ветку"
#: lib/branch_checkout.tcl:23
msgid "Checkout"
#: lib/branch_checkout.tcl:39 lib/branch_create.tcl:92
msgid "Fetch Tracking Branch"
-msgstr "Получить изменения из внешней ветви"
+msgstr "Извлечь изменения из внешней ветки"
#: lib/branch_checkout.tcl:44
msgid "Detach From Local Branch"
-msgstr "Отсоединить от локальной ветви"
+msgstr "Отсоединить от локальной ветки"
#: lib/branch_create.tcl:22
msgid "Create Branch"
-msgstr "Создание ветви"
+msgstr "Создать ветку"
#: lib/branch_create.tcl:27
msgid "Create New Branch"
-msgstr "Создать новую ветвь"
+msgstr "Создать новую ветку"
#: lib/branch_create.tcl:31 lib/choose_repository.tcl:381
msgid "Create"
#: lib/branch_create.tcl:40
msgid "Branch Name"
-msgstr "Название ветви"
+msgstr "Имя ветки"
#: lib/branch_create.tcl:43 lib/remote_add.tcl:39 lib/tools_dlg.tcl:50
msgid "Name:"
#: lib/branch_create.tcl:58
msgid "Match Tracking Branch Name"
-msgstr "Взять из имен ветвей слежения"
+msgstr "Соответствовать имени отслеживаемой ветки"
#: lib/branch_create.tcl:66
msgid "Starting Revision"
#: lib/branch_create.tcl:72
msgid "Update Existing Branch:"
-msgstr "Обновить имеющуюся ветвь:"
+msgstr "Обновить имеющуюся ветку:"
#: lib/branch_create.tcl:75
msgid "No"
#: lib/branch_create.tcl:131
msgid "Please select a tracking branch."
-msgstr "Укажите ветвь слежения."
+msgstr "Укажите отслеживаемую ветку."
#: lib/branch_create.tcl:140
#, tcl-format
msgid "Tracking branch %s is not a branch in the remote repository."
-msgstr "Ветвь слежения %s не является ветвью во внешнем репозитории."
+msgstr "Отслеживаемая ветка %s не является веткой на внешнем репозитории."
#: lib/branch_create.tcl:153 lib/branch_rename.tcl:86
msgid "Please supply a branch name."
-msgstr "Укажите название ветви."
+msgstr "Укажите имя ветки."
#: lib/branch_create.tcl:164 lib/branch_rename.tcl:106
#, tcl-format
msgid "'%s' is not an acceptable branch name."
-msgstr "Недопустимое название ветви '%s'."
+msgstr "Недопустимое имя ветки «%s»."
#: lib/branch_delete.tcl:15
msgid "Delete Branch"
-msgstr "Удаление ветви"
+msgstr "Удаление ветки"
#: lib/branch_delete.tcl:20
msgid "Delete Local Branch"
-msgstr "Удалить локальную ветвь"
+msgstr "Удалить локальную ветку"
#: lib/branch_delete.tcl:37
msgid "Local Branches"
-msgstr "Локальные ветви"
+msgstr "Локальные ветки"
#: lib/branch_delete.tcl:52
msgid "Delete Only If Merged Into"
#: lib/branch_delete.tcl:103
#, tcl-format
msgid "The following branches are not completely merged into %s:"
-msgstr "Ветви, которые не полностью сливаются с %s:"
+msgstr "Ветки, которые не полностью сливаются с %s:"
#: lib/branch_delete.tcl:115 lib/remote_branch_delete.tcl:217
msgid ""
"Recovering deleted branches is difficult.\n"
"\n"
"Delete the selected branches?"
-msgstr ""
-"Восстановить удаленные ветви сложно.\n"
-"\n"
-"Продолжить?"
+msgstr "Восстановить удаленные ветки сложно.\n\nПродолжить?"
#: lib/branch_delete.tcl:141
#, tcl-format
msgid ""
"Failed to delete branches:\n"
"%s"
-msgstr ""
-"Не удалось удалить ветви:\n"
-"%s"
+msgstr "Не удалось удалить ветки:\n%s"
#: lib/branch_rename.tcl:14 lib/branch_rename.tcl:22
msgid "Rename Branch"
-msgstr "Переименование ветви"
+msgstr "Переименование ветки"
#: lib/branch_rename.tcl:26
msgid "Rename"
#: lib/branch_rename.tcl:36
msgid "Branch:"
-msgstr "Ветвь:"
+msgstr "Ветка:"
#: lib/branch_rename.tcl:39
msgid "New Name:"
#: lib/branch_rename.tcl:75
msgid "Please select a branch to rename."
-msgstr "Укажите ветвь для переименования."
+msgstr "Укажите ветку для переименования."
#: lib/branch_rename.tcl:96 lib/checkout_op.tcl:202
#, tcl-format
msgid "Branch '%s' already exists."
-msgstr "Ветвь '%s' уже существует."
+msgstr "Ветка «%s» уже существует."
#: lib/branch_rename.tcl:117
#, tcl-format
msgid "Failed to rename '%s'."
-msgstr "Не удалось переименовать '%s'. "
+msgstr "Не удалось переименовать «%s». "
#: lib/browser.tcl:17
msgid "Starting..."
-msgstr "Запуск..."
+msgstr "Запуск…"
#: lib/browser.tcl:26
msgid "File Browser"
#: lib/browser.tcl:126 lib/browser.tcl:143
#, tcl-format
msgid "Loading %s..."
-msgstr "Загрузка %s..."
+msgstr "Загрузка %s…"
#: lib/browser.tcl:187
msgid "[Up To Parent]"
#: lib/browser.tcl:267 lib/browser.tcl:273
msgid "Browse Branch Files"
-msgstr "Показать файлы ветви"
+msgstr "Показать файлы ветки"
#: lib/browser.tcl:278 lib/choose_repository.tcl:398
#: lib/choose_repository.tcl:486 lib/choose_repository.tcl:497
#: lib/checkout_op.tcl:85
#, tcl-format
msgid "Fetching %s from %s"
-msgstr "Получение %s из %s "
+msgstr "Извлечение %s из %s "
#: lib/checkout_op.tcl:133
#, tcl-format
#: lib/checkout_op.tcl:175
#, tcl-format
msgid "Branch '%s' does not exist."
-msgstr "Ветвь '%s' не существует "
+msgstr "Ветка «%s» не существует."
#: lib/checkout_op.tcl:194
#, tcl-format
msgid "Failed to configure simplified git-pull for '%s'."
-msgstr "Ошибка создания упрощённой конфигурации git pull для '%s'."
+msgstr "Ошибка создания упрощённой конфигурации git pull для «%s»."
#: lib/checkout_op.tcl:229
#, tcl-format
"\n"
"It cannot fast-forward to %s.\n"
"A merge is required."
-msgstr ""
-"Ветвь '%s' уже существует.\n"
-"\n"
-"Она не может быть прокручена(fast-forward) к %s.\n"
-"Требуется слияние."
+msgstr "Ветка «%s» уже существует.\n\nОна не может быть перемотана вперед к %s.\nТребуется слияние."
#: lib/checkout_op.tcl:243
#, tcl-format
msgid "Merge strategy '%s' not supported."
-msgstr "Неизвестная стратегия слияния: '%s'."
+msgstr "Неизвестная стратегия слияния «%s»."
#: lib/checkout_op.tcl:262
#, tcl-format
msgid "Failed to update '%s'."
-msgstr "Не удалось обновить '%s'."
+msgstr "Не удалось обновить «%s»."
#: lib/checkout_op.tcl:274
msgid "Staging area (index) is already locked."
msgid ""
"Last scanned state does not match repository state.\n"
"\n"
-"Another Git program has modified this repository since the last scan. A "
-"rescan must be performed before the current branch can be changed.\n"
+"Another Git program has modified this repository since the last scan. A rescan must be performed before the current branch can be changed.\n"
"\n"
"The rescan will be automatically started now.\n"
-msgstr ""
-"Последнее прочитанное состояние репозитория не соответствует текущему.\n"
-"\n"
-"С момента последней проверки репозиторий был изменен другой программой Git. "
-"Необходимо перечитать репозиторий, прежде чем изменять текущую ветвь.\n"
-"\n"
-"Это будет сделано сейчас автоматически.\n"
+msgstr "Последнее прочитанное состояние репозитория не соответствует текущему.\n\nС момента последней проверки репозиторий был изменен другой программой Git. Необходимо перечитать репозиторий, прежде чем текущая ветка может быть изменена.\n\nЭто будет сделано сейчас автоматически.\n"
#: lib/checkout_op.tcl:345
#, tcl-format
msgid "Updating working directory to '%s'..."
-msgstr "Обновление рабочего каталога из '%s'..."
+msgstr "Обновление рабочего каталога из «%s»…"
#: lib/checkout_op.tcl:346
msgid "files checked out"
#: lib/checkout_op.tcl:376
#, tcl-format
msgid "Aborted checkout of '%s' (file level merging is required)."
-msgstr "Прерван переход на '%s' (требуется слияние содержания файлов)"
+msgstr "Прерван переход на «%s» (требуется слияние содержимого файлов)"
#: lib/checkout_op.tcl:377
msgid "File level merge required."
#: lib/checkout_op.tcl:381
#, tcl-format
msgid "Staying on branch '%s'."
-msgstr "Ветвь '%s' остается текущей."
+msgstr "Ветка «%s» остаётся текущей."
#: lib/checkout_op.tcl:452
msgid ""
"You are no longer on a local branch.\n"
"\n"
-"If you wanted to be on a branch, create one now starting from 'This Detached "
-"Checkout'."
-msgstr ""
-"Вы находитесь не в локальной ветви.\n"
-"\n"
-"Если вы хотите снова вернуться к какой-нибудь ветви, создайте ее сейчас, "
-"начиная с 'Текущего отсоединенного состояния'."
+"If you wanted to be on a branch, create one now starting from 'This Detached Checkout'."
+msgstr "Вы более не находитесь на локальной ветке.\n\nЕсли вы хотите снова вернуться к какой-нибудь ветке, создайте её сейчас, начиная с «Текущего отсоединенного состояния»."
#: lib/checkout_op.tcl:503 lib/checkout_op.tcl:507
#, tcl-format
msgid "Checked out '%s'."
-msgstr "Ветвь '%s' сделана текущей."
+msgstr "Выполнен переход на «%s»."
#: lib/checkout_op.tcl:535
#, tcl-format
msgid "Resetting '%s' to '%s' will lose the following commits:"
-msgstr "Сброс '%s' в '%s' приведет к потере следующих сохраненных состояний: "
+msgstr "Сброс «%s» на «%s» приведет к потере следующих коммитов:"
#: lib/checkout_op.tcl:557
msgid "Recovering lost commits may not be easy."
-msgstr "Восстановить потерянные сохраненные состояния будет сложно."
+msgstr "Восстановить потерянные коммиты будет сложно."
#: lib/checkout_op.tcl:562
#, tcl-format
msgid "Reset '%s'?"
-msgstr "Сбросить '%s'?"
+msgstr "Сбросить «%s»?"
#: lib/checkout_op.tcl:567 lib/merge.tcl:164 lib/tools_dlg.tcl:343
msgid "Visualize"
msgid ""
"Failed to set current branch.\n"
"\n"
-"This working directory is only partially switched. We successfully updated "
-"your files, but failed to update an internal Git file.\n"
+"This working directory is only partially switched. We successfully updated your files, but failed to update an internal Git file.\n"
"\n"
"This should not have occurred. %s will now close and give up."
-msgstr ""
-"Не удалось установить текущую ветвь.\n"
-"\n"
-"Ваш рабочий каталог обновлен только частично. Были обновлены все файлы кроме "
-"служебных файлов Git. \n"
-"\n"
-"Этого не должно было произойти. %s завершается."
+msgstr "Не удалось установить текущую ветку.\n\nВаш рабочий каталог обновлён только частично. Были обновлены все файлы кроме служебных файлов Git. \n\nЭтого не должно было произойти. %s завершается."
#: lib/choose_font.tcl:39
msgid "Select"
msgid ""
"This is example text.\n"
"If you like this text, it can be your font."
-msgstr ""
-"Это пример текста.\n"
-"Если Вам нравится этот текст, это может быть Ваш шрифт."
+msgstr "Это пример текста.\nЕсли Вам нравится этот текст, это может быть Ваш шрифт."
#: lib/choose_repository.tcl:28
msgid "Git Gui"
#: lib/choose_repository.tcl:93
msgid "New..."
-msgstr "Новый..."
+msgstr "Новый…"
#: lib/choose_repository.tcl:100 lib/choose_repository.tcl:471
msgid "Clone Existing Repository"
#: lib/choose_repository.tcl:106
msgid "Clone..."
-msgstr "Склонировать..."
+msgstr "Клонировать…"
#: lib/choose_repository.tcl:113 lib/choose_repository.tcl:1016
msgid "Open Existing Repository"
#: lib/choose_repository.tcl:119
msgid "Open..."
-msgstr "Открыть..."
+msgstr "Открыть…"
#: lib/choose_repository.tcl:132
msgid "Recent Repositories"
#: lib/choose_repository.tcl:508
msgid "Standard (Fast, Semi-Redundant, Hardlinks)"
-msgstr "Стандартный (Быстрый, полуизбыточный, \"жесткие\" ссылки)"
+msgstr "Стандартный (Быстрый, полуизбыточный, «жесткие» ссылки)"
#: lib/choose_repository.tcl:514
msgid "Full Copy (Slower, Redundant Backup)"
#: lib/choose_repository.tcl:641
msgid "buckets"
-msgstr ""
+msgstr "блоки"
#: lib/choose_repository.tcl:665
#, tcl-format
#: lib/choose_repository.tcl:703 lib/choose_repository.tcl:917
#: lib/choose_repository.tcl:929
msgid "The 'master' branch has not been initialized."
-msgstr "Не инициализирована ветвь 'master'."
+msgstr "Не инициализирована ветвь «master»."
#: lib/choose_repository.tcl:716
msgid "Hardlinks are unavailable. Falling back to copying."
-msgstr "\"Жесткие ссылки\" недоступны. Будет использовано копирование."
+msgstr "«Жесткие ссылки» недоступны. Будет использовано копирование."
#: lib/choose_repository.tcl:728
#, tcl-format
#: lib/choose_repository.tcl:803
#, tcl-format
msgid "Unable to hardlink object: %s"
-msgstr "Не могу \"жестко связать\" объект: %s"
+msgstr "Не могу создать «жесткую ссылку» на объект: %s"
#: lib/choose_repository.tcl:858
msgid "Cannot fetch branches and objects. See console output for details."
-msgstr ""
-"Не могу получить ветви и объекты. Дополнительная информация на консоли."
+msgstr "Не удалось извлечь ветки и объекты. Дополнительная информация на консоли."
#: lib/choose_repository.tcl:869
msgid "Cannot fetch tags. See console output for details."
-msgstr "Не могу получить метки. Дополнительная информация на консоли."
+msgstr "Не удалось извлечь метки. Дополнительная информация на консоли."
#: lib/choose_repository.tcl:893
msgid "Cannot determine HEAD. See console output for details."
#: lib/choose_repository.tcl:915
msgid "No default branch obtained."
-msgstr "Не было получено ветви по умолчанию."
+msgstr "Ветка по умолчанию не была получена."
#: lib/choose_repository.tcl:926
#, tcl-format
msgid "Cannot resolve %s as a commit."
-msgstr "Не могу распознать %s как состояние."
+msgstr "Не могу распознать %s как коммит."
#: lib/choose_repository.tcl:938
msgid "Creating working directory"
#: lib/choose_rev.tcl:74
msgid "Local Branch"
-msgstr "Локальная ветвь:"
+msgstr "Локальная ветка:"
#: lib/choose_rev.tcl:79
msgid "Tracking Branch"
-msgstr "Ветвь слежения"
+msgstr "Отслеживаемая ветка"
#: lib/choose_rev.tcl:84 lib/choose_rev.tcl:538
msgid "Tag"
msgid ""
"There is nothing to amend.\n"
"\n"
-"You are about to create the initial commit. There is no commit before this "
-"to amend.\n"
-msgstr ""
-"Отсутствует состояние для исправления.\n"
-"\n"
-"Вы создаете первое состояние в репозитории, здесь еще нечего исправлять.\n"
+"You are about to create the initial commit. There is no commit before this to amend.\n"
+msgstr "Отсутствует коммиты для исправления.\n\nВы создаете начальный коммит, здесь еще нечего исправлять.\n"
#: lib/commit.tcl:18
msgid ""
"Cannot amend while merging.\n"
"\n"
-"You are currently in the middle of a merge that has not been fully "
-"completed. You cannot amend the prior commit unless you first abort the "
-"current merge activity.\n"
-msgstr ""
-"Невозможно исправить состояние во время операции слияния.\n"
-"\n"
-"Текущее слияние не завершено. Невозможно исправить предыдущее сохраненное "
-"состояние, не прерывая эту операцию.\n"
+"You are currently in the middle of a merge that has not been fully completed. You cannot amend the prior commit unless you first abort the current merge activity.\n"
+msgstr "Невозможно исправить коммит во время слияния.\n\nТекущее слияние не завершено. Невозможно исправить предыдуий коммит, не прерывая эту операцию.\n"
#: lib/commit.tcl:48
msgid "Error loading commit data for amend:"
-msgstr "Ошибка при загрузке данных для исправления сохраненного состояния:"
+msgstr "Ошибка при загрузке данных для исправления коммита:"
#: lib/commit.tcl:75
msgid "Unable to obtain your identity:"
#: lib/commit.tcl:80
msgid "Invalid GIT_COMMITTER_IDENT:"
-msgstr "Неверный GIT_COMMITTER_IDENT:"
+msgstr "Недопустимый GIT_COMMITTER_IDENT:"
#: lib/commit.tcl:129
#, tcl-format
msgid "warning: Tcl does not support encoding '%s'."
-msgstr "предупреждение: Tcl не поддерживает кодировку '%s'."
+msgstr "предупреждение: Tcl не поддерживает кодировку «%s»."
#: lib/commit.tcl:149
msgid ""
"Last scanned state does not match repository state.\n"
"\n"
-"Another Git program has modified this repository since the last scan. A "
-"rescan must be performed before another commit can be created.\n"
+"Another Git program has modified this repository since the last scan. A rescan must be performed before another commit can be created.\n"
"\n"
"The rescan will be automatically started now.\n"
-msgstr ""
-"Последнее прочитанное состояние репозитория не соответствует текущему.\n"
-"\n"
-"С момента последней проверки репозиторий был изменен другой программой Git. "
-"Необходимо перечитать репозиторий, прежде чем изменять текущую ветвь. \n"
-"\n"
-"Это будет сделано сейчас автоматически.\n"
+msgstr "Последнее прочитанное состояние репозитория не соответствует текущему.\n\nС момента последней проверки репозиторий был изменен другой программой Git. Необходимо перечитать репозиторий, прежде чем изменять текущую ветвь. \n\nЭто будет сделано сейчас автоматически.\n"
#: lib/commit.tcl:172
#, tcl-format
msgid ""
"Unmerged files cannot be committed.\n"
"\n"
-"File %s has merge conflicts. You must resolve them and stage the file "
-"before committing.\n"
-msgstr ""
-"Нельзя сохранить файлы с незавершённой операцией слияния.\n"
-"\n"
-"Для файла %s возник конфликт слияния. Разрешите конфликт и добавьте к "
-"подготовленным файлам перед сохранением.\n"
+"File %s has merge conflicts. You must resolve them and stage the file before committing.\n"
+msgstr "Нельзя выполнить коммит с незавершённой операцией слияния.\n\nДля файла %s возник конфликт слияния. Разрешите конфликт и добавьте их в индекс перед выполнением коммита.\n"
#: lib/commit.tcl:180
#, tcl-format
"Unknown file state %s detected.\n"
"\n"
"File %s cannot be committed by this program.\n"
-msgstr ""
-"Обнаружено неизвестное состояние файла %s.\n"
-"\n"
-"Файл %s не может быть сохранен данной программой.\n"
+msgstr "Обнаружено неизвестное состояние файла %s.\n\nФайл %s не может быть закоммичен этой программой.\n"
#: lib/commit.tcl:188
msgid ""
"No changes to commit.\n"
"\n"
"You must stage at least 1 file before you can commit.\n"
-msgstr ""
-"Отсутствуют изменения для сохранения.\n"
-"\n"
-"Подготовьте хотя бы один файл до создания сохраненного состояния.\n"
+msgstr "Отсутствуют изменения для сохранения.\n\nДобавьте в индекс хотя бы один файл перед выполнением коммита.\n"
#: lib/commit.tcl:203
msgid ""
"- First line: Describe in one sentence what you did.\n"
"- Second line: Blank\n"
"- Remaining lines: Describe why this change is good.\n"
-msgstr ""
-"Напишите комментарий к сохраненному состоянию.\n"
-"\n"
-"Рекомендуется следующий формат комментария:\n"
-"\n"
-"- первая строка: краткое описание сделанных изменений.\n"
-"- вторая строка пустая\n"
-"- оставшиеся строки: опишите, что дают ваши изменения.\n"
+msgstr "Укажите сообщение коммита.\n\nРекомендуется следующий формат сообщения:\n\n- в первой строке краткое описание сделанных изменений\n- вторая строка пустая\n- в оставшихся строках опишите, что дают ваши изменения\n"
#: lib/commit.tcl:234
msgid "Calling pre-commit hook..."
-msgstr "Вызов программы поддержки репозитория pre-commit..."
+msgstr "Вызов перехватчика pre-commit…"
#: lib/commit.tcl:249
msgid "Commit declined by pre-commit hook."
-msgstr "Сохранение прервано программой поддержки репозитория pre-commit"
+msgstr "Коммит прерван перехватчиком pre-commit."
#: lib/commit.tcl:272
msgid "Calling commit-msg hook..."
-msgstr "Вызов программы поддержки репозитория commit-msg..."
+msgstr "Вызов перехватчика commit-msg…"
#: lib/commit.tcl:287
msgid "Commit declined by commit-msg hook."
-msgstr "Сохранение прервано программой поддержки репозитория commit-msg"
+msgstr "Коммит прерван перехватчиком commit-msg"
#: lib/commit.tcl:300
msgid "Committing changes..."
-msgstr "Сохранение изменений..."
+msgstr "Коммит изменений…"
#: lib/commit.tcl:316
msgid "write-tree failed:"
#: lib/commit.tcl:317 lib/commit.tcl:361 lib/commit.tcl:382
msgid "Commit failed."
-msgstr "Сохранить состояние не удалось."
+msgstr "Не удалось закоммитить изменения."
#: lib/commit.tcl:334
#, tcl-format
msgid "Commit %s appears to be corrupt"
-msgstr "Состояние %s выглядит поврежденным"
+msgstr "Коммит %s похоже поврежден"
#: lib/commit.tcl:339
msgid ""
"No files were modified by this commit and it was not a merge commit.\n"
"\n"
"A rescan will be automatically started now.\n"
-msgstr ""
-"Отсутствуют изменения для сохранения.\n"
-"\n"
-"Ни один файл не был изменен и не было слияния.\n"
-"\n"
-"Сейчас автоматически запустится перечитывание репозитория.\n"
+msgstr "Нет изменения для коммита.\n\nНи один файл не был изменен и не было слияния.\n\nСейчас автоматически запустится перечитывание репозитория.\n"
#: lib/commit.tcl:346
msgid "No changes to commit."
-msgstr "Отсутствуют изменения для сохранения."
+msgstr "Нет изменений для коммита."
#: lib/commit.tcl:360
msgid "commit-tree failed:"
#: lib/commit.tcl:469
#, tcl-format
msgid "Created commit %s: %s"
-msgstr "Создано состояние %s: %s "
+msgstr "Создан коммит %s: %s "
#: lib/console.tcl:59
msgid "Working... please wait..."
-msgstr "В процессе... пожалуйста, ждите..."
+msgstr "В процессе… пожалуйста, ждите…"
#: lib/console.tcl:186
msgid "Success"
msgid ""
"This repository currently has approximately %i loose objects.\n"
"\n"
-"To maintain optimal performance it is strongly recommended that you compress "
-"the database.\n"
+"To maintain optimal performance it is strongly recommended that you compress the database.\n"
"\n"
"Compress the database now?"
-msgstr ""
-"Этот репозиторий сейчас содержит примерно %i свободных объектов\n"
-"\n"
-"Для лучшей производительности рекомендуется сжать базу данных.\n"
-"\n"
-"Сжать базу данных сейчас?"
+msgstr "Этот репозиторий сейчас содержит примерно %i свободных объектов\n\nДля лучшей производительности рекомендуется сжать базу данных.\n\nСжать базу данных сейчас?"
#: lib/date.tcl:25
#, tcl-format
"\n"
"%s has no changes.\n"
"\n"
-"The modification date of this file was updated by another application, but "
-"the content within the file was not changed.\n"
-"\n"
-"A rescan will be automatically started to find other files which may have "
-"the same state."
-msgstr ""
-"Изменений не обнаружено.\n"
+"The modification date of this file was updated by another application, but the content within the file was not changed.\n"
"\n"
-"в %s отсутствуют изменения.\n"
-"\n"
-"Дата изменения файла была обновлена другой программой, но содержимое файла "
-"осталось прежним.\n"
-"\n"
-"Сейчас будет запущено перечитывание репозитория, чтобы найти подобные файлы."
+"A rescan will be automatically started to find other files which may have the same state."
+msgstr "Изменений не обнаружено.\n\nв %s отсутствуют изменения.\n\nДата изменения файла была обновлена другой программой, но содержимое файла осталось прежним.\n\nСейчас будет запущено перечитывание репозитория, чтобы найти подобные файлы."
#: lib/diff.tcl:104
#, tcl-format
msgid "Loading diff of %s..."
-msgstr "Загрузка изменений в %s..."
+msgstr "Загрузка изменений %s…"
#: lib/diff.tcl:125
msgid ""
"LOCAL: deleted\n"
"REMOTE:\n"
-msgstr ""
-"ЛОКАЛЬНО: удалён\n"
-"ВНЕШНИЙ:\n"
+msgstr "ЛОКАЛЬНО: удалён\nВНЕШНИЙ:\n"
#: lib/diff.tcl:130
msgid ""
"REMOTE: deleted\n"
"LOCAL:\n"
-msgstr ""
-"ВНЕШНИЙ: удалён\n"
-"ЛОКАЛЬНО:\n"
+msgstr "ВНЕШНИЙ: удалён\nЛОКАЛЬНО:\n"
#: lib/diff.tcl:137
msgid "LOCAL:\n"
msgid ""
"* Untracked file is %d bytes.\n"
"* Showing only first %d bytes.\n"
-msgstr ""
-"* Размер неподготовленного файла %d байт.\n"
-"* Показано первых %d байт.\n"
+msgstr "* Размер неотслеживаемого файла %d байт.\n* Показано первых %d байт.\n"
#: lib/diff.tcl:233
#, tcl-format
"\n"
"* Untracked file clipped here by %s.\n"
"* To see the entire file, use an external editor.\n"
-msgstr ""
-"\n"
-"* Неподготовленный файл обрезан: %s.\n"
-"* Чтобы увидеть весь файл, используйте программу-редактор.\n"
+msgstr "\n* Неотслеживаемый файл обрезан: %s.\n* Чтобы увидеть весь файл, используйте внешний редактор.\n"
#: lib/diff.tcl:482
msgid "Failed to unstage selected hunk."
#: lib/diff.tcl:489
msgid "Failed to stage selected hunk."
-msgstr "Не удалось подготовить к сохранению выбранную часть."
+msgstr "Не удалось проиндексировать выбранный блок изменений."
#: lib/diff.tcl:568
msgid "Failed to unstage selected line."
#: lib/diff.tcl:576
msgid "Failed to stage selected line."
-msgstr "Не удалось подготовить к сохранению выбранную строку."
+msgstr "Не удалось проиндексировать выбранную строку."
#: lib/encoding.tcl:443
msgid "Default"
#: lib/error.tcl:94
msgid "You must correct the above errors before committing."
-msgstr "Прежде чем сохранить, исправьте вышеуказанные ошибки."
+msgstr "Перед коммитом, исправьте вышеуказанные ошибки."
#: lib/index.tcl:6
msgid "Unable to unlock the index."
msgid ""
"Updating the Git index failed. A rescan will be automatically started to "
"resynchronize git-gui."
-msgstr ""
-"Не удалось обновить индекс Git. Состояние репозитория будет перечитано "
-"автоматически."
+msgstr "Не удалось обновить индекс Git. Состояние репозитория будет перечитано автоматически."
#: lib/index.tcl:28
msgid "Continue"
#: lib/index.tcl:289
#, tcl-format
msgid "Unstaging %s from commit"
-msgstr "Удаление %s из подготовленного"
+msgstr "Удаление %s из индекса"
#: lib/index.tcl:328
msgid "Ready to commit."
-msgstr "Подготовлено для сохранения"
+msgstr "Готов для коммита."
#: lib/index.tcl:341
#, tcl-format
msgid "Adding %s"
-msgstr "Добавление %s..."
+msgstr "Добавление %s…"
#: lib/index.tcl:398
#, tcl-format
msgid "Revert changes in file %s?"
-msgstr "Отменить изменения в файле %s?"
+msgstr "Обратить изменения в файле %s?"
#: lib/index.tcl:400
#, tcl-format
msgid "Revert changes in these %i files?"
-msgstr "Отменить изменения в %i файле(-ах)?"
+msgstr "Обратить изменения в %i файле(-ах)?"
#: lib/index.tcl:408
msgid "Any unstaged changes will be permanently lost by the revert."
-msgstr ""
-"Любые изменения, не подготовленные к сохранению, будут потеряны при данной "
-"операции."
+msgstr "Любые непроиндексированные изменения, будут потеряны при обращении изменений."
#: lib/index.tcl:411
msgid "Do Nothing"
#: lib/index.tcl:429
msgid "Reverting selected files"
-msgstr "Удаление изменений в выбранных файлах"
+msgstr "Обращение изменений в выбранных файлах"
#: lib/index.tcl:433
#, tcl-format
msgid "Reverting %s"
-msgstr "Отмена изменений в %s"
+msgstr "Обращение изменений в %s"
#: lib/merge.tcl:13
msgid ""
"Cannot merge while amending.\n"
"\n"
"You must finish amending this commit before starting any type of merge.\n"
-msgstr ""
-"Невозможно выполнить слияние во время исправления.\n"
-"\n"
-"Завершите исправление данного состояния перед выполнением операции слияния.\n"
+msgstr "Невозможно выполнить слияние во время исправления.\n\nЗавершите исправление данного коммита перед выполнением операции слияния.\n"
#: lib/merge.tcl:27
msgid ""
"Last scanned state does not match repository state.\n"
"\n"
-"Another Git program has modified this repository since the last scan. A "
-"rescan must be performed before a merge can be performed.\n"
+"Another Git program has modified this repository since the last scan. A rescan must be performed before a merge can be performed.\n"
"\n"
"The rescan will be automatically started now.\n"
-msgstr ""
-"Последнее прочитанное состояние репозитория не соответствует текущему.\n"
-"\n"
-"С момента последней проверки репозиторий был изменен другой программой Git. "
-"Необходимо перечитать репозиторий, прежде чем изменять текущую ветвь.\n"
-"\n"
-"Это будет сделано сейчас автоматически.\n"
+msgstr "Последнее прочитанное состояние репозитория не соответствует текущему.\n\nС момента последней проверки репозиторий был изменен другой программой Git. Необходимо перечитать репозиторий, прежде чем слияние может быть сделано.\n\nЭто будет сделано сейчас автоматически.\n"
#: lib/merge.tcl:45
#, tcl-format
"\n"
"File %s has merge conflicts.\n"
"\n"
-"You must resolve them, stage the file, and commit to complete the current "
-"merge. Only then can you begin another merge.\n"
-msgstr ""
-"Предыдущее слияние не завершено из-за конфликта.\n"
-"\n"
-"Для файла %s возник конфликт слияния.\n"
-"\n"
-"Разрешите конфликт, подготовьте файл и сохраните. Только после этого можно "
-"начать следующее слияние.\n"
+"You must resolve them, stage the file, and commit to complete the current merge. Only then can you begin another merge.\n"
+msgstr "Предыдущее слияние не завершено из-за конфликта.\n\nДля файла %s возник конфликт слияния.\n\nРазрешите конфликт, добавьте файл в индекс и закоммитьте. Только после этого можно начать следующее слияние.\n"
#: lib/merge.tcl:55
#, tcl-format
"\n"
"File %s is modified.\n"
"\n"
-"You should complete the current commit before starting a merge. Doing so "
-"will help you abort a failed merge, should the need arise.\n"
-msgstr ""
-"Изменения не сохранены.\n"
-"\n"
-"Файл %s изменен.\n"
-"\n"
-"Подготовьте и сохраните изменения перед началом слияния. В случае "
-"необходимости это позволит прервать операцию слияния.\n"
+"You should complete the current commit before starting a merge. Doing so will help you abort a failed merge, should the need arise.\n"
+msgstr "Вы находитесь в процессе изменений.\n\nФайл %s изменён.\n\nВы должны завершить текущий коммит перед началом слияния. В случае необходимости, это позволит прервать операцию слияния.\n"
#: lib/merge.tcl:107
#, tcl-format
#: lib/merge.tcl:120
#, tcl-format
msgid "Merging %s and %s..."
-msgstr "Слияние %s и %s..."
+msgstr "Слияние %s и %s…"
#: lib/merge.tcl:131
msgid "Merge completed successfully."
"Cannot abort while amending.\n"
"\n"
"You must finish amending this commit.\n"
-msgstr ""
-"Невозможно прервать исправление.\n"
-"\n"
-"Завершите текущее исправление сохраненного состояния.\n"
+msgstr "Невозможно прервать исправление.\n\nЗавершите текущее исправление коммита.\n"
#: lib/merge.tcl:222
msgid ""
"Aborting the current merge will cause *ALL* uncommitted changes to be lost.\n"
"\n"
"Continue with aborting the current merge?"
-msgstr ""
-"Прервать операцию слияния?\n"
-"\n"
-"Прерывание этой операции приведет к потере *ВСЕХ* несохраненных изменений.\n"
-"\n"
-"Продолжить?"
+msgstr "Прервать операцию слияния?\n\nПрерывание текущего слияния приведет к потере *ВСЕХ* несохраненных изменений.\n\nПродолжить?"
#: lib/merge.tcl:228
msgid ""
"Resetting the changes will cause *ALL* uncommitted changes to be lost.\n"
"\n"
"Continue with resetting the current changes?"
-msgstr ""
-"Прервать операцию слияния?\n"
-"\n"
-"Прерывание этой операции приведет к потере *ВСЕХ* несохраненных изменений.\n"
-"\n"
-"Продолжить?"
+msgstr "Сбросить изменения?\n\nСброс изменений приведет к потере *ВСЕХ* несохраненных изменений.\n\nПродолжить?"
#: lib/merge.tcl:239
msgid "Aborting"
#: lib/mergetool.tcl:9
msgid "Force resolution to this branch?"
-msgstr "Использовать версию этой ветви для разрешения конфликта?"
+msgstr "Использовать версию из этой ветки для разрешения конфликта?"
#: lib/mergetool.tcl:10
msgid "Force resolution to the other branch?"
-msgstr "Использовать версию другой ветви для разрешения конфликта?"
+msgstr "Использовать версию из другой ветки для разрешения конфликта?"
#: lib/mergetool.tcl:14
#, tcl-format
"%s will be overwritten.\n"
"\n"
"This operation can be undone only by restarting the merge."
-msgstr ""
-"Внимание! Список изменений показывает только конфликтующие отличия.\n"
-"\n"
-"%s будет переписан.\n"
-"\n"
-"Это действие можно отменить только перезапуском операции слияния."
+msgstr "Внимание! Список изменений показывает только конфликтующие отличия.\n\n%s будет переписан.\n\nЭто действие можно отменить только перезапуском операции слияния."
#: lib/mergetool.tcl:45
#, tcl-format
msgid "File %s seems to have unresolved conflicts, still stage?"
-msgstr ""
-"Файл %s, похоже, содержит необработанные конфликты. Продолжить подготовку к "
-"сохранению?"
+msgstr "Похоже, что файл %s содержит неразрешенные конфликты. Продолжить индексацию?"
#: lib/mergetool.tcl:60
#, tcl-format
#: lib/mergetool.tcl:141
msgid "Cannot resolve deletion or link conflicts using a tool"
-msgstr ""
-"Программа слияния не обрабатывает конфликты с удалением или участием ссылок"
+msgstr "Программа слияния не обрабатывает конфликты с удалением или участием ссылок"
#: lib/mergetool.tcl:146
msgid "Conflict file does not exist"
#: lib/mergetool.tcl:264
#, tcl-format
msgid "Not a GUI merge tool: '%s'"
-msgstr "'%s' не является программой слияния"
+msgstr "«%s» не является программой слияния"
#: lib/mergetool.tcl:268
#, tcl-format
msgid "Unsupported merge tool '%s'"
-msgstr "Неизвестная программа слияния '%s'"
+msgstr "Неподдерживаемая программа слияния «%s»"
#: lib/mergetool.tcl:303
msgid "Merge tool is already running, terminate it?"
msgid ""
"Error retrieving versions:\n"
"%s"
-msgstr ""
-"Ошибка получения версий:\n"
-"%s"
+msgstr "Ошибка получения версий:\n%s"
#: lib/mergetool.tcl:343
#, tcl-format
"Could not start the merge tool:\n"
"\n"
"%s"
-msgstr ""
-"Ошибка запуска программы слияния:\n"
-"\n"
-"%s"
+msgstr "Ошибка запуска программы слияния:\n\n%s"
#: lib/mergetool.tcl:347
msgid "Running merge tool..."
-msgstr "Запуск программы слияния..."
+msgstr "Запуск программы слияния…"
#: lib/mergetool.tcl:375 lib/mergetool.tcl:383
msgid "Merge tool failed."
#: lib/option.tcl:11
#, tcl-format
msgid "Invalid global encoding '%s'"
-msgstr "Ошибка в глобальной установке кодировки '%s'"
+msgstr "Неверная глобальная кодировка «%s»"
#: lib/option.tcl:19
#, tcl-format
msgid "Invalid repo encoding '%s'"
-msgstr "Неверная кодировка репозитория: '%s'"
+msgstr "Неверная кодировка репозитория «%s»"
#: lib/option.tcl:117
msgid "Restore Defaults"
#: lib/option.tcl:141
msgid "Summarize Merge Commits"
-msgstr "Суммарный комментарий при слиянии"
+msgstr "Суммарное сообщение при слиянии"
#: lib/option.tcl:142
msgid "Merge Verbosity"
#: lib/option.tcl:147
msgid "Prune Tracking Branches During Fetch"
-msgstr "Чистка ветвей слежения при получении изменений"
+msgstr "Чистка отслеживаемых веток при извлечении изменений"
#: lib/option.tcl:148
msgid "Match Tracking Branches"
-msgstr "Имя новой ветви взять из имен ветвей слежения"
+msgstr "Такое же имя, как и у отслеживаемой ветки"
#: lib/option.tcl:149
msgid "Blame Copy Only On Changed Files"
#: lib/option.tcl:153
msgid "Commit Message Text Width"
-msgstr "Ширина текста комментария"
+msgstr "Ширина текста сообщения коммита"
#: lib/option.tcl:154
msgid "New Branch Name Template"
-msgstr "Шаблон для имени новой ветви"
+msgstr "Шаблон для имени новой ветки"
#: lib/option.tcl:155
msgid "Default File Contents Encoding"
msgid "Choose %s"
msgstr "Выберите %s"
-# carbon copy
#: lib/option.tcl:264
msgid "pt."
msgstr "pt."
#: lib/remote.tcl:173
msgid "Fetch from"
-msgstr "Получение из"
+msgstr "Извлечение из"
#: lib/remote.tcl:215
msgid "Push to"
#: lib/remote_add.tcl:28 lib/tools_dlg.tcl:36
msgid "Add"
-msgstr ""
+msgstr "Добавить"
#: lib/remote_add.tcl:37
msgid "Remote Details"
#: lib/remote_add.tcl:65
msgid "Fetch Immediately"
-msgstr "Скачать сразу"
+msgstr "Сразу извлечь изменения"
#: lib/remote_add.tcl:71
msgid "Initialize Remote Repository and Push"
#: lib/remote_add.tcl:114
#, tcl-format
msgid "'%s' is not an acceptable remote name."
-msgstr "Недопустимое название внешнего репозитория '%s'."
+msgstr "«%s» не является допустимым именем внешнего репозитория."
#: lib/remote_add.tcl:125
#, tcl-format
msgid "Failed to add remote '%s' of location '%s'."
-msgstr "Не удалось добавить '%s' из '%s'. "
+msgstr "Не удалось добавить «%s» из «%s». "
#: lib/remote_add.tcl:133 lib/transport.tcl:6
#, tcl-format
msgid "fetch %s"
-msgstr "получение %s"
+msgstr "извлечение %s"
#: lib/remote_add.tcl:134
#, tcl-format
msgid "Fetching the %s"
-msgstr "Получение %s"
+msgstr "Извлечение %s"
#: lib/remote_add.tcl:157
#, tcl-format
msgid "Do not know how to initialize repository at location '%s'."
-msgstr "Невозможно инициализировать репозиторий в '%s'."
+msgstr "Невозможно инициализировать репозиторий в «%s»."
#: lib/remote_add.tcl:163 lib/transport.tcl:25 lib/transport.tcl:63
#: lib/transport.tcl:81
#: lib/remote_branch_delete.tcl:29 lib/remote_branch_delete.tcl:34
msgid "Delete Branch Remotely"
-msgstr "Удаление ветви во внешнем репозитории"
+msgstr "Удаление ветки во внешнем репозитории"
#: lib/remote_branch_delete.tcl:47
msgid "From Repository"
#: lib/remote_branch_delete.tcl:84
msgid "Branches"
-msgstr "Ветви"
+msgstr "Ветки"
#: lib/remote_branch_delete.tcl:109
msgid "Delete Only If"
#: lib/remote_branch_delete.tcl:152
msgid "A branch is required for 'Merged Into'."
-msgstr "Для опции 'Слияние с' требуется указать ветвь."
+msgstr "Для операции «Слияние с» требуется указать ветку."
#: lib/remote_branch_delete.tcl:184
#, tcl-format
"The following branches are not completely merged into %s:\n"
"\n"
" - %s"
-msgstr ""
-"Следующие ветви могут быть объединены с %s при помощи операции слияния:\n"
-"\n"
-" - %s"
+msgstr "Следующие ветки могут быть объединены с %s при помощи операции слияния:\n\n - %s"
#: lib/remote_branch_delete.tcl:189
#, tcl-format
msgid ""
"One or more of the merge tests failed because you have not fetched the "
"necessary commits. Try fetching from %s first."
-msgstr ""
-"Некоторые тесты на слияние не прошли, потому что Вы не получили необходимые "
-"состояния. Попытайтесь получить их из %s."
+msgstr "Некоторые тесты на слияние не прошли, потому что вы не извлекли необходимые коммиты. Попытайтесь извлечь их из %s."
#: lib/remote_branch_delete.tcl:207
msgid "Please select one or more branches to delete."
-msgstr "Укажите одну или несколько ветвей для удаления."
+msgstr "Укажите одну или несколько веток для удаления."
#: lib/remote_branch_delete.tcl:226
#, tcl-format
msgid "Deleting branches from %s"
-msgstr "Удаление ветвей из %s"
+msgstr "Удаление веток из %s"
#: lib/remote_branch_delete.tcl:292
msgid "No repository selected."
#: lib/remote_branch_delete.tcl:297
#, tcl-format
msgid "Scanning %s..."
-msgstr "Перечитывание %s... "
+msgstr "Перечитывание %s…"
#: lib/search.tcl:21
msgid "Find:"
#: lib/sshkey.tcl:78
msgid "Generating..."
-msgstr "Создание..."
+msgstr "Создание…"
#: lib/sshkey.tcl:84
#, tcl-format
"Could not start ssh-keygen:\n"
"\n"
"%s"
-msgstr ""
-"Ошибка запуска ssh-keygen:\n"
-"\n"
-"%s"
+msgstr "Ошибка запуска ssh-keygen:\n\n%s"
#: lib/sshkey.tcl:111
msgid "Generation failed."
#: lib/status_bar.tcl:83
#, tcl-format
msgid "%s ... %*i of %*i %s (%3i%%)"
-msgstr "%s ... %*i из %*i %s (%3i%%)"
+msgstr "%s … %*i из %*i %s (%3i%%)"
#: lib/tools.tcl:75
#, tcl-format
#: lib/tools_dlg.tcl:48
msgid "Use '/' separators to create a submenu tree:"
-msgstr "Используйте '/' для создания подменю"
+msgstr "Используйте «/» для создания подменю"
#: lib/tools_dlg.tcl:61
msgid "Command:"
#: lib/tools_dlg.tcl:129
#, tcl-format
msgid "Tool '%s' already exists."
-msgstr "Вспомогательная операция '%s' уже существует."
+msgstr "Вспомогательная операция «%s» уже существует."
#: lib/tools_dlg.tcl:151
#, tcl-format
msgid ""
"Could not add tool:\n"
"%s"
-msgstr ""
-"Ошибка добавления программы:\n"
-"%s"
+msgstr "Ошибка добавления программы:\n%s"
#: lib/tools_dlg.tcl:190
msgid "Remove Tool"
#: lib/transport.tcl:7
#, tcl-format
msgid "Fetching new changes from %s"
-msgstr "Получение изменений из %s "
+msgstr "Извлечение изменений из %s "
-# carbon copy
#: lib/transport.tcl:18
#, tcl-format
msgid "remote prune %s"
#: lib/transport.tcl:19
#, tcl-format
msgid "Pruning tracking branches deleted from %s"
-msgstr "Чистка ветвей слежения, удаленных из %s"
+msgstr "Чистка отслеживаемых веток, удалённых из %s"
#: lib/transport.tcl:26
#, tcl-format
#: lib/transport.tcl:100
msgid "Push Branches"
-msgstr "Отправить изменения в ветвях"
+msgstr "Отправить ветки"
#: lib/transport.tcl:114
msgid "Source Branches"
-msgstr "Исходные ветви"
+msgstr "Исходные ветки"
#: lib/transport.tcl:131
msgid "Destination Repository"
#: lib/transport.tcl:171
msgid "Force overwrite existing branch (may discard changes)"
-msgstr "Намеренно переписать существующую ветвь (возможна потеря изменений)"
+msgstr "Принудительно перезаписать существующую ветку (возможна потеря изменений)"
#: lib/transport.tcl:175
msgid "Use thin pack (for slow network connections)"
# This program resolves merge conflicts in git
#
# Copyright (c) 2006 Theodore Y. Ts'o
+# Copyright (c) 2009-2016 David Aguilar
#
# This file is licensed under the GPL v2, or a later version
# at the discretion of Junio C Hamano.
#
-USAGE='[--tool=tool] [--tool-help] [-y|--no-prompt|--prompt] [file to merge] ...'
+USAGE='[--tool=tool] [--tool-help] [-y|--no-prompt|--prompt] [-O<orderfile>] [file to merge] ...'
SUBDIRECTORY_OK=Yes
NONGIT_OK=Yes
OPTIONS_SPEC=
return 0
}
-prompt=$(git config --bool mergetool.prompt)
-guessed_merge_tool=false
-
-while test $# != 0
-do
- case "$1" in
- --tool-help=*)
- TOOL_MODE=${1#--tool-help=}
- show_tool_help
- ;;
- --tool-help)
- show_tool_help
- ;;
- -t|--tool*)
- case "$#,$1" in
- *,*=*)
- merge_tool=$(expr "z$1" : 'z-[^=]*=\(.*\)')
- ;;
- 1,*)
- usage ;;
- *)
- merge_tool="$2"
- shift ;;
- esac
- ;;
- -y|--no-prompt)
- prompt=false
- ;;
- --prompt)
- prompt=true
- ;;
- --)
- shift
- break
- ;;
- -*)
- usage
- ;;
- *)
- break
- ;;
- esac
- shift
-done
-
prompt_after_failed_merge () {
while true
do
done
}
-git_dir_init
-require_work_tree
+print_noop_and_exit () {
+ echo "No files need merging"
+ exit 0
+}
+
+main () {
+ prompt=$(git config --bool mergetool.prompt)
+ guessed_merge_tool=false
+ orderfile=
+
+ while test $# != 0
+ do
+ case "$1" in
+ --tool-help=*)
+ TOOL_MODE=${1#--tool-help=}
+ show_tool_help
+ ;;
+ --tool-help)
+ show_tool_help
+ ;;
+ -t|--tool*)
+ case "$#,$1" in
+ *,*=*)
+ merge_tool=$(expr "z$1" : 'z-[^=]*=\(.*\)')
+ ;;
+ 1,*)
+ usage ;;
+ *)
+ merge_tool="$2"
+ shift ;;
+ esac
+ ;;
+ -y|--no-prompt)
+ prompt=false
+ ;;
+ --prompt)
+ prompt=true
+ ;;
+ -O*)
+ orderfile="$1"
+ ;;
+ --)
+ shift
+ break
+ ;;
+ -*)
+ usage
+ ;;
+ *)
+ break
+ ;;
+ esac
+ shift
+ done
+
+ git_dir_init
+ require_work_tree
-if test -z "$merge_tool"
-then
- # Check if a merge tool has been configured
- merge_tool=$(get_configured_merge_tool)
- # Try to guess an appropriate merge tool if no tool has been set.
if test -z "$merge_tool"
then
- merge_tool=$(guess_merge_tool) || exit
- guessed_merge_tool=true
+ # Check if a merge tool has been configured
+ merge_tool=$(get_configured_merge_tool)
+ # Try to guess an appropriate merge tool if no tool has been set.
+ if test -z "$merge_tool"
+ then
+ merge_tool=$(guess_merge_tool) || exit
+ guessed_merge_tool=true
+ fi
+ fi
+ merge_keep_backup="$(git config --bool mergetool.keepBackup || echo true)"
+ merge_keep_temporaries="$(git config --bool mergetool.keepTemporaries || echo false)"
+
+ if test $# -eq 0 && test -e "$GIT_DIR/MERGE_RR"
+ then
+ set -- $(git rerere remaining)
+ if test $# -eq 0
+ then
+ print_noop_and_exit
+ fi
fi
-fi
-merge_keep_backup="$(git config --bool mergetool.keepBackup || echo true)"
-merge_keep_temporaries="$(git config --bool mergetool.keepTemporaries || echo false)"
-files=
+ files=$(git -c core.quotePath=false \
+ diff --name-only --diff-filter=U \
+ ${orderfile:+"$orderfile"} -- "$@")
-if test $# -eq 0
-then
cd_to_toplevel
- if test -e "$GIT_DIR/MERGE_RR"
+ if test -z "$files"
then
- files=$(git rerere remaining)
- else
- files=$(git ls-files -u | sed -e 's/^[^ ]* //' | sort -u)
+ print_noop_and_exit
fi
-else
- files=$(git ls-files -u -- "$@" | sed -e 's/^[^ ]* //' | sort -u)
-fi
-if test -z "$files"
-then
- echo "No files need merging"
- exit 0
-fi
+ printf "Merging:\n"
+ printf "%s\n" "$files"
-printf "Merging:\n"
-printf "%s\n" "$files"
+ rc=0
+ for i in $files
+ do
+ printf "\n"
+ if ! merge_file "$i"
+ then
+ rc=1
+ prompt_after_failed_merge || exit 1
+ fi
+ done
-rc=0
-for i in $files
-do
- printf "\n"
- if ! merge_file "$i"
- then
- rc=1
- prompt_after_failed_merge || exit 1
- fi
-done
+ exit $rc
+}
-exit $rc
+main "$@"
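The rewritten git-mergetool above also gains a "-O<orderfile>" pass-through:
whatever follows -O is handed to the underlying "git diff --name-only
--diff-filter=U" call, so conflicted paths are offered in the order described
by that file. A minimal usage sketch (the order-file name and the chosen tool
are placeholders, not part of the patch):

    $ printf '%s\n' 'Makefile' '*.h' '*.c' >my.order
    $ git mergetool --tool=vimdiff -Omy.order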
u_tree=$(git write-tree) &&
printf 'untracked files on %s\n' "$msg" | git commit-tree $u_tree &&
rm -f "$TMPindex"
- ) ) || die "Cannot save the untracked files"
+ ) ) || die "$(gettext "Cannot save the untracked files")"
untracked_commit_option="-p $u_commit";
else
if test -n "$patch_mode" && test -n "$untracked"
then
- die "Can't use --patch and --include-untracked or --all at the same time"
+ die "$(gettext "Can't use --patch and --include-untracked or --all at the same time")"
fi
stash_msg="$*"
i_tree=
u_tree=
- REV=$(git rev-parse --no-flags --symbolic --sq "$@") || exit 1
-
FLAGS=
+ REV=
for opt
do
case "$opt" in
die "$(eval_gettext "unknown option: \$opt")"
FLAGS="${FLAGS}${FLAGS:+ }$opt"
;;
+ *)
+ REV="${REV}${REV:+ }'$opt'"
+ ;;
esac
done
;;
esac
+ case "$1" in
+ *[!0-9]*)
+ :
+ ;;
+ *)
+ set -- "${ref_stash}@{$1}"
+ ;;
+ esac
+
REV=$(git rev-parse --symbolic --verify --quiet "$1") || {
reference="$1"
die "$(eval_gettext "\$reference is not a valid reference")"
GIT_INDEX_FILE="$TMPindex" git-read-tree "$u_tree" &&
GIT_INDEX_FILE="$TMPindex" git checkout-index --all &&
rm -f "$TMPindex" ||
- die 'Could not restore untracked files from stash'
+ die "$(gettext "Could not restore untracked files from stash")"
fi
eval "
prefix=
custom_name=
depth=
+progress=
die_if_unmatched ()
{
-q|--quiet)
GIT_QUIET=1
;;
+ --progress)
+ progress="--progress"
+ ;;
-i|--init)
init=1
;;
{
git submodule--helper update-clone ${GIT_QUIET:+--quiet} \
+ ${progress:+"$progress"} \
${wt_prefix:+--prefix "$wt_prefix"} \
${prefix:+--recursive-prefix "$prefix"} \
${update:+--update "$update"} \
- ${reference:+--reference "$reference"} \
+ ${reference:+"$reference"} \
${depth:+--depth "$depth"} \
${recommend_shallow:+"$recommend_shallow"} \
${jobs:+$jobs} \
command_close_pipe
command_bidi_pipe
command_close_bidi_pipe
+ get_record
);
BEGIN {
"files will not be compressed.\n";
}
File::Find::find({ wanted => \&gc_directory, no_chdir => 1},
- "$ENV{GIT_DIR}/svn");
+ Git::SVN::svn_dir());
}
########################### utility functions #########################
return unless verify_ref('HEAD^0');
return if $ENV{GIT_DIR} !~ m#^(?:.*/)?\.git$#;
- my $index = $ENV{GIT_INDEX_FILE} || "$ENV{GIT_DIR}/index";
+ my $index = command_oneline(qw(rev-parse --git-path index));
return if -f $index;
return if command_oneline(qw/rev-parse --is-inside-work-tree/) eq 'false';
sub get_commit_entry {
my ($treeish) = shift;
my %log_entry = ( log => '', tree => get_tree_from_treeish($treeish) );
- my $commit_editmsg = "$ENV{GIT_DIR}/COMMIT_EDITMSG";
- my $commit_msg = "$ENV{GIT_DIR}/COMMIT_MSG";
+ my @git_path = qw(rev-parse --git-path);
+ my $commit_editmsg = command_oneline(@git_path, 'COMMIT_EDITMSG');
+ my $commit_msg = command_oneline(@git_path, 'COMMIT_MSG');
open my $log_fh, '>', $commit_editmsg or croak $!;
my $type = command_oneline(qw/cat-file -t/, $treeish);
{
require Encode;
# SVN requires messages to be UTF-8 when entering the repo
- local $/;
open $log_fh, '<', $commit_msg or croak $!;
binmode $log_fh;
- chomp($log_entry{log} = <$log_fh>);
+ chomp($log_entry{log} = get_record($log_fh, undef));
my $enc = Git::config('i18n.commitencoding') || 'UTF-8';
my $msg = $log_entry{log};
setenv(GIT_WORK_TREE_ENVIRONMENT, cmd, 1);
if (envchanged)
*envchanged = 1;
+ } else if (!strcmp(cmd, "--super-prefix")) {
+ if (*argc < 2) {
+ fprintf(stderr, "No prefix given for --super-prefix.\n" );
+ usage(git_usage_string);
+ }
+ setenv(GIT_SUPER_PREFIX_ENVIRONMENT, (*argv)[1], 1);
+ if (envchanged)
+ *envchanged = 1;
+ (*argv)++;
+ (*argc)--;
+ } else if (skip_prefix(cmd, "--super-prefix=", &cmd)) {
+ setenv(GIT_SUPER_PREFIX_ENVIRONMENT, cmd, 1);
+ if (envchanged)
+ *envchanged = 1;
} else if (!strcmp(cmd, "--bare")) {
char *cwd = xgetcwd();
is_bare_repository_cfg = 1;
* RUN_SETUP for reading from the configuration file.
*/
#define NEED_WORK_TREE (1<<3)
+#define SUPPORT_SUPER_PREFIX (1<<4)
struct cmd_struct {
const char *cmd;
}
commit_pager_choice();
+ if (!help && get_super_prefix()) {
+ if (!(p->option & SUPPORT_SUPER_PREFIX))
+ die("%s doesn't support --super-prefix", p->cmd);
+ if (prefix)
+ die("can't use --super-prefix from a subdirectory");
+ }
+
if (!help && p->option & NEED_WORK_TREE)
setup_work_tree();
{ "init-db", cmd_init_db },
{ "interpret-trailers", cmd_interpret_trailers, RUN_SETUP_GENTLY },
{ "log", cmd_log, RUN_SETUP },
- { "ls-files", cmd_ls_files, RUN_SETUP },
+ { "ls-files", cmd_ls_files, RUN_SETUP | SUPPORT_SUPER_PREFIX },
{ "ls-remote", cmd_ls_remote, RUN_SETUP_GENTLY },
{ "ls-tree", cmd_ls_tree, RUN_SETUP },
{ "mailinfo", cmd_mailinfo },
{ "pack-objects", cmd_pack_objects, RUN_SETUP },
{ "pack-redundant", cmd_pack_redundant, RUN_SETUP },
{ "pack-refs", cmd_pack_refs, RUN_SETUP },
- { "patch-id", cmd_patch_id },
+ { "patch-id", cmd_patch_id, RUN_SETUP_GENTLY },
{ "pickaxe", cmd_blame, RUN_SETUP },
{ "prune", cmd_prune, RUN_SETUP },
{ "prune-packed", cmd_prune_packed, RUN_SETUP },
static void handle_builtin(int argc, const char **argv)
{
+ struct argv_array args = ARGV_ARRAY_INIT;
const char *cmd;
struct cmd_struct *builtin;
strip_extension(argv);
cmd = argv[0];
- /* Turn "git cmd --help" into "git help cmd" */
+ /* Turn "git cmd --help" into "git help --exclude-guides cmd" */
if (argc > 1 && !strcmp(argv[1], "--help")) {
+ int i;
+
argv[1] = argv[0];
argv[0] = cmd = "help";
+
+ for (i = 0; i < argc; i++) {
+ argv_array_push(&args, argv[i]);
+ if (!i)
+ argv_array_push(&args, "--exclude-guides");
+ }
+
+ argc++;
+ argv = args.argv;
}
builtin = get_builtin(cmd);
if (builtin)
exit(run_builtin(builtin, argc, argv));
+ argv_array_clear(&args);
}
static void execv_dashed_external(const char **argv)
const char *tmp;
int status;
+ if (get_super_prefix())
+ die("%s doesn't support --super-prefix", argv[0]);
+
if (use_pager == -1)
use_pager = check_pager_config(argv[0]);
commit_pager_choice();
return $str;
}
-# Sanitize for use in XHTML + application/xml+xhtm (valid XML 1.0)
+# Sanitize for use in XHTML + application/xml+xhtml (valid XML 1.0)
sub sanitize {
my $str = shift;
my $line = shift;
$line = esc_html($line, -nbsp=>1);
- $line =~ s{\b([0-9a-fA-F]{8,40})\b}{
+ $line =~ s{
+ \b
+ (
+ # The output of "git describe", e.g. v2.10.0-297-gf6727b0
+ # or hadoop-20160921-113441-20-g094fb7d
+ (?<!-) # see strbuf_check_tag_ref(). Tags can't start with -
+ [A-Za-z0-9.-]+
+ (?!\.) # refs can't end with ".", see check_refname_format()
+ -g[0-9a-fA-F]{7,40}
+ |
+ # Just a normal looking Git SHA1
+ [0-9a-fA-F]{7,40}
+ )
+ \b
+ }{
$cgi->a({-href => href(action=>"object", hash=>$1),
-class => "text"}, $1);
- }eg;
+ }egx;
return $line;
}
# guess file syntax for syntax highlighting; return undef if no highlighting
# the name of syntax can (in the future) depend on syntax highlighter used
sub guess_file_syntax {
- my ($highlight, $mimetype, $file_name) = @_;
+ my ($highlight, $file_name) = @_;
return undef unless ($highlight && defined $file_name);
my $basename = basename($file_name, '.in');
return $highlight_basename{$basename}
# or return original FD if no highlighting
sub run_highlighter {
my ($fd, $highlight, $syntax) = @_;
- return $fd unless ($highlight && defined $syntax);
+ return $fd unless ($highlight);
close $fd;
+ my $syntax_arg = (defined $syntax) ? "--syntax $syntax" : "--force";
open $fd, quote_command(git_cmd(), "cat-file", "blob", $hash)." | ".
quote_command($^X, '-CO', '-MEncode=decode,FB_DEFAULT', '-pse',
'$_ = decode($fe, $_, FB_DEFAULT) if !utf8::decode($_);',
'--', "-fe=$fallback_encoding")." | ".
quote_command($highlight_bin).
- " --replace-tabs=8 --fragment --syntax $syntax |"
+ " --replace-tabs=8 --fragment $syntax_arg |"
or die_error(500, "Couldn't open file or run syntax highlighter");
return $fd;
}
$have_blame &&= ($mimetype =~ m!^text/!);
my $highlight = gitweb_check_feature('highlight');
- my $syntax = guess_file_syntax($highlight, $mimetype, $file_name);
- $fd = run_highlighter($fd, $highlight, $syntax)
- if $syntax;
+ my $syntax = guess_file_syntax($highlight, $file_name);
+ $fd = run_highlighter($fd, $highlight, $syntax);
git_header_html(undef, $expires);
my $formats_nav = '';
$line = untabify($line);
printf qq!<div class="pre"><a id="l%i" href="%s#l%i" class="linenr">%4i</a> %s</div>\n!,
$nr, esc_attr(href(-replay => 1)), $nr, $nr,
- $syntax ? sanitize($line) : esc_html($line, -nbsp=>1);
+ $highlight ? sanitize($line) : esc_html($line, -nbsp=>1);
}
}
close $fd
{ 'B', "\n[GNUPG:] BADSIG " },
{ 'U', "\n[GNUPG:] TRUST_NEVER" },
{ 'U', "\n[GNUPG:] TRUST_UNDEFINED" },
+ { 'E', "\n[GNUPG:] ERRSIG "},
+ { 'X', "\n[GNUPG:] EXPSIG "},
+ { 'Y', "\n[GNUPG:] EXPKEYSIG "},
+ { 'R', "\n[GNUPG:] REVKEYSIG "},
};
void parse_gpg_output(struct signature_check *sigc)
/* The trust messages are not followed by key/signer information */
if (sigc->result != 'U') {
sigc->key = xmemdupz(found, 16);
- found += 17;
- next = strchrnul(found, '\n');
- sigc->signer = xmemdupz(found, next - found);
+ /* The ERRSIG message is not followed by signer information */
+ if (sigc-> result != 'E') {
+ found += 17;
+ next = strchrnul(found, '\n');
+ sigc->signer = xmemdupz(found, next - found);
+ }
}
}
}
#include "commit.h"
#include "color.h"
#include "graph.h"
-#include "diff.h"
#include "revision.h"
/* Internal API */
* responsible for printing this line's graph (perhaps via
* graph_show_commit() or graph_show_oneline()) before calling
* graph_show_strbuf().
+ *
+ * Note that unlike some other graph display functions, you must pass the file
+ * handle directly. It is assumed that this is the same file handle as the
+ * file specified by the graph diff options. This is necessary so that
+ * graph_show_strbuf can be called even with a NULL graph.
*/
-static void graph_show_strbuf(struct git_graph *graph, struct strbuf const *sb);
+static void graph_show_strbuf(struct git_graph *graph,
+ FILE *file,
+ struct strbuf const *sb);
/*
* TODO:
GRAPH_COLLAPSING
};
+static void graph_show_line_prefix(const struct diff_options *diffopt)
+{
+ if (!diffopt || !diffopt->line_prefix)
+ return;
+
+ fwrite(diffopt->line_prefix,
+ sizeof(char),
+ diffopt->line_prefix_length,
+ diffopt->file);
+}
+
static const char **column_colors;
static unsigned short column_colors_max;
static struct strbuf msgbuf = STRBUF_INIT;
assert(opt);
- assert(graph);
- opt->output_prefix_length = graph->width;
strbuf_reset(&msgbuf);
- graph_padding_line(graph, &msgbuf);
+ if (opt->line_prefix)
+ strbuf_add(&msgbuf, opt->line_prefix,
+ opt->line_prefix_length);
+ if (graph)
+ graph_padding_line(graph, &msgbuf);
return &msgbuf;
}
+static const struct diff_options *default_diffopt;
+
+void graph_setup_line_prefix(struct diff_options *diffopt)
+{
+ default_diffopt = diffopt;
+
+ /* setup an output prefix callback if necessary */
+ if (diffopt && !diffopt->output_prefix)
+ diffopt->output_prefix = diff_output_prefix_callback;
+}
+
+
struct git_graph *graph_init(struct rev_info *opt)
{
struct git_graph *graph = xmalloc(sizeof(struct git_graph));
*/
opt->diffopt.output_prefix = diff_output_prefix_callback;
opt->diffopt.output_prefix_data = graph;
- opt->diffopt.output_prefix_length = 0;
return graph;
}
struct strbuf msgbuf = STRBUF_INIT;
int shown_commit_line = 0;
+ graph_show_line_prefix(default_diffopt);
+
if (!graph)
return;
shown_commit_line = graph_next_line(graph, &msgbuf);
fwrite(msgbuf.buf, sizeof(char), msgbuf.len,
graph->revs->diffopt.file);
- if (!shown_commit_line)
+ if (!shown_commit_line) {
putc('\n', graph->revs->diffopt.file);
+ graph_show_line_prefix(&graph->revs->diffopt);
+ }
strbuf_setlen(&msgbuf, 0);
}
{
struct strbuf msgbuf = STRBUF_INIT;
+ graph_show_line_prefix(default_diffopt);
+
if (!graph)
return;
{
struct strbuf msgbuf = STRBUF_INIT;
+ graph_show_line_prefix(default_diffopt);
+
if (!graph)
return;
struct strbuf msgbuf = STRBUF_INIT;
int shown = 0;
+ graph_show_line_prefix(default_diffopt);
+
if (!graph)
return 0;
strbuf_setlen(&msgbuf, 0);
shown = 1;
- if (!graph_is_commit_finished(graph))
+ if (!graph_is_commit_finished(graph)) {
putc('\n', graph->revs->diffopt.file);
- else
+ graph_show_line_prefix(&graph->revs->diffopt);
+ } else {
break;
+ }
}
strbuf_release(&msgbuf);
return shown;
}
-
-static void graph_show_strbuf(struct git_graph *graph, struct strbuf const *sb)
+static void graph_show_strbuf(struct git_graph *graph,
+ FILE *file,
+ struct strbuf const *sb)
{
char *p;
- if (!graph) {
- fwrite(sb->buf, sizeof(char), sb->len,
- graph->revs->diffopt.file);
- return;
- }
-
/*
* Print the strbuf line by line,
* and display the graph info before each line but the first.
} else {
len = (sb->buf + sb->len) - p;
}
- fwrite(p, sizeof(char), len, graph->revs->diffopt.file);
+ fwrite(p, sizeof(char), len, file);
if (next_p && *next_p != '\0')
graph_show_oneline(graph);
p = next_p;
}
void graph_show_commit_msg(struct git_graph *graph,
+ FILE *file,
struct strbuf const *sb)
{
int newline_terminated;
- if (!graph) {
- /*
- * If there's no graph, just print the message buffer.
- *
- * The message buffer for CMIT_FMT_ONELINE and
- * CMIT_FMT_USERFORMAT are already missing a terminating
- * newline. All of the other formats should have it.
- */
- fwrite(sb->buf, sizeof(char), sb->len,
- graph->revs->diffopt.file);
- return;
- }
-
- newline_terminated = (sb->len && sb->buf[sb->len - 1] == '\n');
-
/*
* Show the commit message
*/
- graph_show_strbuf(graph, sb);
+ graph_show_strbuf(graph, file, sb);
+
+ if (!graph)
+ return;
+
+ newline_terminated = (sb->len && sb->buf[sb->len - 1] == '\n');
/*
* If there is more output needed for this commit, show it now
* new line.
*/
if (!newline_terminated)
- putc('\n', graph->revs->diffopt.file);
+ putc('\n', file);
graph_show_remainder(graph);
* If sb ends with a newline, our output should too.
*/
if (newline_terminated)
- putc('\n', graph->revs->diffopt.file);
+ putc('\n', file);
}
}
#ifndef GRAPH_H
#define GRAPH_H
+#include "diff.h"
/* A graph is a pointer to this opaque structure */
struct git_graph;
+/*
+ * Called to setup global display of line_prefix diff option.
+ *
+ * Passed a diff_options structure which indicates the line_prefix and the
+ * file to output the prefix to. This is sort of a hack used so that the
+ * line_prefix will be honored by all flows which also honor "--graph"
+ * regardless of whether a graph has actually been setup. The normal graph
+ * flow will honor the exact diff_options passed, but a NULL graph will cause
+ * display of a line_prefix to stdout.
+ */
+void graph_setup_line_prefix(struct diff_options *diffopt);
+
/*
* Set up a custom scheme for column colors.
*
* missing a terminating newline (including if it is empty), the output
* printed by graph_show_commit_msg() will also be missing a terminating
* newline.
+ *
+ * Note that unlike some other graph display functions, you must pass the file
+ * handle directly. It is assumed that this is the same file handle as the
+ * file specified by the graph diff options. This is necessary so that
+ * graph_show_commit_msg can be called even with a NULL graph.
*/
-void graph_show_commit_msg(struct git_graph *graph, struct strbuf const *sb);
+void graph_show_commit_msg(struct git_graph *graph,
+ FILE *file,
+ struct strbuf const *sb);
#endif /* GRAPH_H */
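The graph.c and graph.h hunks above make the commit-graph code emit the diff
machinery's line_prefix before every line it prints, even when no graph has
been allocated. Assuming the "--line-prefix" diff option is what feeds this
code path, a combined invocation might look like the following (the prefix
string is arbitrary):

    $ git log --oneline --graph --line-prefix='| '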
if (exec_path) {
list_commands_in_dir(main_cmds, exec_path, prefix);
- qsort(main_cmds->names, main_cmds->cnt,
- sizeof(*main_cmds->names), cmdname_compare);
+ QSORT(main_cmds->names, main_cmds->cnt, cmdname_compare);
uniq(main_cmds);
}
}
free(paths);
- qsort(other_cmds->names, other_cmds->cnt,
- sizeof(*other_cmds->names), cmdname_compare);
+ QSORT(other_cmds->names, other_cmds->cnt, cmdname_compare);
uniq(other_cmds);
}
exclude_cmds(other_cmds, main_cmds);
longest = strlen(common_cmds[i].name);
}
- qsort(common_cmds, ARRAY_SIZE(common_cmds),
- sizeof(common_cmds[0]), cmd_group_cmp);
+ QSORT(common_cmds, ARRAY_SIZE(common_cmds), cmd_group_cmp);
puts(_("These are common Git commands used in various situations:"));
add_cmd_list(&main_cmds, &aliases);
add_cmd_list(&main_cmds, &other_cmds);
- qsort(main_cmds.names, main_cmds.cnt,
- sizeof(*main_cmds.names), cmdname_compare);
+ QSORT(main_cmds.names, main_cmds.cnt, cmdname_compare);
uniq(&main_cmds);
/* This abuses cmdname->len for levenshtein distance */
levenshtein(cmd, candidate, 0, 2, 1, 3) + 1;
}
- qsort(main_cmds.names, main_cmds.cnt,
- sizeof(*main_cmds.names), levenshtein_compare);
+ QSORT(main_cmds.names, main_cmds.cnt, levenshtein_compare);
if (!main_cmds.cnt)
die(_("Uh oh. Your system reports no Git commands at all."));
{
static int bufno;
static char hexbuffer[4][GIT_SHA1_HEXSZ + 1];
- return sha1_to_hex_r(hexbuffer[3 & ++bufno], sha1);
+ bufno = (bufno + 1) % ARRAY_SIZE(hexbuffer);
+ return sha1_to_hex_r(hexbuffer[bufno], sha1);
}
char *oid_to_hex(const struct object_id *oid)
hdr_str(hdr, content_type, buf.buf);
end_headers(hdr);
- packet_write(1, "# service=git-%s\n", svc->name);
+ packet_write_fmt(1, "# service=git-%s\n", svc->name);
packet_flush(1);
argv[0] = svc->name;
* here, too
*/
};
+#if LIBCURL_VERSION_NUM >= 0x071600
+static const char *curl_deleg;
+static struct {
+ const char *name;
+ long curl_deleg_param;
+} curl_deleg_levels[] = {
+ { "none", CURLGSSAPI_DELEGATION_NONE },
+ { "policy", CURLGSSAPI_DELEGATION_POLICY_FLAG },
+ { "always", CURLGSSAPI_DELEGATION_FLAG },
+};
+#endif
+
static struct credential proxy_auth = CREDENTIAL_INIT;
static const char *curl_proxyuserpwd;
static const char *curl_cookie_file;
return 0;
}
+ if (!strcmp("http.delegation", var)) {
+#if LIBCURL_VERSION_NUM >= 0x071600
+ return git_config_string(&curl_deleg, var, value);
+#else
+ warning(_("Delegation control is not supported with cURL < 7.22.0"));
+ return 0;
+#endif
+ }
+
if (!strcmp("http.pinnedpubkey", var)) {
#if LIBCURL_VERSION_NUM >= 0x072c00
return git_config_pathname(&ssl_pinnedkey, var, value);
curl_easy_setopt(result, CURLOPT_HTTPAUTH, CURLAUTH_ANY);
#endif
+#if LIBCURL_VERSION_NUM >= 0x071600
+ if (curl_deleg) {
+ int i;
+ for (i = 0; i < ARRAY_SIZE(curl_deleg_levels); i++) {
+ if (!strcmp(curl_deleg, curl_deleg_levels[i].name)) {
+ curl_easy_setopt(result, CURLOPT_GSSAPI_DELEGATION,
+ curl_deleg_levels[i].curl_deleg_param);
+ break;
+ }
+ }
+ if (i == ARRAY_SIZE(curl_deleg_levels))
+ warning("Unknown delegation method '%s': using default",
+ curl_deleg);
+ }
+#endif
+
if (http_proactive_auth)
init_curl_http_auth(result);
}
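The http.c hunks above introduce an "http.delegation" configuration variable
(recognized values: "none", "policy" and "always") that is mapped onto cURL's
GSSAPI delegation control when the library is 7.22.0 or newer. A hedged usage
sketch; the remote URL is purely illustrative:

    $ git config --global http.delegation always
    $ git fetch https://kerberized.example.com/project.git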
static const char *env_hint =
-"\n"
-"*** Please tell me who you are.\n"
-"\n"
-"Run\n"
-"\n"
-" git config --global user.email \"you@example.com\"\n"
-" git config --global user.name \"Your Name\"\n"
-"\n"
-"to set your account\'s default identity.\n"
-"Omit --global to set the identity only in this repository.\n"
-"\n";
+N_("\n"
+ "*** Please tell me who you are.\n"
+ "\n"
+ "Run\n"
+ "\n"
+ " git config --global user.email \"you@example.com\"\n"
+ " git config --global user.name \"Your Name\"\n"
+ "\n"
+ "to set your account\'s default identity.\n"
+ "Omit --global to set the identity only in this repository.\n"
+ "\n");
const char *fmt_ident(const char *name, const char *email,
const char *date_str, int flag)
if (!name) {
if (strict && ident_use_config_only
&& !(ident_config_given & IDENT_NAME_GIVEN)) {
- fputs(env_hint, stderr);
+ fputs(_(env_hint), stderr);
die("no name was given and auto-detection is disabled");
}
name = ident_default_name();
using_default = 1;
if (strict && default_name_is_bogus) {
- fputs(env_hint, stderr);
+ fputs(_(env_hint), stderr);
die("unable to auto-detect name (got '%s')", name);
}
}
struct passwd *pw;
if (strict) {
if (using_default)
- fputs(env_hint, stderr);
+ fputs(_(env_hint), stderr);
die("empty ident name (for <%s>) not allowed", email);
}
pw = xgetpwuid_self(NULL);
if (!email) {
if (strict && ident_use_config_only
&& !(ident_config_given & IDENT_MAIL_GIVEN)) {
- fputs(env_hint, stderr);
+ fputs(_(env_hint), stderr);
die("no email was given and auto-detection is disabled");
}
email = ident_default_email();
if (strict && default_email_is_bogus) {
- fputs(env_hint, stderr);
+ fputs(_(env_hint), stderr);
die("unable to auto-detect email address (got '%s')", email);
}
}
int i;
int o = 0; /* output cursor */
- qsort(rs->ranges, rs->nr, sizeof(struct range), range_cmp);
+ QSORT(rs->ranges, rs->nr, range_cmp);
for (i = 0; i < rs->nr; i++) {
if (rs->ranges[i].start == rs->ranges[i].end)
else
opt->missing_newline = 0;
- if (opt->graph)
- graph_show_commit_msg(opt->graph, &msgbuf);
- else
- fwrite(msgbuf.buf, sizeof(char), msgbuf.len, opt->diffopt.file);
+ graph_show_commit_msg(opt->graph, opt->diffopt.file, &msgbuf);
if (opt->use_terminator && !commit_format_is_empty(opt->commit_format)) {
if (!opt->missing_newline)
graph_show_padding(opt->graph);
goto check_header_out;
}
- /* for inbody stuff */
- if (starts_with(line->buf, ">From") && isspace(line->buf[5])) {
- ret = is_format_patch_separator(line->buf + 1, line->len - 1);
- goto check_header_out;
- }
- if (starts_with(line->buf, "[PATCH]") && isspace(line->buf[7])) {
- for (i = 0; header[i]; i++) {
- if (!strcmp("Subject", header[i])) {
- handle_header(&hdr_data[i], line);
- ret = 1;
- goto check_header_out;
- }
- }
- }
-
check_header_out:
strbuf_release(&sb);
return ret;
}
+/*
+ * Returns 1 if the given line or any line beginning with the given line is an
+ * in-body header (that is, check_header will succeed when passed
+ * mi->s_hdr_data).
+ */
+static int is_inbody_header(const struct mailinfo *mi,
+ const struct strbuf *line)
+{
+ int i;
+ for (i = 0; header[i]; i++)
+ if (!mi->s_hdr_data[i] && cmp_header(line, header[i]))
+ return 1;
+ return 0;
+}
+
static void decode_transfer_encoding(struct mailinfo *mi, struct strbuf *line)
{
struct strbuf *ret;
return 0;
}
-static int is_scissors_line(const struct strbuf *line)
+static int is_scissors_line(const char *line)
{
- size_t i, len = line->len;
+ const char *c;
int scissors = 0, gap = 0;
- int first_nonblank = -1;
- int last_nonblank = 0, visible, perforation = 0, in_perforation = 0;
- const char *buf = line->buf;
+ const char *first_nonblank = NULL, *last_nonblank = NULL;
+ int visible, perforation = 0, in_perforation = 0;
- for (i = 0; i < len; i++) {
- if (isspace(buf[i])) {
+ for (c = line; *c; c++) {
+ if (isspace(*c)) {
if (in_perforation) {
perforation++;
gap++;
}
continue;
}
- last_nonblank = i;
- if (first_nonblank < 0)
- first_nonblank = i;
- if (buf[i] == '-') {
+ last_nonblank = c;
+ if (first_nonblank == NULL)
+ first_nonblank = c;
+ if (*c == '-') {
in_perforation = 1;
perforation++;
continue;
}
- if (i + 1 < len &&
- (!memcmp(buf + i, ">8", 2) || !memcmp(buf + i, "8<", 2) ||
- !memcmp(buf + i, ">%", 2) || !memcmp(buf + i, "%<", 2))) {
+ if ((!memcmp(c, ">8", 2) || !memcmp(c, "8<", 2) ||
+ !memcmp(c, ">%", 2) || !memcmp(c, "%<", 2))) {
in_perforation = 1;
perforation += 2;
scissors += 2;
- i++;
+ c++;
continue;
}
in_perforation = 0;
* than half of the perforation.
*/
- visible = last_nonblank - first_nonblank + 1;
+ if (first_nonblank && last_nonblank)
+ visible = last_nonblank - first_nonblank + 1;
+ else
+ visible = 0;
return (scissors && 8 <= visible &&
visible < perforation * 3 &&
gap * 2 < perforation);
}
+static void flush_inbody_header_accum(struct mailinfo *mi)
+{
+ if (!mi->inbody_header_accum.len)
+ return;
+ assert(check_header(mi, &mi->inbody_header_accum, mi->s_hdr_data, 0));
+ strbuf_reset(&mi->inbody_header_accum);
+}
+
+static int check_inbody_header(struct mailinfo *mi, const struct strbuf *line)
+{
+ if (mi->inbody_header_accum.len &&
+ (line->buf[0] == ' ' || line->buf[0] == '\t')) {
+ if (mi->use_scissors && is_scissors_line(line->buf)) {
+ /*
+ * This is a scissors line; do not consider this line
+ * as a header continuation line.
+ */
+ flush_inbody_header_accum(mi);
+ return 0;
+ }
+ strbuf_strip_suffix(&mi->inbody_header_accum, "\n");
+ strbuf_addbuf(&mi->inbody_header_accum, line);
+ return 1;
+ }
+
+ flush_inbody_header_accum(mi);
+
+ if (starts_with(line->buf, ">From") && isspace(line->buf[5]))
+ return is_format_patch_separator(line->buf + 1, line->len - 1);
+ if (starts_with(line->buf, "[PATCH]") && isspace(line->buf[7])) {
+ int i;
+ for (i = 0; header[i]; i++)
+ if (!strcmp("Subject", header[i])) {
+ handle_header(&mi->s_hdr_data[i], line);
+ return 1;
+ }
+ return 0;
+ }
+ if (is_inbody_header(mi, line)) {
+ strbuf_addbuf(&mi->inbody_header_accum, line);
+ return 1;
+ }
+ return 0;
+}
+
static int handle_commit_msg(struct mailinfo *mi, struct strbuf *line)
{
assert(!mi->filter_stage);
}
if (mi->use_inbody_headers && mi->header_stage) {
- mi->header_stage = check_header(mi, line, mi->s_hdr_data, 0);
+ mi->header_stage = check_inbody_header(mi, line);
if (mi->header_stage)
return 0;
} else
if (convert_to_utf8(mi, line, mi->charset.buf))
return 0; /* mi->input_error already set */
- if (mi->use_scissors && is_scissors_line(line)) {
+ if (mi->use_scissors && is_scissors_line(line->buf)) {
int i;
strbuf_setlen(&mi->log_message, 0);
break;
} while (!strbuf_getwholeline(line, mi->input, '\n'));
+ flush_inbody_header_accum(mi);
+
handle_body_out:
strbuf_release(&prev);
}
strbuf_init(&mi->email, 0);
strbuf_init(&mi->charset, 0);
strbuf_init(&mi->log_message, 0);
+ strbuf_init(&mi->inbody_header_accum, 0);
mi->header_stage = 1;
mi->use_inbody_headers = 1;
mi->content_top = mi->content;
strbuf_release(&mi->name);
strbuf_release(&mi->email);
strbuf_release(&mi->charset);
+ strbuf_release(&mi->inbody_header_accum);
free(mi->message_id);
for (i = 0; mi->p_hdr_data[i]; i++)
int patch_lines;
int filter_stage; /* still reading log or are we copying patch? */
int header_stage; /* still checking in-body headers? */
+ struct strbuf inbody_header_accum;
struct strbuf **p_hdr_data;
struct strbuf **s_hdr_data;
}
e = item->util;
e->stages[ce_stage(ce)].mode = ce->ce_mode;
- hashcpy(e->stages[ce_stage(ce)].oid.hash, ce->sha1);
+ oidcpy(&e->stages[ce_stage(ce)].oid, &ce->oid);
}
return unmerged;
name2 = mkpathdup("%s", branch2);
}
- read_mmblob(&orig, one->oid.hash);
- read_mmblob(&src1, a->oid.hash);
- read_mmblob(&src2, b->oid.hash);
+ read_mmblob(&orig, &one->oid);
+ read_mmblob(&src1, &a->oid);
+ read_mmblob(&src2, &b->oid);
merge_status = ll_merge(result_buf, a->path, &orig, base_name,
&src1, name1, &src2, name2, &ll_opts);
refresh_cache(REFRESH_QUIET);
- hold_locked_index(lock_file, 1);
+ if (hold_locked_index(lock_file, 0) < 0)
+ return -1;
memset(&trees, 0, sizeof(trees));
memset(&opts, 0, sizeof(opts));
}
if (unpack_trees(nr_trees, t, &opts))
return -1;
- if (write_locked_index(&the_index, lock_file, COMMIT_LOCK))
- die(_("unable to write new index file"));
+ if (write_locked_index(&the_index, lock_file, COMMIT_LOCK)) {
+ rollback_lock_file(lock_file);
+ return error(_("unable to write new index file"));
+ }
return 0;
}
#include "notes-utils.h"
struct notes_merge_pair {
- unsigned char obj[20], base[20], local[20], remote[20];
+ struct object_id obj, base, local, remote;
};
void init_notes_merge_options(struct notes_merge_options *o)
int i = last_index < len ? last_index : len - 1;
int prev_cmp = 0, cmp = -1;
while (i >= 0 && i < len) {
- cmp = hashcmp(obj, list[i].obj);
+ cmp = hashcmp(obj, list[i].obj.hash);
if (!cmp) /* obj belongs @ i */
break;
else if (cmp < 0 && prev_cmp <= 0) /* obj belongs < i */
return list + i;
}
-static unsigned char uninitialized[20] =
+static struct object_id uninitialized = {
"\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff" \
- "\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff";
+ "\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff"
+};
static struct notes_merge_pair *diff_tree_remote(struct notes_merge_options *o,
const unsigned char *base,
mp = find_notes_merge_pair_pos(changes, len, obj, 1, &occupied);
if (occupied) {
/* We've found an addition/deletion pair */
- assert(!hashcmp(mp->obj, obj));
+ assert(!hashcmp(mp->obj.hash, obj));
if (is_null_oid(&p->one->oid)) { /* addition */
- assert(is_null_sha1(mp->remote));
- hashcpy(mp->remote, p->two->oid.hash);
+ assert(is_null_oid(&mp->remote));
+ oidcpy(&mp->remote, &p->two->oid);
} else if (is_null_oid(&p->two->oid)) { /* deletion */
- assert(is_null_sha1(mp->base));
- hashcpy(mp->base, p->one->oid.hash);
+ assert(is_null_oid(&mp->base));
+ oidcpy(&mp->base, &p->one->oid);
} else
assert(!"Invalid existing change recorded");
} else {
- hashcpy(mp->obj, obj);
- hashcpy(mp->base, p->one->oid.hash);
- hashcpy(mp->local, uninitialized);
- hashcpy(mp->remote, p->two->oid.hash);
+ hashcpy(mp->obj.hash, obj);
+ oidcpy(&mp->base, &p->one->oid);
+ oidcpy(&mp->local, &uninitialized);
+ oidcpy(&mp->remote, &p->two->oid);
len++;
}
trace_printf("\t\tStored remote change for %s: %.7s -> %.7s\n",
- sha1_to_hex(mp->obj), sha1_to_hex(mp->base),
- sha1_to_hex(mp->remote));
+ oid_to_hex(&mp->obj), oid_to_hex(&mp->base),
+ oid_to_hex(&mp->remote));
}
diff_flush(&opt);
clear_pathspec(&opt.pathspec);
continue;
}
- assert(!hashcmp(mp->obj, obj));
+ assert(!hashcmp(mp->obj.hash, obj));
if (is_null_oid(&p->two->oid)) { /* deletion */
/*
* Either this is a true deletion (1), or it is part
* (3) mp->local is uninitialized; set it to null_sha1
* (will be overwritten by following addition)
*/
- if (!hashcmp(mp->local, uninitialized))
- hashclr(mp->local);
+ if (!oidcmp(&mp->local, &uninitialized))
+ oidclr(&mp->local);
} else if (is_null_oid(&p->one->oid)) { /* addition */
/*
* Either this is a true addition (1), or it is part
* (2) mp->local is uninitialized; set to p->two->sha1
* (3) mp->local is null_sha1; set to p->two->sha1
*/
- assert(is_null_sha1(mp->local) ||
- !hashcmp(mp->local, uninitialized));
- hashcpy(mp->local, p->two->oid.hash);
+ assert(is_null_oid(&mp->local) ||
+ !oidcmp(&mp->local, &uninitialized));
+ oidcpy(&mp->local, &p->two->oid);
} else { /* modification */
/*
* This is a true modification. p->one->sha1 shall
* match mp->base, and mp->local shall be uninitialized.
* Set mp->local to p->two->sha1.
*/
- assert(!hashcmp(p->one->oid.hash, mp->base));
- assert(!hashcmp(mp->local, uninitialized));
- hashcpy(mp->local, p->two->oid.hash);
+ assert(!oidcmp(&p->one->oid, &mp->base));
+ assert(!oidcmp(&mp->local, &uninitialized));
+ oidcpy(&mp->local, &p->two->oid);
}
trace_printf("\t\tStored local change for %s: %.7s -> %.7s\n",
- sha1_to_hex(mp->obj), sha1_to_hex(mp->base),
- sha1_to_hex(mp->local));
+ oid_to_hex(&mp->obj), oid_to_hex(&mp->base),
+ oid_to_hex(&mp->local));
}
diff_flush(&opt);
clear_pathspec(&opt.pathspec);
if (file_exists(git_path(NOTES_MERGE_WORKTREE)) &&
!is_empty_dir(git_path(NOTES_MERGE_WORKTREE))) {
if (advice_resolve_conflict)
- die("You have not concluded your previous "
+ die(_("You have not concluded your previous "
"notes merge (%s exists).\nPlease, use "
"'git notes merge --commit' or 'git notes "
"merge --abort' to commit/abort the "
"previous merge before you start a new "
- "notes merge.", git_path("NOTES_MERGE_*"));
+ "notes merge."), git_path("NOTES_MERGE_*"));
else
- die("You have not concluded your notes merge "
- "(%s exists).", git_path("NOTES_MERGE_*"));
+ die(_("You have not concluded your notes merge "
+ "(%s exists)."), git_path("NOTES_MERGE_*"));
}
if (safe_create_leading_directories_const(git_path(
mmfile_t base, local, remote;
int status;
- read_mmblob(&base, p->base);
- read_mmblob(&local, p->local);
- read_mmblob(&remote, p->remote);
+ read_mmblob(&base, &p->base);
+ read_mmblob(&local, &p->local);
+ read_mmblob(&remote, &p->remote);
- status = ll_merge(&result_buf, sha1_to_hex(p->obj), &base, NULL,
+ status = ll_merge(&result_buf, oid_to_hex(&p->obj), &base, NULL,
&local, o->local_ref, &remote, o->remote_ref, NULL);
free(base.ptr);
if ((status < 0) || !result_buf.ptr)
die("Failed to execute internal merge");
- write_buf_to_worktree(p->obj, result_buf.ptr, result_buf.size);
+ write_buf_to_worktree(p->obj.hash, result_buf.ptr, result_buf.size);
free(result_buf.ptr);
return status;
trace_printf("\t\t\tmerge_one_change_manual(obj = %.7s, base = %.7s, "
"local = %.7s, remote = %.7s)\n",
- sha1_to_hex(p->obj), sha1_to_hex(p->base),
- sha1_to_hex(p->local), sha1_to_hex(p->remote));
+ oid_to_hex(&p->obj), oid_to_hex(&p->base),
+ oid_to_hex(&p->local), oid_to_hex(&p->remote));
/* add "Conflicts:" section to commit message first time through */
if (!o->has_worktree)
strbuf_addstr(&(o->commit_msg), "\n\nConflicts:\n");
- strbuf_addf(&(o->commit_msg), "\t%s\n", sha1_to_hex(p->obj));
+ strbuf_addf(&(o->commit_msg), "\t%s\n", oid_to_hex(&p->obj));
if (o->verbosity >= 2)
- printf("Auto-merging notes for %s\n", sha1_to_hex(p->obj));
+ printf("Auto-merging notes for %s\n", oid_to_hex(&p->obj));
check_notes_merge_worktree(o);
- if (is_null_sha1(p->local)) {
+ if (is_null_oid(&p->local)) {
/* D/F conflict, checkout p->remote */
- assert(!is_null_sha1(p->remote));
+ assert(!is_null_oid(&p->remote));
if (o->verbosity >= 1)
printf("CONFLICT (delete/modify): Notes for object %s "
"deleted in %s and modified in %s. Version from %s "
"left in tree.\n",
- sha1_to_hex(p->obj), lref, rref, rref);
- write_note_to_worktree(p->obj, p->remote);
- } else if (is_null_sha1(p->remote)) {
+ oid_to_hex(&p->obj), lref, rref, rref);
+ write_note_to_worktree(p->obj.hash, p->remote.hash);
+ } else if (is_null_oid(&p->remote)) {
/* D/F conflict, checkout p->local */
- assert(!is_null_sha1(p->local));
+ assert(!is_null_oid(&p->local));
if (o->verbosity >= 1)
printf("CONFLICT (delete/modify): Notes for object %s "
"deleted in %s and modified in %s. Version from %s "
"left in tree.\n",
- sha1_to_hex(p->obj), rref, lref, lref);
- write_note_to_worktree(p->obj, p->local);
+ oid_to_hex(&p->obj), rref, lref, lref);
+ write_note_to_worktree(p->obj.hash, p->local.hash);
} else {
/* "regular" conflict, checkout result of ll_merge() */
const char *reason = "content";
- if (is_null_sha1(p->base))
+ if (is_null_oid(&p->base))
reason = "add/add";
- assert(!is_null_sha1(p->local));
- assert(!is_null_sha1(p->remote));
+ assert(!is_null_oid(&p->local));
+ assert(!is_null_oid(&p->remote));
if (o->verbosity >= 1)
printf("CONFLICT (%s): Merge conflict in notes for "
- "object %s\n", reason, sha1_to_hex(p->obj));
+ "object %s\n", reason,
+ oid_to_hex(&p->obj));
ll_merge_in_worktree(o, p);
}
trace_printf("\t\t\tremoving from partial merge result\n");
- remove_note(t, p->obj);
+ remove_note(t, p->obj.hash);
return 1;
}
case NOTES_MERGE_RESOLVE_OURS:
if (o->verbosity >= 2)
printf("Using local notes for %s\n",
- sha1_to_hex(p->obj));
+ oid_to_hex(&p->obj));
/* nothing to do */
return 0;
case NOTES_MERGE_RESOLVE_THEIRS:
if (o->verbosity >= 2)
printf("Using remote notes for %s\n",
- sha1_to_hex(p->obj));
- if (add_note(t, p->obj, p->remote, combine_notes_overwrite))
+ oid_to_hex(&p->obj));
+ if (add_note(t, p->obj.hash, p->remote.hash, combine_notes_overwrite))
die("BUG: combine_notes_overwrite failed");
return 0;
case NOTES_MERGE_RESOLVE_UNION:
if (o->verbosity >= 2)
printf("Concatenating local and remote notes for %s\n",
- sha1_to_hex(p->obj));
- if (add_note(t, p->obj, p->remote, combine_notes_concatenate))
+ oid_to_hex(&p->obj));
+ if (add_note(t, p->obj.hash, p->remote.hash, combine_notes_concatenate))
die("failed to concatenate notes "
"(combine_notes_concatenate)");
return 0;
case NOTES_MERGE_RESOLVE_CAT_SORT_UNIQ:
if (o->verbosity >= 2)
printf("Concatenating unique lines in local and remote "
- "notes for %s\n", sha1_to_hex(p->obj));
- if (add_note(t, p->obj, p->remote, combine_notes_cat_sort_uniq))
+ "notes for %s\n", oid_to_hex(&p->obj));
+ if (add_note(t, p->obj.hash, p->remote.hash, combine_notes_cat_sort_uniq))
die("failed to concatenate notes "
"(combine_notes_cat_sort_uniq)");
return 0;
for (i = 0; i < *num_changes; i++) {
struct notes_merge_pair *p = changes + i;
trace_printf("\t\t%.7s: %.7s -> %.7s/%.7s\n",
- sha1_to_hex(p->obj), sha1_to_hex(p->base),
- sha1_to_hex(p->local), sha1_to_hex(p->remote));
+ oid_to_hex(&p->obj), oid_to_hex(&p->base),
+ oid_to_hex(&p->local),
+ oid_to_hex(&p->remote));
- if (!hashcmp(p->base, p->remote)) {
+ if (!oidcmp(&p->base, &p->remote)) {
/* no remote change; nothing to do */
trace_printf("\t\t\tskipping (no remote change)\n");
- } else if (!hashcmp(p->local, p->remote)) {
+ } else if (!oidcmp(&p->local, &p->remote)) {
/* same change in local and remote; nothing to do */
trace_printf("\t\t\tskipping (local == remote)\n");
- } else if (!hashcmp(p->local, uninitialized) ||
- !hashcmp(p->local, p->base)) {
+ } else if (!oidcmp(&p->local, &uninitialized) ||
+ !oidcmp(&p->local, &p->base)) {
/* no local change; adopt remote change */
trace_printf("\t\t\tno local change, adopted remote\n");
- if (add_note(t, p->obj, p->remote,
+ if (add_note(t, p->obj.hash, p->remote.hash,
combine_notes_overwrite))
die("BUG: combine_notes_overwrite failed");
} else {
void init_notes(struct notes_tree *t, const char *notes_ref,
combine_notes_fn combine_notes, int flags)
{
- unsigned char sha1[20], object_sha1[20];
+ struct object_id oid, object_oid;
unsigned mode;
struct leaf_node root_tree;
t->dirty = 0;
if (flags & NOTES_INIT_EMPTY || !notes_ref ||
- get_sha1_treeish(notes_ref, object_sha1))
+ get_sha1_treeish(notes_ref, object_oid.hash))
return;
- if (flags & NOTES_INIT_WRITABLE && read_ref(notes_ref, object_sha1))
+ if (flags & NOTES_INIT_WRITABLE && read_ref(notes_ref, object_oid.hash))
die("Cannot use notes ref %s", notes_ref);
- if (get_tree_entry(object_sha1, "", sha1, &mode))
+ if (get_tree_entry(object_oid.hash, "", oid.hash, &mode))
die("Failed to read notes tree referenced by %s (%s)",
- notes_ref, sha1_to_hex(object_sha1));
+ notes_ref, oid_to_hex(&object_oid));
hashclr(root_tree.key_sha1);
- hashcpy(root_tree.val_sha1, sha1);
+ hashcpy(root_tree.val_sha1, oid.hash);
load_subtree(t, &root_tree, t->root, 0);
}
* revision.h: 0---------10 26
* fetch-pack.c: 0---4
* walker.c: 0-2
- * upload-pack.c: 11----------------19
+ * upload-pack.c: 4 11----------------19
* builtin/blame.c: 12-13
* bisect.c: 16
* bundle.c: 16
{
unsigned int i = 0, j, next;
- qsort(indexed_commits, indexed_commits_nr, sizeof(indexed_commits[0]),
- date_compare);
+ QSORT(indexed_commits, indexed_commits_nr, date_compare);
if (writer.show_progress)
writer.progress = start_progress("Selecting bitmap commits", 0);
return -1;
idx_name = pack_bitmap_filename(packfile);
- fd = git_open_noatime(idx_name);
+ fd = git_open(idx_name);
free(idx_name);
if (fd < 0)
entries[i].offset = nth_packed_object_offset(p, i);
entries[i].nr = i;
}
- qsort(entries, nr_objects, sizeof(*entries), compare_entries);
+ QSORT(entries, nr_objects, compare_entries);
for (i = 0; i < nr_objects; i++) {
void *data;
unsigned no_try_delta:1;
unsigned tagged:1; /* near the very tip of refs */
unsigned filled:1; /* assigned write-order */
+
+ /*
+ * State flags for depth-first search used for analyzing delta cycles.
+ */
+ enum {
+ DFS_NONE = 0,
+ DFS_ACTIVE,
+ DFS_DONE
+ } dfs_state;
};
struct packing_data {
if (objects[i]->offset > last_obj_offset)
last_obj_offset = objects[i]->offset;
}
- qsort(sorted_by_sha, nr_objects, sizeof(sorted_by_sha[0]),
- sha1_compare);
+ QSORT(sorted_by_sha, nr_objects, sha1_compare);
}
else
sorted_by_sha = list = last = NULL;
#define DEFAULT_PAGER "less"
#endif
-/*
- * This is split up from the rest of git so that we can do
- * something different on Windows.
- */
-
static struct child_process pager_process = CHILD_PROCESS_INIT;
+static const char *pager_program;
static void wait_for_pager(int in_signal)
{
raise(signo);
}
+static int core_pager_config(const char *var, const char *value, void *data)
+{
+ if (!strcmp(var, "core.pager"))
+ return git_config_string(&pager_program, var, value);
+ return 0;
+}
+
+static void read_early_config(config_fn_t cb, void *data)
+{
+ git_config_with_options(cb, data, NULL, 1);
+
+ /*
+ * Note that this is a really dirty hack that does the wrong thing in
+ * many cases. The crux of the problem is that we cannot run
+ * setup_git_directory() early on in git's setup, so we have no idea if
+ * we are in a repository or not, and therefore are not sure whether
+ * and how to read repository-local config.
+ *
+ * So if we _aren't_ in a repository (or we are but we would reject its
+ * core.repositoryformatversion), we'll read whatever is in .git/config
+ * blindly. Similarly, if we _are_ in a repository, but not at the
+ * root, we'll fail to find .git/config (because it's really
+ * ../.git/config, etc). See t7006 for a complete set of failures.
+ *
+ * However, we have historically provided this hack because it does
+ * work some of the time (namely when you are at the top-level of a
+ * valid repository), and would rarely make things worse (i.e., you do
+ * not generally have a .git/config file sitting around).
+ */
+ if (!startup_info->have_repository) {
+ struct git_config_source repo_config;
+
+ memset(&repo_config, 0, sizeof(repo_config));
+ repo_config.file = ".git/config";
+ git_config_with_options(cb, data, &repo_config, 1);
+ }
+}
+
const char *git_pager(int stdout_is_tty)
{
const char *pager;
pager = getenv("GIT_PAGER");
if (!pager) {
if (!pager_program)
- git_config(git_default_config, NULL);
+ read_early_config(core_pager_config, NULL);
pager = pager_program;
}
if (!pager)
return width;
}
-/* returns 0 for "no pager", 1 for "use pager", and -1 for "not specified" */
-int check_pager_config(const char *cmd)
+struct pager_command_config_data {
+ const char *cmd;
+ int want;
+ char *value;
+};
+
+static int pager_command_config(const char *var, const char *value, void *vdata)
{
- int want = -1;
- struct strbuf key = STRBUF_INIT;
- const char *value = NULL;
- strbuf_addf(&key, "pager.%s", cmd);
- if (git_config_key_is_valid(key.buf) &&
- !git_config_get_value(key.buf, &value)) {
- int b = git_config_maybe_bool(key.buf, value);
+ struct pager_command_config_data *data = vdata;
+ const char *cmd;
+
+ if (skip_prefix(var, "pager.", &cmd) && !strcmp(cmd, data->cmd)) {
+ int b = git_config_maybe_bool(var, value);
if (b >= 0)
- want = b;
+ data->want = b;
else {
- want = 1;
- pager_program = xstrdup(value);
+ data->want = 1;
+ data->value = xstrdup(value);
}
}
- strbuf_release(&key);
- return want;
+
+ return 0;
+}
+
+/* returns 0 for "no pager", 1 for "use pager", and -1 for "not specified" */
+int check_pager_config(const char *cmd)
+{
+ struct pager_command_config_data data;
+
+ data.cmd = cmd;
+ data.want = -1;
+ data.value = NULL;
+
+ read_early_config(pager_command_config, &data);
+
+ if (data.value)
+ pager_program = data.value;
+ return data.want;
}
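
A sketch of how a caller might interpret the tri-state result (illustrative only, not the actual git.c logic; setup_pager() is the existing helper):

	int use_pager = check_pager_config("log");

	if (use_pager == -1)
		use_pager = 1;	/* no "pager.log" setting: keep the default */
	if (use_pager)
		setup_pager();	/* honors any pager_program recorded above */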
return 0;
}
+/**
+ * Report that the option is unknown, so that other code can handle
+ * it. This can be used as a callback together with
+ * OPTION_LOWLEVEL_CALLBACK to allow an option to be documented in the
+ * "-h" output even if it's not being handled directly by
+ * parse_options().
+ */
+int parse_opt_unknown_cb(const struct option *opt, const char *arg, int unset)
+{
+ return -2;
+}
+
/**
* Recreates the command-line option in the strbuf.
*/
extern int parse_opt_tertiary(const struct option *, const char *, int);
extern int parse_opt_string_list(const struct option *, const char *, int);
extern int parse_opt_noop_cb(const struct option *, const char *, int);
+extern int parse_opt_unknown_cb(const struct option *, const char *, int);
extern int parse_opt_passthru(const struct option *, const char *, int);
extern int parse_opt_passthru_argv(const struct option *, const char *, int);
#include "string-list.h"
#include "dir.h"
#include "worktree.h"
+#include "submodule-config.h"
static int get_st_mode_bits(const char *path, int *mode)
{
STRBUF_INIT, STRBUF_INIT, STRBUF_INIT, STRBUF_INIT
};
static int index;
- struct strbuf *sb = &pathname_array[3 & ++index];
+ struct strbuf *sb = &pathname_array[index];
+ index = (index + 1) % ARRAY_SIZE(pathname_array);
strbuf_reset(sb);
return sb;
}
return pathname->buf;
}
-static void do_submodule_path(struct strbuf *buf, const char *path,
- const char *fmt, va_list args)
+/* Returns 0 on success, negative on failure. */
+#define SUBMODULE_PATH_ERR_NOT_CONFIGURED -1
+static int do_submodule_path(struct strbuf *buf, const char *path,
+ const char *fmt, va_list args)
{
const char *git_dir;
struct strbuf git_submodule_common_dir = STRBUF_INIT;
struct strbuf git_submodule_dir = STRBUF_INIT;
+ const struct submodule *sub;
+ int err = 0;
strbuf_addstr(buf, path);
strbuf_complete(buf, '/');
strbuf_reset(buf);
strbuf_addstr(buf, git_dir);
}
+ if (!is_git_directory(buf->buf)) {
+ gitmodules_config();
+ sub = submodule_from_path(null_sha1, path);
+ if (!sub) {
+ err = SUBMODULE_PATH_ERR_NOT_CONFIGURED;
+ goto cleanup;
+ }
+ strbuf_reset(buf);
+ strbuf_git_path(buf, "%s/%s", "modules", sub->name);
+ }
+
strbuf_addch(buf, '/');
strbuf_addbuf(&git_submodule_dir, buf);
strbuf_cleanup_path(buf);
+cleanup:
strbuf_release(&git_submodule_dir);
strbuf_release(&git_submodule_common_dir);
+
+ return err;
}
char *git_pathdup_submodule(const char *path, const char *fmt, ...)
{
+ int err;
va_list args;
struct strbuf buf = STRBUF_INIT;
va_start(args, fmt);
- do_submodule_path(&buf, path, fmt, args);
+ err = do_submodule_path(&buf, path, fmt, args);
va_end(args);
+ if (err) {
+ strbuf_release(&buf);
+ return NULL;
+ }
return strbuf_detach(&buf, NULL);
}
-void strbuf_git_path_submodule(struct strbuf *buf, const char *path,
- const char *fmt, ...)
+int strbuf_git_path_submodule(struct strbuf *buf, const char *path,
+ const char *fmt, ...)
{
+ int err;
va_list args;
va_start(args, fmt);
- do_submodule_path(buf, path, fmt, args);
+ err = do_submodule_path(buf, path, fmt, args);
va_end(args);
+
+ return err;
}
static void do_git_common_path(struct strbuf *buf,
{
struct pathspec_item *item;
const char *entry = argv ? *argv : NULL;
- int i, n, prefixlen, nr_exclude = 0;
+ int i, n, prefixlen, warn_empty_string, nr_exclude = 0;
memset(pathspec, 0, sizeof(*pathspec));
}
n = 0;
- while (argv[n])
+ warn_empty_string = 1;
+ while (argv[n]) {
+ if (*argv[n] == '\0' && warn_empty_string) {
+ warning(_("empty strings as pathspecs will be made invalid in upcoming releases. "
+ "please use . instead if you meant to match all paths"));
+ warn_empty_string = 0;
+ }
n++;
+ }
pathspec->nr = n;
ALLOC_ARRAY(pathspec->items, n);
if (pathspec->magic & PATHSPEC_MAXDEPTH) {
if (flags & PATHSPEC_KEEP_ORDER)
die("BUG: PATHSPEC_MAXDEPTH_VALID and PATHSPEC_KEEP_ORDER are incompatible");
- qsort(pathspec->items, pathspec->nr,
- sizeof(struct pathspec_item), pathspec_item_cmp);
+ QSORT(pathspec->items, pathspec->nr, pathspec_item_cmp);
}
}
command_bidi_pipe command_close_bidi_pipe
version exec_path html_path hash_object git_cmd_try
remote_refs prompt
- get_tz_offset
+ get_tz_offset get_record
credential credential_read credential_write
temp_acquire temp_is_locked temp_release temp_reset temp_path);
return sprintf("%s%02d%02d", $sign, (gmtime(abs($t - $gm)))[2,1]);
}
+=item get_record ( FILEHANDLE, INPUT_RECORD_SEPARATOR )
+
+Read one record from FILEHANDLE delimited by INPUT_RECORD_SEPARATOR,
+removing any trailing INPUT_RECORD_SEPARATOR.
+
+=cut
+
+sub get_record {
+ my ($fh, $rs) = @_;
+ local $/ = $rs;
+ my $rec = <$fh>;
+ chomp $rec if defined $rs;
+ $rec;
+}
=item prompt ( PROMPT , ISPASSWORD )
=cut
+# Very close to Mail::Address's parser, but we still have minor
+# differences in some cases (see t9000 for examples).
sub parse_mailboxes {
my $re_comment = qr/\((?:[^)]*)\)/;
my $re_quote = qr/"(?:[^\"\\]|\\.)*"/;
# divide the string in tokens of the above form
my $re_token = qr/(?:$re_quote|$re_word|$re_comment|\S)/;
my @tokens = map { $_ =~ /\s*($re_token)\s*/g } @_;
+ my $end_of_addr_seen = 0;
# add a delimiter to simplify treatment for the last mailbox
push @tokens, ",";
if ($token =~ /^[,;]$/) {
# if buffer still contains undetermined strings
# append it at the end of @address or @phrase
- if (@address) {
- push @address, @buffer;
- } else {
+ if ($end_of_addr_seen) {
push @phrase, @buffer;
+ } else {
+ push @address, @buffer;
}
my $str_phrase = join ' ', @phrase;
push @addr_list, $str_mailbox if ($str_mailbox);
@phrase = @address = @comment = @buffer = ();
+ $end_of_addr_seen = 0;
} elsif ($token =~ /^\(/) {
push @comment, $token;
} elsif ($token eq "<") {
push @phrase, (splice @address), (splice @buffer);
} elsif ($token eq ">") {
+ $end_of_addr_seen = 1;
push @address, (splice @buffer);
- } elsif ($token eq "@") {
+ } elsif ($token eq "@" && !$end_of_addr_seen) {
push @address, (splice @buffer), "@";
- } elsif ($token eq ".") {
- push @address, (splice @buffer), ".";
} else {
push @buffer, $token;
}
(++$min, $max);
}
+sub svn_dir {
+ command_oneline(qw(rev-parse --git-path svn));
+}
+
sub tmp_config {
my (@args) = @_;
- my $old_def_config = "$ENV{GIT_DIR}/svn/config";
- my $config = "$ENV{GIT_DIR}/svn/.metadata";
+ my $svn_dir = svn_dir();
+ my $old_def_config = "$svn_dir/config";
+ my $config = "$svn_dir/.metadata";
if (! -f $config && -f $old_def_config) {
rename $old_def_config, $config or
die "Failed rename $old_def_config => $config: $!\n";
if ($memo_backend > 0) {
tie %$hash => 'Git::SVN::Memoize::YAML', "$path.yaml";
} else {
- tie %$hash => 'Memoize::Storable', "$path.db", 'nstore';
+ # first verify that any existing file can actually be loaded
+ # (it may have been saved by an incompatible version)
+ my $db = "$path.db";
+ if (-e $db) {
+ use Storable qw(retrieve);
+
+ if (!eval { retrieve($db); 1 }) {
+ unlink $db or die "unlink $db failed: $!";
+ }
+ }
+ tie %$hash => 'Memoize::Storable', $db, 'nstore';
}
}
return if $memoized;
$memoized = 1;
- my $cache_path = "$ENV{GIT_DIR}/svn/.caches/";
+ my $cache_path = svn_dir() . '/.caches/';
mkpath([$cache_path]) unless -d $cache_path;
my %lookup_svn_merge_cache;
sub clear_memoized_mergeinfo_caches {
die "Only call this method in non-memoized context" if ($memoized);
- my $cache_path = "$ENV{GIT_DIR}/svn/.caches/";
+ my $cache_path = svn_dir() . '/.caches/';
return unless -d $cache_path;
for my $cache_file (("$cache_path/lookup_svn_merge",
"refs/remotes/$prefix$default_ref_id";
}
$_[1] = $repo_id;
- my $dir = "$ENV{GIT_DIR}/svn/$ref_id";
+ my $svn_dir = svn_dir();
+ my $dir = "$svn_dir/$ref_id";
- # Older repos imported by us used $GIT_DIR/svn/foo instead of
- # $GIT_DIR/svn/refs/remotes/foo when tracking refs/remotes/foo
+ # Older repos imported by us used $svn_dir/foo instead of
+ # $svn_dir/refs/remotes/foo when tracking refs/remotes/foo
if ($ref_id =~ m{^refs/remotes/(.+)}) {
- my $old_dir = "$ENV{GIT_DIR}/svn/$1";
+ my $old_dir = "$svn_dir/$1";
if (-d $old_dir && ! -d $dir) {
$dir = $old_dir;
}
mkpath([$dir]);
my $obj = bless {
ref_id => $ref_id, dir => $dir, index => "$dir/index",
- config => "$ENV{GIT_DIR}/svn/config",
+ config => "$svn_dir/config",
map_root => "$dir/.rev_map", repo_id => $repo_id }, $class;
# Ensure it gets canonicalized
use Carp qw/croak/;
use Git qw/command command_oneline command_noisy command_output_pipe
command_input_pipe command_close_pipe
- command_bidi_pipe command_close_bidi_pipe/;
+ command_bidi_pipe command_close_bidi_pipe
+ get_record/;
+
BEGIN {
@ISA = qw(SVN::Delta::Editor);
}
push @diff_tree, "-l$_rename_limit" if defined $_rename_limit;
push @diff_tree, $tree_a, $tree_b;
my ($diff_fh, $ctx) = command_output_pipe(@diff_tree);
- local $/ = "\0";
my $state = 'meta';
my @mods;
- while (<$diff_fh>) {
- chomp $_; # this gets rid of the trailing "\0"
+ while (defined($_ = get_record($diff_fh, "\0"))) {
if ($state eq 'meta' && /^:(\d{6})\s(\d{6})\s
($::sha1)\s($::sha1)\s
([MTCRAD])\d*$/xo) {
my ($fh, $ctx) = command_output_pipe(qw/ls-tree --name-only -r -z/,
$self->{tree_b});
- local $/ = "\0";
- while (<$fh>) {
- chomp;
+ while (defined($_ = get_record($fh, "\0"))) {
my @dn = split m#/#, $_;
while (pop @dn) {
delete $rm->{join '/', @dn};
use File::Basename qw/dirname/;
use Git qw/command command_oneline command_noisy command_output_pipe
command_input_pipe command_close_pipe
- command_bidi_pipe command_close_bidi_pipe/;
+ command_bidi_pipe command_close_bidi_pipe
+ get_record/;
BEGIN {
@ISA = qw(SVN::Delta::Editor);
}
my $printed_warning;
chomp(my $empty_blob = `git hash-object -t blob --stdin < /dev/null`);
my ($ls, $ctx) = command_output_pipe(qw/ls-tree -r -z/, $cmt);
- local $/ = "\0";
my $pfx = defined($switch_path) ? $switch_path : $git_svn->path;
$pfx .= '/' if length($pfx);
- while (<$ls>) {
- chomp;
+ while (defined($_ = get_record($ls, "\0"))) {
s/\A100644 blob $empty_blob\t//o or next;
unless ($printed_warning) {
print STDERR "Scanning for empty symlinks, ",
my ($ls, $ctx) = command_output_pipe(qw/ls-tree
-r --name-only -z/,
$tree);
- local $/ = "\0";
- while (<$ls>) {
- chomp;
+ while (defined($_ = get_record($ls, "\0"))) {
my $rmpath = "$gpath/$_";
$self->{gii}->remove($rmpath);
print "\tD\t$rmpath\n" unless $::_q;
my ($ls, $ctx) = command_output_pipe(qw/ls-tree
-r --name-only -z/,
$self->{c});
- local $/ = "\0";
- while (<$ls>) {
- chomp;
+ while (defined($_ = get_record($ls, "\0"))) {
$self->{gii}->remove($_);
print "\tD\t$_\n" unless $::_q;
push @deleted_gpath, $gpath;
command_noisy
command_output_pipe
command_close_pipe
+ command_oneline
);
+use Git::SVN;
sub migrate_from_v0 {
my $git_dir = $ENV{GIT_DIR};
chomp;
my ($id, $orig_ref) = ($_, $_);
next unless $id =~ s#^refs/heads/(.+)-HEAD$#$1#;
- next unless -f "$git_dir/$id/info/url";
+ my $info_url = command_oneline(qw(rev-parse --git-path),
+ "$id/info/url");
+ next unless -f $info_url;
my $new_ref = "refs/remotes/$id";
if (::verify_ref("$new_ref^0")) {
print STDERR "W: $orig_ref is probably an old ",
my $git_dir = $ENV{GIT_DIR};
my $migrated = 0;
return $migrated unless -d $git_dir;
- my $svn_dir = "$git_dir/svn";
+ my $svn_dir = Git::SVN::svn_dir();
# just in case somebody used 'svn' as their $id at some point...
return $migrated if -d $svn_dir && ! -f "$svn_dir/info/url";
my $x = $_;
next unless $x =~ s#^refs/remotes/##;
chomp $x;
- next unless -f "$git_dir/$x/info/url";
- my $u = eval { ::file_to_s("$git_dir/$x/info/url") };
+ my $info_url = command_oneline(qw(rev-parse --git-path),
+ "$x/info/url");
+ next unless -f $info_url;
+ my $u = eval { ::file_to_s($info_url) };
next unless $u;
- my $dn = dirname("$git_dir/svn/$x");
+ my $dn = dirname("$svn_dir/$x");
mkpath([$dn]) unless -d $dn;
if ($x eq 'svn') { # they used 'svn' as GIT_SVN_ID:
- mkpath(["$git_dir/svn/svn"]);
+ mkpath(["$svn_dir/svn"]);
print STDERR " - $git_dir/$x/info => ",
- "$git_dir/svn/$x/info\n";
- rename "$git_dir/$x/info", "$git_dir/svn/$x/info" or
+ "$svn_dir/$x/info\n";
+ rename "$git_dir/$x/info", "$svn_dir/$x/info" or
croak "$!: $x";
# don't worry too much about these, they probably
# don't exist with repos this old (save for index,
# and we can easily regenerate that)
foreach my $f (qw/unhandled.log index .rev_db/) {
- rename "$git_dir/$x/$f", "$git_dir/svn/$x/$f";
+ rename "$git_dir/$x/$f", "$svn_dir/$x/$f";
}
} else {
- print STDERR " - $git_dir/$x => $git_dir/svn/$x\n";
- rename "$git_dir/$x", "$git_dir/svn/$x" or
- croak "$!: $x";
+ print STDERR " - $git_dir/$x => $svn_dir/$x\n";
+ rename "$git_dir/$x", "$svn_dir/$x" or croak "$!: $x";
}
$migrated++;
}
push @dir, $_;
}
}
+ my $svn_dir = Git::SVN::svn_dir();
foreach (@dir) {
my $x = $_;
- $x =~ s!^\Q$ENV{GIT_DIR}\E/svn/!!o;
+ $x =~ s!^\Q$svn_dir\E/!!o;
read_old_urls($l_map, $x, $_);
}
}
my @cfg = command(qw/config -l/);
return if grep /^svn-remote\..+\.url=/, @cfg;
my %l_map;
- read_old_urls(\%l_map, '', "$ENV{GIT_DIR}/svn");
+ read_old_urls(\%l_map, '', Git::SVN::svn_dir());
my $migrated = 0;
require Git::SVN;
}
}
if (@emptied) {
- my $file = $ENV{GIT_CONFIG} || "$ENV{GIT_DIR}/config";
+ my $file = $ENV{GIT_CONFIG} ||
+ command_oneline(qw(rev-parse --git-path config));
print STDERR <<EOF;
The following [svn-remote] sections in your config file ($file) are empty
and can be safely removed:
write_or_die(fd, "0000", 4);
}
+int packet_flush_gently(int fd)
+{
+ packet_trace("0000", 4, 1);
+ if (write_in_full(fd, "0000", 4) == 4)
+ return 0;
+ return error("flush packet write failed");
+}
+
void packet_buf_flush(struct strbuf *buf)
{
packet_trace("0000", 4, 1);
strbuf_add(buf, "0000", 4);
}
-#define hex(a) (hexchar[(a) & 15])
-static void format_packet(struct strbuf *out, const char *fmt, va_list args)
+static void set_packet_header(char *buf, const int size)
{
static char hexchar[] = "0123456789abcdef";
+
+ #define hex(a) (hexchar[(a) & 15])
+ buf[0] = hex(size >> 12);
+ buf[1] = hex(size >> 8);
+ buf[2] = hex(size >> 4);
+ buf[3] = hex(size);
+ #undef hex
+}
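
To illustrate the framing (a sketch that assumes only the static set_packet_header() above): a 6-byte payload such as "hello\n" travels as a 10-byte packet, and the 4-byte header encodes that total length in lowercase hex:

	char hdr[5] = { 0 };

	set_packet_header(hdr, 4 + 6);	/* the length includes the header itself */
	/* hdr now holds "000a"; the bytes on the wire are "000ahello\n" */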
+
+static void format_packet(struct strbuf *out, const char *fmt, va_list args)
+{
size_t orig_len, n;
orig_len = out->len;
if (n > LARGE_PACKET_MAX)
die("protocol error: impossibly long line");
- out->buf[orig_len + 0] = hex(n >> 12);
- out->buf[orig_len + 1] = hex(n >> 8);
- out->buf[orig_len + 2] = hex(n >> 4);
- out->buf[orig_len + 3] = hex(n);
+ set_packet_header(&out->buf[orig_len], n);
packet_trace(out->buf + orig_len + 4, n - 4, 1);
}
-void packet_write(int fd, const char *fmt, ...)
+static int packet_write_fmt_1(int fd, int gently,
+ const char *fmt, va_list args)
+{
+ struct strbuf buf = STRBUF_INIT;
+ ssize_t count;
+
+ format_packet(&buf, fmt, args);
+ count = write_in_full(fd, buf.buf, buf.len);
+ if (count == buf.len)
+ return 0;
+
+ if (!gently) {
+ check_pipe(errno);
+ die_errno("packet write with format failed");
+ }
+ return error("packet write with format failed");
+}
+
+void packet_write_fmt(int fd, const char *fmt, ...)
{
- static struct strbuf buf = STRBUF_INIT;
va_list args;
- strbuf_reset(&buf);
va_start(args, fmt);
- format_packet(&buf, fmt, args);
+ packet_write_fmt_1(fd, 0, fmt, args);
+ va_end(args);
+}
+
+int packet_write_fmt_gently(int fd, const char *fmt, ...)
+{
+ int status;
+ va_list args;
+
+ va_start(args, fmt);
+ status = packet_write_fmt_1(fd, 1, fmt, args);
va_end(args);
- write_or_die(fd, buf.buf, buf.len);
+ return status;
+}
+
+static int packet_write_gently(const int fd_out, const char *buf, size_t size)
+{
+ static char packet_write_buffer[LARGE_PACKET_MAX];
+ size_t packet_size;
+
+ if (size > sizeof(packet_write_buffer) - 4)
+ return error("packet write failed - data exceeds max packet size");
+
+ packet_trace(buf, size, 1);
+ packet_size = size + 4;
+ set_packet_header(packet_write_buffer, packet_size);
+ memcpy(packet_write_buffer + 4, buf, size);
+ if (write_in_full(fd_out, packet_write_buffer, packet_size) == packet_size)
+ return 0;
+ return error("packet write failed");
}
void packet_buf_write(struct strbuf *buf, const char *fmt, ...)
va_end(args);
}
+int write_packetized_from_fd(int fd_in, int fd_out)
+{
+ static char buf[LARGE_PACKET_DATA_MAX];
+ int err = 0;
+ ssize_t bytes_to_write;
+
+ while (!err) {
+ bytes_to_write = xread(fd_in, buf, sizeof(buf));
+ if (bytes_to_write < 0)
+ return COPY_READ_ERROR;
+ if (bytes_to_write == 0)
+ break;
+ err = packet_write_gently(fd_out, buf, bytes_to_write);
+ }
+ if (!err)
+ err = packet_flush_gently(fd_out);
+ return err;
+}
+
+int write_packetized_from_buf(const char *src_in, size_t len, int fd_out)
+{
+ int err = 0;
+ size_t bytes_written = 0;
+ size_t bytes_to_write;
+
+ while (!err) {
+ if ((len - bytes_written) > LARGE_PACKET_DATA_MAX)
+ bytes_to_write = LARGE_PACKET_DATA_MAX;
+ else
+ bytes_to_write = len - bytes_written;
+ if (bytes_to_write == 0)
+ break;
+ err = packet_write_gently(fd_out, src_in + bytes_written, bytes_to_write);
+ bytes_written += bytes_to_write;
+ }
+ if (!err)
+ err = packet_flush_gently(fd_out);
+ return err;
+}
+
static int get_packet_data(int fd, char **src_buf, size_t *src_size,
void *dst, unsigned size, int options)
{
{
return packet_read_line_generic(-1, src, src_len, dst_len);
}
+
+ssize_t read_packetized_to_strbuf(int fd_in, struct strbuf *sb_out)
+{
+ int packet_len;
+
+ size_t orig_len = sb_out->len;
+ size_t orig_alloc = sb_out->alloc;
+
+ for (;;) {
+ strbuf_grow(sb_out, LARGE_PACKET_DATA_MAX);
+ packet_len = packet_read(fd_in, NULL, NULL,
+ /* strbuf_grow() above always allocates one extra byte to
+ * store a '\0' at the end of the string. packet_read()
+ * writes a '\0' extra byte at the end, too. Let it know
+ * that there is already room for the extra byte.
+ */
+ sb_out->buf + sb_out->len, LARGE_PACKET_DATA_MAX+1,
+ PACKET_READ_GENTLE_ON_EOF);
+ if (packet_len <= 0)
+ break;
+ sb_out->len += packet_len;
+ }
+
+ if (packet_len < 0) {
+ if (orig_alloc == 0)
+ strbuf_release(sb_out);
+ else
+ strbuf_setlen(sb_out, orig_len);
+ return packet_len;
+ }
+ return sb_out->len - orig_len;
+}
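
Put together, the new writers and the reader give a simple packetized round trip over a pair of pipes. A sketch, with hypothetical descriptors (fd_to_child, fd_from_child) and payload, and with error handling reduced to die():

	struct strbuf payload = STRBUF_INIT;
	struct strbuf response = STRBUF_INIT;

	strbuf_addstr(&payload, "some data for the filter");
	if (write_packetized_from_buf(payload.buf, payload.len, fd_to_child))
		die("failed to write packetized payload");
	if (read_packetized_to_strbuf(fd_from_child, &response) < 0)
		die("failed to read packetized response");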
* side can't, we stay with pure read/write interfaces.
*/
void packet_flush(int fd);
-void packet_write(int fd, const char *fmt, ...) __attribute__((format (printf, 2, 3)));
+void packet_write_fmt(int fd, const char *fmt, ...) __attribute__((format (printf, 2, 3)));
void packet_buf_flush(struct strbuf *buf);
void packet_buf_write(struct strbuf *buf, const char *fmt, ...) __attribute__((format (printf, 2, 3)));
+int packet_flush_gently(int fd);
+int packet_write_fmt_gently(int fd, const char *fmt, ...) __attribute__((format (printf, 2, 3)));
+int write_packetized_from_fd(int fd_in, int fd_out);
+int write_packetized_from_buf(const char *src_in, size_t len, int fd_out);
/*
* Read a packetized line into the buffer, which must be at least size bytes
*/
char *packet_read_line_buf(char **src_buf, size_t *src_len, int *size);
+/*
+ * Reads a stream of variable sized packets until a flush packet is detected.
+ */
+ssize_t read_packetized_to_strbuf(int fd_in, struct strbuf *sb_out);
+
#define DEFAULT_PACKET_MAX 1000
#define LARGE_PACKET_MAX 65520
+#define LARGE_PACKET_DATA_MAX (LARGE_PACKET_MAX - 4)
extern char packet_buffer[LARGE_PACKET_MAX];
#endif
switch (c->signature_check.result) {
case 'G':
case 'B':
+ case 'E':
case 'U':
case 'N':
+ case 'X':
+ case 'Y':
+ case 'R':
strbuf_addch(sb, c->signature_check.result);
}
break;
static int ce_compare_data(const struct cache_entry *ce, struct stat *st)
{
int match = -1;
- int fd = open(ce->name, O_RDONLY);
+ static int cloexec = O_CLOEXEC;
+ int fd = open(ce->name, O_RDONLY | cloexec);
+
+ if ((cloexec & O_CLOEXEC) && fd < 0 && errno == EINVAL) {
+ /* Try again w/o O_CLOEXEC: the kernel might not support it */
+ cloexec &= ~O_CLOEXEC;
+ fd = open(ce->name, O_RDONLY | cloexec);
+ }
if (fd >= 0) {
unsigned char sha1[20];
if (!index_fd(sha1, fd, st, OBJ_BLOB, ce->name, 0))
- match = hashcmp(sha1, ce->sha1);
+ match = hashcmp(sha1, ce->oid.hash);
/* index_fd() closed the file descriptor already */
}
return match;
if (strbuf_readlink(&sb, ce->name, expected_size))
return -1;
- buffer = read_sha1_file(ce->sha1, &type, &size);
+ buffer = read_sha1_file(ce->oid.hash, &type, &size);
if (buffer) {
if (size == sb.len)
match = memcmp(buffer, sb.buf, size);
*/
if (resolve_gitlink_ref(ce->name, "HEAD", sha1) < 0)
return 0;
- return hashcmp(sha1, ce->sha1);
+ return hashcmp(sha1, ce->oid.hash);
}
static int ce_modified_check_fs(const struct cache_entry *ce, struct stat *st)
/* Racily smudged entry? */
if (!ce->ce_stat_data.sd_size) {
- if (!is_empty_blob_sha1(ce->sha1))
+ if (!is_empty_blob_sha1(ce->oid.hash))
changed |= DATA_CHANGED;
}
unsigned char sha1[20];
if (write_sha1_file("", 0, blob_type, sha1))
die("cannot create an empty blob in the object database");
- hashcpy(ce->sha1, sha1);
+ hashcpy(ce->oid.hash, sha1);
}
int add_to_index(struct index_state *istate, const char *path, struct stat *st, int flags)
return 0;
}
if (!intent_only) {
- if (index_path(ce->sha1, path, st, HASH_WRITE_OBJECT)) {
+ if (index_path(ce->oid.hash, path, st, HASH_WRITE_OBJECT)) {
free(ce);
return error("unable to index file %s", path);
}
/* It was suspected to be racily clean, but it turns out to be Ok */
was_same = (alias &&
!ce_stage(alias) &&
- !hashcmp(alias->sha1, ce->sha1) &&
+ !oidcmp(&alias->oid, &ce->oid) &&
ce->ce_mode == alias->ce_mode);
if (pretend)
size = cache_entry_size(len);
ce = xcalloc(1, size);
- hashcpy(ce->sha1, sha1);
+ hashcpy(ce->oid.hash, sha1);
memcpy(ce->name, path, len);
ce->ce_flags = create_ce_flags(stage);
ce->ce_namelen = len;
ce->ce_flags = flags & ~CE_NAMEMASK;
ce->ce_namelen = len;
ce->index = 0;
- hashcpy(ce->sha1, ondisk->sha1);
+ hashcpy(ce->oid.hash, ondisk->sha1);
memcpy(ce->name, name, len);
ce->name[len] = '\0';
return ce;
ondisk->uid = htonl(ce->ce_stat_data.sd_uid);
ondisk->gid = htonl(ce->ce_stat_data.sd_gid);
ondisk->size = htonl(ce->ce_stat_data.sd_size);
- hashcpy(ondisk->sha1, ce->sha1);
+ hashcpy(ondisk->sha1, ce->oid.hash);
flags = ce->ce_flags & ~CE_NAMEMASK;
flags |= (ce_namelen(ce) >= CE_NAMEMASK ? CE_NAMEMASK : ce_namelen(ce));
continue;
if (!ce_uptodate(ce) && is_racy_timestamp(istate, ce))
ce_smudge_racily_clean_entry(ce);
- if (is_null_sha1(ce->sha1)) {
+ if (is_null_oid(&ce->oid)) {
static const char msg[] = "cache entry has null sha1: %s";
static int allow = -1;
}
if (pos < 0)
return NULL;
- data = read_sha1_file(istate->cache[pos]->sha1, &type, &sz);
+ data = read_sha1_file(istate->cache[pos]->oid.hash, &type, &sz);
if (!data || type != OBJ_BLOB) {
free(data);
return NULL;
void ref_array_sort(struct ref_sorting *sorting, struct ref_array *array)
{
ref_sorting = sorting;
- qsort(array->items, array->nr, sizeof(struct ref_array_item *), compare_refs);
+ QSORT(array->items, array->nr, compare_refs);
}
static void append_literal(const char *cp, const char *ep, struct ref_formatting_state *state)
#include "object.h"
#include "tag.h"
+/*
+ * List of all available backends
+ */
+static struct ref_storage_be *refs_backends = &refs_be_files;
+
+static struct ref_storage_be *find_ref_storage_backend(const char *name)
+{
+ struct ref_storage_be *be;
+ for (be = refs_backends; be; be = be->next)
+ if (!strcmp(be->name, name))
+ return be;
+ return NULL;
+}
+
+int ref_storage_backend_exists(const char *name)
+{
+ return find_ref_storage_backend(name) != NULL;
+}
+
/*
* How to handle various characters in refnames:
* 0: An acceptable character for refs
int dwim_ref(const char *str, int len, unsigned char *sha1, char **ref)
{
char *last_branch = substitute_branch_name(&str, &len);
+ int refs_found = expand_ref(str, len, sha1, ref);
+ free(last_branch);
+ return refs_found;
+}
+
+int expand_ref(const char *str, int len, unsigned char *sha1, char **ref)
+{
const char **p, *r;
int refs_found = 0;
warning("ignoring broken ref %s.", fullref);
}
}
- free(last_branch);
return refs_found;
}
flags, NULL, err);
}
+int update_ref_oid(const char *msg, const char *refname,
+ const struct object_id *new_oid, const struct object_id *old_oid,
+ unsigned int flags, enum action_on_err onerr)
+{
+ return update_ref(msg, refname, new_oid ? new_oid->hash : NULL,
+ old_oid ? old_oid->hash : NULL, flags, onerr);
+}
+
int update_ref(const char *msg, const char *refname,
const unsigned char *new_sha1, const unsigned char *old_sha1,
unsigned int flags, enum action_on_err onerr)
return NULL;
}
-int rename_ref_available(const char *oldname, const char *newname)
+int rename_ref_available(const char *old_refname, const char *new_refname)
{
struct string_list skip = STRING_LIST_INIT_NODUP;
struct strbuf err = STRBUF_INIT;
- int ret;
+ int ok;
- string_list_insert(&skip, oldname);
- ret = !verify_refname_available(newname, NULL, &skip, &err);
- if (!ret)
+ string_list_insert(&skip, old_refname);
+ ok = !verify_refname_available(new_refname, NULL, &skip, &err);
+ if (!ok)
error("%s", err.buf);
string_list_clear(&skip, 0);
strbuf_release(&err);
- return ret;
+ return ok;
}
int head_ref_submodule(const char *submodule, each_ref_fn fn, void *cb_data)
static int do_for_each_ref(const char *submodule, const char *prefix,
each_ref_fn fn, int trim, int flags, void *cb_data)
{
+ struct ref_store *refs = get_ref_store(submodule);
struct ref_iterator *iter;
- iter = files_ref_iterator_begin(submodule, prefix, flags);
+ if (!refs)
+ return 0;
+
+ iter = refs->be->iterator_begin(refs, prefix, flags);
iter = prefix_ref_iterator_begin(iter, prefix, trim);
return do_for_each_ref_iterator(iter, fn, cb_data);
}
/* This function needs to return a meaningful errno on failure */
-const char *resolve_ref_unsafe(const char *refname, int resolve_flags,
- unsigned char *sha1, int *flags)
+static const char *resolve_ref_recursively(struct ref_store *refs,
+ const char *refname,
+ int resolve_flags,
+ unsigned char *sha1, int *flags)
{
static struct strbuf sb_refname = STRBUF_INIT;
int unused_flags;
for (symref_count = 0; symref_count < SYMREF_MAXDEPTH; symref_count++) {
unsigned int read_flags = 0;
- if (read_raw_ref(refname, sha1, &sb_refname, &read_flags)) {
+ if (refs->be->read_raw_ref(refs, refname,
+ sha1, &sb_refname, &read_flags)) {
*flags |= read_flags;
if (errno != ENOENT || (resolve_flags & RESOLVE_REF_READING))
return NULL;
errno = ELOOP;
return NULL;
}
+
+/* backend functions */
+int refs_init_db(struct strbuf *err)
+{
+ struct ref_store *refs = get_ref_store(NULL);
+
+ return refs->be->init_db(refs, err);
+}
+
+const char *resolve_ref_unsafe(const char *refname, int resolve_flags,
+ unsigned char *sha1, int *flags)
+{
+ return resolve_ref_recursively(get_ref_store(NULL), refname,
+ resolve_flags, sha1, flags);
+}
+
+int resolve_gitlink_ref(const char *submodule, const char *refname,
+ unsigned char *sha1)
+{
+ size_t len = strlen(submodule);
+ struct ref_store *refs;
+ int flags;
+
+ while (len && submodule[len - 1] == '/')
+ len--;
+
+ if (!len)
+ return -1;
+
+ if (submodule[len]) {
+ /* We need to strip off one or more trailing slashes */
+ char *stripped = xmemdupz(submodule, len);
+
+ refs = get_ref_store(stripped);
+ free(stripped);
+ } else {
+ refs = get_ref_store(submodule);
+ }
+
+ if (!refs)
+ return -1;
+
+ if (!resolve_ref_recursively(refs, refname, 0, sha1, &flags) ||
+ is_null_sha1(sha1))
+ return -1;
+ return 0;
+}
+
+/* A pointer to the ref_store for the main repository: */
+static struct ref_store *main_ref_store;
+
+/* A linked list of ref_stores for submodules: */
+static struct ref_store *submodule_ref_stores;
+
+void base_ref_store_init(struct ref_store *refs,
+ const struct ref_storage_be *be,
+ const char *submodule)
+{
+ refs->be = be;
+ if (!submodule) {
+ if (main_ref_store)
+ die("BUG: main_ref_store initialized twice");
+
+ refs->submodule = "";
+ refs->next = NULL;
+ main_ref_store = refs;
+ } else {
+ if (lookup_ref_store(submodule))
+ die("BUG: ref_store for submodule '%s' initialized twice",
+ submodule);
+
+ refs->submodule = xstrdup(submodule);
+ refs->next = submodule_ref_stores;
+ submodule_ref_stores = refs;
+ }
+}
+
+struct ref_store *ref_store_init(const char *submodule)
+{
+ const char *be_name = "files";
+ struct ref_storage_be *be = find_ref_storage_backend(be_name);
+
+ if (!be)
+ die("BUG: reference backend %s is unknown", be_name);
+
+ if (!submodule || !*submodule)
+ return be->init(NULL);
+ else
+ return be->init(submodule);
+}
+
+struct ref_store *lookup_ref_store(const char *submodule)
+{
+ struct ref_store *refs;
+
+ if (!submodule || !*submodule)
+ return main_ref_store;
+
+ for (refs = submodule_ref_stores; refs; refs = refs->next) {
+ if (!strcmp(submodule, refs->submodule))
+ return refs;
+ }
+
+ return NULL;
+}
+
+struct ref_store *get_ref_store(const char *submodule)
+{
+ struct ref_store *refs;
+
+ if (!submodule || !*submodule) {
+ refs = lookup_ref_store(NULL);
+
+ if (!refs)
+ refs = ref_store_init(NULL);
+ } else {
+ refs = lookup_ref_store(submodule);
+
+ if (!refs) {
+ struct strbuf submodule_sb = STRBUF_INIT;
+
+ strbuf_addstr(&submodule_sb, submodule);
+ if (is_nonbare_repository_dir(&submodule_sb))
+ refs = ref_store_init(submodule);
+ strbuf_release(&submodule_sb);
+ }
+ }
+
+ return refs;
+}
+
+void assert_main_repository(struct ref_store *refs, const char *caller)
+{
+ if (*refs->submodule)
+ die("BUG: %s called for a submodule", caller);
+}
+
+/* backend functions */
+int pack_refs(unsigned int flags)
+{
+ struct ref_store *refs = get_ref_store(NULL);
+
+ return refs->be->pack_refs(refs, flags);
+}
+
+int peel_ref(const char *refname, unsigned char *sha1)
+{
+ struct ref_store *refs = get_ref_store(NULL);
+
+ return refs->be->peel_ref(refs, refname, sha1);
+}
+
+int create_symref(const char *ref_target, const char *refs_heads_master,
+ const char *logmsg)
+{
+ struct ref_store *refs = get_ref_store(NULL);
+
+ return refs->be->create_symref(refs, ref_target, refs_heads_master,
+ logmsg);
+}
+
+int ref_transaction_commit(struct ref_transaction *transaction,
+ struct strbuf *err)
+{
+ struct ref_store *refs = get_ref_store(NULL);
+
+ return refs->be->transaction_commit(refs, transaction, err);
+}
+
+int verify_refname_available(const char *refname,
+ const struct string_list *extra,
+ const struct string_list *skip,
+ struct strbuf *err)
+{
+ struct ref_store *refs = get_ref_store(NULL);
+
+ return refs->be->verify_refname_available(refs, refname, extra, skip, err);
+}
+
+int for_each_reflog(each_ref_fn fn, void *cb_data)
+{
+ struct ref_store *refs = get_ref_store(NULL);
+ struct ref_iterator *iter;
+
+ iter = refs->be->reflog_iterator_begin(refs);
+
+ return do_for_each_ref_iterator(iter, fn, cb_data);
+}
+
+int for_each_reflog_ent_reverse(const char *refname, each_reflog_ent_fn fn,
+ void *cb_data)
+{
+ struct ref_store *refs = get_ref_store(NULL);
+
+ return refs->be->for_each_reflog_ent_reverse(refs, refname,
+ fn, cb_data);
+}
+
+int for_each_reflog_ent(const char *refname, each_reflog_ent_fn fn,
+ void *cb_data)
+{
+ struct ref_store *refs = get_ref_store(NULL);
+
+ return refs->be->for_each_reflog_ent(refs, refname, fn, cb_data);
+}
+
+int reflog_exists(const char *refname)
+{
+ struct ref_store *refs = get_ref_store(NULL);
+
+ return refs->be->reflog_exists(refs, refname);
+}
+
+int safe_create_reflog(const char *refname, int force_create,
+ struct strbuf *err)
+{
+ struct ref_store *refs = get_ref_store(NULL);
+
+ return refs->be->create_reflog(refs, refname, force_create, err);
+}
+
+int delete_reflog(const char *refname)
+{
+ struct ref_store *refs = get_ref_store(NULL);
+
+ return refs->be->delete_reflog(refs, refname);
+}
+
+int reflog_expire(const char *refname, const unsigned char *sha1,
+ unsigned int flags,
+ reflog_expiry_prepare_fn prepare_fn,
+ reflog_expiry_should_prune_fn should_prune_fn,
+ reflog_expiry_cleanup_fn cleanup_fn,
+ void *policy_cb_data)
+{
+ struct ref_store *refs = get_ref_store(NULL);
+
+ return refs->be->reflog_expire(refs, refname, sha1, flags,
+ prepare_fn, should_prune_fn,
+ cleanup_fn, policy_cb_data);
+}
+
+int initial_ref_transaction_commit(struct ref_transaction *transaction,
+ struct strbuf *err)
+{
+ struct ref_store *refs = get_ref_store(NULL);
+
+ return refs->be->initial_transaction_commit(refs, transaction, err);
+}
+
+int delete_refs(struct string_list *refnames, unsigned int flags)
+{
+ struct ref_store *refs = get_ref_store(NULL);
+
+ return refs->be->delete_refs(refs, refnames, flags);
+}
+
+int rename_ref(const char *oldref, const char *newref, const char *logmsg)
+{
+ struct ref_store *refs = get_ref_store(NULL);
+
+ return refs->be->rename_ref(refs, oldref, newref, logmsg);
+}
int is_branch(const char *refname);
+extern int refs_init_db(struct strbuf *err);
+
/*
* If refname is a non-symbolic reference that refers to a tag object,
* and the tag can be (recursively) dereferenced to a non-tag object,
int peel_ref(const char *refname, unsigned char *sha1);
/**
- * Resolve refname in the nested "gitlink" repository that is located
- * at path. If the resolution is successful, return 0 and set sha1 to
- * the name of the object; otherwise, return a non-zero value.
+ * Resolve refname in the nested "gitlink" repository in the specified
+ * submodule (which must be non-NULL). If the resolution is
+ * successful, return 0 and set sha1 to the name of the object;
+ * otherwise, return a non-zero value.
*/
-int resolve_gitlink_ref(const char *path, const char *refname,
+int resolve_gitlink_ref(const char *submodule, const char *refname,
unsigned char *sha1);
/*
*/
int refname_match(const char *abbrev_name, const char *full_name);
+int expand_ref(const char *str, int len, unsigned char *sha1, char **ref);
int dwim_ref(const char *str, int len, unsigned char *sha1, char **ref);
int dwim_log(const char *str, int len, unsigned char *sha1, char **ref);
int update_ref(const char *msg, const char *refname,
const unsigned char *new_sha1, const unsigned char *old_sha1,
unsigned int flags, enum action_on_err onerr);
+int update_ref_oid(const char *msg, const char *refname,
+ const struct object_id *new_oid, const struct object_id *old_oid,
+ unsigned int flags, enum action_on_err onerr);
int parse_hide_refs_config(const char *var, const char *value, const char *);
reflog_expiry_cleanup_fn cleanup_fn,
void *policy_cb_data);
+int ref_storage_backend_exists(const char *name);
+
#endif /* REFS_H */
struct object_id peeled;
};
-struct ref_cache;
+struct files_ref_store;
/*
* Information used (along with the information in ref_entry) to
*/
int sorted;
- /* A pointer to the ref_cache that contains this ref_dir. */
- struct ref_cache *ref_cache;
+ /* A pointer to the files_ref_store that contains this ref_dir. */
+ struct files_ref_store *ref_store;
struct ref_entry **entries;
};
static void read_loose_refs(const char *dirname, struct ref_dir *dir);
static int search_ref_dir(struct ref_dir *dir, const char *refname, size_t len);
-static struct ref_entry *create_dir_entry(struct ref_cache *ref_cache,
+static struct ref_entry *create_dir_entry(struct files_ref_store *ref_store,
const char *dirname, size_t len,
int incomplete);
static void add_entry_to_dir(struct ref_dir *dir, struct ref_entry *entry);
int pos = search_ref_dir(dir, "refs/bisect/", 12);
if (pos < 0) {
struct ref_entry *child_entry;
- child_entry = create_dir_entry(dir->ref_cache,
+ child_entry = create_dir_entry(dir->ref_store,
"refs/bisect/",
12, 1);
add_entry_to_dir(dir, child_entry);
* dirname is the name of the directory with a trailing slash (e.g.,
* "refs/heads/") or "" for the top-level directory.
*/
-static struct ref_entry *create_dir_entry(struct ref_cache *ref_cache,
+static struct ref_entry *create_dir_entry(struct files_ref_store *ref_store,
const char *dirname, size_t len,
int incomplete)
{
struct ref_entry *direntry;
FLEX_ALLOC_MEM(direntry, name, dirname, len);
- direntry->u.subdir.ref_cache = ref_cache;
+ direntry->u.subdir.ref_store = ref_store;
direntry->flag = REF_DIR | (incomplete ? REF_INCOMPLETE : 0);
return direntry;
}
* therefore, create an empty record for it but mark
* the record complete.
*/
- entry = create_dir_entry(dir->ref_cache, subdirname, len, 0);
+ entry = create_dir_entry(dir->ref_store, subdirname, len, 0);
add_entry_to_dir(dir, entry);
} else {
entry = dir->entries[entry_index];
if (dir->sorted == dir->nr)
return;
- qsort(dir->entries, dir->nr, sizeof(*dir->entries), ref_entry_cmp);
+ QSORT(dir->entries, dir->nr, ref_entry_cmp);
/* Remove any duplicates: */
for (i = 0, j = 0; j < dir->nr; j++) {
/*
* Count of references to the data structure in this instance,
- * including the pointer from ref_cache::packed if any. The
- * data will not be freed as long as the reference count is
- * nonzero.
+ * including the pointer from files_ref_store::packed if any.
+ * The data will not be freed as long as the reference count
+ * is nonzero.
*/
unsigned int referrers;
* Future: need to be in "struct repository"
* when doing a full libification.
*/
-static struct ref_cache {
- struct ref_cache *next;
+struct files_ref_store {
+ struct ref_store base;
struct ref_entry *loose;
struct packed_ref_cache *packed;
- /*
- * The submodule name, or "" for the main repo. We allocate
- * length 1 rather than FLEX_ARRAY so that the main ref_cache
- * is initialized correctly.
- */
- char name[1];
-} ref_cache, *submodule_ref_caches;
+};
/* Lock used for the main packed-refs file: */
static struct lock_file packlock;
}
}
-static void clear_packed_ref_cache(struct ref_cache *refs)
+static void clear_packed_ref_cache(struct files_ref_store *refs)
{
if (refs->packed) {
struct packed_ref_cache *packed_refs = refs->packed;
}
}
-static void clear_loose_ref_cache(struct ref_cache *refs)
+static void clear_loose_ref_cache(struct files_ref_store *refs)
{
if (refs->loose) {
free_ref_entry(refs->loose);
* Create a new submodule ref cache and add it to the internal
* set of caches.
*/
-static struct ref_cache *create_ref_cache(const char *submodule)
-{
- struct ref_cache *refs;
- if (!submodule)
- submodule = "";
- FLEX_ALLOC_STR(refs, name, submodule);
- refs->next = submodule_ref_caches;
- submodule_ref_caches = refs;
- return refs;
-}
-
-static struct ref_cache *lookup_ref_cache(const char *submodule)
+static struct ref_store *files_ref_store_create(const char *submodule)
{
- struct ref_cache *refs;
+ struct files_ref_store *refs = xcalloc(1, sizeof(*refs));
+ struct ref_store *ref_store = (struct ref_store *)refs;
- if (!submodule || !*submodule)
- return &ref_cache;
+ base_ref_store_init(ref_store, &refs_be_files, submodule);
- for (refs = submodule_ref_caches; refs; refs = refs->next)
- if (!strcmp(submodule, refs->name))
- return refs;
- return NULL;
+ return ref_store;
}
/*
- * Return a pointer to a ref_cache for the specified submodule. For
- * the main repository, use submodule==NULL; such a call cannot fail.
- * For a submodule, the submodule must exist and be a nonbare
- * repository, otherwise return NULL.
- *
- * The returned structure will be allocated and initialized but not
- * necessarily populated; it should not be freed.
+ * Downcast ref_store to files_ref_store. Die if ref_store is not a
+ * files_ref_store. If submodule_allowed is not true, then also die if
+ * files_ref_store is for a submodule (i.e., not for the main
+ * repository). caller is used in any necessary error messages.
*/
-static struct ref_cache *get_ref_cache(const char *submodule)
+static struct files_ref_store *files_downcast(
+ struct ref_store *ref_store, int submodule_allowed,
+ const char *caller)
{
- struct ref_cache *refs = lookup_ref_cache(submodule);
-
- if (!refs) {
- struct strbuf submodule_sb = STRBUF_INIT;
+ if (ref_store->be != &refs_be_files)
+ die("BUG: ref_store is type \"%s\" not \"files\" in %s",
+ ref_store->be->name, caller);
- strbuf_addstr(&submodule_sb, submodule);
- if (is_nonbare_repository_dir(&submodule_sb))
- refs = create_ref_cache(submodule);
- strbuf_release(&submodule_sb);
- }
+ if (!submodule_allowed)
+ assert_main_repository(ref_store, caller);
- return refs;
+ return (struct files_ref_store *)ref_store;
}
/* The length of a peeled reference line in packed-refs, including EOL: */
}
/*
- * Get the packed_ref_cache for the specified ref_cache, creating it
- * if necessary.
+ * Get the packed_ref_cache for the specified files_ref_store,
+ * creating it if necessary.
*/
-static struct packed_ref_cache *get_packed_ref_cache(struct ref_cache *refs)
+static struct packed_ref_cache *get_packed_ref_cache(struct files_ref_store *refs)
{
char *packed_refs_file;
- if (*refs->name)
- packed_refs_file = git_pathdup_submodule(refs->name, "packed-refs");
+ if (*refs->base.submodule)
+ packed_refs_file = git_pathdup_submodule(refs->base.submodule,
+ "packed-refs");
else
packed_refs_file = git_pathdup("packed-refs");
return get_ref_dir(packed_ref_cache->root);
}
-static struct ref_dir *get_packed_refs(struct ref_cache *refs)
+static struct ref_dir *get_packed_refs(struct files_ref_store *refs)
{
return get_packed_ref_dir(get_packed_ref_cache(refs));
}
* lock_packed_refs()). To actually write the packed-refs file, call
* commit_packed_refs().
*/
-static void add_packed_ref(const char *refname, const unsigned char *sha1)
+static void add_packed_ref(struct files_ref_store *refs,
+ const char *refname, const unsigned char *sha1)
{
- struct packed_ref_cache *packed_ref_cache =
- get_packed_ref_cache(&ref_cache);
+ struct packed_ref_cache *packed_ref_cache = get_packed_ref_cache(refs);
if (!packed_ref_cache->lock)
die("internal error: packed refs not locked");
*/
static void read_loose_refs(const char *dirname, struct ref_dir *dir)
{
- struct ref_cache *refs = dir->ref_cache;
+ struct files_ref_store *refs = dir->ref_store;
DIR *d;
struct dirent *de;
int dirnamelen = strlen(dirname);
struct strbuf refname;
struct strbuf path = STRBUF_INIT;
size_t path_baselen;
+ int err = 0;
- if (*refs->name)
- strbuf_git_path_submodule(&path, refs->name, "%s", dirname);
+ if (*refs->base.submodule)
+ err = strbuf_git_path_submodule(&path, refs->base.submodule, "%s", dirname);
else
strbuf_git_path(&path, "%s", dirname);
path_baselen = path.len;
+ if (err) {
+ strbuf_release(&path);
+ return;
+ }
+
d = opendir(path.buf);
if (!d) {
strbuf_release(&path);
} else {
int read_ok;
- if (*refs->name) {
+ if (*refs->base.submodule) {
hashclr(sha1);
flag = 0;
- read_ok = !resolve_gitlink_ref(refs->name,
+ read_ok = !resolve_gitlink_ref(refs->base.submodule,
refname.buf, sha1);
} else {
read_ok = !read_ref_full(refname.buf,
closedir(d);
}
-static struct ref_dir *get_loose_refs(struct ref_cache *refs)
+static struct ref_dir *get_loose_refs(struct files_ref_store *refs)
{
if (!refs->loose) {
/*
return get_ref_dir(refs->loose);
}
-#define MAXREFLEN (1024)
-
-/*
- * Called by resolve_gitlink_ref_recursive() after it failed to read
- * from the loose refs in ref_cache refs. Find <refname> in the
- * packed-refs file for the submodule.
- */
-static int resolve_gitlink_packed_ref(struct ref_cache *refs,
- const char *refname, unsigned char *sha1)
-{
- struct ref_entry *ref;
- struct ref_dir *dir = get_packed_refs(refs);
-
- ref = find_ref(dir, refname);
- if (ref == NULL)
- return -1;
-
- hashcpy(sha1, ref->u.value.oid.hash);
- return 0;
-}
-
-static int resolve_gitlink_ref_recursive(struct ref_cache *refs,
- const char *refname, unsigned char *sha1,
- int recursion)
-{
- int fd, len;
- char buffer[128], *p;
- char *path;
-
- if (recursion > SYMREF_MAXDEPTH || strlen(refname) > MAXREFLEN)
- return -1;
- path = *refs->name
- ? git_pathdup_submodule(refs->name, "%s", refname)
- : git_pathdup("%s", refname);
- fd = open(path, O_RDONLY);
- free(path);
- if (fd < 0)
- return resolve_gitlink_packed_ref(refs, refname, sha1);
-
- len = read(fd, buffer, sizeof(buffer)-1);
- close(fd);
- if (len < 0)
- return -1;
- while (len && isspace(buffer[len-1]))
- len--;
- buffer[len] = 0;
-
- /* Was it a detached head or an old-fashioned symlink? */
- if (!get_sha1_hex(buffer, sha1))
- return 0;
-
- /* Symref? */
- if (strncmp(buffer, "ref:", 4))
- return -1;
- p = buffer + 4;
- while (isspace(*p))
- p++;
-
- return resolve_gitlink_ref_recursive(refs, p, sha1, recursion+1);
-}
-
-int resolve_gitlink_ref(const char *path, const char *refname, unsigned char *sha1)
-{
- int len = strlen(path), retval;
- struct strbuf submodule = STRBUF_INIT;
- struct ref_cache *refs;
-
- while (len && path[len-1] == '/')
- len--;
- if (!len)
- return -1;
-
- strbuf_add(&submodule, path, len);
- refs = get_ref_cache(submodule.buf);
- if (!refs) {
- strbuf_release(&submodule);
- return -1;
- }
- strbuf_release(&submodule);
-
- retval = resolve_gitlink_ref_recursive(refs, refname, sha1, 0);
- return retval;
-}
-
/*
* Return the ref_entry for the given refname from the packed
* references. If it does not exist, return NULL.
*/
-static struct ref_entry *get_packed_ref(const char *refname)
+static struct ref_entry *get_packed_ref(struct files_ref_store *refs,
+ const char *refname)
{
- return find_ref(get_packed_refs(&ref_cache), refname);
+ return find_ref(get_packed_refs(refs), refname);
}
/*
* A loose ref file doesn't exist; check for a packed ref.
*/
-static int resolve_missing_loose_ref(const char *refname,
- unsigned char *sha1,
- unsigned int *flags)
+static int resolve_packed_ref(struct files_ref_store *refs,
+ const char *refname,
+ unsigned char *sha1, unsigned int *flags)
{
struct ref_entry *entry;
* The loose reference file does not exist; check for a packed
* reference.
*/
- entry = get_packed_ref(refname);
+ entry = get_packed_ref(refs, refname);
if (entry) {
hashcpy(sha1, entry->u.value.oid.hash);
*flags |= REF_ISPACKED;
return -1;
}
-int read_raw_ref(const char *refname, unsigned char *sha1,
- struct strbuf *referent, unsigned int *type)
+static int files_read_raw_ref(struct ref_store *ref_store,
+ const char *refname, unsigned char *sha1,
+ struct strbuf *referent, unsigned int *type)
{
+ struct files_ref_store *refs =
+ files_downcast(ref_store, 1, "read_raw_ref");
struct strbuf sb_contents = STRBUF_INIT;
struct strbuf sb_path = STRBUF_INIT;
const char *path;
*type = 0;
strbuf_reset(&sb_path);
- strbuf_git_path(&sb_path, "%s", refname);
+
+ if (*refs->base.submodule)
+ strbuf_git_path_submodule(&sb_path, refs->base.submodule, "%s", refname);
+ else
+ strbuf_git_path(&sb_path, "%s", refname);
+
path = sb_path.buf;
stat_ref:
if (lstat(path, &st) < 0) {
if (errno != ENOENT)
goto out;
- if (resolve_missing_loose_ref(refname, sha1, type)) {
+ if (resolve_packed_ref(refs, refname, sha1, type)) {
errno = ENOENT;
goto out;
}
* ref is supposed to be, there could still be a
* packed ref:
*/
- if (resolve_missing_loose_ref(refname, sha1, type)) {
+ if (resolve_packed_ref(refs, refname, sha1, type)) {
errno = EISDIR;
goto out;
}
* avoided, namely if we were successfully able to read the ref
* - Generate informative error messages in the case of failure
*/
-static int lock_raw_ref(const char *refname, int mustexist,
+static int lock_raw_ref(struct files_ref_store *refs,
+ const char *refname, int mustexist,
const struct string_list *extras,
const struct string_list *skip,
struct ref_lock **lock_p,
int ret = TRANSACTION_GENERIC_ERROR;
assert(err);
+ assert_main_repository(&refs->base, "lock_raw_ref");
+
*type = 0;
/* First lock the file so it can't change out from under us. */
* fear that its value will change.
*/
- if (read_raw_ref(refname, lock->old_oid.hash, referent, type)) {
+ if (files_read_raw_ref(&refs->base, refname,
+ lock->old_oid.hash, referent, type)) {
if (errno == ENOENT) {
if (mustexist) {
/* Garden variety missing reference. */
REMOVE_DIR_EMPTY_ONLY)) {
if (verify_refname_available_dir(
refname, extras, skip,
- get_loose_refs(&ref_cache),
+ get_loose_refs(refs),
err)) {
/*
* The error message set by
*/
if (verify_refname_available_dir(
refname, extras, skip,
- get_packed_refs(&ref_cache),
+ get_packed_refs(refs),
err)) {
goto error_return;
}
return status;
}
-int peel_ref(const char *refname, unsigned char *sha1)
+static int files_peel_ref(struct ref_store *ref_store,
+ const char *refname, unsigned char *sha1)
{
+ struct files_ref_store *refs = files_downcast(ref_store, 0, "peel_ref");
int flag;
unsigned char base[20];
* have REF_KNOWS_PEELED.
*/
if (flag & REF_ISPACKED) {
- struct ref_entry *r = get_packed_ref(refname);
+ struct ref_entry *r = get_packed_ref(refs, refname);
if (r) {
if (peel_entry(r, 0))
return -1;
int ok;
while ((ok = ref_iterator_advance(iter->iter0)) == ITER_OK) {
+ if (iter->flags & DO_FOR_EACH_PER_WORKTREE_ONLY &&
+ ref_type(iter->iter0->refname) != REF_TYPE_PER_WORKTREE)
+ continue;
+
if (!(iter->flags & DO_FOR_EACH_INCLUDE_BROKEN) &&
!ref_resolves_to_object(iter->iter0->refname,
iter->iter0->oid,
files_ref_iterator_abort
};
-struct ref_iterator *files_ref_iterator_begin(
- const char *submodule,
+static struct ref_iterator *files_ref_iterator_begin(
+ struct ref_store *ref_store,
const char *prefix, unsigned int flags)
{
- struct ref_cache *refs = get_ref_cache(submodule);
+ struct files_ref_store *refs =
+ files_downcast(ref_store, 1, "ref_iterator_begin");
struct ref_dir *loose_dir, *packed_dir;
struct ref_iterator *loose_iter, *packed_iter;
struct files_ref_iterator *iter;
* Locks a ref returning the lock on success and NULL on failure.
* On failure errno is set to something meaningful.
*/
-static struct ref_lock *lock_ref_sha1_basic(const char *refname,
+static struct ref_lock *lock_ref_sha1_basic(struct files_ref_store *refs,
+ const char *refname,
const unsigned char *old_sha1,
const struct string_list *extras,
const struct string_list *skip,
int attempts_remaining = 3;
int resolved;
+ assert_main_repository(&refs->base, "lock_ref_sha1_basic");
assert(err);
lock = xcalloc(1, sizeof(struct ref_lock));
*/
if (remove_empty_directories(&ref_file)) {
last_errno = errno;
- if (!verify_refname_available_dir(refname, extras, skip,
- get_loose_refs(&ref_cache), err))
+ if (!verify_refname_available_dir(
+ refname, extras, skip,
+ get_loose_refs(refs), err))
strbuf_addf(err, "there are still refs under '%s'",
refname);
goto error_return;
if (!resolved) {
last_errno = errno;
if (last_errno != ENOTDIR ||
- !verify_refname_available_dir(refname, extras, skip,
- get_loose_refs(&ref_cache), err))
+ !verify_refname_available_dir(
+ refname, extras, skip,
+ get_loose_refs(refs), err))
strbuf_addf(err, "unable to resolve reference '%s': %s",
refname, strerror(last_errno));
*/
if (is_null_oid(&lock->old_oid) &&
verify_refname_available_dir(refname, extras, skip,
- get_packed_refs(&ref_cache), err)) {
+ get_packed_refs(refs),
+ err)) {
last_errno = ENOTDIR;
goto error_return;
}
* hold_lock_file_for_update(). Return 0 on success. On errors, set
* errno appropriately and return a nonzero value.
*/
-static int lock_packed_refs(int flags)
+static int lock_packed_refs(struct files_ref_store *refs, int flags)
{
static int timeout_configured = 0;
static int timeout_value = 1000;
-
struct packed_ref_cache *packed_ref_cache;
+ assert_main_repository(&refs->base, "lock_packed_refs");
+
if (!timeout_configured) {
git_config_get_int("core.packedrefstimeout", &timeout_value);
timeout_configured = 1;
* this will automatically invalidate the cache and re-read
* the packed-refs file.
*/
- packed_ref_cache = get_packed_ref_cache(&ref_cache);
+ packed_ref_cache = get_packed_ref_cache(refs);
packed_ref_cache->lock = &packlock;
/* Increment the reference count to prevent it from being freed: */
acquire_packed_ref_cache(packed_ref_cache);
* lock_packed_refs()). Return zero on success. On errors, set errno
* and return a nonzero value
*/
-static int commit_packed_refs(void)
+static int commit_packed_refs(struct files_ref_store *refs)
{
struct packed_ref_cache *packed_ref_cache =
- get_packed_ref_cache(&ref_cache);
+ get_packed_ref_cache(refs);
int error = 0;
int save_errno = 0;
FILE *out;
+ assert_main_repository(&refs->base, "commit_packed_refs");
+
if (!packed_ref_cache->lock)
die("internal error: packed-refs not locked");
* in-memory packed reference cache. (The packed-refs file will be
* read anew if it is needed again after this function is called.)
*/
-static void rollback_packed_refs(void)
+static void rollback_packed_refs(struct files_ref_store *refs)
{
struct packed_ref_cache *packed_ref_cache =
- get_packed_ref_cache(&ref_cache);
+ get_packed_ref_cache(refs);
+
+ assert_main_repository(&refs->base, "rollback_packed_refs");
if (!packed_ref_cache->lock)
die("internal error: packed-refs not locked");
rollback_lock_file(packed_ref_cache->lock);
packed_ref_cache->lock = NULL;
release_packed_ref_cache(packed_ref_cache);
- clear_packed_ref_cache(&ref_cache);
+ clear_packed_ref_cache(refs);
}
struct ref_to_prune {
}
}
-int pack_refs(unsigned int flags)
+static int files_pack_refs(struct ref_store *ref_store, unsigned int flags)
{
+ struct files_ref_store *refs =
+ files_downcast(ref_store, 0, "pack_refs");
struct pack_refs_cb_data cbdata;
memset(&cbdata, 0, sizeof(cbdata));
cbdata.flags = flags;
- lock_packed_refs(LOCK_DIE_ON_ERROR);
- cbdata.packed_refs = get_packed_refs(&ref_cache);
+ lock_packed_refs(refs, LOCK_DIE_ON_ERROR);
+ cbdata.packed_refs = get_packed_refs(refs);
- do_for_each_entry_in_dir(get_loose_refs(&ref_cache), 0,
+ do_for_each_entry_in_dir(get_loose_refs(refs), 0,
pack_if_possible_fn, &cbdata);
- if (commit_packed_refs())
+ if (commit_packed_refs(refs))
die_errno("unable to overwrite old ref-pack file");
prune_refs(cbdata.ref_to_prune);
*
* The refs in 'refnames' needn't be sorted. `err` must not be NULL.
*/
-static int repack_without_refs(struct string_list *refnames, struct strbuf *err)
+static int repack_without_refs(struct files_ref_store *refs,
+ struct string_list *refnames, struct strbuf *err)
{
struct ref_dir *packed;
struct string_list_item *refname;
int ret, needs_repacking = 0, removed = 0;
+ assert_main_repository(&refs->base, "repack_without_refs");
assert(err);
/* Look for a packed ref */
for_each_string_list_item(refname, refnames) {
- if (get_packed_ref(refname->string)) {
+ if (get_packed_ref(refs, refname->string)) {
needs_repacking = 1;
break;
}
if (!needs_repacking)
return 0; /* no refname exists in packed refs */
- if (lock_packed_refs(0)) {
+ if (lock_packed_refs(refs, 0)) {
unable_to_lock_message(git_path("packed-refs"), errno, err);
return -1;
}
- packed = get_packed_refs(&ref_cache);
+ packed = get_packed_refs(refs);
/* Remove refnames from the cache */
for_each_string_list_item(refname, refnames)
* All packed entries disappeared while we were
* acquiring the lock.
*/
- rollback_packed_refs();
+ rollback_packed_refs(refs);
return 0;
}
/* Write what remains */
- ret = commit_packed_refs();
+ ret = commit_packed_refs(refs);
if (ret)
strbuf_addf(err, "unable to overwrite old ref-pack file: %s",
strerror(errno));
return 0;
}
-int delete_refs(struct string_list *refnames, unsigned int flags)
+static int files_delete_refs(struct ref_store *ref_store,
+ struct string_list *refnames, unsigned int flags)
{
+ struct files_ref_store *refs =
+ files_downcast(ref_store, 0, "delete_refs");
struct strbuf err = STRBUF_INIT;
int i, result = 0;
if (!refnames->nr)
return 0;
- result = repack_without_refs(refnames, &err);
+ result = repack_without_refs(refs, refnames, &err);
if (result) {
/*
* If we failed to rewrite the packed-refs file, then
return ret;
}
-int verify_refname_available(const char *newname,
- const struct string_list *extras,
- const struct string_list *skip,
- struct strbuf *err)
+static int files_verify_refname_available(struct ref_store *ref_store,
+ const char *newname,
+ const struct string_list *extras,
+ const struct string_list *skip,
+ struct strbuf *err)
{
- struct ref_dir *packed_refs = get_packed_refs(&ref_cache);
- struct ref_dir *loose_refs = get_loose_refs(&ref_cache);
+ struct files_ref_store *refs =
+ files_downcast(ref_store, 1, "verify_refname_available");
+ struct ref_dir *packed_refs = get_packed_refs(refs);
+ struct ref_dir *loose_refs = get_loose_refs(refs);
if (verify_refname_available_dir(newname, extras, skip,
packed_refs, err) ||
static int write_ref_to_lockfile(struct ref_lock *lock,
const unsigned char *sha1, struct strbuf *err);
-static int commit_ref_update(struct ref_lock *lock,
+static int commit_ref_update(struct files_ref_store *refs,
+ struct ref_lock *lock,
const unsigned char *sha1, const char *logmsg,
struct strbuf *err);
-int rename_ref(const char *oldrefname, const char *newrefname, const char *logmsg)
+static int files_rename_ref(struct ref_store *ref_store,
+ const char *oldrefname, const char *newrefname,
+ const char *logmsg)
{
+ struct files_ref_store *refs =
+ files_downcast(ref_store, 0, "rename_ref");
unsigned char sha1[20], orig_sha1[20];
int flag = 0, logmoved = 0;
struct ref_lock *lock;
logmoved = log;
- lock = lock_ref_sha1_basic(newrefname, NULL, NULL, NULL, REF_NODEREF,
- NULL, &err);
+ lock = lock_ref_sha1_basic(refs, newrefname, NULL, NULL, NULL,
+ REF_NODEREF, NULL, &err);
if (!lock) {
error("unable to rename '%s' to '%s': %s", oldrefname, newrefname, err.buf);
strbuf_release(&err);
hashcpy(lock->old_oid.hash, orig_sha1);
if (write_ref_to_lockfile(lock, orig_sha1, &err) ||
- commit_ref_update(lock, orig_sha1, logmsg, &err)) {
+ commit_ref_update(refs, lock, orig_sha1, logmsg, &err)) {
error("unable to write current sha1 into %s: %s", newrefname, err.buf);
strbuf_release(&err);
goto rollback;
return 0;
rollback:
- lock = lock_ref_sha1_basic(oldrefname, NULL, NULL, NULL, REF_NODEREF,
- NULL, &err);
+ lock = lock_ref_sha1_basic(refs, oldrefname, NULL, NULL, NULL,
+ REF_NODEREF, NULL, &err);
if (!lock) {
error("unable to lock %s for rollback: %s", oldrefname, err.buf);
strbuf_release(&err);
flag = log_all_ref_updates;
log_all_ref_updates = 0;
if (write_ref_to_lockfile(lock, orig_sha1, &err) ||
- commit_ref_update(lock, orig_sha1, NULL, &err)) {
+ commit_ref_update(refs, lock, orig_sha1, NULL, &err)) {
error("unable to write current sha1 into %s: %s", oldrefname, err.buf);
strbuf_release(&err);
}
}
-int safe_create_reflog(const char *refname, int force_create, struct strbuf *err)
+static int files_create_reflog(struct ref_store *ref_store,
+ const char *refname, int force_create,
+ struct strbuf *err)
{
int ret;
struct strbuf sb = STRBUF_INIT;
+ /* Check validity (but we don't need the result): */
+ files_downcast(ref_store, 0, "create_reflog");
+
ret = log_ref_setup(refname, &sb, err, force_create);
strbuf_release(&sb);
return ret;
* to the loose reference lockfile. Also update the reflogs if
* necessary, using the specified lockmsg (which can be NULL).
*/
-static int commit_ref_update(struct ref_lock *lock,
+static int commit_ref_update(struct files_ref_store *refs,
+ struct ref_lock *lock,
const unsigned char *sha1, const char *logmsg,
struct strbuf *err)
{
- clear_loose_ref_cache(&ref_cache);
+ assert_main_repository(&refs->base, "commit_ref_update");
+
+ clear_loose_ref_cache(refs);
if (log_ref_write(lock->ref_name, lock->old_oid.hash, sha1, logmsg, 0, err)) {
char *old_msg = strbuf_detach(err, NULL);
strbuf_addf(err, "cannot update the ref '%s': %s",
return 0;
}
-int create_symref(const char *refname, const char *target, const char *logmsg)
+static int files_create_symref(struct ref_store *ref_store,
+ const char *refname, const char *target,
+ const char *logmsg)
{
+ struct files_ref_store *refs =
+ files_downcast(ref_store, 0, "create_symref");
struct strbuf err = STRBUF_INIT;
struct ref_lock *lock;
int ret;
- lock = lock_ref_sha1_basic(refname, NULL, NULL, NULL, REF_NODEREF, NULL,
+ lock = lock_ref_sha1_basic(refs, refname, NULL,
+ NULL, NULL, REF_NODEREF, NULL,
&err);
if (!lock) {
error("%s", err.buf);
return ret;
}
-int reflog_exists(const char *refname)
+static int files_reflog_exists(struct ref_store *ref_store,
+ const char *refname)
{
struct stat st;
+ /* Check validity (but we don't need the result): */
+ files_downcast(ref_store, 0, "reflog_exists");
+
return !lstat(git_path("logs/%s", refname), &st) &&
S_ISREG(st.st_mode);
}
-int delete_reflog(const char *refname)
+static int files_delete_reflog(struct ref_store *ref_store,
+ const char *refname)
{
+ /* Check validity (but we don't need the result): */
+ files_downcast(ref_store, 0, "delete_reflog");
+
return remove_path(git_path("logs/%s", refname));
}
return scan;
}
-int for_each_reflog_ent_reverse(const char *refname, each_reflog_ent_fn fn, void *cb_data)
+static int files_for_each_reflog_ent_reverse(struct ref_store *ref_store,
+ const char *refname,
+ each_reflog_ent_fn fn,
+ void *cb_data)
{
struct strbuf sb = STRBUF_INIT;
FILE *logfp;
long pos;
int ret = 0, at_tail = 1;
+ /* Check validity (but we don't need the result): */
+ files_downcast(ref_store, 0, "for_each_reflog_ent_reverse");
+
logfp = fopen(git_path("logs/%s", refname), "r");
if (!logfp)
return -1;
return ret;
}
-int for_each_reflog_ent(const char *refname, each_reflog_ent_fn fn, void *cb_data)
+static int files_for_each_reflog_ent(struct ref_store *ref_store,
+ const char *refname,
+ each_reflog_ent_fn fn, void *cb_data)
{
FILE *logfp;
struct strbuf sb = STRBUF_INIT;
int ret = 0;
+ /* Check validity (but we don't need the result): */
+ files_downcast(ref_store, 0, "for_each_reflog_ent");
+
logfp = fopen(git_path("logs/%s", refname), "r");
if (!logfp)
return -1;
files_reflog_iterator_abort
};
-struct ref_iterator *files_reflog_iterator_begin(void)
+static struct ref_iterator *files_reflog_iterator_begin(struct ref_store *ref_store)
{
struct files_reflog_iterator *iter = xcalloc(1, sizeof(*iter));
struct ref_iterator *ref_iterator = &iter->base;
+ /* Check validity (but we don't need the result): */
+ files_downcast(ref_store, 0, "reflog_iterator_begin");
+
base_ref_iterator_init(ref_iterator, &files_reflog_iterator_vtable);
iter->dir_iterator = dir_iterator_begin(git_path("logs"));
return ref_iterator;
}
-int for_each_reflog(each_ref_fn fn, void *cb_data)
-{
- return do_for_each_ref_iterator(files_reflog_iterator_begin(),
- fn, cb_data);
-}
-
static int ref_update_reject_duplicates(struct string_list *refnames,
struct strbuf *err)
{
* Note that the new update will itself be subject to splitting when
* the iteration gets to it.
*/
-static int split_symref_update(struct ref_update *update,
+static int split_symref_update(struct files_ref_store *refs,
+ struct ref_update *update,
const char *referent,
struct ref_transaction *transaction,
struct string_list *affected_refnames,
* - If it is an update of head_ref, add a corresponding REF_LOG_ONLY
* update of HEAD.
*/
-static int lock_ref_for_update(struct ref_update *update,
+static int lock_ref_for_update(struct files_ref_store *refs,
+ struct ref_update *update,
struct ref_transaction *transaction,
const char *head_ref,
struct string_list *affected_refnames,
int ret;
struct ref_lock *lock;
+ assert_main_repository(&refs->base, "lock_ref_for_update");
+
if ((update->flags & REF_HAVE_NEW) && is_null_sha1(update->new_sha1))
update->flags |= REF_DELETING;
return ret;
}
- ret = lock_raw_ref(update->refname, mustexist,
+ ret = lock_raw_ref(refs, update->refname, mustexist,
affected_refnames, NULL,
- &update->lock, &referent,
+ &lock, &referent,
&update->type, err);
-
if (ret) {
char *reason;
return ret;
}
- lock = update->lock;
+ update->backend_data = lock;
if (update->type & REF_ISSYMREF) {
if (update->flags & REF_NODEREF) {
* of processing the split-off update, so we
* don't have to do it here.
*/
- ret = split_symref_update(update, referent.buf, transaction,
+ ret = split_symref_update(refs, update,
+ referent.buf, transaction,
affected_refnames, err);
if (ret)
return ret;
for (parent_update = update->parent_update;
parent_update;
parent_update = parent_update->parent_update) {
- oidcpy(&parent_update->lock->old_oid, &lock->old_oid);
+ struct ref_lock *parent_lock = parent_update->backend_data;
+ oidcpy(&parent_lock->old_oid, &lock->old_oid);
}
}
* The lock was freed upon failure of
* write_ref_to_lockfile():
*/
- update->lock = NULL;
+ update->backend_data = NULL;
strbuf_addf(err,
"cannot update ref '%s': %s",
update->refname, write_err);
return 0;
}
-int ref_transaction_commit(struct ref_transaction *transaction,
- struct strbuf *err)
+static int files_transaction_commit(struct ref_store *ref_store,
+ struct ref_transaction *transaction,
+ struct strbuf *err)
{
+ struct files_ref_store *refs =
+ files_downcast(ref_store, 0, "ref_transaction_commit");
int ret = 0, i;
struct string_list refs_to_delete = STRING_LIST_INIT_NODUP;
struct string_list_item *ref_to_delete;
for (i = 0; i < transaction->nr; i++) {
struct ref_update *update = transaction->updates[i];
- ret = lock_ref_for_update(update, transaction, head_ref,
- &affected_refnames, err);
+ ret = lock_ref_for_update(refs, update, transaction,
+ head_ref, &affected_refnames, err);
if (ret)
goto cleanup;
}
/* Perform updates first so live commits remain referenced */
for (i = 0; i < transaction->nr; i++) {
struct ref_update *update = transaction->updates[i];
- struct ref_lock *lock = update->lock;
+ struct ref_lock *lock = update->backend_data;
if (update->flags & REF_NEEDS_COMMIT ||
update->flags & REF_LOG_ONLY) {
lock->ref_name, old_msg);
free(old_msg);
unlock_ref(lock);
- update->lock = NULL;
+ update->backend_data = NULL;
ret = TRANSACTION_GENERIC_ERROR;
goto cleanup;
}
}
if (update->flags & REF_NEEDS_COMMIT) {
- clear_loose_ref_cache(&ref_cache);
+ clear_loose_ref_cache(refs);
if (commit_ref(lock)) {
strbuf_addf(err, "couldn't set '%s'", lock->ref_name);
unlock_ref(lock);
- update->lock = NULL;
+ update->backend_data = NULL;
ret = TRANSACTION_GENERIC_ERROR;
goto cleanup;
}
/* Perform deletes now that updates are safely completed */
for (i = 0; i < transaction->nr; i++) {
struct ref_update *update = transaction->updates[i];
+ struct ref_lock *lock = update->backend_data;
if (update->flags & REF_DELETING &&
!(update->flags & REF_LOG_ONLY)) {
- if (delete_ref_loose(update->lock, update->type, err)) {
+ if (delete_ref_loose(lock, update->type, err)) {
ret = TRANSACTION_GENERIC_ERROR;
goto cleanup;
}
if (!(update->flags & REF_ISPRUNING))
string_list_append(&refs_to_delete,
- update->lock->ref_name);
+ lock->ref_name);
}
}
- if (repack_without_refs(&refs_to_delete, err)) {
+ if (repack_without_refs(refs, &refs_to_delete, err)) {
ret = TRANSACTION_GENERIC_ERROR;
goto cleanup;
}
for_each_string_list_item(ref_to_delete, &refs_to_delete)
unlink_or_warn(git_path("logs/%s", ref_to_delete->string));
- clear_loose_ref_cache(&ref_cache);
+ clear_loose_ref_cache(refs);
cleanup:
transaction->state = REF_TRANSACTION_CLOSED;
for (i = 0; i < transaction->nr; i++)
- if (transaction->updates[i]->lock)
- unlock_ref(transaction->updates[i]->lock);
+ if (transaction->updates[i]->backend_data)
+ unlock_ref(transaction->updates[i]->backend_data);
string_list_clear(&refs_to_delete, 0);
free(head_ref);
string_list_clear(&affected_refnames, 0);
return string_list_has_string(affected_refnames, refname);
}
-int initial_ref_transaction_commit(struct ref_transaction *transaction,
- struct strbuf *err)
+static int files_initial_transaction_commit(struct ref_store *ref_store,
+ struct ref_transaction *transaction,
+ struct strbuf *err)
{
+ struct files_ref_store *refs =
+ files_downcast(ref_store, 0, "initial_ref_transaction_commit");
int ret = 0, i;
struct string_list affected_refnames = STRING_LIST_INIT_NODUP;
}
}
- if (lock_packed_refs(0)) {
+ if (lock_packed_refs(refs, 0)) {
strbuf_addf(err, "unable to lock packed-refs file: %s",
strerror(errno));
ret = TRANSACTION_GENERIC_ERROR;
if ((update->flags & REF_HAVE_NEW) &&
!is_null_sha1(update->new_sha1))
- add_packed_ref(update->refname, update->new_sha1);
+ add_packed_ref(refs, update->refname, update->new_sha1);
}
- if (commit_packed_refs()) {
+ if (commit_packed_refs(refs)) {
strbuf_addf(err, "unable to commit packed-refs file: %s",
strerror(errno));
ret = TRANSACTION_GENERIC_ERROR;
return 0;
}
-int reflog_expire(const char *refname, const unsigned char *sha1,
- unsigned int flags,
- reflog_expiry_prepare_fn prepare_fn,
- reflog_expiry_should_prune_fn should_prune_fn,
- reflog_expiry_cleanup_fn cleanup_fn,
- void *policy_cb_data)
+static int files_reflog_expire(struct ref_store *ref_store,
+ const char *refname, const unsigned char *sha1,
+ unsigned int flags,
+ reflog_expiry_prepare_fn prepare_fn,
+ reflog_expiry_should_prune_fn should_prune_fn,
+ reflog_expiry_cleanup_fn cleanup_fn,
+ void *policy_cb_data)
{
+ struct files_ref_store *refs =
+ files_downcast(ref_store, 0, "reflog_expire");
static struct lock_file reflog_lock;
struct expire_reflog_cb cb;
struct ref_lock *lock;
* reference itself, plus we might need to update the
* reference if --updateref was specified:
*/
- lock = lock_ref_sha1_basic(refname, sha1, NULL, NULL, REF_NODEREF,
+ lock = lock_ref_sha1_basic(refs, refname, sha1,
+ NULL, NULL, REF_NODEREF,
&type, &err);
if (!lock) {
error("cannot lock ref '%s': %s", refname, err.buf);
unlock_ref(lock);
return -1;
}
+
+static int files_init_db(struct ref_store *ref_store, struct strbuf *err)
+{
+ /* Check validity (but we don't need the result): */
+ files_downcast(ref_store, 0, "init_db");
+
+ /*
+ * Create .git/refs/{heads,tags}
+ */
+ safe_create_dir(git_path("refs/heads"), 1);
+ safe_create_dir(git_path("refs/tags"), 1);
+ if (get_shared_repository()) {
+ adjust_shared_perm(git_path("refs/heads"));
+ adjust_shared_perm(git_path("refs/tags"));
+ }
+ return 0;
+}
+
+struct ref_storage_be refs_be_files = {
+ NULL,
+ "files",
+ files_ref_store_create,
+ files_init_db,
+ files_transaction_commit,
+ files_initial_transaction_commit,
+
+ files_pack_refs,
+ files_peel_ref,
+ files_create_symref,
+ files_delete_refs,
+ files_rename_ref,
+
+ files_ref_iterator_begin,
+ files_read_raw_ref,
+ files_verify_refname_available,
+
+ files_reflog_iterator_begin,
+ files_for_each_reflog_ent,
+ files_for_each_reflog_ent_reverse,
+ files_reflog_exists,
+ files_create_reflog,
+ files_delete_reflog,
+ files_reflog_expire
+};
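
The initializer above is the complete vtable for the files backend: a list pointer (`next`), a backend `name` ("files"), and one function pointer per operation declared in refs/refs-internal.h further below. Purely as an illustration of that shape, and not code from this patch, looking a backend up by name on such a linked list could be done like this:

    #include "cache.h"
    #include "refs.h"
    #include "refs/refs-internal.h"

    /*
     * Illustration only (not part of the patch): walk a list of backends
     * chained through ->next and return the one whose name matches, or
     * NULL if there is no such backend.
     */
    static struct ref_storage_be *find_ref_storage_be(struct ref_storage_be *list,
                                                      const char *name)
    {
            struct ref_storage_be *be;

            for (be = list; be; be = be->next)
                    if (!strcmp(be->name, name))
                            return be;
            return NULL;
    }
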
*/
unsigned int flags;
- struct ref_lock *lock;
+ void *backend_data;
unsigned int type;
char *msg;
const struct string_list *extras,
const struct string_list *skip);
-int rename_ref_available(const char *oldname, const char *newname);
+/*
+ * Check whether an attempt to rename old_refname to new_refname would
+ * cause a D/F conflict with any existing reference (other than
+ * possibly old_refname). If there would be a conflict, emit an error
+ * message and return false; otherwise, return true.
+ *
+ * Note that this function is not safe against all races with other
+ * processes (though rename_ref() catches some races that might get by
+ * this check).
+ */
+int rename_ref_available(const char *old_refname, const char *new_refname);
/* We allow "recursive" symbolic refs. Only within reason, though */
#define SYMREF_MAXDEPTH 5
const char *prefix,
int trim);
-/*
- * Iterate over the packed and loose references in the specified
- * submodule that are within find_containing_dir(prefix). If prefix is
- * NULL or the empty string, iterate over all references in the
- * submodule.
- */
-struct ref_iterator *files_ref_iterator_begin(const char *submodule,
- const char *prefix,
- unsigned int flags);
-
-/*
- * Iterate over the references in the main ref_store that have a
- * reflog. The paths within a directory are iterated over in arbitrary
- * order.
- */
-struct ref_iterator *files_reflog_iterator_begin(void);
-
/* Internal implementation of reference iteration: */
/*
each_ref_fn fn, void *cb_data);
/*
- * Read the specified reference from the filesystem or packed refs
- * file, non-recursively. Set type to describe the reference, and:
+ * Only include per-worktree refs in a do_for_each_ref*() iteration.
+ * Normally this will be used with a files ref_store, since that's
+ * where all reference backends will presumably store their
+ * per-worktree refs.
+ */
+#define DO_FOR_EACH_PER_WORKTREE_ONLY 0x02
+
+struct ref_store;
+
+/* refs backends */
+
+/*
+ * Initialize the ref_store for the specified submodule, or for the
+ * main repository if submodule == NULL. These functions should call
+ * base_ref_store_init() to initialize the shared part of the
+ * ref_store and to record the ref_store for later lookup.
+ */
+typedef struct ref_store *ref_store_init_fn(const char *submodule);
+
+typedef int ref_init_db_fn(struct ref_store *refs, struct strbuf *err);
+
+typedef int ref_transaction_commit_fn(struct ref_store *refs,
+ struct ref_transaction *transaction,
+ struct strbuf *err);
+
+typedef int pack_refs_fn(struct ref_store *ref_store, unsigned int flags);
+typedef int peel_ref_fn(struct ref_store *ref_store,
+ const char *refname, unsigned char *sha1);
+typedef int create_symref_fn(struct ref_store *ref_store,
+ const char *ref_target,
+ const char *refs_heads_master,
+ const char *logmsg);
+typedef int delete_refs_fn(struct ref_store *ref_store,
+ struct string_list *refnames, unsigned int flags);
+typedef int rename_ref_fn(struct ref_store *ref_store,
+ const char *oldref, const char *newref,
+ const char *logmsg);
+
+/*
+ * Iterate over the references in the specified ref_store that are
+ * within find_containing_dir(prefix). If prefix is NULL or the empty
+ * string, iterate over all references in the ref_store.
+ */
+typedef struct ref_iterator *ref_iterator_begin_fn(
+ struct ref_store *ref_store,
+ const char *prefix, unsigned int flags);
+
+/* reflog functions */
+
+/*
+ * Iterate over the references in the specified ref_store that have a
+ * reflog. The refs are iterated over in arbitrary order.
+ */
+typedef struct ref_iterator *reflog_iterator_begin_fn(
+ struct ref_store *ref_store);
+
+typedef int for_each_reflog_ent_fn(struct ref_store *ref_store,
+ const char *refname,
+ each_reflog_ent_fn fn,
+ void *cb_data);
+typedef int for_each_reflog_ent_reverse_fn(struct ref_store *ref_store,
+ const char *refname,
+ each_reflog_ent_fn fn,
+ void *cb_data);
+typedef int reflog_exists_fn(struct ref_store *ref_store, const char *refname);
+typedef int create_reflog_fn(struct ref_store *ref_store, const char *refname,
+ int force_create, struct strbuf *err);
+typedef int delete_reflog_fn(struct ref_store *ref_store, const char *refname);
+typedef int reflog_expire_fn(struct ref_store *ref_store,
+ const char *refname, const unsigned char *sha1,
+ unsigned int flags,
+ reflog_expiry_prepare_fn prepare_fn,
+ reflog_expiry_should_prune_fn should_prune_fn,
+ reflog_expiry_cleanup_fn cleanup_fn,
+ void *policy_cb_data);
+
+/*
+ * Read a reference from the specified reference store, non-recursively.
+ * Set type to describe the reference, and:
*
* - If refname is the name of a normal reference, fill in sha1
* (leaving referent unchanged).
* - in all other cases, referent will be untouched, and therefore
* refname will still be valid and unchanged.
*/
-int read_raw_ref(const char *refname, unsigned char *sha1,
- struct strbuf *referent, unsigned int *type);
+typedef int read_raw_ref_fn(struct ref_store *ref_store,
+ const char *refname, unsigned char *sha1,
+ struct strbuf *referent, unsigned int *type);
+
+typedef int verify_refname_available_fn(struct ref_store *ref_store,
+ const char *newname,
+ const struct string_list *extras,
+ const struct string_list *skip,
+ struct strbuf *err);
+
+struct ref_storage_be {
+ struct ref_storage_be *next;
+ const char *name;
+ ref_store_init_fn *init;
+ ref_init_db_fn *init_db;
+ ref_transaction_commit_fn *transaction_commit;
+ ref_transaction_commit_fn *initial_transaction_commit;
+
+ pack_refs_fn *pack_refs;
+ peel_ref_fn *peel_ref;
+ create_symref_fn *create_symref;
+ delete_refs_fn *delete_refs;
+ rename_ref_fn *rename_ref;
+
+ ref_iterator_begin_fn *iterator_begin;
+ read_raw_ref_fn *read_raw_ref;
+ verify_refname_available_fn *verify_refname_available;
+
+ reflog_iterator_begin_fn *reflog_iterator_begin;
+ for_each_reflog_ent_fn *for_each_reflog_ent;
+ for_each_reflog_ent_reverse_fn *for_each_reflog_ent_reverse;
+ reflog_exists_fn *reflog_exists;
+ create_reflog_fn *create_reflog;
+ delete_reflog_fn *delete_reflog;
+ reflog_expire_fn *reflog_expire;
+};
+
+extern struct ref_storage_be refs_be_files;
+
+/*
+ * A representation of the reference store for the main repository or
+ * a submodule. The ref_store instances for submodules are kept in a
+ * linked list.
+ */
+struct ref_store {
+ /* The backend describing this ref_store's storage scheme: */
+ const struct ref_storage_be *be;
+
+ /*
+ * The name of the submodule represented by this object, or
+ * the empty string if it represents the main repository's
+ * reference store:
+ */
+ const char *submodule;
+
+ /*
+ * Submodule reference store instances are stored in a linked
+ * list using this pointer.
+ */
+ struct ref_store *next;
+};
+
+/*
+ * Fill in the generic part of refs for the specified submodule and
+ * add it to our collection of reference stores.
+ */
+void base_ref_store_init(struct ref_store *refs,
+ const struct ref_storage_be *be,
+ const char *submodule);
+
+/*
+ * Create, record, and return a ref_store instance for the specified
+ * submodule (or the main repository if submodule is NULL).
+ *
+ * For backwards compatibility, submodule=="" is treated the same as
+ * submodule==NULL.
+ */
+struct ref_store *ref_store_init(const char *submodule);
+
+/*
+ * Return the ref_store instance for the specified submodule (or the
+ * main repository if submodule is NULL). If that ref_store hasn't
+ * been initialized yet, return NULL.
+ *
+ * For backwards compatibility, submodule=="" is treated the same as
+ * submodule==NULL.
+ */
+struct ref_store *lookup_ref_store(const char *submodule);
+
+/*
+ * Return the ref_store instance for the specified submodule. For the
+ * main repository, use submodule==NULL; such a call cannot fail. For
+ * a submodule, the submodule must exist and be a nonbare repository,
+ * otherwise return NULL. If the requested reference store has not yet
+ * been initialized, initialize it first.
+ *
+ * For backwards compatibility, submodule=="" is treated the same as
+ * submodule==NULL.
+ */
+struct ref_store *get_ref_store(const char *submodule);
+
+/*
+ * Die if refs is for a submodule (i.e., not for the main repository).
+ * caller is used in any necessary error messages.
+ */
+void assert_main_repository(struct ref_store *refs, const char *caller);
#endif /* REFS_REFS_INTERNAL_H */
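
Taken together, the declarations above let generic code reach whichever backend is in use through `get_ref_store()` and the store's `be` vtable pointer, without naming any `files_*` function directly. A minimal sketch of such a dispatch, assuming it is compiled inside git.git with these headers available; the wrapper name `pack_main_repo_refs` is made up for illustration and is not necessarily how refs.c wires up its public API:

    #include "cache.h"
    #include "refs.h"
    #include "refs/refs-internal.h"

    /*
     * Illustrative wrapper (hypothetical name): fetch the ref_store for
     * the main repository and dispatch pack_refs through its backend.
     */
    static int pack_main_repo_refs(unsigned int flags)
    {
            struct ref_store *refs = get_ref_store(NULL); /* main repository */

            return refs->be->pack_refs(refs, flags);
    }

Inside refs/files-backend.c the direction is reversed: each `files_*` entry point downcasts the generic `struct ref_store *` back to its private `struct files_ref_store *` via `files_downcast()`, as the hunks above show.
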
struct options {
int verbosity;
unsigned long depth;
+ char *deepen_since;
+ struct string_list deepen_not;
unsigned progress : 1,
check_self_contained_and_connected : 1,
cloning : 1,
dry_run : 1,
thin : 1,
/* One of the SEND_PACK_PUSH_CERT_* constants. */
- push_cert : 2;
+ push_cert : 2,
+ deepen_relative : 1;
};
static struct options options;
static struct string_list cas_options = STRING_LIST_INIT_DUP;
options.depth = v;
return 0;
}
+ else if (!strcmp(name, "deepen-since")) {
+ options.deepen_since = xstrdup(value);
+ return 0;
+ }
+ else if (!strcmp(name, "deepen-not")) {
+ string_list_append(&options.deepen_not, value);
+ return 0;
+ }
+ else if (!strcmp(name, "deepen-relative")) {
+ if (!strcmp(value, "true"))
+ options.deepen_relative = 1;
+ else if (!strcmp(value, "false"))
+ options.deepen_relative = 0;
+ else
+ return -1;
+ return 0;
+ }
else if (!strcmp(name, "followtags")) {
if (!strcmp(value, "true"))
options.followtags = 1;
int ret, i;
ALLOC_ARRAY(targets, nr_heads);
- if (options.depth)
- die("dumb http transport does not support --depth");
+ if (options.depth || options.deepen_since)
+ die("dumb http transport does not support shallow capabilities");
for (i = 0; i < nr_heads; i++)
targets[i] = xstrdup(oid_to_hex(&to_fetch[i]->old_oid));
{
struct rpc_state rpc;
struct strbuf preamble = STRBUF_INIT;
- char *depth_arg = NULL;
- int argc = 0, i, err;
- const char *argv[17];
-
- argv[argc++] = "fetch-pack";
- argv[argc++] = "--stateless-rpc";
- argv[argc++] = "--stdin";
- argv[argc++] = "--lock-pack";
+ int i, err;
+ struct argv_array args = ARGV_ARRAY_INIT;
+
+ argv_array_pushl(&args, "fetch-pack", "--stateless-rpc",
+ "--stdin", "--lock-pack", NULL);
if (options.followtags)
- argv[argc++] = "--include-tag";
+ argv_array_push(&args, "--include-tag");
if (options.thin)
- argv[argc++] = "--thin";
- if (options.verbosity >= 3) {
- argv[argc++] = "-v";
- argv[argc++] = "-v";
- }
+ argv_array_push(&args, "--thin");
+ if (options.verbosity >= 3)
+ argv_array_pushl(&args, "-v", "-v", NULL);
if (options.check_self_contained_and_connected)
- argv[argc++] = "--check-self-contained-and-connected";
+ argv_array_push(&args, "--check-self-contained-and-connected");
if (options.cloning)
- argv[argc++] = "--cloning";
+ argv_array_push(&args, "--cloning");
if (options.update_shallow)
- argv[argc++] = "--update-shallow";
+ argv_array_push(&args, "--update-shallow");
if (!options.progress)
- argv[argc++] = "--no-progress";
- if (options.depth) {
- struct strbuf buf = STRBUF_INIT;
- strbuf_addf(&buf, "--depth=%lu", options.depth);
- depth_arg = strbuf_detach(&buf, NULL);
- argv[argc++] = depth_arg;
- }
- argv[argc++] = url.buf;
- argv[argc++] = NULL;
+ argv_array_push(&args, "--no-progress");
+ if (options.depth)
+ argv_array_pushf(&args, "--depth=%lu", options.depth);
+ if (options.deepen_since)
+ argv_array_pushf(&args, "--shallow-since=%s", options.deepen_since);
+ for (i = 0; i < options.deepen_not.nr; i++)
+ argv_array_pushf(&args, "--shallow-exclude=%s",
+ options.deepen_not.items[i].string);
+ if (options.deepen_relative && options.depth)
+ argv_array_push(&args, "--deepen-relative");
+ argv_array_push(&args, url.buf);
for (i = 0; i < nr_heads; i++) {
struct ref *ref = to_fetch[i];
memset(&rpc, 0, sizeof(rpc));
rpc.service_name = "git-upload-pack",
- rpc.argv = argv;
+ rpc.argv = args.argv;
rpc.stdin_preamble = &preamble;
rpc.gzip_request = 1;
write_or_die(1, rpc.result.buf, rpc.result.len);
strbuf_release(&rpc.result);
strbuf_release(&preamble);
- free(depth_arg);
+ argv_array_clear(&args);
return err;
}
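
Replacing the fixed `const char *argv[17]` with an `argv_array` here is mostly a robustness change: the vector grows as needed when new capabilities (such as the deepen options above) add arguments, it stays NULL-terminated automatically, and `argv_array_pushf()` owns the strings it formats, so the hand-managed `depth_arg` allocation and its `free()` can go away. A minimal sketch of the same pattern, with `--example-flag` being a made-up option used only for illustration:

    #include "cache.h"
    #include "argv-array.h"

    /* Sketch only: build a NULL-terminated argument list dynamically. */
    static void build_example_args(const char *example_value)
    {
            struct argv_array args = ARGV_ARRAY_INIT;

            argv_array_pushl(&args, "fetch-pack", "--stateless-rpc", NULL);
            if (example_value)
                    argv_array_pushf(&args, "--example-flag=%s", example_value);

            /* args.argv is ready to hand to run_command() or an rpc_state */

            argv_array_clear(&args);        /* frees every pushed string */
    }
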
options.verbosity = 1;
options.progress = !!isatty(2);
options.thin = 1;
+ string_list_init(&options.deepen_not, 1);
remote = remote_get(argv[1]);
break;
i = ce_stage(ce) - 1;
if (!mmfile[i].ptr) {
- mmfile[i].ptr = read_sha1_file(ce->sha1, &type, &size);
+ mmfile[i].ptr = read_sha1_file(ce->oid.hash, &type,
+ &size);
mmfile[i].size = size;
}
}
if (!lost->util)
lost->util = xcalloc(1, sizeof(*ui));
ui = lost->util;
- hashcpy(ui->sha1[stage - 1], ce->sha1);
+ hashcpy(ui->sha1[stage - 1], ce->oid.hash);
ui->mode[stage - 1] = ce->ce_mode;
}
if (S_ISGITLINK(ce->ce_mode))
continue;
- blob = lookup_blob(ce->sha1);
+ blob = lookup_blob(ce->oid.hash);
if (!blob)
die("unable to add index blob to traversal");
add_pending_object_with_path(revs, &blob->object, "",
}
}
-static int add_parents_only(struct rev_info *revs, const char *arg_, int flags)
+static int add_parents_only(struct rev_info *revs, const char *arg_, int flags,
+ int exclude_parent)
{
unsigned char sha1[20];
struct object *it;
struct commit *commit;
struct commit_list *parents;
+ int parent_number;
const char *arg = arg_;
if (*arg == '^') {
if (it->type != OBJ_COMMIT)
return 0;
commit = (struct commit *)it;
- for (parents = commit->parents; parents; parents = parents->next) {
+ if (exclude_parent &&
+ exclude_parent > commit_list_count(commit->parents))
+ return 0;
+ for (parents = commit->parents, parent_number = 1;
+ parents;
+ parents = parents->next, parent_number++) {
+ if (exclude_parent && parent_number != exclude_parent)
+ continue;
+
it = &parents->item->object;
it->flags |= flags;
add_rev_cmdline(revs, it, arg_, REV_CMD_PARENTS_ONLY, flags);
}
*dotdot = '.';
}
+
dotdot = strstr(arg, "^@");
if (dotdot && !dotdot[2]) {
*dotdot = 0;
- if (add_parents_only(revs, arg, flags))
+ if (add_parents_only(revs, arg, flags, 0))
return 0;
*dotdot = '^';
}
dotdot = strstr(arg, "^!");
if (dotdot && !dotdot[2]) {
*dotdot = 0;
- if (!add_parents_only(revs, arg, flags ^ (UNINTERESTING | BOTTOM)))
+ if (!add_parents_only(revs, arg, flags ^ (UNINTERESTING | BOTTOM), 0))
+ *dotdot = '^';
+ }
+ dotdot = strstr(arg, "^-");
+ if (dotdot) {
+ int exclude_parent = 1;
+
+ if (dotdot[2]) {
+ char *end;
+ exclude_parent = strtoul(dotdot + 2, &end, 10);
+ if (*end != '\0' || !exclude_parent)
+ return -1;
+ }
+
+ *dotdot = 0;
+ if (!add_parents_only(revs, arg, flags ^ (UNINTERESTING | BOTTOM), exclude_parent))
*dotdot = '^';
}
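
This hunk introduces the `<rev>^-<n>` notation: `exclude_parent` defaults to 1 when nothing follows `^-`, a non-numeric or zero suffix is rejected, and `add_parents_only()` is called with the flags flipped so that only the selected parent becomes negative while `<rev>` itself stays positive. In other words (with `M` standing in for a merge commit):

    M^-       is shorthand for   M ^M^1   (i.e. the range M^1..M)
    M^-2      is shorthand for   M ^M^2   (i.e. the range M^2..M)

which makes it easy to ask for exactly what a merge brought in relative to one of its parents.
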
struct child_to_clean {
pid_t pid;
+ struct child_process *process;
struct child_to_clean *next;
};
static struct child_to_clean *children_to_clean;
while (children_to_clean) {
struct child_to_clean *p = children_to_clean;
children_to_clean = p->next;
+
+ if (p->process && !in_signal) {
+ struct child_process *process = p->process;
+ if (process->clean_on_exit_handler) {
+ trace_printf(
+ "trace: run_command: running exit handler for pid %"
+ PRIuMAX, (uintmax_t)p->pid
+ );
+ process->clean_on_exit_handler(process);
+ }
+ }
+
kill(p->pid, sig);
if (!in_signal)
free(p);
cleanup_children(SIGTERM, 0);
}
-static void mark_child_for_cleanup(pid_t pid)
+static void mark_child_for_cleanup(pid_t pid, struct child_process *process)
{
struct child_to_clean *p = xmalloc(sizeof(*p));
p->pid = pid;
+ p->process = process;
p->next = children_to_clean;
children_to_clean = p;
if (cmd->pid < 0)
error_errno("cannot fork() for %s", cmd->argv[0]);
else if (cmd->clean_on_exit)
- mark_child_for_cleanup(cmd->pid);
+ mark_child_for_cleanup(cmd->pid, cmd);
/*
* Wait for child's execvp. If the execvp succeeds (or if fork()
if (cmd->pid < 0 && (!cmd->silent_exec_failure || errno != ENOENT))
error_errno("cannot spawn %s", cmd->argv[0]);
if (cmd->clean_on_exit && cmd->pid >= 0)
- mark_child_for_cleanup(cmd->pid);
+ mark_child_for_cleanup(cmd->pid, cmd);
argv_array_clear(&nargv);
cmd->argv = sargv;
return !pthread_equal(main_thread, pthread_self());
}
-void NORETURN async_exit(int code)
+static void NORETURN async_exit(int code)
{
pthread_exit((void *)(intptr_t)code);
}
return process_is_async;
}
-void NORETURN async_exit(int code)
+static void NORETURN async_exit(int code)
{
exit(code);
}
#endif
+void check_pipe(int err)
+{
+ if (err == EPIPE) {
+ if (in_async())
+ async_exit(141);
+
+ signal(SIGPIPE, SIG_DFL);
+ raise(SIGPIPE);
+ /* Should never happen, but just in case... */
+ exit(141);
+ }
+}
+
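
`check_pipe()` centralizes the EPIPE handling that previously required exporting `async_exit()`: inside an async thread it ends only that thread with status 141, otherwise it restores the default SIGPIPE disposition and re-raises the signal (141 being the conventional 128 + SIGPIPE exit code). A hedged sketch of a caller; the helper name is made up, while `write_in_full()` and `die_errno()` are existing helpers:

    #include "cache.h"
    #include "run-command.h"

    /*
     * Hypothetical caller (illustration only): die quietly if the reader
     * went away, complain loudly about any other write error.
     */
    static void write_buf_or_die(int fd, const void *buf, size_t len)
    {
            if (write_in_full(fd, buf, len) < 0) {
                    check_pipe(errno);      /* does not return on EPIPE */
                    die_errno("write error");
            }
    }
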
int start_async(struct async *async)
{
int need_in, need_out;
exit(!!async->proc(proc_in, proc_out, async->data));
}
- mark_child_for_cleanup(async->pid);
+ mark_child_for_cleanup(async->pid, NULL);
if (need_in)
close(fdin[0]);
unsigned stdout_to_stderr:1;
unsigned use_shell:1;
unsigned clean_on_exit:1;
+ void (*clean_on_exit_handler)(struct child_process *process);
+ void *clean_on_exit_handler_cbdata;
};
#define CHILD_PROCESS_INIT { NULL, ARGV_ARRAY_INIT, ARGV_ARRAY_INIT }
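
With the new `clean_on_exit_handler` field, a caller that marks a child with `clean_on_exit` can additionally have a callback run from git's exit cleanup (not from the signal path) just before the child is killed, as `cleanup_children()` above shows. A minimal sketch; `notify_helper` and the `example-helper` command are made up for illustration:

    #include "cache.h"
    #include "run-command.h"

    /* Hypothetical callback: runs right before the child is signalled. */
    static void notify_helper(struct child_process *process)
    {
            /* e.g. close process->in so the helper can shut down cleanly */
    }

    static int start_example_helper(void)
    {
            struct child_process helper = CHILD_PROCESS_INIT;

            argv_array_pushl(&helper.args, "example-helper", NULL);
            helper.clean_on_exit = 1;
            helper.clean_on_exit_handler = notify_helper;

            return start_command(&helper);
    }
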
int start_async(struct async *async);
int finish_async(struct async *async);
int in_async(void);
-void NORETURN async_exit(int code);
+void check_pipe(int err);
/**
* This callback should initialize the child process and preload the
#include "merge-recursive.h"
#include "refs.h"
#include "argv-array.h"
+#include "quote.h"
#define GIT_REFLOG_ACTION "GIT_REFLOG_ACTION"
const char sign_off_header[] = "Signed-off-by: ";
static const char cherry_picked_prefix[] = "(cherry picked from commit ";
-static GIT_PATH_FUNC(git_path_todo_file, SEQ_TODO_FILE)
-static GIT_PATH_FUNC(git_path_opts_file, SEQ_OPTS_FILE)
-static GIT_PATH_FUNC(git_path_seq_dir, SEQ_DIR)
-static GIT_PATH_FUNC(git_path_head_file, SEQ_HEAD_FILE)
+GIT_PATH_FUNC(git_path_seq_dir, "sequencer")
+
+static GIT_PATH_FUNC(git_path_todo_file, "sequencer/todo")
+static GIT_PATH_FUNC(git_path_opts_file, "sequencer/opts")
+static GIT_PATH_FUNC(git_path_head_file, "sequencer/head")
+
+/*
+ * A script to set the GIT_AUTHOR_NAME, GIT_AUTHOR_EMAIL, and
+ * GIT_AUTHOR_DATE that will be used for the commit that is currently
+ * being rebased.
+ */
+static GIT_PATH_FUNC(rebase_path_author_script, "rebase-merge/author-script")
+/*
+ * The following files are written by git-rebase just after parsing the
+ * command-line (and are only consumed, not modified, by the sequencer).
+ */
+static GIT_PATH_FUNC(rebase_path_gpg_sign_opt, "rebase-merge/gpg_sign_opt")
+
+/* We will introduce the 'interactive rebase' mode later */
+static inline int is_rebase_i(const struct replay_opts *opts)
+{
+ return 0;
+}
+
+static const char *get_dir(const struct replay_opts *opts)
+{
+ return git_path_seq_dir();
+}
+
+static const char *get_todo_path(const struct replay_opts *opts)
+{
+ return git_path_todo_file();
+}
static int is_rfc2822_line(const char *buf, int len)
{
return 1;
}
-static void remove_sequencer_state(void)
+static const char *gpg_sign_opt_quoted(struct replay_opts *opts)
+{
+ static struct strbuf buf = STRBUF_INIT;
+
+ strbuf_reset(&buf);
+ if (opts->gpg_sign)
+ sq_quotef(&buf, "-S%s", opts->gpg_sign);
+ return buf.buf;
+}
+
+int sequencer_remove_state(struct replay_opts *opts)
{
- struct strbuf seq_dir = STRBUF_INIT;
+ struct strbuf dir = STRBUF_INIT;
+ int i;
+
+ free(opts->gpg_sign);
+ free(opts->strategy);
+ for (i = 0; i < opts->xopts_nr; i++)
+ free(opts->xopts[i]);
+ free(opts->xopts);
+
+ strbuf_addf(&dir, "%s", get_dir(opts));
+ remove_dir_recursively(&dir, 0);
+ strbuf_release(&dir);
- strbuf_addstr(&seq_dir, git_path(SEQ_DIR));
- remove_dir_recursively(&seq_dir, 0);
- strbuf_release(&seq_dir);
+ return 0;
}
static const char *action_name(const struct replay_opts *opts)
{
- return opts->action == REPLAY_REVERT ? "revert" : "cherry-pick";
+ return opts->action == REPLAY_REVERT ? N_("revert") : N_("cherry-pick");
}
struct commit_message {
const char *message;
};
+static const char *short_commit_name(struct commit *commit)
+{
+ return find_unique_abbrev(commit->object.oid.hash, DEFAULT_ABBREV);
+}
+
static int get_message(struct commit *commit, struct commit_message *out)
{
const char *abbrev, *subject;
int subject_len;
out->message = logmsg_reencode(commit, NULL, get_commit_output_encoding());
- abbrev = find_unique_abbrev(commit->object.oid.hash, DEFAULT_ABBREV);
+ abbrev = short_commit_name(commit);
subject_len = find_commit_subject(out->message, &subject);
}
}
-static void write_message(struct strbuf *msgbuf, const char *filename)
+static int write_message(const void *buf, size_t len, const char *filename,
+ int append_eol)
{
static struct lock_file msg_file;
- int msg_fd = hold_lock_file_for_update(&msg_file, filename,
- LOCK_DIE_ON_ERROR);
- if (write_in_full(msg_fd, msgbuf->buf, msgbuf->len) < 0)
- die_errno(_("Could not write to %s"), filename);
- strbuf_release(msgbuf);
- if (commit_lock_file(&msg_file) < 0)
- die(_("Error wrapping up %s."), filename);
+ int msg_fd = hold_lock_file_for_update(&msg_file, filename, 0);
+ if (msg_fd < 0)
+ return error_errno(_("could not lock '%s'"), filename);
+ if (write_in_full(msg_fd, buf, len) < 0) {
+ rollback_lock_file(&msg_file);
+ return error_errno(_("could not write to '%s'"), filename);
+ }
+ if (append_eol && write(msg_fd, "\n", 1) < 0) {
+ rollback_lock_file(&msg_file);
+ return error_errno(_("could not write eol to '%s"), filename);
+ }
+ if (commit_lock_file(&msg_file) < 0) {
+ rollback_lock_file(&msg_file);
+ return error(_("failed to finalize '%s'."), filename);
+ }
+
+ return 0;
+}
+
+/*
+ * Reads a file that was presumably written by a shell script, i.e. with an
+ * end-of-line marker that needs to be stripped.
+ *
+ * Note that only the last end-of-line marker is stripped, consistent with the
+ * behavior of "$(cat path)" in a shell script.
+ *
+ * Returns 1 if the file was read, 0 if it could not be read or does not exist.
+ */
+static int read_oneliner(struct strbuf *buf,
+ const char *path, int skip_if_empty)
+{
+ int orig_len = buf->len;
+
+ if (!file_exists(path))
+ return 0;
+
+ if (strbuf_read_file(buf, path, 0) < 0) {
+ warning_errno(_("could not read '%s'"), path);
+ return 0;
+ }
+
+ if (buf->len > orig_len && buf->buf[buf->len - 1] == '\n') {
+ if (--buf->len > orig_len && buf->buf[buf->len - 1] == '\r')
+ --buf->len;
+ buf->buf[buf->len] = '\0';
+ }
+
+ if (skip_if_empty && buf->len == orig_len)
+ return 0;
+
+ return 1;
}
static struct tree *empty_tree(void)
static int error_dirty_index(struct replay_opts *opts)
{
if (read_cache_unmerged())
- return error_resolve_conflict(action_name(opts));
+ return error_resolve_conflict(_(action_name(opts)));
- /* Different translation strings for cherry-pick and revert */
- if (opts->action == REPLAY_PICK)
- error(_("Your local changes would be overwritten by cherry-pick."));
- else
- error(_("Your local changes would be overwritten by revert."));
+ error(_("your local changes would be overwritten by %s."),
+ _(action_name(opts)));
if (advice_commit_before_merge)
- advise(_("Commit your changes or stash them to proceed."));
+ advise(_("commit your changes or stash them to proceed."));
return -1;
}
read_cache();
if (checkout_fast_forward(from, to, 1))
- exit(128); /* the callee should have complained already */
+ return -1; /* the callee should have complained already */
- strbuf_addf(&sb, _("%s: fast-forward"), action_name(opts));
+ strbuf_addf(&sb, _("%s: fast-forward"), _(action_name(opts)));
transaction = ref_transaction_begin(&err);
if (!transaction ||
struct merge_options o;
struct tree *result, *next_tree, *base_tree, *head_tree;
int clean;
- const char **xopt;
+ char **xopt;
static struct lock_file index_lock;
hold_locked_index(&index_lock, 1);
if (active_cache_changed &&
write_locked_index(&the_index, &index_lock, COMMIT_LOCK))
/* TRANSLATORS: %s will be "revert" or "cherry-pick" */
- die(_("%s: Unable to write new index file"), action_name(opts));
+ return error(_("%s: Unable to write new index file"),
+ _(action_name(opts)));
rollback_lock_file(&index_lock);
if (opts->signoff)
struct commit *head_commit;
if (!resolve_ref_unsafe("HEAD", RESOLVE_REF_READING, head_sha1, NULL))
- return error(_("Could not resolve HEAD commit\n"));
+ return error(_("could not resolve HEAD commit\n"));
head_commit = lookup_commit(head_sha1);
if (!cache_tree_fully_valid(active_cache_tree))
if (cache_tree_update(&the_index, 0))
- return error(_("Unable to update cache tree\n"));
+ return error(_("unable to update cache tree\n"));
return !hashcmp(active_cache_tree->sha1, head_commit->tree->object.oid.hash);
}
+/*
+ * Read the author-script file into an environment block, ready for use in
+ * run_command(), that can be free()d afterwards.
+ */
+static char **read_author_script(void)
+{
+ struct strbuf script = STRBUF_INIT;
+ int i, count = 0;
+ char *p, *p2, **env;
+ size_t env_size;
+
+ if (strbuf_read_file(&script, rebase_path_author_script(), 256) <= 0)
+ return NULL;
+
+ for (p = script.buf; *p; p++)
+ if (skip_prefix(p, "'\\\\''", (const char **)&p2))
+ strbuf_splice(&script, p - script.buf, p2 - p, "'", 1);
+ else if (*p == '\'')
+ strbuf_splice(&script, p-- - script.buf, 1, "", 0);
+ else if (*p == '\n') {
+ *p = '\0';
+ count++;
+ }
+
+ env_size = (count + 1) * sizeof(*env);
+ strbuf_grow(&script, env_size);
+ memmove(script.buf + env_size, script.buf, script.len);
+ p = script.buf + env_size;
+ env = (char **)strbuf_detach(&script, NULL);
+
+ for (i = 0; i < count; i++) {
+ env[i] = p;
+ p += strlen(p) + 1;
+ }
+ env[count] = NULL;
+
+ return env;
+}
+
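
`read_author_script()` parses `$GIT_DIR/rebase-merge/author-script`, which git-rebase writes as a small shell snippet of single-quoted assignments. The loop above undoes the quoting, turns each line into a NUL-terminated string, and builds a NULL-terminated `char **` that points back into the same buffer, ready to be handed to `run_command_v_opt_cd_env()` by `run_git_commit()` below. For illustration, the file typically looks something like this (the values are made up):

    GIT_AUTHOR_NAME='A U Thor'
    GIT_AUTHOR_EMAIL='author@example.com'
    GIT_AUTHOR_DATE='@1234567890 +0000'

which this function turns into the environment entries `GIT_AUTHOR_NAME=A U Thor`, `GIT_AUTHOR_EMAIL=author@example.com`, and `GIT_AUTHOR_DATE=@1234567890 +0000`.
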
+static const char staged_changes_advice[] =
+N_("you have staged changes in your working tree\n"
+"If these changes are meant to be squashed into the previous commit, run:\n"
+"\n"
+" git commit --amend %s\n"
+"\n"
+"If they are meant to go into a new commit, run:\n"
+"\n"
+" git commit %s\n"
+"\n"
+"In both cases, once you're done, continue with:\n"
+"\n"
+" git rebase --continue\n");
+
/*
* If we are cherry-pick, and if the merge did not result in
* hand-editing, we will hit this commit and inherit the original
* author date and name.
+ *
* If we are revert, or if our cherry-pick results in a hand merge,
* we had better say that the current user is responsible for that.
+ *
+ * An exception is when run_git_commit() is called during an
+ * interactive rebase: in that case, we will want to retain the
+ * author metadata.
*/
static int run_git_commit(const char *defmsg, struct replay_opts *opts,
- int allow_empty)
+ int allow_empty, int edit, int amend,
+ int cleanup_commit_message)
{
+ char **env = NULL;
struct argv_array array;
int rc;
const char *value;
+ if (is_rebase_i(opts)) {
+ env = read_author_script();
+ if (!env) {
+ const char *gpg_opt = gpg_sign_opt_quoted(opts);
+
+ return error(_(staged_changes_advice),
+ gpg_opt, gpg_opt);
+ }
+ }
+
argv_array_init(&array);
argv_array_push(&array, "commit");
argv_array_push(&array, "-n");
+ if (amend)
+ argv_array_push(&array, "--amend");
if (opts->gpg_sign)
argv_array_pushf(&array, "-S%s", opts->gpg_sign);
if (opts->signoff)
argv_array_push(&array, "-s");
- if (!opts->edit) {
- argv_array_push(&array, "-F");
- argv_array_push(&array, defmsg);
- if (!opts->signoff &&
- !opts->record_origin &&
- git_config_get_value("commit.cleanup", &value))
- argv_array_push(&array, "--cleanup=verbatim");
- }
+ if (defmsg)
+ argv_array_pushl(&array, "-F", defmsg, NULL);
+ if (cleanup_commit_message)
+ argv_array_push(&array, "--cleanup=strip");
+ if (edit)
+ argv_array_push(&array, "-e");
+ else if (!cleanup_commit_message &&
+ !opts->signoff && !opts->record_origin &&
+ git_config_get_value("commit.cleanup", &value))
+ argv_array_push(&array, "--cleanup=verbatim");
if (allow_empty)
argv_array_push(&array, "--allow-empty");
if (opts->allow_empty_message)
argv_array_push(&array, "--allow-empty-message");
- rc = run_command_v_opt(array.argv, RUN_GIT_CMD);
+ rc = run_command_v_opt_cd_env(array.argv, RUN_GIT_CMD, NULL,
+ (const char *const *)env);
argv_array_clear(&array);
+ free(env);
+
return rc;
}
const unsigned char *ptree_sha1;
if (parse_commit(commit))
- return error(_("Could not parse commit %s\n"),
+ return error(_("could not parse commit %s\n"),
oid_to_hex(&commit->object.oid));
if (commit->parents) {
struct commit *parent = commit->parents->item;
if (parse_commit(parent))
- return error(_("Could not parse parent commit %s\n"),
+ return error(_("could not parse parent commit %s\n"),
oid_to_hex(&parent->object.oid));
ptree_sha1 = parent->tree->object.oid.hash;
} else {
return 1;
}
-static int do_pick_commit(struct commit *commit, struct replay_opts *opts)
+enum todo_command {
+ TODO_PICK = 0,
+ TODO_REVERT
+};
+
+static const char *todo_command_strings[] = {
+ "pick",
+ "revert"
+};
+
+static const char *command_to_string(const enum todo_command command)
+{
+ if (command < ARRAY_SIZE(todo_command_strings))
+ return todo_command_strings[command];
+ die("Unknown command: %d", command);
+}
+
+
+static int do_pick_commit(enum todo_command command, struct commit *commit,
+ struct replay_opts *opts)
{
unsigned char head[20];
struct commit *base, *next, *parent;
* to work on.
*/
if (write_cache_as_tree(head, 0, NULL))
- die (_("Your index file is unmerged."));
+ return error(_("your index file is unmerged."));
} else {
unborn = get_sha1("HEAD", head);
if (unborn)
hashcpy(head, EMPTY_TREE_SHA1_BIN);
- if (index_differs_from(unborn ? EMPTY_TREE_SHA1_HEX : "HEAD", 0))
+ if (index_differs_from(unborn ? EMPTY_TREE_SHA1_HEX : "HEAD", 0, 0))
return error_dirty_index(opts);
}
discard_cache();
struct commit_list *p;
if (!opts->mainline)
- return error(_("Commit %s is a merge but no -m option was given."),
+ return error(_("commit %s is a merge but no -m option was given."),
oid_to_hex(&commit->object.oid));
for (cnt = 1, p = commit->parents;
cnt++)
p = p->next;
if (cnt != opts->mainline || !p)
- return error(_("Commit %s does not have parent %d"),
+ return error(_("commit %s does not have parent %d"),
oid_to_hex(&commit->object.oid), opts->mainline);
parent = p->item;
} else if (0 < opts->mainline)
- return error(_("Mainline was specified but commit %s is not a merge."),
+ return error(_("mainline was specified but commit %s is not a merge."),
oid_to_hex(&commit->object.oid));
else
parent = commit->parents->item;
return fast_forward_to(commit->object.oid.hash, head, unborn, opts);
if (parent && parse_commit(parent) < 0)
- /* TRANSLATORS: The first %s will be "revert" or
- "cherry-pick", the second %s a SHA1 */
+ /* TRANSLATORS: The first %s will be a "todo" command like
+ "revert" or "pick", the second %s a SHA1. */
return error(_("%s: cannot parse parent commit %s"),
- action_name(opts), oid_to_hex(&parent->object.oid));
+ command_to_string(command),
+ oid_to_hex(&parent->object.oid));
if (get_message(commit, &msg) != 0)
- return error(_("Cannot get commit message for %s"),
+ return error(_("cannot get commit message for %s"),
oid_to_hex(&commit->object.oid));
/*
* reverse of it if we are revert.
*/
- if (opts->action == REPLAY_REVERT) {
+ if (command == TODO_REVERT) {
base = commit;
base_label = msg.label;
next = parent;
}
}
- if (!opts->strategy || !strcmp(opts->strategy, "recursive") || opts->action == REPLAY_REVERT) {
+ if (!opts->strategy || !strcmp(opts->strategy, "recursive") || command == TODO_REVERT) {
res = do_recursive_merge(base, next, base_label, next_label,
head, &msgbuf, opts);
if (res < 0)
return res;
- write_message(&msgbuf, git_path_merge_msg());
+ res |= write_message(msgbuf.buf, msgbuf.len,
+ git_path_merge_msg(), 0);
} else {
struct commit_list *common = NULL;
struct commit_list *remotes = NULL;
- write_message(&msgbuf, git_path_merge_msg());
+ res = write_message(msgbuf.buf, msgbuf.len,
+ git_path_merge_msg(), 0);
commit_list_insert(base, &common);
commit_list_insert(next, &remotes);
- res = try_merge_command(opts->strategy, opts->xopts_nr, opts->xopts,
+ res |= try_merge_command(opts->strategy,
+ opts->xopts_nr, (const char **)opts->xopts,
common, sha1_to_hex(head), remotes);
free_commit_list(common);
free_commit_list(remotes);
}
+ strbuf_release(&msgbuf);
/*
* If the merge was clean or if it failed due to conflict, we write
* However, if the merge did not even start, then we don't want to
* write it at all.
*/
- if (opts->action == REPLAY_PICK && !opts->no_commit && (res == 0 || res == 1))
- update_ref(NULL, "CHERRY_PICK_HEAD", commit->object.oid.hash, NULL,
- REF_NODEREF, UPDATE_REFS_DIE_ON_ERR);
- if (opts->action == REPLAY_REVERT && ((opts->no_commit && res == 0) || res == 1))
- update_ref(NULL, "REVERT_HEAD", commit->object.oid.hash, NULL,
- REF_NODEREF, UPDATE_REFS_DIE_ON_ERR);
+ if (command == TODO_PICK && !opts->no_commit && (res == 0 || res == 1) &&
+ update_ref(NULL, "CHERRY_PICK_HEAD", commit->object.oid.hash, NULL,
+ REF_NODEREF, UPDATE_REFS_MSG_ON_ERR))
+ res = -1;
+ if (command == TODO_REVERT && ((opts->no_commit && res == 0) || res == 1) &&
+ update_ref(NULL, "REVERT_HEAD", commit->object.oid.hash, NULL,
+ REF_NODEREF, UPDATE_REFS_MSG_ON_ERR))
+ res = -1;
if (res) {
- error(opts->action == REPLAY_REVERT
+ error(command == TODO_REVERT
? _("could not revert %s... %s")
: _("could not apply %s... %s"),
- find_unique_abbrev(commit->object.oid.hash, DEFAULT_ABBREV),
- msg.subject);
+ short_commit_name(commit), msg.subject);
print_advice(res == 1, opts);
rerere(opts->allow_rerere_auto);
goto leave;
goto leave;
}
if (!opts->no_commit)
- res = run_git_commit(git_path_merge_msg(), opts, allow);
+ res = run_git_commit(opts->edit ? NULL : git_path_merge_msg(),
+ opts, allow, opts->edit, 0, 0);
leave:
free_message(commit, &msg);
return res;
}
-static void prepare_revs(struct replay_opts *opts)
+static int prepare_revs(struct replay_opts *opts)
{
/*
* picking (but not reverting) ranges (but not individual revisions)
opts->revs->reverse ^= 1;
if (prepare_revision_walk(opts->revs))
- die(_("revision walk setup failed"));
+ return error(_("revision walk setup failed"));
if (!opts->revs->commits)
- die(_("empty commit set passed"));
+ return error(_("empty commit set passed"));
+ return 0;
}
-static void read_and_refresh_cache(struct replay_opts *opts)
+static int read_and_refresh_cache(struct replay_opts *opts)
{
static struct lock_file index_lock;
int index_fd = hold_locked_index(&index_lock, 0);
- if (read_index_preload(&the_index, NULL) < 0)
- die(_("git %s: failed to read the index"), action_name(opts));
+ if (read_index_preload(&the_index, NULL) < 0) {
+ rollback_lock_file(&index_lock);
+ return error(_("git %s: failed to read the index"),
+ _(action_name(opts)));
+ }
refresh_index(&the_index, REFRESH_QUIET|REFRESH_UNMERGED, NULL, NULL, NULL);
if (the_index.cache_changed && index_fd >= 0) {
- if (write_locked_index(&the_index, &index_lock, COMMIT_LOCK))
- die(_("git %s: failed to refresh the index"), action_name(opts));
+ if (write_locked_index(&the_index, &index_lock, COMMIT_LOCK)) {
+ rollback_lock_file(&index_lock);
+ return error(_("git %s: failed to refresh the index"),
+ _(action_name(opts)));
+ }
}
rollback_lock_file(&index_lock);
+ return 0;
}
-static int format_todo(struct strbuf *buf, struct commit_list *todo_list,
- struct replay_opts *opts)
+struct todo_item {
+ enum todo_command command;
+ struct commit *commit;
+ const char *arg;
+ int arg_len;
+ size_t offset_in_buf;
+};
+
+struct todo_list {
+ struct strbuf buf;
+ struct todo_item *items;
+ int nr, alloc, current;
+};
+
+#define TODO_LIST_INIT { STRBUF_INIT }
+
+static void todo_list_release(struct todo_list *todo_list)
{
- struct commit_list *cur = NULL;
- const char *sha1_abbrev = NULL;
- const char *action_str = opts->action == REPLAY_REVERT ? "revert" : "pick";
- const char *subject;
- int subject_len;
+ strbuf_release(&todo_list->buf);
+ free(todo_list->items);
+ todo_list->items = NULL;
+ todo_list->nr = todo_list->alloc = 0;
+}
- for (cur = todo_list; cur; cur = cur->next) {
- const char *commit_buffer = get_commit_buffer(cur->item, NULL);
- sha1_abbrev = find_unique_abbrev(cur->item->object.oid.hash, DEFAULT_ABBREV);
- subject_len = find_commit_subject(commit_buffer, &subject);
- strbuf_addf(buf, "%s %s %.*s\n", action_str, sha1_abbrev,
- subject_len, subject);
- unuse_commit_buffer(cur->item, commit_buffer);
- }
- return 0;
+static struct todo_item *append_new_todo(struct todo_list *todo_list)
+{
+ ALLOC_GROW(todo_list->items, todo_list->nr + 1, todo_list->alloc);
+ return todo_list->items + todo_list->nr++;
}
-static struct commit *parse_insn_line(char *bol, char *eol, struct replay_opts *opts)
+static int parse_insn_line(struct todo_item *item, const char *bol, char *eol)
{
unsigned char commit_sha1[20];
- enum replay_action action;
char *end_of_object_name;
- int saved, status, padding;
-
- if (starts_with(bol, "pick")) {
- action = REPLAY_PICK;
- bol += strlen("pick");
- } else if (starts_with(bol, "revert")) {
- action = REPLAY_REVERT;
- bol += strlen("revert");
- } else
- return NULL;
+ int i, saved, status, padding;
+
+ /* left-trim */
+ bol += strspn(bol, " \t");
+
+ for (i = 0; i < ARRAY_SIZE(todo_command_strings); i++)
+ if (skip_prefix(bol, todo_command_strings[i], &bol)) {
+ item->command = i;
+ break;
+ }
+ if (i >= ARRAY_SIZE(todo_command_strings))
+ return -1;
/* Eat up extra spaces/ tabs before object name */
padding = strspn(bol, " \t");
if (!padding)
- return NULL;
+ return -1;
bol += padding;
- end_of_object_name = bol + strcspn(bol, " \t\n");
+ end_of_object_name = (char *) bol + strcspn(bol, " \t\n");
saved = *end_of_object_name;
*end_of_object_name = '\0';
status = get_sha1(bol, commit_sha1);
*end_of_object_name = saved;
- /*
- * Verify that the action matches up with the one in
- * opts; we don't support arbitrary instructions
- */
- if (action != opts->action) {
- if (action == REPLAY_REVERT)
- error((opts->action == REPLAY_REVERT)
- ? _("Cannot revert during another revert.")
- : _("Cannot revert during a cherry-pick."));
- else
- error((opts->action == REPLAY_REVERT)
- ? _("Cannot cherry-pick during a revert.")
- : _("Cannot cherry-pick during another cherry-pick."));
- return NULL;
- }
+ item->arg = end_of_object_name + strspn(end_of_object_name, " \t");
+ item->arg_len = (int)(eol - item->arg);
if (status < 0)
- return NULL;
+ return -1;
- return lookup_commit_reference(commit_sha1);
+ item->commit = lookup_commit_reference(commit_sha1);
+ return !item->commit;
}
-static int parse_insn_buffer(char *buf, struct commit_list **todo_list,
- struct replay_opts *opts)
+static int parse_insn_buffer(char *buf, struct todo_list *todo_list)
{
- struct commit_list **next = todo_list;
- struct commit *commit;
- char *p = buf;
- int i;
+ struct todo_item *item;
+ char *p = buf, *next_p;
+ int i, res = 0;
- for (i = 1; *p; i++) {
+ for (i = 1; *p; i++, p = next_p) {
char *eol = strchrnul(p, '\n');
- commit = parse_insn_line(p, eol, opts);
- if (!commit)
- return error(_("Could not parse line %d."), i);
- next = commit_list_append(commit, next);
- p = *eol ? eol + 1 : eol;
+
+ next_p = *eol ? eol + 1 /* skip LF */ : eol;
+
+ if (p != eol && eol[-1] == '\r')
+ eol--; /* strip Carriage Return */
+
+ item = append_new_todo(todo_list);
+ item->offset_in_buf = p - todo_list->buf.buf;
+ if (parse_insn_line(item, p, eol)) {
+ res = error(_("invalid line %d: %.*s"),
+ i, (int)(eol - p), p);
+ item->command = -1;
+ }
}
- if (!*todo_list)
- return error(_("No commits parsed."));
- return 0;
+ if (!todo_list->nr)
+ return error(_("no commits parsed."));
+ return res;
}
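
These two parsers replace the old single-action model: `parse_insn_line()` accepts any command listed in `todo_command_strings`, resolves the object name with `get_sha1()`, and keeps the rest of the line as `arg`, while `parse_insn_buffer()` walks the sheet line by line (tolerating CRLF endings) and records each item's offset so that `save_todo()` can later write back only the not-yet-processed tail. For illustration, a cherry-pick sheet the parser accepts looks something like this (the abbreviated object names are placeholders and must resolve in the repository):

    pick fa1afe1 first change to replay
    pick 0defec8 second change to replay

A revert sheet uses `revert` lines instead; mixing commands that do not match the current action is rejected by `read_populate_todo()` below.
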
-static void read_populate_todo(struct commit_list **todo_list,
+static int read_populate_todo(struct todo_list *todo_list,
struct replay_opts *opts)
{
- struct strbuf buf = STRBUF_INIT;
+ const char *todo_file = get_todo_path(opts);
int fd, res;
- fd = open(git_path_todo_file(), O_RDONLY);
+ strbuf_reset(&todo_list->buf);
+ fd = open(todo_file, O_RDONLY);
if (fd < 0)
- die_errno(_("Could not open %s"), git_path_todo_file());
- if (strbuf_read(&buf, fd, 0) < 0) {
+ return error_errno(_("could not open '%s'"), todo_file);
+ if (strbuf_read(&todo_list->buf, fd, 0) < 0) {
close(fd);
- strbuf_release(&buf);
- die(_("Could not read %s."), git_path_todo_file());
+ return error(_("could not read '%s'."), todo_file);
}
close(fd);
- res = parse_insn_buffer(buf.buf, todo_list, opts);
- strbuf_release(&buf);
+ res = parse_insn_buffer(todo_list->buf.buf, todo_list);
if (res)
- die(_("Unusable instruction sheet: %s"), git_path_todo_file());
+ return error(_("unusable instruction sheet: '%s'"), todo_file);
+
+ if (!is_rebase_i(opts)) {
+ enum todo_command valid =
+ opts->action == REPLAY_PICK ? TODO_PICK : TODO_REVERT;
+ int i;
+
+ for (i = 0; i < todo_list->nr; i++)
+ if (valid == todo_list->items[i].command)
+ continue;
+ else if (valid == TODO_PICK)
+ return error(_("cannot cherry-pick during a revert."));
+ else
+ return error(_("cannot revert during a cherry-pick."));
+ }
+
+ return 0;
+}
+
+static int git_config_string_dup(char **dest,
+ const char *var, const char *value)
+{
+ if (!value)
+ return config_error_nonbool(var);
+ free(*dest);
+ *dest = xstrdup(value);
+ return 0;
}
static int populate_opts_cb(const char *key, const char *value, void *data)
else if (!strcmp(key, "options.mainline"))
opts->mainline = git_config_int(key, value);
else if (!strcmp(key, "options.strategy"))
- git_config_string(&opts->strategy, key, value);
+ git_config_string_dup(&opts->strategy, key, value);
else if (!strcmp(key, "options.gpg-sign"))
- git_config_string(&opts->gpg_sign, key, value);
+ git_config_string_dup(&opts->gpg_sign, key, value);
else if (!strcmp(key, "options.strategy-option")) {
ALLOC_GROW(opts->xopts, opts->xopts_nr + 1, opts->xopts_alloc);
opts->xopts[opts->xopts_nr++] = xstrdup(value);
} else
- return error(_("Invalid key: %s"), key);
+ return error(_("invalid key: %s"), key);
if (!error_flag)
- return error(_("Invalid value for %s: %s"), key, value);
+ return error(_("invalid value for %s: %s"), key, value);
return 0;
}
-static void read_populate_opts(struct replay_opts **opts_ptr)
+static int read_populate_opts(struct replay_opts *opts)
{
+ if (is_rebase_i(opts)) {
+ struct strbuf buf = STRBUF_INIT;
+
+ if (read_oneliner(&buf, rebase_path_gpg_sign_opt(), 1)) {
+ if (!starts_with(buf.buf, "-S"))
+ strbuf_reset(&buf);
+ else {
+ free(opts->gpg_sign);
+ opts->gpg_sign = xstrdup(buf.buf + 2);
+ }
+ }
+ strbuf_release(&buf);
+
+ return 0;
+ }
+
if (!file_exists(git_path_opts_file()))
- return;
- if (git_config_from_file(populate_opts_cb, git_path_opts_file(), *opts_ptr) < 0)
- die(_("Malformed options sheet: %s"), git_path_opts_file());
+ return 0;
+ /*
+ * The function git_parse_source(), called from git_config_from_file(),
+ * may die() in case of a syntactically incorrect file. We do not care
+ * about this case, though, because we wrote that file ourselves, so we
+ * are pretty certain that it is syntactically correct.
+ */
+ if (git_config_from_file(populate_opts_cb, git_path_opts_file(), opts) < 0)
+ return error(_("malformed options sheet: '%s'"),
+ git_path_opts_file());
+ return 0;
}
-static void walk_revs_populate_todo(struct commit_list **todo_list,
+static int walk_revs_populate_todo(struct todo_list *todo_list,
struct replay_opts *opts)
{
+ enum todo_command command = opts->action == REPLAY_PICK ?
+ TODO_PICK : TODO_REVERT;
+ const char *command_string = todo_command_strings[command];
struct commit *commit;
- struct commit_list **next;
- prepare_revs(opts);
+ if (prepare_revs(opts))
+ return -1;
- next = todo_list;
- while ((commit = get_revision(opts->revs)))
- next = commit_list_append(commit, next);
+ while ((commit = get_revision(opts->revs))) {
+ struct todo_item *item = append_new_todo(todo_list);
+ const char *commit_buffer = get_commit_buffer(commit, NULL);
+ const char *subject;
+ int subject_len;
+
+ item->command = command;
+ item->commit = commit;
+ item->arg = NULL;
+ item->arg_len = 0;
+ item->offset_in_buf = todo_list->buf.len;
+ subject_len = find_commit_subject(commit_buffer, &subject);
+ strbuf_addf(&todo_list->buf, "%s %s %.*s\n", command_string,
+ short_commit_name(commit), subject_len, subject);
+ unuse_commit_buffer(commit, commit_buffer);
+ }
+ return 0;
}
static int create_seq_dir(void)
return -1;
}
else if (mkdir(git_path_seq_dir(), 0777) < 0)
- die_errno(_("Could not create sequencer directory %s"),
- git_path_seq_dir());
+ return error_errno(_("could not create sequencer directory '%s'"),
+ git_path_seq_dir());
return 0;
}
-static void save_head(const char *head)
+static int save_head(const char *head)
{
static struct lock_file head_lock;
struct strbuf buf = STRBUF_INIT;
int fd;
- fd = hold_lock_file_for_update(&head_lock, git_path_head_file(), LOCK_DIE_ON_ERROR);
+ fd = hold_lock_file_for_update(&head_lock, git_path_head_file(), 0);
+ if (fd < 0) {
+ rollback_lock_file(&head_lock);
+ return error_errno(_("could not lock HEAD"));
+ }
strbuf_addf(&buf, "%s\n", head);
- if (write_in_full(fd, buf.buf, buf.len) < 0)
- die_errno(_("Could not write to %s"), git_path_head_file());
- if (commit_lock_file(&head_lock) < 0)
- die(_("Error wrapping up %s."), git_path_head_file());
+ if (write_in_full(fd, buf.buf, buf.len) < 0) {
+ rollback_lock_file(&head_lock);
+ return error_errno(_("could not write to '%s'"),
+ git_path_head_file());
+ }
+ if (commit_lock_file(&head_lock) < 0) {
+ rollback_lock_file(&head_lock);
+ return error(_("failed to finalize '%s'."), git_path_head_file());
+ }
+ return 0;
}
static int reset_for_rollback(const unsigned char *sha1)
return reset_for_rollback(head_sha1);
}
-static int sequencer_rollback(struct replay_opts *opts)
+int sequencer_rollback(struct replay_opts *opts)
{
FILE *f;
unsigned char sha1[20];
return rollback_single_pick();
}
if (!f)
- return error_errno(_("cannot open %s"), git_path_head_file());
+ return error_errno(_("cannot open '%s'"), git_path_head_file());
if (strbuf_getline_lf(&buf, f)) {
- error(_("cannot read %s: %s"), git_path_head_file(),
+ error(_("cannot read '%s': %s"), git_path_head_file(),
ferror(f) ? strerror(errno) : _("unexpected end of file"));
fclose(f);
goto fail;
}
if (reset_for_rollback(sha1))
goto fail;
- remove_sequencer_state();
strbuf_release(&buf);
- return 0;
+ return sequencer_remove_state(opts);
fail:
strbuf_release(&buf);
return -1;
}
-static void save_todo(struct commit_list *todo_list, struct replay_opts *opts)
+static int save_todo(struct todo_list *todo_list, struct replay_opts *opts)
{
static struct lock_file todo_lock;
- struct strbuf buf = STRBUF_INIT;
- int fd;
+ const char *todo_path = get_todo_path(opts);
+ int next = todo_list->current, offset, fd;
- fd = hold_lock_file_for_update(&todo_lock, git_path_todo_file(), LOCK_DIE_ON_ERROR);
- if (format_todo(&buf, todo_list, opts) < 0)
- die(_("Could not format %s."), git_path_todo_file());
- if (write_in_full(fd, buf.buf, buf.len) < 0) {
- strbuf_release(&buf);
- die_errno(_("Could not write to %s"), git_path_todo_file());
- }
- if (commit_lock_file(&todo_lock) < 0) {
- strbuf_release(&buf);
- die(_("Error wrapping up %s."), git_path_todo_file());
- }
- strbuf_release(&buf);
+ fd = hold_lock_file_for_update(&todo_lock, todo_path, 0);
+ if (fd < 0)
+ return error_errno(_("could not lock '%s'"), todo_path);
+ offset = next < todo_list->nr ?
+ todo_list->items[next].offset_in_buf : todo_list->buf.len;
+ if (write_in_full(fd, todo_list->buf.buf + offset,
+ todo_list->buf.len - offset) < 0)
+ return error_errno(_("could not write to '%s'"), todo_path);
+ if (commit_lock_file(&todo_lock) < 0)
+ return error(_("failed to finalize '%s'."), todo_path);
+ return 0;
}
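
save_todo() above no longer re-formats the whole list; it writes the todo buffer starting at the offset of the first item that has not been picked yet, so the on-disk file always holds exactly the remaining work. A minimal sketch of that slicing, using only the todo_list fields introduced in this patch (current, nr, items[].offset_in_buf, buf); the dump helper itself is hypothetical:

    static void dump_remaining(const struct todo_list *todo_list)
    {
            int next = todo_list->current;
            int offset = next < todo_list->nr ?
                    todo_list->items[next].offset_in_buf : todo_list->buf.len;

            /* e.g. with two picks and current == 1, only the second
             * "pick <abbrev-sha1> <subject>" line is written out */
            fwrite(todo_list->buf.buf + offset, 1,
                   todo_list->buf.len - offset, stdout);
    }
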
-static void save_opts(struct replay_opts *opts)
+static int save_opts(struct replay_opts *opts)
{
const char *opts_file = git_path_opts_file();
+ int res = 0;
if (opts->no_commit)
- git_config_set_in_file(opts_file, "options.no-commit", "true");
+ res |= git_config_set_in_file_gently(opts_file, "options.no-commit", "true");
if (opts->edit)
- git_config_set_in_file(opts_file, "options.edit", "true");
+ res |= git_config_set_in_file_gently(opts_file, "options.edit", "true");
if (opts->signoff)
- git_config_set_in_file(opts_file, "options.signoff", "true");
+ res |= git_config_set_in_file_gently(opts_file, "options.signoff", "true");
if (opts->record_origin)
- git_config_set_in_file(opts_file, "options.record-origin", "true");
+ res |= git_config_set_in_file_gently(opts_file, "options.record-origin", "true");
if (opts->allow_ff)
- git_config_set_in_file(opts_file, "options.allow-ff", "true");
+ res |= git_config_set_in_file_gently(opts_file, "options.allow-ff", "true");
if (opts->mainline) {
struct strbuf buf = STRBUF_INIT;
strbuf_addf(&buf, "%d", opts->mainline);
- git_config_set_in_file(opts_file, "options.mainline", buf.buf);
+ res |= git_config_set_in_file_gently(opts_file, "options.mainline", buf.buf);
strbuf_release(&buf);
}
if (opts->strategy)
- git_config_set_in_file(opts_file, "options.strategy", opts->strategy);
+ res |= git_config_set_in_file_gently(opts_file, "options.strategy", opts->strategy);
if (opts->gpg_sign)
- git_config_set_in_file(opts_file, "options.gpg-sign", opts->gpg_sign);
+ res |= git_config_set_in_file_gently(opts_file, "options.gpg-sign", opts->gpg_sign);
if (opts->xopts) {
int i;
for (i = 0; i < opts->xopts_nr; i++)
- git_config_set_multivar_in_file(opts_file,
+ res |= git_config_set_multivar_in_file_gently(opts_file,
"options.strategy-option",
opts->xopts[i], "^$", 0);
}
+ return res;
}
-static int pick_commits(struct commit_list *todo_list, struct replay_opts *opts)
+static int pick_commits(struct todo_list *todo_list, struct replay_opts *opts)
{
- struct commit_list *cur;
int res;
setenv(GIT_REFLOG_ACTION, action_name(opts), 0);
if (opts->allow_ff)
assert(!(opts->signoff || opts->no_commit ||
opts->record_origin || opts->edit));
- read_and_refresh_cache(opts);
+ if (read_and_refresh_cache(opts))
+ return -1;
- for (cur = todo_list; cur; cur = cur->next) {
- save_todo(cur, opts);
- res = do_pick_commit(cur->item, opts);
+ while (todo_list->current < todo_list->nr) {
+ struct todo_item *item = todo_list->items + todo_list->current;
+ if (save_todo(todo_list, opts))
+ return -1;
+ res = do_pick_commit(item->command, item->commit, opts);
+ todo_list->current++;
if (res)
return res;
}
* Sequence of picks finished successfully; cleanup by
* removing the .git/sequencer directory
*/
- remove_sequencer_state();
- return 0;
+ return sequencer_remove_state(opts);
}
static int continue_single_pick(void)
return run_command_v_opt(argv, RUN_GIT_CMD);
}
-static int sequencer_continue(struct replay_opts *opts)
+int sequencer_continue(struct replay_opts *opts)
{
- struct commit_list *todo_list = NULL;
+ struct todo_list todo_list = TODO_LIST_INIT;
+ int res;
- if (!file_exists(git_path_todo_file()))
+ if (read_and_refresh_cache(opts))
+ return -1;
+
+ if (!file_exists(get_todo_path(opts)))
return continue_single_pick();
- read_populate_opts(&opts);
- read_populate_todo(&todo_list, opts);
+ if (read_populate_opts(opts))
+ return -1;
+ if ((res = read_populate_todo(&todo_list, opts)))
+ goto release_todo_list;
/* Verify that the conflict has been resolved */
if (file_exists(git_path_cherry_pick_head()) ||
file_exists(git_path_revert_head())) {
- int ret = continue_single_pick();
- if (ret)
- return ret;
+ res = continue_single_pick();
+ if (res)
+ goto release_todo_list;
+ }
+ if (index_differs_from("HEAD", 0, 0)) {
+ res = error_dirty_index(opts);
+ goto release_todo_list;
}
- if (index_differs_from("HEAD", 0))
- return error_dirty_index(opts);
- todo_list = todo_list->next;
- return pick_commits(todo_list, opts);
+ todo_list.current++;
+ res = pick_commits(&todo_list, opts);
+release_todo_list:
+ todo_list_release(&todo_list);
+ return res;
}
static int single_pick(struct commit *cmit, struct replay_opts *opts)
{
setenv(GIT_REFLOG_ACTION, action_name(opts), 0);
- return do_pick_commit(cmit, opts);
+ return do_pick_commit(opts->action == REPLAY_PICK ?
+ TODO_PICK : TODO_REVERT, cmit, opts);
}
int sequencer_pick_revisions(struct replay_opts *opts)
{
- struct commit_list *todo_list = NULL;
+ struct todo_list todo_list = TODO_LIST_INIT;
unsigned char sha1[20];
- int i;
-
- if (opts->subcommand == REPLAY_NONE)
- assert(opts->revs);
-
- read_and_refresh_cache(opts);
+ int i, res;
- /*
- * Decide what to do depending on the arguments; a fresh
- * cherry-pick should be handled differently from an existing
- * one that is being continued
- */
- if (opts->subcommand == REPLAY_REMOVE_STATE) {
- remove_sequencer_state();
- return 0;
- }
- if (opts->subcommand == REPLAY_ROLLBACK)
- return sequencer_rollback(opts);
- if (opts->subcommand == REPLAY_CONTINUE)
- return sequencer_continue(opts);
+ assert(opts->revs);
+ if (read_and_refresh_cache(opts))
+ return -1;
for (i = 0; i < opts->revs->pending.nr; i++) {
unsigned char sha1[20];
if (!get_sha1(name, sha1)) {
if (!lookup_commit_reference_gently(sha1, 1)) {
enum object_type type = sha1_object_info(sha1, NULL);
- die(_("%s: can't cherry-pick a %s"), name, typename(type));
+ return error(_("%s: can't cherry-pick a %s"),
+ name, typename(type));
}
} else
- die(_("%s: bad revision"), name);
+ return error(_("%s: bad revision"), name);
}
/*
!opts->revs->cmdline.rev->flags) {
struct commit *cmit;
if (prepare_revision_walk(opts->revs))
- die(_("revision walk setup failed"));
+ return error(_("revision walk setup failed"));
cmit = get_revision(opts->revs);
if (!cmit || get_revision(opts->revs))
- die("BUG: expected exactly one commit from walk");
+ return error("BUG: expected exactly one commit from walk");
return single_pick(cmit, opts);
}
* progress
*/
- walk_revs_populate_todo(&todo_list, opts);
- if (create_seq_dir() < 0)
+ if (walk_revs_populate_todo(&todo_list, opts) ||
+ create_seq_dir() < 0)
return -1;
if (get_sha1("HEAD", sha1) && (opts->action == REPLAY_REVERT))
- return error(_("Can't revert as initial commit"));
- save_head(sha1_to_hex(sha1));
- save_opts(opts);
- return pick_commits(todo_list, opts);
+ return error(_("can't revert as initial commit"));
+ if (save_head(sha1_to_hex(sha1)))
+ return -1;
+ if (save_opts(opts))
+ return -1;
+ res = pick_commits(&todo_list, opts);
+ todo_list_release(&todo_list);
+ return res;
}
void append_signoff(struct strbuf *msgbuf, int ignore_footer, unsigned flag)
#ifndef SEQUENCER_H
#define SEQUENCER_H
-#define SEQ_DIR "sequencer"
-#define SEQ_HEAD_FILE "sequencer/head"
-#define SEQ_TODO_FILE "sequencer/todo"
-#define SEQ_OPTS_FILE "sequencer/opts"
+const char *git_path_seq_dir(void);
#define APPEND_SIGNOFF_DEDUP (1u << 0)
REPLAY_PICK
};
-enum replay_subcommand {
- REPLAY_NONE,
- REPLAY_REMOVE_STATE,
- REPLAY_CONTINUE,
- REPLAY_ROLLBACK
-};
-
struct replay_opts {
enum replay_action action;
- enum replay_subcommand subcommand;
/* Boolean options */
int edit;
int mainline;
- const char *gpg_sign;
+ char *gpg_sign;
/* Merge strategy */
- const char *strategy;
- const char **xopts;
+ char *strategy;
+ char **xopts;
size_t xopts_nr, xopts_alloc;
/* Only used by REPLAY_NONE */
struct rev_info *revs;
};
+#define REPLAY_OPTS_INIT { -1 }
int sequencer_pick_revisions(struct replay_opts *opts);
+int sequencer_continue(struct replay_opts *opts);
+int sequencer_rollback(struct replay_opts *opts);
+int sequencer_remove_state(struct replay_opts *opts);
extern const char sign_off_header[];
}
/* renumber them */
- qsort(info, num_pack, sizeof(info[0]), compare_info);
+ QSORT(info, num_pack, compare_info);
for (i = 0; i < num_pack; i++)
info[i]->new_num = i;
}
static inline void
string_list_sort (string_list_ty *slp)
{
- if (slp->nitems > 0)
- qsort (slp->item, slp->nitems, sizeof (slp->item[0]), cmp_string);
+ QSORT(slp->item, slp->nitems, cmp_string);
}
/* Test whether a sorted string list contains a given string. */
static void sha1_array_sort(struct sha1_array *array)
{
- qsort(array->sha1, array->nr, sizeof(*array->sha1), void_hashcmp);
+ QSORT(array->sha1, array->nr, void_hashcmp);
array->sorted = 1;
}
array->sorted = 0;
}
-void sha1_array_for_each_unique(struct sha1_array *array,
+int sha1_array_for_each_unique(struct sha1_array *array,
for_each_sha1_fn fn,
void *data)
{
sha1_array_sort(array);
for (i = 0; i < array->nr; i++) {
+ int ret;
if (i > 0 && !hashcmp(array->sha1[i], array->sha1[i-1]))
continue;
- fn(array->sha1[i], data);
+ ret = fn(array->sha1[i], data);
+ if (ret)
+ return ret;
}
+ return 0;
}
int sha1_array_lookup(struct sha1_array *array, const unsigned char *sha1);
void sha1_array_clear(struct sha1_array *array);
-typedef void (*for_each_sha1_fn)(const unsigned char sha1[20],
- void *data);
-void sha1_array_for_each_unique(struct sha1_array *array,
- for_each_sha1_fn fn,
+typedef int (*for_each_sha1_fn)(const unsigned char sha1[20],
void *data);
+int sha1_array_for_each_unique(struct sha1_array *array,
+ for_each_sha1_fn fn,
+ void *data);
#endif /* SHA1_ARRAY_H */
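
With the callback type now returning int, sha1_array_for_each_unique() stops as soon as a callback returns a non-zero value and propagates that value to its caller. A minimal sketch of the early-exit pattern this enables (the helper names are illustrative):

    static int stop_at_first(const unsigned char sha1[20], void *data)
    {
            hashcpy(data, sha1);    /* remember the first unique entry */
            return 1;               /* non-zero ends the iteration early */
    }

    static int first_unique(struct sha1_array *array, unsigned char out[20])
    {
            /* non-zero iff the array contained at least one entry */
            return sha1_array_for_each_unique(array, stop_at_first, out);
    }
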
#include "streaming.h"
#include "dir.h"
#include "mru.h"
+#include "list.h"
+#include "mergesort.h"
#ifndef O_NOATIME
#if defined(__linux__) && (defined(__i386__) || defined(__PPC__))
const unsigned char null_sha1[20];
const struct object_id null_oid;
+const struct object_id empty_tree_oid = {
+ EMPTY_TREE_SHA1_BIN_LITERAL
+};
+const struct object_id empty_blob_oid = {
+ EMPTY_BLOB_SHA1_BIN_LITERAL
+};
/*
* This is meant to hold a *small* number of objects that you would
return result;
}
-static void fill_sha1_path(char *pathbuf, const unsigned char *sha1)
+static void fill_sha1_path(struct strbuf *buf, const unsigned char *sha1)
{
int i;
for (i = 0; i < 20; i++) {
static char hex[] = "0123456789abcdef";
unsigned int val = sha1[i];
- char *pos = pathbuf + i*2 + (i > 0);
- *pos++ = hex[val >> 4];
- *pos = hex[val & 0xf];
+ strbuf_addch(buf, hex[val >> 4]);
+ strbuf_addch(buf, hex[val & 0xf]);
+ if (!i)
+ strbuf_addch(buf, '/');
}
}
const char *sha1_file_name(const unsigned char *sha1)
{
- static char buf[PATH_MAX];
- const char *objdir;
- int len;
+ static struct strbuf buf = STRBUF_INIT;
+
+ strbuf_reset(&buf);
+ strbuf_addf(&buf, "%s/", get_object_directory());
+
+ fill_sha1_path(&buf, sha1);
+ return buf.buf;
+}
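
fill_sha1_path() now appends directly into a strbuf, producing the usual two-character fan-out directory followed by the remaining 38 hex digits, and sha1_file_name() simply prefixes that with the object directory. As a sketch (not part of the patch), the well-known empty blob maps to "<objdir>/e6/9de29bb2d1d6434b8b29ae775ad8c2e48c5391":

    static void show_loose_path(const unsigned char *sha1)
    {
            /* prints something like
             * ".git/objects/e6/9de29bb2d1d6434b8b29ae775ad8c2e48c5391" */
            puts(sha1_file_name(sha1));
    }
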
- objdir = get_object_directory();
- len = strlen(objdir);
+struct strbuf *alt_scratch_buf(struct alternate_object_database *alt)
+{
+ strbuf_setlen(&alt->scratch, alt->base_len);
+ return &alt->scratch;
+}
- /* '/' + sha1(2) + '/' + sha1(38) + '\0' */
- if (len + 43 > PATH_MAX)
- die("insanely long object directory %s", objdir);
- memcpy(buf, objdir, len);
- buf[len] = '/';
- buf[len+3] = '/';
- buf[len+42] = '\0';
- fill_sha1_path(buf + len + 1, sha1);
- return buf;
+static const char *alt_sha1_path(struct alternate_object_database *alt,
+ const unsigned char *sha1)
+{
+ struct strbuf *buf = alt_scratch_buf(alt);
+ fill_sha1_path(buf, sha1);
+ return buf->buf;
}
/*
struct alternate_object_database *alt_odb_list;
static struct alternate_object_database **alt_odb_tail;
+/*
+ * Return non-zero iff the path is usable as an alternate object database.
+ */
+static int alt_odb_usable(struct strbuf *path, const char *normalized_objdir)
+{
+ struct alternate_object_database *alt;
+
+ /* Detect cases where alternate disappeared */
+ if (!is_directory(path->buf)) {
+ error("object directory %s does not exist; "
+ "check .git/objects/info/alternates.",
+ path->buf);
+ return 0;
+ }
+
+ /*
+ * Prevent the common mistake of listing the same
+ * thing twice, or the object directory itself.
+ */
+ for (alt = alt_odb_list; alt; alt = alt->next) {
+ if (!fspathcmp(path->buf, alt->path))
+ return 0;
+ }
+ if (!fspathcmp(path->buf, normalized_objdir))
+ return 0;
+
+ return 1;
+}
+
/*
* Prepare alternate object database registry.
*
int depth, const char *normalized_objdir)
{
struct alternate_object_database *ent;
- struct alternate_object_database *alt;
- size_t pfxlen, entlen;
struct strbuf pathbuf = STRBUF_INIT;
if (!is_absolute_path(entry) && relative_base) {
}
strbuf_addstr(&pathbuf, entry);
- normalize_path_copy(pathbuf.buf, pathbuf.buf);
-
- pfxlen = strlen(pathbuf.buf);
+ if (strbuf_normalize_path(&pathbuf) < 0) {
+ error("unable to normalize alternate object path: %s",
+ pathbuf.buf);
+ strbuf_release(&pathbuf);
+ return -1;
+ }
/*
* The trailing slash after the directory name is given by
* this function at the end. Remove duplicates.
*/
- while (pfxlen && pathbuf.buf[pfxlen-1] == '/')
- pfxlen -= 1;
-
- entlen = st_add(pfxlen, 43); /* '/' + 2 hex + '/' + 38 hex + NUL */
- ent = xmalloc(st_add(sizeof(*ent), entlen));
- memcpy(ent->base, pathbuf.buf, pfxlen);
- strbuf_release(&pathbuf);
-
- ent->name = ent->base + pfxlen + 1;
- ent->base[pfxlen + 3] = '/';
- ent->base[pfxlen] = ent->base[entlen-1] = 0;
+ while (pathbuf.len && pathbuf.buf[pathbuf.len - 1] == '/')
+ strbuf_setlen(&pathbuf, pathbuf.len - 1);
- /* Detect cases where alternate disappeared */
- if (!is_directory(ent->base)) {
- error("object directory %s does not exist; "
- "check .git/objects/info/alternates.",
- ent->base);
- free(ent);
+ if (!alt_odb_usable(&pathbuf, normalized_objdir)) {
+ strbuf_release(&pathbuf);
return -1;
}
- /* Prevent the common mistake of listing the same
- * thing twice, or object directory itself.
- */
- for (alt = alt_odb_list; alt; alt = alt->next) {
- if (pfxlen == alt->name - alt->base - 1 &&
- !memcmp(ent->base, alt->base, pfxlen)) {
- free(ent);
- return -1;
- }
- }
- if (!fspathcmp(ent->base, normalized_objdir)) {
- free(ent);
- return -1;
- }
+ ent = alloc_alt_odb(pathbuf.buf);
/* add the alternate entry */
*alt_odb_tail = ent;
ent->next = NULL;
/* recursively add alternates */
- read_info_alternates(ent->base, depth + 1);
-
- ent->base[pfxlen] = '/';
+ read_info_alternates(pathbuf.buf, depth + 1);
+ strbuf_release(&pathbuf);
return 0;
}
}
strbuf_add_absolute_path(&objdirbuf, get_object_directory());
- normalize_path_copy(objdirbuf.buf, objdirbuf.buf);
+ if (strbuf_normalize_path(&objdirbuf) < 0)
+ die("unable to normalize object directory: %s",
+ objdirbuf.buf);
alt_copy = xmemdupz(alt, len);
string_list_split_in_place(&entries, alt_copy, sep, -1);
const char *entry = entries.items[i].string;
if (entry[0] == '\0' || entry[0] == '#')
continue;
- if (!is_absolute_path(entry) && depth) {
- error("%s: ignoring relative alternate object store %s",
- relative_base, entry);
- } else {
- link_alt_odb_entry(entry, relative_base, depth, objdirbuf.buf);
- }
+ link_alt_odb_entry(entry, relative_base, depth, objdirbuf.buf);
}
string_list_clear(&entries, 0);
free(alt_copy);
int fd;
path = xstrfmt("%s/info/alternates", relative_base);
- fd = git_open_noatime(path);
+ fd = git_open(path);
free(path);
if (fd < 0)
return;
munmap(map, mapsz);
}
+struct alternate_object_database *alloc_alt_odb(const char *dir)
+{
+ struct alternate_object_database *ent;
+
+ FLEX_ALLOC_STR(ent, path, dir);
+ strbuf_init(&ent->scratch, 0);
+ strbuf_addf(&ent->scratch, "%s/", dir);
+ ent->base_len = ent->scratch.len;
+
+ return ent;
+}
+
void add_to_alternates_file(const char *reference)
{
struct lock_file *lock = xcalloc(1, sizeof(struct lock_file));
free(alts);
}
+void add_to_alternates_memory(const char *reference)
+{
+ /*
+ * Make sure alternates are initialized, or else our entry may be
+ * overwritten when they are.
+ */
+ prepare_alt_odb();
+
+ link_alt_odb_entries(reference, strlen(reference), '\n', NULL, 0);
+}
+
+/*
+ * Compute the exact path an alternate is at and return it. In case of
+ * error, NULL is returned and a human-readable error is added to `err`.
+ * `path` may be relative and should point to $GITDIR.
+ * `err` must not be null.
+ */
+char *compute_alternate_path(const char *path, struct strbuf *err)
+{
+ char *ref_git = NULL;
+ const char *repo, *ref_git_s;
+ int seen_error = 0;
+
+ ref_git_s = real_path_if_valid(path);
+ if (!ref_git_s) {
+ seen_error = 1;
+ strbuf_addf(err, _("path '%s' does not exist"), path);
+ goto out;
+ } else
+ /*
+ * Beware: read_gitfile(), real_path() and mkpath()
+ * return static buffers
+ */
+ ref_git = xstrdup(ref_git_s);
+
+ repo = read_gitfile(ref_git);
+ if (!repo)
+ repo = read_gitfile(mkpath("%s/.git", ref_git));
+ if (repo) {
+ free(ref_git);
+ ref_git = xstrdup(repo);
+ }
+
+ if (!repo && is_directory(mkpath("%s/.git/objects", ref_git))) {
+ char *ref_git_git = mkpathdup("%s/.git", ref_git);
+ free(ref_git);
+ ref_git = ref_git_git;
+ } else if (!is_directory(mkpath("%s/objects", ref_git))) {
+ struct strbuf sb = STRBUF_INIT;
+ seen_error = 1;
+ if (get_common_dir(&sb, ref_git)) {
+ strbuf_addf(err,
+ _("reference repository '%s' as a linked "
+ "checkout is not supported yet."),
+ path);
+ goto out;
+ }
+
+ strbuf_addf(err, _("reference repository '%s' is not a "
+ "local repository."), path);
+ goto out;
+ }
+
+ if (!access(mkpath("%s/shallow", ref_git), F_OK)) {
+ strbuf_addf(err, _("reference repository '%s' is shallow"),
+ path);
+ seen_error = 1;
+ goto out;
+ }
+
+ if (!access(mkpath("%s/info/grafts", ref_git), F_OK)) {
+ strbuf_addf(err,
+ _("reference repository '%s' is grafted"),
+ path);
+ seen_error = 1;
+ goto out;
+ }
+
+out:
+ if (seen_error) {
+ free(ref_git);
+ ref_git = NULL;
+ }
+
+ return ref_git;
+}
+
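A minimal usage sketch for the two new helpers, assuming a caller that wants to borrow objects from a --reference-style repository; the wrapper function and its error handling are illustrative, and the "/objects" suffix reflects that compute_alternate_path() returns the repository itself rather than its object directory:

    static void borrow_objects_from(const char *reference)
    {
            struct strbuf err = STRBUF_INIT;
            char *ref_git = compute_alternate_path(reference, &err);

            if (!ref_git)
                    die("%s", err.buf);     /* human-readable reason from `err` */

            /* register the object store in-memory only */
            add_to_alternates_memory(mkpath("%s/objects", ref_git));

            free(ref_git);
            strbuf_release(&err);
    }
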
int foreach_alt_odb(alt_odb_fn fn, void *cb)
{
struct alternate_object_database *ent;
struct alternate_object_database *alt;
prepare_alt_odb();
for (alt = alt_odb_list; alt; alt = alt->next) {
- fill_sha1_path(alt->name, sha1);
- if (check_and_freshen_file(alt->base, freshen))
+ const char *path = alt_sha1_path(alt, sha1);
+ if (check_and_freshen_file(path, freshen))
return 1;
}
return 0;
struct pack_idx_header *hdr;
size_t idx_size;
uint32_t version, nr, i, *index;
- int fd = git_open_noatime(path);
+ int fd = git_open(path);
struct stat st;
if (fd < 0)
while (pack_max_fds <= pack_open_fds && close_one_pack())
; /* nothing */
- p->pack_fd = git_open_noatime(p->pack_name);
+ p->pack_fd = git_open(p->pack_name);
if (p->pack_fd < 0 || fstat(p->pack_fd, &st))
return -1;
pack_open_fds++;
strbuf_release(&path);
}
+static int approximate_object_count_valid;
+
+/*
+ * Give a fast, rough count of the number of objects in the repository. This
+ * ignores loose objects completely. If you have a lot of them, then either
+ * you should repack because your performance will be awful, or they are
+ * all unreachable objects about to be pruned, in which case they're not really
+ * interesting as a measure of repo size in the first place.
+ */
+unsigned long approximate_object_count(void)
+{
+ static unsigned long count;
+ if (!approximate_object_count_valid) {
+ struct packed_git *p;
+
+ prepare_packed_git();
+ count = 0;
+ for (p = packed_git; p; p = p->next) {
+ if (open_pack_index(p))
+ continue;
+ count += p->num_objects;
+ }
+ }
+ return count;
+}
+
+static void *get_next_packed_git(const void *p)
+{
+ return ((const struct packed_git *)p)->next;
+}
+
+static void set_next_packed_git(void *p, void *next)
+{
+ ((struct packed_git *)p)->next = next;
+}
+
static int sort_pack(const void *a_, const void *b_)
{
- struct packed_git *a = *((struct packed_git **)a_);
- struct packed_git *b = *((struct packed_git **)b_);
+ const struct packed_git *a = a_;
+ const struct packed_git *b = b_;
int st;
/*
static void rearrange_packed_git(void)
{
- struct packed_git **ary, *p;
- int i, n;
-
- for (n = 0, p = packed_git; p; p = p->next)
- n++;
- if (n < 2)
- return;
-
- /* prepare an array of packed_git for easier sorting */
- ary = xcalloc(n, sizeof(struct packed_git *));
- for (n = 0, p = packed_git; p; p = p->next)
- ary[n++] = p;
-
- qsort(ary, n, sizeof(struct packed_git *), sort_pack);
-
- /* link them back again */
- for (i = 0; i < n - 1; i++)
- ary[i]->next = ary[i + 1];
- ary[n - 1]->next = NULL;
- packed_git = ary[0];
-
- free(ary);
+ packed_git = llist_mergesort(packed_git, get_next_packed_git,
+ set_next_packed_git, sort_pack);
}
static void prepare_packed_git_mru(void)
return;
prepare_packed_git_one(get_object_directory(), 1);
prepare_alt_odb();
- for (alt = alt_odb_list; alt; alt = alt->next) {
- alt->name[-1] = 0;
- prepare_packed_git_one(alt->base, 0);
- alt->name[-1] = '/';
- }
+ for (alt = alt_odb_list; alt; alt = alt->next)
+ prepare_packed_git_one(alt->path, 0);
rearrange_packed_git();
prepare_packed_git_mru();
prepare_packed_git_run_once = 1;
void reprepare_packed_git(void)
{
+ approximate_object_count_valid = 0;
prepare_packed_git_run_once = 0;
prepare_packed_git();
}
return hashcmp(sha1, real_sha1) ? -1 : 0;
}
-int git_open_noatime(const char *name)
+int git_open(const char *name)
{
- static int sha1_file_open_flag = O_NOATIME;
+ static int sha1_file_open_flag = O_NOATIME | O_CLOEXEC;
for (;;) {
int fd;
if (fd >= 0)
return fd;
- /* Might the failure be due to O_NOATIME? */
- if (errno != ENOENT && sha1_file_open_flag) {
- sha1_file_open_flag = 0;
+ /* Try again w/o O_CLOEXEC: the kernel might not support it */
+ if ((sha1_file_open_flag & O_CLOEXEC) && errno == EINVAL) {
+ sha1_file_open_flag &= ~O_CLOEXEC;
continue;
}
+ /* Might the failure be due to O_NOATIME? */
+ if (errno != ENOENT && (sha1_file_open_flag & O_NOATIME)) {
+ sha1_file_open_flag &= ~O_NOATIME;
+ continue;
+ }
return -1;
}
}
prepare_alt_odb();
errno = ENOENT;
for (alt = alt_odb_list; alt; alt = alt->next) {
- fill_sha1_path(alt->name, sha1);
- if (!lstat(alt->base, st))
+ const char *path = alt_sha1_path(alt, sha1);
+ if (!lstat(path, st))
return 0;
}
struct alternate_object_database *alt;
int most_interesting_errno;
- fd = git_open_noatime(sha1_file_name(sha1));
+ fd = git_open(sha1_file_name(sha1));
if (fd >= 0)
return fd;
most_interesting_errno = errno;
prepare_alt_odb();
for (alt = alt_odb_list; alt; alt = alt->next) {
- fill_sha1_path(alt->name, sha1);
- fd = git_open_noatime(alt->base);
+ const char *path = alt_sha1_path(alt, sha1);
+ fd = git_open(path);
if (fd >= 0)
return fd;
if (most_interesting_errno == ENOENT)
int parse_sha1_header(const char *hdr, unsigned long *sizep)
{
- struct object_info oi;
+ struct object_info oi = OBJECT_INFO_INIT;
oi.sizep = sizep;
- oi.typename = NULL;
- oi.typep = NULL;
return parse_sha1_header_extended(hdr, &oi, LOOKUP_REPLACE_OBJECT);
}
goto out;
}
-static int packed_object_info(struct packed_git *p, off_t obj_offset,
- struct object_info *oi)
+int packed_object_info(struct packed_git *p, off_t obj_offset,
+ struct object_info *oi)
{
struct pack_window *w_curs = NULL;
unsigned long size;
return buffer;
}
-#define MAX_DELTA_CACHE (256)
-
+static struct hashmap delta_base_cache;
static size_t delta_base_cached;
-static struct delta_base_cache_lru_list {
- struct delta_base_cache_lru_list *prev;
- struct delta_base_cache_lru_list *next;
-} delta_base_cache_lru = { &delta_base_cache_lru, &delta_base_cache_lru };
+static LIST_HEAD(delta_base_cache_lru);
-static struct delta_base_cache_entry {
- struct delta_base_cache_lru_list lru;
- void *data;
+struct delta_base_cache_key {
struct packed_git *p;
off_t base_offset;
+};
+
+struct delta_base_cache_entry {
+ struct hashmap hash;
+ struct delta_base_cache_key key;
+ struct list_head lru;
+ void *data;
unsigned long size;
enum object_type type;
-} delta_base_cache[MAX_DELTA_CACHE];
+};
-static unsigned long pack_entry_hash(struct packed_git *p, off_t base_offset)
+static unsigned int pack_entry_hash(struct packed_git *p, off_t base_offset)
{
- unsigned long hash;
+ unsigned int hash;
- hash = (unsigned long)(intptr_t)p + (unsigned long)base_offset;
+ hash = (unsigned int)(intptr_t)p + (unsigned int)base_offset;
hash += (hash >> 8) + (hash >> 16);
- return hash % MAX_DELTA_CACHE;
+ return hash;
}
static struct delta_base_cache_entry *
get_delta_base_cache_entry(struct packed_git *p, off_t base_offset)
{
- unsigned long hash = pack_entry_hash(p, base_offset);
- return delta_base_cache + hash;
+ struct hashmap_entry entry;
+ struct delta_base_cache_key key;
+
+ if (!delta_base_cache.cmpfn)
+ return NULL;
+
+ hashmap_entry_init(&entry, pack_entry_hash(p, base_offset));
+ key.p = p;
+ key.base_offset = base_offset;
+ return hashmap_get(&delta_base_cache, &entry, &key);
+}
+
+static int delta_base_cache_key_eq(const struct delta_base_cache_key *a,
+ const struct delta_base_cache_key *b)
+{
+ return a->p == b->p && a->base_offset == b->base_offset;
}
-static int eq_delta_base_cache_entry(struct delta_base_cache_entry *ent,
- struct packed_git *p, off_t base_offset)
+static int delta_base_cache_hash_cmp(const void *va, const void *vb,
+ const void *vkey)
{
- return (ent->data && ent->p == p && ent->base_offset == base_offset);
+ const struct delta_base_cache_entry *a = va, *b = vb;
+ const struct delta_base_cache_key *key = vkey;
+ if (key)
+ return !delta_base_cache_key_eq(&a->key, key);
+ else
+ return !delta_base_cache_key_eq(&a->key, &b->key);
}
static int in_delta_base_cache(struct packed_git *p, off_t base_offset)
{
- struct delta_base_cache_entry *ent;
- ent = get_delta_base_cache_entry(p, base_offset);
- return eq_delta_base_cache_entry(ent, p, base_offset);
+ return !!get_delta_base_cache_entry(p, base_offset);
}
-static void clear_delta_base_cache_entry(struct delta_base_cache_entry *ent)
+/*
+ * Remove the entry from the cache, but do _not_ free the associated
+ * entry data. The caller takes ownership of the "data" buffer, and
+ * should copy out any fields it wants before detaching.
+ */
+static void detach_delta_base_cache_entry(struct delta_base_cache_entry *ent)
{
- ent->data = NULL;
- ent->lru.next->prev = ent->lru.prev;
- ent->lru.prev->next = ent->lru.next;
+ hashmap_remove(&delta_base_cache, ent, &ent->key);
+ list_del(&ent->lru);
delta_base_cached -= ent->size;
+ free(ent);
}
static void *cache_or_unpack_entry(struct packed_git *p, off_t base_offset,
- unsigned long *base_size, enum object_type *type, int keep_cache)
+ unsigned long *base_size, enum object_type *type)
{
struct delta_base_cache_entry *ent;
- void *ret;
ent = get_delta_base_cache_entry(p, base_offset);
-
- if (!eq_delta_base_cache_entry(ent, p, base_offset))
+ if (!ent)
return unpack_entry(p, base_offset, type, base_size);
- ret = ent->data;
-
- if (!keep_cache)
- clear_delta_base_cache_entry(ent);
- else
- ret = xmemdupz(ent->data, ent->size);
*type = ent->type;
*base_size = ent->size;
- return ret;
+ return xmemdupz(ent->data, ent->size);
}
static inline void release_delta_base_cache(struct delta_base_cache_entry *ent)
{
- if (ent->data) {
- free(ent->data);
- ent->data = NULL;
- ent->lru.next->prev = ent->lru.prev;
- ent->lru.prev->next = ent->lru.next;
- delta_base_cached -= ent->size;
- }
+ free(ent->data);
+ detach_delta_base_cache_entry(ent);
}
void clear_delta_base_cache(void)
{
- unsigned long p;
- for (p = 0; p < MAX_DELTA_CACHE; p++)
- release_delta_base_cache(&delta_base_cache[p]);
+ struct hashmap_iter iter;
+ struct delta_base_cache_entry *entry;
+ for (entry = hashmap_iter_first(&delta_base_cache, &iter);
+ entry;
+ entry = hashmap_iter_next(&iter)) {
+ release_delta_base_cache(entry);
+ }
}
static void add_delta_base_cache(struct packed_git *p, off_t base_offset,
void *base, unsigned long base_size, enum object_type type)
{
- unsigned long hash = pack_entry_hash(p, base_offset);
- struct delta_base_cache_entry *ent = delta_base_cache + hash;
- struct delta_base_cache_lru_list *lru;
+ struct delta_base_cache_entry *ent = xmalloc(sizeof(*ent));
+ struct list_head *lru, *tmp;
- release_delta_base_cache(ent);
delta_base_cached += base_size;
- for (lru = delta_base_cache_lru.next;
- delta_base_cached > delta_base_cache_limit
- && lru != &delta_base_cache_lru;
- lru = lru->next) {
- struct delta_base_cache_entry *f = (void *)lru;
- if (f->type == OBJ_BLOB)
- release_delta_base_cache(f);
- }
- for (lru = delta_base_cache_lru.next;
- delta_base_cached > delta_base_cache_limit
- && lru != &delta_base_cache_lru;
- lru = lru->next) {
- struct delta_base_cache_entry *f = (void *)lru;
+ list_for_each_safe(lru, tmp, &delta_base_cache_lru) {
+ struct delta_base_cache_entry *f =
+ list_entry(lru, struct delta_base_cache_entry, lru);
+ if (delta_base_cached <= delta_base_cache_limit)
+ break;
release_delta_base_cache(f);
}
- ent->p = p;
- ent->base_offset = base_offset;
+ ent->key.p = p;
+ ent->key.base_offset = base_offset;
ent->type = type;
ent->data = base;
ent->size = base_size;
- ent->lru.next = &delta_base_cache_lru;
- ent->lru.prev = delta_base_cache_lru.prev;
- delta_base_cache_lru.prev->next = &ent->lru;
- delta_base_cache_lru.prev = &ent->lru;
+ list_add_tail(&ent->lru, &delta_base_cache_lru);
+
+ if (!delta_base_cache.cmpfn)
+ hashmap_init(&delta_base_cache, delta_base_cache_hash_cmp, 0);
+ hashmap_entry_init(ent, pack_entry_hash(p, base_offset));
+ hashmap_add(&delta_base_cache, ent);
}
static void *read_object(const unsigned char *sha1, enum object_type *type,
struct delta_base_cache_entry *ent;
ent = get_delta_base_cache_entry(p, curpos);
- if (eq_delta_base_cache_entry(ent, p, curpos)) {
+ if (ent) {
type = ent->type;
data = ent->data;
size = ent->size;
- clear_delta_base_cache_entry(ent);
+ detach_delta_base_cache_entry(ent);
base_from_cache = 1;
break;
}
int sha1_object_info(const unsigned char *sha1, unsigned long *sizep)
{
enum object_type type;
- struct object_info oi = {NULL};
+ struct object_info oi = OBJECT_INFO_INIT;
oi.typep = &type;
oi.sizep = sizep;
if (!find_pack_entry(sha1, &e))
return NULL;
- data = cache_or_unpack_entry(e.p, e.offset, size, type, 1);
+ data = cache_or_unpack_entry(e.p, e.offset, size, type);
if (!data) {
/*
* We're probably in deep shit, but let's try to fetch
struct strbuf buf = STRBUF_INIT;
int r;
- /* copy base not including trailing '/' */
- strbuf_add(&buf, alt->base, alt->name - alt->base - 1);
+ strbuf_addstr(&buf, alt->path);
r = for_each_loose_file_in_objdir_buf(&buf,
data->cb, NULL, NULL,
data->data);
#include "refs.h"
#include "remote.h"
#include "dir.h"
+#include "sha1-array.h"
static int get_sha1_oneline(const char *, unsigned char *, struct commit_list *);
typedef int (*disambiguate_hint_fn)(const unsigned char *, void *);
struct disambiguate_state {
+ int len; /* length of prefix in hex chars */
+ char hex_pfx[GIT_SHA1_HEXSZ + 1];
+ unsigned char bin_pfx[GIT_SHA1_RAWSZ];
+
disambiguate_hint_fn fn;
void *cb_data;
- unsigned char candidate[20];
+ unsigned char candidate[GIT_SHA1_RAWSZ];
unsigned candidate_exists:1;
unsigned candidate_checked:1;
unsigned candidate_ok:1;
/* otherwise, current can be discarded and candidate is still good */
}
-static void find_short_object_filename(int len, const char *hex_pfx, struct disambiguate_state *ds)
+static void find_short_object_filename(struct disambiguate_state *ds)
{
struct alternate_object_database *alt;
- char hex[40];
+ char hex[GIT_SHA1_HEXSZ];
static struct alternate_object_database *fakeent;
if (!fakeent) {
* alt->name/alt->base while iterating over the
* object databases including our own.
*/
- const char *objdir = get_object_directory();
- size_t objdir_len = strlen(objdir);
- fakeent = xmalloc(st_add3(sizeof(*fakeent), objdir_len, 43));
- memcpy(fakeent->base, objdir, objdir_len);
- fakeent->name = fakeent->base + objdir_len + 1;
- fakeent->name[-1] = '/';
+ fakeent = alloc_alt_odb(get_object_directory());
}
fakeent->next = alt_odb_list;
- xsnprintf(hex, sizeof(hex), "%.2s", hex_pfx);
+ xsnprintf(hex, sizeof(hex), "%.2s", ds->hex_pfx);
for (alt = fakeent; alt && !ds->ambiguous; alt = alt->next) {
+ struct strbuf *buf = alt_scratch_buf(alt);
struct dirent *de;
DIR *dir;
- /*
- * every alt_odb struct has 42 extra bytes after the base
- * for exactly this purpose
- */
- xsnprintf(alt->name, 42, "%.2s/", hex_pfx);
- dir = opendir(alt->base);
+
+ strbuf_addf(buf, "%.2s/", ds->hex_pfx);
+ dir = opendir(buf->buf);
if (!dir)
continue;
if (strlen(de->d_name) != 38)
continue;
- if (memcmp(de->d_name, hex_pfx + 2, len - 2))
+ if (memcmp(de->d_name, ds->hex_pfx + 2, ds->len - 2))
continue;
memcpy(hex + 2, de->d_name, 38);
if (!get_sha1_hex(hex, sha1))
return 1;
}
-static void unique_in_pack(int len,
- const unsigned char *bin_pfx,
- struct packed_git *p,
+static void unique_in_pack(struct packed_git *p,
struct disambiguate_state *ds)
{
uint32_t num, last, i, first = 0;
int cmp;
current = nth_packed_object_sha1(p, mid);
- cmp = hashcmp(bin_pfx, current);
+ cmp = hashcmp(ds->bin_pfx, current);
if (!cmp) {
first = mid;
break;
*/
for (i = first; i < num && !ds->ambiguous; i++) {
current = nth_packed_object_sha1(p, i);
- if (!match_sha(len, bin_pfx, current))
+ if (!match_sha(ds->len, ds->bin_pfx, current))
break;
update_candidates(ds, current);
}
}
-static void find_short_packed_object(int len, const unsigned char *bin_pfx,
- struct disambiguate_state *ds)
+static void find_short_packed_object(struct disambiguate_state *ds)
{
struct packed_git *p;
prepare_packed_git();
for (p = packed_git; p && !ds->ambiguous; p = p->next)
- unique_in_pack(len, bin_pfx, p, ds);
+ unique_in_pack(p, ds);
}
#define SHORT_NAME_NOT_FOUND (-1)
return 0;
/* We need to do this the hard way... */
- obj = deref_tag(lookup_object(sha1), NULL, 0);
+ obj = deref_tag(parse_object(sha1), NULL, 0);
if (obj && (obj->type == OBJ_TREE || obj->type == OBJ_COMMIT))
return 1;
return 0;
return kind == OBJ_BLOB;
}
-static int prepare_prefixes(const char *name, int len,
- unsigned char *bin_pfx,
- char *hex_pfx)
+static disambiguate_hint_fn default_disambiguate_hint;
+
+int set_disambiguate_hint_config(const char *var, const char *value)
+{
+ static const struct {
+ const char *name;
+ disambiguate_hint_fn fn;
+ } hints[] = {
+ { "none", NULL },
+ { "commit", disambiguate_commit_only },
+ { "committish", disambiguate_committish_only },
+ { "tree", disambiguate_tree_only },
+ { "treeish", disambiguate_treeish_only },
+ { "blob", disambiguate_blob_only }
+ };
+ int i;
+
+ if (!value)
+ return config_error_nonbool(var);
+
+ for (i = 0; i < ARRAY_SIZE(hints); i++) {
+ if (!strcasecmp(value, hints[i].name)) {
+ default_disambiguate_hint = hints[i].fn;
+ return 0;
+ }
+ }
+
+ return error("unknown hint type for '%s': %s", var, value);
+}
+
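set_disambiguate_hint_config() is meant to be driven from the configuration machinery; that wiring is not part of this hunk, so the sketch below only illustrates how a config callback could route a "core.disambiguate"-style setting to it (the variable name is an assumption here):

    static int hint_config(const char *var, const char *value, void *cb)
    {
            if (!strcmp(var, "core.disambiguate"))
                    return set_disambiguate_hint_config(var, value);
            return 0;       /* let other keys fall through */
    }
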
+static int init_object_disambiguation(const char *name, int len,
+ struct disambiguate_state *ds)
{
int i;
- hashclr(bin_pfx);
- memset(hex_pfx, 'x', 40);
+ if (len < MINIMUM_ABBREV || len > GIT_SHA1_HEXSZ)
+ return -1;
+
+ memset(ds, 0, sizeof(*ds));
+
for (i = 0; i < len ;i++) {
unsigned char c = name[i];
unsigned char val;
}
else
return -1;
- hex_pfx[i] = c;
+ ds->hex_pfx[i] = c;
if (!(i & 1))
val <<= 4;
- bin_pfx[i >> 1] |= val;
+ ds->bin_pfx[i >> 1] |= val;
}
+
+ ds->len = len;
+ ds->hex_pfx[len] = '\0';
+ prepare_alt_odb();
+ return 0;
+}
+
+static int show_ambiguous_object(const unsigned char *sha1, void *data)
+{
+ const struct disambiguate_state *ds = data;
+ struct strbuf desc = STRBUF_INIT;
+ int type;
+
+ if (ds->fn && !ds->fn(sha1, ds->cb_data))
+ return 0;
+
+ type = sha1_object_info(sha1, NULL);
+ if (type == OBJ_COMMIT) {
+ struct commit *commit = lookup_commit(sha1);
+ if (commit) {
+ struct pretty_print_context pp = {0};
+ pp.date_mode.type = DATE_SHORT;
+ format_commit_message(commit, " %ad - %s", &desc, &pp);
+ }
+ } else if (type == OBJ_TAG) {
+ struct tag *tag = lookup_tag(sha1);
+ if (!parse_tag(tag) && tag->tag)
+ strbuf_addf(&desc, " %s", tag->tag);
+ }
+
+ advise(" %s %s%s",
+ find_unique_abbrev(sha1, DEFAULT_ABBREV),
+ typename(type) ? typename(type) : "unknown type",
+ desc.buf);
+
+ strbuf_release(&desc);
return 0;
}
unsigned flags)
{
int status;
- char hex_pfx[40];
- unsigned char bin_pfx[20];
struct disambiguate_state ds;
int quietly = !!(flags & GET_SHA1_QUIETLY);
- if (len < MINIMUM_ABBREV || len > 40)
- return -1;
- if (prepare_prefixes(name, len, bin_pfx, hex_pfx) < 0)
+ if (init_object_disambiguation(name, len, &ds) < 0)
return -1;
- prepare_alt_odb();
+ if (HAS_MULTI_BITS(flags & GET_SHA1_DISAMBIGUATORS))
+ die("BUG: multiple get_short_sha1 disambiguator flags");
- memset(&ds, 0, sizeof(ds));
if (flags & GET_SHA1_COMMIT)
ds.fn = disambiguate_commit_only;
else if (flags & GET_SHA1_COMMITTISH)
ds.fn = disambiguate_treeish_only;
else if (flags & GET_SHA1_BLOB)
ds.fn = disambiguate_blob_only;
+ else
+ ds.fn = default_disambiguate_hint;
- find_short_object_filename(len, hex_pfx, &ds);
- find_short_packed_object(len, bin_pfx, &ds);
+ find_short_object_filename(&ds);
+ find_short_packed_object(&ds);
status = finish_object_disambiguation(&ds, sha1);
- if (!quietly && (status == SHORT_NAME_AMBIGUOUS))
- return error("short SHA1 %.*s is ambiguous.", len, hex_pfx);
+ if (!quietly && (status == SHORT_NAME_AMBIGUOUS)) {
+ error(_("short SHA1 %s is ambiguous"), ds.hex_pfx);
+
+ /*
+ * We may still have ambiguity if we simply saw a series of
+ * candidates that did not satisfy our hint function. In
+ * that case, we still want to show them, so disable the hint
+ * function entirely.
+ */
+ if (!ds.ambiguous)
+ ds.fn = NULL;
+
+ advise(_("The candidates are:"));
+ for_each_abbrev(ds.hex_pfx, show_ambiguous_object, &ds);
+ }
+
return status;
}
+static int collect_ambiguous(const unsigned char *sha1, void *data)
+{
+ sha1_array_append(data, sha1);
+ return 0;
+}
+
int for_each_abbrev(const char *prefix, each_abbrev_fn fn, void *cb_data)
{
- char hex_pfx[40];
- unsigned char bin_pfx[20];
+ struct sha1_array collect = SHA1_ARRAY_INIT;
struct disambiguate_state ds;
- int len = strlen(prefix);
+ int ret;
- if (len < MINIMUM_ABBREV || len > 40)
+ if (init_object_disambiguation(prefix, strlen(prefix), &ds) < 0)
return -1;
- if (prepare_prefixes(prefix, len, bin_pfx, hex_pfx) < 0)
- return -1;
-
- prepare_alt_odb();
- memset(&ds, 0, sizeof(ds));
ds.always_call_fn = 1;
- ds.cb_data = cb_data;
- ds.fn = fn;
+ ds.fn = collect_ambiguous;
+ ds.cb_data = &collect;
+ find_short_object_filename(&ds);
+ find_short_packed_object(&ds);
- find_short_object_filename(len, hex_pfx, &ds);
- find_short_packed_object(len, bin_pfx, &ds);
- return ds.ambiguous;
+ ret = sha1_array_for_each_unique(&collect, fn, cb_data);
+ sha1_array_clear(&collect);
+ return ret;
+}
+
+/*
+ * Return the slot of the most-significant bit set in "val". There are various
+ * ways to do this quickly with fls() or __builtin_clzl(), but speed is
+ * probably not a big deal here.
+ */
+static unsigned msb(unsigned long val)
+{
+ unsigned r = 0;
+ while (val >>= 1)
+ r++;
+ return r;
}
int find_unique_abbrev_r(char *hex, const unsigned char *sha1, int len)
{
int status, exists;
+ if (len < 0) {
+ unsigned long count = approximate_object_count();
+ /*
+ * Add one because the MSB only tells us the highest bit set,
+ * not including the value of all the _other_ bits (so "15"
+ * is only one off of 2^4, but the MSB is the 3rd bit).
+ */
+ len = msb(count) + 1;
+ /*
+ * We now know we have on the order of 2^len objects, which
+ * means we can expect a collision at around 2^(len/2). But we
+ * also care about hex chars, not bits, and there are 4 bits per
+ * hex char. So altogether we need to divide by 2; we also want
+ * to round odd numbers up, hence adding one before dividing.
+ */
+ len = (len + 1) / 2;
+ /*
+ * For very small repos, we stick with our regular fallback.
+ */
+ if (len < FALLBACK_DEFAULT_ABBREV)
+ len = FALLBACK_DEFAULT_ABBREV;
+ }
+
sha1_to_hex_r(hex, sha1);
if (len == 40 || !len)
return 40;
const char *find_unique_abbrev(const unsigned char *sha1, int len)
{
- static char hex[GIT_SHA1_HEXSZ + 1];
+ static int bufno;
+ static char hexbuffer[4][GIT_SHA1_HEXSZ + 1];
+ char *hex = hexbuffer[3 & ++bufno];
find_unique_abbrev_r(hex, sha1, len);
return hex;
}
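
To make the scaling heuristic concrete: with roughly five million packed objects, msb(count) is 22, adding one gives 23, and (23 + 1) / 2 yields 12 hex digits. The helper below merely restates that arithmetic as a sketch, not part of the patch, reusing the FALLBACK_DEFAULT_ABBREV constant from above:

    static int estimated_default_abbrev(unsigned long count)
    {
            unsigned long v = count;
            unsigned r = 0;

            while (v >>= 1)
                    r++;                    /* same as msb(count) */
            r = (r + 2) / 2;                /* +1 bit, then bits -> hex chars */
            return r < FALLBACK_DEFAULT_ABBREV ? FALLBACK_DEFAULT_ABBREV : r;
    }
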
}
}
-static int peel_onion(const char *name, int len, unsigned char *sha1)
+static int peel_onion(const char *name, int len, unsigned char *sha1,
+ unsigned lookup_flags)
{
unsigned char outer[20];
const char *sp;
unsigned int expected_type = 0;
- unsigned lookup_flags = 0;
struct object *o;
/*
else
return -1;
+ lookup_flags &= ~GET_SHA1_DISAMBIGUATORS;
if (expected_type == OBJ_COMMIT)
- lookup_flags = GET_SHA1_COMMITTISH;
+ lookup_flags |= GET_SHA1_COMMITTISH;
else if (expected_type == OBJ_TREE)
- lookup_flags = GET_SHA1_TREEISH;
+ lookup_flags |= GET_SHA1_TREEISH;
if (get_sha1_1(name, sp - name - 2, outer, lookup_flags))
return -1;
return get_nth_ancestor(name, len1, sha1, num);
}
- ret = peel_onion(name, len, sha1);
+ ret = peel_onion(name, len, sha1, lookup_flags);
if (!ret)
return 0;
return retval;
}
-int get_sha1_mb(const char *name, unsigned char *sha1)
+int get_oid_mb(const char *name, struct object_id *oid)
{
struct commit *one, *two;
struct commit_list *mbs;
- unsigned char sha1_tmp[20];
+ struct object_id oid_tmp;
const char *dots;
int st;
dots = strstr(name, "...");
if (!dots)
- return get_sha1(name, sha1);
+ return get_oid(name, oid);
if (dots == name)
- st = get_sha1("HEAD", sha1_tmp);
+ st = get_oid("HEAD", &oid_tmp);
else {
struct strbuf sb;
strbuf_init(&sb, dots - name);
strbuf_add(&sb, name, dots - name);
- st = get_sha1_committish(sb.buf, sha1_tmp);
+ st = get_sha1_committish(sb.buf, oid_tmp.hash);
strbuf_release(&sb);
}
if (st)
return st;
- one = lookup_commit_reference_gently(sha1_tmp, 0);
+ one = lookup_commit_reference_gently(oid_tmp.hash, 0);
if (!one)
return -1;
- if (get_sha1_committish(dots[3] ? (dots + 3) : "HEAD", sha1_tmp))
+ if (get_sha1_committish(dots[3] ? (dots + 3) : "HEAD", oid_tmp.hash))
return -1;
- two = lookup_commit_reference_gently(sha1_tmp, 0);
+ two = lookup_commit_reference_gently(oid_tmp.hash, 0);
if (!two)
return -1;
mbs = get_merge_bases(one, two);
st = -1;
else {
st = 0;
- hashcpy(sha1, mbs->item->object.oid.hash);
+ oidcpy(oid, &mbs->item->object.oid);
}
free_commit_list(mbs);
return st;
const char *cp;
int only_to_die = flags & GET_SHA1_ONLY_TO_DIE;
+ if (only_to_die)
+ flags |= GET_SHA1_QUIETLY;
+
memset(oc, 0, sizeof(*oc));
oc->mode = S_IFINVALID;
ret = get_sha1_1(name, namelen, sha1, flags);
memcmp(ce->name, cp, namelen))
break;
if (ce_stage(ce) == stage) {
- hashcpy(sha1, ce->sha1);
+ hashcpy(sha1, ce->oid.hash);
oc->mode = ce->ce_mode;
free(new_path);
return 0;
if (*cp == ':') {
unsigned char tree_sha1[20];
int len = cp - name;
- if (!get_sha1_1(name, len, tree_sha1, GET_SHA1_TREEISH)) {
+ unsigned sub_flags = flags;
+
+ sub_flags &= ~GET_SHA1_DISAMBIGUATORS;
+ sub_flags |= GET_SHA1_TREEISH;
+
+ if (!get_sha1_1(name, len, tree_sha1, sub_flags)) {
const char *filename = cp+1;
char *new_filename = NULL;
#include "diff.h"
#include "revision.h"
#include "commit-slab.h"
+#include "revision.h"
+#include "list-objects.h"
static int is_shallow = -1;
static struct stat_validity shallow_stat;
return result;
}
+static void show_commit(struct commit *commit, void *data)
+{
+ commit_list_insert(commit, data);
+}
+
+/*
+ * Given rev-list arguments, run rev-list. All reachable commits
+ * except border ones are marked with not_shallow_flag. Border commits
+ * are marked with shallow_flag. The list of border/shallow commits
+ * are also returned.
+ */
+struct commit_list *get_shallow_commits_by_rev_list(int ac, const char **av,
+ int shallow_flag,
+ int not_shallow_flag)
+{
+ struct commit_list *result = NULL, *p;
+ struct commit_list *not_shallow_list = NULL;
+ struct rev_info revs;
+ int both_flags = shallow_flag | not_shallow_flag;
+
+ /*
+ * SHALLOW (excluded) and NOT_SHALLOW (included) should not be
+ * set at this point. But better be safe than sorry.
+ */
+ clear_object_flags(both_flags);
+
+ is_repository_shallow(); /* make sure shallows are read */
+
+ init_revisions(&revs, NULL);
+ save_commit_buffer = 0;
+ setup_revisions(ac, av, &revs, NULL);
+
+ if (prepare_revision_walk(&revs))
+ die("revision walk setup failed");
+ traverse_commit_list(&revs, show_commit, NULL, &not_shallow_list);
+
+ /* Mark all reachable commits as NOT_SHALLOW */
+ for (p = not_shallow_list; p; p = p->next)
+ p->item->object.flags |= not_shallow_flag;
+
+ /*
+ * Mark border commits SHALLOW + NOT_SHALLOW.
+ * We cannot clear NOT_SHALLOW right now. Imagine border
+ * commit A is processed first, then commit B, whose parent is
+ * A, later. If NOT_SHALLOW on A is cleared at step 1, B
+ * itself is considered border at step 2, which is incorrect.
+ */
+ for (p = not_shallow_list; p; p = p->next) {
+ struct commit *c = p->item;
+ struct commit_list *parent;
+
+ if (parse_commit(c))
+ die("unable to parse commit %s",
+ oid_to_hex(&c->object.oid));
+
+ for (parent = c->parents; parent; parent = parent->next)
+ if (!(parent->item->object.flags & not_shallow_flag)) {
+ c->object.flags |= shallow_flag;
+ commit_list_insert(c, &result);
+ break;
+ }
+ }
+ free_commit_list(not_shallow_list);
+
+ /*
+ * Now we can clean up NOT_SHALLOW on border commits. Having
+ * both flags set can confuse the caller.
+ */
+ for (p = result; p; p = p->next) {
+ struct object *o = &p->item->object;
+ if ((o->flags & both_flags) == both_flags)
+ o->flags &= ~not_shallow_flag;
+ }
+ return result;
+}
+
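A hedged sketch of how a caller might use the new rev-list driven helper to find the shallow boundary below a given tip; the argument list and the flag bits are placeholders that a real caller has to choose (the flags must be object flag bits that are otherwise unused):

    #define MY_SHALLOW      (1u << 16)      /* placeholder flag bits */
    #define MY_NOT_SHALLOW  (1u << 17)

    static struct commit_list *shallow_boundary_below(const char *tip)
    {
            const char *argv[] = { "rev-list", "--not", tip, "HEAD", NULL };

            /* border commits come back marked with MY_SHALLOW */
            return get_shallow_commits_by_rev_list(4, argv,
                                                   MY_SHALLOW, MY_NOT_SHALLOW);
    }
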
static void check_shallow_file_for_update(void)
{
if (is_shallow == -1)
{
int fd = *(int *)cb;
if (graft->nr_parent == -1)
- packet_write(fd, "shallow %s\n", oid_to_hex(&graft->oid));
+ packet_write_fmt(fd, "shallow %s\n", oid_to_hex(&graft->oid));
return 0;
}
strbuf_setlen(sb, j);
}
+
+int strbuf_normalize_path(struct strbuf *src)
+{
+ struct strbuf dst = STRBUF_INIT;
+
+ strbuf_grow(&dst, src->len);
+ if (normalize_path_copy(dst.buf, src->buf) < 0) {
+ strbuf_release(&dst);
+ return -1;
+ }
+
+ /*
+ * normalize_path_copy() does not tell us the new length, so we have to
+ * compute it by looking for the new NUL it placed.
+ */
+ strbuf_setlen(&dst, strlen(dst.buf));
+ strbuf_swap(src, &dst);
+ strbuf_release(&dst);
+ return 0;
+}
*/
extern void strbuf_add_absolute_path(struct strbuf *sb, const char *path);
+
+/**
+ * Normalize in-place the path contained in the strbuf. See
+ * normalize_path_copy() for details. If an error occurs, the contents of "sb"
+ * are left untouched, and -1 is returned.
+ */
+extern int strbuf_normalize_path(struct strbuf *sb);
+
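A minimal usage sketch for strbuf_normalize_path(); the demo function and the example path are illustrative only:

    static void demo_normalize(void)
    {
            struct strbuf path = STRBUF_INIT;

            strbuf_addstr(&path, "a/b/../c");
            if (strbuf_normalize_path(&path) < 0)
                    warning("cannot normalize '%s'", path.buf); /* sb untouched */
            else
                    printf("%s\n", path.buf);       /* prints "a/c" */
            strbuf_release(&path);
    }
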
/**
* Strip whitespace from a buffer. The second parameter controls if
* comments are considered contents to be removed or not.
struct stream_filter *filter)
{
struct git_istream *st;
- struct object_info oi = {NULL};
+ struct object_info oi = OBJECT_INFO_INIT;
const unsigned char *real = lookup_replace_object(sha1);
enum input_source src = istream_source(real, type, &oi);
* Users of streaming interface
****************************************************************/
-int stream_blob_to_fd(int fd, unsigned const char *sha1, struct stream_filter *filter,
+int stream_blob_to_fd(int fd, const struct object_id *oid, struct stream_filter *filter,
int can_seek)
{
struct git_istream *st;
ssize_t kept = 0;
int result = -1;
- st = open_istream(sha1, &type, &sz, filter);
+ st = open_istream(oid->hash, &type, &sz, filter);
if (!st) {
if (filter)
free_stream_filter(filter);
extern int close_istream(struct git_istream *);
extern ssize_t read_istream(struct git_istream *, void *, size_t);
-extern int stream_blob_to_fd(int fd, const unsigned char *, struct stream_filter *, int can_seek);
+extern int stream_blob_to_fd(int fd, const struct object_id *, struct stream_filter *, int can_seek);
#endif /* STREAMING_H */
void string_list_sort(struct string_list *list)
{
compare_for_qsort = list->cmp ? list->cmp : strcmp;
- qsort(list->items, list->nr, sizeof(*list->items), cmp_items);
+ QSORT(list->items, list->nr, cmp_items);
}
struct string_list_item *unsorted_string_list_lookup(struct string_list *list,
static int add_submodule_odb(const char *path)
{
struct strbuf objects_directory = STRBUF_INIT;
- struct alternate_object_database *alt_odb;
int ret = 0;
- size_t alloc;
- strbuf_git_path_submodule(&objects_directory, path, "objects/");
+ ret = strbuf_git_path_submodule(&objects_directory, path, "objects/");
+ if (ret)
+ goto done;
if (!is_directory(objects_directory.buf)) {
ret = -1;
goto done;
}
- /* avoid adding it twice */
- prepare_alt_odb();
- for (alt_odb = alt_odb_list; alt_odb; alt_odb = alt_odb->next)
- if (alt_odb->name - alt_odb->base == objects_directory.len &&
- !strncmp(alt_odb->base, objects_directory.buf,
- objects_directory.len))
- goto done;
-
- alloc = st_add(objects_directory.len, 42); /* for "12/345..." sha1 */
- alt_odb = xmalloc(st_add(sizeof(*alt_odb), alloc));
- alt_odb->next = alt_odb_list;
- xsnprintf(alt_odb->base, alloc, "%s", objects_directory.buf);
- alt_odb->name = alt_odb->base + objects_directory.len;
- alt_odb->name[2] = '/';
- alt_odb->name[40] = '\0';
- alt_odb->name[41] = '\0';
- alt_odb_list = alt_odb;
-
- /* add possible alternates from the submodule */
- read_info_alternates(objects_directory.buf, 0);
+ add_to_alternates_memory(objects_directory.buf);
done:
strbuf_release(&objects_directory);
return ret;
static int prepare_submodule_summary(struct rev_info *rev, const char *path,
struct commit *left, struct commit *right,
- int *fast_forward, int *fast_backward)
+ struct commit_list *merge_bases)
{
- struct commit_list *merge_bases, *list;
+ struct commit_list *list;
init_revisions(rev, NULL);
setup_revisions(0, NULL, rev, NULL);
left->object.flags |= SYMMETRIC_LEFT;
add_pending_object(rev, &left->object, path);
add_pending_object(rev, &right->object, path);
- merge_bases = get_merge_bases(left, right);
- if (merge_bases) {
- if (merge_bases->item == left)
- *fast_forward = 1;
- else if (merge_bases->item == right)
- *fast_backward = 1;
- }
for (list = merge_bases; list; list = list->next) {
list->item->object.flags |= UNINTERESTING;
add_pending_object(rev, &list->item->object,
strbuf_release(&sb);
}
-void show_submodule_summary(FILE *f, const char *path,
+/* Helper function to display the submodule header line prior to the full
+ * summary output. If it can locate the submodule objects directory, it will
+ * attempt to look up both the left and right commits and put them into the
+ * left and right pointers.
+ */
+static void show_submodule_header(FILE *f, const char *path,
const char *line_prefix,
- unsigned char one[20], unsigned char two[20],
+ struct object_id *one, struct object_id *two,
unsigned dirty_submodule, const char *meta,
- const char *del, const char *add, const char *reset)
+ const char *reset,
+ struct commit **left, struct commit **right,
+ struct commit_list **merge_bases)
{
- struct rev_info rev;
- struct commit *left = NULL, *right = NULL;
const char *message = NULL;
struct strbuf sb = STRBUF_INIT;
int fast_forward = 0, fast_backward = 0;
- if (is_null_sha1(two))
- message = "(submodule deleted)";
- else if (add_submodule_odb(path))
- message = "(not checked out)";
- else if (is_null_sha1(one))
- message = "(new submodule)";
- else if (!(left = lookup_commit_reference(one)) ||
- !(right = lookup_commit_reference(two)))
- message = "(commits not present)";
- else if (prepare_submodule_summary(&rev, path, left, right,
- &fast_forward, &fast_backward))
- message = "(revision walker failed)";
-
if (dirty_submodule & DIRTY_SUBMODULE_UNTRACKED)
fprintf(f, "%sSubmodule %s contains untracked content\n",
line_prefix, path);
fprintf(f, "%sSubmodule %s contains modified content\n",
line_prefix, path);
- if (!hashcmp(one, two)) {
+ if (is_null_oid(one))
+ message = "(new submodule)";
+ else if (is_null_oid(two))
+ message = "(submodule deleted)";
+
+ if (add_submodule_odb(path)) {
+ if (!message)
+ message = "(not initialized)";
+ goto output_header;
+ }
+
+ /*
+ * Attempt to look up the commit references, and determine if this is
+ * a fast-forward or fast-backward update.
+ */
+ *left = lookup_commit_reference(one->hash);
+ *right = lookup_commit_reference(two->hash);
+
+ /*
+ * Warn about missing commits in the submodule project, but only if
+ * they aren't null.
+ */
+ if ((!is_null_oid(one) && !*left) ||
+ (!is_null_oid(two) && !*right))
+ message = "(commits not present)";
+
+ *merge_bases = get_merge_bases(*left, *right);
+ if (*merge_bases) {
+ if ((*merge_bases)->item == *left)
+ fast_forward = 1;
+ else if ((*merge_bases)->item == *right)
+ fast_backward = 1;
+ }
+
+ if (!oidcmp(one, two)) {
strbuf_release(&sb);
return;
}
+output_header:
strbuf_addf(&sb, "%s%sSubmodule %s ", line_prefix, meta, path);
- strbuf_add_unique_abbrev(&sb, one, DEFAULT_ABBREV);
+ strbuf_add_unique_abbrev(&sb, one->hash, DEFAULT_ABBREV);
strbuf_addstr(&sb, (fast_backward || fast_forward) ? ".." : "...");
- strbuf_add_unique_abbrev(&sb, two, DEFAULT_ABBREV);
+ strbuf_add_unique_abbrev(&sb, two->hash, DEFAULT_ABBREV);
if (message)
strbuf_addf(&sb, " %s%s\n", message, reset);
else
strbuf_addf(&sb, "%s:%s\n", fast_backward ? " (rewind)" : "", reset);
fwrite(sb.buf, sb.len, 1, f);
- if (!message) /* only NULL if we succeeded in setting up the walk */
- print_submodule_summary(&rev, f, line_prefix, del, add, reset);
+ strbuf_release(&sb);
+}
+
+void show_submodule_summary(FILE *f, const char *path,
+ const char *line_prefix,
+ struct object_id *one, struct object_id *two,
+ unsigned dirty_submodule, const char *meta,
+ const char *del, const char *add, const char *reset)
+{
+ struct rev_info rev;
+ struct commit *left = NULL, *right = NULL;
+ struct commit_list *merge_bases = NULL;
+
+ show_submodule_header(f, path, line_prefix, one, two, dirty_submodule,
+ meta, reset, &left, &right, &merge_bases);
+
+ /*
+ * If we don't have both a left and a right pointer, there is no
+ * reason to try to display a summary. The header line should contain
+ * all the information the user needs.
+ */
+ if (!left || !right)
+ goto out;
+
+ /* Treat revision walker failure the same as missing commits */
+ if (prepare_submodule_summary(&rev, path, left, right, merge_bases)) {
+ fprintf(f, "%s(revision walker failed)\n", line_prefix);
+ goto out;
+ }
+
+ print_submodule_summary(&rev, f, line_prefix, del, add, reset);
+
+out:
+ if (merge_bases)
+ free_commit_list(merge_bases);
+ clear_commit_marks(left, ~0);
+ clear_commit_marks(right, ~0);
+}
+
+void show_submodule_inline_diff(FILE *f, const char *path,
+ const char *line_prefix,
+ struct object_id *one, struct object_id *two,
+ unsigned dirty_submodule, const char *meta,
+ const char *del, const char *add, const char *reset,
+ const struct diff_options *o)
+{
+ const struct object_id *old = &empty_tree_oid, *new = &empty_tree_oid;
+ struct commit *left = NULL, *right = NULL;
+ struct commit_list *merge_bases = NULL;
+ struct strbuf submodule_dir = STRBUF_INIT;
+ struct child_process cp = CHILD_PROCESS_INIT;
+
+ show_submodule_header(f, path, line_prefix, one, two, dirty_submodule,
+ meta, reset, &left, &right, &merge_bases);
+
+ /* We need a valid left and right commit to display a difference */
+ if (!(left || is_null_oid(one)) ||
+ !(right || is_null_oid(two)))
+ goto done;
+
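+ /*
+ * A side without a commit keeps the empty tree as its end point, so
+ * the whole submodule content shows up as added or deleted.
+ */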
+ if (left)
+ old = one;
+ if (right)
+ new = two;
+
+ fflush(f);
+ cp.git_cmd = 1;
+ cp.dir = path;
+ cp.out = dup(fileno(f));
+ cp.no_stdin = 1;
+
+ /* TODO: other options may need to be passed here. */
+ argv_array_push(&cp.args, "diff");
+ argv_array_pushf(&cp.args, "--line-prefix=%s", line_prefix);
+ if (DIFF_OPT_TST(o, REVERSE_DIFF)) {
+ argv_array_pushf(&cp.args, "--src-prefix=%s%s/",
+ o->b_prefix, path);
+ argv_array_pushf(&cp.args, "--dst-prefix=%s%s/",
+ o->a_prefix, path);
+ } else {
+ argv_array_pushf(&cp.args, "--src-prefix=%s%s/",
+ o->a_prefix, path);
+ argv_array_pushf(&cp.args, "--dst-prefix=%s%s/",
+ o->b_prefix, path);
+ }
+ argv_array_push(&cp.args, oid_to_hex(old));
+ /*
+ * If the submodule has modified content, we will diff against the
+ * work tree, under the assumption that the user has asked for the
+ * diff format and wishes to actually see all differences even if they
+ * have not yet been committed to the submodule.
+ */
+ if (!(dirty_submodule & DIRTY_SUBMODULE_MODIFIED))
+ argv_array_push(&cp.args, oid_to_hex(new));
+
+ if (run_command(&cp))
+ fprintf(f, "(diff failed)\n");
+
+done:
+ strbuf_release(&submodule_dir);
+ if (merge_bases)
+ free_commit_list(merge_bases);
if (left)
clear_commit_marks(left, ~0);
if (right)
clear_commit_marks(right, ~0);
-
- strbuf_release(&sb);
}
void set_config_fetch_recurse_submodules(int value)
sha1_array_append(&ref_tips_after_fetch, new_sha1);
}
-static void add_sha1_to_argv(const unsigned char sha1[20], void *data)
+static int add_sha1_to_argv(const unsigned char sha1[20], void *data)
{
argv_array_push(data, sha1_to_hex(sha1));
+ return 0;
}
static void calculate_changed_submodule_paths(void)
void handle_ignore_submodules_arg(struct diff_options *diffopt, const char *);
void show_submodule_summary(FILE *f, const char *path,
const char *line_prefix,
- unsigned char one[20], unsigned char two[20],
+ struct object_id *one, struct object_id *two,
unsigned dirty_submodule, const char *meta,
const char *del, const char *add, const char *reset);
+void show_submodule_inline_diff(FILE *f, const char *path,
+ const char *line_prefix,
+ struct object_id *one, struct object_id *two,
+ unsigned dirty_submodule, const char *meta,
+ const char *del, const char *add, const char *reset,
+ const struct diff_options *opt);
void set_config_fetch_recurse_submodules(int value);
void check_for_new_submodule_commits(unsigned char new_sha1[20]);
int fetch_populated_submodules(const struct argv_array *options,
const char *v;
const struct string_list *strptr;
struct config_set cs;
+
+ setup_git_directory();
+
git_configset_init(&cs);
if (argc < 2) {
{
struct index_state istate;
struct cache_tree *another = cache_tree();
+ setup_git_directory();
if (read_cache() < 0)
die("unable to read index file");
istate = the_index;
for (i = 0; i < the_index.cache_nr; i++) {
struct cache_entry *ce = the_index.cache[i];
printf("%06o %s %d\t%s\n", ce->ce_mode,
- sha1_to_hex(ce->sha1), ce_stage(ce), ce->name);
+ oid_to_hex(&ce->oid), ce_stage(ce), ce->name);
}
printf("replacements:");
if (si->replace_bitmap)
static void dump(struct untracked_cache_dir *ucd, struct strbuf *base)
{
int i, len;
- qsort(ucd->untracked, ucd->untracked_nr, sizeof(*ucd->untracked),
- compare_untracked);
- qsort(ucd->dirs, ucd->dirs_nr, sizeof(*ucd->dirs),
- compare_dir);
+ QSORT(ucd->untracked, ucd->untracked_nr, compare_untracked);
+ QSORT(ucd->dirs, ucd->dirs_nr, compare_dir);
len = base->len;
strbuf_addf(base, "%s/", ucd->name);
printf("%s %s", base->buf,
int cmd_main(int ac, const char **av)
{
+ setup_git_directory();
hold_locked_index(&index_lock, 1);
if (read_cache() < 0)
die("unable to read index file");
#include "cache.h"
#include "sha1-array.h"
-static void print_sha1(const unsigned char sha1[20], void *data)
+static int print_sha1(const unsigned char sha1[20], void *data)
{
puts(sha1_to_hex(sha1));
+ return 0;
}
int cmd_main(int argc, const char **argv)
--- /dev/null
+#!/bin/sh
+
+test_description='Test operations that emphasize the delta base cache.
+
+We look at both "log --raw", which should put only trees into the delta cache,
+and "log -Sfoo --raw", which should look at both trees and blobs.
+
+Any effects will be emphasized if the test repository is fully packed (loose
+objects obviously do not use the delta base cache at all). It is also
+emphasized if the pack has long delta chains (e.g., as produced by "gc
+--aggressive"), though cache is still quite noticeable even with the default
+depth of 50.
+
+The setting of core.deltaBaseCacheLimit in the source repository is also
+relevant (depending on the size of your test repo), so be sure it is consistent
+between runs.
+'
+. ./perf-lib.sh
+
+test_perf_large_repo
+
+# puts mostly trees into the delta base cache
+test_perf 'log --raw' '
+ git log --raw >/dev/null
+'
+
+test_perf 'log -S' '
+ git log --raw -Sfoo >/dev/null
+'
+
+test_done
} | git pack-objects --revs --stdout >/dev/null
'
+test_perf 'pack to file' '
+ git pack-objects --all pack1 </dev/null >/dev/null
+'
+
+test_perf 'pack to file (bitmap)' '
+ git pack-objects --use-bitmap-index --all pack1b </dev/null >/dev/null
+'
+
test_expect_success 'create partial bitmap state' '
# pick a commit to represent the repo tip in the past
cutoff=$(git rev-list HEAD~100 -1) &&
git update-ref HEAD $orig_tip
'
-test_perf 'partial bitmap' '
+test_perf 'clone (partial bitmap)' '
git pack-objects --stdout --all </dev/null >/dev/null
'
+test_perf 'pack to file (partial bitmap)' '
+ git pack-objects --use-bitmap-index --all pack2b </dev/null >/dev/null
+'
+
test_done
! is_hidden newdir
'
+test_expect_success 'remote init does not use config from cwd' '
+ rm -rf newdir &&
+ test_config core.logallrefupdates true &&
+ git init newdir &&
+ echo true >expect &&
+ git -C newdir config --bool core.logallrefupdates >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 're-init from a linked worktree' '
+ git init main-worktree &&
+ (
+ cd main-worktree &&
+ test_commit first &&
+ git worktree add ../linked-worktree &&
+ mv .git/info/exclude expected-exclude &&
+ cp .git/config expected-config &&
+ find .git/worktrees -print | sort >expected &&
+ git -C ../linked-worktree init &&
+ test_cmp expected-exclude .git/info/exclude &&
+ test_cmp expected-config .git/config &&
+ find .git/worktrees -print | sort >actual &&
+ test_cmp expected actual
+ )
+'
+
test_done
--- /dev/null
+#!/bin/sh
+
+test_description='help'
+
+. ./test-lib.sh
+
+configure_help () {
+ test_config help.format html &&
+
+ # Unless the path has "://" in it, Git tries to make sure
+ # the documentation directory exists locally. Avoid that here,
+ # as this test is only interested in seeing an attempt to
+ # invoke a help browser correctly.
+ test_config help.htmlpath test://html &&
+
+ # Name a custom browser
+ test_config browser.test.cmd ./test-browser &&
+ test_config help.browser test
+}
+
+test_expect_success "setup" '
+ # Just write out which page gets requested
+ write_script test-browser <<-\EOF
+ echo "$*" >test-browser.log
+ EOF
+'
+
+test_expect_success "works for commands and guides by default" '
+ configure_help &&
+ git help status &&
+ echo "test://html/git-status.html" >expect &&
+ test_cmp expect test-browser.log &&
+ git help revisions &&
+ echo "test://html/gitrevisions.html" >expect &&
+ test_cmp expect test-browser.log
+'
+
+test_expect_success "--exclude-guides does not work for guides" '
+ >test-browser.log &&
+ test_must_fail git help --exclude-guides revisions &&
+ test_must_be_empty test-browser.log
+'
+
+test_expect_success "--help does not work for guides" "
+ cat <<-EOF >expect &&
+ git: 'revisions' is not a git command. See 'git --help'.
+ EOF
+ test_must_fail git revisions --help 2>actual &&
+ test_i18ncmp expect actual
+"
+
+test_done
git add doublewarn &&
git commit -m "nowarn" &&
for w in Oh here is CRLFQ in text; do echo $w; done | q_to_cr >doublewarn &&
- test $(git add doublewarn 2>&1 | grep "CRLF will be replaced by LF" | wc -l) = 1
+ git add doublewarn 2>err &&
+ if test_have_prereq C_LOCALE_OUTPUT
+ then
+ test $(grep "CRLF will be replaced by LF" err | wc -l) = 1
+ fi
'
. ./test-lib.sh
-cat <<EOF >rot13.sh
+TEST_ROOT="$(pwd)"
+
+cat <<EOF >"$TEST_ROOT/rot13.sh"
#!$SHELL_PATH
tr \
'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ' \
'nopqrstuvwxyzabcdefghijklmNOPQRSTUVWXYZABCDEFGHIJKLM'
EOF
-chmod +x rot13.sh
+chmod +x "$TEST_ROOT/rot13.sh"
+
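+# Fill "$TEST_ROOT/$2" with $1 bytes of lowercase letters derived
+# deterministically from test-genrandom output.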
+generate_random_characters () {
+ LEN=$1
+ NAME=$2
+ test-genrandom some-seed $LEN |
+ perl -pe "s/./chr((ord($&) % 26) + ord('a'))/sge" >"$TEST_ROOT/$NAME"
+}
+
+file_size () {
+ cat "$1" | wc -c | sed "s/^[ ]*//"
+}
+
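+# Reset the filter log, run the given Git command, and discard Git's own
+# stderr so that later comparisons only see the filter's rot13-filter.log.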
+filter_git () {
+ rm -f rot13-filter.log &&
+ git "$@" 2>git-stderr.log &&
+ rm -f git-stderr.log
+}
+
+# Compare two files and ensure that `clean` and `smudge` are each called at
+# least once if specified in the `expect` file. The exact invocation count is
+# not checked because it can vary.
+# cf. http://public-inbox.org/git/xmqqshv18i8i.fsf@gitster.mtv.corp.google.com/
+test_cmp_count () {
+ expect=$1
+ actual=$2
+ for FILE in "$expect" "$actual"
+ do
+ sort "$FILE" | uniq -c | sed "s/^[ ]*//" |
+ sed "s/^\([0-9]\) IN: clean/x IN: clean/" |
+ sed "s/^\([0-9]\) IN: smudge/x IN: smudge/" >"$FILE.tmp" &&
+ mv "$FILE.tmp" "$FILE"
+ done &&
+ test_cmp "$expect" "$actual"
+}
+
+# Compare two files but exclude all `clean` invocations because Git can
+# call `clean` zero or more times.
+# cf. http://public-inbox.org/git/xmqqshv18i8i.fsf@gitster.mtv.corp.google.com/
+test_cmp_exclude_clean () {
+ expect=$1
+ actual=$2
+ for FILE in "$expect" "$actual"
+ do
+ grep -v "IN: clean" "$FILE" >"$FILE.tmp" &&
+ mv "$FILE.tmp" "$FILE"
+ done &&
+ test_cmp "$expect" "$actual"
+}
+
+# Check that the contents of two files are equal and that their rot13 version
+# is equal to the committed content.
+test_cmp_committed_rot13 () {
+ test_cmp "$1" "$2" &&
+ "$TEST_ROOT/rot13.sh" <"$1" >expected &&
+ git cat-file blob :"$2" >actual &&
+ test_cmp expected actual
+}
test_expect_success setup '
git config filter.rot13.smudge ./rot13.sh &&
cat test >test.i &&
git add test test.t test.i &&
rm -f test test.t test.i &&
- git checkout -- test test.t test.i
+ git checkout -- test test.t test.i &&
+
+ echo "content-test2" >test2.o &&
+ echo "content-test3 - filename with special characters" >"test3 '\''sq'\'',\$x.o"
'
script='s/^\$Id: \([0-9a-f]*\) \$/\1/p'
test_expect_success check '
- cmp test.o test &&
- cmp test.o test.t &&
+ test_cmp test.o test &&
+ test_cmp test.o test.t &&
# ident should be stripped in the repository
git diff --raw --exit-code :test :test.i &&
embedded=$(sed -ne "$script" test.i) &&
test "z$id" = "z$embedded" &&
- git cat-file blob :test.t > test.r &&
+ git cat-file blob :test.t >test.r &&
- ./rot13.sh < test.o > test.t &&
- cmp test.r test.t
+ ./rot13.sh <test.o >test.t &&
+ test_cmp test.r test.t
'
# If an expanded ident ever gets into the repository, we want to make sure that
# delete the files and check them out again, using a smudge filter
# that will count the args and echo the command-line back to us
- git config filter.argc.smudge "sh ./argc.sh %f" &&
+ test_config filter.argc.smudge "sh ./argc.sh %f" &&
rm "$normal" "$special" &&
git checkout -- "$normal" "$special" &&
test_cmp expect "$special" &&
# do the same thing, but with more args in the filter expression
- git config filter.argc.smudge "sh ./argc.sh %f --my-extra-arg" &&
+ test_config filter.argc.smudge "sh ./argc.sh %f --my-extra-arg" &&
rm "$normal" "$special" &&
git checkout -- "$normal" "$special" &&
'
test_expect_success 'required filter should filter data' '
- git config filter.required.smudge ./rot13.sh &&
- git config filter.required.clean ./rot13.sh &&
- git config filter.required.required true &&
+ test_config filter.required.smudge ./rot13.sh &&
+ test_config filter.required.clean ./rot13.sh &&
+ test_config filter.required.required true &&
echo "*.r filter=required" >.gitattributes &&
rm -f test.r &&
git checkout -- test.r &&
- cmp test.o test.r &&
+ test_cmp test.o test.r &&
./rot13.sh <test.o >expected &&
git cat-file blob :test.r >actual &&
- cmp expected actual
+ test_cmp expected actual
'
test_expect_success 'required filter smudge failure' '
- git config filter.failsmudge.smudge false &&
- git config filter.failsmudge.clean cat &&
- git config filter.failsmudge.required true &&
+ test_config filter.failsmudge.smudge false &&
+ test_config filter.failsmudge.clean cat &&
+ test_config filter.failsmudge.required true &&
echo "*.fs filter=failsmudge" >.gitattributes &&
'
test_expect_success 'required filter clean failure' '
- git config filter.failclean.smudge cat &&
- git config filter.failclean.clean false &&
- git config filter.failclean.required true &&
+ test_config filter.failclean.smudge cat &&
+ test_config filter.failclean.clean false &&
+ test_config filter.failclean.required true &&
echo "*.fc filter=failclean" >.gitattributes &&
'
test_expect_success 'filtering large input to small output should use little memory' '
- git config filter.devnull.clean "cat >/dev/null" &&
- git config filter.devnull.required true &&
+ test_config filter.devnull.clean "cat >/dev/null" &&
+ test_config filter.devnull.required true &&
for i in $(test_seq 1 30); do printf "%1048576d" 1; done >30MB &&
echo "30MB filter=devnull" >.gitattributes &&
GIT_MMAP_LIMIT=1m GIT_ALLOC_LIMIT=1m git add 30MB
test_expect_success 'filter that does not read is fine' '
test-genrandom foo $((128 * 1024 + 1)) >big &&
echo "big filter=epipe" >.gitattributes &&
- git config filter.epipe.clean "echo xyzzy" &&
+ test_config filter.epipe.clean "echo xyzzy" &&
git add big &&
git cat-file blob :big >actual &&
echo xyzzy >expect &&
'
test_expect_success EXPENSIVE 'filter large file' '
- git config filter.largefile.smudge cat &&
- git config filter.largefile.clean cat &&
+ test_config filter.largefile.smudge cat &&
+ test_config filter.largefile.clean cat &&
for i in $(test_seq 1 2048); do printf "%1048576d" 1; done >2GB &&
echo "2GB filter=largefile" >.gitattributes &&
git add 2GB 2>err &&
- ! test -s err &&
+ test_must_be_empty err &&
rm -f 2GB &&
git checkout -- 2GB 2>err &&
- ! test -s err
+ test_must_be_empty err
'
test_expect_success "filter: clean empty file" '
- git config filter.in-repo-header.clean "echo cleaned && cat" &&
- git config filter.in-repo-header.smudge "sed 1d" &&
+ test_config filter.in-repo-header.clean "echo cleaned && cat" &&
+ test_config filter.in-repo-header.smudge "sed 1d" &&
echo "empty-in-worktree filter=in-repo-header" >>.gitattributes &&
>empty-in-worktree &&
'
test_expect_success "filter: smudge empty file" '
- git config filter.empty-in-repo.clean "cat >/dev/null" &&
- git config filter.empty-in-repo.smudge "echo smudged && cat" &&
+ test_config filter.empty-in-repo.clean "cat >/dev/null" &&
+ test_config filter.empty-in-repo.smudge "echo smudged && cat" &&
echo "empty-in-repo filter=empty-in-repo" >>.gitattributes &&
echo dead data walking >empty-in-repo &&
test_line_count = 0 count
'
+test_expect_success PERL 'required process filter should filter data' '
+ test_config_global filter.protocol.process "$TEST_DIRECTORY/t0021/rot13-filter.pl clean smudge" &&
+ test_config_global filter.protocol.required true &&
+ rm -rf repo &&
+ mkdir repo &&
+ (
+ cd repo &&
+ git init &&
+
+ echo "git-stderr.log" >.gitignore &&
+ echo "*.r filter=protocol" >.gitattributes &&
+ git add . &&
+ git commit . -m "test commit 1" &&
+ git branch empty-branch &&
+
+ cp "$TEST_ROOT/test.o" test.r &&
+ cp "$TEST_ROOT/test2.o" test2.r &&
+ mkdir testsubdir &&
+ cp "$TEST_ROOT/test3 '\''sq'\'',\$x.o" "testsubdir/test3 '\''sq'\'',\$x.r" &&
+ >test4-empty.r &&
+
+ S=$(file_size test.r) &&
+ S2=$(file_size test2.r) &&
+ S3=$(file_size "testsubdir/test3 '\''sq'\'',\$x.r") &&
+
+ filter_git add . &&
+ cat >expected.log <<-EOF &&
+ START
+ init handshake complete
+ IN: clean test.r $S [OK] -- OUT: $S . [OK]
+ IN: clean test2.r $S2 [OK] -- OUT: $S2 . [OK]
+ IN: clean test4-empty.r 0 [OK] -- OUT: 0 [OK]
+ IN: clean testsubdir/test3 '\''sq'\'',\$x.r $S3 [OK] -- OUT: $S3 . [OK]
+ STOP
+ EOF
+ test_cmp_count expected.log rot13-filter.log &&
+
+ filter_git commit . -m "test commit 2" &&
+ cat >expected.log <<-EOF &&
+ START
+ init handshake complete
+ IN: clean test.r $S [OK] -- OUT: $S . [OK]
+ IN: clean test2.r $S2 [OK] -- OUT: $S2 . [OK]
+ IN: clean test4-empty.r 0 [OK] -- OUT: 0 [OK]
+ IN: clean testsubdir/test3 '\''sq'\'',\$x.r $S3 [OK] -- OUT: $S3 . [OK]
+ IN: clean test.r $S [OK] -- OUT: $S . [OK]
+ IN: clean test2.r $S2 [OK] -- OUT: $S2 . [OK]
+ IN: clean test4-empty.r 0 [OK] -- OUT: 0 [OK]
+ IN: clean testsubdir/test3 '\''sq'\'',\$x.r $S3 [OK] -- OUT: $S3 . [OK]
+ STOP
+ EOF
+ test_cmp_count expected.log rot13-filter.log &&
+
+ rm -f test2.r "testsubdir/test3 '\''sq'\'',\$x.r" &&
+
+ filter_git checkout --quiet --no-progress . &&
+ cat >expected.log <<-EOF &&
+ START
+ init handshake complete
+ IN: smudge test2.r $S2 [OK] -- OUT: $S2 . [OK]
+ IN: smudge testsubdir/test3 '\''sq'\'',\$x.r $S3 [OK] -- OUT: $S3 . [OK]
+ STOP
+ EOF
+ test_cmp_exclude_clean expected.log rot13-filter.log &&
+
+ filter_git checkout --quiet --no-progress empty-branch &&
+ cat >expected.log <<-EOF &&
+ START
+ init handshake complete
+ IN: clean test.r $S [OK] -- OUT: $S . [OK]
+ STOP
+ EOF
+ test_cmp_exclude_clean expected.log rot13-filter.log &&
+
+ filter_git checkout --quiet --no-progress master &&
+ cat >expected.log <<-EOF &&
+ START
+ init handshake complete
+ IN: smudge test.r $S [OK] -- OUT: $S . [OK]
+ IN: smudge test2.r $S2 [OK] -- OUT: $S2 . [OK]
+ IN: smudge test4-empty.r 0 [OK] -- OUT: 0 [OK]
+ IN: smudge testsubdir/test3 '\''sq'\'',\$x.r $S3 [OK] -- OUT: $S3 . [OK]
+ STOP
+ EOF
+ test_cmp_exclude_clean expected.log rot13-filter.log &&
+
+ test_cmp_committed_rot13 "$TEST_ROOT/test.o" test.r &&
+ test_cmp_committed_rot13 "$TEST_ROOT/test2.o" test2.r &&
+ test_cmp_committed_rot13 "$TEST_ROOT/test3 '\''sq'\'',\$x.o" "testsubdir/test3 '\''sq'\'',\$x.r"
+ )
+'
+
+test_expect_success PERL 'required process filter takes precedence' '
+ test_config_global filter.protocol.clean false &&
+ test_config_global filter.protocol.process "$TEST_DIRECTORY/t0021/rot13-filter.pl clean" &&
+ test_config_global filter.protocol.required true &&
+ rm -rf repo &&
+ mkdir repo &&
+ (
+ cd repo &&
+ git init &&
+
+ echo "*.r filter=protocol" >.gitattributes &&
+ cp "$TEST_ROOT/test.o" test.r &&
+ S=$(file_size test.r) &&
+
+ # Check that the process filter is invoked here
+ filter_git add . &&
+ cat >expected.log <<-EOF &&
+ START
+ init handshake complete
+ IN: clean test.r $S [OK] -- OUT: $S . [OK]
+ STOP
+ EOF
+ test_cmp_count expected.log rot13-filter.log
+ )
+'
+
+test_expect_success PERL 'required process filter should be used only for "clean" operation' '
+ test_config_global filter.protocol.process "$TEST_DIRECTORY/t0021/rot13-filter.pl clean" &&
+ rm -rf repo &&
+ mkdir repo &&
+ (
+ cd repo &&
+ git init &&
+
+ echo "*.r filter=protocol" >.gitattributes &&
+ cp "$TEST_ROOT/test.o" test.r &&
+ S=$(file_size test.r) &&
+
+ filter_git add . &&
+ cat >expected.log <<-EOF &&
+ START
+ init handshake complete
+ IN: clean test.r $S [OK] -- OUT: $S . [OK]
+ STOP
+ EOF
+ test_cmp_count expected.log rot13-filter.log &&
+
+ rm test.r &&
+
+ filter_git checkout --quiet --no-progress . &&
+ # If the filter would be used for "smudge", too, we would see
+ # "IN: smudge test.r 57 [OK] -- OUT: 57 . [OK]" here
+ cat >expected.log <<-EOF &&
+ START
+ init handshake complete
+ STOP
+ EOF
+ test_cmp_exclude_clean expected.log rot13-filter.log
+ )
+'
+
+test_expect_success PERL 'required process filter should process multiple packets' '
+ test_config_global filter.protocol.process "$TEST_DIRECTORY/t0021/rot13-filter.pl clean smudge" &&
+ test_config_global filter.protocol.required true &&
+
+ rm -rf repo &&
+ mkdir repo &&
+ (
+ cd repo &&
+ git init &&
+
+ # Generate data requiring 1, 2, 3 packets
+ S=65516 && # PKTLINE_DATA_MAXLEN -> maximum content size of a packet
+ generate_random_characters $(($S )) 1pkt_1__.file &&
+ generate_random_characters $(($S +1)) 2pkt_1+1.file &&
+ generate_random_characters $(($S*2-1)) 2pkt_2-1.file &&
+ generate_random_characters $(($S*2 )) 2pkt_2__.file &&
+ generate_random_characters $(($S*2+1)) 3pkt_2+1.file &&
+
+ for FILE in "$TEST_ROOT"/*.file
+ do
+ cp "$FILE" . &&
+ "$TEST_ROOT/rot13.sh" <"$FILE" >"$FILE.rot13"
+ done &&
+
+ echo "*.file filter=protocol" >.gitattributes &&
+ filter_git add *.file .gitattributes &&
+ cat >expected.log <<-EOF &&
+ START
+ init handshake complete
+ IN: clean 1pkt_1__.file $(($S )) [OK] -- OUT: $(($S )) . [OK]
+ IN: clean 2pkt_1+1.file $(($S +1)) [OK] -- OUT: $(($S +1)) .. [OK]
+ IN: clean 2pkt_2-1.file $(($S*2-1)) [OK] -- OUT: $(($S*2-1)) .. [OK]
+ IN: clean 2pkt_2__.file $(($S*2 )) [OK] -- OUT: $(($S*2 )) .. [OK]
+ IN: clean 3pkt_2+1.file $(($S*2+1)) [OK] -- OUT: $(($S*2+1)) ... [OK]
+ STOP
+ EOF
+ test_cmp_count expected.log rot13-filter.log &&
+
+ rm -f *.file &&
+
+ filter_git checkout --quiet --no-progress -- *.file &&
+ cat >expected.log <<-EOF &&
+ START
+ init handshake complete
+ IN: smudge 1pkt_1__.file $(($S )) [OK] -- OUT: $(($S )) . [OK]
+ IN: smudge 2pkt_1+1.file $(($S +1)) [OK] -- OUT: $(($S +1)) .. [OK]
+ IN: smudge 2pkt_2-1.file $(($S*2-1)) [OK] -- OUT: $(($S*2-1)) .. [OK]
+ IN: smudge 2pkt_2__.file $(($S*2 )) [OK] -- OUT: $(($S*2 )) .. [OK]
+ IN: smudge 3pkt_2+1.file $(($S*2+1)) [OK] -- OUT: $(($S*2+1)) ... [OK]
+ STOP
+ EOF
+ test_cmp_exclude_clean expected.log rot13-filter.log &&
+
+ for FILE in *.file
+ do
+ test_cmp_committed_rot13 "$TEST_ROOT/$FILE" $FILE
+ done
+ )
+'
+
+test_expect_success PERL 'required process filter with clean error should fail' '
+ test_config_global filter.protocol.process "$TEST_DIRECTORY/t0021/rot13-filter.pl clean smudge" &&
+ test_config_global filter.protocol.required true &&
+ rm -rf repo &&
+ mkdir repo &&
+ (
+ cd repo &&
+ git init &&
+
+ echo "*.r filter=protocol" >.gitattributes &&
+
+ cp "$TEST_ROOT/test.o" test.r &&
+ echo "this is going to fail" >clean-write-fail.r &&
+ echo "content-test3-subdir" >test3.r &&
+
+ test_must_fail git add .
+ )
+'
+
+test_expect_success PERL 'process filter should restart after unexpected write failure' '
+ test_config_global filter.protocol.process "$TEST_DIRECTORY/t0021/rot13-filter.pl clean smudge" &&
+ rm -rf repo &&
+ mkdir repo &&
+ (
+ cd repo &&
+ git init &&
+
+ echo "*.r filter=protocol" >.gitattributes &&
+
+ cp "$TEST_ROOT/test.o" test.r &&
+ cp "$TEST_ROOT/test2.o" test2.r &&
+ echo "this is going to fail" >smudge-write-fail.o &&
+ cp smudge-write-fail.o smudge-write-fail.r &&
+
+ S=$(file_size test.r) &&
+ S2=$(file_size test2.r) &&
+ SF=$(file_size smudge-write-fail.r) &&
+
+ git add . &&
+ rm -f *.r &&
+
+ rm -f rot13-filter.log &&
+ git checkout --quiet --no-progress . 2>git-stderr.log &&
+
+ grep "smudge write error at" git-stderr.log &&
+ grep "error: external filter" git-stderr.log &&
+
+ cat >expected.log <<-EOF &&
+ START
+ init handshake complete
+ IN: smudge smudge-write-fail.r $SF [OK] -- OUT: $SF [WRITE FAIL]
+ START
+ init handshake complete
+ IN: smudge test.r $S [OK] -- OUT: $S . [OK]
+ IN: smudge test2.r $S2 [OK] -- OUT: $S2 . [OK]
+ STOP
+ EOF
+ test_cmp_exclude_clean expected.log rot13-filter.log &&
+
+ test_cmp_committed_rot13 "$TEST_ROOT/test.o" test.r &&
+ test_cmp_committed_rot13 "$TEST_ROOT/test2.o" test2.r &&
+
+ # Smudge failed
+ ! test_cmp smudge-write-fail.o smudge-write-fail.r &&
+ "$TEST_ROOT/rot13.sh" <smudge-write-fail.o >expected &&
+ git cat-file blob :smudge-write-fail.r >actual &&
+ test_cmp expected actual
+ )
+'
+
+test_expect_success PERL 'process filter should not be restarted if it signals an error' '
+ test_config_global filter.protocol.process "$TEST_DIRECTORY/t0021/rot13-filter.pl clean smudge" &&
+ rm -rf repo &&
+ mkdir repo &&
+ (
+ cd repo &&
+ git init &&
+
+ echo "*.r filter=protocol" >.gitattributes &&
+
+ cp "$TEST_ROOT/test.o" test.r &&
+ cp "$TEST_ROOT/test2.o" test2.r &&
+ echo "this will cause an error" >error.o &&
+ cp error.o error.r &&
+
+ S=$(file_size test.r) &&
+ S2=$(file_size test2.r) &&
+ SE=$(file_size error.r) &&
+
+ git add . &&
+ rm -f *.r &&
+
+ filter_git checkout --quiet --no-progress . &&
+ cat >expected.log <<-EOF &&
+ START
+ init handshake complete
+ IN: smudge error.r $SE [OK] -- OUT: 0 [ERROR]
+ IN: smudge test.r $S [OK] -- OUT: $S . [OK]
+ IN: smudge test2.r $S2 [OK] -- OUT: $S2 . [OK]
+ STOP
+ EOF
+ test_cmp_exclude_clean expected.log rot13-filter.log &&
+
+ test_cmp_committed_rot13 "$TEST_ROOT/test.o" test.r &&
+ test_cmp_committed_rot13 "$TEST_ROOT/test2.o" test2.r &&
+ test_cmp error.o error.r
+ )
+'
+
+test_expect_success PERL 'process filter abort stops processing of all further files' '
+ test_config_global filter.protocol.process "$TEST_DIRECTORY/t0021/rot13-filter.pl clean smudge" &&
+ rm -rf repo &&
+ mkdir repo &&
+ (
+ cd repo &&
+ git init &&
+
+ echo "*.r filter=protocol" >.gitattributes &&
+
+ cp "$TEST_ROOT/test.o" test.r &&
+ cp "$TEST_ROOT/test2.o" test2.r &&
+ echo "error this blob and all future blobs" >abort.o &&
+ cp abort.o abort.r &&
+
+ SA=$(file_size abort.r) &&
+
+ git add . &&
+ rm -f *.r &&
+
+ # Note: This test assumes that Git filters files in alphabetical
+ # order ("abort.r" before "test.r").
+ filter_git checkout --quiet --no-progress . &&
+ cat >expected.log <<-EOF &&
+ START
+ init handshake complete
+ IN: smudge abort.r $SA [OK] -- OUT: 0 [ABORT]
+ STOP
+ EOF
+ test_cmp_exclude_clean expected.log rot13-filter.log &&
+
+ test_cmp "$TEST_ROOT/test.o" test.r &&
+ test_cmp "$TEST_ROOT/test2.o" test2.r &&
+ test_cmp abort.o abort.r
+ )
+'
+
+test_expect_success PERL 'invalid process filter must fail (and not hang!)' '
+ test_config_global filter.protocol.process cat &&
+ test_config_global filter.protocol.required true &&
+ rm -rf repo &&
+ mkdir repo &&
+ (
+ cd repo &&
+ git init &&
+
+ echo "*.r filter=protocol" >.gitattributes &&
+
+ cp "$TEST_ROOT/test.o" test.r &&
+ test_must_fail git add . 2>git-stderr.log &&
+ grep "does not support filter protocol version" git-stderr.log
+ )
+'
+
test_done
--- /dev/null
+#!/usr/bin/perl
+#
+# Example implementation for the Git filter protocol version 2
+# See Documentation/gitattributes.txt, section "Filter Protocol"
+#
+# The script takes the list of supported protocol capabilities as
+# arguments ("clean", "smudge", etc).
+#
+# This implementation supports special test cases:
+# (1) If data with the pathname "clean-write-fail.r" is processed with
+# a "clean" operation then the write operation will die.
+# (2) If data with the pathname "smudge-write-fail.r" is processed with
+# a "smudge" operation then the write operation will die.
+# (3) If data with the pathname "error.r" is processed with any
+# operation then the filter signals that it cannot or does not want
+# to process the file.
+# (4) If data with the pathname "abort.r" is processed with any
+# operation then the filter signals that it cannot or does not want
+# to process that file or any file processed afterwards with the
+# same command.
+#
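+# Example invocation, matching how the tests configure the driver via
+# filter.<driver>.process:
+#
+#     rot13-filter.pl clean smudge
+#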
+
+use strict;
+use warnings;
+
+my $MAX_PACKET_CONTENT_SIZE = 65516;
+my @capabilities = @ARGV;
+
+open my $debug, ">>", "rot13-filter.log" or die "cannot open log file: $!";
+
+sub rot13 {
+ my $str = shift;
+ $str =~ y/A-Za-z/N-ZA-Mn-za-m/;
+ return $str;
+}
+
+sub packet_bin_read {
+ my $buffer;
+ my $bytes_read = read STDIN, $buffer, 4;
+ if ( $bytes_read == 0 ) {
+ # EOF - Git stopped talking to us!
+ print $debug "STOP\n";
+ exit();
+ }
+ elsif ( $bytes_read != 4 ) {
+ die "invalid packet: '$buffer'";
+ }
+ my $pkt_size = hex($buffer);
+ if ( $pkt_size == 0 ) {
+ return ( 1, "" );
+ }
+ elsif ( $pkt_size > 4 ) {
+ my $content_size = $pkt_size - 4;
+ $bytes_read = read STDIN, $buffer, $content_size;
+ if ( $bytes_read != $content_size ) {
+ die "invalid packet ($content_size bytes expected; $bytes_read bytes read)";
+ }
+ return ( 0, $buffer );
+ }
+ else {
+ die "invalid packet size: $pkt_size";
+ }
+}
+
+sub packet_txt_read {
+ my ( $res, $buf ) = packet_bin_read();
+ unless ( $buf =~ s/\n$// ) {
+ die "A non-binary line MUST be terminated by an LF.";
+ }
+ return ( $res, $buf );
+}
+
+sub packet_bin_write {
+ my $buf = shift;
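+ # pkt-line framing: 4 hex digits give the total packet length
+ # (including these 4 header bytes), followed by the payload.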
+ print STDOUT sprintf( "%04x", length($buf) + 4 );
+ print STDOUT $buf;
+ STDOUT->flush();
+}
+
+sub packet_txt_write {
+ packet_bin_write( $_[0] . "\n" );
+}
+
+sub packet_flush {
+ print STDOUT sprintf( "%04x", 0 );
+ STDOUT->flush();
+}
+
+print $debug "START\n";
+$debug->flush();
+
+( packet_txt_read() eq ( 0, "git-filter-client" ) ) || die "bad initialize";
+( packet_txt_read() eq ( 0, "version=2" ) ) || die "bad version";
+( packet_bin_read() eq ( 1, "" ) ) || die "bad version end";
+
+packet_txt_write("git-filter-server");
+packet_txt_write("version=2");
+packet_flush();
+
+( packet_txt_read() eq ( 0, "capability=clean" ) ) || die "bad capability";
+( packet_txt_read() eq ( 0, "capability=smudge" ) ) || die "bad capability";
+( packet_bin_read() eq ( 1, "" ) ) || die "bad capability end";
+
+foreach (@capabilities) {
+ packet_txt_write( "capability=" . $_ );
+}
+packet_flush();
+print $debug "init handshake complete\n";
+$debug->flush();
+
+while (1) {
+ my ($command) = packet_txt_read() =~ /^command=([^=]+)$/;
+ print $debug "IN: $command";
+ $debug->flush();
+
+ my ($pathname) = packet_txt_read() =~ /^pathname=([^=]+)$/;
+ print $debug " $pathname";
+ $debug->flush();
+
+ # Flush
+ packet_bin_read();
+
+ my $input = "";
+ {
+ binmode(STDIN);
+ my $buffer;
+ my $done = 0;
+ while ( !$done ) {
+ ( $done, $buffer ) = packet_bin_read();
+ $input .= $buffer;
+ }
+ print $debug " " . length($input) . " [OK] -- ";
+ $debug->flush();
+ }
+
+ my $output;
+ if ( $pathname eq "error.r" or $pathname eq "abort.r" ) {
+ $output = "";
+ }
+ elsif ( $command eq "clean" and grep( /^clean$/, @capabilities ) ) {
+ $output = rot13($input);
+ }
+ elsif ( $command eq "smudge" and grep( /^smudge$/, @capabilities ) ) {
+ $output = rot13($input);
+ }
+ else {
+ die "bad command '$command'";
+ }
+
+ print $debug "OUT: " . length($output) . " ";
+ $debug->flush();
+
+ if ( $pathname eq "error.r" ) {
+ print $debug "[ERROR]\n";
+ $debug->flush();
+ packet_txt_write("status=error");
+ packet_flush();
+ }
+ elsif ( $pathname eq "abort.r" ) {
+ print $debug "[ABORT]\n";
+ $debug->flush();
+ packet_txt_write("status=abort");
+ packet_flush();
+ }
+ else {
+ packet_txt_write("status=success");
+ packet_flush();
+
+ if ( $pathname eq "${command}-write-fail.r" ) {
+ print $debug "[WRITE FAIL]\n";
+ $debug->flush();
+ die "${command} write error";
+ }
+
+ while ( length($output) > 0 ) {
+ my $packet = substr( $output, 0, $MAX_PACKET_CONTENT_SIZE );
+ packet_bin_write($packet);
+ # dots represent the number of packets
+ print $debug ".";
+ if ( length($output) > $MAX_PACKET_CONTENT_SIZE ) {
+ $output = substr( $output, $MAX_PACKET_CONTENT_SIZE );
+ }
+ else {
+ $output = "";
+ }
+ }
+ packet_flush();
+ print $debug " [OK]\n";
+ $debug->flush();
+ packet_flush();
+ }
+}
test_git_path GIT_COMMON_DIR=bar packed-refs bar/packed-refs
test_git_path GIT_COMMON_DIR=bar shallow bar/shallow
-# In the tests below, the distinction between $PWD and $(pwd) is important:
-# on Windows, $PWD is POSIX style (/c/foo), $(pwd) has drive letter (c:/foo).
+# In the tests below, $(pwd) must be used because it is a native path on
+# Windows and avoids MSYS's path mangling (which simplifies "foo/../bar" and
+# strips the dot from trailing "/.").
test_submodule_relative_url "../" "../foo" "../submodule" "../../submodule"
test_submodule_relative_url "../" "../foo/bar" "../submodule" "../../foo/submodule"
test_submodule_relative_url "../" "./foo" "../submodule" "../submodule"
test_submodule_relative_url "../" "./foo/bar" "../submodule" "../foo/submodule"
test_submodule_relative_url "../../../" "../foo/bar" "../sub/a/b/c" "../../../../foo/sub/a/b/c"
-test_submodule_relative_url "../" "$PWD/addtest" "../repo" "$(pwd)/repo"
+test_submodule_relative_url "../" "$(pwd)/addtest" "../repo" "$(pwd)/repo"
test_submodule_relative_url "../" "foo/bar" "../submodule" "../foo/submodule"
test_submodule_relative_url "../" "foo" "../submodule" "../submodule"
test_submodule_relative_url "(null)" "../foo/bar" "../sub/a/b/c" "../foo/sub/a/b/c"
+test_submodule_relative_url "(null)" "../foo/bar" "../sub/a/b/c/" "../foo/sub/a/b/c"
+test_submodule_relative_url "(null)" "../foo/bar/" "../sub/a/b/c" "../foo/sub/a/b/c"
test_submodule_relative_url "(null)" "../foo/bar" "../submodule" "../foo/submodule"
test_submodule_relative_url "(null)" "../foo/submodule" "../submodule" "../foo/submodule"
test_submodule_relative_url "(null)" "../foo" "../submodule" "../submodule"
test_submodule_relative_url "(null)" "./foo/bar" "../submodule" "foo/submodule"
test_submodule_relative_url "(null)" "./foo" "../submodule" "submodule"
test_submodule_relative_url "(null)" "//somewhere else/repo" "../subrepo" "//somewhere else/subrepo"
-test_submodule_relative_url "(null)" "$PWD/subsuper_update_r" "../subsubsuper_update_r" "$(pwd)/subsubsuper_update_r"
-test_submodule_relative_url "(null)" "$PWD/super_update_r2" "../subsuper_update_r" "$(pwd)/subsuper_update_r"
-test_submodule_relative_url "(null)" "$PWD/." "../." "$(pwd)/."
-test_submodule_relative_url "(null)" "$PWD" "./." "$(pwd)/."
-test_submodule_relative_url "(null)" "$PWD/addtest" "../repo" "$(pwd)/repo"
-test_submodule_relative_url "(null)" "$PWD" "./å äö" "$(pwd)/å äö"
-test_submodule_relative_url "(null)" "$PWD/." "../submodule" "$(pwd)/submodule"
-test_submodule_relative_url "(null)" "$PWD/submodule" "../submodule" "$(pwd)/submodule"
-test_submodule_relative_url "(null)" "$PWD/home2/../remote" "../bundle1" "$(pwd)/home2/../bundle1"
-test_submodule_relative_url "(null)" "$PWD/submodule_update_repo" "./." "$(pwd)/submodule_update_repo/."
+test_submodule_relative_url "(null)" "$(pwd)/subsuper_update_r" "../subsubsuper_update_r" "$(pwd)/subsubsuper_update_r"
+test_submodule_relative_url "(null)" "$(pwd)/super_update_r2" "../subsuper_update_r" "$(pwd)/subsuper_update_r"
+test_submodule_relative_url "(null)" "$(pwd)/." "../." "$(pwd)/."
+test_submodule_relative_url "(null)" "$(pwd)" "./." "$(pwd)/."
+test_submodule_relative_url "(null)" "$(pwd)/addtest" "../repo" "$(pwd)/repo"
+test_submodule_relative_url "(null)" "$(pwd)" "./å äö" "$(pwd)/å äö"
+test_submodule_relative_url "(null)" "$(pwd)/." "../submodule" "$(pwd)/submodule"
+test_submodule_relative_url "(null)" "$(pwd)/submodule" "../submodule" "$(pwd)/submodule"
+test_submodule_relative_url "(null)" "$(pwd)/home2/../remote" "../bundle1" "$(pwd)/home2/../bundle1"
+test_submodule_relative_url "(null)" "$(pwd)/submodule_update_repo" "./." "$(pwd)/submodule_update_repo/."
test_submodule_relative_url "(null)" "file:///tmp/repo" "../subrepo" "file:///tmp/subrepo"
test_submodule_relative_url "(null)" "foo/bar" "../submodule" "foo/submodule"
test_submodule_relative_url "(null)" "foo" "../submodule" "submodule"
test "$obname1" = "$obname1new"
'
-test_expect_success 'check that appropriate filter is invoke when --path is used' '
+test_expect_success 'set up crlf tests' '
echo fooQ | tr Q "\\015" >file0 &&
cp file0 file1 &&
echo "file0 -crlf" >.gitattributes &&
git config core.autocrlf true &&
file0_sha=$(git hash-object file0) &&
file1_sha=$(git hash-object file1) &&
- test "$file0_sha" != "$file1_sha" &&
+ test "$file0_sha" != "$file1_sha"
+'
+
+test_expect_success 'check that appropriate filter is invoked when --path is used' '
path1_sha=$(git hash-object --path=file1 file0) &&
path0_sha=$(git hash-object --path=file0 file1) &&
test "$file0_sha" = "$path0_sha" &&
path1_sha=$(cat file0 | git hash-object --path=file1 --stdin) &&
path0_sha=$(cat file1 | git hash-object --path=file0 --stdin) &&
test "$file0_sha" = "$path0_sha" &&
- test "$file1_sha" = "$path1_sha" &&
- git config --unset core.autocrlf
+ test "$file1_sha" = "$path1_sha"
+'
+
+test_expect_success 'gitattributes also work in a subdirectory' '
+ mkdir subdir &&
+ (
+ cd subdir &&
+ subdir_sha0=$(git hash-object ../file0) &&
+ subdir_sha1=$(git hash-object ../file1) &&
+ test "$file0_sha" = "$subdir_sha0" &&
+ test "$file1_sha" = "$subdir_sha1"
+ )
'
test_expect_success 'check that --no-filters option works' '
- echo fooQ | tr Q "\\015" >file0 &&
- cp file0 file1 &&
- echo "file0 -crlf" >.gitattributes &&
- echo "file1 crlf" >>.gitattributes &&
- git config core.autocrlf true &&
- file0_sha=$(git hash-object file0) &&
- file1_sha=$(git hash-object file1) &&
- test "$file0_sha" != "$file1_sha" &&
nofilters_file1=$(git hash-object --no-filters file1) &&
test "$file0_sha" = "$nofilters_file1" &&
nofilters_file1=$(cat file1 | git hash-object --stdin) &&
- test "$file0_sha" = "$nofilters_file1" &&
- git config --unset core.autocrlf
+ test "$file0_sha" = "$nofilters_file1"
'
test_expect_success 'check that --no-filters option works with --stdin-paths' '
- echo fooQ | tr Q "\\015" >file0 &&
- cp file0 file1 &&
- echo "file0 -crlf" >.gitattributes &&
- echo "file1 crlf" >>.gitattributes &&
- git config core.autocrlf true &&
- file0_sha=$(git hash-object file0) &&
- file1_sha=$(git hash-object file1) &&
- test "$file0_sha" != "$file1_sha" &&
nofilters_file1=$(echo "file1" | git hash-object --stdin-paths --no-filters) &&
- test "$file0_sha" = "$nofilters_file1" &&
- git config --unset core.autocrlf
+ test "$file0_sha" = "$nofilters_file1"
'
pop_repo
pop_repo
done
-test_expect_success 'corrupt tree' '
+test_expect_success 'too-short tree' '
echo abc >malformed-tree &&
- test_must_fail git hash-object -t tree malformed-tree
+ test_must_fail git hash-object -t tree malformed-tree 2>err &&
+ test_i18ngrep "too-short tree object" err
+'
+
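+# Expand hex digits into octal escape sequences ("\ooo") so that the printf
+# calls below can embed the raw binary object name in a tree entry.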
+hex2oct() {
+ perl -ne 'printf "\\%03o", hex for /../g'
+}
+
+test_expect_success 'malformed mode in tree' '
+ hex_sha1=$(echo foo | git hash-object --stdin -w) &&
+ bin_sha1=$(echo $hex_sha1 | hex2oct) &&
+ printf "9100644 \0$bin_sha1" >tree-with-malformed-mode &&
+ test_must_fail git hash-object -t tree tree-with-malformed-mode 2>err &&
+ test_i18ngrep "malformed mode in tree entry" err
+'
+
+test_expect_success 'empty filename in tree' '
+ hex_sha1=$(echo foo | git hash-object --stdin -w) &&
+ bin_sha1=$(echo $hex_sha1 | hex2oct) &&
+ printf "100644 \0$bin_sha1" >tree-with-empty-filename &&
+ test_must_fail git hash-object -t tree tree-with-empty-filename 2>err &&
+ test_i18ngrep "empty filename in tree entry" err
'
test_expect_success 'corrupt commit' '
}" actual)"
'
+test_expect_success POSIXPERM 'remote init does not use config from cwd' '
+ git config core.sharedrepository 0666 &&
+ umask 0022 &&
+ git init --bare child.git &&
+ echo "-rw-r--r--" >expect &&
+ modebits child.git/config >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success POSIXPERM 're-init respects core.sharedrepository (local)' '
+ git config core.sharedrepository 0666 &&
+ umask 0022 &&
+ echo whatever >templates/foo &&
+ git init --template=templates &&
+ echo "-rw-rw-rw-" >expect &&
+ modebits .git/foo >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success POSIXPERM 're-init respects core.sharedrepository (remote)' '
+ rm -rf child.git &&
+ umask 0022 &&
+ git init --bare --shared=0666 child.git &&
+ test_path_is_missing child.git/foo &&
+ git init --bare --template=../templates child.git &&
+ echo "-rw-rw-rw-" >expect &&
+ modebits child.git/foo >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success POSIXPERM 'template can set core.sharedrepository' '
+ rm -rf child.git &&
+ umask 0022 &&
+ git config core.sharedrepository 0666 &&
+ cp .git/config templates/config &&
+ git init --bare --template=../templates child.git &&
+ echo "-rw-rw-rw-" >expect &&
+ modebits child.git/HEAD >actual &&
+ test_cmp expect actual
+'
+
test_done
test_expect_success 'gitdir selection on normal repos' '
echo 0 >expect &&
git config core.repositoryformatversion >actual &&
- (
- cd test &&
- git config core.repositoryformatversion >../actual2
- ) &&
+ git -C test config core.repositoryformatversion >actual2 &&
test_cmp expect actual &&
test_cmp expect actual2
'
test_expect_success 'gitdir selection on unsupported repo' '
# Make sure it would stop at test2, not trash
- echo 99 >expect &&
- (
- cd test2 &&
- git config core.repositoryformatversion >../actual
- ) &&
- test_cmp expect actual
+ test_expect_code 1 git -C test2 config core.repositoryformatversion >actual
'
test_expect_success 'gitdir not required mode' '
git apply --stat test.patch &&
- (
- cd test &&
- git apply --stat ../test.patch
- ) &&
- (
- cd test2 &&
- git apply --stat ../test.patch
- )
+ git -C test apply --stat ../test.patch &&
+ git -C test2 apply --stat ../test.patch
'
test_expect_success 'gitdir required mode' '
git apply --check --index test.patch &&
- (
- cd test &&
- git apply --check --index ../test.patch
- ) &&
- (
- cd test2 &&
- test_must_fail git apply --check --index ../test.patch
- )
+ git -C test apply --check --index ../test.patch &&
+ test_must_fail git -C test2 apply --check --index ../test.patch
'
check_allow () {
grep "error in commit $new.*unterminated header: NUL at offset" out
'
-test_expect_success 'malformatted tree object' '
- test_when_finished "git update-ref -d refs/tags/wrong" &&
+test_expect_success 'tree object with duplicate entries' '
test_when_finished "remove_object \$T" &&
T=$(
GIT_INDEX_FILE=test-index &&
grep "error in tree .*contains duplicate file entries" out
'
+test_expect_success 'unparseable tree object' '
+ test_when_finished "git update-ref -d refs/heads/wrong" &&
+ test_when_finished "remove_object \$tree_sha1" &&
+ test_when_finished "remove_object \$commit_sha1" &&
+ tree_sha1=$(printf "100644 \0twenty-bytes-of-junk" | git hash-object -t tree --stdin -w --literally) &&
+ commit_sha1=$(git commit-tree $tree_sha1) &&
+ git update-ref refs/heads/wrong $commit_sha1 &&
+ test_must_fail git fsck 2>out &&
+ test_i18ngrep "error: empty filename in tree entry" out &&
+ test_i18ngrep "$tree_sha1" out &&
+ test_i18ngrep ! "fatal: empty filename in tree entry" out
+'
+
test_expect_success 'tag pointing to nonexistent' '
cat >invalid-tag <<-\EOF &&
object ffffffffffffffffffffffffffffffffffffffff
test_expect_success 'warn ambiguity when no candidate matches type hint' '
test_must_fail git rev-parse --verify 000000000^{commit} 2>actual &&
- grep "short SHA1 000000000 is ambiguous" actual
+ test_i18ngrep "short SHA1 000000000 is ambiguous" actual
'
test_expect_success 'disambiguate tree-ish' '
test_must_fail git log 000000000...
'
+# There are three objects with this prefix: a blob, a tree, and a tag. We know
+# the blob will not pass as a treeish, but the tree and tag should (and thus
+# cause an error).
+test_expect_success 'ambiguous tags peel to treeish' '
+ test_must_fail git rev-parse 0000000000f^{tree}
+'
+
test_expect_success 'rev-parse --disambiguate' '
# The test creates 16 objects that share the prefix and two
# commits created by commit-tree in earlier tests share a
test "$(sed -e "s/^\(.........\).*/\1/" actual | sort -u)" = 000000000
'
+test_expect_success 'rev-parse --disambiguate drops duplicates' '
+ git rev-parse --disambiguate=000000000 >expect &&
+ git pack-objects .git/objects/pack/pack <expect &&
+ git rev-parse --disambiguate=000000000 >actual &&
+ test_cmp expect actual
+'
+
test_expect_success 'ambiguous 40-hex ref' '
TREE=$(git mktree </dev/null) &&
REF=$(git rev-parse HEAD) &&
grep "refname.*${REF}.*ambiguous" err
'
+test_expect_success C_LOCALE_OUTPUT 'ambiguity errors are not repeated (raw)' '
+ test_must_fail git rev-parse 00000 2>stderr &&
+ grep "is ambiguous" stderr >errors &&
+ test_line_count = 1 errors
+'
+
+test_expect_success C_LOCALE_OUTPUT 'ambiguity errors are not repeated (treeish)' '
+ test_must_fail git rev-parse 00000:foo 2>stderr &&
+ grep "is ambiguous" stderr >errors &&
+ test_line_count = 1 errors
+'
+
+test_expect_success C_LOCALE_OUTPUT 'ambiguity errors are not repeated (peel)' '
+ test_must_fail git rev-parse 00000^{commit} 2>stderr &&
+ grep "is ambiguous" stderr >errors &&
+ test_line_count = 1 errors
+'
+
+test_expect_success C_LOCALE_OUTPUT 'ambiguity hints' '
+ test_must_fail git rev-parse 000000000 2>stderr &&
+ grep ^hint: stderr >hints &&
+ # 16 candidates, plus one intro line
+ test_line_count = 17 hints
+'
+
+test_expect_success C_LOCALE_OUTPUT 'ambiguity hints respect type' '
+ test_must_fail git rev-parse 000000000^{commit} 2>stderr &&
+ grep ^hint: stderr >hints &&
+ # 5 commits, 1 tag (which is a commitish), plus intro line
+ test_line_count = 7 hints
+'
+
+test_expect_success C_LOCALE_OUTPUT 'failed type-selector still shows hint' '
+ # these two blobs share the same prefix "ee3d", but neither
+ # will pass for a commit
+ echo 851 | git hash-object --stdin -w &&
+ echo 872 | git hash-object --stdin -w &&
+ test_must_fail git rev-parse ee3d^{commit} 2>stderr &&
+ grep ^hint: stderr >hints &&
+ test_line_count = 3 hints
+'
+
+test_expect_success 'core.disambiguate config can prefer types' '
+ # ambiguous between tree and tag
+ sha1=0000000000f &&
+ test_must_fail git rev-parse $sha1 &&
+ git rev-parse $sha1^{commit} &&
+ git -c core.disambiguate=committish rev-parse $sha1
+'
+
+test_expect_success 'core.disambiguate does not override context' '
+ # treeish ambiguous between tag and tree
+ test_must_fail \
+ git -c core.disambiguate=committish rev-parse $sha1^{tree}
+'
+
test_done
. ./test-lib.sh
test_expect_success 'intent to add' '
+ test_commit 1 &&
+ git rm 1.t &&
+ echo hello >1.t &&
echo hello >file &&
echo hello >elif &&
git add -N file &&
- git add elif
+ git add elif &&
+ git add -N 1.t
+'
+
+test_expect_success 'git status' '
+ git status --porcelain | grep -v actual >actual &&
+ cat >expect <<-\EOF &&
+ DA 1.t
+ A elif
+ A file
+ EOF
+ test_cmp expect actual
'
test_expect_success 'check result of "add -N"' '
git add -N nitfol &&
git commit -m second &&
test $(git ls-tree HEAD -- nitfol | wc -l) = 0 &&
- test $(git diff --name-only HEAD -- nitfol | wc -l) = 1
+ test $(git diff --name-only HEAD -- nitfol | wc -l) = 1 &&
+ test $(git diff --name-only --ita-invisible-in-index HEAD -- nitfol | wc -l) = 0 &&
+ test $(git diff --name-only --ita-invisible-in-index -- nitfol | wc -l) = 1
'
test_expect_success 'can commit with an unrelated i-t-a entry in index' '
)
'
+test_expect_success 'commit: ita entries ignored in empty initial commit check' '
+ git init empty-initial-commit &&
+ (
+ cd empty-initial-commit &&
+ : >one &&
+ git add -N one &&
+ test_must_fail git commit -m nothing-new-here
+ )
+'
+
+test_expect_success 'commit: ita entries ignored in empty commit check' '
+ git init empty-subsequent-commit &&
+ (
+ cd empty-subsequent-commit &&
+ test_commit one &&
+ : >two &&
+ git add -N two &&
+ test_must_fail git commit -m nothing-new-here
+ )
+'
+
test_done
--- /dev/null
+#!/bin/sh
+
+test_description='Test ls-files recurse-submodules feature
+
+This test verifies the recurse-submodules feature correctly lists files from
+submodules.
+'
+
+. ./test-lib.sh
+
+test_expect_success 'setup directory structure and submodules' '
+ echo a >a &&
+ mkdir b &&
+ echo b >b/b &&
+ git add a b &&
+ git commit -m "add a and b" &&
+ git init submodule &&
+ echo c >submodule/c &&
+ git -C submodule add c &&
+ git -C submodule commit -m "add c" &&
+ git submodule add ./submodule &&
+ git commit -m "added submodule"
+'
+
+test_expect_success 'ls-files correctly outputs files in submodule' '
+ cat >expect <<-\EOF &&
+ .gitmodules
+ a
+ b/b
+ submodule/c
+ EOF
+
+ git ls-files --recurse-submodules >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'ls-files correctly outputs files in submodule with -z' '
+ lf_to_nul >expect <<-\EOF &&
+ .gitmodules
+ a
+ b/b
+ submodule/c
+ EOF
+
+ git ls-files --recurse-submodules -z >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'ls-files does not output files not added to a repo' '
+ cat >expect <<-\EOF &&
+ .gitmodules
+ a
+ b/b
+ submodule/c
+ EOF
+
+ echo a >not_added &&
+ echo b >b/not_added &&
+ echo c >submodule/not_added &&
+ git ls-files --recurse-submodules >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'ls-files recurses more than 1 level' '
+ cat >expect <<-\EOF &&
+ .gitmodules
+ a
+ b/b
+ submodule/.gitmodules
+ submodule/c
+ submodule/subsub/d
+ EOF
+
+ git init submodule/subsub &&
+ echo d >submodule/subsub/d &&
+ git -C submodule/subsub add d &&
+ git -C submodule/subsub commit -m "add d" &&
+ git -C submodule submodule add ./subsub &&
+ git -C submodule commit -m "added subsub" &&
+ git ls-files --recurse-submodules >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success '--recurse-submodules and pathspecs setup' '
+ echo e >submodule/subsub/e.txt &&
+ git -C submodule/subsub add e.txt &&
+ git -C submodule/subsub commit -m "adding e.txt" &&
+ echo f >submodule/f.TXT &&
+ echo g >submodule/g.txt &&
+ git -C submodule add f.TXT g.txt &&
+ git -C submodule commit -m "add f and g" &&
+ echo h >h.txt &&
+ mkdir sib &&
+ echo sib >sib/file &&
+ git add h.txt sib/file &&
+ git commit -m "add h and sib/file" &&
+ git init sub &&
+ echo sub >sub/file &&
+ git -C sub add file &&
+ git -C sub commit -m "add file" &&
+ git submodule add ./sub &&
+ git commit -m "added sub" &&
+
+ cat >expect <<-\EOF &&
+ .gitmodules
+ a
+ b/b
+ h.txt
+ sib/file
+ sub/file
+ submodule/.gitmodules
+ submodule/c
+ submodule/f.TXT
+ submodule/g.txt
+ submodule/subsub/d
+ submodule/subsub/e.txt
+ EOF
+
+ git ls-files --recurse-submodules >actual &&
+ test_cmp expect actual &&
+ cat actual &&
+ git ls-files --recurse-submodules "*" >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success '--recurse-submodules and pathspecs' '
+ cat >expect <<-\EOF &&
+ h.txt
+ submodule/g.txt
+ submodule/subsub/e.txt
+ EOF
+
+ git ls-files --recurse-submodules "*.txt" >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success '--recurse-submodules and pathspecs' '
+ cat >expect <<-\EOF &&
+ h.txt
+ submodule/f.TXT
+ submodule/g.txt
+ submodule/subsub/e.txt
+ EOF
+
+ git ls-files --recurse-submodules ":(icase)*.txt" >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success '--recurse-submodules and pathspecs' '
+ cat >expect <<-\EOF &&
+ h.txt
+ submodule/f.TXT
+ submodule/g.txt
+ EOF
+
+ git ls-files --recurse-submodules ":(icase)*.txt" ":(exclude)submodule/subsub/*" >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success '--recurse-submodules and pathspecs' '
+ cat >expect <<-\EOF &&
+ sub/file
+ EOF
+
+ git ls-files --recurse-submodules "sub" >actual &&
+ test_cmp expect actual &&
+ git ls-files --recurse-submodules "sub/" >actual &&
+ test_cmp expect actual &&
+ git ls-files --recurse-submodules "sub/file" >actual &&
+ test_cmp expect actual &&
+ git ls-files --recurse-submodules "su*/file" >actual &&
+ test_cmp expect actual &&
+ git ls-files --recurse-submodules "su?/file" >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success '--recurse-submodules and pathspecs' '
+ cat >expect <<-\EOF &&
+ sib/file
+ sub/file
+ EOF
+
+ git ls-files --recurse-submodules "s??/file" >actual &&
+ test_cmp expect actual &&
+ git ls-files --recurse-submodules "s???file" >actual &&
+ test_cmp expect actual &&
+ git ls-files --recurse-submodules "s*file" >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success '--recurse-submodules does not support --error-unmatch' '
+ test_must_fail git ls-files --recurse-submodules --error-unmatch 2>actual &&
+ test_i18ngrep "does not support --error-unmatch" actual
+'
+
+test_incompatible_with_recurse_submodules () {
+ test_expect_success "--recurse-submodules and $1 are incompatible" "
+ test_must_fail git ls-files --recurse-submodules $1 2>actual &&
+ test_i18ngrep 'unsupported mode' actual
+ "
+}
+
+test_incompatible_with_recurse_submodules --deleted
+test_incompatible_with_recurse_submodules --modified
+test_incompatible_with_recurse_submodules --others
+test_incompatible_with_recurse_submodules --stage
+test_incompatible_with_recurse_submodules --killed
+test_incompatible_with_recurse_submodules --unmerged
+
+test_done
test -d .git/NOTES_MERGE_WORKTREE &&
test_must_fail git notes merge z >output 2>&1 &&
# Output should indicate what is wrong
- grep -q "\\.git/NOTES_MERGE_\\* exists" output
+ test_i18ngrep -q "\\.git/NOTES_MERGE_\\* exists" output
'
# Setup non-conflicting merge between x and new notes ref w
cd worktree &&
git config core.notesRef refs/notes/y &&
test_must_fail git notes merge z 2>err &&
- test_i18ngrep "A notes merge into refs/notes/y is already in-progress at" err
+ test_i18ngrep "a notes merge into refs/notes/y is already in-progress at" err
) &&
test_path_is_missing .git/worktrees/worktree/NOTES_MERGE_REF
'
echo content >extra_file &&
git add extra_file &&
test_must_fail git revert HEAD 2>errors &&
- test_i18ngrep "Your local changes would be overwritten by " errors
+ test_i18ngrep "your local changes would be overwritten by " errors
'
test_i18ncmp expect actual
'
+test_expect_success 'rm empty string should invoke warning' '
+ git rm -rf "" 2>output &&
+ test_i18ngrep "warning: empty strings" output
+'
+
test_done
test_i18ncmp expect.err actual.err
'
+test_expect_success 'git add empty string should invoke warning' '
+ git add "" 2>output &&
+ test_i18ngrep "warning: empty strings" output
+'
+
test_expect_success 'git add --chmod=[+-]x stages correctly' '
rm -f foo1 &&
echo foo >foo1 &&
printf "Commit message\n\nInvalid surrogate:\355\240\200\n" \
>"$HOME/invalid" &&
git commit -a -F "$HOME/invalid" 2>"$HOME"/stderr &&
- grep "did not conform" "$HOME"/stderr
+ test_i18ngrep "did not conform" "$HOME"/stderr
'
test_expect_success 'UTF-8 overlong sequences rejected' '
printf "\340\202\251ommit message\n\nThis is not a space:\300\240\n" \
>"$HOME/invalid" &&
git commit -a -F "$HOME/invalid" 2>"$HOME"/stderr &&
- grep "did not conform" "$HOME"/stderr
+ test_i18ngrep "did not conform" "$HOME"/stderr
'
test_expect_success 'UTF-8 non-characters refused' '
printf "Commit message\n\nNon-character:\364\217\277\276\n" \
>"$HOME/invalid" &&
git commit -a -F "$HOME/invalid" 2>"$HOME"/stderr &&
- grep "did not conform" "$HOME"/stderr
+ test_i18ngrep "did not conform" "$HOME"/stderr
'
test_expect_success 'UTF-8 non-characters refused' '
printf "Commit message\n\nNon-character:\357\267\220\n" \
>"$HOME/invalid" &&
git commit -a -F "$HOME/invalid" 2>"$HOME"/stderr &&
- grep "did not conform" "$HOME"/stderr
+ test_i18ngrep "did not conform" "$HOME"/stderr
'
for H in ISO8859-1 eucJP ISO-2022-JP
# commit-tree will warn that the commit message does not contain valid UTF-8
# as mailinfo did not convert it
- grep "did not conform" err &&
+ test_i18ngrep "did not conform" err &&
check_encoding 2
'
test 1 = $(git show HEAD:file)
'
+test_expect_success 'drop middle stash by index' '
+ git reset --hard &&
+ echo 8 >file &&
+ git stash &&
+ echo 9 >file &&
+ git stash &&
+ git stash drop 1 &&
+ test 2 = $(git stash list | wc -l) &&
+ git stash apply &&
+ test 9 = $(cat file) &&
+ test 1 = $(git show :file) &&
+ test 1 = $(git show HEAD:file) &&
+ git reset --hard &&
+ git stash drop &&
+ git stash apply &&
+ test 3 = $(cat file) &&
+ test 1 = $(git show :file) &&
+ test 1 = $(git show HEAD:file)
+'
+
test_expect_success 'stash pop' '
git reset --hard &&
git stash pop &&
git stash drop
'
+test_expect_success 'invalid ref of the form "n", n >= N' '
+ git stash clear &&
+ test_must_fail git stash drop 0 &&
+ echo bar5 >file &&
+ echo bar6 >file2 &&
+ git add file2 &&
+ git stash &&
+ test_must_fail git stash drop 1 &&
+ test_must_fail git stash pop 1 &&
+ test_must_fail git stash apply 1 &&
+ test_must_fail git stash show 1 &&
+ test_must_fail git stash branch tmp 1 &&
+ git stash drop
+'
+
test_expect_success 'stash branch should not drop the stash if the branch exists' '
git stash clear &&
echo foo >file &&
sed -e "s/-CIT/xCIT/" <output >broken &&
test_must_fail git apply --stat --summary broken 2>detected &&
detected=$(cat detected) &&
- detected=$(expr "$detected" : "fatal.*at line \\([0-9]*\\)\$") &&
+ detected=$(expr "$detected" : "error.*at line \\([0-9]*\\)\$") &&
detected=$(sed -ne "${detected}p" broken) &&
test "$detected" = xCIT
'
git diff --binary | sed -e "s/-CIT/xCIT/" >broken &&
test_must_fail git apply --stat --summary broken 2>detected &&
detected=$(cat detected) &&
- detected=$(expr "$detected" : "fatal.*at line \\([0-9]*\\)\$") &&
+ detected=$(expr "$detected" : "error.*at line \\([0-9]*\\)\$") &&
detected=$(sed -ne "${detected}p" broken) &&
test "$detected" = xCIT
'
diff --no-index --name-status -- dir2 dir
diff --no-index dir dir3
diff master master^ side
+# Can't use spaces...
+diff --line-prefix=abc master master^ side
diff --dirstat master~1 master~2
diff --dirstat initial rearrange
diff --dirstat-by-file initial rearrange
git diff --cached -- file0 >result &&
test_cmp "$TEST_DIRECTORY/t4013/diff.diff_--cached_--_file0" result
'
+test_expect_success 'diff --line-prefix with spaces' '
+ git diff --line-prefix="| | | " --cached -- file0 >result &&
+ test_cmp "$TEST_DIRECTORY/t4013/diff.diff_--line-prefix_--cached_--_file0" result
+'
test_expect_success 'diff-tree --stdin with log formatting' '
cat >expect <<-\EOF &&
--- /dev/null
+$ git diff --line-prefix=abc master master^ side
+abcdiff --cc dir/sub
+abcindex cead32e,7289e35..992913c
+abc--- a/dir/sub
+abc+++ b/dir/sub
+abc@@@ -1,6 -1,4 +1,8 @@@
+abc A
+abc B
+abc +C
+abc +D
+abc +E
+abc +F
+abc+ 1
+abc+ 2
+abcdiff --cc file0
+abcindex b414108,f4615da..10a8a9f
+abc--- a/file0
+abc+++ b/file0
+abc@@@ -1,6 -1,6 +1,9 @@@
+abc 1
+abc 2
+abc 3
+abc +4
+abc +5
+abc +6
+abc+ A
+abc+ B
+abc+ C
+$
--- /dev/null
+| | | diff --git a/file0 b/file0
+| | | new file mode 100644
+| | | index 0000000..10a8a9f
+| | | --- /dev/null
+| | | +++ b/file0
+| | | @@ -0,0 +1,9 @@
+| | | +1
+| | | +2
+| | | +3
+| | | +4
+| | | +5
+| | | +6
+| | | +A
+| | | +B
+| | | +C
test_cmp expect actual
'
+test_expect_success '--rfc' '
+ cat >expect <<-\EOF &&
+ Subject: [RFC PATCH 1/1] header with . in it
+ EOF
+ git format-patch -n -1 --stdout --rfc >patch &&
+ grep ^Subject: patch >actual &&
+ test_cmp expect actual
+'
+
test_expect_success '--from=ident notices bogus ident' '
test_must_fail git format-patch -1 --stdout --from=foo >patch
'
test_cmp expected current
'
-test_expect_success 'the same with --ws-error-highlight' '
+test_expect_success 'ws-error-highlight test setup' '
+
git reset --hard &&
{
echo "0. blank-at-eol " &&
echo "2. and a new line "
} >x &&
- git -c color.diff=always diff --ws-error-highlight=default,old |
- test_decode_color >current &&
-
- cat >expected <<-\EOF &&
+ cat >expect.default-old <<-\EOF &&
<BOLD>diff --git a/x b/x<RESET>
<BOLD>index d0233a2..700886e 100644<RESET>
<BOLD>--- a/x<RESET>
<GREEN>+<RESET><GREEN>2. and a new line<RESET><BLUE> <RESET>
EOF
- test_cmp expected current &&
-
- git -c color.diff=always diff --ws-error-highlight=all |
- test_decode_color >current &&
-
- cat >expected <<-\EOF &&
+ cat >expect.all <<-\EOF &&
<BOLD>diff --git a/x b/x<RESET>
<BOLD>index d0233a2..700886e 100644<RESET>
<BOLD>--- a/x<RESET>
<GREEN>+<RESET><GREEN>2. and a new line<RESET><BLUE> <RESET>
EOF
- test_cmp expected current &&
-
- git -c color.diff=always diff --ws-error-highlight=none |
- test_decode_color >current &&
-
- cat >expected <<-\EOF &&
+ cat >expect.none <<-\EOF
<BOLD>diff --git a/x b/x<RESET>
<BOLD>index d0233a2..700886e 100644<RESET>
<BOLD>--- a/x<RESET>
<GREEN>+2. and a new line <RESET>
EOF
- test_cmp expected current
+'
+
+test_expect_success 'test --ws-error-highlight option' '
+
+ git -c color.diff=always diff --ws-error-highlight=default,old |
+ test_decode_color >current &&
+ test_cmp expect.default-old current &&
+
+ git -c color.diff=always diff --ws-error-highlight=all |
+ test_decode_color >current &&
+ test_cmp expect.all current &&
+
+ git -c color.diff=always diff --ws-error-highlight=none |
+ test_decode_color >current &&
+ test_cmp expect.none current
+
+'
+
+test_expect_success 'test diff.wsErrorHighlight config' '
+
+ git -c color.diff=always -c diff.wsErrorHighlight=default,old diff |
+ test_decode_color >current &&
+ test_cmp expect.default-old current &&
+
+ git -c color.diff=always -c diff.wsErrorHighlight=all diff |
+ test_decode_color >current &&
+ test_cmp expect.all current &&
+
+ git -c color.diff=always -c diff.wsErrorHighlight=none diff |
+ test_decode_color >current &&
+ test_cmp expect.none current
+
+'
+
+test_expect_success 'option overrides diff.wsErrorHighlight' '
+
+ git -c color.diff=always -c diff.wsErrorHighlight=none \
+ diff --ws-error-highlight=default,old |
+ test_decode_color >current &&
+ test_cmp expect.default-old current &&
+
+ git -c color.diff=always -c diff.wsErrorHighlight=default \
+ diff --ws-error-highlight=all |
+ test_decode_color >current &&
+ test_cmp expect.all current &&
+
+ git -c color.diff=always -c diff.wsErrorHighlight=all \
+ diff --ws-error-highlight=none |
+ test_decode_color >current &&
+ test_cmp expect.none current
+
'
test_done
test_num_no_numbered $1 2
}
+test_single_cover_letter_numbered() {
+ grep "^Subject: \[PATCH 0/1\]" $1 &&
+ grep "^Subject: \[PATCH 1/1\]" $1
+}
+
test_single_numbered() {
grep "^Subject: \[PATCH 1/1\]" $1
}
grep "^Subject: \[PATCH 3/3\]" patch8
'
+test_expect_success 'single patch with cover-letter defaults to numbers' '
+ git format-patch --cover-letter --stdout HEAD~1 >patch9.single &&
+ test_single_cover_letter_numbered patch9.single
+'
+
+test_expect_success 'Use --no-numbered and --cover-letter single patch' '
+ git format-patch --no-numbered --stdout --cover-letter HEAD~1 >patch10 &&
+ test_no_numbered patch10
+'
+
test_done
)
'
+test_expect_success 'diff from repo subdir shows real paths (explicit)' '
+ echo "diff --git a/../../non/git/a b/../../non/git/b" >expect &&
+ test_expect_code 1 \
+ git -C repo/sub \
+ diff --no-index ../../non/git/a ../../non/git/b >actual &&
+ head -n 1 <actual >actual.head &&
+ test_cmp expect actual.head
+'
+
+test_expect_success 'diff from repo subdir shows real paths (implicit)' '
+ echo "diff --git a/../../non/git/a b/../../non/git/b" >expect &&
+ test_expect_code 1 \
+ git -C repo/sub \
+ diff ../../non/git/a ../../non/git/b >actual &&
+ head -n 1 <actual >actual.head &&
+ test_cmp expect actual.head
+'
+
+test_expect_success 'diff --no-index from repo subdir respects config (explicit)' '
+ echo "diff --git ../../non/git/a ../../non/git/b" >expect &&
+ test_config -C repo diff.noprefix true &&
+ test_expect_code 1 \
+ git -C repo/sub \
+ diff --no-index ../../non/git/a ../../non/git/b >actual &&
+ head -n 1 <actual >actual.head &&
+ test_cmp expect actual.head
+'
+
+test_expect_success 'diff --no-index from repo subdir respects config (implicit)' '
+ echo "diff --git ../../non/git/a ../../non/git/b" >expect &&
+ test_config -C repo diff.noprefix true &&
+ test_expect_code 1 \
+ git -C repo/sub \
+ diff ../../non/git/a ../../non/git/b >actual &&
+ head -n 1 <actual >actual.head &&
+ test_cmp expect actual.head
+'
+
test_done
--- /dev/null
+#!/bin/sh
+#
+# Copyright (c) 2016 Jacob Keller, based on t4041 by Jens Lehmann
+#
+
+test_description='Test for submodule diff on non-checked out submodule
+
+This test tries to verify that add_submodule_odb works when the submodule was
+initialized previously but the checkout has since been removed.
+'
+
+. ./test-lib.sh
+
+# Tested non-UTF-8 encoding
+test_encoding="ISO8859-1"
+
+# String "added" in German (translated with Google Translate), encoded in UTF-8,
+# used in sample commit log messages in add_file() function below.
+added=$(printf "hinzugef\303\274gt")
+
+add_file () {
+ (
+ cd "$1" &&
+ shift &&
+ for name
+ do
+ echo "$name" >"$name" &&
+ git add "$name" &&
+ test_tick &&
+ # "git commit -m" would break MinGW, as Windows refuse to pass
+ # $test_encoding encoded parameter to git.
+ echo "Add $name ($added $name)" | iconv -f utf-8 -t $test_encoding |
+ git -c "i18n.commitEncoding=$test_encoding" commit -F -
+ done >/dev/null &&
+ git rev-parse --short --verify HEAD
+ )
+}
+
+commit_file () {
+ test_tick &&
+ git commit "$@" -m "Commit $*" >/dev/null
+}
+
+test_expect_success 'setup - submodules' '
+ test_create_repo sm2 &&
+ add_file . foo &&
+ add_file sm2 foo1 foo2 &&
+ smhead1=$(git -C sm2 rev-parse --short --verify HEAD)
+'
+
+test_expect_success 'setup - git submodule add' '
+ git submodule add ./sm2 sm1 &&
+ commit_file sm1 .gitmodules &&
+ git diff-tree -p --no-commit-id --submodule=log HEAD -- sm1 >actual &&
+ cat >expected <<-EOF &&
+ Submodule sm1 0000000...$smhead1 (new submodule)
+ EOF
+ test_cmp expected actual
+'
+
+test_expect_success 'submodule directory removed' '
+ rm -rf sm1 &&
+ git diff-tree -p --no-commit-id --submodule=log HEAD -- sm1 >actual &&
+ cat >expected <<-EOF &&
+ Submodule sm1 0000000...$smhead1 (new submodule)
+ EOF
+ test_cmp expected actual
+'
+
+test_expect_success 'setup - submodule multiple commits' '
+ git submodule update --checkout sm1 &&
+ smhead2=$(add_file sm1 foo3 foo4) &&
+ commit_file sm1 &&
+ git diff-tree -p --no-commit-id --submodule=log HEAD >actual &&
+ cat >expected <<-EOF &&
+ Submodule sm1 $smhead1..$smhead2:
+ > Add foo4 ($added foo4)
+ > Add foo3 ($added foo3)
+ EOF
+ test_cmp expected actual
+'
+
+test_expect_success 'submodule removed multiple commits' '
+ rm -rf sm1 &&
+ git diff-tree -p --no-commit-id --submodule=log HEAD >actual &&
+ cat >expected <<-EOF &&
+ Submodule sm1 $smhead1..$smhead2:
+ > Add foo4 ($added foo4)
+ > Add foo3 ($added foo3)
+ EOF
+ test_cmp expected actual
+'
+
+test_expect_success 'submodule not initialized in new clone' '
+ git clone . sm3 &&
+ git -C sm3 diff-tree -p --no-commit-id --submodule=log HEAD >actual &&
+ cat >expected <<-EOF &&
+ Submodule sm1 $smhead1...$smhead2 (not initialized)
+ EOF
+ test_cmp expected actual
+'
+
+test_expect_success 'setup submodule moved' '
+ git submodule update --checkout sm1 &&
+ git mv sm1 sm4 &&
+ commit_file sm4 &&
+ git diff-tree -p --no-commit-id --submodule=log HEAD >actual &&
+ cat >expected <<-EOF &&
+ Submodule sm4 0000000...$smhead2 (new submodule)
+ EOF
+ test_cmp expected actual
+'
+
+test_expect_success 'submodule moved then removed' '
+ smhead3=$(add_file sm4 foo6 foo7) &&
+ commit_file sm4 &&
+ rm -rf sm4 &&
+ git diff-tree -p --no-commit-id --submodule=log HEAD >actual &&
+ cat >expected <<-EOF &&
+ Submodule sm4 $smhead2..$smhead3:
+ > Add foo7 ($added foo7)
+ > Add foo6 ($added foo6)
+ EOF
+ test_cmp expected actual
+'
+
+test_done
--- /dev/null
+#!/bin/sh
+#
+# Copyright (c) 2009 Jens Lehmann, based on t7401 by Ping Yin
+# Copyright (c) 2011 Alexey Shumkin (+ non-UTF-8 commit encoding tests)
+# Copyright (c) 2016 Jacob Keller (copy + convert to --submodule=diff)
+#
+
+test_description='Support for diff format verbose submodule difference in git diff
+
+This test tries to verify the sanity of the --submodule=diff option of git diff.
+'
+
+. ./test-lib.sh
+
+# Tested non-UTF-8 encoding
+test_encoding="ISO8859-1"
+
+# String "added" in German (translated with Google Translate), encoded in UTF-8,
+# used in sample commit log messages in add_file() function below.
+added=$(printf "hinzugef\303\274gt")
+
+add_file () {
+ (
+ cd "$1" &&
+ shift &&
+ for name
+ do
+ echo "$name" >"$name" &&
+ git add "$name" &&
+ test_tick &&
+ # "git commit -m" would break MinGW, as Windows refuse to pass
+ # $test_encoding encoded parameter to git.
+ echo "Add $name ($added $name)" | iconv -f utf-8 -t $test_encoding |
+ git -c "i18n.commitEncoding=$test_encoding" commit -F -
+ done >/dev/null &&
+ git rev-parse --short --verify HEAD
+ )
+}
+
+commit_file () {
+ test_tick &&
+ git commit "$@" -m "Commit $*" >/dev/null
+}
+
+test_expect_success 'setup repository' '
+ test_create_repo sm1 &&
+ add_file . foo &&
+ head1=$(add_file sm1 foo1 foo2) &&
+ fullhead1=$(git -C sm1 rev-parse --verify HEAD)
+'
+
+test_expect_success 'added submodule' '
+ git add sm1 &&
+ git diff-index -p --submodule=diff HEAD >actual &&
+ cat >expected <<-EOF &&
+ Submodule sm1 0000000...$head1 (new submodule)
+ diff --git a/sm1/foo1 b/sm1/foo1
+ new file mode 100644
+ index 0000000..1715acd
+ --- /dev/null
+ +++ b/sm1/foo1
+ @@ -0,0 +1 @@
+ +foo1
+ diff --git a/sm1/foo2 b/sm1/foo2
+ new file mode 100644
+ index 0000000..54b060e
+ --- /dev/null
+ +++ b/sm1/foo2
+ @@ -0,0 +1 @@
+ +foo2
+ EOF
+ test_cmp expected actual
+'
+
+test_expect_success 'added submodule, set diff.submodule' '
+ test_config diff.submodule log &&
+ git add sm1 &&
+ git diff-index -p --submodule=diff HEAD >actual &&
+ cat >expected <<-EOF &&
+ Submodule sm1 0000000...$head1 (new submodule)
+ diff --git a/sm1/foo1 b/sm1/foo1
+ new file mode 100644
+ index 0000000..1715acd
+ --- /dev/null
+ +++ b/sm1/foo1
+ @@ -0,0 +1 @@
+ +foo1
+ diff --git a/sm1/foo2 b/sm1/foo2
+ new file mode 100644
+ index 0000000..54b060e
+ --- /dev/null
+ +++ b/sm1/foo2
+ @@ -0,0 +1 @@
+ +foo2
+ EOF
+ test_cmp expected actual
+'
+
+test_expect_success '--submodule=short overrides diff.submodule' '
+ test_config diff.submodule log &&
+ git add sm1 &&
+ git diff --submodule=short --cached >actual &&
+ cat >expected <<-EOF &&
+ diff --git a/sm1 b/sm1
+ new file mode 160000
+ index 0000000..$head1
+ --- /dev/null
+ +++ b/sm1
+ @@ -0,0 +1 @@
+ +Subproject commit $fullhead1
+ EOF
+ test_cmp expected actual
+'
+
+test_expect_success 'diff.submodule does not affect plumbing' '
+ test_config diff.submodule log &&
+ git diff-index -p HEAD >actual &&
+ cat >expected <<-EOF &&
+ diff --git a/sm1 b/sm1
+ new file mode 160000
+ index 0000000..$head1
+ --- /dev/null
+ +++ b/sm1
+ @@ -0,0 +1 @@
+ +Subproject commit $fullhead1
+ EOF
+ test_cmp expected actual
+'
+
+commit_file sm1 &&
+head2=$(add_file sm1 foo3)
+
+test_expect_success 'modified submodule(forward)' '
+ git diff-index -p --submodule=diff HEAD >actual &&
+ cat >expected <<-EOF &&
+ Submodule sm1 $head1..$head2:
+ diff --git a/sm1/foo3 b/sm1/foo3
+ new file mode 100644
+ index 0000000..c1ec6c6
+ --- /dev/null
+ +++ b/sm1/foo3
+ @@ -0,0 +1 @@
+ +foo3
+ EOF
+ test_cmp expected actual
+'
+
+test_expect_success 'modified submodule(forward)' '
+ git diff --submodule=diff >actual &&
+ cat >expected <<-EOF &&
+ Submodule sm1 $head1..$head2:
+ diff --git a/sm1/foo3 b/sm1/foo3
+ new file mode 100644
+ index 0000000..c1ec6c6
+ --- /dev/null
+ +++ b/sm1/foo3
+ @@ -0,0 +1 @@
+ +foo3
+ EOF
+ test_cmp expected actual
+'
+
+test_expect_success 'modified submodule(forward) --submodule' '
+ git diff --submodule >actual &&
+ cat >expected <<-EOF &&
+ Submodule sm1 $head1..$head2:
+ > Add foo3 ($added foo3)
+ EOF
+ test_cmp expected actual
+'
+
+fullhead2=$(cd sm1; git rev-parse --verify HEAD)
+test_expect_success 'modified submodule(forward) --submodule=short' '
+ git diff --submodule=short >actual &&
+ cat >expected <<-EOF &&
+ diff --git a/sm1 b/sm1
+ index $head1..$head2 160000
+ --- a/sm1
+ +++ b/sm1
+ @@ -1 +1 @@
+ -Subproject commit $fullhead1
+ +Subproject commit $fullhead2
+ EOF
+ test_cmp expected actual
+'
+
+commit_file sm1 &&
+head3=$(
+ cd sm1 &&
+ git reset --hard HEAD~2 >/dev/null &&
+ git rev-parse --short --verify HEAD
+)
+
+test_expect_success 'modified submodule(backward)' '
+ git diff-index -p --submodule=diff HEAD >actual &&
+ cat >expected <<-EOF &&
+ Submodule sm1 $head2..$head3 (rewind):
+ diff --git a/sm1/foo2 b/sm1/foo2
+ deleted file mode 100644
+ index 54b060e..0000000
+ --- a/sm1/foo2
+ +++ /dev/null
+ @@ -1 +0,0 @@
+ -foo2
+ diff --git a/sm1/foo3 b/sm1/foo3
+ deleted file mode 100644
+ index c1ec6c6..0000000
+ --- a/sm1/foo3
+ +++ /dev/null
+ @@ -1 +0,0 @@
+ -foo3
+ EOF
+ test_cmp expected actual
+'
+
+head4=$(add_file sm1 foo4 foo5)
+test_expect_success 'modified submodule(backward and forward)' '
+ git diff-index -p --submodule=diff HEAD >actual &&
+ cat >expected <<-EOF &&
+ Submodule sm1 $head2...$head4:
+ diff --git a/sm1/foo2 b/sm1/foo2
+ deleted file mode 100644
+ index 54b060e..0000000
+ --- a/sm1/foo2
+ +++ /dev/null
+ @@ -1 +0,0 @@
+ -foo2
+ diff --git a/sm1/foo3 b/sm1/foo3
+ deleted file mode 100644
+ index c1ec6c6..0000000
+ --- a/sm1/foo3
+ +++ /dev/null
+ @@ -1 +0,0 @@
+ -foo3
+ diff --git a/sm1/foo4 b/sm1/foo4
+ new file mode 100644
+ index 0000000..a0016db
+ --- /dev/null
+ +++ b/sm1/foo4
+ @@ -0,0 +1 @@
+ +foo4
+ diff --git a/sm1/foo5 b/sm1/foo5
+ new file mode 100644
+ index 0000000..d6f2413
+ --- /dev/null
+ +++ b/sm1/foo5
+ @@ -0,0 +1 @@
+ +foo5
+ EOF
+ test_cmp expected actual
+'
+
+commit_file sm1 &&
+mv sm1 sm1-bak &&
+echo sm1 >sm1 &&
+head5=$(git hash-object sm1 | cut -c1-7) &&
+git add sm1 &&
+rm -f sm1 &&
+mv sm1-bak sm1
+
+test_expect_success 'typechanged submodule(submodule->blob), --cached' '
+ git diff --submodule=diff --cached >actual &&
+ cat >expected <<-EOF &&
+ Submodule sm1 $head4...0000000 (submodule deleted)
+ diff --git a/sm1/foo1 b/sm1/foo1
+ deleted file mode 100644
+ index 1715acd..0000000
+ --- a/sm1/foo1
+ +++ /dev/null
+ @@ -1 +0,0 @@
+ -foo1
+ diff --git a/sm1/foo4 b/sm1/foo4
+ deleted file mode 100644
+ index a0016db..0000000
+ --- a/sm1/foo4
+ +++ /dev/null
+ @@ -1 +0,0 @@
+ -foo4
+ diff --git a/sm1/foo5 b/sm1/foo5
+ deleted file mode 100644
+ index d6f2413..0000000
+ --- a/sm1/foo5
+ +++ /dev/null
+ @@ -1 +0,0 @@
+ -foo5
+ diff --git a/sm1 b/sm1
+ new file mode 100644
+ index 0000000..9da5fb8
+ --- /dev/null
+ +++ b/sm1
+ @@ -0,0 +1 @@
+ +sm1
+ EOF
+ test_cmp expected actual
+'
+
+test_expect_success 'typechanged submodule(submodule->blob)' '
+ git diff --submodule=diff >actual &&
+ cat >expected <<-EOF &&
+ diff --git a/sm1 b/sm1
+ deleted file mode 100644
+ index 9da5fb8..0000000
+ --- a/sm1
+ +++ /dev/null
+ @@ -1 +0,0 @@
+ -sm1
+ Submodule sm1 0000000...$head4 (new submodule)
+ diff --git a/sm1/foo1 b/sm1/foo1
+ new file mode 100644
+ index 0000000..1715acd
+ --- /dev/null
+ +++ b/sm1/foo1
+ @@ -0,0 +1 @@
+ +foo1
+ diff --git a/sm1/foo4 b/sm1/foo4
+ new file mode 100644
+ index 0000000..a0016db
+ --- /dev/null
+ +++ b/sm1/foo4
+ @@ -0,0 +1 @@
+ +foo4
+ diff --git a/sm1/foo5 b/sm1/foo5
+ new file mode 100644
+ index 0000000..d6f2413
+ --- /dev/null
+ +++ b/sm1/foo5
+ @@ -0,0 +1 @@
+ +foo5
+ EOF
+ test_cmp expected actual
+'
+
+rm -rf sm1 &&
+git checkout-index sm1
+test_expect_success 'typechanged submodule(submodule->blob)' '
+ git diff-index -p --submodule=diff HEAD >actual &&
+ cat >expected <<-EOF &&
+ Submodule sm1 $head4...0000000 (submodule deleted)
+ diff --git a/sm1 b/sm1
+ new file mode 100644
+ index 0000000..9da5fb8
+ --- /dev/null
+ +++ b/sm1
+ @@ -0,0 +1 @@
+ +sm1
+ EOF
+ test_cmp expected actual
+'
+
+rm -f sm1 &&
+test_create_repo sm1 &&
+head6=$(add_file sm1 foo6 foo7)
+fullhead6=$(cd sm1; git rev-parse --verify HEAD)
+test_expect_success 'nonexistent commit' '
+ git diff-index -p --submodule=diff HEAD >actual &&
+ cat >expected <<-EOF &&
+ Submodule sm1 $head4...$head6 (commits not present)
+ EOF
+ test_cmp expected actual
+'
+
+commit_file
+test_expect_success 'typechanged submodule(blob->submodule)' '
+ git diff-index -p --submodule=diff HEAD >actual &&
+ cat >expected <<-EOF &&
+ diff --git a/sm1 b/sm1
+ deleted file mode 100644
+ index 9da5fb8..0000000
+ --- a/sm1
+ +++ /dev/null
+ @@ -1 +0,0 @@
+ -sm1
+ Submodule sm1 0000000...$head6 (new submodule)
+ diff --git a/sm1/foo6 b/sm1/foo6
+ new file mode 100644
+ index 0000000..462398b
+ --- /dev/null
+ +++ b/sm1/foo6
+ @@ -0,0 +1 @@
+ +foo6
+ diff --git a/sm1/foo7 b/sm1/foo7
+ new file mode 100644
+ index 0000000..6e9262c
+ --- /dev/null
+ +++ b/sm1/foo7
+ @@ -0,0 +1 @@
+ +foo7
+ EOF
+ test_cmp expected actual
+'
+
+commit_file sm1 &&
+test_expect_success 'submodule is up to date' '
+ git diff-index -p --submodule=diff HEAD >actual &&
+ cat >expected <<-EOF &&
+ EOF
+ test_cmp expected actual
+'
+
+test_expect_success 'submodule contains untracked content' '
+ echo new > sm1/new-file &&
+ git diff-index -p --submodule=diff HEAD >actual &&
+ cat >expected <<-EOF &&
+ Submodule sm1 contains untracked content
+ EOF
+ test_cmp expected actual
+'
+
+test_expect_success 'submodule contains untracked content (untracked ignored)' '
+ git diff-index -p --ignore-submodules=untracked --submodule=diff HEAD >actual &&
+ ! test -s actual
+'
+
+test_expect_success 'submodule contains untracked content (dirty ignored)' '
+ git diff-index -p --ignore-submodules=dirty --submodule=diff HEAD >actual &&
+ ! test -s actual
+'
+
+test_expect_success 'submodule contains untracked content (all ignored)' '
+ git diff-index -p --ignore-submodules=all --submodule=diff HEAD >actual &&
+ ! test -s actual
+'
+
+test_expect_success 'submodule contains untracked and modified content' '
+ echo new > sm1/foo6 &&
+ git diff-index -p --submodule=diff HEAD >actual &&
+ cat >expected <<-EOF &&
+ Submodule sm1 contains untracked content
+ Submodule sm1 contains modified content
+ diff --git a/sm1/foo6 b/sm1/foo6
+ index 462398b..3e75765 100644
+ --- a/sm1/foo6
+ +++ b/sm1/foo6
+ @@ -1 +1 @@
+ -foo6
+ +new
+ EOF
+ test_cmp expected actual
+'
+
+# NOT OK
+test_expect_success 'submodule contains untracked and modified content (untracked ignored)' '
+ echo new > sm1/foo6 &&
+ git diff-index -p --ignore-submodules=untracked --submodule=diff HEAD >actual &&
+ cat >expected <<-EOF &&
+ Submodule sm1 contains modified content
+ diff --git a/sm1/foo6 b/sm1/foo6
+ index 462398b..3e75765 100644
+ --- a/sm1/foo6
+ +++ b/sm1/foo6
+ @@ -1 +1 @@
+ -foo6
+ +new
+ EOF
+ test_cmp expected actual
+'
+
+test_expect_success 'submodule contains untracked and modified content (dirty ignored)' '
+ echo new > sm1/foo6 &&
+ git diff-index -p --ignore-submodules=dirty --submodule=diff HEAD >actual &&
+ ! test -s actual
+'
+
+test_expect_success 'submodule contains untracked and modified content (all ignored)' '
+ echo new > sm1/foo6 &&
+ git diff-index -p --ignore-submodules --submodule=diff HEAD >actual &&
+ ! test -s actual
+'
+
+test_expect_success 'submodule contains modified content' '
+ rm -f sm1/new-file &&
+ git diff-index -p --submodule=diff HEAD >actual &&
+ cat >expected <<-EOF &&
+ Submodule sm1 contains modified content
+ diff --git a/sm1/foo6 b/sm1/foo6
+ index 462398b..3e75765 100644
+ --- a/sm1/foo6
+ +++ b/sm1/foo6
+ @@ -1 +1 @@
+ -foo6
+ +new
+ EOF
+ test_cmp expected actual
+'
+
+(cd sm1; git commit -mchange foo6 >/dev/null) &&
+head8=$(cd sm1; git rev-parse --short --verify HEAD) &&
+test_expect_success 'submodule is modified' '
+ git diff-index -p --submodule=diff HEAD >actual &&
+ cat >expected <<-EOF &&
+ Submodule sm1 17243c9..$head8:
+ diff --git a/sm1/foo6 b/sm1/foo6
+ index 462398b..3e75765 100644
+ --- a/sm1/foo6
+ +++ b/sm1/foo6
+ @@ -1 +1 @@
+ -foo6
+ +new
+ EOF
+ test_cmp expected actual
+'
+
+test_expect_success 'modified submodule contains untracked content' '
+ echo new > sm1/new-file &&
+ git diff-index -p --submodule=diff HEAD >actual &&
+ cat >expected <<-EOF &&
+ Submodule sm1 contains untracked content
+ Submodule sm1 17243c9..$head8:
+ diff --git a/sm1/foo6 b/sm1/foo6
+ index 462398b..3e75765 100644
+ --- a/sm1/foo6
+ +++ b/sm1/foo6
+ @@ -1 +1 @@
+ -foo6
+ +new
+ EOF
+ test_cmp expected actual
+'
+
+test_expect_success 'modified submodule contains untracked content (untracked ignored)' '
+ git diff-index -p --ignore-submodules=untracked --submodule=diff HEAD >actual &&
+ cat >expected <<-EOF &&
+ Submodule sm1 17243c9..$head8:
+ diff --git a/sm1/foo6 b/sm1/foo6
+ index 462398b..3e75765 100644
+ --- a/sm1/foo6
+ +++ b/sm1/foo6
+ @@ -1 +1 @@
+ -foo6
+ +new
+ EOF
+ test_cmp expected actual
+'
+
+test_expect_success 'modified submodule contains untracked content (dirty ignored)' '
+ git diff-index -p --ignore-submodules=dirty --submodule=diff HEAD >actual &&
+ cat >expected <<-EOF &&
+ Submodule sm1 17243c9..cfce562:
+ diff --git a/sm1/foo6 b/sm1/foo6
+ index 462398b..3e75765 100644
+ --- a/sm1/foo6
+ +++ b/sm1/foo6
+ @@ -1 +1 @@
+ -foo6
+ +new
+ EOF
+ test_cmp expected actual
+'
+
+test_expect_success 'modified submodule contains untracked content (all ignored)' '
+ git diff-index -p --ignore-submodules=all --submodule=diff HEAD >actual &&
+ ! test -s actual
+'
+
+test_expect_success 'modified submodule contains untracked and modified content' '
+ echo modification >> sm1/foo6 &&
+ git diff-index -p --submodule=diff HEAD >actual &&
+ cat >expected <<-EOF &&
+ Submodule sm1 contains untracked content
+ Submodule sm1 contains modified content
+ Submodule sm1 17243c9..cfce562:
+ diff --git a/sm1/foo6 b/sm1/foo6
+ index 462398b..dfda541 100644
+ --- a/sm1/foo6
+ +++ b/sm1/foo6
+ @@ -1 +1,2 @@
+ -foo6
+ +new
+ +modification
+ EOF
+ test_cmp expected actual
+'
+
+test_expect_success 'modified submodule contains untracked and modified content (untracked ignored)' '
+ echo modification >> sm1/foo6 &&
+ git diff-index -p --ignore-submodules=untracked --submodule=diff HEAD >actual &&
+ cat >expected <<-EOF &&
+ Submodule sm1 contains modified content
+ Submodule sm1 17243c9..cfce562:
+ diff --git a/sm1/foo6 b/sm1/foo6
+ index 462398b..e20e2d9 100644
+ --- a/sm1/foo6
+ +++ b/sm1/foo6
+ @@ -1 +1,3 @@
+ -foo6
+ +new
+ +modification
+ +modification
+ EOF
+ test_cmp expected actual
+'
+
+test_expect_success 'modified submodule contains untracked and modified content (dirty ignored)' '
+ echo modification >> sm1/foo6 &&
+ git diff-index -p --ignore-submodules=dirty --submodule=diff HEAD >actual &&
+ cat >expected <<-EOF &&
+ Submodule sm1 17243c9..cfce562:
+ diff --git a/sm1/foo6 b/sm1/foo6
+ index 462398b..3e75765 100644
+ --- a/sm1/foo6
+ +++ b/sm1/foo6
+ @@ -1 +1 @@
+ -foo6
+ +new
+ EOF
+ test_cmp expected actual
+'
+
+test_expect_success 'modified submodule contains untracked and modified content (all ignored)' '
+ echo modification >> sm1/foo6 &&
+ git diff-index -p --ignore-submodules --submodule=diff HEAD >actual &&
+ ! test -s actual
+'
+
+# NOT OK
+test_expect_success 'modified submodule contains modified content' '
+ rm -f sm1/new-file &&
+ git diff-index -p --submodule=diff HEAD >actual &&
+ cat >expected <<-EOF &&
+ Submodule sm1 contains modified content
+ Submodule sm1 17243c9..cfce562:
+ diff --git a/sm1/foo6 b/sm1/foo6
+ index 462398b..ac466ca 100644
+ --- a/sm1/foo6
+ +++ b/sm1/foo6
+ @@ -1 +1,5 @@
+ -foo6
+ +new
+ +modification
+ +modification
+ +modification
+ +modification
+ EOF
+ test_cmp expected actual
+'
+
+rm -rf sm1
+test_expect_success 'deleted submodule' '
+ git diff-index -p --submodule=diff HEAD >actual &&
+ cat >expected <<-EOF &&
+ Submodule sm1 17243c9...0000000 (submodule deleted)
+ EOF
+ test_cmp expected actual
+'
+
+test_create_repo sm2 &&
+head7=$(add_file sm2 foo8 foo9) &&
+git add sm2
+
+test_expect_success 'multiple submodules' '
+ git diff-index -p --submodule=diff HEAD >actual &&
+ cat >expected <<-EOF &&
+ Submodule sm1 17243c9...0000000 (submodule deleted)
+ Submodule sm2 0000000...a5a65c9 (new submodule)
+ diff --git a/sm2/foo8 b/sm2/foo8
+ new file mode 100644
+ index 0000000..db9916b
+ --- /dev/null
+ +++ b/sm2/foo8
+ @@ -0,0 +1 @@
+ +foo8
+ diff --git a/sm2/foo9 b/sm2/foo9
+ new file mode 100644
+ index 0000000..9c3b4f6
+ --- /dev/null
+ +++ b/sm2/foo9
+ @@ -0,0 +1 @@
+ +foo9
+ EOF
+ test_cmp expected actual
+'
+
+test_expect_success 'path filter' '
+ git diff-index -p --submodule=diff HEAD sm2 >actual &&
+ cat >expected <<-EOF &&
+ Submodule sm2 0000000...a5a65c9 (new submodule)
+ diff --git a/sm2/foo8 b/sm2/foo8
+ new file mode 100644
+ index 0000000..db9916b
+ --- /dev/null
+ +++ b/sm2/foo8
+ @@ -0,0 +1 @@
+ +foo8
+ diff --git a/sm2/foo9 b/sm2/foo9
+ new file mode 100644
+ index 0000000..9c3b4f6
+ --- /dev/null
+ +++ b/sm2/foo9
+ @@ -0,0 +1 @@
+ +foo9
+ EOF
+ test_cmp expected actual
+'
+
+commit_file sm2
+test_expect_success 'given commit' '
+ git diff-index -p --submodule=diff HEAD^ >actual &&
+ cat >expected <<-EOF &&
+ Submodule sm1 17243c9...0000000 (submodule deleted)
+ Submodule sm2 0000000...a5a65c9 (new submodule)
+ diff --git a/sm2/foo8 b/sm2/foo8
+ new file mode 100644
+ index 0000000..db9916b
+ --- /dev/null
+ +++ b/sm2/foo8
+ @@ -0,0 +1 @@
+ +foo8
+ diff --git a/sm2/foo9 b/sm2/foo9
+ new file mode 100644
+ index 0000000..9c3b4f6
+ --- /dev/null
+ +++ b/sm2/foo9
+ @@ -0,0 +1 @@
+ +foo9
+ EOF
+ test_cmp expected actual
+'
+
+test_expect_success 'setup .git file for sm2' '
+ (cd sm2 &&
+ REAL="$(pwd)/../.real" &&
+ mv .git "$REAL"
+ echo "gitdir: $REAL" >.git)
+'
+
+test_expect_success 'diff --submodule=diff with .git file' '
+ git diff --submodule=diff HEAD^ >actual &&
+ cat >expected <<-EOF &&
+ Submodule sm1 17243c9...0000000 (submodule deleted)
+ Submodule sm2 0000000...a5a65c9 (new submodule)
+ diff --git a/sm2/foo8 b/sm2/foo8
+ new file mode 100644
+ index 0000000..db9916b
+ --- /dev/null
+ +++ b/sm2/foo8
+ @@ -0,0 +1 @@
+ +foo8
+ diff --git a/sm2/foo9 b/sm2/foo9
+ new file mode 100644
+ index 0000000..9c3b4f6
+ --- /dev/null
+ +++ b/sm2/foo9
+ @@ -0,0 +1 @@
+ +foo9
+ EOF
+ test_cmp expected actual
+'
+
+test_done
--- /dev/null
+#!/bin/sh
+
+test_description='Test diff indent heuristic.
+
+'
+. ./test-lib.sh
+. "$TEST_DIRECTORY"/diff-lib.sh
+
+# Compare two diff outputs. Ignore "index" lines, because we don't
+# care about SHA-1s or file modes.
+compare_diff () {
+ sed -e "/^index /d" <"$1" >.tmp-1
+ sed -e "/^index /d" <"$2" >.tmp-2
+ test_cmp .tmp-1 .tmp-2 && rm -f .tmp-1 .tmp-2
+}
+
+# Compare blame output using the expectation for a diff as reference.
+# Only look for the lines coming from non-boundary commits.
+compare_blame () {
+ sed -n -e "1,4d" -e "s/^\+//p" <"$1" >.tmp-1
+ sed -ne "s/^[^^][^)]*) *//p" <"$2" >.tmp-2
+ test_cmp .tmp-1 .tmp-2 && rm -f .tmp-1 .tmp-2
+}
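The sed invocations in compare_blame are terse, so here is a minimal
illustration of what the second one keeps (the sample blame lines are
hypothetical, not output captured from this test):

	printf "%s\n" \
		"4b825dc6 (A U Thor 2016-06-09 00:44:16 -0700 7) b" \
		"^1234567 (A U Thor 2016-06-09 00:44:16 -0700 1) 1" |
	sed -ne "s/^[^^][^)]*) *//p"
	# prints only "b": the boundary line (leading "^") is skipped, and the
	# "<commit> (<author> <date> <lineno>) " prefix is stripped from the rest.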
+
+test_expect_success 'prepare' '
+ cat <<-\EOF >spaces.txt &&
+ 1
+ 2
+ a
+
+ b
+ 3
+ 4
+ EOF
+
+ cat <<-\EOF >functions.c &&
+ 1
+ 2
+ /* function */
+ foo() {
+ foo
+ }
+
+ 3
+ 4
+ EOF
+
+ git add spaces.txt functions.c &&
+ test_tick &&
+ git commit -m initial &&
+ git branch old &&
+
+ cat <<-\EOF >spaces.txt &&
+ 1
+ 2
+ a
+
+ b
+ a
+
+ b
+ 3
+ 4
+ EOF
+
+ cat <<-\EOF >functions.c &&
+ 1
+ 2
+ /* function */
+ bar() {
+ foo
+ }
+
+ /* function */
+ foo() {
+ foo
+ }
+
+ 3
+ 4
+ EOF
+
+ git add spaces.txt functions.c &&
+ test_tick &&
+ git commit -m initial &&
+ git branch new &&
+
+ tr "_" " " <<-\EOF >spaces-expect &&
+ diff --git a/spaces.txt b/spaces.txt
+ --- a/spaces.txt
+ +++ b/spaces.txt
+ @@ -3,5 +3,8 @@
+ a
+ _
+ b
+ +a
+ +
+ +b
+ 3
+ 4
+ EOF
+
+ tr "_" " " <<-\EOF >spaces-compacted-expect &&
+ diff --git a/spaces.txt b/spaces.txt
+ --- a/spaces.txt
+ +++ b/spaces.txt
+ @@ -2,6 +2,9 @@
+ 2
+ a
+ _
+ +b
+ +a
+ +
+ b
+ 3
+ 4
+ EOF
+
+ tr "_" " " <<-\EOF >functions-expect &&
+ diff --git a/functions.c b/functions.c
+ --- a/functions.c
+ +++ b/functions.c
+ @@ -1,6 +1,11 @@
+ 1
+ 2
+ /* function */
+ +bar() {
+ + foo
+ +}
+ +
+ +/* function */
+ foo() {
+ foo
+ }
+ EOF
+
+ tr "_" " " <<-\EOF >functions-compacted-expect
+ diff --git a/functions.c b/functions.c
+ --- a/functions.c
+ +++ b/functions.c
+ @@ -1,5 +1,10 @@
+ 1
+ 2
+ +/* function */
+ +bar() {
+ + foo
+ +}
+ +
+ /* function */
+ foo() {
+ foo
+ EOF
+'
+
+test_expect_success 'diff: ugly spaces' '
+ git diff old new -- spaces.txt >out &&
+ compare_diff spaces-expect out
+'
+
+test_expect_success 'diff: nice spaces with --indent-heuristic' '
+ git diff --indent-heuristic old new -- spaces.txt >out-compacted &&
+ compare_diff spaces-compacted-expect out-compacted
+'
+
+test_expect_success 'diff: nice spaces with diff.indentHeuristic' '
+ git -c diff.indentHeuristic=true diff old new -- spaces.txt >out-compacted2 &&
+ compare_diff spaces-compacted-expect out-compacted2
+'
+
+test_expect_success 'diff: --no-indent-heuristic overrides config' '
+ git -c diff.indentHeuristic=true diff --no-indent-heuristic old new -- spaces.txt >out2 &&
+ compare_diff spaces-expect out2
+'
+
+test_expect_success 'diff: --indent-heuristic with --patience' '
+ git diff --indent-heuristic --patience old new -- spaces.txt >out-compacted3 &&
+ compare_diff spaces-compacted-expect out-compacted3
+'
+
+test_expect_success 'diff: --indent-heuristic with --histogram' '
+ git diff --indent-heuristic --histogram old new -- spaces.txt >out-compacted4 &&
+ compare_diff spaces-compacted-expect out-compacted4
+'
+
+test_expect_success 'diff: ugly functions' '
+ git diff old new -- functions.c >out &&
+ compare_diff functions-expect out
+'
+
+test_expect_success 'diff: nice functions with --indent-heuristic' '
+ git diff --indent-heuristic old new -- functions.c >out-compacted &&
+ compare_diff functions-compacted-expect out-compacted
+'
+
+test_expect_success 'blame: ugly spaces' '
+ git blame old..new -- spaces.txt >out-blame &&
+ compare_blame spaces-expect out-blame
+'
+
+test_expect_success 'blame: nice spaces with --indent-heuristic' '
+ git blame --indent-heuristic old..new -- spaces.txt >out-blame-compacted &&
+ compare_blame spaces-compacted-expect out-blame-compacted
+'
+
+test_expect_success 'blame: nice spaces with diff.indentHeuristic' '
+ git -c diff.indentHeuristic=true blame old..new -- spaces.txt >out-blame-compacted2 &&
+ compare_blame spaces-compacted-expect out-blame-compacted2
+'
+
+test_expect_success 'blame: --no-indent-heuristic overrides config' '
+ git -c diff.indentHeuristic=true blame --no-indent-heuristic old..new -- spaces.txt >out-blame2 &&
+ git blame old..new -- spaces.txt >out-blame &&
+ compare_blame spaces-expect out-blame2
+'
+
+test_done
test_cmp msg out
'
+test_expect_success 'am works with multi-line in-body headers' '
+ FORTY="String that has a length of more than forty characters" &&
+ LONG="$FORTY $FORTY" &&
+ rm -fr .git/rebase-apply &&
+ git checkout -f first &&
+ echo one >> file &&
+ git commit -am "$LONG" --author="$LONG <long@example.com>" &&
+ git format-patch --stdout -1 >patch &&
+ # bump from, date, and subject down to in-body header
+ perl -lpe "
+ if (/^From:/) {
+ print \"From: x <x\@example.com>\";
+ print \"Date: Sat, 1 Jan 2000 00:00:00 +0000\";
+ print \"Subject: x\n\";
+ }
+ " patch >msg &&
+ git checkout HEAD^ &&
+ git am msg &&
+ # Ensure that the author and full message are present
+ git cat-file commit HEAD | grep "^author.*long@example.com" &&
+ git cat-file commit HEAD | grep "^$LONG"
+'
+
test_done
test_cmp expect actual
'
+cat > expect << EOF
+=== 804a787 sixth
+=== 394ef78 fifth
+=== 5d31159 fourth
+EOF
+test_expect_success 'git log --line-prefix="=== " --no-walk <commits> sorts by commit time' '
+ git log --line-prefix="=== " --no-walk --oneline 5d31159 804a787 394ef78 > actual &&
+ test_cmp expect actual
+'
+
cat > expect << EOF
5d31159 fourth
804a787 sixth
test_cmp expect actual
'
+cat > expect <<EOF
+123 * Second
+123 * sixth
+123 * fifth
+123 * fourth
+123 * third
+123 * second
+123 * initial
+EOF
+
+test_expect_success 'simple log --graph --line-prefix="123 "' '
+ git log --graph --line-prefix="123 " --pretty=tformat:%s >actual &&
+ test_cmp expect actual
+'
+
test_expect_success 'set up merge history' '
git checkout -b side HEAD~4 &&
test_commit side-1 1 1 &&
test_cmp expect actual
'
+cat > expect <<\EOF
+| | | * Merge branch 'side'
+| | | |\
+| | | | * side-2
+| | | | * side-1
+| | | * | Second
+| | | * | sixth
+| | | * | fifth
+| | | * | fourth
+| | | |/
+| | | * third
+| | | * second
+| | | * initial
+EOF
+
+test_expect_success 'log --graph --line-prefix="| | | " with merge' '
+ git log --line-prefix="| | | " --graph --date-order --pretty=tformat:%s |
+ sed "s/ *\$//" >actual &&
+ test_cmp expect actual
+'
+
test_expect_success 'log --raw --graph -m with merge' '
git log --raw --graph --oneline -m master | head -n 500 >actual &&
grep "initial" actual
test_i18ncmp expect actual.sanitized
'
+cat >expect <<\EOF
+*** * commit COMMIT_OBJECT_NAME
+*** |\ Merge: MERGE_PARENTS
+*** | | Author: A U Thor <author@example.com>
+*** | |
+*** | | Merge HEADS DESCRIPTION
+*** | |
+*** | * commit COMMIT_OBJECT_NAME
+*** | | Author: A U Thor <author@example.com>
+*** | |
+*** | | reach
+*** | | ---
+*** | | reach.t | 1 +
+*** | | 1 file changed, 1 insertion(+)
+*** | |
+*** | | diff --git a/reach.t b/reach.t
+*** | | new file mode 100644
+*** | | index 0000000..10c9591
+*** | | --- /dev/null
+*** | | +++ b/reach.t
+*** | | @@ -0,0 +1 @@
+*** | | +reach
+*** | |
+*** | \
+*** *-. \ commit COMMIT_OBJECT_NAME
+*** |\ \ \ Merge: MERGE_PARENTS
+*** | | | | Author: A U Thor <author@example.com>
+*** | | | |
+*** | | | | Merge HEADS DESCRIPTION
+*** | | | |
+*** | | * | commit COMMIT_OBJECT_NAME
+*** | | |/ Author: A U Thor <author@example.com>
+*** | | |
+*** | | | octopus-b
+*** | | | ---
+*** | | | octopus-b.t | 1 +
+*** | | | 1 file changed, 1 insertion(+)
+*** | | |
+*** | | | diff --git a/octopus-b.t b/octopus-b.t
+*** | | | new file mode 100644
+*** | | | index 0000000..d5fcad0
+*** | | | --- /dev/null
+*** | | | +++ b/octopus-b.t
+*** | | | @@ -0,0 +1 @@
+*** | | | +octopus-b
+*** | | |
+*** | * | commit COMMIT_OBJECT_NAME
+*** | |/ Author: A U Thor <author@example.com>
+*** | |
+*** | | octopus-a
+*** | | ---
+*** | | octopus-a.t | 1 +
+*** | | 1 file changed, 1 insertion(+)
+*** | |
+*** | | diff --git a/octopus-a.t b/octopus-a.t
+*** | | new file mode 100644
+*** | | index 0000000..11ee015
+*** | | --- /dev/null
+*** | | +++ b/octopus-a.t
+*** | | @@ -0,0 +1 @@
+*** | | +octopus-a
+*** | |
+*** * | commit COMMIT_OBJECT_NAME
+*** |/ Author: A U Thor <author@example.com>
+*** |
+*** | seventh
+*** | ---
+*** | seventh.t | 1 +
+*** | 1 file changed, 1 insertion(+)
+*** |
+*** | diff --git a/seventh.t b/seventh.t
+*** | new file mode 100644
+*** | index 0000000..9744ffc
+*** | --- /dev/null
+*** | +++ b/seventh.t
+*** | @@ -0,0 +1 @@
+*** | +seventh
+*** |
+*** * commit COMMIT_OBJECT_NAME
+*** |\ Merge: MERGE_PARENTS
+*** | | Author: A U Thor <author@example.com>
+*** | |
+*** | | Merge branch 'tangle'
+*** | |
+*** | * commit COMMIT_OBJECT_NAME
+*** | |\ Merge: MERGE_PARENTS
+*** | | | Author: A U Thor <author@example.com>
+*** | | |
+*** | | | Merge branch 'side' (early part) into tangle
+*** | | |
+*** | * | commit COMMIT_OBJECT_NAME
+*** | |\ \ Merge: MERGE_PARENTS
+*** | | | | Author: A U Thor <author@example.com>
+*** | | | |
+*** | | | | Merge branch 'master' (early part) into tangle
+*** | | | |
+*** | * | | commit COMMIT_OBJECT_NAME
+*** | | | | Author: A U Thor <author@example.com>
+*** | | | |
+*** | | | | tangle-a
+*** | | | | ---
+*** | | | | tangle-a | 1 +
+*** | | | | 1 file changed, 1 insertion(+)
+*** | | | |
+*** | | | | diff --git a/tangle-a b/tangle-a
+*** | | | | new file mode 100644
+*** | | | | index 0000000..7898192
+*** | | | | --- /dev/null
+*** | | | | +++ b/tangle-a
+*** | | | | @@ -0,0 +1 @@
+*** | | | | +a
+*** | | | |
+*** * | | | commit COMMIT_OBJECT_NAME
+*** |\ \ \ \ Merge: MERGE_PARENTS
+*** | | | | | Author: A U Thor <author@example.com>
+*** | | | | |
+*** | | | | | Merge branch 'side'
+*** | | | | |
+*** | * | | | commit COMMIT_OBJECT_NAME
+*** | | |_|/ Author: A U Thor <author@example.com>
+*** | |/| |
+*** | | | | side-2
+*** | | | | ---
+*** | | | | 2 | 1 +
+*** | | | | 1 file changed, 1 insertion(+)
+*** | | | |
+*** | | | | diff --git a/2 b/2
+*** | | | | new file mode 100644
+*** | | | | index 0000000..0cfbf08
+*** | | | | --- /dev/null
+*** | | | | +++ b/2
+*** | | | | @@ -0,0 +1 @@
+*** | | | | +2
+*** | | | |
+*** | * | | commit COMMIT_OBJECT_NAME
+*** | | | | Author: A U Thor <author@example.com>
+*** | | | |
+*** | | | | side-1
+*** | | | | ---
+*** | | | | 1 | 1 +
+*** | | | | 1 file changed, 1 insertion(+)
+*** | | | |
+*** | | | | diff --git a/1 b/1
+*** | | | | new file mode 100644
+*** | | | | index 0000000..d00491f
+*** | | | | --- /dev/null
+*** | | | | +++ b/1
+*** | | | | @@ -0,0 +1 @@
+*** | | | | +1
+*** | | | |
+*** * | | | commit COMMIT_OBJECT_NAME
+*** | | | | Author: A U Thor <author@example.com>
+*** | | | |
+*** | | | | Second
+*** | | | | ---
+*** | | | | one | 1 +
+*** | | | | 1 file changed, 1 insertion(+)
+*** | | | |
+*** | | | | diff --git a/one b/one
+*** | | | | new file mode 100644
+*** | | | | index 0000000..9a33383
+*** | | | | --- /dev/null
+*** | | | | +++ b/one
+*** | | | | @@ -0,0 +1 @@
+*** | | | | +case
+*** | | | |
+*** * | | | commit COMMIT_OBJECT_NAME
+*** | |_|/ Author: A U Thor <author@example.com>
+*** |/| |
+*** | | | sixth
+*** | | | ---
+*** | | | a/two | 1 -
+*** | | | 1 file changed, 1 deletion(-)
+*** | | |
+*** | | | diff --git a/a/two b/a/two
+*** | | | deleted file mode 100644
+*** | | | index 9245af5..0000000
+*** | | | --- a/a/two
+*** | | | +++ /dev/null
+*** | | | @@ -1 +0,0 @@
+*** | | | -ni
+*** | | |
+*** * | | commit COMMIT_OBJECT_NAME
+*** | | | Author: A U Thor <author@example.com>
+*** | | |
+*** | | | fifth
+*** | | | ---
+*** | | | a/two | 1 +
+*** | | | 1 file changed, 1 insertion(+)
+*** | | |
+*** | | | diff --git a/a/two b/a/two
+*** | | | new file mode 100644
+*** | | | index 0000000..9245af5
+*** | | | --- /dev/null
+*** | | | +++ b/a/two
+*** | | | @@ -0,0 +1 @@
+*** | | | +ni
+*** | | |
+*** * | | commit COMMIT_OBJECT_NAME
+*** |/ / Author: A U Thor <author@example.com>
+*** | |
+*** | | fourth
+*** | | ---
+*** | | ein | 1 +
+*** | | 1 file changed, 1 insertion(+)
+*** | |
+*** | | diff --git a/ein b/ein
+*** | | new file mode 100644
+*** | | index 0000000..9d7e69f
+*** | | --- /dev/null
+*** | | +++ b/ein
+*** | | @@ -0,0 +1 @@
+*** | | +ichi
+*** | |
+*** * | commit COMMIT_OBJECT_NAME
+*** |/ Author: A U Thor <author@example.com>
+*** |
+*** | third
+*** | ---
+*** | ichi | 1 +
+*** | one | 1 -
+*** | 2 files changed, 1 insertion(+), 1 deletion(-)
+*** |
+*** | diff --git a/ichi b/ichi
+*** | new file mode 100644
+*** | index 0000000..9d7e69f
+*** | --- /dev/null
+*** | +++ b/ichi
+*** | @@ -0,0 +1 @@
+*** | +ichi
+*** | diff --git a/one b/one
+*** | deleted file mode 100644
+*** | index 9d7e69f..0000000
+*** | --- a/one
+*** | +++ /dev/null
+*** | @@ -1 +0,0 @@
+*** | -ichi
+*** |
+*** * commit COMMIT_OBJECT_NAME
+*** | Author: A U Thor <author@example.com>
+*** |
+*** | second
+*** | ---
+*** | one | 2 +-
+*** | 1 file changed, 1 insertion(+), 1 deletion(-)
+*** |
+*** | diff --git a/one b/one
+*** | index 5626abf..9d7e69f 100644
+*** | --- a/one
+*** | +++ b/one
+*** | @@ -1 +1 @@
+*** | -one
+*** | +ichi
+*** |
+*** * commit COMMIT_OBJECT_NAME
+*** Author: A U Thor <author@example.com>
+***
+*** initial
+*** ---
+*** one | 1 +
+*** 1 file changed, 1 insertion(+)
+***
+*** diff --git a/one b/one
+*** new file mode 100644
+*** index 0000000..5626abf
+*** --- /dev/null
+*** +++ b/one
+*** @@ -0,0 +1 @@
+*** +one
+EOF
+
+test_expect_success 'log --line-prefix="*** " --graph with diff and stats' '
+ git log --line-prefix="*** " --no-renames --graph --pretty=short --stat -p >actual &&
+ sanitize_output >actual.sanitized <actual &&
+ test_i18ncmp expect actual.sanitized
+'
+
test_expect_success 'dotdot is a parent directory' '
mkdir -p a/b &&
( echo sixth && echo fifth ) >expect &&
test_cmp patch-id_master patch-id_same
'
+test_expect_success 'patch-id respects config from subdir' '
+ test_config patchid.stable true &&
+ mkdir subdir &&
+
+ # copy these because test_patch_id() looks for them in
+ # the current directory
+ cp bar-then-foo foo-then-bar subdir &&
+
+ (
+ cd subdir &&
+ test_patch_id irrelevant patchid.stable=true
+ )
+'
+
cat >nonl <<\EOF
diff --git i/a w/a
index e69de29..2e65efe 100644
'
test_expect_success 'compare diagnostic; ensure file is still here' '
- echo "fatal: git diff header lacks filename information (line 4)" >expected &&
+ echo "error: git diff header lacks filename information (line 4)" >expected &&
test_path_is_file f &&
- test_cmp expected actual
+ test_i18ncmp expected actual
'
test_done
'git mailsplit -o. "$DATA/sample.mbox" >last &&
last=$(cat last) &&
echo total is $last &&
- test $(cat last) = 17'
+ test $(cat last) = 18'
check_mailinfo () {
mail=$1 opt=$2
--- /dev/null
+Author: Another Thor
+Email: a.thor@example.com
+Subject: This one contains a tab and a space
+Date: Fri, 9 Jun 2006 00:44:16 -0700
+
--- /dev/null
+Author: A U Thor
+Email: a.u.thor@example.com
+Subject: check multiline inbody headers
+Date: Fri, 9 Jun 2006 00:44:16 -0700
+
--- /dev/null
+a commit message
+
--- /dev/null
+From: Another Thor
+ <a.thor@example.com>
+Subject: This one contains
+ a tab
+ and a space
+
+a commit message
+
--- /dev/null
+diff --git a/foo b/foo
+index e69de29..d95f3ad 100644
+--- a/foo
++++ b/foo
+@@ -0,0 +1 @@
++content
--- /dev/null
+diff --git a/foo b/foo
+index e69de29..d95f3ad 100644
+--- a/foo
++++ b/foo
+@@ -0,0 +1 @@
++content
+++ b/foo
@@ -0,0 +1 @@
+New content
+From nobody Mon Sep 17 00:00:00 2001
+From: A U Thor <a.u.thor@example.com>
+Subject: check multiline inbody headers
+Date: Fri, 9 Jun 2006 00:44:16 -0700
+
+From: Another Thor
+ <a.thor@example.com>
+Subject: This one contains
+ a tab
+ and a space
+
+a commit message
+
+diff --git a/foo b/foo
+index e69de29..d95f3ad 100644
+--- a/foo
++++ b/foo
+@@ -0,0 +1 @@
++content
echo ".git/objects/$(echo "$1" | sed -e 's|\(..\)|\1/|')"
}
+# show objects present in a pack ($1 should be the associated *.idx file)
+list_packed_objects () {
+ git show-index <"$1" | cut -d' ' -f2
+}
+
+# has_any pattern-file content-file
+# tests whether content-file contains any entry from pattern-file,
+# where each line of pattern-file is one entry.
+has_any () {
+ grep -Ff "$1" "$2"
+}
+
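As a small usage sketch for the two helpers above (assuming the repository
already has at least one pack; the file names "pack.objects" and
"unwanted.list" are made up for illustration, and the blob id is computed
only so there is a known object that no pack contains):

	unwanted=$(echo unwanted-content | git hash-object --stdin) &&
	idx=$(ls .git/objects/pack/*.idx | head -n 1) &&
	list_packed_objects "$idx" >pack.objects &&
	echo "$unwanted" >unwanted.list &&
	! has_any unwanted.list pack.objects
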
test_expect_success 'setup repo with moderate-sized history' '
for i in $(test_seq 1 10); do
test_commit $i
test_commit side-$i
done &&
git checkout master &&
+ bitmaptip=$(git rev-parse master) &&
blob=$(echo tagged-blob | git hash-object -w --stdin) &&
git tag tagged-blob $blob &&
git config repack.writebitmaps true &&
git repack -d --no-write-bitmap-index
'
+test_expect_success 'pack-objects respects --local (non-local loose)' '
+ git init --bare alt.git &&
+ echo $(pwd)/alt.git/objects >.git/objects/info/alternates &&
+ echo content1 >file1 &&
+ # non-local loose object which is not present in bitmapped pack
+ altblob=$(GIT_DIR=alt.git git hash-object -w file1) &&
+ # non-local loose object which is also present in bitmapped pack
+ git cat-file blob $blob | GIT_DIR=alt.git git hash-object -w --stdin &&
+ git add file1 &&
+ test_tick &&
+ git commit -m commit_file1 &&
+ echo HEAD | git pack-objects --local --stdout --revs >1.pack &&
+ git index-pack 1.pack &&
+ list_packed_objects 1.idx >1.objects &&
+ printf "%s\n" "$altblob" "$blob" >nonlocal-loose &&
+ ! has_any nonlocal-loose 1.objects
+'
+
+test_expect_success 'pack-objects respects --honor-pack-keep (local non-bitmapped pack)' '
+ echo content2 >file2 &&
+ blob2=$(git hash-object -w file2) &&
+ git add file2 &&
+ test_tick &&
+ git commit -m commit_file2 &&
+ printf "%s\n" "$blob2" "$bitmaptip" >keepobjects &&
+ pack2=$(git pack-objects pack2 <keepobjects) &&
+ mv pack2-$pack2.* .git/objects/pack/ &&
+ >.git/objects/pack/pack2-$pack2.keep &&
+ rm $(objpath $blob2) &&
+ echo HEAD | git pack-objects --honor-pack-keep --stdout --revs >2a.pack &&
+ git index-pack 2a.pack &&
+ list_packed_objects 2a.idx >2a.objects &&
+ ! has_any keepobjects 2a.objects
+'
+
+test_expect_success 'pack-objects respects --local (non-local pack)' '
+ mv .git/objects/pack/pack2-$pack2.* alt.git/objects/pack/ &&
+ echo HEAD | git pack-objects --local --stdout --revs >2b.pack &&
+ git index-pack 2b.pack &&
+ list_packed_objects 2b.idx >2b.objects &&
+ ! has_any keepobjects 2b.objects
+'
+
+test_expect_success 'pack-objects respects --honor-pack-keep (local bitmapped pack)' '
+ ls .git/objects/pack/ | grep bitmap >output &&
+ test_line_count = 1 output &&
+ packbitmap=$(basename $(cat output) .bitmap) &&
+ list_packed_objects .git/objects/pack/$packbitmap.idx >packbitmap.objects &&
+ test_when_finished "rm -f .git/objects/pack/$packbitmap.keep" &&
+ >.git/objects/pack/$packbitmap.keep &&
+ echo HEAD | git pack-objects --honor-pack-keep --stdout --revs >3a.pack &&
+ git index-pack 3a.pack &&
+ list_packed_objects 3a.idx >3a.objects &&
+ ! has_any packbitmap.objects 3a.objects
+'
+
+test_expect_success 'pack-objects respects --local (non-local bitmapped pack)' '
+ mv .git/objects/pack/$packbitmap.* alt.git/objects/pack/ &&
+ test_when_finished "mv alt.git/objects/pack/$packbitmap.* .git/objects/pack/" &&
+ echo HEAD | git pack-objects --local --stdout --revs >3b.pack &&
+ git index-pack 3b.pack &&
+ list_packed_objects 3b.idx >3b.objects &&
+ ! has_any packbitmap.objects 3b.objects
+'
+
+test_expect_success 'pack-objects to file can use bitmap' '
+ # make sure we still have 1 bitmap index from previous tests
+ ls .git/objects/pack/ | grep bitmap >output &&
+ test_line_count = 1 output &&
+ # verify equivalent packs are generated with/without using bitmap index
+ packasha1=$(git pack-objects --no-use-bitmap-index --all packa </dev/null) &&
+ packbsha1=$(git pack-objects --use-bitmap-index --all packb </dev/null) &&
+ list_packed_objects packa-$packasha1.idx >packa.objects &&
+ list_packed_objects packb-$packbsha1.idx >packb.objects &&
+ test_cmp packa.objects packb.objects
+'
+
test_expect_success 'full repack, reusing previous bitmaps' '
git repack -ad &&
ls .git/objects/pack/ | grep bitmap >output &&
EOF
'
+test_expect_success 'pack-objects respects --incremental' '
+ cat >revs2 <<-EOF &&
+ HEAD
+ $commit
+ EOF
+ git pack-objects --incremental --stdout --revs <revs2 >4.pack &&
+ git index-pack 4.pack &&
+ list_packed_objects 4.idx >4.objects &&
+ test_line_count = 4 4.objects &&
+ git rev-list --objects $commit >revlist &&
+ cut -d" " -f1 revlist |sort >objects &&
+ test_cmp 4.objects objects
+'
+
test_expect_success 'pack with missing blob' '
rm $(objpath $blob) &&
git pack-objects --stdout --revs <revs >/dev/null
git pack-objects --stdout --revs <revs >/dev/null
'
-test_lazy_prereq JGIT '
- type jgit
-'
-
test_expect_success JGIT 'we can read jgit bitmaps' '
git clone . compat-jgit &&
(
--- /dev/null
+#!/bin/sh
+
+test_description='test handling of inter-pack delta cycles during repack
+
+The goal here is to create a situation where we have two blobs, A and B, with A
+as a delta against B in one pack, and vice versa in the other. Then if we can
+persuade a full repack to find A from one pack and B from the other, that will
+give us a cycle when we attempt to reuse those deltas.
+
+The trick is in the "persuade" step, as it depends on the internals of how
+pack-objects picks which pack to reuse the deltas from. But we can assume
+that it does so in one of two general strategies:
+
+ 1. Using a static ordering of packs. In this case, no inter-pack cycles can
+ happen. Any objects with a delta relationship must be present in the same
+ pack (i.e., no "--thin" packs on disk), so we will find all related objects
+ from that pack. So assuming there are no cycles within a single pack (and
+ we avoid generating them via pack-objects or importing them via
+ index-pack), then our result will have no cycles.
+
+ So this case should pass the tests no matter how we arrange things.
+
+ 2. Picking the next pack to examine based on locality (i.e., where we found
+ something else recently).
+
+ In this case, we want to make sure that we find the delta versions of A and
+ B and not their base versions. We can do this by putting two blobs in each
+ pack. The first is a "dummy" blob that can only be found in the pack in
+ question. And then the second is the actual delta we want to find.
+
+ The two blobs must be present in the same tree, not present in other trees,
+ and the dummy pathname must sort before the delta path.
+
+The setup below focuses on case 2. We have two commits HEAD and HEAD^, each
+of which has two files: "dummy" and "file". Then we can make two packs which
+contain:
+
+ [pack one]
+ HEAD:dummy
+ HEAD:file (as delta against HEAD^:file)
+ HEAD^:file (as base)
+
+ [pack two]
+ HEAD^:dummy
+ HEAD^:file (as delta against HEAD:file)
+ HEAD:file (as base)
+
+Then no matter which order we start looking at the packs in, we know that we
+will always find a delta for "file", because its lookup will always come
+immediately after the lookup for "dummy".
+'
+. ./test-lib.sh
+
+# Create a pack containing the tree $1 and blob $1:file, with
+# the latter stored as a delta against $2:file.
+#
+# We convince pack-objects to make the delta in the direction of our choosing
+# by marking $2 as a preferred-base edge. That results in $1:file as a thin
+# delta, and index-pack completes it by adding $2:file as a base.
+#
+# Note that the two variants of "file" must be similar enough to convince git
+# to create the delta.
+make_pack () {
+ {
+ printf '%s\n' "-$(git rev-parse $2)"
+ printf '%s dummy\n' "$(git rev-parse $1:dummy)"
+ printf '%s file\n' "$(git rev-parse $1:file)"
+ } |
+ git pack-objects --stdout |
+ git index-pack --stdin --fix-thin
+}
+
+test_expect_success 'setup' '
+ test-genrandom base 4096 >base &&
+ for i in one two
+ do
+ # we want shared content here to encourage deltas...
+ cp base file &&
+ echo $i >>file &&
+
+ # ...whereas dummy should be short, because we do not want
+ # deltas that would create duplicates when we --fix-thin
+ echo $i >dummy &&
+
+ git add file dummy &&
+ test_tick &&
+ git commit -m $i ||
+ return 1
+ done &&
+
+ make_pack HEAD^ HEAD &&
+ make_pack HEAD HEAD^
+'
+
+test_expect_success 'repack' '
+ # We first want to check that we do not have any internal errors,
+ # and also that we do not hit the last-ditch cycle-breaking code
+ # in write_object(), which will issue a warning to stderr.
+ >expect &&
+ git repack -ad 2>stderr &&
+ test_cmp expect stderr &&
+
+ # And then double-check that the resulting pack is usable (i.e.,
+ # we did not fail to notice any cycles). We know we are accessing
+ # the objects via the new pack here, because "repack -d" will have
+ # removed the others.
+ git cat-file blob HEAD:file >/dev/null &&
+ git cat-file blob HEAD^:file >/dev/null
+'
+
+test_done
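As an informal way to double-check which copy of "file" ended up stored as a
delta (outside of the test itself), git verify-pack can list every object in
a pack; deltified entries carry two extra columns, the delta depth and the
base object name:

	# inspect delta chains in every pack on disk
	for idx in .git/objects/pack/*.idx
	do
		echo "== $idx" &&
		git verify-pack -v "$idx"
	done
	# deltified objects are listed as:
	#   <sha1> <type> <size> <size-in-packfile> <offset> <depth> <base-sha1>
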
check_prot_path c:repo file c:repo
'
+test_expect_success 'clone shallow since ...' '
+ test_create_repo shallow-since &&
+ (
+ cd shallow-since &&
+ GIT_COMMITTER_DATE="100000000 +0700" git commit --allow-empty -m one &&
+ GIT_COMMITTER_DATE="200000000 +0700" git commit --allow-empty -m two &&
+ GIT_COMMITTER_DATE="300000000 +0700" git commit --allow-empty -m three &&
+ git clone --shallow-since "300000000 +0700" "file://$(pwd)/." ../shallow11 &&
+ git -C ../shallow11 log --pretty=tformat:%s HEAD >actual &&
+ echo three >expected &&
+ test_cmp expected actual
+ )
+'
+
+test_expect_success 'fetch shallow since ...' '
+ git -C shallow11 fetch --shallow-since "200000000 +0700" origin &&
+ git -C shallow11 log --pretty=tformat:%s origin/master >actual &&
+ cat >expected <<-\EOF &&
+ three
+ two
+ EOF
+ test_cmp expected actual
+'
+
+test_expect_success 'shallow clone exclude tag two' '
+ test_create_repo shallow-exclude &&
+ (
+ cd shallow-exclude &&
+ test_commit one &&
+ test_commit two &&
+ test_commit three &&
+ git clone --shallow-exclude two "file://$(pwd)/." ../shallow12 &&
+ git -C ../shallow12 log --pretty=tformat:%s HEAD >actual &&
+ echo three >expected &&
+ test_cmp expected actual
+ )
+'
+
+test_expect_success 'fetch exclude tag one' '
+ git -C shallow12 fetch --shallow-exclude one origin &&
+ git -C shallow12 log --pretty=tformat:%s origin/master >actual &&
+ test_write_lines three two >expected &&
+ test_cmp expected actual
+'
+
+test_expect_success 'fetching deepen' '
+ test_create_repo shallow-deepen &&
+ (
+ cd shallow-deepen &&
+ test_commit one &&
+ test_commit two &&
+ test_commit three &&
+ git clone --depth 1 "file://$(pwd)/." deepen &&
+ test_commit four &&
+ git -C deepen log --pretty=tformat:%s master >actual &&
+ echo three >expected &&
+ test_cmp expected actual &&
+ git -C deepen fetch --deepen=1 &&
+ git -C deepen log --pretty=tformat:%s origin/master >actual &&
+ cat >expected <<-\EOF &&
+ four
+ three
+ two
+ EOF
+ test_cmp expected actual
+ )
+'
+
test_done
# We could just as easily have used "master"; the "*" emphasizes its
# role as a pattern.
test_must_fail git ls-remote refs*master >actual 2>&1 &&
- test_cmp exp actual
+ test_i18ncmp exp actual
'
test_expect_success 'die with non-2 for wrong repository even with --exit-code' '
test_cmp expect actual
'
+test_lazy_prereq GIT_DAEMON '
+ test_tristate GIT_TEST_GIT_DAEMON &&
+ test "$GIT_TEST_GIT_DAEMON" != false
+'
+
+# This test spawns a daemon, so run it only if the user would be OK with
+# testing with git-daemon.
+test_expect_success PIPE,JGIT,GIT_DAEMON 'indicate no refs in standards-compliant empty remote' '
+ JGIT_DAEMON_PORT=${JGIT_DAEMON_PORT-${this_test#t}} &&
+ JGIT_DAEMON_PID= &&
+ git init --bare empty.git &&
+ >empty.git/git-daemon-export-ok &&
+ mkfifo jgit_daemon_output &&
+ {
+ jgit daemon --port="$JGIT_DAEMON_PORT" . >jgit_daemon_output &
+ JGIT_DAEMON_PID=$!
+ } &&
+ test_when_finished kill "$JGIT_DAEMON_PID" &&
+ {
+ read line &&
+ case $line in
+ Exporting*)
+ ;;
+ *)
+ echo "Expected: Exporting" &&
+ false;;
+ esac &&
+ read line &&
+ case $line in
+ "Listening on"*)
+ ;;
+ *)
+ echo "Expected: Listening on" &&
+ false;;
+ esac
+ } <jgit_daemon_output &&
+ # --exit-code asks the command to exit with 2 when no
+ # matching refs are found.
+ test_expect_code 2 git ls-remote --exit-code git://localhost:$JGIT_DAEMON_PORT/empty.git
+'
test_done
)
'
+test_expect_success 'clone shallow since ...' '
+ test_create_repo shallow-since &&
+ (
+ cd shallow-since &&
+ GIT_COMMITTER_DATE="100000000 +0700" git commit --allow-empty -m one &&
+ GIT_COMMITTER_DATE="200000000 +0700" git commit --allow-empty -m two &&
+ GIT_COMMITTER_DATE="300000000 +0700" git commit --allow-empty -m three &&
+ mv .git "$HTTPD_DOCUMENT_ROOT_PATH/shallow-since.git" &&
+ git clone --shallow-since "300000000 +0700" $HTTPD_URL/smart/shallow-since.git ../shallow11 &&
+ git -C ../shallow11 log --pretty=tformat:%s HEAD >actual &&
+ echo three >expected &&
+ test_cmp expected actual
+ )
+'
+
+test_expect_success 'fetch shallow since ...' '
+ git -C shallow11 fetch --shallow-since "200000000 +0700" origin &&
+ git -C shallow11 log --pretty=tformat:%s origin/master >actual &&
+ cat >expected <<-\EOF &&
+ three
+ two
+ EOF
+ test_cmp expected actual
+'
+
+test_expect_success 'shallow clone exclude tag two' '
+ test_create_repo shallow-exclude &&
+ (
+ cd shallow-exclude &&
+ test_commit one &&
+ test_commit two &&
+ test_commit three &&
+ mv .git "$HTTPD_DOCUMENT_ROOT_PATH/shallow-exclude.git" &&
+ git clone --shallow-exclude two $HTTPD_URL/smart/shallow-exclude.git ../shallow12 &&
+ git -C ../shallow12 log --pretty=tformat:%s HEAD >actual &&
+ echo three >expected &&
+ test_cmp expected actual
+ )
+'
+
+test_expect_success 'fetch exclude tag one' '
+ git -C shallow12 fetch --shallow-exclude one origin &&
+ git -C shallow12 log --pretty=tformat:%s origin/master >actual &&
+ test_write_lines three two >expected &&
+ test_cmp expected actual
+'
+
+test_expect_success 'fetching deepen' '
+ test_create_repo shallow-deepen &&
+ (
+ cd shallow-deepen &&
+ test_commit one &&
+ test_commit two &&
+ test_commit three &&
+ mv .git "$HTTPD_DOCUMENT_ROOT_PATH/shallow-deepen.git" &&
+ git clone --depth 1 $HTTPD_URL/smart/shallow-deepen.git deepen &&
+ mv "$HTTPD_DOCUMENT_ROOT_PATH/shallow-deepen.git" .git &&
+ test_commit four &&
+ git -C deepen log --pretty=tformat:%s master >actual &&
+ echo three >expected &&
+ test_cmp expected actual &&
+ mv .git "$HTTPD_DOCUMENT_ROOT_PATH/shallow-deepen.git" &&
+ git -C deepen fetch --deepen=1 &&
+ git -C deepen log --pretty=tformat:%s origin/master >actual &&
+ cat >expected <<-\EOF &&
+ four
+ three
+ two
+ EOF
+ test_cmp expected actual
+ )
+'
+
stop_httpd
test_done
--- /dev/null
+#!/bin/sh
+
+test_description='check receive input limits'
+. ./test-lib.sh
+
+# Let's run tests with different unpack limits: 1 and 10000
+# When the limit is 1, `git receive-pack` will call `git index-pack`.
+# When the limit is 10000, `git receive-pack` will call `git unpack-objects`.
+
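+# test_pack_input_limit <index|unpack> runs the receive.maxInputSize push
+# tests with receive.unpacklimit set so that receive-pack takes the
+# requested code path (index-pack or unpack-objects).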
+test_pack_input_limit () {
+ case "$1" in
+ index) unpack_limit=1 ;;
+ unpack) unpack_limit=10000 ;;
+ esac
+
+ test_expect_success 'prepare destination repository' '
+ rm -fr dest &&
+ git --bare init dest
+ '
+
+ test_expect_success "set unpacklimit to $unpack_limit" '
+ git --git-dir=dest config receive.unpacklimit "$unpack_limit"
+ '
+
+ test_expect_success 'setting receive.maxInputSize to 512 rejects push' '
+ git --git-dir=dest config receive.maxInputSize 512 &&
+ test_must_fail git push dest HEAD
+ '
+
+ test_expect_success 'bumping limit to 4k allows push' '
+ git --git-dir=dest config receive.maxInputSize 4k &&
+ git push dest HEAD
+ '
+
+ test_expect_success 'prepare destination repository (again)' '
+ rm -fr dest &&
+ git --bare init dest
+ '
+
+ test_expect_success 'lifting the limit allows push' '
+ git --git-dir=dest config receive.maxInputSize 0 &&
+ git push dest HEAD
+ '
+}
+
+test_expect_success "create known-size (1024 bytes) commit" '
+ test-genrandom foo 1024 >one-k &&
+ git add one-k &&
+ test_commit one-k
+'
+
+test_pack_input_limit index
+test_pack_input_limit unpack
+
+test_done
--- /dev/null
+#!/bin/sh
+
+test_description='check quarantine of objects during push'
+. ./test-lib.sh
+
+test_expect_success 'create picky dest repo' '
+ git init --bare dest.git &&
+ write_script dest.git/hooks/pre-receive <<-\EOF
+ while read old new ref; do
+ test "$(git log -1 --format=%s $new)" = reject && exit 1
+ done
+ exit 0
+ EOF
+'
+
+test_expect_success 'accepted objects work' '
+ test_commit ok &&
+ git push dest.git HEAD &&
+ commit=$(git rev-parse HEAD) &&
+ git --git-dir=dest.git cat-file commit $commit
+'
+
+test_expect_success 'rejected objects are not installed' '
+ test_commit reject &&
+ commit=$(git rev-parse HEAD) &&
+ test_must_fail git push dest.git reject &&
+ test_must_fail git --git-dir=dest.git cat-file commit $commit
+'
+
+test_expect_success 'rejected objects are removed' '
+ echo "incoming-*" >expect &&
+ (cd dest.git/objects && echo incoming-*) >actual &&
+ test_cmp expect actual
+'
+
+test_done
test_description='test transitive info/alternate entries'
. ./test-lib.sh
-# test that a file is not reachable in the current repository
-# but that it is after creating a info/alternate entry
-reachable_via() {
- alternate="$1"
- file="$2"
- if git cat-file -e "HEAD:$file"; then return 1; fi
- echo "$alternate" >> .git/objects/info/alternate
- git cat-file -e "HEAD:$file"
-}
-
-test_valid_repo() {
- git fsck --full > fsck.log &&
- test_line_count = 0 fsck.log
-}
-
-base_dir=$(pwd)
-
-test_expect_success 'preparing first repository' \
-'test_create_repo A && cd A &&
-echo "Hello World" > file1 &&
-git add file1 &&
-git commit -m "Initial commit" file1 &&
-git repack -a -d &&
-git prune'
-
-cd "$base_dir"
-
-test_expect_success 'preparing second repository' \
-'git clone -l -s A B && cd B &&
-echo "foo bar" > file2 &&
-git add file2 &&
-git commit -m "next commit" file2 &&
-git repack -a -d -l &&
-git prune'
-
-cd "$base_dir"
-
-test_expect_success 'preparing third repository' \
-'git clone -l -s B C && cd C &&
-echo "Goodbye, cruel world" > file3 &&
-git add file3 &&
-git commit -m "one more" file3 &&
-git repack -a -d -l &&
-git prune'
-
-cd "$base_dir"
-
-test_expect_success 'creating too deep nesting' \
-'git clone -l -s C D &&
-git clone -l -s D E &&
-git clone -l -s E F &&
-git clone -l -s F G &&
-git clone --bare -l -s G H'
-
-test_expect_success 'invalidity of deepest repository' \
-'cd H && {
- test_valid_repo
- test $? -ne 0
-}'
-
-cd "$base_dir"
+test_expect_success 'preparing first repository' '
+ test_create_repo A && (
+ cd A &&
+ echo "Hello World" > file1 &&
+ git add file1 &&
+ git commit -m "Initial commit" file1 &&
+ git repack -a -d &&
+ git prune
+ )
+'
-test_expect_success 'validity of third repository' \
-'cd C &&
-test_valid_repo'
+test_expect_success 'preparing second repository' '
+ git clone -l -s A B && (
+ cd B &&
+ echo "foo bar" > file2 &&
+ git add file2 &&
+ git commit -m "next commit" file2 &&
+ git repack -a -d -l &&
+ git prune
+ )
+'
-cd "$base_dir"
+test_expect_success 'preparing third repository' '
+ git clone -l -s B C && (
+ cd C &&
+ echo "Goodbye, cruel world" > file3 &&
+ git add file3 &&
+ git commit -m "one more" file3 &&
+ git repack -a -d -l &&
+ git prune
+ )
+'
-test_expect_success 'validity of fourth repository' \
-'cd D &&
-test_valid_repo'
+test_expect_success 'count-objects shows the alternates' '
+ cat >expect <<-EOF &&
+ alternate: $(pwd)/B/.git/objects
+ alternate: $(pwd)/A/.git/objects
+ EOF
+ git -C C count-objects -v >actual &&
+ grep ^alternate: actual >actual.alternates &&
+ test_cmp expect actual.alternates
+'
-cd "$base_dir"
+# Note: These tests depend on the hard-coded value of 5 as the maximum depth
+# we will follow recursion. We start the depth at 0 and count links, not
+# repositories. This means that in a chain like:
+#
+# A --> B --> C --> D --> E --> F --> G --> H
+# 0 1 2 3 4 5 6
+#
+# we are OK at "G", but break at "H", even though "H" is actually the 8th
+# repository, not the 6th, which you might expect. Counting the links allows
+# N+1 repositories, and counting from 0 to 5 inclusive allows 6 links.
+#
+# Note also that we must use "--bare -l" to make the link to H. The "-l"
+# ensures we do not do a connectivity check, and the "--bare" makes sure
+# we do not try to checkout the result (which needs objects), either of
+# which would cause the clone to fail.
+test_expect_success 'creating too deep nesting' '
+ git clone -l -s C D &&
+ git clone -l -s D E &&
+ git clone -l -s E F &&
+ git clone -l -s F G &&
+ git clone --bare -l -s G H
+'
-test_expect_success 'breaking of loops' \
-'echo "$base_dir"/B/.git/objects >> "$base_dir"/A/.git/objects/info/alternates&&
-cd C &&
-test_valid_repo'
+test_expect_success 'validity of seventh repository' '
+ git -C G fsck
+'
-cd "$base_dir"
+test_expect_success 'invalidity of eighth repository' '
+ test_must_fail git -C H fsck
+'
-test_expect_success 'that info/alternates is necessary' \
-'cd C &&
-rm -f .git/objects/info/alternates &&
-! (test_valid_repo)'
+test_expect_success 'breaking of loops' '
+ echo "$(pwd)"/B/.git/objects >>A/.git/objects/info/alternates &&
+ git -C C fsck
+'
-cd "$base_dir"
+test_expect_success 'that info/alternates is necessary' '
+ rm -f C/.git/objects/info/alternates &&
+ test_must_fail git -C C fsck
+'
-test_expect_success 'that relative alternate is possible for current dir' \
-'cd C &&
-echo "../../../B/.git/objects" > .git/objects/info/alternates &&
-test_valid_repo'
+test_expect_success 'that relative alternate is possible for current dir' '
+ echo "../../../B/.git/objects" >C/.git/objects/info/alternates &&
+ git fsck
+'
-cd "$base_dir"
+test_expect_success 'that relative alternate is recursive' '
+ git -C D fsck
+'
-test_expect_success \
- 'that relative alternate is only possible for current dir' '
- cd D &&
- ! (test_valid_repo)
+# we can reach "A" from our new repo both directly, and via "C".
+# The deep/subdir is there to make sure we are not doing a stupid
+# pure-text comparison of the alternate names.
+test_expect_success 'relative duplicates are eliminated' '
+ mkdir -p deep/subdir &&
+ git init --bare deep/subdir/duplicate.git &&
+ cat >deep/subdir/duplicate.git/objects/info/alternates <<-\EOF &&
+ ../../../../C/.git/objects
+ ../../../../A/.git/objects
+ EOF
+ cat >expect <<-EOF &&
+ alternate: $(pwd)/C/.git/objects
+ alternate: $(pwd)/B/.git/objects
+ alternate: $(pwd)/A/.git/objects
+ EOF
+ git -C deep/subdir/duplicate.git count-objects -v >actual &&
+ grep ^alternate: actual >actual.alternates &&
+ test_cmp expect actual.alternates
'
-cd "$base_dir"
+test_expect_success CASE_INSENSITIVE_FS 'dup finding can be case-insensitive' '
+ git init --bare insensitive.git &&
+ # the previous entry for "A" will have used uppercase
+ cat >insensitive.git/objects/info/alternates <<-\EOF &&
+ ../../C/.git/objects
+ ../../a/.git/objects
+ EOF
+ cat >expect <<-EOF &&
+ alternate: $(pwd)/C/.git/objects
+ alternate: $(pwd)/B/.git/objects
+ alternate: $(pwd)/A/.git/objects
+ EOF
+ git -C insensitive.git count-objects -v >actual &&
+ grep ^alternate: actual >actual.alternates &&
+ test_cmp expect actual.alternates
+'
test_done
test_must_fail git rev-list --bisect --first-parent HEAD
'
+test_expect_success '--header shows a NUL after each commit' '
+ # We know that there is no Q in the true payload; names and
+ # addresses of the authors and the committers do not have
+ # any, and object names or header names do not, either.
+ git rev-list --header --max-count=2 HEAD |
+ nul_to_q |
+ grep "^Q" >actual &&
+ cat >expect <<-EOF &&
+ Q$(git rev-parse HEAD~1)
+ Q
+ EOF
+ test_cmp expect actual
+'
+
test_done
test_cmp_rev_output start "git rev-parse ${start%?}"
'
+# rev^- tests; we can use a simpler setup for these
+
+test_expect_success 'setup for rev^- tests' '
+ test_commit one &&
+ test_commit two &&
+ test_commit three &&
+
+ # Merge in a branch for testing rev^-
+ git checkout -b branch &&
+ git checkout HEAD^^ &&
+ git merge -m merge --no-edit --no-ff branch &&
+ git checkout -b merge
+'
+
+# The merged branch has 2 commits + the merge
+test_expect_success 'rev-list --count merge^- = merge^..merge' '
+ git rev-list --count merge^..merge >expect &&
+ echo 3 >actual &&
+ test_cmp expect actual
+'
+
+# All rev^- rev-parse tests
+
+test_expect_success 'rev-parse merge^- = merge^..merge' '
+ git rev-parse merge^..merge >expect &&
+ git rev-parse merge^- >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'rev-parse merge^-1 = merge^..merge' '
+ git rev-parse merge^1..merge >expect &&
+ git rev-parse merge^-1 >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'rev-parse merge^-2 = merge^2..merge' '
+ git rev-parse merge^2..merge >expect &&
+ git rev-parse merge^-2 >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'rev-parse merge^-0 (invalid parent)' '
+ test_must_fail git rev-parse merge^-0
+'
+
+test_expect_success 'rev-parse merge^-3 (invalid parent)' '
+ test_must_fail git rev-parse merge^-3
+'
+
+test_expect_success 'rev-parse merge^-^ (garbage after ^-)' '
+ test_must_fail git rev-parse merge^-^
+'
+
+test_expect_success 'rev-parse merge^-1x (garbage after ^-1)' '
+ test_must_fail git rev-parse merge^-1x
+'
+
+# All rev^- rev-list tests (should be mostly the same as rev-parse; the reason
+# for the duplication is that rev-parse and rev-list use different parsers).
+
+test_expect_success 'rev-list merge^- = merge^..merge' '
+ git rev-list merge^..merge >expect &&
+ git rev-list merge^- >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'rev-list merge^-1 = merge^1..merge' '
+ git rev-list merge^1..merge >expect &&
+ git rev-list merge^-1 >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'rev-list merge^-2 = merge^2..merge' '
+ git rev-list merge^2..merge >expect &&
+ git rev-list merge^-2 >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'rev-list merge^-0 (invalid parent)' '
+ test_must_fail git rev-list merge^-0
+'
+
+test_expect_success 'rev-list merge^-3 (invalid parent)' '
+ test_must_fail git rev-list merge^-3
+'
+
+test_expect_success 'rev-list merge^-^ (garbage after ^-)' '
+ test_must_fail git rev-list merge^-^
+'
+
+test_expect_success 'rev-list merge^-1x (garbage after ^-1)' '
+ test_must_fail git rev-list merge^-1x
+'
+
test_done
test_i18ncmp expected actual
'
+## Duplicate the above test and verify --porcelain=v1 arg parsing.
+test_expect_success 'status --porcelain=v1 --branch with detached HEAD' '
+ git reset --hard &&
+ git checkout master^0 &&
+ git status --branch --porcelain=v1 >actual &&
+ cat >expected <<-EOF &&
+ ## HEAD (no branch)
+ ?? .gitconfig
+ ?? actual
+ ?? expect
+ ?? expected
+ ?? mdconflict/
+ EOF
+ test_i18ncmp expected actual
+'
+
+## Verify parser error on invalid --porcelain argument.
+test_expect_success 'status --porcelain=bogus' '
+ test_must_fail git status --porcelain=bogus
+'
+
test_done
--- /dev/null
+#!/bin/sh
+
+test_description='git status --porcelain=v2
+
+This test exercises porcelain V2 output for git status.'
+
+. ./test-lib.sh
+
+test_expect_success setup '
+ test_tick &&
+ git config core.autocrlf false &&
+ echo x >file_x &&
+ echo y >file_y &&
+ echo z >file_z &&
+ mkdir dir1 &&
+ echo a >dir1/file_a &&
+ echo b >dir1/file_b
+'
+
+test_expect_success 'before initial commit, nothing added, only untracked' '
+ cat >expect <<-EOF &&
+ # branch.oid (initial)
+ # branch.head master
+ ? actual
+ ? dir1/
+ ? expect
+ ? file_x
+ ? file_y
+ ? file_z
+ EOF
+
+ git status --porcelain=v2 --branch --untracked-files=normal >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'before initial commit, things added' '
+ git add file_x file_y file_z dir1 &&
+ OID_A=$(git hash-object -t blob -- dir1/file_a) &&
+ OID_B=$(git hash-object -t blob -- dir1/file_b) &&
+ OID_X=$(git hash-object -t blob -- file_x) &&
+ OID_Y=$(git hash-object -t blob -- file_y) &&
+ OID_Z=$(git hash-object -t blob -- file_z) &&
+
+ cat >expect <<-EOF &&
+ # branch.oid (initial)
+ # branch.head master
+ 1 A. N... 000000 100644 100644 $_z40 $OID_A dir1/file_a
+ 1 A. N... 000000 100644 100644 $_z40 $OID_B dir1/file_b
+ 1 A. N... 000000 100644 100644 $_z40 $OID_X file_x
+ 1 A. N... 000000 100644 100644 $_z40 $OID_Y file_y
+ 1 A. N... 000000 100644 100644 $_z40 $OID_Z file_z
+ ? actual
+ ? expect
+ EOF
+
+ git status --porcelain=v2 --branch --untracked-files=all >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'before initial commit, things added (-z)' '
+ lf_to_nul >expect <<-EOF &&
+ # branch.oid (initial)
+ # branch.head master
+ 1 A. N... 000000 100644 100644 $_z40 $OID_A dir1/file_a
+ 1 A. N... 000000 100644 100644 $_z40 $OID_B dir1/file_b
+ 1 A. N... 000000 100644 100644 $_z40 $OID_X file_x
+ 1 A. N... 000000 100644 100644 $_z40 $OID_Y file_y
+ 1 A. N... 000000 100644 100644 $_z40 $OID_Z file_z
+ ? actual
+ ? expect
+ EOF
+
+ git status -z --porcelain=v2 --branch --untracked-files=all >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'make first commit, confirm HEAD oid and branch' '
+ git commit -m initial &&
+ H0=$(git rev-parse HEAD) &&
+ cat >expect <<-EOF &&
+ # branch.oid $H0
+ # branch.head master
+ ? actual
+ ? expect
+ EOF
+
+ git status --porcelain=v2 --branch --untracked-files=all >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'after first commit, create unstaged changes' '
+ echo x >>file_x &&
+ OID_X1=$(git hash-object -t blob -- file_x) &&
+ rm file_z &&
+ H0=$(git rev-parse HEAD) &&
+
+ cat >expect <<-EOF &&
+ # branch.oid $H0
+ # branch.head master
+ 1 .M N... 100644 100644 100644 $OID_X $OID_X file_x
+ 1 .D N... 100644 100644 000000 $OID_Z $OID_Z file_z
+ ? actual
+ ? expect
+ EOF
+
+ git status --porcelain=v2 --branch --untracked-files=all >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'after first commit but omit untracked files and branch' '
+ cat >expect <<-EOF &&
+ 1 .M N... 100644 100644 100644 $OID_X $OID_X file_x
+ 1 .D N... 100644 100644 000000 $OID_Z $OID_Z file_z
+ EOF
+
+ git status --porcelain=v2 --untracked-files=no >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'after first commit, stage existing changes' '
+ git add file_x &&
+ git rm file_z &&
+ H0=$(git rev-parse HEAD) &&
+
+ cat >expect <<-EOF &&
+ # branch.oid $H0
+ # branch.head master
+ 1 M. N... 100644 100644 100644 $OID_X $OID_X1 file_x
+ 1 D. N... 100644 000000 000000 $OID_Z $_z40 file_z
+ ? actual
+ ? expect
+ EOF
+
+ git status --porcelain=v2 --branch --untracked-files=all >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'rename causes 2 path lines' '
+ git mv file_y renamed_y &&
+ H0=$(git rev-parse HEAD) &&
+
+ q_to_tab >expect <<-EOF &&
+ # branch.oid $H0
+ # branch.head master
+ 1 M. N... 100644 100644 100644 $OID_X $OID_X1 file_x
+ 1 D. N... 100644 000000 000000 $OID_Z $_z40 file_z
+ 2 R. N... 100644 100644 100644 $OID_Y $OID_Y R100 renamed_yQfile_y
+ ? actual
+ ? expect
+ EOF
+
+ git status --porcelain=v2 --branch --untracked-files=all >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'rename causes 2 path lines (-z)' '
+ H0=$(git rev-parse HEAD) &&
+
+ ## Lines use NUL path separator and line terminator, so double transform here.
+ q_to_nul <<-EOF | lf_to_nul >expect &&
+ # branch.oid $H0
+ # branch.head master
+ 1 M. N... 100644 100644 100644 $OID_X $OID_X1 file_x
+ 1 D. N... 100644 000000 000000 $OID_Z $_z40 file_z
+ 2 R. N... 100644 100644 100644 $OID_Y $OID_Y R100 renamed_yQfile_y
+ ? actual
+ ? expect
+ EOF
+
+ git status --porcelain=v2 --branch --untracked-files=all -z >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'make second commit, confirm clean and new HEAD oid' '
+ git commit -m second &&
+ H1=$(git rev-parse HEAD) &&
+
+ cat >expect <<-EOF &&
+ # branch.oid $H1
+ # branch.head master
+ ? actual
+ ? expect
+ EOF
+
+ git status --porcelain=v2 --branch --untracked-files=all >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'confirm ignored files are not printed' '
+ test_when_finished "rm -f x.ign .gitignore" &&
+ echo x.ign >.gitignore &&
+ echo "ignore me" >x.ign &&
+
+ cat >expect <<-EOF &&
+ ? .gitignore
+ ? actual
+ ? expect
+ EOF
+
+ git status --porcelain=v2 --untracked-files=all >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'ignored files are printed with --ignored' '
+ test_when_finished "rm -f x.ign .gitignore" &&
+ echo x.ign >.gitignore &&
+ echo "ignore me" >x.ign &&
+
+ cat >expect <<-EOF &&
+ ? .gitignore
+ ? actual
+ ? expect
+ ! x.ign
+ EOF
+
+ git status --porcelain=v2 --ignored --untracked-files=all >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'create and commit permanent ignore file' '
+ cat >.gitignore <<-EOF &&
+ actual*
+ expect*
+ EOF
+
+ git add .gitignore &&
+ git commit -m ignore_trash &&
+ H1=$(git rev-parse HEAD) &&
+
+ cat >expect <<-EOF &&
+ # branch.oid $H1
+ # branch.head master
+ EOF
+
+ git status --porcelain=v2 --branch >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'verify --intent-to-add output' '
+ test_when_finished "git rm -f intent1.add intent2.add" &&
+ touch intent1.add &&
+ echo test >intent2.add &&
+
+ git add --intent-to-add intent1.add intent2.add &&
+
+ cat >expect <<-EOF &&
+ 1 .A N... 000000 000000 100644 $_z40 $_z40 intent1.add
+ 1 .A N... 000000 000000 100644 $_z40 $_z40 intent2.add
+ EOF
+
+ git status --porcelain=v2 >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'verify AA (add-add) conflict' '
+ test_when_finished "git reset --hard" &&
+
+ git branch AA_A master &&
+ git checkout AA_A &&
+ echo "Branch AA_A" >conflict.txt &&
+ OID_AA_A=$(git hash-object -t blob -- conflict.txt) &&
+ git add conflict.txt &&
+ git commit -m "branch aa_a" &&
+
+ git branch AA_B master &&
+ git checkout AA_B &&
+ echo "Branch AA_B" >conflict.txt &&
+ OID_AA_B=$(git hash-object -t blob -- conflict.txt) &&
+ git add conflict.txt &&
+ git commit -m "branch aa_b" &&
+
+ git branch AA_M AA_B &&
+ git checkout AA_M &&
+ test_must_fail git merge AA_A &&
+
+ HM=$(git rev-parse HEAD) &&
+
+ cat >expect <<-EOF &&
+ # branch.oid $HM
+ # branch.head AA_M
+ u AA N... 000000 100644 100644 100644 $_z40 $OID_AA_B $OID_AA_A conflict.txt
+ EOF
+
+ git status --porcelain=v2 --branch --untracked-files=all >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'verify UU (edit-edit) conflict' '
+ test_when_finished "git reset --hard" &&
+
+ git branch UU_ANC master &&
+ git checkout UU_ANC &&
+ echo "Ancestor" >conflict.txt &&
+ OID_UU_ANC=$(git hash-object -t blob -- conflict.txt) &&
+ git add conflict.txt &&
+ git commit -m "UU_ANC" &&
+
+ git branch UU_A UU_ANC &&
+ git checkout UU_A &&
+ echo "Branch UU_A" >conflict.txt &&
+ OID_UU_A=$(git hash-object -t blob -- conflict.txt) &&
+ git add conflict.txt &&
+ git commit -m "branch uu_a" &&
+
+ git branch UU_B UU_ANC &&
+ git checkout UU_B &&
+ echo "Branch UU_B" >conflict.txt &&
+ OID_UU_B=$(git hash-object -t blob -- conflict.txt) &&
+ git add conflict.txt &&
+ git commit -m "branch uu_b" &&
+
+ git branch UU_M UU_B &&
+ git checkout UU_M &&
+ test_must_fail git merge UU_A &&
+
+ HM=$(git rev-parse HEAD) &&
+
+ cat >expect <<-EOF &&
+ # branch.oid $HM
+ # branch.head UU_M
+ u UU N... 100644 100644 100644 100644 $OID_UU_ANC $OID_UU_B $OID_UU_A conflict.txt
+ EOF
+
+ git status --porcelain=v2 --branch --untracked-files=all >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'verify upstream fields in branch header' '
+ git checkout master &&
+ test_when_finished "rm -rf sub_repo" &&
+ git clone . sub_repo &&
+ (
+ ## Confirm local master tracks remote master.
+ cd sub_repo &&
+ HUF=$(git rev-parse HEAD) &&
+
+ cat >expect <<-EOF &&
+ # branch.oid $HUF
+ # branch.head master
+ # branch.upstream origin/master
+ # branch.ab +0 -0
+ EOF
+
+ git status --porcelain=v2 --branch --untracked-files=all >actual &&
+ test_cmp expect actual &&
+
+ ## Test ahead/behind.
+ echo xyz >file_xyz &&
+ git add file_xyz &&
+ git commit -m xyz &&
+
+ HUF=$(git rev-parse HEAD) &&
+
+ cat >expect <<-EOF &&
+ # branch.oid $HUF
+ # branch.head master
+ # branch.upstream origin/master
+ # branch.ab +1 -0
+ EOF
+
+ git status --porcelain=v2 --branch --untracked-files=all >actual &&
+ test_cmp expect actual &&
+
+ ## Repeat the above but without --branch.
+ cat >expect <<-EOF &&
+ EOF
+
+ git status --porcelain=v2 --untracked-files=all >actual &&
+ test_cmp expect actual &&
+
+ ## Test upstream-gone case. Fake this by pointing origin/master at
+ ## a non-existing commit.
+ OLD=$(git rev-parse origin/master) &&
+ NEW=$_z40 &&
+ mv .git/packed-refs .git/old-packed-refs &&
+ sed "s/$OLD/$NEW/g" <.git/old-packed-refs >.git/packed-refs &&
+
+ HUF=$(git rev-parse HEAD) &&
+
+ cat >expect <<-EOF &&
+ # branch.oid $HUF
+ # branch.head master
+ # branch.upstream origin/master
+ EOF
+
+ git status --porcelain=v2 --branch --untracked-files=all >actual &&
+ test_cmp expect actual
+ )
+'
+
+test_expect_success 'create and add submodule, submodule appears clean (A. S...)' '
+ git checkout master &&
+ git clone . sub_repo &&
+ git clone . super_repo &&
+ ( cd super_repo &&
+ git submodule add ../sub_repo sub1 &&
+
+ ## Confirm stage/add of clean submodule.
+ HMOD=$(git hash-object -t blob -- .gitmodules) &&
+ HSUP=$(git rev-parse HEAD) &&
+ HSUB=$HSUP &&
+
+ cat >expect <<-EOF &&
+ # branch.oid $HSUP
+ # branch.head master
+ # branch.upstream origin/master
+ # branch.ab +0 -0
+ 1 A. N... 000000 100644 100644 $_z40 $HMOD .gitmodules
+ 1 A. S... 000000 160000 160000 $_z40 $HSUB sub1
+ EOF
+
+ git status --porcelain=v2 --branch --untracked-files=all >actual &&
+ test_cmp expect actual
+ )
+'
+
+test_expect_success 'untracked changes in added submodule (AM S..U)' '
+ ( cd super_repo &&
+ ## create untracked file in the submodule.
+ ( cd sub1 &&
+ echo "xxxx" >file_in_sub
+ ) &&
+
+ HMOD=$(git hash-object -t blob -- .gitmodules) &&
+ HSUP=$(git rev-parse HEAD) &&
+ HSUB=$HSUP &&
+
+ cat >expect <<-EOF &&
+ # branch.oid $HSUP
+ # branch.head master
+ # branch.upstream origin/master
+ # branch.ab +0 -0
+ 1 A. N... 000000 100644 100644 $_z40 $HMOD .gitmodules
+ 1 AM S..U 000000 160000 160000 $_z40 $HSUB sub1
+ EOF
+
+ git status --porcelain=v2 --branch --untracked-files=all >actual &&
+ test_cmp expect actual
+ )
+'
+
+test_expect_success 'staged changes in added submodule (AM S.M.)' '
+ ( cd super_repo &&
+ ## stage the changes in the submodule.
+ ( cd sub1 &&
+ git add file_in_sub
+ ) &&
+
+ HMOD=$(git hash-object -t blob -- .gitmodules) &&
+ HSUP=$(git rev-parse HEAD) &&
+ HSUB=$HSUP &&
+
+ cat >expect <<-EOF &&
+ # branch.oid $HSUP
+ # branch.head master
+ # branch.upstream origin/master
+ # branch.ab +0 -0
+ 1 A. N... 000000 100644 100644 $_z40 $HMOD .gitmodules
+ 1 AM S.M. 000000 160000 160000 $_z40 $HSUB sub1
+ EOF
+
+ git status --porcelain=v2 --branch --untracked-files=all >actual &&
+ test_cmp expect actual
+ )
+'
+
+test_expect_success 'staged and unstaged changes in added submodule (AM S.M.)' '
+ ( cd super_repo &&
+ ( cd sub1 &&
+ ## make additional unstaged changes (on the same file) in the submodule.
+ ## This does not cause us to get S.MU (because the submodule does not report
+ ## a "?" line for the unstaged changes).
+ echo "more changes" >>file_in_sub
+ ) &&
+
+ HMOD=$(git hash-object -t blob -- .gitmodules) &&
+ HSUP=$(git rev-parse HEAD) &&
+ HSUB=$HSUP &&
+
+ cat >expect <<-EOF &&
+ # branch.oid $HSUP
+ # branch.head master
+ # branch.upstream origin/master
+ # branch.ab +0 -0
+ 1 A. N... 000000 100644 100644 $_z40 $HMOD .gitmodules
+ 1 AM S.M. 000000 160000 160000 $_z40 $HSUB sub1
+ EOF
+
+ git status --porcelain=v2 --branch --untracked-files=all >actual &&
+ test_cmp expect actual
+ )
+'
+
+test_expect_success 'staged and untracked changes in added submodule (AM S.MU)' '
+ ( cd super_repo &&
+ ( cd sub1 &&
+ ## stage new changes in tracked file.
+ git add file_in_sub &&
+ ## create new untracked file.
+ echo "yyyy" >>another_file_in_sub
+ ) &&
+
+ HMOD=$(git hash-object -t blob -- .gitmodules) &&
+ HSUP=$(git rev-parse HEAD) &&
+ HSUB=$HSUP &&
+
+ cat >expect <<-EOF &&
+ # branch.oid $HSUP
+ # branch.head master
+ # branch.upstream origin/master
+ # branch.ab +0 -0
+ 1 A. N... 000000 100644 100644 $_z40 $HMOD .gitmodules
+ 1 AM S.MU 000000 160000 160000 $_z40 $HSUB sub1
+ EOF
+
+ git status --porcelain=v2 --branch --untracked-files=all >actual &&
+ test_cmp expect actual
+ )
+'
+
+test_expect_success 'commit within the submodule appears as new commit in super (AM SC..)' '
+ ( cd super_repo &&
+ ( cd sub1 &&
+ ## Make a new commit in the submodule.
+ git add file_in_sub &&
+ rm -f another_file_in_sub &&
+ git commit -m "new commit"
+ ) &&
+
+ HMOD=$(git hash-object -t blob -- .gitmodules) &&
+ HSUP=$(git rev-parse HEAD) &&
+ HSUB=$HSUP &&
+
+ cat >expect <<-EOF &&
+ # branch.oid $HSUP
+ # branch.head master
+ # branch.upstream origin/master
+ # branch.ab +0 -0
+ 1 A. N... 000000 100644 100644 $_z40 $HMOD .gitmodules
+ 1 AM SC.. 000000 160000 160000 $_z40 $HSUB sub1
+ EOF
+
+ git status --porcelain=v2 --branch --untracked-files=all >actual &&
+ test_cmp expect actual
+ )
+'
+
+test_expect_success 'stage submodule in super and commit' '
+ ( cd super_repo &&
+ ## Stage the new submodule commit in the super.
+ git add sub1 &&
+ ## Commit the super so that the sub no longer appears as added.
+ git commit -m "super commit" &&
+
+ HSUP=$(git rev-parse HEAD) &&
+
+ cat >expect <<-EOF &&
+ # branch.oid $HSUP
+ # branch.head master
+ # branch.upstream origin/master
+ # branch.ab +1 -0
+ EOF
+
+ git status --porcelain=v2 --branch --untracked-files=all >actual &&
+ test_cmp expect actual
+ )
+'
+
+test_expect_success 'make unstaged changes in existing submodule (.M S.M.)' '
+ ( cd super_repo &&
+ ( cd sub1 &&
+ echo "zzzz" >>file_in_sub
+ ) &&
+
+ HSUP=$(git rev-parse HEAD) &&
+ HSUB=$(cd sub1 && git rev-parse HEAD) &&
+
+ cat >expect <<-EOF &&
+ # branch.oid $HSUP
+ # branch.head master
+ # branch.upstream origin/master
+ # branch.ab +1 -0
+ 1 .M S.M. 160000 160000 160000 $HSUB $HSUB sub1
+ EOF
+
+ git status --porcelain=v2 --branch --untracked-files=all >actual &&
+ test_cmp expect actual
+ )
+'
+
+test_done
base_dir=$(pwd)
-U=$base_dir/UPLOAD_LOG
-
-test_expect_success 'preparing first repository' \
-'test_create_repo A && cd A &&
-echo first > file1 &&
-git add file1 &&
-git commit -m A-initial'
-
-cd "$base_dir"
-
-test_expect_success 'preparing second repository' \
-'git clone A B && cd B &&
-echo second > file2 &&
-git add file2 &&
-git commit -m B-addition &&
-git repack -a -d &&
-git prune'
-
-cd "$base_dir"
-
-test_expect_success 'preparing superproject' \
-'test_create_repo super && cd super &&
-echo file > file &&
-git add file &&
-git commit -m B-super-initial'
-
-cd "$base_dir"
-
-test_expect_success 'submodule add --reference' \
-'cd super && git submodule add --reference ../B "file://$base_dir/A" sub &&
-git commit -m B-super-added'
-
-cd "$base_dir"
-
-test_expect_success 'after add: existence of info/alternates' \
-'test_line_count = 1 super/.git/modules/sub/objects/info/alternates'
-
-cd "$base_dir"
-
-test_expect_success 'that reference gets used with add' \
-'cd super/sub &&
-echo "0 objects, 0 kilobytes" > expected &&
-git count-objects > current &&
-diff expected current'
-
-cd "$base_dir"
-
-test_expect_success 'cloning superproject' \
-'git clone super super-clone'
-
-cd "$base_dir"
-
-test_expect_success 'update with reference' \
-'cd super-clone && git submodule update --init --reference ../B'
-
-cd "$base_dir"
-
-test_expect_success 'after update: existence of info/alternates' \
-'test_line_count = 1 super-clone/.git/modules/sub/objects/info/alternates'
-
-cd "$base_dir"
-
-test_expect_success 'that reference gets used with update' \
-'cd super-clone/sub &&
-echo "0 objects, 0 kilobytes" > expected &&
-git count-objects > current &&
-diff expected current'
-
-cd "$base_dir"
+test_alternate_is_used () {
+ alternates_file="$1" &&
+ working_dir="$2" &&
+ test_line_count = 1 "$alternates_file" &&
+ echo "0 objects, 0 kilobytes" >expect &&
+ git -C "$working_dir" count-objects >actual &&
+ test_cmp expect actual
+}
+
+test_expect_success 'preparing first repository' '
+ test_create_repo A &&
+ (
+ cd A &&
+ echo first >file1 &&
+ git add file1 &&
+ git commit -m A-initial
+ )
+'
+
+test_expect_success 'preparing second repository' '
+ git clone A B &&
+ (
+ cd B &&
+ echo second >file2 &&
+ git add file2 &&
+ git commit -m B-addition &&
+ git repack -a -d &&
+ git prune
+ )
+'
+
+test_expect_success 'preparing superproject' '
+ test_create_repo super &&
+ (
+ cd super &&
+ echo file >file &&
+ git add file &&
+ git commit -m B-super-initial
+ )
+'
+
+test_expect_success 'submodule add --reference uses alternates' '
+ (
+ cd super &&
+ git submodule add --reference ../B "file://$base_dir/A" sub &&
+ git commit -m B-super-added &&
+ git repack -ad
+ ) &&
+ test_alternate_is_used super/.git/modules/sub/objects/info/alternates super/sub
+'
+
+test_expect_success 'that reference gets used with add' '
+ (
+ cd super/sub &&
+ echo "0 objects, 0 kilobytes" >expected &&
+ git count-objects >current &&
+ diff expected current
+ )
+'
+
+# The tests up to this point, and the repositories created by them
+# (A, B, super and super/sub), set the stage for the subsequent
+# tests and are meant to be kept throughout the remainder of the
+# test script.
+# Tests from here on, if they create their own test repository,
+# are expected to clean up after themselves.
+
+test_expect_success 'updating superproject keeps alternates' '
+ test_when_finished "rm -rf super-clone" &&
+ git clone super super-clone &&
+ git -C super-clone submodule update --init --reference ../B &&
+ test_alternate_is_used super-clone/.git/modules/sub/objects/info/alternates super-clone/sub
+'
+
+test_expect_success 'submodules use alternates when cloning a superproject' '
+ test_when_finished "rm -rf super-clone" &&
+ git clone --reference super --recursive super super-clone &&
+ (
+ cd super-clone &&
+ # test superproject has alternates setup correctly
+ test_alternate_is_used .git/objects/info/alternates . &&
+ # test submodule has correct setup
+ test_alternate_is_used .git/modules/sub/objects/info/alternates sub
+ )
+'
+
+test_expect_success 'missing submodule alternate fails clone and submodule update' '
+ test_when_finished "rm -rf super-clone" &&
+ git clone super super2 &&
+ test_must_fail git clone --recursive --reference super2 super2 super-clone &&
+ (
+ cd super-clone &&
+ # test superproject has alternates setup correctly
+ test_alternate_is_used .git/objects/info/alternates . &&
+ # update of the submodule fails
+ test_must_fail git submodule update --init &&
+ # and we have no alternates:
+ test_must_fail test_alternate_is_used .git/modules/sub/objects/info/alternates sub &&
+ test_must_fail test_path_is_file sub/file1
+ )
+'
+
+test_expect_success 'ignoring missing submodule alternates passes clone and submodule update' '
+ test_when_finished "rm -rf super-clone" &&
+ git clone --reference-if-able super2 --recursive super2 super-clone &&
+ (
+ cd super-clone &&
+ # test superproject has alternates setup correctly
+ test_alternate_is_used .git/objects/info/alternates . &&
+ # update of the submodule succeeds
+ git submodule update --init &&
+ # and we have no alternates:
+ test_must_fail test_alternate_is_used .git/modules/sub/objects/info/alternates sub &&
+ test_path_is_file sub/file1
+ )
+'
test_done
test_description='signed commit tests'
. ./test-lib.sh
+GNUPGHOME_NOT_USED=$GNUPGHOME
. "$TEST_DIRECTORY/lib-gpg.sh"
test_expect_success GPG 'create signed commits' '
test_cmp expect actual
'
-test_expect_success GPG 'show unknown signature with custom format' '
+test_expect_success GPG 'show untrusted signature with custom format' '
cat >expect <<-\EOF &&
U
61092E85B7227189
test_cmp expect actual
'
+test_expect_success GPG 'show unknown signature with custom format' '
+ cat >expect <<-\EOF &&
+ E
+ 61092E85B7227189
+
+ EOF
+ GNUPGHOME="$GNUPGHOME_NOT_USED" git log -1 --format="%G?%n%GK%n%GS" eighth-signed-alt >actual &&
+ test_cmp expect actual
+'
+
test_expect_success GPG 'show lack of signature with custom format' '
cat >expect <<-\EOF &&
N
test_cmp expected actual
'
+test_expect_success 'with non-trailer lines mixed with Signed-off-by' '
+ cat >patch <<-\EOF &&
+
+ this is not a trailer
+ this is not a trailer
+ Signed-off-by: a <a@example.com>
+ this is not a trailer
+ EOF
+ cat >expected <<-\EOF &&
+
+ this is not a trailer
+ this is not a trailer
+ Signed-off-by: a <a@example.com>
+ this is not a trailer
+ token: value
+ EOF
+ git interpret-trailers --trailer "token: value" patch >actual &&
+ test_cmp expected actual
+'
+
+test_expect_success 'with non-trailer lines mixed with cherry picked from' '
+ cat >patch <<-\EOF &&
+
+ this is not a trailer
+ this is not a trailer
+ (cherry picked from commit x)
+ this is not a trailer
+ EOF
+ cat >expected <<-\EOF &&
+
+ this is not a trailer
+ this is not a trailer
+ (cherry picked from commit x)
+ this is not a trailer
+ token: value
+ EOF
+ git interpret-trailers --trailer "token: value" patch >actual &&
+ test_cmp expected actual
+'
+
+test_expect_success 'with non-trailer lines mixed with a configured trailer' '
+ cat >patch <<-\EOF &&
+
+ this is not a trailer
+ this is not a trailer
+ My-trailer: x
+ this is not a trailer
+ EOF
+ cat >expected <<-\EOF &&
+
+ this is not a trailer
+ this is not a trailer
+ My-trailer: x
+ this is not a trailer
+ token: value
+ EOF
+ test_config trailer.my.key "My-trailer: " &&
+ git interpret-trailers --trailer "token: value" patch >actual &&
+ test_cmp expected actual
+'
+
+test_expect_success 'with non-trailer lines mixed with a non-configured trailer' '
+ cat >patch <<-\EOF &&
+
+ this is not a trailer
+ this is not a trailer
+ I-am-not-configured: x
+ this is not a trailer
+ EOF
+ cat >expected <<-\EOF &&
+
+ this is not a trailer
+ this is not a trailer
+ I-am-not-configured: x
+ this is not a trailer
+
+ token: value
+ EOF
+ test_config trailer.my.key "My-trailer: " &&
+ git interpret-trailers --trailer "token: value" patch >actual &&
+ test_cmp expected actual
+'
+
+test_expect_success 'with all non-configured trailers' '
+ cat >patch <<-\EOF &&
+
+ I-am-not-configured: x
+ I-am-also-not-configured: x
+ EOF
+ cat >expected <<-\EOF &&
+
+ I-am-not-configured: x
+ I-am-also-not-configured: x
+ token: value
+ EOF
+ test_config trailer.my.key "My-trailer: " &&
+ git interpret-trailers --trailer "token: value" patch >actual &&
+ test_cmp expected actual
+'
+
+test_expect_success 'with non-trailer lines only' '
+ cat >patch <<-\EOF &&
+
+ this is not a trailer
+ EOF
+ cat >expected <<-\EOF &&
+
+ this is not a trailer
+
+ token: value
+ EOF
+ git interpret-trailers --trailer "token: value" patch >actual &&
+ test_cmp expected actual
+'
+
+test_expect_success 'line with leading whitespace is not trailer' '
+ q_to_tab >patch <<-\EOF &&
+
+ Qtoken: value
+ EOF
+ q_to_tab >expected <<-\EOF &&
+
+ Qtoken: value
+
+ token: value
+ EOF
+ git interpret-trailers --trailer "token: value" patch >actual &&
+ test_cmp expected actual
+'
+
+test_expect_success 'multiline field treated as one trailer for 25% check' '
+ q_to_tab >patch <<-\EOF &&
+
+ Signed-off-by: a <a@example.com>
+ name: value on
+ Qmultiple lines
+ this is not a trailer
+ this is not a trailer
+ this is not a trailer
+ this is not a trailer
+ this is not a trailer
+ this is not a trailer
+ EOF
+ q_to_tab >expected <<-\EOF &&
+
+ Signed-off-by: a <a@example.com>
+ name: value on
+ Qmultiple lines
+ this is not a trailer
+ this is not a trailer
+ this is not a trailer
+ this is not a trailer
+ this is not a trailer
+ this is not a trailer
+ name: value
+ EOF
+ git interpret-trailers --trailer "name: value" patch >actual &&
+ test_cmp expected actual
+'
+
+test_expect_success 'multiline field treated as atomic for placement' '
+ q_to_tab >patch <<-\EOF &&
+
+ another: trailer
+ name: value on
+ Qmultiple lines
+ another: trailer
+ EOF
+ q_to_tab >expected <<-\EOF &&
+
+ another: trailer
+ name: value on
+ Qmultiple lines
+ name: value
+ another: trailer
+ EOF
+ test_config trailer.name.where after &&
+ git interpret-trailers --trailer "name: value" patch >actual &&
+ test_cmp expected actual
+'
+
+test_expect_success 'multiline field treated as atomic for replacement' '
+ q_to_tab >patch <<-\EOF &&
+
+ another: trailer
+ name: value on
+ Qmultiple lines
+ another: trailer
+ EOF
+ q_to_tab >expected <<-\EOF &&
+
+ another: trailer
+ another: trailer
+ name: value
+ EOF
+ test_config trailer.name.ifexists replace &&
+ git interpret-trailers --trailer "name: value" patch >actual &&
+ test_cmp expected actual
+'
+
+test_expect_success 'multiline field treated as atomic for difference check' '
+ q_to_tab >patch <<-\EOF &&
+
+ another: trailer
+ name: first line
+ Qsecond line
+ another: trailer
+ EOF
+ test_config trailer.name.ifexists addIfDifferent &&
+
+ q_to_tab >trailer <<-\EOF &&
+ name: first line
+ Qsecond line
+ EOF
+ q_to_tab >expected <<-\EOF &&
+
+ another: trailer
+ name: first line
+ Qsecond line
+ another: trailer
+ EOF
+ git interpret-trailers --trailer "$(cat trailer)" patch >actual &&
+ test_cmp expected actual &&
+
+ q_to_tab >trailer <<-\EOF &&
+ name: first line
+ QQQQQsecond line
+ EOF
+ q_to_tab >expected <<-\EOF &&
+
+ another: trailer
+ name: first line
+ Qsecond line
+ another: trailer
+ name: first line
+ QQQQQsecond line
+ EOF
+ git interpret-trailers --trailer "$(cat trailer)" patch >actual &&
+ test_cmp expected actual &&
+
+ q_to_tab >trailer <<-\EOF &&
+ name: first line *DIFFERENT*
+ Qsecond line
+ EOF
+ q_to_tab >expected <<-\EOF &&
+
+ another: trailer
+ name: first line
+ Qsecond line
+ another: trailer
+ name: first line *DIFFERENT*
+ Qsecond line
+ EOF
+ git interpret-trailers --trailer "$(cat trailer)" patch >actual &&
+ test_cmp expected actual
+'
+
+test_expect_success 'multiline field treated as atomic for neighbor check' '
+ q_to_tab >patch <<-\EOF &&
+
+ another: trailer
+ name: first line
+ Qsecond line
+ another: trailer
+ EOF
+ test_config trailer.name.where after &&
+ test_config trailer.name.ifexists addIfDifferentNeighbor &&
+
+ q_to_tab >trailer <<-\EOF &&
+ name: first line
+ Qsecond line
+ EOF
+ q_to_tab >expected <<-\EOF &&
+
+ another: trailer
+ name: first line
+ Qsecond line
+ another: trailer
+ EOF
+ git interpret-trailers --trailer "$(cat trailer)" patch >actual &&
+ test_cmp expected actual &&
+
+ q_to_tab >trailer <<-\EOF &&
+ name: first line
+ QQQQQsecond line
+ EOF
+ q_to_tab >expected <<-\EOF &&
+
+ another: trailer
+ name: first line
+ Qsecond line
+ name: first line
+ QQQQQsecond line
+ another: trailer
+ EOF
+ git interpret-trailers --trailer "$(cat trailer)" patch >actual &&
+ test_cmp expected actual
+'
+
test_expect_success 'with config setup' '
git config trailer.ack.key "Acked-by: " &&
cat >expected <<-\EOF &&
git reset --hard master >/dev/null 2>&1
'
+test_expect_success 'diff.orderFile configuration is honored' '
+ test_config diff.orderFile order-file &&
+ test_config mergetool.myecho.cmd "echo \"\$LOCAL\"" &&
+ test_config mergetool.myecho.trustExitCode true &&
+ echo b >order-file &&
+ echo a >>order-file &&
+ git checkout -b order-file-start master &&
+ echo start >a &&
+ echo start >b &&
+ git add a b &&
+ git commit -m start &&
+ git checkout -b order-file-side1 order-file-start &&
+ echo side1 >a &&
+ echo side1 >b &&
+ git add a b &&
+ git commit -m side1 &&
+ git checkout -b order-file-side2 order-file-start &&
+ echo side2 >a &&
+ echo side2 >b &&
+ git add a b &&
+ git commit -m side2 &&
+ test_must_fail git merge order-file-side1 &&
+ cat >expect <<-\EOF &&
+ Merging:
+ b
+ a
+ EOF
+ git mergetool --no-prompt --tool myecho >output &&
+ git grep --no-index -h -A2 Merging: output >actual &&
+ test_cmp expect actual &&
+ git reset --hard >/dev/null
+'
+test_expect_success 'mergetool -Oorder-file is honored' '
+ test_config diff.orderFile order-file &&
+ test_config mergetool.myecho.cmd "echo \"\$LOCAL\"" &&
+ test_config mergetool.myecho.trustExitCode true &&
+ test_must_fail git merge order-file-side1 &&
+ cat >expect <<-\EOF &&
+ Merging:
+ a
+ b
+ EOF
+ git mergetool -O/dev/null --no-prompt --tool myecho >output &&
+ git grep --no-index -h -A2 Merging: output >actual &&
+ test_cmp expect actual &&
+ git reset --hard >/dev/null 2>&1 &&
+
+ git config --unset diff.orderFile &&
+ test_must_fail git merge order-file-side1 &&
+ cat >expect <<-\EOF &&
+ Merging:
+ b
+ a
+ EOF
+ git mergetool -Oorder-file --no-prompt --tool myecho >output &&
+ git grep --no-index -h -A2 Merging: output >actual &&
+ test_cmp expect actual &&
+ git reset --hard >/dev/null 2>&1
+'
+
test_done
test_expect_success 'blame -L with invalid start' '
test_must_fail git blame -L5 tres 2>errors &&
- grep "has only 2 lines" errors
+ test_i18ngrep "has only 2 lines" errors
'
test_expect_success 'blame -L with invalid end' '
test_must_fail git blame -L1,5 tres 2>errors &&
- grep "has only 2 lines" errors
+ test_i18ngrep "has only 2 lines" errors
'
test_expect_success 'blame parses <end> part of -L' '
--- /dev/null
+#!/bin/sh
+
+test_description='git cat-file filters support'
+. ./test-lib.sh
+
+test_expect_success 'setup' '
+ echo "*.txt eol=crlf diff=txt" >.gitattributes &&
+ echo "hello" | append_cr >world.txt &&
+ git add .gitattributes world.txt &&
+ test_tick &&
+ git commit -m "Initial commit"
+'
+
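+# has_cr <file> succeeds if the file contains at least one carriage return.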
+has_cr () {
+ tr '\015' Q <"$1" | grep Q >/dev/null
+}
+
+test_expect_success 'no filters with `git show`' '
+ git show HEAD:world.txt >actual &&
+ ! has_cr actual
+
+'
+test_expect_success 'no filters with cat-file' '
+ git cat-file blob HEAD:world.txt >actual &&
+ ! has_cr actual
+'
+
+test_expect_success 'cat-file --filters converts to worktree version' '
+ git cat-file --filters HEAD:world.txt >actual &&
+ has_cr actual
+'
+
+test_expect_success 'cat-file --filters --path=<path> works' '
+ sha1=$(git rev-parse -q --verify HEAD:world.txt) &&
+ git cat-file --filters --path=world.txt $sha1 >actual &&
+ has_cr actual
+'
+
+test_expect_success 'cat-file --textconv --path=<path> works' '
+ sha1=$(git rev-parse -q --verify HEAD:world.txt) &&
+ test_config diff.txt.textconv "tr A-Za-z N-ZA-Mn-za-m <" &&
+ git cat-file --textconv --path=hello.txt $sha1 >rot13 &&
+ test uryyb = "$(remove_cr <rot13)"
+'
+
+test_expect_success '--path=<path> complains without --textconv/--filters' '
+ sha1=$(git rev-parse -q --verify HEAD:world.txt) &&
+ test_must_fail git cat-file --path=hello.txt blob $sha1 >actual 2>err &&
+ test ! -s actual &&
+ grep "path.*needs.*filters" err
+'
+
+test_expect_success 'cat-file --textconv --batch works' '
+ sha1=$(git rev-parse -q --verify HEAD:world.txt) &&
+ test_config diff.txt.textconv "tr A-Za-z N-ZA-Mn-za-m <" &&
+ printf "%s hello.txt\n%s hello\n" $sha1 $sha1 |
+ git cat-file --textconv --batch >actual &&
+ printf "%s blob 6\nuryyb\r\n\n%s blob 6\nhello\n\n" \
+ $sha1 $sha1 >expect &&
+ test_cmp expect actual
+'
+
+test_done
q["Jane\" Doe" <jdoe@example.com>],
q[Doe, jane <jdoe@example.com>],
q["Jane Doe <jdoe@example.com>],
- q['Jane 'Doe' <jdoe@example.com>]);
+ q['Jane 'Doe' <jdoe@example.com>],
+ q[Jane@:;\.,()<>Doe <jdoe@example.com>],
+ q[Jane <jdoe@example.com> Doe],
+ q[<jdoe@example.com> Jane Doe]);
my @known_failure_list = (q[Jane\ Doe <jdoe@example.com>],
q["Doe, Ja"ne <jdoe@example.com>],
q["Doe, Katarina" Jane <jdoe@example.com>],
- q[Jane@:;\.,()<>Doe <jdoe@example.com>],
q[Jane jdoe@example.com],
- q[<jdoe@example.com> Jane Doe],
- q[Jane <jdoe@example.com> Doe],
q["Jane "Kat"a" ri"na" ",Doe" <jdoe@example.com>],
q[Jane Doe],
q[Jane "Doe <jdoe@example.com>"],
test_cmp expected commandline1
'
+test_expect_success $PREREQ 'setup expect for cc trailer' "
+cat >expected-cc <<\EOF
+!recipient@example.com!
+!author@example.com!
+!one@example.com!
+!two@example.com!
+!three@example.com!
+!four@example.com!
+!five@example.com!
+EOF
+"
+
+test_expect_success $PREREQ 'cc trailer with various syntax' '
+ test_commit cc-trailer &&
+ test_when_finished "git reset --hard HEAD^" &&
+ git commit --amend -F - <<-EOF &&
+ Test Cc: trailers.
+
+ Cc: one@example.com
+ Cc: <two@example.com> # this is part of the name
+ Cc: <three@example.com>, <four@example.com> # not.five@example.com
+ Cc: "Some # Body" <five@example.com> [part.of.name.too]
+ EOF
+ clean_fake_sendmail &&
+ git send-email -1 --to=recipient@example.com \
+ --smtp-server="$(pwd)/fake.sendmail" &&
+ test_cmp expected-cc commandline1
+'
+
test_expect_success $PREREQ 'setup expect' "
cat >expected-show-all-headers <<\EOF
0001-Second.patch
git commit -m "Add test.sh" &&
gitweb_run "p=.git;a=blob;f=test.sh"'
+test_expect_success HIGHLIGHT \
+ 'syntax highlighting (highlighter language autodetection)' \
+ 'git config gitweb.highlight yes &&
+ echo "#!/usr/bin/perl" > test &&
+ git add test &&
+ git commit -m "Add test" &&
+ gitweb_run "p=.git;a=blob;f=test"'
+
# ----------------------------------------------------------------------
# forks of projects
'
}
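+# Convert each LF to a NUL byte, for building NUL-delimited expected output.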
+lf_to_nul () {
+ perl -pe 'y/\012/\000/'
+}
+
nul_to_q () {
perl -pe 'y/\000/Q/'
}
test "$uid" != 0
'
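+# Satisfied when the jgit command-line tool is available in PATH.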
+test_lazy_prereq JGIT '
+ type jgit
+'
+
# SANITY is about "can you correctly predict what the filesystem would
# do by only looking at the permission bits of the files and
# directories?" A typical example of !SANITY is running the test
--- /dev/null
+#include "cache.h"
+#include "tmp-objdir.h"
+#include "dir.h"
+#include "sigchain.h"
+#include "string-list.h"
+#include "strbuf.h"
+#include "argv-array.h"
+
+struct tmp_objdir {
+ struct strbuf path;
+ struct argv_array env;
+};
+
+/*
+ * Allow only one tmp_objdir at a time in a running process, which simplifies
+ * our signal/atexit cleanup routines. It's doubtful callers will ever need
+ * more than one, and we can expand later if so. You can have many such
+ * tmp_objdirs simultaneously in many processes, of course.
+ */
+static struct tmp_objdir *the_tmp_objdir;
+
+static void tmp_objdir_free(struct tmp_objdir *t)
+{
+ strbuf_release(&t->path);
+ argv_array_clear(&t->env);
+ free(t);
+}
+
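+/*
+ * Shared worker for tmp_objdir_destroy() and the signal handler;
+ * when "on_signal" is set we avoid freeing memory (see below).
+ */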
+static int tmp_objdir_destroy_1(struct tmp_objdir *t, int on_signal)
+{
+ int err;
+
+ if (!t)
+ return 0;
+
+ if (t == the_tmp_objdir)
+ the_tmp_objdir = NULL;
+
+ /*
+ * This may use malloc via strbuf_grow(), but we should
+ * have pre-grown t->path sufficiently so that this
+ * doesn't happen in practice.
+ */
+ err = remove_dir_recursively(&t->path, 0);
+
+ /*
+ * When we are cleaning up due to a signal, we won't bother
+ * freeing memory; it may cause a deadlock if the signal
+ * arrived while libc's allocator lock is held.
+ */
+ if (!on_signal)
+ tmp_objdir_free(t);
+ return err;
+}
+
+int tmp_objdir_destroy(struct tmp_objdir *t)
+{
+ return tmp_objdir_destroy_1(t, 0);
+}
+
+static void remove_tmp_objdir(void)
+{
+ tmp_objdir_destroy(the_tmp_objdir);
+}
+
+static void remove_tmp_objdir_on_signal(int signo)
+{
+ tmp_objdir_destroy_1(the_tmp_objdir, 1);
+ sigchain_pop(signo);
+ raise(signo);
+}
+
+/*
+ * These env_* functions are for setting up the child environment; the
+ * "replace" variant overrides the value of any existing variable with that
+ * "key". The "append" variant puts our new value at the end of a list,
+ * separated by PATH_SEP (which is what separates values in
+ * GIT_ALTERNATE_OBJECT_DIRECTORIES).
+ */
+static void env_append(struct argv_array *env, const char *key, const char *val)
+{
+ const char *old = getenv(key);
+
+ if (!old)
+ argv_array_pushf(env, "%s=%s", key, val);
+ else
+ argv_array_pushf(env, "%s=%s%c%s", key, old, PATH_SEP, val);
+}
+
+static void env_replace(struct argv_array *env, const char *key, const char *val)
+{
+ argv_array_pushf(env, "%s=%s", key, val);
+}
+
+static int setup_tmp_objdir(const char *root)
+{
+ char *path;
+ int ret = 0;
+
+ path = xstrfmt("%s/pack", root);
+ ret = mkdir(path, 0777);
+ free(path);
+
+ return ret;
+}
+
+struct tmp_objdir *tmp_objdir_create(void)
+{
+ static int installed_handlers;
+ struct tmp_objdir *t;
+
+ if (the_tmp_objdir)
+ die("BUG: only one tmp_objdir can be used at a time");
+
+ t = xmalloc(sizeof(*t));
+ strbuf_init(&t->path, 0);
+ argv_array_init(&t->env);
+
+ strbuf_addf(&t->path, "%s/incoming-XXXXXX", get_object_directory());
+
+ /*
+ * Grow the strbuf beyond any filename we expect to be placed in it.
+ * If tmp_objdir_destroy() is called by a signal handler, then
+ * we should be able to use the strbuf to remove files without
+ * having to call malloc.
+ */
+ strbuf_grow(&t->path, 1024);
+
+ if (!mkdtemp(t->path.buf)) {
+ /* free, not destroy, as we never touched the filesystem */
+ tmp_objdir_free(t);
+ return NULL;
+ }
+
+ the_tmp_objdir = t;
+ if (!installed_handlers) {
+ atexit(remove_tmp_objdir);
+ sigchain_push_common(remove_tmp_objdir_on_signal);
+ installed_handlers++;
+ }
+
+ if (setup_tmp_objdir(t->path.buf)) {
+ tmp_objdir_destroy(t);
+ return NULL;
+ }
+
+ env_append(&t->env, ALTERNATE_DB_ENVIRONMENT,
+ absolute_path(get_object_directory()));
+ env_replace(&t->env, DB_ENVIRONMENT, absolute_path(t->path.buf));
+ env_replace(&t->env, GIT_QUARANTINE_ENVIRONMENT,
+ absolute_path(t->path.buf));
+
+ return t;
+}
+
+/*
+ * Make sure we copy packfiles and their associated metafiles in the correct
+ * order. All of these ends_with checks are slightly expensive to do in
+ * the midst of a sorting routine, but in practice it shouldn't matter.
+ * We will have a relatively small number of packfiles to order, and loose
+ * objects exit early in the first line.
+ */
+static int pack_copy_priority(const char *name)
+{
+ if (!starts_with(name, "pack"))
+ return 0;
+ if (ends_with(name, ".keep"))
+ return 1;
+ if (ends_with(name, ".pack"))
+ return 2;
+ if (ends_with(name, ".idx"))
+ return 3;
+ return 4;
+}
+
+static int pack_copy_cmp(const char *a, const char *b)
+{
+ return pack_copy_priority(a) - pack_copy_priority(b);
+}
+
+static int read_dir_paths(struct string_list *out, const char *path)
+{
+ DIR *dh;
+ struct dirent *de;
+
+ dh = opendir(path);
+ if (!dh)
+ return -1;
+
+ while ((de = readdir(dh)))
+ if (de->d_name[0] != '.')
+ string_list_append(out, de->d_name);
+
+ closedir(dh);
+ return 0;
+}
+
+static int migrate_paths(struct strbuf *src, struct strbuf *dst);
+
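+/*
+ * Move a single directory entry: directories are created at the
+ * destination (tolerating ones that already exist) and recursed into;
+ * anything else is moved with finalize_object_file().
+ */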
+static int migrate_one(struct strbuf *src, struct strbuf *dst)
+{
+ struct stat st;
+
+ if (stat(src->buf, &st) < 0)
+ return -1;
+ if (S_ISDIR(st.st_mode)) {
+ if (!mkdir(dst->buf, 0777)) {
+ if (adjust_shared_perm(dst->buf))
+ return -1;
+ } else if (errno != EEXIST)
+ return -1;
+ return migrate_paths(src, dst);
+ }
+ return finalize_object_file(src->buf, dst->buf);
+}
+
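+/*
+ * Move everything under "src" into "dst", ordering pack components
+ * (.keep, then .pack, then .idx) so that a pack never becomes
+ * discoverable before its data is in place.
+ */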
+static int migrate_paths(struct strbuf *src, struct strbuf *dst)
+{
+ size_t src_len = src->len, dst_len = dst->len;
+ struct string_list paths = STRING_LIST_INIT_DUP;
+ int i;
+ int ret = 0;
+
+ if (read_dir_paths(&paths, src->buf) < 0)
+ return -1;
+ paths.cmp = pack_copy_cmp;
+ string_list_sort(&paths);
+
+ for (i = 0; i < paths.nr; i++) {
+ const char *name = paths.items[i].string;
+
+ strbuf_addf(src, "/%s", name);
+ strbuf_addf(dst, "/%s", name);
+
+ ret |= migrate_one(src, dst);
+
+ strbuf_setlen(src, src_len);
+ strbuf_setlen(dst, dst_len);
+ }
+
+ string_list_clear(&paths, 0);
+ return ret;
+}
+
+int tmp_objdir_migrate(struct tmp_objdir *t)
+{
+ struct strbuf src = STRBUF_INIT, dst = STRBUF_INIT;
+ int ret;
+
+ if (!t)
+ return 0;
+
+ strbuf_addbuf(&src, &t->path);
+ strbuf_addstr(&dst, get_object_directory());
+
+ ret = migrate_paths(&src, &dst);
+
+ strbuf_release(&src);
+ strbuf_release(&dst);
+
+ tmp_objdir_destroy(t);
+ return ret;
+}
+
+const char **tmp_objdir_env(const struct tmp_objdir *t)
+{
+ if (!t)
+ return NULL;
+ return t->env.argv;
+}
+
+void tmp_objdir_add_as_alternate(const struct tmp_objdir *t)
+{
+ add_to_alternates_memory(t->path.buf);
+}
--- /dev/null
+#ifndef TMP_OBJDIR_H
+#define TMP_OBJDIR_H
+
+/*
+ * This API allows you to create a temporary object directory, advertise it to
+ * sub-processes via GIT_OBJECT_DIRECTORY and GIT_ALTERNATE_OBJECT_DIRECTORIES,
+ * and then either migrate its objects into the main object directory, or remove
+ * it. The library handles unexpected signal/exit death by cleaning up the
+ * temporary directory.
+ *
+ * Example:
+ *
+ * struct tmp_objdir *t = tmp_objdir_create();
+ * if (!run_command_v_opt_cd_env(cmd, 0, NULL, tmp_objdir_env(t)) &&
+ * !tmp_objdir_migrate(t))
+ * printf("success!\n");
+ * else
+ * die("failed...tmp_objdir will clean up for us");
+ *
+ */
+
+struct tmp_objdir;
+
+/*
+ * Create a new temporary object directory; returns NULL on failure.
+ */
+struct tmp_objdir *tmp_objdir_create(void);
+
+/*
+ * Return a list of environment strings, suitable for use with
+ * child_process.env, that can be passed to child programs to make use of the
+ * temporary object directory.
+ */
+const char **tmp_objdir_env(const struct tmp_objdir *);
+
+/*
+ * Finalize a temporary object directory by migrating its objects into the main
+ * object database, removing the temporary directory, and freeing any
+ * associated resources.
+ */
+int tmp_objdir_migrate(struct tmp_objdir *);
+
+/*
+ * Destroy a temporary object directory, discarding any objects it contains.
+ */
+int tmp_objdir_destroy(struct tmp_objdir *);
+
+/*
+ * Add the temporary object directory as an alternate object store in the
+ * current process.
+ */
+void tmp_objdir_add_as_alternate(const struct tmp_objdir *);
+
+#endif /* TMP_OBJDIR_H */
#include "commit.h"
#include "tempfile.h"
#include "trailer.h"
+#include "list.h"
/*
* Copyright (c) 2013, 2014 Christian Couder <chriscool@tuxfamily.org>
*/
static struct conf_info default_conf_info;
struct trailer_item {
- struct trailer_item *previous;
- struct trailer_item *next;
- const char *token;
- const char *value;
+ struct list_head list;
+ /*
+ * If this is not a trailer line, the line is stored in value
+ * (excluding the terminating newline) and token is NULL.
+ */
+ char *token;
+ char *value;
+};
+
+struct arg_item {
+ struct list_head list;
+ char *token;
+ char *value;
struct conf_info conf;
};
-static struct trailer_item *first_conf_item;
+static LIST_HEAD(conf_head);
static char *separators = ":";
#define TRAILER_ARG_STRING "$ARG"
+static const char *git_generated_prefixes[] = {
+ "Signed-off-by: ",
+ "(cherry picked from commit ",
+ NULL
+};
+
+/* Iterate over the elements of the list, forwards or in reverse. */
+#define list_for_each_dir(pos, head, is_reverse) \
+ for (pos = is_reverse ? (head)->prev : (head)->next; \
+ pos != (head); \
+ pos = is_reverse ? pos->prev : pos->next)
+
static int after_or_end(enum action_where where)
{
return (where == WHERE_AFTER) || (where == WHERE_END);
return len;
}
-static int same_token(struct trailer_item *a, struct trailer_item *b)
+static int same_token(struct trailer_item *a, struct arg_item *b)
{
- size_t a_len = token_len_without_separator(a->token, strlen(a->token));
- size_t b_len = token_len_without_separator(b->token, strlen(b->token));
- size_t min_len = (a_len > b_len) ? b_len : a_len;
+ size_t a_len, b_len, min_len;
+
+ if (!a->token)
+ return 0;
+
+ a_len = token_len_without_separator(a->token, strlen(a->token));
+ b_len = token_len_without_separator(b->token, strlen(b->token));
+ min_len = (a_len > b_len) ? b_len : a_len;
return !strncasecmp(a->token, b->token, min_len);
}
-static int same_value(struct trailer_item *a, struct trailer_item *b)
+static int same_value(struct trailer_item *a, struct arg_item *b)
{
return !strcasecmp(a->value, b->value);
}
-static int same_trailer(struct trailer_item *a, struct trailer_item *b)
+static int same_trailer(struct trailer_item *a, struct arg_item *b)
{
return same_token(a, b) && same_value(a, b);
}
}
static void free_trailer_item(struct trailer_item *item)
+{
+ free(item->token);
+ free(item->value);
+ free(item);
+}
+
+static void free_arg_item(struct arg_item *item)
{
free(item->conf.name);
free(item->conf.key);
free(item->conf.command);
- free((char *)item->token);
- free((char *)item->value);
+ free(item->token);
+ free(item->value);
free(item);
}
static void print_tok_val(FILE *outfile, const char *tok, const char *val)
{
- char c = last_non_space_char(tok);
+ char c;
+
+ if (!tok) {
+ fprintf(outfile, "%s\n", val);
+ return;
+ }
+
+ c = last_non_space_char(tok);
if (!c)
return;
if (strchr(separators, c))
fprintf(outfile, "%s%c %s\n", tok, separators[0], val);
}
-static void print_all(FILE *outfile, struct trailer_item *first, int trim_empty)
+static void print_all(FILE *outfile, struct list_head *head, int trim_empty)
{
+ struct list_head *pos;
struct trailer_item *item;
- for (item = first; item; item = item->next) {
+ list_for_each(pos, head) {
+ item = list_entry(pos, struct trailer_item, list);
if (!trim_empty || strlen(item->value) > 0)
print_tok_val(outfile, item->token, item->value);
}
}
-static void update_last(struct trailer_item **last)
-{
- if (*last)
- while ((*last)->next != NULL)
- *last = (*last)->next;
-}
-
-static void update_first(struct trailer_item **first)
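+/*
+ * Turn an arg_item into a plain trailer_item, transferring ownership of its
+ * token and value strings and freeing the remainder of the arg_item.
+ */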
+static struct trailer_item *trailer_from_arg(struct arg_item *arg_tok)
{
- if (*first)
- while ((*first)->previous != NULL)
- *first = (*first)->previous;
+ struct trailer_item *new = xcalloc(sizeof(*new), 1);
+ new->token = arg_tok->token;
+ new->value = arg_tok->value;
+ arg_tok->token = arg_tok->value = NULL;
+ free_arg_item(arg_tok);
+ return new;
}
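+/*
+ * Link the trailer built from arg_tok directly after on_tok when its
+ * placement is "after" or "end", and directly before it otherwise.
+ */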
static void add_arg_to_input_list(struct trailer_item *on_tok,
- struct trailer_item *arg_tok,
- struct trailer_item **first,
- struct trailer_item **last)
-{
- if (after_or_end(arg_tok->conf.where)) {
- arg_tok->next = on_tok->next;
- on_tok->next = arg_tok;
- arg_tok->previous = on_tok;
- if (arg_tok->next)
- arg_tok->next->previous = arg_tok;
- update_last(last);
- } else {
- arg_tok->previous = on_tok->previous;
- on_tok->previous = arg_tok;
- arg_tok->next = on_tok;
- if (arg_tok->previous)
- arg_tok->previous->next = arg_tok;
- update_first(first);
- }
+ struct arg_item *arg_tok)
+{
+ int aoe = after_or_end(arg_tok->conf.where);
+ struct trailer_item *to_add = trailer_from_arg(arg_tok);
+ if (aoe)
+ list_add(&to_add->list, &on_tok->list);
+ else
+ list_add_tail(&to_add->list, &on_tok->list);
}
static int check_if_different(struct trailer_item *in_tok,
- struct trailer_item *arg_tok,
- int check_all)
+ struct arg_item *arg_tok,
+ int check_all,
+ struct list_head *head)
{
enum action_where where = arg_tok->conf.where;
+ struct list_head *next_head;
do {
- if (!in_tok)
- return 1;
if (same_trailer(in_tok, arg_tok))
return 0;
/*
* if we want to add a trailer after another one,
* we have to check those before this one
*/
- in_tok = after_or_end(where) ? in_tok->previous : in_tok->next;
+ next_head = after_or_end(where) ? in_tok->list.prev
+ : in_tok->list.next;
+ if (next_head == head)
+ break;
+ in_tok = list_entry(next_head, struct trailer_item, list);
} while (check_all);
return 1;
}
-static void remove_from_list(struct trailer_item *item,
- struct trailer_item **first,
- struct trailer_item **last)
-{
- struct trailer_item *next = item->next;
- struct trailer_item *previous = item->previous;
-
- if (next) {
- item->next->previous = previous;
- item->next = NULL;
- } else if (last)
- *last = previous;
-
- if (previous) {
- item->previous->next = next;
- item->previous = NULL;
- } else if (first)
- *first = next;
-}
-
-static struct trailer_item *remove_first(struct trailer_item **first)
-{
- struct trailer_item *item = *first;
- *first = item->next;
- if (item->next) {
- item->next->previous = NULL;
- item->next = NULL;
- }
- return item;
-}
-
-static const char *apply_command(const char *command, const char *arg)
+static char *apply_command(const char *command, const char *arg)
{
struct strbuf cmd = STRBUF_INIT;
struct strbuf buf = STRBUF_INIT;
struct child_process cp = CHILD_PROCESS_INIT;
const char *argv[] = {NULL, NULL};
- const char *result;
+ char *result;
strbuf_addstr(&cmd, command);
if (arg)
return result;
}
-static void apply_item_command(struct trailer_item *in_tok, struct trailer_item *arg_tok)
+static void apply_item_command(struct trailer_item *in_tok, struct arg_item *arg_tok)
{
if (arg_tok->conf.command) {
const char *arg;
}
static void apply_arg_if_exists(struct trailer_item *in_tok,
- struct trailer_item *arg_tok,
+ struct arg_item *arg_tok,
struct trailer_item *on_tok,
- struct trailer_item **in_tok_first,
- struct trailer_item **in_tok_last)
+ struct list_head *head)
{
switch (arg_tok->conf.if_exists) {
case EXISTS_DO_NOTHING:
- free_trailer_item(arg_tok);
+ free_arg_item(arg_tok);
break;
case EXISTS_REPLACE:
apply_item_command(in_tok, arg_tok);
- add_arg_to_input_list(on_tok, arg_tok,
- in_tok_first, in_tok_last);
- remove_from_list(in_tok, in_tok_first, in_tok_last);
+ add_arg_to_input_list(on_tok, arg_tok);
+ list_del(&in_tok->list);
free_trailer_item(in_tok);
break;
case EXISTS_ADD:
apply_item_command(in_tok, arg_tok);
- add_arg_to_input_list(on_tok, arg_tok,
- in_tok_first, in_tok_last);
+ add_arg_to_input_list(on_tok, arg_tok);
break;
case EXISTS_ADD_IF_DIFFERENT:
apply_item_command(in_tok, arg_tok);
- if (check_if_different(in_tok, arg_tok, 1))
- add_arg_to_input_list(on_tok, arg_tok,
- in_tok_first, in_tok_last);
+ if (check_if_different(in_tok, arg_tok, 1, head))
+ add_arg_to_input_list(on_tok, arg_tok);
else
- free_trailer_item(arg_tok);
+ free_arg_item(arg_tok);
break;
case EXISTS_ADD_IF_DIFFERENT_NEIGHBOR:
apply_item_command(in_tok, arg_tok);
- if (check_if_different(on_tok, arg_tok, 0))
- add_arg_to_input_list(on_tok, arg_tok,
- in_tok_first, in_tok_last);
+ if (check_if_different(on_tok, arg_tok, 0, head))
+ add_arg_to_input_list(on_tok, arg_tok);
else
- free_trailer_item(arg_tok);
+ free_arg_item(arg_tok);
break;
}
}
-static void apply_arg_if_missing(struct trailer_item **in_tok_first,
- struct trailer_item **in_tok_last,
- struct trailer_item *arg_tok)
+static void apply_arg_if_missing(struct list_head *head,
+ struct arg_item *arg_tok)
{
- struct trailer_item **in_tok;
enum action_where where;
+ struct trailer_item *to_add;
switch (arg_tok->conf.if_missing) {
case MISSING_DO_NOTHING:
- free_trailer_item(arg_tok);
+ free_arg_item(arg_tok);
break;
case MISSING_ADD:
where = arg_tok->conf.where;
- in_tok = after_or_end(where) ? in_tok_last : in_tok_first;
apply_item_command(NULL, arg_tok);
- if (*in_tok) {
- add_arg_to_input_list(*in_tok, arg_tok,
- in_tok_first, in_tok_last);
- } else {
- *in_tok_first = arg_tok;
- *in_tok_last = arg_tok;
- }
- break;
+ to_add = trailer_from_arg(arg_tok);
+ if (after_or_end(where))
+ list_add_tail(&to_add->list, head);
+ else
+ list_add(&to_add->list, head);
}
}
-static int find_same_and_apply_arg(struct trailer_item **in_tok_first,
- struct trailer_item **in_tok_last,
- struct trailer_item *arg_tok)
+static int find_same_and_apply_arg(struct list_head *head,
+ struct arg_item *arg_tok)
{
+ struct list_head *pos;
struct trailer_item *in_tok;
struct trailer_item *on_tok;
- struct trailer_item *following_tok;
enum action_where where = arg_tok->conf.where;
int middle = (where == WHERE_AFTER) || (where == WHERE_BEFORE);
int backwards = after_or_end(where);
- struct trailer_item *start_tok = backwards ? *in_tok_last : *in_tok_first;
+ struct trailer_item *start_tok;
+
+ if (list_empty(head))
+ return 0;
- for (in_tok = start_tok; in_tok; in_tok = following_tok) {
- following_tok = backwards ? in_tok->previous : in_tok->next;
+ start_tok = list_entry(backwards ? head->prev : head->next,
+ struct trailer_item,
+ list);
+
+ list_for_each_dir(pos, head, backwards) {
+ in_tok = list_entry(pos, struct trailer_item, list);
if (!same_token(in_tok, arg_tok))
continue;
on_tok = middle ? in_tok : start_tok;
- apply_arg_if_exists(in_tok, arg_tok, on_tok,
- in_tok_first, in_tok_last);
+ apply_arg_if_exists(in_tok, arg_tok, on_tok, head);
return 1;
}
return 0;
}
-static void process_trailers_lists(struct trailer_item **in_tok_first,
- struct trailer_item **in_tok_last,
- struct trailer_item **arg_tok_first)
+static void process_trailers_lists(struct list_head *head,
+ struct list_head *arg_head)
{
- struct trailer_item *arg_tok;
- struct trailer_item *next_arg;
-
- if (!*arg_tok_first)
- return;
+ struct list_head *pos, *p;
+ struct arg_item *arg_tok;
- for (arg_tok = *arg_tok_first; arg_tok; arg_tok = next_arg) {
+ list_for_each_safe(pos, p, arg_head) {
int applied = 0;
+ arg_tok = list_entry(pos, struct arg_item, list);
- next_arg = arg_tok->next;
- remove_from_list(arg_tok, arg_tok_first, NULL);
+ list_del(pos);
- applied = find_same_and_apply_arg(in_tok_first,
- in_tok_last,
- arg_tok);
+ applied = find_same_and_apply_arg(head, arg_tok);
if (!applied)
- apply_arg_if_missing(in_tok_first,
- in_tok_last,
- arg_tok);
+ apply_arg_if_missing(head, arg_tok);
}
}
return 0;
}
-static void duplicate_conf(struct conf_info *dst, struct conf_info *src)
+static void duplicate_conf(struct conf_info *dst, const struct conf_info *src)
{
*dst = *src;
dst->name = xstrdup_or_null(src->name);
dst->command = xstrdup_or_null(src->command);
}
-static struct trailer_item *get_conf_item(const char *name)
+static struct arg_item *get_conf_item(const char *name)
{
- struct trailer_item *item;
- struct trailer_item *previous;
+ struct list_head *pos;
+ struct arg_item *item;
/* Look up item with same name */
- for (previous = NULL, item = first_conf_item;
- item;
- previous = item, item = item->next) {
+ list_for_each(pos, &conf_head) {
+ item = list_entry(pos, struct arg_item, list);
if (!strcasecmp(item->conf.name, name))
return item;
}
/* Item does not already exist, create it */
- item = xcalloc(sizeof(struct trailer_item), 1);
+ item = xcalloc(sizeof(*item), 1);
duplicate_conf(&item->conf, &default_conf_info);
item->conf.name = xstrdup(name);
- if (!previous)
- first_conf_item = item;
- else {
- previous->next = item;
- item->previous = previous;
- }
+ list_add_tail(&item->list, &conf_head);
return item;
}
static int git_trailer_config(const char *conf_key, const char *value, void *cb)
{
const char *trailer_item, *variable_name;
- struct trailer_item *item;
+ struct arg_item *item;
struct conf_info *conf;
char *name = NULL;
enum trailer_info_type type;
return 0;
}
-static int parse_trailer(struct strbuf *tok, struct strbuf *val, const char *trailer)
-{
- size_t len;
- struct strbuf seps = STRBUF_INIT;
- strbuf_addstr(&seps, separators);
- strbuf_addch(&seps, '=');
- len = strcspn(trailer, seps.buf);
- strbuf_release(&seps);
- if (len == 0) {
- int l = strlen(trailer);
- while (l > 0 && isspace(trailer[l - 1]))
- l--;
- return error(_("empty trailer token in trailer '%.*s'"), l, trailer);
- }
- if (len < strlen(trailer)) {
- strbuf_add(tok, trailer, len);
- strbuf_trim(tok);
- strbuf_addstr(val, trailer + len + 1);
- strbuf_trim(val);
- } else {
- strbuf_addstr(tok, trailer);
- strbuf_trim(tok);
- }
- return 0;
-}
-
-static const char *token_from_item(struct trailer_item *item, char *tok)
+static const char *token_from_item(struct arg_item *item, char *tok)
{
if (item->conf.key)
return item->conf.key;
return item->conf.name;
}
-static struct trailer_item *new_trailer_item(struct trailer_item *conf_item,
- char *tok, char *val)
-{
- struct trailer_item *new = xcalloc(sizeof(*new), 1);
- new->value = val ? val : xstrdup("");
-
- if (conf_item) {
- duplicate_conf(&new->conf, &conf_item->conf);
- new->token = xstrdup(token_from_item(conf_item, tok));
- free(tok);
- } else {
- duplicate_conf(&new->conf, &default_conf_info);
- new->token = tok;
- }
-
- return new;
-}
-
-static int token_matches_item(const char *tok, struct trailer_item *item, int tok_len)
+static int token_matches_item(const char *tok, struct arg_item *item, int tok_len)
{
if (!strncasecmp(tok, item->conf.name, tok_len))
return 1;
return item->conf.key ? !strncasecmp(tok, item->conf.key, tok_len) : 0;
}
-static struct trailer_item *create_trailer_item(const char *string)
+/*
+ * Return the location of the first separator in line, or -1 if there is no
+ * separator.
+ */
+static int find_separator(const char *line, const char *separators)
{
- struct strbuf tok = STRBUF_INIT;
- struct strbuf val = STRBUF_INIT;
- struct trailer_item *item;
- int tok_len;
+ int loc = strcspn(line, separators);
+ if (!line[loc])
+ return -1;
+ return loc;
+}
- if (parse_trailer(&tok, &val, string))
- return NULL;
+/*
+ * Obtain the token, value, and conf from the given trailer.
+ *
+ * separator_pos must not be 0, since the token cannot be an empty string.
+ *
+ * If separator_pos is -1, interpret the whole trailer as a token.
+ */
+static void parse_trailer(struct strbuf *tok, struct strbuf *val,
+ const struct conf_info **conf, const char *trailer,
+ int separator_pos)
+{
+ struct arg_item *item;
+ int tok_len;
+ struct list_head *pos;
- tok_len = token_len_without_separator(tok.buf, tok.len);
+ if (separator_pos != -1) {
+ strbuf_add(tok, trailer, separator_pos);
+ strbuf_trim(tok);
+ strbuf_addstr(val, trailer + separator_pos + 1);
+ strbuf_trim(val);
+ } else {
+ strbuf_addstr(tok, trailer);
+ strbuf_trim(tok);
+ }
/* Lookup if the token matches something in the config */
- for (item = first_conf_item; item; item = item->next) {
- if (token_matches_item(tok.buf, item, tok_len))
- return new_trailer_item(item,
- strbuf_detach(&tok, NULL),
- strbuf_detach(&val, NULL));
+ tok_len = token_len_without_separator(tok->buf, tok->len);
+ if (conf)
+ *conf = &default_conf_info;
+ list_for_each(pos, &conf_head) {
+ item = list_entry(pos, struct arg_item, list);
+ if (token_matches_item(tok->buf, item, tok_len)) {
+ char *tok_buf = strbuf_detach(tok, NULL);
+ if (conf)
+ *conf = &item->conf;
+ strbuf_addstr(tok, token_from_item(item, tok_buf));
+ free(tok_buf);
+ break;
+ }
}
+}
- return new_trailer_item(NULL,
- strbuf_detach(&tok, NULL),
- strbuf_detach(&val, NULL));
+static struct trailer_item *add_trailer_item(struct list_head *head, char *tok,
+ char *val)
+{
+ struct trailer_item *new = xcalloc(sizeof(*new), 1);
+ new->token = tok;
+ new->value = val;
+ list_add_tail(&new->list, head);
+ return new;
}
-static void add_trailer_item(struct trailer_item **first,
- struct trailer_item **last,
- struct trailer_item *new)
+static void add_arg_item(struct list_head *arg_head, char *tok, char *val,
+ const struct conf_info *conf)
{
- if (!new)
- return;
- if (!*last) {
- *first = new;
- *last = new;
- } else {
- (*last)->next = new;
- new->previous = *last;
- *last = new;
- }
+ struct arg_item *new = xcalloc(sizeof(*new), 1);
+ new->token = tok;
+ new->value = val;
+ duplicate_conf(&new->conf, conf);
+ list_add_tail(&new->list, arg_head);
}
-static struct trailer_item *process_command_line_args(struct string_list *trailers)
+static void process_command_line_args(struct list_head *arg_head,
+ struct string_list *trailers)
{
- struct trailer_item *arg_tok_first = NULL;
- struct trailer_item *arg_tok_last = NULL;
struct string_list_item *tr;
- struct trailer_item *item;
+ struct arg_item *item;
+ struct strbuf tok = STRBUF_INIT;
+ struct strbuf val = STRBUF_INIT;
+ const struct conf_info *conf;
+ struct list_head *pos;
- /* Add a trailer item for each configured trailer with a command */
- for (item = first_conf_item; item; item = item->next) {
- if (item->conf.command) {
- struct trailer_item *new = new_trailer_item(item, NULL, NULL);
- add_trailer_item(&arg_tok_first, &arg_tok_last, new);
- }
+ /*
+ * In command-line arguments, '=' is accepted (in addition to the
+ * separators that are defined).
+ */
+ char *cl_separators = xstrfmt("=%s", separators);
+
+ /* Add an arg item for each configured trailer with a command */
+ list_for_each(pos, &conf_head) {
+ item = list_entry(pos, struct arg_item, list);
+ if (item->conf.command)
+ add_arg_item(arg_head,
+ xstrdup(token_from_item(item, NULL)),
+ xstrdup(""),
+ &item->conf);
}
- /* Add a trailer item for each trailer on the command line */
+ /* Add an arg item for each trailer on the command line */
for_each_string_list_item(tr, trailers) {
- struct trailer_item *new = create_trailer_item(tr->string);
- add_trailer_item(&arg_tok_first, &arg_tok_last, new);
+ int separator_pos = find_separator(tr->string, cl_separators);
+ if (separator_pos == 0) {
+ struct strbuf sb = STRBUF_INIT;
+ strbuf_addstr(&sb, tr->string);
+ strbuf_trim(&sb);
+ error(_("empty trailer token in trailer '%.*s'"),
+ (int) sb.len, sb.buf);
+ strbuf_release(&sb);
+ } else {
+ parse_trailer(&tok, &val, &conf, tr->string,
+ separator_pos);
+ add_arg_item(arg_head,
+ strbuf_detach(&tok, NULL),
+ strbuf_detach(&val, NULL),
+ conf);
+ }
}
- return arg_tok_first;
+ free(cl_separators);
}
static struct strbuf **read_input_file(const char *file)
static int find_trailer_start(struct strbuf **lines, int count)
{
int start, end_of_title, only_spaces = 1;
+ int recognized_prefix = 0, trailer_lines = 0, non_trailer_lines = 0;
+ /*
+ * Number of possible continuation lines encountered. This will be
+ * reset to 0 if we encounter a trailer (since those lines are to be
+ * considered continuations of that trailer), and added to
+ * non_trailer_lines if we encounter a non-trailer (since those lines
+ * are to be considered non-trailers).
+ */
+ int possible_continuation_lines = 0;
/* The first paragraph is the title and cannot be trailers */
for (start = 0; start < count; start++) {
end_of_title = start;
/*
- * Get the start of the trailers by looking starting from the end
- * for a line with only spaces before lines with one separator.
+ * Get the start of the trailers by looking, starting from the end, for a
+ * blank line before a set of non-blank lines that (i) are all
+ * trailers, or (ii) contains at least one Git-generated trailer and
+ * consists of at least 25% trailers.
*/
for (start = count - 1; start >= end_of_title; start--) {
- if (lines[start]->buf[0] == comment_line_char)
+ const char **p;
+ int separator_pos;
+
+ if (lines[start]->buf[0] == comment_line_char) {
+ non_trailer_lines += possible_continuation_lines;
+ possible_continuation_lines = 0;
continue;
+ }
if (contains_only_spaces(lines[start]->buf)) {
if (only_spaces)
continue;
- return start + 1;
+ non_trailer_lines += possible_continuation_lines;
+ if (recognized_prefix &&
+ trailer_lines * 3 >= non_trailer_lines)
+ return start + 1;
+ if (trailer_lines && !non_trailer_lines)
+ return start + 1;
+ return count;
}
- if (strcspn(lines[start]->buf, separators) < lines[start]->len) {
- if (only_spaces)
- only_spaces = 0;
- continue;
+ only_spaces = 0;
+
+ for (p = git_generated_prefixes; *p; p++) {
+ if (starts_with(lines[start]->buf, *p)) {
+ trailer_lines++;
+ possible_continuation_lines = 0;
+ recognized_prefix = 1;
+ goto continue_outer_loop;
+ }
+ }
+
+ separator_pos = find_separator(lines[start]->buf, separators);
+ if (separator_pos >= 1 && !isspace(lines[start]->buf[0])) {
+ struct list_head *pos;
+
+ trailer_lines++;
+ possible_continuation_lines = 0;
+ if (recognized_prefix)
+ continue;
+ list_for_each(pos, &conf_head) {
+ struct arg_item *item;
+ item = list_entry(pos, struct arg_item, list);
+ if (token_matches_item(lines[start]->buf, item,
+ separator_pos)) {
+ recognized_prefix = 1;
+ break;
+ }
+ }
+ } else if (isspace(lines[start]->buf[0]))
+ possible_continuation_lines++;
+ else {
+ non_trailer_lines++;
+ non_trailer_lines += possible_continuation_lines;
+ possible_continuation_lines = 0;
}
- return count;
+continue_outer_loop:
+ ;
}
- return only_spaces ? count : 0;
+ return count;
}
/* Get the index of the end of the trailers */
static int process_input_file(FILE *outfile,
struct strbuf **lines,
- struct trailer_item **in_tok_first,
- struct trailer_item **in_tok_last)
+ struct list_head *head)
{
int count = 0;
int patch_start, trailer_start, trailer_end, i;
+ struct strbuf tok = STRBUF_INIT;
+ struct strbuf val = STRBUF_INIT;
+ struct trailer_item *last = NULL;
/* Get the line count */
while (lines[count])
/* Parse trailer lines */
for (i = trailer_start; i < trailer_end; i++) {
- if (lines[i]->buf[0] != comment_line_char) {
- struct trailer_item *new = create_trailer_item(lines[i]->buf);
- add_trailer_item(in_tok_first, in_tok_last, new);
+ int separator_pos;
+ if (lines[i]->buf[0] == comment_line_char)
+ continue;
+ if (last && isspace(lines[i]->buf[0])) {
+ struct strbuf sb = STRBUF_INIT;
+ strbuf_addf(&sb, "%s\n%s", last->value, lines[i]->buf);
+ strbuf_strip_suffix(&sb, "\n");
+ free(last->value);
+ last->value = strbuf_detach(&sb, NULL);
+ continue;
+ }
+ separator_pos = find_separator(lines[i]->buf, separators);
+ if (separator_pos >= 1) {
+ parse_trailer(&tok, &val, NULL, lines[i]->buf,
+ separator_pos);
+ last = add_trailer_item(head,
+ strbuf_detach(&tok, NULL),
+ strbuf_detach(&val, NULL));
+ } else {
+ strbuf_addbuf(&val, lines[i]);
+ strbuf_strip_suffix(&val, "\n");
+ add_trailer_item(head,
+ NULL,
+ strbuf_detach(&val, NULL));
+ last = NULL;
}
}
return trailer_end;
}
-static void free_all(struct trailer_item **first)
+static void free_all(struct list_head *head)
{
- while (*first) {
- struct trailer_item *item = remove_first(first);
- free_trailer_item(item);
+ struct list_head *pos, *p;
+ list_for_each_safe(pos, p, head) {
+ list_del(pos);
+ free_trailer_item(list_entry(pos, struct trailer_item, list));
}
}
void process_trailers(const char *file, int in_place, int trim_empty, struct string_list *trailers)
{
- struct trailer_item *in_tok_first = NULL;
- struct trailer_item *in_tok_last = NULL;
- struct trailer_item *arg_tok_first;
+ LIST_HEAD(head);
+ LIST_HEAD(arg_head);
struct strbuf **lines;
int trailer_end;
FILE *outfile = stdout;
outfile = create_in_place_tempfile(file);
/* Print the lines before the trailers */
- trailer_end = process_input_file(outfile, lines, &in_tok_first, &in_tok_last);
+ trailer_end = process_input_file(outfile, lines, &head);
- arg_tok_first = process_command_line_args(trailers);
+ process_command_line_args(&arg_head, trailers);
- process_trailers_lists(&in_tok_first, &in_tok_last, &arg_tok_first);
+ process_trailers_lists(&head, &arg_head);
- print_all(outfile, in_tok_first, trim_empty);
+ print_all(outfile, &head, trim_empty);
- free_all(&in_tok_first);
+ free_all(&head);
/* Print the lines after the trailers as is */
print_lines(outfile, lines, trailer_end, INT_MAX);
TRANS_OPT_THIN,
TRANS_OPT_KEEP,
TRANS_OPT_FOLLOWTAGS,
+ TRANS_OPT_DEEPEN_RELATIVE
};
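+/*
+ * Send a single, fully formatted "option" command to the helper and map its
+ * reply to 0 ("ok"), -1 ("error ...") or 1 ("unsupported" or anything we do
+ * not recognize, after a warning).
+ */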
+static int strbuf_set_helper_option(struct helper_data *data,
+ struct strbuf *buf)
+{
+ int ret;
+
+ sendline(data, buf);
+ if (recvline(data, buf))
+ exit(128);
+
+ if (!strcmp(buf->buf, "ok"))
+ ret = 0;
+ else if (starts_with(buf->buf, "error"))
+ ret = -1;
+ else if (!strcmp(buf->buf, "unsupported"))
+ ret = 1;
+ else {
+ warning("%s unexpectedly said: '%s'", data->name, buf->buf);
+ ret = 1;
+ }
+ return ret;
+}
+
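+/*
+ * Pass a string-list valued option (such as deepen-not) to the helper, one
+ * "option <name> <value>" line per item, stopping at the first failure.
+ */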
+static int string_list_set_helper_option(struct helper_data *data,
+ const char *name,
+ struct string_list *list)
+{
+ struct strbuf buf = STRBUF_INIT;
+ int i, ret = 0;
+
+ for (i = 0; i < list->nr; i++) {
+ strbuf_addf(&buf, "option %s ", name);
+ quote_c_style(list->items[i].string, &buf, NULL, 0);
+ strbuf_addch(&buf, '\n');
+
+ if ((ret = strbuf_set_helper_option(data, &buf)))
+ break;
+ strbuf_reset(&buf);
+ }
+ strbuf_release(&buf);
+ return ret;
+}
+
static int set_helper_option(struct transport *transport,
const char *name, const char *value)
{
if (!data->option)
return 1;
+ if (!strcmp(name, "deepen-not"))
+ return string_list_set_helper_option(data, name,
+ (struct string_list *)value);
+
for (i = 0; i < ARRAY_SIZE(unsupported_options); i++) {
if (!strcmp(name, unsupported_options[i]))
return 1;
quote_c_style(value, &buf, NULL, 0);
strbuf_addch(&buf, '\n');
- sendline(data, &buf);
- if (recvline(data, &buf))
- exit(128);
-
- if (!strcmp(buf.buf, "ok"))
- ret = 0;
- else if (starts_with(buf.buf, "error")) {
- ret = -1;
- } else if (!strcmp(buf.buf, "unsupported"))
- ret = 1;
- else {
- warning("%s unexpectedly said: '%s'", data->name, buf.buf);
- ret = 1;
- }
+ ret = strbuf_set_helper_option(data, &buf);
strbuf_release(&buf);
return ret;
}
die(_("transport: invalid depth option '%s'"), value);
}
return 0;
+ } else if (!strcmp(name, TRANS_OPT_DEEPEN_SINCE)) {
+ opts->deepen_since = value;
+ return 0;
+ } else if (!strcmp(name, TRANS_OPT_DEEPEN_NOT)) {
+ opts->deepen_not = (const struct string_list *)value;
+ return 0;
+ } else if (!strcmp(name, TRANS_OPT_DEEPEN_RELATIVE)) {
+ opts->deepen_relative = !!value;
+ return 0;
}
return 1;
}
args.quiet = (transport->verbose < 0);
args.no_progress = !transport->progress;
args.depth = data->options.depth;
+ args.deepen_since = data->options.deepen_since;
+ args.deepen_not = data->options.deepen_not;
+ args.deepen_relative = data->options.deepen_relative;
args.check_self_contained_and_connected =
data->options.check_self_contained_and_connected;
args.cloning = transport->cloning;
}
}
-static void print_ref_status(char flag, const char *summary, struct ref *to, struct ref *from, const char *msg, int porcelain)
+static void print_ref_status(char flag, const char *summary,
+ struct ref *to, struct ref *from, const char *msg,
+ int porcelain, int summary_width)
{
if (porcelain) {
if (from)
else
fprintf(stdout, "%s\n", summary);
} else {
- fprintf(stderr, " %c %-*s ", flag, TRANSPORT_SUMMARY_WIDTH, summary);
+ fprintf(stderr, " %c %-*s ", flag, summary_width, summary);
if (from)
fprintf(stderr, "%s -> %s", prettify_refname(from->name), prettify_refname(to->name));
else
}
}
-static void print_ok_ref_status(struct ref *ref, int porcelain)
+static void print_ok_ref_status(struct ref *ref, int porcelain, int summary_width)
{
if (ref->deletion)
- print_ref_status('-', "[deleted]", ref, NULL, NULL, porcelain);
+ print_ref_status('-', "[deleted]", ref, NULL, NULL,
+ porcelain, summary_width);
else if (is_null_oid(&ref->old_oid))
print_ref_status('*',
(starts_with(ref->name, "refs/tags/") ? "[new tag]" :
"[new branch]"),
- ref, ref->peer_ref, NULL, porcelain);
+ ref, ref->peer_ref, NULL, porcelain, summary_width);
else {
struct strbuf quickref = STRBUF_INIT;
char type;
strbuf_add_unique_abbrev(&quickref, ref->new_oid.hash,
DEFAULT_ABBREV);
- print_ref_status(type, quickref.buf, ref, ref->peer_ref, msg, porcelain);
+ print_ref_status(type, quickref.buf, ref, ref->peer_ref, msg,
+ porcelain, summary_width);
strbuf_release(&quickref);
}
}
-static int print_one_push_status(struct ref *ref, const char *dest, int count, int porcelain)
+static int print_one_push_status(struct ref *ref, const char *dest, int count,
+ int porcelain, int summary_width)
{
if (!count) {
char *url = transport_anonymize_url(dest);
switch(ref->status) {
case REF_STATUS_NONE:
- print_ref_status('X', "[no match]", ref, NULL, NULL, porcelain);
+ print_ref_status('X', "[no match]", ref, NULL, NULL,
+ porcelain, summary_width);
break;
case REF_STATUS_REJECT_NODELETE:
print_ref_status('!', "[rejected]", ref, NULL,
- "remote does not support deleting refs", porcelain);
+ "remote does not support deleting refs",
+ porcelain, summary_width);
break;
case REF_STATUS_UPTODATE:
print_ref_status('=', "[up to date]", ref,
- ref->peer_ref, NULL, porcelain);
+ ref->peer_ref, NULL, porcelain, summary_width);
break;
case REF_STATUS_REJECT_NONFASTFORWARD:
print_ref_status('!', "[rejected]", ref, ref->peer_ref,
- "non-fast-forward", porcelain);
+ "non-fast-forward", porcelain, summary_width);
break;
case REF_STATUS_REJECT_ALREADY_EXISTS:
print_ref_status('!', "[rejected]", ref, ref->peer_ref,
- "already exists", porcelain);
+ "already exists", porcelain, summary_width);
break;
case REF_STATUS_REJECT_FETCH_FIRST:
print_ref_status('!', "[rejected]", ref, ref->peer_ref,
- "fetch first", porcelain);
+ "fetch first", porcelain, summary_width);
break;
case REF_STATUS_REJECT_NEEDS_FORCE:
print_ref_status('!', "[rejected]", ref, ref->peer_ref,
- "needs force", porcelain);
+ "needs force", porcelain, summary_width);
break;
case REF_STATUS_REJECT_STALE:
print_ref_status('!', "[rejected]", ref, ref->peer_ref,
- "stale info", porcelain);
+ "stale info", porcelain, summary_width);
break;
case REF_STATUS_REJECT_SHALLOW:
print_ref_status('!', "[rejected]", ref, ref->peer_ref,
- "new shallow roots not allowed", porcelain);
+ "new shallow roots not allowed",
+ porcelain, summary_width);
break;
case REF_STATUS_REMOTE_REJECT:
print_ref_status('!', "[remote rejected]", ref,
- ref->deletion ? NULL : ref->peer_ref,
- ref->remote_status, porcelain);
+ ref->deletion ? NULL : ref->peer_ref,
+ ref->remote_status, porcelain, summary_width);
break;
case REF_STATUS_EXPECTING_REPORT:
print_ref_status('!', "[remote failure]", ref,
- ref->deletion ? NULL : ref->peer_ref,
- "remote failed to report status", porcelain);
+ ref->deletion ? NULL : ref->peer_ref,
+ "remote failed to report status",
+ porcelain, summary_width);
break;
case REF_STATUS_ATOMIC_PUSH_FAILED:
print_ref_status('!', "[rejected]", ref, ref->peer_ref,
- "atomic push failed", porcelain);
+ "atomic push failed", porcelain, summary_width);
break;
case REF_STATUS_OK:
- print_ok_ref_status(ref, porcelain);
+ print_ok_ref_status(ref, porcelain, summary_width);
break;
}
return 1;
}
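+/*
+ * Return whichever is larger: "sofar", or the number of hexdigits needed to
+ * abbreviate "oid" unambiguously (at least DEFAULT_ABBREV).
+ */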
+static int measure_abbrev(const struct object_id *oid, int sofar)
+{
+ char hex[GIT_SHA1_HEXSZ + 1];
+ int w = find_unique_abbrev_r(hex, oid->hash, DEFAULT_ABBREV);
+
+ return (w < sofar) ? sofar : w;
+}
+
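+/*
+ * Compute a column width wide enough to show two abbreviated object names
+ * plus a "..." separator for any of the given refs; when there are no refs
+ * to measure, fall back to FALLBACK_DEFAULT_ABBREV.
+ */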
+int transport_summary_width(const struct ref *refs)
+{
+ int maxw = -1;
+
+ for (; refs; refs = refs->next) {
+ maxw = measure_abbrev(&refs->old_oid, maxw);
+ maxw = measure_abbrev(&refs->new_oid, maxw);
+ }
+ if (maxw < 0)
+ maxw = FALLBACK_DEFAULT_ABBREV;
+ return (2 * maxw + 3);
+}
+
void transport_print_push_status(const char *dest, struct ref *refs,
int verbose, int porcelain, unsigned int *reject_reasons)
{
int n = 0;
unsigned char head_sha1[20];
char *head;
+ int summary_width = transport_summary_width(refs);
head = resolve_refdup("HEAD", RESOLVE_REF_READING, head_sha1, NULL);
if (verbose) {
for (ref = refs; ref; ref = ref->next)
if (ref->status == REF_STATUS_UPTODATE)
- n += print_one_push_status(ref, dest, n, porcelain);
+ n += print_one_push_status(ref, dest, n,
+ porcelain, summary_width);
}
for (ref = refs; ref; ref = ref->next)
if (ref->status == REF_STATUS_OK)
- n += print_one_push_status(ref, dest, n, porcelain);
+ n += print_one_push_status(ref, dest, n,
+ porcelain, summary_width);
*reject_reasons = 0;
for (ref = refs; ref; ref = ref->next) {
if (ref->status != REF_STATUS_NONE &&
ref->status != REF_STATUS_UPTODATE &&
ref->status != REF_STATUS_OK)
- n += print_one_push_status(ref, dest, n, porcelain);
+ n += print_one_push_status(ref, dest, n,
+ porcelain, summary_width);
if (ref->status == REF_STATUS_REJECT_NONFASTFORWARD) {
if (head != NULL && !strcmp(head, ref->name))
*reject_reasons |= REJECT_NON_FF_HEAD;
const struct ref *extra;
struct alternate_refs_data *cb = data;
- e->name[-1] = '\0';
- other = xstrdup(real_path(e->base));
- e->name[-1] = '/';
+ other = xstrdup(real_path(e->path));
len = strlen(other);
while (other[len-1] == '/')
#include "run-command.h"
#include "remote.h"
+struct string_list;
+
struct git_transport_options {
unsigned thin : 1;
unsigned keep : 1;
unsigned check_self_contained_and_connected : 1;
unsigned self_contained_and_connected : 1;
unsigned update_shallow : 1;
+ unsigned deepen_relative : 1;
int depth;
+ const char *deepen_since;
+ const struct string_list *deepen_not;
const char *uploadpack;
const char *receivepack;
struct push_cas_option *cas;
#define TRANSPORT_PUSH_ATOMIC 8192
#define TRANSPORT_PUSH_OPTIONS 16384
-#define TRANSPORT_SUMMARY_WIDTH (2 * DEFAULT_ABBREV + 3)
-#define TRANSPORT_SUMMARY(x) (int)(TRANSPORT_SUMMARY_WIDTH + strlen(x) - gettext_width(x)), (x)
+extern int transport_summary_width(const struct ref *refs);
/* Returns a transport suitable for the url */
struct transport *transport_get(struct remote *, const char *);
/* Limit the depth of the fetch if not null */
#define TRANS_OPT_DEPTH "depth"
+/* Limit the depth of the fetch based on time if not null */
+#define TRANS_OPT_DEEPEN_SINCE "deepen-since"
+
+/* Limit the depth of the fetch based on revs if not null */
+#define TRANS_OPT_DEEPEN_NOT "deepen-not"
+
+/* Deepen the fetch relative to the current shallow boundary if not null */
+#define TRANS_OPT_DEEPEN_RELATIVE "deepen-relative"
+
/* Aggressively fetch annotated tags if possible */
#define TRANS_OPT_FOLLOWTAGS "followtags"
return str;
}
-static void decode_tree_entry(struct tree_desc *desc, const char *buf, unsigned long size)
+static int decode_tree_entry(struct tree_desc *desc, const char *buf, unsigned long size, struct strbuf *err)
{
const char *path;
unsigned int mode, len;
- if (size < 24 || buf[size - 21])
- die("corrupt tree file");
+ if (size < 23 || buf[size - 21]) {
+ strbuf_addstr(err, _("too-short tree object"));
+ return -1;
+ }
path = get_mode(buf, &mode);
- if (!path || !*path)
- die("corrupt tree file");
+ if (!path) {
+ strbuf_addstr(err, _("malformed mode in tree entry"));
+ return -1;
+ }
+ if (!*path) {
+ strbuf_addstr(err, _("empty filename in tree entry"));
+ return -1;
+ }
len = strlen(path) + 1;
/* Initialize the descriptor entry */
desc->entry.path = path;
desc->entry.mode = canon_mode(mode);
desc->entry.oid = (const struct object_id *)(path + len);
+
+ return 0;
}
-void init_tree_desc(struct tree_desc *desc, const void *buffer, unsigned long size)
+static int init_tree_desc_internal(struct tree_desc *desc, const void *buffer, unsigned long size, struct strbuf *err)
{
desc->buffer = buffer;
desc->size = size;
if (size)
- decode_tree_entry(desc, buffer, size);
+ return decode_tree_entry(desc, buffer, size, err);
+ return 0;
+}
+
+void init_tree_desc(struct tree_desc *desc, const void *buffer, unsigned long size)
+{
+ struct strbuf err = STRBUF_INIT;
+ if (init_tree_desc_internal(desc, buffer, size, &err))
+ die("%s", err.buf);
+ strbuf_release(&err);
+}
+
+int init_tree_desc_gently(struct tree_desc *desc, const void *buffer, unsigned long size)
+{
+ struct strbuf err = STRBUF_INIT;
+ int result = init_tree_desc_internal(desc, buffer, size, &err);
+ if (result)
+ error("%s", err.buf);
+ strbuf_release(&err);
+ return result;
}
void *fill_tree_descriptor(struct tree_desc *desc, const unsigned char *sha1)
*a = t->entry;
}
-void update_tree_entry(struct tree_desc *desc)
+static int update_tree_entry_internal(struct tree_desc *desc, struct strbuf *err)
{
const void *buf = desc->buffer;
const unsigned char *end = desc->entry.oid->hash + 20;
unsigned long len = end - (const unsigned char *)buf;
if (size < len)
- die("corrupt tree file");
+ die(_("too-short tree file"));
buf = end;
size -= len;
desc->buffer = buf;
desc->size = size;
if (size)
- decode_tree_entry(desc, buf, size);
+ return decode_tree_entry(desc, buf, size, err);
+ return 0;
+}
+
+void update_tree_entry(struct tree_desc *desc)
+{
+ struct strbuf err = STRBUF_INIT;
+ if (update_tree_entry_internal(desc, &err))
+ die("%s", err.buf);
+ strbuf_release(&err);
+}
+
+int update_tree_entry_gently(struct tree_desc *desc)
+{
+ struct strbuf err = STRBUF_INIT;
+ if (update_tree_entry_internal(desc, &err)) {
+ error("%s", err.buf);
+ strbuf_release(&err);
+ /* Stop processing this tree after error */
+ desc->size = 0;
+ return -1;
+ }
+ strbuf_release(&err);
+ return 0;
}
int tree_entry(struct tree_desc *desc, struct name_entry *entry)
return 1;
}
+int tree_entry_gently(struct tree_desc *desc, struct name_entry *entry)
+{
+ if (!desc->size)
+ return 0;
+
+ *entry = desc->entry;
+ if (update_tree_entry_gently(desc))
+ return 0;
+ return 1;
+}
+
void setup_traverse_info(struct traverse_info *info, const char *base)
{
int pathlen = strlen(base);
return (const char *)ne->oid - ne->path - 1;
}
+/*
+ * The _gently versions of these functions warn and return false on a
+ * corrupt tree entry rather than dying.
+ */
+
void update_tree_entry(struct tree_desc *);
+int update_tree_entry_gently(struct tree_desc *);
void init_tree_desc(struct tree_desc *desc, const void *buf, unsigned long size);
+int init_tree_desc_gently(struct tree_desc *desc, const void *buf, unsigned long size);
/*
* Helper function that does both tree_entry_extract() and update_tree_entry()
* and returns true for success
*/
int tree_entry(struct tree_desc *, struct name_entry *);
+int tree_entry_gently(struct tree_desc *, struct name_entry *);
void *fill_tree_descriptor(struct tree_desc *desc, const unsigned char *sha1);
ce->ce_namelen = baselen + len;
memcpy(ce->name, base, baselen);
memcpy(ce->name + baselen, pathname, len+1);
- hashcpy(ce->sha1, sha1);
+ hashcpy(ce->oid.hash, sha1);
return add_cache_entry(ce, opt);
}
* Sort the cache entry -- we need to nuke the cache tree, though.
*/
cache_tree_free(&active_cache_tree);
- qsort(active_cache, active_nr, sizeof(active_cache[0]),
- cmp_cache_name_compare);
+ QSORT(active_cache, active_nr, cmp_cache_name_compare);
return 0;
}
ce->ce_mode = create_ce_mode(n->mode);
ce->ce_flags = create_ce_flags(stage);
ce->ce_namelen = len;
- hashcpy(ce->sha1, n->oid->hash);
+ oidcpy(&ce->oid, n->oid);
make_traverse_path(ce->name, info, n);
return ce;
int i, ret;
static struct cache_entry *dfc;
struct exclude_list el;
- struct checkout state;
+ struct checkout state = CHECKOUT_INIT;
if (len > MAX_UNPACK_TREES)
die("unpack_trees takes at most %d trees", MAX_UNPACK_TREES);
- memset(&state, 0, sizeof(state));
- state.base_dir = "";
state.force = 1;
state.quiet = 1;
state.refresh_cache = 1;
if ((a->ce_flags | b->ce_flags) & CE_CONFLICTED)
return 0;
return a->ce_mode == b->ce_mode &&
- !hashcmp(a->sha1, b->sha1);
+ !oidcmp(&a->oid, &b->oid);
}
/* If we are not going to update the submodule, then
* we don't care.
*/
- if (!hashcmp(sha1, ce->sha1))
+ if (!hashcmp(sha1, ce->oid.hash))
return 0;
return verify_clean_submodule(ce, error_type, o);
}
fprintf(o, "%s%06o %s %d\t%s\n",
label,
ce->ce_mode,
- sha1_to_hex(ce->sha1),
+ oid_to_hex(&ce->oid),
ce_stage(ce),
ce->name);
}
#include "version.h"
#include "string-list.h"
#include "parse-options.h"
+#include "argv-array.h"
+#include "prio-queue.h"
static const char * const upload_pack_usage[] = {
N_("git upload-pack [<options>] <dir>"),
static unsigned long oldest_have;
+static int deepen_relative;
static int multi_ack;
static int no_done;
static int use_thin_pack, use_ofs_delta, use_include_tag;
die("git upload-pack: %s", abort_msg);
}
-static int got_sha1(char *hex, unsigned char *sha1)
+static int got_sha1(const char *hex, unsigned char *sha1)
{
struct object *o;
int we_knew_they_have = 0;
static int reachable(struct commit *want)
{
- struct commit_list *work = NULL;
+ struct prio_queue work = { compare_commits_by_commit_date };
- commit_list_insert_by_date(want, &work);
- while (work) {
+ prio_queue_put(&work, want);
+ while (work.nr) {
struct commit_list *list;
- struct commit *commit = pop_commit(&work);
+ struct commit *commit = prio_queue_get(&work);
if (commit->object.flags & THEY_HAVE) {
want->object.flags |= COMMON_KNOWN;
for (list = commit->parents; list; list = list->next) {
struct commit *parent = list->item;
if (!(parent->object.flags & REACHABLE))
- commit_list_insert_by_date(parent, &work);
+ prio_queue_put(&work, parent);
}
}
want->object.flags |= REACHABLE;
clear_commit_marks(want, REACHABLE);
- free_commit_list(work);
+ clear_prio_queue(&work);
return (want->object.flags & COMMON_KNOWN);
}
for (;;) {
char *line = packet_read_line(0, NULL);
+ const char *arg;
+
reset_timeout();
if (!line) {
if (multi_ack == 2 && got_common
&& !got_other && ok_to_give_up()) {
sent_ready = 1;
- packet_write(1, "ACK %s ready\n", last_hex);
+ packet_write_fmt(1, "ACK %s ready\n", last_hex);
}
if (have_obj.nr == 0 || multi_ack)
- packet_write(1, "NAK\n");
+ packet_write_fmt(1, "NAK\n");
if (no_done && sent_ready) {
- packet_write(1, "ACK %s\n", last_hex);
+ packet_write_fmt(1, "ACK %s\n", last_hex);
return 0;
}
if (stateless_rpc)
got_other = 0;
continue;
}
- if (starts_with(line, "have ")) {
- switch (got_sha1(line+5, sha1)) {
+ if (skip_prefix(line, "have ", &arg)) {
+ switch (got_sha1(arg, sha1)) {
case -1: /* they have what we do not */
got_other = 1;
if (multi_ack && ok_to_give_up()) {
const char *hex = sha1_to_hex(sha1);
if (multi_ack == 2) {
sent_ready = 1;
- packet_write(1, "ACK %s ready\n", hex);
+ packet_write_fmt(1, "ACK %s ready\n", hex);
} else
- packet_write(1, "ACK %s continue\n", hex);
+ packet_write_fmt(1, "ACK %s continue\n", hex);
}
break;
default:
got_common = 1;
memcpy(last_hex, sha1_to_hex(sha1), 41);
if (multi_ack == 2)
- packet_write(1, "ACK %s common\n", last_hex);
+ packet_write_fmt(1, "ACK %s common\n", last_hex);
else if (multi_ack)
- packet_write(1, "ACK %s continue\n", last_hex);
+ packet_write_fmt(1, "ACK %s continue\n", last_hex);
else if (have_obj.nr == 1)
- packet_write(1, "ACK %s\n", last_hex);
+ packet_write_fmt(1, "ACK %s\n", last_hex);
break;
}
continue;
if (!strcmp(line, "done")) {
if (have_obj.nr > 0) {
if (multi_ack)
- packet_write(1, "ACK %s\n", last_hex);
+ packet_write_fmt(1, "ACK %s\n", last_hex);
return 0;
}
- packet_write(1, "NAK\n");
+ packet_write_fmt(1, "NAK\n");
return -1;
}
die("git upload-pack: expected SHA1 list, got '%s'", line);
return o->flags & ((allow_hidden_ref ? HIDDEN_REF : 0) | OUR_REF);
}
-static void check_non_tip(void)
+/*
+ * on successful case, it's up to the caller to close cmd->out
+ */
+static int do_reachable_revlist(struct child_process *cmd,
+ struct object_array *src,
+ struct object_array *reachable)
{
static const char *argv[] = {
"rev-list", "--stdin", NULL,
};
- static struct child_process cmd = CHILD_PROCESS_INIT;
struct object *o;
char namebuf[42]; /* ^ + SHA-1 + LF */
int i;
- /*
- * In the normal in-process case without
- * uploadpack.allowReachableSHA1InWant,
- * non-tip requests can never happen.
- */
- if (!stateless_rpc && !(allow_unadvertised_object_request & ALLOW_REACHABLE_SHA1))
- goto error;
-
- cmd.argv = argv;
- cmd.git_cmd = 1;
- cmd.no_stderr = 1;
- cmd.in = -1;
- cmd.out = -1;
-
- if (start_command(&cmd))
- goto error;
+ cmd->argv = argv;
+ cmd->git_cmd = 1;
+ cmd->no_stderr = 1;
+ cmd->in = -1;
+ cmd->out = -1;
/*
- * If rev-list --stdin encounters an unknown commit, it
- * terminates, which will cause SIGPIPE in the write loop
+ * If the next rev-list --stdin encounters an unknown commit,
+ * it terminates, which will cause SIGPIPE in the write loop
* below.
*/
sigchain_push(SIGPIPE, SIG_IGN);
+ if (start_command(cmd))
+ goto error;
+
namebuf[0] = '^';
namebuf[41] = '\n';
for (i = get_max_object_index(); 0 < i; ) {
o = get_indexed_object(--i);
if (!o)
continue;
+ if (reachable && o->type == OBJ_COMMIT)
+ o->flags &= ~TMP_MARK;
if (!is_our_ref(o))
continue;
memcpy(namebuf + 1, oid_to_hex(&o->oid), GIT_SHA1_HEXSZ);
- if (write_in_full(cmd.in, namebuf, 42) < 0)
+ if (write_in_full(cmd->in, namebuf, 42) < 0)
goto error;
}
namebuf[40] = '\n';
- for (i = 0; i < want_obj.nr; i++) {
- o = want_obj.objects[i].item;
- if (is_our_ref(o))
+ for (i = 0; i < src->nr; i++) {
+ o = src->objects[i].item;
+ if (is_our_ref(o)) {
+ if (reachable)
+ add_object_array(o, NULL, reachable);
continue;
+ }
+ if (reachable && o->type == OBJ_COMMIT)
+ o->flags |= TMP_MARK;
memcpy(namebuf, oid_to_hex(&o->oid), GIT_SHA1_HEXSZ);
- if (write_in_full(cmd.in, namebuf, 41) < 0)
+ if (write_in_full(cmd->in, namebuf, 41) < 0)
goto error;
}
- close(cmd.in);
+ close(cmd->in);
+ cmd->in = -1;
+ sigchain_pop(SIGPIPE);
+ return 0;
+
+error:
sigchain_pop(SIGPIPE);
+ if (cmd->in >= 0)
+ close(cmd->in);
+ if (cmd->out >= 0)
+ close(cmd->out);
+ return -1;
+}
+
+static int get_reachable_list(struct object_array *src,
+ struct object_array *reachable)
+{
+ struct child_process cmd = CHILD_PROCESS_INIT;
+ int i;
+ struct object *o;
+ char namebuf[42]; /* ^ + SHA-1 + LF */
+
+ if (do_reachable_revlist(&cmd, src, reachable) < 0)
+ return -1;
+
+ while ((i = read_in_full(cmd.out, namebuf, 41)) == 41) {
+ struct object_id sha1;
+
+ if (namebuf[40] != '\n' || get_oid_hex(namebuf, &sha1))
+ break;
+
+ o = lookup_object(sha1.hash);
+ if (o && o->type == OBJ_COMMIT) {
+ o->flags &= ~TMP_MARK;
+ }
+ }
+ for (i = get_max_object_index(); 0 < i; i--) {
+ o = get_indexed_object(i - 1);
+ if (o && o->type == OBJ_COMMIT &&
+ (o->flags & TMP_MARK)) {
+ add_object_array(o, NULL, reachable);
+ o->flags &= ~TMP_MARK;
+ }
+ }
+ close(cmd.out);
+
+ if (finish_command(&cmd))
+ return -1;
+
+ return 0;
+}
+
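+/*
+ * Return non-zero if any object in "src" is not reachable from our
+ * advertised refs, i.e. if the rev-list produces any output or fails.
+ */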
+static int has_unreachable(struct object_array *src)
+{
+ struct child_process cmd = CHILD_PROCESS_INIT;
+ char buf[1];
+ int i;
+
+ if (do_reachable_revlist(&cmd, src, NULL) < 0)
+ return 1;
+
/*
* The commits out of the rev-list are not ancestors of
* our ref.
*/
- i = read_in_full(cmd.out, namebuf, 1);
+ i = read_in_full(cmd.out, buf, 1);
if (i)
goto error;
close(cmd.out);
+ cmd.out = -1;
/*
* rev-list may have died by encountering a bad commit
goto error;
/* All the non-tip ones are ancestors of what we advertised */
- return;
+ return 0;
+
+error:
+ sigchain_pop(SIGPIPE);
+ if (cmd.out >= 0)
+ close(cmd.out);
+ return 1;
+}
+
+static void check_non_tip(void)
+{
+ int i;
+
+ /*
+ * In the normal in-process case without
+ * uploadpack.allowReachableSHA1InWant,
+ * non-tip requests can never happen.
+ */
+ if (!stateless_rpc && !(allow_unadvertised_object_request & ALLOW_REACHABLE_SHA1))
+ goto error;
+ if (!has_unreachable(&want_obj))
+ /* All the non-tip ones are ancestors of what we advertised */
+ return;
error:
/* Pick one of them (we know there at least is one) */
for (i = 0; i < want_obj.nr; i++) {
- o = want_obj.objects[i].item;
+ struct object *o = want_obj.objects[i].item;
if (!is_our_ref(o))
die("git upload-pack: not our ref %s",
oid_to_hex(&o->oid));
}
}
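+/*
+ * Advertise each commit in "result" that is newly shallow to the client and
+ * register it as a shallow boundary on our side.
+ */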
+static void send_shallow(struct commit_list *result)
+{
+ while (result) {
+ struct object *object = &result->item->object;
+ if (!(object->flags & (CLIENT_SHALLOW|NOT_SHALLOW))) {
+ packet_write_fmt(1, "shallow %s",
+ oid_to_hex(&object->oid));
+ register_shallow(object->oid.hash);
+ shallow_nr++;
+ }
+ result = result->next;
+ }
+}
+
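+/*
+ * Tell the client which of its shallow commits are no longer shallow, add
+ * their real parents to want_obj so the deepened history is sent, and
+ * (re)register every shallow point so commit traversal matches the client.
+ */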
+static void send_unshallow(const struct object_array *shallows)
+{
+ int i;
+
+ for (i = 0; i < shallows->nr; i++) {
+ struct object *object = shallows->objects[i].item;
+ if (object->flags & NOT_SHALLOW) {
+ struct commit_list *parents;
+ packet_write_fmt(1, "unshallow %s",
+ oid_to_hex(&object->oid));
+ object->flags &= ~CLIENT_SHALLOW;
+ /*
+ * We want to _register_ "object" as shallow, but we
+ * also need to traverse object's parents to deepen a
+ * shallow clone. Unregister it for now so we can
+ * parse and add the parents to the want list, then
+ * re-register it.
+ */
+ unregister_shallow(object->oid.hash);
+ object->parsed = 0;
+ parse_commit_or_die((struct commit *)object);
+ parents = ((struct commit *)object)->parents;
+ while (parents) {
+ add_object_array(&parents->item->object,
+ NULL, &want_obj);
+ parents = parents->next;
+ }
+ add_object_array(object, NULL, &extra_edge_obj);
+ }
+ /* make sure commit traversal conforms to client */
+ register_shallow(object->oid.hash);
+ }
+}
+
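+/*
+ * Handle "deepen <depth>": an infinite depth on a non-shallow repository
+ * simply unshallows everything the client has; otherwise compute the new
+ * shallow boundary from want_obj, or from the client's reachable shallow
+ * points when deepen-relative is in effect, and send the shallow/unshallow
+ * lines followed by a flush.
+ */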
+static void deepen(int depth, int deepen_relative,
+ struct object_array *shallows)
+{
+ if (depth == INFINITE_DEPTH && !is_repository_shallow()) {
+ int i;
+
+ for (i = 0; i < shallows->nr; i++) {
+ struct object *object = shallows->objects[i].item;
+ object->flags |= NOT_SHALLOW;
+ }
+ } else if (deepen_relative) {
+ struct object_array reachable_shallows = OBJECT_ARRAY_INIT;
+ struct commit_list *result;
+
+ get_reachable_list(shallows, &reachable_shallows);
+ result = get_shallow_commits(&reachable_shallows,
+ depth + 1,
+ SHALLOW, NOT_SHALLOW);
+ send_shallow(result);
+ free_commit_list(result);
+ object_array_clear(&reachable_shallows);
+ } else {
+ struct commit_list *result;
+
+ result = get_shallow_commits(&want_obj, depth,
+ SHALLOW, NOT_SHALLOW);
+ send_shallow(result);
+ free_commit_list(result);
+ }
+
+ send_unshallow(shallows);
+ packet_flush(1);
+}
+
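+/*
+ * Handle deepen-since/deepen-not: the caller builds a rev-list invocation
+ * whose output determines the new shallow boundary; send the resulting
+ * shallow/unshallow lines followed by a flush.
+ */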
+static void deepen_by_rev_list(int ac, const char **av,
+ struct object_array *shallows)
+{
+ struct commit_list *result;
+
+ result = get_shallow_commits_by_rev_list(ac, av, SHALLOW, NOT_SHALLOW);
+ send_shallow(result);
+ free_commit_list(result);
+ send_unshallow(shallows);
+ packet_flush(1);
+}
+
static void receive_needs(void)
{
struct object_array shallows = OBJECT_ARRAY_INIT;
+ struct string_list deepen_not = STRING_LIST_INIT_DUP;
int depth = 0;
int has_non_tip = 0;
+ unsigned long deepen_since = 0;
+ int deepen_rev_list = 0;
shallow_nr = 0;
for (;;) {
const char *features;
unsigned char sha1_buf[20];
char *line = packet_read_line(0, NULL);
+ const char *arg;
+
reset_timeout();
if (!line)
break;
- if (starts_with(line, "shallow ")) {
+ if (skip_prefix(line, "shallow ", &arg)) {
unsigned char sha1[20];
struct object *object;
- if (get_sha1_hex(line + 8, sha1))
+ if (get_sha1_hex(arg, sha1))
die("invalid shallow line: %s", line);
object = parse_object(sha1);
if (!object)
}
continue;
}
- if (starts_with(line, "deepen ")) {
- char *end;
- depth = strtol(line + 7, &end, 0);
- if (end == line + 7 || depth <= 0)
+ if (skip_prefix(line, "deepen ", &arg)) {
+ char *end = NULL;
+ depth = strtol(arg, &end, 0);
+ if (!end || *end || depth <= 0)
die("Invalid deepen: %s", line);
continue;
}
- if (!starts_with(line, "want ") ||
- get_sha1_hex(line+5, sha1_buf))
+ if (skip_prefix(line, "deepen-since ", &arg)) {
+ char *end = NULL;
+ deepen_since = strtoul(arg, &end, 0);
+ if (!end || *end || !deepen_since ||
+ /* revisions.c's max_age -1 is special */
+ deepen_since == -1)
+ die("Invalid deepen-since: %s", line);
+ deepen_rev_list = 1;
+ continue;
+ }
+ if (skip_prefix(line, "deepen-not ", &arg)) {
+ char *ref = NULL;
+ unsigned char sha1[20];
+ if (expand_ref(arg, strlen(arg), sha1, &ref) != 1)
+ die("git upload-pack: ambiguous deepen-not: %s", line);
+ string_list_append(&deepen_not, ref);
+ free(ref);
+ deepen_rev_list = 1;
+ continue;
+ }
+ if (!skip_prefix(line, "want ", &arg) ||
+ get_sha1_hex(arg, sha1_buf))
die("git upload-pack: protocol error, "
"expected to get sha, not '%s'", line);
- features = line + 45;
+ features = arg + 40;
+ if (parse_feature_request(features, "deepen-relative"))
+ deepen_relative = 1;
if (parse_feature_request(features, "multi_ack_detailed"))
multi_ack = 2;
else if (parse_feature_request(features, "multi_ack"))
if (!use_sideband && daemon_mode)
no_progress = 1;
- if (depth == 0 && shallows.nr == 0)
+ if (depth == 0 && !deepen_rev_list && shallows.nr == 0)
return;
- if (depth > 0) {
- struct commit_list *result = NULL, *backup = NULL;
+ if (depth > 0 && deepen_rev_list)
+ die("git upload-pack: deepen and deepen-since (or deepen-not) cannot be used together");
+ if (depth > 0)
+ deepen(depth, deepen_relative, &shallows);
+ else if (deepen_rev_list) {
+ struct argv_array av = ARGV_ARRAY_INIT;
int i;
- if (depth == INFINITE_DEPTH && !is_repository_shallow())
- for (i = 0; i < shallows.nr; i++) {
- struct object *object = shallows.objects[i].item;
- object->flags |= NOT_SHALLOW;
- }
- else
- backup = result =
- get_shallow_commits(&want_obj, depth,
- SHALLOW, NOT_SHALLOW);
- while (result) {
- struct object *object = &result->item->object;
- if (!(object->flags & (CLIENT_SHALLOW|NOT_SHALLOW))) {
- packet_write(1, "shallow %s",
- oid_to_hex(&object->oid));
- register_shallow(object->oid.hash);
- shallow_nr++;
+
+ argv_array_push(&av, "rev-list");
+ if (deepen_since)
+ argv_array_pushf(&av, "--max-age=%lu", deepen_since);
+ if (deepen_not.nr) {
+ argv_array_push(&av, "--not");
+ for (i = 0; i < deepen_not.nr; i++) {
+ struct string_list_item *s = deepen_not.items + i;
+ argv_array_push(&av, s->string);
}
- result = result->next;
+ argv_array_push(&av, "--not");
}
- free_commit_list(backup);
- for (i = 0; i < shallows.nr; i++) {
- struct object *object = shallows.objects[i].item;
- if (object->flags & NOT_SHALLOW) {
- struct commit_list *parents;
- packet_write(1, "unshallow %s",
- oid_to_hex(&object->oid));
- object->flags &= ~CLIENT_SHALLOW;
- /* make sure the real parents are parsed */
- unregister_shallow(object->oid.hash);
- object->parsed = 0;
- parse_commit_or_die((struct commit *)object);
- parents = ((struct commit *)object)->parents;
- while (parents) {
- add_object_array(&parents->item->object,
- NULL, &want_obj);
- parents = parents->next;
- }
- add_object_array(object, NULL, &extra_edge_obj);
- }
- /* make sure commit traversal conforms to client */
- register_shallow(object->oid.hash);
+ for (i = 0; i < want_obj.nr; i++) {
+ struct object *o = want_obj.objects[i].item;
+ argv_array_push(&av, oid_to_hex(&o->oid));
}
- packet_flush(1);
- } else
+ deepen_by_rev_list(av.argc, av.argv, &shallows);
+ argv_array_clear(&av);
+ }
+ else
if (shallows.nr > 0) {
int i;
for (i = 0; i < shallows.nr; i++)
int flag, void *cb_data)
{
static const char *capabilities = "multi_ack thin-pack side-band"
- " side-band-64k ofs-delta shallow no-progress"
- " include-tag multi_ack_detailed";
+ " side-band-64k ofs-delta shallow deepen-since deepen-not"
+ " deepen-relative no-progress include-tag multi_ack_detailed";
const char *refname_nons = strip_namespace(refname);
struct object_id peeled;
struct strbuf symref_info = STRBUF_INIT;
format_symref_info(&symref_info, cb_data);
- packet_write(1, "%s %s%c%s%s%s%s%s agent=%s\n",
+ packet_write_fmt(1, "%s %s%c%s%s%s%s%s agent=%s\n",
oid_to_hex(oid), refname_nons,
0, capabilities,
(allow_unadvertised_object_request & ALLOW_TIP_SHA1) ?
git_user_agent_sanitized());
strbuf_release(&symref_info);
} else {
- packet_write(1, "%s %s\n", oid_to_hex(oid), refname_nons);
+ packet_write_fmt(1, "%s %s\n", oid_to_hex(oid), refname_nons);
}
capabilities = NULL;
if (!peel_ref(refname, peeled.hash))
- packet_write(1, "%s %s^{}\n", oid_to_hex(&peeled), refname_nons);
+ packet_write_fmt(1, "%s %s^{}\n", oid_to_hex(&peeled), refname_nons);
return 0;
}
error_routine = routine;
}
+void (*get_error_routine(void))(const char *err, va_list params)
+{
+ return error_routine;
+}
+
+void set_warn_routine(void (*routine)(const char *warn, va_list params))
+{
+ warn_routine = routine;
+}
+
+void (*get_warn_routine(void))(const char *warn, va_list params)
+{
+ return warn_routine;
+}
+
void set_die_is_recursing_routine(int (*routine)(void))
{
die_is_recursing = routine;
#include "cache.h"
#include "run-command.h"
-static void check_pipe(int err)
-{
- if (err == EPIPE) {
- if (in_async())
- async_exit(141);
-
- signal(SIGPIPE, SIG_DFL);
- raise(SIGPIPE);
- /* Should never happen, but just in case... */
- exit(141);
- }
-}
-
/*
* Some cases use stdio, but want to flush after the write
* to get error handling (and to get better interactive
#include "strbuf.h"
#include "utf8.h"
#include "worktree.h"
+#include "lockfile.h"
static const char cut_line[] =
"------------------------ >8 ------------------------\n";
s->display_comment_prefix = 0;
}
-static void wt_status_print_unmerged_header(struct wt_status *s)
+static void wt_longstatus_print_unmerged_header(struct wt_status *s)
{
int i;
int del_mod_conflict = 0;
status_printf_ln(s, c, "%s", "");
}
-static void wt_status_print_cached_header(struct wt_status *s)
+static void wt_longstatus_print_cached_header(struct wt_status *s)
{
const char *c = color(WT_STATUS_HEADER, s);
status_printf_ln(s, c, "%s", "");
}
-static void wt_status_print_dirty_header(struct wt_status *s,
- int has_deleted,
- int has_dirty_submodules)
+static void wt_longstatus_print_dirty_header(struct wt_status *s,
+ int has_deleted,
+ int has_dirty_submodules)
{
const char *c = color(WT_STATUS_HEADER, s);
status_printf_ln(s, c, "%s", "");
}
-static void wt_status_print_other_header(struct wt_status *s,
- const char *what,
- const char *how)
+static void wt_longstatus_print_other_header(struct wt_status *s,
+ const char *what,
+ const char *how)
{
const char *c = color(WT_STATUS_HEADER, s);
status_printf_ln(s, c, "%s:", what);
status_printf_ln(s, c, "%s", "");
}
-static void wt_status_print_trailer(struct wt_status *s)
+static void wt_longstatus_print_trailer(struct wt_status *s)
{
status_printf_ln(s, color(WT_STATUS_HEADER, s), "%s", "");
}
return result;
}
-static void wt_status_print_unmerged_data(struct wt_status *s,
- struct string_list_item *it)
+static void wt_longstatus_print_unmerged_data(struct wt_status *s,
+ struct string_list_item *it)
{
const char *c = color(WT_STATUS_UNMERGED, s);
struct wt_status_change_data *d = it->util;
strbuf_release(&onebuf);
}
-static void wt_status_print_change_data(struct wt_status *s,
- int change_type,
- struct string_list_item *it)
+static void wt_longstatus_print_change_data(struct wt_status *s,
+ int change_type,
+ struct string_list_item *it)
{
struct wt_status_change_data *d = it->util;
const char *c = color(change_type, s);
status = d->worktree_status;
break;
default:
- die("BUG: unhandled change_type %d in wt_status_print_change_data",
+ die("BUG: unhandled change_type %d in wt_longstatus_print_change_data",
change_type);
}
if (S_ISGITLINK(p->two->mode))
d->new_submodule_commits = !!oidcmp(&p->one->oid,
&p->two->oid);
+
+ switch (p->status) {
+ case DIFF_STATUS_ADDED:
+ d->mode_worktree = p->two->mode;
+ break;
+
+ case DIFF_STATUS_DELETED:
+ d->mode_index = p->one->mode;
+ oidcpy(&d->oid_index, &p->one->oid);
+ /* mode_worktree is zero for a delete. */
+ break;
+
+ case DIFF_STATUS_MODIFIED:
+ case DIFF_STATUS_TYPE_CHANGED:
+ case DIFF_STATUS_UNMERGED:
+ d->mode_index = p->one->mode;
+ d->mode_worktree = p->two->mode;
+ oidcpy(&d->oid_index, &p->one->oid);
+ break;
+
+ case DIFF_STATUS_UNKNOWN:
+ die("BUG: worktree status unknown???");
+ break;
+ }
+
}
}
if (!d->index_status)
d->index_status = p->status;
switch (p->status) {
+ case DIFF_STATUS_ADDED:
+ /* Leave {mode,oid}_head zero for an add. */
+ d->mode_index = p->two->mode;
+ oidcpy(&d->oid_index, &p->two->oid);
+ break;
+ case DIFF_STATUS_DELETED:
+ d->mode_head = p->one->mode;
+ oidcpy(&d->oid_head, &p->one->oid);
+ /* Leave {mode,oid}_index zero for a delete. */
+ break;
+
case DIFF_STATUS_COPIED:
case DIFF_STATUS_RENAMED:
d->head_path = xstrdup(p->one->path);
+ d->score = p->score * 100 / MAX_SCORE;
+ /* fallthru */
+ case DIFF_STATUS_MODIFIED:
+ case DIFF_STATUS_TYPE_CHANGED:
+ d->mode_head = p->one->mode;
+ d->mode_index = p->two->mode;
+ oidcpy(&d->oid_head, &p->one->oid);
+ oidcpy(&d->oid_index, &p->two->oid);
break;
case DIFF_STATUS_UNMERGED:
d->stagemask = unmerged_mask(p->two->path);
+ /*
+ * Don't bother setting {mode,oid}_{head,index} since the print
+ * code will output the stage values directly and not use the
+ * values in these fields.
+ */
break;
}
}
setup_revisions(0, NULL, &rev, NULL);
rev.diffopt.output_format |= DIFF_FORMAT_CALLBACK;
DIFF_OPT_SET(&rev.diffopt, DIRTY_SUBMODULES);
+ rev.diffopt.ita_invisible_in_index = 1;
if (!s->show_untracked_files)
DIFF_OPT_SET(&rev.diffopt, IGNORE_UNTRACKED_IN_SUBMODULES);
if (s->ignore_submodule_arg) {
setup_revisions(0, NULL, &rev, &opt);
DIFF_OPT_SET(&rev.diffopt, OVERRIDE_SUBMODULE_CONFIG);
+ rev.diffopt.ita_invisible_in_index = 1;
if (s->ignore_submodule_arg) {
handle_ignore_submodules_arg(&rev.diffopt, s->ignore_submodule_arg);
} else {
if (!ce_path_match(ce, &s->pathspec, NULL))
continue;
+ if (ce_intent_to_add(ce))
+ continue;
it = string_list_insert(&s->change, ce->name);
d = it->util;
if (!d) {
if (ce_stage(ce)) {
d->index_status = DIFF_STATUS_UNMERGED;
d->stagemask |= (1 << (ce_stage(ce) - 1));
- }
- else
+ /*
+ * Don't bother setting {mode,oid}_{head,index} since the print
+ * code will output the stage values directly and not use the
+ * values in these fields.
+ */
+ } else {
d->index_status = DIFF_STATUS_ADDED;
+ /* Leave {mode,oid}_head zero for adds. */
+ d->mode_index = ce->ce_mode;
+ hashcpy(d->oid_index.hash, ce->oid.hash);
+ }
}
}
wt_status_collect_untracked(s);
}
-static void wt_status_print_unmerged(struct wt_status *s)
+static void wt_longstatus_print_unmerged(struct wt_status *s)
{
int shown_header = 0;
int i;
if (!d->stagemask)
continue;
if (!shown_header) {
- wt_status_print_unmerged_header(s);
+ wt_longstatus_print_unmerged_header(s);
shown_header = 1;
}
- wt_status_print_unmerged_data(s, it);
+ wt_longstatus_print_unmerged_data(s, it);
}
if (shown_header)
- wt_status_print_trailer(s);
+ wt_longstatus_print_trailer(s);
}
-static void wt_status_print_updated(struct wt_status *s)
+static void wt_longstatus_print_updated(struct wt_status *s)
{
int shown_header = 0;
int i;
d->index_status == DIFF_STATUS_UNMERGED)
continue;
if (!shown_header) {
- wt_status_print_cached_header(s);
+ wt_longstatus_print_cached_header(s);
s->commitable = 1;
shown_header = 1;
}
- wt_status_print_change_data(s, WT_STATUS_UPDATED, it);
+ wt_longstatus_print_change_data(s, WT_STATUS_UPDATED, it);
}
if (shown_header)
- wt_status_print_trailer(s);
+ wt_longstatus_print_trailer(s);
}
/*
return changes;
}
-static void wt_status_print_changed(struct wt_status *s)
+static void wt_longstatus_print_changed(struct wt_status *s)
{
int i, dirty_submodules;
int worktree_changes = wt_status_check_worktree_changes(s, &dirty_submodules);
if (!worktree_changes)
return;
- wt_status_print_dirty_header(s, worktree_changes < 0, dirty_submodules);
+ wt_longstatus_print_dirty_header(s, worktree_changes < 0, dirty_submodules);
for (i = 0; i < s->change.nr; i++) {
struct wt_status_change_data *d;
if (!d->worktree_status ||
d->worktree_status == DIFF_STATUS_UNMERGED)
continue;
- wt_status_print_change_data(s, WT_STATUS_CHANGED, it);
+ wt_longstatus_print_change_data(s, WT_STATUS_CHANGED, it);
}
- wt_status_print_trailer(s);
+ wt_longstatus_print_trailer(s);
}
-static void wt_status_print_submodule_summary(struct wt_status *s, int uncommitted)
+static void wt_longstatus_print_submodule_summary(struct wt_status *s, int uncommitted)
{
struct child_process sm_summary = CHILD_PROCESS_INIT;
struct strbuf cmd_stdout = STRBUF_INIT;
strbuf_release(&summary);
}
-static void wt_status_print_other(struct wt_status *s,
- struct string_list *l,
- const char *what,
- const char *how)
+static void wt_longstatus_print_other(struct wt_status *s,
+ struct string_list *l,
+ const char *what,
+ const char *how)
{
int i;
struct strbuf buf = STRBUF_INIT;
if (!l->nr)
return;
- wt_status_print_other_header(s, what, how);
+ wt_longstatus_print_other_header(s, what, how);
for (i = 0; i < l->nr; i++) {
struct string_list_item *it;
strbuf_release(&buf);
}
-static void wt_status_print_verbose(struct wt_status *s)
+static void wt_longstatus_print_verbose(struct wt_status *s)
{
struct rev_info rev;
struct setup_revision_opt opt;
init_revisions(&rev, NULL);
DIFF_OPT_SET(&rev.diffopt, ALLOW_TEXTCONV);
+ rev.diffopt.ita_invisible_in_index = 1;
memset(&opt, 0, sizeof(opt));
opt.def = s->is_initial ? EMPTY_TREE_SHA1_HEX : s->reference;
if (s->verbose > 1 && s->commitable) {
/* print_updated() printed a header, so do we */
if (s->fp != stdout)
- wt_status_print_trailer(s);
+ wt_longstatus_print_trailer(s);
status_printf_ln(s, c, _("Changes to be committed:"));
rev.diffopt.a_prefix = "c/";
rev.diffopt.b_prefix = "i/";
}
}
-static void wt_status_print_tracking(struct wt_status *s)
+static void wt_longstatus_print_tracking(struct wt_status *s)
{
struct strbuf sb = STRBUF_INIT;
const char *cp, *ep, *branch_name;
status_printf_ln(s, color,
_(" (use \"git commit\" to conclude merge)"));
}
- wt_status_print_trailer(s);
+ wt_longstatus_print_trailer(s);
}
static void show_am_in_progress(struct wt_status *s,
status_printf_ln(s, color,
_(" (use \"git am --abort\" to restore the original branch)"));
}
- wt_status_print_trailer(s);
+ wt_longstatus_print_trailer(s);
}
static char *read_line_from_git_path(const char *filename)
_(" (use \"git rebase --continue\" once you are satisfied with your changes)"));
}
}
- wt_status_print_trailer(s);
+ wt_longstatus_print_trailer(s);
}
static void show_cherry_pick_in_progress(struct wt_status *s,
status_printf_ln(s, color,
_(" (use \"git cherry-pick --abort\" to cancel the cherry-pick operation)"));
}
- wt_status_print_trailer(s);
+ wt_longstatus_print_trailer(s);
}
static void show_revert_in_progress(struct wt_status *s,
status_printf_ln(s, color,
_(" (use \"git revert --abort\" to cancel the revert operation)"));
}
- wt_status_print_trailer(s);
+ wt_longstatus_print_trailer(s);
}
static void show_bisect_in_progress(struct wt_status *s,
if (s->hints)
status_printf_ln(s, color,
_(" (use \"git bisect reset\" to get back to the original branch)"));
- wt_status_print_trailer(s);
+ wt_longstatus_print_trailer(s);
}
/*
wt_status_get_detached_from(state);
}
-static void wt_status_print_state(struct wt_status *s,
- struct wt_status_state *state)
+static void wt_longstatus_print_state(struct wt_status *s,
+ struct wt_status_state *state)
{
const char *state_color = color(WT_STATUS_HEADER, s);
if (state->merge_in_progress)
show_bisect_in_progress(s, state, state_color);
}
-void wt_status_print(struct wt_status *s)
+static void wt_longstatus_print(struct wt_status *s)
{
const char *branch_color = color(WT_STATUS_ONBRANCH, s);
const char *branch_status_color = color(WT_STATUS_HEADER, s);
status_printf_more(s, branch_status_color, "%s", on_what);
status_printf_more(s, branch_color, "%s\n", branch_name);
if (!s->is_initial)
- wt_status_print_tracking(s);
+ wt_longstatus_print_tracking(s);
}
- wt_status_print_state(s, &state);
+ wt_longstatus_print_state(s, &state);
free(state.branch);
free(state.onto);
free(state.detached_from);
status_printf_ln(s, color(WT_STATUS_HEADER, s), "%s", "");
}
- wt_status_print_updated(s);
- wt_status_print_unmerged(s);
- wt_status_print_changed(s);
+ wt_longstatus_print_updated(s);
+ wt_longstatus_print_unmerged(s);
+ wt_longstatus_print_changed(s);
if (s->submodule_summary &&
(!s->ignore_submodule_arg ||
strcmp(s->ignore_submodule_arg, "all"))) {
- wt_status_print_submodule_summary(s, 0); /* staged */
- wt_status_print_submodule_summary(s, 1); /* unstaged */
+ wt_longstatus_print_submodule_summary(s, 0); /* staged */
+ wt_longstatus_print_submodule_summary(s, 1); /* unstaged */
}
if (s->show_untracked_files) {
- wt_status_print_other(s, &s->untracked, _("Untracked files"), "add");
+ wt_longstatus_print_other(s, &s->untracked, _("Untracked files"), "add");
if (s->show_ignored_files)
- wt_status_print_other(s, &s->ignored, _("Ignored files"), "add -f");
+ wt_longstatus_print_other(s, &s->ignored, _("Ignored files"), "add -f");
if (advice_status_u_option && 2000 < s->untracked_in_ms) {
status_printf_ln(s, GIT_COLOR_NORMAL, "%s", "");
status_printf_ln(s, GIT_COLOR_NORMAL,
? _(" (use -u option to show untracked files)") : "");
if (s->verbose)
- wt_status_print_verbose(s);
+ wt_longstatus_print_verbose(s);
if (!s->commitable) {
if (s->amend)
status_printf_ln(s, GIT_COLOR_NORMAL, _("No changes"));
fputc(s->null_termination ? '\0' : '\n', s->fp);
}
-void wt_shortstatus_print(struct wt_status *s)
+static void wt_shortstatus_print(struct wt_status *s)
{
int i;
}
}
-void wt_porcelain_print(struct wt_status *s)
+static void wt_porcelain_print(struct wt_status *s)
{
s->use_color = 0;
s->relative_paths = 0;
s->no_gettext = 1;
wt_shortstatus_print(s);
}
+
+/*
+ * Print branch information for porcelain v2 output. These lines
+ * are printed when the '--branch' parameter is given.
+ *
+ * # branch.oid <commit><eol>
+ * # branch.head <head><eol>
+ * [# branch.upstream <upstream><eol>
+ * [# branch.ab +<ahead> -<behind><eol>]]
+ *
+ * <commit> ::= the current commit hash or the literal
+ * "(initial)" to indicate an initialized repo
+ * with no commits.
+ *
+ * <head> ::= <branch_name> the current branch name, or
+ * the literal "(detached)" when the head is detached, or
+ * "(unknown)" when something is wrong.
+ *
+ * <upstream> ::= the upstream branch name, when set.
+ *
+ * <ahead> ::= integer ahead value, when upstream set
+ * and the commit is present (not gone).
+ *
+ * <behind> ::= integer behind value, when upstream set
+ * and commit is present.
+ *
+ *
+ * The end-of-line is defined by the -z flag.
+ *
+ * <eol> ::= NUL when -z,
+ * LF when NOT -z.
+ *
+ */
+static void wt_porcelain_v2_print_tracking(struct wt_status *s)
+{
+ struct branch *branch;
+ const char *base;
+ const char *branch_name;
+ struct wt_status_state state;
+ int ab_info, nr_ahead, nr_behind;
+ char eol = s->null_termination ? '\0' : '\n';
+
+ memset(&state, 0, sizeof(state));
+ wt_status_get_state(&state, s->branch && !strcmp(s->branch, "HEAD"));
+
+ fprintf(s->fp, "# branch.oid %s%c",
+ (s->is_initial ? "(initial)" : sha1_to_hex(s->sha1_commit)),
+ eol);
+
+ if (!s->branch)
+ fprintf(s->fp, "# branch.head %s%c", "(unknown)", eol);
+ else {
+ if (!strcmp(s->branch, "HEAD")) {
+ fprintf(s->fp, "# branch.head %s%c", "(detached)", eol);
+
+ if (state.rebase_in_progress || state.rebase_interactive_in_progress)
+ branch_name = state.onto;
+ else if (state.detached_from)
+ branch_name = state.detached_from;
+ else
+ branch_name = "";
+ } else {
+ branch_name = NULL;
+ skip_prefix(s->branch, "refs/heads/", &branch_name);
+
+ fprintf(s->fp, "# branch.head %s%c", branch_name, eol);
+ }
+
+ /* Lookup stats on the upstream tracking branch, if set. */
+ branch = branch_get(branch_name);
+ base = NULL;
+ ab_info = (stat_tracking_info(branch, &nr_ahead, &nr_behind, &base) == 0);
+ if (base) {
+ base = shorten_unambiguous_ref(base, 0);
+ fprintf(s->fp, "# branch.upstream %s%c", base, eol);
+ free((char *)base);
+
+ if (ab_info)
+ fprintf(s->fp, "# branch.ab +%d -%d%c", nr_ahead, nr_behind, eol);
+ }
+ }
+
+ free(state.branch);
+ free(state.onto);
+ free(state.detached_from);
+}
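
For illustration, with a hypothetical object name and counts, the branch
header block produced above for a branch that is one commit ahead of its
upstream would look like:

    # branch.oid 4f0e1f1a4c5b8d92ee130fbf5c0da309cd9e0a3c
    # branch.head master
    # branch.upstream origin/master
    # branch.ab +1 -0
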
+
+/*
+ * Convert various submodule status values into a
+ * fixed-length string of characters in the buffer provided.
+ */
+static void wt_porcelain_v2_submodule_state(
+ struct wt_status_change_data *d,
+ char sub[5])
+{
+ if (S_ISGITLINK(d->mode_head) ||
+ S_ISGITLINK(d->mode_index) ||
+ S_ISGITLINK(d->mode_worktree)) {
+ sub[0] = 'S';
+ sub[1] = d->new_submodule_commits ? 'C' : '.';
+ sub[2] = (d->dirty_submodule & DIRTY_SUBMODULE_MODIFIED) ? 'M' : '.';
+ sub[3] = (d->dirty_submodule & DIRTY_SUBMODULE_UNTRACKED) ? 'U' : '.';
+ } else {
+ sub[0] = 'N';
+ sub[1] = '.';
+ sub[2] = '.';
+ sub[3] = '.';
+ }
+ sub[4] = 0;
+}
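
For example, a submodule with new commits and a modified worktree but no
untracked files yields the token "SCM.", while an ordinary (non-submodule)
path yields "N...".
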
+
+/*
+ * Fix-up changed entries before we print them.
+ */
+static void wt_porcelain_v2_fix_up_changed(
+ struct string_list_item *it,
+ struct wt_status *s)
+{
+ struct wt_status_change_data *d = it->util;
+
+ if (!d->index_status) {
+ /*
+ * This entry is unchanged in the index (relative to the head).
+ * Therefore, the collect_updated_cb was never called for this
+ * entry (during the head-vs-index scan) and so the head column
+ * fields were never set.
+ *
+ * We must have data for the index column (from the
+ * index-vs-worktree scan); otherwise, this entry should not
+ * be in the list of changes.
+ *
+ * Copy index column fields to the head column, so that our
+ * output looks complete.
+ */
+ assert(d->mode_head == 0);
+ d->mode_head = d->mode_index;
+ oidcpy(&d->oid_head, &d->oid_index);
+ }
+
+ if (!d->worktree_status) {
+ /*
+ * This entry is unchanged in the worktree (relative to the index).
+ * Therefore, the collect_changed_cb was never called for this entry
+ * (during the index-vs-worktree scan) and so the worktree column
+ * fields were never set.
+ *
+ * We must have data for the index column (from the head-vs-index
+ * scan).
+ *
+ * Copy the index column fields to the worktree column so that
+ * our output looks complete.
+ *
+ * Note that we only have a mode field in the worktree column
+ * because the scan code tries really hard to not have to compute it.
+ */
+ assert(d->mode_worktree == 0);
+ d->mode_worktree = d->mode_index;
+ }
+}
+
+/*
+ * Print porcelain v2 info for tracked entries with changes.
+ */
+static void wt_porcelain_v2_print_changed_entry(
+ struct string_list_item *it,
+ struct wt_status *s)
+{
+ struct wt_status_change_data *d = it->util;
+ struct strbuf buf_index = STRBUF_INIT;
+ struct strbuf buf_head = STRBUF_INIT;
+ const char *path_index = NULL;
+ const char *path_head = NULL;
+ char key[3];
+ char submodule_token[5];
+ char sep_char, eol_char;
+
+ wt_porcelain_v2_fix_up_changed(it, s);
+ wt_porcelain_v2_submodule_state(d, submodule_token);
+
+ key[0] = d->index_status ? d->index_status : '.';
+ key[1] = d->worktree_status ? d->worktree_status : '.';
+ key[2] = 0;
+
+ if (s->null_termination) {
+ /*
+ * In -z mode, we DO NOT C-quote pathnames. Current path is ALWAYS first.
+ * A single NUL character separates them.
+ */
+ sep_char = '\0';
+ eol_char = '\0';
+ path_index = it->string;
+ path_head = d->head_path;
+ } else {
+ /*
+ * Path(s) are C-quoted if necessary. Current path is ALWAYS first.
+ * The source path is only present when necessary.
+ * A single TAB separates them (because paths can contain spaces
+ * which are not escaped and C-quoting does escape TAB characters).
+ */
+ sep_char = '\t';
+ eol_char = '\n';
+ path_index = quote_path(it->string, s->prefix, &buf_index);
+ if (d->head_path)
+ path_head = quote_path(d->head_path, s->prefix, &buf_head);
+ }
+
+ if (path_head)
+ fprintf(s->fp, "2 %s %s %06o %06o %06o %s %s %c%d %s%c%s%c",
+ key, submodule_token,
+ d->mode_head, d->mode_index, d->mode_worktree,
+ oid_to_hex(&d->oid_head), oid_to_hex(&d->oid_index),
+ key[0], d->score,
+ path_index, sep_char, path_head, eol_char);
+ else
+ fprintf(s->fp, "1 %s %s %06o %06o %06o %s %s %s%c",
+ key, submodule_token,
+ d->mode_head, d->mode_index, d->mode_worktree,
+ oid_to_hex(&d->oid_head), oid_to_hex(&d->oid_index),
+ path_index, eol_char);
+
+ strbuf_release(&buf_index);
+ strbuf_release(&buf_head);
+}
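
For illustration, with hypothetical modes, object names and paths (the <...>
placeholders stand for the full values, and <TAB> for the literal tab
separator), an index-only modification prints a '1' line and a staged rename
prints a '2' line carrying the similarity score and both paths:

    1 M. N... 100644 100644 100644 <oid-at-HEAD> <oid-in-index> Makefile
    2 R. N... 100644 100644 100644 <oid-at-HEAD> <oid-in-index> R100 new-name<TAB>old-name
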
+
+/*
+ * Print porcelain v2 status info for unmerged entries.
+ */
+static void wt_porcelain_v2_print_unmerged_entry(
+ struct string_list_item *it,
+ struct wt_status *s)
+{
+ struct wt_status_change_data *d = it->util;
+ const struct cache_entry *ce;
+ struct strbuf buf_index = STRBUF_INIT;
+ const char *path_index = NULL;
+ int pos, stage, sum;
+ struct {
+ int mode;
+ struct object_id oid;
+ } stages[3];
+ char *key;
+ char submodule_token[5];
+ char unmerged_prefix = 'u';
+ char eol_char = s->null_termination ? '\0' : '\n';
+
+ wt_porcelain_v2_submodule_state(d, submodule_token);
+
+ switch (d->stagemask) {
+ case 1: key = "DD"; break; /* both deleted */
+ case 2: key = "AU"; break; /* added by us */
+ case 3: key = "UD"; break; /* deleted by them */
+ case 4: key = "UA"; break; /* added by them */
+ case 5: key = "DU"; break; /* deleted by us */
+ case 6: key = "AA"; break; /* both added */
+ case 7: key = "UU"; break; /* both modified */
+ default:
+ die("BUG: unhandled unmerged status %x", d->stagemask);
+ }
+
+ /*
+ * Disregard the {mode,oid}_{head,index} data that we
+ * accumulated for the head and index columns during the
+ * scans and replace it with the actual stage data.
+ *
+ * Note that this is last-one-wins for each of the individual
+ * stage [123] columns in the event of multiple cache entries
+ * for the same stage.
+ */
+ memset(stages, 0, sizeof(stages));
+ sum = 0;
+ pos = cache_name_pos(it->string, strlen(it->string));
+ assert(pos < 0);
+ pos = -pos-1;
+ while (pos < active_nr) {
+ ce = active_cache[pos++];
+ stage = ce_stage(ce);
+ if (strcmp(ce->name, it->string) || !stage)
+ break;
+ stages[stage - 1].mode = ce->ce_mode;
+ hashcpy(stages[stage - 1].oid.hash, ce->oid.hash);
+ sum |= (1 << (stage - 1));
+ }
+ if (sum != d->stagemask)
+ die("BUG: observed stagemask 0x%x != expected stagemask 0x%x", sum, d->stagemask);
+
+ if (s->null_termination)
+ path_index = it->string;
+ else
+ path_index = quote_path(it->string, s->prefix, &buf_index);
+
+ fprintf(s->fp, "%c %s %s %06o %06o %06o %06o %s %s %s %s%c",
+ unmerged_prefix, key, submodule_token,
+ stages[0].mode, /* stage 1 */
+ stages[1].mode, /* stage 2 */
+ stages[2].mode, /* stage 3 */
+ d->mode_worktree,
+ oid_to_hex(&stages[0].oid), /* stage 1 */
+ oid_to_hex(&stages[1].oid), /* stage 2 */
+ oid_to_hex(&stages[2].oid), /* stage 3 */
+ path_index,
+ eol_char);
+
+ strbuf_release(&buf_index);
+}
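
The stagemask records which stages are present in the index: stage 1 (the
common ancestor) sets 0x1, stage 2 (ours) sets 0x2, and stage 3 (theirs) sets
0x4. For example, a path deleted on our side but modified by the other side
has only stages 1 and 3, so the stagemask is 0x5 and the entry is printed
with the key "DU" (deleted by us).
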
+
+/*
+ * Print porcelain V2 status info for untracked and ignored entries.
+ */
+static void wt_porcelain_v2_print_other(
+ struct string_list_item *it,
+ struct wt_status *s,
+ char prefix)
+{
+ struct strbuf buf = STRBUF_INIT;
+ const char *path;
+ char eol_char;
+
+ if (s->null_termination) {
+ path = it->string;
+ eol_char = '\0';
+ } else {
+ path = quote_path(it->string, s->prefix, &buf);
+ eol_char = '\n';
+ }
+
+ fprintf(s->fp, "%c %s%c", prefix, path, eol_char);
+
+ strbuf_release(&buf);
+}
+
+/*
+ * Print porcelain V2 status.
+ *
+ * [<v2_branch>]
+ * [<v2_changed_items>]*
+ * [<v2_unmerged_items>]*
+ * [<v2_untracked_items>]*
+ * [<v2_ignored_items>]*
+ *
+ */
+static void wt_porcelain_v2_print(struct wt_status *s)
+{
+ struct wt_status_change_data *d;
+ struct string_list_item *it;
+ int i;
+
+ if (s->show_branch)
+ wt_porcelain_v2_print_tracking(s);
+
+ for (i = 0; i < s->change.nr; i++) {
+ it = &(s->change.items[i]);
+ d = it->util;
+ if (!d->stagemask)
+ wt_porcelain_v2_print_changed_entry(it, s);
+ }
+
+ for (i = 0; i < s->change.nr; i++) {
+ it = &(s->change.items[i]);
+ d = it->util;
+ if (d->stagemask)
+ wt_porcelain_v2_print_unmerged_entry(it, s);
+ }
+
+ for (i = 0; i < s->untracked.nr; i++) {
+ it = &(s->untracked.items[i]);
+ wt_porcelain_v2_print_other(it, s, '?');
+ }
+
+ for (i = 0; i < s->ignored.nr; i++) {
+ it = &(s->ignored.items[i]);
+ wt_porcelain_v2_print_other(it, s, '!');
+ }
+}
+
+void wt_status_print(struct wt_status *s)
+{
+ switch (s->status_format) {
+ case STATUS_FORMAT_SHORT:
+ wt_shortstatus_print(s);
+ break;
+ case STATUS_FORMAT_PORCELAIN:
+ wt_porcelain_print(s);
+ break;
+ case STATUS_FORMAT_PORCELAIN_V2:
+ wt_porcelain_v2_print(s);
+ break;
+ case STATUS_FORMAT_UNSPECIFIED:
+ die("BUG: finalize_deferred_config() should have been called");
+ break;
+ case STATUS_FORMAT_NONE:
+ case STATUS_FORMAT_LONG:
+ wt_longstatus_print(s);
+ break;
+ }
+}
+
+/**
+ * Returns 1 if there are unstaged changes, 0 otherwise.
+ */
+int has_unstaged_changes(int ignore_submodules)
+{
+ struct rev_info rev_info;
+ int result;
+
+ init_revisions(&rev_info, NULL);
+ if (ignore_submodules)
+ DIFF_OPT_SET(&rev_info.diffopt, IGNORE_SUBMODULES);
+ DIFF_OPT_SET(&rev_info.diffopt, QUICK);
+ diff_setup_done(&rev_info.diffopt);
+ result = run_diff_files(&rev_info, 0);
+ return diff_result_code(&rev_info.diffopt, result);
+}
+
+/**
+ * Returns 1 if there are uncommitted changes, 0 otherwise.
+ */
+int has_uncommitted_changes(int ignore_submodules)
+{
+ struct rev_info rev_info;
+ int result;
+
+ if (is_cache_unborn())
+ return 0;
+
+ init_revisions(&rev_info, NULL);
+ if (ignore_submodules)
+ DIFF_OPT_SET(&rev_info.diffopt, IGNORE_SUBMODULES);
+ DIFF_OPT_SET(&rev_info.diffopt, QUICK);
+ add_head_to_pending(&rev_info);
+ diff_setup_done(&rev_info.diffopt);
+ result = run_diff_index(&rev_info, 1);
+ return diff_result_code(&rev_info.diffopt, result);
+}
+
+/**
+ * If the work tree has unstaged or uncommitted changes, dies with the
+ * appropriate message.
+ */
+int require_clean_work_tree(const char *action, const char *hint, int ignore_submodules, int gently)
+{
+ struct lock_file *lock_file = xcalloc(1, sizeof(*lock_file));
+ int err = 0;
+
+ hold_locked_index(lock_file, 0);
+ refresh_cache(REFRESH_QUIET);
+ update_index_if_able(&the_index, lock_file);
+ rollback_lock_file(lock_file);
+
+ if (has_unstaged_changes(ignore_submodules)) {
+ /* TRANSLATORS: the action is e.g. "pull with rebase" */
+ error(_("cannot %s: You have unstaged changes."), _(action));
+ err = 1;
+ }
+
+ if (has_uncommitted_changes(ignore_submodules)) {
+ if (err)
+ error(_("additionally, your index contains uncommitted changes."));
+ else
+ error(_("cannot %s: Your index contains uncommitted changes."),
+ _(action));
+ err = 1;
+ }
+
+ if (err) {
+ if (hint)
+ error("%s", hint);
+ if (!gently)
+ exit(128);
+ }
+
+ return err;
+}
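
A minimal sketch, not part of the patch, of a hypothetical caller that has
already read the index; with gently set to 1 the helper reports the problem
and returns nonzero instead of exiting, so the caller can abort cleanly:

    /* the action string is of the "pull with rebase" form */
    if (require_clean_work_tree(N_("pull with rebase"),
                                _("Please commit or stash them."),
                                0, 1))
            return -1;
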
int worktree_status;
int index_status;
int stagemask;
+ int score;
+ int mode_head, mode_index, mode_worktree;
+ struct object_id oid_head, oid_index;
char *head_path;
unsigned dirty_submodule : 2;
unsigned new_submodule_commits : 1;
};
+enum wt_status_format {
+ STATUS_FORMAT_NONE = 0,
+ STATUS_FORMAT_LONG,
+ STATUS_FORMAT_SHORT,
+ STATUS_FORMAT_PORCELAIN,
+ STATUS_FORMAT_PORCELAIN_V2,
+
+ STATUS_FORMAT_UNSPECIFIED
+};
+
struct wt_status {
int is_initial;
char *branch;
int show_branch;
int hints;
+ enum wt_status_format status_format;
+ unsigned char sha1_commit[GIT_SHA1_RAWSZ]; /* when not Initial */
+
/* These are computed during processing of the individual sections */
int commitable;
int workdir_dirty;
int wt_status_check_bisect(const struct worktree *wt,
struct wt_status_state *state);
-void wt_shortstatus_print(struct wt_status *s);
-void wt_porcelain_print(struct wt_status *s);
-
__attribute__((format (printf, 3, 4)))
void status_printf_ln(struct wt_status *s, const char *color, const char *fmt, ...);
__attribute__((format (printf, 3, 4)))
void status_printf(struct wt_status *s, const char *color, const char *fmt, ...);
+/* The following functions expect that the caller took care of reading the index. */
+int has_unstaged_changes(int ignore_submodules);
+int has_uncommitted_changes(int ignore_submodules);
+int require_clean_work_tree(const char *action, const char *hint,
+ int ignore_submodules, int gently);
+
#endif /* STATUS_H */
return 0;
}
-void read_mmblob(mmfile_t *ptr, const unsigned char *sha1)
+void read_mmblob(mmfile_t *ptr, const struct object_id *oid)
{
unsigned long size;
enum object_type type;
- if (!hashcmp(sha1, null_sha1)) {
+ if (!oidcmp(oid, &null_oid)) {
ptr->ptr = xstrdup("");
ptr->size = 0;
return;
}
- ptr->ptr = read_sha1_file(sha1, &type, &size);
+ ptr->ptr = read_sha1_file(oid->hash, &type, &size);
if (!ptr->ptr || type != OBJ_BLOB)
- die("unable to read blob object %s", sha1_to_hex(sha1));
+ die("unable to read blob object %s", oid_to_hex(oid));
ptr->size = size;
}
#ifndef XDIFF_INTERFACE_H
#define XDIFF_INTERFACE_H
+#include "cache.h"
#include "xdiff/xdiff.h"
/*
int *ob, int *on,
int *nb, int *nn);
int read_mmfile(mmfile_t *ptr, const char *filename);
-void read_mmblob(mmfile_t *ptr, const unsigned char *sha1);
+void read_mmblob(mmfile_t *ptr, const struct object_id *oid);
int buffer_is_binary(const char *ptr, unsigned long size);
extern void xdiff_set_find_func(xdemitconf_t *xecfg, const char *line, int cflags);
#define XDF_IGNORE_BLANK_LINES (1 << 7)
#define XDF_COMPACTION_HEURISTIC (1 << 8)
+#define XDF_INDENT_HEURISTIC (1 << 9)
#define XDL_EMIT_FUNCNAMES (1 << 0)
#define XDL_EMIT_FUNCCONTEXT (1 << 2)
}
-static int is_blank_line(xrecord_t **recs, long ix, long flags)
+static int is_blank_line(xrecord_t *rec, long flags)
{
- return xdl_blankline(recs[ix]->ptr, recs[ix]->size, flags);
+ return xdl_blankline(rec->ptr, rec->size, flags);
}
-static int recs_match(xrecord_t **recs, long ixs, long ix, long flags)
+static int recs_match(xrecord_t *rec1, xrecord_t *rec2, long flags)
{
- return (recs[ixs]->ha == recs[ix]->ha &&
- xdl_recmatch(recs[ixs]->ptr, recs[ixs]->size,
- recs[ix]->ptr, recs[ix]->size,
+ return (rec1->ha == rec2->ha &&
+ xdl_recmatch(rec1->ptr, rec1->size,
+ rec2->ptr, rec2->size,
flags));
}
-int xdl_change_compact(xdfile_t *xdf, xdfile_t *xdfo, long flags) {
- long ix, ixo, ixs, ixref, grpsiz, nrec = xdf->nrec;
- char *rchg = xdf->rchg, *rchgo = xdfo->rchg;
- unsigned int blank_lines;
- xrecord_t **recs = xdf->recs;
+/*
+ * If a line is indented more than this, get_indent() just returns this value.
+ * This avoids having to do absurd amounts of work for data that are not
+ * human-readable text, and also ensures that the output of get_indent fits within
+ * an int.
+ */
+#define MAX_INDENT 200
+/*
+ * Return the amount of indentation of the specified line, treating TAB as 8
+ * columns. Return -1 if line is empty or contains only whitespace. Clamp the
+ * output value at MAX_INDENT.
+ */
+static int get_indent(xrecord_t *rec)
+{
+ long i;
+ int ret = 0;
+
+ for (i = 0; i < rec->size; i++) {
+ char c = rec->ptr[i];
+
+ if (!XDL_ISSPACE(c))
+ return ret;
+ else if (c == ' ')
+ ret += 1;
+ else if (c == '\t')
+ ret += 8 - ret % 8;
+ /* ignore other whitespace characters */
+
+ if (ret >= MAX_INDENT)
+ return MAX_INDENT;
+ }
+
+ /* The line contains only whitespace. */
+ return -1;
+}
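
As a worked example of the computation above: a line that begins with one tab
and then two spaces before its first non-blank character has an indent of
8 + 1 + 1 = 10; a line that begins with three spaces followed by a tab jumps
to the next multiple of eight, giving 8; and a line consisting only of
whitespace yields -1.
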
+
+/*
+ * If more than this number of consecutive blank lines is found, stop counting
+ * and use this value. This avoids requiring O(N^2) work for pathological cases,
+ * and also ensures that the output of score_add_split() fits in an int.
+ */
+#define MAX_BLANKS 20
+
+/* Characteristics measured about a hypothetical split position. */
+struct split_measurement {
/*
- * This is the same of what GNU diff does. Move back and forward
- * change groups for a consistent and pretty diff output. This also
- * helps in finding joinable change groups and reduce the diff size.
+ * Is the split at the end of the file (aside from any blank lines)?
*/
- for (ix = ixo = 0;;) {
- /*
- * Find the first changed line in the to-be-compacted file.
- * We need to keep track of both indexes, so if we find a
- * changed lines group on the other file, while scanning the
- * to-be-compacted file, we need to skip it properly. Note
- * that loops that are testing for changed lines on rchg* do
- * not need index bounding since the array is prepared with
- * a zero at position -1 and N.
- */
- for (; ix < nrec && !rchg[ix]; ix++)
- while (rchgo[ixo++]);
- if (ix == nrec)
+ int end_of_file;
+
+ /*
+ * How much is the line immediately following the split indented (or -1 if
+ * the line is blank):
+ */
+ int indent;
+
+ /*
+ * How many consecutive lines above the split are blank?
+ */
+ int pre_blank;
+
+ /*
+ * How much is the nearest non-blank line above the split indented (or -1
+ * if there is no such line)?
+ */
+ int pre_indent;
+
+ /*
+ * How many lines after the line following the split are blank?
+ */
+ int post_blank;
+
+ /*
+ * How much is the nearest non-blank line after the line following the
+ * split indented (or -1 if there is no such line)?
+ */
+ int post_indent;
+};
+
+struct split_score {
+ /* The effective indent of this split (smaller is preferred). */
+ int effective_indent;
+
+ /* Penalty for this split (smaller is preferred). */
+ int penalty;
+};
+
+/*
+ * Fill m with information about a hypothetical split of xdf above line split.
+ */
+static void measure_split(const xdfile_t *xdf, long split,
+ struct split_measurement *m)
+{
+ long i;
+
+ if (split >= xdf->nrec) {
+ m->end_of_file = 1;
+ m->indent = -1;
+ } else {
+ m->end_of_file = 0;
+ m->indent = get_indent(xdf->recs[split]);
+ }
+
+ m->pre_blank = 0;
+ m->pre_indent = -1;
+ for (i = split - 1; i >= 0; i--) {
+ m->pre_indent = get_indent(xdf->recs[i]);
+ if (m->pre_indent != -1)
+ break;
+ m->pre_blank += 1;
+ if (m->pre_blank == MAX_BLANKS) {
+ m->pre_indent = 0;
+ break;
+ }
+ }
+
+ m->post_blank = 0;
+ m->post_indent = -1;
+ for (i = split + 1; i < xdf->nrec; i++) {
+ m->post_indent = get_indent(xdf->recs[i]);
+ if (m->post_indent != -1)
break;
+ m->post_blank += 1;
+ if (m->post_blank == MAX_BLANKS) {
+ m->post_indent = 0;
+ break;
+ }
+ }
+}
+
+/*
+ * The empirically-determined weight factors used by score_add_split() below.
+ * Larger values mean that the position is a less favorable place to split.
+ *
+ * Note that scores are only ever compared against each other, so multiplying
+ * all of these weight/penalty values by the same factor wouldn't change the
+ * heuristic's behavior. Still, we need to set that arbitrary scale *somehow*.
+ * In practice, these numbers are chosen to be large enough that they can be
+ * adjusted relative to each other with sufficient precision despite using
+ * integer math.
+ */
+
+/* Penalty if there are no non-blank lines before the split */
+#define START_OF_FILE_PENALTY 1
+
+/* Penalty if there are no non-blank lines after the split */
+#define END_OF_FILE_PENALTY 21
+/* Multiplier for the number of blank lines around the split */
+#define TOTAL_BLANK_WEIGHT (-30)
+
+/* Multiplier for the number of blank lines after the split */
+#define POST_BLANK_WEIGHT 6
+
+/*
+ * Penalties applied if the line is indented more than its predecessor
+ */
+#define RELATIVE_INDENT_PENALTY (-4)
+#define RELATIVE_INDENT_WITH_BLANK_PENALTY 10
+
+/*
+ * Penalties applied if the line is indented less than both its predecessor and
+ * its successor
+ */
+#define RELATIVE_OUTDENT_PENALTY 24
+#define RELATIVE_OUTDENT_WITH_BLANK_PENALTY 17
+
+/*
+ * Penalties applied if the line is indented less than its predecessor but not
+ * less than its successor
+ */
+#define RELATIVE_DEDENT_PENALTY 23
+#define RELATIVE_DEDENT_WITH_BLANK_PENALTY 17
+
+/*
+ * We only consider whether one split's sum of effective indents is less than
+ * (-1), equal to (0), or greater than (+1) the other's. The resulting
+ * value is multiplied by the following weight and combined with the penalty to
+ * determine the better of two scores.
+ */
+#define INDENT_WEIGHT 60
+
+/*
+ * Compute a badness score for the hypothetical split whose measurements are
+ * stored in m. The weight factors were determined empirically using the tools and
+ * corpus described in
+ *
+ * https://github.com/mhagger/diff-slider-tools
+ *
+ * Also see that project if you want to improve the weights based on, for example,
+ * a larger or more diverse corpus.
+ */
+static void score_add_split(const struct split_measurement *m, struct split_score *s)
+{
+ /*
+ * Measurements around the split, derived from m, that feed the
+ * penalty and indent terms accumulated into s below:
+ */
+ int post_blank, total_blank, indent, any_blanks;
+
+ if (m->pre_indent == -1 && m->pre_blank == 0)
+ s->penalty += START_OF_FILE_PENALTY;
+
+ if (m->end_of_file)
+ s->penalty += END_OF_FILE_PENALTY;
+
+ /*
+ * Set post_blank to the number of blank lines following the split,
+ * including the line immediately after the split:
+ */
+ post_blank = (m->indent == -1) ? 1 + m->post_blank : 0;
+ total_blank = m->pre_blank + post_blank;
+
+ /* Penalties based on nearby blank lines: */
+ s->penalty += TOTAL_BLANK_WEIGHT * total_blank;
+ s->penalty += POST_BLANK_WEIGHT * post_blank;
+
+ if (m->indent != -1)
+ indent = m->indent;
+ else
+ indent = m->post_indent;
+
+ any_blanks = (total_blank != 0);
+
+ /* Note that the effective indent is -1 at the end of the file: */
+ s->effective_indent += indent;
+
+ if (indent == -1) {
+ /* No additional adjustments needed. */
+ } else if (m->pre_indent == -1) {
+ /* No additional adjustments needed. */
+ } else if (indent > m->pre_indent) {
+ /*
+ * The line is indented more than its predecessor.
+ */
+ s->penalty += any_blanks ?
+ RELATIVE_INDENT_WITH_BLANK_PENALTY :
+ RELATIVE_INDENT_PENALTY;
+ } else if (indent == m->pre_indent) {
+ /*
+ * The line has the same indentation level as its predecessor.
+ * No additional adjustments needed.
+ */
+ } else {
/*
- * Record the start of a changed-group in the to-be-compacted file
- * and find the end of it, on both to-be-compacted and other file
- * indexes (ix and ixo).
+ * The line is indented less than its predecessor. It could be
+ * the block terminator of the previous block, but it could
+ * also be the start of a new block (e.g., an "else" block, or
+ * maybe the previous block didn't have a block terminator).
+ * Try to distinguish those cases based on what comes next:
*/
- ixs = ix;
- for (ix++; rchg[ix]; ix++);
- for (; rchgo[ixo]; ixo++);
+ if (m->post_indent != -1 && m->post_indent > indent) {
+ /*
+ * The following line is indented more. So it is likely
+ * that this line is the start of a block.
+ */
+ s->penalty += any_blanks ?
+ RELATIVE_OUTDENT_WITH_BLANK_PENALTY :
+ RELATIVE_OUTDENT_PENALTY;
+ } else {
+ /*
+ * That was probably the end of a block.
+ */
+ s->penalty += any_blanks ?
+ RELATIVE_DEDENT_WITH_BLANK_PENALTY :
+ RELATIVE_DEDENT_PENALTY;
+ }
+ }
+}
+
+static int score_cmp(struct split_score *s1, struct split_score *s2)
+{
+ /* -1 if s1->effective_indent < s2->effective_indent, etc. */
+ int cmp_indents = ((s1->effective_indent > s2->effective_indent) -
+ (s1->effective_indent < s2->effective_indent));
+
+ return INDENT_WEIGHT * cmp_indents + (s1->penalty - s2->penalty);
+}
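
As a worked example of the comparison above: if split s1 has an effective
indent of 8 and a penalty of -30 while split s2 has an effective indent of 16
and a penalty of 21, then cmp_indents is -1 and score_cmp() returns
60 * (-1) + (-30 - 21) = -111; the negative result means s1 is the better
(lower-scoring) split.
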
+
+/*
+ * Represent a group of changed lines in an xdfile_t (i.e., a contiguous group
+ * of lines that was inserted or deleted from the corresponding version of the
+ * file). We consider there to be such a group at the beginning of the file, at
+ * the end of the file, and between any two unchanged lines, though most such
+ * groups will usually be empty.
+ *
+ * If the first line in a group is equal to the line following the group, then
+ * the group can be slid down. Similarly, if the last line in a group is equal
+ * to the line preceding the group, then the group can be slid up. See
+ * group_slide_down() and group_slide_up().
+ *
+ * Note that loops that are testing for changed lines in xdf->rchg do not need
+ * index bounding since the array is prepared with a zero at position -1 and N.
+ */
+struct xdlgroup {
+ /*
+ * The index of the first changed line in the group, or the index of
+ * the unchanged line above which the (empty) group is located.
+ */
+ long start;
+
+ /*
+ * The index of the first unchanged line after the group. For an empty
+ * group, end is equal to start.
+ */
+ long end;
+};
+
+/*
+ * Initialize g to point at the first group in xdf.
+ */
+static void group_init(xdfile_t *xdf, struct xdlgroup *g)
+{
+ g->start = g->end = 0;
+ while (xdf->rchg[g->end])
+ g->end++;
+}
+
+/*
+ * Move g to describe the next (possibly empty) group in xdf and return 0. If g
+ * is already at the end of the file, do nothing and return -1.
+ */
+static inline int group_next(xdfile_t *xdf, struct xdlgroup *g)
+{
+ if (g->end == xdf->nrec)
+ return -1;
+ g->start = g->end + 1;
+ for (g->end = g->start; xdf->rchg[g->end]; g->end++)
+ ;
+
+ return 0;
+}
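
As a concrete illustration of the helpers above: for a file whose rchg array
is {0, 1, 1, 0}, group_init() yields the empty group {start = 0, end = 0}
sitting above line 0, and the first group_next() call advances it to
{start = 1, end = 3}, covering the two changed lines.
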
+
+/*
+ * Move g to describe the previous (possibly empty) group in xdf and return 0.
+ * If g is already at the beginning of the file, do nothing and return -1.
+ */
+static inline int group_previous(xdfile_t *xdf, struct xdlgroup *g)
+{
+ if (g->start == 0)
+ return -1;
+
+ g->end = g->start - 1;
+ for (g->start = g->end; xdf->rchg[g->start - 1]; g->start--)
+ ;
+
+ return 0;
+}
+
+/*
+ * If g can be slid toward the end of the file, do so, and if it bumps into a
+ * following group, expand this group to include it. Return 0 on success or -1
+ * if g cannot be slid down.
+ */
+static int group_slide_down(xdfile_t *xdf, struct xdlgroup *g, long flags)
+{
+ if (g->end < xdf->nrec &&
+ recs_match(xdf->recs[g->start], xdf->recs[g->end], flags)) {
+ xdf->rchg[g->start++] = 0;
+ xdf->rchg[g->end++] = 1;
+
+ while (xdf->rchg[g->end])
+ g->end++;
+
+ return 0;
+ } else {
+ return -1;
+ }
+}
+
+/*
+ * If g can be slid toward the beginning of the file, do so, and if it bumps
+ * into a previous group, expand this group to include it. Return 0 on success
+ * or -1 if g cannot be slid up.
+ */
+static int group_slide_up(xdfile_t *xdf, struct xdlgroup *g, long flags)
+{
+ if (g->start > 0 &&
+ recs_match(xdf->recs[g->start - 1], xdf->recs[g->end - 1], flags)) {
+ xdf->rchg[--g->start] = 1;
+ xdf->rchg[--g->end] = 0;
+
+ while (xdf->rchg[g->start - 1])
+ g->start--;
+
+ return 0;
+ } else {
+ return -1;
+ }
+}
+
+static void xdl_bug(const char *msg)
+{
+ fprintf(stderr, "BUG: %s\n", msg);
+ exit(1);
+}
+
+/*
+ * Move back and forward change groups for a consistent and pretty diff output.
+ * This also helps in finding joinable change groups and reducing the diff
+ * size.
+ */
+int xdl_change_compact(xdfile_t *xdf, xdfile_t *xdfo, long flags) {
+ struct xdlgroup g, go;
+ long earliest_end, end_matching_other;
+ long groupsize;
+ unsigned int blank_lines;
+
+ group_init(xdf, &g);
+ group_init(xdfo, &go);
+
+ while (1) {
+ /* If the group is empty in the to-be-compacted file, skip it: */
+ if (g.end == g.start)
+ goto next;
+
+ /*
+ * Now shift the change up and then down as far as possible in
+ * each direction. If it bumps into any other changes, merge them.
+ */
do {
- grpsiz = ix - ixs;
- blank_lines = 0;
+ groupsize = g.end - g.start;
/*
- * If the line before the current change group, is equal to
- * the last line of the current change group, shift backward
- * the group.
+ * Keep track of the last "end" index that causes this
+ * group to align with a group of changed lines in the
+ * other file. -1 indicates that we haven't found such
+ * a match yet:
*/
- while (ixs > 0 && recs_match(recs, ixs - 1, ix - 1, flags)) {
- rchg[--ixs] = 1;
- rchg[--ix] = 0;
-
- /*
- * This change might have joined two change groups,
- * so we try to take this scenario in account by moving
- * the start index accordingly (and so the other-file
- * end-of-group index).
- */
- for (; rchg[ixs - 1]; ixs--);
- while (rchgo[--ixo]);
- }
+ end_matching_other = -1;
/*
- * Record the end-of-group position in case we are matched
- * with a group of changes in the other file (that is, the
- * change record before the end-of-group index in the other
- * file is set).
+ * Boolean value that records whether there are any blank
+ * lines that could be made to be the last line of this
+ * group.
*/
- ixref = rchgo[ixo - 1] ? ix: nrec;
+ blank_lines = 0;
+
+ /* Shift the group backward as much as possible: */
+ while (!group_slide_up(xdf, &g, flags))
+ if (group_previous(xdfo, &go))
+ xdl_bug("group sync broken sliding up");
/*
- * If the first line of the current change group, is equal to
- * the line next of the current change group, shift forward
- * the group.
+ * This is the highest that this group can be shifted.
+ * Record its end index:
*/
- while (ix < nrec && recs_match(recs, ixs, ix, flags)) {
- blank_lines += is_blank_line(recs, ix, flags);
-
- rchg[ixs++] = 0;
- rchg[ix++] = 1;
-
- /*
- * This change might have joined two change groups,
- * so we try to take this scenario in account by moving
- * the start index accordingly (and so the other-file
- * end-of-group index). Keep tracking the reference
- * index in case we are shifting together with a
- * corresponding group of changes in the other file.
- */
- for (; rchg[ix]; ix++);
- while (rchgo[++ixo])
- ixref = ix;
- }
- } while (grpsiz != ix - ixs);
+ earliest_end = g.end;
- /*
- * Try to move back the possibly merged group of changes, to match
- * the recorded position in the other file.
- */
- while (ixref < ix) {
- rchg[--ixs] = 1;
- rchg[--ix] = 0;
- while (rchgo[--ixo]);
- }
+ if (go.end > go.start)
+ end_matching_other = g.end;
+
+ /* Now shift the group forward as far as possible: */
+ while (1) {
+ if (!blank_lines)
+ blank_lines = is_blank_line(
+ xdf->recs[g.end - 1],
+ flags);
+
+ if (group_slide_down(xdf, &g, flags))
+ break;
+ if (group_next(xdfo, &go))
+ xdl_bug("group sync broken sliding down");
+
+ if (go.end > go.start)
+ end_matching_other = g.end;
+ }
+ } while (groupsize != g.end - g.start);
/*
- * If a group can be moved back and forth, see if there is a
- * blank line in the moving space. If there is a blank line,
- * make sure the last blank line is the end of the group.
+ * If the group can be shifted, then we can possibly use this
+ * freedom to produce a more intuitive diff.
*
- * As we already shifted the group forward as far as possible
- * in the earlier loop, we need to shift it back only if at all.
+ * The group is currently shifted as far down as possible, so the
+ * heuristics below only have to handle upwards shifts.
*/
- if ((flags & XDF_COMPACTION_HEURISTIC) && blank_lines) {
- while (ixs > 0 &&
- !is_blank_line(recs, ix - 1, flags) &&
- recs_match(recs, ixs - 1, ix - 1, flags)) {
- rchg[--ixs] = 1;
- rchg[--ix] = 0;
+
+ if (g.end == earliest_end) {
+ /* no shifting was possible */
+ } else if (end_matching_other != -1) {
+ /*
+ * Move the possibly merged group of changes back to line
+ * up with the last group of changes from the other file
+ * that it can align with.
+ */
+ while (go.end == go.start) {
+ if (group_slide_up(xdf, &g, flags))
+ xdl_bug("match disappeared");
+ if (group_previous(xdfo, &go))
+ xdl_bug("group sync broken sliding to match");
+ }
+ } else if ((flags & XDF_COMPACTION_HEURISTIC) && blank_lines) {
+ /*
+ * Compaction heuristic: if it is possible to shift the
+ * group to make its bottom line a blank line, do so.
+ *
+ * As we already shifted the group forward as far as
+ * possible in the earlier loop, we only need to handle
+ * backward shifts, not forward ones.
+ */
+ while (!is_blank_line(xdf->recs[g.end - 1], flags)) {
+ if (group_slide_up(xdf, &g, flags))
+ xdl_bug("blank line disappeared");
+ if (group_previous(xdfo, &go))
+ xdl_bug("group sync broken sliding to blank line");
+ }
+ } else if (flags & XDF_INDENT_HEURISTIC) {
+ /*
+ * Indent heuristic: a group of pure add/delete lines
+ * implies two splits, one between the end of the "before"
+ * context and the start of the group, and another between
+ * the end of the group and the beginning of the "after"
+ * context. Some splits are aesthetically better and some
+ * are worse. We compute a badness "score" for each split,
+ * and add the scores for the two splits to define a
+ * "score" for each position that the group can be shifted
+ * to. Then we pick the shift with the lowest score.
+ */
+ long shift, best_shift = -1;
+ struct split_score best_score;
+
+ for (shift = earliest_end; shift <= g.end; shift++) {
+ struct split_measurement m;
+ struct split_score score = {0, 0};
+
+ measure_split(xdf, shift, &m);
+ score_add_split(&m, &score);
+ measure_split(xdf, shift - groupsize, &m);
+ score_add_split(&m, &score);
+ if (best_shift == -1 ||
+ score_cmp(&score, &best_score) <= 0) {
+ best_score.effective_indent = score.effective_indent;
+ best_score.penalty = score.penalty;
+ best_shift = shift;
+ }
+ }
+
+ while (g.end > best_shift) {
+ if (group_slide_up(xdf, &g, flags))
+ xdl_bug("best shift unreached");
+ if (group_previous(xdfo, &go))
+ xdl_bug("group sync broken sliding to blank line");
}
}
+
+ next:
+ /* Move past the just-processed group: */
+ if (group_next(xdf, &g))
+ break;
+ if (group_next(xdfo, &go))
+ xdl_bug("group sync broken moving to next group");
}
+ if (!group_next(xdfo, &go))
+ xdl_bug("group sync broken at end of file");
+
return 0;
}