important for those who are used to "git add -u" (without pathspec)
updating the index only for paths in the current subdirectory to start
training their fingers to explicitly say "git add -u ." when they mean
-it before Git 2.0 comes.
+it before Git 2.0 comes. A warning is issued when these commands are
+run without a pathspec and when you have local changes outside the
+current directory, because the behaviour in Git 2.0 will be different
+from today's version in such a situation.
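
For example, starting from a subdirectory of the work tree, the two
explicit forms look like this (an illustrative sketch; the directory
name is arbitrary):

    $ cd Documentation/
    $ git add -u .      # restrict the update to the current directory
    $ git add -u :/     # update all tracked paths in the whole tree
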
Updates since v1.8.2
UI, Workflows & Features
+ * "git branch --vv" learned to paint the name of the branch it
+ integrates with in a different color (color.branch.upstream,
+ which defaults to blue).
+
+ * "git count-objects" learned "--human-readable" aka "-H" option to
+ show various large numbers in Ki/Mi/GiB scaled as necessary.
+
+ * "git cherry-pick $blob" and "git cherry-pick $tree" are nonsense,
+ and a more readable error message e.g. "can't cherry-pick a tree"
+ is given (we used to say "expected exactly one commit").
+
+ * The "--annotate" option to "git send-email" can be turned on (or
+ off) by default with sendemail.annotate configuration variable (you
+ can use --no-annotate from the command line to override it).
+
+ * The "--cover-letter" option to "git format-patch" can be turned on
+ (or off) by default with format.coverLetter configuration
+ variable. By setting it to 'auto', you can turn it on only for a
+ series with two or more patches.
+
+ * The bash completion support (in contrib/) learned that cherry-pick
+ takes a few more options than it already knew about.
+
* "git help" learned "-g" option to show the list of guides just like
list of commands are given with "-a".
ref by specifying a raw object name from the command line when the
server side supports this feature.
+ * Output from "git log --graph" works better with submodule log
+ output now.
+
* "git count-objects -v" learned to report leftover temporary
packfiles and other garbage in the object store.
* Updates for building under msvc.
+ * The stack footprint of some codepaths that access an object from a
+ pack has been shrunk.
+
* The logic to coalesce the same lines removed from the parents in
the output from "diff -c/--cc" has been updated, but with an O(n^2)
complexity, so this might turn out to be undesirable.
conflicts have been applied.
-Also contains minor documentation updates and code clean-ups.
+Also contains various documentation updates and code clean-ups.
Fixes since v1.8.2
track are contained in this release (see release notes to them for
details).
+ * When "upload-pack" fails while generating a pack in response to
+ "git fetch" (or "git clone"), the receiving side mistakenly said
+ there was a programming error to trigger the die handler
+ recursively.
+ (merge 1ece66b jk/a-thread-only-dies-once later to maint).
+
+ * "rev-list --stdin" and friends kept bogus pointers into input
+ buffer around as human readble object names. This was not a huge
+ problem but was exposed by a new change that uses these names in
+ error output.
+ (merge 70d26c6 tr/copy-revisions-from-stdin later to maint).
+
+ * Smart-capable HTTP servers were not restricted via the
+ GIT_NAMESPACE mechanism when talking with commit-walker clients,
+ as they are when talking with smart HTTP clients.
+ (merge 6130f86 jk/http-dumb-namespaces later to maint).
+
+ * "git merge-tree" did not omit a merge result that is identical to
+ "our" side in certain cases.
+ (merge aacecc3 jk/merge-tree-added-identically later to maint).
+
+ * Perl scripts like "git-svn" closed (not redirecting to /dev/null)
+ the standard error stream, which is not a very smart thing to do.
+ Later open may return file descriptor #2 for unrelated purpose, and
+ error reporting code may write into them.
+ (merge a749c0b tr/perl-keep-stderr-open later to maint).
+
+ * "git show-branch" was not prepared to show a very long run of
+ ancestor operators e.g. foobar^2~2^2^2^2...^2~4 correctly.
+ (merge aaa07e3 jk/show-branch-strbuf later to maint).
+
+ * "git diff --diff-algorithm algo" is also understood as "git diff
+ --diff-algorithm=algo".
+ (merge 0895c6d jk/diff-algo-finishing-touches later to maint).
+
+ * The new core.commentchar configuration was not applied to a few
+ places.
+ (merge 89c3bbd rt/commentchar-fmt-merge-msg later to maint).
+
+ * "git bundle" did not like a bundle created using a commit without
+ any message as its one of the prerequistes.
+ (merge 5446e33 lf/bundle-with-tip-wo-message later to maint).
+
* "git log -S/-G" started paying attention to textconv filter, but
there was no way to disable this. Make it honor --no-textconv
option.
core.logAllRefUpdates::
Enable the reflog. Updates to a ref <ref> are logged to the file
"$GIT_DIR/logs/<ref>", by appending the new and old
- SHA1, the date/time and the reason of the update, but
+ SHA-1, the date/time and the reason of the update, but
only when the file exists. If this configuration
variable is set to true, a missing "$GIT_DIR/logs/<ref>"
file is automatically created for branch heads (i.e. under
color.branch.<slot>::
Use customized color for branch coloration. `<slot>` is one of
`current` (the current branch), `local` (a local branch),
- `remote` (a remote-tracking branch in refs/remotes/), `plain` (other
+ `remote` (a remote-tracking branch in refs/remotes/),
+ `upstream` (upstream tracking branch), `plain` (other
refs).
+
The value for these configuration variables is a list of colors (at most
the rights to submit this work under the same open source license.
Please see the 'SubmittingPatches' document for further discussion.
+format.coverLetter::
+ A boolean that controls whether to generate a cover-letter when
+ format-patch is invoked, but in addition can be set to "auto", to
+ generate a cover-letter only when there's more than one patch.
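
For example, one way to turn on the "auto" behaviour (a minimal sketch
using the stock `git config` command):

------------------------------------------------
$ git config format.coverLetter auto
------------------------------------------------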
+
filter.<driver>.clean::
The command which is used to convert the content of a worktree
file to a blob upon checkin. See linkgit:gitattributes[5] for
with when fetching or pushing over HTTPS. Can be overridden
by the 'GIT_SSL_CAPATH' environment variable.
+http.sslTry::
+ Attempt to use AUTH SSL/TLS and encrypted data transfers
+ when connecting via the regular FTP protocol. This might be needed
+ if the FTP server requires it for security reasons or you wish
+ to connect securely whenever the remote FTP server supports it.
+ Default is false since it might trigger certificate verification
+ errors on misconfigured servers.
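
For example, to opt in (a sketch; this only matters when Git talks to
an FTP server):

------------------------------------------------
$ git config http.sslTry true
------------------------------------------------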
+
http.maxRequests::
How many HTTP requests to launch in parallel. Can be overridden
by the 'GIT_HTTP_MAX_REQUESTS' environment variable. Default is 5.
sendemail.aliasesfile::
sendemail.aliasfiletype::
+sendemail.annotate::
sendemail.bcc::
sendemail.cc::
sendemail.cccmd::
(which implies type "blob").
In the second form, a list of objects (separated by linefeeds) is provided on
-stdin, and the SHA1, type, and size of each object is printed on stdout.
+stdin, and the SHA-1, type, and size of each object are printed on stdout.
OPTIONS
-------
to apply the filter to the content recorded in the index at <path>.
--batch::
- Print the SHA1, type, size, and contents of each object provided on
+ Print the SHA-1, type, size, and contents of each object provided on
stdin. May not be combined with any other options or arguments.
--batch-check::
- Print the SHA1, type, and size of each object provided on stdin. May not
+ Print the SHA-1, type, and size of each object provided on stdin. May not
be combined with any other options or arguments.
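
As a rough sketch of how the batch forms are typically driven, object
names can be piped in one per line; here `HEAD` merely supplies a
stream of commit names, and any repository will do:

------------------------------------------------
$ git rev-list HEAD | git cat-file --batch-check
------------------------------------------------

Replacing `--batch-check` with `--batch` in the pipeline above also
dumps the contents of each object.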
OUTPUT
SYNOPSIS
--------
[verse]
-'git count-objects' [-v]
+'git count-objects' [-v] [-H | --human-readable]
DESCRIPTION
-----------
+
count: the number of loose objects
+
-size: disk space consumed by loose objects, in KiB
+size: disk space consumed by loose objects, in KiB (unless -H is specified)
+
in-pack: the number of in-pack objects
+
-size-pack: disk space consumed by the packs, in KiB
+size-pack: disk space consumed by the packs, in KiB (unless -H is specified)
+
prune-packable: the number of loose objects that are also present in
the packs. These objects could be pruned using `git prune-packed`.
garbage: the number of files in the object database that are neither
valid loose objects nor valid packs
+
-size-garbage: disk space consumed by garbage files, in KiB
+size-garbage: disk space consumed by garbage files, in KiB (unless -H is
+specified)
+
+-H::
+--human-readable::
+	Print sizes in human readable format
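
A minimal usage sketch; the figures shown are illustrative, not output
from any particular repository:

------------------------------------------------
$ git count-objects -H
1500 objects, 5.37 MiB
------------------------------------------------

The option applies to the size fields of the `-v` output in the same
way.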
GIT
---
Giving these options is an error when used with `--inetd`; use
the facility of inet daemon to achieve the same before spawning
'git daemon' if needed.
++
+Like many programs that switch user id, the daemon does not reset
+environment variables such as `$HOME` when it runs git programs,
+e.g. `upload-pack` and `receive-pack`. When using this option, you
+may also want to set and export `HOME` to point at the home
+directory of `<user>` before starting the daemon, and make sure any
+Git configuration files in that directory are readable by `<user>`.
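
A sketch of what that can look like (the user name and paths are
illustrative only):

------------------------------------------------
$ export HOME=/home/git-daemon
$ git daemon --user=git-daemon --group=git-daemon \
	--base-path=/srv/git --export-all
------------------------------------------------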
--enable=<service>::
--disable=<service>::
If an exact match was not found, 'git describe' will walk back
through the commit history to locate an ancestor commit which
has been tagged. The ancestor's tag will be output along with an
-abbreviation of the input committish's SHA1.
+abbreviation of the input committish's SHA-1.
If multiple tags were found during the walk then the tag which
has the fewest commits different from the input committish will be
[--ignore-if-in-upstream]
[--subject-prefix=Subject-Prefix] [(--reroll-count|-v) <n>]
[--to=<email>] [--cc=<email>]
- [--cover-letter] [--quiet] [--notes[=<ref>]]
+ [--[no-]cover-letter] [--quiet] [--notes[=<ref>]]
[<common diff options>]
[ <since> | <revision range> ]
`Cc:`, and custom) headers added so far from config or command
line.
---cover-letter::
+--[no-]cover-letter::
In addition to the patches, generate a cover letter file
containing the shortlog and the overall diffstat. You can
fill in a description in the file before sending it out.
cc = <email>
attach [ = mime-boundary-string ]
signoff = true
+ coverletter = auto
------------
An object to treat as the head of an unreachability trace.
+
If no objects are given, 'git fsck' defaults to using the
-index file, all SHA1 references in `refs` namespace, and all reflogs
+index file, all SHA-1 references in `refs` namespace, and all reflogs
(unless --no-reflogs is given) as heads.
--unreachable::
DISCUSSION
----------
-git-fsck tests SHA1 and general object sanity, and it does full tracking
+git-fsck tests SHA-1 and general object sanity, and it does full tracking
of the resulting reachability and everything else. It prints out any
corruption it finds (missing or bad objects), and if you use the
'--unreachable' flag it will also print out objects that exist but that
----------------------------------------------------------------
+
To enable anonymous read access but authenticated write access,
-require authorization with a LocationMatch directive:
+require authorization for both the initial ref advertisement (which we
+detect as a push via the service parameter in the query string), and the
+receive-pack invocation itself:
++
+----------------------------------------------------------------
+RewriteCond %{QUERY_STRING} service=git-receive-pack [OR]
+RewriteCond %{REQUEST_URI} /git-receive-pack$
+RewriteRule ^/git/ - [E=AUTHREQUIRED:yes]
+
+<LocationMatch "^/git/">
+ Order Deny,Allow
+ Deny from env=AUTHREQUIRED
+
+ AuthType Basic
+ AuthName "Git Access"
+ Require group committers
+ Satisfy Any
+ ...
+</LocationMatch>
+----------------------------------------------------------------
++
+If you do not have `mod_rewrite` available to match against the query
+string, it is sufficient to just protect `git-receive-pack` itself,
+like:
+
----------------------------------------------------------------
<LocationMatch "^/git/.*/git-receive-pack$">
</LocationMatch>
----------------------------------------------------------------
+
+In this mode, the server will not request authentication until the
+client actually starts the object negotiation phase of the push, rather
+than during the initial contact. For this reason, you must also enable
+the `http.receivepack` config option in any repositories that should
+accept a push. The default behavior, if `http.receivepack` is not set,
+is to reject any pushes by unauthenticated users; the initial request
+will therefore report `403 Forbidden` to the client, without even giving
+an opportunity for authentication.
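
For example, to allow authenticated pushes into a particular repository
(a sketch; run it inside that repository on the server):

------------------------------------------------
$ git config http.receivepack true
------------------------------------------------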
++
To require authentication for both reads and writes, use a Location
directive around the repository, or one of its parent directories:
+
ScriptAlias /git/ /var/www/cgi-bin/gitweb.cgi/
----------------------------------------------------------------
+Lighttpd::
+ Ensure that `mod_cgi`, `mod_alias`, `mod_auth`, `mod_setenv` are
+ loaded, then set `GIT_PROJECT_ROOT` appropriately and redirect
+ all requests to the CGI:
++
+----------------------------------------------------------------
+alias.url += ( "/git" => "/usr/lib/git-core/git-http-backend" )
+$HTTP["url"] =~ "^/git" {
+ cgi.assign = ("" => "")
+ setenv.add-environment = (
+ "GIT_PROJECT_ROOT" => "/var/www/git",
+ "GIT_HTTP_EXPORT_ALL" => ""
+ )
+}
+----------------------------------------------------------------
++
+To enable anonymous read access but authenticated write access:
++
+----------------------------------------------------------------
+$HTTP["querystring"] =~ "service=git-receive-pack" {
+ include "git-auth.conf"
+}
+$HTTP["url"] =~ "^/git/.*/git-receive-pack$" {
+ include "git-auth.conf"
+}
+----------------------------------------------------------------
++
+where `git-auth.conf` looks something like:
++
+----------------------------------------------------------------
+auth.require = (
+ "/" => (
+ "method" => "basic",
+ "realm" => "Git Access",
+ "require" => "valid-user"
+ )
+)
+# ...and set up auth.backend here
+----------------------------------------------------------------
++
+To require authentication for both reads and writes:
++
+----------------------------------------------------------------
+$HTTP["url"] =~ "^/git/private" {
+ include "git-auth.conf"
+}
+----------------------------------------------------------------
+
ENVIRONMENT
-----------
----
Once the index has been created, the list of object names is sorted
-and the SHA1 hash of that list is printed to stdout. If --stdin was
+and the SHA-1 hash of that list is printed to stdout. If --stdin was
also used then this is prefixed by either "pack\t", or "keep\t" if a
new .keep file was successfully created. This is useful to remove a
.keep file used as a lock to prevent the race with 'git repack'
'git ls-files --unmerged' and 'git ls-files --stage' can be used to examine
detailed information on unmerged paths.
-For an unmerged path, instead of recording a single mode/SHA1 pair,
+For an unmerged path, instead of recording a single mode/SHA-1 pair,
the index records up to three such pairs; one from tree O in stage
1, A in stage 2, and B in stage 3. This information can be used by
the user (or the porcelain) to see what should eventually be recorded at the
DESCRIPTION
-----------
This looks up the <file>(s) in the index and, if there are any merge
-entries, passes the SHA1 hash for those files as arguments 1, 2, 3 (empty
+entries, passes the SHA-1 hash for those files as arguments 1, 2, 3 (empty
argument if no file), and <file> as argument 4. File modes for the three
files are passed as arguments 5, 6 and 7.
Write into a pair of files (.pack and .idx), using
<base-name> to determine the name of the created file.
When this option is used, the two files are written in
- <base-name>-<SHA1>.{pack,idx} files. <SHA1> is a hash
+ <base-name>-<SHA-1>.{pack,idx} files. <SHA-1> is a hash
of the sorted object names to make the resulting filename
based on the pack content, and written to the standard
output of the command.
DESCRIPTION
-----------
-A "patch ID" is nothing but a SHA1 of the diff associated with a patch, with
+A "patch ID" is nothing but a SHA-1 of the diff associated with a patch, with
whitespace and line numbers ignored. As such, it's "reasonably stable", but at
the same time also reasonably unique, i.e., two patches that have the same "patch
ID" are almost guaranteed to be the same thing.
-----------
Adds a 'replace' reference in `refs/replace/` namespace.
-The name of the 'replace' reference is the SHA1 of the object that is
-replaced. The content of the 'replace' reference is the SHA1 of the
+The name of the 'replace' reference is the SHA-1 of the object that is
+replaced. The content of the 'replace' reference is the SHA-1 of the
replacement object.
Unless `-f` is given, the 'replace' reference must not yet exist.
one.
--symbolic::
- Usually the object names are output in SHA1 form (with
+ Usually the object names are output in SHA-1 form (with
possible '{caret}' prefix); this option makes them output in a
form as close to the original input as possible.
--short::
--short=number::
- Instead of outputting the full SHA1 values of object names try to
+ Instead of outputting the full SHA-1 values of object names try to
abbreviate them to a shorter unique name. When no length is specified
7 is used. The minimum length is 4.
~~~~~~~~~
--annotate::
- Review and edit each patch you're about to send. See the
- CONFIGURATION section for 'sendemail.multiedit'.
+ Review and edit each patch you're about to send. Default is the value
+ of 'sendemail.annotate'. See the CONFIGURATION section for
+ 'sendemail.multiedit'.
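
For example, to review every patch by default but skip the editor for
one particular run (a sketch; the patch file name is made up):

------------------------------------------------
$ git config sendemail.annotate true
$ git send-email --no-annotate 0001-some-fix.patch
------------------------------------------------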
--bcc=<address>::
Specify a "Bcc:" value for each email. Default is the value of
OPTIONS
-------
<rev>::
- Arbitrary extended SHA1 expression (see linkgit:gitrevisions[7])
+ Arbitrary extended SHA-1 expression (see linkgit:gitrevisions[7])
that typically names a branch head or a tag.
<glob>::
branch, the I-th indentation character shows a `+` sign;
otherwise it shows a space. Merge commits are denoted by
a `-` sign. Each commit shows a short name that
-can be used as an extended SHA1 to name that commit.
+can be used as an extended SHA-1 to name that commit.
The following example shows three branches, "master", "fixes"
and "mhf":
The information it outputs is a subset of what you can get from
'git verify-pack -v'; this command only shows the packfile
-offset and SHA1 of each object.
+offset and SHA-1 of each object.
GIT
---
-s::
--hash[=<n>]::
- Only show the SHA1 hash, not the reference name. When combined with
- --dereference the dereferenced tag will still be shown after the SHA1.
+ Only show the SHA-1 hash, not the reference name. When combined with
+ --dereference the dereferenced tag will still be shown after the SHA-1.
--verify::
If `-m <msg>` or `-F <file>` is given and `-a`, `-s`, and `-u <key-id>`
are absent, `-a` is implied.
-Otherwise just a tag reference for the SHA1 object name of the commit object is
+Otherwise just a tag reference for the SHA-1 object name of the commit object is
created (i.e. a lightweight tag).
A GnuPG signed tag object will be created when `-s` or `-u
------------
The first line of the input feeds 0 as the mode to remove the
-path; the SHA1 does not matter as long as it is well formatted.
+path; the SHA-1 does not matter as long as it is well formatted.
Then the second and third line feeds stage 1 and stage 2 entries
for that path. After the above, we would end up with this:
-------------
When specifying the -v option the format used is:
- SHA1 type size size-in-pack-file offset-in-packfile
+ SHA-1 type size size-in-pack-file offset-in-packfile
for objects that are not deltified in the pack, and
- SHA1 type size size-in-packfile offset-in-packfile depth base-SHA1
+ SHA-1 type size size-in-packfile offset-in-packfile depth base-SHA-1
for objects that are deltified.
Print the contents of the tag object before validating it.
<tag>...::
- SHA1 identifiers of Git tag objects.
+ SHA-1 identifiers of Git tag objects.
GIT
---
<old|new>-file:: are files GIT_EXTERNAL_DIFF can use to read the
contents of <old|new>,
- <old|new>-hex:: are the 40-hexdigit SHA1 hashes,
+ <old|new>-hex:: are the 40-hexdigit SHA-1 hashes,
<old|new>-mode:: are the octal representation of the file modes.
+
The file parameters can point at the user's working file
represents an immediately preceding step. Commits with more than one
parent represent merges of independent lines of development.
-All objects are named by the SHA1 hash of their contents, normally
+All objects are named by the SHA-1 hash of their contents, normally
written as a string of 40 hex digits. Such names are globally unique.
The entire history leading up to a commit can be vouched for by signing
just that commit. A fourth object type, the tag, is provided for this
efficiency may later be compressed together into "pack files".
Named pointers called refs mark interesting points in history. A ref
-may contain the SHA1 name of an object or the name of another ref. Refs
-with names beginning `ref/head/` contain the SHA1 name of the most
-recent commit (or "head") of a branch under development. SHA1 names of
+may contain the SHA-1 name of an object or the name of another ref. Refs
+with names beginning `refs/heads/` contain the SHA-1 name of the most
+recent commit (or "head") of a branch under development. SHA-1 names of
tags of interest are stored under `refs/tags/`. A special ref named
`HEAD` contains the name of the currently checked-out branch.
valid, though.
[NOTE]
-An 'object' is identified by its 160-bit SHA1 hash, aka 'object name',
+An 'object' is identified by its 160-bit SHA-1 hash, aka 'object name',
and a reference to an object is always the 40-byte hex
-representation of that SHA1 name. The files in the `refs`
+representation of that SHA-1 name. The files in the `refs`
subdirectory are expected to contain these hex references
(usually with a final `\n` at the end), and you should thus
expect to see a number of 41-byte files containing these
these object pointers.
You can at any time create a new branch by just picking an arbitrary
-point in the project history, and just writing the SHA1 name of that
+point in the project history, and just writing the SHA-1 name of that
object into a file under `.git/refs/heads/`. You can use any filename you
want (and indeed, subdirectories), but the convention is that the
"normal" branch is called `master`. That's just a convention, though,
etc.). After reading three trees into three stages, the paths
that are the same in all three stages are 'collapsed' into stage
0. Also paths that are the same in two of three stages are
-collapsed into stage 0, taking the SHA1 from either stage 2 or
+collapsed into stage 0, taking the SHA-1 from either stage 2 or
stage 3, whichever is different from stage 1 (i.e. only one side
changed from the common ancestor).
For the purpose of breaking a filepair, diffcore-break examines
the extent of changes between the contents of the files before
and after modification (i.e. the contents that have "bcd1234..."
-and "0123456..." as their SHA1 content ID, in the above
+and "0123456..." as their SHA-1 content ID, in the above
example). The amount of deletion of original contents and
insertion of new material are added together, and if it exceeds
the "break score", the filepair is broken into two. The break
configuration option `commit.template` is set); `merge` (if the
commit is a merge or a `.git/MERGE_MSG` file exists); `squash`
(if a `.git/SQUASH_MSG` file exists); or `commit`, followed by
-a commit SHA1 (if a `-c`, `-C` or `--amend` option was given).
+a commit SHA-1 (if a `-c`, `-C` or `--amend` option was given).
If the exit status is non-zero, 'git commit' will abort.
refs/heads/master 67890 refs/heads/foreign 12345
-although the full, 40-character SHA1s would be supplied. If the foreign ref
-does not yet exist the `<remote SHA1>` will be 40 `0`. If a ref is to be
+although the full, 40-character SHA-1s would be supplied. If the foreign ref
+does not yet exist the `<remote SHA-1>` will be 40 `0`. If a ref is to be
deleted, the `<local ref>` will be supplied as `(delete)` and the `<local
-SHA1>` will be 40 `0`. If the local commit was specified by something other
-than a name which could be expanded (such as `HEAD~`, or a SHA1) it will be
+SHA-1>` will be 40 `0`. If the local commit was specified by something other
+than a name which could be expanded (such as `HEAD~`, or a SHA-1) it will be
supplied as it was originally given.
If this hook exits with a non-zero status, 'git push' will abort without
from a remote repository.
refs/replace/`<obj-sha1>`::
- records the SHA1 of the object that replaces `<obj-sha1>`.
+ records the SHA-1 of the object that replaces `<obj-sha1>`.
This is similar to info/grafts and is internally used and
maintained by linkgit:git-replace[1]. Such refs can be exchanged
between repositories while grafts are not.
We saw in part one of the tutorial that commits have names like this.
It turns out that every object in the Git history is stored under
-a 40-digit hex name. That name is the SHA1 hash of the object's
+a 40-digit hex name. That name is the SHA-1 hash of the object's
contents; among other things, this ensures that Git will never store
-the same data twice (since identical data is given an identical SHA1
+the same data twice (since identical data is given an identical SHA-1
name), and that the contents of a Git object will never change (since
that would change the object's name as well). The 7 char hex strings
here are simply the abbreviation of such 40 character long strings.
can be used, so long as they are unambiguous.
It is expected that the content of the commit object you created while
-following the example above generates a different SHA1 hash than
+following the example above generates a different SHA-1 hash than
the one shown above because the commit object records the time when
it was created and the name of the person performing the commit.
a file. In addition, a tree can also refer to other tree objects,
thus creating a directory hierarchy. You can examine the contents of
any tree using ls-tree (remember that a long enough initial portion
-of the SHA1 will also work):
+of the SHA-1 will also work):
------------------------------------------------
$ git ls-tree 92b8b694
100644 blob 3b18e512dba79e4c8300dd08aeb37f8e728b8dad file.txt
------------------------------------------------
-Thus we see that this tree has one file in it. The SHA1 hash is a
+Thus we see that this tree has one file in it. The SHA-1 hash is a
reference to that file's data:
------------------------------------------------
its response to the initial tree was a tree with a snapshot of the
directory state that was recorded by the first commit.
-All of these objects are stored under their SHA1 names inside the Git
+All of these objects are stored under their SHA-1 names inside the Git
directory:
------------------------------------------------
As you can see, this tells us which branch we're currently on, and it
tells us this by naming a file under the .git directory, which itself
-contains a SHA1 name referring to a commit object, which we can
+contains a SHA-1 name referring to a commit object, which we can
examine with cat-file:
------------------------------------------------
Note, by the way, that lots of commands take a tree as an argument.
But as we can see above, a tree can be referred to in many different
-ways--by the SHA1 name for that tree, by the name of a commit that
+ways--by the SHA-1 name for that tree, by the name of a commit that
refers to the tree, by the name of a branch whose head refers to that
tree, etc.--and most such commands can accept any of these names.
[[def_detached_HEAD]]detached HEAD::
Normally the <<def_HEAD,HEAD>> stores the name of a
- <<def_branch,branch>>. However, Git also allows you to <<def_checkout,check out>>
- an arbitrary <<def_commit,commit>> that isn't necessarily the tip of any
- particular branch. In this case HEAD is said to be "detached".
-
-[[def_dircache]]dircache::
- You are *waaaaay* behind. See <<def_index,index>>.
+ <<def_branch,branch>>, and commands that operate on the
+ history HEAD represents operate on the history leading to the
+ tip of the branch the HEAD points at. However, Git also
+ allows you to <<def_checkout,check out>> an arbitrary
+ <<def_commit,commit>> that isn't necessarily the tip of any
+ particular branch. The HEAD in such a state is called
+ "detached".
++
+Note that commands that operate on the history of the current branch
+(e.g. `git commit` to build a new history on top of it) still work
+while the HEAD is detached. They update the HEAD to point at the tip
+of the updated history without affecting any branch. Commands that
+update or inquire information _about_ the current branch (e.g. `git
+branch --set-upstream-to` that sets what remote-tracking branch the
+current branch integrates with) obviously do not work, as there is no
+(real) current branch to ask about in this state.
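
A short illustration of the distinction (a sketch; the tag and remote
names are only examples):

------------------------------------------------
$ git checkout v1.8.2          # HEAD becomes detached at the tag
$ git commit -m "experiment"   # still works; no branch is updated
$ git branch --set-upstream-to=origin/master
                               # fails: there is no current branch
------------------------------------------------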
[[def_directory]]directory::
The list you get with "ls" :-)
it contains modifications which have not been <<def_commit,committed>> to the current
<<def_branch,branch>>.
-[[def_ent]]ent::
- Favorite synonym to "<<def_tree-ish,tree-ish>>" by some total geeks. See
- http://en.wikipedia.org/wiki/Ent_(Middle-earth) for an in-depth
- explanation. Avoid this term, not to confuse people.
-
[[def_evil_merge]]evil merge::
An evil merge is a <<def_merge,merge>> that introduces changes that
do not appear in any <<def_parent,parent>>.
created. Configured via the `.git/info/grafts` file.
[[def_hash]]hash::
- In Git's context, synonym to <<def_object_name,object name>>.
+ In Git's context, synonym for <<def_object_name,object name>>.
[[def_head]]head::
A <<def_ref,named reference>> to the <<def_commit,commit>> at the tip of a
[[def_object]]object::
The unit of storage in Git. It is uniquely identified by the
- <<def_SHA1,SHA1>> of its contents. Consequently, an
+ <<def_SHA1,SHA-1>> of its contents. Consequently, an
object can not be changed.
[[def_object_database]]object database::
Synonym for <<def_object_name,object name>>.
[[def_object_name]]object name::
- The unique identifier of an <<def_object,object>>. The <<def_hash,hash>>
- of the object's contents using the Secure Hash Algorithm
- 1 and usually represented by the 40 character hexadecimal encoding of
- the <<def_hash,hash>> of the object.
+ The unique identifier of an <<def_object,object>>. The
+ object name is usually represented by a 40 character
+ hexadecimal string. Also colloquially called <<def_SHA1,SHA-1>>.
[[def_object_type]]object type::
One of the identifiers "<<def_commit_object,commit>>",
<<def_object,object>>.
[[def_octopus]]octopus::
- To <<def_merge,merge>> more than two <<def_branch,branches>>. Also denotes an
- intelligent predator.
+ To <<def_merge,merge>> more than two <<def_branch,branches>>.
[[def_origin]]origin::
The default upstream <<def_repository,repository>>. Most projects have
pack.
[[def_pathspec]]pathspec::
- Pattern used to specify paths.
+ Pattern used to limit paths in Git commands.
+
Pathspecs are used on the command line of "git ls-files", "git
ls-tree", "git add", "git grep", "git diff", "git checkout",
worktree. See the documentation of each command for whether
paths are relative to the current directory or toplevel. The
pathspec syntax is as follows:
++
+--
* any path matches itself
* the pathspec up to the last slash represents a
of the pathname. Paths relative to the directory
prefix will be matched against that pattern using fnmatch(3);
in particular, '*' and '?' _can_ match directory separators.
+
+--
+
For example, Documentation/*.jpg will match all .jpg files
in the Documentation subtree,
including Documentation/chapter_1/figure_1.jpg.
-
+
A pathspec that begins with a colon `:` has special meaning. In the
short form, the leading colon `:` is followed by zero or more "magic
against the path.
+
The "magic signature" consists of an ASCII symbol that is not
-alphanumeric.
-+
---
-top `/`;;
- The magic word `top` (mnemonic: `/`) makes the pattern match
- from the root of the working tree, even when you are running
- the command from inside a subdirectory.
---
-+
-Currently only the slash `/` is recognized as the "magic signature",
-but it is envisioned that we will support more types of magic in later
-versions of Git.
+alphanumeric. Currently only the slash `/` is recognized as a
+"magic signature": it makes the pattern match from the root of
+the working tree, even when you are running the command from
+inside a subdirectory.
+
A pathspec with only a colon means "there is no pathspec". This form
should not be combined with any other pathspec.
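
For instance, run from inside a subdirectory (the paths are
illustrative):

------------------------------------------------
$ cd Documentation/
$ git add Makefile        # stages Documentation/Makefile
$ git add ":/Makefile"    # slash magic: stages the top-level Makefile
------------------------------------------------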
to the result.
[[def_ref]]ref::
- A 40-byte hex representation of a <<def_SHA1,SHA1>> or a name that
+ A 40-byte hex representation of a <<def_SHA1,SHA-1>> or a name that
denotes a particular <<def_object,object>>. They may be stored in
a file under `$GIT_DIR/refs/` directory, or
in the `$GIT_DIR/packed-refs` file.
[[def_refspec]]refspec::
A "refspec" is used by <<def_fetch,fetch>> and
<<def_push,push>> to describe the mapping between remote
- <<def_ref,ref>> and local ref. They are combined with a colon in
- the format <src>:<dst>, preceded by an optional plus sign, +.
- For example: `git fetch $URL
- refs/heads/master:refs/heads/origin` means "grab the master
- <<def_branch,branch>> <<def_head,head>> from the $URL and store
- it as my origin branch head". And `git push
- $URL refs/heads/master:refs/heads/to-upstream` means "publish my
- master branch head as to-upstream branch at $URL". See also
- linkgit:git-push[1].
+ <<def_ref,ref>> and local ref.
[[def_remote_tracking_branch]]remote-tracking branch::
A regular Git <<def_branch,branch>> that is used to follow changes from
[[def_SCM]]SCM::
Source code management (tool).
-[[def_SHA1]]SHA1::
- Synonym for <<def_object_name,object name>>.
+[[def_SHA1]]SHA-1::
+ "Secure Hash Algorithm 1"; a cryptographic hash function.
+ In the context of Git used as a synonym for <<def_object_name,object name>>.
[[def_shallow_repository]]shallow repository::
A shallow <<def_repository,repository>> has an incomplete
its history can be later deepened with linkgit:git-fetch[1].
[[def_symref]]symref::
- Symbolic reference: instead of containing the <<def_SHA1,SHA1>>
+ Symbolic reference: instead of containing the <<def_SHA1,SHA-1>>
id itself, it is of the format 'ref: refs/some/thing' and when
referenced, it recursively dereferences to this reference.
'<<def_HEAD,HEAD>>' is a prime example of a symref. Symbolic
> Any one know how can I track this object and understand which file is it
-----------------------------------------------------------
-So exactly *because* the SHA1 hash is cryptographically secure, the hash
+So exactly *because* the SHA-1 hash is cryptographically secure, the hash
itself doesn't actually tell you anything, in order to fix a corrupt
object you basically have to find the "original source" for it.
-----------------------------------------------------------
This is the right thing to do, although it's usually best to save it under
-it's full SHA1 name (you just dropped the "4b" from the result ;).
+its full SHA-1 name (you just dropped the "4b" from the result ;).
Let's see what that tells us:
git hash-object -w my-magic-file
-again, and if it outputs the missing SHA1 (4b945..) you're now all done!
+again, and if it outputs the missing SHA-1 (4b945..) you're now all done!
But that's the really lucky case, so let's assume that it was some older
version that was broken. How do you tell which version it was?
this when the branches to be merged have diverged wildly.
See also linkgit:git-diff[1] `--patience`.
+diff-algorithm=[patience|minimal|histogram|myers];;
+ Tells 'merge-recursive' to use a different diff algorithm, which
+ can help avoid mismerges that occur due to unimportant matching
+ lines (such as braces from distinct functions). See also
+ linkgit:git-diff[1] `--diff-algorithm`.
+
ignore-space-change;;
ignore-all-space;;
ignore-space-at-eol;;
* 'raw'
+
The 'raw' format shows the entire commit exactly as
-stored in the commit object. Notably, the SHA1s are
+stored in the commit object. Notably, the SHA-1s are
displayed in full, regardless of whether --abbrev or
--no-abbrev are used, and 'parents' information show the
true parent commits, without taking grafts nor history
--------------------
A revision parameter '<rev>' typically, but not necessarily, names a
-commit object. It uses what is called an 'extended SHA1'
+commit object. It uses what is called an 'extended SHA-1'
syntax. Here are various ways to spell object names. The
ones listed near the end of this list name trees and
blobs contained in a commit.
'<sha1>', e.g. 'dae86e1950b1277e545cee180551750029cfe735', 'dae86e'::
- The full SHA1 object name (40-byte hexadecimal string), or
+ The full SHA-1 object name (40-byte hexadecimal string), or
a leading substring that is unique within the repository.
E.g. dae86e1950b1277e545cee180551750029cfe735 and dae86e both
name the same commit object if there is no other object in
sha1-array API
==============
-The sha1-array API provides storage and manipulation of sets of SHA1
+The sha1-array API provides storage and manipulation of sets of SHA-1
identifiers. The emphasis is on storage and processing efficiency,
making them suitable for large lists. Note that the ordering of items is
not preserved over some operations.
`struct sha1_array`::
- A single array of SHA1 hashes. This should be initialized by
+ A single array of SHA-1 hashes. This should be initialized by
assignment from `SHA1_ARRAY_INIT`. The `sha1` member contains
the actual data. The `nr` member contains the number of items in
the set. The `alloc` and `sorted` members are used internally,
destination. This is useful for literal data to be fed to either
strbuf_expand or to the *printf family of functions.
+`strbuf_humanise_bytes`::
+
+ Append the given byte size as a human-readable string (e.g. 12.23 KiB,
+ 3.50 MiB).
+
`strbuf_addf`::
Add a formatted string to the buffer.
Observation: length of each object is encoded in a variable
length format and is not constrained to 32-bit or anything.
- - The trailer records 20-byte SHA1 checksum of all of the above.
+ - The trailer records 20-byte SHA-1 checksum of all of the above.
== Original (version 1) pack-*.idx files have the following format:
- The file is concluded with a trailer:
- A copy of the 20-byte SHA1 checksum at the end of
+ A copy of the 20-byte SHA-1 checksum at the end of
corresponding packfile.
- 20-byte SHA1-checksum of all of the above.
+ 20-byte SHA-1-checksum of all of the above.
Pack Idx file:
If it is not DELTA, then deflated bytes (the size above
is the size before compression).
If it is REF_DELTA, then
- 20-byte base object name SHA1 (the size above is the
+ 20-byte base object name SHA-1 (the size above is the
size of the delta data that follows).
delta data, deflated.
If it is OFS_DELTA, then
- A 256-entry fan-out table just like v1.
- - A table of sorted 20-byte SHA1 object names. These are
+ - A table of sorted 20-byte SHA-1 object names. These are
packed together without offset values to reduce the cache
footprint of the binary search for a specific object name.
- The same trailer as a v1 pack file:
- A copy of the 20-byte SHA1 checksum at the end of
+ A copy of the 20-byte SHA-1 checksum at the end of
corresponding packfile.
- 20-byte SHA1-checksum of all of the above.
+ 20-byte SHA-1-checksum of all of the above.
<linus> The "magic" is actually in theory totally arbitrary.
ANY order will give you a working pack, but no, it's not
- ordered by SHA1.
+ ordered by SHA-1.
Before talking about the ordering for the sliding delta
window, let's talk about the recency order. That's more
these commits have no parents.
*********************************************************
-The basic idea is to write the SHA1s of shallow commits into
+The basic idea is to write the SHA-1s of shallow commits into
$GIT_DIR/shallow, and handle its contents like the contents
of $GIT_DIR/info/grafts (with the difference that shallow
cannot contain parent information).
at all (even throughout development of the shallow clone, it
was never manually edited!).
-Each line contains exactly one SHA1. When read, a commit_graft
+Each line contains exactly one SHA-1. When read, a commit_graft
will be constructed, which has nr_parent < 0 to make it easier
to discern from user provided grafts.
return res;
}
-static void *read_index_data(const char *path)
-{
- int pos, len;
- unsigned long sz;
- enum object_type type;
- void *data;
- struct index_state *istate = use_index ? use_index : &the_index;
-
- len = strlen(path);
- pos = index_name_pos(istate, path, len);
- if (pos < 0) {
- /*
- * We might be in the middle of a merge, in which
- * case we would read stage #2 (ours).
- */
- int i;
- for (i = -pos - 1;
- (pos < 0 && i < istate->cache_nr &&
- !strcmp(istate->cache[i]->name, path));
- i++)
- if (ce_stage(istate->cache[i]) == 2)
- pos = i;
- }
- if (pos < 0)
- return NULL;
- data = read_sha1_file(istate->cache[pos]->sha1, &type, &sz);
- if (!data || type != OBJ_BLOB) {
- free(data);
- return NULL;
- }
- return data;
-}
-
static struct attr_stack *read_attr_from_index(const char *path, int macro_ok)
{
struct attr_stack *res;
char *buf, *sp;
int lineno = 0;
- buf = read_index_data(path);
+ buf = read_blob_data_from_index(use_index ? use_index : &the_index, path, NULL);
if (!buf)
return NULL;
if (remote_is_branch
&& !strcmp(local, shortname)
&& !origin) {
- warning("Not setting branch %s as its own upstream.",
+ warning(_("Not setting branch %s as its own upstream."),
local);
return;
}
if (flag & BRANCH_CONFIG_VERBOSE) {
if (remote_is_branch && origin)
- printf(rebasing ?
- "Branch %s set up to track remote branch %s from %s by rebasing.\n" :
- "Branch %s set up to track remote branch %s from %s.\n",
- local, shortname, origin);
+ printf_ln(rebasing ?
+ _("Branch %s set up to track remote branch %s from %s by rebasing.") :
+ _("Branch %s set up to track remote branch %s from %s."),
+ local, shortname, origin);
else if (remote_is_branch && !origin)
- printf(rebasing ?
- "Branch %s set up to track local branch %s by rebasing.\n" :
- "Branch %s set up to track local branch %s.\n",
- local, shortname);
+ printf_ln(rebasing ?
+ _("Branch %s set up to track local branch %s by rebasing.") :
+ _("Branch %s set up to track local branch %s."),
+ local, shortname);
else if (!remote_is_branch && origin)
- printf(rebasing ?
- "Branch %s set up to track remote ref %s by rebasing.\n" :
- "Branch %s set up to track remote ref %s.\n",
- local, remote);
+ printf_ln(rebasing ?
+ _("Branch %s set up to track remote ref %s by rebasing.") :
+ _("Branch %s set up to track remote ref %s."),
+ local, remote);
else if (!remote_is_branch && !origin)
- printf(rebasing ?
- "Branch %s set up to track local ref %s by rebasing.\n" :
- "Branch %s set up to track local ref %s.\n",
- local, remote);
+ printf_ln(rebasing ?
+ _("Branch %s set up to track local ref %s by rebasing.") :
+ _("Branch %s set up to track local ref %s."),
+ local, remote);
else
die("BUG: impossible combination of %d and %p",
remote_is_branch, origin);
int config_flags = quiet ? 0 : BRANCH_CONFIG_VERBOSE;
if (strlen(new_ref) > 1024 - 7 - 7 - 1)
- return error("Tracking not set up: name too long: %s",
+ return error(_("Tracking not set up: name too long: %s"),
new_ref);
memset(&tracking, 0, sizeof(tracking));
}
if (tracking.matches > 1)
- return error("Not tracking: ambiguous information for ref %s",
+ return error(_("Not tracking: ambiguous information for ref %s"),
orig_ref);
install_branch_config(config_flags, new_ref, tracking.remote,
int force, int attr_only)
{
if (strbuf_check_branch_ref(ref, name))
- die("'%s' is not a valid branch name.", name);
+ die(_("'%s' is not a valid branch name."), name);
if (!ref_exists(ref->buf))
return 0;
else if (!force && !attr_only)
- die("A branch named '%s' already exists.", ref->buf + strlen("refs/heads/"));
+ die(_("A branch named '%s' already exists."), ref->buf + strlen("refs/heads/"));
if (!attr_only) {
const char *head;
head = resolve_ref_unsafe("HEAD", sha1, 0, NULL);
if (!is_bare_repository() && head && !strcmp(head, ref->buf))
- die("Cannot force update the current branch.");
+ die(_("Cannot force update the current branch."));
}
return 1;
}
}
die(_(upstream_missing), start_name);
}
- die("Not a valid object name: '%s'.", start_name);
+ die(_("Not a valid object name: '%s'."), start_name);
}
switch (dwim_ref(start_name, strlen(start_name), sha1, &real_ref)) {
}
break;
default:
- die("Ambiguous object name: '%s'.", start_name);
+ die(_("Ambiguous object name: '%s'."), start_name);
break;
}
if ((commit = lookup_commit_reference(sha1)) == NULL)
- die("Not a valid branch point: '%s'.", start_name);
+ die(_("Not a valid branch point: '%s'."), start_name);
hashcpy(sha1, commit->object.sha1);
if (!dont_change_ref) {
lock = lock_any_ref_for_update(ref.buf, NULL, 0);
if (!lock)
- die_errno("Failed to lock ref for update");
+ die_errno(_("Failed to lock ref for update"));
}
if (reflog)
if (!dont_change_ref)
if (write_ref_sha1(lock, sha1, msg) < 0)
- die_errno("Failed to write ref");
+ die_errno(_("Failed to write ref"));
strbuf_release(&ref);
free(real_ref);
struct update_callback_data {
int flags;
int add_errors;
+ const char *implicit_dot;
+ size_t implicit_dot_len;
};
+static const char *option_with_implicit_dot;
+static const char *short_option_with_implicit_dot;
+
+static void warn_pathless_add(void)
+{
+ static int shown;
+ assert(option_with_implicit_dot && short_option_with_implicit_dot);
+
+ if (shown)
+ return;
+ shown = 1;
+
+ /*
+ * To be consistent with "git add -p" and most Git
+ * commands, we should default to being tree-wide, but
+ * this is not the original behavior and can't be
+ * changed until users trained themselves not to type
+ * "git add -u" or "git add -A". For now, we warn and
+ * keep the old behavior. Later, the behavior can be changed
+ * to tree-wide, keeping the warning for a while, and
+ * eventually we can drop the warning.
+ */
+ warning(_("The behavior of 'git add %s (or %s)' with no path argument from a\n"
+ "subdirectory of the tree will change in Git 2.0 and should not be used anymore.\n"
+ "To add content for the whole tree, run:\n"
+ "\n"
+ " git add %s :/\n"
+ " (or git add %s :/)\n"
+ "\n"
+ "To restrict the command to the current directory, run:\n"
+ "\n"
+ " git add %s .\n"
+ " (or git add %s .)\n"
+ "\n"
+ "With the current Git version, the command is restricted to "
+ "the current directory.\n"
+ ""),
+ option_with_implicit_dot, short_option_with_implicit_dot,
+ option_with_implicit_dot, short_option_with_implicit_dot,
+ option_with_implicit_dot, short_option_with_implicit_dot);
+}
+
static int fix_unmerged_status(struct diff_filepair *p,
struct update_callback_data *data)
{
{
int i;
struct update_callback_data *data = cbdata;
+ const char *implicit_dot = data->implicit_dot;
+ size_t implicit_dot_len = data->implicit_dot_len;
for (i = 0; i < q->nr; i++) {
struct diff_filepair *p = q->queue[i];
const char *path = p->one->path;
+ /*
+ * Check if "git add -A" or "git add -u" was run from a
+ * subdirectory with a modified file outside that directory,
+ * and warn if so.
+ *
+ * "git add -u" will behave like "git add -u :/" instead of
+ * "git add -u ." in the future. This warning prepares for
+ * that change.
+ */
+ if (implicit_dot &&
+ strncmp_icase(path, implicit_dot, implicit_dot_len)) {
+ warn_pathless_add();
+ continue;
+ }
switch (fix_unmerged_status(p, data)) {
default:
die(_("unexpected diff status %c"), p->status);
{
struct update_callback_data data;
struct rev_info rev;
+
+ memset(&data, 0, sizeof(data));
+ data.flags = flags & ~ADD_CACHE_IMPLICIT_DOT;
+ if ((flags & ADD_CACHE_IMPLICIT_DOT) && prefix) {
+ /*
+ * Check for modified files throughout the worktree so
+ * update_callback has a chance to warn about changes
+ * outside the cwd.
+ */
+ data.implicit_dot = prefix;
+ data.implicit_dot_len = strlen(prefix);
+ pathspec = NULL;
+ }
+
init_revisions(&rev, prefix);
setup_revisions(0, NULL, &rev, NULL);
init_pathspec(&rev.prune_data, pathspec);
rev.diffopt.output_format = DIFF_FORMAT_CALLBACK;
rev.diffopt.format_callback = update_callback;
- data.flags = flags;
- data.add_errors = 0;
rev.diffopt.format_callback_data = &data;
rev.max_count = 0; /* do not compare unmerged paths with stage #2 */
run_diff_files(&rev, DIFF_RACY_IS_MODIFIED);
return !!data.add_errors;
}
-static char *prune_directory(struct dir_struct *dir, const char **pathspec, int prefix)
+#define WARN_IMPLICIT_DOT (1u << 0)
+static char *prune_directory(struct dir_struct *dir, const char **pathspec,
+ int prefix, unsigned flag)
{
char *seen;
int i, specs;
if (match_pathspec(pathspec, entry->name, entry->len,
prefix, seen))
*dst++ = entry;
+ else if (flag & WARN_IMPLICIT_DOT)
+ /*
+ * "git add -A" was run from a subdirectory with a
+ * new file outside that directory.
+ *
+ * "git add -A" will behave like "git add -A :/"
+ * instead of "git add -A ." in the future.
+ * Warn about the coming behavior change.
+ */
+ warn_pathless_add();
}
dir->nr = dst - dir->entries;
add_pathspec_matches_against_index(pathspec, seen, specs);
return exit_status;
}
-static void warn_pathless_add(const char *option_name, const char *short_name) {
- /*
- * To be consistent with "git add -p" and most Git
- * commands, we should default to being tree-wide, but
- * this is not the original behavior and can't be
- * changed until users trained themselves not to type
- * "git add -u" or "git add -A". For now, we warn and
- * keep the old behavior. Later, the behavior can be changed
- * to tree-wide, keeping the warning for a while, and
- * eventually we can drop the warning.
- */
- warning(_("The behavior of 'git add %s (or %s)' with no path argument from a\n"
- "subdirectory of the tree will change in Git 2.0 and should not be used anymore.\n"
- "To add content for the whole tree, run:\n"
- "\n"
- " git add %s :/\n"
- " (or git add %s :/)\n"
- "\n"
- "To restrict the command to the current directory, run:\n"
- "\n"
- " git add %s .\n"
- " (or git add %s .)\n"
- "\n"
- "With the current Git version, the command is restricted to the current directory."),
- option_name, short_name,
- option_name, short_name,
- option_name, short_name);
-}
-
int cmd_add(int argc, const char **argv, const char *prefix)
{
int exit_status = 0;
int add_new_files;
int require_pathspec;
char *seen = NULL;
- const char *option_with_implicit_dot = NULL;
- const char *short_option_with_implicit_dot = NULL;
+ int implicit_dot = 0;
git_config(add_config, NULL);
}
if (option_with_implicit_dot && !argc) {
static const char *here[2] = { ".", NULL };
- if (prefix)
- warn_pathless_add(option_with_implicit_dot,
- short_option_with_implicit_dot);
argc = 1;
argv = here;
+ implicit_dot = 1;
}
add_new_files = !take_worktree_changes && !refresh_only;
(intent_to_add ? ADD_CACHE_INTENT : 0) |
(ignore_add_errors ? ADD_CACHE_IGNORE_ERRORS : 0) |
(!(addremove || take_worktree_changes)
- ? ADD_CACHE_IGNORE_REMOVAL : 0));
+ ? ADD_CACHE_IGNORE_REMOVAL : 0)) |
+ (implicit_dot ? ADD_CACHE_IMPLICIT_DOT : 0);
if (require_pathspec && argc == 0) {
fprintf(stderr, _("Nothing specified, nothing added.\n"));
}
/* This picks up the paths that are not tracked */
- baselen = fill_directory(&dir, pathspec);
+ baselen = fill_directory(&dir, implicit_dot ? NULL : pathspec);
if (pathspec)
- seen = prune_directory(&dir, pathspec, baselen);
+ seen = prune_directory(&dir, pathspec, baselen,
+ implicit_dot ? WARN_IMPLICIT_DOT : 0);
}
if (refresh_only) {
maillen = ident.mail_end - ident.mail_begin;
mailbuf = ident.mail_begin;
- *time = strtoul(ident.date_begin, NULL, 10);
+ if (ident.date_begin && ident.date_end)
+ *time = strtoul(ident.date_begin, NULL, 10);
+ else
+ *time = 0;
- len = ident.tz_end - ident.tz_begin;
- strbuf_add(tz, ident.tz_begin, len);
+ if (ident.tz_begin && ident.tz_end)
+ strbuf_add(tz, ident.tz_begin, ident.tz_end - ident.tz_begin);
+ else
+ strbuf_addstr(tz, "(unknown)");
/*
* Now, convert both name and e-mail using mailmap
GIT_COLOR_RED, /* REMOTE */
GIT_COLOR_NORMAL, /* LOCAL */
GIT_COLOR_GREEN, /* CURRENT */
+ GIT_COLOR_BLUE, /* UPSTREAM */
};
enum color_branch {
BRANCH_COLOR_RESET = 0,
BRANCH_COLOR_PLAIN = 1,
BRANCH_COLOR_REMOTE = 2,
BRANCH_COLOR_LOCAL = 3,
- BRANCH_COLOR_CURRENT = 4
+ BRANCH_COLOR_CURRENT = 4,
+ BRANCH_COLOR_UPSTREAM = 5
};
static enum merge_filter {
return BRANCH_COLOR_LOCAL;
if (!strcasecmp(var+ofs, "current"))
return BRANCH_COLOR_CURRENT;
+ if (!strcasecmp(var+ofs, "upstream"))
+ return BRANCH_COLOR_UPSTREAM;
return -1;
}
int ours, theirs;
char *ref = NULL;
struct branch *branch = branch_get(branch_name);
+ struct strbuf fancy = STRBUF_INIT;
if (!stat_tracking_info(branch, &ours, &theirs)) {
if (branch && branch->merge && branch->merge[0]->dst &&
- show_upstream_ref)
- strbuf_addf(stat, "[%s] ",
- shorten_unambiguous_ref(branch->merge[0]->dst, 0));
+ show_upstream_ref) {
+ ref = shorten_unambiguous_ref(branch->merge[0]->dst, 0);
+ if (want_color(branch_use_color))
+ strbuf_addf(stat, "[%s%s%s] ",
+ branch_get_color(BRANCH_COLOR_UPSTREAM),
+ ref, branch_get_color(BRANCH_COLOR_RESET));
+ else
+ strbuf_addf(stat, "[%s] ", ref);
+ }
return;
}
- if (show_upstream_ref)
+ if (show_upstream_ref) {
ref = shorten_unambiguous_ref(branch->merge[0]->dst, 0);
+ if (want_color(branch_use_color))
+ strbuf_addf(&fancy, "%s%s%s",
+ branch_get_color(BRANCH_COLOR_UPSTREAM),
+ ref, branch_get_color(BRANCH_COLOR_RESET));
+ else
+ strbuf_addstr(&fancy, ref);
+ }
+
if (!ours) {
if (ref)
- strbuf_addf(stat, _("[%s: behind %d]"), ref, theirs);
+ strbuf_addf(stat, _("[%s: behind %d]"), fancy.buf, theirs);
else
strbuf_addf(stat, _("[behind %d]"), theirs);
} else if (!theirs) {
if (ref)
- strbuf_addf(stat, _("[%s: ahead %d]"), ref, ours);
+ strbuf_addf(stat, _("[%s: ahead %d]"), fancy.buf, ours);
else
strbuf_addf(stat, _("[ahead %d]"), ours);
} else {
if (ref)
strbuf_addf(stat, _("[%s: ahead %d, behind %d]"),
- ref, ours, theirs);
+ fancy.buf, ours, theirs);
else
strbuf_addf(stat, _("[ahead %d, behind %d]"),
ours, theirs);
}
+ strbuf_release(&fancy);
strbuf_addch(stat, ' ');
free(ref);
}
#define BATCH 1
#define BATCH_CHECK 2
-static void pprint_tag(const unsigned char *sha1, const char *buf, unsigned long size)
-{
- /* the parser in tag.c is useless here. */
- const char *endp = buf + size;
- const char *cp = buf;
-
- while (cp < endp) {
- char c = *cp++;
- if (c != '\n')
- continue;
- if (7 <= endp - cp && !memcmp("tagger ", cp, 7)) {
- const char *tagger = cp;
-
- /* Found the tagger line. Copy out the contents
- * of the buffer so far.
- */
- write_or_die(1, buf, cp - buf);
-
- /*
- * Do something intelligent, like pretty-printing
- * the date.
- */
- while (cp < endp) {
- if (*cp++ == '\n') {
- /* tagger to cp is a line
- * that has ident and time.
- */
- const char *sp = tagger;
- char *ep;
- unsigned long date;
- long tz;
- while (sp < cp && *sp != '>')
- sp++;
- if (sp == cp) {
- /* give up */
- write_or_die(1, tagger,
- cp - tagger);
- break;
- }
- while (sp < cp &&
- !('0' <= *sp && *sp <= '9'))
- sp++;
- write_or_die(1, tagger, sp - tagger);
- date = strtoul(sp, &ep, 10);
- tz = strtol(ep, NULL, 10);
- sp = show_date(date, tz, 0);
- write_or_die(1, sp, strlen(sp));
- xwrite(1, "\n", 1);
- break;
- }
- }
- break;
- }
- if (cp < endp && *cp == '\n')
- /* end of header */
- break;
- }
- /* At this point, we have copied out the header up to the end of
- * the tagger line and cp points at one past \n. It could be the
- * next header line after the tagger line, or it could be another
- * \n that marks the end of the headers. We need to copy out the
- * remainder as is.
- */
- if (cp < endp)
- write_or_die(1, cp, endp - cp);
-}
-
static int cat_one_file(int opt, const char *exp_type, const char *obj_name)
{
unsigned char sha1[20];
buf = read_sha1_file(sha1, &type, &size);
if (!buf)
die("Cannot read object %s", obj_name);
- if (type == OBJ_TAG) {
- pprint_tag(sha1, buf, size);
- return 0;
- }
/* otherwise just spit out the data */
break;
"If you want to keep them by creating a new branch, "
"this may be a good time\nto do so with:\n\n"
" git branch new_branch_name %s\n\n"),
- sha1_to_hex(commit->object.sha1));
+ find_unique_abbrev(commit->object.sha1, DEFAULT_ABBREV));
}
/*
}
static char const * const count_objects_usage[] = {
- N_("git count-objects [-v]"),
+ N_("git count-objects [-v] [-H | --human-readable]"),
NULL
};
int cmd_count_objects(int argc, const char **argv, const char *prefix)
{
- int i, verbose = 0;
+ int i, verbose = 0, human_readable = 0;
const char *objdir = get_object_directory();
int len = strlen(objdir);
char *path = xmalloc(len + 50);
off_t loose_size = 0;
struct option opts[] = {
OPT__VERBOSE(&verbose, N_("be verbose")),
+ OPT_BOOL('H', "human-readable", &human_readable,
+ N_("print sizes in human readable format")),
OPT_END(),
};
struct packed_git *p;
unsigned long num_pack = 0;
off_t size_pack = 0;
+ struct strbuf loose_buf = STRBUF_INIT;
+ struct strbuf pack_buf = STRBUF_INIT;
+ struct strbuf garbage_buf = STRBUF_INIT;
if (!packed_git)
prepare_packed_git();
for (p = packed_git; p; p = p->next) {
size_pack += p->pack_size + p->index_size;
num_pack++;
}
+
+ if (human_readable) {
+ strbuf_humanise_bytes(&loose_buf, loose_size);
+ strbuf_humanise_bytes(&pack_buf, size_pack);
+ strbuf_humanise_bytes(&garbage_buf, size_garbage);
+ } else {
+ strbuf_addf(&loose_buf, "%lu",
+ (unsigned long)(loose_size / 1024));
+ strbuf_addf(&pack_buf, "%lu",
+ (unsigned long)(size_pack / 1024));
+ strbuf_addf(&garbage_buf, "%lu",
+ (unsigned long)(size_garbage / 1024));
+ }
+
printf("count: %lu\n", loose);
- printf("size: %lu\n", (unsigned long) (loose_size / 1024));
+ printf("size: %s\n", loose_buf.buf);
printf("in-pack: %lu\n", packed);
printf("packs: %lu\n", num_pack);
- printf("size-pack: %lu\n", (unsigned long) (size_pack / 1024));
+ printf("size-pack: %s\n", pack_buf.buf);
printf("prune-packable: %lu\n", packed_loose);
printf("garbage: %lu\n", garbage);
- printf("size-garbage: %lu\n", (unsigned long) (size_garbage / 1024));
+ printf("size-garbage: %s\n", garbage_buf.buf);
+ strbuf_release(&loose_buf);
+ strbuf_release(&pack_buf);
+ strbuf_release(&garbage_buf);
+ } else {
+ struct strbuf buf = STRBUF_INIT;
+ if (human_readable)
+ strbuf_humanise_bytes(&buf, loose_size);
+ else
+ strbuf_addf(&buf, "%lu kilobytes",
+ (unsigned long)(loose_size / 1024));
+ printf("%lu objects, %s\n", loose, buf.buf);
+ strbuf_release(&buf);
}
- else
- printf("%lu objects, %lu kilobytes\n",
- loose, (unsigned long) (loose_size / 1024));
return 0;
}
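
strbuf_humanise_bytes() scales a raw byte count into B/KiB/MiB/GiB for the new "-H" output. Roughly, and ignoring Git's exact rounding and strbuf plumbing, the formatting amounts to the following sketch, where humanise() is a hypothetical stand-in:

    #include <stdio.h>

    /* Print a byte count scaled to the largest binary unit that fits. */
    static void humanise(unsigned long long bytes)
    {
        if (bytes >= (1ULL << 30))
            printf("%.2f GiB\n", bytes / (double)(1ULL << 30));
        else if (bytes >= (1ULL << 20))
            printf("%.2f MiB\n", bytes / (double)(1ULL << 20));
        else if (bytes >= (1ULL << 10))
            printf("%.2f KiB\n", bytes / (double)(1ULL << 10));
        else
            printf("%llu bytes\n", bytes);
    }

    int main(void)
    {
        humanise(512);              /* 512 bytes */
        humanise(8 * 1024);         /* 8.00 KiB  */
        humanise(3 * 1024 * 1024);  /* 3.00 MiB  */
        humanise(5ULL << 30);       /* 5.00 GiB  */
        return 0;
    }
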
const char *me;
if (kind == 'a') {
- label = "\n# By ";
+ label = "By";
me = git_author_info(IDENT_NO_DATE);
} else {
- label = "\n# Via ";
+ label = "Via";
me = git_committer_info(IDENT_NO_DATE);
}
(me = skip_prefix(me, them->items->string)) != NULL &&
skip_prefix(me, " <")))
return;
- strbuf_addstr(out, label);
+ strbuf_addf(out, "\n%c %s ", comment_line_char, label);
add_people_count(out, them);
}
} else {
if (tag_number == 2) {
struct strbuf tagline = STRBUF_INIT;
- strbuf_addf(&tagline, "\n# %s\n",
- origins.items[first_tag].string);
+ strbuf_addch(&tagline, '\n');
+ strbuf_add_commented_lines(&tagline,
+ origins.items[first_tag].string,
+ strlen(origins.items[first_tag].string));
strbuf_insert(&tagbuf, 0, tagline.buf,
tagline.len);
strbuf_release(&tagline);
}
- strbuf_addf(&tagbuf, "\n# %s\n",
- origins.items[i].string);
+ strbuf_addch(&tagbuf, '\n');
+ strbuf_add_commented_lines(&tagbuf,
+ origins.items[i].string,
+ strlen(origins.items[i].string));
fmt_tag_signature(&tagbuf, &sig, buf, len);
}
strbuf_release(&sig);
int quiet = 0, source = 0, mailmap = 0;
const struct option builtin_log_options[] = {
- OPT_BOOLEAN(0, "quiet", &quiet, N_("suppress diff output")),
- OPT_BOOLEAN(0, "source", &source, N_("show source")),
- OPT_BOOLEAN(0, "use-mailmap", &mailmap, N_("Use mail map file")),
+ OPT_BOOL(0, "quiet", &quiet, N_("suppress diff output")),
+ OPT_BOOL(0, "source", &source, N_("show source")),
+ OPT_BOOL(0, "use-mailmap", &mailmap, N_("Use mail map file")),
{ OPTION_CALLBACK, 0, "decorate", NULL, NULL, N_("decorate options"),
PARSE_OPT_OPTARG, decorate_callback},
OPT_END()
static int thread;
static int do_signoff;
static const char *signature = git_version_string;
+static int config_cover_letter;
+
+enum {
+ COVER_UNSET,
+ COVER_OFF,
+ COVER_ON,
+ COVER_AUTO
+};
static int git_format_config(const char *var, const char *value, void *cb)
{
}
if (!strcmp(var, "format.signature"))
return git_config_string(&signature, var, value);
+ if (!strcmp(var, "format.coverletter")) {
+ if (value && !strcasecmp(value, "auto")) {
+ config_cover_letter = COVER_AUTO;
+ return 0;
+ }
+ config_cover_letter = git_config_bool(var, value) ? COVER_ON : COVER_OFF;
+ return 0;
+ }
return git_log_config(var, value, cb);
}
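
Because format.coverLetter accepts "auto" in addition to the usual boolean spellings, the parser keeps a tri-state (unset/off/on/auto) instead of a plain flag, and the final decision is deferred until the number of patches is known. A self-contained sketch of that pattern, with parse_tristate() as a hypothetical helper rather than Git's git_config_bool():

    #include <stdio.h>
    #include <string.h>
    #include <strings.h>

    enum tristate { TRI_UNSET, TRI_OFF, TRI_ON, TRI_AUTO };

    /* Hypothetical helper: map "auto"/true-ish/false-ish strings to a tri-state. */
    static enum tristate parse_tristate(const char *value)
    {
        if (!strcasecmp(value, "auto"))
            return TRI_AUTO;
        if (!strcasecmp(value, "true") || !strcasecmp(value, "yes") ||
            !strcasecmp(value, "on") || !strcmp(value, "1"))
            return TRI_ON;
        return TRI_OFF;
    }

    int main(void)
    {
        /* With "auto", a cover letter is generated only for multi-patch series. */
        enum tristate cfg = parse_tristate("auto");
        int total = 3; /* number of patches in the series */
        int cover_letter = (cfg == TRI_AUTO) ? (total > 1) : (cfg == TRI_ON);

        printf("cover letter: %s\n", cover_letter ? "yes" : "no");
        return 0;
    }
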
}
}
+static char *find_branch_name(struct rev_info *rev)
+{
+ int i, positive = -1;
+ unsigned char branch_sha1[20];
+ const unsigned char *tip_sha1;
+ const char *ref;
+ char *full_ref, *branch = NULL;
+
+ for (i = 0; i < rev->cmdline.nr; i++) {
+ if (rev->cmdline.rev[i].flags & UNINTERESTING)
+ continue;
+ if (positive < 0)
+ positive = i;
+ else
+ return NULL;
+ }
+ if (positive < 0)
+ return NULL;
+ ref = rev->cmdline.rev[positive].name;
+ tip_sha1 = rev->cmdline.rev[positive].item->sha1;
+ if (dwim_ref(ref, strlen(ref), branch_sha1, &full_ref) &&
+ !prefixcmp(full_ref, "refs/heads/") &&
+ !hashcmp(tip_sha1, branch_sha1))
+ branch = xstrdup(full_ref + strlen("refs/heads/"));
+ free(full_ref);
+ return branch;
+}
+
static void make_cover_letter(struct rev_info *rev, int use_stdout,
struct commit *origin,
- int nr, struct commit **list, struct commit *head,
+ int nr, struct commit **list,
const char *branch_name,
int quiet)
{
struct diff_options opts;
int need_8bit_cte = 0;
struct pretty_print_context pp = {0};
+ struct commit *head = list[0];
if (rev->commit_format != CMIT_FMT_EMAIL)
die(_("Cover letter needs email format"));
if (has_non_ascii(list[i]->buffer))
need_8bit_cte = 1;
+ if (!branch_name)
+ branch_name = find_branch_name(rev);
+
msg = body;
pp.fmt = CMIT_FMT_EMAIL;
pp.date_mode = DATE_RFC2822;
return 0;
}
-static char *find_branch_name(struct rev_info *rev)
-{
- int i, positive = -1;
- unsigned char branch_sha1[20];
- const unsigned char *tip_sha1;
- const char *ref;
- char *full_ref, *branch = NULL;
-
- for (i = 0; i < rev->cmdline.nr; i++) {
- if (rev->cmdline.rev[i].flags & UNINTERESTING)
- continue;
- if (positive < 0)
- positive = i;
- else
- return NULL;
- }
- if (0 <= positive) {
- ref = rev->cmdline.rev[positive].name;
- tip_sha1 = rev->cmdline.rev[positive].item->sha1;
- } else if (!rev->cmdline.nr && rev->pending.nr == 1 &&
- !strcmp(rev->pending.objects[0].name, "HEAD")) {
- /*
- * No actual ref from command line, but "HEAD" from
- * rev->def was added in setup_revisions()
- * e.g. format-patch --cover-letter -12
- */
- ref = "HEAD";
- tip_sha1 = rev->pending.objects[0].item->sha1;
- } else {
- return NULL;
- }
- if (dwim_ref(ref, strlen(ref), branch_sha1, &full_ref) &&
- !prefixcmp(full_ref, "refs/heads/") &&
- !hashcmp(tip_sha1, branch_sha1))
- branch = xstrdup(full_ref + strlen("refs/heads/"));
- free(full_ref);
- return branch;
-}
-
int cmd_format_patch(int argc, const char **argv, const char *prefix)
{
struct commit *commit;
int start_number = -1;
int just_numbers = 0;
int ignore_if_in_upstream = 0;
- int cover_letter = 0;
+ int cover_letter = -1;
int boundary_count = 0;
int no_binary_diff = 0;
- struct commit *origin = NULL, *head = NULL;
+ struct commit *origin = NULL;
const char *in_reply_to = NULL;
struct patch_ids ids;
struct strbuf buf = STRBUF_INIT;
{ OPTION_CALLBACK, 'N', "no-numbered", &numbered, NULL,
N_("use [PATCH] even with multiple patches"),
PARSE_OPT_NOARG, no_numbered_callback },
- OPT_BOOLEAN('s', "signoff", &do_signoff, N_("add Signed-off-by:")),
- OPT_BOOLEAN(0, "stdout", &use_stdout,
+ OPT_BOOL('s', "signoff", &do_signoff, N_("add Signed-off-by:")),
+ OPT_BOOL(0, "stdout", &use_stdout,
N_("print patches to standard out")),
- OPT_BOOLEAN(0, "cover-letter", &cover_letter,
+ OPT_BOOL(0, "cover-letter", &cover_letter,
N_("generate a cover letter")),
- OPT_BOOLEAN(0, "numbered-files", &just_numbers,
+ OPT_BOOL(0, "numbered-files", &just_numbers,
N_("use simple number sequence for output file names")),
OPT_STRING(0, "suffix", &fmt_patch_suffix, N_("sfx"),
N_("use <sfx> instead of '.patch'")),
}
if (rev.pending.nr == 1) {
+ int check_head = 0;
+
if (rev.max_count < 0 && !rev.show_root_diff) {
/*
* This is traditional behaviour of "git format-patch
* origin" that prepares what the origin side still
* does not have.
*/
- unsigned char sha1[20];
- const char *ref;
-
rev.pending.objects[0].item->flags |= UNINTERESTING;
add_head_to_pending(&rev);
- ref = resolve_ref_unsafe("HEAD", sha1, 1, NULL);
- if (ref && !prefixcmp(ref, "refs/heads/"))
- branch_name = xstrdup(ref + strlen("refs/heads/"));
- else
- branch_name = xstrdup(""); /* no branch */
+ check_head = 1;
}
/*
* Otherwise, it is "format-patch -22 HEAD", and/or
* "format-patch --root HEAD". The user wants
* get_revision() to do the usual traversal.
*/
+
+ if (!strcmp(rev.pending.objects[0].name, "HEAD"))
+ check_head = 1;
+
+ if (check_head) {
+ unsigned char sha1[20];
+ const char *ref;
+ ref = resolve_ref_unsafe("HEAD", sha1, 1, NULL);
+ if (ref && !prefixcmp(ref, "refs/heads/"))
+ branch_name = xstrdup(ref + strlen("refs/heads/"));
+ else
+ branch_name = xstrdup(""); /* no branch */
+ }
}
/*
*/
rev.show_root_diff = 1;
- if (cover_letter) {
- /*
- * NEEDSWORK:randomly pick one positive commit to show
- * diffstat; this is often the tip and the command
- * happens to do the right thing in most cases, but a
- * complex command like "--cover-letter a b c ^bottom"
- * picks "c" and shows diffstat between bottom..c
- * which may not match what the series represents at
- * all and totally broken.
- */
- int i;
- for (i = 0; i < rev.pending.nr; i++) {
- struct object *o = rev.pending.objects[i].item;
- if (!(o->flags & UNINTERESTING))
- head = (struct commit *)o;
- }
- /* There is nothing to show; it is not an error, though. */
- if (!head)
- return 0;
- if (!branch_name)
- branch_name = find_branch_name(&rev);
- }
-
if (ignore_if_in_upstream) {
/* Don't say anything if head and upstream are the same. */
if (rev.pending.nr == 2) {
list = xrealloc(list, nr * sizeof(list[0]));
list[nr - 1] = commit;
}
+ if (nr == 0)
+ /* nothing to do */
+ return 0;
total = nr;
if (!keep_subject && auto_number && total > 1)
numbered = 1;
if (numbered)
rev.total = total + start_number - 1;
+ if (cover_letter == -1) {
+ if (config_cover_letter == COVER_AUTO)
+ cover_letter = (total > 1);
+ else
+ cover_letter = (config_cover_letter == COVER_ON);
+ }
+
if (in_reply_to || thread || cover_letter)
rev.ref_message_ids = xcalloc(1, sizeof(struct string_list));
if (in_reply_to) {
if (thread)
gen_message_id(&rev, "cover");
make_cover_letter(&rev, use_stdout,
- origin, nr, list, head, branch_name, quiet);
+ origin, nr, list, branch_name, quiet);
total++;
start_number--;
}
a->mode == b->mode;
}
+static int both_empty(struct name_entry *a, struct name_entry *b)
+{
+ return !(a->sha1 || b->sha1);
+}
+
static struct merge_list *create_entry(unsigned stage, unsigned mode, const unsigned char *sha1, const char *path)
{
struct merge_list *res = xcalloc(1, sizeof(*res));
static int threeway_callback(int n, unsigned long mask, unsigned long dirmask, struct name_entry *entry, struct traverse_info *info)
{
/* Same in both? */
- if (same_entry(entry+1, entry+2)) {
- if (entry[0].sha1) {
- /* Modified identically */
- resolve(info, NULL, entry+1);
- return mask;
- }
- /* "Both added the same" is left unresolved */
+ if (same_entry(entry+1, entry+2) || both_empty(entry+0, entry+2)) {
+ /* Modified, added or removed identically */
+ resolve(info, NULL, entry+1);
+ return mask;
}
if (same_entry(entry+0, entry+1)) {
*/
}
- if (same_entry(entry+0, entry+2)) {
- if (entry[1].sha1 && !S_ISDIR(entry[1].mode)) {
- /* We modified, they did not touch -- take ours */
- resolve(info, NULL, entry+1);
- return mask;
- }
+ if (same_entry(entry+0, entry+2) || both_empty(entry+0, entry+2)) {
+ /* We added, modified or removed, they did not touch -- take ours */
+ resolve(info, NULL, entry+1);
+ return mask;
}
unresolved(info, entry);
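
With both_empty() in place, the callback resolves an entry trivially whenever "ours" and "theirs" agree, or whenever only one side differs from the base, including the case where that difference is a deletion. The decision table, reduced to plain strings and hypothetical entry_same()/resolve_trivially() names, looks roughly like this:

    #include <stdio.h>
    #include <string.h>

    /* NULL means "path absent on that side"; equal strings mean equal blobs. */
    static int entry_same(const char *a, const char *b)
    {
        if (!a || !b)
            return !a && !b;
        return !strcmp(a, b);
    }

    /* Decide a trivial three-way merge; returns 1 if resolved, 0 on conflict. */
    static int resolve_trivially(const char *base, const char *ours,
                                 const char *theirs, const char **result)
    {
        if (entry_same(ours, theirs)) {
            *result = ours;   /* both sides agree (including both absent) */
            return 1;
        }
        if (entry_same(base, theirs)) {
            *result = ours;   /* only we changed it -- take ours */
            return 1;
        }
        if (entry_same(base, ours)) {
            *result = theirs; /* only they changed it -- take theirs */
            return 1;
        }
        return 0;             /* both sides changed it differently */
    }

    int main(void)
    {
        const char *result;

        if (resolve_trivially("v1", "v2", "v1", &result))
            printf("take: %s\n", result);                        /* take: v2 */
        if (resolve_trivially("v1", NULL, "v1", &result))
            printf("take: %s\n", result ? result : "(removed)"); /* removed  */
        if (!resolve_trivially("v1", "v2", "v3", &result))
            printf("conflict\n");
        return 0;
    }
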
nth = 0;
while (parents) {
struct commit *p = parents->item;
- char newname[1000], *en;
+ struct strbuf newname = STRBUF_INIT;
parents = parents->next;
nth++;
if (p->util)
continue;
- en = newname;
switch (n->generation) {
case 0:
- en += sprintf(en, "%s", n->head_name);
+ strbuf_addstr(&newname, n->head_name);
break;
case 1:
- en += sprintf(en, "%s^", n->head_name);
+ strbuf_addf(&newname, "%s^", n->head_name);
break;
default:
- en += sprintf(en, "%s~%d",
- n->head_name, n->generation);
+ strbuf_addf(&newname, "%s~%d",
+ n->head_name, n->generation);
break;
}
if (nth == 1)
- en += sprintf(en, "^");
+ strbuf_addch(&newname, '^');
else
- en += sprintf(en, "^%d", nth);
- name_commit(p, xstrdup(newname), 0);
+ strbuf_addf(&newname, "^%d", nth);
+ name_commit(p, strbuf_detach(&newname, NULL), 0);
i++;
name_first_parent_chain(p);
}
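
Switching from a fixed char newname[1000] to a strbuf removes any chance of overflowing the buffer for unusually long head names. Outside of Git's strbuf API the same idea can be sketched with any growable string; append_fmt() below is a hypothetical vsnprintf()-based helper, not a Git function:

    #include <stdarg.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct gstr { char *buf; size_t len, alloc; };

    /* Hypothetical helper: append formatted text, growing the buffer as needed. */
    static void append_fmt(struct gstr *s, const char *fmt, ...)
    {
        va_list ap;
        int need;

        va_start(ap, fmt);
        need = vsnprintf(NULL, 0, fmt, ap);
        va_end(ap);
        if (need < 0)
            return;
        if (s->len + need + 1 > s->alloc) {
            char *bigger = realloc(s->buf, s->len + need + 1);
            if (!bigger)
                exit(1);
            s->buf = bigger;
            s->alloc = s->len + need + 1;
        }
        va_start(ap, fmt);
        vsnprintf(s->buf + s->len, need + 1, fmt, ap);
        va_end(ap);
        s->len += need;
    }

    int main(void)
    {
        struct gstr name = { NULL, 0, 0 };

        /* Build "<head>~<generation>^<nth>" without a fixed-size buffer. */
        append_fmt(&name, "%s~%d", "very-long-branch-name", 4);
        append_fmt(&name, "^%d", 2);
        printf("%s\n", name.buf);   /* very-long-branch-name~4^2 */
        free(name.buf);
        return 0;
    }
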
* followed by SP and subject line.
*/
if (get_sha1_hex(buf.buf, sha1) ||
- (40 <= buf.len && !isspace(buf.buf[40])) ||
+ (buf.len > 40 && !isspace(buf.buf[40])) ||
(!is_prereq && buf.len <= 40)) {
if (report_path)
error(_("unrecognized header: %s%s (%d)"),
#define resolve_undo_clear() resolve_undo_clear_index(&the_index)
#define unmerge_cache_entry_at(at) unmerge_index_entry_at(&the_index, at)
#define unmerge_cache(pathspec) unmerge_index(&the_index, pathspec)
+#define read_blob_data_from_cache(path, sz) read_blob_data_from_index(&the_index, (path), (sz))
#endif
enum object_type {
#define ADD_CACHE_IGNORE_ERRORS 4
#define ADD_CACHE_IGNORE_REMOVAL 8
#define ADD_CACHE_INTENT 16
+#define ADD_CACHE_IMPLICIT_DOT 32 /* internal to "git add -u/-A" */
extern int add_to_index(struct index_state *, const char *path, struct stat *, int flags);
extern int add_file_to_index(struct index_state *, const char *path, int flags);
extern struct cache_entry *make_cache_entry(unsigned int mode, const unsigned char *sha1, const char *path, int stage, int refresh);
extern int ce_same_name(struct cache_entry *a, struct cache_entry *b);
extern int index_name_is_other(const struct index_state *, const char *, int);
+extern void *read_blob_data_from_index(struct index_state *, const char *, unsigned long *);
/* do stat comparison even if CE_VALID is true */
#define CE_MATCH_IGNORE_VALID 01
compat/win32/dirent.o
EXTLIBS += -lws2_32
PTHREAD_LIBS =
+ NATIVE_CRLF = YesPlease
X = .exe
SPARSE_FLAGS = -Wno-one-bit-signed-bitfield
ifneq (,$(wildcard ../THIS_IS_MSYSGIT))
fi
}
-__gitcomp_1 ()
-{
- local c IFS=$' \t\n'
- for c in $1; do
- c="$c$2"
- case $c in
- --*=*|*.) ;;
- *) c="$c " ;;
- esac
- printf '%s\n' "$c"
- done
-}
-
# The following function is based on code from:
#
# bash_completion - programmable completion functions for bash 3.2+
}
fi
-# Generates completion reply with compgen, appending a space to possible
-# completion words, if necessary.
+__gitcompadd ()
+{
+	local x i=0
+ for x in $1; do
+ if [[ "$x" == "$3"* ]]; then
+ COMPREPLY[i++]="$2$x$4"
+ fi
+ done
+}
+
+# Generates completion reply, appending a space to possible completion words,
+# if necessary.
# It accepts 1 to 4 arguments:
# 1: List of possible completion words.
# 2: A prefix to be added to each possible completion word (optional).
case "$cur_" in
--*=)
- COMPREPLY=()
;;
*)
- local IFS=$'\n'
- COMPREPLY=($(compgen -P "${2-}" \
- -W "$(__gitcomp_1 "${1-}" "${4-}")" \
- -- "$cur_"))
+ local c i=0 IFS=$' \t\n'
+ for c in $1; do
+ c="$c${4-}"
+ if [[ $c == "$cur_"* ]]; then
+ case $c in
+ --*=*|*.) ;;
+ *) c="$c " ;;
+ esac
+ COMPREPLY[i++]="${2-}$c"
+ fi
+ done
;;
esac
}
-# Generates completion reply with compgen from newline-separated possible
-# completion words by appending a space to all of them.
+# Generates completion reply from newline-separated possible completion words
+# by appending a space to all of them.
# It accepts 1 to 4 arguments:
# 1: List of possible completion words, separated by a single newline.
# 2: A prefix to be added to each possible completion word (optional).
__gitcomp_nl ()
{
local IFS=$'\n'
- COMPREPLY=($(compgen -P "${2-}" -S "${4- }" -W "$1" -- "${3-$cur}"))
+ __gitcompadd "$1" "${2-}" "${3-$cur}" "${4- }"
}
# Generates completion reply with compgen from newline-separated possible
case "$cmd" in
push) no_complete_refspec=1 ;;
fetch)
- COMPREPLY=()
return
;;
*) ;;
return
fi
if [ $no_complete_refspec = 1 ]; then
- COMPREPLY=()
return
fi
[ "$remote" = "." ] && remote=
"
return
esac
- COMPREPLY=()
}
_git_apply ()
"
return
esac
- COMPREPLY=()
}
_git_add ()
__gitcomp_nl "$(__git_refs)"
;;
*)
- COMPREPLY=()
;;
esac
}
_git_cherry_pick ()
{
+ local dir="$(__gitdir)"
+ if [ -f "$dir"/CHERRY_PICK_HEAD ]; then
+ __gitcomp "--continue --quit --abort"
+ return
+ fi
case "$cur" in
--*)
- __gitcomp "--edit --no-commit"
+ __gitcomp "--edit --no-commit --signoff --strategy= --mainline"
;;
*)
__gitcomp_nl "$(__git_refs)"
return
;;
esac
- COMPREPLY=()
}
_git_commit ()
return
;;
esac
- COMPREPLY=()
}
_git_gc ()
return
;;
esac
- COMPREPLY=()
}
_git_gitk ()
return
;;
esac
- COMPREPLY=()
}
_git_ls_files ()
return
;;
esac
- COMPREPLY=()
}
_git_merge_base ()
local remote="${prev#remote.}"
remote="${remote%.fetch}"
if [ -z "$cur" ]; then
- COMPREPLY=("refs/heads/")
+ __gitcompadd "refs/heads/" "" "" ""
return
fi
__gitcomp_nl "$(__git_refs_remotes "$remote")"
return
;;
*.*)
- COMPREPLY=()
return
;;
esac
__gitcomp "$c"
;;
*)
- COMPREPLY=()
;;
esac
}
*)
if [ -z "$(__git_find_on_cmdline "$save_opts")" ]; then
__gitcomp "$subcommands"
- else
- COMPREPLY=()
fi
;;
esac
__gitcomp "--index --quiet"
;;
show,--*|drop,--*|branch,--*)
- COMPREPLY=()
;;
show,*|apply,*|drop,*|pop,*|branch,*)
__gitcomp_nl "$(git --git-dir="$(__gitdir)" stash list \
| sed -n -e 's/:.*//p')"
;;
*)
- COMPREPLY=()
;;
esac
fi
__gitcomp "--revision= --parent"
;;
*)
- COMPREPLY=()
;;
esac
fi
case "$prev" in
-m|-F)
- COMPREPLY=()
;;
-*|tag)
if [ $f = 1 ]; then
__gitcomp_nl "$(__git_tags)"
- else
- COMPREPLY=()
fi
;;
*)
return final
def export_branch(branch, name):
- global prefix, dirname
+ global prefix
ref = '%s/heads/%s' % (prefix, name)
tip = marks.get_tip(name)
marks.set_tip(name, revid)
def export_tag(repo, name):
- global tags
- print "reset refs/tags/%s" % name
+ global tags, prefix
+
+ ref = '%s/tags/%s' % (prefix, name)
+ print "reset %s" % ref
print "from :%u" % rev_to_mark(tags[name])
print
print "import"
print "export"
print "refspec refs/heads/*:%s/heads/*" % prefix
+ print "refspec refs/tags/*:%s/tags/*" % prefix
path = os.path.join(dirname, 'marks-git')
# Just copy to your ~/bin, or anywhere in your $PATH.
# Then you can clone with:
# git clone hg::/path/to/mercurial/repo/
+#
+# For remote repositories a local clone is stored in
+# "$GIT_DIR/hg/origin/clone/.hg/".
-from mercurial import hg, ui, bookmarks, context, util, encoding
+from mercurial import hg, ui, bookmarks, context, util, encoding, node, error
import re
import sys
import shutil
import subprocess
import urllib
+import atexit
#
# If you want to switch to hg-git compatibility mode:
# git config --global remote-hg.hg-git-compat true
#
+# If you are not in hg-git-compat mode and want to disable the tracking of
+# named branches:
+# git config --global remote-hg.track-branches false
+#
+# If you don't want to force pushes (and thus risk creating new remote heads):
+# git config --global remote-hg.force-push false
+#
+# If you want the equivalent of hg's clone/pull--insecure option:
+# git config remote-hg.insecure true
+#
# git:
# Sensible defaults for git.
# hg bookmarks are exported as git branches, hg branches are prefixed
m = { '100755': 'x', '120000': 'l' }
return m.get(mode, '')
+def hghex(node):
+ return hg.node.hex(node)
+
def get_config(config):
cmd = ['git', 'config', '--get', config]
process = subprocess.Popen(cmd, stdout=subprocess.PIPE)
tz = ((tz / 100) * 3600) + ((tz % 100) * 60)
return (user, int(date), -tz)
+def fix_file_path(path):
+ if not os.path.isabs(path):
+ return path
+ return os.path.relpath(path, '/')
+
def export_file(fc):
d = fc.data()
- print "M %s inline %s" % (gitmode(fc.flags()), fc.path())
+ path = fix_file_path(fc.path())
+ print "M %s inline %s" % (gitmode(fc.flags()), path)
print "data %d" % len(d)
print d
myui = ui.ui()
myui.setconfig('ui', 'interactive', 'off')
+ myui.fout = sys.stderr
+
+ try:
+ if get_config('remote-hg.insecure') == 'true\n':
+ myui.setconfig('web', 'cacerts', '')
+ except subprocess.CalledProcessError:
+ pass
if hg.islocal(url):
repo = hg.repository(myui, url)
else:
local_path = os.path.join(dirname, 'clone')
if not os.path.exists(local_path):
- peer, dstpeer = hg.clone(myui, {}, url, local_path, update=False, pull=True)
+ try:
+ peer, dstpeer = hg.clone(myui, {}, url, local_path, update=True, pull=True)
+ except:
+ die('Repository error')
repo = dstpeer.local()
else:
repo = hg.repository(myui, local_path)
- peer = hg.peer(myui, {}, url)
+ try:
+ peer = hg.peer(myui, {}, url)
+ except:
+ die('Repository error')
repo.pull(peer, heads=None, force=True)
return repo
else:
modified, removed = get_filechanges(repo, c, parents[0])
+ desc += '\n'
+
if mode == 'hg':
extra_msg = ''
else:
extra_msg += "extra : %s : %s\n" % (key, urllib.quote(value))
- desc += '\n'
if extra_msg:
desc += '\n--HG--\n' + extra_msg
for f in modified:
export_file(c.filectx(f))
for f in removed:
- print "D %s" % (f)
+ print "D %s" % (fix_file_path(f))
print
count += 1
data = parser.get_data()
blob_marks[mark] = data
parser.next()
- return
def get_merge_files(repo, p1, p2, files):
for e in repo[p1].files():
files[e] = f
def parse_commit(parser):
- global marks, blob_marks, bmarks, parsed_refs
+ global marks, blob_marks, parsed_refs
global mode
from_mark = merge_mark = None
mark = int(mark_ref[1:])
f = { 'mode' : hgmode(m), 'data' : blob_marks[mark] }
elif parser.check('D'):
- t, path = line.split(' ')
+ t, path = line.split(' ', 1)
f = { 'deleted' : True }
else:
die('Unknown file command: %s' % line)
if merge_mark:
get_merge_files(repo, p1, p2, files)
+ # Check if the ref is supposed to be a named branch
+ if ref.startswith('refs/heads/branches/'):
+ extra['branch'] = ref[len('refs/heads/branches/'):]
+
if mode == 'hg':
i = data.find('\n--HG--\n')
if i >= 0:
tmp = data[i + len('\n--HG--\n'):].strip()
- for k, v in [e.split(' : ') for e in tmp.split('\n')]:
+ for k, v in [e.split(' : ', 1) for e in tmp.split('\n')]:
if k == 'rename':
old, new = v.split(' => ', 1)
files[new]['rename'] = old
rev = repo[node].rev()
parsed_refs[ref] = node
-
marks.new_mark(rev, commit_mark)
def parse_reset(parser):
+ global parsed_refs
+
ref = parser[1]
parser.next()
# ugh
def do_export(parser):
global parsed_refs, bmarks, peer
+ p_bmarks = []
+
parser.next()
for line in parser.each_block('done'):
for ref, node in parsed_refs.iteritems():
if ref.startswith('refs/heads/branches'):
- pass
+ print "ok %s" % ref
elif ref.startswith('refs/heads/'):
bmark = ref[len('refs/heads/'):]
- if bmark in bmarks:
- old = bmarks[bmark].hex()
- else:
- old = ''
- if not bookmarks.pushbookmark(parser.repo, bmark, old, node):
- continue
+ p_bmarks.append((bmark, node))
+ continue
elif ref.startswith('refs/tags/'):
tag = ref[len('refs/tags/'):]
- parser.repo.tag([tag], node, None, True, None, {})
+ if mode == 'git':
+                msg = 'Added tag %s for changeset %s' % (tag, hghex(node[:6]))
+ parser.repo.tag([tag], node, msg, False, None, {})
+ else:
+ parser.repo.tag([tag], node, None, True, None, {})
+ print "ok %s" % ref
else:
# transport-helper/fast-export bugs
continue
+
+ if peer:
+ parser.repo.push(peer, force=force_push)
+
+ # handle bookmarks
+ for bmark, node in p_bmarks:
+ ref = 'refs/heads/' + bmark
+ new = hghex(node)
+
+ if bmark in bmarks:
+ old = bmarks[bmark].hex()
+ else:
+ old = ''
+
+ if bmark == 'master' and 'master' not in parser.repo._bookmarks:
+ # fake bookmark
+ pass
+ elif bookmarks.pushbookmark(parser.repo, bmark, old, new):
+ # updated locally
+ pass
+ else:
+ print "error %s" % ref
+ continue
+
+ if peer:
+ if not peer.pushkey('bookmarks', bmark, old, new):
+ print "error %s" % ref
+ continue
+
print "ok %s" % ref
print
- if peer:
- parser.repo.push(peer, force=False)
-
def fix_path(alias, repo, orig_url):
repo_url = util.url(repo.url())
url = util.url(orig_url)
global prefix, dirname, branches, bmarks
global marks, blob_marks, parsed_refs
global peer, mode, bad_mail, bad_name
- global track_branches
+ global track_branches, force_push, is_tmp
alias = args[1]
url = args[2]
hg_git_compat = False
track_branches = True
+ force_push = True
+
try:
if get_config('remote-hg.hg-git-compat') == 'true\n':
hg_git_compat = True
track_branches = False
if get_config('remote-hg.track-branches') == 'false\n':
track_branches = False
+ if get_config('remote-hg.force-push') == 'false\n':
+ force_push = False
except subprocess.CalledProcessError:
pass
bmarks = {}
blob_marks = {}
parsed_refs = {}
+ marks = None
repo = get_repo(url, alias)
prefix = 'refs/hg/%s' % alias
die('unhandled command: %s' % line)
sys.stdout.flush()
+def bye():
+ if not marks:
+ return
if not is_tmp:
marks.store()
else:
shutil.rmtree(dirname)
+atexit.register(bye)
sys.exit(main(sys.argv))
# clone to a git repo
git_clone () {
- hg -R $1 bookmark -f -r tip master &&
git clone -q "hg::$PWD/$1" $2
}
hg_clone () {
(
hg init $2 &&
+ hg -R $2 bookmark -i master &&
cd $1 &&
git push -q "hg::$PWD/../$2" 'refs/tags/*:refs/tags/*' 'refs/heads/*:refs/heads/*'
) &&
}
hg_log () {
- hg -R $1 log --graph --debug | grep -v 'tag: *default/'
+ hg -R $1 log --graph --debug >log &&
+ grep -v 'tag: *default/' log
}
setup () {
echo "commit = -d \"0 0\""
echo "debugrawcommit = -d \"0 0\""
echo "tag = -d \"0 0\""
+ echo "[extensions]"
+ echo "graphlog ="
) >> "$HOME"/.hgrc &&
git config --global remote-hg.hg-git-compat true
hg_push hgrepo gitrepo &&
hg_clone gitrepo hgrepo2 &&
- : TODO, avoid "master" bookmark &&
- (cd hgrepo2 && hg checkout gamma) &&
+ : Back to the common revision &&
+ (cd hgrepo && hg checkout default) &&
hg_log hgrepo > expected &&
hg_log hgrepo2 > actual &&
# clone to a git repo with git
git_clone_git () {
- hg -R $1 bookmark -f -r tip master &&
git clone -q "hg::$PWD/$1" $2
}
hg_clone_git () {
(
hg init $2 &&
+ hg -R $2 bookmark -i master &&
cd $1 &&
git push -q "hg::$PWD/../$2" 'refs/tags/*:refs/tags/*' 'refs/heads/*:refs/heads/*'
) &&
(
git init -q $2 &&
cd $1 &&
- hg bookmark -f -r tip master &&
+ hg bookmark -i -f -r tip master &&
hg -q push -r master ../$2 || true
)
}
}
hg_log () {
- hg -R $1 log --graph --debug | grep -v 'tag: *default/'
+ hg -R $1 log --graph --debug >log &&
+ grep -v 'tag: *default/' log
}
git_log () {
echo "[extensions]"
echo "hgext.bookmarks ="
echo "hggit ="
+ echo "graphlog ="
) >> "$HOME"/.hgrc &&
git config --global receive.denycurrentbranch warn
git config --global remote-hg.hg-git-compat true
hg -R hgrepo bookmarks | egrep "devel[ ]+3:"
'
+author_test () {
+ echo $1 >> content &&
+ hg commit -u "$2" -m "add $1" &&
+ echo "$3" >> ../expected
+}
+
+test_expect_success 'authors' '
+ mkdir -p tmp && cd tmp &&
+ test_when_finished "cd .. && rm -rf tmp" &&
+
+ (
+ hg init hgrepo &&
+ cd hgrepo &&
+
+ touch content &&
+ hg add content &&
+
+ author_test alpha "" "H G Wells <wells@example.com>" &&
+ author_test beta "test" "test <unknown>" &&
+ author_test beta "test <test@example.com> (comment)" "test <unknown>" &&
+ author_test gamma "<test@example.com>" "Unknown <test@example.com>" &&
+ author_test delta "name<test@example.com>" "name <test@example.com>" &&
+ author_test epsilon "name <test@example.com" "name <unknown>" &&
+ author_test zeta " test " "test <unknown>" &&
+ author_test eta "test < test@example.com >" "test <test@example.com>" &&
+ author_test theta "test >test@example.com>" "test <unknown>" &&
+ author_test iota "test < test <at> example <dot> com>" "test <unknown>" &&
+ author_test kappa "test@example.com" "test@example.com <unknown>"
+ ) &&
+
+ git clone "hg::$PWD/hgrepo" gitrepo &&
+ git --git-dir=gitrepo/.git log --reverse --format="%an <%ae>" > actual &&
+
+ test_cmp expected actual
+'
+
test_done
static int has_cr_in_index(const char *path)
{
- int pos, len;
unsigned long sz;
- enum object_type type;
void *data;
int has_cr;
- struct index_state *istate = &the_index;
- len = strlen(path);
- pos = index_name_pos(istate, path, len);
- if (pos < 0) {
- /*
- * We might be in the middle of a merge, in which
- * case we would read stage #2 (ours).
- */
- int i;
- for (i = -pos - 1;
- (pos < 0 && i < istate->cache_nr &&
- !strcmp(istate->cache[i]->name, path));
- i++)
- if (ce_stage(istate->cache[i]) == 2)
- pos = i;
- }
- if (pos < 0)
+ data = read_blob_data_from_cache(path, &sz);
+ if (!data)
return 0;
- data = read_sha1_file(istate->cache[pos]->sha1, &type, &sz);
- if (!data || type != OBJ_BLOB) {
- free(data);
- return 0;
- }
-
has_cr = memchr(data, '\r', sz) != NULL;
free(data);
return has_cr;
const char *del = diff_get_color_opt(o, DIFF_FILE_OLD);
const char *add = diff_get_color_opt(o, DIFF_FILE_NEW);
show_submodule_summary(o->file, one ? one->path : two->path,
+ line_prefix,
one->sha1, two->sha1, two->dirty_submodule,
meta, del, add, reset);
return;
options->xdl_opts = DIFF_WITH_ALG(options, PATIENCE_DIFF);
else if (!strcmp(arg, "--histogram"))
options->xdl_opts = DIFF_WITH_ALG(options, HISTOGRAM_DIFF);
- else if (!prefixcmp(arg, "--diff-algorithm=")) {
- long value = parse_algorithm_value(arg+17);
+ else if ((argcount = parse_long_opt("diff-algorithm", av, &optarg))) {
+ long value = parse_algorithm_value(optarg);
if (value < 0)
return error("option diff-algorithm accepts \"myers\", "
"\"minimal\", \"patience\" and \"histogram\"");
DIFF_XDL_CLR(options, NEED_MINIMAL);
options->xdl_opts &= ~XDF_DIFF_ALGORITHM_MASK;
options->xdl_opts |= value;
+ return argcount;
}
/* flags options */
res=$?
# Check if we should exit because bisection is finished
- test $res -eq 10 && exit 0
+ if test $res -eq 10
+ then
+ bad_rev=$(git show-ref --hash --verify refs/bisect/bad)
+ bad_commit=$(git show-branch $bad_rev)
+ echo "# first bad commit: $bad_commit" >>"$GIT_DIR/BISECT_LOG"
+ exit 0
+ fi
# Check for an error in the bisection process
test $res -ne 0 && exit $res
extern void set_die_routine(NORETURN_PTR void (*routine)(const char *err, va_list params));
extern void set_error_routine(void (*routine)(const char *err, va_list params));
+extern void set_die_is_recursing_routine(int (*routine)(void));
extern int prefixcmp(const char *str, const char *prefix);
extern int suffixcmp(const char *str, const char *suffix);
rm -f "$GIT_DIR/rebased-patches"
git format-patch -k --stdout --full-index --ignore-if-in-upstream \
- --src-prefix=a/ --dst-prefix=b/ \
- --no-renames $root_flag "$revisions" >"$GIT_DIR/rebased-patches"
+ --src-prefix=a/ --dst-prefix=b/ --no-renames --no-cover-letter \
+ $root_flag "$revisions" >"$GIT_DIR/rebased-patches"
ret=$?
if test 0 != $ret
--[no-]bcc <str> * Email Bcc:
--subject <str> * Email "Subject:"
--in-reply-to <str> * Email "In-Reply-To:"
- --annotate * Review each patch that will be sent in an editor.
+ --[no-]annotate * Review each patch that will be sent in an editor.
--compose * Open an editor for introduction.
--compose-encoding <str> * Encoding to assume for introduction.
--8bit-encoding <str> * Encoding to assume 8bit mails if undeclared
"signedoffbycc" => [\$signed_off_by_cc, undef],
"signedoffcc" => [\$signed_off_by_cc, undef], # Deprecated
"validate" => [\$validate, 1],
- "multiedit" => [\$multiedit, undef]
+ "multiedit" => [\$multiedit, undef],
+ "annotate" => [\$annotate, undef]
);
my %config_settings = (
"smtp-debug:i" => \$debug_net_smtp,
"smtp-domain:s" => \$smtp_domain,
"identity=s" => \$identity,
- "annotate" => \$annotate,
+ "annotate!" => \$annotate,
"compose" => \$compose,
"quiet" => \$quiet,
"cc-cmd=s" => \$cc_cmd,
int cmd_version(int argc, const char **argv, const char *prefix)
{
+ /*
+ * The format of this string should be kept stable for compatibility
+ * with external projects that rely on the output of "git version".
+ */
printf("git version %s\n", git_version_string);
return 0;
}
static int show_text_ref(const char *name, const unsigned char *sha1,
int flag, void *cb_data)
{
+ const char *name_nons = strip_namespace(name);
struct strbuf *buf = cb_data;
struct object *o = parse_object(sha1);
if (!o)
return 0;
- strbuf_addf(buf, "%s\t%s\n", sha1_to_hex(sha1), name);
+ strbuf_addf(buf, "%s\t%s\n", sha1_to_hex(sha1), name_nons);
if (o->type == OBJ_TAG) {
o = deref_tag(o, name, 0);
if (!o)
return 0;
- strbuf_addf(buf, "%s\t%s^{}\n", sha1_to_hex(o->sha1), name);
+ strbuf_addf(buf, "%s\t%s^{}\n", sha1_to_hex(o->sha1),
+ name_nons);
}
return 0;
}
} else {
select_getanyfile();
- for_each_ref(show_text_ref, &buf);
+ for_each_namespaced_ref(show_text_ref, &buf);
send_strbuf("text/plain", &buf);
}
strbuf_release(&buf);
}
+static int show_head_ref(const char *name, const unsigned char *sha1,
+ int flag, void *cb_data)
+{
+ struct strbuf *buf = cb_data;
+
+ if (flag & REF_ISSYMREF) {
+ unsigned char sha1[20];
+ const char *target = resolve_ref_unsafe(name, sha1, 1, NULL);
+ const char *target_nons = strip_namespace(target);
+
+ strbuf_addf(buf, "ref: %s\n", target_nons);
+ } else {
+ strbuf_addf(buf, "%s\n", sha1_to_hex(sha1));
+ }
+
+ return 0;
+}
+
+static void get_head(char *arg)
+{
+ struct strbuf buf = STRBUF_INIT;
+
+ select_getanyfile();
+ head_ref_namespaced(show_head_ref, &buf);
+ send_strbuf("text/plain", &buf);
+ strbuf_release(&buf);
+}
+
static void get_info_packs(char *arg)
{
size_t objdirlen = strlen(get_object_directory());
const char *pattern;
void (*imp)(char *);
} services[] = {
- {"GET", "/HEAD$", get_text_file},
+ {"GET", "/HEAD$", get_head},
{"GET", "/info/refs$", get_info_refs},
{"GET", "/objects/info/alternates$", get_text_file},
{"GET", "/objects/info/http-alternates$", get_text_file},
ret = 0;
break;
case HTTP_ERROR:
- http_error(url, HTTP_ERROR);
+ error("unable to access '%s': %s", url, curl_errorstr);
default:
ret = -1;
}
char curl_errorstr[CURL_ERROR_SIZE];
static int curl_ssl_verify = -1;
+static int curl_ssl_try;
static const char *ssl_cert;
#if LIBCURL_VERSION_NUM >= 0x070903
static const char *ssl_key;
ssl_cert_password_required = 1;
return 0;
}
+ if (!strcmp("http.ssltry", var)) {
+ curl_ssl_try = git_config_bool(var, value);
+ return 0;
+ }
if (!strcmp("http.minsessions", var)) {
min_curl_sessions = git_config_int(var, value);
#ifndef USE_CURL_MULTI
#endif
if (ssl_cainfo != NULL)
curl_easy_setopt(result, CURLOPT_CAINFO, ssl_cainfo);
- curl_easy_setopt(result, CURLOPT_FAILONERROR, 1);
if (curl_low_speed_limit > 0 && curl_low_speed_time > 0) {
curl_easy_setopt(result, CURLOPT_LOW_SPEED_LIMIT,
if (curl_ftp_no_epsv)
curl_easy_setopt(result, CURLOPT_FTP_USE_EPSV, 0);
+#ifdef CURLOPT_USE_SSL
+ if (curl_ssl_try)
+ curl_easy_setopt(result, CURLOPT_USE_SSL, CURLUSESSL_TRY);
+#endif
+
if (curl_http_proxy) {
curl_easy_setopt(result, CURLOPT_PROXY, curl_http_proxy);
curl_easy_setopt(result, CURLOPT_PROXYAUTH, CURLAUTH_ANY);
curl_easy_setopt(slot->curl, CURLOPT_POSTFIELDS, NULL);
curl_easy_setopt(slot->curl, CURLOPT_UPLOAD, 0);
curl_easy_setopt(slot->curl, CURLOPT_HTTPGET, 1);
+ curl_easy_setopt(slot->curl, CURLOPT_FAILONERROR, 1);
if (http_auth.password)
init_curl_http_auth(slot->curl);
int handle_curl_result(struct slot_results *results)
{
+ /*
+ * If we see a failing http code with CURLE_OK, we have turned off
+ * FAILONERROR (to keep the server's custom error response), and should
+ * translate the code into failure here.
+ */
+ if (results->curl_result == CURLE_OK &&
+ results->http_code >= 400) {
+ results->curl_result = CURLE_HTTP_RETURNED_ERROR;
+ /*
+ * Normally curl will already have put the "reason phrase"
+ * from the server into curl_errorstr; unfortunately without
+ * FAILONERROR it is lost, so we can give only the numeric
+ * status code.
+ */
+ snprintf(curl_errorstr, sizeof(curl_errorstr),
+ "The requested URL returned error: %ld",
+ results->http_code);
+ }
+
if (results->curl_result == CURLE_OK) {
credential_approve(&http_auth);
return HTTP_OK;
strbuf_addstr(&buf, "Pragma:");
if (options & HTTP_NO_CACHE)
strbuf_addstr(&buf, " no-cache");
+ if (options & HTTP_KEEP_ERROR)
+ curl_easy_setopt(slot->curl, CURLOPT_FAILONERROR, 0);
headers = curl_slist_append(headers, buf.buf);
run_active_slot(slot);
ret = handle_curl_result(&results);
} else {
- error("Unable to start HTTP request for %s", url);
+ snprintf(curl_errorstr, sizeof(curl_errorstr),
+ "failed to start HTTP request");
ret = HTTP_START_FAILED;
}
int ret = http_request(url, type, result, target, options);
if (ret != HTTP_REAUTH)
return ret;
+
+ /*
+ * If we are using KEEP_ERROR, the previous request may have
+ * put cruft into our output stream; we should clear it out before
+ * making our next request. We only know how to do this for
+ * the strbuf case, but that is enough to satisfy current callers.
+ */
+ if (options & HTTP_KEEP_ERROR) {
+ switch (target) {
+ case HTTP_REQUEST_STRBUF:
+ strbuf_reset(result);
+ break;
+ default:
+ die("BUG: HTTP_KEEP_ERROR is only supported with strbufs");
+ }
+ }
return http_request(url, type, result, target, options);
}
return ret;
}
-int http_error(const char *url, int ret)
-{
- /* http_request has already handled HTTP_START_FAILED. */
- if (ret != HTTP_START_FAILED)
- error("%s while accessing %s", curl_errorstr, url);
-
- return ret;
-}
-
int http_fetch_ref(const char *base, struct ref *ref)
{
char *url;
#define NO_CURL_IOCTL
#endif
+/*
+ * CURLOPT_USE_SSL was known as CURLOPT_FTP_SSL up to 7.16.4,
+ * and the constants were known as CURLFTPSSL_*
+*/
+#if !defined(CURLOPT_USE_SSL) && defined(CURLOPT_FTP_SSL)
+#define CURLOPT_USE_SSL CURLOPT_FTP_SSL
+#define CURLUSESSL_TRY CURLFTPSSL_TRY
+#endif
+
struct slot_results {
CURLcode curl_result;
long http_code;
/* Options for http_request_*() */
#define HTTP_NO_CACHE 1
+#define HTTP_KEEP_ERROR 2
/* Return values for http_request_*() */
#define HTTP_OK 0
*/
int http_get_strbuf(const char *url, struct strbuf *content_type, struct strbuf *result, int options);
-/*
- * Prints an error message using error() containing url and curl_errorstr,
- * and returns ret.
- */
-int http_error(const char *url, int ret);
-
extern int http_fetch_ref(const char *base, struct ref *ref);
/* Helpers for fetching packs */
if (not defined $pid) {
throw Error::Simple("open failed: $!");
} elsif ($pid == 0) {
- if (defined $opts{STDERR}) {
- close STDERR;
- }
if ($opts{STDERR}) {
open (STDERR, '>&', $opts{STDERR})
or die "dup failed: $!";
+ } elsif (defined $opts{STDERR}) {
+ open (STDERR, '>', '/dev/null')
+ or die "opening /dev/null failed: $!";
}
_cmd_exec($self, $cmd, @args);
}
strbuf_addstr(sb, "?=");
}
+static const char *show_ident_date(const struct ident_split *ident,
+ enum date_mode mode)
+{
+ unsigned long date = 0;
+ int tz = 0;
+
+ if (ident->date_begin && ident->date_end)
+ date = strtoul(ident->date_begin, NULL, 10);
+ if (ident->tz_begin && ident->tz_end)
+ tz = strtol(ident->tz_begin, NULL, 10);
+ return show_date(date, tz, mode);
+}
+
void pp_user_info(const struct pretty_print_context *pp,
const char *what, struct strbuf *sb,
const char *line, const char *encoding)
struct strbuf mail;
struct ident_split ident;
int linelen;
- char *line_end, *date;
+ char *line_end;
const char *mailbuf, *namebuf;
size_t namelen, maillen;
int max_length = 78; /* per rfc2822 */
- unsigned long time;
- int tz;
if (pp->fmt == CMIT_FMT_ONELINE)
return;
strbuf_add(&name, namebuf, namelen);
namelen = name.len + mail.len + 3; /* ' ' + '<' + '>' */
- time = strtoul(ident.date_begin, &date, 10);
- tz = strtol(date, NULL, 10);
if (pp->fmt == CMIT_FMT_EMAIL) {
strbuf_addstr(sb, "From: ");
switch (pp->fmt) {
case CMIT_FMT_MEDIUM:
- strbuf_addf(sb, "Date: %s\n", show_date(time, tz, pp->date_mode));
+ strbuf_addf(sb, "Date: %s\n",
+ show_ident_date(&ident, pp->date_mode));
break;
case CMIT_FMT_EMAIL:
- strbuf_addf(sb, "Date: %s\n", show_date(time, tz, DATE_RFC2822));
+ strbuf_addf(sb, "Date: %s\n",
+ show_ident_date(&ident, DATE_RFC2822));
break;
case CMIT_FMT_FULLER:
- strbuf_addf(sb, "%sDate: %s\n", what, show_date(time, tz, pp->date_mode));
+ strbuf_addf(sb, "%sDate: %s\n", what,
+ show_ident_date(&ident, pp->date_mode));
break;
default:
/* notin' */
{
/* currently all placeholders have same length */
const int placeholder_len = 2;
- int tz;
- unsigned long date = 0;
struct ident_split s;
const char *name, *mail;
size_t maillen, namelen;
if (!s.date_begin)
goto skip;
- date = strtoul(s.date_begin, NULL, 10);
-
if (part == 't') { /* date, UNIX timestamp */
strbuf_add(sb, s.date_begin, s.date_end - s.date_begin);
return placeholder_len;
}
- /* parse tz */
- tz = strtoul(s.tz_begin + 1, NULL, 10);
- if (*s.tz_begin == '-')
- tz = -tz;
-
switch (part) {
case 'd': /* date */
- strbuf_addstr(sb, show_date(date, tz, dmode));
+ strbuf_addstr(sb, show_ident_date(&s, dmode));
return placeholder_len;
case 'D': /* date, RFC2822 style */
- strbuf_addstr(sb, show_date(date, tz, DATE_RFC2822));
+ strbuf_addstr(sb, show_ident_date(&s, DATE_RFC2822));
return placeholder_len;
case 'r': /* date, relative */
- strbuf_addstr(sb, show_date(date, tz, DATE_RELATIVE));
+ strbuf_addstr(sb, show_ident_date(&s, DATE_RELATIVE));
return placeholder_len;
case 'i': /* date, ISO 8601 */
- strbuf_addstr(sb, show_date(date, tz, DATE_ISO8601));
+ strbuf_addstr(sb, show_ident_date(&s, DATE_ISO8601));
return placeholder_len;
}
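
show_ident_date() pulls the timestamp and timezone straight out of the already-split ident line ("Name <mail> 1234567890 -0700"), defaulting to zero when either field is missing. A stand-alone approximation of that parse, without Git's ident_split or show_date() and with a simplified formatter:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    /* Parse the "<epoch> <(+|-)hhmm>" tail of an ident line such as
     * "A U Thor <author@example.com> 1367251200 -0700". */
    static void show_ident_date(const char *ident)
    {
        const char *gt = strrchr(ident, '>');
        unsigned long epoch = 0;
        long tz = 0;
        time_t t;
        char buf[64];

        if (gt)
            epoch = strtoul(gt + 1, NULL, 10);   /* stays 0 if missing/garbled */
        if (gt && strrchr(gt, ' '))
            tz = strtol(strrchr(gt, ' '), NULL, 10);

        /* Shift to wall-clock time in the recorded zone, then format it. */
        t = (time_t)epoch + (tz / 100) * 3600 + (tz % 100) * 60;
        strftime(buf, sizeof(buf), "%Y-%m-%d %H:%M:%S", gmtime(&t));
        printf("%s %+05ld\n", buf, tz);
    }

    int main(void)
    {
        show_ident_date("A U Thor <author@example.com> 1367251200 -0700");
        return 0;
    }
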
#include "git-compat-util.h"
#include "progress.h"
+#include "strbuf.h"
#define TP_IDX_MAX 8
return 0;
}
-static void throughput_string(struct throughput *tp, off_t total,
+static void throughput_string(struct strbuf *buf, off_t total,
unsigned int rate)
{
- int l = sizeof(tp->display);
- if (total > 1 << 30) {
- l -= snprintf(tp->display, l, ", %u.%2.2u GiB",
- (int)(total >> 30),
- (int)(total & ((1 << 30) - 1)) / 10737419);
- } else if (total > 1 << 20) {
- int x = total + 5243; /* for rounding */
- l -= snprintf(tp->display, l, ", %u.%2.2u MiB",
- x >> 20, ((x & ((1 << 20) - 1)) * 100) >> 20);
- } else if (total > 1 << 10) {
- int x = total + 5; /* for rounding */
- l -= snprintf(tp->display, l, ", %u.%2.2u KiB",
- x >> 10, ((x & ((1 << 10) - 1)) * 100) >> 10);
- } else {
- l -= snprintf(tp->display, l, ", %u bytes", (int)total);
- }
-
- if (rate > 1 << 10) {
- int x = rate + 5; /* for rounding */
- snprintf(tp->display + sizeof(tp->display) - l, l,
- " | %u.%2.2u MiB/s",
- x >> 10, ((x & ((1 << 10) - 1)) * 100) >> 10);
- } else if (rate)
- snprintf(tp->display + sizeof(tp->display) - l, l,
- " | %u KiB/s", rate);
+ strbuf_addstr(buf, ", ");
+ strbuf_humanise_bytes(buf, total);
+ strbuf_addstr(buf, " | ");
+ strbuf_humanise_bytes(buf, rate * 1024);
+ strbuf_addstr(buf, "/s");
}
void display_throughput(struct progress *progress, off_t total)
misecs += (int)(tv.tv_usec - tp->prev_tv.tv_usec) / 977;
if (misecs > 512) {
+ struct strbuf buf = STRBUF_INIT;
unsigned int count, rate;
count = total - tp->prev_total;
tp->last_misecs[tp->idx] = misecs;
tp->idx = (tp->idx + 1) % TP_IDX_MAX;
- throughput_string(tp, total, rate);
+ throughput_string(&buf, total, rate);
+ strncpy(tp->display, buf.buf, sizeof(tp->display));
+ strbuf_release(&buf);
if (progress->last_value != -1 && progress_update)
display(progress, progress->last_value, NULL);
}
bufp = (len < sizeof(buf)) ? buf : xmalloc(len + 1);
if (tp) {
+ struct strbuf strbuf = STRBUF_INIT;
unsigned int rate = !tp->avg_misecs ? 0 :
tp->avg_bytes / tp->avg_misecs;
- throughput_string(tp, tp->curr_total, rate);
+ throughput_string(&strbuf, tp->curr_total, rate);
+ strncpy(tp->display, strbuf.buf, sizeof(tp->display));
+ strbuf_release(&strbuf);
}
progress_update = 1;
sprintf(bufp, ", %s.\n", msg);
}
return 1;
}
+
+void *read_blob_data_from_index(struct index_state *istate, const char *path, unsigned long *size)
+{
+ int pos, len;
+ unsigned long sz;
+ enum object_type type;
+ void *data;
+
+ len = strlen(path);
+ pos = index_name_pos(istate, path, len);
+ if (pos < 0) {
+ /*
+ * We might be in the middle of a merge, in which
+ * case we would read stage #2 (ours).
+ */
+ int i;
+ for (i = -pos - 1;
+ (pos < 0 && i < istate->cache_nr &&
+ !strcmp(istate->cache[i]->name, path));
+ i++)
+ if (ce_stage(istate->cache[i]) == 2)
+ pos = i;
+ }
+ if (pos < 0)
+ return NULL;
+ data = read_sha1_file(istate->cache[pos]->sha1, &type, &sz);
+ if (!data || type != OBJ_BLOB) {
+ free(data);
+ return NULL;
+ }
+ if (size)
+ *size = sz;
+ return data;
+}
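
The helper leans on the convention that index_name_pos() returns -pos-1 when the stage-0 entry is absent, which is exactly where any higher-stage entries for the same path sit, so a merge in progress can still be read through its stage #2 ("ours") entry. The same convention demonstrated on a plain sorted array, with find_pos() and the entry layout as simplified stand-ins for Git's index:

    #include <stdio.h>
    #include <string.h>

    struct entry { const char *name; int stage; };

    /* Entries are sorted by (name, stage), like Git's in-memory index. */
    static struct entry cache[] = {
        { "Makefile",  0 },
        { "convert.c", 1 },   /* base   */
        { "convert.c", 2 },   /* ours   */
        { "convert.c", 3 },   /* theirs */
        { "diff.c",    0 },
    };
    static const int cache_nr = sizeof(cache) / sizeof(cache[0]);

    /* Return the position of (path, stage 0), or -insertion_pos - 1 if absent. */
    static int find_pos(const char *path)
    {
        int i;
        for (i = 0; i < cache_nr; i++) {
            int cmp = strcmp(cache[i].name, path);
            if (cmp == 0 && cache[i].stage == 0)
                return i;
            if (cmp > 0 || (cmp == 0 && cache[i].stage > 0))
                return -i - 1;
        }
        return -cache_nr - 1;
    }

    int main(void)
    {
        const char *path = "convert.c";
        int pos = find_pos(path);

        if (pos < 0) {
            /* Mid-merge: scan the insertion point for the stage #2 entry. */
            int i;
            for (i = -pos - 1;
                 i < cache_nr && !strcmp(cache[i].name, path);
                 i++)
                if (cache[i].stage == 2)
                    pos = i;
        }
        if (pos < 0)
            printf("%s: not in index\n", path);
        else
            printf("%s: found at %d (stage %d)\n", path, pos, cache[pos].stage);
        return 0;
    }
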
}
}
+static int show_http_message(struct strbuf *type, struct strbuf *msg)
+{
+ const char *p, *eol;
+
+ /*
+ * We only show text/plain parts, as other types are likely
+ * to be ugly to look at on the user's terminal.
+ *
+ * TODO should handle "; charset=XXX", and re-encode into
+ * logoutputencoding
+ */
+ if (strcasecmp(type->buf, "text/plain"))
+ return -1;
+
+ strbuf_trim(msg);
+ if (!msg->len)
+ return -1;
+
+ p = msg->buf;
+ do {
+ eol = strchrnul(p, '\n');
+ fprintf(stderr, "remote: %.*s\n", (int)(eol - p), p);
+ p = eol + 1;
+ } while(*eol);
+ return 0;
+}
+
static struct discovery* discover_refs(const char *service, int for_push)
{
struct strbuf exp = STRBUF_INIT;
}
refs_url = strbuf_detach(&buffer, NULL);
- http_ret = http_get_strbuf(refs_url, &type, &buffer, HTTP_NO_CACHE);
+ http_ret = http_get_strbuf(refs_url, &type, &buffer,
+ HTTP_NO_CACHE | HTTP_KEEP_ERROR);
switch (http_ret) {
case HTTP_OK:
break;
case HTTP_MISSING_TARGET:
- die("%s not found: did you run git update-server-info on the"
- " server?", refs_url);
+ show_http_message(&type, &buffer);
+ die("repository '%s' not found", url);
case HTTP_NOAUTH:
- die("Authentication failed");
+ show_http_message(&type, &buffer);
+ die("Authentication failed for '%s'", url);
default:
- http_error(refs_url, http_ret);
- die("HTTP request failed");
+ show_http_message(&type, &buffer);
+ die("unable to access '%s': %s", url, curl_errorstr);
}
last= xcalloc(1, sizeof(*last_discovery));
}
die("options not supported in --stdin mode");
}
- if (handle_revision_arg(sb.buf, revs, 0, REVARG_CANNOT_BE_FILENAME))
+ if (handle_revision_arg(xstrdup(sb.buf), revs, 0,
+ REVARG_CANNOT_BE_FILENAME))
die("bad revision '%s'", sb.buf);
}
if (seen_dashdash)
static pthread_t main_thread;
static int main_thread_set;
static pthread_key_t async_key;
+static pthread_key_t async_die_counter;
static void *run_thread(void *data)
{
exit(128);
}
+
+static int async_die_is_recursing(void)
+{
+ void *ret = pthread_getspecific(async_die_counter);
+ pthread_setspecific(async_die_counter, (void *)1);
+ return ret != NULL;
+}
+
#endif
int start_async(struct async *async)
main_thread_set = 1;
main_thread = pthread_self();
pthread_key_create(&async_key, NULL);
+ pthread_key_create(&async_die_counter, NULL);
set_die_routine(die_async);
+ set_die_is_recursing_routine(async_die_is_recursing);
}
if (proc_in >= 0)
{
struct commit_list *todo_list = NULL;
unsigned char sha1[20];
+ int i;
if (opts->subcommand == REPLAY_NONE)
assert(opts->revs);
if (opts->subcommand == REPLAY_CONTINUE)
return sequencer_continue(opts);
+ for (i = 0; i < opts->revs->pending.nr; i++) {
+ unsigned char sha1[20];
+ const char *name = opts->revs->pending.objects[i].name;
+
+ /* This happens when using --stdin. */
+ if (!strlen(name))
+ continue;
+
+ if (!get_sha1(name, sha1)) {
+ enum object_type type = sha1_object_info(sha1, NULL);
+
+ if (type > 0 && type != OBJ_COMMIT)
+ die(_("%s: can't cherry-pick a %s"), name, typename(type));
+ } else
+ die(_("%s: bad revision"), name);
+ }
+
/*
* If we were called as "git cherry-pick <commit>", just
* cherry-pick/revert it, set CHERRY_PICK_HEAD /
return base_offset;
}
-/* forward declaration for a mutually recursive function */
-static int packed_object_info(struct packed_git *p, off_t offset,
- unsigned long *sizep, int *rtype);
-
-static int packed_delta_info(struct packed_git *p,
- struct pack_window **w_curs,
- off_t curpos,
- enum object_type type,
- off_t obj_offset,
- unsigned long *sizep)
-{
- off_t base_offset;
-
- base_offset = get_delta_base(p, w_curs, &curpos, type, obj_offset);
- if (!base_offset)
- return OBJ_BAD;
- type = packed_object_info(p, base_offset, NULL, NULL);
- if (type <= OBJ_NONE) {
- struct revindex_entry *revidx;
- const unsigned char *base_sha1;
- revidx = find_pack_revindex(p, base_offset);
- if (!revidx)
- return OBJ_BAD;
- base_sha1 = nth_packed_object_sha1(p, revidx->nr);
- mark_bad_packed_object(p, base_sha1);
- type = sha1_object_info(base_sha1, NULL);
- if (type <= OBJ_NONE)
- return OBJ_BAD;
- }
-
- /* We choose to only get the type of the base object and
- * ignore potentially corrupt pack file that expects the delta
- * based on a base with a wrong size. This saves tons of
- * inflate() calls.
- */
- if (sizep) {
- *sizep = get_size_from_delta(p, w_curs, curpos);
- if (*sizep == 0)
- type = OBJ_BAD;
- }
-
- return type;
-}
-
int unpack_object_header(struct packed_git *p,
struct pack_window **w_curs,
off_t *curpos,
return type;
}
+static int retry_bad_packed_offset(struct packed_git *p, off_t obj_offset)
+{
+ int type;
+ struct revindex_entry *revidx;
+ const unsigned char *sha1;
+ revidx = find_pack_revindex(p, obj_offset);
+ if (!revidx)
+ return OBJ_BAD;
+ sha1 = nth_packed_object_sha1(p, revidx->nr);
+ mark_bad_packed_object(p, sha1);
+ type = sha1_object_info(sha1, NULL);
+ if (type <= OBJ_NONE)
+ return OBJ_BAD;
+ return type;
+}
+
+
+#define POI_STACK_PREALLOC 64
+
static int packed_object_info(struct packed_git *p, off_t obj_offset,
unsigned long *sizep, int *rtype)
{
unsigned long size;
off_t curpos = obj_offset;
enum object_type type;
+ off_t small_poi_stack[POI_STACK_PREALLOC];
+ off_t *poi_stack = small_poi_stack;
+ int poi_stack_nr = 0, poi_stack_alloc = POI_STACK_PREALLOC;
type = unpack_object_header(p, &w_curs, &curpos, &size);
+
if (rtype)
*rtype = type; /* representation type */
+ if (sizep) {
+ if (type == OBJ_OFS_DELTA || type == OBJ_REF_DELTA) {
+ off_t tmp_pos = curpos;
+ off_t base_offset = get_delta_base(p, &w_curs, &tmp_pos,
+ type, obj_offset);
+ if (!base_offset) {
+ type = OBJ_BAD;
+ goto out;
+ }
+ *sizep = get_size_from_delta(p, &w_curs, tmp_pos);
+ if (*sizep == 0) {
+ type = OBJ_BAD;
+ goto out;
+ }
+ } else {
+ *sizep = size;
+ }
+ }
+
+ while (type == OBJ_OFS_DELTA || type == OBJ_REF_DELTA) {
+ off_t base_offset;
+ /* Push the object we're going to leave behind */
+ if (poi_stack_nr >= poi_stack_alloc && poi_stack == small_poi_stack) {
+ poi_stack_alloc = alloc_nr(poi_stack_nr);
+ poi_stack = xmalloc(sizeof(off_t)*poi_stack_alloc);
+ memcpy(poi_stack, small_poi_stack, sizeof(off_t)*poi_stack_nr);
+ } else {
+ ALLOC_GROW(poi_stack, poi_stack_nr+1, poi_stack_alloc);
+ }
+ poi_stack[poi_stack_nr++] = obj_offset;
+ /* If parsing the base offset fails, just unwind */
+ base_offset = get_delta_base(p, &w_curs, &curpos, type, obj_offset);
+ if (!base_offset)
+ goto unwind;
+ curpos = obj_offset = base_offset;
+ type = unpack_object_header(p, &w_curs, &curpos, &size);
+ if (type <= OBJ_NONE) {
+			/* If reading the base itself fails, retry it by
+			 * marking it bad and looking it up elsewhere; if
+			 * that also fails, unwind */
+ type = retry_bad_packed_offset(p, base_offset);
+ if (type > OBJ_NONE)
+ goto out;
+ goto unwind;
+ }
+ }
+
switch (type) {
- case OBJ_OFS_DELTA:
- case OBJ_REF_DELTA:
- type = packed_delta_info(p, &w_curs, curpos,
- type, obj_offset, sizep);
- break;
+ case OBJ_BAD:
case OBJ_COMMIT:
case OBJ_TREE:
case OBJ_BLOB:
case OBJ_TAG:
- if (sizep)
- *sizep = size;
break;
default:
error("unknown object type %i at offset %"PRIuMAX" in %s",
type, (uintmax_t)obj_offset, p->pack_name);
type = OBJ_BAD;
}
+
+out:
+ if (poi_stack != small_poi_stack)
+ free(poi_stack);
unuse_pack(&w_curs);
return type;
+
+unwind:
+ while (poi_stack_nr) {
+ obj_offset = poi_stack[--poi_stack_nr];
+ type = retry_bad_packed_offset(p, obj_offset);
+ if (type > OBJ_NONE)
+ goto out;
+ }
+ type = OBJ_BAD;
+ goto out;
}
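
The rewritten packed_object_info() walks a delta chain iteratively instead of recursing once per layer: each offset is pushed onto a small on-stack array that spills to the heap only for unusually deep chains, keeping the C call stack shallow regardless of chain depth. Stripped of all pack-file details, the pattern looks roughly like the sketch below (the linked "delta chain" and the 4-slot preallocation are purely illustrative):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct obj { int id; struct obj *base; }; /* base == NULL: not a delta */

    #define STACK_PREALLOC 4

    /* Walk to the innermost base iteratively, remembering the path so we
     * could unwind (e.g. to retry each member) if something went wrong. */
    static int innermost_base(struct obj *o)
    {
        struct obj *small_stack[STACK_PREALLOC];
        struct obj **stack = small_stack;
        int nr = 0, alloc = STACK_PREALLOC;
        int id;

        while (o->base) {
            if (nr >= alloc) {
                /* Spill to the heap only when the chain is deep. */
                alloc *= 2;
                if (stack == small_stack) {
                    stack = malloc(sizeof(*stack) * alloc);
                    if (!stack)
                        exit(1);
                    memcpy(stack, small_stack, sizeof(small_stack));
                } else {
                    struct obj **bigger = realloc(stack, sizeof(*stack) * alloc);
                    if (!bigger)
                        exit(1);
                    stack = bigger;
                }
            }
            stack[nr++] = o;    /* push the object we are leaving behind */
            o = o->base;
        }
        id = o->id;

        printf("chain depth: %d, base id: %d\n", nr, id);
        if (stack != small_stack)
            free(stack);
        return id;
    }

    int main(void)
    {
        struct obj objs[6];
        int i;

        for (i = 0; i < 6; i++) {
            objs[i].id = i;
            objs[i].base = i ? &objs[i - 1] : NULL;
        }
        innermost_base(&objs[5]);   /* chain depth: 5, base id: 0 */
        return 0;
    }
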
static void *unpack_compressed_entry(struct packed_git *p,
return hash % MAX_DELTA_CACHE;
}
-static int in_delta_base_cache(struct packed_git *p, off_t base_offset)
+static struct delta_base_cache_entry *
+get_delta_base_cache_entry(struct packed_git *p, off_t base_offset)
{
unsigned long hash = pack_entry_hash(p, base_offset);
- struct delta_base_cache_entry *ent = delta_base_cache + hash;
+ return delta_base_cache + hash;
+}
+
+static int eq_delta_base_cache_entry(struct delta_base_cache_entry *ent,
+ struct packed_git *p, off_t base_offset)
+{
return (ent->data && ent->p == p && ent->base_offset == base_offset);
}
+static int in_delta_base_cache(struct packed_git *p, off_t base_offset)
+{
+ struct delta_base_cache_entry *ent;
+ ent = get_delta_base_cache_entry(p, base_offset);
+ return eq_delta_base_cache_entry(ent, p, base_offset);
+}
+
+static void clear_delta_base_cache_entry(struct delta_base_cache_entry *ent)
+{
+ ent->data = NULL;
+ ent->lru.next->prev = ent->lru.prev;
+ ent->lru.prev->next = ent->lru.next;
+ delta_base_cached -= ent->size;
+}
+
static void *cache_or_unpack_entry(struct packed_git *p, off_t base_offset,
unsigned long *base_size, enum object_type *type, int keep_cache)
{
+ struct delta_base_cache_entry *ent;
void *ret;
- unsigned long hash = pack_entry_hash(p, base_offset);
- struct delta_base_cache_entry *ent = delta_base_cache + hash;
- ret = ent->data;
- if (!ret || ent->p != p || ent->base_offset != base_offset)
+ ent = get_delta_base_cache_entry(p, base_offset);
+
+ if (!eq_delta_base_cache_entry(ent, p, base_offset))
return unpack_entry(p, base_offset, type, base_size);
- if (!keep_cache) {
- ent->data = NULL;
- ent->lru.next->prev = ent->lru.prev;
- ent->lru.prev->next = ent->lru.next;
- delta_base_cached -= ent->size;
- } else {
+ ret = ent->data;
+
+ if (!keep_cache)
+ clear_delta_base_cache_entry(ent);
+ else
ret = xmemdupz(ent->data, ent->size);
- }
*type = ent->type;
*base_size = ent->size;
return ret;
static void *read_object(const unsigned char *sha1, enum object_type *type,
unsigned long *size);
-static void *unpack_delta_entry(struct packed_git *p,
- struct pack_window **w_curs,
- off_t curpos,
- unsigned long delta_size,
- off_t obj_offset,
- enum object_type *type,
- unsigned long *sizep)
-{
- void *delta_data, *result, *base;
- unsigned long base_size;
- off_t base_offset;
-
- base_offset = get_delta_base(p, w_curs, &curpos, *type, obj_offset);
- if (!base_offset) {
- error("failed to validate delta base reference "
- "at offset %"PRIuMAX" from %s",
- (uintmax_t)curpos, p->pack_name);
- return NULL;
- }
- unuse_pack(w_curs);
- base = cache_or_unpack_entry(p, base_offset, &base_size, type, 0);
- if (!base) {
- /*
- * We're probably in deep shit, but let's try to fetch
- * the required base anyway from another pack or loose.
- * This is costly but should happen only in the presence
- * of a corrupted pack, and is better than failing outright.
- */
- struct revindex_entry *revidx;
- const unsigned char *base_sha1;
- revidx = find_pack_revindex(p, base_offset);
- if (!revidx)
- return NULL;
- base_sha1 = nth_packed_object_sha1(p, revidx->nr);
- error("failed to read delta base object %s"
- " at offset %"PRIuMAX" from %s",
- sha1_to_hex(base_sha1), (uintmax_t)base_offset,
- p->pack_name);
- mark_bad_packed_object(p, base_sha1);
- base = read_object(base_sha1, type, &base_size);
- if (!base)
- return NULL;
- }
-
- delta_data = unpack_compressed_entry(p, w_curs, curpos, delta_size);
- if (!delta_data) {
- error("failed to unpack compressed delta "
- "at offset %"PRIuMAX" from %s",
- (uintmax_t)curpos, p->pack_name);
- free(base);
- return NULL;
- }
- result = patch_delta(base, base_size,
- delta_data, delta_size,
- sizep);
- if (!result)
- die("failed to apply delta");
- free(delta_data);
- add_delta_base_cache(p, base_offset, base, base_size, *type);
- return result;
-}
-
static void write_pack_access_log(struct packed_git *p, off_t obj_offset)
{
static FILE *log_file;
int do_check_packed_object_crc;
+#define UNPACK_ENTRY_STACK_PREALLOC 64
+struct unpack_entry_stack_ent {
+ off_t obj_offset;
+ off_t curpos;
+ unsigned long size;
+};
+
void *unpack_entry(struct packed_git *p, off_t obj_offset,
- enum object_type *type, unsigned long *sizep)
+ enum object_type *final_type, unsigned long *final_size)
{
struct pack_window *w_curs = NULL;
off_t curpos = obj_offset;
- void *data;
+ void *data = NULL;
+ unsigned long size;
+ enum object_type type;
+ struct unpack_entry_stack_ent small_delta_stack[UNPACK_ENTRY_STACK_PREALLOC];
+ struct unpack_entry_stack_ent *delta_stack = small_delta_stack;
+ int delta_stack_nr = 0, delta_stack_alloc = UNPACK_ENTRY_STACK_PREALLOC;
+ int base_from_cache = 0;
if (log_pack_access)
write_pack_access_log(p, obj_offset);
- if (do_check_packed_object_crc && p->index_version > 1) {
- struct revindex_entry *revidx = find_pack_revindex(p, obj_offset);
- unsigned long len = revidx[1].offset - obj_offset;
- if (check_pack_crc(p, &w_curs, obj_offset, len, revidx->nr)) {
- const unsigned char *sha1 =
- nth_packed_object_sha1(p, revidx->nr);
- error("bad packed object CRC for %s",
- sha1_to_hex(sha1));
- mark_bad_packed_object(p, sha1);
- unuse_pack(&w_curs);
- return NULL;
+ /* PHASE 1: drill down to the innermost base object */
+ for (;;) {
+ off_t base_offset;
+ int i;
+ struct delta_base_cache_entry *ent;
+
+ if (do_check_packed_object_crc && p->index_version > 1) {
+ struct revindex_entry *revidx = find_pack_revindex(p, obj_offset);
+ unsigned long len = revidx[1].offset - obj_offset;
+ if (check_pack_crc(p, &w_curs, obj_offset, len, revidx->nr)) {
+ const unsigned char *sha1 =
+ nth_packed_object_sha1(p, revidx->nr);
+ error("bad packed object CRC for %s",
+ sha1_to_hex(sha1));
+ mark_bad_packed_object(p, sha1);
+ unuse_pack(&w_curs);
+ return NULL;
+ }
+ }
+
+ ent = get_delta_base_cache_entry(p, curpos);
+ if (eq_delta_base_cache_entry(ent, p, curpos)) {
+ type = ent->type;
+ data = ent->data;
+ size = ent->size;
+ clear_delta_base_cache_entry(ent);
+ base_from_cache = 1;
+ break;
}
+
+ type = unpack_object_header(p, &w_curs, &curpos, &size);
+ if (type != OBJ_OFS_DELTA && type != OBJ_REF_DELTA)
+ break;
+
+ base_offset = get_delta_base(p, &w_curs, &curpos, type, obj_offset);
+ if (!base_offset) {
+ error("failed to validate delta base reference "
+ "at offset %"PRIuMAX" from %s",
+ (uintmax_t)curpos, p->pack_name);
+ /* bail to phase 2, in hopes of recovery */
+ data = NULL;
+ break;
+ }
+
+ /* push object, proceed to base */
+ if (delta_stack_nr >= delta_stack_alloc
+ && delta_stack == small_delta_stack) {
+ delta_stack_alloc = alloc_nr(delta_stack_nr);
+ delta_stack = xmalloc(sizeof(*delta_stack)*delta_stack_alloc);
+ memcpy(delta_stack, small_delta_stack,
+ sizeof(*delta_stack)*delta_stack_nr);
+ } else {
+ ALLOC_GROW(delta_stack, delta_stack_nr+1, delta_stack_alloc);
+ }
+ i = delta_stack_nr++;
+ delta_stack[i].obj_offset = obj_offset;
+ delta_stack[i].curpos = curpos;
+ delta_stack[i].size = size;
+
+ curpos = obj_offset = base_offset;
}
- *type = unpack_object_header(p, &w_curs, &curpos, sizep);
- switch (*type) {
+ /* PHASE 2: handle the base */
+ switch (type) {
case OBJ_OFS_DELTA:
case OBJ_REF_DELTA:
- data = unpack_delta_entry(p, &w_curs, curpos, *sizep,
- obj_offset, type, sizep);
+ if (data)
+ die("BUG in unpack_entry: left loop at a valid delta");
break;
case OBJ_COMMIT:
case OBJ_TREE:
case OBJ_BLOB:
case OBJ_TAG:
- data = unpack_compressed_entry(p, &w_curs, curpos, *sizep);
+ if (!base_from_cache)
+ data = unpack_compressed_entry(p, &w_curs, curpos, size);
break;
default:
data = NULL;
error("unknown object type %i at offset %"PRIuMAX" in %s",
- *type, (uintmax_t)obj_offset, p->pack_name);
+ type, (uintmax_t)obj_offset, p->pack_name);
+ }
+
+ /* PHASE 3: apply deltas in order */
+
+ /* invariants:
+ * 'data' holds the base data, or NULL if there was corruption
+ */
+ while (delta_stack_nr) {
+ void *delta_data;
+ void *base = data;
+ unsigned long delta_size, base_size = size;
+ int i;
+
+ data = NULL;
+
+ if (base)
+ add_delta_base_cache(p, obj_offset, base, base_size, type);
+
+ if (!base) {
+ /*
+ * We're probably in deep shit, but let's try to fetch
+ * the required base anyway from another pack or loose.
+ * This is costly but should happen only in the presence
+ * of a corrupted pack, and is better than failing outright.
+ */
+ struct revindex_entry *revidx;
+ const unsigned char *base_sha1;
+ revidx = find_pack_revindex(p, obj_offset);
+ if (revidx) {
+ base_sha1 = nth_packed_object_sha1(p, revidx->nr);
+ error("failed to read delta base object %s"
+ " at offset %"PRIuMAX" from %s",
+ sha1_to_hex(base_sha1), (uintmax_t)obj_offset,
+ p->pack_name);
+ mark_bad_packed_object(p, base_sha1);
+ base = read_object(base_sha1, &type, &base_size);
+ }
+ }
+
+ i = --delta_stack_nr;
+ obj_offset = delta_stack[i].obj_offset;
+ curpos = delta_stack[i].curpos;
+ delta_size = delta_stack[i].size;
+
+ if (!base)
+ continue;
+
+ delta_data = unpack_compressed_entry(p, &w_curs, curpos, delta_size);
+
+ if (!delta_data) {
+ error("failed to unpack compressed delta "
+ "at offset %"PRIuMAX" from %s",
+ (uintmax_t)curpos, p->pack_name);
+ free(base);
+ data = NULL;
+ continue;
+ }
+
+ data = patch_delta(base, base_size,
+ delta_data, delta_size,
+ &size);
+ if (!data)
+ die("failed to apply delta");
+
+	free(delta_data);
}
+
+ *final_type = type;
+ *final_size = size;
+
unuse_pack(&w_curs);
return data;
}
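
The rewrite above resolves delta chains iteratively: phase 1 walks down to the innermost base while pushing each delta's offset and size onto a stack that starts as a 64-entry array on the C stack and spills to the heap only for unusually deep chains; phase 3 then pops and applies the deltas in order. The stand-alone sketch below illustrates the same "small preallocated stack, grow to the heap on demand" pattern; all names in it (struct frame, resolve_chain, NODE_PREALLOC) are hypothetical and not Git's code.

    /*
     * Toy chain resolver: phase 1 records the deltas on a small stack,
     * phase 2 evaluates the base, phase 3 applies the deltas in order.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define NODE_PREALLOC 64

    struct node {
    	int delta;               /* stands in for a delta to apply */
    	const struct node *base; /* NULL: this is the base object */
    };

    struct frame {
    	int delta;
    };

    static int resolve_chain(const struct node *n)
    {
    	struct frame small_stack[NODE_PREALLOC];
    	struct frame *stack = small_stack;
    	int nr = 0, alloc = NODE_PREALLOC;
    	int value;

    	/* phase 1: walk down to the base, remembering each delta */
    	while (n->base) {
    		if (nr == alloc) {
    			alloc *= 2;
    			if (stack == small_stack) {
    				stack = malloc(sizeof(*stack) * alloc);
    				if (stack)
    					memcpy(stack, small_stack,
    					       sizeof(*stack) * nr);
    			} else {
    				stack = realloc(stack, sizeof(*stack) * alloc);
    			}
    			if (!stack)
    				exit(1); /* allocation errors kept minimal */
    		}
    		stack[nr++].delta = n->delta;
    		n = n->base;
    	}

    	/* phase 2: the base itself */
    	value = n->delta;

    	/* phase 3: apply the remembered deltas, innermost first */
    	while (nr)
    		value += stack[--nr].delta;

    	if (stack != small_stack)
    		free(stack);
    	return value;
    }

    int main(void)
    {
    	struct node base = { 1, NULL };
    	struct node mid = { 10, &base };
    	struct node tip = { 100, &mid };

    	printf("%d\n", resolve_chain(&tip)); /* prints 111 */
    	return 0;
    }

The recursion depth of the old code grew with the length of the delta chain; with this shape, chains of any depth use a bounded amount of C stack.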
strbuf_add_urlencode(sb, s, strlen(s), reserved);
}
+void strbuf_humanise_bytes(struct strbuf *buf, off_t bytes)
+{
+ if (bytes > 1 << 30) {
+ strbuf_addf(buf, "%u.%2.2u GiB",
+ (int)(bytes >> 30),
+ (int)(bytes & ((1 << 30) - 1)) / 10737419);
+ } else if (bytes > 1 << 20) {
+ int x = bytes + 5243; /* for rounding */
+ strbuf_addf(buf, "%u.%2.2u MiB",
+ x >> 20, ((x & ((1 << 20) - 1)) * 100) >> 20);
+ } else if (bytes > 1 << 10) {
+ int x = bytes + 5; /* for rounding */
+ strbuf_addf(buf, "%u.%2.2u KiB",
+ x >> 10, ((x & ((1 << 10) - 1)) * 100) >> 10);
+ } else {
+ strbuf_addf(buf, "%u bytes", (int)bytes);
+ }
+}
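
For reference, the constants above come from the two-decimal formatting: 5243 is roughly 2^20/200 and 5 roughly 2^10/200, i.e. half of one hundredth of the unit, so the printed fraction is rounded rather than truncated, while 10737419 is roughly 2^30/100 and turns the GiB remainder directly into hundredths. A stand-alone sketch of the same arithmetic (snprintf instead of a strbuf; the helper name humanise() is made up), the kind of scaling "git count-objects -H" relies on:

    #include <stdio.h>
    #include <sys/types.h>

    static void humanise(char *out, size_t len, off_t bytes)
    {
    	if (bytes > 1 << 30) {
    		snprintf(out, len, "%u.%2.2u GiB",
    			 (int)(bytes >> 30),
    			 (int)(bytes & ((1 << 30) - 1)) / 10737419);
    	} else if (bytes > 1 << 20) {
    		int x = bytes + 5243; /* for rounding */
    		snprintf(out, len, "%u.%2.2u MiB",
    			 x >> 20, ((x & ((1 << 20) - 1)) * 100) >> 20);
    	} else if (bytes > 1 << 10) {
    		int x = bytes + 5; /* for rounding */
    		snprintf(out, len, "%u.%2.2u KiB",
    			 x >> 10, ((x & ((1 << 10) - 1)) * 100) >> 10);
    	} else {
    		snprintf(out, len, "%u bytes", (int)bytes);
    	}
    }

    int main(void)
    {
    	off_t samples[] = { 500, 2048, 1536000, 1610612736 };
    	char buf[32];
    	int i;

    	for (i = 0; i < 4; i++) {
    		humanise(buf, sizeof(buf), samples[i]);
    		/* 500 bytes, 2.00 KiB, 1.46 MiB, 1.49 GiB */
    		printf("%lu -> %s\n", (unsigned long)samples[i], buf);
    	}
    	return 0;
    }

Note that, as in the code above, only the KiB and MiB branches round; the GiB branch truncates, so 1.5 GiB prints as "1.49 GiB".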
+
int printf_ln(const char *fmt, ...)
{
int ret;
extern void strbuf_addstr_urlencode(struct strbuf *, const char *,
int reserved);
+extern void strbuf_humanise_bytes(struct strbuf *buf, off_t bytes);
__attribute__((format (printf,1,2)))
extern int printf_ln(const char *fmt, ...);
}
static void print_submodule_summary(struct rev_info *rev, FILE *f,
+ const char *line_prefix,
const char *del, const char *add, const char *reset)
{
static const char format[] = " %m %s";
struct pretty_print_context ctx = {0};
ctx.date_mode = rev->date_mode;
strbuf_setlen(&sb, 0);
+ strbuf_addstr(&sb, line_prefix);
if (commit->object.flags & SYMMETRIC_LEFT) {
if (del)
strbuf_addstr(&sb, del);
}
void show_submodule_summary(FILE *f, const char *path,
+ const char *line_prefix,
unsigned char one[20], unsigned char two[20],
unsigned dirty_submodule, const char *meta,
const char *del, const char *add, const char *reset)
message = "(revision walker failed)";
if (dirty_submodule & DIRTY_SUBMODULE_UNTRACKED)
- fprintf(f, "Submodule %s contains untracked content\n", path);
+ fprintf(f, "%sSubmodule %s contains untracked content\n",
+ line_prefix, path);
if (dirty_submodule & DIRTY_SUBMODULE_MODIFIED)
- fprintf(f, "Submodule %s contains modified content\n", path);
+ fprintf(f, "%sSubmodule %s contains modified content\n",
+ line_prefix, path);
if (!hashcmp(one, two)) {
strbuf_release(&sb);
return;
}
- strbuf_addf(&sb, "%sSubmodule %s %s..", meta, path,
+ strbuf_addf(&sb, "%s%sSubmodule %s %s..", line_prefix, meta, path,
find_unique_abbrev(one, DEFAULT_ABBREV));
if (!fast_backward && !fast_forward)
strbuf_addch(&sb, '.');
fwrite(sb.buf, sb.len, 1, f);
if (!message) /* only NULL if we succeeded in setting up the walk */
- print_submodule_summary(&rev, f, del, add, reset);
+ print_submodule_summary(&rev, f, line_prefix, del, add, reset);
if (left)
clear_commit_marks(left, ~0);
if (right)
void handle_ignore_submodules_arg(struct diff_options *diffopt, const char *);
int parse_fetch_recurse_submodules_arg(const char *opt, const char *arg);
void show_submodule_summary(FILE *f, const char *path,
+ const char *line_prefix,
unsigned char one[20], unsigned char two[20],
unsigned dirty_submodule, const char *meta,
const char *del, const char *add, const char *reset);
<IfModule !mod_authz_user.c>
LoadModule authz_user_module modules/mod_authz_user.so
</IfModule>
+<IfModule !mod_authz_host.c>
+ LoadModule authz_host_module modules/mod_authz_host.so
+</IfModule>
</IfVersion>
PassEnv GIT_VALGRIND
SetEnv GIT_COMMITTER_NAME "Custom User"
SetEnv GIT_COMMITTER_EMAIL custom@example.com
</LocationMatch>
+<LocationMatch /smart_namespace/>
+ SetEnv GIT_EXEC_PATH ${GIT_EXEC_PATH}
+ SetEnv GIT_HTTP_EXPORT_ALL
+ SetEnv GIT_NAMESPACE ns
+</LocationMatch>
ScriptAliasMatch /smart_*[^/]*/(.*) ${GIT_EXEC_PATH}/git-http-backend/$1
ScriptAlias /broken_smart/ broken-smart-http.sh/
<Directory ${GIT_EXEC_PATH}>
Require valid-user
</LocationMatch>
+RewriteCond %{QUERY_STRING} service=git-receive-pack [OR]
+RewriteCond %{REQUEST_URI} /git-receive-pack$
+RewriteRule ^/half-auth-complete/ - [E=AUTHREQUIRED:yes]
+
+<Location /half-auth-complete/>
+ Order Deny,Allow
+ Deny from env=AUTHREQUIRED
+
+ AuthType Basic
+ AuthName "Git Access"
+ AuthUserFile passwd
+ Require valid-user
+ Satisfy Any
+</Location>
+
<IfDefine DAV>
LoadModule dav_module modules/mod_dav.so
LoadModule dav_fs_module modules/mod_dav_fs.so
tag_description="This is a tag"
tag_content="$tag_header_without_timestamp 0000000000 +0000
-$tag_description"
-tag_pretty_content="$tag_header_without_timestamp Thu Jan 1 00:00:00 1970 +0000
-
$tag_description"
tag_sha1=$(echo_without_newline "$tag_content" | git mktag)
tag_size=$(strlen "$tag_content")
-run_tests 'tag' $tag_sha1 $tag_size "$tag_content" "$tag_pretty_content" 1
+run_tests 'tag' $tag_sha1 $tag_size "$tag_content" "$tag_content" 1
test_expect_success \
"Reach a blob from a tag pointing to it" \
two"
'
+test_expect_success 'cherry-pick three one two: fails' '
+ git checkout -f master &&
+ git reset --hard first &&
+ test_must_fail git cherry-pick three one two:
+'
+
test_expect_success 'output to keep user entertained during multi-pick' '
cat <<-\EOF >expected &&
[master OBJID] second
grep hello actual >/dev/null
'
+test_expect_success 'cover letter with nothing' '
+ git format-patch --stdout --cover-letter >actual &&
+ test_line_count = 0 actual
+'
+
+test_expect_success 'cover letter auto' '
+ mkdir -p tmp &&
+ test_when_finished "rm -rf tmp;
+ git config --unset format.coverletter" &&
+
+ git config format.coverletter auto &&
+ git format-patch -o tmp -1 >list &&
+ test_line_count = 1 list &&
+ git format-patch -o tmp -2 >list &&
+ test_line_count = 3 list
+'
+
+test_expect_success 'cover letter auto user override' '
+ mkdir -p tmp &&
+ test_when_finished "rm -rf tmp;
+ git config --unset format.coverletter" &&
+
+ git config format.coverletter auto &&
+ git format-patch -o tmp --cover-letter -1 >list &&
+ test_line_count = 2 list &&
+ git format-patch -o tmp --cover-letter -2 >list &&
+ test_line_count = 3 list &&
+ git format-patch -o tmp --no-cover-letter -1 >list &&
+ test_line_count = 1 list &&
+ git format-patch -o tmp --no-cover-letter -2 >list &&
+ test_line_count = 2 list
+'
+
test_done
--- /dev/null
+#!/bin/sh
+
+test_description='git log with invalid commit headers'
+
+. ./test-lib.sh
+
+test_expect_success 'setup' '
+ test_commit foo &&
+
+ git cat-file commit HEAD |
+ sed "/^author /s/>/>-<>/" >broken_email.commit &&
+ git hash-object -w -t commit broken_email.commit >broken_email.hash &&
+ git update-ref refs/heads/broken_email $(cat broken_email.hash)
+'
+
+test_expect_success 'git log with broken author email' '
+ {
+ echo commit $(cat broken_email.hash)
+ echo "Author: A U Thor <author@example.com>"
+	echo "Date:   Thu Jan 1 00:00:00 1970 +0000"
+	echo
+	echo "    foo"
+ } >expect.out &&
+ : >expect.err &&
+
+ git log broken_email >actual.out 2>actual.err &&
+
+ test_cmp expect.out actual.out &&
+ test_cmp expect.err actual.err
+'
+
+test_expect_success 'git log --format with broken author email' '
+ echo "A U Thor+author@example.com+" >expect.out &&
+ : >expect.err &&
+
+ git log --format="%an+%ae+%ad" broken_email >actual.out 2>actual.err &&
+
+ test_cmp expect.out actual.out &&
+ test_cmp expect.err actual.err
+'
+
+test_done
test_expect_success 'file add !A, B' '
cat >expected <<\EXPECTED &&
-added in local
- our 100644 43d5a8ed6ef6c00ff775008633f95787d088285d ONE
EXPECTED
git reset --hard initial &&
test_expect_success 'file add A, B (same)' '
cat >expected <<\EXPECTED &&
-added in both
- our 100644 43d5a8ed6ef6c00ff775008633f95787d088285d ONE
- their 100644 43d5a8ed6ef6c00ff775008633f95787d088285d ONE
EXPECTED
git reset --hard initial &&
test_expect_success 'file remove A, !B' '
cat >expected <<\EXPECTED &&
-removed in local
- base 100644 43d5a8ed6ef6c00ff775008633f95787d088285d ONE
- their 100644 43d5a8ed6ef6c00ff775008633f95787d088285d ONE
EXPECTED
git reset --hard initial &&
test_commit "make-file" "dir" "CCC" &&
git merge-tree add-tree add-another-tree make-file >actual &&
cat >expect <<-\EOF &&
- added in local
- our 100644 ba629238ca89489f2b350e196ca445e09d8bb834 dir/another
removed in remote
base 100644 43d5a8ed6ef6c00ff775008633f95787d088285d dir/path
our 100644 43d5a8ed6ef6c00ff775008633f95787d088285d dir/path
}
test_expect_success 'tar archive of empty tree is empty' '
- git archive --format=tar HEAD >empty.tar &&
+ git archive --format=tar HEAD: >empty.tar &&
make_dir extract &&
"$TAR" xf empty.tar -C extract &&
check_dir extract
'
test_expect_success 'mixed-success push returns error' '
- test_must_fail git push
+ test_must_fail git push origin :
'
test_expect_success 'check tracking branches updated correctly after push' '
) &&
(cd mirror-fetch/child &&
git branch -m renamed renamed2 &&
- git push parent
+ git push parent :
) &&
(cd mirror-fetch/parent &&
git rev-parse --verify renamed &&
test_expect_success 'push with matching heads' '
mk_test testrepo heads/master &&
- git push testrepo &&
+ git push testrepo : &&
check_push_result testrepo $the_commit heads/master
'
mk_test testrepo heads/master &&
git push testrepo : &&
git commit --amend -massaged &&
- git push --force testrepo &&
+ git push --force testrepo : &&
! check_push_result testrepo $the_commit heads/master &&
git reset --hard $the_commit
test_config remote.down.url down_repo &&
test_config branch.master.remote up &&
test_config remote.pushdefault down &&
+ test_config push.default matching &&
git push &&
check_push_result up_repo $the_first_commit heads/master &&
check_push_result down_repo $the_commit heads/master
git checkout master &&
test_config remote.there.url test2repo &&
test_config remote.there.pushurl testrepo &&
- git push there &&
+ git push there : &&
check_push_result testrepo $the_commit heads/master
'
test_config remote.down.url down_repo &&
test_config branch.master.remote up &&
test_config branch.master.pushremote down &&
+ test_config push.default matching &&
git push &&
check_push_result up_repo $the_first_commit heads/master &&
check_push_result side_repo $the_first_commit heads/master &&
cd testrepo &&
old_commit=$(git show-ref -s --verify refs/heads/master)
) &&
- git push --dry-run testrepo &&
+ git push --dry-run testrepo : &&
check_push_result testrepo $old_commit heads/master
'
test_expect_success 'push --prune' '
mk_test testrepo heads/master heads/second heads/foo heads/bar &&
- git push --prune testrepo &&
+ git push --prune testrepo : &&
check_push_result testrepo $the_commit heads/master &&
check_push_result testrepo $the_first_commit heads/second &&
! check_push_result testrepo $the_first_commit heads/foo heads/bar
git branch keep master &&
git push --mirror up &&
git branch -D keep &&
- git push up
+ git push up :
) &&
(
cd mirror &&
cd alice-work &&
echo more >file &&
git commit -a -m second &&
- git push ../alice-pub
+ git push ../alice-pub :
)
'
git pull ../alice-pub master &&
echo more bob >file &&
git commit -a -m third &&
- git push ../bob-pub
+ git push ../bob-pub :
) &&
# Check that the second commit by Alice is not sent
cd alice-work &&
echo more alice >file &&
git commit -a -m fourth &&
- git push ../alice-pub
+ git push ../alice-pub :
)
'
cd bob-work &&
echo yet more bob >file &&
git commit -a -m fifth &&
- git push ../bob-pub
+ git push ../bob-pub :
)
'
git commit -a -m sixth.2 &&
echo more and more alice >>file &&
git commit -a -m sixth.3 &&
- git push ../alice-pub
+ git push ../alice-pub :
)
'
git hash-object -t commit -w commit &&
echo even more bob >file &&
git commit -a -m seventh &&
- git push ../bob-pub
+ git push ../bob-pub :
)
'
(
cd gar/bage &&
git init &&
+ git config push.default matching &&
>junk &&
git add junk &&
git commit -m "Initial junk"
test_cmp expect actual
'
+test_expect_success 'create repo without http.receivepack set' '
+ cd "$ROOT_PATH" &&
+ git init half-auth &&
+ (
+ cd half-auth &&
+ test_commit one
+ ) &&
+ git clone --bare half-auth "$HTTPD_DOCUMENT_ROOT_PATH/half-auth.git"
+'
+
+test_expect_success 'clone via half-auth-complete does not need password' '
+ cd "$ROOT_PATH" &&
+ set_askpass wrong &&
+ git clone "$HTTPD_URL"/half-auth-complete/smart/half-auth.git \
+ half-auth-clone &&
+ expect_askpass none
+'
+
+test_expect_success 'push into half-auth-complete requires password' '
+ cd "$ROOT_PATH/half-auth-clone" &&
+ echo two >expect &&
+ test_commit two &&
+ set_askpass user@host &&
+ git push "$HTTPD_URL/half-auth-complete/smart/half-auth.git" &&
+ git --git-dir="$HTTPD_DOCUMENT_ROOT_PATH/half-auth.git" \
+ log -1 --format=%s >actual &&
+ expect_askpass both user@host &&
+ test_cmp expect actual
+'
+
stop_httpd
test_done
start_httpd
test_expect_success 'setup repository' '
+ git config push.default matching &&
echo content1 >file &&
git add file &&
git commit -m one
start_httpd
test_expect_success 'setup repository' '
+ git config push.default matching &&
echo content >file &&
git add file &&
git commit -m one
grep "not valid:" actual
'
+test_expect_success 'create namespaced refs' '
+ test_commit namespaced &&
+ git push public HEAD:refs/namespaces/ns/refs/heads/master &&
+ git --git-dir="$HTTPD_DOCUMENT_ROOT_PATH/repo.git" \
+ symbolic-ref refs/namespaces/ns/HEAD refs/namespaces/ns/refs/heads/master
+'
+
+test_expect_success 'smart clone respects namespace' '
+ git clone "$HTTPD_URL/smart_namespace/repo.git" ns-smart &&
+ echo namespaced >expect &&
+ git --git-dir=ns-smart/.git log -1 --format=%s >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'dumb clone via http-backend respects namespace' '
+ git --git-dir="$HTTPD_DOCUMENT_ROOT_PATH/repo.git" \
+ config http.getanyfile true &&
+ GIT_SMART_HTTP=0 git clone \
+ "$HTTPD_URL/smart_namespace/repo.git" ns-dumb &&
+ echo namespaced >expect &&
+ git --git-dir=ns-dumb/.git log -1 --format=%s >actual &&
+ test_cmp expect actual
+'
+
test -n "$GIT_TEST_LONG" && test_set_prereq EXPENSIVE
test_expect_success EXPENSIVE 'create 50,000 tags in the repo' '
start_git_daemon
test_expect_success 'setup repository' '
+ git config push.default matching &&
echo content >file &&
git add file &&
git commit -m one
grep "^-[0-9a-f]\\{40\\} " boundary
'
+test_expect_success 'prerequisites with an empty commit message' '
+ : >file1 &&
+ git add file1 &&
+ test_tick &&
+ git commit --allow-empty-message -m "" &&
+ test_commit file2 &&
+ git bundle create bundle HEAD^.. &&
+ git bundle verify bundle
+'
+
test_done
git bisect reset
"
+cat > expected.bisect-log <<EOF
+# bad: [32a594a3fdac2d57cf6d02987e30eec68511498c] Add <4: Ciao for now> into <hello>.
+# good: [7b7f204a749c3125d5224ed61ea2ae1187ad046f] Add <2: A new day for git> into <hello>.
+git bisect start '32a594a3fdac2d57cf6d02987e30eec68511498c' '7b7f204a749c3125d5224ed61ea2ae1187ad046f'
+# good: [3de952f2416b6084f557ec417709eac740c6818c] Add <3: Another new day for git> into <hello>.
+git bisect good 3de952f2416b6084f557ec417709eac740c6818c
+# first bad commit: [32a594a3fdac2d57cf6d02987e30eec68511498c] Add <4: Ciao for now> into <hello>.
+EOF
+
+test_expect_success 'bisect log: successful result' '
+ git bisect reset &&
+ git bisect start $HASH4 $HASH2 &&
+ git bisect good &&
+ git bisect log >bisect-log.txt &&
+ test_cmp expected.bisect-log bisect-log.txt &&
+ git bisect reset
+'
+
test_done
test_cmp expected actual
'
+test_expect_success '--log=5 with custom comment character' '
+ cat >expected <<-EOF &&
+ Merge branch ${apos}left${apos}
+
+ x By Another Author (3) and A U Thor (2)
+ x Via Another Committer
+ * left:
+ Left #5
+ Left #4
+ Left #3
+ Common #2
+ Common #1
+ EOF
+
+ git -c core.commentchar="x" fmt-merge-msg --log=5 <.git/FETCH_HEAD >actual &&
+ test_cmp expected actual
+'
+
test_expect_success 'merge.log=0 disables shortlog' '
echo "Merge branch ${apos}left${apos}" >expected
git -c merge.log=0 fmt-merge-msg <.git/FETCH_HEAD >actual &&
git log > ../../../expected
) &&
git commit -m "added subsubmodule" &&
- git push
+ git push origin :
) &&
(cd .git/modules/deeper/submodule/modules/subsubmodule &&
git log > ../../../../../actual
) &&
git add deeper/submodule &&
git commit -m "update submodule" &&
- git push &&
+ git push origin : &&
test_cmp actual expected
)
'
rm -rf "$CVSWORK" "$SERVERDIR"
test_expect_success 'setup' '
+ git config push.default matching &&
echo >empty &&
git add empty &&
git commit -q -m "First Commit" &&
rm -rf "$CVSWORK" "$SERVERDIR"
test_expect_success 'setup' '
+ git config push.default matching &&
echo "Simple text file" >textfile.c &&
echo "File with embedded NUL: Q <- there" | q_to_nul > binfile.bin &&
mkdir subdir &&
# Failure cases for config:
# Save and restore STDERR; we will probably extract this into a
# "dies_ok" method and possibly move the STDERR handling to Git.pm.
-open our $tmpstderr, ">&STDERR" or die "cannot save STDERR"; close STDERR;
+open our $tmpstderr, ">&STDERR" or die "cannot save STDERR";
+open STDERR, ">", "/dev/null" or die "cannot redirect STDERR to /dev/null";
is($r->config("test.dupstring"), "value2", "config: multivar");
eval { $r->config_bool("test.boolother") };
ok($@, "config_bool: non-boolean values fail");
local -a COMPREPLY _words
local _cword
_words=( $1 )
+ test "${1: -1}" == ' ' && _words+=('')
(( _cword = ${#_words[@]} - 1 ))
__git_wrap__git_main && print_comp
}
test_cmp expected out
}
+# Test __gitcomp_nl
+# Arguments are:
+# 1: current word (cur)
+# -: the rest are passed to __gitcomp_nl
+test_gitcomp_nl ()
+{
+ local -a COMPREPLY &&
+ sed -e 's/Z$//' >expected &&
+ cur="$1" &&
+ shift &&
+ __gitcomp_nl "$@" &&
+ print_comp &&
+ test_cmp expected out
+}
+
+invalid_variable_name='${foo.bar}'
+
test_expect_success '__gitcomp - trailing space - options' '
test_gitcomp "--re" "--dry-run --reuse-message= --reedit-message=
--reset-author" <<-EOF
EOF
'
+test_expect_success '__gitcomp - doesnt fail because of invalid variable name' '
+ __gitcomp "$invalid_variable_name"
+'
+
+read -r -d "" refs <<-\EOF
+maint
+master
+next
+pu
+EOF
+
+test_expect_success '__gitcomp_nl - trailing space' '
+ test_gitcomp_nl "m" "$refs" <<-EOF
+ maint Z
+ master Z
+ EOF
+'
+
+test_expect_success '__gitcomp_nl - prefix' '
+ test_gitcomp_nl "--fixup=m" "$refs" "--fixup=" "m" <<-EOF
+ --fixup=maint Z
+ --fixup=master Z
+ EOF
+'
+
+test_expect_success '__gitcomp_nl - suffix' '
+ test_gitcomp_nl "branch.ma" "$refs" "branch." "ma" "." <<-\EOF
+ branch.maint.Z
+ branch.master.Z
+ EOF
+'
+
+test_expect_success '__gitcomp_nl - no suffix' '
+ test_gitcomp_nl "ma" "$refs" "" "ma" "" <<-\EOF
+ maintZ
+ masterZ
+ EOF
+'
+
+test_expect_success '__gitcomp_nl - doesnt fail because of invalid variable name' '
+ __gitcomp_nl "$invalid_variable_name"
+'
+
test_expect_success 'basic' '
- run_completion "git \"\"" &&
+ run_completion "git " &&
# built-in
grep -q "^add \$" out &&
# script
EOF
'
-test_expect_failure 'complete tree filename with metacharacters' '
+test_expect_success 'complete tree filename with metacharacters' '
echo content >"name with \${meta}" &&
git add . &&
git commit -m meta &&
'
test_expect_success 'gitdir - .git directory in parent' '
- echo "$TRASH_DIRECTORY/.git" > expected &&
+ echo "$(pwd -P)/.git" > expected &&
(
cd subdir/subsubdir &&
__gitdir > "$actual"
'
test_expect_success 'gitdir - parent is a .git directory' '
- echo "$TRASH_DIRECTORY/.git" > expected &&
+ echo "$(pwd -P)/.git" > expected &&
(
cd .git/refs/heads &&
__gitdir > "$actual"
'
test_expect_success 'gitdir - gitfile in cwd' '
- echo "$TRASH_DIRECTORY/otherrepo/.git" > expected &&
+ echo "$(pwd -P)/otherrepo/.git" > expected &&
echo "gitdir: $TRASH_DIRECTORY/otherrepo/.git" > subdir/.git &&
test_when_finished "rm -f subdir/.git" &&
(
'
test_expect_success 'gitdir - gitfile in parent' '
- echo "$TRASH_DIRECTORY/otherrepo/.git" > expected &&
+ echo "$(pwd -P)/otherrepo/.git" > expected &&
echo "gitdir: $TRASH_DIRECTORY/otherrepo/.git" > subdir/.git &&
test_when_finished "rm -f subdir/.git" &&
(
'
test_expect_success SYMLINKS 'gitdir - resulting path avoids symlinks' '
- echo "$TRASH_DIRECTORY/otherrepo/.git" > expected &&
+ echo "$(pwd -P)/otherrepo/.git" > expected &&
mkdir otherrepo/dir &&
test_when_finished "rm -rf otherrepo/dir" &&
ln -s otherrepo/dir link &&
fi
# Test repository
-test="trash directory.$(basename "$0" .sh)"
-test -n "$root" && test="$root/$test"
-case "$test" in
-/*) TRASH_DIRECTORY="$test" ;;
- *) TRASH_DIRECTORY="$TEST_OUTPUT_DIRECTORY/$test" ;;
+TRASH_DIRECTORY="trash directory.$(basename "$0" .sh)"
+test -n "$root" && TRASH_DIRECTORY="$root/$TRASH_DIRECTORY"
+case "$TRASH_DIRECTORY" in
+/*) ;; # absolute path is good
+ *) TRASH_DIRECTORY="$TEST_OUTPUT_DIRECTORY/$TRASH_DIRECTORY" ;;
esac
test ! -z "$debug" || remove_trash=$TRASH_DIRECTORY
-rm -fr "$test" || {
+rm -fr "$TRASH_DIRECTORY" || {
GIT_EXIT_OK=t
echo >&5 "FATAL: Cannot prepare test area"
exit 1
if test -z "$TEST_NO_CREATE_REPO"
then
- test_create_repo "$test"
+ test_create_repo "$TRASH_DIRECTORY"
else
- mkdir -p "$test"
+ mkdir -p "$TRASH_DIRECTORY"
fi
# Use -P to resolve symlinks in our working directory so that the cwd
# in subprocesses like git equals our $PWD (for pathname comparisons).
-cd -P "$test" || exit 1
+cd -P "$TRASH_DIRECTORY" || exit 1
this_test=${0##*/}
this_test=${this_test%%-*}
#include "git-compat-util.h"
#include "cache.h"
-static int dying;
-
void vreportf(const char *prefix, const char *err, va_list params)
{
char msg[4096];
vreportf("warning: ", warn, params);
}
+static int die_is_recursing_builtin(void)
+{
+ static int dying;
+ return dying++;
+}
+
/* If we are in a dlopen()ed .so write to a global variable would segfault
* (ugh), so keep things static. */
static NORETURN_PTR void (*usage_routine)(const char *err, va_list params) = usage_builtin;
static NORETURN_PTR void (*die_routine)(const char *err, va_list params) = die_builtin;
static void (*error_routine)(const char *err, va_list params) = error_builtin;
static void (*warn_routine)(const char *err, va_list params) = warn_builtin;
+static int (*die_is_recursing)(void) = die_is_recursing_builtin;
void set_die_routine(NORETURN_PTR void (*routine)(const char *err, va_list params))
{
error_routine = routine;
}
+void set_die_is_recursing_routine(int (*routine)(void))
+{
+ die_is_recursing = routine;
+}
+
void NORETURN usagef(const char *err, ...)
{
va_list params;
{
va_list params;
- if (dying) {
+ if (die_is_recursing()) {
fputs("fatal: recursion detected in die handler\n", stderr);
exit(128);
}
- dying = 1;
va_start(params, err);
die_routine(err, params);
char str_error[256], *err;
int i, j;
- if (dying) {
+ if (die_is_recursing()) {
fputs("fatal: recursion detected in die_errno handler\n",
stderr);
exit(128);
}
- dying = 1;
err = strerror(errno);
for (i = j = 0; err[i] && j < sizeof(str_error) - 1; ) {
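
With the change above, the single static "dying" flag becomes a pluggable die_is_recursing() callback, so a program that may call die() from more than one thread can install a per-thread check: one thread giving up no longer makes a later die() in another thread look like recursion. The following stand-alone sketch (toy names, simplified die(); not Git's code) shows such a per-thread check built on pthread thread-specific data:

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    static int (*die_is_recursing)(void);

    static void toy_die(const char *msg)
    {
    	if (die_is_recursing()) {
    		fputs("fatal: recursion detected in die handler\n", stderr);
    		exit(128);
    	}
    	fprintf(stderr, "fatal: %s\n", msg);
    	pthread_exit(NULL); /* only the calling thread dies in this sketch */
    }

    static pthread_key_t dying_key;

    /* per-thread flag instead of a single static "dying" variable */
    static int per_thread_die_is_recursing(void)
    {
    	void *already = pthread_getspecific(dying_key);

    	pthread_setspecific(dying_key, (void *)1);
    	return already != NULL;
    }

    static void *worker(void *arg)
    {
    	toy_die(arg);
    	return NULL;
    }

    int main(void)
    {
    	pthread_t a, b;

    	pthread_key_create(&dying_key, NULL);
    	die_is_recursing = per_thread_die_is_recursing;

    	pthread_create(&a, NULL, worker, "thread A gives up");
    	pthread_create(&b, NULL, worker, "thread B gives up");
    	pthread_join(a, NULL);
    	pthread_join(b, NULL);
    	return 0;
    }

Built with -pthread, both threads report their own fatal message and neither trips the bogus "recursion detected" path, which is the behaviour the new set_die_is_recursing_routine() hook makes possible.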
* Use default 15 bits, +16 is to generate gzip header/trailer
* instead of the zlib wrapper.
*/
- return do_git_deflate_init(strm, level, 15 + 16);
+ do_git_deflate_init(strm, level, 15 + 16);
}
void git_deflate_init_raw(git_zstream *strm, int level)
* Use default 15 bits, negate the value to get raw compressed
* data without zlib header and trailer.
*/
- return do_git_deflate_init(strm, level, -15);
+ do_git_deflate_init(strm, level, -15);
}
int git_deflate_abort(git_zstream *strm)