--- /dev/null
+Git v2.3.8 Release Notes
+========================
+
+Fixes since v2.3.7
+------------------
+
+ * The usual "git diff" when seeing a file turning into a directory
+ showed a patchset to remove the file and create all files in the
+ directory, but "git diff --no-index" simply refused to work. Also,
+ when asked to compare a file and a directory, imitate POSIX "diff"
+ and compare the file with the file with the same name in the
+ directory, instead of refusing to run.
+
+ * The default $HOME/.gitconfig file, created when "git config --global"
+   is asked to edit it, had incorrectly spelled user.name and user.email
+   entries in it.
+
+ * "git commit --date=now" or anything that relies on approxidate lost
+ the daylight-saving-time offset.
+
+Also contains typofixes, documentation updates and trivial code
+clean-ups.
--- /dev/null
+Git v2.4.1 Release Notes
+========================
+
+Fixes since v2.4
+----------------
+
+ * The usual "git diff" when seeing a file turning into a directory
+ showed a patchset to remove the file and create all files in the
+ directory, but "git diff --no-index" simply refused to work. Also,
+ when asked to compare a file and a directory, imitate POSIX "diff"
+ and compare the file with the file with the same name in the
+ directory, instead of refusing to run.
+
+ * The default $HOME/.gitconfig file, created when "git config --global"
+   is asked to edit it, had incorrectly spelled user.name and user.email
+   entries in it.
+
+ * "git commit --date=now" or anything that relies on approxidate lost
+ the daylight-saving-time offset.
+
+ * "git cat-file bl $blob" failed to barf even though there is no
+ object type that is "bl".
+
+ * Teach the codepaths that read .gitignore and .gitattributes files
+   that these files encoded in UTF-8 may have a UTF-8 BOM marker at the
+   beginning; this brings them in line with what we do for configuration
+   files already.
+
+ * Access to objects in repositories that borrow from another one on a
+ slow NFS server unnecessarily got more expensive due to recent code
+ becoming more cautious in a naive way not to lose objects to pruning.
+
+ * We avoid setting core.worktree when the repository location is the
+ ".git" directory directly at the top level of the working tree, but
+ the code misdetected the case in which the working tree is at the
+ root level of the filesystem (which arguably is a silly thing to
+ do, but still valid).
+
+Also contains typofixes, documentation updates and trivial code
+clean-ups.
--- /dev/null
+Git v2.4.2 Release Notes
+========================
+
+Fixes since v2.4.1
+------------------
+
+ * "git rev-list --objects $old --not --all" to see if everything that
+ is reachable from $old is already connected to the existing refs
+ was very inefficient.
+
+ * "hash-object --literally" introduced in v2.2 was not prepared to
+ take a really long object type name.
+
+ * "git rebase --quiet" was not quite quiet when there is nothing to
+ do.
+
+ * The completion for "log --decorate=" parameter value was incorrect.
+
+ * "filter-branch" corrupted commit log message that ends with an
+ incomplete line on platforms with some "sed" implementations that
+ munge such a line. Work it around by avoiding to use "sed".
+
+ * "git daemon" fails to build from the source under NO_IPV6
+ configuration (regression in 2.4).
+
+ * "git stash pop/apply" forgot to make sure that not just the working
+ tree is clean but also the index is clean. The latter is important
+ as a stash application can conflict and the index will be used for
+ conflict resolution.
+
+ * We have prepended $GIT_EXEC_PATH and the path "git" is installed in
+ (typically "/usr/bin") to $PATH when invoking subprograms and hooks
+ for almost eternity, but the original use case the latter tried to
+ support was semi-bogus (i.e. install git to /opt/foo/git and run it
+ without having /opt/foo on $PATH), and more importantly it has
+ become less and less relevant as Git grew more mainstream (i.e. the
+ users would _want_ to have it on their $PATH). Stop prepending the
+ path in which "git" is installed to users' $PATH, as that would
+   interfere with the command search order people depend on (e.g. they may
+ not like versions of programs that are unrelated to Git in /usr/bin
+ and want to override them by having different ones in /usr/local/bin
+ and have the latter directory earlier in their $PATH).
+
+Also contains typofixes, documentation updates and trivial code
+clean-ups.
--- /dev/null
+Git v2.4.3 Release Notes
+========================
+
+Fixes since v2.4.2
+------------------
+
+ * Error messages from "git branch" referred to remote-tracking
+   branches as "remote branches".
+
+ * "git rerere forget" in a repository without rerere enabled gave a
+ cryptic error message; it should be a silent no-op instead.
+
+ * "git pull --log" and "git pull --no-log" worked as expected, but
+ "git pull --log=20" did not.
+
+ * The pull.ff configuration was supposed to override the merge.ff
+ configuration, but it didn't.
+
+ * The code to read pack-bitmap wanted to allocate a few hundred
+   pointers to a structure, but by mistake allocated and leaked enough
+   memory to hold that many actual structures. Correct the allocation
+ size and also have it on stack, as it is small enough.
+
+ * Various documentation mark-up fixes to make the output more
+ consistent in general and also make AsciiDoctor (an alternative
+ formatter) happier.
+
+ * "git bundle verify" did not diagnose extra parameters on the
+ command line.
+
+ * The multi-ref transaction support we merged a few releases ago
+   unnecessarily kept many file descriptors open, risking failure due
+   to resource exhaustion.
+
+ * The ref API did not handle cases where 'refs/heads/xyzzy/frotz' is
+ removed at the same time as 'refs/heads/xyzzy' is added (or vice
+ versa) very well.
+
+ * The "log --decorate" enhancement in Git 2.4 that shows the commit
+ at the tip of the current branch e.g. "HEAD -> master", did not
+ work with --decorate=full.
+
+ * There was a commented-out (instead of being marked to expect
+ failure) test that documented a breakage that was fixed since the
+ test was written; turn it into a proper test.
+
+ * core.excludesfile (defaulting to $XDG_CONFIG_HOME/git/ignore) is supposed
+ to be overridden by repository-specific .git/info/exclude file, but
+ the order was swapped from the beginning. This belatedly fixes it.
+
+ * The connection initiation code for "ssh" transport tried to absorb
+ differences between the stock "ssh" and Putty-supplied "plink" and
+   its derivatives, but the logic to tell that we are using a "plink"
+   variant was too loose and falsely triggered when "plink" appeared
+ anywhere in the path (e.g. "/home/me/bin/uplink/ssh").
+
+ * "git rebase -i" moved the "current" command from "todo" to "done" a
+ bit too prematurely, losing a step when a "pick" did not even start.
+
+ * "git add -e" did not allow the user to abort the operation by
+ killing the editor.
+
+ * Git 2.4 broke setting verbosity and progress levels on "git clone"
+ with native transports.
+
+ * Some time ago, "git blame" (incorrectly) lost the convert_to_git()
+ call when synthesizing a fake "tip" commit that represents the
+ state in the working tree, which broke folks who record the history
+   with LF line endings to make their project portable across
+ platforms while terminating lines in their working tree files with
+ CRLF for their platform.
+
+ * Code clean-up for xdg configuration path support.
+
+Also contains typofixes, documentation updates and trivial code
+clean-ups.
--- /dev/null
+Git v2.4.4 Release Notes
+========================
+
+Fixes since v2.4.3
+------------------
+
+ * l10n updates for German.
+
+ * An earlier leakfix to bitmap testing code was incomplete.
+
+ * "git clean pathspec..." tried to lstat(2) and complain even for
+ paths outside the given pathspec.
+
+ * Communication between the HTTP server and http_backend process can
+ lead to a dead-lock when relaying a large ref negotiation request.
+ Diagnose the situation better, and mitigate it by reading such a
+ request first into core (to a reasonable limit).
+
+ * The clean/smudge interface did not work well when filtering empty
+   content (it failed and then passed the empty input through).
+ It can be argued that a filter that produces anything but empty for
+ an empty input is nonsense, but if the user wants to do strange
+ things, then why not?
+
+ * Make "git stash something --help" error out, so that users can
+ safely say "git stash drop --help".
+
+ * Clarify that "log --raw" and "log --format=raw" are unrelated
+ concepts.
+
+ * Catch the programmer mistake of feeding a pointer, not an array,
+   to the ARRAY_SIZE() macro by using a couple of GCC extensions.
+
+Also contains typofixes, documentation updates and trivial code
+clean-ups.
Include additional statistics at the end of blame output.
-L <start>,<end>::
--L :<regex>::
+-L :<funcname>::
Annotate only the given line range. May be specified multiple times.
Overlapping ranges are allowed.
+
color.diff.<slot>::
Use customized color for diff colorization. `<slot>` specifies
which part of the patch to use the specified color, and is one
- of `plain` (context text), `meta` (metainformation), `frag`
+ of `context` (context text - `plain` is a historical synonym),
+ `meta` (metainformation), `frag`
(hunk header), 'func' (function in hunk header), `old` (removed lines),
`new` (added lines), `commit` (commit headers), or `whitespace`
(highlighting whitespace errors).
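The slot names map directly onto configuration keys; as a small sketch
(the color values are arbitrary choices, not defaults, and the `context`
name assumes a Git new enough to know it):

----
$ git config --global color.diff.context cyan       # `plain` would address the same slot
$ git config --global color.diff.old     "red bold"
$ git config --global color.diff.new     "green bold"
----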
a case (equivalent to giving the `--no-ff` option from the command
line). When set to `only`, only such fast-forward merges are
allowed (equivalent to giving the `--ff-only` option from the
- command line).
+ command line). This setting overrides `merge.ff` when pulling.
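As a sketch of the interaction described above (the values are chosen
purely for illustration), a repository could insist on fast-forwards
for pulls while still recording merge commits for local merges:

----
$ git config merge.ff false   # "git merge" always creates a merge commit
$ git config pull.ff  only    # but "git pull" accepts only fast-forward updates
----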
pull.rebase::
When true, rebase branches on top of the fetched branch, instead
remote.<name>.receivepack::
The default program to execute on the remote side when pushing. See
- option \--receive-pack of linkgit:git-push[1].
+ option --receive-pack of linkgit:git-push[1].
remote.<name>.uploadpack::
The default program to execute on the remote side when fetching. See
- option \--upload-pack of linkgit:git-fetch-pack[1].
+ option --upload-pack of linkgit:git-fetch-pack[1].
remote.<name>.tagOpt::
- Setting this value to \--no-tags disables automatic tag following when
- fetching from remote <name>. Setting it to \--tags will fetch every
+ Setting this value to --no-tags disables automatic tag following when
+ fetching from remote <name>. Setting it to --tags will fetch every
tag from remote <name>, even if they are not reachable from remote
branch heads. Passing these flags directly to linkgit:git-fetch[1] can
- override this setting. See options \--tags and \--no-tags of
+ override this setting. See options --tags and --no-tags of
linkgit:git-fetch[1].
remote.<name>.vcs::
Any diff-generating command can take the `-c` or `--cc` option to
produce a 'combined diff' when showing a merge. This is the default
format when showing merges with linkgit:git-diff[1] or
-linkgit:git-show[1]. Note also that you can give the `-m' option to any
+linkgit:git-show[1]. Note also that you can give the `-m` option to any
of these commands to force generation of diffs with individual parents
of a merge.
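For example, assuming `HEAD` is a merge commit, the two forms can be
contrasted like this:

----
$ git show --cc HEAD   # one combined diff against all parents
$ git show -m HEAD     # a separate diff against each individual parent
----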
-u::
--patch::
Generate patch (see section on generating patches).
- {git-diff? This is the default.}
+ifdef::git-diff[]
+ This is the default.
+endif::git-diff[]
endif::git-format-patch[]
-s::
ifndef::git-format-patch[]
--raw::
- Generate the raw format.
- {git-diff-core? This is the default.}
+ifndef::git-log[]
+ Generate the diff in raw format.
+ifdef::git-diff-core[]
+ This is the default.
+endif::git-diff-core[]
+endif::git-log[]
+ifdef::git-log[]
+ For each commit, show a summary of changes using the raw diff
+ format. See the "RAW OUTPUT FORMAT" section of
+ linkgit:git-diff[1]. This is different from showing the log
+ itself in raw format, which you can achieve with
+ `--format=raw`.
+endif::git-log[]
endif::git-format-patch[]
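A quick way to see the distinction, sketched with no particular
repository in mind:

----
$ git log --raw          # per-commit change summary in the raw *diff* format
$ git log --format=raw   # the commits themselves printed in raw format
----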
ifndef::git-format-patch[]
initial command menu and directly jumps to the `patch` subcommand.
See ``Interactive mode'' for details.
--e, \--edit::
+-e::
+--edit::
Open the diff vs. the index in an editor and let the user
edit it. After the editor was closed, adjust the hunk headers
and apply the patch to the index.
--reset-author::
 When used with -C/-c/--amend options, or when committing after
 a conflicting cherry-pick, declare that the authorship of the
- resulting commit now belongs of the committer. This also renews
+ resulting commit now belongs to the committer. This also renews
the author timestamp.
--short::
+
--
strip::
- Strip leading and trailing empty lines, trailing whitespace, and
- #commentary and collapse consecutive empty lines.
+ Strip leading and trailing empty lines, trailing whitespace,
+ commentary and collapse consecutive empty lines.
whitespace::
Same as `strip` except #commentary is not removed.
verbatim::
--verbose::
Show unified diff between the HEAD commit and what
would be committed at the bottom of the commit message
- template. Note that this diff output doesn't have its
- lines prefixed with '#'.
+	template to help the user describe the commit by reminding
+	them what changes the commit has.
+ Note that this diff output doesn't have its
+ lines prefixed with '#'. This diff will not be a part
+ of the commit message.
+
If specified twice, show in addition the unified diff between
what would be committed and the worktree files, i.e. the unstaged
--file=<path>::
- Use `<path>` to store credentials. The file will have its
+	Use `<path>` to look up and store credentials. The file will have its
filesystem permissions set to prevent other users on the system
from reading it, but will not be encrypted or otherwise
- protected. Defaults to `~/.git-credentials`.
+ protected. If not specified, credentials will be searched for from
+ `~/.git-credentials` and `$XDG_CONFIG_HOME/git/credentials`, and
+ credentials will be written to `~/.git-credentials` if it exists, or
+ `$XDG_CONFIG_HOME/git/credentials` if it exists and the former does
+ not. See also <<FILES>>.
+
+[[FILES]]
+FILES
+-----
+
+If not set explicitly with '--file', there are two files where
+git-credential-store will search for credentials in order of precedence:
+
+~/.git-credentials::
+ User-specific credentials file.
+
+$XDG_CONFIG_HOME/git/credentials::
+ Second user-specific credentials file. If '$XDG_CONFIG_HOME' is not set
+ or empty, `$HOME/.config/git/credentials` will be used. Any credentials
+ stored in this file will not be used if `~/.git-credentials` has a
+ matching credential as well. It is a good idea not to create this file
+ if you sometimes use older versions of Git that do not support it.
+
+For credential lookups, the files are read in the order given above, with the
+first matching credential found taking precedence over credentials found in
+files further down the list.
+
+Credential storage will by default write to the first existing file in the
+list. If none of these files exist, `~/.git-credentials` will be created and
+written to.
+
+When erasing credentials, matching credentials will be erased from all files.
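+To illustrate the precedence rules, here is a minimal sketch using
+made-up credentials and hosts:
+
+----
+$ cat ~/.git-credentials
+https://alice:token-one@example.com
+
+$ cat "${XDG_CONFIG_HOME:-$HOME/.config}/git/credentials"
+https://alice:token-two@example.com
+----
+
+A lookup for `https://example.com` returns `token-one`, because
+`~/.git-credentials` is read first; erasing that credential removes
+the matching entry from both files.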
EXAMPLES
--------
have been completed, or to save the marks table across
incremental runs. As <file> is only opened and truncated
at completion, the same path can also be safely given to
- \--import-marks.
+ --import-marks.
The file will not be written if no new object has been
marked/exported.
--import-marks=<file>::
Before processing any input, load the marks specified in
<file>. The input file must exist, must be readable, and
- must use the same format as produced by \--export-marks.
+ must use the same format as produced by --export-marks.
+
Any commits that have already been marked will not be exported again.
-If the backend uses a similar \--import-marks file, this allows for
+If the backend uses a similar --import-marks file, this allows for
incremental bidirectional exporting of the repository by keeping the
marks the same across runs.
--quiet::
Disable all non-fatal output, making fast-import silent when it
is successful. This option disables the output shown by
- \--stats.
+ --stats.
--stats::
Display some basic statistics about the objects fast-import has
created, the packfiles they were stored into, and the
memory used by fast-import during this run. Showing this output
- is currently the default, but can be disabled with \--quiet.
+ is currently the default, but can be disabled with --quiet.
Options for Frontends
~~~~~~~~~~~~~~~~~~~~~
have been completed, or to save the marks table across
incremental runs. As <file> is only opened and truncated
at checkpoint (or completion) the same path can also be
- safely given to \--import-marks.
+ safely given to --import-marks.
--import-marks=<file>::
Before processing any input, load the marks specified in
<file>. The input file must exist, must be readable, and
- must use the same format as produced by \--export-marks.
+ must use the same format as produced by --export-marks.
Multiple options may be supplied to import more than one
set of marks. If a mark is defined to different values,
the last file wins.
prints a warning message. fast-import will always attempt to update all
branch refs, and does not stop on the first failure.
-Branch updates can be forced with \--force, but it's recommended that
-this only be used on an otherwise quiet repository. Using \--force
+Branch updates can be forced with --force, but it's recommended that
+this only be used on an otherwise quiet repository. Using --force
is not necessary for an initial import into an empty repository.
~~~~~~~~~~~~
The following date formats are supported. A frontend should select
the format it will use for this import by passing the format name
-in the \--date-format=<fmt> command-line option.
+in the --date-format=<fmt> command-line option.
`raw`::
This is the Git native format and is `<time> SP <offutc>`.
- It is also fast-import's default format, if \--date-format was
+ It is also fast-import's default format, if --date-format was
not specified.
+
The time of the event is specified by `<time>` as the number of
of bytes, except `LT`, `GT` and `LF`. `<name>` is typically UTF-8 encoded.
The time of the change is specified by `<when>` using the date format
-that was selected by the \--date-format=<fmt> command-line option.
+that was selected by the --date-format=<fmt> command-line option.
See ``Date Formats'' above for the set of supported formats, and
their syntax.
See `filemodify` above for a detailed description of `<path>`.
`filecopy`
-^^^^^^^^^^^^
+^^^^^^^^^^
Recursively copies an existing file or subdirectory to a different
location within the branch. The existing file or directory must
exist. If the destination exists it will be completely replaced
....
Note that fast-import automatically switches packfiles when the current
-packfile reaches \--max-pack-size, or 4 GiB, whichever limit is
+packfile reaches --max-pack-size, or 4 GiB, whichever limit is
smaller. During an automatic packfile switch fast-import does not update
the branch refs, tags or marks.
Use One Mark Per Commit
~~~~~~~~~~~~~~~~~~~~~~~
When doing a repository conversion, use a unique mark per commit
-(`mark :<n>`) and supply the \--export-marks option on the command
+(`mark :<n>`) and supply the --export-marks option on the command
line. fast-import will dump a file which lists every mark and the Git
object SHA-1 that corresponds to it. If the frontend can tie
the marks back to the source repository, it is easy to verify the
However repacking the repository is necessary to improve data
locality and access performance. It can also take hours on extremely
-large projects (especially if -f and a large \--window parameter is
+large projects (especially if -f and a large --window parameter is
used). Since repacking is safe to run alongside readers and writers,
run the repack in the background and let it finish when it finishes.
There is no reason to wait to explore your new Git project!
~~~~~~~~~~~~~~~~~~~~~~~~~
If you are repacking very old imported data (e.g. older than the
last year), consider expending some extra CPU time and supplying
-\--window=50 (or higher) when you run 'git repack'.
+--window=50 (or higher) when you run 'git repack'.
This will take longer, but will also produce a smaller packfile.
You only need to expend the effort once, and everyone using your
project will benefit from the smaller repository.
fast-import automatically moves active branches to inactive status based on
a simple least-recently-used algorithm. The LRU chain is updated on
each `commit` command. The maximum number of active branches can be
-increased or decreased on the command line with \--active-branches=.
+increased or decreased on the command line with --active-branches=.
per active tree
~~~~~~~~~~~~~~~
the things up in .bash_profile).
--exec=<git-upload-pack>::
- Same as \--upload-pack=<git-upload-pack>.
+ Same as --upload-pack=<git-upload-pack>.
--depth=<n>::
Limit fetching to ancestor-chains not longer than n.
SYNOPSIS
--------
[verse]
-'git hash-object' [-t <type>] [-w] [--path=<file>|--no-filters] [--stdin] [--] <file>...
+'git hash-object' [-t <type>] [-w] [--path=<file>|--no-filters] [--stdin [--literally]] [--] <file>...
'git hash-object' [-t <type>] [-w] --stdin-paths [--no-filters] < <list-of-paths>
DESCRIPTION
Hash the contents as is, ignoring any input filter that would
have been chosen by the attributes mechanism, including the end-of-line
conversion. If the file is read from standard input then this
- is always implied, unless the --path option is given.
+ is always implied, unless the `--path` option is given.
+
+--literally::
+ Allow `--stdin` to hash any garbage into a loose object which might not
+ otherwise pass standard object parsing or git-fsck checks. Useful for
+ stress-testing Git itself or reproducing characteristics of corrupt or
+ bogus objects encountered in the wild.
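+
+For example (the type name `bogus` below is deliberately not a valid
+object type):
+
+----
+$ echo 'hello' | git hash-object -t bogus --literally -w --stdin
+----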
GIT
---
EXAMPLES
--------
-All of the following examples map 'http://$hostname/git/foo/bar.git'
-to '/var/www/git/foo/bar.git'.
+All of the following examples map `http://$hostname/git/foo/bar.git`
+to `/var/www/git/foo/bar.git`.
Apache 2.x::
Ensure mod_cgi, mod_alias, and mod_env are enabled, set
'git-http-backend' to bypass the check for the "git-daemon-export-ok"
file in each repository before allowing export of that repository.
+The `GIT_HTTP_MAX_REQUEST_BUFFER` environment variable (or the
+`http.maxRequestBuffer` config variable) may be set to change the
+largest ref negotiation request that git will handle during a fetch; any
+fetch requiring a larger buffer will not succeed. This value should not
+normally need to be changed, but may be helpful if you are fetching from
+a repository with an extremely large number of refs. The value can be
+specified with a unit (e.g., `100M` for 100 megabytes). The default is
+10 megabytes.
+
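+For example, an administrator serving a repository with a very large
+number of refs might raise the limit (the value below is arbitrary):
+
+----
+$ git config http.maxRequestBuffer 100M
+# or, equivalently, in the web server's environment:
+$ export GIT_HTTP_MAX_REQUEST_BUFFER=100M
+----
+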
The backend process sets GIT_COMMITTER_NAME to '$REMOTE_USER' and
GIT_COMMITTER_EMAIL to '$\{REMOTE_USER}@http.$\{REMOTE_ADDR\}',
ensuring that any reflogs created by 'git-receive-pack' contain some
output by allowing them to allocate space in advance.
-L <start>,<end>:<file>::
--L :<regex>:<file>::
+-L :<funcname>:<file>::
Trace the evolution of the line range given by "<start>,<end>"
- (or the funcname regex <regex>) within the <file>. You may
+ (or the function name regex <funcname>) within the <file>. You may
not give any pathspec limiters. This is currently limited to
a walk starting from a single revision, i.e., you may only
give zero or one positive revision arguments.
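For example, to follow the history of a single function (the function
and file names here are only an illustration):

----
$ git log -L :parse_commit:commit.c
----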
--shallow::
Optimize a pack that will be provided to a client with a shallow
- repository. This option, combined with \--thin, can result in a
+ repository. This option, combined with --thin, can result in a
smaller pack at the cost of speed.
--delta-base-offset::
--[no-]verify::
Toggle the pre-push hook (see linkgit:githooks[5]). The
- default is \--verify, giving the hook a chance to prevent the
- push. With \--no-verify, the hook is bypassed completely.
+ default is --verify, giving the hook a chance to prevent the
+ push. With --no-verify, the hook is bypassed completely.
include::urls-remotes.txt[]
If the upstream branch already contains a change you have made (e.g.,
because you mailed a patch which was applied upstream), then that commit
will be skipped. For example, running `git rebase master` on the
-following history (in which A' and A introduce the same set of changes,
+following history (in which `A'` and `A` introduce the same set of changes,
but have different committer information):
------------
SYNOPSIS
--------
[verse]
-'git rev-list' [ \--max-count=<number> ]
- [ \--skip=<number> ]
- [ \--max-age=<timestamp> ]
- [ \--min-age=<timestamp> ]
- [ \--sparse ]
- [ \--merges ]
- [ \--no-merges ]
- [ \--min-parents=<number> ]
- [ \--no-min-parents ]
- [ \--max-parents=<number> ]
- [ \--no-max-parents ]
- [ \--first-parent ]
- [ \--remove-empty ]
- [ \--full-history ]
- [ \--not ]
- [ \--all ]
- [ \--branches[=<pattern>] ]
- [ \--tags[=<pattern>] ]
- [ \--remotes[=<pattern>] ]
- [ \--glob=<glob-pattern> ]
- [ \--ignore-missing ]
- [ \--stdin ]
- [ \--quiet ]
- [ \--topo-order ]
- [ \--parents ]
- [ \--timestamp ]
- [ \--left-right ]
- [ \--left-only ]
- [ \--right-only ]
- [ \--cherry-mark ]
- [ \--cherry-pick ]
- [ \--encoding=<encoding> ]
- [ \--(author|committer|grep)=<pattern> ]
- [ \--regexp-ignore-case | -i ]
- [ \--extended-regexp | -E ]
- [ \--fixed-strings | -F ]
- [ \--date=(local|relative|default|iso|iso-strict|rfc|short) ]
- [ [ \--objects | \--objects-edge | \--objects-edge-aggressive ]
- [ \--unpacked ] ]
- [ \--pretty | \--header ]
- [ \--bisect ]
- [ \--bisect-vars ]
- [ \--bisect-all ]
- [ \--merge ]
- [ \--reverse ]
- [ \--walk-reflogs ]
- [ \--no-walk ] [ \--do-walk ]
- [ \--use-bitmap-index ]
+'git rev-list' [ --max-count=<number> ]
+ [ --skip=<number> ]
+ [ --max-age=<timestamp> ]
+ [ --min-age=<timestamp> ]
+ [ --sparse ]
+ [ --merges ]
+ [ --no-merges ]
+ [ --min-parents=<number> ]
+ [ --no-min-parents ]
+ [ --max-parents=<number> ]
+ [ --no-max-parents ]
+ [ --first-parent ]
+ [ --remove-empty ]
+ [ --full-history ]
+ [ --not ]
+ [ --all ]
+ [ --branches[=<pattern>] ]
+ [ --tags[=<pattern>] ]
+ [ --remotes[=<pattern>] ]
+ [ --glob=<glob-pattern> ]
+ [ --ignore-missing ]
+ [ --stdin ]
+ [ --quiet ]
+ [ --topo-order ]
+ [ --parents ]
+ [ --timestamp ]
+ [ --left-right ]
+ [ --left-only ]
+ [ --right-only ]
+ [ --cherry-mark ]
+ [ --cherry-pick ]
+ [ --encoding=<encoding> ]
+ [ --(author|committer|grep)=<pattern> ]
+ [ --regexp-ignore-case | -i ]
+ [ --extended-regexp | -E ]
+ [ --fixed-strings | -F ]
+ [ --date=(local|relative|default|iso|iso-strict|rfc|short) ]
+ [ [ --objects | --objects-edge | --objects-edge-aggressive ]
+ [ --unpacked ] ]
+ [ --pretty | --header ]
+ [ --bisect ]
+ [ --bisect-vars ]
+ [ --bisect-all ]
+ [ --merge ]
+ [ --reverse ]
+ [ --walk-reflogs ]
+ [ --no-walk ] [ --do-walk ]
+ [ --use-bitmap-index ]
<commit>... [ \-- <paths>... ]
DESCRIPTION
+
If you want to make sure that the output actually names an object in
your object database and/or can be used as a specific type of object
-you require, you can add "\^{type}" peeling operator to the parameter.
+you require, you can add the `^{type}` peeling operator to the parameter.
For example, `git rev-parse "$VAR^{commit}"` will make sure `$VAR`
names an existing object that is a commit-ish (i.e. a commit, or an
annotated tag that points at a commit). To make sure that `$VAR`
form as close to the original input as possible.
--symbolic-full-name::
- This is similar to \--symbolic, but it omits input that
+ This is similar to --symbolic, but it omits input that
are not refs (i.e. branch or tag names; or more
explicitly disambiguating "heads/master" form, when you
want to name the "master" branch when there is an
a directory on the default $PATH.
--exec=<git-receive-pack>::
- Same as \--receive-pack=<git-receive-pack>.
+ Same as --receive-pack=<git-receive-pack>.
--all::
Instead of explicitly specifying which refs to update,
For tags, it shows the tag message and the referenced objects.
For trees, it shows the names (equivalent to 'git ls-tree'
-with \--name-only).
+with --name-only).
For plain blobs, it shows the plain contents.
Given the following noisy input with '$' indicating the end of a line:
---------
+---------
|A brief introduction $
| $
|$
Use 'git stripspace' with no arguments to obtain:
---------
+---------
|A brief introduction$
|$
|A new paragraph$
Use 'git stripspace --strip-comments' to obtain:
---------
+---------
|A brief introduction$
|$
|A new paragraph$
DESCRIPTION
-----------
-Submodules allow foreign repositories to be embedded within
-a dedicated subdirectory of the source tree, always pointed
-at a particular commit.
+Inspects, updates and manages submodules.
-They are not to be confused with remotes, which are meant mainly
-for branches of the same project; submodules are meant for
-different projects you would like to make part of your source tree,
-while the history of the two projects still stays completely
-independent and you cannot modify the contents of the submodule
-from within the main project.
-If you want to merge the project histories and want to treat the
-aggregated whole as a single project from then on, you may want to
-add a remote for the other project and use the 'subtree' merge strategy,
-instead of treating the other project as a submodule. Directories
-that come from both projects can be cloned and checked out as a whole
-if you choose to go that route.
+A submodule allows you to keep another Git repository in a subdirectory
+of your repository. The other repository has its own history, which does not
+interfere with the history of the current repository. This can be used,
+for example, to track external dependencies such as third-party libraries.
+
+When cloning or pulling a repository containing submodules, however,
+these will not be checked out by default; the 'init' and 'update'
+subcommands will maintain submodules checked out and at the
+appropriate revision in your working tree.
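+
+A typical sequence after cloning a superproject might therefore look
+like this (the URL is hypothetical):
+
+----
+$ git clone https://example.com/superproject.git
+$ cd superproject
+$ git submodule init      # record the submodule URLs in .git/config
+$ git submodule update    # check out each submodule at the recorded commit
+----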
Submodules are composed from a so-called `gitlink` tree entry
in the main repository that refers to a particular commit object
The logical name can be used for overriding this URL within your
local repository configuration (see 'submodule init').
-This command will manage the tree entries and contents of the
-gitmodules file for you, as well as inspect the status of your
-submodules and update them.
-When adding a new submodule to the tree, the 'add' subcommand
-is to be used. However, when pulling a tree containing submodules,
-these will not be checked out by default;
-the 'init' and 'update' subcommands will maintain submodules
-checked out and at appropriate revision in your working tree.
-You can briefly inspect the up-to-date status of your submodules
-using the 'status' subcommand and get a detailed overview of the
-difference between the index and checkouts using the 'summary'
-subcommand.
-
+Submodules are not to be confused with remotes, which are other
+repositories of the same project; submodules are meant for
+different projects you would like to make part of your source tree,
+while the history of the two projects still stays completely
+independent and you cannot modify the contents of the submodule
+from within the main project.
+If you want to merge the project histories and want to treat the
+aggregated whole as a single project from then on, you may want to
+add a remote for the other project and use the 'subtree' merge strategy,
+instead of treating the other project as a submodule. Directories
+that come from both projects can be cloned and checked out as a whole
+if you choose to go that route.
COMMANDS
--------
--username=<user>;;
For transports that SVN handles authentication for (http,
https, and plain svn), specify the username. For other
- transports (e.g. svn+ssh://), you must include the username in
- the URL, e.g. svn+ssh://foo@svn.bar.com/project
+ transports (e.g. `svn+ssh://`), you must include the username in
+ the URL, e.g. `svn+ssh://foo@svn.bar.com/project`
--prefix=<prefix>;;
This allows one to specify a prefix which is prepended
to the names of remotes if trunk/branches/tags are
Ask the user to confirm that a patch set should actually be sent to SVN.
For each patch, one may answer "yes" (accept this patch), "no" (discard this
patch), "all" (accept all patches), or "quit".
- +
- 'git svn dcommit' returns immediately if answer is "no" or "quit", without
- committing anything to SVN.
++
+'git svn dcommit' returns immediately if answer is "no" or "quit", without
+committing anything to SVN.
'branch'::
Create a branch in the SVN repository.
CONFIGURATION
-------------
By default, 'git tag' in sign-with-default mode (-s) will use your
-committer identity (of the form "Your Name <\your@email.address>") to
+committer identity (of the form `Your Name <your@email.address>`) to
find a key. If you want to use a different default key, you can specify
it in the repository configuration as follows:
SYNOPSIS
--------
[verse]
-'git unpack-objects' [-n] [-q] [-r] [--strict] < <pack-file>
+'git unpack-objects' [-n] [-q] [-r] [--strict] < <packfile>
DESCRIPTION
"loose" (one object per file) format.
Objects that already exist in the repository will *not* be unpacked
-from the pack-file. Therefore, nothing will be unpacked if you use
-this command on a pack-file that exists within the target repository.
+from the packfile. Therefore, nothing will be unpacked if you use
+this command on a packfile that exists within the target repository.
See linkgit:git-repack[1] for options to generate
new packs and replace existing ones.
-------------
When specifying the -v option the format used is:
- SHA-1 type size size-in-pack-file offset-in-packfile
+ SHA-1 type size size-in-packfile offset-in-packfile
for objects that are not deltified in the pack, and
branch of the `git.git` repository.
Documentation for older releases are available here:
-* link:v2.4.0/git.html[documentation for release 2.4]
+* link:v2.4.4/git.html[documentation for release 2.4.4]
* release notes for
+ link:RelNotes/2.4.4.txt[2.4.4],
+ link:RelNotes/2.4.3.txt[2.4.3],
+ link:RelNotes/2.4.2.txt[2.4.2],
+ link:RelNotes/2.4.1.txt[2.4.1],
link:RelNotes/2.4.0.txt[2.4].
-* link:v2.3.7/git.html[documentation for release 2.3.7]
+* link:v2.3.8/git.html[documentation for release 2.3.8]
* release notes for
+ link:RelNotes/2.3.8.txt[2.3.8],
link:RelNotes/2.3.7.txt[2.3.7],
link:RelNotes/2.3.6.txt[2.3.6],
link:RelNotes/2.3.5.txt[2.3.5],
@@ -1 +1,2 @@
Hello World
+It's a new day for git
-----
+------------
i.e. the diff of the change we caused by adding another line to `hello`.
files:
- 'git diff-index' compares contents of a "tree" object and the
- working directory (when '\--cached' flag is not used) or a
- "tree" object and the index file (when '\--cached' flag is
+ working directory (when '--cached' flag is not used) or a
+ "tree" object and the index file (when '--cached' flag is
used);
- 'git diff-files' compares contents of the index file and the
When the "-C" option is used, the original contents of modified files,
and deleted files (and also unmodified files, if the
-"\--find-copies-harder" option is used) are considered as candidates
+"--find-copies-harder" option is used) are considered as candidates
of the source files in rename/copy operation. If the input were like
these filepairs, that talk about a modified file fileY and a newly
created file file0:
detailed explanation.)
-L<start>,<end>:<file>::
--L:<regex>:<file>::
+-L:<funcname>:<file>::
Trace the evolution of the line range given by "<start>,<end>"
- (or the funcname regex <regex>) within the <file>. You may
+ (or the function name regex <funcname>) within the <file>. You may
not give any pathspec limiters. This is currently limited to
a walk starting from a single revision, i.e., you may only
give zero or one positive revision arguments.
of <n> correspond to the number of -v flags passed on the
command line.
-'option progress' \{'true'|'false'\}::
+'option progress' {'true'|'false'}::
Enables (or disables) progress messages displayed by the
transport helper during a command.
'option depth' <depth>::
Deepens the history of a shallow repository.
-'option followtags' \{'true'|'false'\}::
+'option followtags' {'true'|'false'}::
If enabled the helper should automatically fetch annotated
tag objects if the object the tag points at was transferred
during the fetch command. If the tag is not fetched by
ask for the tag specifically. Some helpers may be able to
use this option to avoid a second network connection.
-'option dry-run' \{'true'|'false'\}:
+'option dry-run' {'true'|'false'}::
If true, pretend the operation completed successfully,
but don't actually change any repository data. For most
helpers this only applies to the 'push', if supported.
must not rely on this option being set before
connect request occurs.
-'option check-connectivity' \{'true'|'false'\}::
+'option check-connectivity' {'true'|'false'}::
Request the helper to check connectivity of a clone.
-'option force' \{'true'|'false'\}::
+'option force' {'true'|'false'}::
Request the helper to perform a force update. Defaults to
'false'.
-'option cloning \{'true'|'false'\}::
+'option cloning' {'true'|'false'}::
Notify the helper this is a clone request (i.e. the current
repository is guaranteed empty).
-'option update-shallow \{'true'|'false'\}::
+'option update-shallow' {'true'|'false'}::
Allow to extend .git/shallow if the new refs require it.
SEE ALSO
<<def_push,push>> to describe the mapping between remote
<<def_ref,ref>> and local ref.
+[[def_remote]]remote repository::
+ A <<def_repository,repository>> which is used to track the same
+ project but resides somewhere else. To communicate with remotes,
+ see <<def_fetch,fetch>> or <<def_push,push>>.
+
[[def_remote_tracking_branch]]remote-tracking branch::
A <<def_ref,ref>> that is used to follow changes from another
<<def_repository,repository>>. It typically looks like
is created by giving the `--depth` option to linkgit:git-clone[1], and
its history can be later deepened with linkgit:git-fetch[1].
+[[def_submodule]]submodule::
+ A <<def_repository,repository>> that holds the history of a
+ separate project inside another repository (the latter of
+ which is called <<def_superproject, superproject>>).
+
+[[def_superproject]]superproject::
+ A <<def_repository,repository>> that references repositories
+ of other projects in its working tree as <<def_submodule,submodules>>.
+ The superproject knows about the names of (but does not hold
+ copies of) commit objects of the contained submodules.
+
[[def_symref]]symref::
Symbolic reference: instead of containing the <<def_SHA1,SHA-1>>
id itself, it is of the format 'ref: refs/some/thing' and when
of lines before or after the line given by <start>.
+
-If ``:<regex>'' is given in place of <start> and <end>, it denotes the range
-from the first funcname line that matches <regex>, up to the next
-funcname line. ``:<regex>'' searches from the end of the previous `-L` range,
-if any, otherwise from the start of file.
-``^:<regex>'' searches from the start of file.
+If ``:<funcname>'' is given in place of <start> and <end>, it is a
+regular expression that denotes the range from the first funcname line
+that matches <funcname>, up to the next funcname line. ``:<funcname>''
+searches from the end of the previous `-L` range, if any, otherwise
+from the start of file. ``^:<funcname>'' searches from the start of
+file.
displayed in full, regardless of whether --abbrev or
--no-abbrev are used, and 'parents' information show the
true parent commits, without taking grafts or history
-simplification into account.
+simplification into account. Note that this format affects the way
+commits are displayed, but not the way the diff is shown e.g. with
+`git log --raw`. To get full object names in a raw diff format,
+use `--no-abbrev`.
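+Concretely, one way to combine the raw commit format with full object
+names in the diff output is:
+
+----
+$ git log --pretty=raw --raw --no-abbrev
+----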
* 'format:<string>'
+
references.
----
- update-request = *shallow ( command-list | push-cert ) [pack-file]
+ update-request = *shallow ( command-list | push-cert ) [packfile]
shallow = PKT-LINE("shallow" SP obj-id LF)
*PKT-LINE(gpg-signature-lines LF)
PKT-LINE("push-cert-end" LF)
- pack-file = "PACK" 28*(OCTET)
+ packfile = "PACK" 28*(OCTET)
----
If the receiving end does not support delete-refs, the sending end MUST
sent, command-list MUST NOT be sent; the commands recorded in the
push certificate is used instead.
-The pack-file MUST NOT be sent if the only command used is 'delete'.
+The packfile MUST NOT be sent if the only command used is 'delete'.
-A pack-file MUST be sent if either create or update command is used,
+A packfile MUST be sent if either create or update command is used,
even if the server already has all the necessary objects. In this
-case the client MUST send an empty pack-file. The only time this
+case the client MUST send an empty packfile. The only time this
is likely to happen is if the client is creating
a new branch or a tag that points to an existing obj-id.
#!/bin/sh
GVF=GIT-VERSION-FILE
-DEF_VER=v2.4.0
+DEF_VER=v2.4.4
LF='
'
@echo PYTHON_PATH=\''$(subst ','\'',$(PYTHON_PATH_SQ))'\' >>$@
@echo TAR=\''$(subst ','\'',$(subst ','\'',$(TAR)))'\' >>$@
@echo NO_CURL=\''$(subst ','\'',$(subst ','\'',$(NO_CURL)))'\' >>$@
+ @echo NO_EXPAT=\''$(subst ','\'',$(subst ','\'',$(NO_EXPAT)))'\' >>$@
@echo USE_LIBPCRE=\''$(subst ','\'',$(subst ','\'',$(USE_LIBPCRE)))'\' >>$@
@echo NO_PERL=\''$(subst ','\'',$(subst ','\'',$(NO_PERL)))'\' >>$@
@echo NO_PYTHON=\''$(subst ','\'',$(subst ','\'',$(NO_PYTHON)))'\' >>$@
-Documentation/RelNotes/2.4.0.txt
\ No newline at end of file
+Documentation/RelNotes/2.4.4.txt
\ No newline at end of file
#include "exec_cmd.h"
#include "attr.h"
#include "dir.h"
+#include "utf8.h"
const char git_attr__true[] = "(builtin)true";
const char git_attr__false[] = "\0(builtin)false";
return NULL;
}
res = xcalloc(1, sizeof(*res));
- while (fgets(buf, sizeof(buf), fp))
- handle_attr_line(res, buf, path, ++lineno, macro_ok);
+ while (fgets(buf, sizeof(buf), fp)) {
+ char *bufp = buf;
+ if (!lineno)
+ skip_utf8_bom(&bufp, strlen(bufp));
+ handle_attr_line(res, bufp, path, ++lineno, macro_ok);
+ }
fclose(fp);
return res;
}
static void bootstrap_attr_stack(void)
{
struct attr_stack *elem;
- char *xdg_attributes_file;
if (attr_stack)
return;
}
}
- if (!git_attributes_file) {
- home_config_paths(NULL, &xdg_attributes_file, "attributes");
- git_attributes_file = xdg_attributes_file;
- }
+ if (!git_attributes_file)
+ git_attributes_file = xdg_config_home("attributes");
if (git_attributes_file) {
elem = read_attr_from_file(git_attributes_file, 1);
if (elem) {
if (run_diff_files(&rev, 0))
die(_("Could not write patch"));
- launch_editor(file, NULL, NULL);
+ if (launch_editor(file, NULL, NULL))
+ die(_("editing patch failed"));
if (stat(file, &st))
die_errno(_("Could not stat '%s'"), file);
#include "userdiff.h"
#include "line-range.h"
#include "line-log.h"
+#include "dir.h"
-static char blame_usage[] = N_("git blame [<options>] [<rev-opts>] [<rev>] [--] file");
+static char blame_usage[] = N_("git blame [<options>] [<rev-opts>] [<rev>] [--] <file>");
static const char *blame_opt_usage[] = {
blame_usage,
}
}
-/*
- * Used for the command line parsing; check if the path exists
- * in the working tree.
- */
-static int has_string_in_work_tree(const char *path)
-{
- struct stat st;
- return !lstat(path, &st);
-}
-
static unsigned parse_score(const char *arg)
{
char *end;
if (strbuf_read(&buf, 0, 0) < 0)
die_errno("failed to read from stdin");
}
+ convert_to_git(path, buf.buf, buf.len, &buf, 0);
origin->file.ptr = buf.buf;
origin->file.size = buf.len;
pretend_sha1_file(buf.buf, buf.len, OBJ_BLOB, origin->blob_sha1);
if (argc < 2)
usage_with_options(blame_opt_usage, options);
path = add_prefix(prefix, argv[argc - 1]);
- if (argc == 3 && !has_string_in_work_tree(path)) { /* (2b) */
+ if (argc == 3 && !file_exists(path)) { /* (2b) */
path = add_prefix(prefix, argv[1]);
argv[1] = argv[2];
}
argv[argc - 1] = "--";
setup_work_tree();
- if (!has_string_in_work_tree(path))
+ if (!file_exists(path))
die_errno("cannot stat path '%s'", path);
}
sha1, &flags);
if (!target) {
error(remote_branch
- ? _("remote branch '%s' not found.")
+ ? _("remote-tracking branch '%s' not found.")
: _("branch '%s' not found."), bname.buf);
ret = 1;
continue;
if (delete_ref(name, sha1, REF_NODEREF)) {
error(remote_branch
- ? _("Error deleting remote branch '%s'")
+ ? _("Error deleting remote-tracking branch '%s'")
: _("Error deleting branch '%s'"),
bname.buf);
ret = 1;
}
if (!quiet) {
printf(remote_branch
- ? _("Deleted remote branch %s (was %s).\n")
+ ? _("Deleted remote-tracking branch %s (was %s).\n")
: _("Deleted branch %s (was %s).\n"),
bname.buf,
(flags & REF_ISBROKEN) ? "broken"
if (!strcmp(cmd, "verify")) {
close(bundle_fd);
+ if (argc != 1) {
+ usage(builtin_bundle_usage);
+ return 1;
+ }
if (verify_bundle(&header, 1))
return 1;
fprintf(stderr, _("%s is okay\n"), bundle_file);
return !!list_bundle_refs(&header, argc, argv);
}
if (!strcmp(cmd, "create")) {
+ if (argc < 2) {
+ usage(builtin_bundle_usage);
+ return 1;
+ }
if (!startup_info->have_repository)
die(_("Need a repository to create a bundle."));
return !!create_bundle(&header, bundle_file, argc, argv);
{
struct string_list menu_list = STRING_LIST_INIT_DUP;
struct strbuf menu = STRBUF_INIT;
- struct strbuf buf = STRBUF_INIT;
struct menu_item *menu_item;
struct string_list_item *string_list_item;
int i;
pretty_print_menus(&menu_list);
strbuf_release(&menu);
- strbuf_release(&buf);
string_list_clear(&menu_list, 0);
}
if (!cache_name_is_other(ent->name, ent->len))
continue;
- if (lstat(ent->name, &st))
- die_errno("Cannot lstat '%s'", ent->name);
-
if (pathspec.nr)
matches = dir_path_match(ent, &pathspec, 0, NULL);
if (pathspec.nr && !matches)
continue;
+ if (lstat(ent->name, &st))
+ die_errno("Cannot lstat '%s'", ent->name);
+
if (S_ISDIR(st.st_mode) && !remove_directories &&
matches != MATCHED_EXACTLY)
continue;
static struct string_list option_reference;
static int option_dissociate;
-static int opt_parse_reference(const struct option *opt, const char *arg, int unset)
-{
- struct string_list *option_reference = opt->value;
- if (!arg)
- return -1;
- string_list_append(option_reference, arg);
- return 0;
-}
-
static struct option builtin_clone_options[] = {
OPT__VERBOSITY(&option_verbosity),
OPT_BOOL(0, "progress", &option_progress,
N_("initialize submodules in the clone")),
OPT_STRING(0, "template", &option_template, N_("template-directory"),
N_("directory from which templates will be used")),
- OPT_CALLBACK(0 , "reference", &option_reference, N_("repo"),
- N_("reference repository"), &opt_parse_reference),
+ OPT_STRING_LIST(0, "reference", &option_reference, N_("repo"),
+ N_("reference repository")),
+ OPT_BOOL(0, "dissociate", &option_dissociate,
+ N_("use --reference only while cloning")),
OPT_STRING('o', "origin", &option_origin, N_("name"),
N_("use <name> instead of 'origin' to track upstream")),
OPT_STRING('b', "branch", &option_branch, N_("branch"),
N_("create a shallow clone of that depth")),
OPT_BOOL(0, "single-branch", &option_single_branch,
N_("clone only one branch, HEAD or --branch")),
- OPT_BOOL(0, "dissociate", &option_dissociate,
- N_("use --reference only while cloning")),
OPT_STRING(0, "separate-git-dir", &real_git_dir, N_("gitdir"),
N_("separate git dir from working tree")),
OPT_STRING_LIST('c', "config", &option_config, N_("key=value"),
remote = remote_get(option_origin);
transport = transport_get(remote, remote->url[0]);
+ transport_set_verbosity(transport, option_verbosity, option_progress);
+
path = get_repo_path(remote->url[0], &is_bundle);
is_local = option_local != 0 && path && !is_bundle;
if (is_local) {
if (option_single_branch)
transport_set_option(transport, TRANS_OPT_FOLLOWTAGS, "1");
- transport_set_verbosity(transport, option_verbosity, option_progress);
-
if (option_upload_pack)
transport_set_option(transport, TRANS_OPT_UPLOADPACK,
option_upload_pack);
static const char *implicit_ident_advice(void)
{
- char *user_config = NULL;
- char *xdg_config = NULL;
- int config_exists;
+ char *user_config = expand_user_path("~/.gitconfig");
+ char *xdg_config = xdg_config_home("config");
+ int config_exists = file_exists(user_config) || file_exists(xdg_config);
- home_config_paths(&user_config, &xdg_config, "config");
- config_exists = file_exists(user_config) || file_exists(xdg_config);
free(user_config);
free(xdg_config);
struct strbuf buf = STRBUF_INIT;
strbuf_addf(&buf,
_("# This is Git's per-user configuration file.\n"
- "[core]\n"
+ "[user]\n"
"# Please adapt and uncomment the following lines:\n"
- "# user = %s\n"
+ "# name = %s\n"
"# email = %s\n"),
ident_default_name(),
ident_default_email());
}
if (use_global_config) {
- char *user_config = NULL;
- char *xdg_config = NULL;
-
- home_config_paths(&user_config, &xdg_config, "config");
+ char *user_config = expand_user_path("~/.gitconfig");
+ char *xdg_config = xdg_config_home("config");
if (!user_config)
/*
#define util_as_integral(elem) ((intptr_t)((elem)->util))
-static void record_person(int which, struct string_list *people,
- struct commit *commit)
+static void record_person_from_buf(int which, struct string_list *people,
+ const char *buffer)
{
- const char *buffer;
char *name_buf, *name, *name_end;
struct string_list_item *elem;
const char *field;
field = (which == 'a') ? "\nauthor " : "\ncommitter ";
- buffer = get_commit_buffer(commit, NULL);
name = strstr(buffer, field);
if (!name)
return;
if (name_end < name)
return;
name_buf = xmemdupz(name, name_end - name + 1);
- unuse_commit_buffer(commit, buffer);
elem = string_list_lookup(people, name_buf);
if (!elem) {
free(name_buf);
}
+
+static void record_person(int which, struct string_list *people,
+ struct commit *commit)
+{
+ const char *buffer = get_commit_buffer(commit, NULL);
+ record_person_from_buf(which, people, buffer);
+ unuse_commit_buffer(commit, buffer);
+}
+
static int cmp_string_list_util_as_integral(const void *a_, const void *b_)
{
const struct string_list_item *a = a_, *b = b_;
if (strbuf_read(&buf, fd, 4096) < 0)
ret = -1;
- else if (flags & HASH_WRITE_OBJECT)
- ret = write_sha1_file(buf.buf, buf.len, type, sha1);
else
- ret = hash_sha1_file(buf.buf, buf.len, type, sha1);
+ ret = hash_sha1_file_literally(buf.buf, buf.len, type, sha1, flags);
strbuf_release(&buf);
return ret;
}
return 0;
}
+/*
+ * If the git_dir is not directly inside the working tree, then git will not
+ * find it by default, and we need to set the worktree explicitly.
+ */
+static int needs_work_tree_config(const char *git_dir, const char *work_tree)
+{
+ if (!strcmp(work_tree, "/") && !strcmp(git_dir, "/.git"))
+ return 0;
+ if (skip_prefix(git_dir, work_tree, &git_dir) &&
+ !strcmp(git_dir, "/.git"))
+ return 0;
+ return 1;
+}
+
static int create_default_files(const char *template_path)
{
const char *git_dir = get_git_dir();
/* allow template config file to override the default */
if (log_all_ref_updates == -1)
git_config_set("core.logallrefupdates", "true");
- if (!starts_with(git_dir, work_tree) ||
- strcmp(git_dir + strlen(work_tree), "/.git")) {
+ if (needs_work_tree_config(git_dir, work_tree))
git_config_set("core.worktree", work_tree);
- }
}
if (!reinit) {
static const char *fmt_pretty;
static const char * const builtin_log_usage[] = {
- N_("git log [<options>] [<revision range>] [[--] <path>...]"),
+ N_("git log [<options>] [<revision-range>] [[--] <path>...]"),
N_("git show [<options>] <object>..."),
NULL
};
off_t offset = find_pack_entry_one(sha1, p);
if (offset) {
if (!*found_pack) {
- if (!is_pack_valid(p)) {
- warning("packfile %s cannot be accessed", p->pack_name);
+ if (!is_pack_valid(p))
continue;
- }
*found_offset = offset;
*found_pack = p;
}
const char *name = list.entry[i].name;
int pos;
const struct cache_entry *ce;
- struct stat st;
pos = cache_name_pos(name, strlen(name));
if (pos < 0) {
ce = active_cache[pos];
if (!S_ISGITLINK(ce->ce_mode) ||
- (lstat(ce->name, &st) < 0) ||
+ !file_exists(ce->name) ||
is_empty_dir(name))
continue;
enum scld_error safe_create_leading_directories_const(const char *path);
int mkdir_in_gitdir(const char *path);
-extern void home_config_paths(char **global, char **xdg, char *file);
extern char *expand_user_path(const char *path);
const char *enter_repo(const char *path, int strict);
static inline int is_absolute_path(const char *path)
int daemon_avoid_alias(const char *path);
extern int is_ntfs_dotgit(const char *name);
+/**
+ * Return a newly allocated string with the evaluation of
+ * "$XDG_CONFIG_HOME/git/$filename" if $XDG_CONFIG_HOME is non-empty, otherwise
+ * "$HOME/.config/git/$filename". Return NULL upon error.
+ */
+extern char *xdg_config_home(const char *filename);
+
/* object replacement */
#define LOOKUP_REPLACE_OBJECT 1
extern void *read_sha1_file_extended(const unsigned char *sha1, enum object_type *type, unsigned long *size, unsigned flag);
extern int sha1_object_info(const unsigned char *, unsigned long *);
extern int hash_sha1_file(const void *buf, unsigned long len, const char *type, unsigned char *sha1);
extern int write_sha1_file(const void *buf, unsigned long len, const char *type, unsigned char *return_sha1);
+extern int hash_sha1_file_literally(const void *buf, unsigned long len, const char *type, unsigned char *sha1, unsigned flags);
extern int pretend_sha1_file(void *, unsigned long, enum object_type, unsigned char *);
extern int force_object_loose(const unsigned char *sha1, time_t mtime);
extern int git_open_noatime(const char *name);
int pack_fd;
unsigned pack_local:1,
pack_keep:1,
+ freshened:1,
do_not_close:1;
unsigned char sha1[20];
/* something like ".git/objects/pack/xxxxx.pack" */
/*
* Iterate over loose and packed objects in both the local
- * repository and any alternates repositories.
+ * repository and any alternates repositories (unless the
+ * LOCAL_ONLY flag is set).
*/
+#define FOR_EACH_OBJECT_LOCAL_ONLY 0x1
typedef int each_packed_object_fn(const unsigned char *sha1,
struct packed_git *pack,
uint32_t pos,
void *data);
-extern int for_each_loose_object(each_loose_object_fn, void *);
-extern int for_each_packed_object(each_packed_object_fn, void *);
+extern int for_each_loose_object(each_loose_object_fn, void *, unsigned flags);
+extern int for_each_packed_object(each_packed_object_fn, void *, unsigned flags);
struct object_info {
/* Request */
const char *c_func = diff_get_color(use_color, DIFF_FUNCINFO);
const char *c_new = diff_get_color(use_color, DIFF_FILE_NEW);
const char *c_old = diff_get_color(use_color, DIFF_FILE_OLD);
- const char *c_plain = diff_get_color(use_color, DIFF_PLAIN);
+ const char *c_context = diff_get_color(use_color, DIFF_CONTEXT);
const char *c_reset = diff_get_color(use_color, DIFF_RESET);
if (result_deleted)
}
if (comment_end)
printf("%s%s %s%s", c_reset,
- c_plain, c_reset,
+ c_context, c_reset,
c_func);
for (i = 0; i < comment_end; i++)
putchar(hunk_comment[i]);
*/
if (!context)
continue;
- fputs(c_plain, stdout);
+ fputs(c_context, stdout);
}
else
fputs(c_new, stdout);
return 0;
}
-int parse_commit(struct commit *item)
+int parse_commit_gently(struct commit *item, int quiet_on_missing)
{
enum object_type type;
void *buffer;
return 0;
buffer = read_sha1_file(item->object.sha1, &type, &size);
if (!buffer)
- return error("Could not read %s",
+ return quiet_on_missing ? -1 :
+ error("Could not read %s",
sha1_to_hex(item->object.sha1));
if (type != OBJ_COMMIT) {
free(buffer);
struct commit *lookup_commit_or_die(const unsigned char *sha1, const char *ref_name);
int parse_commit_buffer(struct commit *item, const void *buffer, unsigned long size);
-int parse_commit(struct commit *item);
+int parse_commit_gently(struct commit *item, int quiet_on_missing);
+static inline int parse_commit(struct commit *item)
+{
+ return parse_commit_gently(item, 0);
+}
void parse_commit_or_die(struct commit *item);
/*
#include "quote.h"
#include "hashmap.h"
#include "string-list.h"
+#include "utf8.h"
struct config_source {
struct config_source *prev;
struct strbuf *var = &cf->var;
/* U+FEFF Byte Order Mark in UTF8 */
- static const unsigned char *utf8_bom = (unsigned char *) "\xef\xbb\xbf";
- const unsigned char *bomptr = utf8_bom;
+ const char *bomptr = utf8_bom;
for (;;) {
int c = get_next_char();
/* We are at the file beginning; skip UTF8-encoded BOM
* if present. Sane editors won't put this in on their
* own, but e.g. Windows Notepad will do it happily. */
- if ((unsigned char) c == *bomptr) {
+ if (c == (*bomptr & 0377)) {
bomptr++;
continue;
} else {
int git_config_early(config_fn_t fn, void *data, const char *repo_config)
{
int ret = 0, found = 0;
- char *xdg_config = NULL;
- char *user_config = NULL;
-
- home_config_paths(&user_config, &xdg_config, "config");
+ char *xdg_config = xdg_config_home("config");
+ char *user_config = expand_user_path("~/.gitconfig");
if (git_config_system() && !access_or_die(git_etc_gitconfig(), R_OK, 0)) {
ret += git_config_from_file(fn, git_etc_gitconfig(),
int ret;
struct lock_file *lock = NULL;
char *filename_buf = NULL;
+ char *contents = NULL;
+ size_t contents_sz;
/* parse-key returns negative; flip the sign to feed exit(3) */
ret = 0 - git_config_parse_key(key, &store.key, &store.baselen);
goto write_err_out;
} else {
struct stat st;
- char *contents;
- size_t contents_sz, copy_begin, copy_end;
+ size_t copy_begin, copy_end;
int i, new_line = 0;
if (value_regex == NULL)
fstat(in_fd, &st);
contents_sz = xsize_t(st.st_size);
- contents = xmmap(NULL, contents_sz, PROT_READ,
- MAP_PRIVATE, in_fd, 0);
+ contents = xmmap_gently(NULL, contents_sz, PROT_READ,
+ MAP_PRIVATE, in_fd, 0);
+ if (contents == MAP_FAILED) {
+ if (errno == ENODEV && S_ISDIR(st.st_mode))
+ errno = EISDIR;
+ error("unable to mmap '%s': %s",
+ config_filename, strerror(errno));
+ ret = CONFIG_INVALID_FILE;
+ contents = NULL;
+ goto out_free;
+ }
close(in_fd);
if (chmod(lock->filename.buf, st.st_mode & 07777) < 0) {
contents_sz - copy_begin) <
contents_sz - copy_begin)
goto write_err_out;
-
- munmap(contents, contents_sz);
}
if (commit_lock_file(lock) < 0) {
if (lock)
rollback_lock_file(lock);
free(filename_buf);
+ if (contents)
+ munmap(contents, contents_sz);
return ret;
write_err_out:
conn->in = conn->out = -1;
if (protocol == PROTO_SSH) {
const char *ssh;
- int putty;
+ int putty, tortoiseplink = 0;
char *ssh_host = hostandport;
const char *port = NULL;
get_host_and_port(&ssh_host, &port);
free(path);
free(conn);
return NULL;
+ }
+
+ ssh = getenv("GIT_SSH_COMMAND");
+ if (ssh) {
+ conn->use_shell = 1;
+ putty = 0;
} else {
- ssh = getenv("GIT_SSH_COMMAND");
- if (ssh) {
- conn->use_shell = 1;
- putty = 0;
- } else {
- ssh = getenv("GIT_SSH");
- if (!ssh)
- ssh = "ssh";
- putty = !!strcasestr(ssh, "plink");
- }
-
- argv_array_push(&conn->args, ssh);
- if (putty && !strcasestr(ssh, "tortoiseplink"))
- argv_array_push(&conn->args, "-batch");
- if (port) {
- /* P is for PuTTY, p is for OpenSSH */
- argv_array_push(&conn->args, putty ? "-P" : "-p");
- argv_array_push(&conn->args, port);
- }
- argv_array_push(&conn->args, ssh_host);
+ const char *base;
+ char *ssh_dup;
+
+ ssh = getenv("GIT_SSH");
+ if (!ssh)
+ ssh = "ssh";
+
+ ssh_dup = xstrdup(ssh);
+ base = basename(ssh_dup);
+
+ tortoiseplink = !strcasecmp(base, "tortoiseplink") ||
+ !strcasecmp(base, "tortoiseplink.exe");
+ putty = !strcasecmp(base, "plink") ||
+ !strcasecmp(base, "plink.exe") || tortoiseplink;
+
+ free(ssh_dup);
+ }
+
+ argv_array_push(&conn->args, ssh);
+ if (tortoiseplink)
+ argv_array_push(&conn->args, "-batch");
+ if (port) {
+ /* P is for PuTTY, p is for OpenSSH */
+ argv_array_push(&conn->args, putty ? "-P" : "-p");
+ argv_array_push(&conn->args, port);
}
+ argv_array_push(&conn->args, ssh_host);
} else {
/* remove repo-local variables from the environment */
conn->env = local_repo_env;
return
;;
--decorate=*)
- __gitcomp "long short" "" "${cur##--decorate=}"
+ __gitcomp "full short no" "" "${cur##--decorate=}"
return
;;
--*)
static struct lock_file credential_lock;
-static void parse_credential_file(const char *fn,
+static int parse_credential_file(const char *fn,
struct credential *c,
void (*match_cb)(struct credential *),
void (*other_cb)(struct strbuf *))
FILE *fh;
struct strbuf line = STRBUF_INIT;
struct credential entry = CREDENTIAL_INIT;
+ int found_credential = 0;
fh = fopen(fn, "r");
if (!fh) {
- if (errno != ENOENT)
+ if (errno != ENOENT && errno != EACCES)
die_errno("unable to open %s", fn);
- return;
+ return found_credential;
}
while (strbuf_getline(&line, fh, '\n') != EOF) {
credential_from_url(&entry, line.buf);
if (entry.username && entry.password &&
credential_match(c, &entry)) {
+ found_credential = 1;
if (match_cb) {
match_cb(&entry);
break;
credential_clear(&entry);
strbuf_release(&line);
fclose(fh);
+ return found_credential;
}
static void print_entry(struct credential *c)
die_errno("unable to commit credential store");
}
-static void store_credential(const char *fn, struct credential *c)
+static void store_credential_file(const char *fn, struct credential *c)
{
struct strbuf buf = STRBUF_INIT;
- /*
- * Sanity check that what we are storing is actually sensible.
- * In particular, we can't make a URL without a protocol field.
- * Without either a host or pathname (depending on the scheme),
- * we have no primary key. And without a username and password,
- * we are not actually storing a credential.
- */
- if (!c->protocol || !(c->host || c->path) ||
- !c->username || !c->password)
- return;
-
strbuf_addf(&buf, "%s://", c->protocol);
strbuf_addstr_urlencode(&buf, c->username, 1);
strbuf_addch(&buf, ':');
strbuf_release(&buf);
}
-static void remove_credential(const char *fn, struct credential *c)
+static void store_credential(const struct string_list *fns, struct credential *c)
+{
+ struct string_list_item *fn;
+
+ /*
+ * Sanity check that what we are storing is actually sensible.
+ * In particular, we can't make a URL without a protocol field.
+ * Without either a host or pathname (depending on the scheme),
+ * we have no primary key. And without a username and password,
+ * we are not actually storing a credential.
+ */
+ if (!c->protocol || !(c->host || c->path) || !c->username || !c->password)
+ return;
+
+ for_each_string_list_item(fn, fns)
+ if (!access(fn->string, F_OK)) {
+ store_credential_file(fn->string, c);
+ return;
+ }
+ /*
+ * Write credential to the filename specified by fns->items[0], thus
+ * creating it
+ */
+ if (fns->nr)
+ store_credential_file(fns->items[0].string, c);
+}
+
+static void remove_credential(const struct string_list *fns, struct credential *c)
{
+ struct string_list_item *fn;
+
/*
* Sanity check that we actually have something to match
* against. The input we get is a restrictive pattern,
* to empty input. So explicitly disallow it, and require that the
* pattern have some actual content to match.
*/
- if (c->protocol || c->host || c->path || c->username)
- rewrite_credential_file(fn, c, NULL);
+ if (!c->protocol && !c->host && !c->path && !c->username)
+ return;
+ for_each_string_list_item(fn, fns)
+ if (!access(fn->string, F_OK))
+ rewrite_credential_file(fn->string, c, NULL);
}
-static int lookup_credential(const char *fn, struct credential *c)
+static void lookup_credential(const struct string_list *fns, struct credential *c)
{
- parse_credential_file(fn, c, print_entry, NULL);
- return c->username && c->password;
+ struct string_list_item *fn;
+
+ for_each_string_list_item(fn, fns)
+ if (parse_credential_file(fn->string, c, print_entry, NULL))
+ return; /* Found credential */
}
int main(int argc, char **argv)
};
const char *op;
struct credential c = CREDENTIAL_INIT;
+ struct string_list fns = STRING_LIST_INIT_DUP;
char *file = NULL;
struct option options[] = {
OPT_STRING(0, "file", &file, "path",
usage_with_options(usage, options);
op = argv[0];
- if (!file)
- file = expand_user_path("~/.git-credentials");
- if (!file)
+ if (file) {
+ string_list_append(&fns, file);
+ } else {
+ if ((file = expand_user_path("~/.git-credentials")))
+ string_list_append_nodup(&fns, file);
+ file = xdg_config_home("credentials");
+ if (file)
+ string_list_append_nodup(&fns, file);
+ }
+ if (!fns.nr)
die("unable to set up default path; use --file");
if (credential_read(&c, stdin) < 0)
die("unable to read credential");
if (!strcmp(op, "get"))
- lookup_credential(file, &c);
+ lookup_credential(&fns, &c);
else if (!strcmp(op, "erase"))
- remove_credential(file, &c);
+ remove_credential(&fns, &c);
else if (!strcmp(op, "store"))
- store_credential(file, &c);
+ store_credential(&fns, &c);
else
; /* Ignore unknown operation. */
+ string_list_clear(&fns, 0);
return 0;
}
char **ap;
static char addrbuf[HOST_NAME_MAX + 1];
- hent = gethostbyname(hostname.buf);
+ hent = gethostbyname(hi->hostname.buf);
if (hent) {
ap = hent->h_addr_list;
memset(&sa, 0, sizeof sa);
date += match;
}
- /* mktime uses local timezone */
+ /* do not use mktime(), which uses local timezone, here */
*timestamp = tm_to_time_t(&tm);
+ if (*timestamp == -1)
+ return -1;
+
if (*offset == -1) {
- time_t temp_time = mktime(&tm);
+ time_t temp_time;
+
+ /* gmtime_r() in match_digit() may have clobbered it */
+ tm.tm_isdst = -1;
+ temp_time = mktime(&tm);
if ((time_t)*timestamp > temp_time) {
*offset = ((time_t)*timestamp - temp_time) / 60;
} else {
}
}
- if (*timestamp == -1)
- return -1;
-
if (!tm_gmt)
*timestamp -= *offset * 60;
return 0; /* success */
if (get_mode(name1, &mode1) || get_mode(name2, &mode2))
return -1;
- if (mode1 && mode2 && S_ISDIR(mode1) != S_ISDIR(mode2))
- return error("file/directory conflict: %s, %s", name1, name2);
+ if (mode1 && mode2 && S_ISDIR(mode1) != S_ISDIR(mode2)) {
+ struct diff_filespec *d1, *d2;
+
+ if (S_ISDIR(mode1)) {
+ /* 2 is file that is created */
+ d1 = noindex_filespec(NULL, 0);
+ d2 = noindex_filespec(name2, mode2);
+ name2 = NULL;
+ mode2 = 0;
+ } else {
+ /* 1 is file that is deleted */
+ d1 = noindex_filespec(name1, mode1);
+ d2 = noindex_filespec(NULL, 0);
+ name1 = NULL;
+ mode1 = 0;
+ }
+ /* emit that file */
+ diff_queue(&diff_queued_diff, d1, d2);
+
+ /* and then let the entire directory be created or deleted */
+ }
if (S_ISDIR(mode1) || S_ISDIR(mode2)) {
struct strbuf buffer1 = STRBUF_INIT;
}
}
+/* append basename of F to D */
+static void append_basename(struct strbuf *path, const char *dir, const char *file)
+{
+ const char *tail = strrchr(file, '/');
+
+ strbuf_addstr(path, dir);
+ while (path->len && path->buf[path->len - 1] == '/')
+ path->len--;
+ strbuf_addch(path, '/');
+ strbuf_addstr(path, tail ? tail + 1 : file);
+}
+
+/*
+ * DWIM "diff D F" into "diff D/F F" and "diff F D" into "diff F D/F"
+ * Note that we append the basename of F to D/, so "diff a/b/file D"
+ * becomes "diff a/b/file D/file", not "diff a/b/file D/a/b/file".
+ */
+static void fixup_paths(const char **path, struct strbuf *replacement)
+{
+ unsigned int isdir0, isdir1;
+
+ if (path[0] == file_from_standard_input ||
+ path[1] == file_from_standard_input)
+ return;
+ isdir0 = is_directory(path[0]);
+ isdir1 = is_directory(path[1]);
+ if (isdir0 == isdir1)
+ return;
+ if (isdir0) {
+ append_basename(replacement, path[0], path[1]);
+ path[0] = replacement->buf;
+ } else {
+ append_basename(replacement, path[1], path[0]);
+ path[1] = replacement->buf;
+ }
+}
+
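
To make the DWIM concrete (an illustrative sketch, not code from the patch;
it assumes "COPYING" is a regular file and "subdir" is a directory holding
its own "COPYING"):

    /* illustrative only */
    static void dwim_example(void)
    {
            const char *paths[2] = { "COPYING", "subdir" };
            struct strbuf replacement = STRBUF_INIT;

            fixup_paths(paths, &replacement);
            /* paths[1] now points at "subdir/COPYING", so the comparison
             * matches what POSIX "diff COPYING subdir" would do */
            strbuf_release(&replacement);
    }
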
void diff_no_index(struct rev_info *revs,
int argc, const char **argv,
const char *prefix)
{
int i, prefixlen;
const char *paths[2];
+ struct strbuf replacement = STRBUF_INIT;
diff_setup(&revs->diffopt);
for (i = 1; i < argc - 2; ) {
p = xstrdup(prefix_filename(prefix, prefixlen, p));
paths[i] = p;
}
+
+ fixup_paths(paths, &replacement);
+
revs->diffopt.skip_stat_unmatch = 1;
if (!revs->diffopt.output_format)
revs->diffopt.output_format = DIFF_FORMAT_PATCH;
diffcore_std(&revs->diffopt);
diff_flush(&revs->diffopt);
+ strbuf_release(&replacement);
+
/*
* The return code for --no-index imitates diff(1):
* 0 = no changes, 1 = changes, else error
static char diff_colors[][COLOR_MAXLEN] = {
GIT_COLOR_RESET,
- GIT_COLOR_NORMAL, /* PLAIN */
+ GIT_COLOR_NORMAL, /* CONTEXT */
GIT_COLOR_BOLD, /* METAINFO */
GIT_COLOR_CYAN, /* FRAGINFO */
GIT_COLOR_RED, /* OLD */
static int parse_diff_color_slot(const char *var)
{
- if (!strcasecmp(var, "plain"))
- return DIFF_PLAIN;
+ if (!strcasecmp(var, "context") || !strcasecmp(var, "plain"))
+ return DIFF_CONTEXT;
if (!strcasecmp(var, "meta"))
return DIFF_METAINFO;
if (!strcasecmp(var, "frag"))
static void emit_hunk_header(struct emit_callback *ecbdata,
const char *line, int len)
{
- const char *plain = diff_get_color(ecbdata->color_diff, DIFF_PLAIN);
+ const char *context = diff_get_color(ecbdata->color_diff, DIFF_CONTEXT);
const char *frag = diff_get_color(ecbdata->color_diff, DIFF_FRAGINFO);
const char *func = diff_get_color(ecbdata->color_diff, DIFF_FUNCINFO);
const char *reset = diff_get_color(ecbdata->color_diff, DIFF_RESET);
if (len < 10 ||
memcmp(line, atat, 2) ||
!(ep = memmem(line + 2, len - 2, atat, 2))) {
- emit_line(ecbdata->opt, plain, reset, line, len);
+ emit_line(ecbdata->opt, context, reset, line, len);
return;
}
ep += 2; /* skip over @@ */
if (*ep != ' ' && *ep != '\t')
break;
if (ep != cp) {
- strbuf_addstr(&msgbuf, plain);
+ strbuf_addstr(&msgbuf, context);
strbuf_add(&msgbuf, cp, ep - cp);
strbuf_addstr(&msgbuf, reset);
}
data += len;
}
if (!endp) {
- const char *plain = diff_get_color(ecb->color_diff,
- DIFF_PLAIN);
+ const char *context = diff_get_color(ecb->color_diff,
+ DIFF_CONTEXT);
putc('\n', ecb->opt->file);
- emit_line_0(ecb->opt, plain, reset, '\\',
+ emit_line_0(ecb->opt, context, reset, '\\',
nneof, strlen(nneof));
}
}
struct diff_words_style *st = ecbdata->diff_words->style;
st->old.color = diff_get_color_opt(o, DIFF_FILE_OLD);
st->new.color = diff_get_color_opt(o, DIFF_FILE_NEW);
- st->ctx.color = diff_get_color_opt(o, DIFF_PLAIN);
+ st->ctx.color = diff_get_color_opt(o, DIFF_CONTEXT);
}
}
{
struct emit_callback *ecbdata = priv;
const char *meta = diff_get_color(ecbdata->color_diff, DIFF_METAINFO);
- const char *plain = diff_get_color(ecbdata->color_diff, DIFF_PLAIN);
+ const char *context = diff_get_color(ecbdata->color_diff, DIFF_CONTEXT);
const char *reset = diff_get_color(ecbdata->color_diff, DIFF_RESET);
struct diff_options *o = ecbdata->opt;
const char *line_prefix = diff_line_prefix(o);
}
diff_words_flush(ecbdata);
if (ecbdata->diff_words->type == DIFF_WORDS_PORCELAIN) {
- emit_line(ecbdata->opt, plain, reset, line, len);
+ emit_line(ecbdata->opt, context, reset, line, len);
fputs("~\n", ecbdata->opt->file);
} else {
/*
line++;
len--;
}
- emit_line(ecbdata->opt, plain, reset, line, len);
+ emit_line(ecbdata->opt, context, reset, line, len);
}
return;
}
if (line[0] != '+') {
const char *color =
diff_get_color(ecbdata->color_diff,
- line[0] == '-' ? DIFF_FILE_OLD : DIFF_PLAIN);
+ line[0] == '-' ? DIFF_FILE_OLD : DIFF_CONTEXT);
ecbdata->lno_in_preimage++;
if (line[0] == ' ')
ecbdata->lno_in_postimage++;
enum color_diff {
DIFF_RESET = 0,
- DIFF_PLAIN = 1,
+ DIFF_CONTEXT = 1,
DIFF_METAINFO = 2,
DIFF_FRAGINFO = 3,
DIFF_FILE_OLD = 4,
#include "refs.h"
#include "wildmatch.h"
#include "pathspec.h"
+#include "utf8.h"
struct path_simplify {
int len;
/*
* Make sure all pathspec matched; otherwise it is an error.
*/
- struct strbuf sb = STRBUF_INIT;
int num, errors = 0;
for (num = 0; num < pathspec->nr; num++) {
int other, found_dup;
pathspec->items[num].original);
errors++;
}
- strbuf_release(&sb);
return errors;
}
}
el->filebuf = buf;
+
+ if (skip_utf8_bom(&buf, size))
+ size -= buf - el->filebuf;
+
entry = buf;
+
for (i = 0; i < size; i++) {
if (buf[i] == '\n') {
if (entry != buf + i && entry[0] != '#') {
void setup_standard_excludes(struct dir_struct *dir)
{
const char *path;
- char *xdg_path;
dir->exclude_per_dir = ".gitignore";
+
+ /* core.excludesfile defaulting to $XDG_CONFIG_HOME/git/ignore */
+ if (!excludes_file)
+ excludes_file = xdg_config_home("ignore");
+ if (excludes_file && !access_or_warn(excludes_file, R_OK, 0))
+ add_excludes_from_file(dir, excludes_file);
+
+ /* per repository user preference */
path = git_path("info/exclude");
- if (!excludes_file) {
- home_config_paths(NULL, &xdg_path, "ignore");
- excludes_file = xdg_path;
- }
if (!access_or_warn(path, R_OK, 0))
add_excludes_from_file(dir, path);
- if (excludes_file && !access_or_warn(excludes_file, R_OK, 0))
- add_excludes_from_file(dir, excludes_file);
}
int remove_path(const char *name)
struct strbuf new_path = STRBUF_INIT;
add_path(&new_path, git_exec_path());
- add_path(&new_path, argv0_path);
if (old_path)
strbuf_addstr(&new_path, old_path);
#define _FILE_OFFSET_BITS 64
+
+/* Derived from the Linux "Feature Test Macros" header.
+ * Convenience macros to test the version of gcc (or
+ * a compatible compiler).
+ * Use them like this:
+ * #if GIT_GNUC_PREREQ (2,8)
+ * ... code requiring gcc 2.8 or later ...
+ * #endif
+ */
+#if defined(__GNUC__) && defined(__GNUC_MINOR__)
+# define GIT_GNUC_PREREQ(maj, min) \
+ ((__GNUC__ << 16) + __GNUC_MINOR__ >= ((maj) << 16) + (min))
+#else
+ #define GIT_GNUC_PREREQ(maj, min) 0
+#endif
+
+
#ifndef FLEX_ARRAY
/*
* See if our compiler is known to support flexible array members.
#endif
#endif
-#define ARRAY_SIZE(x) (sizeof(x)/sizeof(x[0]))
+
+/*
+ * BUILD_ASSERT_OR_ZERO - assert a build-time dependency, as an expression.
+ * @cond: the compile-time condition which must be true.
+ *
+ * Your compile will fail if the condition isn't true, or can't be evaluated
+ * by the compiler. This can be used in an expression: its value is "0".
+ *
+ * Example:
+ * #define foo_to_char(foo) \
+ * ((char *)(foo) \
+ * + BUILD_ASSERT_OR_ZERO(offsetof(struct foo, string) == 0))
+ */
+#define BUILD_ASSERT_OR_ZERO(cond) \
+ (sizeof(char [1 - 2*!(cond)]) - 1)
+
+#if GIT_GNUC_PREREQ(3, 1)
+ /* &arr[0] degrades to a pointer: a different type from an array */
+# define BARF_UNLESS_AN_ARRAY(arr) \
+ BUILD_ASSERT_OR_ZERO(!__builtin_types_compatible_p(__typeof__(arr), \
+ __typeof__(&(arr)[0])))
+#else
+# define BARF_UNLESS_AN_ARRAY(arr) 0
+#endif
+/*
+ * ARRAY_SIZE - get the number of elements in a visible array
+ * @x: the array whose size you want.
+ *
+ * This does not work on pointers, or arrays declared as [], or
+ * function parameters. With correct compiler support, such usage
+ * will cause a build error (see the build_assert_or_zero macro).
+ */
+#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]) + BARF_UNLESS_AN_ARRAY(x))
+
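
As a quick sanity check of the combined macros (illustrative, not part of
the patch): the array form still evaluates to the element count at compile
time, while handing ARRAY_SIZE a pointer now fails to build on compilers
where BARF_UNLESS_AN_ARRAY has teeth:

    /* illustrative only */
    static const char *levels[] = { "error", "warn", "info" };
    static const size_t nr_levels = ARRAY_SIZE(levels);  /* 3, as before */

    /*
     * const char **p = levels;
     * ARRAY_SIZE(p);   -- no longer compiles: p is a pointer, not an array
     */
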
#define bitsizeof(x) (CHAR_BIT * sizeof(x))
#define maximum_signed_value_of_type(a) \
extern void *xrealloc(void *ptr, size_t size);
extern void *xcalloc(size_t nmemb, size_t size);
extern void *xmmap(void *start, size_t length, int prot, int flags, int fd, off_t offset);
+extern void *xmmap_gently(void *start, size_t length, int prot, int flags, int fd, off_t offset);
extern ssize_t xread(int fd, void *buf, size_t len);
extern ssize_t xwrite(int fd, const void *buf, size_t len);
extern ssize_t xpread(int fd, void *buf, size_t len, off_t offset);
die "parent filter failed: $filter_parent"
fi
- sed -e '1,/^$/d' <../commit | \
+ {
+ while read -r header_line && test -n "$header_line"
+ do
+ # skip header lines...
+ :;
+ done
+ # and output the actual commit message
+ cat
+ } <../commit |
eval "$filter_msg" > ../message ||
die "msg filter failed: $filter_msg"
workdir=$workdir @SHELL_PATH@ -c "$filter_commit" "git commit-tree" \
fi
# Setup default fast-forward options via `pull.ff`
-pull_ff=$(git config pull.ff)
+pull_ff=$(bool_or_string_config pull.ff)
case "$pull_ff" in
+true)
+ no_ff=--ff
+ ;;
false)
no_ff=--no-ff
;;
diffstat=--no-stat ;;
--stat|--summary)
diffstat=--stat ;;
- --log|--no-log)
- log_arg=$1 ;;
+ --log|--log=*|--no-log)
+ log_arg="$1" ;;
--no-c|--no-co|--no-com|--no-comm|--no-commi|--no-commit)
no_commit=--no-commit ;;
--c|--co|--com|--comm|--commi|--commit)
fi
}
+# Put the last action marked done at the beginning of the todo list
+# again. If there has not been an action marked done yet, leave the list of
+# items on the todo list unchanged.
+reschedule_last_action () {
+ tail -n 1 "$done" | cat - "$todo" >"$todo".new
+ sed -e \$d <"$done" >"$done".new
+ mv -f "$todo".new "$todo"
+ mv -f "$done".new "$done"
+}
+
append_todo_help () {
git stripspace --comment-lines >>"$todo" <<\EOF
output eval git cherry-pick \
${gpg_sign_opt:+$(git rev-parse --sq-quote "$gpg_sign_opt")} \
"$strategy_args" $empty_args $ff "$@"
+
+ # If cherry-pick dies it leaves the to-be-picked commit unrecorded.
+ # Reschedule the previous task so this commit is not lost.
+ ret=$?
+ case "$ret" in [01]) ;; *) reschedule_last_action ;; esac
+ return $ret
}
pick_one_preserving_merges () {
}
do_next () {
- rm -f "$msg" "$author_script" "$amend" || exit
+ rm -f "$msg" "$author_script" "$amend" "$state_dir"/stopped-sha || exit
read -r command sha1 rest < "$todo"
case "$command" in
"$comment_char"*|''|noop)
read -r command rest < "$todo"
mark_action_done
printf 'Executing: %s\n' "$rest"
- # "exec" command doesn't take a sha1 in the todo-list.
- # => can't just use $sha1 here.
- git rev-parse --verify HEAD > "$state_dir"/stopped-sha
${SHELL:-@SHELL_PATH@} -c "$rest" # Actual execution
status=$?
# Run in subshell because require_clean_work_tree can die.
fi
fi
- record_in_rewritten "$(cat "$state_dir"/stopped-sha)"
+ if test -r "$state_dir"/stopped-sha
+ then
+ record_in_rewritten "$(cat "$state_dir"/stopped-sha)"
+ fi
require_clean_work_tree "rebase"
do_rest
# Lazily switch to the target branch if needed...
test -z "$switch_to" ||
GIT_REFLOG_ACTION="$GIT_REFLOG_ACTION: checkout $switch_to" \
- git checkout "$switch_to" --
+ git checkout -q "$switch_to" --
say "$(eval_gettext "Current branch \$branch_name is up to date.")"
finish_rebase
exit 0
-a|--all)
untracked=all
;;
+ --help)
+ show_help
+ ;;
--)
shift
break
}
show_stash () {
+ ALLOW_UNKNOWN_FLAGS=t
assert_stash_like "$@"
git diff ${FLAGS:---stat} $b_commit $w_commit
}
+show_help () {
+ exec git help stash
+ exit 1
+}
+
#
# Parses the remaining options looking for flags and
# at most one revision defaulting to ${ref_stash}@{0}
#
# GIT_QUIET is set to t if -q is specified
# INDEX_OPTION is set to --index if --index is specified.
-# FLAGS is set to the remaining flags
+# FLAGS is set to the remaining flags (if allowed)
#
# dies if:
# * too many revisions specified
# * no revision is specified and there is no stash stack
# * a revision is specified which cannot be resolved to a SHA1
# * a non-existent stash reference is specified
+# * unknown flags were set and ALLOW_UNKNOWN_FLAGS is not "t"
#
parse_flags_and_rev()
--index)
INDEX_OPTION=--index
;;
+ --help)
+ show_help
+ ;;
-*)
+ test "$ALLOW_UNKNOWN_FLAGS" = t ||
+ die "$(eval_gettext "unknown option: \$opt")"
FLAGS="${FLAGS}${FLAGS:+ }$opt"
;;
esac
assert_stash_like "$@"
git update-index -q --refresh || die "$(gettext "unable to refresh index")"
+ git diff-index --cached --quiet --ignore-submodules HEAD -- ||
+ die "$(gettext "Cannot apply stash: Your index contains uncommitted changes.")"
# current index state
c_tree=$(git write-tree) ||
static const char content_length[] = "Content-Length";
static const char last_modified[] = "Last-Modified";
static int getanyfile = 1;
+static unsigned long max_request_buffer = 10 * 1024 * 1024;
static struct string_list *query_params;
struct rpc_service {
const char *name;
const char *config_name;
+ unsigned buffer_input : 1;
signed enabled : 2;
};
static struct rpc_service rpc_service[] = {
- { "upload-pack", "uploadpack", 1 },
- { "receive-pack", "receivepack", -1 },
+ { "upload-pack", "uploadpack", 1, 1 },
+ { "receive-pack", "receivepack", 0, -1 },
};
static struct string_list *get_parameters(void)
struct strbuf var = STRBUF_INIT;
git_config_get_bool("http.getanyfile", &getanyfile);
+ git_config_get_ulong("http.maxrequestbuffer", &max_request_buffer);
for (i = 0; i < ARRAY_SIZE(rpc_service); i++) {
struct rpc_service *svc = &rpc_service[i];
return svc;
}
-static void inflate_request(const char *prog_name, int out)
+/*
+ * This is basically strbuf_read(), except that if we
+ * hit max_request_buffer we die (we'd rather reject a
+ * maliciously large request than chew up infinite memory).
+ */
+static ssize_t read_request(int fd, unsigned char **out)
+{
+ size_t len = 0, alloc = 8192;
+ unsigned char *buf = xmalloc(alloc);
+
+ if (max_request_buffer < alloc)
+ max_request_buffer = alloc;
+
+ while (1) {
+ ssize_t cnt;
+
+ cnt = read_in_full(fd, buf + len, alloc - len);
+ if (cnt < 0) {
+ free(buf);
+ return -1;
+ }
+
+ /* partial read from read_in_full means we hit EOF */
+ len += cnt;
+ if (len < alloc) {
+ *out = buf;
+ return len;
+ }
+
+ /* otherwise, grow and try again (if we can) */
+ if (alloc == max_request_buffer)
+ die("request was larger than our maximum size (%lu);"
+ " try setting GIT_HTTP_MAX_REQUEST_BUFFER",
+ max_request_buffer);
+
+ alloc = alloc_nr(alloc);
+ if (alloc > max_request_buffer)
+ alloc = max_request_buffer;
+ REALLOC_ARRAY(buf, alloc);
+ }
+}
+
+static void inflate_request(const char *prog_name, int out, int buffer_input)
{
git_zstream stream;
+ unsigned char *full_request = NULL;
unsigned char in_buf[8192];
unsigned char out_buf[8192];
unsigned long cnt = 0;
git_inflate_init_gzip_only(&stream);
while (1) {
- ssize_t n = xread(0, in_buf, sizeof(in_buf));
+ ssize_t n;
+
+ if (buffer_input) {
+ if (full_request)
+ n = 0; /* nothing left to read */
+ else
+ n = read_request(0, &full_request);
+ stream.next_in = full_request;
+ } else {
+ n = xread(0, in_buf, sizeof(in_buf));
+ stream.next_in = in_buf;
+ }
+
if (n <= 0)
die("request ended in the middle of the gzip stream");
-
- stream.next_in = in_buf;
stream.avail_in = n;
while (0 < stream.avail_in) {
done:
git_inflate_end(&stream);
close(out);
+ free(full_request);
+}
+
+static void copy_request(const char *prog_name, int out)
+{
+ unsigned char *buf;
+ ssize_t n = read_request(0, &buf);
+ if (n < 0)
+ die_errno("error reading request body");
+ if (write_in_full(out, buf, n) != n)
+ die("%s aborted reading request", prog_name);
+ close(out);
+ free(buf);
}
-static void run_service(const char **argv)
+static void run_service(const char **argv, int buffer_input)
{
const char *encoding = getenv("HTTP_CONTENT_ENCODING");
const char *user = getenv("REMOTE_USER");
"GIT_COMMITTER_EMAIL=%s@http.%s", user, host);
cld.argv = argv;
- if (gzipped_request)
+ if (buffer_input || gzipped_request)
cld.in = -1;
cld.git_cmd = 1;
if (start_command(&cld))
close(1);
if (gzipped_request)
- inflate_request(argv[0], cld.in);
+ inflate_request(argv[0], cld.in, buffer_input);
+ else if (buffer_input)
+ copy_request(argv[0], cld.in);
else
close(0);
packet_flush(1);
argv[0] = svc->name;
- run_service(argv);
+ run_service(argv, 0);
} else {
select_getanyfile();
end_headers();
argv[0] = svc->name;
- run_service(argv);
+ run_service(argv, svc->buffer_input);
strbuf_release(&buf);
}
+static int dead;
static NORETURN void die_webcgi(const char *err, va_list params)
{
- static int dead;
+ if (dead <= 1) {
+ vreportf("fatal: ", err, params);
- if (!dead) {
- dead = 1;
http_status(500, "Internal Server Error");
hdr_nocache();
end_headers();
-
- vreportf("fatal: ", err, params);
}
exit(0); /* we successfully reported a failure ;-) */
}
+static int die_webcgi_recursing(void)
+{
+ return dead++ > 1;
+}
+
static char* getdir(void)
{
struct strbuf buf = STRBUF_INIT;
git_extract_argv0_path(argv[0]);
set_die_routine(die_webcgi);
+ set_die_is_recursing_routine(die_webcgi_recursing);
if (!method)
die("No REQUEST_METHOD from server");
not_found("Repository not exported: '%s'", dir);
http_config();
+ max_request_buffer = git_env_ulong("GIT_HTTP_MAX_REQUEST_BUFFER",
+ max_request_buffer);
+
cmd->imp(cmd_arg);
return 0;
}
name_part = skip_range_arg(item->string);
if (!name_part || *name_part != ':' || !name_part[1])
- die("-L argument '%s' not of the form start,end:file",
+ die("-L argument not 'start,end:file' or ':funcname:file': %s",
item->string);
range_part = xstrndup(item->string, name_part - item->string);
name_part++;
const char *c_meta = diff_get_color(opt->use_color, DIFF_METAINFO);
const char *c_old = diff_get_color(opt->use_color, DIFF_FILE_OLD);
const char *c_new = diff_get_color(opt->use_color, DIFF_FILE_NEW);
- const char *c_plain = diff_get_color(opt->use_color, DIFF_PLAIN);
+ const char *c_context = diff_get_color(opt->use_color, DIFF_CONTEXT);
if (!pair || !diff)
return;
int k;
for (; t_cur < diff->target.ranges[j].start; t_cur++)
print_line(prefix, ' ', t_cur, t_ends, pair->two->data,
- c_plain, c_reset);
+ c_context, c_reset);
for (k = diff->parent.ranges[j].start; k < diff->parent.ranges[j].end; k++)
print_line(prefix, '-', k, p_ends, pair->one->data,
c_old, c_reset);
}
for (; t_cur < t_end; t_cur++)
print_line(prefix, ' ', t_cur, t_ends, pair->two->data,
- c_plain, c_reset);
+ c_context, c_reset);
}
free(p_ends);
rg->pair = diff_filepair_dup(queue->queue[i]);
memcpy(&rg->diff, pairdiff, sizeof(struct diff_ranges));
}
+ free(pairdiff);
}
return changed;
die("bad tree object");
if (obj->flags & (UNINTERESTING | SEEN))
return;
- if (parse_tree(tree) < 0) {
+ if (parse_tree_gently(tree, revs->ignore_missing_links) < 0) {
if (revs->ignore_missing_links)
return;
die("bad tree object %s", sha1_to_hex(obj->sha1));
#include "line-log.h"
static struct decoration name_decoration = { "object names" };
+static int decoration_loaded;
+static int decoration_flags;
static char decoration_colors[][COLOR_MAXLEN] = {
GIT_COLOR_RESET,
struct object *obj;
enum decoration_type type = DECORATION_NONE;
+ assert(cb_data == NULL);
+
if (starts_with(refname, "refs/replace/")) {
unsigned char original_sha1[20];
if (!check_replace_refs)
else if (!strcmp(refname, "HEAD"))
type = DECORATION_REF_HEAD;
- if (!cb_data || *(int *)cb_data == DECORATE_SHORT_REFS)
- refname = prettify_refname(refname);
add_name_decoration(type, refname, obj);
while (obj->type == OBJ_TAG) {
obj = ((struct tag *)obj)->tagged;
void load_ref_decorations(int flags)
{
- static int loaded;
- if (!loaded) {
- loaded = 1;
- for_each_ref(add_ref_decoration, &flags);
- head_ref(add_ref_decoration, &flags);
+ if (!decoration_loaded) {
+ decoration_loaded = 1;
+ decoration_flags = flags;
+ for_each_ref(add_ref_decoration, NULL);
+ head_ref(add_ref_decoration, NULL);
for_each_commit_graft(add_graft_decoration, NULL);
}
}
branch_name = resolve_ref_unsafe("HEAD", 0, unused, &rru_flags);
if (!(rru_flags & REF_ISSYMREF))
return NULL;
- if (!skip_prefix(branch_name, "refs/heads/", &branch_name))
+
+ if (!starts_with(branch_name, "refs/"))
return NULL;
/* OK, do we have that ref in the list? */
return NULL;
}
+static void show_name(struct strbuf *sb, const struct name_decoration *decoration)
+{
+ if (decoration_flags == DECORATE_SHORT_REFS)
+ strbuf_addstr(sb, prettify_refname(decoration->name));
+ else
+ strbuf_addstr(sb, decoration->name);
+}
+
/*
* The caller makes sure there is no funny color before calling.
* format_decorations_extended makes sure the same after return.
if (decoration->type == DECORATION_REF_TAG)
strbuf_addstr(sb, "tag: ");
- strbuf_addstr(sb, decoration->name);
+ show_name(sb, decoration);
if (current_and_HEAD &&
decoration->type == DECORATION_REF_HEAD) {
strbuf_addstr(sb, " -> ");
strbuf_addstr(sb, color_reset);
strbuf_addstr(sb, decorate_get_color(use_color, current_and_HEAD->type));
- strbuf_addstr(sb, current_and_HEAD->name);
+ show_name(sb, current_and_HEAD);
}
strbuf_addstr(sb, color_reset);
{
struct strbuf newpath = STRBUF_INIT;
int suffix = 0;
- struct stat st;
size_t base_len;
strbuf_addf(&newpath, "%s~", path);
base_len = newpath.len;
while (string_list_has_string(&o->current_file_set, newpath.buf) ||
string_list_has_string(&o->current_directory_set, newpath.buf) ||
- lstat(newpath.buf, &st) == 0) {
+ file_exists(newpath.buf)) {
strbuf_setlen(&newpath, base_len);
strbuf_addf(&newpath, "_%d", suffix++);
}
len = strlen(str);
for (i = 1; i < ARRAY_SIZE(object_type_strings); i++)
- if (!strncmp(str, object_type_strings[i], len))
+ if (!strncmp(str, object_type_strings[i], len) &&
+ object_type_strings[i][len] == '\0')
return i;
if (gentle)
return buffer[(*pos)++];
}
+#define MAX_XOR_OFFSET 160
+
static int load_bitmap_entries_v1(struct bitmap_index *index)
{
- static const size_t MAX_XOR_OFFSET = 160;
-
uint32_t i;
- struct stored_bitmap **recent_bitmaps;
-
- recent_bitmaps = xcalloc(MAX_XOR_OFFSET, sizeof(struct stored_bitmap));
+ struct stored_bitmap *recent_bitmaps[MAX_XOR_OFFSET] = { NULL };
for (i = 0; i < index->entry_count; ++i) {
int xor_offset, flags;
fprintf(stderr, "OK!\n");
else
fprintf(stderr, "Mismatch!\n");
+
+ bitmap_free(result);
}
static int rebuild_bitmap(uint32_t *reposition,
return ret;
}
-void home_config_paths(char **global, char **xdg, char *file)
-{
- char *xdg_home = getenv("XDG_CONFIG_HOME");
- char *home = getenv("HOME");
- char *to_free = NULL;
-
- if (!home) {
- if (global)
- *global = NULL;
- } else {
- if (!xdg_home) {
- to_free = mkpathdup("%s/.config", home);
- xdg_home = to_free;
- }
- if (global)
- *global = mkpathdup("%s/.gitconfig", home);
- }
-
- if (xdg) {
- if (!xdg_home)
- *xdg = NULL;
- else
- *xdg = mkpathdup("%s/git/%s", xdg_home, file);
- }
-
- free(to_free);
-}
-
char *git_path_submodule(const char *path, const char *fmt, ...)
{
char *pathname = get_pathname();
len = -1;
}
}
+
+char *xdg_config_home(const char *filename)
+{
+ const char *home, *config_home;
+
+ assert(filename);
+ config_home = getenv("XDG_CONFIG_HOME");
+ if (config_home && *config_home)
+ return mkpathdup("%s/git/%s", config_home, filename);
+
+ home = getenv("HOME");
+ if (home)
+ return mkpathdup("%s/.config/git/%s", home, filename);
+ return NULL;
+}
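
For reference, a sketch of what the helper resolves to (illustrative, not
text from the patch):

    /*
     * illustrative only:
     *   XDG_CONFIG_HOME=/xdg (any HOME)   ->  "/xdg/git/credentials"
     *   XDG_CONFIG_HOME unset, HOME=/me   ->  "/me/.config/git/credentials"
     *   both unset                        ->  NULL (caller must cope)
     * The result comes from mkpathdup() and must be freed by the caller.
     */
    static void xdg_example(void)
    {
            char *path = xdg_config_home("credentials");
            if (path) {
                    /* ... open or check the file ... */
                    free(path);
            }
    }
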
#: builtin/add.c:358
#, c-format
msgid "Maybe you wanted to say 'git add .'?\n"
-msgstr "Wollten Sie vielleicht 'git add .' sagen?\n"
+msgstr "Meinten Sie vielleicht 'git add .'?\n"
#: builtin/add.c:363 builtin/check-ignore.c:172 builtin/clean.c:920
#: builtin/commit.c:335 builtin/mv.c:130 builtin/reset.c:235 builtin/rm.c:299
#: git-am.sh:142
msgid "Using index info to reconstruct a base tree..."
msgstr ""
-"Verwende Informationen aus der Staging-Area, um einen Basisverzeichnis "
-"nachzustellen"
+"Verwende Informationen aus der Staging-Area, um ein Basisverzeichnis "
+"nachzustellen ..."
#: git-am.sh:157
msgid ""
#: git-am.sh:166
msgid "Falling back to patching base and 3-way merge..."
-msgstr "Falle zurück zum Patchen der Basis und des 3-Wege-Merges ..."
+msgstr "Falle zurück zum Patchen der Basis und zum 3-Wege-Merge ..."
#: git-am.sh:182
msgid "Failed to merge in the changes."
-msgstr "Merge der Änderungen fehlgeschlagen"
+msgstr "Merge der Änderungen fehlgeschlagen."
#: git-am.sh:277
msgid "Only one StGIT patch series can be applied at once"
data.revs = revs;
data.timestamp = timestamp;
- r = for_each_loose_object(add_recent_loose, &data);
+ r = for_each_loose_object(add_recent_loose, &data,
+ FOR_EACH_OBJECT_LOCAL_ONLY);
if (r)
return r;
- return for_each_packed_object(add_recent_packed, &data);
+ return for_each_packed_object(add_recent_packed, &data,
+ FOR_EACH_OBJECT_LOCAL_ONLY);
}
void mark_reachable_objects(struct rev_info *revs, int mark_reflog,
if (mmap_size < sizeof(struct cache_header) + 20)
die("index file smaller than expected");
- mmap = xmmap(NULL, mmap_size, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0);
+ mmap = xmmap(NULL, mmap_size, PROT_READ, MAP_PRIVATE, fd, 0);
if (mmap == MAP_FAILED)
die_errno("unable to map index file");
close(fd);
*/
#define REF_HAVE_OLD 0x10
+/*
+ * Used as a flag in ref_update::flags when the lockfile needs to be
+ * committed.
+ */
+#define REF_NEEDS_COMMIT 0x20
+
/*
* Try to read one refname component from the front of refname.
* Return the length of the component found, or -1 if the component is
* presence of an empty subdirectory does not block the creation of a
* similarly-named reference. (The fact that reference names with the
* same leading components can conflict *with each other* is a
- * separate issue that is regulated by is_refname_available().)
+ * separate issue that is regulated by verify_refname_available().)
*
* Please note that the name field contains the fully-qualified
* reference (or subdirectory) name. Space could be saved by only
}
}
-static int entry_matches(struct ref_entry *entry, const struct string_list *list)
-{
- return list && string_list_has_string(list, entry->name);
-}
-
struct nonmatching_ref_data {
const struct string_list *skip;
- struct ref_entry *found;
+ const char *conflicting_refname;
};
static int nonmatching_ref_fn(struct ref_entry *entry, void *vdata)
{
struct nonmatching_ref_data *data = vdata;
- if (entry_matches(entry, data->skip))
+ if (data->skip && string_list_has_string(data->skip, entry->name))
return 0;
- data->found = entry;
+ data->conflicting_refname = entry->name;
return 1;
}
-static void report_refname_conflict(struct ref_entry *entry,
- const char *refname)
-{
- error("'%s' exists; cannot create '%s'", entry->name, refname);
-}
-
/*
- * Return true iff a reference named refname could be created without
- * conflicting with the name of an existing reference in dir. If
- * skip is non-NULL, ignore potential conflicts with refs in skip
- * (e.g., because they are scheduled for deletion in the same
- * operation).
+ * Return 0 if a reference named refname could be created without
+ * conflicting with the name of an existing reference in dir.
+ * Otherwise, return a negative value and write an explanation to err.
+ * If extras is non-NULL, it is a list of additional refnames with
+ * which refname is not allowed to conflict. If skip is non-NULL,
+ * ignore potential conflicts with refs in skip (e.g., because they
+ * are scheduled for deletion in the same operation). Behavior is
+ * undefined if the same name is listed in both extras and skip.
*
* Two reference names conflict if one of them exactly matches the
- * leading components of the other; e.g., "foo/bar" conflicts with
- * both "foo" and with "foo/bar/baz" but not with "foo/bar" or
- * "foo/barbados".
+ * leading components of the other; e.g., "refs/foo/bar" conflicts
+ * with both "refs/foo" and with "refs/foo/bar/baz" but not with
+ * "refs/foo/bar" or "refs/foo/barbados".
*
- * skip must be sorted.
+ * extras and skip must be sorted.
*/
-static int is_refname_available(const char *refname,
- const struct string_list *skip,
- struct ref_dir *dir)
+static int verify_refname_available(const char *refname,
+ const struct string_list *extras,
+ const struct string_list *skip,
+ struct ref_dir *dir,
+ struct strbuf *err)
{
const char *slash;
- size_t len;
int pos;
- char *dirname;
+ struct strbuf dirname = STRBUF_INIT;
+ int ret = -1;
+
+ /*
+ * For the sake of comments in this function, suppose that
+ * refname is "refs/foo/bar".
+ */
+
+ assert(err);
+ strbuf_grow(&dirname, strlen(refname) + 1);
for (slash = strchr(refname, '/'); slash; slash = strchr(slash + 1, '/')) {
+ /* Expand dirname to the new prefix, not including the trailing slash: */
+ strbuf_add(&dirname, refname + dirname.len, slash - refname - dirname.len);
+
/*
- * We are still at a leading dir of the refname; we are
- * looking for a conflict with a leaf entry.
- *
- * If we find one, we still must make sure it is
- * not in "skip".
+ * We are still at a leading dir of the refname (e.g.,
+ * "refs/foo"; if there is a reference with that name,
+ * it is a conflict, *unless* it is in skip.
*/
- pos = search_ref_dir(dir, refname, slash - refname);
- if (pos >= 0) {
- struct ref_entry *entry = dir->entries[pos];
- if (entry_matches(entry, skip))
- return 1;
- report_refname_conflict(entry, refname);
- return 0;
+ if (dir) {
+ pos = search_ref_dir(dir, dirname.buf, dirname.len);
+ if (pos >= 0 &&
+ (!skip || !string_list_has_string(skip, dirname.buf))) {
+ /*
+ * We found a reference whose name is
+ * a proper prefix of refname; e.g.,
+ * "refs/foo", and is not in skip.
+ */
+ strbuf_addf(err, "'%s' exists; cannot create '%s'",
+ dirname.buf, refname);
+ goto cleanup;
+ }
}
+ if (extras && string_list_has_string(extras, dirname.buf) &&
+ (!skip || !string_list_has_string(skip, dirname.buf))) {
+ strbuf_addf(err, "cannot process '%s' and '%s' at the same time",
+ refname, dirname.buf);
+ goto cleanup;
+ }
/*
* Otherwise, we can try to continue our search with
- * the next component; if we come up empty, we know
- * there is nothing under this whole prefix.
+ * the next component. So try to look up the
+ * directory, e.g., "refs/foo/". If we come up empty,
+ * we know there is nothing under this whole prefix,
+ * but even in that case we still have to continue the
+ * search for conflicts with extras.
*/
- pos = search_ref_dir(dir, refname, slash + 1 - refname);
- if (pos < 0)
- return 1;
-
- dir = get_ref_dir(dir->entries[pos]);
+ strbuf_addch(&dirname, '/');
+ if (dir) {
+ pos = search_ref_dir(dir, dirname.buf, dirname.len);
+ if (pos < 0) {
+ /*
+ * There was no directory "refs/foo/",
+ * so there is nothing under this
+ * whole prefix. So there is no need
+ * to continue looking for conflicting
+ * references. But we need to continue
+ * looking for conflicting extras.
+ */
+ dir = NULL;
+ } else {
+ dir = get_ref_dir(dir->entries[pos]);
+ }
+ }
}
/*
- * We are at the leaf of our refname; we want to
- * make sure there are no directories which match it.
+ * We are at the leaf of our refname (e.g., "refs/foo/bar").
+ * There is no point in searching for a reference with that
+ * name, because a refname isn't considered to conflict with
+ * itself. But we still need to check for references whose
+ * names are in the "refs/foo/bar/" namespace, because they
+ * *do* conflict.
*/
- len = strlen(refname);
- dirname = xmallocz(len + 1);
- sprintf(dirname, "%s/", refname);
- pos = search_ref_dir(dir, dirname, len + 1);
- free(dirname);
+ strbuf_addstr(&dirname, refname + dirname.len);
+ strbuf_addch(&dirname, '/');
+
+ if (dir) {
+ pos = search_ref_dir(dir, dirname.buf, dirname.len);
- if (pos >= 0) {
+ if (pos >= 0) {
+ /*
+ * We found a directory named "$refname/"
+ * (e.g., "refs/foo/bar/"). It is a problem
+ * iff it contains any ref that is not in
+ * "skip".
+ */
+ struct nonmatching_ref_data data;
+
+ data.skip = skip;
+ data.conflicting_refname = NULL;
+ dir = get_ref_dir(dir->entries[pos]);
+ sort_ref_dir(dir);
+ if (do_for_each_entry_in_dir(dir, 0, nonmatching_ref_fn, &data)) {
+ strbuf_addf(err, "'%s' exists; cannot create '%s'",
+ data.conflicting_refname, refname);
+ goto cleanup;
+ }
+ }
+ }
+
+ if (extras) {
/*
- * We found a directory named "refname". It is a
- * problem iff it contains any ref that is not
- * in "skip".
+ * Check for entries in extras that start with
+ * "$refname/". We do that by looking for the place
+ * where "$refname/" would be inserted in extras. If
+ * there is an entry at that position that starts with
+ * "$refname/" and is not in skip, then we have a
+ * conflict.
*/
- struct ref_entry *entry = dir->entries[pos];
- struct ref_dir *dir = get_ref_dir(entry);
- struct nonmatching_ref_data data;
+ for (pos = string_list_find_insert_index(extras, dirname.buf, 0);
+ pos < extras->nr; pos++) {
+ const char *extra_refname = extras->items[pos].string;
- data.skip = skip;
- sort_ref_dir(dir);
- if (!do_for_each_entry_in_dir(dir, 0, nonmatching_ref_fn, &data))
- return 1;
+ if (!starts_with(extra_refname, dirname.buf))
+ break;
- report_refname_conflict(data.found, refname);
- return 0;
+ if (!skip || !string_list_has_string(skip, extra_refname)) {
+ strbuf_addf(err, "cannot process '%s' and '%s' at the same time",
+ refname, extra_refname);
+ goto cleanup;
+ }
+ }
}
- /*
- * There is no point in searching for another leaf
- * node which matches it; such an entry would be the
- * ref we are looking for, not a conflict.
- */
- return 1;
+ /* No conflicts were found */
+ ret = 0;
+
+cleanup:
+ strbuf_release(&dirname);
+ return ret;
}
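
A sketch of the new calling convention (illustrative; the real callers are
in the lock_ref_sha1_basic() and rename_ref_available() hunks below): a zero
return means the name is free, a negative return leaves the explanation in
err for the caller to report:

    /* illustrative only */
    static int refname_is_free(const char *refname)
    {
            struct strbuf err = STRBUF_INIT;
            int ok = !verify_refname_available(refname, NULL, NULL,
                                               get_packed_refs(&ref_cache), &err) &&
                     !verify_refname_available(refname, NULL, NULL,
                                               get_loose_refs(&ref_cache), &err);

            if (!ok)
                    error("%s", err.buf);  /* e.g. "'refs/foo' exists; cannot create ..." */
            strbuf_release(&err);
            return ok;
    }
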
struct packed_ref_cache {
*/
static struct ref_lock *lock_ref_sha1_basic(const char *refname,
const unsigned char *old_sha1,
+ const struct string_list *extras,
const struct string_list *skip,
- unsigned int flags, int *type_p)
+ unsigned int flags, int *type_p,
+ struct strbuf *err)
{
char *ref_file;
const char *orig_refname = refname;
int resolve_flags = 0;
int attempts_remaining = 3;
+ assert(err);
+
lock = xcalloc(1, sizeof(struct ref_lock));
lock->lock_fd = -1;
ref_file = git_path("%s", orig_refname);
if (remove_empty_directories(ref_file)) {
last_errno = errno;
- error("there are still refs under '%s'", orig_refname);
+
+ if (!verify_refname_available(orig_refname, extras, skip,
+ get_loose_refs(&ref_cache), err))
+ strbuf_addf(err, "there are still refs under '%s'",
+ orig_refname);
+
goto error_return;
}
refname = resolve_ref_unsafe(orig_refname, resolve_flags,
*type_p = type;
if (!refname) {
last_errno = errno;
- error("unable to resolve reference %s: %s",
- orig_refname, strerror(errno));
+ if (last_errno != ENOTDIR ||
+ !verify_refname_available(orig_refname, extras, skip,
+ get_loose_refs(&ref_cache), err))
+ strbuf_addf(err, "unable to resolve reference %s: %s",
+ orig_refname, strerror(last_errno));
+
goto error_return;
}
/*
* our refname.
*/
if (is_null_sha1(lock->old_sha1) &&
- !is_refname_available(refname, skip, get_packed_refs(&ref_cache))) {
+ verify_refname_available(refname, extras, skip,
+ get_packed_refs(&ref_cache), err)) {
last_errno = ENOTDIR;
goto error_return;
}
/* fall through */
default:
last_errno = errno;
- error("unable to create directory for %s", ref_file);
+ strbuf_addf(err, "unable to create directory for %s", ref_file);
goto error_return;
}
*/
goto retry;
else {
- struct strbuf err = STRBUF_INIT;
- unable_to_lock_message(ref_file, errno, &err);
- error("%s", err.buf);
- strbuf_release(&err);
+ unable_to_lock_message(ref_file, errno, err);
goto error_return;
}
}
static int rename_ref_available(const char *oldname, const char *newname)
{
struct string_list skip = STRING_LIST_INIT_NODUP;
+ struct strbuf err = STRBUF_INIT;
int ret;
string_list_insert(&skip, oldname);
- ret = is_refname_available(newname, &skip, get_packed_refs(&ref_cache))
- && is_refname_available(newname, &skip, get_loose_refs(&ref_cache));
+ ret = !verify_refname_available(newname, NULL, &skip,
+ get_packed_refs(&ref_cache), &err)
+ && !verify_refname_available(newname, NULL, &skip,
+ get_loose_refs(&ref_cache), &err);
+ if (!ret)
+ error("%s", err.buf);
+
string_list_clear(&skip, 0);
+ strbuf_release(&err);
return ret;
}
-static int write_ref_sha1(struct ref_lock *lock, const unsigned char *sha1,
- const char *logmsg);
+static int write_ref_to_lockfile(struct ref_lock *lock, const unsigned char *sha1);
+static int commit_ref_update(struct ref_lock *lock,
+ const unsigned char *sha1, const char *logmsg);
int rename_ref(const char *oldrefname, const char *newrefname, const char *logmsg)
{
struct stat loginfo;
int log = !lstat(git_path("logs/%s", oldrefname), &loginfo);
const char *symref = NULL;
+ struct strbuf err = STRBUF_INIT;
if (log && S_ISLNK(loginfo.st_mode))
return error("reflog for %s is a symlink", oldrefname);
logmoved = log;
- lock = lock_ref_sha1_basic(newrefname, NULL, NULL, 0, NULL);
+ lock = lock_ref_sha1_basic(newrefname, NULL, NULL, NULL, 0, NULL, &err);
if (!lock) {
- error("unable to lock %s for update", newrefname);
+ error("unable to rename '%s' to '%s': %s", oldrefname, newrefname, err.buf);
+ strbuf_release(&err);
goto rollback;
}
hashcpy(lock->old_sha1, orig_sha1);
- if (write_ref_sha1(lock, orig_sha1, logmsg)) {
+
+ if (write_ref_to_lockfile(lock, orig_sha1) ||
+ commit_ref_update(lock, orig_sha1, logmsg)) {
error("unable to write current sha1 into %s", newrefname);
goto rollback;
}
return 0;
rollback:
- lock = lock_ref_sha1_basic(oldrefname, NULL, NULL, 0, NULL);
+ lock = lock_ref_sha1_basic(oldrefname, NULL, NULL, NULL, 0, NULL, &err);
if (!lock) {
- error("unable to lock %s for rollback", oldrefname);
+ error("unable to lock %s for rollback: %s", oldrefname, err.buf);
+ strbuf_release(&err);
goto rollbacklog;
}
flag = log_all_ref_updates;
log_all_ref_updates = 0;
- if (write_ref_sha1(lock, orig_sha1, NULL))
+ if (write_ref_to_lockfile(lock, orig_sha1) ||
+ commit_ref_update(lock, orig_sha1, NULL))
error("unable to write current sha1 into %s", oldrefname);
log_all_ref_updates = flag;
}
/*
- * Write sha1 into the ref specified by the lock. Make sure that errno
- * is sane on error.
+ * Write sha1 into the open lockfile, then close the lockfile. On
+ * errors, roll back the lockfile and set errno to reflect the problem.
*/
-static int write_ref_sha1(struct ref_lock *lock,
- const unsigned char *sha1, const char *logmsg)
+static int write_ref_to_lockfile(struct ref_lock *lock,
+ const unsigned char *sha1)
{
static char term = '\n';
struct object *o;
errno = save_errno;
return -1;
}
+ return 0;
+}
+
+/*
+ * Commit a change to a loose reference that has already been written
+ * to the loose reference lockfile. Also update the reflogs if
+ * necessary, using the specified logmsg (which can be NULL).
+ */
+static int commit_ref_update(struct ref_lock *lock,
+ const unsigned char *sha1, const char *logmsg)
+{
clear_loose_ref_cache(&ref_cache);
if (log_ref_write(lock->ref_name, lock->old_sha1, sha1, logmsg) < 0 ||
(strcmp(lock->ref_name, lock->orig_ref_name) &&
return 0;
}
-static int ref_update_compare(const void *r1, const void *r2)
-{
- const struct ref_update * const *u1 = r1;
- const struct ref_update * const *u2 = r2;
- return strcmp((*u1)->refname, (*u2)->refname);
-}
-
-static int ref_update_reject_duplicates(struct ref_update **updates, int n,
+static int ref_update_reject_duplicates(struct string_list *refnames,
struct strbuf *err)
{
- int i;
+ int i, n = refnames->nr;
assert(err);
for (i = 1; i < n; i++)
- if (!strcmp(updates[i - 1]->refname, updates[i]->refname)) {
+ if (!strcmp(refnames->items[i - 1].string, refnames->items[i].string)) {
strbuf_addf(err,
"Multiple updates for ref '%s' not allowed.",
- updates[i]->refname);
+ refnames->items[i].string);
return 1;
}
return 0;
struct ref_update **updates = transaction->updates;
struct string_list refs_to_delete = STRING_LIST_INIT_NODUP;
struct string_list_item *ref_to_delete;
+ struct string_list affected_refnames = STRING_LIST_INIT_NODUP;
assert(err);
return 0;
}
- /* Copy, sort, and reject duplicate refs */
- qsort(updates, n, sizeof(*updates), ref_update_compare);
- if (ref_update_reject_duplicates(updates, n, err)) {
+ /* Fail if a refname appears more than once in the transaction: */
+ for (i = 0; i < n; i++)
+ string_list_append(&affected_refnames, updates[i]->refname);
+ string_list_sort(&affected_refnames);
+ if (ref_update_reject_duplicates(&affected_refnames, err)) {
ret = TRANSACTION_GENERIC_ERROR;
goto cleanup;
}
- /* Acquire all locks while verifying old values */
+ /*
+ * Acquire all locks, verify old values if provided, check
+ * that new values are valid, and write new values to the
+ * lockfiles, ready to be activated. Only keep one lockfile
+ * open at a time to avoid running out of file descriptors.
+ */
for (i = 0; i < n; i++) {
struct ref_update *update = updates[i];
- unsigned int flags = update->flags;
- if ((flags & REF_HAVE_NEW) && is_null_sha1(update->new_sha1))
- flags |= REF_DELETING;
+ if ((update->flags & REF_HAVE_NEW) &&
+ is_null_sha1(update->new_sha1))
+ update->flags |= REF_DELETING;
update->lock = lock_ref_sha1_basic(
update->refname,
((update->flags & REF_HAVE_OLD) ?
update->old_sha1 : NULL),
- NULL,
- flags,
- &update->type);
+ &affected_refnames, NULL,
+ update->flags,
+ &update->type,
+ err);
if (!update->lock) {
+ char *reason;
+
ret = (errno == ENOTDIR)
? TRANSACTION_NAME_CONFLICT
: TRANSACTION_GENERIC_ERROR;
- strbuf_addf(err, "Cannot lock the ref '%s'.",
- update->refname);
+ reason = strbuf_detach(err, NULL);
+ strbuf_addf(err, "Cannot lock ref '%s': %s",
+ update->refname, reason);
+ free(reason);
goto cleanup;
}
- }
-
- /* Perform updates first so live commits remain referenced */
- for (i = 0; i < n; i++) {
- struct ref_update *update = updates[i];
- int flags = update->flags;
-
- if ((flags & REF_HAVE_NEW) && !is_null_sha1(update->new_sha1)) {
+ if ((update->flags & REF_HAVE_NEW) &&
+ !(update->flags & REF_DELETING)) {
int overwriting_symref = ((update->type & REF_ISSYMREF) &&
(update->flags & REF_NODEREF));
- if (!overwriting_symref
- && !hashcmp(update->lock->old_sha1, update->new_sha1)) {
+ if (!overwriting_symref &&
+ !hashcmp(update->lock->old_sha1, update->new_sha1)) {
/*
* The reference already has the desired
* value, so we don't need to write it.
*/
- unlock_ref(update->lock);
+ } else if (write_ref_to_lockfile(update->lock,
+ update->new_sha1)) {
+ /*
+ * The lock was freed upon failure of
+ * write_ref_to_lockfile():
+ */
update->lock = NULL;
- } else if (write_ref_sha1(update->lock, update->new_sha1,
- update->msg)) {
- update->lock = NULL; /* freed by write_ref_sha1 */
strbuf_addf(err, "Cannot update the ref '%s'.",
update->refname);
ret = TRANSACTION_GENERIC_ERROR;
goto cleanup;
} else {
- /* freed by write_ref_sha1(): */
+ update->flags |= REF_NEEDS_COMMIT;
+ }
+ }
+ if (!(update->flags & REF_NEEDS_COMMIT)) {
+ /*
+ * We didn't have to write anything to the lockfile.
+ * Close it to free up the file descriptor:
+ */
+ if (close_ref(update->lock)) {
+ strbuf_addf(err, "Couldn't close %s.lock",
+ update->refname);
+ goto cleanup;
+ }
+ }
+ }
+
+ /* Perform updates first so live commits remain referenced */
+ for (i = 0; i < n; i++) {
+ struct ref_update *update = updates[i];
+
+ if (update->flags & REF_NEEDS_COMMIT) {
+ if (commit_ref_update(update->lock,
+ update->new_sha1, update->msg)) {
+ /* freed by commit_ref_update(): */
+ update->lock = NULL;
+ strbuf_addf(err, "Cannot update the ref '%s'.",
+ update->refname);
+ ret = TRANSACTION_GENERIC_ERROR;
+ goto cleanup;
+ } else {
+ /* freed by commit_ref_update(): */
update->lock = NULL;
}
}
/* Perform deletes now that updates are safely completed */
for (i = 0; i < n; i++) {
struct ref_update *update = updates[i];
- int flags = update->flags;
- if ((flags & REF_HAVE_NEW) && is_null_sha1(update->new_sha1)) {
+ if (update->flags & REF_DELETING) {
if (delete_ref_loose(update->lock, update->type, err)) {
ret = TRANSACTION_GENERIC_ERROR;
goto cleanup;
}
- if (!(flags & REF_ISPRUNING))
+ if (!(update->flags & REF_ISPRUNING))
string_list_append(&refs_to_delete,
update->lock->ref_name);
}
if (updates[i]->lock)
unlock_ref(updates[i]->lock);
string_list_clear(&refs_to_delete, 0);
+ string_list_clear(&affected_refnames, 0);
return ret;
}
char *log_file;
int status = 0;
int type;
+ struct strbuf err = STRBUF_INIT;
memset(&cb, 0, sizeof(cb));
cb.flags = flags;
* reference itself, plus we might need to update the
* reference if --updateref was specified:
*/
- lock = lock_ref_sha1_basic(refname, sha1, NULL, 0, &type);
- if (!lock)
- return error("cannot lock ref '%s'", refname);
+ lock = lock_ref_sha1_basic(refname, sha1, NULL, NULL, 0, &type, &err);
+ if (!lock) {
+ error("cannot lock ref '%s': %s", refname, err.buf);
+ strbuf_release(&err);
+ return -1;
+ }
if (!reflog_exists(refname)) {
unlock_ref(lock);
return 0;
return error("Could not read index");
fd = setup_rerere(&merge_rr, RERERE_NOAUTOUPDATE);
+ if (fd < 0)
+ return 0;
unmerge_cache(pathspec);
find_conflict(&conflict);
die("%s is unknown object", name);
}
-static int everybody_uninteresting(struct commit_list *orig)
+static int everybody_uninteresting(struct commit_list *orig,
+ struct commit **interesting_cache)
{
struct commit_list *list = orig;
+
+ if (*interesting_cache) {
+ struct commit *commit = *interesting_cache;
+ if (!(commit->object.flags & UNINTERESTING))
+ return 0;
+ }
+
while (list) {
struct commit *commit = list->item;
list = list->next;
if (commit->object.flags & UNINTERESTING)
continue;
+ if (interesting_cache)
+ *interesting_cache = commit;
return 0;
}
return 1;
parent = parent->next;
if (p)
p->object.flags |= UNINTERESTING;
- if (parse_commit(p) < 0)
+ if (parse_commit_gently(p, 1) < 0)
continue;
if (p->parents)
mark_parents_uninteresting(p);
for (parent = commit->parents; parent; parent = parent->next) {
struct commit *p = parent->item;
- if (parse_commit(p) < 0)
+ if (parse_commit_gently(p, revs->ignore_missing_links) < 0)
return -1;
if (revs->show_source && !p->util)
p->util = commit->util;
/* How many extra uninteresting commits we want to see.. */
#define SLOP 5
-static int still_interesting(struct commit_list *src, unsigned long date, int slop)
+static int still_interesting(struct commit_list *src, unsigned long date, int slop,
+ struct commit **interesting_cache)
{
/*
* No source list at all? We're definitely done..
* Does the source list still have interesting commits in
* it? Definitely not done..
*/
- if (!everybody_uninteresting(src))
+ if (!everybody_uninteresting(src, interesting_cache))
return SLOP;
/* Ok, we're closing in.. */
struct commit_list *newlist = NULL;
struct commit_list **p = &newlist;
struct commit_list *bottom = NULL;
+ struct commit *interesting_cache = NULL;
if (revs->ancestry_path) {
bottom = collect_bottom_commits(list);
list = list->next;
free(entry);
+ if (commit == interesting_cache)
+ interesting_cache = NULL;
+
if (revs->max_age != -1 && (commit->date < revs->max_age))
obj->flags |= UNINTERESTING;
if (add_parents_to_list(revs, commit, &list, NULL) < 0)
mark_parents_uninteresting(commit);
if (revs->show_all)
p = &commit_list_insert(commit, p)->next;
- slop = still_interesting(list, date, slop);
+ slop = still_interesting(list, date, slop, &interesting_cache);
if (slop)
continue;
/* If showing all, add the whole pending list to the end */
(uintmax_t)length, (uintmax_t)limit);
}
-void *xmmap(void *start, size_t length,
- int prot, int flags, int fd, off_t offset)
+void *xmmap_gently(void *start, size_t length,
+ int prot, int flags, int fd, off_t offset)
{
void *ret;
return NULL;
release_pack_memory(length);
ret = mmap(start, length, prot, flags, fd, offset);
- if (ret == MAP_FAILED)
- die_errno("Out of memory? mmap failed");
}
return ret;
}
+void *xmmap(void *start, size_t length,
+ int prot, int flags, int fd, off_t offset)
+{
+ void *ret = xmmap_gently(start, length, prot, flags, fd, offset);
+ if (ret == MAP_FAILED)
+ die_errno("mmap failed");
+ return ret;
+}
+
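
Note: xmmap() above is split into xmmap_gently(), which hands MAP_FAILED back to the caller, and a thin wrapper that keeps the historical die-on-failure behaviour. The same wrapper pattern in a self-contained form (simplified; Git's version also tries to release pack memory before giving up):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/mman.h>

    /* Gentle variant: return MAP_FAILED and leave the decision to the caller. */
    static void *map_gently(size_t length)
    {
        /* MAP_ANONYMOUS is a common extension on Linux and the BSDs. */
        return mmap(NULL, length, PROT_READ | PROT_WRITE,
                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    }

    /* Strict wrapper: callers that cannot recover keep the old behaviour. */
    static void *map_or_die(size_t length)
    {
        void *ret = map_gently(length);
        if (ret == MAP_FAILED) {
            perror("mmap failed");
            exit(1);
        }
        return ret;
    }

    int main(void)
    {
        char *p = map_or_die(4096);
        strcpy(p, "mapped");
        puts(p);
        return munmap(p, 4096);
    }
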
void close_pack_windows(struct packed_git *p)
{
while (p->windows) {
* answer, as it may have been deleted since the index was
* loaded!
*/
- if (!is_pack_valid(p)) {
- warning("packfile %s cannot be accessed", p->pack_name);
+ if (!is_pack_valid(p))
return 0;
- }
e->offset = offset;
e->p = p;
hashcpy(e->sha1, sha1);
static int freshen_packed_object(const unsigned char *sha1)
{
struct pack_entry e;
- return find_pack_entry(sha1, &e) && freshen_file(e.p->pack_name);
+ if (!find_pack_entry(sha1, &e))
+ return 0;
+ if (e.p->freshened)
+ return 1;
+ if (!freshen_file(e.p->pack_name))
+ return 0;
+ e.p->freshened = 1;
+ return 1;
}
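
Note: the new freshened bit means a packfile's mtime is bumped at most once per process, instead of touching the same file every time an already-stored object is "written" again. Roughly (hypothetical struct, not Git's packed_git):

    #include <stdio.h>
    #include <utime.h>

    struct packfile {
        const char *path;
        unsigned freshened : 1;   /* mtime already bumped this run */
    };

    /* Bump the pack's mtime at most once per process. */
    static int freshen_pack(struct packfile *p)
    {
        if (p->freshened)
            return 1;
        if (utime(p->path, NULL) < 0)   /* set mtime to "now" */
            return 0;
        p->freshened = 1;
        return 1;
    }

    int main(void)
    {
        FILE *f = fopen("demo.pack", "w");   /* stand-in packfile */
        if (f)
            fclose(f);
        struct packfile p = { "demo.pack", 0 };
        if (!freshen_pack(&p))
            perror("utime");
        printf("freshened: %u\n", p.freshened);
        return 0;
    }
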
-int write_sha1_file(const void *buf, unsigned long len, const char *type, unsigned char *returnsha1)
+int write_sha1_file(const void *buf, unsigned long len, const char *type, unsigned char *sha1)
{
- unsigned char sha1[20];
char hdr[32];
int hdrlen;
* it out into .git/objects/??/?{38} file.
*/
write_sha1_file_prepare(buf, len, type, sha1, hdr, &hdrlen);
- if (returnsha1)
- hashcpy(returnsha1, sha1);
- if (freshen_loose_object(sha1) || freshen_packed_object(sha1))
+ if (freshen_packed_object(sha1) || freshen_loose_object(sha1))
return 0;
return write_loose_object(sha1, hdr, hdrlen, buf, len, 0);
}
+int hash_sha1_file_literally(const void *buf, unsigned long len, const char *type,
+ unsigned char *sha1, unsigned flags)
+{
+ char *header;
+ int hdrlen, status = 0;
+
+ /* type string, SP, %lu of the length plus NUL must fit this */
+ header = xmalloc(strlen(type) + 32);
+ write_sha1_file_prepare(buf, len, type, sha1, header, &hdrlen);
+
+ if (!(flags & HASH_WRITE_OBJECT))
+ goto cleanup;
+ if (freshen_packed_object(sha1) || freshen_loose_object(sha1))
+ goto cleanup;
+ status = write_loose_object(sha1, header, hdrlen, buf, len, 0);
+
+cleanup:
+ free(header);
+ return status;
+}
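
Note: hash_sha1_file_literally() sizes its header buffer as strlen(type) + 32 because the header is just "<type> <decimal length>" followed by a NUL; the extra 32 bytes comfortably cover the space, the %lu digits and the terminator even for a huge length. A standalone illustration of building such a header (not Git's internal write_sha1_file_prepare()):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Build the "<type> <len>" header (NUL included in the returned
     * length), mirroring the sizing comment in the hunk above. */
    static char *object_header(const char *type, unsigned long len, int *hdrlen)
    {
        size_t bufsize = strlen(type) + 32;   /* type, SP, %lu digits, NUL */
        char *hdr = malloc(bufsize);
        if (!hdr)
            return NULL;
        *hdrlen = snprintf(hdr, bufsize, "%s %lu", type, len) + 1;
        return hdr;
    }

    int main(void)
    {
        int hdrlen;
        char *hdr = object_header("commit", 239UL, &hdrlen);
        if (!hdr)
            return 1;
        printf("header '%s', %d bytes\n", hdr, hdrlen);
        free(hdr);
        return 0;
    }
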
+
int force_object_loose(const unsigned char *sha1, time_t mtime)
{
void *buf;
int ret;
if (!size) {
- ret = index_mem(sha1, NULL, size, type, path, flags);
+ ret = index_mem(sha1, "", size, type, path, flags);
} else if (size <= SMALL_FILE_SIZE) {
char *buf = xmalloc(size);
if (size == read_in_full(fd, buf, size))
return r;
}
-int for_each_loose_object(each_loose_object_fn cb, void *data)
+int for_each_loose_object(each_loose_object_fn cb, void *data, unsigned flags)
{
struct loose_alt_odb_data alt;
int r;
if (r)
return r;
+ if (flags & FOR_EACH_OBJECT_LOCAL_ONLY)
+ return 0;
+
alt.cb = cb;
alt.data = data;
return foreach_alt_odb(loose_from_alt_odb, &alt);
return r;
}
-int for_each_packed_object(each_packed_object_fn cb, void *data)
+int for_each_packed_object(each_packed_object_fn cb, void *data, unsigned flags)
{
struct packed_git *p;
int r = 0;
prepare_packed_git();
for (p = packed_git; p; p = p->next) {
+ if ((flags & FOR_EACH_OBJECT_LOCAL_ONLY) && !p->pack_local)
+ continue;
r = for_each_object_in_pack(p, cb, data);
if (r)
break;
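
Note: the loose- and packed-object iterators above grow a flags argument so callers such as prune can restrict the walk to local objects and skip packs borrowed through alternates. A self-contained sketch of that kind of flag-filtered iteration (hypothetical names and flag value):

    #include <stdio.h>

    #define FOR_EACH_LOCAL_ONLY 1    /* illustrative flag */

    struct pack {
        struct pack *next;
        const char *name;
        int local;                   /* 0 when the pack lives in an alternate */
    };

    typedef int (*pack_fn)(struct pack *, void *);

    /* Walk the list, honouring the flag; a non-zero return stops the walk. */
    static int for_each_pack(struct pack *list, pack_fn cb, void *data,
                             unsigned flags)
    {
        int r = 0;
        for (; list; list = list->next) {
            if ((flags & FOR_EACH_LOCAL_ONLY) && !list->local)
                continue;
            r = cb(list, data);
            if (r)
                break;
        }
        return r;
    }

    static int show(struct pack *p, void *data)
    {
        printf("%s\n", p->name);
        return 0;
    }

    int main(void)
    {
        struct pack borrowed = { NULL, "alternate.pack", 0 };
        struct pack local = { &borrowed, "local.pack", 1 };
        return for_each_pack(&local, show, NULL, FOR_EACH_LOCAL_ONLY);
    }
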
#include "tree-walk.h"
#include "refs.h"
#include "remote.h"
+#include "dir.h"
static int get_sha1_oneline(const char *, unsigned char *, struct commit_list *);
const char *object_name,
int object_name_len)
{
- struct stat st;
unsigned char sha1[20];
unsigned mode;
if (!prefix)
prefix = "";
- if (!lstat(filename, &st))
+ if (file_exists(filename))
die("Path '%s' exists on disk, but not in '%.*s'.",
filename, object_name_len, object_name);
if (errno == ENOENT || errno == ENOTDIR) {
const char *prefix,
const char *filename)
{
- struct stat st;
const struct cache_entry *ce;
int pos;
unsigned namelen = strlen(filename);
ce_stage(ce), filename);
}
- if (!lstat(filename, &st))
+ if (file_exists(filename))
die("Path '%s' exists on disk, but not in the index.", filename);
if (errno == ENOENT || errno == ENOTDIR)
die("Path '%s' does not exist (neither on disk nor in the index).",
int ok_to_remove_submodule(const char *path)
{
- struct stat st;
ssize_t len;
struct child_process cp = CHILD_PROCESS_INIT;
const char *argv[] = {
struct strbuf buf = STRBUF_INIT;
int ok_to_remove = 1;
- if ((lstat(path, &st) < 0) || is_empty_dir(path))
+ if (!file_exists(path) || is_empty_dir(path))
return 1;
if (!submodule_uses_gitfile(path))
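
Note: the hunks above drop a throwaway struct stat and call file_exists() instead; the callers can still look at errno afterwards because a failed lstat() leaves it set to ENOENT or ENOTDIR. A standalone equivalent of such a helper (illustrative; the real one lives in dir.c):

    #include <stdio.h>
    #include <sys/stat.h>

    /* True when the path names anything at all on disk, including a
     * dangling symlink; lstat() is used so symlinks are not followed. */
    static int path_exists(const char *path)
    {
        struct stat st;
        return !lstat(path, &st);
    }

    int main(int argc, char **argv)
    {
        const char *path = argc > 1 ? argv[1] : ".";
        printf("%s: %s\n", path, path_exists(path) ? "exists" : "missing");
        return 0;
    }
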
# Copyright (c) 2008 Clemens Buchacher <drizzd@aon.at>
#
+if test -n "$NO_CURL"
+then
+ skip_all='skipping test, git built without http support'
+ test_done
+fi
+
+if test -n "$NO_EXPAT" && test -n "$LIB_HTTPD_DAV"
+then
+ skip_all='skipping test, git built without expat support'
+ test_done
+fi
+
test_tristate GIT_TEST_HTTPD
if test "$GIT_TEST_HTTPD" = false
then
test_cmp err.expect err
'
+test_expect_success 'info/exclude trumps core.excludesfile' '
+ echo >>global-excludes usually-ignored &&
+ echo >>.git/info/exclude "!usually-ignored" &&
+ >usually-ignored &&
+ echo "?? usually-ignored" >expect &&
+
+ git status --porcelain usually-ignored >actual &&
+ test_cmp expect actual
+'
+
test_done
! test -s err
'
+test_expect_success "filter: clean empty file" '
+ git config filter.in-repo-header.clean "echo cleaned && cat" &&
+ git config filter.in-repo-header.smudge "sed 1d" &&
+
+ echo "empty-in-worktree filter=in-repo-header" >>.gitattributes &&
+ >empty-in-worktree &&
+
+ echo cleaned >expected &&
+ git add empty-in-worktree &&
+ git show :empty-in-worktree >actual &&
+ test_cmp expected actual
+'
+
+test_expect_success "filter: smudge empty file" '
+ git config filter.empty-in-repo.clean "cat >/dev/null" &&
+ git config filter.empty-in-repo.smudge "echo smudged && cat" &&
+
+ echo "empty-in-repo filter=empty-in-repo" >>.gitattributes &&
+ echo dead data walking >empty-in-repo &&
+ git add empty-in-repo &&
+
+ echo smudged >expected &&
+ git checkout-index --prefix=filtered- empty-in-repo &&
+ test_cmp expected filtered-empty-in-repo
+'
+
test_done
helper_test store
+test_expect_success 'when xdg file does not exist, xdg file not created' '
+ test_path_is_missing "$HOME/.config/git/credentials" &&
+ test -s "$HOME/.git-credentials"
+'
+
+test_expect_success 'setup xdg file' '
+ rm -f "$HOME/.git-credentials" &&
+ mkdir -p "$HOME/.config/git" &&
+ >"$HOME/.config/git/credentials"
+'
+
+helper_test store
+
+test_expect_success 'when xdg file exists, home file not created' '
+ test -s "$HOME/.config/git/credentials" &&
+ test_path_is_missing "$HOME/.git-credentials"
+'
+
+test_expect_success 'setup custom xdg file' '
+ rm -f "$HOME/.git-credentials" &&
+ rm -f "$HOME/.config/git/credentials" &&
+ mkdir -p "$HOME/xdg/git" &&
+ >"$HOME/xdg/git/credentials"
+'
+
+XDG_CONFIG_HOME="$HOME/xdg"
+export XDG_CONFIG_HOME
+helper_test store
+unset XDG_CONFIG_HOME
+
+test_expect_success 'if custom xdg file exists, home and xdg files not created' '
+ test_when_finished "rm -f $HOME/xdg/git/credentials" &&
+ test -s "$HOME/xdg/git/credentials" &&
+ test_path_is_missing "$HOME/.git-credentials" &&
+ test_path_is_missing "$HOME/.config/git/credentials"
+'
+
+test_expect_success 'get: use home file if both home and xdg files have matches' '
+ echo "https://home-user:home-pass@example.com" >"$HOME/.git-credentials" &&
+ mkdir -p "$HOME/.config/git" &&
+ echo "https://xdg-user:xdg-pass@example.com" >"$HOME/.config/git/credentials" &&
+ check fill store <<-\EOF
+ protocol=https
+ host=example.com
+ --
+ protocol=https
+ host=example.com
+ username=home-user
+ password=home-pass
+ --
+ EOF
+'
+
+test_expect_success 'get: use xdg file if home file has no matches' '
+ >"$HOME/.git-credentials" &&
+ mkdir -p "$HOME/.config/git" &&
+ echo "https://xdg-user:xdg-pass@example.com" >"$HOME/.config/git/credentials" &&
+ check fill store <<-\EOF
+ protocol=https
+ host=example.com
+ --
+ protocol=https
+ host=example.com
+ username=xdg-user
+ password=xdg-pass
+ --
+ EOF
+'
+
+test_expect_success POSIXPERM 'get: use xdg file if home file is unreadable' '
+ echo "https://home-user:home-pass@example.com" >"$HOME/.git-credentials" &&
+ chmod -r "$HOME/.git-credentials" &&
+ mkdir -p "$HOME/.config/git" &&
+ echo "https://xdg-user:xdg-pass@example.com" >"$HOME/.config/git/credentials" &&
+ check fill store <<-\EOF
+ protocol=https
+ host=example.com
+ --
+ protocol=https
+ host=example.com
+ username=xdg-user
+ password=xdg-pass
+ --
+ EOF
+'
+
+test_expect_success 'store: if both xdg and home files exist, only store in home file' '
+ >"$HOME/.git-credentials" &&
+ mkdir -p "$HOME/.config/git" &&
+ >"$HOME/.config/git/credentials" &&
+ check approve store <<-\EOF &&
+ protocol=https
+ host=example.com
+ username=store-user
+ password=store-pass
+ EOF
+ echo "https://store-user:store-pass@example.com" >expected &&
+ test_cmp expected "$HOME/.git-credentials" &&
+ test_must_be_empty "$HOME/.config/git/credentials"
+'
+
+
+test_expect_success 'erase: erase matching credentials from both xdg and home files' '
+ echo "https://home-user:home-pass@example.com" >"$HOME/.git-credentials" &&
+ mkdir -p "$HOME/.config/git" &&
+ echo "https://xdg-user:xdg-pass@example.com" >"$HOME/.config/git/credentials" &&
+ check reject store <<-\EOF &&
+ protocol=https
+ host=example.com
+ EOF
+ test_must_be_empty "$HOME/.git-credentials" &&
+ test_must_be_empty "$HOME/.config/git/credentials"
+'
+
test_done
test_must_fail git hash-object -t tag --stdin </dev/null
'
+test_expect_success 'hash-object complains about bogus type name' '
+ test_must_fail git hash-object -t bogus --stdin </dev/null
+'
+
+test_expect_success 'hash-object complains about truncated type name' '
+ test_must_fail git hash-object -t bl --stdin </dev/null
+'
+
+test_expect_success '--literally' '
+ t=1234567890 &&
+ echo example | git hash-object -t $t --literally --stdin
+'
+
+test_expect_success '--literally with extra-long type' '
+ t=12345678901234567890123456789012345678901234567890 &&
+ t="$t$t$t$t$t$t$t$t$t$t$t$t$t$t$t$t$t$t$t$t$t$t$t$t$t$t$t$t$t$t" &&
+ echo example | git hash-object -t $t --literally --stdin
+'
+
test_done
)
'
-test_expect_success 'no file/rev ambiguity check inside a bare repo' '
+test_expect_success 'no file/rev ambiguity check inside a bare repo (explicit GIT_DIR)' '
+ test_when_finished "rm -fr foo.git" &&
git clone -s --bare .git foo.git &&
(
cd foo.git &&
+ # older Git needed help by exporting GIT_DIR=.
+ # to realize that it is inside a bare repository.
+ # We keep this test around for regression testing.
GIT_DIR=. git show -s HEAD
)
'
-# This still does not work as it should...
-: test_expect_success 'no file/rev ambiguity check inside a bare repo' '
+test_expect_success 'no file/rev ambiguity check inside a bare repo' '
+ test_when_finished "rm -fr foo.git" &&
git clone -s --bare .git foo.git &&
(
cd foo.git &&
'
test_expect_success SYMLINKS 'detection should not be fooled by a symlink' '
- rm -fr foo.git &&
git clone -s .git another &&
ln -s another yetanother &&
(
test_expect_success 'stdin update ref fails with wrong old value' '
echo "update $c $m $m~1" >stdin &&
test_must_fail git update-ref --stdin <stdin 2>err &&
- grep "fatal: Cannot lock the ref '"'"'$c'"'"'" err &&
+ grep "fatal: Cannot lock ref '"'"'$c'"'"'" err &&
test_must_fail git rev-parse --verify -q $c
'
test_expect_success 'stdin delete ref fails with wrong old value' '
echo "delete $a $m~1" >stdin &&
test_must_fail git update-ref --stdin <stdin 2>err &&
- grep "fatal: Cannot lock the ref '"'"'$a'"'"'" err &&
+ grep "fatal: Cannot lock ref '"'"'$a'"'"'" err &&
git rev-parse $m >expect &&
git rev-parse $a >actual &&
test_cmp expect actual
update $c ''
EOF
test_must_fail git update-ref --stdin <stdin 2>err &&
- grep "fatal: Cannot lock the ref '"'"'$c'"'"'" err &&
+ grep "fatal: Cannot lock ref '"'"'$c'"'"'" err &&
git rev-parse $m >expect &&
git rev-parse $a >actual &&
test_cmp expect actual &&
test_expect_success 'stdin -z update ref fails with wrong old value' '
printf $F "update $c" "$m" "$m~1" >stdin &&
test_must_fail git update-ref -z --stdin <stdin 2>err &&
- grep "fatal: Cannot lock the ref '"'"'$c'"'"'" err &&
+ grep "fatal: Cannot lock ref '"'"'$c'"'"'" err &&
test_must_fail git rev-parse --verify -q $c
'
git rev-parse "$c" >expect &&
printf $F "create $c" "$m~1" >stdin &&
test_must_fail git update-ref -z --stdin <stdin 2>err &&
- grep "fatal: Cannot lock the ref '"'"'$c'"'"'" err &&
+ grep "fatal: Cannot lock ref '"'"'$c'"'"'" err &&
git rev-parse "$c" >actual &&
test_cmp expect actual
'
test_expect_success 'stdin -z delete ref fails with wrong old value' '
printf $F "delete $a" "$m~1" >stdin &&
test_must_fail git update-ref -z --stdin <stdin 2>err &&
- grep "fatal: Cannot lock the ref '"'"'$a'"'"'" err &&
+ grep "fatal: Cannot lock ref '"'"'$a'"'"'" err &&
git rev-parse $m >expect &&
git rev-parse $a >actual &&
test_cmp expect actual
git update-ref $c $m &&
printf $F "update $a" "$m" "$m" "update $b" "$m" "$m" "update $c" "$m" "$Z" >stdin &&
test_must_fail git update-ref -z --stdin <stdin 2>err &&
- grep "fatal: Cannot lock the ref '"'"'$c'"'"'" err &&
+ grep "fatal: Cannot lock ref '"'"'$c'"'"'" err &&
git rev-parse $m >expect &&
git rev-parse $a >actual &&
test_cmp expect actual &&
test_must_fail git rev-parse --verify -q $c
'
+run_with_limited_open_files () {
+ (ulimit -n 32 && "$@")
+}
+
+test_lazy_prereq ULIMIT_FILE_DESCRIPTORS 'run_with_limited_open_files true'
+
+test_expect_success ULIMIT_FILE_DESCRIPTORS 'large transaction creating branches does not burst open file limit' '
+(
+ for i in $(test_seq 33)
+ do
+ echo "create refs/heads/$i HEAD"
+ done >large_input &&
+ run_with_limited_open_files git update-ref --stdin <large_input &&
+ git rev-parse --verify -q refs/heads/33
+)
+'
+
+test_expect_success ULIMIT_FILE_DESCRIPTORS 'large transaction deleting branches does not burst open file limit' '
+(
+ for i in $(test_seq 33)
+ do
+ echo "delete refs/heads/$i HEAD"
+ done >large_input &&
+ run_with_limited_open_files git update-ref --stdin <large_input &&
+ test_must_fail git rev-parse --verify -q refs/heads/33
+)
+'
+
test_done
--- /dev/null
+#!/bin/sh
+
+test_description='Test git update-ref with D/F conflicts'
+. ./test-lib.sh
+
+test_update_rejected () {
+ prefix="$1" &&
+ before="$2" &&
+ pack="$3" &&
+ create="$4" &&
+ error="$5" &&
+ printf "create $prefix/%s $C\n" $before |
+ git update-ref --stdin &&
+ git for-each-ref $prefix >unchanged &&
+ if $pack
+ then
+ git pack-refs --all
+ fi &&
+ printf "create $prefix/%s $C\n" $create >input &&
+ test_must_fail git update-ref --stdin <input 2>output.err &&
+ grep -F "$error" output.err &&
+ git for-each-ref $prefix >actual &&
+ test_cmp unchanged actual
+}
+
+Q="'"
+
+test_expect_success 'setup' '
+
+ git commit --allow-empty -m Initial &&
+ C=$(git rev-parse HEAD)
+
+'
+
+test_expect_success 'existing loose ref is a simple prefix of new' '
+
+ prefix=refs/1l &&
+ test_update_rejected $prefix "a c e" false "b c/x d" \
+ "$Q$prefix/c$Q exists; cannot create $Q$prefix/c/x$Q"
+
+'
+
+test_expect_success 'existing packed ref is a simple prefix of new' '
+
+ prefix=refs/1p &&
+ test_update_rejected $prefix "a c e" true "b c/x d" \
+ "$Q$prefix/c$Q exists; cannot create $Q$prefix/c/x$Q"
+
+'
+
+test_expect_success 'existing loose ref is a deeper prefix of new' '
+
+ prefix=refs/2l &&
+ test_update_rejected $prefix "a c e" false "b c/x/y d" \
+ "$Q$prefix/c$Q exists; cannot create $Q$prefix/c/x/y$Q"
+
+'
+
+test_expect_success 'existing packed ref is a deeper prefix of new' '
+
+ prefix=refs/2p &&
+ test_update_rejected $prefix "a c e" true "b c/x/y d" \
+ "$Q$prefix/c$Q exists; cannot create $Q$prefix/c/x/y$Q"
+
+'
+
+test_expect_success 'new ref is a simple prefix of existing loose' '
+
+ prefix=refs/3l &&
+ test_update_rejected $prefix "a c/x e" false "b c d" \
+ "$Q$prefix/c/x$Q exists; cannot create $Q$prefix/c$Q"
+
+'
+
+test_expect_success 'new ref is a simple prefix of existing packed' '
+
+ prefix=refs/3p &&
+ test_update_rejected $prefix "a c/x e" true "b c d" \
+ "$Q$prefix/c/x$Q exists; cannot create $Q$prefix/c$Q"
+
+'
+
+test_expect_success 'new ref is a deeper prefix of existing loose' '
+
+ prefix=refs/4l &&
+ test_update_rejected $prefix "a c/x/y e" false "b c d" \
+ "$Q$prefix/c/x/y$Q exists; cannot create $Q$prefix/c$Q"
+
+'
+
+test_expect_success 'new ref is a deeper prefix of existing packed' '
+
+ prefix=refs/4p &&
+ test_update_rejected $prefix "a c/x/y e" true "b c d" \
+ "$Q$prefix/c/x/y$Q exists; cannot create $Q$prefix/c$Q"
+
+'
+
+test_expect_success 'one new ref is a simple prefix of another' '
+
+ prefix=refs/5 &&
+ test_update_rejected $prefix "a e" false "b c c/x d" \
+ "cannot process $Q$prefix/c$Q and $Q$prefix/c/x$Q at the same time"
+
+'
+
+test_done
R="$1"
+[ "$(id -u)" -eq 0 ] && die "This script should not be run as root, what if it does rm -rf /?"
[ -n "$R" ] || die "usage: prepare-chroot.sh <root>"
[ -x git ] || die "This script needs to be executed at git source code's top directory"
-[ -x /bin/busybox ] || die "You need busybox"
+if [ -x /bin/busybox ]; then
+ BB=/bin/busybox
+elif [ -x /usr/bin/busybox ]; then
+ BB=/usr/bin/busybox
+else
+ die "You need busybox"
+fi
xmkdir "$R" "$R/bin" "$R/etc" "$R/lib" "$R/dev"
-[ -c "$R/dev/null" ] || die "/dev/null is missing. Do mknod $R/dev/null c 1 3 && chmod 666 $R/dev/null"
+touch "$R/dev/null"
echo "root:x:0:0:root:/:/bin/sh" > "$R/etc/passwd"
echo "$(id -nu):x:$(id -u):$(id -g)::$(pwd)/t:/bin/sh" >> "$R/etc/passwd"
echo "root::0:root" > "$R/etc/group"
echo "$(id -ng)::$(id -g):$(id -nu)" >> "$R/etc/group"
-[ -x "$R/bin/busybox" ] || cp /bin/busybox "$R/bin/busybox"
-[ -x "$R/bin/sh" ] || ln -s /bin/busybox "$R/bin/sh"
-[ -x "$R/bin/su" ] || ln -s /bin/busybox "$R/bin/su"
+[ -x "$R$BB" ] || cp $BB "$R/bin/busybox"
+for cmd in sh su ls expr tr basename rm mkdir mv id uname dirname cat true sed diff; do
+ ln -f -s /bin/busybox "$R/bin/$cmd"
+done
mkdir -p "$R$(pwd)"
rsync --exclude-from t/t1509/excludes -Ha . "$R$(pwd)"
-ldd git | grep '/' | sed 's,.*\s\(/[^ ]*\).*,\1,' | while read i; do
- mkdir -p "$R$(dirname $i)"
- cp "$i" "$R/$i"
+# Fake perl to reduce dependency, t1509 does not use perl, but some
+# env might slip through, see test-lib.sh, unset.*PERL_PATH
+sed 's|^PERL_PATH=.*|PERL_PATH=/bin/true|' GIT-BUILD-OPTIONS > "$R$(pwd)/GIT-BUILD-OPTIONS"
+for cmd in git $BB;do
+ ldd $cmd | grep '/' | sed 's,.*\s\(/[^ ]*\).*,\1,' | while read i; do
+ mkdir -p "$R$(dirname $i)"
+ cp "$i" "$R/$i"
+ done
done
-echo "Execute this in root: 'chroot $R /bin/su - $(id -nu)'"
+cat <<EOF
+All is set up in $R, execute t1509 with the following commands:
+
+sudo chroot $R /bin/su - $(id -nu)
+IKNOWWHATIAMDOING=YES ./t1509-root-worktree.sh -v -i
+
+When you are done, simply delete $R to clean up
+EOF
grep "^# Rebase ..* onto ..* ([0-9]" actual
'
+test_expect_success 'rebase -i commits that overwrite untracked files (pick)' '
+ git checkout --force branch2 &&
+ git clean -f &&
+ set_fake_editor &&
+ FAKE_LINES="edit 1 2" git rebase -i A &&
+ test_cmp_rev HEAD F &&
+ test_path_is_missing file6 &&
+ >file6 &&
+ test_must_fail git rebase --continue &&
+ test_cmp_rev HEAD F &&
+ rm file6 &&
+ git rebase --continue &&
+ test_cmp_rev HEAD I
+'
+
+test_expect_success 'rebase -i commits that overwrite untracked files (squash)' '
+ git checkout --force branch2 &&
+ git clean -f &&
+ git tag original-branch2 &&
+ set_fake_editor &&
+ FAKE_LINES="edit 1 squash 2" git rebase -i A &&
+ test_cmp_rev HEAD F &&
+ test_path_is_missing file6 &&
+ >file6 &&
+ test_must_fail git rebase --continue &&
+ test_cmp_rev HEAD F &&
+ rm file6 &&
+ git rebase --continue &&
+ test $(git cat-file commit HEAD | sed -ne \$p) = I &&
+ git reset --hard original-branch2
+'
+
+test_expect_success 'rebase -i commits that overwrite untracked files (no ff)' '
+ git checkout --force branch2 &&
+ git clean -f &&
+ set_fake_editor &&
+ FAKE_LINES="edit 1 2" git rebase -i --no-ff A &&
+ test $(git cat-file commit HEAD | sed -ne \$p) = F &&
+ test_path_is_missing file6 &&
+ >file6 &&
+ test_must_fail git rebase --continue &&
+ test $(git cat-file commit HEAD | sed -ne \$p) = F &&
+ rm file6 &&
+ git rebase --continue &&
+ test $(git cat-file commit HEAD | sed -ne \$p) = I
+'
+
test_done
'
+test_expect_success 'add -e notices editor failure' '
+ git reset --hard &&
+ echo change >>file &&
+ test_must_fail env GIT_EDITOR=false git add -e &&
+ test_expect_code 1 git diff --exit-code
+'
+
test_done
test_expect_success 'stash some dirty working directory' '
echo 1 > file &&
git add file &&
+ echo unrelated >other-file &&
+ git add other-file &&
test_tick &&
git commit -m initial &&
echo 2 > file &&
test_cmp expect file
'
+test_expect_success 'apply requires a clean index' '
+ test_when_finished "git reset --hard" &&
+ echo changed >other-file &&
+ git add other-file &&
+ test_must_fail git stash apply
+'
+
test_expect_success 'apply does not need clean working directory' '
echo 4 >other-file &&
- git add other-file &&
- echo 5 >other-file &&
git stash apply &&
echo 3 >expect &&
test_cmp expect file
)
'
+test_expect_success 'stash drop complains of extra options' '
+ test_must_fail git stash drop --foo
+'
+
test_expect_success 'drop top stash' '
git reset --hard &&
git stash list > stashlist1 &&
'
test_expect_success 'stash list implies --first-parent -m' '
- cat >expect <<-\EOF &&
- stash@{0}: WIP on master: b27a2bc subdir
+ cat >expect <<-EOF &&
+ stash@{0}
diff --git a/file b/file
index 257cc56..d26b33d 100644
-foo
+working
EOF
- git stash list -p >actual &&
+ git stash list --format=%gd -p >actual &&
test_cmp expect actual
'
test_expect_success 'stash list --cc shows combined diff' '
cat >expect <<-\EOF &&
- stash@{0}: WIP on master: b27a2bc subdir
+ stash@{0}
diff --cc file
index 257cc56,9015a7a..d26b33d
-index
++working
EOF
- git stash list -p --cc >actual &&
+ git stash list --format=%gd -p --cc >actual &&
test_cmp expect actual
'
Rearranged lines in dir/sub
-commit 59d314ad6f356dd08601a4cd5e530381da3e3c64 (HEAD, refs/heads/master)
+commit 59d314ad6f356dd08601a4cd5e530381da3e3c64 (HEAD -> refs/heads/master)
Merge: 9a6d494 c7a2ab9
Author: A U Thor <author@example.com>
Date: Mon Jun 26 00:04:00 2006 +0000
)
'
+test_expect_success 'diff D F and diff F D' '
+ (
+ cd repo &&
+ echo in-repo >a &&
+ echo non-repo >../non/git/a &&
+ mkdir sub &&
+ echo sub-repo >sub/a &&
+
+ test_must_fail git diff --no-index sub/a ../non/git/a >expect &&
+ test_must_fail git diff --no-index sub/a ../non/git/ >actual &&
+ test_cmp expect actual &&
+
+ test_must_fail git diff --no-index a ../non/git/a >expect &&
+ test_must_fail git diff --no-index a ../non/git/ >actual &&
+ test_cmp expect actual &&
+
+ test_must_fail git diff --no-index ../non/git/a a >expect &&
+ test_must_fail git diff --no-index ../non/git a >actual &&
+ test_cmp expect actual
+ )
+'
+
+test_expect_success 'turning a file into a directory' '
+ (
+ cd non/git &&
+ mkdir d e e/sub &&
+ echo 1 >d/sub &&
+ echo 2 >e/sub/file &&
+ printf "D\td/sub\nA\te/sub/file\n" >expect &&
+ test_must_fail git diff --no-index --name-status d e >actual &&
+ test_cmp expect actual
+ )
+'
+
test_done
canned_test "-L 8,12:a.c -L 4:a.c simple" multiple-superset
test_bad_opts "-L" "switch.*requires a value"
-test_bad_opts "-L b.c" "argument.*not of the form"
-test_bad_opts "-L 1:" "argument.*not of the form"
+test_bad_opts "-L b.c" "argument not .start,end:file"
+test_bad_opts "-L 1:" "argument not .start,end:file"
test_bad_opts "-L 1:nonexistent" "There is no path"
test_bad_opts "-L 1:simple" "There is no path"
-test_bad_opts "-L '/foo:b.c'" "argument.*not of the form"
+test_bad_opts "-L '/foo:b.c'" "argument not .start,end:file"
test_bad_opts "-L 1000:b.c" "has only.*lines"
test_bad_opts "-L 1,1000:b.c" "has only.*lines"
-test_bad_opts "-L :b.c" "argument.*not of the form"
+test_bad_opts "-L :b.c" "argument not .start,end:file"
test_bad_opts "-L :foo:b.c" "no match"
test_expect_success '-L X (X == nlines)' '
git add foo &&
git rebase --continue &&
echo rebase >expected.args &&
- cat >expected.data <<EOF &&
-$(git rev-parse C) $(git rev-parse HEAD^)
-$(git rev-parse D) $(git rev-parse HEAD)
-EOF
+ cat >expected.data <<-EOF &&
+ $(git rev-parse C) $(git rev-parse HEAD^)
+ $(git rev-parse D) $(git rev-parse HEAD)
+ EOF
verify_hook_input
'
git add foo &&
git rebase --continue &&
echo rebase >expected.args &&
- cat >expected.data <<EOF &&
-$(git rev-parse D) $(git rev-parse HEAD)
-EOF
+ cat >expected.data <<-EOF &&
+ $(git rev-parse D) $(git rev-parse HEAD)
+ EOF
verify_hook_input
'
test_must_fail git rebase --onto D A &&
git rebase --skip &&
echo rebase >expected.args &&
- cat >expected.data <<EOF &&
-$(git rev-parse E) $(git rev-parse HEAD)
-EOF
+ cat >expected.data <<-EOF &&
+ $(git rev-parse E) $(git rev-parse HEAD)
+ EOF
verify_hook_input
'
git add foo &&
git rebase --continue &&
echo rebase >expected.args &&
- cat >expected.data <<EOF &&
-$(git rev-parse C) $(git rev-parse HEAD^)
-$(git rev-parse D) $(git rev-parse HEAD)
-EOF
+ cat >expected.data <<-EOF &&
+ $(git rev-parse C) $(git rev-parse HEAD^)
+ $(git rev-parse D) $(git rev-parse HEAD)
+ EOF
verify_hook_input
'
git add foo &&
git rebase --continue &&
echo rebase >expected.args &&
- cat >expected.data <<EOF &&
-$(git rev-parse D) $(git rev-parse HEAD)
-EOF
+ cat >expected.data <<-EOF &&
+ $(git rev-parse D) $(git rev-parse HEAD)
+ EOF
verify_hook_input
'
git add foo &&
git rebase --continue &&
echo rebase >expected.args &&
- cat >expected.data <<EOF &&
-$(git rev-parse C) $(git rev-parse HEAD^)
-$(git rev-parse D) $(git rev-parse HEAD)
-EOF
+ cat >expected.data <<-EOF &&
+ $(git rev-parse C) $(git rev-parse HEAD^)
+ $(git rev-parse D) $(git rev-parse HEAD)
+ EOF
verify_hook_input
'
git add foo &&
git rebase --continue &&
echo rebase >expected.args &&
- cat >expected.data <<EOF &&
-$(git rev-parse D) $(git rev-parse HEAD)
-EOF
+ cat >expected.data <<-EOF &&
+ $(git rev-parse D) $(git rev-parse HEAD)
+ EOF
verify_hook_input
'
git add foo &&
git rebase --continue &&
echo rebase >expected.args &&
- cat >expected.data <<EOF &&
-$(git rev-parse C) $(git rev-parse HEAD)
-$(git rev-parse D) $(git rev-parse HEAD)
-EOF
+ cat >expected.data <<-EOF &&
+ $(git rev-parse C) $(git rev-parse HEAD)
+ $(git rev-parse D) $(git rev-parse HEAD)
+ EOF
verify_hook_input
'
clear_hook_input &&
FAKE_LINES="1 fixup 2" git rebase -i B &&
echo rebase >expected.args &&
- cat >expected.data <<EOF &&
-$(git rev-parse C) $(git rev-parse HEAD)
-$(git rev-parse D) $(git rev-parse HEAD)
-EOF
+ cat >expected.data <<-EOF &&
+ $(git rev-parse C) $(git rev-parse HEAD)
+ $(git rev-parse D) $(git rev-parse HEAD)
+ EOF
verify_hook_input
'
git add foo &&
git rebase --continue &&
echo rebase >expected.args &&
- cat >expected.data <<EOF &&
-$(git rev-parse C) $(git rev-parse HEAD^)
-$(git rev-parse D) $(git rev-parse HEAD)
-EOF
+ cat >expected.data <<-EOF &&
+ $(git rev-parse C) $(git rev-parse HEAD^)
+ $(git rev-parse D) $(git rev-parse HEAD)
+ EOF
+ verify_hook_input
+'
+
+test_expect_success 'git rebase -i (exec)' '
+ git reset --hard D &&
+ clear_hook_input &&
+ FAKE_LINES="edit 1 exec_false 2" git rebase -i B &&
+ echo something >bar &&
+ git add bar &&
+ # Fails because of exec false
+ test_must_fail git rebase --continue &&
+ git rebase --continue &&
+ echo rebase >expected.args &&
+ cat >expected.data <<-EOF &&
+ $(git rev-parse C) $(git rev-parse HEAD^)
+ $(git rev-parse D) $(git rev-parse HEAD)
+ EOF
verify_hook_input
'
git commit -m "add bfile"
) &&
test_tick && test_tick &&
+ echo "second" >afile &&
+ git add afile &&
+ git commit -m "second commit" &&
echo "original $dollar" >afile &&
git add afile &&
git commit -m "do not clobber $dollar signs"
)
'
+test_expect_success '--log=1 limits shortlog length' '
+(
+ cd cloned &&
+ git reset --hard HEAD^ &&
+ test "$(cat afile)" = original &&
+ test "$(cat bfile)" = added &&
+ git pull --log=1 &&
+ git log -3 &&
+ git cat-file commit HEAD >result &&
+ grep Dollar result &&
+ ! grep "second commit" result
+)
+'
+
test_done
test_description='fetch/clone from a shallow clone over http'
. ./test-lib.sh
-
-if test -n "$NO_CURL"; then
- skip_all='skipping test, git built without http support'
- test_done
-fi
-
. "$TEST_DIRECTORY"/lib-httpd.sh
start_httpd
test_description='test smart pushing over http via http-backend'
. ./test-lib.sh
-if test -n "$NO_CURL"; then
- skip_all='skipping test, git built without http support'
- test_done
-fi
-
ROOT_PATH="$PWD"
. "$TEST_DIRECTORY"/lib-gpg.sh
. "$TEST_DIRECTORY"/lib-httpd.sh
test_description='push from/to a shallow clone over http'
. ./test-lib.sh
-
-if test -n "$NO_CURL"; then
- say 'skipping test, git built without http support'
- test_done
-fi
-
. "$TEST_DIRECTORY"/lib-httpd.sh
start_httpd
test_description='test dumb fetching over http via static file'
. ./test-lib.sh
-
-if test -n "$NO_CURL"; then
- skip_all='skipping test, git built without http support'
- test_done
-fi
-
. "$TEST_DIRECTORY"/lib-httpd.sh
start_httpd
test_description='test smart fetching over http via http-backend'
. ./test-lib.sh
-
-if test -n "$NO_CURL"; then
- skip_all='skipping test, git built without http support'
- test_done
-fi
-
. "$TEST_DIRECTORY"/lib-httpd.sh
start_httpd
git -C hidden.git rev-parse --verify b
'
-test_expect_success 'create 2,000 tags in the repo' '
- (
- cd "$HTTPD_DOCUMENT_ROOT_PATH/repo.git" &&
- for i in $(test_seq 2000)
+# create an arbitrary number of tags, numbered from tag-$1 to tag-$2
+create_tags () {
+ rm -f marks &&
+ for i in $(test_seq "$1" "$2")
do
- echo "commit refs/heads/too-many-refs"
- echo "mark :$i"
- echo "committer git <git@example.com> $i +0000"
- echo "data 0"
- echo "M 644 inline bla.txt"
- echo "data 4"
- echo "bla"
+ # don't use here-doc, because it requires a process
+ # per loop iteration
+ echo "commit refs/heads/too-many-refs-$1" &&
+ echo "mark :$i" &&
+ echo "committer git <git@example.com> $i +0000" &&
+ echo "data 0" &&
+ echo "M 644 inline bla.txt" &&
+ echo "data 4" &&
+ echo "bla" &&
# make every commit dangling by always
# rewinding the branch after each commit
- echo "reset refs/heads/too-many-refs"
- echo "from :1"
+ echo "reset refs/heads/too-many-refs-$1" &&
+ echo "from :$1"
done | git fast-import --export-marks=marks &&
# now assign tags to all the dangling commits we created above
tag=$(perl -e "print \"bla\" x 30") &&
sed -e "s|^:\([^ ]*\) \(.*\)$|\2 refs/tags/$tag-\1|" <marks >>packed-refs
+}
+
+test_expect_success 'create 2,000 tags in the repo' '
+ (
+ cd "$HTTPD_DOCUMENT_ROOT_PATH/repo.git" &&
+ create_tags 1 2000
)
'
test_line_count = 2 posts
'
+test_expect_success EXPENSIVE 'http can handle enormous ref negotiation' '
+ (
+ cd "$HTTPD_DOCUMENT_ROOT_PATH/repo.git" &&
+ create_tags 2001 50000
+ ) &&
+ git -C too-many-refs fetch -q --tags &&
+ (
+ cd "$HTTPD_DOCUMENT_ROOT_PATH/repo.git" &&
+ create_tags 50001 100000
+ ) &&
+ git -C too-many-refs fetch -q --tags &&
+ git -C too-many-refs for-each-ref refs/tags >tags &&
+ test_line_count = 100000 tags
+'
+
stop_httpd
test_done
test_description='test git-http-backend'
. ./test-lib.sh
-
-if test -n "$NO_CURL"; then
- skip_all='skipping test, git built without http support'
- test_done
-fi
-
. "$TEST_DIRECTORY"/lib-httpd.sh
start_httpd
'
}
+copy_ssh_wrapper_as () {
+ cp "$TRASH_DIRECTORY/ssh-wrapper" "$1" &&
+ GIT_SSH="$1" &&
+ export GIT_SSH
+}
+
expect_ssh () {
test_when_finished '
(cd "$TRASH_DIRECTORY" && rm -f ssh-expect && >ssh-output)
test_expect_success 'bracketed hostnames are still ssh' '
git clone "[myhost:123]:src" ssh-bracket-clone &&
- expect_ssh myhost '-p 123' src
+ expect_ssh "-p 123" myhost src
+'
+
+test_expect_success 'uplink is not treated as putty' '
+ copy_ssh_wrapper_as "$TRASH_DIRECTORY/uplink" &&
+ git clone "[myhost:123]:src" ssh-bracket-clone-uplink &&
+ expect_ssh "-p 123" myhost src
+'
+
+test_expect_success 'plink is treated specially (as putty)' '
+ copy_ssh_wrapper_as "$TRASH_DIRECTORY/plink" &&
+ git clone "[myhost:123]:src" ssh-bracket-clone-plink-0 &&
+ expect_ssh "-P 123" myhost src
'
+test_expect_success 'plink.exe is treated specially (as putty)' '
+ copy_ssh_wrapper_as "$TRASH_DIRECTORY/plink.exe" &&
+ git clone "[myhost:123]:src" ssh-bracket-clone-plink-1 &&
+ expect_ssh "-P 123" myhost src
+'
+
+test_expect_success 'tortoiseplink is like putty, with extra arguments' '
+ copy_ssh_wrapper_as "$TRASH_DIRECTORY/tortoiseplink" &&
+ git clone "[myhost:123]:src" ssh-bracket-clone-plink-2 &&
+ expect_ssh "-batch -P 123" myhost src
+'
+
+# Reset the GIT_SSH environment variable for clone tests.
+setup_ssh_wrapper
+
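
Note: the new tests pin down how the ssh command's basename selects the command-line dialect: plain OpenSSH-style "-p <port>" by default, PuTTY-style "-P <port>" for plink/plink.exe, and an extra "-batch" for tortoiseplink, while a name like "uplink" that merely contains "plink" is left alone. A rough standalone sketch of that sort of basename check (hypothetical helper, not the actual code in connect.c):

    #include <stdio.h>
    #include <string.h>
    #include <strings.h>

    /* Decide which port flag and extra options an ssh-like command wants,
     * judging only by the basename, as the tests above exercise. */
    static void ssh_variant(const char *cmd, const char **port_flag,
                            const char **extra)
    {
        const char *base = strrchr(cmd, '/');
        base = base ? base + 1 : cmd;

        *extra = "";
        if (!strcasecmp(base, "tortoiseplink") ||
            !strcasecmp(base, "tortoiseplink.exe")) {
            *port_flag = "-P";
            *extra = "-batch";
        } else if (!strcasecmp(base, "plink") ||
                   !strcasecmp(base, "plink.exe")) {
            *port_flag = "-P";
        } else {
            *port_flag = "-p";    /* OpenSSH, uplink, etc. */
        }
    }

    int main(void)
    {
        const char *flag, *extra;
        ssh_variant("/usr/bin/tortoiseplink", &flag, &extra);
        printf("%s %s\n", extra, flag);    /* prints "-batch -P" */
        return 0;
    }
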
counter=0
# $1 url
# $2 none|host
'
done
+test_expect_success 'do not complain about existing broken links' '
+ cat >broken-commit <<-\EOF &&
+ tree 0000000000000000000000000000000000000001
+ parent 0000000000000000000000000000000000000002
+ author whatever <whatever@example.com> 1234 -0000
+ committer whatever <whatever@example.com> 1234 -0000
+
+ some message
+ EOF
+ commit=$(git hash-object -t commit -w broken-commit) &&
+ git gc 2>stderr &&
+ verbose git cat-file -e $commit &&
+ test_must_be_empty stderr
+'
+
test_done
test $orig_head != `git show-ref --hash --head HEAD`
'
+test_expect_success 'filter commit message without trailing newline' '
+ git reset --hard original &&
+ commit=$(printf "no newline" | git commit-tree HEAD^{tree}) &&
+ git update-ref refs/heads/no-newline $commit &&
+ git filter-branch -f refs/heads/no-newline &&
+ echo $commit >expect &&
+ git rev-parse refs/heads/no-newline >actual &&
+ test_cmp expect actual
+'
+
test_done
(ulimit -s 128 && "$@")
}
-test_lazy_prereq ULIMIT 'run_with_limited_stack true'
+test_lazy_prereq ULIMIT_STACK_SIZE 'run_with_limited_stack true'
# we require ulimit, this excludes Windows
-test_expect_success ULIMIT '--contains works in a deep repo' '
+test_expect_success ULIMIT_STACK_SIZE '--contains works in a deep repo' '
>expect &&
i=1 &&
while test $i -lt 8000
test_cmp expected actual
'
+test_expect_success 'same with gitignore starting with BOM' '
+ printf "\357\273\277ignored\n" >.gitignore &&
+ mkdir -p untracked &&
+ : >untracked/ignored &&
+ : >untracked/uncommitted &&
+ git status --porcelain --ignored >actual &&
+ test_cmp expected actual
+'
+
cat >expected <<\EOF
?? .gitignore
?? actual
test "$(git rev-parse HEAD)" = "$(git rev-parse c1)"
'
+test_expect_success 'pull.ff=true overrides merge.ff=false' '
+ git reset --hard c0 &&
+ test_config merge.ff false &&
+ test_config pull.ff true &&
+ git pull . c1 &&
+ test "$(git rev-parse HEAD)" = "$(git rev-parse c1)"
+'
+
test_expect_success 'fast-forward pull creates merge with "false" in pull.ff' '
git reset --hard c0 &&
test_config pull.ff false &&
test $(grep -c " " actual) = 9
'
-test_expect_success 'blaming files with CRLF newlines' '
+test_expect_success 'setup file with CRLF newlines' '
git config core.autocrlf false &&
- printf "testcase\r\n" >crlffile &&
+ printf "testcase\n" >crlffile &&
git add crlffile &&
git commit -m testcase &&
- git -c core.autocrlf=input blame crlffile >actual &&
+ printf "testcase\r\n" >crlffile
+'
+
+test_expect_success 'blame file with CRLF core.autocrlf true' '
+ git config core.autocrlf true &&
+ git blame crlffile >actual &&
+ grep "A U Thor" actual
+'
+
+test_expect_success 'blame file with CRLF attributes text' '
+ git config core.autocrlf false &&
+ echo "crlffile text" >.gitattributes &&
+ git blame crlffile >actual &&
grep "A U Thor" actual
'
test_path_is_file () {
if ! test -f "$1"
then
- echo "File $1 doesn't exist. $*"
+ echo "File $1 doesn't exist. $2"
false
fi
}
test_path_is_dir () {
if ! test -d "$1"
then
- echo "Directory $1 doesn't exist. $*"
+ echo "Directory $1 doesn't exist. $2"
false
fi
}
return 0;
}
-int parse_tree(struct tree *item)
+int parse_tree_gently(struct tree *item, int quiet_on_missing)
{
enum object_type type;
void *buffer;
return 0;
buffer = read_sha1_file(item->object.sha1, &type, &size);
if (!buffer)
- return error("Could not read %s",
+ return quiet_on_missing ? -1 :
+ error("Could not read %s",
sha1_to_hex(item->object.sha1));
if (type != OBJ_TREE) {
free(buffer);
int parse_tree_buffer(struct tree *item, void *buffer, unsigned long size);
-int parse_tree(struct tree *tree);
+int parse_tree_gently(struct tree *tree, int quiet_on_missing);
+static inline int parse_tree(struct tree *tree)
+{
+ return parse_tree_gently(tree, 0);
+}
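
Note: parse_tree() follows the same split as xmmap() earlier: the work moves into parse_tree_gently(), which can stay silent about a missing object, and the old name survives as a static inline wrapper so existing callers compile unchanged. In miniature (illustrative types and names):

    #include <stdio.h>

    struct item { int missing; };

    /* Gentle variant: report failure with -1, optionally without noise. */
    static int load_item_gently(struct item *it, int quiet_on_missing)
    {
        if (it->missing) {
            if (!quiet_on_missing)
                fprintf(stderr, "error: could not read item\n");
            return -1;
        }
        return 0;
    }

    /* Historical entry point keeps its one-argument signature. */
    static inline int load_item(struct item *it)
    {
        return load_item_gently(it, 0);
    }

    int main(void)
    {
        struct item present = { 0 }, absent = { 1 };
        load_item(&present);                          /* succeeds quietly */
        return load_item_gently(&absent, 1) ? 0 : 1;  /* silent failure */
    }
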
void free_tree_buffer(struct tree *tree);
/* Parses and returns the tree in the given ent, chasing tags and commits. */
return 1;
}
+
+const char utf8_bom[] = "\357\273\277";
+
+int skip_utf8_bom(char **text, size_t len)
+{
+ if (len < strlen(utf8_bom) ||
+ memcmp(*text, utf8_bom, strlen(utf8_bom)))
+ return 0;
+ *text += strlen(utf8_bom);
+ return 1;
+}
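
Note: skip_utf8_bom() above simply steps the caller's pointer past an initial EF BB BF sequence; the .gitignore/.gitattributes readers call something like it once per file before parsing lines, as the configuration reader already does. A tiny standalone demo of the same check:

    #include <stdio.h>
    #include <string.h>

    static const char bom[] = "\357\273\277";   /* UTF-8 BOM: EF BB BF */

    /* If *text begins with the BOM, advance past it and return 1. */
    static int skip_bom(char **text, size_t len)
    {
        if (len < strlen(bom) || memcmp(*text, bom, strlen(bom)))
            return 0;
        *text += strlen(bom);
        return 1;
    }

    int main(void)
    {
        char line[] = "\357\273\277*.o";
        char *p = line;
        if (skip_bom(&p, strlen(p)))
            printf("BOM skipped, pattern is '%s'\n", p);   /* prints *.o */
        return 0;
    }
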
__attribute__((format (printf, 2, 3)))
int utf8_fprintf(FILE *, const char *, ...);
+extern const char utf8_bom[];
+extern int skip_utf8_bom(char **, size_t);
+
void strbuf_add_wrapped_text(struct strbuf *buf,
const char *text, int indent, int indent2, int width);
void strbuf_add_wrapped_bytes(struct strbuf *buf, const char *data, int len,