Remove an ancient tool left in contrib/.
* sb/retire-convert-objects-from-contrib:
contrib: remove git-convert-objects
Ralf Thielow <ralf.thielow@gmail.com> <ralf.thielow@googlemail.com>
Ramsay Jones <ramsay@ramsayjones.plus.com> <ramsay@ramsay1.demon.co.uk>
René Scharfe <l.s.r@web.de> <rene.scharfe@lsrfire.ath.cx>
+Richard Hansen <rhansen@rhansen.org> <hansenr@google.com>
+Richard Hansen <rhansen@rhansen.org> <rhansen@bbn.com>
Robert Fitzsimons <robfitz@273k.net>
Robert Shearman <robertshearman@gmail.com> <rob@codeweavers.com>
Robert Zeh <robert.a.zeh@gmail.com>
x = 1;
}
- is frowned upon. A gray area is when the statement extends
- over a few lines, and/or you have a lengthy comment atop of
- it. Also, like in the Linux kernel, if there is a long list
- of "else if" statements, it can make sense to add braces to
- single line blocks.
+ is frowned upon. But there are a few exceptions:
+
+ - When the statement extends over a few lines (e.g., a while loop
+ with an embedded conditional, or a comment). E.g.:
+
+ while (foo) {
+ if (x)
+ one();
+ else
+ two();
+ }
+
+ if (foo) {
+ /*
+ * This one requires some explanation,
+ * so we're better off with braces to make
+ * it obvious that the indentation is correct.
+ */
+ doit();
+ }
+
+ - When there are multiple arms to a conditional and some of them
+ require braces, enclose even a single line block in braces for
+ consistency. E.g.:
+
+ if (foo) {
+ doit();
+ } else {
+ one();
+ two();
+ three();
+ }
- We try to avoid assignments in the condition of an "if" statement.
user-manual.xml: user-manual.txt user-manual.conf
$(QUIET_ASCIIDOC)$(RM) $@+ $@ && \
- $(TXT_TO_XML) -d article -o $@+ $< && \
+ $(TXT_TO_XML) -d book -o $@+ $< && \
mv $@+ $@
technical/api-index.txt: technical/api-index-skel.txt \
more widely known when conversion fails from/to it.
(merge df3755888b jc/latin-1 later to maint).
+ * "git grep" has been taught to optionally recurse into submodules.
+
+ * "git rm" used to refuse to remove a submodule when it has its own
+ git repository embedded in its working tree. It learned to move
+ the repository away to $GIT_DIR/modules/ of the superproject
+ instead, and allow the submodule to be deleted (as long as there
+ will be no loss of local modifications, that is).
+
+ * A recent update to "git p4" was not usable with older versions of p4,
+ but it could be made to work with minimal changes. Do so.
+
+ * "git diff" learned diff.interHunkContext configuration variable
+ that gives the default value for its --inter-hunk-context option.
+
+ * The prereleaseSuffix feature of version comparison that is used in
+ "git tag -l" did not work correctly when two or more prereleases for the
+ same release were present (e.g. when 2.0, 2.0-beta1, and 2.0-beta2
+ are there and the code needs to compare 2.0-beta1 and 2.0-beta2).
+
Performance, Internal Implementation, Development Support etc.
superproject to .git/modules/ (and point the latter with the former
that is turned into a "gitdir:" file) has been added.
+ * "git push \\server\share\dir" has recently regressed and then
+ fixed. A test has retroactively been added for this breakage.
+
+ * Build updates for Cygwin.
+
+ * The implementation of "real_path()" used to chdir(2) into the target
+ directory and call getcwd(3), but this obviously wouldn't be usable in
+ a threaded environment. Rewrite it to manually resolve relative
+ paths, including symbolic links in path components.
+
+ * Adjust documentation to help AsciiDoctor render better while not
+ breaking the rendering done by AsciiDoc.
+
Also contains various documentation updates and code clean-ups.
* Leakage of lockfiles in the config subsystem has been fixed.
(merge c06fa62dfc nd/config-misc-fixes later to maint).
+ * It is natural that "git gc --auto" may not attempt to pack
+ everything into a single pack, so there is no point in warning
+ when the user has configured the system to use the pack bitmap,
+ which would end up disabling further "gc".
+ (merge 1c409a705c dt/disable-bitmap-in-auto-gc later to maint).
+
+ * "git archive" did not read the standard configuration files, and
+ failed to notice a file that is marked as binary via the userdiff
+ driver configuration.
+ (merge 965cba2e7e jk/archive-zip-userdiff-config later to maint).
+
+ * "git blame --porcelain" misidentified the "previous" <commit, path>
+ pair (aka "source") when contents came from two or more files.
+ (merge 4e76832984 jk/blame-fixes later to maint).
+
+ * "git rebase -i" with a recent update started showing an incorrect
+ count when squashing more than 10 commits.
+ (merge 356b8ecff1 jk/rebase-i-squash-count-fix later to maint).
+
+ * "git <cmd> @{push}" on a detached HEAD used to segfault; it has
+ been corrected to error out with a message.
+ (merge b10731f43d km/branch-get-push-while-detached later to maint).
+
+ * Running "git add a/b" when "a" is a submodule correctly errored
+ out, but without a meaningful error message.
+ (merge 2d81c48fa7 sb/pathspec-errors later to maint).
+
+ * Typing ^C while the pager is in use, which usually does not kill the
+ pager, killed Git and took the pager down as collateral damage under
+ certain process-tree structures. This has been fixed.
+ (merge 46df6906f3 jk/execv-dashed-external later to maint).
+
+ * "git mergetool" without any pathspec on the command line that is
+ run from a subdirectory became no-op in Git v2.11 by mistake, which
+ has been fixed.
+
+ * Retire long unused/unmaintained gitview from the contrib/ area.
+ (merge 3120925c25 sb/remove-gitview later to maint).
+
+ * Tighten a test to avoid mistaking an extended ERE regexp engine for
+ a PRE regexp engine.
+ (merge 7675c7bd01 jk/grep-e-could-be-extended-beyond-posix later to maint).
+
* Other minor doc, test and build updates and code cleanups.
(merge f2627d9b19 sb/submodule-config-cleanup later to maint).
+ (merge 384f1a167b sb/unpack-trees-cleanup later to maint).
+ (merge 3f05402ac0 ad/bisect-terms later to maint).
+ (merge 874444b704 rh/diff-orderfile-doc later to maint).
+ (merge c68d2d7c2b ws/request-pull-code-cleanup later to maint).
This option is passed unchanged to gpg's --local-user parameter,
so you may specify a key using any method that gpg supports.
-versionsort.prereleaseSuffix::
- When version sort is used in linkgit:git-tag[1], prerelease
- tags (e.g. "1.0-rc1") may appear after the main release
- "1.0". By specifying the suffix "-rc" in this variable,
- "1.0-rc1" will appear before "1.0".
-+
-This variable can be specified multiple times, once per suffix. The
-order of suffixes in the config file determines the sorting order
-(e.g. if "-pre" appears before "-rc" in the config file then 1.0-preXX
-is sorted before 1.0-rcXX). The sorting order between different
-suffixes is undefined if they are in multiple config files.
+versionsort.prereleaseSuffix (deprecated)::
+ Deprecated alias for `versionsort.suffix`. Ignored if
+ `versionsort.suffix` is set.
+
+versionsort.suffix::
+ Even when version sort is used in linkgit:git-tag[1], tagnames
+ with the same base version but different suffixes are still sorted
+ lexicographically, resulting e.g. in prerelease tags appearing
+ after the main release (e.g. "1.0-rc1" after "1.0"). This
+ variable can be specified to determine the sorting order of tags
+ with different suffixes.
++
+By specifying a single suffix in this variable, any tagname containing
+that suffix will appear before the corresponding main release. E.g. if
+the variable is set to "-rc", then all "1.0-rcX" tags will appear before
+"1.0". If specified multiple times, once per suffix, then the order of
+suffixes in the configuration will determine the sorting order of tagnames
+with those suffixes. E.g. if "-pre" appears before "-rc" in the
+configuration, then all "1.0-preX" tags will be listed before any
+"1.0-rcX" tags. The placement of the main release tag relative to tags
+with various suffixes can be determined by specifying the empty suffix
+among those other suffixes. E.g. if the suffixes "-rc", "", "-ck" and
+"-bfs" appear in the configuration in this order, then all "v4.8-rcX" tags
+are listed first, followed by "v4.8", then "v4.8-ckX" and finally
+"v4.8-bfsX".
++
+If more than one suffix matches the same tagname, then that tagname will
+be sorted according to the suffix which starts at the earliest position in
+the tagname. If more than one of the matching suffixes starts at
+that earliest position, then that tagname will be sorted according to the
+longest of those suffixes.
+The sorting order between different suffixes is undefined if they are
+in multiple config files.
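For illustration, here is a minimal sketch of how these settings might be
used together (the tag names and the particular suffixes are assumptions
for the example, not taken from any specific project):

----
# append an illustrative suffix ordering to the repository config
cat >>.git/config <<\EOF
[versionsort]
	suffix = "-rc"
	suffix = ""
	suffix = "-ck"
	suffix = "-bfs"
EOF

# with this ordering, v4.8-rcX tags sort before v4.8,
# followed by v4.8-ckX and finally v4.8-bfsX
git tag -l --sort=version:refname 'v4.8*'
----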
web.browser::
Specify a web browser that may be used by some commands.
Generate diffs with <n> lines of context instead of the default
of 3. This value is overridden by the -U option.
+diff.interHunkContext::
+ Show the context between diff hunks, up to the specified number
+ of lines, thereby fusing the hunks that are close to each other.
+ This value serves as the default for the `--inter-hunk-context`
+ command line option.
+
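As a rough illustration of the new variable (the value 6 below is
arbitrary):

----
# fuse hunks that are at most 6 lines apart, without having to pass
# --inter-hunk-context=6 on every invocation
git config diff.interHunkContext 6
git diff
----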
diff.external::
If this config variable is set, diff generation is not
performed using the internal diff machinery, but using the
If set, 'git diff' does not show any source or destination prefix.
diff.orderFile::
- File indicating how to order files within a diff, using
- one shell glob pattern per line.
- Can be overridden by the '-O' option to linkgit:git-diff[1].
+ File indicating how to order files within a diff.
+ See the '-O' option to linkgit:git-diff[1] for details.
+ If `diff.orderFile` is a relative pathname, it is treated as
+ relative to the top of the working tree.
diff.renameLimit::
The number of files to consider when performing the copy/rename
endif::git-format-patch[]
-O<orderfile>::
- Output the patch in the order specified in the
- <orderfile>, which has one shell glob pattern per line.
+ Control the order in which files appear in the output.
This overrides the `diff.orderFile` configuration variable
(see linkgit:git-config[1]). To cancel `diff.orderFile`,
use `-O/dev/null`.
++
+The output order is determined by the order of glob patterns in
+<orderfile>.
+All files with pathnames that match the first pattern are output
+first, all files with pathnames that match the second pattern (but not
+the first) are output next, and so on.
+All files with pathnames that do not match any pattern are output
+last, as if there was an implicit match-all pattern at the end of the
+file.
+If multiple pathnames have the same rank (they match the same pattern
+but no earlier patterns), their output order relative to each other is
+the normal order.
++
+<orderfile> is parsed as follows:
++
+--
+ - Blank lines are ignored, so they can be used as separators for
+ readability.
+
+ - Lines starting with a hash ("`#`") are ignored, so they can be used
+ for comments. Add a backslash ("`\`") to the beginning of the
+ pattern if it starts with a hash.
+
+ - Each other line contains a single pattern.
+--
++
+Patterns have the same syntax and semantics as patterns used for
+fnmatch(3) without the FNM_PATHNAME flag, except a pathname also
+matches a pattern if removing any number of the final pathname
+components matches the pattern. For example, the pattern "`foo*bar`"
+matches "`fooasdfbar`" and "`foo/bar/baz/asdf`" but not "`foobarx`".
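A small sketch of an order file and its use (the file names and patterns
are hypothetical):

----
cat >/tmp/orderfile <<\EOF
# documentation changes first
Documentation/*

# then headers; everything else falls through to the
# implicit match-all pattern at the end
*.h
EOF

git diff -O/tmp/orderfile HEAD~1
----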
ifndef::git-format-patch[]
-R::
--inter-hunk-context=<lines>::
Show the context between diff hunks, up to the specified number
of lines, thereby fusing hunks that are close to each other.
+ Defaults to `diff.interHunkContext` or 0 if the config option
+ is unset.
-W::
--function-context::
git bisect start [--term-{old,good}=<term> --term-{new,bad}=<term>]
[--no-checkout] [<bad> [<good>...]] [--] [<paths>...]
- git bisect (bad|new) [<rev>]
- git bisect (good|old) [<rev>...]
+ git bisect (bad|new|<term-new>) [<rev>]
+ git bisect (good|old|<term-old>) [<rev>...]
git bisect terms [--term-good | --term-bad]
git bisect skip [(<rev>|<range>)...]
git bisect reset [<commit>]
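For example, user-defined terms chosen at "git bisect start" time can be
used in place of good/bad in later subcommands (the term names and the
revision below are purely illustrative):

----
git bisect start --term-old=fast --term-new=slow
git bisect slow         # mark the checked-out revision with the "new" term
git bisect fast v4.8    # mark v4.8 with the "old" term
----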
[--threads <num>]
[-f <file>] [-e] <pattern>
[--and|--or|--not|(|)|-e <pattern>...]
+ [--recurse-submodules] [--parent-basename <basename>]
[ [--[no-]exclude-standard] [--cached | --no-index | --untracked] | <tree>...]
[--] [<pathspec>...]
mechanism. Only useful when searching files in the current directory
with `--no-index`.
+--recurse-submodules::
+ Recursively search in each submodule that has been initialized and
+ checked out in the repository. When used in combination with the
+ <tree> option, the prefix of all submodule output will be the name of
+ the parent project's <tree> object (see the example below).
+
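A brief sketch of the option in use (the pattern and tree name are only
examples):

----
# search the superproject and any initialized, checked-out submodules
git grep --recurse-submodules foo

# search a tree; hits inside a submodule are prefixed with the
# parent project's tree name, e.g. "HEAD:sub/file"
git grep --recurse-submodules foo HEAD
----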
+--parent-basename <basename>::
+ For internal use only. In order to produce uniform output with the
+ --recurse-submodules option, this option can be used to provide the
+ basename of a parent's <tree> object to a submodule so the submodule
+ can prefix its output with the parent's name rather than the SHA1 of
+ the submodule.
+
-a::
--text::
Process binary files as if they were text.
browser::
Start a tree browser showing all files in the specified
- commit (or `HEAD` by default). Files selected through the
+ commit. Files selected through the
browser are opened in the blame viewer.
citool::
git-p4.retries::
Specifies the number of times to retry a p4 command (notably,
'p4 sync') if the network times out. The default value is 3.
+ Set the value to 0 to disable retries or if your p4 version
+ does not support retries (pre 2012.2).
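For example, with an older p4 client one might simply turn retries off
(purely illustrative):

----
git config git-p4.retries 0
----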
Clone and sync variables
~~~~~~~~~~~~~~~~~~~~~~~~
----
prefix=$(git rev-parse --show-prefix)
cd "$(git rev-parse --show-toplevel)"
-eval "set -- $(git rev-parse --sq --prefix "$prefix" "$@")"
+# rev-parse provides the -- needed for 'set'
+eval "set $(git rev-parse --sq --prefix "$prefix" -- "$@")"
----
--verify::
'git tag' [-n[<num>]] -l [--contains <commit>] [--points-at <object>]
[--column[=<options>] | --no-column] [--create-reflog] [--sort=<key>]
[--format=<format>] [--[no-]merged [<commit>]] [<pattern>...]
-'git tag' -v <tagname>...
+'git tag' -v [--format=<format>] <tagname>...
DESCRIPTION
-----------
multiple times, in which case the last key becomes the primary
key. Also supports "version:refname" or "v:refname" (tag
names are treated as versions). The "version:refname" sort
- order can also be affected by the
- "versionsort.prereleaseSuffix" configuration variable.
+ order can also be affected by the "versionsort.suffix"
+ configuration variable.
The keys supported are the same as those in `git for-each-ref`.
Sort order defaults to the value configured for the `tag.sort`
variable if it exists, or lexicographic order otherwise. See
SYNOPSIS
--------
[verse]
-'git verify-tag' <tag>...
+'git verify-tag' [--format=<format>] <tag>...
DESCRIPTION
-----------
<9> backport a critical fix.
<10> create a signed tag.
<11> make sure master was not accidentally rewound beyond that
-already pushed out. `ko` shorthand points at the Git maintainer's
+already pushed out.
+<12> In the output from `git show-branch`, `master` should have
+everything `ko/master` has, and `next` should have
+everything `ko/next` has, etc.
+<13> push out the bleeding edge, together with new tags that point
+into the pushed history.
+
+In this example, the `ko` shorthand points at the Git maintainer's
repository at kernel.org, and looks like this:
-+
+
------------
(in .git/config)
[remote "ko"]
push = +refs/heads/pu
push = refs/heads/maint
------------
-+
-<12> In the output from `git show-branch`, `master` should have
-everything `ko/master` has, and `next` should have
-everything `ko/next` has, etc.
-<13> push out the bleeding edge, together with new tags that point
-into the pushed history.
Repository Administration[[ADMINISTRATION]]
History
-------
Gitk was the first graphical repository browser. It's written in
-tcl/tk and started off in a separate repository but was later merged
-into the main Git repository.
+tcl/tk.
+'gitk' is actually maintained as an independent project, but stable
+versions are distributed as part of the Git suite for the convenience
+of end users.
+
+gitk-git/ comes from Paul Mackerras's gitk project:
+
+ git://ozlabs.org/~paulus/gitk
SEE ALSO
--------
'qgit(1)'::
A repository browser written in C++ using Qt.
-'gitview(1)'::
- A repository browser written in Python using Gtk. It's based on
- 'bzrk(1)' and distributed in the contrib area of the Git repository.
-
'tig(1)'::
A minimal repository browser and Git tool output highlighter written
in C using Ncurses.
`void *hashmap_iter_next(struct hashmap_iter *iter)`::
`void *hashmap_iter_first(struct hashmap *map, struct hashmap_iter *iter)`::
- Used to iterate over all entries of a hashmap.
+ Used to iterate over all entries of a hashmap. Note that it is
+ not safe to add entries to or remove entries from the hashmap
+ while iterating.
+
`hashmap_iter_init` initializes a `hashmap_iter` structure.
+
+++ /dev/null
-in-core index API
-=================
-
-Talk about <read-cache.c> and <cache-tree.c>, things like:
-
-* cache -> the_index macros
-* read_index()
-* write_index()
-* ie_match_stat() and ie_modified(); how they are different and when to
- use which.
-* index_name_pos()
-* remove_index_entry_at()
-* remove_file_from_index()
-* add_file_to_index()
-* add_index_entry()
-* refresh_index()
-* discard_index()
-* cache_tree_invalidate_path()
-* cache_tree_update()
-
-(JC, Linus)
- prefix and args come from cmd_* functions
-get_pathspec() is obsolete and should never be used in new code.
-
parse_pathspec() helps catch unsupported features and reject them
politely. At a lower level, different pathspec-related functions may
not support the same set of features. Such pathspec-sensitive
Git Glossary
============
+[[git-explained]]
+Git explained
+-------------
+
include::glossary-content.txt[]
[[git-quick-start]]
Appendix B: Notes and todo list for this manual
===============================================
+[[todo-list]]
+Todo list
+---------
+
This is a work in progress.
The basic requirements:
git.res: git.rc GIT-VERSION-FILE
$(QUIET_RC)$(RC) \
$(join -DMAJOR= -DMINOR=, $(wordlist 1,2,$(subst -, ,$(subst ., ,$(GIT_VERSION))))) \
- -DGIT_VERSION="\\\"$(GIT_VERSION)\\\"" $< -o $@
+ -DGIT_VERSION="\\\"$(GIT_VERSION)\\\"" -i $< -o $@
# This makes sure we depend on the NO_PERL setting itself.
$(SCRIPT_PERL_GEN): GIT-BUILD-OPTIONS
git-imap-send$X: imap-send.o $(IMAP_SEND_BUILDDEPS) GIT-LDFLAGS $(GITLIBS)
$(QUIET_LINK)$(CC) $(ALL_CFLAGS) -o $@ $(ALL_LDFLAGS) $(filter %.o,$^) \
- $(LIBS) $(IMAP_SEND_LDFLAGS)
+ $(IMAP_SEND_LDFLAGS) $(LIBS)
git-http-fetch$X: http.o http-walker.o http-fetch.o GIT-LDFLAGS $(GITLIBS)
$(QUIET_LINK)$(CC) $(ALL_CFLAGS) -o $@ $(ALL_LDFLAGS) $(filter %.o,$^) \
return (!stat(path, &st) && S_ISDIR(st.st_mode));
}
+/* removes the last path component from 'path' except if 'path' is root */
+static void strip_last_component(struct strbuf *path)
+{
+ size_t offset = offset_1st_component(path->buf);
+ size_t len = path->len;
+
+ /* Find start of the last component */
+ while (offset < len && !is_dir_sep(path->buf[len - 1]))
+ len--;
+ /* Skip sequences of multiple path-separators */
+ while (offset < len && is_dir_sep(path->buf[len - 1]))
+ len--;
+
+ strbuf_setlen(path, len);
+}
+
+/* get (and remove) the next component in 'remaining' and place it in 'next' */
+static void get_next_component(struct strbuf *next, struct strbuf *remaining)
+{
+ char *start = NULL;
+ char *end = NULL;
+
+ strbuf_reset(next);
+
+ /* look for the next component */
+ /* Skip sequences of multiple path-separators */
+ for (start = remaining->buf; is_dir_sep(*start); start++)
+ ; /* nothing */
+ /* Find end of the path component */
+ for (end = start; *end && !is_dir_sep(*end); end++)
+ ; /* nothing */
+
+ strbuf_add(next, start, end - start);
+ /* remove the component from 'remaining' */
+ strbuf_remove(remaining, 0, end - remaining->buf);
+}
+
+/* copies root part from remaining to resolved, canonicalizing it on the way */
+static void get_root_part(struct strbuf *resolved, struct strbuf *remaining)
+{
+ int offset = offset_1st_component(remaining->buf);
+
+ strbuf_reset(resolved);
+ strbuf_add(resolved, remaining->buf, offset);
+#ifdef GIT_WINDOWS_NATIVE
+ convert_slashes(resolved->buf);
+#endif
+ strbuf_remove(remaining, 0, offset);
+}
+
/* We allow "recursive" symbolic links. Only within reason, though. */
-#define MAXDEPTH 5
+#ifndef MAXSYMLINKS
+#define MAXSYMLINKS 32
+#endif
/*
* Return the real path (i.e., absolute path, with symlinks resolved
* and extra slashes removed) equivalent to the specified path. (If
* you want an absolute path but don't mind links, use
- * absolute_path().) The return value is a pointer to a static
- * buffer.
+ * absolute_path().) Places the resolved realpath in the provided strbuf.
*
- * The input and all intermediate paths must be shorter than MAX_PATH.
* The directory part of path (i.e., everything up to the last
* dir_sep) must denote a valid, existing directory, but the last
* component need not exist. If die_on_error is set, then die with an
* informative error message if there is a problem. Otherwise, return
* NULL on errors (without generating any output).
- *
- * If path is our buffer, then return path, as it's already what the
- * user wants.
*/
-static const char *real_path_internal(const char *path, int die_on_error)
+char *strbuf_realpath(struct strbuf *resolved, const char *path,
+ int die_on_error)
{
- static struct strbuf sb = STRBUF_INIT;
+ struct strbuf remaining = STRBUF_INIT;
+ struct strbuf next = STRBUF_INIT;
+ struct strbuf symlink = STRBUF_INIT;
char *retval = NULL;
-
- /*
- * If we have to temporarily chdir(), store the original CWD
- * here so that we can chdir() back to it at the end of the
- * function:
- */
- struct strbuf cwd = STRBUF_INIT;
-
- int depth = MAXDEPTH;
- char *last_elem = NULL;
+ int num_symlinks = 0;
struct stat st;
- /* We've already done it */
- if (path == sb.buf)
- return path;
-
if (!*path) {
if (die_on_error)
die("The empty string is not a valid path");
goto error_out;
}
- strbuf_reset(&sb);
- strbuf_addstr(&sb, path);
-
- while (depth--) {
- if (!is_directory(sb.buf)) {
- char *last_slash = find_last_dir_sep(sb.buf);
- if (last_slash) {
- last_elem = xstrdup(last_slash + 1);
- strbuf_setlen(&sb, last_slash - sb.buf + 1);
- } else {
- last_elem = xmemdupz(sb.buf, sb.len);
- strbuf_reset(&sb);
- }
+ strbuf_addstr(&remaining, path);
+ get_root_part(resolved, &remaining);
+
+ if (!resolved->len) {
+ /* relative path; can use CWD as the initial resolved path */
+ if (strbuf_getcwd(resolved)) {
+ if (die_on_error)
+ die_errno("unable to get current working directory");
+ else
+ goto error_out;
}
+ }
- if (sb.len) {
- if (!cwd.len && strbuf_getcwd(&cwd)) {
+ /* Iterate over the remaining path components */
+ while (remaining.len > 0) {
+ get_next_component(&next, &remaining);
+
+ if (next.len == 0) {
+ continue; /* empty component */
+ } else if (next.len == 1 && !strcmp(next.buf, ".")) {
+ continue; /* '.' component */
+ } else if (next.len == 2 && !strcmp(next.buf, "..")) {
+ /* '..' component; strip the last path component */
+ strip_last_component(resolved);
+ continue;
+ }
+
+ /* append the next component and resolve resultant path */
+ if (!is_dir_sep(resolved->buf[resolved->len - 1]))
+ strbuf_addch(resolved, '/');
+ strbuf_addbuf(resolved, &next);
+
+ if (lstat(resolved->buf, &st)) {
+ /* error out unless this was the last component */
+ if (errno != ENOENT || remaining.len) {
if (die_on_error)
- die_errno("Could not get current working directory");
+ die_errno("Invalid path '%s'",
+ resolved->buf);
else
goto error_out;
}
+ } else if (S_ISLNK(st.st_mode)) {
+ ssize_t len;
+ strbuf_reset(&symlink);
+
+ if (num_symlinks++ > MAXSYMLINKS) {
+ errno = ELOOP;
- if (chdir(sb.buf)) {
if (die_on_error)
- die_errno("Could not switch to '%s'",
- sb.buf);
+ die("More than %d nested symlinks "
+ "on path '%s'", MAXSYMLINKS, path);
else
goto error_out;
}
- }
- if (strbuf_getcwd(&sb)) {
- if (die_on_error)
- die_errno("Could not get current working directory");
- else
- goto error_out;
- }
-
- if (last_elem) {
- if (sb.len && !is_dir_sep(sb.buf[sb.len - 1]))
- strbuf_addch(&sb, '/');
- strbuf_addstr(&sb, last_elem);
- free(last_elem);
- last_elem = NULL;
- }
- if (!lstat(sb.buf, &st) && S_ISLNK(st.st_mode)) {
- struct strbuf next_sb = STRBUF_INIT;
- ssize_t len = strbuf_readlink(&next_sb, sb.buf, 0);
+ len = strbuf_readlink(&symlink, resolved->buf,
+ st.st_size);
if (len < 0) {
if (die_on_error)
die_errno("Invalid symlink '%s'",
- sb.buf);
+ resolved->buf);
else
goto error_out;
}
- strbuf_swap(&sb, &next_sb);
- strbuf_release(&next_sb);
- } else
- break;
+
+ if (is_absolute_path(symlink.buf)) {
+ /* absolute symlink; set resolved to root */
+ get_root_part(resolved, &symlink);
+ } else {
+ /*
+ * relative symlink
+ * strip off the last component since it will
+ * be replaced with the contents of the symlink
+ */
+ strip_last_component(resolved);
+ }
+
+ /*
+ * if there are still remaining components to resolve
+ * then append them to symlink
+ */
+ if (remaining.len) {
+ strbuf_addch(&symlink, '/');
+ strbuf_addbuf(&symlink, &remaining);
+ }
+
+ /*
+ * use the symlink as the remaining components that
+ * need to be resloved
+ */
+ strbuf_swap(&symlink, &remaining);
+ }
}
- retval = sb.buf;
+ retval = resolved->buf;
+
error_out:
- free(last_elem);
- if (cwd.len && chdir(cwd.buf))
- die_errno("Could not change back to '%s'", cwd.buf);
- strbuf_release(&cwd);
+ strbuf_release(&remaining);
+ strbuf_release(&next);
+ strbuf_release(&symlink);
+
+ if (!retval)
+ strbuf_reset(resolved);
return retval;
}
const char *real_path(const char *path)
{
- return real_path_internal(path, 1);
+ static struct strbuf realpath = STRBUF_INIT;
+ return strbuf_realpath(&realpath, path, 1);
}
const char *real_path_if_valid(const char *path)
{
- return real_path_internal(path, 0);
+ static struct strbuf realpath = STRBUF_INIT;
+ return strbuf_realpath(&realpath, path, 0);
+}
+
+char *real_pathdup(const char *path)
+{
+ struct strbuf realpath = STRBUF_INIT;
+ char *retval = NULL;
+
+ if (strbuf_realpath(&realpath, path, 0))
+ retval = strbuf_detach(&realpath, NULL);
+
+ strbuf_release(&realpath);
+
+ return retval;
}
/*
*dos_time = t->tm_sec / 2 + t->tm_min * 32 + t->tm_hour * 2048;
}
+static int archive_zip_config(const char *var, const char *value, void *data)
+{
+ return userdiff_config(var, value);
+}
+
static int write_zip_archive(const struct archiver *ar,
struct archiver_args *args)
{
int err;
+ git_config(archive_zip_config, NULL);
+
dos_time(&args->time, &zip_date, &zip_time);
zip_dir = xmalloc(ZIP_DIRECTORY_MIN_SIZE);
}
/*
+ * Write out any suspect information which depends on the path. This must be
+ * handled separately from emit_one_suspect_detail(), because a given commit
+ * may have changes in multiple paths. So this needs to appear each time
+ * we mention a new group.
+ *
* To allow LF and other nonportable characters in pathnames,
* they are c-style quoted as needed.
*/
-static void write_filename_info(const char *path)
+static void write_filename_info(struct origin *suspect)
{
+ if (suspect->previous) {
+ struct origin *prev = suspect->previous;
+ printf("previous %s ", oid_to_hex(&prev->commit->object.oid));
+ write_name_quoted(prev->path, stdout, '\n');
+ }
printf("filename ");
- write_name_quoted(path, stdout, '\n');
+ write_name_quoted(suspect->path, stdout, '\n');
}
/*
printf("summary %s\n", ci.summary.buf);
if (suspect->commit->object.flags & UNINTERESTING)
printf("boundary\n");
- if (suspect->previous) {
- struct origin *prev = suspect->previous;
- printf("previous %s ", oid_to_hex(&prev->commit->object.oid));
- write_name_quoted(prev->path, stdout, '\n');
- }
commit_info_destroy(&ci);
oid_to_hex(&suspect->commit->object.oid),
ent->s_lno + 1, ent->lno + 1, ent->num_lines);
emit_one_suspect_detail(suspect, 0);
- write_filename_info(suspect->path);
+ write_filename_info(suspect);
maybe_flush_or_die(stdout, "stdout");
}
pi->blamed_lines += ent->num_lines;
{
if (emit_one_suspect_detail(suspect, repeat) ||
(suspect->commit->object.flags & MORE_THAN_ONE_PATH))
- write_filename_info(suspect->path);
+ write_filename_info(suspect);
}
static void emit_porcelain(struct scoreboard *sb, struct blame_entry *ent,
} else if (show_progress < 0)
show_progress = isatty(2);
- if (0 < abbrev)
+ if (0 < abbrev && abbrev < GIT_SHA1_HEXSZ)
/* one more abbrev length is needed for the boundary commit */
abbrev++;
+ else if (!abbrev)
+ abbrev = GIT_SHA1_HEXSZ;
if (revs_file && read_ancestry(revs_file))
die_errno("reading graft file '%s' failed", revs_file);
git_config(get_remote_group, &g);
if (list->nr == prev_nr) {
struct remote *remote = remote_get(name);
- if (!remote_is_configured(remote))
+ if (!remote_is_configured(remote, 0))
return 0;
string_list_append(list, remote->name);
}
return 0;
}
-static int fsck_sha1(const unsigned char *sha1)
-{
- struct object *obj = parse_object(sha1);
- if (!obj) {
- errors_found |= ERROR_OBJECT;
- return error("%s: object corrupt or missing",
- sha1_to_hex(sha1));
- }
- obj->flags |= HAS_OBJ;
- return fsck_obj(obj);
-}
-
static int fsck_obj_buffer(const unsigned char *sha1, enum object_type type,
unsigned long size, void *buffer, int *eaten)
{
}
}
+static struct object *parse_loose_object(const unsigned char *sha1,
+ const char *path)
+{
+ struct object *obj;
+ void *contents;
+ enum object_type type;
+ unsigned long size;
+ int eaten;
+
+ if (read_loose_object(path, sha1, &type, &size, &contents) < 0)
+ return NULL;
+
+ if (!contents && type != OBJ_BLOB)
+ die("BUG: read_loose_object streamed a non-blob");
+
+ obj = parse_object_buffer(sha1, type, size, contents, &eaten);
+
+ if (!eaten)
+ free(contents);
+ return obj;
+}
+
static int fsck_loose(const unsigned char *sha1, const char *path, void *data)
{
- if (fsck_sha1(sha1))
+ struct object *obj = parse_loose_object(sha1, path);
+
+ if (!obj) {
+ errors_found |= ERROR_OBJECT;
+ error("%s: object corrupt or missing: %s",
+ sha1_to_hex(sha1), path);
+ return 0; /* keep checking other objects */
+ }
+
+ obj->flags = HAS_OBJ;
+ if (fsck_obj(obj))
errors_found |= ERROR_OBJECT;
return 0;
}
}
}
+static void add_repack_incremental_option(void)
+{
+ argv_array_push(&repack, "--no-write-bitmap-index");
+}
+
static int need_to_gc(void)
{
/*
*/
if (too_many_packs())
add_repack_all_option();
- else if (!too_many_loose_objects())
+ else if (too_many_loose_objects())
+ add_repack_incremental_option();
+ else
return 0;
if (run_hook_le(NULL, "pre-auto-gc", NULL))
#include "quote.h"
#include "dir.h"
#include "pathspec.h"
+#include "submodule.h"
+#include "submodule-config.h"
static char const * const grep_usage[] = {
N_("git grep [<options>] [-e] <pattern> [<rev>...] [[--] <path>...]"),
NULL
};
+static const char *super_prefix;
+static int recurse_submodules;
+static struct argv_array submodule_options = ARGV_ARRAY_INIT;
+static const char *parent_basename;
+
+static int grep_submodule_launch(struct grep_opt *opt,
+ const struct grep_source *gs);
+
#define GREP_NUM_THREADS_DEFAULT 8
static int num_threads;
break;
opt->output_priv = w;
- hit |= grep_source(opt, &w->source);
+ if (w->source.type == GREP_SOURCE_SUBMODULE)
+ hit |= grep_submodule_launch(opt, &w->source);
+ else
+ hit |= grep_source(opt, &w->source);
grep_source_clear_data(&w->source);
work_done(w);
}
if (opt->relative && opt->prefix_length) {
quote_path_relative(filename + tree_name_len, opt->prefix, &pathbuf);
strbuf_insert(&pathbuf, 0, filename, tree_name_len);
+ } else if (super_prefix) {
+ strbuf_add(&pathbuf, filename, tree_name_len);
+ strbuf_addstr(&pathbuf, super_prefix);
+ strbuf_addstr(&pathbuf, filename + tree_name_len);
} else {
strbuf_addstr(&pathbuf, filename);
}
{
struct strbuf buf = STRBUF_INIT;
- if (opt->relative && opt->prefix_length)
+ if (opt->relative && opt->prefix_length) {
quote_path_relative(filename, opt->prefix, &buf);
- else
+ } else {
+ if (super_prefix)
+ strbuf_addstr(&buf, super_prefix);
strbuf_addstr(&buf, filename);
+ }
#ifndef NO_PTHREADS
if (num_threads) {
exit(status);
}
-static int grep_cache(struct grep_opt *opt, const struct pathspec *pathspec, int cached)
+static void compile_submodule_options(const struct grep_opt *opt,
+ const struct pathspec *pathspec,
+ int cached, int untracked,
+ int opt_exclude, int use_index,
+ int pattern_type_arg)
+{
+ struct grep_pat *pattern;
+ int i;
+
+ if (recurse_submodules)
+ argv_array_push(&submodule_options, "--recurse-submodules");
+
+ if (cached)
+ argv_array_push(&submodule_options, "--cached");
+ if (!use_index)
+ argv_array_push(&submodule_options, "--no-index");
+ if (untracked)
+ argv_array_push(&submodule_options, "--untracked");
+ if (opt_exclude > 0)
+ argv_array_push(&submodule_options, "--exclude-standard");
+
+ if (opt->invert)
+ argv_array_push(&submodule_options, "-v");
+ if (opt->ignore_case)
+ argv_array_push(&submodule_options, "-i");
+ if (opt->word_regexp)
+ argv_array_push(&submodule_options, "-w");
+ switch (opt->binary) {
+ case GREP_BINARY_NOMATCH:
+ argv_array_push(&submodule_options, "-I");
+ break;
+ case GREP_BINARY_TEXT:
+ argv_array_push(&submodule_options, "-a");
+ break;
+ default:
+ break;
+ }
+ if (opt->allow_textconv)
+ argv_array_push(&submodule_options, "--textconv");
+ if (opt->max_depth != -1)
+ argv_array_pushf(&submodule_options, "--max-depth=%d",
+ opt->max_depth);
+ if (opt->linenum)
+ argv_array_push(&submodule_options, "-n");
+ if (!opt->pathname)
+ argv_array_push(&submodule_options, "-h");
+ if (!opt->relative)
+ argv_array_push(&submodule_options, "--full-name");
+ if (opt->name_only)
+ argv_array_push(&submodule_options, "-l");
+ if (opt->unmatch_name_only)
+ argv_array_push(&submodule_options, "-L");
+ if (opt->null_following_name)
+ argv_array_push(&submodule_options, "-z");
+ if (opt->count)
+ argv_array_push(&submodule_options, "-c");
+ if (opt->file_break)
+ argv_array_push(&submodule_options, "--break");
+ if (opt->heading)
+ argv_array_push(&submodule_options, "--heading");
+ if (opt->pre_context)
+ argv_array_pushf(&submodule_options, "--before-context=%d",
+ opt->pre_context);
+ if (opt->post_context)
+ argv_array_pushf(&submodule_options, "--after-context=%d",
+ opt->post_context);
+ if (opt->funcname)
+ argv_array_push(&submodule_options, "-p");
+ if (opt->funcbody)
+ argv_array_push(&submodule_options, "-W");
+ if (opt->all_match)
+ argv_array_push(&submodule_options, "--all-match");
+ if (opt->debug)
+ argv_array_push(&submodule_options, "--debug");
+ if (opt->status_only)
+ argv_array_push(&submodule_options, "-q");
+
+ switch (pattern_type_arg) {
+ case GREP_PATTERN_TYPE_BRE:
+ argv_array_push(&submodule_options, "-G");
+ break;
+ case GREP_PATTERN_TYPE_ERE:
+ argv_array_push(&submodule_options, "-E");
+ break;
+ case GREP_PATTERN_TYPE_FIXED:
+ argv_array_push(&submodule_options, "-F");
+ break;
+ case GREP_PATTERN_TYPE_PCRE:
+ argv_array_push(&submodule_options, "-P");
+ break;
+ case GREP_PATTERN_TYPE_UNSPECIFIED:
+ break;
+ }
+
+ for (pattern = opt->pattern_list; pattern != NULL;
+ pattern = pattern->next) {
+ switch (pattern->token) {
+ case GREP_PATTERN:
+ argv_array_pushf(&submodule_options, "-e%s",
+ pattern->pattern);
+ break;
+ case GREP_AND:
+ case GREP_OPEN_PAREN:
+ case GREP_CLOSE_PAREN:
+ case GREP_NOT:
+ case GREP_OR:
+ argv_array_push(&submodule_options, pattern->pattern);
+ break;
+ /* BODY and HEAD are not used by git-grep */
+ case GREP_PATTERN_BODY:
+ case GREP_PATTERN_HEAD:
+ break;
+ }
+ }
+
+ /*
+ * Limit number of threads for child process to use.
+ * This is to prevent potential fork-bomb behavior of git-grep as each
+ * submodule process has its own thread pool.
+ */
+ argv_array_pushf(&submodule_options, "--threads=%d",
+ (num_threads + 1) / 2);
+
+ /* Add Pathspecs */
+ argv_array_push(&submodule_options, "--");
+ for (i = 0; i < pathspec->nr; i++)
+ argv_array_push(&submodule_options,
+ pathspec->items[i].original);
+}
+
+/*
+ * Launch child process to grep contents of a submodule
+ */
+static int grep_submodule_launch(struct grep_opt *opt,
+ const struct grep_source *gs)
+{
+ struct child_process cp = CHILD_PROCESS_INIT;
+ int status, i;
+ const char *end_of_base;
+ const char *name;
+ struct work_item *w = opt->output_priv;
+
+ end_of_base = strchr(gs->name, ':');
+ if (gs->identifier && end_of_base)
+ name = end_of_base + 1;
+ else
+ name = gs->name;
+
+ prepare_submodule_repo_env(&cp.env_array);
+ argv_array_push(&cp.env_array, GIT_DIR_ENVIRONMENT);
+
+ /* Add super prefix */
+ argv_array_pushf(&cp.args, "--super-prefix=%s%s/",
+ super_prefix ? super_prefix : "",
+ name);
+ argv_array_push(&cp.args, "grep");
+
+ /*
+ * Add basename of parent project
+ * When performing grep on a tree object the filename is prefixed
+ * with the object's name: 'tree-name:filename'. In order to
+ * provide uniformity of output we want to pass the name of the
+ * parent project's object name to the submodule so the submodule can
+ * prefix its output with the parent's name and not its own SHA1.
+ */
+ if (gs->identifier && end_of_base)
+ argv_array_pushf(&cp.args, "--parent-basename=%.*s",
+ (int) (end_of_base - gs->name),
+ gs->name);
+
+ /* Add options */
+ for (i = 0; i < submodule_options.argc; i++) {
+ /*
+ * If there is a tree identifier for the submodule, add the
+ * rev after adding the submodule options but before the
+ * pathspecs. To do this we listen for the '--' and insert the
+ * sha1 before pushing the '--' onto the child process argv
+ * array.
+ */
+ if (gs->identifier &&
+ !strcmp("--", submodule_options.argv[i])) {
+ argv_array_push(&cp.args, sha1_to_hex(gs->identifier));
+ }
+
+ argv_array_push(&cp.args, submodule_options.argv[i]);
+ }
+
+ cp.git_cmd = 1;
+ cp.dir = gs->path;
+
+ /*
+ * Capture output to output buffer and check the return code from the
+ * child process. A '0' indicates a hit, a '1' indicates no hit and
+ * anything else is an error.
+ */
+ status = capture_command(&cp, &w->out, 0);
+ if (status && (status != 1)) {
+ /* flush the buffer */
+ write_or_die(1, w->out.buf, w->out.len);
+ die("process for submodule '%s' failed with exit code: %d",
+ gs->name, status);
+ }
+
+ /* invert the return code to make a hit equal to 1 */
+ return !status;
+}
+
+/*
+ * Prep grep structures for a submodule grep
+ * sha1: the sha1 of the submodule or NULL if using the working tree
+ * filename: name of the submodule including tree name of parent
+ * path: location of the submodule
+ */
+static int grep_submodule(struct grep_opt *opt, const unsigned char *sha1,
+ const char *filename, const char *path)
+{
+ if (!is_submodule_initialized(path))
+ return 0;
+ if (!is_submodule_populated(path)) {
+ /*
+ * If searching history, check for the presence of the
+ * submodule's gitdir before skipping the submodule.
+ */
+ if (sha1) {
+ const struct submodule *sub =
+ submodule_from_path(null_sha1, path);
+ if (sub)
+ path = git_path("modules/%s", sub->name);
+
+ if (!(is_directory(path) && is_git_directory(path)))
+ return 0;
+ } else {
+ return 0;
+ }
+ }
+
+#ifndef NO_PTHREADS
+ if (num_threads) {
+ add_work(opt, GREP_SOURCE_SUBMODULE, filename, path, sha1);
+ return 0;
+ } else
+#endif
+ {
+ struct work_item w;
+ int hit;
+
+ grep_source_init(&w.source, GREP_SOURCE_SUBMODULE,
+ filename, path, sha1);
+ strbuf_init(&w.out, 0);
+ opt->output_priv = &w;
+ hit = grep_submodule_launch(opt, &w.source);
+
+ write_or_die(1, w.out.buf, w.out.len);
+
+ grep_source_clear(&w.source);
+ strbuf_release(&w.out);
+ return hit;
+ }
+}
+
+static int grep_cache(struct grep_opt *opt, const struct pathspec *pathspec,
+ int cached)
{
int hit = 0;
int nr;
+ struct strbuf name = STRBUF_INIT;
+ int name_base_len = 0;
+ if (super_prefix) {
+ name_base_len = strlen(super_prefix);
+ strbuf_addstr(&name, super_prefix);
+ }
+
read_cache();
for (nr = 0; nr < active_nr; nr++) {
const struct cache_entry *ce = active_cache[nr];
- if (!S_ISREG(ce->ce_mode))
- continue;
- if (!ce_path_match(ce, pathspec, NULL))
+ strbuf_setlen(&name, name_base_len);
+ strbuf_addstr(&name, ce->name);
+
+ if (S_ISREG(ce->ce_mode) &&
+ match_pathspec(pathspec, name.buf, name.len, 0, NULL,
+ S_ISDIR(ce->ce_mode) ||
+ S_ISGITLINK(ce->ce_mode))) {
+ /*
+ * If CE_VALID is on, we assume worktree file and its
+ * cache entry are identical, even if worktree file has
+ * been modified, so use cache version instead
+ */
+ if (cached || (ce->ce_flags & CE_VALID) ||
+ ce_skip_worktree(ce)) {
+ if (ce_stage(ce) || ce_intent_to_add(ce))
+ continue;
+ hit |= grep_sha1(opt, ce->oid.hash, ce->name,
+ 0, ce->name);
+ } else {
+ hit |= grep_file(opt, ce->name);
+ }
+ } else if (recurse_submodules && S_ISGITLINK(ce->ce_mode) &&
+ submodule_path_match(pathspec, name.buf, NULL)) {
+ hit |= grep_submodule(opt, NULL, ce->name, ce->name);
+ } else {
continue;
- /*
- * If CE_VALID is on, we assume worktree file and its cache entry
- * are identical, even if worktree file has been modified, so use
- * cache version instead
- */
- if (cached || (ce->ce_flags & CE_VALID) || ce_skip_worktree(ce)) {
- if (ce_stage(ce) || ce_intent_to_add(ce))
- continue;
- hit |= grep_sha1(opt, ce->oid.hash, ce->name, 0,
- ce->name);
}
- else
- hit |= grep_file(opt, ce->name);
+
if (ce_stage(ce)) {
do {
nr++;
if (hit && opt->status_only)
break;
}
+
+ strbuf_release(&name);
return hit;
}
enum interesting match = entry_not_interesting;
struct name_entry entry;
int old_baselen = base->len;
+ struct strbuf name = STRBUF_INIT;
+ int name_base_len = 0;
+ if (super_prefix) {
+ strbuf_addstr(&name, super_prefix);
+ name_base_len = name.len;
+ }
while (tree_entry(tree, &entry)) {
int te_len = tree_entry_len(&entry);
if (match != all_entries_interesting) {
- match = tree_entry_interesting(&entry, base, tn_len, pathspec);
+ strbuf_addstr(&name, base->buf + tn_len);
+ match = tree_entry_interesting(&entry, &name,
+ 0, pathspec);
+ strbuf_setlen(&name, name_base_len);
+
if (match == all_entries_not_interesting)
break;
if (match == entry_not_interesting)
if (S_ISREG(entry.mode)) {
hit |= grep_sha1(opt, entry.oid->hash, base->buf, tn_len,
check_attr ? base->buf + tn_len : NULL);
- }
- else if (S_ISDIR(entry.mode)) {
+ } else if (S_ISDIR(entry.mode)) {
enum object_type type;
struct tree_desc sub;
void *data;
hit |= grep_tree(opt, pathspec, &sub, base, tn_len,
check_attr);
free(data);
+ } else if (recurse_submodules && S_ISGITLINK(entry.mode)) {
+ hit |= grep_submodule(opt, entry.oid->hash, base->buf,
+ base->buf + tn_len);
}
+
strbuf_setlen(base, old_baselen);
if (hit && opt->status_only)
break;
}
+
+ strbuf_release(&name);
return hit;
}
if (!data)
die(_("unable to read tree (%s)"), oid_to_hex(&obj->oid));
+ /* Use parent's name as base when recursing submodules */
+ if (recurse_submodules && parent_basename)
+ name = parent_basename;
+
len = name ? strlen(name) : 0;
strbuf_init(&base, PATH_MAX + len + 1);
if (len) {
for (i = 0; i < nr; i++) {
struct object *real_obj;
real_obj = deref_tag(list->objects[i].item, NULL, 0);
+
+ /* load the gitmodules file for this rev */
+ if (recurse_submodules) {
+ submodule_free();
+ gitmodules_config_sha1(real_obj->oid.hash);
+ }
if (grep_object(opt, pathspec, real_obj, list->objects[i].name, list->objects[i].path)) {
hit = 1;
if (opt->status_only)
N_("search in both tracked and untracked files")),
OPT_SET_INT(0, "exclude-standard", &opt_exclude,
N_("ignore files specified via '.gitignore'"), 1),
+ OPT_BOOL(0, "recurse-submodules", &recurse_submodules,
+ N_("recursivley search in each submodule")),
+ OPT_STRING(0, "parent-basename", &parent_basename,
+ N_("basename"),
+ N_("prepend parent project's basename to output")),
OPT_GROUP(""),
OPT_BOOL('v', "invert-match", &opt.invert,
N_("show non-matching lines")),
init_grep_defaults();
git_config(grep_cmd_config, NULL);
grep_init(&opt, prefix);
+ super_prefix = get_super_prefix();
/*
* If there is no -- then the paths must exist in the working
pathspec.max_depth = opt.max_depth;
pathspec.recursive = 1;
+ if (recurse_submodules) {
+ gitmodules_config();
+ compile_submodule_options(&opt, &pathspec, cached, untracked,
+ opt_exclude, use_index,
+ pattern_type_arg);
+ }
+
if (show_in_pager && (cached || list.nr))
die(_("--open-files-in-pager only works on the worktree"));
}
}
+ if (recurse_submodules && (!use_index || untracked))
+ die(_("option not supported with --recurse-submodules."));
+
if (!show_in_pager && !opt.status_only)
setup_pager();
{
int reinit;
int exist_ok = flags & INIT_DB_EXIST_OK;
- char *original_git_dir = xstrdup(real_path(git_dir));
+ char *original_git_dir = real_pathdup(git_dir);
if (real_git_dir) {
struct stat st;
argc = parse_options(argc, argv, prefix, init_db_options, init_db_usage, 0);
if (real_git_dir && !is_absolute_path(real_git_dir))
- real_git_dir = xstrdup(real_path(real_git_dir));
+ real_git_dir = real_pathdup(real_git_dir);
if (argc == 1) {
int mkdir_tried = 0;
const char *git_dir_parent = strrchr(git_dir, '/');
if (git_dir_parent) {
char *rel = xstrndup(git_dir, git_dir_parent - git_dir);
- git_work_tree_cfg = xstrdup(real_path(rel));
+ git_work_tree_cfg = real_pathdup(rel);
free(rel);
}
if (!git_work_tree_cfg)
static int show_recursive(const char *base, int baselen, const char *pathname)
{
- const char **s;
+ int i;
if (ls_options & LS_RECURSIVE)
return 1;
- s = pathspec._raw;
- if (!s)
+ if (!pathspec.nr)
return 0;
- for (;;) {
- const char *spec = *s++;
+ for (i = 0; i < pathspec.nr; i++) {
+ const char *spec = pathspec.items[i].match;
int len, speclen;
- if (!spec)
- return 0;
if (strncmp(base, spec, baselen))
continue;
len = strlen(pathname);
continue;
return 1;
}
+ return 0;
}
static int show_tree(const unsigned char *sha1, struct strbuf *base,
* cannot be lifted until it is converted to use
* match_pathspec() or tree_entry_interesting()
*/
- parse_pathspec(&pathspec, PATHSPEC_GLOB | PATHSPEC_ICASE |
- PATHSPEC_EXCLUDE,
+ parse_pathspec(&pathspec, PATHSPEC_ALL_MAGIC &
+ ~(PATHSPEC_FROMTOP | PATHSPEC_LITERAL),
PATHSPEC_PREFER_CWD,
prefix, argv + 1);
for (i = 0; i < pathspec.nr; i++)
* Copyright (C) 2006 Johannes Schindelin
*/
#include "builtin.h"
+#include "pathspec.h"
#include "lockfile.h"
#include "dir.h"
#include "cache-tree.h"
#define DUP_BASENAME 1
#define KEEP_TRAILING_SLASH 2
-static const char **internal_copy_pathspec(const char *prefix,
- const char **pathspec,
- int count, unsigned flags)
+static const char **internal_prefix_pathspec(const char *prefix,
+ const char **pathspec,
+ int count, unsigned flags)
{
int i;
const char **result;
+ int prefixlen = prefix ? strlen(prefix) : 0;
ALLOC_ARRAY(result, count + 1);
- COPY_ARRAY(result, pathspec, count);
- result[count] = NULL;
+
+ /* Create an intermediate copy of the pathspec based on the flags */
for (i = 0; i < count; i++) {
- int length = strlen(result[i]);
+ int length = strlen(pathspec[i]);
int to_copy = length;
+ char *it;
while (!(flags & KEEP_TRAILING_SLASH) &&
- to_copy > 0 && is_dir_sep(result[i][to_copy - 1]))
+ to_copy > 0 && is_dir_sep(pathspec[i][to_copy - 1]))
to_copy--;
- if (to_copy != length || flags & DUP_BASENAME) {
- char *it = xmemdupz(result[i], to_copy);
- if (flags & DUP_BASENAME) {
- result[i] = xstrdup(basename(it));
- free(it);
- } else
- result[i] = it;
+
+ it = xmemdupz(pathspec[i], to_copy);
+ if (flags & DUP_BASENAME) {
+ result[i] = xstrdup(basename(it));
+ free(it);
+ } else {
+ result[i] = it;
}
}
- return get_pathspec(prefix, result);
+ result[count] = NULL;
+
+ /* Prefix the pathspec and free the old intermediate strings */
+ for (i = 0; i < count; i++) {
+ const char *match = prefix_path(prefix, prefixlen, result[i]);
+ free((char *) result[i]);
+ result[i] = match;
+ }
+
+ return result;
}
static const char *add_slash(const char *path)
if (read_cache() < 0)
die(_("index file corrupt"));
- source = internal_copy_pathspec(prefix, argv, argc, 0);
+ source = internal_prefix_pathspec(prefix, argv, argc, 0);
modes = xcalloc(argc, sizeof(enum update_mode));
/*
* Keep trailing slash, needed to let
flags = KEEP_TRAILING_SLASH;
if (argc == 1 && is_directory(argv[0]) && !is_directory(argv[1]))
flags = 0;
- dest_path = internal_copy_pathspec(prefix, argv + argc, 1, flags);
+ dest_path = internal_prefix_pathspec(prefix, argv + argc, 1, flags);
submodule_gitfile = xcalloc(argc, sizeof(char *));
if (dest_path[0][0] == '\0')
/* special case: "." was normalized to "" */
- destination = internal_copy_pathspec(dest_path[0], argv, argc, DUP_BASENAME);
+ destination = internal_prefix_pathspec(dest_path[0], argv, argc, DUP_BASENAME);
else if (!lstat(dest_path[0], &st) &&
S_ISDIR(st.st_mode)) {
dest_path[0] = add_slash(dest_path[0]);
- destination = internal_copy_pathspec(dest_path[0], argv, argc, DUP_BASENAME);
+ destination = internal_prefix_pathspec(dest_path[0], argv, argc, DUP_BASENAME);
} else {
if (argc != 1)
die(_("destination '%s' is not a directory"), dest_path[0]);
flags |= TRANSPORT_RECURSE_SUBMODULES_CHECK;
else if (recurse_submodules == RECURSE_SUBMODULES_ON_DEMAND)
flags |= TRANSPORT_RECURSE_SUBMODULES_ON_DEMAND;
+ else if (recurse_submodules == RECURSE_SUBMODULES_ONLY)
+ flags |= TRANSPORT_RECURSE_SUBMODULES_ONLY;
if (tags)
add_refspec("refs/tags/*");
url = argv[1];
remote = remote_get(name);
- if (remote_is_configured(remote))
+ if (remote_is_configured(remote, 1))
die(_("remote %s already exists."), name);
strbuf_addf(&buf2, "refs/heads/test:refs/remotes/%s/test", name);
rename.remote_branches = &remote_branches;
oldremote = remote_get(rename.old);
- if (!remote_is_configured(oldremote))
+ if (!remote_is_configured(oldremote, 1))
die(_("No such remote: %s"), rename.old);
if (!strcmp(rename.old, rename.new) && oldremote->origin != REMOTE_CONFIG)
return migrate_file(oldremote);
newremote = remote_get(rename.new);
- if (remote_is_configured(newremote))
+ if (remote_is_configured(newremote, 1))
die(_("remote %s already exists."), rename.new);
strbuf_addf(&buf, "refs/heads/test:refs/remotes/%s/test", rename.new);
usage_with_options(builtin_remote_rm_usage, options);
remote = remote_get(argv[1]);
- if (!remote_is_configured(remote))
+ if (!remote_is_configured(remote, 1))
die(_("No such remote: %s"), argv[1]);
known_remotes.to_delete = remote;
strbuf_addf(&key, "remote.%s.fetch", remotename);
remote = remote_get(remotename);
- if (!remote_is_configured(remote))
+ if (!remote_is_configured(remote, 1))
die(_("No such remote '%s'"), remotename);
if (!add_mode && remove_all_fetch_refspecs(remotename, key.buf)) {
remotename = argv[0];
remote = remote_get(remotename);
- if (!remote_is_configured(remote))
+ if (!remote_is_configured(remote, 1))
die(_("No such remote '%s'"), remotename);
url_nr = 0;
oldurl = newurl;
remote = remote_get(remotename);
- if (!remote_is_configured(remote))
+ if (!remote_is_configured(remote, 1))
die(_("No such remote '%s'"), remotename);
if (push_mode) {
NULL
};
+static const char incremental_bitmap_conflict_error[] = N_(
+"Incremental repacks are incompatible with bitmap indexes. Use\n"
+"--no-write-bitmap-index or disable the pack.writebitmaps configuration."
+);
+
+
static int repack_config(const char *var, const char *value, void *cb)
{
if (!strcmp(var, "repack.usedeltabaseoffset")) {
if (pack_kept_objects < 0)
pack_kept_objects = write_bitmaps;
+ if (write_bitmaps && !(pack_everything & ALL_INTO_ONE))
+ die(_(incremental_bitmap_conflict_error));
+
packdir = mkpathdup("%s/pack", get_object_directory());
packtmp = mkpathdup("%s/.tmp-%d-pack", packdir, (int)getpid());
}
}
-static void error_removing_concrete_submodules(struct string_list *files, int *errs)
-{
- print_error_files(files,
- Q_("the following submodule (or one of its nested "
- "submodules)\n"
- "uses a .git directory:",
- "the following submodules (or one of their nested "
- "submodules)\n"
- "use a .git directory:", files->nr),
- _("\n(use 'rm -rf' if you really want to remove "
- "it including all of its history)"),
- errs);
- string_list_clear(files, 0);
-}
-
-static int check_submodules_use_gitfiles(void)
+static void submodules_absorb_gitdir_if_needed(const char *prefix)
{
int i;
- int errs = 0;
- struct string_list files = STRING_LIST_INIT_NODUP;
-
for (i = 0; i < list.nr; i++) {
const char *name = list.entry[i].name;
int pos;
continue;
if (!submodule_uses_gitfile(name))
- string_list_append(&files, name);
+ absorb_git_dir_into_superproject(prefix, name,
+ ABSORB_GITDIR_RECURSE_SUBMODULES);
}
-
- error_removing_concrete_submodules(&files, &errs);
-
- return errs;
}
static int check_local_mod(struct object_id *head, int index_only)
int errs = 0;
struct string_list files_staged = STRING_LIST_INIT_NODUP;
struct string_list files_cached = STRING_LIST_INIT_NODUP;
- struct string_list files_submodule = STRING_LIST_INIT_NODUP;
struct string_list files_local = STRING_LIST_INIT_NODUP;
no_head = is_null_oid(head);
*/
if (ce_match_stat(ce, &st, 0) ||
(S_ISGITLINK(ce->ce_mode) &&
- !ok_to_remove_submodule(ce->name)))
+ bad_to_remove_submodule(ce->name,
+ SUBMODULE_REMOVAL_DIE_ON_ERROR |
+ SUBMODULE_REMOVAL_IGNORE_IGNORED_UNTRACKED)))
local_changes = 1;
/*
else if (!index_only) {
if (staged_changes)
string_list_append(&files_cached, name);
- if (local_changes) {
- if (S_ISGITLINK(ce->ce_mode) &&
- !submodule_uses_gitfile(name))
- string_list_append(&files_submodule, name);
- else
- string_list_append(&files_local, name);
- }
+ if (local_changes)
+ string_list_append(&files_local, name);
}
}
print_error_files(&files_staged,
&errs);
string_list_clear(&files_cached, 0);
- error_removing_concrete_submodules(&files_submodule, &errs);
-
print_error_files(&files_local,
Q_("the following file has local modifications:",
"the following files have local modifications:",
exit(0);
}
+ if (!index_only)
+ submodules_absorb_gitdir_if_needed(prefix);
+
/*
* If not forced, the file, the index and the HEAD (if exists)
* must match; but the file can already been removed, since
oidclr(&oid);
if (check_local_mod(&oid, index_only))
exit(1);
- } else if (!index_only) {
- if (check_submodules_use_gitfiles())
- exit(1);
}
/*
*/
if (!index_only) {
int removed = 0, gitmodules_modified = 0;
- struct strbuf buf = STRBUF_INIT;
for (i = 0; i < list.nr; i++) {
const char *path = list.entry[i].name;
if (list.entry[i].is_submodule) {
- if (is_empty_dir(path)) {
- if (!rmdir(path)) {
- removed = 1;
- if (!remove_path_from_gitmodules(path))
- gitmodules_modified = 1;
- continue;
- }
- } else {
- strbuf_reset(&buf);
- strbuf_addstr(&buf, path);
- if (!remove_dir_recursively(&buf, 0)) {
- removed = 1;
- if (!remove_path_from_gitmodules(path))
- gitmodules_modified = 1;
- strbuf_release(&buf);
- continue;
- } else if (!file_exists(path))
- /* Submodule was removed by user */
- if (!remove_path_from_gitmodules(path))
- gitmodules_modified = 1;
- /* Fallthrough and let remove_path() fail. */
- }
+ struct strbuf buf = STRBUF_INIT;
+
+ strbuf_addstr(&buf, path);
+ if (remove_dir_recursively(&buf, 0))
+ die(_("could not remove '%s'"), path);
+ strbuf_release(&buf);
+
+ removed = 1;
+ if (!remove_path_from_gitmodules(path))
+ gitmodules_modified = 1;
+ continue;
}
if (!remove_path(path)) {
removed = 1;
if (!removed)
die_errno("git rm: '%s'", path);
}
- strbuf_release(&buf);
if (gitmodules_modified)
stage_updated_gitmodules();
}
/* Only loads from .gitmodules, no overlay with .git/config */
gitmodules_config();
- if (prefix) {
- strbuf_addf(&sb, "%s%s", prefix, path);
+ if (prefix && get_super_prefix())
+ die("BUG: cannot have prefix and superprefix");
+ else if (prefix)
+ displaypath = xstrdup(relative_path(path, prefix, &sb));
+ else if (get_super_prefix()) {
+ strbuf_addf(&sb, "%s%s", get_super_prefix(), path);
displaypath = strbuf_detach(&sb, NULL);
} else
displaypath = xstrdup(path);
int i;
struct option module_init_options[] = {
- OPT_STRING(0, "prefix", &prefix,
- N_("path"),
- N_("alternative anchor for relative paths")),
OPT__QUIET(&quiet, N_("Suppress output for initializing a submodule")),
OPT_END()
};
{"relative-path", resolve_relative_path, 0},
{"resolve-relative-url", resolve_relative_url, 0},
{"resolve-relative-url-test", resolve_relative_url_test, 0},
- {"init", module_init, 0},
+ {"init", module_init, SUPPORT_SUPER_PREFIX},
{"remote-branch", resolve_remote_submodule_branch, 0},
{"absorb-git-dirs", absorb_git_dirs, SUPPORT_SUPER_PREFIX},
};
N_("git tag -d <tagname>..."),
N_("git tag -l [-n[<num>]] [--contains <commit>] [--points-at <object>]"
"\n\t\t[--format=<format>] [--[no-]merged [<commit>]] [<pattern>...]"),
- N_("git tag -v <tagname>..."),
+ N_("git tag -v [--format=<format>] <tagname>..."),
NULL
};
}
typedef int (*each_tag_name_fn)(const char *name, const char *ref,
- const unsigned char *sha1);
+ const unsigned char *sha1, const void *cb_data);
-static int for_each_tag_name(const char **argv, each_tag_name_fn fn)
+static int for_each_tag_name(const char **argv, each_tag_name_fn fn,
+ const void *cb_data)
{
const char **p;
char ref[PATH_MAX];
had_error = 1;
continue;
}
- if (fn(*p, ref, sha1))
+ if (fn(*p, ref, sha1, cb_data))
had_error = 1;
}
return had_error;
}
static int delete_tag(const char *name, const char *ref,
- const unsigned char *sha1)
+ const unsigned char *sha1, const void *cb_data)
{
if (delete_ref(ref, sha1, 0))
return 1;
}
static int verify_tag(const char *name, const char *ref,
- const unsigned char *sha1)
+ const unsigned char *sha1, const void *cb_data)
{
- return gpg_verify_tag(sha1, name, GPG_VERIFY_VERBOSE);
+ int flags;
+ const char *fmt_pretty = cb_data;
+ flags = GPG_VERIFY_VERBOSE;
+
+ if (fmt_pretty)
+ flags = GPG_VERIFY_OMIT_STATUS;
+
+ if (gpg_verify_tag(sha1, name, flags))
+ return -1;
+
+ if (fmt_pretty)
+ pretty_print_ref(name, sha1, fmt_pretty);
+
+ return 0;
}
static int do_sign(struct strbuf *buffer)
if (filter.merge_commit)
die(_("--merged and --no-merged option are only allowed with -l"));
if (cmdmode == 'd')
- return for_each_tag_name(argv, delete_tag);
- if (cmdmode == 'v')
- return for_each_tag_name(argv, verify_tag);
+ return for_each_tag_name(argv, delete_tag, NULL);
+ if (cmdmode == 'v') {
+ if (format)
+ verify_ref_format(format);
+ return for_each_tag_name(argv, verify_tag, format);
+ }
if (msg.given || msgfile) {
if (msg.given && msgfile)
#include <signal.h>
#include "parse-options.h"
#include "gpg-interface.h"
+#include "ref-filter.h"
static const char * const verify_tag_usage[] = {
- N_("git verify-tag [-v | --verbose] <tag>..."),
+ N_("git verify-tag [-v | --verbose] [--format=<format>] <tag>..."),
NULL
};
{
int i = 1, verbose = 0, had_error = 0;
unsigned flags = 0;
+ char *fmt_pretty = NULL;
const struct option verify_tag_options[] = {
OPT__VERBOSE(&verbose, N_("print tag contents")),
OPT_BIT(0, "raw", &flags, N_("print raw gpg status output"), GPG_VERIFY_RAW),
+ OPT_STRING(0, "format", &fmt_pretty, N_("format"), N_("format to use for the output")),
OPT_END()
};
if (verbose)
flags |= GPG_VERIFY_VERBOSE;
+ if (fmt_pretty) {
+ verify_ref_format(fmt_pretty);
+ flags |= GPG_VERIFY_OMIT_STATUS;
+ }
+
while (i < argc) {
unsigned char sha1[20];
const char *name = argv[i++];
- if (get_sha1(name, sha1))
+ if (get_sha1(name, sha1)) {
had_error = !!error("tag '%s' not found.", name);
- else if (gpg_verify_tag(sha1, name, flags))
+ continue;
+ }
+
+ if (gpg_verify_tag(sha1, name, flags)) {
had_error = 1;
+ continue;
+ }
+
+ if (fmt_pretty)
+ pretty_print_ref(name, sha1, fmt_pretty);
}
return had_error;
}
#define ALTERNATE_DB_ENVIRONMENT "GIT_ALTERNATE_OBJECT_DIRECTORIES"
-extern const char **get_pathspec(const char *prefix, const char **pathspec);
extern void setup_work_tree(void);
extern const char *setup_git_directory_gently(int *);
extern const char *setup_git_directory(void);
extern int index_dir_exists(struct index_state *istate, const char *name, int namelen);
extern void adjust_dirname_case(struct index_state *istate, char *name);
extern struct cache_entry *index_file_exists(struct index_state *istate, const char *name, int namelen, int igncase);
+
+/*
+ * Searches for an entry defined by name and namelen in the given index.
+ * If the return value is positive (including 0) it is the position of an
+ * exact match. If the return value is negative, the negated value minus 1
+ * is the position where the entry would be inserted.
+ * Example: The current index consists of these files and their stages:
+ *
+ * b#0, d#0, f#1, f#3
+ *
+ * index_name_pos(&index, "a", 1) -> -1
+ * index_name_pos(&index, "b", 1) -> 0
+ * index_name_pos(&index, "c", 1) -> -2
+ * index_name_pos(&index, "d", 1) -> 1
+ * index_name_pos(&index, "e", 1) -> -3
+ * index_name_pos(&index, "f", 1) -> -3
+ * index_name_pos(&index, "g", 1) -> -5
+ */
extern int index_name_pos(const struct index_state *, const char *name, int namelen);
+
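
A minimal caller sketch, not part of the patch above: the helper name
find_or_insert_pos() is hypothetical, but it shows how the negative
return value of index_name_pos() maps back to an insertion position.

	static struct cache_entry *find_or_insert_pos(struct index_state *istate,
						      const char *path,
						      int *insert_pos)
	{
		int pos = index_name_pos(istate, path, strlen(path));

		if (pos >= 0)
			return istate->cache[pos];	/* exact match */
		*insert_pos = -pos - 1;			/* where it would be inserted */
		return NULL;
	}
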
#define ADD_CACHE_OK_TO_ADD 1 /* Ok to add */
#define ADD_CACHE_OK_TO_REPLACE 2 /* Ok to replace file/directory */
#define ADD_CACHE_SKIP_DFCHECK 4 /* Ok to skip DF conflict checks */
#define ADD_CACHE_KEEP_CACHE_TREE 32 /* Do not invalidate cache-tree */
extern int add_index_entry(struct index_state *, struct cache_entry *ce, int option);
extern void rename_index_entry_at(struct index_state *, int pos, const char *new_name);
+
+/* Remove entry, return true if there are more entries to go. */
extern int remove_index_entry_at(struct index_state *, int pos);
+
extern void remove_marked_cache_entries(struct index_state *istate);
extern int remove_file_from_index(struct index_state *, const char *path);
#define ADD_CACHE_VERBOSE 1
#define ADD_CACHE_IGNORE_ERRORS 4
#define ADD_CACHE_IGNORE_REMOVAL 8
#define ADD_CACHE_INTENT 16
+/*
+ * These two are used to add the contents of the file at path
+ * to the index, marking the working tree up-to-date by storing
+ * the cached stat info in the resulting cache entry. A caller
+ * that has already run lstat(2) on the path can call
+ * add_to_index(), and all others can call add_file_to_index();
+ * the latter will do necessary lstat(2) internally before
+ * calling the former.
+ */
extern int add_to_index(struct index_state *, const char *path, struct stat *, int flags);
extern int add_file_to_index(struct index_state *, const char *path, int flags);
+
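
A hypothetical caller sketch (stage_path() is made up for illustration):
a caller that already has lstat(2) data passes it to add_to_index(),
while everyone else lets add_file_to_index() do the lstat(2) itself.

	static int stage_path(struct index_state *istate, const char *path,
			      struct stat *st_or_null)
	{
		if (st_or_null)
			return add_to_index(istate, path, st_or_null,
					    ADD_CACHE_VERBOSE);
		return add_file_to_index(istate, path, ADD_CACHE_VERBOSE);
	}
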
extern struct cache_entry *make_cache_entry(unsigned int mode, const unsigned char *sha1, const char *path, int stage, unsigned int refresh_options);
extern int chmod_index_entry(struct index_state *, struct cache_entry *ce, char flip);
extern int ce_same_name(const struct cache_entry *a, const struct cache_entry *b);
extern void set_object_name_for_intent_to_add_entry(struct cache_entry *ce);
extern int index_name_is_other(const struct index_state *, const char *, int);
-extern void *read_blob_data_from_index(struct index_state *, const char *, unsigned long *);
+extern void *read_blob_data_from_index(const struct index_state *, const char *, unsigned long *);
/* do stat comparison even if CE_VALID is true */
#define CE_MATCH_IGNORE_VALID 01
return is_dir_sep(path[0]) || has_dos_drive_prefix(path);
}
int is_directory(const char *);
+char *strbuf_realpath(struct strbuf *resolved, const char *path,
+ int die_on_error);
const char *real_path(const char *path);
const char *real_path_if_valid(const char *path);
+char *real_pathdup(const char *path);
const char *absolute_path(const char *path);
const char *remove_leading_path(const char *in, const char *prefix);
const char *relative_path(const char *in, const char *prefix, struct strbuf *sb);
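
A short sketch of the new helper, under the assumption (based on the
callers converted later in this series) that real_pathdup() returns an
allocated copy the caller must free; show_canonical() is hypothetical.

	static void show_canonical(const char *maybe_relative)
	{
		char *canon = real_pathdup(maybe_relative);

		fprintf(stderr, "%s -> %s\n", maybe_relative, canon);
		free(canon);	/* unlike real_path(), not a shared static buffer */
	}
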
extern int has_sha1_pack(const unsigned char *sha1);
+/*
+ * Open the loose object at path, check its sha1, and return the contents,
+ * type, and size. If the object is a blob, then "contents" may be set to NULL,
+ * to allow streaming of large blobs.
+ *
+ * Returns 0 on success, negative on error (details may be written to stderr).
+ */
+int read_loose_object(const char *path,
+ const unsigned char *expected_sha1,
+ enum object_type *type,
+ unsigned long *size,
+ void **contents);
+
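
A hypothetical sketch of how a fsck-like caller might use the new
read_loose_object() entry point; check_one_loose() is made up, and only
the declared signature above is assumed.

	static int check_one_loose(const char *path,
				   const unsigned char *expected_sha1)
	{
		enum object_type type;
		unsigned long size;
		void *contents;

		if (read_loose_object(path, expected_sha1,
				      &type, &size, &contents) < 0)
			return error("corrupt loose object at %s", path);
		if (contents)	/* may be NULL for blobs, to allow streaming */
			free(contents);
		return 0;
	}
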
/*
* Return true iff we have an object named sha1, whether local or in
* an alternate object database, and whether packed or loose. This
extern int git_config_from_file(config_fn_t fn, const char *, void *);
extern int git_config_from_mem(config_fn_t fn, const enum config_origin_type,
const char *name, const char *buf, size_t len, void *data);
+extern int git_config_from_blob_sha1(config_fn_t fn, const char *name,
+ const unsigned char *sha1, void *data);
extern void git_config_push_parameter(const char *text);
extern int git_config_from_parameters(config_fn_t fn, void *data);
extern void git_config(config_fn_t fn, void *);
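
A sketch of what making git_config_from_blob_sha1() public allows; the
callback and wrapper names are hypothetical, and the usual config_fn_t
callback signature is assumed.

	static int count_keys(const char *var, const char *value, void *cb)
	{
		int *n = cb;

		(*n)++;
		return 0;
	}

	static int count_config_keys_in_blob(const unsigned char *sha1)
	{
		int n = 0;

		if (git_config_from_blob_sha1(count_keys, "<blob>", sha1, &n) < 0)
			return -1;
		return n;
	}
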
* It is because of this implicit close() that we created the
* copy of the original.
*
- * Note that the OS can recycle HANDLE (numbers) just like it
- * recycles fd (numbers), so we must update the cached value
- * of "console". You can use GetFileType() to see that
- * handle and _get_osfhandle(fd) may have the same number
- * value, but they refer to different actual files now.
+ * Note that we need to update the cached console handle to the
+ * duplicated one because the dup2() call will implicitly close
+ * the original one.
*
* Note that dup2() when given target := {0,1,2} will also
* call SetStdHandle(), so we don't need to worry about that.
*/
- dup2(new_fd, fd);
if (console == handle)
console = duplicate;
- handle = INVALID_HANDLE_VALUE;
+ dup2(new_fd, fd);
/* Close the temp fd. This explicitly closes "new_handle"
* (because it has been associated with it).
return do_config_from(&top, fn, data);
}
-static int git_config_from_blob_sha1(config_fn_t fn,
- const char *name,
- const unsigned char *sha1,
- void *data)
+int git_config_from_blob_sha1(config_fn_t fn,
+ const char *name,
+ const unsigned char *sha1,
+ void *data)
{
enum object_type type;
char *buf;
+++ /dev/null
-#! /usr/bin/env python
-
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-
-""" gitview
-GUI browser for git repository
-This program is based on bzrk by Scott James Remnant <scott@ubuntu.com>
-"""
-__copyright__ = "Copyright (C) 2006 Hewlett-Packard Development Company, L.P."
-__copyright__ = "Copyright (C) 2007 Aneesh Kumar K.V <aneesh.kumar@gmail.com"
-__author__ = "Aneesh Kumar K.V <aneesh.kumar@gmail.com>"
-
-
-import sys
-import os
-import gtk
-import pygtk
-import pango
-import re
-import time
-import gobject
-import cairo
-import math
-import string
-import fcntl
-
-have_gtksourceview2 = False
-have_gtksourceview = False
-try:
- import gtksourceview2
- have_gtksourceview2 = True
-except ImportError:
- try:
- import gtksourceview
- have_gtksourceview = True
- except ImportError:
- print "Running without gtksourceview2 or gtksourceview module"
-
-re_ident = re.compile('(author|committer) (?P<ident>.*) (?P<epoch>\d+) (?P<tz>[+-]\d{4})')
-
-def list_to_string(args, skip):
- count = len(args)
- i = skip
- str_arg=" "
- while (i < count ):
- str_arg = str_arg + args[i]
- str_arg = str_arg + " "
- i = i+1
-
- return str_arg
-
-def show_date(epoch, tz):
- secs = float(epoch)
- tzsecs = float(tz[1:3]) * 3600
- tzsecs += float(tz[3:5]) * 60
- if (tz[0] == "+"):
- secs += tzsecs
- else:
- secs -= tzsecs
-
- return time.strftime("%Y-%m-%d %H:%M:%S", time.gmtime(secs))
-
-def get_source_buffer_and_view():
- if have_gtksourceview2:
- buffer = gtksourceview2.Buffer()
- slm = gtksourceview2.LanguageManager()
- gsl = slm.get_language("diff")
- buffer.set_highlight_syntax(True)
- buffer.set_language(gsl)
- view = gtksourceview2.View(buffer)
- elif have_gtksourceview:
- buffer = gtksourceview.SourceBuffer()
- slm = gtksourceview.SourceLanguagesManager()
- gsl = slm.get_language_from_mime_type("text/x-patch")
- buffer.set_highlight(True)
- buffer.set_language(gsl)
- view = gtksourceview.SourceView(buffer)
- else:
- buffer = gtk.TextBuffer()
- view = gtk.TextView(buffer)
- return (buffer, view)
-
-
-class CellRendererGraph(gtk.GenericCellRenderer):
- """Cell renderer for directed graph.
-
- This module contains the implementation of a custom GtkCellRenderer that
- draws part of the directed graph based on the lines suggested by the code
- in graph.py.
-
- Because we're shiny, we use Cairo to do this, and because we're naughty
- we cheat and draw over the bits of the TreeViewColumn that are supposed to
- just be for the background.
-
- Properties:
- node (column, colour, [ names ]) tuple to draw revision node,
- in_lines (start, end, colour) tuple list to draw inward lines,
- out_lines (start, end, colour) tuple list to draw outward lines.
- """
-
- __gproperties__ = {
- "node": ( gobject.TYPE_PYOBJECT, "node",
- "revision node instruction",
- gobject.PARAM_WRITABLE
- ),
- "in-lines": ( gobject.TYPE_PYOBJECT, "in-lines",
- "instructions to draw lines into the cell",
- gobject.PARAM_WRITABLE
- ),
- "out-lines": ( gobject.TYPE_PYOBJECT, "out-lines",
- "instructions to draw lines out of the cell",
- gobject.PARAM_WRITABLE
- ),
- }
-
- def do_set_property(self, property, value):
- """Set properties from GObject properties."""
- if property.name == "node":
- self.node = value
- elif property.name == "in-lines":
- self.in_lines = value
- elif property.name == "out-lines":
- self.out_lines = value
- else:
- raise AttributeError, "no such property: '%s'" % property.name
-
- def box_size(self, widget):
- """Calculate box size based on widget's font.
-
- Cache this as it's probably expensive to get. It ensures that we
- draw the graph at least as large as the text.
- """
- try:
- return self._box_size
- except AttributeError:
- pango_ctx = widget.get_pango_context()
- font_desc = widget.get_style().font_desc
- metrics = pango_ctx.get_metrics(font_desc)
-
- ascent = pango.PIXELS(metrics.get_ascent())
- descent = pango.PIXELS(metrics.get_descent())
-
- self._box_size = ascent + descent + 6
- return self._box_size
-
- def set_colour(self, ctx, colour, bg, fg):
- """Set the context source colour.
-
- Picks a distinct colour based on an internal wheel; the bg
- parameter provides the value that should be assigned to the 'zero'
- colours and the fg parameter provides the multiplier that should be
- applied to the foreground colours.
- """
- colours = [
- ( 1.0, 0.0, 0.0 ),
- ( 1.0, 1.0, 0.0 ),
- ( 0.0, 1.0, 0.0 ),
- ( 0.0, 1.0, 1.0 ),
- ( 0.0, 0.0, 1.0 ),
- ( 1.0, 0.0, 1.0 ),
- ]
-
- colour %= len(colours)
- red = (colours[colour][0] * fg) or bg
- green = (colours[colour][1] * fg) or bg
- blue = (colours[colour][2] * fg) or bg
-
- ctx.set_source_rgb(red, green, blue)
-
- def on_get_size(self, widget, cell_area):
- """Return the size we need for this cell.
-
- Each cell is drawn individually and is only as wide as it needs
- to be, we let the TreeViewColumn take care of making them all
- line up.
- """
- box_size = self.box_size(widget)
-
- cols = self.node[0]
- for start, end, colour in self.in_lines + self.out_lines:
- cols = int(max(cols, start, end))
-
- (column, colour, names) = self.node
- names_len = 0
- if (len(names) != 0):
- for item in names:
- names_len += len(item)
-
- width = box_size * (cols + 1 ) + names_len
- height = box_size
-
- # FIXME I have no idea how to use cell_area properly
- return (0, 0, width, height)
-
- def on_render(self, window, widget, bg_area, cell_area, exp_area, flags):
- """Render an individual cell.
-
- Draws the cell contents using cairo, taking care to clip what we
- do to within the background area so we don't draw over other cells.
- Note that we're a bit naughty there and should really be drawing
- in the cell_area (or even the exposed area), but we explicitly don't
- want any gutter.
-
- We try and be a little clever, if the line we need to draw is going
- to cross other columns we actually draw it as in the .---' style
- instead of a pure diagonal ... this reduces confusion by an
- incredible amount.
- """
- ctx = window.cairo_create()
- ctx.rectangle(bg_area.x, bg_area.y, bg_area.width, bg_area.height)
- ctx.clip()
-
- box_size = self.box_size(widget)
-
- ctx.set_line_width(box_size / 8)
- ctx.set_line_cap(cairo.LINE_CAP_SQUARE)
-
- # Draw lines into the cell
- for start, end, colour in self.in_lines:
- ctx.move_to(cell_area.x + box_size * start + box_size / 2,
- bg_area.y - bg_area.height / 2)
-
- if start - end > 1:
- ctx.line_to(cell_area.x + box_size * start, bg_area.y)
- ctx.line_to(cell_area.x + box_size * end + box_size, bg_area.y)
- elif start - end < -1:
- ctx.line_to(cell_area.x + box_size * start + box_size,
- bg_area.y)
- ctx.line_to(cell_area.x + box_size * end, bg_area.y)
-
- ctx.line_to(cell_area.x + box_size * end + box_size / 2,
- bg_area.y + bg_area.height / 2)
-
- self.set_colour(ctx, colour, 0.0, 0.65)
- ctx.stroke()
-
- # Draw lines out of the cell
- for start, end, colour in self.out_lines:
- ctx.move_to(cell_area.x + box_size * start + box_size / 2,
- bg_area.y + bg_area.height / 2)
-
- if start - end > 1:
- ctx.line_to(cell_area.x + box_size * start,
- bg_area.y + bg_area.height)
- ctx.line_to(cell_area.x + box_size * end + box_size,
- bg_area.y + bg_area.height)
- elif start - end < -1:
- ctx.line_to(cell_area.x + box_size * start + box_size,
- bg_area.y + bg_area.height)
- ctx.line_to(cell_area.x + box_size * end,
- bg_area.y + bg_area.height)
-
- ctx.line_to(cell_area.x + box_size * end + box_size / 2,
- bg_area.y + bg_area.height / 2 + bg_area.height)
-
- self.set_colour(ctx, colour, 0.0, 0.65)
- ctx.stroke()
-
- # Draw the revision node in the right column
- (column, colour, names) = self.node
- ctx.arc(cell_area.x + box_size * column + box_size / 2,
- cell_area.y + cell_area.height / 2,
- box_size / 4, 0, 2 * math.pi)
-
-
- self.set_colour(ctx, colour, 0.0, 0.5)
- ctx.stroke_preserve()
-
- self.set_colour(ctx, colour, 0.5, 1.0)
- ctx.fill_preserve()
-
- if (len(names) != 0):
- name = " "
- for item in names:
- name = name + item + " "
-
- ctx.set_font_size(13)
- if (flags & 1):
- self.set_colour(ctx, colour, 0.5, 1.0)
- else:
- self.set_colour(ctx, colour, 0.0, 0.5)
- ctx.show_text(name)
-
-class Commit(object):
- """ This represent a commit object obtained after parsing the git-rev-list
- output """
-
- __slots__ = ['children_sha1', 'message', 'author', 'date', 'committer',
- 'commit_date', 'commit_sha1', 'parent_sha1']
-
- children_sha1 = {}
-
- def __init__(self, commit_lines):
- self.message = ""
- self.author = ""
- self.date = ""
- self.committer = ""
- self.commit_date = ""
- self.commit_sha1 = ""
- self.parent_sha1 = [ ]
- self.parse_commit(commit_lines)
-
-
- def parse_commit(self, commit_lines):
-
- # First line is the sha1 lines
- line = string.strip(commit_lines[0])
- sha1 = re.split(" ", line)
- self.commit_sha1 = sha1[0]
- self.parent_sha1 = sha1[1:]
-
- #build the child list
- for parent_id in self.parent_sha1:
- try:
- Commit.children_sha1[parent_id].append(self.commit_sha1)
- except KeyError:
- Commit.children_sha1[parent_id] = [self.commit_sha1]
-
- # IF we don't have parent
- if (len(self.parent_sha1) == 0):
- self.parent_sha1 = [0]
-
- for line in commit_lines[1:]:
- m = re.match("^ ", line)
- if (m != None):
- # First line of the commit message used for short log
- if self.message == "":
- self.message = string.strip(line)
- continue
-
- m = re.match("tree", line)
- if (m != None):
- continue
-
- m = re.match("parent", line)
- if (m != None):
- continue
-
- m = re_ident.match(line)
- if (m != None):
- date = show_date(m.group('epoch'), m.group('tz'))
- if m.group(1) == "author":
- self.author = m.group('ident')
- self.date = date
- elif m.group(1) == "committer":
- self.committer = m.group('ident')
- self.commit_date = date
-
- continue
-
- def get_message(self, with_diff=0):
- if (with_diff == 1):
- message = self.diff_tree()
- else:
- fp = os.popen("git cat-file commit " + self.commit_sha1)
- message = fp.read()
- fp.close()
-
- return message
-
- def diff_tree(self):
- fp = os.popen("git diff-tree --pretty --cc -v -p --always " + self.commit_sha1)
- diff = fp.read()
- fp.close()
- return diff
-
-class AnnotateWindow(object):
- """Annotate window.
- This object represents and manages a single window containing the
- annotate information of the file
- """
-
- def __init__(self):
- self.window = gtk.Window(gtk.WINDOW_TOPLEVEL)
- self.window.set_border_width(0)
- self.window.set_title("Git repository browser annotation window")
- self.prev_read = ""
-
- # Use two thirds of the screen by default
- screen = self.window.get_screen()
- monitor = screen.get_monitor_geometry(0)
- width = int(monitor.width * 0.66)
- height = int(monitor.height * 0.66)
- self.window.set_default_size(width, height)
-
- def add_file_data(self, filename, commit_sha1, line_num):
- fp = os.popen("git cat-file blob " + commit_sha1 +":"+filename)
- i = 1;
- for line in fp.readlines():
- line = string.rstrip(line)
- self.model.append(None, ["HEAD", filename, line, i])
- i = i+1
- fp.close()
-
- # now set the cursor position
- self.treeview.set_cursor(line_num-1)
- self.treeview.grab_focus()
-
- def _treeview_cursor_cb(self, *args):
- """Callback for when the treeview cursor changes."""
- (path, col) = self.treeview.get_cursor()
- commit_sha1 = self.model[path][0]
- commit_msg = ""
- fp = os.popen("git cat-file commit " + commit_sha1)
- for line in fp.readlines():
- commit_msg = commit_msg + line
- fp.close()
-
- self.commit_buffer.set_text(commit_msg)
-
- def _treeview_row_activated(self, *args):
- """Callback for when the treeview row gets selected."""
- (path, col) = self.treeview.get_cursor()
- commit_sha1 = self.model[path][0]
- filename = self.model[path][1]
- line_num = self.model[path][3]
-
- window = AnnotateWindow();
- fp = os.popen("git rev-parse "+ commit_sha1 + "~1")
- commit_sha1 = string.strip(fp.readline())
- fp.close()
- window.annotate(filename, commit_sha1, line_num)
-
- def data_ready(self, source, condition):
- while (1):
- try :
- # A simple readline doesn't work
- # a readline bug ??
- buffer = source.read(100)
-
- except:
- # resource temporary not available
- return True
-
- if (len(buffer) == 0):
- gobject.source_remove(self.io_watch_tag)
- source.close()
- return False
-
- if (self.prev_read != ""):
- buffer = self.prev_read + buffer
- self.prev_read = ""
-
- if (buffer[len(buffer) -1] != '\n'):
- try:
- newline_index = buffer.rindex("\n")
- except ValueError:
- newline_index = 0
-
- self.prev_read = buffer[newline_index:(len(buffer))]
- buffer = buffer[0:newline_index]
-
- for buff in buffer.split("\n"):
- annotate_line = re.compile('^([0-9a-f]{40}) (.+) (.+) (.+)$')
- m = annotate_line.match(buff)
- if not m:
- annotate_line = re.compile('^(filename) (.+)$')
- m = annotate_line.match(buff)
- if not m:
- continue
- filename = m.group(2)
- else:
- self.commit_sha1 = m.group(1)
- self.source_line = int(m.group(2))
- self.result_line = int(m.group(3))
- self.count = int(m.group(4))
- #set the details only when we have the file name
- continue
-
- while (self.count > 0):
- # set at result_line + count-1 the sha1 as commit_sha1
- self.count = self.count - 1
- iter = self.model.iter_nth_child(None, self.result_line + self.count-1)
- self.model.set(iter, 0, self.commit_sha1, 1, filename, 3, self.source_line)
-
-
- def annotate(self, filename, commit_sha1, line_num):
- # verify the commit_sha1 specified has this filename
-
- fp = os.popen("git ls-tree "+ commit_sha1 + " -- " + filename)
- line = string.strip(fp.readline())
- if line == '':
- # pop up the message the file is not there as a part of the commit
- fp.close()
- dialog = gtk.MessageDialog(parent=None, flags=0,
- type=gtk.MESSAGE_WARNING, buttons=gtk.BUTTONS_CLOSE,
- message_format=None)
- dialog.set_markup("The file %s is not present in the parent commit %s" % (filename, commit_sha1))
- dialog.run()
- dialog.destroy()
- return
-
- fp.close()
-
- vpan = gtk.VPaned();
- self.window.add(vpan);
- vpan.show()
-
- scrollwin = gtk.ScrolledWindow()
- scrollwin.set_policy(gtk.POLICY_AUTOMATIC, gtk.POLICY_AUTOMATIC)
- scrollwin.set_shadow_type(gtk.SHADOW_IN)
- vpan.pack1(scrollwin, True, True);
- scrollwin.show()
-
- self.model = gtk.TreeStore(str, str, str, int)
- self.treeview = gtk.TreeView(self.model)
- self.treeview.set_rules_hint(True)
- self.treeview.set_search_column(0)
- self.treeview.connect("cursor-changed", self._treeview_cursor_cb)
- self.treeview.connect("row-activated", self._treeview_row_activated)
- scrollwin.add(self.treeview)
- self.treeview.show()
-
- cell = gtk.CellRendererText()
- cell.set_property("width-chars", 10)
- cell.set_property("ellipsize", pango.ELLIPSIZE_END)
- column = gtk.TreeViewColumn("Commit")
- column.set_resizable(True)
- column.pack_start(cell, expand=True)
- column.add_attribute(cell, "text", 0)
- self.treeview.append_column(column)
-
- cell = gtk.CellRendererText()
- cell.set_property("width-chars", 20)
- cell.set_property("ellipsize", pango.ELLIPSIZE_END)
- column = gtk.TreeViewColumn("File Name")
- column.set_resizable(True)
- column.pack_start(cell, expand=True)
- column.add_attribute(cell, "text", 1)
- self.treeview.append_column(column)
-
- cell = gtk.CellRendererText()
- cell.set_property("width-chars", 20)
- cell.set_property("ellipsize", pango.ELLIPSIZE_END)
- column = gtk.TreeViewColumn("Data")
- column.set_resizable(True)
- column.pack_start(cell, expand=True)
- column.add_attribute(cell, "text", 2)
- self.treeview.append_column(column)
-
- # The commit message window
- scrollwin = gtk.ScrolledWindow()
- scrollwin.set_policy(gtk.POLICY_AUTOMATIC, gtk.POLICY_AUTOMATIC)
- scrollwin.set_shadow_type(gtk.SHADOW_IN)
- vpan.pack2(scrollwin, True, True);
- scrollwin.show()
-
- commit_text = gtk.TextView()
- self.commit_buffer = gtk.TextBuffer()
- commit_text.set_buffer(self.commit_buffer)
- scrollwin.add(commit_text)
- commit_text.show()
-
- self.window.show()
-
- self.add_file_data(filename, commit_sha1, line_num)
-
- fp = os.popen("git blame --incremental -C -C -- " + filename + " " + commit_sha1)
- flags = fcntl.fcntl(fp.fileno(), fcntl.F_GETFL)
- fcntl.fcntl(fp.fileno(), fcntl.F_SETFL, flags | os.O_NONBLOCK)
- self.io_watch_tag = gobject.io_add_watch(fp, gobject.IO_IN, self.data_ready)
-
-
-class DiffWindow(object):
- """Diff window.
- This object represents and manages a single window containing the
- differences between two revisions on a branch.
- """
-
- def __init__(self):
- self.window = gtk.Window(gtk.WINDOW_TOPLEVEL)
- self.window.set_border_width(0)
- self.window.set_title("Git repository browser diff window")
-
- # Use two thirds of the screen by default
- screen = self.window.get_screen()
- monitor = screen.get_monitor_geometry(0)
- width = int(monitor.width * 0.66)
- height = int(monitor.height * 0.66)
- self.window.set_default_size(width, height)
-
-
- self.construct()
-
- def construct(self):
- """Construct the window contents."""
- vbox = gtk.VBox()
- self.window.add(vbox)
- vbox.show()
-
- menu_bar = gtk.MenuBar()
- save_menu = gtk.ImageMenuItem(gtk.STOCK_SAVE)
- save_menu.connect("activate", self.save_menu_response, "save")
- save_menu.show()
- menu_bar.append(save_menu)
- vbox.pack_start(menu_bar, expand=False, fill=True)
- menu_bar.show()
-
- hpan = gtk.HPaned()
-
- scrollwin = gtk.ScrolledWindow()
- scrollwin.set_policy(gtk.POLICY_AUTOMATIC, gtk.POLICY_AUTOMATIC)
- scrollwin.set_shadow_type(gtk.SHADOW_IN)
- hpan.pack1(scrollwin, True, True)
- scrollwin.show()
-
- (self.buffer, sourceview) = get_source_buffer_and_view()
-
- sourceview.set_editable(False)
- sourceview.modify_font(pango.FontDescription("Monospace"))
- scrollwin.add(sourceview)
- sourceview.show()
-
- # The file hierarchy: a scrollable treeview
- scrollwin = gtk.ScrolledWindow()
- scrollwin.set_policy(gtk.POLICY_AUTOMATIC, gtk.POLICY_AUTOMATIC)
- scrollwin.set_shadow_type(gtk.SHADOW_IN)
- scrollwin.set_size_request(20, -1)
- hpan.pack2(scrollwin, True, True)
- scrollwin.show()
-
- self.model = gtk.TreeStore(str, str, str)
- self.treeview = gtk.TreeView(self.model)
- self.treeview.set_search_column(1)
- self.treeview.connect("cursor-changed", self._treeview_clicked)
- scrollwin.add(self.treeview)
- self.treeview.show()
-
- cell = gtk.CellRendererText()
- cell.set_property("width-chars", 20)
- column = gtk.TreeViewColumn("Select to annotate")
- column.pack_start(cell, expand=True)
- column.add_attribute(cell, "text", 0)
- self.treeview.append_column(column)
-
- vbox.pack_start(hpan, expand=True, fill=True)
- hpan.show()
-
- def _treeview_clicked(self, *args):
- """Callback for when the treeview cursor changes."""
- (path, col) = self.treeview.get_cursor()
- specific_file = self.model[path][1]
- commit_sha1 = self.model[path][2]
- if specific_file == None :
- return
- elif specific_file == "" :
- specific_file = None
-
- window = AnnotateWindow();
- window.annotate(specific_file, commit_sha1, 1)
-
-
- def commit_files(self, commit_sha1, parent_sha1):
- self.model.clear()
- add = self.model.append(None, [ "Added", None, None])
- dele = self.model.append(None, [ "Deleted", None, None])
- mod = self.model.append(None, [ "Modified", None, None])
- diff_tree = re.compile('^(:.{6}) (.{6}) (.{40}) (.{40}) (A|D|M)\s(.+)$')
- fp = os.popen("git diff-tree -r --no-commit-id " + parent_sha1 + " " + commit_sha1)
- while 1:
- line = string.strip(fp.readline())
- if line == '':
- break
- m = diff_tree.match(line)
- if not m:
- continue
-
- attr = m.group(5)
- filename = m.group(6)
- if attr == "A":
- self.model.append(add, [filename, filename, commit_sha1])
- elif attr == "D":
- self.model.append(dele, [filename, filename, commit_sha1])
- elif attr == "M":
- self.model.append(mod, [filename, filename, commit_sha1])
- fp.close()
-
- self.treeview.expand_all()
-
- def set_diff(self, commit_sha1, parent_sha1, encoding):
- """Set the differences showed by this window.
- Compares the two trees and populates the window with the
- differences.
- """
- # Diff with the first commit or the last commit shows nothing
- if (commit_sha1 == 0 or parent_sha1 == 0 ):
- return
-
- fp = os.popen("git diff-tree -p " + parent_sha1 + " " + commit_sha1)
- self.buffer.set_text(unicode(fp.read(), encoding).encode('utf-8'))
- fp.close()
- self.commit_files(commit_sha1, parent_sha1)
- self.window.show()
-
- def save_menu_response(self, widget, string):
- dialog = gtk.FileChooserDialog("Save..", None, gtk.FILE_CHOOSER_ACTION_SAVE,
- (gtk.STOCK_CANCEL, gtk.RESPONSE_CANCEL,
- gtk.STOCK_SAVE, gtk.RESPONSE_OK))
- dialog.set_default_response(gtk.RESPONSE_OK)
- response = dialog.run()
- if response == gtk.RESPONSE_OK:
- patch_buffer = self.buffer.get_text(self.buffer.get_start_iter(),
- self.buffer.get_end_iter())
- fp = open(dialog.get_filename(), "w")
- fp.write(patch_buffer)
- fp.close()
- dialog.destroy()
-
-class GitView(object):
- """ This is the main class
- """
- version = "0.9"
-
- def __init__(self, with_diff=0):
- self.with_diff = with_diff
- self.window = gtk.Window(gtk.WINDOW_TOPLEVEL)
- self.window.set_border_width(0)
- self.window.set_title("Git repository browser")
-
- self.get_encoding()
- self.get_bt_sha1()
-
- # Use three-quarters of the screen by default
- screen = self.window.get_screen()
- monitor = screen.get_monitor_geometry(0)
- width = int(monitor.width * 0.75)
- height = int(monitor.height * 0.75)
- self.window.set_default_size(width, height)
-
- # FIXME AndyFitz!
- icon = self.window.render_icon(gtk.STOCK_INDEX, gtk.ICON_SIZE_BUTTON)
- self.window.set_icon(icon)
-
- self.accel_group = gtk.AccelGroup()
- self.window.add_accel_group(self.accel_group)
- self.accel_group.connect_group(0xffc2, 0, gtk.ACCEL_LOCKED, self.refresh);
- self.accel_group.connect_group(0xffc1, 0, gtk.ACCEL_LOCKED, self.maximize);
- self.accel_group.connect_group(0xffc8, 0, gtk.ACCEL_LOCKED, self.fullscreen);
- self.accel_group.connect_group(0xffc9, 0, gtk.ACCEL_LOCKED, self.unfullscreen);
-
- self.window.add(self.construct())
-
- def refresh(self, widget, event=None, *arguments, **keywords):
- self.get_encoding()
- self.get_bt_sha1()
- Commit.children_sha1 = {}
- self.set_branch(sys.argv[without_diff:])
- self.window.show()
- return True
-
- def maximize(self, widget, event=None, *arguments, **keywords):
- self.window.maximize()
- return True
-
- def fullscreen(self, widget, event=None, *arguments, **keywords):
- self.window.fullscreen()
- return True
-
- def unfullscreen(self, widget, event=None, *arguments, **keywords):
- self.window.unfullscreen()
- return True
-
- def get_bt_sha1(self):
- """ Update the bt_sha1 dictionary with the
- respective sha1 details """
-
- self.bt_sha1 = { }
- ls_remote = re.compile('^(.{40})\trefs/([^^]+)(?:\\^(..))?$');
- fp = os.popen('git ls-remote "${GIT_DIR-.git}"')
- while 1:
- line = string.strip(fp.readline())
- if line == '':
- break
- m = ls_remote.match(line)
- if not m:
- continue
- (sha1, name) = (m.group(1), m.group(2))
- if not self.bt_sha1.has_key(sha1):
- self.bt_sha1[sha1] = []
- self.bt_sha1[sha1].append(name)
- fp.close()
-
- def get_encoding(self):
- fp = os.popen("git config --get i18n.commitencoding")
- self.encoding=string.strip(fp.readline())
- fp.close()
- if (self.encoding == ""):
- self.encoding = "utf-8"
-
-
- def construct(self):
- """Construct the window contents."""
- vbox = gtk.VBox()
- paned = gtk.VPaned()
- paned.pack1(self.construct_top(), resize=False, shrink=True)
- paned.pack2(self.construct_bottom(), resize=False, shrink=True)
- menu_bar = gtk.MenuBar()
- menu_bar.set_pack_direction(gtk.PACK_DIRECTION_RTL)
- help_menu = gtk.MenuItem("Help")
- menu = gtk.Menu()
- about_menu = gtk.MenuItem("About")
- menu.append(about_menu)
- about_menu.connect("activate", self.about_menu_response, "about")
- about_menu.show()
- help_menu.set_submenu(menu)
- help_menu.show()
- menu_bar.append(help_menu)
- menu_bar.show()
- vbox.pack_start(menu_bar, expand=False, fill=True)
- vbox.pack_start(paned, expand=True, fill=True)
- paned.show()
- vbox.show()
- return vbox
-
-
- def construct_top(self):
- """Construct the top-half of the window."""
- vbox = gtk.VBox(spacing=6)
- vbox.set_border_width(12)
- vbox.show()
-
-
- scrollwin = gtk.ScrolledWindow()
- scrollwin.set_policy(gtk.POLICY_AUTOMATIC, gtk.POLICY_AUTOMATIC)
- scrollwin.set_shadow_type(gtk.SHADOW_IN)
- vbox.pack_start(scrollwin, expand=True, fill=True)
- scrollwin.show()
-
- self.treeview = gtk.TreeView()
- self.treeview.set_rules_hint(True)
- self.treeview.set_search_column(4)
- self.treeview.connect("cursor-changed", self._treeview_cursor_cb)
- scrollwin.add(self.treeview)
- self.treeview.show()
-
- cell = CellRendererGraph()
- column = gtk.TreeViewColumn()
- column.set_resizable(True)
- column.pack_start(cell, expand=True)
- column.add_attribute(cell, "node", 1)
- column.add_attribute(cell, "in-lines", 2)
- column.add_attribute(cell, "out-lines", 3)
- self.treeview.append_column(column)
-
- cell = gtk.CellRendererText()
- cell.set_property("width-chars", 65)
- cell.set_property("ellipsize", pango.ELLIPSIZE_END)
- column = gtk.TreeViewColumn("Message")
- column.set_resizable(True)
- column.pack_start(cell, expand=True)
- column.add_attribute(cell, "text", 4)
- self.treeview.append_column(column)
-
- cell = gtk.CellRendererText()
- cell.set_property("width-chars", 40)
- cell.set_property("ellipsize", pango.ELLIPSIZE_END)
- column = gtk.TreeViewColumn("Author")
- column.set_resizable(True)
- column.pack_start(cell, expand=True)
- column.add_attribute(cell, "text", 5)
- self.treeview.append_column(column)
-
- cell = gtk.CellRendererText()
- cell.set_property("ellipsize", pango.ELLIPSIZE_END)
- column = gtk.TreeViewColumn("Date")
- column.set_resizable(True)
- column.pack_start(cell, expand=True)
- column.add_attribute(cell, "text", 6)
- self.treeview.append_column(column)
-
- return vbox
-
- def about_menu_response(self, widget, string):
- dialog = gtk.AboutDialog()
- dialog.set_name("Gitview")
- dialog.set_version(GitView.version)
- dialog.set_authors(["Aneesh Kumar K.V <aneesh.kumar@gmail.com>"])
- dialog.set_website("http://www.kernel.org/pub/software/scm/git/")
- dialog.set_copyright("Use and distribute under the terms of the GNU General Public License")
- dialog.set_wrap_license(True)
- dialog.run()
- dialog.destroy()
-
-
- def construct_bottom(self):
- """Construct the bottom half of the window."""
- vbox = gtk.VBox(False, spacing=6)
- vbox.set_border_width(12)
- (width, height) = self.window.get_size()
- vbox.set_size_request(width, int(height / 2.5))
- vbox.show()
-
- self.table = gtk.Table(rows=4, columns=4)
- self.table.set_row_spacings(6)
- self.table.set_col_spacings(6)
- vbox.pack_start(self.table, expand=False, fill=True)
- self.table.show()
-
- align = gtk.Alignment(0.0, 0.5)
- label = gtk.Label()
- label.set_markup("<b>Revision:</b>")
- align.add(label)
- self.table.attach(align, 0, 1, 0, 1, gtk.FILL, gtk.FILL)
- label.show()
- align.show()
-
- align = gtk.Alignment(0.0, 0.5)
- self.revid_label = gtk.Label()
- self.revid_label.set_selectable(True)
- align.add(self.revid_label)
- self.table.attach(align, 1, 2, 0, 1, gtk.EXPAND | gtk.FILL, gtk.FILL)
- self.revid_label.show()
- align.show()
-
- align = gtk.Alignment(0.0, 0.5)
- label = gtk.Label()
- label.set_markup("<b>Committer:</b>")
- align.add(label)
- self.table.attach(align, 0, 1, 1, 2, gtk.FILL, gtk.FILL)
- label.show()
- align.show()
-
- align = gtk.Alignment(0.0, 0.5)
- self.committer_label = gtk.Label()
- self.committer_label.set_selectable(True)
- align.add(self.committer_label)
- self.table.attach(align, 1, 2, 1, 2, gtk.EXPAND | gtk.FILL, gtk.FILL)
- self.committer_label.show()
- align.show()
-
- align = gtk.Alignment(0.0, 0.5)
- label = gtk.Label()
- label.set_markup("<b>Timestamp:</b>")
- align.add(label)
- self.table.attach(align, 0, 1, 2, 3, gtk.FILL, gtk.FILL)
- label.show()
- align.show()
-
- align = gtk.Alignment(0.0, 0.5)
- self.timestamp_label = gtk.Label()
- self.timestamp_label.set_selectable(True)
- align.add(self.timestamp_label)
- self.table.attach(align, 1, 2, 2, 3, gtk.EXPAND | gtk.FILL, gtk.FILL)
- self.timestamp_label.show()
- align.show()
-
- align = gtk.Alignment(0.0, 0.5)
- label = gtk.Label()
- label.set_markup("<b>Parents:</b>")
- align.add(label)
- self.table.attach(align, 0, 1, 3, 4, gtk.FILL, gtk.FILL)
- label.show()
- align.show()
- self.parents_widgets = []
-
- align = gtk.Alignment(0.0, 0.5)
- label = gtk.Label()
- label.set_markup("<b>Children:</b>")
- align.add(label)
- self.table.attach(align, 2, 3, 3, 4, gtk.FILL, gtk.FILL)
- label.show()
- align.show()
- self.children_widgets = []
-
- scrollwin = gtk.ScrolledWindow()
- scrollwin.set_policy(gtk.POLICY_AUTOMATIC, gtk.POLICY_AUTOMATIC)
- scrollwin.set_shadow_type(gtk.SHADOW_IN)
- vbox.pack_start(scrollwin, expand=True, fill=True)
- scrollwin.show()
-
- (self.message_buffer, sourceview) = get_source_buffer_and_view()
-
- sourceview.set_editable(False)
- sourceview.modify_font(pango.FontDescription("Monospace"))
- scrollwin.add(sourceview)
- sourceview.show()
-
- return vbox
-
- def _treeview_cursor_cb(self, *args):
- """Callback for when the treeview cursor changes."""
- (path, col) = self.treeview.get_cursor()
- commit = self.model[path][0]
-
- if commit.committer is not None:
- committer = commit.committer
- timestamp = commit.commit_date
- message = commit.get_message(self.with_diff)
- revid_label = commit.commit_sha1
- else:
- committer = ""
- timestamp = ""
- message = ""
- revid_label = ""
-
- self.revid_label.set_text(revid_label)
- self.committer_label.set_text(committer)
- self.timestamp_label.set_text(timestamp)
- self.message_buffer.set_text(unicode(message, self.encoding).encode('utf-8'))
-
- for widget in self.parents_widgets:
- self.table.remove(widget)
-
- self.parents_widgets = []
- self.table.resize(4 + len(commit.parent_sha1) - 1, 4)
- for idx, parent_id in enumerate(commit.parent_sha1):
- self.table.set_row_spacing(idx + 3, 0)
-
- align = gtk.Alignment(0.0, 0.0)
- self.parents_widgets.append(align)
- self.table.attach(align, 1, 2, idx + 3, idx + 4,
- gtk.EXPAND | gtk.FILL, gtk.FILL)
- align.show()
-
- hbox = gtk.HBox(False, 0)
- align.add(hbox)
- hbox.show()
-
- label = gtk.Label(parent_id)
- label.set_selectable(True)
- hbox.pack_start(label, expand=False, fill=True)
- label.show()
-
- image = gtk.Image()
- image.set_from_stock(gtk.STOCK_JUMP_TO, gtk.ICON_SIZE_MENU)
- image.show()
-
- button = gtk.Button()
- button.add(image)
- button.set_relief(gtk.RELIEF_NONE)
- button.connect("clicked", self._go_clicked_cb, parent_id)
- hbox.pack_start(button, expand=False, fill=True)
- button.show()
-
- image = gtk.Image()
- image.set_from_stock(gtk.STOCK_FIND, gtk.ICON_SIZE_MENU)
- image.show()
-
- button = gtk.Button()
- button.add(image)
- button.set_relief(gtk.RELIEF_NONE)
- button.set_sensitive(True)
- button.connect("clicked", self._show_clicked_cb,
- commit.commit_sha1, parent_id, self.encoding)
- hbox.pack_start(button, expand=False, fill=True)
- button.show()
-
- # Populate with child details
- for widget in self.children_widgets:
- self.table.remove(widget)
-
- self.children_widgets = []
- try:
- child_sha1 = Commit.children_sha1[commit.commit_sha1]
- except KeyError:
- # We don't have child
- child_sha1 = [ 0 ]
-
- if ( len(child_sha1) > len(commit.parent_sha1)):
- self.table.resize(4 + len(child_sha1) - 1, 4)
-
- for idx, child_id in enumerate(child_sha1):
- self.table.set_row_spacing(idx + 3, 0)
-
- align = gtk.Alignment(0.0, 0.0)
- self.children_widgets.append(align)
- self.table.attach(align, 3, 4, idx + 3, idx + 4,
- gtk.EXPAND | gtk.FILL, gtk.FILL)
- align.show()
-
- hbox = gtk.HBox(False, 0)
- align.add(hbox)
- hbox.show()
-
- label = gtk.Label(child_id)
- label.set_selectable(True)
- hbox.pack_start(label, expand=False, fill=True)
- label.show()
-
- image = gtk.Image()
- image.set_from_stock(gtk.STOCK_JUMP_TO, gtk.ICON_SIZE_MENU)
- image.show()
-
- button = gtk.Button()
- button.add(image)
- button.set_relief(gtk.RELIEF_NONE)
- button.connect("clicked", self._go_clicked_cb, child_id)
- hbox.pack_start(button, expand=False, fill=True)
- button.show()
-
- image = gtk.Image()
- image.set_from_stock(gtk.STOCK_FIND, gtk.ICON_SIZE_MENU)
- image.show()
-
- button = gtk.Button()
- button.add(image)
- button.set_relief(gtk.RELIEF_NONE)
- button.set_sensitive(True)
- button.connect("clicked", self._show_clicked_cb,
- child_id, commit.commit_sha1, self.encoding)
- hbox.pack_start(button, expand=False, fill=True)
- button.show()
-
- def _destroy_cb(self, widget):
- """Callback for when a window we manage is destroyed."""
- self.quit()
-
-
- def quit(self):
- """Stop the GTK+ main loop."""
- gtk.main_quit()
-
- def run(self, args):
- self.set_branch(args)
- self.window.connect("destroy", self._destroy_cb)
- self.window.show()
- gtk.main()
-
- def set_branch(self, args):
- """Fill in different windows with info from the reposiroty"""
- fp = os.popen("git rev-parse --sq --default HEAD " + list_to_string(args, 1))
- git_rev_list_cmd = fp.read()
- fp.close()
- fp = os.popen("git rev-list --header --topo-order --parents " + git_rev_list_cmd)
- self.update_window(fp)
-
- def update_window(self, fp):
- commit_lines = []
-
- self.model = gtk.ListStore(gobject.TYPE_PYOBJECT, gobject.TYPE_PYOBJECT,
- gobject.TYPE_PYOBJECT, gobject.TYPE_PYOBJECT, str, str, str)
-
- # used for cursor positioning
- self.index = {}
-
- self.colours = {}
- self.nodepos = {}
- self.incomplete_line = {}
- self.commits = []
-
- index = 0
- last_colour = 0
- last_nodepos = -1
- out_line = []
- input_line = fp.readline()
- while (input_line != ""):
- # The commit header ends with '\0'
- # This NULL is immediately followed by the sha1 of the
- # next commit
- if (input_line[0] != '\0'):
- commit_lines.append(input_line)
- input_line = fp.readline()
- continue;
-
- commit = Commit(commit_lines)
- if (commit != None ):
- self.commits.append(commit)
-
- # Skip the '\0
- commit_lines = []
- commit_lines.append(input_line[1:])
- input_line = fp.readline()
-
- fp.close()
-
- for commit in self.commits:
- (out_line, last_colour, last_nodepos) = self.draw_graph(commit,
- index, out_line,
- last_colour,
- last_nodepos)
- self.index[commit.commit_sha1] = index
- index += 1
-
- self.treeview.set_model(self.model)
- self.treeview.show()
-
- def draw_graph(self, commit, index, out_line, last_colour, last_nodepos):
- in_line=[]
-
- # | -> outline
- # X
- # |\ <- inline
-
- # Reset nodepostion
- if (last_nodepos > 5):
- last_nodepos = -1
-
- # Add the incomplete lines of the last cell in this
- try:
- colour = self.colours[commit.commit_sha1]
- except KeyError:
- self.colours[commit.commit_sha1] = last_colour+1
- last_colour = self.colours[commit.commit_sha1]
- colour = self.colours[commit.commit_sha1]
-
- try:
- node_pos = self.nodepos[commit.commit_sha1]
- except KeyError:
- self.nodepos[commit.commit_sha1] = last_nodepos+1
- last_nodepos = self.nodepos[commit.commit_sha1]
- node_pos = self.nodepos[commit.commit_sha1]
-
- #The first parent always continue on the same line
- try:
- # check we already have the value
- tmp_node_pos = self.nodepos[commit.parent_sha1[0]]
- except KeyError:
- self.colours[commit.parent_sha1[0]] = colour
- self.nodepos[commit.parent_sha1[0]] = node_pos
-
- for sha1 in self.incomplete_line.keys():
- if (sha1 != commit.commit_sha1):
- self.draw_incomplete_line(sha1, node_pos,
- out_line, in_line, index)
- else:
- del self.incomplete_line[sha1]
-
-
- for parent_id in commit.parent_sha1:
- try:
- tmp_node_pos = self.nodepos[parent_id]
- except KeyError:
- self.colours[parent_id] = last_colour+1
- last_colour = self.colours[parent_id]
- self.nodepos[parent_id] = last_nodepos+1
- last_nodepos = self.nodepos[parent_id]
-
- in_line.append((node_pos, self.nodepos[parent_id],
- self.colours[parent_id]))
- self.add_incomplete_line(parent_id)
-
- try:
- branch_tag = self.bt_sha1[commit.commit_sha1]
- except KeyError:
- branch_tag = [ ]
-
-
- node = (node_pos, colour, branch_tag)
-
- self.model.append([commit, node, out_line, in_line,
- commit.message, commit.author, commit.date])
-
- return (in_line, last_colour, last_nodepos)
-
- def add_incomplete_line(self, sha1):
- try:
- self.incomplete_line[sha1].append(self.nodepos[sha1])
- except KeyError:
- self.incomplete_line[sha1] = [self.nodepos[sha1]]
-
- def draw_incomplete_line(self, sha1, node_pos, out_line, in_line, index):
- for idx, pos in enumerate(self.incomplete_line[sha1]):
- if(pos == node_pos):
- #remove the straight line and add a slash
- if ((pos, pos, self.colours[sha1]) in out_line):
- out_line.remove((pos, pos, self.colours[sha1]))
- out_line.append((pos, pos+0.5, self.colours[sha1]))
- self.incomplete_line[sha1][idx] = pos = pos+0.5
- try:
- next_commit = self.commits[index+1]
- if (next_commit.commit_sha1 == sha1 and pos != int(pos)):
- # join the line back to the node point
- # This need to be done only if we modified it
- in_line.append((pos, pos-0.5, self.colours[sha1]))
- continue;
- except IndexError:
- pass
- in_line.append((pos, pos, self.colours[sha1]))
-
-
- def _go_clicked_cb(self, widget, revid):
- """Callback for when the go button for a parent is clicked."""
- try:
- self.treeview.set_cursor(self.index[revid])
- except KeyError:
- dialog = gtk.MessageDialog(parent=None, flags=0,
- type=gtk.MESSAGE_WARNING, buttons=gtk.BUTTONS_CLOSE,
- message_format=None)
- dialog.set_markup("Revision <b>%s</b> not present in the list" % revid)
- # revid == 0 is the parent of the first commit
- if (revid != 0 ):
- dialog.format_secondary_text("Try running gitview without any options")
- dialog.run()
- dialog.destroy()
-
- self.treeview.grab_focus()
-
- def _show_clicked_cb(self, widget, commit_sha1, parent_sha1, encoding):
- """Callback for when the show button for a parent is clicked."""
- window = DiffWindow()
- window.set_diff(commit_sha1, parent_sha1, encoding)
- self.treeview.grab_focus()
-
-without_diff = 0
-if __name__ == "__main__":
-
- if (len(sys.argv) > 1 ):
- if (sys.argv[1] == "--without-diff"):
- without_diff = 1
-
- view = GitView( without_diff != 1)
- view.run(sys.argv[without_diff:])
+++ /dev/null
-gitview(1)
-==========
-
-NAME
-----
-gitview - A GTK based repository browser for git
-
-SYNOPSIS
---------
-[verse]
-'gitview' [options] [args]
-
-DESCRIPTION
----------
-
-Dependencies:
-
-* Python 2.4
-* PyGTK 2.8 or later
-* PyCairo 1.0 or later
-
-OPTIONS
--------
---without-diff::
-
- If the user doesn't want to list the commit diffs in the main window.
- This may speed up the repository browsing.
-
-<args>::
-
- All the valid option for linkgit:git-rev-list[1].
-
-Key Bindings
-------------
-F4::
- To maximize the window
-
-F5::
- To reread references.
-
-F11::
- Full screen
-
-F12::
- Leave full screen
-
-EXAMPLES
---------
-
-gitview v2.6.12.. include/scsi drivers/scsi::
-
- Show as the changes since version v2.6.12 that changed any file in the
- include/scsi or drivers/scsi subdirectories
-
-gitview --since=2.weeks.ago::
-
- Show the changes during the last two weeks
static int diff_suppress_blank_empty;
static int diff_use_color_default = -1;
static int diff_context_default = 3;
+static int diff_interhunk_context_default;
static const char *diff_word_regex_cfg;
static const char *external_diff_cmd_cfg;
static const char *diff_order_file_cfg;
return -1;
return 0;
}
+ if (!strcmp(var, "diff.interhunkcontext")) {
+ diff_interhunk_context_default = git_config_int(var, value);
+ if (diff_interhunk_context_default < 0)
+ return -1;
+ return 0;
+ }
if (!strcmp(var, "diff.renames")) {
diff_detect_rename_default = git_config_rename(var, value);
return 0;
options->rename_limit = -1;
options->dirstat_permille = diff_dirstat_permille_default;
options->context = diff_context_default;
+ options->interhunkcontext = diff_interhunk_context_default;
options->ws_error_highlight = ws_error_highlight_default;
DIFF_OPT_SET(options, RENAME_EMPTY);
#include "varint.h"
#include "ewah/ewok.h"
-struct path_simplify {
- int len;
- const char *path;
-};
-
/*
* Tells read_directory_recursive how a file or directory should be treated.
* Values are ordered by significance, e.g. if a directory contains both
static enum path_treatment read_directory_recursive(struct dir_struct *dir,
const char *path, int len, struct untracked_cache_dir *untracked,
- int check_only, const struct path_simplify *simplify);
+ int check_only, const struct pathspec *pathspec);
static int get_dtype(struct dirent *de, const char *path, int len);
int fspathcmp(const char *a, const char *b)
int fill_directory(struct dir_struct *dir, const struct pathspec *pathspec)
{
- size_t len;
+ char *prefix;
+ size_t prefix_len;
/*
* Calculate common prefix for the pathspec, and
* use that to optimize the directory walk
*/
- len = common_prefix_len(pathspec);
+ prefix = common_prefix(pathspec);
+ prefix_len = prefix ? strlen(prefix) : 0;
/* Read the directory and prune it */
- read_directory(dir, pathspec->nr ? pathspec->_raw[0] : "", len, pathspec);
- return len;
+ read_directory(dir, prefix, prefix_len, pathspec);
+
+ free(prefix);
+ return prefix_len;
}
int within_depth(const char *name, int namelen,
static enum path_treatment treat_directory(struct dir_struct *dir,
struct untracked_cache_dir *untracked,
const char *dirname, int len, int baselen, int exclude,
- const struct path_simplify *simplify)
+ const struct pathspec *pathspec)
{
/* The "len-1" is to strip the final '/' */
switch (directory_exists_in_index(dirname, len-1)) {
untracked = lookup_untracked(dir->untracked, untracked,
dirname + baselen, len - baselen);
return read_directory_recursive(dir, dirname, len,
- untracked, 1, simplify);
+ untracked, 1, pathspec);
}
/*
* reading - if the path cannot possibly be in the pathspec,
* return true, and we'll skip it early.
*/
-static int simplify_away(const char *path, int pathlen, const struct path_simplify *simplify)
+static int simplify_away(const char *path, int pathlen,
+ const struct pathspec *pathspec)
{
- if (simplify) {
- for (;;) {
- const char *match = simplify->path;
- int len = simplify->len;
+ int i;
- if (!match)
- break;
- if (len > pathlen)
- len = pathlen;
- if (!memcmp(path, match, len))
- return 0;
- simplify++;
- }
- return 1;
+ if (!pathspec || !pathspec->nr)
+ return 0;
+
+ GUARD_PATHSPEC(pathspec,
+ PATHSPEC_FROMTOP |
+ PATHSPEC_MAXDEPTH |
+ PATHSPEC_LITERAL |
+ PATHSPEC_GLOB |
+ PATHSPEC_ICASE |
+ PATHSPEC_EXCLUDE);
+
+ for (i = 0; i < pathspec->nr; i++) {
+ const struct pathspec_item *item = &pathspec->items[i];
+ int len = item->nowildcard_len;
+
+ if (len > pathlen)
+ len = pathlen;
+ if (!ps_strncmp(item, item->match, path, len))
+ return 0;
}
- return 0;
+
+ return 1;
}
/*
* 2. the path is a directory prefix of some element in the
* pathspec
*/
-static int exclude_matches_pathspec(const char *path, int len,
- const struct path_simplify *simplify)
-{
- if (simplify) {
- for (; simplify->path; simplify++) {
- if (len == simplify->len
- && !memcmp(path, simplify->path, len))
- return 1;
- if (len < simplify->len
- && simplify->path[len] == '/'
- && !memcmp(path, simplify->path, len))
- return 1;
- }
+static int exclude_matches_pathspec(const char *path, int pathlen,
+ const struct pathspec *pathspec)
+{
+ int i;
+
+ if (!pathspec || !pathspec->nr)
+ return 0;
+
+ GUARD_PATHSPEC(pathspec,
+ PATHSPEC_FROMTOP |
+ PATHSPEC_MAXDEPTH |
+ PATHSPEC_LITERAL |
+ PATHSPEC_GLOB |
+ PATHSPEC_ICASE |
+ PATHSPEC_EXCLUDE);
+
+ for (i = 0; i < pathspec->nr; i++) {
+ const struct pathspec_item *item = &pathspec->items[i];
+ int len = item->nowildcard_len;
+
+ if (len == pathlen &&
+ !ps_strncmp(item, item->match, path, pathlen))
+ return 1;
+ if (len > pathlen &&
+ item->match[pathlen] == '/' &&
+ !ps_strncmp(item, item->match, path, pathlen))
+ return 1;
}
return 0;
}
struct untracked_cache_dir *untracked,
struct strbuf *path,
int baselen,
- const struct path_simplify *simplify,
+ const struct pathspec *pathspec,
int dtype, struct dirent *de)
{
int exclude;
case DT_DIR:
strbuf_addch(path, '/');
return treat_directory(dir, untracked, path->buf, path->len,
- baselen, exclude, simplify);
+ baselen, exclude, pathspec);
case DT_REG:
case DT_LNK:
return exclude ? path_excluded : path_untracked;
struct cached_dir *cdir,
struct strbuf *path,
int baselen,
- const struct path_simplify *simplify)
+ const struct pathspec *pathspec)
{
strbuf_setlen(path, baselen);
if (!cdir->ucd) {
* with check_only set.
*/
return read_directory_recursive(dir, path->buf, path->len,
- cdir->ucd, 1, simplify);
+ cdir->ucd, 1, pathspec);
/*
* We get path_recurse in the first run when
* directory_exists_in_index() returns index_nonexistent. We
struct cached_dir *cdir,
struct strbuf *path,
int baselen,
- const struct path_simplify *simplify)
+ const struct pathspec *pathspec)
{
int dtype;
struct dirent *de = cdir->de;
if (!de)
return treat_path_fast(dir, untracked, cdir, path,
- baselen, simplify);
+ baselen, pathspec);
if (is_dot_or_dotdot(de->d_name) || !strcmp(de->d_name, ".git"))
return path_none;
strbuf_setlen(path, baselen);
strbuf_addstr(path, de->d_name);
- if (simplify_away(path->buf, path->len, simplify))
+ if (simplify_away(path->buf, path->len, pathspec))
return path_none;
dtype = DTYPE(de);
- return treat_one_path(dir, untracked, path, baselen, simplify, dtype, de);
+ return treat_one_path(dir, untracked, path, baselen, pathspec, dtype, de);
}
static void add_untracked(struct untracked_cache_dir *dir, const char *name)
static enum path_treatment read_directory_recursive(struct dir_struct *dir,
const char *base, int baselen,
struct untracked_cache_dir *untracked, int check_only,
- const struct path_simplify *simplify)
+ const struct pathspec *pathspec)
{
struct cached_dir cdir;
enum path_treatment state, subdir_state, dir_state = path_none;
while (!read_cached_dir(&cdir)) {
/* check how the file or directory should be treated */
- state = treat_path(dir, untracked, &cdir, &path, baselen, simplify);
+ state = treat_path(dir, untracked, &cdir, &path,
+ baselen, pathspec);
if (state > dir_state)
dir_state = state;
path.buf + baselen,
path.len - baselen);
subdir_state =
- read_directory_recursive(dir, path.buf, path.len,
- ud, check_only, simplify);
+ read_directory_recursive(dir, path.buf,
+ path.len, ud,
+ check_only, pathspec);
if (subdir_state > dir_state)
dir_state = subdir_state;
}
else if ((dir->flags & DIR_SHOW_IGNORED_TOO) ||
((dir->flags & DIR_COLLECT_IGNORED) &&
exclude_matches_pathspec(path.buf, path.len,
- simplify)))
+ pathspec)))
dir_add_ignored(dir, path.buf, path.len);
break;
return name_compare(e1->name, e1->len, e2->name, e2->len);
}
-static struct path_simplify *create_simplify(const char **pathspec)
-{
- int nr, alloc = 0;
- struct path_simplify *simplify = NULL;
-
- if (!pathspec)
- return NULL;
-
- for (nr = 0 ; ; nr++) {
- const char *match;
- ALLOC_GROW(simplify, nr + 1, alloc);
- match = *pathspec++;
- if (!match)
- break;
- simplify[nr].path = match;
- simplify[nr].len = simple_length(match);
- }
- simplify[nr].path = NULL;
- simplify[nr].len = 0;
- return simplify;
-}
-
-static void free_simplify(struct path_simplify *simplify)
-{
- free(simplify);
-}
-
static int treat_leading_path(struct dir_struct *dir,
const char *path, int len,
- const struct path_simplify *simplify)
+ const struct pathspec *pathspec)
{
struct strbuf sb = STRBUF_INIT;
int baselen, rc = 0;
strbuf_add(&sb, path, baselen);
if (!is_directory(sb.buf))
break;
- if (simplify_away(sb.buf, sb.len, simplify))
+ if (simplify_away(sb.buf, sb.len, pathspec))
break;
- if (treat_one_path(dir, NULL, &sb, baselen, simplify,
+ if (treat_one_path(dir, NULL, &sb, baselen, pathspec,
DT_DIR, NULL) == path_none)
break; /* do not recurse into it */
if (len <= baselen) {
return root;
}
-int read_directory(struct dir_struct *dir, const char *path, int len, const struct pathspec *pathspec)
+int read_directory(struct dir_struct *dir, const char *path,
+ int len, const struct pathspec *pathspec)
{
- struct path_simplify *simplify;
struct untracked_cache_dir *untracked;
- /*
- * Check out create_simplify()
- */
- if (pathspec)
- GUARD_PATHSPEC(pathspec,
- PATHSPEC_FROMTOP |
- PATHSPEC_MAXDEPTH |
- PATHSPEC_LITERAL |
- PATHSPEC_GLOB |
- PATHSPEC_ICASE |
- PATHSPEC_EXCLUDE);
-
if (has_symlink_leading_path(path, len))
return dir->nr;
- /*
- * exclude patterns are treated like positive ones in
- * create_simplify. Usually exclude patterns should be a
- * subset of positive ones, which has no impacts on
- * create_simplify().
- */
- simplify = create_simplify(pathspec ? pathspec->_raw : NULL);
untracked = validate_untracked_cache(dir, len, pathspec);
if (!untracked)
/*
* e.g. prep_exclude()
*/
dir->untracked = NULL;
- if (!len || treat_leading_path(dir, path, len, simplify))
- read_directory_recursive(dir, path, len, untracked, 0, simplify);
- free_simplify(simplify);
+ if (!len || treat_leading_path(dir, path, len, pathspec))
+ read_directory_recursive(dir, path, len, untracked, 0, pathspec);
QSORT(dir->entries, dir->nr, cmp_name);
QSORT(dir->ignored, dir->ignored_nr, cmp_name);
if (dir->untracked) {
{
struct strbuf file_name = STRBUF_INIT;
struct strbuf rel_path = STRBUF_INIT;
- char *git_dir = xstrdup(real_path(git_dir_));
- char *work_tree = xstrdup(real_path(work_tree_));
+ char *git_dir = real_pathdup(git_dir_);
+ char *work_tree = real_pathdup(work_tree_);
/* Update gitfile */
strbuf_addf(&file_name, "%s/.git", work_tree);
return;
}
git_work_tree_initialized = 1;
- work_tree = xstrdup(real_path(new_work_tree));
+ work_tree = real_pathdup(new_work_tree);
}
const char *get_git_work_tree(void)
/* Returns the highest-priority, location to look for git programs. */
const char *git_exec_path(void)
{
- const char *env;
+ static char *cached_exec_path;
if (argv_exec_path)
return argv_exec_path;
- env = getenv(EXEC_PATH_ENVIRONMENT);
- if (env && *env) {
- return env;
+ if (!cached_exec_path) {
+ const char *env = getenv(EXEC_PATH_ENVIRONMENT);
+ if (env && *env)
+ cached_exec_path = xstrdup(env);
+ else
+ cached_exec_path = system_path(GIT_EXEC_PATH);
}
-
- return system_path(GIT_EXEC_PATH);
+ return cached_exec_path;
}
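
The hunk above makes git_exec_path() hand out a stable, cached string instead of a pointer straight into the environment. As a stand-alone illustration of that idiom (not part of the patch; the fallback directory is made up):

	#include <stdlib.h>
	#include <string.h>

	/*
	 * Read the environment once, keep a private copy, and return the
	 * same pointer on every later call, so callers never hold a
	 * pointer into the mutable environment block.
	 */
	static const char *exec_path_cached(void)
	{
		static char *cached;

		if (!cached) {
			const char *env = getenv("GIT_EXEC_PATH");
			cached = strdup(env && *env ? env : "/usr/libexec/git-core");
		}
		return cached;
	}
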
static void add_path(struct strbuf *out, const char *path)
prompt=true
;;
-O*)
- orderfile="$1"
+ orderfile="${1#-O}"
;;
--)
shift
merge_keep_backup="$(git config --bool mergetool.keepBackup || echo true)"
merge_keep_temporaries="$(git config --bool mergetool.keepTemporaries || echo false)"
+ prefix=$(git rev-parse --show-prefix) || exit 1
+ cd_to_toplevel
+
+ if test -n "$orderfile"
+ then
+ orderfile=$(
+ git rev-parse --prefix "$prefix" -- "$orderfile" |
+ sed -e 1d
+ )
+ fi
+
if test $# -eq 0 && test -e "$GIT_DIR/MERGE_RR"
then
set -- $(git rerere remaining)
then
print_noop_and_exit
fi
+ elif test $# -ge 0
+ then
+ # rev-parse provides the -- needed for 'set'
+ eval "set $(git rev-parse --sq --prefix "$prefix" -- "$@")"
fi
files=$(git -c core.quotePath=false \
diff --name-only --diff-filter=U \
- ${orderfile:+"$orderfile"} -- "$@")
-
- cd_to_toplevel
+ ${orderfile:+"-O$orderfile"} -- "$@")
if test -z "$files"
then
if retries is None:
# Perform 3 retries by default
retries = 3
- real_cmd += ["-r", str(retries)]
+ if retries > 0:
+ # Provide a way to not pass this option by setting git-p4.retries to 0
+ real_cmd += ["-r", str(retries)]
if isinstance(cmd,basestring):
real_cmd = ' '.join(real_cmd) + ' ' + cmd
if test -f "$squash_msg"; then
mv "$squash_msg" "$squash_msg".bak || exit
count=$(($(sed -n \
- -e "1s/^$comment_char.*\([0-9][0-9]*\).*/\1/p" \
+ -e "1s/^$comment_char[^0-9]*\([0-9][0-9]*\).*/\1/p" \
-e "q" < "$squash_msg".bak)+1))
{
printf '%s\n' "$comment_char $(eval_ngettext \
# This file is licensed under the GPL v2, or a later version
# at the discretion of Linus Torvalds.
-USAGE='<start> <url> [<end>]'
-LONG_USAGE='Summarizes the changes between two commits to the standard output,
-and includes the given URL in the generated summary.'
SUBDIRECTORY_OK='Yes'
OPTIONS_KEEPDASHDASH=
OPTIONS_STUCKLONG=
or: $dashless [--quiet] update [--init] [--remote] [-N|--no-fetch] [-f|--force] [--checkout|--merge|--rebase] [--[no-]recommend-shallow] [--reference <repository>] [--recursive] [--] [<path>...]
or: $dashless [--quiet] summary [--cached|--files] [--summary-limit <n>] [commit] [--] [<path>...]
or: $dashless [--quiet] foreach [--recursive] <command>
- or: $dashless [--quiet] sync [--recursive] [--] [<path>...]"
+ or: $dashless [--quiet] sync [--recursive] [--] [<path>...]
+ or: $dashless [--quiet] absorbgitdirs [--] [<path>...]"
OPTIONS_SPEC=
SUBDIRECTORY_OK=Yes
. git-sh-setup
shift
done
- git ${wt_prefix:+-C "$wt_prefix"} submodule--helper init ${GIT_QUIET:+--quiet} ${prefix:+--prefix "$prefix"} "$@"
+ git ${wt_prefix:+-C "$wt_prefix"} ${prefix:+--super-prefix "$prefix"} submodule--helper init ${GIT_QUIET:+--quiet} "$@"
}
#
{ "fsck-objects", cmd_fsck, RUN_SETUP },
{ "gc", cmd_gc, RUN_SETUP },
{ "get-tar-commit-id", cmd_get_tar_commit_id },
- { "grep", cmd_grep, RUN_SETUP_GENTLY },
+ { "grep", cmd_grep, RUN_SETUP_GENTLY | SUPPORT_SUPER_PREFIX },
{ "hash-object", cmd_hash_object },
{ "help", cmd_help },
{ "index-pack", cmd_index_pack, RUN_SETUP_GENTLY },
static void execv_dashed_external(const char **argv)
{
- struct strbuf cmd = STRBUF_INIT;
- const char *tmp;
+ struct child_process cmd = CHILD_PROCESS_INIT;
int status;
if (get_super_prefix())
use_pager = check_pager_config(argv[0]);
commit_pager_choice();
- strbuf_addf(&cmd, "git-%s", argv[0]);
+ argv_array_pushf(&cmd.args, "git-%s", argv[0]);
+ argv_array_pushv(&cmd.args, argv + 1);
+ cmd.clean_on_exit = 1;
+ cmd.wait_after_clean = 1;
+ cmd.silent_exec_failure = 1;
- /*
- * argv[0] must be the git command, but the argv array
- * belongs to the caller, and may be reused in
- * subsequent loop iterations. Save argv[0] and
- * restore it on error.
- */
- tmp = argv[0];
- argv[0] = cmd.buf;
-
- trace_argv_printf(argv, "trace: exec:");
+ trace_argv_printf(cmd.args.argv, "trace: exec:");
/*
- * if we fail because the command is not found, it is
- * OK to return. Otherwise, we just pass along the status code.
+ * If we fail because the command is not found, it is
+ * OK to return. Otherwise, we just pass along the status code,
+ * or our usual generic code if we were not even able to exec
+ * the program.
*/
- status = run_command_v_opt(argv, RUN_SILENT_EXEC_FAILURE | RUN_CLEAN_ON_EXIT);
- if (status >= 0 || errno != ENOENT)
+ status = run_command(&cmd);
+ if (status >= 0)
exit(status);
-
- argv[0] = tmp;
-
- strbuf_release(&cmd);
+ else if (errno != ENOENT)
+ exit(128);
}
static int run_argv(int *argcp, const char ***argv)
#ifndef GPG_INTERFACE_H
#define GPG_INTERFACE_H
-#define GPG_VERIFY_VERBOSE 1
-#define GPG_VERIFY_RAW 2
+#define GPG_VERIFY_VERBOSE 1
+#define GPG_VERIFY_RAW 2
+#define GPG_VERIFY_OMIT_STATUS 4
struct signature_check {
char *payload;
case GREP_SOURCE_FILE:
gs->identifier = xstrdup(identifier);
break;
+ case GREP_SOURCE_SUBMODULE:
+ if (!identifier) {
+ gs->identifier = NULL;
+ break;
+ }
+ /*
+ * FALL THROUGH
+ * If the identifier is non-NULL (in the submodule case) it
+ * will be a SHA1 that needs to be copied.
+ */
case GREP_SOURCE_SHA1:
gs->identifier = xmalloc(20);
hashcpy(gs->identifier, identifier);
break;
case GREP_SOURCE_BUF:
gs->identifier = NULL;
+ break;
}
}
switch (gs->type) {
case GREP_SOURCE_FILE:
case GREP_SOURCE_SHA1:
+ case GREP_SOURCE_SUBMODULE:
free(gs->buf);
gs->buf = NULL;
gs->size = 0;
return grep_source_load_sha1(gs);
case GREP_SOURCE_BUF:
return gs->buf ? 0 : -1;
+ case GREP_SOURCE_SUBMODULE:
+ break;
}
- die("BUG: invalid grep_source type");
+ die("BUG: invalid grep_source type to load");
}
void grep_source_load_driver(struct grep_source *gs)
GREP_SOURCE_SHA1,
GREP_SOURCE_FILE,
GREP_SOURCE_BUF,
+ GREP_SOURCE_SUBMODULE,
} type;
void *identifier;
char mnemonic; /* this cannot be ':'! */
const char *name;
} pathspec_magic[] = {
- { PATHSPEC_FROMTOP, '/', "top" },
- { PATHSPEC_LITERAL, 0, "literal" },
- { PATHSPEC_GLOB, '\0', "glob" },
- { PATHSPEC_ICASE, '\0', "icase" },
- { PATHSPEC_EXCLUDE, '!', "exclude" },
+ { PATHSPEC_FROMTOP, '/', "top" },
+ { PATHSPEC_LITERAL, '\0', "literal" },
+ { PATHSPEC_GLOB, '\0', "glob" },
+ { PATHSPEC_ICASE, '\0', "icase" },
+ { PATHSPEC_EXCLUDE, '!', "exclude" },
};
-static void prefix_short_magic(struct strbuf *sb, int prefixlen,
- unsigned short_magic)
+static void prefix_magic(struct strbuf *sb, int prefixlen, unsigned magic)
{
int i;
strbuf_addstr(sb, ":(");
for (i = 0; i < ARRAY_SIZE(pathspec_magic); i++)
- if (short_magic & pathspec_magic[i].bit) {
+ if (magic & pathspec_magic[i].bit) {
if (sb->buf[sb->len - 1] != '(')
strbuf_addch(sb, ',');
strbuf_addstr(sb, pathspec_magic[i].name);
strbuf_addf(sb, ",prefix:%d)", prefixlen);
}
-/*
- * Take an element of a pathspec and check for magic signatures.
- * Append the result to the prefix. Return the magic bitmap.
- *
- * For now, we only parse the syntax and throw out anything other than
- * "top" magic.
- *
- * NEEDSWORK: This needs to be rewritten when we start migrating
- * get_pathspec() users to use the "struct pathspec" interface. For
- * example, a pathspec element may be marked as case-insensitive, but
- * the prefix part must always match literally, and a single stupid
- * string cannot express such a case.
- */
-static unsigned prefix_pathspec(struct pathspec_item *item,
- unsigned *p_short_magic,
- const char **raw, unsigned flags,
- const char *prefix, int prefixlen,
- const char *elt)
+static inline int get_literal_global(void)
{
- static int literal_global = -1;
- static int glob_global = -1;
- static int noglob_global = -1;
- static int icase_global = -1;
- unsigned magic = 0, short_magic = 0, global_magic = 0;
- const char *copyfrom = elt, *long_magic_end = NULL;
- char *match;
- int i, pathspec_prefix = -1;
+ static int literal = -1;
+
+ if (literal < 0)
+ literal = git_env_bool(GIT_LITERAL_PATHSPECS_ENVIRONMENT, 0);
+
+ return literal;
+}
+
+static inline int get_glob_global(void)
+{
+ static int glob = -1;
+
+ if (glob < 0)
+ glob = git_env_bool(GIT_GLOB_PATHSPECS_ENVIRONMENT, 0);
+
+ return glob;
+}
+
+static inline int get_noglob_global(void)
+{
+ static int noglob = -1;
+
+ if (noglob < 0)
+ noglob = git_env_bool(GIT_NOGLOB_PATHSPECS_ENVIRONMENT, 0);
+
+ return noglob;
+}
+
+static inline int get_icase_global(void)
+{
+ static int icase = -1;
+
+ if (icase < 0)
+ icase = git_env_bool(GIT_ICASE_PATHSPECS_ENVIRONMENT, 0);
+
+ return icase;
+}
+
+static int get_global_magic(int element_magic)
+{
+ int global_magic = 0;
- if (literal_global < 0)
- literal_global = git_env_bool(GIT_LITERAL_PATHSPECS_ENVIRONMENT, 0);
- if (literal_global)
+ if (get_literal_global())
global_magic |= PATHSPEC_LITERAL;
- if (glob_global < 0)
- glob_global = git_env_bool(GIT_GLOB_PATHSPECS_ENVIRONMENT, 0);
- if (glob_global)
+ /* --glob-pathspec is overridden by :(literal) */
+ if (get_glob_global() && !(element_magic & PATHSPEC_LITERAL))
global_magic |= PATHSPEC_GLOB;
- if (noglob_global < 0)
- noglob_global = git_env_bool(GIT_NOGLOB_PATHSPECS_ENVIRONMENT, 0);
-
- if (glob_global && noglob_global)
+ if (get_glob_global() && get_noglob_global())
die(_("global 'glob' and 'noglob' pathspec settings are incompatible"));
-
- if (icase_global < 0)
- icase_global = git_env_bool(GIT_ICASE_PATHSPECS_ENVIRONMENT, 0);
- if (icase_global)
+ if (get_icase_global())
global_magic |= PATHSPEC_ICASE;
if ((global_magic & PATHSPEC_LITERAL) &&
die(_("global 'literal' pathspec setting is incompatible "
"with all other global pathspec settings"));
- if (flags & PATHSPEC_LITERAL_PATH)
- global_magic = 0;
+ /* --noglob-pathspec adds :(literal) _unless_ :(glob) is specified */
+ if (get_noglob_global() && !(element_magic & PATHSPEC_GLOB))
+ global_magic |= PATHSPEC_LITERAL;
- if (elt[0] != ':' || literal_global ||
- (flags & PATHSPEC_LITERAL_PATH)) {
- ; /* nothing to do */
- } else if (elt[1] == '(') {
- /* longhand */
- const char *nextat;
- for (copyfrom = elt + 2;
- *copyfrom && *copyfrom != ')';
- copyfrom = nextat) {
- size_t len = strcspn(copyfrom, ",)");
- if (copyfrom[len] == ',')
- nextat = copyfrom + len + 1;
- else
- /* handle ')' and '\0' */
- nextat = copyfrom + len;
- if (!len)
- continue;
- for (i = 0; i < ARRAY_SIZE(pathspec_magic); i++) {
- if (strlen(pathspec_magic[i].name) == len &&
- !strncmp(pathspec_magic[i].name, copyfrom, len)) {
- magic |= pathspec_magic[i].bit;
- break;
- }
- if (starts_with(copyfrom, "prefix:")) {
- char *endptr;
- pathspec_prefix = strtol(copyfrom + 7,
- &endptr, 10);
- if (endptr - copyfrom != len)
- die(_("invalid parameter for pathspec magic 'prefix'"));
- /* "i" would be wrong, but it does not matter */
- break;
- }
+ return global_magic;
+}
+
+/*
+ * Parse the pathspec element looking for long magic
+ *
+ * saves all magic in 'magic'
+ * if prefix magic is used, save the prefix length in 'prefix_len'
+ * returns the position in 'elem' after all magic has been parsed
+ */
+static const char *parse_long_magic(unsigned *magic, int *prefix_len,
+ const char *elem)
+{
+ const char *pos;
+ const char *nextat;
+
+ for (pos = elem + 2; *pos && *pos != ')'; pos = nextat) {
+ size_t len = strcspn(pos, ",)");
+ int i;
+
+ if (pos[len] == ',')
+ nextat = pos + len + 1; /* handle ',' */
+ else
+ nextat = pos + len; /* handle ')' and '\0' */
+
+ if (!len)
+ continue;
+
+ if (starts_with(pos, "prefix:")) {
+ char *endptr;
+ *prefix_len = strtol(pos + 7, &endptr, 10);
+ if (endptr - pos != len)
+ die(_("invalid parameter for pathspec magic 'prefix'"));
+ continue;
+ }
+
+ for (i = 0; i < ARRAY_SIZE(pathspec_magic); i++) {
+ if (strlen(pathspec_magic[i].name) == len &&
+ !strncmp(pathspec_magic[i].name, pos, len)) {
+ *magic |= pathspec_magic[i].bit;
+ break;
}
- if (ARRAY_SIZE(pathspec_magic) <= i)
- die(_("Invalid pathspec magic '%.*s' in '%s'"),
- (int) len, copyfrom, elt);
}
- if (*copyfrom != ')')
- die(_("Missing ')' at the end of pathspec magic in '%s'"), elt);
- long_magic_end = copyfrom;
- copyfrom++;
- } else {
- /* shorthand */
- for (copyfrom = elt + 1;
- *copyfrom && *copyfrom != ':';
- copyfrom++) {
- char ch = *copyfrom;
- if (!is_pathspec_magic(ch))
+ if (ARRAY_SIZE(pathspec_magic) <= i)
+ die(_("Invalid pathspec magic '%.*s' in '%s'"),
+ (int) len, pos, elem);
+ }
+
+ if (*pos != ')')
+ die(_("Missing ')' at the end of pathspec magic in '%s'"),
+ elem);
+ pos++;
+
+ return pos;
+}
+
+/*
+ * Parse the pathspec element looking for short magic
+ *
+ * saves all magic in 'magic'
+ * returns the position in 'elem' after all magic has been parsed
+ */
+static const char *parse_short_magic(unsigned *magic, const char *elem)
+{
+ const char *pos;
+
+ for (pos = elem + 1; *pos && *pos != ':'; pos++) {
+ char ch = *pos;
+ int i;
+
+ if (!is_pathspec_magic(ch))
+ break;
+
+ for (i = 0; i < ARRAY_SIZE(pathspec_magic); i++) {
+ if (pathspec_magic[i].mnemonic == ch) {
+ *magic |= pathspec_magic[i].bit;
break;
- for (i = 0; i < ARRAY_SIZE(pathspec_magic); i++)
- if (pathspec_magic[i].mnemonic == ch) {
- short_magic |= pathspec_magic[i].bit;
- break;
- }
- if (ARRAY_SIZE(pathspec_magic) <= i)
- die(_("Unimplemented pathspec magic '%c' in '%s'"),
- ch, elt);
+ }
}
- if (*copyfrom == ':')
- copyfrom++;
+
+ if (ARRAY_SIZE(pathspec_magic) <= i)
+ die(_("Unimplemented pathspec magic '%c' in '%s'"),
+ ch, elem);
}
- magic |= short_magic;
- *p_short_magic = short_magic;
+ if (*pos == ':')
+ pos++;
- /* --noglob-pathspec adds :(literal) _unless_ :(glob) is specified */
- if (noglob_global && !(magic & PATHSPEC_GLOB))
- global_magic |= PATHSPEC_LITERAL;
+ return pos;
+}
- /* --glob-pathspec is overridden by :(literal) */
- if ((global_magic & PATHSPEC_GLOB) && (magic & PATHSPEC_LITERAL))
- global_magic &= ~PATHSPEC_GLOB;
+static const char *parse_element_magic(unsigned *magic, int *prefix_len,
+ const char *elem)
+{
+ if (elem[0] != ':' || get_literal_global())
+ return elem; /* nothing to do */
+ else if (elem[1] == '(')
+ /* longhand */
+ return parse_long_magic(magic, prefix_len, elem);
+ else
+ /* shorthand */
+ return parse_short_magic(magic, elem);
+}
+
+static void strip_submodule_slash_cheap(struct pathspec_item *item)
+{
+ if (item->len >= 1 && item->match[item->len - 1] == '/') {
+ int i = cache_name_pos(item->match, item->len - 1);
+
+ if (i >= 0 && S_ISGITLINK(active_cache[i]->ce_mode)) {
+ item->len--;
+ item->match[item->len] = '\0';
+ }
+ }
+}
+
+static void strip_submodule_slash_expensive(struct pathspec_item *item)
+{
+ int i;
+
+ for (i = 0; i < active_nr; i++) {
+ struct cache_entry *ce = active_cache[i];
+ int ce_len = ce_namelen(ce);
+
+ if (!S_ISGITLINK(ce->ce_mode))
+ continue;
+
+ if (item->len <= ce_len || item->match[ce_len] != '/' ||
+ memcmp(ce->name, item->match, ce_len))
+ continue;
+
+ if (item->len == ce_len + 1) {
+ /* strip trailing slash */
+ item->len--;
+ item->match[item->len] = '\0';
+ } else {
+ die(_("Pathspec '%s' is in submodule '%.*s'"),
+ item->original, ce_len, ce->name);
+ }
+ }
+}
+
+static void die_inside_submodule_path(struct pathspec_item *item)
+{
+ int i;
+
+ for (i = 0; i < active_nr; i++) {
+ struct cache_entry *ce = active_cache[i];
+ int ce_len = ce_namelen(ce);
+
+ if (!S_ISGITLINK(ce->ce_mode))
+ continue;
+
+ if (item->len < ce_len ||
+ !(item->match[ce_len] == '/' || item->match[ce_len] == '\0') ||
+ memcmp(ce->name, item->match, ce_len))
+ continue;
- magic |= global_magic;
+ die(_("Pathspec '%s' is in submodule '%.*s'"),
+ item->original, ce_len, ce->name);
+ }
+}
+
+/*
+ * Perform the initialization of a pathspec_item based on a pathspec element.
+ */
+static void init_pathspec_item(struct pathspec_item *item, unsigned flags,
+ const char *prefix, int prefixlen,
+ const char *elt)
+{
+ unsigned magic = 0, element_magic = 0;
+ const char *copyfrom = elt;
+ char *match;
+ int pathspec_prefix = -1;
+
+ /* PATHSPEC_LITERAL_PATH ignores magic */
+ if (flags & PATHSPEC_LITERAL_PATH) {
+ magic = PATHSPEC_LITERAL;
+ } else {
+ copyfrom = parse_element_magic(&element_magic,
+ &pathspec_prefix,
+ elt);
+ magic |= element_magic;
+ magic |= get_global_magic(element_magic);
+ }
+
+ item->magic = magic;
if (pathspec_prefix >= 0 &&
(prefixlen || (prefix && *prefix)))
if ((magic & PATHSPEC_LITERAL) && (magic & PATHSPEC_GLOB))
die(_("%s: 'literal' and 'glob' are incompatible"), elt);
+ /* Create match string which will be used for pathspec matching */
if (pathspec_prefix >= 0) {
match = xstrdup(copyfrom);
prefixlen = pathspec_prefix;
match = xstrdup(copyfrom);
prefixlen = 0;
} else {
- match = prefix_path_gently(prefix, prefixlen, &prefixlen, copyfrom);
+ match = prefix_path_gently(prefix, prefixlen,
+ &prefixlen, copyfrom);
if (!match)
die(_("%s: '%s' is outside repository"), elt, copyfrom);
}
- *raw = item->match = match;
+
+ item->match = match;
+ item->len = strlen(item->match);
+ item->prefix = prefixlen;
+
/*
* Prefix the pathspec (keep all magic) and assign to
* original. Useful for passing to another command.
*/
- if (flags & PATHSPEC_PREFIX_ORIGIN) {
+ if ((flags & PATHSPEC_PREFIX_ORIGIN) &&
+ prefixlen && !get_literal_global()) {
struct strbuf sb = STRBUF_INIT;
- if (prefixlen && !literal_global) {
- /* Preserve the actual prefix length of each pattern */
- if (short_magic)
- prefix_short_magic(&sb, prefixlen, short_magic);
- else if (long_magic_end) {
- strbuf_add(&sb, elt, long_magic_end - elt);
- strbuf_addf(&sb, ",prefix:%d)", prefixlen);
- } else
- strbuf_addf(&sb, ":(prefix:%d)", prefixlen);
- }
+
+ /* Preserve the actual prefix length of each pattern */
+ prefix_magic(&sb, prefixlen, element_magic);
+
strbuf_addstr(&sb, match);
item->original = strbuf_detach(&sb, NULL);
- } else
- item->original = elt;
- item->len = strlen(item->match);
- item->prefix = prefixlen;
-
- if ((flags & PATHSPEC_STRIP_SUBMODULE_SLASH_CHEAP) &&
- (item->len >= 1 && item->match[item->len - 1] == '/') &&
- (i = cache_name_pos(item->match, item->len - 1)) >= 0 &&
- S_ISGITLINK(active_cache[i]->ce_mode)) {
- item->len--;
- match[item->len] = '\0';
+ } else {
+ item->original = xstrdup(elt);
}
+ if (flags & PATHSPEC_STRIP_SUBMODULE_SLASH_CHEAP)
+ strip_submodule_slash_cheap(item);
+
if (flags & PATHSPEC_STRIP_SUBMODULE_SLASH_EXPENSIVE)
- for (i = 0; i < active_nr; i++) {
- struct cache_entry *ce = active_cache[i];
- int ce_len = ce_namelen(ce);
-
- if (!S_ISGITLINK(ce->ce_mode))
- continue;
-
- if (item->len <= ce_len || match[ce_len] != '/' ||
- memcmp(ce->name, match, ce_len))
- continue;
- if (item->len == ce_len + 1) {
- /* strip trailing slash */
- item->len--;
- match[item->len] = '\0';
- } else
- die (_("Pathspec '%s' is in submodule '%.*s'"),
- elt, ce_len, ce->name);
- }
+ strip_submodule_slash_expensive(item);
- if (magic & PATHSPEC_LITERAL)
+ if (magic & PATHSPEC_LITERAL) {
item->nowildcard_len = item->len;
- else {
+ } else {
item->nowildcard_len = simple_length(item->match);
if (item->nowildcard_len < prefixlen)
item->nowildcard_len = prefixlen;
}
+
item->flags = 0;
if (magic & PATHSPEC_GLOB) {
/*
}
/* sanity checks, pathspec matchers assume these are sane */
- assert(item->nowildcard_len <= item->len &&
- item->prefix <= item->len);
- return magic;
+ if (item->nowildcard_len > item->len ||
+ item->prefix > item->len) {
+ /*
+ * This case can be triggered by the user pointing us to a
+ * pathspec inside a submodule, which is an input error.
+ * Detect that here and complain, but fall back in the
+ * non-submodule case to a BUG, as we have no idea what
+ * would trigger that.
+ */
+ die_inside_submodule_path(item);
+ die ("BUG: item->nowildcard_len > item->len || item->prefix > item->len)");
+ }
}
static int pathspec_item_cmp(const void *a_, const void *b_)
}
static void NORETURN unsupported_magic(const char *pattern,
- unsigned magic,
- unsigned short_magic)
+ unsigned magic)
{
struct strbuf sb = STRBUF_INIT;
- int i, n;
- for (n = i = 0; i < ARRAY_SIZE(pathspec_magic); i++) {
+ int i;
+ for (i = 0; i < ARRAY_SIZE(pathspec_magic); i++) {
const struct pathspec_magic *m = pathspec_magic + i;
if (!(magic & m->bit))
continue;
if (sb.len)
- strbuf_addch(&sb, ' ');
- if (short_magic & m->bit)
- strbuf_addf(&sb, "'%c'", m->mnemonic);
+ strbuf_addstr(&sb, ", ");
+
+ if (m->mnemonic)
+ strbuf_addf(&sb, _("'%s' (mnemonic: '%c')"),
+ m->name, m->mnemonic);
else
strbuf_addf(&sb, "'%s'", m->name);
- n++;
}
/*
* We may want to substitute "this command" with a command
/* No arguments with prefix -> prefix pathspec */
if (!entry) {
- static const char *raw[2];
-
if (flags & PATHSPEC_PREFER_FULL)
return;
die("BUG: PATHSPEC_PREFER_CWD requires arguments");
pathspec->items = item = xcalloc(1, sizeof(*item));
- item->match = prefix;
- item->original = prefix;
+ item->match = xstrdup(prefix);
+ item->original = xstrdup(prefix);
item->nowildcard_len = item->len = strlen(prefix);
item->prefix = item->len;
- raw[0] = prefix;
- raw[1] = NULL;
pathspec->nr = 1;
- pathspec->_raw = raw;
return;
}
pathspec->nr = n;
ALLOC_ARRAY(pathspec->items, n);
item = pathspec->items;
- pathspec->_raw = argv;
prefixlen = prefix ? strlen(prefix) : 0;
for (i = 0; i < n; i++) {
- unsigned short_magic;
entry = argv[i];
- item[i].magic = prefix_pathspec(item + i, &short_magic,
- argv + i, flags,
- prefix, prefixlen, entry);
- if ((flags & PATHSPEC_LITERAL_PATH) &&
- !(magic_mask & PATHSPEC_LITERAL))
- item[i].magic |= PATHSPEC_LITERAL;
+ init_pathspec_item(item + i, flags, prefix, prefixlen, entry);
+
if (item[i].magic & PATHSPEC_EXCLUDE)
nr_exclude++;
if (item[i].magic & magic_mask)
- unsupported_magic(entry,
- item[i].magic & magic_mask,
- short_magic);
+ unsupported_magic(entry, item[i].magic & magic_mask);
if ((flags & PATHSPEC_SYMLINK_LEADING_PATH) &&
has_symlink_leading_path(item[i].match, item[i].len)) {
}
}
-/*
- * N.B. get_pathspec() is deprecated in favor of the "struct pathspec"
- * based interface - see pathspec.c:parse_pathspec().
- *
- * Arguments:
- * - prefix - a path relative to the root of the working tree
- * - pathspec - a list of paths underneath the prefix path
- *
- * Iterates over pathspec, prepending each path with prefix,
- * and return the resulting list.
- *
- * If pathspec is empty, return a singleton list containing prefix.
- *
- * If pathspec and prefix are both empty, return an empty list.
- *
- * This is typically used by built-in commands such as add.c, in order
- * to normalize argv arguments provided to the built-in into a list of
- * paths to process, all relative to the root of the working tree.
- */
-const char **get_pathspec(const char *prefix, const char **pathspec)
-{
- struct pathspec ps;
- parse_pathspec(&ps,
- PATHSPEC_ALL_MAGIC &
- ~(PATHSPEC_FROMTOP | PATHSPEC_LITERAL),
- PATHSPEC_PREFER_CWD,
- prefix, pathspec);
- return ps._raw;
-}
-
void copy_pathspec(struct pathspec *dst, const struct pathspec *src)
{
+ int i;
+
*dst = *src;
ALLOC_ARRAY(dst->items, dst->nr);
COPY_ARRAY(dst->items, src->items, dst->nr);
+
+ for (i = 0; i < dst->nr; i++) {
+ dst->items[i].match = xstrdup(src->items[i].match);
+ dst->items[i].original = xstrdup(src->items[i].original);
+ }
}
void clear_pathspec(struct pathspec *pathspec)
{
+ int i;
+
+ for (i = 0; i < pathspec->nr; i++) {
+ free(pathspec->items[i].match);
+ free(pathspec->items[i].original);
+ }
free(pathspec->items);
pathspec->items = NULL;
+ pathspec->nr = 0;
}
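
With match and original now owned by each pathspec_item, a pathspec built by parse_pathspec() and any copy made with copy_pathspec() carry independently allocated strings, and each must be released with clear_pathspec(). A minimal sketch of the calling convention, assuming the git source tree headers (the pathspec elements are arbitrary examples):

	#include "cache.h"
	#include "pathspec.h"

	static void pathspec_copy_example(const char *prefix)
	{
		static const char *argv[] = { ":(icase)Makefile", ":!*.o", NULL };
		struct pathspec ps, copy;

		parse_pathspec(&ps, 0, PATHSPEC_PREFER_CWD, prefix, argv);
		copy_pathspec(&copy, &ps);

		/* both now hold their own match/original strings */
		clear_pathspec(&copy);
		clear_pathspec(&ps);
	}
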
#define PATHSPEC_ONESTAR 1 /* the pathspec pattern satisfies GFNM_ONESTAR */
struct pathspec {
- const char **_raw; /* get_pathspec() result, not freed by clear_pathspec() */
int nr;
unsigned int has_wildcard:1;
unsigned int recursive:1;
unsigned magic;
int max_depth;
struct pathspec_item {
- const char *match;
- const char *original;
+ char *match;
+ char *original;
unsigned magic;
int len, prefix;
int nowildcard_len;
return index_name_stage_pos(istate, name, namelen, 0);
}
-/* Remove entry, return true if there are more entries to go.. */
int remove_index_entry_at(struct index_state *istate, int pos)
{
struct cache_entry *ce = istate->cache[pos];
return 1;
}
-void *read_blob_data_from_index(struct index_state *istate, const char *path, unsigned long *size)
+void *read_blob_data_from_index(const struct index_state *istate,
+ const char *path, unsigned long *size)
{
int pos, len;
unsigned long sz;
return ref;
}
-static int filter_ref_kind(struct ref_filter *filter, const char *refname)
+static int ref_kind_from_refname(const char *refname)
{
unsigned int i;
{ "refs/tags/", FILTER_REFS_TAGS}
};
- if (filter->kind == FILTER_REFS_BRANCHES ||
- filter->kind == FILTER_REFS_REMOTES ||
- filter->kind == FILTER_REFS_TAGS)
- return filter->kind;
- else if (!strcmp(refname, "HEAD"))
+ if (!strcmp(refname, "HEAD"))
return FILTER_REFS_DETACHED_HEAD;
for (i = 0; i < ARRAY_SIZE(ref_kind); i++) {
return FILTER_REFS_OTHERS;
}
+static int filter_ref_kind(struct ref_filter *filter, const char *refname)
+{
+ if (filter->kind == FILTER_REFS_BRANCHES ||
+ filter->kind == FILTER_REFS_REMOTES ||
+ filter->kind == FILTER_REFS_TAGS)
+ return filter->kind;
+ return ref_kind_from_refname(refname);
+}
+
/*
* A call-back given to for_each_ref(). Filter refs and keep them for
* later object processing.
putchar('\n');
}
+void pretty_print_ref(const char *name, const unsigned char *sha1,
+ const char *format)
+{
+ struct ref_array_item *ref_item;
+ ref_item = new_ref_array_item(name, sha1, 0);
+ ref_item->kind = ref_kind_from_refname(name);
+ show_ref_array_item(ref_item, format, 0);
+ free_array_item(ref_item);
+}
+
/* If no sorting option is given, use refname to sort as default */
struct ref_sorting *ref_default_sorting(void)
{
/* Function to parse --merged and --no-merged options */
int parse_opt_merge_filter(const struct option *opt, const char *arg, int unset);
+/*
+ * Print a single ref, outside of any ref-filter. Note that the
+ * name must be a fully qualified refname.
+ */
+void pretty_print_ref(const char *name, const unsigned char *sha1,
+ const char *format);
+
#endif /* REF_FILTER_H */
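
A minimal usage sketch for the new helper, assuming the git source tree headers (the refname and format string are arbitrary examples); it builds a one-off ref_array_item, derives the kind from the refname, and prints it with an ordinary ref-filter format:

	#include "cache.h"
	#include "ref-filter.h"

	static void show_one_tag(const unsigned char *sha1)
	{
		pretty_print_ref("refs/tags/v2.12.0", sha1,
				 "%(refname:short) %(objectname:short)");
	}
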
if (!f)
return;
+ remote->configured_in_repo = 1;
remote->origin = REMOTE_REMOTES;
while (strbuf_getline(&buf, f) != EOF) {
const char *v;
return;
}
+ remote->configured_in_repo = 1;
remote->origin = REMOTE_BRANCHES;
/*
}
remote = make_remote(name, namelen);
remote->origin = REMOTE_CONFIG;
+ if (current_config_scope() == CONFIG_SCOPE_REPO)
+ remote->configured_in_repo = 1;
if (!strcmp(subkey, "mirror"))
remote->mirror = git_config_bool(key, value);
else if (!strcmp(subkey, "skipdefaultupdate"))
return remote_get_1(name, pushremote_for_branch);
}
-int remote_is_configured(struct remote *remote)
+int remote_is_configured(struct remote *remote, int in_repo)
{
- return remote && remote->origin;
+ if (!remote)
+ return 0;
+ if (in_repo)
+ return remote->configured_in_repo;
+ return !!remote->origin;
}
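
A sketch of the updated calling convention, assuming the git source tree headers: passing 1 for in_repo asks whether the remote is configured in this repository's own configuration, while 0 keeps the old "configured anywhere" meaning.

	#include "cache.h"
	#include "remote.h"

	static int origin_configured_in_this_repo(void)
	{
		struct remote *remote = remote_get("origin");

		return remote_is_configured(remote, 1);
	}
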
int for_each_remote(each_remote_fn fn, void *priv)
{
struct remote *remote;
- if (!branch)
- return error_buf(err, _("HEAD does not point to a branch"));
-
remote = remote_get(pushremote_for_branch(branch, NULL));
if (!remote)
return error_buf(err,
const char *branch_get_push(struct branch *branch, struct strbuf *err)
{
+ if (!branch)
+ return error_buf(err, _("HEAD does not point to a branch"));
+
if (!branch->push_tracking_ref)
branch->push_tracking_ref = branch_get_push_1(branch, err);
return branch->push_tracking_ref;
struct hashmap_entry ent; /* must be first */
const char *name;
- int origin;
+ int origin, configured_in_repo;
const char *foreign_vcs;
struct remote *remote_get(const char *name);
struct remote *pushremote_get(const char *name);
-int remote_is_configured(struct remote *remote);
+int remote_is_configured(struct remote *remote, int in_repo);
typedef int each_remote_fn(struct remote *remote, void *priv);
int for_each_remote(each_remote_fn fn, void *priv);
static void cleanup_children(int sig, int in_signal)
{
+ struct child_to_clean *children_to_wait_for = NULL;
+
while (children_to_clean) {
struct child_to_clean *p = children_to_clean;
children_to_clean = p->next;
}
kill(p->pid, sig);
+
+ if (p->process->wait_after_clean) {
+ p->next = children_to_wait_for;
+ children_to_wait_for = p;
+ } else {
+ if (!in_signal)
+ free(p);
+ }
+ }
+
+ while (children_to_wait_for) {
+ struct child_to_clean *p = children_to_wait_for;
+ children_to_wait_for = p->next;
+
+ while (waitpid(p->pid, NULL, 0) < 0 && errno == EINTR)
+ ; /* spin waiting for process exit or error */
+
if (!in_signal)
free(p);
}
unsigned stdout_to_stderr:1;
unsigned use_shell:1;
unsigned clean_on_exit:1;
+ unsigned wait_after_clean:1;
void (*clean_on_exit_handler)(struct child_process *process);
void *clean_on_exit_handler_cbdata;
};
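
A sketch of how the new flag combines with clean_on_exit, assuming the git source tree headers (the command is arbitrary): when the parent exits or is signalled, cleanup_children() above first signals the child and, because wait_after_clean is set, waits for it to terminate before the parent itself goes away.

	#include "cache.h"
	#include "run-command.h"

	static int run_child_that_must_not_outlive_us(void)
	{
		struct child_process cmd = CHILD_PROCESS_INIT;

		argv_array_pushl(&cmd.args, "sh", "-c", "sleep 60", NULL);
		cmd.clean_on_exit = 1;
		cmd.wait_after_clean = 1;	/* reaped in cleanup_children() */
		return run_command(&cmd);
	}
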
#include "argv-array.h"
#include "quote.h"
#include "trailer.h"
+#include "log-tree.h"
+#include "wt-status.h"
#define GIT_REFLOG_ACTION "GIT_REFLOG_ACTION"
static GIT_PATH_FUNC(git_path_head_file, "sequencer/head")
static GIT_PATH_FUNC(git_path_abort_safety_file, "sequencer/abort-safety")
+static GIT_PATH_FUNC(rebase_path, "rebase-merge")
+/*
+ * The file containing rebase commands, comments, and empty lines.
+ * This file is created by "git rebase -i" then edited by the user. As
+ * the lines are processed, they are removed from the front of this
+ * file and written to the tail of 'done'.
+ */
+static GIT_PATH_FUNC(rebase_path_todo, "rebase-merge/git-rebase-todo")
+/*
+ * The rebase command lines that have already been processed. A line
+ * is moved here when it is first handled, before any associated user
+ * actions.
+ */
+static GIT_PATH_FUNC(rebase_path_done, "rebase-merge/done")
+/*
+ * The file to keep track of how many commands were already processed (e.g.
+ * for the prompt).
+ */
+static GIT_PATH_FUNC(rebase_path_msgnum, "rebase-merge/msgnum");
+/*
+ * The file to keep track of how many commands are to be processed in total
+ * (e.g. for the prompt).
+ */
+static GIT_PATH_FUNC(rebase_path_msgtotal, "rebase-merge/end");
+/*
+ * The commit message that is planned to be used for any changes that
+ * need to be committed following a user interaction.
+ */
+static GIT_PATH_FUNC(rebase_path_message, "rebase-merge/message")
+/*
+ * The file into which is accumulated the suggested commit message for
+ * squash/fixup commands. When the first of a series of squash/fixups
+ * is seen, the file is created and the commit message from the
+ * previous commit and from the first squash/fixup commit are written
+ * to it. The commit message for each subsequent squash/fixup commit
+ * is appended to the file as it is processed.
+ *
+ * The first line of the file is of the form
+ * # This is a combination of $count commits.
+ * where $count is the number of commits whose messages have been
+ * written to the file so far (including the initial "pick" commit).
+ * Each time that a commit message is processed, this line is read and
+ * updated. It is deleted just before the combined commit is made.
+ */
+static GIT_PATH_FUNC(rebase_path_squash_msg, "rebase-merge/message-squash")
+/*
+ * If the current series of squash/fixups has not yet included a squash
+ * command, then this file exists and holds the commit message of the
+ * original "pick" commit. (If the series ends without a "squash"
+ * command, then this can be used as the commit message of the combined
+ * commit without opening the editor.)
+ */
+static GIT_PATH_FUNC(rebase_path_fixup_msg, "rebase-merge/message-fixup")
/*
* A script to set the GIT_AUTHOR_NAME, GIT_AUTHOR_EMAIL, and
* GIT_AUTHOR_DATE that will be used for the commit that is currently
* being rebased.
*/
static GIT_PATH_FUNC(rebase_path_author_script, "rebase-merge/author-script")
+/*
+ * When an "edit" rebase command is being processed, the SHA1 of the
+ * commit to be edited is recorded in this file. When "git rebase
+ * --continue" is executed, if there are any staged changes then they
+ * will be amended to the HEAD commit, but only provided the HEAD
+ * commit is still the commit to be edited. When any other rebase
+ * command is processed, this file is deleted.
+ */
+static GIT_PATH_FUNC(rebase_path_amend, "rebase-merge/amend")
+/*
+ * When we stop at a given patch via the "edit" command, this file contains
+ * the abbreviated commit name of the corresponding patch.
+ */
+static GIT_PATH_FUNC(rebase_path_stopped_sha, "rebase-merge/stopped-sha")
+/*
+ * For the post-rewrite hook, we make a list of rewritten commits and
+ * their new sha1s. The rewritten-pending list keeps the sha1s of
+ * commits that have been processed, but not committed yet,
+ * e.g. because they are waiting for a 'squash' command.
+ */
+static GIT_PATH_FUNC(rebase_path_rewritten_list, "rebase-merge/rewritten-list")
+static GIT_PATH_FUNC(rebase_path_rewritten_pending,
+ "rebase-merge/rewritten-pending")
/*
* The following files are written by git-rebase just after parsing the
* command-line (and are only consumed, not modified, by the sequencer).
*/
static GIT_PATH_FUNC(rebase_path_gpg_sign_opt, "rebase-merge/gpg_sign_opt")
+static GIT_PATH_FUNC(rebase_path_orig_head, "rebase-merge/orig-head")
+static GIT_PATH_FUNC(rebase_path_verbose, "rebase-merge/verbose")
+static GIT_PATH_FUNC(rebase_path_head_name, "rebase-merge/head-name")
+static GIT_PATH_FUNC(rebase_path_onto, "rebase-merge/onto")
+static GIT_PATH_FUNC(rebase_path_autostash, "rebase-merge/autostash")
+static GIT_PATH_FUNC(rebase_path_strategy, "rebase-merge/strategy")
+static GIT_PATH_FUNC(rebase_path_strategy_opts, "rebase-merge/strategy_opts")
-/* We will introduce the 'interactive rebase' mode later */
static inline int is_rebase_i(const struct replay_opts *opts)
{
- return 0;
+ return opts->action == REPLAY_INTERACTIVE_REBASE;
}
static const char *get_dir(const struct replay_opts *opts)
{
+ if (is_rebase_i(opts))
+ return rebase_path();
return git_path_seq_dir();
}
static const char *get_todo_path(const struct replay_opts *opts)
{
+ if (is_rebase_i(opts))
+ return rebase_path_todo();
return git_path_todo_file();
}
static const char *action_name(const struct replay_opts *opts)
{
- return opts->action == REPLAY_REVERT ? N_("revert") : N_("cherry-pick");
+ switch (opts->action) {
+ case REPLAY_REVERT:
+ return N_("revert");
+ case REPLAY_PICK:
+ return N_("cherry-pick");
+ case REPLAY_INTERACTIVE_REBASE:
+ return N_("rebase -i");
+ }
+ die(_("Unknown action: %d"), opts->action);
}
struct commit_message {
o.ancestor = base ? base_label : "(empty tree)";
o.branch1 = "HEAD";
o.branch2 = next ? next_label : "(empty tree)";
+ if (is_rebase_i(opts))
+ o.buffer_output = 2;
head_tree = parse_tree_indirect(head);
next_tree = next ? next->tree : empty_tree();
clean = merge_trees(&o,
head_tree,
next_tree, base_tree, &result);
+ if (is_rebase_i(opts) && clean <= 0)
+ fputs(o.obuf.buf, stdout);
strbuf_release(&o.obuf);
if (clean < 0)
return clean;
if (active_cache_changed &&
write_locked_index(&the_index, &index_lock, COMMIT_LOCK))
- /* TRANSLATORS: %s will be "revert" or "cherry-pick" */
+ /* TRANSLATORS: %s will be "revert", "cherry-pick" or
+ * "rebase -i".
+ */
return error(_("%s: Unable to write new index file"),
_(action_name(opts)));
rollback_lock_file(&index_lock);
return !hashcmp(active_cache_tree->sha1, head_commit->tree->object.oid.hash);
}
+static int write_author_script(const char *message)
+{
+ struct strbuf buf = STRBUF_INIT;
+ const char *eol;
+ int res;
+
+ for (;;)
+ if (!*message || starts_with(message, "\n")) {
+missing_author:
+ /* Missing 'author' line? */
+ unlink(rebase_path_author_script());
+ return 0;
+ } else if (skip_prefix(message, "author ", &message))
+ break;
+ else if ((eol = strchr(message, '\n')))
+ message = eol + 1;
+ else
+ goto missing_author;
+
+ strbuf_addstr(&buf, "GIT_AUTHOR_NAME='");
+ while (*message && *message != '\n' && *message != '\r')
+ if (skip_prefix(message, " <", &message))
+ break;
+ else if (*message != '\'')
+ strbuf_addch(&buf, *(message++));
+ else
+ strbuf_addf(&buf, "'\\\\%c'", *(message++));
+ strbuf_addstr(&buf, "'\nGIT_AUTHOR_EMAIL='");
+ while (*message && *message != '\n' && *message != '\r')
+ if (skip_prefix(message, "> ", &message))
+ break;
+ else if (*message != '\'')
+ strbuf_addch(&buf, *(message++));
+ else
+ strbuf_addf(&buf, "'\\\\%c'", *(message++));
+ strbuf_addstr(&buf, "'\nGIT_AUTHOR_DATE='@");
+ while (*message && *message != '\n' && *message != '\r')
+ if (*message != '\'')
+ strbuf_addch(&buf, *(message++));
+ else
+ strbuf_addf(&buf, "'\\\\%c'", *(message++));
+ res = write_message(buf.buf, buf.len, rebase_path_author_script(), 1);
+ strbuf_release(&buf);
+ return res;
+}
+
/*
- * Read the author-script file into an environment block, ready for use in
- * run_command(), that can be free()d afterwards.
+ * Read a list of environment variable assignments (such as the author-script
+ * file) into an environment block. Returns -1 on error, 0 otherwise.
*/
-static char **read_author_script(void)
+static int read_env_script(struct argv_array *env)
{
struct strbuf script = STRBUF_INIT;
int i, count = 0;
- char *p, *p2, **env;
- size_t env_size;
+ char *p, *p2;
if (strbuf_read_file(&script, rebase_path_author_script(), 256) <= 0)
- return NULL;
+ return -1;
for (p = script.buf; *p; p++)
if (skip_prefix(p, "'\\\\''", (const char **)&p2))
count++;
}
- env_size = (count + 1) * sizeof(*env);
- strbuf_grow(&script, env_size);
- memmove(script.buf + env_size, script.buf, script.len);
- p = script.buf + env_size;
- env = (char **)strbuf_detach(&script, NULL);
-
- for (i = 0; i < count; i++) {
- env[i] = p;
+ for (i = 0, p = script.buf; i < count; i++) {
+ argv_array_push(env, p);
p += strlen(p) + 1;
}
- env[count] = NULL;
- return env;
+ return 0;
}
static const char staged_changes_advice[] =
int allow_empty, int edit, int amend,
int cleanup_commit_message)
{
- char **env = NULL;
- struct argv_array array;
- int rc;
+ struct child_process cmd = CHILD_PROCESS_INIT;
const char *value;
+ cmd.git_cmd = 1;
+
if (is_rebase_i(opts)) {
- env = read_author_script();
- if (!env) {
+ if (!edit) {
+ cmd.stdout_to_stderr = 1;
+ cmd.err = -1;
+ }
+
+ if (read_env_script(&cmd.env_array)) {
const char *gpg_opt = gpg_sign_opt_quoted(opts);
return error(_(staged_changes_advice),
}
}
- argv_array_init(&array);
- argv_array_push(&array, "commit");
- argv_array_push(&array, "-n");
+ argv_array_push(&cmd.args, "commit");
+ argv_array_push(&cmd.args, "-n");
if (amend)
- argv_array_push(&array, "--amend");
+ argv_array_push(&cmd.args, "--amend");
if (opts->gpg_sign)
- argv_array_pushf(&array, "-S%s", opts->gpg_sign);
+ argv_array_pushf(&cmd.args, "-S%s", opts->gpg_sign);
if (opts->signoff)
- argv_array_push(&array, "-s");
+ argv_array_push(&cmd.args, "-s");
if (defmsg)
- argv_array_pushl(&array, "-F", defmsg, NULL);
+ argv_array_pushl(&cmd.args, "-F", defmsg, NULL);
if (cleanup_commit_message)
- argv_array_push(&array, "--cleanup=strip");
+ argv_array_push(&cmd.args, "--cleanup=strip");
if (edit)
- argv_array_push(&array, "-e");
+ argv_array_push(&cmd.args, "-e");
else if (!cleanup_commit_message &&
!opts->signoff && !opts->record_origin &&
git_config_get_value("commit.cleanup", &value))
- argv_array_push(&array, "--cleanup=verbatim");
+ argv_array_push(&cmd.args, "--cleanup=verbatim");
if (allow_empty)
- argv_array_push(&array, "--allow-empty");
+ argv_array_push(&cmd.args, "--allow-empty");
if (opts->allow_empty_message)
- argv_array_push(&array, "--allow-empty-message");
+ argv_array_push(&cmd.args, "--allow-empty-message");
- rc = run_command_v_opt_cd_env(array.argv, RUN_GIT_CMD, NULL,
- (const char *const *)env);
- argv_array_clear(&array);
- free(env);
+ if (cmd.err == -1) {
+ /* hide stderr on success */
+ struct strbuf buf = STRBUF_INIT;
+ int rc = pipe_command(&cmd,
+ NULL, 0,
+ /* stdout is already redirected */
+ NULL, 0,
+ &buf, 0);
+ if (rc)
+ fputs(buf.buf, stderr);
+ strbuf_release(&buf);
+ return rc;
+ }
- return rc;
+ return run_command(&cmd);
}
static int is_original_commit_empty(struct commit *commit)
return 1;
}
+/*
+ * Note that ordering matters in this enum. Not only must it match the mapping
+ * below, it is also divided into several sections that matter. When adding
+ * new commands, make sure you add it in the right section.
+ */
enum todo_command {
+ /* commands that handle commits */
TODO_PICK = 0,
- TODO_REVERT
+ TODO_REVERT,
+ TODO_EDIT,
+ TODO_REWORD,
+ TODO_FIXUP,
+ TODO_SQUASH,
+ /* commands that do something other than handling a single commit */
+ TODO_EXEC,
+ /* commands that do nothing but are counted for reporting progress */
+ TODO_NOOP,
+ TODO_DROP,
+ /* comments (not counted for reporting progress) */
+ TODO_COMMENT
};
-static const char *todo_command_strings[] = {
- "pick",
- "revert"
+static struct {
+ char c;
+ const char *str;
+} todo_command_info[] = {
+ { 'p', "pick" },
+ { 0, "revert" },
+ { 'e', "edit" },
+ { 'r', "reword" },
+ { 'f', "fixup" },
+ { 's', "squash" },
+ { 'x', "exec" },
+ { 0, "noop" },
+ { 'd', "drop" },
+ { 0, NULL }
};
static const char *command_to_string(const enum todo_command command)
{
- if ((size_t)command < ARRAY_SIZE(todo_command_strings))
- return todo_command_strings[command];
+ if (command < TODO_COMMENT)
+ return todo_command_info[command].str;
die("Unknown command: %d", command);
}
+static int is_noop(const enum todo_command command)
+{
+ return TODO_NOOP <= command;
+}
+
+static int is_fixup(enum todo_command command)
+{
+ return command == TODO_FIXUP || command == TODO_SQUASH;
+}
+
+static int update_squash_messages(enum todo_command command,
+ struct commit *commit, struct replay_opts *opts)
+{
+ struct strbuf buf = STRBUF_INIT;
+ int count, res;
+ const char *message, *body;
+
+ if (file_exists(rebase_path_squash_msg())) {
+ struct strbuf header = STRBUF_INIT;
+ char *eol, *p;
+
+ if (strbuf_read_file(&buf, rebase_path_squash_msg(), 2048) <= 0)
+ return error(_("could not read '%s'"),
+ rebase_path_squash_msg());
+
+ p = buf.buf + 1;
+ eol = strchrnul(buf.buf, '\n');
+ if (buf.buf[0] != comment_line_char ||
+ (p += strcspn(p, "0123456789\n")) == eol)
+ return error(_("unexpected 1st line of squash message:"
+ "\n\n\t%.*s"),
+ (int)(eol - buf.buf), buf.buf);
+ count = strtol(p, NULL, 10);
+
+ if (count < 1)
+ return error(_("invalid 1st line of squash message:\n"
+ "\n\t%.*s"),
+ (int)(eol - buf.buf), buf.buf);
+
+ strbuf_addf(&header, "%c ", comment_line_char);
+ strbuf_addf(&header,
+ _("This is a combination of %d commits."), ++count);
+ strbuf_splice(&buf, 0, eol - buf.buf, header.buf, header.len);
+ strbuf_release(&header);
+ } else {
+ unsigned char head[20];
+ struct commit *head_commit;
+ const char *head_message, *body;
+
+ if (get_sha1("HEAD", head))
+ return error(_("need a HEAD to fixup"));
+ if (!(head_commit = lookup_commit_reference(head)))
+ return error(_("could not read HEAD"));
+ if (!(head_message = get_commit_buffer(head_commit, NULL)))
+ return error(_("could not read HEAD's commit message"));
+
+ find_commit_subject(head_message, &body);
+ if (write_message(body, strlen(body),
+ rebase_path_fixup_msg(), 0)) {
+ unuse_commit_buffer(head_commit, head_message);
+ return error(_("cannot write '%s'"),
+ rebase_path_fixup_msg());
+ }
+
+ count = 2;
+ strbuf_addf(&buf, "%c ", comment_line_char);
+ strbuf_addf(&buf, _("This is a combination of %d commits."),
+ count);
+ strbuf_addf(&buf, "\n%c ", comment_line_char);
+ strbuf_addstr(&buf, _("This is the 1st commit message:"));
+ strbuf_addstr(&buf, "\n\n");
+ strbuf_addstr(&buf, body);
+
+ unuse_commit_buffer(head_commit, head_message);
+ }
+
+ if (!(message = get_commit_buffer(commit, NULL)))
+ return error(_("could not read commit message of %s"),
+ oid_to_hex(&commit->object.oid));
+ find_commit_subject(message, &body);
+
+ if (command == TODO_SQUASH) {
+ unlink(rebase_path_fixup_msg());
+ strbuf_addf(&buf, "\n%c ", comment_line_char);
+ strbuf_addf(&buf, _("This is the commit message #%d:"), count);
+ strbuf_addstr(&buf, "\n\n");
+ strbuf_addstr(&buf, body);
+ } else if (command == TODO_FIXUP) {
+ strbuf_addf(&buf, "\n%c ", comment_line_char);
+ strbuf_addf(&buf, _("The commit message #%d will be skipped:"),
+ count);
+ strbuf_addstr(&buf, "\n\n");
+ strbuf_add_commented_lines(&buf, body, strlen(body));
+ } else
+ return error(_("unknown command: %d"), command);
+ unuse_commit_buffer(commit, message);
+
+ res = write_message(buf.buf, buf.len, rebase_path_squash_msg(), 0);
+ strbuf_release(&buf);
+ return res;
+}
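
For a series of one "pick" followed by one "squash", the message-squash file built above would look roughly like this (the subjects are placeholders for illustration):

	# This is a combination of 2 commits.
	# This is the 1st commit message:

	Add frotz

	# This is the commit message #2:

	Teach frotz about nitfol
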
+
+static void flush_rewritten_pending(void) {
+ struct strbuf buf = STRBUF_INIT;
+ unsigned char newsha1[20];
+ FILE *out;
+
+ if (strbuf_read_file(&buf, rebase_path_rewritten_pending(), 82) > 0 &&
+ !get_sha1("HEAD", newsha1) &&
+ (out = fopen(rebase_path_rewritten_list(), "a"))) {
+ char *bol = buf.buf, *eol;
+
+ while (*bol) {
+ eol = strchrnul(bol, '\n');
+ fprintf(out, "%.*s %s\n", (int)(eol - bol),
+ bol, sha1_to_hex(newsha1));
+ if (!*eol)
+ break;
+ bol = eol + 1;
+ }
+ fclose(out);
+ unlink(rebase_path_rewritten_pending());
+ }
+ strbuf_release(&buf);
+}
+
+static void record_in_rewritten(struct object_id *oid,
+ enum todo_command next_command) {
+ FILE *out = fopen(rebase_path_rewritten_pending(), "a");
+
+ if (!out)
+ return;
+
+ fprintf(out, "%s\n", oid_to_hex(oid));
+ fclose(out);
+
+ if (!is_fixup(next_command))
+ flush_rewritten_pending();
+}
static int do_pick_commit(enum todo_command command, struct commit *commit,
- struct replay_opts *opts)
+ struct replay_opts *opts, int final_fixup)
{
+ int edit = opts->edit, cleanup_commit_message = 0;
+ const char *msg_file = edit ? NULL : git_path_merge_msg();
unsigned char head[20];
struct commit *base, *next, *parent;
const char *base_label, *next_label;
struct commit_message msg = { NULL, NULL, NULL, NULL };
struct strbuf msgbuf = STRBUF_INIT;
- int res, unborn = 0, allow;
+ int res, unborn = 0, amend = 0, allow = 0;
if (opts->no_commit) {
/*
}
discard_cache();
- if (!commit->parents) {
+ if (!commit->parents)
parent = NULL;
- }
else if (commit->parents->next) {
/* Reverting or cherry-picking a merge commit */
int cnt;
else
parent = commit->parents->item;
- if (opts->allow_ff &&
- ((parent && !hashcmp(parent->object.oid.hash, head)) ||
- (!parent && unborn)))
- return fast_forward_to(commit->object.oid.hash, head, unborn, opts);
+ if (get_message(commit, &msg) != 0)
+ return error(_("cannot get commit message for %s"),
+ oid_to_hex(&commit->object.oid));
+ if (opts->allow_ff && !is_fixup(command) &&
+ ((parent && !hashcmp(parent->object.oid.hash, head)) ||
+ (!parent && unborn))) {
+ if (is_rebase_i(opts))
+ write_author_script(msg.message);
+ res = fast_forward_to(commit->object.oid.hash, head, unborn,
+ opts);
+ if (res || command != TODO_REWORD)
+ goto leave;
+ edit = amend = 1;
+ msg_file = NULL;
+ goto fast_forward_edit;
+ }
if (parent && parse_commit(parent) < 0)
/* TRANSLATORS: The first %s will be a "todo" command like
"revert" or "pick", the second %s a SHA1. */
command_to_string(command),
oid_to_hex(&parent->object.oid));
- if (get_message(commit, &msg) != 0)
- return error(_("cannot get commit message for %s"),
- oid_to_hex(&commit->object.oid));
-
/*
* "commit" is an existing commit. We would want to apply
* the difference it introduces since its first parent "prev"
next = commit;
next_label = msg.label;
- /*
- * Append the commit log message to msgbuf; it starts
- * after the tree, parent, author, committer
- * information followed by "\n\n".
- */
- p = strstr(msg.message, "\n\n");
- if (p)
- strbuf_addstr(&msgbuf, skip_blank_lines(p + 2));
+ /* Append the commit log message to msgbuf. */
+ if (find_commit_subject(msg.message, &p))
+ strbuf_addstr(&msgbuf, p);
if (opts->record_origin) {
if (!has_conforming_footer(&msgbuf, NULL, 0))
}
}
- if (!opts->strategy || !strcmp(opts->strategy, "recursive") || command == TODO_REVERT) {
+ if (command == TODO_REWORD)
+ edit = 1;
+ else if (is_fixup(command)) {
+ if (update_squash_messages(command, commit, opts))
+ return -1;
+ amend = 1;
+ if (!final_fixup)
+ msg_file = rebase_path_squash_msg();
+ else if (file_exists(rebase_path_fixup_msg())) {
+ cleanup_commit_message = 1;
+ msg_file = rebase_path_fixup_msg();
+ } else {
+ const char *dest = git_path("SQUASH_MSG");
+ unlink(dest);
+ if (copy_file(dest, rebase_path_squash_msg(), 0666))
+ return error(_("could not rename '%s' to '%s'"),
+ rebase_path_squash_msg(), dest);
+ unlink(git_path("MERGE_MSG"));
+ msg_file = dest;
+ edit = 1;
+ }
+ }
+
+ if (is_rebase_i(opts) && write_author_script(msg.message) < 0)
+ res = -1;
+ else if (!opts->strategy || !strcmp(opts->strategy, "recursive") || command == TODO_REVERT) {
res = do_recursive_merge(base, next, base_label, next_label,
head, &msgbuf, opts);
if (res < 0)
goto leave;
}
if (!opts->no_commit)
- res = run_git_commit(opts->edit ? NULL : git_path_merge_msg(),
- opts, allow, opts->edit, 0, 0);
+fast_forward_edit:
+ res = run_git_commit(msg_file, opts, allow, edit, amend,
+ cleanup_commit_message);
+
+ if (!res && final_fixup) {
+ unlink(rebase_path_fixup_msg());
+ unlink(rebase_path_squash_msg());
+ }
leave:
free_message(commit, &msg);
struct strbuf buf;
struct todo_item *items;
int nr, alloc, current;
+ int done_nr, total_nr;
};
#define TODO_LIST_INIT { STRBUF_INIT }
/* left-trim */
bol += strspn(bol, " \t");
- for (i = 0; i < ARRAY_SIZE(todo_command_strings); i++)
- if (skip_prefix(bol, todo_command_strings[i], &bol)) {
+ if (bol == eol || *bol == '\r' || *bol == comment_line_char) {
+ item->command = TODO_COMMENT;
+ item->commit = NULL;
+ item->arg = bol;
+ item->arg_len = eol - bol;
+ return 0;
+ }
+
+ for (i = 0; i < TODO_COMMENT; i++)
+ if (skip_prefix(bol, todo_command_info[i].str, &bol)) {
+ item->command = i;
+ break;
+ } else if (bol[1] == ' ' && *bol == todo_command_info[i].c) {
+ bol++;
item->command = i;
break;
}
- if (i >= ARRAY_SIZE(todo_command_strings))
+ if (i >= TODO_COMMENT)
return -1;
+ if (item->command == TODO_NOOP) {
+ item->commit = NULL;
+ item->arg = bol;
+ item->arg_len = eol - bol;
+ return 0;
+ }
+
/* Eat up extra spaces/ tabs before object name */
padding = strspn(bol, " \t");
if (!padding)
return -1;
bol += padding;
+ if (item->command == TODO_EXEC) {
+ item->arg = bol;
+ item->arg_len = (int)(eol - bol);
+ return 0;
+ }
+
end_of_object_name = (char *) bol + strcspn(bol, " \t\n");
saved = *end_of_object_name;
*end_of_object_name = '\0';
{
struct todo_item *item;
char *p = buf, *next_p;
- int i, res = 0;
+ int i, res = 0, fixup_okay = file_exists(rebase_path_done());
for (i = 1; *p; i++, p = next_p) {
char *eol = strchrnul(p, '\n');
if (parse_insn_line(item, p, eol)) {
res = error(_("invalid line %d: %.*s"),
i, (int)(eol - p), p);
- item->command = -1;
+ item->command = TODO_NOOP;
}
+
+ if (fixup_okay)
+ ; /* do nothing */
+ else if (is_fixup(item->command))
+ return error(_("cannot '%s' without a previous commit"),
+ command_to_string(item->command));
+ else if (!is_noop(item->command))
+ fixup_okay = 1;
}
- if (!todo_list->nr)
- return error(_("no commits parsed."));
+
return res;
}
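
With the additions above, an instruction sheet may mix full command names, their single-letter forms, "exec" lines, comments, and "noop"; a "fixup" or "squash" without a previous commit is rejected. For illustration (object names are placeholders), the parser accepts a sheet such as:

	pick deadbee Describe the change
	r beefcafe Reword the message of this one
	f fa1afe1 fixup! Describe the change
	x make test
	# comment lines like this one are allowed
	noop
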
+static int count_commands(struct todo_list *todo_list)
+{
+ int count = 0, i;
+
+ for (i = 0; i < todo_list->nr; i++)
+ if (todo_list->items[i].command != TODO_COMMENT)
+ count++;
+
+ return count;
+}
+
static int read_populate_todo(struct todo_list *todo_list,
struct replay_opts *opts)
{
close(fd);
res = parse_insn_buffer(todo_list->buf.buf, todo_list);
- if (res)
+ if (res) {
+ if (is_rebase_i(opts))
+ return error(_("please fix this using "
+ "'git rebase --edit-todo'."));
return error(_("unusable instruction sheet: '%s'"), todo_file);
+ }
+
+ if (!todo_list->nr &&
+ (!is_rebase_i(opts) || !file_exists(rebase_path_done())))
+ return error(_("no commits parsed."));
if (!is_rebase_i(opts)) {
enum todo_command valid =
return error(_("cannot revert during a cherry-pick."));
}
+ if (is_rebase_i(opts)) {
+ struct todo_list done = TODO_LIST_INIT;
+ FILE *f = fopen(rebase_path_msgtotal(), "w");
+
+ if (strbuf_read_file(&done.buf, rebase_path_done(), 0) > 0 &&
+ !parse_insn_buffer(done.buf.buf, &done))
+ todo_list->done_nr = count_commands(&done);
+ else
+ todo_list->done_nr = 0;
+
+ todo_list->total_nr = todo_list->done_nr
+ + count_commands(todo_list);
+ todo_list_release(&done);
+
+ if (f) {
+ fprintf(f, "%d\n", todo_list->total_nr);
+ fclose(f);
+ }
+ }
+
return 0;
}
return 0;
}
+static void read_strategy_opts(struct replay_opts *opts, struct strbuf *buf)
+{
+ int i;
+
+ strbuf_reset(buf);
+ if (!read_oneliner(buf, rebase_path_strategy(), 0))
+ return;
+ opts->strategy = strbuf_detach(buf, NULL);
+ if (!read_oneliner(buf, rebase_path_strategy_opts(), 0))
+ return;
+
+ opts->xopts_nr = split_cmdline(buf->buf, (const char ***)&opts->xopts);
+ for (i = 0; i < opts->xopts_nr; i++) {
+ const char *arg = opts->xopts[i];
+
+ skip_prefix(arg, "--", &arg);
+ opts->xopts[i] = xstrdup(arg);
+ }
+}
+
static int read_populate_opts(struct replay_opts *opts)
{
if (is_rebase_i(opts)) {
opts->gpg_sign = xstrdup(buf.buf + 2);
}
}
+
+ if (file_exists(rebase_path_verbose()))
+ opts->verbose = 1;
+
+ read_strategy_opts(opts, &buf);
strbuf_release(&buf);
return 0;
{
enum todo_command command = opts->action == REPLAY_PICK ?
TODO_PICK : TODO_REVERT;
- const char *command_string = todo_command_strings[command];
+ const char *command_string = todo_command_info[command].str;
struct commit *commit;
if (prepare_revs(opts))
error(_("a cherry-pick or revert is already in progress"));
advise(_("try \"git cherry-pick (--continue | --quit | --abort)\""));
return -1;
- }
- else if (mkdir(git_path_seq_dir(), 0777) < 0)
+ } else if (mkdir(git_path_seq_dir(), 0777) < 0)
return error_errno(_("could not create sequencer directory '%s'"),
git_path_seq_dir());
return 0;
const char *todo_path = get_todo_path(opts);
int next = todo_list->current, offset, fd;
+ /*
+ * rebase -i writes "git-rebase-todo" without the currently executing
+ * command, appending it to "done" instead.
+ */
+ if (is_rebase_i(opts))
+ next++;
+
fd = hold_lock_file_for_update(&todo_lock, todo_path, 0);
if (fd < 0)
return error_errno(_("could not lock '%s'"), todo_path);
return error_errno(_("could not write to '%s'"), todo_path);
if (commit_lock_file(&todo_lock) < 0)
return error(_("failed to finalize '%s'."), todo_path);
+
+ if (is_rebase_i(opts)) {
+ const char *done_path = rebase_path_done();
+ int fd = open(done_path, O_CREAT | O_WRONLY | O_APPEND, 0666);
+ int prev_offset = !next ? 0 :
+ todo_list->items[next - 1].offset_in_buf;
+
+ if (fd >= 0 && offset > prev_offset &&
+ write_in_full(fd, todo_list->buf.buf + prev_offset,
+ offset - prev_offset) < 0) {
+ close(fd);
+ return error_errno(_("could not write to '%s'"),
+ done_path);
+ }
+ if (fd >= 0)
+ close(fd);
+ }
return 0;
}
return res;
}
+static int make_patch(struct commit *commit, struct replay_opts *opts)
+{
+ struct strbuf buf = STRBUF_INIT;
+ struct rev_info log_tree_opt;
+ const char *subject, *p;
+ int res = 0;
+
+ p = short_commit_name(commit);
+ if (write_message(p, strlen(p), rebase_path_stopped_sha(), 1) < 0)
+ return -1;
+
+ strbuf_addf(&buf, "%s/patch", get_dir(opts));
+ memset(&log_tree_opt, 0, sizeof(log_tree_opt));
+ init_revisions(&log_tree_opt, NULL);
+ log_tree_opt.abbrev = 0;
+ log_tree_opt.diff = 1;
+ log_tree_opt.diffopt.output_format = DIFF_FORMAT_PATCH;
+ log_tree_opt.disable_stdin = 1;
+ log_tree_opt.no_commit_id = 1;
+ log_tree_opt.diffopt.file = fopen(buf.buf, "w");
+ log_tree_opt.diffopt.use_color = GIT_COLOR_NEVER;
+ if (!log_tree_opt.diffopt.file)
+ res |= error_errno(_("could not open '%s'"), buf.buf);
+ else {
+ res |= log_tree_commit(&log_tree_opt, commit);
+ fclose(log_tree_opt.diffopt.file);
+ }
+ strbuf_reset(&buf);
+
+ strbuf_addf(&buf, "%s/message", get_dir(opts));
+ if (!file_exists(buf.buf)) {
+ const char *commit_buffer = get_commit_buffer(commit, NULL);
+ find_commit_subject(commit_buffer, &subject);
+ res |= write_message(subject, strlen(subject), buf.buf, 1);
+ unuse_commit_buffer(commit, commit_buffer);
+ }
+ strbuf_release(&buf);
+
+ return res;
+}
+
+static int intend_to_amend(void)
+{
+ unsigned char head[20];
+ char *p;
+
+ if (get_sha1("HEAD", head))
+ return error(_("cannot read HEAD"));
+
+ p = sha1_to_hex(head);
+ return write_message(p, strlen(p), rebase_path_amend(), 1);
+}
+
+static int error_with_patch(struct commit *commit,
+ const char *subject, int subject_len,
+ struct replay_opts *opts, int exit_code, int to_amend)
+{
+ if (make_patch(commit, opts))
+ return -1;
+
+ if (to_amend) {
+ if (intend_to_amend())
+ return -1;
+
+ fprintf(stderr, "You can amend the commit now, with\n"
+ "\n"
+ " git commit --amend %s\n"
+ "\n"
+ "Once you are satisfied with your changes, run\n"
+ "\n"
+ " git rebase --continue\n", gpg_sign_opt_quoted(opts));
+ } else if (exit_code)
+ fprintf(stderr, "Could not apply %s... %.*s\n",
+ short_commit_name(commit), subject_len, subject);
+
+ return exit_code;
+}
+
+static int error_failed_squash(struct commit *commit,
+ struct replay_opts *opts, int subject_len, const char *subject)
+{
+ if (rename(rebase_path_squash_msg(), rebase_path_message()))
+ return error(_("could not rename '%s' to '%s'"),
+ rebase_path_squash_msg(), rebase_path_message());
+ unlink(rebase_path_fixup_msg());
+ unlink(git_path("MERGE_MSG"));
+ if (copy_file(git_path("MERGE_MSG"), rebase_path_message(), 0666))
+ return error(_("could not copy '%s' to '%s'"),
+ rebase_path_message(), git_path("MERGE_MSG"));
+ return error_with_patch(commit, subject, subject_len, opts, 1, 0);
+}
+
+static int do_exec(const char *command_line)
+{
+ const char *child_argv[] = { NULL, NULL };
+ int dirty, status;
+
+ fprintf(stderr, "Executing: %s\n", command_line);
+ child_argv[0] = command_line;
+ status = run_command_v_opt(child_argv, RUN_USING_SHELL);
+
+ /* force re-reading of the cache */
+ if (discard_cache() < 0 || read_cache() < 0)
+ return error(_("could not read index"));
+
+ dirty = require_clean_work_tree("rebase", NULL, 1, 1);
+
+ if (status) {
+ warning(_("execution failed: %s\n%s"
+ "You can fix the problem, and then run\n"
+ "\n"
+ " git rebase --continue\n"
+ "\n"),
+ command_line,
+ dirty ? N_("and made changes to the index and/or the "
+ "working tree\n") : "");
+ if (status == 127)
+ /* command not found */
+ status = 1;
+ } else if (dirty) {
+ warning(_("execution succeeded: %s\nbut "
+ "left changes to the index and/or the working tree\n"
+ "Commit or stash your changes, and then run\n"
+ "\n"
+ " git rebase --continue\n"
+ "\n"), command_line);
+ status = 1;
+ }
+
+ return status;
+}
+
+static int is_final_fixup(struct todo_list *todo_list)
+{
+ int i = todo_list->current;
+
+ if (!is_fixup(todo_list->items[i].command))
+ return 0;
+
+ while (++i < todo_list->nr)
+ if (is_fixup(todo_list->items[i].command))
+ return 0;
+ else if (!is_noop(todo_list->items[i].command))
+ break;
+ return 1;
+}
+
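A concrete walk-through of the scan above (illustration only, not part of the patch):

    /*
     * Remaining todo list           is_final_fixup() at that entry
     *
     *   fixup  1234567 A            0  (another fixup still follows)
     *   fixup  89abcde B            1  (next non-noop entry is a pick)
     *   pick   fedcba9 C            0  (not a fixup at all)
     */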
+static enum todo_command peek_command(struct todo_list *todo_list, int offset)
+{
+ int i;
+
+ for (i = todo_list->current + offset; i < todo_list->nr; i++)
+ if (!is_noop(todo_list->items[i].command))
+ return todo_list->items[i].command;
+
+ return -1;
+}
+
+static int apply_autostash(struct replay_opts *opts)
+{
+ struct strbuf stash_sha1 = STRBUF_INIT;
+ struct child_process child = CHILD_PROCESS_INIT;
+ int ret = 0;
+
+ if (!read_oneliner(&stash_sha1, rebase_path_autostash(), 1)) {
+ strbuf_release(&stash_sha1);
+ return 0;
+ }
+ strbuf_trim(&stash_sha1);
+
+ child.git_cmd = 1;
+ argv_array_push(&child.args, "stash");
+ argv_array_push(&child.args, "apply");
+ argv_array_push(&child.args, stash_sha1.buf);
+ if (!run_command(&child))
+ printf(_("Applied autostash."));
+ else {
+ struct child_process store = CHILD_PROCESS_INIT;
+
+ store.git_cmd = 1;
+ argv_array_push(&store.args, "stash");
+ argv_array_push(&store.args, "store");
+ argv_array_push(&store.args, "-m");
+ argv_array_push(&store.args, "autostash");
+ argv_array_push(&store.args, "-q");
+ argv_array_push(&store.args, stash_sha1.buf);
+ if (run_command(&store))
+ ret = error(_("cannot store %s"), stash_sha1.buf);
+ else
+ printf(_("Applying autostash resulted in conflicts.\n"
+ "Your changes are safe in the stash.\n"
+ "You can run \"git stash pop\" or"
+ " \"git stash drop\" at any time.\n"));
+ }
+
+ strbuf_release(&stash_sha1);
+ return ret;
+}
+
+static const char *reflog_message(struct replay_opts *opts,
+ const char *sub_action, const char *fmt, ...)
+{
+ va_list ap;
+ static struct strbuf buf = STRBUF_INIT;
+
+ va_start(ap, fmt);
+ strbuf_reset(&buf);
+ strbuf_addstr(&buf, action_name(opts));
+ if (sub_action)
+ strbuf_addf(&buf, " (%s)", sub_action);
+ if (fmt) {
+ strbuf_addstr(&buf, ": ");
+ strbuf_vaddf(&buf, fmt, ap);
+ }
+ va_end(ap);
+
+ return buf.buf;
+}
+
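For illustration (a sketch, not part of the patch): reflog_message() composes entries of the form "<action> (<sub-action>): <details>", so picking a commit during an interactive rebase records something like "rebase -i (pick)", and the finish-up code further down uses the same helper for messages like "rebase -i (finish): refs/heads/topic onto <onto>". A minimal stand-alone version of the same formatting, using plain snprintf() instead of the static strbuf the real helper relies on:

    #include <stdio.h>

    /*
     * Illustrative stand-in for reflog_message(); the real helper formats
     * the details with a printf-style va_list into a static strbuf.
     */
    static void fmt_reflog_message(char *out, size_t len, const char *action,
                                   const char *sub_action, const char *details)
    {
            if (sub_action && details)
                    snprintf(out, len, "%s (%s): %s", action, sub_action, details);
            else if (sub_action)
                    snprintf(out, len, "%s (%s)", action, sub_action);
            else if (details)
                    snprintf(out, len, "%s: %s", action, details);
            else
                    snprintf(out, len, "%s", action);
    }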
static int pick_commits(struct todo_list *todo_list, struct replay_opts *opts)
{
- int res;
+ int res = 0;
setenv(GIT_REFLOG_ACTION, action_name(opts), 0);
if (opts->allow_ff)
struct todo_item *item = todo_list->items + todo_list->current;
if (save_todo(todo_list, opts))
return -1;
- res = do_pick_commit(item->command, item->commit, opts);
+ if (is_rebase_i(opts)) {
+ if (item->command != TODO_COMMENT) {
+ FILE *f = fopen(rebase_path_msgnum(), "w");
+
+ todo_list->done_nr++;
+
+ if (f) {
+ fprintf(f, "%d\n", todo_list->done_nr);
+ fclose(f);
+ }
+ fprintf(stderr, "Rebasing (%d/%d)%s",
+ todo_list->done_nr,
+ todo_list->total_nr,
+ opts->verbose ? "\n" : "\r");
+ }
+ unlink(rebase_path_message());
+ unlink(rebase_path_author_script());
+ unlink(rebase_path_stopped_sha());
+ unlink(rebase_path_amend());
+ }
+ if (item->command <= TODO_SQUASH) {
+ if (is_rebase_i(opts))
+ setenv("GIT_REFLOG_ACTION", reflog_message(opts,
+ command_to_string(item->command), NULL),
+ 1);
+ res = do_pick_commit(item->command, item->commit,
+ opts, is_final_fixup(todo_list));
+ if (is_rebase_i(opts) && res < 0) {
+ /* Reschedule */
+ todo_list->current--;
+ if (save_todo(todo_list, opts))
+ return -1;
+ }
+ if (item->command == TODO_EDIT) {
+ struct commit *commit = item->commit;
+ if (!res)
+ warning(_("stopped at %s... %.*s"),
+ short_commit_name(commit),
+ item->arg_len, item->arg);
+ return error_with_patch(commit,
+ item->arg, item->arg_len, opts, res,
+ !res);
+ }
+ if (is_rebase_i(opts) && !res)
+ record_in_rewritten(&item->commit->object.oid,
+ peek_command(todo_list, 1));
+ if (res && is_fixup(item->command)) {
+ if (res == 1)
+ intend_to_amend();
+ return error_failed_squash(item->commit, opts,
+ item->arg_len, item->arg);
+ } else if (res && is_rebase_i(opts))
+ return res | error_with_patch(item->commit,
+ item->arg, item->arg_len, opts, res,
+ item->command == TODO_REWORD);
+ } else if (item->command == TODO_EXEC) {
+ char *end_of_arg = (char *)(item->arg + item->arg_len);
+ int saved = *end_of_arg;
+
+ *end_of_arg = '\0';
+ res = do_exec(item->arg);
+ *end_of_arg = saved;
+ } else if (!is_noop(item->command))
+ return error(_("unknown command %d"), item->command);
+
todo_list->current++;
if (res)
return res;
}
+ if (is_rebase_i(opts)) {
+ struct strbuf head_ref = STRBUF_INIT, buf = STRBUF_INIT;
+ struct stat st;
+
+ /* Stopped in the middle, as planned? */
+ if (todo_list->current < todo_list->nr)
+ return 0;
+
+ if (read_oneliner(&head_ref, rebase_path_head_name(), 0) &&
+ starts_with(head_ref.buf, "refs/")) {
+ const char *msg;
+ unsigned char head[20], orig[20];
+ int res;
+
+ if (get_sha1("HEAD", head)) {
+ res = error(_("cannot read HEAD"));
+cleanup_head_ref:
+ strbuf_release(&head_ref);
+ strbuf_release(&buf);
+ return res;
+ }
+ if (!read_oneliner(&buf, rebase_path_orig_head(), 0) ||
+ get_sha1_hex(buf.buf, orig)) {
+ res = error(_("could not read orig-head"));
+ goto cleanup_head_ref;
+ }
+ if (!read_oneliner(&buf, rebase_path_onto(), 0)) {
+ res = error(_("could not read 'onto'"));
+ goto cleanup_head_ref;
+ }
+ msg = reflog_message(opts, "finish", "%s onto %s",
+ head_ref.buf, buf.buf);
+ if (update_ref(msg, head_ref.buf, head, orig,
+ REF_NODEREF, UPDATE_REFS_MSG_ON_ERR)) {
+ res = error(_("could not update %s"),
+ head_ref.buf);
+ goto cleanup_head_ref;
+ }
+ msg = reflog_message(opts, "finish", "returning to %s",
+ head_ref.buf);
+ if (create_symref("HEAD", head_ref.buf, msg)) {
+ res = error(_("could not update HEAD to %s"),
+ head_ref.buf);
+ goto cleanup_head_ref;
+ }
+ strbuf_reset(&buf);
+ }
+
+ if (opts->verbose) {
+ struct rev_info log_tree_opt;
+ struct object_id orig, head;
+
+ memset(&log_tree_opt, 0, sizeof(log_tree_opt));
+ init_revisions(&log_tree_opt, NULL);
+ log_tree_opt.diff = 1;
+ log_tree_opt.diffopt.output_format =
+ DIFF_FORMAT_DIFFSTAT;
+ log_tree_opt.disable_stdin = 1;
+
+ if (read_oneliner(&buf, rebase_path_orig_head(), 0) &&
+ !get_sha1(buf.buf, orig.hash) &&
+ !get_sha1("HEAD", head.hash)) {
+ diff_tree_sha1(orig.hash, head.hash,
+ "", &log_tree_opt.diffopt);
+ log_tree_diff_flush(&log_tree_opt);
+ }
+ }
+ flush_rewritten_pending();
+ if (!stat(rebase_path_rewritten_list(), &st) &&
+ st.st_size > 0) {
+ struct child_process child = CHILD_PROCESS_INIT;
+ const char *post_rewrite_hook =
+ find_hook("post-rewrite");
+
+ child.in = open(rebase_path_rewritten_list(), O_RDONLY);
+ child.git_cmd = 1;
+ argv_array_push(&child.args, "notes");
+ argv_array_push(&child.args, "copy");
+ argv_array_push(&child.args, "--for-rewrite=rebase");
+ /* we don't care if this copying failed */
+ run_command(&child);
+
+ if (post_rewrite_hook) {
+ struct child_process hook = CHILD_PROCESS_INIT;
+
+ hook.in = open(rebase_path_rewritten_list(),
+ O_RDONLY);
+ hook.stdout_to_stderr = 1;
+ argv_array_push(&hook.args, post_rewrite_hook);
+ argv_array_push(&hook.args, "rebase");
+ /* we don't care if this hook failed */
+ run_command(&hook);
+ }
+ }
+ apply_autostash(opts);
+
+ fprintf(stderr, "Successfully rebased and updated %s.\n",
+ head_ref.buf);
+
+ strbuf_release(&buf);
+ strbuf_release(&head_ref);
+ }
+
/*
* Sequence of picks finished successfully; cleanup by
* removing the .git/sequencer directory
return run_command_v_opt(argv, RUN_GIT_CMD);
}
+static int commit_staged_changes(struct replay_opts *opts)
+{
+ int amend = 0;
+
+ if (has_unstaged_changes(1))
+ return error(_("cannot rebase: You have unstaged changes."));
+ if (!has_uncommitted_changes(0)) {
+ const char *cherry_pick_head = git_path("CHERRY_PICK_HEAD");
+
+ if (file_exists(cherry_pick_head) && unlink(cherry_pick_head))
+ return error(_("could not remove CHERRY_PICK_HEAD"));
+ return 0;
+ }
+
+ if (file_exists(rebase_path_amend())) {
+ struct strbuf rev = STRBUF_INIT;
+ unsigned char head[20], to_amend[20];
+
+ if (get_sha1("HEAD", head))
+ return error(_("cannot amend non-existing commit"));
+ if (!read_oneliner(&rev, rebase_path_amend(), 0))
+ return error(_("invalid file: '%s'"), rebase_path_amend());
+ if (get_sha1_hex(rev.buf, to_amend))
+ return error(_("invalid contents: '%s'"),
+ rebase_path_amend());
+ if (hashcmp(head, to_amend))
+ return error(_("\nYou have uncommitted changes in your "
+ "working tree. Please, commit them\n"
+ "first and then run 'git rebase "
+ "--continue' again."));
+
+ strbuf_release(&rev);
+ amend = 1;
+ }
+
+ if (run_git_commit(rebase_path_message(), opts, 1, 1, amend, 0))
+ return error(_("could not commit staged changes."));
+ unlink(rebase_path_amend());
+ return 0;
+}
+
int sequencer_continue(struct replay_opts *opts)
{
struct todo_list todo_list = TODO_LIST_INIT;
if (read_and_refresh_cache(opts))
return -1;
- if (!file_exists(get_todo_path(opts)))
+ if (is_rebase_i(opts)) {
+ if (commit_staged_changes(opts))
+ return -1;
+ } else if (!file_exists(get_todo_path(opts)))
return continue_single_pick();
if (read_populate_opts(opts))
return -1;
if ((res = read_populate_todo(&todo_list, opts)))
goto release_todo_list;
- /* Verify that the conflict has been resolved */
- if (file_exists(git_path_cherry_pick_head()) ||
- file_exists(git_path_revert_head())) {
- res = continue_single_pick();
- if (res)
+ if (!is_rebase_i(opts)) {
+ /* Verify that the conflict has been resolved */
+ if (file_exists(git_path_cherry_pick_head()) ||
+ file_exists(git_path_revert_head())) {
+ res = continue_single_pick();
+ if (res)
+ goto release_todo_list;
+ }
+ if (index_differs_from("HEAD", 0, 0)) {
+ res = error_dirty_index(opts);
goto release_todo_list;
+ }
+ todo_list.current++;
+ } else if (file_exists(rebase_path_stopped_sha())) {
+ struct strbuf buf = STRBUF_INIT;
+ struct object_id oid;
+
+ if (read_oneliner(&buf, rebase_path_stopped_sha(), 1) &&
+ !get_sha1_committish(buf.buf, oid.hash))
+ record_in_rewritten(&oid, peek_command(&todo_list, 0));
+ strbuf_release(&buf);
}
- if (index_differs_from("HEAD", 0, 0)) {
- res = error_dirty_index(opts);
- goto release_todo_list;
- }
- todo_list.current++;
+
res = pick_commits(&todo_list, opts);
release_todo_list:
todo_list_release(&todo_list);
{
setenv(GIT_REFLOG_ACTION, action_name(opts), 0);
return do_pick_commit(opts->action == REPLAY_PICK ?
- TODO_PICK : TODO_REVERT, cmit, opts);
+ TODO_PICK : TODO_REVERT, cmit, opts, 0);
}
int sequencer_pick_revisions(struct replay_opts *opts)
enum replay_action {
REPLAY_REVERT,
- REPLAY_PICK
+ REPLAY_PICK,
+ REPLAY_INTERACTIVE_REBASE
};
struct replay_opts {
int allow_empty;
int allow_empty_message;
int keep_redundant_commits;
+ int verbose;
int mainline;
strbuf_addbuf(&path, &data);
strbuf_addstr(sb, real_path(path.buf));
ret = 1;
- } else
+ } else {
strbuf_addstr(sb, gitdir);
+ }
+
strbuf_release(&data);
strbuf_release(&path);
return ret;
/* --work-tree is set without --git-dir; use discovered one */
if (getenv(GIT_WORK_TREE_ENVIRONMENT) || git_work_tree_cfg) {
if (offset != cwd->len && !is_absolute_path(gitdir))
- gitdir = xstrdup(real_path(gitdir));
+ gitdir = real_pathdup(gitdir);
if (chdir(cwd->buf))
die_errno("Could not come back to cwd");
return setup_explicit_git_dir(gitdir, cwd, nongit_ok);
/* Keep entry but do not canonicalize it */
return 1;
} else {
- const char *real_path = real_path_if_valid(ceil);
- if (!real_path)
+ char *real_path = real_pathdup(ceil);
+ if (!real_path) {
return 0;
+ }
free(item->string);
- item->string = xstrdup(real_path);
+ item->string = real_path;
return 1;
}
}
struct strbuf pathbuf = STRBUF_INIT;
if (!is_absolute_path(entry) && relative_base) {
- strbuf_addstr(&pathbuf, real_path(relative_base));
+ strbuf_realpath(&pathbuf, relative_base, 1);
strbuf_addch(&pathbuf, '/');
}
strbuf_addstr(&pathbuf, entry);
return fd;
}
-static int stat_sha1_file(const unsigned char *sha1, struct stat *st)
+/*
+ * Find "sha1" as a loose object in the local repository or in an alternate.
+ * Returns 0 on success, negative on failure.
+ *
+ * The "path" out-parameter will give the path of the object we found (if any).
+ * Note that it may point to static storage and is only valid until another
+ * call to sha1_file_name(), etc.
+ */
+static int stat_sha1_file(const unsigned char *sha1, struct stat *st,
+ const char **path)
{
struct alternate_object_database *alt;
- if (!lstat(sha1_file_name(sha1), st))
+ *path = sha1_file_name(sha1);
+ if (!lstat(*path, st))
return 0;
prepare_alt_odb();
errno = ENOENT;
for (alt = alt_odb_list; alt; alt = alt->next) {
- const char *path = alt_sha1_path(alt, sha1);
- if (!lstat(path, st))
+ *path = alt_sha1_path(alt, sha1);
+ if (!lstat(*path, st))
return 0;
}
return -1;
}
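Because the new "path" out-parameter may point into the static buffer described above, a hypothetical caller that wants to remember where the object lives has to copy the string before doing another lookup; a minimal sketch, assuming git's xstrdup() helper:

    /* Hypothetical caller: remember where a loose object was found. */
    static char *find_loose_object_path(const unsigned char *sha1)
    {
            struct stat st;
            const char *path;

            if (stat_sha1_file(sha1, &st, &path) < 0)
                    return NULL;
            /* "path" is only valid until the next lookup, so duplicate it. */
            return xstrdup(path);
    }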
-static int open_sha1_file(const unsigned char *sha1)
+/*
+ * Like stat_sha1_file(), but actually open the object and return the
+ * descriptor. See the caveats on the "path" parameter above.
+ */
+static int open_sha1_file(const unsigned char *sha1, const char **path)
{
int fd;
struct alternate_object_database *alt;
int most_interesting_errno;
- fd = git_open(sha1_file_name(sha1));
+ *path = sha1_file_name(sha1);
+ fd = git_open(*path);
if (fd >= 0)
return fd;
most_interesting_errno = errno;
prepare_alt_odb();
for (alt = alt_odb_list; alt; alt = alt->next) {
- const char *path = alt_sha1_path(alt, sha1);
- fd = git_open(path);
+ *path = alt_sha1_path(alt, sha1);
+ fd = git_open(*path);
if (fd >= 0)
return fd;
if (most_interesting_errno == ENOENT)
return -1;
}
-void *map_sha1_file(const unsigned char *sha1, unsigned long *size)
+/*
+ * Map the loose object at "path" if it is not NULL, or the path found by
+ * searching for a loose object named "sha1".
+ */
+static void *map_sha1_file_1(const char *path,
+ const unsigned char *sha1,
+ unsigned long *size)
{
void *map;
int fd;
- fd = open_sha1_file(sha1);
+ if (path)
+ fd = git_open(path);
+ else
+ fd = open_sha1_file(sha1, &path);
map = NULL;
if (fd >= 0) {
struct stat st;
*size = xsize_t(st.st_size);
if (!*size) {
/* mmap() is forbidden on empty files */
- error("object file %s is empty", sha1_file_name(sha1));
+ error("object file %s is empty", path);
return NULL;
}
map = xmmap(NULL, *size, PROT_READ, MAP_PRIVATE, fd, 0);
return map;
}
+void *map_sha1_file(const unsigned char *sha1, unsigned long *size)
+{
+ return map_sha1_file_1(NULL, sha1, size);
+}
+
unsigned long unpack_object_header_buffer(const unsigned char *buf,
unsigned long len, enum object_type *type, unsigned long *sizep)
{
void clear_delta_base_cache(void)
{
- struct hashmap_iter iter;
- struct delta_base_cache_entry *entry;
- for (entry = hashmap_iter_first(&delta_base_cache, &iter);
- entry;
- entry = hashmap_iter_next(&iter)) {
+ struct list_head *lru, *tmp;
+ list_for_each_safe(lru, tmp, &delta_base_cache_lru) {
+ struct delta_base_cache_entry *entry =
+ list_entry(lru, struct delta_base_cache_entry, lru);
release_delta_base_cache(entry);
}
}
* object even exists.
*/
if (!oi->typep && !oi->typename && !oi->sizep) {
+ const char *path;
struct stat st;
- if (stat_sha1_file(sha1, &st) < 0)
+ if (stat_sha1_file(sha1, &st, &path) < 0)
return -1;
if (oi->disk_sizep)
*oi->disk_sizep = st.st_size;
{
void *data;
const struct packed_git *p;
+ const char *path;
+ struct stat st;
const unsigned char *repl = lookup_replace_object_extended(sha1, flag);
errno = 0;
die("replacement %s not found for %s",
sha1_to_hex(repl), sha1_to_hex(sha1));
- if (has_loose_object(repl)) {
- const char *path = sha1_file_name(sha1);
-
+ if (!stat_sha1_file(repl, &st, &path))
die("loose object %s (stored in %s) is corrupt",
sha1_to_hex(repl), path);
- }
if ((p = has_packed_and_bad(repl)) != NULL)
die("packed object %s (stored in %s) is corrupt",
}
return r ? r : pack_errors;
}
+
+static int check_stream_sha1(git_zstream *stream,
+ const char *hdr,
+ unsigned long size,
+ const char *path,
+ const unsigned char *expected_sha1)
+{
+ git_SHA_CTX c;
+ unsigned char real_sha1[GIT_SHA1_RAWSZ];
+ unsigned char buf[4096];
+ unsigned long total_read;
+ int status = Z_OK;
+
+ git_SHA1_Init(&c);
+ git_SHA1_Update(&c, hdr, stream->total_out);
+
+ /*
+ * We already read some bytes into hdr, but the ones up to the NUL
+ * do not count against the object's content size.
+ */
+ total_read = stream->total_out - strlen(hdr) - 1;
+
+ /*
+ * This size comparison must be "<=" to read the final zlib packets;
+ * see the comment in unpack_sha1_rest for details.
+ */
+ while (total_read <= size &&
+ (status == Z_OK || status == Z_BUF_ERROR)) {
+ stream->next_out = buf;
+ stream->avail_out = sizeof(buf);
+ if (size - total_read < stream->avail_out)
+ stream->avail_out = size - total_read;
+ status = git_inflate(stream, Z_FINISH);
+ git_SHA1_Update(&c, buf, stream->next_out - buf);
+ total_read += stream->next_out - buf;
+ }
+ git_inflate_end(stream);
+
+ if (status != Z_STREAM_END) {
+ error("corrupt loose object '%s'", sha1_to_hex(expected_sha1));
+ return -1;
+ }
+ if (stream->avail_in) {
+ error("garbage at end of loose object '%s'",
+ sha1_to_hex(expected_sha1));
+ return -1;
+ }
+
+ git_SHA1_Final(real_sha1, &c);
+ if (hashcmp(expected_sha1, real_sha1)) {
+ error("sha1 mismatch for %s (expected %s)", path,
+ sha1_to_hex(expected_sha1));
+ return -1;
+ }
+
+ return 0;
+}
+
+int read_loose_object(const char *path,
+ const unsigned char *expected_sha1,
+ enum object_type *type,
+ unsigned long *size,
+ void **contents)
+{
+ int ret = -1;
+ int fd = -1;
+ void *map = NULL;
+ unsigned long mapsize;
+ git_zstream stream;
+ char hdr[32];
+
+ *contents = NULL;
+
+ map = map_sha1_file_1(path, NULL, &mapsize);
+ if (!map) {
+ error_errno("unable to mmap %s", path);
+ goto out;
+ }
+
+ if (unpack_sha1_header(&stream, map, mapsize, hdr, sizeof(hdr)) < 0) {
+ error("unable to unpack header of %s", path);
+ goto out;
+ }
+
+ *type = parse_sha1_header(hdr, size);
+ if (*type < 0) {
+ error("unable to parse header of %s", path);
+ git_inflate_end(&stream);
+ goto out;
+ }
+
+ if (*type == OBJ_BLOB) {
+ if (check_stream_sha1(&stream, hdr, *size, path, expected_sha1) < 0)
+ goto out;
+ } else {
+ *contents = unpack_sha1_rest(&stream, hdr, *size, expected_sha1);
+ if (!*contents) {
+ error("unable to unpack contents of %s", path);
+ git_inflate_end(&stream);
+ goto out;
+ }
+ if (check_sha1_signature(expected_sha1, *contents,
+ *size, typename(*type))) {
+ error("sha1 mismatch for %s (expected %s)", path,
+ sha1_to_hex(expected_sha1));
+ free(*contents);
+ goto out;
+ }
+ }
+
+ ret = 0; /* everything checks out */
+
+out:
+ if (map)
+ munmap(map, mapsize);
+ if (fd >= 0)
+ close(fd);
+ return ret;
+}
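As background for the header handling above (a sketch, not part of the patch): the inflated form of a loose object begins with an ASCII header "<type> <size>" terminated by a NUL byte, followed by the object payload; that header is what unpack_sha1_header() and parse_sha1_header() operate on. A minimal stand-alone parser for such a header might look like this:

    #include <stdlib.h>
    #include <string.h>

    /*
     * Illustrative only: split a NUL-terminated loose-object header such as
     * "blob 1234" into its type name and size.
     */
    static int parse_loose_header(const char *hdr, char *type, size_t type_len,
                                  unsigned long *size)
    {
            const char *sp = strchr(hdr, ' ');

            if (!sp || (size_t)(sp - hdr) + 1 > type_len)
                    return -1;
            memcpy(type, hdr, sp - hdr);
            type[sp - hdr] = '\0';
            *size = strtoul(sp + 1, NULL, 10);
            return 0;
    }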
return RECURSE_SUBMODULES_ON_DEMAND;
else if (!strcmp(arg, "check"))
return RECURSE_SUBMODULES_CHECK;
+ else if (!strcmp(arg, "only"))
+ return RECURSE_SUBMODULES_ONLY;
else if (die_on_error)
die("bad %s argument: %s", opt, arg);
else
return ret;
}
-static int gitmodule_sha1_from_commit(const unsigned char *treeish_name,
+int gitmodule_sha1_from_commit(const unsigned char *treeish_name,
unsigned char *gitmodules_sha1,
struct strbuf *rev)
{
const char *name);
const struct submodule *submodule_from_path(const unsigned char *commit_or_tree,
const char *path);
+extern int gitmodule_sha1_from_commit(const unsigned char *commit_sha1,
+ unsigned char *gitmodules_sha1,
+ struct strbuf *rev);
void submodule_free(void);
#endif /* SUBMODULE_CONFIG_H */
}
}
+void gitmodules_config_sha1(const unsigned char *commit_sha1)
+{
+ struct strbuf rev = STRBUF_INIT;
+ unsigned char sha1[20];
+
+ if (gitmodule_sha1_from_commit(commit_sha1, sha1, &rev)) {
+ git_config_from_blob_sha1(submodule_config, rev.buf,
+ sha1, NULL);
+ }
+ strbuf_release(&rev);
+}
+
+/*
+ * Determine if a submodule has been initialized at a given 'path'
+ */
+int is_submodule_initialized(const char *path)
+{
+ int ret = 0;
+ const struct submodule *module = NULL;
+
+ module = submodule_from_path(null_sha1, path);
+
+ if (module) {
+ char *key = xstrfmt("submodule.%s.url", module->name);
+ char *value = NULL;
+
+ ret = !git_config_get_string(key, &value);
+
+ free(value);
+ free(key);
+ }
+
+ return ret;
+}
+
+/*
+ * Determine if a submodule has been populated at a given 'path'
+ */
+int is_submodule_populated(const char *path)
+{
+ int ret = 0;
+ char *gitdir = xstrfmt("%s/.git", path);
+
+ if (resolve_gitdir(gitdir))
+ ret = 1;
+
+ free(gitdir);
+ return ret;
+}
+
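A hypothetical caller (illustration only, not taken from this patch) would typically consult both new predicates before touching a submodule's working tree:

    /* Sketch: proceed only if the submodule is both initialized and checked out. */
    static int submodule_ready(const char *path)
    {
            if (!is_submodule_initialized(path))
                    return 0;       /* no submodule.<name>.url configured */
            if (!is_submodule_populated(path))
                    return 0;       /* no .git file or directory at <path>/.git */
            return 1;
    }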
int parse_submodule_update_strategy(const char *value,
struct submodule_update_strategy *dst)
{
return 1;
}
-int ok_to_remove_submodule(const char *path)
+/*
+ * Check if it is a bad idea to remove a submodule, i.e. if we'd lose data
+ * when doing so.
+ *
+ * Return 1 if we'd lose data, return 0 if the removal is fine,
+ * and negative values for errors.
+ */
+int bad_to_remove_submodule(const char *path, unsigned flags)
{
ssize_t len;
struct child_process cp = CHILD_PROCESS_INIT;
- const char *argv[] = {
- "status",
- "--porcelain",
- "-u",
- "--ignore-submodules=none",
- NULL,
- };
struct strbuf buf = STRBUF_INIT;
- int ok_to_remove = 1;
+ int ret = 0;
if (!file_exists(path) || is_empty_dir(path))
- return 1;
+ return 0;
if (!submodule_uses_gitfile(path))
- return 0;
+ return 1;
+
+ argv_array_pushl(&cp.args, "status", "--porcelain",
+ "--ignore-submodules=none", NULL);
+
+ if (flags & SUBMODULE_REMOVAL_IGNORE_UNTRACKED)
+ argv_array_push(&cp.args, "-uno");
+ else
+ argv_array_push(&cp.args, "-uall");
+
+ if (!(flags & SUBMODULE_REMOVAL_IGNORE_IGNORED_UNTRACKED))
+ argv_array_push(&cp.args, "--ignored");
- cp.argv = argv;
prepare_submodule_repo_env(&cp.env_array);
cp.git_cmd = 1;
cp.no_stdin = 1;
cp.out = -1;
cp.dir = path;
- if (start_command(&cp))
- die("Could not run 'git status --porcelain -uall --ignore-submodules=none' in submodule %s", path);
+ if (start_command(&cp)) {
+ if (flags & SUBMODULE_REMOVAL_DIE_ON_ERROR)
+ die(_("could not start 'git status in submodule '%s'"),
+ path);
+ ret = -1;
+ goto out;
+ }
len = strbuf_read(&buf, cp.out, 1024);
if (len > 2)
- ok_to_remove = 0;
+ ret = 1;
close(cp.out);
- if (finish_command(&cp))
- die("'git status --porcelain -uall --ignore-submodules=none' failed in submodule %s", path);
-
+ if (finish_command(&cp)) {
+ if (flags & SUBMODULE_REMOVAL_DIE_ON_ERROR)
+ die(_("could not run 'git status in submodule '%s'"),
+ path);
+ ret = -1;
+ }
+out:
strbuf_release(&buf);
- return ok_to_remove;
+ return ret;
}
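For illustration (a hypothetical call site; the actual "git rm" caller may combine the flags differently): the removal flags form a bitmask, so a caller that tolerates untracked files but still wants fatal errors on process failure could do:

    /* Hypothetical wrapper around the renamed helper. */
    static void ensure_submodule_removable(const char *path)
    {
            unsigned flags = SUBMODULE_REMOVAL_DIE_ON_ERROR |
                             SUBMODULE_REMOVAL_IGNORE_UNTRACKED;

            if (bad_to_remove_submodule(path, flags))
                    die(_("submodule '%s' contains local modifications"), path);
    }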
static int find_first_merges(struct object_array *result, const char *path,
if (strcmp(*var, CONFIG_DATA_ENVIRONMENT))
argv_array_push(out, *var);
}
- argv_array_push(out, "GIT_DIR=.git");
+ argv_array_pushf(out, "%s=%s", GIT_DIR_ENVIRONMENT,
+ DEFAULT_GIT_DIR_ENVIRONMENT);
}
/*
/* If it is an actual gitfile, it doesn't need migration. */
return;
- real_old_git_dir = xstrdup(real_path(old_git_dir));
+ real_old_git_dir = real_pathdup(old_git_dir);
sub = submodule_from_path(null_sha1, path);
if (!sub)
new_git_dir = git_path("modules/%s", sub->name);
if (safe_create_leading_directories_const(new_git_dir) < 0)
die(_("could not create directory '%s'"), new_git_dir);
- real_new_git_dir = xstrdup(real_path(new_git_dir));
+ real_new_git_dir = real_pathdup(new_git_dir);
if (!prefix)
prefix = get_super_prefix();
goto out;
/* Is it already absorbed into the superprojects git dir? */
- real_sub_git_dir = xstrdup(real_path(sub_git_dir));
- real_common_git_dir = xstrdup(real_path(get_git_common_dir()));
+ real_sub_git_dir = real_pathdup(sub_git_dir);
+ real_common_git_dir = real_pathdup(get_git_common_dir());
if (!skip_prefix(real_sub_git_dir, real_common_git_dir, &v))
relocate_single_git_dir_into_superproject(prefix, path);
struct sha1_array;
enum {
+ RECURSE_SUBMODULES_ONLY = -5,
RECURSE_SUBMODULES_CHECK = -4,
RECURSE_SUBMODULES_ERROR = -3,
RECURSE_SUBMODULES_NONE = -2,
};
#define SUBMODULE_UPDATE_STRATEGY_INIT {SM_UPDATE_UNSPECIFIED, NULL}
-int is_staging_gitmodules_ok(void);
-int update_path_in_gitmodules(const char *oldpath, const char *newpath);
-int remove_path_from_gitmodules(const char *path);
-void stage_updated_gitmodules(void);
-void set_diffopt_flags_from_submodule_config(struct diff_options *diffopt,
+extern int is_staging_gitmodules_ok(void);
+extern int update_path_in_gitmodules(const char *oldpath, const char *newpath);
+extern int remove_path_from_gitmodules(const char *path);
+extern void stage_updated_gitmodules(void);
+extern void set_diffopt_flags_from_submodule_config(struct diff_options *,
const char *path);
-int submodule_config(const char *var, const char *value, void *cb);
-void gitmodules_config(void);
-int parse_submodule_update_strategy(const char *value,
+extern int submodule_config(const char *var, const char *value, void *cb);
+extern void gitmodules_config(void);
+extern void gitmodules_config_sha1(const unsigned char *commit_sha1);
+extern int is_submodule_initialized(const char *path);
+extern int is_submodule_populated(const char *path);
+extern int parse_submodule_update_strategy(const char *value,
struct submodule_update_strategy *dst);
-const char *submodule_strategy_to_string(const struct submodule_update_strategy *s);
-void handle_ignore_submodules_arg(struct diff_options *diffopt, const char *);
-void show_submodule_summary(FILE *f, const char *path,
+extern const char *submodule_strategy_to_string(const struct submodule_update_strategy *s);
+extern void handle_ignore_submodules_arg(struct diff_options *, const char *);
+extern void show_submodule_summary(FILE *f, const char *path,
const char *line_prefix,
struct object_id *one, struct object_id *two,
unsigned dirty_submodule, const char *meta,
const char *del, const char *add, const char *reset);
-void show_submodule_inline_diff(FILE *f, const char *path,
+extern void show_submodule_inline_diff(FILE *f, const char *path,
const char *line_prefix,
struct object_id *one, struct object_id *two,
unsigned dirty_submodule, const char *meta,
const char *del, const char *add, const char *reset,
const struct diff_options *opt);
-void set_config_fetch_recurse_submodules(int value);
-void check_for_new_submodule_commits(unsigned char new_sha1[20]);
-int fetch_populated_submodules(const struct argv_array *options,
+extern void set_config_fetch_recurse_submodules(int value);
+extern void check_for_new_submodule_commits(unsigned char new_sha1[20]);
+extern int fetch_populated_submodules(const struct argv_array *options,
const char *prefix, int command_line_option,
int quiet, int max_parallel_jobs);
-unsigned is_submodule_modified(const char *path, int ignore_untracked);
-int submodule_uses_gitfile(const char *path);
-int ok_to_remove_submodule(const char *path);
-int merge_submodule(unsigned char result[20], const char *path, const unsigned char base[20],
- const unsigned char a[20], const unsigned char b[20], int search);
-int find_unpushed_submodules(struct sha1_array *commits, const char *remotes_name,
- struct string_list *needs_pushing);
+extern unsigned is_submodule_modified(const char *path, int ignore_untracked);
+extern int submodule_uses_gitfile(const char *path);
+
+#define SUBMODULE_REMOVAL_DIE_ON_ERROR (1<<0)
+#define SUBMODULE_REMOVAL_IGNORE_UNTRACKED (1<<1)
+#define SUBMODULE_REMOVAL_IGNORE_IGNORED_UNTRACKED (1<<2)
+extern int bad_to_remove_submodule(const char *path, unsigned flags);
+extern int merge_submodule(unsigned char result[20], const char *path,
+ const unsigned char base[20],
+ const unsigned char a[20],
+ const unsigned char b[20], int search);
+extern int find_unpushed_submodules(struct sha1_array *commits,
+ const char *remotes_name,
+ struct string_list *needs_pushing);
extern int push_unpushed_submodules(struct sha1_array *commits,
const char *remotes_name,
int dry_run);
-int parallel_submodules(void);
+extern void connect_work_tree_and_git_dir(const char *work_tree, const char *git_dir);
+extern int parallel_submodules(void);
/*
* Prepare the "env_array" parameter of a "struct child_process" for executing
* a submodule by clearing any repo-specific environment variables, but
* retaining any config in the environment.
*/
-void prepare_submodule_repo_env(struct argv_array *out);
+extern void prepare_submodule_repo_env(struct argv_array *out);
#define ABSORB_GITDIR_RECURSE_SUBMODULES (1<<0)
extern void absorb_git_dir_into_superproject(const char *prefix,
git checkout -b "replace_sub1_with_directory" "add_sub1" &&
git submodule update &&
- (
- cd sub1 &&
- git checkout modifications
- ) &&
+ git -C sub1 checkout modifications &&
git rm --cached sub1 &&
rm sub1/.git* &&
git config -f .gitmodules --remove-section "submodule.sub1" &&
test_expect_success 'setup: helpers for corruption tests' '
sha1_file() {
- echo "$*" | sed "s#..#.git/objects/&/#"
+ remainder=${1#??} &&
+ firsttwo=${1%$remainder} &&
+ echo ".git/objects/$firsttwo/$remainder"
} &&
remove_object() {
- file=$(sha1_file "$*") &&
- test -e "$file" &&
- rm -f "$file"
+ rm "$(sha1_file "$1")"
}
'
)
'
-remove_loose_object () {
- sha1="$(git rev-parse "$1")" &&
- remainder=${sha1#??} &&
- firsttwo=${sha1%$remainder} &&
- rm .git/objects/$firsttwo/$remainder
-}
-
test_expect_success 'fsck --name-objects' '
rm -rf name-objects &&
git init name-objects &&
test_commit julius caesar.t &&
test_commit augustus &&
test_commit caesar &&
- remove_loose_object $(git rev-parse julius:caesar.t) &&
+ remove_object $(git rev-parse julius:caesar.t) &&
test_must_fail git fsck --name-objects >out &&
tree=$(git rev-parse --verify julius:) &&
grep "$tree (\(refs/heads/master\|HEAD\)@{[0-9]*}:" out
)
'
+test_expect_success 'alternate objects are correctly blamed' '
+ test_when_finished "rm -rf alt.git .git/objects/info/alternates" &&
+ git init --bare alt.git &&
+ echo "../../alt.git/objects" >.git/objects/info/alternates &&
+ mkdir alt.git/objects/12 &&
+ >alt.git/objects/12/34567890123456789012345678901234567890 &&
+ test_must_fail git fsck >out 2>&1 &&
+ grep alt.git out
+'
+
+test_expect_success 'fsck errors in packed objects' '
+ git cat-file commit HEAD >basis &&
+ sed "s/</one/" basis >one &&
+ sed "s/</foo/" basis >two &&
+ one=$(git hash-object -t commit -w one) &&
+ two=$(git hash-object -t commit -w two) &&
+ pack=$(
+ {
+ echo $one &&
+ echo $two
+ } | git pack-objects .git/objects/pack/pack
+ ) &&
+ test_when_finished "rm -f .git/objects/pack/pack-$pack.*" &&
+ remove_object $one &&
+ remove_object $two &&
+ test_must_fail git fsck 2>out &&
+ grep "error in commit $one.* - bad name" out &&
+ grep "error in commit $two.* - bad name" out &&
+ ! grep corrupt out
+'
+
+test_expect_success 'fsck finds problems in duplicate loose objects' '
+ rm -rf broken-duplicate &&
+ git init broken-duplicate &&
+ (
+ cd broken-duplicate &&
+ test_commit duplicate &&
+ # no "-d" here, so we end up with duplicates
+ git repack &&
+ # now corrupt the loose copy
+ file=$(sha1_file "$(git rev-parse HEAD)") &&
+ rm "$file" &&
+ echo broken >"$file" &&
+ test_must_fail git fsck
+ )
+'
+
+test_expect_success 'fsck detects trailing loose garbage (commit)' '
+ git cat-file commit HEAD >basis &&
+ echo bump-commit-sha1 >>basis &&
+ commit=$(git hash-object -w -t commit basis) &&
+ file=$(sha1_file $commit) &&
+ test_when_finished "remove_object $commit" &&
+ chmod +w "$file" &&
+ echo garbage >>"$file" &&
+ test_must_fail git fsck 2>out &&
+ test_i18ngrep "garbage.*$commit" out
+'
+
+test_expect_success 'fsck detects trailing loose garbage (blob)' '
+ blob=$(echo trailing | git hash-object -w --stdin) &&
+ file=$(sha1_file $blob) &&
+ test_when_finished "remove_object $blob" &&
+ chmod +w "$file" &&
+ echo garbage >>"$file" &&
+ test_must_fail git fsck 2>out &&
+ test_i18ngrep "garbage.*$blob" out
+'
+
test_done
resolve topic@{push} refs/remotes/origin/magic/topic
'
+test_expect_success 'resolving @{push} fails with a detached HEAD' '
+ git checkout HEAD^0 &&
+ test_when_finished "git checkout -" &&
+ test_must_fail git rev-parse @{push}
+'
+
test_done
git show HEAD | grep "^Author: Twerp Snog"
'
+test_expect_success 'retain authorship w/ conflicts' '
+ git reset --hard twerp &&
+ test_commit a conflict a conflict-a &&
+ git reset --hard twerp &&
+ GIT_AUTHOR_NAME=AttributeMe \
+ test_commit b conflict b conflict-b &&
+ set_fake_editor &&
+ test_must_fail git rebase -i conflict-a &&
+ echo resolved >conflict &&
+ git add conflict &&
+ git rebase --continue &&
+ test $(git rev-parse conflict-a^0) = $(git rev-parse HEAD^) &&
+ git show >out &&
+ grep AttributeMe out
+'
+
test_expect_success 'squash' '
git reset --hard twerp &&
echo B > file7 &&
git cat-file commit HEAD~1 >picked_msg &&
git cat-file commit HEAD~2 >unrelatedpick_msg &&
git cat-file commit HEAD~3 >initial_msg &&
- test_must_fail grep "cherry picked from" initial_msg &&
+ ! grep "cherry picked from" initial_msg &&
grep "cherry picked from" unrelatedpick_msg &&
grep "cherry picked from" picked_msg &&
grep "cherry picked from" anotherpick_msg
git cat-file commit HEAD~1 >picked_msg &&
git cat-file commit HEAD~2 >unrelatedpick_msg &&
git cat-file commit HEAD~3 >initial_msg &&
- test_must_fail grep "Signed-off-by:" initial_msg &&
+ ! grep "Signed-off-by:" initial_msg &&
grep "Signed-off-by:" unrelatedpick_msg &&
- test_must_fail grep "Signed-off-by:" picked_msg &&
+ ! grep "Signed-off-by:" picked_msg &&
grep "Signed-off-by:" anotherpick_msg
'
test_cmp expect actual
'
-test_expect_success 'rm of a populated submodule with a .git directory fails even when forced' '
+test_expect_success 'rm of a populated submodule with a .git directory migrates git dir' '
git checkout -f master &&
git reset --hard &&
git submodule update &&
(cd submod &&
rm .git &&
cp -R ../.git/modules/sub .git &&
- GIT_WORK_TREE=. git config --unset core.worktree
+ GIT_WORK_TREE=. git config --unset core.worktree &&
+ rm -r ../.git/modules/sub
) &&
- test_must_fail git rm submod &&
- test -d submod &&
- test -d submod/.git &&
- git status -s -uno --ignore-submodules=none >actual &&
- ! test -s actual &&
- test_must_fail git rm -f submod &&
- test -d submod &&
- test -d submod/.git &&
+ git rm submod 2>output.err &&
+ ! test -d submod &&
+ ! test -d submod/.git &&
git status -s -uno --ignore-submodules=none >actual &&
- ! test -s actual &&
- rm -rf submod
+ test -s actual &&
+ test_i18ngrep Migrating output.err
'
cat >expect.deepmodified <<EOF
git submodule update --recursive &&
(cd submod/subsubmod &&
rm .git &&
- cp -R ../../.git/modules/sub/modules/sub .git &&
+ mv ../../.git/modules/sub/modules/sub .git &&
GIT_WORK_TREE=. git config --unset core.worktree
) &&
- test_must_fail git rm submod &&
- test -d submod &&
- test -d submod/subsubmod/.git &&
- git status -s -uno --ignore-submodules=none >actual &&
- ! test -s actual &&
- test_must_fail git rm -f submod &&
- test -d submod &&
- test -d submod/subsubmod/.git &&
+ git rm submod 2>output.err &&
+ ! test -d submod &&
+ ! test -d submod/subsubmod/.git &&
git status -s -uno --ignore-submodules=none >actual &&
- ! test -s actual &&
- rm -rf submod
+ test -s actual &&
+ test_i18ngrep Migrating output.err
'
test_expect_success 'checking out a commit after submodule removal needs manual updates' '
- git commit -m "submodule removal" submod &&
+ git commit -m "submodule removal" submod .gitmodules &&
git checkout HEAD^ &&
git submodule update &&
git checkout -q HEAD^ &&
}
t() {
+ use_config=
+ git config --unset diff.interHunkContext
+
case $# in
4) hunks=$4; cmd="diff -U$3";;
5) hunks=$5; cmd="diff -U$3 --inter-hunk-context=$4";;
+ 6) hunks=$5; cmd="diff -U$3"; git config diff.interHunkContext $4; use_config="(diff.interHunkContext=$4) ";;
esac
- label="$cmd, $1 common $2"
+ label="$use_config$cmd, $1 common $2"
file=f$1
expected=expected.$file.$3.$hunks
t 9 lines 3 2 2
t 9 lines 3 3 1
+# use diff.interHunkContext?
+t 1 line 0 0 2 config
+t 1 line 0 1 1 config
+t 1 line 0 2 1 config
+t 9 lines 3 3 1 config
+t 2 lines 0 0 2 config
+t 2 lines 0 1 2 config
+t 2 lines 0 2 1 config
+t 3 lines 1 0 2 config
+t 3 lines 1 1 1 config
+t 3 lines 1 2 1 config
+t 9 lines 3 2 2 config
+t 9 lines 3 3 1 config
+
+test_expect_success 'diff.interHunkContext invalid' '
+ git config diff.interHunkContext asdf &&
+ test_must_fail git diff &&
+ git config diff.interHunkContext -1 &&
+ test_must_fail git diff
+'
+
test_done
test_cmp_bin $original/nodiff.crlf $extracted/nodiff.crlf &&
test_cmp_bin $original/nodiff.lf $extracted/nodiff.lf
"
+
+ test_expect_success UNZIP " validate that custom diff is unchanged " "
+ test_cmp_bin $original/custom.cr $extracted/custom.cr &&
+ test_cmp_bin $original/custom.crlf $extracted/custom.crlf &&
+ test_cmp_bin $original/custom.lf $extracted/custom.lf
+ "
}
test_expect_success \
printf "text\r" >a/nodiff.cr &&
printf "text\r\n" >a/nodiff.crlf &&
printf "text\n" >a/nodiff.lf &&
+ printf "text\r" >a/custom.cr &&
+ printf "text\r\n" >a/custom.crlf &&
+ printf "text\n" >a/custom.lf &&
printf "\0\r" >a/binary.cr &&
printf "\0\r\n" >a/binary.crlf &&
printf "\0\n" >a/binary.lf &&
test_expect_success 'setup export-subst and diff attributes' '
echo "a/nodiff.* -diff" >>.git/info/attributes &&
echo "a/diff.* diff" >>.git/info/attributes &&
+ echo "a/custom.* diff=custom" >>.git/info/attributes &&
+ git config diff.custom.binary true &&
echo "substfile?" export-subst >>.git/info/attributes &&
git log --max-count=1 "--pretty=format:A${SUBSTFORMAT}O" HEAD \
>a/substfile1
'
-test_expect_success \
- 'create bare clone' \
- 'git clone --bare . bare.git &&
- cp .git/info/attributes bare.git/info/attributes'
+test_expect_success 'create bare clone' '
+ git clone --bare . bare.git &&
+ cp .git/info/attributes bare.git/info/attributes &&
+ # Recreate our changes to .git/config rather than just copying it, as
+ # we do not want to clobber core.bare or other settings.
+ git -C bare.git config diff.custom.binary true
+'
test_expect_success \
'remove ignored file' \
test_cmp expect actual
'
-test_expect_success 'incremental repack cannot create bitmaps' '
+test_expect_success 'incremental repack fails when bitmaps are requested' '
test_commit more-1 &&
- find .git/objects/pack -name "*.bitmap" >expect &&
- git repack -d &&
- find .git/objects/pack -name "*.bitmap" >actual &&
- test_cmp expect actual
+ test_must_fail git repack -d 2>err &&
+ test_i18ngrep "Incremental repacks are incompatible with bitmap" err
'
test_expect_success 'incremental repack can disable bitmaps' '
git --git-dir=dst/.git config --add \
receive.fsck.badDate warn &&
git push --porcelain dst bogus >act 2>&1 &&
- test_must_fail grep "missingEmail" act
+ ! grep "missingEmail" act
'
test_expect_success \
)
'
+test_expect_success 'rename succeeds with existing remote.<target>.prune' '
+ git clone one four.four &&
+ test_when_finished git config --global --unset remote.upstream.prune &&
+ git config --global remote.upstream.prune true &&
+ git -C four.four remote rename origin upstream
+'
+
cat >remotes_origin <<EOF
URL: $(pwd)/one
Push: refs/heads/master:refs/heads/upstream
test_expect_success 'push --porcelain bad url' '
mk_empty testrepo &&
test_must_fail git push >.git/bar --porcelain asdfasdfasd refs/heads/master:refs/remotes/origin/master &&
- test_must_fail grep -q Done .git/bar
+ ! grep -q Done .git/bar
'
test_expect_success 'push --porcelain rejected' '
test_cmp expected_submodule actual_submodule
'
+test_expect_success 'push --dry-run does not recursively update submodules' '
+ git -C work push --dry-run --recurse-submodules=only ../pub.git master &&
+
+ git -C submodule.git rev-parse master >actual_submodule &&
+ git -C pub.git rev-parse master >actual_pub &&
+ test_cmp expected_pub actual_pub &&
+ test_cmp expected_submodule actual_submodule
+'
+
+test_expect_success 'push only unpushed submodules recursively' '
+ git -C work/gar/bage rev-parse master >expected_submodule &&
+ git -C pub.git rev-parse master >expected_pub &&
+
+ git -C work push --recurse-submodules=only ../pub.git master &&
+
+ git -C submodule.git rev-parse master >actual_submodule &&
+ git -C pub.git rev-parse master >actual_pub &&
+ test_cmp expected_submodule actual_submodule &&
+ test_cmp expected_pub actual_pub
+'
+
test_done
--- /dev/null
+#!/bin/sh
+
+test_description='various UNC path tests (Windows-only)'
+. ./test-lib.sh
+
+if ! test_have_prereq MINGW; then
+ skip_all='skipping UNC path tests, requires Windows'
+ test_done
+fi
+
+UNCPATH="$(pwd)"
+case "$UNCPATH" in
+[A-Z]:*)
+ # Use administrative share e.g. \\localhost\C$\git-sdk-64\usr\src\git
+ # (we use forward slashes here because MSYS2 and Git accept them, and
+ # they are easier on the eyes)
+ UNCPATH="//localhost/${UNCPATH%%:*}\$/${UNCPATH#?:}"
+ test -d "$UNCPATH" || {
+ skip_all='could not access administrative share; skipping'
+ test_done
+ }
+ ;;
+*)
+ skip_all='skipping UNC path tests, cannot determine current path as UNC'
+ test_done
+ ;;
+esac
+
+test_expect_success setup '
+ test_commit initial
+'
+
+test_expect_success clone '
+ git clone "file://$UNCPATH" clone
+'
+
+test_expect_success push '
+ (
+ cd clone &&
+ git checkout -b to-push &&
+ test_commit to-push &&
+ git push origin HEAD
+ ) &&
+ rev="$(git -C clone rev-parse --verify refs/heads/to-push)" &&
+ test "$rev" = "$(git rev-parse --verify refs/heads/to-push)"
+'
+
+test_done
git clone --mirror src mirror2 &&
(cd mirror2 &&
git show-ref 2> clone.err > clone.out) &&
- test_must_fail grep Duplicate mirror2/clone.err &&
+ ! grep Duplicate mirror2/clone.err &&
grep some-tag mirror2/clone.out
'
test_i18ngrep "merge base must be tested" my_bisect_log.txt &&
grep $HASH4 my_bisect_log.txt &&
git bisect good > my_bisect_log.txt &&
- test_must_fail grep "merge base must be tested" my_bisect_log.txt &&
+ ! grep "merge base must be tested" my_bisect_log.txt &&
grep $HASH6 my_bisect_log.txt &&
git bisect reset
'
--- /dev/null
+#!/bin/sh
+
+test_description='test case exclude pathspec'
+
+. ./test-lib.sh
+
+test_expect_success 'setup a submodule' '
+ test_create_repo pretzel &&
+ : >pretzel/a &&
+ git -C pretzel add a &&
+ git -C pretzel commit -m "add a file" -- a &&
+ git submodule add ./pretzel sub &&
+ git commit -a -m "add submodule" &&
+ git submodule deinit --all
+'
+
+cat <<EOF >expect
+fatal: Pathspec 'sub/a' is in submodule 'sub'
+EOF
+
+test_expect_success 'error message for path inside submodule' '
+ echo a >sub/a &&
+ test_must_fail git add sub/a 2>actual &&
+ test_cmp expect actual
+'
+
+cat <<EOF >expect
+fatal: Pathspec '.' is in submodule 'sub'
+EOF
+
+test_expect_success 'error message for path inside submodule from within submodule' '
+ test_must_fail git -C sub add . 2>actual &&
+ test_cmp expect actual
+'
+
+test_done
)
'
+test_expect_success 'auto gc with too many loose objects does not attempt to create bitmaps' '
+ test_config gc.auto 3 &&
+ test_config gc.autodetach false &&
+ test_config pack.writebitmaps true &&
+ # We need to create two objects whose sha1s start with 17
+ # since this is what git gc counts. As it happens, these
+ # two blobs will do so.
+ test_commit 263 &&
+ test_commit 410 &&
+ # Our first gc will create a pack; our second will create a second pack
+ git gc --auto &&
+ ls .git/objects/pack | sort >existing_packs &&
+ test_commit 523 &&
+ test_commit 790 &&
+
+ git gc --auto 2>err &&
+ test_i18ngrep ! "^warning:" err &&
+ ls .git/objects/pack/ | sort >post_packs &&
+ comm -1 -3 existing_packs post_packs >new &&
+ comm -2 -3 existing_packs post_packs >del &&
+ test_line_count = 0 del && # No packs are deleted
+ test_line_count = 2 new # There is one new pack and its .idx
+'
+
+
test_done
tag_exists mytag'
test_expect_success '--force is moot with a non-existing tag name' '
+ test_when_finished git tag -d newtag forcetag &&
git tag newtag >expect &&
git tag --force forcetag >actual &&
test_cmp expect actual
'
-git tag -d newtag forcetag
# deleting tags:
'
test_expect_success 'listing tags in column with column.*' '
- git config column.tag row &&
- git config column.ui dense &&
+ test_config column.tag row &&
+ test_config column.ui dense &&
COLUMNS=40 git tag -l >actual &&
- git config --unset column.ui &&
- git config --unset column.tag &&
cat >expected <<\EOF &&
a1 aa1 cba t210 t211
v0.2.1 v1.0 v1.0.1 v1.1.3
'
test_expect_success 'listing tags -n in column with column.ui ignored' '
- git config column.ui "row dense" &&
+ test_config column.ui "row dense" &&
COLUMNS=40 git tag -l -n >actual &&
- git config --unset column.ui &&
cat >expected <<\EOF &&
a1 Foo
aa1 Foo
test_must_fail git tag -v forged-tag
'
+test_expect_success 'verifying a proper tag with --format pass and format accordingly' '
+ cat >expect <<-\EOF
+ tagname : signed-tag
+ EOF &&
+ git tag -v --format="tagname : %(tag)" "signed-tag" >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'verifying a forged tag with --format fail and format accordingly' '
+ cat >expect <<-\EOF
+ tagname : forged-tag
+ EOF &&
+ test_must_fail git tag -v --format="tagname : %(tag)" "forged-tag" >actual &&
+ test_cmp expect actual
+'
+
# blank and empty messages for signed tags:
get_tag_header empty-signed-tag $commit commit $time >expect
'
# try to sign with bad user.signingkey
-git config user.signingkey BobTheMouse
test_expect_success GPG \
'git tag -s fails if gpg is misconfigured (bad key)' \
- 'test_must_fail git tag -s -m tail tag-gpg-failure'
-git config --unset user.signingkey
+ 'test_config user.signingkey BobTheMouse &&
+ test_must_fail git tag -s -m tail tag-gpg-failure'
# try to produce invalid signature
test_expect_success GPG \
'
test_expect_success 'configured lexical sort' '
- git config tag.sort "v:refname" &&
+ test_config tag.sort "v:refname" &&
git tag -l "foo*" >actual &&
cat >expect <<-\EOF &&
foo1.3
'
test_expect_success 'option override configured sort' '
+ test_config tag.sort "v:refname" &&
git tag -l --sort=-refname "foo*" >actual &&
cat >expect <<-\EOF &&
foo1.6
'
test_expect_success 'invalid sort parameter in configuration' '
- git config tag.sort "v:notvalid" &&
+ test_config tag.sort "v:notvalid" &&
test_must_fail git tag -l "foo*"
'
test_expect_success 'version sort with prerelease reordering' '
- git config --unset tag.sort &&
- git config versionsort.prereleaseSuffix -rc &&
+ test_config versionsort.prereleaseSuffix -rc &&
git tag foo1.6-rc1 &&
git tag foo1.6-rc2 &&
git tag -l --sort=version:refname "foo*" >actual &&
'
test_expect_success 'reverse version sort with prerelease reordering' '
+ test_config versionsort.prereleaseSuffix -rc &&
git tag -l --sort=-version:refname "foo*" >actual &&
cat >expect <<-\EOF &&
foo1.10
test_cmp expect actual
'
+test_expect_success 'version sort with prerelease reordering and common leading character' '
+ test_config versionsort.prereleaseSuffix -before &&
+ git tag foo1.7-before1 &&
+ git tag foo1.7 &&
+ git tag foo1.7-after1 &&
+ git tag -l --sort=version:refname "foo1.7*" >actual &&
+ cat >expect <<-\EOF &&
+ foo1.7-before1
+ foo1.7
+ foo1.7-after1
+ EOF
+ test_cmp expect actual
+'
+
+test_expect_success 'version sort with prerelease reordering, multiple suffixes and common leading character' '
+ test_config versionsort.prereleaseSuffix -before &&
+ git config --add versionsort.prereleaseSuffix -after &&
+ git tag -l --sort=version:refname "foo1.7*" >actual &&
+ cat >expect <<-\EOF &&
+ foo1.7-before1
+ foo1.7-after1
+ foo1.7
+ EOF
+ test_cmp expect actual
+'
+
+test_expect_success 'version sort with prerelease reordering, multiple suffixes match the same tag' '
+ test_config versionsort.prereleaseSuffix -bar &&
+ git config --add versionsort.prereleaseSuffix -foo-baz &&
+ git config --add versionsort.prereleaseSuffix -foo-bar &&
+ git tag foo1.8-foo-bar &&
+ git tag foo1.8-foo-baz &&
+ git tag foo1.8 &&
+ git tag -l --sort=version:refname "foo1.8*" >actual &&
+ cat >expect <<-\EOF &&
+ foo1.8-foo-baz
+ foo1.8-foo-bar
+ foo1.8
+ EOF
+ test_cmp expect actual
+'
+
+test_expect_success 'version sort with prerelease reordering, multiple suffixes match starting at the same position' '
+ test_config versionsort.prereleaseSuffix -pre &&
+ git config --add versionsort.prereleaseSuffix -prerelease &&
+ git tag foo1.9-pre1 &&
+ git tag foo1.9-pre2 &&
+ git tag foo1.9-prerelease1 &&
+ git tag -l --sort=version:refname "foo1.9*" >actual &&
+ cat >expect <<-\EOF &&
+ foo1.9-pre1
+ foo1.9-pre2
+ foo1.9-prerelease1
+ EOF
+ test_cmp expect actual
+'
+
+test_expect_success 'version sort with general suffix reordering' '
+ test_config versionsort.suffix -alpha &&
+ git config --add versionsort.suffix -beta &&
+ git config --add versionsort.suffix "" &&
+ git config --add versionsort.suffix -gamma &&
+ git config --add versionsort.suffix -delta &&
+ git tag foo1.10-alpha &&
+ git tag foo1.10-beta &&
+ git tag foo1.10-gamma &&
+ git tag foo1.10-delta &&
+ git tag foo1.10-unlisted-suffix &&
+ git tag -l --sort=version:refname "foo1.10*" >actual &&
+ cat >expect <<-\EOF &&
+ foo1.10-alpha
+ foo1.10-beta
+ foo1.10
+ foo1.10-unlisted-suffix
+ foo1.10-gamma
+ foo1.10-delta
+ EOF
+ test_cmp expect actual
+'
+
+test_expect_success 'versionsort.suffix overrides versionsort.prereleaseSuffix' '
+ test_config versionsort.suffix -before &&
+ test_config versionsort.prereleaseSuffix -after &&
+ git tag -l --sort=version:refname "foo1.7*" >actual &&
+ cat >expect <<-\EOF &&
+ foo1.7-before1
+ foo1.7
+ foo1.7-after1
+ EOF
+ test_cmp expect actual
+'
+
+test_expect_success 'version sort with very long prerelease suffix' '
+ test_config versionsort.prereleaseSuffix -very-looooooooooooooooooooooooong-prerelease-suffix &&
+ git tag -l --sort=version:refname
+'
+
run_with_limited_stack () {
(ulimit -s 128 && "$@")
}
test_expect_success '--format should list tags as per format given' '
cat >expect <<-\EOF &&
- refname : refs/tags/foo1.10
- refname : refs/tags/foo1.3
- refname : refs/tags/foo1.6
- refname : refs/tags/foo1.6-rc1
- refname : refs/tags/foo1.6-rc2
+ refname : refs/tags/v1.0
+ refname : refs/tags/v1.0.1
+ refname : refs/tags/v1.1.3
EOF
- git tag -l --format="refname : %(refname)" "foo*" >actual &&
+ git tag -l --format="refname : %(refname)" "v1*" >actual &&
test_cmp expect actual
'
test_cmp expect.stderr actual.stderr
'
+test_expect_success 'verifying tag with --format' '
+ cat >expect <<-\EOF
+ tagname : fourth-signed
+ EOF &&
+ git verify-tag --format="tagname : %(tag)" "fourth-signed" >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'verifying a forged tag with --format fail and format accordingly' '
+ cat >expect <<-\EOF
+ tagname : 7th forged-signed
+ EOF &&
+ test_must_fail git verify-tag --format="tagname : %(tag)" $(cat forged1.tag) >actual-forged &&
+ test_cmp expect actual-forged
+'
+
test_done
test_i18ncmp expect2 actual2
'
+cat <<EOF >expect2
+Submodule 'foo/sub' ($pwd/withsubs/../rebasing) registered for path 'sub'
+EOF
+
+test_expect_success 'submodule update --init from and of subdirectory' '
+ git init withsubs &&
+ (cd withsubs &&
+ mkdir foo &&
+ git submodule add "$(pwd)/../rebasing" foo/sub &&
+ (cd foo &&
+ git submodule deinit -f sub &&
+ git submodule update --init sub 2>../../actual2
+ )
+ ) &&
+ test_i18ncmp expect2 actual2
+'
+
apos="'";
test_expect_success 'submodule update does not fetch already present commits' '
(cd submodule &&
"" submodule \
>actual &&
test_cmp expect_local_path actual &&
- git config submodule.a.url $old_a &&
- git config submodule.submodule.url $old_submodule &&
+ git config submodule.a.url "$old_a" &&
+ git config submodule.submodule.url "$old_submodule" &&
git config --unset submodule.a.path c
)
'
+cat >super/expect_url <<EOF
+Submodule url: '../submodule' for path 'b'
+Submodule url: 'git@somewhere.else.net:submodule.git' for path 'submodule'
+EOF
+
+test_expect_success 'reading of local configuration for uninitialized submodules' '
+ (
+ cd super &&
+ git submodule deinit -f b &&
+ old_submodule=$(git config submodule.submodule.url) &&
+ git config submodule.submodule.url git@somewhere.else.net:submodule.git &&
+ test-submodule-config --url \
+ "" b \
+ "" submodule \
+ >actual &&
+ test_cmp expect_url actual &&
+ git config submodule.submodule.url "$old_submodule" &&
+ git submodule init b
+ )
+'
+
cat >super/expect_fetchrecurse_die.err <<EOF
fatal: bad submodule.submodule.fetchrecursesubmodules argument: blabla
EOF
git rm file12 &&
git commit -m "branch1 changes" &&
+ git checkout -b delete-base branch1 &&
+ mkdir -p a/a &&
+ (echo one; echo two; echo 3; echo 4) >a/a/file.txt &&
+ git add a/a/file.txt &&
+ git commit -m"base file" &&
+ git checkout -b move-to-b delete-base &&
+ mkdir -p b/b &&
+ git mv a/a/file.txt b/b/file.txt &&
+ (echo one; echo two; echo 4) >b/b/file.txt &&
+ git commit -a -m"move to b" &&
+ git checkout -b move-to-c delete-base &&
+ mkdir -p c/c &&
+ git mv a/a/file.txt c/c/file.txt &&
+ (echo one; echo two; echo 3) >c/c/file.txt &&
+ git commit -a -m"move to c" &&
+
git checkout -b stash1 master &&
echo stash1 change file11 >file11 &&
git add file11 &&
git rm file11 &&
git commit -m "master updates" &&
+ git clean -fdx &&
+ git checkout -b order-file-start master &&
+ echo start >a &&
+ echo start >b &&
+ git add a b &&
+ git commit -m start &&
+ git checkout -b order-file-side1 order-file-start &&
+ echo side1 >a &&
+ echo side1 >b &&
+ git add a b &&
+ git commit -m side1 &&
+ git checkout -b order-file-side2 order-file-start &&
+ echo side2 >a &&
+ echo side2 >b &&
+ git add a b &&
+ git commit -m side2 &&
+
git config merge.tool mytool &&
git config mergetool.mytool.cmd "cat \"\$REMOTE\" >\"\$MERGED\"" &&
git config mergetool.mytool.trustExitCode true &&
'
test_expect_success 'custom mergetool' '
- git checkout -b test1 branch1 &&
+ test_when_finished "git reset --hard" &&
+ git checkout -b test$test_count branch1 &&
git submodule update -N &&
test_must_fail git merge master >/dev/null 2>&1 &&
( yes "" | git mergetool both >/dev/null 2>&1 ) &&
'
test_expect_success 'mergetool crlf' '
+ test_when_finished "git reset --hard" &&
+ # This test_config line must go after the above reset line so that
+ # core.autocrlf is unconfigured before reset runs. (The
+ # test_config command uses test_when_finished internally and
+ # test_when_finished is LIFO.)
test_config core.autocrlf true &&
- git checkout -b test2 branch1 &&
+ git checkout -b test$test_count branch1 &&
test_must_fail git merge master >/dev/null 2>&1 &&
( yes "" | git mergetool file1 >/dev/null 2>&1 ) &&
( yes "" | git mergetool file2 >/dev/null 2>&1 ) &&
test "$(printf x | cat subdir/file3 -)" = "$(printf "master new sub\r\nx")" &&
git submodule update -N &&
test "$(cat submod/bar)" = "master submodule" &&
- git commit -m "branch1 resolved with mergetool - autocrlf" &&
- test_config core.autocrlf false &&
- git reset --hard
+ git commit -m "branch1 resolved with mergetool - autocrlf"
'
test_expect_success 'mergetool in subdir' '
- git checkout -b test3 branch1 &&
+ test_when_finished "git reset --hard" &&
+ git checkout -b test$test_count branch1 &&
git submodule update -N &&
(
cd subdir &&
'
test_expect_success 'mergetool on file in parent dir' '
+ test_when_finished "git reset --hard" &&
+ git checkout -b test$test_count branch1 &&
+ git submodule update -N &&
(
cd subdir &&
+ test_must_fail git merge master >/dev/null 2>&1 &&
+ ( yes "" | git mergetool file3 >/dev/null 2>&1 ) &&
( yes "" | git mergetool ../file1 >/dev/null 2>&1 ) &&
( yes "" | git mergetool ../file2 ../spaced\ name >/dev/null 2>&1 ) &&
( yes "" | git mergetool ../both >/dev/null 2>&1 ) &&
'
test_expect_success 'mergetool skips autoresolved' '
- git checkout -b test4 branch1 &&
+ test_when_finished "git reset --hard" &&
+ git checkout -b test$test_count branch1 &&
git submodule update -N &&
test_must_fail git merge master &&
test -n "$(git ls-files -u)" &&
( yes "d" | git mergetool file12 >/dev/null 2>&1 ) &&
( yes "l" | git mergetool submod >/dev/null 2>&1 ) &&
output="$(git mergetool --no-prompt)" &&
- test "$output" = "No files need merging" &&
- git reset --hard
+ test "$output" = "No files need merging"
'
-test_expect_success 'mergetool merges all from subdir' '
+test_expect_success 'mergetool merges all from subdir (rerere disabled)' '
+ test_when_finished "git reset --hard" &&
+ git checkout -b test$test_count branch1 &&
test_config rerere.enabled false &&
(
cd subdir &&
)
'
+test_expect_success 'mergetool merges all from subdir (rerere enabled)' '
+ test_when_finished "git reset --hard" &&
+ git checkout -b test$test_count branch1 &&
+ test_config rerere.enabled true &&
+ rm -rf .git/rr-cache &&
+ (
+ cd subdir &&
+ test_must_fail git merge master &&
+ ( yes "r" | git mergetool ../submod ) &&
+ ( yes "d" "d" | git mergetool --no-prompt ) &&
+ test "$(cat ../file1)" = "master updated" &&
+ test "$(cat ../file2)" = "master new" &&
+ test "$(cat file3)" = "master new sub" &&
+ ( cd .. && git submodule update -N ) &&
+ test "$(cat ../submod/bar)" = "master submodule" &&
+ git commit -m "branch2 resolved by mergetool from subdir"
+ )
+'
+
test_expect_success 'mergetool skips resolved paths when rerere is active' '
+ test_when_finished "git reset --hard" &&
test_config rerere.enabled true &&
rm -rf .git/rr-cache &&
- git checkout -b test5 branch1 &&
+ git checkout -b test$test_count branch1 &&
git submodule update -N &&
test_must_fail git merge master >/dev/null 2>&1 &&
( yes "l" | git mergetool --no-prompt submod >/dev/null 2>&1 ) &&
( yes "d" "d" | git mergetool --no-prompt >/dev/null 2>&1 ) &&
git submodule update -N &&
output="$(yes "n" | git mergetool --no-prompt)" &&
- test "$output" = "No files need merging" &&
- git reset --hard
+ test "$output" = "No files need merging"
'
test_expect_success 'conflicted stash sets up rerere' '
+ test_when_finished "git reset --hard" &&
test_config rerere.enabled true &&
git checkout stash1 &&
echo "Conflicting stash content" >file11 &&
'
test_expect_success 'mergetool takes partial path' '
- git reset --hard &&
+ test_when_finished "git reset --hard" &&
test_config rerere.enabled false &&
- git checkout -b test12 branch1 &&
+ git checkout -b test$test_count branch1 &&
git submodule update -N &&
test_must_fail git merge master &&
( yes "" | git mergetool subdir ) &&
- test "$(cat subdir/file3)" = "master new sub" &&
- git reset --hard
+ test "$(cat subdir/file3)" = "master new sub"
'
test_expect_success 'mergetool delete/delete conflict' '
- git checkout -b delete-base branch1 &&
- mkdir -p a/a &&
- (echo one; echo two; echo 3; echo 4) >a/a/file.txt &&
- git add a/a/file.txt &&
- git commit -m"base file" &&
- git checkout -b move-to-b delete-base &&
- mkdir -p b/b &&
- git mv a/a/file.txt b/b/file.txt &&
- (echo one; echo two; echo 4) >b/b/file.txt &&
- git commit -a -m"move to b" &&
- git checkout -b move-to-c delete-base &&
- mkdir -p c/c &&
- git mv a/a/file.txt c/c/file.txt &&
- (echo one; echo two; echo 3) >c/c/file.txt &&
- git commit -a -m"move to c" &&
+ test_when_finished "git reset --hard" &&
+ git checkout -b test$test_count move-to-c &&
test_must_fail git merge move-to-b &&
echo d | git mergetool a/a/file.txt &&
! test -f a/a/file.txt &&
- git reset --hard HEAD &&
+ git reset --hard &&
test_must_fail git merge move-to-b &&
echo m | git mergetool a/a/file.txt &&
test -f b/b/file.txt &&
- git reset --hard HEAD &&
+ git reset --hard &&
test_must_fail git merge move-to-b &&
! echo a | git mergetool a/a/file.txt &&
- ! test -f a/a/file.txt &&
- git reset --hard HEAD
+ ! test -f a/a/file.txt
'
test_expect_success 'mergetool produces no errors when keepBackup is used' '
+ test_when_finished "git reset --hard" &&
+ git checkout -b test$test_count move-to-c &&
test_config mergetool.keepBackup true &&
test_must_fail git merge move-to-b &&
: >expect &&
echo d | git mergetool a/a/file.txt 2>actual &&
test_cmp expect actual &&
- ! test -d a &&
- git reset --hard HEAD
+ ! test -d a
'
test_expect_success 'mergetool honors tempfile config for deleted files' '
+ test_when_finished "git reset --hard" &&
+ git checkout -b test$test_count move-to-c &&
test_config mergetool.keepTemporaries false &&
test_must_fail git merge move-to-b &&
echo d | git mergetool a/a/file.txt &&
- ! test -d a &&
- git reset --hard HEAD
+ ! test -d a
'
test_expect_success 'mergetool keeps tempfiles when aborting delete/delete' '
+ test_when_finished "git reset --hard" &&
+ test_when_finished "git clean -fdx" &&
+ git checkout -b test$test_count move-to-c &&
test_config mergetool.keepTemporaries true &&
test_must_fail git merge move-to-b &&
! (echo a; echo n) | git mergetool a/a/file.txt &&
file_REMOTE_.txt
EOF
ls -1 a/a | sed -e "s/[0-9]*//g" >actual &&
- test_cmp expect actual &&
- git clean -fdx &&
- git reset --hard HEAD
+ test_cmp expect actual
'
test_expect_success 'deleted vs modified submodule' '
- git checkout -b test6 branch1 &&
+ test_when_finished "git reset --hard" &&
+ git checkout -b test$test_count branch1 &&
git submodule update -N &&
mv submod submod-movedaside &&
git rm --cached submod &&
git commit -m "Submodule deleted from branch" &&
- git checkout -b test6.a test6 &&
+ git checkout -b test$test_count.a test$test_count &&
test_must_fail git merge master &&
test -n "$(git ls-files -u)" &&
( yes "" | git mergetool file1 file2 spaced\ name subdir/file3 >/dev/null 2>&1 ) &&
git commit -m "Merge resolved by keeping module" &&
mv submod submod-movedaside &&
- git checkout -b test6.b test6 &&
+ git checkout -b test$test_count.b test$test_count &&
git submodule update -N &&
test_must_fail git merge master &&
test -n "$(git ls-files -u)" &&
git commit -m "Merge resolved by deleting module" &&
mv submod-movedaside submod &&
- git checkout -b test6.c master &&
+ git checkout -b test$test_count.c master &&
git submodule update -N &&
- test_must_fail git merge test6 &&
+ test_must_fail git merge test$test_count &&
test -n "$(git ls-files -u)" &&
( yes "" | git mergetool file1 file2 spaced\ name subdir/file3 >/dev/null 2>&1 ) &&
( yes "" | git mergetool both >/dev/null 2>&1 ) &&
git commit -m "Merge resolved by deleting module" &&
mv submod.orig submod &&
- git checkout -b test6.d master &&
+ git checkout -b test$test_count.d master &&
git submodule update -N &&
- test_must_fail git merge test6 &&
+ test_must_fail git merge test$test_count &&
test -n "$(git ls-files -u)" &&
( yes "" | git mergetool file1 file2 spaced\ name subdir/file3 >/dev/null 2>&1 ) &&
( yes "" | git mergetool both >/dev/null 2>&1 ) &&
test "$(cat submod/bar)" = "master submodule" &&
output="$(git mergetool --no-prompt)" &&
test "$output" = "No files need merging" &&
- git commit -m "Merge resolved by keeping module" &&
- git reset --hard HEAD
+ git commit -m "Merge resolved by keeping module"
'
test_expect_success 'file vs modified submodule' '
- git checkout -b test7 branch1 &&
+ test_when_finished "git reset --hard" &&
+ git checkout -b test$test_count branch1 &&
git submodule update -N &&
mv submod submod-movedaside &&
git rm --cached submod &&
echo not a submodule >submod &&
git add submod &&
git commit -m "Submodule path becomes file" &&
- git checkout -b test7.a branch1 &&
+ git checkout -b test$test_count.a branch1 &&
test_must_fail git merge master &&
test -n "$(git ls-files -u)" &&
( yes "" | git mergetool file1 file2 spaced\ name subdir/file3 >/dev/null 2>&1 ) &&
git commit -m "Merge resolved by keeping module" &&
mv submod submod-movedaside &&
- git checkout -b test7.b test7 &&
+ git checkout -b test$test_count.b test$test_count &&
test_must_fail git merge master &&
test -n "$(git ls-files -u)" &&
( yes "" | git mergetool file1 file2 spaced\ name subdir/file3 >/dev/null 2>&1 ) &&
test "$output" = "No files need merging" &&
git commit -m "Merge resolved by keeping file" &&
- git checkout -b test7.c master &&
+ git checkout -b test$test_count.c master &&
rmdir submod && mv submod-movedaside submod &&
test ! -e submod.orig &&
git submodule update -N &&
- test_must_fail git merge test7 &&
+ test_must_fail git merge test$test_count &&
test -n "$(git ls-files -u)" &&
( yes "" | git mergetool file1 file2 spaced\ name subdir/file3 >/dev/null 2>&1 ) &&
( yes "" | git mergetool both >/dev/null 2>&1 ) &&
test "$output" = "No files need merging" &&
git commit -m "Merge resolved by keeping file" &&
- git checkout -b test7.d master &&
+ git checkout -b test$test_count.d master &&
rmdir submod && mv submod.orig submod &&
git submodule update -N &&
- test_must_fail git merge test7 &&
+ test_must_fail git merge test$test_count &&
test -n "$(git ls-files -u)" &&
( yes "" | git mergetool file1 file2 spaced\ name subdir/file3 >/dev/null 2>&1 ) &&
( yes "" | git mergetool both>/dev/null 2>&1 ) &&
'
test_expect_success 'submodule in subdirectory' '
- git checkout -b test10 branch1 &&
+ test_when_finished "git reset --hard" &&
+ git checkout -b test$test_count branch1 &&
git submodule update -N &&
(
cd subdir &&
git commit -m "add initial versions"
)
) &&
+ test_when_finished "rm -rf subdir/subdir_module" &&
git submodule add git://example.com/subsubmodule subdir/subdir_module &&
git add subdir/subdir_module &&
git commit -m "add submodule in subdirectory" &&
- git checkout -b test10.a test10 &&
+ git checkout -b test$test_count.a test$test_count &&
git submodule update -N &&
(
cd subdir/subdir_module &&
git checkout -b super10.a &&
- echo test10.a >file15 &&
+ echo test$test_count.a >file15 &&
git add file15 &&
git commit -m "on branch 10.a"
) &&
git add subdir/subdir_module &&
- git commit -m "change submodule in subdirectory on test10.a" &&
+ git commit -m "change submodule in subdirectory on test$test_count.a" &&
- git checkout -b test10.b test10 &&
+ git checkout -b test$test_count.b test$test_count &&
git submodule update -N &&
(
cd subdir/subdir_module &&
git checkout -b super10.b &&
- echo test10.b >file15 &&
+ echo test$test_count.b >file15 &&
git add file15 &&
git commit -m "on branch 10.b"
) &&
git add subdir/subdir_module &&
- git commit -m "change submodule in subdirectory on test10.b" &&
+ git commit -m "change submodule in subdirectory on test$test_count.b" &&
- test_must_fail git merge test10.a >/dev/null 2>&1 &&
+ test_must_fail git merge test$test_count.a >/dev/null 2>&1 &&
(
cd subdir &&
( yes "l" | git mergetool subdir_module )
) &&
- test "$(cat subdir/subdir_module/file15)" = "test10.b" &&
+ test "$(cat subdir/subdir_module/file15)" = "test$test_count.b" &&
git submodule update -N &&
- test "$(cat subdir/subdir_module/file15)" = "test10.b" &&
+ test "$(cat subdir/subdir_module/file15)" = "test$test_count.b" &&
git reset --hard &&
git submodule update -N &&
- test_must_fail git merge test10.a >/dev/null 2>&1 &&
+ test_must_fail git merge test$test_count.a >/dev/null 2>&1 &&
( yes "r" | git mergetool subdir/subdir_module ) &&
- test "$(cat subdir/subdir_module/file15)" = "test10.b" &&
+ test "$(cat subdir/subdir_module/file15)" = "test$test_count.b" &&
git submodule update -N &&
- test "$(cat subdir/subdir_module/file15)" = "test10.a" &&
- git commit -m "branch1 resolved with mergetool" &&
- rm -rf subdir/subdir_module
+ test "$(cat subdir/subdir_module/file15)" = "test$test_count.a" &&
+ git commit -m "branch1 resolved with mergetool"
'
test_expect_success 'directory vs modified submodule' '
- git checkout -b test11 branch1 &&
+ test_when_finished "git reset --hard" &&
+ git checkout -b test$test_count branch1 &&
mv submod submod-movedaside &&
git rm --cached submod &&
mkdir submod &&
test "$(cat submod/file16)" = "not a submodule" &&
rm -rf submod.orig &&
- git reset --hard >/dev/null 2>&1 &&
+ git reset --hard &&
test_must_fail git merge master &&
test -n "$(git ls-files -u)" &&
test ! -e submod.orig &&
( cd submod && git clean -f && git reset --hard ) &&
git submodule update -N &&
test "$(cat submod/bar)" = "master submodule" &&
- git reset --hard >/dev/null 2>&1 && rm -rf submod-movedaside &&
+ git reset --hard &&
+ rm -rf submod-movedaside &&
- git checkout -b test11.c master &&
+ git checkout -b test$test_count.c master &&
git submodule update -N &&
- test_must_fail git merge test11 &&
+ test_must_fail git merge test$test_count &&
test -n "$(git ls-files -u)" &&
( yes "l" | git mergetool submod ) &&
git submodule update -N &&
test "$(cat submod/bar)" = "master submodule" &&
- git reset --hard >/dev/null 2>&1 &&
+ git reset --hard &&
git submodule update -N &&
- test_must_fail git merge test11 &&
+ test_must_fail git merge test$test_count &&
test -n "$(git ls-files -u)" &&
test ! -e submod.orig &&
( yes "r" | git mergetool submod ) &&
test "$(cat submod/file16)" = "not a submodule" &&
- git reset --hard master >/dev/null 2>&1 &&
+ git reset --hard master &&
( cd submod && git clean -f && git reset --hard ) &&
git submodule update -N
'
test_expect_success 'file with no base' '
- git checkout -b test13 branch1 &&
+ test_when_finished "git reset --hard" &&
+ git checkout -b test$test_count branch1 &&
test_must_fail git merge master &&
git mergetool --no-prompt --tool mybase -- both &&
>expected &&
- test_cmp both expected &&
- git reset --hard master >/dev/null 2>&1
+ test_cmp both expected
'
test_expect_success 'custom commands override built-ins' '
- git checkout -b test14 branch1 &&
+ test_when_finished "git reset --hard" &&
+ git checkout -b test$test_count branch1 &&
test_config mergetool.defaults.cmd "cat \"\$REMOTE\" >\"\$MERGED\"" &&
test_config mergetool.defaults.trustExitCode true &&
test_must_fail git merge master &&
git mergetool --no-prompt --tool defaults -- both &&
echo master both added >expected &&
- test_cmp both expected &&
- git reset --hard master >/dev/null 2>&1
+ test_cmp both expected
'
test_expect_success 'filenames seen by tools start with ./' '
- git checkout -b test15 branch1 &&
+ test_when_finished "git reset --hard" &&
+ git checkout -b test$test_count branch1 &&
test_config mergetool.writeToTemp false &&
test_config mergetool.myecho.cmd "echo \"\$LOCAL\"" &&
test_config mergetool.myecho.trustExitCode true &&
test_must_fail git merge master &&
git mergetool --no-prompt --tool myecho -- both >actual &&
- grep ^\./both_LOCAL_ actual >/dev/null &&
- git reset --hard master >/dev/null 2>&1
+ grep ^\./both_LOCAL_ actual >/dev/null
'
test_lazy_prereq MKTEMP '
'
test_expect_success MKTEMP 'temporary filenames are used with mergetool.writeToTemp' '
- git checkout -b test16 branch1 &&
+ test_when_finished "git reset --hard" &&
+ git checkout -b test$test_count branch1 &&
test_config mergetool.writeToTemp true &&
test_config mergetool.myecho.cmd "echo \"\$LOCAL\"" &&
test_config mergetool.myecho.trustExitCode true &&
test_must_fail git merge master &&
git mergetool --no-prompt --tool myecho -- both >actual &&
- test_must_fail grep ^\./both_LOCAL_ actual >/dev/null &&
- grep /both_LOCAL_ actual >/dev/null &&
- git reset --hard master >/dev/null 2>&1
+ ! grep ^\./both_LOCAL_ actual >/dev/null &&
+ grep /both_LOCAL_ actual >/dev/null
'
test_expect_success 'diff.orderFile configuration is honored' '
+ test_when_finished "git reset --hard" &&
+ git checkout -b test$test_count order-file-side2 &&
test_config diff.orderFile order-file &&
test_config mergetool.myecho.cmd "echo \"\$LOCAL\"" &&
test_config mergetool.myecho.trustExitCode true &&
echo b >order-file &&
echo a >>order-file &&
- git checkout -b order-file-start master &&
- echo start >a &&
- echo start >b &&
- git add a b &&
- git commit -m start &&
- git checkout -b order-file-side1 order-file-start &&
- echo side1 >a &&
- echo side1 >b &&
- git add a b &&
- git commit -m side1 &&
- git checkout -b order-file-side2 order-file-start &&
- echo side2 >a &&
- echo side2 >b &&
- git add a b &&
- git commit -m side2 &&
test_must_fail git merge order-file-side1 &&
cat >expect <<-\EOF &&
Merging:
b
a
EOF
+
+	# Make sure "order-file", which is ambiguous between a rev
+	# and a path, is handled correctly.
+ git branch order-file HEAD &&
+
git mergetool --no-prompt --tool myecho >output &&
git grep --no-index -h -A2 Merging: output >actual &&
- test_cmp expect actual &&
- git reset --hard >/dev/null
+ test_cmp expect actual
'
test_expect_success 'mergetool -Oorder-file is honored' '
+ test_when_finished "git reset --hard" &&
+ git checkout -b test$test_count order-file-side2 &&
test_config diff.orderFile order-file &&
test_config mergetool.myecho.cmd "echo \"\$LOCAL\"" &&
test_config mergetool.myecho.trustExitCode true &&
+ echo b >order-file &&
+ echo a >>order-file &&
test_must_fail git merge order-file-side1 &&
cat >expect <<-\EOF &&
Merging:
git mergetool -O/dev/null --no-prompt --tool myecho >output &&
git grep --no-index -h -A2 Merging: output >actual &&
test_cmp expect actual &&
- git reset --hard >/dev/null 2>&1 &&
+ git reset --hard &&
git config --unset diff.orderFile &&
test_must_fail git merge order-file-side1 &&
EOF
git mergetool -Oorder-file --no-prompt --tool myecho >output &&
git grep --no-index -h -A2 Merging: output >actual &&
- test_cmp expect actual &&
- git reset --hard >/dev/null 2>&1
+ test_cmp expect actual
'
test_done
echo "a+bc"
echo "abc"
} >ab &&
+ {
+ echo d &&
+ echo 0
+ } >d0 &&
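+	# As a basic or extended regex, "[\d]" matches a literal "d", but
+	# with PCRE it matches the digit "0"; grepping d0 therefore shows
+	# which pattern type took effect last.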
echo vvv >v &&
echo ww w >w &&
echo x x xx x >x &&
'
test_expect_success 'grep -G -F -P -E pattern' '
- >empty &&
- test_must_fail git grep -G -F -P -E "a\x{2b}b\x{2a}c" ab >actual &&
- test_cmp empty actual
+ echo "d0:d" >expected &&
+ git grep -G -F -P -E "[\d]" d0 >actual &&
+ test_cmp expected actual
'
test_expect_success 'grep pattern with grep.patternType=fixed, =basic, =perl, =extended' '
- >empty &&
- test_must_fail git \
+ echo "d0:d" >expected &&
+ git \
-c grep.patterntype=fixed \
-c grep.patterntype=basic \
-c grep.patterntype=perl \
-c grep.patterntype=extended \
- grep "a\x{2b}b\x{2a}c" ab >actual &&
- test_cmp empty actual
+ grep "[\d]" d0 >actual &&
+ test_cmp expected actual
'
test_expect_success LIBPCRE 'grep -G -F -E -P pattern' '
- echo "ab:a+b*c" >expected &&
- git grep -G -F -E -P "a\x{2b}b\x{2a}c" ab >actual &&
+ echo "d0:0" >expected &&
+ git grep -G -F -E -P "[\d]" d0 >actual &&
test_cmp expected actual
'
test_expect_success LIBPCRE 'grep pattern with grep.patternType=fixed, =basic, =extended, =perl' '
- echo "ab:a+b*c" >expected &&
+ echo "d0:0" >expected &&
git \
-c grep.patterntype=fixed \
-c grep.patterntype=basic \
-c grep.patterntype=extended \
-c grep.patterntype=perl \
- grep "a\x{2b}b\x{2a}c" ab >actual &&
+ grep "[\d]" d0 >actual &&
test_cmp expected actual
'
--- /dev/null
+#!/bin/sh
+
+test_description='Test grep recurse-submodules feature
+
+This test verifies that the recurse-submodules feature correctly greps
+across submodules.
+'
+
+. ./test-lib.sh
+
+test_expect_success 'setup directory structure and submodule' '
+ echo "foobar" >a &&
+ mkdir b &&
+ echo "bar" >b/b &&
+ git add a b &&
+ git commit -m "add a and b" &&
+ git init submodule &&
+ echo "foobar" >submodule/a &&
+ git -C submodule add a &&
+ git -C submodule commit -m "add a" &&
+ git submodule add ./submodule &&
+ git commit -m "added submodule"
+'
+
+test_expect_success 'grep correctly finds patterns in a submodule' '
+ cat >expect <<-\EOF &&
+ a:foobar
+ b/b:bar
+ submodule/a:foobar
+ EOF
+
+ git grep -e "bar" --recurse-submodules >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'grep and basic pathspecs' '
+ cat >expect <<-\EOF &&
+ submodule/a:foobar
+ EOF
+
+ git grep -e. --recurse-submodules -- submodule >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'grep and nested submodules' '
+ git init submodule/sub &&
+ echo "foobar" >submodule/sub/a &&
+ git -C submodule/sub add a &&
+ git -C submodule/sub commit -m "add a" &&
+ git -C submodule submodule add ./sub &&
+ git -C submodule add sub &&
+ git -C submodule commit -m "added sub" &&
+ git add submodule &&
+ git commit -m "updated submodule" &&
+
+ cat >expect <<-\EOF &&
+ a:foobar
+ b/b:bar
+ submodule/a:foobar
+ submodule/sub/a:foobar
+ EOF
+
+ git grep -e "bar" --recurse-submodules >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'grep and multiple patterns' '
+ cat >expect <<-\EOF &&
+ a:foobar
+ submodule/a:foobar
+ submodule/sub/a:foobar
+ EOF
+
+ git grep -e "bar" --and -e "foo" --recurse-submodules >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'grep and multiple patterns (--not)' '
+ cat >expect <<-\EOF &&
+ b/b:bar
+ EOF
+
+ git grep -e "bar" --and --not -e "foo" --recurse-submodules >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'basic grep tree' '
+ cat >expect <<-\EOF &&
+ HEAD:a:foobar
+ HEAD:b/b:bar
+ HEAD:submodule/a:foobar
+ HEAD:submodule/sub/a:foobar
+ EOF
+
+ git grep -e "bar" --recurse-submodules HEAD >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'grep tree HEAD^' '
+ cat >expect <<-\EOF &&
+ HEAD^:a:foobar
+ HEAD^:b/b:bar
+ HEAD^:submodule/a:foobar
+ EOF
+
+ git grep -e "bar" --recurse-submodules HEAD^ >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'grep tree HEAD^^' '
+ cat >expect <<-\EOF &&
+ HEAD^^:a:foobar
+ HEAD^^:b/b:bar
+ EOF
+
+ git grep -e "bar" --recurse-submodules HEAD^^ >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'grep tree and pathspecs' '
+ cat >expect <<-\EOF &&
+ HEAD:submodule/a:foobar
+ HEAD:submodule/sub/a:foobar
+ EOF
+
+ git grep -e "bar" --recurse-submodules HEAD -- submodule >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'grep tree and pathspecs with wildcard' '
+ cat >expect <<-\EOF &&
+ HEAD:submodule/a:foobar
+ HEAD:submodule/sub/a:foobar
+ EOF
+
+ git grep -e "bar" --recurse-submodules HEAD -- "submodule*a" >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'grep tree and more pathspecs' '
+ cat >expect <<-\EOF &&
+ HEAD:submodule/a:foobar
+ EOF
+
+ git grep -e "bar" --recurse-submodules HEAD -- "submodul?/a" >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'grep tree and more pathspecs (nested submodule)' '
+ cat >expect <<-\EOF &&
+ HEAD:submodule/sub/a:foobar
+ EOF
+
+ git grep -e "bar" --recurse-submodules HEAD -- "submodul*/sub/a" >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success !MINGW 'grep recurse submodule colon in name' '
+ git init parent &&
+ test_when_finished "rm -rf parent" &&
+ echo "foobar" >"parent/fi:le" &&
+ git -C parent add "fi:le" &&
+ git -C parent commit -m "add fi:le" &&
+
+ git init "su:b" &&
+ test_when_finished "rm -rf su:b" &&
+ echo "foobar" >"su:b/fi:le" &&
+ git -C "su:b" add "fi:le" &&
+ git -C "su:b" commit -m "add fi:le" &&
+
+ git -C parent submodule add "../su:b" "su:b" &&
+ git -C parent commit -m "add submodule" &&
+
+ cat >expect <<-\EOF &&
+ fi:le:foobar
+ su:b/fi:le:foobar
+ EOF
+ git -C parent grep -e "foobar" --recurse-submodules >actual &&
+ test_cmp expect actual &&
+
+ cat >expect <<-\EOF &&
+ HEAD:fi:le:foobar
+ HEAD:su:b/fi:le:foobar
+ EOF
+ git -C parent grep -e "foobar" --recurse-submodules HEAD >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'grep history with moved submodules' '
+ git init parent &&
+ test_when_finished "rm -rf parent" &&
+ echo "foobar" >parent/file &&
+ git -C parent add file &&
+ git -C parent commit -m "add file" &&
+
+ git init sub &&
+ test_when_finished "rm -rf sub" &&
+ echo "foobar" >sub/file &&
+ git -C sub add file &&
+ git -C sub commit -m "add file" &&
+
+ git -C parent submodule add ../sub dir/sub &&
+ git -C parent commit -m "add submodule" &&
+
+ cat >expect <<-\EOF &&
+ dir/sub/file:foobar
+ file:foobar
+ EOF
+ git -C parent grep -e "foobar" --recurse-submodules >actual &&
+ test_cmp expect actual &&
+
+ git -C parent mv dir/sub sub-moved &&
+ git -C parent commit -m "moved submodule" &&
+
+ cat >expect <<-\EOF &&
+ file:foobar
+ sub-moved/file:foobar
+ EOF
+ git -C parent grep -e "foobar" --recurse-submodules >actual &&
+ test_cmp expect actual &&
+
+ cat >expect <<-\EOF &&
+ HEAD^:dir/sub/file:foobar
+ HEAD^:file:foobar
+ EOF
+ git -C parent grep -e "foobar" --recurse-submodules HEAD^ >actual &&
+ test_cmp expect actual
+'
+
+test_incompatible_with_recurse_submodules ()
+{
+ test_expect_success "--recurse-submodules and $1 are incompatible" "
+ test_must_fail git grep -e. --recurse-submodules $1 2>actual &&
+ test_i18ngrep 'not supported with --recurse-submodules' actual
+ "
+}
+
+test_incompatible_with_recurse_submodules --untracked
+test_incompatible_with_recurse_submodules --no-index
+
+test_done
test_cmp expected_n result
'
+test_expect_success 'set up abbrev tests' '
+ test_commit abbrev &&
+ sha1=$(git rev-parse --verify HEAD) &&
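+	# check_abbrev <length> [<blame options>...]: run "git blame" on
+	# abbrev.t with the given options and check that the commit hash
+	# in the output is abbreviated to exactly <length> hex digits.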
+ check_abbrev () {
+ expect=$1; shift
+ echo $sha1 | cut -c 1-$expect >expect &&
+ git blame "$@" abbrev.t >actual &&
+ perl -lne "/[0-9a-f]+/ and print \$&" <actual >actual.sha &&
+ test_cmp expect actual.sha
+ }
+'
+
+test_expect_success 'blame --abbrev=<n> works' '
+ # non-boundary commits get +1 for alignment
+ check_abbrev 31 --abbrev=30 HEAD &&
+ check_abbrev 30 --abbrev=30 ^HEAD
+'
+
+test_expect_success 'blame -l aligns regular and boundary commits' '
+ check_abbrev 40 -l HEAD &&
+ check_abbrev 39 -l ^HEAD
+'
+
+test_expect_success 'blame --abbrev=40 behaves like -l' '
+ check_abbrev 40 --abbrev=40 HEAD &&
+ check_abbrev 39 --abbrev=40 ^HEAD
+'
+
+test_expect_success '--no-abbrev works like --abbrev=40' '
+ check_abbrev 40 --no-abbrev
+'
+
test_done
--- /dev/null
+#!/bin/sh
+
+test_description='
+The general idea is that we have a single file whose lines come from
+multiple other files, and those individual files were modified in the same
+commits. That means that we will see the same commit in multiple contexts,
+and each one should be attributed to the correct file.
+
+Note that we need to use "blame -C" to find the commit for all lines. We will
+not bother testing that the non-C case fails to find it. That is how blame
+behaves now, but it is not a property we want to make sure is retained.
+'
+. ./test-lib.sh
+
+# help avoid typing and reading long strings of similar lines
+# in the tests below
+generate_expect () {
+ while read nr data
+ do
+ i=0
+ while test $i -lt $nr
+ do
+ echo $data
+ i=$((i + 1))
+ done
+ done
+}
+
+test_expect_success 'setup split file case' '
+ # use lines long enough to trigger content detection
+ test_seq 1000 1010 >one &&
+ test_seq 2000 2010 >two &&
+ git add one two &&
+ test_commit base &&
+
+ sed "6s/^/modified /" <one >one.tmp &&
+ mv one.tmp one &&
+ sed "6s/^/modified /" <two >two.tmp &&
+ mv two.tmp two &&
+ git add -u &&
+ test_commit modified &&
+
+ cat one two >combined &&
+ git add combined &&
+ git rm one two &&
+ test_commit combined
+'
+
+test_expect_success 'setup simulated porcelain' '
+ # This just reads porcelain-ish output and tries
+ # to output the value of a given field for each line (either by
+ # reading the field that accompanies this line, or referencing
+ # the information found last time the commit was mentioned).
+ cat >read-porcelain.pl <<-\EOF
+ my $field = shift;
+ while (<>) {
+ if (/^[0-9a-f]{40} /) {
+ flush();
+ $hash = $&;
+ } elsif (/^$field (.*)/) {
+ $cache{$hash} = $1;
+ }
+ }
+ flush();
+
+ sub flush {
+ return unless defined $hash;
+ if (defined $cache{$hash}) {
+ print "$cache{$hash}\n";
+ } else {
+ print "NONE\n";
+ }
+ }
+ EOF
+'
+
+for output in porcelain line-porcelain
+do
+ test_expect_success "generate --$output output" '
+ git blame --root -C --$output combined >output
+ '
+
+ test_expect_success "$output output finds correct commits" '
+ generate_expect >expect <<-\EOF &&
+ 5 base
+ 1 modified
+ 10 base
+ 1 modified
+ 5 base
+ EOF
+ perl read-porcelain.pl summary <output >actual &&
+ test_cmp expect actual
+ '
+
+ test_expect_success "$output output shows correct filenames" '
+ generate_expect >expect <<-\EOF &&
+ 11 one
+ 11 two
+ EOF
+ perl read-porcelain.pl filename <output >actual &&
+ test_cmp expect actual
+ '
+
+ test_expect_success "$output output shows correct previous pointer" '
+ generate_expect >expect <<-EOF &&
+ 5 NONE
+ 1 $(git rev-parse modified^) one
+ 10 NONE
+ 1 $(git rev-parse modified^) two
+ 5 NONE
+ EOF
+ perl read-porcelain.pl previous <output >actual &&
+ test_cmp expect actual
+ '
+done
+
+test_done
--smtp-server="$(pwd)/fake.sendmail" \
$@ \
$patches >stdout &&
- test_must_fail grep "Send this email" stdout &&
+ ! grep "Send this email" stdout &&
>no_confirm_okay
}
test_expect_success 'init without -s/-T/-b/-t does not warn' '
test ! -d trunk &&
git svn init "$svnrepo"/project/trunk trunk 2>warning &&
- test_must_fail grep -q prefix warning &&
+ ! grep -q prefix warning &&
rm -rf trunk &&
rm -f warning
'
test_expect_success 'clone without -s/-T/-b/-t does not warn' '
test ! -d trunk &&
git svn clone "$svnrepo"/project/trunk 2>warning &&
- test_must_fail grep -q prefix warning &&
+ ! grep -q prefix warning &&
rm -rf trunk &&
rm -f warning
'
test_expect_success 'init with -s/-T/-b/-t assumes --prefix=origin/' '
test ! -d project &&
git svn init -s "$svnrepo"/project project 2>warning &&
- test_must_fail grep -q prefix warning &&
+ ! grep -q prefix warning &&
test_svn_configured_prefix "origin/" &&
rm -rf project &&
rm -f warning
test_expect_success 'clone with -s/-T/-b/-t assumes --prefix=origin/' '
test ! -d project &&
git svn clone -s "$svnrepo"/project 2>warning &&
- test_must_fail grep -q prefix warning &&
+ ! grep -q prefix warning &&
test_svn_configured_prefix "origin/" &&
rm -rf project &&
rm -f warning
test_expect_success 'init with -s/-T/-b/-t and --prefix "" still works' '
test ! -d project &&
git svn init -s "$svnrepo"/project project --prefix "" 2>warning &&
- test_must_fail grep -q prefix warning &&
+ ! grep -q prefix warning &&
test_svn_configured_prefix "" &&
rm -rf project &&
rm -f warning
test_expect_success 'clone with -s/-T/-b/-t and --prefix "" still works' '
test ! -d project &&
git svn clone -s "$svnrepo"/project --prefix "" 2>warning &&
- test_must_fail grep -q prefix warning &&
+ ! grep -q prefix warning &&
test_svn_configured_prefix "" &&
rm -rf project &&
rm -f warning
make_change_by_user usernamefile3 Derek derek@example.com &&
P4EDITOR=cat P4USER=alice P4PASSWD=secret &&
export P4EDITOR P4USER P4PASSWD &&
- git p4 commit |\
- grep "git author derek@example.com does not match" &&
+ git p4 commit >actual &&
+ grep "git author derek@example.com does not match" actual &&
make_change_by_user usernamefile3 Charlie charlie@example.com &&
- git p4 commit |\
- grep "git author charlie@example.com does not match" &&
+ git p4 commit >actual &&
+ grep "git author charlie@example.com does not match" actual &&
make_change_by_user usernamefile3 alice alice@example.com &&
- git p4 commit |\
- test_must_fail grep "git author.*does not match" &&
+ git p4 commit >actual &&
+ ! grep "git author.*does not match" actual &&
git config git-p4.skipUserNameCheck true &&
make_change_by_user usernamefile3 Charlie charlie@example.com &&
- git p4 commit |\
- test_must_fail grep "git author.*does not match" &&
+ git p4 commit >actual &&
+ ! grep "git author.*does not match" actual &&
p4_check_commit_author usernamefile3 alice
)
git diff-tree -r -C HEAD &&
git p4 submit &&
p4 filelog //depot/file8 &&
- p4 filelog //depot/file8 | test_must_fail grep -q "branch from" &&
+ ! p4 filelog //depot/file8 | grep -q "branch from" &&
echo "file9" >>file2 &&
git commit -a -m "Differentiate file2" &&
git config git-p4.detectCopies true &&
git p4 submit &&
p4 filelog //depot/file9 &&
- p4 filelog //depot/file9 | test_must_fail grep -q "branch from" &&
+ ! p4 filelog //depot/file9 | grep -q "branch from" &&
echo "file10" >>file2 &&
git commit -a -m "Differentiate file2" &&
git config git-p4.detectCopies $(($level + 2)) &&
git p4 submit &&
p4 filelog //depot/file12 &&
- p4 filelog //depot/file12 | test_must_fail grep -q "branch from" &&
+ ! p4 filelog //depot/file12 | grep -q "branch from" &&
echo "file13" >>file2 &&
git commit -a -m "Differentiate file2" &&
#include "commit.h"
#include "tree.h"
#include "blob.h"
+#include "gpg-interface.h"
const char *tag_type = "tag";
ret = check_signature(buf, payload_size, buf + payload_size,
size - payload_size, &sigc);
- print_signature_buffer(&sigc, flags);
+
+ if (!(flags & GPG_VERIFY_OMIT_STATUS))
+ print_signature_buffer(&sigc, flags);
signature_check_clear(&sigc);
return ret;
if (run_pre_push_hook(transport, remote_refs))
return -1;
- if ((flags & TRANSPORT_RECURSE_SUBMODULES_ON_DEMAND) && !is_bare_repository()) {
+ if ((flags & (TRANSPORT_RECURSE_SUBMODULES_ON_DEMAND |
+ TRANSPORT_RECURSE_SUBMODULES_ONLY)) &&
+ !is_bare_repository()) {
struct ref *ref = remote_refs;
struct sha1_array commits = SHA1_ARRAY_INIT;
}
if (((flags & TRANSPORT_RECURSE_SUBMODULES_CHECK) ||
- ((flags & TRANSPORT_RECURSE_SUBMODULES_ON_DEMAND) &&
+ ((flags & (TRANSPORT_RECURSE_SUBMODULES_ON_DEMAND |
+ TRANSPORT_RECURSE_SUBMODULES_ONLY)) &&
!pretend)) && !is_bare_repository()) {
struct ref *ref = remote_refs;
struct string_list needs_pushing = STRING_LIST_INIT_DUP;
sha1_array_clear(&commits);
}
- push_ret = transport->push_refs(transport, remote_refs, flags);
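+	/* With a submodules-only push, skip pushing the superproject's own refs. */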
+ if (!(flags & TRANSPORT_RECURSE_SUBMODULES_ONLY))
+ push_ret = transport->push_refs(transport, remote_refs, flags);
+ else
+ push_ret = 0;
err = push_had_errors(remote_refs);
ret = push_ret | err;
if (flags & TRANSPORT_PUSH_SET_UPSTREAM)
set_upstreams(transport, remote_refs, pretend);
- if (!(flags & TRANSPORT_PUSH_DRY_RUN)) {
+ if (!(flags & (TRANSPORT_PUSH_DRY_RUN |
+ TRANSPORT_RECURSE_SUBMODULES_ONLY))) {
struct ref *ref;
for (ref = remote_refs; ref; ref = ref->next)
transport_update_tracking_ref(transport->remote, ref, verbose);
const struct ref *extra;
struct alternate_refs_data *cb = data;
- other = xstrdup(real_path(e->path));
+ other = real_pathdup(e->path);
len = strlen(other);
while (other[len-1] == '/')
enum transport_family family;
};
-#define TRANSPORT_PUSH_ALL 1
-#define TRANSPORT_PUSH_FORCE 2
-#define TRANSPORT_PUSH_DRY_RUN 4
-#define TRANSPORT_PUSH_MIRROR 8
-#define TRANSPORT_PUSH_PORCELAIN 16
-#define TRANSPORT_PUSH_SET_UPSTREAM 32
-#define TRANSPORT_RECURSE_SUBMODULES_CHECK 64
-#define TRANSPORT_PUSH_PRUNE 128
-#define TRANSPORT_RECURSE_SUBMODULES_ON_DEMAND 256
-#define TRANSPORT_PUSH_NO_HOOK 512
-#define TRANSPORT_PUSH_FOLLOW_TAGS 1024
-#define TRANSPORT_PUSH_CERT_ALWAYS 2048
-#define TRANSPORT_PUSH_CERT_IF_ASKED 4096
-#define TRANSPORT_PUSH_ATOMIC 8192
-#define TRANSPORT_PUSH_OPTIONS 16384
+#define TRANSPORT_PUSH_ALL (1<<0)
+#define TRANSPORT_PUSH_FORCE (1<<1)
+#define TRANSPORT_PUSH_DRY_RUN (1<<2)
+#define TRANSPORT_PUSH_MIRROR (1<<3)
+#define TRANSPORT_PUSH_PORCELAIN (1<<4)
+#define TRANSPORT_PUSH_SET_UPSTREAM (1<<5)
+#define TRANSPORT_RECURSE_SUBMODULES_CHECK (1<<6)
+#define TRANSPORT_PUSH_PRUNE (1<<7)
+#define TRANSPORT_RECURSE_SUBMODULES_ON_DEMAND (1<<8)
+#define TRANSPORT_PUSH_NO_HOOK (1<<9)
+#define TRANSPORT_PUSH_FOLLOW_TAGS (1<<10)
+#define TRANSPORT_PUSH_CERT_ALWAYS (1<<11)
+#define TRANSPORT_PUSH_CERT_IF_ASKED (1<<12)
+#define TRANSPORT_PUSH_ATOMIC (1<<13)
+#define TRANSPORT_PUSH_OPTIONS (1<<14)
+#define TRANSPORT_RECURSE_SUBMODULES_ONLY (1<<15)
extern int transport_summary_width(const struct ref *refs);
*/
if (ps->recursive && S_ISDIR(entry->mode))
return entry_interesting;
+
+ /*
+ * When matching against submodules with
+ * wildcard characters, ensure that the entry
+ * at least matches up to the first wild
+ * character. More accurate matching can then
+ * be performed in the submodule itself.
+ */
+ if (ps->recursive && S_ISGITLINK(entry->mode) &&
+ !ps_strncmp(item, match + baselen,
+ entry->path,
+ item->nowildcard_len - baselen))
+ return entry_interesting;
}
continue;
strbuf_setlen(base, base_offset + baselen);
return entry_interesting;
}
+
+ /*
+ * When matching against submodules with
+ * wildcard characters, ensure that the entry
+ * at least matches up to the first wild
+ * character. More accurate matching can then
+ * be performed in the submodule itself.
+ */
+ if (ps->recursive && S_ISGITLINK(entry->mode) &&
+ !ps_strncmp(item, match, base->buf + base_offset,
+ item->nowildcard_len)) {
+ strbuf_setlen(base, base_offset + baselen);
+ return entry_interesting;
+ }
+
strbuf_setlen(base, base_offset + baselen);
/*
schedule_dir_for_removal(ce->name, ce_namelen(ce));
}
-static int check_updates(struct unpack_trees_options *o,
- const struct checkout *state)
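+/*
+ * Set up the "Checking out files" progress meter, sized by counting
+ * the cache entries flagged for update or removal; returns NULL when
+ * no progress reporting was requested.
+ */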
+static struct progress *get_progress(struct unpack_trees_options *o)
{
unsigned cnt = 0, total = 0;
+ struct index_state *index = &o->result;
+
+ if (!o->update || !o->verbose_update)
+ return NULL;
+
+ for (; cnt < index->cache_nr; cnt++) {
+ const struct cache_entry *ce = index->cache[cnt];
+ if (ce->ce_flags & (CE_UPDATE | CE_WT_REMOVE))
+ total++;
+ }
+
+ return start_progress_delay(_("Checking out files"),
+ total, 50, 1);
+}
+
+static int check_updates(struct unpack_trees_options *o)
+{
+ unsigned cnt = 0;
+ int errs = 0;
struct progress *progress = NULL;
struct index_state *index = &o->result;
+ struct checkout state = CHECKOUT_INIT;
int i;
- int errs = 0;
- if (o->update && o->verbose_update) {
- for (total = cnt = 0; cnt < index->cache_nr; cnt++) {
- const struct cache_entry *ce = index->cache[cnt];
- if (ce->ce_flags & (CE_UPDATE | CE_WT_REMOVE))
- total++;
- }
+ state.force = 1;
+ state.quiet = 1;
+ state.refresh_cache = 1;
+ state.istate = index;
- progress = start_progress_delay(_("Checking out files"),
- total, 50, 1);
- cnt = 0;
- }
+ progress = get_progress(o);
if (o->update)
- git_attr_set_direction(GIT_ATTR_CHECKOUT, &o->result);
+ git_attr_set_direction(GIT_ATTR_CHECKOUT, index);
for (i = 0; i < index->cache_nr; i++) {
const struct cache_entry *ce = index->cache[i];
display_progress(progress, ++cnt);
if (o->update && !o->dry_run)
unlink_entry(ce);
- continue;
}
}
- remove_marked_cache_entries(&o->result);
+ remove_marked_cache_entries(index);
remove_scheduled_dirs();
for (i = 0; i < index->cache_nr; i++) {
display_progress(progress, ++cnt);
ce->ce_flags &= ~CE_UPDATE;
if (o->update && !o->dry_run) {
- errs |= checkout_entry(ce, state, NULL);
+ errs |= checkout_entry(ce, &state, NULL);
}
}
}
int i, ret;
static struct cache_entry *dfc;
struct exclude_list el;
- struct checkout state = CHECKOUT_INIT;
if (len > MAX_UNPACK_TREES)
die("unpack_trees takes at most %d trees", MAX_UNPACK_TREES);
- state.force = 1;
- state.quiet = 1;
- state.refresh_cache = 1;
- state.istate = &o->result;
memset(&el, 0, sizeof(el));
if (!core_apply_sparse_checkout || !o->update)
}
o->src_index = NULL;
- ret = check_updates(o, &state) ? (-2) : 0;
+ ret = check_updates(o) ? (-2) : 0;
if (o->dst_index) {
if (!ret) {
if (!o->result.cache_tree)
#include "cache.h"
static FILE *error_handle;
-static int tweaked_error_buffering;
void vreportf(const char *prefix, const char *err, va_list params)
{
+ char msg[4096];
FILE *fh = error_handle ? error_handle : stderr;
+ char *p;
- fflush(fh);
- if (!tweaked_error_buffering) {
- setvbuf(fh, NULL, _IOLBF, 0);
- tweaked_error_buffering = 1;
+ vsnprintf(msg, sizeof(msg), err, params);
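+	/* Replace control characters (other than TAB and LF) with '?'. */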
+ for (p = msg; *p; p++) {
+ if (iscntrl(*p) && *p != '\t' && *p != '\n')
+ *p = '?';
}
-
- fputs(prefix, fh);
- vfprintf(fh, err, params);
- fputc('\n', fh);
+ fprintf(fh, "%s%s\n", prefix, msg);
}
static NORETURN void usage_builtin(const char *err, va_list params)
void set_error_handle(FILE *fh)
{
error_handle = fh;
- tweaked_error_buffering = 0;
}
void NORETURN usagef(const char *err, ...)
static const struct string_list *prereleases;
static int initialized;
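+/*
+ * Position of a configured suffix found in a tag name: conf_pos is its
+ * index in the configured suffix list, start is the offset in the tag
+ * name where it begins, and len is its length.
+ */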
+struct suffix_match {
+ int conf_pos;
+ int start;
+ int len;
+};
+
+static void find_better_matching_suffix(const char *tagname, const char *suffix,
+ int suffix_len, int start, int conf_pos,
+ struct suffix_match *match)
+{
+ /*
+ * A better match either starts earlier or starts at the same offset
+ * but is longer.
+ */
+ int end = match->len < suffix_len ? match->start : match->start-1;
+ int i;
+ for (i = start; i <= end; i++)
+ if (starts_with(tagname + i, suffix)) {
+ match->conf_pos = conf_pos;
+ match->start = i;
+ match->len = suffix_len;
+ break;
+ }
+}
+
/*
- * p1 and p2 point to the first different character in two strings. If
- * either p1 or p2 starts with a prerelease suffix, it will be forced
- * to be on top.
- *
- * If both p1 and p2 start with (different) suffix, the order is
- * determined by config file.
+ * off is the offset of the first different character in the two strings
+ * s1 and s2. If either s1 or s2 contains a prerelease suffix that covers
+ * that offset, or one that ends right before it, then that string is
+ * forced to be on top.
*
- * Note that we don't have to deal with the situation when both p1 and
- * p2 start with the same suffix because the common part is already
- * consumed by the caller.
+ * If both s1 and s2 contain a (different) suffix around that position,
+ * their order is determined by the order of those two suffixes in the
+ * configuration.
+ * If either string contains more than one different suffix around
+ * that position, then that string is sorted according to the contained
+ * suffix that starts at the earliest offset in that string.
+ * If more than one of those suffixes starts at that earliest offset,
+ * then that string is sorted according to the longest of them.
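+ *
+ * For example, with both "-pre" and "-prerelease" configured (in that
+ * order), a tag name containing "-prerelease" at the relevant offset is
+ * sorted according to the longer "-prerelease" suffix rather than
+ * "-pre", even though both match at the same starting offset.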
*
* Return non-zero if *diff contains the return value for versioncmp()
*/
-static int swap_prereleases(const void *p1_,
- const void *p2_,
+static int swap_prereleases(const char *s1,
+ const char *s2,
+ int off,
int *diff)
{
- const char *p1 = p1_;
- const char *p2 = p2_;
- int i, i1 = -1, i2 = -1;
+ int i;
+ struct suffix_match match1 = { -1, off, -1 };
+ struct suffix_match match2 = { -1, off, -1 };
for (i = 0; i < prereleases->nr; i++) {
const char *suffix = prereleases->items[i].string;
- if (i1 == -1 && starts_with(p1, suffix))
- i1 = i;
- if (i2 == -1 && starts_with(p2, suffix))
- i2 = i;
+ int start, suffix_len = strlen(suffix);
+ if (suffix_len < off)
+ start = off - suffix_len;
+ else
+ start = 0;
+ find_better_matching_suffix(s1, suffix, suffix_len, start,
+ i, &match1);
+ find_better_matching_suffix(s2, suffix, suffix_len, start,
+ i, &match2);
}
- if (i1 == -1 && i2 == -1)
+ if (match1.conf_pos == -1 && match2.conf_pos == -1)
return 0;
- if (i1 >= 0 && i2 >= 0)
- *diff = i1 - i2;
- else if (i1 >= 0)
+ if (match1.conf_pos == match2.conf_pos)
+ /* Found the same suffix in both, e.g. "-rc" in "v1.0-rcX"
+ * and "v1.0-rcY": the caller should decide based on "X"
+ * and "Y". */
+ return 0;
+
+ if (match1.conf_pos >= 0 && match2.conf_pos >= 0)
+ *diff = match1.conf_pos - match2.conf_pos;
+ else if (match1.conf_pos >= 0)
*diff = -1;
- else /* if (i2 >= 0) */
+ else /* if (match2.conf_pos >= 0) */
*diff = 1;
return 1;
}
}
if (!initialized) {
+ const struct string_list *deprecated_prereleases;
initialized = 1;
- prereleases = git_config_get_value_multi("versionsort.prereleasesuffix");
+ prereleases = git_config_get_value_multi("versionsort.suffix");
+ deprecated_prereleases = git_config_get_value_multi("versionsort.prereleasesuffix");
+ if (prereleases) {
+ if (deprecated_prereleases)
+ warning("ignoring versionsort.prereleasesuffix because versionsort.suffix is set");
+ } else
+ prereleases = deprecated_prereleases;
}
- if (prereleases && swap_prereleases(p1 - 1, p2 - 1, &diff))
+ if (prereleases && swap_prereleases(s1, s2, (const char *) p1 - s1 - 1,
+ &diff))
return diff;
state = result_type[state * 3 + (((c2 == '0') + (isdigit (c2) != 0)))];
return wt;
arg = prefix_filename(prefix, strlen(prefix), arg);
- path = xstrdup(real_path(arg));
+ path = real_pathdup(arg);
for (; *list; list++)
if (!fspathcmp(path, real_path((*list)->path)))
break;