* Interix, Cygwin and Minix ports got updated.
- * A handful of patches to update git-p4 (in contrib/).
+ * Various updates to git-p4 (in contrib/) and "git fast-import".
* Gitweb learned to read from /etc/gitweb-common.conf when it exists,
before reading from gitweb_config.perl or from /etc/gitweb.conf
Android with 4kb window). We used to reject anything that was not
deflated with 32kb window.
+ * Interaction between the use of pager and coloring of the output has
+ been improved, especially when a command that is not built-in was
+ involved.
+
* "git am" learned to pass "--exclude=<path>" option through to underlying
"git apply".
you perform per each iteration does not need a working tree, of
course).
+ * The length of abbreviated object names in "git branch -v" output
+ now honors core.abbrev configuration variable.
+
* "git check-attr" can take relative paths from the command line.
* "git check-attr" learned "--all" option to list the attributes for a
generation machinery stolen from jgit, which might give better
performance.
+ * "git diff" had a wierd worst case behaviour that can be triggered
+ when comparing files with potentially many places that could match.
+
* "git fetch", "git push" and friends no longer show connection
errors for addresses that couldn't be connected when at least one
address succeeds (this is arguably a regression but a deliberate
* "git grep" learned "-W" option that shows wider context using the same
logic used by "git diff" to determine the hunk header.
+ * The "--decorate" option to "git log" and its family learned to
+ highlight grafted and replaced commits.
+
* "git rebase master topci" no longer spews usage hints after giving
"fatal: no such branch: topci" error message.
+ * The recursive merge strategy implementation received a fairly large
+   set of fixes for many corner cases that rarely happen in real world
+   projects (it has been verified that none of the 16000+ merges in
+   the Linux kernel history back to v2.6.12 is affected by the
+   corner-case bugs this update fixes).
+
* "git stash" learned --include-untracked option.
* "git submodule update" used to stop at the first error updating a
submodule; it now goes on to update other submodules that can be
updated, and reports the ones with errors at the end.
+ * "git push" can be told with --recurse-submodules=check option to
+ refuse pushing of the supermodule, if any of its submodules'
+ commits hasn't been pushed out to their remotes.
+
* "git upload-pack" and "git receive-pack" learned to pretend only a
subset of the refs exist in a repository. This may help a site to
put many tiny repositories into one repository (this would not be
Unless otherwise noted, all the fixes in 1.7.6.X maintenance track are
included in this release.
- * "ls-files ../$path" that is run from a subdirectory reported errors
+ * "git branch --set-upstream @{-1} foo" did not expand @{-1} correctly.
+ (merge e9d4f74 mg/branch-set-upstream-previous later to 'maint').
+
+ * "git branch -m" and "git checkout -b" incorrectly allowed the tip
+ of the branch that is currently checked out updated.
+ (merge 55c4a67 ci/forbid-unwanted-current-branch-update later to 'maint').
+
+ * "git clone" failed to clone locally from a ".git" file that itself
+ is not a directory but is a pointer to one.
+ (merge 9b0ebc7 nd/maint-clone-gitdir later to 'maint').
+
+ * "git clone" from a local repository that borrows from another
+ object store using a relative path in its objects/info/alternates
+ file did not adjust the alternates in the resulting repository.
+ (merge e6baf4a1 jc/maint-clone-alternates later to 'maint').
+
+ * "git describe --dirty" did not refresh the index before checking the
+ state of the working tree files.
+ (cherry-pick bb57148 ac/describe-dirty-refresh later to 'maint').
+
+ * "git ls-files ../$path" that is run from a subdirectory reported errors
incorrectly when there is no such path that matches the given pathspec.
(merge 0f64bfa cb/maint-ls-files-error-report later to 'maint').
--
exec >/var/tmp/1
echo O=$(git describe master)
-O=v1.7.6-576-g6fcb384
+O=v1.7.6.1-415-g284daf2
git log --first-parent --oneline $O..master
echo
git shortlog --no-merges ^maint ^$O master
--abbrev=<length>::
Alter the sha1's minimum display length in the output listing.
- The default value is 7.
+ The default value is 7 and can be overridden by the `core.abbrev`
+ config option.
--no-abbrev::
Display the full sha1s in the output listing rather than abbreviating them.
-e <pattern>::
--exclude=<pattern>::
- Specify special exceptions to not be cleaned. Each <pattern> is
- the same form as in $GIT_DIR/info/excludes and this option can be
- given multiple times.
+ In addition to those found in .gitignore (per directory) and
+ $GIT_DIR/info/exclude, also consider these patterns to be in the
+ set of the ignore rules in effect.
-x::
- Don't use the ignore rules. This allows removing all untracked
+ Don't use the standard ignore rules read from .gitignore (per
+ directory) and $GIT_DIR/info/exclude, but do still use the ignore
+ rules given with `-e` options. This allows removing all untracked
files, including build products. This can be used (possibly in
conjunction with 'git reset') to create a pristine
working directory to test a clean build.
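A brief usage sketch (the pattern is made up) of how `-e` carves an exception
out of `-x`:

	$ git clean -n -d -x -e '*.key'   # dry run: everything untracked would go, except *.key files
	$ git clean -f -d -x -e '*.key'   # actually remove the files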
Listen on an alternative port. Incompatible with '--inetd' option.
--init-timeout=<n>::
- Timeout between the moment the connection is established and the
- client request is received (typically a rather low value, since
+ Timeout (in seconds) between the moment the connection is established
+ and the client request is received (typically a rather low value, since
that should be basically immediate).
--timeout=<n>::
- Timeout for specific client sub-requests. This includes the time
- it takes for the server to process the sub-request and the time spent
- waiting for the next client's request.
+ Timeout (in seconds) for specific client sub-requests. This includes
+ the time it takes for the server to process the sub-request and the
+ time spent waiting for the next client's request.
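For instance (the values are arbitrary), both timeouts can be given on one
invocation:

	$ git daemon --base-path=/srv/git --init-timeout=5 --timeout=60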
--max-connections=<n>::
Maximum number of concurrent clients, defaults to 32. Set it to
(``cm@example.com''). `LT` and `GT` are the literal less-than (\x3c)
and greater-than (\x3e) symbols. These are required to delimit
the email address from the other fields in the line. Note that
-`<name>` is free-form and may contain any sequence of bytes, except
-`LT` and `LF`. It is typically UTF-8 encoded.
+`<name>` and `<email>` are free-form and may contain any sequence
+of bytes, except `LT`, `GT` and `LF`. `<name>` is typically UTF-8 encoded.
The time of the change is specified by `<when>` using the date format
that was selected by the \--date-format=<fmt> command line option.
(see OPTIONS, above).
import-marks::
+import-marks-if-exists::
-	Like --import-marks except in two respects: first, only one
+	Like --import-marks except in three respects: first, only one
- "feature import-marks" command is allowed per stream;
- second, an --import-marks= command-line option overrides
- any "feature import-marks" command in the stream.
+ "feature import-marks" or "feature import-marks-if-exists"
+ command is allowed per stream; second, an --import-marks=
+ or --import-marks-if-exists command-line option overrides
+ any of these "feature" commands in the stream; third,
+	"feature import-marks-if-exists", like the corresponding
+	command-line option, silently skips a nonexistent file.
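A rough sketch of a stream preamble using this feature (the marks filename is
hypothetical); on the first run the file does not exist yet and is silently
skipped:

	feature import-marks-if-exists=marks.txt
	feature export-marks=marks.txt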
cat-blob::
ls::
--to=<email>::
Add a `To:` header to the email headers. This is in addition
to any configured headers, and may be used multiple times.
+ The negated form `--no-to` discards all `To:` headers added so
+ far (from config or command line).
--cc=<email>::
Add a `Cc:` header to the email headers. This is in addition
to any configured headers, and may be used multiple times.
+ The negated form `--no-cc` discards all `Cc:` headers added so
+ far (from config or command line).
--add-header=<header>::
Add an arbitrary header to the email headers. This is in addition
to any configured headers, and may be used multiple times.
- For example, `--add-header="Organization: git-foo"`
+ For example, `--add-header="Organization: git-foo"`.
+ The negated form `--no-add-header` discards *all* (`To:`,
+ `Cc:`, and custom) headers added so far from config or command
+ line.
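For example, to discard any recipients picked up from configuration and send
to a single list instead (the address is made up):

	$ git format-patch --no-to --to=list@example.com -1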
--cover-letter::
In addition to the patches, generate a cover letter file
-----------
Downloads a remote git repository via HTTP.
+*NOTE*: use of this command without -a is deprecated. The -a
+behaviour will become the default in a future release.
+
OPTIONS
-------
commit-id::
its size is not included.
[\--] <path>...::
- Show only commits that affect any of the specified paths. To
- prevent confusion with options and branch names, paths may need
- to be prefixed with "\-- " to separate them from options or
- refnames.
+ Show only commits that are enough to explain how the files
+ that match the specified paths came to be. See "History
+ Simplification" below for details and other simplification
+ modes.
++
+To prevent confusion with options and branch names, paths may need to
+be prefixed with "\-- " to separate them from options or refnames.
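For instance (the path is arbitrary), compare the simplified and the full
history of one directory:

	$ git log --oneline -- Documentation/
	$ git log --oneline --full-history -- Documentation/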
include::rev-list-options.txt[]
-C <object>::
--reuse-message=<object>::
- Take the note message from the given blob object (for
- example, another note).
+ Take the given blob object (for example, another note) as the
+ note message. (Use `git notes copy <object>` instead to
+ copy notes between objects.)
-c <object>::
--reedit-message=<object>::
$ git notes --ref=built add -C "$blob" HEAD
------------
+(You cannot simply use `git notes --ref=built add -F a.out HEAD`
+because that is not binary-safe.)
Of course, it doesn't make much sense to display non-text-format notes
with 'git log', so if you use such notes, you'll probably need to write
some special-purpose tools to do something useful with them.
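A minimal sketch of the binary-safe approach suggested above (assuming the
build produced a.out):

	$ blob=$(git hash-object -w a.out)
	$ git notes --ref=built add -C "$blob" HEAD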
is specified. This flag forces progress status even if the
standard error stream is not directed to a terminal.
+--recurse-submodules=check::
+ Check whether all submodule commits used by the revisions to be
+ pushed are available on a remote tracking branch. Otherwise the
+ push will be aborted and the command will exit with non-zero status.
+
+
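A usage sketch (the remote and branch names are arbitrary):

	$ git push --recurse-submodules=check origin master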
include::urls-remotes.txt[]
OUTPUT
affecting the working tree; and the 'rebase' command will be
able to update the working tree with the latest changes.
+--preserve-empty-dirs;;
+ Create a placeholder file in the local Git repository for each
+ empty directory fetched from Subversion. This includes directories
+ that become empty by removing all entries in the Subversion
+ repository (but not the directory itself). The placeholder files
+ are also tracked and removed when no longer necessary.
+
+--placeholder-filename=<filename>;;
+ Set the name of placeholder files created by --preserve-empty-dirs.
+ Default: ".gitignore"
+
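A hypothetical invocation (the URL and placeholder name are made up):

	$ git svn init http://svn.example.com/project
	$ git svn fetch --preserve-empty-dirs --placeholder-filename=.gitkeep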
'rebase'::
This fetches revisions from the SVN parent of the current HEAD
and rebases the current (uncommitted to SVN) work against it.
Add the given merge information during the dcommit
(e.g. `--mergeinfo="/branches/foo:1-10"`). All svn server versions can
store this information (as a property), and svn clients starting from
- version 1.5 can make use of it. 'git svn' currently does not use it
- and does not set it automatically.
+ version 1.5 can make use of it. To specify merge information from multiple
+ branches, use a single space character between the branches
+ (`--mergeinfo="/branches/foo:1-10 /branches/bar:3,5-6,8"`)
'branch'::
Create a branch in the SVN repository.
Object store associated with this repository. Usually
an object store is self sufficient (i.e. all the objects
that are referred to by an object found in it are also
- found in it), but there are couple of ways to violate
- it.
+ found in it), but there are a few ways to violate it.
+
-. You could populate the repository by running a commit walker
-without `-a` option. Depending on which options are given, you
-could have only commit objects without associated blobs and
-trees this way, for example. A repository with this kind of
-incomplete object store is not suitable to be published to the
-outside world but sometimes useful for private repository.
-. You also could have an incomplete but locally usable repository
-by cloning shallowly. See linkgit:git-clone[1].
-. You can be using `objects/info/alternates` mechanism, or
-`$GIT_ALTERNATE_OBJECT_DIRECTORIES` mechanism to 'borrow'
+. You could have an incomplete but locally usable repository
+by creating a shallow clone. See linkgit:git-clone[1].
+. You could be using the `objects/info/alternates` or
+`$GIT_ALTERNATE_OBJECT_DIRECTORIES` mechanisms to 'borrow'
objects from other object stores. A repository with this kind
of incomplete object store is not suitable to be published for
use with dumb transports but otherwise is OK as long as
-`objects/info/alternates` points at the right object stores
-it borrows from.
+`objects/info/alternates` points at the object stores it
+borrows from.
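As an illustration (paths are made up), a repository created with `clone -s`
borrows objects through this file:

	$ git clone -s /srv/git/project.git borrowed
	$ cat borrowed/.git/objects/info/alternates
	/srv/git/project.git/objects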
objects/[0-9a-f][0-9a-f]::
- Traditionally, each object is stored in its own file.
- They are split into 256 subdirectories using the first
- two letters from its object name to keep the number of
- directory entries `objects` directory itself needs to
- hold. Objects found here are often called 'unpacked'
- (or 'loose') objects.
+ A newly created object is stored in its own file.
+ The objects are splayed over 256 subdirectories using
+ the first two characters of the sha1 object name to
+ keep the number of directory entries in `objects`
+ itself to a manageable number. Objects found
+ here are often called 'unpacked' (or 'loose') objects.
objects/pack::
	Packs (files that store many objects in compressed form,
refs::
References are stored in subdirectories of this
- directory. The 'git prune' command knows to keep
+ directory. The 'git prune' command knows to preserve
objects reachable from refs found in this directory and
its subdirectories.
+
HEAD can also record a specific commit directly, instead of
being a symref to point at the current branch. Such a state
-is often called 'detached HEAD', and almost all commands work
-identically as normal. See linkgit:git-checkout[1] for
-details.
+is often called 'detached HEAD.' See linkgit:git-checkout[1]
+for details.
branches::
A slightly deprecated way to store shorthands to be used
- to specify URL to 'git fetch', 'git pull' and 'git push'
- commands is to store a file in `branches/<name>` and
- give 'name' to these commands in place of 'repository'
- argument.
+ to specify a URL to 'git fetch', 'git pull' and 'git push'.
+ A file can be stored as `branches/<name>` and then
+ 'name' can be given to these commands in place of
+ 'repository' argument. See the REMOTES section in
+ linkgit:git-fetch[1] for details. This mechanism is legacy
+ and not likely to be found in modern repositories.
hooks::
Hooks are customization scripts used by various git
at it. See also: linkgit:gitignore[5].
remotes::
- Stores shorthands to be used to give URL and default
- refnames to interact with remote repository to
- 'git fetch', 'git pull' and 'git push' commands.
+ Stores shorthands for URL and default refnames for use
+ when interacting with remote repositories via 'git fetch',
+ 'git pull' and 'git push' commands. See the REMOTES section
+ in linkgit:git-fetch[1] for details. This mechanism is legacy
+ and not likely to be found in modern repositories.
logs::
Records of changes made to refs are stored in this
. Can sort an unsorted list using `sort_string_list`.
+. Can remove individual items of an unsorted list using
+ `unsorted_string_list_delete_item`.
+
. Finally it should free the list using `string_list_clear`.
Example:
The above two functions need to look through all items, as opposed to their
counterpart for sorted lists, which performs a binary search.
+`unsorted_string_list_delete_item`::
+
+	Remove an item from a string_list. The `string` pointer of the item
+	will be freed if the `strdup_strings` member of the string_list
+	is set. The third parameter controls whether the `util` pointer of
+	the item should be freed or not.
+
Data structures
---------------
#!/bin/sh
GVF=GIT-VERSION-FILE
-DEF_VER=v1.7.6.GIT
+DEF_VER=v1.7.7-rc0
LF='
'
mandir = share/man
infodir = share/info
gitexecdir = libexec/git-core
+mergetoolsdir = $(gitexecdir)/mergetools
sharedir = $(prefix)/share
gitwebdir = $(sharedir)/gitweb
template_dir = share/git-core/templates
LIB_H += compat/bswap.h
LIB_H += compat/cygwin.h
LIB_H += compat/mingw.h
+LIB_H += compat/obstack.h
LIB_H += compat/win32/pthread.h
LIB_H += compat/win32/syslog.h
LIB_H += compat/win32/sys/poll.h
LIB_H += grep.h
LIB_H += hash.h
LIB_H += help.h
+LIB_H += kwset.h
LIB_H += levenshtein.h
LIB_H += list-objects.h
LIB_H += ll-merge.h
LIB_OBJS += color.o
LIB_OBJS += combine-diff.o
LIB_OBJS += commit.o
+LIB_OBJS += compat/obstack.o
LIB_OBJS += config.o
LIB_OBJS += connect.o
LIB_OBJS += convert.o
LIB_OBJS += help.o
LIB_OBJS += hex.o
LIB_OBJS += ident.o
+LIB_OBJS += kwset.o
LIB_OBJS += levenshtein.o
LIB_OBJS += list-objects.o
LIB_OBJS += ll-merge.o
LIB_OBJS += pack-write.o
LIB_OBJS += pager.o
LIB_OBJS += parse-options.o
+LIB_OBJS += parse-options-cb.o
LIB_OBJS += patch-delta.o
LIB_OBJS += patch-ids.o
LIB_OBJS += path.o
test-line-buffer$X: vcs-svn/lib.a
-test-parse-options$X: parse-options.o
+test-parse-options$X: parse-options.o parse-options-cb.o
test-string-pool$X: vcs-svn/lib.a
gitexec_instdir_SQ = $(subst ','\'',$(gitexec_instdir))
export gitexec_instdir
+ifneq ($(filter /%,$(firstword $(mergetoolsdir))),)
+mergetools_instdir = $(mergetoolsdir)
+else
+mergetools_instdir = $(prefix)/$(mergetoolsdir)
+endif
+mergetools_instdir_SQ = $(subst ','\'',$(mergetools_instdir))
+
install_bindir_programs := $(patsubst %,%$X,$(BINDIR_PROGRAMS_NEED_X)) $(BINDIR_PROGRAMS_NO_X)
install: all
$(INSTALL) -m 644 $(SCRIPT_LIB) '$(DESTDIR_SQ)$(gitexec_instdir_SQ)'
$(INSTALL) $(install_bindir_programs) '$(DESTDIR_SQ)$(bindir_SQ)'
$(MAKE) -C templates DESTDIR='$(DESTDIR_SQ)' install
+ $(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(mergetools_instdir_SQ)'
+ (cd mergetools && $(TAR) cf - .) | \
+ (cd '$(DESTDIR_SQ)$(mergetools_instdir_SQ)' && umask 022 && $(TAR) xof -)
ifndef NO_PERL
$(MAKE) -C perl prefix='$(prefix_SQ)' DESTDIR='$(DESTDIR_SQ)' install
$(MAKE) -C gitweb install
}
return buf;
}
+
+/*
+ * Unlike prefix_path, this should be used if the named file does
+ * not have to interact with an index entry; i.e. the name of a
+ * random file on the filesystem.
+ */
+const char *prefix_filename(const char *pfx, int pfx_len, const char *arg)
+{
+ static char path[PATH_MAX];
+#ifndef WIN32
+ if (!pfx_len || is_absolute_path(arg))
+ return arg;
+ memcpy(path, pfx, pfx_len);
+ strcpy(path + pfx_len, arg);
+#else
+ char *p;
+ /* don't add prefix to absolute paths, but still replace '\' by '/' */
+ if (is_absolute_path(arg))
+ pfx_len = 0;
+ else if (pfx_len)
+ memcpy(path, pfx, pfx_len);
+ strcpy(path + pfx_len, arg);
+ for (p = path + pfx_len; *p; p++)
+ if (*p == '\\')
+ *p = '/';
+#endif
+ return path;
+}
+/*
+ * Handle git attributes. See gitattributes(5) for a description of
+ * the file syntax, and Documentation/technical/api-gitattributes.txt
+ * for a description of the API.
+ *
+ * One basic design decision here is that we are not going to support
+ * an insanely large number of attributes.
+ */
+
#define NO_THE_INDEX_COMPATIBILITY_MACROS
#include "cache.h"
#include "exec_cmd.h"
static const char *attributes_file;
-/*
- * The basic design decision here is that we are not going to have
- * insanely large number of attributes.
- *
- * This is a randomly chosen prime.
- */
+/* This is a randomly chosen prime. */
#define HASHSIZE 257
#ifndef DEBUG_ATTR
return git_attr_internal(name, strlen(name));
}
-/*
- * .gitattributes file is one line per record, each of which is
- *
- * (1) glob pattern.
- * (2) whitespace
- * (3) whitespace separated list of attribute names, each of which
- * could be prefixed with '-' to mean "set to false", '!' to mean
- * "unset".
- */
-
/* What does a matched pattern decide? */
struct attr_state {
struct git_attr *attr;
const char *setto;
};
+/*
+ * One rule, as from a .gitattributes file.
+ *
+ * If is_macro is true, then u.attr is a pointer to the git_attr being
+ * defined.
+ *
+ * If is_macro is false, then u.pattern points at the filename pattern
+ * to which the rule applies. (The memory pointed to is part of the
+ * memory block allocated for the match_attr instance.)
+ *
+ * In either case, num_attr is the number of attributes affected by
+ * this rule, and state is an array listing them. The attributes are
+ * listed as they appear in the file (macros unexpanded).
+ */
struct match_attr {
union {
char *pattern;
static const char blank[] = " \t\r\n";
+/*
+ * Parse a whitespace-delimited attribute state (i.e., "attr",
+ * "-attr", "!attr", or "attr=value") from the string starting at src.
+ * If e is not NULL, write the results to *e. Return a pointer to the
+ * remainder of the string (with leading whitespace removed), or NULL
+ * if there was an error.
+ */
static const char *parse_attr(const char *src, int lineno, const char *cp,
- int *num_attr, struct match_attr *res)
+ struct attr_state *e)
{
const char *ep, *equals;
int len;
len = equals - cp;
else
len = ep - cp;
- if (!res) {
+ if (!e) {
if (*cp == '-' || *cp == '!') {
cp++;
len--;
return NULL;
}
} else {
- struct attr_state *e;
-
- e = &(res->state[*num_attr]);
if (*cp == '-' || *cp == '!') {
e->setto = (*cp == '-') ? ATTR__FALSE : ATTR__UNSET;
cp++;
}
e->attr = git_attr_internal(cp, len);
}
- (*num_attr)++;
return ep + strspn(ep, blank);
}
int lineno, int macro_ok)
{
int namelen;
- int num_attr;
- const char *cp, *name;
+ int num_attr, i;
+ const char *cp, *name, *states;
struct match_attr *res = NULL;
- int pass;
int is_macro;
cp = line + strspn(line, blank);
else
is_macro = 0;
- for (pass = 0; pass < 2; pass++) {
- /* pass 0 counts and allocates, pass 1 fills */
- num_attr = 0;
- cp = name + namelen;
- cp = cp + strspn(cp, blank);
- while (*cp) {
- cp = parse_attr(src, lineno, cp, &num_attr, res);
- if (!cp)
- return NULL;
- }
- if (pass)
- break;
- res = xcalloc(1,
- sizeof(*res) +
- sizeof(struct attr_state) * num_attr +
- (is_macro ? 0 : namelen + 1));
- if (is_macro)
- res->u.attr = git_attr_internal(name, namelen);
- else {
- res->u.pattern = (char *)&(res->state[num_attr]);
- memcpy(res->u.pattern, name, namelen);
- res->u.pattern[namelen] = 0;
- }
- res->is_macro = is_macro;
- res->num_attr = num_attr;
+ states = name + namelen;
+ states += strspn(states, blank);
+
+ /* First pass to count the attr_states */
+ for (cp = states, num_attr = 0; *cp; num_attr++) {
+ cp = parse_attr(src, lineno, cp, NULL);
+ if (!cp)
+ return NULL;
+ }
+
+ res = xcalloc(1,
+ sizeof(*res) +
+ sizeof(struct attr_state) * num_attr +
+ (is_macro ? 0 : namelen + 1));
+ if (is_macro)
+ res->u.attr = git_attr_internal(name, namelen);
+ else {
+ res->u.pattern = (char *)&(res->state[num_attr]);
+ memcpy(res->u.pattern, name, namelen);
+ res->u.pattern[namelen] = 0;
}
+ res->is_macro = is_macro;
+ res->num_attr = num_attr;
+
+ /* Second pass to fill the attr_states */
+ for (cp = states, i = 0; *cp; i++) {
+ cp = parse_attr(src, lineno, cp, &(res->state[i]));
+ }
+
return res;
}
return 0;
}
+int validate_new_branchname(const char *name, struct strbuf *ref, int force)
+{
+ const char *head;
+ unsigned char sha1[20];
+
+ if (strbuf_check_branch_ref(ref, name))
+ die("'%s' is not a valid branch name.", name);
+
+ if (!ref_exists(ref->buf))
+ return 0;
+ else if (!force)
+ die("A branch named '%s' already exists.", ref->buf + strlen("refs/heads/"));
+
+ head = resolve_ref("HEAD", sha1, 0, NULL);
+ if (!is_bare_repository() && head && !strcmp(head, ref->buf))
+ die("Cannot force update the current branch.");
+
+ return 1;
+}
+
void create_branch(const char *head,
const char *name, const char *start_name,
int force, int reflog, enum branch_track track)
if (track == BRANCH_TRACK_EXPLICIT || track == BRANCH_TRACK_OVERRIDE)
explicit_tracking = 1;
- if (strbuf_check_branch_ref(&ref, name))
- die("'%s' is not a valid branch name.", name);
-
- if (resolve_ref(ref.buf, sha1, 1, NULL)) {
- if (!force && track == BRANCH_TRACK_OVERRIDE)
+ if (validate_new_branchname(name, &ref, force || track == BRANCH_TRACK_OVERRIDE)) {
+ if (!force)
dont_change_ref = 1;
- else if (!force)
- die("A branch named '%s' already exists.", name);
- else if (!is_bare_repository() && head && !strcmp(head, name))
- die("Cannot force update the current branch.");
- forcing = 1;
+ else
+ forcing = 1;
}
real_ref = NULL;
start_name);
if (real_ref && track)
- setup_tracking(name, real_ref, track);
+ setup_tracking(ref.buf+11, real_ref, track);
if (!dont_change_ref)
if (write_ref_sha1(lock, sha1, msg) < 0)
void create_branch(const char *head, const char *name, const char *start_name,
int force, int reflog, enum branch_track track);
+/*
+ * Validates that the requested branch may be created, returning the
+ * interpreted ref in ref; force indicates whether (non-head) branches
+ * may be overwritten. A non-zero return value indicates that the force
+ * parameter was non-zero and the branch already exists.
+ */
+int validate_new_branchname(const char *name, struct strbuf *ref, int force);
+
/*
* Remove information about the state of working on the current
* branch. (E.g., MERGE_HEAD)
static int git_branch_config(const char *var, const char *value, void *cb)
{
if (!strcmp(var, "color.branch")) {
- branch_use_color = git_config_colorbool(var, value, -1);
+ branch_use_color = git_config_colorbool(var, value);
return 0;
}
if (!prefixcmp(var, "color.branch.")) {
static const char *branch_get_color(enum color_branch ix)
{
- if (branch_use_color > 0)
+ if (want_color(branch_use_color))
return branch_colors[ix];
return "";
}
die(_("Invalid branch name: '%s'"), oldname);
}
- if (strbuf_check_branch_ref(&newref, newname))
- die(_("Invalid branch name: '%s'"), newname);
-
- if (resolve_ref(newref.buf, sha1, 1, NULL) && !force)
- die(_("A branch named '%s' already exists."), newref.buf + 11);
+ validate_new_branchname(newname, &newref, force);
strbuf_addf(&logmsg, "Branch: renamed %s to %s",
oldref.buf, newref.buf);
int cmd_branch(int argc, const char **argv, const char *prefix)
{
int delete = 0, rename = 0, force_create = 0;
- int verbose = 0, abbrev = DEFAULT_ABBREV, detached = 0;
+ int verbose = 0, abbrev = -1, detached = 0;
int reflog = 0;
enum branch_track track;
int kinds = REF_LOCAL_BRANCH;
git_config(git_branch_config, NULL);
- if (branch_use_color == -1)
- branch_use_color = git_use_color_default;
-
track = git_branch_track;
head = resolve_ref("HEAD", head_sha1, 0, NULL);
if (!!delete + !!rename + !!force_create > 1)
usage_with_options(builtin_branch_usage, options);
+ if (abbrev == -1)
+ abbrev = DEFAULT_ABBREV;
+
if (delete)
return delete_branches(argc, argv, delete > 1, kinds);
else if (argc == 0)
if (opts.new_branch) {
struct strbuf buf = STRBUF_INIT;
- if (strbuf_check_branch_ref(&buf, opts.new_branch))
- die(_("git checkout: we do not like '%s' as a branch name."),
- opts.new_branch);
- if (ref_exists(buf.buf)) {
- opts.branch_exists = 1;
- if (!opts.new_branch_force)
- die(_("git checkout: branch %s already exists"),
- opts.new_branch);
- }
+
+ opts.branch_exists = validate_new_branchname(opts.new_branch, &buf, !!opts.new_branch_force);
+
strbuf_release(&buf);
}
OPT_BOOLEAN('d', NULL, &remove_directories,
"remove whole directories"),
{ OPTION_CALLBACK, 'e', "exclude", &exclude_list, "pattern",
- "exclude <pattern>", PARSE_OPT_NONEG, exclude_cb },
+ "add <pattern> to ignore rules", PARSE_OPT_NONEG, exclude_cb },
OPT_BOOLEAN('x', NULL, &ignored, "remove ignored files, too"),
OPT_BOOLEAN('X', NULL, &ignored_only,
"remove only ignored files"),
setup_standard_excludes(&dir);
for (i = 0; i < exclude_list.nr; i++)
- add_exclude(exclude_list.items[i].string, "", 0, dir.exclude_list);
+ add_exclude(exclude_list.items[i].string, "", 0,
+ &dir.exclude_list[EXC_CMDL]);
pathspec = get_pathspec(prefix, argv);
static int option_no_checkout, option_bare, option_mirror;
static int option_local, option_no_hardlinks, option_shared, option_recursive;
-static char *option_template, *option_reference, *option_depth;
+static char *option_template, *option_depth;
static char *option_origin = NULL;
static char *option_branch = NULL;
static const char *real_git_dir;
static int option_verbosity;
static int option_progress;
static struct string_list option_config;
+static struct string_list option_reference;
+
+static int opt_parse_reference(const struct option *opt, const char *arg, int unset)
+{
+ struct string_list *option_reference = opt->value;
+ if (!arg)
+ return -1;
+ string_list_append(option_reference, arg);
+ return 0;
+}
static struct option builtin_clone_options[] = {
OPT__VERBOSITY(&option_verbosity),
"initialize submodules in the clone"),
OPT_STRING(0, "template", &option_template, "template-directory",
"directory from which templates will be used"),
- OPT_STRING(0, "reference", &option_reference, "repo",
- "reference repository"),
+ OPT_CALLBACK(0 , "reference", &option_reference, "repo",
+ "reference repository", &opt_parse_reference),
OPT_STRING('o', "origin", &option_origin, "branch",
"use <branch> instead of 'origin' to track upstream"),
OPT_STRING('b', "branch", &option_branch, "branch",
for (i = 0; i < ARRAY_SIZE(suffix); i++) {
const char *path;
path = mkpath("%s%s", repo, suffix[i]);
- if (is_directory(path)) {
+ if (stat(path, &st))
+ continue;
+ if (S_ISDIR(st.st_mode)) {
*is_bundle = 0;
return xstrdup(absolute_path(path));
+ } else if (S_ISREG(st.st_mode) && st.st_size > 8) {
+ /* Is it a "gitfile"? */
+ char signature[8];
+ int len, fd = open(path, O_RDONLY);
+ if (fd < 0)
+ continue;
+ len = read_in_full(fd, signature, 8);
+ close(fd);
+ if (len != 8 || strncmp(signature, "gitdir: ", 8))
+ continue;
+ path = read_gitfile(path);
+ if (path) {
+ *is_bundle = 0;
+ return xstrdup(absolute_path(path));
+ }
}
}
*end = '\0';
}
-static void setup_reference(const char *repo)
+static int add_one_reference(struct string_list_item *item, void *cb_data)
{
- const char *ref_git;
- char *ref_git_copy;
-
+ char *ref_git;
+ struct strbuf alternate = STRBUF_INIT;
struct remote *remote;
struct transport *transport;
const struct ref *extra;
- ref_git = real_path(option_reference);
-
- if (is_directory(mkpath("%s/.git/objects", ref_git)))
- ref_git = mkpath("%s/.git", ref_git);
- else if (!is_directory(mkpath("%s/objects", ref_git)))
+ /* Beware: real_path() and mkpath() return static buffer */
+ ref_git = xstrdup(real_path(item->string));
+ if (is_directory(mkpath("%s/.git/objects", ref_git))) {
+ char *ref_git_git = xstrdup(mkpath("%s/.git", ref_git));
+ free(ref_git);
+ ref_git = ref_git_git;
+ } else if (!is_directory(mkpath("%s/objects", ref_git)))
die(_("reference repository '%s' is not a local directory."),
- option_reference);
+ item->string);
- ref_git_copy = xstrdup(ref_git);
+ strbuf_addf(&alternate, "%s/objects", ref_git);
+ add_to_alternates_file(alternate.buf);
+ strbuf_release(&alternate);
- add_to_alternates_file(ref_git_copy);
-
- remote = remote_get(ref_git_copy);
- transport = transport_get(remote, ref_git_copy);
+ remote = remote_get(ref_git);
+ transport = transport_get(remote, ref_git);
for (extra = transport_get_remote_refs(transport); extra;
extra = extra->next)
add_extra_ref(extra->name, extra->old_sha1, 0);
transport_disconnect(transport);
+ free(ref_git);
+ return 0;
+}
- free(ref_git_copy);
+static void setup_reference(void)
+{
+ for_each_string_list(&option_reference, add_one_reference, NULL);
}
-static void copy_or_link_directory(struct strbuf *src, struct strbuf *dest)
+static void copy_alternates(struct strbuf *src, struct strbuf *dst,
+ const char *src_repo)
+{
+ /*
+ * Read from the source objects/info/alternates file
+ * and copy the entries to corresponding file in the
+ * destination repository with add_to_alternates_file().
+ * Both src and dst have "$path/objects/info/alternates".
+ *
+	 * Instead of copying bit-for-bit from the original, we need
+	 * to append to the existing file so that an entry already
+	 * created via "clone -s" is not lost, and also to turn entries
+	 * with paths relative to the original into absolute ones, so
+	 * that they can be used in the new repository.
+ */
+ FILE *in = fopen(src->buf, "r");
+ struct strbuf line = STRBUF_INIT;
+
+ while (strbuf_getline(&line, in, '\n') != EOF) {
+ char *abs_path, abs_buf[PATH_MAX];
+ if (!line.len || line.buf[0] == '#')
+ continue;
+ if (is_absolute_path(line.buf)) {
+ add_to_alternates_file(line.buf);
+ continue;
+ }
+ abs_path = mkpath("%s/objects/%s", src_repo, line.buf);
+ normalize_path_copy(abs_buf, abs_path);
+ add_to_alternates_file(abs_buf);
+ }
+ strbuf_release(&line);
+ fclose(in);
+}
+
+static void copy_or_link_directory(struct strbuf *src, struct strbuf *dest,
+ const char *src_repo, int src_baselen)
{
struct dirent *de;
struct stat buf;
}
if (S_ISDIR(buf.st_mode)) {
if (de->d_name[0] != '.')
- copy_or_link_directory(src, dest);
+ copy_or_link_directory(src, dest,
+ src_repo, src_baselen);
+ continue;
+ }
+
+ /* Files that cannot be copied bit-for-bit... */
+ if (!strcmp(src->buf + src_baselen, "/info/alternates")) {
+ copy_alternates(src, dest, src_repo);
continue;
}
const char *dest_repo)
{
const struct ref *ret;
- struct strbuf src = STRBUF_INIT;
- struct strbuf dest = STRBUF_INIT;
struct remote *remote;
struct transport *transport;
- if (option_shared)
- add_to_alternates_file(src_repo);
- else {
+ if (option_shared) {
+ struct strbuf alt = STRBUF_INIT;
+ strbuf_addf(&alt, "%s/objects", src_repo);
+ add_to_alternates_file(alt.buf);
+ strbuf_release(&alt);
+ } else {
+ struct strbuf src = STRBUF_INIT;
+ struct strbuf dest = STRBUF_INIT;
strbuf_addf(&src, "%s/objects", src_repo);
strbuf_addf(&dest, "%s/objects", dest_repo);
- copy_or_link_directory(&src, &dest);
+ copy_or_link_directory(&src, &dest, src_repo, src.len);
strbuf_release(&src);
strbuf_release(&dest);
}
git_config_set(key.buf, repo);
strbuf_reset(&key);
- if (option_reference)
- setup_reference(git_dir);
+ if (option_reference.nr)
+ setup_reference();
fetch_pattern = value.buf;
refspec = parse_fetch_refspec(1, &fetch_pattern);
"\n"
"Otherwise, please use 'git reset'\n");
-static unsigned char head_sha1[20];
-
static const char *use_message_buffer;
static const char commit_editmsg[] = "COMMIT_EDITMSG";
static struct lock_file index_lock; /* real index */
static char *cleanup_arg;
static enum commit_whence whence;
-static int use_editor = 1, initial_commit, include_status = 1;
+static int use_editor = 1, include_status = 1;
static int show_ignored_in_status;
static const char *only_include_assumed;
static struct strbuf message;
}
}
-static void create_base_index(void)
+static void create_base_index(const struct commit *current_head)
{
struct tree *tree;
struct unpack_trees_options opts;
struct tree_desc t;
- if (initial_commit) {
+ if (!current_head) {
discard_cache();
return;
}
opts.dst_index = &the_index;
opts.fn = oneway_merge;
- tree = parse_tree_indirect(head_sha1);
+ tree = parse_tree_indirect(current_head->object.sha1);
if (!tree)
die(_("failed to unpack HEAD tree object"));
parse_tree(tree);
die_resolve_conflict("commit");
}
-static char *prepare_index(int argc, const char **argv, const char *prefix, int is_status)
+static char *prepare_index(int argc, const char **argv, const char *prefix,
+ const struct commit *current_head, int is_status)
{
int fd;
struct string_list partial;
memset(&partial, 0, sizeof(partial));
partial.strdup_strings = 1;
- if (list_paths(&partial, initial_commit ? NULL : "HEAD", prefix, pathspec))
+ if (list_paths(&partial, !current_head ? NULL : "HEAD", prefix, pathspec))
exit(1);
discard_cache();
(uintmax_t) getpid()),
LOCK_DIE_ON_ERROR);
- create_base_index();
+ create_base_index(current_head);
add_remove_files(&partial);
refresh_cache(REFRESH_QUIET);
return s->commitable;
}
-static int is_a_merge(const unsigned char *sha1)
+static int is_a_merge(const struct commit *current_head)
{
- struct commit *commit = lookup_commit(sha1);
- if (!commit || parse_commit(commit))
- die(_("could not parse HEAD commit"));
- return !!(commit->parents && commit->parents->next);
+ return !!(current_head->parents && current_head->parents->next);
}
static const char sign_off_header[] = "Signed-off-by: ";
}
static int prepare_to_commit(const char *index_file, const char *prefix,
+ struct commit *current_head,
struct wt_status *s,
struct strbuf *author_ident)
{
* empty due to conflict resolution, which the user should okay.
*/
if (!commitable && whence != FROM_MERGE && !allow_empty &&
- !(amend && is_a_merge(head_sha1))) {
+ !(amend && is_a_merge(current_head))) {
run_status(stdout, index_file, prefix, 0, s);
if (amend)
fputs(_(empty_amend_advice), stderr);
static int parse_and_validate_options(int argc, const char *argv[],
const char * const usage[],
const char *prefix,
+ struct commit *current_head,
struct wt_status *s)
{
int f = 0;
if (!use_editor)
setenv("GIT_EDITOR", ":", 1);
- if (get_sha1("HEAD", head_sha1))
- initial_commit = 1;
-
/* Sanity check options */
- if (amend && initial_commit)
+ if (amend && !current_head)
die(_("You have nothing to amend."));
if (amend && whence != FROM_COMMIT)
die(_("You are in the middle of a %s -- cannot amend."), whence_s());
}
static int dry_run_commit(int argc, const char **argv, const char *prefix,
- struct wt_status *s)
+ const struct commit *current_head, struct wt_status *s)
{
int commitable;
const char *index_file;
- index_file = prepare_index(argc, argv, prefix, 1);
+ index_file = prepare_index(argc, argv, prefix, current_head, 1);
commitable = run_status(stdout, index_file, prefix, 0, s);
rollback_index_files();
return 0;
}
if (!strcmp(k, "status.color") || !strcmp(k, "color.status")) {
- s->use_color = git_config_colorbool(k, v, -1);
+ s->use_color = git_config_colorbool(k, v);
return 0;
}
if (!prefixcmp(k, "status.color.") || !prefixcmp(k, "color.status.")) {
if (s.relative_paths)
s.prefix = prefix;
- if (s.use_color == -1)
- s.use_color = git_use_color_default;
- if (diff_use_color_default == -1)
- diff_use_color_default = git_use_color_default;
switch (status_format) {
case STATUS_FORMAT_SHORT:
return 0;
}
-static void print_summary(const char *prefix, const unsigned char *sha1)
+static void print_summary(const char *prefix, const unsigned char *sha1,
+ int initial_commit)
{
struct rev_info rev;
struct commit *commit;
struct strbuf author_ident = STRBUF_INIT;
const char *index_file, *reflog_msg;
char *nl, *p;
- unsigned char commit_sha1[20];
+ unsigned char sha1[20];
struct ref_lock *ref_lock;
struct commit_list *parents = NULL, **pptr = &parents;
struct stat statbuf;
int allow_fast_forward = 1;
struct wt_status s;
+ struct commit *current_head = NULL;
if (argc == 2 && !strcmp(argv[1], "-h"))
usage_with_options(builtin_commit_usage, builtin_commit_options);
git_config(git_commit_config, &s);
determine_whence(&s);
- if (s.use_color == -1)
- s.use_color = git_use_color_default;
- argc = parse_and_validate_options(argc, argv, builtin_commit_usage,
- prefix, &s);
- if (dry_run) {
- if (diff_use_color_default == -1)
- diff_use_color_default = git_use_color_default;
- return dry_run_commit(argc, argv, prefix, &s);
+ if (get_sha1("HEAD", sha1))
+ current_head = NULL;
+ else {
+ current_head = lookup_commit(sha1);
+ if (!current_head || parse_commit(current_head))
+ die(_("could not parse HEAD commit"));
}
- index_file = prepare_index(argc, argv, prefix, 0);
+ argc = parse_and_validate_options(argc, argv, builtin_commit_usage,
+ prefix, current_head, &s);
+ if (dry_run)
+ return dry_run_commit(argc, argv, prefix, current_head, &s);
+ index_file = prepare_index(argc, argv, prefix, current_head, 0);
/* Set up everything for writing the commit object. This includes
running hooks, writing the trees, and interacting with the user. */
- if (!prepare_to_commit(index_file, prefix, &s, &author_ident)) {
+ if (!prepare_to_commit(index_file, prefix,
+ current_head, &s, &author_ident)) {
rollback_index_files();
return 1;
}
/* Determine parents */
reflog_msg = getenv("GIT_REFLOG_ACTION");
- if (initial_commit) {
+ if (!current_head) {
if (!reflog_msg)
reflog_msg = "commit (initial)";
} else if (amend) {
struct commit_list *c;
- struct commit *commit;
if (!reflog_msg)
reflog_msg = "commit (amend)";
- commit = lookup_commit(head_sha1);
- if (!commit || parse_commit(commit))
- die(_("could not parse HEAD commit"));
-
- for (c = commit->parents; c; c = c->next)
+ for (c = current_head->parents; c; c = c->next)
pptr = &commit_list_insert(c->item, pptr)->next;
} else if (whence == FROM_MERGE) {
struct strbuf m = STRBUF_INIT;
if (!reflog_msg)
reflog_msg = "commit (merge)";
- pptr = &commit_list_insert(lookup_commit(head_sha1), pptr)->next;
+ pptr = &commit_list_insert(current_head, pptr)->next;
fp = fopen(git_path("MERGE_HEAD"), "r");
if (fp == NULL)
die_errno(_("could not open '%s' for reading"),
reflog_msg = (whence == FROM_CHERRY_PICK)
? "commit (cherry-pick)"
: "commit";
- pptr = &commit_list_insert(lookup_commit(head_sha1), pptr)->next;
+ pptr = &commit_list_insert(current_head, pptr)->next;
}
/* Finally, get the commit message */
exit(1);
}
- if (commit_tree(sb.buf, active_cache_tree->sha1, parents, commit_sha1,
+ if (commit_tree(sb.buf, active_cache_tree->sha1, parents, sha1,
author_ident.buf)) {
rollback_index_files();
die(_("failed to write commit object"));
strbuf_release(&author_ident);
ref_lock = lock_any_ref_for_update("HEAD",
- initial_commit ? NULL : head_sha1,
+ !current_head
+ ? NULL
+ : current_head->object.sha1,
0);
nl = strchr(sb.buf, '\n');
rollback_index_files();
die(_("cannot lock HEAD ref"));
}
- if (write_ref_sha1(ref_lock, commit_sha1, sb.buf) < 0) {
+ if (write_ref_sha1(ref_lock, sha1, sb.buf) < 0) {
rollback_index_files();
die(_("cannot update HEAD ref"));
}
struct notes_rewrite_cfg *cfg;
cfg = init_copy_notes_for_rewrite("amend");
if (cfg) {
- copy_note_for_rewrite(cfg, head_sha1, commit_sha1);
+ /* we are amending, so current_head is not NULL */
+ copy_note_for_rewrite(cfg, current_head->object.sha1, sha1);
finish_copy_notes_for_rewrite(cfg);
}
- run_rewrite_hook(head_sha1, commit_sha1);
+ run_rewrite_hook(current_head->object.sha1, sha1);
}
if (!quiet)
- print_summary(prefix, commit_sha1);
+ print_summary(prefix, sha1, !current_head);
return 0;
}
fputs(parsed_color, stdout);
}
-static int stdout_is_tty;
static int get_colorbool_found;
static int get_diff_color_found;
+static int get_color_ui_found;
static int git_get_colorbool_config(const char *var, const char *value,
void *cb)
{
- if (!strcmp(var, get_colorbool_slot)) {
- get_colorbool_found =
- git_config_colorbool(var, value, stdout_is_tty);
- }
- if (!strcmp(var, "diff.color")) {
- get_diff_color_found =
- git_config_colorbool(var, value, stdout_is_tty);
- }
- if (!strcmp(var, "color.ui")) {
- git_use_color_default = git_config_colorbool(var, value, stdout_is_tty);
- return 0;
- }
+ if (!strcmp(var, get_colorbool_slot))
+ get_colorbool_found = git_config_colorbool(var, value);
+ else if (!strcmp(var, "diff.color"))
+ get_diff_color_found = git_config_colorbool(var, value);
+ else if (!strcmp(var, "color.ui"))
+ get_color_ui_found = git_config_colorbool(var, value);
return 0;
}
if (!strcmp(get_colorbool_slot, "color.diff"))
get_colorbool_found = get_diff_color_found;
if (get_colorbool_found < 0)
- get_colorbool_found = git_use_color_default;
+ get_colorbool_found = get_color_ui_found;
}
+ get_colorbool_found = want_color(get_colorbool_found);
+
if (print) {
printf("%s\n", get_colorbool_found ? "true" : "false");
return 0;
}
else if (actions == ACTION_GET_COLORBOOL) {
if (argc == 1)
- stdout_is_tty = git_config_bool("command line", argv[0]);
- else if (argc == 0)
- stdout_is_tty = isatty(1);
+ color_stdout_is_tty = git_config_bool("command line", argv[0]);
return get_colorbool(argc != 0);
}
die(_("No names found, cannot describe anything."));
if (argc == 0) {
- if (dirty && !cmd_diff_index(ARRAY_SIZE(diff_index_args) - 1, diff_index_args, prefix))
- dirty = NULL;
+ if (dirty) {
+ static struct lock_file index_lock;
+ int fd;
+
+ read_cache_preload(NULL);
+ refresh_index(&the_index, REFRESH_QUIET|REFRESH_UNMERGED,
+ NULL, NULL, NULL);
+ fd = hold_locked_index(&index_lock, 0);
+ if (0 <= fd)
+ update_index_if_able(&the_index, &index_lock);
+
+ if (!cmd_diff_index(ARRAY_SIZE(diff_index_args) - 1,
+ diff_index_args, prefix))
+ dirty = NULL;
+ }
describe("HEAD", 1);
} else if (dirty) {
die(_("--dirty is incompatible with committishes"));
gitmodules_config();
git_config(git_diff_ui_config, NULL);
- if (diff_use_color_default == -1)
- diff_use_color_default = git_use_color_default;
-
init_revisions(&rev, prefix);
/* If this is a no-index diff, just run it and exit there. */
}
}
+struct write_shallow_data {
+ struct strbuf *out;
+ int use_pack_protocol;
+ int count;
+};
+
+static int write_one_shallow(const struct commit_graft *graft, void *cb_data)
+{
+ struct write_shallow_data *data = cb_data;
+ const char *hex = sha1_to_hex(graft->sha1);
+ data->count++;
+ if (data->use_pack_protocol)
+ packet_buf_write(data->out, "shallow %s", hex);
+ else {
+ strbuf_addstr(data->out, hex);
+ strbuf_addch(data->out, '\n');
+ }
+ return 0;
+}
+
+static int write_shallow_commits(struct strbuf *out, int use_pack_protocol)
+{
+ struct write_shallow_data data;
+ data.out = out;
+ data.use_pack_protocol = use_pack_protocol;
+ data.count = 0;
+ for_each_commit_graft(write_one_shallow, &data);
+ return data.count;
+}
+
static enum ack_type get_ack(int fd, unsigned char *result_sha1)
{
static char line[1000];
}
if (!strcmp(var, "color.grep"))
- opt->color = git_config_colorbool(var, value, -1);
+ opt->color = git_config_colorbool(var, value);
else if (!strcmp(var, "color.grep.context"))
color = opt->color_context;
else if (!strcmp(var, "color.grep.filename"))
strcpy(opt.color_sep, GIT_COLOR_CYAN);
opt.color = -1;
git_config(grep_config, &opt);
- if (opt.color == -1)
- opt.color = git_use_color_default;
/*
* If there is no -- then the paths must exist in the working
const char *src;
if (S_ISREG(st.st_mode))
- src = read_gitfile_gently(git_link);
+ src = read_gitfile(git_link);
else if (S_ISDIR(st.st_mode))
src = git_link;
else
git_config(git_log_config, NULL);
- if (diff_use_color_default == -1)
- diff_use_color_default = git_use_color_default;
-
init_revisions(&rev, prefix);
rev.diff = 1;
rev.simplify_history = 0;
git_config(git_log_config, NULL);
- if (diff_use_color_default == -1)
- diff_use_color_default = git_use_color_default;
-
init_pathspec(&match_all, NULL);
init_revisions(&rev, prefix);
rev.diff = 1;
git_config(git_log_config, NULL);
- if (diff_use_color_default == -1)
- diff_use_color_default = git_use_color_default;
-
init_revisions(&rev, prefix);
init_reflog_walk(&rev.reflog_info);
rev.verbose_header = 1;
git_config(git_log_config, NULL);
- if (diff_use_color_default == -1)
- diff_use_color_default = git_use_color_default;
-
init_revisions(&rev, prefix);
rev.always_show_header = 1;
memset(&opt, 0, sizeof(opt));
opts.output_format |=
DIFF_FORMAT_SUMMARY | DIFF_FORMAT_DIFFSTAT;
opts.detect_rename = DIFF_DETECT_RENAME;
- if (diff_use_color_default > 0)
- DIFF_OPT_SET(&opts, COLOR_DIFF);
if (diff_setup_done(&opts) < 0)
die(_("diff_setup_done failed"));
diff_tree_sha1(head, new_head, "", &opts);
strbuf_addch(&merge_msg, '\n');
run_prepare_commit_msg();
commit_tree(merge_msg.buf, result_tree, parents, result_commit, NULL);
- strbuf_addf(&buf, "Merge made by %s.", wt_strategy);
+ strbuf_addf(&buf, "Merge made by the '%s' strategy.", wt_strategy);
finish(result_commit, buf.buf);
strbuf_release(&buf);
drop_save();
git_config(git_merge_config, NULL);
- /* for color.ui */
- if (diff_use_color_default == -1)
- diff_use_color_default = git_use_color_default;
-
if (branch_mergeoptions)
parse_branch_merge_options(branch_mergeoptions);
argc = parse_options(argc, argv, prefix, builtin_merge_options,
#include "remote.h"
#include "transport.h"
#include "parse-options.h"
+#include "submodule.h"
static const char * const push_usage[] = {
"git push [<options>] [<repository> [<refspec>...]]",
return !!errs;
}
+static int option_parse_recurse_submodules(const struct option *opt,
+ const char *arg, int unset)
+{
+ int *flags = opt->value;
+ if (arg) {
+ if (!strcmp(arg, "check"))
+ *flags |= TRANSPORT_RECURSE_SUBMODULES_CHECK;
+ else
+ die("bad %s argument: %s", opt->long_name, arg);
+ } else
+ die("option %s needs an argument (check)", opt->long_name);
+
+ return 0;
+}
+
int cmd_push(int argc, const char **argv, const char *prefix)
{
int flags = 0;
OPT_BIT('n' , "dry-run", &flags, "dry run", TRANSPORT_PUSH_DRY_RUN),
OPT_BIT( 0, "porcelain", &flags, "machine-readable output", TRANSPORT_PUSH_PORCELAIN),
OPT_BIT('f', "force", &flags, "force updates", TRANSPORT_PUSH_FORCE),
+ { OPTION_CALLBACK, 0, "recurse-submodules", &flags, "check",
+ "controls recursive pushing of submodules",
+ PARSE_OPT_OPTARG, option_parse_recurse_submodules },
OPT_BOOLEAN( 0 , "thin", &thin, "use thin pack"),
OPT_STRING( 0 , "receive-pack", &receivepack, "receive-pack", "receive pack program"),
OPT_STRING( 0 , "exec", &receivepack, "receive-pack", "receive pack program"),
static struct tree *empty_tree(void)
{
- struct tree *tree = xcalloc(1, sizeof(struct tree));
-
- tree->object.parsed = 1;
- tree->object.type = OBJ_TREE;
- pretend_sha1_file(NULL, 0, OBJ_TREE, tree->object.sha1);
- return tree;
+ return lookup_tree((const unsigned char *)EMPTY_TREE_SHA1_BIN);
}
static NORETURN void die_dirty_index(const char *me)
static const char *get_color_code(int idx)
{
- if (showbranch_use_color)
+ if (want_color(showbranch_use_color))
return column_colors_ansi[idx % column_colors_ansi_max];
return "";
}
static const char *get_color_reset_code(void)
{
- if (showbranch_use_color)
+ if (want_color(showbranch_use_color))
return GIT_COLOR_RESET;
return "";
}
}
if (!strcmp(var, "color.showbranch")) {
- showbranch_use_color = git_config_colorbool(var, value, -1);
+ showbranch_use_color = git_config_colorbool(var, value);
return 0;
}
git_config(git_show_branch_config, NULL);
- if (showbranch_use_color == -1)
- showbranch_use_color = git_use_color_default;
-
/* If nothing is specified, try the default first */
if (ac == 1 && default_num) {
ac = default_num;
extern const char *get_git_namespace(void);
extern const char *strip_namespace(const char *namespaced_ref);
extern const char *get_git_work_tree(void);
-extern const char *read_gitfile_gently(const char *path);
+extern const char *read_gitfile(const char *path);
extern void set_git_work_tree(const char *tree);
#define ALTERNATE_DB_ENVIRONMENT "GIT_ALTERNATE_OBJECT_DIRECTORIES"
#include "cache.h"
#include "color.h"
-int git_use_color_default = 0;
+static int git_use_color_default = 0;
+int color_stdout_is_tty = -1;
/*
* The list of available column colors.
die("bad color value '%.*s' for variable '%s'", value_len, value, var);
}
-int git_config_colorbool(const char *var, const char *value, int stdout_is_tty)
+int git_config_colorbool(const char *var, const char *value)
{
if (value) {
if (!strcasecmp(value, "never"))
if (!strcasecmp(value, "always"))
return 1;
if (!strcasecmp(value, "auto"))
- goto auto_color;
+ return GIT_COLOR_AUTO;
}
if (!var)
return 0;
/* any normal truth value defaults to 'auto' */
- auto_color:
- if (stdout_is_tty < 0)
- stdout_is_tty = isatty(1);
- if (stdout_is_tty || (pager_in_use() && pager_use_color)) {
+ return GIT_COLOR_AUTO;
+}
+
+static int check_auto_color(void)
+{
+ if (color_stdout_is_tty < 0)
+ color_stdout_is_tty = isatty(1);
+ if (color_stdout_is_tty || (pager_in_use() && pager_use_color)) {
char *term = getenv("TERM");
if (term && strcmp(term, "dumb"))
return 1;
return 0;
}
-int git_color_default_config(const char *var, const char *value, void *cb)
+int want_color(int var)
+{
+ static int want_auto = -1;
+
+ if (var < 0)
+ var = git_use_color_default;
+
+ if (var == GIT_COLOR_AUTO) {
+ if (want_auto < 0)
+ want_auto = check_auto_color();
+ return want_auto;
+ }
+ return var;
+}
+
+int git_color_config(const char *var, const char *value, void *cb)
{
if (!strcmp(var, "color.ui")) {
- git_use_color_default = git_config_colorbool(var, value, -1);
+ git_use_color_default = git_config_colorbool(var, value);
return 0;
}
+ return 0;
+}
+
+int git_color_default_config(const char *var, const char *value, void *cb)
+{
+ if (git_color_config(var, value, cb) < 0)
+ return -1;
+
return git_default_config(var, value, cb);
}
#define GIT_COLOR_NIL "NIL"
/*
- * This variable stores the value of color.ui
+ * The first three are chosen to match common usage in the code, and what is
+ * returned from git_config_colorbool. The "auto" value can be returned from
+ * config_colorbool, and will be converted by want_color() into either 0 or 1.
*/
-extern int git_use_color_default;
+#define GIT_COLOR_UNKNOWN -1
+#define GIT_COLOR_NEVER 0
+#define GIT_COLOR_ALWAYS 1
+#define GIT_COLOR_AUTO 2
/* A default list of colors to use for commit graphs and show-branch output */
extern const char *column_colors_ansi[];
extern const int column_colors_ansi_max;
/*
- * Use this instead of git_default_config if you need the value of color.ui.
+ * Generally the color code will lazily figure this out itself, but
+ * this provides a mechanism for callers to override autodetection.
*/
+extern int color_stdout_is_tty;
+
+/*
+ * Use the first one if you need only color config; the second is a convenience
+ * if you are just going to fall through to git_default_config anyway.
+ */
+int git_color_config(const char *var, const char *value, void *cb);
int git_color_default_config(const char *var, const char *value, void *cb);
-int git_config_colorbool(const char *var, const char *value, int stdout_is_tty);
+int git_config_colorbool(const char *var, const char *value);
+int want_color(int var);
void color_parse(const char *value, const char *var, char *dst);
void color_parse_mem(const char *value, int len, const char *var, char *dst);
__attribute__((format (printf, 3, 4)))
int abbrev = DIFF_OPT_TST(opt, FULL_INDEX) ? 40 : DEFAULT_ABBREV;
const char *a_prefix = opt->a_prefix ? opt->a_prefix : "a/";
const char *b_prefix = opt->b_prefix ? opt->b_prefix : "b/";
- int use_color = DIFF_OPT_TST(opt, COLOR_DIFF);
- const char *c_meta = diff_get_color(use_color, DIFF_METAINFO);
- const char *c_reset = diff_get_color(use_color, DIFF_RESET);
+ const char *c_meta = diff_get_color_opt(opt, DIFF_METAINFO);
+ const char *c_reset = diff_get_color_opt(opt, DIFF_RESET);
const char *abb;
int added = 0;
int deleted = 0;
show_combined_header(elem, num_parent, dense, rev,
mode_differs, 1);
dump_sline(sline, cnt, num_parent,
- DIFF_OPT_TST(opt, COLOR_DIFF), result_deleted);
+ opt->use_color, result_deleted);
}
free(result);
show_patch_diff(p, num_parent, dense, 1, rev);
}
+static void free_combined_pair(struct diff_filepair *pair)
+{
+ free(pair->two);
+ free(pair);
+}
+
+/*
+ * A combine_diff_path expresses N parents on the LHS against 1 merge
+ * result. Synthesize a diff_filepair that has N entries on the "one"
+ * side and 1 entry on the "two" side.
+ *
+ * In the future, we might want to add more data to combine_diff_path
+ * so that we can fill fields we are ignoring (most notably, size) here,
+ * but currently nobody uses it, so this should suffice for now.
+ */
+static struct diff_filepair *combined_pair(struct combine_diff_path *p,
+ int num_parent)
+{
+ int i;
+ struct diff_filepair *pair;
+ struct diff_filespec *pool;
+
+ pair = xmalloc(sizeof(*pair));
+ pool = xcalloc(num_parent + 1, sizeof(struct diff_filespec));
+ pair->one = pool + 1;
+ pair->two = pool;
+
+ for (i = 0; i < num_parent; i++) {
+ pair->one[i].path = p->path;
+ pair->one[i].mode = p->parent[i].mode;
+ hashcpy(pair->one[i].sha1, p->parent[i].sha1);
+ pair->one[i].sha1_valid = !is_null_sha1(p->parent[i].sha1);
+ pair->one[i].has_more_entries = 1;
+ }
+ pair->one[num_parent - 1].has_more_entries = 0;
+
+ pair->two->path = p->path;
+ pair->two->mode = p->mode;
+ hashcpy(pair->two->sha1, p->sha1);
+ pair->two->sha1_valid = !is_null_sha1(p->sha1);
+ return pair;
+}
+
+static void handle_combined_callback(struct diff_options *opt,
+ struct combine_diff_path *paths,
+ int num_parent,
+ int num_paths)
+{
+ struct combine_diff_path *p;
+ struct diff_queue_struct q;
+ int i;
+
+ q.queue = xcalloc(num_paths, sizeof(struct diff_filepair *));
+ q.alloc = num_paths;
+ q.nr = num_paths;
+ for (i = 0, p = paths; p; p = p->next) {
+ if (!p->len)
+ continue;
+ q.queue[i++] = combined_pair(p, num_parent);
+ }
+ opt->format_callback(&q, opt, opt->format_callback_data);
+ for (i = 0; i < num_paths; i++)
+ free_combined_pair(q.queue[i]);
+ free(q.queue);
+}
+
void diff_tree_combined(const unsigned char *sha1,
const unsigned char parent[][20],
int num_parent,
else if (opt->output_format &
(DIFF_FORMAT_NUMSTAT|DIFF_FORMAT_DIFFSTAT))
needsep = 1;
+ else if (opt->output_format & DIFF_FORMAT_CALLBACK)
+ handle_combined_callback(opt, paths, num_parent, num_paths);
+
if (opt->output_format & DIFF_FORMAT_PATCH) {
if (needsep)
putchar(opt->line_termination);
return commit_graft[pos];
}
-int write_shallow_commits(struct strbuf *out, int use_pack_protocol)
-{
- int i, count = 0;
- for (i = 0; i < commit_graft_nr; i++)
- if (commit_graft[i]->nr_parent < 0) {
- const char *hex =
- sha1_to_hex(commit_graft[i]->sha1);
- count++;
- if (use_pack_protocol)
- packet_buf_write(out, "shallow %s", hex);
- else {
- strbuf_addstr(out, hex);
- strbuf_addch(out, '\n');
- }
- }
- return count;
+int for_each_commit_graft(each_commit_graft_fn fn, void *cb_data)
+{
+ int i, ret;
+ for (i = ret = 0; i < commit_graft_nr && !ret; i++)
+ ret = fn(commit_graft[i], cb_data);
+ return ret;
}
int unregister_shallow(const unsigned char *sha1)
int nr_parent; /* < 0 if shallow commit */
unsigned char parent[FLEX_ARRAY][20]; /* more */
};
+typedef int (*each_commit_graft_fn)(const struct commit_graft *, void *);
struct commit_graft *read_graft_line(char *buf, int len);
int register_commit_graft(struct commit_graft *, int);
extern int register_shallow(const unsigned char *sha1);
extern int unregister_shallow(const unsigned char *sha1);
-extern int write_shallow_commits(struct strbuf *out, int use_pack_protocol);
+extern int for_each_commit_graft(each_commit_graft_fn, void *);
extern int is_repository_shallow(void);
extern struct commit_list *get_shallow_commits(struct object_array *heads,
int depth, int shallow_flag, int not_shallow_flag);
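
As a hedged sketch (not part of this patch) of how a caller converts from the
removed write_shallow_commits() to the new iterator, a callback emitting the
old one-per-line format could look like this; write_one_shallow and the strbuf
"sb" are illustrative names only:

	static int write_one_shallow(const struct commit_graft *graft, void *cb_data)
	{
		struct strbuf *out = cb_data;

		if (graft->nr_parent >= 0)
			return 0;	/* not a shallow boundary, keep going */
		strbuf_addstr(out, sha1_to_hex(graft->sha1));
		strbuf_addch(out, '\n');
		return 0;
	}

	/* ... */
	for_each_commit_graft(write_one_shallow, &sb);

Returning non-zero from the callback stops the iteration early, per the loop in
for_each_commit_graft() above.
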
--- /dev/null
+/* obstack.c - subroutines used implicitly by object stack macros
+ Copyright (C) 1988, 1989, 1990, 1991, 1992, 1993, 1994, 1996, 1997, 1998,
+ 1999, 2000, 2001, 2002, 2003, 2004, 2005 Free Software Foundation, Inc.
+ This file is part of the GNU C Library.
+
+ The GNU C Library is free software; you can redistribute it and/or
+ modify it under the terms of the GNU Lesser General Public
+ License as published by the Free Software Foundation; either
+ version 2.1 of the License, or (at your option) any later version.
+
+ The GNU C Library is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ Lesser General Public License for more details.
+
+ You should have received a copy of the GNU Lesser General Public
+ License along with the GNU C Library; if not, write to the Free
+ Software Foundation, Inc., 51 Franklin Street, Fifth Floor,
+ Boston, MA 02110-1301, USA. */
+
+#include "git-compat-util.h"
+#include <gettext.h>
+#include "obstack.h"
+
+/* NOTE BEFORE MODIFYING THIS FILE: This version number must be
+ incremented whenever callers compiled using an old obstack.h can no
+ longer properly call the functions in this obstack.c. */
+#define OBSTACK_INTERFACE_VERSION 1
+
+/* Comment out all this code if we are using the GNU C Library, and are not
+ actually compiling the library itself, and the installed library
+ supports the same library interface we do. This code is part of the GNU
+ C Library, but also included in many other GNU distributions. Compiling
+ and linking in this code is a waste when using the GNU C library
+ (especially if it is a shared library). Rather than having every GNU
+ program understand `configure --with-gnu-libc' and omit the object
+ files, it is simpler to just do this in the source for each such file. */
+
+#include <stdio.h> /* Random thing to get __GNU_LIBRARY__. */
+#if !defined _LIBC && defined __GNU_LIBRARY__ && __GNU_LIBRARY__ > 1
+# include <gnu-versions.h>
+# if _GNU_OBSTACK_INTERFACE_VERSION == OBSTACK_INTERFACE_VERSION
+# define ELIDE_CODE
+# endif
+#endif
+
+#include <stddef.h>
+
+#ifndef ELIDE_CODE
+
+
+# if HAVE_INTTYPES_H
+# include <inttypes.h>
+# endif
+# if HAVE_STDINT_H || defined _LIBC
+# include <stdint.h>
+# endif
+
+/* Determine default alignment. */
+union fooround
+{
+ uintmax_t i;
+ long double d;
+ void *p;
+};
+struct fooalign
+{
+ char c;
+ union fooround u;
+};
+/* If malloc were really smart, it would round addresses to DEFAULT_ALIGNMENT.
+ But in fact it might be less smart and round addresses to as much as
+ DEFAULT_ROUNDING. So we prepare for it to do that. */
+enum
+ {
+ DEFAULT_ALIGNMENT = offsetof (struct fooalign, u),
+ DEFAULT_ROUNDING = sizeof (union fooround)
+ };
+
+/* When we copy a long block of data, this is the unit to do it with.
+ On some machines, copying successive ints does not work;
+ in such a case, redefine COPYING_UNIT to `long' (if that works)
+ or `char' as a last resort. */
+# ifndef COPYING_UNIT
+# define COPYING_UNIT int
+# endif
+
+
+/* The functions allocating more room by calling `obstack_chunk_alloc'
+ jump to the handler pointed to by `obstack_alloc_failed_handler'.
+ This can be set to a user defined function which should either
+ abort gracefully or use longjump - but shouldn't return. This
+ variable by default points to the internal function
+ `print_and_abort'. */
+static void print_and_abort (void);
+void (*obstack_alloc_failed_handler) (void) = print_and_abort;
+
+# ifdef _LIBC
+# if SHLIB_COMPAT (libc, GLIBC_2_0, GLIBC_2_3_4)
+/* A looong time ago (before 1994, anyway; we're not sure) this global variable
+ was used by non-GNU-C macros to avoid multiple evaluation. The GNU C
+ library still exports it because somebody might use it. */
+struct obstack *_obstack_compat;
+compat_symbol (libc, _obstack_compat, _obstack, GLIBC_2_0);
+# endif
+# endif
+
+/* Define a macro that either calls functions with the traditional malloc/free
+ calling interface, or calls functions with the mmalloc/mfree interface
+ (that adds an extra first argument), based on the state of use_extra_arg.
+ For free, do not use ?:, since some compilers, like the MIPS compilers,
+ do not allow (expr) ? void : void. */
+
+# define CALL_CHUNKFUN(h, size) \
+ (((h) -> use_extra_arg) \
+ ? (*(h)->chunkfun) ((h)->extra_arg, (size)) \
+ : (*(struct _obstack_chunk *(*) (long)) (h)->chunkfun) ((size)))
+
+# define CALL_FREEFUN(h, old_chunk) \
+ do { \
+ if ((h) -> use_extra_arg) \
+ (*(h)->freefun) ((h)->extra_arg, (old_chunk)); \
+ else \
+ (*(void (*) (void *)) (h)->freefun) ((old_chunk)); \
+ } while (0)
+
+\f
+/* Initialize an obstack H for use. Specify chunk size SIZE (0 means default).
+ Objects start on multiples of ALIGNMENT (0 means use default).
+ CHUNKFUN is the function to use to allocate chunks,
+ and FREEFUN the function to free them.
+
+ Return nonzero if successful, calls obstack_alloc_failed_handler if
+ allocation fails. */
+
+int
+_obstack_begin (struct obstack *h,
+ int size, int alignment,
+ void *(*chunkfun) (long),
+ void (*freefun) (void *))
+{
+ register struct _obstack_chunk *chunk; /* points to new chunk */
+
+ if (alignment == 0)
+ alignment = DEFAULT_ALIGNMENT;
+ if (size == 0)
+ /* Default size is what GNU malloc can fit in a 4096-byte block. */
+ {
+ /* 12 is sizeof (mhead) and 4 is EXTRA from GNU malloc.
+ Use the values for range checking, because if range checking is off,
+ the extra bytes won't be missed terribly, but if range checking is on
+ and we used a larger request, a whole extra 4096 bytes would be
+ allocated.
+
+ These number are irrelevant to the new GNU malloc. I suspect it is
+ less sensitive to the size of the request. */
+ int extra = ((((12 + DEFAULT_ROUNDING - 1) & ~(DEFAULT_ROUNDING - 1))
+ + 4 + DEFAULT_ROUNDING - 1)
+ & ~(DEFAULT_ROUNDING - 1));
+ size = 4096 - extra;
+ }
+
+ h->chunkfun = (struct _obstack_chunk * (*)(void *, long)) chunkfun;
+ h->freefun = (void (*) (void *, struct _obstack_chunk *)) freefun;
+ h->chunk_size = size;
+ h->alignment_mask = alignment - 1;
+ h->use_extra_arg = 0;
+
+ chunk = h->chunk = CALL_CHUNKFUN (h, h -> chunk_size);
+ if (!chunk)
+ (*obstack_alloc_failed_handler) ();
+ h->next_free = h->object_base = __PTR_ALIGN ((char *) chunk, chunk->contents,
+ alignment - 1);
+ h->chunk_limit = chunk->limit
+ = (char *) chunk + h->chunk_size;
+ chunk->prev = 0;
+ /* The initial chunk now contains no empty object. */
+ h->maybe_empty_object = 0;
+ h->alloc_failed = 0;
+ return 1;
+}
+
+int
+_obstack_begin_1 (struct obstack *h, int size, int alignment,
+ void *(*chunkfun) (void *, long),
+ void (*freefun) (void *, void *),
+ void *arg)
+{
+ register struct _obstack_chunk *chunk; /* points to new chunk */
+
+ if (alignment == 0)
+ alignment = DEFAULT_ALIGNMENT;
+ if (size == 0)
+ /* Default size is what GNU malloc can fit in a 4096-byte block. */
+ {
+ /* 12 is sizeof (mhead) and 4 is EXTRA from GNU malloc.
+ Use the values for range checking, because if range checking is off,
+ the extra bytes won't be missed terribly, but if range checking is on
+ and we used a larger request, a whole extra 4096 bytes would be
+ allocated.
+
+ These number are irrelevant to the new GNU malloc. I suspect it is
+ less sensitive to the size of the request. */
+ int extra = ((((12 + DEFAULT_ROUNDING - 1) & ~(DEFAULT_ROUNDING - 1))
+ + 4 + DEFAULT_ROUNDING - 1)
+ & ~(DEFAULT_ROUNDING - 1));
+ size = 4096 - extra;
+ }
+
+ h->chunkfun = (struct _obstack_chunk * (*)(void *,long)) chunkfun;
+ h->freefun = (void (*) (void *, struct _obstack_chunk *)) freefun;
+ h->chunk_size = size;
+ h->alignment_mask = alignment - 1;
+ h->extra_arg = arg;
+ h->use_extra_arg = 1;
+
+ chunk = h->chunk = CALL_CHUNKFUN (h, h -> chunk_size);
+ if (!chunk)
+ (*obstack_alloc_failed_handler) ();
+ h->next_free = h->object_base = __PTR_ALIGN ((char *) chunk, chunk->contents,
+ alignment - 1);
+ h->chunk_limit = chunk->limit
+ = (char *) chunk + h->chunk_size;
+ chunk->prev = 0;
+ /* The initial chunk now contains no empty object. */
+ h->maybe_empty_object = 0;
+ h->alloc_failed = 0;
+ return 1;
+}
+
+/* Allocate a new current chunk for the obstack *H
+ on the assumption that LENGTH bytes need to be added
+ to the current object, or a new object of length LENGTH allocated.
+ Copies any partial object from the end of the old chunk
+ to the beginning of the new one. */
+
+void
+_obstack_newchunk (struct obstack *h, int length)
+{
+ register struct _obstack_chunk *old_chunk = h->chunk;
+ register struct _obstack_chunk *new_chunk;
+ register long new_size;
+ register long obj_size = h->next_free - h->object_base;
+ register long i;
+ long already;
+ char *object_base;
+
+ /* Compute size for new chunk. */
+ new_size = (obj_size + length) + (obj_size >> 3) + h->alignment_mask + 100;
+ if (new_size < h->chunk_size)
+ new_size = h->chunk_size;
+
+ /* Allocate and initialize the new chunk. */
+ new_chunk = CALL_CHUNKFUN (h, new_size);
+ if (!new_chunk)
+ (*obstack_alloc_failed_handler) ();
+ h->chunk = new_chunk;
+ new_chunk->prev = old_chunk;
+ new_chunk->limit = h->chunk_limit = (char *) new_chunk + new_size;
+
+ /* Compute an aligned object_base in the new chunk */
+ object_base =
+ __PTR_ALIGN ((char *) new_chunk, new_chunk->contents, h->alignment_mask);
+
+ /* Move the existing object to the new chunk.
+ Word at a time is fast and is safe if the object
+ is sufficiently aligned. */
+ if (h->alignment_mask + 1 >= DEFAULT_ALIGNMENT)
+ {
+ for (i = obj_size / sizeof (COPYING_UNIT) - 1;
+ i >= 0; i--)
+ ((COPYING_UNIT *)object_base)[i]
+ = ((COPYING_UNIT *)h->object_base)[i];
+ /* We used to copy the odd few remaining bytes as one extra COPYING_UNIT,
+ but that can cross a page boundary on a machine
+ which does not do strict alignment for COPYING_UNITS. */
+ already = obj_size / sizeof (COPYING_UNIT) * sizeof (COPYING_UNIT);
+ }
+ else
+ already = 0;
+ /* Copy remaining bytes one by one. */
+ for (i = already; i < obj_size; i++)
+ object_base[i] = h->object_base[i];
+
+ /* If the object just copied was the only data in OLD_CHUNK,
+ free that chunk and remove it from the chain.
+ But not if that chunk might contain an empty object. */
+ if (! h->maybe_empty_object
+ && (h->object_base
+ == __PTR_ALIGN ((char *) old_chunk, old_chunk->contents,
+ h->alignment_mask)))
+ {
+ new_chunk->prev = old_chunk->prev;
+ CALL_FREEFUN (h, old_chunk);
+ }
+
+ h->object_base = object_base;
+ h->next_free = h->object_base + obj_size;
+ /* The new chunk certainly contains no empty object yet. */
+ h->maybe_empty_object = 0;
+}
+# ifdef _LIBC
+libc_hidden_def (_obstack_newchunk)
+# endif
+
+/* Return nonzero if object OBJ has been allocated from obstack H.
+ This is here for debugging.
+ If you use it in a program, you are probably losing. */
+
+/* Suppress -Wmissing-prototypes warning. We don't want to declare this in
+ obstack.h because it is just for debugging. */
+int _obstack_allocated_p (struct obstack *h, void *obj);
+
+int
+_obstack_allocated_p (struct obstack *h, void *obj)
+{
+ register struct _obstack_chunk *lp; /* below addr of any objects in this chunk */
+ register struct _obstack_chunk *plp; /* point to previous chunk if any */
+
+ lp = (h)->chunk;
+ /* We use >= rather than > since the object cannot be exactly at
+ the beginning of the chunk but might be an empty object exactly
+ at the end of an adjacent chunk. */
+ while (lp != 0 && ((void *) lp >= obj || (void *) (lp)->limit < obj))
+ {
+ plp = lp->prev;
+ lp = plp;
+ }
+ return lp != 0;
+}
+\f
+/* Free objects in obstack H, including OBJ and everything allocated
+ more recently than OBJ. If OBJ is zero, free everything in H. */
+
+# undef obstack_free
+
+void
+obstack_free (struct obstack *h, void *obj)
+{
+ register struct _obstack_chunk *lp; /* below addr of any objects in this chunk */
+ register struct _obstack_chunk *plp; /* point to previous chunk if any */
+
+ lp = h->chunk;
+ /* We use >= because there cannot be an object at the beginning of a chunk.
+ But there can be an empty object at that address
+ at the end of another chunk. */
+ while (lp != 0 && ((void *) lp >= obj || (void *) (lp)->limit < obj))
+ {
+ plp = lp->prev;
+ CALL_FREEFUN (h, lp);
+ lp = plp;
+ /* If we switch chunks, we can't tell whether the new current
+ chunk contains an empty object, so assume that it may. */
+ h->maybe_empty_object = 1;
+ }
+ if (lp)
+ {
+ h->object_base = h->next_free = (char *) (obj);
+ h->chunk_limit = lp->limit;
+ h->chunk = lp;
+ }
+ else if (obj != 0)
+ /* obj is not in any of the chunks! */
+ abort ();
+}
+
+# ifdef _LIBC
+/* Older versions of libc used a function _obstack_free intended to be
+ called by non-GCC compilers. */
+strong_alias (obstack_free, _obstack_free)
+# endif
+\f
+int
+_obstack_memory_used (struct obstack *h)
+{
+ register struct _obstack_chunk* lp;
+ register int nbytes = 0;
+
+ for (lp = h->chunk; lp != 0; lp = lp->prev)
+ {
+ nbytes += lp->limit - (char *) lp;
+ }
+ return nbytes;
+}
+\f
+# ifdef _LIBC
+# include <libio/iolibio.h>
+# endif
+
+# ifndef __attribute__
+/* This feature is available in gcc versions 2.5 and later. */
+# if __GNUC__ < 2 || (__GNUC__ == 2 && __GNUC_MINOR__ < 5)
+# define __attribute__(Spec) /* empty */
+# endif
+# endif
+
+static void
+__attribute__ ((noreturn))
+print_and_abort (void)
+{
+ /* Don't change any of these strings. Yes, it would be possible to add
+ the newline to the string and use fputs or so. But this must not
+ happen because the "memory exhausted" message appears in other places
+ like this and the translation should be reused instead of creating
+ a very similar string which requires a separate translation. */
+# ifdef _LIBC
+ (void) __fxprintf (NULL, "%s\n", _("memory exhausted"));
+# else
+ fprintf (stderr, "%s\n", _("memory exhausted"));
+# endif
+ exit (1);
+}
+
+#endif /* !ELIDE_CODE */
--- /dev/null
+/* obstack.h - object stack macros
+ Copyright (C) 1988-1994,1996-1999,2003,2004,2005,2009
+ Free Software Foundation, Inc.
+ This file is part of the GNU C Library.
+
+ The GNU C Library is free software; you can redistribute it and/or
+ modify it under the terms of the GNU Lesser General Public
+ License as published by the Free Software Foundation; either
+ version 2.1 of the License, or (at your option) any later version.
+
+ The GNU C Library is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ Lesser General Public License for more details.
+
+ You should have received a copy of the GNU Lesser General Public
+ License along with the GNU C Library; if not, write to the Free
+ Software Foundation, Inc., 51 Franklin Street, Fifth Floor,
+ Boston, MA 02110-1301, USA. */
+
+/* Summary:
+
+All the apparent functions defined here are macros. The idea
+is that you would use these pre-tested macros to solve a
+very specific set of problems, and they would run fast.
+Caution: no side-effects in arguments please!! They may be
+evaluated MANY times!!
+
+These macros operate a stack of objects. Each object starts life
+small, and may grow to maturity. (Consider building a word syllable
+by syllable.) An object can move while it is growing. Once it has
+been "finished" it never changes address again. So the "top of the
+stack" is typically an immature growing object, while the rest of the
+stack is of mature, fixed size and fixed address objects.
+
+These routines grab large chunks of memory, using a function you
+supply, called `obstack_chunk_alloc'. On occasion, they free chunks,
+by calling `obstack_chunk_free'. You must define them and declare
+them before using any obstack macros.
+
+Each independent stack is represented by a `struct obstack'.
+Each of the obstack macros expects a pointer to such a structure
+as the first argument.
+
+One motivation for this package is the problem of growing char strings
+in symbol tables. Unless you are "fascist pig with a read-only mind"
+--Gosper's immortal quote from HAKMEM item 154, out of context--you
+would not like to put any arbitrary upper limit on the length of your
+symbols.
+
+In practice this often means you will build many short symbols and a
+few long symbols. At the time you are reading a symbol you don't know
+how long it is. One traditional method is to read a symbol into a
+buffer, realloc()ating the buffer every time you try to read a symbol
+that is longer than the buffer. This is beaut, but you still will
+want to copy the symbol from the buffer to a more permanent
+symbol-table entry say about half the time.
+
+With obstacks, you can work differently. Use one obstack for all symbol
+names. As you read a symbol, grow the name in the obstack gradually.
+When the name is complete, finalize it. Then, if the symbol exists already,
+free the newly read name.
+
+The way we do this is to take a large chunk, allocating memory from
+low addresses. When you want to build a symbol in the chunk you just
+add chars above the current "high water mark" in the chunk. When you
+have finished adding chars, because you got to the end of the symbol,
+you know how long the chars are, and you can create a new object.
+Mostly the chars will not burst over the highest address of the chunk,
+because you would typically expect a chunk to be (say) 100 times as
+long as an average object.
+
+In case that isn't clear, when we have enough chars to make up
+the object, THEY ARE ALREADY CONTIGUOUS IN THE CHUNK (guaranteed)
+so we just point to it where it lies. No moving of chars is
+needed and this is the second win: potentially long strings need
+never be explicitly shuffled. Once an object is formed, it does not
+change its address during its lifetime.
+
+When the chars burst over a chunk boundary, we allocate a larger
+chunk, and then copy the partly formed object from the end of the old
+chunk to the beginning of the new larger chunk. We then carry on
+accreting characters to the end of the object as we normally would.
+
+A special macro is provided to add a single char at a time to a
+growing object. This allows the use of register variables, which
+break the ordinary 'growth' macro.
+
+Summary:
+ We allocate large chunks.
+ We carve out one object at a time from the current chunk.
+ Once carved, an object never moves.
+ We are free to append data of any size to the currently
+ growing object.
+ Exactly one object is growing in an obstack at any one time.
+ You can run one obstack per control block.
+ You may have as many control blocks as you dare.
+ Because of the way we do it, you can `unwind' an obstack
+ back to a previous state. (You may remove objects much
+ as you would with a stack.)
+*/
+
+
+/* Don't do the contents of this file more than once. */
+
+#ifndef _OBSTACK_H
+#define _OBSTACK_H 1
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+\f
+/* We need the type of a pointer subtraction. If __PTRDIFF_TYPE__ is
+ defined, as with GNU C, use that; that way we don't pollute the
+ namespace with <stddef.h>'s symbols. Otherwise, include <stddef.h>
+ and use ptrdiff_t. */
+
+#ifdef __PTRDIFF_TYPE__
+# define PTR_INT_TYPE __PTRDIFF_TYPE__
+#else
+# include <stddef.h>
+# define PTR_INT_TYPE ptrdiff_t
+#endif
+
+/* If B is the base of an object addressed by P, return the result of
+ aligning P to the next multiple of A + 1. B and P must be of type
+ char *. A + 1 must be a power of 2. */
+
+#define __BPTR_ALIGN(B, P, A) ((B) + (((P) - (B) + (A)) & ~(A)))
+
+/* Similar to __BPTR_ALIGN (B, P, A), except optimize the common case
+ where pointers can be converted to integers, aligned as integers,
+ and converted back again. If PTR_INT_TYPE is narrower than a
+ pointer (e.g., the AS/400), play it safe and compute the alignment
+ relative to B. Otherwise, use the faster strategy of computing the
+ alignment relative to 0. */
+
+#define __PTR_ALIGN(B, P, A) \
+ __BPTR_ALIGN (sizeof (PTR_INT_TYPE) < sizeof (void *) ? (B) : (char *) 0, \
+ P, A)
+
+#include <string.h>
+
+struct _obstack_chunk /* Lives at front of each chunk. */
+{
+ char *limit; /* 1 past end of this chunk */
+ struct _obstack_chunk *prev; /* address of prior chunk or NULL */
+ char contents[4]; /* objects begin here */
+};
+
+struct obstack /* control current object in current chunk */
+{
+ long chunk_size; /* preferred size to allocate chunks in */
+ struct _obstack_chunk *chunk; /* address of current struct obstack_chunk */
+ char *object_base; /* address of object we are building */
+ char *next_free; /* where to add next char to current object */
+ char *chunk_limit; /* address of char after current chunk */
+ union
+ {
+ PTR_INT_TYPE tempint;
+ void *tempptr;
+ } temp; /* Temporary for some macros. */
+ int alignment_mask; /* Mask of alignment for each object. */
+ /* These prototypes vary based on `use_extra_arg', and we use
+ casts to the prototypeless function type in all assignments,
+ but having prototypes here quiets -Wstrict-prototypes. */
+ struct _obstack_chunk *(*chunkfun) (void *, long);
+ void (*freefun) (void *, struct _obstack_chunk *);
+ void *extra_arg; /* first arg for chunk alloc/dealloc funcs */
+ unsigned use_extra_arg:1; /* chunk alloc/dealloc funcs take extra arg */
+ unsigned maybe_empty_object:1;/* There is a possibility that the current
+ chunk contains a zero-length object. This
+ prevents freeing the chunk if we allocate
+ a bigger chunk to replace it. */
+ unsigned alloc_failed:1; /* No longer used, as we now call the failed
+ handler on error, but retained for binary
+ compatibility. */
+};
+
+/* Declare the external functions we use; they are in obstack.c. */
+
+extern void _obstack_newchunk (struct obstack *, int);
+extern int _obstack_begin (struct obstack *, int, int,
+ void *(*) (long), void (*) (void *));
+extern int _obstack_begin_1 (struct obstack *, int, int,
+ void *(*) (void *, long),
+ void (*) (void *, void *), void *);
+extern int _obstack_memory_used (struct obstack *);
+
+void obstack_free (struct obstack *, void *);
+
+\f
+/* Error handler called when `obstack_chunk_alloc' failed to allocate
+ more memory. This can be set to a user defined function which
+ should either abort gracefully or use longjump - but shouldn't
+ return. The default action is to print a message and abort. */
+extern void (*obstack_alloc_failed_handler) (void);
+\f
+/* Pointer to beginning of object being allocated or to be allocated next.
+ Note that this might not be the final address of the object
+ because a new chunk might be needed to hold the final size. */
+
+#define obstack_base(h) ((void *) (h)->object_base)
+
+/* Size for allocating ordinary chunks. */
+
+#define obstack_chunk_size(h) ((h)->chunk_size)
+
+/* Pointer to next byte not yet allocated in current chunk. */
+
+#define obstack_next_free(h) ((h)->next_free)
+
+/* Mask specifying low bits that should be clear in address of an object. */
+
+#define obstack_alignment_mask(h) ((h)->alignment_mask)
+
+/* To prevent prototype warnings provide complete argument list. */
+#define obstack_init(h) \
+ _obstack_begin ((h), 0, 0, \
+ (void *(*) (long)) obstack_chunk_alloc, \
+ (void (*) (void *)) obstack_chunk_free)
+
+#define obstack_begin(h, size) \
+ _obstack_begin ((h), (size), 0, \
+ (void *(*) (long)) obstack_chunk_alloc, \
+ (void (*) (void *)) obstack_chunk_free)
+
+#define obstack_specify_allocation(h, size, alignment, chunkfun, freefun) \
+ _obstack_begin ((h), (size), (alignment), \
+ (void *(*) (long)) (chunkfun), \
+ (void (*) (void *)) (freefun))
+
+#define obstack_specify_allocation_with_arg(h, size, alignment, chunkfun, freefun, arg) \
+ _obstack_begin_1 ((h), (size), (alignment), \
+ (void *(*) (void *, long)) (chunkfun), \
+ (void (*) (void *, void *)) (freefun), (arg))
+
+#define obstack_chunkfun(h, newchunkfun) \
+ ((h) -> chunkfun = (struct _obstack_chunk *(*)(void *, long)) (newchunkfun))
+
+#define obstack_freefun(h, newfreefun) \
+ ((h) -> freefun = (void (*)(void *, struct _obstack_chunk *)) (newfreefun))
+
+#define obstack_1grow_fast(h,achar) (*((h)->next_free)++ = (achar))
+
+#define obstack_blank_fast(h,n) ((h)->next_free += (n))
+
+#define obstack_memory_used(h) _obstack_memory_used (h)
+\f
+#if defined __GNUC__ && defined __STDC__ && __STDC__
+/* NextStep 2.0 cc is really gcc 1.93 but it defines __GNUC__ = 2 and
+ does not implement __extension__. But that compiler doesn't define
+ __GNUC_MINOR__. */
+# if __GNUC__ < 2 || (__NeXT__ && !__GNUC_MINOR__)
+# define __extension__
+# endif
+
+/* For GNU C, if not -traditional,
+ we can define these macros to compute all args only once
+ without using a global variable.
+ Also, we can avoid using the `temp' slot, to make faster code. */
+
+# define obstack_object_size(OBSTACK) \
+ __extension__ \
+ ({ struct obstack const *__o = (OBSTACK); \
+ (unsigned) (__o->next_free - __o->object_base); })
+
+# define obstack_room(OBSTACK) \
+ __extension__ \
+ ({ struct obstack const *__o = (OBSTACK); \
+ (unsigned) (__o->chunk_limit - __o->next_free); })
+
+# define obstack_make_room(OBSTACK,length) \
+__extension__ \
+({ struct obstack *__o = (OBSTACK); \
+ int __len = (length); \
+ if (__o->chunk_limit - __o->next_free < __len) \
+ _obstack_newchunk (__o, __len); \
+ (void) 0; })
+
+# define obstack_empty_p(OBSTACK) \
+ __extension__ \
+ ({ struct obstack const *__o = (OBSTACK); \
+ (__o->chunk->prev == 0 \
+ && __o->next_free == __PTR_ALIGN ((char *) __o->chunk, \
+ __o->chunk->contents, \
+ __o->alignment_mask)); })
+
+# define obstack_grow(OBSTACK,where,length) \
+__extension__ \
+({ struct obstack *__o = (OBSTACK); \
+ int __len = (length); \
+ if (__o->next_free + __len > __o->chunk_limit) \
+ _obstack_newchunk (__o, __len); \
+ memcpy (__o->next_free, where, __len); \
+ __o->next_free += __len; \
+ (void) 0; })
+
+# define obstack_grow0(OBSTACK,where,length) \
+__extension__ \
+({ struct obstack *__o = (OBSTACK); \
+ int __len = (length); \
+ if (__o->next_free + __len + 1 > __o->chunk_limit) \
+ _obstack_newchunk (__o, __len + 1); \
+ memcpy (__o->next_free, where, __len); \
+ __o->next_free += __len; \
+ *(__o->next_free)++ = 0; \
+ (void) 0; })
+
+# define obstack_1grow(OBSTACK,datum) \
+__extension__ \
+({ struct obstack *__o = (OBSTACK); \
+ if (__o->next_free + 1 > __o->chunk_limit) \
+ _obstack_newchunk (__o, 1); \
+ obstack_1grow_fast (__o, datum); \
+ (void) 0; })
+
+/* These assume that the obstack alignment is good enough for pointers
+ or ints, and that the data added so far to the current object
+ shares that much alignment. */
+
+# define obstack_ptr_grow(OBSTACK,datum) \
+__extension__ \
+({ struct obstack *__o = (OBSTACK); \
+ if (__o->next_free + sizeof (void *) > __o->chunk_limit) \
+ _obstack_newchunk (__o, sizeof (void *)); \
+ obstack_ptr_grow_fast (__o, datum); }) \
+
+# define obstack_int_grow(OBSTACK,datum) \
+__extension__ \
+({ struct obstack *__o = (OBSTACK); \
+ if (__o->next_free + sizeof (int) > __o->chunk_limit) \
+ _obstack_newchunk (__o, sizeof (int)); \
+ obstack_int_grow_fast (__o, datum); })
+
+# define obstack_ptr_grow_fast(OBSTACK,aptr) \
+__extension__ \
+({ struct obstack *__o1 = (OBSTACK); \
+ *(const void **) __o1->next_free = (aptr); \
+ __o1->next_free += sizeof (const void *); \
+ (void) 0; })
+
+# define obstack_int_grow_fast(OBSTACK,aint) \
+__extension__ \
+({ struct obstack *__o1 = (OBSTACK); \
+ *(int *) __o1->next_free = (aint); \
+ __o1->next_free += sizeof (int); \
+ (void) 0; })
+
+# define obstack_blank(OBSTACK,length) \
+__extension__ \
+({ struct obstack *__o = (OBSTACK); \
+ int __len = (length); \
+ if (__o->chunk_limit - __o->next_free < __len) \
+ _obstack_newchunk (__o, __len); \
+ obstack_blank_fast (__o, __len); \
+ (void) 0; })
+
+# define obstack_alloc(OBSTACK,length) \
+__extension__ \
+({ struct obstack *__h = (OBSTACK); \
+ obstack_blank (__h, (length)); \
+ obstack_finish (__h); })
+
+# define obstack_copy(OBSTACK,where,length) \
+__extension__ \
+({ struct obstack *__h = (OBSTACK); \
+ obstack_grow (__h, (where), (length)); \
+ obstack_finish (__h); })
+
+# define obstack_copy0(OBSTACK,where,length) \
+__extension__ \
+({ struct obstack *__h = (OBSTACK); \
+ obstack_grow0 (__h, (where), (length)); \
+ obstack_finish (__h); })
+
+/* The local variable is named __o1 to avoid a name conflict
+ when obstack_blank is called. */
+# define obstack_finish(OBSTACK) \
+__extension__ \
+({ struct obstack *__o1 = (OBSTACK); \
+ void *__value = (void *) __o1->object_base; \
+ if (__o1->next_free == __value) \
+ __o1->maybe_empty_object = 1; \
+ __o1->next_free \
+ = __PTR_ALIGN (__o1->object_base, __o1->next_free, \
+ __o1->alignment_mask); \
+ if (__o1->next_free - (char *)__o1->chunk \
+ > __o1->chunk_limit - (char *)__o1->chunk) \
+ __o1->next_free = __o1->chunk_limit; \
+ __o1->object_base = __o1->next_free; \
+ __value; })
+
+# define obstack_free(OBSTACK, OBJ) \
+__extension__ \
+({ struct obstack *__o = (OBSTACK); \
+ void *__obj = (OBJ); \
+ if (__obj > (void *)__o->chunk && __obj < (void *)__o->chunk_limit) \
+ __o->next_free = __o->object_base = (char *)__obj; \
+ else (obstack_free) (__o, __obj); })
+\f
+#else /* not __GNUC__ or not __STDC__ */
+
+# define obstack_object_size(h) \
+ (unsigned) ((h)->next_free - (h)->object_base)
+
+# define obstack_room(h) \
+ (unsigned) ((h)->chunk_limit - (h)->next_free)
+
+# define obstack_empty_p(h) \
+ ((h)->chunk->prev == 0 \
+ && (h)->next_free == __PTR_ALIGN ((char *) (h)->chunk, \
+ (h)->chunk->contents, \
+ (h)->alignment_mask))
+
+/* Note that the call to _obstack_newchunk is enclosed in (..., 0)
+ so that we can avoid having void expressions
+ in the arms of the conditional expression.
+ Casting the third operand to void was tried before,
+ but some compilers won't accept it. */
+
+# define obstack_make_room(h,length) \
+( (h)->temp.tempint = (length), \
+ (((h)->next_free + (h)->temp.tempint > (h)->chunk_limit) \
+ ? (_obstack_newchunk ((h), (h)->temp.tempint), 0) : 0))
+
+# define obstack_grow(h,where,length) \
+( (h)->temp.tempint = (length), \
+ (((h)->next_free + (h)->temp.tempint > (h)->chunk_limit) \
+ ? (_obstack_newchunk ((h), (h)->temp.tempint), 0) : 0), \
+ memcpy ((h)->next_free, where, (h)->temp.tempint), \
+ (h)->next_free += (h)->temp.tempint)
+
+# define obstack_grow0(h,where,length) \
+( (h)->temp.tempint = (length), \
+ (((h)->next_free + (h)->temp.tempint + 1 > (h)->chunk_limit) \
+ ? (_obstack_newchunk ((h), (h)->temp.tempint + 1), 0) : 0), \
+ memcpy ((h)->next_free, where, (h)->temp.tempint), \
+ (h)->next_free += (h)->temp.tempint, \
+ *((h)->next_free)++ = 0)
+
+# define obstack_1grow(h,datum) \
+( (((h)->next_free + 1 > (h)->chunk_limit) \
+ ? (_obstack_newchunk ((h), 1), 0) : 0), \
+ obstack_1grow_fast (h, datum))
+
+# define obstack_ptr_grow(h,datum) \
+( (((h)->next_free + sizeof (char *) > (h)->chunk_limit) \
+ ? (_obstack_newchunk ((h), sizeof (char *)), 0) : 0), \
+ obstack_ptr_grow_fast (h, datum))
+
+# define obstack_int_grow(h,datum) \
+( (((h)->next_free + sizeof (int) > (h)->chunk_limit) \
+ ? (_obstack_newchunk ((h), sizeof (int)), 0) : 0), \
+ obstack_int_grow_fast (h, datum))
+
+# define obstack_ptr_grow_fast(h,aptr) \
+ (((const void **) ((h)->next_free += sizeof (void *)))[-1] = (aptr))
+
+# define obstack_int_grow_fast(h,aint) \
+ (((int *) ((h)->next_free += sizeof (int)))[-1] = (aint))
+
+# define obstack_blank(h,length) \
+( (h)->temp.tempint = (length), \
+ (((h)->chunk_limit - (h)->next_free < (h)->temp.tempint) \
+ ? (_obstack_newchunk ((h), (h)->temp.tempint), 0) : 0), \
+ obstack_blank_fast (h, (h)->temp.tempint))
+
+# define obstack_alloc(h,length) \
+ (obstack_blank ((h), (length)), obstack_finish ((h)))
+
+# define obstack_copy(h,where,length) \
+ (obstack_grow ((h), (where), (length)), obstack_finish ((h)))
+
+# define obstack_copy0(h,where,length) \
+ (obstack_grow0 ((h), (where), (length)), obstack_finish ((h)))
+
+# define obstack_finish(h) \
+( ((h)->next_free == (h)->object_base \
+ ? (((h)->maybe_empty_object = 1), 0) \
+ : 0), \
+ (h)->temp.tempptr = (h)->object_base, \
+ (h)->next_free \
+ = __PTR_ALIGN ((h)->object_base, (h)->next_free, \
+ (h)->alignment_mask), \
+ (((h)->next_free - (char *) (h)->chunk \
+ > (h)->chunk_limit - (char *) (h)->chunk) \
+ ? ((h)->next_free = (h)->chunk_limit) : 0), \
+ (h)->object_base = (h)->next_free, \
+ (h)->temp.tempptr)
+
+# define obstack_free(h,obj) \
+( (h)->temp.tempint = (char *) (obj) - (char *) (h)->chunk, \
+ ((((h)->temp.tempint > 0 \
+ && (h)->temp.tempint < (h)->chunk_limit - (char *) (h)->chunk)) \
+ ? (int) ((h)->next_free = (h)->object_base \
+ = (h)->temp.tempint + (char *) (h)->chunk) \
+ : (((obstack_free) ((h), (h)->temp.tempint + (char *) (h)->chunk), 0), 0)))
+
+#endif /* not __GNUC__ or not __STDC__ */
+
+#ifdef __cplusplus
+} /* C++ */
+#endif
+
+#endif /* obstack.h */
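
The long summary at the top of obstack.h describes the programming model;
purely as an illustration (not part of the imported files), a caller defines
the chunk allocator macros and then grows and finishes objects along these
lines (obstack_example is a made-up name):

	#include "git-compat-util.h"
	#include "obstack.h"

	#define obstack_chunk_alloc xmalloc
	#define obstack_chunk_free free

	static void obstack_example(void)
	{
		struct obstack os;
		char *name;

		obstack_init(&os);		/* grab the first chunk */
		obstack_grow(&os, "sym", 3);	/* grow the current object */
		obstack_1grow(&os, '\0');	/* ...one char at a time works too */
		name = obstack_finish(&os);	/* "sym"; its address is now fixed */
		obstack_free(&os, name);	/* unwind back to before "sym" */
	}
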
_gitConfig[key] = read_pipe(cmd, ignore_error=True).strip()
return _gitConfig[key]
+def gitConfigList(key):
+ if not _gitConfig.has_key(key):
+ _gitConfig[key] = read_pipe("git config --get-all %s" % key, ignore_error=True).strip().split(os.linesep)
+ return _gitConfig[key]
+
def p4BranchesInGit(branchesAreInRemotes = True):
branches = {}
if not self.detectRenames:
# If not explicitly set check the config variable
- self.detectRenames = gitConfig("git-p4.detectRenames").lower() == "true"
+ self.detectRenames = gitConfig("git-p4.detectRenames")
- if self.detectRenames:
+ if self.detectRenames.lower() == "false" or self.detectRenames == "":
+ diffOpts = ""
+ elif self.detectRenames.lower() == "true":
diffOpts = "-M"
else:
- diffOpts = ""
+ diffOpts = "-M%s" % self.detectRenames
- if gitConfig("git-p4.detectCopies").lower() == "true":
+ detectCopies = gitConfig("git-p4.detectCopies")
+ if detectCopies.lower() == "true":
diffOpts += " -C"
+ elif detectCopies != "" and detectCopies.lower() != "false":
+ diffOpts += " -C%s" % detectCopies
- if gitConfig("git-p4.detectCopiesHarder").lower() == "true":
+ if gitConfig("git-p4.detectCopiesHarder", "--bool") == "true":
diffOpts += " --find-copies-harder"
diff = read_pipe_lines("git diff-tree -r %s \"%s^\" \"%s\"" % (diffOpts, id, id))
def getBranchMapping(self):
lostAndFoundBranches = set()
- for info in p4CmdList("branches"):
+ user = gitConfig("git-p4.branchUser")
+ if len(user) > 0:
+ command = "branches -u %s" % user
+ else:
+ command = "branches"
+
+ for info in p4CmdList(command):
details = p4Cmd("branch -o %s" % info["branch"])
viewIdx = 0
while details.has_key("View%s" % viewIdx):
if source not in self.knownBranches:
lostAndFoundBranches.add(source)
+ # Perforce does not strictly require branches to be defined, so we also
+ # check git config for a branch list.
+ #
+ # Example of branch definition in git config file:
+ # [git-p4]
+ # branchList=main:branchA
+ # branchList=main:branchB
+ # branchList=branchA:branchC
+ configBranches = gitConfigList("git-p4.branchList")
+ for branch in configBranches:
+ if branch:
+ (source, destination) = branch.split(":")
+ self.knownBranches[destination] = source
+
+ lostAndFoundBranches.discard(destination)
+
+ if source not in self.knownBranches:
+ lostAndFoundBranches.add(source)
+
for branch in lostAndFoundBranches:
self.knownBranches[branch] = branch
else:
paths = []
for (prev, cur) in zip(self.previousDepotPaths, depotPaths):
- for i in range(0, min(len(cur), len(prev))):
- if cur[i] <> prev[i]:
+ prev_list = prev.split("/")
+ cur_list = cur.split("/")
+ for i in range(0, min(len(cur_list), len(prev_list))):
+ if cur_list[i] <> prev_list[i]:
i = i - 1
break
- paths.append (cur[:i + 1])
+ paths.append ("/".join(cur_list[:i + 1]))
self.previousDepotPaths = paths
When submitting, git-p4 checks that the git commits are authored by the current
p4 user, and warns if they are not. This disables the check.
+git-p4.detectRenames
+
+Detect renames when submitting changes to the Perforce server; this enables
+git's -M option. It can optionally be set to a number to be used as the
+rename detection threshold (a percentage).
+
+ git config [--global] git-p4.detectRenames true
+ git config [--global] git-p4.detectRenames 50
+
+git-p4.detectCopies
+
+Detect copies when submitting changes to the Perforce server; this enables
+git's -C option. It can optionally be set to a number to be used as the
+copy detection threshold (a percentage).
+
+ git config [--global] git-p4.detectCopies true
+ git config [--global] git-p4.detectCopies 80
+
+git-p4.detectCopiesHarder
+
+Detect copies even between files that did not change when submitting changes
+to the Perforce server; this enables git's --find-copies-harder option.
+
+ git config [--global] git-p4.detectCopiesHarder true
+
+git-p4.branchUser
+
+Only use branch specifications defined by the selected username.
+
+ git config [--global] git-p4.branchUser username
+
+git-p4.branchList
+
+List of branches to be imported when branch detection is enabled.
+
+ git config [--global] git-p4.branchList main:branchA
+ git config [--global] --add git-p4.branchList main:branchB
+
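The settings above can equally be written straight into a configuration file;
for example, the branch list shown corresponds to a section like the following
(mirroring the example in the git-p4 source comment):

	[git-p4]
		branchList = main:branchA
		branchList = main:branchB
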
Implementation Details...
=========================
dollar = memchr(src, '$', len);
if (!dollar)
break;
- memcpy(dst, src, dollar + 1 - src);
+ memmove(dst, src, dollar + 1 - src);
dst += dollar + 1 - src;
len -= dollar + 1 - src;
src = dollar + 1;
src = dollar + 1;
}
}
- memcpy(dst, src, len);
+ memmove(dst, src, len);
strbuf_setlen(buf, dst + len - buf->buf);
return 1;
}
int git_diff_ui_config(const char *var, const char *value, void *cb)
{
if (!strcmp(var, "diff.color") || !strcmp(var, "color.diff")) {
- diff_use_color_default = git_config_colorbool(var, value, -1);
+ diff_use_color_default = git_config_colorbool(var, value);
return 0;
}
if (!strcmp(var, "diff.renames")) {
if (!strcmp(var, "diff.ignoresubmodules"))
handle_ignore_submodules_arg(&default_diff_options, value);
+ if (git_color_config(var, value, cb) < 0)
+ return -1;
+
return git_diff_basic_config(var, value, cb);
}
if (!prefixcmp(var, "submodule."))
return parse_submodule_config_option(var, value);
- return git_color_default_config(var, value, cb);
+ return git_default_config(var, value, cb);
}
static char *quote_two(const char *one, const char *two)
struct diff_options *o)
{
int lc_a, lc_b;
- int color_diff = DIFF_OPT_TST(o, COLOR_DIFF);
const char *name_a_tab, *name_b_tab;
- const char *metainfo = diff_get_color(color_diff, DIFF_METAINFO);
- const char *fraginfo = diff_get_color(color_diff, DIFF_FRAGINFO);
- const char *reset = diff_get_color(color_diff, DIFF_RESET);
+ const char *metainfo = diff_get_color(o->use_color, DIFF_METAINFO);
+ const char *fraginfo = diff_get_color(o->use_color, DIFF_FRAGINFO);
+ const char *reset = diff_get_color(o->use_color, DIFF_RESET);
static struct strbuf a_name = STRBUF_INIT, b_name = STRBUF_INIT;
const char *a_prefix, *b_prefix;
char *data_one, *data_two;
size_two = fill_textconv(textconv_two, two, &data_two);
memset(&ecbdata, 0, sizeof(ecbdata));
- ecbdata.color_diff = color_diff;
+ ecbdata.color_diff = want_color(o->use_color);
ecbdata.found_changesp = &o->found_changes;
ecbdata.ws_rule = whitespace_rule(name_b ? name_b : name_a);
ecbdata.opt = o;
const char *diff_get_color(int diff_use_color, enum color_diff ix)
{
- if (diff_use_color)
+ if (want_color(diff_use_color))
return diff_colors[ix];
return "";
}
static void checkdiff_consume(void *priv, char *line, unsigned long len)
{
struct checkdiff_t *data = priv;
- int color_diff = DIFF_OPT_TST(data->o, COLOR_DIFF);
int marker_size = data->conflict_marker_size;
- const char *ws = diff_get_color(color_diff, DIFF_WHITESPACE);
- const char *reset = diff_get_color(color_diff, DIFF_RESET);
- const char *set = diff_get_color(color_diff, DIFF_FILE_NEW);
+ const char *ws = diff_get_color(data->o->use_color, DIFF_WHITESPACE);
+ const char *reset = diff_get_color(data->o->use_color, DIFF_RESET);
+ const char *set = diff_get_color(data->o->use_color, DIFF_FILE_NEW);
char *err;
char *line_prefix = "";
struct strbuf *msgbuf;
memset(&xecfg, 0, sizeof(xecfg));
memset(&ecbdata, 0, sizeof(ecbdata));
ecbdata.label_path = lbl;
- ecbdata.color_diff = DIFF_OPT_TST(o, COLOR_DIFF);
+ ecbdata.color_diff = want_color(o->use_color);
ecbdata.found_changesp = &o->found_changes;
ecbdata.ws_rule = whitespace_rule(name_b ? name_b : name_a);
if (ecbdata.ws_rule & WS_BLANK_AT_EOF)
break;
}
}
- if (DIFF_OPT_TST(o, COLOR_DIFF)) {
+ if (want_color(o->use_color)) {
struct diff_words_style *st = ecbdata.diff_words->style;
st->old.color = diff_get_color_opt(o, DIFF_FILE_OLD);
st->new.color = diff_get_color_opt(o, DIFF_FILE_NEW);
*/
fill_metainfo(msg, name, other, one, two, o, p,
&must_show_header,
- DIFF_OPT_TST(o, COLOR_DIFF) && !pgm);
+ want_color(o->use_color) && !pgm);
xfrm_msg = msg->len ? msg->buf : NULL;
}
options->change = diff_change;
options->add_remove = diff_addremove;
- if (diff_use_color_default > 0)
- DIFF_OPT_SET(options, COLOR_DIFF);
+ options->use_color = diff_use_color_default;
options->detect_rename = diff_detect_rename_default;
if (diff_no_prefix) {
else if (!strcmp(arg, "--follow"))
DIFF_OPT_SET(options, FOLLOW_RENAMES);
else if (!strcmp(arg, "--color"))
- DIFF_OPT_SET(options, COLOR_DIFF);
+ options->use_color = 1;
else if (!prefixcmp(arg, "--color=")) {
- int value = git_config_colorbool(NULL, arg+8, -1);
- if (value == 0)
- DIFF_OPT_CLR(options, COLOR_DIFF);
- else if (value > 0)
- DIFF_OPT_SET(options, COLOR_DIFF);
- else
+ int value = git_config_colorbool(NULL, arg+8);
+ if (value < 0)
return error("option `color' expects \"always\", \"auto\", or \"never\"");
+ options->use_color = value;
}
else if (!strcmp(arg, "--no-color"))
- DIFF_OPT_CLR(options, COLOR_DIFF);
+ options->use_color = 0;
else if (!strcmp(arg, "--color-words")) {
- DIFF_OPT_SET(options, COLOR_DIFF);
+ options->use_color = 1;
options->word_diff = DIFF_WORDS_COLOR;
}
else if (!prefixcmp(arg, "--color-words=")) {
- DIFF_OPT_SET(options, COLOR_DIFF);
+ options->use_color = 1;
options->word_diff = DIFF_WORDS_COLOR;
options->word_regex = arg + 14;
}
if (!strcmp(type, "plain"))
options->word_diff = DIFF_WORDS_PLAIN;
else if (!strcmp(type, "color")) {
- DIFF_OPT_SET(options, COLOR_DIFF);
+ options->use_color = 1;
options->word_diff = DIFF_WORDS_COLOR;
}
else if (!strcmp(type, "porcelain"))
#define DIFF_OPT_SILENT_ON_REMOVE (1 << 5)
#define DIFF_OPT_FIND_COPIES_HARDER (1 << 6)
#define DIFF_OPT_FOLLOW_RENAMES (1 << 7)
-#define DIFF_OPT_COLOR_DIFF (1 << 8)
+/* (1 << 8) unused */
/* (1 << 9) unused */
#define DIFF_OPT_HAS_CHANGES (1 << 10)
#define DIFF_OPT_QUICK (1 << 11)
const char *single_follow;
const char *a_prefix, *b_prefix;
unsigned flags;
+ int use_color;
int context;
int interhunkcontext;
int break_opt;
};
const char *diff_get_color(int diff_use_color, enum color_diff ix);
#define diff_get_color_opt(o, ix) \
- diff_get_color(DIFF_OPT_TST((o), COLOR_DIFF), ix)
+ diff_get_color((o)->use_color, ix)
extern const char mime_boundary_leader[];
#include "diff.h"
#include "diffcore.h"
#include "xdiff-interface.h"
+#include "kwset.h"
struct diffgrep_cb {
regex_t *regexp;
static unsigned int contains(struct diff_filespec *one,
const char *needle, unsigned long len,
- regex_t *regexp)
+ regex_t *regexp, kwset_t kws)
{
unsigned int cnt;
unsigned long sz;
} else { /* Classic exact string match */
while (sz) {
- const char *found = memmem(data, sz, needle, len);
- if (!found)
+ size_t offset = kwsexec(kws, data, sz, NULL);
+ const char *found;
+ if (offset == -1)
break;
+ else
+ found = data + offset;
sz -= found - data + len;
data = found + len;
cnt++;
unsigned long len = strlen(needle);
int i, has_changes;
regex_t regex, *regexp = NULL;
+ kwset_t kws = NULL;
struct diff_queue_struct outq;
DIFF_QUEUE_CLEAR(&outq);
die("invalid pickaxe regex: %s", errbuf);
}
regexp = ®ex;
+ } else {
+ kws = kwsalloc(NULL);
+ kwsincr(kws, needle, len);
+ kwsprep(kws);
}
if (opts & DIFF_PICKAXE_ALL) {
if (!DIFF_FILE_VALID(p->two))
continue; /* ignore unmerged */
/* created */
- if (contains(p->two, needle, len, regexp))
+ if (contains(p->two, needle, len, regexp, kws))
has_changes++;
}
else if (!DIFF_FILE_VALID(p->two)) {
- if (contains(p->one, needle, len, regexp))
+ if (contains(p->one, needle, len, regexp, kws))
has_changes++;
}
else if (!diff_unmodified_pair(p) &&
- contains(p->one, needle, len, regexp) !=
- contains(p->two, needle, len, regexp))
+ contains(p->one, needle, len, regexp, kws) !=
+ contains(p->two, needle, len, regexp, kws))
has_changes++;
}
if (has_changes)
if (!DIFF_FILE_VALID(p->two))
; /* ignore unmerged */
/* created */
- else if (contains(p->two, needle, len, regexp))
+ else if (contains(p->two, needle, len, regexp,
+ kws))
has_changes = 1;
}
else if (!DIFF_FILE_VALID(p->two)) {
- if (contains(p->one, needle, len, regexp))
+ if (contains(p->one, needle, len, regexp, kws))
has_changes = 1;
}
else if (!diff_unmodified_pair(p) &&
- contains(p->one, needle, len, regexp) !=
- contains(p->two, needle, len, regexp))
+ contains(p->one, needle, len, regexp, kws) !=
+ contains(p->two, needle, len, regexp, kws))
has_changes = 1;
if (has_changes)
if (opts & DIFF_PICKAXE_REGEX)
regfree(®ex);
+ else
+ kwsfree(kws);
free(q->queue);
*q = outq;
unsigned dirty_submodule : 2; /* For submodules: its work tree is dirty */
#define DIRTY_SUBMODULE_UNTRACKED 1
#define DIRTY_SUBMODULE_MODIFIED 2
-
+ unsigned has_more_entries : 1; /* only appear in combined diff */
struct userdiff_driver *driver;
/* data should be considered "binary"; -1 means "don't know yet" */
int is_binary;
git_dir = getenv(GIT_DIR_ENVIRONMENT);
git_dir = git_dir ? xstrdup(git_dir) : NULL;
if (!git_dir) {
- git_dir = read_gitfile_gently(DEFAULT_GIT_DIR_ENVIRONMENT);
+ git_dir = read_gitfile(DEFAULT_GIT_DIR_ENVIRONMENT);
git_dir = git_dir ? xstrdup(git_dir) : NULL;
}
if (!git_dir)
#define DEPTH_BITS 13
#define MAX_DEPTH ((1<<DEPTH_BITS)-1)
+/*
+ * We abuse the setuid bit on directories to mean "do not delta".
+ */
+#define NO_DELTA S_ISUID
+
struct object_entry {
struct pack_idx_entry idx;
struct object_entry *next;
static uintmax_t object_count_by_type[1 << TYPE_BITS];
static uintmax_t duplicate_count_by_type[1 << TYPE_BITS];
static uintmax_t delta_count_by_type[1 << TYPE_BITS];
+static uintmax_t delta_count_attempts_by_type[1 << TYPE_BITS];
static unsigned long object_count;
static unsigned long branch_count;
static unsigned long branch_load_count;
}
if (last && last->data.buf && last->depth < max_depth && dat->len > 20) {
+ delta_count_attempts_by_type[type]++;
delta = diff_delta(last->data.buf, last->data.len,
dat->buf, dat->len,
&deltalen, dat->len - 20);
struct tree_entry *e = t->entries[i];
if (!e->versions[v].mode)
continue;
- strbuf_addf(b, "%o %s%c", (unsigned int)e->versions[v].mode,
- e->name->str_dat, '\0');
+ strbuf_addf(b, "%o %s%c",
+ (unsigned int)(e->versions[v].mode & ~NO_DELTA),
+ e->name->str_dat, '\0');
strbuf_add(b, e->versions[v].sha1, 20);
}
}
struct tree_content *t = root->tree;
unsigned int i, j, del;
struct last_object lo = { STRBUF_INIT, 0, 0, /* no_swap */ 1 };
- struct object_entry *le;
+ struct object_entry *le = NULL;
if (!is_null_sha1(root->versions[1].sha1))
return;
store_tree(t->entries[i]);
}
- le = find_object(root->versions[0].sha1);
+ if (!(root->versions[0].mode & NO_DELTA))
+ le = find_object(root->versions[0].sha1);
if (S_ISDIR(root->versions[0].mode) && le && le->pack_id == pack_id) {
mktree(t, 0, &old_tree);
lo.data = old_tree;
{
if (!S_ISDIR(mode))
die("Root cannot be a non-directory");
+ hashclr(root->versions[0].sha1);
hashcpy(root->versions[1].sha1, sha1);
if (root->tree)
release_tree_content_recursive(root->tree);
if (e->tree)
release_tree_content_recursive(e->tree);
e->tree = subtree;
+
+ /*
+ * We need to leave e->versions[0].sha1 alone
+ * to avoid modifying the preimage tree used
+ * when writing out the parent directory.
+ * But after replacing the subdir with a
+ * completely different one, it's not a good
+ * delta base any more, and besides, we've
+ * thrown away the tree entries needed to
+ * make a delta against it.
+ *
+ * So let's just explicitly disable deltas
+ * for the subtree.
+ */
+ if (S_ISDIR(e->versions[0].mode))
+ e->versions[0].mode |= NO_DELTA;
+
hashclr(root->versions[1].sha1);
return 1;
}
static char *parse_ident(const char *buf)
{
- const char *gt;
+ const char *ltgt;
size_t name_len;
char *ident;
- gt = strrchr(buf, '>');
- if (!gt)
+ /* ensure there is a space delimiter even if there is no name */
+ if (*buf == '<')
+ --buf;
+
+ ltgt = buf + strcspn(buf, "<>");
+ if (*ltgt != '<')
+ die("Missing < in ident string: %s", buf);
+ if (ltgt != buf && ltgt[-1] != ' ')
+ die("Missing space before < in ident string: %s", buf);
+ ltgt = ltgt + 1 + strcspn(ltgt + 1, "<>");
+ if (*ltgt != '>')
die("Missing > in ident string: %s", buf);
- gt++;
- if (*gt != ' ')
+ ltgt++;
+ if (*ltgt != ' ')
die("Missing space after > in ident string: %s", buf);
- gt++;
- name_len = gt - buf;
+ ltgt++;
+ name_len = ltgt - buf;
ident = xmalloc(name_len + 24);
strncpy(ident, buf, name_len);
switch (whenspec) {
case WHENSPEC_RAW:
- if (validate_raw_date(gt, ident + name_len, 24) < 0)
- die("Invalid raw date \"%s\" in ident: %s", gt, buf);
+ if (validate_raw_date(ltgt, ident + name_len, 24) < 0)
+ die("Invalid raw date \"%s\" in ident: %s", ltgt, buf);
break;
case WHENSPEC_RFC2822:
- if (parse_date(gt, ident + name_len, 24) < 0)
- die("Invalid rfc2822 date \"%s\" in ident: %s", gt, buf);
+ if (parse_date(ltgt, ident + name_len, 24) < 0)
+ die("Invalid rfc2822 date \"%s\" in ident: %s", ltgt, buf);
break;
case WHENSPEC_NOW:
- if (strcmp("now", gt))
+ if (strcmp("now", ltgt))
die("Date in ident must be 'now': %s", buf);
datestamp(ident + name_len, 24);
break;
type = oe->type;
hashcpy(sha1, oe->idx.sha1);
} else if (!get_sha1(from, sha1)) {
- unsigned long size;
- char *buf;
-
- buf = read_sha1_file(sha1, &type, &size);
- if (!buf || size < 46)
- die("Not a valid commit: %s", from);
- free(buf);
+ struct object_entry *oe = find_object(sha1);
+ if (!oe) {
+ type = sha1_object_info(sha1, NULL);
+ if (type < 0)
+ die("Not a valid object: %s", from);
+ } else
+ type = oe->type;
} else
die("Invalid ref name or SHA1 expression: %s", from);
read_next_command();
strbuf_release(&line);
cat_blob_write(buf, size);
cat_blob_write("\n", 1);
- free(buf);
+ if (oe && oe->pack_id == pack_id) {
+ last_blob.offset = oe->idx.offset;
+ strbuf_attach(&last_blob.data, buf, size, size);
+ last_blob.depth = oe->depth;
+ } else
+ free(buf);
}
static void parse_cat_blob(void)
/* mode SP type SP object_name TAB path LF */
strbuf_reset(&line);
strbuf_addf(&line, "%06o %s %s\t",
- mode, type, sha1_to_hex(sha1));
+ mode & ~NO_DELTA, type, sha1_to_hex(sha1));
quote_c_style(path, &line, NULL, 0);
strbuf_addch(&line, '\n');
}
fprintf(stderr, "---------------------------------------------------------------------\n");
fprintf(stderr, "Alloc'd objects: %10" PRIuMAX "\n", alloc_count);
fprintf(stderr, "Total objects: %10" PRIuMAX " (%10" PRIuMAX " duplicates )\n", total_count, duplicate_count);
- fprintf(stderr, " blobs : %10" PRIuMAX " (%10" PRIuMAX " duplicates %10" PRIuMAX " deltas)\n", object_count_by_type[OBJ_BLOB], duplicate_count_by_type[OBJ_BLOB], delta_count_by_type[OBJ_BLOB]);
- fprintf(stderr, " trees : %10" PRIuMAX " (%10" PRIuMAX " duplicates %10" PRIuMAX " deltas)\n", object_count_by_type[OBJ_TREE], duplicate_count_by_type[OBJ_TREE], delta_count_by_type[OBJ_TREE]);
- fprintf(stderr, " commits: %10" PRIuMAX " (%10" PRIuMAX " duplicates %10" PRIuMAX " deltas)\n", object_count_by_type[OBJ_COMMIT], duplicate_count_by_type[OBJ_COMMIT], delta_count_by_type[OBJ_COMMIT]);
- fprintf(stderr, " tags : %10" PRIuMAX " (%10" PRIuMAX " duplicates %10" PRIuMAX " deltas)\n", object_count_by_type[OBJ_TAG], duplicate_count_by_type[OBJ_TAG], delta_count_by_type[OBJ_TAG]);
+ fprintf(stderr, " blobs : %10" PRIuMAX " (%10" PRIuMAX " duplicates %10" PRIuMAX " deltas of %10" PRIuMAX" attempts)\n", object_count_by_type[OBJ_BLOB], duplicate_count_by_type[OBJ_BLOB], delta_count_by_type[OBJ_BLOB], delta_count_attempts_by_type[OBJ_BLOB]);
+ fprintf(stderr, " trees : %10" PRIuMAX " (%10" PRIuMAX " duplicates %10" PRIuMAX " deltas of %10" PRIuMAX" attempts)\n", object_count_by_type[OBJ_TREE], duplicate_count_by_type[OBJ_TREE], delta_count_by_type[OBJ_TREE], delta_count_attempts_by_type[OBJ_TREE]);
+ fprintf(stderr, " commits: %10" PRIuMAX " (%10" PRIuMAX " duplicates %10" PRIuMAX " deltas of %10" PRIuMAX" attempts)\n", object_count_by_type[OBJ_COMMIT], duplicate_count_by_type[OBJ_COMMIT], delta_count_by_type[OBJ_COMMIT], delta_count_attempts_by_type[OBJ_COMMIT]);
+ fprintf(stderr, " tags : %10" PRIuMAX " (%10" PRIuMAX " duplicates %10" PRIuMAX " deltas of %10" PRIuMAX" attempts)\n", object_count_by_type[OBJ_TAG], duplicate_count_by_type[OBJ_TAG], delta_count_by_type[OBJ_TAG], delta_count_attempts_by_type[OBJ_TAG]);
fprintf(stderr, "Total branches: %10lu (%10lu loads )\n", branch_count, branch_load_count);
fprintf(stderr, " marks: %10" PRIuMAX " (%10" PRIuMAX " unique )\n", (((uintmax_t)1) << marks->shift) * 1024, marks_set_count);
fprintf(stderr, " atoms: %10u\n", atom_cnt);
static int fsck_ident(char **ident, struct object *obj, fsck_error error_func)
{
- if (**ident == '<' || **ident == '\n')
- return error_func(obj, FSCK_ERROR, "invalid author/committer line - missing space before email");
- *ident += strcspn(*ident, "<\n");
- if ((*ident)[-1] != ' ')
+ if (**ident == '<')
return error_func(obj, FSCK_ERROR, "invalid author/committer line - missing space before email");
+ *ident += strcspn(*ident, "<>\n");
+ if (**ident == '>')
+ return error_func(obj, FSCK_ERROR, "invalid author/committer line - bad name");
if (**ident != '<')
return error_func(obj, FSCK_ERROR, "invalid author/committer line - missing email");
+ if ((*ident)[-1] != ' ')
+ return error_func(obj, FSCK_ERROR, "invalid author/committer line - missing space before email");
(*ident)++;
*ident += strcspn(*ident, "<>\n");
if (**ident != '>')
perl -ne 'BEGIN { $subject = 0 }
if ($subject > 1) { print ; }
elsif (/^\s+$/) { next ; }
- elsif (/^Author:/) { print s/Author/From/ ; }
+ elsif (/^Author:/) { s/Author/From/ ; print ;}
elsif (/^(From|Date)/) { print ; }
elsif ($subject) {
$subject = 2 ;
msgnum=
;;
*)
- if test -n "$parse_patch" ; then
+ if test -n "$patch_format"
+ then
clean_abort "$(eval_gettext "Patch format \$patch_format is not supported.")"
else
clean_abort "$(gettext "Patch format detection failed.")"
should_prompt () {
prompt_merge=$(git config --bool mergetool.prompt || echo true)
prompt=$(git config --bool difftool.prompt || echo $prompt_merge)
- if test "$prompt" = true; then
+ if test "$prompt" = true
+ then
test -z "$GIT_DIFFTOOL_NO_PROMPT"
else
test -n "$GIT_DIFFTOOL_PROMPT"
# $LOCAL and $REMOTE are temporary files so prompt
# the user with the real $MERGED name before launching $merge_tool.
- if should_prompt; then
+ if should_prompt
+ then
printf "\nViewing: '$MERGED'\n"
- if use_ext_cmd; then
+ if use_ext_cmd
+ then
printf "Hit return to launch '%s': " \
"$GIT_DIFFTOOL_EXTCMD"
else
read ans
fi
- if use_ext_cmd; then
+ if use_ext_cmd
+ then
export BASE
eval $GIT_DIFFTOOL_EXTCMD '"$LOCAL"' '"$REMOTE"'
else
fi
}
-if ! use_ext_cmd; then
- if test -n "$GIT_DIFF_TOOL"; then
+if ! use_ext_cmd
+then
+ if test -n "$GIT_DIFF_TOOL"
+ then
merge_tool="$GIT_DIFF_TOOL"
else
merge_tool="$(get_merge_tool)" || exit
}
translate_merge_tool_path () {
- case "$1" in
- araxis)
- echo compare
- ;;
- bc3)
- echo bcompare
- ;;
- emerge)
- echo emacs
- ;;
- gvimdiff|gvimdiff2)
- echo gvim
- ;;
- vimdiff|vimdiff2)
- echo vim
- ;;
- *)
- echo "$1"
- ;;
- esac
+ echo "$1"
}
check_unchanged () {
- if test "$MERGED" -nt "$BACKUP"; then
+ if test "$MERGED" -nt "$BACKUP"
+ then
status=0
else
- while true; do
+ while true
+ do
echo "$MERGED seems unchanged."
printf "Was the merge successful? [y/n] "
read answer
fi
}
+valid_tool_config () {
+ if test -n "$(get_merge_tool_cmd "$1")"
+ then
+ return 0
+ else
+ return 1
+ fi
+}
+
valid_tool () {
+ setup_tool "$1" || valid_tool_config "$1"
+}
+
+setup_tool () {
case "$1" in
- araxis | bc3 | diffuse | ecmerge | emerge | gvimdiff | gvimdiff2 | \
- kdiff3 | meld | opendiff | p4merge | tkdiff | vimdiff | vimdiff2 | xxdiff)
- ;; # happy
- kompare)
- if ! diff_mode; then
- return 1
- fi
- ;;
- tortoisemerge)
- if ! merge_mode; then
- return 1
- fi
+ vim*|gvim*)
+ tool=vim
;;
*)
- if test -z "$(get_merge_tool_cmd "$1")"; then
- return 1
- fi
+ tool="$1"
;;
esac
+ mergetools="$(git --exec-path)/mergetools"
+
+ # Load the default definitions
+ . "$mergetools/defaults"
+ if ! test -f "$mergetools/$tool"
+ then
+ return 1
+ fi
+
+ # Load the redefined functions
+ . "$mergetools/$tool"
+
+ if merge_mode && ! can_merge
+ then
+ echo "error: '$tool' can not be used to resolve merges" >&2
+ exit 1
+ elif diff_mode && ! can_diff
+ then
+ echo "error: '$tool' can only be used to resolve merges" >&2
+ exit 1
+ fi
+ return 0
}
get_merge_tool_cmd () {
# Prints the custom command for a merge tool
- if test -n "$1"; then
- merge_tool="$1"
- else
- merge_tool="$(get_merge_tool)"
- fi
- if diff_mode; then
+ merge_tool="$1"
+ if diff_mode
+ then
echo "$(git config difftool.$merge_tool.cmd ||
git config mergetool.$merge_tool.cmd)"
else
fi
}
+# Entry point for running tools
run_merge_tool () {
# If GIT_PREFIX is empty then we cannot use it in tools
# that expect to be able to chdir() to its value.
base_present="$2"
status=0
- case "$1" in
- araxis)
- if merge_mode; then
- touch "$BACKUP"
- if $base_present; then
- "$merge_tool_path" -wait -merge -3 -a1 \
- "$BASE" "$LOCAL" "$REMOTE" "$MERGED" \
- >/dev/null 2>&1
- else
- "$merge_tool_path" -wait -2 \
- "$LOCAL" "$REMOTE" "$MERGED" \
- >/dev/null 2>&1
- fi
- check_unchanged
- else
- "$merge_tool_path" -wait -2 "$LOCAL" "$REMOTE" \
- >/dev/null 2>&1
- fi
- ;;
- bc3)
- if merge_mode; then
- touch "$BACKUP"
- if $base_present; then
- "$merge_tool_path" "$LOCAL" "$REMOTE" "$BASE" \
- -mergeoutput="$MERGED"
- else
- "$merge_tool_path" "$LOCAL" "$REMOTE" \
- -mergeoutput="$MERGED"
- fi
- check_unchanged
- else
- "$merge_tool_path" "$LOCAL" "$REMOTE"
- fi
- ;;
- diffuse)
- if merge_mode; then
- touch "$BACKUP"
- if $base_present; then
- "$merge_tool_path" \
- "$LOCAL" "$MERGED" "$REMOTE" \
- "$BASE" | cat
- else
- "$merge_tool_path" \
- "$LOCAL" "$MERGED" "$REMOTE" | cat
- fi
- check_unchanged
- else
- "$merge_tool_path" "$LOCAL" "$REMOTE" | cat
- fi
- ;;
- ecmerge)
- if merge_mode; then
- touch "$BACKUP"
- if $base_present; then
- "$merge_tool_path" "$BASE" "$LOCAL" "$REMOTE" \
- --default --mode=merge3 --to="$MERGED"
- else
- "$merge_tool_path" "$LOCAL" "$REMOTE" \
- --default --mode=merge2 --to="$MERGED"
- fi
- check_unchanged
- else
- "$merge_tool_path" --default --mode=diff2 \
- "$LOCAL" "$REMOTE"
- fi
- ;;
- emerge)
- if merge_mode; then
- if $base_present; then
- "$merge_tool_path" \
- -f emerge-files-with-ancestor-command \
- "$LOCAL" "$REMOTE" "$BASE" \
- "$(basename "$MERGED")"
- else
- "$merge_tool_path" \
- -f emerge-files-command \
- "$LOCAL" "$REMOTE" \
- "$(basename "$MERGED")"
- fi
- status=$?
- else
- "$merge_tool_path" -f emerge-files-command \
- "$LOCAL" "$REMOTE"
- fi
- ;;
- gvimdiff|vimdiff)
- if merge_mode; then
- touch "$BACKUP"
- if $base_present; then
- "$merge_tool_path" -f -d -c "wincmd J" \
- "$MERGED" "$LOCAL" "$BASE" "$REMOTE"
- else
- "$merge_tool_path" -f -d -c "wincmd l" \
- "$LOCAL" "$MERGED" "$REMOTE"
- fi
- check_unchanged
- else
- "$merge_tool_path" -R -f -d -c "wincmd l" \
- -c 'cd $GIT_PREFIX' \
- "$LOCAL" "$REMOTE"
- fi
- ;;
- gvimdiff2|vimdiff2)
- if merge_mode; then
- touch "$BACKUP"
- "$merge_tool_path" -f -d -c "wincmd l" \
- "$LOCAL" "$MERGED" "$REMOTE"
- check_unchanged
- else
- "$merge_tool_path" -R -f -d -c "wincmd l" \
- -c 'cd $GIT_PREFIX' \
- "$LOCAL" "$REMOTE"
- fi
- ;;
- kdiff3)
- if merge_mode; then
- if $base_present; then
- ("$merge_tool_path" --auto \
- --L1 "$MERGED (Base)" \
- --L2 "$MERGED (Local)" \
- --L3 "$MERGED (Remote)" \
- -o "$MERGED" \
- "$BASE" "$LOCAL" "$REMOTE" \
- > /dev/null 2>&1)
- else
- ("$merge_tool_path" --auto \
- --L1 "$MERGED (Local)" \
- --L2 "$MERGED (Remote)" \
- -o "$MERGED" \
- "$LOCAL" "$REMOTE" \
- > /dev/null 2>&1)
- fi
- status=$?
- else
- ("$merge_tool_path" --auto \
- --L1 "$MERGED (A)" \
- --L2 "$MERGED (B)" "$LOCAL" "$REMOTE" \
- > /dev/null 2>&1)
- fi
- ;;
- kompare)
- "$merge_tool_path" "$LOCAL" "$REMOTE"
- ;;
- meld)
- if merge_mode; then
- touch "$BACKUP"
- "$merge_tool_path" "$LOCAL" "$MERGED" "$REMOTE"
- check_unchanged
- else
- "$merge_tool_path" "$LOCAL" "$REMOTE"
- fi
- ;;
- opendiff)
- if merge_mode; then
- touch "$BACKUP"
- if $base_present; then
- "$merge_tool_path" "$LOCAL" "$REMOTE" \
- -ancestor "$BASE" \
- -merge "$MERGED" | cat
- else
- "$merge_tool_path" "$LOCAL" "$REMOTE" \
- -merge "$MERGED" | cat
- fi
- check_unchanged
- else
- "$merge_tool_path" "$LOCAL" "$REMOTE" | cat
- fi
- ;;
- p4merge)
- if merge_mode; then
- touch "$BACKUP"
- $base_present || >"$BASE"
- "$merge_tool_path" "$BASE" "$LOCAL" "$REMOTE" "$MERGED"
- check_unchanged
- else
- "$merge_tool_path" "$LOCAL" "$REMOTE"
- fi
- ;;
- tkdiff)
- if merge_mode; then
- if $base_present; then
- "$merge_tool_path" -a "$BASE" \
- -o "$MERGED" "$LOCAL" "$REMOTE"
- else
- "$merge_tool_path" \
- -o "$MERGED" "$LOCAL" "$REMOTE"
- fi
- status=$?
- else
- "$merge_tool_path" "$LOCAL" "$REMOTE"
- fi
- ;;
- tortoisemerge)
- if $base_present; then
- touch "$BACKUP"
- "$merge_tool_path" \
- -base:"$BASE" -mine:"$LOCAL" \
- -theirs:"$REMOTE" -merged:"$MERGED"
- check_unchanged
- else
- echo "TortoiseMerge cannot be used without a base" 1>&2
- status=1
- fi
- ;;
- xxdiff)
- if merge_mode; then
- touch "$BACKUP"
- if $base_present; then
- "$merge_tool_path" -X --show-merged-pane \
- -R 'Accel.SaveAsMerged: "Ctrl-S"' \
- -R 'Accel.Search: "Ctrl+F"' \
- -R 'Accel.SearchForward: "Ctrl-G"' \
- --merged-file "$MERGED" \
- "$LOCAL" "$BASE" "$REMOTE"
- else
- "$merge_tool_path" -X $extra \
- -R 'Accel.SaveAsMerged: "Ctrl-S"' \
- -R 'Accel.Search: "Ctrl+F"' \
- -R 'Accel.SearchForward: "Ctrl-G"' \
- --merged-file "$MERGED" \
- "$LOCAL" "$REMOTE"
- fi
- check_unchanged
- else
- "$merge_tool_path" \
- -R 'Accel.Search: "Ctrl+F"' \
- -R 'Accel.SearchForward: "Ctrl-G"' \
- "$LOCAL" "$REMOTE"
- fi
- ;;
- *)
- merge_tool_cmd="$(get_merge_tool_cmd "$1")"
- if test -z "$merge_tool_cmd"; then
- if merge_mode; then
- status=1
- fi
- break
- fi
- if merge_mode; then
- trust_exit_code="$(git config --bool \
- mergetool."$1".trustExitCode || echo false)"
- if test "$trust_exit_code" = "false"; then
- touch "$BACKUP"
- ( eval $merge_tool_cmd )
- check_unchanged
- else
- ( eval $merge_tool_cmd )
- status=$?
- fi
- else
- ( eval $merge_tool_cmd )
- fi
- ;;
- esac
+ # Bring tool-specific functions into scope
+ setup_tool "$1"
+
+ if merge_mode
+ then
+ merge_cmd "$1"
+ else
+ diff_cmd "$1"
+ fi
return $status
}
guess_merge_tool () {
- if merge_mode; then
+ if merge_mode
+ then
tools="tortoisemerge"
else
tools="kompare"
fi
- if test -n "$DISPLAY"; then
- if test -n "$GNOME_DESKTOP_SESSION_ID" ; then
+ if test -n "$DISPLAY"
+ then
+ if test -n "$GNOME_DESKTOP_SESSION_ID"
+ then
tools="meld opendiff kdiff3 tkdiff xxdiff $tools"
else
tools="opendiff kdiff3 tkdiff xxdiff meld $tools"
for i in $tools
do
merge_tool_path="$(translate_merge_tool_path "$i")"
- if type "$merge_tool_path" > /dev/null 2>&1; then
+ if type "$merge_tool_path" >/dev/null 2>&1
+ then
echo "$i"
return 0
fi
get_configured_merge_tool () {
# Diff mode first tries diff.tool and falls back to merge.tool.
# Merge mode only checks merge.tool
- if diff_mode; then
+ if diff_mode
+ then
merge_tool=$(git config diff.tool || git config merge.tool)
else
merge_tool=$(git config merge.tool)
fi
- if test -n "$merge_tool" && ! valid_tool "$merge_tool"; then
+ if test -n "$merge_tool" && ! valid_tool "$merge_tool"
+ then
echo >&2 "git config option $TOOL_MODE.tool set to unknown tool: $merge_tool"
echo >&2 "Resetting to default..."
return 1
get_merge_tool_path () {
# A merge tool has been set, so verify that it's valid.
- if test -n "$1"; then
- merge_tool="$1"
- else
- merge_tool="$(get_merge_tool)"
- fi
- if ! valid_tool "$merge_tool"; then
+ merge_tool="$1"
+ if ! valid_tool "$merge_tool"
+ then
echo >&2 "Unknown merge tool $merge_tool"
exit 1
fi
- if diff_mode; then
+ if diff_mode
+ then
merge_tool_path=$(git config difftool."$merge_tool".path ||
git config mergetool."$merge_tool".path)
else
merge_tool_path=$(git config mergetool."$merge_tool".path)
fi
- if test -z "$merge_tool_path"; then
+ if test -z "$merge_tool_path"
+ then
merge_tool_path="$(translate_merge_tool_path "$merge_tool")"
fi
if test -z "$(get_merge_tool_cmd "$merge_tool")" &&
- ! type "$merge_tool_path" > /dev/null 2>&1; then
+ ! type "$merge_tool_path" >/dev/null 2>&1
+ then
echo >&2 "The $TOOL_MODE tool $merge_tool is not available as"\
"'$merge_tool_path'"
exit 1
get_merge_tool () {
# Check if a merge tool has been configured
- merge_tool=$(get_configured_merge_tool)
+ merge_tool="$(get_configured_merge_tool)"
# Try to guess an appropriate merge tool if no tool has been set.
- if test -z "$merge_tool"; then
+ if test -z "$merge_tool"
+ then
merge_tool="$(guess_merge_tool)" || exit
fi
echo "$merge_tool"
$_prefix, $_no_checkout, $_url, $_verbose,
$_git_format, $_commit_url, $_tag, $_merge_info);
$Git::SVN::_follow_parent = 1;
+$SVN::Git::Fetcher::_placeholder_filename = ".gitignore";
$_q ||= 0;
my %remote_opts = ( 'username=s' => \$Git::SVN::Prompt::_username,
'config-dir=s' => \$Git::SVN::Ra::config_dir,
%fc_opts } ],
clone => [ \&cmd_clone, "Initialize and fetch revisions",
{ 'revision|r=s' => \$_revision,
+ 'preserve-empty-dirs' =>
+ \$SVN::Git::Fetcher::_preserve_empty_dirs,
+ 'placeholder-filename=s' =>
+ \$SVN::Git::Fetcher::_placeholder_filename,
%fc_opts, %init_opts } ],
init => [ \&cmd_init, "Initialize a repo for tracking" .
" (requires URL argument)",
my $ignore_regex = \$SVN::Git::Fetcher::_ignore_regex;
command_noisy('config', "$pfx.ignore-paths", $$ignore_regex)
if defined $$ignore_regex;
+
+ if (defined $SVN::Git::Fetcher::_preserve_empty_dirs) {
+ my $fname = \$SVN::Git::Fetcher::_placeholder_filename;
+ command_noisy('config', "$pfx.preserve-empty-dirs", 'true');
+ command_noisy('config', "$pfx.placeholder-filename", $$fname);
+ }
}
sub init_subdir {
}
my $expect_url = $url;
Git::SVN::remove_username($expect_url);
+ if (defined($_merge_info)) {
+ $_merge_info =~ tr{ }{\n};
+ }
while (1) {
my $d = shift @$linear_refs or last;
unless (defined $last_rev) {
my (undef, $max_commit) = $gs->rev_map_max(1);
last if (!$max_commit);
my ($url) = ::cmt_metadata($max_commit);
- last if ($url eq $gs->full_url);
+ last if ($url eq $gs->metadata_url);
$ref_id .= '-';
}
print STDERR "Initializing parent: $ref_id\n" unless $::_q > 1;
}
package SVN::Git::Fetcher;
-use vars qw/@ISA/;
+use vars qw/@ISA $_ignore_regex $_preserve_empty_dirs $_placeholder_filename
+ @deleted_gpath %added_placeholder $repo_id/;
use strict;
use warnings;
use Carp qw/croak/;
+use File::Basename qw/dirname/;
use IO::File qw//;
-use vars qw/$_ignore_regex/;
# file baton members: path, mode_a, mode_b, pool, fh, blob, base
sub new {
$self->{empty_symlinks} =
_mark_empty_symlinks($git_svn, $switch_path);
}
- $self->{ignore_regex} = eval { command_oneline('config', '--get',
- "svn-remote.$git_svn->{repo_id}.ignore-paths") };
+
+ # some options are read globally, but can be overridden locally
+ # per [svn-remote "..."] section. Command-line options will *NOT*
+ # override options set in an [svn-remote "..."] section
+ $repo_id = $git_svn->{repo_id};
+ my $k = "svn-remote.$repo_id.ignore-paths";
+ my $v = eval { command_oneline('config', '--get', $k) };
+ $self->{ignore_regex} = $v;
+
+ $k = "svn-remote.$repo_id.preserve-empty-dirs";
+ $v = eval { command_oneline('config', '--get', '--bool', $k) };
+ if ($v && $v eq 'true') {
+ $_preserve_empty_dirs = 1;
+ $k = "svn-remote.$repo_id.placeholder-filename";
+ $v = eval { command_oneline('config', '--get', $k) };
+ $_placeholder_filename = $v;
+ }
+
+ # Load the list of placeholder files added during previous invocations.
+ $k = "svn-remote.$repo_id.added-placeholder";
+ $v = eval { command_oneline('config', '--get-all', $k) };
+ if ($_preserve_empty_dirs && $v) {
+ # command() prints errors to stderr, so we only call it if
+ # command_oneline() succeeded.
+ my @v = command('config', '--get-all', $k);
+ $added_placeholder{ dirname($_) } = $_ foreach @v;
+ }
+
$self->{empty} = {};
$self->{dir_prop} = {};
$self->{file_prop} = {};
$self->{gii}->remove($gpath);
print "\tD\t$gpath\n" unless $::_q;
}
+ # Don't add to @deleted_gpath if we're deleting a placeholder file.
+ push @deleted_gpath, $gpath unless $added_placeholder{dirname($path)};
$self->{empty}->{$path} = 0;
undef;
}
my ($dir, $file) = ($path =~ m#^(.*?)/?([^/]+)$#);
delete $self->{empty}->{$dir};
$mode = '100644';
+
+ if ($added_placeholder{$dir}) {
+ # Remove our placeholder file, if we created one.
+ delete_entry($self, $added_placeholder{$dir})
+ unless $path eq $added_placeholder{$dir};
+ delete $added_placeholder{$dir}
+ }
}
+
{ path => $path, mode_a => $mode, mode_b => $mode,
pool => SVN::Pool->new, action => 'A' };
}
chomp;
$self->{gii}->remove($_);
print "\tD\t$_\n" unless $::_q;
+ push @deleted_gpath, $gpath;
}
command_close_pipe($ls, $ctx);
$self->{empty}->{$path} = 0;
my ($dir, $file) = ($path =~ m#^(.*?)/?([^/]+)$#);
delete $self->{empty}->{$dir};
$self->{empty}->{$path} = 1;
+
+ if ($added_placeholder{$dir}) {
+ # Remove our placeholder file, if we created one.
+ delete_entry($self, $added_placeholder{$dir});
+ delete $added_placeholder{$dir}
+ }
+
out:
{ path => $path };
}
sub close_edit {
my $self = shift;
+
+ if ($_preserve_empty_dirs) {
+ my @empty_dirs;
+
+ # Any entry flagged as empty that also has an associated
+ # dir_prop represents a newly created empty directory.
+ foreach my $i (keys %{$self->{empty}}) {
+ push @empty_dirs, $i if exists $self->{dir_prop}->{$i};
+ }
+
+		# Search for directories that have become empty due to
+		# subsequent file deletes.
+ push @empty_dirs, $self->find_empty_directories();
+
+ # Finally, add a placeholder file to each empty directory.
+ $self->add_placeholder_file($_) foreach (@empty_dirs);
+
+ $self->stash_placeholder_list();
+ }
+
$self->{git_commit_ok} = 1;
$self->{nr} = $self->{gii}->{nr};
delete $self->{gii};
$self->SUPER::close_edit(@_);
}
+sub find_empty_directories {
+ my ($self) = @_;
+ my @empty_dirs;
+ my %dirs = map { dirname($_) => 1 } @deleted_gpath;
+
+ foreach my $dir (sort keys %dirs) {
+ next if $dir eq ".";
+
+ # If there have been any additions to this directory, there is
+ # no reason to check if it is empty.
+ my $skip_added = 0;
+ foreach my $t (qw/dir_prop file_prop/) {
+ foreach my $path (keys %{ $self->{$t} }) {
+ if (exists $self->{$t}->{dirname($path)}) {
+ $skip_added = 1;
+ last;
+ }
+ }
+ last if $skip_added;
+ }
+ next if $skip_added;
+
+ # Use `git ls-tree` to get the filenames of this directory
+ # that existed prior to this particular commit.
+ my $ls = command('ls-tree', '-z', '--name-only',
+ $self->{c}, "$dir/");
+ my %files = map { $_ => 1 } split(/\0/, $ls);
+
+ # Remove the filenames that were deleted during this commit.
+ delete $files{$_} foreach (@deleted_gpath);
+
+ # Report the directory if there are no filenames left.
+ push @empty_dirs, $dir unless (scalar %files);
+ }
+ @empty_dirs;
+}
+
+sub add_placeholder_file {
+ my ($self, $dir) = @_;
+ my $path = "$dir/$_placeholder_filename";
+ my $gpath = $self->git_path($path);
+
+ my $fh = $::_repository->temp_acquire($gpath);
+ my $hash = $::_repository->hash_and_insert_object(Git::temp_path($fh));
+ Git::temp_release($fh, 1);
+ $self->{gii}->update('100644', $hash, $gpath) or croak $!;
+
+ # The directory should no longer be considered empty.
+ delete $self->{empty}->{$dir} if exists $self->{empty}->{$dir};
+
+ # Keep track of any placeholder files we create.
+ $added_placeholder{$dir} = $path;
+}
+
+sub stash_placeholder_list {
+ my ($self) = @_;
+ my $k = "svn-remote.$repo_id.added-placeholder";
+ my $v = eval { command_oneline('config', '--get-all', $k) };
+ command_noisy('config', '--unset-all', $k) if $v;
+ foreach (values %added_placeholder) {
+ command_noisy('config', '--add', $k, $_);
+ }
+}
+
package SVN::Git::Editor;
use vars qw/@ISA $_rmdir $_cp_similarity $_find_copies_harder $_rename_limit/;
use strict;
const char *tmp;
int status;
+ if (use_pager == -1)
+ use_pager = check_pager_config(argv[0]);
commit_pager_choice();
strbuf_addf(&cmd, "git-%s", argv[0]);
static unsigned short graph_get_current_column_color(const struct git_graph *graph)
{
- if (!DIFF_OPT_TST(&graph->revs->diffopt, COLOR_DIFF))
+ if (!want_color(graph->revs->diffopt.use_color))
return column_colors_max;
return graph->default_column_color;
}
}
#endif /* !USE_LIBPCRE */
+static int is_fixed(const char *s, size_t len)
+{
+ size_t i;
+
+ /* regcomp cannot accept patterns with NULs so we
+ * consider any pattern containing a NUL fixed.
+ */
+ if (memchr(s, 0, len))
+ return 1;
+
+ for (i = 0; i < len; i++) {
+ if (is_regex_special(s[i]))
+ return 0;
+ }
+
+ return 1;
+}
+
static void compile_regexp(struct grep_pat *p, struct grep_opt *opt)
{
int err;
p->word_regexp = opt->word_regexp;
p->ignore_case = opt->ignore_case;
- p->fixed = opt->fixed;
- if (p->fixed)
+ if (opt->fixed || is_fixed(p->pattern, p->patternlen))
+ p->fixed = 1;
+ else
+ p->fixed = 0;
+
+ if (p->fixed) {
+ if (opt->regflags & REG_ICASE || p->ignore_case) {
+ static char trans[256];
+ int i;
+ for (i = 0; i < 256; i++)
+ trans[i] = tolower(i);
+ p->kws = kwsalloc(trans);
+ } else {
+ p->kws = kwsalloc(NULL);
+ }
+ kwsincr(p->kws, p->pattern, p->patternlen);
+ kwsprep(p->kws);
return;
+ }
if (opt->pcre) {
compile_pcre_regexp(p, opt);
case GREP_PATTERN: /* atom */
case GREP_PATTERN_HEAD:
case GREP_PATTERN_BODY:
- if (p->pcre_regexp)
+ if (p->kws)
+ kwsfree(p->kws);
+ else if (p->pcre_regexp)
free_pcre_regexp(p);
else
regfree(&p->regexp);
static void output_color(struct grep_opt *opt, const void *data, size_t size,
const char *color)
{
- if (opt->color && color && color[0]) {
+ if (want_color(opt->color) && color && color[0]) {
opt->output(opt, color, strlen(color));
opt->output(opt, data, size);
opt->output(opt, GIT_COLOR_RESET, strlen(GIT_COLOR_RESET));
static int fixmatch(struct grep_pat *p, char *line, char *eol,
regmatch_t *match)
{
- char *hit;
-
- if (p->ignore_case) {
- char *s = line;
- do {
- hit = strcasestr(s, p->pattern);
- if (hit)
- break;
- s += strlen(s) + 1;
- } while (s < eol);
- } else
- hit = memmem(line, eol - line, p->pattern, p->patternlen);
-
- if (!hit) {
+ struct kwsmatch kwsm;
+ size_t offset = kwsexec(p->kws, line, eol - line, &kwsm);
+ if (offset == -1) {
match->rm_so = match->rm_eo = -1;
return REG_NOMATCH;
- }
- else {
- match->rm_so = hit - line;
- match->rm_eo = match->rm_so + p->patternlen;
+ } else {
+ match->rm_so = offset;
+ match->rm_eo = match->rm_so + kwsm.size[0];
return 0;
}
}
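
With the change above, a pattern takes the fixed-string code path not only
when -F is given but also whenever it contains no regex metacharacters (or
contains a NUL, which regcomp cannot accept).  Below is a rough stand-alone
approximation of that classification; the exact character set is decided by
git's is_regex_special(), so the list used here is only an assumption:

    #include <stdio.h>
    #include <string.h>

    /* Rough stand-in for is_fixed(): fixed if the pattern has a NUL or
     * none of the usual regex metacharacters (the list is an assumption). */
    static int looks_fixed(const char *s, size_t len)
    {
        size_t i;

        if (memchr(s, 0, len))
            return 1;
        for (i = 0; i < len; i++)
            if (strchr("^$.[]\\*+?{}()|", s[i]))
                return 0;
        return 1;
    }

    int main(void)
    {
        const char *pats[] = { "TODO", "foo_bar", "foo.*bar", "^int main" };
        int i;

        for (i = 0; i < 4; i++)
            printf("%-10s -> %s\n", pats[i],
                   looks_fixed(pats[i], strlen(pats[i])) ? "fixed" : "regex");
        return 0;
    }
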
typedef int pcre;
typedef int pcre_extra;
#endif
+#include "kwset.h"
enum grep_pat_token {
GREP_PATTERN,
regex_t regexp;
pcre *pcre_regexp;
pcre_extra *pcre_extra_info;
+ kwset_t kws;
unsigned fixed:1;
unsigned ignore_case:1;
unsigned word_regexp:1;
commits = 1;
}
+ if (get_all == 0)
+ warning("http-fetch: use without -a is deprecated.\n"
+ "In a future release, -a will become the default.");
+
if (argv[arg])
str_end_url_with_slash(argv[arg], &url);
--- /dev/null
+/*
+ * This file has been copied from commit e7ac713d^ in the GNU grep git
+ * repository. A few small changes have been made to adapt the code to
+ * Git.
+ */
+
+/* kwset.c - search for any of a set of keywords.
+ Copyright 1989, 1998, 2000, 2005 Free Software Foundation, Inc.
+
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2, or (at your option)
+ any later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with this program; if not, write to the Free Software
+ Foundation, Inc., 51 Franklin Street - Fifth Floor, Boston, MA
+ 02110-1301, USA. */
+
+/* Written August 1989 by Mike Haertel.
+ The author may be reached (Email) at the address mike@ai.mit.edu,
+ or (US mail) as Mike Haertel c/o Free Software Foundation. */
+
+/* The algorithm implemented by these routines bears a startling resemblance
+ to one discovered by Beate Commentz-Walter, although it is not identical.
+ See "A String Matching Algorithm Fast on the Average," Technical Report,
+ IBM-Germany, Scientific Center Heidelberg, Tiergartenstrasse 15, D-6900
+ Heidelberg, Germany. See also Aho, A.V., and M. Corasick, "Efficient
+ String Matching: An Aid to Bibliographic Search," CACM June 1975,
+ Vol. 18, No. 6, which describes the failure function used below. */
+
+#include "cache.h"
+
+#include "kwset.h"
+#include "compat/obstack.h"
+
+#define NCHAR (UCHAR_MAX + 1)
+#define obstack_chunk_alloc xmalloc
+#define obstack_chunk_free free
+
+#define U(c) ((unsigned char) (c))
+
+/* Balanced tree of edges and labels leaving a given trie node. */
+struct tree
+{
+ struct tree *llink; /* Left link; MUST be first field. */
+ struct tree *rlink; /* Right link (to larger labels). */
+ struct trie *trie; /* Trie node pointed to by this edge. */
+ unsigned char label; /* Label on this edge. */
+ char balance; /* Difference in depths of subtrees. */
+};
+
+/* Node of a trie representing a set of reversed keywords. */
+struct trie
+{
+ unsigned int accepting; /* Word index of accepted word, or zero. */
+ struct tree *links; /* Tree of edges leaving this node. */
+ struct trie *parent; /* Parent of this node. */
+ struct trie *next; /* List of all trie nodes in level order. */
+ struct trie *fail; /* Aho-Corasick failure function. */
+ int depth; /* Depth of this node from the root. */
+ int shift; /* Shift function for search failures. */
+  int maxshift;			/* Max shift of self and descendants. */
+};
+
+/* Structure returned opaquely to the caller, containing everything. */
+struct kwset
+{
+ struct obstack obstack; /* Obstack for node allocation. */
+ int words; /* Number of words in the trie. */
+ struct trie *trie; /* The trie itself. */
+ int mind; /* Minimum depth of an accepting node. */
+ int maxd; /* Maximum depth of any node. */
+ unsigned char delta[NCHAR]; /* Delta table for rapid search. */
+ struct trie *next[NCHAR]; /* Table of children of the root. */
+ char *target; /* Target string if there's only one. */
+ int mind2; /* Used in Boyer-Moore search for one string. */
+ char const *trans; /* Character translation table. */
+};
+
+/* Allocate and initialize a keyword set object, returning an opaque
+ pointer to it. Return NULL if memory is not available. */
+kwset_t
+kwsalloc (char const *trans)
+{
+ struct kwset *kwset;
+
+ kwset = (struct kwset *) xmalloc(sizeof (struct kwset));
+
+ obstack_init(&kwset->obstack);
+ kwset->words = 0;
+ kwset->trie
+ = (struct trie *) obstack_alloc(&kwset->obstack, sizeof (struct trie));
+ if (!kwset->trie)
+ {
+ kwsfree((kwset_t) kwset);
+ return NULL;
+ }
+ kwset->trie->accepting = 0;
+ kwset->trie->links = NULL;
+ kwset->trie->parent = NULL;
+ kwset->trie->next = NULL;
+ kwset->trie->fail = NULL;
+ kwset->trie->depth = 0;
+ kwset->trie->shift = 0;
+ kwset->mind = INT_MAX;
+ kwset->maxd = -1;
+ kwset->target = NULL;
+ kwset->trans = trans;
+
+ return (kwset_t) kwset;
+}
+
+/* This upper bound is valid for CHAR_BIT >= 4 and
+ exact for CHAR_BIT in { 4..11, 13, 15, 17, 19 }. */
+#define DEPTH_SIZE (CHAR_BIT + CHAR_BIT/2)
+
+/* Add the given string to the contents of the keyword set. Return NULL
+ for success, an error message otherwise. */
+const char *
+kwsincr (kwset_t kws, char const *text, size_t len)
+{
+ struct kwset *kwset;
+ register struct trie *trie;
+ register unsigned char label;
+ register struct tree *link;
+ register int depth;
+ struct tree *links[DEPTH_SIZE];
+ enum { L, R } dirs[DEPTH_SIZE];
+ struct tree *t, *r, *l, *rl, *lr;
+
+ kwset = (struct kwset *) kws;
+ trie = kwset->trie;
+ text += len;
+
+ /* Descend the trie (built of reversed keywords) character-by-character,
+ installing new nodes when necessary. */
+ while (len--)
+ {
+ label = kwset->trans ? kwset->trans[U(*--text)] : *--text;
+
+ /* Descend the tree of outgoing links for this trie node,
+ looking for the current character and keeping track
+ of the path followed. */
+ link = trie->links;
+ links[0] = (struct tree *) &trie->links;
+ dirs[0] = L;
+ depth = 1;
+
+ while (link && label != link->label)
+ {
+ links[depth] = link;
+ if (label < link->label)
+ dirs[depth++] = L, link = link->llink;
+ else
+ dirs[depth++] = R, link = link->rlink;
+ }
+
+ /* The current character doesn't have an outgoing link at
+ this trie node, so build a new trie node and install
+ a link in the current trie node's tree. */
+ if (!link)
+ {
+ link = (struct tree *) obstack_alloc(&kwset->obstack,
+ sizeof (struct tree));
+ if (!link)
+ return "memory exhausted";
+ link->llink = NULL;
+ link->rlink = NULL;
+ link->trie = (struct trie *) obstack_alloc(&kwset->obstack,
+ sizeof (struct trie));
+ if (!link->trie)
+ {
+ obstack_free(&kwset->obstack, link);
+ return "memory exhausted";
+ }
+ link->trie->accepting = 0;
+ link->trie->links = NULL;
+ link->trie->parent = trie;
+ link->trie->next = NULL;
+ link->trie->fail = NULL;
+ link->trie->depth = trie->depth + 1;
+ link->trie->shift = 0;
+ link->label = label;
+ link->balance = 0;
+
+ /* Install the new tree node in its parent. */
+ if (dirs[--depth] == L)
+ links[depth]->llink = link;
+ else
+ links[depth]->rlink = link;
+
+ /* Back up the tree fixing the balance flags. */
+ while (depth && !links[depth]->balance)
+ {
+ if (dirs[depth] == L)
+ --links[depth]->balance;
+ else
+ ++links[depth]->balance;
+ --depth;
+ }
+
+ /* Rebalance the tree by pointer rotations if necessary. */
+ if (depth && ((dirs[depth] == L && --links[depth]->balance)
+ || (dirs[depth] == R && ++links[depth]->balance)))
+ {
+ switch (links[depth]->balance)
+ {
+ case (char) -2:
+ switch (dirs[depth + 1])
+ {
+ case L:
+ r = links[depth], t = r->llink, rl = t->rlink;
+ t->rlink = r, r->llink = rl;
+ t->balance = r->balance = 0;
+ break;
+ case R:
+ r = links[depth], l = r->llink, t = l->rlink;
+ rl = t->rlink, lr = t->llink;
+ t->llink = l, l->rlink = lr, t->rlink = r, r->llink = rl;
+ l->balance = t->balance != 1 ? 0 : -1;
+ r->balance = t->balance != (char) -1 ? 0 : 1;
+ t->balance = 0;
+ break;
+ default:
+ abort ();
+ }
+ break;
+ case 2:
+ switch (dirs[depth + 1])
+ {
+ case R:
+ l = links[depth], t = l->rlink, lr = t->llink;
+ t->llink = l, l->rlink = lr;
+ t->balance = l->balance = 0;
+ break;
+ case L:
+ l = links[depth], r = l->rlink, t = r->llink;
+ lr = t->llink, rl = t->rlink;
+ t->llink = l, l->rlink = lr, t->rlink = r, r->llink = rl;
+ l->balance = t->balance != 1 ? 0 : -1;
+ r->balance = t->balance != (char) -1 ? 0 : 1;
+ t->balance = 0;
+ break;
+ default:
+ abort ();
+ }
+ break;
+ default:
+ abort ();
+ }
+
+ if (dirs[depth - 1] == L)
+ links[depth - 1]->llink = t;
+ else
+ links[depth - 1]->rlink = t;
+ }
+ }
+
+ trie = link->trie;
+ }
+
+ /* Mark the node we finally reached as accepting, encoding the
+ index number of this word in the keyword set so far. */
+ if (!trie->accepting)
+ trie->accepting = 1 + 2 * kwset->words;
+ ++kwset->words;
+
+ /* Keep track of the longest and shortest string of the keyword set. */
+ if (trie->depth < kwset->mind)
+ kwset->mind = trie->depth;
+ if (trie->depth > kwset->maxd)
+ kwset->maxd = trie->depth;
+
+ return NULL;
+}
+
+/* Enqueue the trie nodes referenced from the given tree in the
+ given queue. */
+static void
+enqueue (struct tree *tree, struct trie **last)
+{
+ if (!tree)
+ return;
+ enqueue(tree->llink, last);
+ enqueue(tree->rlink, last);
+ (*last) = (*last)->next = tree->trie;
+}
+
+/* Compute the Aho-Corasick failure function for the trie nodes referenced
+ from the given tree, given the failure function for their parent as
+ well as a last resort failure node. */
+static void
+treefails (register struct tree const *tree, struct trie const *fail,
+ struct trie *recourse)
+{
+ register struct tree *link;
+
+ if (!tree)
+ return;
+
+ treefails(tree->llink, fail, recourse);
+ treefails(tree->rlink, fail, recourse);
+
+ /* Find, in the chain of fails going back to the root, the first
+     node that has a descendant on the current label. */
+ while (fail)
+ {
+ link = fail->links;
+ while (link && tree->label != link->label)
+ if (tree->label < link->label)
+ link = link->llink;
+ else
+ link = link->rlink;
+ if (link)
+ {
+ tree->trie->fail = link->trie;
+ return;
+ }
+ fail = fail->fail;
+ }
+
+ tree->trie->fail = recourse;
+}
+
+/* Set delta entries for the links of the given tree such that
+ the preexisting delta value is larger than the current depth. */
+static void
+treedelta (register struct tree const *tree,
+ register unsigned int depth,
+ unsigned char delta[])
+{
+ if (!tree)
+ return;
+ treedelta(tree->llink, depth, delta);
+ treedelta(tree->rlink, depth, delta);
+ if (depth < delta[tree->label])
+ delta[tree->label] = depth;
+}
+
+/* Return true if A has every label in B. */
+static int
+hasevery (register struct tree const *a, register struct tree const *b)
+{
+ if (!b)
+ return 1;
+ if (!hasevery(a, b->llink))
+ return 0;
+ if (!hasevery(a, b->rlink))
+ return 0;
+ while (a && b->label != a->label)
+ if (b->label < a->label)
+ a = a->llink;
+ else
+ a = a->rlink;
+ return !!a;
+}
+
+/* Compute a vector, indexed by character code, of the trie nodes
+ referenced from the given tree. */
+static void
+treenext (struct tree const *tree, struct trie *next[])
+{
+ if (!tree)
+ return;
+ treenext(tree->llink, next);
+ treenext(tree->rlink, next);
+ next[tree->label] = tree->trie;
+}
+
+/* Compute the shift for each trie node, as well as the delta
+ table and next cache for the given keyword set. */
+const char *
+kwsprep (kwset_t kws)
+{
+ register struct kwset *kwset;
+ register int i;
+ register struct trie *curr;
+ register char const *trans;
+ unsigned char delta[NCHAR];
+
+ kwset = (struct kwset *) kws;
+
+ /* Initial values for the delta table; will be changed later. The
+ delta entry for a given character is the smallest depth of any
+ node at which an outgoing edge is labeled by that character. */
+ memset(delta, kwset->mind < UCHAR_MAX ? kwset->mind : UCHAR_MAX, NCHAR);
+
+ /* Check if we can use the simple boyer-moore algorithm, instead
+ of the hairy commentz-walter algorithm. */
+ if (kwset->words == 1 && kwset->trans == NULL)
+ {
+ char c;
+
+ /* Looking for just one string. Extract it from the trie. */
+ kwset->target = obstack_alloc(&kwset->obstack, kwset->mind);
+ if (!kwset->target)
+ return "memory exhausted";
+ for (i = kwset->mind - 1, curr = kwset->trie; i >= 0; --i)
+ {
+ kwset->target[i] = curr->links->label;
+ curr = curr->links->trie;
+ }
+ /* Build the Boyer Moore delta. Boy that's easy compared to CW. */
+ for (i = 0; i < kwset->mind; ++i)
+ delta[U(kwset->target[i])] = kwset->mind - (i + 1);
+ /* Find the minimal delta2 shift that we might make after
+ a backwards match has failed. */
+ c = kwset->target[kwset->mind - 1];
+ for (i = kwset->mind - 2; i >= 0; --i)
+ if (kwset->target[i] == c)
+ break;
+ kwset->mind2 = kwset->mind - (i + 1);
+ }
+ else
+ {
+ register struct trie *fail;
+ struct trie *last, *next[NCHAR];
+
+ /* Traverse the nodes of the trie in level order, simultaneously
+ computing the delta table, failure function, and shift function. */
+ for (curr = last = kwset->trie; curr; curr = curr->next)
+ {
+	  /* Enqueue the immediate descendants in the level order queue. */
+ enqueue(curr->links, &last);
+
+ curr->shift = kwset->mind;
+ curr->maxshift = kwset->mind;
+
+	  /* Update the delta table for the descendants of this node. */
+ treedelta(curr->links, curr->depth, delta);
+
+	  /* Compute the failure function for the descendants of this node. */
+ treefails(curr->links, curr->fail, kwset->trie);
+
+ /* Update the shifts at each node in the current node's chain
+ of fails back to the root. */
+ for (fail = curr->fail; fail; fail = fail->fail)
+ {
+ /* If the current node has some outgoing edge that the fail
+ doesn't, then the shift at the fail should be no larger
+ than the difference of their depths. */
+ if (!hasevery(fail->links, curr->links))
+ if (curr->depth - fail->depth < fail->shift)
+ fail->shift = curr->depth - fail->depth;
+
+ /* If the current node is accepting then the shift at the
+		 fail and its descendants should be no larger than the
+ difference of their depths. */
+ if (curr->accepting && fail->maxshift > curr->depth - fail->depth)
+ fail->maxshift = curr->depth - fail->depth;
+ }
+ }
+
+ /* Traverse the trie in level order again, fixing up all nodes whose
+ shift exceeds their inherited maxshift. */
+ for (curr = kwset->trie->next; curr; curr = curr->next)
+ {
+ if (curr->maxshift > curr->parent->maxshift)
+ curr->maxshift = curr->parent->maxshift;
+ if (curr->shift > curr->maxshift)
+ curr->shift = curr->maxshift;
+ }
+
+ /* Create a vector, indexed by character code, of the outgoing links
+ from the root node. */
+ for (i = 0; i < NCHAR; ++i)
+ next[i] = NULL;
+ treenext(kwset->trie->links, next);
+
+ if ((trans = kwset->trans) != NULL)
+ for (i = 0; i < NCHAR; ++i)
+ kwset->next[i] = next[U(trans[i])];
+ else
+ memcpy(kwset->next, next, NCHAR * sizeof(struct trie *));
+ }
+
+ /* Fix things up for any translation table. */
+ if ((trans = kwset->trans) != NULL)
+ for (i = 0; i < NCHAR; ++i)
+ kwset->delta[i] = delta[U(trans[i])];
+ else
+ memcpy(kwset->delta, delta, NCHAR);
+
+ return NULL;
+}
+
+/* Fast boyer-moore search. */
+static size_t
+bmexec (kwset_t kws, char const *text, size_t size)
+{
+ struct kwset const *kwset;
+ register unsigned char const *d1;
+ register char const *ep, *sp, *tp;
+ register int d, gc, i, len, md2;
+
+ kwset = (struct kwset const *) kws;
+ len = kwset->mind;
+
+ if (len == 0)
+ return 0;
+ if (len > size)
+ return -1;
+ if (len == 1)
+ {
+ tp = memchr (text, kwset->target[0], size);
+ return tp ? tp - text : -1;
+ }
+
+ d1 = kwset->delta;
+ sp = kwset->target + len;
+ gc = U(sp[-2]);
+ md2 = kwset->mind2;
+ tp = text + len;
+
+ /* Significance of 12: 1 (initial offset) + 10 (skip loop) + 1 (md2). */
+ if (size > 12 * len)
+ /* 11 is not a bug, the initial offset happens only once. */
+ for (ep = text + size - 11 * len;;)
+ {
+ while (tp <= ep)
+ {
+ d = d1[U(tp[-1])], tp += d;
+ d = d1[U(tp[-1])], tp += d;
+ if (d == 0)
+ goto found;
+ d = d1[U(tp[-1])], tp += d;
+ d = d1[U(tp[-1])], tp += d;
+ d = d1[U(tp[-1])], tp += d;
+ if (d == 0)
+ goto found;
+ d = d1[U(tp[-1])], tp += d;
+ d = d1[U(tp[-1])], tp += d;
+ d = d1[U(tp[-1])], tp += d;
+ if (d == 0)
+ goto found;
+ d = d1[U(tp[-1])], tp += d;
+ d = d1[U(tp[-1])], tp += d;
+ }
+ break;
+ found:
+ if (U(tp[-2]) == gc)
+ {
+ for (i = 3; i <= len && U(tp[-i]) == U(sp[-i]); ++i)
+ ;
+ if (i > len)
+ return tp - len - text;
+ }
+ tp += md2;
+ }
+
+ /* Now we have only a few characters left to search. We
+ carefully avoid ever producing an out-of-bounds pointer. */
+ ep = text + size;
+ d = d1[U(tp[-1])];
+ while (d <= ep - tp)
+ {
+ d = d1[U((tp += d)[-1])];
+ if (d != 0)
+ continue;
+ if (U(tp[-2]) == gc)
+ {
+ for (i = 3; i <= len && U(tp[-i]) == U(sp[-i]); ++i)
+ ;
+ if (i > len)
+ return tp - len - text;
+ }
+ d = md2;
+ }
+
+ return -1;
+}
+
+/* Hairy multiple string search. */
+static size_t
+cwexec (kwset_t kws, char const *text, size_t len, struct kwsmatch *kwsmatch)
+{
+ struct kwset const *kwset;
+ struct trie * const *next;
+ struct trie const *trie;
+ struct trie const *accept;
+ char const *beg, *lim, *mch, *lmch;
+ register unsigned char c;
+ register unsigned char const *delta;
+ register int d;
+ register char const *end, *qlim;
+ register struct tree const *tree;
+ register char const *trans;
+
+ accept = NULL;
+
+ /* Initialize register copies and look for easy ways out. */
+ kwset = (struct kwset *) kws;
+ if (len < kwset->mind)
+ return -1;
+ next = kwset->next;
+ delta = kwset->delta;
+ trans = kwset->trans;
+ lim = text + len;
+ end = text;
+ if ((d = kwset->mind) != 0)
+ mch = NULL;
+ else
+ {
+ mch = text, accept = kwset->trie;
+ goto match;
+ }
+
+ if (len >= 4 * kwset->mind)
+ qlim = lim - 4 * kwset->mind;
+ else
+ qlim = NULL;
+
+ while (lim - end >= d)
+ {
+ if (qlim && end <= qlim)
+ {
+ end += d - 1;
+ while ((d = delta[c = *end]) && end < qlim)
+ {
+ end += d;
+ end += delta[U(*end)];
+ end += delta[U(*end)];
+ }
+ ++end;
+ }
+ else
+ d = delta[c = (end += d)[-1]];
+ if (d)
+ continue;
+ beg = end - 1;
+ trie = next[c];
+ if (trie->accepting)
+ {
+ mch = beg;
+ accept = trie;
+ }
+ d = trie->shift;
+ while (beg > text)
+ {
+ c = trans ? trans[U(*--beg)] : *--beg;
+ tree = trie->links;
+ while (tree && c != tree->label)
+ if (c < tree->label)
+ tree = tree->llink;
+ else
+ tree = tree->rlink;
+ if (tree)
+ {
+ trie = tree->trie;
+ if (trie->accepting)
+ {
+ mch = beg;
+ accept = trie;
+ }
+ }
+ else
+ break;
+ d = trie->shift;
+ }
+ if (mch)
+ goto match;
+ }
+ return -1;
+
+ match:
+ /* Given a known match, find the longest possible match anchored
+ at or before its starting point. This is nearly a verbatim
+ copy of the preceding main search loops. */
+ if (lim - mch > kwset->maxd)
+ lim = mch + kwset->maxd;
+ lmch = 0;
+ d = 1;
+ while (lim - end >= d)
+ {
+ if ((d = delta[c = (end += d)[-1]]) != 0)
+ continue;
+ beg = end - 1;
+ if (!(trie = next[c]))
+ {
+ d = 1;
+ continue;
+ }
+ if (trie->accepting && beg <= mch)
+ {
+ lmch = beg;
+ accept = trie;
+ }
+ d = trie->shift;
+ while (beg > text)
+ {
+ c = trans ? trans[U(*--beg)] : *--beg;
+ tree = trie->links;
+ while (tree && c != tree->label)
+ if (c < tree->label)
+ tree = tree->llink;
+ else
+ tree = tree->rlink;
+ if (tree)
+ {
+ trie = tree->trie;
+ if (trie->accepting && beg <= mch)
+ {
+ lmch = beg;
+ accept = trie;
+ }
+ }
+ else
+ break;
+ d = trie->shift;
+ }
+ if (lmch)
+ {
+ mch = lmch;
+ goto match;
+ }
+ if (!d)
+ d = 1;
+ }
+
+ if (kwsmatch)
+ {
+ kwsmatch->index = accept->accepting / 2;
+ kwsmatch->offset[0] = mch - text;
+ kwsmatch->size[0] = accept->depth;
+ }
+ return mch - text;
+}
+
+/* Search through the given text for a match of any member of the
+   given keyword set.  Return the offset of the first byte of the
+   matching substring, or (size_t) -1 if no match is found.  If
+   KWSMATCH is non-NULL, store in the referenced structure the
+   offset and length of the match and the index number of the
+   particular keyword matched. */
+size_t
+kwsexec (kwset_t kws, char const *text, size_t size,
+ struct kwsmatch *kwsmatch)
+{
+ struct kwset const *kwset = (struct kwset *) kws;
+ if (kwset->words == 1 && kwset->trans == NULL)
+ {
+ size_t ret = bmexec (kws, text, size);
+ if (kwsmatch != NULL && ret != (size_t) -1)
+ {
+ kwsmatch->index = 0;
+ kwsmatch->offset[0] = ret;
+ kwsmatch->size[0] = kwset->mind;
+ }
+ return ret;
+ }
+ else
+ return cwexec(kws, text, size, kwsmatch);
+}
+
+/* Free the components of the given keyword set. */
+void
+kwsfree (kwset_t kws)
+{
+ struct kwset *kwset;
+
+ kwset = (struct kwset *) kws;
+ obstack_free(&kwset->obstack, NULL);
+ free(kws);
+}
--- /dev/null
+/* This file has been copied from commit e7ac713d^ in the GNU grep git
+ * repository. A few small changes have been made to adapt the code to
+ * Git.
+ */
+
+/* kwset.h - header declaring the keyword set library.
+ Copyright (C) 1989, 1998, 2005 Free Software Foundation, Inc.
+
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2, or (at your option)
+ any later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with this program; if not, write to the Free Software
+ Foundation, Inc., 51 Franklin Street - Fifth Floor, Boston, MA
+ 02110-1301, USA. */
+
+/* Written August 1989 by Mike Haertel.
+ The author may be reached (Email) at the address mike@ai.mit.edu,
+ or (US mail) as Mike Haertel c/o Free Software Foundation. */
+
+struct kwsmatch
+{
+ int index; /* Index number of matching keyword. */
+ size_t offset[1]; /* Offset of each submatch. */
+ size_t size[1]; /* Length of each submatch. */
+};
+
+struct kwset_t;
+typedef struct kwset_t* kwset_t;
+
+/* Return an opaque pointer to a newly allocated keyword set, or NULL
+ if enough memory cannot be obtained. The argument if non-NULL
+ specifies a table of character translations to be applied to all
+ pattern and search text. */
+extern kwset_t kwsalloc(char const *);
+
+/* Incrementally extend the keyword set to include the given string.
+ Return NULL for success, or an error message. Remember an index
+ number for each keyword included in the set. */
+extern const char *kwsincr(kwset_t, char const *, size_t);
+
+/* When the keyword set has been completely built, prepare it for
+ use. Return NULL for success, or an error message. */
+extern const char *kwsprep(kwset_t);
+
+/* Search through the given buffer for a member of the keyword set.
+   Return the offset of the leftmost longest match found, or
+   (size_t) -1 if no match is found.  If the struct kwsmatch pointer
+   is non-NULL, store the offset, length and keyword index of the
+   match in the structure it points to. */
+extern size_t kwsexec(kwset_t, char const *, size_t, struct kwsmatch *);
+
+/* Deallocate the given keyword set and all its associated storage. */
+extern void kwsfree(kwset_t);
+
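
The declarations above spell out the life cycle the new grep code follows
for fixed strings: kwsalloc() (optionally with a translation table, which
compile_regexp() fills with tolower() to get case-insensitive matching),
kwsincr() to add patterns, kwsprep() to finalize, kwsexec() to search, and
kwsfree() to release.  A minimal usage sketch, assuming it is compiled
inside the git source tree so that cache.h and kwset.h are available:

    #include "cache.h"
    #include "kwset.h"

    int main(void)
    {
        static char lowered[256];
        struct kwsmatch m;
        kwset_t kws;
        const char *line = "Static int Foo_Bar(void);";
        const char *pattern = "foo_bar";
        size_t hit;
        int i;

        /* Lowercase translation table, as compile_regexp() builds it. */
        for (i = 0; i < 256; i++)
            lowered[i] = tolower(i);

        kws = kwsalloc(lowered);
        kwsincr(kws, pattern, strlen(pattern));
        kwsprep(kws);

        /* Case-insensitive search: matches "Foo_Bar" in the line. */
        hit = kwsexec(kws, line, strlen(line), &m);
        if (hit != (size_t)-1)
            printf("match at offset %d, length %d\n",
                   (int)hit, (int)m.size[0]);

        kwsfree(kws);
        return 0;
    }
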
DECORATION_REF_TAG,
DECORATION_REF_STASH,
DECORATION_REF_HEAD,
+ DECORATION_GRAFTED,
};
static char decoration_colors[][COLOR_MAXLEN] = {
GIT_COLOR_BOLD_YELLOW, /* REF_TAG */
GIT_COLOR_BOLD_MAGENTA, /* REF_STASH */
GIT_COLOR_BOLD_CYAN, /* REF_HEAD */
+ GIT_COLOR_BOLD_BLUE, /* GRAFTED */
};
static const char *decorate_get_color(int decorate_use_color, enum decoration_type ix)
{
- if (decorate_use_color)
+ if (want_color(decorate_use_color))
return decoration_colors[ix];
return "";
}
* for showing the commit sha1, use the same check for --decorate
*/
#define decorate_get_color_opt(o, ix) \
- decorate_get_color(DIFF_OPT_TST((o), COLOR_DIFF), ix)
+ decorate_get_color((o)->use_color, ix)
static void add_name_decoration(enum decoration_type type, const char *name, struct object *obj)
{
static int add_ref_decoration(const char *refname, const unsigned char *sha1, int flags, void *cb_data)
{
- struct object *obj = parse_object(sha1);
+ struct object *obj;
enum decoration_type type = DECORATION_NONE;
+
+ if (!prefixcmp(refname, "refs/replace/")) {
+ unsigned char original_sha1[20];
+ if (!read_replace_refs)
+ return 0;
+ if (get_sha1_hex(refname + 13, original_sha1)) {
+ warning("invalid replace ref %s", refname);
+ return 0;
+ }
+ obj = parse_object(original_sha1);
+ if (obj)
+ add_name_decoration(DECORATION_GRAFTED, "replaced", obj);
+ return 0;
+ }
+
+ obj = parse_object(sha1);
if (!obj)
return 0;
- if (!prefixcmp(refname, "refs/heads"))
+ if (!prefixcmp(refname, "refs/heads/"))
type = DECORATION_REF_LOCAL;
- else if (!prefixcmp(refname, "refs/remotes"))
+ else if (!prefixcmp(refname, "refs/remotes/"))
type = DECORATION_REF_REMOTE;
- else if (!prefixcmp(refname, "refs/tags"))
+ else if (!prefixcmp(refname, "refs/tags/"))
type = DECORATION_REF_TAG;
else if (!prefixcmp(refname, "refs/stash"))
type = DECORATION_REF_STASH;
return 0;
}
+static int add_graft_decoration(const struct commit_graft *graft, void *cb_data)
+{
+ struct commit *commit = lookup_commit(graft->sha1);
+ if (!commit)
+ return 0;
+ add_name_decoration(DECORATION_GRAFTED, "grafted", &commit->object);
+ return 0;
+}
+
void load_ref_decorations(int flags)
{
static int loaded;
loaded = 1;
for_each_ref(add_ref_decoration, &flags);
head_ref(add_ref_decoration, &flags);
+ for_each_commit_graft(add_graft_decoration, NULL);
}
}
enum rename_type {
RENAME_NORMAL = 0,
RENAME_DELETE,
- RENAME_ONE_FILE_TO_TWO
+ RENAME_ONE_FILE_TO_ONE,
+ RENAME_ONE_FILE_TO_TWO,
+ RENAME_TWO_FILES_TO_ONE
};
-struct rename_df_conflict_info {
+struct rename_conflict_info {
enum rename_type rename_type;
struct diff_filepair *pair1;
struct diff_filepair *pair2;
const char *branch2;
struct stage_data *dst_entry1;
struct stage_data *dst_entry2;
+ struct diff_filespec ren1_other;
+ struct diff_filespec ren2_other;
};
/*
unsigned mode;
unsigned char sha[20];
} stages[4];
- struct rename_df_conflict_info *rename_df_conflict_info;
+ struct rename_conflict_info *rename_conflict_info;
unsigned processed:1;
};
-static inline void setup_rename_df_conflict_info(enum rename_type rename_type,
- struct diff_filepair *pair1,
- struct diff_filepair *pair2,
- const char *branch1,
- const char *branch2,
- struct stage_data *dst_entry1,
- struct stage_data *dst_entry2)
+static inline void setup_rename_conflict_info(enum rename_type rename_type,
+ struct diff_filepair *pair1,
+ struct diff_filepair *pair2,
+ const char *branch1,
+ const char *branch2,
+ struct stage_data *dst_entry1,
+ struct stage_data *dst_entry2,
+ struct merge_options *o,
+ struct stage_data *src_entry1,
+ struct stage_data *src_entry2)
{
- struct rename_df_conflict_info *ci = xcalloc(1, sizeof(struct rename_df_conflict_info));
+ struct rename_conflict_info *ci = xcalloc(1, sizeof(struct rename_conflict_info));
ci->rename_type = rename_type;
ci->pair1 = pair1;
ci->branch1 = branch1;
ci->branch2 = branch2;
ci->dst_entry1 = dst_entry1;
- dst_entry1->rename_df_conflict_info = ci;
+ dst_entry1->rename_conflict_info = ci;
dst_entry1->processed = 0;
assert(!pair2 == !dst_entry2);
if (dst_entry2) {
ci->dst_entry2 = dst_entry2;
ci->pair2 = pair2;
- dst_entry2->rename_df_conflict_info = ci;
- dst_entry2->processed = 0;
+ dst_entry2->rename_conflict_info = ci;
+ }
+
+ if (rename_type == RENAME_TWO_FILES_TO_ONE) {
+ /*
+ * For each rename, there could have been
+ * modifications on the side of history where that
+ * file was not renamed.
+ */
+ int ostage1 = o->branch1 == branch1 ? 3 : 2;
+ int ostage2 = ostage1 ^ 1;
+
+ ci->ren1_other.path = pair1->one->path;
+ hashcpy(ci->ren1_other.sha1, src_entry1->stages[ostage1].sha);
+ ci->ren1_other.mode = src_entry1->stages[ostage1].mode;
+
+ ci->ren2_other.path = pair2->one->path;
+ hashcpy(ci->ren2_other.sha1, src_entry2->stages[ostage2].sha);
+ ci->ren2_other.mode = src_entry2->stages[ostage2].mode;
}
}
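
The ostage1/ostage2 arithmetic above leans on the index convention that
stage 2 holds the "ours" (branch1) version and stage 3 holds the "theirs"
(branch2) version of a path; XOR-ing with 1 simply flips between the two.
A trivial sketch of that computation, with a hypothetical flag standing in
for the o->branch1 == branch1 test:

    #include <stdio.h>

    int main(void)
    {
        int rename1_is_ours = 1; /* hypothetical: rename #1 came from branch1 */
        int ostage1 = rename1_is_ours ? 3 : 2; /* stage of the *other* side */
        int ostage2 = ostage1 ^ 1;             /* 3 <-> 2 for the other rename */

        printf("ostage1 = %d, ostage2 = %d\n", ostage1, ostage2);
        return 0;
    }
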
for (i = 0; i < active_nr; i++) {
struct cache_entry *ce = active_cache[i];
if (ce_stage(ce))
- fprintf(stderr, "BUG: %d %.*s", ce_stage(ce),
+ fprintf(stderr, "BUG: %d %.*s\n", ce_stage(ce),
(int)ce_namelen(ce), ce->name);
}
die("Bug in merge-recursive.c");
return unmerged;
}
-static void make_room_for_directories_of_df_conflicts(struct merge_options *o,
- struct string_list *entries)
+static int string_list_df_name_compare(const void *a, const void *b)
{
- /* If there are D/F conflicts, and the paths currently exist
- * in the working copy as a file, we want to remove them to
- * make room for the corresponding directory. Such paths will
- * later be processed in process_df_entry() at the end. If
- * the corresponding directory ends up being removed by the
- * merge, then the file will be reinstated at that time;
- * otherwise, if the file is not supposed to be removed by the
- * merge, the contents of the file will be placed in another
- * unique filename.
+ const struct string_list_item *one = a;
+ const struct string_list_item *two = b;
+ int onelen = strlen(one->string);
+ int twolen = strlen(two->string);
+ /*
+ * Here we only care that entries for D/F conflicts are
+ * adjacent, in particular with the file of the D/F conflict
+ * appearing before files below the corresponding directory.
+ * The order of the rest of the list is irrelevant for us.
*
- * NOTE: This function relies on the fact that entries for a
- * D/F conflict will appear adjacent in the index, with the
- * entries for the file appearing before entries for paths
- * below the corresponding directory.
+ * To achieve this, we sort with df_name_compare and provide
+ * the mode S_IFDIR so that D/F conflicts will sort correctly.
+ * We use the mode S_IFDIR for everything else for simplicity,
+ * since in other cases any changes in their order due to
+ * sorting cause no problems for us.
+ */
+ int cmp = df_name_compare(one->string, onelen, S_IFDIR,
+ two->string, twolen, S_IFDIR);
+ /*
+ * Now that 'foo' and 'foo/bar' compare equal, we have to make sure
+ * that 'foo' comes before 'foo/bar'.
*/
+ if (cmp)
+ return cmp;
+ return onelen - twolen;
+}
+
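
To see the ordering this comparator produces, note that with S_IFDIR on
both sides df_name_compare() considers "foo" and "foo/bar" equal, so the
length tie-break places the D/F conflicted file "foo" directly before the
entries under the like-named directory.  A minimal sketch, assuming it is
compiled inside the git source tree where df_name_compare() from cache.h
is available:

    #include "cache.h"

    static int df_first(const char *a, const char *b)
    {
        int cmp = df_name_compare(a, strlen(a), S_IFDIR,
                                  b, strlen(b), S_IFDIR);
        if (cmp)
            return cmp;
        return (int)strlen(a) - (int)strlen(b);
    }

    int main(void)
    {
        printf("%d\n", df_first("foo", "foo/bar")); /* negative: file first */
        printf("%d\n", df_first("foo/bar", "foo")); /* positive */
        printf("%d\n", df_first("bar", "foo"));     /* plain name order */
        return 0;
    }
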
+static void record_df_conflict_files(struct merge_options *o,
+ struct string_list *entries)
+{
+ /* If there is a D/F conflict and the file for such a conflict
+	 * currently exists in the working copy, we want to allow it to be
+ * removed to make room for the corresponding directory if needed.
+ * The files underneath the directories of such D/F conflicts will
+ * be processed before the corresponding file involved in the D/F
+ * conflict. If the D/F directory ends up being removed by the
+ * merge, then we won't have to touch the D/F file. If the D/F
+ * directory needs to be written to the working copy, then the D/F
+ * file will simply be removed (in make_room_for_path()) to make
+ * room for the necessary paths. Note that if both the directory
+ * and the file need to be present, then the D/F file will be
+ * reinstated with a new unique name at the time it is processed.
+ */
+ struct string_list df_sorted_entries;
const char *last_file = NULL;
int last_len = 0;
int i;
+ /*
+ * If we're merging merge-bases, we don't want to bother with
+ * any working directory changes.
+ */
+ if (o->call_depth)
+ return;
+
+ /* Ensure D/F conflicts are adjacent in the entries list. */
+ memset(&df_sorted_entries, 0, sizeof(struct string_list));
for (i = 0; i < entries->nr; i++) {
- const char *path = entries->items[i].string;
+ struct string_list_item *next = &entries->items[i];
+ string_list_append(&df_sorted_entries, next->string)->util =
+ next->util;
+ }
+ qsort(df_sorted_entries.items, entries->nr, sizeof(*entries->items),
+ string_list_df_name_compare);
+
+ string_list_clear(&o->df_conflict_file_set, 1);
+ for (i = 0; i < df_sorted_entries.nr; i++) {
+ const char *path = df_sorted_entries.items[i].string;
int len = strlen(path);
- struct stage_data *e = entries->items[i].util;
+ struct stage_data *e = df_sorted_entries.items[i].util;
/*
* Check if last_file & path correspond to a D/F conflict;
* i.e. whether path is last_file+'/'+<something>.
- * If so, remove last_file to make room for path and friends.
+ * If so, record that it's okay to remove last_file to make
+ * room for path and friends if needed.
*/
if (last_file &&
len > last_len &&
memcmp(path, last_file, last_len) == 0 &&
path[last_len] == '/') {
- output(o, 3, "Removing %s to make room for subdirectory; may re-add later.", last_file);
- unlink(last_file);
+ string_list_insert(&o->df_conflict_file_set, last_file);
}
/*
last_file = NULL;
}
}
+ string_list_clear(&df_sorted_entries, 0);
}
struct rename {
return renames;
}
-static int update_stages_options(const char *path, struct diff_filespec *o,
- struct diff_filespec *a, struct diff_filespec *b,
- int clear, int options)
+static int update_stages(const char *path, const struct diff_filespec *o,
+ const struct diff_filespec *a,
+ const struct diff_filespec *b)
{
+
+ /*
+ * NOTE: It is usually a bad idea to call update_stages on a path
+ * before calling update_file on that same path, since it can
+ * sometimes lead to spurious "refusing to lose untracked file..."
+	 * messages from update_file (via make_room_for_path via
+ * would_lose_untracked). Instead, reverse the order of the calls
+ * (executing update_file first and then update_stages).
+ */
+ int clear = 1;
+ int options = ADD_CACHE_OK_TO_ADD | ADD_CACHE_SKIP_DFCHECK;
if (clear)
if (remove_file_from_cache(path))
return -1;
return 0;
}
-static int update_stages(const char *path, struct diff_filespec *o,
- struct diff_filespec *a, struct diff_filespec *b,
- int clear)
+static void update_entry(struct stage_data *entry,
+ struct diff_filespec *o,
+ struct diff_filespec *a,
+ struct diff_filespec *b)
{
- int options = ADD_CACHE_OK_TO_ADD | ADD_CACHE_OK_TO_REPLACE;
- return update_stages_options(path, o, a, b, clear, options);
-}
-
-static int update_stages_and_entry(const char *path,
- struct stage_data *entry,
- struct diff_filespec *o,
- struct diff_filespec *a,
- struct diff_filespec *b,
- int clear)
-{
- int options;
-
entry->processed = 0;
entry->stages[1].mode = o->mode;
entry->stages[2].mode = a->mode;
hashcpy(entry->stages[1].sha, o->sha1);
hashcpy(entry->stages[2].sha, a->sha1);
hashcpy(entry->stages[3].sha, b->sha1);
- options = ADD_CACHE_OK_TO_ADD | ADD_CACHE_SKIP_DFCHECK;
- return update_stages_options(path, o, a, b, clear, options);
}
static int remove_file(struct merge_options *o, int clean,
}
}
-static int would_lose_untracked(const char *path)
+static int dir_in_way(const char *path, int check_working_copy)
+{
+ int pos, pathlen = strlen(path);
+ char *dirpath = xmalloc(pathlen + 2);
+ struct stat st;
+
+ strcpy(dirpath, path);
+ dirpath[pathlen] = '/';
+ dirpath[pathlen+1] = '\0';
+
+ pos = cache_name_pos(dirpath, pathlen+1);
+
+ if (pos < 0)
+ pos = -1 - pos;
+ if (pos < active_nr &&
+ !strncmp(dirpath, active_cache[pos]->name, pathlen+1)) {
+ free(dirpath);
+ return 1;
+ }
+
+ free(dirpath);
+ return check_working_copy && !lstat(path, &st) && S_ISDIR(st.st_mode);
+}
+
+static int was_tracked(const char *path)
{
int pos = cache_name_pos(path, strlen(path));
switch (ce_stage(active_cache[pos])) {
case 0:
case 2:
- return 0;
+ return 1;
}
pos++;
}
- return file_exists(path);
+ return 0;
}
-static int make_room_for_path(const char *path)
+static int would_lose_untracked(const char *path)
{
- int status;
+ return !was_tracked(path) && file_exists(path);
+}
+
+static int make_room_for_path(struct merge_options *o, const char *path)
+{
+ int status, i;
const char *msg = "failed to create path '%s'%s";
+ /* Unlink any D/F conflict files that are in the way */
+ for (i = 0; i < o->df_conflict_file_set.nr; i++) {
+ const char *df_path = o->df_conflict_file_set.items[i].string;
+ size_t pathlen = strlen(path);
+ size_t df_pathlen = strlen(df_path);
+ if (df_pathlen < pathlen &&
+ path[df_pathlen] == '/' &&
+ strncmp(path, df_path, df_pathlen) == 0) {
+ output(o, 3,
+ "Removing %s to make room for subdirectory\n",
+ df_path);
+ unlink(df_path);
+ unsorted_string_list_delete_item(&o->df_conflict_file_set,
+ i, 0);
+ break;
+ }
+ }
+
+ /* Make sure leading directories are created */
status = safe_create_leading_directories_const(path);
if (status) {
if (status == -3) {
}
}
- if (make_room_for_path(path) < 0) {
+ if (make_room_for_path(o, path) < 0) {
update_wd = 0;
free(buf);
goto update_index;
static int merge_3way(struct merge_options *o,
mmbuffer_t *result_buf,
- struct diff_filespec *one,
- struct diff_filespec *a,
- struct diff_filespec *b,
+ const struct diff_filespec *one,
+ const struct diff_filespec *a,
+ const struct diff_filespec *b,
const char *branch1,
const char *branch2)
{
return merge_status;
}
-static struct merge_file_info merge_file(struct merge_options *o,
- struct diff_filespec *one,
- struct diff_filespec *a,
- struct diff_filespec *b,
- const char *branch1,
- const char *branch2)
+static struct merge_file_info merge_file_1(struct merge_options *o,
+ const struct diff_filespec *one,
+ const struct diff_filespec *a,
+ const struct diff_filespec *b,
+ const char *branch1,
+ const char *branch2)
{
struct merge_file_info result;
result.merge = 0;
return result;
}
+static struct merge_file_info
+merge_file_special_markers(struct merge_options *o,
+ const struct diff_filespec *one,
+ const struct diff_filespec *a,
+ const struct diff_filespec *b,
+ const char *branch1,
+ const char *filename1,
+ const char *branch2,
+ const char *filename2)
+{
+ char *side1 = NULL;
+ char *side2 = NULL;
+ struct merge_file_info mfi;
+
+ if (filename1) {
+ side1 = xmalloc(strlen(branch1) + strlen(filename1) + 2);
+ sprintf(side1, "%s:%s", branch1, filename1);
+ }
+ if (filename2) {
+ side2 = xmalloc(strlen(branch2) + strlen(filename2) + 2);
+ sprintf(side2, "%s:%s", branch2, filename2);
+ }
+
+ mfi = merge_file_1(o, one, a, b,
+ side1 ? side1 : branch1, side2 ? side2 : branch2);
+ free(side1);
+ free(side2);
+ return mfi;
+}
+
+static struct merge_file_info merge_file(struct merge_options *o,
+ const char *path,
+ const unsigned char *o_sha, int o_mode,
+ const unsigned char *a_sha, int a_mode,
+ const unsigned char *b_sha, int b_mode,
+ const char *branch1,
+ const char *branch2)
+{
+ struct diff_filespec one, a, b;
+
+ one.path = a.path = b.path = (char *)path;
+ hashcpy(one.sha1, o_sha);
+ one.mode = o_mode;
+ hashcpy(a.sha1, a_sha);
+ a.mode = a_mode;
+ hashcpy(b.sha1, b_sha);
+ b.mode = b_mode;
+ return merge_file_1(o, &one, &a, &b, branch1, branch2);
+}
+
+static void handle_change_delete(struct merge_options *o,
+ const char *path,
+ const unsigned char *o_sha, int o_mode,
+ const unsigned char *a_sha, int a_mode,
+ const unsigned char *b_sha, int b_mode,
+ const char *change, const char *change_past)
+{
+ char *renamed = NULL;
+ if (dir_in_way(path, !o->call_depth)) {
+ renamed = unique_path(o, path, a_sha ? o->branch1 : o->branch2);
+ }
+
+ if (o->call_depth) {
+ /*
+ * We cannot arbitrarily accept either a_sha or b_sha as
+ * correct; since there is no true "middle point" between
+ * them, simply reuse the base version for virtual merge base.
+ */
+ remove_file_from_cache(path);
+ update_file(o, 0, o_sha, o_mode, renamed ? renamed : path);
+ } else if (!a_sha) {
+ output(o, 1, "CONFLICT (%s/delete): %s deleted in %s "
+ "and %s in %s. Version %s of %s left in tree%s%s.",
+ change, path, o->branch1,
+ change_past, o->branch2, o->branch2, path,
+ NULL == renamed ? "" : " at ",
+ NULL == renamed ? "" : renamed);
+ update_file(o, 0, b_sha, b_mode, renamed ? renamed : path);
+ } else {
+ output(o, 1, "CONFLICT (%s/delete): %s deleted in %s "
+ "and %s in %s. Version %s of %s left in tree%s%s.",
+ change, path, o->branch2,
+ change_past, o->branch1, o->branch1, path,
+ NULL == renamed ? "" : " at ",
+ NULL == renamed ? "" : renamed);
+ if (renamed)
+ update_file(o, 0, a_sha, a_mode, renamed);
+ /*
+ * No need to call update_file() on path when !renamed, since
+ * that would needlessly touch path. We could call
+ * update_file_flags() with update_cache=0 and update_wd=0,
+ * but that's a no-op.
+ */
+ }
+ free(renamed);
+}
+
static void conflict_rename_delete(struct merge_options *o,
struct diff_filepair *pair,
const char *rename_branch,
const char *other_branch)
{
- char *dest_name = pair->two->path;
- int df_conflict = 0;
- struct stat st;
+ const struct diff_filespec *orig = pair->one;
+ const struct diff_filespec *dest = pair->two;
+ const unsigned char *a_sha = NULL;
+ const unsigned char *b_sha = NULL;
+ int a_mode = 0;
+ int b_mode = 0;
+
+ if (rename_branch == o->branch1) {
+ a_sha = dest->sha1;
+ a_mode = dest->mode;
+ } else {
+ b_sha = dest->sha1;
+ b_mode = dest->mode;
+ }
- output(o, 1, "CONFLICT (rename/delete): Rename %s->%s in %s "
- "and deleted in %s",
- pair->one->path, pair->two->path, rename_branch,
- other_branch);
- if (!o->call_depth)
- update_stages(dest_name, NULL,
- rename_branch == o->branch1 ? pair->two : NULL,
- rename_branch == o->branch1 ? NULL : pair->two,
- 1);
- if (lstat(dest_name, &st) == 0 && S_ISDIR(st.st_mode)) {
- dest_name = unique_path(o, dest_name, rename_branch);
- df_conflict = 1;
+ handle_change_delete(o,
+ o->call_depth ? orig->path : dest->path,
+ orig->sha1, orig->mode,
+ a_sha, a_mode,
+ b_sha, b_mode,
+ "rename", "renamed");
+
+ if (o->call_depth) {
+ remove_file_from_cache(dest->path);
+ } else {
+ update_stages(dest->path, NULL,
+ rename_branch == o->branch1 ? dest : NULL,
+ rename_branch == o->branch1 ? NULL : dest);
}
- update_file(o, 0, pair->two->sha1, pair->two->mode, dest_name);
- if (df_conflict)
- free(dest_name);
+
}
-static void conflict_rename_rename_1to2(struct merge_options *o,
- struct diff_filepair *pair1,
- const char *branch1,
- struct diff_filepair *pair2,
- const char *branch2)
+static struct diff_filespec *filespec_from_entry(struct diff_filespec *target,
+ struct stage_data *entry,
+ int stage)
{
- /* One file was renamed in both branches, but to different names. */
- char *del[2];
- int delp = 0;
- const char *ren1_dst = pair1->two->path;
- const char *ren2_dst = pair2->two->path;
- const char *dst_name1 = ren1_dst;
- const char *dst_name2 = ren2_dst;
- struct stat st;
- if (lstat(ren1_dst, &st) == 0 && S_ISDIR(st.st_mode)) {
- dst_name1 = del[delp++] = unique_path(o, ren1_dst, branch1);
- output(o, 1, "%s is a directory in %s adding as %s instead",
- ren1_dst, branch2, dst_name1);
+ unsigned char *sha = entry->stages[stage].sha;
+ unsigned mode = entry->stages[stage].mode;
+ if (mode == 0 || is_null_sha1(sha))
+ return NULL;
+ hashcpy(target->sha1, sha);
+ target->mode = mode;
+ return target;
+}
+
+static void handle_file(struct merge_options *o,
+ struct diff_filespec *rename,
+ int stage,
+ struct rename_conflict_info *ci)
+{
+ char *dst_name = rename->path;
+ struct stage_data *dst_entry;
+ const char *cur_branch, *other_branch;
+ struct diff_filespec other;
+ struct diff_filespec *add;
+
+ if (stage == 2) {
+ dst_entry = ci->dst_entry1;
+ cur_branch = ci->branch1;
+ other_branch = ci->branch2;
+ } else {
+ dst_entry = ci->dst_entry2;
+ cur_branch = ci->branch2;
+ other_branch = ci->branch1;
}
- if (lstat(ren2_dst, &st) == 0 && S_ISDIR(st.st_mode)) {
- dst_name2 = del[delp++] = unique_path(o, ren2_dst, branch2);
- output(o, 1, "%s is a directory in %s adding as %s instead",
- ren2_dst, branch1, dst_name2);
+
+ add = filespec_from_entry(&other, dst_entry, stage ^ 1);
+ if (add) {
+ char *add_name = unique_path(o, rename->path, other_branch);
+ update_file(o, 0, add->sha1, add->mode, add_name);
+
+ remove_file(o, 0, rename->path, 0);
+ dst_name = unique_path(o, rename->path, cur_branch);
+ } else {
+ if (dir_in_way(rename->path, !o->call_depth)) {
+ dst_name = unique_path(o, rename->path, cur_branch);
+ output(o, 1, "%s is a directory in %s adding as %s instead",
+ rename->path, other_branch, dst_name);
+ }
}
+ update_file(o, 0, rename->sha1, rename->mode, dst_name);
+ if (stage == 2)
+ update_stages(rename->path, NULL, rename, add);
+ else
+ update_stages(rename->path, NULL, add, rename);
+
+ if (dst_name != rename->path)
+ free(dst_name);
+}
+
+static void conflict_rename_rename_1to2(struct merge_options *o,
+ struct rename_conflict_info *ci)
+{
+ /* One file was renamed in both branches, but to different names. */
+ struct diff_filespec *one = ci->pair1->one;
+ struct diff_filespec *a = ci->pair1->two;
+ struct diff_filespec *b = ci->pair2->two;
+
+ output(o, 1, "CONFLICT (rename/rename): "
+ "Rename \"%s\"->\"%s\" in branch \"%s\" "
+ "rename \"%s\"->\"%s\" in \"%s\"%s",
+ one->path, a->path, ci->branch1,
+ one->path, b->path, ci->branch2,
+ o->call_depth ? " (left unresolved)" : "");
if (o->call_depth) {
- remove_file_from_cache(dst_name1);
- remove_file_from_cache(dst_name2);
+ struct merge_file_info mfi;
+ struct diff_filespec other;
+ struct diff_filespec *add;
+ mfi = merge_file(o, one->path,
+ one->sha1, one->mode,
+ a->sha1, a->mode,
+ b->sha1, b->mode,
+ ci->branch1, ci->branch2);
/*
- * Uncomment to leave the conflicting names in the resulting tree
- *
- * update_file(o, 0, pair1->two->sha1, pair1->two->mode, dst_name1);
- * update_file(o, 0, pair2->two->sha1, pair2->two->mode, dst_name2);
+ * FIXME: For rename/add-source conflicts (if we could detect
+ * such), this is wrong. We should instead find a unique
+ * pathname and then either rename the add-source file to that
+ * unique path, or use that unique path instead of src here.
*/
- } else {
- update_stages(ren1_dst, NULL, pair1->two, NULL, 1);
- update_stages(ren2_dst, NULL, NULL, pair2->two, 1);
+ update_file(o, 0, mfi.sha, mfi.mode, one->path);
- update_file(o, 0, pair1->two->sha1, pair1->two->mode, dst_name1);
- update_file(o, 0, pair2->two->sha1, pair2->two->mode, dst_name2);
+ /*
+ * Above, we put the merged content at the merge-base's
+ * path. Now we usually need to delete both a->path and
+ * b->path. However, the rename on each side of the merge
+ * could also be involved in a rename/add conflict. In
+ * such cases, we should keep the added file around,
+ * resolving the conflict at that path in its favor.
+ */
+ add = filespec_from_entry(&other, ci->dst_entry1, 2 ^ 1);
+ if (add)
+ update_file(o, 0, add->sha1, add->mode, a->path);
+ else
+ remove_file_from_cache(a->path);
+ add = filespec_from_entry(&other, ci->dst_entry2, 3 ^ 1);
+ if (add)
+ update_file(o, 0, add->sha1, add->mode, b->path);
+ else
+ remove_file_from_cache(b->path);
+ } else {
+ handle_file(o, a, 2, ci);
+ handle_file(o, b, 3, ci);
}
- while (delp--)
- free(del[delp]);
}
static void conflict_rename_rename_2to1(struct merge_options *o,
- struct rename *ren1,
- const char *branch1,
- struct rename *ren2,
- const char *branch2)
+ struct rename_conflict_info *ci)
{
- /* Two files were renamed to the same thing. */
- char *new_path1 = unique_path(o, ren1->pair->two->path, branch1);
- char *new_path2 = unique_path(o, ren2->pair->two->path, branch2);
- output(o, 1, "Renaming %s to %s and %s to %s instead",
- ren1->pair->one->path, new_path1,
- ren2->pair->one->path, new_path2);
- remove_file(o, 0, ren1->pair->two->path, 0);
- update_file(o, 0, ren1->pair->two->sha1, ren1->pair->two->mode, new_path1);
- update_file(o, 0, ren2->pair->two->sha1, ren2->pair->two->mode, new_path2);
- free(new_path2);
- free(new_path1);
+ /* Two files, a & b, were renamed to the same thing, c. */
+ struct diff_filespec *a = ci->pair1->one;
+ struct diff_filespec *b = ci->pair2->one;
+ struct diff_filespec *c1 = ci->pair1->two;
+ struct diff_filespec *c2 = ci->pair2->two;
+ char *path = c1->path; /* == c2->path */
+ struct merge_file_info mfi_c1;
+ struct merge_file_info mfi_c2;
+
+ output(o, 1, "CONFLICT (rename/rename): "
+ "Rename %s->%s in %s. "
+ "Rename %s->%s in %s",
+ a->path, c1->path, ci->branch1,
+ b->path, c2->path, ci->branch2);
+
+ remove_file(o, 1, a->path, would_lose_untracked(a->path));
+ remove_file(o, 1, b->path, would_lose_untracked(b->path));
+
+ mfi_c1 = merge_file_special_markers(o, a, c1, &ci->ren1_other,
+ o->branch1, c1->path,
+ o->branch2, ci->ren1_other.path);
+ mfi_c2 = merge_file_special_markers(o, b, &ci->ren2_other, c2,
+ o->branch1, ci->ren2_other.path,
+ o->branch2, c2->path);
+
+ if (o->call_depth) {
+ /*
+ * If mfi_c1.clean && mfi_c2.clean, then it might make
+ * sense to do a two-way merge of those results. But, I
+ * think in all cases, it makes sense to have the virtual
+ * merge base just undo the renames; they can be detected
+ * again later for the non-recursive merge.
+ */
+ remove_file(o, 0, path, 0);
+ update_file(o, 0, mfi_c1.sha, mfi_c1.mode, a->path);
+ update_file(o, 0, mfi_c2.sha, mfi_c2.mode, b->path);
+ } else {
+ char *new_path1 = unique_path(o, path, ci->branch1);
+ char *new_path2 = unique_path(o, path, ci->branch2);
+ output(o, 1, "Renaming %s to %s and %s to %s instead",
+ a->path, new_path1, b->path, new_path2);
+ remove_file(o, 0, path, 0);
+ update_file(o, 0, mfi_c1.sha, mfi_c1.mode, new_path1);
+ update_file(o, 0, mfi_c2.sha, mfi_c2.mode, new_path2);
+ free(new_path2);
+ free(new_path1);
+ }
}
static int process_renames(struct merge_options *o,
for (i = 0; i < a_renames->nr; i++) {
sre = a_renames->items[i].util;
string_list_insert(&a_by_dst, sre->pair->two->path)->util
- = sre->dst_entry;
+ = (void *)sre;
}
for (i = 0; i < b_renames->nr; i++) {
sre = b_renames->items[i].util;
string_list_insert(&b_by_dst, sre->pair->two->path)->util
- = sre->dst_entry;
+ = (void *)sre;
}
for (i = 0, j = 0; i < a_renames->nr || j < b_renames->nr;) {
struct rename *ren1 = NULL, *ren2 = NULL;
const char *branch1, *branch2;
const char *ren1_src, *ren1_dst;
+ struct string_list_item *lookup;
if (i >= a_renames->nr) {
ren2 = b_renames->items[j++].util;
ren1 = tmp;
}
- ren1->dst_entry->processed = 1;
- ren1->src_entry->processed = 1;
-
if (ren1->processed)
continue;
ren1->processed = 1;
+ ren1->dst_entry->processed = 1;
+ /* BUG: We should only mark src_entry as processed if we
+ * are not dealing with a rename + add-source case.
+ */
+ ren1->src_entry->processed = 1;
ren1_src = ren1->pair->one->path;
ren1_dst = ren1->pair->two->path;
if (ren2) {
+ /* One file renamed on both sides */
const char *ren2_src = ren2->pair->one->path;
const char *ren2_dst = ren2->pair->two->path;
- /* Renamed in 1 and renamed in 2 */
+ enum rename_type rename_type;
if (strcmp(ren1_src, ren2_src) != 0)
- die("ren1.src != ren2.src");
+ die("ren1_src != ren2_src");
ren2->dst_entry->processed = 1;
ren2->processed = 1;
if (strcmp(ren1_dst, ren2_dst) != 0) {
- setup_rename_df_conflict_info(RENAME_ONE_FILE_TO_TWO,
- ren1->pair,
- ren2->pair,
- branch1,
- branch2,
- ren1->dst_entry,
- ren2->dst_entry);
+ rename_type = RENAME_ONE_FILE_TO_TWO;
+ clean_merge = 0;
} else {
+ rename_type = RENAME_ONE_FILE_TO_ONE;
+ /* BUG: We should only remove ren1_src in
+ * the base stage (think of rename +
+ * add-source cases).
+ */
remove_file(o, 1, ren1_src, 1);
- update_stages_and_entry(ren1_dst,
- ren1->dst_entry,
- ren1->pair->one,
- ren1->pair->two,
- ren2->pair->two,
- 1 /* clear */);
+ update_entry(ren1->dst_entry,
+ ren1->pair->one,
+ ren1->pair->two,
+ ren2->pair->two);
}
+ setup_rename_conflict_info(rename_type,
+ ren1->pair,
+ ren2->pair,
+ branch1,
+ branch2,
+ ren1->dst_entry,
+ ren2->dst_entry,
+ o,
+ NULL,
+ NULL);
+ } else if ((lookup = string_list_lookup(renames2Dst, ren1_dst))) {
+ /* Two different files renamed to the same thing */
+ char *ren2_dst;
+ ren2 = lookup->util;
+ ren2_dst = ren2->pair->two->path;
+ if (strcmp(ren1_dst, ren2_dst) != 0)
+ die("ren1_dst != ren2_dst");
+
+ clean_merge = 0;
+ ren2->processed = 1;
+ /*
+ * BUG: We should only mark src_entry as processed
+ * if we are not dealing with a rename + add-source
+ * case.
+ */
+ ren2->src_entry->processed = 1;
+
+ setup_rename_conflict_info(RENAME_TWO_FILES_TO_ONE,
+ ren1->pair,
+ ren2->pair,
+ branch1,
+ branch2,
+ ren1->dst_entry,
+ ren2->dst_entry,
+ o,
+ ren1->src_entry,
+ ren2->src_entry);
+
} else {
/* Renamed in 1, maybe changed in 2 */
- struct string_list_item *item;
/* we only use sha1 and mode of these */
struct diff_filespec src_other, dst_other;
int try_merge;
int renamed_stage = a_renames == renames1 ? 2 : 3;
int other_stage = a_renames == renames1 ? 3 : 2;
- remove_file(o, 1, ren1_src, o->call_depth || renamed_stage == 2);
+ /* BUG: We should only remove ren1_src in the base
+ * stage and in other_stage (think of rename +
+ * add-source case).
+ */
+ remove_file(o, 1, ren1_src,
+ renamed_stage == 2 || !was_tracked(ren1_src));
hashcpy(src_other.sha1, ren1->src_entry->stages[other_stage].sha);
src_other.mode = ren1->src_entry->stages[other_stage].mode;
try_merge = 0;
if (sha_eq(src_other.sha1, null_sha1)) {
- if (string_list_has_string(&o->current_directory_set, ren1_dst)) {
- ren1->dst_entry->processed = 0;
- setup_rename_df_conflict_info(RENAME_DELETE,
- ren1->pair,
- NULL,
- branch1,
- branch2,
- ren1->dst_entry,
- NULL);
- } else {
- clean_merge = 0;
- conflict_rename_delete(o, ren1->pair, branch1, branch2);
- }
+ setup_rename_conflict_info(RENAME_DELETE,
+ ren1->pair,
+ NULL,
+ branch1,
+ branch2,
+ ren1->dst_entry,
+ NULL,
+ o,
+ NULL,
+ NULL);
} else if ((dst_other.mode == ren1->pair->two->mode) &&
sha_eq(dst_other.sha1, ren1->pair->two->sha1)) {
- /* Added file on the other side
- identical to the file being
- renamed: clean merge */
- update_file(o, 1, ren1->pair->two->sha1, ren1->pair->two->mode, ren1_dst);
+ /*
+ * Added file on the other side identical to
+ * the file being renamed: clean merge.
+ * Also, there is no need to overwrite the
+ * file already in the working copy, so call
+ * update_file_flags() instead of
+ * update_file().
+ */
+ update_file_flags(o,
+ ren1->pair->two->sha1,
+ ren1->pair->two->mode,
+ ren1_dst,
+ 1, /* update_cache */
+ 0 /* update_wd */);
} else if (!sha_eq(dst_other.sha1, null_sha1)) {
- const char *new_path;
clean_merge = 0;
try_merge = 1;
output(o, 1, "CONFLICT (rename/add): Rename %s->%s in %s. "
ren1_dst, branch2);
if (o->call_depth) {
struct merge_file_info mfi;
- struct diff_filespec one, a, b;
-
- one.path = a.path = b.path =
- (char *)ren1_dst;
- hashcpy(one.sha1, null_sha1);
- one.mode = 0;
- hashcpy(a.sha1, ren1->pair->two->sha1);
- a.mode = ren1->pair->two->mode;
- hashcpy(b.sha1, dst_other.sha1);
- b.mode = dst_other.mode;
- mfi = merge_file(o, &one, &a, &b,
- branch1,
- branch2);
+ mfi = merge_file(o, ren1_dst, null_sha1, 0,
+ ren1->pair->two->sha1, ren1->pair->two->mode,
+ dst_other.sha1, dst_other.mode,
+ branch1, branch2);
output(o, 1, "Adding merged %s", ren1_dst);
- update_file(o, 0,
- mfi.sha,
- mfi.mode,
- ren1_dst);
+ update_file(o, 0, mfi.sha, mfi.mode, ren1_dst);
try_merge = 0;
} else {
- new_path = unique_path(o, ren1_dst, branch2);
+ char *new_path = unique_path(o, ren1_dst, branch2);
output(o, 1, "Adding as %s instead", new_path);
update_file(o, 0, dst_other.sha1, dst_other.mode, new_path);
+ free(new_path);
}
- } else if ((item = string_list_lookup(renames2Dst, ren1_dst))) {
- ren2 = item->util;
- clean_merge = 0;
- ren2->processed = 1;
- output(o, 1, "CONFLICT (rename/rename): "
- "Rename %s->%s in %s. "
- "Rename %s->%s in %s",
- ren1_src, ren1_dst, branch1,
- ren2->pair->one->path, ren2->pair->two->path, branch2);
- conflict_rename_rename_2to1(o, ren1, branch1, ren2, branch2);
} else
try_merge = 1;
b = ren1->pair->two;
a = &src_other;
}
- update_stages_and_entry(ren1_dst, ren1->dst_entry, one, a, b, 1);
- if (string_list_has_string(&o->current_directory_set, ren1_dst)) {
- setup_rename_df_conflict_info(RENAME_NORMAL,
- ren1->pair,
- NULL,
- branch1,
- NULL,
- ren1->dst_entry,
- NULL);
- }
+ update_entry(ren1->dst_entry, one, a, b);
+ setup_rename_conflict_info(RENAME_NORMAL,
+ ren1->pair,
+ NULL,
+ branch1,
+ NULL,
+ ren1->dst_entry,
+ NULL,
+ o,
+ NULL,
+ NULL);
}
}
}
return ret;
}
-static void handle_delete_modify(struct merge_options *o,
+static void handle_modify_delete(struct merge_options *o,
const char *path,
- const char *new_path,
+ unsigned char *o_sha, int o_mode,
unsigned char *a_sha, int a_mode,
unsigned char *b_sha, int b_mode)
{
- if (!a_sha) {
- output(o, 1, "CONFLICT (delete/modify): %s deleted in %s "
- "and modified in %s. Version %s of %s left in tree%s%s.",
- path, o->branch1,
- o->branch2, o->branch2, path,
- path == new_path ? "" : " at ",
- path == new_path ? "" : new_path);
- update_file(o, 0, b_sha, b_mode, new_path);
- } else {
- output(o, 1, "CONFLICT (delete/modify): %s deleted in %s "
- "and modified in %s. Version %s of %s left in tree%s%s.",
- path, o->branch2,
- o->branch1, o->branch1, path,
- path == new_path ? "" : " at ",
- path == new_path ? "" : new_path);
- update_file(o, 0, a_sha, a_mode, new_path);
- }
+ handle_change_delete(o,
+ path,
+ o_sha, o_mode,
+ a_sha, a_mode,
+ b_sha, b_mode,
+ "modify", "modified");
}
static int merge_content(struct merge_options *o,
unsigned char *o_sha, int o_mode,
unsigned char *a_sha, int a_mode,
unsigned char *b_sha, int b_mode,
- const char *df_rename_conflict_branch)
+ struct rename_conflict_info *rename_conflict_info)
{
const char *reason = "content";
+ const char *path1 = NULL, *path2 = NULL;
struct merge_file_info mfi;
struct diff_filespec one, a, b;
- struct stat st;
unsigned df_conflict_remains = 0;
if (!o_sha) {
hashcpy(b.sha1, b_sha);
b.mode = b_mode;
- mfi = merge_file(o, &one, &a, &b, o->branch1, o->branch2);
- if (df_rename_conflict_branch &&
- lstat(path, &st) == 0 && S_ISDIR(st.st_mode)) {
- df_conflict_remains = 1;
+ if (rename_conflict_info) {
+ struct diff_filepair *pair1 = rename_conflict_info->pair1;
+
+ path1 = (o->branch1 == rename_conflict_info->branch1) ?
+ pair1->two->path : pair1->one->path;
+ /* If rename_conflict_info->pair2 != NULL, we are in
+ * RENAME_ONE_FILE_TO_ONE case. Otherwise, we have a
+ * normal rename.
+ */
+ path2 = (rename_conflict_info->pair2 ||
+ o->branch2 == rename_conflict_info->branch1) ?
+ pair1->two->path : pair1->one->path;
+
+ if (dir_in_way(path, !o->call_depth))
+ df_conflict_remains = 1;
}
+ mfi = merge_file_special_markers(o, &one, &a, &b,
+ o->branch1, path1,
+ o->branch2, path2);
if (mfi.clean && !df_conflict_remains &&
- sha_eq(mfi.sha, a_sha) && mfi.mode == a.mode)
+ sha_eq(mfi.sha, a_sha) && mfi.mode == a_mode) {
+ int path_renamed_outside_HEAD;
output(o, 3, "Skipped %s (merged same as existing)", path);
- else
+ /*
+ * The content merge resulted in the same file contents we
+ * already had. We can return early if those file contents
+ * are recorded at the correct path (which may not be true
+ * if the merge involves a rename).
+ */
+ path_renamed_outside_HEAD = !path2 || !strcmp(path, path2);
+ if (!path_renamed_outside_HEAD) {
+ add_cacheinfo(mfi.mode, mfi.sha, path,
+ 0 /*stage*/, 1 /*refresh*/, 0 /*options*/);
+ return mfi.clean;
+ }
+ } else
output(o, 2, "Auto-merging %s", path);
if (!mfi.clean) {
reason = "submodule";
output(o, 1, "CONFLICT (%s): Merge conflict in %s",
reason, path);
+ if (rename_conflict_info && !df_conflict_remains)
+ update_stages(path, &one, &a, &b);
}
if (df_conflict_remains) {
- const char *new_path;
- update_file_flags(o, mfi.sha, mfi.mode, path,
- o->call_depth || mfi.clean, 0);
- new_path = unique_path(o, path, df_rename_conflict_branch);
- mfi.clean = 0;
+ char *new_path;
+ if (o->call_depth) {
+ remove_file_from_cache(path);
+ } else {
+ if (!mfi.clean)
+ update_stages(path, &one, &a, &b);
+ else {
+ int file_from_stage2 = was_tracked(path);
+ struct diff_filespec merged;
+ hashcpy(merged.sha1, mfi.sha);
+ merged.mode = mfi.mode;
+
+ update_stages(path, NULL,
+ file_from_stage2 ? &merged : NULL,
+ file_from_stage2 ? NULL : &merged);
+ }
+
+ }
+ new_path = unique_path(o, path, rename_conflict_info->branch1);
output(o, 1, "Adding as %s instead", new_path);
- update_file_flags(o, mfi.sha, mfi.mode, new_path, 0, 1);
+ update_file(o, 0, mfi.sha, mfi.mode, new_path);
+ free(new_path);
+ mfi.clean = 0;
} else {
update_file(o, mfi.clean, mfi.sha, mfi.mode, path);
}
unsigned char *a_sha = stage_sha(entry->stages[2].sha, a_mode);
unsigned char *b_sha = stage_sha(entry->stages[3].sha, b_mode);
- if (entry->rename_df_conflict_info)
- return 1; /* Such cases are handled elsewhere. */
-
- entry->processed = 1;
- if (o_sha && (!a_sha || !b_sha)) {
- /* Case A: Deleted in one */
- if ((!a_sha && !b_sha) ||
- (!b_sha && blob_unchanged(o_sha, a_sha, normalize, path)) ||
- (!a_sha && blob_unchanged(o_sha, b_sha, normalize, path))) {
- /* Deleted in both or deleted in one and
- * unchanged in the other */
- if (a_sha)
- output(o, 2, "Removing %s", path);
- /* do not touch working file if it did not exist */
- remove_file(o, 1, path, !a_sha);
- } else if (string_list_has_string(&o->current_directory_set,
- path)) {
- entry->processed = 0;
- return 1; /* Assume clean until processed */
- } else {
- /* Deleted in one and changed in the other */
- clean_merge = 0;
- handle_delete_modify(o, path, path,
- a_sha, a_mode, b_sha, b_mode);
- }
-
- } else if ((!o_sha && a_sha && !b_sha) ||
- (!o_sha && !a_sha && b_sha)) {
- /* Case B: Added in one. */
- unsigned mode;
- const unsigned char *sha;
-
- if (a_sha) {
- mode = a_mode;
- sha = a_sha;
- } else {
- mode = b_mode;
- sha = b_sha;
- }
- if (string_list_has_string(&o->current_directory_set, path)) {
- /* Handle D->F conflicts after all subfiles */
- entry->processed = 0;
- return 1; /* Assume clean until processed */
- } else {
- output(o, 2, "Adding %s", path);
- update_file(o, 1, sha, mode, path);
- }
- } else if (a_sha && b_sha) {
- /* Case C: Added in both (check for same permissions) and */
- /* case D: Modified in both, but differently. */
- clean_merge = merge_content(o, path,
- o_sha, o_mode, a_sha, a_mode, b_sha, b_mode,
- NULL);
- } else if (!o_sha && !a_sha && !b_sha) {
- /*
- * this entry was deleted altogether. a_mode == 0 means
- * we had that path and want to actively remove it.
- */
- remove_file(o, 1, path, !a_mode);
- } else
- die("Fatal merge failure, shouldn't happen.");
-
- return clean_merge;
-}
-
-/*
- * Per entry merge function for D/F (and/or rename) conflicts. In the
- * cases we can cleanly resolve D/F conflicts, process_entry() can
- * clean out all the files below the directory for us. All D/F
- * conflict cases must be handled here at the end to make sure any
- * directories that can be cleaned out, are.
- *
- * Some rename conflicts may also be handled here that don't necessarily
- * involve D/F conflicts, since the code to handle them is generic enough
- * to handle those rename conflicts with or without D/F conflicts also
- * being involved.
- */
-static int process_df_entry(struct merge_options *o,
- const char *path, struct stage_data *entry)
-{
- int clean_merge = 1;
- unsigned o_mode = entry->stages[1].mode;
- unsigned a_mode = entry->stages[2].mode;
- unsigned b_mode = entry->stages[3].mode;
- unsigned char *o_sha = stage_sha(entry->stages[1].sha, o_mode);
- unsigned char *a_sha = stage_sha(entry->stages[2].sha, a_mode);
- unsigned char *b_sha = stage_sha(entry->stages[3].sha, b_mode);
- struct stat st;
-
entry->processed = 1;
- if (entry->rename_df_conflict_info) {
- struct rename_df_conflict_info *conflict_info = entry->rename_df_conflict_info;
- char *src;
+ if (entry->rename_conflict_info) {
+ struct rename_conflict_info *conflict_info = entry->rename_conflict_info;
switch (conflict_info->rename_type) {
case RENAME_NORMAL:
+ case RENAME_ONE_FILE_TO_ONE:
clean_merge = merge_content(o, path,
o_sha, o_mode, a_sha, a_mode, b_sha, b_mode,
- conflict_info->branch1);
+ conflict_info);
break;
case RENAME_DELETE:
clean_merge = 0;
conflict_info->branch2);
break;
case RENAME_ONE_FILE_TO_TWO:
- src = conflict_info->pair1->one->path;
clean_merge = 0;
- output(o, 1, "CONFLICT (rename/rename): "
- "Rename \"%s\"->\"%s\" in branch \"%s\" "
- "rename \"%s\"->\"%s\" in \"%s\"%s",
- src, conflict_info->pair1->two->path, conflict_info->branch1,
- src, conflict_info->pair2->two->path, conflict_info->branch2,
- o->call_depth ? " (left unresolved)" : "");
- if (o->call_depth) {
- remove_file_from_cache(src);
- update_file(o, 0, conflict_info->pair1->one->sha1,
- conflict_info->pair1->one->mode, src);
- }
- conflict_rename_rename_1to2(o, conflict_info->pair1,
- conflict_info->branch1,
- conflict_info->pair2,
- conflict_info->branch2);
- conflict_info->dst_entry2->processed = 1;
+ conflict_rename_rename_1to2(o, conflict_info);
+ break;
+ case RENAME_TWO_FILES_TO_ONE:
+ clean_merge = 0;
+ conflict_rename_rename_2to1(o, conflict_info);
break;
default:
entry->processed = 0;
break;
}
} else if (o_sha && (!a_sha || !b_sha)) {
- /* Modify/delete; deleted side may have put a directory in the way */
- const char *new_path = path;
- if (lstat(path, &st) == 0 && S_ISDIR(st.st_mode))
- new_path = unique_path(o, path, a_sha ? o->branch1 : o->branch2);
- clean_merge = 0;
- handle_delete_modify(o, path, new_path,
- a_sha, a_mode, b_sha, b_mode);
- } else if (!o_sha && !!a_sha != !!b_sha) {
- /* directory -> (directory, file) */
+ /* Case A: Deleted in one */
+ if ((!a_sha && !b_sha) ||
+ (!b_sha && blob_unchanged(o_sha, a_sha, normalize, path)) ||
+ (!a_sha && blob_unchanged(o_sha, b_sha, normalize, path))) {
+ /* Deleted in both or deleted in one and
+ * unchanged in the other */
+ if (a_sha)
+ output(o, 2, "Removing %s", path);
+ /* do not touch working file if it did not exist */
+ remove_file(o, 1, path, !a_sha);
+ } else {
+ /* Modify/delete; deleted side may have put a directory in the way */
+ clean_merge = 0;
+ handle_modify_delete(o, path, o_sha, o_mode,
+ a_sha, a_mode, b_sha, b_mode);
+ }
+ } else if ((!o_sha && a_sha && !b_sha) ||
+ (!o_sha && !a_sha && b_sha)) {
+ /* Case B: Added in one. */
+ /* [nothing|directory] -> ([nothing|directory], file) */
+
const char *add_branch;
const char *other_branch;
unsigned mode;
sha = b_sha;
conf = "directory/file";
}
- if (lstat(path, &st) == 0 && S_ISDIR(st.st_mode)) {
- const char *new_path = unique_path(o, path, add_branch);
+ if (dir_in_way(path, !o->call_depth)) {
+ char *new_path = unique_path(o, path, add_branch);
clean_merge = 0;
output(o, 1, "CONFLICT (%s): There is a directory with name %s in %s. "
"Adding %s as %s",
conf, path, other_branch, path, new_path);
+ if (o->call_depth)
+ remove_file_from_cache(path);
update_file(o, 0, sha, mode, new_path);
+ if (o->call_depth)
+ remove_file_from_cache(path);
+ free(new_path);
} else {
output(o, 2, "Adding %s", path);
- update_file(o, 1, sha, mode, path);
+ /* do not overwrite file if already present */
+ update_file_flags(o, sha, mode, path, 1, !a_sha);
}
- } else {
- entry->processed = 0;
- return 1; /* not handled; assume clean until processed */
- }
+ } else if (a_sha && b_sha) {
+ /* Case C: Added in both (check for same permissions) and */
+ /* case D: Modified in both, but differently. */
+ clean_merge = merge_content(o, path,
+ o_sha, o_mode, a_sha, a_mode, b_sha, b_mode,
+ NULL);
+ } else if (!o_sha && !a_sha && !b_sha) {
+ /*
+ * this entry was deleted altogether. a_mode == 0 means
+ * we had that path and want to actively remove it.
+ */
+ remove_file(o, 1, path, !a_mode);
+ } else
+ die("Fatal merge failure, shouldn't happen.");
return clean_merge;
}
get_files_dirs(o, merge);
entries = get_unmerged();
- make_room_for_directories_of_df_conflicts(o, entries);
+ record_df_conflict_files(o, entries);
re_head = get_renames(o, head, common, head, merge, entries);
re_merge = get_renames(o, merge, common, head, merge, entries);
clean = process_renames(o, re_head, re_merge);
- for (i = 0; i < entries->nr; i++) {
+ for (i = entries->nr-1; 0 <= i; i--) {
const char *path = entries->items[i].string;
struct stage_data *e = entries->items[i].util;
if (!e->processed
&& !process_entry(o, path, e))
clean = 0;
}
- for (i = 0; i < entries->nr; i++) {
- const char *path = entries->items[i].string;
- struct stage_data *e = entries->items[i].util;
- if (!e->processed
- && !process_df_entry(o, path, e))
- clean = 0;
- }
for (i = 0; i < entries->nr; i++) {
struct stage_data *e = entries->items[i].util;
if (!e->processed)
merged_common_ancestors = pop_commit(&ca);
if (merged_common_ancestors == NULL) {
- /* if there is no common ancestor, make an empty tree */
- struct tree *tree = xcalloc(1, sizeof(struct tree));
+ /* if there is no common ancestor, use an empty tree */
+ struct tree *tree;
- tree->object.parsed = 1;
- tree->object.type = OBJ_TREE;
- pretend_sha1_file(NULL, 0, OBJ_TREE, tree->object.sha1);
+ tree = lookup_tree((const unsigned char *)EMPTY_TREE_SHA1_BIN);
merged_common_ancestors = make_virtual_commit(tree, "ancestor");
}
o->current_file_set.strdup_strings = 1;
memset(&o->current_directory_set, 0, sizeof(struct string_list));
o->current_directory_set.strdup_strings = 1;
+ memset(&o->df_conflict_file_set, 0, sizeof(struct string_list));
+ o->df_conflict_file_set.strdup_strings = 1;
}
int parse_merge_opt(struct merge_options *o, const char *s)
struct strbuf obuf;
struct string_list current_file_set;
struct string_list current_directory_set;
+ struct string_list df_conflict_file_set;
};
/* merge_trees() but with recursive ancestor consolidation */
--- /dev/null
+diff_cmd () {
+ "$merge_tool_path" -wait -2 "$LOCAL" "$REMOTE" >/dev/null 2>&1
+}
+
+merge_cmd () {
+ touch "$BACKUP"
+ if $base_present
+ then
+ "$merge_tool_path" -wait -merge -3 -a1 \
+ "$BASE" "$LOCAL" "$REMOTE" "$MERGED" >/dev/null 2>&1
+ else
+ "$merge_tool_path" -wait -2 \
+ "$LOCAL" "$REMOTE" "$MERGED" >/dev/null 2>&1
+ fi
+ check_unchanged
+}
+
+translate_merge_tool_path() {
+ echo compare
+}
--- /dev/null
+diff_cmd () {
+ "$merge_tool_path" "$LOCAL" "$REMOTE"
+}
+
+merge_cmd () {
+ touch "$BACKUP"
+ if $base_present
+ then
+ "$merge_tool_path" "$LOCAL" "$REMOTE" "$BASE" \
+ -mergeoutput="$MERGED"
+ else
+ "$merge_tool_path" "$LOCAL" "$REMOTE" \
+ -mergeoutput="$MERGED"
+ fi
+ check_unchanged
+}
+
+translate_merge_tool_path() {
+ echo bcompare
+}
--- /dev/null
+# Redefined by builtin tools
+can_merge () {
+ return 0
+}
+
+can_diff () {
+ return 0
+}
+
+diff_cmd () {
+ merge_tool_cmd="$(get_merge_tool_cmd "$1")"
+ if test -z "$merge_tool_cmd"
+ then
+ status=1
+ break
+ fi
+ ( eval $merge_tool_cmd )
+ status=$?
+ return $status
+}
+
+merge_cmd () {
+ merge_tool_cmd="$(get_merge_tool_cmd "$1")"
+ if test -z "$merge_tool_cmd"
+ then
+ status=1
+ break
+ fi
+ trust_exit_code="$(git config --bool \
+ mergetool."$1".trustExitCode || echo false)"
+ if test "$trust_exit_code" = "false"
+ then
+ touch "$BACKUP"
+ ( eval $merge_tool_cmd )
+ status=$?
+ check_unchanged
+ else
+ ( eval $merge_tool_cmd )
+ status=$?
+ fi
+ return $status
+}
+
+translate_merge_tool_path () {
+ echo "$1"
+}
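# A minimal configuration sketch for a user-defined tool (the name
# "mymerge" is purely illustrative).  The command string configured
# below is what merge_cmd above evals, with $LOCAL, $REMOTE, $BASE and
# $MERGED already pointing at the temporary files:
#
#	git config mergetool.mymerge.cmd \
#		'mymerge "$LOCAL" "$BASE" "$REMOTE" -o "$MERGED"'
#	git config mergetool.mymerge.trustExitCode true
#	git mergetool --tool=mymerge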
--- /dev/null
+diff_cmd () {
+ "$merge_tool_path" "$LOCAL" "$REMOTE" | cat
+}
+
+merge_cmd () {
+ touch "$BACKUP"
+ if $base_present
+ then
+ "$merge_tool_path" \
+ "$LOCAL" "$MERGED" "$REMOTE" \
+ "$BASE" | cat
+ else
+ "$merge_tool_path" \
+ "$LOCAL" "$MERGED" "$REMOTE" | cat
+ fi
+ check_unchanged
+}
--- /dev/null
+diff_cmd () {
+ "$merge_tool_path" --default --mode=diff2 "$LOCAL" "$REMOTE"
+}
+
+merge_cmd () {
+ touch "$BACKUP"
+ if $base_present
+ then
+ "$merge_tool_path" "$BASE" "$LOCAL" "$REMOTE" \
+ --default --mode=merge3 --to="$MERGED"
+ else
+ "$merge_tool_path" "$LOCAL" "$REMOTE" \
+ --default --mode=merge2 --to="$MERGED"
+ fi
+ check_unchanged
+}
--- /dev/null
+diff_cmd () {
+ "$merge_tool_path" -f emerge-files-command "$LOCAL" "$REMOTE"
+}
+
+merge_cmd () {
+ if $base_present
+ then
+ "$merge_tool_path" \
+ -f emerge-files-with-ancestor-command \
+ "$LOCAL" "$REMOTE" "$BASE" \
+ "$(basename "$MERGED")"
+ else
+ "$merge_tool_path" \
+ -f emerge-files-command \
+ "$LOCAL" "$REMOTE" \
+ "$(basename "$MERGED")"
+ fi
+ status=$?
+}
+
+translate_merge_tool_path() {
+ echo emacs
+}
--- /dev/null
+diff_cmd () {
+ "$merge_tool_path" --auto \
+ --L1 "$MERGED (A)" --L2 "$MERGED (B)" \
+ "$LOCAL" "$REMOTE" >/dev/null 2>&1
+}
+
+merge_cmd () {
+ if $base_present
+ then
+ "$merge_tool_path" --auto \
+ --L1 "$MERGED (Base)" \
+ --L2 "$MERGED (Local)" \
+ --L3 "$MERGED (Remote)" \
+ -o "$MERGED" "$BASE" "$LOCAL" "$REMOTE" \
+ >/dev/null 2>&1
+ else
+ "$merge_tool_path" --auto \
+ --L1 "$MERGED (Local)" \
+ --L2 "$MERGED (Remote)" \
+ -o "$MERGED" "$LOCAL" "$REMOTE" \
+ >/dev/null 2>&1
+ fi
+ status=$?
+}
--- /dev/null
+can_merge () {
+ return 1
+}
+
+diff_cmd () {
+ "$merge_tool_path" "$LOCAL" "$REMOTE"
+}
--- /dev/null
+diff_cmd () {
+ "$merge_tool_path" "$LOCAL" "$REMOTE"
+}
+
+merge_cmd () {
+ if test -z "${meld_has_output_option:+set}"
+ then
+ check_meld_for_output_version
+ fi
+ touch "$BACKUP"
+ if test "$meld_has_output_option" = true
+ then
+ "$merge_tool_path" --output "$MERGED" \
+ "$LOCAL" "$BASE" "$REMOTE"
+ else
+ "$merge_tool_path" "$LOCAL" "$MERGED" "$REMOTE"
+ fi
+ check_unchanged
+}
+
+# Check whether 'meld --output <file>' is supported
+check_meld_for_output_version () {
+ meld_path="$(git config mergetool.meld.path)"
+ meld_path="${meld_path:-meld}"
+
+ if "$meld_path" --output /dev/null --help >/dev/null 2>&1
+ then
+ meld_has_output_option=true
+ else
+ meld_has_output_option=false
+ fi
+}
--- /dev/null
+diff_cmd () {
+ "$merge_tool_path" "$LOCAL" "$REMOTE" | cat
+}
+
+merge_cmd () {
+ touch "$BACKUP"
+ if $base_present
+ then
+ "$merge_tool_path" "$LOCAL" "$REMOTE" \
+ -ancestor "$BASE" -merge "$MERGED" | cat
+ else
+ "$merge_tool_path" "$LOCAL" "$REMOTE" \
+ -merge "$MERGED" | cat
+ fi
+ check_unchanged
+}
--- /dev/null
+diff_cmd () {
+ "$merge_tool_path" "$LOCAL" "$REMOTE"
+}
+
+merge_cmd () {
+ touch "$BACKUP"
+ $base_present || >"$BASE"
+ "$merge_tool_path" "$BASE" "$LOCAL" "$REMOTE" "$MERGED"
+ check_unchanged
+}
--- /dev/null
+diff_cmd () {
+ "$merge_tool_path" "$LOCAL" "$REMOTE"
+}
+
+merge_cmd () {
+ if $base_present
+ then
+ "$merge_tool_path" -a "$BASE" -o "$MERGED" "$LOCAL" "$REMOTE"
+ else
+ "$merge_tool_path" -o "$MERGED" "$LOCAL" "$REMOTE"
+ fi
+}
--- /dev/null
+can_diff () {
+ return 1
+}
+
+merge_cmd () {
+ if $base_present
+ then
+ touch "$BACKUP"
+ "$merge_tool_path" \
+ -base:"$BASE" -mine:"$LOCAL" \
+ -theirs:"$REMOTE" -merged:"$MERGED"
+ check_unchanged
+ else
+ echo "TortoiseMerge cannot be used without a base" 1>&2
+ return 1
+ fi
+}
--- /dev/null
+diff_cmd () {
+ case "$1" in
+ gvimdiff|vimdiff)
+ "$merge_tool_path" -R -f -d \
+ -c 'wincmd l' -c 'cd $GIT_PREFIX' "$LOCAL" "$REMOTE"
+ ;;
+ gvimdiff2|vimdiff2)
+ "$merge_tool_path" -R -f -d \
+ -c 'wincmd l' -c 'cd $GIT_PREFIX' "$LOCAL" "$REMOTE"
+ ;;
+ esac
+}
+
+merge_cmd () {
+ touch "$BACKUP"
+ case "$1" in
+ gvimdiff|vimdiff)
+ if $base_present
+ then
+ "$merge_tool_path" -f -d -c 'wincmd J' \
+ "$MERGED" "$LOCAL" "$BASE" "$REMOTE"
+ else
+ "$merge_tool_path" -f -d -c 'wincmd l' \
+ "$LOCAL" "$MERGED" "$REMOTE"
+ fi
+ ;;
+ gvimdiff2|vimdiff2)
+ "$merge_tool_path" -f -d -c 'wincmd l' \
+ "$LOCAL" "$MERGED" "$REMOTE"
+ ;;
+ esac
+ check_unchanged
+}
+
+translate_merge_tool_path() {
+ case "$1" in
+ gvimdiff|gvimdiff2)
+ echo gvim
+ ;;
+ vimdiff|vimdiff2)
+ echo vim
+ ;;
+ esac
+}
--- /dev/null
+diff_cmd () {
+ "$merge_tool_path" \
+ -R 'Accel.Search: "Ctrl+F"' \
+ -R 'Accel.SearchForward: "Ctrl-G"' \
+ "$LOCAL" "$REMOTE"
+}
+
+merge_cmd () {
+ touch "$BACKUP"
+ if $base_present
+ then
+ "$merge_tool_path" -X --show-merged-pane \
+ -R 'Accel.SaveAsMerged: "Ctrl-S"' \
+ -R 'Accel.Search: "Ctrl+F"' \
+ -R 'Accel.SearchForward: "Ctrl-G"' \
+ --merged-file "$MERGED" "$LOCAL" "$BASE" "$REMOTE"
+ else
+ "$merge_tool_path" -X $extra \
+ -R 'Accel.SaveAsMerged: "Ctrl-S"' \
+ -R 'Accel.Search: "Ctrl+F"' \
+ -R 'Accel.SearchForward: "Ctrl-G"' \
+ --merged-file "$MERGED" "$LOCAL" "$REMOTE"
+ fi
+ check_unchanged
+}
* something different on Windows.
*/
-static int spawned_pager;
-
#ifndef WIN32
static void pager_preexec(void)
{
if (!pager)
return;
- spawned_pager = 1; /* means we are emitting to terminal */
+ setenv("GIT_PAGER_IN_USE", "true", 1);
/* spawn the pager */
pager_argv[0] = pager;
int pager_in_use(void)
{
const char *env;
-
- if (spawned_pager)
- return 1;
-
env = getenv("GIT_PAGER_IN_USE");
return env ? git_config_bool("GIT_PAGER_IN_USE", env) : 0;
}
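/*
 * With the spawned_pager flag gone, "a pager is in use" is carried in
 * the environment, so spawned (non-builtin) commands can reach the same
 * coloring decision as the process that started the pager.  Roughly
 * (an illustrative sketch, not a quote of any one call site):
 *
 *	setup_pager();          exports GIT_PAGER_IN_USE=true
 *	...
 *	if (pager_in_use() && pager_use_color)
 *		keep colors on even though stdout is no longer a tty
 */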
--- /dev/null
+#include "git-compat-util.h"
+#include "parse-options.h"
+#include "cache.h"
+#include "commit.h"
+#include "color.h"
+#include "string-list.h"
+
+/*----- some often used options -----*/
+
+int parse_opt_abbrev_cb(const struct option *opt, const char *arg, int unset)
+{
+ int v;
+
+ if (!arg) {
+ v = unset ? 0 : DEFAULT_ABBREV;
+ } else {
+ v = strtol(arg, (char **)&arg, 10);
+ if (*arg)
+ return opterror(opt, "expects a numerical value", 0);
+ if (v && v < MINIMUM_ABBREV)
+ v = MINIMUM_ABBREV;
+ else if (v > 40)
+ v = 40;
+ }
+ *(int *)(opt->value) = v;
+ return 0;
+}
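/*
 * A minimal sketch of the usual way this callback is wired up (the
 * surrounding command is hypothetical): builtins use the OPT__ABBREV()
 * convenience macro and only provide an int to receive the clamped
 * value:
 *
 *	static int abbrev = DEFAULT_ABBREV;
 *	struct option options[] = {
 *		OPT__ABBREV(&abbrev),
 *		OPT_END()
 *	};
 */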
+
+int parse_opt_approxidate_cb(const struct option *opt, const char *arg,
+ int unset)
+{
+ *(unsigned long *)(opt->value) = approxidate(arg);
+ return 0;
+}
+
+int parse_opt_color_flag_cb(const struct option *opt, const char *arg,
+ int unset)
+{
+ int value;
+
+ if (!arg)
+ arg = unset ? "never" : (const char *)opt->defval;
+ value = git_config_colorbool(NULL, arg);
+ if (value < 0)
+ return opterror(opt,
+ "expects \"always\", \"auto\", or \"never\"", 0);
+ *(int *)opt->value = value;
+ return 0;
+}
+
+int parse_opt_verbosity_cb(const struct option *opt, const char *arg,
+ int unset)
+{
+ int *target = opt->value;
+
+ if (unset)
+ /* --no-quiet, --no-verbose */
+ *target = 0;
+ else if (opt->short_name == 'v') {
+ if (*target >= 0)
+ (*target)++;
+ else
+ *target = 1;
+ } else {
+ if (*target <= 0)
+ (*target)--;
+ else
+ *target = -1;
+ }
+ return 0;
+}
+
+int parse_opt_with_commit(const struct option *opt, const char *arg, int unset)
+{
+ unsigned char sha1[20];
+ struct commit *commit;
+
+ if (!arg)
+ return -1;
+ if (get_sha1(arg, sha1))
+ return error("malformed object name %s", arg);
+ commit = lookup_commit_reference(sha1);
+ if (!commit)
+ return error("no such commit %s", arg);
+ commit_list_insert(commit, opt->value);
+ return 0;
+}
+
+int parse_opt_tertiary(const struct option *opt, const char *arg, int unset)
+{
+ int *target = opt->value;
+ *target = unset ? 2 : 1;
+ return 0;
+}
+
+int parse_options_concat(struct option *dst, size_t dst_size, struct option *src)
+{
+ int i, j;
+
+ for (i = 0; i < dst_size; i++)
+ if (dst[i].type == OPTION_END)
+ break;
+ for (j = 0; i < dst_size; i++, j++) {
+ dst[i] = src[j];
+ if (src[j].type == OPTION_END)
+ return 0;
+ }
+ return -1;
+}
+
+int parse_opt_string_list(const struct option *opt, const char *arg, int unset)
+{
+ struct string_list *v = opt->value;
+
+ if (unset) {
+ string_list_clear(v, 0);
+ return 0;
+ }
+
+ if (!arg)
+ return -1;
+
+ string_list_append(v, xstrdup(arg));
+ return 0;
+}
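/*
 * A sketch of combining two option tables with parse_options_concat()
 * (the array size and table names are illustrative).  Since OPTION_END
 * is zero, a zero-initialized destination already terminates correctly:
 *
 *	struct option all[64] = { OPT_END() };
 *	if (parse_options_concat(all, ARRAY_SIZE(all), common_opts) ||
 *	    parse_options_concat(all, ARRAY_SIZE(all), extra_opts))
 *		die("too many options");
 */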
#include "cache.h"
#include "commit.h"
#include "color.h"
-#include "string-list.h"
static int parse_options_usage(struct parse_opt_ctx_t *ctx,
const char * const *usagestr,
#define OPT_SHORT 1
#define OPT_UNSET 2
-static int optbug(const struct option *opt, const char *reason)
+int optbug(const struct option *opt, const char *reason)
{
if (opt->long_name)
return error("BUG: option '%s' %s", opt->long_name, reason);
return error("BUG: switch '%c' %s", opt->short_name, reason);
}
-static int opterror(const struct option *opt, const char *reason, int flags)
+int opterror(const struct option *opt, const char *reason, int flags)
{
if (flags & OPT_SHORT)
return error("switch `%c' %s", opt->short_name, reason);
return usage_with_options_internal(ctx, usagestr, opts, 0, err);
}
-
-/*----- some often used options -----*/
-#include "cache.h"
-
-int parse_opt_abbrev_cb(const struct option *opt, const char *arg, int unset)
-{
- int v;
-
- if (!arg) {
- v = unset ? 0 : DEFAULT_ABBREV;
- } else {
- v = strtol(arg, (char **)&arg, 10);
- if (*arg)
- return opterror(opt, "expects a numerical value", 0);
- if (v && v < MINIMUM_ABBREV)
- v = MINIMUM_ABBREV;
- else if (v > 40)
- v = 40;
- }
- *(int *)(opt->value) = v;
- return 0;
-}
-
-int parse_opt_approxidate_cb(const struct option *opt, const char *arg,
- int unset)
-{
- *(unsigned long *)(opt->value) = approxidate(arg);
- return 0;
-}
-
-int parse_opt_color_flag_cb(const struct option *opt, const char *arg,
- int unset)
-{
- int value;
-
- if (!arg)
- arg = unset ? "never" : (const char *)opt->defval;
- value = git_config_colorbool(NULL, arg, -1);
- if (value < 0)
- return opterror(opt,
- "expects \"always\", \"auto\", or \"never\"", 0);
- *(int *)opt->value = value;
- return 0;
-}
-
-int parse_opt_verbosity_cb(const struct option *opt, const char *arg,
- int unset)
-{
- int *target = opt->value;
-
- if (unset)
- /* --no-quiet, --no-verbose */
- *target = 0;
- else if (opt->short_name == 'v') {
- if (*target >= 0)
- (*target)++;
- else
- *target = 1;
- } else {
- if (*target <= 0)
- (*target)--;
- else
- *target = -1;
- }
- return 0;
-}
-
-int parse_opt_with_commit(const struct option *opt, const char *arg, int unset)
-{
- unsigned char sha1[20];
- struct commit *commit;
-
- if (!arg)
- return -1;
- if (get_sha1(arg, sha1))
- return error("malformed object name %s", arg);
- commit = lookup_commit_reference(sha1);
- if (!commit)
- return error("no such commit %s", arg);
- commit_list_insert(commit, opt->value);
- return 0;
-}
-
-int parse_opt_tertiary(const struct option *opt, const char *arg, int unset)
-{
- int *target = opt->value;
- *target = unset ? 2 : 1;
- return 0;
-}
-
-int parse_options_concat(struct option *dst, size_t dst_size, struct option *src)
-{
- int i, j;
-
- for (i = 0; i < dst_size; i++)
- if (dst[i].type == OPTION_END)
- break;
- for (j = 0; i < dst_size; i++, j++) {
- dst[i] = src[j];
- if (src[j].type == OPTION_END)
- return 0;
- }
- return -1;
-}
-
-int parse_opt_string_list(const struct option *opt, const char *arg, int unset)
-{
- struct string_list *v = opt->value;
-
- if (unset) {
- string_list_clear(v, 0);
- return 0;
- }
-
- if (!arg)
- return -1;
-
- string_list_append(v, xstrdup(arg));
- return 0;
-}
const char * const *usagestr,
const struct option *options);
+extern int optbug(const struct option *opt, const char *reason);
+extern int opterror(const struct option *opt, const char *reason, int flags);
/*----- incremental advanced APIs -----*/
enum {
strbuf_addch(&buf, '/');
strbuf_addstr(&buf, ".git");
- git_dir = read_gitfile_gently(buf.buf);
+ git_dir = read_gitfile(buf.buf);
if (git_dir) {
strbuf_reset(&buf);
strbuf_addstr(&buf, git_dir);
memcpy(gitdir + len, "/.git", 6);
len += 5;
- tmp = read_gitfile_gently(gitdir);
+ tmp = read_gitfile(gitdir);
if (tmp) {
free(gitdir);
len = strlen(tmp);
return sanitized;
}
-/*
- * Unlike prefix_path, this should be used if the named file does
- * not have to interact with index entry; i.e. name of a random file
- * on the filesystem.
- */
-const char *prefix_filename(const char *pfx, int pfx_len, const char *arg)
-{
- static char path[PATH_MAX];
-#ifndef WIN32
- if (!pfx_len || is_absolute_path(arg))
- return arg;
- memcpy(path, pfx, pfx_len);
- strcpy(path + pfx_len, arg);
-#else
- char *p;
- /* don't add prefix to absolute paths, but still replace '\' by '/' */
- if (is_absolute_path(arg))
- pfx_len = 0;
- else if (pfx_len)
- memcpy(path, pfx, pfx_len);
- strcpy(path + pfx_len, arg);
- for (p = path + pfx_len; *p; p++)
- if (*p == '\\')
- *p = '/';
-#endif
- return path;
-}
-
int check_filename(const char *prefix, const char *arg)
{
const char *name;
* Try to read the location of the git directory from the .git file,
* return path to git directory if found.
*/
-const char *read_gitfile_gently(const char *path)
+const char *read_gitfile(const char *path)
{
char *buf;
char *dir;
if (PATH_MAX - 40 < strlen(gitdirenv))
die("'$%s' too big", GIT_DIR_ENVIRONMENT);
- gitfile = (char*)read_gitfile_gently(gitdirenv);
+ gitfile = (char*)read_gitfile(gitdirenv);
if (gitfile) {
gitfile = xstrdup(gitfile);
gitdirenv = gitfile;
if (one_filesystem)
current_device = get_device_or_die(".", NULL);
for (;;) {
- gitfile = (char*)read_gitfile_gently(DEFAULT_GIT_DIR_ENVIRONMENT);
+ gitfile = (char*)read_gitfile(DEFAULT_GIT_DIR_ENVIRONMENT);
if (gitfile)
gitdirenv = gitfile = xstrdup(gitfile);
else {
{
struct lock_file *lock = xcalloc(1, sizeof(struct lock_file));
int fd = hold_lock_file_for_append(lock, git_path("objects/info/alternates"), LOCK_DIE_ON_ERROR);
- char *alt = mkpath("%s/objects\n", reference);
+ char *alt = mkpath("%s\n", reference);
write_or_die(fd, alt, strlen(alt));
if (commit_lock_file(lock))
die("could not close alternates file");
{
sb->alloc = sb->len = 0;
sb->buf = strbuf_slopbuf;
- if (hint) {
+ if (hint)
strbuf_grow(sb, hint);
- sb->buf[0] = '\0';
- }
}
void strbuf_release(struct strbuf *sb)
void strbuf_grow(struct strbuf *sb, size_t extra)
{
+ int new_buf = !sb->alloc;
if (unsigned_add_overflows(extra, 1) ||
unsigned_add_overflows(sb->len, extra + 1))
die("you want to use way too much memory");
- if (!sb->alloc)
+ if (new_buf)
sb->buf = NULL;
ALLOC_GROW(sb->buf, sb->len + extra + 1, sb->alloc);
+ if (new_buf)
+ sb->buf[0] = '\0';
}
void strbuf_trim(struct strbuf *sb)
return unsorted_string_list_lookup(list, string) != NULL;
}
+void unsorted_string_list_delete_item(struct string_list *list, int i, int free_util)
+{
+ if (list->strdup_strings)
+ free(list->items[i].string);
+ if (free_util)
+ free(list->items[i].util);
+ list->items[i] = list->items[list->nr-1];
+ list->nr--;
+}
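/*
 * Deletion is O(1) because the last item is moved into the vacated
 * slot, so callers must not expect the list to stay sorted.  A short
 * sketch (the strings are only examples):
 *
 *	struct string_list paths = STRING_LIST_INIT_DUP;
 *	string_list_append(&paths, "a");
 *	string_list_append(&paths, "b");
 *	unsorted_string_list_delete_item(&paths, 0, 0);
 *	paths now contains just "b"
 */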
int unsorted_string_list_has_string(struct string_list *list, const char *string);
struct string_list_item *unsorted_string_list_lookup(struct string_list *list,
const char *string);
+void unsorted_string_list_delete_item(struct string_list *list, int i, int free_util);
#endif /* STRING_LIST_H */
const char *git_dir;
strbuf_addf(&objects_directory, "%s/.git", path);
- git_dir = read_gitfile_gently(objects_directory.buf);
+ git_dir = read_gitfile(objects_directory.buf);
if (git_dir) {
strbuf_reset(&objects_directory);
strbuf_addstr(&objects_directory, git_dir);
config_fetch_recurse_submodules = value;
}
+static int has_remote(const char *refname, const unsigned char *sha1, int flags, void *cb_data)
+{
+ return 1;
+}
+
+static int submodule_needs_pushing(const char *path, const unsigned char sha1[20])
+{
+ if (add_submodule_odb(path) || !lookup_commit_reference(sha1))
+ return 0;
+
+ if (for_each_remote_ref_submodule(path, has_remote, NULL) > 0) {
+ struct child_process cp;
+ const char *argv[] = {"rev-list", NULL, "--not", "--remotes", "-n", "1" , NULL};
+ struct strbuf buf = STRBUF_INIT;
+ int needs_pushing = 0;
+
+ argv[1] = sha1_to_hex(sha1);
+ memset(&cp, 0, sizeof(cp));
+ cp.argv = argv;
+ cp.env = local_repo_env;
+ cp.git_cmd = 1;
+ cp.no_stdin = 1;
+ cp.out = -1;
+ cp.dir = path;
+ if (start_command(&cp))
+ die("Could not run 'git rev-list %s --not --remotes -n 1' command in submodule %s",
+ sha1_to_hex(sha1), path);
+ if (strbuf_read(&buf, cp.out, 41))
+ needs_pushing = 1;
+ finish_command(&cp);
+ close(cp.out);
+ strbuf_release(&buf);
+ return needs_pushing;
+ }
+
+ return 0;
+}
+
+static void collect_submodules_from_diff(struct diff_queue_struct *q,
+ struct diff_options *options,
+ void *data)
+{
+ int i;
+ int *needs_pushing = data;
+
+ for (i = 0; i < q->nr; i++) {
+ struct diff_filepair *p = q->queue[i];
+ if (!S_ISGITLINK(p->two->mode))
+ continue;
+ if (submodule_needs_pushing(p->two->path, p->two->sha1)) {
+ *needs_pushing = 1;
+ break;
+ }
+ }
+}
+

+static void commit_need_pushing(struct commit *commit, struct commit_list *parent, int *needs_pushing)
+{
+ const unsigned char (*parents)[20];
+ unsigned int i, n;
+ struct rev_info rev;
+
+ n = commit_list_count(parent);
+ parents = xmalloc(n * sizeof(*parents));
+
+ for (i = 0; i < n; i++) {
+ hashcpy((unsigned char *)(parents + i), parent->item->object.sha1);
+ parent = parent->next;
+ }
+
+ init_revisions(&rev, NULL);
+ rev.diffopt.output_format |= DIFF_FORMAT_CALLBACK;
+ rev.diffopt.format_callback = collect_submodules_from_diff;
+ rev.diffopt.format_callback_data = needs_pushing;
+ diff_tree_combined(commit->object.sha1, parents, n, 1, &rev);
+
+ free(parents);
+}
+
+int check_submodule_needs_pushing(unsigned char new_sha1[20], const char *remotes_name)
+{
+ struct rev_info rev;
+ struct commit *commit;
+ const char *argv[] = {NULL, NULL, "--not", "NULL", NULL};
+ int argc = ARRAY_SIZE(argv) - 1;
+ char *sha1_copy;
+ int needs_pushing = 0;
+ struct strbuf remotes_arg = STRBUF_INIT;
+
+ strbuf_addf(&remotes_arg, "--remotes=%s", remotes_name);
+ init_revisions(&rev, NULL);
+ sha1_copy = xstrdup(sha1_to_hex(new_sha1));
+ argv[1] = sha1_copy;
+ argv[3] = remotes_arg.buf;
+ setup_revisions(argc, argv, &rev, NULL);
+ if (prepare_revision_walk(&rev))
+ die("revision walk setup failed");
+
+ while ((commit = get_revision(&rev)) && !needs_pushing)
+ commit_need_pushing(commit, commit->parents, &needs_pushing);
+
+ free(sha1_copy);
+ strbuf_release(&remotes_arg);
+
+ return needs_pushing;
+}
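/*
 * A sketch of how the push side is expected to consult this helper
 * (the surrounding loop and error message are illustrative, not a
 * quote of the actual call site): for every ref being pushed, refuse
 * the push when a submodule commit reachable from the new tip has not
 * been pushed to that submodule's remotes.
 *
 *	for (ref = remote_refs; ref; ref = ref->next)
 *		if (!is_null_sha1(ref->new_sha1) &&
 *		    check_submodule_needs_pushing(ref->new_sha1, remote->name))
 *			die("there are unpushed submodule commits");
 */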
+
static int is_submodule_commit_present(const char *path, unsigned char sha1[20])
{
int is_present = 0;
strbuf_addf(&submodule_path, "%s/%s", work_tree, ce->name);
strbuf_addf(&submodule_git_dir, "%s/.git", submodule_path.buf);
strbuf_addf(&submodule_prefix, "%s%s/", prefix, ce->name);
- git_dir = read_gitfile_gently(submodule_git_dir.buf);
+ git_dir = read_gitfile(submodule_git_dir.buf);
if (!git_dir)
git_dir = submodule_git_dir.buf;
if (is_directory(git_dir)) {
const char *git_dir;
strbuf_addf(&buf, "%s/.git", path);
- git_dir = read_gitfile_gently(buf.buf);
+ git_dir = read_gitfile(buf.buf);
if (!git_dir)
git_dir = buf.buf;
if (!is_directory(git_dir)) {
unsigned is_submodule_modified(const char *path, int ignore_untracked);
int merge_submodule(unsigned char result[20], const char *path, const unsigned char base[20],
const unsigned char a[20], const unsigned char b[20]);
+int check_submodule_needs_pushing(unsigned char new_sha1[20], const char *remotes_name);
#endif
grep "error in commit $new" out
'
+test_expect_success 'missing < email delimiter is reported nicely' '
+ git cat-file commit HEAD >basis &&
+ sed "s/<//" basis >bad-email-2 &&
+ new=$(git hash-object -t commit -w --stdin <bad-email-2) &&
+ test_when_finished "remove_object $new" &&
+ git update-ref refs/heads/bogus "$new" &&
+ test_when_finished "git update-ref -d refs/heads/bogus" &&
+ git fsck 2>out &&
+ cat out &&
+ grep "error in commit $new.* - bad name" out
+'
+
+test_expect_success 'missing email is reported nicely' '
+ git cat-file commit HEAD >basis &&
+ sed "s/[a-z]* <[^>]*>//" basis >bad-email-3 &&
+ new=$(git hash-object -t commit -w --stdin <bad-email-3) &&
+ test_when_finished "remove_object $new" &&
+ git update-ref refs/heads/bogus "$new" &&
+ test_when_finished "git update-ref -d refs/heads/bogus" &&
+ git fsck 2>out &&
+ cat out &&
+ grep "error in commit $new.* - missing email" out
+'
+
+test_expect_success '> in name is reported' '
+ git cat-file commit HEAD >basis &&
+ sed "s/ </> </" basis >bad-email-4 &&
+ new=$(git hash-object -t commit -w --stdin <bad-email-4) &&
+ test_when_finished "remove_object $new" &&
+ git update-ref refs/heads/bogus "$new" &&
+ test_when_finished "git update-ref -d refs/heads/bogus" &&
+ git fsck 2>out &&
+ cat out &&
+ grep "error in commit $new" out
+'
+
test_expect_success 'tag pointing to nonexistent' '
cat >invalid-tag <<-\EOF &&
object ffffffffffffffffffffffffffffffffffffffff
test_must_fail do_checkout branch2 $HEAD2
'
+test_expect_success 'checkout -b to @{-1} fails with the right branch name' '
+ git reset --hard HEAD &&
+ git checkout branch1 &&
+ git checkout branch2 &&
+ echo >expect "fatal: A branch named '\''branch1'\'' already exists." &&
+ test_must_fail git checkout -b @{-1} 2>actual &&
+ test_cmp expect actual
+'
+
test_expect_success 'checkout -B to an existing branch resets branch to HEAD' '
git checkout branch1 &&
test_cmp expect actual
'
+test_expect_success 'checkout -B to the current branch fails before merging' '
+ git checkout branch1 &&
+ setup_dirty_mergeable &&
+ git commit -mfooble &&
+ test_must_fail git checkout -B branch1 initial &&
+ test_must_fail test_dirty_mergeable
+'
+
test_done
ln -s e a &&
git add a e &&
test_tick &&
- git commit -m "rename a->e, symlink a->e"
+ git commit -m "rename a->e, symlink a->e" &&
+ oln=`printf e | git hash-object --stdin`
fi
'
if test_have_prereq SYMLINKS
then
- test_expect_success 'merge-recursive rename vs. rename/symlink' '
+ test_expect_failure 'merge-recursive rename vs. rename/symlink' '
git checkout -f rename &&
git merge rename-ln &&
( git ls-tree -r HEAD ; git ls-files -s ) >actual &&
(
+ echo "120000 blob $oln a"
echo "100644 blob $o0 b"
echo "100644 blob $o0 c"
echo "100644 blob $o0 d/e"
echo "100644 blob $o0 e"
+ echo "120000 $oln 0 a"
echo "100644 $o0 0 b"
echo "100644 $o0 0 c"
echo "100644 $o0 0 d/e"
test_must_fail git branch -m q r/q
'
+test_expect_success 'git branch -M foo bar should fail when bar is checked out' '
+ git branch bar &&
+ git checkout -b foo &&
+ test_must_fail git branch -M bar foo
+'
+
+test_expect_success 'git branch -M baz bam should succeed when baz is checked out' '
+ git checkout -b baz &&
+ git branch bam &&
+ git branch -M baz bam
+'
+
mv .git/config .git/config-saved
test_expect_success 'git branch -m q q2 without config should succeed' '
echo second > file2 &&
git add file2 &&
test_tick &&
- git commit -m "second"
+ git commit -m "second" &&
+
+ git symbolic-ref HEAD refs/heads/third &&
+ rm .git/index file2 &&
+ echo third > file3 &&
+ git add file3 &&
+ test_tick &&
+ git commit -m "third"
'
test_expect_success 'cherry-pick a root commit' '
+ git checkout second^0 &&
git cherry-pick master &&
echo first >expect &&
test_cmp expect file1
'
+test_expect_success 'cherry-pick two root commits' '
+
+ echo first >expect.file1 &&
+ echo second >expect.file2 &&
+ echo third >expect.file3 &&
+
+ git checkout second^0 &&
+ git cherry-pick master third &&
+
+ test_cmp expect.file1 file1 &&
+ test_cmp expect.file2 file2 &&
+ test_cmp expect.file3 file3 &&
+ git rev-parse --verify HEAD^^ &&
+ test_must_fail git rev-parse --verify HEAD^^^
+
+'
+
test_done
echo bar6 > file2 &&
git add file2 &&
git stash &&
- ! "git rev-parse --quiet --verify does-not-exist" &&
+ test_must_fail git rev-parse --quiet --verify does-not-exist &&
test_must_fail git stash drop does-not-exist &&
test_must_fail git stash drop does-not-exist@{0} &&
test_must_fail git stash pop does-not-exist &&
grep "^To: R. E. Cipient <rcipient@example.com>\$" patch9
'
+# check_patch <patch>: Verify that <patch> looks like a half-sane
+# patch email to avoid a false positive with !grep
+check_patch () {
+ grep -e "^From:" "$1" &&
+ grep -e "^Date:" "$1" &&
+ grep -e "^Subject:" "$1"
+}
+
test_expect_success '--no-to overrides config.to' '
git config --replace-all format.to \
"R. E. Cipient <rcipient@example.com>" &&
git format-patch --no-to --stdout master..side |
sed -e "/^\$/q" >patch10 &&
+ check_patch patch10 &&
! grep "^To: R. E. Cipient <rcipient@example.com>\$" patch10
'
git format-patch --no-to --to="Someone Else <else@out.there>" \
--stdout master..side |
sed -e "/^\$/q" >patch11 &&
+ check_patch patch11 &&
! grep "^To: Someone <someone@out.there>\$" patch11 &&
grep "^To: Someone Else <else@out.there>\$" patch11
'
"C. E. Cipient <rcipient@example.com>" &&
git format-patch --no-cc --stdout master..side |
sed -e "/^\$/q" >patch12 &&
+ check_patch patch12 &&
! grep "^Cc: C. E. Cipient <rcipient@example.com>\$" patch12
'
-test_expect_success '--no-add-headers overrides config.headers' '
+test_expect_success '--no-add-header overrides config.headers' '
git config --replace-all format.headers \
"Header1: B. E. Cipient <rcipient@example.com>" &&
- git format-patch --no-add-headers --stdout master..side |
+ git format-patch --no-add-header --stdout master..side |
sed -e "/^\$/q" >patch13 &&
+ check_patch patch13 &&
! grep "^Header1: B. E. Cipient <rcipient@example.com>\$" patch13
'
git mv file foo &&
git commit -m foo &&
git format-patch --cover-letter -1 &&
+ check_patch 0000-cover-letter.patch &&
! grep "file => foo .* 0 *\$" 0000-cover-letter.patch &&
git format-patch --cover-letter -1 -M &&
grep "file => foo .* 0 *\$" 0000-cover-letter.patch
git config format.signature "config sig" &&
git format-patch --stdout --signature="my sig" --no-signature \
-1 >output &&
+ check_patch output &&
! grep "config sig" output &&
! grep "my sig" output &&
! grep "^-- \$" output
test_expect_success 'format.signature="" suppresses signatures' '
git config format.signature "" &&
git format-patch --stdout -1 >output &&
+ check_patch output &&
! grep "^-- \$" output
'
test_expect_success 'format-patch --no-signature suppresses signatures' '
git config --unset-all format.signature &&
git format-patch --stdout --no-signature -1 >output &&
+ check_patch output &&
! grep "^-- \$" output
'
test_expect_success 'format-patch --signature="" suppresses signatures' '
- git format-patch --signature="" -1 >output &&
+ git format-patch --stdout --signature="" -1 >output &&
+ check_patch output &&
! grep "^-- \$" output
'
)
'
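+# The tests below push the superproject "work" with
+# --recurse-submodules=check and expect the push to be refused whenever
+# the gar/bage commit recorded in the superproject has not been pushed
+# to the submodule's remote.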
+test_expect_success 'push if submodule has no remote' '
+ (
+ cd work/gar/bage &&
+ >junk2 &&
+ git add junk2 &&
+ git commit -m "Second junk"
+ ) &&
+ (
+ cd work &&
+ git add gar/bage &&
+ git commit -m "Second commit for gar/bage" &&
+ git push --recurse-submodules=check ../pub.git master
+ )
+'
+
+test_expect_success 'push fails if submodule commit not on remote' '
+ (
+ cd work/gar &&
+ git clone --bare bage ../../submodule.git &&
+ cd bage &&
+ git remote add origin ../../../submodule.git &&
+ git fetch &&
+ >junk3 &&
+ git add junk3 &&
+ git commit -m "Third junk"
+ ) &&
+ (
+ cd work &&
+ git add gar/bage &&
+ git commit -m "Third commit for gar/bage" &&
+ test_must_fail git push --recurse-submodules=check ../pub.git master
+ )
+'
+
+test_expect_success 'push succeeds after commit was pushed to remote' '
+ (
+ cd work/gar/bage &&
+ git push origin master
+ ) &&
+ (
+ cd work &&
+ git push --recurse-submodules=check ../pub.git master
+ )
+'
+
+test_expect_success 'push fails when commit on multiple branches if one branch has no remote' '
+ (
+ cd work/gar/bage &&
+ >junk4 &&
+ git add junk4 &&
+ git commit -m "Fourth junk"
+ ) &&
+ (
+ cd work &&
+ git branch branch2 &&
+ git add gar/bage &&
+ git commit -m "Fourth commit for gar/bage" &&
+ git checkout branch2 &&
+ (
+ cd gar/bage &&
+ git checkout HEAD~1
+ ) &&
+ >junk1 &&
+ git add junk1 &&
+ git commit -m "First junk" &&
+ test_must_fail git push --recurse-submodules=check ../pub.git
+ )
+'
+
+test_expect_success 'push succeeds if submodule has no remote and is on the first superproject commit' '
+	git init --bare a &&
+ git clone a a1 &&
+ (
+ cd a1 &&
+		git init b &&
+ (
+ cd b &&
+ >junk &&
+ git add junk &&
+ git commit -m "initial"
+ ) &&
+ git add b &&
+ git commit -m "added submodule" &&
+		git push --recurse-submodules=check origin master
+ )
+'
+
test_done
x40="$x38$x2"
test_expect_success 'PUT and MOVE sends object to URLs with SHA-1 hash suffix' '
- sed -e "s/PUT /OP /" -e "s/MOVE /OP /" "$HTTPD_ROOT_PATH"/access.log |
- grep -e "\"OP .*/objects/$x2/${x38}_$x40 HTTP/[.0-9]*\" 20[0-9] "
+ sed \
+ -e "s/PUT /OP /" \
+ -e "s/MOVE /OP /" \
+ -e "s|/objects/$x2/${x38}_$x40|WANTED_PATH_REQUEST|" \
+ "$HTTPD_ROOT_PATH"/access.log |
+ grep -e "\"OP .*WANTED_PATH_REQUEST HTTP/[.0-9]*\" 20[0-9] "
'
test_cmp expected dst/.git
'
+test_expect_success 'clone from .git file' '
+ git clone dst/.git dst2
+'
+
test_expect_success 'clone separate gitdir where target already exists' '
rm -rf dst &&
test_must_fail git clone --separate-git-dir realgitdir src dst
'
+test_expect_success 'clone --reference from original' '
+ git clone --shared --bare src src-1 &&
+ git clone --bare src src-2 &&
+ git clone --reference=src-2 --bare src-1 target-8 &&
+ grep /src-2/ target-8/objects/info/alternates
+'
+
+test_expect_success 'clone with more than one --reference' '
+ git clone --bare src src-3 &&
+ git clone --bare src src-4 &&
+ git clone --reference=src-3 --reference=src-4 src target-9 &&
+ grep /src-3/ target-9/.git/objects/info/alternates &&
+ grep /src-4/ target-9/.git/objects/info/alternates
+'
+
+test_expect_success 'clone from original with relative alternate' '
+ mkdir nest &&
+ git clone --bare src nest/src-5 &&
+ echo ../../../src/.git/objects >nest/src-5/objects/info/alternates &&
+ git clone --bare nest/src-5 target-10 &&
+ grep /src/\\.git/objects target-10/objects/info/alternates
+'
+
test_done
git add letters &&
git commit -m initial &&
+ # Throw in letters.txt for sorting order fun
+ # ("letters.txt" sorts between "letters" and "letters/file")
echo i >>letters &&
- git add letters &&
+ echo "version 2" >letters.txt &&
+ git add letters letters.txt &&
git commit -m modified &&
git checkout -b delete HEAD^ &&
git rm letters &&
mkdir letters &&
>letters/file &&
- git add letters &&
+ echo "version 1" >letters.txt &&
+ git add letters letters.txt &&
git commit -m deleted
'
git checkout delete^0 &&
test_must_fail git merge modify &&
- test 3 = $(git ls-files -s | wc -l) &&
- test 2 = $(git ls-files -u | wc -l) &&
- test 1 = $(git ls-files -o | wc -l) &&
+ test 5 -eq $(git ls-files -s | wc -l) &&
+ test 4 -eq $(git ls-files -u | wc -l) &&
+ test 1 -eq $(git ls-files -o | wc -l) &&
test -f letters/file &&
+ test -f letters.txt &&
test -f letters~modify
'
test_expect_success 'modify/delete + directory/file conflict; other way' '
+ # Yes, we really need the double reset since "letters" appears as
+ # both a file and a directory.
+ git reset --hard &&
git reset --hard &&
git clean -f &&
git checkout modify^0 &&
+
test_must_fail git merge delete &&
- test 3 = $(git ls-files -s | wc -l) &&
- test 2 = $(git ls-files -u | wc -l) &&
- test 1 = $(git ls-files -o | wc -l) &&
+ test 5 -eq $(git ls-files -s | wc -l) &&
+ test 4 -eq $(git ls-files -u | wc -l) &&
+ test 1 -eq $(git ls-files -o | wc -l) &&
test -f letters/file &&
+ test -f letters.txt &&
test -f letters~HEAD
'
git reset --hard &&
git checkout --orphan dir-in-way &&
git rm -rf . &&
+ git clean -fdqx &&
mkdir sub &&
mkdir dir &&
git checkout -q renamed-file-has-no-conflicts^0 &&
test_must_fail git merge --strategy=recursive dir-in-way >output &&
- grep "CONFLICT (delete/modify): dir/file-in-the-way" output &&
+ grep "CONFLICT (modify/delete): dir/file-in-the-way" output &&
grep "Auto-merging dir" output &&
grep "Adding as dir~HEAD instead" output &&
- test 2 -eq "$(git ls-files -u | wc -l)" &&
+ test 3 -eq "$(git ls-files -u | wc -l)" &&
test 2 -eq "$(git ls-files -u dir/file-in-the-way | wc -l)" &&
test_must_fail git diff --quiet &&
test_must_fail git merge --strategy=recursive renamed-file-has-no-conflicts >output 2>errors &&
! grep "error: refusing to lose untracked file at" errors &&
- grep "CONFLICT (delete/modify): dir/file-in-the-way" output &&
+ grep "CONFLICT (modify/delete): dir/file-in-the-way" output &&
grep "Auto-merging dir" output &&
grep "Adding as dir~renamed-file-has-no-conflicts instead" output &&
- test 2 -eq "$(git ls-files -u | wc -l)" &&
+ test 3 -eq "$(git ls-files -u | wc -l)" &&
test 2 -eq "$(git ls-files -u dir/file-in-the-way | wc -l)" &&
test_must_fail git diff --quiet &&
8
9
10
-<<<<<<< HEAD
+<<<<<<< HEAD:dir
12
=======
11
->>>>>>> dir-not-in-way
+>>>>>>> dir-not-in-way:sub/file
EOF
test_expect_success 'Rename+D/F conflict; renamed file cannot merge, dir not in way' '
8
9
10
-<<<<<<< HEAD
+<<<<<<< HEAD:sub/file
11
=======
12
->>>>>>> renamed-file-has-conflicts
+>>>>>>> renamed-file-has-conflicts:dir
EOF
test_expect_success 'Same as previous, but merged other way' '
! test -f original
'
+test_expect_success 'setup avoid unnecessary update, normal rename' '
+ git reset --hard &&
+ git checkout --orphan avoid-unnecessary-update-1 &&
+ git rm -rf . &&
+ git clean -fdqx &&
+
+ printf "1\n2\n3\n4\n5\n6\n7\n8\n9\n10\n" >original &&
+ git add -A &&
+ git commit -m "Common commmit" &&
+
+ git mv original rename &&
+ echo 11 >>rename &&
+ git add -u &&
+ git commit -m "Renamed and modified" &&
+
+ git checkout -b merge-branch-1 HEAD~1 &&
+ echo "random content" >random-file &&
+ git add -A &&
+ git commit -m "Random, unrelated changes"
+'
+
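+# The "avoid unnecessary update" tests below stamp the file of interest
+# with a fixed mtime (test-chmtime =<time>), record it with
+# "test-chmtime -v +0", merge, and record it again; identical timestamps
+# show that the merge did not needlessly rewrite an unchanged file.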
+test_expect_success 'avoid unnecessary update, normal rename' '
+ git checkout -q avoid-unnecessary-update-1^0 &&
+ test-chmtime =1000000000 rename &&
+ test-chmtime -v +0 rename >expect &&
+ git merge merge-branch-1 &&
+ test-chmtime -v +0 rename >actual &&
+ test_cmp expect actual # "rename" should have stayed intact
+'
+
+test_expect_success 'setup to test avoiding unnecessary update, with D/F conflict' '
+ git reset --hard &&
+ git checkout --orphan avoid-unnecessary-update-2 &&
+ git rm -rf . &&
+ git clean -fdqx &&
+
+ mkdir df &&
+ printf "1\n2\n3\n4\n5\n6\n7\n8\n9\n10\n" >df/file &&
+ git add -A &&
+ git commit -m "Common commmit" &&
+
+ git mv df/file temp &&
+ rm -rf df &&
+ git mv temp df &&
+ echo 11 >>df &&
+ git add -u &&
+ git commit -m "Renamed and modified" &&
+
+ git checkout -b merge-branch-2 HEAD~1 &&
+ >unrelated-change &&
+ git add unrelated-change &&
+ git commit -m "Only unrelated changes"
+'
+
+test_expect_success 'avoid unnecessary update, with D/F conflict' '
+ git checkout -q avoid-unnecessary-update-2^0 &&
+ test-chmtime =1000000000 df &&
+ test-chmtime -v +0 df >expect &&
+ git merge merge-branch-2 &&
+ test-chmtime -v +0 df >actual &&
+ test_cmp expect actual # "df" should have stayed intact
+'
+
+test_expect_success 'setup avoid unnecessary update, dir->(file,nothing)' '
+ git rm -rf . &&
+ git clean -fdqx &&
+ rm -rf .git &&
+ git init &&
+
+ >irrelevant &&
+ mkdir df &&
+ >df/file &&
+ git add -A &&
+ git commit -mA &&
+
+	git checkout -b side &&
+ git rm -rf df &&
+ git commit -mB &&
+
+ git checkout master &&
+ git rm -rf df &&
+ echo bla >df &&
+ git add -A &&
+ git commit -m "Add a newfile"
+'
+
+test_expect_success 'avoid unnecessary update, dir->(file,nothing)' '
+ git checkout -q master^0 &&
+ test-chmtime =1000000000 df &&
+ test-chmtime -v +0 df >expect &&
+ git merge side &&
+ test-chmtime -v +0 df >actual &&
+ test_cmp expect actual # "df" should have stayed intact
+'
+
+test_expect_success 'setup avoid unnecessary update, modify/delete' '
+ git rm -rf . &&
+ git clean -fdqx &&
+ rm -rf .git &&
+ git init &&
+
+ >irrelevant &&
+ >file &&
+ git add -A &&
+ git commit -mA &&
+
+	git checkout -b side &&
+ git rm -f file &&
+ git commit -m "Delete file" &&
+
+ git checkout master &&
+ echo bla >file &&
+ git add -A &&
+ git commit -m "Modify file"
+'
+
+test_expect_success 'avoid unnecessary update, modify/delete' '
+ git checkout -q master^0 &&
+ test-chmtime =1000000000 file &&
+ test-chmtime -v +0 file >expect &&
+ test_must_fail git merge side &&
+ test-chmtime -v +0 file >actual &&
+ test_cmp expect actual # "file" should have stayed intact
+'
+
+test_expect_success 'setup avoid unnecessary update, rename/add-dest' '
+ git rm -rf . &&
+ git clean -fdqx &&
+ rm -rf .git &&
+ git init &&
+
+ printf "1\n2\n3\n4\n5\n6\n7\n8\n" >file &&
+ git add -A &&
+ git commit -mA &&
+
+	git checkout -b side &&
+ cp file newfile &&
+ git add -A &&
+ git commit -m "Add file copy" &&
+
+ git checkout master &&
+ git mv file newfile &&
+ git commit -m "Rename file"
+'
+
+test_expect_success 'avoid unnecessary update, rename/add-dest' '
+ git checkout -q master^0 &&
+ test-chmtime =1000000000 newfile &&
+ test-chmtime -v +0 newfile >expect &&
+ git merge side &&
+ test-chmtime -v +0 newfile >actual &&
+ test_cmp expect actual # "file" should have stayed intact
+'
+
+test_expect_success 'setup merge of rename + small change' '
+ git reset --hard &&
+ git checkout --orphan rename-plus-small-change &&
+ git rm -rf . &&
+ git clean -fdqx &&
+
+ echo ORIGINAL >file &&
+ git add file &&
+
+ test_tick &&
+ git commit -m Initial &&
+ git checkout -b rename_branch &&
+ git mv file renamed_file &&
+ git commit -m Rename &&
+ git checkout rename-plus-small-change &&
+ echo NEW-VERSION >file &&
+ git commit -a -m Reformat
+'
+
+test_expect_success 'merge rename + small change' '
+ git merge rename_branch &&
+
+ test 1 -eq $(git ls-files -s | wc -l) &&
+ test 0 -eq $(git ls-files -o | wc -l) &&
+ test $(git rev-parse HEAD:renamed_file) = $(git rev-parse HEAD~1:file)
+'
+
+test_expect_success 'setup for use of extended merge markers' '
+ git rm -rf . &&
+ git clean -fdqx &&
+ rm -rf .git &&
+ git init &&
+
+ printf "1\n2\n3\n4\n5\n6\n7\n8\n" >original_file &&
+ git add original_file &&
+ git commit -mA &&
+
+ git checkout -b rename &&
+ echo 9 >>original_file &&
+ git add original_file &&
+ git mv original_file renamed_file &&
+ git commit -mB &&
+
+ git checkout master &&
+ echo 8.5 >>original_file &&
+ git add original_file &&
+ git commit -mC
+'
+
+cat >expected <<\EOF &&
+1
+2
+3
+4
+5
+6
+7
+8
+<<<<<<< HEAD:renamed_file
+9
+=======
+8.5
+>>>>>>> master^0:original_file
+EOF
+
+test_expect_success 'merge master into rename has correct extended markers' '
+ git checkout rename^0 &&
+ test_must_fail git merge -s recursive master^0 &&
+ test_cmp expected renamed_file
+'
+
+cat >expected <<\EOF &&
+1
+2
+3
+4
+5
+6
+7
+8
+<<<<<<< HEAD:original_file
+8.5
+=======
+9
+>>>>>>> rename^0:renamed_file
+EOF
+
+test_expect_success 'merge rename into master has correct extended markers' '
+ git reset --hard &&
+ git checkout master^0 &&
+ test_must_fail git merge -s recursive rename^0 &&
+ test_cmp expected renamed_file
+'
+
+test_expect_success 'setup spurious "refusing to lose untracked" message' '
+ git rm -rf . &&
+ git clean -fdqx &&
+ rm -rf .git &&
+ git init &&
+
+ > irrelevant_file &&
+ printf "1\n2\n3\n4\n5\n6\n7\n8\n" >original_file &&
+ git add irrelevant_file original_file &&
+ git commit -mA &&
+
+ git checkout -b rename &&
+ git mv original_file renamed_file &&
+ git commit -mB &&
+
+ git checkout master &&
+ git rm original_file &&
+ git commit -mC
+'
+
+test_expect_success 'no spurious "refusing to lose untracked" message' '
+ git checkout master^0 &&
+ test_must_fail git merge rename^0 2>errors.txt &&
+ ! grep "refusing to lose untracked file" errors.txt
+'
+
test_done
git bisect reset &&
git checkout broken &&
git bisect start broken master --no-checkout &&
- git bisect run sh -c '
+ git bisect run \"\$SHELL_PATH\" -c '
GOOD=\$(git for-each-ref \"--format=%(objectname)\" refs/bisect/good-*) &&
git rev-list --objects BISECT_HEAD --not \$GOOD >tmp.\$\$ &&
git pack-objects --stdout >/dev/null < tmp.\$\$
#!/bin/sh
-test_description='recursive merge corner cases'
+test_description='recursive merge corner cases involving criss-cross merges'
. ./test-lib.sh
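+# get_clean_checkout <ref>: discard index/worktree changes and untracked
+# files, then check out <ref>, so a test can start from a pristine state.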
+get_clean_checkout () {
+ git reset --hard &&
+ git clean -fdqx &&
+ git checkout "$1"
+}
+
#
# L1 L2
# o---o
test_must_fail git merge -s recursive R2^0 &&
- test 5 = $(git ls-files -s | wc -l) &&
- test 3 = $(git ls-files -u | wc -l) &&
- test 0 = $(git ls-files -o | wc -l) &&
+ test 2 = $(git ls-files -s | wc -l) &&
+ test 2 = $(git ls-files -u | wc -l) &&
+ test 2 = $(git ls-files -o | wc -l) &&
- test $(git rev-parse :0:one) = $(git rev-parse L2:one) &&
- test $(git rev-parse :0:two) = $(git rev-parse R2:two) &&
test $(git rev-parse :2:three) = $(git rev-parse L2:three) &&
test $(git rev-parse :3:three) = $(git rev-parse R2:three) &&
- cp two merged &&
- >empty &&
- test_must_fail git merge-file \
- -L "Temporary merge branch 2" \
- -L "" \
- -L "Temporary merge branch 1" \
- merged empty one &&
- test $(git rev-parse :1:three) = $(git hash-object merged)
+ test $(git rev-parse L2:three) = $(git hash-object three~HEAD) &&
+ test $(git rev-parse R2:three) = $(git hash-object three~R2^0)
'
#
test_must_fail git merge -s recursive R2^0 &&
- test 5 = $(git ls-files -s | wc -l) &&
- test 3 = $(git ls-files -u | wc -l) &&
- test 0 = $(git ls-files -o | wc -l) &&
+ test 2 = $(git ls-files -s | wc -l) &&
+ test 2 = $(git ls-files -u | wc -l) &&
+ test 2 = $(git ls-files -o | wc -l) &&
- test $(git rev-parse :0:one) = $(git rev-parse L2:one) &&
- test $(git rev-parse :0:two) = $(git rev-parse R2:two) &&
test $(git rev-parse :2:three) = $(git rev-parse L2:three) &&
test $(git rev-parse :3:three) = $(git rev-parse R2:three) &&
- head -n 10 two >merged &&
- cp one merge-me &&
- >empty &&
- test_must_fail git merge-file \
- -L "Temporary merge branch 2" \
- -L "" \
- -L "Temporary merge branch 1" \
- merged empty merge-me &&
- test $(git rev-parse :1:three) = $(git hash-object merged)
+ test $(git rev-parse L2:three) = $(git hash-object three~HEAD) &&
+ test $(git rev-parse R2:three) = $(git hash-object three~R2^0)
'
#
test $(git rev-parse :1:new_a) = $(git hash-object merged)
'
+#
+# criss-cross + modify/delete:
+#
+# B D
+# o---o
+# / \ / \
+# A o X ? F
+# \ / \ /
+# o---o
+# C E
+#
+# Commit A: file with contents 'A\n'
+# Commit B: file with contents 'B\n'
+# Commit C: file not present
+# Commit D: file with contents 'B\n'
+# Commit E: file not present
+#
+# Merging commits D & E should result in a modify/delete conflict.
+
+test_expect_success 'setup criss-cross + modify/delete resolved differently' '
+ git rm -rf . &&
+ git clean -fdqx &&
+ rm -rf .git &&
+ git init &&
+
+ echo A >file &&
+ git add file &&
+ test_tick &&
+ git commit -m A &&
+
+ git branch B &&
+ git checkout -b C &&
+ git rm file &&
+ test_tick &&
+ git commit -m C &&
+
+ git checkout B &&
+ echo B >file &&
+ git add file &&
+ test_tick &&
+ git commit -m B &&
+
+ git checkout B^0 &&
+ test_must_fail git merge C &&
+ echo B >file &&
+ git add file &&
+ test_tick &&
+ git commit -m D &&
+ git tag D &&
+
+ git checkout C^0 &&
+ test_must_fail git merge B &&
+ git rm file &&
+ test_tick &&
+ git commit -m E &&
+ git tag E
+'
+
+test_expect_success 'git detects conflict merging criss-cross+modify/delete' '
+ git checkout D^0 &&
+
+ test_must_fail git merge -s recursive E^0 &&
+
+ test 2 -eq $(git ls-files -s | wc -l) &&
+ test 2 -eq $(git ls-files -u | wc -l) &&
+
+ test $(git rev-parse :1:file) = $(git rev-parse master:file) &&
+ test $(git rev-parse :2:file) = $(git rev-parse B:file)
+'
+
+test_expect_success 'git detects conflict merging criss-cross+modify/delete, reverse direction' '
+ git reset --hard &&
+ git checkout E^0 &&
+
+ test_must_fail git merge -s recursive D^0 &&
+
+ test 2 -eq $(git ls-files -s | wc -l) &&
+ test 2 -eq $(git ls-files -u | wc -l) &&
+
+ test $(git rev-parse :1:file) = $(git rev-parse master:file) &&
+ test $(git rev-parse :3:file) = $(git rev-parse B:file)
+'
+
+#
+# criss-cross + modify/modify with very contrived file contents:
+#
+# B D
+# o---o
+# / \ / \
+# A o X ? F
+# \ / \ /
+# o---o
+# C E
+#
+# Commit A: file with contents 'A\n'
+# Commit B: file with contents 'B\n'
+# Commit C: file with contents 'C\n'
+# Commit D: file with contents 'D\n'
+# Commit E: file with contents:
+# <<<<<<< Temporary merge branch 1
+# C
+# =======
+# B
+# >>>>>>> Temporary merge branch 2
+#
+# Now, when we merge commits D & E, does git detect the conflict?
+
+test_expect_success 'setup differently handled merges of content conflict' '
+ git clean -fdqx &&
+ rm -rf .git &&
+ git init &&
+
+ echo A >file &&
+ git add file &&
+ test_tick &&
+ git commit -m A &&
+
+ git branch B &&
+ git checkout -b C &&
+ echo C >file &&
+ git add file &&
+ test_tick &&
+ git commit -m C &&
+
+ git checkout B &&
+ echo B >file &&
+ git add file &&
+ test_tick &&
+ git commit -m B &&
+
+ git checkout B^0 &&
+ test_must_fail git merge C &&
+ echo D >file &&
+ git add file &&
+ test_tick &&
+ git commit -m D &&
+ git tag D &&
+
+ git checkout C^0 &&
+ test_must_fail git merge B &&
+ cat <<EOF >file &&
+<<<<<<< Temporary merge branch 1
+C
+=======
+B
+>>>>>>> Temporary merge branch 2
+EOF
+ git add file &&
+ test_tick &&
+ git commit -m E &&
+ git tag E
+'
+
+test_expect_failure 'git detects conflict w/ criss-cross+contrived resolution' '
+ git checkout D^0 &&
+
+ test_must_fail git merge -s recursive E^0 &&
+
+ test 3 -eq $(git ls-files -s | wc -l) &&
+ test 3 -eq $(git ls-files -u | wc -l) &&
+ test 0 -eq $(git ls-files -o | wc -l) &&
+
+ test $(git rev-parse :2:file) = $(git rev-parse D:file) &&
+ test $(git rev-parse :3:file) = $(git rev-parse E:file)
+'
+
+#
+# criss-cross + d/f conflict via add/add:
+# Commit A: Neither file 'a' nor directory 'a/' exist.
+# Commit B: Introduce 'a'
+# Commit C: Introduce 'a/file'
+# Commit D: Merge B & C, keeping 'a' and deleting 'a/'
+#
+# Two different later cases:
+# Commit E1: Merge B & C, deleting 'a' but keeping 'a/file'
+# Commit E2: Merge B & C, deleting 'a' but keeping a slightly modified 'a/file'
+#
+# B D
+# o---o
+# / \ / \
+# A o X ? F
+# \ / \ /
+# o---o
+# C E1 or E2
+#
+# Merging D & E1 requires we first create a virtual merge base X from
+# merging A & B in memory. Now, if X could keep both 'a' and 'a/file' in
+# the index, then the merge of D & E1 could be resolved cleanly with both
+# 'a' and 'a/file' removed. Since git does not currently allow creating
+# such a tree, the best we can do is have X contain both 'a~<unique>' and
+# 'a/file' resulting in the merge of D and E1 having a rename/delete
+# conflict for 'a'. (Although this merge appears to be unsolvable with git
+# currently, git could do a lot better than it currently does with these
+# d/f conflicts, which is the purpose of this test.)
+#
+# Merge of D & E2 has similar issues for path 'a', but should always result
+# in a modify/delete conflict for path 'a/file'.
+#
+# We run each merge in both directions, to check for directional issues
+# with D/F conflict handling.
+#
+
+test_expect_success 'setup differently handled merges of directory/file conflict' '
+ git rm -rf . &&
+ git clean -fdqx &&
+ rm -rf .git &&
+ git init &&
+
+ >ignore-me &&
+ git add ignore-me &&
+ test_tick &&
+ git commit -m A &&
+ git tag A &&
+
+ git branch B &&
+ git checkout -b C &&
+ mkdir a &&
+ echo 10 >a/file &&
+ git add a/file &&
+ test_tick &&
+ git commit -m C &&
+
+ git checkout B &&
+ echo 5 >a &&
+ git add a &&
+ test_tick &&
+ git commit -m B &&
+
+ git checkout B^0 &&
+ test_must_fail git merge C &&
+ git clean -f &&
+ rm -rf a/ &&
+ echo 5 >a &&
+ git add a &&
+ test_tick &&
+ git commit -m D &&
+ git tag D &&
+
+ git checkout C^0 &&
+ test_must_fail git merge B &&
+ git clean -f &&
+ git rm --cached a &&
+ echo 10 >a/file &&
+ git add a/file &&
+ test_tick &&
+ git commit -m E1 &&
+ git tag E1 &&
+
+ git checkout C^0 &&
+ test_must_fail git merge B &&
+ git clean -f &&
+ git rm --cached a &&
+ printf "10\n11\n" >a/file &&
+ git add a/file &&
+ test_tick &&
+ git commit -m E2 &&
+ git tag E2
+'
+
+test_expect_success 'merge of D & E1 fails but has appropriate contents' '
+ get_clean_checkout D^0 &&
+
+ test_must_fail git merge -s recursive E1^0 &&
+
+ test 2 -eq $(git ls-files -s | wc -l) &&
+ test 1 -eq $(git ls-files -u | wc -l) &&
+ test 0 -eq $(git ls-files -o | wc -l) &&
+
+ test $(git rev-parse :0:ignore-me) = $(git rev-parse A:ignore-me) &&
+ test $(git rev-parse :2:a) = $(git rev-parse B:a)
+'
+
+test_expect_success 'merge of E1 & D fails but has appropriate contents' '
+ get_clean_checkout E1^0 &&
+
+ test_must_fail git merge -s recursive D^0 &&
+
+ test 2 -eq $(git ls-files -s | wc -l) &&
+ test 1 -eq $(git ls-files -u | wc -l) &&
+ test 0 -eq $(git ls-files -o | wc -l) &&
+
+ test $(git rev-parse :0:ignore-me) = $(git rev-parse A:ignore-me) &&
+ test $(git rev-parse :3:a) = $(git rev-parse B:a)
+'
+
+test_expect_success 'merge of D & E2 fails but has appropriate contents' '
+ get_clean_checkout D^0 &&
+
+ test_must_fail git merge -s recursive E2^0 &&
+
+ test 4 -eq $(git ls-files -s | wc -l) &&
+ test 3 -eq $(git ls-files -u | wc -l) &&
+ test 1 -eq $(git ls-files -o | wc -l) &&
+
+ test $(git rev-parse :2:a) = $(git rev-parse B:a) &&
+ test $(git rev-parse :3:a/file) = $(git rev-parse E2:a/file) &&
+ test $(git rev-parse :1:a/file) = $(git rev-parse C:a/file) &&
+ test $(git rev-parse :0:ignore-me) = $(git rev-parse A:ignore-me) &&
+
+ test -f a~HEAD
+'
+
+test_expect_success 'merge of E2 & D fails but has appropriate contents' '
+ get_clean_checkout E2^0 &&
+
+ test_must_fail git merge -s recursive D^0 &&
+
+ test 4 -eq $(git ls-files -s | wc -l) &&
+ test 3 -eq $(git ls-files -u | wc -l) &&
+ test 1 -eq $(git ls-files -o | wc -l) &&
+
+ test $(git rev-parse :3:a) = $(git rev-parse B:a) &&
+ test $(git rev-parse :2:a/file) = $(git rev-parse E2:a/file) &&
+	test $(git rev-parse :1:a/file) = $(git rev-parse C:a/file) &&
+ test $(git rev-parse :0:ignore-me) = $(git rev-parse A:ignore-me) &&
+
+ test -f a~D^0
+'
+
+#
+# criss-cross with rename/rename(1to2)/modify followed by
+# rename/rename(2to1)/modify:
+#
+# B D
+# o---o
+# / \ / \
+# A o X ? F
+# \ / \ /
+# o---o
+# C E
+#
+# Commit A: new file: a
+# Commit B: rename a->b, modifying by adding a line
+# Commit C: rename a->c
+# Commit D: merge B&C, resolving conflict by keeping contents in newname
+# Commit E: merge B&C, resolving conflict similar to D but adding another line
+#
+# There is a conflict merging B & C, but it is one of filename, not of
+# file content. Whoever created D and E chose specific resolutions for
+# that conflict. Now, since: (1) there is no content conflict
+# merging B & C, (2) D does not modify that merged content further, and (3)
+# both D & E resolve the name conflict in the same way, the modification to
+# newname in E should not cause any conflicts when it is merged with D.
+# (Note that this can be accomplished by having the virtual merge base have
+# the merged contents of b and c stored in a file named a, which seems like
+# the most logical choice anyway.)
+#
+# Comment from Junio: I do not necessarily agree with the choice "a", but
+# it feels sound to say "B and C do not agree what the final pathname
+# should be, but we know this content was derived from the common A:a so we
+# use one path whose name is arbitrary in the virtual merge base X between
+# D and E" and then further let the rename detection to notice that that
+# arbitrary path gets renamed between X-D to "newname" and X-E also to
+# "newname" to resolve it as both sides renaming it to the same new
+# name. It is akin to what we do at the content level, i.e. "B and C do not
+# agree what the final contents should be, so we leave the conflict marker
+# but that may cancel out at the final merge stage".
+
+test_expect_success 'setup rename/rename(1to2)/modify followed by what looks like rename/rename(2to1)/modify' '
+ git reset --hard &&
+ git rm -rf . &&
+ git clean -fdqx &&
+ rm -rf .git &&
+ git init &&
+
+ printf "1\n2\n3\n4\n5\n6\n" >a &&
+ git add a &&
+ git commit -m A &&
+ git tag A &&
+
+ git checkout -b B A &&
+ git mv a b &&
+ echo 7 >>b &&
+ git add -u &&
+ git commit -m B &&
+
+ git checkout -b C A &&
+ git mv a c &&
+ git commit -m C &&
+
+ git checkout -q B^0 &&
+ git merge --no-commit -s ours C^0 &&
+ git mv b newname &&
+ git commit -m "Merge commit C^0 into HEAD" &&
+ git tag D &&
+
+ git checkout -q C^0 &&
+ git merge --no-commit -s ours B^0 &&
+ git mv c newname &&
+ printf "7\n8\n" >>newname &&
+ git add -u &&
+ git commit -m "Merge commit B^0 into HEAD" &&
+ git tag E
+'
+
+test_expect_success 'handle rename/rename(1to2)/modify followed by what looks like rename/rename(2to1)/modify' '
+ git checkout D^0 &&
+
+ git merge -s recursive E^0 &&
+
+ test 1 -eq $(git ls-files -s | wc -l) &&
+ test 0 -eq $(git ls-files -u | wc -l) &&
+ test 0 -eq $(git ls-files -o | wc -l) &&
+
+ test $(git rev-parse HEAD:newname) = $(git rev-parse E:newname)
+'
+
+#
+# criss-cross with rename/rename(1to2)/add-source + resolvable modify/modify:
+#
+# B D
+# o---o
+# / \ / \
+# A o X ? F
+# \ / \ /
+# o---o
+# C E
+#
+# Commit A: new file: a
+# Commit B: rename a->b
+# Commit C: rename a->c, add different a
+# Commit D: merge B&C, keeping b&c and (new) a modified at beginning
+# Commit E: merge B&C, keeping b&c and (new) a modified at end
+#
+# Merging commits D & E should result in no conflict; doing so correctly
+# requires getting the virtual merge base (from merging B&C) right, handling
+# renaming carefully (both in the virtual merge base and later), and getting
+# content merge handled.
+
+test_expect_success 'setup criss-cross + rename/rename/add + modify/modify' '
+ git rm -rf . &&
+ git clean -fdqx &&
+ rm -rf .git &&
+ git init &&
+
+ printf "lots\nof\nwords\nand\ncontent\n" >a &&
+ git add a &&
+ git commit -m A &&
+ git tag A &&
+
+ git checkout -b B A &&
+ git mv a b &&
+ git commit -m B &&
+
+ git checkout -b C A &&
+ git mv a c &&
+ printf "2\n3\n4\n5\n6\n7\n" >a &&
+ git add a &&
+ git commit -m C &&
+
+ git checkout B^0 &&
+ git merge --no-commit -s ours C^0 &&
+ git checkout C -- a c &&
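+	# prepend a line to a, i.e. modify (new) a at the beginning as in
+	# the description of commit D above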
+ mv a old_a &&
+ echo 1 >a &&
+ cat old_a >>a &&
+ rm old_a &&
+ git add -u &&
+ git commit -m "Merge commit C^0 into HEAD" &&
+ git tag D &&
+
+ git checkout C^0 &&
+ git merge --no-commit -s ours B^0 &&
+ git checkout B -- b &&
+ echo 8 >>a &&
+ git add -u &&
+ git commit -m "Merge commit B^0 into HEAD" &&
+ git tag E
+'
+
+test_expect_failure 'detect rename/rename/add-source for virtual merge-base' '
+ git checkout D^0 &&
+
+ git merge -s recursive E^0 &&
+
+ test 3 -eq $(git ls-files -s | wc -l) &&
+ test 0 -eq $(git ls-files -u | wc -l) &&
+ test 0 -eq $(git ls-files -o | wc -l) &&
+
+ test $(git rev-parse HEAD:b) = $(git rev-parse A:a) &&
+ test $(git rev-parse HEAD:c) = $(git rev-parse A:a) &&
+ test "$(cat a)" = "$(printf "1\n2\n3\n4\n5\n6\n7\n8\n")"
+'
+
+#
+# criss-cross with rename/rename(1to2)/add-dest + simple modify:
+#
+# B D
+# o---o
+# / \ / \
+# A o X ? F
+# \ / \ /
+# o---o
+# C E
+#
+# Commit A: new file: a
+# Commit B: rename a->b, add c
+# Commit C: rename a->c
+# Commit D: merge B&C, keeping A:a and B:c
+# Commit E: merge B&C, keeping A:a and slightly modified c from B
+#
+# Merging commits D & E should result in no conflict. The virtual merge
+# base of B & C needs to not delete B:c for that to work, though...
+
+test_expect_success 'setup criss-cross+rename/rename/add-dest + simple modify' '
+ git rm -rf . &&
+ git clean -fdqx &&
+ rm -rf .git &&
+ git init &&
+
+ >a &&
+ git add a &&
+ git commit -m A &&
+ git tag A &&
+
+ git checkout -b B A &&
+ git mv a b &&
+ printf "1\n2\n3\n4\n5\n6\n7\n" >c &&
+ git add c &&
+ git commit -m B &&
+
+ git checkout -b C A &&
+ git mv a c &&
+ git commit -m C &&
+
+ git checkout B^0 &&
+ git merge --no-commit -s ours C^0 &&
+ git mv b a &&
+ git commit -m "D is like B but renames b back to a" &&
+ git tag D &&
+
+ git checkout B^0 &&
+ git merge --no-commit -s ours C^0 &&
+ git mv b a &&
+ echo 8 >>c &&
+ git add c &&
+ git commit -m "E like D but has mod in c" &&
+ git tag E
+'
+
+test_expect_success 'virtual merge base handles rename/rename(1to2)/add-dest' '
+ git checkout D^0 &&
+
+ git merge -s recursive E^0 &&
+
+ test 2 -eq $(git ls-files -s | wc -l) &&
+ test 0 -eq $(git ls-files -u | wc -l) &&
+ test 0 -eq $(git ls-files -o | wc -l) &&
+
+ test $(git rev-parse HEAD:a) = $(git rev-parse A:a) &&
+ test $(git rev-parse HEAD:c) = $(git rev-parse E:c)
+'
+
test_done
grep -q "^refs/heads/master$" actual &&
cmp expect2 actual2
'
+
+test_expect_success '--set-upstream @{-1}' '
+ git checkout from-master &&
+ git checkout from-master2 &&
+ git config branch.from-master2.merge > expect2 &&
+ git branch --set-upstream @{-1} follower &&
+ git config branch.from-master.merge > actual &&
+ git config branch.from-master2.merge > actual2 &&
+ git branch --set-upstream from-master follower &&
+ git config branch.from-master.merge > expect &&
+ test_cmp expect2 actual2 &&
+ test_cmp expect actual
+'
+
test_done
--- /dev/null
+#!/bin/sh
+
+test_description="recursive merge corner cases w/ renames but not criss-crosses"
+# t6036 has corner cases that involve both criss-cross merges and renames
+
+. ./test-lib.sh
+
+test_expect_success 'setup rename/delete + untracked file' '
+ echo "A pretty inscription" >ring &&
+ git add ring &&
+ test_tick &&
+ git commit -m beginning &&
+
+ git branch people &&
+ git checkout -b rename-the-ring &&
+ git mv ring one-ring-to-rule-them-all &&
+ test_tick &&
+ git commit -m fullname &&
+
+ git checkout people &&
+ git rm ring &&
+ echo gollum >owner &&
+ git add owner &&
+ test_tick &&
+ git commit -m track-people-instead-of-objects &&
+ echo "Myyy PRECIOUSSS" >ring
+'
+
+test_expect_success "Does git preserve Gollum's precious artifact?" '
+ test_must_fail git merge -s recursive rename-the-ring &&
+
+ # Make sure git did not delete an untracked file
+ test -f ring
+'
+
+# Testcase setup for rename/modify/add-source:
+# Commit A: new file: a
+# Commit B: modify a slightly
+# Commit C: rename a->b, add completely different a
+#
+# We should be able to merge B & C cleanly
+
+test_expect_success 'setup rename/modify/add-source conflict' '
+ git rm -rf . &&
+ git clean -fdqx &&
+ rm -rf .git &&
+ git init &&
+
+ printf "1\n2\n3\n4\n5\n6\n7\n" >a &&
+ git add a &&
+ git commit -m A &&
+ git tag A &&
+
+ git checkout -b B A &&
+ echo 8 >>a &&
+ git add a &&
+ git commit -m B &&
+
+ git checkout -b C A &&
+ git mv a b &&
+ echo something completely different >a &&
+ git add a &&
+ git commit -m C
+'
+
+test_expect_failure 'rename/modify/add-source conflict resolvable' '
+ git checkout B^0 &&
+
+ git merge -s recursive C^0 &&
+
+ test $(git rev-parse B:a) = $(git rev-parse b) &&
+ test $(git rev-parse C:a) = $(git rev-parse a)
+'
+
+test_expect_success 'setup resolvable conflict missed if rename missed' '
+ git rm -rf . &&
+ git clean -fdqx &&
+ rm -rf .git &&
+ git init &&
+
+ printf "1\n2\n3\n4\n5\n" >a &&
+ echo foo >b &&
+ git add a b &&
+ git commit -m A &&
+ git tag A &&
+
+ git checkout -b B A &&
+ git mv a c &&
+ echo "Completely different content" >a &&
+ git add a &&
+ git commit -m B &&
+
+ git checkout -b C A &&
+ echo 6 >>a &&
+ git add a &&
+ git commit -m C
+'
+
+test_expect_failure 'conflict caused if rename not detected' '
+ git checkout -q C^0 &&
+ git merge -s recursive B^0 &&
+
+ test 3 -eq $(git ls-files -s | wc -l) &&
+ test 0 -eq $(git ls-files -u | wc -l) &&
+ test 0 -eq $(git ls-files -o | wc -l) &&
+
+ test 6 -eq $(wc -l < c) &&
+ test $(git rev-parse HEAD:a) = $(git rev-parse B:a) &&
+ test $(git rev-parse HEAD:b) = $(git rev-parse A:b)
+'
+
+test_expect_success 'setup conflict resolved wrong if rename missed' '
+ git reset --hard &&
+ git clean -f &&
+
+ git checkout -b D A &&
+ echo 7 >>a &&
+ git add a &&
+ git mv a c &&
+ echo "Completely different content" >a &&
+ git add a &&
+ git commit -m D &&
+
+ git checkout -b E A &&
+ git rm a &&
+ echo "Completely different content" >>a &&
+ git add a &&
+ git commit -m E
+'
+
+test_expect_failure 'missed conflict if rename not detected' '
+ git checkout -q E^0 &&
+ test_must_fail git merge -s recursive D^0
+'
+
+# Tests for undetected rename/add-source causing a file to erroneously be
+# deleted (and for mishandled rename/rename(1to1) causing the same issue).
+#
+# This test uses a rename/rename(1to1)+add-source conflict (1to1 means the
+# same file is renamed on both sides to the same thing; it should trigger
+# the 1to2 logic, which it would do if the add-source didn't cause issues
+# for git's rename detection):
+# Commit A: new file: a
+# Commit B: rename a->b
+# Commit C: rename a->b, add unrelated a
+
+test_expect_success 'setup undetected rename/add-source causes data loss' '
+ git rm -rf . &&
+ git clean -fdqx &&
+ rm -rf .git &&
+ git init &&
+
+ printf "1\n2\n3\n4\n5\n" >a &&
+ git add a &&
+ git commit -m A &&
+ git tag A &&
+
+ git checkout -b B A &&
+ git mv a b &&
+ git commit -m B &&
+
+ git checkout -b C A &&
+ git mv a b &&
+ echo foobar >a &&
+ git add a &&
+ git commit -m C
+'
+
+test_expect_failure 'detect rename/add-source and preserve all data' '
+ git checkout B^0 &&
+
+ git merge -s recursive C^0 &&
+
+ test 2 -eq $(git ls-files -s | wc -l) &&
+ test 2 -eq $(git ls-files -u | wc -l) &&
+ test 0 -eq $(git ls-files -o | wc -l) &&
+
+ test -f a &&
+ test -f b &&
+
+ test $(git rev-parse HEAD:b) = $(git rev-parse A:a) &&
+ test $(git rev-parse HEAD:a) = $(git rev-parse C:a)
+'
+
+test_expect_failure 'detect rename/add-source and preserve all data, merge other way' '
+ git checkout C^0 &&
+
+ git merge -s recursive B^0 &&
+
+ test 2 -eq $(git ls-files -s | wc -l) &&
+ test 2 -eq $(git ls-files -u | wc -l) &&
+ test 0 -eq $(git ls-files -o | wc -l) &&
+
+ test -f a &&
+ test -f b &&
+
+ test $(git rev-parse HEAD:b) = $(git rev-parse A:a) &&
+ test $(git rev-parse HEAD:a) = $(git rev-parse C:a)
+'
+
+test_expect_success 'setup content merge + rename/directory conflict' '
+ git rm -rf . &&
+ git clean -fdqx &&
+ rm -rf .git &&
+ git init &&
+
+ printf "1\n2\n3\n4\n5\n6\n" >file &&
+ git add file &&
+ test_tick &&
+ git commit -m base &&
+ git tag base &&
+
+ git checkout -b right &&
+ echo 7 >>file &&
+ mkdir newfile &&
+ echo junk >newfile/realfile &&
+ git add file newfile/realfile &&
+ test_tick &&
+ git commit -m right &&
+
+ git checkout -b left-conflict base &&
+ echo 8 >>file &&
+ git add file &&
+ git mv file newfile &&
+ test_tick &&
+ git commit -m left &&
+
+ git checkout -b left-clean base &&
+ echo 0 >newfile &&
+ cat file >>newfile &&
+ git add newfile &&
+ git rm file &&
+ test_tick &&
+ git commit -m left
+'
+
+test_expect_success 'rename/directory conflict + clean content merge' '
+ git reset --hard &&
+ git reset --hard &&
+ git clean -fdqx &&
+
+ git checkout left-clean^0 &&
+
+ test_must_fail git merge -s recursive right^0 &&
+
+ test 2 -eq $(git ls-files -s | wc -l) &&
+ test 1 -eq $(git ls-files -u | wc -l) &&
+ test 1 -eq $(git ls-files -o | wc -l) &&
+
+ echo 0 >expect &&
+ git cat-file -p base:file >>expect &&
+ echo 7 >>expect &&
+ test_cmp expect newfile~HEAD &&
+
+ test $(git rev-parse :2:newfile) = $(git hash-object expect) &&
+
+ test -f newfile/realfile &&
+ test -f newfile~HEAD
+'
+
+test_expect_success 'rename/directory conflict + content merge conflict' '
+ git reset --hard &&
+ git reset --hard &&
+ git clean -fdqx &&
+
+ git checkout left-conflict^0 &&
+
+ test_must_fail git merge -s recursive right^0 &&
+
+ test 4 -eq $(git ls-files -s | wc -l) &&
+ test 3 -eq $(git ls-files -u | wc -l) &&
+ test 1 -eq $(git ls-files -o | wc -l) &&
+
+ git cat-file -p left-conflict:newfile >left &&
+ git cat-file -p base:file >base &&
+ git cat-file -p right:file >right &&
+ test_must_fail git merge-file \
+ -L "HEAD:newfile" \
+ -L "" \
+ -L "right^0:file" \
+ left base right &&
+ test_cmp left newfile~HEAD &&
+
+ test $(git rev-parse :1:newfile) = $(git rev-parse base:file) &&
+ test $(git rev-parse :2:newfile) = $(git rev-parse left-conflict:newfile) &&
+ test $(git rev-parse :3:newfile) = $(git rev-parse right:file) &&
+
+ test -f newfile/realfile &&
+ test -f newfile~HEAD
+'
+
+test_expect_success 'setup content merge + rename/directory conflict w/ disappearing dir' '
+ git reset --hard &&
+ git rm -rf . &&
+ git clean -fdqx &&
+ rm -rf .git &&
+ git init &&
+
+ mkdir sub &&
+ printf "1\n2\n3\n4\n5\n6\n" >sub/file &&
+ git add sub/file &&
+ test_tick &&
+ git commit -m base &&
+ git tag base &&
+
+ git checkout -b right &&
+ echo 7 >>sub/file &&
+ git add sub/file &&
+ test_tick &&
+ git commit -m right &&
+
+ git checkout -b left base &&
+ echo 0 >newfile &&
+ cat sub/file >>newfile &&
+ git rm sub/file &&
+ mv newfile sub &&
+ git add sub &&
+ test_tick &&
+ git commit -m left
+'
+
+test_expect_success 'disappearing dir in rename/directory conflict handled' '
+ git reset --hard &&
+ git clean -fdqx &&
+
+ git checkout left^0 &&
+
+ git merge -s recursive right^0 &&
+
+ test 1 -eq $(git ls-files -s | wc -l) &&
+ test 0 -eq $(git ls-files -u | wc -l) &&
+ test 0 -eq $(git ls-files -o | wc -l) &&
+
+ echo 0 >expect &&
+ git cat-file -p base:sub/file >>expect &&
+ echo 7 >>expect &&
+ test_cmp expect sub &&
+
+ test -f sub
+'
+
+# Test for all kinds of things that can go wrong with rename/rename (2to1):
+# Commit A: new files: a & b
+# Commit B: rename a->c, modify b
+# Commit C: rename b->c, modify a
+#
+# Merging of B & C should NOT be clean. Questions:
+# * Both a & b should be removed by the merge; are they?
+# * The two c's should contain modifications to a & b; do they?
+# * The index should contain two files, both for c; does it?
+# * The working copy should have two files, both of form c~<unique>; does it?
+# * Nothing else should be present. Is anything?
+
+test_expect_success 'setup rename/rename (2to1) + modify/modify' '
+ git rm -rf . &&
+ git clean -fdqx &&
+ rm -rf .git &&
+ git init &&
+
+ printf "1\n2\n3\n4\n5\n" >a &&
+ printf "5\n4\n3\n2\n1\n" >b &&
+ git add a b &&
+ git commit -m A &&
+ git tag A &&
+
+ git checkout -b B A &&
+ git mv a c &&
+ echo 0 >>b &&
+ git add b &&
+ git commit -m B &&
+
+ git checkout -b C A &&
+ git mv b c &&
+ echo 6 >>a &&
+ git add a &&
+ git commit -m C
+'
+
+test_expect_success 'handle rename/rename (2to1) conflict correctly' '
+ git checkout B^0 &&
+
+ test_must_fail git merge -s recursive C^0 >out &&
+ grep "CONFLICT (rename/rename)" out &&
+
+ test 2 -eq $(git ls-files -s | wc -l) &&
+ test 2 -eq $(git ls-files -u | wc -l) &&
+ test 2 -eq $(git ls-files -u c | wc -l) &&
+ test 3 -eq $(git ls-files -o | wc -l) &&
+
+ test ! -f a &&
+ test ! -f b &&
+ test -f c~HEAD &&
+ test -f c~C^0 &&
+
+ test $(git hash-object c~HEAD) = $(git rev-parse C:a) &&
+ test $(git hash-object c~C^0) = $(git rev-parse B:b)
+'
+
+# Testcase setup for simple rename/rename (1to2) conflict:
+# Commit A: new file: a
+# Commit B: rename a->b
+# Commit C: rename a->c
+test_expect_success 'setup simple rename/rename (1to2) conflict' '
+ git rm -rf . &&
+ git clean -fdqx &&
+ rm -rf .git &&
+ git init &&
+
+ echo stuff >a &&
+ git add a &&
+ test_tick &&
+ git commit -m A &&
+ git tag A &&
+
+ git checkout -b B A &&
+ git mv a b &&
+ test_tick &&
+ git commit -m B &&
+
+ git checkout -b C A &&
+ git mv a c &&
+ test_tick &&
+ git commit -m C
+'
+
+test_expect_success 'merge has correct working tree contents' '
+ git checkout C^0 &&
+
+ test_must_fail git merge -s recursive B^0 &&
+
+ test 3 -eq $(git ls-files -s | wc -l) &&
+ test 3 -eq $(git ls-files -u | wc -l) &&
+ test 0 -eq $(git ls-files -o | wc -l) &&
+
+ test $(git rev-parse :1:a) = $(git rev-parse A:a) &&
+ test $(git rev-parse :3:b) = $(git rev-parse A:a) &&
+ test $(git rev-parse :2:c) = $(git rev-parse A:a) &&
+
+ test ! -f a &&
+ test $(git hash-object b) = $(git rev-parse A:a) &&
+ test $(git hash-object c) = $(git rev-parse A:a)
+'
+
+# Testcase setup for rename/rename(1to2)/add-source conflict:
+# Commit A: new file: a
+# Commit B: rename a->b
+# Commit C: rename a->c, add completely different a
+#
+# Merging of B & C should NOT be clean; there's a rename/rename conflict
+
+test_expect_success 'setup rename/rename(1to2)/add-source conflict' '
+ git rm -rf . &&
+ git clean -fdqx &&
+ rm -rf .git &&
+ git init &&
+
+ printf "1\n2\n3\n4\n5\n6\n7\n" >a &&
+ git add a &&
+ git commit -m A &&
+ git tag A &&
+
+ git checkout -b B A &&
+ git mv a b &&
+ git commit -m B &&
+
+ git checkout -b C A &&
+ git mv a c &&
+ echo something completely different >a &&
+ git add a &&
+ git commit -m C
+'
+
+test_expect_failure 'detect conflict with rename/rename(1to2)/add-source merge' '
+ git checkout B^0 &&
+
+ test_must_fail git merge -s recursive C^0 &&
+
+ test 4 -eq $(git ls-files -s | wc -l) &&
+ test 0 -eq $(git ls-files -o | wc -l) &&
+
+	test $(git rev-parse :3:a) = $(git rev-parse C:a) &&
+	test $(git rev-parse :1:a) = $(git rev-parse A:a) &&
+	test $(git rev-parse :2:b) = $(git rev-parse B:b) &&
+	test $(git rev-parse :3:c) = $(git rev-parse C:c) &&
+
+ test -f a &&
+ test -f b &&
+ test -f c
+'
+
+test_expect_success 'setup rename/rename(1to2)/add-source resolvable conflict' '
+ git rm -rf . &&
+ git clean -fdqx &&
+ rm -rf .git &&
+ git init &&
+
+ >a &&
+ git add a &&
+ test_tick &&
+ git commit -m base &&
+ git tag A &&
+
+ git checkout -b B A &&
+ git mv a b &&
+ test_tick &&
+ git commit -m one &&
+
+ git checkout -b C A &&
+ git mv a b &&
+ echo important-info >a &&
+ git add a &&
+ test_tick &&
+ git commit -m two
+'
+
+test_expect_failure 'rename/rename/add-source still tracks new a file' '
+ git checkout C^0 &&
+ git merge -s recursive B^0 &&
+
+ test 2 -eq $(git ls-files -s | wc -l) &&
+ test 0 -eq $(git ls-files -o | wc -l) &&
+
+ test $(git rev-parse HEAD:a) = $(git rev-parse C:a) &&
+ test $(git rev-parse HEAD:b) = $(git rev-parse A:a)
+'
+
+test_expect_success 'setup rename/rename(1to2)/add-dest conflict' '
+ git rm -rf . &&
+ git clean -fdqx &&
+ rm -rf .git &&
+ git init &&
+
+ echo stuff >a &&
+ git add a &&
+ test_tick &&
+ git commit -m base &&
+ git tag A &&
+
+ git checkout -b B A &&
+ git mv a b &&
+ echo precious-data >c &&
+ git add c &&
+ test_tick &&
+ git commit -m one &&
+
+ git checkout -b C A &&
+ git mv a c &&
+ echo important-info >b &&
+ git add b &&
+ test_tick &&
+ git commit -m two
+'
+
+test_expect_success 'rename/rename/add-dest merge still knows about conflicting file versions' '
+ git checkout C^0 &&
+ test_must_fail git merge -s recursive B^0 &&
+
+ test 5 -eq $(git ls-files -s | wc -l) &&
+ test 2 -eq $(git ls-files -u b | wc -l) &&
+ test 2 -eq $(git ls-files -u c | wc -l) &&
+ test 4 -eq $(git ls-files -o | wc -l) &&
+
+ test $(git rev-parse :1:a) = $(git rev-parse A:a) &&
+ test $(git rev-parse :2:b) = $(git rev-parse C:b) &&
+ test $(git rev-parse :3:b) = $(git rev-parse B:b) &&
+ test $(git rev-parse :2:c) = $(git rev-parse C:c) &&
+ test $(git rev-parse :3:c) = $(git rev-parse B:c) &&
+
+ test $(git hash-object c~HEAD) = $(git rev-parse C:c) &&
+ test $(git hash-object c~B\^0) = $(git rev-parse B:c) &&
+ test $(git hash-object b~HEAD) = $(git rev-parse C:b) &&
+ test $(git hash-object b~B\^0) = $(git rev-parse B:b) &&
+
+ test ! -f b &&
+ test ! -f c
+'
+
+test_done
test_expect_success 'setup' '
sane_unset GIT_PAGER GIT_PAGER_IN_USE &&
- test_might_fail git config --unset core.pager &&
+ test_unconfig core.pager &&
PAGER="cat >paginated.out" &&
export PAGER &&
test_expect_success TTY 'configuration can disable pager' '
rm -f paginated.out &&
- test_might_fail git config --unset pager.grep &&
+ test_unconfig pager.grep &&
test_terminal git grep initial &&
test -e paginated.out &&
rm -f paginated.out &&
- git config pager.grep false &&
- test_when_finished "git config --unset pager.grep" &&
+ test_config pager.grep false &&
test_terminal git grep initial &&
! test -e paginated.out
'
test_expect_success TTY 'git config uses a pager if configured to' '
rm -f paginated.out &&
- git config pager.config true &&
- test_when_finished "git config --unset pager.config" &&
+ test_config pager.config true &&
test_terminal git config --list &&
test -e paginated.out
'
test_expect_success TTY 'configuration can enable pager (from subdir)' '
rm -f paginated.out &&
mkdir -p subdir &&
- git config pager.bundle true &&
- test_when_finished "git config --unset pager.bundle" &&
+ test_config pager.bundle true &&
git bundle create test.bundle --all &&
rm -f paginated.out subdir/paginated.out &&
test_expect_success 'no color when stdout is a regular file' '
rm -f colorless.log &&
- git config color.ui auto ||
+ test_config color.ui auto ||
cleanup_fail &&
git log >colorless.log &&
test_expect_success TTY 'color when writing to a pager' '
rm -f paginated.out &&
- git config color.ui auto ||
+ test_config color.ui auto ||
cleanup_fail &&
(
colorful paginated.out
'
+test_expect_success TTY 'colors are suppressed by color.pager' '
+ rm -f paginated.out &&
+ test_config color.ui auto &&
+ test_config color.pager false &&
+ (
+ TERM=vt100 &&
+ export TERM &&
+ test_terminal git log
+ ) &&
+ ! colorful paginated.out
+'
+
test_expect_success 'color when writing to a file intended for a pager' '
rm -f colorful.log &&
- git config color.ui auto ||
+ test_config color.ui auto ||
cleanup_fail &&
(
colorful colorful.log
'
+test_expect_success TTY 'colors are sent to pager for external commands' '
+ test_config alias.externallog "!git log" &&
+ test_config color.ui auto &&
+ (
+ TERM=vt100 &&
+ export TERM &&
+ test_terminal git -p externallog
+ ) &&
+ colorful paginated.out
+'
+
# Use this helper to make it easy for the caller of your
# terminal-using function to specify whether it should fail.
# If you write
$test_expectation SIMPLEPAGER,TTY "$cmd - default pager is used by default" "
sane_unset PAGER GIT_PAGER &&
- test_might_fail git config --unset core.pager &&
+ test_unconfig core.pager &&
rm -f default_pager_used ||
cleanup_fail &&
$test_expectation TTY "$cmd - PAGER overrides default pager" "
sane_unset GIT_PAGER &&
- test_might_fail git config --unset core.pager &&
+ test_unconfig core.pager &&
rm -f PAGER_used ||
cleanup_fail &&
PAGER=wc &&
export PAGER &&
- git config core.pager 'wc >core.pager_used' &&
+ test_config core.pager 'wc >core.pager_used' &&
$full_command &&
${if_local_config}test -e core.pager_used
"
PAGER=wc &&
stampname=\$(pwd)/core.pager_used &&
export PAGER stampname &&
- git config core.pager 'wc >\"\$stampname\"' &&
+ test_config core.pager 'wc >\"\$stampname\"' &&
mkdir sub &&
(
cd sub &&
rm -f GIT_PAGER_used ||
cleanup_fail &&
- git config core.pager wc &&
+ test_config core.pager wc &&
GIT_PAGER='wc >GIT_PAGER_used' &&
export GIT_PAGER &&
$full_command &&
'git -p apply </dev/null'
test_expect_success TTY 'command-specific pager' '
- unset PAGER GIT_PAGER;
+ sane_unset PAGER GIT_PAGER &&
echo "foo:initial" >expect &&
>actual &&
- git config --unset core.pager &&
- git config pager.log "sed s/^/foo:/ >actual" &&
+ test_unconfig core.pager &&
+ test_config pager.log "sed s/^/foo:/ >actual" &&
test_terminal git log --format=%s -1 &&
test_cmp expect actual
'
test_expect_success TTY 'command-specific pager overrides core.pager' '
- unset PAGER GIT_PAGER;
+ sane_unset PAGER GIT_PAGER &&
echo "foo:initial" >expect &&
>actual &&
- git config core.pager "exit 1"
- git config pager.log "sed s/^/foo:/ >actual" &&
+ test_config core.pager "exit 1"
+ test_config pager.log "sed s/^/foo:/ >actual" &&
test_terminal git log --format=%s -1 &&
test_cmp expect actual
'
GIT_PAGER="sed s/^/foo:/ >actual" && export GIT_PAGER &&
>actual &&
echo "foo:initial" >expect &&
- git config pager.log "exit 1" &&
+ test_config pager.log "exit 1" &&
test_terminal git log --format=%s -1 &&
test_cmp expect actual
'
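+# "git external" is not a built-in; it is provided by the git-external
+# wrapper script set up below and found via --exec-path, so the following
+# tests exercise the pager code path taken for external commands.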
+test_expect_success 'setup external command' '
+ cat >git-external <<-\EOF &&
+ #!/bin/sh
+ git "$@"
+ EOF
+ chmod +x git-external
+'
+
+test_expect_success TTY 'command-specific pager works for external commands' '
+ sane_unset PAGER GIT_PAGER &&
+ echo "foo:initial" >expect &&
+ >actual &&
+ test_config pager.external "sed s/^/foo:/ >actual" &&
+ test_terminal git --exec-path="`pwd`" external log --format=%s -1 &&
+ test_cmp expect actual
+'
+
+test_expect_success TTY 'sub-commands of externals use their own pager' '
+ sane_unset PAGER GIT_PAGER &&
+ echo "foo:initial" >expect &&
+ >actual &&
+ test_config pager.log "sed s/^/foo:/ >actual" &&
+ test_terminal git --exec-path=. external log --format=%s -1 &&
+ test_cmp expect actual
+'
+
+test_expect_success TTY 'external command pagers override sub-commands' '
+ sane_unset PAGER GIT_PAGER &&
+ >expect &&
+ >actual &&
+ test_config pager.external false &&
+ test_config pager.log "sed s/^/log:/ >actual" &&
+ test_terminal git --exec-path=. external log --format=%s -1 &&
+ test_cmp expect actual
+'
+
test_done
git grep -f f -Fi a
"
-test_expect_failure 'git grep -Fi Y<NUL>x a' "
+test_expect_success 'git grep -Fi Y<NUL>x a' "
printf 'YQx' | q_to_nul >f &&
test_must_fail git grep -f f -Fi a
"
git grep -f f a
"
-test_expect_failure 'git grep y<NUL>x a' "
+test_expect_success 'git grep y<NUL>x a' "
printf 'yQx' | q_to_nul >f &&
test_must_fail git grep -f f a
"
Trying simple merge with c2
Trying simple merge with c3
Trying simple merge with c4
-Merge made by octopus.
+Merge made by the 'octopus' strategy.
c2.c | 1 +
c3.c | 1 +
c4.c | 1 +
cat >expected <<\EOF
Already up-to-date with c4
Trying simple merge with c5
-Merge made by octopus.
+Merge made by the 'octopus' strategy.
c5.c | 1 +
1 files changed, 1 insertions(+), 0 deletions(-)
create mode 100644 c5.c
cat >expected <<\EOF
Fast-forwarding to: c1
Trying simple merge with c2
-Merge made by octopus.
+Merge made by the 'octopus' strategy.
c1.c | 1 +
c2.c | 1 +
2 files changed, 2 insertions(+), 0 deletions(-)
test "$mergeinfo" = "/branches/foo:1-10"
'
+test_expect_success 'change svn:mergeinfo multiline' '
+ touch baz &&
+ git add baz &&
+ git commit -m "baz" &&
+ git svn dcommit --mergeinfo="/branches/bar:1-10 /branches/other:3-5,8,10-11"
+'
+
+test_expect_success 'verify svn:mergeinfo multiline' '
+	mergeinfo=$(svn_cmd propget svn:mergeinfo "$svnrepo"/trunk) &&
+ test "$mergeinfo" = "/branches/bar:1-10
+/branches/other:3-5,8,10-11"
+'
+
test_done
--- /dev/null
+#!/bin/sh
+#
+# Copyright (c) 2011 Ray Chen
+#
+
+test_description='git svn test (option --preserve-empty-dirs)
+
+This test uses git to clone a Subversion repository that contains empty
+directories, and checks that corresponding directories are created in the
+local Git repository with placeholder files.'
+
+. ./lib-git-svn.sh
+
+say 'define NO_SVN_TESTS to skip git svn tests'
+GIT_REPO=git-svn-repo
+
+test_expect_success 'initialize source svn repo containing empty dirs' '
+ svn_cmd mkdir -m x "$svnrepo"/trunk &&
+ svn_cmd co "$svnrepo"/trunk "$SVN_TREE" &&
+ (
+ cd "$SVN_TREE" &&
+ mkdir -p 1 2 3/a 3/b 4 5 6 &&
+ echo "First non-empty file" > 2/file1.txt &&
+ echo "Second non-empty file" > 2/file2.txt &&
+ echo "Third non-empty file" > 3/a/file1.txt &&
+ echo "Fourth non-empty file" > 3/b/file1.txt &&
+ svn_cmd add 1 2 3 4 5 6 &&
+ svn_cmd commit -m "initial commit" &&
+
+ mkdir 4/a &&
+ svn_cmd add 4/a &&
+ svn_cmd commit -m "nested empty directory" &&
+ mkdir 4/a/b &&
+ svn_cmd add 4/a/b &&
+ svn_cmd commit -m "deeply nested empty directory" &&
+ mkdir 4/a/b/c &&
+ svn_cmd add 4/a/b/c &&
+ svn_cmd commit -m "really deeply nested empty directory" &&
+ echo "Kill the placeholder file" > 4/a/b/c/foo &&
+ svn_cmd add 4/a/b/c/foo &&
+ svn_cmd commit -m "Regular file to remove placeholder" &&
+
+ svn_cmd del 2/file2.txt &&
+ svn_cmd del 3/b &&
+ svn_cmd commit -m "delete non-last entry in directory" &&
+
+ svn_cmd del 2/file1.txt &&
+ svn_cmd del 3/a &&
+ svn_cmd commit -m "delete last entry in directory" &&
+
+ echo "Conflict file" > 5/.placeholder &&
+ mkdir 6/.placeholder &&
+ svn_cmd add 5/.placeholder 6/.placeholder &&
+ svn_cmd commit -m "Placeholder Namespace conflict"
+ ) &&
+ rm -rf "$SVN_TREE"
+'
+
+test_expect_success 'clone svn repo with --preserve-empty-dirs' '
+ git svn clone "$svnrepo"/trunk --preserve-empty-dirs "$GIT_REPO"
+'
+
+# "$GIT_REPO"/1 should only contain the placeholder file.
+test_expect_success 'directory empty from inception' '
+ test -f "$GIT_REPO"/1/.gitignore &&
+ test $(find "$GIT_REPO"/1 -type f | wc -l) = "1"
+'
+
+# "$GIT_REPO"/2 and "$GIT_REPO"/3 should only contain the placeholder file.
+test_expect_success 'directory empty from subsequent svn commit' '
+ test -f "$GIT_REPO"/2/.gitignore &&
+ test $(find "$GIT_REPO"/2 -type f | wc -l) = "1" &&
+ test -f "$GIT_REPO"/3/.gitignore &&
+ test $(find "$GIT_REPO"/3 -type f | wc -l) = "1"
+'
+
+# No placeholder files should exist in "$GIT_REPO"/4, even though one was
+# generated for every sub-directory at some point in the repo's history.
+test_expect_success 'add entry to previously empty directory' '
+ test $(find "$GIT_REPO"/4 -type f | wc -l) = "1" &&
+ test -f "$GIT_REPO"/4/a/b/c/foo
+'
+
+# The HEAD~2 commit should not have introduced .gitignore placeholder files.
+test_expect_success 'remove non-last entry from directory' '
+ (
+ cd "$GIT_REPO" &&
+ git checkout HEAD~2
+ ) &&
+	! test -f "$GIT_REPO"/2/.gitignore &&
+	! test -f "$GIT_REPO"/3/.gitignore
+'
+
+# After re-cloning the repository with --placeholder-file specified, there
+# should be 5 files named ".placeholder" in the local Git repo.
+test_expect_success 'clone svn repo with --placeholder-file specified' '
+ rm -rf "$GIT_REPO" &&
+ git svn clone "$svnrepo"/trunk --preserve-empty-dirs \
+ --placeholder-file=.placeholder "$GIT_REPO" &&
+ find "$GIT_REPO" -type f -name ".placeholder" &&
+ test $(find "$GIT_REPO" -type f -name ".placeholder" | wc -l) = "5"
+'
+
+# "$GIT_REPO"/5/.placeholder should be a file, and non-empty.
+test_expect_success 'placeholder namespace conflict with file' '
+ test -s "$GIT_REPO"/5/.placeholder
+'
+
+# "$GIT_REPO"/6/.placeholder should be a directory, and the "$GIT_REPO"/6 tree
+# should only contain one file: the placeholder.
+test_expect_success 'placeholder namespace conflict with directory' '
+ test -d "$GIT_REPO"/6/.placeholder &&
+ test -f "$GIT_REPO"/6/.placeholder/.placeholder &&
+ test $(find "$GIT_REPO"/6 -type f | wc -l) = "1"
+'
+
+# Prepare a second set of svn commits to test persistence during rebase.
+test_expect_success 'second set of svn commits and rebase' '
+ svn_cmd co "$svnrepo"/trunk "$SVN_TREE" &&
+ (
+ cd "$SVN_TREE" &&
+ mkdir -p 7 &&
+ echo "This should remove placeholder" > 1/file1.txt &&
+ echo "This should not remove placeholder" > 5/file1.txt &&
+ svn_cmd add 7 1/file1.txt 5/file1.txt &&
+ svn_cmd commit -m "subsequent svn commit for persistence tests"
+ ) &&
+ rm -rf "$SVN_TREE" &&
+ (
+ cd "$GIT_REPO" &&
+ git svn rebase
+ )
+'
+
+# Check that --preserve-empty-dirs and --placeholder-file flag state
+# stays persistent over multiple invocations.
+test_expect_success 'flag persistence during subsequent rebase' '
+ test -f "$GIT_REPO"/7/.placeholder &&
+ test $(find "$GIT_REPO"/7 -type f | wc -l) = "1"
+'
+
+# Check that placeholder files are properly removed when unnecessary,
+# even across multiple invocations.
+test_expect_success 'placeholder list persistence during subsequent rebase' '
+ test -f "$GIT_REPO"/1/file1.txt &&
+ test $(find "$GIT_REPO"/1 -type f | wc -l) = "1" &&
+
+ test -f "$GIT_REPO"/5/file1.txt &&
+ test -f "$GIT_REPO"/5/.placeholder &&
+ test $(find "$GIT_REPO"/5 -type f | wc -l) = "2"
+'
+
+test_done
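The two options exercised by this new test are worth a quick illustration; the following usage sketch is illustrative only (the repository URL and target directory are not taken from the test):

    # Keep otherwise-empty Subversion directories by creating a placeholder
    # file in each of them; the placeholder name defaults to .gitignore.
    git svn clone --preserve-empty-dirs svn://svn.example.com/project/trunk project

    # Use a different placeholder name instead of the default.
    git svn clone --preserve-empty-dirs --placeholder-file=.placeholder \
        svn://svn.example.com/project/trunk project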
An annotated tag without a tagger
EOF
+tag series-A-blob
+from :3
+data <<EOF
+An annotated tag that annotates a blob.
+EOF
+
INPUT_END
test_expect_success \
'A: create pack from stdin' \
test_cmp expect actual
'
+cat >expect <<EOF
+object $(git rev-parse refs/heads/master:file3)
+type blob
+tag series-A-blob
+
+An annotated tag that annotates a blob.
+EOF
+test_expect_success 'A: verify tag/series-A-blob' '
+ git cat-file tag tags/series-A-blob >actual &&
+ test_cmp expect actual
+'
+
cat >expect <<EOF
:2 `git rev-parse --verify master:file2`
:3 `git rev-parse --verify master:file3`
</dev/null &&
test_cmp expect marks.new'
+test_tick
+new_blob=$(echo testing | git hash-object --stdin)
+cat >input <<INPUT_END
+tag series-A-blob-2
+from $(git rev-parse refs/heads/master:file3)
+data <<EOF
+Tag blob by sha1.
+EOF
+
+blob
+mark :6
+data <<EOF
+testing
+EOF
+
+commit refs/heads/new_blob
+committer <> 0 +0000
+data 0
+M 644 :6 new_blob
+#pretend we got sha1 from fast-import
+ls "new_blob"
+
+tag series-A-blob-3
+from $new_blob
+data <<EOF
+Tag new_blob.
+EOF
+INPUT_END
+
+cat >expect <<EOF
+object $(git rev-parse refs/heads/master:file3)
+type blob
+tag series-A-blob-2
+
+Tag blob by sha1.
+object $new_blob
+type blob
+tag series-A-blob-3
+
+Tag new_blob.
+EOF
+
+test_expect_success \
+ 'A: tag blob by sha1' \
+ 'git fast-import <input &&
+ git cat-file tag tags/series-A-blob-2 >actual &&
+ git cat-file tag tags/series-A-blob-3 >>actual &&
+ test_cmp expect actual'
+
test_tick
cat >input <<INPUT_END
commit refs/heads/verify--import-marks
test `git rev-parse master` = `git rev-parse TEMP_TAG^`'
rm -f .git/TEMP_TAG
+git gc 2>/dev/null >/dev/null
+git prune 2>/dev/null >/dev/null
+
+cat >input <<INPUT_END
+commit refs/heads/empty-committer-1
+committer <> $GIT_COMMITTER_DATE
+data <<COMMIT
+empty commit
+COMMIT
+INPUT_END
+test_expect_success 'B: accept empty committer' '
+ git fast-import <input &&
+ out=$(git fsck) &&
+ echo "$out" &&
+ test -z "$out"
+'
+git update-ref -d refs/heads/empty-committer-1 || true
+
+git gc 2>/dev/null >/dev/null
+git prune 2>/dev/null >/dev/null
+
+cat >input <<INPUT_END
+commit refs/heads/empty-committer-2
+committer <a@b.com> $GIT_COMMITTER_DATE
+data <<COMMIT
+empty commit
+COMMIT
+INPUT_END
+test_expect_success 'B: accept and fixup committer with no name' '
+ git fast-import <input &&
+ out=$(git fsck) &&
+ echo "$out" &&
+ test -z "$out"
+'
+git update-ref -d refs/heads/empty-committer-2 || true
+
+git gc 2>/dev/null >/dev/null
+git prune 2>/dev/null >/dev/null
+
+cat >input <<INPUT_END
+commit refs/heads/invalid-committer
+committer Name email> $GIT_COMMITTER_DATE
+data <<COMMIT
+empty commit
+COMMIT
+INPUT_END
+test_expect_success 'B: fail on invalid committer (1)' '
+ test_must_fail git fast-import <input
+'
+git update-ref -d refs/heads/invalid-committer || true
+
+cat >input <<INPUT_END
+commit refs/heads/invalid-committer
+committer Name <e<mail> $GIT_COMMITTER_DATE
+data <<COMMIT
+empty commit
+COMMIT
+INPUT_END
+test_expect_success 'B: fail on invalid committer (2)' '
+ test_must_fail git fast-import <input
+'
+git update-ref -d refs/heads/invalid-committer || true
+
+cat >input <<INPUT_END
+commit refs/heads/invalid-committer
+committer Name <email>> $GIT_COMMITTER_DATE
+data <<COMMIT
+empty commit
+COMMIT
+INPUT_END
+test_expect_success 'B: fail on invalid committer (3)' '
+ test_must_fail git fast-import <input
+'
+git update-ref -d refs/heads/invalid-committer || true
+
+cat >input <<INPUT_END
+commit refs/heads/invalid-committer
+committer Name <email $GIT_COMMITTER_DATE
+data <<COMMIT
+empty commit
+COMMIT
+INPUT_END
+test_expect_success 'B: fail on invalid committer (4)' '
+ test_must_fail git fast-import <input
+'
+git update-ref -d refs/heads/invalid-committer || true
+
+cat >input <<INPUT_END
+commit refs/heads/invalid-committer
+committer Name<email> $GIT_COMMITTER_DATE
+data <<COMMIT
+empty commit
+COMMIT
+INPUT_END
+test_expect_success 'B: fail on invalid committer (5)' '
+ test_must_fail git fast-import <input
+'
+git update-ref -d refs/heads/invalid-committer || true
+
###
### series C
###
git diff-tree --abbrev --raw L^ L >output &&
test_cmp expect output'
+cat >input <<INPUT_END
+blob
+mark :1
+data <<EOF
+the data
+EOF
+
+commit refs/heads/L2
+committer C O Mitter <committer@example.com> 1112912473 -0700
+data <<COMMIT
+init L2
+COMMIT
+M 644 :1 a/b/c
+M 644 :1 a/b/d
+M 644 :1 a/e/f
+
+commit refs/heads/L2
+committer C O Mitter <committer@example.com> 1112912473 -0700
+data <<COMMIT
+update L2
+COMMIT
+C a g
+C a/e g/b
+M 644 :1 g/b/h
+INPUT_END
+
+cat <<EOF >expect
+g/b/f
+g/b/h
+EOF
+
+test_expect_success \
+ 'L: nested tree copy does not corrupt deltas' \
+ 'git fast-import <input &&
+ git ls-tree L2 g/b/ >tmp &&
+	 cut -f 2 <tmp >actual &&
+ test_cmp expect actual &&
+ git fsck `git rev-parse L2`'
+
+git update-ref -d refs/heads/L2
+
###
### series M
###
test_cmp expect io.marks
'
+test_expect_success 'R: feature import-marks-if-exists' '
+ rm -f io.marks &&
+ >expect &&
+
+ git fast-import --export-marks=io.marks <<-\EOF &&
+ feature import-marks-if-exists=not_io.marks
+ EOF
+ test_cmp expect io.marks &&
+
+ blob=$(echo hi | git hash-object --stdin) &&
+
+ echo ":1 $blob" >io.marks &&
+ echo ":1 $blob" >expect &&
+ echo ":2 $blob" >>expect &&
+
+ git fast-import --export-marks=io.marks <<-\EOF &&
+ feature import-marks-if-exists=io.marks
+ blob
+ mark :2
+ data 3
+ hi
+
+ EOF
+ test_cmp expect io.marks &&
+
+ echo ":3 $blob" >>expect &&
+
+ git fast-import --import-marks=io.marks \
+ --export-marks=io.marks <<-\EOF &&
+ feature import-marks-if-exists=not_io.marks
+ blob
+ mark :3
+ data 3
+ hi
+
+ EOF
+ test_cmp expect io.marks &&
+
+ >expect &&
+
+ git fast-import --import-marks-if-exists=not_io.marks \
+ --export-marks=io.marks <<-\EOF
+ feature import-marks-if-exists=io.marks
+ EOF
+ test_cmp expect io.marks
+'
+
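For context, a minimal sketch of the feature being tested (the marks file names are illustrative): "import-marks-if-exists" silently skips a marks file that does not exist yet, whereas plain "import-marks" would make fast-import abort.

    # Succeeds even though maybe-missing.marks is absent; the same stream with
    # "feature import-marks=maybe-missing.marks" would fail instead.
    git fast-import --export-marks=out.marks <<\EOF
    feature import-marks-if-exists=maybe-missing.marks
    EOF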
cat >input << EOF
feature import-marks=marks.out
feature export-marks=marks.new
test $p4time = $gittime
'
+# Rename a file and confirm that the rename is not detected in P4.
+# Rename the new file again with the detectRenames option enabled and confirm
+# that the rename is detected in P4.
+# Rename the new file again adding an extra line, configure a high threshold
+# in detectRenames, and confirm that the rename is not detected in P4.
+# Repeat with a lower threshold and confirm that the rename is detected in P4.
+# (A short configuration sketch follows this test.)
+test_expect_success 'detect renames' '
+ "$GITP4" clone --dest="$git" //depot@all &&
+ test_when_finished cleanup_git &&
+ cd "$git" &&
+ git config git-p4.skipSubmitEditCheck true &&
+
+ git mv file1 file4 &&
+ git commit -a -m "Rename file1 to file4" &&
+ git diff-tree -r -M HEAD &&
+ "$GITP4" submit &&
+ p4 filelog //depot/file4 &&
+ ! p4 filelog //depot/file4 | grep -q "branch from" &&
+
+ git mv file4 file5 &&
+ git commit -a -m "Rename file4 to file5" &&
+ git diff-tree -r -M HEAD &&
+ git config git-p4.detectRenames true &&
+ "$GITP4" submit &&
+ p4 filelog //depot/file5 &&
+ p4 filelog //depot/file5 | grep -q "branch from //depot/file4" &&
+
+ git mv file5 file6 &&
+ echo update >>file6 &&
+ git add file6 &&
+ git commit -a -m "Rename file5 to file6 with changes" &&
+ git diff-tree -r -M HEAD &&
+ level=$(git diff-tree -r -M HEAD | sed 1d | cut -f1 | cut -d" " -f5 | sed "s/R0*//") &&
+ test -n "$level" && test "$level" -gt 0 && test "$level" -lt 98 &&
+ git config git-p4.detectRenames $((level + 2)) &&
+ "$GITP4" submit &&
+ p4 filelog //depot/file6 &&
+ ! p4 filelog //depot/file6 | grep -q "branch from" &&
+
+ git mv file6 file7 &&
+ echo update >>file7 &&
+ git add file7 &&
+ git commit -a -m "Rename file6 to file7 with changes" &&
+ git diff-tree -r -M HEAD &&
+ level=$(git diff-tree -r -M HEAD | sed 1d | cut -f1 | cut -d" " -f5 | sed "s/R0*//") &&
+ test -n "$level" && test "$level" -gt 2 && test "$level" -lt 100 &&
+ git config git-p4.detectRenames $((level - 2)) &&
+ "$GITP4" submit &&
+ p4 filelog //depot/file7 &&
+ p4 filelog //depot/file7 | grep -q "branch from //depot/file6"
+'
+
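As referenced in the comment above, a short configuration sketch (the threshold value is illustrative):

    # Enable rename detection for "git-p4 submit" ...
    git config git-p4.detectRenames true
    # ... or require at least 50% similarity before a rename is sent to p4.
    git config git-p4.detectRenames 50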
+# Copy a file and confirm that the copy is not detected in P4.
+# Copy a file with the detectCopies option enabled and confirm that the copy
+# is still not detected in P4 (an unmodified file is not considered as a copy
+# source without --find-copies-harder).
+# Modify and copy a file with the detectCopies option enabled and confirm that
+# the copy is detected in P4.
+# Copy a file with the detectCopies and detectCopiesHarder options enabled and
+# confirm that the copy is detected in P4.
+# Modify and copy a file, configure a higher threshold in detectCopies, and
+# confirm that the copy is not detected in P4.
+# Modify and copy a file, configure a lower threshold in detectCopies, and
+# confirm that the copy is detected in P4.
+# (A short configuration sketch follows this test.)
+test_expect_success 'detect copies' '
+ "$GITP4" clone --dest="$git" //depot@all &&
+ test_when_finished cleanup_git &&
+ cd "$git" &&
+ git config git-p4.skipSubmitEditCheck true &&
+
+ cp file2 file8 &&
+ git add file8 &&
+ git commit -a -m "Copy file2 to file8" &&
+ git diff-tree -r -C HEAD &&
+ "$GITP4" submit &&
+ p4 filelog //depot/file8 &&
+ ! p4 filelog //depot/file8 | grep -q "branch from" &&
+
+ cp file2 file9 &&
+ git add file9 &&
+ git commit -a -m "Copy file2 to file9" &&
+ git diff-tree -r -C HEAD &&
+ git config git-p4.detectCopies true &&
+ "$GITP4" submit &&
+ p4 filelog //depot/file9 &&
+ ! p4 filelog //depot/file9 | grep -q "branch from" &&
+
+ echo "file2" >>file2 &&
+ cp file2 file10 &&
+ git add file2 file10 &&
+ git commit -a -m "Modify and copy file2 to file10" &&
+ git diff-tree -r -C HEAD &&
+ "$GITP4" submit &&
+ p4 filelog //depot/file10 &&
+ p4 filelog //depot/file10 | grep -q "branch from //depot/file" &&
+
+ cp file2 file11 &&
+ git add file11 &&
+ git commit -a -m "Copy file2 to file11" &&
+ git diff-tree -r -C --find-copies-harder HEAD &&
+ src=$(git diff-tree -r -C --find-copies-harder HEAD | sed 1d | cut -f2) &&
+ test "$src" = file10 &&
+ git config git-p4.detectCopiesHarder true &&
+ "$GITP4" submit &&
+ p4 filelog //depot/file11 &&
+ p4 filelog //depot/file11 | grep -q "branch from //depot/file" &&
+
+ cp file2 file12 &&
+ echo "some text" >>file12 &&
+ git add file12 &&
+ git commit -a -m "Copy file2 to file12 with changes" &&
+ git diff-tree -r -C --find-copies-harder HEAD &&
+ level=$(git diff-tree -r -C --find-copies-harder HEAD | sed 1d | cut -f1 | cut -d" " -f5 | sed "s/C0*//") &&
+ test -n "$level" && test "$level" -gt 0 && test "$level" -lt 98 &&
+ src=$(git diff-tree -r -C --find-copies-harder HEAD | sed 1d | cut -f2) &&
+ test "$src" = file10 &&
+ git config git-p4.detectCopies $((level + 2)) &&
+ "$GITP4" submit &&
+ p4 filelog //depot/file12 &&
+ ! p4 filelog //depot/file12 | grep -q "branch from" &&
+
+ cp file2 file13 &&
+ echo "different text" >>file13 &&
+ git add file13 &&
+ git commit -a -m "Copy file2 to file13 with changes" &&
+ git diff-tree -r -C --find-copies-harder HEAD &&
+ level=$(git diff-tree -r -C --find-copies-harder HEAD | sed 1d | cut -f1 | cut -d" " -f5 | sed "s/C0*//") &&
+ test -n "$level" && test "$level" -gt 2 && test "$level" -lt 100 &&
+ src=$(git diff-tree -r -C --find-copies-harder HEAD | sed 1d | cut -f2) &&
+ test "$src" = file10 &&
+ git config git-p4.detectCopies $((level - 2)) &&
+ "$GITP4" submit &&
+ p4 filelog //depot/file13 &&
+ p4 filelog //depot/file13 | grep -q "branch from //depot/file"
+'
+
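And the corresponding sketch for copy detection (again, the values are illustrative):

    # Enable copy detection; detectCopiesHarder also considers unmodified
    # files as copy sources, which can be noticeably slower.
    git config git-p4.detectCopies true
    git config git-p4.detectCopiesHarder true
    # Alternatively, give detectCopies a similarity threshold in percent.
    git config git-p4.detectCopies 60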
+# Create a simple branch structure in the P4 depot to check that it is
+# cloned correctly.
+test_expect_success 'add simple p4 branches' '
+ cd "$cli" &&
+ mkdir branch1 &&
+ cd branch1 &&
+ echo file1 >file1 &&
+ echo file2 >file2 &&
+ p4 add file1 file2 &&
+ p4 submit -d "branch1" &&
+ p4 integrate //depot/branch1/... //depot/branch2/... &&
+ p4 submit -d "branch2" &&
+ echo file3 >file3 &&
+ p4 add file3 &&
+ p4 submit -d "add file3 in branch1" &&
+ p4 open file2 &&
+ echo update >>file2 &&
+ p4 submit -d "update file2 in branch1" &&
+ p4 integrate //depot/branch1/... //depot/branch3/... &&
+ p4 submit -d "branch3" &&
+ cd "$TRASH_DIRECTORY"
+'
+
+# Configure the branches through git config and clone them.
+# All files are tested to make sure the branches were cloned correctly.
+# Finally, make an update to branch1 on the P4 side to check that it is
+# imported correctly by git-p4.
+# (A sketch of the branchList mapping follows this test.)
+test_expect_success 'git-p4 clone simple branches' '
+ test_when_finished cleanup_git &&
+ test_create_repo "$git" &&
+ cd "$git" &&
+ git config git-p4.branchList branch1:branch2 &&
+ git config --add git-p4.branchList branch1:branch3 &&
+ "$GITP4" clone --dest=. --detect-branches //depot@all &&
+ git log --all --graph --decorate --stat &&
+ git reset --hard p4/depot/branch1 &&
+ test -f file1 &&
+ test -f file2 &&
+ test -f file3 &&
+ grep -q update file2 &&
+ git reset --hard p4/depot/branch2 &&
+ test -f file1 &&
+ test -f file2 &&
+ test ! -f file3 &&
+ ! grep -q update file2 &&
+ git reset --hard p4/depot/branch3 &&
+ test -f file1 &&
+ test -f file2 &&
+ test -f file3 &&
+ grep -q update file2 &&
+ cd "$cli" &&
+ cd branch1 &&
+ p4 edit file2 &&
+ echo file2_ >>file2 &&
+ p4 submit -d "update file2 in branch1" &&
+ cd "$git" &&
+ git reset --hard p4/depot/branch1 &&
+ "$GITP4" rebase &&
+ grep -q file2_ file2
+'
+
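As noted in the comment above, each git-p4.branchList entry is a "source:destination" pair naming the P4 branch a new branch was integrated from; a sketch using the same illustrative branch names:

    # Declare that branch2 and branch3 were both created from branch1,
    # then let git-p4 import them as separate Git branches.
    git config git-p4.branchList branch1:branch2
    git config --add git-p4.branchList branch1:branch3
    git-p4 clone --dest=. --detect-branches //depot@all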
test_expect_success 'shutdown' '
pid=`pgrep -f p4d` &&
test -n "$pid" &&
git update-index --add "--chmod=$@"
}
+# Unset a configuration variable, but don't fail if it doesn't exist.
+test_unconfig () {
+ git config --unset-all "$@"
+ config_status=$?
+ case "$config_status" in
+ 5) # ok, nothing to unset
+ config_status=0
+ ;;
+ esac
+ return $config_status
+}
+
+# Set git config, automatically unsetting it after the test is over.
+test_config () {
+ test_when_finished "test_unconfig '$1'" &&
+ git config "$@"
+}
+
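A brief usage sketch for the new helpers (the test name and configuration variable are illustrative); anything set with test_config is unset automatically when the test finishes, so later tests start from a clean state:

    test_expect_success 'uses a throw-away configuration' '
        test_config core.whitespace blank-at-eol &&
        echo blank-at-eol >expect &&
        git config core.whitespace >actual &&
        test_cmp expect actual
    '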
# Use test_set_prereq to tell that a particular prerequisite is available.
# The prerequisite can later be checked for in two ways:
#
#include "refs.h"
#include "branch.h"
#include "url.h"
+#include "submodule.h"
/* rsync support */
flags & TRANSPORT_PUSH_MIRROR,
flags & TRANSPORT_PUSH_FORCE);
+ if ((flags & TRANSPORT_RECURSE_SUBMODULES_CHECK) && !is_bare_repository()) {
+ struct ref *ref = remote_refs;
+ for (; ref; ref = ref->next)
+ if (!is_null_sha1(ref->new_sha1) &&
+		    check_submodule_needs_pushing(ref->new_sha1, transport->remote->name))
+ die("There are unpushed submodules, aborting.");
+ }
+
push_ret = transport->push_refs(transport, remote_refs, flags);
err = push_had_errors(remote_refs);
ret = push_ret | err;
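A usage sketch of the behaviour the new TRANSPORT_RECURSE_SUBMODULES_CHECK flag implements (the remote and branch names are illustrative): in check mode the push of the superproject is refused while any submodule commit it references has not been pushed out.

    # Refuse to push the superproject while submodule commits it needs
    # are still missing from the submodules' remotes.
    git push --recurse-submodules=check origin master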
#define TRANSPORT_PUSH_MIRROR 8
#define TRANSPORT_PUSH_PORCELAIN 16
#define TRANSPORT_PUSH_SET_UPSTREAM 32
+#define TRANSPORT_RECURSE_SUBMODULES_CHECK 64
#define TRANSPORT_SUMMARY_WIDTH (2 * DEFAULT_ABBREV + 3)
static const char *color(int slot, struct wt_status *s)
{
- const char *c = s->use_color > 0 ? s->color_palette[slot] : "";
+ const char *c = "";
+ if (want_color(s->use_color))
+ c = s->color_palette[slot];
if (slot == WT_STATUS_ONBRANCH && color_is_nil(c))
c = s->color_palette[WT_STATUS_HEADER];
return c;
* will have checked isatty on stdout).
*/
if (s->fp != stdout)
- DIFF_OPT_CLR(&rev.diffopt, COLOR_DIFF);
+ rev.diffopt.use_color = 0;
run_diff_index(&rev, 1);
}
char const *line;
long size;
long idx;
+ long len1, len2;
} xdlclass_t;
typedef struct s_xdlclassifier {
long hsize;
xdlclass_t **rchash;
chastore_t ncha;
+ xdlclass_t **rcrecs;
+ long alloc;
long count;
long flags;
} xdlclassifier_t;
static int xdl_init_classifier(xdlclassifier_t *cf, long size, long flags);
static void xdl_free_classifier(xdlclassifier_t *cf);
-static int xdl_classify_record(xdlclassifier_t *cf, xrecord_t **rhash, unsigned int hbits,
- xrecord_t *rec);
-static int xdl_prepare_ctx(mmfile_t *mf, long narec, xpparam_t const *xpp,
+static int xdl_classify_record(unsigned int pass, xdlclassifier_t *cf, xrecord_t **rhash,
+ unsigned int hbits, xrecord_t *rec);
+static int xdl_prepare_ctx(unsigned int pass, mmfile_t *mf, long narec, xpparam_t const *xpp,
xdlclassifier_t *cf, xdfile_t *xdf);
static void xdl_free_ctx(xdfile_t *xdf);
static int xdl_clean_mmatch(char const *dis, long i, long s, long e);
-static int xdl_cleanup_records(xdfile_t *xdf1, xdfile_t *xdf2);
+static int xdl_cleanup_records(xdlclassifier_t *cf, xdfile_t *xdf1, xdfile_t *xdf2);
static int xdl_trim_ends(xdfile_t *xdf1, xdfile_t *xdf2);
-static int xdl_optimize_ctxs(xdfile_t *xdf1, xdfile_t *xdf2);
+static int xdl_optimize_ctxs(xdlclassifier_t *cf, xdfile_t *xdf1, xdfile_t *xdf2);
}
memset(cf->rchash, 0, cf->hsize * sizeof(xdlclass_t *));
+ cf->alloc = size;
+ if (!(cf->rcrecs = (xdlclass_t **) xdl_malloc(cf->alloc * sizeof(xdlclass_t *)))) {
+
+ xdl_free(cf->rchash);
+ xdl_cha_free(&cf->ncha);
+ return -1;
+ }
+
cf->count = 0;
return 0;
static void xdl_free_classifier(xdlclassifier_t *cf) {
+ xdl_free(cf->rcrecs);
xdl_free(cf->rchash);
xdl_cha_free(&cf->ncha);
}
-static int xdl_classify_record(xdlclassifier_t *cf, xrecord_t **rhash, unsigned int hbits,
- xrecord_t *rec) {
+static int xdl_classify_record(unsigned int pass, xdlclassifier_t *cf, xrecord_t **rhash,
+ unsigned int hbits, xrecord_t *rec) {
long hi;
char const *line;
xdlclass_t *rcrec;
+ xdlclass_t **rcrecs;
line = rec->ptr;
hi = (long) XDL_HASHLONG(rec->ha, cf->hbits);
return -1;
}
rcrec->idx = cf->count++;
+ if (cf->count > cf->alloc) {
+ cf->alloc *= 2;
+ if (!(rcrecs = (xdlclass_t **) xdl_realloc(cf->rcrecs, cf->alloc * sizeof(xdlclass_t *)))) {
+
+ return -1;
+ }
+ cf->rcrecs = rcrecs;
+ }
+ cf->rcrecs[rcrec->idx] = rcrec;
rcrec->line = line;
rcrec->size = rec->size;
rcrec->ha = rec->ha;
+ rcrec->len1 = rcrec->len2 = 0;
rcrec->next = cf->rchash[hi];
cf->rchash[hi] = rcrec;
}
+ (pass == 1) ? rcrec->len1++ : rcrec->len2++;
+
rec->ha = (unsigned long) rcrec->idx;
hi = (long) XDL_HASHLONG(rec->ha, hbits);
}
-static int xdl_prepare_ctx(mmfile_t *mf, long narec, xpparam_t const *xpp,
+static int xdl_prepare_ctx(unsigned int pass, mmfile_t *mf, long narec, xpparam_t const *xpp,
xdlclassifier_t *cf, xdfile_t *xdf) {
unsigned int hbits;
long nrec, hsize, bsize;
recs[nrec++] = crec;
if (!(xpp->flags & XDF_HISTOGRAM_DIFF) &&
- xdl_classify_record(cf, rhash, hbits, crec) < 0)
+ xdl_classify_record(pass, cf, rhash, hbits, crec) < 0)
goto abort;
}
}
return -1;
}
- if (xdl_prepare_ctx(mf1, enl1, xpp, &cf, &xe->xdf1) < 0) {
+ if (xdl_prepare_ctx(1, mf1, enl1, xpp, &cf, &xe->xdf1) < 0) {
xdl_free_classifier(&cf);
return -1;
}
- if (xdl_prepare_ctx(mf2, enl2, xpp, &cf, &xe->xdf2) < 0) {
+ if (xdl_prepare_ctx(2, mf2, enl2, xpp, &cf, &xe->xdf2) < 0) {
xdl_free_ctx(&xe->xdf1);
xdl_free_classifier(&cf);
return -1;
}
- if (!(xpp->flags & XDF_HISTOGRAM_DIFF))
- xdl_free_classifier(&cf);
-
if (!(xpp->flags & XDF_PATIENCE_DIFF) &&
!(xpp->flags & XDF_HISTOGRAM_DIFF) &&
- xdl_optimize_ctxs(&xe->xdf1, &xe->xdf2) < 0) {
+ xdl_optimize_ctxs(&cf, &xe->xdf1, &xe->xdf2) < 0) {
xdl_free_ctx(&xe->xdf2);
xdl_free_ctx(&xe->xdf1);
return -1;
}
+ if (!(xpp->flags & XDF_HISTOGRAM_DIFF))
+ xdl_free_classifier(&cf);
+
return 0;
}
* matches on the other file. Also, lines that have multiple matches
 * might potentially be discarded if they appear in a run of discardable.
*/
-static int xdl_cleanup_records(xdfile_t *xdf1, xdfile_t *xdf2) {
- long i, nm, rhi, nreff, mlim;
- unsigned long hav;
+static int xdl_cleanup_records(xdlclassifier_t *cf, xdfile_t *xdf1, xdfile_t *xdf2) {
+ long i, nm, nreff;
xrecord_t **recs;
- xrecord_t *rec;
+ xdlclass_t *rcrec;
char *dis, *dis1, *dis2;
if (!(dis = (char *) xdl_malloc(xdf1->nrec + xdf2->nrec + 2))) {
dis1 = dis;
dis2 = dis1 + xdf1->nrec + 1;
- if ((mlim = xdl_bogosqrt(xdf1->nrec)) > XDL_MAX_EQLIMIT)
- mlim = XDL_MAX_EQLIMIT;
for (i = xdf1->dstart, recs = &xdf1->recs[xdf1->dstart]; i <= xdf1->dend; i++, recs++) {
- hav = (*recs)->ha;
- rhi = (long) XDL_HASHLONG(hav, xdf2->hbits);
- for (nm = 0, rec = xdf2->rhash[rhi]; rec; rec = rec->next)
- if (rec->ha == hav && ++nm == mlim)
- break;
- dis1[i] = (nm == 0) ? 0: (nm >= mlim) ? 2: 1;
+ rcrec = cf->rcrecs[(*recs)->ha];
+ nm = rcrec ? rcrec->len2 : 0;
+ dis1[i] = (nm == 0) ? 0: 1;
}
- if ((mlim = xdl_bogosqrt(xdf2->nrec)) > XDL_MAX_EQLIMIT)
- mlim = XDL_MAX_EQLIMIT;
for (i = xdf2->dstart, recs = &xdf2->recs[xdf2->dstart]; i <= xdf2->dend; i++, recs++) {
- hav = (*recs)->ha;
- rhi = (long) XDL_HASHLONG(hav, xdf1->hbits);
- for (nm = 0, rec = xdf1->rhash[rhi]; rec; rec = rec->next)
- if (rec->ha == hav && ++nm == mlim)
- break;
- dis2[i] = (nm == 0) ? 0: (nm >= mlim) ? 2: 1;
+ rcrec = cf->rcrecs[(*recs)->ha];
+ nm = rcrec ? rcrec->len1 : 0;
+ dis2[i] = (nm == 0) ? 0: 1;
}
for (nreff = 0, i = xdf1->dstart, recs = &xdf1->recs[xdf1->dstart];
}
-static int xdl_optimize_ctxs(xdfile_t *xdf1, xdfile_t *xdf2) {
+static int xdl_optimize_ctxs(xdlclassifier_t *cf, xdfile_t *xdf1, xdfile_t *xdf2) {
if (xdl_trim_ends(xdf1, xdf2) < 0 ||
- xdl_cleanup_records(xdf1, xdf2) < 0) {
+ xdl_cleanup_records(cf, xdf1, xdf2) < 0) {
return -1;
}