Chris Shoemaker <c.shoemaker@cox.net>
Chris Wright <chrisw@sous-sol.org> <chrisw@osdl.org>
Cord Seele <cowose@gmail.com> <cowose@googlemail.com>
+Christian Couder <chriscool@tuxfamily.org> <christian.couder@gmail.com>
Christian Stimming <stimming@tuhh.de> <chs@ckiste.goetheallee>
Csaba Henk <csaba@gluster.com> <csaba@lowlife.hu>
Dan Johnson <computerdruid@gmail.com>
* "git status" learned to suggest "merge --abort" during a conflicted
merge, just like it already suggests "rebase --abort" during a
conflicted rebase.
- (merge b0a61ab mm/status-suggest-merge-abort later to maint).
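For example, a conflicted merge can now be abandoned by following the new hint (a minimal sketch; "topic" is a placeholder branch name):

------------------------
$ git merge topic      # stops with conflicts
$ git status           # the hints now include "git merge --abort"
$ git merge --abort    # give up and go back to the pre-merge state
------------------------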
* "git jump" script (in contrib/) has been updated a bit.
(merge a91e692 jk/git-jump later to maint).
* "git push" and "git clone" learned to give better progress meters
to the end user who is waiting on the terminal.
+ * In "git log --decorate" output, the tip of the current branch is
+ shown as "HEAD -> name" (where "name" is the name of the branch);
+ the arrow is now painted in the same color as "HEAD", not in the
+ color for commits.
+
+ * "git format-patch" learned format.from configuration variable to
+ specify the default settings for its "--from" option.
+
+ * "git am -3" calls "git merge-recursive" when it needs to fall back
+ to a three-way merge; this call has been turned into an internal
+ subroutine call instead of spawning a separate subprocess.
+
Performance, Internal Implementation, Development Support etc.
* The .c/.h sources are marked as such in our .gitattributes file so
that "git diff -W" and friends would work better.
- (merge e82675a rs/help-c-source-with-gitattributes later to maint).
* Code clean-up to avoid using a variable string that compilers may
feel untrustable as printf-style format given to the write_file()
helper function.
* Existing autoconf generated test for the need to link with pthread
library did not check all the functions from pthread libraries;
recent FreeBSD has some functions in libc but not others, and we
mistakenly thought linking with libc is enough when it is not.
- (merge a9b02de ew/autoconf-pthread later to maint).
* When "git fsck" reports a broken link (e.g. a tree object contains
a blob that does not exist), both containing object and the object
the containing object from existing refs (e.g. "HEAD~24^2:file.txt").
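A quick way to exercise the new option (a sketch; it assumes a repository whose object database already has a broken link):

------------------------
$ git fsck --name-objects   # reports reachable names such as "HEAD~24^2:file.txt"
                            # for containing objects, next to their object names
------------------------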
* Allow http daemon tests in Travis CI tests.
- (merge d9d1426 ls/travis-enable-httpd-tests later to maint).
* Makefile assumed that -lrt is always available on platforms that
want to use clock_gettime() and CLOCK_MONOTONIC, which is not a
safe assumption (recent Mac OS X, for example, has no separate
librt); this has been corrected.
* Users of the parse_options_concat() API function need to allocate
extra slots in advance and fill them with OPT_END() when they want
to decide the set of supported options dynamically, which makes the
code error-prone and hard to read. This has been corrected by tweaking
the API to allocate and return a new copy of "struct option" array.
- (merge 023ff39 jk/parse-options-concat later to maint).
* "git fetch" exchanges batched have/ack messages between the sender
and the receiver, initially doubling every time and then falling
repository. The internal mechanism learned to grow the window size
more aggressively when working with the "smart http" transport.
+ * Tests for "git svn" have been taught to reuse the lib-httpd test
+ infrastructure when testing the subversion integration that
+ interacts with subversion repositories served over the http://
+ protocol.
+ (merge a8a5d25 ew/git-svn-http-tests later to maint).
+
+ * "git pack-objects" has a few options that tell it not to pack
+ objects found in certain packfiles, which require it to scan .idx
+ files of all available packs. The codepaths involved in these
+ operations have been optimized for a common case of not having any
+ non-local pack and/or any .kept pack.
+
+ * The t3700 test about "add --chmod=-x" has been made a bit more
+ robust and generally cleaned up.
+ (merge 766cdc4 ib/t3700-add-chmod-x-updates later to maint).
+
+ * The build procedure learned a PAGER_ENV knob that lists the default
+ environment variable settings to export for popular pagers. This
+ mechanism is used on FreeBSD to give MORE the same default as LESS.
+ (merge 995bc22 ew/build-time-pager-tweaks later to maint).
+
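As a concrete illustration, a builder could override the knob on the make command line (a sketch; the value shown mirrors the FreeBSD default mentioned above):

------------------------
$ make PAGER_ENV='LESS=FRX LV=-c MORE=FRX'
------------------------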
Also contains various documentation updates and code clean-ups.
"file" did not appear in the current commit. When "file" was
created by renaming an existing file (but the change has not been
committed), this restriction was unnecessarily tight.
- (merge c66b470 mh/blame-worktree later to maint).
* "git add -N dir/file && git write-tree" produced an incorrect tree
when there are other paths in the same directory that sort after
"file".
- (merge 6d6a782 nd/cache-tree-ita later to maint).
* "git fetch http://user:pass@host/repo..." scrubbed the userinfo
part, but "git push" didn't.
- (merge 68f3c07 jk/push-scrub-url later to maint).
* "git merge" with renormalization did not work well with
merge-recursive, due to "safer crlf" conversion kicking in when it
* The use of strbuf in "git rm" to build filename to remove was a bit
suboptimal, which has been fixed.
- (merge deb8e15 rs/rm-strbuf-optim later to maint).
* An age-old bug that caused "git diff --ignore-space-at-eol" to
misbehave has been fixed.
- (merge 044fb19 js/ignore-space-at-eol later to maint).
* "git notes merge" had a code to see if a path exists (and fails if
it does) and then open the path for writing (when it doesn't).
Replace it with open with O_EXCL.
- (merge deb9c15 rs/notes-merge-no-toctou later to maint).
* "git pack-objects" and "git index-pack" mostly operate with off_t
when talking about the offset of objects in a packfile, but there
were a handful of places that used "unsigned long" to hold that
value, leading to an unintended truncation.
- (merge ec9d224 nd/pack-ofs-4gb-limit later to maint).
* Recent update to "git daemon" tries to enable the socket-level
KEEPALIVE, but when it is spawned via inetd, the standard input
file descriptor may not necessarily be connected to a socket.
Suppress an ENOTSOCK error from setsockopt().
- (merge fab6027 ew/daemon-socket-keepalive later to maint).
* Recent FreeBSD stopped making perl available at /usr/bin/perl;
switch the built-in default path to /usr/local/bin/perl on
not-too-ancient FreeBSD releases.
- (merge 259f22a ew/find-perl-on-freebsd-in-local later to maint).
* "git commit --help" said "--no-verify" is only about skipping the
pre-commit hook, and failed to say that it also skipped the
commit-msg hook.
- (merge def480f os/no-verify-skips-commit-msg-too later to maint).
* "git merge" in Git v2.9 was taught to forbid merging an unrelated
lines of history by default, but that is exactly the kind of thing
the "--rejoin" mode of "git subtree" (in contrib/) wants to do.
"git subtree" has been taught to use the "--allow-unrelated-histories"
option to override the default.
- (merge 0f12c7d da/subtree-2.9-regression later to maint).
* The build procedure for "git persistent-https" helper (in contrib/)
has been updated so that it can be built with more recent versions
of Go.
- (merge accb613 pm/build-persistent-https-with-recent-go later to maint).
* There is an optimization used in "git diff $treeA $treeB" to borrow
an already checked-out copy in the working tree when it is known to
be the same as the blob being compared, expecting that open/mmap of
such a file is faster than reading it from the object store, which
involves inflating and applying delta. This however kicked in even
when the checked-out copy needs to go through the convert-to-git
conversion (including the clean filter), which defeats the whole
point of the optimization. The optimization has been disabled when
the conversion is necessary.
- (merge 06dec43 jk/diff-do-not-reuse-wtf-needs-cleaning later to maint).
* "git -c grep.patternType=extended log --basic-regexp" misbehaved
because the internal API to access the grep machinery was not
designed well.
- (merge 8465541 jc/grep-commandline-vs-configuration later to maint).
+
+ * Windows port was failing some tests in t4130, due to the lack of
+ inode numbers in the values returned by its lstat(2) emulation.
+
+ * The reflog output format is documented better, and a new format
+ --date=unix to report the seconds-since-epoch (without timezone)
+ has been added.
+ (merge 442f6fd jk/reflog-date later to maint).
+
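For example (both commands accept the new value wherever --date is understood):

------------------------
$ git reflog --date=unix -3   # last three reflog entries, seconds since epoch
$ git log -g --date=unix -1   # the same data via the log machinery
------------------------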
+ * "git difftool <paths>..." started in a subdirectory failed to
+ interpret the paths relative to that directory, which has been
+ fixed.
+ (merge 32b8c58 jk/difftool-in-subdir later to maint).
+
+ * The characters in the label shown for tags/refs for commits in
+ "gitweb" output are now properly escaped for proper HTML output.
+
+ * FreeBSD can lie when asked the mtime of a directory, which made
+ the untracked cache code fall back to a slow path, which in turn
+ caused tests in t7063 to fail because they wanted to verify the
+ behaviour of the fast path.
+
+ * Squelch compiler warnings for nedmalloc (in compat/) library.
+
+ * A small memory leak in the command line parsing of "git blame"
+ has been plugged.
+
+ * The API documentation for hashmap was unclear if hashmap_entry
+ can be safely discarded without any other consideration. State
+ that it is safe to do so.
+
+ * Not-so-recent rewrite of "git am" that started making internal
+ calls into the commit machinery had an unintended regression, in
+ that no matter how many seconds it took to apply many patches, the
+ committer timestamps of the resulting commits were all the same.
+ (merge 4d9c7e6 jk/reset-ident-time-per-commit later to maint).
+
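One way to observe the fix (a sketch; the mailbox path is a placeholder):

------------------------
$ git am ./patches/*.patch
$ git log --format='%h %cI' -5   # committer dates now advance from patch to patch
------------------------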
+ * "git push --force-with-lease" already had enough logic to allow
+ ensuring that such a push results in creation of a ref (i.e. the
+ receiving end did not have another push from sideways that would be
+ discarded by our force-pushing), but didn't expose this possibility
+ to the users. It does so now.
+ (merge 9eed4f3 jk/push-force-with-lease-creation later to maint).
* Other minor clean-ups and documentation updates
- (merge e51b0df pb/commit-editmsg-path later to maint).
- (merge b333d0d jk/send-pack-stdio later to maint).
- (merge fcf0fe9 lf/sideband-returns-void later to maint).
- (merge c2691e2 ah/unpack-trees-advice-messages later to maint).
- (merge c61b2af lf/recv-sideband-cleanup later to maint).
- (merge 31471ba rs/use-strbuf-addbuf later to maint).
- (merge 503e224 nd/test-helpers later to maint).
- (merge 16726cf jc/doc-diff-filter-exclude later to maint).
- (merge fd2e7da rs/worktree-use-strbuf-absolute-path later to maint).
- (merge 406621f sb/submodule-deinit-all later to maint).
- (merge 55cbe18 rs/submodule-config-code-cleanup later to maint).
- (merge 280abfd sb/pack-protocol-doc-nak later to maint).
* xdiff code we use to generate diffs is not prepared to handle
extremely large files. It uses "int" in many places, which can
overflow if we have a very large number of lines or even bytes in
- our input files, for example. Cap the input size to soemwhere
+ our input files, for example. Cap the input size to somewhere
around 1GB for now.
* Some protocols (like git-remote-ext) can execute arbitrary code
found in the URL. The URLs that submodules use may come from
arbitrary sources (e.g., .gitmodules files in a remote repository),
and can hurt those who blindly enable recursive fetch. Restrict the
allowed protocols to well known and safe ones.
* A test that unconditionally used "mktemp" learned that the command
is not necessarily available everywhere.
+ * "git blame file" allowed the lineage of lines in the uncommitted,
+ unadded contents of "file" to be inspected, but it refused when
+ "file" did not appear in the current commit. When "file" was
+ created by renaming an existing file (but the change has not been
+ committed), this restriction was unnecessarily tight.
+
+ * "git add -N dir/file && git write-tree" produced an incorrect tree
+ when there are other paths in the same directory that sort after
+ "file".
+
+ * "git fetch http://user:pass@host/repo..." scrubbed the userinfo
+ part, but "git push" didn't.
+
+ * An age-old bug that caused "git diff --ignore-space-at-eol" to
+ misbehave has been fixed.
+
+ * "git notes merge" had a code to see if a path exists (and fails if
+ it does) and then open the path for writing (when it doesn't).
+ Replace it with open with O_EXCL.
+
+ * "git pack-objects" and "git index-pack" mostly operate with off_t
+ when talking about the offset of objects in a packfile, but there
+ were a handful of places that used "unsigned long" to hold that
+ value, leading to an unintended truncation.
+
+ * Recent update to "git daemon" tries to enable the socket-level
+ KEEPALIVE, but when it is spawned via inetd, the standard input
+ file descriptor may not necessarily be connected to a socket.
+ Suppress an ENOTSOCK error from setsockopt().
+
+ * Recent FreeBSD stopped making perl available at /usr/bin/perl;
+ switch the built-in default path to /usr/local/bin/perl on
+ not-too-ancient FreeBSD releases.
+
+ * "git status" learned to suggest "merge --abort" during a conflicted
+ merge, just like it already suggests "rebase --abort" during a
+ conflicted rebase.
+
+ * The .c/.h sources are marked as such in our .gitattributes file so
+ that "git diff -W" and friends would work better.
+
+ * Existing autoconf generated test for the need to link with pthread
+ library did not check all the functions from pthread libraries;
+ recent FreeBSD has some functions in libc but not others, and we
+ mistakenly thought linking with libc is enough when it is not.
+
+ * Allow http daemon tests in Travis CI tests.
+
+ * Users of the parse_options_concat() API function need to allocate
+ extra slots in advance and fill them with OPT_END() when they want
+ to decide the set of supported options dynamically, which makes the
+ code error-prone and hard to read. This has been corrected by tweaking
+ the API to allocate and return a new copy of "struct option" array.
+
+ * The use of strbuf in "git rm" to build filename to remove was a bit
+ suboptimal, which has been fixed.
+
+ * "git commit --help" said "--no-verify" is only about skipping the
+ pre-commit hook, and failed to say that it also skipped the
+ commit-msg hook.
+
+ * "git merge" in Git v2.9 was taught to forbid merging an unrelated
+ lines of history by default, but that is exactly the kind of thing
+ the "--rejoin" mode of "git subtree" (in contrib/) wants to do.
+ "git subtree" has been taught to use the "--allow-unrelated-histories"
+ option to override the default.
+
+ * The build procedure for "git persistent-https" helper (in contrib/)
+ has been updated so that it can be built with more recent versions
+ of Go.
+
+ * There is an optimization used in "git diff $treeA $treeB" to borrow
+ an already checked-out copy in the working tree when it is known to
+ be the same as the blob being compared, expecting that open/mmap of
+ such a file is faster than reading it from the object store, which
+ involves inflating and applying delta. This however kicked in even
+ when the checked-out copy needs to go through the convert-to-git
+ conversion (including the clean filter), which defeats the whole
+ point of the optimization. The optimization has been disabled when
+ the conversion is necessary.
+
+ * "git -c grep.patternType=extended log --basic-regexp" misbehaved
+ because the internal API to access the grep machinery was not
+ designed well.
+
+ * Windows port was failing some tests in t4130, due to the lack of
+ inode numbers in the values returned by its lstat(2) emulation.
+
+ * The characters in the label shown for tags/refs for commits in
+ "gitweb" output are now properly escaped for proper HTML output.
+
+ * FreeBSD can lie when asked the mtime of a directory, which made
+ the untracked cache code fall back to a slow path, which in turn
+ caused tests in t7063 to fail because they wanted to verify the
+ behaviour of the fast path.
+
+ * Squelch compiler warnings for nedmalloc (in compat/) library.
+
+ * The API documentation for hashmap was unclear if hashmap_entry
+ can be safely discarded without any other consideration. State
+ that it is safe to do so.
+
Also contains minor documentation updates and code clean-ups.
value as the boundary. See the --attach option in
linkgit:git-format-patch[1].
+format.from::
+ Provides the default value for the `--from` option to format-patch.
+ Accepts a boolean value, or a name and email address. If false,
+ format-patch defaults to `--no-from`, using commit authors directly in
+ the "From:" field of patch mails. If true, format-patch defaults to
+ `--from`, using your committer identity in the "From:" field of patch
+ mails and including a "From:" field in the body of the patch mail if
+ different. If set to a non-boolean value, format-patch uses that
+ value instead of your committer identity. Defaults to false.
+
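For instance (a sketch; the identity used is a placeholder):

------------------------
$ git config format.from true                            # put committer identity in "From:"
$ git config format.from 'Jane Doe <jane@example.org>'   # or a fixed identity
$ git format-patch --no-from -1                          # a command-line option still overrides it
------------------------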
format.numbered::
A boolean which can enable or disable sequence numbers in patch
subjects. It defaults to "auto" which enables it only if there
out of memory with a large window, but still be able to take
advantage of the large window for the smaller objects. The
size can be suffixed with "k", "m", or "g".
- `--window-memory=0` makes memory usage unlimited, which is the
- default.
+ `--window-memory=0` makes memory usage unlimited. The default
+ is taken from the `pack.windowMemory` configuration variable.
--max-pack-size=<n>::
Maximum size of each output pack file. The size can be suffixed with
+
`--force-with-lease=<refname>:<expect>` will protect the named ref (alone),
if it is going to be updated, by requiring its current value to be
-the same as the specified value <expect> (which is allowed to be
+the same as the specified value `<expect>` (which is allowed to be
different from the remote-tracking branch we have for the refname,
or we do not even have to have such a remote-tracking branch when
-this form is used).
+this form is used). If `<expect>` is the empty string, then the named ref
+must not already exist.
+
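For example, to push a branch only if the remote does not already have a ref by that name (a sketch; "origin" and "topic" are placeholders):

------------------------
$ git push --force-with-lease=refs/heads/topic: origin topic
------------------------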
Note that all forms other than `--force-with-lease=<refname>:<expect>`
that specifies the expected current value of the ref explicitly are
If only <infd> is given, it is assumed to be a bidirectional socket connected
to remote Git server (git-upload-pack, git-receive-pack or
-git-upload-achive). If both <infd> and <outfd> are given, they are assumed
+git-upload-archive). If both <infd> and <outfd> are given, they are assumed
to be pipes connected to a remote Git server (<infd> being the inbound pipe
and <outfd> being the outbound pipe).
out of memory with a large window, but still be able to take
advantage of the large window for the smaller objects. The
size can be suffixed with "k", "m", or "g".
- `--window-memory=0` makes memory usage unlimited, which is the
- default.
+ `--window-memory=0` makes memory usage unlimited. The default
+ is taken from the `pack.windowMemory` configuration variable.
+ Note that the actual memory usage will be the limit multiplied
+ by the number of threads used by linkgit:git-pack-objects[1].
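For example, to cap the delta search at roughly 256 MiB per thread by default (a sketch):

------------------------
$ git config pack.windowMemory 256m
$ git repack -a -d              # picks up the configured per-thread limit
------------------------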
--max-pack-size=<n>::
Maximum size of each output pack file. The size can be suffixed with
When `text` is set to "auto", the path is marked for automatic
end-of-line conversion. If Git decides that the content is
text, its line endings are converted to LF on checkin.
- When the file has been commited with CRLF, no conversion is done.
+ When the file has been committed with CRLF, no conversion is done.
Unspecified::
smudge = git-p4-filter --smudge %f
------------------------
+Note that "%f" is the name of the path that is being worked on. Depending
+on the version that is being filtered, the corresponding file on disk may
+not exist, or may have different contents. So, smudge and clean commands
+should not try to access the file on disk, but only act as filters on the
+content provided to them on standard input.
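A definition that follows this rule might look like the sketch below ("example", "my-clean" and "my-smudge" are hypothetical commands that read content on standard input, write the result to standard output, and use %f only as a label for messages):

------------------------
$ git config filter.example.clean  'my-clean %f'
$ git config filter.example.smudge 'my-smudge %f'
------------------------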
Interaction between checkin/checkout attributes
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
"--ignore-submodule" option. The 'git submodule' commands are not
affected by this setting.
+submodule.<name>.shallow::
+ When set to true, a clone of this submodule will be performed as a
+ shallow clone unless the user explicitly asks for a non-shallow
+ clone.
+
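For example, to record the preference in .gitmodules (a sketch; "plugins/foo" is a placeholder submodule name):

------------------------
$ git config -f .gitmodules submodule.plugins/foo.shallow true
------------------------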
EXAMPLES
--------
`entry` points to the entry to initialize.
+
`hash` is the hash code of the entry.
++
+The hashmap_entry structure does not hold references to external resources,
+and it is safe to just discard it once you are done with it (i.e. if
+your structure was allocated with xmalloc(), you can just free(3) it,
+and if it is on stack, you can just let it go out of scope).
`void *hashmap_get(const struct hashmap *map, const void *key, const void *keydata)`::
# Define HAVE_BSD_SYSCTL if your platform has a BSD-compatible sysctl function.
#
# Define HAVE_GETDELIM if your system has the getdelim() function.
+#
+# Define PAGER_ENV to a SP-separated list of VAR=VAL pairs to specify
+# default environment variables to be passed when a pager is spawned, e.g.
+#
+# PAGER_ENV = LESS=FRX LV=-c
+#
+# to say "export LESS=FRX (and LV=-c) if the environment variable
+# LESS (and LV) is not set, respectively".
GIT-VERSION-FILE: FORCE
@$(SHELL_PATH) ./GIT-VERSION-GEN
LIB_OBJS += merge-blobs.o
LIB_OBJS += merge-recursive.o
LIB_OBJS += mergesort.o
+LIB_OBJS += mru.o
LIB_OBJS += name-hash.o
LIB_OBJS += notes.o
LIB_OBJS += notes-cache.o
NO_PYTHON = NoThanks
endif
+ifndef PAGER_ENV
+PAGER_ENV = LESS=FRX LV=-c
+endif
+
QUIET_SUBDIR0 = +$(MAKE) -C # space to separate -C and subdir
QUIET_SUBDIR1 =
BASIC_CFLAGS += -DDEFAULT_HELP_FORMAT='"$(DEFAULT_HELP_FORMAT)"'
endif
+PAGER_ENV_SQ = $(subst ','\'',$(PAGER_ENV))
+PAGER_ENV_CQ = "$(subst ",\",$(subst \,\\,$(PAGER_ENV)))"
+PAGER_ENV_CQ_SQ = $(subst ','\'',$(PAGER_ENV_CQ))
+BASIC_CFLAGS += -DPAGER_ENV='$(PAGER_ENV_CQ_SQ)'
+
ALL_CFLAGS += $(BASIC_CFLAGS)
ALL_LDFLAGS += $(BASIC_LDFLAGS)
SCRIPT_DEFINES = $(SHELL_PATH_SQ):$(DIFF_SQ):$(GIT_VERSION):\
$(localedir_SQ):$(NO_CURL):$(USE_GETTEXT_SCHEME):$(SANE_TOOL_PATH_SQ):\
- $(gitwebdir_SQ):$(PERL_PATH_SQ):$(SANE_TEXT_GREP)
+ $(gitwebdir_SQ):$(PERL_PATH_SQ):$(SANE_TEXT_GREP):$(PAGER_ENV)
define cmd_munge_script
$(RM) $@ $@+ && \
sed -e '1s|#!.*/sh|#!$(SHELL_PATH_SQ)|' \
-e 's|@@GITWEBDIR@@|$(gitwebdir_SQ)|g' \
-e 's|@@PERL@@|$(PERL_PATH_SQ)|g' \
-e 's|@@SANE_TEXT_GREP@@|$(SANE_TEXT_GREP)|g' \
+ -e 's|@@PAGER_ENV@@|$(PAGER_ENV_SQ)|g' \
$@.sh >$@+
endef
@echo NO_PERL=\''$(subst ','\'',$(subst ','\'',$(NO_PERL)))'\' >>$@+
@echo NO_PYTHON=\''$(subst ','\'',$(subst ','\'',$(NO_PYTHON)))'\' >>$@+
@echo NO_UNIX_SOCKETS=\''$(subst ','\'',$(subst ','\'',$(NO_UNIX_SOCKETS)))'\' >>$@+
+ @echo PAGER_ENV=\''$(subst ','\'',$(subst ','\'',$(PAGER_ENV)))'\' >>$@+
ifdef TEST_OUTPUT_DIRECTORY
@echo TEST_OUTPUT_DIRECTORY=\''$(subst ','\'',$(subst ','\'',$(TEST_OUTPUT_DIRECTORY)))'\' >>$@+
endif
xsnprintf(header->chksum, sizeof(header->chksum), "%07o", ustar_header_chksum(header));
}
-static int write_extended_header(struct archiver_args *args,
- const unsigned char *sha1,
- const void *buffer, unsigned long size)
+static void write_extended_header(struct archiver_args *args,
+ const unsigned char *sha1,
+ const void *buffer, unsigned long size)
{
struct ustar_header header;
unsigned int mode;
prepare_header(args, &header, mode, size);
write_blocked(&header, sizeof(header));
write_blocked(buffer, size);
- return 0;
}
static int write_tar_entry(struct archiver_args *args,
prepare_header(args, &header, mode, size_in_header);
if (ext_header.len > 0) {
- err = write_extended_header(args, sha1, ext_header.buf,
- ext_header.len);
- if (err) {
- free(buffer);
- return err;
- }
+ write_extended_header(args, sha1, ext_header.buf,
+ ext_header.len);
}
strbuf_release(&ext_header);
write_blocked(&header, sizeof(header));
return 0;
}
-/**
- * Do the three-way merge using fake ancestor, their tree constructed
- * from the fake ancestor and the postimage of the patch, and our
- * state.
- */
-static int run_fallback_merge_recursive(const struct am_state *state,
- unsigned char *orig_tree,
- unsigned char *our_tree,
- unsigned char *their_tree)
-{
- struct child_process cp = CHILD_PROCESS_INIT;
- int status;
-
- cp.git_cmd = 1;
-
- argv_array_pushf(&cp.env_array, "GITHEAD_%s=%.*s",
- sha1_to_hex(their_tree), linelen(state->msg), state->msg);
- if (state->quiet)
- argv_array_push(&cp.env_array, "GIT_MERGE_VERBOSITY=0");
-
- argv_array_push(&cp.args, "merge-recursive");
- argv_array_push(&cp.args, sha1_to_hex(orig_tree));
- argv_array_push(&cp.args, "--");
- argv_array_push(&cp.args, sha1_to_hex(our_tree));
- argv_array_push(&cp.args, sha1_to_hex(their_tree));
-
- status = run_command(&cp) ? (-1) : 0;
- discard_cache();
- read_cache();
- return status;
-}
-
/**
* Attempt a threeway merge, using index_path as the temporary index.
*/
static int fall_back_threeway(const struct am_state *state, const char *index_path)
{
- unsigned char orig_tree[GIT_SHA1_RAWSZ], their_tree[GIT_SHA1_RAWSZ],
- our_tree[GIT_SHA1_RAWSZ];
+ struct object_id orig_tree, their_tree, our_tree;
+ const struct object_id *bases[1] = { &orig_tree };
+ struct merge_options o;
+ struct commit *result;
+ char *their_tree_name;
- if (get_sha1("HEAD", our_tree) < 0)
- hashcpy(our_tree, EMPTY_TREE_SHA1_BIN);
+ if (get_oid("HEAD", &our_tree) < 0)
+ hashcpy(our_tree.hash, EMPTY_TREE_SHA1_BIN);
if (build_fake_ancestor(state, index_path))
return error("could not build fake ancestor");
discard_cache();
read_cache_from(index_path);
- if (write_index_as_tree(orig_tree, &the_index, index_path, 0, NULL))
+ if (write_index_as_tree(orig_tree.hash, &the_index, index_path, 0, NULL))
return error(_("Repository lacks necessary blobs to fall back on 3-way merge."));
say(state, stdout, _("Using index info to reconstruct a base tree..."));
init_revisions(&rev_info, NULL);
rev_info.diffopt.output_format = DIFF_FORMAT_NAME_STATUS;
diff_opt_parse(&rev_info.diffopt, &diff_filter_str, 1, rev_info.prefix);
- add_pending_sha1(&rev_info, "HEAD", our_tree, 0);
+ add_pending_sha1(&rev_info, "HEAD", our_tree.hash, 0);
diff_setup_done(&rev_info.diffopt);
run_diff_index(&rev_info, 1);
}
return error(_("Did you hand edit your patch?\n"
"It does not apply to blobs recorded in its index."));
- if (write_index_as_tree(their_tree, &the_index, index_path, 0, NULL))
+ if (write_index_as_tree(their_tree.hash, &the_index, index_path, 0, NULL))
return error("could not write tree");
say(state, stdout, _("Falling back to patching base and 3-way merge..."));
* changes.
*/
- if (run_fallback_merge_recursive(state, orig_tree, our_tree, their_tree)) {
+ init_merge_options(&o);
+
+ o.branch1 = "HEAD";
+ their_tree_name = xstrfmt("%.*s", linelen(state->msg), state->msg);
+ o.branch2 = their_tree_name;
+
+ if (state->quiet)
+ o.verbosity = 0;
+
+ if (merge_recursive_generic(&o, &our_tree, &their_tree, 1, bases, &result)) {
rerere(state->allow_rerere_autoupdate);
+ free(their_tree_name);
return error(_("Failed to merge in the changes."));
}
+ free(their_tree_name);
return 0;
}
const char *mail = am_path(state, msgnum(state));
int apply_status;
+ reset_ident_date();
+
if (!file_exists(mail))
goto next;
lno = prepare_lines(&sb);
if (lno && !range_list.nr)
- string_list_append(&range_list, xstrdup("1"));
+ string_list_append(&range_list, "1");
anchor = 1;
range_set_init(&ranges, range_list.nr);
o.ancestor = old->name;
o.branch1 = new->name;
o.branch2 = "local";
- merge_trees(&o, new->commit->tree, work,
+ ret = merge_trees(&o, new->commit->tree, work,
old->commit->tree, &result);
+ if (ret < 0)
+ exit(128);
ret = reset_tree(new->commit->tree, opts, 0,
writeout_error);
+ strbuf_release(&o.obuf);
if (ret)
return ret;
}
static void describe_one_orphan(struct strbuf *sb, struct commit *commit)
{
strbuf_addstr(sb, " ");
- strbuf_addstr(sb,
- find_unique_abbrev(commit->object.oid.hash, DEFAULT_ABBREV));
+ strbuf_add_unique_abbrev(sb, commit->object.oid.hash, DEFAULT_ABBREV);
strbuf_addch(sb, ' ');
if (!parse_commit(commit))
pp_commit_easy(CMIT_FMT_ONELINE, commit, sb);
static int use_global_config, use_system_config, use_local_config;
static struct git_config_source given_config_source;
static int actions, types;
-static const char *get_color_slot, *get_colorbool_slot;
static int end_null;
static int respect_includes = -1;
static int show_origin;
static void add_people_count(struct strbuf *out, struct string_list *people)
{
if (people->nr == 1)
- strbuf_addf(out, "%s", people->items[0].string);
+ strbuf_addstr(out, people->items[0].string);
else if (people->nr == 2)
strbuf_addf(out, "%s (%d) and %s (%d)",
people->items[0].string,
static int thread;
static int do_signoff;
static int base_auto;
+static char *from;
static const char *signature = git_version_string;
static const char *signature_file;
static int config_cover_letter;
base_auto = git_config_bool(var, value);
return 0;
}
+ if (!strcmp(var, "format.from")) {
+ int b = git_config_maybe_bool(var, value);
+ free(from);
+ if (b < 0)
+ from = xstrdup(value);
+ else if (b)
+ from = xstrdup(git_committer_info(IDENT_NO_DATE));
+ else
+ from = NULL;
+ return 0;
+ }
return git_log_config(var, value, cb);
}
int quiet = 0;
int reroll_count = -1;
char *branch_name = NULL;
- char *from = NULL;
char *base_commit = NULL;
struct base_tree_info bases;
*/
pos = cache_name_pos(ent->name, ent->len);
if (0 <= pos)
- die("bug in show-killed-files");
+ die("BUG: killed-file %.*s not found",
+ ent->len, ent->name);
pos = -pos - 1;
while (pos < active_nr &&
ce_stage(active_cache[pos]))
#include "fmt-merge-msg.h"
#include "gpg-interface.h"
#include "sequencer.h"
+#include "string-list.h"
#define DEFAULT_TWOHEAD (1<<0)
#define DEFAULT_OCTOPUS (1<<1)
hold_locked_index(&lock, 1);
clean = merge_recursive(&o, head,
remoteheads->item, reversed, &result);
+ if (clean < 0)
+ exit(128);
if (active_cache_changed &&
write_locked_index(&the_index, &lock, COMMIT_LOCK))
die (_("unable to write %s"), get_index_file());
return ret;
}
-static void split_merge_strategies(const char *string, struct strategy **list,
- int *nr, int *alloc)
-{
- char *p, *q, *buf;
-
- if (!string)
- return;
-
- buf = xstrdup(string);
- q = buf;
- for (;;) {
- p = strchr(q, ' ');
- if (!p) {
- ALLOC_GROW(*list, *nr + 1, *alloc);
- (*list)[(*nr)++].name = xstrdup(q);
- free(buf);
- return;
- } else {
- *p = '\0';
- ALLOC_GROW(*list, *nr + 1, *alloc);
- (*list)[(*nr)++].name = xstrdup(q);
- q = ++p;
- }
- }
-}
-
static void add_strategies(const char *string, unsigned attr)
{
- struct strategy *list = NULL;
- int list_alloc = 0, list_nr = 0, i;
-
- memset(&list, 0, sizeof(list));
- split_merge_strategies(string, &list, &list_nr, &list_alloc);
- if (list) {
- for (i = 0; i < list_nr; i++)
- append_strategy(get_strategy(list[i].name));
+ int i;
+
+ if (string) {
+ struct string_list list = STRING_LIST_INIT_DUP;
+ struct string_list_item *item;
+ string_list_split(&list, string, ' ', -1);
+ for_each_string_list_item(item, &list)
+ append_strategy(get_strategy(item->string));
+ string_list_clear(&list, 0);
return;
}
for (i = 0; i < ARRAY_SIZE(all_strategy); i++)
int cmd_mv(int argc, const char **argv, const char *prefix)
{
- int i, gitmodules_modified = 0;
+ int i, flags, gitmodules_modified = 0;
int verbose = 0, show_only = 0, force = 0, ignore_errors = 0;
struct option builtin_mv_options[] = {
OPT__VERBOSE(&verbose, N_("be verbose")),
modes = xcalloc(argc, sizeof(enum update_mode));
/*
* Keep trailing slash, needed to let
- * "git mv file no-such-dir/" error out.
+ * "git mv file no-such-dir/" error out, except in the case
+ * "git mv directory no-such-dir/".
*/
- dest_path = internal_copy_pathspec(prefix, argv + argc, 1,
- KEEP_TRAILING_SLASH);
+ flags = KEEP_TRAILING_SLASH;
+ if (argc == 1 && is_directory(argv[0]) && !is_directory(argv[1]))
+ flags = 0;
+ dest_path = internal_copy_pathspec(prefix, argv + argc, 1, flags);
submodule_gitfile = xcalloc(argc, sizeof(char *));
if (dest_path[0][0] == '\0')
static unsigned long unpack_unreachable_expiration;
static int pack_loose_unreachable;
static int local;
+static int have_non_local_packs;
static int incremental;
static int ignore_packed_keep;
static int allow_ofs_delta;
return 1;
if (incremental)
return 0;
+
+ /*
+ * When asked to do --local (do not include an
+ * object that appears in a pack we borrow
+ * from elsewhere) or --honor-pack-keep (do not
+ * include an object that appears in a pack marked
+ * with .keep), we need to make sure no copy of this
+ * object come from in _any_ pack that causes us to
+ * omit it, and need to complete this loop. When
+ * neither option is in effect, we know the object
+ * we just found is going to be packed, so break
+ * out of the loop to return 1 now.
+ */
+ if (!ignore_packed_keep &&
+ (!local || !have_non_local_packs))
+ break;
+
if (local && !p->pack_local)
return 0;
if (ignore_packed_keep && p->pack_local && p->pack_keep)
progress = 2;
prepare_packed_git();
+ if (ignore_packed_keep) {
+ struct packed_git *p;
+ for (p = packed_git; p; p = p->next)
+ if (p->pack_local && p->pack_keep)
+ break;
+ if (!p) /* no keep-able packs found */
+ ignore_packed_keep = 0;
+ }
+ if (local) {
+ /*
+ * unlike ignore_packed_keep above, we do not want to
+ * unset "local" based on looking at packs, as it
+ * also covers non-local objects
+ */
+ struct packed_git *p;
+ for (p = packed_git; p; p = p->next) {
+ if (!p->pack_local) {
+ have_non_local_packs = 1;
+ break;
+ }
+ }
+ }
if (progress)
progress_state = start_progress(_("Counting objects"), 0);
(stop_at_non_option ? PARSE_OPT_STOP_AT_NON_OPTION : 0) |
PARSE_OPT_SHELL_EVAL);
- strbuf_addf(&parsed, " --");
+ strbuf_addstr(&parsed, " --");
sq_quote_argv(&parsed, argv, 0);
puts(parsed.buf);
return 0;
static int clone_submodule(const char *path, const char *gitdir, const char *url,
const char *depth, const char *reference, int quiet)
{
- struct child_process cp;
- child_process_init(&cp);
+ struct child_process cp = CHILD_PROCESS_INIT;
argv_array_push(&cp.args, "clone");
argv_array_push(&cp.args, "--no-checkout");
if (index < suc->failed_clones_nr) {
int *p;
ce = suc->failed_clones[index];
- if (!prepare_to_clone_next_submodule(ce, child, suc, err))
- die("BUG: ce was a submodule before?");
+ if (!prepare_to_clone_next_submodule(ce, child, suc, err)) {
+ suc->current++;
+ strbuf_addf(err, "BUG: submodule considered for cloning,"
+ " doesn't need cloning any more?\n");
+ return 0;
+ }
p = xmalloc(sizeof(*p));
*p = suc->current;
*idx_task_cb = p;
{
struct strbuf sb = STRBUF_INIT;
if (argc != 3)
- die("submodule--helper relative_path takes exactly 2 arguments, got %d", argc);
+ die("submodule--helper relative-path takes exactly 2 arguments, got %d", argc);
printf("%s", relative_path(argv[1], argv[2], &sb));
strbuf_release(&sb);
return 0;
}
+static const char *remote_submodule_branch(const char *path)
+{
+ const struct submodule *sub;
+ gitmodules_config();
+ git_config(submodule_config, NULL);
+
+ sub = submodule_from_path(null_sha1, path);
+ if (!sub)
+ return NULL;
+
+ if (!sub->branch)
+ return "master";
+
+ if (!strcmp(sub->branch, ".")) {
+ unsigned char sha1[20];
+ const char *refname = resolve_ref_unsafe("HEAD", 0, sha1, NULL);
+
+ if (!refname)
+ die(_("No such ref: %s"), "HEAD");
+
+ /* detached HEAD */
+ if (!strcmp(refname, "HEAD"))
+ die(_("Submodule (%s) branch configured to inherit "
+ "branch from superproject, but the superproject "
+ "is not on any branch"), sub->name);
+
+ if (!skip_prefix(refname, "refs/heads/", &refname))
+ die(_("Expecting a full ref name, got %s"), refname);
+ return refname;
+ }
+
+ return sub->branch;
+}
+
+static int resolve_remote_submodule_branch(int argc, const char **argv,
+ const char *prefix)
+{
+ const char *ret;
+ struct strbuf sb = STRBUF_INIT;
+ if (argc != 2)
+ die("submodule--helper remote-branch takes exactly one arguments, got %d", argc);
+
+ ret = remote_submodule_branch(argv[1]);
+ if (!ret)
+ die("submodule %s doesn't exist", argv[1]);
+
+ printf("%s", ret);
+ strbuf_release(&sb);
+ return 0;
+}
+
struct cmd_struct {
const char *cmd;
int (*fn)(int, const char **, const char *);
{"relative-path", resolve_relative_path},
{"resolve-relative-url", resolve_relative_url},
{"resolve-relative-url-test", resolve_relative_url_test},
- {"init", module_init}
+ {"init", module_init},
+ {"remote-branch", resolve_remote_submodule_branch}
};
int cmd_submodule__helper(int argc, const char **argv, const char *prefix)
report(_("Untracked cache enabled for '%s'"), get_git_work_tree());
break;
default:
- die("Bug: bad untracked_cache value: %d", untracked_cache);
+ die("BUG: bad untracked_cache value: %d", untracked_cache);
}
if (active_cache_changed) {
struct strbuf sb = STRBUF_INIT;
const char *name;
struct stat st;
- struct child_process cp;
+ struct child_process cp = CHILD_PROCESS_INIT;
struct argv_array child_env = ARGV_ARRAY_INIT;
int counter = 0, len, ret;
struct strbuf symref = STRBUF_INIT;
argv_array_pushf(&child_env, "%s=%s", GIT_DIR_ENVIRONMENT, sb_git.buf);
argv_array_pushf(&child_env, "%s=%s", GIT_WORK_TREE_ENVIRONMENT, path);
- memset(&cp, 0, sizeof(cp));
cp.git_cmd = 1;
if (commit)
}
if (opts.new_branch) {
- struct child_process cp;
- memset(&cp, 0, sizeof(cp));
+ struct child_process cp = CHILD_PROCESS_INIT;
cp.git_cmd = 1;
argv_array_push(&cp.args, "branch");
if (opts.force_new_branch)
extern const char *git_editor(void);
extern const char *git_pager(int stdout_is_tty);
extern int git_ident_config(const char *, const char *, void *);
+extern void reset_ident_date(void);
struct ident_split {
const char *name_begin;
char pack_name[FLEX_ARRAY]; /* more */
} *packed_git;
+/*
+ * A most-recently-used ordered version of the packed_git list, which can
+ * be iterated instead of packed_git (and marked via mru_mark).
+ */
+struct mru;
+extern struct mru *packed_git_mru;
+
struct pack_entry {
off_t offset;
unsigned char sha1[20];
extern void close_pack_windows(struct packed_git *);
extern void close_all_packs(void);
extern void unuse_pack(struct pack_window **);
-extern void free_pack_by_name(const char *);
extern void clear_delta_base_cache(void);
extern struct packed_git *add_packed_git(const char *path, size_t path_len, int local);
extern int copy_file_with_time(const char *dst, const char *src, int mode);
extern void write_or_die(int fd, const void *buf, size_t count);
-extern int write_or_whine_pipe(int fd, const void *buf, size_t count, const char *msg);
extern void fsync_or_die(int fd, const char *);
extern ssize_t read_in_full(int fd, void *buf, size_t count);
*
* After including this header file, using:
*
- * define_commit_slab(indegee, int);
+ * define_commit_slab(indegree, int);
*
* will let you call the following functions:
*
return slabname##_at_peek(s, c, 0); \
} \
\
-static int stat_ ##slabname## realloc
+struct slabname
/*
- * Note that this seemingly redundant second declaration is required
+ * Note that this redundant forward declaration is required
* to allow a terminating semicolon, which makes instantiations look
* like function declarations. I.e., the expansion of
*
* define_commit_slab(indegree, int);
*
- * ends in 'static int stat_indegreerealloc;'. This would otherwise
+ * ends in 'struct indegree;'. This would otherwise
* be a syntax error according (at least) to ISO C. It's hard to
* catch because GCC silently parses it by default.
*/
void **ret;
threadcache *tc;
int mymspace;
- size_t i, *adjustedsizes=(size_t *) alloca(elems*sizeof(size_t));
- if(!adjustedsizes) return 0;
- for(i=0; i<elems; i++)
- adjustedsizes[i]=sizes[i]<sizeof(threadcacheblk) ? sizeof(threadcacheblk) : sizes[i];
+ size_t i, *adjustedsizes=(size_t *) alloca(elems*sizeof(size_t));
+ if(!adjustedsizes) return 0;
+ for(i=0; i<elems; i++)
+ adjustedsizes[i]=sizes[i]<sizeof(threadcacheblk) ? sizeof(threadcacheblk) : sizes[i];
GetThreadCache(&p, &tc, &mymspace, 0);
GETMSPACE(m, p, tc, mymspace, 0,
ret=mspace_independent_comalloc(m, elems, adjustedsizes, chunks));
*/
char *strdup(const char *s1)
{
- char *s2 = 0;
- if (s1) {
- size_t len = strlen(s1) + 1;
- s2 = malloc(len);
+ size_t len = strlen(s1) + 1;
+ char *s2 = malloc(len);
+
+ if (s2)
memcpy(s2, s1, len);
- }
return s2;
}
#endif
HAVE_PATHS_H = YesPlease
GMTIME_UNRELIABLE_ERRORS = UnfortunatelyYes
HAVE_BSD_SYSCTL = YesPlease
+ PAGER_ENV = LESS=FRX LV=-c MORE=FRX
endif
ifeq ($(uname_S),OpenBSD)
NO_STRCASESTR = YesPlease
while [ $c -lt $cword ]; do
i="${words[c]}"
case "$i" in
- -d|-m) only_local_ref="y" ;;
- -r) has_r="y" ;;
+ -d|--delete|-m|--move) only_local_ref="y" ;;
+ -r|--remotes) has_r="y" ;;
esac
((c++))
done
--color --no-color --verbose --abbrev= --no-abbrev
--track --no-track --contains --merged --no-merged
--set-upstream-to= --edit-description --list
- --unset-upstream
+ --unset-upstream --delete --move --remotes
"
;;
*)
__git_diff_algorithms="myers minimal patience histogram"
+__git_diff_submodule_formats="log short"
+
__git_diff_common_options="--stat --numstat --shortstat --summary
--patch-with-stat --name-only --name-status --color
--no-color --color-words --no-renames --check
--dirstat --dirstat= --dirstat-by-file
--dirstat-by-file= --cumulative
--diff-algorithm=
+ --submodule --submodule=
"
_git_diff ()
__gitcomp "$__git_diff_algorithms" "" "${cur##--diff-algorithm=}"
return
;;
+ --submodule=*)
+ __gitcomp "$__git_diff_submodule_formats" "" "${cur##--submodule=}"
+ return
+ ;;
--*)
__gitcomp "--cached --staged --pickaxe-all --pickaxe-regex
--base --ours --theirs --no-index
__gitcomp "full short no" "" "${cur##--decorate=}"
return
;;
+ --diff-algorithm=*)
+ __gitcomp "$__git_diff_algorithms" "" "${cur##--diff-algorithm=}"
+ return
+ ;;
+ --submodule=*)
+ __gitcomp "$__git_diff_submodule_formats" "" "${cur##--submodule=}"
+ return
+ ;;
--*)
__gitcomp "
$__git_log_common_options
format.attach
format.cc
format.coverLetter
+ format.from
format.headers
format.numbered
format.pretty
__gitcomp "$__git_diff_algorithms" "" "${cur##--diff-algorithm=}"
return
;;
+ --submodule=*)
+ __gitcomp "$__git_diff_submodule_formats" "" "${cur##--submodule=}"
+ return
+ ;;
--*)
__gitcomp "--pretty= --format= --abbrev-commit --oneline
--show-signature
$mtime = oct $mtime;
next if $typeflag == 5; # directory
- print FI "blob\n", "mark :$next_mark\n";
- if ($typeflag == 2) { # symbolic link
- print FI "data ", length($linkname), "\n", $linkname;
- $mode = 0120000;
- } else {
- print FI "data $size\n";
- while ($size > 0 && read(I, $_, 512) == 512) {
- print FI substr($_, 0, $size);
- $size -= 512;
+ if ($typeflag != 1) { # handle hard links later
+ print FI "blob\n", "mark :$next_mark\n";
+ if ($typeflag == 2) { # symbolic link
+ print FI "data ", length($linkname), "\n",
+ $linkname;
+ $mode = 0120000;
+ } else {
+ print FI "data $size\n";
+ while ($size > 0 && read(I, $_, 512) == 512) {
+ print FI substr($_, 0, $size);
+ $size -= 512;
+ }
}
+ print FI "\n";
}
- print FI "\n";
my $path;
if ($prefix) {
} else {
$path = "$name";
}
- $files{$path} = [$next_mark++, $mode];
+
+ if ($typeflag == 1) { # hard link
+ $linkname = "$prefix/$linkname" if $prefix;
+ $files{$path} = [ $files{$linkname}->[0], $mode ];
+ } else {
+ $files{$path} = [$next_mark++, $mode];
+ }
$author_time = $mtime if $mtime > $author_time;
$path =~ m,^([^/]+)/,;
like ``<a href="foo">link</a>``, the reader will see the HTML
source code and not a proper link.
- Set ``multimailhook.htmlInIntro`` to true to allow writting HTML
+ Set ``multimailhook.htmlInIntro`` to true to allow writing HTML
formatting in introduction templates. Similarly, set
``multimailhook.htmlInFooter`` for HTML in the footer.
multimailhook.dateSubstitute
String to use as a substitute for ``Date:`` in the output of ``git
- log`` while formatting commit messages. This is usefull to avoid
+ log`` while formatting commit messages. This is useful to avoid
emitting a line that can be interpreted by mailers as the start of
a cited message (Zimbra webmail in particular). Defaults to
``CommitDate:``. Set to an empty string or ``none`` to deactivate
[InputOutput::RequireCheckedSyscalls]
functions = open say close
-# This rules demands to add a dependancy for the Readonly module. This is not
+# This rule demands to add a dependency for the Readonly module. This is not
# wished.
[-ValuesAndExpressions::ProhibitConstantPragma]
print {*STDERR} "Check the configuration of file uploads in your mediawiki.\n";
return $newrevid;
}
- # Deleting and uploading a file requires a priviledged user
+ # Deleting and uploading a file requires a privileged user
if ($file_deleted) {
$mediawiki = connect_maybe($mediawiki, $remotename, $url);
my $query = {
# also test that we still can split out an entirely new subtree
# if the parent of the first commit in the tree is not empty,
- # then the new subtree has accidently been attached to something
+ # then the new subtree has accidentally been attached to something
git subtree split --prefix="sub dir2" --branch subproj2-br &&
check_equal "$(git log --pretty=format:%P -1 subproj2-br)" ""
)
rename_dst_nr * rename_src_nr, 50, 1);
}
- mx = xcalloc(st_mult(num_create, NUM_CANDIDATE_PER_DST), sizeof(*mx));
+ mx = xcalloc(st_mult(NUM_CANDIDATE_PER_DST, num_create), sizeof(*mx));
for (dst_cnt = i = 0; i < rename_dst_nr; i++) {
struct diff_filespec *two = rename_dst[i].two;
struct diff_score *m;
exit($exitcode);
}
-sub find_worktree
-{
- # Git->repository->wc_path() does not honor changes to the working
- # tree location made by $ENV{GIT_WORK_TREE} or the 'core.worktree'
- # config variable.
- return Git::command_oneline('rev-parse', '--show-toplevel');
-}
-
sub print_tool_help
{
# See the comment at the bottom of file_diff() for the reason behind
sub use_wt_file
{
- my ($repo, $workdir, $file, $sha1) = @_;
+ my ($workdir, $file, $sha1) = @_;
my $null_sha1 = '0' x 40;
if (-l "$workdir/$file" || ! -e _) {
return (0, $null_sha1);
}
- my $wt_sha1 = $repo->command_oneline('hash-object', "$workdir/$file");
+ my $wt_sha1 = Git::command_oneline('hash-object', "$workdir/$file");
my $use = ($sha1 eq $null_sha1) || ($sha1 eq $wt_sha1);
return ($use, $wt_sha1);
}
{
my ($repo_path, $index, $worktree) = @_;
$ENV{GIT_INDEX_FILE} = $index;
- $ENV{GIT_WORK_TREE} = $worktree;
- my $must_unset_git_dir = 0;
- if (not defined($ENV{GIT_DIR})) {
- $must_unset_git_dir = 1;
- $ENV{GIT_DIR} = $repo_path;
- }
- my @refreshargs = qw/update-index --really-refresh -q --unmerged/;
- my @gitargs = qw/diff-files --name-only -z/;
+ my @gitargs = ('--git-dir', $repo_path, '--work-tree', $worktree);
+ my @refreshargs = (
+ @gitargs, 'update-index',
+ '--really-refresh', '-q', '--unmerged');
try {
Git::command_oneline(@refreshargs);
} catch Git::Error::Command with {};
- my $line = Git::command_oneline(@gitargs);
+ my @diffargs = (@gitargs, 'diff-files', '--name-only', '-z');
+ my $line = Git::command_oneline(@diffargs);
my @files;
if (defined $line) {
@files = split('\0', $line);
}
delete($ENV{GIT_INDEX_FILE});
- delete($ENV{GIT_WORK_TREE});
- delete($ENV{GIT_DIR}) if ($must_unset_git_dir);
return map { $_ => 1 } @files;
}
sub setup_dir_diff
{
- my ($repo, $workdir, $symlinks) = @_;
-
- # Run the diff; exit immediately if no diff found
- # 'Repository' and 'WorkingCopy' must be explicitly set to insure that
- # if $GIT_DIR and $GIT_WORK_TREE are set in ENV, they are actually used
- # by Git->repository->command*.
- my $repo_path = $repo->repo_path();
- my %repo_args = (Repository => $repo_path, WorkingCopy => $workdir);
- my $diffrepo = Git->repository(%repo_args);
-
+ my ($workdir, $symlinks) = @_;
my @gitargs = ('diff', '--raw', '--no-abbrev', '-z', @ARGV);
- my $diffrtn = $diffrepo->command_oneline(@gitargs);
+ my $diffrtn = Git::command_oneline(@gitargs);
exit(0) unless defined($diffrtn);
# Build index info for left and right sides of the diff
if ($lmode eq $symlink_mode) {
$symlink{$src_path}{left} =
- $diffrepo->command_oneline('show', "$lsha1");
+ Git::command_oneline('show', $lsha1);
}
if ($rmode eq $symlink_mode) {
$symlink{$dst_path}{right} =
- $diffrepo->command_oneline('show', "$rsha1");
+ Git::command_oneline('show', $rsha1);
}
if ($lmode ne $null_mode and $status !~ /^C/) {
if ($working_tree_dups{$dst_path}++) {
next;
}
- my ($use, $wt_sha1) = use_wt_file($repo, $workdir,
- $dst_path, $rsha1);
+ my ($use, $wt_sha1) =
+ use_wt_file($workdir, $dst_path, $rsha1);
if ($use) {
push @working_tree, $dst_path;
$wtindex .= "$rmode $wt_sha1\t$dst_path\0";
mkpath($ldir) or exit_cleanup($tmpdir, 1);
mkpath($rdir) or exit_cleanup($tmpdir, 1);
- # If $GIT_DIR is not set prior to calling 'git update-index' and
- # 'git checkout-index', then those commands will fail if difftool
- # is called from a directory other than the repo root.
- my $must_unset_git_dir = 0;
- if (not defined($ENV{GIT_DIR})) {
- $must_unset_git_dir = 1;
- $ENV{GIT_DIR} = $repo_path;
- }
-
# Populate the left and right directories based on each index file
my ($inpipe, $ctx);
$ENV{GIT_INDEX_FILE} = "$tmpdir/lindex";
($inpipe, $ctx) =
- $repo->command_input_pipe(qw(update-index -z --index-info));
+ Git::command_input_pipe('update-index', '-z', '--index-info');
print($inpipe $lindex);
- $repo->command_close_pipe($inpipe, $ctx);
+ Git::command_close_pipe($inpipe, $ctx);
my $rc = system('git', 'checkout-index', '--all', "--prefix=$ldir/");
exit_cleanup($tmpdir, $rc) if $rc != 0;
$ENV{GIT_INDEX_FILE} = "$tmpdir/rindex";
($inpipe, $ctx) =
- $repo->command_input_pipe(qw(update-index -z --index-info));
+ Git::command_input_pipe('update-index', '-z', '--index-info');
print($inpipe $rindex);
- $repo->command_close_pipe($inpipe, $ctx);
+ Git::command_close_pipe($inpipe, $ctx);
$rc = system('git', 'checkout-index', '--all', "--prefix=$rdir/");
exit_cleanup($tmpdir, $rc) if $rc != 0;
$ENV{GIT_INDEX_FILE} = "$tmpdir/wtindex";
($inpipe, $ctx) =
- $repo->command_input_pipe(qw(update-index --info-only -z --index-info));
+ Git::command_input_pipe('update-index', '--info-only', '-z', '--index-info');
print($inpipe $wtindex);
- $repo->command_close_pipe($inpipe, $ctx);
+ Git::command_close_pipe($inpipe, $ctx);
# If $GIT_DIR was explicitly set just for the update/checkout
# commands, then it should be unset before continuing.
- delete($ENV{GIT_DIR}) if ($must_unset_git_dir);
delete($ENV{GIT_INDEX_FILE});
# Changes in the working tree need special treatment since they are
my $rc;
my $error = 0;
my $repo = Git->repository();
- my $workdir = find_worktree();
- my ($a, $b, $tmpdir, @worktree) =
- setup_dir_diff($repo, $workdir, $symlinks);
+ my $repo_path = $repo->repo_path();
+ my $workdir = $repo->wc_path();
+ my ($a, $b, $tmpdir, @worktree) = setup_dir_diff($workdir, $symlinks);
if (defined($extcmd)) {
$rc = system($extcmd, $a, $b);
next if ! -f "$b/$file";
if (!$indices_loaded) {
- %wt_modified = changed_files($repo->repo_path(),
- "$tmpdir/wtindex", "$workdir");
- %tmp_modified = changed_files($repo->repo_path(),
- "$tmpdir/wtindex", "$b");
+ %wt_modified = changed_files(
+ $repo_path, "$tmpdir/wtindex", $workdir);
+ %tmp_modified = changed_files(
+ $repo_path, "$tmpdir/wtindex", $b);
$indices_loaded = 1;
}
if self.useClientSpec:
self.clientSpecDirs = getClientSpec()
- # Check for the existance of P4 branches
+ # Check for the existence of P4 branches
branchesDetected = (len(p4BranchesInGit().keys()) > 1)
if self.useClientSpec and not branchesDetected:
else
GIT_PAGER=cat
fi
- : "${LESS=-FRX}"
- : "${LV=-c}"
- export LESS LV
+ for vardef in @@PAGER_ENV@@
+ do
+ var=${vardef%%=*}
+ eval ": \"\${$vardef}\" && export $var"
+ done
eval "$GIT_PAGER" '"$@"'
}
'')
git fetch ;;
*)
- git fetch $(get_default_remote) "$2" ;;
+ shift
+ git fetch $(get_default_remote) "$@" ;;
esac
)
name=$(git submodule--helper name "$sm_path") || exit
url=$(git config submodule."$name".url)
- branch=$(get_submodule_config "$name" branch master)
if ! test -z "$update"
then
update_module=$update
if test -n "$remote"
then
+ branch=$(git submodule--helper remote-branch "$sm_path")
if test -z "$nofetch"
then
# Fetch remote before determining tracking $sha1
- fetch_in_submodule "$sm_path" ||
+ fetch_in_submodule "$sm_path" $depth ||
die "$(eval_gettext "Unable to fetch in submodule path '\$sm_path'")"
fi
remote_name=$(sanitize_submodule_env; cd "$sm_path" && get_default_remote)
# Run fetch only if $sha1 isn't present or it
# is not reachable from a ref.
is_tip_reachable "$sm_path" "$sha1" ||
- fetch_in_submodule "$sm_path" ||
+ fetch_in_submodule "$sm_path" $depth ||
die "$(eval_gettext "Unable to fetch in submodule path '\$displaypath'")"
# Now we tried the usual fetch, but $sha1 may
# not be reachable from any of the refs
is_tip_reachable "$sm_path" "$sha1" ||
- fetch_in_submodule "$sm_path" "$sha1" ||
+ fetch_in_submodule "$sm_path" $depth "$sha1" ||
die "$(eval_gettext "Fetched in submodule path '\$displaypath', but it did not contain \$sha1. Direct fetching of that commit failed.")"
fi
-href => href(
action=>$dest_action,
hash=>$dest
- )}, $name);
+ )}, esc_html($name));
$markers .= " <span class=\"".esc_attr($class)."\" title=\"".esc_attr($ref)."\">" .
$link . "</span>";
for (p = opt->header_list; p; p = p->next) {
if (p->token != GREP_PATTERN_HEAD)
- die("bug: a non-header pattern in grep header list.");
+ die("BUG: a non-header pattern in grep header list.");
if (p->field < GREP_HEADER_FIELD_MIN ||
GREP_HEADER_FIELD_MAX <= p->field)
- die("bug: unknown header field %d", p->field);
+ die("BUG: unknown header field %d", p->field);
compile_regexp(p, opt);
}
h = compile_pattern_atom(&pp);
if (!h || pp != p->next)
- die("bug: malformed header expr");
+ die("BUG: malformed header expr");
if (!header_group[p->field]) {
header_group[p->field] = h;
continue;
case GREP_BINARY_TEXT:
break;
default:
- die("bug: unknown binary handling mode");
+ die("BUG: unknown binary handling mode");
}
}
ls.userData = userData;
ls.userFunc = userFunc;
- strbuf_addf(&out_buffer.buf, PROPFIND_ALL_REQUEST);
+ strbuf_addstr(&out_buffer.buf, PROPFIND_ALL_REQUEST);
dav_headers = curl_slist_append(dav_headers, "Depth: 1");
dav_headers = curl_slist_append(dav_headers, "Content-Type: text/xml");
strbuf_addf(buf, "objects/%.*s/", 2, hex);
if (!only_two_digit_prefix)
- strbuf_addf(buf, "%s", hex+2);
+ strbuf_addstr(buf, hex + 2);
}
char *get_remote_object_url(const char *url, const char *hex,
return git_default_date.buf;
}
+void reset_ident_date(void)
+{
+ strbuf_reset(&git_default_date);
+}
+
static int crud(unsigned char c)
{
return c <= 32 ||
va_start(va, fmt);
if (blen <= 0 || (unsigned)(ret = vsnprintf(buf, blen, fmt, va)) >= (unsigned)blen)
- die("Fatal: buffer too small. Please report a bug.");
+ die("BUG: buffer too small. Please report a bug.");
va_end(va);
return ret;
}
if (current_and_HEAD &&
decoration->type == DECORATION_REF_HEAD) {
- strbuf_addstr(sb, color_reset);
- strbuf_addstr(sb, color_commit);
strbuf_addstr(sb, " -> ");
strbuf_addstr(sb, color_reset);
strbuf_addstr(sb, decorate_get_color(use_color, current_and_HEAD->type));
#include "dir.h"
#include "submodule.h"
+static void flush_output(struct merge_options *o)
+{
+ if (o->buffer_output < 2 && o->obuf.len) {
+ fputs(o->obuf.buf, stdout);
+ strbuf_reset(&o->obuf);
+ }
+}
+
+static int err(struct merge_options *o, const char *err, ...)
+{
+ va_list params;
+
+ if (o->buffer_output < 2)
+ flush_output(o);
+ else {
+ strbuf_complete(&o->obuf, '\n');
+ strbuf_addstr(&o->obuf, "error: ");
+ }
+ va_start(params, err);
+ strbuf_vaddf(&o->obuf, err, params);
+ va_end(params);
+ if (o->buffer_output > 1)
+ strbuf_addch(&o->obuf, '\n');
+ else {
+ error("%s", o->obuf.buf);
+ strbuf_reset(&o->obuf);
+ }
+
+ return -1;
+}
+
static struct tree *shift_tree_object(struct tree *one, struct tree *two,
const char *subtree_shift)
{
return (!o->call_depth && o->verbosity >= v) || o->verbosity >= 5;
}
-static void flush_output(struct merge_options *o)
-{
- if (o->obuf.len) {
- fputs(o->obuf.buf, stdout);
- strbuf_reset(&o->obuf);
- }
-}
-
__attribute__((format (printf, 3, 4)))
static void output(struct merge_options *o, int v, const char *fmt, ...)
{
static void output_commit_title(struct merge_options *o, struct commit *commit)
{
- int i;
- flush_output(o);
- for (i = o->call_depth; i--;)
- fputs(" ", stdout);
+ strbuf_addchars(&o->obuf, ' ', o->call_depth * 2);
if (commit->util)
- printf("virtual %s\n", merge_remote_util(commit)->name);
+ strbuf_addf(&o->obuf, "virtual %s\n",
+ merge_remote_util(commit)->name);
else {
- printf("%s ", find_unique_abbrev(commit->object.oid.hash, DEFAULT_ABBREV));
+ strbuf_addf(&o->obuf, "%s ",
+ find_unique_abbrev(commit->object.oid.hash,
+ DEFAULT_ABBREV));
if (parse_commit(commit) != 0)
- printf(_("(bad commit)\n"));
+ strbuf_addf(&o->obuf, _("(bad commit)\n"));
else {
const char *title;
const char *msg = get_commit_buffer(commit, NULL);
int len = find_commit_subject(msg, &title);
if (len)
- printf("%.*s\n", len, title);
+ strbuf_addf(&o->obuf, "%.*s\n", len, title);
unuse_commit_buffer(commit, msg);
}
}
+ flush_output(o);
}
-static int add_cacheinfo(unsigned int mode, const struct object_id *oid,
+static int add_cacheinfo(struct merge_options *o,
+ unsigned int mode, const struct object_id *oid,
const char *path, int stage, int refresh, int options)
{
struct cache_entry *ce;
ce = make_cache_entry(mode, oid ? oid->hash : null_sha1, path, stage, 0);
if (!ce)
- return error(_("addinfo_cache failed for path '%s'"), path);
+ return err(o, _("addinfo_cache failed for path '%s'"), path);
ret = add_cache_entry(ce, options);
if (refresh) {
fprintf(stderr, "BUG: %d %.*s\n", ce_stage(ce),
(int)ce_namelen(ce), ce->name);
}
- die("Bug in merge-recursive.c");
+ die("BUG: unmerged index entries in merge-recursive.c");
}
if (!active_cache_tree)
active_cache_tree = cache_tree();
if (!cache_tree_fully_valid(active_cache_tree) &&
- cache_tree_update(&the_index, 0) < 0)
- die(_("error building trees"));
+ cache_tree_update(&the_index, 0) < 0) {
+ err(o, _("error building trees"));
+ return NULL;
+ }
result = lookup_tree(active_cache_tree->sha1);
* and the file need to be present, then the D/F file will be
* reinstated with a new unique name at the time it is processed.
*/
- struct string_list df_sorted_entries;
+ struct string_list df_sorted_entries = STRING_LIST_INIT_NODUP;
const char *last_file = NULL;
int last_len = 0;
int i;
return;
/* Ensure D/F conflicts are adjacent in the entries list. */
- memset(&df_sorted_entries, 0, sizeof(struct string_list));
for (i = 0; i < entries->nr; i++) {
struct string_list_item *next = &entries->items[i];
string_list_append(&df_sorted_entries, next->string)->util =
return renames;
}
-static int update_stages(const char *path, const struct diff_filespec *o,
+static int update_stages(struct merge_options *opt, const char *path,
+ const struct diff_filespec *o,
const struct diff_filespec *a,
const struct diff_filespec *b)
{
if (remove_file_from_cache(path))
return -1;
if (o)
- if (add_cacheinfo(o->mode, &o->oid, path, 1, 0, options))
+ if (add_cacheinfo(opt, o->mode, &o->oid, path, 1, 0, options))
return -1;
if (a)
- if (add_cacheinfo(a->mode, &a->oid, path, 2, 0, options))
+ if (add_cacheinfo(opt, a->mode, &a->oid, path, 2, 0, options))
return -1;
if (b)
- if (add_cacheinfo(b->mode, &b->oid, path, 3, 0, options))
+ if (add_cacheinfo(opt, b->mode, &b->oid, path, 3, 0, options))
return -1;
return 0;
}
{
int pos = cache_name_pos(path, strlen(path));
- if (pos < 0)
- pos = -1 - pos;
- while (pos < active_nr &&
- !strcmp(path, active_cache[pos]->name)) {
- /*
- * If stage #0, it is definitely tracked.
- * If it has stage #2 then it was tracked
- * before this merge started. All other
- * cases the path was not tracked.
- */
- switch (ce_stage(active_cache[pos])) {
- case 0:
- case 2:
+ if (0 <= pos)
+ /* we have been tracking this path */
+ return 1;
+
+ /*
+ * Look for an unmerged entry for the path,
+ * specifically stage #2, which would indicate
+ * that "our" side before the merge started
+ * had the path tracked (and resulted in a conflict).
+ */
+ for (pos = -1 - pos;
+ pos < active_nr && !strcmp(path, active_cache[pos]->name);
+ pos++)
+ if (ce_stage(active_cache[pos]) == 2)
return 1;
- }
- pos++;
- }
return 0;
}
/* Make sure leading directories are created */
status = safe_create_leading_directories_const(path);
if (status) {
- if (status == SCLD_EXISTS) {
+ if (status == SCLD_EXISTS)
/* something else exists */
- error(msg, path, _(": perhaps a D/F conflict?"));
- return -1;
- }
- die(msg, path, "");
+ return err(o, msg, path, _(": perhaps a D/F conflict?"));
+ return err(o, msg, path, "");
}
/*
* tracking it.
*/
if (would_lose_untracked(path))
- return error(_("refusing to lose untracked file at '%s'"),
+ return err(o, _("refusing to lose untracked file at '%s'"),
path);
/* Successful unlink is good.. */
if (errno == ENOENT)
return 0;
/* .. but not some other error (who really cares what?) */
- return error(msg, path, _(": perhaps a D/F conflict?"));
+ return err(o, msg, path, _(": perhaps a D/F conflict?"));
}
-static void update_file_flags(struct merge_options *o,
- const struct object_id *oid,
- unsigned mode,
- const char *path,
- int update_cache,
- int update_wd)
+static int update_file_flags(struct merge_options *o,
+ const struct object_id *oid,
+ unsigned mode,
+ const char *path,
+ int update_cache,
+ int update_wd)
{
+ int ret = 0;
+
if (o->call_depth)
update_wd = 0;
buf = read_sha1_file(oid->hash, &type, &size);
if (!buf)
- die(_("cannot read object %s '%s'"), oid_to_hex(oid), path);
- if (type != OBJ_BLOB)
- die(_("blob expected for %s '%s'"), oid_to_hex(oid), path);
+ return err(o, _("cannot read object %s '%s'"), oid_to_hex(oid), path);
+ if (type != OBJ_BLOB) {
+ ret = err(o, _("blob expected for %s '%s'"), oid_to_hex(oid), path);
+ goto free_buf;
+ }
if (S_ISREG(mode)) {
struct strbuf strbuf = STRBUF_INIT;
if (convert_to_working_tree(path, buf, size, &strbuf)) {
if (make_room_for_path(o, path) < 0) {
update_wd = 0;
- free(buf);
- goto update_index;
+ goto free_buf;
}
if (S_ISREG(mode) || (!has_symlinks && S_ISLNK(mode))) {
int fd;
else
mode = 0666;
fd = open(path, O_WRONLY | O_TRUNC | O_CREAT, mode);
- if (fd < 0)
- die_errno(_("failed to open '%s'"), path);
+ if (fd < 0) {
+ ret = err(o, _("failed to open '%s': %s"),
+ path, strerror(errno));
+ goto free_buf;
+ }
write_in_full(fd, buf, size);
close(fd);
} else if (S_ISLNK(mode)) {
safe_create_leading_directories_const(path);
unlink(path);
if (symlink(lnk, path))
- die_errno(_("failed to symlink '%s'"), path);
+ ret = err(o, _("failed to symlink '%s': %s"),
+ path, strerror(errno));
free(lnk);
} else
- die(_("do not know what to do with %06o %s '%s'"),
- mode, oid_to_hex(oid), path);
+ ret = err(o,
+ _("do not know what to do with %06o %s '%s'"),
+ mode, oid_to_hex(oid), path);
+ free_buf:
free(buf);
}
update_index:
- if (update_cache)
- add_cacheinfo(mode, oid, path, 0, update_wd, ADD_CACHE_OK_TO_ADD);
+ if (!ret && update_cache)
+ add_cacheinfo(o, mode, oid, path, 0, update_wd, ADD_CACHE_OK_TO_ADD);
+ return ret;
}
-static void update_file(struct merge_options *o,
- int clean,
- const struct object_id *oid,
- unsigned mode,
- const char *path)
+static int update_file(struct merge_options *o,
+ int clean,
+ const struct object_id *oid,
+ unsigned mode,
+ const char *path)
{
- update_file_flags(o, oid, mode, path, o->call_depth || clean, !o->call_depth);
+ return update_file_flags(o, oid, mode, path, o->call_depth || clean, !o->call_depth);
}
/* Low level file merging, update and removal */
return merge_status;
}
-static struct merge_file_info merge_file_1(struct merge_options *o,
+static int merge_file_1(struct merge_options *o,
const struct diff_filespec *one,
const struct diff_filespec *a,
const struct diff_filespec *b,
const char *branch1,
- const char *branch2)
+ const char *branch2,
+ struct merge_file_info *result)
{
- struct merge_file_info result;
- result.merge = 0;
- result.clean = 1;
+ result->merge = 0;
+ result->clean = 1;
if ((S_IFMT & a->mode) != (S_IFMT & b->mode)) {
- result.clean = 0;
+ result->clean = 0;
if (S_ISREG(a->mode)) {
- result.mode = a->mode;
- oidcpy(&result.oid, &a->oid);
+ result->mode = a->mode;
+ oidcpy(&result->oid, &a->oid);
} else {
- result.mode = b->mode;
- oidcpy(&result.oid, &b->oid);
+ result->mode = b->mode;
+ oidcpy(&result->oid, &b->oid);
}
} else {
if (!oid_eq(&a->oid, &one->oid) && !oid_eq(&b->oid, &one->oid))
- result.merge = 1;
+ result->merge = 1;
/*
* Merge modes
*/
if (a->mode == b->mode || a->mode == one->mode)
- result.mode = b->mode;
+ result->mode = b->mode;
else {
- result.mode = a->mode;
+ result->mode = a->mode;
if (b->mode != one->mode) {
- result.clean = 0;
- result.merge = 1;
+ result->clean = 0;
+ result->merge = 1;
}
}
if (oid_eq(&a->oid, &b->oid) || oid_eq(&a->oid, &one->oid))
- oidcpy(&result.oid, &b->oid);
+ oidcpy(&result->oid, &b->oid);
else if (oid_eq(&b->oid, &one->oid))
- oidcpy(&result.oid, &a->oid);
+ oidcpy(&result->oid, &a->oid);
else if (S_ISREG(a->mode)) {
mmbuffer_t result_buf;
- int merge_status;
+ int ret = 0, merge_status;
merge_status = merge_3way(o, &result_buf, one, a, b,
branch1, branch2);
if ((merge_status < 0) || !result_buf.ptr)
- die(_("Failed to execute internal merge"));
+ ret = err(o, _("Failed to execute internal merge"));
- if (write_sha1_file(result_buf.ptr, result_buf.size,
- blob_type, result.oid.hash))
- die(_("Unable to add %s to database"),
- a->path);
+ if (!ret && write_sha1_file(result_buf.ptr, result_buf.size,
+ blob_type, result->oid.hash))
+ ret = err(o, _("Unable to add %s to database"),
+ a->path);
free(result_buf.ptr);
- result.clean = (merge_status == 0);
+ if (ret)
+ return ret;
+ result->clean = (merge_status == 0);
} else if (S_ISGITLINK(a->mode)) {
- result.clean = merge_submodule(result.oid.hash,
+ result->clean = merge_submodule(result->oid.hash,
one->path,
one->oid.hash,
a->oid.hash,
b->oid.hash,
!o->call_depth);
} else if (S_ISLNK(a->mode)) {
- oidcpy(&result.oid, &a->oid);
+ oidcpy(&result->oid, &a->oid);
if (!oid_eq(&a->oid, &b->oid))
- result.clean = 0;
- } else {
- die(_("unsupported object type in the tree"));
- }
+ result->clean = 0;
+ } else
+ die("BUG: unsupported object type in the tree");
}
- return result;
+ return 0;
}
-static struct merge_file_info
-merge_file_special_markers(struct merge_options *o,
+static int merge_file_special_markers(struct merge_options *o,
const struct diff_filespec *one,
const struct diff_filespec *a,
const struct diff_filespec *b,
const char *branch1,
const char *filename1,
const char *branch2,
- const char *filename2)
+ const char *filename2,
+ struct merge_file_info *mfi)
{
char *side1 = NULL;
char *side2 = NULL;
- struct merge_file_info mfi;
+ int ret;
if (filename1)
side1 = xstrfmt("%s:%s", branch1, filename1);
if (filename2)
side2 = xstrfmt("%s:%s", branch2, filename2);
- mfi = merge_file_1(o, one, a, b,
- side1 ? side1 : branch1, side2 ? side2 : branch2);
+ ret = merge_file_1(o, one, a, b,
+ side1 ? side1 : branch1,
+ side2 ? side2 : branch2, mfi);
free(side1);
free(side2);
- return mfi;
+ return ret;
}
-static struct merge_file_info merge_file_one(struct merge_options *o,
+static int merge_file_one(struct merge_options *o,
const char *path,
const struct object_id *o_oid, int o_mode,
const struct object_id *a_oid, int a_mode,
const struct object_id *b_oid, int b_mode,
const char *branch1,
- const char *branch2)
+ const char *branch2,
+ struct merge_file_info *mfi)
{
struct diff_filespec one, a, b;
a.mode = a_mode;
oidcpy(&b.oid, b_oid);
b.mode = b_mode;
- return merge_file_1(o, &one, &a, &b, branch1, branch2);
+ return merge_file_1(o, &one, &a, &b, branch1, branch2, mfi);
}
-static void handle_change_delete(struct merge_options *o,
+static int handle_change_delete(struct merge_options *o,
const char *path,
const struct object_id *o_oid, int o_mode,
const struct object_id *a_oid, int a_mode,
const char *change, const char *change_past)
{
char *renamed = NULL;
+ int ret = 0;
if (dir_in_way(path, !o->call_depth)) {
renamed = unique_path(o, path, a_oid ? o->branch1 : o->branch2);
}
* correct; since there is no true "middle point" between
* them, simply reuse the base version for virtual merge base.
*/
- remove_file_from_cache(path);
- update_file(o, 0, o_oid, o_mode, renamed ? renamed : path);
+ ret = remove_file_from_cache(path);
+ if (!ret)
+ ret = update_file(o, 0, o_oid, o_mode,
+ renamed ? renamed : path);
} else if (!a_oid) {
if (!renamed) {
output(o, 1, _("CONFLICT (%s/delete): %s deleted in %s "
"and %s in %s. Version %s of %s left in tree."),
change, path, o->branch1, change_past,
o->branch2, o->branch2, path);
- update_file(o, 0, b_oid, b_mode, path);
+ ret = update_file(o, 0, b_oid, b_mode, path);
} else {
output(o, 1, _("CONFLICT (%s/delete): %s deleted in %s "
"and %s in %s. Version %s of %s left in tree at %s."),
change, path, o->branch1, change_past,
o->branch2, o->branch2, path, renamed);
- update_file(o, 0, b_oid, b_mode, renamed);
+ ret = update_file(o, 0, b_oid, b_mode, renamed);
}
} else {
if (!renamed) {
"and %s in %s. Version %s of %s left in tree at %s."),
change, path, o->branch2, change_past,
o->branch1, o->branch1, path, renamed);
- update_file(o, 0, a_oid, a_mode, renamed);
+ ret = update_file(o, 0, a_oid, a_mode, renamed);
}
/*
* No need to call update_file() on path when !renamed, since
*/
}
free(renamed);
+
+ return ret;
}
-static void conflict_rename_delete(struct merge_options *o,
+static int conflict_rename_delete(struct merge_options *o,
struct diff_filepair *pair,
const char *rename_branch,
const char *other_branch)
b_mode = dest->mode;
}
- handle_change_delete(o,
- o->call_depth ? orig->path : dest->path,
- &orig->oid, orig->mode,
- a_oid, a_mode,
- b_oid, b_mode,
- _("rename"), _("renamed"));
-
- if (o->call_depth) {
- remove_file_from_cache(dest->path);
- } else {
- update_stages(dest->path, NULL,
- rename_branch == o->branch1 ? dest : NULL,
- rename_branch == o->branch1 ? NULL : dest);
- }
+ if (handle_change_delete(o,
+ o->call_depth ? orig->path : dest->path,
+ &orig->oid, orig->mode,
+ a_oid, a_mode,
+ b_oid, b_mode,
+ _("rename"), _("renamed")))
+ return -1;
+ if (o->call_depth)
+ return remove_file_from_cache(dest->path);
+ else
+ return update_stages(o, dest->path, NULL,
+ rename_branch == o->branch1 ? dest : NULL,
+ rename_branch == o->branch1 ? NULL : dest);
}
static struct diff_filespec *filespec_from_entry(struct diff_filespec *target,
return target;
}
-static void handle_file(struct merge_options *o,
+static int handle_file(struct merge_options *o,
struct diff_filespec *rename,
int stage,
struct rename_conflict_info *ci)
const char *cur_branch, *other_branch;
struct diff_filespec other;
struct diff_filespec *add;
+ int ret;
if (stage == 2) {
dst_entry = ci->dst_entry1;
add = filespec_from_entry(&other, dst_entry, stage ^ 1);
if (add) {
char *add_name = unique_path(o, rename->path, other_branch);
- update_file(o, 0, &add->oid, add->mode, add_name);
+ if (update_file(o, 0, &add->oid, add->mode, add_name))
+ return -1;
remove_file(o, 0, rename->path, 0);
dst_name = unique_path(o, rename->path, cur_branch);
rename->path, other_branch, dst_name);
}
}
- update_file(o, 0, &rename->oid, rename->mode, dst_name);
- if (stage == 2)
- update_stages(rename->path, NULL, rename, add);
+ if ((ret = update_file(o, 0, &rename->oid, rename->mode, dst_name)))
+ ; /* fall through, do allow dst_name to be released */
+ else if (stage == 2)
+ ret = update_stages(o, rename->path, NULL, rename, add);
else
- update_stages(rename->path, NULL, add, rename);
+ ret = update_stages(o, rename->path, NULL, add, rename);
if (dst_name != rename->path)
free(dst_name);
+
+ return ret;
}
-static void conflict_rename_rename_1to2(struct merge_options *o,
+static int conflict_rename_rename_1to2(struct merge_options *o,
struct rename_conflict_info *ci)
{
/* One file was renamed in both branches, but to different names. */
struct merge_file_info mfi;
struct diff_filespec other;
struct diff_filespec *add;
- mfi = merge_file_one(o, one->path,
+ if (merge_file_one(o, one->path,
&one->oid, one->mode,
&a->oid, a->mode,
&b->oid, b->mode,
- ci->branch1, ci->branch2);
+ ci->branch1, ci->branch2, &mfi))
+ return -1;
+
/*
* FIXME: For rename/add-source conflicts (if we could detect
* such), this is wrong. We should instead find a unique
* pathname and then either rename the add-source file to that
* unique path, or use that unique path instead of src here.
*/
- update_file(o, 0, &mfi.oid, mfi.mode, one->path);
+ if (update_file(o, 0, &mfi.oid, mfi.mode, one->path))
+ return -1;
/*
* Above, we put the merged content at the merge-base's
* resolving the conflict at that path in its favor.
*/
add = filespec_from_entry(&other, ci->dst_entry1, 2 ^ 1);
- if (add)
- update_file(o, 0, &add->oid, add->mode, a->path);
+ if (add) {
+ if (update_file(o, 0, &add->oid, add->mode, a->path))
+ return -1;
+ }
else
remove_file_from_cache(a->path);
add = filespec_from_entry(&other, ci->dst_entry2, 3 ^ 1);
- if (add)
- update_file(o, 0, &add->oid, add->mode, b->path);
+ if (add) {
+ if (update_file(o, 0, &add->oid, add->mode, b->path))
+ return -1;
+ }
else
remove_file_from_cache(b->path);
- } else {
- handle_file(o, a, 2, ci);
- handle_file(o, b, 3, ci);
- }
+ } else if (handle_file(o, a, 2, ci) || handle_file(o, b, 3, ci))
+ return -1;
+
+ return 0;
}
-static void conflict_rename_rename_2to1(struct merge_options *o,
+static int conflict_rename_rename_2to1(struct merge_options *o,
struct rename_conflict_info *ci)
{
/* Two files, a & b, were renamed to the same thing, c. */
char *path = c1->path; /* == c2->path */
struct merge_file_info mfi_c1;
struct merge_file_info mfi_c2;
+ int ret;
output(o, 1, _("CONFLICT (rename/rename): "
"Rename %s->%s in %s. "
remove_file(o, 1, a->path, o->call_depth || would_lose_untracked(a->path));
remove_file(o, 1, b->path, o->call_depth || would_lose_untracked(b->path));
- mfi_c1 = merge_file_special_markers(o, a, c1, &ci->ren1_other,
- o->branch1, c1->path,
- o->branch2, ci->ren1_other.path);
- mfi_c2 = merge_file_special_markers(o, b, &ci->ren2_other, c2,
- o->branch1, ci->ren2_other.path,
- o->branch2, c2->path);
+ if (merge_file_special_markers(o, a, c1, &ci->ren1_other,
+ o->branch1, c1->path,
+ o->branch2, ci->ren1_other.path, &mfi_c1) ||
+ merge_file_special_markers(o, b, &ci->ren2_other, c2,
+ o->branch1, ci->ren2_other.path,
+ o->branch2, c2->path, &mfi_c2))
+ return -1;
if (o->call_depth) {
/*
* again later for the non-recursive merge.
*/
remove_file(o, 0, path, 0);
- update_file(o, 0, &mfi_c1.oid, mfi_c1.mode, a->path);
- update_file(o, 0, &mfi_c2.oid, mfi_c2.mode, b->path);
+ ret = update_file(o, 0, &mfi_c1.oid, mfi_c1.mode, a->path);
+ if (!ret)
+ ret = update_file(o, 0, &mfi_c2.oid, mfi_c2.mode,
+ b->path);
} else {
char *new_path1 = unique_path(o, path, ci->branch1);
char *new_path2 = unique_path(o, path, ci->branch2);
output(o, 1, _("Renaming %s to %s and %s to %s instead"),
a->path, new_path1, b->path, new_path2);
remove_file(o, 0, path, 0);
- update_file(o, 0, &mfi_c1.oid, mfi_c1.mode, new_path1);
- update_file(o, 0, &mfi_c2.oid, mfi_c2.mode, new_path2);
+ ret = update_file(o, 0, &mfi_c1.oid, mfi_c1.mode, new_path1);
+ if (!ret)
+ ret = update_file(o, 0, &mfi_c2.oid, mfi_c2.mode,
+ new_path2);
free(new_path2);
free(new_path1);
}
+
+ return ret;
}
static int process_renames(struct merge_options *o,
const char *ren2_dst = ren2->pair->two->path;
enum rename_type rename_type;
if (strcmp(ren1_src, ren2_src) != 0)
- die("ren1_src != ren2_src");
+ die("BUG: ren1_src != ren2_src");
ren2->dst_entry->processed = 1;
ren2->processed = 1;
if (strcmp(ren1_dst, ren2_dst) != 0) {
ren2 = lookup->util;
ren2_dst = ren2->pair->two->path;
if (strcmp(ren1_dst, ren2_dst) != 0)
- die("ren1_dst != ren2_dst");
+ die("BUG: ren1_dst != ren2_dst");
clean_merge = 0;
ren2->processed = 1;
* update_file_flags() instead of
* update_file().
*/
- update_file_flags(o,
- &ren1->pair->two->oid,
- ren1->pair->two->mode,
- ren1_dst,
- 1, /* update_cache */
- 0 /* update_wd */);
+ if (update_file_flags(o,
+ &ren1->pair->two->oid,
+ ren1->pair->two->mode,
+ ren1_dst,
+ 1, /* update_cache */
+ 0 /* update_wd */))
+ clean_merge = -1;
} else if (!oid_eq(&dst_other.oid, &null_oid)) {
clean_merge = 0;
try_merge = 1;
ren1_dst, branch2);
if (o->call_depth) {
struct merge_file_info mfi;
- mfi = merge_file_one(o, ren1_dst, &null_oid, 0,
- &ren1->pair->two->oid,
- ren1->pair->two->mode,
- &dst_other.oid,
- dst_other.mode,
- branch1, branch2);
+ if (merge_file_one(o, ren1_dst, &null_oid, 0,
+ &ren1->pair->two->oid,
+ ren1->pair->two->mode,
+ &dst_other.oid,
+ dst_other.mode,
+ branch1, branch2, &mfi)) {
+ clean_merge = -1;
+ goto cleanup_and_return;
+ }
output(o, 1, _("Adding merged %s"), ren1_dst);
- update_file(o, 0, &mfi.oid,
- mfi.mode, ren1_dst);
+ if (update_file(o, 0, &mfi.oid,
+ mfi.mode, ren1_dst))
+ clean_merge = -1;
try_merge = 0;
} else {
char *new_path = unique_path(o, ren1_dst, branch2);
output(o, 1, _("Adding as %s instead"), new_path);
- update_file(o, 0, &dst_other.oid,
- dst_other.mode, new_path);
+ if (update_file(o, 0, &dst_other.oid,
+ dst_other.mode, new_path))
+ clean_merge = -1;
free(new_path);
}
} else
try_merge = 1;
+ if (clean_merge < 0)
+ goto cleanup_and_return;
if (try_merge) {
struct diff_filespec *one, *a, *b;
src_other.path = (char *)ren1_src;
}
}
}
+cleanup_and_return:
string_list_clear(&a_by_dst, 0);
string_list_clear(&b_by_dst, 0);
return (is_null_oid(oid) || mode == 0) ? NULL: (struct object_id *)oid;
}
-static int read_oid_strbuf(const struct object_id *oid, struct strbuf *dst)
+static int read_oid_strbuf(struct merge_options *o,
+ const struct object_id *oid, struct strbuf *dst)
{
void *buf;
enum object_type type;
unsigned long size;
buf = read_sha1_file(oid->hash, &type, &size);
if (!buf)
- return error(_("cannot read object %s"), oid_to_hex(oid));
+ return err(o, _("cannot read object %s"), oid_to_hex(oid));
if (type != OBJ_BLOB) {
free(buf);
- return error(_("object %s is not a blob"), oid_to_hex(oid));
+ return err(o, _("object %s is not a blob"), oid_to_hex(oid));
}
strbuf_attach(dst, buf, size, size + 1);
return 0;
}
-static int blob_unchanged(const struct object_id *o_oid,
+static int blob_unchanged(struct merge_options *opt,
+ const struct object_id *o_oid,
unsigned o_mode,
const struct object_id *a_oid,
unsigned a_mode,
return 0;
assert(o_oid && a_oid);
- if (read_oid_strbuf(o_oid, &o) || read_oid_strbuf(a_oid, &a))
+ if (read_oid_strbuf(opt, o_oid, &o) || read_oid_strbuf(opt, a_oid, &a))
goto error_return;
/*
* Note: binary | is used so that both renormalizations are
return ret;
}
-static void handle_modify_delete(struct merge_options *o,
+static int handle_modify_delete(struct merge_options *o,
const char *path,
struct object_id *o_oid, int o_mode,
struct object_id *a_oid, int a_mode,
struct object_id *b_oid, int b_mode)
{
- handle_change_delete(o,
- path,
- o_oid, o_mode,
- a_oid, a_mode,
- b_oid, b_mode,
- _("modify"), _("modified"));
+ return handle_change_delete(o,
+ path,
+ o_oid, o_mode,
+ a_oid, a_mode,
+ b_oid, b_mode,
+ _("modify"), _("modified"));
}
static int merge_content(struct merge_options *o,
if (dir_in_way(path, !o->call_depth))
df_conflict_remains = 1;
}
- mfi = merge_file_special_markers(o, &one, &a, &b,
- o->branch1, path1,
- o->branch2, path2);
+ if (merge_file_special_markers(o, &one, &a, &b,
+ o->branch1, path1,
+ o->branch2, path2, &mfi))
+ return -1;
if (mfi.clean && !df_conflict_remains &&
oid_eq(&mfi.oid, a_oid) && mfi.mode == a_mode) {
*/
path_renamed_outside_HEAD = !path2 || !strcmp(path, path2);
if (!path_renamed_outside_HEAD) {
- add_cacheinfo(mfi.mode, &mfi.oid, path,
+ add_cacheinfo(o, mfi.mode, &mfi.oid, path,
0, (!o->call_depth), 0);
return mfi.clean;
}
output(o, 1, _("CONFLICT (%s): Merge conflict in %s"),
reason, path);
if (rename_conflict_info && !df_conflict_remains)
- update_stages(path, &one, &a, &b);
+ if (update_stages(o, path, &one, &a, &b))
+ return -1;
}
if (df_conflict_remains) {
if (o->call_depth) {
remove_file_from_cache(path);
} else {
- if (!mfi.clean)
- update_stages(path, &one, &a, &b);
- else {
+ if (!mfi.clean) {
+ if (update_stages(o, path, &one, &a, &b))
+ return -1;
+ } else {
int file_from_stage2 = was_tracked(path);
struct diff_filespec merged;
oidcpy(&merged.oid, &mfi.oid);
merged.mode = mfi.mode;
- update_stages(path, NULL,
- file_from_stage2 ? &merged : NULL,
- file_from_stage2 ? NULL : &merged);
+ if (update_stages(o, path, NULL,
+ file_from_stage2 ? &merged : NULL,
+ file_from_stage2 ? NULL : &merged))
+ return -1;
}
}
new_path = unique_path(o, path, rename_conflict_info->branch1);
output(o, 1, _("Adding as %s instead"), new_path);
- update_file(o, 0, &mfi.oid, mfi.mode, new_path);
+ if (update_file(o, 0, &mfi.oid, mfi.mode, new_path)) {
+ free(new_path);
+ return -1;
+ }
free(new_path);
mfi.clean = 0;
- } else {
- update_file(o, mfi.clean, &mfi.oid, mfi.mode, path);
- }
+ } else if (update_file(o, mfi.clean, &mfi.oid, mfi.mode, path))
+ return -1;
return mfi.clean;
-
}
/* Per entry merge function */
break;
case RENAME_DELETE:
clean_merge = 0;
- conflict_rename_delete(o, conflict_info->pair1,
- conflict_info->branch1,
- conflict_info->branch2);
+ if (conflict_rename_delete(o,
+ conflict_info->pair1,
+ conflict_info->branch1,
+ conflict_info->branch2))
+ clean_merge = -1;
break;
case RENAME_ONE_FILE_TO_TWO:
clean_merge = 0;
- conflict_rename_rename_1to2(o, conflict_info);
+ if (conflict_rename_rename_1to2(o, conflict_info))
+ clean_merge = -1;
break;
case RENAME_TWO_FILES_TO_ONE:
clean_merge = 0;
- conflict_rename_rename_2to1(o, conflict_info);
+ if (conflict_rename_rename_2to1(o, conflict_info))
+ clean_merge = -1;
break;
default:
entry->processed = 0;
} else if (o_oid && (!a_oid || !b_oid)) {
/* Case A: Deleted in one */
if ((!a_oid && !b_oid) ||
- (!b_oid && blob_unchanged(o_oid, o_mode, a_oid, a_mode, normalize, path)) ||
- (!a_oid && blob_unchanged(o_oid, o_mode, b_oid, b_mode, normalize, path))) {
+ (!b_oid && blob_unchanged(o, o_oid, o_mode, a_oid, a_mode, normalize, path)) ||
+ (!a_oid && blob_unchanged(o, o_oid, o_mode, b_oid, b_mode, normalize, path))) {
/* Deleted in both or deleted in one and
* unchanged in the other */
if (a_oid)
} else {
/* Modify/delete; deleted side may have put a directory in the way */
clean_merge = 0;
- handle_modify_delete(o, path, o_oid, o_mode,
- a_oid, a_mode, b_oid, b_mode);
+ if (handle_modify_delete(o, path, o_oid, o_mode,
+ a_oid, a_mode, b_oid, b_mode))
+ clean_merge = -1;
}
} else if ((!o_oid && a_oid && !b_oid) ||
(!o_oid && !a_oid && b_oid)) {
output(o, 1, _("CONFLICT (%s): There is a directory with name %s in %s. "
"Adding %s as %s"),
conf, path, other_branch, path, new_path);
- update_file(o, 0, oid, mode, new_path);
- if (o->call_depth)
+ if (update_file(o, 0, oid, mode, new_path))
+ clean_merge = -1;
+ else if (o->call_depth)
remove_file_from_cache(path);
free(new_path);
} else {
output(o, 2, _("Adding %s"), path);
/* do not overwrite file if already present */
- update_file_flags(o, oid, mode, path, 1, !a_oid);
+ if (update_file_flags(o, oid, mode, path, 1, !a_oid))
+ clean_merge = -1;
}
} else if (a_oid && b_oid) {
/* Case C: Added in both (check for same permissions) and */
*/
remove_file(o, 1, path, !a_mode);
} else
- die(_("Fatal merge failure, shouldn't happen."));
+ die("BUG: fatal merge failure, shouldn't happen.");
return clean_merge;
}
if (code != 0) {
if (show(o, 4) || o->call_depth)
- die(_("merging of trees %s and %s failed"),
+ err(o, _("merging of trees %s and %s failed"),
oid_to_hex(&head->object.oid),
oid_to_hex(&merge->object.oid));
- else
- exit(128);
+ return -1;
}
if (unmerged_cache()) {
re_head = get_renames(o, head, common, head, merge, entries);
re_merge = get_renames(o, merge, common, head, merge, entries);
clean = process_renames(o, re_head, re_merge);
+ if (clean < 0)
+ return clean;
for (i = entries->nr-1; 0 <= i; i--) {
const char *path = entries->items[i].string;
struct stage_data *e = entries->items[i].util;
- if (!e->processed
- && !process_entry(o, path, e))
- clean = 0;
+ if (!e->processed) {
+ int ret = process_entry(o, path, e);
+ if (!ret)
+ clean = 0;
+ else if (ret < 0)
+ return ret;
+ }
}
for (i = 0; i < entries->nr; i++) {
struct stage_data *e = entries->items[i].util;
if (!e->processed)
- die(_("Unprocessed path??? %s"),
+ die("BUG: unprocessed path??? %s",
entries->items[i].string);
}
else
clean = 1;
- if (o->call_depth)
- *result = write_tree_from_memory(o);
+ if (o->call_depth && !(*result = write_tree_from_memory(o)))
+ return -1;
return clean;
}
/*
* When the merge fails, the result contains files
* with conflict markers. The cleanness flag is
- * ignored, it was never actually used, as result of
- * merge_trees has always overwritten it: the committed
- * "conflicts" were already resolved.
+ * ignored (unless indicating an error), it was never
+ * actually used, as result of merge_trees has always
+ * overwritten it: the committed "conflicts" were
+ * already resolved.
*/
discard_cache();
saved_b1 = o->branch1;
saved_b2 = o->branch2;
o->branch1 = "Temporary merge branch 1";
o->branch2 = "Temporary merge branch 2";
- merge_recursive(o, merged_common_ancestors, iter->item,
- NULL, &merged_common_ancestors);
+ if (merge_recursive(o, merged_common_ancestors, iter->item,
+ NULL, &merged_common_ancestors) < 0)
+ return -1;
o->branch1 = saved_b1;
o->branch2 = saved_b2;
o->call_depth--;
if (!merged_common_ancestors)
- die(_("merge returned no commit"));
+ return err(o, _("merge returned no commit"));
}
discard_cache();
o->ancestor = "merged common ancestors";
clean = merge_trees(o, h1->tree, h2->tree, merged_common_ancestors->tree,
&mrtree);
+ if (clean < 0) {
+ flush_output(o);
+ return clean;
+ }
if (o->call_depth) {
*result = make_virtual_commit(mrtree, "merged tree");
commit_list_insert(h2, &(*result)->parents->next);
}
flush_output(o);
+ if (!o->call_depth && o->buffer_output < 2)
+ strbuf_release(&o->obuf);
if (show(o, 2))
diff_warn_rename_limit("merge.renamelimit",
o->needed_rename_limit, 0);
for (i = 0; i < num_base_list; ++i) {
struct commit *base;
if (!(base = get_ref(base_list[i], oid_to_hex(base_list[i]))))
- return error(_("Could not parse object '%s'"),
+ return err(o, _("Could not parse object '%s'"),
oid_to_hex(base_list[i]));
commit_list_insert(base, &ca);
}
hold_locked_index(lock, 1);
clean = merge_recursive(o, head_commit, next_commit, ca,
result);
+ if (clean < 0)
+ return clean;
+
if (active_cache_changed &&
write_locked_index(&the_index, lock, COMMIT_LOCK))
- return error(_("Unable to write index."));
+ return err(o, _("Unable to write index."));
return clean ? 0 : 1;
}
MERGE_RECURSIVE_THEIRS
} recursive_variant;
const char *subtree_shift;
- unsigned buffer_output : 1;
+ unsigned buffer_output; /* 1: output at end, 2: keep buffered */
unsigned renormalize : 1;
long xdl_opts;
int verbosity;
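With buffer_output set to 2, merge-recursive keeps every message -- including the "error: ..." lines produced by the new err() helper -- in o->obuf and leaves the buffer alive for the caller to print and release (see the merge_recursive() and sequencer hunks elsewhere in this series). The following is only a rough sketch of such a caller, assuming git's internal headers; the helper name is made up and is not part of this series.

#include "cache.h"
#include "merge-recursive.h"

/* Illustration only: run a tree merge and print its messages afterwards. */
static int merge_and_show(struct tree *head, struct tree *merge,
			  struct tree *common)
{
	struct merge_options o;
	struct tree *result;
	int clean;

	init_merge_options(&o);
	o.buffer_output = 2;	/* keep everything, even errors, in o.obuf */

	clean = merge_trees(&o, head, merge, common, &result);

	if (o.obuf.len)
		fputs(o.obuf.buf, stdout);	/* show whenever it suits us */
	strbuf_release(&o.obuf);

	return clean;	/* negative indicates an internal error */
}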
--- /dev/null
+#include "cache.h"
+#include "mru.h"
+
+void mru_append(struct mru *mru, void *item)
+{
+ struct mru_entry *cur = xmalloc(sizeof(*cur));
+ cur->item = item;
+ cur->prev = mru->tail;
+ cur->next = NULL;
+
+ if (mru->tail)
+ mru->tail->next = cur;
+ else
+ mru->head = cur;
+ mru->tail = cur;
+}
+
+void mru_mark(struct mru *mru, struct mru_entry *entry)
+{
+ /* If we're already at the front of the list, nothing to do */
+ if (mru->head == entry)
+ return;
+
+ /* Otherwise, remove us from our current slot... */
+ if (entry->prev)
+ entry->prev->next = entry->next;
+ if (entry->next)
+ entry->next->prev = entry->prev;
+ else
+ mru->tail = entry->prev;
+
+ /* And insert us at the beginning. */
+ entry->prev = NULL;
+ entry->next = mru->head;
+ if (mru->head)
+ mru->head->prev = entry;
+ mru->head = entry;
+}
+
+void mru_clear(struct mru *mru)
+{
+ struct mru_entry *p = mru->head;
+
+ while (p) {
+ struct mru_entry *to_free = p;
+ p = p->next;
+ free(to_free);
+ }
+ mru->head = mru->tail = NULL;
+}
--- /dev/null
+#ifndef MRU_H
+#define MRU_H
+
+/**
+ * A simple most-recently-used cache, backed by a doubly-linked list.
+ *
+ * Usage is roughly:
+ *
+ * // Create a list. Zero-initialization is required.
+ * static struct mru cache;
+ * mru_append(&cache, item);
+ * ...
+ *
+ * // Iterate in MRU order.
+ * struct mru_entry *p;
+ * for (p = cache.head; p; p = p->next) {
+ * if (matches(p->item))
+ * break;
+ * }
+ *
+ * // Mark an item as used, moving it to the front of the list.
+ * mru_mark(&cache, p);
+ *
+ * // Reset the list to empty, cleaning up all resources.
+ * mru_clear(&cache);
+ *
+ * Note that you SHOULD NOT call mru_mark() and then continue traversing the
+ * list; it reorders the marked item to the front of the list, and therefore
+ * you will begin traversing the whole list again.
+ */
+
+struct mru_entry {
+ void *item;
+ struct mru_entry *prev, *next;
+};
+
+struct mru {
+ struct mru_entry *head, *tail;
+};
+
+void mru_append(struct mru *mru, void *item);
+void mru_mark(struct mru *mru, struct mru_entry *entry);
+void mru_clear(struct mru *mru);
+
+#endif /* MRU_H */
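The usage outlined in the comment above can be exercised with a short stand-alone program; this is an illustrative sketch only, assuming it is built inside the git tree (mru.c needs cache.h for xmalloc), and is not part of this series.

#include <stdio.h>
#include <string.h>
#include "mru.h"

int main(void)
{
	struct mru cache = { NULL, NULL };	/* zero-initialized, as required */
	struct mru_entry *p;

	/* build the list; entries start out in insertion order */
	mru_append(&cache, "alpha");
	mru_append(&cache, "beta");
	mru_append(&cache, "gamma");

	/* scan in MRU order for "beta"... */
	for (p = cache.head; p; p = p->next)
		if (!strcmp(p->item, "beta"))
			break;

	/* ...and promote it, so the next scan sees it first */
	if (p)
		mru_mark(&cache, p);

	for (p = cache.head; p; p = p->next)
		printf("%s\n", (const char *)p->item);	/* beta, alpha, gamma */

	mru_clear(&cache);
	return 0;
}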
die_errno("unable to make temporary index file readable");
strbuf_addf(name_buffer, "%s.pack", sha1_to_hex(sha1));
- free_pack_by_name(name_buffer->buf);
if (rename(pack_tmp_name, name_buffer->buf))
die_errno("unable to rename temporary pack file");
return pager;
}
+static void setup_pager_env(struct argv_array *env)
+{
+ const char **argv;
+ int i;
+ char *pager_env = xstrdup(PAGER_ENV);
+ int n = split_cmdline(pager_env, &argv);
+
+ if (n < 0)
+ die("malformed build-time PAGER_ENV: %s",
+ split_cmdline_strerror(n));
+
+ for (i = 0; i < n; i++) {
+ char *cp = strchr(argv[i], '=');
+
+ if (!cp)
+ die("malformed build-time PAGER_ENV");
+
+ *cp = '\0';
+ if (!getenv(argv[i])) {
+ *cp = '=';
+ argv_array_push(env, argv[i]);
+ }
+ }
+ free(pager_env);
+ free(argv);
+}
+
void prepare_pager_args(struct child_process *pager_process, const char *pager)
{
argv_array_push(&pager_process->args, pager);
pager_process->use_shell = 1;
- if (!getenv("LESS"))
- argv_array_push(&pager_process->env_array, "LESS=FRX");
- if (!getenv("LV"))
- argv_array_push(&pager_process->env_array, "LV=-c");
+ setup_pager_env(&pager_process->env_array);
}
void setup_pager(void)
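The hard-coded LESS/LV defaults removed above now come from a single build-time PAGER_ENV string, and only entries whose variable is not already set get exported to the pager. The stand-alone sketch below illustrates the same parse-and-filter idea with plain libc instead of split_cmdline()/argv_array; the function name is made up, and "LESS=FRX LV=-c" is assumed to be the build-time default (it matches the values removed above).

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/*
 * Print each VAR=VALUE word of "defaults" that is not already present
 * in the environment -- the same filtering setup_pager_env() applies
 * before spawning the pager.
 */
static void show_pager_env_defaults(const char *defaults)
{
	char *copy = strdup(defaults);
	char *word;

	for (word = strtok(copy, " \t"); word; word = strtok(NULL, " \t")) {
		char *eq = strchr(word, '=');

		if (!eq) {
			fprintf(stderr, "malformed entry: %s\n", word);
			continue;
		}
		*eq = '\0';		/* temporarily cut VAR off from VALUE */
		if (!getenv(word)) {
			*eq = '=';	/* restore and report the assignment */
			printf("would export: %s\n", word);
		}
	}
	free(copy);
}

int main(void)
{
	show_pager_env_defaults("LESS=FRX LV=-c");
	return 0;
}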
strbuf_addstr(sb, diff_get_color(c->auto_color, DIFF_RESET));
return 1;
}
- strbuf_addstr(sb, find_unique_abbrev(commit->object.oid.hash,
- c->pretty_ctx->abbrev));
+ strbuf_add_unique_abbrev(sb, commit->object.oid.hash,
+ c->pretty_ctx->abbrev);
strbuf_addstr(sb, diff_get_color(c->auto_color, DIFF_RESET));
c->abbrev_commit_hash.len = sb->len - c->abbrev_commit_hash.off;
return 1;
case 't': /* abbreviated tree hash */
if (add_again(sb, &c->abbrev_tree_hash))
return 1;
- strbuf_addstr(sb, find_unique_abbrev(commit->tree->object.oid.hash,
- c->pretty_ctx->abbrev));
+ strbuf_add_unique_abbrev(sb, commit->tree->object.oid.hash,
+ c->pretty_ctx->abbrev);
c->abbrev_tree_hash.len = sb->len - c->abbrev_tree_hash.off;
return 1;
case 'P': /* parent hashes */
for (p = commit->parents; p; p = p->next) {
if (p != commit->parents)
strbuf_addch(sb, ' ');
- strbuf_addstr(sb, find_unique_abbrev(
- p->item->object.oid.hash,
- c->pretty_ctx->abbrev));
+ strbuf_add_unique_abbrev(sb, p->item->object.oid.hash,
+ c->pretty_ctx->abbrev);
}
c->abbrev_parent_hashes.len = sb->len -
c->abbrev_parent_hashes.off;
/* -2 for strlen("%.*s") - strlen("%s"); +1 for NUL */
total_len += strlen(ref_rev_parse_rules[nr_rules]) - 2 + 1;
- scanf_fmts = xmalloc(st_add(st_mult(nr_rules, sizeof(char *)), total_len));
+ scanf_fmts = xmalloc(st_add(st_mult(sizeof(char *), nr_rules), total_len));
offset = 0;
for (i = 0; i < nr_rules; i++) {
* branch.
*/
if (ref->expect_old_sha1) {
- if (ref->expect_old_no_trackback ||
- oidcmp(&ref->old_oid, &ref->old_oid_expect))
+ if (oidcmp(&ref->old_oid, &ref->old_oid_expect))
reject_reason = REF_STATUS_REJECT_STALE;
else
/* If the ref isn't stale then force the update. */
entry = add_cas_entry(cas, arg, colon - arg);
if (!*colon)
entry->use_tracking = 1;
+ else if (!colon[1])
+ hashclr(entry->expect);
else if (get_sha1(colon + 1, entry->expect))
return error("cannot parse expected object name '%s'", colon + 1);
return 0;
if (!entry->use_tracking)
hashcpy(ref->old_oid_expect.hash, cas->entry[i].expect);
else if (remote_tracking(remote, ref->name, &ref->old_oid_expect))
- ref->expect_old_no_trackback = 1;
+ oidclr(&ref->old_oid_expect);
return;
}
ref->expect_old_sha1 = 1;
if (remote_tracking(remote, ref->name, &ref->old_oid_expect))
- ref->expect_old_no_trackback = 1;
+ oidclr(&ref->old_oid_expect);
}
void apply_push_cas(struct push_cas_option *cas,
force:1,
forced_update:1,
expect_old_sha1:1,
- expect_old_no_trackback:1,
deletion:1,
matched:1;
struct strbuf cert = STRBUF_INIT;
int update_seen = 0;
- strbuf_addf(&cert, "certificate version 0.1\n");
+ strbuf_addstr(&cert, "certificate version 0.1\n");
strbuf_addf(&cert, "pusher %s ", signing_key);
datestamp(&cert);
strbuf_addch(&cert, '\n');
{
struct strbuf seq_dir = STRBUF_INIT;
- strbuf_addf(&seq_dir, "%s", git_path(SEQ_DIR));
+ strbuf_addstr(&seq_dir, git_path(SEQ_DIR));
remove_dir_recursively(&seq_dir, 0);
strbuf_release(&seq_dir);
}
clean = merge_trees(&o,
head_tree,
next_tree, base_tree, &result);
+ strbuf_release(&o.obuf);
+ if (clean < 0)
+ return clean;
if (active_cache_changed &&
write_locked_index(&the_index, &index_lock, COMMIT_LOCK))
if (!opts->strategy || !strcmp(opts->strategy, "recursive") || opts->action == REPLAY_REVERT) {
res = do_recursive_merge(base, next, base_label, next_label,
head, &msgbuf, opts);
+ if (res < 0)
+ return res;
write_message(&msgbuf, git_path_merge_msg());
} else {
struct commit_list *common = NULL;
#include "bulk-checkin.h"
#include "streaming.h"
#include "dir.h"
+#include "mru.h"
#ifndef O_NOATIME
#if defined(__linux__) && (defined(__i386__) || defined(__PPC__))
0
};
-/*
- * A pointer to the last packed_git in which an object was found.
- * When an object is sought, we look in this packfile first, because
- * objects that are looked up at similar times are often in the same
- * packfile as one another.
- */
-static struct packed_git *last_found_pack;
-
static struct cached_object *find_cached_object(const unsigned char *sha1)
{
int i;
static size_t pack_mapped;
struct packed_git *packed_git;
+static struct mru packed_git_mru_storage;
+struct mru *packed_git_mru = &packed_git_mru_storage;
+
void pack_report(void)
{
fprintf(stderr,
for (p = packed_git; p; p = p->next)
if (p->do_not_close)
- die("BUG! Want to close pack marked 'do-not-close'");
+ die("BUG: want to close pack marked 'do-not-close'");
else
close_pack(p);
}
}
}
-/*
- * This is used by git-repack in case a newly created pack happens to
- * contain the same set of objects as an existing one. In that case
- * the resulting file might be different even if its name would be the
- * same. It is best to close any reference to the old pack before it is
- * replaced on disk. Of course no index pointers or windows for given pack
- * must subsist at this point. If ever objects from this pack are requested
- * again, the new version of the pack will be reinitialized through
- * reprepare_packed_git().
- */
-void free_pack_by_name(const char *pack_name)
-{
- struct packed_git *p, **pp = &packed_git;
-
- while (*pp) {
- p = *pp;
- if (strcmp(pack_name, p->pack_name) == 0) {
- clear_delta_base_cache();
- close_pack(p);
- free(p->bad_object_sha1);
- *pp = p->next;
- if (last_found_pack == p)
- last_found_pack = NULL;
- free(p);
- return;
- }
- pp = &p->next;
- }
-}
-
static unsigned int get_max_fd_limit(void)
{
#ifdef RLIMIT_NOFILE
free(ary);
}
+static void prepare_packed_git_mru(void)
+{
+ struct packed_git *p;
+
+ mru_clear(packed_git_mru);
+ for (p = packed_git; p; p = p->next)
+ mru_append(packed_git_mru, p);
+}
+
static int prepare_packed_git_run_once = 0;
void prepare_packed_git(void)
{
alt->name[-1] = '/';
}
rearrange_packed_git();
+ prepare_packed_git_mru();
prepare_packed_git_run_once = 1;
}
strbuf_add(oi->typename, type_buf, type_len);
/*
* Set type to 0 if its an unknown object and
- * we're obtaining the type using '--allow-unkown-type'
+ * we're obtaining the type using '--allow-unknown-type'
* option.
*/
if ((flags & LOOKUP_UNKNOWN_OBJECT) && (type < 0))
case OBJ_OFS_DELTA:
case OBJ_REF_DELTA:
if (data)
- die("BUG in unpack_entry: left loop at a valid delta");
+ die("BUG: unpack_entry: left loop at a valid delta");
break;
case OBJ_COMMIT:
case OBJ_TREE:
*/
static int find_pack_entry(const unsigned char *sha1, struct pack_entry *e)
{
- struct packed_git *p;
+ struct mru_entry *p;
prepare_packed_git();
if (!packed_git)
return 0;
- if (last_found_pack && fill_pack_entry(sha1, e, last_found_pack))
- return 1;
-
- for (p = packed_git; p; p = p->next) {
- if (p == last_found_pack)
- continue; /* we already checked this one */
-
- if (fill_pack_entry(sha1, e, p)) {
- last_found_pack = p;
+ for (p = packed_git_mru->head; p; p = p->next) {
+ if (fill_pack_entry(sha1, e, p->item)) {
+ mru_mark(packed_git_mru, p);
return 1;
}
}
unsigned int i, nr;
struct commit_list *head = NULL;
int bitmap_nr = (info->nr_bits + 31) / 32;
- size_t bitmap_size = st_mult(bitmap_nr, sizeof(uint32_t));
+ size_t bitmap_size = st_mult(sizeof(uint32_t), bitmap_nr);
uint32_t *tmp = xmalloc(bitmap_size); /* to be freed before return */
uint32_t *bitmap = paint_alloc(info);
struct commit *c = lookup_commit_reference_gently(sha1, 1);
{
free((void *) entry->config->path);
free((void *) entry->config->name);
+ free((void *) entry->config->branch);
free((void *) entry->config->update_strategy.command);
free(entry->config);
}
submodule->update_strategy.command = NULL;
submodule->fetch_recurse = RECURSE_SUBMODULES_NONE;
submodule->ignore = NULL;
+ submodule->branch = NULL;
submodule->recommend_shallow = -1;
hashcpy(submodule->gitmodules_sha1, gitmodules_sha1);
if (!me->overwrite && submodule->recommend_shallow != -1)
warn_multiple_config(me->commit_sha1, submodule->name,
"shallow");
- else {
+ else
submodule->recommend_shallow =
git_config_bool(var, value);
+ } else if (!strcmp(item.buf, "branch")) {
+ if (!me->overwrite && submodule->branch)
+ warn_multiple_config(me->commit_sha1, submodule->name,
+ "branch");
+ else {
+ free((void *)submodule->branch);
+ submodule->branch = xstrdup(value);
}
}
const char *url;
int fetch_recurse;
const char *ignore;
+ const char *branch;
struct submodule_update_strategy update_strategy;
/* the sha1 blob id of the responsible .gitmodules file */
unsigned char gitmodules_sha1[20];
$ sh ./t9200-git-cvsexport-commit.sh --run='1-4 !3'
will run tests 1, 2, and 4. Items that come later have higher
-precendence. It means that this:
+precedence. It means that this:
$ sh ./t9200-git-cvsexport-commit.sh --run='!3 1-4'
return 0;
argv_array_pushv(&cp->args, d->argv);
- strbuf_addf(err, "preloaded output of a child\n");
+ strbuf_addstr(err, "preloaded output of a child\n");
number_callbacks++;
return 1;
}
void *cb,
void **task_cb)
{
- strbuf_addf(err, "no further jobs available\n");
+ strbuf_addstr(err, "no further jobs available\n");
return 0;
}
void *pp_cb,
void *pp_task_cb)
{
- strbuf_addf(err, "asking for a quick stop\n");
+ strbuf_addstr(err, "asking for a quick stop\n");
return 1;
}
--- /dev/null
+#!/bin/sh
+
+test_description='performance with large numbers of packs'
+. ./perf-lib.sh
+
+test_perf_large_repo
+
+# A real many-pack situation would probably come from having a lot of pushes
+# over time. We don't know how big each push would be, but we can fake it by
+# just walking the first-parent chain and having every 5 commits be their own
+# "push". This isn't _entirely_ accurate, as real pushes would have some
+# duplicate objects due to thin-pack fixing, but it's a reasonable
+# approximation.
+#
+# And then all of the rest of the objects can go in a single packfile that
+# represents the state before any of those pushes (actually, we'll generate
+# that first because in such a setup it would be the oldest pack, and we sort
+# the packs by reverse mtime inside git).
+repack_into_n () {
+ rm -rf staging &&
+ mkdir staging &&
+
+ git rev-list --first-parent HEAD |
+ sed -n '1~5p' |
+ head -n "$1" |
+ perl -e 'print reverse <>' \
+ >pushes
+
+ # create base packfile
+ head -n 1 pushes |
+ git pack-objects --delta-base-offset --revs staging/pack
+
+ # and then incrementals between each pair of commits
+ last= &&
+ while read rev
+ do
+ if test -n "$last"; then
+ {
+ echo "$rev" &&
+ echo "^$last"
+ } |
+ git pack-objects --delta-base-offset --revs \
+ staging/pack || return 1
+ fi
+ last=$rev
+ done <pushes &&
+
+ # and install the whole thing
+ rm -f .git/objects/pack/* &&
+ mv staging/* .git/objects/pack/
+}
+
+# Pretend we just have a single branch and no reflogs, and that everything is
+# in objects/pack; that makes our fake pack-building via repack_into_n()
+# much simpler.
+test_expect_success 'simplify reachability' '
+ tip=$(git rev-parse --verify HEAD) &&
+ git for-each-ref --format="option no-deref%0adelete %(refname)" |
+ git update-ref --stdin &&
+ rm -rf .git/logs &&
+ git update-ref refs/heads/master $tip &&
+ git symbolic-ref HEAD refs/heads/master &&
+ git repack -ad
+'
+
+for nr_packs in 1 50 1000
+do
+ test_expect_success "create $nr_packs-pack scenario" '
+ repack_into_n $nr_packs
+ '
+
+ test_perf "rev-list ($nr_packs)" '
+ git rev-list --objects --all >/dev/null
+ '
+
+ # This simulates the interesting part of the repack, which is the
+ # actual pack generation, without smudging the on-disk setup
+ # between trials.
+ test_perf "repack ($nr_packs)" '
+ git pack-objects --keep-true-parents \
+ --honor-pack-keep --non-empty --all \
+ --reflog --indexed-objects --delta-base-offset \
+ --stdout </dev/null >/dev/null
+ '
+done
+
+test_done
| git cat-file --batch)"
'
-test_expect_success "--batch-check for an emtpy line" '
+test_expect_success "--batch-check for an empty line" '
test " missing" = "$(echo | git cat-file --batch-check)"
'
path3/1.txt - a file in a directory
path3/2.txt - a file in a directory
-Test the handling of mulitple directories which have matching file
+Test the handling of multiple directories which have matching file
entries. Also test odd filename and missing entries handling.
'
. ./test-lib.sh
test_expect_success 'abort rebase -i with --autostash' '
test_when_finished "git reset --hard" &&
- echo uncommited-content >file0 &&
+ echo uncommitted-content >file0 &&
(
write_script abort-editor.sh <<-\EOF &&
echo >"$1"
test_must_fail git rebase -i --autostash HEAD^ &&
rm -f abort-editor.sh
) &&
- echo uncommited-content >expected &&
+ echo uncommitted-content >expected &&
test_cmp expected file0
'
. ./test-lib.sh
+# Test the file mode "$1" of the file "$2" in the index.
+test_mode_in_index () {
+ case "$(git ls-files -s "$2")" in
+ "$1 "*" $2")
+ echo pass
+ ;;
+ *)
+ echo fail
+ git ls-files -s "$2"
+ return 1
+ ;;
+ esac
+}
+
test_expect_success \
'Test of git add' \
'touch foo && git add foo'
echo foo >xfoo1 &&
chmod 755 xfoo1 &&
git add xfoo1 &&
- case "$(git ls-files --stage xfoo1)" in
- 100644" "*xfoo1) echo pass;;
- *) echo fail; git ls-files --stage xfoo1; (exit 1);;
- esac'
+ test_mode_in_index 100644 xfoo1'
test_expect_success 'git add: filemode=0 should not get confused by symlink' '
rm -f xfoo1 &&
test_ln_s_add foo xfoo1 &&
- case "$(git ls-files --stage xfoo1)" in
- 120000" "*xfoo1) echo pass;;
- *) echo fail; git ls-files --stage xfoo1; (exit 1);;
- esac
+ test_mode_in_index 120000 xfoo1
'
test_expect_success \
echo foo >xfoo2 &&
chmod 755 xfoo2 &&
git update-index --add xfoo2 &&
- case "$(git ls-files --stage xfoo2)" in
- 100644" "*xfoo2) echo pass;;
- *) echo fail; git ls-files --stage xfoo2; (exit 1);;
- esac'
+ test_mode_in_index 100644 xfoo2'
test_expect_success 'git add: filemode=0 should not get confused by symlink' '
rm -f xfoo2 &&
test_ln_s_add foo xfoo2 &&
- case "$(git ls-files --stage xfoo2)" in
- 120000" "*xfoo2) echo pass;;
- *) echo fail; git ls-files --stage xfoo2; (exit 1);;
- esac
+ test_mode_in_index 120000 xfoo2
'
test_expect_success \
'git update-index --add: Test that executable bit is not used...' \
'git config core.filemode 0 &&
test_ln_s_add xfoo2 xfoo3 && # runs git update-index --add
- case "$(git ls-files --stage xfoo3)" in
- 120000" "*xfoo3) echo pass;;
- *) echo fail; git ls-files --stage xfoo3; (exit 1);;
- esac'
+ test_mode_in_index 120000 xfoo3'
test_expect_success '.gitignore test setup' '
echo "*.ig" >.gitignore &&
test_i18ncmp expect.err actual.err
'
-test_expect_success 'git add --chmod=+x stages a non-executable file with +x' '
+test_expect_success 'git add --chmod=[+-]x stages correctly' '
+ rm -f foo1 &&
echo foo >foo1 &&
git add --chmod=+x foo1 &&
- case "$(git ls-files --stage foo1)" in
- 100755" "*foo1) echo pass;;
- *) echo fail; git ls-files --stage foo1; (exit 1);;
- esac
-'
-
-test_expect_success 'git add --chmod=-x stages an executable file with -x' '
- echo foo >xfoo1 &&
- chmod 755 xfoo1 &&
- git add --chmod=-x xfoo1 &&
- case "$(git ls-files --stage xfoo1)" in
- 100644" "*xfoo1) echo pass;;
- *) echo fail; git ls-files --stage xfoo1; (exit 1);;
- esac
+ test_mode_in_index 100755 foo1 &&
+ git add --chmod=-x foo1 &&
+ test_mode_in_index 100644 foo1
'
test_expect_success POSIXPERM,SYMLINKS 'git add --chmod=+x with symlinks' '
git config core.filemode 1 &&
git config core.symlinks 1 &&
+ rm -f foo2 &&
echo foo >foo2 &&
git add --chmod=+x foo2 &&
- case "$(git ls-files --stage foo2)" in
- 100755" "*foo2) echo pass;;
- *) echo fail; git ls-files --stage foo2; (exit 1);;
- esac
+ test_mode_in_index 100755 foo2
'
test_done
grep -e "^Subject:" "$1"
}
+test_expect_success 'format.from=false' '
+
+ git -c format.from=false format-patch --stdout master..side |
+ sed -e "/^\$/q" >patch &&
+ check_patch patch &&
+ ! grep "^From: C O Mitter <committer@example.com>\$" patch
+'
+
+test_expect_success 'format.from=true' '
+
+ git -c format.from=true format-patch --stdout master..side |
+ sed -e "/^\$/q" >patch &&
+ check_patch patch &&
+ grep "^From: C O Mitter <committer@example.com>\$" patch
+'
+
+test_expect_success 'format.from with address' '
+
+ git -c format.from="F R Om <from@example.com>" format-patch --stdout master..side |
+ sed -e "/^\$/q" >patch &&
+ check_patch patch &&
+ grep "^From: F R Om <from@example.com>\$" patch
+'
+
+test_expect_success '--no-from overrides format.from' '
+
+ git -c format.from="F R Om <from@example.com>" format-patch --no-from --stdout master..side |
+ sed -e "/^\$/q" >patch &&
+ check_patch patch &&
+ ! grep "^From: F R Om <from@example.com>\$" patch
+'
+
+test_expect_success '--from overrides format.from' '
+
+ git -c format.from="F R Om <from@example.com>" format-patch --from --stdout master..side |
+ sed -e "/^\$/q" >patch &&
+ check_patch patch &&
+ ! grep "^From: F R Om <from@example.com>\$" patch
+'
+
test_expect_success '--no-to overrides config.to' '
git config --replace-all format.to \
'
cat >expected <<EOF
-${c_commit}COMMIT_ID${c_reset}${c_commit} (${c_reset}${c_HEAD}HEAD${c_reset}${c_commit} ->\
+${c_commit}COMMIT_ID${c_reset}${c_commit} (${c_reset}${c_HEAD}HEAD ->\
${c_reset}${c_branch}master${c_reset}${c_commit},\
${c_reset}${c_tag}tag: v1.0${c_reset}${c_commit},\
${c_reset}${c_tag}tag: B${c_reset}${c_commit})${c_reset} B
test new = "$(git show HEAD:file2)"
'
+test_expect_success '--rebase with conflicts shows advice' '
+ test_when_finished "git rebase --abort; git checkout -f to-rebase" &&
+ git checkout -b seq &&
+ test_seq 5 >seq.txt &&
+ git add seq.txt &&
+ test_tick &&
+ git commit -m "Add seq.txt" &&
+ echo 6 >>seq.txt &&
+ test_tick &&
+ git commit -m "Append to seq.txt" seq.txt &&
+ git checkout -b with-conflicts HEAD^ &&
+ echo conflicting >>seq.txt &&
+ test_tick &&
+ git commit -m "Create conflict" seq.txt &&
+ test_must_fail git pull --rebase . seq 2>err >out &&
+ grep "When you have resolved this problem" out
+'
+
+test_expect_success 'failed --rebase shows advice' '
+ test_when_finished "git rebase --abort; git checkout -f to-rebase" &&
+ git checkout -b diverging &&
+ test_commit attributes .gitattributes "* text=auto" attrs &&
+ sha1="$(printf "1\\r\\n" | git hash-object -w --stdin)" &&
+ git update-index --cacheinfo 0644 $sha1 file &&
+ git commit -m v1-with-cr &&
+ # force checkout because `git reset --hard` will not leave clean `file`
+ git checkout -f -b fails-to-rebase HEAD^ &&
+ test_commit v2-without-cr file "2" file2-lf &&
+ test_must_fail git pull --rebase . diverging 2>err >out &&
+ grep "When you have resolved this problem" out
+'
+
test_expect_success '--rebase fails with multiple branches' '
git reset --hard before-rebase &&
test_must_fail git pull --rebase . copy master 2>err &&
test_cmp expect actual
'
+test_expect_success 'new branch covered by force-with-lease' '
+ setup_srcdst_basic &&
+ (
+ cd dst &&
+ git branch branch master &&
+ git push --force-with-lease=branch origin branch
+ ) &&
+ git ls-remote dst refs/heads/branch >expect &&
+ git ls-remote src refs/heads/branch >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'new branch covered by force-with-lease (explicit)' '
+ setup_srcdst_basic &&
+ (
+ cd dst &&
+ git branch branch master &&
+ git push --force-with-lease=branch: origin branch
+ ) &&
+ git ls-remote dst refs/heads/branch >expect &&
+ git ls-remote src refs/heads/branch >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'new branch already exists' '
+ setup_srcdst_basic &&
+ (
+ cd src &&
+ git checkout -b branch master &&
+ test_commit F
+ ) &&
+ (
+ cd dst &&
+ git branch branch master &&
+ test_must_fail git push --force-with-lease=branch: origin branch
+ )
+'
+
test_done
# "git rev-list<ENTER>" is likely to be a bug in the calling script and may
-# deserve an error message, but do cases where set of refs programatically
+# deserve an error message, but do cases where set of refs programmatically
# given using globbing and/or --stdin need to fail with the same error, or
# are we better off reporting a success with no output? The following few
# tests document the current behaviour to remind us that we might want to
# first bad commit: [32a594a3fdac2d57cf6d02987e30eec68511498c] Add <4: Ciao for now> into <hello>.
EOF
-test_expect_success 'bisect log: successfull result' '
+test_expect_success 'bisect log: successful result' '
git bisect reset &&
git bisect start $HASH4 $HASH2 &&
git bisect good &&
echo content >file &&
git add file &&
git commit -m "added sub and file" &&
- mkdir -p deep/directory/hierachy &&
- git submodule add ./. deep/directory/hierachy/sub &&
+ mkdir -p deep/directory/hierarchy &&
+ git submodule add ./. deep/directory/hierarchy/sub &&
git commit -m "added another submodule" &&
git branch submodule
'
# git status would fail if the update of linking git dir to
# work dir of the submodule failed.
git status &&
- git config -f ../.gitmodules submodule.deep/directory/hierachy/sub.path >../actual &&
- echo "directory/hierachy/sub" >../expect
+ git config -f ../.gitmodules submodule.deep/directory/hierarchy/sub.path >../actual &&
+ echo "directory/hierarchy/sub" >../expect
) &&
test_cmp actual expect
'
grep ^LV= pager-env.out
'
+test_expect_success !MINGW,TTY 'LESS and LV envvars set by git-sh-setup' '
+ (
+ sane_unset LESS LV &&
+ PAGER="env >pager-env.out; wc" &&
+ export PAGER &&
+ PATH="$(git --exec-path):$PATH" &&
+ export PATH &&
+ test_terminal sh -c ". git-sh-setup && git_pager"
+ ) &&
+ grep ^LESS= pager-env.out &&
+ grep ^LV= pager-env.out
+'
+
test_expect_success TTY 'some commands do not use a pager' '
rm -f paginated.out &&
test_terminal git rev-list HEAD &&
. ./test-lib.sh
+# On some filesystems (e.g. FreeBSD's ext2 and ufs) directory mtime
+# is updated lazily after the contents of the directory change, which
+# forces the untracked cache code to take the slow path. A test
+# that wants to make sure that the fast path works correctly should
+# call this helper to bring the mtime of the containing directory in
+# sync with reality before checking the fast path behaviour.
+#
+# See <20160803174522.5571-1-pclouds@gmail.com> if you want to know
+# more.
+
+sync_mtime () {
+ find . -type d -ls >/dev/null
+}
+
avoid_racy() {
sleep 1
}
echo four >done/four && # four is gitignored at a higher level
echo five >done/five && # five is not gitignored
echo test >base && #we need to ensure that the root dir is touched
- rm base
+ rm base &&
+ sync_mtime
'
test_expect_success 'test sparse status with untracked cache' '
)
'
+test_expect_success 'submodule update --remote should fetch upstream changes with .' '
+ (
+ cd super &&
+ git config -f .gitmodules submodule."submodule".branch "." &&
+ git add .gitmodules &&
+ git commit -m "submodules: update from the respective superproject branch"
+ ) &&
+ (
+ cd submodule &&
+ echo line4a >> file &&
+ git add file &&
+ test_tick &&
+ git commit -m "upstream line4a" &&
+ git checkout -b test-branch &&
+ test_commit on-test-branch
+ ) &&
+ (
+ cd super &&
+ git submodule update --remote --force submodule &&
+		git -C submodule log -1 --oneline >actual &&
+		git -C ../submodule log -1 --oneline master >expect &&
+ test_cmp expect actual &&
+ git checkout -b test-branch &&
+ git submodule update --remote --force submodule &&
+		git -C submodule log -1 --oneline >actual &&
+		git -C ../submodule log -1 --oneline test-branch >expect &&
+ test_cmp expect actual &&
+ git checkout master &&
+ git branch -d test-branch &&
+ git reset --hard HEAD^
+ )
+'
+
test_expect_success 'local config should override .gitmodules branch' '
(cd submodule &&
- git checkout -b test-branch &&
+ git checkout test-branch &&
echo line5 >> file &&
git add file &&
test_tick &&
'
test_expect_success 'submodule update clone shallow submodule' '
+ test_when_finished "rm -rf super3" &&
+ first=$(git -C cloned submodule status submodule |cut -c2-41) &&
+ second=$(git -C submodule rev-parse HEAD) &&
+ commit_count=$(git -C submodule rev-list --count $first^..$second) &&
git clone cloned super3 &&
pwd=$(pwd) &&
- (cd super3 &&
- sed -e "s#url = ../#url = file://$pwd/#" <.gitmodules >.gitmodules.tmp &&
- mv -f .gitmodules.tmp .gitmodules &&
- git submodule update --init --depth=3
- (cd submodule &&
- test 1 = $(git log --oneline | wc -l)
- )
-)
+ (
+ cd super3 &&
+ sed -e "s#url = ../#url = file://$pwd/#" <.gitmodules >.gitmodules.tmp &&
+ mv -f .gitmodules.tmp .gitmodules &&
+ git submodule update --init --depth=$commit_count &&
+ test 1 = $(git -C submodule log --oneline | wc -l)
+ )
+'
+
+test_expect_success 'submodule update clone shallow submodule outside of depth' '
+ test_when_finished "rm -rf super3" &&
+ git clone cloned super3 &&
+ pwd=$(pwd) &&
+ (
+ cd super3 &&
+ sed -e "s#url = ../#url = file://$pwd/#" <.gitmodules >.gitmodules.tmp &&
+ mv -f .gitmodules.tmp .gitmodules &&
+ test_must_fail git submodule update --init --depth=1 2>actual &&
+ test_i18ngrep "Direct fetching of that commit failed." actual &&
+ git -C ../submodule config uploadpack.allowReachableSHA1InWant true &&
+ git submodule update --init --depth=1 >actual &&
+ test 1 = $(git -C submodule log --oneline | wc -l)
+ )
'
test_expect_success 'submodule update --recursive drops module name before recursing' '
)
'
+run_dir_diff_test 'difftool --dir-diff from subdirectory with GIT_DIR set' '
+ (
+ GIT_DIR=$(pwd)/.git &&
+ export GIT_DIR &&
+ GIT_WORK_TREE=$(pwd) &&
+ export GIT_WORK_TREE &&
+ cd sub &&
+ git difftool --dir-diff $symlinks --extcmd ls \
+ branch -- sub >output &&
+ grep sub output &&
+ ! grep file output
+ )
+'
+
run_dir_diff_test 'difftool --dir-diff when worktree file is missing' '
test_when_finished git reset --hard &&
rm file2 &&
'
test_expect_success 'log grep (9)' '
- git log -g --grep-reflog="commit: third" --author="non-existant" --pretty=tformat:%s >actual &&
+ git log -g --grep-reflog="commit: third" --author="non-existent" --pretty=tformat:%s >actual &&
: >expect &&
test_cmp expect actual
'
echo "more text" > src.c &&
GIT_CONFIG="$git_config" cvs -Q add src.c >cvs.log 2>&1 &&
marked_as . src.c "" &&
- echo "psuedo-binary" > temp.bin
+ echo "pseudo-binary" > temp.bin
) &&
GIT_CONFIG="$git_config" cvs -Q add subdir/temp.bin >cvs.log 2>&1 &&
marked_as subdir temp.bin "-kb" &&
#include "cache.h"
#include "quote.h"
+/*
+ * "Normalize" a key argument by converting NULL to our trace_default,
+ * and otherwise passing through the value. All caller-facing functions
+ * should normalize their inputs in this way, though most get it
+ * for free by calling get_trace_fd() (directly or indirectly).
+ */
+static void normalize_trace_key(struct trace_key **key)
+{
+ static struct trace_key trace_default = { "GIT_TRACE" };
+ if (!*key)
+ *key = &trace_default;
+}
+
/* Get a trace file descriptor from "key" env variable. */
static int get_trace_fd(struct trace_key *key)
{
- static struct trace_key trace_default = { "GIT_TRACE" };
const char *trace;
- /* use default "GIT_TRACE" if NULL */
- if (!key)
- key = &trace_default;
+ normalize_trace_key(&key);
/* don't open twice */
if (key->initialized)
else if (is_absolute_path(trace)) {
int fd = open(trace, O_WRONLY | O_APPEND | O_CREAT, 0666);
if (fd == -1) {
- fprintf(stderr,
- "Could not open '%s' for tracing: %s\n"
- "Defaulting to tracing on stderr...\n",
+ warning("could not open '%s' for tracing: %s",
trace, strerror(errno));
- key->fd = STDERR_FILENO;
+ trace_disable(key);
} else {
key->fd = fd;
key->need_close = 1;
}
} else {
- fprintf(stderr, "What does '%s' for %s mean?\n"
- "If you want to trace into a file, then please set "
- "%s to an absolute pathname (starting with /).\n"
- "Defaulting to tracing on stderr...\n",
- trace, key->key, key->key);
- key->fd = STDERR_FILENO;
+ warning("unknown trace value for '%s': %s\n"
+ " If you want to trace into a file, then please set %s\n"
+ " to an absolute pathname (starting with /)",
+ key->key, trace, key->key);
+ trace_disable(key);
}
key->initialized = 1;
void trace_disable(struct trace_key *key)
{
+ normalize_trace_key(&key);
+
if (key->need_close)
close(key->fd);
key->fd = 0;
key->need_close = 0;
}
-static const char err_msg[] = "Could not trace into fd given by "
- "GIT_TRACE environment variable";
-
static int prepare_trace_line(const char *file, int line,
struct trace_key *key, struct strbuf *buf)
{
return 1;
}
+static void trace_write(struct trace_key *key, const void *buf, unsigned len)
+{
+ if (write_in_full(get_trace_fd(key), buf, len) < 0) {
+ normalize_trace_key(&key);
+ warning("unable to write trace for %s: %s",
+ key->key, strerror(errno));
+ trace_disable(key);
+ }
+}
+
void trace_verbatim(struct trace_key *key, const void *buf, unsigned len)
{
if (!trace_want(key))
return;
- write_or_whine_pipe(get_trace_fd(key), buf, len, err_msg);
+ trace_write(key, buf, len);
}
static void print_trace_line(struct trace_key *key, struct strbuf *buf)
{
strbuf_complete_line(buf);
-
- write_or_whine_pipe(get_trace_fd(key), buf->buf, buf->len, err_msg);
+ trace_write(key, buf->buf, buf->len);
strbuf_release(buf);
}
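As a user-visible sketch of the error handling changed above (the file name and command are arbitrary examples): an unusable GIT_TRACE value, such as a relative path, now produces a warning and disables tracing for that key instead of silently falling back to stderr. The output looks roughly like this:

	$ GIT_TRACE=trace.out git status
	warning: unknown trace value for 'GIT_TRACE': trace.out
	    If you want to trace into a file, then please set GIT_TRACE
	    to an absolute pathname (starting with /)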
warning(_("unknown value '%s' for key '%s'"), value, conf_key);
break;
default:
- die("internal bug in trailer.c");
+ die("BUG: trailer.c: unhandled type %d", type);
}
return 0;
}
}
/* Stream state: More data may be coming in this direction. */
-#define SSTATE_TRANSFERING 0
+#define SSTATE_TRANSFERRING 0
/*
* Stream state: No more data coming in this direction, flushing rest of
* data.
/* Stream state: Transfer in this direction finished. */
#define SSTATE_FINISHED 2
-#define STATE_NEEDS_READING(state) ((state) <= SSTATE_TRANSFERING)
+#define STATE_NEEDS_READING(state) ((state) <= SSTATE_TRANSFERRING)
#define STATE_NEEDS_WRITING(state) ((state) <= SSTATE_FLUSHING)
#define STATE_NEEDS_CLOSING(state) ((state) == SSTATE_FLUSHING)
state.ptg.dest = 1;
state.ptg.src_is_sock = (input == output);
state.ptg.dest_is_sock = 0;
- state.ptg.state = SSTATE_TRANSFERING;
+ state.ptg.state = SSTATE_TRANSFERRING;
state.ptg.bufuse = 0;
state.ptg.src_name = "remote input";
state.ptg.dest_name = "stdout";
state.gtp.dest = output;
state.gtp.src_is_sock = 0;
state.gtp.dest_is_sock = (input == output);
- state.gtp.state = SSTATE_TRANSFERING;
+ state.gtp.state = SSTATE_TRANSFERRING;
state.gtp.bufuse = 0;
state.gtp.src_name = "stdin";
state.gtp.dest_name = "remote output";
}
}
-static const char *status_abbrev(unsigned char sha1[20])
-{
- return find_unique_abbrev(sha1, DEFAULT_ABBREV);
-}
-
static void print_ok_ref_status(struct ref *ref, int porcelain)
{
if (ref->deletion)
char type;
const char *msg;
- strbuf_addstr(&quickref, status_abbrev(ref->old_oid.hash));
+ strbuf_add_unique_abbrev(&quickref, ref->old_oid.hash,
+ DEFAULT_ABBREV);
if (ref->forced_update) {
strbuf_addstr(&quickref, "...");
type = '+';
type = ' ';
msg = NULL;
}
- strbuf_addstr(&quickref, status_abbrev(ref->new_oid.hash));
+ strbuf_add_unique_abbrev(&quickref, ref->new_oid.hash,
+ DEFAULT_ABBREV);
print_ref_status(type, quickref.buf, ref, ref->peer_ref, msg, porcelain);
strbuf_release(&quickref);
struct git_transport_data *data;
if (!transport->smart_options)
- die("Bug detected: Taking over transport requires non-NULL "
+ die("BUG: taking over transport requires non-NULL "
"smart_options field.");
data = xcalloc(1, sizeof(*data));
OPT_BOOL(0, "stateless-rpc", &stateless_rpc,
N_("quit after a single request/response exchange")),
OPT_BOOL(0, "advertise-refs", &advertise_refs,
- N_("exit immediately after intial ref advertisement")),
+ N_("exit immediately after initial ref advertisement")),
OPT_BOOL(0, "strict", &strict,
N_("do not try <directory>/.git/ if <directory> is no Git directory")),
OPT_INTEGER(0, "timeout", &timeout,
die_errno("write error");
}
}
-
-int write_or_whine_pipe(int fd, const void *buf, size_t count, const char *msg)
-{
- if (write_in_full(fd, buf, count) < 0) {
- check_pipe(errno);
- fprintf(stderr, "%s: write error (%s)\n",
- msg, strerror(errno));
- return 0;
- }
-
- return 1;
-}
case 7:
return _("both modified:");
default:
- die("bug: unhandled unmerged status %x", stagemask);
+ die("BUG: unhandled unmerged status %x", stagemask);
}
}
status_printf(s, color(WT_STATUS_HEADER, s), "\t");
what = wt_status_diff_status_string(status);
if (!what)
- die("bug: unhandled diff status %c", status);
+ die("BUG: unhandled diff status %c", status);
len = label_width - utf8_strwidth(what);
assert(len >= 0);
if (status == DIFF_STATUS_COPIED || status == DIFF_STATUS_RENAMED)