# Use tabs whenever we need to fill whitespace that spans at least from one tab
# stop to the next one.
+#
+# These settings are mirrored in .editorconfig. Keep them in sync.
UseTab: Always
TabWidth: 8
IndentWidth: 8
--- /dev/null
+[*]
+charset = utf-8
+insert_final_newline = true
+
+# The settings for C (*.c and *.h) files are mirrored in .clang-format. Keep
+# them in sync.
+[*.{c,h,sh,perl,pl,pm}]
+indent_style = tab
+tab_width = 8
+
+[*.py]
+indent_style = space
+indent_size = 4
+
+[COMMIT_EDITMSG]
+max_line_length = 72
+/fuzz_corpora
+/fuzz-pack-headers
+/fuzz-pack-idx
/GIT-BUILD-OPTIONS
/GIT-CFLAGS
/GIT-LDFLAGS
* "git help -a" now gives verbose output (same as "git help -av").
Those who want the old output may say "git help --no-verbose -a".
+ * "git cpn --help", when "cpn" is an alias to, say, "cherry-pick -n",
+ reported only the alias expansion of "cpn" in earlier versions of
+ Git. It now runs "git cherry-pick --help" to show the manual page
+ of the command, while sending the alias expansion to the standard
+ error stream.
+
Updates since v2.19
-------------------
advertisement. The alternate refs that are advertised are now
configurable with a pair of configuration variables.
+ * "git cmd --help" when "cmd" is aliased used to only say "cmd is
+ aliased to ...". Now it shows that to the standard error stream
+ and runs "git $cmd --help" where $cmd is the first word of the
+ alias expansion.
+
+ * The documentation of "git gc" has been updated to mention that it
+ is no longer limited to "pruning away crufts" but also updates
+ ancillary files like commit-graph as a part of repository
+ optimization.
+
+ * "git p4 unshelve" improvements.
+
+ * The logic to select the default user name and e-mail on Windows has
+ been improved.
+ (merge 501afcb8b0 js/mingw-default-ident later to maint).
+
Performance, Internal Implementation, Development Support etc.
object exists, even for paths that are outside of the partial
checkout area. The code has been updated to avoid such a check.
+ * To help developers, an EditorConfig file that attempts to follow
+ the project convention has been added.
+ (merge b548d698a0 bc/editorconfig later to maint).
+
+ * The result of a coverage test can be combined with "git blame" to
+ check the test coverage of code introduced recently, using the new
+ 'coverage-diff' tool (in contrib/).
+ (merge 783faedd65 ds/coverage-diff later to maint).
+
+ * An experiment to fuzz test a few areas has been started, in the hope
+ of gaining more test coverage in various areas.
+
Fixes since v2.19
-----------------
no blobs are needed.
(merge 4c7f9567ea jt/non-blob-lazy-fetch later to maint).
+ * The codepath to support the experimental split-index mode had
+ remaining "racily clean" issues fixed.
+ (merge 4c490f3d32 sg/split-index-racefix later to maint).
+
+ * "git log --graph" showing an octopus merge sometimes miscounted the
+ number of display columns it is consuming to show the merge and its
+ parent commits, which has been corrected.
+ (merge 04005834ed np/log-graph-octopus-fix later to maint).
+
* Code cleanup, docfix, build fix, etc.
(merge 96a7501aad ts/doc-build-manpage-xsl-quietly later to maint).
(merge b9b07efdb2 tg/conflict-marker-size later to maint).
(merge 6e8fc70fce rs/sequencer-oidset-insert-avoids-dups later to maint).
(merge ad0b8f9575 mw/doc-typofixes later to maint).
(merge d9f079ad1a jc/how-to-document-api later to maint).
+ (merge b1492bf315 ma/t7005-bash-workaround later to maint).
+ (merge ac1f98a0df du/rev-parse-is-plumbing later to maint).
+ (merge ca8ed443a5 mm/doc-no-dashed-git later to maint).
+ (merge ce366a8144 du/get-tar-commit-id-is-plumbing later to maint).
+ (merge 61018fe9e0 du/cherry-is-plumbing later to maint).
with when fetching or pushing over HTTPS. Can be overridden
by the `GIT_SSL_CAPATH` environment variable.
+http.sslBackend::
+ Name of the SSL backend to use (e.g. "openssl" or "schannel").
+ This option is ignored if cURL lacks support for choosing the SSL
+ backend at runtime.
+
+http.schannelCheckRevoke::
+ Used to enforce or disable certificate revocation checks in cURL
+ when http.sslBackend is set to "schannel". Defaults to `true` if
+ unset. It is only necessary to disable this if Git consistently
+ errors out with a message about checking the revocation status of
+ a certificate. This option is ignored if cURL lacks support for
+ setting the relevant SSL option at runtime.
+
+http.schannelUseSSLCAInfo::
+ As of cURL v7.60.0, the Secure Channel backend can use the
+ certificate bundle provided via `http.sslCAInfo`, but that would
+ override the Windows Certificate Store. Since this is usually not
+ desirable, Git will by default tell cURL not to use that bundle
+ when the `schannel` backend is configured via `http.sslBackend`,
+ unless `http.schannelUseSSLCAInfo` overrides this behavior.
+
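+For example, to use the Secure Channel backend while keeping revocation
+checks enabled, one might set the following (a minimal sketch; it assumes
+a cURL build that supports selecting the SSL backend at runtime):
+
+----
+[http]
+	sslBackend = schannel
+	schannelCheckRevoke = true
+----
+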
http.pinnedpubkey::
Public key of the https service. It may either be the filename of
a PEM or DER encoded public key file or a string starting with
This form is to view the changes on the branch containing
and up to the second <commit>, starting at a common ancestor
of both <commit>. "git diff A\...B" is equivalent to
- "git diff $(git-merge-base A B) B". You can omit any one
+ "git diff $(git merge-base A B) B". You can omit any one
of <commit>, which has the same effect as using HEAD instead.
-Just in case if you are doing something exotic, it should be
+Just in case you are doing something exotic, it should be
noted that all of the <commit> in the above description, except
in the last two forms that use ".." notations, can be any
<tree>.
such as compressing file revisions (to reduce disk space and increase
performance), removing unreachable objects which may have been
created from prior invocations of 'git add', packing refs, pruning
-reflog, rerere metadata or stale working trees.
+reflog, rerere metadata or stale working trees. May also update ancillary
+indexes such as the commit-graph.
Users are encouraged to run this task on a regular basis within
each repository to maintain good disk space utilization and good
purpose, but this can be overridden by other options or configuration
variables.
+If an alias is given, git shows the definition of the alias on
+standard output. To get the manual page for the aliased command, use
+`git COMMAND --help`.
+
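+For example, assuming an alias `cpn` defined as `cherry-pick -n`:
+
+----
+$ git help cpn
+'cpn' is aliased to 'cherry-pick -n'
+$ git cpn --help
+<opens the git-cherry-pick(1) manual page>
+----
+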
Note that `git --help ...` is identical to `git help ...` because the
former is internally converted into the latter.
Unshelve
~~~~~~~~
Unshelving will take a shelved P4 changelist, and produce the equivalent git commit
-in the branch refs/remotes/p4/unshelved/<changelist>.
+in the branch refs/remotes/p4-unshelved/<changelist>.
The git commit is created relative to the current origin revision (HEAD by default).
-If the shelved changelist's parent revisions differ, git-p4 will refuse to unshelve;
-you need to be unshelving onto an equivalent tree.
+A parent commit is created based on the origin, and then the unshelve commit is
+created based on that.
The origin revision can be changed with the "--origin" option.
-If the target branch in refs/remotes/p4/unshelved already exists, the old one will
+If the target branch in refs/remotes/p4-unshelved already exists, the old one will
be renamed.
----
$ git p4 sync
$ git p4 unshelve 12345
-$ git show refs/remotes/p4/unshelved/12345
+$ git show p4-unshelved/12345
<submit more changes via p4 to the same files>
$ git p4 unshelve 12345
<refuses to unshelve until git is in sync with p4 again>
info "The branch '$1' is new..."
else
# updating -- make sure it is a fast-forward
- mb=$(git-merge-base "$2" "$3")
+ mb=$(git merge-base "$2" "$3")
case "$mb,$2" in
"$2,$mb") info "Update is fast-forward" ;;
*) noff=y; info "This is not a fast-forward update.";;
VCSSVN_OBJS =
GENERATED_H =
EXTRA_CPPFLAGS =
+FUZZ_OBJS =
+FUZZ_PROGRAMS =
LIB_OBJS =
PROGRAM_OBJS =
PROGRAMS =
ETAGS_TARGET = TAGS
+FUZZ_OBJS += fuzz-pack-headers.o
+FUZZ_OBJS += fuzz-pack-idx.o
+
+# Always build fuzz objects even if not testing, to prevent bit-rot.
+all:: $(FUZZ_OBJS)
+
+FUZZ_PROGRAMS += $(patsubst %.o,%,$(FUZZ_OBJS))
+
# Empty...
EXTRA_PROGRAMS =
OBJECTS := $(LIB_OBJS) $(BUILTIN_OBJS) $(PROGRAM_OBJS) $(TEST_OBJS) \
$(XDIFF_OBJS) \
$(VCSSVN_OBJS) \
+ $(FUZZ_OBJS) \
common-main.o \
git.o
ifndef NO_CURL
$(RM) $(LIB_FILE) $(XDIFF_LIB) $(VCSSVN_LIB)
$(RM) $(ALL_PROGRAMS) $(SCRIPT_LIB) $(BUILT_INS) git$X
$(RM) $(TEST_PROGRAMS) $(NO_INSTALL)
+ $(RM) $(FUZZ_PROGRAMS)
$(RM) -r bin-wrappers $(dep_dirs)
$(RM) -r po/build/
$(RM) *.pyc *.pyo */*.pyc */*.pyo command-list.h $(ETAGS_TARGET) tags cscope*
cover_db_html: cover_db
cover -report html -outputdir cover_db_html cover_db
+
+### Fuzz testing
+#
+# Building fuzz targets generally requires a special set of compiler flags that
+# are not necessarily appropriate for general builds, and that vary greatly
+# depending on the compiler version used.
+#
+# An example command to build against libFuzzer from LLVM 4.0.0:
+#
+# make CC=clang CXX=clang++ \
+# CFLAGS="-fsanitize-coverage=trace-pc-guard -fsanitize=address" \
+# LIB_FUZZING_ENGINE=/usr/lib/llvm-4.0/lib/libFuzzer.a \
+# fuzz-all
+#
+.PHONY: fuzz-all
+
+$(FUZZ_PROGRAMS): all
+ $(QUIET_LINK)$(CXX) $(CFLAGS) $(LIB_OBJS) $(BUILTIN_OBJS) \
+ $(XDIFF_OBJS) $(EXTLIBS) git.o $@.o $(LIB_FUZZING_ENGINE) -o $@
+
+fuzz-all: $(FUZZ_PROGRAMS)
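+
+# Once built, a fuzzer can be run against a corpus directory, for example
+# (assuming a libFuzzer-style engine and a corpus kept in fuzz_corpora/):
+#
+#	./fuzz-pack-idx fuzz_corpora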
alias = alias_lookup(cmd);
if (alias) {
- printf_ln(_("'%s' is aliased to '%s'"), cmd, alias);
- free(alias);
- exit(0);
+ const char **argv;
+ int count;
+
+ /*
+ * handle_builtin() in git.c rewrites "git cmd --help"
+ * to "git help --exclude-guides cmd", so we can use
+ * exclude_guides to distinguish "git cmd --help" from
+ * "git help cmd". In the latter case, or if cmd is an
+ * alias for a shell command, just print the alias
+ * definition.
+ */
+ if (!exclude_guides || alias[0] == '!') {
+ printf_ln(_("'%s' is aliased to '%s'"), cmd, alias);
+ free(alias);
+ exit(0);
+ }
+ /*
+ * Otherwise, we pretend that the command was "git
+ * word0 --help". We use split_cmdline() to get the
+ * first word of the alias, to ensure that we use the
+ * same rules as when the alias is actually
+ * used. split_cmdline() modifies alias in-place.
+ */
+ fprintf_ln(stderr, _("'%s' is aliased to '%s'"), cmd, alias);
+ count = split_cmdline(alias, &argv);
+ if (count < 0)
+ die(_("bad alias.%s string: %s"), cmd,
+ split_cmdline_strerror(count));
+ free(argv);
+ UNLEAK(alias);
+ return alias;
}
if (exclude_guides)
#define CE_MATCH_REFRESH 0x10
/* don't refresh_fsmonitor state or do stat comparison even if CE_FSMONITOR_VALID is true */
#define CE_MATCH_IGNORE_FSMONITOR 0X20
+extern int is_racy_timestamp(const struct index_state *istate,
+ const struct cache_entry *ce);
extern int ie_match_stat(struct index_state *, const struct cache_entry *, struct stat *, unsigned int);
extern int ie_modified(struct index_state *, const struct cache_entry *, struct stat *, unsigned int);
git-checkout mainporcelain history
git-checkout-index plumbingmanipulators
git-check-ref-format purehelpers
-git-cherry ancillaryinterrogators complete
+git-cherry plumbinginterrogators complete
git-cherry-pick mainporcelain
git-citool mainporcelain
git-clean mainporcelain
git-format-patch mainporcelain
git-fsck ancillaryinterrogators complete
git-gc mainporcelain
-git-get-tar-commit-id ancillaryinterrogators
+git-get-tar-commit-id plumbinginterrogators
git-grep mainporcelain info
git-gui mainporcelain
git-hash-object plumbingmanipulators
git-reset mainporcelain worktree
git-revert mainporcelain
git-rev-list plumbinginterrogators
-git-rev-parse ancillaryinterrogators
+git-rev-parse plumbinginterrogators
git-rm mainporcelain worktree
git-send-email foreignscminterface complete
git-send-pack synchingrepositories
#include "../strbuf.h"
#include "../run-command.h"
#include "../cache.h"
+#include "win32/lazyload.h"
#define HCAST(type, handle) ((type)(intptr_t)handle)
return si.dwAllocationGranularity;
}
+/* See https://msdn.microsoft.com/en-us/library/windows/desktop/ms724435.aspx */
+enum EXTENDED_NAME_FORMAT {
+ NameDisplay = 3,
+ NameUserPrincipal = 8
+};
+
+static char *get_extended_user_info(enum EXTENDED_NAME_FORMAT type)
+{
+ DECLARE_PROC_ADDR(secur32.dll, BOOL, GetUserNameExW,
+ enum EXTENDED_NAME_FORMAT, LPCWSTR, PULONG);
+ static wchar_t wbuffer[1024];
+ DWORD len;
+
+ if (!INIT_PROC_ADDR(GetUserNameExW))
+ return NULL;
+
+ len = ARRAY_SIZE(wbuffer);
+ if (GetUserNameExW(type, wbuffer, &len)) {
+ char *converted = xmalloc((len *= 3));
+ if (xwcstoutf(converted, wbuffer, len) >= 0)
+ return converted;
+ free(converted);
+ }
+
+ return NULL;
+}
+
+char *mingw_query_user_email(void)
+{
+ return get_extended_user_info(NameUserPrincipal);
+}
+
struct passwd *getpwuid(int uid)
{
+ static unsigned initialized;
static char user_name[100];
- static struct passwd p;
+ static struct passwd *p;
+ DWORD len;
+
+ if (initialized)
+ return p;
- DWORD len = sizeof(user_name);
- if (!GetUserName(user_name, &len))
+ len = sizeof(user_name);
+ if (!GetUserName(user_name, &len)) {
+ initialized = 1;
return NULL;
- p.pw_name = user_name;
- p.pw_gecos = "unknown";
- p.pw_dir = NULL;
- return &p;
+ }
+
+ p = xmalloc(sizeof(*p));
+ p->pw_name = user_name;
+ p->pw_gecos = get_extended_user_info(NameDisplay);
+ if (!p->pw_gecos)
+ p->pw_gecos = "unknown";
+ p->pw_dir = NULL;
+
+ initialized = 1;
+ return p;
}
static HANDLE timer_event;
int mingw_offset_1st_component(const char *path);
#define offset_1st_component mingw_offset_1st_component
#define PATH_SEP ';'
+extern char *mingw_query_user_email(void);
+#define query_user_email mingw_query_user_email
#if !defined(__MINGW64_VERSION_MAJOR) && (!defined(_MSC_VER) || _MSC_VER < 1800)
#define PRIuMAX "I64u"
#define PRId64 "I64d"
esac
}
-_git_cherry ()
-{
- case "$cur" in
- --*)
- __gitcomp_builtin cherry
- return
- esac
-
- __git_complete_refs
-}
-
__git_cherry_pick_inprogress_options="--continue --quit --abort"
_git_cherry_pick ()
--- /dev/null
+#!/bin/sh
+
+# Usage: Run 'contrib/coverage-diff.sh <version1> <version2>' from source-root
+# after running
+#
+# make coverage-test
+# make coverage-report
+#
+# while checked out at <version2>. This script combines the *.gcov files
+# generated by the 'make' commands above with 'git diff <version1> <version2>'
+# to report new lines that are not covered by the test suite.
+
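+# For example (a hypothetical invocation; it assumes both versions have
+# been built and tested as described above):
+#
+#     contrib/coverage-diff.sh v2.19.0 HEAD
+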
+V1=$1
+V2=$2
+
+diff_lines () {
+ perl -e '
+ my $line_num;
+ while (<>) {
+ # Hunk header? Grab the beginning in postimage.
+ if (/^@@ -\d+(?:,\d+)? \+(\d+)(?:,\d+)? @@/) {
+ $line_num = $1;
+ next;
+ }
+
+ # Have we seen a hunk? Ignore "diff --git" etc.
+ next unless defined $line_num;
+
+ # Deleted line? Ignore.
+ if (/^-/) {
+ next;
+ }
+
+ # Show only the line number of added lines.
+ if (/^\+/) {
+ print "$line_num\n";
+ }
+ # Either a common context line or an added line appears
+ # in the postimage; count it.
+ $line_num++;
+ }
+ '
+}
+
+files=$(git diff --name-only "$V1" "$V2" -- \*.c)
+
+# create empty file
+>coverage-data.txt
+
+for file in $files
+do
+ git diff "$V1" "$V2" -- "$file" |
+ diff_lines |
+ sort >new_lines.txt
+
+ if ! test -s new_lines.txt
+ then
+ continue
+ fi
+
+ hash_file=$(echo $file | sed "s/\//\#/")
+
+ if ! test -s "$hash_file.gcov"
+ then
+ continue
+ fi
+
+ sed -ne '/#####:/{
+ s/ #####://
+ s/:.*//
+ s/ //g
+ p
+ }' "$hash_file.gcov" |
+ sort >uncovered_lines.txt
+
+ comm -12 uncovered_lines.txt new_lines.txt |
+ sed -e 's/$/\)/' |
+ sed -e 's/^/ /' >uncovered_new_lines.txt
+
+ grep -q '[^[:space:]]' <uncovered_new_lines.txt &&
+ echo $file >>coverage-data.txt &&
+ git blame -s "$V2" -- "$file" |
+ sed 's/\t//g' |
+ grep -f uncovered_new_lines.txt >>coverage-data.txt &&
+ echo >>coverage-data.txt
+
+ rm -f new_lines.txt uncovered_lines.txt uncovered_new_lines.txt
+done
+
+cat coverage-data.txt
+
+echo "Commits introducing uncovered code:"
+
+commit_list=$(cat coverage-data.txt |
+ grep -E '^[0-9a-f]{7,} ' |
+ awk '{print $1;}' |
+ sort |
+ uniq)
+
+(
+ for commit in $commit_list
+ do
+ git log --no-decorate --pretty=format:'%an %h: %s' -1 $commit
+ echo
+ done
+) | sort
+
+rm coverage-data.txt
}
check_parents () {
- missed=$(cache_miss "$@")
+ missed=$(cache_miss "$1")
+ local indent=$(($2 + 1))
for miss in $missed
do
if ! test -r "$cachedir/notree/$miss"
then
debug " incorrect order: $miss"
+ process_split_commit "$miss" "" "$indent"
fi
done
}
revs="$2"
main=
sub=
- git log --grep="^git-subtree-dir: $dir/*\$" \
+ local grep_format="^git-subtree-dir: $dir/*\$"
+ if test -n "$ignore_joins"
+ then
+ grep_format="^Add '$dir/' from commit '"
+ fi
+ git log --grep="$grep_format" \
--no-show-signature --pretty=format:'START %H%n%s%n%n%b%nEND%n' $revs |
while read a b junk
do
nonidentical=
p=
gotparents=
+ copycommit=
for parent in $newparents
do
ptree=$(toptree_for_commit $parent) || exit $?
if test "$ptree" = "$tree"
then
# an identical parent could be used in place of this rev.
- identical="$parent"
+ if test -n "$identical"
+ then
+ # if a previous identical parent was found, check whether
+ # one is already an ancestor of the other
+ mergebase=$(git merge-base $identical $parent)
+ if test "$identical" = "$mergebase"
+ then
+ # current identical commit is an ancestor of parent
+ identical="$parent"
+ elif test "$parent" != "$mergebase"
+ then
+ # no common history; commit must be copied
+ copycommit=1
+ fi
+ else
+ # first identical parent detected
+ identical="$parent"
+ fi
else
nonidentical="$parent"
fi
fi
done
- copycommit=
if test -n "$identical" && test -n "$nonidentical"
then
extras=$(git rev-list --count $identical..$nonidentical)
die "'$1' does not look like a ref"
}
+process_split_commit () {
+ local rev="$1"
+ local parents="$2"
+ local indent=$3
+
+ if test $indent -eq 0
+ then
+ revcount=$(($revcount + 1))
+ else
+ # processing commit without normal parent information;
+ # fetch from repo
+ parents=$(git rev-parse "$rev^@")
+ extracount=$(($extracount + 1))
+ fi
+
+ progress "$revcount/$revmax ($createcount) [$extracount]"
+
+ debug "Processing commit: $rev"
+ exists=$(cache_get "$rev")
+ if test -n "$exists"
+ then
+ debug " prior: $exists"
+ return
+ fi
+ createcount=$(($createcount + 1))
+ debug " parents: $parents"
+ check_parents "$parents" "$indent"
+ newparents=$(cache_get $parents)
+ debug " newparents: $newparents"
+
+ tree=$(subtree_for_commit "$rev" "$dir")
+ debug " tree is: $tree"
+
+ # ugly. is there no better way to tell if this is a subtree
+ # vs. a mainline commit? Does it matter?
+ if test -z "$tree"
+ then
+ set_notree "$rev"
+ if test -n "$newparents"
+ then
+ cache_set "$rev" "$rev"
+ fi
+ return
+ fi
+
+ newrev=$(copy_or_skip "$rev" "$tree" "$newparents") || exit $?
+ debug " newrev is: $newrev"
+ cache_set "$rev" "$newrev"
+ cache_set latest_new "$newrev"
+ cache_set latest_old "$rev"
+}
+
cmd_add () {
if test -e "$dir"
then
done
fi
- if test -n "$ignore_joins"
- then
- unrevs=
- else
- unrevs="$(find_existing_splits "$dir" "$revs")"
- fi
+ unrevs="$(find_existing_splits "$dir" "$revs")"
# We can't restrict rev-list to only $dir here, because some of our
# parents have the $dir contents the root, and those won't match.
revmax=$(eval "$grl" | wc -l)
revcount=0
createcount=0
+ extracount=0
eval "$grl" |
while read rev parents
do
- revcount=$(($revcount + 1))
- progress "$revcount/$revmax ($createcount)"
- debug "Processing commit: $rev"
- exists=$(cache_get "$rev")
- if test -n "$exists"
- then
- debug " prior: $exists"
- continue
- fi
- createcount=$(($createcount + 1))
- debug " parents: $parents"
- newparents=$(cache_get $parents)
- debug " newparents: $newparents"
-
- tree=$(subtree_for_commit "$rev" "$dir")
- debug " tree is: $tree"
-
- check_parents $parents
-
- # ugly. is there no better way to tell if this is a subtree
- # vs. a mainline commit? Does it matter?
- if test -z "$tree"
- then
- set_notree "$rev"
- if test -n "$newparents"
- then
- cache_set "$rev" "$rev"
- fi
- continue
- fi
-
- newrev=$(copy_or_skip "$rev" "$tree" "$newparents") || exit $?
- debug " newrev is: $newrev"
- cache_set "$rev" "$newrev"
- cache_set latest_new "$newrev"
- cache_set latest_old "$rev"
+ process_split_commit "$rev" "$parents" 0
done || exit $?
latest_new=$(cache_get latest_new)
static void emit_line_ws_markup(struct diff_options *o,
const char *set_sign, const char *set,
const char *reset,
- char sign, const char *line, int len,
+ int sign_index, const char *line, int len,
unsigned ws_rule, int blank_at_eof)
{
const char *ws = NULL;
+ int sign = o->output_indicators[sign_index];
if (o->ws_error_highlight & ws_rule) {
ws = diff_get_color_opt(o, DIFF_WHITESPACE);
set = diff_get_color_opt(o, DIFF_FILE_OLD);
}
emit_line_ws_markup(o, set_sign, set, reset,
- o->output_indicators[OUTPUT_INDICATOR_CONTEXT],
- line, len,
+ OUTPUT_INDICATOR_CONTEXT, line, len,
flags & (DIFF_SYMBOL_CONTENT_WS_MASK), 0);
break;
case DIFF_SYMBOL_PLUS:
flags &= ~DIFF_SYMBOL_CONTENT_WS_MASK;
}
emit_line_ws_markup(o, set_sign, set, reset,
- o->output_indicators[OUTPUT_INDICATOR_NEW],
- line, len,
+ OUTPUT_INDICATOR_NEW, line, len,
flags & DIFF_SYMBOL_CONTENT_WS_MASK,
flags & DIFF_SYMBOL_CONTENT_BLANK_LINE_EOF);
break;
set = diff_get_color_opt(o, DIFF_CONTEXT_DIM);
}
emit_line_ws_markup(o, set_sign, set, reset,
- o->output_indicators[OUTPUT_INDICATOR_OLD],
- line, len,
+ OUTPUT_INDICATOR_OLD, line, len,
flags & DIFF_SYMBOL_CONTENT_WS_MASK, 0);
break;
case DIFF_SYMBOL_WORDS_PORCELAIN:
--- /dev/null
+#include "packfile.h"
+
+int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size);
+
+int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
+{
+ enum object_type type;
+ unsigned long len;
+
+ unpack_object_header_buffer((const unsigned char *)data,
+ (unsigned long)size, &type, &len);
+
+ return 0;
+}
--- /dev/null
+#include "object-store.h"
+#include "packfile.h"
+
+int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size);
+
+int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
+{
+ struct packed_git p;
+
+ load_idx("fuzz-input", GIT_SHA1_RAWSZ, (void *)data, size, &p);
+
+ return 0;
+}
#define find_last_dir_sep git_find_last_dir_sep
#endif
+#ifndef query_user_email
+#define query_user_email() NULL
+#endif
+
#if defined(__HP_cc) && (__HP_cc >= 61000)
#define NORETURN __attribute__((noreturn))
#define NORETURN_PTR
return LargeFileSystem.processContent(self, git_mode, relPath, contents)
class Command:
+ delete_actions = ( "delete", "move/delete", "purge" )
+ add_actions = ( "add", "move/add" )
+
def __init__(self):
self.usage = "usage: %prog [options]"
self.needsGit = True
return ""
class P4Sync(Command, P4UserMap):
- delete_actions = ( "delete", "move/delete", "purge" )
def __init__(self):
Command.__init__(self)
if self.verbose:
print("checkpoint finished: " + out)
- def cmp_shelved(self, path, filerev, revision):
- """ Determine if a path at revision #filerev is the same as the file
- at revision @revision for a shelved changelist. If they don't match,
- unshelving won't be safe (we will get other changes mixed in).
-
- This is comparing the revision that the shelved changelist is *based* on, not
- the shelved changelist itself.
- """
- ret = p4Cmd(["diff2", "{0}#{1}".format(path, filerev), "{0}@{1}".format(path, revision)])
- if verbose:
- print("p4 diff2 path %s filerev %s revision %s => %s" % (path, filerev, revision, ret))
- return ret["status"] == "identical"
-
- def extractFilesFromCommit(self, commit, shelved=False, shelved_cl = 0, origin_revision = 0):
+ def extractFilesFromCommit(self, commit, shelved=False, shelved_cl = 0):
self.cloneExclude = [re.sub(r"\.\.\.$", "", path)
for path in self.cloneExclude]
files = []
file["type"] = commit["type%s" % fnum]
if shelved:
file["shelved_cl"] = int(shelved_cl)
-
- # For shelved changelists, check that the revision of each file that the
- # shelve was based on matches the revision that we are using for the
- # starting point for git-fast-import (self.initialParent). Otherwise
- # the resulting diff will contain deltas from multiple commits.
-
- if file["action"] != "add" and \
- not self.cmp_shelved(path, file["rev"], origin_revision):
- sys.exit("change {0} not based on {1} for {2}, cannot unshelve".format(
- commit["change"], self.initialParent, path))
-
files.append(file)
fnum = fnum + 1
return files
relPath = self.stripRepoPath(file['depotFile'], self.branchPrefixes)
relPath = self.encodeWithUTF8(relPath)
if verbose:
- size = int(self.stream_file['fileSize'])
+ if 'fileSize' in self.stream_file:
+ size = int(self.stream_file['fileSize'])
+ else:
+ size = 0 # deleted files don't get a fileSize apparently
sys.stdout.write('\r%s --> %s (%i MB)\n' % (file['depotFile'], relPath, size/1024/1024))
sys.stdout.flush()
print('Ignoring file outside of prefix: {0}'.format(path))
return hasPrefix
- def commit(self, details, files, branch, parent = ""):
+ def commit(self, details, files, branch, parent = "", allow_empty=False):
epoch = details["time"]
author = details["user"]
jobs = self.extractJobsFromCommit(details)
files = [f for f in files
if self.inClientSpec(f['path']) and self.hasBranchPrefix(f['path'])]
- if not files and not gitConfigBool('git-p4.keepEmptyCommits'):
+ if gitConfigBool('git-p4.keepEmptyCommits'):
+ allow_empty = True
+
+ if not files and not allow_empty:
print('Ignoring revision {0} as it would produce an empty commit.'
.format(details['change']))
return
else:
return None
- def importChanges(self, changes, shelved=False, origin_revision=0):
+ def importChanges(self, changes, origin_revision=0):
cnt = 1
for change in changes:
- description = p4_describe(change, shelved)
+ description = p4_describe(change)
self.updateOptionDict(description)
if not self.silent:
print("Parent of %s not found. Committing into head of %s" % (branch, parent))
self.commit(description, filesForCommit, branch, parent)
else:
- files = self.extractFilesFromCommit(description, shelved, change, origin_revision)
+ files = self.extractFilesFromCommit(description)
self.commit(description, files, self.branch,
self.initialParent)
# only needed once, to connect to the previous commit
]
self.verbose = False
self.noCommit = False
- self.destbranch = "refs/remotes/p4/unshelved"
+ self.destbranch = "refs/remotes/p4-unshelved"
def renameBranch(self, branch_name):
""" Rename the existing branch to branch_name.N
sys.exit("could not find git-p4 commits in {0}".format(self.origin))
+ def createShelveParent(self, change, branch_name, sync, origin):
+ """ Create a commit matching the parent of the shelved changelist 'change'
+ """
+ parent_description = p4_describe(change, shelved=True)
+ parent_description['desc'] = 'parent for shelved changelist {}\n'.format(change)
+ files = sync.extractFilesFromCommit(parent_description, shelved=False, shelved_cl=change)
+
+ parent_files = []
+ for f in files:
+ # if it was added in the shelved changelist, it won't exist in the parent
+ if f['action'] in self.add_actions:
+ continue
+
+ # if it was deleted in the shelved changelist it must not be deleted
+ # in the parent - we might even need to create it if the origin branch
+ # does not have it
+ if f['action'] in self.delete_actions:
+ f['action'] = 'add'
+
+ parent_files.append(f)
+
+ sync.commit(parent_description, parent_files, branch_name,
+ parent=origin, allow_empty=True)
+ print("created parent commit for {0} based on {1} in {2}".format(
+ change, self.origin, branch_name))
+
def run(self, args):
if len(args) != 1:
return False
sync = P4Sync()
changes = args
- sync.initialParent = self.origin
- # use the first change in the list to construct the branch to unshelve into
+ # only one change at a time
change = changes[0]
# if the target branch already exists, rename it
sync.suppress_meta_comment = True
settings = self.findLastP4Revision(self.origin)
- origin_revision = settings['change']
sync.depotPaths = settings['depot-paths']
sync.branchPrefixes = sync.depotPaths
sync.openStreams()
sync.loadUserMapFromCache()
sync.silent = True
- sync.importChanges(changes, shelved=True, origin_revision=origin_revision)
+
+ # create a commit for the parent of the shelved changelist
+ self.createShelveParent(change, branch_name, sync, self.origin)
+
+ # create the commit for the shelved changelist itself
+ description = p4_describe(change, True)
+ files = sync.extractFilesFromCommit(description, True, change)
+
+ sync.commit(description, files, branch_name, "")
sync.closeStreams()
print("unshelved changelist {0} into {1}".format(change, branch_name))
alias_command = (*argv)[0];
alias_string = alias_lookup(alias_command);
if (alias_string) {
+ if (*argcp > 1 && !strcmp((*argv)[1], "-h"))
+ fprintf_ln(stderr, _("'%s' is aliased to '%s'"),
+ alias_command, alias_string);
if (alias_string[0] == '!') {
struct child_process child = CHILD_PROCESS_INIT;
int nongit_ok;
}
/*
- * Draw an octopus merge and return the number of characters written.
+ * Draw the horizontal dashes of an octopus merge and return the number of
+ * characters written.
*/
static int graph_draw_octopus_merge(struct git_graph *graph,
struct strbuf *sb)
{
/*
- * Here dashless_commits represents the number of parents
- * which don't need to have dashes (because their edges fit
- * neatly under the commit).
- */
- const int dashless_commits = 2;
- int col_num, i;
- int num_dashes =
- ((graph->num_parents - dashless_commits) * 2) - 1;
- for (i = 0; i < num_dashes; i++) {
- col_num = (i / 2) + dashless_commits + graph->commit_index;
- strbuf_write_column(sb, &graph->new_columns[col_num], '-');
+ * Here dashless_parents represents the number of parents which don't
+ * need to have dashes (the edges labeled "0" and "1"). And
+ * dashful_parents are the remaining ones.
+ *
+ * | *---.
+ * | |\ \ \
+ * | | | | |
+ * x 0 1 2 3
+ *
+ */
+ const int dashless_parents = 2;
+ int dashful_parents = graph->num_parents - dashless_parents;
+
+ /*
+ * Usually, we add one new column for each parent (like the diagram
+ * above) but sometimes the first parent goes into an existing column,
+ * like this:
+ *
+ * | *---.
+ * | |\ \ \
+ * |/ / / /
+ * x 0 1 2
+ *
+ * In which case the number of parents will be one greater than the
+ * number of added columns.
+ */
+ int added_cols = (graph->num_new_columns - graph->num_columns);
+ int parent_in_old_cols = graph->num_parents - added_cols;
+
+ /*
+ * In both cases, commit_index corresponds to the edge labeled "0".
+ */
+ int first_col = graph->commit_index + dashless_parents
+ - parent_in_old_cols;
+
+ int i;
+ for (i = 0; i < dashful_parents; i++) {
+ strbuf_write_column(sb, &graph->new_columns[i+first_col], '-');
+ strbuf_write_column(sb, &graph->new_columns[i+first_col],
+ i == dashful_parents-1 ? '.' : '-');
}
- col_num = (i / 2) + dashless_commits + graph->commit_index;
- strbuf_write_column(sb, &graph->new_columns[col_num], '.');
- return num_dashes + 1;
+ return 2 * dashful_parents;
}
static void graph_output_commit_line(struct git_graph *graph, struct strbuf *sb)
static char *cached_accept_language;
+static char *http_ssl_backend;
+
+static int http_schannel_check_revoke = 1;
+/*
+ * With the backend being set to `schannel`, setting sslCAinfo would override
+ * the Certificate Store in cURL v7.60.0 and later, which is not what we want
+ * by default.
+ */
+static int http_schannel_use_ssl_cainfo;
+
size_t fread_buffer(char *ptr, size_t eltsize, size_t nmemb, void *buffer_)
{
size_t size = eltsize * nmemb;
curl_ssl_try = git_config_bool(var, value);
return 0;
}
+ if (!strcmp("http.sslbackend", var)) {
+ free(http_ssl_backend);
+ http_ssl_backend = xstrdup_or_null(value);
+ return 0;
+ }
+
+ if (!strcmp("http.schannelcheckrevoke", var)) {
+ http_schannel_check_revoke = git_config_bool(var, value);
+ return 0;
+ }
+
+ if (!strcmp("http.schannelusesslcainfo", var)) {
+ http_schannel_use_ssl_cainfo = git_config_bool(var, value);
+ return 0;
+ }
+
if (!strcmp("http.minsessions", var)) {
min_curl_sessions = git_config_int(var, value);
#ifndef USE_CURL_MULTI
}
#endif
+ if (http_ssl_backend && !strcmp("schannel", http_ssl_backend) &&
+ !http_schannel_check_revoke) {
+#if LIBCURL_VERSION_NUM >= 0x072c00
+ curl_easy_setopt(result, CURLOPT_SSL_OPTIONS, CURLSSLOPT_NO_REVOKE);
+#else
+ warning("CURLSSLOPT_NO_REVOKE not applied to curl SSL options because\n"
+ "your curl version is too old (< 7.44.0)");
+#endif
+ }
+
if (http_proactive_auth)
init_curl_http_auth(result);
if (ssl_pinnedkey != NULL)
curl_easy_setopt(result, CURLOPT_PINNEDPUBLICKEY, ssl_pinnedkey);
#endif
- if (ssl_cainfo != NULL)
+ if (http_ssl_backend && !strcmp("schannel", http_ssl_backend) &&
+ !http_schannel_use_ssl_cainfo) {
+ curl_easy_setopt(result, CURLOPT_CAINFO, NULL);
+#if LIBCURL_VERSION_NUM >= 0x073400
+ curl_easy_setopt(result, CURLOPT_PROXY_CAINFO, NULL);
+#endif
+ } else if (ssl_cainfo != NULL)
curl_easy_setopt(result, CURLOPT_CAINFO, ssl_cainfo);
if (curl_low_speed_limit > 0 && curl_low_speed_time > 0) {
git_config(urlmatch_config_entry, &config);
free(normalized_url);
+#if LIBCURL_VERSION_NUM >= 0x073800
+ if (http_ssl_backend) {
+ const curl_ssl_backend **backends;
+ struct strbuf buf = STRBUF_INIT;
+ int i;
+
+ switch (curl_global_sslset(-1, http_ssl_backend, &backends)) {
+ case CURLSSLSET_UNKNOWN_BACKEND:
+ strbuf_addf(&buf, _("Unsupported SSL backend '%s'. "
+ "Supported SSL backends:"),
+ http_ssl_backend);
+ for (i = 0; backends[i]; i++)
+ strbuf_addf(&buf, "\n\t%s", backends[i]->name);
+ die("%s", buf.buf);
+ case CURLSSLSET_NO_BACKENDS:
+ die(_("Could not set SSL backend to '%s': "
+ "cURL was built without SSL backends"),
+ http_ssl_backend);
+ case CURLSSLSET_TOO_LATE:
+ die(_("Could not set SSL backend to '%s': already set"),
+ http_ssl_backend);
+ case CURLSSLSET_OK:
+ break; /* Okay! */
+ }
+ }
+#endif
+
if (curl_global_init(CURL_GLOBAL_ALL) != CURLE_OK)
die("curl_global_init failed");
strbuf_addstr(&git_default_email, email);
committer_ident_explicitly_given |= IDENT_MAIL_GIVEN;
author_ident_explicitly_given |= IDENT_MAIL_GIVEN;
+ } else if ((email = query_user_email()) && email[0]) {
+ strbuf_addstr(&git_default_email, email);
+ free((char *)email);
} else
copy_email(xgetpwuid_self(&default_email_is_bogus),
&git_default_email, &default_email_is_bogus);
static int check_packed_git_idx(const char *path, struct packed_git *p)
{
void *idx_map;
- struct pack_idx_header *hdr;
size_t idx_size;
- uint32_t version, nr, i, *index;
- int fd = git_open(path);
+ int fd = git_open(path), ret;
struct stat st;
const unsigned int hashsz = the_hash_algo->rawsz;
idx_map = xmmap(NULL, idx_size, PROT_READ, MAP_PRIVATE, fd, 0);
close(fd);
- hdr = idx_map;
+ ret = load_idx(path, hashsz, idx_map, idx_size, p);
+
+ if (ret)
+ munmap(idx_map, idx_size);
+
+ return ret;
+}
+
+int load_idx(const char *path, const unsigned int hashsz, void *idx_map,
+ size_t idx_size, struct packed_git *p)
+{
+ struct pack_idx_header *hdr = idx_map;
+ uint32_t version, nr, i, *index;
+
+ if (idx_size < 4 * 256 + hashsz + hashsz)
+ return error("index file %s is too small", path);
+ if (idx_map == NULL)
+ return error("empty data");
+
if (hdr->idx_signature == htonl(PACK_IDX_SIGNATURE)) {
version = ntohl(hdr->idx_version);
- if (version < 2 || version > 2) {
- munmap(idx_map, idx_size);
+ if (version < 2 || version > 2)
return error("index file %s is version %"PRIu32
" and is not supported by this binary"
" (try upgrading GIT to a newer version)",
path, version);
- }
} else
version = 1;
index += 2; /* skip index header */
for (i = 0; i < 256; i++) {
uint32_t n = ntohl(index[i]);
- if (n < nr) {
- munmap(idx_map, idx_size);
+ if (n < nr)
return error("non-monotonic index %s", path);
- }
nr = n;
}
* - hash of the packfile
* - file checksum
*/
- if (idx_size != 4*256 + nr * (hashsz + 4) + hashsz + hashsz) {
- munmap(idx_map, idx_size);
+ if (idx_size != 4 * 256 + nr * (hashsz + 4) + hashsz + hashsz)
return error("wrong index v1 file size in %s", path);
- }
} else if (version == 2) {
/*
* Minimum size:
unsigned long max_size = min_size;
if (nr)
max_size += (nr - 1)*8;
- if (idx_size < min_size || idx_size > max_size) {
- munmap(idx_map, idx_size);
+ if (idx_size < min_size || idx_size > max_size)
return error("wrong index v2 file size in %s", path);
- }
if (idx_size != min_size &&
/*
* make sure we can deal with large pack offsets.
* 31-bit signed offset won't be enough, neither
* 32-bit unsigned one will be.
*/
- (sizeof(off_t) <= 4)) {
- munmap(idx_map, idx_size);
+ (sizeof(off_t) <= 4))
return error("pack too large for current definition of off_t in %s", path);
- }
}
p->index_version = version;
*/
extern int is_promisor_object(const struct object_id *oid);
+/*
+ * Expose a function for fuzz testing.
+ *
+ * load_idx() parses a block of memory as a packfile index and puts the results
+ * into a struct packed_git.
+ *
+ * This function should not be used directly. It is exposed here only so that we
+ * have a convenient entry-point for fuzz testing. For real uses, you should
+ * probably use open_pack_index() or parse_pack_index() instead.
+ */
+extern int load_idx(const char *path, const unsigned int hashsz, void *idx_map,
+ size_t idx_size, struct packed_git *p);
+
#endif
);
}
-static int is_racy_timestamp(const struct index_state *istate,
+int is_racy_timestamp(const struct index_state *istate,
const struct cache_entry *ce)
{
return (!S_ISGITLINK(ce->ce_mode) &&
die("position for delete %d exceeds base index size %d",
(int)pos, istate->cache_nr);
istate->cache[pos]->ce_flags |= CE_REMOVE;
- istate->split_index->nr_deletions = 1;
+ istate->split_index->nr_deletions++;
}
static void replace_entry(size_t pos, void *data)
si->saved_cache_nr = 0;
}
+/*
+ * Compare most of the fields in two cache entries, i.e. all except the
+ * hashmap_entry and the name.
+ */
+static int compare_ce_content(struct cache_entry *a, struct cache_entry *b)
+{
+ const unsigned int ondisk_flags = CE_STAGEMASK | CE_VALID |
+ CE_EXTENDED_FLAGS;
+ unsigned int ce_flags = a->ce_flags;
+ unsigned int base_flags = b->ce_flags;
+ int ret;
+
+ /* only on-disk flags matter */
+ a->ce_flags &= ondisk_flags;
+ b->ce_flags &= ondisk_flags;
+ ret = memcmp(&a->ce_stat_data, &b->ce_stat_data,
+ offsetof(struct cache_entry, name) -
+ offsetof(struct cache_entry, ce_stat_data));
+ a->ce_flags = ce_flags;
+ b->ce_flags = base_flags;
+
+ return ret;
+}
+
void prepare_to_write_split_index(struct index_state *istate)
{
struct split_index *si = init_split_index(istate);
*/
for (i = 0; i < istate->cache_nr; i++) {
struct cache_entry *base;
- /* namelen is checked separately */
- const unsigned int ondisk_flags =
- CE_STAGEMASK | CE_VALID | CE_EXTENDED_FLAGS;
- unsigned int ce_flags, base_flags, ret;
ce = istate->cache[i];
- if (!ce->index)
+ if (!ce->index) {
+ /*
+ * During simple update index operations this
+ * is a cache entry that is not present in
+ * the shared index. It will be added to the
+ * split index.
+ *
+ * However, it might also represent a file
+ * that already has a cache entry in the
+ * shared index, but a new index has just
+ * been constructed by unpack_trees(), and
+ * this entry now refers to different content
+ * than what was recorded in the original
+ * index, e.g. during 'read-tree -m HEAD^' or
+ * 'checkout HEAD^'. In this case the
+ * original entry in the shared index will be
+ * marked as deleted, and this entry will be
+ * added to the split index.
+ */
continue;
+ }
if (ce->index > si->base->cache_nr) {
- ce->index = 0;
- continue;
+ BUG("ce refers to a shared ce at %d, which is beyond the shared index size %d",
+ ce->index, si->base->cache_nr);
}
ce->ce_flags |= CE_MATCHED; /* or "shared" */
base = si->base->cache[ce->index - 1];
- if (ce == base)
+ if (ce == base) {
+ /* The entry is present in the shared index. */
+ if (ce->ce_flags & CE_UPDATE_IN_BASE) {
+ /*
+ * Already marked for inclusion in
+ * the split index, either because
+ * the corresponding file was
+ * modified and the cached stat data
+ * was refreshed, or because there
+ * is already a replacement entry in
+ * the split index.
+ * Nothing more to do here.
+ */
+ } else if (!ce_uptodate(ce) &&
+ is_racy_timestamp(istate, ce)) {
+ /*
+ * A racily clean cache entry stored
+ * only in the shared index: it must
+ * be added to the split index, so
+ * the subsequent do_write_index()
+ * can smudge its stat data.
+ */
+ ce->ce_flags |= CE_UPDATE_IN_BASE;
+ } else {
+ /*
+ * The entry is only present in the
+ * shared index and it was not
+ * refreshed.
+ * Just leave it there.
+ */
+ }
continue;
+ }
if (ce->ce_namelen != base->ce_namelen ||
strcmp(ce->name, base->name)) {
ce->index = 0;
continue;
}
- ce_flags = ce->ce_flags;
- base_flags = base->ce_flags;
- /* only on-disk flags matter */
- ce->ce_flags &= ondisk_flags;
- base->ce_flags &= ondisk_flags;
- ret = memcmp(&ce->ce_stat_data, &base->ce_stat_data,
- offsetof(struct cache_entry, name) -
- offsetof(struct cache_entry, ce_stat_data));
- ce->ce_flags = ce_flags;
- base->ce_flags = base_flags;
- if (ret)
+ /*
+ * This is the copy of a cache entry that is present
+ * in the shared index, created by unpack_trees()
+ * while it constructed a new index.
+ */
+ if (ce->ce_flags & CE_UPDATE_IN_BASE) {
+ /*
+ * Already marked for inclusion in the split
+ * index, either because the corresponding
+ * file was modified and the cached stat data
+ * was refreshed, or because the original
+ * entry already had a replacement entry in
+ * the split index.
+ * Nothing to do.
+ */
+ } else if (!ce_uptodate(ce) &&
+ is_racy_timestamp(istate, ce)) {
+ /*
+ * A copy of a racily clean cache entry from
+ * the shared index. It must be added to
+ * the split index, so the subsequent
+ * do_write_index() can smudge its stat data.
+ */
ce->ce_flags |= CE_UPDATE_IN_BASE;
+ } else {
+ /*
+ * Thoroughly compare the cached data to see
+ * whether it should be marked for inclusion
+ * in the split index.
+ *
+ * This comparison might be unnecessary, as
+ * code paths modifying the cached data do
+ * set CE_UPDATE_IN_BASE as well.
+ */
+ if (compare_ce_content(ce, base))
+ ce->ce_flags |= CE_UPDATE_IN_BASE;
+ }
discard_cache_entry(base);
si->base->cache[ce->index - 1] = ce;
}
sane_unset GIT_TEST_FSMONITOR
sane_unset GIT_TEST_INDEX_THREADS
+# Create a file named as $1 with content read from stdin.
+# Set the file's mtime to a few seconds in the past to avoid racy situations.
+create_non_racy_file () {
+ cat >"$1" &&
+ test-tool chmtime =-5 "$1"
+}
+
test_expect_success 'enable split index' '
git config splitIndex.maxPercentChange 100 &&
git update-index --split-index &&
'
test_expect_success 'add one file' '
- : >one &&
+ create_non_racy_file one &&
git update-index --add one &&
git ls-files --stage >ls-files.actual &&
cat >ls-files.expect <<-EOF &&
'
test_expect_success 'modify original file, base index untouched' '
- echo modified >one &&
+ echo modified | create_non_racy_file one &&
git update-index one &&
git ls-files --stage >ls-files.actual &&
cat >ls-files.expect <<-EOF &&
'
test_expect_success 'add another file, which stays index' '
- : >two &&
+ create_non_racy_file two &&
git update-index --add two &&
git ls-files --stage >ls-files.actual &&
cat >ls-files.expect <<-EOF &&
'
test_expect_success 'add original file back' '
- : >one &&
+ create_non_racy_file one &&
git update-index --add one &&
git ls-files --stage >ls-files.actual &&
cat >ls-files.expect <<-EOF &&
'
test_expect_success 'add new file' '
- : >two &&
+ create_non_racy_file two &&
git update-index --add two &&
git ls-files --stage >actual &&
cat >expect <<-EOF &&
test_expect_success 'set core.splitIndex config variable to true' '
git config core.splitIndex true &&
- : >three &&
+ create_non_racy_file three &&
git update-index --add three &&
git ls-files --stage >ls-files.actual &&
cat >ls-files.expect <<-EOF &&
test_cmp expect actual
'
-test_expect_success 'set core.splitIndex config variable to true' '
+test_expect_success 'set core.splitIndex config variable back to true' '
git config core.splitIndex true &&
- : >three &&
+ create_non_racy_file three &&
git update-index --add three &&
BASE=$(test-tool dump-split-index .git/index | grep "^base") &&
test-tool dump-split-index .git/index | sed "/^own/d" >actual &&
deletions:
EOF
test_cmp expect actual &&
- : >four &&
+ create_non_racy_file four &&
git update-index --add four &&
test-tool dump-split-index .git/index | sed "/^own/d" >actual &&
cat >expect <<-EOF &&
test_expect_success 'check behavior with splitIndex.maxPercentChange unset' '
git config --unset splitIndex.maxPercentChange &&
- : >five &&
+ create_non_racy_file five &&
git update-index --add five &&
BASE=$(test-tool dump-split-index .git/index | grep "^base") &&
test-tool dump-split-index .git/index | sed "/^own/d" >actual &&
deletions:
EOF
test_cmp expect actual &&
- : >six &&
+ create_non_racy_file six &&
git update-index --add six &&
test-tool dump-split-index .git/index | sed "/^own/d" >actual &&
cat >expect <<-EOF &&
test_expect_success 'check splitIndex.maxPercentChange set to 0' '
git config splitIndex.maxPercentChange 0 &&
- : >seven &&
+ create_non_racy_file seven &&
git update-index --add seven &&
BASE=$(test-tool dump-split-index .git/index | grep "^base") &&
test-tool dump-split-index .git/index | sed "/^own/d" >actual &&
deletions:
EOF
test_cmp expect actual &&
- : >eight &&
+ create_non_racy_file eight &&
git update-index --add eight &&
BASE=$(test-tool dump-split-index .git/index | grep "^base") &&
test-tool dump-split-index .git/index | sed "/^own/d" >actual &&
'
test_expect_success 'shared index files expire after 2 weeks by default' '
- : >ten &&
+ create_non_racy_file ten &&
git update-index --add ten &&
test $(ls .git/sharedindex.* | wc -l) -gt 2 &&
just_under_2_weeks_ago=$((5-14*86400)) &&
test-tool chmtime =$just_under_2_weeks_ago .git/sharedindex.* &&
- : >eleven &&
+ create_non_racy_file eleven &&
git update-index --add eleven &&
test $(ls .git/sharedindex.* | wc -l) -gt 2 &&
just_over_2_weeks_ago=$((-1-14*86400)) &&
test-tool chmtime =$just_over_2_weeks_ago .git/sharedindex.* &&
- : >twelve &&
+ create_non_racy_file twelve &&
git update-index --add twelve &&
test $(ls .git/sharedindex.* | wc -l) -le 2
'
test_expect_success 'check splitIndex.sharedIndexExpire set to 16 days' '
git config splitIndex.sharedIndexExpire "16.days.ago" &&
test-tool chmtime =$just_over_2_weeks_ago .git/sharedindex.* &&
- : >thirteen &&
+ create_non_racy_file thirteen &&
git update-index --add thirteen &&
test $(ls .git/sharedindex.* | wc -l) -gt 2 &&
just_over_16_days_ago=$((-1-16*86400)) &&
test-tool chmtime =$just_over_16_days_ago .git/sharedindex.* &&
- : >fourteen &&
+ create_non_racy_file fourteen &&
git update-index --add fourteen &&
test $(ls .git/sharedindex.* | wc -l) -le 2
'
git config splitIndex.sharedIndexExpire never &&
just_10_years_ago=$((-365*10*86400)) &&
test-tool chmtime =$just_10_years_ago .git/sharedindex.* &&
- : >fifteen &&
+ create_non_racy_file fifteen &&
git update-index --add fifteen &&
test $(ls .git/sharedindex.* | wc -l) -gt 2 &&
git config splitIndex.sharedIndexExpire now &&
just_1_second_ago=-1 &&
test-tool chmtime =$just_1_second_ago .git/sharedindex.* &&
- : >sixteen &&
+ create_non_racy_file sixteen &&
git update-index --add sixteen &&
test $(ls .git/sharedindex.* | wc -l) -le 2
'
# Create one new shared index file
git config core.sharedrepository "$mode" &&
git config core.splitIndex true &&
- : >one &&
+ create_non_racy_file one &&
git update-index --add one &&
echo "$modebits" >expect &&
test_modebits .git/index >actual &&
--- /dev/null
+#!/bin/sh
+
+# This test can give false success if your machine is sufficiently
+# slow, or if all trials happen to fall on second boundaries.
+
+test_description='racy split index'
+
+. ./test-lib.sh
+
+test_expect_success 'setup' '
+ # Only split the index when the test explicitly says so.
+ sane_unset GIT_TEST_SPLIT_INDEX &&
+ git config splitIndex.maxPercentChange 100 &&
+
+ echo "cached content" >racy-file &&
+ git add racy-file &&
+ git commit -m initial &&
+
+ echo something >other-file &&
+ # No raciness with this file.
+ test-tool chmtime =-20 other-file &&
+
+ echo "+cached content" >expect
+'
+
+check_cached_diff () {
+ git diff-index --patch --cached $EMPTY_TREE racy-file >diff &&
+ tail -1 diff >actual &&
+ test_cmp expect actual
+}
+
+trials="0 1 2 3 4"
+for trial in $trials
+do
+ test_expect_success "split the index while adding a racily clean file #$trial" '
+ rm -f .git/index .git/sharedindex.* &&
+
+ # The next three commands must be run within the same
+ # second (so both writes to racy-file result in the same
+ # mtime) to create the interesting racy situation.
+ echo "cached content" >racy-file &&
+
+ # Update and split the index. The cache entry of
+ # racy-file will be stored only in the shared index.
+ git update-index --split-index --add racy-file &&
+
+ # File size must stay the same.
+ echo "dirty worktree" >racy-file &&
+
+ # Subsequent git commands should notice that racy-file
+ # and the split index have the same mtime, and check
+ # the content of the file to see if it is actually
+ # clean.
+ check_cached_diff
+ '
+done
+
+for trial in $trials
+do
+ test_expect_success "add a racily clean file to an already split index #$trial" '
+ rm -f .git/index .git/sharedindex.* &&
+
+ git update-index --split-index &&
+
+ # The next three commands must be run within the same
+ # second.
+ echo "cached content" >racy-file &&
+
+ # Update the split index. The cache entry of racy-file
+ # will be stored only in the split index.
+ git update-index --add racy-file &&
+
+ # File size must stay the same.
+ echo "dirty worktree" >racy-file &&
+
+ # Subsequent git commands should notice that racy-file
+ # and the split index have the same mtime, and check
+ # the content of the file to see if it is actually
+ # clean.
+ check_cached_diff
+ '
+done
+
+for trial in $trials
+do
+ test_expect_success "split the index when the index contains a racily clean cache entry #$trial" '
+ rm -f .git/index .git/sharedindex.* &&
+
+ # The next three commands must be run within the same
+ # second.
+ echo "cached content" >racy-file &&
+
+ git update-index --add racy-file &&
+
+ # File size must stay the same.
+ echo "dirty worktree" >racy-file &&
+
+ # Now wait a bit to ensure that the split index written
+ # below will get a more recent mtime than racy-file.
+ sleep 1 &&
+
+ # Update and split the index when the index contains
+ # the racily clean cache entry of racy-file.
+ # A corresponding replacement cache entry with smudged
+ # stat data should be added to the new split index.
+ git update-index --split-index --add other-file &&
+
+ # Subsequent git commands should notice the smudged
+ # stat data in the replacement cache entry and that it
+ # doesn't match the file in the worktree.
+ check_cached_diff
+ '
+done
+
+for trial in $trials
+do
+ test_expect_success "update the split index when it contains a new racily clean cache entry #$trial" '
+ rm -f .git/index .git/sharedindex.* &&
+
+ git update-index --split-index &&
+
+ # The next three commands must be run within the same
+ # second.
+ echo "cached content" >racy-file &&
+
+ # Update the split index. The cache entry of racy-file
+ # will be stored only in the split index.
+ git update-index --add racy-file &&
+
+ # File size must stay the same.
+ echo "dirty worktree" >racy-file &&
+
+ # Now wait a bit to ensure that the split index written
+ # below will get a more recent mtime than racy-file.
+ sleep 1 &&
+
+ # Update the split index when the racily clean cache
+ # entry of racy-file is only stored in the split index.
+ # An updated cache entry with smudged stat data should
+ # be added to the new split index.
+ git update-index --add other-file &&
+
+ # Subsequent git commands should notice the smudged
+ # stat data.
+ check_cached_diff
+ '
+done
+
+for trial in $trials
+do
+ test_expect_success "update the split index when a racily clean cache entry is stored only in the shared index #$trial" '
+ rm -f .git/index .git/sharedindex.* &&
+
+ # The next three commands must be run within the same
+ # second.
+ echo "cached content" >racy-file &&
+
+ # Update and split the index. The cache entry of
+ # racy-file will be stored only in the shared index.
+ git update-index --split-index --add racy-file &&
+
+ # File size must stay the same.
+ echo "dirty worktree" >racy-file &&
+
+ # Now wait a bit to ensure that the split index written
+ # below will get a more recent mtime than racy-file.
+ sleep 1 &&
+
+ # Update the split index when the racily clean cache
+ # entry of racy-file is only stored in the shared index.
+ # A corresponding replacement cache entry with smudged
+ # stat data should be added to the new split index.
+ git update-index --add other-file &&
+
+ # Subsequent git commands should notice the smudged
+ # stat data.
+ check_cached_diff
+ '
+done
+
+for trial in $trials
+do
+ test_expect_success "update the split index after unpack_trees() copied a racily clean cache entry from the shared index #$trial" '
+ rm -f .git/index .git/sharedindex.* &&
+
+ # The next three commands must be run within the same
+ # second.
+ echo "cached content" >racy-file &&
+
+ # Update and split the index. The cache entry of
+ # racy-file will be stored only in the shared index.
+ git update-index --split-index --add racy-file &&
+
+ # File size must stay the same.
+ echo "dirty worktree" >racy-file &&
+
+ # Now wait a bit to ensure that the split index written
+ # below will get a more recent mtime than racy-file.
+ sleep 1 &&
+
+ # Update the split index after unpack_trees() copied the
+ # racily clean cache entry of racy-file from the shared
+ # index. A corresponding replacement cache entry
+ # with smudged stat data should be added to the new
+ # split index.
+ git read-tree -m HEAD &&
+
+ # Subsequent git commands should notice the smudged
+ # stat data.
+ check_cached_diff
+ '
+done
+
+test_done
--- /dev/null
+#!/bin/sh
+
+test_description='git log --graph of skewed left octopus merge.'
+
+. ./test-lib.sh
+
+test_expect_success 'set up merge history' '
+ cat >expect.uncolored <<-\EOF &&
+ * left
+ | *---. octopus-merge
+ | |\ \ \
+ |/ / / /
+ | | | * 4
+ | | * | 3
+ | | |/
+ | * | 2
+ | |/
+ * | 1
+ |/
+ * initial
+ EOF
+ cat >expect.colors <<-\EOF &&
+ * left
+ <RED>|<RESET> *<BLUE>-<RESET><BLUE>-<RESET><MAGENTA>-<RESET><MAGENTA>.<RESET> octopus-merge
+ <RED>|<RESET> <RED>|<RESET><YELLOW>\<RESET> <BLUE>\<RESET> <MAGENTA>\<RESET>
+ <RED>|<RESET><RED>/<RESET> <YELLOW>/<RESET> <BLUE>/<RESET> <MAGENTA>/<RESET>
+ <RED>|<RESET> <YELLOW>|<RESET> <BLUE>|<RESET> * 4
+ <RED>|<RESET> <YELLOW>|<RESET> * <MAGENTA>|<RESET> 3
+ <RED>|<RESET> <YELLOW>|<RESET> <MAGENTA>|<RESET><MAGENTA>/<RESET>
+ <RED>|<RESET> * <MAGENTA>|<RESET> 2
+ <RED>|<RESET> <MAGENTA>|<RESET><MAGENTA>/<RESET>
+ * <MAGENTA>|<RESET> 1
+ <MAGENTA>|<RESET><MAGENTA>/<RESET>
+ * initial
+ EOF
+ test_commit initial &&
+ for i in 1 2 3 4 ; do
+ git checkout master -b $i || return $?
+ # Make tag name different from branch name, to avoid
+ # an ambiguity error when calling checkout.
+ test_commit $i $i $i tag$i || return $?
+ done &&
+ git checkout 1 -b merge &&
+ test_tick &&
+ git merge -m octopus-merge 1 2 3 4 &&
+ git checkout 1 -b L &&
+ test_commit left
+'
+
+test_expect_success 'log --graph with tricky octopus merge with colors' '
+ test_config log.graphColors red,green,yellow,blue,magenta,cyan &&
+ git log --color=always --graph --date-order --pretty=tformat:%s --all >actual.colors.raw &&
+ test_decode_color <actual.colors.raw | sed "s/ *\$//" >actual.colors &&
+ test_cmp expect.colors actual.colors
+'
+
+test_expect_success 'log --graph with tricky octopus merge, no color' '
+ git log --color=never --graph --date-order --pretty=tformat:%s --all >actual.raw &&
+ sed "s/ *\$//" actual.raw >actual &&
+ test_cmp expect.uncolored actual
+'
+
+# Repeat the previous two tests with "normal" octopus merge (i.e.,
+# without the first parent skewing to the "left" branch column).
+
+test_expect_success 'log --graph with normal octopus merge, no color' '
+ cat >expect.uncolored <<-\EOF &&
+ *---. octopus-merge
+ |\ \ \
+ | | | * 4
+ | | * | 3
+ | | |/
+ | * | 2
+ | |/
+ * | 1
+ |/
+ * initial
+ EOF
+ git log --color=never --graph --date-order --pretty=tformat:%s merge >actual.raw &&
+ sed "s/ *\$//" actual.raw >actual &&
+ test_cmp expect.uncolored actual
+'
+
+test_expect_success 'log --graph with normal octopus merge with colors' '
+ cat >expect.colors <<-\EOF &&
+ *<YELLOW>-<RESET><YELLOW>-<RESET><BLUE>-<RESET><BLUE>.<RESET> octopus-merge
+ <RED>|<RESET><GREEN>\<RESET> <YELLOW>\<RESET> <BLUE>\<RESET>
+ <RED>|<RESET> <GREEN>|<RESET> <YELLOW>|<RESET> * 4
+ <RED>|<RESET> <GREEN>|<RESET> * <BLUE>|<RESET> 3
+ <RED>|<RESET> <GREEN>|<RESET> <BLUE>|<RESET><BLUE>/<RESET>
+ <RED>|<RESET> * <BLUE>|<RESET> 2
+ <RED>|<RESET> <BLUE>|<RESET><BLUE>/<RESET>
+ * <BLUE>|<RESET> 1
+ <BLUE>|<RESET><BLUE>/<RESET>
+ * initial
+ EOF
+ test_config log.graphColors red,green,yellow,blue,magenta,cyan &&
+ git log --color=always --graph --date-order --pretty=tformat:%s merge >actual.colors.raw &&
+ test_decode_color <actual.colors.raw | sed "s/ *\$//" >actual.colors &&
+ test_cmp expect.colors actual.colors
+'
+test_done
done
test_expect_success 'editor with a space' '
- echo "echo space >\$1" >"e space.sh" &&
+ echo "echo space >\"\$1\"" >"e space.sh" &&
chmod a+x "e space.sh" &&
GIT_EDITOR="./e\ space.sh" git commit --amend &&
test space = "$(git show -s --pretty=format:%s)"
p4 add file1 &&
p4 submit -d "change 1" &&
: >file_to_delete &&
+ : >file_to_move &&
p4 add file_to_delete &&
- p4 submit -d "file to delete"
+ p4 add file_to_move &&
+ p4 submit -d "add files to delete"
)
'
echo "new file" >file2 &&
p4 add file2 &&
p4 delete file_to_delete &&
+ p4 edit file_to_move &&
+ p4 move file_to_move moved_file &&
p4 opened &&
p4 shelve -i <<EOF
Change: new
//depot/file1
//depot/file2
//depot/file_to_delete
+ //depot/file_to_move
+ //depot/moved_file
EOF
) &&
cd "$git" &&
change=$(last_shelved_change) &&
git p4 unshelve $change &&
- git show refs/remotes/p4/unshelved/$change | grep -q "Further description" &&
- git cherry-pick refs/remotes/p4/unshelved/$change &&
+ git show refs/remotes/p4-unshelved/$change | grep -q "Further description" &&
+ git cherry-pick refs/remotes/p4-unshelved/$change &&
test_path_is_file file2 &&
test_cmp file1 "$cli"/file1 &&
test_cmp file2 "$cli"/file2 &&
- test_path_is_missing file_to_delete
+ test_path_is_missing file_to_delete &&
+ test_path_is_missing file_to_move &&
+ test_path_is_file moved_file
)
'
cd "$git" &&
change=$(last_shelved_change) &&
git p4 unshelve $change &&
- git diff refs/remotes/p4/unshelved/$change.0 refs/remotes/p4/unshelved/$change | grep -q file3
+ git diff refs/remotes/p4-unshelved/$change.0 refs/remotes/p4-unshelved/$change | grep -q file3
)
'
+shelve_one_file () {
+ description="Change to be unshelved" &&
+ file="$1" &&
+ p4 shelve -i <<EOF
+Change: new
+Description:
+ $description
+Files:
+ $file
+EOF
+}
+
# This is the tricky case where the shelved changelist base revision doesn't
# match git-p4's idea of the base revision
#
p4 submit -d "change:foo" &&
p4 edit file1 &&
echo "bar" >>file1 &&
- p4 shelve -i <<EOF &&
-Change: new
-Description:
- Change to be unshelved
-Files:
- //depot/file1
-EOF
+ shelve_one_file //depot/file1 &&
change=$(last_shelved_change) &&
- p4 describe -S $change | grep -q "Change to be unshelved"
+ p4 describe -S $change >out.txt &&
+ grep -q "Change to be unshelved" out.txt
)
'
-# Now try to unshelve it. git-p4 should refuse to do so.
+# Now try to unshelve it.
test_expect_success 'try to unshelve the change' '
test_when_finished cleanup_git &&
(
change=$(last_shelved_change) &&
cd "$git" &&
- test_must_fail git p4 unshelve $change 2>out.txt &&
- grep -q "cannot unshelve" out.txt
+ git p4 unshelve $change >out.txt &&
+ grep -q "unshelved changelist $change" out.txt
)
'
+# Specify the origin. Create 2 unrelated files, and check that
+# we only get the one in HEAD~, not the one in HEAD.
+
+test_expect_success 'unshelve specifying the origin' '
+ (
+ cd "$cli" &&
+ : >unrelated_file0 &&
+ p4 add unrelated_file0 &&
+ p4 submit -d "unrelated" &&
+ : >unrelated_file1 &&
+ p4 add unrelated_file1 &&
+ p4 submit -d "unrelated" &&
+ : >file_to_shelve &&
+ p4 add file_to_shelve &&
+ shelve_one_file //depot/file_to_shelve
+ ) &&
+ test_when_finished cleanup_git &&
+ git p4 clone --dest="$git" //depot/@all &&
+ (
+ cd "$git" &&
+ change=$(last_shelved_change) &&
+ git p4 unshelve --origin HEAD~ $change &&
+ git checkout refs/remotes/p4-unshelved/$change &&
+ test_path_is_file unrelated_file0 &&
+ test_path_is_missing unrelated_file1 &&
+ test_path_is_file file_to_shelve
+ )
+'
test_expect_success 'kill p4d' '
kill_p4d
'