--- /dev/null
+# Suppressions for ThreadSanitizer (tsan).
+#
+# This file is used by setting the environment variable TSAN_OPTIONS to, e.g.,
+# "suppressions=$(pwd)/.tsan-suppressions". Observe that relative paths such as
+# ".tsan-suppressions" might not work.
+
+# A static variable is written to racily, but we always write the same value, so
+# in practice it (hopefully!) doesn't matter.
+race:^want_color$
+race:^transfer_debug$
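
One way to exercise these suppressions, sketched as an illustrative invocation (run from the top of the source tree so that $(pwd) expands to an absolute path; the build is assumed to have been compiled with ThreadSanitizer enabled):

    $ export TSAN_OPTIONS="suppressions=$(pwd)/.tsan-suppressions"
    $ make test
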
disables it is in effect), make sure the patch is
applicable to what the current index file records. If
the file to be patched in the working tree is not
- up-to-date, it is flagged as an error. This flag also
+ up to date, it is flagged as an error. This flag also
causes the index file to be updated.
--cached::
If `--index` is specified (explicitly or implicitly), then the submodule
commits must match the index exactly for the patch to apply. If any
of the submodules are checked out, then these check-outs are completely
-ignored, i.e., they are not required to be up-to-date or clean and they
+ignored, i.e., they are not required to be up to date or clean and they
are not updated.
If `--index` is not specified, then the submodule commits in the patch
branch.autoSetupMerge configuration variable is true.
--set-upstream::
- If specified branch does not exist yet or if `--force` has been
- given, acts exactly like `--track`. Otherwise sets up configuration
- like `--track` would when creating the branch, except that where
- branch points to is not changed.
+ As this option had confusing syntax, it is no longer supported.
+ Please use `--track` or `--set-upstream-to` instead.
-u <upstream>::
--set-upstream-to=<upstream>::
That means that even if you offer only read access (e.g. by using
the pserver method), 'git-cvsserver' should have write access to
the database to work reliably (otherwise you need to make sure
-that the database is up-to-date any time 'git-cvsserver' is executed).
+that the database is up to date any time 'git-cvsserver' is executed).
By default it uses SQLite databases in the Git directory, named
`gitcvs.<module_name>.sqlite`. Note that the SQLite backend creates
The non-cached version asks the question:
show me the differences between HEAD and the currently checked out
- tree - index contents _and_ files that aren't up-to-date
+ tree - index contents _and_ files that aren't up to date
which is obviously a very useful question too, since that tells you what
you *could* commit. Again, the output matches the 'git diff-tree -r'
torvalds@ppc970:~/v2.6/linux> git diff-index --abbrev HEAD
:100644 100664 7476bb... 000000... kernel/sched.c
-i.e., it shows that the tree has changed, and that `kernel/sched.c` has is
-not up-to-date and may contain new stuff. The all-zero sha1 means that to
+i.e., it shows that the tree has changed, and that `kernel/sched.c` is
+not up to date and may contain new stuff. The all-zero sha1 means that to
get the real diff, you need to look at the object in the working directory
directly rather than do an object-to-object diff.
would result from the merge already.)
If all named commits are already ancestors of `HEAD`, 'git merge'
-will exit early with the message "Already up-to-date."
+will exit early with the message "Already up to date."
FAST-FORWARD MERGE
------------------
-f::
--force-rebase::
- Force a rebase even if the current branch is up-to-date and
+ Force a rebase even if the current branch is up to date and
the command without `--force` would return without doing anything.
+
You may find this (or --no-ff with an interactive rebase) helpful after
------------
you could run `git rebase master topic`, to bring yourself
-up-to-date before your topic is ready to be sent upstream.
+up to date before your topic is ready to be sent upstream.
This would result in falling back to a three-way merge, and it
would conflict the same way as the test merge you resolved earlier.
'git rerere' will be run by 'git rebase' to help you resolve this
in the linkgit:gitmodules[5] file will also be removed and that file
will be staged (unless --cached or -n are used).
-A submodule is considered up-to-date when the HEAD is the same as
+A submodule is considered up to date when the HEAD is the same as
recorded in the index, no tracked files are modified and no untracked
files that aren't ignored are present in the submodule's work tree.
Ignored files are deemed expendable and won't stop a submodule's work
'set-tree'::
You should consider using 'dcommit' instead of this command.
Commit specified commit or tree objects to SVN. This relies on
- your imported fetch data being up-to-date. This makes
+ your imported fetch data being up to date. This makes
absolutely no attempts to do patching when committing to SVN; it
simply overwrites files with those specified in the tree or
commit. All merging is assumed to have taken place
Using --refresh
---------------
`--refresh` does not calculate a new sha1 file or bring the index
-up-to-date for mode/content changes. But what it *does* do is to
+up to date for mode/content changes. But what it *does* do is to
"re-match" the stat information of a file with the index, so that you
can refresh the index for a file that hasn't been changed but where
the stat entry is out of date.
$ git update-index --refresh
----------------
+
-in the new repository to make sure that the index file is up-to-date.
+in the new repository to make sure that the index file is up to date.
Note that the second point is true even across machines. You can
duplicate a remote Git repository with *any* regular copy mechanism, be it
----------------
where the `-u` flag means that you want the checkout to keep the index
-up-to-date (so that you don't have to refresh it afterward), and the
+up to date (so that you don't have to refresh it afterward), and the
`-a` flag means "check out all files" (if you have a stale copy or an
older version of a checked out tree you may also need to add the `-f`
flag first, to tell 'git checkout-index' to *force* overwriting of any old
First, you need to create an empty repository on the remote
machine that will house your public repository. This empty
-repository will be populated and be kept up-to-date by pushing
+repository will be populated and be kept up to date by pushing
into it later. Obviously, this repository creation needs to be
done only once.
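
A minimal sketch of that one-time setup, reusing the illustrative host and path that appear later in this example (the exact commands are not quoted from the manual):

    $ ssh yourserver.com 'git init --bare ~/proj.git'
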
would contain a call to 'git update-server-info'
but you need to manually enable the hook with
`mv post-update.sample post-update`. This makes sure
-'git update-server-info' keeps the necessary files up-to-date.
+'git update-server-info' keeps the necessary files up to date.
3. Push into the public repository from your primary
repository.
When enabled, the default 'post-update' hook runs
'git update-server-info' to keep the information used by dumb
-transports (e.g., HTTP) up-to-date. If you are publishing
+transports (e.g., HTTP) up to date. If you are publishing
a Git repository that is accessible via HTTP, you should
probably enable this hook.
This file is to help dumb transports discover what packs
are available in this object store. Whenever a pack is
added or removed, `git update-server-info` should be run
- to keep this file up-to-date if the repository is
+ to keep this file up to date if the repository is
published for dumb transports. 'git repack' does this
by default.
$ git status
On branch master
Changes to be committed:
-Your branch is up-to-date with 'origin/master'.
+Your branch is up to date with 'origin/master'.
(use "git reset HEAD <file>..." to unstage)
modified: file1
--ff-only::
Refuse to merge and exit with a non-zero status unless the
- current `HEAD` is already up-to-date or the merge can be
+ current `HEAD` is already up to date or the merge can be
resolved as a fast-forward.
--log[=<n>]::
terminate the connection by sending a flush-pkt, telling the server it can
now gracefully terminate, and disconnect, when it does not need any pack
data. This can happen with the ls-remote command, and also can happen when
-the client already is up-to-date.
+the client already is up to date.
Otherwise, it enters the negotiation phase, where the client and
server determine what the minimal packfile necessary for transport is,
If multiple cases apply, the one used is listed first.
A result which changes the index is an error if the index is not empty
-and not up-to-date.
+and not up to date.
Entries marked '+' have stat information. Spaces marked '*' don't
affect the result.
left in stage 0, and there are no other entries.
A result of "no merge" is an error if the index is not empty and not
-up-to-date.
+up to date.
*empty* means that the tree must not have a directory-file conflict
with the entry.
remote branch, then it will fail with an error like:
-------------------------------------------------
-error: remote 'refs/heads/master' is not an ancestor of
- local 'refs/heads/master'.
- Maybe you are not up-to-date and need to pull first?
-error: failed to push to 'ssh://yourserver.com/~you/proj.git'
+ ! [rejected] master -> master (non-fast-forward)
+error: failed to push some refs to '...'
+hint: Updates were rejected because the tip of your current branch is behind
+hint: its remote counterpart. Integrate the remote changes (e.g.
+hint: 'git pull ...') before pushing again.
+hint: See the 'Note about fast-forwards' in 'git push --help' for details.
-------------------------------------------------
This can happen, for example, if you:
Linus's tree will be stored in the remote-tracking branch named origin/master,
and can be updated using linkgit:git-fetch[1]; you can track other
public trees using linkgit:git-remote[1] to set up a "remote" and
-linkgit:git-fetch[1] to keep them up-to-date; see
+linkgit:git-fetch[1] to keep them up to date; see
<<repositories-and-branches>>.
Now create the branches in which you are going to work; these start out
# algorithm. This is slower, but may detect attempted collision attacks.
# Takes priority over other *_SHA1 knobs.
#
+# Define DC_SHA1_EXTERNAL in addition to DC_SHA1 if you want to build / link
+# git with the external SHA1 collision-detect library.
+# Without this option (the default), git is built with its own built-in
+# code (or with the submodule, see DC_SHA1_SUBMODULE below).
+#
# Define DC_SHA1_SUBMODULE in addition to DC_SHA1 to use the
# sha1collisiondetection shipped as a submodule instead of the
# non-submodule copy in sha1dc/. This is an experimental option used
BASIC_CFLAGS += -DSHA1_APPLE
else
DC_SHA1 := YesPlease
+ BASIC_CFLAGS += -DSHA1_DC
+ LIB_OBJS += sha1dc_git.o
+ifdef DC_SHA1_EXTERNAL
+ ifdef DC_SHA1_SUBMODULE
+$(error Only set DC_SHA1_EXTERNAL or DC_SHA1_SUBMODULE, not both)
+ endif
+ BASIC_CFLAGS += -DDC_SHA1_EXTERNAL
+ EXTLIBS += -lsha1detectcoll
+else
ifdef DC_SHA1_SUBMODULE
LIB_OBJS += sha1collisiondetection/lib/sha1.o
LIB_OBJS += sha1collisiondetection/lib/ubc_check.o
LIB_OBJS += sha1dc/ubc_check.o
endif
BASIC_CFLAGS += \
- -DSHA1_DC \
-DSHA1DC_NO_STANDARD_INCLUDES \
-DSHA1DC_INIT_SAFE_HASH_DEFAULT=0 \
-DSHA1DC_CUSTOM_INCLUDE_SHA1_C="\"cache.h\"" \
- -DSHA1DC_CUSTOM_TRAILING_INCLUDE_SHA1_C="\"sha1dc_git.c\"" \
- -DSHA1DC_CUSTOM_TRAILING_INCLUDE_SHA1_H="\"sha1dc_git.h\"" \
-DSHA1DC_CUSTOM_INCLUDE_UBC_CHECK_C="\"git-compat-util.h\""
endif
endif
endif
endif
+endif
ifdef SHA1_MAX_BLOCK_SIZE
LIB_OBJS += compat/sha1-chunked.o
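
A sketch of how the new DC_SHA1_EXTERNAL knob might be enabled from the make command line; the install prefix is illustrative, and such settings would more typically go into config.mak:

    $ make DC_SHA1=YesPlease DC_SHA1_EXTERNAL=YesPlease \
          CFLAGS='-g -O2 -I/opt/sha1dc/include' LDFLAGS='-L/opt/sha1dc/lib'
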
* 1970-01-01, and the seconds part must be "00".
*/
const char stamp_regexp[] =
- "^(1969-12-31|1970-01-01)"
- " "
- "[0-2][0-9]:[0-5][0-9]:00(\\.0+)?"
+ "^[0-2][0-9]:([0-5][0-9]):00(\\.0+)?"
" "
"([-+][0-2][0-9]:?[0-5][0-9])\n";
const char *timestamp = NULL, *cp, *colon;
static regex_t *stamp;
regmatch_t m[10];
- int zoneoffset;
- int hourminute;
+ int zoneoffset, epoch_hour, hour, minute;
int status;
for (cp = nameline; *cp != '\n'; cp++) {
}
if (!timestamp)
return 0;
+
+ /*
+ * YYYY-MM-DD hh:mm:ss must be from either 1969-12-31
+ * (west of GMT) or 1970-01-01 (east of GMT)
+ */
+	if (skip_prefix(timestamp, "1969-12-31 ", &timestamp))
+ epoch_hour = 24;
+	else if (skip_prefix(timestamp, "1970-01-01 ", &timestamp))
+ epoch_hour = 0;
+ else
+ return 0;
+
if (!stamp) {
stamp = xmalloc(sizeof(*stamp));
if (regcomp(stamp, stamp_regexp, REG_EXTENDED)) {
return 0;
}
+ hour = strtol(timestamp, NULL, 10);
+ minute = strtol(timestamp + m[1].rm_so, NULL, 10);
+
zoneoffset = strtol(timestamp + m[3].rm_so + 1, (char **) &colon, 10);
if (*colon == ':')
zoneoffset = zoneoffset * 60 + strtol(colon + 1, NULL, 10);
if (timestamp[m[3].rm_so] == '-')
zoneoffset = -zoneoffset;
- /*
- * YYYY-MM-DD hh:mm:ss must be from either 1969-12-31
- * (west of GMT) or 1970-01-01 (east of GMT)
- */
- if ((zoneoffset < 0 && memcmp(timestamp, "1969-12-31", 10)) ||
- (0 <= zoneoffset && memcmp(timestamp, "1970-01-01", 10)))
- return 0;
-
- hourminute = (strtol(timestamp + 11, NULL, 10) * 60 +
- strtol(timestamp + 14, NULL, 10) -
- zoneoffset);
-
- return ((zoneoffset < 0 && hourminute == 1440) ||
- (0 <= zoneoffset && !hourminute));
+ return hour * 60 + minute - zoneoffset == epoch_hour * 60;
}
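
As a worked illustration of the final comparison (the stamp itself is made up): "1969-12-31 19:00:00 -0500" gives hour * 60 + minute = 1140 and zoneoffset = -300, so 1140 - (-300) = 1440 = 24 * 60 = epoch_hour * 60, i.e. the stamp denotes midnight UTC at the epoch and the function returns 1.
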
/*
struct directory *bottom;
};
+static const struct attr_check *get_archive_attrs(const char *path)
+{
+ static struct attr_check *check;
+ if (!check)
+ check = attr_check_initl("export-ignore", "export-subst", NULL);
+ return git_check_attr(path, check) ? NULL : check;
+}
+
+static int check_attr_export_ignore(const struct attr_check *check)
+{
+ return check && ATTR_TRUE(check->items[0].value);
+}
+
+static int check_attr_export_subst(const struct attr_check *check)
+{
+ return check && ATTR_TRUE(check->items[1].value);
+}
+
+static int should_queue_directories(const struct archiver_args *args)
+{
+ return args->pathspec.has_wildcard;
+}
+
static int write_archive_entry(const unsigned char *sha1, const char *base,
int baselen, const char *filename, unsigned mode, int stage,
void *context)
{
static struct strbuf path = STRBUF_INIT;
- static struct attr_check *check;
struct archiver_context *c = context;
struct archiver_args *args = c->args;
write_archive_entry_fn_t write_entry = c->write_entry;
- const char *path_without_prefix;
int err;
+ const char *path_without_prefix;
args->convert = 0;
strbuf_reset(&path);
strbuf_addch(&path, '/');
path_without_prefix = path.buf + args->baselen;
- if (!check)
- check = attr_check_initl("export-ignore", "export-subst", NULL);
- if (!git_check_attr(path_without_prefix, check)) {
- if (ATTR_TRUE(check->items[0].value))
+ if (!S_ISDIR(mode) || !should_queue_directories(args)) {
+ const struct attr_check *check;
+ check = get_archive_attrs(path_without_prefix);
+ if (check_attr_export_ignore(check))
return 0;
- args->convert = ATTR_TRUE(check->items[1].value);
+ args->convert = check_attr_export_subst(check);
}
if (S_ISDIR(mode) || S_ISGITLINK(mode)) {
}
if (S_ISDIR(mode)) {
+ size_t baselen = base->len;
+ const struct attr_check *check;
+
+ /* Borrow base, but restore its original value when done. */
+ strbuf_addstr(base, filename);
+ strbuf_addch(base, '/');
+ check = get_archive_attrs(base->buf);
+ strbuf_setlen(base, baselen);
+
+ if (check_attr_export_ignore(check))
+ return 0;
queue_directory(sha1, base, filename,
mode, stage, c);
return READ_TREE_RECURSIVE;
}
err = read_tree_recursive(args->tree, "", 0, 0, &args->pathspec,
- args->pathspec.has_wildcard ?
+ should_queue_directories(args) ?
queue_or_write_archive_entry :
write_archive_entry_buf,
&context);
if (shortname) {
if (origin)
printf_ln(rebasing ?
- _("Branch %s set up to track remote branch %s from %s by rebasing.") :
- _("Branch %s set up to track remote branch %s from %s."),
+ _("Branch '%s' set up to track remote branch '%s' from '%s' by rebasing.") :
+ _("Branch '%s' set up to track remote branch '%s' from '%s'."),
local, shortname, origin);
else
printf_ln(rebasing ?
- _("Branch %s set up to track local branch %s by rebasing.") :
- _("Branch %s set up to track local branch %s."),
+ _("Branch '%s' set up to track local branch '%s' by rebasing.") :
+ _("Branch '%s' set up to track local branch '%s'."),
local, shortname);
} else {
if (origin)
printf_ln(rebasing ?
- _("Branch %s set up to track remote ref %s by rebasing.") :
- _("Branch %s set up to track remote ref %s."),
+ _("Branch '%s' set up to track remote ref '%s' by rebasing.") :
+ _("Branch '%s' set up to track remote ref '%s'."),
local, remote);
else
printf_ln(rebasing ?
- _("Branch %s set up to track local ref %s by rebasing.") :
- _("Branch %s set up to track local ref %s."),
+ _("Branch '%s' set up to track local ref '%s' by rebasing.") :
+ _("Branch '%s' set up to track local ref '%s'."),
local, remote);
}
}
if (worktrees[i]->is_detached)
continue;
- if (worktrees[i]->head_ref &&
- strcmp(oldref, worktrees[i]->head_ref))
+ if (!worktrees[i]->head_ref)
+ continue;
+ if (strcmp(oldref, worktrees[i]->head_ref))
continue;
refs = get_worktree_ref_store(worktrees[i]);
return -1;
while (!strbuf_getwholeline(&buf, fp, '\n')) {
/* The format is just "Commit Parent1 Parent2 ...\n" */
- struct commit_graft *graft = read_graft_line(buf.buf, buf.len);
+ struct commit_graft *graft = read_graft_line(&buf);
if (graft)
register_commit_graft(graft, 0);
}
OPT__QUIET(&quiet, N_("suppress informational messages")),
OPT_SET_INT('t', "track", &track, N_("set up tracking mode (see git-pull(1))"),
BRANCH_TRACK_EXPLICIT),
- OPT_SET_INT( 0, "set-upstream", &track, N_("change upstream info"),
- BRANCH_TRACK_OVERRIDE),
+ { OPTION_SET_INT, 0, "set-upstream", &track, NULL, N_("do not use"),
+ PARSE_OPT_NOARG | PARSE_OPT_HIDDEN, NULL, BRANCH_TRACK_OVERRIDE },
OPT_STRING('u', "set-upstream-to", &new_upstream, N_("upstream"), N_("change the upstream info")),
OPT_BOOL(0, "unset-upstream", &unset_upstream, N_("Unset the upstream info")),
OPT__COLOR(&branch_use_color, N_("use colored output")),
strbuf_release(&buf);
} else if (argc > 0 && argc <= 2) {
struct branch *branch = branch_get(argv[0]);
- int branch_existed = 0, remote_tracking = 0;
- struct strbuf buf = STRBUF_INIT;
if (!strcmp(argv[0], "HEAD"))
die(_("it does not make sense to create 'HEAD' manually"));
die(_("-a and -r options to 'git branch' do not make sense with a branch name"));
if (track == BRANCH_TRACK_OVERRIDE)
- fprintf(stderr, _("The --set-upstream flag is deprecated and will be removed. Consider using --track or --set-upstream-to\n"));
-
- strbuf_addf(&buf, "refs/remotes/%s", branch->name);
- remote_tracking = ref_exists(buf.buf);
- strbuf_release(&buf);
+ die(_("the '--set-upstream' option is no longer supported. Please use '--track' or '--set-upstream-to' instead."));
- branch_existed = ref_exists(branch->refname);
create_branch(argv[0], (argc == 2) ? argv[1] : head,
force, reflog, 0, quiet, track);
- /*
- * We only show the instructions if the user gave us
- * one branch which doesn't exist locally, but is the
- * name of a remote-tracking branch.
- */
- if (argc == 1 && track == BRANCH_TRACK_OVERRIDE &&
- !branch_existed && remote_tracking) {
- fprintf(stderr, _("\nIf you wanted to make '%s' track '%s', do this:\n\n"), head, branch->name);
- fprintf(stderr, " git branch -d %s\n", branch->name);
- fprintf(stderr, " git branch --set-upstream-to %s\n", branch->name);
- }
-
} else
usage_with_options(builtin_branch_usage, options);
* If head can reach all the merge then we are up to date.
* but first the most common case of merging one remote.
*/
- finish_up_to_date(_("Already up-to-date."));
+ finish_up_to_date(_("Already up to date."));
goto done;
} else if (fast_forward != FF_NO && !remoteheads->next &&
!common->next &&
}
}
if (up_to_date) {
- finish_up_to_date(_("Already up-to-date. Yeeah!"));
+ finish_up_to_date(_("Already up to date. Yeeah!"));
goto done;
}
}
{
struct thread_params *me = arg;
+ progress_lock();
while (me->remaining) {
+ progress_unlock();
+
find_deltas(me->list, &me->remaining,
me->window, me->depth, me->processed);
pthread_cond_wait(&me->cond, &me->mutex);
me->data_ready = 0;
pthread_mutex_unlock(&me->mutex);
+
+ progress_lock();
}
+ progress_unlock();
/* leave ->working 1 so that this doesn't get more work assigned */
return NULL;
}
int want_color(int var)
{
+ /*
+ * NEEDSWORK: This function is sometimes used from multiple threads, and
+ * we end up using want_auto racily. That "should not matter" since
+ * we always write the same value, but it's still wrong. This function
+ * is listed in .tsan-suppressions for the time being.
+ */
+
static int want_auto = -1;
if (var < 0)
return 0;
}
-struct commit_graft *read_graft_line(char *buf, int len)
+struct commit_graft *read_graft_line(struct strbuf *line)
{
/* The format is just "Commit Parent1 Parent2 ...\n" */
- int i;
+ int i, phase;
+ const char *tail = NULL;
struct commit_graft *graft = NULL;
- const int entry_size = GIT_SHA1_HEXSZ + 1;
+ struct object_id dummy_oid, *oid;
- while (len && isspace(buf[len-1]))
- buf[--len] = '\0';
- if (buf[0] == '#' || buf[0] == '\0')
+ strbuf_rtrim(line);
+ if (!line->len || line->buf[0] == '#')
return NULL;
- if ((len + 1) % entry_size)
- goto bad_graft_data;
- i = (len + 1) / entry_size - 1;
- graft = xmalloc(st_add(sizeof(*graft), st_mult(GIT_SHA1_RAWSZ, i)));
- graft->nr_parent = i;
- if (get_oid_hex(buf, &graft->oid))
- goto bad_graft_data;
- for (i = GIT_SHA1_HEXSZ; i < len; i += entry_size) {
- if (buf[i] != ' ')
- goto bad_graft_data;
- if (get_sha1_hex(buf + i + 1, graft->parent[i/entry_size].hash))
+ /*
+ * phase 0 verifies line, counts hashes in line and allocates graft
+ * phase 1 fills graft
+ */
+ for (phase = 0; phase < 2; phase++) {
+ oid = graft ? &graft->oid : &dummy_oid;
+ if (parse_oid_hex(line->buf, oid, &tail))
goto bad_graft_data;
+ for (i = 0; *tail != '\0'; i++) {
+ oid = graft ? &graft->parent[i] : &dummy_oid;
+ if (!isspace(*tail++) || parse_oid_hex(tail, oid, &tail))
+ goto bad_graft_data;
+ }
+ if (!graft) {
+ graft = xmalloc(st_add(sizeof(*graft),
+ st_mult(sizeof(struct object_id), i)));
+ graft->nr_parent = i;
+ }
}
return graft;
bad_graft_data:
- error("bad graft data: %s", buf);
- free(graft);
+ error("bad graft data: %s", line->buf);
+ assert(!graft);
return NULL;
}
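
For reference, a graft line is simply whitespace-trimmed hex object names separated by single spaces, e.g. (with hypothetical IDs) "<commit> <parent1> <parent2>" for a commit grafted onto two parents; blank lines and lines beginning with '#' are skipped.
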
return -1;
while (!strbuf_getwholeline(&buf, fp, '\n')) {
/* The format is just "Commit Parent1 Parent2 ...\n" */
- struct commit_graft *graft = read_graft_line(buf.buf, buf.len);
+ struct commit_graft *graft = read_graft_line(&buf);
if (!graft)
continue;
if (register_commit_graft(graft, 1))
};
typedef int (*each_commit_graft_fn)(const struct commit_graft *, void *);
-struct commit_graft *read_graft_line(char *buf, int len);
+struct commit_graft *read_graft_line(struct strbuf *line);
int register_commit_graft(struct commit_graft *, int);
struct commit_graft *lookup_commit_graft(const struct object_id *oid);
?,1,"$1",*)
# If head can reach all the merge then we are up to date.
# but first the most common case of merging one remote.
- finish_up_to_date "Already up-to-date."
+ finish_up_to_date "Already up to date."
exit 0
;;
t,1,"$head",*)
done
if test "$up_to_date" = t
then
- finish_up_to_date "Already up-to-date. Yeeah!"
+ finish_up_to_date "Already up to date. Yeeah!"
exit 0
fi
;;
case "$common" in
"$merge")
- echo "Already up-to-date. Yeeah!"
+ echo "Already up to date. Yeeah!"
dropheads
exit 0
;;
# this should not actually do anything, since FETCH_HEAD
# is already a parent
result=$(git merge -s ours -m "merge -s -ours" FETCH_HEAD) &&
- check_equal "${result}" "Already up-to-date."
+ check_equal "${result}" "Already up to date."
)
'
ca->crlf_action = git_path_check_crlf(ccheck + 4);
if (ca->crlf_action == CRLF_UNDEFINED)
ca->crlf_action = git_path_check_crlf(ccheck + 0);
- ca->attr_action = ca->crlf_action;
ca->ident = git_path_check_ident(ccheck + 1);
ca->drv = git_path_check_convert(ccheck + 2);
if (ca->crlf_action != CRLF_BINARY) {
else if (eol_attr == EOL_CRLF)
ca->crlf_action = CRLF_TEXT_CRLF;
}
- ca->attr_action = ca->crlf_action;
} else {
ca->drv = NULL;
ca->crlf_action = CRLF_UNDEFINED;
ca->ident = 0;
}
+
+ /* Save attr and make a decision for action */
+ ca->attr_action = ca->crlf_action;
if (ca->crlf_action == CRLF_TEXT)
ca->crlf_action = text_eol_is_crlf() ? CRLF_TEXT_CRLF : CRLF_TEXT_INPUT;
if (ca->crlf_action == CRLF_UNDEFINED && auto_crlf == AUTO_CRLF_FALSE)
#include "dir.h"
#include "streaming.h"
#include "submodule.h"
+#include "progress.h"
static void create_directories(const char *path, int path_len,
const struct checkout *state)
int finish_delayed_checkout(struct checkout *state)
{
int errs = 0;
+ unsigned delayed_object_count;
+ off_t filtered_bytes = 0;
struct string_list_item *filter, *path;
+ struct progress *progress;
struct delayed_checkout *dco = state->delayed_checkout;
if (!state->delayed_checkout)
return errs;
dco->state = CE_RETRY;
+ delayed_object_count = dco->paths.nr;
+ progress = start_delayed_progress(_("Filtering content"), delayed_object_count);
while (dco->filters.nr > 0) {
for_each_string_list_item(filter, &dco->filters) {
struct string_list available_paths = STRING_LIST_INIT_NODUP;
+ display_progress(progress, delayed_object_count - dco->paths.nr);
if (!async_query_available_blobs(filter->string, &available_paths)) {
/* Filter reported an error */
}
ce = index_file_exists(state->istate, path->string,
strlen(path->string), 0);
- errs |= (ce ? checkout_entry(ce, state, NULL) : 1);
+ if (ce) {
+ errs |= checkout_entry(ce, state, NULL);
+ filtered_bytes += ce->ce_stat_data.sd_size;
+ display_throughput(progress, filtered_bytes);
+ } else
+ errs = 1;
}
}
string_list_remove_empty_items(&dco->filters, 0);
}
+ stop_progress(&progress);
string_list_clear(&dco->filters, 0);
/* At this point we should not have any delayed paths anymore. */
the translation of existing messages, or because the git-gui software
itself was updated and there are new messages that need translation.
-In any case, make sure you are up-to-date before starting your work:
+In any case, make sure you are up to date before starting your work:
$ git checkout master
$ git pull
case "$LF$common$LF" in
*"$LF$SHA1$LF"*)
- eval_gettextln "Already up-to-date with \$pretty_name"
+ eval_gettextln "Already up to date with \$pretty_name"
continue
;;
esac
def rebase(self):
if os.system("git update-index --refresh") != 0:
- die("Some files in your working directory are modified and different than what is in your index. You can use git update-index <filename> to bring the index up-to-date or stash away all your changes with git stash.");
+ die("Some files in your working directory are modified and different than what is in your index. You can use git update-index <filename> to bring the index up to date or stash away all your changes with git stash.");
if len(read_pipe("git diff-index HEAD --")) > 0:
die("You have uncommitted changes. Please commit them before rebasing or stash them away with git stash.");
}
my $have_email_valid = eval { require Email::Valid; 1 };
-my $have_mail_address = eval { require Mail::Address; 1 };
my $smtp;
my $auth;
my $num_sent = 0;
($repocommitter) = Git::ident_person(@repo, 'committer');
sub parse_address_line {
- if ($have_mail_address) {
- return map { $_->format } Mail::Address->parse($_[0]);
- } else {
- return Git::parse_mailboxes($_[0]);
- }
+ return Git::parse_mailboxes($_[0]);
}
sub split_addrs {
}
+sub strip_garbage_one_address {
+ my ($addr) = @_;
+ chomp $addr;
+ if ($addr =~ /^(("[^"]*"|[^"<]*)? *<[^>]*>).*/) {
+ # "Foo Bar" <foobar@example.com> [possibly garbage here]
+ # Foo Bar <foobar@example.com> [possibly garbage here]
+ return $1;
+ }
+ if ($addr =~ /^(<[^>]*>).*/) {
+ # <foo@example.com> [possibly garbage here]
+ # if garbage contains other addresses, they are ignored.
+ return $1;
+ }
+ if ($addr =~ /^([^"#,\s]*)/) {
+ # address without quoting: remove anything after the address
+ return $1;
+ }
+ return $addr;
+}
+
sub sanitize_address_list {
return (map { sanitize_address($_) } @_);
}
# Now parse the message body
while(<$fh>) {
$message .= $_;
- if (/^(Signed-off-by|Cc): ([^>]*>?)/i) {
+ if (/^(Signed-off-by|Cc): (.*)/i) {
chomp;
my ($what, $c) = ($1, $2);
- chomp $c;
+ # strip garbage for the address we'll use:
+ $c = strip_garbage_one_address($c);
+ # sanitize a bit more to decide whether to suppress the address:
my $sc = sanitize_address($c);
if ($sc eq $sender) {
next if ($suppress_cc{'self'});
#elif defined(SHA1_OPENSSL)
#include <openssl/sha.h>
#elif defined(SHA1_DC)
-#ifdef DC_SHA1_SUBMODULE
-#include "sha1collisiondetection/lib/sha1.h"
-#else
-#include "sha1dc/sha1.h"
-#endif
+#include "sha1dc_git.h"
#else /* SHA1_BLK */
#include "block-sha1/sha1.h"
#endif
}
if (oid_eq(&common->object.oid, &merge->object.oid)) {
- output(o, 0, _("Already up-to-date!"));
+ output(o, 0, _("Already up to date!"));
*result = head;
return 1;
}
if (!oidcmp(&remote->object.oid, base_oid)) {
/* Already merged; result == local commit */
if (o->verbosity >= 2)
- printf("Already up-to-date!\n");
+ printf("Already up to date!\n");
oidcpy(result_oid, &local->object.oid);
goto found_result;
}
#define CLR_PTR_TYPE(ptr) ((void *) ((uintptr_t) (ptr) & ~3))
#define SET_PTR_TYPE(ptr, type) ((void *) ((uintptr_t) (ptr) | (type)))
-#define GET_NIBBLE(n, sha1) (((sha1[(n) >> 1]) >> ((~(n) & 0x01) << 2)) & 0x0f)
+#define GET_NIBBLE(n, sha1) ((((sha1)[(n) >> 1]) >> ((~(n) & 0x01) << 2)) & 0x0f)
#define KEY_INDEX (GIT_SHA1_RAWSZ - 1)
#define FANOUT_PATH_SEPARATORS ((GIT_SHA1_HEXSZ / 2) - 1)
}
/*
- * Convert a partial SHA1 hex string to the corresponding partial SHA1 value.
- * - hex - Partial SHA1 segment in ASCII hex format
- * - hex_len - Length of above segment. Must be multiple of 2 between 0 and 40
- * - sha1 - Partial SHA1 value is written here
- * - sha1_len - Max #bytes to store in sha1, Must be >= hex_len / 2, and < 20
- * Returns -1 on error (invalid arguments or invalid SHA1 (not in hex format)).
- * Otherwise, returns number of bytes written to sha1 (i.e. hex_len / 2).
- * Pads sha1 with NULs up to sha1_len (not included in returned length).
+ * Read `len` pairs of hexadecimal digits from `hex` and write the
+ * values to `binary` as `len` bytes. Return 0 on success, or -1 if
+ * the input does not consist of hex digits.
*/
-static int get_oid_hex_segment(const char *hex, unsigned int hex_len,
- unsigned char *oid, unsigned int oid_len)
+static int hex_to_bytes(unsigned char *binary, const char *hex, size_t len)
{
- unsigned int i, len = hex_len >> 1;
- if (hex_len % 2 != 0 || len > oid_len)
- return -1;
- for (i = 0; i < len; i++) {
+ for (; len; len--, hex += 2) {
unsigned int val = (hexval(hex[0]) << 4) | hexval(hex[1]);
+
if (val & ~0xff)
return -1;
- *oid++ = val;
- hex += 2;
+ *binary++ = val;
}
- for (; i < oid_len; i++)
- *oid++ = 0;
- return len;
+ return 0;
}
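
A quick illustration (not taken from the patch): hex_to_bytes(buf, "2af0", 2) writes the two bytes 0x2a and 0xf0 into buf and returns 0, while any non-hex character in the input makes the helper return -1.
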
static int non_note_cmp(const struct non_note *a, const struct non_note *b)
struct int_node *node, unsigned int n)
{
struct object_id object_oid;
- unsigned int prefix_len;
+ size_t prefix_len;
void *buf;
struct tree_desc desc;
struct name_entry entry;
- int len, path_len;
- unsigned char type;
- struct leaf_node *l;
buf = fill_tree_descriptor(&desc, &subtree->val_oid);
if (!buf)
assert(prefix_len * 2 >= n);
memcpy(object_oid.hash, subtree->key_oid.hash, prefix_len);
while (tree_entry(&desc, &entry)) {
- path_len = strlen(entry.path);
- len = get_oid_hex_segment(entry.path, path_len,
- object_oid.hash + prefix_len, GIT_SHA1_RAWSZ - prefix_len);
- if (len < 0)
- goto handle_non_note; /* entry.path is not a SHA1 */
- len += prefix_len;
+ unsigned char type;
+ struct leaf_node *l;
+ size_t path_len = strlen(entry.path);
+
+ if (path_len == 2 * (GIT_SHA1_RAWSZ - prefix_len)) {
+ /* This is potentially the remainder of the SHA-1 */
+
+ if (!S_ISREG(entry.mode))
+ /* notes must be blobs */
+ goto handle_non_note;
+
+ if (hex_to_bytes(object_oid.hash + prefix_len, entry.path,
+ GIT_SHA1_RAWSZ - prefix_len))
+ goto handle_non_note; /* entry.path is not a SHA1 */
- /*
- * If object SHA1 is complete (len == 20), assume note object
- * If object SHA1 is incomplete (len < 20), and current
- * component consists of 2 hex chars, assume note subtree
- */
- if (len <= GIT_SHA1_RAWSZ) {
type = PTR_TYPE_NOTE;
- l = (struct leaf_node *)
- xcalloc(1, sizeof(struct leaf_node));
- oidcpy(&l->key_oid, &object_oid);
- oidcpy(&l->val_oid, entry.oid);
- if (len < GIT_SHA1_RAWSZ) {
- if (!S_ISDIR(entry.mode) || path_len != 2)
- goto handle_non_note; /* not subtree */
- l->key_oid.hash[KEY_INDEX] = (unsigned char) len;
- type = PTR_TYPE_SUBTREE;
- }
- if (note_tree_insert(t, node, n, l, type,
- combine_notes_concatenate))
- die("Failed to load %s %s into notes tree "
- "from %s",
- type == PTR_TYPE_NOTE ? "note" : "subtree",
- oid_to_hex(&l->key_oid), t->ref);
+ } else if (path_len == 2) {
+ /* This is potentially an internal node */
+ size_t len = prefix_len;
+
+ if (!S_ISDIR(entry.mode))
+ /* internal nodes must be trees */
+ goto handle_non_note;
+
+ if (hex_to_bytes(object_oid.hash + len++, entry.path, 1))
+ goto handle_non_note; /* entry.path is not a SHA1 */
+
+ /*
+ * Pad the rest of the SHA-1 with zeros,
+ * except for the last byte, where we write
+ * the length:
+ */
+ memset(object_oid.hash + len, 0, GIT_SHA1_RAWSZ - len - 1);
+ object_oid.hash[KEY_INDEX] = (unsigned char)len;
+
+ type = PTR_TYPE_SUBTREE;
+ } else {
+ /* This can't be part of a note */
+ goto handle_non_note;
}
+
+ l = xcalloc(1, sizeof(*l));
+ oidcpy(&l->key_oid, &object_oid);
+ oidcpy(&l->val_oid, entry.oid);
+ if (note_tree_insert(t, node, n, l, type,
+ combine_notes_concatenate))
+ die("Failed to load %s %s into notes tree "
+ "from %s",
+ type == PTR_TYPE_NOTE ? "note" : "subtree",
+ oid_to_hex(&l->key_oid), t->ref);
+
continue;
handle_non_note:
/*
- * Determine full path for this non-note entry:
- * The filename is already found in entry.path, but the
- * directory part of the path must be deduced from the subtree
- * containing this entry. We assume here that the overall notes
- * tree follows a strict byte-based progressive fanout
- * structure (i.e. using 2/38, 2/2/36, etc. fanouts, and not
- * e.g. 4/36 fanout). This means that if a non-note is found at
- * path "dead/beef", the following code will register it as
- * being found on "de/ad/beef".
- * On the other hand, if you use such non-obvious non-note
- * paths in the middle of a notes tree, you deserve what's
- * coming to you ;). Note that for non-notes that are not
- * SHA1-like at the top level, there will be no problems.
- *
- * To conclude, it is strongly advised to make sure non-notes
- * have at least one non-hex character in the top-level path
- * component.
+ * Determine full path for this non-note entry. The
+ * filename is already found in entry.path, but the
+ * directory part of the path must be deduced from the
+ * subtree containing this entry based on our
+ * knowledge that the overall notes tree follows a
+ * strict byte-based progressive fanout structure
+ * (i.e. using 2/38, 2/2/36, etc. fanouts).
*/
{
struct strbuf non_note_path = STRBUF_INIT;
const char *q = oid_to_hex(&subtree->key_oid);
- int i;
+ size_t i;
for (i = 0; i < prefix_len; i++) {
strbuf_addch(&non_note_path, *q++);
strbuf_addch(&non_note_path, *q++);
{
struct pathspec_item *item;
const char *entry = argv ? *argv : NULL;
- int i, n, prefixlen, warn_empty_string, nr_exclude = 0;
+ int i, n, prefixlen, nr_exclude = 0;
memset(pathspec, 0, sizeof(*pathspec));
}
n = 0;
- warn_empty_string = 1;
while (argv[n]) {
- if (*argv[n] == '\0' && warn_empty_string) {
- warning(_("empty strings as pathspecs will be made invalid in upcoming releases. "
- "please use . instead if you meant to match all paths"));
- warn_empty_string = 0;
- }
+ if (*argv[n] == '\0')
+ die("empty string is not a valid pathspec. "
+ "please use . instead if you meant to match all paths");
n++;
}
_(" (use \"git branch --unset-upstream\" to fixup)\n"));
} else if (!ours && !theirs) {
strbuf_addf(sb,
- _("Your branch is up-to-date with '%s'.\n"),
+ _("Your branch is up to date with '%s'.\n"),
base);
} else if (!theirs) {
strbuf_addf(sb,
void *table,
size_t nr,
sha1_access_fn fn);
-
-extern int sha1_entry_pos(const void *table,
- size_t elem_size,
- size_t key_offset,
- unsigned lo, unsigned hi, unsigned nr,
- const unsigned char *key);
#endif
#include "quote.h"
#include "packfile.h"
-const unsigned char null_sha1[20];
+const unsigned char null_sha1[GIT_MAX_RAWSZ];
const struct object_id null_oid;
const struct object_id empty_tree_oid = {
EMPTY_TREE_SHA1_BIN_LITERAL
+#include "cache.h"
+
+#ifdef DC_SHA1_EXTERNAL
/*
- * This code is included at the end of sha1dc/sha1.c with the
- * SHA1DC_CUSTOM_TRAILING_INCLUDE_SHA1_C macro.
+ * Same as SHA1DCInit, but with default save_hash=0
*/
+void git_SHA1DCInit(SHA1_CTX *ctx)
+{
+ SHA1DCInit(ctx);
+ SHA1DCSetSafeHash(ctx, 0);
+}
+#endif
+/*
+ * Same as SHA1DCFinal, but convert collision attack case into a verbose die().
+ */
void git_SHA1DCFinal(unsigned char hash[20], SHA1_CTX *ctx)
{
if (!SHA1DCFinal(hash, ctx))
sha1_to_hex(hash));
}
+/*
+ * Same as SHA1DCUpdate, but adjust types to match git's usual interface.
+ */
void git_SHA1DCUpdate(SHA1_CTX *ctx, const void *vdata, unsigned long len)
{
const char *data = vdata;
-/*
- * This code is included at the end of sha1dc/sha1.h with the
- * SHA1DC_CUSTOM_TRAILING_INCLUDE_SHA1_H macro.
- */
+/* Plumbing with collision-detecting SHA1 code */
-/*
- * Same as SHA1DCFinal, but convert collision attack case into a verbose die().
- */
-void git_SHA1DCFinal(unsigned char [20], SHA1_CTX *);
+#ifdef DC_SHA1_SUBMODULE
+#include "sha1collisiondetection/lib/sha1.h"
+#elif defined(DC_SHA1_EXTERNAL)
+#include <sha1dc/sha1.h>
+#else
+#include "sha1dc/sha1.h"
+#endif
+
+#ifdef DC_SHA1_EXTERNAL
+void git_SHA1DCInit(SHA1_CTX *);
+#else
+#define git_SHA1DCInit SHA1DCInit
+#endif
-/*
- * Same as SHA1DCUpdate, but adjust types to match git's usual interface.
- */
+void git_SHA1DCFinal(unsigned char [20], SHA1_CTX *);
void git_SHA1DCUpdate(SHA1_CTX *ctx, const void *data, unsigned long len);
#define platform_SHA_CTX SHA1_CTX
-#define platform_SHA1_Init SHA1DCInit
+#define platform_SHA1_Init git_SHA1DCInit
#define platform_SHA1_Update git_SHA1DCUpdate
#define platform_SHA1_Final git_SHA1DCFinal
if (len > (sb->alloc ? sb->alloc - 1 : 0))
die("BUG: strbuf_setlen() beyond buffer");
sb->len = len;
- sb->buf[len] = '\0';
+ if (sb->buf != strbuf_slopbuf)
+ sb->buf[len] = '\0';
+ else
+ assert(!strbuf_slopbuf[0]);
}
/**
echo >.gitattributes &&
git checkout -b master &&
git add .gitattributes &&
- git commit -m "add .gitattributes" "" &&
+ git commit -m "add .gitattributes" . &&
printf "\$Id: 0000000000000000000000000000000000000000 \$\nLINEONE\nLINETWO\nLINETHREE" >LF &&
printf "\$Id: 0000000000000000000000000000000000000000 \$\r\nLINEONE\r\nLINETWO\r\nLINETHREE" >CRLF &&
printf "\$Id: 0000000000000000000000000000000000000000 \$\nLINEONE\r\nLINETWO\nLINETHREE" >CRLF_mix_LF &&
grep "^0\{40\}.*$msg$" .git/logs/HEAD
'
+test_expect_success 'git branch -M should leave orphaned HEAD alone' '
+ git init orphan &&
+ (
+ cd orphan &&
+ test_commit initial &&
+ git checkout --orphan lonely &&
+ grep lonely .git/HEAD &&
+	test_path_is_missing .git/refs/heads/lonely &&
+ git branch -M master mistress &&
+ grep lonely .git/HEAD
+ )
+'
+
test_expect_success 'resulting reflog can be shown by log -g' '
oid=$(git rev-parse HEAD) &&
cat >expect <<-EOF &&
test_expect_success 'use --set-upstream-to modify a particular branch' '
git branch my13 &&
git branch --set-upstream-to master my13 &&
+ test_when_finished "git branch --unset-upstream my13" &&
test "$(git config branch.my13.remote)" = "." &&
test "$(git config branch.my13.merge)" = "refs/heads/master"
'
test_must_fail git config branch.my14.merge
'
-test_expect_success '--set-upstream shows message when creating a new branch that exists as remote-tracking' '
- git update-ref refs/remotes/origin/master HEAD &&
- git branch --set-upstream origin/master 2>actual &&
- test_when_finished git update-ref -d refs/remotes/origin/master &&
- test_when_finished git branch -d origin/master &&
- cat >expected <<EOF &&
-The --set-upstream flag is deprecated and will be removed. Consider using --track or --set-upstream-to
-
-If you wanted to make '"'master'"' track '"'origin/master'"', do this:
-
- git branch -d origin/master
- git branch --set-upstream-to origin/master
-EOF
- test_i18ncmp expected actual
-'
-
-test_expect_success '--set-upstream with two args only shows the deprecation message' '
- git branch --set-upstream master my13 2>actual &&
- test_when_finished git branch --unset-upstream master &&
- cat >expected <<EOF &&
-The --set-upstream flag is deprecated and will be removed. Consider using --track or --set-upstream-to
-EOF
- test_i18ncmp expected actual
-'
-
-test_expect_success '--set-upstream with one arg only shows the deprecation message if the branch existed' '
- git branch --set-upstream my13 2>actual &&
- test_when_finished git branch --unset-upstream my13 &&
- cat >expected <<EOF &&
-The --set-upstream flag is deprecated and will be removed. Consider using --track or --set-upstream-to
-EOF
- test_i18ncmp expected actual
+test_expect_success '--set-upstream fails' '
+ test_must_fail git branch --set-upstream origin/master
'
test_expect_success '--set-upstream-to notices an error to set branch as own upstream' '
test_must_fail git branch -d my10
'
-test_expect_success 'use set-upstream on the current branch' '
- git checkout master &&
- git --bare init myupstream.git &&
- git push myupstream.git master:refs/heads/frotz &&
- git remote add origin myupstream.git &&
- git fetch &&
- git branch --set-upstream master origin/frotz &&
-
- test "z$(git config branch.master.remote)" = "zorigin" &&
- test "z$(git config branch.master.merge)" = "zrefs/heads/frotz"
-
-'
-
test_expect_success 'use --edit-description' '
write_script editor <<-\EOF &&
echo "New contents" >"$1"
test_i18ncmp expect actual
'
-test_expect_success 'rm empty string should invoke warning' '
- git rm -rf "" 2>output &&
- test_i18ngrep "warning: empty strings" output
+test_expect_success 'rm empty string should fail' '
+ test_must_fail git rm -rf ""
'
test_done
test_i18ncmp expect.err actual.err
'
-test_expect_success 'git add empty string should invoke warning' '
- git add "" 2>output &&
- test_i18ngrep "warning: empty strings" output
+test_expect_success 'git add empty string should fail' '
+ test_must_fail git add ""
'
test_expect_success 'git add --chmod=[+-]x stages correctly' '
SUBSTFORMAT='%H (%h)%n'
test_expect_exists() {
- test_expect_success " $1 exists" "test -e $1"
+ test_expect_${2:-success} " $1 exists" "test -e $1"
}
test_expect_missing() {
- test_expect_success " $1 does not exist" "test ! -e $1"
+ test_expect_${2:-success} " $1 does not exist" "test ! -e $1"
+}
+
+extract_tar_to_dir () {
+ (mkdir "$1" && cd "$1" && "$TAR" xf -) <"$1.tar"
}
test_expect_success 'setup' '
echo ignored by tree >ignored-by-tree &&
echo ignored-by-tree export-ignore >.gitattributes &&
- git add ignored-by-tree .gitattributes &&
+ mkdir ignored-by-tree.d &&
+ >ignored-by-tree.d/file &&
+ echo ignored-by-tree.d export-ignore >>.gitattributes &&
+ git add ignored-by-tree ignored-by-tree.d .gitattributes &&
echo ignored by worktree >ignored-by-worktree &&
echo ignored-by-worktree export-ignore >.gitattributes &&
git add ignored-by-worktree &&
+ mkdir excluded-by-pathspec.d &&
+ >excluded-by-pathspec.d/file &&
+ git add excluded-by-pathspec.d &&
+
printf "A\$Format:%s\$O" "$SUBSTFORMAT" >nosubstfile &&
printf "A\$Format:%s\$O" "$SUBSTFORMAT" >substfile1 &&
printf "A not substituted O" >substfile2 &&
test_expect_missing archive/ignored
test_expect_missing archive/ignored-by-tree
+test_expect_missing archive/ignored-by-tree.d
+test_expect_missing archive/ignored-by-tree.d/file
test_expect_exists archive/ignored-by-worktree
+test_expect_exists archive/excluded-by-pathspec.d
+test_expect_exists archive/excluded-by-pathspec.d/file
+
+test_expect_success 'git archive with pathspec' '
+ git archive HEAD ":!excluded-by-pathspec.d" >archive-pathspec.tar &&
+ extract_tar_to_dir archive-pathspec
+'
+
+test_expect_missing archive-pathspec/ignored
+test_expect_missing archive-pathspec/ignored-by-tree
+test_expect_missing archive-pathspec/ignored-by-tree.d
+test_expect_missing archive-pathspec/ignored-by-tree.d/file
+test_expect_exists archive-pathspec/ignored-by-worktree
+test_expect_missing archive-pathspec/excluded-by-pathspec.d failure
+test_expect_missing archive-pathspec/excluded-by-pathspec.d/file
+
+test_expect_success 'git archive with wildcard pathspec' '
+ git archive HEAD ":!excluded-by-p*" >archive-pathspec-wildcard.tar &&
+ extract_tar_to_dir archive-pathspec-wildcard
+'
+
+test_expect_missing archive-pathspec-wildcard/ignored
+test_expect_missing archive-pathspec-wildcard/ignored-by-tree
+test_expect_missing archive-pathspec-wildcard/ignored-by-tree.d
+test_expect_missing archive-pathspec-wildcard/ignored-by-tree.d/file
+test_expect_exists archive-pathspec-wildcard/ignored-by-worktree
+test_expect_missing archive-pathspec-wildcard/excluded-by-pathspec.d
+test_expect_missing archive-pathspec-wildcard/excluded-by-pathspec.d/file
test_expect_success 'git archive with worktree attributes' '
git archive --worktree-attributes HEAD >worktree.tar &&
(
cd test && git checkout b6
) >actual &&
- test_i18ngrep "Your branch is up-to-date with .origin/master" actual
+ test_i18ngrep "Your branch is up to date with .origin/master" actual
'
test_expect_success 'status (diverged from upstream)' '
# reports nothing to commit
test_must_fail git commit --dry-run
) >actual &&
- test_i18ngrep "Your branch is up-to-date with .origin/master" actual
+ test_i18ngrep "Your branch is up to date with .origin/master" actual
'
cat >expect <<\EOF
test_must_fail git checkout heavytrack
'
-test_expect_success 'setup tracking with branch --set-upstream on existing branch' '
+test_expect_success '--set-upstream-to does not change branch' '
git branch from-master master &&
- test_must_fail git config branch.from-master.merge > actual &&
- git branch --set-upstream from-master master &&
- git config branch.from-master.merge > actual &&
- grep -q "^refs/heads/master$" actual
-'
-
-test_expect_success '--set-upstream does not change branch' '
+ git branch --set-upstream-to master from-master &&
git branch from-master2 master &&
test_must_fail git config branch.from-master2.merge > actual &&
git rev-list from-master2 &&
git update-ref refs/heads/from-master2 from-master2^ &&
git rev-parse from-master2 >expect2 &&
- git branch --set-upstream from-master2 master &&
+ git branch --set-upstream-to master from-master2 &&
git config branch.from-master.merge > actual &&
git rev-parse from-master2 >actual2 &&
grep -q "^refs/heads/master$" actual &&
cmp expect2 actual2
'
-test_expect_success '--set-upstream @{-1}' '
- git checkout from-master &&
+test_expect_success '--set-upstream-to @{-1}' '
+ git checkout follower &&
git checkout from-master2 &&
git config branch.from-master2.merge > expect2 &&
- git branch --set-upstream @{-1} follower &&
+ git branch --set-upstream-to @{-1} from-master &&
git config branch.from-master.merge > actual &&
git config branch.from-master2.merge > actual2 &&
- git branch --set-upstream from-master follower &&
+ git branch --set-upstream-to follower from-master &&
git config branch.from-master.merge > expect &&
test_cmp expect2 actual2 &&
test_cmp expect actual
!two@example.com!
!three@example.com!
!four@example.com!
+!five@example.com!
+!six@example.com!
EOF
"
Cc: <two@example.com> # trailing comments are ignored
Cc: <three@example.com>, <not.four@example.com> one address per line
Cc: "Some # Body" <four@example.com> [ <also.a.comment> ]
+ Cc: five@example.com # not.six@example.com
+ Cc: six@example.com, not.seven@example.com
EOF
clean_fake_sendmail &&
git send-email -1 --to=recipient@example.com \
not_in_topic=`git rev-list "^$topic" master`
if test -z "$not_in_topic"
then
- echo >&2 "$topic is already up-to-date with master"
+ echo >&2 "$topic is already up to date with master"
exit 1 ;# we could allow it, but there is no point.
else
exit 0
__attribute__((format (printf, 1, 2)))
static void transfer_debug(const char *fmt, ...)
{
+ /*
+ * NEEDSWORK: This function is sometimes used from multiple threads, and
+ * we end up using debug_enabled racily. That "should not matter" since
+ * we always write the same value, but it's still wrong. This function
+ * is listed in .tsan-suppressions for the time being.
+ */
+
va_list args;
char msgbuf[PBUFFERSIZE];
static int debug_enabled = -1;
msgs[ERROR_BIND_OVERLAP] = _("Entry '%s' overlaps with '%s'. Cannot bind.");
msgs[ERROR_SPARSE_NOT_UPTODATE_FILE] =
- _("Cannot update sparse checkout: the following entries are not up-to-date:\n%s");
+ _("Cannot update sparse checkout: the following entries are not up to date:\n%s");
msgs[ERROR_WOULD_LOSE_ORPHANED_OVERWRITTEN] =
_("The following working tree files would be overwritten by sparse checkout update:\n%s");
msgs[ERROR_WOULD_LOSE_ORPHANED_REMOVED] =
}
}
}
- errs |= finish_delayed_checkout(&state);
stop_progress(&progress);
+ errs |= finish_delayed_checkout(&state);
if (o->update)
git_attr_set_direction(GIT_ATTR_CHECKIN, NULL);
return errs != 0;
target = refs_resolve_ref_unsafe(get_worktree_ref_store(wt),
"HEAD",
- RESOLVE_REF_READING,
+ 0,
wt->head_sha1, &flags);
if (!target)
return;