A test retitling.
* sb/t3600-rephrase:
t3600: rename test to describe its functionality
Matthias Urlichs <matthias@urlichs.de> <smurf@kiste.(none)>
Matthias Urlichs <matthias@urlichs.de> <smurf@smurf.noris.de>
Michael Coleman <tutufan@gmail.com>
-Michael J Gruber <git@drmicha.warpmail.net> <michaeljgruber+gmane@fastmail.fm>
+Michael J Gruber <git@grubix.eu> <michaeljgruber+gmane@fastmail.fm>
+Michael J Gruber <git@grubix.eu> <git@drmicha.warpmail.net>
Michael S. Tsirkin <mst@kernel.org> <mst@redhat.com>
Michael S. Tsirkin <mst@kernel.org> <mst@mellanox.co.il>
Michael S. Tsirkin <mst@kernel.org> <mst@dev.mellanox.co.il>
--- /dev/null
+Git v2.12.2 Release Notes
+=========================
+
+Fixes since v2.12.1
+-------------------
+
+ * "git status --porcelain" is supposed to give a stable output, but a
+ few strings were left as translatable by mistake.
+
+ * "Dumb http" transport used to misparse a nonsense http-alternates
+ response, which has been fixed.
+
+ * "git diff --quiet" relies on the size field in diff_filespec to be
+ correctly populated, but diff_populate_filespec() helper function
+ made an incorrect short-cut when asked only to populate the size
+ field for paths that need to go through convert_to_git() (e.g. CRLF
+ conversion).
+
+ * There was no need to use Python just to print a few messages to the
+ standard error stream, but we somehow did.
+
+ * A leak in a codepath to read from a packed object in (rare) cases
+ has been plugged.
+
+ * "git upload-pack", which is a counter-part of "git fetch", did not
+ report a request for a ref that was not advertised as invalid.
+ This is generally not a problem (because "git fetch" will stop
+ before making such a request), but is the right thing to do.
+
+ * A "gc.log" file left by a backgrounded "gc --auto" disables further
+ automatic gc; it has been taught to run at least once a day (by
+ default) by ignoring a stale "gc.log" file that is too old.
+
+ * "git remote rm X", when a branch has remote X configured as the
+ value of its branch.*.remote, tried to remove branch.*.remote and
+ branch.*.merge and failed if either is unset.
+
+ * A caller of the tempfile API that uses the stdio interface to write
+ to files may ignore errors while writing, which is detected when the
+ tempfile is closed (with a call to ferror()). By that time, the
+ original errno that may have told us what went wrong is likely to be
+ long gone, overwritten by an irrelevant value.
+ close_tempfile() now resets errno to EIO to make errno at least
+ predictable.
+
+ * "git show-branch" expected there were only very short branch names
+ in the repository and used a fixed-length buffer to hold them
+ without checking for overflow.
+
+ * The code that parses header fields in the commit object has been
+ updated for (micro)performance and code hygiene.
+
+ * A test that creates a confusing branch whose name is HEAD has been
+ corrected not to do so.
+
+ * "Cc:" on the trailer part does not have to conform to RFC strictly,
+ unlike in the e-mail header. "git send-email" has been updated to
+ ignore anything after '>' when picking addresses, to allow non-address
+ cruft like " # stable 4.4" after the address.
+
+ * "git push" had a handful of codepaths that could lead to a deadlock
+ when unexpected error happened, which has been fixed.
+
+ * Code to read submodule.<name>.ignore config did not state the
+ variable name correctly when giving an error message diagnosing
+ misconfiguration.
+
+ * "git ls-remote" and "git archive --remote" are designed to work
+ without being in a directory under Git's control. However, recent
+ updates revealed that we randomly look into a directory called
+ .git/ without actually doing necessary set-up when working in a
+ repository. Stop doing so.
+
+ * The code to parse the command line "git grep <patterns>... <rev>
+ [[--] <pathspec>...]" has been cleaned up, and a handful of bugs
+ have been fixed (e.g. we used to check whether "--" is a rev).
+
+ * The code to parse "git -c VAR=VAL cmd" and set configuration
+ variable for the duration of cmd had two small bugs, which have
+ been fixed.
+ This supersedes the jc/config-case-cmdline topic, which has been discarded.
+
+Also contains various documentation updates and code clean-ups.
Make `git gc --auto` return immediately and run in background
if the system supports it. Default is true.
+gc.logExpiry::
+ If the file gc.log exists, then `git gc --auto` won't run
+ unless that file is more than 'gc.logExpiry' old. Default is
+ "1.day". See `gc.pruneExpire` for more ways to specify its
+ value.
+
gc.packRefs::
Running `git pack-refs` in a repository renders it
unclonable by Git versions prior to 1.5.1.2 over dumb
pushing to the same repository you would normally pull from
(i.e. central workflow).
+* `tracking` - This is a deprecated synonym for `upstream`.
+
* `simple` - in centralized workflow, work like `upstream` with an
added safety to refuse to push if the upstream branch's name is
different from the local one.
HOOKS
-----
This command can run `commit-msg`, `prepare-commit-msg`, `pre-commit`,
-and `post-commit` hooks. See linkgit:githooks[5] for more
+`post-commit` and `post-rewrite` hooks. See linkgit:githooks[5] for more
information.
FILES
project root. Implies <<Remap_to_ancestor>>.
--prune-empty::
- Some kind of filters will generate empty commits, that left the tree
- untouched. This switch allow git-filter-branch to ignore such
- commits. Though, this switch only applies for commits that have one
- and only one parent, it will hence keep merges points. Also, this
- option is not compatible with the use of `--commit-filter`. Though you
- just need to use the function 'git_commit_non_empty_tree "$@"' instead
- of the `git commit-tree "$@"` idiom in your commit filter to make that
- happen.
+ Some filters will generate empty commits that leave the tree untouched.
+ This option instructs git-filter-branch to remove such commits if they
+ have exactly one or zero non-pruned parents; merge commits will
+ therefore remain intact. This option cannot be used together with
+ `--commit-filter`, though the same effect can be achieved by using the
+ provided `git_commit_non_empty_tree` function in a commit filter.
--original <namespace>::
Use this option to set the namespace where the original commits
of the patch series (but see the discussion of the `notes.rewrite`
configuration options in linkgit:git-notes[1] to use this workflow).
---[no]-signature=<signature>::
+--[no-]signature=<signature>::
Add a signature to each message produced. Per RFC 3676 the signature
is separated from the body by a line with '-- ' on it. If the
signature option is omitted the signature defaults to the Git version
reply to the given Message-Id, which avoids breaking threads to
provide a new patch series.
The second and subsequent emails will be sent as replies according to
- the `--[no]-chain-reply-to` setting.
+ the `--[no-]chain-reply-to` setting.
+
So for example when `--thread` and `--no-chain-reply-to` are specified, the
second and subsequent patches will be replies to the first one like in the
branch of the `git.git` repository.
Documentation for older releases are available here:
-* link:v2.12.1/git.html[documentation for release 2.12.1]
+* link:v2.12.2/git.html[documentation for release 2.12.2]
* release notes for
+ link:RelNotes/2.12.2.txt[2.12.2].
link:RelNotes/2.12.1.txt[2.12.1].
link:RelNotes/2.12.0.txt[2.12].
diff-patch format.
-diffcore-break: For Splitting Up "Complete Rewrites"
-----------------------------------------------------
+diffcore-break: For Splitting Up Complete Rewrites
+--------------------------------------------------
The second transformation in the chain is diffcore-break, and is
controlled by the -B option to the 'git diff-{asterisk}' commands. This is
after "-B" option (e.g. "-B75" to tell it to use 75%).
-diffcore-rename: For Detection Renames and Copies
+diffcore-rename: For Detecting Renames and Copies
-------------------------------------------------
This transformation is used to detect renames and copies, and is
copied happened to have been modified in the same changeset.
-diffcore-merge-broken: For Putting "Complete Rewrites" Back Together
---------------------------------------------------------------------
+diffcore-merge-broken: For Putting Complete Rewrites Back Together
+------------------------------------------------------------------
This transformation is used to merge filepairs broken by
diffcore-break, and not transformed into rename/copy by
#!/bin/sh
GVF=GIT-VERSION-FILE
-DEF_VER=v2.12.1
+DEF_VER=v2.12.2
LF='
'
#
# Define MKDIR_WO_TRAILING_SLASH if your mkdir() can't deal with trailing slash.
#
-# Define NO_MKSTEMPS if you don't have mkstemps in the C library.
-#
# Define NO_GECOS_IN_PWENT if you don't have pw_gecos in struct passwd
# in the C library.
#
COMPAT_CFLAGS += -DMKDIR_WO_TRAILING_SLASH
COMPAT_OBJS += compat/mkdir.o
endif
-ifdef NO_MKSTEMPS
- COMPAT_CFLAGS += -DNO_MKSTEMPS
-endif
ifdef NO_UNSETENV
COMPAT_CFLAGS += -DNO_UNSETENV
COMPAT_OBJS += compat/unsetenv.o
ifdef GIT_PERF_MAKE_OPTS
@echo GIT_PERF_MAKE_OPTS=\''$(subst ','\'',$(subst ','\'',$(GIT_PERF_MAKE_OPTS)))'\' >>$@+
endif
+ifdef GIT_INTEROP_MAKE_OPTS
+ @echo GIT_INTEROP_MAKE_OPTS=\''$(subst ','\'',$(subst ','\'',$(GIT_INTEROP_MAKE_OPTS)))'\' >>$@+
+endif
ifdef TEST_GIT_INDEX_VERSION
@echo TEST_GIT_INDEX_VERSION=\''$(subst ','\'',$(subst ','\'',$(TEST_GIT_INDEX_VERSION)))'\' >>$@+
endif
Please read the file [INSTALL][] for installation instructions.
-Many Git online resources are accessible from http://git-scm.com/
+Many Git online resources are accessible from <https://git-scm.com/>
including full documentation and Git related tools.
See [Documentation/gittutorial.txt][] to get started, then see
[Documentation/SubmittingPatches][] for instructions on patch submission).
To subscribe to the list, send an email with just "subscribe git" in
the body to majordomo@vger.kernel.org. The mailing list archives are
-available at https://public-inbox.org/git,
-http://marc.info/?l=git and other archival sites.
+available at <https://public-inbox.org/git/>,
+<http://marc.info/?l=git> and other archival sites.
The maintainer frequently sends the "What's cooking" reports that
list the current status of various development topics to the mailing
-Documentation/RelNotes/2.12.1.txt
\ No newline at end of file
+Documentation/RelNotes/2.12.2.txt
\ No newline at end of file
/*
* Append a new blame entry to a given output queue.
*/
-static void add_blame_entry(struct blame_entry ***queue, struct blame_entry *e)
+static void add_blame_entry(struct blame_entry ***queue,
+ const struct blame_entry *src)
{
+ struct blame_entry *e = xmalloc(sizeof(*e));
+ memcpy(e, src, sizeof(*e));
origin_incref(e->suspect);
e->next = **queue;
struct blame_entry *split,
struct blame_entry *e)
{
- struct blame_entry *new_entry;
-
if (split[0].suspect && split[2].suspect) {
/* The first part (reuse storage for the existing entry e) */
dup_entry(unblamed, e, &split[0]);
/* The last part -- me */
- new_entry = xmalloc(sizeof(*new_entry));
- memcpy(new_entry, &(split[2]), sizeof(struct blame_entry));
- add_blame_entry(unblamed, new_entry);
+ add_blame_entry(unblamed, &split[2]);
/* ... and the middle part -- parent */
- new_entry = xmalloc(sizeof(*new_entry));
- memcpy(new_entry, &(split[1]), sizeof(struct blame_entry));
- add_blame_entry(blamed, new_entry);
+ add_blame_entry(blamed, &split[1]);
}
else if (!split[0].suspect && !split[2].suspect)
/*
else if (split[0].suspect) {
/* me and then parent */
dup_entry(unblamed, e, &split[0]);
-
- new_entry = xmalloc(sizeof(*new_entry));
- memcpy(new_entry, &(split[1]), sizeof(struct blame_entry));
- add_blame_entry(blamed, new_entry);
+ add_blame_entry(blamed, &split[1]);
}
else {
/* parent and then me */
dup_entry(blamed, e, &split[1]);
-
- new_entry = xmalloc(sizeof(*new_entry));
- memcpy(new_entry, &(split[2]), sizeof(struct blame_entry));
- add_blame_entry(unblamed, new_entry);
+ add_blame_entry(unblamed, &split[2]);
}
}
int ret = 0;
int remote_branch = 0;
struct strbuf bname = STRBUF_INIT;
+ unsigned allowed_interpret;
switch (kinds) {
case FILTER_REFS_REMOTES:
fmt = "refs/remotes/%s";
/* For subsequent UI messages */
remote_branch = 1;
+ allowed_interpret = INTERPRET_BRANCH_REMOTE;
force = 1;
break;
case FILTER_REFS_BRANCHES:
fmt = "refs/heads/%s";
+ allowed_interpret = INTERPRET_BRANCH_LOCAL;
break;
default:
die(_("cannot use -a with -d"));
char *target = NULL;
int flags = 0;
- strbuf_branchname(&bname, argv[i]);
+ strbuf_branchname(&bname, argv[i], allowed_interpret);
free(name);
name = mkpathdup(fmt, bname.buf);
{
struct strbuf buf = STRBUF_INIT;
- strbuf_branchname(&buf, branch->name);
+ strbuf_branchname(&buf, branch->name, INTERPRET_BRANCH_LOCAL);
if (strcmp(buf.buf, branch->name))
branch->name = xstrdup(buf.buf);
strbuf_splice(&buf, 0, 0, "refs/heads/", 11);
* remote no-such-ref' would silently succeed without issuing
* an error.
*/
- for (i = 0; i < nr_sought; i++) {
- if (!sought[i] || sought[i]->matched)
- continue;
- error("no such remote ref %s", sought[i]->name);
- ret = 1;
- }
+ ret |= report_unmatched_refs(sought, nr_sought);
while (ref) {
printf("%s %s\n",
static int gc_auto_threshold = 6700;
static int gc_auto_pack_limit = 50;
static int detach_auto = 1;
+static unsigned long gc_log_expire_time;
+static const char *gc_log_expire = "1.day.ago";
static const char *prune_expire = "2.weeks.ago";
static const char *prune_worktrees_expire = "3.months.ago";
static void process_log_file(void)
{
struct stat st;
- if (!fstat(get_lock_file_fd(&log_lock), &st) && st.st_size)
+ if (fstat(get_lock_file_fd(&log_lock), &st)) {
+ /*
+ * Perhaps there was an i/o error or another
+ * unlikely situation. Try to make a note of
+ * this in gc.log along with any existing
+ * messages.
+ */
+ int saved_errno = errno;
+ fprintf(stderr, _("Failed to fstat %s: %s"),
+ get_tempfile_path(&log_lock.tempfile),
+ strerror(saved_errno));
+ fflush(stderr);
commit_lock_file(&log_lock);
- else
+ errno = saved_errno;
+ } else if (st.st_size) {
+ /* There was some error recorded in the lock file */
+ commit_lock_file(&log_lock);
+ } else {
+ /* No error, clean up any old gc.log */
+ unlink(git_path("gc.log"));
rollback_lock_file(&log_lock);
+ }
}
static void process_log_file_at_exit(void)
git_config_get_bool("gc.autodetach", &detach_auto);
git_config_date_string("gc.pruneexpire", &prune_expire);
git_config_date_string("gc.worktreepruneexpire", &prune_worktrees_expire);
+ git_config_date_string("gc.logexpiry", &gc_log_expire);
+
git_config(git_default_config, NULL);
}
static int report_last_gc_error(void)
{
struct strbuf sb = STRBUF_INIT;
- int ret;
+ int ret = 0;
+ struct stat st;
+ char *gc_log_path = git_pathdup("gc.log");
- ret = strbuf_read_file(&sb, git_path("gc.log"), 0);
+ if (stat(gc_log_path, &st)) {
+ if (errno == ENOENT)
+ goto done;
+
+ ret = error_errno(_("Can't stat %s"), gc_log_path);
+ goto done;
+ }
+
+ if (st.st_mtime < gc_log_expire_time)
+ goto done;
+
+ ret = strbuf_read_file(&sb, gc_log_path, 0);
if (ret > 0)
- return error(_("The last gc run reported the following. "
+ ret = error(_("The last gc run reported the following. "
"Please correct the root cause\n"
"and remove %s.\n"
"Automatic cleanup will not be performed "
"until the file is removed.\n\n"
"%s"),
- git_path("gc.log"), sb.buf);
+ gc_log_path, sb.buf);
strbuf_release(&sb);
- return 0;
+done:
+ free(gc_log_path);
+ return ret;
}
static int gc_before_repack(void)
argv_array_pushl(&prune_worktrees, "worktree", "prune", "--expire", NULL);
argv_array_pushl(&rerere, "rerere", "gc", NULL);
+ /* default expiry time, overwritten in gc_config */
gc_config();
+ if (parse_expiry_date(gc_log_expire, &gc_log_expire_time))
+ die(_("Failed to parse gc.logexpiry value %s"), gc_log_expire);
if (pack_refs < 0)
pack_refs = !is_bare_repository();
warning(_("There are too many unreachable loose objects; "
"run 'git prune' to remove them."));
+ if (!daemonized)
+ unlink(git_path("gc.log"));
+
return 0;
}
int dummy;
int use_index = 1;
int pattern_type_arg = GREP_PATTERN_TYPE_UNSPECIFIED;
+ int allow_revs;
struct option options[] = {
OPT_BOOL(0, "cached", &cached,
compile_grep_patterns(&opt);
- /* Check revs and then paths */
+ /*
+ * We have to find "--" in a separate pass, because its presence
+ * influences how we will parse arguments that come before it.
+ */
+ for (i = 0; i < argc; i++) {
+ if (!strcmp(argv[i], "--")) {
+ seen_dashdash = 1;
+ break;
+ }
+ }
+
+ /*
+ * Resolve any rev arguments. If we have a dashdash, then everything up
+ * to it must resolve as a rev. If not, then we stop at the first
+ * non-rev and assume everything else is a path.
+ */
+ allow_revs = use_index && !untracked;
for (i = 0; i < argc; i++) {
const char *arg = argv[i];
unsigned char sha1[20];
struct object_context oc;
- /* Is it a rev? */
- if (!get_sha1_with_context(arg, 0, sha1, &oc)) {
- struct object *object = parse_object_or_die(sha1, arg);
- if (!seen_dashdash)
- verify_non_filename(prefix, arg);
- add_object_array_with_path(object, arg, &list, oc.mode, oc.path);
- continue;
- }
+ struct object *object;
+
if (!strcmp(arg, "--")) {
i++;
- seen_dashdash = 1;
+ break;
}
- break;
+
+ if (!allow_revs) {
+ if (seen_dashdash)
+ die(_("--no-index or --untracked cannot be used with revs"));
+ break;
+ }
+
+ if (get_sha1_with_context(arg, 0, sha1, &oc)) {
+ if (seen_dashdash)
+ die(_("unable to resolve revision: %s"), arg);
+ break;
+ }
+
+ object = parse_object_or_die(sha1, arg);
+ if (!seen_dashdash)
+ verify_non_filename(prefix, arg);
+ add_object_array_with_path(object, arg, &list, oc.mode, oc.path);
}
+ /*
+ * Anything left over is presumed to be a path. But in the non-dashdash
+ * "do what I mean" case, we verify and complain when that isn't true.
+ */
+ if (!seen_dashdash) {
+ int j;
+ for (j = i; j < argc; j++)
+ verify_filename(prefix, argv[j], j == i && allow_revs);
+ }
+
+ parse_pathspec(&pathspec, 0,
+ PATHSPEC_PREFER_CWD |
+ (opt.max_depth != -1 ? PATHSPEC_MAXDEPTH_VALID : 0),
+ prefix, argv + i);
+ pathspec.max_depth = opt.max_depth;
+ pathspec.recursive = 1;
+
#ifndef NO_PTHREADS
if (list.nr || cached || show_in_pager)
num_threads = 0;
}
#endif
- /* The rest are paths */
- if (!seen_dashdash) {
- int j;
- for (j = i; j < argc; j++)
- verify_filename(prefix, argv[j], j == i);
- }
-
- parse_pathspec(&pathspec, 0,
- PATHSPEC_PREFER_CWD |
- (opt.max_depth != -1 ? PATHSPEC_MAXDEPTH_VALID : 0),
- prefix, argv + i);
- pathspec.max_depth = opt.max_depth;
- pathspec.recursive = 1;
-
if (recurse_submodules) {
gitmodules_config();
compile_submodule_options(&opt, &pathspec, cached, untracked,
if (!use_index || untracked) {
int use_exclude = (opt_exclude < 0) ? use_index : !!opt_exclude;
- if (list.nr)
- die(_("--no-index or --untracked cannot be used with revs."));
hit = grep_directory(&opt, &pathspec, use_exclude, use_index);
} else if (0 <= opt_exclude) {
die(_("--[no-]exclude-standard cannot be used for tracked contents."));
unsigned char *sha1)
{
const char *report = "pack";
- char name[PATH_MAX];
+ struct strbuf pack_name = STRBUF_INIT;
+ struct strbuf index_name = STRBUF_INIT;
+ struct strbuf keep_name_buf = STRBUF_INIT;
int err;
if (!from_stdin) {
int keep_fd, keep_msg_len = strlen(keep_msg);
if (!keep_name)
- keep_fd = odb_pack_keep(name, sizeof(name), sha1);
- else
- keep_fd = open(keep_name, O_RDWR|O_CREAT|O_EXCL, 0600);
+ keep_name = odb_pack_name(&keep_name_buf, sha1, "keep");
+ keep_fd = odb_pack_keep(keep_name);
if (keep_fd < 0) {
if (errno != EEXIST)
die_errno(_("cannot write keep file '%s'"),
- keep_name ? keep_name : name);
+ keep_name);
} else {
if (keep_msg_len > 0) {
write_or_die(keep_fd, keep_msg, keep_msg_len);
}
if (close(keep_fd) != 0)
die_errno(_("cannot close written keep file '%s'"),
- keep_name ? keep_name : name);
+ keep_name);
report = "keep";
}
}
if (final_pack_name != curr_pack_name) {
- if (!final_pack_name) {
- snprintf(name, sizeof(name), "%s/pack/pack-%s.pack",
- get_object_directory(), sha1_to_hex(sha1));
- final_pack_name = name;
- }
+ if (!final_pack_name)
+ final_pack_name = odb_pack_name(&pack_name, sha1, "pack");
if (finalize_object_file(curr_pack_name, final_pack_name))
die(_("cannot store pack file"));
} else if (from_stdin)
chmod(final_pack_name, 0444);
if (final_index_name != curr_index_name) {
- if (!final_index_name) {
- snprintf(name, sizeof(name), "%s/pack/pack-%s.idx",
- get_object_directory(), sha1_to_hex(sha1));
- final_index_name = name;
- }
+ if (!final_index_name)
+ final_index_name = odb_pack_name(&index_name, sha1, "idx");
if (finalize_object_file(curr_index_name, final_index_name))
die(_("cannot store index file"));
} else
input_offset += err;
}
}
+
+ strbuf_release(&index_name);
+ strbuf_release(&pack_name);
+ strbuf_release(&keep_name_buf);
}
static int git_index_pack_config(const char *k, const char *v, void *cb)
char *found_ref;
int len, early;
- strbuf_branchname(&bname, remote);
+ strbuf_branchname(&bname, remote, 0);
remote = bname.buf;
memset(branch_head, 0, sizeof(branch_head));
* 2. Updating our size/type to the non-delta representation. These were
* either not recorded initially (size) or overwritten with the delta type
* (type) when check_object() decided to reuse the delta.
+ *
+ * 3. Resetting our delta depth, as we are now a base object.
*/
static void drop_reused_delta(struct object_entry *entry)
{
p = &(*p)->delta_sibling;
}
entry->delta = NULL;
+ entry->depth = 0;
oi.sizep = &entry->size;
oi.typep = &entry->type;
* Follow the chain of deltas from this entry onward, throwing away any links
* that cause us to hit a cycle (as determined by the DFS state flags in
* the entries).
+ *
+ * We also detect too-long reused chains that would violate our --depth
+ * limit.
*/
static void break_delta_chains(struct object_entry *entry)
{
- /* If it's not a delta, it can't be part of a cycle. */
- if (!entry->delta) {
- entry->dfs_state = DFS_DONE;
- return;
- }
+ /*
+ * The actual depth of each object we will write is stored as an int,
+ * as it cannot exceed our int "depth" limit. But before we break
+ * chains based on that limit, we may potentially go as deep as the
+ * number of objects, which is elsewhere bounded to a uint32_t.
+ */
+ uint32_t total_depth;
+ struct object_entry *cur, *next;
+
+ for (cur = entry, total_depth = 0;
+ cur;
+ cur = cur->delta, total_depth++) {
+ if (cur->dfs_state == DFS_DONE) {
+ /*
+ * We've already seen this object and know it isn't
+ * part of a cycle. We do need to append its depth
+ * to our count.
+ */
+ total_depth += cur->depth;
+ break;
+ }
- switch (entry->dfs_state) {
- case DFS_NONE:
/*
- * This is the first time we've seen the object. We mark it as
- * part of the active potential cycle and recurse.
+ * We break cycles before looping, so an ACTIVE state (or any
+ * other cruft which made its way into the state variable)
+ * is a bug.
*/
- entry->dfs_state = DFS_ACTIVE;
- break_delta_chains(entry->delta);
- entry->dfs_state = DFS_DONE;
- break;
+ if (cur->dfs_state != DFS_NONE)
+ die("BUG: confusing delta dfs state in first pass: %d",
+ cur->dfs_state);
- case DFS_DONE:
- /* object already examined, and not part of a cycle */
- break;
+ /*
+ * Now we know this is the first time we've seen the object. If
+ * it's not a delta, we're done traversing, but we'll mark it
+ * done to save time on future traversals.
+ */
+ if (!cur->delta) {
+ cur->dfs_state = DFS_DONE;
+ break;
+ }
- case DFS_ACTIVE:
/*
- * We found a cycle that needs broken. It would be correct to
- * break any link in the chain, but it's convenient to
- * break this one.
+ * Mark ourselves as active and see if the next step causes
+ * us to cycle to another active object. It's important to do
+ * this _before_ we loop, because it impacts where we make the
+ * cut, and thus how our total_depth counter works.
+ * E.g., We may see a partial loop like:
+ *
+ * A -> B -> C -> D -> B
+ *
+ * Cutting B->C breaks the cycle. But now the depth of A is
+ * only 1, and our total_depth counter is at 3. The size of the
+ * error is always one less than the size of the cycle we
+ * broke. Commits C and D were "lost" from A's chain.
+ *
+ * If we instead cut D->B, then the depth of A is correct at 3.
+ * We keep all commits in the chain that we examined.
*/
- drop_reused_delta(entry);
- entry->dfs_state = DFS_DONE;
- break;
+ cur->dfs_state = DFS_ACTIVE;
+ if (cur->delta->dfs_state == DFS_ACTIVE) {
+ drop_reused_delta(cur);
+ cur->dfs_state = DFS_DONE;
+ break;
+ }
+ }
+
+ /*
+ * And now that we've gone all the way to the bottom of the chain, we
+ * need to clear the active flags and set the depth fields as
+ * appropriate. Unlike the loop above, which can quit when it drops a
+ * delta, we need to keep going to look for more depth cuts. So we need
+ * an extra "next" pointer to keep going after we reset cur->delta.
+ */
+ for (cur = entry; cur; cur = next) {
+ next = cur->delta;
+
+ /*
+ * We should have a chain of zero or more ACTIVE states down to
+ * a final DONE. We can quit after the DONE, because either it
+ * has no bases, or we've already handled them in a previous
+ * call.
+ */
+ if (cur->dfs_state == DFS_DONE)
+ break;
+ else if (cur->dfs_state != DFS_ACTIVE)
+ die("BUG: confusing delta dfs state in second pass: %d",
+ cur->dfs_state);
+
+ /*
+ * If the total_depth is more than depth, then we need to snip
+ * the chain into two or more smaller chains that don't exceed
+ * the maximum depth. Most of the resulting chains will contain
+ * (depth + 1) entries (i.e., depth deltas plus one base), and
+ * the last chain (i.e., the one containing entry) will contain
+ * whatever entries are left over, namely
+ * (total_depth % (depth + 1)) of them.
+ *
+ * Since we are iterating towards decreasing depth, we need to
+ * decrement total_depth as we go, and we need to write to the
+ * entry what its final depth will be after all of the
+ * snipping. Since we're snipping into chains of length (depth
+ * + 1) entries, the final depth of an entry will be its
+ * original depth modulo (depth + 1). Any time we encounter an
+ * entry whose final depth is supposed to be zero, we snip it
+ * from its delta base, thereby making it so.
+ */
+ cur->depth = (total_depth--) % (depth + 1);
+ if (!cur->depth)
+ drop_reused_delta(cur);
+
+ cur->dfs_state = DFS_DONE;
}
}
static void run_update_post_hook(struct command *commands)
{
struct command *cmd;
- int argc;
struct child_process proc = CHILD_PROCESS_INIT;
const char *hook;
hook = find_hook("post-update");
- for (argc = 0, cmd = commands; cmd; cmd = cmd->next) {
- if (cmd->error_string || cmd->did_not_exist)
- continue;
- argc++;
- }
- if (!argc || !hook)
+ if (!hook)
return;
- argv_array_push(&proc.args, hook);
for (cmd = commands; cmd; cmd = cmd->next) {
if (cmd->error_string || cmd->did_not_exist)
continue;
+ if (!proc.args.argc)
+ argv_array_push(&proc.args, hook);
argv_array_push(&proc.args, cmd->ref_name);
}
+ if (!proc.args.argc)
+ return;
proc.no_stdin = 1;
proc.stdout_to_stderr = 1;
}
tmp_objdir = tmp_objdir_create();
- if (!tmp_objdir)
+ if (!tmp_objdir) {
+ if (err_fd > 0)
+ close(err_fd);
return "unable to create temporary object directory";
+ }
child.env = tmp_objdir_env(tmp_objdir);
/*
strbuf_reset(&buf);
strbuf_addf(&buf, "branch.%s.%s",
item->string, *k);
- git_config_set(buf.buf, NULL);
+ result = git_config_set_gently(buf.buf, NULL);
+ if (result && result != CONFIG_NOTHING_SET)
+ die(_("could not unset '%s'"), buf.buf);
}
}
}
"\n"
"Run \"git rev-parse --parseopt -h\" for more information on the first usage.");
+/*
+ * Parse "opt" or "opt=<value>", setting value respectively to either
+ * NULL or the string after "=".
+ */
+static int opt_with_value(const char *arg, const char *opt, const char **value)
+{
+ if (skip_prefix(arg, opt, &arg)) {
+ if (!*arg) {
+ *value = NULL;
+ return 1;
+ }
+ if (*arg++ == '=') {
+ *value = arg;
+ return 1;
+ }
+ }
+ return 0;
+}
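/*
 * Illustrative behavior of opt_with_value() (a worked example, not part
 * of the change itself):
 *
 *	opt_with_value("--short",    "--short", &v) -> 1, v == NULL
 *	opt_with_value("--short=7",  "--short", &v) -> 1, v == "7"
 *	opt_with_value("--short-ci", "--short", &v) -> 0 (different option)
 */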
+
+static void handle_ref_opt(const char *pattern, const char *prefix)
+{
+ if (pattern)
+ for_each_glob_ref_in(show_reference, pattern, prefix, NULL);
+ else
+ for_each_ref_in(prefix, show_reference, NULL);
+ clear_ref_exclusion(&ref_excludes);
+}
+
int cmd_rev_parse(int argc, const char **argv, const char *prefix)
{
int i, as_is = 0, verify = 0, quiet = 0, revs_count = 0, type = 0;
flags |= GET_SHA1_QUIETLY;
continue;
}
- if (!strcmp(arg, "--short") ||
- starts_with(arg, "--short=")) {
+ if (opt_with_value(arg, "--short", &arg)) {
filter &= ~(DO_FLAGS|DO_NOREV);
verify = 1;
abbrev = DEFAULT_ABBREV;
- if (!arg[7])
+ if (!arg)
continue;
- abbrev = strtoul(arg + 8, NULL, 10);
+ abbrev = strtoul(arg, NULL, 10);
if (abbrev < MINIMUM_ABBREV)
abbrev = MINIMUM_ABBREV;
else if (40 <= abbrev)
symbolic = SHOW_SYMBOLIC_FULL;
continue;
}
- if (starts_with(arg, "--abbrev-ref") &&
- (!arg[12] || arg[12] == '=')) {
+ if (opt_with_value(arg, "--abbrev-ref", &arg)) {
abbrev_ref = 1;
abbrev_ref_strict = warn_ambiguous_refs;
- if (arg[12] == '=') {
- if (!strcmp(arg + 13, "strict"))
+ if (arg) {
+ if (!strcmp(arg, "strict"))
abbrev_ref_strict = 1;
- else if (!strcmp(arg + 13, "loose"))
+ else if (!strcmp(arg, "loose"))
abbrev_ref_strict = 0;
else
- die("unknown mode for %s", arg);
+ die("unknown mode for --abbrev-ref: %s",
+ arg);
}
continue;
}
for_each_ref(show_reference, NULL);
continue;
}
- if (starts_with(arg, "--disambiguate=")) {
- for_each_abbrev(arg + 15, show_abbrev, NULL);
+ if (skip_prefix(arg, "--disambiguate=", &arg)) {
+ for_each_abbrev(arg, show_abbrev, NULL);
continue;
}
if (!strcmp(arg, "--bisect")) {
for_each_ref_in("refs/bisect/good", anti_reference, NULL);
continue;
}
- if (starts_with(arg, "--branches=")) {
- for_each_glob_ref_in(show_reference, arg + 11,
- "refs/heads/", NULL);
- clear_ref_exclusion(&ref_excludes);
- continue;
- }
- if (!strcmp(arg, "--branches")) {
- for_each_branch_ref(show_reference, NULL);
- clear_ref_exclusion(&ref_excludes);
- continue;
- }
- if (starts_with(arg, "--tags=")) {
- for_each_glob_ref_in(show_reference, arg + 7,
- "refs/tags/", NULL);
- clear_ref_exclusion(&ref_excludes);
- continue;
- }
- if (!strcmp(arg, "--tags")) {
- for_each_tag_ref(show_reference, NULL);
- clear_ref_exclusion(&ref_excludes);
+ if (opt_with_value(arg, "--branches", &arg)) {
+ handle_ref_opt(arg, "refs/heads/");
continue;
}
- if (starts_with(arg, "--glob=")) {
- for_each_glob_ref(show_reference, arg + 7, NULL);
- clear_ref_exclusion(&ref_excludes);
+ if (opt_with_value(arg, "--tags", &arg)) {
+ handle_ref_opt(arg, "refs/tags/");
continue;
}
- if (starts_with(arg, "--remotes=")) {
- for_each_glob_ref_in(show_reference, arg + 10,
- "refs/remotes/", NULL);
- clear_ref_exclusion(&ref_excludes);
+ if (skip_prefix(arg, "--glob=", &arg)) {
+ handle_ref_opt(arg, NULL);
continue;
}
- if (!strcmp(arg, "--remotes")) {
- for_each_remote_ref(show_reference, NULL);
- clear_ref_exclusion(&ref_excludes);
+ if (opt_with_value(arg, "--remotes", &arg)) {
+ handle_ref_opt(arg, "refs/remotes/");
continue;
}
- if (starts_with(arg, "--exclude=")) {
- add_ref_exclusion(&ref_excludes, arg + 10);
+ if (skip_prefix(arg, "--exclude=", &arg)) {
+ add_ref_exclusion(&ref_excludes, arg);
continue;
}
if (!strcmp(arg, "--show-toplevel")) {
}
continue;
}
- if (starts_with(arg, "--since=")) {
- show_datestring("--max-age=", arg+8);
+ if (skip_prefix(arg, "--since=", &arg)) {
+ show_datestring("--max-age=", arg);
continue;
}
- if (starts_with(arg, "--after=")) {
- show_datestring("--max-age=", arg+8);
+ if (skip_prefix(arg, "--after=", &arg)) {
+ show_datestring("--max-age=", arg);
continue;
}
- if (starts_with(arg, "--before=")) {
- show_datestring("--min-age=", arg+9);
+ if (skip_prefix(arg, "--before=", &arg)) {
+ show_datestring("--min-age=", arg);
continue;
}
- if (starts_with(arg, "--until=")) {
- show_datestring("--min-age=", arg+8);
+ if (skip_prefix(arg, "--until=", &arg)) {
+ show_datestring("--min-age=", arg);
continue;
}
if (show_flag(arg) && verify)
ctx.fmt = CMIT_FMT_USERFORMAT;
ctx.abbrev = log->abbrev;
ctx.subject = "";
- ctx.after_subject = "";
ctx.date_mode.type = DATE_NORMAL;
ctx.output_encoding = get_log_output_encoding();
pp_commit_easy(CMIT_FMT_ONELINE, commit, &pretty);
pretty_str = pretty.buf;
}
- if (starts_with(pretty_str, "[PATCH] "))
- pretty_str += 8;
+ skip_prefix(pretty_str, "[PATCH] ", &pretty_str);
if (!no_name) {
if (name && name->head_name) {
}
}
-static int rev_is_head(char *head, int headlen, char *name,
+static int rev_is_head(const char *head, const char *name,
unsigned char *head_sha1, unsigned char *sha1)
{
- if ((!head[0]) ||
- (head_sha1 && sha1 && hashcmp(head_sha1, sha1)))
+ if (!head || (head_sha1 && sha1 && hashcmp(head_sha1, sha1)))
return 0;
- if (starts_with(head, "refs/heads/"))
- head += 11;
- if (starts_with(name, "refs/heads/"))
- name += 11;
- else if (starts_with(name, "heads/"))
- name += 6;
+ skip_prefix(head, "refs/heads/", &head);
+ if (!skip_prefix(name, "refs/heads/", &name))
+ skip_prefix(name, "heads/", &name);
return !strcmp(head, name);
}
int all_heads = 0, all_remotes = 0;
int all_mask, all_revs;
enum rev_sort_order sort_order = REV_SORT_IN_GRAPH_ORDER;
- char head[128];
- const char *head_p;
- int head_len;
+ char *head;
struct object_id head_oid;
int merge_base = 0;
int independent = 0;
snarf_refs(all_heads, all_remotes);
}
- head_p = resolve_ref_unsafe("HEAD", RESOLVE_REF_READING,
- head_oid.hash, NULL);
- if (head_p) {
- head_len = strlen(head_p);
- memcpy(head, head_p, head_len + 1);
- }
- else {
- head_len = 0;
- head[0] = 0;
- }
+ head = resolve_refdup("HEAD", RESOLVE_REF_READING,
+ head_oid.hash, NULL);
- if (with_current_branch && head_p) {
+ if (with_current_branch && head) {
int has_head = 0;
for (i = 0; !has_head && i < ref_name_cnt; i++) {
/* We are only interested in adding the branch
* HEAD points at.
*/
if (rev_is_head(head,
- head_len,
ref_name[i],
head_oid.hash, NULL))
has_head++;
}
if (!has_head) {
- int offset = starts_with(head, "refs/heads/") ? 11 : 0;
- append_one_rev(head + offset);
+ const char *name = head;
+ skip_prefix(name, "refs/heads/", &name);
+ append_one_rev(name);
}
}
for (i = 0; i < num_rev; i++) {
int j;
int is_head = rev_is_head(head,
- head_len,
ref_name[i],
head_oid.hash,
rev[i]->object.oid.hash);
return !hashcmp(oid->hash, EMPTY_TREE_SHA1_BIN);
}
-
-int git_mkstemp(char *path, size_t n, const char *template);
-
/* set default permissions by passing mode arguments to open(2) */
int git_mkstemps_mode(char *pattern, int suffix_len, int mode);
int git_mkstemp_mode(char *pattern, int mode);
extern char *sha1_to_hex(const unsigned char *sha1); /* static buffer result! */
extern char *oid_to_hex(const struct object_id *oid); /* same static buffer as sha1_to_hex */
-extern int interpret_branch_name(const char *str, int len, struct strbuf *);
+/*
+ * This reads short-hand syntax that not only evaluates to a commit
+ * object name, but also can act as if the end user spelled the name
+ * of the branch from the command line.
+ *
+ * - "@{-N}" finds the name of the Nth previous branch we were on, and
+ * places the name of the branch in the given buf and returns the
+ * number of characters parsed if successful.
+ *
+ * - "<branch>@{upstream}" finds the name of the other ref that
+ * <branch> is configured to merge with (missing <branch> defaults
+ * to the current branch), and places the name of the branch in the
+ * given buf and returns the number of characters parsed if
+ * successful.
+ *
+ * If the input is not of the accepted format, it returns a negative
+ * number to signal an error.
+ *
+ * If the input was ok but there are not N branch switches in the
+ * reflog, it returns 0.
+ *
+ * If "allowed" is non-zero, it is a treated as a bitfield of allowable
+ * expansions: local branches ("refs/heads/"), remote branches
+ * ("refs/remotes/"), or "HEAD". If no "allowed" bits are set, any expansion is
+ * allowed, even ones to refs outside of those namespaces.
+ */
+#define INTERPRET_BRANCH_LOCAL (1<<0)
+#define INTERPRET_BRANCH_REMOTE (1<<1)
+#define INTERPRET_BRANCH_HEAD (1<<2)
+extern int interpret_branch_name(const char *str, int len, struct strbuf *,
+ unsigned allowed);
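/*
 * A minimal caller sketch (it mirrors the builtin/branch.c hunk earlier
 * in this series, and is shown here only for illustration): a command
 * that only operates on local branches can restrict expansion so that
 * "@{-1}" and friends may not resolve to anything outside "refs/heads/":
 *
 *	struct strbuf name = STRBUF_INIT;
 *	strbuf_branchname(&name, arg, INTERPRET_BRANCH_LOCAL);
 */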
extern int get_oid_mb(const char *str, struct object_id *oid);
extern int validate_headref(const char *ref);
extern void pack_report(void);
+/*
+ * Create a temporary file rooted in the object database directory.
+ */
+extern int odb_mkstemp(char *template, size_t limit, const char *pattern);
+
+/*
+ * Generate the filename to be used for a pack file with checksum "sha1" and
+ * extension "ext". The result is written into the strbuf "buf", overwriting
+ * any existing contents. A pointer to buf->buf is returned as a convenience.
+ *
+ * Example: odb_pack_name(out, sha1, "idx") => ".git/objects/pack/pack-1234..idx"
+ */
+extern char *odb_pack_name(struct strbuf *buf, const unsigned char *sha1, const char *ext);
+
+/*
+ * Create a pack .keep file named "name" (which should generally be the output
+ * of odb_pack_name). Returns a file descriptor opened for writing, or -1 on
+ * error.
+ */
+extern int odb_pack_keep(const char *name);
+
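/*
 * Sketch of how the two helpers are meant to combine (this mirrors the
 * index-pack and fast-import hunks elsewhere in this series and is only
 * illustrative):
 *
 *	struct strbuf buf = STRBUF_INIT;
 *	int fd = odb_pack_keep(odb_pack_name(&buf, sha1, "keep"));
 *	if (fd < 0)
 *		die_errno("cannot create keep file '%s'", buf.buf);
 *	...
 *	strbuf_release(&buf);
 */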
/*
* mmap the index file for the specified packfile (if it is not
* already mmapped). Return 0 on success.
*
* (i.e., what gets handed to a config_fn_t). The caller provides the section;
* we return -1 if it does not match, 0 otherwise. The subsection and key
- * out-parameters are filled by the function (and subsection is NULL if it is
+ * out-parameters are filled by the function (and *subsection is NULL if it is
* missing).
+ *
+ * If the subsection pointer-to-pointer passed in is NULL, returns 0 only if
+ * there is no subsection at all.
*/
extern int parse_config_key(const char *var,
const char *section,
static inline int standard_header_field(const char *field, size_t len)
{
- return ((len == 4 && !memcmp(field, "tree ", 5)) ||
- (len == 6 && !memcmp(field, "parent ", 7)) ||
- (len == 6 && !memcmp(field, "author ", 7)) ||
- (len == 9 && !memcmp(field, "committer ", 10)) ||
- (len == 8 && !memcmp(field, "encoding ", 9)));
+ return ((len == 4 && !memcmp(field, "tree", 4)) ||
+ (len == 6 && !memcmp(field, "parent", 6)) ||
+ (len == 6 && !memcmp(field, "author", 6)) ||
+ (len == 9 && !memcmp(field, "committer", 9)) ||
+ (len == 8 && !memcmp(field, "encoding", 8)));
}
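/*
 * In standard_header_field() above and excluded_header_field() below,
 * "len" is now the length of the field name alone, without the trailing
 * SP; the caller derives it with memchr(line, ' ', next - line).
 */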
static int excluded_header_field(const char *field, size_t len, const char **exclude)
while (*exclude) {
size_t xlen = strlen(*exclude);
- if (len == xlen &&
- !memcmp(field, *exclude, xlen) && field[xlen] == ' ')
+ if (len == xlen && !memcmp(field, *exclude, xlen))
return 1;
exclude++;
}
strbuf_reset(&buf);
it = NULL;
- eof = strchr(line, ' ');
- if (next <= eof)
+ eof = memchr(line, ' ', next - line);
+ if (!eof)
eof = next;
-
- if (standard_header_field(line, eof - line) ||
- excluded_header_field(line, eof - line, exclude))
+ else if (standard_header_field(line, eof - line) ||
+ excluded_header_field(line, eof - line, exclude))
continue;
it = xcalloc(1, sizeof(*it));
strbuf_release(&env);
}
+static inline int iskeychar(int c)
+{
+ return isalnum(c) || c == '-';
+}
+
+/*
+ * Auxiliary function to sanity-check and split the key into the section
+ * identifier and variable name.
+ *
+ * Returns 0 on success, -1 when there is an invalid character in the key and
+ * -2 if there is no section name in the key.
+ *
+ * store_key - pointer to char* which will hold a copy of the key with
+ * lowercase section and variable name
+ * baselen - pointer to int which will hold the length of the
+ * section + subsection part, can be NULL
+ */
+static int git_config_parse_key_1(const char *key, char **store_key, int *baselen_, int quiet)
+{
+ int i, dot, baselen;
+ const char *last_dot = strrchr(key, '.');
+
+ /*
+ * Since "key" actually contains the section name and the real
+ * key name separated by a dot, we have to know where the dot is.
+ */
+
+ if (last_dot == NULL || last_dot == key) {
+ if (!quiet)
+ error("key does not contain a section: %s", key);
+ return -CONFIG_NO_SECTION_OR_NAME;
+ }
+
+ if (!last_dot[1]) {
+ if (!quiet)
+ error("key does not contain variable name: %s", key);
+ return -CONFIG_NO_SECTION_OR_NAME;
+ }
+
+ baselen = last_dot - key;
+ if (baselen_)
+ *baselen_ = baselen;
+
+ /*
+ * Validate the key and while at it, lower case it for matching.
+ */
+ if (store_key)
+ *store_key = xmallocz(strlen(key));
+
+ dot = 0;
+ for (i = 0; key[i]; i++) {
+ unsigned char c = key[i];
+ if (c == '.')
+ dot = 1;
+ /* Leave the extended basename untouched.. */
+ if (!dot || i > baselen) {
+ if (!iskeychar(c) ||
+ (i == baselen + 1 && !isalpha(c))) {
+ if (!quiet)
+ error("invalid key: %s", key);
+ goto out_free_ret_1;
+ }
+ c = tolower(c);
+ } else if (c == '\n') {
+ if (!quiet)
+ error("invalid key (newline): %s", key);
+ goto out_free_ret_1;
+ }
+ if (store_key)
+ (*store_key)[i] = c;
+ }
+
+ return 0;
+
+out_free_ret_1:
+ if (store_key) {
+ free(*store_key);
+ *store_key = NULL;
+ }
+ return -CONFIG_INVALID_KEY;
+}
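/*
 * A few illustrative inputs for the function above (behavior inferred
 * from the code, shown only as examples):
 *
 *	"Core.Bare"           -> 0, *store_key == "core.bare"
 *	"branch.Topic.Remote" -> 0, *store_key == "branch.Topic.remote"
 *	                         (the subsection keeps its case)
 *	"bare"                -> -CONFIG_NO_SECTION_OR_NAME (no section)
 *	"core.1bad"           -> -CONFIG_INVALID_KEY (variable name must
 *	                         start with a letter)
 */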
+
+int git_config_parse_key(const char *key, char **store_key, int *baselen)
+{
+ return git_config_parse_key_1(key, store_key, baselen, 0);
+}
+
+int git_config_key_is_valid(const char *key)
+{
+ return !git_config_parse_key_1(key, NULL, NULL, 1);
+}
+
int git_config_parse_parameter(const char *text,
config_fn_t fn, void *data)
{
const char *value;
+ char *canonical_name;
struct strbuf **pair;
+ int ret;
pair = strbuf_split_str(text, '=', 2);
if (!pair[0])
strbuf_list_free(pair);
return error("bogus config parameter: %s", text);
}
- strbuf_tolower(pair[0]);
- if (fn(pair[0]->buf, value, data) < 0) {
- strbuf_list_free(pair);
- return -1;
+
+ if (git_config_parse_key(pair[0]->buf, &canonical_name, NULL)) {
+ ret = -1;
+ } else {
+ ret = (fn(canonical_name, value, data) < 0) ? -1 : 0;
+ free(canonical_name);
}
strbuf_list_free(pair);
- return 0;
+ return ret;
}
int git_config_from_parameters(config_fn_t fn, void *data)
}
}
-static inline int iskeychar(int c)
-{
- return isalnum(c) || c == '-';
-}
-
static int get_value(config_fn_t fn, void *data, struct strbuf *name)
{
int c;
git_config_set_multivar(key, value, NULL, 0);
}
-/*
- * Auxiliary function to sanity-check and split the key into the section
- * identifier and variable name.
- *
- * Returns 0 on success, -1 when there is an invalid character in the key and
- * -2 if there is no section name in the key.
- *
- * store_key - pointer to char* which will hold a copy of the key with
- * lowercase section and variable name
- * baselen - pointer to int which will hold the length of the
- * section + subsection part, can be NULL
- */
-static int git_config_parse_key_1(const char *key, char **store_key, int *baselen_, int quiet)
-{
- int i, dot, baselen;
- const char *last_dot = strrchr(key, '.');
-
- /*
- * Since "key" actually contains the section name and the real
- * key name separated by a dot, we have to know where the dot is.
- */
-
- if (last_dot == NULL || last_dot == key) {
- if (!quiet)
- error("key does not contain a section: %s", key);
- return -CONFIG_NO_SECTION_OR_NAME;
- }
-
- if (!last_dot[1]) {
- if (!quiet)
- error("key does not contain variable name: %s", key);
- return -CONFIG_NO_SECTION_OR_NAME;
- }
-
- baselen = last_dot - key;
- if (baselen_)
- *baselen_ = baselen;
-
- /*
- * Validate the key and while at it, lower case it for matching.
- */
- if (store_key)
- *store_key = xmallocz(strlen(key));
-
- dot = 0;
- for (i = 0; key[i]; i++) {
- unsigned char c = key[i];
- if (c == '.')
- dot = 1;
- /* Leave the extended basename untouched.. */
- if (!dot || i > baselen) {
- if (!iskeychar(c) ||
- (i == baselen + 1 && !isalpha(c))) {
- if (!quiet)
- error("invalid key: %s", key);
- goto out_free_ret_1;
- }
- c = tolower(c);
- } else if (c == '\n') {
- if (!quiet)
- error("invalid key (newline): %s", key);
- goto out_free_ret_1;
- }
- if (store_key)
- (*store_key)[i] = c;
- }
-
- return 0;
-
-out_free_ret_1:
- if (store_key) {
- free(*store_key);
- *store_key = NULL;
- }
- return -CONFIG_INVALID_KEY;
-}
-
-int git_config_parse_key(const char *key, char **store_key, int *baselen)
-{
- return git_config_parse_key_1(key, store_key, baselen, 0);
-}
-
-int git_config_key_is_valid(const char *key)
-{
- return !git_config_parse_key_1(key, NULL, NULL, 1);
-}
-
/*
* If value==NULL, unset in (remove from) config,
* if value_regex!=NULL, disregard key/value pairs where value does not match.
const char **subsection, int *subsection_len,
const char **key)
{
- int section_len = strlen(section);
const char *dot;
/* Does it start with "section." ? */
- if (!starts_with(var, section) || var[section_len] != '.')
+ if (!skip_prefix(var, section, &var) || *var != '.')
return -1;
/*
*key = dot + 1;
/* Did we have a subsection at all? */
- if (dot == var + section_len) {
- *subsection = NULL;
- *subsection_len = 0;
+ if (dot == var) {
+ if (subsection) {
+ *subsection = NULL;
+ *subsection_len = 0;
+ }
}
else {
- *subsection = var + section_len + 1;
+ if (!subsection)
+ return -1;
+ *subsection = var + 1;
*subsection_len = dot - *subsection;
}
ifeq ($(uname_S),Linux)
HAVE_ALLOCA_H = YesPlease
NO_STRLCPY = YesPlease
- NO_MKSTEMPS = YesPlease
HAVE_PATHS_H = YesPlease
LIBC_CONTAINS_LIBINTL = YesPlease
HAVE_DEV_TTY = YesPlease
ifeq ($(uname_S),GNU/kFreeBSD)
HAVE_ALLOCA_H = YesPlease
NO_STRLCPY = YesPlease
- NO_MKSTEMPS = YesPlease
HAVE_PATHS_H = YesPlease
DIR_HAS_BSD_GROUP_SEMANTICS = YesPlease
LIBC_CONTAINS_LIBINTL = YesPlease
SHELL_PATH = /usr/local/bin/bash
NO_IPV6 = YesPlease
NO_HSTRERROR = YesPlease
- NO_MKSTEMPS = YesPlease
BASIC_CFLAGS += -Kthread
BASIC_CFLAGS += -I/usr/local/include
BASIC_LDFLAGS += -L/usr/local/lib
SHELL_PATH = /usr/bin/bash
NO_IPV6 = YesPlease
NO_HSTRERROR = YesPlease
- NO_MKSTEMPS = YesPlease
BASIC_CFLAGS += -I/usr/local/include
BASIC_LDFLAGS += -L/usr/local/lib
NO_STRCASESTR = YesPlease
NO_STRCASESTR = YesPlease
NO_MEMMEM = YesPlease
NO_MKDTEMP = YesPlease
- NO_MKSTEMPS = YesPlease
NO_REGEX = YesPlease
NO_MSGFMT_EXTENDED_OPTIONS = YesPlease
HAVE_DEV_TTY = YesPlease
NO_D_TYPE_IN_DIRENT = YesPlease
NO_STRCASESTR = YesPlease
NO_MEMMEM = YesPlease
- NO_MKSTEMPS = YesPlease
NO_SYMLINK_HEAD = YesPlease
NO_IPV6 = YesPlease
OLD_ICONV = UnfortunatelyYes
BASIC_CFLAGS += -I/usr/pkg/include
BASIC_LDFLAGS += -L/usr/pkg/lib $(CC_LD_DYNPATH)/usr/pkg/lib
USE_ST_TIMESPEC = YesPlease
- NO_MKSTEMPS = YesPlease
HAVE_PATHS_H = YesPlease
HAVE_BSD_SYSCTL = YesPlease
endif
NO_STRCASESTR = YesPlease
NO_MEMMEM = YesPlease
NO_MKDTEMP = YesPlease
- NO_MKSTEMPS = YesPlease
NO_STRLCPY = YesPlease
NO_NSEC = YesPlease
FREAD_READS_DIRECTORIES = UnfortunatelyYes
# GNU/Hurd
HAVE_ALLOCA_H = YesPlease
NO_STRLCPY = YesPlease
- NO_MKSTEMPS = YesPlease
HAVE_PATHS_H = YesPlease
LIBC_CONTAINS_LIBINTL = YesPlease
endif
NO_UNSETENV = YesPlease
NO_STRCASESTR = YesPlease
NO_MEMMEM = YesPlease
- NO_MKSTEMPS = YesPlease
NO_MKDTEMP = YesPlease
# When compiled with the MIPSpro 7.4.4m compiler, and without pthreads
# (i.e. NO_PTHREADS is set), and _with_ MMAP (i.e. NO_MMAP is not set),
NO_UNSETENV = YesPlease
NO_STRCASESTR = YesPlease
NO_MEMMEM = YesPlease
- NO_MKSTEMPS = YesPlease
NO_MKDTEMP = YesPlease
# When compiled with the MIPSpro 7.4.4m compiler, and without pthreads
# (i.e. NO_PTHREADS is set), and _with_ MMAP (i.e. NO_MMAP is not set),
NO_SETENV = YesPlease
NO_STRCASESTR = YesPlease
NO_MEMMEM = YesPlease
- NO_MKSTEMPS = YesPlease
NO_STRLCPY = YesPlease
NO_MKDTEMP = YesPlease
NO_UNSETENV = YesPlease
NO_ICONV = YesPlease
NO_STRTOUMAX = YesPlease
NO_MKDTEMP = YesPlease
- NO_MKSTEMPS = YesPlease
SNPRINTF_RETURNS_BOGUS = YesPlease
NO_SVN_TESTS = YesPlease
RUNTIME_PREFIX = YesPlease
NO_MKDTEMP = YesPlease
NO_STRTOUMAX = YesPlease
NO_NSEC = YesPlease
- NO_MKSTEMPS = YesPlease
ifeq ($(uname_R),3.5)
NO_INET_NTOP = YesPlease
NO_INET_PTON = YesPlease
NO_SETENV = YesPlease
NO_UNSETENV = YesPlease
NO_MKDTEMP = YesPlease
- NO_MKSTEMPS = YesPlease
# Currently libiconv-1.9.1.
OLD_ICONV = UnfortunatelyYes
NO_REGEX = YesPlease
NEEDS_LIBICONV = YesPlease
NO_STRTOUMAX = YesPlease
NO_MKDTEMP = YesPlease
- NO_MKSTEMPS = YesPlease
NO_SVN_TESTS = YesPlease
NO_PERL_MAKEMAKER = YesPlease
RUNTIME_PREFIX = YesPlease
NO_ICONV = YesPlease
NO_MEMMEM = YesPlease
NO_MKDTEMP = YesPlease
- NO_MKSTEMPS = YesPlease
NO_NSEC = YesPlease
NO_PTHREADS = YesPlease
NO_R_TO_GCC_LINKER = YesPlease
[NO_MKDTEMP=YesPlease])
GIT_CONF_SUBST([NO_MKDTEMP])
#
-# Define NO_MKSTEMPS if you don't have mkstemps in the C library.
-GIT_CHECK_FUNC(mkstemps,
-[NO_MKSTEMPS=],
-[NO_MKSTEMPS=YesPlease])
-GIT_CONF_SUBST([NO_MKSTEMPS])
-#
# Define NO_INITGROUPS if you don't have initgroups in the C library.
GIT_CHECK_FUNC(initgroups,
[NO_INITGROUPS=],
@@
- memcpy(dst, src, n * sizeof(T));
+ COPY_ARRAY(dst, src, n);
+
+@@
+type T;
+T *ptr;
+expression n;
+@@
+- ptr = xmalloc(n * sizeof(*ptr));
++ ALLOC_ARRAY(ptr, n);
+
+@@
+type T;
+T *ptr;
+expression n;
+@@
+- ptr = xmalloc(n * sizeof(T));
++ ALLOC_ARRAY(ptr, n);
@@
- strbuf_addstr(E1, find_unique_abbrev(E2, E3));
+ strbuf_add_unique_abbrev(E1, E2, E3);
+
+@@
+expression E1, E2;
+@@
+- strbuf_addstr(E1, real_path(E2));
++ strbuf_add_real_path(E1, E2);
-#!/usr/bin/env python
+#!/bin/sh
-import sys
-
-sys.stderr.write('WARNING: git-remote-bzr is now maintained independently.\n')
-sys.stderr.write('WARNING: For more information visit https://github.com/felipec/git-remote-bzr\n')
-
-sys.stderr.write('''WARNING:
+cat >&2 <<'EOT'
+WARNING: git-remote-bzr is now maintained independently.
+WARNING: For more information visit https://github.com/felipec/git-remote-bzr
+WARNING:
WARNING: You can pick a directory on your $PATH and download it, e.g.:
-WARNING: $ wget -O $HOME/bin/git-remote-bzr \\
+WARNING: $ wget -O $HOME/bin/git-remote-bzr \
WARNING: https://raw.github.com/felipec/git-remote-bzr/master/git-remote-bzr
WARNING: $ chmod +x $HOME/bin/git-remote-bzr
-''')
+EOT
-#!/usr/bin/env python
+#!/bin/sh
-import sys
-
-sys.stderr.write('WARNING: git-remote-hg is now maintained independently.\n')
-sys.stderr.write('WARNING: For more information visit https://github.com/felipec/git-remote-hg\n')
-
-sys.stderr.write('''WARNING:
+cat >&2 <<'EOT'
+WARNING: git-remote-hg is now maintained independently.
+WARNING: For more information visit https://github.com/felipec/git-remote-hg
+WARNING:
WARNING: You can pick a directory on your $PATH and download it, e.g.:
-WARNING: $ wget -O $HOME/bin/git-remote-hg \\
+WARNING: $ wget -O $HOME/bin/git-remote-hg \
WARNING: https://raw.github.com/felipec/git-remote-hg/master/git-remote-hg
WARNING: $ chmod +x $HOME/bin/git-remote-hg
-''')
+EOT
s->should_free = 1;
return 0;
}
- if (size_only)
+
+ /*
+ * Even if the caller would be happy with getting
+ * only the size, we cannot return early at this
+ * point if the path requires us to run the content
+ * conversion.
+ */
+ if (size_only && !would_convert_to_git(s->path))
return 0;
+
+ /*
+ * Note: this check uses xsize_t(st.st_size) that may
+ * not be the true size of the blob after it goes
+ * through convert_to_git(). This may not strictly be
+ * correct, but the whole point of big_file_threshold
+ * and is_binary check being that we want to avoid
+ * opening the file and inspecting the contents, this
+ * is probably fine.
+ */
if ((flags & CHECK_BINARY) &&
s->size > big_file_threshold && s->is_binary == -1) {
s->is_binary = 1;
regmatch_t regmatch;
int flags = 0;
- while (*data &&
+ while (sz && *data &&
!regexec_buf(regexp, data, sz, 1, ®match, flags)) {
flags |= REG_NOTBOL;
data += regmatch.rm_eo;
- if (*data && regmatch.rm_so == regmatch.rm_eo)
+ sz -= regmatch.rm_eo;
+ if (sz && *data && regmatch.rm_so == regmatch.rm_eo) {
data++;
+ sz--;
+ }
cnt++;
}
return xmkstemp_mode(template, mode);
}
-int odb_pack_keep(char *name, size_t namesz, const unsigned char *sha1)
+int odb_pack_keep(const char *name)
{
int fd;
- snprintf(name, namesz, "%s/pack/pack-%s.keep",
- get_object_directory(), sha1_to_hex(sha1));
fd = open(name, O_RDWR|O_CREAT|O_EXCL, 0600);
if (0 <= fd)
return fd;
/* slow path */
- safe_create_leading_directories(name);
+ safe_create_leading_directories_const(name);
return open(name, O_RDWR|O_CREAT|O_EXCL, 0600);
}
* the endianness conversion in a separate pass to ensure
* we're loading 8-byte aligned words.
*/
- memcpy(self->buffer, ptr, self->buffer_size * sizeof(uint64_t));
- ptr += self->buffer_size * sizeof(uint64_t);
+ memcpy(self->buffer, ptr, self->buffer_size * sizeof(eword_t));
+ ptr += self->buffer_size * sizeof(eword_t);
for (i = 0; i < self->buffer_size; ++i)
self->buffer[i] = ntohll(self->buffer[i]);
static char *keep_pack(const char *curr_index_name)
{
- static char name[PATH_MAX];
static const char *keep_msg = "fast-import";
+ struct strbuf name = STRBUF_INIT;
int keep_fd;
- keep_fd = odb_pack_keep(name, sizeof(name), pack_data->sha1);
+ odb_pack_name(&name, pack_data->sha1, "keep");
+ keep_fd = odb_pack_keep(name.buf);
if (keep_fd < 0)
die_errno("cannot create keep file");
write_or_die(keep_fd, keep_msg, strlen(keep_msg));
if (close(keep_fd))
die_errno("failed to write keep file");
- snprintf(name, sizeof(name), "%s/pack/pack-%s.pack",
- get_object_directory(), sha1_to_hex(pack_data->sha1));
- if (finalize_object_file(pack_data->pack_name, name))
+ odb_pack_name(&name, pack_data->sha1, "pack");
+ if (finalize_object_file(pack_data->pack_name, name.buf))
die("cannot store pack file");
- snprintf(name, sizeof(name), "%s/pack/pack-%s.idx",
- get_object_directory(), sha1_to_hex(pack_data->sha1));
- if (finalize_object_file(curr_index_name, name))
+ odb_pack_name(&name, pack_data->sha1, "idx");
+ if (finalize_object_file(curr_index_name, name.buf))
die("cannot store index file");
free((void *)curr_index_name);
- return name;
+ return strbuf_detach(&name, NULL);
}
static void unkeep_all_packs(void)
{
- static char name[PATH_MAX];
+ struct strbuf name = STRBUF_INIT;
int k;
for (k = 0; k < pack_id; k++) {
struct packed_git *p = all_packs[k];
- snprintf(name, sizeof(name), "%s/pack/pack-%s.keep",
- get_object_directory(), sha1_to_hex(p->sha1));
- unlink_or_warn(name);
+ odb_pack_name(&name, p->sha1, "keep");
+ unlink_or_warn(name.buf);
}
+ strbuf_release(&name);
}
static int loosen_small_pack(const struct packed_git *p)
die("core git rejected index %s", idx_name);
all_packs[pack_id] = new_p;
install_packed_git(new_p);
+ free(idx_name);
/* Print the boundary */
if (pack_edges) {
break; /* definitely do not have it */
else if (cmp == 0) {
keep = 1; /* definitely have it */
- sought[i]->matched = 1;
+ sought[i]->match_status = REF_MATCHED;
}
i++;
}
}
/* Append unmatched requests to the list */
- if ((allow_unadvertised_object_request &
- (ALLOW_TIP_SHA1 | ALLOW_REACHABLE_SHA1))) {
- for (i = 0; i < nr_sought; i++) {
- unsigned char sha1[20];
+ for (i = 0; i < nr_sought; i++) {
+ unsigned char sha1[20];
- ref = sought[i];
- if (ref->matched)
- continue;
- if (get_sha1_hex(ref->name, sha1) ||
- ref->name[40] != '\0' ||
- hashcmp(sha1, ref->old_oid.hash))
- continue;
+ ref = sought[i];
+ if (ref->match_status != REF_NOT_MATCHED)
+ continue;
+ if (get_sha1_hex(ref->name, sha1) ||
+ ref->name[40] != '\0' ||
+ hashcmp(sha1, ref->old_oid.hash))
+ continue;
- ref->matched = 1;
+ if ((allow_unadvertised_object_request &
+ (ALLOW_TIP_SHA1 | ALLOW_REACHABLE_SHA1))) {
+ ref->match_status = REF_MATCHED;
*newtail = copy_ref(ref);
newtail = &(*newtail)->next;
+ } else {
+ ref->match_status = REF_UNADVERTISED_NOT_ALLOWED;
}
}
*refs = newlist;
clear_shallow_info(&si);
return ref_cpy;
}
+
+int report_unmatched_refs(struct ref **sought, int nr_sought)
+{
+ int i, ret = 0;
+
+ for (i = 0; i < nr_sought; i++) {
+ if (!sought[i])
+ continue;
+ switch (sought[i]->match_status) {
+ case REF_MATCHED:
+ continue;
+ case REF_NOT_MATCHED:
+ error(_("no such remote ref %s"), sought[i]->name);
+ break;
+ case REF_UNADVERTISED_NOT_ALLOWED:
+ error(_("Server does not allow request for unadvertised object %s"),
+ sought[i]->name);
+ break;
+ }
+ ret = 1;
+ }
+ return ret;
+}
struct sha1_array *shallow,
char **pack_lockfile);
+/*
+ * Print an appropriate error message for each sought ref that wasn't
+ * matched. Return 0 if all sought refs were matched, otherwise 1.
+ */
+int report_unmatched_refs(struct ref **sought, int nr_sought);
+
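/*
 * Typical call site (this mirrors the builtin/fetch.c hunk earlier in
 * this series; shown only as an illustration), run after the transfer
 * finishes:
 *
 *	ret |= report_unmatched_refs(sought, nr_sought);
 */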
#endif
extern char *gitmkdtemp(char *);
#endif
-#ifdef NO_MKSTEMPS
-#define mkstemps gitmkstemps
-extern int gitmkstemps(char *, int);
-#endif
-
#ifdef NO_UNSETENV
#define unsetenv gitunsetenv
extern void gitunsetenv(const char *);
extern FILE *xfdopen(int fd, const char *mode);
extern int xmkstemp(char *template);
extern int xmkstemp_mode(char *template, int mode);
-extern int odb_mkstemp(char *template, size_t limit, const char *pattern);
-extern int odb_pack_keep(char *name, size_t namesz, const unsigned char *sha1);
extern char *xgetcwd(void);
extern FILE *fopen_for_writing(const char *path);
{
if test $# = 3 && test "$1" = $(git rev-parse "$3^{tree}"); then
map "$3"
+ elif test $# = 1 && test "$1" = 4b825dc642cb6eb9a060e54bf8d69288fbee4904; then
+ :
else
git commit-tree "$@"
fi
# Now parse the message body
while(<$fh>) {
$message .= $_;
- if (/^(Signed-off-by|Cc): (.*)$/i) {
+ if (/^(Signed-off-by|Cc): ([^>]*>?)/i) {
chomp;
my ($what, $c) = ($1, $2);
chomp $c;
*/
if (repo->can_update_info_refs && !has_object_file(&ref->old_oid)) {
obj = lookup_unknown_object(ref->old_oid.hash);
- if (obj) {
- fprintf(stderr, " fetch %s for %s\n",
- oid_to_hex(&ref->old_oid), refname);
- add_fetch_request(obj);
- }
+ fprintf(stderr, " fetch %s for %s\n",
+ oid_to_hex(&ref->old_oid), refname);
+ add_fetch_request(obj);
}
ref->next = remote_refs;
};
int i;
+ if (http_follow_config != HTTP_FOLLOW_ALWAYS) {
+ warning("alternate disabled by http.followRedirects: %s", url);
+ return 0;
+ }
+
for (i = 0; i < ARRAY_SIZE(protocols); i++) {
const char *end;
if (skip_prefix(url, protocols[i], &end) &&
okay = 1;
}
}
- /* skip "objects\n" at end */
if (okay) {
struct strbuf target = STRBUF_INIT;
strbuf_add(&target, base, serverlen);
- strbuf_add(&target, data + i, posn - i - 7);
-
- if (is_alternate_allowed(target.buf)) {
+ strbuf_add(&target, data + i, posn - i);
+ if (!strbuf_strip_suffix(&target, "objects")) {
+ warning("ignoring alternate that does"
+ " not end in 'objects': %s",
+ target.buf);
+ strbuf_release(&target);
+ } else if (is_alternate_allowed(target.buf)) {
warning("adding alternate object store: %s",
target.buf);
newalt = xmalloc(sizeof(*newalt));
while (tail->next != NULL)
tail = tail->next;
tail->next = newalt;
+ } else {
+ strbuf_release(&target);
}
}
}
struct alternates_request alt_req;
struct walker_data *cdata = walker->data;
- if (http_follow_config != HTTP_FOLLOW_ALWAYS)
- return;
-
/*
* If another request has already started fetching alternates,
* wait for them to arrive and return to processing this request's
const char *ident_default_name(void)
{
- if (!git_default_name.len) {
+ if (!(ident_config_given & IDENT_NAME_GIVEN) && !git_default_name.len) {
copy_gecos(xgetpwuid_self(&default_name_is_bogus), &git_default_name);
strbuf_trim(&git_default_name);
}
const char *ident_default_email(void)
{
- if (!git_default_email.len) {
+ if (!(ident_config_given & IDENT_MAIL_GIVEN) && !git_default_email.len) {
const char *email = getenv("EMAIL");
if (email && email[0]) {
c == '\'';
}
+static int has_non_crud(const char *str)
+{
+ for (; *str; str++) {
+ if (!crud(*str))
+ return 1;
+ }
+ return 0;
+}
+
/*
* Copy over a string to the destination, but avoid special
* characters ('\n', '<' and '>') and remove crud at the end
int want_date = !(flag & IDENT_NO_DATE);
int want_name = !(flag & IDENT_NO_NAME);
+ if (!email) {
+ if (strict && ident_use_config_only
+ && !(ident_config_given & IDENT_MAIL_GIVEN)) {
+ fputs(_(env_hint), stderr);
+ die(_("no email was given and auto-detection is disabled"));
+ }
+ email = ident_default_email();
+ if (strict && default_email_is_bogus) {
+ fputs(_(env_hint), stderr);
+ die(_("unable to auto-detect email address (got '%s')"), email);
+ }
+ }
+
if (want_name) {
int using_default = 0;
if (!name) {
if (strict && ident_use_config_only
&& !(ident_config_given & IDENT_NAME_GIVEN)) {
fputs(_(env_hint), stderr);
- die("no name was given and auto-detection is disabled");
+ die(_("no name was given and auto-detection is disabled"));
}
name = ident_default_name();
using_default = 1;
if (strict && default_name_is_bogus) {
fputs(_(env_hint), stderr);
- die("unable to auto-detect name (got '%s')", name);
+ die(_("unable to auto-detect name (got '%s')"), name);
}
}
if (!*name) {
if (strict) {
if (using_default)
fputs(_(env_hint), stderr);
- die("empty ident name (for <%s>) not allowed", email);
+ die(_("empty ident name (for <%s>) not allowed"), email);
}
pw = xgetpwuid_self(NULL);
name = pw->pw_name;
}
- }
-
- if (!email) {
- if (strict && ident_use_config_only
- && !(ident_config_given & IDENT_MAIL_GIVEN)) {
- fputs(_(env_hint), stderr);
- die("no email was given and auto-detection is disabled");
- }
- email = ident_default_email();
- if (strict && default_email_is_bogus) {
- fputs(_(env_hint), stderr);
- die("unable to auto-detect email address (got '%s')", email);
- }
+ if (strict && !has_non_crud(name))
+ die(_("name consists only of disallowed characters: %s"), name);
}
strbuf_reset(&ident);
strbuf_addch(&ident, ' ');
if (date_str && date_str[0]) {
if (parse_date(date_str, &ident) < 0)
- die("invalid date format: %s", date_str);
+ die(_("invalid date format: %s"), date_str);
}
else
strbuf_addstr(&ident, ident_default_date());
/*
* State flags for depth-first search used for analyzing delta cycles.
+ *
+ * The depth is measured in delta-links to the base (so if A is a delta
+ * against B, then A has a depth of 1, and B a depth of 0).
*/
enum {
DFS_NONE = 0,
DFS_ACTIVE,
DFS_DONE
} dfs_state;
+ int depth;
};
struct packing_data {
static char *substitute_branch_name(const char **string, int *len)
{
struct strbuf buf = STRBUF_INIT;
- int ret = interpret_branch_name(*string, *len, &buf);
+ int ret = interpret_branch_name(*string, *len, &buf, 0);
if (ret == *len) {
size_t size;
int parse_hide_refs_config(const char *var, const char *value, const char *section)
{
+ const char *key;
if (!strcmp("transfer.hiderefs", var) ||
- /* NEEDSWORK: use parse_config_key() once both are merged */
- (starts_with(var, section) && var[strlen(section)] == '.' &&
- !strcmp(var + strlen(section), ".hiderefs"))) {
+ (!parse_config_key(var, section, NULL, NULL, &key) &&
+ !strcmp(key, "hiderefs"))) {
char *ref;
int len;
name = get_default(current_branch, &name_given);
ret = make_remote(name, 0);
- if (valid_remote_nick(name)) {
+ if (valid_remote_nick(name) && have_git_dir()) {
if (!valid_remote(ret))
read_remotes_file(ret);
if (!valid_remote(ret))
force:1,
forced_update:1,
expect_old_sha1:1,
- deletion:1,
- matched:1;
+ deletion:1;
+
+ enum {
+ REF_NOT_MATCHED = 0, /* initial value */
+ REF_MATCHED,
+ REF_UNADVERTISED_NOT_ALLOWED
+ } match_status;
/*
* Order is important here, as we write to FETCH_HEAD
revs->no_walk = 0;
if (revs->reflog_info && obj->type == OBJ_COMMIT) {
struct strbuf buf = STRBUF_INIT;
- int len = interpret_branch_name(name, 0, &buf);
+ int len = interpret_branch_name(name, 0, &buf, 0);
int st;
if (0 < len && name[len] && buf.len)
extern void mark_parents_uninteresting(struct commit *commit);
extern void mark_tree_uninteresting(struct tree *tree);
-char *path_name(struct strbuf *path, const char *name);
-
extern void show_object_with_name(FILE *, struct object *, const char *);
extern void add_pending_object(struct rev_info *revs,
kill(p->pid, sig);
- if (p->process->wait_after_clean) {
+ if (p->process && p->process->wait_after_clean) {
p->next = children_to_wait_for;
children_to_wait_for = p;
} else {
struct child_process po = CHILD_PROCESS_INIT;
FILE *po_in;
int i;
+ int rc;
i = 4;
if (args->use_thin_pack)
po.out = -1;
}
- if (finish_command(&po))
+ rc = finish_command(&po);
+ if (rc) {
+ /*
+ * For a normal non-zero exit, we assume pack-objects wrote
+ * something useful to stderr. For death by signal, though,
+ * we should mention it to the user. The exception is SIGPIPE
+ * (141), because that's a normal occurrence if the remote end
+ * hangs up (and we'll report that by trying to read the unpack
+ * status).
+ */
+ if (rc > 128 && rc != 141)
+ error("pack-objects died of signal %d", rc - 128);
return -1;
+ }
+ return 0;
+}
+
+static int receive_unpack_status(int in)
+{
+ const char *line = packet_read_line(in, NULL);
+ if (!skip_prefix(line, "unpack ", &line))
+ return error(_("unable to parse remote unpack status: %s"), line);
+ if (strcmp(line, "ok"))
+ return error(_("remote unpack failed: %s"), line);
return 0;
}
static int receive_status(int in, struct ref *refs)
{
struct ref *hint;
- int ret = 0;
- char *line = packet_read_line(in, NULL);
- if (!starts_with(line, "unpack "))
- return error("did not receive remote status");
- if (strcmp(line, "unpack ok")) {
- error("unpack failed: %s", line + 7);
- ret = -1;
- }
+ int ret;
+
hint = NULL;
+ ret = receive_unpack_status(in);
while (1) {
char *refname;
char *msg;
- line = packet_read_line(in, NULL);
+ char *line = packet_read_line(in, NULL);
if (!line)
break;
if (!starts_with(line, "ok ") && !starts_with(line, "ng ")) {
close(out);
if (git_connection_is_socket(conn))
shutdown(fd[0], SHUT_WR);
+
+ /*
+ * Do not even bother with the return value; we know we
+ * are failing, and just want the error() side effects.
+ */
+ if (status_report)
+ receive_unpack_status(in);
+
if (use_sideband) {
close(demux.out);
finish_async(&demux);
if (!is_absolute_path(data.buf))
strbuf_addf(&path, "%s/", gitdir);
strbuf_addbuf(&path, &data);
- strbuf_addstr(sb, real_path(path.buf));
+ strbuf_add_real_path(sb, path.buf);
ret = 1;
} else {
strbuf_addstr(sb, gitdir);
return buf->buf;
}
-/*
- * Return the name of the pack or index file with the specified sha1
- * in its filename. *base and *name are scratch space that must be
- * provided by the caller. which should be "pack" or "idx".
- */
-static char *sha1_get_pack_name(const unsigned char *sha1,
- struct strbuf *buf,
- const char *which)
+ char *odb_pack_name(struct strbuf *buf,
+ const unsigned char *sha1,
+ const char *ext)
{
strbuf_reset(buf);
strbuf_addf(buf, "%s/pack/pack-%s.%s", get_object_directory(),
- sha1_to_hex(sha1), which);
+ sha1_to_hex(sha1), ext);
return buf->buf;
}
char *sha1_pack_name(const unsigned char *sha1)
{
static struct strbuf buf = STRBUF_INIT;
- return sha1_get_pack_name(sha1, &buf, "pack");
+ return odb_pack_name(&buf, sha1, "pack");
}
char *sha1_pack_index_name(const unsigned char *sha1)
{
static struct strbuf buf = STRBUF_INIT;
- return sha1_get_pack_name(sha1, &buf, "idx");
+ return odb_pack_name(&buf, sha1, "idx");
}
struct alternate_object_database *alt_odb_list;
while (delta_stack_nr) {
void *delta_data;
void *base = data;
+ void *external_base = NULL;
unsigned long delta_size, base_size = size;
int i;
p->pack_name);
mark_bad_packed_object(p, base_sha1);
base = read_object(base_sha1, &type, &base_size);
+ external_base = base;
}
}
"at offset %"PRIuMAX" from %s",
(uintmax_t)curpos, p->pack_name);
data = NULL;
+ free(external_base);
continue;
}
error("failed to apply delta");
free(delta_data);
+ free(external_base);
}
*final_type = type;
return 1;
}
-static int reinterpret(const char *name, int namelen, int len, struct strbuf *buf)
+static int reinterpret(const char *name, int namelen, int len,
+ struct strbuf *buf, unsigned allowed)
{
/* we have extra data, which might need further processing */
struct strbuf tmp = STRBUF_INIT;
int ret;
strbuf_add(buf, name + len, namelen - len);
- ret = interpret_branch_name(buf->buf, buf->len, &tmp);
+ ret = interpret_branch_name(buf->buf, buf->len, &tmp, allowed);
/* that data was not interpreted, remove our cruft */
if (ret < 0) {
strbuf_setlen(buf, used);
free(s);
}
+static int branch_interpret_allowed(const char *refname, unsigned allowed)
+{
+ if (!allowed)
+ return 1;
+
+ if ((allowed & INTERPRET_BRANCH_LOCAL) &&
+ starts_with(refname, "refs/heads/"))
+ return 1;
+ if ((allowed & INTERPRET_BRANCH_REMOTE) &&
+ starts_with(refname, "refs/remotes/"))
+ return 1;
+
+ return 0;
+}
+
static int interpret_branch_mark(const char *name, int namelen,
int at, struct strbuf *buf,
int (*get_mark)(const char *, int),
const char *(*get_data)(struct branch *,
- struct strbuf *))
+ struct strbuf *),
+ unsigned allowed)
{
int len;
struct branch *branch;
if (!value)
die("%s", err.buf);
+ if (!branch_interpret_allowed(value, allowed))
+ return -1;
+
set_shortened_ref(buf, value);
return len + at;
}
-/*
- * This reads short-hand syntax that not only evaluates to a commit
- * object name, but also can act as if the end user spelled the name
- * of the branch from the command line.
- *
- * - "@{-N}" finds the name of the Nth previous branch we were on, and
- * places the name of the branch in the given buf and returns the
- * number of characters parsed if successful.
- *
- * - "<branch>@{upstream}" finds the name of the other ref that
- * <branch> is configured to merge with (missing <branch> defaults
- * to the current branch), and places the name of the branch in the
- * given buf and returns the number of characters parsed if
- * successful.
- *
- * If the input is not of the accepted format, it returns a negative
- * number to signal an error.
- *
- * If the input was ok but there are not N branch switches in the
- * reflog, it returns 0.
- */
-int interpret_branch_name(const char *name, int namelen, struct strbuf *buf)
+int interpret_branch_name(const char *name, int namelen, struct strbuf *buf,
+ unsigned allowed)
{
char *at;
const char *start;
- int len = interpret_nth_prior_checkout(name, namelen, buf);
+ int len;
if (!namelen)
namelen = strlen(name);
- if (!len) {
- return len; /* syntax Ok, not enough switches */
- } else if (len > 0) {
- if (len == namelen)
- return len; /* consumed all */
- else
- return reinterpret(name, namelen, len, buf);
+ if (!allowed || (allowed & INTERPRET_BRANCH_LOCAL)) {
+ len = interpret_nth_prior_checkout(name, namelen, buf);
+ if (!len) {
+ return len; /* syntax Ok, not enough switches */
+ } else if (len > 0) {
+ if (len == namelen)
+ return len; /* consumed all */
+ else
+ return reinterpret(name, namelen, len, buf, allowed);
+ }
}
for (start = name;
(at = memchr(start, '@', namelen - (start - name)));
start = at + 1) {
- len = interpret_empty_at(name, namelen, at - name, buf);
- if (len > 0)
- return reinterpret(name, namelen, len, buf);
+ if (!allowed || (allowed & INTERPRET_BRANCH_HEAD)) {
+ len = interpret_empty_at(name, namelen, at - name, buf);
+ if (len > 0)
+ return reinterpret(name, namelen, len, buf,
+ allowed);
+ }
len = interpret_branch_mark(name, namelen, at - name, buf,
- upstream_mark, branch_get_upstream);
+ upstream_mark, branch_get_upstream,
+ allowed);
if (len > 0)
return len;
len = interpret_branch_mark(name, namelen, at - name, buf,
- push_mark, branch_get_push);
+ push_mark, branch_get_push,
+ allowed);
if (len > 0)
return len;
}
return -1;
}
-int strbuf_branchname(struct strbuf *sb, const char *name)
+void strbuf_branchname(struct strbuf *sb, const char *name, unsigned allowed)
{
int len = strlen(name);
- int used = interpret_branch_name(name, len, sb);
+ int used = interpret_branch_name(name, len, sb, allowed);
- if (used == len)
- return 0;
if (used < 0)
used = 0;
strbuf_add(sb, name + used, len - used);
- return len;
}
int strbuf_check_branch_ref(struct strbuf *sb, const char *name)
{
- strbuf_branchname(sb, name);
+ strbuf_branchname(sb, name, INTERPRET_BRANCH_LOCAL);
if (name[0] == '-')
return -1;
strbuf_splice(sb, 0, 0, "refs/heads/", 11);
strbuf_addstr(sb, path);
}
+void strbuf_add_real_path(struct strbuf *sb, const char *path)
+{
+ if (sb->len) {
+ struct strbuf resolved = STRBUF_INIT;
+ strbuf_realpath(&resolved, path, 1);
+ strbuf_addbuf(sb, &resolved);
+ strbuf_release(&resolved);
+ } else
+ strbuf_realpath(sb, path, 1);
+}
+
int printf_ln(const char *fmt, ...)
{
int ret;
*/
extern void strbuf_add_absolute_path(struct strbuf *sb, const char *path);
+/**
+ * Canonize `path` (make it absolute, resolve symlinks, remove extra
+ * slashes) and append it to `sb`. Die with an informative error
+ * message if there is a problem.
+ *
+ * The directory part of `path` (i.e., everything up to the last
+ * dir_sep) must denote a valid, existing directory, but the last
+ * component need not exist.
+ *
+ * Callers that don't mind links should use the more lightweight
+ * strbuf_add_absolute_path() instead.
+ */
+extern void strbuf_add_real_path(struct strbuf *sb, const char *path);
+
/**
* Normalize in-place the path contained in the strbuf. See
strbuf_complete(sb, '\n');
}
-extern int strbuf_branchname(struct strbuf *sb, const char *name);
+/*
+ * Copy "name" to "sb", expanding any special @-marks as handled by
+ * interpret_branch_name(). The result is a non-qualified branch name
+ * (so "foo" or "origin/master" instead of "refs/heads/foo" or
+ * "refs/remotes/origin/master").
+ *
+ * Note that the resulting name may not be a syntactically valid refname.
+ *
+ * If "allowed" is non-zero, restrict the set of allowed expansions. See
+ * interpret_branch_name() for details.
+ */
+extern void strbuf_branchname(struct strbuf *sb, const char *name,
+ unsigned allowed);
+
+/*
+ * Like strbuf_branchname() above, but confirm that the result is
+ * syntactically valid to be used as a local branch name in refs/heads/.
+ *
+ * The return value is "0" if the result is valid, and "-1" otherwise.
+ */
extern int strbuf_check_branch_ref(struct strbuf *sb, const char *name);
extern void strbuf_addstr_urlencode(struct strbuf *, const char *,
strcmp(value, "all") &&
strcmp(value, "none"))
warning("Invalid parameter '%s' for config option "
- "'submodule.%s.ignore'", value, var);
+ "'submodule.%s.ignore'", value, name.buf);
else {
free((void *) submodule->ignore);
submodule->ignore = xstrdup(value);
}
perl -MCGI -MCGI::Util -MCGI::Carp -e 0 >/dev/null 2>&1 || {
- skip_all='skipping gitweb tests, CGI module unusable'
+ skip_all='skipping gitweb tests, CGI & CGI::Util & CGI::Carp modules not available'
+ test_done
+}
+
+perl -mTime::HiRes -e 0 >/dev/null 2>&1 || {
+ skip_all='skipping gitweb tests, Time::HiRes module not available'
test_done
}
--- /dev/null
+/trash directory*/
+/test-results/
+/.prove/
+/build/
--- /dev/null
+-include ../../config.mak
+export GIT_TEST_OPTIONS
+
+SHELL_PATH ?= $(SHELL)
+SHELL_PATH_SQ = $(subst ','\'',$(SHELL_PATH))
+T = $(sort $(wildcard i[0-9][0-9][0-9][0-9]-*.sh))
+
+all: $(T)
+
+$(T):
+ @echo "*** $@ ***"; '$(SHELL_PATH_SQ)' $@ $(GIT_TEST_OPTS)
+
+clean:
+ rm -rf build "trash directory".* test-results
+
+.PHONY: all clean $(T)
--- /dev/null
+Git version interoperability tests
+==================================
+
+This directory has interoperability tests for git. Each script is
+similar to the normal test scripts found in t/, but with the added twist
+that two special versions of git, "git.a" and "git.b", are available in
+the PATH. Individual tests can then check the interaction between the
+two versions.
+
+When you add a feature that handles backwards compatibility between git
+versions, it's encouraged to add a test here to make sure it behaves as
+you expect.
+
+
+Running Tests
+-------------
+
+The easiest way to run tests is to say "make". This runs all
+the tests against their default versions.
+
+You can run a single test like:
+
+ $ ./i0000-basic.sh
+ ok 1 - bare git is forbidden
+ ok 2 - git.a version (v1.6.6.3)
+ ok 3 - git.b version (v2.11.1)
+ # passed all 3 test(s)
+ 1..3
+
+Each test contains default versions to run against. You may override
+these by setting `GIT_TEST_VERSION_A` and `GIT_TEST_VERSION_B` in the
+environment. Note that not all combinations will give sensible outcomes
+for all tests (e.g., a test checking for a specific old/new interaction
+may want something "old enough" and something "new enough"; see
+individual tests for details).
+
+Version names should be resolvable as revisions in the current
+repository. They will be exported and built as needed using the
+config.mak files found at the root of your working tree.
+
+The exception is the special version "." which uses the currently-built
+contents of your working tree.
+
+You can set the following variables (in the environment or in your config.mak):
+
+ GIT_INTEROP_MAKE_OPTS
+ Options to pass to `make` when building a git version (e.g.,
+ `-j8`).
+
+You can also pass any command-line options taken by ordinary git tests (e.g.,
+"-v").
+
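+For example, one possible invocation combining these knobs (the version
+name "v2.0.0" here is only a placeholder; any revision resolvable in your
+repository works, and "." means the currently-built working tree):
+
+	$ GIT_TEST_VERSION_A=v2.0.0 GIT_TEST_VERSION_B=. ./i0000-basic.sh -v
+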
+
+Naming Tests
+------------
+
+The interop test files are named like:
+
+ iNNNN-short-description.sh
+
+where N is a decimal digit. The same conventions for choosing NNNN as
+for normal tests apply.
+
+
+Writing Tests
+-------------
+
+An interop test script starts like a normal script, declaring a few
+variables and then including interop-lib.sh (which includes test-lib.sh).
+Besides test_description, you should also set the $VERSION_A and $VERSION_B
+variables to give the default versions to test against. See i0000-basic.sh for
+an example.
+
+You can then use test_expect_success as usual, with a few differences:
+
+ 1. The special commands "git.a" and "git.b" correspond to the
+ two versions.
+
+ 2. You cannot call a bare "git". This is to prevent accidents where
+ you meant "git.a" or "git.b".
+
+ 3. The trash directory is _not_ a git repository by default. You
+ should create one with the appropriate version of git.
+
+At the end of the script, call test_done as usual.
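+
+As a rough sketch, such a script might look like the following (the version
+numbers and the test body are only illustrative, not part of the harness):
+
+	VERSION_A=v1.6.6.3
+	VERSION_B=v2.11.1
+
+	test_description='sketch of an interop test'
+	. ./interop-lib.sh
+
+	test_expect_success 'old git can read a repo created by new git' '
+		# the trash directory is not a repository; create one ourselves
+		git.b init repo &&
+		git.b -C repo commit --allow-empty -m base &&
+		(
+			cd repo &&
+			git.a log -1 --format=%s
+		)
+	'
+
+	test_done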
--- /dev/null
+#!/bin/sh
+
+# Note that this test only works on real version numbers,
+# as it depends on matching the output to "git version".
+VERSION_A=v1.6.6.3
+VERSION_B=v2.11.1
+
+test_description='sanity test interop library'
+. ./interop-lib.sh
+
+test_expect_success 'bare git is forbidden' '
+ test_must_fail git version
+'
+
+test_expect_success "git.a version ($VERSION_A)" '
+ echo git version ${VERSION_A#v} >expect &&
+ git.a version >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success "git.b version ($VERSION_B)" '
+ echo git version ${VERSION_B#v} >expect &&
+ git.b version >actual &&
+ test_cmp expect actual
+'
+
+test_done
--- /dev/null
+#!/bin/sh
+
+VERSION_A=.
+VERSION_B=v1.0.0
+
+: ${LIB_GIT_DAEMON_PORT:=5500}
+LIB_GIT_DAEMON_COMMAND='git.a daemon'
+
+test_description='clone and fetch by older client'
+. ./interop-lib.sh
+. "$TEST_DIRECTORY"/lib-git-daemon.sh
+
+start_git_daemon --export-all
+
+repo=$GIT_DAEMON_DOCUMENT_ROOT_PATH/repo
+
+test_expect_success "create repo served by $VERSION_A" '
+ git.a init "$repo" &&
+ git.a -C "$repo" commit --allow-empty -m one
+'
+
+test_expect_success "clone with $VERSION_B" '
+ git.b clone "$GIT_DAEMON_URL/repo" child &&
+ echo one >expect &&
+ git.a -C child log -1 --format=%s >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success "fetch with $VERSION_B" '
+ git.a -C "$repo" commit --allow-empty -m two &&
+ (
+ cd child &&
+ git.b fetch
+ ) &&
+ echo two >expect &&
+ git.a -C child log -1 --format=%s FETCH_HEAD >actual &&
+ test_cmp expect actual
+'
+
+stop_git_daemon
+test_done
--- /dev/null
+# Interoperability testing framework. Each script should source
+# this after setting default $VERSION_A and $VERSION_B variables.
+
+. ../../GIT-BUILD-OPTIONS
+INTEROP_ROOT=$(pwd)
+BUILD_ROOT=$INTEROP_ROOT/build
+
+build_version () {
+ if test -z "$1"
+ then
+ echo >&2 "error: test script did not set default versions"
+ return 1
+ fi
+
+ if test "$1" = "."
+ then
+ git rev-parse --show-toplevel
+ return 0
+ fi
+
+ sha1=$(git rev-parse "$1^{tree}") || return 1
+ dir=$BUILD_ROOT/$sha1
+
+ if test -e "$dir/.built"
+ then
+ echo "$dir"
+ return 0
+ fi
+
+ echo >&2 "==> Building $1..."
+
+ mkdir -p "$dir" || return 1
+
+ (cd "$(git rev-parse --show-cdup)" && git archive --format=tar "$sha1") |
+ (cd "$dir" && tar x) ||
+ return 1
+
+ for config in config.mak config.mak.autogen config.status
+ do
+ if test -e "$INTEROP_ROOT/../../$config"
+ then
+ cp "$INTEROP_ROOT/../../$config" "$dir/" || return 1
+ fi
+ done
+
+ (
+ cd "$dir" &&
+ make $GIT_INTEROP_MAKE_OPTS >&2 &&
+ touch .built
+ ) || return 1
+
+ echo "$dir"
+}
+
+# Old versions of git don't have bin-wrappers, so let's give a rough emulation.
+wrap_git () {
+ write_script "$1" <<-EOF
+ GIT_EXEC_PATH="$2"
+ export GIT_EXEC_PATH
+ PATH="$2:\$PATH"
+ export PATH
+ exec git "\$@"
+ EOF
+}
+
+generate_wrappers () {
+ mkdir -p .bin &&
+ wrap_git .bin/git.a "$DIR_A" &&
+ wrap_git .bin/git.b "$DIR_B" &&
+ write_script .bin/git <<-\EOF &&
+ echo >&2 fatal: test tried to run generic git
+ exit 1
+ EOF
+ PATH=$(pwd)/.bin:$PATH
+}
+
+VERSION_A=${GIT_TEST_VERSION_A:-$VERSION_A}
+VERSION_B=${GIT_TEST_VERSION_B:-$VERSION_B}
+
+if ! DIR_A=$(build_version "$VERSION_A") ||
+ ! DIR_B=$(build_version "$VERSION_B")
+then
+ echo >&2 "fatal: unable to build git versions"
+ exit 1
+fi
+
+TEST_DIRECTORY=$INTEROP_ROOT/..
+TEST_OUTPUT_DIRECTORY=$INTEROP_ROOT
+TEST_NO_CREATE_REPO=t
+. "$TEST_DIRECTORY"/test-lib.sh
+
+generate_wrappers || die "unable to set up interop test environment"
say >&3 "Starting git daemon ..."
mkfifo git_daemon_output
- git daemon --listen=127.0.0.1 --port="$LIB_GIT_DAEMON_PORT" \
+ ${LIB_GIT_DAEMON_COMMAND:-git daemon} \
+ --listen=127.0.0.1 --port="$LIB_GIT_DAEMON_PORT" \
--reuseaddr --verbose \
--base-path="$GIT_DAEMON_DOCUMENT_ROOT_PATH" \
"$@" "$GIT_DAEMON_DOCUMENT_ROOT_PATH" \
'
test_expect_success 'create new unreferenced commit' '
- commit=$(git commit-tree HEAD^{tree} -p HEAD)
+ commit=$(git commit-tree HEAD^{tree} -p HEAD) &&
+ test_export commit
'
test_perf 'rev-list $commit --not --all' '
git filter-branch -f base..HEAD
'
+test_perf 'noop prune-empty' '
+ git checkout --detach tip &&
+ git filter-branch -f --prune-empty base..HEAD
+'
+
test_done
error "bug in the test script: not 2 parameters to test-create-repo"
repo="$1"
source="$2"
- source_git="$(git -C "$source" rev-parse --git-dir)"
+ source_git="$("$MODERN_GIT" -C "$source" rev-parse --git-dir)"
objects_dir="$("$MODERN_GIT" -C "$source" rev-parse --git-path objects)"
mkdir -p "$repo/.git"
(
) &&
(
cd "$repo" &&
- git init -q && {
+ "$MODERN_GIT" init -q && {
test_have_prereq SYMLINKS ||
git config core.symlinks false
} &&
unset GIT_TEST_INSTALLED
else
GIT_TEST_INSTALLED="$mydir/bin-wrappers"
+ # Older versions of git lacked bin-wrappers; fall back to the
+ # files in the root.
+ test -d "$GIT_TEST_INSTALLED" || GIT_TEST_INSTALLED=$mydir
export GIT_TEST_INSTALLED
fi
run_one_dir "$@"
test_must_fail git merge @{-100}
'
+test_expect_success 'log -g @{-1}' '
+ git checkout -b last_branch &&
+ git checkout -b new_branch &&
+ echo "last_branch@{0}" >expect &&
+ git log -g --format=%gd @{-1} >actual &&
+ test_cmp expect actual
+'
+
test_done
test_cmp expect actual
'
+test_expect_success 'last one wins: two level vars' '
+
+ # sec.var and sec.VAR are the same variable, as the first
+ # and the last level of a configuration variable name are
+ # case insensitive.
+
+ echo VAL >expect &&
+
+ git -c sec.var=val -c sec.VAR=VAL config --get sec.var >actual &&
+ test_cmp expect actual &&
+ git -c SEC.var=val -c sec.var=VAL config --get sec.var >actual &&
+ test_cmp expect actual &&
+
+ git -c sec.var=val -c sec.VAR=VAL config --get SEC.var >actual &&
+ test_cmp expect actual &&
+ git -c SEC.var=val -c sec.var=VAL config --get sec.VAR >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'last one wins: three level vars' '
+
+ # v.a.r and v.A.r are not the same variable, as the middle
+ # level of a three-level configuration variable name is
+ # case sensitive.
+
+ echo val >expect &&
+ git -c v.a.r=val -c v.A.r=VAL config --get v.a.r >actual &&
+ test_cmp expect actual &&
+ git -c v.a.r=val -c v.A.r=VAL config --get V.a.R >actual &&
+ test_cmp expect actual &&
+
+ # v.a.r and V.a.R are the same variable, as the first
+ # and the last level of a configuration variable name are
+ # case insensitive.
+
+ echo VAL >expect &&
+ git -c v.a.r=val -c v.a.R=VAL config --get v.a.r >actual &&
+ test_cmp expect actual &&
+ git -c v.a.r=val -c V.a.r=VAL config --get v.a.r >actual &&
+ test_cmp expect actual &&
+ git -c v.a.r=val -c v.a.R=VAL config --get V.a.R >actual &&
+ test_cmp expect actual &&
+ git -c v.a.r=val -c V.a.r=VAL config --get V.a.R >actual &&
+ test_cmp expect actual
+'
+
+for VAR in a .a a. a.0b a."b c". a."b c".0d
+do
+ test_expect_success "git -c $VAR=VAL rejects invalid '$VAR'" '
+ test_must_fail git -c "$VAR=VAL" config -l
+ '
+done
+
+for VAR in a.b a."b c".d
+do
+ test_expect_success "git -c $VAR=VAL works with valid '$VAR'" '
+ echo VAL >expect &&
+ git -c "$VAR=VAL" config --get "$VAR" >actual &&
+ test_cmp expect actual
+ '
+done
+
test_expect_success 'git -c is not confused by empty environment' '
GIT_CONFIG_PARAMETERS="" git -c x.one=1 config --list
'
--- /dev/null
+#!/bin/sh
+
+test_description='interpreting exotic branch name arguments
+
+Branch name arguments are usually names which are taken to be inside of
+refs/heads/, but we interpret some magic syntax like @{-1}, @{upstream}, etc.
+This script aims to check the behavior of those corner cases.
+'
+. ./test-lib.sh
+
+expect_branch() {
+ git log -1 --format=%s "$1" >actual &&
+ echo "$2" >expect &&
+ test_cmp expect actual
+}
+
+expect_deleted() {
+ test_must_fail git rev-parse --verify "$1"
+}
+
+test_expect_success 'set up repo' '
+ test_commit one &&
+ test_commit two &&
+ git remote add origin foo.git
+'
+
+test_expect_success 'update branch via @{-1}' '
+ git branch previous one &&
+
+ git checkout previous &&
+ git checkout master &&
+
+ git branch -f @{-1} two &&
+ expect_branch previous two
+'
+
+test_expect_success 'update branch via local @{upstream}' '
+ git branch local one &&
+ git branch --set-upstream-to=local &&
+
+ git branch -f @{upstream} two &&
+ expect_branch local two
+'
+
+test_expect_success 'disallow updating branch via remote @{upstream}' '
+ git update-ref refs/remotes/origin/remote one &&
+ git branch --set-upstream-to=origin/remote &&
+
+ test_must_fail git branch -f @{upstream} two
+'
+
+test_expect_success 'create branch with pseudo-qualified name' '
+ git branch refs/heads/qualified two &&
+ expect_branch refs/heads/refs/heads/qualified two
+'
+
+test_expect_success 'delete branch via @{-1}' '
+ git branch previous-del &&
+
+ git checkout previous-del &&
+ git checkout master &&
+
+ git branch -D @{-1} &&
+ expect_deleted previous-del
+'
+
+test_expect_success 'delete branch via local @{upstream}' '
+ git branch local-del &&
+ git branch --set-upstream-to=local-del &&
+
+ git branch -D @{upstream} &&
+ expect_deleted local-del
+'
+
+test_expect_success 'delete branch via remote @{upstream}' '
+ git update-ref refs/remotes/origin/remote-del two &&
+ git branch --set-upstream-to=origin/remote-del &&
+
+ git branch -r -D @{upstream} &&
+ expect_deleted origin/remote-del
+'
+
+# Note that we create two oddly named local branches here. We want to make
+# sure that we do not accidentally delete either of them, even if
+# shorten_unambiguous_ref() tweaks the name to avoid ambiguity.
+test_expect_success 'delete @{upstream} expansion matches -r option' '
+ git update-ref refs/remotes/origin/remote-del two &&
+ git branch --set-upstream-to=origin/remote-del &&
+ git update-ref refs/heads/origin/remote-del two &&
+ git update-ref refs/heads/remotes/origin/remote-del two &&
+
+ test_must_fail git branch -D @{upstream} &&
+ expect_branch refs/heads/origin/remote-del two &&
+ expect_branch refs/heads/remotes/origin/remote-del two
+'
+
+test_expect_success 'disallow deleting remote branch via @{-1}' '
+ git update-ref refs/remotes/origin/previous one &&
+
+ git checkout -b origin/previous two &&
+ git checkout master &&
+
+ test_must_fail git branch -r -D @{-1} &&
+ expect_branch refs/remotes/origin/previous one &&
+ expect_branch refs/heads/origin/previous two
+'
+
+# The thing we are testing here is that "@" is the real branch refs/heads/@,
+# and not refs/heads/HEAD. These tests should not imply that refs/heads/@ is a
+# sane thing, but it _is_ technically allowed for now. If we disallow it, these
+# can be switched to test_must_fail.
+test_expect_success 'create branch named "@"' '
+ git branch -f @ one &&
+ expect_branch refs/heads/@ one
+'
+
+test_expect_success 'delete branch named "@"' '
+ git update-ref refs/heads/@ two &&
+ git branch -D @ &&
+ expect_deleted refs/heads/@
+'
+
+test_expect_success 'checkout does not treat remote @{upstream} as a branch' '
+ git update-ref refs/remotes/origin/checkout one &&
+ git branch --set-upstream-to=origin/checkout &&
+ git update-ref refs/heads/origin/checkout two &&
+ git update-ref refs/heads/remotes/origin/checkout two &&
+
+ git checkout @{upstream} &&
+ expect_branch HEAD one
+'
+
+test_done
test_expect_code 1 git diff --quiet
'
+test_expect_success 'git diff --quiet on a path that needs conversion' '
+ echo "crlf.txt text=auto" >.gitattributes &&
+ printf "Hello\r\nWorld\r\n" >crlf.txt &&
+ git add .gitattributes crlf.txt &&
+
+ printf "Hello\r\nWorld\n" >crlf.txt &&
+ git diff --quiet crlf.txt
+'
+
test_done
test 4096-zeroes.txt = "$(cat out)"
'
+test_expect_success '-S --pickaxe-regex' '
+ git diff --name-only -S0 --pickaxe-regex HEAD^ >out &&
+ verbose test 4096-zeroes.txt = "$(cat out)"
+'
+
test_done
--- /dev/null
+#!/bin/sh
+
+test_description='pack-objects breaks long cross-pack delta chains'
+. ./test-lib.sh
+
+# This mirrors a repeated push setup:
+#
+# 1. A client repeatedly modifies some files, makes a
+# commit, and pushes the result. It does this N times
+# before we get around to repacking.
+#
+# 2. Each push generates a thin pack with the new version of
+# various objects. Let's consider some file in the root tree
+# which is updated in each commit.
+#
+# When generating push number X, we feed commit X-1 (and
+# thus blob X-1) as a preferred base. The resulting pack has
+# blob X as a thin delta against blob X-1.
+#
+# On the receiving end, "index-pack --fix-thin" will
+# complete the pack with a base copy of blob X-1.
+#
+# 3. In older versions of git, if we used the delta from
+# pack X, then we'd always find blob X-1 as a base in the
+# same pack (and generate a fresh delta).
+#
+# But with the pack mru, we jump from delta to delta
+# following the traversal order:
+#
+# a. We grab blob X from pack X as a delta, putting it at
+# the tip of our mru list.
+#
+# b. Eventually we move onto commit X-1. We need other
+# objects which are only in pack X-1 (in the test code
+# below, it's the containing tree). That puts pack X-1
+# at the tip of our mru list.
+#
+# c. Eventually we look for blob X-1, and we find the
+# version in pack X-1 (because it's the mru tip).
+#
+# Now we have blob X as a delta against X-1, which is a delta
+# against X-2, and so forth.
+#
+# In the real world, these small pushes would get exploded by
+# unpack-objects rather than "index-pack --fix-thin", but the
+# same principle applies to larger pushes (they only need one
+# repeatedly-modified file to generate the delta chain).
+
+test_expect_success 'create series of packs' '
+ test-genrandom foo 4096 >content &&
+ prev= &&
+ for i in $(test_seq 1 10)
+ do
+ cat content >file &&
+ echo $i >>file &&
+ git add file &&
+ git commit -m $i &&
+ cur=$(git rev-parse HEAD^{tree}) &&
+ {
+ test -n "$prev" && echo "-$prev"
+ echo $cur
+ echo "$(git rev-parse :file) file"
+ } | git pack-objects --stdout >tmp &&
+ git index-pack --stdin --fix-thin <tmp || return 1
+ prev=$cur
+ done
+'
+
+max_chain() {
+ git index-pack --verify-stat-only "$1" >output &&
+ perl -lne '
+ /chain length = (\d+)/ and $len = $1;
+ END { print $len }
+ ' output
+}
+
+# Note that this whole setup is pretty reliant on the current
+# packing heuristics. We double-check that our test case
+# actually produces a long chain. If it doesn't, it should be
+# adjusted (or scrapped if the heuristics have become too unreliable)
+test_expect_success 'packing produces a long delta' '
+ # Use --window=0 to make sure we are seeing reused deltas,
+ # not computing a new long chain.
+ pack=$(git pack-objects --all --window=0 </dev/null pack) &&
+ test 9 = "$(max_chain pack-$pack.pack)"
+'
+
+test_expect_success '--depth limits depth' '
+ pack=$(git pack-objects --all --depth=5 </dev/null pack) &&
+ test 5 = "$(max_chain pack-$pack.pack)"
+'
+
+test_done
cd client &&
test_must_fail git fetch-pack --no-progress .. refs/heads/xyzzy
) >/dev/null 2>error-m &&
- test_cmp expect-error error-m
+ test_i18ncmp expect-error error-m
'
test_expect_success 'test missing ref after existing' '
cd client &&
test_must_fail git fetch-pack --no-progress .. refs/heads/A refs/heads/xyzzy
) >/dev/null 2>error-em &&
- test_cmp expect-error error-em
+ test_i18ncmp expect-error error-em
'
test_expect_success 'test missing ref before existing' '
cd client &&
test_must_fail git fetch-pack --no-progress .. refs/heads/xyzzy refs/heads/A
) >/dev/null 2>error-me &&
- test_cmp expect-error error-me
+ test_i18ncmp expect-error error-me
'
test_expect_success 'test --all, --depth, and explicit head' '
)
'
+test_expect_success 'remove remote with a branch without configured merge' '
+ test_when_finished "(
+ git -C test checkout master;
+ git -C test branch -D two;
+ git -C test config --remove-section remote.two;
+ git -C test config --remove-section branch.second;
+ true
+ )" &&
+ (
+ cd test &&
+ git remote add two ../two &&
+ git fetch two &&
+ git checkout -b second two/master^0 &&
+ git config branch.second.remote two &&
+ git checkout master &&
+ git remote rm two
+ )
+'
+
test_expect_success 'rename errors out early when deleting non-existent branch' '
(
cd test &&
test_expect_code 2 git ls-remote --exit-code git://localhost:$JGIT_DAEMON_PORT/empty.git
'
+test_expect_success 'ls-remote works outside repository' '
+ # It is important for this repo to be inside the nongit
+ # area, as we want a repo name that does not include
+ # slashes (because those inhibit some of our configuration
+ # lookups).
+ nongit git init --bare dst.git &&
+ nongit git ls-remote dst.git
+'
+
test_done
test_must_fail git cat-file -t $the_commit &&
# fetching the hidden object should fail by default
- test_must_fail git fetch -v ../testrepo $the_commit:refs/heads/copy &&
+ test_must_fail git fetch -v ../testrepo $the_commit:refs/heads/copy 2>err &&
+ test_i18ngrep "Server does not allow request for unadvertised object" err &&
test_must_fail git rev-parse --verify refs/heads/copy &&
# the server side can allow it to succeed
test_cmp file clone/file
'
+test_expect_success 'list refs from outside any repository' '
+ cat >expect <<-EOF &&
+ $(git rev-parse master) HEAD
+ $(git rev-parse master) refs/heads/master
+ EOF
+ nongit git ls-remote "$HTTPD_URL/dumb/repo.git" >actual &&
+ test_cmp expect actual
+'
+
test_expect_success 'create password-protected repository' '
mkdir -p "$HTTPD_DOCUMENT_ROOT_PATH/auth/dumb/" &&
cp -Rf "$HTTPD_DOCUMENT_ROOT_PATH/repo.git" \
check_obj "$quoted:$unquoted" <<-EOF
$one blob
$two blob
+ EOF
'
test_expect_success !MINGW 'broken quoting falls back to interpreting raw' '
test_when_finished "git checkout master" &&
git for-each-ref --format="%(HEAD) %(refname:short)" refs/heads/ >actual &&
sed -e "s/^\* / /" actual >expect &&
- git checkout --orphan HEAD &&
+ git checkout --orphan orphaned-branch &&
git for-each-ref --format="%(HEAD) %(refname:short)" refs/heads/ >actual &&
test_cmp expect actual
'
test_line_count = 2 new # There is one new pack and its .idx
'
+test_expect_success 'background auto gc does not run if gc.log is present and recent but does if it is old' '
+ test_commit foo &&
+ test_commit bar &&
+ git repack &&
+ test_config gc.autopacklimit 1 &&
+ test_config gc.autodetach true &&
+ echo fleem >.git/gc.log &&
+ test_must_fail git gc --auto 2>err &&
+ test_i18ngrep "^error:" err &&
+ test_config gc.logexpiry 5.days &&
+ test-chmtime =-345600 .git/gc.log &&
+ test_must_fail git gc --auto &&
+ test_config gc.logexpiry 2.days &&
+ git gc --auto
+'
test_done
git cat-file tag X/2 > actual &&
test_cmp expect actual
'
+test_expect_success 'setup --prune-empty comparisons' '
+ git checkout --orphan master-no-a &&
+ git rm -rf . &&
+ unset test_tick &&
+ test_tick &&
+ GIT_COMMITTER_DATE="@0 +0000" GIT_AUTHOR_DATE="@0 +0000" &&
+ test_commit --notick B B.t B Bx &&
+ git checkout -b branch-no-a Bx &&
+ test_commit D D.t D Dx &&
+ mkdir dir &&
+ test_commit dir/D dir/D.t dir/D dir/Dx &&
+ test_commit E E.t E Ex &&
+ git checkout master-no-a &&
+ test_commit C C.t C Cx &&
+ git checkout branch-no-a &&
+ git merge Cx -m "Merge tag '\''C'\'' into branch" &&
+ git tag Fx &&
+ test_commit G G.t G Gx &&
+ test_commit H H.t H Hx &&
+ git checkout branch
+'
test_expect_success 'Prune empty commits' '
git rev-list HEAD > expect &&
test_cmp expect actual
'
+test_expect_success '--prune-empty is able to prune root commit' '
+ git rev-list branch-no-a >expect &&
+ git branch testing H &&
+ git filter-branch -f --prune-empty --index-filter "git update-index --remove A.t" testing &&
+ git rev-list testing >actual &&
+ git branch -D testing &&
+ test_cmp expect actual
+'
+
+test_expect_success '--prune-empty is able to prune entire branch' '
+ git branch prune-entire B &&
+ git filter-branch -f --prune-empty --index-filter "git update-index --remove A.t B.t" prune-entire &&
+ test_path_is_missing .git/refs/heads/prune-entire &&
+ test_must_fail git reflog exists refs/heads/prune-entire
+'
+
test_expect_success '--remap-to-ancestor with filename filters' '
git checkout master &&
git reset --hard A &&
test_must_fail git tag -v forged-tag
'
-test_expect_success 'verifying a proper tag with --format pass and format accordingly' '
- cat >expect <<-\EOF
+test_expect_success GPG 'verifying a proper tag with --format pass and format accordingly' '
+ cat >expect <<-\EOF &&
tagname : signed-tag
- EOF &&
+ EOF
git tag -v --format="tagname : %(tag)" "signed-tag" >actual &&
test_cmp expect actual
'
-test_expect_success 'verifying a forged tag with --format fail and format accordingly' '
- cat >expect <<-\EOF
- tagname : forged-tag
- EOF &&
+test_expect_success GPG 'verifying a forged tag with --format should fail silently' '
+ >expect &&
test_must_fail git tag -v --format="tagname : %(tag)" "forged-tag" >actual &&
test_cmp expect actual
'
test_cmp expect.stderr actual.stderr
'
-test_expect_success 'verifying tag with --format' '
- cat >expect <<-\EOF
+test_expect_success GPG 'verifying tag with --format' '
+ cat >expect <<-\EOF &&
tagname : fourth-signed
- EOF &&
+ EOF
git verify-tag --format="tagname : %(tag)" "fourth-signed" >actual &&
test_cmp expect actual
'
-test_expect_success 'verifying a forged tag with --format fail and format accordingly' '
- cat >expect <<-\EOF
- tagname : 7th forged-signed
- EOF &&
+test_expect_success GPG 'verifying a forged tag with --format should fail silently' '
+ >expect &&
test_must_fail git verify-tag --format="tagname : %(tag)" $(cat forged1.tag) >actual-forged &&
test_cmp expect actual-forged
'
'
test_expect_success 'submodule update - command run for initial population of submodule' '
- cat <<-\ EOF >expect
+ cat >expect <<-EOF &&
Execution of '\''false $submodulesha1'\'' failed in submodule path '\''submodule'\''
- EOF &&
+ EOF
rm -rf super/submodule &&
- test_must_fail git -C super submodule update >../actual &&
+ test_must_fail git -C super submodule update 2>actual &&
test_cmp expect actual &&
git -C super submodule update --checkout
'
--- /dev/null
+#!/bin/sh
+
+test_description='corner cases in ident strings'
+. ./test-lib.sh
+
+# confirm that we do not segfault _and_ that we do not say "(null)", as
+# glibc systems will quietly handle our NULL pointer
+#
+# Note also that we can't use "env" here because we need to unset a variable,
+# and "-u" is not portable.
+test_expect_success 'empty name and missing email' '
+ (
+ sane_unset GIT_AUTHOR_EMAIL &&
+ GIT_AUTHOR_NAME= &&
+ test_must_fail git commit --allow-empty -m foo 2>err &&
+ test_i18ngrep ! null err
+ )
+'
+
+test_expect_success 'commit rejects all-crud name' '
+ test_must_fail env GIT_AUTHOR_NAME=" .;<>" \
+ git commit --allow-empty -m foo
+'
+
+# We must test the actual error message here, as an unwanted
+# auto-detection could fail for other reasons.
+test_expect_success 'empty configured name does not auto-detect' '
+ (
+ sane_unset GIT_AUTHOR_NAME &&
+ test_must_fail \
+ git -c user.name= commit --allow-empty -m foo 2>err &&
+ test_i18ngrep "empty ident name" err
+ )
+'
+
+test_done
test_cmp expected actual
'
+test_expect_success 'dashdash disambiguates rev as rev' '
+ test_when_finished "rm -f master" &&
+ echo content >master &&
+ echo master:hello.c >expect &&
+ git grep -l o master -- hello.c >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'dashdash disambiguates pathspec as pathspec' '
+ test_when_finished "git rm -f master" &&
+ echo content >master &&
+ git add master &&
+ echo master:content >expect &&
+ git grep o -- master >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'report bogus arg without dashdash' '
+ test_must_fail git grep o does-not-exist
+'
+
+test_expect_success 'report bogus rev with dashdash' '
+ test_must_fail git grep o hello.c --
+'
+
+test_expect_success 'allow non-existent path with dashdash' '
+ # We need a real match so grep exits with success.
+ tree=$(git ls-tree HEAD |
+ sed s/hello.c/not-in-working-tree/ |
+ git mktree) &&
+ git grep o "$tree" -- not-in-working-tree
+'
+
+test_expect_success 'grep --no-index pattern -- path' '
+ rm -fr non &&
+ mkdir -p non/git &&
+ (
+ GIT_CEILING_DIRECTORIES="$(pwd)/non" &&
+ export GIT_CEILING_DIRECTORIES &&
+ cd non/git &&
+ echo hello >hello &&
+ echo goodbye >goodbye &&
+ echo hello:hello >expect &&
+ git grep --no-index o -- hello >actual &&
+ test_cmp expect actual
+ )
+'
+
+test_expect_success 'grep --no-index complains of revs' '
+ test_must_fail git grep --no-index o master -- 2>err &&
+ test_i18ngrep "cannot be used with revs" err
+'
+
+test_expect_success 'grep --no-index prefers paths to revs' '
+ test_when_finished "rm -f master" &&
+ echo content >master &&
+ echo master:content >expect &&
+ git grep --no-index o master >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'grep --no-index does not "diagnose" revs' '
+ test_must_fail git grep --no-index o :1:hello.c 2>err &&
+ test_i18ngrep ! -i "did you mean" err
+'
+
cat >expected <<EOF
hello.c:int main(int argc, const char **argv)
hello.c: printf("Hello world.\n");
!two@example.com!
!three@example.com!
!four@example.com!
-!five@example.com!
EOF
"
Test Cc: trailers.
Cc: one@example.com
- Cc: <two@example.com> # this is part of the name
- Cc: <three@example.com>, <four@example.com> # not.five@example.com
- Cc: "Some # Body" <five@example.com> [part.of.name.too]
+ Cc: <two@example.com> # trailing comments are ignored
+ Cc: <three@example.com>, <not.four@example.com> one address per line
+ Cc: "Some # Body" <four@example.com> [ <also.a.comment> ]
EOF
clean_fake_sendmail &&
git send-email -1 --to=recipient@example.com \
test_done
fi
+if ! test_have_prereq NOT_ROOT; then
+ skip_all='When cvs is compiled with CVS_BADROOT, commits as root fail'
+ test_done
+fi
+
CVSROOT=$PWD/tmpcvsroot
CVSWORK=$PWD/cvswork
GIT_DIR=$PWD/.git
test_description='git cvsimport basic tests'
. ./lib-cvs.sh
+if ! test_have_prereq NOT_ROOT; then
+ skip_all='When cvs is compiled with CVS_BADROOT, commits as root fail'
+ test_done
+fi
+
test_expect_success PERL 'setup cvsroot environment' '
CVSROOT=$(pwd)/cvsroot &&
export CVSROOT
export GIT_COMMITTER_DATE GIT_AUTHOR_DATE
}
-# Stop execution and start a shell. This is useful for debugging tests and
-# only makes sense together with "-v".
+# Stop execution and start a shell. This is useful for debugging tests.
#
# Be sure to remove all invocations of this command before submitting.
test_pause () {
- if test "$verbose" = t; then
- "$SHELL_PATH" <&6 >&3 2>&4
- else
- error >&5 "test_pause requires --verbose"
- fi
+ "$SHELL_PATH" <&6 >&5 2>&7
}
# Wrap git in gdb. Adding this to a command can make it easier to
#
# Example: "debug git checkout master".
debug () {
- GIT_TEST_GDB=1 "$@"
+ GIT_TEST_GDB=1 "$@" <&6 >&5 2>&7
}
# Call test_commit with the arguments
exec 5>&1
exec 6<&0
+exec 7>&2
if test "$verbose_log" = "t"
then
exec 3>>"$GIT_TEST_TEE_OUTPUT_FILE" 4>&3
tempfile->fd = -1;
if (fp) {
tempfile->fp = NULL;
- err = ferror(fp);
- err |= fclose(fp);
+ if (ferror(fp)) {
+ err = -1;
+ if (!fclose(fp))
+ errno = EIO;
+ } else {
+ err = fclose(fp);
+ }
} else {
err = close(fd);
}
helper->git_cmd = 0;
helper->silent_exec_failure = 1;
- argv_array_pushf(&helper->env_array, "%s=%s", GIT_DIR_ENVIRONMENT,
- get_git_dir());
+ if (have_git_dir())
+ argv_array_pushf(&helper->env_array, "%s=%s",
+ GIT_DIR_ENVIRONMENT, get_git_dir());
code = start_command(helper);
if (code < 0 && errno == ENOENT)
static int fetch_refs_via_pack(struct transport *transport,
int nr_heads, struct ref **to_fetch)
{
+ int ret = 0;
struct git_transport_data *data = transport->data;
struct ref *refs;
char *dest = xstrdup(transport->url);
&transport->pack_lockfile);
close(data->fd[0]);
close(data->fd[1]);
- if (finish_connect(data->conn)) {
- free_refs(refs);
- refs = NULL;
- }
+ if (finish_connect(data->conn))
+ ret = -1;
data->conn = NULL;
data->got_remote_heads = 0;
data->options.self_contained_and_connected =
args.self_contained_and_connected;
+ if (refs == NULL)
+ ret = -1;
+ if (report_unmatched_refs(to_fetch, nr_heads))
+ ret = -1;
+
free_refs(refs_tmp);
free_refs(refs);
free(dest);
- return (refs ? 0 : -1);
+ return ret;
}
static int push_had_errors(struct ref *ref)
use_include_tag = 1;
o = parse_object(sha1_buf);
- if (!o)
+ if (!o) {
+ packet_write_fmt(1,
+ "ERR upload-pack: not our ref %s",
+ sha1_to_hex(sha1_buf));
die("git upload-pack: not our ref %s",
sha1_to_hex(sha1_buf));
+ }
if (!(o->flags & WANTED)) {
o->flags |= WANTED;
if (!((allow_unadvertised_object_request & ALLOW_ANY_SHA1) == ALLOW_ANY_SHA1
struct dirent *d;
int counter = 0, alloc = 2;
- list = xmalloc(alloc * sizeof(struct worktree *));
+ ALLOC_ARRAY(list, alloc);
list[counter++] = get_main_worktree();
return fd;
}
-/* git_mkstemp() - create tmp file honoring TMPDIR variable */
-int git_mkstemp(char *path, size_t len, const char *template)
-{
- const char *tmp;
- size_t n;
-
- tmp = getenv("TMPDIR");
- if (!tmp)
- tmp = "/tmp";
- n = snprintf(path, len, "%s/%s", tmp, template);
- if (len <= n) {
- errno = ENAMETOOLONG;
- return -1;
- }
- return mkstemp(path);
-}
-
/* Adapted from libiberty's mkstemp.c. */
#undef TMP_MAX
return git_mkstemps_mode(pattern, 0, mode);
}
-#ifdef NO_MKSTEMPS
-int gitmkstemps(char *pattern, int suffix_len)
-{
- return git_mkstemps_mode(pattern, suffix_len, 0600);
-}
-#endif
-
int xmkstemp_mode(char *template, int mode)
{
int fd;
return;
branch_name = s->branch;
+#define LABEL(string) (s->no_gettext ? (string) : _(string))
+
if (s->is_initial)
- color_fprintf(s->fp, header_color, _("Initial commit on "));
+ color_fprintf(s->fp, header_color, LABEL(N_("Initial commit on ")));
if (!strcmp(s->branch, "HEAD")) {
color_fprintf(s->fp, color(WT_STATUS_NOBRANCH, s), "%s",
- _("HEAD (no branch)"));
+ LABEL(N_("HEAD (no branch)")));
goto conclude;
}
if (!upstream_is_gone && !num_ours && !num_theirs)
goto conclude;
-#define LABEL(string) (s->no_gettext ? (string) : _(string))
-
color_fprintf(s->fp, header_color, " [");
if (upstream_is_gone) {
color_fprintf(s->fp, header_color, LABEL(N_("gone")));
static void wt_shortstatus_print(struct wt_status *s)
{
- int i;
+ struct string_list_item *it;
if (s->show_branch)
wt_shortstatus_print_tracking(s);
- for (i = 0; i < s->change.nr; i++) {
- struct wt_status_change_data *d;
- struct string_list_item *it;
+ for_each_string_list_item(it, &s->change) {
+ struct wt_status_change_data *d = it->util;
- it = &(s->change.items[i]);
- d = it->util;
if (d->stagemask)
wt_shortstatus_unmerged(it, s);
else
wt_shortstatus_status(it, s);
}
- for (i = 0; i < s->untracked.nr; i++) {
- struct string_list_item *it;
-
- it = &(s->untracked.items[i]);
+ for_each_string_list_item(it, &s->untracked)
wt_shortstatus_other(it, s, "??");
- }
- for (i = 0; i < s->ignored.nr; i++) {
- struct string_list_item *it;
- it = &(s->ignored.items[i]);
+ for_each_string_list_item(it, &s->ignored)
wt_shortstatus_other(it, s, "!!");
- }
}
static void wt_porcelain_print(struct wt_status *s)