Code clean-up.
* jc/diff-sane-truncate-no-more:
diff: retire sane_truncate_fn
hand-rolled substitute.
(merge 90dbf226ba js/git-gui-msgfmt-on-windows later to maint).
+ * "git grep --recurse-submodules" has been reworked to give a more
+ consistent output across submodule boundary (and do its thing
+ without having to fork a separate process).
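 A usage sketch (the pattern is only a placeholder); in both forms,
 matches inside a submodule are prefixed with the submodule's path in
 the superproject:

     git grep --recurse-submodules -n "TODO"
     git grep --recurse-submodules -n "TODO" HEAD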
+
+ * A helper function to read a single whole line into strbuf
+ mistakenly triggered an OOM error at EOF under certain conditions,
+ which has been fixed.
+ (merge 642956cf45 rs/strbuf-getwholeline-fix later to maint).
+
+ * The "ref-store" code reorganization continues.
+
Also contains various documentation updates and code clean-ups.
and then "git tag -l" is made to run pager by default.
(merge 595d59e2b5 ma/pager-per-subcommand-action later to maint).
+ * "git push --recurse-submodules $there HEAD:$target" was not
+ propagated down to the submodules, but now it is.
+ (merge c7be7201a7 bw/push-options-recursively-to-submodules later to maint).
+
+ * Commands like "git rebase" accepted the --rerere-autoupdate option
+ from the command line, but did not always use it. This has been
+ fixed.
+ (merge f826fb799e pw/sequence-rerere-autoupdate later to maint).
+
+ * "git clone --recurse-submodules --quiet" did not pass the quiet
+ option down to submodules.
+ (merge 03c004c581 bw/clone-recursive-quiet later to maint).
+
+ * Test portability fix for OBSD.
+ (merge bed67874e2 rs/obsd-getcwd-workaround later to maint).
+ (merge 4c7fda8fc1 rs/t4062-obsd later to maint).
+
+ * Portability fix for OBSD.
+ (merge 29c2eda80b rs/in-obsd-basename-dirname-take-const later to maint).
+
+ * "git am -s" has been taught that some input may end with a trailer
+ block that is not Signed-off-by: and it should refrain from adding
+ an extra blank line before adding a new sign-off in such a case.
+ (merge 735285b403 pw/am-signoff later to maint).
+
+ * "git svn" used with "--localtime" option did not compute the tz
+ offset for the timestamp in question and instead always used the
+ current time, which has been corrected.
+ (merge 1adc4b9a58 ur/svn-local-zone later to maint).
+
+ * Memory leaks in error codepaths have been plugged.
+ (merge 83cd6f9017 rs/fsck-obj-leakfix later to maint).
+ (merge 896dca3ab7 rs/unpack-entry-leakfix later to maint).
+ (merge 149d8cbb2e rs/win32-syslog-leakfix later to maint).
+
* Other minor doc, test and build updates and code cleanups.
(merge 5b114f3bb0 rs/bswap-ubsan-fix later to maint).
(merge 168e63554c rs/move-array later to maint).
(merge fa64a2fdbe jt/subprocess-handshake later to maint).
(merge 0ba9c9a0fb jb/t8008-cleanup later to maint).
(merge a7c28a2161 jt/t1450-fsck-corrupt-packfile later to maint).
+ (merge dff2813391 ab/ref-filter-no-contains later to maint).
+ (merge f094b89a4d ma/parse-maybe-bool later to maint).
+ (merge 974ce8078c mf/no-dashed-subcommands later to maint).
+ (merge f81935cc4d jc/perl-git-comment-typofix later to maint).
+ (merge 57ea241ef0 rs/t3700-clean-leftover later to maint).
+ (merge f1068efefe jk/drop-sha1-entry-pos later to maint).
+ (merge 0b006014c8 jk/hashcmp-memcmp later to maint).
+ (merge 1e22a9917b rj/add-chmod-error-message later to maint).
+ (merge 881529c846 rs/apply-lose-prefix-length later to maint).
+ (merge 6355a76802 rs/find-pack-entry-bisection later to maint).
+ (merge de3ce210ed rs/merge-microcleanup later to maint).
synonyms are accepted for 'true' and 'false'; these are all
case-insensitive.
- true;; Boolean true can be spelled as `yes`, `on`, `true`,
- or `1`. Also, a variable defined without `= <value>`
+ true;; Boolean true literals are `yes`, `on`, `true`,
+ and `1`. Also, a variable defined without `= <value>`
is taken as true.
- false;; Boolean false can be spelled as `no`, `off`,
- `false`, or `0`.
+ false;; Boolean false literals are `no`, `off`, `false`,
+ `0` and the empty string.
+
When converting value to the canonical form using `--bool` type
-specifier; 'git config' will ensure that the output is "true" or
+specifier, 'git config' will ensure that the output is "true" or
"false" (spelled in lowercase).
integer::
[(--reroll-count|-v) <n>]
[--to=<email>] [--cc=<email>]
[--[no-]cover-letter] [--quiet] [--notes[=<ref>]]
+ [--progress]
[<common diff options>]
[ <since> | <revision range> ]
range are always formatted as creation patches, independently
of this flag.
+--progress::
+ Show progress reports on stderr as patches are generated.
+
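For example (a sketch; the output directory and the commit range are
placeholders), the progress meter goes to stderr while the patch files
are written to the named directory:

    git format-patch --progress -o outgoing/ -3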
CONFIGURATION
-------------
You can specify extra mail header lines to be added to each message,
<tree> option the prefix of all submodule output will be the name of
the parent project's <tree> object.
---parent-basename <basename>::
- For internal use only. In order to produce uniform output with the
- --recurse-submodules option, this option can be used to provide the
- basename of a parent's <tree> object to a submodule so the submodule
- can prefix its output with the parent's name rather than the SHA1 of
- the submodule.
-
-a::
--text::
Process binary files as if they were text.
'git push' [--all | --mirror | --tags] [--follow-tags] [--atomic] [-n | --dry-run] [--receive-pack=<git-receive-pack>]
[--repo=<repository>] [-f | --force] [-d | --delete] [--prune] [-v | --verbose]
[-u | --set-upstream] [--push-option=<string>]
- [--[no-]signed|--sign=(true|false|if-asked)]
+ [--[no-]signed|--signed=(true|false|if-asked)]
[--force-with-lease[=<refname>[:<expect>]]]
[--no-verify] [<repository> [<refspec>...]]
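
A usage sketch with the corrected option spelling (the remote and the
refspec are placeholders):

    # sign the push certificate only when the receiving end asks for it
    git push --signed=if-asked origin HEAD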
information, see `push.followTags` in linkgit:git-config[1].
--[no-]signed::
---sign=(true|false|if-asked)::
+--signed=(true|false|if-asked)::
GPG-sign the push request to update refs on the receiving
side, to allow it to be checked by the hooks and/or be
logged. If `false` or `--no-signed`, no signing will be
[verse]
'git send-pack' [--all] [--dry-run] [--force] [--receive-pack=<git-receive-pack>]
[--verbose] [--thin] [--atomic]
- [--[no-]signed|--sign=(true|false|if-asked)]
+ [--[no-]signed|--signed=(true|false|if-asked)]
[<host>:]<directory> [<ref>...]
DESCRIPTION
refs.
--[no-]signed::
---sign=(true|false|if-asked)::
+--signed=(true|false|if-asked)::
GPG-sign the push request to update refs on the receiving
side, to allow it to be checked by the hooks and/or be
logged. If `false` or `--no-signed`, no signing will be
Note that omitting the `=` in `git -c foo.bar ...` is allowed and sets
`foo.bar` to the boolean true value (just like `[foo]bar` would in a
config file). Including the equals but with an empty value (like `git -c
-foo.bar= ...`) sets `foo.bar` to the empty string.
+foo.bar= ...`) sets `foo.bar` to the empty string which `git config
+--bool` will convert to `false`.
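
For example (a minimal sketch, run from inside a repository; `foo.bar`
is only a demonstration key):

    git -c foo.bar= config foo.bar           # prints an empty line
    git -c foo.bar= config --bool foo.bar    # prints "false"
    git -c foo.bar config --bool foo.bar     # no "=": prints "true"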
--exec-path[=<path>]::
Path to wherever your core Git programs are installed.
an `is_bool` flag is unset.
`git_config_maybe_bool`::
+Deprecated. Use `git_parse_maybe_bool` instead. They are exactly the
+same, except this function takes an unused argument `name`.
+
+`git_parse_maybe_bool`::
Same as `git_config_bool`, except that it returns -1 on error rather
than dying.
LIB_OBJS += refs.o
LIB_OBJS += refs/files-backend.o
LIB_OBJS += refs/iterator.o
+LIB_OBJS += refs/packed-backend.o
LIB_OBJS += refs/ref-cache.o
LIB_OBJS += ref-filter.o
LIB_OBJS += remote.o
{
memset(state, 0, sizeof(*state));
state->prefix = prefix;
- state->prefix_length = state->prefix ? strlen(state->prefix) : 0;
state->lock_file = lock_file;
state->newfd = -1;
state->apply = 1;
* Does it begin with "a/$our-prefix" and such? Then this is
* very likely to apply to our directory.
*/
- if (!strncmp(name, state->prefix, state->prefix_length))
+ if (starts_with(name, state->prefix))
val = count_slashes(state->prefix);
else {
cp++;
- if (!strncmp(cp, state->prefix, state->prefix_length))
+ if (starts_with(cp, state->prefix))
val = count_slashes(state->prefix) + 1;
}
}
int i;
/* Paths outside are not touched regardless of "--include" */
- if (0 < state->prefix_length) {
- int pathlen = strlen(pathname);
- if (pathlen <= state->prefix_length ||
- memcmp(state->prefix, pathname, state->prefix_length))
+ if (state->prefix && *state->prefix) {
+ const char *rest;
+ if (!skip_prefix(pathname, state->prefix, &rest) || !*rest)
return 0;
}
struct apply_state {
const char *prefix;
- int prefix_length;
/* These are lock_file related */
struct lock_file *lock_file;
int add_errors;
};
-static void chmod_pathspec(struct pathspec *pathspec, int force_mode)
+static void chmod_pathspec(struct pathspec *pathspec, char flip)
{
int i;
if (pathspec && !ce_path_match(ce, pathspec, NULL))
continue;
- if (chmod_cache_entry(ce, force_mode) < 0)
- fprintf(stderr, "cannot chmod '%s'", ce->name);
+ if (chmod_cache_entry(ce, flip) < 0)
+ fprintf(stderr, "cannot chmod %cx '%s'\n", flip, ce->name);
}
}
read_state_file(&sb, state, "utf8", 1);
state->utf8 = !strcmp(sb.buf, "t");
+ if (file_exists(am_path(state, "rerere-autoupdate"))) {
+ read_state_file(&sb, state, "rerere-autoupdate", 1);
+ state->allow_rerere_autoupdate = strcmp(sb.buf, "t") ?
+ RERERE_NOAUTOUPDATE : RERERE_AUTOUPDATE;
+ } else {
+ state->allow_rerere_autoupdate = 0;
+ }
+
read_state_file(&sb, state, "keep", 1);
if (!strcmp(sb.buf, "t"))
state->keep = KEEP_TRUE;
write_state_bool(state, "sign", state->signoff);
write_state_bool(state, "utf8", state->utf8);
+ if (state->allow_rerere_autoupdate)
+ write_state_bool(state, "rerere-autoupdate",
+ state->allow_rerere_autoupdate == RERERE_AUTOUPDATE);
+
switch (state->keep) {
case KEEP_FALSE:
str = "f";
*/
static void am_append_signoff(struct am_state *state)
{
- char *cp;
- struct strbuf mine = STRBUF_INIT;
struct strbuf sb = STRBUF_INIT;
strbuf_attach(&sb, state->msg, state->msg_len, state->msg_len);
-
- /* our sign-off */
- strbuf_addf(&mine, "\n%s%s\n",
- sign_off_header,
- fmt_name(getenv("GIT_COMMITTER_NAME"),
- getenv("GIT_COMMITTER_EMAIL")));
-
- /* Does sb end with it already? */
- if (mine.len < sb.len &&
- !strcmp(mine.buf, sb.buf + sb.len - mine.len))
- goto exit; /* no need to duplicate */
-
- /* Does it have any Signed-off-by: in the text */
- for (cp = sb.buf;
- cp && *cp && (cp = strstr(cp, sign_off_header)) != NULL;
- cp = strchr(cp, '\n')) {
- if (sb.buf == cp || cp[-1] == '\n')
- break;
- }
-
- strbuf_addstr(&sb, mine.buf + !!cp);
-exit:
- strbuf_release(&mine);
+ append_signoff(&sb, 0, 0);
state->msg = strbuf_detach(&sb, &state->msg_len);
}
if (submodule_progress)
argv_array_push(&args, "--progress");
+ if (option_verbosity < 0)
+ argv_array_push(&args, "--quiet");
+
err = run_command_v_opt(args.argv, RUN_GIT_CMD);
argv_array_clear(&args);
}
return 0;
}
- /*
- * Re-read the index as pre-commit hook could have updated it,
- * and write it out as a tree. We must do this before we invoke
- * the editor and after we invoke run_status above.
- */
- discard_cache();
+ if (!no_verify && find_hook("pre-commit")) {
+ /*
+ * Re-read the index as pre-commit hook could have updated it,
+ * and write it out as a tree. We must do this before we invoke
+ * the editor and after we invoke run_status above.
+ */
+ discard_cache();
+ }
read_cache_from(index_file);
+
if (update_main_cache_tree(0)) {
error(_("Error building trees"));
return 0;
static int all, append, dry_run, force, keep, multiple, update_head_ok, verbosity, deepen_relative;
static int progress = -1;
static int tags = TAGS_DEFAULT, unshallow, update_shallow, deepen;
-static int max_children = -1;
+static int max_children = 1;
static enum transport_family family;
static const char *depth;
static const char *deepen_since;
recurse_submodules = r;
}
+ if (!strcmp(k, "submodule.fetchjobs")) {
+ max_children = parse_submodule_fetchjobs(k, v);
+ return 0;
+ } else if (!strcmp(k, "fetch.recursesubmodules")) {
+ recurse_submodules = parse_fetch_recurse_submodules_arg(k, v);
+ return 0;
+ }
+
return git_default_config(k, v, cb);
}
+static int gitmodules_fetch_config(const char *var, const char *value, void *cb)
+{
+ if (!strcmp(var, "submodule.fetchjobs")) {
+ max_children = parse_submodule_fetchjobs(var, value);
+ return 0;
+ } else if (!strcmp(var, "fetch.recursesubmodules")) {
+ recurse_submodules = parse_fetch_recurse_submodules_arg(var, value);
+ return 0;
+ }
+
+ return 0;
+}
+
static int parse_refmap_arg(const struct option *opt, const char *arg, int unset)
{
ALLOC_GROW(refmap_array, refmap_nr + 1, refmap_alloc);
for (i = 1; i < argc; i++)
strbuf_addf(&default_rla, " %s", argv[i]);
+ config_from_gitmodules(gitmodules_fetch_config, NULL);
git_config(git_fetch_config, NULL);
argc = parse_options(argc, argv, prefix,
deepen = 1;
if (recurse_submodules != RECURSE_SUBMODULES_OFF) {
- set_config_fetch_recurse_submodules(recurse_submodules_default);
gitmodules_config();
git_config(submodule_config, NULL);
}
result = fetch_populated_submodules(&options,
submodule_prefix,
recurse_submodules,
+ recurse_submodules_default,
verbosity < 0,
max_children);
argv_array_clear(&options);
static int fsck_obj(struct object *obj)
{
+ int err;
+
if (obj->flags & SEEN)
return 0;
obj->flags |= SEEN;
if (fsck_walk(obj, NULL, &fsck_obj_options))
objerror(obj, "broken links");
- if (fsck_object(obj, NULL, 0, &fsck_obj_options))
- return -1;
-
- if (obj->type == OBJ_TREE) {
- struct tree *item = (struct tree *) obj;
-
- free_tree_buffer(item);
- }
+ err = fsck_object(obj, NULL, 0, &fsck_obj_options);
+ if (err)
+ goto out;
if (obj->type == OBJ_COMMIT) {
struct commit *commit = (struct commit *) obj;
- free_commit_buffer(commit);
-
if (!commit->parents && show_root)
printf("root %s\n", describe_object(&commit->object));
}
}
}
- return 0;
+out:
+ if (obj->type == OBJ_TREE)
+ free_tree_buffer((struct tree *)obj);
+ if (obj->type == OBJ_COMMIT)
+ free_commit_buffer((struct commit *)obj);
+ return err;
}
static int fsck_obj_buffer(const struct object_id *oid, enum object_type type,
NULL
};
-static const char *super_prefix;
static int recurse_submodules;
-static struct argv_array submodule_options = ARGV_ARRAY_INIT;
-static const char *parent_basename;
-
-static int grep_submodule_launch(struct grep_opt *opt,
- const struct grep_source *gs);
#define GREP_NUM_THREADS_DEFAULT 8
static int num_threads;
break;
opt->output_priv = w;
- if (w->source.type == GREP_SOURCE_SUBMODULE)
- hit |= grep_submodule_launch(opt, &w->source);
- else
- hit |= grep_source(opt, &w->source);
+ hit |= grep_source(opt, &w->source);
grep_source_clear_data(&w->source);
work_done(w);
}
{
struct strbuf pathbuf = STRBUF_INIT;
- if (super_prefix) {
- strbuf_add(&pathbuf, filename, tree_name_len);
- strbuf_addstr(&pathbuf, super_prefix);
- strbuf_addstr(&pathbuf, filename + tree_name_len);
+ if (opt->relative && opt->prefix_length) {
+ quote_path_relative(filename + tree_name_len, opt->prefix, &pathbuf);
+ strbuf_insert(&pathbuf, 0, filename, tree_name_len);
} else {
strbuf_addstr(&pathbuf, filename);
}
- if (opt->relative && opt->prefix_length) {
- char *name = strbuf_detach(&pathbuf, NULL);
- quote_path_relative(name + tree_name_len, opt->prefix, &pathbuf);
- strbuf_insert(&pathbuf, 0, name, tree_name_len);
- free(name);
- }
-
#ifndef NO_PTHREADS
if (num_threads) {
add_work(opt, GREP_SOURCE_OID, pathbuf.buf, path, oid);
{
struct strbuf buf = STRBUF_INIT;
- if (super_prefix)
- strbuf_addstr(&buf, super_prefix);
- strbuf_addstr(&buf, filename);
-
- if (opt->relative && opt->prefix_length) {
- char *name = strbuf_detach(&buf, NULL);
- quote_path_relative(name, opt->prefix, &buf);
- free(name);
- }
+ if (opt->relative && opt->prefix_length)
+ quote_path_relative(filename, opt->prefix, &buf);
+ else
+ strbuf_addstr(&buf, filename);
#ifndef NO_PTHREADS
if (num_threads) {
exit(status);
}
-static void compile_submodule_options(const struct grep_opt *opt,
- const char **argv,
- int cached, int untracked,
- int opt_exclude, int use_index,
- int pattern_type_arg)
-{
- struct grep_pat *pattern;
-
- if (recurse_submodules)
- argv_array_push(&submodule_options, "--recurse-submodules");
-
- if (cached)
- argv_array_push(&submodule_options, "--cached");
- if (!use_index)
- argv_array_push(&submodule_options, "--no-index");
- if (untracked)
- argv_array_push(&submodule_options, "--untracked");
- if (opt_exclude > 0)
- argv_array_push(&submodule_options, "--exclude-standard");
-
- if (opt->invert)
- argv_array_push(&submodule_options, "-v");
- if (opt->ignore_case)
- argv_array_push(&submodule_options, "-i");
- if (opt->word_regexp)
- argv_array_push(&submodule_options, "-w");
- switch (opt->binary) {
- case GREP_BINARY_NOMATCH:
- argv_array_push(&submodule_options, "-I");
- break;
- case GREP_BINARY_TEXT:
- argv_array_push(&submodule_options, "-a");
- break;
- default:
- break;
- }
- if (opt->allow_textconv)
- argv_array_push(&submodule_options, "--textconv");
- if (opt->max_depth != -1)
- argv_array_pushf(&submodule_options, "--max-depth=%d",
- opt->max_depth);
- if (opt->linenum)
- argv_array_push(&submodule_options, "-n");
- if (!opt->pathname)
- argv_array_push(&submodule_options, "-h");
- if (!opt->relative)
- argv_array_push(&submodule_options, "--full-name");
- if (opt->name_only)
- argv_array_push(&submodule_options, "-l");
- if (opt->unmatch_name_only)
- argv_array_push(&submodule_options, "-L");
- if (opt->null_following_name)
- argv_array_push(&submodule_options, "-z");
- if (opt->count)
- argv_array_push(&submodule_options, "-c");
- if (opt->file_break)
- argv_array_push(&submodule_options, "--break");
- if (opt->heading)
- argv_array_push(&submodule_options, "--heading");
- if (opt->pre_context)
- argv_array_pushf(&submodule_options, "--before-context=%d",
- opt->pre_context);
- if (opt->post_context)
- argv_array_pushf(&submodule_options, "--after-context=%d",
- opt->post_context);
- if (opt->funcname)
- argv_array_push(&submodule_options, "-p");
- if (opt->funcbody)
- argv_array_push(&submodule_options, "-W");
- if (opt->all_match)
- argv_array_push(&submodule_options, "--all-match");
- if (opt->debug)
- argv_array_push(&submodule_options, "--debug");
- if (opt->status_only)
- argv_array_push(&submodule_options, "-q");
-
- switch (pattern_type_arg) {
- case GREP_PATTERN_TYPE_BRE:
- argv_array_push(&submodule_options, "-G");
- break;
- case GREP_PATTERN_TYPE_ERE:
- argv_array_push(&submodule_options, "-E");
- break;
- case GREP_PATTERN_TYPE_FIXED:
- argv_array_push(&submodule_options, "-F");
- break;
- case GREP_PATTERN_TYPE_PCRE:
- argv_array_push(&submodule_options, "-P");
- break;
- case GREP_PATTERN_TYPE_UNSPECIFIED:
- break;
- default:
- die("BUG: Added a new grep pattern type without updating switch statement");
- }
-
- for (pattern = opt->pattern_list; pattern != NULL;
- pattern = pattern->next) {
- switch (pattern->token) {
- case GREP_PATTERN:
- argv_array_pushf(&submodule_options, "-e%s",
- pattern->pattern);
- break;
- case GREP_AND:
- case GREP_OPEN_PAREN:
- case GREP_CLOSE_PAREN:
- case GREP_NOT:
- case GREP_OR:
- argv_array_push(&submodule_options, pattern->pattern);
- break;
- /* BODY and HEAD are not used by git-grep */
- case GREP_PATTERN_BODY:
- case GREP_PATTERN_HEAD:
- break;
- }
- }
-
- /*
- * Limit number of threads for child process to use.
- * This is to prevent potential fork-bomb behavior of git-grep as each
- * submodule process has its own thread pool.
- */
- argv_array_pushf(&submodule_options, "--threads=%d",
- DIV_ROUND_UP(num_threads, 2));
-
- /* Add Pathspecs */
- argv_array_push(&submodule_options, "--");
- for (; *argv; argv++)
- argv_array_push(&submodule_options, *argv);
-}
+static int grep_cache(struct grep_opt *opt, struct repository *repo,
+ const struct pathspec *pathspec, int cached);
+static int grep_tree(struct grep_opt *opt, const struct pathspec *pathspec,
+ struct tree_desc *tree, struct strbuf *base, int tn_len,
+ int check_attr, struct repository *repo);
-/*
- * Launch child process to grep contents of a submodule
- */
-static int grep_submodule_launch(struct grep_opt *opt,
- const struct grep_source *gs)
+static int grep_submodule(struct grep_opt *opt, struct repository *superproject,
+ const struct pathspec *pathspec,
+ const struct object_id *oid,
+ const char *filename, const char *path)
{
- struct child_process cp = CHILD_PROCESS_INIT;
- int status, i;
- const char *end_of_base;
- const char *name;
- struct strbuf child_output = STRBUF_INIT;
-
- end_of_base = strchr(gs->name, ':');
- if (gs->identifier && end_of_base)
- name = end_of_base + 1;
- else
- name = gs->name;
+ struct repository submodule;
+ int hit;
- prepare_submodule_repo_env(&cp.env_array);
- argv_array_push(&cp.env_array, GIT_DIR_ENVIRONMENT);
+ if (!is_submodule_active(superproject, path))
+ return 0;
- if (opt->relative && opt->prefix_length)
- argv_array_pushf(&cp.env_array, "%s=%s",
- GIT_TOPLEVEL_PREFIX_ENVIRONMENT,
- opt->prefix);
+ if (repo_submodule_init(&submodule, superproject, path))
+ return 0;
- /* Add super prefix */
- argv_array_pushf(&cp.args, "--super-prefix=%s%s/",
- super_prefix ? super_prefix : "",
- name);
- argv_array_push(&cp.args, "grep");
+ repo_read_gitmodules(&submodule);
/*
- * Add basename of parent project
- * When performing grep on a tree object the filename is prefixed
- * with the object's name: 'tree-name:filename'. In order to
- * provide uniformity of output we want to pass the name of the
- * parent project's object name to the submodule so the submodule can
- * prefix its output with the parent's name and not its own OID.
+ * NEEDSWORK: This adds the submodule's object directory to the list of
+ * alternates for the single in-memory object store. This has some bad
+ * consequences for memory (processed objects will never be freed) and
+ * performance (this increases the number of pack files git has to pay
+ * attention to, to the sum of the number of pack files in all the
+ * repositories processed so far). This can be removed once the object
+ * store is no longer global and instead is a member of the repository
+ * object.
*/
- if (gs->identifier && end_of_base)
- argv_array_pushf(&cp.args, "--parent-basename=%.*s",
- (int) (end_of_base - gs->name),
- gs->name);
+ add_to_alternates_memory(submodule.objectdir);
- /* Add options */
- for (i = 0; i < submodule_options.argc; i++) {
- /*
- * If there is a tree identifier for the submodule, add the
- * rev after adding the submodule options but before the
- * pathspecs. To do this we listen for the '--' and insert the
- * oid before pushing the '--' onto the child process argv
- * array.
- */
- if (gs->identifier &&
- !strcmp("--", submodule_options.argv[i])) {
- argv_array_push(&cp.args, oid_to_hex(gs->identifier));
- }
+ if (oid) {
+ struct object *object;
+ struct tree_desc tree;
+ void *data;
+ unsigned long size;
+ struct strbuf base = STRBUF_INIT;
- argv_array_push(&cp.args, submodule_options.argv[i]);
- }
+ object = parse_object_or_die(oid, oid_to_hex(oid));
- cp.git_cmd = 1;
- cp.dir = gs->path;
+ grep_read_lock();
+ data = read_object_with_reference(object->oid.hash, tree_type,
+ &size, NULL);
+ grep_read_unlock();
- /*
- * Capture output to output buffer and check the return code from the
- * child process. A '0' indicates a hit, a '1' indicates no hit and
- * anything else is an error.
- */
- status = capture_command(&cp, &child_output, 0);
- if (status && (status != 1)) {
- /* flush the buffer */
- write_or_die(1, child_output.buf, child_output.len);
- die("process for submodule '%s' failed with exit code: %d",
- gs->name, status);
- }
+ if (!data)
+ die(_("unable to read tree (%s)"), oid_to_hex(&object->oid));
- opt->output(opt, child_output.buf, child_output.len);
- strbuf_release(&child_output);
- /* invert the return code to make a hit equal to 1 */
- return !status;
-}
+ strbuf_addstr(&base, filename);
+ strbuf_addch(&base, '/');
-/*
- * Prep grep structures for a submodule grep
- * oid: the oid of the submodule or NULL if using the working tree
- * filename: name of the submodule including tree name of parent
- * path: location of the submodule
- */
-static int grep_submodule(struct grep_opt *opt, const struct object_id *oid,
- const char *filename, const char *path)
-{
- if (!is_submodule_active(the_repository, path))
- return 0;
- if (!is_submodule_populated_gently(path, NULL)) {
- /*
- * If searching history, check for the presence of the
- * submodule's gitdir before skipping the submodule.
- */
- if (oid) {
- const struct submodule *sub =
- submodule_from_path(&null_oid, path);
- if (sub)
- path = git_path("modules/%s", sub->name);
-
- if (!(is_directory(path) && is_git_directory(path)))
- return 0;
- } else {
- return 0;
- }
+ init_tree_desc(&tree, data, size);
+ hit = grep_tree(opt, pathspec, &tree, &base, base.len,
+ object->type == OBJ_COMMIT, &submodule);
+ strbuf_release(&base);
+ free(data);
+ } else {
+ hit = grep_cache(opt, &submodule, pathspec, 1);
}
-#ifndef NO_PTHREADS
- if (num_threads) {
- add_work(opt, GREP_SOURCE_SUBMODULE, filename, path, oid);
- return 0;
- } else
-#endif
- {
- struct grep_source gs;
- int hit;
-
- grep_source_init(&gs, GREP_SOURCE_SUBMODULE,
- filename, path, oid);
- hit = grep_submodule_launch(opt, &gs);
-
- grep_source_clear(&gs);
- return hit;
- }
+ repo_clear(&submodule);
+ return hit;
}
-static int grep_cache(struct grep_opt *opt, const struct pathspec *pathspec,
- int cached)
+static int grep_cache(struct grep_opt *opt, struct repository *repo,
+ const struct pathspec *pathspec, int cached)
{
int hit = 0;
int nr;
struct strbuf name = STRBUF_INIT;
int name_base_len = 0;
- if (super_prefix) {
- name_base_len = strlen(super_prefix);
- strbuf_addstr(&name, super_prefix);
+ if (repo->submodule_prefix) {
+ name_base_len = strlen(repo->submodule_prefix);
+ strbuf_addstr(&name, repo->submodule_prefix);
}
- read_cache();
+ repo_read_index(repo);
- for (nr = 0; nr < active_nr; nr++) {
- const struct cache_entry *ce = active_cache[nr];
+ for (nr = 0; nr < repo->index->cache_nr; nr++) {
+ const struct cache_entry *ce = repo->index->cache[nr];
strbuf_setlen(&name, name_base_len);
strbuf_addstr(&name, ce->name);
ce_skip_worktree(ce)) {
if (ce_stage(ce) || ce_intent_to_add(ce))
continue;
- hit |= grep_oid(opt, &ce->oid, ce->name,
- 0, ce->name);
+ hit |= grep_oid(opt, &ce->oid, name.buf,
+ 0, name.buf);
} else {
- hit |= grep_file(opt, ce->name);
+ hit |= grep_file(opt, name.buf);
}
} else if (recurse_submodules && S_ISGITLINK(ce->ce_mode) &&
submodule_path_match(pathspec, name.buf, NULL)) {
- hit |= grep_submodule(opt, NULL, ce->name, ce->name);
+ hit |= grep_submodule(opt, repo, pathspec, NULL, ce->name, ce->name);
} else {
continue;
}
if (ce_stage(ce)) {
do {
nr++;
- } while (nr < active_nr &&
- !strcmp(ce->name, active_cache[nr]->name));
+ } while (nr < repo->index->cache_nr &&
+ !strcmp(ce->name, repo->index->cache[nr]->name));
nr--; /* compensate for loop control */
}
if (hit && opt->status_only)
static int grep_tree(struct grep_opt *opt, const struct pathspec *pathspec,
struct tree_desc *tree, struct strbuf *base, int tn_len,
- int check_attr)
+ int check_attr, struct repository *repo)
{
int hit = 0;
enum interesting match = entry_not_interesting;
int old_baselen = base->len;
struct strbuf name = STRBUF_INIT;
int name_base_len = 0;
- if (super_prefix) {
- strbuf_addstr(&name, super_prefix);
+ if (repo->submodule_prefix) {
+ strbuf_addstr(&name, repo->submodule_prefix);
name_base_len = name.len;
}
strbuf_addch(base, '/');
init_tree_desc(&sub, data, size);
hit |= grep_tree(opt, pathspec, &sub, base, tn_len,
- check_attr);
+ check_attr, repo);
free(data);
} else if (recurse_submodules && S_ISGITLINK(entry.mode)) {
- hit |= grep_submodule(opt, entry.oid, base->buf,
- base->buf + tn_len);
+ hit |= grep_submodule(opt, repo, pathspec, entry.oid,
+ base->buf, base->buf + tn_len);
}
strbuf_setlen(base, old_baselen);
}
static int grep_object(struct grep_opt *opt, const struct pathspec *pathspec,
- struct object *obj, const char *name, const char *path)
+ struct object *obj, const char *name, const char *path,
+ struct repository *repo)
{
if (obj->type == OBJ_BLOB)
return grep_oid(opt, &obj->oid, name, 0, path);
if (!data)
die(_("unable to read tree (%s)"), oid_to_hex(&obj->oid));
- /* Use parent's name as base when recursing submodules */
- if (recurse_submodules && parent_basename)
- name = parent_basename;
-
len = name ? strlen(name) : 0;
strbuf_init(&base, PATH_MAX + len + 1);
if (len) {
}
init_tree_desc(&tree, data, size);
hit = grep_tree(opt, pathspec, &tree, &base, base.len,
- obj->type == OBJ_COMMIT);
+ obj->type == OBJ_COMMIT, repo);
strbuf_release(&base);
free(data);
return hit;
}
static int grep_objects(struct grep_opt *opt, const struct pathspec *pathspec,
+ struct repository *repo,
const struct object_array *list)
{
unsigned int i;
submodule_free();
gitmodules_config_oid(&real_obj->oid);
}
- if (grep_object(opt, pathspec, real_obj, list->objects[i].name, list->objects[i].path)) {
+ if (grep_object(opt, pathspec, real_obj, list->objects[i].name, list->objects[i].path,
+ repo)) {
hit = 1;
if (opt->status_only)
break;
N_("ignore files specified via '.gitignore'"), 1),
OPT_BOOL(0, "recurse-submodules", &recurse_submodules,
N_("recursively search in each submodule")),
- OPT_STRING(0, "parent-basename", &parent_basename,
- N_("basename"),
- N_("prepend parent project's basename to output")),
OPT_GROUP(""),
OPT_BOOL('v', "invert-match", &opt.invert,
N_("show non-matching lines")),
init_grep_defaults();
git_config(grep_cmd_config, NULL);
grep_init(&opt, prefix);
- super_prefix = get_super_prefix();
/*
* If there is no -- then the paths must exist in the working
if (recurse_submodules) {
gitmodules_config();
- compile_submodule_options(&opt, argv + i, cached, untracked,
- opt_exclude, use_index,
- pattern_type_arg);
}
if (show_in_pager && (cached || list.nr))
if (!cached)
setup_work_tree();
- hit = grep_cache(&opt, &pathspec, cached);
+ hit = grep_cache(&opt, the_repository, &pathspec, cached);
} else {
if (cached)
die(_("both --cached and trees are given."));
- hit = grep_objects(&opt, &pathspec, &list);
+
+ hit = grep_objects(&opt, &pathspec, the_repository, &list);
}
if (num_threads)
#include "version.h"
#include "mailmap.h"
#include "gpg-interface.h"
+#include "progress.h"
/* Set a default date-time format for git log ("log.date" config variable) */
static const char *default_date_mode = NULL;
return (isatty(1) || pager_in_use()) ? DECORATE_SHORT_REFS : 0;
}
-static int parse_decoration_style(const char *var, const char *value)
+static int parse_decoration_style(const char *value)
{
- switch (git_config_maybe_bool(var, value)) {
+ switch (git_parse_maybe_bool(value)) {
case 1:
return DECORATE_SHORT_REFS;
case 0:
if (unset)
decoration_style = 0;
else if (arg)
- decoration_style = parse_decoration_style("command line", arg);
+ decoration_style = parse_decoration_style(arg);
else
decoration_style = DECORATE_SHORT_REFS;
if (!strcmp(var, "log.date"))
return git_config_string(&default_date_mode, var, value);
if (!strcmp(var, "log.decorate")) {
- decoration_style = parse_decoration_style(var, value);
+ decoration_style = parse_decoration_style(value);
if (decoration_style < 0)
decoration_style = 0; /* maybe warn? */
return 0;
return 0;
}
if (!strcmp(var, "format.from")) {
- int b = git_config_maybe_bool(var, value);
+ int b = git_parse_maybe_bool(value);
free(from);
if (b < 0)
from = xstrdup(value);
char *branch_name = NULL;
char *base_commit = NULL;
struct base_tree_info bases;
+ int show_progress = 0;
+ struct progress *progress = NULL;
const struct option builtin_format_patch_options[] = {
{ OPTION_CALLBACK, 'n', "numbered", &numbered, NULL,
OPT_FILENAME(0, "signature-file", &signature_file,
N_("add a signature from a file")),
OPT__QUIET(&quiet, N_("don't print the patch filenames")),
+ OPT_BOOL(0, "progress", &show_progress,
+ N_("show progress while generating patches")),
OPT_END()
};
start_number--;
}
rev.add_signoff = do_signoff;
+
+ if (show_progress)
+ progress = start_progress_delay(_("Generating patches"), total, 0, 2);
while (0 <= --nr) {
int shown;
+ display_progress(progress, total - nr);
commit = list[nr];
rev.nr = total - nr + (start_number - 1);
/* Make the second and subsequent mails replies to the first */
if (!use_stdout)
fclose(rev.diffopt.file);
}
+ stop_progress(&progress);
free(list);
free(branch_name);
string_list_clear(&extra_to, 0);
else if (!strcmp(k, "merge.renormalize"))
option_renormalize = git_config_bool(k, v);
else if (!strcmp(k, "merge.ff")) {
- int boolval = git_config_maybe_bool(k, v);
+ int boolval = git_parse_maybe_bool(v);
if (0 <= boolval) {
fast_forward = boolval ? FF_ALLOW : FF_NO;
} else if (v && !strcmp(v, "only")) {
return 0;
if (e) {
- int v = git_config_maybe_bool(name, e);
+ int v = git_parse_maybe_bool(e);
if (v < 0)
die(_("Bad value '%s' in environment '%s'"), e, name);
return v;
* current branch.
*/
branch = branch_to_free = resolve_refdup("HEAD", 0, head_oid.hash, NULL);
- if (branch && starts_with(branch, "refs/heads/"))
- branch += 11;
+ if (branch)
+ skip_prefix(branch, "refs/heads/", &branch);
if (!branch || is_null_oid(&head_oid))
head_commit = NULL;
else
struct strbuf submodule_dotgit = STRBUF_INIT;
if (!S_ISGITLINK(active_cache[first]->ce_mode))
die(_("Directory %s is in index and no submodule?"), src);
- if (!is_staging_gitmodules_ok())
+ if (!is_staging_gitmodules_ok(&the_index))
die(_("Please stage your changes to .gitmodules or stash them to proceed"));
strbuf_addf(&submodule_dotgit, "%s/.git", src);
*submodule_gitfile = read_gitfile(submodule_dotgit.buf);
static enum rebase_type parse_config_rebase(const char *key, const char *value,
int fatal)
{
- int v = git_config_maybe_bool("pull.rebase", value);
+ int v = git_parse_maybe_bool(value);
if (!v)
return REBASE_FALSE;
if (git_config_get_value("pull.ff", &value))
return NULL;
- switch (git_config_maybe_bool("pull.ff", value)) {
+ switch (git_parse_maybe_bool(value)) {
case 0:
return "--no-ff";
case 1:
} else if (!strcmp(k, "push.gpgsign")) {
const char *value;
if (!git_config_get_value("push.gpgsign", &value)) {
- switch (git_config_maybe_bool("push.gpgsign", value)) {
+ switch (git_parse_maybe_bool(value)) {
case 0:
set_push_cert_flags(flags, SEND_PACK_PUSH_CERT_NEVER);
break;
}
string_list_append(&info->merge, xstrdup(value));
} else {
- int v = git_config_maybe_bool(orig_key, value);
+ int v = git_parse_maybe_bool(value);
if (v >= 0)
info->rebase = v;
else if (!strcmp(value, "preserve"))
"--strategy-option", opts->xopts ? 1 : 0,
"-x", opts->record_origin,
"--ff", opts->allow_ff,
+ "--rerere-autoupdate", opts->allow_rerere_auto == RERERE_AUTOUPDATE,
+ "--no-rerere-autoupdate", opts->allow_rerere_auto == RERERE_NOAUTOUPDATE,
NULL);
}
list.entry[list.nr].name = xstrdup(ce->name);
list.entry[list.nr].is_submodule = S_ISGITLINK(ce->ce_mode);
if (list.entry[list.nr++].is_submodule &&
- !is_staging_gitmodules_ok())
+ !is_staging_gitmodules_ok(&the_index))
die (_("Please stage your changes to .gitmodules or stash them to proceed"));
}
if (!strcmp(k, "push.gpgsign")) {
const char *value;
if (!git_config_get_value("push.gpgsign", &value)) {
- switch (git_config_maybe_bool("push.gpgsign", value)) {
+ switch (git_parse_maybe_bool(value)) {
case 0:
args.push_cert = SEND_PACK_PUSH_CERT_NEVER;
break;
return 0;
}
+static int gitmodules_update_clone_config(const char *var, const char *value,
+ void *cb)
+{
+ int *max_jobs = cb;
+ if (!strcmp(var, "submodule.fetchjobs"))
+ *max_jobs = parse_submodule_fetchjobs(var, value);
+ return 0;
+}
+
static int update_clone(int argc, const char **argv, const char *prefix)
{
const char *update = NULL;
- int max_jobs = -1;
+ int max_jobs = 1;
struct string_list_item *item;
struct pathspec pathspec;
struct submodule_update_clone suc = SUBMODULE_UPDATE_CLONE_INIT;
};
suc.prefix = prefix;
+ config_from_gitmodules(gitmodules_update_clone_config, &max_jobs);
+ git_config(gitmodules_update_clone_config, &max_jobs);
+
argc = parse_options(argc, argv, prefix, module_update_clone_options,
git_submodule_helper_usage, 0);
gitmodules_config();
git_config(submodule_config, NULL);
- if (max_jobs < 0)
- max_jobs = parallel_submodules();
-
run_processes_parallel(max_jobs,
update_clone_get_next_task,
update_clone_start_failure,
static int push_check(int argc, const char **argv, const char *prefix)
{
struct remote *remote;
+ const char *superproject_head;
+ char *head;
+ int detached_head = 0;
+ struct object_id head_oid;
- if (argc < 2)
- die("submodule--helper push-check requires at least 1 argument");
+ if (argc < 3)
+ die("submodule--helper push-check requires at least 2 arguments");
+
+ /*
+ * The superproject's resolved HEAD ref. If it is "HEAD", the
+ * superproject is in a detached HEAD state; otherwise it is the
+ * resolved head ref.
+ */
+ superproject_head = argv[1];
+ argv++;
+ argc--;
+ /* Get the submodule's head ref and determine if it is detached */
+ head = resolve_refdup("HEAD", 0, head_oid.hash, NULL);
+ if (!head)
+ die(_("Failed to resolve HEAD as a valid ref."));
+ if (!strcmp(head, "HEAD"))
+ detached_head = 1;
/*
* The remote must be configured.
if (rs->pattern || rs->matching)
continue;
- /*
- * LHS must match a single ref
- * NEEDSWORK: add logic to special case 'HEAD' once
- * working with submodules in a detached head state
- * ceases to be the norm.
- */
- if (count_refspec_match(rs->src, local_refs, NULL) != 1)
+ /* LHS must match a single ref */
+ switch (count_refspec_match(rs->src, local_refs, NULL)) {
+ case 1:
+ break;
+ case 0:
+ /*
+ * If LHS matches 'HEAD' then we need to ensure
+ * that it matches the same named branch
+ * checked out in the superproject.
+ */
+ if (!strcmp(rs->src, "HEAD")) {
+ if (!detached_head &&
+ !strcmp(head, superproject_head))
+ break;
+ die("HEAD does not match the named branch in the superproject");
+ }
+ default:
die("src refspec '%s' must name a ref",
rs->src);
+ }
}
free_refspec(refspec_nr, refspec);
}
+ free(head);
return 0;
}
#define GIT_WORK_TREE_ENVIRONMENT "GIT_WORK_TREE"
#define GIT_PREFIX_ENVIRONMENT "GIT_PREFIX"
#define GIT_SUPER_PREFIX_ENVIRONMENT "GIT_INTERNAL_SUPER_PREFIX"
-#define GIT_TOPLEVEL_PREFIX_ENVIRONMENT "GIT_INTERNAL_TOPLEVEL_PREFIX"
#define DEFAULT_GIT_DIR_ENVIRONMENT ".git"
#define DB_ENVIRONMENT "GIT_OBJECT_DIRECTORY"
#define INDEX_ENVIRONMENT "GIT_INDEX_FILE"
#define GITATTRIBUTES_FILE ".gitattributes"
#define INFOATTRIBUTES_FILE "info/attributes"
#define ATTRIBUTE_MACRO_PREFIX "[attr]"
+#define GITMODULES_FILE ".gitmodules"
#define GIT_NOTES_REF_ENVIRONMENT "GIT_NOTES_REF"
#define GIT_NOTES_DEFAULT_REF "refs/notes/commits"
#define GIT_NOTES_DISPLAY_REF_ENVIRONMENT "GIT_NOTES_DISPLAY_REF"
static inline int hashcmp(const unsigned char *sha1, const unsigned char *sha2)
{
- int i;
-
- for (i = 0; i < GIT_SHA1_RAWSZ; i++, sha1++, sha2++) {
- if (*sha1 != *sha2)
- return *sha1 - *sha2;
- }
-
- return 0;
+ return memcmp(sha1, sha2, GIT_SHA1_RAWSZ);
}
static inline int oidcmp(const struct object_id *oid1, const struct object_id *oid2)
char path[FLEX_ARRAY];
} *alt_odb_list;
extern void prepare_alt_odb(void);
-extern void read_info_alternates(const char * relative_base, int depth);
extern char *compute_alternate_path(const char *path, struct strbuf *err);
typedef int alt_odb_fn(struct alternate_object_database *, void *);
extern int foreach_alt_odb(alt_odb_fn, void*);
va_end(ap);
while ((pos = strstr(str, "%1")) != NULL) {
+ char *oldstr = str;
str = realloc(str, st_add(++str_len, 1));
if (!str) {
+ free(oldstr);
warning_errno("realloc failed");
return;
}
return ret;
}
-int git_parse_maybe_bool(const char *value)
+static int git_parse_maybe_bool_text(const char *value)
{
if (!value)
return 1;
return -1;
}
-int git_config_maybe_bool(const char *name, const char *value)
+int git_parse_maybe_bool(const char *value)
{
- int v = git_parse_maybe_bool(value);
+ int v = git_parse_maybe_bool_text(value);
if (0 <= v)
return v;
if (git_parse_int(value, &v))
return -1;
}
+int git_config_maybe_bool(const char *name, const char *value)
+{
+ return git_parse_maybe_bool(value);
+}
+
int git_config_bool_or_int(const char *name, const char *value, int *is_bool)
{
- int v = git_parse_maybe_bool(value);
+ int v = git_parse_maybe_bool_text(value);
if (0 <= v) {
*is_bool = 1;
return v;
{
const char *value;
if (!git_configset_get_value(cs, key, &value)) {
- *dest = git_config_maybe_bool(key, value);
+ *dest = git_parse_maybe_bool(value);
if (*dest == -1)
return -1;
return 0;
return repo_config_get_pathname(the_repository, key, dest);
}
+/*
+ * Note: This function exists solely to maintain backward compatibility with
+ * 'fetch' and 'update_clone' storing configuration in '.gitmodules' and should
+ * NOT be used anywhere else.
+ *
+ * Runs the provided config function on the '.gitmodules' file found in the
+ * working directory.
+ */
+void config_from_gitmodules(config_fn_t fn, void *data)
+{
+ if (the_repository->worktree) {
+ char *file = repo_worktree_path(the_repository, GITMODULES_FILE);
+ git_config_from_file(fn, file, data);
+ free(file);
+ }
+}
+
int git_config_get_expiry(const char *key, const char **output)
{
int ret = git_config_get_string_const(key, output);
extern int repo_config_get_pathname(struct repository *repo,
const char *key, const char **dest);
+/*
+ * Note: This function exists solely to maintain backward compatibility with
+ * 'fetch' and 'update_clone' storing configuration in '.gitmodules' and should
+ * NOT be used anywhere else.
+ *
+ * Runs the provided config function on the '.gitmodules' file found in the
+ * working directory.
+ */
+extern void config_from_gitmodules(config_fn_t fn, void *data);
+
extern int git_config_get_value(const char *key, const char **value);
extern const struct string_list *git_config_get_value_multi(const char *key);
extern void git_config_clear(void);
if test $? -ne 0
then
gettextln "Simple merge did not work, trying automatic merge."
- git-merge-index -o git-merge-one-file -a ||
+ git merge-index -o git-merge-one-file -a ||
OCTOPUS_FAILURE=1
next=$(git write-tree 2>/dev/null)
fi
;;
esac
- src1=$(git-unpack-file $2)
- src2=$(git-unpack-file $3)
+ src1=$(git unpack-file $2)
+ src2=$(git unpack-file $3)
case "$1" in
'')
echo "Added $4 in both, but differently."
- orig=$(git-unpack-file e69de29bb2d1d6434b8b29ae775ad8c2e48c5391)
+ orig=$(git unpack-file e69de29bb2d1d6434b8b29ae775ad8c2e48c5391)
;;
*)
echo "Auto-merging $4"
- orig=$(git-unpack-file $1)
+ orig=$(git unpack-file $1)
;;
esac
exit 0
else
echo "Simple merge failed, trying Automatic merge."
- if git-merge-index -o git-merge-one-file -a
+ if git merge-index -o git-merge-one-file -a
then
exit 0
else
# itself well to recording empty patches. fortunately, cherry-pick
# makes this easy
git cherry-pick ${gpg_sign_opt:+"$gpg_sign_opt"} --allow-empty \
- --right-only "$revisions" \
+ $allow_rerere_autoupdate --right-only "$revisions" \
${restrict_revision+^$restrict_revision}
ret=$?
else
git format-patch -k --stdout --full-index --cherry-pick --right-only \
--src-prefix=a/ --dst-prefix=b/ --no-renames --no-cover-letter \
+ $git_format_patch_opt \
"$revisions" ${restrict_revision+^$restrict_revision} \
>"$GIT_DIR/rebased-patches"
ret=$?
fi
git am $git_am_opt --rebasing --resolvemsg="$resolvemsg" \
+ $allow_rerere_autoupdate \
${gpg_sign_opt:+"$gpg_sign_opt"} <"$GIT_DIR/rebased-patches"
ret=$?
test -d "$rewritten" &&
pick_one_preserving_merges "$@" && return
- output eval git cherry-pick \
+ output eval git cherry-pick $allow_rerere_autoupdate \
${gpg_sign_opt:+$(git rev-parse --sq-quote "$gpg_sign_opt")} \
"$strategy_args" $empty_args $ff "$@"
merge_args="--no-log --no-ff"
if ! do_with_author output eval \
'git merge ${gpg_sign_opt:+"$gpg_sign_opt"} \
- $merge_args $strategy_args -m "$msg_content" $new_parents'
+ $allow_rerere_autoupdate $merge_args \
+ $strategy_args -m "$msg_content" $new_parents'
then
printf "%s\n" "$msg_content" > "$GIT_DIR"/MERGE_MSG
die_with_patch $sha1 "$(eval_gettext "Error redoing merge \$sha1")"
echo "$sha1 $(git rev-parse HEAD^0)" >> "$rewritten_list"
;;
*)
- output eval git cherry-pick \
+ output eval git cherry-pick $allow_rerere_autoupdate \
${gpg_sign_opt:+$(git rev-parse --sq-quote "$gpg_sign_opt")} \
"$strategy_args" "$@" ||
die_with_patch $sha1 "$(eval_gettext "Could not pick \$sha1")"
autostash="$(git config --bool rebase.autostash || echo false)"
fork_point=auto
git_am_opt=
+git_format_patch_opt=
rebase_root=
force_rebase=
allow_rerere_autoupdate=
state_dir="$apply_dir"
fi
+if test -t 2 && test -z "$GIT_QUIET"
+then
+ git_format_patch_opt="$git_format_patch_opt --progress"
+fi
+
if test -z "$rebase_root"
then
case "$#" in
}
untracked_files () {
+ if test "$1" = "-z"
+ then
+ shift
+ z=-z
+ else
+ z=
+ fi
excl_opt=--exclude-standard
test "$untracked" = "all" && excl_opt=
- git ls-files -o -z $excl_opt -- "$@"
+ git ls-files -o $z $excl_opt -- "$@"
}
clear_stash () {
# Untracked files are stored by themselves in a parentless commit, for
# ease of unpacking later.
u_commit=$(
- untracked_files "$@" | (
+ untracked_files -z "$@" | (
GIT_INDEX_FILE="$TMPindex" &&
export GIT_INDEX_FILE &&
rm -f "$TMPindex" &&
if test -z "$patch_mode"
then
+ test "$untracked" = "all" && CLEAN_X_OPTION=-x || CLEAN_X_OPTION=
+ if test -n "$untracked"
+ then
+ git clean --force --quiet -d $CLEAN_X_OPTION -- "$@"
+ fi
+
if test $# != 0
then
git reset -q -- "$@"
else
git reset --hard -q
fi
- test "$untracked" = "all" && CLEAN_X_OPTION=-x || CLEAN_X_OPTION=
- if test -n "$untracked"
- then
- git clean --force --quiet -d $CLEAN_X_OPTION -- "$@"
- fi
if test "$keep_index" = "t" && test -n "$i_tree"
then
if test -n "$u_tree"
then
- GIT_INDEX_FILE="$TMPindex" git-read-tree "$u_tree" &&
+ GIT_INDEX_FILE="$TMPindex" git read-tree "$u_tree" &&
GIT_INDEX_FILE="$TMPindex" git checkout-index --all &&
rm -f "$TMPindex" ||
die "$(gettext "Could not restore untracked files from stash entry")"
test $status != A && test $ignore_config = all && continue
fi
# Also show added or modified modules which are checked out
- GIT_DIR="$sm_path/.git" git-rev-parse --git-dir >/dev/null 2>&1 &&
+ GIT_DIR="$sm_path/.git" git rev-parse --git-dir >/dev/null 2>&1 &&
printf '%s\n' "$sm_path"
done
)
missing_dst=
test $mod_src = 160000 &&
- ! GIT_DIR="$name/.git" git-rev-parse -q --verify $sha1_src^0 >/dev/null &&
+ ! GIT_DIR="$name/.git" git rev-parse -q --verify $sha1_src^0 >/dev/null &&
missing_src=t
test $mod_dst = 160000 &&
- ! GIT_DIR="$name/.git" git-rev-parse -q --verify $sha1_dst^0 >/dev/null &&
+ ! GIT_DIR="$name/.git" git rev-parse -q --verify $sha1_dst^0 >/dev/null &&
missing_dst=t
display_name=$(git submodule--helper relative-path "$name" "$wt_prefix")
{ "fsck-objects", cmd_fsck, RUN_SETUP },
{ "gc", cmd_gc, RUN_SETUP },
{ "get-tar-commit-id", cmd_get_tar_commit_id },
- { "grep", cmd_grep, RUN_SETUP_GENTLY | SUPPORT_SUPER_PREFIX },
+ { "grep", cmd_grep, RUN_SETUP_GENTLY },
{ "hash-object", cmd_hash_object },
{ "help", cmd_help },
{ "index-pack", cmd_index_pack, RUN_SETUP_GENTLY },
return 0;
if (opt->status_only)
- return 0;
+ return opt->unmatch_name_only;
if (opt->unmatch_name_only) {
/* We did not see any hit, so we want to show this */
show_name(opt, gs->name);
case GREP_SOURCE_FILE:
gs->identifier = xstrdup(identifier);
break;
- case GREP_SOURCE_SUBMODULE:
- if (!identifier) {
- gs->identifier = NULL;
- break;
- }
- /*
- * FALL THROUGH
- * If the identifier is non-NULL (in the submodule case) it
- * will be a SHA1 that needs to be copied.
- */
case GREP_SOURCE_OID:
gs->identifier = oiddup(identifier);
break;
switch (gs->type) {
case GREP_SOURCE_FILE:
case GREP_SOURCE_OID:
- case GREP_SOURCE_SUBMODULE:
FREE_AND_NULL(gs->buf);
gs->size = 0;
break;
return grep_source_load_oid(gs);
case GREP_SOURCE_BUF:
return gs->buf ? 0 : -1;
- case GREP_SOURCE_SUBMODULE:
- break;
}
die("BUG: invalid grep_source type to load");
}
GREP_SOURCE_OID,
GREP_SOURCE_FILE,
GREP_SOURCE_BUF,
- GREP_SOURCE_SUBMODULE,
} type;
void *identifier;
const char *cmd;
if (skip_prefix(var, "pager.", &cmd) && !strcmp(cmd, data->cmd)) {
- int b = git_config_maybe_bool(var, value);
+ int b = git_parse_maybe_bool(value);
if (b >= 0)
data->want = b;
else {
=cut
sub get_tz_offset {
- # some systmes don't handle or mishandle %z, so be creative.
+ # some systems don't handle or mishandle %z, so be creative.
my $t = shift || time;
my $gm = timegm(localtime($t));
my $sign = qw( + + - )[ $gm <=> $t ];
delete $ENV{TZ};
}
- my $our_TZ = get_tz_offset();
+ my $our_TZ = get_tz_offset($epoch_in_UTC);
# This converts $epoch_in_UTC into our local timezone.
my ($sec, $min, $hour, $mday, $mon, $year,
return 1;
}
+/*
+ * Return true if refname, which has the specified oid and flags, can
+ * be resolved to an object in the database. If the referred-to object
+ * does not exist, emit a warning and return false.
+ */
+int ref_resolves_to_object(const char *refname,
+ const struct object_id *oid,
+ unsigned int flags)
+{
+ if (flags & REF_ISBROKEN)
+ return 0;
+ if (!has_sha1_file(oid->hash)) {
+ error("%s does not point to a valid object!", refname);
+ return 0;
+ }
+ return 1;
+}
+
char *refs_resolve_refdup(struct ref_store *refs,
const char *refname, int resolve_flags,
unsigned char *sha1, int *flags)
#include "../refs.h"
#include "refs-internal.h"
#include "ref-cache.h"
+#include "packed-backend.h"
#include "../iterator.h"
#include "../dir-iterator.h"
#include "../lockfile.h"
struct object_id old_oid;
};
-/*
- * Return true if refname, which has the specified oid and flags, can
- * be resolved to an object in the database. If the referred-to object
- * does not exist, emit a warning and return false.
- */
-static int ref_resolves_to_object(const char *refname,
- const struct object_id *oid,
- unsigned int flags)
-{
- if (flags & REF_ISBROKEN)
- return 0;
- if (!has_sha1_file(oid->hash)) {
- error("%s does not point to a valid object!", refname);
- return 0;
- }
- return 1;
-}
-
-struct packed_ref_cache {
- struct ref_cache *cache;
-
- /*
- * Count of references to the data structure in this instance,
- * including the pointer from files_ref_store::packed if any.
- * The data will not be freed as long as the reference count
- * is nonzero.
- */
- unsigned int referrers;
-
- /* The metadata from when this packed-refs cache was read */
- struct stat_validity validity;
-};
-
/*
* Future: need to be in "struct repository"
* when doing a full libification.
char *gitdir;
char *gitcommondir;
- char *packed_refs_path;
struct ref_cache *loose;
- struct packed_ref_cache *packed;
- /*
- * Lock used for the "packed-refs" file. Note that this (and
- * thus the enclosing `files_ref_store`) must not be freed.
- */
- struct lock_file packed_refs_lock;
+ struct ref_store *packed_ref_store;
};
-/*
- * Increment the reference count of *packed_refs.
- */
-static void acquire_packed_ref_cache(struct packed_ref_cache *packed_refs)
-{
- packed_refs->referrers++;
-}
-
-/*
- * Decrease the reference count of *packed_refs. If it goes to zero,
- * free *packed_refs and return true; otherwise return false.
- */
-static int release_packed_ref_cache(struct packed_ref_cache *packed_refs)
-{
- if (!--packed_refs->referrers) {
- free_ref_cache(packed_refs->cache);
- stat_validity_clear(&packed_refs->validity);
- free(packed_refs);
- return 1;
- } else {
- return 0;
- }
-}
-
-static void clear_packed_ref_cache(struct files_ref_store *refs)
-{
- if (refs->packed) {
- struct packed_ref_cache *packed_refs = refs->packed;
-
- if (is_lock_file_locked(&refs->packed_refs_lock))
- die("BUG: packed-ref cache cleared while locked");
- refs->packed = NULL;
- release_packed_ref_cache(packed_refs);
- }
-}
-
static void clear_loose_ref_cache(struct files_ref_store *refs)
{
if (refs->loose) {
get_common_dir_noenv(&sb, gitdir);
refs->gitcommondir = strbuf_detach(&sb, NULL);
strbuf_addf(&sb, "%s/packed-refs", refs->gitcommondir);
- refs->packed_refs_path = strbuf_detach(&sb, NULL);
+ refs->packed_ref_store = packed_ref_store_create(sb.buf, flags);
+ strbuf_release(&sb);
return ref_store;
}
return refs;
}
-/* The length of a peeled reference line in packed-refs, including EOL: */
-#define PEELED_LINE_LENGTH 42
-
-/*
- * The packed-refs header line that we write out. Perhaps other
- * traits will be added later. The trailing space is required.
- */
-static const char PACKED_REFS_HEADER[] =
- "# pack-refs with: peeled fully-peeled \n";
-
-/*
- * Parse one line from a packed-refs file. Write the SHA1 to sha1.
- * Return a pointer to the refname within the line (null-terminated),
- * or NULL if there was a problem.
- */
-static const char *parse_ref_line(struct strbuf *line, struct object_id *oid)
-{
- const char *ref;
-
- if (parse_oid_hex(line->buf, oid, &ref) < 0)
- return NULL;
- if (!isspace(*ref++))
- return NULL;
-
- if (isspace(*ref))
- return NULL;
-
- if (line->buf[line->len - 1] != '\n')
- return NULL;
- line->buf[--line->len] = 0;
-
- return ref;
-}
-
-/*
- * Read from `packed_refs_file` into a newly-allocated
- * `packed_ref_cache` and return it. The return value will already
- * have its reference count incremented.
- *
- * A comment line of the form "# pack-refs with: " may contain zero or
- * more traits. We interpret the traits as follows:
- *
- * No traits:
- *
- * Probably no references are peeled. But if the file contains a
- * peeled value for a reference, we will use it.
- *
- * peeled:
- *
- * References under "refs/tags/", if they *can* be peeled, *are*
- * peeled in this file. References outside of "refs/tags/" are
- * probably not peeled even if they could have been, but if we find
- * a peeled value for such a reference we will use it.
- *
- * fully-peeled:
- *
- * All references in the file that can be peeled are peeled.
- * Inversely (and this is more important), any references in the
- * file for which no peeled value is recorded is not peelable. This
- * trait should typically be written alongside "peeled" for
- * compatibility with older clients, but we do not require it
- * (i.e., "peeled" is a no-op if "fully-peeled" is set).
- */
-static struct packed_ref_cache *read_packed_refs(const char *packed_refs_file)
-{
- FILE *f;
- struct packed_ref_cache *packed_refs = xcalloc(1, sizeof(*packed_refs));
- struct ref_entry *last = NULL;
- struct strbuf line = STRBUF_INIT;
- enum { PEELED_NONE, PEELED_TAGS, PEELED_FULLY } peeled = PEELED_NONE;
- struct ref_dir *dir;
-
- acquire_packed_ref_cache(packed_refs);
- packed_refs->cache = create_ref_cache(NULL, NULL);
- packed_refs->cache->root->flag &= ~REF_INCOMPLETE;
-
- f = fopen(packed_refs_file, "r");
- if (!f) {
- if (errno == ENOENT) {
- /*
- * This is OK; it just means that no
- * "packed-refs" file has been written yet,
- * which is equivalent to it being empty.
- */
- return packed_refs;
- } else {
- die_errno("couldn't read %s", packed_refs_file);
- }
- }
-
- stat_validity_update(&packed_refs->validity, fileno(f));
-
- dir = get_ref_dir(packed_refs->cache->root);
- while (strbuf_getwholeline(&line, f, '\n') != EOF) {
- struct object_id oid;
- const char *refname;
- const char *traits;
-
- if (skip_prefix(line.buf, "# pack-refs with:", &traits)) {
- if (strstr(traits, " fully-peeled "))
- peeled = PEELED_FULLY;
- else if (strstr(traits, " peeled "))
- peeled = PEELED_TAGS;
- /* perhaps other traits later as well */
- continue;
- }
-
- refname = parse_ref_line(&line, &oid);
- if (refname) {
- int flag = REF_ISPACKED;
-
- if (check_refname_format(refname, REFNAME_ALLOW_ONELEVEL)) {
- if (!refname_is_safe(refname))
- die("packed refname is dangerous: %s", refname);
- oidclr(&oid);
- flag |= REF_BAD_NAME | REF_ISBROKEN;
- }
- last = create_ref_entry(refname, &oid, flag);
- if (peeled == PEELED_FULLY ||
- (peeled == PEELED_TAGS && starts_with(refname, "refs/tags/")))
- last->flag |= REF_KNOWS_PEELED;
- add_ref_entry(dir, last);
- continue;
- }
- if (last &&
- line.buf[0] == '^' &&
- line.len == PEELED_LINE_LENGTH &&
- line.buf[PEELED_LINE_LENGTH - 1] == '\n' &&
- !get_oid_hex(line.buf + 1, &oid)) {
- oidcpy(&last->u.value.peeled, &oid);
- /*
- * Regardless of what the file header said,
- * we definitely know the value of *this*
- * reference:
- */
- last->flag |= REF_KNOWS_PEELED;
- }
- }
-
- fclose(f);
- strbuf_release(&line);
-
- return packed_refs;
-}
-
-static const char *files_packed_refs_path(struct files_ref_store *refs)
-{
- return refs->packed_refs_path;
-}
-
static void files_reflog_path(struct files_ref_store *refs,
struct strbuf *sb,
const char *refname)
}
}
-/*
- * Check that the packed refs cache (if any) still reflects the
- * contents of the file. If not, clear the cache.
- */
-static void validate_packed_ref_cache(struct files_ref_store *refs)
-{
- if (refs->packed &&
- !stat_validity_check(&refs->packed->validity,
- files_packed_refs_path(refs)))
- clear_packed_ref_cache(refs);
-}
-
-/*
- * Get the packed_ref_cache for the specified files_ref_store,
- * creating and populating it if it hasn't been read before or if the
- * file has been changed (according to its `validity` field) since it
- * was last read. On the other hand, if we hold the lock, then assume
- * that the file hasn't been changed out from under us, so skip the
- * extra `stat()` call in `stat_validity_check()`.
- */
-static struct packed_ref_cache *get_packed_ref_cache(struct files_ref_store *refs)
-{
- const char *packed_refs_file = files_packed_refs_path(refs);
-
- if (!is_lock_file_locked(&refs->packed_refs_lock))
- validate_packed_ref_cache(refs);
-
- if (!refs->packed)
- refs->packed = read_packed_refs(packed_refs_file);
-
- return refs->packed;
-}
-
-static struct ref_dir *get_packed_ref_dir(struct packed_ref_cache *packed_ref_cache)
-{
- return get_ref_dir(packed_ref_cache->cache->root);
-}
-
-static struct ref_dir *get_packed_refs(struct files_ref_store *refs)
-{
- return get_packed_ref_dir(get_packed_ref_cache(refs));
-}
-
-/*
- * Add a reference to the in-memory packed reference cache. This may
- * only be called while the packed-refs file is locked (see
- * lock_packed_refs()). To actually write the packed-refs file, call
- * commit_packed_refs().
- */
-static void add_packed_ref(struct files_ref_store *refs,
- const char *refname, const struct object_id *oid)
-{
- struct packed_ref_cache *packed_ref_cache = get_packed_ref_cache(refs);
-
- if (!is_lock_file_locked(&refs->packed_refs_lock))
- die("BUG: packed refs not locked");
-
- if (check_refname_format(refname, REFNAME_ALLOW_ONELEVEL))
- die("Reference has invalid format: '%s'", refname);
-
- add_ref_entry(get_packed_ref_dir(packed_ref_cache),
- create_ref_entry(refname, oid, REF_ISPACKED));
-}
-
/*
* Read the loose references from the namespace dirname into dir
* (without recursing). dirname must end with '/'. dir must be the
return refs->loose;
}
-/*
- * Return the ref_entry for the given refname from the packed
- * references. If it does not exist, return NULL.
- */
-static struct ref_entry *get_packed_ref(struct files_ref_store *refs,
- const char *refname)
-{
- return find_ref_entry(get_packed_refs(refs), refname);
-}
-
-/*
- * A loose ref file doesn't exist; check for a packed ref.
- */
-static int resolve_packed_ref(struct files_ref_store *refs,
- const char *refname,
- unsigned char *sha1, unsigned int *flags)
-{
- struct ref_entry *entry;
-
- /*
- * The loose reference file does not exist; check for a packed
- * reference.
- */
- entry = get_packed_ref(refs, refname);
- if (entry) {
- hashcpy(sha1, entry->u.value.oid.hash);
- *flags |= REF_ISPACKED;
- return 0;
- }
- /* refname is not a packed reference. */
- return -1;
-}
-
static int files_read_raw_ref(struct ref_store *ref_store,
const char *refname, unsigned char *sha1,
struct strbuf *referent, unsigned int *type)
if (lstat(path, &st) < 0) {
if (errno != ENOENT)
goto out;
- if (resolve_packed_ref(refs, refname, sha1, type)) {
+ if (refs_read_raw_ref(refs->packed_ref_store, refname,
+ sha1, referent, type)) {
errno = ENOENT;
goto out;
}
* ref is supposed to be, there could still be a
* packed ref:
*/
- if (resolve_packed_ref(refs, refname, sha1, type)) {
+ if (refs_read_raw_ref(refs->packed_ref_store, refname,
+ sha1, referent, type)) {
errno = EISDIR;
goto out;
}
/*
* If the ref did not exist and we are creating it,
- * make sure there is no existing ref that conflicts
- * with refname:
+ * make sure there is no existing packed ref that
+ * conflicts with refname:
*/
if (refs_verify_refname_available(
- &refs->base, refname,
+ refs->packed_ref_store, refname,
extras, skip, err))
goto error_return;
}
* be expensive and (b) loose references anyway usually do not
* have REF_KNOWS_PEELED.
*/
- if (flag & REF_ISPACKED) {
- struct ref_entry *r = get_packed_ref(refs, refname);
- if (r) {
- if (peel_entry(r, 0))
- return -1;
- hashcpy(sha1, r->u.value.peeled.hash);
- return 0;
- }
- }
+ if (flag & REF_ISPACKED &&
+ !refs_peel_ref(refs->packed_ref_store, refname, sha1))
+ return 0;
return peel_object(base, sha1);
}
struct files_ref_iterator {
struct ref_iterator base;
- struct packed_ref_cache *packed_ref_cache;
struct ref_iterator *iter0;
unsigned int flags;
};
if (iter->iter0)
ok = ref_iterator_abort(iter->iter0);
- release_packed_ref_cache(iter->packed_ref_cache);
base_ref_iterator_free(ref_iterator);
return ok;
}
* (If they've already been read, that's OK; we only need to
* guarantee that they're read before the packed refs, not
* *how much* before.) After that, we call
- * get_packed_ref_cache(), which internally checks whether the
- * packed-ref cache is up to date with what is on disk, and
- * re-reads it if not.
+ * packed_ref_iterator_begin(), which internally checks
+ * whether the packed-ref cache is up to date with what is on
+ * disk, and re-reads it if not.
*/
loose_iter = cache_ref_iterator_begin(get_loose_ref_cache(refs),
prefix, 1);
- iter->packed_ref_cache = get_packed_ref_cache(refs);
- acquire_packed_ref_cache(iter->packed_ref_cache);
- packed_iter = cache_ref_iterator_begin(iter->packed_ref_cache->cache,
- prefix, 0);
+ /*
+ * The packed-refs file might contain broken references, for
+ * example an old version of a reference that points at an
+ * object that has since been garbage-collected. This is OK as
+ * long as there is a corresponding loose reference that
+ * overrides it, and we don't want to emit an error message in
+ * this case. So ask the packed_ref_store for all of its
+ * references, and (if needed) do our own check for broken
+ * ones in files_ref_iterator_advance(), after we have merged
+ * the packed and loose references.
+ */
+ packed_iter = refs_ref_iterator_begin(
+ refs->packed_ref_store, prefix, 0,
+ DO_FOR_EACH_INCLUDE_BROKEN);
iter->iter0 = overlay_ref_iterator_begin(loose_iter, packed_iter);
iter->flags = flags;
* our refname.
*/
if (is_null_oid(&lock->old_oid) &&
- refs_verify_refname_available(&refs->base, refname,
+ refs_verify_refname_available(refs->packed_ref_store, refname,
extras, skip, err)) {
last_errno = ENOTDIR;
goto error_return;
return lock;
}
-/*
- * Write an entry to the packed-refs file for the specified refname.
- * If peeled is non-NULL, write it as the entry's peeled value.
- */
-static void write_packed_entry(FILE *fh, const char *refname,
- const unsigned char *sha1,
- const unsigned char *peeled)
-{
- fprintf_or_die(fh, "%s %s\n", sha1_to_hex(sha1), refname);
- if (peeled)
- fprintf_or_die(fh, "^%s\n", sha1_to_hex(peeled));
-}
-
-/*
- * Lock the packed-refs file for writing. Flags is passed to
- * hold_lock_file_for_update(). Return 0 on success. On errors, set
- * errno appropriately and return a nonzero value.
- */
-static int lock_packed_refs(struct files_ref_store *refs, int flags)
-{
- static int timeout_configured = 0;
- static int timeout_value = 1000;
- struct packed_ref_cache *packed_ref_cache;
-
- files_assert_main_repository(refs, "lock_packed_refs");
-
- if (!timeout_configured) {
- git_config_get_int("core.packedrefstimeout", &timeout_value);
- timeout_configured = 1;
- }
-
- if (hold_lock_file_for_update_timeout(
- &refs->packed_refs_lock, files_packed_refs_path(refs),
- flags, timeout_value) < 0)
- return -1;
-
- /*
- * Now that we hold the `packed-refs` lock, make sure that our
- * cache matches the current version of the file. Normally
- * `get_packed_ref_cache()` does that for us, but that
- * function assumes that when the file is locked, any existing
- * cache is still valid. We've just locked the file, but it
- * might have changed the moment *before* we locked it.
- */
- validate_packed_ref_cache(refs);
-
- packed_ref_cache = get_packed_ref_cache(refs);
- /* Increment the reference count to prevent it from being freed: */
- acquire_packed_ref_cache(packed_ref_cache);
- return 0;
-}
-
-/*
- * Write the current version of the packed refs cache from memory to
- * disk. The packed-refs file must already be locked for writing (see
- * lock_packed_refs()). Return zero on success. On errors, set errno
- * and return a nonzero value
- */
-static int commit_packed_refs(struct files_ref_store *refs)
-{
- struct packed_ref_cache *packed_ref_cache =
- get_packed_ref_cache(refs);
- int ok, error = 0;
- int save_errno = 0;
- FILE *out;
- struct ref_iterator *iter;
-
- files_assert_main_repository(refs, "commit_packed_refs");
-
- if (!is_lock_file_locked(&refs->packed_refs_lock))
- die("BUG: packed-refs not locked");
-
- out = fdopen_lock_file(&refs->packed_refs_lock, "w");
- if (!out)
- die_errno("unable to fdopen packed-refs descriptor");
-
- fprintf_or_die(out, "%s", PACKED_REFS_HEADER);
-
- iter = cache_ref_iterator_begin(packed_ref_cache->cache, NULL, 0);
- while ((ok = ref_iterator_advance(iter)) == ITER_OK) {
- struct object_id peeled;
- int peel_error = ref_iterator_peel(iter, &peeled);
-
- write_packed_entry(out, iter->refname, iter->oid->hash,
- peel_error ? NULL : peeled.hash);
- }
-
- if (ok != ITER_DONE)
- die("error while iterating over references");
-
- if (commit_lock_file(&refs->packed_refs_lock)) {
- save_errno = errno;
- error = -1;
- }
- release_packed_ref_cache(packed_ref_cache);
- errno = save_errno;
- return error;
-}
-
-/*
- * Rollback the lockfile for the packed-refs file, and discard the
- * in-memory packed reference cache. (The packed-refs file will be
- * read anew if it is needed again after this function is called.)
- */
-static void rollback_packed_refs(struct files_ref_store *refs)
-{
- struct packed_ref_cache *packed_ref_cache =
- get_packed_ref_cache(refs);
-
- files_assert_main_repository(refs, "rollback_packed_refs");
-
- if (!is_lock_file_locked(&refs->packed_refs_lock))
- die("BUG: packed-refs not locked");
- rollback_lock_file(&refs->packed_refs_lock);
- release_packed_ref_cache(packed_ref_cache);
- clear_packed_ref_cache(refs);
-}
-
struct ref_to_prune {
struct ref_to_prune *next;
unsigned char sha1[20];
files_downcast(ref_store, REF_STORE_WRITE | REF_STORE_ODB,
"pack_refs");
struct ref_iterator *iter;
- struct ref_dir *packed_refs;
int ok;
struct ref_to_prune *refs_to_prune = NULL;
+ struct strbuf err = STRBUF_INIT;
- lock_packed_refs(refs, LOCK_DIE_ON_ERROR);
- packed_refs = get_packed_refs(refs);
+ packed_refs_lock(refs->packed_ref_store, LOCK_DIE_ON_ERROR, &err);
iter = cache_ref_iterator_begin(get_loose_ref_cache(refs), NULL, 0);
while ((ok = ref_iterator_advance(iter)) == ITER_OK) {
* in the packed ref cache. If the reference should be
* pruned, also add it to refs_to_prune.
*/
- struct ref_entry *packed_entry;
-
if (!should_pack_ref(iter->refname, iter->oid, iter->flags,
flags))
continue;
* we don't copy the peeled status, because we want it
* to be re-peeled.
*/
- packed_entry = find_ref_entry(packed_refs, iter->refname);
- if (packed_entry) {
- /* Overwrite existing packed entry with info from loose entry */
- packed_entry->flag = REF_ISPACKED;
- oidcpy(&packed_entry->u.value.oid, iter->oid);
- } else {
- packed_entry = create_ref_entry(iter->refname, iter->oid,
- REF_ISPACKED);
- add_ref_entry(packed_refs, packed_entry);
- }
- oidclr(&packed_entry->u.value.peeled);
+ add_packed_ref(refs->packed_ref_store, iter->refname, iter->oid);
/* Schedule the loose reference for pruning if requested. */
if ((flags & PACK_REFS_PRUNE)) {
if (ok != ITER_DONE)
die("error while iterating over references");
- if (commit_packed_refs(refs))
- die_errno("unable to overwrite old ref-pack file");
+ if (commit_packed_refs(refs->packed_ref_store, &err))
+ die("unable to overwrite old ref-pack file: %s", err.buf);
+ packed_refs_unlock(refs->packed_ref_store);
prune_refs(refs, refs_to_prune);
+ strbuf_release(&err);
return 0;
}
-/*
- * Rewrite the packed-refs file, omitting any refs listed in
- * 'refnames'. On error, leave packed-refs unchanged, write an error
- * message to 'err', and return a nonzero value.
- *
- * The refs in 'refnames' needn't be sorted. `err` must not be NULL.
- */
-static int repack_without_refs(struct files_ref_store *refs,
- struct string_list *refnames, struct strbuf *err)
-{
- struct ref_dir *packed;
- struct string_list_item *refname;
- int ret, needs_repacking = 0, removed = 0;
-
- files_assert_main_repository(refs, "repack_without_refs");
- assert(err);
-
- /* Look for a packed ref */
- for_each_string_list_item(refname, refnames) {
- if (get_packed_ref(refs, refname->string)) {
- needs_repacking = 1;
- break;
- }
- }
-
- /* Avoid locking if we have nothing to do */
- if (!needs_repacking)
- return 0; /* no refname exists in packed refs */
-
- if (lock_packed_refs(refs, 0)) {
- unable_to_lock_message(files_packed_refs_path(refs), errno, err);
- return -1;
- }
- packed = get_packed_refs(refs);
-
- /* Remove refnames from the cache */
- for_each_string_list_item(refname, refnames)
- if (remove_entry_from_dir(packed, refname->string) != -1)
- removed = 1;
- if (!removed) {
- /*
- * All packed entries disappeared while we were
- * acquiring the lock.
- */
- rollback_packed_refs(refs);
- return 0;
- }
-
- /* Write what remains */
- ret = commit_packed_refs(refs);
- if (ret)
- strbuf_addf(err, "unable to overwrite old ref-pack file: %s",
- strerror(errno));
- return ret;
-}
-
static int files_delete_refs(struct ref_store *ref_store, const char *msg,
struct string_list *refnames, unsigned int flags)
{
if (!refnames->nr)
return 0;
- result = repack_without_refs(refs, refnames, &err);
- if (result) {
- /*
- * If we failed to rewrite the packed-refs file, then
- * it is unsafe to try to remove loose refs, because
- * doing so might expose an obsolete packed value for
- * a reference that might even point at an object that
- * has been garbage collected.
- */
- if (refnames->nr == 1)
- error(_("could not delete reference %s: %s"),
- refnames->items[0].string, err.buf);
- else
- error(_("could not delete references: %s"), err.buf);
+ if (packed_refs_lock(refs->packed_ref_store, 0, &err))
+ goto error;
- goto out;
+ if (repack_without_refs(refs->packed_ref_store, refnames, &err)) {
+ packed_refs_unlock(refs->packed_ref_store);
+ goto error;
}
+ packed_refs_unlock(refs->packed_ref_store);
+
for (i = 0; i < refnames->nr; i++) {
const char *refname = refnames->items[i].string;
result |= error(_("could not remove reference %s"), refname);
}
-out:
strbuf_release(&err);
return result;
+
+error:
+ /*
+ * If we failed to rewrite the packed-refs file, then it is
+ * unsafe to try to remove loose refs, because doing so might
+ * expose an obsolete packed value for a reference that might
+ * even point at an object that has been garbage collected.
+ */
+ if (refnames->nr == 1)
+ error(_("could not delete reference %s: %s"),
+ refnames->items[0].string, err.buf);
+ else
+ error(_("could not delete references: %s"), err.buf);
+
+ strbuf_release(&err);
+ return -1;
}
/*
}
}
- if (repack_without_refs(refs, &refs_to_delete, err)) {
+ if (packed_refs_lock(refs->packed_ref_store, 0, err)) {
ret = TRANSACTION_GENERIC_ERROR;
goto cleanup;
}
+ if (repack_without_refs(refs->packed_ref_store, &refs_to_delete, err)) {
+ ret = TRANSACTION_GENERIC_ERROR;
+ packed_refs_unlock(refs->packed_ref_store);
+ goto cleanup;
+ }
+
+ packed_refs_unlock(refs->packed_ref_store);
+
/* Delete the reflogs of any references that were deleted: */
for_each_string_list_item(ref_to_delete, &refs_to_delete) {
strbuf_reset(&sb);
}
}
- if (lock_packed_refs(refs, 0)) {
- strbuf_addf(err, "unable to lock packed-refs file: %s",
- strerror(errno));
+ if (packed_refs_lock(refs->packed_ref_store, 0, err)) {
ret = TRANSACTION_GENERIC_ERROR;
goto cleanup;
}
if ((update->flags & REF_HAVE_NEW) &&
!is_null_oid(&update->new_oid))
- add_packed_ref(refs, update->refname,
+ add_packed_ref(refs->packed_ref_store, update->refname,
&update->new_oid);
}
- if (commit_packed_refs(refs)) {
- strbuf_addf(err, "unable to commit packed-refs file: %s",
- strerror(errno));
+ if (commit_packed_refs(refs->packed_ref_store, err)) {
ret = TRANSACTION_GENERIC_ERROR;
goto cleanup;
}
cleanup:
+ packed_refs_unlock(refs->packed_ref_store);
transaction->state = REF_TRANSACTION_CLOSED;
string_list_clear(&affected_refnames, 0);
return ret;
--- /dev/null
+#include "../cache.h"
+#include "../config.h"
+#include "../refs.h"
+#include "refs-internal.h"
+#include "ref-cache.h"
+#include "packed-backend.h"
+#include "../iterator.h"
+#include "../lockfile.h"
+
+struct packed_ref_cache {
+ struct ref_cache *cache;
+
+ /*
+ * Count of references to the data structure in this instance,
+ * including the pointer from packed_ref_store::cache if any.
+ * The data will not be freed as long as the reference count
+ * is nonzero.
+ */
+ unsigned int referrers;
+
+ /* The metadata from when this packed-refs cache was read */
+ struct stat_validity validity;
+};
+
+/*
+ * Increment the reference count of *packed_refs.
+ */
+static void acquire_packed_ref_cache(struct packed_ref_cache *packed_refs)
+{
+ packed_refs->referrers++;
+}
+
+/*
+ * Decrease the reference count of *packed_refs. If it goes to zero,
+ * free *packed_refs and return true; otherwise return false.
+ */
+static int release_packed_ref_cache(struct packed_ref_cache *packed_refs)
+{
+ if (!--packed_refs->referrers) {
+ free_ref_cache(packed_refs->cache);
+ stat_validity_clear(&packed_refs->validity);
+ free(packed_refs);
+ return 1;
+ } else {
+ return 0;
+ }
+}
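The acquire/release pair above is a plain reference count: the store's own cache pointer holds one reference, and anything that needs the cache to survive a clear_packed_ref_cache() (such as the iterator added later in this file) takes another. A minimal standalone sketch of the same idiom, using a made-up struct cache in place of packed_ref_cache:

	#include <stdio.h>
	#include <stdlib.h>

	struct cache { unsigned int referrers; };

	static void acquire(struct cache *c) { c->referrers++; }

	static int release(struct cache *c)
	{
		if (!--c->referrers) {
			free(c);
			return 1;	/* last user; freed */
		}
		return 0;		/* still referenced elsewhere */
	}

	int main(void)
	{
		struct cache *c = calloc(1, sizeof(*c));

		acquire(c);	/* the store's own reference */
		acquire(c);	/* an iterator borrows the cache */
		release(c);	/* the store drops it; the iterator still holds one */
		printf("freed: %d\n", release(c));	/* last release frees it */
		return 0;
	}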
+
+/*
+ * A container for `packed-refs`-related data. It is not (yet) a
+ * `ref_store`.
+ */
+struct packed_ref_store {
+ struct ref_store base;
+
+ unsigned int store_flags;
+
+ /* The path of the "packed-refs" file: */
+ char *path;
+
+ /*
+ * A cache of the values read from the `packed-refs` file, if
+ * it might still be current; otherwise, NULL.
+ */
+ struct packed_ref_cache *cache;
+
+ /*
+ * Lock used for the "packed-refs" file. Note that this (and
+ * thus the enclosing `packed_ref_store`) must not be freed.
+ */
+ struct lock_file lock;
+
+ /*
+ * Temporary file used when rewriting new contents to the
+ * "packed-refs" file. Note that this (and thus the enclosing
+ * `packed_ref_store`) must not be freed.
+ */
+ struct tempfile tempfile;
+};
+
+struct ref_store *packed_ref_store_create(const char *path,
+ unsigned int store_flags)
+{
+ struct packed_ref_store *refs = xcalloc(1, sizeof(*refs));
+ struct ref_store *ref_store = (struct ref_store *)refs;
+
+ base_ref_store_init(ref_store, &refs_be_packed);
+ refs->store_flags = store_flags;
+
+ refs->path = xstrdup(path);
+ return ref_store;
+}
+
+/*
+ * Die if refs is not the main ref store. caller is used in any
+ * necessary error messages.
+ */
+static void packed_assert_main_repository(struct packed_ref_store *refs,
+ const char *caller)
+{
+ if (refs->store_flags & REF_STORE_MAIN)
+ return;
+
+ die("BUG: operation %s only allowed for main ref store", caller);
+}
+
+/*
+ * Downcast `ref_store` to `packed_ref_store`. Die if `ref_store` is
+ * not a `packed_ref_store`. Also die if `packed_ref_store` doesn't
+ * support at least the flags specified in `required_flags`. `caller`
+ * is used in any necessary error messages.
+ */
+static struct packed_ref_store *packed_downcast(struct ref_store *ref_store,
+ unsigned int required_flags,
+ const char *caller)
+{
+ struct packed_ref_store *refs;
+
+ if (ref_store->be != &refs_be_packed)
+ die("BUG: ref_store is type \"%s\" not \"packed\" in %s",
+ ref_store->be->name, caller);
+
+ refs = (struct packed_ref_store *)ref_store;
+
+ if ((refs->store_flags & required_flags) != required_flags)
+ die("BUG: unallowed operation (%s), requires %x, has %x\n",
+ caller, required_flags, refs->store_flags);
+
+ return refs;
+}
+
+static void clear_packed_ref_cache(struct packed_ref_store *refs)
+{
+ if (refs->cache) {
+ struct packed_ref_cache *cache = refs->cache;
+
+ refs->cache = NULL;
+ release_packed_ref_cache(cache);
+ }
+}
+
+/* The length of a peeled reference line in packed-refs, including EOL: */
+#define PEELED_LINE_LENGTH 42
+
+/*
+ * Parse one line from a packed-refs file. Write the object name to oid.
+ * Return a pointer to the refname within the line (null-terminated),
+ * or NULL if there was a problem.
+ */
+static const char *parse_ref_line(struct strbuf *line, struct object_id *oid)
+{
+ const char *ref;
+
+ if (parse_oid_hex(line->buf, oid, &ref) < 0)
+ return NULL;
+ if (!isspace(*ref++))
+ return NULL;
+
+ if (isspace(*ref))
+ return NULL;
+
+ if (line->buf[line->len - 1] != '\n')
+ return NULL;
+ line->buf[--line->len] = 0;
+
+ return ref;
+}
+
+/*
+ * Read from `packed_refs_file` into a newly-allocated
+ * `packed_ref_cache` and return it. The return value will already
+ * have its reference count incremented.
+ *
+ * A comment line of the form "# pack-refs with: " may contain zero or
+ * more traits. We interpret the traits as follows:
+ *
+ * No traits:
+ *
+ * Probably no references are peeled. But if the file contains a
+ * peeled value for a reference, we will use it.
+ *
+ * peeled:
+ *
+ * References under "refs/tags/", if they *can* be peeled, *are*
+ * peeled in this file. References outside of "refs/tags/" are
+ * probably not peeled even if they could have been, but if we find
+ * a peeled value for such a reference we will use it.
+ *
+ * fully-peeled:
+ *
+ * All references in the file that can be peeled are peeled.
+ * Inversely (and this is more important), any reference in the
+ * file for which no peeled value is recorded is not peelable. This
+ * trait should typically be written alongside "peeled" for
+ * compatibility with older clients, but we do not require it
+ * (i.e., "peeled" is a no-op if "fully-peeled" is set).
+ */
+static struct packed_ref_cache *read_packed_refs(const char *packed_refs_file)
+{
+ FILE *f;
+ struct packed_ref_cache *packed_refs = xcalloc(1, sizeof(*packed_refs));
+ struct ref_entry *last = NULL;
+ struct strbuf line = STRBUF_INIT;
+ enum { PEELED_NONE, PEELED_TAGS, PEELED_FULLY } peeled = PEELED_NONE;
+ struct ref_dir *dir;
+
+ acquire_packed_ref_cache(packed_refs);
+ packed_refs->cache = create_ref_cache(NULL, NULL);
+ packed_refs->cache->root->flag &= ~REF_INCOMPLETE;
+
+ f = fopen(packed_refs_file, "r");
+ if (!f) {
+ if (errno == ENOENT) {
+ /*
+ * This is OK; it just means that no
+ * "packed-refs" file has been written yet,
+ * which is equivalent to it being empty.
+ */
+ return packed_refs;
+ } else {
+ die_errno("couldn't read %s", packed_refs_file);
+ }
+ }
+
+ stat_validity_update(&packed_refs->validity, fileno(f));
+
+ dir = get_ref_dir(packed_refs->cache->root);
+ while (strbuf_getwholeline(&line, f, '\n') != EOF) {
+ struct object_id oid;
+ const char *refname;
+ const char *traits;
+
+ if (!line.len || line.buf[line.len - 1] != '\n')
+ die("unterminated line in %s: %s", packed_refs_file, line.buf);
+
+ if (skip_prefix(line.buf, "# pack-refs with:", &traits)) {
+ if (strstr(traits, " fully-peeled "))
+ peeled = PEELED_FULLY;
+ else if (strstr(traits, " peeled "))
+ peeled = PEELED_TAGS;
+ /* perhaps other traits later as well */
+ continue;
+ }
+
+ refname = parse_ref_line(&line, &oid);
+ if (refname) {
+ int flag = REF_ISPACKED;
+
+ if (check_refname_format(refname, REFNAME_ALLOW_ONELEVEL)) {
+ if (!refname_is_safe(refname))
+ die("packed refname is dangerous: %s", refname);
+ oidclr(&oid);
+ flag |= REF_BAD_NAME | REF_ISBROKEN;
+ }
+ last = create_ref_entry(refname, &oid, flag);
+ if (peeled == PEELED_FULLY ||
+ (peeled == PEELED_TAGS && starts_with(refname, "refs/tags/")))
+ last->flag |= REF_KNOWS_PEELED;
+ add_ref_entry(dir, last);
+ } else if (last &&
+ line.buf[0] == '^' &&
+ line.len == PEELED_LINE_LENGTH &&
+ line.buf[PEELED_LINE_LENGTH - 1] == '\n' &&
+ !get_oid_hex(line.buf + 1, &oid)) {
+ oidcpy(&last->u.value.peeled, &oid);
+ /*
+ * Regardless of what the file header said,
+ * we definitely know the value of *this*
+ * reference:
+ */
+ last->flag |= REF_KNOWS_PEELED;
+ } else {
+ strbuf_setlen(&line, line.len - 1);
+ die("unexpected line in %s: %s", packed_refs_file, line.buf);
+ }
+ }
+
+ fclose(f);
+ strbuf_release(&line);
+
+ return packed_refs;
+}
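To make the traits above concrete, here is a hypothetical packed-refs file of the kind this function parses (the object names are invented placeholders). The header ends with a trailing space, each ref line is "<object name> SP <refname> LF", and a line starting with "^" records the peeled value of the tag named on the line directly above it (PEELED_LINE_LENGTH bytes including the newline):

	# pack-refs with: peeled fully-peeled 
	3d1ae0ae4c1f9b0c171c51b9cf2a54e8aee6c2a5 refs/heads/master
	7a0c51f0e4c2a3b9d8e5f6a7b8c9d0e1f2a3b4c5 refs/tags/v1.0.0
	^1f2e3d4c5b6a79880706050403020100ffeeddcc

With the "fully-peeled" trait present, the absence of a "^" line under a ref means that ref is known not to be peelable (for example, a lightweight tag pointing directly at a commit).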
+
+/*
+ * Check that the packed refs cache (if any) still reflects the
+ * contents of the file. If not, clear the cache.
+ */
+static void validate_packed_ref_cache(struct packed_ref_store *refs)
+{
+ if (refs->cache &&
+ !stat_validity_check(&refs->cache->validity, refs->path))
+ clear_packed_ref_cache(refs);
+}
+
+/*
+ * Get the packed_ref_cache for the specified packed_ref_store,
+ * creating and populating it if it hasn't been read before or if the
+ * file has been changed (according to its `validity` field) since it
+ * was last read. On the other hand, if we hold the lock, then assume
+ * that the file hasn't been changed out from under us, so skip the
+ * extra `stat()` call in `stat_validity_check()`.
+ */
+static struct packed_ref_cache *get_packed_ref_cache(struct packed_ref_store *refs)
+{
+ if (!is_lock_file_locked(&refs->lock))
+ validate_packed_ref_cache(refs);
+
+ if (!refs->cache)
+ refs->cache = read_packed_refs(refs->path);
+
+ return refs->cache;
+}
+
+static struct ref_dir *get_packed_ref_dir(struct packed_ref_cache *packed_ref_cache)
+{
+ return get_ref_dir(packed_ref_cache->cache->root);
+}
+
+static struct ref_dir *get_packed_refs(struct packed_ref_store *refs)
+{
+ return get_packed_ref_dir(get_packed_ref_cache(refs));
+}
+
+/*
+ * Add or overwrite a reference in the in-memory packed reference
+ * cache. This may only be called while the packed-refs file is locked
+ * (see packed_refs_lock()). To actually write the packed-refs file,
+ * call commit_packed_refs().
+ */
+void add_packed_ref(struct ref_store *ref_store,
+ const char *refname, const struct object_id *oid)
+{
+ struct packed_ref_store *refs =
+ packed_downcast(ref_store, REF_STORE_WRITE,
+ "add_packed_ref");
+ struct ref_dir *packed_refs;
+ struct ref_entry *packed_entry;
+
+ if (!is_lock_file_locked(&refs->lock))
+ die("BUG: packed refs not locked");
+
+ if (check_refname_format(refname, REFNAME_ALLOW_ONELEVEL))
+ die("Reference has invalid format: '%s'", refname);
+
+ packed_refs = get_packed_refs(refs);
+ packed_entry = find_ref_entry(packed_refs, refname);
+ if (packed_entry) {
+ /* Overwrite the existing entry: */
+ oidcpy(&packed_entry->u.value.oid, oid);
+ packed_entry->flag = REF_ISPACKED;
+ oidclr(&packed_entry->u.value.peeled);
+ } else {
+ packed_entry = create_ref_entry(refname, oid, REF_ISPACKED);
+ add_ref_entry(packed_refs, packed_entry);
+ }
+}
+
+/*
+ * Return the ref_entry for the given refname from the packed
+ * references. If it does not exist, return NULL.
+ */
+static struct ref_entry *get_packed_ref(struct packed_ref_store *refs,
+ const char *refname)
+{
+ return find_ref_entry(get_packed_refs(refs), refname);
+}
+
+static int packed_read_raw_ref(struct ref_store *ref_store,
+ const char *refname, unsigned char *sha1,
+ struct strbuf *referent, unsigned int *type)
+{
+ struct packed_ref_store *refs =
+ packed_downcast(ref_store, REF_STORE_READ, "read_raw_ref");
+
+ struct ref_entry *entry;
+
+ *type = 0;
+
+ entry = get_packed_ref(refs, refname);
+ if (!entry) {
+ errno = ENOENT;
+ return -1;
+ }
+
+ hashcpy(sha1, entry->u.value.oid.hash);
+ *type = REF_ISPACKED;
+ return 0;
+}
+
+static int packed_peel_ref(struct ref_store *ref_store,
+ const char *refname, unsigned char *sha1)
+{
+ struct packed_ref_store *refs =
+ packed_downcast(ref_store, REF_STORE_READ | REF_STORE_ODB,
+ "peel_ref");
+ struct ref_entry *r = get_packed_ref(refs, refname);
+
+ if (!r || peel_entry(r, 0))
+ return -1;
+
+ hashcpy(sha1, r->u.value.peeled.hash);
+ return 0;
+}
+
+struct packed_ref_iterator {
+ struct ref_iterator base;
+
+ struct packed_ref_cache *cache;
+ struct ref_iterator *iter0;
+ unsigned int flags;
+};
+
+static int packed_ref_iterator_advance(struct ref_iterator *ref_iterator)
+{
+ struct packed_ref_iterator *iter =
+ (struct packed_ref_iterator *)ref_iterator;
+ int ok;
+
+ while ((ok = ref_iterator_advance(iter->iter0)) == ITER_OK) {
+ if (iter->flags & DO_FOR_EACH_PER_WORKTREE_ONLY &&
+ ref_type(iter->iter0->refname) != REF_TYPE_PER_WORKTREE)
+ continue;
+
+ if (!(iter->flags & DO_FOR_EACH_INCLUDE_BROKEN) &&
+ !ref_resolves_to_object(iter->iter0->refname,
+ iter->iter0->oid,
+ iter->iter0->flags))
+ continue;
+
+ iter->base.refname = iter->iter0->refname;
+ iter->base.oid = iter->iter0->oid;
+ iter->base.flags = iter->iter0->flags;
+ return ITER_OK;
+ }
+
+ iter->iter0 = NULL;
+ if (ref_iterator_abort(ref_iterator) != ITER_DONE)
+ ok = ITER_ERROR;
+
+ return ok;
+}
+
+static int packed_ref_iterator_peel(struct ref_iterator *ref_iterator,
+ struct object_id *peeled)
+{
+ struct packed_ref_iterator *iter =
+ (struct packed_ref_iterator *)ref_iterator;
+
+ return ref_iterator_peel(iter->iter0, peeled);
+}
+
+static int packed_ref_iterator_abort(struct ref_iterator *ref_iterator)
+{
+ struct packed_ref_iterator *iter =
+ (struct packed_ref_iterator *)ref_iterator;
+ int ok = ITER_DONE;
+
+ if (iter->iter0)
+ ok = ref_iterator_abort(iter->iter0);
+
+ release_packed_ref_cache(iter->cache);
+ base_ref_iterator_free(ref_iterator);
+ return ok;
+}
+
+static struct ref_iterator_vtable packed_ref_iterator_vtable = {
+ packed_ref_iterator_advance,
+ packed_ref_iterator_peel,
+ packed_ref_iterator_abort
+};
+
+static struct ref_iterator *packed_ref_iterator_begin(
+ struct ref_store *ref_store,
+ const char *prefix, unsigned int flags)
+{
+ struct packed_ref_store *refs;
+ struct packed_ref_iterator *iter;
+ struct ref_iterator *ref_iterator;
+ unsigned int required_flags = REF_STORE_READ;
+
+ if (!(flags & DO_FOR_EACH_INCLUDE_BROKEN))
+ required_flags |= REF_STORE_ODB;
+ refs = packed_downcast(ref_store, required_flags, "ref_iterator_begin");
+
+ iter = xcalloc(1, sizeof(*iter));
+ ref_iterator = &iter->base;
+ base_ref_iterator_init(ref_iterator, &packed_ref_iterator_vtable);
+
+ /*
+ * Note that get_packed_ref_cache() internally checks whether
+ * the packed-ref cache is up to date with what is on disk,
+ * and re-reads it if not.
+ */
+
+ iter->cache = get_packed_ref_cache(refs);
+ acquire_packed_ref_cache(iter->cache);
+ iter->iter0 = cache_ref_iterator_begin(iter->cache->cache, prefix, 0);
+
+ iter->flags = flags;
+
+ return ref_iterator;
+}
+
+/*
+ * Write an entry to the packed-refs file for the specified refname.
+ * If peeled is non-NULL, write it as the entry's peeled value. On
+ * error, return a nonzero value and leave errno set at the value left
+ * by the failing call to `fprintf()`.
+ */
+static int write_packed_entry(FILE *fh, const char *refname,
+ const unsigned char *sha1,
+ const unsigned char *peeled)
+{
+ if (fprintf(fh, "%s %s\n", sha1_to_hex(sha1), refname) < 0 ||
+ (peeled && fprintf(fh, "^%s\n", sha1_to_hex(peeled)) < 0))
+ return -1;
+
+ return 0;
+}
+
+int packed_refs_lock(struct ref_store *ref_store, int flags, struct strbuf *err)
+{
+ struct packed_ref_store *refs =
+ packed_downcast(ref_store, REF_STORE_WRITE | REF_STORE_MAIN,
+ "packed_refs_lock");
+ static int timeout_configured = 0;
+ static int timeout_value = 1000;
+ struct packed_ref_cache *packed_ref_cache;
+
+ if (!timeout_configured) {
+ git_config_get_int("core.packedrefstimeout", &timeout_value);
+ timeout_configured = 1;
+ }
+
+ /*
+ * Note that we close the lockfile immediately because we
+ * don't write new content to it, but rather to a separate
+ * tempfile.
+ */
+ if (hold_lock_file_for_update_timeout(
+ &refs->lock,
+ refs->path,
+ flags, timeout_value) < 0) {
+ unable_to_lock_message(refs->path, errno, err);
+ return -1;
+ }
+
+ if (close_lock_file(&refs->lock)) {
+ strbuf_addf(err, "unable to close %s: %s", refs->path, strerror(errno));
+ return -1;
+ }
+
+ /*
+ * Now that we hold the `packed-refs` lock, make sure that our
+ * cache matches the current version of the file. Normally
+ * `get_packed_ref_cache()` does that for us, but that
+ * function assumes that when the file is locked, any existing
+ * cache is still valid. We've just locked the file, but it
+ * might have changed the moment *before* we locked it.
+ */
+ validate_packed_ref_cache(refs);
+
+ packed_ref_cache = get_packed_ref_cache(refs);
+ /* Increment the reference count to prevent it from being freed: */
+ acquire_packed_ref_cache(packed_ref_cache);
+ return 0;
+}
+
+void packed_refs_unlock(struct ref_store *ref_store)
+{
+ struct packed_ref_store *refs = packed_downcast(
+ ref_store,
+ REF_STORE_READ | REF_STORE_WRITE,
+ "packed_refs_unlock");
+
+ if (!is_lock_file_locked(&refs->lock))
+ die("BUG: packed_refs_unlock() called when not locked");
+ rollback_lock_file(&refs->lock);
+ release_packed_ref_cache(refs->cache);
+}
+
+int packed_refs_is_locked(struct ref_store *ref_store)
+{
+ struct packed_ref_store *refs = packed_downcast(
+ ref_store,
+ REF_STORE_READ | REF_STORE_WRITE,
+ "packed_refs_is_locked");
+
+ return is_lock_file_locked(&refs->lock);
+}
+
+/*
+ * The packed-refs header line that we write out. Perhaps other
+ * traits will be added later. The trailing space is required.
+ */
+static const char PACKED_REFS_HEADER[] =
+ "# pack-refs with: peeled fully-peeled \n";
+
+/*
+ * Write the current version of the packed refs cache from memory to
+ * disk. The packed-refs file must already be locked for writing (see
+ * packed_refs_lock()). Return zero on success. On errors, roll back
+ * the lockfile, write an error message to `err`, and return a nonzero
+ * value.
+ */
+int commit_packed_refs(struct ref_store *ref_store, struct strbuf *err)
+{
+ struct packed_ref_store *refs =
+ packed_downcast(ref_store, REF_STORE_WRITE | REF_STORE_MAIN,
+ "commit_packed_refs");
+ struct packed_ref_cache *packed_ref_cache =
+ get_packed_ref_cache(refs);
+ int ok;
+ int ret = -1;
+ struct strbuf sb = STRBUF_INIT;
+ FILE *out;
+ struct ref_iterator *iter;
+ char *packed_refs_path;
+
+ if (!is_lock_file_locked(&refs->lock))
+ die("BUG: commit_packed_refs() called when unlocked");
+
+ /*
+ * If packed-refs is a symlink, we want to overwrite the
+ * symlinked-to file, not the symlink itself. Also, put the
+ * staging file next to it:
+ */
+ packed_refs_path = get_locked_file_path(&refs->lock);
+ strbuf_addf(&sb, "%s.new", packed_refs_path);
+ if (create_tempfile(&refs->tempfile, sb.buf) < 0) {
+ strbuf_addf(err, "unable to create file %s: %s",
+ sb.buf, strerror(errno));
+ strbuf_release(&sb);
+ goto out;
+ }
+ strbuf_release(&sb);
+
+ out = fdopen_tempfile(&refs->tempfile, "w");
+ if (!out) {
+ strbuf_addf(err, "unable to fdopen packed-refs tempfile: %s",
+ strerror(errno));
+ goto error;
+ }
+
+ if (fprintf(out, "%s", PACKED_REFS_HEADER) < 0) {
+ strbuf_addf(err, "error writing to %s: %s",
+ get_tempfile_path(&refs->tempfile), strerror(errno));
+ goto error;
+ }
+
+ iter = cache_ref_iterator_begin(packed_ref_cache->cache, NULL, 0);
+ while ((ok = ref_iterator_advance(iter)) == ITER_OK) {
+ struct object_id peeled;
+ int peel_error = ref_iterator_peel(iter, &peeled);
+
+ if (write_packed_entry(out, iter->refname, iter->oid->hash,
+ peel_error ? NULL : peeled.hash)) {
+ strbuf_addf(err, "error writing to %s: %s",
+ get_tempfile_path(&refs->tempfile),
+ strerror(errno));
+ ref_iterator_abort(iter);
+ goto error;
+ }
+ }
+
+ if (ok != ITER_DONE) {
+ strbuf_addf(err, "unable to rewrite packed-refs file: "
+ "error iterating over old contents");
+ goto error;
+ }
+
+ if (rename_tempfile(&refs->tempfile, packed_refs_path)) {
+ strbuf_addf(err, "error replacing %s: %s",
+ refs->path, strerror(errno));
+ goto out;
+ }
+
+ ret = 0;
+ goto out;
+
+error:
+ delete_tempfile(&refs->tempfile);
+
+out:
+ free(packed_refs_path);
+ return ret;
+}
+
+/*
+ * Rewrite the packed-refs file, omitting any refs listed in
+ * 'refnames'. On error, leave packed-refs unchanged, write an error
+ * message to 'err', and return a nonzero value. The packed refs lock
+ * must be held when calling this function; it will still be held when
+ * the function returns.
+ *
+ * The refs in 'refnames' needn't be sorted. `err` must not be NULL.
+ */
+int repack_without_refs(struct ref_store *ref_store,
+ struct string_list *refnames, struct strbuf *err)
+{
+ struct packed_ref_store *refs =
+ packed_downcast(ref_store, REF_STORE_WRITE | REF_STORE_MAIN,
+ "repack_without_refs");
+ struct ref_dir *packed;
+ struct string_list_item *refname;
+ int needs_repacking = 0, removed = 0;
+
+ packed_assert_main_repository(refs, "repack_without_refs");
+ assert(err);
+
+ if (!is_lock_file_locked(&refs->lock))
+ die("BUG: repack_without_refs called without holding lock");
+
+ /* Look for a packed ref */
+ for_each_string_list_item(refname, refnames) {
+ if (get_packed_ref(refs, refname->string)) {
+ needs_repacking = 1;
+ break;
+ }
+ }
+
+ /* Avoid locking if we have nothing to do */
+ if (!needs_repacking)
+ return 0; /* no refname exists in packed refs */
+
+ packed = get_packed_refs(refs);
+
+ /* Remove refnames from the cache */
+ for_each_string_list_item(refname, refnames)
+ if (remove_entry_from_dir(packed, refname->string) != -1)
+ removed = 1;
+ if (!removed) {
+ /*
+ * All packed entries disappeared while we were
+ * acquiring the lock.
+ */
+ clear_packed_ref_cache(refs);
+ return 0;
+ }
+
+ /* Write what remains */
+ return commit_packed_refs(&refs->base, err);
+}
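These new public entry points (declared in packed-backend.h further down) are meant to be driven in the same sequence the files backend uses in the hunks above (pack_refs, delete_refs, transaction finish). A minimal sketch of that calling pattern, with error handling trimmed and hypothetical store, refname and oid variables assumed to have been set up by the caller:

	struct strbuf err = STRBUF_INIT;

	if (packed_refs_lock(store, 0, &err))		/* take packed-refs.lock */
		die("%s", err.buf);
	add_packed_ref(store, refname, oid);		/* update the in-memory cache */
	if (commit_packed_refs(store, &err))		/* write a tempfile, rename into place */
		die("%s", err.buf);
	packed_refs_unlock(store);			/* drop packed-refs.lock */
	strbuf_release(&err);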
+
+static int packed_init_db(struct ref_store *ref_store, struct strbuf *err)
+{
+ /* Nothing to do. */
+ return 0;
+}
+
+static int packed_transaction_prepare(struct ref_store *ref_store,
+ struct ref_transaction *transaction,
+ struct strbuf *err)
+{
+ die("BUG: not implemented yet");
+}
+
+static int packed_transaction_abort(struct ref_store *ref_store,
+ struct ref_transaction *transaction,
+ struct strbuf *err)
+{
+ die("BUG: not implemented yet");
+}
+
+static int packed_transaction_finish(struct ref_store *ref_store,
+ struct ref_transaction *transaction,
+ struct strbuf *err)
+{
+ die("BUG: not implemented yet");
+}
+
+static int packed_initial_transaction_commit(struct ref_store *ref_store,
+ struct ref_transaction *transaction,
+ struct strbuf *err)
+{
+ return ref_transaction_commit(transaction, err);
+}
+
+static int packed_delete_refs(struct ref_store *ref_store, const char *msg,
+ struct string_list *refnames, unsigned int flags)
+{
+ die("BUG: not implemented yet");
+}
+
+static int packed_pack_refs(struct ref_store *ref_store, unsigned int flags)
+{
+ /*
+ * Packed refs are already packed. It might be that loose refs
+ * are packed *into* a packed refs store, but that is done by
+ * updating the packed references via a transaction.
+ */
+ return 0;
+}
+
+static int packed_create_symref(struct ref_store *ref_store,
+ const char *refname, const char *target,
+ const char *logmsg)
+{
+ die("BUG: packed reference store does not support symrefs");
+}
+
+static int packed_rename_ref(struct ref_store *ref_store,
+ const char *oldrefname, const char *newrefname,
+ const char *logmsg)
+{
+ die("BUG: packed reference store does not support renaming references");
+}
+
+static struct ref_iterator *packed_reflog_iterator_begin(struct ref_store *ref_store)
+{
+ return empty_ref_iterator_begin();
+}
+
+static int packed_for_each_reflog_ent(struct ref_store *ref_store,
+ const char *refname,
+ each_reflog_ent_fn fn, void *cb_data)
+{
+ return 0;
+}
+
+static int packed_for_each_reflog_ent_reverse(struct ref_store *ref_store,
+ const char *refname,
+ each_reflog_ent_fn fn,
+ void *cb_data)
+{
+ return 0;
+}
+
+static int packed_reflog_exists(struct ref_store *ref_store,
+ const char *refname)
+{
+ return 0;
+}
+
+static int packed_create_reflog(struct ref_store *ref_store,
+ const char *refname, int force_create,
+ struct strbuf *err)
+{
+ die("BUG: packed reference store does not support reflogs");
+}
+
+static int packed_delete_reflog(struct ref_store *ref_store,
+ const char *refname)
+{
+ return 0;
+}
+
+static int packed_reflog_expire(struct ref_store *ref_store,
+ const char *refname, const unsigned char *sha1,
+ unsigned int flags,
+ reflog_expiry_prepare_fn prepare_fn,
+ reflog_expiry_should_prune_fn should_prune_fn,
+ reflog_expiry_cleanup_fn cleanup_fn,
+ void *policy_cb_data)
+{
+ return 0;
+}
+
+struct ref_storage_be refs_be_packed = {
+ NULL,
+ "packed",
+ packed_ref_store_create,
+ packed_init_db,
+ packed_transaction_prepare,
+ packed_transaction_finish,
+ packed_transaction_abort,
+ packed_initial_transaction_commit,
+
+ packed_pack_refs,
+ packed_peel_ref,
+ packed_create_symref,
+ packed_delete_refs,
+ packed_rename_ref,
+
+ packed_ref_iterator_begin,
+ packed_read_raw_ref,
+
+ packed_reflog_iterator_begin,
+ packed_for_each_reflog_ent,
+ packed_for_each_reflog_ent_reverse,
+ packed_reflog_exists,
+ packed_create_reflog,
+ packed_delete_reflog,
+ packed_reflog_expire
+};
--- /dev/null
+#ifndef REFS_PACKED_BACKEND_H
+#define REFS_PACKED_BACKEND_H
+
+struct ref_store *packed_ref_store_create(const char *path,
+ unsigned int store_flags);
+
+/*
+ * Lock the packed-refs file for writing. Flags is passed to
+ * hold_lock_file_for_update(). Return 0 on success. On errors, write
+ * an error message to `err` and return a nonzero value.
+ */
+int packed_refs_lock(struct ref_store *ref_store, int flags, struct strbuf *err);
+
+void packed_refs_unlock(struct ref_store *ref_store);
+int packed_refs_is_locked(struct ref_store *ref_store);
+
+void add_packed_ref(struct ref_store *ref_store,
+ const char *refname, const struct object_id *oid);
+
+int commit_packed_refs(struct ref_store *ref_store, struct strbuf *err);
+
+int repack_without_refs(struct ref_store *ref_store,
+ struct string_list *refnames, struct strbuf *err);
+
+#endif /* REFS_PACKED_BACKEND_H */
*/
int refname_is_safe(const char *refname);
+/*
+ * Helper function: return true if refname, which has the specified
+ * oid and flags, can be resolved to an object in the database. If the
+ * referred-to object does not exist, emit a warning and return false.
+ */
+int ref_resolves_to_object(const char *refname,
+ const struct object_id *oid,
+ unsigned int flags);
+
enum peel_status {
/* object was peeled successfully: */
PEEL_PEELED = 0,
};
extern struct ref_storage_be refs_be_files;
+extern struct ref_storage_be refs_be_packed;
/*
* A representation of the reference store for the main repository or
#include "submodule-config.h"
/* The main repository */
-static struct repository the_repo;
+static struct repository the_repo = {
+ NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, &the_index, 0, 0
+};
struct repository *the_repository = &the_repo;
static char *git_path_from_env(const char *envvar, const char *git_dir,
{
if (!repo->index)
repo->index = xcalloc(1, sizeof(*repo->index));
- else
- discard_index(repo->index);
return read_index_from(repo->index, repo->index_file);
}
const char *path);
extern void repo_clear(struct repository *repo);
+/*
+ * Populates the repository's index from its index_file; an index struct
+ * will be allocated if needed.
+ *
+ * Return the number of index entries in the populated index, or a value
+ * less than zero if an error occurred. If the repository's index has
+ * already been populated then the number of entries will simply be
+ * returned.
+ */
extern int repo_read_index(struct repository *repo);
#endif /* REPOSITORY_H */
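A hypothetical caller of the function documented above might look like this (repo is assumed to be an already-initialized struct repository):

	int entries = repo_read_index(repo);

	if (entries < 0)
		die(_("could not read index of %s"), repo->gitdir);
	/* 'entries' is the number of cache entries, whether or not the
	 * index had already been populated by an earlier call. */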
static GIT_PATH_FUNC(rebase_path_autostash, "rebase-merge/autostash")
static GIT_PATH_FUNC(rebase_path_strategy, "rebase-merge/strategy")
static GIT_PATH_FUNC(rebase_path_strategy_opts, "rebase-merge/strategy_opts")
+static GIT_PATH_FUNC(rebase_path_allow_rerere_autoupdate, "rebase-merge/allow_rerere_autoupdate")
static inline int is_rebase_i(const struct replay_opts *opts)
{
else if (!strcmp(key, "options.strategy-option")) {
ALLOC_GROW(opts->xopts, opts->xopts_nr + 1, opts->xopts_alloc);
opts->xopts[opts->xopts_nr++] = xstrdup(value);
- } else
+ } else if (!strcmp(key, "options.allow-rerere-auto"))
+ opts->allow_rerere_auto =
+ git_config_bool_or_int(key, value, &error_flag) ?
+ RERERE_AUTOUPDATE : RERERE_NOAUTOUPDATE;
+ else
return error(_("invalid key: %s"), key);
if (!error_flag)
free(opts->gpg_sign);
opts->gpg_sign = xstrdup(buf.buf + 2);
}
+ strbuf_reset(&buf);
+ }
+
+ if (read_oneliner(&buf, rebase_path_allow_rerere_autoupdate(), 1)) {
+ if (!strcmp(buf.buf, "--rerere-autoupdate"))
+ opts->allow_rerere_auto = RERERE_AUTOUPDATE;
+ else if (!strcmp(buf.buf, "--no-rerere-autoupdate"))
+ opts->allow_rerere_auto = RERERE_NOAUTOUPDATE;
+ strbuf_reset(&buf);
}
if (file_exists(rebase_path_verbose()))
"options.strategy-option",
opts->xopts[i], "^$", 0);
}
+ if (opts->allow_rerere_auto)
+ res |= git_config_set_in_file_gently(opts_file, "options.allow-rerere-auto",
+ opts->allow_rerere_auto == RERERE_AUTOUPDATE ?
+ "true" : "false");
return res;
}
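The new option round-trips through the sequencer's saved state as an ordinary config entry. A hypothetical excerpt of the opts file after, say, an interrupted "git cherry-pick --rerere-autoupdate A..B":

	[options]
		allow-rerere-auto = true

An interrupted "git rebase --rerere-autoupdate", by contrast, records the literal command-line option in rebase-merge/allow_rerere_autoupdate, which is what the read_oneliner() comparison above matches against.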
{
static struct strbuf cwd = STRBUF_INIT;
struct strbuf dir = STRBUF_INIT, gitdir = STRBUF_INIT;
- const char *prefix, *env_prefix;
+ const char *prefix;
/*
* We may have read an incomplete configuration before
die("BUG: unhandled setup_git_directory_1() result");
}
- /*
- * NEEDSWORK: This was a hack in order to get ls-files and grep to have
- * properly formated output when recursing submodules. Once ls-files
- * and grep have been changed to perform this recursing in-process this
- * needs to be removed.
- */
- env_prefix = getenv(GIT_TOPLEVEL_PREFIX_ENVIRONMENT);
- if (env_prefix)
- prefix = env_prefix;
-
if (prefix)
setenv(GIT_PREFIX_ENVIRONMENT, prefix, 1);
else
} while (lo < hi);
return -lo-1;
}
-
-/*
- * Conventional binary search loop looks like this:
- *
- * unsigned lo, hi;
- * do {
- * unsigned mi = (lo + hi) / 2;
- * int cmp = "entry pointed at by mi" minus "target";
- * if (!cmp)
- * return (mi is the wanted one)
- * if (cmp > 0)
- * hi = mi; "mi is larger than target"
- * else
- * lo = mi+1; "mi is smaller than target"
- * } while (lo < hi);
- *
- * The invariants are:
- *
- * - When entering the loop, lo points at a slot that is never
- * above the target (it could be at the target), hi points at a
- * slot that is guaranteed to be above the target (it can never
- * be at the target).
- *
- * - We find a point 'mi' between lo and hi (mi could be the same
- * as lo, but never can be as same as hi), and check if it hits
- * the target. There are three cases:
- *
- * - if it is a hit, we are happy.
- *
- * - if it is strictly higher than the target, we set it to hi,
- * and repeat the search.
- *
- * - if it is strictly lower than the target, we update lo to
- * one slot after it, because we allow lo to be at the target.
- *
- * If the loop exits, there is no matching entry.
- *
- * When choosing 'mi', we do not have to take the "middle" but
- * anywhere in between lo and hi, as long as lo <= mi < hi is
- * satisfied. When we somehow know that the distance between the
- * target and lo is much shorter than the target and hi, we could
- * pick mi that is much closer to lo than the midway.
- *
- * Now, we can take advantage of the fact that SHA-1 is a good hash
- * function, and as long as there are enough entries in the table, we
- * can expect uniform distribution. An entry that begins with for
- * example "deadbeef..." is much likely to appear much later than in
- * the midway of the table. It can reasonably be expected to be near
- * 87% (222/256) from the top of the table.
- *
- * However, we do not want to pick "mi" too precisely. If the entry at
- * the 87% in the above example turns out to be higher than the target
- * we are looking for, we would end up narrowing the search space down
- * only by 13%, instead of 50% we would get if we did a simple binary
- * search. So we would want to hedge our bets by being less aggressive.
- *
- * The table at "table" holds at least "nr" entries of "elem_size"
- * bytes each. Each entry has the SHA-1 key at "key_offset". The
- * table is sorted by the SHA-1 key of the entries. The caller wants
- * to find the entry with "key", and knows that the entry at "lo" is
- * not higher than the entry it is looking for, and that the entry at
- * "hi" is higher than the entry it is looking for.
- */
-int sha1_entry_pos(const void *table,
- size_t elem_size,
- size_t key_offset,
- unsigned lo, unsigned hi, unsigned nr,
- const unsigned char *key)
-{
- const unsigned char *base = table;
- const unsigned char *hi_key, *lo_key;
- unsigned ofs_0;
- static int debug_lookup = -1;
-
- if (debug_lookup < 0)
- debug_lookup = !!getenv("GIT_DEBUG_LOOKUP");
-
- if (!nr || lo >= hi)
- return -1;
-
- if (nr == hi)
- hi_key = NULL;
- else
- hi_key = base + elem_size * hi + key_offset;
- lo_key = base + elem_size * lo + key_offset;
-
- ofs_0 = 0;
- do {
- int cmp;
- unsigned ofs, mi, range;
- unsigned lov, hiv, kyv;
- const unsigned char *mi_key;
-
- range = hi - lo;
- if (hi_key) {
- for (ofs = ofs_0; ofs < 20; ofs++)
- if (lo_key[ofs] != hi_key[ofs])
- break;
- ofs_0 = ofs;
- /*
- * byte 0 thru (ofs-1) are the same between
- * lo and hi; ofs is the first byte that is
- * different.
- *
- * If ofs==20, then no bytes are different,
- * meaning we have entries with duplicate
- * keys. We know that we are in a solid run
- * of this entry (because the entries are
- * sorted, and our lo and hi are the same,
- * there can be nothing but this single key
- * in between). So we can stop the search.
- * Either one of these entries is it (and
- * we do not care which), or we do not have
- * it.
- *
- * Furthermore, we know that one of our
- * endpoints must be the edge of the run of
- * duplicates. For example, given this
- * sequence:
- *
- * idx 0 1 2 3 4 5
- * key A C C C C D
- *
- * If we are searching for "B", we might
- * hit the duplicate run at lo=1, hi=3
- * (e.g., by first mi=3, then mi=0). But we
- * can never have lo > 1, because B < C.
- * That is, if our key is less than the
- * run, we know that "lo" is the edge, but
- * we can say nothing of "hi". Similarly,
- * if our key is greater than the run, we
- * know that "hi" is the edge, but we can
- * say nothing of "lo".
- *
- * Therefore if we do not find it, we also
- * know where it would go if it did exist:
- * just on the far side of the edge that we
- * know about.
- */
- if (ofs == 20) {
- mi = lo;
- mi_key = base + elem_size * mi + key_offset;
- cmp = memcmp(mi_key, key, 20);
- if (!cmp)
- return mi;
- if (cmp < 0)
- return -1 - hi;
- else
- return -1 - lo;
- }
-
- hiv = hi_key[ofs_0];
- if (ofs_0 < 19)
- hiv = (hiv << 8) | hi_key[ofs_0+1];
- } else {
- hiv = 256;
- if (ofs_0 < 19)
- hiv <<= 8;
- }
- lov = lo_key[ofs_0];
- kyv = key[ofs_0];
- if (ofs_0 < 19) {
- lov = (lov << 8) | lo_key[ofs_0+1];
- kyv = (kyv << 8) | key[ofs_0+1];
- }
- assert(lov < hiv);
-
- if (kyv < lov)
- return -1 - lo;
- if (hiv < kyv)
- return -1 - hi;
-
- /*
- * Even if we know the target is much closer to 'hi'
- * than 'lo', if we pick too precisely and overshoot
- * (e.g. when we know 'mi' is closer to 'hi' than to
- * 'lo', pick 'mi' that is higher than the target), we
- * end up narrowing the search space by a smaller
- * amount (i.e. the distance between 'mi' and 'hi')
- * than what we would have (i.e. about half of 'lo'
- * and 'hi'). Hedge our bets to pick 'mi' less
- * aggressively, i.e. make 'mi' a bit closer to the
- * middle than we would otherwise pick.
- */
- kyv = (kyv * 6 + lov + hiv) / 8;
- if (lov < hiv - 1) {
- if (kyv == lov)
- kyv++;
- else if (kyv == hiv)
- kyv--;
- }
- mi = (range - 1) * (kyv - lov) / (hiv - lov) + lo;
-
- if (debug_lookup) {
- printf("lo %u hi %u rg %u mi %u ", lo, hi, range, mi);
- printf("ofs %u lov %x, hiv %x, kyv %x\n",
- ofs_0, lov, hiv, kyv);
- }
- if (!(lo <= mi && mi < hi))
- die("assertion failure lo %u mi %u hi %u %s",
- lo, mi, hi, sha1_to_hex(key));
-
- mi_key = base + elem_size * mi + key_offset;
- cmp = memcmp(mi_key + ofs_0, key + ofs_0, 20 - ofs_0);
- if (!cmp)
- return mi;
- if (cmp > 0) {
- hi = mi;
- hi_key = mi_key;
- } else {
- lo = mi + 1;
- lo_key = mi_key + elem_size;
- }
- } while (lo < hi);
- return -lo-1;
-}
* SHA1, an extra slash for the first level indirection, and the
* terminating NUL.
*/
+static void read_info_alternates(const char * relative_base, int depth);
static int link_alt_odb_entry(const char *entry, const char *relative_base,
int depth, const char *normalized_objdir)
{
strbuf_release(&objdirbuf);
}
-void read_info_alternates(const char * relative_base, int depth)
+static void read_info_alternates(const char * relative_base, int depth)
{
char *map;
size_t mapsz;
hashclr(oi->delta_base_sha1);
}
+ oi->whence = in_delta_base_cache(p, obj_offset) ? OI_DBCACHED :
+ OI_PACKED;
+
out:
unuse_pack(&w_curs);
return type;
error("bad packed object CRC for %s",
sha1_to_hex(sha1));
mark_bad_packed_object(p, sha1);
- unuse_pack(&w_curs);
- return NULL;
+ data = NULL;
+ goto out;
}
}
if (final_size)
*final_size = size;
+out:
unuse_pack(&w_curs);
if (delta_stack != small_delta_stack)
const uint32_t *level1_ofs = p->index_data;
const unsigned char *index = p->index_data;
unsigned hi, lo, stride;
- static int use_lookup = -1;
static int debug_lookup = -1;
if (debug_lookup < 0)
printf("%02x%02x%02x... lo %u hi %u nr %"PRIu32"\n",
sha1[0], sha1[1], sha1[2], lo, hi, p->num_objects);
- if (use_lookup < 0)
- use_lookup = !!getenv("GIT_USE_LOOKUP");
- if (use_lookup) {
- int pos = sha1_entry_pos(index, stride, 0,
- lo, hi, p->num_objects, sha1);
- if (pos < 0)
- return 0;
- return nth_packed_object_offset(p, pos);
- }
-
- do {
+ while (lo < hi) {
unsigned mi = (lo + hi) / 2;
int cmp = hashcmp(index + mi * stride, sha1);
hi = mi;
else
lo = mi+1;
- } while (lo < hi);
+ }
return 0;
}
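With sha1_entry_pos() gone, the lookup above is a conventional binary search. For reference, a self-contained sketch of the same loop over a sorted table of 20-byte keys (a hypothetical find_key() helper; memcmp stands in for hashcmp):

	#include <string.h>

	static int find_key(const unsigned char *table, unsigned nr,
			    const unsigned char *key)
	{
		unsigned lo = 0, hi = nr;

		while (lo < hi) {
			unsigned mi = lo + (hi - lo) / 2;
			int cmp = memcmp(table + mi * 20, key, 20);

			if (!cmp)
				return mi;	/* found at index mi */
			if (cmp > 0)
				hi = mi;	/* entry at mi is above the key */
			else
				lo = mi + 1;	/* entry at mi is below the key */
		}
		return -1;			/* not found */
	}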
if (oi->sizep == &size_scratch)
oi->sizep = NULL;
strbuf_release(&hdrbuf);
+ oi->whence = OI_LOOSE;
return (status < 0) ? status : 0;
}
if (!find_pack_entry(real, &e)) {
/* Most likely it's a loose object. */
- if (!sha1_loose_object_info(real, oi, flags)) {
- oi->whence = OI_LOOSE;
+ if (!sha1_loose_object_info(real, oi, flags))
return 0;
- }
/* Not a loose object; someone else may have just packed it. */
if (flags & OBJECT_INFO_QUICK) {
if (rtype < 0) {
mark_bad_packed_object(e.p, real);
return sha1_object_info_extended(real, oi, 0);
- } else if (in_delta_base_cache(e.p, e.offset)) {
- oi->whence = OI_DBCACHED;
- } else {
- oi->whence = OI_PACKED;
+ } else if (oi->whence == OI_PACKED) {
oi->u.packed.offset = e.offset;
oi->u.packed.pack = e.p;
oi->u.packed.is_delta = (rtype == OBJ_REF_DELTA ||
return type;
}
-static void *read_packed_sha1(const unsigned char *sha1,
- enum object_type *type, unsigned long *size)
-{
- struct pack_entry e;
- void *data;
-
- if (!find_pack_entry(sha1, &e))
- return NULL;
- data = cache_or_unpack_entry(e.p, e.offset, size, type);
- if (!data) {
- /*
- * We're probably in deep shit, but let's try to fetch
- * the required object anyway from another pack or loose.
- * This should happen only in the presence of a corrupted
- * pack, and is better than failing outright.
- */
- error("failed to read object %s at offset %"PRIuMAX" from %s",
- sha1_to_hex(sha1), (uintmax_t)e.offset, e.p->pack_name);
- mark_bad_packed_object(e.p, sha1);
- data = read_object(sha1, type, size);
- }
- return data;
-}
-
int pretend_sha1_file(void *buf, unsigned long len, enum object_type type,
unsigned char *sha1)
{
if (has_loose_object(sha1))
return 0;
- buf = read_packed_sha1(sha1, &type, &len);
+ buf = read_object(sha1, &type, &len);
if (!buf)
return error("cannot read sha1_file for %s", sha1_to_hex(sha1));
hdrlen = xsnprintf(hdr, sizeof(hdr), "%s %lu", typename(type), len) + 1;
/* Translate slopbuf to NULL, as we cannot call realloc on it */
if (!sb->alloc)
sb->buf = NULL;
+ errno = 0;
r = getdelim(&sb->buf, &sb->alloc, term, fp);
if (r > 0) {
static int parse_fetch_recurse(const char *opt, const char *arg,
int die_on_error)
{
- switch (git_config_maybe_bool(opt, arg)) {
+ switch (git_parse_maybe_bool(arg)) {
case 1:
return RECURSE_SUBMODULES_ON;
case 0:
}
}
+int parse_submodule_fetchjobs(const char *var, const char *value)
+{
+ int fetchjobs = git_config_int(var, value);
+ if (fetchjobs < 0)
+ die(_("negative values not allowed for submodule.fetchjobs"));
+ return fetchjobs;
+}
+
int parse_fetch_recurse_submodules_arg(const char *opt, const char *arg)
{
return parse_fetch_recurse(opt, arg, 1);
static int parse_update_recurse(const char *opt, const char *arg,
int die_on_error)
{
- switch (git_config_maybe_bool(opt, arg)) {
+ switch (git_parse_maybe_bool(arg)) {
case 1:
return RECURSE_SUBMODULES_ON;
case 0:
static int parse_push_recurse(const char *opt, const char *arg,
int die_on_error)
{
- switch (git_config_maybe_bool(opt, arg)) {
+ switch (git_parse_maybe_bool(arg)) {
case 1:
/* There's no simple "on" value when pushing */
if (die_on_error)
extern void submodule_cache_free(struct submodule_cache *cache);
+extern int parse_submodule_fetchjobs(const char *var, const char *value);
extern int parse_fetch_recurse_submodules_arg(const char *opt, const char *arg);
struct option;
extern int option_fetch_parse_recurse_submodules(const struct option *opt,
#include "worktree.h"
#include "parse-options.h"
-static int config_fetch_recurse_submodules = RECURSE_SUBMODULES_ON_DEMAND;
static int config_update_recurse_submodules = RECURSE_SUBMODULES_OFF;
-static int parallel_jobs = 1;
static struct string_list changed_submodule_paths = STRING_LIST_INIT_DUP;
static int initialized_fetch_ref_tips;
static struct oid_array ref_tips_before_fetch;
static struct oid_array ref_tips_after_fetch;
/*
- * The following flag is set if the .gitmodules file is unmerged. We then
- * disable recursion for all submodules where .git/config doesn't have a
- * matching config entry because we can't guess what might be configured in
- * .gitmodules unless the user resolves the conflict. When a command line
- * option is given (which always overrides configuration) this flag will be
- * ignored.
+ * Check if the .gitmodules file is unmerged. Parsing of the .gitmodules file
+ * will be disabled because we can't guess what might be configured in
+ * .gitmodules unless the user resolves the conflict.
*/
-static int gitmodules_is_unmerged;
+int is_gitmodules_unmerged(const struct index_state *istate)
+{
+ int pos = index_name_pos(istate, GITMODULES_FILE, strlen(GITMODULES_FILE));
+ if (pos < 0) { /* .gitmodules not found or isn't merged */
+ pos = -1 - pos;
+ if (istate->cache_nr > pos) { /* there is a .gitmodules */
+ const struct cache_entry *ce = istate->cache[pos];
+ if (ce_namelen(ce) == strlen(GITMODULES_FILE) &&
+ !strcmp(ce->name, GITMODULES_FILE))
+ return 1;
+ }
+ }
+
+ return 0;
+}
/*
- * This flag is set if the .gitmodules file had unstaged modifications on
- * startup. This must be checked before allowing modifications to the
- * .gitmodules file with the intention to stage them later, because when
- * continuing we would stage the modifications the user didn't stage herself
- * too. That might change in a future version when we learn to stage the
- * changes we do ourselves without staging any previous modifications.
+ * Check if the .gitmodules file has unstaged modifications. This must be
+ * checked before allowing modifications to the .gitmodules file with the
+ * intention to stage them later, because when continuing we would stage the
+ * modifications the user didn't stage herself too. That might change in a
+ * future version when we learn to stage the changes we do ourselves without
+ * staging any previous modifications.
*/
-static int gitmodules_is_modified;
-
-int is_staging_gitmodules_ok(void)
+int is_staging_gitmodules_ok(const struct index_state *istate)
{
- return !gitmodules_is_modified;
+ int pos = index_name_pos(istate, GITMODULES_FILE, strlen(GITMODULES_FILE));
+
+ if ((pos >= 0) && (pos < istate->cache_nr)) {
+ struct stat st;
+ if (lstat(GITMODULES_FILE, &st) == 0 &&
+ ce_match_stat(istate->cache[pos], &st, 0) & DATA_CHANGED)
+ return 0;
+ }
+
+ return 1;
}
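(Aside, not part of the patch: with the old file-scope flags gone, callers are expected to query the index state directly through these two helpers. A minimal hypothetical sketch of guarding a .gitmodules edit, assuming the usual the_index, die() and _() facilities from git's internal headers:)

/* Hypothetical sketch only: bail out before touching .gitmodules. */
static void ensure_gitmodules_editable(void)
{
	if (is_gitmodules_unmerged(&the_index))
		die(_("cannot edit .gitmodules: resolve merge conflicts first"));
	if (!is_staging_gitmodules_ok(&the_index))
		die(_("cannot stage .gitmodules: it has unstaged changes"));
}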
/*
struct strbuf entry = STRBUF_INIT;
const struct submodule *submodule;
- if (!file_exists(".gitmodules")) /* Do nothing without .gitmodules */
+ if (!file_exists(GITMODULES_FILE)) /* Do nothing without .gitmodules */
return -1;
- if (gitmodules_is_unmerged)
+ if (is_gitmodules_unmerged(&the_index))
die(_("Cannot change unmerged .gitmodules, resolve merge conflicts first"));
submodule = submodule_from_path(&null_oid, oldpath);
strbuf_addstr(&entry, "submodule.");
strbuf_addstr(&entry, submodule->name);
strbuf_addstr(&entry, ".path");
- if (git_config_set_in_file_gently(".gitmodules", entry.buf, newpath) < 0) {
+ if (git_config_set_in_file_gently(GITMODULES_FILE, entry.buf, newpath) < 0) {
/* Maybe the user already did that, don't error out here */
warning(_("Could not update .gitmodules entry %s"), entry.buf);
strbuf_release(&entry);
struct strbuf sect = STRBUF_INIT;
const struct submodule *submodule;
- if (!file_exists(".gitmodules")) /* Do nothing without .gitmodules */
+ if (!file_exists(GITMODULES_FILE)) /* Do nothing without .gitmodules */
return -1;
- if (gitmodules_is_unmerged)
+ if (is_gitmodules_unmerged(&the_index))
die(_("Cannot change unmerged .gitmodules, resolve merge conflicts first"));
submodule = submodule_from_path(&null_oid, path);
}
strbuf_addstr(&sect, "submodule.");
strbuf_addstr(&sect, submodule->name);
- if (git_config_rename_section_in_file(".gitmodules", sect.buf, NULL) < 0) {
+ if (git_config_rename_section_in_file(GITMODULES_FILE, sect.buf, NULL) < 0) {
/* Maybe the user already did that, don't error out here */
warning(_("Could not remove .gitmodules entry for %s"), path);
strbuf_release(&sect);
void stage_updated_gitmodules(void)
{
- if (add_file_to_cache(".gitmodules", 0))
+ if (add_file_to_cache(GITMODULES_FILE, 0))
die(_("staging updated .gitmodules failed"));
}
if (submodule) {
if (submodule->ignore)
handle_ignore_submodules_arg(diffopt, submodule->ignore);
- else if (gitmodules_is_unmerged)
+ else if (is_gitmodules_unmerged(&the_index))
DIFF_OPT_SET(diffopt, IGNORE_SUBMODULES);
}
}
/* For loading from the .gitmodules file. */
static int git_modules_config(const char *var, const char *value, void *cb)
{
- if (!strcmp(var, "submodule.fetchjobs")) {
- parallel_jobs = git_config_int(var, value);
- if (parallel_jobs < 0)
- die(_("negative values not allowed for submodule.fetchJobs"));
- return 0;
- } else if (starts_with(var, "submodule."))
+ if (starts_with(var, "submodule."))
return parse_submodule_config_option(var, value);
- else if (!strcmp(var, "fetch.recursesubmodules")) {
- config_fetch_recurse_submodules = parse_fetch_recurse_submodules_arg(var, value);
- return 0;
- }
return 0;
}
git_config(submodule_config, NULL);
}
-void gitmodules_config(void)
-{
- const char *work_tree = get_git_work_tree();
- if (work_tree) {
- struct strbuf gitmodules_path = STRBUF_INIT;
- int pos;
- strbuf_addstr(&gitmodules_path, work_tree);
- strbuf_addstr(&gitmodules_path, "/.gitmodules");
- if (read_cache() < 0)
- die("index file corrupt");
- pos = cache_name_pos(".gitmodules", 11);
- if (pos < 0) { /* .gitmodules not found or isn't merged */
- pos = -1 - pos;
- if (active_nr > pos) { /* there is a .gitmodules */
- const struct cache_entry *ce = active_cache[pos];
- if (ce_namelen(ce) == 11 &&
- !memcmp(ce->name, ".gitmodules", 11))
- gitmodules_is_unmerged = 1;
- }
- } else if (pos < active_nr) {
- struct stat st;
- if (lstat(".gitmodules", &st) == 0 &&
- ce_match_stat(active_cache[pos], &st, 0) & DATA_CHANGED)
- gitmodules_is_modified = 1;
- }
-
- if (!gitmodules_is_unmerged)
- git_config_from_file(git_modules_config,
- gitmodules_path.buf, NULL);
- strbuf_release(&gitmodules_path);
- }
-}
-
static int gitmodules_cb(const char *var, const char *value, void *data)
{
struct repository *repo = data;
void repo_read_gitmodules(struct repository *repo)
{
- char *gitmodules_path = repo_worktree_path(repo, ".gitmodules");
+ if (repo->worktree) {
+ char *gitmodules;
+
+ if (repo_read_index(repo) < 0)
+ return;
+
+ gitmodules = repo_worktree_path(repo, GITMODULES_FILE);
+
+ if (!is_gitmodules_unmerged(repo->index))
+ git_config_from_file(gitmodules_cb, gitmodules, repo);
- git_config_from_file(gitmodules_cb, gitmodules_path, repo);
- free(gitmodules_path);
+ free(gitmodules);
+ }
+}
+
+void gitmodules_config(void)
+{
+ repo_read_gitmodules(the_repository);
}
void gitmodules_config_oid(const struct object_id *commit_oid)
clear_commit_marks(right, ~0);
}
-void set_config_fetch_recurse_submodules(int value)
-{
- config_fetch_recurse_submodules = value;
-}
-
int should_update_submodules(void)
{
return config_update_recurse_submodules == RECURSE_SUBMODULES_ON;
* Perform a check in the submodule to see if the remote and refspec work.
* Die if the submodule can't be pushed.
*/
-static void submodule_push_check(const char *path, const struct remote *remote,
+static void submodule_push_check(const char *path, const char *head,
+ const struct remote *remote,
const char **refspec, int refspec_nr)
{
struct child_process cp = CHILD_PROCESS_INIT;
argv_array_push(&cp.args, "submodule--helper");
argv_array_push(&cp.args, "push-check");
+ argv_array_push(&cp.args, head);
argv_array_push(&cp.args, remote->name);
for (i = 0; i < refspec_nr; i++)
* won't be propagated due to the remote being unconfigured (e.g. a URL
* instead of a remote name).
*/
- if (remote->origin != REMOTE_UNCONFIGURED)
+ if (remote->origin != REMOTE_UNCONFIGURED) {
+ char *head;
+ struct object_id head_oid;
+
+ head = resolve_refdup("HEAD", 0, head_oid.hash, NULL);
+ if (!head)
+ die(_("Failed to resolve HEAD as a valid ref."));
+
for (i = 0; i < needs_pushing.nr; i++)
submodule_push_check(needs_pushing.items[i].string,
- remote, refspec, refspec_nr);
+ head, remote,
+ refspec, refspec_nr);
+ free(head);
+ }
/* Actually push the submodules */
for (i = 0; i < needs_pushing.nr; i++) {
const char *work_tree;
const char *prefix;
int command_line_option;
+ int default_option;
int quiet;
int result;
};
-#define SPF_INIT {0, ARGV_ARRAY_INIT, NULL, NULL, 0, 0, 0}
+#define SPF_INIT {0, ARGV_ARRAY_INIT, NULL, NULL, 0, 0, 0, 0}
static int get_next_submodule(struct child_process *cp,
struct strbuf *err, void *data, void **task_cb)
default_argv = "on-demand";
}
} else {
- if ((config_fetch_recurse_submodules == RECURSE_SUBMODULES_OFF) ||
- gitmodules_is_unmerged)
+ if (spf->default_option == RECURSE_SUBMODULES_OFF)
continue;
- if (config_fetch_recurse_submodules == RECURSE_SUBMODULES_ON_DEMAND) {
+ if (spf->default_option == RECURSE_SUBMODULES_ON_DEMAND) {
if (!unsorted_string_list_lookup(&changed_submodule_paths, ce->name))
continue;
default_argv = "on-demand";
int fetch_populated_submodules(const struct argv_array *options,
const char *prefix, int command_line_option,
+ int default_option,
int quiet, int max_parallel_jobs)
{
int i;
spf.work_tree = get_git_work_tree();
spf.command_line_option = command_line_option;
+ spf.default_option = default_option;
spf.quiet = quiet;
spf.prefix = prefix;
argv_array_push(&spf.args, "--recurse-submodules-default");
/* default value, "--submodule-prefix" and its value are added later */
- if (max_parallel_jobs < 0)
- max_parallel_jobs = parallel_jobs;
-
calculate_changed_submodule_paths();
run_processes_parallel(max_parallel_jobs,
get_next_submodule,
return 0;
}
-int parallel_submodules(void)
-{
- return parallel_jobs;
-}
-
/*
* Embeds a single submodules git directory into the superprojects git dir,
* non recursively.
};
#define SUBMODULE_UPDATE_STRATEGY_INIT {SM_UPDATE_UNSPECIFIED, NULL}
-extern int is_staging_gitmodules_ok(void);
+extern int is_gitmodules_unmerged(const struct index_state *istate);
+extern int is_staging_gitmodules_ok(const struct index_state *istate);
extern int update_path_in_gitmodules(const char *oldpath, const char *newpath);
extern int remove_path_from_gitmodules(const char *path);
extern void stage_updated_gitmodules(void);
unsigned dirty_submodule, const char *meta,
const char *del, const char *add, const char *reset,
const struct diff_options *opt);
-extern void set_config_fetch_recurse_submodules(int value);
/* Check if we want to update any submodule.*/
extern int should_update_submodules(void);
/*
extern void check_for_new_submodule_commits(struct object_id *oid);
extern int fetch_populated_submodules(const struct argv_array *options,
const char *prefix, int command_line_option,
+ int default_option,
int quiet, int max_parallel_jobs);
extern unsigned is_submodule_modified(const char *path, int ignore_untracked);
extern int submodule_uses_gitfile(const char *path);
const struct string_list *push_options,
int dry_run);
extern void connect_work_tree_and_git_dir(const char *work_tree, const char *git_dir);
-extern int parallel_submodules(void);
/*
* Given a submodule path (as in the index), return the repository
* path of that submodule in 'buf'. Return -1 on error or when the
const char *alternative; /* output: ... or this. */
};
+/*
+ * Compatibility wrappers for OpenBSD, whose basename(3) and dirname(3)
+ * have const parameters.
+ */
+static char *posix_basename(char *path)
+{
+ return basename(path);
+}
+
+static char *posix_dirname(char *path)
+{
+ return dirname(path);
+}
+
static int test_function(struct test_data *data, char *(*func)(char *input),
const char *funcname)
{
}
if (argc == 2 && !strcmp(argv[1], "basename"))
- return test_function(basename_data, basename, argv[1]);
+ return test_function(basename_data, posix_basename, argv[1]);
if (argc == 2 && !strcmp(argv[1], "dirname"))
- return test_function(dirname_data, dirname, argv[1]);
+ return test_function(dirname_data, posix_dirname, argv[1]);
fprintf(stderr, "%s: unknown function name: %s\n", argv[0],
argv[1] ? argv[1] : "(there was none)");
test_path_is_dir realgitdir/refs
'
-test_expect_success 'init in long base path' '
+test_lazy_prereq GETCWD_IGNORES_PERMS '
+ base=GETCWD_TEST_BASE_DIR &&
+ mkdir -p $base/dir &&
+ chmod 100 $base ||
+ error "bug in test script: cannot prepare $base"
+
+ (cd $base/dir && /bin/pwd -P)
+ status=$?
+
+ chmod 700 $base &&
+ rm -rf $base ||
+ error "bug in test script: cannot clean $base"
+ return $status
+'
+
+check_long_base_path () {
# exceed initial buffer size of strbuf_getcwd()
component=123456789abcdef &&
test_when_finished "chmod 0700 $component; rm -rf $component" &&
p31=$component/$component &&
p127=$p31/$p31/$p31/$p31 &&
mkdir -p $p127 &&
- chmod 0111 $component &&
+ if test $# = 1
+ then
+ chmod $1 $component
+ fi &&
(
cd $p127 &&
git init newdir
)
+}
+
+test_expect_success 'init in long base path' '
+ check_long_base_path
+'
+
+test_expect_success GETCWD_IGNORES_PERMS 'init in long restricted base path' '
+ check_long_base_path 0111
'
test_expect_success 're-init on .git file' '
treeM=$(git write-tree) &&
echo treeM $treeM &&
git ls-tree $treeM &&
- sum bozbar frotz nitfol >M.sum &&
+ cp bozbar bozbar.M &&
+ cp frotz frotz.M &&
+ cp nitfol nitfol.M &&
git diff-tree $treeH $treeM'
test_expect_success \
read_tree_u_must_succeed -m -u $treeH $treeM &&
git ls-files --stage >1-3.out &&
cmp M.out 1-3.out &&
- sum bozbar frotz nitfol >actual3.sum &&
- cmp M.sum actual3.sum &&
+ test_cmp bozbar.M bozbar &&
+ test_cmp frotz.M frotz &&
+ test_cmp nitfol.M nitfol &&
check_cache_at bozbar clean &&
check_cache_at frotz clean &&
check_cache_at nitfol clean'
test_might_fail git diff -U0 --no-index M.out 4.out >4diff.out &&
compare_change 4diff.out expected &&
check_cache_at yomin clean &&
- sum bozbar frotz nitfol >actual4.sum &&
- cmp M.sum actual4.sum &&
+ test_cmp bozbar.M bozbar &&
+ test_cmp frotz.M frotz &&
+ test_cmp nitfol.M nitfol &&
echo yomin >yomin1 &&
diff yomin yomin1 &&
rm -f yomin1'
test_might_fail git diff -U0 --no-index M.out 5.out >5diff.out &&
compare_change 5diff.out expected &&
check_cache_at yomin dirty &&
- sum bozbar frotz nitfol >actual5.sum &&
- cmp M.sum actual5.sum &&
+ test_cmp bozbar.M bozbar &&
+ test_cmp frotz.M frotz &&
+ test_cmp nitfol.M nitfol &&
: dirty index should have prevented -u from checking it out. &&
echo yomin yomin >yomin1 &&
diff yomin yomin1 &&
git ls-files --stage >6.out &&
test_cmp M.out 6.out &&
check_cache_at frotz clean &&
- sum bozbar frotz nitfol >actual3.sum &&
- cmp M.sum actual3.sum &&
+ test_cmp bozbar.M bozbar &&
+ test_cmp frotz.M frotz &&
+ test_cmp nitfol.M nitfol &&
echo frotz >frotz1 &&
diff frotz frotz1 &&
rm -f frotz1'
git ls-files --stage >7.out &&
test_cmp M.out 7.out &&
check_cache_at frotz dirty &&
- sum bozbar frotz nitfol >actual7.sum &&
- if cmp M.sum actual7.sum; then false; else :; fi &&
+ test_cmp bozbar.M bozbar &&
+ test_cmp nitfol.M nitfol &&
: dirty index should have prevented -u from checking it out. &&
echo frotz frotz >frotz1 &&
diff frotz frotz1 &&
read_tree_u_must_succeed -m -u $treeH $treeM &&
git ls-files --stage >10.out &&
cmp M.out 10.out &&
- sum bozbar frotz nitfol >actual10.sum &&
- cmp M.sum actual10.sum'
+ test_cmp bozbar.M bozbar &&
+ test_cmp frotz.M frotz &&
+ test_cmp nitfol.M nitfol
+'
test_expect_success \
'11 - dirty path removed.' \
git ls-files --stage >14.out &&
test_must_fail git diff -U0 --no-index M.out 14.out >14diff.out &&
compare_change 14diff.out expected &&
- sum bozbar frotz >actual14.sum &&
- grep -v nitfol M.sum > expected14.sum &&
- cmp expected14.sum actual14.sum &&
- sum bozbar frotz nitfol >actual14a.sum &&
- if cmp M.sum actual14a.sum; then false; else :; fi &&
+ test_cmp bozbar.M bozbar &&
+ test_cmp frotz.M frotz &&
check_cache_at nitfol clean &&
echo nitfol nitfol >nitfol1 &&
diff nitfol nitfol1 &&
test_must_fail git diff -U0 --no-index M.out 15.out >15diff.out &&
compare_change 15diff.out expected &&
check_cache_at nitfol dirty &&
- sum bozbar frotz >actual15.sum &&
- grep -v nitfol M.sum > expected15.sum &&
- cmp expected15.sum actual15.sum &&
- sum bozbar frotz nitfol >actual15a.sum &&
- if cmp M.sum actual15a.sum; then false; else :; fi &&
+ test_cmp bozbar.M bozbar &&
+ test_cmp frotz.M frotz &&
echo nitfol nitfol nitfol >nitfol1 &&
diff nitfol nitfol1 &&
rm -f nitfol1'
git ls-files --stage >18.out &&
test_cmp M.out 18.out &&
check_cache_at bozbar clean &&
- sum bozbar frotz nitfol >actual18.sum &&
- cmp M.sum actual18.sum'
+ test_cmp bozbar.M bozbar &&
+ test_cmp frotz.M frotz &&
+ test_cmp nitfol.M nitfol
+'
test_expect_success \
'19 - local change already having a good result, further modified.' \
git ls-files --stage >19.out &&
test_cmp M.out 19.out &&
check_cache_at bozbar dirty &&
- sum frotz nitfol >actual19.sum &&
- grep -v bozbar M.sum > expected19.sum &&
- cmp expected19.sum actual19.sum &&
- sum bozbar frotz nitfol >actual19a.sum &&
- if cmp M.sum actual19a.sum; then false; else :; fi &&
+ test_cmp frotz.M frotz &&
+ test_cmp nitfol.M nitfol &&
echo gnusto gnusto >bozbar1 &&
diff bozbar bozbar1 &&
rm -f bozbar1'
git ls-files --stage >20.out &&
test_cmp M.out 20.out &&
check_cache_at bozbar clean &&
- sum bozbar frotz nitfol >actual20.sum &&
- cmp M.sum actual20.sum'
+ test_cmp bozbar.M bozbar &&
+ test_cmp frotz.M frotz &&
+ test_cmp nitfol.M nitfol
+'
test_expect_success \
'21 - no local change, dirty cache.' \
+++ /dev/null
-#!/bin/sh
-#
-# Copyright (c) 2005 Johannes Schindelin
-#
-
-test_description='A simple turial in the form of a test case'
-
-. ./test-lib.sh
-
-test_expect_success 'blob' '
- echo "Hello World" > hello &&
- echo "Silly example" > example &&
-
- git update-index --add hello example &&
-
- test blob = "$(git cat-file -t 557db03)"
-'
-
-test_expect_success 'blob 557db03' '
- test "Hello World" = "$(git cat-file blob 557db03)"
-'
-
-echo "It's a new day for git" >>hello
-cat > diff.expect << EOF
-diff --git a/hello b/hello
-index 557db03..263414f 100644
---- a/hello
-+++ b/hello
-@@ -1 +1,2 @@
- Hello World
-+It's a new day for git
-EOF
-
-test_expect_success 'git diff-files -p' '
- git diff-files -p > diff.output &&
- test_cmp diff.expect diff.output
-'
-
-test_expect_success 'git diff' '
- git diff > diff.output &&
- test_cmp diff.expect diff.output
-'
-
-test_expect_success 'tree' '
- tree=$(git write-tree 2>/dev/null) &&
- test 8988da15d077d4829fc51d8544c097def6644dbb = $tree
-'
-
-test_expect_success 'git diff-index -p HEAD' '
- test_tick &&
- tree=$(git write-tree) &&
- commit=$(echo "Initial commit" | git commit-tree $tree) &&
- git update-ref HEAD $commit &&
- git diff-index -p HEAD > diff.output &&
- test_cmp diff.expect diff.output
-'
-
-test_expect_success 'git diff HEAD' '
- git diff HEAD > diff.output &&
- test_cmp diff.expect diff.output
-'
-
-cat > whatchanged.expect << EOF
-commit VARIABLE
-Author: VARIABLE
-Date: VARIABLE
-
- Initial commit
-
-diff --git a/example b/example
-new file mode 100644
-index 0000000..f24c74a
---- /dev/null
-+++ b/example
-@@ -0,0 +1 @@
-+Silly example
-diff --git a/hello b/hello
-new file mode 100644
-index 0000000..557db03
---- /dev/null
-+++ b/hello
-@@ -0,0 +1 @@
-+Hello World
-EOF
-
-test_expect_success 'git whatchanged -p --root' '
- git whatchanged -p --root |
- sed -e "1s/^\(.\{7\}\).\{40\}/\1VARIABLE/" \
- -e "2,3s/^\(.\{8\}\).*$/\1VARIABLE/" \
- > whatchanged.output &&
- test_cmp whatchanged.expect whatchanged.output
-'
-
-test_expect_success 'git tag my-first-tag' '
- git tag my-first-tag &&
- test_cmp .git/refs/heads/master .git/refs/tags/my-first-tag
-'
-
-test_expect_success 'git checkout -b mybranch' '
- git checkout -b mybranch &&
- test_cmp .git/refs/heads/master .git/refs/heads/mybranch
-'
-
-cat > branch.expect <<EOF
- master
-* mybranch
-EOF
-
-test_expect_success 'git branch' '
- git branch > branch.output &&
- test_cmp branch.expect branch.output
-'
-
-test_expect_success 'git resolve now fails' '
- git checkout mybranch &&
- echo "Work, work, work" >>hello &&
- test_tick &&
- git commit -m "Some work." -i hello &&
-
- git checkout master &&
-
- echo "Play, play, play" >>hello &&
- echo "Lots of fun" >>example &&
- test_tick &&
- git commit -m "Some fun." -i hello example &&
-
- test_must_fail git merge -m "Merge work in mybranch" mybranch
-'
-
-cat > hello << EOF
-Hello World
-It's a new day for git
-Play, play, play
-Work, work, work
-EOF
-
-cat > show-branch.expect << EOF
-* [master] Merge work in mybranch
- ! [mybranch] Some work.
---
-- [master] Merge work in mybranch
-*+ [mybranch] Some work.
-* [master^] Some fun.
-EOF
-
-test_expect_success 'git show-branch' '
- test_tick &&
- git commit -m "Merge work in mybranch" -i hello &&
- git show-branch --topo-order --more=1 master mybranch \
- > show-branch.output &&
- test_cmp show-branch.expect show-branch.output
-'
-
-cat > resolve.expect << EOF
-Updating VARIABLE..VARIABLE
-FASTFORWARD (no commit created; -m option ignored)
- example | 1 +
- hello | 1 +
- 2 files changed, 2 insertions(+)
-EOF
-
-test_expect_success 'git resolve' '
- git checkout mybranch &&
- git merge -m "Merge upstream changes." master |
- sed -e "1s/[0-9a-f]\{7\}/VARIABLE/g" \
- -e "s/^Fast[- ]forward /FASTFORWARD /" >resolve.output
-'
-
-test_expect_success 'git resolve output' '
- test_i18ncmp resolve.expect resolve.output
-'
-
-cat > show-branch2.expect << EOF
-! [master] Merge work in mybranch
- * [mybranch] Merge work in mybranch
---
--- [master] Merge work in mybranch
-EOF
-
-test_expect_success 'git show-branch (part 2)' '
- git show-branch --topo-order master mybranch > show-branch2.output &&
- test_cmp show-branch2.expect show-branch2.output
-'
-
-cat > show-branch3.expect << EOF
-! [master] Merge work in mybranch
- * [mybranch] Merge work in mybranch
---
--- [master] Merge work in mybranch
-+* [master^2] Some work.
-+* [master^] Some fun.
-EOF
-
-test_expect_success 'git show-branch (part 3)' '
- git show-branch --topo-order --more=2 master mybranch \
- > show-branch3.output &&
- test_cmp show-branch3.expect show-branch3.output
-'
-
-test_expect_success 'rewind to "Some fun." and "Some work."' '
- git checkout mybranch &&
- git reset --hard master^2 &&
- git checkout master &&
- git reset --hard master^
-'
-
-cat > show-branch4.expect << EOF
-* [master] Some fun.
- ! [mybranch] Some work.
---
-* [master] Some fun.
- + [mybranch] Some work.
-*+ [master^] Initial commit
-EOF
-
-test_expect_success 'git show-branch (part 4)' '
- git show-branch --topo-order > show-branch4.output &&
- test_cmp show-branch4.expect show-branch4.output
-'
-
-test_expect_success 'manual merge' '
- mb=$(git merge-base HEAD mybranch) &&
- git name-rev --name-only --tags $mb > name-rev.output &&
- test "my-first-tag" = $(cat name-rev.output) &&
-
- git read-tree -m -u $mb HEAD mybranch
-'
-
-cat > ls-files.expect << EOF
-100644 7f8b141b65fdcee47321e399a2598a235a032422 0 example
-100644 557db03de997c86a4a028e1ebd3a1ceb225be238 1 hello
-100644 ba42a2a96e3027f3333e13ede4ccf4498c3ae942 2 hello
-100644 cc44c73eb783565da5831b4d820c962954019b69 3 hello
-EOF
-
-test_expect_success 'git ls-files --stage' '
- git ls-files --stage > ls-files.output &&
- test_cmp ls-files.expect ls-files.output
-'
-
-cat > ls-files-unmerged.expect << EOF
-100644 557db03de997c86a4a028e1ebd3a1ceb225be238 1 hello
-100644 ba42a2a96e3027f3333e13ede4ccf4498c3ae942 2 hello
-100644 cc44c73eb783565da5831b4d820c962954019b69 3 hello
-EOF
-
-test_expect_success 'git ls-files --unmerged' '
- git ls-files --unmerged > ls-files-unmerged.output &&
- test_cmp ls-files-unmerged.expect ls-files-unmerged.output
-'
-
-test_expect_success 'git-merge-index' '
- test_must_fail git merge-index git-merge-one-file hello
-'
-
-test_expect_success 'git ls-files --stage (part 2)' '
- git ls-files --stage > ls-files.output2 &&
- test_cmp ls-files.expect ls-files.output2
-'
-
-test_expect_success 'git repack' 'git repack'
-test_expect_success 'git prune-packed' 'git prune-packed'
-test_expect_success '-> only packed objects' '
- git prune && # Remove conflict marked blobs
- test $(find .git/objects/[0-9a-f][0-9a-f] -type f -print 2>/dev/null | wc -l) = 0
-'
-
-test_done
--- /dev/null
+#!/bin/sh
+
+test_description='packed-refs entries are covered by loose refs'
+
+. ./test-lib.sh
+
+test_expect_success setup '
+ test_tick &&
+ git commit --allow-empty -m one &&
+ one=$(git rev-parse HEAD) &&
+ git for-each-ref >actual &&
+ echo "$one commit refs/heads/master" >expect &&
+ test_cmp expect actual &&
+
+ git pack-refs --all &&
+ git for-each-ref >actual &&
+ echo "$one commit refs/heads/master" >expect &&
+ test_cmp expect actual &&
+
+ git checkout --orphan another &&
+ test_tick &&
+ git commit --allow-empty -m two &&
+ two=$(git rev-parse HEAD) &&
+ git checkout -B master &&
+ git branch -D another &&
+
+ git for-each-ref >actual &&
+ echo "$two commit refs/heads/master" >expect &&
+ test_cmp expect actual &&
+
+ git reflog expire --expire=now --all &&
+ git prune &&
+ git tag -m v1.0 v1.0 master
+'
+
+test_expect_success 'no error from stale entry in packed-refs' '
+ git describe master >actual 2>&1 &&
+ echo "v1.0" >expect &&
+ test_cmp expect actual
+'
+
+test_done
test_must_fail git branch foo/bar/baz/lots/of/extra/components
'
+test_expect_success 'reject packed-refs with unterminated line' '
+ cp .git/packed-refs .git/packed-refs.bak &&
+ test_when_finished "mv .git/packed-refs.bak .git/packed-refs" &&
+ printf "%s" "$HEAD refs/zzzzz" >>.git/packed-refs &&
+ echo "fatal: unterminated line in .git/packed-refs: $HEAD refs/zzzzz" >expected_err &&
+ test_must_fail git for-each-ref >out 2>err &&
+ test_cmp expected_err err
+'
+
+test_expect_success 'reject packed-refs containing junk' '
+ cp .git/packed-refs .git/packed-refs.bak &&
+ test_when_finished "mv .git/packed-refs.bak .git/packed-refs" &&
+ printf "%s\n" "bogus content" >>.git/packed-refs &&
+ echo "fatal: unexpected line in .git/packed-refs: bogus content" >expected_err &&
+ test_must_fail git for-each-ref >out 2>err &&
+ test_cmp expected_err err
+'
+
+test_expect_success 'reject packed-refs with a short SHA-1' '
+ cp .git/packed-refs .git/packed-refs.bak &&
+ test_when_finished "mv .git/packed-refs.bak .git/packed-refs" &&
+ printf "%.7s %s\n" $HEAD refs/zzzzz >>.git/packed-refs &&
+ printf "fatal: unexpected line in .git/packed-refs: %.7s %s\n" $HEAD refs/zzzzz >expected_err &&
+ test_must_fail git for-each-ref >out 2>err &&
+ test_cmp expected_err err
+'
+
test_expect_success 'timeout if packed-refs.lock exists' '
LOCK=.git/packed-refs.lock &&
>"$LOCK" &&
git -c core.packedrefstimeout=3000 pack-refs --all --prune
'
+test_expect_success SYMLINKS 'pack symlinked packed-refs' '
+ # First make sure that symlinking works when reading:
+ git update-ref refs/heads/loosy refs/heads/master &&
+ git for-each-ref >all-refs-before &&
+ mv .git/packed-refs .git/my-deviant-packed-refs &&
+ ln -s my-deviant-packed-refs .git/packed-refs &&
+ git for-each-ref >all-refs-linked &&
+ test_cmp all-refs-before all-refs-linked &&
+ git pack-refs --all --prune &&
+ git for-each-ref >all-refs-packed &&
+ test_cmp all-refs-before all-refs-packed &&
+ test -h .git/packed-refs &&
+ test "$(readlink .git/packed-refs)" = "my-deviant-packed-refs"
+'
+
test_done
git rebase --continue
'
-test_expect_success 'non-interactive rebase --continue with rerere enabled' '
- test_config rerere.enabled true &&
- test_when_finished "test_might_fail git rebase --abort" &&
- git reset --hard commit-new-file-F2-on-topic-branch &&
- git checkout master &&
- rm -fr .git/rebase-* &&
-
- test_must_fail git rebase --onto master master topic &&
- echo "Resolved" >F2 &&
- git add F2 &&
- cp F2 F2.expected &&
- git rebase --continue &&
-
- git reset --hard commit-new-file-F2-on-topic-branch &&
- git checkout master &&
- test_must_fail git rebase --onto master master topic &&
- test_cmp F2.expected F2
-'
-
test_expect_success 'rebase --continue can not be used with other options' '
test_must_fail git rebase -v --continue &&
test_must_fail git rebase --continue -v
test -f funny.was.run
'
-test_expect_success 'rebase --continue remembers --rerere-autoupdate' '
+test_expect_success 'setup rerere database' '
rm -fr .git/rebase-* &&
git reset --hard commit-new-file-F3-on-topic-branch &&
git checkout master &&
test_commit "commit-new-file-F3" F3 3 &&
- git config rerere.enabled true &&
+ test_config rerere.enabled true &&
test_must_fail git rebase -m master topic &&
echo "Resolved" >F2 &&
+ cp F2 expected-F2 &&
git add F2 &&
test_must_fail git rebase --continue &&
echo "Resolved" >F3 &&
+ cp F3 expected-F3 &&
git add F3 &&
git rebase --continue &&
- git reset --hard topic@{1} &&
- test_must_fail git rebase -m --rerere-autoupdate master &&
- test "$(cat F2)" = "Resolved" &&
- test_must_fail git rebase --continue &&
- test "$(cat F3)" = "Resolved" &&
- git rebase --continue
+ git reset --hard topic@{1}
'
+prepare () {
+ rm -fr .git/rebase-* &&
+ git reset --hard commit-new-file-F3-on-topic-branch &&
+ git checkout master &&
+ test_config rerere.enabled true
+}
+
+test_rerere_autoupdate () {
+ action=$1 &&
+ test_expect_success "rebase $action --continue remembers --rerere-autoupdate" '
+ prepare &&
+ test_must_fail git rebase $action --rerere-autoupdate master topic &&
+ test_cmp expected-F2 F2 &&
+ git diff-files --quiet &&
+ test_must_fail git rebase --continue &&
+ test_cmp expected-F3 F3 &&
+ git diff-files --quiet &&
+ git rebase --continue
+ '
+
+ test_expect_success "rebase $action --continue honors rerere.autoUpdate" '
+ prepare &&
+ test_config rerere.autoupdate true &&
+ test_must_fail git rebase $action master topic &&
+ test_cmp expected-F2 F2 &&
+ git diff-files --quiet &&
+ test_must_fail git rebase --continue &&
+ test_cmp expected-F3 F3 &&
+ git diff-files --quiet &&
+ git rebase --continue
+ '
+
+ test_expect_success "rebase $action --continue remembers --no-rerere-autoupdate" '
+ prepare &&
+ test_config rerere.autoupdate true &&
+ test_must_fail git rebase $action --no-rerere-autoupdate master topic &&
+ test_cmp expected-F2 F2 &&
+ test_must_fail git diff-files --quiet &&
+ git add F2 &&
+ test_must_fail git rebase --continue &&
+ test_cmp expected-F3 F3 &&
+ test_must_fail git diff-files --quiet &&
+ git add F3 &&
+ git rebase --continue
+ '
+}
+
+test_rerere_autoupdate
+test_rerere_autoupdate -m
+GIT_SEQUENCE_EDITOR=: && export GIT_SEQUENCE_EDITOR
+test_rerere_autoupdate -i
+test_rerere_autoupdate --preserve-merges
+
test_done
. ./test-lib.sh
test_expect_success setup '
- echo foo >foo &&
- git add foo && test_tick && git commit -q -m 1 &&
- echo foo-master >foo &&
- git add foo && test_tick && git commit -q -m 2 &&
-
- git checkout -b dev HEAD^ &&
- echo foo-dev >foo &&
- git add foo && test_tick && git commit -q -m 3 &&
+ test_commit foo &&
+ test_commit foo-master foo &&
+ test_commit bar-master bar &&
+
+ git checkout -b dev foo &&
+ test_commit foo-dev foo &&
+ test_commit bar-dev bar &&
git config rerere.enabled true
'
'
test_expect_success 'fixup' '
- echo foo-dev >foo &&
- git add foo && test_tick && git commit -q -m 4 &&
- git reset --hard HEAD^ &&
- echo foo-dev >expect
+ echo foo-resolved >foo &&
+ echo bar-resolved >bar &&
+ git commit -am resolved &&
+ cp foo foo-expect &&
+ cp bar bar-expect &&
+ git reset --hard HEAD^
'
-test_expect_success 'cherry-pick conflict' '
- test_must_fail git cherry-pick master &&
- test_cmp expect foo
+test_expect_success 'cherry-pick conflict with --rerere-autoupdate' '
+ test_must_fail git cherry-pick --rerere-autoupdate foo..bar-master &&
+ test_cmp foo-expect foo &&
+ git diff-files --quiet &&
+ test_must_fail git cherry-pick --continue &&
+ test_cmp bar-expect bar &&
+ git diff-files --quiet &&
+ git cherry-pick --continue &&
+ git reset --hard bar-dev
+'
+
+test_expect_success 'cherry-pick conflict respects rerere.autoUpdate' '
+ test_config rerere.autoUpdate true &&
+ test_must_fail git cherry-pick foo..bar-master &&
+ test_cmp foo-expect foo &&
+ git diff-files --quiet &&
+ test_must_fail git cherry-pick --continue &&
+ test_cmp bar-expect bar &&
+ git diff-files --quiet &&
+ git cherry-pick --continue &&
+ git reset --hard bar-dev
+'
+
+test_expect_success 'cherry-pick conflict with --no-rerere-autoupdate' '
+ test_config rerere.autoUpdate true &&
+ test_must_fail git cherry-pick --no-rerere-autoupdate foo..bar-master &&
+ test_cmp foo-expect foo &&
+ test_must_fail git diff-files --quiet &&
+ git add foo &&
+ test_must_fail git cherry-pick --continue &&
+ test_cmp bar-expect bar &&
+ test_must_fail git diff-files --quiet &&
+ git add bar &&
+ git cherry-pick --continue &&
+ git reset --hard bar-dev
+'
+
+test_expect_success 'cherry-pick --continue rejects --rerere-autoupdate' '
+ test_must_fail git cherry-pick --rerere-autoupdate foo..bar-master &&
+ test_cmp foo-expect foo &&
+ git diff-files --quiet &&
+ test_must_fail git cherry-pick --continue --rerere-autoupdate >actual 2>&1 &&
+ echo "fatal: cherry-pick: --rerere-autoupdate cannot be used with --continue" >expect &&
+ test_i18ncmp expect actual &&
+ test_must_fail git cherry-pick --continue --no-rerere-autoupdate >actual 2>&1 &&
+ echo "fatal: cherry-pick: --no-rerere-autoupdate cannot be used with --continue" >expect &&
+ test_i18ncmp expect actual &&
+ git cherry-pick --abort
'
-test_expect_success 'reconfigure' '
- git config rerere.enabled false &&
- git reset --hard
+test_expect_success 'cherry-pick --rerere-autoupdate more than once' '
+ test_must_fail git cherry-pick --rerere-autoupdate --rerere-autoupdate foo..bar-master &&
+ test_cmp foo-expect foo &&
+ git diff-files --quiet &&
+ git cherry-pick --abort &&
+ test_must_fail git cherry-pick --rerere-autoupdate --no-rerere-autoupdate --rerere-autoupdate foo..bar-master &&
+ test_cmp foo-expect foo &&
+ git diff-files --quiet &&
+ git cherry-pick --abort &&
+ test_must_fail git cherry-pick --rerere-autoupdate --no-rerere-autoupdate foo..bar-master &&
+ test_must_fail git diff-files --quiet &&
+ git cherry-pick --abort
'
test_expect_success 'cherry-pick conflict without rerere' '
+ test_config rerere.enabled false &&
test_must_fail git cherry-pick master &&
test_must_fail test_cmp expect foo
'
test_expect_success 'git add --chmod=[+-]x changes index with already added file' '
rm -f foo3 xfoo3 &&
+ git reset --hard &&
echo foo >foo3 &&
git add foo3 &&
git add --chmod=+x foo3 &&
test_path_is_file bar
'
+cat > .gitignore <<EOF
+ignored
+ignored.d/*
+EOF
+
+test_expect_success 'stash previously ignored file' '
+ git reset HEAD &&
+ git add .gitignore &&
+ git commit -m "Add .gitignore" &&
+ >ignored.d/foo &&
+ echo "!ignored.d/foo" >> .gitignore &&
+ git stash save --include-untracked &&
+ test_path_is_missing ignored.d/foo &&
+ git stash pop &&
+ test_path_is_file ignored.d/foo
+'
+
test_done
test_tick &&
git commit -m "A 4k file"
'
+
+# OpenBSD only supports up to 255 repetitions, so repeat twice for 64*64=4096.
test_expect_success '-G matches' '
- git diff --name-only -G "^0{4096}$" HEAD^ >out &&
+ git diff --name-only -G "^(0{64}){64}$" HEAD^ >out &&
test 4096-zeroes.txt = "$(cat out)"
'
dolore eu feugiat nulla facilisis at vero eros et accumsan et iusto odio
dignissim qui blandit praesent luptatum zzril delenit augue duis dolore te
feugait nulla facilisi.
+
+ Reported-by: A N Other <a.n.other@example.com>
EOF
cat >failmail <<-\EOF &&
echo world >>file &&
git add file &&
test_tick &&
- git commit -s -F msg &&
+ git commit -F msg &&
git tag second &&
git format-patch --stdout first >patch1 &&
echo "Date: $GIT_AUTHOR_DATE" &&
echo &&
sed -e "1,2d" msg &&
- echo &&
- echo "Signed-off-by: $GIT_COMMITTER_NAME <$GIT_COMMITTER_EMAIL>" &&
echo "---" &&
git diff-tree --no-commit-id --stat -p second
} >patch1-stgit.eml &&
echo "# Parent $_z40" &&
cat msg &&
echo &&
- echo "Signed-off-by: $GIT_COMMITTER_NAME <$GIT_COMMITTER_EMAIL>" &&
- echo &&
git diff-tree --no-commit-id -p second
} >patch1-hg.eml &&
git reset --hard &&
git checkout -b master2 first &&
git am --signoff <patch2 &&
- printf "%s\n" "$signoff" >expected &&
- echo "Signed-off-by: $GIT_COMMITTER_NAME <$GIT_COMMITTER_EMAIL>" >>expected &&
- git cat-file commit HEAD^ | grep "Signed-off-by:" >actual &&
- test_cmp expected actual &&
- echo "Signed-off-by: $GIT_COMMITTER_NAME <$GIT_COMMITTER_EMAIL>" >expected &&
- git cat-file commit HEAD | grep "Signed-off-by:" >actual &&
- test_cmp expected actual
+ {
+ printf "third\n\nSigned-off-by: %s <%s>\n\n" \
+ "$GIT_COMMITTER_NAME" "$GIT_COMMITTER_EMAIL" &&
+ cat msg &&
+ printf "Signed-off-by: %s <%s>\n\n" \
+ "$GIT_COMMITTER_NAME" "$GIT_COMMITTER_EMAIL"
+ } >expected-log &&
+ git log --pretty=%B -2 HEAD >actual &&
+ test_cmp expected-log actual
'
test_expect_success 'am stays in branch' '
'
test_expect_success 'am --signoff does not add Signed-off-by: line if already there' '
- git format-patch --stdout HEAD^ >patch3 &&
- sed -e "/^Subject/ s,\[PATCH,Re: Re: Re: & 1/5 v2] [foo," patch3 >patch4 &&
- rm -fr .git/rebase-apply &&
- git reset --hard &&
- git checkout HEAD^ &&
- git am --signoff patch4 &&
- git cat-file commit HEAD >actual &&
- test $(grep -c "^Signed-off-by:" actual) -eq 1
+ git format-patch --stdout first >patch3 &&
+ git reset --hard first &&
+ git am --signoff <patch3 &&
+ git log --pretty=%B -2 HEAD >actual &&
+ test_cmp expected-log actual
+'
+
+test_expect_success 'am --signoff adds Signed-off-by: if another author is present' '
+ NAME="A N Other" &&
+ EMAIL="a.n.other@example.com" &&
+ {
+ printf "third\n\nSigned-off-by: %s <%s>\nSigned-off-by: %s <%s>\n\n" \
+ "$GIT_COMMITTER_NAME" "$GIT_COMMITTER_EMAIL" \
+ "$NAME" "$EMAIL" &&
+ cat msg &&
+ printf "Signed-off-by: %s <%s>\nSigned-off-by: %s <%s>\n\n" \
+ "$GIT_COMMITTER_NAME" "$GIT_COMMITTER_EMAIL" \
+ "$NAME" "$EMAIL"
+ } >expected-log &&
+ git reset --hard first &&
+ GIT_COMMITTER_NAME="$NAME" GIT_COMMITTER_EMAIL="$EMAIL" \
+ git am --signoff <patch3 &&
+ git log --pretty=%B -2 HEAD >actual &&
+ test_cmp expected-log actual
+'
+
+test_expect_success 'am --signoff duplicates Signed-off-by: if it is not the last one' '
+ NAME="A N Other" &&
+ EMAIL="a.n.other@example.com" &&
+ {
+ printf "third\n\nSigned-off-by: %s <%s>\n\
+Signed-off-by: %s <%s>\nSigned-off-by: %s <%s>\n\n" \
+ "$GIT_COMMITTER_NAME" "$GIT_COMMITTER_EMAIL" \
+ "$NAME" "$EMAIL" \
+ "$GIT_COMMITTER_NAME" "$GIT_COMMITTER_EMAIL" &&
+ cat msg &&
+ printf "Signed-off-by: %s <%s>\nSigned-off-by: %s <%s>\n\
+Signed-off-by: %s <%s>\n\n" \
+ "$GIT_COMMITTER_NAME" "$GIT_COMMITTER_EMAIL" \
+ "$NAME" "$EMAIL" \
+ "$GIT_COMMITTER_NAME" "$GIT_COMMITTER_EMAIL"
+ } >expected-log &&
+ git format-patch --stdout first >patch3 &&
+ git reset --hard first &&
+ git am --signoff <patch3 &&
+ git log --pretty=%B -2 HEAD >actual &&
+ test_cmp expected-log actual
'
test_expect_success 'am without --keep removes Re: and [PATCH] stuff' '
+ git format-patch --stdout HEAD^ >tmp &&
+ sed -e "/^Subject/ s,\[PATCH,Re: Re: Re: & 1/5 v2] [foo," tmp >patch4 &&
+ git reset --hard HEAD^ &&
+ git am <patch4 &&
git rev-parse HEAD >expected &&
git rev-parse master2 >actual &&
test_cmp expected actual
EOF
'
-test_expect_success 'lookup in duplicated pack (binary search)' '
+test_expect_success 'lookup in duplicated pack' '
git cat-file --batch-check <input >actual &&
test_cmp expect actual
'
-test_expect_success 'lookup in duplicated pack (GIT_USE_LOOKUP)' '
- (
- GIT_USE_LOOKUP=1 &&
- export GIT_USE_LOOKUP &&
- git cat-file --batch-check <input >actual
- ) &&
- test_cmp expect actual
-'
-
test_expect_success 'index-pack can reject packs with duplicates' '
clear_packs &&
create_pack dups.pack 2 &&
add_upstream_commit &&
(
cd downstream &&
- git config fetch.recurseSubmodules true
+ git config fetch.recurseSubmodules true &&
git fetch >../actual.out 2>../actual.err
) &&
test_must_be_empty actual.out &&
add_upstream_commit &&
(
cd downstream &&
- git config fetch.recurseSubmodules true
+ git config fetch.recurseSubmodules true &&
git fetch --no-recurse-submodules >../actual.out 2>../actual.err
) &&
! test -s actual.out &&
cd submodule &&
git config --unset fetch.recurseSubmodules
) &&
- git config --unset fetch.recurseSubmodules
+ git config --unset fetch.recurseSubmodules &&
git fetch >../actual.out 2>../actual.err
) &&
! test -s actual.out &&
) &&
head1=$(git rev-parse --short HEAD^) &&
git add subdir/deepsubmodule &&
- git commit -m "new deepsubmodule"
+ git commit -m "new deepsubmodule" &&
head2=$(git rev-parse --short HEAD) &&
echo "Fetching submodule submodule" > ../expect.err.sub &&
echo "From $pwd/submodule" >> ../expect.err.sub &&
# Fails when refspec includes an object id
test_must_fail git -C work push --recurse-submodules=on-demand origin \
"$(git -C work rev-parse branch2):refs/heads/branch2" &&
- # Fails when refspec includes 'HEAD' as it is unsupported at this time
+ # Fails when refspec includes HEAD and parent and submodule do not
+ # have the same named branch checked out
test_must_fail git -C work push --recurse-submodules=on-demand origin \
HEAD:refs/heads/branch2 &&
test_cmp expected_pub actual_pub
'
+test_expect_success 'push propagating HEAD refspec to a submodule' '
+ git -C work/gar/bage checkout branch2 &&
+ > work/gar/bage/junk12 &&
+ git -C work/gar/bage add junk12 &&
+ git -C work/gar/bage commit -m "Twelfth junk" &&
+
+ git -C work checkout branch2 &&
+ git -C work add gar/bage &&
+ git -C work commit -m "updating gar/bage in branch2" &&
+
+ # Passes since the superproject and submodules HEAD are both on branch2
+ git -C work push --recurse-submodules=on-demand origin \
+ HEAD:refs/heads/branch2 &&
+
+ git -C submodule.git rev-parse branch2 >actual_submodule &&
+ git -C pub.git rev-parse branch2 >actual_pub &&
+ git -C work/gar/bage rev-parse branch2 >expected_submodule &&
+ git -C work rev-parse branch2 >expected_pub &&
+ test_cmp expected_submodule actual_submodule &&
+ test_cmp expected_pub actual_pub
+'
+
test_done
test_i18ngrep "the receiving end does not support" err
'
+test_expect_success 'push --signed=1 is accepted' '
+ prepare_dst &&
+ mkdir -p dst/.git/hooks &&
+ test_must_fail git push --signed=1 dst noop ff +noff 2>err &&
+ test_i18ngrep "the receiving end does not support" err
+'
+
test_expect_success GPG 'no certificate for a signed push with no update' '
prepare_dst &&
mkdir -p dst/.git/hooks &&
run_with_limited_stack git tag --contains HEAD >actual &&
test_cmp expect actual &&
run_with_limited_stack git tag --no-contains HEAD >actual &&
- test_line_count ">" 10 actual
+ test_line_count "-gt" 10 actual
'
test_expect_success '--format should list tags as per format given' '
test_must_fail git -C multisuper_clone config --get submodule.sub1.active
'
+test_expect_success 'recursive clone respects -q' '
+ test_when_finished "rm -rf multisuper_clone" &&
+ git clone -q --recurse-submodules multisuper multisuper_clone >actual &&
+ test_must_be_empty actual
+'
+
test_done
test_cmp expected actual
'
+test_expect_success 'grep --files-without-match --quiet' '
+ git grep --files-without-match --quiet nonexistent_string >actual &&
+ test_cmp /dev/null actual
+'
+
cat >expected <<EOF
file:foo mmap bar_mmap
EOF
my $ok = join("|", qw(
TRACE
DEBUG
- USE_LOOKUP
TEST
.*_TEST
PROVE
find () {
/usr/bin/find "$@"
}
- sum () {
- md5sum "$@"
- }
# git sees Windows-style pwd
pwd () {
builtin pwd -W
# *) ;;
# esac
-# SOB=$(git var GIT_AUTHOR_IDENT | sed -n 's/^\(.*>\).*$/Signed-off-by: \1/p')
+# SOB=$(git var GIT_COMMITTER_IDENT | sed -n 's/^\(.*>\).*$/Signed-off-by: \1/p')
# git interpret-trailers --in-place --trailer "$SOB" "$COMMIT_MSG_FILE"
# if test -z "$COMMIT_SOURCE"
# then
for (i = 0; i < index->cache_nr; i++) {
struct cache_entry *ce = index->cache[i];
if (ce->ce_flags & CE_UPDATE) {
- int r = strcmp(ce->name, ".gitmodules");
+ int r = strcmp(ce->name, GITMODULES_FILE);
if (r < 0)
continue;
else if (r == 0) {