crawlers and some backup systems).
See linkgit:git-update-index[1]. True by default.
+core.untrackedCache::
+ Determines what to do about the untracked cache feature of the
+ index. It will be kept if this variable is unset or set to
+ `keep`. It will automatically be added if set to `true`, and
+ automatically removed if set to `false`. Before setting it to
+ `true`, you should check that mtime is working properly on your
+ system.
+ See linkgit:git-update-index[1]. `keep` by default.
+
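For illustration, a minimal way to try this out, assuming your filesystem updates mtime reliably (`git update-index --test-untracked-cache` performs that check for you):

    git update-index --test-untracked-cache &&
    git config core.untrackedCache true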
core.checkStat::
Determines which stat fields to match between the index
and work tree. The user can set this to 'default' or
so that locally committed merge commits will not be flattened
by running 'git pull'.
+
+When the value is `interactive`, the rebase is run in interactive mode.
++
*NOTE*: this is a possibly dangerous operation; do *not* use
it unless you understand the implications (see linkgit:git-rebase[1]
for details).
credential.helper::
Specify an external helper to be called when a username or
password credential is needed; the helper may consult external
- storage to avoid prompting the user for the credentials. See
- linkgit:gitcredentials[7] for details.
+ storage to avoid prompting the user for the credentials. Note
+ that multiple helpers may be defined. See linkgit:gitcredentials[7]
+ for details.
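As a sketch of such a multi-helper setup (the `store` file path here is only an example); later helpers are consulted when earlier ones have no credential:

    git config --global credential.helper cache
    git config --global --add credential.helper 'store --file ~/.extra-credentials'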
credential.useHttpPath::
When acquiring credentials, consider the "path" component of an http
format-patch is invoked, but in addition can be set to "auto", to
generate a cover-letter only when there's more than one patch.
+format.outputDirectory::
+ Set a custom directory to store the resulting files instead of the
+ current working directory.
+
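For example, to collect patches in a dedicated directory (the name is chosen arbitrarily):

    git config format.outputDirectory outgoing
    git format-patch -1        # writes 0001-*.patch under outgoing/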
filter.<driver>.clean::
The command which is used to convert the content of a worktree
file to a blob upon checkin. See linkgit:gitattributes[5] for
option is ignored when the 'grep.patternType' option is set to a value
other than 'default'.
+grep.threads::
+ Number of grep worker threads to use.
+ See `grep.threads` in linkgit:git-grep[1] for more information.
+
+grep.fallbackToNoIndex::
+ If set to true, fall back to `git grep --no-index` if `git grep`
+ is executed outside of a git repository. Defaults to false.
+
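A quick sketch of both knobs together (the thread count is illustrative):

    git config --global grep.threads 4
    git config --global grep.fallbackToNoIndex true
    # outside any repository, this now behaves like 'git grep --no-index':
    git grep pattern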
gpg.program::
Use this custom program instead of "gpg" found on $PATH when
making or verifying a PGP signature. The program must support the
http.proxy::
Override the HTTP proxy, normally configured using the 'http_proxy',
- 'https_proxy', and 'all_proxy' environment variables (see
- `curl(1)`). This can be overridden on a per-remote basis; see
- remote.<name>.proxy
+ 'https_proxy', and 'all_proxy' environment variables (see `curl(1)`). In
+ addition to the syntax understood by curl, it is possible to specify a
+ proxy string with a user name but no password, in which case git will
+ attempt to acquire one in the same way it does for other credentials. See
+ linkgit:gitcredentials[7] for more information. The syntax thus is
+ '[protocol://][user[:password]@]proxyhost[:port]'. This can be overridden
+ on a per-remote basis; see `remote.<name>.proxy`.
+
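For example, to use a proxy with a user name while letting the credential machinery supply the password (host and port are made up):

    git config http.proxy http://proxyuser@proxy.example.com:8080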
+http.proxyAuthMethod::
+ Set the method with which to authenticate against the HTTP proxy. This
+ only takes effect if the configured proxy string contains a user name part
+ (i.e. is of the form 'user@host' or 'user@host:port'). This can be
+ overridden on a per-remote basis; see `remote.<name>.proxyAuthMethod`.
+ Both can be overridden by the 'GIT_HTTP_PROXY_AUTHMETHOD' environment
+ variable. Possible values are:
++
+--
+* `anyauth` - Automatically pick a suitable authentication method. It is
+ assumed that the proxy answers an unauthenticated request with a 407
+ status code and one or more Proxy-authenticate headers with supported
+ authentication methods. This is the default.
+* `basic` - HTTP Basic authentication
+* `digest` - HTTP Digest authentication; this prevents the password from being
+ transmitted to the proxy in clear text
+* `negotiate` - GSS-Negotiate authentication (compare the --negotiate option
+ of `curl(1)`)
+* `ntlm` - NTLM authentication (compare the --ntlm option of `curl(1)`)
+--
+
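A hypothetical setup that keeps automatic negotiation as the default but forces Digest authentication for one remote:

    git config http.proxyAuthMethod anyauth
    git config remote.origin.proxyAuthMethod digest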
+http.emptyAuth::
+ Attempt authentication without seeking a username or password. This
+ can be used to attempt GSS-Negotiate authentication without specifying
+ a username in the URL, as libcurl normally requires a username for
+ authentication.
http.cookieFile::
File containing previously stored cookie lines which should be used
with when fetching or pushing over HTTPS. Can be overridden
by the 'GIT_SSL_CAPATH' environment variable.
+http.pinnedpubkey::
+ Public key of the https service. It may either be the filename of
+ a PEM or DER encoded public key file or a string starting with
+ 'sha256//' followed by the base64 encoded sha256 hash of the
+ public key. See also libcurl 'CURLOPT_PINNEDPUBLICKEY'. git will
+ exit with an error if this option is set but not supported by
+ cURL.
+
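One way to compute the 'sha256//' form, following the recipe from the curl documentation (the certificate file name is illustrative):

    openssl x509 -in proxy.example.com.pem -pubkey -noout |
    openssl pkey -pubin -outform der |
    openssl dgst -sha256 -binary | openssl enc -base64

    git config http.pinnedpubkey "sha256//<base64 output from above>"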
http.sslTry::
Attempt to use AUTH SSL/TLS and encrypted data transfers
when connecting via regular FTP protocol. This might be needed
setting is silently ignored if portable keystroke input
is not available; requires the Perl module Term::ReadKey.
+interactive.diffFilter::
+ When an interactive command (such as `git add --patch`) shows
+ a colorized diff, git will pipe the diff through the shell
+ command defined by this configuration variable. The command may
+ mark up the diff further for human consumption, provided that it
+ retains a one-to-one correspondence with the lines in the
+ original diff. Defaults to disabled (no filtering).
+
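For instance, assuming the `diff-highlight` script from git's contrib/ directory is on your $PATH (it keeps the required line-for-line correspondence):

    git config interactive.diffFilter diff-highlight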
log.abbrevCommit::
If true, makes linkgit:git-log[1], linkgit:git-show[1], and
linkgit:git-whatchanged[1] assume `--abbrev-commit`. You may
larger than 2 GB.
+
If you have an old Git that does not understand the version 2 `*.idx` file,
-cloning or fetching over a non native protocol (e.g. "http" and "rsync")
+cloning or fetching over a non-native protocol (e.g. "http")
that will copy both `*.pack` file and corresponding `*.idx` file from the
other side may give you a repository that cannot be accessed with your
older version of Git. If the `*.pack` file is smaller than 2 GB, however,
so that locally committed merge commits will not be flattened
by running 'git pull'.
+
+When the value is `interactive`, the rebase is run in interactive mode.
++
*NOTE*: this is a possibly dangerous operation; do *not* use
it unless you understand the implications (see linkgit:git-rebase[1]
for details).
override a value from a lower-priority config file. An explicit
command-line flag always overrides this config option.
+push.recurseSubmodules::
+ Make sure all submodule commits used by the revisions to be pushed
+ are available on a remote-tracking branch. If the value is 'check'
+ then Git will verify that all submodule commits that changed in the
+ revisions to be pushed are available on at least one remote of the
+ submodule. If any commits are missing, the push will be aborted and
+ exit with non-zero status. If the value is 'on-demand' then all
+ submodules that changed in the revisions to be pushed will be
+ pushed. If 'on-demand' was not able to push all necessary
+ revisions, the push will also be aborted and exit with non-zero
+ status. If the value is 'no' then the default behavior of ignoring
+ submodules when pushing is retained. You may override this
+ configuration at the time of push by specifying
+ '--recurse-submodules=check|on-demand|no'.
+
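For example, a conservative default with a per-push override:

    git config push.recurseSubmodules check
    git push --recurse-submodules=on-demand    # push needed submodule commits this time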
rebase.stat::
Whether to show a diffstat of what changed upstream since the last
rebase. False by default.
the proxy to use for that remote. Set to the empty string to
disable proxying for that remote.
+remote.<name>.proxyAuthMethod::
+ For remotes that require curl (http, https and ftp), the method to use for
+ authenticating against the proxy in use (probably set in
+ `remote.<name>.proxy`). See `http.proxyAuthMethod`.
+
remote.<name>.fetch::
The default set of "refspec" for linkgit:git-fetch[1]. See
linkgit:git-fetch[1].
"--ignore-submodules" option. The 'git submodule' commands are not
affected by this setting.
+ submodule.fetchJobs::
+ Specifies how many submodules are fetched/cloned at the same time.
+ A positive integer allows up to that number of submodules to be
+ fetched in parallel. A value of 0 will give some reasonable default.
+ If unset, it defaults to 1.
+
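E.g.:

    git config submodule.fetchJobs 4    # up to four submodules at once
    git config submodule.fetchJobs 0    # let Git pick a reasonable default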
tag.sort::
This variable controls the sort ordering of tags when displayed by
linkgit:git-tag[1]. Without the "--sort=<value>" option provided, the
Can be overridden by the 'GIT_AUTHOR_NAME' and 'GIT_COMMITTER_NAME'
environment variables. See linkgit:git-commit-tree[1].
+user.useConfigOnly::
+ Instruct Git to avoid trying to guess defaults for 'user.email'
+ and 'user.name', and instead retrieve the values only from the
+ configuration. For example, if you have multiple email addresses
+ and would like to use a different one for each repository, then
+ with this configuration option set to `true` in the global config
+ along with a name, Git will prompt you to set up an email before
+ making new commits in a newly cloned repository.
+ Defaults to `false`.
+
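A plausible multi-identity setup (names and addresses are made up):

    git config --global user.name "J. Hacker"
    git config --global user.useConfigOnly true
    # in each new clone, before the first commit:
    git config user.email j.hacker@work.example.org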
user.signingKey::
If linkgit:git-tag[1] or linkgit:git-commit[1] is not selecting the
key you want it to automatically when creating a signed tag or
[-o <name>] [-b <name>] [-u <upload-pack>] [--reference <repository>]
[--dissociate] [--separate-git-dir <git dir>]
[--depth <depth>] [--[no-]single-branch]
- [--recursive | --recurse-submodules] [--] <repository>
+ [--recursive | --recurse-submodules] [--jobs <n>] [--] <repository>
[<directory>]
DESCRIPTION
--quiet::
-q::
Operate quietly. Progress is not reported to the standard
- error stream. This flag is also passed to the `rsync'
- command when given.
+ error stream.
--verbose::
-v::
--depth <depth>::
Create a 'shallow' clone with a history truncated to the
- specified number of revisions.
+ specified number of commits. Implies `--single-branch` unless
+ `--no-single-branch` is given to fetch the histories near the
+ tips of all branches.
--[no-]single-branch::
Clone only the history leading to the tip of a single branch,
either specified by the `--branch` option or the primary
- branch remote's `HEAD` points at. When creating a shallow
- clone with the `--depth` option, this is the default, unless
- `--no-single-branch` is given to fetch the histories near the
- tips of all branches.
+ branch remote's `HEAD` points at.
Further fetches into the resulting repository will only update the
remote-tracking branch for the branch this option was used for the
initial cloning. If the HEAD at the remote did not point at any
The result is Git repository can be separated from working
tree.
+ -j <n>::
+ --jobs <n>::
+ The number of submodules fetched at the same time.
+ Defaults to the `submodule.fetchJobs` option.
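For example (the URL is illustrative):

    git clone --recurse-submodules --jobs 4 https://example.com/super.git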
<repository>::
The (possibly remote) repository to clone from. See the
static char *option_upload_pack = "git-upload-pack";
static int option_verbosity;
static int option_progress = -1;
+static enum transport_family family;
static struct string_list option_config;
static struct string_list option_reference;
static int option_dissociate;
+ static int max_jobs = -1;
static struct option builtin_clone_options[] = {
OPT__VERBOSITY(&option_verbosity),
N_("initialize submodules in the clone")),
OPT_BOOL(0, "recurse-submodules", &option_recursive,
N_("initialize submodules in the clone")),
+ OPT_INTEGER('j', "jobs", &max_jobs,
+ N_("number of submodules cloned in parallel")),
OPT_STRING(0, "template", &option_template, N_("template-directory"),
N_("directory from which templates will be used")),
OPT_STRING_LIST(0, "reference", &option_reference, N_("repo"),
N_("separate git dir from working tree")),
OPT_STRING_LIST('c', "config", &option_config, N_("key=value"),
N_("set config inside the new repository")),
+ OPT_SET_INT('4', "ipv4", &family, N_("use IPv4 addresses only"),
+ TRANSPORT_FAMILY_IPV4),
+ OPT_SET_INT('6', "ipv6", &family, N_("use IPv6 addresses only"),
+ TRANSPORT_FAMILY_IPV6),
OPT_END()
};
- static const char *argv_submodule[] = {
- "submodule", "update", "--init", "--recursive", NULL
- };
-
static const char *get_repo_path_1(struct strbuf *path, int *is_bundle)
{
static char *suffix[] = { "/.git", "", ".git/.git", ".git" };
strip_suffix_mem(start, &len, is_bundle ? ".bundle" : ".git");
if (!len || (len == 1 && *start == '/'))
- die("No directory name could be guessed.\n"
- "Please specify a directory on the command line");
+ die(_("No directory name could be guessed.\n"
+ "Please specify a directory on the command line"));
if (is_bare)
dir = xstrfmt("%.*s.git", (int)len, start);
FILE *in = fopen(src->buf, "r");
struct strbuf line = STRBUF_INIT;
- while (strbuf_getline(&line, in, '\n') != EOF) {
+ while (strbuf_getline(&line, in) != EOF) {
char *abs_path;
if (!line.len || line.buf[0] == '#')
continue;
struct strbuf head_ref = STRBUF_INIT;
strbuf_addstr(&head_ref, branch_top);
strbuf_addstr(&head_ref, "HEAD");
- create_symref(head_ref.buf,
- remote_head_points_at->peer_ref->name,
- msg);
+ if (create_symref(head_ref.buf,
+ remote_head_points_at->peer_ref->name,
+ msg) < 0)
+ die(_("unable to update %s"), head_ref.buf);
+ strbuf_release(&head_ref);
}
}
const char *head;
if (our && skip_prefix(our->name, "refs/heads/", &head)) {
/* Local default branch link */
- create_symref("HEAD", our->name, NULL);
+ if (create_symref("HEAD", our->name, NULL) < 0)
+ die(_("unable to update HEAD"));
if (!option_bare) {
update_ref(msg, "HEAD", our->old_oid.hash, NULL, 0,
UPDATE_REFS_DIE_ON_ERR);
err |= run_hook_le(NULL, "post-checkout", sha1_to_hex(null_sha1),
sha1_to_hex(sha1), "1", NULL);
- if (!err && option_recursive)
- err = run_command_v_opt(argv_submodule, RUN_GIT_CMD);
+ if (!err && option_recursive) {
+ struct argv_array args = ARGV_ARRAY_INIT;
+ argv_array_pushl(&args, "submodule", "update", "--init", "--recursive", NULL);
+
+ if (max_jobs != -1)
+ argv_array_pushf(&args, "--jobs=%d", max_jobs);
+
+ err = run_command_v_opt(args.argv, RUN_GIT_CMD);
+ argv_array_clear(&args);
+ }
return err;
}
static int write_one_config(const char *key, const char *value, void *data)
{
- return git_config_set_multivar(key, value ? value : "true", "^$", 0);
+ return git_config_set_multivar_gently(key, value ? value : "true", "^$", 0);
}
static void write_config(struct string_list *config)
for (i = 0; i < config->nr; i++) {
if (git_config_parse_parameter(config->items[i].string,
write_one_config, NULL) < 0)
- die("unable to write parameters to config file");
+ die(_("unable to write parameters to config file"));
}
}
remote = remote_get(option_origin);
transport = transport_get(remote, remote->url[0]);
transport_set_verbosity(transport, option_verbosity, option_progress);
+ transport->family = family;
path = get_repo_path(remote->url[0], &is_bundle);
is_local = option_local != 0 && path && !is_bundle;
static int all, append, dry_run, force, keep, multiple, update_head_ok, verbosity;
static int progress = -1, recurse_submodules = RECURSE_SUBMODULES_DEFAULT;
static int tags = TAGS_DEFAULT, unshallow, update_shallow;
- static int max_children = 1;
+ static int max_children = -1;
+static enum transport_family family;
static const char *depth;
static const char *upload_pack;
static struct strbuf default_rla = STRBUF_INIT;
N_("accept refs that update .git/shallow")),
{ OPTION_CALLBACK, 0, "refmap", NULL, N_("refmap"),
N_("specify fetch refmap"), PARSE_OPT_NONEG, parse_refmap_arg },
+ OPT_SET_INT('4', "ipv4", &family, N_("use IPv4 addresses only"),
+ TRANSPORT_FAMILY_IPV4),
+ OPT_SET_INT('6', "ipv6", &family, N_("use IPv6 addresses only"),
+ TRANSPORT_FAMILY_IPV6),
OPT_END()
};
static int truncate_fetch_head(void)
{
const char *filename = git_path_fetch_head();
- FILE *fp = fopen(filename, "w");
+ FILE *fp = fopen_for_writing(filename);
if (!fp)
return error(_("cannot open %s: %s\n"), filename, strerror(errno));
struct transport *transport;
transport = transport_get(remote, NULL);
transport_set_verbosity(transport, verbosity, progress);
+ transport->family = family;
if (upload_pack)
set_option(transport, TRANS_OPT_UPLOADPACK, upload_pack);
if (keep)
git_config(get_remote_group, &g);
if (list->nr == prev_nr) {
- struct remote *remote;
- if (!remote_is_configured(name))
+ struct remote *remote = remote_get(name);
+ if (!remote_is_configured(remote))
return 0;
- remote = remote_get(name);
string_list_append(list, remote->name);
}
return 1;
if (argc > 0) {
int j = 0;
int i;
- refs = xcalloc(argc + 1, sizeof(const char *));
+ refs = xcalloc(st_add(argc, 1), sizeof(const char *));
for (i = 0; i < argc; i++) {
if (!strcmp(argv[i], "tag")) {
i++;
list.strdup_strings = 1;
string_list_clear(&list, 0);
+ close_all_packs();
+
argv_array_pushl(&argv_gc_auto, "gc", "--auto", NULL);
if (verbosity < 0)
argv_array_push(&argv_gc_auto, "--quiet");
struct module_list *list)
{
int i, result = 0;
- char *max_prefix, *ps_matched = NULL;
- int max_prefix_len;
+ char *ps_matched = NULL;
parse_pathspec(pathspec, 0,
PATHSPEC_PREFER_FULL |
PATHSPEC_STRIP_SUBMODULE_SLASH_CHEAP,
prefix, argv);
- /* Find common prefix for all pathspec's */
- max_prefix = common_prefix(pathspec);
- max_prefix_len = max_prefix ? strlen(max_prefix) : 0;
-
if (pathspec->nr)
ps_matched = xcalloc(pathspec->nr, 1);
for (i = 0; i < active_nr; i++) {
const struct cache_entry *ce = active_cache[i];
- if (!S_ISGITLINK(ce->ce_mode) ||
- !match_pathspec(pathspec, ce->name, ce_namelen(ce),
- max_prefix_len, ps_matched, 1))
+ if (!match_pathspec(pathspec, ce->name, ce_namelen(ce),
+ 0, ps_matched, 1) ||
+ !S_ISGITLINK(ce->ce_mode))
continue;
ALLOC_GROW(list->entries, list->nr + 1, list->alloc);
*/
i++;
}
- free(max_prefix);
if (ps_matched && report_path_error(ps_matched, pathspec, prefix))
result = -1;
return 0;
}
+ struct submodule_update_clone {
+ /* index into 'list', the list of submodules to look into for cloning */
+ int current;
+ struct module_list list;
+ unsigned warn_if_uninitialized : 1;
+
+ /* update parameter passed via commandline */
+ struct submodule_update_strategy update;
+
+ /* configuration parameters which are passed on to the children */
+ int quiet;
+ const char *reference;
+ const char *depth;
+ const char *recursive_prefix;
+ const char *prefix;
+
+ /* Machine-readable status lines to be consumed by git-submodule.sh */
+ struct string_list projectlines;
+
+ /* If we want to stop as fast as possible and return an error */
+ unsigned quickstop : 1;
+ };
+ #define SUBMODULE_UPDATE_CLONE_INIT {0, MODULE_LIST_INIT, 0, \
+ SUBMODULE_UPDATE_STRATEGY_INIT, 0, NULL, NULL, NULL, NULL, \
+ STRING_LIST_INIT_DUP, 0}
+
+ /**
+ * Determine whether 'ce' needs to be cloned. If so, prepare the 'child' to
+ * run the clone. Returns 1 if 'ce' needs to be cloned, 0 otherwise.
+ */
+ static int prepare_to_clone_next_submodule(const struct cache_entry *ce,
+ struct child_process *child,
+ struct submodule_update_clone *suc,
+ struct strbuf *out)
+ {
+ const struct submodule *sub = NULL;
+ struct strbuf displaypath_sb = STRBUF_INIT;
+ struct strbuf sb = STRBUF_INIT;
+ const char *displaypath = NULL;
+ char *url = NULL;
+ int needs_cloning = 0;
+
+ if (ce_stage(ce)) {
+ if (suc->recursive_prefix)
+ strbuf_addf(&sb, "%s/%s", suc->recursive_prefix, ce->name);
+ else
+ strbuf_addstr(&sb, ce->name);
+ strbuf_addf(out, _("Skipping unmerged submodule %s"), sb.buf);
+ strbuf_addch(out, '\n');
+ goto cleanup;
+ }
+
+ sub = submodule_from_path(null_sha1, ce->name);
+
+ if (suc->recursive_prefix)
+ displaypath = relative_path(suc->recursive_prefix,
+ ce->name, &displaypath_sb);
+ else
+ displaypath = ce->name;
+
+ if (suc->update.type == SM_UPDATE_NONE
+ || (suc->update.type == SM_UPDATE_UNSPECIFIED
+ && sub->update_strategy.type == SM_UPDATE_NONE)) {
+ strbuf_addf(out, _("Skipping submodule '%s'"), displaypath);
+ strbuf_addch(out, '\n');
+ goto cleanup;
+ }
+
+ /*
+ * Looking up the url in .git/config.
+ * We must not fall back to .gitmodules as we only want
+ * to process configured submodules.
+ */
+ strbuf_reset(&sb);
+ strbuf_addf(&sb, "submodule.%s.url", sub->name);
+ git_config_get_string(sb.buf, &url);
+ if (!url) {
+ /*
+ * Only mention uninitialized submodules when their
+ * paths have been specified
+ */
+ if (suc->warn_if_uninitialized) {
+ strbuf_addf(out,
+ _("Submodule path '%s' not initialized"),
+ displaypath);
+ strbuf_addch(out, '\n');
+ strbuf_addstr(out,
+ _("Maybe you want to use 'update --init'?"));
+ strbuf_addch(out, '\n');
+ }
+ goto cleanup;
+ }
+
+ strbuf_reset(&sb);
+ strbuf_addf(&sb, "%s/.git", ce->name);
+ needs_cloning = !file_exists(sb.buf);
+
+ strbuf_reset(&sb);
+ strbuf_addf(&sb, "%06o %s %d %d\t%s\n", ce->ce_mode,
+ sha1_to_hex(ce->sha1), ce_stage(ce),
+ needs_cloning, ce->name);
+ string_list_append(&suc->projectlines, sb.buf);
+
+ if (!needs_cloning)
+ goto cleanup;
+
+ child->git_cmd = 1;
+ child->no_stdin = 1;
+ child->stdout_to_stderr = 1;
+ child->err = -1;
+ argv_array_push(&child->args, "submodule--helper");
+ argv_array_push(&child->args, "clone");
+ if (suc->quiet)
+ argv_array_push(&child->args, "--quiet");
+ if (suc->prefix)
+ argv_array_pushl(&child->args, "--prefix", suc->prefix, NULL);
+ argv_array_pushl(&child->args, "--path", sub->path, NULL);
+ argv_array_pushl(&child->args, "--name", sub->name, NULL);
+ argv_array_pushl(&child->args, "--url", url, NULL);
+ if (suc->reference)
+ argv_array_push(&child->args, suc->reference);
+ if (suc->depth)
+ argv_array_push(&child->args, suc->depth);
+
+ cleanup:
+ free(url);
+ strbuf_release(&displaypath_sb);
+ strbuf_release(&sb);
+
+ return needs_cloning;
+ }
+
+ static int update_clone_get_next_task(struct child_process *child,
+ struct strbuf *err,
+ void *suc_cb,
+ void **void_task_cb)
+ {
+ struct submodule_update_clone *suc = suc_cb;
+
+ for (; suc->current < suc->list.nr; suc->current++) {
+ const struct cache_entry *ce = suc->list.entries[suc->current];
+ if (prepare_to_clone_next_submodule(ce, child, suc, err)) {
+ suc->current++;
+ return 1;
+ }
+ }
+ return 0;
+ }
+
+ static int update_clone_start_failure(struct strbuf *err,
+ void *suc_cb,
+ void *void_task_cb)
+ {
+ struct submodule_update_clone *suc = suc_cb;
+ suc->quickstop = 1;
+ return 1;
+ }
+
+ static int update_clone_task_finished(int result,
+ struct strbuf *err,
+ void *suc_cb,
+ void *void_task_cb)
+ {
+ struct submodule_update_clone *suc = suc_cb;
+
+ if (!result)
+ return 0;
+
+ suc->quickstop = 1;
+ return 1;
+ }
+
+ static int update_clone(int argc, const char **argv, const char *prefix)
+ {
+ const char *update = NULL;
+ int max_jobs = -1;
+ struct string_list_item *item;
+ struct pathspec pathspec;
+ struct submodule_update_clone suc = SUBMODULE_UPDATE_CLONE_INIT;
+
+ struct option module_update_clone_options[] = {
+ OPT_STRING(0, "prefix", &prefix,
+ N_("path"),
+ N_("path into the working tree")),
+ OPT_STRING(0, "recursive-prefix", &suc.recursive_prefix,
+ N_("path"),
+ N_("path into the working tree, across nested "
+ "submodule boundaries")),
+ OPT_STRING(0, "update", &update,
+ N_("string"),
+ N_("rebase, merge, checkout or none")),
+ OPT_STRING(0, "reference", &suc.reference, N_("repo"),
+ N_("reference repository")),
+ OPT_STRING(0, "depth", &suc.depth, "<depth>",
+ N_("Create a shallow clone truncated to the "
+ "specified number of revisions")),
+ OPT_INTEGER('j', "jobs", &max_jobs,
+ N_("parallel jobs")),
+ OPT__QUIET(&suc.quiet, N_("don't print cloning progress")),
+ OPT_END()
+ };
+
+ const char *const git_submodule_helper_usage[] = {
+ N_("git submodule--helper update_clone [--prefix=<path>] [<path>...]"),
+ NULL
+ };
+ suc.prefix = prefix;
+
+ argc = parse_options(argc, argv, prefix, module_update_clone_options,
+ git_submodule_helper_usage, 0);
+
+ if (update)
+ if (parse_submodule_update_strategy(update, &suc.update) < 0)
+ die(_("bad value for update parameter"));
+
+ if (module_list_compute(argc, argv, prefix, &pathspec, &suc.list) < 0)
+ return 1;
+
+ if (pathspec.nr)
+ suc.warn_if_uninitialized = 1;
+
+ /* Overlay the parsed .gitmodules file with .git/config */
+ gitmodules_config();
+ git_config(submodule_config, NULL);
+
+ if (max_jobs < 0)
+ max_jobs = parallel_submodules();
+
+ run_processes_parallel(max_jobs,
+ update_clone_get_next_task,
+ update_clone_start_failure,
+ update_clone_task_finished,
+ &suc);
+
+ /*
+ * We saved the output and now print it all at once.
+ * That means:
+ * - the listener does not have to interleave their (checkout)
+ * work with our fetching. The writes involved in a
+ * checkout involve more straightforward sequential I/O.
+ * - the listener can avoid doing any work if fetching failed.
+ */
+ if (suc.quickstop)
+ return 1;
+
+ for_each_string_list_item(item, &suc.projectlines)
+ utf8_fprintf(stdout, "%s", item->string);
+
+ return 0;
+ }
+
struct cmd_struct {
const char *cmd;
int (*fn)(int, const char **, const char *);
{"list", module_list},
{"name", module_name},
{"clone", module_clone},
+ {"update-clone", update_clone}
};
int cmd_submodule__helper(int argc, const char **argv, const char *prefix)
{
int i;
if (argc < 2)
- die(_("fatal: submodule--helper subcommand must be "
+ die(_("submodule--helper subcommand must be "
"called with a subcommand"));
for (i = 0; i < ARRAY_SIZE(commands); i++)
if (!strcmp(argv[1], commands[i].cmd))
return commands[i].fn(argc - 1, argv + 1, prefix);
- die(_("fatal: '%s' is not a valid submodule--helper "
+ die(_("'%s' is not a valid submodule--helper "
"subcommand"), argv[1]);
}
done
}
+is_tip_reachable () (
+ clear_local_git_env
+ cd "$1" &&
+ rev=$(git rev-list -n 1 "$2" --not --all 2>/dev/null) &&
+ test -z "$rev"
+)
+
+fetch_in_submodule () (
+ clear_local_git_env
+ cd "$1" &&
+ case "$2" in
+ '')
+ git fetch ;;
+ *)
+ git fetch $(get_default_remote) "$2" ;;
+ esac
+)
+
#
# Update each submodule path to correct revision, using clone and checkout as needed
#
--depth=*)
depth=$1
;;
+ -j|--jobs)
+ case "$2" in '') usage ;; esac
+ jobs="--jobs=$2"
+ shift
+ ;;
+ --jobs=*)
+ jobs=$1
+ ;;
--)
shift
break
cmd_init "--" "$@" || return
fi
- cloned_modules=
- git submodule--helper list --prefix "$wt_prefix" "$@" | {
+ {
+ git submodule--helper update-clone ${GIT_QUIET:+--quiet} \
+ ${wt_prefix:+--prefix "$wt_prefix"} \
+ ${prefix:+--recursive-prefix "$prefix"} \
+ ${update:+--update "$update"} \
+ ${reference:+--reference "$reference"} \
+ ${depth:+--depth "$depth"} \
+ ${jobs:+$jobs} \
+ "$@" || echo "#unmatched"
+ } | {
err=
- while read mode sha1 stage sm_path
+ while read mode sha1 stage just_cloned sm_path
do
die_if_unmatched "$mode"
- if test "$stage" = U
- then
- echo >&2 "Skipping unmerged submodule $prefix$sm_path"
- continue
- fi
+
name=$(git submodule--helper name "$sm_path") || exit
url=$(git config submodule."$name".url)
branch=$(get_submodule_config "$name" branch master)
displaypath=$(relative_path "$prefix$sm_path")
- if test "$update_module" = "none"
+ if test $just_cloned -eq 1
then
- echo "Skipping submodule '$displaypath'"
- continue
- fi
-
- if test -z "$url"
- then
- # Only mention uninitialized submodules when its
- # path have been specified
- test "$#" != "0" &&
- say "$(eval_gettext "Submodule path '\$displaypath' not initialized
- Maybe you want to use 'update --init'?")"
- continue
- fi
-
- if ! test -d "$sm_path"/.git && ! test -f "$sm_path"/.git
- then
- git submodule--helper clone ${GIT_QUIET:+--quiet} --prefix "$prefix" --path "$sm_path" --name "$name" --url "$url" "$reference" "$depth" || exit
- cloned_modules="$cloned_modules;$name"
subsha1=
+ update_module=checkout
else
subsha1=$(clear_local_git_env; cd "$sm_path" &&
git rev-parse --verify HEAD) ||
then
# Run fetch only if $sha1 isn't present or it
# is not reachable from a ref.
- (clear_local_git_env; cd "$sm_path" &&
- ( (rev=$(git rev-list -n 1 $sha1 --not --all 2>/dev/null) &&
- test -z "$rev") || git-fetch)) ||
+ is_tip_reachable "$sm_path" "$sha1" ||
+ fetch_in_submodule "$sm_path" ||
die "$(eval_gettext "Unable to fetch in submodule path '\$displaypath'")"
+
+ # Now we tried the usual fetch, but $sha1 may
+ # not be reachable from any of the refs
+ is_tip_reachable "$sm_path" "$sha1" ||
+ fetch_in_submodule "$sm_path" "$sha1" ||
+ die "$(eval_gettext "Fetched in submodule path '\$displaypath', but it did not contain $sha1. Direct fetching of that commit failed.")"
fi
- # Is this something we just cloned?
- case ";$cloned_modules;" in
- *";$name;"*)
- # then there is no local change to integrate
- update_module=checkout ;;
- esac
-
must_die_on_failure=
case "$update_module" in
checkout)
return -1;
}
-static const char **prepare_shell_cmd(const char **argv)
+static const char **prepare_shell_cmd(struct argv_array *out, const char **argv)
{
- int argc, nargc = 0;
- const char **nargv;
-
- for (argc = 0; argv[argc]; argc++)
- ; /* just counting */
- /* +1 for NULL, +3 for "sh -c" plus extra $0 */
- nargv = xmalloc(sizeof(*nargv) * (argc + 1 + 3));
-
- if (argc < 1)
+ if (!argv[0])
die("BUG: shell command is empty");
if (strcspn(argv[0], "|&;<>()$`\\\"' \t\n*?[#~=%") != strlen(argv[0])) {
#ifndef GIT_WINDOWS_NATIVE
- nargv[nargc++] = SHELL_PATH;
+ argv_array_push(out, SHELL_PATH);
#else
- nargv[nargc++] = "sh";
+ argv_array_push(out, "sh");
#endif
- nargv[nargc++] = "-c";
-
- if (argc < 2)
- nargv[nargc++] = argv[0];
- else {
- struct strbuf arg0 = STRBUF_INIT;
- strbuf_addf(&arg0, "%s \"$@\"", argv[0]);
- nargv[nargc++] = strbuf_detach(&arg0, NULL);
- }
- }
+ argv_array_push(out, "-c");
- for (argc = 0; argv[argc]; argc++)
- nargv[nargc++] = argv[argc];
- nargv[nargc] = NULL;
+ /*
+ * If we have no extra arguments, we do not even need to
+ * bother with the "$@" magic.
+ */
+ if (!argv[1])
+ argv_array_push(out, argv[0]);
+ else
+ argv_array_pushf(out, "%s \"$@\"", argv[0]);
+ }
- return nargv;
+ argv_array_pushv(out, argv);
+ return out->argv;
}
#ifndef GIT_WINDOWS_NATIVE
static int execv_shell_cmd(const char **argv)
{
- const char **nargv = prepare_shell_cmd(argv);
- trace_argv_printf(nargv, "trace: exec:");
- sane_execvp(nargv[0], (char **)nargv);
- free(nargv);
+ struct argv_array nargv = ARGV_ARRAY_INIT;
+ prepare_shell_cmd(&nargv, argv);
+ trace_argv_printf(nargv.argv, "trace: exec:");
+ sane_execvp(nargv.argv[0], (char **)nargv.argv);
+ argv_array_clear(&nargv);
return -1;
}
#endif
error("waitpid is confused (%s)", argv0);
} else if (WIFSIGNALED(status)) {
code = WTERMSIG(status);
- if (code != SIGINT && code != SIGQUIT)
+ if (code != SIGINT && code != SIGQUIT && code != SIGPIPE)
error("%s died of signal %d", argv0, code);
/*
* This return value is chosen so that code & 0xff
{
int fhin = 0, fhout = 1, fherr = 2;
const char **sargv = cmd->argv;
+ struct argv_array nargv = ARGV_ARRAY_INIT;
if (cmd->no_stdin)
fhin = open("/dev/null", O_RDWR);
fhout = dup(cmd->out);
if (cmd->git_cmd)
- cmd->argv = prepare_git_cmd(cmd->argv);
+ cmd->argv = prepare_git_cmd(&nargv, cmd->argv);
else if (cmd->use_shell)
- cmd->argv = prepare_shell_cmd(cmd->argv);
+ cmd->argv = prepare_shell_cmd(&nargv, cmd->argv);
cmd->pid = mingw_spawnvpe(cmd->argv[0], cmd->argv, (char**) cmd->env,
cmd->dir, fhin, fhout, fherr);
if (cmd->clean_on_exit && cmd->pid >= 0)
mark_child_for_cleanup(cmd->pid);
- if (cmd->git_cmd)
- free(cmd->argv);
-
+ argv_array_clear(&nargv);
cmd->argv = sargv;
if (fhin != 0)
close(fhin);
return !pthread_equal(main_thread, pthread_self());
}
+void NORETURN async_exit(int code)
+{
+ pthread_exit((void *)(intptr_t)code);
+}
+
#else
static struct {
return process_is_async;
}
+void NORETURN async_exit(int code)
+{
+ exit(code);
+}
+
#endif
int start_async(struct async *async)
struct strbuf buffered_output; /* of finished children */
};
- static int default_start_failure(struct strbuf *err,
+ static int default_start_failure(struct strbuf *out,
void *pp_cb,
void *pp_task_cb)
{
}
static int default_task_finished(int result,
- struct strbuf *err,
+ struct strbuf *out,
void *pp_cb,
void *pp_task_cb)
{
* When get_next_task added messages to the buffer in its last
* iteration, the buffered output is non empty.
*/
- fputs(pp->buffered_output.buf, stderr);
+ strbuf_write(&pp->buffered_output, stderr);
strbuf_release(&pp->buffered_output);
sigchain_pop_common();
int i = pp->output_owner;
if (pp->children[i].state == GIT_CP_WORKING &&
pp->children[i].err.len) {
- fputs(pp->children[i].err.buf, stderr);
+ strbuf_write(&pp->children[i].err, stderr);
strbuf_reset(&pp->children[i].err);
}
}
strbuf_addbuf(&pp->buffered_output, &pp->children[i].err);
strbuf_reset(&pp->children[i].err);
} else {
- fputs(pp->children[i].err.buf, stderr);
+ strbuf_write(&pp->children[i].err, stderr);
strbuf_reset(&pp->children[i].err);
/* Output all other finished child processes */
- fputs(pp->buffered_output.buf, stderr);
+ strbuf_write(&pp->buffered_output, stderr);
strbuf_reset(&pp->buffered_output);
/*
int start_async(struct async *async);
int finish_async(struct async *async);
int in_async(void);
+void NORETURN async_exit(int code);
/**
* This callback should initialize the child process and preload the
* return the negative signal number.
*/
typedef int (*get_next_task_fn)(struct child_process *cp,
- struct strbuf *err,
+ struct strbuf *out,
void *pp_cb,
void **pp_task_cb);
* a new process.
*
* You must not write to stdout or stderr in this function. Add your
- * message to the strbuf err instead, which will be printed without
+ * message to the strbuf out instead, which will be printed without
* messing up the output of the other parallel processes.
*
* pp_cb is the callback cookie as passed into run_processes_parallel,
* To send a signal to other child processes for abortion, return
* the negative signal number.
*/
- typedef int (*start_failure_fn)(struct strbuf *err,
+ typedef int (*start_failure_fn)(struct strbuf *out,
void *pp_cb,
void *pp_task_cb);
* This callback is called on every child process that finished processing.
*
* You must not write to stdout or stderr in this function. Add your
- * message to the strbuf err instead, which will be printed without
+ * message to the strbuf out instead, which will be printed without
* messing up the output of the other parallel processes.
*
* pp_cb is the callback cookie as passed into run_processes_parallel,
* the negative signal number.
*/
typedef int (*task_finished_fn)(int result,
- struct strbuf *err,
+ struct strbuf *out,
void *pp_cb,
void *pp_task_cb);
return cnt;
}
+ ssize_t strbuf_write(struct strbuf *sb, FILE *f)
+ {
+ return sb->len ? fwrite(sb->buf, 1, sb->len, f) : 0;
+ }
+
#define STRBUF_MAXLINK (2*PATH_MAX)
int strbuf_readlink(struct strbuf *sb, const char *path, size_t hint)
if (errno == ENOMEM)
die("Out of memory, getdelim failed");
- /* Restore slopbuf that we moved out of the way before */
+ /*
+ * Restore strbuf invariants; if getdelim left us with a NULL pointer,
+ * we can just re-init, but otherwise we should make sure that our
+ * length is empty, and that the result is NUL-terminated.
+ */
if (!sb->buf)
strbuf_init(sb, 0);
+ else
+ strbuf_reset(sb);
return EOF;
}
#else
}
#endif
-int strbuf_getline(struct strbuf *sb, FILE *fp, int term)
+static int strbuf_getdelim(struct strbuf *sb, FILE *fp, int term)
{
if (strbuf_getwholeline(sb, fp, term))
return EOF;
- if (sb->buf[sb->len-1] == term)
- strbuf_setlen(sb, sb->len-1);
+ if (sb->buf[sb->len - 1] == term)
+ strbuf_setlen(sb, sb->len - 1);
+ return 0;
+}
+
+int strbuf_getline(struct strbuf *sb, FILE *fp)
+{
+ if (strbuf_getwholeline(sb, fp, '\n'))
+ return EOF;
+ if (sb->buf[sb->len - 1] == '\n') {
+ strbuf_setlen(sb, sb->len - 1);
+ if (sb->len && sb->buf[sb->len - 1] == '\r')
+ strbuf_setlen(sb, sb->len - 1);
+ }
return 0;
}
+int strbuf_getline_lf(struct strbuf *sb, FILE *fp)
+{
+ return strbuf_getdelim(sb, fp, '\n');
+}
+
+int strbuf_getline_nul(struct strbuf *sb, FILE *fp)
+{
+ return strbuf_getdelim(sb, fp, '\0');
+}
+
int strbuf_getwholeline_fd(struct strbuf *sb, int fd, int term)
{
strbuf_reset(sb);
size_t len, i;
len = strlen(string);
- result = xmalloc(len + 1);
+ result = xmallocz(len);
for (i = 0; i < len; i++)
result[i] = tolower(string[i]);
result[i] = '\0';
*
* NOTE: The buffer is rewound if the read fails. If -1 is returned,
* `errno` must be consulted, like you would do for `read(3)`.
- * `strbuf_read()`, `strbuf_read_file()` and `strbuf_getline()` has the
- * same behaviour as well.
+ * `strbuf_read()`, `strbuf_read_file()` and `strbuf_getline_*()`
+ * family of functions have the same behaviour as well.
*/
extern size_t strbuf_fread(struct strbuf *, size_t, FILE *);
*/
extern int strbuf_readlink(struct strbuf *sb, const char *path, size_t hint);
+ /**
+ * Write the whole content of the strbuf to the stream, not
+ * stopping at NUL bytes.
+ */
+ extern ssize_t strbuf_write(struct strbuf *sb, FILE *stream);
+
/**
- * Read a line from a FILE *, overwriting the existing contents
- * of the strbuf. The second argument specifies the line
- * terminator character, typically `'\n'`.
+ * Read a line from a FILE *, overwriting the existing contents of
+ * the strbuf. The strbuf_getline*() family of functions share
+ * this signature, but have different line termination conventions.
+ *
* Reading stops after the terminator or at EOF. The terminator
* is removed from the buffer before returning. Returns 0 unless
* there was nothing left before EOF, in which case it returns `EOF`.
*/
-extern int strbuf_getline(struct strbuf *, FILE *, int);
+typedef int (*strbuf_getline_fn)(struct strbuf *, FILE *);
+
+/* Uses LF as the line terminator */
+extern int strbuf_getline_lf(struct strbuf *sb, FILE *fp);
+
+/* Uses NUL as the line terminator */
+extern int strbuf_getline_nul(struct strbuf *sb, FILE *fp);
+
+/*
+ * Similar to strbuf_getline_lf(), but additionally treats a CR that
+ * comes immediately before the LF as part of the terminator.
+ * This is the most friendly version to be used to read "text" files
+ * that can come from platforms whose native text format is CRLF
+ * terminated.
+ */
+extern int strbuf_getline(struct strbuf *, FILE *);
+
/**
* Like `strbuf_getline`, but keeps the trailing terminator (if
{
free((void *) entry->config->path);
free((void *) entry->config->name);
+ free((void *) entry->config->update_strategy.command);
free(entry->config);
}
submodule->path = NULL;
submodule->url = NULL;
+ submodule->update_strategy.type = SM_UPDATE_UNSPECIFIED;
+ submodule->update_strategy.command = NULL;
submodule->fetch_recurse = RECURSE_SUBMODULES_NONE;
submodule->ignore = NULL;
return parse_fetch_recurse(opt, arg, 1);
}
+static int parse_push_recurse(const char *opt, const char *arg,
+ int die_on_error)
+{
+ switch (git_config_maybe_bool(opt, arg)) {
+ case 1:
+ /* There's no simple "on" value when pushing */
+ if (die_on_error)
+ die("bad %s argument: %s", opt, arg);
+ else
+ return RECURSE_SUBMODULES_ERROR;
+ case 0:
+ return RECURSE_SUBMODULES_OFF;
+ default:
+ if (!strcmp(arg, "on-demand"))
+ return RECURSE_SUBMODULES_ON_DEMAND;
+ else if (!strcmp(arg, "check"))
+ return RECURSE_SUBMODULES_CHECK;
+ else if (die_on_error)
+ die("bad %s argument: %s", opt, arg);
+ else
+ return RECURSE_SUBMODULES_ERROR;
+ }
+}
+
+int parse_push_recurse_submodules_arg(const char *opt, const char *arg)
+{
+ return parse_push_recurse(opt, arg, 1);
+}
+
static void warn_multiple_config(const unsigned char *commit_sha1,
const char *name, const char *option)
{
if (!strcmp(item.buf, "path")) {
if (!value)
ret = config_error_nonbool(var);
- else if (!me->overwrite && submodule->path != NULL)
+ else if (!me->overwrite && submodule->path)
warn_multiple_config(me->commit_sha1, submodule->name,
"path");
else {
} else if (!strcmp(item.buf, "ignore")) {
if (!value)
ret = config_error_nonbool(var);
- else if (!me->overwrite && submodule->ignore != NULL)
+ else if (!me->overwrite && submodule->ignore)
warn_multiple_config(me->commit_sha1, submodule->name,
"ignore");
else if (strcmp(value, "untracked") &&
} else if (!strcmp(item.buf, "url")) {
if (!value) {
ret = config_error_nonbool(var);
- } else if (!me->overwrite && submodule->url != NULL) {
+ } else if (!me->overwrite && submodule->url) {
warn_multiple_config(me->commit_sha1, submodule->name,
"url");
} else {
free((void *) submodule->url);
submodule->url = xstrdup(value);
}
+ } else if (!strcmp(item.buf, "update")) {
+ if (!value)
+ ret = config_error_nonbool(var);
+ else if (!me->overwrite &&
+ submodule->update_strategy.type != SM_UPDATE_UNSPECIFIED)
+ warn_multiple_config(me->commit_sha1, submodule->name,
+ "update");
+ else if (parse_submodule_update_strategy(value,
+ &submodule->update_strategy) < 0)
+ die(_("invalid value for %s"), var);
}
strbuf_release(&name);
parameter.commit_sha1 = commit_sha1;
parameter.gitmodules_sha1 = sha1;
parameter.overwrite = 0;
- git_config_from_buf(parse_config, rev.buf, config, config_size,
- ¶meter);
+ git_config_from_mem(parse_config, "submodule-blob", rev.buf,
+ config, config_size, ¶meter);
free(config);
switch (lookup_type) {
#define SUBMODULE_CONFIG_CACHE_H
#include "hashmap.h"
+ #include "submodule.h"
#include "strbuf.h"
/*
const char *url;
int fetch_recurse;
const char *ignore;
+ struct submodule_update_strategy update_strategy;
/* the sha1 blob id of the responsible .gitmodules file */
unsigned char gitmodules_sha1[20];
};
int parse_fetch_recurse_submodules_arg(const char *opt, const char *arg);
+int parse_push_recurse_submodules_arg(const char *opt, const char *arg);
int parse_submodule_config_option(const char *var, const char *value);
const struct submodule *submodule_from_name(const unsigned char *commit_sha1,
const char *name);
#include "thread-utils.h"
static int config_fetch_recurse_submodules = RECURSE_SUBMODULES_ON_DEMAND;
+ static int parallel_jobs = 1;
static struct string_list changed_submodule_paths;
static int initialized_fetch_ref_tips;
static struct sha1_array ref_tips_before_fetch;
strbuf_addstr(&entry, "submodule.");
strbuf_addstr(&entry, submodule->name);
strbuf_addstr(&entry, ".path");
- if (git_config_set_in_file(".gitmodules", entry.buf, newpath) < 0) {
+ if (git_config_set_in_file_gently(".gitmodules", entry.buf, newpath) < 0) {
/* Maybe the user already did that, don't error out here */
warning(_("Could not update .gitmodules entry %s"), entry.buf);
strbuf_release(&entry);
struct strbuf objects_directory = STRBUF_INIT;
struct alternate_object_database *alt_odb;
int ret = 0;
- int alloc;
+ size_t alloc;
strbuf_git_path_submodule(&objects_directory, path, "objects/");
if (!is_directory(objects_directory.buf)) {
objects_directory.len))
goto done;
- alloc = objects_directory.len + 42; /* for "12/345..." sha1 */
- alt_odb = xmalloc(sizeof(*alt_odb) + alloc);
+ alloc = st_add(objects_directory.len, 42); /* for "12/345..." sha1 */
+ alt_odb = xmalloc(st_add(sizeof(*alt_odb), alloc));
alt_odb->next = alt_odb_list;
xsnprintf(alt_odb->base, alloc, "%s", objects_directory.buf);
alt_odb->name = alt_odb->base + objects_directory.len;
int submodule_config(const char *var, const char *value, void *cb)
{
- if (starts_with(var, "submodule."))
+ if (!strcmp(var, "submodule.fetchjobs")) {
+ parallel_jobs = git_config_int(var, value);
+ if (parallel_jobs < 0)
+ die(_("negative values not allowed for submodule.fetchJobs"));
+ return 0;
+ } else if (starts_with(var, "submodule."))
return parse_submodule_config_option(var, value);
else if (!strcmp(var, "fetch.recursesubmodules")) {
config_fetch_recurse_submodules = parse_fetch_recurse_submodules_arg(var, value);
}
}
+ int parse_submodule_update_strategy(const char *value,
+ struct submodule_update_strategy *dst)
+ {
+ free((void *)dst->command);
+ dst->command = NULL;
+ if (!strcmp(value, "none"))
+ dst->type = SM_UPDATE_NONE;
+ else if (!strcmp(value, "checkout"))
+ dst->type = SM_UPDATE_CHECKOUT;
+ else if (!strcmp(value, "rebase"))
+ dst->type = SM_UPDATE_REBASE;
+ else if (!strcmp(value, "merge"))
+ dst->type = SM_UPDATE_MERGE;
+ else if (skip_prefix(value, "!", &value)) {
+ dst->type = SM_UPDATE_COMMAND;
+ dst->command = xstrdup(value);
+ } else
+ return -1;
+ return 0;
+ }
+
void handle_ignore_submodules_arg(struct diff_options *diffopt,
const char *arg)
{
argv_array_push(&spf.args, "--recurse-submodules-default");
/* default value, "--submodule-prefix" and its value are added later */
+ if (max_parallel_jobs < 0)
+ max_parallel_jobs = parallel_jobs;
+
calculate_changed_submodule_paths();
run_processes_parallel(max_parallel_jobs,
get_next_submodule,
/* Update core.worktree setting */
strbuf_reset(&file_name);
strbuf_addf(&file_name, "%s/config", git_dir);
- if (git_config_set_in_file(file_name.buf, "core.worktree",
- relative_path(real_work_tree, git_dir,
- &rel_path)))
- die(_("Could not set core.worktree in %s"),
- file_name.buf);
+ git_config_set_in_file(file_name.buf, "core.worktree",
+ relative_path(real_work_tree, git_dir,
+ &rel_path));
strbuf_release(&file_name);
strbuf_release(&rel_path);
free((void *)real_work_tree);
}
+
+ int parallel_submodules(void)
+ {
+ return parallel_jobs;
+ }
struct argv_array;
enum {
+ RECURSE_SUBMODULES_CHECK = -4,
RECURSE_SUBMODULES_ERROR = -3,
RECURSE_SUBMODULES_NONE = -2,
RECURSE_SUBMODULES_ON_DEMAND = -1,
RECURSE_SUBMODULES_ON = 2
};
+ enum submodule_update_type {
+ SM_UPDATE_UNSPECIFIED = 0,
+ SM_UPDATE_CHECKOUT,
+ SM_UPDATE_REBASE,
+ SM_UPDATE_MERGE,
+ SM_UPDATE_NONE,
+ SM_UPDATE_COMMAND
+ };
+
+ struct submodule_update_strategy {
+ enum submodule_update_type type;
+ const char *command;
+ };
+ #define SUBMODULE_UPDATE_STRATEGY_INIT {SM_UPDATE_UNSPECIFIED, NULL}
+
int is_staging_gitmodules_ok(void);
int update_path_in_gitmodules(const char *oldpath, const char *newpath);
int remove_path_from_gitmodules(const char *path);
const char *path);
int submodule_config(const char *var, const char *value, void *cb);
void gitmodules_config(void);
+ int parse_submodule_update_strategy(const char *value,
+ struct submodule_update_strategy *dst);
void handle_ignore_submodules_arg(struct diff_options *diffopt, const char *);
void show_submodule_summary(FILE *f, const char *path,
const char *line_prefix,
struct string_list *needs_pushing);
int push_unpushed_submodules(unsigned char new_sha1[20], const char *remotes_name);
void connect_work_tree_and_git_dir(const char *work_tree, const char *git_dir);
+ int parallel_submodules(void);
#endif
git config --remove-section submodule.example &&
test_must_fail git config submodule.example.url &&
- git submodule update init > update.out &&
+ git submodule update init 2> update.out &&
cat update.out &&
test_i18ngrep "not initialized" update.out &&
test_must_fail git rev-parse --resolve-git-dir init/.git &&
mkdir -p sub &&
(
cd sub &&
- git submodule update ../init >update.out &&
+ git submodule update ../init 2>update.out &&
cat update.out &&
test_i18ngrep "not initialized" update.out &&
test_must_fail git rev-parse --resolve-git-dir ../init/.git &&
git commit -m "submodule example2 added"
'
+test_expect_success 'submodule deinit works on repository without submodules' '
+ test_when_finished "rm -rf newdirectory" &&
+ mkdir newdirectory &&
+ (
+ cd newdirectory &&
+ git init &&
+ >file &&
+ git add file &&
+ git commit -m "repo should not be empty"
+ git submodule deinit .
+ )
+'
+
test_expect_success 'submodule deinit should remove the whole submodule section from .git/config' '
git config submodule.example.foo bar &&
git config submodule.example2.frotz nitfol &&
)
'
+test_expect_success 'submodule helper list is not confused by common prefixes' '
+ mkdir -p dir1/b &&
+ (
+ cd dir1/b &&
+ git init &&
+ echo hi >testfile2 &&
+ git add . &&
+ git commit -m "test1"
+ ) &&
+ mkdir -p dir2/b &&
+ (
+ cd dir2/b &&
+ git init &&
+ echo hello >testfile1 &&
+ git add . &&
+ git commit -m "test2"
+ ) &&
+ git submodule add /dir1/b dir1/b &&
+ git submodule add /dir2/b dir2/b &&
+ git commit -m "first submodule commit" &&
+ git submodule--helper list dir1/b | cut -c51- >actual &&
+ echo "dir1/b" >expect &&
+ test_cmp expect actual
+'
+
test_done
compare_head()
{
- sha_master=`git rev-list --max-count=1 master`
- sha_head=`git rev-list --max-count=1 HEAD`
+ sha_master=$(git rev-list --max-count=1 master)
+ sha_head=$(git rev-list --max-count=1 HEAD)
test "$sha_master" = "$sha_head"
}
test_i18ngrep "Submodule path .deeper/submodule/subsubmodule.: checked out" actual
)
'
+
+ test_expect_success 'submodule update can be run in parallel' '
+ (cd super2 &&
+ GIT_TRACE=$(pwd)/trace.out git submodule update --jobs 7 &&
+ grep "7 tasks" trace.out &&
+ git config submodule.fetchJobs 8 &&
+ GIT_TRACE=$(pwd)/trace.out git submodule update &&
+ grep "8 tasks" trace.out &&
+ GIT_TRACE=$(pwd)/trace.out git submodule update --jobs 9 &&
+ grep "9 tasks" trace.out
+ )
+ '
+
+ test_expect_success 'git clone passes the parallel jobs config on to submodules' '
+ test_when_finished "rm -rf super4" &&
+ GIT_TRACE=$(pwd)/trace.out git clone --recurse-submodules --jobs 7 . super4 &&
+ grep "7 tasks" trace.out &&
+ rm -rf super4 &&
+ git config --global submodule.fetchJobs 8 &&
+ GIT_TRACE=$(pwd)/trace.out git clone --recurse-submodules . super4 &&
+ grep "8 tasks" trace.out &&
+ rm -rf super4 &&
+ GIT_TRACE=$(pwd)/trace.out git clone --recurse-submodules --jobs 9 . super4 &&
+ grep "9 tasks" trace.out &&
+ rm -rf super4
+ '
+
test_done