From: Junio C Hamano Date: Wed, 6 Apr 2016 18:39:01 +0000 (-0700) Subject: Merge branch 'sb/submodule-parallel-update' X-Git-Tag: v2.9.0-rc0~159 X-Git-Url: https://git.lorimer.id.au/gitweb.git/diff_plain/bdebbeb3346e9867005947aff356b99a7358e5ab?hp=-c Merge branch 'sb/submodule-parallel-update' A major part of "git submodule update" has been ported to C to take advantage of the recently added framework to run download tasks in parallel. * sb/submodule-parallel-update: clone: allow an explicit argument for parallel submodule clones submodule update: expose parallelism to the user submodule helper: remove double 'fatal: ' prefix git submodule update: have a dedicated helper for cloning run_processes_parallel: rename parameters for the callbacks run_processes_parallel: treat output of children as byte array submodule update: direct error message to stderr fetching submodules: respect `submodule.fetchJobs` config option submodule-config: drop check against NULL submodule-config: keep update strategy around --- bdebbeb3346e9867005947aff356b99a7358e5ab diff --combined Documentation/config.txt index aea6bd127d,3b02732caa..d29665666b --- a/Documentation/config.txt +++ b/Documentation/config.txt @@@ -308,15 -308,6 +308,15 @@@ core.trustctime: crawlers and some backup systems). See linkgit:git-update-index[1]. True by default. +core.untrackedCache:: + Determines what to do about the untracked cache feature of the + index. It will be kept, if this variable is unset or set to + `keep`. It will automatically be added if set to `true`. And + it will automatically be removed, if set to `false`. Before + setting it to `true`, you should check that mtime is working + properly on your system. + See linkgit:git-update-index[1]. `keep` by default. + core.checkStat:: Determines which stat fields to match between the index and work tree. The user can set this to 'default' or @@@ -879,8 -870,6 +879,8 @@@ When preserve, also pass `--preserve-me so that locally committed merge commits will not be flattened by running 'git pull'. + +When the value is `interactive`, the rebase is run in interactive mode. ++ *NOTE*: this is a possibly dangerous operation; do *not* use it unless you understand the implications (see linkgit:git-rebase[1] for details). @@@ -1113,9 -1102,8 +1113,9 @@@ commit.template: credential.helper:: Specify an external helper to be called when a username or password credential is needed; the helper may consult external - storage to avoid prompting the user for the credentials. See - linkgit:gitcredentials[7] for details. + storage to avoid prompting the user for the credentials. Note + that multiple helpers may be defined. See linkgit:gitcredentials[7] + for details. credential.useHttpPath:: When acquiring credentials, consider the "path" component of an http @@@ -1255,10 -1243,6 +1255,10 @@@ format.coverLetter: format-patch is invoked, but in addition can be set to "auto", to generate a cover-letter only when there's more than one patch. +format.outputDirectory:: + Set a custom directory to store the resulting files instead of the + current working directory. + filter..clean:: The command which is used to convert the content of a worktree file to a blob upon checkin. See linkgit:gitattributes[5] for @@@ -1466,14 -1450,6 +1466,14 @@@ grep.extendedRegexp: option is ignored when the 'grep.patternType' option is set to a value other than 'default'. +grep.threads:: + Number of grep worker threads to use. + See `grep.threads` in linkgit:git-grep[1] for more information. 
+ +grep.fallbackToNoIndex:: + If set to true, fall back to git grep --no-index if git grep + is executed outside of a git repository. Defaults to false. + gpg.program:: Use this custom program instead of "gpg" found on $PATH when making or verifying a PGP signature. The program must support the @@@ -1620,40 -1596,9 +1620,40 @@@ help.htmlPath: http.proxy:: Override the HTTP proxy, normally configured using the 'http_proxy', - 'https_proxy', and 'all_proxy' environment variables (see - `curl(1)`). This can be overridden on a per-remote basis; see - remote..proxy + 'https_proxy', and 'all_proxy' environment variables (see `curl(1)`). In + addition to the syntax understood by curl, it is possible to specify a + proxy string with a user name but no password, in which case git will + attempt to acquire one in the same way it does for other credentials. See + linkgit:gitcredentials[7] for more information. The syntax thus is + '[protocol://][user[:password]@]proxyhost[:port]'. This can be overridden + on a per-remote basis; see remote..proxy + +http.proxyAuthMethod:: + Set the method with which to authenticate against the HTTP proxy. This + only takes effect if the configured proxy string contains a user name part + (i.e. is of the form 'user@host' or 'user@host:port'). This can be + overridden on a per-remote basis; see `remote..proxyAuthMethod`. + Both can be overridden by the 'GIT_HTTP_PROXY_AUTHMETHOD' environment + variable. Possible values are: ++ +-- +* `anyauth` - Automatically pick a suitable authentication method. It is + assumed that the proxy answers an unauthenticated request with a 407 + status code and one or more Proxy-authenticate headers with supported + authentication methods. This is the default. +* `basic` - HTTP Basic authentication +* `digest` - HTTP Digest authentication; this prevents the password from being + transmitted to the proxy in clear text +* `negotiate` - GSS-Negotiate authentication (compare the --negotiate option + of `curl(1)`) +* `ntlm` - NTLM authentication (compare the --ntlm option of `curl(1)`) +-- + +http.emptyAuth:: + Attempt authentication without seeking a username or password. This + can be used to attempt GSS-Negotiate authentication without specifying + a username in the URL, as libcurl normally requires a username for + authentication. http.cookieFile:: File containing previously stored cookie lines which should be used @@@ -1734,14 -1679,6 +1734,14 @@@ http.sslCAPath: with when fetching or pushing over HTTPS. Can be overridden by the 'GIT_SSL_CAPATH' environment variable. +http.pinnedpubkey:: + Public key of the https service. It may either be the filename of + a PEM or DER encoded public key file or a string starting with + 'sha256//' followed by the base64 encoded sha256 hash of the + public key. See also libcurl 'CURLOPT_PINNEDPUBLICKEY'. git will + exit with an error if this option is set but not supported by + cURL. + http.sslTry:: Attempt to use AUTH SSL/TLS and encrypted data transfers when connecting via regular FTP protocol. This might be needed @@@ -1887,14 -1824,6 +1887,14 @@@ interactive.singleKey: setting is silently ignored if portable keystroke input is not available; requires the Perl module Term::ReadKey. +interactive.diffFilter:: + When an interactive command (such as `git add --patch`) shows + a colorized diff, git will pipe the diff through the shell + command defined by this configuration variable. 
The command may + mark up the diff further for human consumption, provided that it + retains a one-to-one correspondence with the lines in the + original diff. Defaults to disabled (no filtering). + log.abbrevCommit:: If true, makes linkgit:git-log[1], linkgit:git-show[1], and linkgit:git-whatchanged[1] assume `--abbrev-commit`. You may @@@ -2145,7 -2074,7 +2145,7 @@@ pack.indexVersion: larger than 2 GB. + If you have an old Git that does not understand the version 2 `*.idx` file, -cloning or fetching over a non native protocol (e.g. "http" and "rsync") +cloning or fetching over a non native protocol (e.g. "http") that will copy both `*.pack` file and corresponding `*.idx` file from the other side may give you a repository that cannot be accessed with your older version of Git. If the `*.pack` file is smaller than 2 GB, however, @@@ -2220,8 -2149,6 +2220,8 @@@ When preserve, also pass `--preserve-me so that locally committed merge commits will not be flattened by running 'git pull'. + +When the value is `interactive`, the rebase is run in interactive mode. ++ *NOTE*: this is a possibly dangerous operation; do *not* use it unless you understand the implications (see linkgit:git-rebase[1] for details). @@@ -2302,20 -2229,6 +2302,20 @@@ push.gpgSign: override a value from a lower-priority config file. An explicit command-line flag always overrides this config option. +push.recurseSubmodules:: + Make sure all submodule commits used by the revisions to be pushed + are available on a remote-tracking branch. If the value is 'check' + then Git will verify that all submodule commits that changed in the + revisions to be pushed are available on at least one remote of the + submodule. If any commits are missing, the push will be aborted and + exit with non-zero status. If the value is 'on-demand' then all + submodules that changed in the revisions to be pushed will be + pushed. If on-demand was not able to push all necessary revisions + it will also be aborted and exit with non-zero status. If the value + is 'no' then default behavior of ignoring submodules when pushing + is retained. You may override this configuration at time of push by + specifying '--recurse-submodules=check|on-demand|no'. + rebase.stat:: Whether to show a diffstat of what changed upstream since the last rebase. False by default. @@@ -2480,11 -2393,6 +2480,11 @@@ remote..proxy: the proxy to use for that remote. Set to the empty string to disable proxying for that remote. +remote..proxyAuthMethod:: + For remotes that require curl (http, https and ftp), the method to use for + authenticating against the proxy in use (probably set in + `remote..proxy`). See `http.proxyAuthMethod`. + remote..fetch:: The default set of "refspec" for linkgit:git-fetch[1]. See linkgit:git-fetch[1]. @@@ -2738,6 -2646,12 +2738,12 @@@ submodule..ignore: "--ignore-submodules" option. The 'git submodule' commands are not affected by this setting. + submodule.fetchJobs:: + Specifies how many submodules are fetched/cloned at the same time. + A positive integer allows up to that number of submodules fetched + in parallel. A value of 0 will give some reasonable default. + If unset, it defaults to 1. + tag.sort:: This variable controls the sort ordering of tags when displayed by linkgit:git-tag[1]. Without the "--sort=" option provided, the @@@ -2853,16 -2767,6 +2859,16 @@@ user.name: Can be overridden by the 'GIT_AUTHOR_NAME' and 'GIT_COMMITTER_NAME' environment variables. See linkgit:git-commit-tree[1]. 
+user.useConfigOnly:: + Instruct Git to avoid trying to guess defaults for 'user.email' + and 'user.name', and instead retrieve the values only from the + configuration. For example, if you have multiple email addresses + and would like to use a different one for each repository, then + with this configuration option set to `true` in the global config + along with a name, Git will prompt you to set up an email before + making new commits in a newly cloned repository. + Defaults to `false`. + user.signingKey:: If linkgit:git-tag[1] or linkgit:git-commit[1] is not selecting the key you want it to automatically when creating a signed tag or diff --combined Documentation/git-clone.txt index b7c467a001,6db7b6d370..45d74be297 --- a/Documentation/git-clone.txt +++ b/Documentation/git-clone.txt @@@ -14,7 -14,7 +14,7 @@@ SYNOPSI [-o ] [-b ] [-u ] [--reference ] [--dissociate] [--separate-git-dir ] [--depth ] [--[no-]single-branch] - [--recursive | --recurse-submodules] [--] + [--recursive | --recurse-submodules] [--jobs ] [--] [] DESCRIPTION @@@ -115,7 -115,8 +115,7 @@@ objects from the source repository int --quiet:: -q:: Operate quietly. Progress is not reported to the standard - error stream. This flag is also passed to the `rsync' - command when given. + error stream. --verbose:: -v:: @@@ -189,14 -190,15 +189,14 @@@ --depth :: Create a 'shallow' clone with a history truncated to the - specified number of revisions. + specified number of commits. Implies `--single-branch` unless + `--no-single-branch` is given to fetch the histories near the + tips of all branches. --[no-]single-branch:: Clone only the history leading to the tip of a single branch, either specified by the `--branch` option or the primary - branch remote's `HEAD` points at. When creating a shallow - clone with the `--depth` option, this is the default, unless - `--no-single-branch` is given to fetch the histories near the - tips of all branches. + branch remote's `HEAD` points at. Further fetches into the resulting repository will only update the remote-tracking branch for the branch this option was used for the initial cloning. If the HEAD at the remote did not point at any @@@ -219,6 -221,10 +219,10 @@@ The result is Git repository can be separated from working tree. + -j :: + --jobs :: + The number of submodules fetched at the same time. + Defaults to the `submodule.fetchJobs` option. :: The (possibly remote) repository to clone from. 
See the diff --combined builtin/clone.c index 661639255c,b004fb4e00..6576ecf343 --- a/builtin/clone.c +++ b/builtin/clone.c @@@ -47,10 -47,10 +47,11 @@@ static const char *real_git_dir static char *option_upload_pack = "git-upload-pack"; static int option_verbosity; static int option_progress = -1; +static enum transport_family family; static struct string_list option_config; static struct string_list option_reference; static int option_dissociate; + static int max_jobs = -1; static struct option builtin_clone_options[] = { OPT__VERBOSITY(&option_verbosity), @@@ -73,6 -73,8 +74,8 @@@ N_("initialize submodules in the clone")), OPT_BOOL(0, "recurse-submodules", &option_recursive, N_("initialize submodules in the clone")), + OPT_INTEGER('j', "jobs", &max_jobs, + N_("number of submodules cloned in parallel")), OPT_STRING(0, "template", &option_template, N_("template-directory"), N_("directory from which templates will be used")), OPT_STRING_LIST(0, "reference", &option_reference, N_("repo"), @@@ -93,17 -95,9 +96,13 @@@ N_("separate git dir from working tree")), OPT_STRING_LIST('c', "config", &option_config, N_("key=value"), N_("set config inside the new repository")), + OPT_SET_INT('4', "ipv4", &family, N_("use IPv4 addresses only"), + TRANSPORT_FAMILY_IPV4), + OPT_SET_INT('6', "ipv6", &family, N_("use IPv6 addresses only"), + TRANSPORT_FAMILY_IPV6), OPT_END() }; - static const char *argv_submodule[] = { - "submodule", "update", "--init", "--recursive", NULL - }; - static const char *get_repo_path_1(struct strbuf *path, int *is_bundle) { static char *suffix[] = { "/.git", "", ".git/.git", ".git" }; @@@ -236,8 -230,8 +235,8 @@@ static char *guess_dir_name(const char strip_suffix_mem(start, &len, is_bundle ? ".bundle" : ".git"); if (!len || (len == 1 && *start == '/')) - die("No directory name could be guessed.\n" - "Please specify a directory on the command line"); + die(_("No directory name could be guessed.\n" + "Please specify a directory on the command line")); if (is_bare) dir = xstrfmt("%.*s.git", (int)len, start); @@@ -344,7 -338,7 +343,7 @@@ static void copy_alternates(struct strb FILE *in = fopen(src->buf, "r"); struct strbuf line = STRBUF_INIT; - while (strbuf_getline(&line, in, '\n') != EOF) { + while (strbuf_getline(&line, in) != EOF) { char *abs_path; if (!line.len || line.buf[0] == '#') continue; @@@ -641,11 -635,9 +640,11 @@@ static void update_remote_refs(const st struct strbuf head_ref = STRBUF_INIT; strbuf_addstr(&head_ref, branch_top); strbuf_addstr(&head_ref, "HEAD"); - create_symref(head_ref.buf, - remote_head_points_at->peer_ref->name, - msg); + if (create_symref(head_ref.buf, + remote_head_points_at->peer_ref->name, + msg) < 0) + die(_("unable to update %s"), head_ref.buf); + strbuf_release(&head_ref); } } @@@ -655,8 -647,7 +654,8 @@@ static void update_head(const struct re const char *head; if (our && skip_prefix(our->name, "refs/heads/", &head)) { /* Local default branch link */ - create_symref("HEAD", our->name, NULL); + if (create_symref("HEAD", our->name, NULL) < 0) + die(_("unable to update HEAD")); if (!option_bare) { update_ref(msg, "HEAD", our->old_oid.hash, NULL, 0, UPDATE_REFS_DIE_ON_ERR); @@@ -732,15 -723,23 +731,23 @@@ static int checkout(void err |= run_hook_le(NULL, "post-checkout", sha1_to_hex(null_sha1), sha1_to_hex(sha1), "1", NULL); - if (!err && option_recursive) - err = run_command_v_opt(argv_submodule, RUN_GIT_CMD); + if (!err && option_recursive) { + struct argv_array args = ARGV_ARRAY_INIT; + argv_array_pushl(&args, "submodule", "update", "--init", 
"--recursive", NULL); + + if (max_jobs != -1) + argv_array_pushf(&args, "--jobs=%d", max_jobs); + + err = run_command_v_opt(args.argv, RUN_GIT_CMD); + argv_array_clear(&args); + } return err; } static int write_one_config(const char *key, const char *value, void *data) { - return git_config_set_multivar(key, value ? value : "true", "^$", 0); + return git_config_set_multivar_gently(key, value ? value : "true", "^$", 0); } static void write_config(struct string_list *config) @@@ -750,7 -749,7 +757,7 @@@ for (i = 0; i < config->nr; i++) { if (git_config_parse_parameter(config->items[i].string, write_one_config, NULL) < 0) - die("unable to write parameters to config file"); + die(_("unable to write parameters to config file")); } } @@@ -975,7 -974,6 +982,7 @@@ int cmd_clone(int argc, const char **ar remote = remote_get(option_origin); transport = transport_get(remote, remote->url[0]); transport_set_verbosity(transport, option_verbosity, option_progress); + transport->family = family; path = get_repo_path(remote->url[0], &is_bundle); is_local = option_local != 0 && path && !is_bundle; diff --combined builtin/fetch.c index e4639d8eb1,5aa1c2de44..f8455bde7a --- a/builtin/fetch.c +++ b/builtin/fetch.c @@@ -37,8 -37,7 +37,8 @@@ static int prune = -1; /* unspecified * static int all, append, dry_run, force, keep, multiple, update_head_ok, verbosity; static int progress = -1, recurse_submodules = RECURSE_SUBMODULES_DEFAULT; static int tags = TAGS_DEFAULT, unshallow, update_shallow; - static int max_children = 1; + static int max_children = -1; +static enum transport_family family; static const char *depth; static const char *upload_pack; static struct strbuf default_rla = STRBUF_INIT; @@@ -128,10 -127,6 +128,10 @@@ static struct option builtin_fetch_opti N_("accept refs that update .git/shallow")), { OPTION_CALLBACK, 0, "refmap", NULL, N_("refmap"), N_("specify fetch refmap"), PARSE_OPT_NONEG, parse_refmap_arg }, + OPT_SET_INT('4', "ipv4", &family, N_("use IPv4 addresses only"), + TRANSPORT_FAMILY_IPV4), + OPT_SET_INT('6', "ipv6", &family, N_("use IPv6 addresses only"), + TRANSPORT_FAMILY_IPV6), OPT_END() }; @@@ -845,7 -840,7 +845,7 @@@ static void check_not_current_branch(st static int truncate_fetch_head(void) { const char *filename = git_path_fetch_head(); - FILE *fp = fopen(filename, "w"); + FILE *fp = fopen_for_writing(filename); if (!fp) return error(_("cannot open %s: %s\n"), filename, strerror(errno)); @@@ -869,7 -864,6 +869,7 @@@ static struct transport *prepare_transp struct transport *transport; transport = transport_get(remote, NULL); transport_set_verbosity(transport, verbosity, progress); + transport->family = family; if (upload_pack) set_option(transport, TRANS_OPT_UPLOADPACK, upload_pack); if (keep) @@@ -1022,9 -1016,10 +1022,9 @@@ static int add_remote_or_group(const ch git_config(get_remote_group, &g); if (list->nr == prev_nr) { - struct remote *remote; - if (!remote_is_configured(name)) + struct remote *remote = remote_get(name); + if (!remote_is_configured(remote)) return 0; - remote = remote_get(name); string_list_append(list, remote->name); } return 1; @@@ -1115,7 -1110,7 +1115,7 @@@ static int fetch_one(struct remote *rem if (argc > 0) { int j = 0; int i; - refs = xcalloc(argc + 1, sizeof(const char *)); + refs = xcalloc(st_add(argc, 1), sizeof(const char *)); for (i = 0; i < argc; i++) { if (!strcmp(argv[i], "tag")) { i++; @@@ -1230,8 -1225,6 +1230,8 @@@ int cmd_fetch(int argc, const char **ar list.strdup_strings = 1; string_list_clear(&list, 0); + close_all_packs(); + 
argv_array_pushl(&argv_gc_auto, "gc", "--auto", NULL); if (verbosity < 0) argv_array_push(&argv_gc_auto, "--quiet"); diff --combined builtin/submodule--helper.c index 5295b727d4,a484945d37..72e804ed4f --- a/builtin/submodule--helper.c +++ b/builtin/submodule--helper.c @@@ -22,12 -22,17 +22,12 @@@ static int module_list_compute(int argc struct module_list *list) { int i, result = 0; - char *max_prefix, *ps_matched = NULL; - int max_prefix_len; + char *ps_matched = NULL; parse_pathspec(pathspec, 0, PATHSPEC_PREFER_FULL | PATHSPEC_STRIP_SUBMODULE_SLASH_CHEAP, prefix, argv); - /* Find common prefix for all pathspec's */ - max_prefix = common_prefix(pathspec); - max_prefix_len = max_prefix ? strlen(max_prefix) : 0; - if (pathspec->nr) ps_matched = xcalloc(pathspec->nr, 1); @@@ -37,9 -42,9 +37,9 @@@ for (i = 0; i < active_nr; i++) { const struct cache_entry *ce = active_cache[i]; - if (!S_ISGITLINK(ce->ce_mode) || - !match_pathspec(pathspec, ce->name, ce_namelen(ce), - max_prefix_len, ps_matched, 1)) + if (!match_pathspec(pathspec, ce->name, ce_namelen(ce), + 0, ps_matched, 1) || + !S_ISGITLINK(ce->ce_mode)) continue; ALLOC_GROW(list->entries, list->nr + 1, list->alloc); @@@ -52,6 -57,7 +52,6 @@@ */ i++; } - free(max_prefix); if (ps_matched && report_path_error(ps_matched, pathspec, prefix)) result = -1; @@@ -249,6 -255,257 +249,257 @@@ static int module_clone(int argc, cons return 0; } + struct submodule_update_clone { + /* index into 'list', the list of submodules to look into for cloning */ + int current; + struct module_list list; + unsigned warn_if_uninitialized : 1; + + /* update parameter passed via commandline */ + struct submodule_update_strategy update; + + /* configuration parameters which are passed on to the children */ + int quiet; + const char *reference; + const char *depth; + const char *recursive_prefix; + const char *prefix; + + /* Machine-readable status lines to be consumed by git-submodule.sh */ + struct string_list projectlines; + + /* If we want to stop as fast as possible and return an error */ + unsigned quickstop : 1; + }; + #define SUBMODULE_UPDATE_CLONE_INIT {0, MODULE_LIST_INIT, 0, \ + SUBMODULE_UPDATE_STRATEGY_INIT, 0, NULL, NULL, NULL, NULL, \ + STRING_LIST_INIT_DUP, 0} + + /** + * Determine whether 'ce' needs to be cloned. If so, prepare the 'child' to + * run the clone. Returns 1 if 'ce' needs to be cloned, 0 otherwise. + */ + static int prepare_to_clone_next_submodule(const struct cache_entry *ce, + struct child_process *child, + struct submodule_update_clone *suc, + struct strbuf *out) + { + const struct submodule *sub = NULL; + struct strbuf displaypath_sb = STRBUF_INIT; + struct strbuf sb = STRBUF_INIT; + const char *displaypath = NULL; + char *url = NULL; + int needs_cloning = 0; + + if (ce_stage(ce)) { + if (suc->recursive_prefix) + strbuf_addf(&sb, "%s/%s", suc->recursive_prefix, ce->name); + else + strbuf_addf(&sb, "%s", ce->name); + strbuf_addf(out, _("Skipping unmerged submodule %s"), sb.buf); + strbuf_addch(out, '\n'); + goto cleanup; + } + + sub = submodule_from_path(null_sha1, ce->name); + + if (suc->recursive_prefix) + displaypath = relative_path(suc->recursive_prefix, + ce->name, &displaypath_sb); + else + displaypath = ce->name; + + if (suc->update.type == SM_UPDATE_NONE + || (suc->update.type == SM_UPDATE_UNSPECIFIED + && sub->update_strategy.type == SM_UPDATE_NONE)) { + strbuf_addf(out, _("Skipping submodule '%s'"), displaypath); + strbuf_addch(out, '\n'); + goto cleanup; + } + + /* + * Looking up the url in .git/config. 
+ * We must not fall back to .gitmodules as we only want + * to process configured submodules. + */ + strbuf_reset(&sb); + strbuf_addf(&sb, "submodule.%s.url", sub->name); + git_config_get_string(sb.buf, &url); + if (!url) { + /* + * Only mention uninitialized submodules when their + * path have been specified + */ + if (suc->warn_if_uninitialized) { + strbuf_addf(out, + _("Submodule path '%s' not initialized"), + displaypath); + strbuf_addch(out, '\n'); + strbuf_addstr(out, + _("Maybe you want to use 'update --init'?")); + strbuf_addch(out, '\n'); + } + goto cleanup; + } + + strbuf_reset(&sb); + strbuf_addf(&sb, "%s/.git", ce->name); + needs_cloning = !file_exists(sb.buf); + + strbuf_reset(&sb); + strbuf_addf(&sb, "%06o %s %d %d\t%s\n", ce->ce_mode, + sha1_to_hex(ce->sha1), ce_stage(ce), + needs_cloning, ce->name); + string_list_append(&suc->projectlines, sb.buf); + + if (!needs_cloning) + goto cleanup; + + child->git_cmd = 1; + child->no_stdin = 1; + child->stdout_to_stderr = 1; + child->err = -1; + argv_array_push(&child->args, "submodule--helper"); + argv_array_push(&child->args, "clone"); + if (suc->quiet) + argv_array_push(&child->args, "--quiet"); + if (suc->prefix) + argv_array_pushl(&child->args, "--prefix", suc->prefix, NULL); + argv_array_pushl(&child->args, "--path", sub->path, NULL); + argv_array_pushl(&child->args, "--name", sub->name, NULL); + argv_array_pushl(&child->args, "--url", url, NULL); + if (suc->reference) + argv_array_push(&child->args, suc->reference); + if (suc->depth) + argv_array_push(&child->args, suc->depth); + + cleanup: + free(url); + strbuf_reset(&displaypath_sb); + strbuf_reset(&sb); + + return needs_cloning; + } + + static int update_clone_get_next_task(struct child_process *child, + struct strbuf *err, + void *suc_cb, + void **void_task_cb) + { + struct submodule_update_clone *suc = suc_cb; + + for (; suc->current < suc->list.nr; suc->current++) { + const struct cache_entry *ce = suc->list.entries[suc->current]; + if (prepare_to_clone_next_submodule(ce, child, suc, err)) { + suc->current++; + return 1; + } + } + return 0; + } + + static int update_clone_start_failure(struct strbuf *err, + void *suc_cb, + void *void_task_cb) + { + struct submodule_update_clone *suc = suc_cb; + suc->quickstop = 1; + return 1; + } + + static int update_clone_task_finished(int result, + struct strbuf *err, + void *suc_cb, + void *void_task_cb) + { + struct submodule_update_clone *suc = suc_cb; + + if (!result) + return 0; + + suc->quickstop = 1; + return 1; + } + + static int update_clone(int argc, const char **argv, const char *prefix) + { + const char *update = NULL; + int max_jobs = -1; + struct string_list_item *item; + struct pathspec pathspec; + struct submodule_update_clone suc = SUBMODULE_UPDATE_CLONE_INIT; + + struct option module_update_clone_options[] = { + OPT_STRING(0, "prefix", &prefix, + N_("path"), + N_("path into the working tree")), + OPT_STRING(0, "recursive-prefix", &suc.recursive_prefix, + N_("path"), + N_("path into the working tree, across nested " + "submodule boundaries")), + OPT_STRING(0, "update", &update, + N_("string"), + N_("rebase, merge, checkout or none")), + OPT_STRING(0, "reference", &suc.reference, N_("repo"), + N_("reference repository")), + OPT_STRING(0, "depth", &suc.depth, "", + N_("Create a shallow clone truncated to the " + "specified number of revisions")), + OPT_INTEGER('j', "jobs", &max_jobs, + N_("parallel jobs")), + OPT__QUIET(&suc.quiet, N_("don't print cloning progress")), + OPT_END() + }; + + const char *const 
git_submodule_helper_usage[] = { + N_("git submodule--helper update_clone [--prefix=] [...]"), + NULL + }; + suc.prefix = prefix; + + argc = parse_options(argc, argv, prefix, module_update_clone_options, + git_submodule_helper_usage, 0); + + if (update) + if (parse_submodule_update_strategy(update, &suc.update) < 0) + die(_("bad value for update parameter")); + + if (module_list_compute(argc, argv, prefix, &pathspec, &suc.list) < 0) + return 1; + + if (pathspec.nr) + suc.warn_if_uninitialized = 1; + + /* Overlay the parsed .gitmodules file with .git/config */ + gitmodules_config(); + git_config(submodule_config, NULL); + + if (max_jobs < 0) + max_jobs = parallel_submodules(); + + run_processes_parallel(max_jobs, + update_clone_get_next_task, + update_clone_start_failure, + update_clone_task_finished, + &suc); + + /* + * We saved the output and put it out all at once now. + * That means: + * - the listener does not have to interleave their (checkout) + * work with our fetching. The writes involved in a + * checkout involve more straightforward sequential I/O. + * - the listener can avoid doing any work if fetching failed. + */ + if (suc.quickstop) + return 1; + + for_each_string_list_item(item, &suc.projectlines) + utf8_fprintf(stdout, "%s", item->string); + + return 0; + } + struct cmd_struct { const char *cmd; int (*fn)(int, const char **, const char *); @@@ -258,19 -515,20 +509,20 @@@ static struct cmd_struct commands[] = {"list", module_list}, {"name", module_name}, {"clone", module_clone}, + {"update-clone", update_clone} }; int cmd_submodule__helper(int argc, const char **argv, const char *prefix) { int i; if (argc < 2) - die(_("fatal: submodule--helper subcommand must be " + die(_("submodule--helper subcommand must be " "called with a subcommand")); for (i = 0; i < ARRAY_SIZE(commands); i++) if (!strcmp(argv[1], commands[i].cmd)) return commands[i].fn(argc - 1, argv + 1, prefix); - die(_("fatal: '%s' is not a valid submodule--helper " + die(_("'%s' is not a valid submodule--helper " "subcommand"), argv[1]); } diff --combined git-submodule.sh index 43c68deee9,86018ee9c5..0322282120 --- a/git-submodule.sh +++ b/git-submodule.sh @@@ -591,24 -591,6 +591,24 @@@ cmd_deinit( done } +is_tip_reachable () ( + clear_local_git_env + cd "$1" && + rev=$(git rev-list -n 1 "$2" --not --all 2>/dev/null) && + test -z "$rev" +) + +fetch_in_submodule () ( + clear_local_git_env + cd "$1" && + case "$2" in + '') + git fetch ;; + *) + git fetch $(get_default_remote) "$2" ;; + esac +) + # # Update each submodule path to correct revision, using clone and checkout as needed # @@@ -663,6 -645,14 +663,14 @@@ cmd_update( --depth=*) depth=$1 ;; + -j|--jobs) + case "$2" in '') usage ;; esac + jobs="--jobs=$2" + shift + ;; + --jobs=*) + jobs=$1 + ;; --) shift break @@@ -682,17 -672,21 +690,21 @@@ cmd_init "--" "$@" || return fi - cloned_modules= - git submodule--helper list --prefix "$wt_prefix" "$@" | { + { + git submodule--helper update-clone ${GIT_QUIET:+--quiet} \ + ${wt_prefix:+--prefix "$wt_prefix"} \ + ${prefix:+--recursive-prefix "$prefix"} \ + ${update:+--update "$update"} \ + ${reference:+--reference "$reference"} \ + ${depth:+--depth "$depth"} \ + ${jobs:+$jobs} \ + "$@" || echo "#unmatched" + } | { err= - while read mode sha1 stage sm_path + while read mode sha1 stage just_cloned sm_path do die_if_unmatched "$mode" - if test "$stage" = U - then - echo >&2 "Skipping unmerged submodule $prefix$sm_path" - continue - fi + name=$(git submodule--helper name "$sm_path") || exit url=$(git config 
submodule."$name".url) branch=$(get_submodule_config "$name" branch master) @@@ -709,27 -703,10 +721,10 @@@ displaypath=$(relative_path "$prefix$sm_path") - if test "$update_module" = "none" + if test $just_cloned -eq 1 then - echo "Skipping submodule '$displaypath'" - continue - fi - - if test -z "$url" - then - # Only mention uninitialized submodules when its - # path have been specified - test "$#" != "0" && - say "$(eval_gettext "Submodule path '\$displaypath' not initialized - Maybe you want to use 'update --init'?")" - continue - fi - - if ! test -d "$sm_path"/.git && ! test -f "$sm_path"/.git - then - git submodule--helper clone ${GIT_QUIET:+--quiet} --prefix "$prefix" --path "$sm_path" --name "$name" --url "$url" "$reference" "$depth" || exit - cloned_modules="$cloned_modules;$name" subsha1= + update_module=checkout else subsha1=$(clear_local_git_env; cd "$sm_path" && git rev-parse --verify HEAD) || @@@ -763,24 -740,12 +758,17 @@@ then # Run fetch only if $sha1 isn't present or it # is not reachable from a ref. - (clear_local_git_env; cd "$sm_path" && - ( (rev=$(git rev-list -n 1 $sha1 --not --all 2>/dev/null) && - test -z "$rev") || git-fetch)) || + is_tip_reachable "$sm_path" "$sha1" || + fetch_in_submodule "$sm_path" || die "$(eval_gettext "Unable to fetch in submodule path '\$displaypath'")" + + # Now we tried the usual fetch, but $sha1 may + # not be reachable from any of the refs + is_tip_reachable "$sm_path" "$sha1" || + fetch_in_submodule "$sm_path" "$sha1" || + die "$(eval_gettext "Fetched in submodule path '\$displaypath', but it did not contain $sha1. Direct fetching of that commit failed.")" fi - # Is this something we just cloned? - case ";$cloned_modules;" in - *";$name;"*) - # then there is no local change to integrate - update_module=checkout ;; - esac - must_die_on_failure= case "$update_module" in checkout) diff --combined run-command.c index c72601056c,62c6721bd7..8c7115ade4 --- a/run-command.c +++ b/run-command.c @@@ -160,41 -160,50 +160,41 @@@ int sane_execvp(const char *file, char return -1; } -static const char **prepare_shell_cmd(const char **argv) +static const char **prepare_shell_cmd(struct argv_array *out, const char **argv) { - int argc, nargc = 0; - const char **nargv; - - for (argc = 0; argv[argc]; argc++) - ; /* just counting */ - /* +1 for NULL, +3 for "sh -c" plus extra $0 */ - nargv = xmalloc(sizeof(*nargv) * (argc + 1 + 3)); - - if (argc < 1) + if (!argv[0]) die("BUG: shell command is empty"); if (strcspn(argv[0], "|&;<>()$`\\\"' \t\n*?[#~=%") != strlen(argv[0])) { #ifndef GIT_WINDOWS_NATIVE - nargv[nargc++] = SHELL_PATH; + argv_array_push(out, SHELL_PATH); #else - nargv[nargc++] = "sh"; + argv_array_push(out, "sh"); #endif - nargv[nargc++] = "-c"; - - if (argc < 2) - nargv[nargc++] = argv[0]; - else { - struct strbuf arg0 = STRBUF_INIT; - strbuf_addf(&arg0, "%s \"$@\"", argv[0]); - nargv[nargc++] = strbuf_detach(&arg0, NULL); - } - } + argv_array_push(out, "-c"); - for (argc = 0; argv[argc]; argc++) - nargv[nargc++] = argv[argc]; - nargv[nargc] = NULL; + /* + * If we have no extra arguments, we do not even need to + * bother with the "$@" magic. 
+ */ + if (!argv[1]) + argv_array_push(out, argv[0]); + else + argv_array_pushf(out, "%s \"$@\"", argv[0]); + } - return nargv; + argv_array_pushv(out, argv); + return out->argv; } #ifndef GIT_WINDOWS_NATIVE static int execv_shell_cmd(const char **argv) { - const char **nargv = prepare_shell_cmd(argv); - trace_argv_printf(nargv, "trace: exec:"); - sane_execvp(nargv[0], (char **)nargv); - free(nargv); + struct argv_array nargv = ARGV_ARRAY_INIT; + prepare_shell_cmd(&nargv, argv); + trace_argv_printf(nargv.argv, "trace: exec:"); + sane_execvp(nargv.argv[0], (char **)nargv.argv); + argv_array_clear(&nargv); return -1; } #endif @@@ -238,7 -247,7 +238,7 @@@ static int wait_or_whine(pid_t pid, con error("waitpid is confused (%s)", argv0); } else if (WIFSIGNALED(status)) { code = WTERMSIG(status); - if (code != SIGINT && code != SIGQUIT) + if (code != SIGINT && code != SIGQUIT && code != SIGPIPE) error("%s died of signal %d", argv0, code); /* * This return value is chosen so that code & 0xff @@@ -448,7 -457,6 +448,7 @@@ fail_pipe { int fhin = 0, fhout = 1, fherr = 2; const char **sargv = cmd->argv; + struct argv_array nargv = ARGV_ARRAY_INIT; if (cmd->no_stdin) fhin = open("/dev/null", O_RDWR); @@@ -474,9 -482,9 +474,9 @@@ fhout = dup(cmd->out); if (cmd->git_cmd) - cmd->argv = prepare_git_cmd(cmd->argv); + cmd->argv = prepare_git_cmd(&nargv, cmd->argv); else if (cmd->use_shell) - cmd->argv = prepare_shell_cmd(cmd->argv); + cmd->argv = prepare_shell_cmd(&nargv, cmd->argv); cmd->pid = mingw_spawnvpe(cmd->argv[0], cmd->argv, (char**) cmd->env, cmd->dir, fhin, fhout, fherr); @@@ -486,7 -494,9 +486,7 @@@ if (cmd->clean_on_exit && cmd->pid >= 0) mark_child_for_cleanup(cmd->pid); - if (cmd->git_cmd) - free(cmd->argv); - + argv_array_clear(&nargv); cmd->argv = sargv; if (fhin != 0) close(fhin); @@@ -625,11 -635,6 +625,11 @@@ int in_async(void return !pthread_equal(main_thread, pthread_self()); } +void NORETURN async_exit(int code) +{ + pthread_exit((void *)(intptr_t)code); +} + #else static struct { @@@ -675,11 -680,6 +675,11 @@@ int in_async(void return process_is_async; } +void NORETURN async_exit(int code) +{ + exit(code); +} + #endif int start_async(struct async *async) @@@ -902,7 -902,7 +902,7 @@@ struct parallel_processes struct strbuf buffered_output; /* of finished children */ }; - static int default_start_failure(struct strbuf *err, + static int default_start_failure(struct strbuf *out, void *pp_cb, void *pp_task_cb) { @@@ -910,7 -910,7 +910,7 @@@ } static int default_task_finished(int result, - struct strbuf *err, + struct strbuf *out, void *pp_cb, void *pp_task_cb) { @@@ -994,7 -994,7 +994,7 @@@ static void pp_cleanup(struct parallel_ * When get_next_task added messages to the buffer in its last * iteration, the buffered output is non empty. 
*/ - fputs(pp->buffered_output.buf, stderr); + strbuf_write(&pp->buffered_output, stderr); strbuf_release(&pp->buffered_output); sigchain_pop_common(); @@@ -1079,7 -1079,7 +1079,7 @@@ static void pp_output(struct parallel_p int i = pp->output_owner; if (pp->children[i].state == GIT_CP_WORKING && pp->children[i].err.len) { - fputs(pp->children[i].err.buf, stderr); + strbuf_write(&pp->children[i].err, stderr); strbuf_reset(&pp->children[i].err); } } @@@ -1117,11 -1117,11 +1117,11 @@@ static int pp_collect_finished(struct p strbuf_addbuf(&pp->buffered_output, &pp->children[i].err); strbuf_reset(&pp->children[i].err); } else { - fputs(pp->children[i].err.buf, stderr); + strbuf_write(&pp->children[i].err, stderr); strbuf_reset(&pp->children[i].err); /* Output all other finished child processes */ - fputs(pp->buffered_output.buf, stderr); + strbuf_write(&pp->buffered_output, stderr); strbuf_reset(&pp->buffered_output); /* diff --combined run-command.h index 3d1e59e26e,65bd424216..de1727efab --- a/run-command.h +++ b/run-command.h @@@ -121,7 -121,6 +121,7 @@@ struct async int start_async(struct async *async); int finish_async(struct async *async); int in_async(void); +void NORETURN async_exit(int code); /** * This callback should initialize the child process and preload the @@@ -140,7 -139,7 +140,7 @@@ * return the negative signal number. */ typedef int (*get_next_task_fn)(struct child_process *cp, - struct strbuf *err, + struct strbuf *out, void *pp_cb, void **pp_task_cb); @@@ -149,7 -148,7 +149,7 @@@ * a new process. * * You must not write to stdout or stderr in this function. Add your - * message to the strbuf err instead, which will be printed without + * message to the strbuf out instead, which will be printed without * messing up the output of the other parallel processes. * * pp_cb is the callback cookie as passed into run_processes_parallel, @@@ -159,7 -158,7 +159,7 @@@ * To send a signal to other child processes for abortion, return * the negative signal number. */ - typedef int (*start_failure_fn)(struct strbuf *err, + typedef int (*start_failure_fn)(struct strbuf *out, void *pp_cb, void *pp_task_cb); @@@ -167,7 -166,7 +167,7 @@@ * This callback is called on every child process that finished processing. * * You must not write to stdout or stderr in this function. Add your - * message to the strbuf err instead, which will be printed without + * message to the strbuf out instead, which will be printed without * messing up the output of the other parallel processes. * * pp_cb is the callback cookie as passed into run_processes_parallel, @@@ -178,7 -177,7 +178,7 @@@ * the negative signal number. */ typedef int (*task_finished_fn)(int result, - struct strbuf *err, + struct strbuf *out, void *pp_cb, void *pp_task_cb); diff --combined strbuf.c index 2c08dbb153,5f6da82e7e..1ba600bd78 --- a/strbuf.c +++ b/strbuf.c @@@ -395,6 -395,12 +395,12 @@@ ssize_t strbuf_read_once(struct strbuf return cnt; } + ssize_t strbuf_write(struct strbuf *sb, FILE *f) + { + return sb->len ? 
fwrite(sb->buf, 1, sb->len, f) : 0; + } + + #define STRBUF_MAXLINK (2*PATH_MAX) int strbuf_readlink(struct strbuf *sb, const char *path, size_t hint) @@@ -481,15 -487,9 +487,15 @@@ int strbuf_getwholeline(struct strbuf * if (errno == ENOMEM) die("Out of memory, getdelim failed"); - /* Restore slopbuf that we moved out of the way before */ + /* + * Restore strbuf invariants; if getdelim left us with a NULL pointer, + * we can just re-init, but otherwise we should make sure that our + * length is empty, and that the result is NUL-terminated. + */ if (!sb->buf) strbuf_init(sb, 0); + else + strbuf_reset(sb); return EOF; } #else @@@ -518,37 -518,15 +524,37 @@@ int strbuf_getwholeline(struct strbuf * } #endif -int strbuf_getline(struct strbuf *sb, FILE *fp, int term) +static int strbuf_getdelim(struct strbuf *sb, FILE *fp, int term) { if (strbuf_getwholeline(sb, fp, term)) return EOF; - if (sb->buf[sb->len-1] == term) - strbuf_setlen(sb, sb->len-1); + if (sb->buf[sb->len - 1] == term) + strbuf_setlen(sb, sb->len - 1); + return 0; +} + +int strbuf_getline(struct strbuf *sb, FILE *fp) +{ + if (strbuf_getwholeline(sb, fp, '\n')) + return EOF; + if (sb->buf[sb->len - 1] == '\n') { + strbuf_setlen(sb, sb->len - 1); + if (sb->len && sb->buf[sb->len - 1] == '\r') + strbuf_setlen(sb, sb->len - 1); + } return 0; } +int strbuf_getline_lf(struct strbuf *sb, FILE *fp) +{ + return strbuf_getdelim(sb, fp, '\n'); +} + +int strbuf_getline_nul(struct strbuf *sb, FILE *fp) +{ + return strbuf_getdelim(sb, fp, '\0'); +} + int strbuf_getwholeline_fd(struct strbuf *sb, int fd, int term) { strbuf_reset(sb); @@@ -724,7 -702,7 +730,7 @@@ char *xstrdup_tolower(const char *strin size_t len, i; len = strlen(string); - result = xmalloc(len + 1); + result = xmallocz(len); for (i = 0; i < len; i++) result[i] = tolower(string[i]); result[i] = '\0'; diff --combined strbuf.h index f72fd14c2e,d4f2aa1365..7987405313 --- a/strbuf.h +++ b/strbuf.h @@@ -354,8 -354,8 +354,8 @@@ extern void strbuf_addftime(struct strb * * NOTE: The buffer is rewound if the read fails. If -1 is returned, * `errno` must be consulted, like you would do for `read(3)`. - * `strbuf_read()`, `strbuf_read_file()` and `strbuf_getline()` has the - * same behaviour as well. + * `strbuf_read()`, `strbuf_read_file()` and `strbuf_getline_*()` + * family of functions have the same behaviour as well. */ extern size_t strbuf_fread(struct strbuf *, size_t, FILE *); @@@ -386,32 -386,21 +386,38 @@@ extern ssize_t strbuf_read_file(struct */ extern int strbuf_readlink(struct strbuf *sb, const char *path, size_t hint); + /** + * Write the whole content of the strbuf to the stream not stopping at + * NUL bytes. + */ + extern ssize_t strbuf_write(struct strbuf *sb, FILE *stream); + /** - * Read a line from a FILE *, overwriting the existing contents - * of the strbuf. The second argument specifies the line - * terminator character, typically `'\n'`. + * Read a line from a FILE *, overwriting the existing contents of + * the strbuf. The strbuf_getline*() family of functions share + * this signature, but have different line termination conventions. + * * Reading stops after the terminator or at EOF. The terminator * is removed from the buffer before returning. Returns 0 unless * there was nothing left before EOF, in which case it returns `EOF`. 
*/ -extern int strbuf_getline(struct strbuf *, FILE *, int); +typedef int (*strbuf_getline_fn)(struct strbuf *, FILE *); + +/* Uses LF as the line terminator */ +extern int strbuf_getline_lf(struct strbuf *sb, FILE *fp); + +/* Uses NUL as the line terminator */ +extern int strbuf_getline_nul(struct strbuf *sb, FILE *fp); + +/* + * Similar to strbuf_getline_lf(), but additionally treats a CR that + * comes immediately before the LF as part of the terminator. + * This is the most friendly version to be used to read "text" files + * that can come from platforms whose native text format is CRLF + * terminated. + */ +extern int strbuf_getline(struct strbuf *, FILE *); + /** * Like `strbuf_getline`, but keeps the trailing terminator (if diff --combined submodule-config.c index 92502b594d,9fa226919d..b82d1fbb22 --- a/submodule-config.c +++ b/submodule-config.c @@@ -59,6 -59,7 +59,7 @@@ static void free_one_config(struct subm { free((void *) entry->config->path); free((void *) entry->config->name); + free((void *) entry->config->update_strategy.command); free(entry->config); } @@@ -194,6 -195,8 +195,8 @@@ static struct submodule *lookup_or_crea submodule->path = NULL; submodule->url = NULL; + submodule->update_strategy.type = SM_UPDATE_UNSPECIFIED; + submodule->update_strategy.command = NULL; submodule->fetch_recurse = RECURSE_SUBMODULES_NONE; submodule->ignore = NULL; @@@ -228,35 -231,6 +231,35 @@@ int parse_fetch_recurse_submodules_arg( return parse_fetch_recurse(opt, arg, 1); } +static int parse_push_recurse(const char *opt, const char *arg, + int die_on_error) +{ + switch (git_config_maybe_bool(opt, arg)) { + case 1: + /* There's no simple "on" value when pushing */ + if (die_on_error) + die("bad %s argument: %s", opt, arg); + else + return RECURSE_SUBMODULES_ERROR; + case 0: + return RECURSE_SUBMODULES_OFF; + default: + if (!strcmp(arg, "on-demand")) + return RECURSE_SUBMODULES_ON_DEMAND; + else if (!strcmp(arg, "check")) + return RECURSE_SUBMODULES_CHECK; + else if (die_on_error) + die("bad %s argument: %s", opt, arg); + else + return RECURSE_SUBMODULES_ERROR; + } +} + +int parse_push_recurse_submodules_arg(const char *opt, const char *arg) +{ + return parse_push_recurse(opt, arg, 1); +} + static void warn_multiple_config(const unsigned char *commit_sha1, const char *name, const char *option) { @@@ -293,7 -267,7 +296,7 @@@ static int parse_config(const char *var if (!strcmp(item.buf, "path")) { if (!value) ret = config_error_nonbool(var); - else if (!me->overwrite && submodule->path != NULL) + else if (!me->overwrite && submodule->path) warn_multiple_config(me->commit_sha1, submodule->name, "path"); else { @@@ -317,7 -291,7 +320,7 @@@ } else if (!strcmp(item.buf, "ignore")) { if (!value) ret = config_error_nonbool(var); - else if (!me->overwrite && submodule->ignore != NULL) + else if (!me->overwrite && submodule->ignore) warn_multiple_config(me->commit_sha1, submodule->name, "ignore"); else if (strcmp(value, "untracked") && @@@ -333,13 -307,23 +336,23 @@@ } else if (!strcmp(item.buf, "url")) { if (!value) { ret = config_error_nonbool(var); - } else if (!me->overwrite && submodule->url != NULL) { + } else if (!me->overwrite && submodule->url) { warn_multiple_config(me->commit_sha1, submodule->name, "url"); } else { free((void *) submodule->url); submodule->url = xstrdup(value); } + } else if (!strcmp(item.buf, "update")) { + if (!value) + ret = config_error_nonbool(var); + else if (!me->overwrite && + submodule->update_strategy.type != SM_UPDATE_UNSPECIFIED) + 
warn_multiple_config(me->commit_sha1, submodule->name, + "update"); + else if (parse_submodule_update_strategy(value, + &submodule->update_strategy) < 0) + die(_("invalid value for %s"), var); } strbuf_release(&name); @@@ -427,8 -411,8 +440,8 @@@ static const struct submodule *config_f parameter.commit_sha1 = commit_sha1; parameter.gitmodules_sha1 = sha1; parameter.overwrite = 0; - git_config_from_buf(parse_config, rev.buf, config, config_size, - ¶meter); + git_config_from_mem(parse_config, "submodule-blob", rev.buf, + config, config_size, ¶meter); free(config); switch (lookup_type) { diff --combined submodule-config.h index 9bfa65af03,092ebfc931..e4857f53a8 --- a/submodule-config.h +++ b/submodule-config.h @@@ -2,6 -2,7 +2,7 @@@ #define SUBMODULE_CONFIG_CACHE_H #include "hashmap.h" + #include "submodule.h" #include "strbuf.h" /* @@@ -14,12 -15,12 +15,13 @@@ struct submodule const char *url; int fetch_recurse; const char *ignore; + struct submodule_update_strategy update_strategy; /* the sha1 blob id of the responsible .gitmodules file */ unsigned char gitmodules_sha1[20]; }; int parse_fetch_recurse_submodules_arg(const char *opt, const char *arg); +int parse_push_recurse_submodules_arg(const char *opt, const char *arg); int parse_submodule_config_option(const char *var, const char *value); const struct submodule *submodule_from_name(const unsigned char *commit_sha1, const char *name); diff --combined submodule.c index 62c4356c50,4bd14de42c..90825e17fa --- a/submodule.c +++ b/submodule.c @@@ -15,6 -15,7 +15,7 @@@ #include "thread-utils.h" static int config_fetch_recurse_submodules = RECURSE_SUBMODULES_ON_DEMAND; + static int parallel_jobs = 1; static struct string_list changed_submodule_paths; static int initialized_fetch_ref_tips; static struct sha1_array ref_tips_before_fetch; @@@ -69,7 -70,7 +70,7 @@@ int update_path_in_gitmodules(const cha strbuf_addstr(&entry, "submodule."); strbuf_addstr(&entry, submodule->name); strbuf_addstr(&entry, ".path"); - if (git_config_set_in_file(".gitmodules", entry.buf, newpath) < 0) { + if (git_config_set_in_file_gently(".gitmodules", entry.buf, newpath) < 0) { /* Maybe the user already did that, don't error out here */ warning(_("Could not update .gitmodules entry %s"), entry.buf); strbuf_release(&entry); @@@ -123,7 -124,7 +124,7 @@@ static int add_submodule_odb(const cha struct strbuf objects_directory = STRBUF_INIT; struct alternate_object_database *alt_odb; int ret = 0; - int alloc; + size_t alloc; strbuf_git_path_submodule(&objects_directory, path, "objects/"); if (!is_directory(objects_directory.buf)) { @@@ -138,8 -139,8 +139,8 @@@ objects_directory.len)) goto done; - alloc = objects_directory.len + 42; /* for "12/345..." sha1 */ - alt_odb = xmalloc(sizeof(*alt_odb) + alloc); + alloc = st_add(objects_directory.len, 42); /* for "12/345..." 
sha1 */ + alt_odb = xmalloc(st_add(sizeof(*alt_odb), alloc)); alt_odb->next = alt_odb_list; xsnprintf(alt_odb->base, alloc, "%s", objects_directory.buf); alt_odb->name = alt_odb->base + objects_directory.len; @@@ -169,7 -170,12 +170,12 @@@ void set_diffopt_flags_from_submodule_c int submodule_config(const char *var, const char *value, void *cb) { - if (starts_with(var, "submodule.")) + if (!strcmp(var, "submodule.fetchjobs")) { + parallel_jobs = git_config_int(var, value); + if (parallel_jobs < 0) + die(_("negative values not allowed for submodule.fetchJobs")); + return 0; + } else if (starts_with(var, "submodule.")) return parse_submodule_config_option(var, value); else if (!strcmp(var, "fetch.recursesubmodules")) { config_fetch_recurse_submodules = parse_fetch_recurse_submodules_arg(var, value); @@@ -210,6 -216,27 +216,27 @@@ void gitmodules_config(void } } + int parse_submodule_update_strategy(const char *value, + struct submodule_update_strategy *dst) + { + free((void*)dst->command); + dst->command = NULL; + if (!strcmp(value, "none")) + dst->type = SM_UPDATE_NONE; + else if (!strcmp(value, "checkout")) + dst->type = SM_UPDATE_CHECKOUT; + else if (!strcmp(value, "rebase")) + dst->type = SM_UPDATE_REBASE; + else if (!strcmp(value, "merge")) + dst->type = SM_UPDATE_MERGE; + else if (skip_prefix(value, "!", &value)) { + dst->type = SM_UPDATE_COMMAND; + dst->command = xstrdup(value); + } else + return -1; + return 0; + } + void handle_ignore_submodules_arg(struct diff_options *diffopt, const char *arg) { @@@ -750,6 -777,9 +777,9 @@@ int fetch_populated_submodules(const st argv_array_push(&spf.args, "--recurse-submodules-default"); /* default value, "--submodule-prefix" and its value are added later */ + if (max_parallel_jobs < 0) + max_parallel_jobs = parallel_jobs; + calculate_changed_submodule_paths(); run_processes_parallel(max_parallel_jobs, get_next_submodule, @@@ -1086,11 -1116,18 +1116,16 @@@ void connect_work_tree_and_git_dir(cons /* Update core.worktree setting */ strbuf_reset(&file_name); strbuf_addf(&file_name, "%s/config", git_dir); - if (git_config_set_in_file(file_name.buf, "core.worktree", - relative_path(real_work_tree, git_dir, - &rel_path))) - die(_("Could not set core.worktree in %s"), - file_name.buf); + git_config_set_in_file(file_name.buf, "core.worktree", + relative_path(real_work_tree, git_dir, + &rel_path)); strbuf_release(&file_name); strbuf_release(&rel_path); free((void *)real_work_tree); } + + int parallel_submodules(void) + { + return parallel_jobs; + } diff --combined submodule.h index e06eaa5ebb,3166608fb5..7ef3775184 --- a/submodule.h +++ b/submodule.h @@@ -5,7 -5,6 +5,7 @@@ struct diff_options struct argv_array; enum { + RECURSE_SUBMODULES_CHECK = -4, RECURSE_SUBMODULES_ERROR = -3, RECURSE_SUBMODULES_NONE = -2, RECURSE_SUBMODULES_ON_DEMAND = -1, @@@ -14,6 -13,21 +14,21 @@@ RECURSE_SUBMODULES_ON = 2 }; + enum submodule_update_type { + SM_UPDATE_UNSPECIFIED = 0, + SM_UPDATE_CHECKOUT, + SM_UPDATE_REBASE, + SM_UPDATE_MERGE, + SM_UPDATE_NONE, + SM_UPDATE_COMMAND + }; + + struct submodule_update_strategy { + enum submodule_update_type type; + const char *command; + }; + #define SUBMODULE_UPDATE_STRATEGY_INIT {SM_UPDATE_UNSPECIFIED, NULL} + int is_staging_gitmodules_ok(void); int update_path_in_gitmodules(const char *oldpath, const char *newpath); int remove_path_from_gitmodules(const char *path); @@@ -22,6 -36,8 +37,8 @@@ void set_diffopt_flags_from_submodule_c const char *path); int submodule_config(const char *var, const char *value, void *cb); void 
gitmodules_config(void); + int parse_submodule_update_strategy(const char *value, + struct submodule_update_strategy *dst); void handle_ignore_submodules_arg(struct diff_options *diffopt, const char *); void show_submodule_summary(FILE *f, const char *path, const char *line_prefix, @@@ -42,5 -58,6 +59,6 @@@ int find_unpushed_submodules(unsigned c struct string_list *needs_pushing); int push_unpushed_submodules(unsigned char new_sha1[20], const char *remotes_name); void connect_work_tree_and_git_dir(const char *work_tree, const char *git_dir); + int parallel_submodules(void); #endif diff --combined t/t7400-submodule-basic.sh index e1abd19230,5991e3c015..17d7a98207 --- a/t/t7400-submodule-basic.sh +++ b/t/t7400-submodule-basic.sh @@@ -462,7 -462,7 +462,7 @@@ test_expect_success 'update --init' git config --remove-section submodule.example && test_must_fail git config submodule.example.url && - git submodule update init > update.out && + git submodule update init 2> update.out && cat update.out && test_i18ngrep "not initialized" update.out && test_must_fail git rev-parse --resolve-git-dir init/.git && @@@ -480,7 -480,7 +480,7 @@@ test_expect_success 'update --init fro mkdir -p sub && ( cd sub && - git submodule update ../init >update.out && + git submodule update ../init 2>update.out && cat update.out && test_i18ngrep "not initialized" update.out && test_must_fail git rev-parse --resolve-git-dir ../init/.git && @@@ -849,19 -849,6 +849,19 @@@ test_expect_success 'set up a second su git commit -m "submodule example2 added" ' +test_expect_success 'submodule deinit works on repository without submodules' ' + test_when_finished "rm -rf newdirectory" && + mkdir newdirectory && + ( + cd newdirectory && + git init && + >file && + git add file && + git commit -m "repo should not be empty" + git submodule deinit . + ) +' + test_expect_success 'submodule deinit should remove the whole submodule section from .git/config' ' git config submodule.example.foo bar && git config submodule.example2.frotz nitfol && @@@ -1012,30 -999,5 +1012,30 @@@ test_expect_success 'submodule add clon ) ' +test_expect_success 'submodule helper list is not confused by common prefixes' ' + mkdir -p dir1/b && + ( + cd dir1/b && + git init && + echo hi >testfile2 && + git add . && + git commit -m "test1" + ) && + mkdir -p dir2/b && + ( + cd dir2/b && + git init && + echo hello >testfile1 && + git add . 
&& + git commit -m "test2" + ) && + git submodule add /dir1/b dir1/b && + git submodule add /dir2/b dir2/b && + git commit -m "first submodule commit" && + git submodule--helper list dir1/b |cut -c51- >actual && + echo "dir1/b" >expect && + test_cmp expect actual +' + test_done diff --combined t/t7406-submodule-update.sh index 68ea31d693,090891e873..0791df75ac --- a/t/t7406-submodule-update.sh +++ b/t/t7406-submodule-update.sh @@@ -14,8 -14,8 +14,8 @@@ submodule and "git submodule update --r compare_head() { - sha_master=`git rev-list --max-count=1 master` - sha_head=`git rev-list --max-count=1 HEAD` + sha_master=$(git rev-list --max-count=1 master) + sha_head=$(git rev-list --max-count=1 HEAD) test "$sha_master" = "$sha_head" } @@@ -774,4 -774,31 +774,31 @@@ test_expect_success 'submodule update - test_i18ngrep "Submodule path .deeper/submodule/subsubmodule.: checked out" actual ) ' + + test_expect_success 'submodule update can be run in parallel' ' + (cd super2 && + GIT_TRACE=$(pwd)/trace.out git submodule update --jobs 7 && + grep "7 tasks" trace.out && + git config submodule.fetchJobs 8 && + GIT_TRACE=$(pwd)/trace.out git submodule update && + grep "8 tasks" trace.out && + GIT_TRACE=$(pwd)/trace.out git submodule update --jobs 9 && + grep "9 tasks" trace.out + ) + ' + + test_expect_success 'git clone passes the parallel jobs config on to submodules' ' + test_when_finished "rm -rf super4" && + GIT_TRACE=$(pwd)/trace.out git clone --recurse-submodules --jobs 7 . super4 && + grep "7 tasks" trace.out && + rm -rf super4 && + git config --global submodule.fetchJobs 8 && + GIT_TRACE=$(pwd)/trace.out git clone --recurse-submodules . super4 && + grep "8 tasks" trace.out && + rm -rf super4 && + GIT_TRACE=$(pwd)/trace.out git clone --recurse-submodules --jobs 9 . super4 && + grep "9 tasks" trace.out && + rm -rf super4 + ' + test_done
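
Note (not part of the merge above): the series revolves around the run_processes_parallel() framework, whose callback signatures appear in the run-command.h hunk and whose call site appears in update_clone(). The sketch below is a hypothetical, illustrative caller only — the demo_* names, the directory-list task source, and the "rev-parse --git-dir" command are invented here, and it presumes git's in-tree headers (cache.h, run-command.h, argv-array.h), so it is a shape reference rather than a standalone program. The callback signatures and return conventions follow the comments shown in the diff: get_next_task returns 1 while it still has work, 0 when done; the failure/finished callbacks return 0 to keep going, non-zero to stop scheduling further tasks.

/*
 * Illustrative only -- not part of the patch above. A hypothetical
 * caller that runs "git rev-parse --git-dir" in a fixed list of
 * directories, up to 'jobs' at a time.
 */
#include "cache.h"
#include "run-command.h"
#include "argv-array.h"

struct demo_state {
	const char **dirs;	/* NULL-terminated list of directories */
	int next;		/* index of the next directory to hand out */
};

static int demo_get_next_task(struct child_process *cp,
			      struct strbuf *out,
			      void *pp_cb, void **pp_task_cb)
{
	struct demo_state *st = pp_cb;
	const char *dir;

	if (!st->dirs[st->next])
		return 0;	/* no more tasks to hand out */
	dir = st->dirs[st->next++];

	cp->git_cmd = 1;
	cp->no_stdin = 1;
	cp->dir = dir;
	argv_array_pushl(&cp->args, "rev-parse", "--git-dir", NULL);

	/* messages go into 'out', never directly to stderr */
	strbuf_addf(out, "checking %s\n", dir);

	*pp_task_cb = (void *)dir;	/* per-task cookie for the other callbacks */
	return 1;			/* one task was handed out */
}

static int demo_start_failure(struct strbuf *out,
			      void *pp_cb, void *pp_task_cb)
{
	strbuf_addf(out, "failed to start job for %s\n",
		    (const char *)pp_task_cb);
	return 0;	/* keep going; return non-zero to stop scheduling */
}

static int demo_task_finished(int result, struct strbuf *out,
			      void *pp_cb, void *pp_task_cb)
{
	if (result)
		strbuf_addf(out, "%s exited with %d\n",
			    (const char *)pp_task_cb, result);
	return 0;
}

static void demo_run(const char **dirs, int jobs)
{
	struct demo_state st = { dirs, 0 };

	run_processes_parallel(jobs,
			       demo_get_next_task,
			       demo_start_failure,
			       demo_task_finished,
			       &st);
}

Writing all child-facing messages into the strbuf arguments, rather than to stderr, is what lets the machinery buffer and de-interleave output from concurrent children; it is the same reason prepare_to_clone_next_submodule() in the diff emits its "Skipping ..." notices through the out strbuf instead of printing them directly.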