--- /dev/null
+Git v2.10.2 Release Notes
+=========================
+
+Fixes since v2.10.1
+-------------------
+
+ * The code that parses the format parameter of the "for-each-ref"
+ command has seen a micro-optimization.
+
+ * The "graph" API used in "git log --graph" miscounted the number of
+ output columns consumed so far when drawing a padding line, which
+ has been fixed; this did not affect any existing code as nobody
+ tried to write anything after the padding on such a line, though.
+
+ * Almost everybody uses DEFAULT_ABBREV to refer to the default
+ setting for the abbreviation, but "git blame" peeked into the
+ underlying variable, bypassing the macro for no good reason.
+
+ * Doc update to clarify what "log -3 --reverse" does.
+
+ * An author name that used a backslash-quoted double quote in the
+ human-readable part, e.g. "My \"double quoted\" name", was not
+ unquoted correctly while applying a patch from a piece of e-mail.
+
+ * The original command line syntax for "git merge", which was "git
+ merge <msg> HEAD <parent>...", has been deprecated for quite some
+ time, and "git gui" was the last in-tree user of the syntax. This
+ is finally fixed, so that we can move forward with the deprecation.
+
+ * Codepaths that read from an on-disk loose object were too loose in
+ validating that what they are reading is a proper object file, and
+ sometimes read past the data they read from the disk; this has
+ been corrected. H/t to Gustavo Grieco for reporting.
+
+ * "git worktree", even though it used the default_abbrev setting that
+ ought to be affected by core.abbrev configuration variable, ignored
+ the variable setting. The command has been taught to read the
+ default set of configuration variables to correct this.
+
+ * A low-level function verify_packfile() was meant to show errors
+ that were detected without dying itself, but under some conditions
+ it died instead, which has been fixed.
+
+
+Also contains minor documentation updates and code clean-ups.
* "git gui" l10n to Portuguese.
+ * When given an abbreviated object name that is not (or more
+ realistically, "no longer") unique, we gave a fatal error
+ "ambiguous argument". This error is now accompanied by hints that
+ lists the objects that begins with the given prefix. During the
+ course of development of this new feature, numerous minor bugs were
+ uncovered and corrected, the most notable one of which is that we
+ gave "short SHA1 xxxx is ambiguous." twice without good reason.
+
+ * "git log rev^..rev" is an often-used revision range specification
+ to show what was done on a side branch merged at rev. This has
+ gained a short-hand "rev^-1". In general "rev^-$n" is the same as
+ "^rev^$n rev", i.e. what has happened on other branches while the
+ history leading to nth parent was looking the other way.
+
+ * In recent versions of cURL, GSSAPI credential delegation is
+ disabled by default due to CVE-2011-2192; introduce a configuration
+ to selectively allow enabling this.
+ (merge 26a7b23429 ps/http-gssapi-cred-delegation later to maint).
+
Performance, Internal Implementation, Development Support etc.
* The codepath in "git fsck" to detect malformed tree objects has
been updated not to die but keep going after detecting them.
+ * We call "qsort(array, nelem, sizeof(array[0]), fn)", and most of
+ the time third parameter is redundant. A new QSORT() macro lets us
+ omit it.
+
+ * "git pack-objects" in a repository with many packfiles used to
+ spend a lot of time looking for/at objects in them; the accesses to
+ the packfiles are now optimized by checking the most-recently-used
+ packfile first.
+ (merge c9af708b1a jk/pack-objects-optim-mru later to maint).
+
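+ The macro itself is not shown in this patch; conceptually it boils
+ down to something like the following sketch (the in-tree definition
+ may differ in detail; the small-array guard mirrors the "if (nmemb)"
+ checks that the accompanying Coccinelle rules drop around QSORT
+ calls):
+
+	/* The element size is derived from the array itself. */
+	#define QSORT(base, n, compar) do {				\
+		if ((n) > 1)						\
+			qsort((base), (n), sizeof(*(base)), (compar));	\
+	} while (0)
+
+	/* e.g. QSORT(array, nelem, fn); replaces the call quoted above */
+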
Also contains various documentation updates and code clean-ups.
we are sending an object C, we want a tag B that directly points at
C but also a tag A that points at the tag B. We used to miss the
intermediate tag B in some cases.
- (merge b773dde jk/pack-tag-of-tag later to maint).
* Update Japanese translation for "git-gui".
- (merge 02748bc sy/git-gui-i18n-ja later to maint).
* "git fetch http::/site/path" did not die correctly and segfaulted
instead.
- (merge d63ed6e jk/fix-remote-curl-url-wo-proto later to maint).
* "git commit-tree" stopped reading commit.gpgsign configuration
variable that was meant for Porcelain "git commit" in Git 2.9; we
forgot to update "git gui" to look at the configuration to match
this change.
- (merge f14a310 js/git-gui-commit-gpgsign later to maint).
* "git add --chmod=+x" added recently lacked documentation, which has
been corrected.
- (merge 7ef7903 et/add-chmod-x later to maint).
* "git log --cherry-pick" used to include merge commits as candidates
to be matched up with other commits, resulting in a lot of wasted time.
The patch-id generation logic has been updated to ignore merges to
avoid the wastage.
- (merge 7c81040 jk/patch-ids-no-merges later to maint).
* The http transport (with curl-multi option, which is the default
these days) failed to remove a curl-easy handle from a curlm session,
which led to unnecessary API failures.
- (merge 2abc848 ew/http-do-not-forget-to-call-curl-multi-remove-handle later to maint).
* There were numerous corner cases in which the configuration files
are read and used or not read at all depending on the directory a
* Performance tests done via "t/perf" did not use the same set of
build configuration if the user relied on autoconf generated
configuration.
- (merge cd5c281 ks/perf-build-with-autoconf later to maint).
* "git format-patch --base=..." feature that was recently added
showed the base commit information after "-- " e-mail signature
line, which turned out to be inconvenient. The base information
has been moved above the signature line.
- (merge 480871e jt/format-patch-base-info-above-sig later to maint).
* More i18n.
(merge 43073f8 va/i18n later to maint).
than nice. As the underlying commands used inside "git rebase"
would fail with a more meaningful error message and advice text
when the bogus ident matters, this extra check was removed.
- (merge 1e461c4 jk/rebase-i-drop-ident-check later to maint).
* "git gc --aggressive" used to limit the delta-chain length to 250,
which is way too deep for gaining additional space savings and is
detrimental for runtime performance. The limit has been reduced to
50.
- (merge 07e7dbf jk/reduce-gc-aggressive-depth later to maint).
* Documentation for individual configuration variables to control use
of color (like `color.grep`) said that their default value is
When we updated the default value for color.ui from 'false' to
'auto' quite a while ago, all of them broke. This has been
corrected.
- (merge 14d16e2 mm/config-color-ui-default-to-auto later to maint).
* The pretty-format specifier "%C(auto)" used by the "log" family of
commands to enable coloring of the output is taught to also issue a
* A shell script example in check-ref-format documentation has been
fixed.
- (merge 92dece7 ep/doc-check-ref-format-example later to maint).
* "git checkout <word>" does not follow the usual disambiguation
rules when the <word> can be both a rev and a path, to allow
file 'foo' in the working tree without having to disambiguate.
This was poorly documented and the check was incorrect when the
command was run from a subdirectory.
- (merge b829b94 nd/checkout-disambiguation later to maint).
* Some codepaths in "git diff" used regexec(3) on a buffer that was
mmap(2)ed, which may not have a terminating NUL, leading to a read
internal directory structure we assumed HomeBrew uses, which was a
no-no. The procedure has been updated to ask HomeBrew things we
need to know to fix this.
- (merge f86f49b ls/travis-homebrew-path-fix later to maint).
* When "git rebase -i" is given a broken instruction, it told the
user to fix it with "--edit-todo", but didn't say what the step
after that was (i.e. "--continue").
- (merge 37875b4 rt/rebase-i-broken-insn-advise later to maint).
* Documentation around tools to import from CVS was fairly outdated.
- (merge 106b672 jk/doc-cvs-update later to maint).
* "git clone --recurse-submodules" lost the progress eye-candy in
recent update, which has been corrected.
* In the codepath that comes up with the hostname to be used in an
e-mail when the user didn't tell us, we looked at ai_canonname
field in struct addrinfo without making sure it is not NULL first.
- (merge c375a7efa3 jk/ident-ai-canonname-could-be-null later to maint).
* "git worktree", even though it used the default_abbrev setting that
ought to be affected by core.abbrev configuration variable, ignored
* Doc update to clarify what "log -3 --reverse" does.
(merge 04be69478f pb/rev-list-reverse-with-count later to maint).
+ * Almost everybody uses DEFAULT_ABBREV to refer to the default
+ setting for the abbreviation, but "git blame" peeked into the
+ underlying variable, bypassing the macro for no good reason.
+ (merge 5293284b4d jc/blame-abbrev later to maint).
+
+ * The "graph" API used in "git log --graph" miscounted the number of
+ output columns consumed so far when drawing a padding line, which
+ has been fixed; this did not affect any existing code as nobody
+ tried to write anything after the padding on such a line, though.
+ (merge 1647793524 jk/graph-padding-fix later to maint).
+
+ * The code that parses the format parameter of the "for-each-ref"
+ command has seen a micro-optimization.
+ (merge e94ce1394e sg/ref-filter-parse-optim later to maint).
+
+ * When we use cURL to talk to an imap server (done when a new enough
+ version of the cURL library is available), we forgot to explicitly
+ add imap(s):// before the destination. For some folks that didn't
+ work, and the library tried to make HTTP(s) requests instead.
+ (merge d2d07ab861 ak/curl-imap-send-explicit-scheme later to maint).
+
+ * The ./configure script generated from configure.ac was taught to
+ detect SSL support in libcurl more reliably.
+ (merge 924b7eb1c9 dp/autoconf-curl-ssl later to maint).
+
+ * The command-line completion script (in contrib/) learned to
+ complete "git cmd ^mas<HT>" into "git cmd ^master", i.e. to
+ complete the negative end of a reference.
+ (merge 49416ad22a cp/completion-negative-refs later to maint).
+
+ * The existing "git fetch --depth=<n>" option was hard to use
+ correctly when making the history of an existing shallow clone
+ deeper. A new option, "--deepen=<n>", has been added to make this
+ easier to use. "git clone" also learned "--shallow-since=<date>"
+ and "--shallow-exclude=<tag>" options to make it easier to specify
+ "I am interested only in the recent N months worth of history" and
+ "Give me only the history since that version".
+ (merge cccf74e2da nd/shallow-deepen later to maint).
+
+ * It is a common mistake to say "git blame --reverse OLD path",
+ expecting that the command line is dwimmed as if asking how lines
+ in path in an old revision OLD have survived up to the current
+ commit. "git blame --reverse OLD" is now taken as "git blame
+ --reverse OLD..HEAD" to match that expectation.
+ (merge e1d09701a4 jc/blame-reverse later to maint).
+
* Other minor doc, test and build updates and code cleanups.
- (merge e78d57e bw/pathspec-remove-unused-extern-decl later to maint).
- (merge ce25e4c rs/checkout-some-states-are-const later to maint).
- (merge a8342a4 rs/strbuf-remove-fix later to maint).
- (merge b56aa5b rs/unpack-trees-reduce-file-scope-global later to maint).
- (merge 5efc60c mr/vcs-svn-printf-ulong later to maint).
(merge a22ae75 rs/cocci later to maint).
(merge 45ccef87b3 rs/copy-array later to maint).
(merge 8201688ecd dt/mailinfo later to maint).
-S <revs-file>::
Use revisions from revs-file instead of calling linkgit:git-rev-list[1].
---reverse::
+--reverse <rev>..<rev>::
Walk history forward instead of backward. Instead of showing
the revision in which a line appeared, this shows the last
revision in which a line has existed. This requires a range of
revision like START..END where the path to blame exists in
- START.
+ START. `git blame --reverse START` is taken as `git blame
+ --reverse START..HEAD` for convenience.
-p::
--porcelain::
a username in the URL, as libcurl normally requires a username for
authentication.
+http.delegation::
+ Control GSSAPI credential delegation. The delegation is disabled
+ by default in libcurl since version 7.21.7. Set this parameter to tell
+ the server what it is allowed to delegate when it comes to user
+ credentials. Used with GSS/kerberos. Possible values are:
++
+--
+* `none` - Don't allow any delegation.
+* `policy` - Delegates if and only if the OK-AS-DELEGATE flag is set in the
+ Kerberos service ticket, which is a matter of realm policy.
+* `always` - Unconditionally allow the server to delegate.
+--
+
+
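+As an illustration only (the option handling itself lives in http.c and
+is not part of this hunk), the three values presumably map onto
+libcurl's `CURLOPT_GSSAPI_DELEGATION` constants along these lines:
+
+	#include <string.h>
+	#include <curl/curl.h>
+
+	/* sketch: translate the configured value into a libcurl flag */
+	static long delegation_flag(const char *value)
+	{
+		if (!strcmp(value, "none"))
+			return CURLGSSAPI_DELEGATION_NONE;
+		if (!strcmp(value, "policy"))
+			return CURLGSSAPI_DELEGATION_POLICY_FLAG;
+		if (!strcmp(value, "always"))
+			return CURLGSSAPI_DELEGATION_FLAG;
+		return -1;	/* unknown value; reject in real code */
+	}
+	/* ... and would be applied with curl_easy_setopt(h,
+	 *     CURLOPT_GSSAPI_DELEGATION, delegation_flag(v)); */
+
+On the command line this corresponds to e.g. `git -c http.delegation=policy fetch`
+against a Kerberos-protected HTTP remote.
+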
http.extraHeader::
Pass an additional HTTP header when communicating with a server. If
more than one such entry exists, all of them are added as extra
linkgit:git-clone[1]), deepen or shorten the history to the specified
number of commits. Tags for the deepened commits are not fetched.
+--deepen=<depth>::
+ Similar to --depth, except it specifies the number of commits
+ from the current shallow boundary instead of from the tip of
+ each remote branch history.
+
+--shallow-since=<date>::
+ Deepen or shorten the history of a shallow repository to
+ include all reachable commits after <date>.
+
+--shallow-exclude=<revision>::
+ Deepen or shorten the history of a shallow repository to
+ exclude commits reachable from a specified remote branch or tag.
+ This option can be specified multiple times.
+
--unshallow::
If the source repository is complete, convert a shallow
repository to a complete one, removing all the limitations
[verse]
'git blame' [-c] [-b] [-l] [--root] [-t] [-f] [-n] [-s] [-e] [-p] [-w] [--incremental]
[-L <range>] [-S <revs-file>] [-M] [-C] [-C] [-C] [--since=<date>]
- [--progress] [--abbrev=<n>] [<rev> | --contents <file> | --reverse <rev>]
+ [--progress] [--abbrev=<n>] [<rev> | --contents <file> | --reverse <rev>..<rev>]
[--] <file>
DESCRIPTION
tips of all branches. If you want to clone submodules shallowly,
also pass `--shallow-submodules`.
+--shallow-since=<date>::
+ Create a shallow clone with a history after the specified time.
+
+--shallow-exclude=<revision>::
+ Create a shallow clone with a history that excludes commits
+ reachable from a specified remote branch or tag. This option
+ can be specified multiple times.
+
--[no-]single-branch::
Clone only the history leading to the tip of a single branch,
either specified by the `--branch` option or the primary
2. by using 'git rm' to remove files from the working tree
and the index, again before using the 'commit' command;
-3. by listing files as arguments to the 'commit' command, in which
+3. by listing files as arguments to the 'commit' command
+ (without the --interactive or --patch switch), in which
case the commit will ignore changes staged in the index, and instead
record the current content of the listed files (which must already
be known to Git);
actual commit;
5. by using the --interactive or --patch switches with the 'commit' command
- to decide one by one which files or hunks should be part of the commit,
+ to decide one by one which files or hunks should be part of the commit
+ in addition to contents in the index,
before finalizing the operation. See the ``Interactive Mode'' section of
linkgit:git-add[1] to learn how to operate these modes.
'git-upload-pack' treats the special depth 2147483647 as
infinite even if there is an ancestor-chain that long.
+--shallow-since=<date>::
+ Deepen or shorten the history of a shallow repository to
+ include all reachable commits after <date>.
+
+--shallow-exclude=<revision>::
+ Deepen or shorten the history of a shallow repository to
+ exclude commits reachable from a specified remote branch or tag.
+ This option can be specified multiple times.
+
+--deepen-relative::
+ Argument --depth specifies the number of commits from the
+ current shallow boundary instead of from the tip of each
+ remote branch history.
+
--no-progress::
Do not show the progress.
'option depth' <depth>::
Deepens the history of a shallow repository.
+'option deepen-since' <timestamp>::
+ Deepens the history of a shallow repository based on time.
+
+'option deepen-not' <ref>::
+ Deepens the history of a shallow repository excluding ref.
+ Multiple options add up.
+
+'option deepen-relative' {'true'|'false'}::
+ Deepens the history of a shallow repository relative to
+ current boundary. Only valid when used with "option depth".
+
'option followtags' {'true'|'false'}::
If enabled the helper should automatically fetch annotated
tag objects if the object the tag points at was transferred
Other <rev>{caret} Parent Shorthand Notations
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-Two other shorthands exist, particularly useful for merge commits,
+Three other shorthands exist, particularly useful for merge commits,
for naming a set that is formed by a commit and its parent commits.
The 'r1{caret}@' notation means all parents of 'r1'.
The 'r1{caret}!' notation includes commit 'r1' but excludes all of its parents.
By itself, this notation denotes the single commit 'r1'.
+The '<rev>{caret}-{<n>}' notation includes '<rev>' but excludes the <n>th
+parent (i.e. a shorthand for '<rev>{caret}<n>..<rev>'), with '<n>' = 1 if
+not given. This is typically useful for merge commits where you
+can just pass '<commit>{caret}-' to get all the commits in the branch
+that was merged in merge commit '<commit>' (including '<commit>'
+itself).
+
While '<rev>{caret}<n>' was about specifying a single commit parent, these
-two notations consider all its parents. For example you can say
+three notations also consider its parents. For example you can say
'HEAD{caret}2{caret}@', however you cannot say 'HEAD{caret}@{caret}2'.
Revision Range Summary
as giving commit '<rev>' and then all its parents prefixed with
'{caret}' to exclude them (and their ancestors).
+'<rev>{caret}-{<n>}', e.g. 'HEAD{caret}-, HEAD{caret}-2'::
+ Equivalent to '<rev>{caret}<n>..<rev>', with '<n>' = 1 if not
+ given.
+
Here are a handful of examples using the Loeliger illustration above,
with each step in the notation's expansion and selection carefully
spelt out:
C I J F C
B..C = ^B C C
B...C = B ^F C G H D E B C
+ B^- = B^..B
+ = ^B^1 B E I J F B
C^@ = C^1
= F I J F
B^@ = B^1 B^2 B^3
`sha1_array_for_each_unique`::
Efficiently iterate over each unique element of the list,
executing the callback function for each one. If the array is
- not sorted, this function has the side effect of sorting it.
+ not sorted, this function has the side effect of sorting it. If
+ the callback returns a non-zero value, the iteration ends
+ immediately and the callback's return value is propagated; otherwise,
+ 0 is returned.
Examples
--------
-----------------------------------------
-void print_callback(const unsigned char sha1[20],
+int print_callback(const unsigned char sha1[20],
void *data)
{
printf("%s\n", sha1_to_hex(sha1));
+ return 0; /* always continue */
}
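+
+/*
+ * Illustrative addition (not part of the original example): a callback
+ * that returns a non-zero value stops the iteration early, and that
+ * value is propagated to the caller, as described above.
+ */
+int find_first_callback(const unsigned char sha1[20], void *data)
+{
+	hashcpy(data, sha1);	/* remember the first unique entry */
+	return 1;		/* non-zero stops the iteration */
+}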
void some_func(void)
shallow-line = PKT-LINE("shallow" SP obj-id)
- depth-request = PKT-LINE("deepen" SP depth)
+ depth-request = PKT-LINE("deepen" SP depth) /
+ PKT-LINE("deepen-since" SP timestamp) /
+ PKT-LINE("deepen-not" SP ref)
first-want = PKT-LINE("want" SP obj-id SP capability-list)
additional-want = PKT-LINE("want" SP obj-id)
the fetch-pack/upload-pack protocol so clients can request shallow
clones.
+deepen-since
+------------
+
+This capability adds "deepen-since" command to fetch-pack/upload-pack
+protocol so the client can request shallow clones that are cut at a
+specific time, instead of depth. Internally it's equivalent of doing
+"rev-list --max-age=<timestamp>" on the server side. "deepen-since"
+cannot be used with "deepen".
+
+deepen-not
+----------
+
+This capability adds "deepen-not" command to fetch-pack/upload-pack
+protocol so the client can request shallow clones that are cut at a
+specific revision, instead of depth. Internally it's equivalent of
+doing "rev-list --not <rev>" on the server side. "deepen-not"
+cannot be used with "deepen", but can be used with "deepen-since".
+
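+A sketch of the client side (mirroring the fetch-pack.c changes later
+in this series; "args" and "req_buf" are the client's existing request
+state):
+
+	/* cut the history by time instead of by depth */
+	if (args->deepen_since) {
+		unsigned long max_age = approxidate(args->deepen_since);
+		packet_buf_write(&req_buf, "deepen-since %lu", max_age);
+	}
+	/* exclude everything reachable from the named refs */
+	if (args->deepen_not) {
+		int i;
+		for (i = 0; i < args->deepen_not->nr; i++)
+			packet_buf_write(&req_buf, "deepen-not %s",
+					 args->deepen_not->items[i].string);
+	}
+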
+deepen-relative
+---------------
+
+If this capability is requested by the client, the semantics of the
+"deepen" command change: the "depth" argument is the depth from
+the current shallow boundary, instead of the depth from remote refs.
+
no-progress
-----------
export TCL_PATH TCLTK_PATH
SPARSE_FLAGS =
+SPATCH_FLAGS = --all-includes
%.cocci.patch: %.cocci $(C_SOURCES)
@echo ' ' SPATCH $<; \
for f in $(C_SOURCES); do \
- $(SPATCH) --sp-file $< $$f; \
+ $(SPATCH) --sp-file $< $$f $(SPATCH_FLAGS); \
done >$@ 2>$@.log; \
if test -s $@; \
then \
array[cnt].distance = distance;
cnt++;
}
- qsort(array, cnt, sizeof(*array), compare_commit_dist);
+ QSORT(array, cnt, compare_commit_dist);
for (p = list, i = 0; i < cnt; i++) {
char buf[100]; /* enough for dist=%d */
struct object *obj = &(array[i].commit->object);
unsigned largest_score = 0;
struct blame_entry *e;
int compute_auto_abbrev = (abbrev < 0);
- int auto_abbrev = default_abbrev;
+ int auto_abbrev = DEFAULT_ABBREV;
for (e = sb->ent; e; e = e->next) {
struct origin *suspect = e->suspect;
return xstrdup_or_null(name);
}
+static const char *dwim_reverse_initial(struct scoreboard *sb)
+{
+ /*
+ * DWIM "git blame --reverse ONE -- PATH" as
+ * "git blame --reverse ONE..HEAD -- PATH" but only do so
+ * when it makes sense.
+ */
+ struct object *obj;
+ struct commit *head_commit;
+ unsigned char head_sha1[20];
+
+ if (sb->revs->pending.nr != 1)
+ return NULL;
+
+ /* Is that sole rev a committish? */
+ obj = sb->revs->pending.objects[0].item;
+ obj = deref_tag(obj, NULL, 0);
+ if (obj->type != OBJ_COMMIT)
+ return NULL;
+
+ /* Do we have HEAD? */
+ if (!resolve_ref_unsafe("HEAD", RESOLVE_REF_READING, head_sha1, NULL))
+ return NULL;
+ head_commit = lookup_commit_reference_gently(head_sha1, 1);
+ if (!head_commit)
+ return NULL;
+
+ /* Turn "ONE" into "ONE..HEAD" then */
+ obj->flags |= UNINTERESTING;
+ add_pending_object(sb->revs, &head_commit->object, "HEAD");
+
+ sb->final = (struct commit *)obj;
+ return sb->revs->pending.objects[0].name;
+}
+
static char *prepare_initial(struct scoreboard *sb)
{
int i;
if (obj->type != OBJ_COMMIT)
die("Non commit %s?", revs->pending.objects[i].name);
if (sb->final)
- die("More than one commit to dig down to %s and %s?",
+ die("More than one commit to dig up from, %s and %s?",
revs->pending.objects[i].name,
final_commit_name);
sb->final = (struct commit *) obj;
final_commit_name = revs->pending.objects[i].name;
}
+
+ if (!final_commit_name)
+ final_commit_name = dwim_reverse_initial(sb);
if (!final_commit_name)
- die("No commit to dig down to?");
+ die("No commit to dig up from?");
return xstrdup(final_commit_name);
}
char *buf;
unsigned long size;
struct object_context obj_context;
- struct object_info oi = {NULL};
+ struct object_info oi = OBJECT_INFO_INIT;
struct strbuf sb = STRBUF_INIT;
unsigned flags = LOOKUP_REPLACE_OBJECT;
const char *path = force_path;
struct expand_data *expand;
};
-static void batch_object_cb(const unsigned char sha1[20], void *vdata)
+static int batch_object_cb(const unsigned char sha1[20], void *vdata)
{
struct object_cb_data *data = vdata;
hashcpy(data->expand->oid.hash, sha1);
batch_object_write(NULL, data->opt, data->expand);
+ return 0;
}
static int batch_loose_object(const unsigned char *sha1,
data.split_on_whitespace = 1;
if (opt->all_objects) {
- struct object_info empty;
- memset(&empty, 0, sizeof(empty));
+ struct object_info empty = OBJECT_INFO_INIT;
if (!memcmp(&data.info, &empty, sizeof(empty)))
data.skip_object_info = 1;
}
static int option_no_checkout, option_bare, option_mirror, option_single_branch = -1;
static int option_local = -1, option_no_hardlinks, option_shared, option_recursive;
static int option_shallow_submodules;
-static char *option_template, *option_depth;
+static int deepen;
+static char *option_template, *option_depth, *option_since;
static char *option_origin = NULL;
static char *option_branch = NULL;
+static struct string_list option_not = STRING_LIST_INIT_NODUP;
static const char *real_git_dir;
static char *option_upload_pack = "git-upload-pack";
static int option_verbosity;
N_("path to git-upload-pack on the remote")),
OPT_STRING(0, "depth", &option_depth, N_("depth"),
N_("create a shallow clone of that depth")),
+ OPT_STRING(0, "shallow-since", &option_since, N_("time"),
+ N_("create a shallow clone since a specific time")),
+ OPT_STRING_LIST(0, "shallow-exclude", &option_not, N_("revision"),
+ N_("deepen history of shallow clone by excluding rev")),
OPT_BOOL(0, "single-branch", &option_single_branch,
N_("clone only one branch, HEAD or --branch")),
OPT_BOOL(0, "shallow-submodules", &option_shallow_submodules,
continue;
}
abs_path = mkpathdup("%s/objects/%s", src_repo, line.buf);
- normalize_path_copy(abs_path, abs_path);
- add_to_alternates_file(abs_path);
+ if (!normalize_path_copy(abs_path, abs_path))
+ add_to_alternates_file(abs_path);
+ else
+ warning("skipping invalid relative alternate: %s/%s",
+ src_repo, line.buf);
free(abs_path);
}
strbuf_release(&line);
usage_msg_opt(_("You must specify a repository to clone."),
builtin_clone_usage, builtin_clone_options);
+ if (option_depth || option_since || option_not.nr)
+ deepen = 1;
if (option_single_branch == -1)
- option_single_branch = option_depth ? 1 : 0;
+ option_single_branch = deepen ? 1 : 0;
if (option_mirror)
option_bare = 1;
if (is_local) {
if (option_depth)
warning(_("--depth is ignored in local clones; use file:// instead."));
+ if (option_since)
+ warning(_("--shallow-since is ignored in local clones; use file:// instead."));
+ if (option_not.nr)
+ warning(_("--shallow-exclude is ignored in local clones; use file:// instead."));
if (!access(mkpath("%s/shallow", path), F_OK)) {
if (option_local > 0)
warning(_("source repository is shallow, ignoring --local"));
if (option_depth)
transport_set_option(transport, TRANS_OPT_DEPTH,
option_depth);
+ if (option_since)
+ transport_set_option(transport, TRANS_OPT_DEEPEN_SINCE,
+ option_since);
+ if (option_not.nr)
+ transport_set_option(transport, TRANS_OPT_DEEPEN_NOT,
+ (const char *)&option_not);
if (option_single_branch)
transport_set_option(transport, TRANS_OPT_FOLLOWTAGS, "1");
transport_set_option(transport, TRANS_OPT_UPLOADPACK,
option_upload_pack);
- if (transport->smart_options && !option_depth)
+ if (transport->smart_options && !deepen)
transport->smart_options->check_self_contained_and_connected = 1;
refs = transport_get_remote_refs(transport);
oid_to_hex(oid));
}
- qsort(all_matches, match_cnt, sizeof(all_matches[0]), compare_pt);
+ QSORT(all_matches, match_cnt, compare_pt);
if (gave_up_on) {
commit_list_insert_by_date(gave_up_on, &list);
* Handle files below a directory first, in case they are all deleted
* and the directory changes to a file or symlink.
*/
- qsort(q->queue, q->nr, sizeof(q->queue[0]), depth_first);
+ QSORT(q->queue, q->nr, depth_first);
for (i = 0; i < q->nr; i++) {
struct diff_filespec *ospec = q->queue[i]->one;
struct child_process *conn;
struct fetch_pack_args args;
struct sha1_array shallow = SHA1_ARRAY_INIT;
+ struct string_list deepen_not = STRING_LIST_INIT_DUP;
packet_trace_identity("fetch-pack");
for (i = 1; i < argc && *argv[i] == '-'; i++) {
const char *arg = argv[i];
- if (starts_with(arg, "--upload-pack=")) {
- args.uploadpack = arg + 14;
+ if (skip_prefix(arg, "--upload-pack=", &arg)) {
+ args.uploadpack = arg;
continue;
}
- if (starts_with(arg, "--exec=")) {
- args.uploadpack = arg + 7;
+ if (skip_prefix(arg, "--exec=", &arg)) {
+ args.uploadpack = arg;
continue;
}
if (!strcmp("--quiet", arg) || !strcmp("-q", arg)) {
args.verbose = 1;
continue;
}
- if (starts_with(arg, "--depth=")) {
- args.depth = strtol(arg + 8, NULL, 0);
+ if (skip_prefix(arg, "--depth=", &arg)) {
+ args.depth = strtol(arg, NULL, 0);
+ continue;
+ }
+ if (skip_prefix(arg, "--shallow-since=", &arg)) {
+ args.deepen_since = xstrdup(arg);
+ continue;
+ }
+ if (skip_prefix(arg, "--shallow-exclude=", &arg)) {
+ string_list_append(&deepen_not, arg);
+ continue;
+ }
+ if (!strcmp(arg, "--deepen-relative")) {
+ args.deepen_relative = 1;
continue;
}
if (!strcmp("--no-progress", arg)) {
}
usage(fetch_pack_usage);
}
+ if (deepen_not.nr)
+ args.deepen_not = &deepen_not;
if (i < argc)
dest = argv[i++];
static int prune = -1; /* unspecified */
#define PRUNE_BY_DEFAULT 0 /* do we prune by default? */
-static int all, append, dry_run, force, keep, multiple, update_head_ok, verbosity;
+static int all, append, dry_run, force, keep, multiple, update_head_ok, verbosity, deepen_relative;
static int progress = -1, recurse_submodules = RECURSE_SUBMODULES_DEFAULT;
-static int tags = TAGS_DEFAULT, unshallow, update_shallow;
+static int tags = TAGS_DEFAULT, unshallow, update_shallow, deepen;
static int max_children = -1;
static enum transport_family family;
static const char *depth;
+static const char *deepen_since;
static const char *upload_pack;
+static struct string_list deepen_not = STRING_LIST_INIT_NODUP;
static struct strbuf default_rla = STRBUF_INIT;
static struct transport *gtransport;
static struct transport *gsecondary;
OPT_BOOL(0, "progress", &progress, N_("force progress reporting")),
OPT_STRING(0, "depth", &depth, N_("depth"),
N_("deepen history of shallow clone")),
+ OPT_STRING(0, "shallow-since", &deepen_since, N_("time"),
+ N_("deepen history of shallow repository based on time")),
+ OPT_STRING_LIST(0, "shallow-exclude", &deepen_not, N_("revision"),
+ N_("deepen history of shallow clone by excluding rev")),
+ OPT_INTEGER(0, "deepen", &deepen_relative,
+ N_("deepen history of shallow clone")),
{ OPTION_SET_INT, 0, "unshallow", &unshallow, NULL,
N_("convert to a complete repository"),
PARSE_OPT_NONEG | PARSE_OPT_NOARG, NULL, 1 },
* really need to perform. Claiming failure now will ensure
* we perform the network exchange to deepen our history.
*/
- if (depth)
+ if (deepen)
return -1;
opt.quiet = 1;
return check_connected(iterate_ref_map, &rm, &opt);
name, transport->url);
}
-static struct transport *prepare_transport(struct remote *remote)
+static struct transport *prepare_transport(struct remote *remote, int deepen)
{
struct transport *transport;
transport = transport_get(remote, NULL);
set_option(transport, TRANS_OPT_KEEP, "yes");
if (depth)
set_option(transport, TRANS_OPT_DEPTH, depth);
+ if (deepen && deepen_since)
+ set_option(transport, TRANS_OPT_DEEPEN_SINCE, deepen_since);
+ if (deepen && deepen_not.nr)
+ set_option(transport, TRANS_OPT_DEEPEN_NOT,
+ (const char *)&deepen_not);
+ if (deepen_relative)
+ set_option(transport, TRANS_OPT_DEEPEN_RELATIVE, "yes");
if (update_shallow)
set_option(transport, TRANS_OPT_UPDATE_SHALLOW, "yes");
return transport;
static void backfill_tags(struct transport *transport, struct ref *ref_map)
{
- if (transport->cannot_reuse) {
- gsecondary = prepare_transport(transport->remote);
+ int cannot_reuse;
+
+ /*
+ * Once we have set TRANS_OPT_DEEPEN_SINCE, we can't unset it
+ * when remote helper is used (setting it to an empty string
+ * is not unsetting). We could extend the remote helper
+ * protocol for that, but for now, just force a new connection
+ * without deepen-since. Similar story for deepen-not.
+ */
+ cannot_reuse = transport->cannot_reuse ||
+ deepen_since || deepen_not.nr;
+ if (cannot_reuse) {
+ gsecondary = prepare_transport(transport->remote, 0);
transport = gsecondary;
}
transport_set_option(transport, TRANS_OPT_FOLLOWTAGS, NULL);
transport_set_option(transport, TRANS_OPT_DEPTH, "0");
+ transport_set_option(transport, TRANS_OPT_DEEPEN_RELATIVE, NULL);
fetch_refs(transport, ref_map);
if (gsecondary) {
die(_("No remote repository specified. Please, specify either a URL or a\n"
"remote name from which new revisions should be fetched."));
- gtransport = prepare_transport(remote);
+ gtransport = prepare_transport(remote, 1);
if (prune < 0) {
/* no command line request */
argc = parse_options(argc, argv, prefix,
builtin_fetch_options, builtin_fetch_usage, 0);
+ if (deepen_relative) {
+ if (deepen_relative < 0)
+ die(_("Negative depth in --deepen is not supported"));
+ if (depth)
+ die(_("--deepen and --depth are mutually exclusive"));
+ depth = xstrfmt("%d", deepen_relative);
+ }
if (unshallow) {
if (depth)
die(_("--depth and --unshallow cannot be used together"));
/* no need to be strict, transport_set_option() will validate it again */
if (depth && atoi(depth) < 1)
die(_("depth %s is not a positive number"), depth);
+ if (depth || deepen_since || deepen_not.nr)
+ deepen = 1;
if (recurse_submodules != RECURSE_SUBMODULES_OFF) {
if (recurse_submodules_default) {
struct string_list *authors,
struct string_list *committers)
{
- if (authors->nr)
- qsort(authors->items,
- authors->nr, sizeof(authors->items[0]),
- cmp_string_list_util_as_integral);
- if (committers->nr)
- qsort(committers->items,
- committers->nr, sizeof(committers->items[0]),
- cmp_string_list_util_as_integral);
+ QSORT(authors->items, authors->nr,
+ cmp_string_list_util_as_integral);
+ QSORT(committers->items, committers->nr,
+ cmp_string_list_util_as_integral);
credit_people(out, authors, 'a');
credit_people(out, committers, 'c');
return;
/* Sort deltas by base SHA1/offset for fast searching */
- qsort(ofs_deltas, nr_ofs_deltas, sizeof(struct ofs_delta_entry),
- compare_ofs_delta_entry);
- qsort(ref_deltas, nr_ref_deltas, sizeof(struct ref_delta_entry),
- compare_ref_delta_entry);
+ QSORT(ofs_deltas, nr_ofs_deltas, compare_ofs_delta_entry);
+ QSORT(ref_deltas, nr_ref_deltas, compare_ref_delta_entry);
if (verbose || show_resolving_progress)
progress = start_progress(_("Resolving deltas"),
ALLOC_ARRAY(sorted_by_pos, nr_ref_deltas);
for (i = 0; i < nr_ref_deltas; i++)
sorted_by_pos[i] = &ref_deltas[i];
- qsort(sorted_by_pos, nr_ref_deltas, sizeof(*sorted_by_pos), delta_pos_compare);
+ QSORT(sorted_by_pos, nr_ref_deltas, delta_pos_compare);
for (i = 0; i < nr_ref_deltas; i++) {
struct ref_delta_entry *d = sorted_by_pos[i];
opts->anomaly[opts->anomaly_nr++] = ntohl(idx2[off * 2 + 1]);
}
- if (1 < opts->anomaly_nr)
- qsort(opts->anomaly, opts->anomaly_nr, sizeof(uint32_t), cmp_uint32);
+ QSORT(opts->anomaly, opts->anomaly_nr, cmp_uint32);
}
static void read_idx_option(struct pack_idx_option *opts, const char *pack_name)
size_t size;
int i;
- qsort(entries, used, sizeof(*entries), ent_compare);
+ QSORT(entries, used, ent_compare);
for (size = i = 0; i < used; i++)
size += 32 + entries[i]->len;
return NULL;
if (!tip_table.sorted) {
- qsort(tip_table.table, tip_table.nr, sizeof(*tip_table.table),
- tipcmp);
+ QSORT(tip_table.table, tip_table.nr, tipcmp);
tip_table.sorted = 1;
}
#include "reachable.h"
#include "sha1-array.h"
#include "argv-array.h"
+#include "mru.h"
static const char *pack_usage[] = {
N_("git pack-objects --stdout [<options>...] [< <ref-list> | < <object-list>]"),
struct packed_git **found_pack,
off_t *found_offset)
{
- struct packed_git *p;
+ struct mru_entry *entry;
int want;
if (!exclude && local && has_loose_object_nonlocal(sha1))
return want;
}
- for (p = packed_git; p; p = p->next) {
+ for (entry = packed_git_mru->head; entry; entry = entry->next) {
+ struct packed_git *p = entry->item;
off_t offset;
if (p == *found_pack)
*found_pack = p;
}
want = want_found_object(exclude, p);
+ if (!exclude && want > 0)
+ mru_mark(packed_git_mru, entry);
if (want != -1)
return want;
}
(a->in_pack_offset > b->in_pack_offset);
}
+/*
+ * Drop an on-disk delta we were planning to reuse. Naively, this would
+ * just involve blanking out the "delta" field, but we have to deal
+ * with some extra book-keeping:
+ *
+ * 1. Removing ourselves from the delta_sibling linked list.
+ *
+ * 2. Updating our size/type to the non-delta representation. These were
+ * either not recorded initially (size) or overwritten with the delta type
+ * (type) when check_object() decided to reuse the delta.
+ */
+static void drop_reused_delta(struct object_entry *entry)
+{
+ struct object_entry **p = &entry->delta->delta_child;
+ struct object_info oi = OBJECT_INFO_INIT;
+
+ while (*p) {
+ if (*p == entry)
+ *p = (*p)->delta_sibling;
+ else
+ p = &(*p)->delta_sibling;
+ }
+ entry->delta = NULL;
+
+ oi.sizep = &entry->size;
+ oi.typep = &entry->type;
+ if (packed_object_info(entry->in_pack, entry->in_pack_offset, &oi) < 0) {
+ /*
+ * We failed to get the info from this pack for some reason;
+ * fall back to sha1_object_info, which may find another copy.
+ * And if that fails, the error will be recorded in entry->type
+ * and dealt with in prepare_pack().
+ */
+ entry->type = sha1_object_info(entry->idx.sha1, &entry->size);
+ }
+}
+
+/*
+ * Follow the chain of deltas from this entry onward, throwing away any links
+ * that cause us to hit a cycle (as determined by the DFS state flags in
+ * the entries).
+ */
+static void break_delta_chains(struct object_entry *entry)
+{
+ /* If it's not a delta, it can't be part of a cycle. */
+ if (!entry->delta) {
+ entry->dfs_state = DFS_DONE;
+ return;
+ }
+
+ switch (entry->dfs_state) {
+ case DFS_NONE:
+ /*
+ * This is the first time we've seen the object. We mark it as
+ * part of the active potential cycle and recurse.
+ */
+ entry->dfs_state = DFS_ACTIVE;
+ break_delta_chains(entry->delta);
+ entry->dfs_state = DFS_DONE;
+ break;
+
+ case DFS_DONE:
+ /* object already examined, and not part of a cycle */
+ break;
+
+ case DFS_ACTIVE:
+ /*
+ * We found a cycle that needs to be broken. It would be correct to
+ * break any link in the chain, but it's convenient to
+ * break this one.
+ */
+ drop_reused_delta(entry);
+ entry->dfs_state = DFS_DONE;
+ break;
+ }
+}
+
static void get_object_details(void)
{
uint32_t i;
sorted_by_offset = xcalloc(to_pack.nr_objects, sizeof(struct object_entry *));
for (i = 0; i < to_pack.nr_objects; i++)
sorted_by_offset[i] = to_pack.objects + i;
- qsort(sorted_by_offset, to_pack.nr_objects, sizeof(*sorted_by_offset), pack_offset_sort);
+ QSORT(sorted_by_offset, to_pack.nr_objects, pack_offset_sort);
for (i = 0; i < to_pack.nr_objects; i++) {
struct object_entry *entry = sorted_by_offset[i];
entry->no_try_delta = 1;
}
+ /*
+ * This must happen in a second pass, since we rely on the delta
+ * information for the whole list being completed.
+ */
+ for (i = 0; i < to_pack.nr_objects; i++)
+ break_delta_chains(&to_pack.objects[i]);
+
free(sorted_by_offset);
}
if (progress)
progress_state = start_progress(_("Compressing objects"),
nr_deltas);
- qsort(delta_list, n, sizeof(*delta_list), type_size_sort);
+ QSORT(delta_list, n, type_size_sort);
ll_find_deltas(delta_list, n, window+1, depth, &nr_done);
stop_progress(&progress_state);
if (nr_done != nr_deltas)
}
if (in_pack.nr) {
- qsort(in_pack.array, in_pack.nr, sizeof(in_pack.array[0]),
- ofscmp);
+ QSORT(in_pack.array, in_pack.nr, ofscmp);
for (i = 0; i < in_pack.nr; i++) {
struct object *o = in_pack.array[i].object;
add_object_entry(o->oid.hash, o->type, "", 0);
return 0;
}
-static void show_one_alternate_sha1(const unsigned char sha1[20], void *unused)
+static int show_one_alternate_sha1(const unsigned char sha1[20], void *unused)
{
show_ref(".have", sha1);
+ return 0;
}
static void collect_one_alternate_ref(const struct ref *ref, void *data)
info.width = info.width2 = 0;
for_each_string_list(&states.push, add_push_to_show_info, &info);
- qsort(info.list->items, info.list->nr,
- sizeof(*info.list->items), cmp_string_with_push);
+ QSORT(info.list->items, info.list->nr, cmp_string_with_push);
if (info.list->nr)
printf_ln(Q_(" Local ref configured for 'git push'%s:",
" Local refs configured for 'git push'%s:",
unsigned char sha1[20];
struct commit *commit;
struct commit_list *parents;
- int parents_only;
-
- if ((dotdot = strstr(arg, "^!")))
- parents_only = 0;
- else if ((dotdot = strstr(arg, "^@")))
- parents_only = 1;
-
- if (!dotdot || dotdot[2])
+ int parent_number;
+ int include_rev = 0;
+ int include_parents = 0;
+ int exclude_parent = 0;
+
+ if ((dotdot = strstr(arg, "^!"))) {
+ include_rev = 1;
+ if (dotdot[2])
+ return 0;
+ } else if ((dotdot = strstr(arg, "^@"))) {
+ include_parents = 1;
+ if (dotdot[2])
+ return 0;
+ } else if ((dotdot = strstr(arg, "^-"))) {
+ include_rev = 1;
+ exclude_parent = 1;
+
+ if (dotdot[2]) {
+ char *end;
+ exclude_parent = strtoul(dotdot + 2, &end, 10);
+ if (*end != '\0' || !exclude_parent)
+ return 0;
+ }
+ } else
return 0;
*dotdot = 0;
return 0;
}
- if (!parents_only)
- show_rev(NORMAL, sha1, arg);
commit = lookup_commit_reference(sha1);
- for (parents = commit->parents; parents; parents = parents->next)
- show_rev(parents_only ? NORMAL : REVERSED,
- parents->item->object.oid.hash, arg);
+ if (exclude_parent &&
+ exclude_parent > commit_list_count(commit->parents)) {
+ *dotdot = '^';
+ return 0;
+ }
+
+ if (include_rev)
+ show_rev(NORMAL, sha1, arg);
+ for (parents = commit->parents, parent_number = 1;
+ parents;
+ parents = parents->next, parent_number++) {
+ if (exclude_parent && parent_number != exclude_parent)
+ continue;
+
+ show_rev(include_parents ? NORMAL : REVERSED,
+ parents->item->object.oid.hash, arg);
+ }
*dotdot = '^';
return 1;
struct strbuf sb = STRBUF_INIT;
if (log->sort_by_number)
- qsort(log->list.items, log->list.nr, sizeof(struct string_list_item),
+ QSORT(log->list.items, log->list.nr,
log->summary ? compare_by_counter : compare_by_list);
for (i = 0; i < log->list.nr; i++) {
const struct string_list_item *item = &log->list.items[i];
static void sort_ref_range(int bottom, int top)
{
- qsort(ref_name + bottom, top - bottom, sizeof(ref_name[0]),
- compare_ref_name);
+ QSORT(ref_name + bottom, top - bottom, compare_ref_name);
}
static int append_ref(const char *refname, const struct object_id *oid,
if (saved_matches == ref_name_cnt &&
ref_name_cnt < MAX_REVS)
error(_("no matching refs with %s"), av);
- if (saved_matches + 1 < ref_name_cnt)
- sort_ref_range(saved_matches, ref_name_cnt);
+ sort_ref_range(saved_matches, ref_name_cnt);
return;
}
die("bad sha1 reference %s", av);
if (suc->recursive_prefix)
strbuf_addf(&sb, "%s/%s", suc->recursive_prefix, ce->name);
else
- strbuf_addf(&sb, "%s", ce->name);
+ strbuf_addstr(&sb, ce->name);
strbuf_addf(out, _("Skipping unmerged submodule %s"), sb.buf);
strbuf_addch(out, '\n');
goto cleanup;
#define GET_SHA1_FOLLOW_SYMLINKS 0100
#define GET_SHA1_ONLY_TO_DIE 04000
+#define GET_SHA1_DISAMBIGUATORS \
+ (GET_SHA1_COMMIT | GET_SHA1_COMMITTISH | \
+ GET_SHA1_TREE | GET_SHA1_TREEISH | \
+ GET_SHA1_BLOB)
+
extern int get_sha1(const char *str, unsigned char *sha1);
extern int get_sha1_commit(const char *str, unsigned char *sha1);
extern int get_sha1_committish(const char *str, unsigned char *sha1);
typedef int each_abbrev_fn(const unsigned char *sha1, void *);
extern int for_each_abbrev(const char *prefix, each_abbrev_fn, void *);
+extern int set_disambiguate_hint_config(const char *var, const char *value);
+
/*
* Try to read a SHA1 in hexadecimal format from the 40 characters
* starting at hex. Write the 20-byte result to sha1 in binary form.
} packed;
} u;
};
+
+/*
+ * Initializer for a "struct object_info" that wants no items. You may
+ * also memset() the memory to all-zeroes.
+ */
+#define OBJECT_INFO_INIT {NULL}
+
extern int sha1_object_info_extended(const unsigned char *, struct object_info *, unsigned flags);
+extern int packed_object_info(struct packed_git *pack, off_t offset, struct object_info *);
/* Dumb servers support */
extern int update_server_info(int);
extern int is_repository_shallow(void);
extern struct commit_list *get_shallow_commits(struct object_array *heads,
int depth, int shallow_flag, int not_shallow_flag);
+extern struct commit_list *get_shallow_commits_by_rev_list(
+ int ac, const char **av, int shallow_flag, int not_shallow_flag);
extern void set_alternate_shallow_file(const char *path, int override);
extern int write_shallow_commits(struct strbuf *out, int use_pack_protocol,
const struct sha1_array *extra);
return 0;
}
+ if (!strcmp(var, "core.disambiguate"))
+ return set_disambiguate_hint_config(var, value);
+
if (!strcmp(var, "core.loosecompression")) {
int level = git_config_int(var, value);
if (level == -1)
[NO_CURL=],
[NO_CURL=YesPlease])
-if test -z "${NO_CURL}" && test -z "${NO_OPENSSL}"; then
-
-AC_CHECK_LIB([curl], [Curl_ssl_init],
-[NEEDS_SSL_WITH_CURL=YesPlease],
-[NEEDS_SSL_WITH_CURL=])
-
-GIT_CONF_SUBST([NEEDS_SSL_WITH_CURL])
-
-fi
-
GIT_UNSTASH_FLAGS($CURLDIR)
GIT_CONF_SUBST([NO_CURL])
if test $CURL_CONFIG != no; then
GIT_CONF_SUBST([CURL_CONFIG])
+ if test -z "${NO_OPENSSL}"; then
+ AC_MSG_CHECKING([if Curl supports SSL])
+ if test $(curl-config --features|grep SSL) = SSL; then
+ NEEDS_SSL_WITH_CURL=YesPlease
+ AC_MSG_RESULT([yes])
+ else
+ NEEDS_SSL_WITH_CURL=
+ AC_MSG_RESULT([no])
+ fi
+ GIT_CONF_SUBST([NEEDS_SSL_WITH_CURL])
+ fi
fi
fi
--- /dev/null
+@@
+expression base, nmemb, compar;
+@@
+- qsort(base, nmemb, sizeof(*base), compar);
++ QSORT(base, nmemb, compar);
+
+@@
+expression base, nmemb, compar;
+@@
+- qsort(base, nmemb, sizeof(base[0]), compar);
++ QSORT(base, nmemb, compar);
+
+@@
+type T;
+T *base;
+expression nmemb, compar;
+@@
+- qsort(base, nmemb, sizeof(T), compar);
++ QSORT(base, nmemb, compar);
+
+@@
+expression base, nmemb, compar;
+@@
+- if (nmemb)
+ QSORT(base, nmemb, compar);
+
+@@
+expression base, nmemb, compar;
+@@
+- if (nmemb > 0)
+ QSORT(base, nmemb, compar);
+
+@@
+expression base, nmemb, compar;
+@@
+- if (nmemb > 1)
+ QSORT(base, nmemb, compar);
+@ strbuf_addf_with_format_only @
+expression E;
+constant fmt;
+@@
+ strbuf_addf(E,
+(
+ fmt
+|
+ _(fmt)
+)
+ );
+
+@ script:python @
+fmt << strbuf_addf_with_format_only.fmt;
+@@
+cocci.include_match("%" not in fmt)
+
+@ extends strbuf_addf_with_format_only @
+@@
+- strbuf_addf
++ strbuf_addstr
+ (E,
+(
+ fmt
+|
+ _(fmt)
+)
+ );
+
@@
expression E1, E2;
@@
-- strbuf_addf(E1, E2);
+- strbuf_addf(E1, "%s", E2);
+ strbuf_addstr(E1, E2);
+
+@@
+expression E1, E2, E3;
+@@
+- strbuf_addstr(E1, find_unique_abbrev(E2, E3));
++ strbuf_add_unique_abbrev(E1, E2, E3);
__git_refs ()
{
local i hash dir="$(__gitdir "${1-}")" track="${2-}"
- local format refs
+ local format refs pfx
if [ -d "$dir" ]; then
case "$cur" in
refs|refs/*)
track=""
;;
*)
+ [[ "$cur" == ^* ]] && pfx="^"
for i in HEAD FETCH_HEAD ORIG_HEAD MERGE_HEAD; do
- if [ -e "$dir/$i" ]; then echo $i; fi
+ if [ -e "$dir/$i" ]; then echo $pfx$i; fi
done
format="refname:short"
refs="refs/tags refs/heads refs/remotes"
;;
esac
- git --git-dir="$dir" for-each-ref --format="%($format)" \
+ git --git-dir="$dir" for-each-ref --format="$pfx%($format)" \
$refs
if [ -n "$track" ]; then
# employ the heuristic used by git checkout
return;
/* Show all directories with more than x% of the changes */
- qsort(dir.files, dir.nr, sizeof(dir.files[0]), dirstat_compare);
+ QSORT(dir.files, dir.nr, dirstat_compare);
gather_dirstat(options, &dir, changed, "", 0);
}
return;
/* Show all directories with more than x% of the changes */
- qsort(dir.files, dir.nr, sizeof(dir.files[0]), dirstat_compare);
+ QSORT(dir.files, dir.nr, dirstat_compare);
gather_dirstat(options, &dir, changed, "", 0);
}
}
strbuf_addf(msg, "%s%sindex %s..", line_prefix, set,
find_unique_abbrev(one->oid.hash, abbrev));
- strbuf_addstr(msg, find_unique_abbrev(two->oid.hash, abbrev));
+ strbuf_add_unique_abbrev(msg, two->oid.hash, abbrev);
if (one->mode == two->mode)
strbuf_addf(msg, " %06o", one->mode);
strbuf_addf(msg, "%s\n", reset);
void diffcore_fix_diff_index(struct diff_options *options)
{
struct diff_queue_struct *q = &diff_queued_diff;
- qsort(q->queue, q->nr, sizeof(q->queue[0]), diffnamecmp);
+ QSORT(q->queue, q->nr, diffnamecmp);
}
void diffcore_std(struct diff_options *options)
n = 0;
accum1 = accum2 = 0;
}
- qsort(hash->data,
- 1ul << hash->alloc_log2,
- sizeof(hash->data[0]),
- spanhash_cmp);
+ QSORT(hash->data, 1ul << hash->alloc_log2, spanhash_cmp);
return hash;
}
objs[i].orig_order = i;
objs[i].order = match_order(obj_path(objs[i].obj));
}
- qsort(objs, nr, sizeof(*objs), compare_objs_order);
+ QSORT(objs, nr, compare_objs_order);
}
static const char *pair_pathtwo(void *obj)
stop_progress(&progress);
/* cost matrix sorted by most to least similar pair */
- qsort(mx, dst_cnt * NUM_CANDIDATE_PER_DST, sizeof(*mx), score_compare);
+ QSORT(mx, dst_cnt * NUM_CANDIDATE_PER_DST, score_compare);
rename_count += find_renames(mx, dst_cnt, minimum_score, 0);
if (detect_rename == DIFF_DETECT_COPY)
if (!len || treat_leading_path(dir, path, len, simplify))
read_directory_recursive(dir, path, len, untracked, 0, simplify);
free_simplify(simplify);
- qsort(dir->entries, dir->nr, sizeof(struct dir_entry *), cmp_name);
- qsort(dir->ignored, dir->ignored_nr, sizeof(struct dir_entry *), cmp_name);
+ QSORT(dir->entries, dir->nr, cmp_name);
+ QSORT(dir->ignored, dir->ignored_nr, cmp_name);
if (dir->untracked) {
static struct trace_key trace_untracked_stats = TRACE_KEY_INIT(UNTRACKED_STATS);
trace_printf_key(&trace_untracked_stats,
unsigned int i;
if (!v)
- qsort(t->entries,t->entry_count,sizeof(t->entries[0]),tecmp0);
+ QSORT(t->entries, t->entry_count, tecmp0);
else
- qsort(t->entries,t->entry_count,sizeof(t->entries[0]),tecmp1);
+ QSORT(t->entries, t->entry_count, tecmp1);
for (i = 0; i < t->entry_count; i++) {
if (t->entries[i]->versions[v].mode)
static int unpack_limit = 100;
static int prefer_ofs_delta = 1;
static int no_done;
+static int deepen_since_ok;
+static int deepen_not_ok;
static int fetch_fsck_objects = -1;
static int transfer_fsck_objects = -1;
static int agent_supported;
#define ALLOW_REACHABLE_SHA1 02
static unsigned int allow_unadvertised_object_request;
+__attribute__((format (printf, 2, 3)))
+static inline void print_verbose(const struct fetch_pack_args *args,
+ const char *fmt, ...)
+{
+ va_list params;
+
+ if (!args->verbose)
+ return;
+
+ va_start(params, fmt);
+ vfprintf(stderr, fmt, params);
+ va_end(params);
+ fputc('\n', stderr);
+}
+
static void rev_list_push(struct commit *commit, int mark)
{
if (!(commit->object.flags & mark)) {
static void consume_shallow_list(struct fetch_pack_args *args, int fd)
{
- if (args->stateless_rpc && args->depth > 0) {
+ if (args->stateless_rpc && args->deepen) {
/* If we sent a depth we will get back "duplicate"
* shallow and unshallow commands every time there
* is a block of have lines exchanged.
continue;
if (starts_with(line, "unshallow "))
continue;
- die("git fetch-pack: expected shallow list");
+ die(_("git fetch-pack: expected shallow list"));
}
}
}
const char *arg;
if (!len)
- die("git fetch-pack: expected ACK/NAK, got EOF");
+ die(_("git fetch-pack: expected ACK/NAK, got EOF"));
if (!strcmp(line, "NAK"))
return NAK;
if (skip_prefix(line, "ACK ", &arg)) {
return ACK;
}
}
- die("git fetch_pack: expected ACK/NAK, got '%s'", line);
+ die(_("git fetch_pack: expected ACK/NAK, got '%s'"), line);
}
static void send_request(struct fetch_pack_args *args,
size_t state_len = 0;
if (args->stateless_rpc && multi_ack == 1)
- die("--stateless-rpc requires multi_ack_detailed");
+ die(_("--stateless-rpc requires multi_ack_detailed"));
if (marked)
for_each_ref(clear_marks, NULL);
marked = 1;
if (no_done) strbuf_addstr(&c, " no-done");
if (use_sideband == 2) strbuf_addstr(&c, " side-band-64k");
if (use_sideband == 1) strbuf_addstr(&c, " side-band");
+ if (args->deepen_relative) strbuf_addstr(&c, " deepen-relative");
if (args->use_thin_pack) strbuf_addstr(&c, " thin-pack");
if (args->no_progress) strbuf_addstr(&c, " no-progress");
if (args->include_tag) strbuf_addstr(&c, " include-tag");
if (prefer_ofs_delta) strbuf_addstr(&c, " ofs-delta");
+ if (deepen_since_ok) strbuf_addstr(&c, " deepen-since");
+ if (deepen_not_ok) strbuf_addstr(&c, " deepen-not");
if (agent_supported) strbuf_addf(&c, " agent=%s",
git_user_agent_sanitized());
packet_buf_write(&req_buf, "want %s%s\n", remote_hex, c.buf);
write_shallow_commits(&req_buf, 1, NULL);
if (args->depth > 0)
packet_buf_write(&req_buf, "deepen %d", args->depth);
+ if (args->deepen_since) {
+ unsigned long max_age = approxidate(args->deepen_since);
+ packet_buf_write(&req_buf, "deepen-since %lu", max_age);
+ }
+ if (args->deepen_not) {
+ int i;
+ for (i = 0; i < args->deepen_not->nr; i++) {
+ struct string_list_item *s = args->deepen_not->items + i;
+ packet_buf_write(&req_buf, "deepen-not %s", s->string);
+ }
+ }
packet_buf_flush(&req_buf);
state_len = req_buf.len;
- if (args->depth > 0) {
+ if (args->deepen) {
char *line;
const char *arg;
unsigned char sha1[20];
while ((line = packet_read_line(fd[0], NULL))) {
if (skip_prefix(line, "shallow ", &arg)) {
if (get_sha1_hex(arg, sha1))
- die("invalid shallow line: %s", line);
+ die(_("invalid shallow line: %s"), line);
register_shallow(sha1);
continue;
}
if (skip_prefix(line, "unshallow ", &arg)) {
if (get_sha1_hex(arg, sha1))
- die("invalid unshallow line: %s", line);
+ die(_("invalid unshallow line: %s"), line);
if (!lookup_object(sha1))
- die("object not found: %s", line);
+ die(_("object not found: %s"), line);
/* make sure that it is parsed as shallow */
if (!parse_object(sha1))
- die("error in object: %s", line);
+ die(_("error in object: %s"), line);
if (unregister_shallow(sha1))
- die("no shallow found: %s", line);
+ die(_("no shallow found: %s"), line);
continue;
}
- die("expected shallow/unshallow, got %s", line);
+ die(_("expected shallow/unshallow, got %s"), line);
}
} else if (!args->stateless_rpc)
send_request(args, fd[1], &req_buf);
retval = -1;
while ((sha1 = get_rev())) {
packet_buf_write(&req_buf, "have %s\n", sha1_to_hex(sha1));
- if (args->verbose)
- fprintf(stderr, "have %s\n", sha1_to_hex(sha1));
+ print_verbose(args, "have %s", sha1_to_hex(sha1));
in_vain++;
if (flush_at <= ++count) {
int ack;
consume_shallow_list(args, fd[0]);
do {
ack = get_ack(fd[0], result_sha1);
- if (args->verbose && ack)
- fprintf(stderr, "got ack %d %s\n", ack,
- sha1_to_hex(result_sha1));
+ if (ack)
+ print_verbose(args, _("got %s %d %s"), "ack",
+ ack, sha1_to_hex(result_sha1));
switch (ack) {
case ACK:
flushes = 0;
struct commit *commit =
lookup_commit(result_sha1);
if (!commit)
- die("invalid commit %s", sha1_to_hex(result_sha1));
+ die(_("invalid commit %s"), sha1_to_hex(result_sha1));
if (args->stateless_rpc
&& ack == ACK_common
&& !(commit->object.flags & COMMON)) {
} while (ack);
flushes--;
if (got_continue && MAX_IN_VAIN < in_vain) {
- if (args->verbose)
- fprintf(stderr, "giving up\n");
+ print_verbose(args, _("giving up"));
break; /* give up */
}
}
packet_buf_write(&req_buf, "done\n");
send_request(args, fd[1], &req_buf);
}
- if (args->verbose)
- fprintf(stderr, "done\n");
+ print_verbose(args, _("done"));
if (retval != 0) {
multi_ack = 0;
flushes++;
while (flushes || multi_ack) {
int ack = get_ack(fd[0], result_sha1);
if (ack) {
- if (args->verbose)
- fprintf(stderr, "got ack (%d) %s\n", ack,
- sha1_to_hex(result_sha1));
+ print_verbose(args, _("got %s (%d) %s"), "ack",
+ ack, sha1_to_hex(result_sha1));
if (ack == ACK)
return 0;
multi_ack = 1;
unsigned long cutoff)
{
while (complete && cutoff <= complete->item->date) {
- if (args->verbose)
- fprintf(stderr, "Marking %s as complete\n",
- oid_to_hex(&complete->item->object.oid));
+ print_verbose(args, _("Marking %s as complete"),
+ oid_to_hex(&complete->item->object.oid));
pop_most_recent_commit(&complete, COMPLETE);
}
}
}
if (!keep && args->fetch_all &&
- (!args->depth || !starts_with(ref->name, "refs/tags/")))
+ (!args->deepen || !starts_with(ref->name, "refs/tags/")))
keep = 1;
if (keep) {
}
}
- if (!args->depth) {
+ if (!args->deepen) {
for_each_ref(mark_complete_oid, NULL);
for_each_alternate_ref(mark_alternate_complete, NULL);
commit_list_sort_by_date(&complete);
o = lookup_object(remote);
if (!o || !(o->flags & COMPLETE)) {
retval = 0;
- if (!args->verbose)
- continue;
- fprintf(stderr,
- "want %s (%s)\n", sha1_to_hex(remote),
- ref->name);
+ print_verbose(args, "want %s (%s)", sha1_to_hex(remote),
+ ref->name);
continue;
}
- if (!args->verbose)
- continue;
- fprintf(stderr,
- "already have %s (%s)\n", sha1_to_hex(remote),
- ref->name);
+ print_verbose(args, _("already have %s (%s)"), sha1_to_hex(remote),
+ ref->name);
}
return retval;
}
demux.out = -1;
demux.isolate_sigpipe = 1;
if (start_async(&demux))
- die("fetch-pack: unable to fork off sideband"
- " demultiplexer");
+ die(_("fetch-pack: unable to fork off sideband demultiplexer"));
}
else
demux.out = xd[0];
if (!args->keep_pack && unpack_limit) {
if (read_pack_header(demux.out, &header))
- die("protocol error: bad pack header");
+ die(_("protocol error: bad pack header"));
pass_header = 1;
if (ntohl(header.hdr_entries) < unpack_limit)
do_keep = 0;
cmd.in = demux.out;
cmd.git_cmd = 1;
if (start_command(&cmd))
- die("fetch-pack: unable to fork off %s", cmd_name);
+ die(_("fetch-pack: unable to fork off %s"), cmd_name);
if (do_keep && pack_lockfile) {
*pack_lockfile = index_pack_lockfile(cmd.out);
close(cmd.out);
args->check_self_contained_and_connected &&
ret == 0;
else
- die("%s failed", cmd_name);
+ die(_("%s failed"), cmd_name);
if (use_sideband && finish_async(&demux))
- die("error in sideband demultiplexer");
+ die(_("error in sideband demultiplexer"));
return 0;
}
int agent_len;
sort_ref_list(&ref, ref_compare_name);
- qsort(sought, nr_sought, sizeof(*sought), cmp_ref_by_name);
+ QSORT(sought, nr_sought, cmp_ref_by_name);
if ((args->depth > 0 || is_repository_shallow()) && !server_supports("shallow"))
- die("Server does not support shallow clients");
+ die(_("Server does not support shallow clients"));
+ if (args->depth > 0 || args->deepen_since || args->deepen_not)
+ args->deepen = 1;
if (server_supports("multi_ack_detailed")) {
- if (args->verbose)
- fprintf(stderr, "Server supports multi_ack_detailed\n");
+ print_verbose(args, _("Server supports multi_ack_detailed"));
multi_ack = 2;
if (server_supports("no-done")) {
- if (args->verbose)
- fprintf(stderr, "Server supports no-done\n");
+ print_verbose(args, _("Server supports no-done"));
if (args->stateless_rpc)
no_done = 1;
}
}
else if (server_supports("multi_ack")) {
- if (args->verbose)
- fprintf(stderr, "Server supports multi_ack\n");
+ print_verbose(args, _("Server supports multi_ack"));
multi_ack = 1;
}
if (server_supports("side-band-64k")) {
- if (args->verbose)
- fprintf(stderr, "Server supports side-band-64k\n");
+ print_verbose(args, _("Server supports side-band-64k"));
use_sideband = 2;
}
else if (server_supports("side-band")) {
- if (args->verbose)
- fprintf(stderr, "Server supports side-band\n");
+ print_verbose(args, _("Server supports side-band"));
use_sideband = 1;
}
if (server_supports("allow-tip-sha1-in-want")) {
- if (args->verbose)
- fprintf(stderr, "Server supports allow-tip-sha1-in-want\n");
+ print_verbose(args, _("Server supports allow-tip-sha1-in-want"));
allow_unadvertised_object_request |= ALLOW_TIP_SHA1;
}
if (server_supports("allow-reachable-sha1-in-want")) {
- if (args->verbose)
- fprintf(stderr, "Server supports allow-reachable-sha1-in-want\n");
+ print_verbose(args, _("Server supports allow-reachable-sha1-in-want"));
allow_unadvertised_object_request |= ALLOW_REACHABLE_SHA1;
}
if (!server_supports("thin-pack"))
args->no_progress = 0;
if (!server_supports("include-tag"))
args->include_tag = 0;
- if (server_supports("ofs-delta")) {
- if (args->verbose)
- fprintf(stderr, "Server supports ofs-delta\n");
- } else
+ if (server_supports("ofs-delta"))
+ print_verbose(args, _("Server supports ofs-delta"));
+ else
prefer_ofs_delta = 0;
if ((agent_feature = server_feature_value("agent", &agent_len))) {
agent_supported = 1;
- if (args->verbose && agent_len)
- fprintf(stderr, "Server version is %.*s\n",
- agent_len, agent_feature);
+ if (agent_len)
+ print_verbose(args, _("Server version is %.*s"),
+ agent_len, agent_feature);
}
+ if (server_supports("deepen-since"))
+ deepen_since_ok = 1;
+ else if (args->deepen_since)
+ die(_("Server does not support --shallow-since"));
+ if (server_supports("deepen-not"))
+ deepen_not_ok = 1;
+ else if (args->deepen_not)
+ die(_("Server does not support --shallow-exclude"));
+ if (!server_supports("deepen-relative") && args->deepen_relative)
+ die(_("Server does not support --deepen"));
if (everything_local(args, &ref, sought, nr_sought)) {
packet_flush(fd[1]);
/* When cloning, it is not unusual to have
* no common commit.
*/
- warning("no common commits");
+ warning(_("no common commits"));
if (args->stateless_rpc)
packet_flush(fd[1]);
- if (args->depth > 0)
+ if (args->deepen)
setup_alternate_shallow(&shallow_lock, &alternate_shallow_file,
NULL);
else if (si->nr_ours || si->nr_theirs)
else
alternate_shallow_file = NULL;
if (get_pack(args, fd, pack_lockfile))
- die("git fetch-pack: fetch failed.");
+ die(_("git fetch-pack: fetch failed."));
all_done:
return ref;
int *status;
int i;
- if (args->depth > 0 && alternate_shallow_file) {
+ if (args->deepen && alternate_shallow_file) {
if (*alternate_shallow_file == '\0') { /* --unshallow */
unlink_or_warn(git_path_shallow());
rollback_lock_file(&shallow_lock);
if (!ref) {
packet_flush(fd[1]);
- die("no matching remote head");
+ die(_("no matching remote head"));
}
prepare_shallow_info(&si, shallow);
ref_cpy = do_fetch_pack(args, fd, ref, sought, nr_sought,
const char *uploadpack;
int unpacklimit;
int depth;
+ const char *deepen_since;
+ const struct string_list *deepen_not;
+ unsigned deepen_relative:1;
unsigned quiet:1;
unsigned keep_pack:1;
unsigned lock_pack:1;
unsigned self_contained_and_connected:1;
unsigned cloning:1;
unsigned update_shallow:1;
+ unsigned deepen:1;
};
/*
#define qsort git_qsort
#endif
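+/*
+ * QSORT sorts "n" elements of "base" with "compar", deriving the
+ * element size from the array itself; sane_qsort() skips the call
+ * for fewer than two elements, which also avoids passing a NULL
+ * base to qsort().
+ */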
+#define QSORT(base, n, compar) sane_qsort((base), (n), sizeof(*(base)), compar)
+static inline void sane_qsort(void *base, size_t nmemb, size_t size,
+ int(*compar)(const void *, const void *))
+{
+ if (nmemb > 1)
+ qsort(base, nmemb, size, compar);
+}
+
#ifndef REG_STARTEND
#error "Git requires REG_STARTEND support. Compile with NO_REGEX=NeedsStartEnd"
#endif
static void graph_padding_line(struct git_graph *graph, struct strbuf *sb)
{
int i;
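+ /* Count the output columns consumed so far on this padding line. */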
+ int chars_written = 0;
if (graph->state != GRAPH_COMMIT) {
graph_next_line(graph, sb);
*/
for (i = 0; i < graph->num_columns; i++) {
struct column *col = &graph->columns[i];
+
strbuf_write_column(sb, col, '|');
- if (col->commit == graph->commit && graph->num_parents > 2)
- strbuf_addchars(sb, ' ', (graph->num_parents - 2) * 2);
- else
+ chars_written++;
+
+ if (col->commit == graph->commit && graph->num_parents > 2) {
+ int len = (graph->num_parents - 2) * 2;
+ strbuf_addchars(sb, ' ', len);
+ chars_written += len;
+ } else {
strbuf_addch(sb, ' ');
+ chars_written++;
+ }
}
- graph_pad_horizontally(graph, sb, graph->num_columns);
+ graph_pad_horizontally(graph, sb, chars_written);
/*
* Update graph->prev_state since we have output a padding line
if (exec_path) {
list_commands_in_dir(main_cmds, exec_path, prefix);
- qsort(main_cmds->names, main_cmds->cnt,
- sizeof(*main_cmds->names), cmdname_compare);
+ QSORT(main_cmds->names, main_cmds->cnt, cmdname_compare);
uniq(main_cmds);
}
}
free(paths);
- qsort(other_cmds->names, other_cmds->cnt,
- sizeof(*other_cmds->names), cmdname_compare);
+ QSORT(other_cmds->names, other_cmds->cnt, cmdname_compare);
uniq(other_cmds);
}
exclude_cmds(other_cmds, main_cmds);
longest = strlen(common_cmds[i].name);
}
- qsort(common_cmds, ARRAY_SIZE(common_cmds),
- sizeof(common_cmds[0]), cmd_group_cmp);
+ QSORT(common_cmds, ARRAY_SIZE(common_cmds), cmd_group_cmp);
puts(_("These are common Git commands used in various situations:"));
add_cmd_list(&main_cmds, &aliases);
add_cmd_list(&main_cmds, &other_cmds);
- qsort(main_cmds.names, main_cmds.cnt,
- sizeof(*main_cmds.names), cmdname_compare);
+ QSORT(main_cmds.names, main_cmds.cnt, cmdname_compare);
uniq(&main_cmds);
/* This abuses cmdname->len for levenshtein distance */
levenshtein(cmd, candidate, 0, 2, 1, 3) + 1;
}
- qsort(main_cmds.names, main_cmds.cnt,
- sizeof(*main_cmds.names), levenshtein_compare);
+ QSORT(main_cmds.names, main_cmds.cnt, levenshtein_compare);
if (!main_cmds.cnt)
die(_("Uh oh. Your system reports no Git commands at all."));
* here, too
*/
};
+#if LIBCURL_VERSION_NUM >= 0x071600
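+/*
+ * Levels of GSSAPI credential delegation that http.delegation may
+ * select, mapped to the corresponding CURLOPT_GSSAPI_DELEGATION
+ * values (available in cURL 7.22.0 and later).
+ */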
+static const char *curl_deleg;
+static struct {
+ const char *name;
+ long curl_deleg_param;
+} curl_deleg_levels[] = {
+ { "none", CURLGSSAPI_DELEGATION_NONE },
+ { "policy", CURLGSSAPI_DELEGATION_POLICY_FLAG },
+ { "always", CURLGSSAPI_DELEGATION_FLAG },
+};
+#endif
+
static struct credential proxy_auth = CREDENTIAL_INIT;
static const char *curl_proxyuserpwd;
static const char *curl_cookie_file;
return 0;
}
+ if (!strcmp("http.delegation", var)) {
+#if LIBCURL_VERSION_NUM >= 0x071600
+ return git_config_string(&curl_deleg, var, value);
+#else
+ warning(_("Delegation control is not supported with cURL < 7.22.0"));
+ return 0;
+#endif
+ }
+
if (!strcmp("http.pinnedpubkey", var)) {
#if LIBCURL_VERSION_NUM >= 0x072c00
return git_config_pathname(&ssl_pinnedkey, var, value);
static void init_curl_http_auth(CURL *result)
{
- if (!http_auth.username) {
+ if (!http_auth.username || !*http_auth.username) {
if (curl_empty_auth)
curl_easy_setopt(result, CURLOPT_USERPWD, ":");
return;
curl_easy_setopt(result, CURLOPT_HTTPAUTH, CURLAUTH_ANY);
#endif
+#if LIBCURL_VERSION_NUM >= 0x071600
+ if (curl_deleg) {
+ int i;
+ for (i = 0; i < ARRAY_SIZE(curl_deleg_levels); i++) {
+ if (!strcmp(curl_deleg, curl_deleg_levels[i].name)) {
+ curl_easy_setopt(result, CURLOPT_GSSAPI_DELEGATION,
+ curl_deleg_levels[i].curl_deleg_param);
+ break;
+ }
+ }
+ if (i == ARRAY_SIZE(curl_deleg_levels))
+ warning("Unknown delegation method '%s': using default",
+ curl_deleg);
+ }
+#endif
+
if (http_proactive_auth)
init_curl_http_auth(result);
curl_easy_setopt(curl, CURLOPT_USERNAME, server.user);
curl_easy_setopt(curl, CURLOPT_PASSWORD, server.pass);
+ strbuf_addstr(&path, server.use_ssl ? "imaps://" : "imap://");
strbuf_addstr(&path, server.host);
if (!path.len || path.buf[path.len - 1] != '/')
strbuf_addch(&path, '/');
int i;
int o = 0; /* output cursor */
- qsort(rs->ranges, rs->nr, sizeof(struct range), range_cmp);
+ QSORT(rs->ranges, rs->nr, range_cmp);
for (i = 0; i < rs->nr; i++) {
if (rs->ranges[i].start == rs->ranges[i].end)
* revision.h: 0---------10 26
* fetch-pack.c: 0---4
* walker.c: 0-2
- * upload-pack.c: 11----------------19
+ * upload-pack.c: 4 11----------------19
* builtin/blame.c: 12-13
* bisect.c: 16
* bundle.c: 16
{
unsigned int i = 0, j, next;
- qsort(indexed_commits, indexed_commits_nr, sizeof(indexed_commits[0]),
- date_compare);
+ QSORT(indexed_commits, indexed_commits_nr, date_compare);
if (writer.show_progress)
writer.progress = start_progress("Selecting bitmap commits", 0);
entries[i].offset = nth_packed_object_offset(p, i);
entries[i].nr = i;
}
- qsort(entries, nr_objects, sizeof(*entries), compare_entries);
+ QSORT(entries, nr_objects, compare_entries);
for (i = 0; i < nr_objects; i++) {
void *data;
unsigned no_try_delta:1;
unsigned tagged:1; /* near the very tip of refs */
unsigned filled:1; /* assigned write-order */
+
+ /*
+ * State flags for depth-first search used for analyzing delta cycles.
+ */
+ enum {
+ DFS_NONE = 0,
+ DFS_ACTIVE,
+ DFS_DONE
+ } dfs_state;
};
struct packing_data {
if (objects[i]->offset > last_obj_offset)
last_obj_offset = objects[i]->offset;
}
- qsort(sorted_by_sha, nr_objects, sizeof(sorted_by_sha[0]),
- sha1_compare);
+ QSORT(sorted_by_sha, nr_objects, sha1_compare);
}
else
sorted_by_sha = list = last = NULL;
if (pathspec->magic & PATHSPEC_MAXDEPTH) {
if (flags & PATHSPEC_KEEP_ORDER)
die("BUG: PATHSPEC_MAXDEPTH_VALID and PATHSPEC_KEEP_ORDER are incompatible");
- qsort(pathspec->items, pathspec->nr,
- sizeof(struct pathspec_item), pathspec_item_cmp);
+ QSORT(pathspec->items, pathspec->nr, pathspec_item_cmp);
}
}
case 'C':
if (starts_with(placeholder + 1, "(auto)")) {
c->auto_color = want_color(c->pretty_ctx->color);
- if (c->auto_color)
+ if (c->auto_color && sb->len)
strbuf_addstr(sb, GIT_COLOR_RESET);
return 7; /* consumed 7 bytes, "C(auto)" */
} else {
{
const char *sp;
const char *arg;
- int i, at;
+ int i, at, atom_len;
sp = atom;
if (*sp == '*' && sp < ep)
return i;
}
+ /*
+ * If the atom name has a colon, strip it and everything after
+ * it off - it specifies the format for this entry, and
+ * shouldn't be used for checking against the valid_atom
+ * table.
+ */
+ arg = memchr(sp, ':', ep - sp);
+ atom_len = (arg ? arg : ep) - sp;
+
/* Is the atom a valid one? */
for (i = 0; i < ARRAY_SIZE(valid_atom); i++) {
int len = strlen(valid_atom[i].name);
-
- /*
- * If the atom name has a colon, strip it and everything after
- * it off - it specifies the format for this entry, and
- * shouldn't be used for checking against the valid_atom
- * table.
- */
- arg = memchr(sp, ':', ep - sp);
- if (len == (arg ? arg : ep) - sp &&
- !memcmp(valid_atom[i].name, sp, len))
+ if (len == atom_len && !memcmp(valid_atom[i].name, sp, len))
break;
}
void ref_array_sort(struct ref_sorting *sorting, struct ref_array *array)
{
ref_sorting = sorting;
- qsort(array->items, array->nr, sizeof(struct ref_array_item *), compare_refs);
+ QSORT(array->items, array->nr, compare_refs);
}
static void append_literal(const char *cp, const char *ep, struct ref_formatting_state *state)
int dwim_ref(const char *str, int len, unsigned char *sha1, char **ref)
{
char *last_branch = substitute_branch_name(&str, &len);
+ int refs_found = expand_ref(str, len, sha1, ref);
+ free(last_branch);
+ return refs_found;
+}
+
+int expand_ref(const char *str, int len, unsigned char *sha1, char **ref)
+{
const char **p, *r;
int refs_found = 0;
warning("ignoring broken ref %s.", fullref);
}
}
- free(last_branch);
return refs_found;
}
*/
int refname_match(const char *abbrev_name, const char *full_name);
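+/*
+ * Like dwim_ref(), but without the substitute_branch_name() step;
+ * returns the number of refs the abbreviated name expanded to.
+ */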
+int expand_ref(const char *str, int len, unsigned char *sha1, char **ref);
int dwim_ref(const char *str, int len, unsigned char *sha1, char **ref);
int dwim_log(const char *str, int len, unsigned char *sha1, char **ref);
if (dir->sorted == dir->nr)
return;
- qsort(dir->entries, dir->nr, sizeof(*dir->entries), ref_entry_cmp);
+ QSORT(dir->entries, dir->nr, ref_entry_cmp);
/* Remove any duplicates: */
for (i = 0, j = 0; j < dir->nr; j++) {
struct options {
int verbosity;
unsigned long depth;
+ char *deepen_since;
+ struct string_list deepen_not;
unsigned progress : 1,
check_self_contained_and_connected : 1,
cloning : 1,
dry_run : 1,
thin : 1,
/* One of the SEND_PACK_PUSH_CERT_* constants. */
- push_cert : 2;
+ push_cert : 2,
+ deepen_relative : 1;
};
static struct options options;
static struct string_list cas_options = STRING_LIST_INIT_DUP;
options.depth = v;
return 0;
}
+ else if (!strcmp(name, "deepen-since")) {
+ options.deepen_since = xstrdup(value);
+ return 0;
+ }
+ else if (!strcmp(name, "deepen-not")) {
+ string_list_append(&options.deepen_not, value);
+ return 0;
+ }
+ else if (!strcmp(name, "deepen-relative")) {
+ if (!strcmp(value, "true"))
+ options.deepen_relative = 1;
+ else if (!strcmp(value, "false"))
+ options.deepen_relative = 0;
+ else
+ return -1;
+ return 0;
+ }
else if (!strcmp(name, "followtags")) {
if (!strcmp(value, "true"))
options.followtags = 1;
int ret, i;
ALLOC_ARRAY(targets, nr_heads);
- if (options.depth)
- die("dumb http transport does not support --depth");
+ if (options.depth || options.deepen_since)
+ die("dumb http transport does not support shallow capabilities");
for (i = 0; i < nr_heads; i++)
targets[i] = xstrdup(oid_to_hex(&to_fetch[i]->old_oid));
{
struct rpc_state rpc;
struct strbuf preamble = STRBUF_INIT;
- char *depth_arg = NULL;
- int argc = 0, i, err;
- const char *argv[17];
-
- argv[argc++] = "fetch-pack";
- argv[argc++] = "--stateless-rpc";
- argv[argc++] = "--stdin";
- argv[argc++] = "--lock-pack";
+ int i, err;
+ struct argv_array args = ARGV_ARRAY_INIT;
+
+ argv_array_pushl(&args, "fetch-pack", "--stateless-rpc",
+ "--stdin", "--lock-pack", NULL);
if (options.followtags)
- argv[argc++] = "--include-tag";
+ argv_array_push(&args, "--include-tag");
if (options.thin)
- argv[argc++] = "--thin";
- if (options.verbosity >= 3) {
- argv[argc++] = "-v";
- argv[argc++] = "-v";
- }
+ argv_array_push(&args, "--thin");
+ if (options.verbosity >= 3)
+ argv_array_pushl(&args, "-v", "-v", NULL);
if (options.check_self_contained_and_connected)
- argv[argc++] = "--check-self-contained-and-connected";
+ argv_array_push(&args, "--check-self-contained-and-connected");
if (options.cloning)
- argv[argc++] = "--cloning";
+ argv_array_push(&args, "--cloning");
if (options.update_shallow)
- argv[argc++] = "--update-shallow";
+ argv_array_push(&args, "--update-shallow");
if (!options.progress)
- argv[argc++] = "--no-progress";
- if (options.depth) {
- struct strbuf buf = STRBUF_INIT;
- strbuf_addf(&buf, "--depth=%lu", options.depth);
- depth_arg = strbuf_detach(&buf, NULL);
- argv[argc++] = depth_arg;
- }
- argv[argc++] = url.buf;
- argv[argc++] = NULL;
+ argv_array_push(&args, "--no-progress");
+ if (options.depth)
+ argv_array_pushf(&args, "--depth=%lu", options.depth);
+ if (options.deepen_since)
+ argv_array_pushf(&args, "--shallow-since=%s", options.deepen_since);
+ for (i = 0; i < options.deepen_not.nr; i++)
+ argv_array_pushf(&args, "--shallow-exclude=%s",
+ options.deepen_not.items[i].string);
+ if (options.deepen_relative && options.depth)
+ argv_array_push(&args, "--deepen-relative");
+ argv_array_push(&args, url.buf);
for (i = 0; i < nr_heads; i++) {
struct ref *ref = to_fetch[i];
memset(&rpc, 0, sizeof(rpc));
rpc.service_name = "git-upload-pack",
- rpc.argv = argv;
+ rpc.argv = args.argv;
rpc.stdin_preamble = &preamble;
rpc.gzip_request = 1;
write_or_die(1, rpc.result.buf, rpc.result.len);
strbuf_release(&rpc.result);
strbuf_release(&preamble);
- free(depth_arg);
+ argv_array_clear(&args);
return err;
}
options.verbosity = 1;
options.progress = !!isatty(2);
options.thin = 1;
+ string_list_init(&options.deepen_not, 1);
remote = remote_get(argv[1]);
}
}
-static int add_parents_only(struct rev_info *revs, const char *arg_, int flags)
+static int add_parents_only(struct rev_info *revs, const char *arg_, int flags,
+ int exclude_parent)
{
unsigned char sha1[20];
struct object *it;
struct commit *commit;
struct commit_list *parents;
+ int parent_number;
const char *arg = arg_;
if (*arg == '^') {
if (it->type != OBJ_COMMIT)
return 0;
commit = (struct commit *)it;
- for (parents = commit->parents; parents; parents = parents->next) {
+ if (exclude_parent &&
+ exclude_parent > commit_list_count(commit->parents))
+ return 0;
+ for (parents = commit->parents, parent_number = 1;
+ parents;
+ parents = parents->next, parent_number++) {
+ if (exclude_parent && parent_number != exclude_parent)
+ continue;
+
it = &parents->item->object;
it->flags |= flags;
add_rev_cmdline(revs, it, arg_, REV_CMD_PARENTS_ONLY, flags);
}
*dotdot = '.';
}
+
dotdot = strstr(arg, "^@");
if (dotdot && !dotdot[2]) {
*dotdot = 0;
- if (add_parents_only(revs, arg, flags))
+ if (add_parents_only(revs, arg, flags, 0))
return 0;
*dotdot = '^';
}
dotdot = strstr(arg, "^!");
if (dotdot && !dotdot[2]) {
*dotdot = 0;
- if (!add_parents_only(revs, arg, flags ^ (UNINTERESTING | BOTTOM)))
+ if (!add_parents_only(revs, arg, flags ^ (UNINTERESTING | BOTTOM), 0))
+ *dotdot = '^';
+ }
+ dotdot = strstr(arg, "^-");
+ if (dotdot) {
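+ /*
+ * "<rev>^-<n>" (with <n> defaulting to 1 when omitted) includes
+ * <rev> but excludes its <n>th parent, i.e. it is equivalent to
+ * "<rev>^<n>..<rev>".
+ */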
+ int exclude_parent = 1;
+
+ if (dotdot[2]) {
+ char *end;
+ exclude_parent = strtoul(dotdot + 2, &end, 10);
+ if (*end != '\0' || !exclude_parent)
+ return -1;
+ }
+
+ *dotdot = 0;
+ if (!add_parents_only(revs, arg, flags ^ (UNINTERESTING | BOTTOM), exclude_parent))
*dotdot = '^';
}
}
/* renumber them */
- qsort(info, num_pack, sizeof(info[0]), compare_info);
+ QSORT(info, num_pack, compare_info);
for (i = 0; i < num_pack; i++)
info[i]->new_num = i;
}
static inline void
string_list_sort (string_list_ty *slp)
{
- if (slp->nitems > 0)
- qsort (slp->item, slp->nitems, sizeof (slp->item[0]), cmp_string);
+ QSORT(slp->item, slp->nitems, cmp_string);
}
/* Test whether a sorted string list contains a given string. */
static void sha1_array_sort(struct sha1_array *array)
{
- qsort(array->sha1, array->nr, sizeof(*array->sha1), void_hashcmp);
+ QSORT(array->sha1, array->nr, void_hashcmp);
array->sorted = 1;
}
array->sorted = 0;
}
-void sha1_array_for_each_unique(struct sha1_array *array,
+int sha1_array_for_each_unique(struct sha1_array *array,
for_each_sha1_fn fn,
void *data)
{
sha1_array_sort(array);
for (i = 0; i < array->nr; i++) {
+ int ret;
if (i > 0 && !hashcmp(array->sha1[i], array->sha1[i-1]))
continue;
- fn(array->sha1[i], data);
+ ret = fn(array->sha1[i], data);
+ if (ret)
+ return ret;
}
+ return 0;
}
int sha1_array_lookup(struct sha1_array *array, const unsigned char *sha1);
void sha1_array_clear(struct sha1_array *array);
-typedef void (*for_each_sha1_fn)(const unsigned char sha1[20],
- void *data);
-void sha1_array_for_each_unique(struct sha1_array *array,
- for_each_sha1_fn fn,
+typedef int (*for_each_sha1_fn)(const unsigned char sha1[20],
void *data);
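+/*
+ * A non-zero return from the callback stops the iteration early and
+ * is propagated to the caller of sha1_array_for_each_unique();
+ * zero means every unique entry was visited.
+ */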
+int sha1_array_for_each_unique(struct sha1_array *array,
+ for_each_sha1_fn fn,
+ void *data);
#endif /* SHA1_ARRAY_H */
int parse_sha1_header(const char *hdr, unsigned long *sizep)
{
- struct object_info oi;
+ struct object_info oi = OBJECT_INFO_INIT;
oi.sizep = sizep;
- oi.typename = NULL;
- oi.typep = NULL;
return parse_sha1_header_extended(hdr, &oi, LOOKUP_REPLACE_OBJECT);
}
goto out;
}
-static int packed_object_info(struct packed_git *p, off_t obj_offset,
- struct object_info *oi)
+int packed_object_info(struct packed_git *p, off_t obj_offset,
+ struct object_info *oi)
{
struct pack_window *w_curs = NULL;
unsigned long size;
int sha1_object_info(const unsigned char *sha1, unsigned long *sizep)
{
enum object_type type;
- struct object_info oi = {NULL};
+ struct object_info oi = OBJECT_INFO_INIT;
oi.typep = &type;
oi.sizep = sizep;
#include "refs.h"
#include "remote.h"
#include "dir.h"
+#include "sha1-array.h"
static int get_sha1_oneline(const char *, unsigned char *, struct commit_list *);
typedef int (*disambiguate_hint_fn)(const unsigned char *, void *);
struct disambiguate_state {
+ int len; /* length of prefix in hex chars */
+ char hex_pfx[GIT_SHA1_HEXSZ + 1];
+ unsigned char bin_pfx[GIT_SHA1_RAWSZ];
+
disambiguate_hint_fn fn;
void *cb_data;
- unsigned char candidate[20];
+ unsigned char candidate[GIT_SHA1_RAWSZ];
unsigned candidate_exists:1;
unsigned candidate_checked:1;
unsigned candidate_ok:1;
/* otherwise, current can be discarded and candidate is still good */
}
-static void find_short_object_filename(int len, const char *hex_pfx, struct disambiguate_state *ds)
+static void find_short_object_filename(struct disambiguate_state *ds)
{
struct alternate_object_database *alt;
- char hex[40];
+ char hex[GIT_SHA1_HEXSZ];
static struct alternate_object_database *fakeent;
if (!fakeent) {
}
fakeent->next = alt_odb_list;
- xsnprintf(hex, sizeof(hex), "%.2s", hex_pfx);
+ xsnprintf(hex, sizeof(hex), "%.2s", ds->hex_pfx);
for (alt = fakeent; alt && !ds->ambiguous; alt = alt->next) {
struct strbuf *buf = alt_scratch_buf(alt);
struct dirent *de;
DIR *dir;
- strbuf_addf(buf, "%.2s/", hex_pfx);
+ strbuf_addf(buf, "%.2s/", ds->hex_pfx);
dir = opendir(buf->buf);
if (!dir)
continue;
if (strlen(de->d_name) != 38)
continue;
- if (memcmp(de->d_name, hex_pfx + 2, len - 2))
+ if (memcmp(de->d_name, ds->hex_pfx + 2, ds->len - 2))
continue;
memcpy(hex + 2, de->d_name, 38);
if (!get_sha1_hex(hex, sha1))
return 1;
}
-static void unique_in_pack(int len,
- const unsigned char *bin_pfx,
- struct packed_git *p,
+static void unique_in_pack(struct packed_git *p,
struct disambiguate_state *ds)
{
uint32_t num, last, i, first = 0;
int cmp;
current = nth_packed_object_sha1(p, mid);
- cmp = hashcmp(bin_pfx, current);
+ cmp = hashcmp(ds->bin_pfx, current);
if (!cmp) {
first = mid;
break;
*/
for (i = first; i < num && !ds->ambiguous; i++) {
current = nth_packed_object_sha1(p, i);
- if (!match_sha(len, bin_pfx, current))
+ if (!match_sha(ds->len, ds->bin_pfx, current))
break;
update_candidates(ds, current);
}
}
-static void find_short_packed_object(int len, const unsigned char *bin_pfx,
- struct disambiguate_state *ds)
+static void find_short_packed_object(struct disambiguate_state *ds)
{
struct packed_git *p;
prepare_packed_git();
for (p = packed_git; p && !ds->ambiguous; p = p->next)
- unique_in_pack(len, bin_pfx, p, ds);
+ unique_in_pack(p, ds);
}
#define SHORT_NAME_NOT_FOUND (-1)
return 0;
/* We need to do this the hard way... */
- obj = deref_tag(lookup_object(sha1), NULL, 0);
+ obj = deref_tag(parse_object(sha1), NULL, 0);
if (obj && (obj->type == OBJ_TREE || obj->type == OBJ_COMMIT))
return 1;
return 0;
return kind == OBJ_BLOB;
}
-static int prepare_prefixes(const char *name, int len,
- unsigned char *bin_pfx,
- char *hex_pfx)
+static disambiguate_hint_fn default_disambiguate_hint;
+
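+/*
+ * Select the default disambiguation hint (used when a short SHA-1
+ * has no other context) from a configuration value such as
+ * core.disambiguate; unknown values are reported as errors.
+ */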
+int set_disambiguate_hint_config(const char *var, const char *value)
+{
+ static const struct {
+ const char *name;
+ disambiguate_hint_fn fn;
+ } hints[] = {
+ { "none", NULL },
+ { "commit", disambiguate_commit_only },
+ { "committish", disambiguate_committish_only },
+ { "tree", disambiguate_tree_only },
+ { "treeish", disambiguate_treeish_only },
+ { "blob", disambiguate_blob_only }
+ };
+ int i;
+
+ if (!value)
+ return config_error_nonbool(var);
+
+ for (i = 0; i < ARRAY_SIZE(hints); i++) {
+ if (!strcasecmp(value, hints[i].name)) {
+ default_disambiguate_hint = hints[i].fn;
+ return 0;
+ }
+ }
+
+ return error("unknown hint type for '%s': %s", var, value);
+}
+
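+/*
+ * Initialize "ds" from an abbreviated object name: store the prefix
+ * in both hex and binary form, rejecting lengths outside
+ * [MINIMUM_ABBREV, GIT_SHA1_HEXSZ] and non-hex characters.
+ */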
+static int init_object_disambiguation(const char *name, int len,
+ struct disambiguate_state *ds)
{
int i;
- hashclr(bin_pfx);
- memset(hex_pfx, 'x', 40);
+ if (len < MINIMUM_ABBREV || len > GIT_SHA1_HEXSZ)
+ return -1;
+
+ memset(ds, 0, sizeof(*ds));
+
for (i = 0; i < len ;i++) {
unsigned char c = name[i];
unsigned char val;
}
else
return -1;
- hex_pfx[i] = c;
+ ds->hex_pfx[i] = c;
if (!(i & 1))
val <<= 4;
- bin_pfx[i >> 1] |= val;
+ ds->bin_pfx[i >> 1] |= val;
}
+
+ ds->len = len;
+ ds->hex_pfx[len] = '\0';
+ prepare_alt_odb();
+ return 0;
+}
+
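+/*
+ * Print one advice line for a candidate object that satisfies the
+ * disambiguation hint: its abbreviated name and type, plus a short
+ * description (commit date and subject, or tag name) when available.
+ */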
+static int show_ambiguous_object(const unsigned char *sha1, void *data)
+{
+ const struct disambiguate_state *ds = data;
+ struct strbuf desc = STRBUF_INIT;
+ int type;
+
+ if (ds->fn && !ds->fn(sha1, ds->cb_data))
+ return 0;
+
+ type = sha1_object_info(sha1, NULL);
+ if (type == OBJ_COMMIT) {
+ struct commit *commit = lookup_commit(sha1);
+ if (commit) {
+ struct pretty_print_context pp = {0};
+ pp.date_mode.type = DATE_SHORT;
+ format_commit_message(commit, " %ad - %s", &desc, &pp);
+ }
+ } else if (type == OBJ_TAG) {
+ struct tag *tag = lookup_tag(sha1);
+ if (!parse_tag(tag) && tag->tag)
+ strbuf_addf(&desc, " %s", tag->tag);
+ }
+
+ advise(" %s %s%s",
+ find_unique_abbrev(sha1, DEFAULT_ABBREV),
+ typename(type) ? typename(type) : "unknown type",
+ desc.buf);
+
+ strbuf_release(&desc);
return 0;
}
unsigned flags)
{
int status;
- char hex_pfx[40];
- unsigned char bin_pfx[20];
struct disambiguate_state ds;
int quietly = !!(flags & GET_SHA1_QUIETLY);
- if (len < MINIMUM_ABBREV || len > 40)
- return -1;
- if (prepare_prefixes(name, len, bin_pfx, hex_pfx) < 0)
+ if (init_object_disambiguation(name, len, &ds) < 0)
return -1;
- prepare_alt_odb();
+ if (HAS_MULTI_BITS(flags & GET_SHA1_DISAMBIGUATORS))
+ die("BUG: multiple get_short_sha1 disambiguator flags");
- memset(&ds, 0, sizeof(ds));
if (flags & GET_SHA1_COMMIT)
ds.fn = disambiguate_commit_only;
else if (flags & GET_SHA1_COMMITTISH)
ds.fn = disambiguate_treeish_only;
else if (flags & GET_SHA1_BLOB)
ds.fn = disambiguate_blob_only;
+ else
+ ds.fn = default_disambiguate_hint;
- find_short_object_filename(len, hex_pfx, &ds);
- find_short_packed_object(len, bin_pfx, &ds);
+ find_short_object_filename(&ds);
+ find_short_packed_object(&ds);
status = finish_object_disambiguation(&ds, sha1);
- if (!quietly && (status == SHORT_NAME_AMBIGUOUS))
- return error("short SHA1 %.*s is ambiguous.", len, hex_pfx);
+ if (!quietly && (status == SHORT_NAME_AMBIGUOUS)) {
+ error(_("short SHA1 %s is ambiguous"), ds.hex_pfx);
+
+ /*
+ * We may still have ambiguity if we simply saw a series of
+ * candidates that did not satisfy our hint function. In
+ * that case, we still want to show them, so disable the hint
+ * function entirely.
+ */
+ if (!ds.ambiguous)
+ ds.fn = NULL;
+
+ advise(_("The candidates are:"));
+ for_each_abbrev(ds.hex_pfx, show_ambiguous_object, &ds);
+ }
+
return status;
}
+static int collect_ambiguous(const unsigned char *sha1, void *data)
+{
+ sha1_array_append(data, sha1);
+ return 0;
+}
+
int for_each_abbrev(const char *prefix, each_abbrev_fn fn, void *cb_data)
{
- char hex_pfx[40];
- unsigned char bin_pfx[20];
+ struct sha1_array collect = SHA1_ARRAY_INIT;
struct disambiguate_state ds;
- int len = strlen(prefix);
+ int ret;
- if (len < MINIMUM_ABBREV || len > 40)
- return -1;
- if (prepare_prefixes(prefix, len, bin_pfx, hex_pfx) < 0)
+ if (init_object_disambiguation(prefix, strlen(prefix), &ds) < 0)
return -1;
- prepare_alt_odb();
-
- memset(&ds, 0, sizeof(ds));
ds.always_call_fn = 1;
- ds.cb_data = cb_data;
- ds.fn = fn;
+ ds.fn = collect_ambiguous;
+ ds.cb_data = &collect;
+ find_short_object_filename(&ds);
+ find_short_packed_object(&ds);
- find_short_object_filename(len, hex_pfx, &ds);
- find_short_packed_object(len, bin_pfx, &ds);
- return ds.ambiguous;
+ ret = sha1_array_for_each_unique(&collect, fn, cb_data);
+ sha1_array_clear(&collect);
+ return ret;
}
int find_unique_abbrev_r(char *hex, const unsigned char *sha1, int len)
}
}
-static int peel_onion(const char *name, int len, unsigned char *sha1)
+static int peel_onion(const char *name, int len, unsigned char *sha1,
+ unsigned lookup_flags)
{
unsigned char outer[20];
const char *sp;
unsigned int expected_type = 0;
- unsigned lookup_flags = 0;
struct object *o;
/*
else
return -1;
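+ /*
+ * The "^{<type>}" suffix determines the disambiguation hint, so
+ * drop any hint the caller passed in before adding our own.
+ */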
+ lookup_flags &= ~GET_SHA1_DISAMBIGUATORS;
if (expected_type == OBJ_COMMIT)
- lookup_flags = GET_SHA1_COMMITTISH;
+ lookup_flags |= GET_SHA1_COMMITTISH;
else if (expected_type == OBJ_TREE)
- lookup_flags = GET_SHA1_TREEISH;
+ lookup_flags |= GET_SHA1_TREEISH;
if (get_sha1_1(name, sp - name - 2, outer, lookup_flags))
return -1;
return get_nth_ancestor(name, len1, sha1, num);
}
- ret = peel_onion(name, len, sha1);
+ ret = peel_onion(name, len, sha1, lookup_flags);
if (!ret)
return 0;
const char *cp;
int only_to_die = flags & GET_SHA1_ONLY_TO_DIE;
+ if (only_to_die)
+ flags |= GET_SHA1_QUIETLY;
+
memset(oc, 0, sizeof(*oc));
oc->mode = S_IFINVALID;
ret = get_sha1_1(name, namelen, sha1, flags);
if (*cp == ':') {
unsigned char tree_sha1[20];
int len = cp - name;
- if (!get_sha1_1(name, len, tree_sha1, GET_SHA1_TREEISH)) {
+ unsigned sub_flags = flags;
+
+ sub_flags &= ~GET_SHA1_DISAMBIGUATORS;
+ sub_flags |= GET_SHA1_TREEISH;
+
+ if (!get_sha1_1(name, len, tree_sha1, sub_flags)) {
const char *filename = cp+1;
char *new_filename = NULL;
#include "diff.h"
#include "revision.h"
#include "commit-slab.h"
+#include "revision.h"
+#include "list-objects.h"
static int is_shallow = -1;
static struct stat_validity shallow_stat;
return result;
}
+static void show_commit(struct commit *commit, void *data)
+{
+ commit_list_insert(commit, data);
+}
+
+/*
+ * Given rev-list arguments, run rev-list. All reachable commits
+ * except border ones are marked with not_shallow_flag. Border commits
+ * are marked with shallow_flag. The list of border/shallow commits
+ * is also returned.
+ */
+struct commit_list *get_shallow_commits_by_rev_list(int ac, const char **av,
+ int shallow_flag,
+ int not_shallow_flag)
+{
+ struct commit_list *result = NULL, *p;
+ struct commit_list *not_shallow_list = NULL;
+ struct rev_info revs;
+ int both_flags = shallow_flag | not_shallow_flag;
+
+ /*
+ * SHALLOW (excluded) and NOT_SHALLOW (included) should not be
+ * set at this point. But better be safe than sorry.
+ */
+ clear_object_flags(both_flags);
+
+ is_repository_shallow(); /* make sure shallows are read */
+
+ init_revisions(&revs, NULL);
+ save_commit_buffer = 0;
+ setup_revisions(ac, av, &revs, NULL);
+
+ if (prepare_revision_walk(&revs))
+ die("revision walk setup failed");
+ traverse_commit_list(&revs, show_commit, NULL, &not_shallow_list);
+
+ /* Mark all reachable commits as NOT_SHALLOW */
+ for (p = not_shallow_list; p; p = p->next)
+ p->item->object.flags |= not_shallow_flag;
+
+ /*
+ * mark border commits SHALLOW + NOT_SHALLOW.
+ * We cannot clear NOT_SHALLOW right now. Imagine border
+ * commit A is processed first, then commit B, whose parent is
+ * A, later. If NOT_SHALLOW on A is cleared at step 1, B
+ * itself is considered border at step 2, which is incorrect.
+ */
+ for (p = not_shallow_list; p; p = p->next) {
+ struct commit *c = p->item;
+ struct commit_list *parent;
+
+ if (parse_commit(c))
+ die("unable to parse commit %s",
+ oid_to_hex(&c->object.oid));
+
+ for (parent = c->parents; parent; parent = parent->next)
+ if (!(parent->item->object.flags & not_shallow_flag)) {
+ c->object.flags |= shallow_flag;
+ commit_list_insert(c, &result);
+ break;
+ }
+ }
+ free_commit_list(not_shallow_list);
+
+ /*
+ * Now we can clean up NOT_SHALLOW on border commits. Having
+ * both flags set can confuse the caller.
+ */
+ for (p = result; p; p = p->next) {
+ struct object *o = &p->item->object;
+ if ((o->flags & both_flags) == both_flags)
+ o->flags &= ~not_shallow_flag;
+ }
+ return result;
+}
+
static void check_shallow_file_for_update(void)
{
if (is_shallow == -1)
struct stream_filter *filter)
{
struct git_istream *st;
- struct object_info oi = {NULL};
+ struct object_info oi = OBJECT_INFO_INIT;
const unsigned char *real = lookup_replace_object(sha1);
enum input_source src = istream_source(real, type, &oi);
void string_list_sort(struct string_list *list)
{
compare_for_qsort = list->cmp ? list->cmp : strcmp;
- qsort(list->items, list->nr, sizeof(*list->items), cmp_items);
+ QSORT(list->items, list->nr, cmp_items);
}
struct string_list_item *unsorted_string_list_lookup(struct string_list *list,
find_unique_abbrev(one->hash, DEFAULT_ABBREV));
if (!fast_backward && !fast_forward)
strbuf_addch(&sb, '.');
- strbuf_addf(&sb, "%s", find_unique_abbrev(two->hash, DEFAULT_ABBREV));
+ strbuf_add_unique_abbrev(&sb, two->hash, DEFAULT_ABBREV);
if (message)
strbuf_addf(&sb, " %s%s\n", message, reset);
else
sha1_array_append(&ref_tips_after_fetch, new_sha1);
}
-static void add_sha1_to_argv(const unsigned char sha1[20], void *data)
+static int add_sha1_to_argv(const unsigned char sha1[20], void *data)
{
argv_array_push(data, sha1_to_hex(sha1));
+ return 0;
}
static void calculate_changed_submodule_paths(void)
static void dump(struct untracked_cache_dir *ucd, struct strbuf *base)
{
int i, len;
- qsort(ucd->untracked, ucd->untracked_nr, sizeof(*ucd->untracked),
- compare_untracked);
- qsort(ucd->dirs, ucd->dirs_nr, sizeof(*ucd->dirs),
- compare_dir);
+ QSORT(ucd->untracked, ucd->untracked_nr, compare_untracked);
+ QSORT(ucd->dirs, ucd->dirs_nr, compare_dir);
len = base->len;
strbuf_addf(base, "%s/", ucd->name);
printf("%s %s", base->buf,
#include "cache.h"
#include "sha1-array.h"
-static void print_sha1(const unsigned char sha1[20], void *data)
+static int print_sha1(const unsigned char sha1[20], void *data)
{
puts(sha1_to_hex(sha1));
+ return 0;
}
int cmd_main(int argc, const char **argv)
test_must_fail git log 000000000...
'
+# There are three objects with this prefix: a blob, a tree, and a tag. We know
+# the blob will not pass as a treeish, but the tree and tag should (and thus
+# cause an error).
+test_expect_success 'ambiguous tags peel to treeish' '
+ test_must_fail git rev-parse 0000000000f^{tree}
+'
+
test_expect_success 'rev-parse --disambiguate' '
# The test creates 16 objects that share the prefix and two
# commits created by commit-tree in earlier tests share a
test "$(sed -e "s/^\(.........\).*/\1/" actual | sort -u)" = 000000000
'
+test_expect_success 'rev-parse --disambiguate drops duplicates' '
+ git rev-parse --disambiguate=000000000 >expect &&
+ git pack-objects .git/objects/pack/pack <expect &&
+ git rev-parse --disambiguate=000000000 >actual &&
+ test_cmp expect actual
+'
+
test_expect_success 'ambiguous 40-hex ref' '
TREE=$(git mktree </dev/null) &&
REF=$(git rev-parse HEAD) &&
grep "refname.*${REF}.*ambiguous" err
'
+test_expect_success C_LOCALE_OUTPUT 'ambiguity errors are not repeated (raw)' '
+ test_must_fail git rev-parse 00000 2>stderr &&
+ grep "is ambiguous" stderr >errors &&
+ test_line_count = 1 errors
+'
+
+test_expect_success C_LOCALE_OUTPUT 'ambiguity errors are not repeated (treeish)' '
+ test_must_fail git rev-parse 00000:foo 2>stderr &&
+ grep "is ambiguous" stderr >errors &&
+ test_line_count = 1 errors
+'
+
+test_expect_success C_LOCALE_OUTPUT 'ambiguity errors are not repeated (peel)' '
+ test_must_fail git rev-parse 00000^{commit} 2>stderr &&
+ grep "is ambiguous" stderr >errors &&
+ test_line_count = 1 errors
+'
+
+test_expect_success C_LOCALE_OUTPUT 'ambiguity hints' '
+ test_must_fail git rev-parse 000000000 2>stderr &&
+ grep ^hint: stderr >hints &&
+ # 16 candidates, plus one intro line
+ test_line_count = 17 hints
+'
+
+test_expect_success C_LOCALE_OUTPUT 'ambiguity hints respect type' '
+ test_must_fail git rev-parse 000000000^{commit} 2>stderr &&
+ grep ^hint: stderr >hints &&
+ # 5 commits, 1 tag (which is a commitish), plus intro line
+ test_line_count = 7 hints
+'
+
+test_expect_success C_LOCALE_OUTPUT 'failed type-selector still shows hint' '
+ # these two blobs share the same prefix "ee3d", but neither
+ # will pass for a commit
+ echo 851 | git hash-object --stdin -w &&
+ echo 872 | git hash-object --stdin -w &&
+ test_must_fail git rev-parse ee3d^{commit} 2>stderr &&
+ grep ^hint: stderr >hints &&
+ test_line_count = 3 hints
+'
+
+test_expect_success 'core.disambiguate config can prefer types' '
+ # ambiguous between tree and tag
+ sha1=0000000000f &&
+ test_must_fail git rev-parse $sha1 &&
+ git rev-parse $sha1^{commit} &&
+ git -c core.disambiguate=committish rev-parse $sha1
+'
+
+test_expect_success 'core.disambiguate does not override context' '
+ # treeish ambiguous between tag and tree
+ test_must_fail \
+ git -c core.disambiguate=committish rev-parse $sha1^{tree}
+'
+
test_done
--- /dev/null
+#!/bin/sh
+
+test_description='test handling of inter-pack delta cycles during repack
+
+The goal here is to create a situation where we have two blobs, A and B, with A
+as a delta against B in one pack, and vice versa in the other. Then if we can
+persuade a full repack to find A from one pack and B from the other, that will
+give us a cycle when we attempt to reuse those deltas.
+
+The trick is in the "persuade" step, as it depends on the internals of how
+pack-objects picks which pack to reuse the deltas from. But we can assume
+that it does so in one of two general strategies:
+
+ 1. Using a static ordering of packs. In this case, no inter-pack cycles can
+ happen. Any objects with a delta relationship must be present in the same
+ pack (i.e., no "--thin" packs on disk), so we will find all related objects
+ from that pack. So assuming there are no cycles within a single pack (and
+ we avoid generating them via pack-objects or importing them via
+ index-pack), then our result will have no cycles.
+
+ So this case should pass the tests no matter how we arrange things.
+
+ 2. Picking the next pack to examine based on locality (i.e., where we found
+ something else recently).
+
+ In this case, we want to make sure that we find the delta versions of A and
+ B and not their base versions. We can do this by putting two blobs in each
+ pack. The first is a "dummy" blob that can only be found in the pack in
+ question. And then the second is the actual delta we want to find.
+
+ The two blobs must be present in the same tree, not present in other trees,
+ and the dummy pathname must sort before the delta path.
+
+The setup below focuses on case 2. We have two commits HEAD and HEAD^, each
+which has two files: "dummy" and "file". Then we can make two packs which
+contain:
+
+ [pack one]
+ HEAD:dummy
+ HEAD:file (as delta against HEAD^:file)
+ HEAD^:file (as base)
+
+ [pack two]
+ HEAD^:dummy
+ HEAD^:file (as delta against HEAD:file)
+ HEAD:file (as base)
+
+Then no matter which order we start looking at the packs in, we know that we
+will always find a delta for "file", because its lookup will always come
+immediately after the lookup for "dummy".
+'
+. ./test-lib.sh
+
+# Create a pack containing the tree $1 and blob $1:file, with
+# the latter stored as a delta against $2:file.
+#
+# We convince pack-objects to make the delta in the direction of our choosing
+# by marking $2 as a preferred-base edge. That results in $1:file as a thin
+# delta, and index-pack completes it by adding $2:file as a base.
+#
+# Note that the two variants of "file" must be similar enough to convince git
+# to create the delta.
+make_pack () {
+ {
+ printf '%s\n' "-$(git rev-parse $2)"
+ printf '%s dummy\n' "$(git rev-parse $1:dummy)"
+ printf '%s file\n' "$(git rev-parse $1:file)"
+ } |
+ git pack-objects --stdout |
+ git index-pack --stdin --fix-thin
+}
+
+test_expect_success 'setup' '
+ test-genrandom base 4096 >base &&
+ for i in one two
+ do
+ # we want shared content here to encourage deltas...
+ cp base file &&
+ echo $i >>file &&
+
+ # ...whereas dummy should be short, because we do not want
+ # deltas that would create duplicates when we --fix-thin
+ echo $i >dummy &&
+
+ git add file dummy &&
+ test_tick &&
+ git commit -m $i ||
+ return 1
+ done &&
+
+ make_pack HEAD^ HEAD &&
+ make_pack HEAD HEAD^
+'
+
+test_expect_success 'repack' '
+ # We first want to check that we do not have any internal errors,
+ # and also that we do not hit the last-ditch cycle-breaking code
+ # in write_object(), which will issue a warning to stderr.
+ >expect &&
+ git repack -ad 2>stderr &&
+ test_cmp expect stderr &&
+
+ # And then double-check that the resulting pack is usable (i.e.,
+ # we did not fail to notice any cycles). We know we are accessing
+ # the objects via the new pack here, because "repack -d" will have
+ # removed the others.
+ git cat-file blob HEAD:file >/dev/null &&
+ git cat-file blob HEAD^:file >/dev/null
+'
+
+test_done
check_prot_path c:repo file c:repo
'
+test_expect_success 'clone shallow since ...' '
+ test_create_repo shallow-since &&
+ (
+ cd shallow-since &&
+ GIT_COMMITTER_DATE="100000000 +0700" git commit --allow-empty -m one &&
+ GIT_COMMITTER_DATE="200000000 +0700" git commit --allow-empty -m two &&
+ GIT_COMMITTER_DATE="300000000 +0700" git commit --allow-empty -m three &&
+ git clone --shallow-since "300000000 +0700" "file://$(pwd)/." ../shallow11 &&
+ git -C ../shallow11 log --pretty=tformat:%s HEAD >actual &&
+ echo three >expected &&
+ test_cmp expected actual
+ )
+'
+
+test_expect_success 'fetch shallow since ...' '
+ git -C shallow11 fetch --shallow-since "200000000 +0700" origin &&
+ git -C shallow11 log --pretty=tformat:%s origin/master >actual &&
+ cat >expected <<-\EOF &&
+ three
+ two
+ EOF
+ test_cmp expected actual
+'
+
+test_expect_success 'shallow clone exclude tag two' '
+ test_create_repo shallow-exclude &&
+ (
+ cd shallow-exclude &&
+ test_commit one &&
+ test_commit two &&
+ test_commit three &&
+ git clone --shallow-exclude two "file://$(pwd)/." ../shallow12 &&
+ git -C ../shallow12 log --pretty=tformat:%s HEAD >actual &&
+ echo three >expected &&
+ test_cmp expected actual
+ )
+'
+
+test_expect_success 'fetch exclude tag one' '
+ git -C shallow12 fetch --shallow-exclude one origin &&
+ git -C shallow12 log --pretty=tformat:%s origin/master >actual &&
+ test_write_lines three two >expected &&
+ test_cmp expected actual
+'
+
+test_expect_success 'fetching deepen' '
+ test_create_repo shallow-deepen &&
+ (
+ cd shallow-deepen &&
+ test_commit one &&
+ test_commit two &&
+ test_commit three &&
+ git clone --depth 1 "file://$(pwd)/." deepen &&
+ test_commit four &&
+ git -C deepen log --pretty=tformat:%s master >actual &&
+ echo three >expected &&
+ test_cmp expected actual &&
+ git -C deepen fetch --deepen=1 &&
+ git -C deepen log --pretty=tformat:%s origin/master >actual &&
+ cat >expected <<-\EOF &&
+ four
+ three
+ two
+ EOF
+ test_cmp expected actual
+ )
+'
+
test_done
)
'
+test_expect_success 'clone shallow since ...' '
+ test_create_repo shallow-since &&
+ (
+ cd shallow-since &&
+ GIT_COMMITTER_DATE="100000000 +0700" git commit --allow-empty -m one &&
+ GIT_COMMITTER_DATE="200000000 +0700" git commit --allow-empty -m two &&
+ GIT_COMMITTER_DATE="300000000 +0700" git commit --allow-empty -m three &&
+ mv .git "$HTTPD_DOCUMENT_ROOT_PATH/shallow-since.git" &&
+ git clone --shallow-since "300000000 +0700" $HTTPD_URL/smart/shallow-since.git ../shallow11 &&
+ git -C ../shallow11 log --pretty=tformat:%s HEAD >actual &&
+ echo three >expected &&
+ test_cmp expected actual
+ )
+'
+
+test_expect_success 'fetch shallow since ...' '
+ git -C shallow11 fetch --shallow-since "200000000 +0700" origin &&
+ git -C shallow11 log --pretty=tformat:%s origin/master >actual &&
+ cat >expected <<-\EOF &&
+ three
+ two
+ EOF
+ test_cmp expected actual
+'
+
+test_expect_success 'shallow clone exclude tag two' '
+ test_create_repo shallow-exclude &&
+ (
+ cd shallow-exclude &&
+ test_commit one &&
+ test_commit two &&
+ test_commit three &&
+ mv .git "$HTTPD_DOCUMENT_ROOT_PATH/shallow-exclude.git" &&
+ git clone --shallow-exclude two $HTTPD_URL/smart/shallow-exclude.git ../shallow12 &&
+ git -C ../shallow12 log --pretty=tformat:%s HEAD >actual &&
+ echo three >expected &&
+ test_cmp expected actual
+ )
+'
+
+test_expect_success 'fetch exclude tag one' '
+ git -C shallow12 fetch --shallow-exclude one origin &&
+ git -C shallow12 log --pretty=tformat:%s origin/master >actual &&
+ test_write_lines three two >expected &&
+ test_cmp expected actual
+'
+
+test_expect_success 'fetching deepen' '
+ test_create_repo shallow-deepen &&
+ (
+ cd shallow-deepen &&
+ test_commit one &&
+ test_commit two &&
+ test_commit three &&
+ mv .git "$HTTPD_DOCUMENT_ROOT_PATH/shallow-deepen.git" &&
+ git clone --depth 1 $HTTPD_URL/smart/shallow-deepen.git deepen &&
+ mv "$HTTPD_DOCUMENT_ROOT_PATH/shallow-deepen.git" .git &&
+ test_commit four &&
+ git -C deepen log --pretty=tformat:%s master >actual &&
+ echo three >expected &&
+ test_cmp expected actual &&
+ mv .git "$HTTPD_DOCUMENT_ROOT_PATH/shallow-deepen.git" &&
+ git -C deepen fetch --deepen=1 &&
+ git -C deepen log --pretty=tformat:%s origin/master >actual &&
+ cat >expected <<-\EOF &&
+ four
+ three
+ two
+ EOF
+ test_cmp expected actual
+ )
+'
+
stop_httpd
test_done
test_expect_success '%C(auto) respects --color' '
git log --color --format="%C(auto)%H" -1 >actual &&
- printf "\\033[m\\033[33m%s\\033[m\\n" $(git rev-parse HEAD) >expect &&
+ printf "\\033[33m%s\\033[m\\n" $(git rev-parse HEAD) >expect &&
test_cmp expect actual
'
test_cmp_rev_output start "git rev-parse ${start%?}"
'
+# rev^- tests; we can use a simpler setup for these
+
+test_expect_success 'setup for rev^- tests' '
+ test_commit one &&
+ test_commit two &&
+ test_commit three &&
+
+ # Merge in a branch for testing rev^-
+ git checkout -b branch &&
+ git checkout HEAD^^ &&
+ git merge -m merge --no-edit --no-ff branch &&
+ git checkout -b merge
+'
+
+# The merged branch has 2 commits + the merge
+test_expect_success 'rev-list --count merge^- = merge^..merge' '
+ git rev-list --count merge^..merge >expect &&
+ echo 3 >actual &&
+ test_cmp expect actual
+'
+
+# All rev^- rev-parse tests
+
+test_expect_success 'rev-parse merge^- = merge^..merge' '
+ git rev-parse merge^..merge >expect &&
+ git rev-parse merge^- >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'rev-parse merge^-1 = merge^..merge' '
+ git rev-parse merge^1..merge >expect &&
+ git rev-parse merge^-1 >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'rev-parse merge^-2 = merge^2..merge' '
+ git rev-parse merge^2..merge >expect &&
+ git rev-parse merge^-2 >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'rev-parse merge^-0 (invalid parent)' '
+ test_must_fail git rev-parse merge^-0
+'
+
+test_expect_success 'rev-parse merge^-3 (invalid parent)' '
+ test_must_fail git rev-parse merge^-3
+'
+
+test_expect_success 'rev-parse merge^-^ (garbage after ^-)' '
+ test_must_fail git rev-parse merge^-^
+'
+
+test_expect_success 'rev-parse merge^-1x (garbage after ^-1)' '
+ test_must_fail git rev-parse merge^-1x
+'
+
+# All rev^- rev-list tests (should be mostly the same as rev-parse; the reason
+# for the duplication is that rev-parse and rev-list use different parsers).
+
+test_expect_success 'rev-list merge^- = merge^..merge' '
+ git rev-list merge^..merge >expect &&
+ git rev-list merge^- >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'rev-list merge^-1 = merge^1..merge' '
+ git rev-list merge^1..merge >expect &&
+ git rev-list merge^-1 >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'rev-list merge^-2 = merge^2..merge' '
+ git rev-list merge^2..merge >expect &&
+ git rev-list merge^-2 >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'rev-list merge^-0 (invalid parent)' '
+ test_must_fail git rev-list merge^-0
+'
+
+test_expect_success 'rev-list merge^-3 (invalid parent)' '
+ test_must_fail git rev-list merge^-3
+'
+
+test_expect_success 'rev-list merge^-^ (garbage after ^-)' '
+ test_must_fail git rev-list merge^-^
+'
+
+test_expect_success 'rev-list merge^-1x (garbage after ^-1)' '
+ test_must_fail git rev-list merge^-1x
+'
+
test_done
TRANS_OPT_THIN,
TRANS_OPT_KEEP,
TRANS_OPT_FOLLOWTAGS,
+ TRANS_OPT_DEEPEN_RELATIVE
};
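+/*
+ * Send one already-formatted option line to the helper and translate
+ * its reply: 0 for "ok", -1 for "error ...", 1 for "unsupported" or
+ * anything unrecognized.
+ */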
+static int strbuf_set_helper_option(struct helper_data *data,
+ struct strbuf *buf)
+{
+ int ret;
+
+ sendline(data, buf);
+ if (recvline(data, buf))
+ exit(128);
+
+ if (!strcmp(buf->buf, "ok"))
+ ret = 0;
+ else if (starts_with(buf->buf, "error"))
+ ret = -1;
+ else if (!strcmp(buf->buf, "unsupported"))
+ ret = 1;
+ else {
+ warning("%s unexpectedly said: '%s'", data->name, buf->buf);
+ ret = 1;
+ }
+ return ret;
+}
+
+static int string_list_set_helper_option(struct helper_data *data,
+ const char *name,
+ struct string_list *list)
+{
+ struct strbuf buf = STRBUF_INIT;
+ int i, ret = 0;
+
+ for (i = 0; i < list->nr; i++) {
+ strbuf_addf(&buf, "option %s ", name);
+ quote_c_style(list->items[i].string, &buf, NULL, 0);
+ strbuf_addch(&buf, '\n');
+
+ if ((ret = strbuf_set_helper_option(data, &buf)))
+ break;
+ strbuf_reset(&buf);
+ }
+ strbuf_release(&buf);
+ return ret;
+}
+
static int set_helper_option(struct transport *transport,
const char *name, const char *value)
{
if (!data->option)
return 1;
+ if (!strcmp(name, "deepen-not"))
+ return string_list_set_helper_option(data, name,
+ (struct string_list *)value);
+
for (i = 0; i < ARRAY_SIZE(unsupported_options); i++) {
if (!strcmp(name, unsupported_options[i]))
return 1;
quote_c_style(value, &buf, NULL, 0);
strbuf_addch(&buf, '\n');
- sendline(data, &buf);
- if (recvline(data, &buf))
- exit(128);
-
- if (!strcmp(buf.buf, "ok"))
- ret = 0;
- else if (starts_with(buf.buf, "error")) {
- ret = -1;
- } else if (!strcmp(buf.buf, "unsupported"))
- ret = 1;
- else {
- warning("%s unexpectedly said: '%s'", data->name, buf.buf);
- ret = 1;
- }
+ ret = strbuf_set_helper_option(data, &buf);
strbuf_release(&buf);
return ret;
}
die(_("transport: invalid depth option '%s'"), value);
}
return 0;
+ } else if (!strcmp(name, TRANS_OPT_DEEPEN_SINCE)) {
+ opts->deepen_since = value;
+ return 0;
+ } else if (!strcmp(name, TRANS_OPT_DEEPEN_NOT)) {
+ opts->deepen_not = (const struct string_list *)value;
+ return 0;
+ } else if (!strcmp(name, TRANS_OPT_DEEPEN_RELATIVE)) {
+ opts->deepen_relative = !!value;
+ return 0;
}
return 1;
}
args.quiet = (transport->verbose < 0);
args.no_progress = !transport->progress;
args.depth = data->options.depth;
+ args.deepen_since = data->options.deepen_since;
+ args.deepen_not = data->options.deepen_not;
+ args.deepen_relative = data->options.deepen_relative;
args.check_self_contained_and_connected =
data->options.check_self_contained_and_connected;
args.cloning = transport->cloning;
#include "run-command.h"
#include "remote.h"
+struct string_list;
+
struct git_transport_options {
unsigned thin : 1;
unsigned keep : 1;
unsigned check_self_contained_and_connected : 1;
unsigned self_contained_and_connected : 1;
unsigned update_shallow : 1;
+ unsigned deepen_relative : 1;
int depth;
+ const char *deepen_since;
+ const struct string_list *deepen_not;
const char *uploadpack;
const char *receivepack;
struct push_cas_option *cas;
/* Limit the depth of the fetch if not null */
#define TRANS_OPT_DEPTH "depth"
+/* Limit the depth of the fetch based on time if not null */
+#define TRANS_OPT_DEEPEN_SINCE "deepen-since"
+
+/* Limit the depth of the fetch based on revs if not null */
+#define TRANS_OPT_DEEPEN_NOT "deepen-not"
+
+/* Make the depth of the fetch relative to the current shallow boundary if not null */
+#define TRANS_OPT_DEEPEN_RELATIVE "deepen-relative"
+
/* Aggressively fetch annotated tags if possible */
#define TRANS_OPT_FOLLOWTAGS "followtags"
* Sort the cache entry -- we need to nuke the cache tree, though.
*/
cache_tree_free(&active_cache_tree);
- qsort(active_cache, active_nr, sizeof(active_cache[0]),
- cmp_cache_name_compare);
+ QSORT(active_cache, active_nr, cmp_cache_name_compare);
return 0;
}
#include "version.h"
#include "string-list.h"
#include "parse-options.h"
+#include "argv-array.h"
static const char * const upload_pack_usage[] = {
N_("git upload-pack [<options>] <dir>"),
static unsigned long oldest_have;
+static int deepen_relative;
static int multi_ack;
static int no_done;
static int use_thin_pack, use_ofs_delta, use_include_tag;
die("git upload-pack: %s", abort_msg);
}
-static int got_sha1(char *hex, unsigned char *sha1)
+static int got_sha1(const char *hex, unsigned char *sha1)
{
struct object *o;
int we_knew_they_have = 0;
for (;;) {
char *line = packet_read_line(0, NULL);
+ const char *arg;
+
reset_timeout();
if (!line) {
got_other = 0;
continue;
}
- if (starts_with(line, "have ")) {
- switch (got_sha1(line+5, sha1)) {
+ if (skip_prefix(line, "have ", &arg)) {
+ switch (got_sha1(arg, sha1)) {
case -1: /* they have what we do not */
got_other = 1;
if (multi_ack && ok_to_give_up()) {
return o->flags & ((allow_hidden_ref ? HIDDEN_REF : 0) | OUR_REF);
}
-static void check_non_tip(void)
+/*
+ * On success, it is up to the caller to close cmd->out.
+ */
+static int do_reachable_revlist(struct child_process *cmd,
+ struct object_array *src,
+ struct object_array *reachable)
{
static const char *argv[] = {
"rev-list", "--stdin", NULL,
};
- static struct child_process cmd = CHILD_PROCESS_INIT;
struct object *o;
char namebuf[42]; /* ^ + SHA-1 + LF */
int i;
- /*
- * In the normal in-process case without
- * uploadpack.allowReachableSHA1InWant,
- * non-tip requests can never happen.
- */
- if (!stateless_rpc && !(allow_unadvertised_object_request & ALLOW_REACHABLE_SHA1))
- goto error;
-
- cmd.argv = argv;
- cmd.git_cmd = 1;
- cmd.no_stderr = 1;
- cmd.in = -1;
- cmd.out = -1;
-
- if (start_command(&cmd))
- goto error;
+ cmd->argv = argv;
+ cmd->git_cmd = 1;
+ cmd->no_stderr = 1;
+ cmd->in = -1;
+ cmd->out = -1;
/*
- * If rev-list --stdin encounters an unknown commit, it
- * terminates, which will cause SIGPIPE in the write loop
+ * If the next rev-list --stdin encounters an unknown commit,
+ * it terminates, which will cause SIGPIPE in the write loop
* below.
*/
sigchain_push(SIGPIPE, SIG_IGN);
+ if (start_command(cmd))
+ goto error;
+
namebuf[0] = '^';
namebuf[41] = '\n';
for (i = get_max_object_index(); 0 < i; ) {
o = get_indexed_object(--i);
if (!o)
continue;
+ if (reachable && o->type == OBJ_COMMIT)
+ o->flags &= ~TMP_MARK;
if (!is_our_ref(o))
continue;
memcpy(namebuf + 1, oid_to_hex(&o->oid), GIT_SHA1_HEXSZ);
- if (write_in_full(cmd.in, namebuf, 42) < 0)
+ if (write_in_full(cmd->in, namebuf, 42) < 0)
goto error;
}
namebuf[40] = '\n';
- for (i = 0; i < want_obj.nr; i++) {
- o = want_obj.objects[i].item;
- if (is_our_ref(o))
+ for (i = 0; i < src->nr; i++) {
+ o = src->objects[i].item;
+ if (is_our_ref(o)) {
+ if (reachable)
+ add_object_array(o, NULL, reachable);
continue;
+ }
+ if (reachable && o->type == OBJ_COMMIT)
+ o->flags |= TMP_MARK;
memcpy(namebuf, oid_to_hex(&o->oid), GIT_SHA1_HEXSZ);
- if (write_in_full(cmd.in, namebuf, 41) < 0)
+ if (write_in_full(cmd->in, namebuf, 41) < 0)
goto error;
}
- close(cmd.in);
+ close(cmd->in);
+ cmd->in = -1;
+ sigchain_pop(SIGPIPE);
+ return 0;
+
+error:
sigchain_pop(SIGPIPE);
+ if (cmd->in >= 0)
+ close(cmd->in);
+ if (cmd->out >= 0)
+ close(cmd->out);
+ return -1;
+}
+
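+/*
+ * Fill "reachable" with the subset of "src" that is reachable from
+ * our advertised refs.  The rev-list run by do_reachable_revlist()
+ * prints the commits that are NOT reachable; clearing TMP_MARK on
+ * those leaves exactly the reachable ones to collect afterwards.
+ */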
+static int get_reachable_list(struct object_array *src,
+ struct object_array *reachable)
+{
+ struct child_process cmd = CHILD_PROCESS_INIT;
+ int i;
+ struct object *o;
+ char namebuf[42]; /* ^ + SHA-1 + LF */
+
+ if (do_reachable_revlist(&cmd, src, reachable) < 0)
+ return -1;
+
+ while ((i = read_in_full(cmd.out, namebuf, 41)) == 41) {
+ struct object_id sha1;
+
+ if (namebuf[40] != '\n' || get_oid_hex(namebuf, &sha1))
+ break;
+
+ o = lookup_object(sha1.hash);
+ if (o && o->type == OBJ_COMMIT) {
+ o->flags &= ~TMP_MARK;
+ }
+ }
+ for (i = get_max_object_index(); 0 < i; i--) {
+ o = get_indexed_object(i - 1);
+ if (o && o->type == OBJ_COMMIT &&
+ (o->flags & TMP_MARK)) {
+ add_object_array(o, NULL, reachable);
+ o->flags &= ~TMP_MARK;
+ }
+ }
+ close(cmd.out);
+
+ if (finish_command(&cmd))
+ return -1;
+
+ return 0;
+}
+
+static int has_unreachable(struct object_array *src)
+{
+ struct child_process cmd = CHILD_PROCESS_INIT;
+ char buf[1];
+ int i;
+
+ if (do_reachable_revlist(&cmd, src, NULL) < 0)
+ return 1;
+
/*
* The commits out of the rev-list are not ancestors of
* our ref.
*/
- i = read_in_full(cmd.out, namebuf, 1);
+ i = read_in_full(cmd.out, buf, 1);
if (i)
goto error;
close(cmd.out);
+ cmd.out = -1;
/*
* rev-list may have died by encountering a bad commit
goto error;
/* All the non-tip ones are ancestors of what we advertised */
- return;
+ return 0;
+
+error:
+ sigchain_pop(SIGPIPE);
+ if (cmd.out >= 0)
+ close(cmd.out);
+ return 1;
+}
+
+static void check_non_tip(void)
+{
+ int i;
+
+ /*
+ * In the normal in-process case without
+ * uploadpack.allowReachableSHA1InWant,
+ * non-tip requests can never happen.
+ */
+ if (!stateless_rpc && !(allow_unadvertised_object_request & ALLOW_REACHABLE_SHA1))
+ goto error;
+ if (!has_unreachable(&want_obj))
+ /* All the non-tip ones are ancestors of what we advertised */
+ return;
error:
/* Pick one of them (we know there is at least one) */
for (i = 0; i < want_obj.nr; i++) {
- o = want_obj.objects[i].item;
+ struct object *o = want_obj.objects[i].item;
if (!is_our_ref(o))
die("git upload-pack: not our ref %s",
oid_to_hex(&o->oid));
}
}

+static void send_shallow(struct commit_list *result)
+{
+ while (result) {
+ struct object *object = &result->item->object;
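+ /* skip commits the client already marked shallow or that are about to be unshallowed */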
+ if (!(object->flags & (CLIENT_SHALLOW|NOT_SHALLOW))) {
+ packet_write(1, "shallow %s",
+ oid_to_hex(&object->oid));
+ register_shallow(object->oid.hash);
+ shallow_nr++;
+ }
+ result = result->next;
+ }
+}
+
+static void send_unshallow(const struct object_array *shallows)
+{
+ int i;
+
+ for (i = 0; i < shallows->nr; i++) {
+ struct object *object = shallows->objects[i].item;
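+ /* NOT_SHALLOW means this former shallow point now lies within the deepened history */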
+ if (object->flags & NOT_SHALLOW) {
+ struct commit_list *parents;
+ packet_write(1, "unshallow %s",
+ oid_to_hex(&object->oid));
+ object->flags &= ~CLIENT_SHALLOW;
+ /*
+ * We want to _register_ "object" as shallow, but we
+ * also need to traverse object's parents to deepen a
+ * shallow clone. Unregister it for now so we can
+ * parse and add the parents to the want list, then
+ * re-register it.
+ */
+ unregister_shallow(object->oid.hash);
+ object->parsed = 0;
+ parse_commit_or_die((struct commit *)object);
+ parents = ((struct commit *)object)->parents;
+ while (parents) {
+ add_object_array(&parents->item->object,
+ NULL, &want_obj);
+ parents = parents->next;
+ }
+ add_object_array(object, NULL, &extra_edge_obj);
+ }
+ /* make sure commit traversal conforms to client */
+ register_shallow(object->oid.hash);
+ }
+}
+
+static void deepen(int depth, int deepen_relative,
+ struct object_array *shallows)
+{
+ if (depth == INFINITE_DEPTH && !is_repository_shallow()) {
+ int i;
+
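+ /* the client asked for unlimited depth and our repository is not shallow; every client shallow can be lifted */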
+ for (i = 0; i < shallows->nr; i++) {
+ struct object *object = shallows->objects[i].item;
+ object->flags |= NOT_SHALLOW;
+ }
+ } else if (deepen_relative) {
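+ /* deepen-relative: the requested depth counts from the client's current shallow boundary */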
+ struct object_array reachable_shallows = OBJECT_ARRAY_INIT;
+ struct commit_list *result;
+
+ get_reachable_list(shallows, &reachable_shallows);
+ result = get_shallow_commits(&reachable_shallows,
+ depth + 1,
+ SHALLOW, NOT_SHALLOW);
+ send_shallow(result);
+ free_commit_list(result);
+ object_array_clear(&reachable_shallows);
+ } else {
+ struct commit_list *result;
+
+ result = get_shallow_commits(&want_obj, depth,
+ SHALLOW, NOT_SHALLOW);
+ send_shallow(result);
+ free_commit_list(result);
+ }
+
+ send_unshallow(shallows);
+ packet_flush(1);
+}
+
+static void deepen_by_rev_list(int ac, const char **av,
+ struct object_array *shallows)
+{
+ struct commit_list *result;
+
+ result = get_shallow_commits_by_rev_list(ac, av, SHALLOW, NOT_SHALLOW);
+ send_shallow(result);
+ free_commit_list(result);
+ send_unshallow(shallows);
+ packet_flush(1);
+}
+
static void receive_needs(void)
{
struct object_array shallows = OBJECT_ARRAY_INIT;
+ struct string_list deepen_not = STRING_LIST_INIT_DUP;
int depth = 0;
int has_non_tip = 0;
+ unsigned long deepen_since = 0;
+ int deepen_rev_list = 0;
shallow_nr = 0;
for (;;) {
const char *features;
unsigned char sha1_buf[20];
char *line = packet_read_line(0, NULL);
+ const char *arg;
+
reset_timeout();
if (!line)
break;
- if (starts_with(line, "shallow ")) {
+ if (skip_prefix(line, "shallow ", &arg)) {
unsigned char sha1[20];
struct object *object;
- if (get_sha1_hex(line + 8, sha1))
+ if (get_sha1_hex(arg, sha1))
die("invalid shallow line: %s", line);
object = parse_object(sha1);
if (!object)
}
continue;
}
- if (starts_with(line, "deepen ")) {
- char *end;
- depth = strtol(line + 7, &end, 0);
- if (end == line + 7 || depth <= 0)
+ if (skip_prefix(line, "deepen ", &arg)) {
+ char *end = NULL;
+ depth = strtol(arg, &end, 0);
+ if (!end || *end || depth <= 0)
die("Invalid deepen: %s", line);
continue;
}
- if (!starts_with(line, "want ") ||
- get_sha1_hex(line+5, sha1_buf))
+ if (skip_prefix(line, "deepen-since ", &arg)) {
+ char *end = NULL;
+ deepen_since = strtoul(arg, &end, 0);
+ if (!end || *end || !deepen_since ||
+ /* revisions.c's max_age -1 is special */
+ deepen_since == -1)
+ die("Invalid deepen-since: %s", line);
+ deepen_rev_list = 1;
+ continue;
+ }
+ if (skip_prefix(line, "deepen-not ", &arg)) {
+ char *ref = NULL;
+ unsigned char sha1[20];
+ if (expand_ref(arg, strlen(arg), sha1, &ref) != 1)
+ die("git upload-pack: ambiguous deepen-not: %s", line);
+ string_list_append(&deepen_not, ref);
+ free(ref);
+ deepen_rev_list = 1;
+ continue;
+ }
+ if (!skip_prefix(line, "want ", &arg) ||
+ get_sha1_hex(arg, sha1_buf))
die("git upload-pack: protocol error, "
"expected to get sha, not '%s'", line);
- features = line + 45;
+ features = arg + 40;
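+ /* "arg" points just past "want "; the feature list follows the 40-hex object name */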
+ if (parse_feature_request(features, "deepen-relative"))
+ deepen_relative = 1;
if (parse_feature_request(features, "multi_ack_detailed"))
multi_ack = 2;
else if (parse_feature_request(features, "multi_ack"))
if (!use_sideband && daemon_mode)
no_progress = 1;
- if (depth == 0 && shallows.nr == 0)
+ if (depth == 0 && !deepen_rev_list && shallows.nr == 0)
return;
- if (depth > 0) {
- struct commit_list *result = NULL, *backup = NULL;
+ if (depth > 0 && deepen_rev_list)
+ die("git upload-pack: deepen and deepen-since (or deepen-not) cannot be used together");
+ if (depth > 0)
+ deepen(depth, deepen_relative, &shallows);
+ else if (deepen_rev_list) {
+ struct argv_array av = ARGV_ARRAY_INIT;
int i;
- if (depth == INFINITE_DEPTH && !is_repository_shallow())
- for (i = 0; i < shallows.nr; i++) {
- struct object *object = shallows.objects[i].item;
- object->flags |= NOT_SHALLOW;
- }
- else
- backup = result =
- get_shallow_commits(&want_obj, depth,
- SHALLOW, NOT_SHALLOW);
- while (result) {
- struct object *object = &result->item->object;
- if (!(object->flags & (CLIENT_SHALLOW|NOT_SHALLOW))) {
- packet_write(1, "shallow %s",
- oid_to_hex(&object->oid));
- register_shallow(object->oid.hash);
- shallow_nr++;
+
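+ /* assemble a rev-list style argument vector for get_shallow_commits_by_rev_list() */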
+ argv_array_push(&av, "rev-list");
+ if (deepen_since)
+ argv_array_pushf(&av, "--max-age=%lu", deepen_since);
+ if (deepen_not.nr) {
+ argv_array_push(&av, "--not");
+ for (i = 0; i < deepen_not.nr; i++) {
+ struct string_list_item *s = deepen_not.items + i;
+ argv_array_push(&av, s->string);
}
- result = result->next;
+ argv_array_push(&av, "--not");
}
- free_commit_list(backup);
- for (i = 0; i < shallows.nr; i++) {
- struct object *object = shallows.objects[i].item;
- if (object->flags & NOT_SHALLOW) {
- struct commit_list *parents;
- packet_write(1, "unshallow %s",
- oid_to_hex(&object->oid));
- object->flags &= ~CLIENT_SHALLOW;
- /* make sure the real parents are parsed */
- unregister_shallow(object->oid.hash);
- object->parsed = 0;
- parse_commit_or_die((struct commit *)object);
- parents = ((struct commit *)object)->parents;
- while (parents) {
- add_object_array(&parents->item->object,
- NULL, &want_obj);
- parents = parents->next;
- }
- add_object_array(object, NULL, &extra_edge_obj);
- }
- /* make sure commit traversal conforms to client */
- register_shallow(object->oid.hash);
+ for (i = 0; i < want_obj.nr; i++) {
+ struct object *o = want_obj.objects[i].item;
+ argv_array_push(&av, oid_to_hex(&o->oid));
}
- packet_flush(1);
- } else
+ deepen_by_rev_list(av.argc, av.argv, &shallows);
+ argv_array_clear(&av);
+ }
+ else
if (shallows.nr > 0) {
int i;
for (i = 0; i < shallows.nr; i++)
int flag, void *cb_data)
{
static const char *capabilities = "multi_ack thin-pack side-band"
- " side-band-64k ofs-delta shallow no-progress"
- " include-tag multi_ack_detailed";
+ " side-band-64k ofs-delta shallow deepen-since deepen-not"
+ " deepen-relative no-progress include-tag multi_ack_detailed";
const char *refname_nons = strip_namespace(refname);
struct object_id peeled;
if (!strcmp(cb->buf.buf, "HEAD")) {
/* HEAD is relative. Resolve it to the right reflog entry. */
strbuf_reset(&cb->buf);
- strbuf_addstr(&cb->buf,
- find_unique_abbrev(nsha1, DEFAULT_ABBREV));
+ strbuf_add_unique_abbrev(&cb->buf, nsha1, DEFAULT_ABBREV);
}
return 1;
}