--------
Subsection names are case sensitive and can contain any characters except
-newline (doublequote `"` and backslash can be included by escaping them
-as `\"` and `\\`, respectively). Section headers cannot span multiple
-lines. Variables may belong directly to a section or to a given subsection.
-You can have `[section]` if you have `[section "subsection"]`, but you
-don't need to.
+newline and the null byte. Doublequote `"` and backslash can be included
+by escaping them as `\"` and `\\`, respectively. Backslashes preceding
+other characters are dropped when reading; for example, `\t` is read as
+`t` and `\0` is read as `0`. Section headers cannot span multiple lines.
+Variables may belong directly to a section or to a given subsection. You
+can have `[section]` if you have `[section "subsection"]`, but you don't
+need to.
There is also a deprecated `[section.subsection]` syntax. With this
syntax, the subsection name is converted to lower-case and is also
addEmbeddedRepo::
Advice on what to do when you've accidentally added one
git repo inside of another.
+ ignoredHook::
+		Advice shown if a hook is ignored because the hook is not
+ set as executable.
+ waitingForEditor::
+ Print a message to the terminal whenever Git is waiting for
+ editor input from the user.
--
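+
+For example, in a repository where a hook is intentionally kept
+non-executable, the corresponding message can be turned off:
+
+	git config advice.ignoredHook false
+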
core.fileMode::
8.3 "short" names.
Defaults to `true` on Windows, and `false` elsewhere.
+core.fsmonitor::
+ If set, the value of this variable is used as a command which
+ will identify all files that may have changed since the
+ requested date/time. This information is used to speed up git by
+ avoiding unnecessary processing of files that have not changed.
+ See the "fsmonitor-watchman" section of linkgit:githooks[5].
+
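+For example, assuming the Watchman integration script described in the
+"fsmonitor-watchman" section of linkgit:githooks[5] has been installed
+as `.git/hooks/query-watchman` (the path here is only illustrative), it
+can be enabled with:
+
+	git config core.fsmonitor .git/hooks/query-watchman
+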
core.trustctime::
If false, the ctime differences between the index and the
working tree are ignored; useful when the inode change time
Tells 'git apply' how to handle whitespaces, in the same way
as the `--whitespace` option. See linkgit:git-apply[1].
+blame.showRoot::
+ Do not treat root commits as boundaries in linkgit:git-blame[1].
+ This option defaults to false.
+
+blame.blankBoundary::
+ Show blank commit object name for boundary commits in
+ linkgit:git-blame[1]. This option defaults to false.
+
+blame.showEmail::
+ Show the author email instead of author name in linkgit:git-blame[1].
+ This option defaults to false.
+
+blame.date::
+ Specifies the format used to output dates in linkgit:git-blame[1].
+	If unset, the iso format is used. For supported values,
+ see the discussion of the `--date` option at linkgit:git-log[1].
+
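+For example, to make linkgit:git-blame[1] print relative dates by
+default:
+
+	git config blame.date relative
+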
branch.autoSetupMerge::
Tells 'git branch' and 'git checkout' to set up new branches
so that linkgit:git-pull[1] will appropriately merge from the
http.sslVerify::
Whether to verify the SSL certificate when fetching or pushing
- over HTTPS. Can be overridden by the `GIT_SSL_NO_VERIFY` environment
- variable.
+ over HTTPS. Defaults to true. Can be overridden by the
+ `GIT_SSL_NO_VERIFY` environment variable.
http.sslCert::
File containing the SSL certificate when fetching or pushing
visited as a result of a redirection do not participate in matching.
ssh.variant::
- Depending on the value of the environment variables `GIT_SSH` or
- `GIT_SSH_COMMAND`, or the config setting `core.sshCommand`, Git
- auto-detects whether to adjust its command-line parameters for use
- with plink or tortoiseplink, as opposed to the default (OpenSSH).
+ By default, Git determines the command line arguments to use
+ based on the basename of the configured SSH command (configured
+ using the environment variable `GIT_SSH` or `GIT_SSH_COMMAND` or
+ the config setting `core.sshCommand`). If the basename is
+ unrecognized, Git will attempt to detect support of OpenSSH
+ options by first invoking the configured SSH command with the
+ `-G` (print configuration) option and will subsequently use
+ OpenSSH options (if that is successful) or no options besides
+ the host and remote command (if it fails).
++
+The config variable `ssh.variant` can be set to override this detection.
+Valid values are `ssh` (to use OpenSSH options), `plink`, `putty`,
+`tortoiseplink`, and `simple` (no options except the host and remote command).
+The default auto-detection can be explicitly requested using the value
+`auto`. Any other value is treated as `ssh`. This setting can also be
+overridden via the environment variable `GIT_SSH_VARIANT`.
++
+The current command-line parameters used for each variant are as
+follows:
+
-The config variable `ssh.variant` can be set to override this auto-detection;
-valid values are `ssh`, `plink`, `putty` or `tortoiseplink`. Any other value
-will be treated as normal ssh. This setting can be overridden via the
-environment variable `GIT_SSH_VARIANT`.
+--
+
+* `ssh` - [-p port] [-4] [-6] [-o option] [username@]host command
+
+* `simple` - [username@]host command
+
+* `plink` or `putty` - [-P port] [-4] [-6] [username@]host command
+
+* `tortoiseplink` - [-P port] [-4] [-6] -batch [username@]host command
+
+--
++
+Except for the `simple` variant, command-line parameters are likely to
+change as git gains new features.
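+
+For example, the detection can be bypassed for a single invocation by
+naming a variant explicitly, either through the environment or the
+configuration (the host and path below are only placeholders):
+
+	GIT_SSH_VARIANT=simple git fetch origin
+	git -c ssh.variant=tortoiseplink clone "[myhost:123]:src" src
+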
i18n.commitEncoding::
Character encoding the commit messages are stored in; Git itself
`hg` to allow the `git-remote-hg` helper)
--
+protocol.version::
+ Experimental. If set, clients will attempt to communicate with a
+ server using the specified protocol version. If unset, no
+ attempt will be made by the client to communicate using a
+	particular protocol version; this results in protocol version 0
+ being used.
+ Supported versions:
++
+--
+
+* `0` - the original wire protocol.
+
+* `1` - the original wire protocol with the addition of a version string
+ in the initial response from the server.
+
+--
+
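+For example, assuming a configured remote named `origin`, a client can
+opt into version 1 for a single command and observe the server's
+"version 1" announcement with packet tracing:
+
+	GIT_TRACE_PACKET=1 git -c protocol.version=1 ls-remote origin
+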
pull.ff::
By default, Git does not create an extra merge commit when merging
a commit that is a descendant of the current commit. Instead, the
override a value from a lower-priority config file. An explicit
command-line flag always overrides this config option.
+push.pushOption::
+ When no `--push-option=<option>` argument is given from the
+ command line, `git push` behaves as if each <value> of
+ this variable is given as `--push-option=<value>`.
++
+This is a multi-valued variable, and an empty value can be used in a
+higher-priority configuration file (e.g. `.git/config` in a
+repository) to clear the values inherited from lower-priority
+configuration files (e.g. `$HOME/.gitconfig`).
++
+--
+
+Example:
+
+/etc/gitconfig
+ push.pushoption = a
+ push.pushoption = b
+
+~/.gitconfig
+ push.pushoption = c
+
+repo/.git/config
+ push.pushoption =
+ push.pushoption = b
+
+This will result in only b (a and c are cleared).
+
+--
+
push.recurseSubmodules::
Make sure all submodule commits used by the revisions to be pushed
are available on a remote-tracking branch. If the value is 'check'
is retained. You may override this configuration at time of push by
specifying '--recurse-submodules=check|on-demand|no'.
-rebase.stat::
- Whether to show a diffstat of what changed upstream since the last
- rebase. False by default.
-
-rebase.autoSquash::
- If set to true enable `--autosquash` option by default.
-
-rebase.autoStash::
- When set to true, automatically create a temporary stash entry
- before the operation begins, and apply it after the operation
- ends. This means that you can run rebase on a dirty worktree.
- However, use with care: the final stash application after a
- successful rebase might result in non-trivial conflicts.
- Defaults to false.
-
-rebase.missingCommitsCheck::
- If set to "warn", git rebase -i will print a warning if some
- commits are removed (e.g. a line was deleted), however the
- rebase will still proceed. If set to "error", it will print
- the previous warning and stop the rebase, 'git rebase
- --edit-todo' can then be used to correct the error. If set to
- "ignore", no checking is done.
- To drop a commit without warning or error, use the `drop`
- command in the todo-list.
- Defaults to "ignore".
-
-rebase.instructionFormat::
- A format string, as specified in linkgit:git-log[1], to be used for
- the instruction list during an interactive rebase. The format will automatically
- have the long commit hash prepended to the format.
+include::rebase-config.txt[]
receive.advertiseAtomic::
By default, git-receive-pack will advertise the atomic push
sendemail.suppresscc::
sendemail.suppressFrom::
sendemail.to::
+sendemail.tocmd::
sendemail.smtpDomain::
sendemail.smtpServer::
sendemail.smtpServerPort::
was run. I.e., `upload-pack` will feed input intended for
`pack-objects` to the hook, and expects a completed packfile on
stdout.
+
+ uploadpack.allowFilter::
+ If this option is set, `upload-pack` will advertise partial
+ clone and partial fetch object filtering.
+
Note that this configuration variable is ignored if it is seen in the
repository-level config (this is a safety measure against fetching from
Specify a web browser that may be used by some commands.
Currently only linkgit:git-instaweb[1] and linkgit:git-help[1]
may use it.
+
+worktree.guessRemote::
+ With `add`, if no branch argument, and neither of `-b` nor
+ `-B` nor `--detach` are given, the command defaults to
+ creating a new branch from HEAD. If `worktree.guessRemote` is
+ set to true, `worktree add` tries to find a remote-tracking
+ branch whose name uniquely matches the new branch name. If
+ such a branch exists, it is checked out and set as "upstream"
+ for the new branch. If no such match can be found, it falls
+ back to creating a new branch from the current HEAD.
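+
+For example, assuming the new branch name `topic` (taken from the path's
+basename) uniquely matches the remote-tracking branch `origin/topic`,
+the following checks out `origin/topic` in the new worktree and sets it
+as upstream of the new `topic` branch:
+
+	git config worktree.guessRemote true
+	git worktree add ../topic
+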
The file:// transport runs the 'upload-pack' or 'receive-pack'
process locally and communicates with it over a pipe.
+Extra Parameters
+----------------
+
+The protocol provides a mechanism in which clients can send additional
+information in their first message to the server. These are called "Extra
+Parameters", and are supported by the Git, SSH, and HTTP protocols.
+
+Each Extra Parameter takes the form of `<key>=<value>` or `<key>`.
+
+Servers that receive any such Extra Parameters MUST ignore all
+unrecognized keys. Currently, the only Extra Parameter recognized is
+"version=1".
+
Git Transport
-------------
on the wire using the pkt-line format, followed by a NUL byte and a
hostname parameter, terminated by a NUL byte.
- 0032git-upload-pack /project.git\0host=myserver.com\0
+ 0033git-upload-pack /project.git\0host=myserver.com\0
+
+The transport may send Extra Parameters by adding an additional NUL
+byte, and then adding one or more NUL-terminated strings:
+
+ 003egit-upload-pack /project.git\0host=myserver.com\0\0version=1\0
--
- git-proto-request = request-command SP pathname NUL [ host-parameter NUL ]
+ git-proto-request = request-command SP pathname NUL
+ [ host-parameter NUL ] [ NUL extra-parameters ]
request-command = "git-upload-pack" / "git-receive-pack" /
"git-upload-archive" ; case sensitive
pathname = *( %x01-ff ) ; exclude NUL
host-parameter = "host=" hostname [ ":" port ]
+ extra-parameters = 1*extra-parameter
+ extra-parameter = 1*( %x01-ff ) NUL
--
-Only host-parameter is allowed in the git-proto-request. Clients
-MUST NOT attempt to send additional parameters. It is used for the
+host-parameter is used for the
git-daemon name based virtual hosting. See --interpolated-path
option to git daemon, with the %H/%CH format characters.
v
ssh user@example.com "git-upload-pack '~alice/project.git'"
+Depending on the value of the `protocol.version` configuration variable,
+Git may attempt to send Extra Parameters as a colon-separated string in
+the GIT_PROTOCOL environment variable. This is done only if
+the `ssh.variant` configuration variable indicates that the ssh command
+supports passing environment variables as an argument.
+
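+Conceptually, this is similar to invoking ssh by hand with the variable
+forwarded (whether it actually reaches the server also depends on the
+server's sshd accepting `GIT_PROTOCOL`):
+
+	GIT_PROTOCOL=version=1 ssh -o SendEnv=GIT_PROTOCOL \
+		user@example.com "git-upload-pack '~alice/project.git'"
+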
A few things to remember here:
- The "command name" is spelled with dash (e.g. git-upload-pack), but
-------------------
When the client initially connects the server will immediately respond
-with a listing of each reference it has (all branches and tags) along
+with a version number (if "version=1" is sent as an Extra Parameter),
+and a listing of each reference it has (all branches and tags) along
with the object name that each reference currently points to.
- $ echo -e -n "0039git-upload-pack /schacon/gitbook.git\0host=example.com\0" |
+ $ echo -e -n "0044git-upload-pack /schacon/gitbook.git\0host=example.com\0\0version=1\0" |
nc -v example.com 9418
+   000eversion 1
00887217a7c7e582c46cec22a130adf4b9d7d950fba0 HEAD\0multi_ack thin-pack
side-band side-band-64k ofs-delta shallow no-progress include-tag
00441d3fcd5ced445d1abc402225c0b8a1299641f497 refs/heads/integration
MUST peel the ref if it's an annotated tag.
----
- advertised-refs = (no-refs / list-of-refs)
+ advertised-refs = *1("version 1")
+ (no-refs / list-of-refs)
*shallow
flush-pkt
upload-request = want-list
*shallow-line
*1depth-request
+ [filter-request]
flush-pkt
want-list = first-want
additional-want = PKT-LINE("want" SP obj-id)
depth = 1*DIGIT
+
+ filter-request = PKT-LINE("filter" SP filter-spec)
----
Clients MUST send all the obj-ids it wants from the reference
result are defined as shallow and marked as such in the server. This
information is sent back to the client in the next step.
+ The client can optionally request that pack-objects omit various
+ objects from the packfile using one of several filtering techniques.
+ These are intended for use with partial clone and partial fetch
+ operations. See `rev-list` for possible "filter-spec" values.
+
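+For example, a client that wants the server to omit all non-empty blobs
+passes a filter-spec on the command line; it is then sent to the server
+as a filter request, assuming the server has `uploadpack.allowFilter`
+enabled and advertises the "filter" capability:
+
+	git clone --filter=blob:limit=0 ssh://example.com/project.git
+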
Once all the 'want's and 'shallow's (and optional 'deepen') are
transferred, clients MUST send a flush-pkt, to tell the server side
that it is done sending the list.
#include "run-command.h"
#include "connected.h"
#include "packfile.h"
+ #include "list-objects-filter-options.h"
/*
* Overall FIXMEs:
static int option_dissociate;
static int max_jobs = -1;
static struct string_list option_recurse_submodules = STRING_LIST_INIT_NODUP;
+ static struct list_objects_filter_options filter_options;
static int recurse_submodules_cb(const struct option *opt,
const char *arg, int unset)
TRANSPORT_FAMILY_IPV4),
OPT_SET_INT('6', "ipv6", &family, N_("use IPv6 addresses only"),
TRANSPORT_FAMILY_IPV6),
+ OPT_PARSE_LIST_OBJECTS_FILTER(&filter_options),
OPT_END()
};
{
if (option_shared) {
struct strbuf alt = STRBUF_INIT;
- strbuf_addf(&alt, "%s/objects", src_repo);
+ get_common_dir(&alt, src_repo);
+ strbuf_addstr(&alt, "/objects");
add_to_alternates_file(alt.buf);
strbuf_release(&alt);
} else {
}
static const char *junk_work_tree;
+static int junk_work_tree_flags;
static const char *junk_git_dir;
+static int junk_git_dir_flags;
static enum {
JUNK_LEAVE_NONE,
JUNK_LEAVE_REPO,
if (junk_git_dir) {
strbuf_addstr(&sb, junk_git_dir);
- remove_dir_recursively(&sb, 0);
+ remove_dir_recursively(&sb, junk_git_dir_flags);
strbuf_reset(&sb);
}
if (junk_work_tree) {
strbuf_addstr(&sb, junk_work_tree);
- remove_dir_recursively(&sb, 0);
+ remove_dir_recursively(&sb, junk_work_tree_flags);
}
strbuf_release(&sb);
}
for (r = local_refs; r; r = r->next) {
if (!r->peer_ref)
continue;
- if (ref_transaction_create(t, r->peer_ref->name, r->old_oid.hash,
+ if (ref_transaction_create(t, r->peer_ref->name, &r->old_oid,
0, NULL, &err))
die("%s", err.buf);
}
continue;
if (!has_object_file(&ref->old_oid))
continue;
- update_ref(msg, ref->name, ref->old_oid.hash,
- NULL, 0, UPDATE_REFS_DIE_ON_ERR);
+ update_ref(msg, ref->name, &ref->old_oid, NULL, 0,
+ UPDATE_REFS_DIE_ON_ERR);
}
}
-static int iterate_ref_map(void *cb_data, unsigned char sha1[20])
+static int iterate_ref_map(void *cb_data, struct object_id *oid)
{
struct ref **rm = cb_data;
struct ref *ref = *rm;
if (!ref)
return -1;
- hashcpy(sha1, ref->old_oid.hash);
+ oidcpy(oid, &ref->old_oid);
*rm = ref->next;
return 0;
}
if (create_symref("HEAD", our->name, NULL) < 0)
die(_("unable to update HEAD"));
if (!option_bare) {
- update_ref(msg, "HEAD", our->old_oid.hash, NULL, 0,
+ update_ref(msg, "HEAD", &our->old_oid, NULL, 0,
UPDATE_REFS_DIE_ON_ERR);
install_branch_config(0, head, option_origin, our->name);
}
} else if (our) {
struct commit *c = lookup_commit_reference(&our->old_oid);
/* --branch specifies a non-branch (i.e. tags), detach HEAD */
- update_ref(msg, "HEAD", c->object.oid.hash,
- NULL, REF_NODEREF, UPDATE_REFS_DIE_ON_ERR);
+ update_ref(msg, "HEAD", &c->object.oid, NULL, REF_NO_DEREF,
+ UPDATE_REFS_DIE_ON_ERR);
} else if (remote) {
/*
* We know remote HEAD points to a non-branch, or
* HEAD points to a branch but we don't know which one.
* Detach HEAD in all these cases.
*/
- update_ref(msg, "HEAD", remote->old_oid.hash,
- NULL, REF_NODEREF, UPDATE_REFS_DIE_ON_ERR);
+ update_ref(msg, "HEAD", &remote->old_oid, NULL, REF_NO_DEREF,
+ UPDATE_REFS_DIE_ON_ERR);
}
}
{
struct object_id oid;
char *head;
- struct lock_file *lock_file;
+ struct lock_file lock_file = LOCK_INIT;
struct unpack_trees_options opts;
struct tree *tree;
struct tree_desc t;
if (option_no_checkout)
return 0;
- head = resolve_refdup("HEAD", RESOLVE_REF_READING, oid.hash, NULL);
+ head = resolve_refdup("HEAD", RESOLVE_REF_READING, &oid, NULL);
if (!head) {
warning(_("remote HEAD refers to nonexistent ref, "
"unable to checkout.\n"));
/* We need to be in the new work tree for the checkout */
setup_work_tree();
- lock_file = xcalloc(1, sizeof(struct lock_file));
- hold_locked_index(lock_file, LOCK_DIE_ON_ERROR);
+ hold_locked_index(&lock_file, LOCK_DIE_ON_ERROR);
memset(&opts, 0, sizeof opts);
opts.update = 1;
if (unpack_trees(1, &t, &opts) < 0)
die(_("unable to checkout working tree"));
- if (write_locked_index(&the_index, lock_file, COMMIT_LOCK))
+ if (write_locked_index(&the_index, &lock_file, COMMIT_LOCK))
die(_("unable to write new index file"));
err |= run_hook_le(NULL, "post-checkout", sha1_to_hex(null_sha1),
free(alternates);
}
+static int dir_exists(const char *path)
+{
+ struct stat sb;
+ return !stat(path, &sb);
+}
+
int cmd_clone(int argc, const char **argv, const char *prefix)
{
int is_bundle = 0, is_local;
- struct stat buf;
const char *repo_name, *repo, *work_tree, *git_dir;
char *path, *dir;
int dest_exists;
struct refspec *refspec;
const char *fetch_pattern;
+ fetch_if_missing = 0;
+
packet_trace_identity("clone");
argc = parse_options(argc, argv, prefix, builtin_clone_options,
builtin_clone_usage, 0);
dir = guess_dir_name(repo_name, is_bundle, option_bare);
strip_trailing_slashes(dir);
- dest_exists = !stat(dir, &buf);
+ dest_exists = dir_exists(dir);
if (dest_exists && !is_empty_dir(dir))
die(_("destination path '%s' already exists and is not "
"an empty directory."), dir);
work_tree = NULL;
else {
work_tree = getenv("GIT_WORK_TREE");
- if (work_tree && !stat(work_tree, &buf))
+ if (work_tree && dir_exists(work_tree))
die(_("working tree '%s' already exists."), work_tree);
}
if (safe_create_leading_directories_const(work_tree) < 0)
die_errno(_("could not create leading directories of '%s'"),
work_tree);
- if (!dest_exists && mkdir(work_tree, 0777))
+ if (dest_exists)
+ junk_work_tree_flags |= REMOVE_DIR_KEEP_TOPLEVEL;
+ else if (mkdir(work_tree, 0777))
die_errno(_("could not create work tree dir '%s'"),
work_tree);
junk_work_tree = work_tree;
set_git_work_tree(work_tree);
}
- junk_git_dir = real_git_dir ? real_git_dir : git_dir;
+ if (real_git_dir) {
+ if (dir_exists(real_git_dir))
+ junk_git_dir_flags |= REMOVE_DIR_KEEP_TOPLEVEL;
+ junk_git_dir = real_git_dir;
+ } else {
+ if (dest_exists)
+ junk_git_dir_flags |= REMOVE_DIR_KEEP_TOPLEVEL;
+ junk_git_dir = git_dir;
+ }
if (safe_create_leading_directories_const(git_dir) < 0)
die(_("could not create leading directories of '%s'"), git_dir);
warning(_("--shallow-since is ignored in local clones; use file:// instead."));
if (option_not.nr)
warning(_("--shallow-exclude is ignored in local clones; use file:// instead."));
+ if (filter_options.choice)
+ warning(_("--filter is ignored in local clones; use file:// instead."));
if (!access(mkpath("%s/shallow", path), F_OK)) {
if (option_local > 0)
warning(_("source repository is shallow, ignoring --local"));
warning(_("--local is ignored"));
transport->cloning = 1;
- if (!transport->get_refs_list || (!is_local && !transport->fetch))
- die(_("Don't know how to clone %s"), transport->url);
-
transport_set_option(transport, TRANS_OPT_KEEP, "yes");
if (option_depth)
transport_set_option(transport, TRANS_OPT_UPLOADPACK,
option_upload_pack);
- if (transport->smart_options && !deepen)
+ if (filter_options.choice) {
+ transport_set_option(transport, TRANS_OPT_LIST_OBJECTS_FILTER,
+ filter_options.filter_spec);
+ transport_set_option(transport, TRANS_OPT_FROM_PROMISOR, "1");
+ }
+
+ if (transport->smart_options && !deepen && !filter_options.choice)
transport->smart_options->check_self_contained_and_connected = 1;
refs = transport_get_remote_refs(transport);
write_refspec_config(src_ref_prefix, our_head_points_at,
remote_head_points_at, &branch_top);
+ if (filter_options.choice)
+ partial_clone_register("origin", &filter_options);
+
if (is_local)
clone_local(path, git_dir);
else if (refs && complete_refs_before_fetch)
transport_fetch_refs(transport, mapped_refs);
update_remote_refs(refs, mapped_refs, remote_head_points_at,
- branch_top.buf, reflog_msg.buf, transport, !is_local);
+ branch_top.buf, reflog_msg.buf, transport,
+ !is_local && !filter_options.choice);
update_head(our_head_points_at, remote_head, reflog_msg.buf);
}
junk_mode = JUNK_LEAVE_REPO;
+ fetch_if_missing = 1;
err = checkout(submodule_progress);
strbuf_release(&reflog_msg);
*/
#include "cache.h"
#include "config.h"
+#include "repository.h"
#include "refs.h"
#include "commit.h"
#include "builtin.h"
#include "argv-array.h"
#include "utf8.h"
#include "packfile.h"
+ #include "list-objects-filter-options.h"
static const char * const builtin_fetch_usage[] = {
N_("git fetch [<options>] [<repository> [<refspec>...]]"),
static int shown_url = 0;
static int refmap_alloc, refmap_nr;
static const char **refmap_array;
+ static struct list_objects_filter_options filter_options;
static int git_fetch_config(const char *k, const char *v, void *cb)
{
TRANSPORT_FAMILY_IPV4),
OPT_SET_INT('6', "ipv6", &family, N_("use IPv6 addresses only"),
TRANSPORT_FAMILY_IPV6),
+ OPT_PARSE_LIST_OBJECTS_FILTER(&filter_options),
OPT_END()
};
transaction = ref_transaction_begin(&err);
if (!transaction ||
ref_transaction_update(transaction, ref->name,
- ref->new_oid.hash,
- check_old ? ref->old_oid.hash : NULL,
+ &ref->new_oid,
+ check_old ? &ref->old_oid : NULL,
0, msg, &err))
goto fail;
}
}
-static int iterate_ref_map(void *cb_data, unsigned char sha1[20])
+static int iterate_ref_map(void *cb_data, struct object_id *oid)
{
struct ref **rm = cb_data;
struct ref *ref = *rm;
if (!ref)
return -1; /* end of the list */
*rm = ref->next;
- hashcpy(sha1, ref->old_oid.hash);
+ oidcpy(oid, &ref->old_oid);
return 0;
}
set_option(transport, TRANS_OPT_DEEPEN_RELATIVE, "yes");
if (update_shallow)
set_option(transport, TRANS_OPT_UPDATE_SHALLOW, "yes");
+ if (filter_options.choice) {
+ set_option(transport, TRANS_OPT_LIST_OBJECTS_FILTER,
+ filter_options.filter_spec);
+ set_option(transport, TRANS_OPT_FROM_PROMISOR, "1");
+ }
return transport;
}
tags = TAGS_UNSET;
}
- if (!transport->get_refs_list || !transport->fetch)
- die(_("Don't know how to fetch from %s"), transport->url);
-
/* if not appending, truncate FETCH_HEAD */
if (!append && !dry_run) {
retcode = truncate_fetch_head();
return result;
}
+ /*
+ * Fetching from the promisor remote should use the given filter-spec
+ * or inherit the default filter-spec from the config.
+ */
+ static inline void fetch_one_setup_partial(struct remote *remote)
+ {
+ /*
+ * Explicit --no-filter argument overrides everything, regardless
+ * of any prior partial clones and fetches.
+ */
+ if (filter_options.no_filter)
+ return;
+
+ /*
+ * If no prior partial clone/fetch and the current fetch DID NOT
+ * request a partial-fetch, do a normal fetch.
+ */
+ if (!repository_format_partial_clone && !filter_options.choice)
+ return;
+
+ /*
+ * If this is the FIRST partial-fetch request, we enable partial
+ * on this repo and remember the given filter-spec as the default
+ * for subsequent fetches to this remote.
+ */
+ if (!repository_format_partial_clone && filter_options.choice) {
+ partial_clone_register(remote->name, &filter_options);
+ return;
+ }
+
+ /*
+ * We are currently limited to only ONE promisor remote and only
+ * allow partial-fetches from the promisor remote.
+ */
+ if (strcmp(remote->name, repository_format_partial_clone)) {
+ if (filter_options.choice)
+ die(_("--filter can only be used with the remote configured in core.partialClone"));
+ return;
+ }
+
+ /*
+ * Do a partial-fetch from the promisor remote using either the
+ * explicitly given filter-spec or inherit the filter-spec from
+ * the config.
+ */
+ if (!filter_options.choice)
+ partial_clone_get_default_filter_spec(&filter_options);
+ return;
+ }
+
static int fetch_one(struct remote *remote, int argc, const char **argv)
{
static const char **refs = NULL;
{
int i;
struct string_list list = STRING_LIST_INIT_DUP;
- struct remote *remote;
+ struct remote *remote = NULL;
int result = 0;
struct argv_array argv_gc_auto = ARGV_ARRAY_INIT;
packet_trace_identity("fetch");
+ fetch_if_missing = 0;
+
/* Record the command line for the reflog */
strbuf_addstr(&default_rla, "fetch");
for (i = 1; i < argc; i++)
if (depth || deepen_since || deepen_not.nr)
deepen = 1;
+ if (filter_options.choice && !repository_format_partial_clone)
+ die("--filter can only be used when extensions.partialClone is set");
+
if (all) {
if (argc == 1)
die(_("fetch --all does not take a repository argument"));
else if (argc > 1)
die(_("fetch --all does not make sense with refspecs"));
(void) for_each_remote(get_one_remote_for_fetch, &list);
- result = fetch_multiple(&list);
} else if (argc == 0) {
/* No arguments -- use default remote */
remote = remote_get(NULL);
- result = fetch_one(remote, argc, argv);
} else if (multiple) {
/* All arguments are assumed to be remotes or groups */
for (i = 0; i < argc; i++)
if (!add_remote_or_group(argv[i], &list))
die(_("No such remote or remote group: %s"), argv[i]);
- result = fetch_multiple(&list);
} else {
/* Single remote or group */
(void) add_remote_or_group(argv[0], &list);
/* More than one remote */
if (argc > 1)
die(_("Fetching a group and specifying refspecs does not make sense"));
- result = fetch_multiple(&list);
} else {
/* Zero or one remotes */
remote = remote_get(argv[0]);
- result = fetch_one(remote, argc-1, argv+1);
+ argc--;
+ argv++;
}
}
+ if (remote) {
+ if (filter_options.choice || repository_format_partial_clone)
+ fetch_one_setup_partial(remote);
+ result = fetch_one(remote, argc, argv);
+ } else {
+ if (filter_options.choice)
+ die(_("--filter can only be used with the remote configured in core.partialClone"));
+ /* TODO should this also die if we have a previous partial-clone? */
+ result = fetch_multiple(&list);
+ }
+
if (!result && (recurse_submodules != RECURSE_SUBMODULES_OFF)) {
struct argv_array options = ARGV_ARRAY_INIT;
add_options_to_argv(&options);
- result = fetch_populated_submodules(&options,
+ result = fetch_populated_submodules(the_repository,
+ &options,
submodule_prefix,
recurse_submodules,
recurse_submodules_default,
}
static int show_object_fast(
- const unsigned char *sha1,
+ const struct object_id *oid,
enum object_type type,
int exclude,
uint32_t name_hash,
struct packed_git *found_pack,
off_t found_offset)
{
- fprintf(stdout, "%s\n", sha1_to_hex(sha1));
+ fprintf(stdout, "%s\n", oid_to_hex(oid));
return 1;
}
if (revs.bisect)
bisect_list = 1;
- if (DIFF_OPT_TST(&revs.diffopt, QUICK))
+ if (revs.diffopt.flags.quick)
info.flags |= REV_LIST_QUIET;
for (i = 1 ; i < argc; i++) {
const char *arg = argv[i];
continue;
}
if (!strcmp(arg, ("--no-" CL_ARG__FILTER))) {
- list_objects_filter_release(&filter_options);
+ list_objects_filter_set_no_filter(&filter_options);
continue;
}
if (!strcmp(arg, "--filter-print-omitted")) {
if (bisect_list) {
int reaches = reaches, all = all;
- revs.commits = find_bisection(revs.commits, &reaches, &all,
- bisect_find_all);
+ find_bisection(&revs.commits, &reaches, &all, bisect_find_all);
if (bisect_show_vars)
return show_bisect_vars(&info, reaches, all);
#include "hash.h"
#include "path.h"
#include "sha1-array.h"
+#include "repository.h"
#ifndef platform_SHA_CTX
/*
unsigned char hash[GIT_MAX_RAWSZ];
};
+#define the_hash_algo the_repository->hash_algo
+
#if defined(DT_UNKNOWN) && !defined(NO_D_TYPE_IN_DIRENT)
#define DTYPE(de) ((de)->d_type)
#else
#define CE_ADDED (1 << 19)
#define CE_HASHED (1 << 20)
+#define CE_FSMONITOR_VALID (1 << 21)
#define CE_WT_REMOVE (1 << 22) /* remove in work directory */
#define CE_CONFLICTED (1 << 23)
#define CACHE_TREE_CHANGED (1 << 5)
#define SPLIT_INDEX_ORDERED (1 << 6)
#define UNTRACKED_CHANGED (1 << 7)
+#define FSMONITOR_CHANGED (1 << 8)
struct split_index;
struct untracked_cache;
struct hashmap dir_hash;
unsigned char sha1[20];
struct untracked_cache *untracked;
+ uint64_t fsmonitor_last_update;
+ struct ewah_bitmap *fsmonitor_dirty;
};
extern struct index_state the_index;
#define GIT_QUARANTINE_ENVIRONMENT "GIT_QUARANTINE_PATH"
#define GIT_OPTIONAL_LOCKS_ENVIRONMENT "GIT_OPTIONAL_LOCKS"
+/*
+ * Environment variable used in handshaking the wire protocol.
+ * Contains a colon ':' separated list of keys with optional values
+ * 'key[=value]'. Presence of unknown keys and values must be
+ * ignored.
+ */
+#define GIT_PROTOCOL_ENVIRONMENT "GIT_PROTOCOL"
+/* HTTP header used to handshake the wire protocol */
+#define GIT_PROTOCOL_HEADER "Git-Protocol"
+
/*
* This environment variable is expected to contain a boolean indicating
* whether we should or should not treat:
extern int read_index_from(struct index_state *, const char *path);
extern int is_index_unborn(struct index_state *);
extern int read_index_unmerged(struct index_state *);
+
+/* For use with `write_locked_index()`. */
#define COMMIT_LOCK (1 << 0)
-#define CLOSE_LOCK (1 << 1)
+
+/*
+ * Write the index while holding an already-taken lock. Close the lock,
+ * and if `COMMIT_LOCK` is given, commit it.
+ *
+ * Unless a split index is in use, write the index into the lockfile.
+ *
+ * With a split index, write the shared index to a temporary file,
+ * adjust its permissions and rename it into place, then write the
+ * split index to the lockfile. If the temporary file for the shared
+ * index cannot be created, fall back to the behavior described in
+ * the previous paragraph.
+ *
+ * With `COMMIT_LOCK`, the lock is always committed or rolled back.
+ * Without it, the lock is closed, but neither committed nor rolled
+ * back.
+ */
extern int write_locked_index(struct index_state *, struct lock_file *lock, unsigned flags);
+
extern int discard_index(struct index_state *);
extern void move_index_extensions(struct index_state *dst, struct index_state *src);
extern int unmerged_index(const struct index_state *);
+
+/**
+ * Returns 1 if the index differs from HEAD, 0 otherwise. When on an unborn
+ * branch, returns 1 if there are entries in the index, 0 otherwise. If a
+ * strbuf is provided, the space-separated list of files that differ will be
+ * appended to it.
+ */
+extern int index_has_changes(struct strbuf *sb);
+
extern int verify_path(const char *path);
extern int strcmp_offset(const char *s1, const char *s2, size_t *first_change);
extern int index_dir_exists(struct index_state *istate, const char *name, int namelen);
#define CE_MATCH_IGNORE_MISSING 0x08
/* enable stat refresh */
#define CE_MATCH_REFRESH 0x10
-extern int ie_match_stat(const struct index_state *, const struct cache_entry *, struct stat *, unsigned int);
-extern int ie_modified(const struct index_state *, const struct cache_entry *, struct stat *, unsigned int);
+/* don't refresh_fsmonitor state or do stat comparison even if CE_FSMONITOR_VALID is true */
+#define CE_MATCH_IGNORE_FSMONITOR 0x20
+extern int ie_match_stat(struct index_state *, const struct cache_entry *, struct stat *, unsigned int);
+extern int ie_modified(struct index_state *, const struct cache_entry *, struct stat *, unsigned int);
#define HASH_WRITE_OBJECT 1
#define HASH_FORMAT_CHECK 2
+#define HASH_RENORMALIZE 4
extern int index_fd(struct object_id *oid, int fd, struct stat *st, enum object_type type, const char *path, unsigned flags);
extern int index_path(struct object_id *oid, const char *path, struct stat *st, unsigned flags);
extern int refresh_index(struct index_state *, unsigned int flags, const struct pathspec *pathspec, char *seen, const char *header_msg);
extern struct cache_entry *refresh_cache_entry(struct cache_entry *, unsigned int);
+/*
+ * Opportunistically update the index but do not complain if we can't.
+ * The lockfile is always committed or rolled back.
+ */
extern void update_index_if_able(struct index_state *, struct lock_file *);
extern int hold_locked_index(struct lock_file *, int);
extern void set_alternate_index_output(const char *);
extern int verify_index_checksum;
+extern int verify_ce_order;
/* Environment bits from configuration mechanism */
extern int trust_executable_bit;
extern int precomposed_unicode;
extern int protect_hfs;
extern int protect_ntfs;
+extern const char *core_fsmonitor;
/*
* Include broken refs in all ref iterations, which will
#define GIT_REPO_VERSION_READ 1
extern int repository_format_precious_objects;
extern char *repository_format_partial_clone;
+ extern const char *core_partial_clone_filter_default;
struct repository_format {
int version;
int precious_objects;
char *partial_clone; /* value of extensions.partialclone */
int is_bare;
+ int hash_algo;
char *work_tree;
struct string_list unknown_extensions;
};
static inline int is_empty_blob_sha1(const unsigned char *sha1)
{
- return !hashcmp(sha1, EMPTY_BLOB_SHA1_BIN);
+ return !hashcmp(sha1, the_hash_algo->empty_blob->hash);
}
static inline int is_empty_blob_oid(const struct object_id *oid)
{
- return !hashcmp(oid->hash, EMPTY_BLOB_SHA1_BIN);
+ return !oidcmp(oid, the_hash_algo->empty_blob);
}
static inline int is_empty_tree_sha1(const unsigned char *sha1)
{
- return !hashcmp(sha1, EMPTY_TREE_SHA1_BIN);
+ return !hashcmp(sha1, the_hash_algo->empty_tree->hash);
}
static inline int is_empty_tree_oid(const struct object_id *oid)
{
- return !hashcmp(oid->hash, EMPTY_TREE_SHA1_BIN);
+ return !oidcmp(oid, the_hash_algo->empty_tree);
}
/* set default permissions by passing mode arguments to open(2) */
extern int get_sha1_hex(const char *hex, unsigned char *sha1);
extern int get_oid_hex(const char *hex, struct object_id *sha1);
+/*
+ * Read `len` pairs of hexadecimal digits from `hex` and write the
+ * values to `binary` as `len` bytes. Return 0 on success, or -1 if
+ * the input does not consist of hex digits.
+ */
+extern int hex_to_bytes(unsigned char *binary, const char *hex, size_t len);
+
/*
* Convert a binary sha1 to its hex equivalent. The `_r` variant is reentrant,
* and writes the NUL-terminated output to the buffer `out`, which must be at
extern const char *ident_default_email(void);
extern const char *git_editor(void);
extern const char *git_pager(int stdout_is_tty);
+extern int is_terminal_dumb(void);
extern int git_ident_config(const char *, const char *, void *);
extern void reset_ident_date(void);
*/
void safe_create_dir(const char *dir, int share);
+/*
+ * Should we print an ellipsis after an abbreviated SHA-1 value
+ * when doing diff-raw output or indicating a detached HEAD?
+ */
+extern int print_sha1_ellipsis(void);
+
#endif /* CACHE_H */
return 0;
}
+int git_config_expiry_date(timestamp_t *timestamp, const char *var, const char *value)
+{
+ if (!value)
+ return config_error_nonbool(var);
+ if (parse_expiry_date(value, timestamp))
+ return error(_("'%s' for '%s' is not a valid timestamp"),
+ value, var);
+ return 0;
+}
+
static int git_default_core_config(const char *var, const char *value)
{
/* This needs a better name */
return 0;
}
+ if (!strcmp(var, "core.partialclonefilter")) {
+ return git_config_string(&core_partial_clone_filter_default,
+ var, value);
+ }
+
/* Add other config variables here and to Documentation/config.txt. */
return 0;
}
return -1; /* default value */
}
+int git_config_get_fsmonitor(void)
+{
+ if (git_config_get_pathname("core.fsmonitor", &core_fsmonitor))
+ core_fsmonitor = getenv("GIT_FSMONITOR_TEST");
+
+ if (core_fsmonitor && !*core_fsmonitor)
+ core_fsmonitor = NULL;
+
+ if (core_fsmonitor)
+ return 1;
+
+ return 0;
+}
+
NORETURN
void git_die_config_linenr(const char *key, const char *filename, int linenr)
{
struct strbuf sb = store_create_section(key);
ssize_t ret;
- ret = write_in_full(fd, sb.buf, sb.len) == sb.len;
+ ret = write_in_full(fd, sb.buf, sb.len);
strbuf_release(&sb);
return ret;
{
int ret = 0, remove = 0;
char *filename_buf = NULL;
- struct lock_file *lock;
+ struct lock_file lock = LOCK_INIT;
int out_fd;
char buf[1024];
FILE *config_file = NULL;
if (!config_filename)
config_filename = filename_buf = git_pathdup("config");
- lock = xcalloc(1, sizeof(struct lock_file));
- out_fd = hold_lock_file_for_update(lock, config_filename, 0);
+ out_fd = hold_lock_file_for_update(&lock, config_filename, 0);
if (out_fd < 0) {
ret = error("could not lock config file %s", config_filename);
goto out;
goto out;
}
- if (chmod(get_lock_file_path(lock), st.st_mode & 07777) < 0) {
+ if (chmod(get_lock_file_path(&lock), st.st_mode & 07777) < 0) {
ret = error_errno("chmod on %s failed",
- get_lock_file_path(lock));
+ get_lock_file_path(&lock));
goto out;
}
* multiple [branch "$name"] sections.
*/
if (copystr.len > 0) {
- if (write_in_full(out_fd, copystr.buf, copystr.len) != copystr.len) {
- ret = write_error(get_lock_file_path(lock));
+ if (write_in_full(out_fd, copystr.buf, copystr.len) < 0) {
+ ret = write_error(get_lock_file_path(&lock));
goto out;
}
strbuf_reset(©str);
store.baselen = strlen(new_name);
if (!copy) {
if (write_section(out_fd, new_name) < 0) {
- ret = write_error(get_lock_file_path(lock));
+ ret = write_error(get_lock_file_path(&lock));
goto out;
}
/*
}
if (write_in_full(out_fd, output, length) < 0) {
- ret = write_error(get_lock_file_path(lock));
+ ret = write_error(get_lock_file_path(&lock));
goto out;
}
}
* logic in the loop above.
*/
if (copystr.len > 0) {
- if (write_in_full(out_fd, copystr.buf, copystr.len) != copystr.len) {
- ret = write_error(get_lock_file_path(lock));
+ if (write_in_full(out_fd, copystr.buf, copystr.len) < 0) {
+ ret = write_error(get_lock_file_path(&lock));
goto out;
}
strbuf_reset(©str);
fclose(config_file);
config_file = NULL;
commit_and_out:
- if (commit_lock_file(lock) < 0)
+ if (commit_lock_file(&lock) < 0)
ret = error_errno("could not write config file %s",
config_filename);
out:
if (config_file)
fclose(config_file);
- rollback_lock_file(lock);
+ rollback_lock_file(&lock);
out_no_rollback:
free(filename_buf);
return ret;
*
* Returns 0 if everything is connected, non-zero otherwise.
*/
-int check_connected(sha1_iterate_fn fn, void *cb_data,
+int check_connected(oid_iterate_fn fn, void *cb_data,
struct check_connected_options *opt)
{
struct child_process rev_list = CHILD_PROCESS_INIT;
struct check_connected_options defaults = CHECK_CONNECTED_INIT;
- char commit[41];
- unsigned char sha1[20];
+ char commit[GIT_MAX_HEXSZ + 1];
+ struct object_id oid;
int err = 0;
struct packed_git *new_pack = NULL;
struct transport *transport;
opt = &defaults;
transport = opt->transport;
- if (fn(cb_data, sha1)) {
+ if (fn(cb_data, &oid)) {
if (opt->err_fd)
close(opt->err_fd);
return err;
argv_array_push(&rev_list.args,"rev-list");
argv_array_push(&rev_list.args, "--objects");
argv_array_push(&rev_list.args, "--stdin");
+ if (repository_format_partial_clone)
+ argv_array_push(&rev_list.args, "--exclude-promisor-objects");
argv_array_push(&rev_list.args, "--not");
argv_array_push(&rev_list.args, "--all");
argv_array_push(&rev_list.args, "--quiet");
sigchain_push(SIGPIPE, SIG_IGN);
- commit[40] = '\n';
+ commit[GIT_SHA1_HEXSZ] = '\n';
do {
/*
* If index-pack already checked that:
* are sure the ref is good and not sending it to
* rev-list for verification.
*/
- if (new_pack && find_pack_entry_one(sha1, new_pack))
+ if (new_pack && find_pack_entry_one(oid.hash, new_pack))
continue;
- memcpy(commit, sha1_to_hex(sha1), 40);
- if (write_in_full(rev_list.in, commit, 41) < 0) {
+ memcpy(commit, oid_to_hex(&oid), GIT_SHA1_HEXSZ);
+ if (write_in_full(rev_list.in, commit, GIT_SHA1_HEXSZ + 1) < 0) {
if (errno != EPIPE && errno != EINVAL)
error_errno(_("failed write to rev-list"));
err = -1;
break;
}
- } while (!fn(cb_data, sha1));
+ } while (!fn(cb_data, &oid));
if (close(rev_list.in))
err = error_errno(_("failed to close rev-list's stdin"));
int ref_paranoia = -1;
int repository_format_precious_objects;
char *repository_format_partial_clone;
+ const char *core_partial_clone_filter_default;
const char *git_commit_encoding;
const char *git_log_output_encoding;
const char *apply_default_whitespace;
#define PROTECT_NTFS_DEFAULT 0
#endif
int protect_ntfs = PROTECT_NTFS_DEFAULT;
+const char *core_fsmonitor;
/*
* The character that begins a commented line in user-editable file
{
return git_env_bool(GIT_OPTIONAL_LOCKS_ENVIRONMENT, 1);
}
+
+int print_sha1_ellipsis(void)
+{
+ /*
+ * Determine if the calling environment contains the variable
+ * GIT_PRINT_SHA1_ELLIPSIS set to "yes".
+ */
+ static int cached_result = -1; /* unknown */
+
+ if (cached_result < 0) {
+ const char *v = getenv("GIT_PRINT_SHA1_ELLIPSIS");
+ cached_result = (v && !strcasecmp(v, "yes"));
+ }
+ return cached_result;
+}
static int fetch_fsck_objects = -1;
static int transfer_fsck_objects = -1;
static int agent_supported;
+ static int server_supports_filtering;
static struct lock_file shallow_lock;
static const char *alternate_shallow_file;
if (deepen_not_ok) strbuf_addstr(&c, " deepen-not");
if (agent_supported) strbuf_addf(&c, " agent=%s",
git_user_agent_sanitized());
+ if (args->filter_options.choice)
+ strbuf_addstr(&c, " filter");
packet_buf_write(&req_buf, "want %s%s\n", remote_hex, c.buf);
strbuf_release(&c);
} else
packet_buf_write(&req_buf, "deepen-not %s", s->string);
}
}
+ if (server_supports_filtering && args->filter_options.choice)
+ packet_buf_write(&req_buf, "filter %s",
+ args->filter_options.filter_spec);
packet_buf_flush(&req_buf);
state_len = req_buf.len;
{
struct ref *ref;
int retval;
+ int old_save_commit_buffer = save_commit_buffer;
timestamp_t cutoff = 0;
save_commit_buffer = 0;
for (ref = *refs; ref; ref = ref->next) {
struct object *o;
- if (!has_object_file(&ref->old_oid))
+ if (!has_object_file_with_flags(&ref->old_oid,
+ OBJECT_INFO_QUICK))
continue;
o = parse_object(&ref->old_oid);
print_verbose(args, _("already have %s (%s)"), oid_to_hex(remote),
ref->name);
}
+
+ save_commit_buffer = old_save_commit_buffer;
+
return retval;
}
else
prefer_ofs_delta = 0;
+ if (server_supports("filter")) {
+ server_supports_filtering = 1;
+ print_verbose(args, _("Server supports filter"));
+ } else if (args->filter_options.choice) {
+ warning("filtering not recognized by server, ignoring");
+ }
+
if ((agent_feature = server_feature_value("agent", &agent_len))) {
agent_supported = 1;
if (agent_len)
test_cmp fetch.expected fetch.actual
'
-setup_ssh_wrapper () {
- test_expect_success 'setup ssh wrapper' '
- cp "$GIT_BUILD_DIR/t/helper/test-fake-ssh$X" \
- "$TRASH_DIRECTORY/ssh-wrapper$X" &&
- GIT_SSH="$TRASH_DIRECTORY/ssh-wrapper$X" &&
- export GIT_SSH &&
- export TRASH_DIRECTORY &&
- >"$TRASH_DIRECTORY"/ssh-output
- '
-}
+test_expect_success 'set up ssh wrapper' '
+ cp "$GIT_BUILD_DIR/t/helper/test-fake-ssh$X" \
+ "$TRASH_DIRECTORY/ssh$X" &&
+ GIT_SSH="$TRASH_DIRECTORY/ssh$X" &&
+ export GIT_SSH &&
+ export TRASH_DIRECTORY &&
+ >"$TRASH_DIRECTORY"/ssh-output
+'
copy_ssh_wrapper_as () {
- cp "$TRASH_DIRECTORY/ssh-wrapper$X" "${1%$X}$X" &&
+ rm -f "${1%$X}$X" &&
+ cp "$TRASH_DIRECTORY/ssh$X" "${1%$X}$X" &&
+ test_when_finished "rm $(git rev-parse --sq-quote "${1%$X}$X")" &&
GIT_SSH="${1%$X}$X" &&
- export GIT_SSH
+ test_when_finished "GIT_SSH=\"\$TRASH_DIRECTORY/ssh\$X\""
}
expect_ssh () {
(cd "$TRASH_DIRECTORY" && test_cmp ssh-expect ssh-output)
}
-setup_ssh_wrapper
-
test_expect_success 'clone myhost:src uses ssh' '
git clone myhost:src ssh-clone &&
expect_ssh myhost src
expect_ssh "-p 123" myhost src
'
-test_expect_success 'uplink is not treated as putty' '
+test_expect_success 'OpenSSH variant passes -4' '
+ git clone -4 "[myhost:123]:src" ssh-ipv4-clone &&
+ expect_ssh "-4 -p 123" myhost src
+'
+
+test_expect_success 'variant can be overridden' '
+ copy_ssh_wrapper_as "$TRASH_DIRECTORY/putty" &&
+ git -c ssh.variant=putty clone -4 "[myhost:123]:src" ssh-putty-clone &&
+ expect_ssh "-4 -P 123" myhost src
+'
+
+test_expect_success 'variant=auto picks based on basename' '
+ copy_ssh_wrapper_as "$TRASH_DIRECTORY/plink" &&
+ git -c ssh.variant=auto clone -4 "[myhost:123]:src" ssh-auto-clone &&
+ expect_ssh "-4 -P 123" myhost src
+'
+
+test_expect_success 'simple does not support -4/-6' '
+ copy_ssh_wrapper_as "$TRASH_DIRECTORY/simple" &&
+ test_must_fail git clone -4 "myhost:src" ssh-4-clone-simple
+'
+
+test_expect_success 'simple does not support port' '
+ copy_ssh_wrapper_as "$TRASH_DIRECTORY/simple" &&
+ test_must_fail git clone "[myhost:123]:src" ssh-bracket-clone-simple
+'
+
+test_expect_success 'uplink is treated as simple' '
copy_ssh_wrapper_as "$TRASH_DIRECTORY/uplink" &&
- git clone "[myhost:123]:src" ssh-bracket-clone-uplink &&
+ test_must_fail git clone "[myhost:123]:src" ssh-bracket-clone-uplink &&
+ git clone "myhost:src" ssh-clone-uplink &&
+ expect_ssh myhost src
+'
+
+test_expect_success 'OpenSSH-like uplink is treated as ssh' '
+ write_script "$TRASH_DIRECTORY/uplink" <<-EOF &&
+ if test "\$1" = "-G"
+ then
+ exit 0
+ fi &&
+ exec "\$TRASH_DIRECTORY/ssh$X" "\$@"
+ EOF
+ test_when_finished "rm -f \"\$TRASH_DIRECTORY/uplink\"" &&
+ GIT_SSH="$TRASH_DIRECTORY/uplink" &&
+ test_when_finished "GIT_SSH=\"\$TRASH_DIRECTORY/ssh\$X\"" &&
+ git clone "[myhost:123]:src" ssh-bracket-clone-sshlike-uplink &&
expect_ssh "-p 123" myhost src
'
'
test_expect_success 'GIT_SSH_VARIANT overrides plink detection to plink' '
+ copy_ssh_wrapper_as "$TRASH_DIRECTORY/plink" &&
GIT_SSH_VARIANT=plink \
git clone "[myhost:123]:src" ssh-bracket-clone-variant-3 &&
expect_ssh "-P 123" myhost src
'
test_expect_success 'GIT_SSH_VARIANT overrides plink to tortoiseplink' '
+ copy_ssh_wrapper_as "$TRASH_DIRECTORY/plink" &&
GIT_SSH_VARIANT=tortoiseplink \
git clone "[myhost:123]:src" ssh-bracket-clone-variant-4 &&
expect_ssh "-batch -P 123" myhost src
git clone "[myhost:123]:src" sq-failure
'
-# Reset the GIT_SSH environment variable for clone tests.
-setup_ssh_wrapper
-
counter=0
# $1 url
# $2 none|host
git -C replay.git index-pack -v --stdin <tmp.pack
'
+hex2oct () {
+ perl -ne 'printf "\\%03o", hex for /../g'
+}
+
+test_expect_success 'clone on case-insensitive fs' '
+ git init icasefs &&
+ (
+		cd icasefs &&
+ o=$(git hash-object -w --stdin </dev/null | hex2oct) &&
+ t=$(printf "100644 X\0${o}100644 x\0${o}" |
+ git hash-object -w -t tree --stdin) &&
+ c=$(git commit-tree -m bogus $t) &&
+ git update-ref refs/heads/bogus $c &&
+ git clone -b bogus . bogus
+ )
+'
+
+ partial_clone () {
+ SERVER="$1" &&
+ URL="$2" &&
+
+ rm -rf "$SERVER" client &&
+ test_create_repo "$SERVER" &&
+ test_commit -C "$SERVER" one &&
+ HASH1=$(git hash-object "$SERVER/one.t") &&
+ git -C "$SERVER" revert HEAD &&
+ test_commit -C "$SERVER" two &&
+ HASH2=$(git hash-object "$SERVER/two.t") &&
+ test_config -C "$SERVER" uploadpack.allowfilter 1 &&
+ test_config -C "$SERVER" uploadpack.allowanysha1inwant 1 &&
+
+ git clone --filter=blob:limit=0 "$URL" client &&
+
+ git -C client fsck &&
+
+ # Ensure that unneeded blobs are not inadvertently fetched.
+ test_config -C client extensions.partialclone "not a remote" &&
+ test_must_fail git -C client cat-file -e "$HASH1" &&
+
+ # But this blob was fetched, because clone performs an initial checkout
+ git -C client cat-file -e "$HASH2"
+ }
+
+ test_expect_success 'partial clone' '
+ partial_clone server "file://$(pwd)/server"
+ '
+
+ test_expect_success 'partial clone: warn if server does not support object filtering' '
+ rm -rf server client &&
+ test_create_repo server &&
+ test_commit -C server one &&
+
+ git clone --filter=blob:limit=0 "file://$(pwd)/server" client 2> err &&
+
+ test_i18ngrep "filtering not recognized by server" err
+ '
+
+ test_expect_success 'batch missing blob request during checkout' '
+ rm -rf server client &&
+
+ test_create_repo server &&
+ echo a >server/a &&
+ echo b >server/b &&
+ git -C server add a b &&
+
+ git -C server commit -m x &&
+ echo aa >server/a &&
+ echo bb >server/b &&
+ git -C server add a b &&
+ git -C server commit -m x &&
+
+ test_config -C server uploadpack.allowfilter 1 &&
+ test_config -C server uploadpack.allowanysha1inwant 1 &&
+
+ git clone --filter=blob:limit=0 "file://$(pwd)/server" client &&
+
+ # Ensure that there is only one negotiation by checking that there is
+	# only one "done" line sent. ("done" marks the end of negotiation.)
+ GIT_TRACE_PACKET="$(pwd)/trace" git -C client checkout HEAD^ &&
+ grep "git> done" trace >done_lines &&
+ test_line_count = 1 done_lines
+ '
+
+ test_expect_success 'batch missing blob request does not inadvertently try to fetch gitlinks' '
+ rm -rf server client &&
+
+ test_create_repo repo_for_submodule &&
+ test_commit -C repo_for_submodule x &&
+
+ test_create_repo server &&
+ echo a >server/a &&
+ echo b >server/b &&
+ git -C server add a b &&
+ git -C server commit -m x &&
+
+ echo aa >server/a &&
+ echo bb >server/b &&
+ # Also add a gitlink pointing to an arbitrary repository
+ git -C server submodule add "$(pwd)/repo_for_submodule" c &&
+ git -C server add a b c &&
+ git -C server commit -m x &&
+
+ test_config -C server uploadpack.allowfilter 1 &&
+ test_config -C server uploadpack.allowanysha1inwant 1 &&
+
+ # Make sure that it succeeds
+ git clone --filter=blob:limit=0 "file://$(pwd)/server" client
+ '
+
+ . "$TEST_DIRECTORY"/lib-httpd.sh
+ start_httpd
+
+ test_expect_success 'partial clone using HTTP' '
+ partial_clone "$HTTPD_DOCUMENT_ROOT_PATH/server" "$HTTPD_URL/smart/server"
+ '
+
+ stop_httpd
+
test_done
#include "sigchain.h"
#include "argv-array.h"
#include "refs.h"
+#include "transport-internal.h"
static int debug;
else
private = xstrdup(name);
if (private) {
- if (read_ref(private, posn->old_oid.hash) < 0)
+ if (read_ref(private, &posn->old_oid) < 0)
die("Could not read ref %s", private);
free(private);
}
if (process_connect(transport, 0)) {
do_take_over(transport);
- return transport->fetch(transport, nr_heads, to_fetch);
+ return transport->vtable->fetch(transport, nr_heads, to_fetch);
}
count = 0;
if (data->transport_options.update_shallow)
set_helper_option(transport, "update-shallow", "true");
+ if (data->transport_options.filter_options.choice)
+ set_helper_option(
+ transport, "filter",
+ data->transport_options.filter_options.filter_spec);
+
if (data->fetch)
return fetch_with_fetch(transport, nr_heads, to_fetch);
private = apply_refspecs(data->refspecs, data->refspec_nr, ref->name);
if (!private)
continue;
- update_ref("update by helper", private, ref->new_oid.hash, NULL, 0, 0);
+ update_ref("update by helper", private, &ref->new_oid, NULL,
+ 0, 0);
free(private);
}
strbuf_release(&buf);
struct strbuf cas = STRBUF_INIT;
strbuf_addf(&cas, "%s:%s",
ref->name, oid_to_hex(&ref->old_oid_expect));
- string_list_append(&cas_options, strbuf_detach(&cas, NULL));
+ string_list_append_nodup(&cas_options,
+ strbuf_detach(&cas, NULL));
}
}
if (buf.len == 0) {
strbuf_addch(&buf, '\n');
sendline(data, &buf);
strbuf_release(&buf);
+ string_list_clear(&cas_options, 0);
return push_update_refs_status(data, remote_refs, flags);
}
private = apply_refspecs(data->refspecs, data->refspec_nr, ref->name);
if (private && !get_oid(private, &oid)) {
strbuf_addf(&buf, "^%s", private);
- string_list_append(&revlist_args, strbuf_detach(&buf, NULL));
+ string_list_append_nodup(&revlist_args,
+ strbuf_detach(&buf, NULL));
oidcpy(&ref->old_oid, &oid);
}
free(private);
int flag;
/* Follow symbolic refs (mainly for HEAD). */
- name = resolve_ref_unsafe(
- ref->peer_ref->name,
- RESOLVE_REF_READING,
- oid.hash, &flag);
+ name = resolve_ref_unsafe(ref->peer_ref->name,
+ RESOLVE_REF_READING,
+ &oid, &flag);
if (!name || !(flag & REF_ISSYMREF))
name = ref->peer_ref->name;
if (process_connect(transport, 1)) {
do_take_over(transport);
- return transport->push_refs(transport, remote_refs, flags);
+ return transport->vtable->push_refs(transport, remote_refs, flags);
}
if (!remote_refs) {
if (process_connect(transport, for_push)) {
do_take_over(transport);
- return transport->get_refs_list(transport, for_push);
+ return transport->vtable->get_refs_list(transport, for_push);
}
if (data->push && for_push)
if (eon) {
if (has_attribute(eon + 1, "unchanged")) {
(*tail)->status |= REF_STATUS_UPTODATE;
- if (read_ref((*tail)->name,
- (*tail)->old_oid.hash) < 0)
+ if (read_ref((*tail)->name, &(*tail)->old_oid) < 0)
die(_("Could not read ref %s"),
(*tail)->name);
}
return ret;
}
+static struct transport_vtable vtable = {
+ set_helper_option,
+ get_refs_list,
+ fetch,
+ push_refs,
+ connect_helper,
+ release_helper
+};
+
int transport_helper_init(struct transport *transport, const char *name)
{
struct helper_data *data = xcalloc(1, sizeof(*data));
debug = 1;
transport->data = data;
- transport->set_option = set_helper_option;
- transport->get_refs_list = get_refs_list;
- transport->fetch = fetch;
- transport->push_refs = push_refs;
- transport->disconnect = release_helper;
- transport->connect = connect_helper;
+ transport->vtable = &vtable;
transport->smart_options = &(data->transport_options);
return 0;
}
#include "string-list.h"
#include "sha1-array.h"
#include "sigchain.h"
+#include "transport-internal.h"
static void set_upstreams(struct transport *transport, struct ref *refs,
int pretend)
} else if (!strcmp(name, TRANS_OPT_NO_DEPENDENTS)) {
opts->no_dependents = !!value;
return 0;
+ } else if (!strcmp(name, TRANS_OPT_LIST_OBJECTS_FILTER)) {
+ parse_list_objects_filter(&opts->filter_options, value);
+ return 0;
}
return 1;
}
args.update_shallow = data->options.update_shallow;
args.from_promisor = data->options.from_promisor;
args.no_dependents = data->options.no_dependents;
+ args.filter_options = data->options.filter_options;
if (!data->got_remote_heads) {
connect_setup(transport, 0);
if (ref->deletion) {
delete_ref(NULL, rs.dst, NULL, 0);
} else
- update_ref("update by push", rs.dst,
- ref->new_oid.hash, NULL, 0, 0);
+ update_ref("update by push", rs.dst, &ref->new_oid,
+ NULL, 0, 0);
free(rs.dst);
}
}
return 0;
}
+static struct transport_vtable taken_over_vtable = {
+ NULL,
+ get_refs_via_connect,
+ fetch_refs_via_pack,
+ git_transport_push,
+ NULL,
+ disconnect_git
+};
+
void transport_take_over(struct transport *transport,
struct child_process *child)
{
data->got_remote_heads = 0;
transport->data = data;
- transport->set_option = NULL;
- transport->get_refs_list = get_refs_via_connect;
- transport->fetch = fetch_refs_via_pack;
- transport->push = NULL;
- transport->push_refs = git_transport_push;
- transport->disconnect = disconnect_git;
+ transport->vtable = &taken_over_vtable;
transport->smart_options = &(data->options);
transport->cannot_reuse = 1;
die("transport '%s' not allowed", type);
}
+static struct transport_vtable bundle_vtable = {
+ NULL,
+ get_refs_from_bundle,
+ fetch_refs_from_bundle,
+ NULL,
+ NULL,
+ close_bundle
+};
+
+static struct transport_vtable builtin_smart_vtable = {
+ NULL,
+ get_refs_via_connect,
+ fetch_refs_via_pack,
+ git_transport_push,
+ connect_git,
+ disconnect_git
+};
+
struct transport *transport_get(struct remote *remote, const char *url)
{
const char *helper;
struct bundle_transport_data *data = xcalloc(1, sizeof(*data));
transport_check_allowed("file");
ret->data = data;
- ret->get_refs_list = get_refs_from_bundle;
- ret->fetch = fetch_refs_from_bundle;
- ret->disconnect = close_bundle;
+ ret->vtable = &bundle_vtable;
ret->smart_options = NULL;
} else if (!is_url(url)
|| starts_with(url, "file://")
*/
struct git_transport_data *data = xcalloc(1, sizeof(*data));
ret->data = data;
- ret->set_option = NULL;
- ret->get_refs_list = get_refs_via_connect;
- ret->fetch = fetch_refs_via_pack;
- ret->push_refs = git_transport_push;
- ret->connect = connect_git;
- ret->disconnect = disconnect_git;
+ ret->vtable = &builtin_smart_vtable;
ret->smart_options = &(data->options);
data->conn = NULL;
git_reports = set_git_option(transport->smart_options,
name, value);
- if (transport->set_option)
- protocol_reports = transport->set_option(transport, name,
- value);
+ if (transport->vtable->set_option)
+ protocol_reports = transport->vtable->set_option(transport,
+ name, value);
/* If either report is 0, report 0 (success). */
if (!git_reports || !protocol_reports)
*reject_reasons = 0;
transport_verify_remote_names(refspec_nr, refspec);
- if (transport->push) {
- /* Maybe FIXME. But no important transport uses this case. */
- if (flags & TRANSPORT_PUSH_SET_UPSTREAM)
- die("This transport does not support using --set-upstream");
-
- return transport->push(transport, refspec_nr, refspec, flags);
- } else if (transport->push_refs) {
+ if (transport->vtable->push_refs) {
struct ref *remote_refs;
struct ref *local_refs = get_local_heads();
int match_flags = MATCH_REFS_NONE;
if (check_push_refs(local_refs, refspec_nr, refspec) < 0)
return -1;
- remote_refs = transport->get_refs_list(transport, 1);
+ remote_refs = transport->vtable->get_refs_list(transport, 1);
if (flags & TRANSPORT_PUSH_ALL)
match_flags |= MATCH_REFS_ALL;
}
if (!(flags & TRANSPORT_RECURSE_SUBMODULES_ONLY))
- push_ret = transport->push_refs(transport, remote_refs, flags);
+ push_ret = transport->vtable->push_refs(transport, remote_refs, flags);
else
push_ret = 0;
err = push_had_errors(remote_refs);
const struct ref *transport_get_remote_refs(struct transport *transport)
{
if (!transport->got_remote_refs) {
- transport->remote_refs = transport->get_refs_list(transport, 0);
+ transport->remote_refs = transport->vtable->get_refs_list(transport, 0);
transport->got_remote_refs = 1;
}
heads[nr_heads++] = rm;
}
- rc = transport->fetch(transport, nr_heads, heads);
+ rc = transport->vtable->fetch(transport, nr_heads, heads);
free(heads);
return rc;
int transport_connect(struct transport *transport, const char *name,
const char *exec, int fd[2])
{
- if (transport->connect)
- return transport->connect(transport, name, exec, fd);
+ if (transport->vtable->connect)
+ return transport->vtable->connect(transport, name, exec, fd);
else
die("Operation not supported by protocol");
}
int transport_disconnect(struct transport *transport)
{
int ret = 0;
- if (transport->disconnect)
- ret = transport->disconnect(transport);
+ if (transport->vtable->disconnect)
+ ret = transport->vtable->disconnect(transport);
free(transport);
return ret;
}
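
The vtable stays an internal detail of transport.c; external callers keep using the public helpers. A minimal, hypothetical caller sketch (not part of the patch, error handling omitted) that exercises the same code paths as the hunks above:

----
#include "cache.h"
#include "remote.h"
#include "transport.h"

/*
 * Hypothetical example: list the refs advertised by a remote, much like
 * ls-remote does. transport_get_remote_refs() dispatches to
 * transport->vtable->get_refs_list() internally.
 */
static void list_remote_refs(struct remote *remote, const char *url)
{
	struct transport *t = transport_get(remote, url);
	const struct ref *ref;

	for (ref = transport_get_remote_refs(t); ref; ref = ref->next)
		printf("%s\t%s\n", oid_to_hex(&ref->old_oid), ref->name);

	transport_disconnect(t);
}
----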
#include "cache.h"
#include "run-command.h"
#include "remote.h"
+ #include "list-objects-filter-options.h"
struct string_list;
const char *uploadpack;
const char *receivepack;
struct push_cas_option *cas;
+ struct list_objects_filter_options filter_options;
};
enum transport_family {
};
struct transport {
+ const struct transport_vtable *vtable;
+
struct remote *remote;
const char *url;
void *data;
*/
const struct string_list *push_options;
- /**
- * Returns 0 if successful, positive if the option is not
- * recognized or is inapplicable, and negative if the option
- * is applicable but the value is invalid.
- **/
- int (*set_option)(struct transport *connection, const char *name,
- const char *value);
-
- /**
- * Returns a list of the remote side's refs. In order to allow
- * the transport to try to share connections, for_push is a
- * hint as to whether the ultimate operation is a push or a fetch.
- *
- * If the transport is able to determine the remote hash for
- * the ref without a huge amount of effort, it should store it
- * in the ref's old_sha1 field; otherwise it should be all 0.
- **/
- struct ref *(*get_refs_list)(struct transport *transport, int for_push);
-
- /**
- * Fetch the objects for the given refs. Note that this gets
- * an array, and should ignore the list structure.
- *
- * If the transport did not get hashes for refs in
- * get_refs_list(), it should set the old_sha1 fields in the
- * provided refs now.
- **/
- int (*fetch)(struct transport *transport, int refs_nr, struct ref **refs);
-
- /**
- * Push the objects and refs. Send the necessary objects, and
- * then, for any refs where peer_ref is set and
- * peer_ref->new_oid is different from old_oid, tell the
- * remote side to update each ref in the list from old_oid to
- * peer_ref->new_oid.
- *
- * Where possible, set the status for each ref appropriately.
- *
- * The transport must modify new_sha1 in the ref to the new
- * value if the remote accepted the change. Note that this
- * could be a different value from peer_ref->new_oid if the
- * process involved generating new commits.
- **/
- int (*push_refs)(struct transport *transport, struct ref *refs, int flags);
- int (*push)(struct transport *connection, int refspec_nr, const char **refspec, int flags);
- int (*connect)(struct transport *connection, const char *name,
- const char *executable, int fd[2]);
-
- /** get_refs_list(), fetch(), and push_refs() can keep
- * resources (such as a connection) reserved for further
- * use. disconnect() releases these resources.
- **/
- int (*disconnect)(struct transport *connection);
char *pack_lockfile;
signed verbose : 3;
/**
*/
#define TRANS_OPT_NO_DEPENDENTS "no-dependents"
+/* Filter objects for partial clone and fetch */
+#define TRANS_OPT_LIST_OBJECTS_FILTER "filter"
+
/**
* Returns 0 if the option was used, non-zero otherwise. Prints a
* message to stderr if the option is not used.
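
A hypothetical caller-side sketch of the new option (the real consumers in this series are the fetch and clone machinery, which are not shown here): the filter spec is passed through the existing transport_set_option() API, whose return convention is documented just above.

----
#include "cache.h"
#include "transport.h"

/*
 * Hypothetical example, not part of the patch: ask a transport to apply
 * a partial-clone filter before fetching. "blob:none" is one of the
 * specs parse_list_objects_filter() understands.
 */
static void request_blobless_fetch(struct transport *transport)
{
	if (transport_set_option(transport, TRANS_OPT_LIST_OBJECTS_FILTER,
				 "blob:none"))
		die("transport does not support --filter");
}
----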
#include "dir.h"
#include "submodule.h"
#include "submodule-config.h"
+#include "fsmonitor.h"
+ #include "fetch-object.h"
/*
* Error messages expected by scripts out of plumbing commands such as
load_gitmodules_file(index, &state);
enable_delayed_checkout(&state);
+ if (repository_format_partial_clone && o->update && !o->dry_run) {
+ /*
+ * Prefetch the objects that are to be checked out in the loop
+ * below.
+ */
+ struct oid_array to_fetch = OID_ARRAY_INIT;
+ int fetch_if_missing_store = fetch_if_missing;
+ fetch_if_missing = 0;
+ for (i = 0; i < index->cache_nr; i++) {
+ struct cache_entry *ce = index->cache[i];
+ if ((ce->ce_flags & CE_UPDATE) &&
+ !S_ISGITLINK(ce->ce_mode)) {
+ if (!has_object_file(&ce->oid))
+ oid_array_append(&to_fetch, &ce->oid);
+ }
+ }
+ if (to_fetch.nr)
+ fetch_objects(repository_format_partial_clone,
+ &to_fetch);
+ fetch_if_missing = fetch_if_missing_store;
+ }
for (i = 0; i < index->cache_nr; i++) {
struct cache_entry *ce = index->cache[i];
ce->ce_flags &= ~CE_SKIP_WORKTREE;
if (was_skip_worktree != ce_skip_worktree(ce)) {
ce->ce_flags |= CE_UPDATE_IN_BASE;
+ mark_fsmonitor_invalid(istate, ce);
istate->cache_changed |= CE_ENTRY_CHANGED;
}
int cnt = 0;
if (S_ISGITLINK(ce->ce_mode)) {
- unsigned char sha1[20];
- int sub_head = resolve_gitlink_ref(ce->name, "HEAD", sha1);
+ struct object_id oid;
+ int sub_head = resolve_gitlink_ref(ce->name, "HEAD", &oid);
/*
* If we are not going to update the submodule, then
* we don't care.
*/
- if (!sub_head && !hashcmp(sha1, ce->oid.hash))
+ if (!sub_head && !oidcmp(&oid, &ce->oid))
return 0;
- return verify_clean_submodule(sub_head ? NULL : sha1_to_hex(sha1),
+ return verify_clean_submodule(sub_head ? NULL : oid_to_hex(&oid),
ce, error_type, o);
}
ie_match_stat(o->src_index, old, &st, CE_MATCH_IGNORE_VALID|CE_MATCH_IGNORE_SKIP_WORKTREE))
update |= CE_UPDATE;
}
+ if (o->update && S_ISGITLINK(old->ce_mode) &&
+ should_update_submodules() && !verify_uptodate(old, o))
+ update |= CE_UPDATE;
add_entry(o, old, update, 0);
return 0;
}
#include "diff.h"
#include "revision.h"
#include "list-objects.h"
+ #include "list-objects-filter.h"
+ #include "list-objects-filter-options.h"
#include "run-command.h"
#include "connect.h"
#include "sigchain.h"
#include "parse-options.h"
#include "argv-array.h"
#include "prio-queue.h"
+#include "protocol.h"
+ #include "quote.h"
static const char * const upload_pack_usage[] = {
N_("git upload-pack [<options>] <dir>"),
static int stateless_rpc;
static const char *pack_objects_hook;
+static int filter_capability_requested;
+static int filter_advertise;
+static struct list_objects_filter_options filter_options;
+
static void reset_timeout(void)
{
alarm(timeout);
argv_array_push(&pack_objects.args, "--delta-base-offset");
if (use_include_tag)
argv_array_push(&pack_objects.args, "--include-tag");
+ if (filter_options.filter_spec) {
+ if (pack_objects.use_shell) {
+ struct strbuf buf = STRBUF_INIT;
+ sq_quote_buf(&buf, filter_options.filter_spec);
+ argv_array_pushf(&pack_objects.args, "--filter=%s", buf.buf);
+ strbuf_release(&buf);
+ } else {
+ argv_array_pushf(&pack_objects.args, "--filter=%s",
+ filter_options.filter_spec);
+ }
+ }
pack_objects.in = -1;
pack_objects.out = -1;
if (skip_prefix(line, "deepen-not ", &arg)) {
char *ref = NULL;
struct object_id oid;
- if (expand_ref(arg, strlen(arg), oid.hash, &ref) != 1)
+ if (expand_ref(arg, strlen(arg), &oid, &ref) != 1)
die("git upload-pack: ambiguous deepen-not: %s", line);
string_list_append(&deepen_not, ref);
free(ref);
deepen_rev_list = 1;
continue;
}
+ if (skip_prefix(line, "filter ", &arg)) {
+ if (!filter_capability_requested)
+ die("git upload-pack: filtering capability not negotiated");
+ parse_list_objects_filter(&filter_options, arg);
+ continue;
+ }
if (!skip_prefix(line, "want ", &arg) ||
get_oid_hex(arg, &oid_buf))
die("git upload-pack: protocol error, "
no_progress = 1;
if (parse_feature_request(features, "include-tag"))
use_include_tag = 1;
+ if (parse_feature_request(features, "filter"))
+ filter_capability_requested = 1;
o = parse_object(&oid_buf);
if (!o) {
struct strbuf symref_info = STRBUF_INIT;
format_symref_info(&symref_info, cb_data);
- packet_write_fmt(1, "%s %s%c%s%s%s%s%s agent=%s\n",
+ packet_write_fmt(1, "%s %s%c%s%s%s%s%s%s agent=%s\n",
oid_to_hex(oid), refname_nons,
0, capabilities,
(allow_unadvertised_object_request & ALLOW_TIP_SHA1) ?
" allow-tip-sha1-in-want" : "",
(allow_unadvertised_object_request & ALLOW_REACHABLE_SHA1) ?
" allow-reachable-sha1-in-want" : "",
stateless_rpc ? " no-done" : "",
symref_info.buf,
+ filter_advertise ? " filter" : "",
git_user_agent_sanitized());
strbuf_release(&symref_info);
} else {
packet_write_fmt(1, "%s %s\n", oid_to_hex(oid), refname_nons);
}
capabilities = NULL;
- if (!peel_ref(refname, peeled.hash))
+ if (!peel_ref(refname, &peeled))
packet_write_fmt(1, "%s %s^{}\n", oid_to_hex(&peeled), refname_nons);
return 0;
}
} else if (current_config_scope() != CONFIG_SCOPE_REPO) {
if (!strcmp("uploadpack.packobjectshook", var))
return git_config_string(&pack_objects_hook, var, value);
+ } else if (!strcmp("uploadpack.allowfilter", var)) {
+ filter_advertise = git_config_bool(var, value);
}
return parse_hide_refs_config(var, value, "uploadpack");
}
die("'%s' does not appear to be a git repository", dir);
git_config(upload_pack_config, NULL);
- upload_pack();
+
+ switch (determine_protocol_version_server()) {
+ case protocol_v1:
+ /*
+ * v1 is just the original protocol with a version string,
+ * so just fall through after writing the version string.
+ */
+ if (advertise_refs || !stateless_rpc)
+ packet_write_fmt(1, "version 1\n");
+
+ /* fallthrough */
+ case protocol_v0:
+ upload_pack();
+ break;
+ case protocol_unknown_version:
+ BUG("unknown protocol version");
+ }
+
return 0;
}
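
For context, a hypothetical sketch of what the client side sends once upload-pack advertises the filter capability (the real sender is fetch-pack, which is not part of this excerpt): the capability is echoed on the first want line, and the spec follows as its own pkt-line, matching the skip_prefix(line, "filter ", ...) parsing above.

----
#include "cache.h"
#include "pkt-line.h"

/*
 * Hypothetical example, not part of the patch: request a single tip with
 * a blob filter, as a capability-aware client might. The usual
 * have/done negotiation is omitted.
 */
static void send_filtered_want(int fd, const char *tip_hex,
			       const char *filter_spec)
{
	/* the first want line carries the capability list; "filter" must be in it */
	packet_write_fmt(fd, "want %s filter\n", tip_hex);
	/* the filter spec itself, e.g. "blob:none" or "blob:limit=1k" */
	packet_write_fmt(fd, "filter %s\n", filter_spec);
	packet_flush(fd);
}
----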