/git-remote-fd
/git-remote-ext
/git-remote-testgit
+/git-remote-testsvn
/git-repack
/git-replace
/git-repo-config
Backward compatibility notes
----------------------------
-In the next major release, we will change the behavior of the "git
-push" command. When "git push [$there]" does not say what to push, we
-have used the traditional "matching" semantics so far (all your branches were
-sent to the remote as long as there already are branches of the same
-name over there). We will now use the "simple" semantics, that pushes the
-current branch to the branch with the same name only when the current
+In the next major release (not *this* one), we will change the
+behavior of the "git push" command.
+
+When "git push [$there]" does not say what to push, we have used the
+traditional "matching" semantics so far (all your branches were sent
+to the remote as long as there already are branches of the same name
+over there). We will use the "simple" semantics that pushes the
+current branch to the branch with the same name, only when the current
branch is set to integrate with that remote branch. There is a user
preference configuration variable "push.default" to change this, and
"git push" will warn about the upcoming change until you set this
-variable.
+variable in this release.
"git branch --set-upstream" is deprecated and may be removed in a
relatively distant future. "git branch [-u|--set-upstream-to]" has
* When "git am" sanitizes the "Subject:" line, we strip the prefix from
"Re: subject" and also from a less common "re: subject", but left
- the even less common "RE: subject" intact. We strip that now, too.
+ the even less common "RE: subject" intact. Now we strip that too.
* It was tempting to say "git branch --set-upstream origin/master",
but that tells Git to arrange the local branch "origin/master" to
* "git grep" learned to use a non-standard pattern type by default if
a configuration variable tells it to.
+ * Accumulated updates to "git gui" have been merged.
+
* "git log -g" learned the "--grep-reflog=<pattern>" option to limit
its output to commits with a reflog message that matches the given
pattern.
encountering a conflict during "p4 submit".
-Performance, Internal Implementation, etc. (please report possible regressions)
+Performance, Internal Implementation, etc.
 * Git ships with a fall-back regexp implementation for platforms with
   a buggy regexp library, but it was easy for people to keep using their
Limit the width of the graph part in --stat output. If set, applies
to all commands generating --stat output except format-patch.
+diff.context::
+ Generate diffs with <n> lines of context instead of the default of
+ 3. This value is overridden by the -U option.
+
diff.external::
If this config variable is set, diff generation is not
performed using the internal diff machinery, but using the
the list command. If no 'refspec' capability is advertised,
there is an implied `refspec *:*`.
+'bidi-import'::
+ The fast-import commands 'cat-blob' and 'ls' can be used by remote-helpers
+ to retrieve information about blobs and trees that already exist in
+ fast-import's memory. This requires a channel from fast-import to the
+ remote-helper.
+ If it is advertised in addition to "import", git establishes a pipe from
+ fast-import to the remote-helper's stdin.
+ It follows that git and fast-import are both connected to the
+ remote-helper's stdin. Because git can send multiple commands to
+ the remote-helper it is required that helpers that use 'bidi-import'
+ buffer all 'import' commands of a batch before sending data to fast-import.
+ This is to prevent mixing commands and fast-import responses on the
+ helper's stdin.
+
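To illustrate the buffering requirement, here is a minimal, self-contained
C sketch of a helper's import loop. It does not use git's real remote-helper
machinery, and the batch handling shown is only an assumption about how a
helper might structure it: every "import <ref>" command of a batch is
collected first, and nothing is written towards fast-import until the
batch-terminating blank line has been read.

------------
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
	char line[1024];
	char *refs[256];
	int nr = 0, i;

	/* Read one batch of import commands; do not answer yet. */
	while (fgets(line, sizeof(line), stdin)) {
		if (!strncmp(line, "import ", 7)) {
			line[strcspn(line, "\n")] = '\0';
			if (nr < 256)
				refs[nr++] = strdup(line + 7);
			continue;
		}
		if (line[0] == '\n')
			break;	/* blank line terminates the batch */
	}

	/* Only now is it safe to talk on the fast-import channel. */
	for (i = 0; i < nr; i++) {
		printf("# a real helper would emit a fast-import stream for %s\n",
		       refs[i]);
		free(refs[i]);
	}
	printf("done\n");
	return 0;
}
------------
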
Capabilities for Pushing
~~~~~~~~~~~~~~~~~~~~~~~~
'connect'::
helper should produce a fast-import stream terminated by a 'done'
command.
+
-Supported if the helper has the "import" capability.
+Note that if the 'bidi-import' capability is used the complete batch
+sequence has to be buffered before starting to send data to fast-import
+to prevent mixing of commands and fast-import responses on the helper's
+stdin.
++
+Supported if the helper has the 'import' capability.
'connect' <service>::
Connects to given service. Standard input and standard output
Typically you would first remove all tracked files from the working
tree using this command:
+Submodules
+~~~~~~~~~~
+Only submodules using a gitfile (which means they were cloned
+with a Git version 1.7.8 or newer) will be removed from the work
+tree, as their repository lives inside the .git directory of the
+superproject. If a submodule (or one of those nested inside it)
+still uses a .git directory, `git rm` will refuse to remove it
+(whether forced or not) to protect the submodule's history.
+
+A submodule is considered up to date when its HEAD is the same as
+recorded in the index, no tracked files are modified, and no untracked
+files that aren't ignored are present in the submodule's work tree.
+Ignored files are deemed expendable and won't stop a submodule's work
+tree from being removed.
+
----------------
git ls-files -z | xargs -0 rm -f
----------------
SYNOPSIS
--------
[verse]
-'git submodule' [--quiet] add [-b branch] [-f|--force]
+'git submodule' [--quiet] add [-b branch] [-f|--force] [--name <name>]
[--reference <repository>] [--] <repository> [<path>]
'git submodule' [--quiet] status [--cached] [--recursive] [--] [<path>...]
'git submodule' [--quiet] init [--] [<path>...]
Initialize all submodules for which "git submodule init" has not been
called so far before updating.
+--name::
+ This option is only valid for the add command. It sets the submodule's
+ name to the given string instead of defaulting to its path. The name
+ must be valid as a directory name and may not end with a '/'.
+
--reference <repository>::
This option is only valid for add and update commands. These
commands sometimes need to clone a remote repository. In this case,
branch of the `git.git` repository.
Documentation for older releases are available here:
+* link:v1.8.0/git.html[documentation for release 1.8.0]
+
+* release notes for
+ link:RelNotes/1.8.0.txt[1.8.0],
+
* link:v1.7.12.4/git.html[documentation for release 1.7.12.4]
* release notes for
of linkgit:git-config[1].
The file contains one subsection per submodule, and the subsection value
-is the name of the submodule. Each submodule section also contains the
+is the name of the submodule. The name is set to the path where the
+submodule has been added unless it was customized with the '--name'
+option of 'git submodule add'. Each submodule section also contains the
following required keys:
submodule.<name>.path::
	Match the regexp limiting patterns without regard to letter case.
+--basic-regexp::
+	Consider the limiting patterns to be basic regular expressions;
+	this is the default.
+
-E::
--extended-regexp::
	Consider the limiting patterns to be extended regular expressions
	instead of the default basic regular expressions.

-F::
--fixed-strings::
	Consider the limiting patterns to be fixed strings (don't interpret
	pattern as a regular expression).

+--perl-regexp::
+	Consider the limiting patterns to be Perl-compatible regexp.
+	Requires libpcre to be compiled in.
+
--remove-empty::
Stop when a given path disappears from the tree.
`argv_array_clear`::
Free all memory associated with the array and return it to the
initial, empty state.
+
+`argv_array_detach`::
+	Detach the argv array from the `struct argv_array`, transferring
+ ownership of the allocated array and strings.
+
+`argv_array_free_detached`::
+ Free the memory allocated by a `struct argv_array` that was later
+ detached and is now no longer needed.
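
As a usage sketch (assuming it is compiled inside git's source tree so that
`argv-array.h` and its helpers are available; the surrounding function names
`build_frotz_args` and `frotz_example` are made up for illustration):

------------
#include "git-compat-util.h"
#include "argv-array.h"

static const char **build_frotz_args(void)
{
	struct argv_array array = ARGV_ARRAY_INIT;

	argv_array_push(&array, "frotz");
	argv_array_push(&array, "--verbose");
	/* hand ownership of the strings and the array to the caller */
	return argv_array_detach(&array, NULL);
}

static void frotz_example(void)
{
	const char **argv = build_frotz_args();

	/* ... use argv for as long as it is needed ... */

	argv_array_free_detached(argv);
}
------------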
#!/bin/sh
GVF=GIT-VERSION-FILE
-DEF_VER=v1.8.0-rc3
+DEF_VER=v1.8.0
LF='
'
PROGRAM_OBJS += shell.o
PROGRAM_OBJS += show-index.o
PROGRAM_OBJS += upload-pack.o
+PROGRAM_OBJS += remote-testsvn.o
# Binary suffix, set to .exe for Windows builds
X =
$(QUIET_LINK)$(CC) $(ALL_CFLAGS) -o $@ $(ALL_LDFLAGS) $(filter %.o,$^) \
$(LIBS) $(CURL_LIBCURL) $(EXPAT_LIBEXPAT)
+git-remote-testsvn$X: remote-testsvn.o GIT-LDFLAGS $(GITLIBS) $(VCSSVN_LIB)
+ $(QUIET_LINK)$(CC) $(ALL_CFLAGS) -o $@ $(ALL_LDFLAGS) $(filter %.o,$^) $(LIBS) \
+ $(VCSSVN_LIB)
+
$(REMOTE_CURL_ALIASES): $(REMOTE_CURL_PRIMARY)
$(QUIET_LNCP)$(RM) $@ && \
ln $< $@ 2>/dev/null || \
}
argv_array_init(array);
}
+
+const char **argv_array_detach(struct argv_array *array, int *argc)
+{
+ const char **argv =
+ array->argv == empty_argv || array->argc == 0 ? NULL : array->argv;
+ if (argc)
+ *argc = array->argc;
+ argv_array_init(array);
+ return argv;
+}
+
+void argv_array_free_detached(const char **argv)
+{
+ if (argv) {
+ int i;
+ for (i = 0; argv[i]; i++)
+			free((char *)argv[i]);
+ free(argv);
+ }
+}
void argv_array_pushl(struct argv_array *, ...);
void argv_array_pop(struct argv_array *);
void argv_array_clear(struct argv_array *);
+const char **argv_array_detach(struct argv_array *array, int *argc);
+void argv_array_free_detached(const char **argv);
#endif /* ARGV_ARRAY_H */
static struct attr_stack {
struct attr_stack *prev;
char *origin;
+ size_t originlen;
unsigned num_matches;
unsigned alloc;
struct match_attr **attrs;
if (!is_bare_repository() || direction == GIT_ATTR_INDEX) {
elem = read_attr(GITATTRIBUTES_FILE, 1);
elem->origin = xstrdup("");
+ elem->originlen = 0;
elem->prev = attr_stack;
attr_stack = elem;
debug_push(elem);
strbuf_addstr(&pathbuf, GITATTRIBUTES_FILE);
elem = read_attr(pathbuf.buf, 0);
strbuf_setlen(&pathbuf, cp - path);
- elem->origin = strbuf_detach(&pathbuf, NULL);
+ elem->origin = strbuf_detach(&pathbuf, &elem->originlen);
elem->prev = attr_stack;
attr_stack = elem;
debug_push(elem);
}
static int path_matches(const char *pathname, int pathlen,
+ const char *basename,
const char *pattern,
const char *base, int baselen)
{
if (!strchr(pattern, '/')) {
- /* match basename */
- const char *basename = strrchr(pathname, '/');
- basename = basename ? basename + 1 : pathname;
return (fnmatch_icase(pattern, basename, 0) == 0);
}
/*
return rem;
}
-static int fill(const char *path, int pathlen, struct attr_stack *stk, int rem)
+static int fill(const char *path, int pathlen, const char *basename,
+ struct attr_stack *stk, int rem)
{
int i;
const char *base = stk->origin ? stk->origin : "";
struct match_attr *a = stk->attrs[i];
if (a->is_macro)
continue;
- if (path_matches(path, pathlen,
- a->u.pattern, base, strlen(base)))
+ if (path_matches(path, pathlen, basename,
+ a->u.pattern, base, stk->originlen))
rem = fill_one("fill", a, rem);
}
return rem;
{
struct attr_stack *stk;
int i, pathlen, rem;
+ const char *basename;
prepare_attr_stack(path);
for (i = 0; i < attr_nr; i++)
check_all_attr[i].value = ATTR__UNKNOWN;
+ basename = strrchr(path, '/');
+ basename = basename ? basename + 1 : path;
+
pathlen = strlen(path);
rem = attr_nr;
for (stk = attr_stack; 0 < rem && stk; stk = stk->prev)
- rem = fill(path, pathlen, stk, rem);
+ rem = fill(path, pathlen, basename, stk, rem);
}
int git_check_attr(const char *path, int num, struct git_attr_check *check)
if (!all && !might_be_tag)
return 0;
- if (!peel_ref(path, peeled) && !is_null_sha1(peeled)) {
+ if (!peel_ref(path, peeled)) {
is_tag = !!hashcmp(sha1, peeled);
} else {
hashcpy(peeled, sha1);
static int skip_first_line;
static void add_work(struct grep_opt *opt, enum grep_source_type type,
- const char *name, const void *id)
+ const char *name, const char *path, const void *id)
{
grep_lock();
pthread_cond_wait(&cond_write, &grep_mutex);
}
- grep_source_init(&todo[todo_end].source, type, name, id);
+ grep_source_init(&todo[todo_end].source, type, name, path, id);
if (opt->binary != GREP_BINARY_TEXT)
grep_source_load_driver(&todo[todo_end].source);
todo[todo_end].done = 0;
}
#endif
-static int parse_pattern_type_arg(const char *opt, const char *arg)
+static int grep_cmd_config(const char *var, const char *value, void *cb)
{
- if (!strcmp(arg, "default"))
- return GREP_PATTERN_TYPE_UNSPECIFIED;
- else if (!strcmp(arg, "basic"))
- return GREP_PATTERN_TYPE_BRE;
- else if (!strcmp(arg, "extended"))
- return GREP_PATTERN_TYPE_ERE;
- else if (!strcmp(arg, "fixed"))
- return GREP_PATTERN_TYPE_FIXED;
- else if (!strcmp(arg, "perl"))
- return GREP_PATTERN_TYPE_PCRE;
- die("bad %s argument: %s", opt, arg);
-}
-
-static void grep_pattern_type_options(const int pattern_type, struct grep_opt *opt)
-{
- switch (pattern_type) {
- case GREP_PATTERN_TYPE_UNSPECIFIED:
- /* fall through */
-
- case GREP_PATTERN_TYPE_BRE:
- opt->fixed = 0;
- opt->pcre = 0;
- opt->regflags &= ~REG_EXTENDED;
- break;
-
- case GREP_PATTERN_TYPE_ERE:
- opt->fixed = 0;
- opt->pcre = 0;
- opt->regflags |= REG_EXTENDED;
- break;
-
- case GREP_PATTERN_TYPE_FIXED:
- opt->fixed = 1;
- opt->pcre = 0;
- opt->regflags &= ~REG_EXTENDED;
- break;
-
- case GREP_PATTERN_TYPE_PCRE:
- opt->fixed = 0;
- opt->pcre = 1;
- opt->regflags &= ~REG_EXTENDED;
- break;
- }
-}
-
-static int grep_config(const char *var, const char *value, void *cb)
-{
- struct grep_opt *opt = cb;
- char *color = NULL;
-
- if (userdiff_config(var, value) < 0)
- return -1;
-
- if (!strcmp(var, "grep.extendedregexp")) {
- if (git_config_bool(var, value))
- opt->extended_regexp_option = 1;
- else
- opt->extended_regexp_option = 0;
- return 0;
- }
-
- if (!strcmp(var, "grep.patterntype")) {
- opt->pattern_type_option = parse_pattern_type_arg(var, value);
- return 0;
- }
-
- if (!strcmp(var, "grep.linenumber")) {
- opt->linenum = git_config_bool(var, value);
- return 0;
- }
-
- if (!strcmp(var, "color.grep"))
- opt->color = git_config_colorbool(var, value);
- else if (!strcmp(var, "color.grep.context"))
- color = opt->color_context;
- else if (!strcmp(var, "color.grep.filename"))
- color = opt->color_filename;
- else if (!strcmp(var, "color.grep.function"))
- color = opt->color_function;
- else if (!strcmp(var, "color.grep.linenumber"))
- color = opt->color_lineno;
- else if (!strcmp(var, "color.grep.match"))
- color = opt->color_match;
- else if (!strcmp(var, "color.grep.selected"))
- color = opt->color_selected;
- else if (!strcmp(var, "color.grep.separator"))
- color = opt->color_sep;
- else
- return git_color_default_config(var, value, cb);
- if (color) {
- if (!value)
- return config_error_nonbool(var);
- color_parse(value, var, color);
- }
- return 0;
+ int st = grep_config(var, value, cb);
+ if (git_color_default_config(var, value, cb) < 0)
+ st = -1;
+ return st;
}
static void *lock_and_read_sha1_file(const unsigned char *sha1, enum object_type *type, unsigned long *size)
}
static int grep_sha1(struct grep_opt *opt, const unsigned char *sha1,
- const char *filename, int tree_name_len)
+ const char *filename, int tree_name_len,
+ const char *path)
{
struct strbuf pathbuf = STRBUF_INIT;
#ifndef NO_PTHREADS
if (use_threads) {
- add_work(opt, GREP_SOURCE_SHA1, pathbuf.buf, sha1);
+ add_work(opt, GREP_SOURCE_SHA1, pathbuf.buf, path, sha1);
strbuf_release(&pathbuf);
return 0;
} else
struct grep_source gs;
int hit;
- grep_source_init(&gs, GREP_SOURCE_SHA1, pathbuf.buf, sha1);
+ grep_source_init(&gs, GREP_SOURCE_SHA1, pathbuf.buf, path, sha1);
strbuf_release(&pathbuf);
hit = grep_source(opt, &gs);
#ifndef NO_PTHREADS
if (use_threads) {
- add_work(opt, GREP_SOURCE_FILE, buf.buf, filename);
+ add_work(opt, GREP_SOURCE_FILE, buf.buf, filename, filename);
strbuf_release(&buf);
return 0;
} else
struct grep_source gs;
int hit;
- grep_source_init(&gs, GREP_SOURCE_FILE, buf.buf, filename);
+ grep_source_init(&gs, GREP_SOURCE_FILE, buf.buf, filename, filename);
strbuf_release(&buf);
hit = grep_source(opt, &gs);
if (cached || (ce->ce_flags & CE_VALID) || ce_skip_worktree(ce)) {
if (ce_stage(ce))
continue;
- hit |= grep_sha1(opt, ce->sha1, ce->name, 0);
+ hit |= grep_sha1(opt, ce->sha1, ce->name, 0, ce->name);
}
else
hit |= grep_file(opt, ce->name);
}
static int grep_tree(struct grep_opt *opt, const struct pathspec *pathspec,
- struct tree_desc *tree, struct strbuf *base, int tn_len)
+ struct tree_desc *tree, struct strbuf *base, int tn_len,
+ int check_attr)
{
int hit = 0;
enum interesting match = entry_not_interesting;
strbuf_add(base, entry.path, te_len);
if (S_ISREG(entry.mode)) {
- hit |= grep_sha1(opt, entry.sha1, base->buf, tn_len);
+ hit |= grep_sha1(opt, entry.sha1, base->buf, tn_len,
+ check_attr ? base->buf + tn_len : NULL);
}
else if (S_ISDIR(entry.mode)) {
enum object_type type;
strbuf_addch(base, '/');
init_tree_desc(&sub, data, size);
- hit |= grep_tree(opt, pathspec, &sub, base, tn_len);
+ hit |= grep_tree(opt, pathspec, &sub, base, tn_len,
+ check_attr);
free(data);
}
strbuf_setlen(base, old_baselen);
struct object *obj, const char *name)
{
if (obj->type == OBJ_BLOB)
- return grep_sha1(opt, obj->sha1, name, 0);
+ return grep_sha1(opt, obj->sha1, name, 0, NULL);
if (obj->type == OBJ_COMMIT || obj->type == OBJ_TREE) {
struct tree_desc tree;
void *data;
strbuf_addch(&base, ':');
}
init_tree_desc(&tree, data, size);
- hit = grep_tree(opt, pathspec, &tree, &base, base.len);
+ hit = grep_tree(opt, pathspec, &tree, &base, base.len,
+ obj->type == OBJ_COMMIT);
strbuf_release(&base);
free(data);
return hit;
if (argc == 2 && !strcmp(argv[1], "-h"))
usage_with_options(grep_usage, options);
- memset(&opt, 0, sizeof(opt));
- opt.prefix = prefix;
- opt.prefix_length = (prefix && *prefix) ? strlen(prefix) : 0;
- opt.relative = 1;
- opt.pathname = 1;
- opt.pattern_tail = &opt.pattern_list;
- opt.header_tail = &opt.header_list;
- opt.regflags = REG_NEWLINE;
- opt.max_depth = -1;
- opt.pattern_type_option = GREP_PATTERN_TYPE_UNSPECIFIED;
- opt.extended_regexp_option = 0;
-
- strcpy(opt.color_context, "");
- strcpy(opt.color_filename, "");
- strcpy(opt.color_function, "");
- strcpy(opt.color_lineno, "");
- strcpy(opt.color_match, GIT_COLOR_BOLD_RED);
- strcpy(opt.color_selected, "");
- strcpy(opt.color_sep, GIT_COLOR_CYAN);
- opt.color = -1;
- git_config(grep_config, &opt);
+ init_grep_defaults();
+ git_config(grep_cmd_config, NULL);
+ grep_init(&opt, prefix);
/*
* If there is no -- then the paths must exist in the working
PARSE_OPT_KEEP_DASHDASH |
PARSE_OPT_STOP_AT_NON_OPTION |
PARSE_OPT_NO_INTERNAL_HELP);
-
- if (pattern_type_arg != GREP_PATTERN_TYPE_UNSPECIFIED)
- grep_pattern_type_options(pattern_type_arg, &opt);
- else if (opt.pattern_type_option != GREP_PATTERN_TYPE_UNSPECIFIED)
- grep_pattern_type_options(opt.pattern_type_option, &opt);
- else if (opt.extended_regexp_option)
- grep_pattern_type_options(GREP_PATTERN_TYPE_ERE, &opt);
+ grep_commit_pattern_type(pattern_type_arg, &opt);
if (use_index && !startup_info->have_repository)
/* die the same way as if we did it at the beginning */
}
if (!prefixcmp(var, "color.decorate."))
return parse_decorate_color_config(var, 15, value);
-
+ if (grep_config(var, value, cb) < 0)
+ return -1;
return git_diff_ui_config(var, value, cb);
}
struct rev_info rev;
struct setup_revision_opt opt;
+ init_grep_defaults();
git_config(git_log_config, NULL);
init_revisions(&rev, prefix);
struct pathspec match_all;
int i, count, ret = 0;
+ init_grep_defaults();
git_config(git_log_config, NULL);
init_pathspec(&match_all, NULL);
struct rev_info rev;
struct setup_revision_opt opt;
+ init_grep_defaults();
git_config(git_log_config, NULL);
init_revisions(&rev, prefix);
struct rev_info rev;
struct setup_revision_opt opt;
+ init_grep_defaults();
git_config(git_log_config, NULL);
init_revisions(&rev, prefix);
extra_hdr.strdup_strings = 1;
extra_to.strdup_strings = 1;
extra_cc.strdup_strings = 1;
+ init_grep_defaults();
git_config(git_format_config, NULL);
init_revisions(&rev, prefix);
rev.commit_format = CMIT_FMT_EMAIL;
if (!prefixcmp(path, "refs/tags/") && /* is a tag? */
!peel_ref(path, peeled) && /* peelable? */
- !is_null_sha1(peeled) && /* annotated tag? */
locate_object_entry(peeled)) /* object packed? */
add_object_entry(sha1, OBJ_TAG, NULL, 0);
return 0;
#include "cache-tree.h"
#include "tree-walk.h"
#include "parse-options.h"
+#include "submodule.h"
static const char * const builtin_rm_usage[] = {
N_("git rm [options] [--] <file>..."),
static struct {
int nr, alloc;
- const char **name;
+ struct {
+ const char *name;
+ char is_submodule;
+ } *entry;
} list;
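+/*
+ * For an unmerged path (cache_name_pos() returned a negative position),
+ * find the index position of its stage-2 ("ours") entry, or return -1
+ * if there is none.
+ */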
+static int get_ours_cache_pos(const char *path, int pos)
+{
+ int i = -pos - 1;
+
+ while ((i < active_nr) && !strcmp(active_cache[i]->name, path)) {
+ if (ce_stage(active_cache[i]) == 2)
+ return i;
+ i++;
+ }
+ return -1;
+}
+
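+/*
+ * Refuse to remove populated submodules that still use a .git directory
+ * instead of a gitfile, as removing their work tree would also throw
+ * away the repository and its history.
+ */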
+static int check_submodules_use_gitfiles(void)
+{
+ int i;
+ int errs = 0;
+
+ for (i = 0; i < list.nr; i++) {
+ const char *name = list.entry[i].name;
+ int pos;
+ struct cache_entry *ce;
+ struct stat st;
+
+ pos = cache_name_pos(name, strlen(name));
+ if (pos < 0) {
+ pos = get_ours_cache_pos(name, pos);
+ if (pos < 0)
+ continue;
+ }
+ ce = active_cache[pos];
+
+ if (!S_ISGITLINK(ce->ce_mode) ||
+ (lstat(ce->name, &st) < 0) ||
+ is_empty_dir(name))
+ continue;
+
+ if (!submodule_uses_gitfile(name))
+ errs = error(_("submodule '%s' (or one of its nested "
+ "submodules) uses a .git directory\n"
+ "(use 'rm -rf' if you really want to remove "
+ "it including all of its history)"), name);
+ }
+
+ return errs;
+}
+
static int check_local_mod(unsigned char *head, int index_only)
{
/*
struct stat st;
int pos;
struct cache_entry *ce;
- const char *name = list.name[i];
+ const char *name = list.entry[i].name;
unsigned char sha1[20];
unsigned mode;
int local_changes = 0;
int staged_changes = 0;
pos = cache_name_pos(name, strlen(name));
- if (pos < 0)
- continue; /* removing unmerged entry */
+ if (pos < 0) {
+ /*
+ * Skip unmerged entries except for populated submodules
+ * that could lose history when removed.
+ */
+ pos = get_ours_cache_pos(name, pos);
+ if (pos < 0)
+ continue;
+
+ if (!S_ISGITLINK(active_cache[pos]->ce_mode) ||
+ is_empty_dir(name))
+ continue;
+ }
ce = active_cache[pos];
if (lstat(ce->name, &st) < 0) {
/* if a file was removed and it is now a
* directory, that is the same as ENOENT as
* far as git is concerned; we do not track
- * directories.
+ * directories unless they are submodules.
*/
- continue;
+ if (!S_ISGITLINK(ce->ce_mode))
+ continue;
}
/*
/*
* Is the index different from the file in the work tree?
+ * If it's a submodule, is its work tree modified?
*/
- if (ce_match_stat(ce, &st, 0))
+ if (ce_match_stat(ce, &st, 0) ||
+ (S_ISGITLINK(ce->ce_mode) &&
+ !ok_to_remove_submodule(ce->name)))
local_changes = 1;
/*
errs = error(_("'%s' has changes staged in the index\n"
"(use --cached to keep the file, "
"or -f to force removal)"), name);
- if (local_changes)
- errs = error(_("'%s' has local modifications\n"
- "(use --cached to keep the file, "
- "or -f to force removal)"), name);
+ if (local_changes) {
+ if (S_ISGITLINK(ce->ce_mode) &&
+ !submodule_uses_gitfile(name)) {
+ errs = error(_("submodule '%s' (or one of its nested "
+ "submodules) uses a .git directory\n"
+ "(use 'rm -rf' if you really want to remove "
+ "it including all of its history)"), name);
+ } else
+ errs = error(_("'%s' has local modifications\n"
+ "(use --cached to keep the file, "
+ "or -f to force removal)"), name);
+ }
}
}
return errs;
struct cache_entry *ce = active_cache[i];
if (!match_pathspec(pathspec, ce->name, ce_namelen(ce), 0, seen))
continue;
- ALLOC_GROW(list.name, list.nr + 1, list.alloc);
- list.name[list.nr++] = ce->name;
+ ALLOC_GROW(list.entry, list.nr + 1, list.alloc);
+ list.entry[list.nr].name = ce->name;
+ list.entry[list.nr++].is_submodule = S_ISGITLINK(ce->ce_mode);
}
if (pathspec) {
hashclr(sha1);
if (check_local_mod(sha1, index_only))
exit(1);
+ } else if (!index_only) {
+ if (check_submodules_use_gitfiles())
+ exit(1);
}
/*
* the index unless all of them succeed.
*/
for (i = 0; i < list.nr; i++) {
- const char *path = list.name[i];
+ const char *path = list.entry[i].name;
if (!quiet)
printf("rm '%s'\n", path);
if (!index_only) {
int removed = 0;
for (i = 0; i < list.nr; i++) {
- const char *path = list.name[i];
+ const char *path = list.entry[i].name;
+ if (list.entry[i].is_submodule) {
+ if (is_empty_dir(path)) {
+ if (!rmdir(path)) {
+ removed = 1;
+ continue;
+ }
+ } else {
+ struct strbuf buf = STRBUF_INIT;
+ strbuf_addstr(&buf, path);
+ if (!remove_dir_recursively(&buf, 0)) {
+ removed = 1;
+ strbuf_release(&buf);
+ continue;
+ }
+ strbuf_release(&buf);
+ /* Fallthrough and let remove_path() fail. */
+ }
+ }
if (!remove_path(path)) {
removed = 1;
continue;
static int show_ref(const char *refname, const unsigned char *sha1, int flag, void *cbdata)
{
- struct object *obj;
const char *hex;
unsigned char peeled[20];
if (!deref_tags)
return 0;
- if ((flag & REF_ISPACKED) && !peel_ref(refname, peeled)) {
- if (!is_null_sha1(peeled)) {
- hex = find_unique_abbrev(peeled, abbrev);
- printf("%s %s^{}\n", hex, refname);
- }
- }
- else {
- obj = parse_object(sha1);
- if (!obj)
- die("git show-ref: bad ref %s (%s)", refname,
- sha1_to_hex(sha1));
- if (obj->type == OBJ_TAG) {
- obj = deref_tag(obj, refname, 0);
- if (!obj)
- die("git show-ref: bad tag at ref %s (%s)", refname,
- sha1_to_hex(sha1));
- hex = find_unique_abbrev(obj->sha1, abbrev);
- printf("%s %s^{}\n", hex, refname);
- }
+ if (!peel_ref(refname, peeled)) {
+ hex = find_unique_abbrev(peeled, abbrev);
+ printf("%s %s^{}\n", hex, refname);
}
return 0;
}
return freopen(filename, otype, stream);
}
+#undef fflush
+int mingw_fflush(FILE *stream)
+{
+ int ret = fflush(stream);
+
+ /*
+ * write() is used behind the scenes of stdio output functions.
+ * Since git code does not check for errors after each stdio write
+ * operation, it can happen that write() is called by a later
+ * stdio function even if an earlier write() call failed. In the
+ * case of a pipe whose readable end was closed, only the first
+ * call to write() reports EPIPE on Windows. Subsequent write()
+ * calls report EINVAL. It is impossible to notice whether this
+ * fflush invocation triggered such a case; therefore, we have to
+ * catch all EINVAL errors wholesale.
+ */
+ if (ret && errno == EINVAL)
+ errno = EPIPE;
+
+ return ret;
+}
+
/*
* The unit of FILETIME is 100-nanoseconds since January 1, 1601, UTC.
* Returns the 100-nanoseconds ("hekto nanoseconds") since the epoch.
FILE *mingw_freopen (const char *filename, const char *otype, FILE *stream);
#define freopen mingw_freopen
+int mingw_fflush(FILE *stream);
+#define fflush mingw_fflush
+
char *mingw_getcwd(char *pointer, int len);
#define getcwd mingw_getcwd
#include "strbuf.h"
#include "quote.h"
-#define MAXNAME (256)
-
typedef struct config_file {
struct config_file *prev;
FILE *f;
int linenr;
int eof;
struct strbuf value;
- char var[MAXNAME];
+ struct strbuf var;
} config_file;
static config_file *cf;
return isalnum(c) || c == '-';
}
-static int get_value(config_fn_t fn, void *data, char *name, unsigned int len)
+static int get_value(config_fn_t fn, void *data, struct strbuf *name)
{
int c;
char *value;
break;
if (!iskeychar(c))
break;
- name[len++] = tolower(c);
- if (len >= MAXNAME)
- return -1;
+ strbuf_addch(name, tolower(c));
}
- name[len] = 0;
+
while (c == ' ' || c == '\t')
c = get_next_char();
if (!value)
return -1;
}
- return fn(name, value, data);
+ return fn(name->buf, value, data);
}
-static int get_extended_base_var(char *name, int baselen, int c)
+static int get_extended_base_var(struct strbuf *name, int c)
{
do {
if (c == '\n')
/* We require the format to be '[base "extension"]' */
if (c != '"')
return -1;
- name[baselen++] = '.';
+ strbuf_addch(name, '.');
for (;;) {
int c = get_next_char();
if (c == '\n')
goto error_incomplete_line;
}
- name[baselen++] = c;
- if (baselen > MAXNAME / 2)
- return -1;
+ strbuf_addch(name, c);
}
/* Final ']' */
if (get_next_char() != ']')
return -1;
- return baselen;
+ return 0;
error_incomplete_line:
cf->linenr--;
return -1;
}
-static int get_base_var(char *name)
+static int get_base_var(struct strbuf *name)
{
- int baselen = 0;
-
for (;;) {
int c = get_next_char();
if (cf->eof)
return -1;
if (c == ']')
- return baselen;
+ return 0;
if (isspace(c))
- return get_extended_base_var(name, baselen, c);
+ return get_extended_base_var(name, c);
if (!iskeychar(c) && c != '.')
return -1;
- if (baselen > MAXNAME / 2)
- return -1;
- name[baselen++] = tolower(c);
+ strbuf_addch(name, tolower(c));
}
}
{
int comment = 0;
int baselen = 0;
- char *var = cf->var;
+ struct strbuf *var = &cf->var;
/* U+FEFF Byte Order Mark in UTF8 */
static const unsigned char *utf8_bom = (unsigned char *) "\xef\xbb\xbf";
continue;
}
if (c == '[') {
- baselen = get_base_var(var);
- if (baselen <= 0)
+ /* Reset prior to determining a new stem */
+ strbuf_reset(var);
+ if (get_base_var(var) < 0 || var->len < 1)
break;
- var[baselen++] = '.';
- var[baselen] = 0;
+ strbuf_addch(var, '.');
+ baselen = var->len;
continue;
}
if (!isalpha(c))
break;
- var[baselen] = tolower(c);
- if (get_value(fn, data, var, baselen+1) < 0)
+ /*
+ * Truncate the var name back to the section header
+ * stem prior to grabbing the suffix part of the name
+ * and the value.
+ */
+ strbuf_setlen(var, baselen);
+ strbuf_addch(var, tolower(c));
+ if (get_value(fn, data, var) < 0)
break;
}
die("bad config file line %d in %s", cf->linenr, cf->name);
top.linenr = 1;
top.eof = 0;
strbuf_init(&top.value, 1024);
+ strbuf_init(&top.var, 1024);
cf = ⊤
ret = git_parse_file(fn, data);
/* pop config-file parsing state stack */
strbuf_release(&top.value);
+ strbuf_release(&top.var);
cf = top.prev;
fclose(f);
{
if (svndump_init(NULL))
return 1;
- svndump_read((argc > 1) ? argv[1] : NULL);
+ svndump_read((argc > 1) ? argv[1] : NULL, "refs/heads/master",
+ "refs/notes/svn/revs");
svndump_deinit();
svndump_reset();
return 0;
--- /dev/null
+#!/usr/bin/python
+"""
+Simulates svnrdump by replaying an existing dump from a file, taking care
+of the specified revision range.
+To simulate incremental imports the environment variable SVNRMAX can be set
+to the highest revision that should be available.
+"""
+import sys, os
+
+
+def getrevlimit():
+ var = 'SVNRMAX'
+ if os.environ.has_key(var):
+ return os.environ[var]
+ return None
+
+def writedump(url, lower, upper):
+ if url.startswith('sim://'):
+ filename = url[6:]
+ if filename[-1] == '/': filename = filename[:-1] #remove terminating slash
+ else:
+ raise ValueError('sim:// url required')
+ f = open(filename, 'r');
+ state = 'header'
+ wroterev = False
+ while(True):
+ l = f.readline()
+ if l == '': break
+ if state == 'header' and l.startswith('Revision-number: '):
+ state = 'prefix'
+ if state == 'prefix' and l == 'Revision-number: %s\n' % lower:
+ state = 'selection'
+ if not upper == 'HEAD' and state == 'selection' and l == 'Revision-number: %s\n' % upper:
+ break;
+
+ if state == 'header' or state == 'selection':
+ if state == 'selection': wroterev = True
+ sys.stdout.write(l)
+ return wroterev
+
+if __name__ == "__main__":
+	if not (len(sys.argv) in (3, 4, 5)):
+		print "usage: %s dump URL -rLOWER:UPPER" % sys.argv[0]
+		sys.exit(1)
+	if not sys.argv[1] == 'dump': raise NotImplementedError('only "dump" is supported.')
+ url = sys.argv[2]
+ r = ('0', 'HEAD')
+ if len(sys.argv) == 4 and sys.argv[3][0:2] == '-r':
+ r = sys.argv[3][2:].lstrip().split(':')
+ if not getrevlimit() is None: r[1] = getrevlimit()
+ if writedump(url, r[0], r[1]): ret = 0
+ else: ret = 1
+ sys.exit(ret)
static int diff_rename_limit_default = 400;
static int diff_suppress_blank_empty;
static int diff_use_color_default = -1;
+static int diff_context_default = 3;
static const char *diff_word_regex_cfg;
static const char *external_diff_cmd_cfg;
int diff_auto_refresh_index = 1;
diff_use_color_default = git_config_colorbool(var, value);
return 0;
}
+ if (!strcmp(var, "diff.context")) {
+ diff_context_default = git_config_int(var, value);
+ if (diff_context_default < 0)
+ return -1;
+ return 0;
+ }
if (!strcmp(var, "diff.renames")) {
diff_detect_rename_default = git_config_rename(var, value);
return 0;
options->break_opt = -1;
options->rename_limit = -1;
options->dirstat_permille = diff_dirstat_permille_default;
- options->context = 3;
+ options->context = diff_context_default;
DIFF_OPT_SET(options, RENAME_EMPTY);
options->change = diff_change;
eval "$functions"
-# When piped a commit, output a script to set the ident of either
-# "author" or "committer
+finish_ident() {
+ # Ensure non-empty id name.
+ echo "case \"\$GIT_$1_NAME\" in \"\") GIT_$1_NAME=\"\${GIT_$1_EMAIL%%@*}\" && export GIT_$1_NAME;; esac"
+ # And make sure everything is exported.
+ echo "export GIT_$1_NAME"
+ echo "export GIT_$1_EMAIL"
+ echo "export GIT_$1_DATE"
+}
set_ident () {
- lid="$(echo "$1" | tr "[A-Z]" "[a-z]")"
- uid="$(echo "$1" | tr "[a-z]" "[A-Z]")"
- pick_id_script='
- /^'$lid' /{
- s/'\''/'\''\\'\'\''/g
- h
- s/^'$lid' \([^<]*\) <[^>]*> .*$/\1/
- s/'\''/'\''\'\'\''/g
- s/.*/GIT_'$uid'_NAME='\''&'\''; export GIT_'$uid'_NAME/p
-
- g
- s/^'$lid' [^<]* <\([^>]*\)> .*$/\1/
- s/'\''/'\''\'\'\''/g
- s/.*/GIT_'$uid'_EMAIL='\''&'\''; export GIT_'$uid'_EMAIL/p
-
- g
- s/^'$lid' [^<]* <[^>]*> \(.*\)$/@\1/
- s/'\''/'\''\'\'\''/g
- s/.*/GIT_'$uid'_DATE='\''&'\''; export GIT_'$uid'_DATE/p
-
- q
- }
- '
-
- LANG=C LC_ALL=C sed -ne "$pick_id_script"
- # Ensure non-empty id name.
- echo "case \"\$GIT_${uid}_NAME\" in \"\") GIT_${uid}_NAME=\"\${GIT_${uid}_EMAIL%%@*}\" && export GIT_${uid}_NAME;; esac"
+ parse_ident_from_commit author AUTHOR committer COMMITTER
+ finish_ident AUTHOR
+ finish_ident COMMITTER
}
USAGE="[--env-filter <command>] [--tree-filter <command>]
git cat-file commit "$commit" >../commit ||
die "Cannot read commit $commit"
- eval "$(set_ident AUTHOR <../commit)" ||
- die "setting author failed for commit $commit"
- eval "$(set_ident COMMITTER <../commit)" ||
- die "setting committer failed for commit $commit"
+ eval "$(set_ident <../commit)" ||
+ die "setting author/committer failed for commit $commit"
eval "$filter_env" < /dev/null ||
die "env filter failed: $filter_env"
#!/bin/sh
GVF=GIT-VERSION-FILE
-DEF_VER=0.16.GITGUI
+DEF_VER=0.17.GITGUI
LF='
'
if {$_trace >= 0} {
set argv [lreplace $argv $_trace $_trace]
set _trace 1
+ if {[tk windowingsystem] eq "win32"} { console show }
} else {
set _trace 0
}
(![$ui_comm edit modified]
|| [string trim [$ui_comm get 0.0 end]] eq {})} {
if {[string match amend* $commit_type]} {
- } elseif {[load_message GITGUI_MSG]} {
+ } elseif {[load_message GITGUI_MSG utf-8]} {
} elseif {[run_prepare_commit_msg_hook]} {
} elseif {[load_message MERGE_MSG]} {
} elseif {[load_message SQUASH_MSG]} {
fileevent $fd_lo readable [list read_ls_others $fd_lo $after]
}
-proc load_message {file} {
+proc load_message {file {encoding {}}} {
global ui_comm
set f [gitdir $file]
return 0
}
fconfigure $fd -eofchar {}
+ if {$encoding ne {}} {
+ fconfigure $fd -encoding $encoding
+ }
set content [string trim [read $fd]]
close $fd
regsub -all -line {[ \r\t]+$} $content {} content
&& $msg ne {}} {
catch {
set fd [open $save w]
+ fconfigure $fd -encoding utf-8
puts -nonewline $fd $msg
close $fd
}
set jump_spec {}
set is_path 0
foreach a $argv {
- if {$is_path || [file exists $_prefix$a]} {
+ if {[file exists $a]} {
+ if {$path ne {}} usage
+ set path [normalize_relpath $a]
+ break
+ } elseif {[file exists $_prefix$a]} {
if {$path ne {}} usage
set path [normalize_relpath $_prefix$a]
break
+ }
+
+ if {$is_path} {
+ if {$path ne {}} usage
+ break
} elseif {$a eq {--}} {
if {$path ne {}} {
if {$head ne {}} usage
unset is_path
if {$head ne {} && $path eq {}} {
- set path [normalize_relpath $_prefix$head]
- set head {}
+ if {[string index $head 0] eq {/}} {
+ set path [normalize_relpath $head]
+ set head {}
+ } else {
+ set path [normalize_relpath $_prefix$head]
+ set head {}
+ }
}
if {$head eq {}} {
bind $ui_diff <$M1B-Key-V> {break}
bind $ui_diff <$M1B-Key-a> {%W tag add sel 0.0 end;break}
bind $ui_diff <$M1B-Key-A> {%W tag add sel 0.0 end;break}
+bind $ui_diff <$M1B-Key-j> {do_revert_selection;break}
+bind $ui_diff <$M1B-Key-J> {do_revert_selection;break}
bind $ui_diff <Key-Up> {catch {%W yview scroll -1 units};break}
bind $ui_diff <Key-Down> {catch {%W yview scroll 1 units};break}
bind $ui_diff <Key-Left> {catch {%W xview scroll -1 units};break}
bind . <$M1B-Key-S> do_signoff
bind . <$M1B-Key-t> do_add_selection
bind . <$M1B-Key-T> do_add_selection
+bind . <$M1B-Key-u> do_unstage_selection
+bind . <$M1B-Key-U> do_unstage_selection
bind . <$M1B-Key-j> do_revert_selection
bind . <$M1B-Key-J> do_revert_selection
bind . <$M1B-Key-i> do_add_all
}
if {[winfo exists $ui_comm]} {
- set GITGUI_BCK_exists [load_message GITGUI_BCK]
+ set GITGUI_BCK_exists [load_message GITGUI_BCK utf-8]
# -- If both our backup and message files exist use the
# newer of the two files to initialize the buffer.
} elseif {$m} {
catch {
set fd [open [gitdir GITGUI_BCK] w]
+ fconfigure $fd -encoding utf-8
puts -nonewline $fd $msg
close $fd
set GITGUI_BCK_exists 1
&& [is_config_true gui.warndetachedcommit]} {
set msg [mc "You are about to commit on a detached head.\
This is a potentially dangerous thing to do because if you switch\
-to another branch you will loose your changes and it can be difficult\
+to another branch you will lose your changes and it can be difficult\
to retrieve them later from the reflog. You should probably cancel this\
commit and create a new branch to continue.\n\
\n\
catch {file delete [gitdir MERGE_MSG]}
catch {file delete [gitdir SQUASH_MSG]}
catch {file delete [gitdir GITGUI_MSG]}
+ catch {file delete [gitdir CHERRY_PICK_HEAD]}
# -- Let rerere do its thing.
#
method update {have total} {
set pdone 0
+ set cdone 0
if {$total > 0} {
set pdone [expr {100 * $have / $total}]
set cdone [expr {[winfo width $w_c] * $have / $total}]
} else {
set argv0 [file join $gitexecdir [file tail [lindex $argv 0]]]
set AppMain_source [file join $gitguilib git-gui.tcl]
- if {[pwd] eq {/}} {
+ if {[info exists env(PWD)]} {
+ cd $env(PWD)
+ } elseif {[pwd] eq {/}} {
cd $env(HOME)
}
}
#: git-gui.sh:1154
msgid "Cannot use bare repository:"
-msgstr "Leeres Projektarchiv kann nicht benutzt werden:"
+msgstr "Bloßes Projektarchiv kann nicht benutzt werden:"
#: git-gui.sh:1162
msgid "No working directory"
#: git-gui.sh:1454
msgid "Calling prepare-commit-msg hook..."
-msgstr "Aufrufen der Eintragen-Vorbereiten-Kontrolle..."
+msgstr "Aufrufen der Eintragen-Vorbereiten-Kontrolle (»prepare-commit hook«)..."
#: git-gui.sh:1471
msgid "Commit declined by prepare-commit-msg hook."
#: git-gui.sh:2465 lib/choose_rev.tcl:557
msgid "Remote"
-msgstr "Andere Archive"
+msgstr "Externe Archive"
#: git-gui.sh:2468
msgid "Tools"
#: git-gui.sh:3328
msgid "Use Remote Version"
-msgstr "Entfernte Version benutzen"
+msgstr "Externe Version benutzen"
#: git-gui.sh:3332
msgid "Use Local Version"
#: lib/branch_create.tcl:140
#, tcl-format
msgid "Tracking branch %s is not a branch in the remote repository."
-msgstr "Übernahmezweig »%s« ist kein Zweig im anderen Projektarchiv."
+msgstr "Übernahmezweig »%s« ist kein Zweig im externen Projektarchiv."
#: lib/branch_create.tcl:153 lib/branch_rename.tcl:86
msgid "Please supply a branch name."
#: lib/commit.tcl:234
msgid "Calling pre-commit hook..."
-msgstr "Aufrufen der Vor-Eintragen-Kontrolle..."
+msgstr "Aufrufen der Vor-Eintragen-Kontrolle (»pre-commit hook«)..."
#: lib/commit.tcl:249
msgid "Commit declined by pre-commit hook."
#: lib/commit.tcl:272
msgid "Calling commit-msg hook..."
-msgstr "Aufrufen der Versionsbeschreibungs-Kontrolle..."
+msgstr "Aufrufen der Versionsbeschreibungs-Kontrolle (»commit-message hook«)..."
#: lib/commit.tcl:287
msgid "Commit declined by commit-msg hook."
#: lib/remote_add.tcl:19
msgid "Add Remote"
-msgstr "Anderes Archiv hinzufügen"
+msgstr "Externes Archiv hinzufügen"
#: lib/remote_add.tcl:24
msgid "Add New Remote"
-msgstr "Neues anderes Archiv hinzufügen"
+msgstr "Neues externes Archiv hinzufügen"
#: lib/remote_add.tcl:28 lib/tools_dlg.tcl:36
msgid "Add"
#: lib/remote_add.tcl:37
msgid "Remote Details"
-msgstr "Einzelheiten des anderen Archivs"
+msgstr "Einzelheiten des externen Archivs"
#: lib/remote_add.tcl:50
msgid "Location:"
#: lib/remote_add.tcl:71
msgid "Initialize Remote Repository and Push"
-msgstr "Anderes Archiv initialisieren und dahin versenden"
+msgstr "Externes Archiv initialisieren und dahin versenden"
#: lib/remote_add.tcl:77
msgid "Do Nothing Else Now"
#: lib/remote_add.tcl:101
msgid "Please supply a remote name."
-msgstr "Bitte geben Sie einen Namen des anderen Archivs an."
+msgstr "Bitte geben Sie einen Namen des externen Archivs an."
#: lib/remote_add.tcl:114
#, tcl-format
msgid "'%s' is not an acceptable remote name."
-msgstr "»%s« ist kein zulässiger Name eines anderen Archivs."
+msgstr "»%s« ist kein zulässiger Name eines externen Archivs."
#: lib/remote_add.tcl:125
#, tcl-format
msgid "Failed to add remote '%s' of location '%s'."
-msgstr "Fehler beim Hinzufügen des anderen Archivs »%s« aus Herkunftsort »%s«."
+msgstr "Fehler beim Hinzufügen des externen Archivs »%s« aus Herkunftsort »%s«."
#: lib/remote_add.tcl:133 lib/transport.tcl:6
#, tcl-format
#: lib/remote_add.tcl:157
#, tcl-format
msgid "Do not know how to initialize repository at location '%s'."
-msgstr "Initialisieren eines anderen Archivs an Adresse »%s« ist nicht möglich."
+msgstr "Initialisieren eines externen Archivs an Adresse »%s« ist nicht möglich."
#: lib/remote_add.tcl:163 lib/transport.tcl:25 lib/transport.tcl:63
#: lib/transport.tcl:81
#: lib/remote_branch_delete.tcl:29 lib/remote_branch_delete.tcl:34
msgid "Delete Branch Remotely"
-msgstr "Zweig in anderem Archiv löschen"
+msgstr "Zweig in externem Archiv löschen"
#: lib/remote_branch_delete.tcl:47
msgid "From Repository"
#: lib/remote_branch_delete.tcl:50 lib/transport.tcl:134
msgid "Remote:"
-msgstr "Anderes Archiv:"
+msgstr "Externes Archiv:"
#: lib/remote_branch_delete.tcl:66 lib/transport.tcl:149
msgid "Arbitrary Location:"
#: lib/remote.tcl:163
msgid "Remove Remote"
-msgstr "Anderes Archiv entfernen"
+msgstr "Externes Archiv entfernen"
#: lib/remote.tcl:168
msgid "Prune from"
msgstr "Schlüsselerzeugung fehlgeschlagen."
#: lib/sshkey.tcl:118
-msgid "Generation succeded, but no keys found."
+msgid "Generation succeeded, but no keys found."
msgstr "Schlüsselerzeugung erfolgreich, aber keine Schlüssel gefunden."
#: lib/sshkey.tcl:121
msgstr "La génération a échoué."
#: lib/sshkey.tcl:118
-msgid "Generation succeded, but no keys found."
+msgid "Generation succeeded, but no keys found."
msgstr "La génération a réussi, mais aucune clé n'a été trouvée."
#: lib/sshkey.tcl:121
msgstr ""
#: lib/sshkey.tcl:118
-msgid "Generation succeded, but no keys found."
+msgid "Generation succeeded, but no keys found."
msgstr ""
#: lib/sshkey.tcl:121
msgstr "A generálás nem sikerült."
#: lib/sshkey.tcl:118
-msgid "Generation succeded, but no keys found."
+msgid "Generation succeeded, but no keys found."
msgstr "A generálás sikeres, de egy kulcs se található."
#: lib/sshkey.tcl:121
msgstr "Errore durante la creazione della chiave."
#: lib/sshkey.tcl:118
-msgid "Generation succeded, but no keys found."
+msgid "Generation succeeded, but no keys found."
msgstr "La chiave è stata creata con successo, ma non è stata trovata."
#: lib/sshkey.tcl:121
msgstr "生成に失敗しました。"
#: lib/sshkey.tcl:118
-msgid "Generation succeded, but no keys found."
+msgid "Generation succeeded, but no keys found."
msgstr "生成には成功しましたが、鍵が見つかりません。"
#: lib/sshkey.tcl:121
msgstr "Generering feilet."
#: lib/sshkey.tcl:118
-msgid "Generation succeded, but no keys found."
+msgid "Generation succeeded, but no keys found."
msgstr "Generering vellykket, men ingen nøkler er funnet."
#: lib/sshkey.tcl:121
msgstr "A geração da chave falhou."
#: lib/sshkey.tcl:118
-msgid "Generation succeded, but no keys found."
+msgid "Generation succeeded, but no keys found."
msgstr "A geração da chave foi bem-sucedida, mas nenhuma chave foi encontrada."
#: lib/sshkey.tcl:121
msgstr "Ключ не создан."
#: lib/sshkey.tcl:118
-msgid "Generation succeded, but no keys found."
+msgid "Generation succeeded, but no keys found."
msgstr "Создание ключа завершилось, но результат не был найден"
#: lib/sshkey.tcl:121
msgstr "Misslyckades med att skapa."
#: lib/sshkey.tcl:120
-msgid "Generation succeded, but no keys found."
+msgid "Generation succeeded, but no keys found."
msgstr "Lyckades skapa nyckeln, men hittar inte någon nyckel."
#: lib/sshkey.tcl:123
fi
}
+# Generate a sed script to parse identities from a commit.
+#
+# Reads the commit from stdin, which should be in raw format (e.g., from
+# cat-file or "--pretty=raw").
+#
+# The first argument specifies the ident line to parse (e.g., "author"), and
+# the second specifies the environment variable to put it in (e.g., "AUTHOR"
+# for "GIT_AUTHOR_*"). Multiple pairs can be given to parse author and
+# committer.
+pick_ident_script () {
+ while test $# -gt 0
+ do
+ lid=$1; shift
+ uid=$1; shift
+ printf '%s' "
+ /^$lid /{
+ s/'/'\\\\''/g
+ h
+ s/^$lid "'\([^<]*\) <[^>]*> .*$/\1/'"
+ s/.*/GIT_${uid}_NAME='&'/p
+
+ g
+ s/^$lid "'[^<]* <\([^>]*\)> .*$/\1/'"
+ s/.*/GIT_${uid}_EMAIL='&'/p
+
+ g
+ s/^$lid "'[^<]* <[^>]*> \(.*\)$/@\1/'"
+ s/.*/GIT_${uid}_DATE='&'/p
+ }
+ "
+ done
+ echo '/^$/q'
+}
+
+# Create a pick-script as above and feed it to sed. Stdout is suitable for
+# feeding to eval.
+parse_ident_from_commit () {
+ LANG=C LC_ALL=C sed -ne "$(pick_ident_script "$@")"
+}
+
+# Parse the author from a commit given as an argument. Stdout is suitable for
+# feeding to eval to set the usual GIT_* ident variables.
get_author_ident_from_commit () {
- pick_author_script='
- /^author /{
- s/'\''/'\''\\'\'\''/g
- h
- s/^author \([^<]*\) <[^>]*> .*$/\1/
- s/.*/GIT_AUTHOR_NAME='\''&'\''/p
-
- g
- s/^author [^<]* <\([^>]*\)> .*$/\1/
- s/.*/GIT_AUTHOR_EMAIL='\''&'\''/p
-
- g
- s/^author [^<]* <[^>]*> \(.*\)$/@\1/
- s/.*/GIT_AUTHOR_DATE='\''&'\''/p
-
- q
- }
- '
encoding=$(git config i18n.commitencoding || echo UTF-8)
git show -s --pretty=raw --encoding="$encoding" "$1" -- |
- LANG=C LC_ALL=C sed -ne "$pick_author_script"
+ parse_ident_from_commit author AUTHOR
}
# Clear repo-local GIT_* environment variables. Useful when switching to
# Copyright (c) 2007 Lars Hjemli
dashless=$(basename "$0" | sed -e 's/-/ /')
-USAGE="[--quiet] add [-b branch] [-f|--force] [--reference <repository>] [--] <repository> [<path>]
+USAGE="[--quiet] add [-b branch] [-f|--force] [--name <name>] [--reference <repository>] [--] <repository> [<path>]
or: $dashless [--quiet] status [--cached] [--recursive] [--] [<path>...]
or: $dashless [--quiet] init [--] [<path>...]
or: $dashless [--quiet] update [--init] [-N|--no-fetch] [-f|--force] [--rebase] [--reference <repository>] [--merge] [--recursive] [--] [<path>...]
nofetch=
update=
prefix=
+custom_name=
# The function takes at most 2 arguments. The first argument is the
# URL that navigates to the submodule origin repo. When relative, this URL
module_clone()
{
sm_path=$1
- url=$2
- reference="$3"
+ name=$2
+ url=$3
+ reference="$4"
quiet=
if test -n "$GIT_QUIET"
then
gitdir=
gitdir_base=
- name=$(module_name "$sm_path" 2>/dev/null)
- test -n "$name" || name="$sm_path"
base_name=$(dirname "$name")
gitdir=$(git rev-parse --git-dir)
reference="$1"
shift
;;
+ --name)
+ case "$2" in '') usage ;; esac
+ custom_name=$2
+ shift
+ ;;
--)
shift
break
exit 1
fi
+ if test -n "$custom_name"
+ then
+ sm_name="$custom_name"
+ else
+ sm_name="$sm_path"
+ fi
+
# perhaps the path exists and is already a git repo, else clone it
if test -e "$sm_path"
then
fi
else
-
- module_clone "$sm_path" "$realrepo" "$reference" || exit
+ if test -d ".git/modules/$sm_name"
+ then
+ if test -z "$force"
+ then
+ echo >&2 "$(eval_gettext "A git directory for '\$sm_name' is found locally with remote(s):")"
+ GIT_DIR=".git/modules/$sm_name" GIT_WORK_TREE=. git remote -v | grep '(fetch)' | sed -e s,^," ", -e s,' (fetch)',, >&2
+ echo >&2 "$(eval_gettext "If you want to reuse this local git directory instead of cloning again from")"
+ echo >&2 " $realrepo"
+ echo >&2 "$(eval_gettext "use the '--force' option. If the local git directory is not the correct repo")"
+ die "$(eval_gettext "or you are unsure what this means choose another name with the '--name' option.")"
+ else
+ echo "$(eval_gettext "Reactivating local git directory for submodule '\$sm_name'.")"
+ fi
+ fi
+ module_clone "$sm_path" "$sm_name" "$realrepo" "$reference" || exit
(
clear_local_git_env
cd "$sm_path" &&
esac
) || die "$(eval_gettext "Unable to checkout submodule '\$sm_path'")"
fi
- git config submodule."$sm_path".url "$realrepo"
+ git config submodule."$sm_name".url "$realrepo"
git add $force "$sm_path" ||
die "$(eval_gettext "Failed to add submodule '\$sm_path'")"
- git config -f .gitmodules submodule."$sm_path".path "$sm_path" &&
- git config -f .gitmodules submodule."$sm_path".url "$repo" &&
+ git config -f .gitmodules submodule."$sm_name".path "$sm_path" &&
+ git config -f .gitmodules submodule."$sm_name".url "$repo" &&
git add --force .gitmodules ||
die "$(eval_gettext "Failed to register submodule '\$sm_path'")"
}
if ! test -d "$sm_path"/.git -o -f "$sm_path"/.git
then
- module_clone "$sm_path" "$url" "$reference"|| exit
+ module_clone "$sm_path" "$name" "$url" "$reference" || exit
cloned_modules="$cloned_modules;$name"
subsha1=
else
static int grep_source_load(struct grep_source *gs);
static int grep_source_is_binary(struct grep_source *gs);
+static struct grep_opt grep_defaults;
+
+/*
+ * Initialize the grep_defaults template with hardcoded defaults.
+ * We could let the compiler do this, but without C99 initializers
+ * the code gets unwieldy and unreadable, so...
+ */
+void init_grep_defaults(void)
+{
+ struct grep_opt *opt = &grep_defaults;
+ static int run_once;
+
+ if (run_once)
+ return;
+ run_once++;
+
+ memset(opt, 0, sizeof(*opt));
+ opt->relative = 1;
+ opt->pathname = 1;
+ opt->regflags = REG_NEWLINE;
+ opt->max_depth = -1;
+ opt->pattern_type_option = GREP_PATTERN_TYPE_UNSPECIFIED;
+ opt->extended_regexp_option = 0;
+ strcpy(opt->color_context, "");
+ strcpy(opt->color_filename, "");
+ strcpy(opt->color_function, "");
+ strcpy(opt->color_lineno, "");
+ strcpy(opt->color_match, GIT_COLOR_BOLD_RED);
+ strcpy(opt->color_selected, "");
+ strcpy(opt->color_sep, GIT_COLOR_CYAN);
+ opt->color = -1;
+}
+
+static int parse_pattern_type_arg(const char *opt, const char *arg)
+{
+ if (!strcmp(arg, "default"))
+ return GREP_PATTERN_TYPE_UNSPECIFIED;
+ else if (!strcmp(arg, "basic"))
+ return GREP_PATTERN_TYPE_BRE;
+ else if (!strcmp(arg, "extended"))
+ return GREP_PATTERN_TYPE_ERE;
+ else if (!strcmp(arg, "fixed"))
+ return GREP_PATTERN_TYPE_FIXED;
+ else if (!strcmp(arg, "perl"))
+ return GREP_PATTERN_TYPE_PCRE;
+ die("bad %s argument: %s", opt, arg);
+}
+
+/*
+ * Read the configuration file once and store it in
+ * the grep_defaults template.
+ */
+int grep_config(const char *var, const char *value, void *cb)
+{
+ struct grep_opt *opt = &grep_defaults;
+ char *color = NULL;
+
+ if (userdiff_config(var, value) < 0)
+ return -1;
+
+ if (!strcmp(var, "grep.extendedregexp")) {
+ if (git_config_bool(var, value))
+ opt->extended_regexp_option = 1;
+ else
+ opt->extended_regexp_option = 0;
+ return 0;
+ }
+
+ if (!strcmp(var, "grep.patterntype")) {
+ opt->pattern_type_option = parse_pattern_type_arg(var, value);
+ return 0;
+ }
+
+ if (!strcmp(var, "grep.linenumber")) {
+ opt->linenum = git_config_bool(var, value);
+ return 0;
+ }
+
+ if (!strcmp(var, "color.grep"))
+ opt->color = git_config_colorbool(var, value);
+ else if (!strcmp(var, "color.grep.context"))
+ color = opt->color_context;
+ else if (!strcmp(var, "color.grep.filename"))
+ color = opt->color_filename;
+ else if (!strcmp(var, "color.grep.function"))
+ color = opt->color_function;
+ else if (!strcmp(var, "color.grep.linenumber"))
+ color = opt->color_lineno;
+ else if (!strcmp(var, "color.grep.match"))
+ color = opt->color_match;
+ else if (!strcmp(var, "color.grep.selected"))
+ color = opt->color_selected;
+ else if (!strcmp(var, "color.grep.separator"))
+ color = opt->color_sep;
+
+ if (color) {
+ if (!value)
+ return config_error_nonbool(var);
+ color_parse(value, var, color);
+ }
+ return 0;
+}
+
+/*
+ * Initialize one instance of grep_opt and copy the default values
+ * from the template we filled with configuration information in an
+ * earlier call to git_config(grep_config).
+ */
+void grep_init(struct grep_opt *opt, const char *prefix)
+{
+ struct grep_opt *def = &grep_defaults;
+
+ memset(opt, 0, sizeof(*opt));
+ opt->prefix = prefix;
+ opt->prefix_length = (prefix && *prefix) ? strlen(prefix) : 0;
+ opt->pattern_tail = &opt->pattern_list;
+ opt->header_tail = &opt->header_list;
+
+ opt->color = def->color;
+ opt->extended_regexp_option = def->extended_regexp_option;
+ opt->pattern_type_option = def->pattern_type_option;
+ opt->linenum = def->linenum;
+ opt->max_depth = def->max_depth;
+ opt->pathname = def->pathname;
+ opt->regflags = def->regflags;
+ opt->relative = def->relative;
+
+ strcpy(opt->color_context, def->color_context);
+ strcpy(opt->color_filename, def->color_filename);
+ strcpy(opt->color_function, def->color_function);
+ strcpy(opt->color_lineno, def->color_lineno);
+ strcpy(opt->color_match, def->color_match);
+ strcpy(opt->color_selected, def->color_selected);
+ strcpy(opt->color_sep, def->color_sep);
+}
+
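+/*
+ * Decide the pattern type to use: an explicit command-line choice wins,
+ * then the grep.patterntype configuration, then the older
+ * grep.extendedregexp setting.
+ */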
+void grep_commit_pattern_type(enum grep_pattern_type pattern_type, struct grep_opt *opt)
+{
+ if (pattern_type != GREP_PATTERN_TYPE_UNSPECIFIED)
+ grep_set_pattern_type_option(pattern_type, opt);
+ else if (opt->pattern_type_option != GREP_PATTERN_TYPE_UNSPECIFIED)
+ grep_set_pattern_type_option(opt->pattern_type_option, opt);
+ else if (opt->extended_regexp_option)
+ grep_set_pattern_type_option(GREP_PATTERN_TYPE_ERE, opt);
+}
+
+void grep_set_pattern_type_option(enum grep_pattern_type pattern_type, struct grep_opt *opt)
+{
+ switch (pattern_type) {
+ case GREP_PATTERN_TYPE_UNSPECIFIED:
+ /* fall through */
+
+ case GREP_PATTERN_TYPE_BRE:
+ opt->fixed = 0;
+ opt->pcre = 0;
+ opt->regflags &= ~REG_EXTENDED;
+ break;
+
+ case GREP_PATTERN_TYPE_ERE:
+ opt->fixed = 0;
+ opt->pcre = 0;
+ opt->regflags |= REG_EXTENDED;
+ break;
+
+ case GREP_PATTERN_TYPE_FIXED:
+ opt->fixed = 1;
+ opt->pcre = 0;
+ opt->regflags &= ~REG_EXTENDED;
+ break;
+
+ case GREP_PATTERN_TYPE_PCRE:
+ opt->fixed = 0;
+ opt->pcre = 1;
+ opt->regflags &= ~REG_EXTENDED;
+ break;
+ }
+}
static struct grep_pat *create_grep_pat(const char *pat, size_t patlen,
const char *origin, int no,
struct grep_source gs;
int r;
- grep_source_init(&gs, GREP_SOURCE_BUF, NULL, NULL);
+ grep_source_init(&gs, GREP_SOURCE_BUF, NULL, NULL, NULL);
gs.buf = buf;
gs.size = size;
}
void grep_source_init(struct grep_source *gs, enum grep_source_type type,
- const char *name, const void *identifier)
+ const char *name, const char *path,
+ const void *identifier)
{
gs->type = type;
gs->name = name ? xstrdup(name) : NULL;
+ gs->path = path ? xstrdup(path) : NULL;
gs->buf = NULL;
gs->size = 0;
gs->driver = NULL;
{
free(gs->name);
gs->name = NULL;
+ free(gs->path);
+ gs->path = NULL;
free(gs->identifier);
gs->identifier = NULL;
grep_source_clear_data(gs);
return;
grep_attr_lock();
- gs->driver = userdiff_find_by_path(gs->name);
+ if (gs->path)
+ gs->driver = userdiff_find_by_path(gs->path);
if (!gs->driver)
gs->driver = userdiff_find_by_name("default");
grep_attr_unlock();
void *output_priv;
};
+extern void init_grep_defaults(void);
+extern int grep_config(const char *var, const char *value, void *);
+extern void grep_init(struct grep_opt *, const char *prefix);
+void grep_set_pattern_type_option(enum grep_pattern_type, struct grep_opt *opt);
+void grep_commit_pattern_type(enum grep_pattern_type, struct grep_opt *opt);
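+/*
+ * Typical call sequence, as used by the builtins: call init_grep_defaults()
+ * once, feed grep_config() (or a wrapper chaining to it) to git_config() to
+ * fill in the defaults template, call grep_init() to set up a grep_opt from
+ * that template, parse the command-line options into it, and finally call
+ * grep_commit_pattern_type() to settle the pattern type to use.
+ */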
+
extern void append_grep_pat(struct grep_opt *opt, const char *pat, size_t patlen, const char *origin, int no, enum grep_pat_token t);
extern void append_grep_pattern(struct grep_opt *opt, const char *pat, const char *origin, int no, enum grep_pat_token t);
extern void append_header_grep_pattern(struct grep_opt *, enum grep_header_field, const char *);
char *buf;
unsigned long size;
+ char *path; /* for attribute lookups */
struct userdiff_driver *driver;
};
void grep_source_init(struct grep_source *gs, enum grep_source_type type,
- const char *name, const void *identifier);
+ const char *name, const char *path,
+ const void *identifier);
void grep_source_clear_data(struct grep_source *gs);
void grep_source_clear(struct grep_source *gs);
void grep_source_load_driver(struct grep_source *gs);
return strbuf_detach(&buf, NULL);
}
-int handle_curl_result(struct active_request_slot *slot,
- struct slot_results *results)
+int handle_curl_result(struct slot_results *results)
{
if (results->curl_result == CURLE_OK) {
credential_approve(&http_auth);
return HTTP_NOAUTH;
} else {
credential_fill(&http_auth);
- init_curl_http_auth(slot->curl);
return HTTP_REAUTH;
}
} else {
if (start_active_slot(slot)) {
run_active_slot(slot);
- ret = handle_curl_result(slot, &results);
+ ret = handle_curl_result(&results);
} else {
error("Unable to start HTTP request for %s", url);
ret = HTTP_START_FAILED;
extern void run_active_slot(struct active_request_slot *slot);
extern void finish_active_slot(struct active_request_slot *slot);
extern void finish_all_active_slots(void);
-extern int handle_curl_result(struct active_request_slot *slot,
- struct slot_results *results);
+extern int handle_curl_result(struct slot_results *results);
#ifdef USE_CURL_MULTI
extern void fill_active_slots(void);
diff_cmd () {
+ # p4merge does not like /dev/null
+ rm_local=
+ rm_remote=
+ if test "/dev/null" = "$LOCAL"
+ then
+ LOCAL="./p4merge-dev-null.LOCAL.$$"
+ >"$LOCAL"
+ rm_local=true
+ fi
+ if test "/dev/null" = "$REMOTE"
+ then
+ REMOTE="./p4merge-dev-null.REMOTE.$$"
+ >"$REMOTE"
+ rm_remote=true
+ fi
+
"$merge_tool_path" "$LOCAL" "$REMOTE"
+
+ if test -n "$rm_local"
+ then
+ rm -f "$LOCAL"
+ fi
+ if test -n "$rm_remote"
+ then
+ rm -f "$REMOTE"
+ fi
}
merge_cmd () {
* something different on Windows.
*/
-#ifndef WIN32
-static void pager_preexec(void)
-{
- /*
- * Work around bug in "less" by not starting it until we
- * have real input
- */
- fd_set in;
-
- FD_ZERO(&in);
- FD_SET(0, &in);
- select(1, &in, NULL, &in, NULL);
-}
-#endif
-
static const char *pager_argv[] = { NULL, NULL };
static struct child_process pager_process;
static const char *env[] = { "LESS=FRSX", NULL };
pager_process.env = env;
}
-#ifndef WIN32
- pager_process.preexec_cb = pager_preexec;
-#endif
if (start_command(&pager_process))
return;
if (current_ref && (current_ref->name == refname
|| !strcmp(current_ref->name, refname))) {
if (current_ref->flag & REF_KNOWS_PEELED) {
+ if (is_null_sha1(current_ref->u.value.peeled))
+ return -1;
hashcpy(sha1, current_ref->u.value.peeled);
return 0;
}
}
fallback:
- o = parse_object(base);
- if (o && o->type == OBJ_TAG) {
- o = deref_tag(o, refname, 0);
+ o = lookup_unknown_object(base);
+ if (o->type == OBJ_NONE) {
+ int type = sha1_object_info(base, NULL);
+ if (type < 0)
+ return -1;
+ o->type = type;
+ }
+
+ if (o->type == OBJ_TAG) {
+ o = deref_tag_noverify(o);
if (o) {
hashcpy(sha1, o->sha1);
return 0;
slot->curl_result = curl_easy_perform(slot->curl);
finish_active_slot(slot);
- err = handle_curl_result(slot, &results);
+ err = handle_curl_result(&results);
if (err != HTTP_OK && err != HTTP_REAUTH) {
error("RPC failed; result=%d, HTTP code = %ld",
results.curl_result, results.http_code);
return -1;
}
+ headers = curl_slist_append(headers, rpc->hdr_content_type);
+ headers = curl_slist_append(headers, rpc->hdr_accept);
+ headers = curl_slist_append(headers, "Expect:");
+
+retry:
slot = get_active_slot();
curl_easy_setopt(slot->curl, CURLOPT_NOBODY, 0);
curl_easy_setopt(slot->curl, CURLOPT_URL, rpc->service_url);
curl_easy_setopt(slot->curl, CURLOPT_ENCODING, "gzip");
- headers = curl_slist_append(headers, rpc->hdr_content_type);
- headers = curl_slist_append(headers, rpc->hdr_accept);
- headers = curl_slist_append(headers, "Expect:");
-
if (large_request) {
/* The request body is large and the size cannot be predicted.
* We must use chunked encoding to send it.
curl_easy_setopt(slot->curl, CURLOPT_WRITEFUNCTION, rpc_in);
curl_easy_setopt(slot->curl, CURLOPT_FILE, rpc);
- do {
- err = run_slot(slot);
- } while (err == HTTP_REAUTH && !large_request && !use_gzip);
+ err = run_slot(slot);
+ if (err == HTTP_REAUTH && !large_request && !use_gzip)
+ goto retry;
if (err != HTTP_OK)
err = -1;
--- /dev/null
+#include "cache.h"
+#include "remote.h"
+#include "strbuf.h"
+#include "url.h"
+#include "exec_cmd.h"
+#include "run-command.h"
+#include "vcs-svn/svndump.h"
+#include "notes.h"
+#include "argv-array.h"
+
+static const char *url;
+static int dump_from_file;
+static const char *private_ref;
+static const char *remote_ref = "refs/heads/master";
+static const char *marksfilename, *notes_ref;
+struct rev_note { unsigned int rev_nr; };
+
+static int cmd_capabilities(const char *line);
+static int cmd_import(const char *line);
+static int cmd_list(const char *line);
+
+typedef int (*input_command_handler)(const char *);
+struct input_command_entry {
+ const char *name;
+ input_command_handler fn;
+ unsigned char batchable; /* whether the command starts or is part of a batch */
+};
+
+static const struct input_command_entry input_command_list[] = {
+ { "capabilities", cmd_capabilities, 0 },
+ { "import", cmd_import, 1 },
+ { "list", cmd_list, 0 },
+ { NULL, NULL }
+};
+
+static int cmd_capabilities(const char *line)
+{
+ printf("import\n");
+ printf("bidi-import\n");
+ printf("refspec %s:%s\n\n", remote_ref, private_ref);
+ fflush(stdout);
+ return 0;
+}
+
+static void terminate_batch(void)
+{
+ /* terminate a current batch's fast-import stream */
+ printf("done\n");
+ fflush(stdout);
+}
+
+/* NOTE: 'ref' refers to a git reference, while 'rev' refers to a svn revision. */
+static char *read_ref_note(const unsigned char sha1[20])
+{
+ const unsigned char *note_sha1;
+ char *msg = NULL;
+ unsigned long msglen;
+ enum object_type type;
+
+ init_notes(NULL, notes_ref, NULL, 0);
+ if (!(note_sha1 = get_note(NULL, sha1)))
+ return NULL; /* note tree not found */
+ if (!(msg = read_sha1_file(note_sha1, &type, &msglen)))
+ error("Empty notes tree. %s", notes_ref);
+ else if (!msglen || type != OBJ_BLOB) {
+ error("Note contains unusable content. "
+ "Is something else using this notes tree? %s", notes_ref);
+ free(msg);
+ msg = NULL;
+ }
+ free_notes(NULL);
+ return msg;
+}
+
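+/* Extract the revision number from a "Revision-number: <n>" line in a note blob. */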
+static int parse_rev_note(const char *msg, struct rev_note *res)
+{
+ const char *key, *value, *end;
+ size_t len;
+
+ while (*msg) {
+ end = strchr(msg, '\n');
+ len = end ? end - msg : strlen(msg);
+
+ key = "Revision-number: ";
+ if (!prefixcmp(msg, key)) {
+ long i;
+ char *end;
+ value = msg + strlen(key);
+ i = strtol(value, &end, 0);
+ if (end == value || i < 0 || i > UINT32_MAX)
+ return -1;
+ res->rev_nr = i;
+ }
+ msg += len + 1;
+ }
+ return 0;
+}
+
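+/* for_each_note callback: write one ":<rev> <sha1>" marks line per noted commit. */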
+static int note2mark_cb(const unsigned char *object_sha1,
+ const unsigned char *note_sha1, char *note_path,
+ void *cb_data)
+{
+ FILE *file = (FILE *)cb_data;
+ char *msg;
+ unsigned long msglen;
+ enum object_type type;
+ struct rev_note note;
+
+ if (!(msg = read_sha1_file(note_sha1, &type, &msglen)) ||
+ !msglen || type != OBJ_BLOB) {
+ free(msg);
+ return 1;
+ }
+ if (parse_rev_note(msg, &note))
+ return 2;
+ if (fprintf(file, ":%d %s\n", note.rev_nr, sha1_to_hex(object_sha1)) < 1)
+ return 3;
+ return 0;
+}
+
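+/* Rebuild the marks file from the notes tree. */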
+static void regenerate_marks(void)
+{
+ int ret;
+ FILE *marksfile = fopen(marksfilename, "w+");
+
+ if (!marksfile)
+ die_errno("Couldn't create mark file %s.", marksfilename);
+ ret = for_each_note(NULL, 0, note2mark_cb, marksfile);
+ if (ret)
+ die("Regeneration of marks failed, returned %d.", ret);
+ fclose(marksfile);
+}
+
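+/*
+ * Make sure the marks file still covers the latest imported revision;
+ * if it is missing or lacks that mark, regenerate it from the notes tree.
+ */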
+static void check_or_regenerate_marks(int latestrev)
+{
+ FILE *marksfile;
+ struct strbuf sb = STRBUF_INIT;
+ struct strbuf line = STRBUF_INIT;
+ int found = 0;
+
+ if (latestrev < 1)
+ return;
+
+ init_notes(NULL, notes_ref, NULL, 0);
+ marksfile = fopen(marksfilename, "r");
+ if (!marksfile) {
+ regenerate_marks();
+ marksfile = fopen(marksfilename, "r");
+ if (!marksfile)
+ die_errno("cannot read marks file %s!", marksfilename);
+ fclose(marksfile);
+ } else {
+ strbuf_addf(&sb, ":%d ", latestrev);
+ while (strbuf_getline(&line, marksfile, '\n') != EOF) {
+ if (!prefixcmp(line.buf, sb.buf)) {
+ found++;
+ break;
+ }
+ }
+ fclose(marksfile);
+ if (!found)
+ regenerate_marks();
+ }
+ free_notes(NULL);
+ strbuf_release(&sb);
+ strbuf_release(&line);
+}
+
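+/*
+ * 'import': resume from the revision recorded in the note on the private
+ * ref, then feed an svn dump (from svnrdump or a local dump file) into
+ * fast-import.
+ */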
+static int cmd_import(const char *line)
+{
+ int code;
+ int dumpin_fd;
+ char *note_msg;
+ unsigned char head_sha1[20];
+ unsigned int startrev;
+ struct argv_array svndump_argv = ARGV_ARRAY_INIT;
+ struct child_process svndump_proc;
+
+ if (read_ref(private_ref, head_sha1))
+ startrev = 0;
+ else {
+ note_msg = read_ref_note(head_sha1);
+ if(note_msg == NULL) {
+ warning("No note found for %s.", private_ref);
+ startrev = 0;
+ } else {
+ struct rev_note note = { 0 };
+ if (parse_rev_note(note_msg, &note))
+ die("Revision number couldn't be parsed from note.");
+ startrev = note.rev_nr + 1;
+ free(note_msg);
+ }
+ }
+ check_or_regenerate_marks(startrev - 1);
+
+ if (dump_from_file) {
+ dumpin_fd = open(url, O_RDONLY);
+ if(dumpin_fd < 0)
+ die_errno("Couldn't open svn dump file %s.", url);
+ } else {
+ memset(&svndump_proc, 0, sizeof(struct child_process));
+ svndump_proc.out = -1;
+ argv_array_push(&svndump_argv, "svnrdump");
+ argv_array_push(&svndump_argv, "dump");
+ argv_array_push(&svndump_argv, url);
+ argv_array_pushf(&svndump_argv, "-r%u:HEAD", startrev);
+ svndump_proc.argv = svndump_argv.argv;
+
+ code = start_command(&svndump_proc);
+ if (code)
+ die("Unable to start %s, code %d", svndump_proc.argv[0], code);
+ dumpin_fd = svndump_proc.out;
+ }
+ /* setup marks file import/export */
+ printf("feature import-marks-if-exists=%s\n"
+ "feature export-marks=%s\n", marksfilename, marksfilename);
+
+ svndump_init_fd(dumpin_fd, STDIN_FILENO);
+ svndump_read(url, private_ref, notes_ref);
+ svndump_deinit();
+ svndump_reset();
+
+ close(dumpin_fd);
+ if (!dump_from_file) {
+ code = finish_command(&svndump_proc);
+ if (code)
+ warning("%s, returned %d", svndump_proc.argv[0], code);
+ argv_array_clear(&svndump_argv);
+ }
+
+ return 0;
+}
+
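+/* 'list': advertise the single remote ref; "?" means its value is not known. */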
+static int cmd_list(const char *line)
+{
+ printf("? %s\n\n", remote_ref);
+ fflush(stdout);
+ return 0;
+}
+
+static int do_command(struct strbuf *line)
+{
+ const struct input_command_entry *p = input_command_list;
+ static struct string_list batchlines = STRING_LIST_INIT_DUP;
+ static const struct input_command_entry *batch_cmd;
+ /*
+ * commands can be grouped together in a batch.
+ * Batches are ended by \n. If no batch is active the program ends.
+ * During a batch all lines are buffered and passed to the handler function
+ * when the batch is terminated.
+ */
+ if (line->len == 0) {
+ if (batch_cmd) {
+ struct string_list_item *item;
+ for_each_string_list_item(item, &batchlines)
+ batch_cmd->fn(item->string);
+ terminate_batch();
+ batch_cmd = NULL;
+ string_list_clear(&batchlines, 0);
+ return 0; /* end of the batch, continue reading other commands. */
+ }
+ return 1; /* end of command stream, quit */
+ }
+ if (batch_cmd) {
+ if (prefixcmp(batch_cmd->name, line->buf))
+ die("Active %s batch interrupted by %s", batch_cmd->name, line->buf);
+ /* buffer batch lines */
+ string_list_append(&batchlines, line->buf);
+ return 0;
+ }
+
+ for (p = input_command_list; p->name; p++) {
+ if (!prefixcmp(line->buf, p->name) && (strlen(p->name) == line->len ||
+ line->buf[strlen(p->name)] == ' ')) {
+ if (p->batchable) {
+ batch_cmd = p;
+ string_list_append(&batchlines, line->buf);
+ return 0;
+ }
+ return p->fn(line->buf);
+ }
+ }
+ die("Unknown command '%s'\n", line->buf);
+ return 0;
+}
+
+int main(int argc, const char **argv)
+{
+ struct strbuf buf = STRBUF_INIT, url_sb = STRBUF_INIT,
+ private_ref_sb = STRBUF_INIT, marksfilename_sb = STRBUF_INIT,
+ notes_ref_sb = STRBUF_INIT;
+ static struct remote *remote;
+ const char *url_in;
+
+ git_extract_argv0_path(argv[0]);
+ setup_git_directory();
+ if (argc < 2 || argc > 3) {
+ usage("git-remote-svn <remote-name> [<url>]");
+ return 1;
+ }
+
+ remote = remote_get(argv[1]);
+ url_in = (argc == 3) ? argv[2] : remote->url[0];
+
+ if (!prefixcmp(url_in, "file://")) {
+ dump_from_file = 1;
+ url = url_decode(url_in + sizeof("file://")-1);
+ } else {
+ dump_from_file = 0;
+ end_url_with_slash(&url_sb, url_in);
+ url = url_sb.buf;
+ }
+
+ strbuf_addf(&private_ref_sb, "refs/svn/%s/master", remote->name);
+ private_ref = private_ref_sb.buf;
+
+ strbuf_addf(&notes_ref_sb, "refs/notes/%s/revs", remote->name);
+ notes_ref = notes_ref_sb.buf;
+
+ strbuf_addf(&marksfilename_sb, "%s/info/fast-import/remote-svn/%s.marks",
+ get_git_dir(), remote->name);
+ marksfilename = marksfilename_sb.buf;
+
+ while (1) {
+ if (strbuf_getline(&buf, stdin, '\n') == EOF) {
+ if (ferror(stdin))
+ die("Error reading command stream");
+ else
+ die("Unexpected end of command stream");
+ }
+ if (do_command(&buf))
+ break;
+ strbuf_reset(&buf);
+ }
+
+ strbuf_release(&buf);
+ strbuf_release(&url_sb);
+ strbuf_release(&private_ref_sb);
+ strbuf_release(&notes_ref_sb);
+ strbuf_release(&marksfilename_sb);
+ return 0;
+}
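
A rough sketch of the remote-helper exchange the helper above implements (illustrative only; the refspec shown assumes a remote named "svnsim", and "git>"/"helper>" merely mark who sends each line):

    git>    capabilities
    helper> import
    helper> bidi-import
    helper> refspec refs/heads/master:refs/svn/svnsim/master
    helper>                            (blank line ends the capability list)
    git>    list
    helper> ? refs/heads/master
    helper>
    git>    import refs/heads/master
    git>                               (blank line ends the batch)
    helper> ...fast-import stream built from the svn dump...
    helper> done

Because bidi-import is advertised, fast-import's cat-blob/ls replies travel back over the helper's stdin, which is why the helper buffers the whole import batch before writing any fast-import data.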
revs->commit_format = CMIT_FMT_DEFAULT;
+ init_grep_defaults();
+ grep_init(&revs->grep_filter, prefix);
revs->grep_filter.status_only = 1;
- revs->grep_filter.pattern_tail = &(revs->grep_filter.pattern_list);
- revs->grep_filter.header_tail = &(revs->grep_filter.header_list);
revs->grep_filter.regflags = REG_NEWLINE;
diff_setup(&revs->diffopt);
return argcount;
} else if (!strcmp(arg, "--grep-debug")) {
revs->grep_filter.debug = 1;
+ } else if (!strcmp(arg, "--basic-regexp")) {
+ grep_set_pattern_type_option(GREP_PATTERN_TYPE_BRE, &revs->grep_filter);
} else if (!strcmp(arg, "--extended-regexp") || !strcmp(arg, "-E")) {
- revs->grep_filter.regflags |= REG_EXTENDED;
+ grep_set_pattern_type_option(GREP_PATTERN_TYPE_ERE, &revs->grep_filter);
} else if (!strcmp(arg, "--regexp-ignore-case") || !strcmp(arg, "-i")) {
revs->grep_filter.regflags |= REG_ICASE;
DIFF_OPT_SET(&revs->diffopt, PICKAXE_IGNORE_CASE);
} else if (!strcmp(arg, "--fixed-strings") || !strcmp(arg, "-F")) {
- revs->grep_filter.fixed = 1;
+ grep_set_pattern_type_option(GREP_PATTERN_TYPE_FIXED, &revs->grep_filter);
+ } else if (!strcmp(arg, "--perl-regexp")) {
+ grep_set_pattern_type_option(GREP_PATTERN_TYPE_PCRE, &revs->grep_filter);
} else if (!strcmp(arg, "--all-match")) {
revs->grep_filter.all_match = 1;
} else if ((argcount = parse_long_opt("encoding", argv, &optarg))) {
revs->diffopt.abbrev = revs->abbrev;
diff_setup_done(&revs->diffopt);
+ grep_commit_pattern_type(GREP_PATTERN_TYPE_UNSPECIFIED,
+ &revs->grep_filter);
compile_grep_patterns(&revs->grep_filter);
if (revs->reverse && revs->reflog_info)
unsetenv(*cmd->env);
}
}
- if (cmd->preexec_cb) {
- /*
- * We cannot predict what the pre-exec callback does.
- * Forgo parent notification.
- */
- close(child_notifier);
- child_notifier = -1;
-
- cmd->preexec_cb();
- }
if (cmd->git_cmd) {
execv_git_cmd(cmd->argv);
} else if (cmd->use_shell) {
unsigned stdout_to_stderr:1;
unsigned use_shell:1;
unsigned clean_on_exit:1;
- void (*preexec_cb)(void);
};
int start_command(struct child_process *);
char *strbuf_detach(struct strbuf *sb, size_t *sz)
{
- char *res = sb->alloc ? sb->buf : NULL;
+ char *res;
+ strbuf_grow(sb, 0);
+ res = sb->buf;
if (sz)
*sz = sb->len;
strbuf_init(sb, 0);
return dirty_submodule;
}
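+/*
+ * Check whether the submodule at 'path' (and every submodule nested inside
+ * it) uses a .git file rather than an embedded .git directory.
+ */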
+int submodule_uses_gitfile(const char *path)
+{
+ struct child_process cp;
+ const char *argv[] = {
+ "submodule",
+ "foreach",
+ "--quiet",
+ "--recursive",
+ "test -f .git",
+ NULL,
+ };
+ struct strbuf buf = STRBUF_INIT;
+ const char *git_dir;
+
+ strbuf_addf(&buf, "%s/.git", path);
+ git_dir = read_gitfile(buf.buf);
+ if (!git_dir) {
+ strbuf_release(&buf);
+ return 0;
+ }
+ strbuf_release(&buf);
+
+ /* Now test that all nested submodules use a gitfile too */
+ memset(&cp, 0, sizeof(cp));
+ cp.argv = argv;
+ cp.env = local_repo_env;
+ cp.git_cmd = 1;
+ cp.no_stdin = 1;
+ cp.no_stderr = 1;
+ cp.no_stdout = 1;
+ cp.dir = path;
+ if (run_command(&cp))
+ return 0;
+
+ return 1;
+}
+
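+/*
+ * Return 1 if the submodule work tree at 'path' can safely be removed:
+ * it is missing or empty, or it uses a .git file and "git status" inside
+ * it reports nothing modified or untracked.
+ */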
+int ok_to_remove_submodule(const char *path)
+{
+ struct stat st;
+ ssize_t len;
+ struct child_process cp;
+ const char *argv[] = {
+ "status",
+ "--porcelain",
+ "-u",
+ "--ignore-submodules=none",
+ NULL,
+ };
+ struct strbuf buf = STRBUF_INIT;
+ int ok_to_remove = 1;
+
+ if ((lstat(path, &st) < 0) || is_empty_dir(path))
+ return 1;
+
+ if (!submodule_uses_gitfile(path))
+ return 0;
+
+ memset(&cp, 0, sizeof(cp));
+ cp.argv = argv;
+ cp.env = local_repo_env;
+ cp.git_cmd = 1;
+ cp.no_stdin = 1;
+ cp.out = -1;
+ cp.dir = path;
+ if (start_command(&cp))
+ die("Could not run 'git status --porcelain -uall --ignore-submodules=none' in submodule %s", path);
+
+ len = strbuf_read(&buf, cp.out, 1024);
+ if (len > 2)
+ ok_to_remove = 0;
+ close(cp.out);
+
+ if (finish_command(&cp))
+ die("'git status --porcelain -uall --ignore-submodules=none' failed in submodule %s", path);
+
+ strbuf_release(&buf);
+ return ok_to_remove;
+}
+
static int find_first_merges(struct object_array *result, const char *path,
struct commit *a, struct commit *b)
{
const char *prefix, int command_line_option,
int quiet);
unsigned is_submodule_modified(const char *path, int ignore_untracked);
+int submodule_uses_gitfile(const char *path);
+int ok_to_remove_submodule(const char *path);
int merge_submodule(unsigned char result[20], const char *path, const unsigned char base[20],
const unsigned char a[20], const unsigned char b[20], int search);
int find_unpushed_submodules(unsigned char new_sha1[20], const char *remotes_name,
! test -d dir
'
+cat >expect <<EOF
+D submod
+EOF
+
+cat >expect.modified <<EOF
+ M submod
+EOF
+
+test_expect_success 'rm removes empty submodules from work tree' '
+ mkdir submod &&
+ git update-index --add --cacheinfo 160000 $(git rev-parse HEAD) submod &&
+ git config -f .gitmodules submodule.sub.url ./. &&
+ git config -f .gitmodules submodule.sub.path submod &&
+ git submodule init &&
+ git add .gitmodules &&
+ git commit -m "add submodule" &&
+ git rm submod &&
+ test ! -e submod &&
+ git status -s -uno --ignore-submodules=none > actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'rm removes removed submodule from index' '
+ git reset --hard &&
+ git submodule update &&
+ rm -rf submod &&
+ git rm submod &&
+ git status -s -uno --ignore-submodules=none > actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'rm removes work tree of unmodified submodules' '
+ git reset --hard &&
+ git submodule update &&
+ git rm submod &&
+ test ! -d submod &&
+ git status -s -uno --ignore-submodules=none > actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'rm of a populated submodule with different HEAD fails unless forced' '
+ git reset --hard &&
+ git submodule update &&
+ (cd submod &&
+ git checkout HEAD^
+ ) &&
+ test_must_fail git rm submod &&
+ test -d submod &&
+ test -f submod/.git &&
+ git status -s -uno --ignore-submodules=none > actual &&
+ test_cmp expect.modified actual &&
+ git rm -f submod &&
+ test ! -d submod &&
+ git status -s -uno --ignore-submodules=none > actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'rm of a populated submodule with modifications fails unless forced' '
+ git reset --hard &&
+ git submodule update &&
+ (cd submod &&
+ echo X >empty
+ ) &&
+ test_must_fail git rm submod &&
+ test -d submod &&
+ test -f submod/.git &&
+ git status -s -uno --ignore-submodules=none > actual &&
+ test_cmp expect.modified actual &&
+ git rm -f submod &&
+ test ! -d submod &&
+ git status -s -uno --ignore-submodules=none > actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'rm of a populated submodule with untracked files fails unless forced' '
+ git reset --hard &&
+ git submodule update &&
+ (cd submod &&
+ echo X >untracked
+ ) &&
+ test_must_fail git rm submod &&
+ test -d submod &&
+ test -f submod/.git &&
+ git status -s -uno --ignore-submodules=none > actual &&
+ test_cmp expect.modified actual &&
+ git rm -f submod &&
+ test ! -d submod &&
+ git status -s -uno --ignore-submodules=none > actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'setup submodule conflict' '
+ git reset --hard &&
+ git submodule update &&
+ git checkout -b branch1 &&
+ echo 1 >nitfol &&
+ git add nitfol &&
+ git commit -m "added nitfol 1" &&
+ git checkout -b branch2 master &&
+ echo 2 >nitfol &&
+ git add nitfol &&
+ git commit -m "added nitfol 2" &&
+ git checkout -b conflict1 master &&
+ (cd submod &&
+ git fetch &&
+ git checkout branch1
+ ) &&
+ git add submod &&
+ git commit -m "submod 1" &&
+ git checkout -b conflict2 master &&
+ (cd submod &&
+ git checkout branch2
+ ) &&
+ git add submod &&
+ git commit -m "submod 2"
+'
+
+cat >expect.conflict <<EOF
+UU submod
+EOF
+
+test_expect_success 'rm removes work tree of unmodified conflicted submodule' '
+ git checkout conflict1 &&
+ git reset --hard &&
+ git submodule update &&
+ test_must_fail git merge conflict2 &&
+ git rm submod &&
+ test ! -d submod &&
+ git status -s -uno --ignore-submodules=none > actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'rm of a conflicted populated submodule with different HEAD fails unless forced' '
+ git checkout conflict1 &&
+ git reset --hard &&
+ git submodule update &&
+ (cd submod &&
+ git checkout HEAD^
+ ) &&
+ test_must_fail git merge conflict2 &&
+ test_must_fail git rm submod &&
+ test -d submod &&
+ test -f submod/.git &&
+ git status -s -uno --ignore-submodules=none > actual &&
+ test_cmp expect.conflict actual &&
+ git rm -f submod &&
+ test ! -d submod &&
+ git status -s -uno --ignore-submodules=none > actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'rm of a conflicted populated submodule with modifications fails unless forced' '
+ git checkout conflict1 &&
+ git reset --hard &&
+ git submodule update &&
+ (cd submod &&
+ echo X >empty
+ ) &&
+ test_must_fail git merge conflict2 &&
+ test_must_fail git rm submod &&
+ test -d submod &&
+ test -f submod/.git &&
+ git status -s -uno --ignore-submodules=none > actual &&
+ test_cmp expect.conflict actual &&
+ git rm -f submod &&
+ test ! -d submod &&
+ git status -s -uno --ignore-submodules=none > actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'rm of a conflicted populated submodule with untracked files fails unless forced' '
+ git checkout conflict1 &&
+ git reset --hard &&
+ git submodule update &&
+ (cd submod &&
+ echo X >untracked
+ ) &&
+ test_must_fail git merge conflict2 &&
+ test_must_fail git rm submod &&
+ test -d submod &&
+ test -f submod/.git &&
+ git status -s -uno --ignore-submodules=none > actual &&
+ test_cmp expect.conflict actual &&
+ git rm -f submod &&
+ test ! -d submod &&
+ git status -s -uno --ignore-submodules=none > actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'rm of a conflicted populated submodule with a .git directory fails even when forced' '
+ git checkout conflict1 &&
+ git reset --hard &&
+ git submodule update &&
+ (cd submod &&
+ rm .git &&
+ cp -a ../.git/modules/sub .git &&
+ GIT_WORK_TREE=. git config --unset core.worktree
+ ) &&
+ test_must_fail git merge conflict2 &&
+ test_must_fail git rm submod &&
+ test -d submod &&
+ test -d submod/.git &&
+ git status -s -uno --ignore-submodules=none > actual &&
+ test_cmp expect.conflict actual &&
+ test_must_fail git rm -f submod &&
+ test -d submod &&
+ test -d submod/.git &&
+ git status -s -uno --ignore-submodules=none > actual &&
+ test_cmp expect.conflict actual &&
+ git merge --abort &&
+ rm -rf submod
+'
+
+test_expect_success 'rm of a conflicted unpopulated submodule succeeds' '
+ git checkout conflict1 &&
+ git reset --hard &&
+ test_must_fail git merge conflict2 &&
+ git rm submod &&
+ test ! -d submod &&
+ git status -s -uno --ignore-submodules=none > actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'rm of a populated submodule with a .git directory fails even when forced' '
+ git checkout -f master &&
+ git reset --hard &&
+ git submodule update &&
+ (cd submod &&
+ rm .git &&
+ cp -a ../.git/modules/sub .git &&
+ GIT_WORK_TREE=. git config --unset core.worktree
+ ) &&
+ test_must_fail git rm submod &&
+ test -d submod &&
+ test -d submod/.git &&
+ git status -s -uno --ignore-submodules=none > actual &&
+ ! test -s actual &&
+ test_must_fail git rm -f submod &&
+ test -d submod &&
+ test -d submod/.git &&
+ git status -s -uno --ignore-submodules=none > actual &&
+ ! test -s actual &&
+ rm -rf submod
+'
+
+cat >expect.deepmodified <<EOF
+ M submod/subsubmod
+EOF
+
+test_expect_success 'setup subsubmodule' '
+ git reset --hard &&
+ git submodule update &&
+ (cd submod &&
+ git update-index --add --cacheinfo 160000 $(git rev-parse HEAD) subsubmod &&
+ git config -f .gitmodules submodule.sub.url ../. &&
+ git config -f .gitmodules submodule.sub.path subsubmod &&
+ git submodule init &&
+ git add .gitmodules &&
+ git commit -m "add subsubmodule" &&
+ git submodule update subsubmod
+ ) &&
+ git commit -a -m "added deep submodule"
+'
+
+test_expect_success 'rm recursively removes work tree of unmodified submodules' '
+ git rm submod &&
+ test ! -d submod &&
+ git status -s -uno --ignore-submodules=none > actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'rm of a populated nested submodule with different nested HEAD fails unless forced' '
+ git reset --hard &&
+ git submodule update --recursive &&
+ (cd submod/subsubmod &&
+ git checkout HEAD^
+ ) &&
+ test_must_fail git rm submod &&
+ test -d submod &&
+ test -f submod/.git &&
+ git status -s -uno --ignore-submodules=none > actual &&
+ test_cmp expect.modified actual &&
+ git rm -f submod &&
+ test ! -d submod &&
+ git status -s -uno --ignore-submodules=none > actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'rm of a populated nested submodule with nested modifications fails unless forced' '
+ git reset --hard &&
+ git submodule update --recursive &&
+ (cd submod/subsubmod &&
+ echo X >empty
+ ) &&
+ test_must_fail git rm submod &&
+ test -d submod &&
+ test -f submod/.git &&
+ git status -s -uno --ignore-submodules=none > actual &&
+ test_cmp expect.modified actual &&
+ git rm -f submod &&
+ test ! -d submod &&
+ git status -s -uno --ignore-submodules=none > actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'rm of a populated nested submodule with nested untracked files fails unless forced' '
+ git reset --hard &&
+ git submodule update --recursive &&
+ (cd submod/subsubmod &&
+ echo X >untracked
+ ) &&
+ test_must_fail git rm submod &&
+ test -d submod &&
+ test -f submod/.git &&
+ git status -s -uno --ignore-submodules=none > actual &&
+ test_cmp expect.modified actual &&
+ git rm -f submod &&
+ test ! -d submod &&
+ git status -s -uno --ignore-submodules=none > actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'rm of a populated nested submodule with a nested .git directory fails even when forced' '
+ git reset --hard &&
+ git submodule update --recursive &&
+ (cd submod/subsubmod &&
+ rm .git &&
+ cp -a ../../.git/modules/sub/modules/sub .git &&
+ GIT_WORK_TREE=. git config --unset core.worktree
+ ) &&
+ test_must_fail git rm submod &&
+ test -d submod &&
+ test -d submod/subsubmod/.git &&
+ git status -s -uno --ignore-submodules=none > actual &&
+ ! test -s actual &&
+ test_must_fail git rm -f submod &&
+ test -d submod &&
+ test -d submod/subsubmod/.git &&
+ git status -s -uno --ignore-submodules=none > actual &&
+ ! test -s actual &&
+ rm -rf submod
+'
+
test_done
--- /dev/null
+#!/bin/sh
+#
+# Copyright (c) 2012 Mozilla Foundation
+#
+
+test_description='diff.context configuration'
+
+. ./test-lib.sh
+
+test_expect_success 'setup' '
+ cat >template <<-\EOF &&
+ firstline
+ b
+ c
+ d
+ e
+ f
+ preline
+ TARGET
+ postline
+ i
+ j
+ k
+ l
+ m
+ n
+ EOF
+ sed "/TARGET/d" >x <template &&
+ git update-index --add x &&
+ git commit -m initial &&
+
+ sed "s/TARGET/ADDED/" >x <template &&
+ git update-index --add x &&
+ git commit -m next &&
+
+ sed "s/TARGET/MODIFIED/" >x <template
+'
+
+test_expect_success 'the default number of context lines is 3' '
+ git diff >output &&
+ ! grep "^ d" output &&
+ grep "^ e" output &&
+ grep "^ j" output &&
+ ! grep "^ k" output
+'
+
+test_expect_success 'diff.context honored by "log"' '
+ git log -1 -p >output &&
+ ! grep firstline output &&
+ git config diff.context 8 &&
+ git log -1 -p >output &&
+ grep "^ firstline" output
+'
+
+test_expect_success 'The -U option overrides diff.context' '
+ git config diff.context 8 &&
+ git log -U4 -1 >output &&
+ ! grep "^ firstline" output
+'
+
+test_expect_success 'diff.context honored by "diff"' '
+ git config diff.context 8 &&
+ git diff >output &&
+ grep "^ firstline" output
+'
+
+test_expect_success 'plumbing not affected' '
+ git config diff.context 8 &&
+ git diff-files -p >output &&
+ ! grep "^ firstline" output
+'
+
+test_expect_success 'non-integer config parsing' '
+ git config diff.context no &&
+ test_must_fail git diff 2>output &&
+ test_i18ngrep "bad config value" output
+'
+
+test_expect_success 'negative integer config parsing' '
+ git config diff.context -1 &&
+ test_must_fail git diff 2>output &&
+ test_i18ngrep "bad config file" output
+'
+
+test_expect_success '-U0 is valid, so is diff.context=0' '
+ git config diff.context 0 &&
+ git diff >output &&
+ grep "^-ADDED" output &&
+ grep "^+MODIFIED" output
+'
+
+test_done
test_cmp expect actual
'
+test_expect_success 'log -F -E --grep=<ere> uses ere' '
+ echo second >expect &&
+ git log -1 --pretty="tformat:%s" -F -E --grep=s.c.nd >actual &&
+ test_cmp expect actual
+'
+
cat > expect <<EOF
* Second
* sixth
test_tick &&
GIT_AUTHOR_NAME="B V Uips" git commit -m bvuips &&
git branch preserved-author &&
- git filter-branch -f --msg-filter "cat; \
+ (sane_unset GIT_AUTHOR_NAME &&
+ git filter-branch -f --msg-filter "cat; \
test \$GIT_COMMIT != $(git rev-parse master) || \
echo Hallo" \
- preserved-author &&
+ preserved-author) &&
test 1 = $(git rev-list --author="B V Uips" preserved-author | wc -l)
'
test_cmp expect actual
'
+test_expect_success 'grep --cached respects binary diff attribute' '
+ git grep --cached text t >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'grep --cached respects binary diff attribute (2)' '
+ git add .gitattributes &&
+ rm .gitattributes &&
+ git grep --cached text t >actual &&
+ test_when_finished "git rm --cached .gitattributes" &&
+ test_when_finished "git checkout .gitattributes" &&
+ test_cmp expect actual
+'
+
+test_expect_success 'grep revision respects binary diff attribute' '
+ git commit -m new &&
+ echo "Binary file HEAD:t matches" >expect &&
+ git grep text HEAD -- t >actual &&
+ test_when_finished "git reset HEAD^" &&
+ test_cmp expect actual
+'
+
test_expect_success 'grep respects not-binary diff attribute' '
echo binQary | q_to_nul >b &&
git add b &&
)
'
+test_expect_success 'submodule add --name allows to replace a submodule with another at the same path' '
+ (
+ cd addtest2 &&
+ (
+ cd repo &&
+ echo "$submodurl/repo" >expect &&
+ git config remote.origin.url >actual &&
+ test_cmp expect actual &&
+ echo "gitdir: ../.git/modules/repo" >expect &&
+ test_cmp expect .git
+ ) &&
+ rm -rf repo &&
+ git rm repo &&
+ git submodule add -q --name repo_new "$submodurl/bare.git" repo >actual &&
+ test ! -s actual &&
+ echo "gitdir: ../.git/modules/submod" >expect &&
+ test_cmp expect submod/.git &&
+ (
+ cd repo &&
+ echo "$submodurl/bare.git" >expect &&
+ git config remote.origin.url >actual &&
+ test_cmp expect actual &&
+ echo "gitdir: ../.git/modules/repo_new" >expect &&
+ test_cmp expect .git
+ ) &&
+ echo "repo" >expect &&
+ git config -f .gitmodules submodule.repo.path >actual &&
+ test_cmp expect actual &&
+ git config -f .gitmodules submodule.repo_new.path >actual &&
+ test_cmp expect actual &&
+ echo "$submodurl/repo" >expect &&
+ git config -f .gitmodules submodule.repo.url >actual &&
+ test_cmp expect actual &&
+ echo "$submodurl/bare.git" >expect &&
+ git config -f .gitmodules submodule.repo_new.url >actual &&
+ test_cmp expect actual &&
+ echo "$submodurl/repo" >expect &&
+ git config submodule.repo.url >actual &&
+ test_cmp expect actual &&
+ echo "$submodurl/bare.git" >expect &&
+ git config submodule.repo_new.url >actual &&
+ test_cmp expect actual
+ )
+'
+
+test_expect_success 'submodule add with an existing name fails unless forced' '
+ (
+ cd addtest2 &&
+ rm -rf repo &&
+ git rm repo &&
+ test_must_fail git submodule add -q --name repo_new "$submodurl/repo.git" repo &&
+ test ! -d repo &&
+ echo "repo" >expect &&
+ git config -f .gitmodules submodule.repo_new.path >actual &&
+ test_cmp expect actual &&
+ echo "$submodurl/bare.git" >expect &&
+ git config -f .gitmodules submodule.repo_new.url >actual &&
+ test_cmp expect actual &&
+ echo "$submodurl/bare.git" >expect &&
+ git config submodule.repo_new.url >actual &&
+ test_cmp expect actual &&
+ git submodule add -f -q --name repo_new "$submodurl/repo.git" repo &&
+ test -d repo &&
+ echo "repo" >expect &&
+ git config -f .gitmodules submodule.repo_new.path >actual &&
+ test_cmp expect actual &&
+ echo "$submodurl/repo.git" >expect &&
+ git config -f .gitmodules submodule.repo_new.url >actual &&
+ test_cmp expect actual &&
+ echo "$submodurl/repo.git" >expect &&
+ git config submodule.repo_new.url >actual &&
+ test_cmp expect actual
+ )
+'
+
test_done
(cd super &&
git reset --hard master &&
rm -rf deeper/ &&
- git submodule add ../submodule deeper/submodule
+ git submodule add --force ../submodule deeper/submodule
)
'
--- /dev/null
+#!/bin/sh
+
+test_description='tests remote-svn'
+
+. ./test-lib.sh
+
+MARKSPATH=.git/info/fast-import/remote-svn
+
+if ! test_have_prereq PYTHON
+then
+ skip_all='skipping remote-svn tests, python not available'
+ test_done
+fi
+
+# We override svnrdump by placing a symlink to the svnrdump-emulator in $HOME.
+export PATH="$HOME:$PATH"
+ln -sf $GIT_BUILD_DIR/contrib/svn-fe/svnrdump_sim.py "$HOME/svnrdump"
+
+init_git () {
+ rm -fr .git &&
+ git init &&
+ #git remote add svnsim testsvn::sim:///$TEST_DIRECTORY/t9020/example.svnrdump
+ # let's reuse an existing dump file
+ git remote add svnsim testsvn::sim://$TEST_DIRECTORY/t9154/svn.dump
+ git remote add svnfile testsvn::file://$TEST_DIRECTORY/t9154/svn.dump
+}
+
+if test -e "$GIT_BUILD_DIR/git-remote-testsvn"
+then
+ test_set_prereq REMOTE_SVN
+fi
+
+test_debug '
+ git --version
+ which git
+ which svnrdump
+'
+
+test_expect_success REMOTE_SVN 'simple fetch' '
+ init_git &&
+ git fetch svnsim &&
+ test_cmp .git/refs/svn/svnsim/master .git/refs/remotes/svnsim/master &&
+ cp .git/refs/remotes/svnsim/master master.good
+'
+
+test_debug '
+ cat .git/refs/svn/svnsim/master
+ cat .git/refs/remotes/svnsim/master
+'
+
+test_expect_success REMOTE_SVN 'repeated fetch, nothing shall change' '
+ git fetch svnsim &&
+ test_cmp master.good .git/refs/remotes/svnsim/master
+'
+
+test_expect_success REMOTE_SVN 'fetch from a file:// url gives the same result' '
+ git fetch svnfile
+'
+
+test_expect_failure REMOTE_SVN 'the sha1s differ because the git-svn-id line in the commit msg contains the url' '
+ test_cmp .git/refs/remotes/svnfile/master .git/refs/remotes/svnsim/master
+'
+
+test_expect_success REMOTE_SVN 'mark-file regeneration' '
+ # filter out any other marks that cannot be regenerated; only up to 3-digit revisions are allowed here
+ grep ":[0-9]\{1,3\} " $MARKSPATH/svnsim.marks > $MARKSPATH/svnsim.marks.old &&
+ rm $MARKSPATH/svnsim.marks &&
+ git fetch svnsim &&
+ test_cmp $MARKSPATH/svnsim.marks.old $MARKSPATH/svnsim.marks
+'
+
+test_expect_success REMOTE_SVN 'incremental imports must lead to the same head' '
+ export SVNRMAX=3 &&
+ init_git &&
+ git fetch svnsim &&
+ test_cmp .git/refs/svn/svnsim/master .git/refs/remotes/svnsim/master &&
+ unset SVNRMAX &&
+ git fetch svnsim &&
+ test_cmp master.good .git/refs/remotes/svnsim/master
+'
+
+test_debug 'git branch -a'
+
+test_done
say_color() {
test -z "$1" && test -n "$quiet" && return
shift
- echo "$*"
+ printf "%s\n" "$*"
}
fi
if (argc == 2) {
if (svndump_init(argv[1]))
return 1;
- svndump_read(NULL);
+ svndump_read(NULL, "refs/heads/master", "refs/notes/svn/revs");
svndump_deinit();
svndump_reset();
return 0;
#include "string-list.h"
#include "thread-utils.h"
#include "sigchain.h"
+#include "argv-array.h"
static int debug;
FILE *out;
unsigned fetch : 1,
import : 1,
+ bidi_import : 1,
export : 1,
option : 1,
push : 1,
static struct child_process *get_helper(struct transport *transport)
{
struct helper_data *data = transport->data;
+ struct argv_array argv = ARGV_ARRAY_INIT;
struct strbuf buf = STRBUF_INIT;
struct child_process *helper;
const char **refspecs = NULL;
helper->in = -1;
helper->out = -1;
helper->err = 0;
- helper->argv = xcalloc(4, sizeof(*helper->argv));
- strbuf_addf(&buf, "git-remote-%s", data->name);
- helper->argv[0] = strbuf_detach(&buf, NULL);
- helper->argv[1] = transport->remote->name;
- helper->argv[2] = remove_ext_force(transport->url);
+ argv_array_pushf(&argv, "git-remote-%s", data->name);
+ argv_array_push(&argv, transport->remote->name);
+ argv_array_push(&argv, remove_ext_force(transport->url));
+ helper->argv = argv_array_detach(&argv, NULL);
helper->git_cmd = 0;
helper->silent_exec_failure = 1;
data->push = 1;
else if (!strcmp(capname, "import"))
data->import = 1;
+ else if (!strcmp(capname, "bidi-import"))
+ data->bidi_import = 1;
else if (!strcmp(capname, "export"))
data->export = 1;
else if (!data->refspecs && !prefixcmp(capname, "refspec ")) {
close(data->helper->out);
fclose(data->out);
res = finish_command(data->helper);
- free((char *)data->helper->argv[0]);
- free(data->helper->argv);
+ argv_array_free_detached(data->helper->argv);
free(data->helper);
data->helper = NULL;
}
static int get_importer(struct transport *transport, struct child_process *fastimport)
{
struct child_process *helper = get_helper(transport);
+ struct helper_data *data = transport->data;
+ struct argv_array argv = ARGV_ARRAY_INIT;
+ int cat_blob_fd, code;
memset(fastimport, 0, sizeof(*fastimport));
fastimport->in = helper->out;
- fastimport->argv = xcalloc(5, sizeof(*fastimport->argv));
- fastimport->argv[0] = "fast-import";
- fastimport->argv[1] = "--quiet";
+ argv_array_push(&argv, "fast-import");
+ argv_array_push(&argv, debug ? "--stats" : "--quiet");
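+ /* Feed fast-import's cat-blob/ls responses back to the helper's stdin. */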
+ if (data->bidi_import) {
+ cat_blob_fd = xdup(helper->in);
+ argv_array_pushf(&argv, "--cat-blob-fd=%d", cat_blob_fd);
+ }
+ fastimport->argv = argv.argv;
fastimport->git_cmd = 1;
- return start_command(fastimport);
+
+ code = start_command(fastimport);
+ return code;
}
static int get_exporter(struct transport *transport,
}
write_constant(data->helper->in, "\n");
+ /*
+ * remote-helpers that advertise the bidi-import capability are required to
+ * buffer the complete batch of import commands until this newline before
+ * sending data to fast-import.
+ * These helpers read data back from fast-import on their stdin, which would
+ * otherwise be interleaved with the import commands.
+ */
if (finish_command(&fastimport))
die("Error while running fast-import");
- free(fastimport.argv);
- fastimport.argv = NULL;
+ argv_array_free_detached(fastimport.argv);
/*
* The fast-import stream of a remote helper that advertises
" include-tag multi_ack_detailed";
struct object *o = lookup_unknown_object(sha1);
const char *refname_nons = strip_namespace(refname);
-
- if (o->type == OBJ_NONE) {
- o->type = sha1_object_info(sha1, NULL);
- if (o->type < 0)
- die("git upload-pack: cannot find object %s:", sha1_to_hex(sha1));
- }
+ unsigned char peeled[20];
if (capabilities)
packet_write(1, "%s %s%c%s%s agent=%s\n",
o->flags |= OUR_REF;
nr_our_refs++;
}
- if (o->type == OBJ_TAG) {
- o = deref_tag_noverify(o);
- if (o)
- packet_write(1, "%s %s^{}\n", sha1_to_hex(o->sha1), refname_nons);
- }
+ if (!peel_ref(refname, peeled))
+ packet_write(1, "%s %s^{}\n", sha1_to_hex(peeled), refname_nons);
return 0;
}
* See LICENSE for details.
*/
-#include "git-compat-util.h"
-#include "strbuf.h"
+#include "cache.h"
#include "quote.h"
#include "fast_export.h"
#include "repo_tree.h"
putchar('\n');
}
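+/* Begin a commit on the notes ref to which the per-revision notes are attached. */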
+void fast_export_begin_note(uint32_t revision, const char *author,
+ const char *log, unsigned long timestamp, const char *note_ref)
+{
+ static int firstnote = 1;
+ size_t loglen = strlen(log);
+ printf("commit %s\n", note_ref);
+ printf("committer %s <%s@%s> %ld +0000\n", author, author, "local", timestamp);
+ printf("data %"PRIuMAX"\n", (uintmax_t)loglen);
+ fwrite(log, loglen, 1, stdout);
+ if (firstnote) {
+ if (revision > 1)
+ printf("from %s^0", note_ref);
+ firstnote = 0;
+ }
+ fputc('\n', stdout);
+}
+
+void fast_export_note(const char *committish, const char *dataref)
+{
+ printf("N %s %s\n", dataref, committish);
+}
+
static char gitsvnline[MAX_GITSVN_LINE_LEN];
void fast_export_begin_commit(uint32_t revision, const char *author,
const struct strbuf *log,
const char *uuid, const char *url,
- unsigned long timestamp)
+ unsigned long timestamp, const char *local_ref)
{
static const struct strbuf empty = STRBUF_INIT;
if (!log)
} else {
*gitsvnline = '\0';
}
- printf("commit refs/heads/master\n");
+ printf("commit %s\n", local_ref);
printf("mark :%"PRIu32"\n", revision);
printf("committer %s <%s@%s> %ld +0000\n",
*author ? author : "nobody",
return ret;
}
+void fast_export_buf_to_data(const struct strbuf *data)
+{
+ printf("data %"PRIuMAX"\n", (uintmax_t)data->len);
+ fwrite(data->buf, data->len, 1, stdout);
+ fputc('\n', stdout);
+}
+
void fast_export_data(uint32_t mode, off_t len, struct line_buffer *input)
{
assert(len >= 0);
void fast_export_delete(const char *path);
void fast_export_modify(const char *path, uint32_t mode, const char *dataref);
+void fast_export_note(const char *committish, const char *dataref);
+void fast_export_begin_note(uint32_t revision, const char *author,
+ const char *log, unsigned long timestamp, const char *note_ref);
void fast_export_begin_commit(uint32_t revision, const char *author,
- const struct strbuf *log, const char *uuid,
- const char *url, unsigned long timestamp);
+ const struct strbuf *log, const char *uuid, const char *url,
+ unsigned long timestamp, const char *local_ref);
void fast_export_end_commit(uint32_t revision);
void fast_export_data(uint32_t mode, off_t len, struct line_buffer *input);
+void fast_export_buf_to_data(const struct strbuf *data);
void fast_export_blob_delta(uint32_t mode,
uint32_t old_mode, const char *old_data,
off_t len, struct line_buffer *input);
static struct {
uint32_t revision;
unsigned long timestamp;
- struct strbuf log, author;
+ struct strbuf log, author, note;
} rev_ctx;
static struct {
rev_ctx.timestamp = 0;
strbuf_reset(&rev_ctx.log);
strbuf_reset(&rev_ctx.author);
+ strbuf_reset(&rev_ctx.note);
}
static void reset_dump_ctx(const char *url)
node_ctx.text_length, &input);
}
-static void begin_revision(void)
+static void begin_revision(const char *remote_ref)
{
if (!rev_ctx.revision) /* revision 0 gets no git commit. */
return;
fast_export_begin_commit(rev_ctx.revision, rev_ctx.author.buf,
&rev_ctx.log, dump_ctx.uuid.buf, dump_ctx.url.buf,
- rev_ctx.timestamp);
+ rev_ctx.timestamp, remote_ref);
}
-static void end_revision(void)
+static void end_revision(const char *note_ref)
{
- if (rev_ctx.revision)
+ struct strbuf mark = STRBUF_INIT;
+ if (rev_ctx.revision) {
fast_export_end_commit(rev_ctx.revision);
+ fast_export_begin_note(rev_ctx.revision, "remote-svn",
+ "Note created by remote-svn.", rev_ctx.timestamp, note_ref);
+ strbuf_addf(&mark, ":%"PRIu32, rev_ctx.revision);
+ fast_export_note(mark.buf, "inline");
+ fast_export_buf_to_data(&rev_ctx.note);
+ }
}
-void svndump_read(const char *url)
+void svndump_read(const char *url, const char *local_ref, const char *notes_ref)
{
char *val;
char *t;
if (active_ctx == NODE_CTX)
handle_node();
if (active_ctx == REV_CTX)
- begin_revision();
+ begin_revision(local_ref);
if (active_ctx != DUMP_CTX)
- end_revision();
+ end_revision(notes_ref);
active_ctx = REV_CTX;
reset_rev_ctx(atoi(val));
+ strbuf_addf(&rev_ctx.note, "%s\n", t);
break;
case sizeof("Node-path"):
if (constcmp(t, "Node-"))
if (active_ctx == NODE_CTX)
handle_node();
if (active_ctx == REV_CTX)
- begin_revision();
+ begin_revision(local_ref);
active_ctx = NODE_CTX;
reset_node_ctx(val);
+ strbuf_addf(&rev_ctx.note, "%s\n", t);
break;
}
if (constcmp(t + strlen("Node-"), "kind"))
continue;
+ strbuf_addf(&rev_ctx.note, "%s\n", t);
if (!strcmp(val, "dir"))
node_ctx.type = REPO_MODE_DIR;
else if (!strcmp(val, "file"))
case sizeof("Node-action"):
if (constcmp(t, "Node-action"))
continue;
+ strbuf_addf(&rev_ctx.note, "%s\n", t);
if (!strcmp(val, "delete")) {
node_ctx.action = NODEACT_DELETE;
} else if (!strcmp(val, "add")) {
continue;
strbuf_reset(&node_ctx.src);
strbuf_addstr(&node_ctx.src, val);
+ strbuf_addf(&rev_ctx.note, "%s\n", t);
break;
case sizeof("Node-copyfrom-rev"):
if (constcmp(t, "Node-copyfrom-rev"))
continue;
node_ctx.srcRev = atoi(val);
+ strbuf_addf(&rev_ctx.note, "%s\n", t);
break;
case sizeof("Text-content-length"):
if (constcmp(t, "Text") && constcmp(t, "Prop"))
if (active_ctx == NODE_CTX)
handle_node();
if (active_ctx == REV_CTX)
- begin_revision();
+ begin_revision(local_ref);
if (active_ctx != DUMP_CTX)
- end_revision();
+ end_revision(notes_ref);
}
-int svndump_init(const char *filename)
+static void init(int report_fd)
{
- if (buffer_init(&input, filename))
- return error("cannot open %s: %s", filename, strerror(errno));
- fast_export_init(REPORT_FILENO);
+ fast_export_init(report_fd);
strbuf_init(&dump_ctx.uuid, 4096);
strbuf_init(&dump_ctx.url, 4096);
strbuf_init(&rev_ctx.log, 4096);
strbuf_init(&rev_ctx.author, 4096);
+ strbuf_init(&rev_ctx.note, 4096);
strbuf_init(&node_ctx.src, 4096);
strbuf_init(&node_ctx.dst, 4096);
reset_dump_ctx(NULL);
reset_rev_ctx(0);
reset_node_ctx(NULL);
+ return;
+}
+
+int svndump_init(const char *filename)
+{
+ if (buffer_init(&input, filename))
+ return error("cannot open %s: %s", filename ? filename : "NULL", strerror(errno));
+ init(REPORT_FILENO);
+ return 0;
+}
+
+int svndump_init_fd(int in_fd, int back_fd)
+{
+ if(buffer_fdinit(&input, xdup(in_fd)))
+ return error("cannot open fd %d: %s", in_fd, strerror(errno));
+ init(xdup(back_fd));
return 0;
}
reset_rev_ctx(0);
reset_node_ctx(NULL);
strbuf_release(&rev_ctx.log);
+ strbuf_release(&rev_ctx.author);
+ strbuf_release(&rev_ctx.note);
strbuf_release(&node_ctx.src);
strbuf_release(&node_ctx.dst);
if (buffer_deinit(&input))
#define SVNDUMP_H_
int svndump_init(const char *filename);
-void svndump_read(const char *url);
+int svndump_init_fd(int in_fd, int back_fd);
+void svndump_read(const char *url, const char *local_ref, const char *notes_ref);
void svndump_deinit(void);
void svndump_reset(void);
return;
}
if (fflush(f)) {
- /*
- * On Windows, EPIPE is returned only by the first write()
- * after the reading end has closed its handle; subsequent
- * write()s return EINVAL.
- */
- if (errno == EPIPE || errno == EINVAL)
+ if (errno == EPIPE)
exit(0);
die_errno("write failure on '%s'", desc);
}